public inbox for gentoo-commits@lists.gentoo.org
 help / color / mirror / Atom feed
* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2020-11-19 13:03 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2020-11-19 13:03 UTC (permalink / raw
  To: gentoo-commits

commit:     e8699143a58bbb2f710323854996fe838fcf35b4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 19 13:02:08 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Nov 19 13:02:08 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e8699143

Updated genpatches

Support for namespace user.pax.* on tmpfs.
Enable link security restrictions by default.
Bluetooth: Check key sizes only when Secure Simple Pairing
is enabled. See bug #686758
tmp513 requies REGMAP_I2C to build.  Select it by default in
Kconfig. See bug #710790. Thanks to Phil Stracchino
VIDEO_TVP5150 requies REGMAP_I2C to build.  Select it by
default in Kconfig. See bug #721096. Thanks to Max Steel
sign-file: full functionality with modern LibreSSL
Add Gentoo Linux support config settings and defaults.
Kernel patch enables gcc >= v10.1 optimizations for additional
CPUs.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  28 +
 1500_XATTR_USER_PREFIX.patch                       |  67 ++
 ...ble-link-security-restrictions-by-default.patch |  20 +
 ...zes-only-if-Secure-Simple-Pairing-enabled.patch |  37 ++
 ...3-Fix-build-issue-by-selecting-CONFIG_REG.patch |  30 +
 ...0-Fix-build-issue-by-selecting-REGMAP-I2C.patch |  10 +
 2920_sign-file-patch-for-libressl.patch            |  16 +
 5013_enable-cpu-optimizations-for-gcc10.patch      | 694 +++++++++++++++++++++
 8 files changed, 902 insertions(+)

diff --git a/0000_README b/0000_README
index 9018993..3f9bf5f 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,34 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1500_XATTR_USER_PREFIX.patch
+From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc:   Support for namespace user.pax.* on tmpfs.
+
+Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
+From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc:   Enable link security restrictions by default.
+
+Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
+From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
+Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+
+Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
+From:   https://bugs.gentoo.org/710790
+Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
+
+Patch:  2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
+From:   https://bugs.gentoo.org/721096
+Desc:   VIDEO_TVP5150 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #721096. Thanks to Max Steel
+
+Patch:  2920_sign-file-patch-for-libressl.patch
+From:   https://bugs.gentoo.org/717166
+Desc:   sign-file: full functionality with modern LibreSSL
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
+
+Patch:  5013_enable-cpu-optimizations-for-gcc10.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel patch enables gcc = v10.1+ optimizations for additional CPUs.

diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 0000000..245dcc2
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,67 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on
+tmpfs filesystem used to house PaX flags.  The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed even on all Gentoo systems so that XATTR_PAX flags
+are preserved for users who might build packages using portage on
+a tmpfs system with a non-hardened kernel and then switch to a
+hardened kernel with XATTR_PAX enabled.
+
+The namespace is added to any user with Extended Attribute support
+enabled for tmpfs.  Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index 1590c49..5eab462 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -73,5 +73,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT  "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+ 
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+ 
+ #endif /* _UAPI_LINUX_XATTR_H */
+--- a/mm/shmem.c	2020-05-04 15:30:27.042035334 -0400
++++ b/mm/shmem.c	2020-05-04 15:34:57.013881725 -0400
+@@ -3238,6 +3238,14 @@ static int shmem_xattr_handler_set(const
+ 	struct shmem_inode_info *info = SHMEM_I(inode);
+ 
+ 	name = xattr_full_name(handler, name);
++
++	if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++		if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++			return -EOPNOTSUPP;
++		if (size > 8)
++			return -EINVAL;
++	}
++
+ 	return simple_xattr_set(&info->xattrs, name, value, size, flags, NULL);
+ }
+ 
+@@ -3253,6 +3261,12 @@ static const struct xattr_handler shmem_
+ 	.set = shmem_xattr_handler_set,
+ };
+ 
++static const struct xattr_handler shmem_user_xattr_handler = {
++	.prefix = XATTR_USER_PREFIX,
++	.get = shmem_xattr_handler_get,
++	.set = shmem_xattr_handler_set,
++};
++
+ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #ifdef CONFIG_TMPFS_POSIX_ACL
+ 	&posix_acl_access_xattr_handler,
+@@ -3260,6 +3274,7 @@ static const struct xattr_handler *shmem
+ #endif
+ 	&shmem_security_xattr_handler,
+ 	&shmem_trusted_xattr_handler,
++	&shmem_user_xattr_handler,
+ 	NULL
+ };
+ 

diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 0000000..f0ed144
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,20 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+--- a/fs/namei.c	2018-09-28 07:56:07.770005006 -0400
++++ b/fs/namei.c	2018-09-28 07:56:43.370349204 -0400
+@@ -885,8 +885,8 @@ static inline void put_link(struct namei
+ 		path_put(&last->link);
+ }
+ 
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+ int sysctl_protected_fifos __read_mostly;
+ int sysctl_protected_regular __read_mostly;
+ 

diff --git a/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
new file mode 100644
index 0000000..394ad48
--- /dev/null
+++ b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
@@ -0,0 +1,37 @@
+The encryption is only mandatory to be enforced when both sides are using
+Secure Simple Pairing and this means the key size check makes only sense
+in that case.
+
+On legacy Bluetooth 2.0 and earlier devices like mice the encryption was
+optional and thus causing an issue if the key size check is not bound to
+using Secure Simple Pairing.
+
+Fixes: d5bb334a8e17 ("Bluetooth: Align minimum encryption key size for LE and BR/EDR connections")
+Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
+Cc: stable@vger.kernel.org
+---
+ net/bluetooth/hci_conn.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 3cf0764d5793..7516cdde3373 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1272,8 +1272,13 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ 			return 0;
+ 	}
+ 
+-	if (hci_conn_ssp_enabled(conn) &&
+-	    !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
++	/* If Secure Simple Pairing is not enabled, then legacy connection
++	 * setup is used and no encryption or key sizes can be enforced.
++	 */
++	if (!hci_conn_ssp_enabled(conn))
++		return 1;
++
++	if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ 		return 0;
+ 
+ 	/* The minimum encryption key size needs to be enforced by the
+-- 
+2.20.1

diff --git a/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
new file mode 100644
index 0000000..4335685
--- /dev/null
+++ b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
@@ -0,0 +1,30 @@
+From dc328d75a6f37f4ff11a81ae16b1ec88c3197640 Mon Sep 17 00:00:00 2001
+From: Mike Pagano <mpagano@gentoo.org>
+Date: Mon, 23 Mar 2020 08:20:06 -0400
+Subject: [PATCH 1/1] This driver requires REGMAP_I2C to build.  Select it by
+ default in Kconfig. Reported at gentoo bugzilla:
+ https://bugs.gentoo.org/710790
+Cc: mpagano@gentoo.org
+
+Reported-by: Phil Stracchino <phils@caerllewys.net>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/hwmon/Kconfig | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 47ac20aee06f..530b4f29ba85 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -1769,6 +1769,7 @@ config SENSORS_TMP421
+ config SENSORS_TMP513
+ 	tristate "Texas Instruments TMP513 and compatibles"
+ 	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  If you say yes here you get support for Texas Instruments TMP512,
+ 	  and TMP513 temperature and power supply sensor chips.
+-- 
+2.24.1
+

diff --git a/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch b/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
new file mode 100644
index 0000000..1bc058e
--- /dev/null
+++ b/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
@@ -0,0 +1,10 @@
+--- a/drivers/media/i2c/Kconfig	2020-05-13 12:38:05.102903309 -0400
++++ b/drivers/media/i2c/Kconfig	2020-05-13 12:38:51.283171977 -0400
+@@ -378,6 +378,7 @@ config VIDEO_TVP514X
+ config VIDEO_TVP5150
+ 	tristate "Texas Instruments TVP5150 video decoder"
+ 	depends on VIDEO_V4L2 && I2C
++	select REGMAP_I2C
+ 	select V4L2_FWNODE
+ 	help
+ 	  Support for the Texas Instruments TVP5150 video decoder.

diff --git a/2920_sign-file-patch-for-libressl.patch b/2920_sign-file-patch-for-libressl.patch
new file mode 100644
index 0000000..e6ec017
--- /dev/null
+++ b/2920_sign-file-patch-for-libressl.patch
@@ -0,0 +1,16 @@
+--- a/scripts/sign-file.c	2020-05-20 18:47:21.282820662 -0400
++++ b/scripts/sign-file.c	2020-05-20 18:48:37.991081899 -0400
+@@ -41,9 +41,10 @@
+  * signing with anything other than SHA1 - so we're stuck with that if such is
+  * the case.
+  */
+-#if defined(LIBRESSL_VERSION_NUMBER) || \
+-	OPENSSL_VERSION_NUMBER < 0x10000000L || \
+-	defined(OPENSSL_NO_CMS)
++#if defined(OPENSSL_NO_CMS) || \
++	( defined(LIBRESSL_VERSION_NUMBER) \
++	&& (LIBRESSL_VERSION_NUMBER < 0x3010000fL) ) || \
++	OPENSSL_VERSION_NUMBER < 0x10000000L
+ #define USE_PKCS7
+ #endif
+ #ifndef USE_PKCS7

diff --git a/5013_enable-cpu-optimizations-for-gcc10.patch b/5013_enable-cpu-optimizations-for-gcc10.patch
new file mode 100644
index 0000000..0fc0a64
--- /dev/null
+++ b/5013_enable-cpu-optimizations-for-gcc10.patch
@@ -0,0 +1,694 @@
+From 4666424a864159b4de572c90adb2c3e1fcdd5890 Mon Sep 17 00:00:00 2001
+From: graysky <graysky@archlinux.us>
+Date: Fri, 13 Nov 2020 15:45:08 -0500
+Subject: [PATCH] 
+ enable_additional_cpu_optimizations_for_gcc_v10.1+_kernel_v5.8+.patch
+
+WARNING
+This patch works with gcc versions 10.1+ and with kernel version 5.8+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
+Use the older version of this patch hosted on the same github for older
+versions of gcc.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features  --->
+  Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* AMD Family 17h (Zen 2)
+* Intel Silvermont low-power processors
+* Intel Goldmont low-power processors (Apollo Lake and Denverton)
+* Intel Goldmont Plus low-power processors (Gemini Lake)
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
+* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+* Intel 10th Gen Core i7/i9 (Ice Lake)
+* Intel Xeon (Cascade Lake)
+* Intel Xeon (Cooper Lake)
+* Intel 3rd Gen 10nm++  i3/i5/i7/i9-family (Tiger Lake)
+
+It also offers to compile passing the 'native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[2]
+
+Do NOT try using the 'native' option on AMD Piledriver, Steamroller, or
+Excavator CPUs (-march=bdver{2,3,4} flag). The build will error out due the
+kernel's objtool issue with these.[3a,b]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'match=atom' flags when I
+believe it should use the newer 'march=bonnell' flag for atom processors.[4]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[5] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make endpoint comparing
+a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=5.8
+gcc version >=10.1
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[6]
+
+REFERENCES
+1.  https://gcc.gnu.org/gcc-4.9/changes.html
+2.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+3a. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95671#c11
+3b. https://github.com/graysky2/kernel_gcc_patch/issues/55
+4.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
+5.  https://github.com/graysky2/kernel_gcc_patch/issues/15
+6.  http://www.linuxforge.net/docs/linux/linux-gcc.php
+---
+ arch/x86/Kconfig.cpu            | 301 ++++++++++++++++++++++++++++----
+ arch/x86/Makefile               |  53 +++++-
+ arch/x86/Makefile_32.cpu        |  32 +++-
+ arch/x86/include/asm/vermagic.h |  56 ++++++
+ 4 files changed, 407 insertions(+), 35 deletions(-)
+
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index 814fe0d349b0..7b08e87fe797 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -123,6 +123,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ 	bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ 	depends on X86_32
++	select X86_P6_NOP
+ 	help
+ 	  Select this for Intel Pentium 4 chips.  This includes the
+ 	  Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -155,9 +156,8 @@ config MPENTIUM4
+ 		-Paxville
+ 		-Dempsey
+ 
+-
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	help
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -165,7 +165,7 @@ config MK6
+ 	  flags to GCC.
+ 
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	help
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -173,12 +173,90 @@ config MK7
+ 	  flags to GCC.
+ 
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	help
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
+ 
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	help
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	help
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	help
++	  Select this for AMD Family 10h Barcelona processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	help
++	  Select this for AMD Family 14h Bobcat processors.
++
++	  Enables -march=btver1
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	help
++	  Select this for AMD Family 16h Jaguar processors.
++
++	  Enables -march=btver2
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	help
++	  Select this for AMD Family 15h Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	help
++	  Select this for AMD Family 15h Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MSTEAMROLLER
++	bool "AMD Steamroller"
++	help
++	  Select this for AMD Family 15h Steamroller processors.
++
++	  Enables -march=bdver3
++
++config MEXCAVATOR
++	bool "AMD Excavator"
++	help
++	  Select this for AMD Family 15h Excavator processors.
++
++	  Enables -march=bdver4
++
++config MZEN
++	bool "AMD Zen"
++	help
++	  Select this for AMD Family 17h Zen processors.
++
++	  Enables -march=znver1
++
++config MZEN2
++	bool "AMD Zen 2"
++	help
++	  Select this for AMD Family 17h Zen 2 processors.
++
++	  Enables -march=znver2
++
+ config MCRUSOE
+ 	bool "Crusoe"
+ 	depends on X86_32
+@@ -260,6 +338,7 @@ config MVIAC7
+ 
+ config MPSC
+ 	bool "Intel P4 / older Netburst based Xeon"
++	select X86_P6_NOP
+ 	depends on X86_64
+ 	help
+ 	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -269,8 +348,19 @@ config MPSC
+ 	  using the cpu family field
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+ 
++config MATOM
++	bool "Intel Atom"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Atom platform. Intel Atom CPUs have an
++	  in-order pipelining architecture and thus can benefit from
++	  accordingly optimized code. Use a recent GCC with specific Atom
++	  support in order to fully benefit from selecting this option.
++
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
++	select X86_P6_NOP
+ 	help
+ 
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -278,14 +368,151 @@ config MCORE2
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
+ 
+-config MATOM
+-	bool "Intel Atom"
++	  Enables -march=core2
++
++config MNEHALEM
++	bool "Intel Nehalem"
++	select X86_P6_NOP
+ 	help
+ 
+-	  Select this for the Intel Atom platform. Intel Atom CPUs have an
+-	  in-order pipelining architecture and thus can benefit from
+-	  accordingly optimized code. Use a recent GCC with specific Atom
+-	  support in order to fully benefit from selecting this option.
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Westmere formerly Nehalem-C family.
++
++	  Enables -march=westmere
++
++config MSILVERMONT
++	bool "Intel Silvermont"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
++config MGOLDMONT
++	bool "Intel Goldmont"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++	  Enables -march=goldmont
++
++config MGOLDMONTPLUS
++	bool "Intel Goldmont Plus"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++	  Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	select X86_P6_NOP
++	help
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	select X86_P6_NOP
++	help
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	select X86_P6_NOP
++	help
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	select X86_P6_NOP
++	help
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
++
++config MSKYLAKE
++	bool "Intel Skylake"
++	select X86_P6_NOP
++	help
++
++	  Select this for 6th Gen Core processors in the Skylake family.
++
++	  Enables -march=skylake
++
++config MSKYLAKEX
++	bool "Intel Skylake X"
++	select X86_P6_NOP
++	help
++
++	  Select this for 6th Gen Core processors in the Skylake X family.
++
++	  Enables -march=skylake-avx512
++
++config MCANNONLAKE
++	bool "Intel Cannon Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for 8th Gen Core processors
++
++	  Enables -march=cannonlake
++
++config MICELAKE
++	bool "Intel Ice Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for 10th Gen Core processors in the Ice Lake family.
++
++	  Enables -march=icelake-client
++
++config MCASCADELAKE
++	bool "Intel Cascade Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for Xeon processors in the Cascade Lake family.
++
++	  Enables -march=cascadelake
++
++config MCOOPERLAKE
++	bool "Intel Cooper Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for Xeon processors in the Cooper Lake family.
++
++	  Enables -march=cooperlake
++
++config MTIGERLAKE
++	bool "Intel Tiger Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for third-generation 10 nm process processors in the Tiger Lake family.
++
++	  Enables -march=tigerlake
+ 
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+@@ -294,6 +521,19 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+ 
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ help
++
++   GCC 4.2 and above support -march=native, which automatically detects
++   the optimum settings to use based on your processor. -march=native
++   also detects and applies additional settings beyond -march specific
++   to your CPU, (eg. -msse4). Unless you have a specific reason not to
++   (e.g. distcc cross-compiling), you should probably be using
++   -march=native rather than anything listed below.
++
++   Enables -march=native
++
+ endchoice
+ 
+ config X86_GENERIC
+@@ -318,7 +558,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ 	default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -336,35 +576,36 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MATOM || MNATIVE
+ 
+ config X86_USE_3DNOW
+ 	def_bool y
+ 	depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+ 
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs).  In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+-	def_bool y
+-	depends on X86_64
+-	depends on (MCORE2 || MPENTIUM4 || MPSC)
++	default n
++	bool "Support for P6_NOPs on Intel chips"
++	depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS  || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE)
++	help
++	P6_NOPs are a relatively minor optimization that require a family >=
++	6 processor, except that it is broken on certain VIA chips.
++	Furthermore, AMD chips prefer a totally different sequence of NOPs
++	(which work on all CPUs).  In addition, it looks like Virtual PC
++	does not understand them.
++
++	As a result, disallow these if we're not compiling for X86_64 (these
++	NOPs do work on all x86-64 capable chips); the list of processors in
++	the right-hand clause are the cores that benefit from this optimization.
++
++	Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
+ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE || MATOM) || X86_64
+ 
+ config X86_CMPXCHG64
+ 	def_bool y
+@@ -374,7 +615,7 @@ config X86_CMPXCHG64
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+ 
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 154259f18b8b..405b1f2b3c65 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -115,13 +115,60 @@ else
+ 	KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+ 
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
++        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+ 
+         cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++        cflags-$(CONFIG_MNEHALEM) += \
++                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++        cflags-$(CONFIG_MWESTMERE) += \
++                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++        cflags-$(CONFIG_MSILVERMONT) += \
++                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++        cflags-$(CONFIG_MGOLDMONT) += \
++                $(call cc-option,-march=goldmont,$(call cc-option,-mtune=goldmont))
++        cflags-$(CONFIG_MGOLDMONTPLUS) += \
++                $(call cc-option,-march=goldmont-plus,$(call cc-option,-mtune=goldmont-plus))
++        cflags-$(CONFIG_MSANDYBRIDGE) += \
++                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++        cflags-$(CONFIG_MIVYBRIDGE) += \
++                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++        cflags-$(CONFIG_MHASWELL) += \
++                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++        cflags-$(CONFIG_MBROADWELL) += \
++                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++        cflags-$(CONFIG_MSKYLAKE) += \
++                $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++        cflags-$(CONFIG_MSKYLAKEX) += \
++                $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
++        cflags-$(CONFIG_MCANNONLAKE) += \
++                $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
++        cflags-$(CONFIG_MICELAKE) += \
++                $(call cc-option,-march=icelake-client,$(call cc-option,-mtune=icelake-client))
++        cflags-$(CONFIG_MCASCADELAKE) += \
++                $(call cc-option,-march=cascadelake,$(call cc-option,-mtune=cascadelake))
++        cflags-$(CONFIG_MCOOPERLAKE) += \
++                $(call cc-option,-march=cooperlake,$(call cc-option,-mtune=cooperlake))
++        cflags-$(CONFIG_MTIGERLAKE) += \
++                $(call cc-option,-march=tigerlake,$(call cc-option,-mtune=tigerlake))
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         KBUILD_CFLAGS += $(cflags-y)
+ 
+diff --git a/arch/x86/Makefile_32.cpu b/arch/x86/Makefile_32.cpu
+index cd3056759880..cb0a4c6bd987 100644
+--- a/arch/x86/Makefile_32.cpu
++++ b/arch/x86/Makefile_32.cpu
+@@ -24,7 +24,19 @@ cflags-$(CONFIG_MK6)		+= -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7)		+= -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER)	+= $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR)	+= $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN)	+= $(call cc-option,-march=znver1,-march=athlon)
++cflags-$(CONFIG_MZEN2)	+= $(call cc-option,-march=znver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE)	+= -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -33,8 +45,24 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-option,-march=c3,-march=i486) -falign-fu
+ cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7)		+= -march=i686
+ cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+-	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MGOLDMONT)	+= -march=i686 $(call tune,goldmont)
++cflags-$(CONFIG_MGOLDMONTPLUS)	+= -march=i686 $(call tune,goldmont-plus)
++cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE)	+= -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX)	+= -march=i686 $(call tune,skylake-avx512)
++cflags-$(CONFIG_MCANNONLAKE)	+= -march=i686 $(call tune,cannonlake)
++cflags-$(CONFIG_MICELAKE)	+= -march=i686 $(call tune,icelake-client)
++cflags-$(CONFIG_MCASCADELAKE)	+= -march=i686 $(call tune,cascadelake)
++cflags-$(CONFIG_MCOOPERLAKE)	+= -march=i686 $(call tune,cooperlake)
++cflags-$(CONFIG_MTIGERLAKE)	+= -march=i686 $(call tune,tigerlake)
++cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ 
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN)		+= -march=i486
+diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
+index 75884d2cdec3..14c222e78213 100644
+--- a/arch/x86/include/asm/vermagic.h
++++ b/arch/x86/include/asm/vermagic.h
+@@ -17,6 +17,40 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
++#elif defined CONFIG_MCOOPERLAKE
++#define MODULE_PROC_FAMILY "COOPERLAKE "
++#elif defined CONFIG_MTIGERLAKE
++#define MODULE_PROC_FAMILY "TIGERLAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -35,6 +69,28 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+-- 
+2.29.2
+


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2020-12-13 16:09 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2020-12-13 16:09 UTC (permalink / raw
  To: gentoo-commits

commit:     8aab1bff0d949e888e713cf9d6b7a4d790c73952
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 13 16:08:57 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Dec 13 16:08:57 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8aab1bff

Remove redundant patch

Removed: 2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                                |  4 ----
 2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch | 10 ----------
 2 files changed, 14 deletions(-)

diff --git a/0000_README b/0000_README
index 3f9bf5f..c2ae47f 100644
--- a/0000_README
+++ b/0000_README
@@ -59,10 +59,6 @@ Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
 
-Patch:  2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
-From:   https://bugs.gentoo.org/721096
-Desc:   VIDEO_TVP5150 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #721096. Thanks to Max Steel
-
 Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL

diff --git a/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch b/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
deleted file mode 100644
index 1bc058e..0000000
--- a/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
+++ /dev/null
@@ -1,10 +0,0 @@
---- a/drivers/media/i2c/Kconfig	2020-05-13 12:38:05.102903309 -0400
-+++ b/drivers/media/i2c/Kconfig	2020-05-13 12:38:51.283171977 -0400
-@@ -378,6 +378,7 @@ config VIDEO_TVP514X
- config VIDEO_TVP5150
- 	tristate "Texas Instruments TVP5150 video decoder"
- 	depends on VIDEO_V4L2 && I2C
-+	select REGMAP_I2C
- 	select V4L2_FWNODE
- 	help
- 	  Support for the Texas Instruments TVP5150 video decoder.


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2020-12-14 20:45 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2020-12-14 20:45 UTC (permalink / raw
  To: gentoo-commits

commit:     312231d0bf248ef093c70ce79db3b5b1ce1b29df
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 14 20:45:42 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Dec 14 20:45:42 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=312231d0

Linux patch 5.10.1

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |  4 ++++
 1001_linux-5.10.1.patch | 59 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+)

diff --git a/0000_README b/0000_README
index c2ae47f..414f5d7 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-5.10.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-5.10.1.patch b/1001_linux-5.10.1.patch
new file mode 100644
index 0000000..1aefc35
--- /dev/null
+++ b/1001_linux-5.10.1.patch
@@ -0,0 +1,59 @@
+diff --git a/Makefile b/Makefile
+index e30cf02da8b89..076d4e6b9ccc2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index dc8568ab96f24..56b723d012ac1 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -3730,14 +3730,12 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
+ 	blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs));
+ 
+ 	/*
+-	 * RAID10 personality requires bio splitting,
+-	 * RAID0/1/4/5/6 don't and process large discard bios properly.
++	 * RAID1 and RAID10 personalities require bio splitting,
++	 * RAID0/4/5/6 don't and process large discard bios properly.
+ 	 */
+-	if (rs_is_raid10(rs)) {
+-		limits->discard_granularity = max(chunk_size_bytes,
+-						  limits->discard_granularity);
+-		limits->max_discard_sectors = min_not_zero(rs->md.chunk_sectors,
+-							   limits->max_discard_sectors);
++	if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
++		limits->discard_granularity = chunk_size_bytes;
++		limits->max_discard_sectors = rs->md.chunk_sectors;
+ 	}
+ }
+ 
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index bb645bc3ba6d6..2175a5ac4f7c6 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -311,7 +311,7 @@ struct mddev {
+ 	int				external;	/* metadata is
+ 							 * managed externally */
+ 	char				metadata_type[17]; /* externally set*/
+-	unsigned int			chunk_sectors;
++	int				chunk_sectors;
+ 	time64_t			ctime, utime;
+ 	int				level, layout;
+ 	char				clevel[16];
+@@ -339,7 +339,7 @@ struct mddev {
+ 	 */
+ 	sector_t			reshape_position;
+ 	int				delta_disks, new_level, new_layout;
+-	unsigned int			new_chunk_sectors;
++	int				new_chunk_sectors;
+ 	int				reshape_backwards;
+ 
+ 	struct md_thread		*thread;	/* management thread */


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2020-12-18 16:08 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2020-12-18 16:08 UTC (permalink / raw
  To: gentoo-commits

commit:     297148d7527b5bcd0796503f09fd87af052541dc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 18 16:07:30 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Dec 18 16:07:30 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=297148d7

f2fs: fix to seek incorrect data offset in inline data file

See bug #760573
Thanks to Stefan de Konink for reporting

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |  4 ++
 1900_f2fs-seek-data-offset-inline-data.patch | 65 ++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+)

diff --git a/0000_README b/0000_README
index 414f5d7..4cf514e 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1900_f2fs-seek-data-offset-inline-data.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=7a6e59d719ef0ec9b3d765cba3ba98ee585cbde3
+Desc:   f2fs: fix to seek incorrect data offset in inline data file
+
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1900_f2fs-seek-data-offset-inline-data.patch b/1900_f2fs-seek-data-offset-inline-data.patch
new file mode 100644
index 0000000..28b00eb
--- /dev/null
+++ b/1900_f2fs-seek-data-offset-inline-data.patch
@@ -0,0 +1,65 @@
+From 7a6e59d719ef0ec9b3d765cba3ba98ee585cbde3 Mon Sep 17 00:00:00 2001
+From: Chao Yu <yuchao0@huawei.com>
+Date: Mon, 2 Nov 2020 17:36:58 +0800
+Subject: f2fs: fix to seek incorrect data offset in inline data file
+
+As kitestramuort reported:
+
+F2FS-fs (nvme0n1p4): access invalid blkaddr:1598541474
+[   25.725898] ------------[ cut here ]------------
+[   25.725903] WARNING: CPU: 6 PID: 2018 at f2fs_is_valid_blkaddr+0x23a/0x250
+[   25.725923] Call Trace:
+[   25.725927]  ? f2fs_llseek+0x204/0x620
+[   25.725929]  ? ovl_copy_up_data+0x14f/0x200
+[   25.725931]  ? ovl_copy_up_inode+0x174/0x1e0
+[   25.725933]  ? ovl_copy_up_one+0xa22/0xdf0
+[   25.725936]  ? ovl_copy_up_flags+0xa6/0xf0
+[   25.725938]  ? ovl_aio_cleanup_handler+0xd0/0xd0
+[   25.725939]  ? ovl_maybe_copy_up+0x86/0xa0
+[   25.725941]  ? ovl_open+0x22/0x80
+[   25.725943]  ? do_dentry_open+0x136/0x350
+[   25.725945]  ? path_openat+0xb7e/0xf40
+[   25.725947]  ? __check_sticky+0x40/0x40
+[   25.725948]  ? do_filp_open+0x70/0x100
+[   25.725950]  ? __check_sticky+0x40/0x40
+[   25.725951]  ? __check_sticky+0x40/0x40
+[   25.725953]  ? __x64_sys_openat+0x1db/0x2c0
+[   25.725955]  ? do_syscall_64+0x2d/0x40
+[   25.725957]  ? entry_SYSCALL_64_after_hwframe+0x44/0xa9
+
+llseek() reports invalid block address access, the root cause is if
+file has inline data, f2fs_seek_block() will access inline data regard
+as block address index in inode block, which should be wrong, fix it.
+
+Reported-by: kitestramuort <kitestramuort@autistici.org>
+Signed-off-by: Chao Yu <yuchao0@huawei.com>
+Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
+---
+ fs/f2fs/file.c | 11 ++++++++---
+ 1 file changed, 8 insertions(+), 3 deletions(-)
+
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index ee861c6d9ff02..fe39e591e5b4c 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -412,9 +412,14 @@ static loff_t f2fs_seek_block(struct file *file, loff_t offset, int whence)
+ 		goto fail;
+ 
+ 	/* handle inline data case */
+-	if (f2fs_has_inline_data(inode) && whence == SEEK_HOLE) {
+-		data_ofs = isize;
+-		goto found;
++	if (f2fs_has_inline_data(inode)) {
++		if (whence == SEEK_HOLE) {
++			data_ofs = isize;
++			goto found;
++		} else if (whence == SEEK_DATA) {
++			data_ofs = offset;
++			goto found;
++		}
+ 	}
+ 
+ 	pgofs = (pgoff_t)(offset >> PAGE_SHIFT);
+-- 
+cgit 1.2.3-1.el7
+


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2020-12-21 13:26 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2020-12-21 13:26 UTC (permalink / raw
  To: gentoo-commits

commit:     53260d25473b3195b048efe98c6a0efcf9497465
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 21 13:25:43 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Dec 21 13:25:43 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=53260d25

Rename patch and add 5.10.2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   4 +
 1001_linux-5.10.1.patch => 1000_linux-5.10.1.patch |   0
 1001_linux-5.10.2.patch                            | 349 +++++++++++++++++++++
 3 files changed, 353 insertions(+)

diff --git a/0000_README b/0000_README
index 4cf514e..52d8ee0 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-5.10.1.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.1
 
+Patch:  1001_linux-5.10.2.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.2
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-5.10.1.patch b/1000_linux-5.10.1.patch
similarity index 100%
rename from 1001_linux-5.10.1.patch
rename to 1000_linux-5.10.1.patch

diff --git a/1001_linux-5.10.2.patch b/1001_linux-5.10.2.patch
new file mode 100644
index 0000000..5242eac
--- /dev/null
+++ b/1001_linux-5.10.2.patch
@@ -0,0 +1,349 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 44fde25bb221e..f6a1513dfb76c 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5663,6 +5663,7 @@
+ 					device);
+ 				j = NO_REPORT_LUNS (don't use report luns
+ 					command, uas only);
++				k = NO_SAME (do not use WRITE_SAME, uas only)
+ 				l = NOT_LOCKABLE (don't try to lock and
+ 					unlock ejectable media, not on uas);
+ 				m = MAX_SECTORS_64 (don't transfer more
+diff --git a/Makefile b/Makefile
+index 076d4e6b9ccc2..44f4cd2e58a80 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 562087df7d334..0cc6d35a08156 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -184,11 +184,6 @@ static void omap_8250_mdr1_errataset(struct uart_8250_port *up,
+ 				     struct omap8250_priv *priv)
+ {
+ 	u8 timeout = 255;
+-	u8 old_mdr1;
+-
+-	old_mdr1 = serial_in(up, UART_OMAP_MDR1);
+-	if (old_mdr1 == priv->mdr1)
+-		return;
+ 
+ 	serial_out(up, UART_OMAP_MDR1, priv->mdr1);
+ 	udelay(2);
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index fad31ccd1fa83..1b4eb7046b078 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -342,6 +342,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x06a3, 0x0006), .driver_info =
+ 			USB_QUIRK_CONFIG_INTF_STRINGS },
+ 
++	/* Agfa SNAPSCAN 1212U */
++	{ USB_DEVICE(0x06bd, 0x0001), .driver_info = USB_QUIRK_RESET_RESUME },
++
+ 	/* Guillemot Webcam Hercules Dualpix Exchange (2nd ID) */
+ 	{ USB_DEVICE(0x06f8, 0x0804), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index 53a227217f1cb..99c1ebe86f6a2 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -2734,7 +2734,7 @@ static int __init init(void)
+ {
+ 	int	retval = -ENOMEM;
+ 	int	i;
+-	struct	dummy *dum[MAX_NUM_UDC];
++	struct	dummy *dum[MAX_NUM_UDC] = {};
+ 
+ 	if (usb_disabled())
+ 		return -ENODEV;
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index c799ca5361d4d..74c497fd34762 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -1712,6 +1712,10 @@ retry:
+ 	hcd->state = HC_STATE_SUSPENDED;
+ 	bus_state->next_statechange = jiffies + msecs_to_jiffies(10);
+ 	spin_unlock_irqrestore(&xhci->lock, flags);
++
++	if (bus_state->bus_suspended)
++		usleep_range(5000, 10000);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index bf89172c43cac..84da8406d5b42 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -47,6 +47,7 @@
+ #define PCI_DEVICE_ID_INTEL_DNV_XHCI			0x19d0
+ #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_XHCI	0x15b5
+ #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_XHCI	0x15b6
++#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_XHCI	0x15c1
+ #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_XHCI	0x15db
+ #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_XHCI	0x15d4
+ #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_XHCI		0x15e9
+@@ -55,6 +56,7 @@
+ #define PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI		0x8a13
+ #define PCI_DEVICE_ID_INTEL_CML_XHCI			0xa3af
+ #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI		0x9a13
++#define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI		0x1138
+ 
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_4			0x43b9
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3			0x43ba
+@@ -232,13 +234,15 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ 	    (pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI ||
+-	     pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI))
++	     pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI))
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index aa2d35f982002..4d34f6005381e 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -333,6 +333,9 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ 	if (priv && (priv->quirks & XHCI_SKIP_PHY_INIT))
+ 		hcd->skip_phy_initialization = 1;
+ 
++	if (priv && (priv->quirks & XHCI_SG_TRB_CACHE_SIZE_QUIRK))
++		xhci->quirks |= XHCI_SG_TRB_CACHE_SIZE_QUIRK;
++
+ 	ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
+ 	if (ret)
+ 		goto disable_usb_phy;
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index ebb359ebb261c..d90c0d5df3b37 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1878,6 +1878,7 @@ struct xhci_hcd {
+ #define XHCI_RENESAS_FW_QUIRK	BIT_ULL(36)
+ #define XHCI_SKIP_PHY_INIT	BIT_ULL(37)
+ #define XHCI_DISABLE_SPARSE	BIT_ULL(38)
++#define XHCI_SG_TRB_CACHE_SIZE_QUIRK	BIT_ULL(39)
+ 
+ 	unsigned int		num_active_eps;
+ 	unsigned int		limit_active_eps;
+diff --git a/drivers/usb/misc/legousbtower.c b/drivers/usb/misc/legousbtower.c
+index ba655b4af4fc2..1c9e09138c109 100644
+--- a/drivers/usb/misc/legousbtower.c
++++ b/drivers/usb/misc/legousbtower.c
+@@ -797,7 +797,7 @@ static int tower_probe(struct usb_interface *interface, const struct usb_device_
+ 				      &get_version_reply,
+ 				      sizeof(get_version_reply),
+ 				      1000, GFP_KERNEL);
+-	if (!result) {
++	if (result) {
+ 		dev_err(idev, "get version request failed: %d\n", result);
+ 		retval = result;
+ 		goto error;
+diff --git a/drivers/usb/misc/sisusbvga/Kconfig b/drivers/usb/misc/sisusbvga/Kconfig
+index 655d9cb0651a7..c12cdd0154102 100644
+--- a/drivers/usb/misc/sisusbvga/Kconfig
++++ b/drivers/usb/misc/sisusbvga/Kconfig
+@@ -16,7 +16,7 @@ config USB_SISUSBVGA
+ 
+ config USB_SISUSBVGA_CON
+ 	bool "Text console and mode switching support" if USB_SISUSBVGA
+-	depends on VT
++	depends on VT && BROKEN
+ 	select FONT_8x16
+ 	help
+ 	  Say Y here if you want a VGA text console via the USB dongle or
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 652d6d6f1f365..ff6f41e7e0683 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -867,6 +867,9 @@ static int uas_slave_configure(struct scsi_device *sdev)
+ 	if (devinfo->flags & US_FL_NO_READ_CAPACITY_16)
+ 		sdev->no_read_capacity_16 = 1;
+ 
++	/* Some disks cannot handle WRITE_SAME */
++	if (devinfo->flags & US_FL_NO_SAME)
++		sdev->no_write_same = 1;
+ 	/*
+ 	 * Some disks return the total number of blocks in response
+ 	 * to READ CAPACITY rather than the highest block number.
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index 711ab240058c7..870e9cf3d5dc4 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -35,12 +35,15 @@ UNUSUAL_DEV(0x054c, 0x087d, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_NO_REPORT_OPCODES),
+ 
+-/* Reported-by: Julian Groß <julian.g@posteo.de> */
++/*
++ *  Initially Reported-by: Julian Groß <julian.g@posteo.de>
++ *  Further reports David C. Partridge <david.partridge@perdrix.co.uk>
++ */
+ UNUSUAL_DEV(0x059f, 0x105f, 0x0000, 0x9999,
+ 		"LaCie",
+ 		"2Big Quadra USB3",
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+-		US_FL_NO_REPORT_OPCODES),
++		US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME),
+ 
+ /*
+  * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
+diff --git a/drivers/usb/storage/usb.c b/drivers/usb/storage/usb.c
+index 94a64729dc27d..90aa9c12ffac5 100644
+--- a/drivers/usb/storage/usb.c
++++ b/drivers/usb/storage/usb.c
+@@ -541,6 +541,9 @@ void usb_stor_adjust_quirks(struct usb_device *udev, unsigned long *fflags)
+ 		case 'j':
+ 			f |= US_FL_NO_REPORT_LUNS;
+ 			break;
++		case 'k':
++			f |= US_FL_NO_SAME;
++			break;
+ 		case 'l':
+ 			f |= US_FL_NOT_LOCKABLE;
+ 			break;
+diff --git a/include/linux/usb_usual.h b/include/linux/usb_usual.h
+index 4a19ac3f24d06..6b03fdd69d274 100644
+--- a/include/linux/usb_usual.h
++++ b/include/linux/usb_usual.h
+@@ -84,6 +84,8 @@
+ 		/* Cannot handle REPORT_LUNS */			\
+ 	US_FLAG(ALWAYS_SYNC, 0x20000000)			\
+ 		/* lies about caching, so always sync */	\
++	US_FLAG(NO_SAME, 0x40000000)				\
++		/* Cannot handle WRITE_SAME */			\
+ 
+ #define US_FLAG(name, value)	US_FL_##name = value ,
+ enum { US_DO_ALL_FLAGS };
+diff --git a/include/uapi/linux/ptrace.h b/include/uapi/linux/ptrace.h
+index a71b6e3b03ebc..83ee45fa634b9 100644
+--- a/include/uapi/linux/ptrace.h
++++ b/include/uapi/linux/ptrace.h
+@@ -81,7 +81,8 @@ struct seccomp_metadata {
+ 
+ struct ptrace_syscall_info {
+ 	__u8 op;	/* PTRACE_SYSCALL_INFO_* */
+-	__u32 arch __attribute__((__aligned__(sizeof(__u32))));
++	__u8 pad[3];
++	__u32 arch;
+ 	__u64 instruction_pointer;
+ 	__u64 stack_pointer;
+ 	union {
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 327ec42a36b09..de1917484647e 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -1935,11 +1935,15 @@ static int snd_pcm_oss_set_subdivide(struct snd_pcm_oss_file *pcm_oss_file, int
+ static int snd_pcm_oss_set_fragment1(struct snd_pcm_substream *substream, unsigned int val)
+ {
+ 	struct snd_pcm_runtime *runtime;
++	int fragshift;
+ 
+ 	runtime = substream->runtime;
+ 	if (runtime->oss.subdivision || runtime->oss.fragshift)
+ 		return -EINVAL;
+-	runtime->oss.fragshift = val & 0xffff;
++	fragshift = val & 0xffff;
++	if (fragshift >= 31)
++		return -EINVAL;
++	runtime->oss.fragshift = fragshift;
+ 	runtime->oss.maxfrags = (val >> 16) & 0xffff;
+ 	if (runtime->oss.fragshift < 4)		/* < 16 */
+ 		runtime->oss.fragshift = 4;
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 3bfead393aa34..91f0ed4a2e7eb 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -40,6 +40,8 @@ static u64 parse_audio_format_i_type(struct snd_usb_audio *chip,
+ 	case UAC_VERSION_1:
+ 	default: {
+ 		struct uac_format_type_i_discrete_descriptor *fmt = _fmt;
++		if (format >= 64)
++			return 0; /* invalid format */
+ 		sample_width = fmt->bBitResolution;
+ 		sample_bytes = fmt->bSubframeSize;
+ 		format = 1ULL << format;
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index ca76ba5b5c0b2..2f6d39c2ba7c8 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -193,16 +193,16 @@ static int usb_chmap_ctl_get(struct snd_kcontrol *kcontrol,
+ 	struct snd_pcm_chmap *info = snd_kcontrol_chip(kcontrol);
+ 	struct snd_usb_substream *subs = info->private_data;
+ 	struct snd_pcm_chmap_elem *chmap = NULL;
+-	int i;
++	int i = 0;
+ 
+-	memset(ucontrol->value.integer.value, 0,
+-	       sizeof(ucontrol->value.integer.value));
+ 	if (subs->cur_audiofmt)
+ 		chmap = subs->cur_audiofmt->chmap;
+ 	if (chmap) {
+ 		for (i = 0; i < chmap->channels; i++)
+ 			ucontrol->value.integer.value[i] = chmap->map[i];
+ 	}
++	for (; i < subs->channels_max; i++)
++		ucontrol->value.integer.value[i] = 0;
+ 	return 0;
+ }
+ 
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index 54188ee16c486..4e24509645173 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -1499,17 +1499,16 @@ sub dodie {
+ 	my $log_file;
+ 
+ 	if (defined($opt{"LOG_FILE"})) {
+-	    my $whence = 0; # beginning of file
+-	    my $pos = $test_log_start;
++	    my $whence = 2; # End of file
++	    my $log_size = tell LOG;
++	    my $size = $log_size - $test_log_start;
+ 
+ 	    if (defined($mail_max_size)) {
+-		my $log_size = tell LOG;
+-		$log_size -= $test_log_start;
+-		if ($log_size > $mail_max_size) {
+-		    $whence = 2; # end of file
+-		    $pos = - $mail_max_size;
++		if ($size > $mail_max_size) {
++		    $size = $mail_max_size;
+ 		}
+ 	    }
++	    my $pos = - $size;
+ 	    $log_file = "$tmpdir/log";
+ 	    open (L, "$opt{LOG_FILE}") or die "Can't open $opt{LOG_FILE} to read)";
+ 	    open (O, "> $tmpdir/log") or die "Can't open $tmpdir/log\n";
+@@ -4253,7 +4252,12 @@ sub do_send_mail {
+     $mail_command =~ s/\$SUBJECT/$subject/g;
+     $mail_command =~ s/\$MESSAGE/$message/g;
+ 
+-    run_command $mail_command;
++    my $ret = run_command $mail_command;
++    if (!$ret && defined($file)) {
++	# try again without the file
++	$message .= "\n\n*** FAILED TO SEND LOG ***\n\n";
++	do_send_email($subject, $message);
++    }
+ }
+ 
+ sub send_email {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2020-12-26 15:29 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2020-12-26 15:29 UTC (permalink / raw
  To: gentoo-commits

commit:     3126ae6a6eb890b9182ee505393b5d3ea6bfa693
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 26 15:29:19 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Dec 26 15:29:19 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3126ae6a

Linux patch 5.10.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1002_linux-5.10.3.patch | 1250 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1254 insertions(+)

diff --git a/0000_README b/0000_README
index 52d8ee0..290bc2e 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-5.10.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.2
 
+Patch:  1002_linux-5.10.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-5.10.3.patch b/1002_linux-5.10.3.patch
new file mode 100644
index 0000000..68d93c6
--- /dev/null
+++ b/1002_linux-5.10.3.patch
@@ -0,0 +1,1250 @@
+diff --git a/Makefile b/Makefile
+index 44f4cd2e58a80..a72bc404123d5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/boot/dts/exynos5410-odroidxu.dts b/arch/arm/boot/dts/exynos5410-odroidxu.dts
+index 75b4150c26d72..bd1d8499a108b 100644
+--- a/arch/arm/boot/dts/exynos5410-odroidxu.dts
++++ b/arch/arm/boot/dts/exynos5410-odroidxu.dts
+@@ -327,6 +327,8 @@
+ 				regulator-name = "vddq_lcd";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
++				/* Supplies also GPK and GPJ */
++				regulator-always-on;
+ 			};
+ 
+ 			ldo8_reg: LDO8 {
+@@ -637,11 +639,11 @@
+ };
+ 
+ &usbdrd_dwc3_0 {
+-	dr_mode = "host";
++	dr_mode = "peripheral";
+ };
+ 
+ &usbdrd_dwc3_1 {
+-	dr_mode = "peripheral";
++	dr_mode = "host";
+ };
+ 
+ &usbdrd3_0 {
+diff --git a/arch/arm/boot/dts/exynos5410-pinctrl.dtsi b/arch/arm/boot/dts/exynos5410-pinctrl.dtsi
+index e5d0a2a4f6483..d0aa18443a69b 100644
+--- a/arch/arm/boot/dts/exynos5410-pinctrl.dtsi
++++ b/arch/arm/boot/dts/exynos5410-pinctrl.dtsi
+@@ -560,6 +560,34 @@
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
++
++	usb3_1_oc: usb3-1-oc {
++		samsung,pins = "gpk2-4", "gpk2-5";
++		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
++		samsung,pin-pud = <EXYNOS_PIN_PULL_UP>;
++		samsung,pin-drv = <EXYNOS5420_PIN_DRV_LV1>;
++	};
++
++	usb3_1_vbusctrl: usb3-1-vbusctrl {
++		samsung,pins = "gpk2-6", "gpk2-7";
++		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
++		samsung,pin-pud = <EXYNOS_PIN_PULL_DOWN>;
++		samsung,pin-drv = <EXYNOS5420_PIN_DRV_LV1>;
++	};
++
++	usb3_0_oc: usb3-0-oc {
++		samsung,pins = "gpk3-0", "gpk3-1";
++		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
++		samsung,pin-pud = <EXYNOS_PIN_PULL_UP>;
++		samsung,pin-drv = <EXYNOS5420_PIN_DRV_LV1>;
++	};
++
++	usb3_0_vbusctrl: usb3-0-vbusctrl {
++		samsung,pins = "gpk3-2", "gpk3-3";
++		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
++		samsung,pin-pud = <EXYNOS_PIN_PULL_DOWN>;
++		samsung,pin-drv = <EXYNOS5420_PIN_DRV_LV1>;
++	};
+ };
+ 
+ &pinctrl_2 {
+diff --git a/arch/arm/boot/dts/exynos5410.dtsi b/arch/arm/boot/dts/exynos5410.dtsi
+index 60a87684b1af6..584ce62361b13 100644
+--- a/arch/arm/boot/dts/exynos5410.dtsi
++++ b/arch/arm/boot/dts/exynos5410.dtsi
+@@ -390,6 +390,8 @@
+ &usbdrd3_0 {
+ 	clocks = <&clock CLK_USBD300>;
+ 	clock-names = "usbdrd30";
++	pinctrl-names = "default";
++	pinctrl-0 = <&usb3_0_oc>, <&usb3_0_vbusctrl>;
+ };
+ 
+ &usbdrd_phy0 {
+@@ -401,6 +403,8 @@
+ &usbdrd3_1 {
+ 	clocks = <&clock CLK_USBD301>;
+ 	clock-names = "usbdrd30";
++	pinctrl-names = "default";
++	pinctrl-0 = <&usb3_1_oc>, <&usb3_1_vbusctrl>;
+ };
+ 
+ &usbdrd_dwc3_1 {
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index e19df6cde35d1..170c94ec00685 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -299,11 +299,12 @@ DEFINE_IDTENTRY_ERRORCODE(exc_alignment_check)
+ 	local_irq_enable();
+ 
+ 	if (handle_user_split_lock(regs, error_code))
+-		return;
++		goto out;
+ 
+ 	do_trap(X86_TRAP_AC, SIGBUS, "alignment check", regs,
+ 		error_code, BUS_ADRALN, NULL);
+ 
++out:
+ 	local_irq_disable();
+ }
+ 
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index d11db80d24cd1..9acb9d2c4bcf9 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -147,7 +147,7 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ 	const u32 allowed = CRYPTO_ALG_KERN_DRIVER_ONLY;
+ 	struct sock *sk = sock->sk;
+ 	struct alg_sock *ask = alg_sk(sk);
+-	struct sockaddr_alg *sa = (void *)uaddr;
++	struct sockaddr_alg_new *sa = (void *)uaddr;
+ 	const struct af_alg_type *type;
+ 	void *private;
+ 	int err;
+@@ -155,7 +155,11 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ 	if (sock->state == SS_CONNECTED)
+ 		return -EINVAL;
+ 
+-	if (addr_len < sizeof(*sa))
++	BUILD_BUG_ON(offsetof(struct sockaddr_alg_new, salg_name) !=
++		     offsetof(struct sockaddr_alg, salg_name));
++	BUILD_BUG_ON(offsetof(struct sockaddr_alg, salg_name) != sizeof(*sa));
++
++	if (addr_len < sizeof(*sa) + 1)
+ 		return -EINVAL;
+ 
+ 	/* If caller uses non-allowed flag, return error. */
+@@ -163,7 +167,7 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ 		return -EINVAL;
+ 
+ 	sa->salg_type[sizeof(sa->salg_type) - 1] = 0;
+-	sa->salg_name[sizeof(sa->salg_name) + addr_len - sizeof(*sa) - 1] = 0;
++	sa->salg_name[addr_len - sizeof(*sa) - 1] = 0;
+ 
+ 	type = alg_get_type(sa->salg_type);
+ 	if (PTR_ERR(type) == -ENOENT) {
+diff --git a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+index 35f3bfc3e6f59..8e0f67455c098 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
++++ b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+@@ -405,6 +405,14 @@ static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
+ 		},
+ 		.driver_data = (void *)&sipodev_desc
+ 	},
++	{
++		.ident = "Vero K147",
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "VERO"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "K147"),
++		},
++		.driver_data = (void *)&sipodev_desc
++	},
+ 	{ }	/* Terminate list */
+ };
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
+index 248cc82c838e7..1b320ab581caf 100644
+--- a/drivers/hwtracing/coresight/coresight-etb10.c
++++ b/drivers/hwtracing/coresight/coresight-etb10.c
+@@ -176,6 +176,7 @@ static int etb_enable_perf(struct coresight_device *csdev, void *data)
+ 	unsigned long flags;
+ 	struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+ 	struct perf_output_handle *handle = data;
++	struct cs_buffers *buf = etm_perf_sink_config(handle);
+ 
+ 	spin_lock_irqsave(&drvdata->spinlock, flags);
+ 
+@@ -186,7 +187,7 @@ static int etb_enable_perf(struct coresight_device *csdev, void *data)
+ 	}
+ 
+ 	/* Get a handle on the pid of the process to monitor */
+-	pid = task_pid_nr(handle->event->owner);
++	pid = buf->pid;
+ 
+ 	if (drvdata->pid != -1 && drvdata->pid != pid) {
+ 		ret = -EBUSY;
+@@ -383,6 +384,7 @@ static void *etb_alloc_buffer(struct coresight_device *csdev,
+ 	if (!buf)
+ 		return NULL;
+ 
++	buf->pid = task_pid_nr(event->owner);
+ 	buf->snapshot = overwrite;
+ 	buf->nr_pages = nr_pages;
+ 	buf->data_pages = pages;
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+index abd706b216ac9..e516e5b879e3a 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+@@ -124,8 +124,8 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ 	if (coresight_timeout(drvdata->base, TRCSTATR, TRCSTATR_IDLE_BIT, 1))
+ 		dev_err(etm_dev,
+ 			"timeout while waiting for Idle Trace Status\n");
+-
+-	writel_relaxed(config->pe_sel, drvdata->base + TRCPROCSELR);
++	if (drvdata->nr_pe)
++		writel_relaxed(config->pe_sel, drvdata->base + TRCPROCSELR);
+ 	writel_relaxed(config->cfg, drvdata->base + TRCCONFIGR);
+ 	/* nothing specific implemented */
+ 	writel_relaxed(0x0, drvdata->base + TRCAUXCTLR);
+@@ -141,8 +141,9 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ 	writel_relaxed(config->viiectlr, drvdata->base + TRCVIIECTLR);
+ 	writel_relaxed(config->vissctlr,
+ 		       drvdata->base + TRCVISSCTLR);
+-	writel_relaxed(config->vipcssctlr,
+-		       drvdata->base + TRCVIPCSSCTLR);
++	if (drvdata->nr_pe_cmp)
++		writel_relaxed(config->vipcssctlr,
++			       drvdata->base + TRCVIPCSSCTLR);
+ 	for (i = 0; i < drvdata->nrseqstate - 1; i++)
+ 		writel_relaxed(config->seq_ctrl[i],
+ 			       drvdata->base + TRCSEQEVRn(i));
+@@ -187,13 +188,15 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ 		writeq_relaxed(config->ctxid_pid[i],
+ 			       drvdata->base + TRCCIDCVRn(i));
+ 	writel_relaxed(config->ctxid_mask0, drvdata->base + TRCCIDCCTLR0);
+-	writel_relaxed(config->ctxid_mask1, drvdata->base + TRCCIDCCTLR1);
++	if (drvdata->numcidc > 4)
++		writel_relaxed(config->ctxid_mask1, drvdata->base + TRCCIDCCTLR1);
+ 
+ 	for (i = 0; i < drvdata->numvmidc; i++)
+ 		writeq_relaxed(config->vmid_val[i],
+ 			       drvdata->base + TRCVMIDCVRn(i));
+ 	writel_relaxed(config->vmid_mask0, drvdata->base + TRCVMIDCCTLR0);
+-	writel_relaxed(config->vmid_mask1, drvdata->base + TRCVMIDCCTLR1);
++	if (drvdata->numvmidc > 4)
++		writel_relaxed(config->vmid_mask1, drvdata->base + TRCVMIDCCTLR1);
+ 
+ 	if (!drvdata->skip_power_up) {
+ 		/*
+@@ -779,7 +782,7 @@ static void etm4_init_arch_data(void *info)
+ 	 * LPOVERRIDE, bit[23] implementation supports
+ 	 * low-power state override
+ 	 */
+-	if (BMVAL(etmidr5, 23, 23))
++	if (BMVAL(etmidr5, 23, 23) && (!drvdata->skip_power_up))
+ 		drvdata->lpoverride = true;
+ 	else
+ 		drvdata->lpoverride = false;
+@@ -1178,7 +1181,8 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ 	state = drvdata->save_state;
+ 
+ 	state->trcprgctlr = readl(drvdata->base + TRCPRGCTLR);
+-	state->trcprocselr = readl(drvdata->base + TRCPROCSELR);
++	if (drvdata->nr_pe)
++		state->trcprocselr = readl(drvdata->base + TRCPROCSELR);
+ 	state->trcconfigr = readl(drvdata->base + TRCCONFIGR);
+ 	state->trcauxctlr = readl(drvdata->base + TRCAUXCTLR);
+ 	state->trceventctl0r = readl(drvdata->base + TRCEVENTCTL0R);
+@@ -1194,7 +1198,8 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ 	state->trcvictlr = readl(drvdata->base + TRCVICTLR);
+ 	state->trcviiectlr = readl(drvdata->base + TRCVIIECTLR);
+ 	state->trcvissctlr = readl(drvdata->base + TRCVISSCTLR);
+-	state->trcvipcssctlr = readl(drvdata->base + TRCVIPCSSCTLR);
++	if (drvdata->nr_pe_cmp)
++		state->trcvipcssctlr = readl(drvdata->base + TRCVIPCSSCTLR);
+ 	state->trcvdctlr = readl(drvdata->base + TRCVDCTLR);
+ 	state->trcvdsacctlr = readl(drvdata->base + TRCVDSACCTLR);
+ 	state->trcvdarcctlr = readl(drvdata->base + TRCVDARCCTLR);
+@@ -1240,10 +1245,12 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ 		state->trcvmidcvr[i] = readq(drvdata->base + TRCVMIDCVRn(i));
+ 
+ 	state->trccidcctlr0 = readl(drvdata->base + TRCCIDCCTLR0);
+-	state->trccidcctlr1 = readl(drvdata->base + TRCCIDCCTLR1);
++	if (drvdata->numcidc > 4)
++		state->trccidcctlr1 = readl(drvdata->base + TRCCIDCCTLR1);
+ 
+ 	state->trcvmidcctlr0 = readl(drvdata->base + TRCVMIDCCTLR0);
+-	state->trcvmidcctlr1 = readl(drvdata->base + TRCVMIDCCTLR1);
++	if (drvdata->numvmidc > 4)
++		state->trcvmidcctlr1 = readl(drvdata->base + TRCVMIDCCTLR1);
+ 
+ 	state->trcclaimset = readl(drvdata->base + TRCCLAIMCLR);
+ 
+@@ -1283,7 +1290,8 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
+ 	writel_relaxed(state->trcclaimset, drvdata->base + TRCCLAIMSET);
+ 
+ 	writel_relaxed(state->trcprgctlr, drvdata->base + TRCPRGCTLR);
+-	writel_relaxed(state->trcprocselr, drvdata->base + TRCPROCSELR);
++	if (drvdata->nr_pe)
++		writel_relaxed(state->trcprocselr, drvdata->base + TRCPROCSELR);
+ 	writel_relaxed(state->trcconfigr, drvdata->base + TRCCONFIGR);
+ 	writel_relaxed(state->trcauxctlr, drvdata->base + TRCAUXCTLR);
+ 	writel_relaxed(state->trceventctl0r, drvdata->base + TRCEVENTCTL0R);
+@@ -1299,7 +1307,8 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
+ 	writel_relaxed(state->trcvictlr, drvdata->base + TRCVICTLR);
+ 	writel_relaxed(state->trcviiectlr, drvdata->base + TRCVIIECTLR);
+ 	writel_relaxed(state->trcvissctlr, drvdata->base + TRCVISSCTLR);
+-	writel_relaxed(state->trcvipcssctlr, drvdata->base + TRCVIPCSSCTLR);
++	if (drvdata->nr_pe_cmp)
++		writel_relaxed(state->trcvipcssctlr, drvdata->base + TRCVIPCSSCTLR);
+ 	writel_relaxed(state->trcvdctlr, drvdata->base + TRCVDCTLR);
+ 	writel_relaxed(state->trcvdsacctlr, drvdata->base + TRCVDSACCTLR);
+ 	writel_relaxed(state->trcvdarcctlr, drvdata->base + TRCVDARCCTLR);
+@@ -1350,10 +1359,12 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
+ 			       drvdata->base + TRCVMIDCVRn(i));
+ 
+ 	writel_relaxed(state->trccidcctlr0, drvdata->base + TRCCIDCCTLR0);
+-	writel_relaxed(state->trccidcctlr1, drvdata->base + TRCCIDCCTLR1);
++	if (drvdata->numcidc > 4)
++		writel_relaxed(state->trccidcctlr1, drvdata->base + TRCCIDCCTLR1);
+ 
+ 	writel_relaxed(state->trcvmidcctlr0, drvdata->base + TRCVMIDCCTLR0);
+-	writel_relaxed(state->trcvmidcctlr1, drvdata->base + TRCVMIDCCTLR1);
++	if (drvdata->numvmidc > 4)
++		writel_relaxed(state->trcvmidcctlr1, drvdata->base + TRCVMIDCCTLR1);
+ 
+ 	writel_relaxed(state->trcclaimset, drvdata->base + TRCCLAIMSET);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-priv.h b/drivers/hwtracing/coresight/coresight-priv.h
+index 65a29293b6cb9..f5f654ea29946 100644
+--- a/drivers/hwtracing/coresight/coresight-priv.h
++++ b/drivers/hwtracing/coresight/coresight-priv.h
+@@ -87,6 +87,7 @@ enum cs_mode {
+  * struct cs_buffer - keep track of a recording session' specifics
+  * @cur:	index of the current buffer
+  * @nr_pages:	max number of pages granted to us
++ * @pid:	PID this cs_buffer belongs to
+  * @offset:	offset within the current buffer
+  * @data_size:	how much we collected in this run
+  * @snapshot:	is this run in snapshot mode
+@@ -95,6 +96,7 @@ enum cs_mode {
+ struct cs_buffers {
+ 	unsigned int		cur;
+ 	unsigned int		nr_pages;
++	pid_t			pid;
+ 	unsigned long		offset;
+ 	local_t			data_size;
+ 	bool			snapshot;
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+index 44402d413ebbd..989d965f3d901 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+@@ -227,6 +227,7 @@ static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, void *data)
+ 	unsigned long flags;
+ 	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+ 	struct perf_output_handle *handle = data;
++	struct cs_buffers *buf = etm_perf_sink_config(handle);
+ 
+ 	spin_lock_irqsave(&drvdata->spinlock, flags);
+ 	do {
+@@ -243,7 +244,7 @@ static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, void *data)
+ 		}
+ 
+ 		/* Get a handle on the pid of the process to monitor */
+-		pid = task_pid_nr(handle->event->owner);
++		pid = buf->pid;
+ 
+ 		if (drvdata->pid != -1 && drvdata->pid != pid) {
+ 			ret = -EBUSY;
+@@ -399,6 +400,7 @@ static void *tmc_alloc_etf_buffer(struct coresight_device *csdev,
+ 	if (!buf)
+ 		return NULL;
+ 
++	buf->pid = task_pid_nr(event->owner);
+ 	buf->snapshot = overwrite;
+ 	buf->nr_pages = nr_pages;
+ 	buf->data_pages = pages;
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+index 714f9e867e5f6..3309b1344ffc0 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etr.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+@@ -217,6 +217,8 @@ static int tmc_pages_alloc(struct tmc_pages *tmc_pages,
+ 		} else {
+ 			page = alloc_pages_node(node,
+ 						GFP_KERNEL | __GFP_ZERO, 0);
++			if (!page)
++				goto err;
+ 		}
+ 		paddr = dma_map_page(real_dev, page, 0, PAGE_SIZE, dir);
+ 		if (dma_mapping_error(real_dev, paddr))
+@@ -1550,7 +1552,7 @@ tmc_update_etr_buffer(struct coresight_device *csdev,
+ 
+ 	/* Insert barrier packets at the beginning, if there was an overflow */
+ 	if (lost)
+-		tmc_etr_buf_insert_barrier_packet(etr_buf, etr_buf->offset);
++		tmc_etr_buf_insert_barrier_packet(etr_buf, offset);
+ 	tmc_etr_sync_perf_buffer(etr_perf, offset, size);
+ 
+ 	/*
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 0037c6ecab650..4136bd8142894 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -7590,8 +7590,11 @@ static int md_ioctl(struct block_device *bdev, fmode_t mode,
+ 			err = -EBUSY;
+ 			goto out;
+ 		}
+-		WARN_ON_ONCE(test_bit(MD_CLOSING, &mddev->flags));
+-		set_bit(MD_CLOSING, &mddev->flags);
++		if (test_and_set_bit(MD_CLOSING, &mddev->flags)) {
++			mutex_unlock(&mddev->open_mutex);
++			err = -EBUSY;
++			goto out;
++		}
+ 		did_set_md_closing = true;
+ 		mutex_unlock(&mddev->open_mutex);
+ 		sync_blockdev(bdev);
+diff --git a/drivers/media/usb/msi2500/msi2500.c b/drivers/media/usb/msi2500/msi2500.c
+index 65be6f140fe83..1c60dfb647e5c 100644
+--- a/drivers/media/usb/msi2500/msi2500.c
++++ b/drivers/media/usb/msi2500/msi2500.c
+@@ -1230,7 +1230,7 @@ static int msi2500_probe(struct usb_interface *intf,
+ 	}
+ 
+ 	dev->master = master;
+-	master->bus_num = 0;
++	master->bus_num = -1;
+ 	master->num_chipselect = 1;
+ 	master->transfer_one_message = msi2500_transfer_one_message;
+ 	spi_master_set_devdata(master, dev);
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index e158d3d62056b..9ebeb031329d9 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -8095,7 +8095,7 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
+ 	int error = 0, i;
+ 	void *sense = NULL;
+ 	dma_addr_t sense_handle;
+-	unsigned long *sense_ptr;
++	void *sense_ptr;
+ 	u32 opcode = 0;
+ 	int ret = DCMD_SUCCESS;
+ 
+@@ -8218,6 +8218,13 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
+ 	}
+ 
+ 	if (ioc->sense_len) {
++		/* make sure the pointer is part of the frame */
++		if (ioc->sense_off >
++		    (sizeof(union megasas_frame) - sizeof(__le64))) {
++			error = -EINVAL;
++			goto out;
++		}
++
+ 		sense = dma_alloc_coherent(&instance->pdev->dev, ioc->sense_len,
+ 					     &sense_handle, GFP_KERNEL);
+ 		if (!sense) {
+@@ -8225,12 +8232,11 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
+ 			goto out;
+ 		}
+ 
+-		sense_ptr =
+-		(unsigned long *) ((unsigned long)cmd->frame + ioc->sense_off);
++		sense_ptr = (void *)cmd->frame + ioc->sense_off;
+ 		if (instance->consistent_mask_64bit)
+-			*sense_ptr = cpu_to_le64(sense_handle);
++			put_unaligned_le64(sense_handle, sense_ptr);
+ 		else
+-			*sense_ptr = cpu_to_le32(sense_handle);
++			put_unaligned_le32(sense_handle, sense_ptr);
+ 	}
+ 
+ 	/*
+diff --git a/drivers/soc/tegra/fuse/speedo-tegra210.c b/drivers/soc/tegra/fuse/speedo-tegra210.c
+index 70d3f6e1aa33d..8050742237b76 100644
+--- a/drivers/soc/tegra/fuse/speedo-tegra210.c
++++ b/drivers/soc/tegra/fuse/speedo-tegra210.c
+@@ -94,7 +94,7 @@ static int get_process_id(int value, const u32 *speedos, unsigned int num)
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < num; i++)
+-		if (value < speedos[num])
++		if (value < speedos[i])
+ 			return i;
+ 
+ 	return -EINVAL;
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index f41cba10b86b9..828f9ad1be49c 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1467,6 +1467,10 @@ static void uart_set_ldisc(struct tty_struct *tty)
+ {
+ 	struct uart_state *state = tty->driver_data;
+ 	struct uart_port *uport;
++	struct tty_port *port = &state->port;
++
++	if (!tty_port_initialized(port))
++		return;
+ 
+ 	mutex_lock(&state->port.mutex);
+ 	uport = uart_port_check(state);
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index 25c65accf089c..5e07a0a86d110 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -57,7 +57,8 @@ static const struct ci_hdrc_imx_platform_flag imx6sx_usb_data = {
+ 
+ static const struct ci_hdrc_imx_platform_flag imx6ul_usb_data = {
+ 	.flags = CI_HDRC_SUPPORTS_RUNTIME_PM |
+-		CI_HDRC_TURN_VBUS_EARLY_ON,
++		CI_HDRC_TURN_VBUS_EARLY_ON |
++		CI_HDRC_DISABLE_DEVICE_STREAMING,
+ };
+ 
+ static const struct ci_hdrc_imx_platform_flag imx7d_usb_data = {
+diff --git a/drivers/usb/gadget/function/f_acm.c b/drivers/usb/gadget/function/f_acm.c
+index 46647bfac2ef8..349945e064bba 100644
+--- a/drivers/usb/gadget/function/f_acm.c
++++ b/drivers/usb/gadget/function/f_acm.c
+@@ -686,7 +686,7 @@ acm_bind(struct usb_configuration *c, struct usb_function *f)
+ 	acm_ss_out_desc.bEndpointAddress = acm_fs_out_desc.bEndpointAddress;
+ 
+ 	status = usb_assign_descriptors(f, acm_fs_function, acm_hs_function,
+-			acm_ss_function, NULL);
++			acm_ss_function, acm_ss_function);
+ 	if (status)
+ 		goto fail;
+ 
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index c727cb5de8718..f3443347874d2 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1328,6 +1328,7 @@ static long ffs_epfile_ioctl(struct file *file, unsigned code,
+ 
+ 		switch (epfile->ffs->gadget->speed) {
+ 		case USB_SPEED_SUPER:
++		case USB_SPEED_SUPER_PLUS:
+ 			desc_idx = 2;
+ 			break;
+ 		case USB_SPEED_HIGH:
+@@ -3174,7 +3175,8 @@ static int _ffs_func_bind(struct usb_configuration *c,
+ 	}
+ 
+ 	if (likely(super)) {
+-		func->function.ss_descriptors = vla_ptr(vlabuf, d, ss_descs);
++		func->function.ss_descriptors = func->function.ssp_descriptors =
++			vla_ptr(vlabuf, d, ss_descs);
+ 		ss_len = ffs_do_descs(ffs->ss_descs_count,
+ 				vla_ptr(vlabuf, d, raw_descs) + fs_len + hs_len,
+ 				d_raw_descs__sz - fs_len - hs_len,
+@@ -3584,6 +3586,7 @@ static void ffs_func_unbind(struct usb_configuration *c,
+ 	func->function.fs_descriptors = NULL;
+ 	func->function.hs_descriptors = NULL;
+ 	func->function.ss_descriptors = NULL;
++	func->function.ssp_descriptors = NULL;
+ 	func->interfaces_nums = NULL;
+ 
+ 	ffs_event_add(ffs, FUNCTIONFS_UNBIND);
+diff --git a/drivers/usb/gadget/function/f_midi.c b/drivers/usb/gadget/function/f_midi.c
+index 19d97940eeb93..8fff995b8dd50 100644
+--- a/drivers/usb/gadget/function/f_midi.c
++++ b/drivers/usb/gadget/function/f_midi.c
+@@ -1048,6 +1048,12 @@ static int f_midi_bind(struct usb_configuration *c, struct usb_function *f)
+ 		f->ss_descriptors = usb_copy_descriptors(midi_function);
+ 		if (!f->ss_descriptors)
+ 			goto fail_f_midi;
++
++		if (gadget_is_superspeed_plus(c->cdev->gadget)) {
++			f->ssp_descriptors = usb_copy_descriptors(midi_function);
++			if (!f->ssp_descriptors)
++				goto fail_f_midi;
++		}
+ 	}
+ 
+ 	kfree(midi_function);
+diff --git a/drivers/usb/gadget/function/f_rndis.c b/drivers/usb/gadget/function/f_rndis.c
+index 9534c8ab62a8e..0739b05a0ef7b 100644
+--- a/drivers/usb/gadget/function/f_rndis.c
++++ b/drivers/usb/gadget/function/f_rndis.c
+@@ -87,8 +87,10 @@ static inline struct f_rndis *func_to_rndis(struct usb_function *f)
+ /* peak (theoretical) bulk transfer rate in bits-per-second */
+ static unsigned int bitrate(struct usb_gadget *g)
+ {
++	if (gadget_is_superspeed(g) && g->speed >= USB_SPEED_SUPER_PLUS)
++		return 4250000000U;
+ 	if (gadget_is_superspeed(g) && g->speed == USB_SPEED_SUPER)
+-		return 13 * 1024 * 8 * 1000 * 8;
++		return 3750000000U;
+ 	else if (gadget_is_dualspeed(g) && g->speed == USB_SPEED_HIGH)
+ 		return 13 * 512 * 8 * 1000 * 8;
+ 	else
+diff --git a/drivers/usb/mtu3/mtu3_debugfs.c b/drivers/usb/mtu3/mtu3_debugfs.c
+index fdeade6254aec..7537bfd651af6 100644
+--- a/drivers/usb/mtu3/mtu3_debugfs.c
++++ b/drivers/usb/mtu3/mtu3_debugfs.c
+@@ -127,7 +127,7 @@ static void mtu3_debugfs_regset(struct mtu3 *mtu, void __iomem *base,
+ 	struct debugfs_regset32 *regset;
+ 	struct mtu3_regset *mregs;
+ 
+-	mregs = devm_kzalloc(mtu->dev, sizeof(*regset), GFP_KERNEL);
++	mregs = devm_kzalloc(mtu->dev, sizeof(*mregs), GFP_KERNEL);
+ 	if (!mregs)
+ 		return;
+ 
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 56d6f6d83bd78..2c21e34235bbb 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -563,6 +563,9 @@ static void option_instat_callback(struct urb *urb);
+ 
+ /* Device flags */
+ 
++/* Highest interface number which can be used with NCTRL() and RSVD() */
++#define FLAG_IFNUM_MAX	7
++
+ /* Interface does not support modem-control requests */
+ #define NCTRL(ifnum)	((BIT(ifnum) & 0xff) << 8)
+ 
+@@ -2101,6 +2104,14 @@ static struct usb_serial_driver * const serial_drivers[] = {
+ 
+ module_usb_serial_driver(serial_drivers, option_ids);
+ 
++static bool iface_is_reserved(unsigned long device_flags, u8 ifnum)
++{
++	if (ifnum > FLAG_IFNUM_MAX)
++		return false;
++
++	return device_flags & RSVD(ifnum);
++}
++
+ static int option_probe(struct usb_serial *serial,
+ 			const struct usb_device_id *id)
+ {
+@@ -2117,7 +2128,7 @@ static int option_probe(struct usb_serial *serial,
+ 	 * the same class/subclass/protocol as the serial interfaces.  Look at
+ 	 * the Windows driver .INF files for reserved interface numbers.
+ 	 */
+-	if (device_flags & RSVD(iface_desc->bInterfaceNumber))
++	if (iface_is_reserved(device_flags, iface_desc->bInterfaceNumber))
+ 		return -ENODEV;
+ 
+ 	/*
+@@ -2133,6 +2144,14 @@ static int option_probe(struct usb_serial *serial,
+ 	return 0;
+ }
+ 
++static bool iface_no_modem_control(unsigned long device_flags, u8 ifnum)
++{
++	if (ifnum > FLAG_IFNUM_MAX)
++		return false;
++
++	return device_flags & NCTRL(ifnum);
++}
++
+ static int option_attach(struct usb_serial *serial)
+ {
+ 	struct usb_interface_descriptor *iface_desc;
+@@ -2148,7 +2167,7 @@ static int option_attach(struct usb_serial *serial)
+ 
+ 	iface_desc = &serial->interface->cur_altsetting->desc;
+ 
+-	if (!(device_flags & NCTRL(iface_desc->bInterfaceNumber)))
++	if (!iface_no_modem_control(device_flags, iface_desc->bInterfaceNumber))
+ 		data->use_send_setup = 1;
+ 
+ 	if (device_flags & ZLP)
+diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
+index 4f5806a3b73d7..322ecae9a7580 100644
+--- a/fs/crypto/fscrypt_private.h
++++ b/fs/crypto/fscrypt_private.h
+@@ -25,6 +25,9 @@
+ #define FSCRYPT_CONTEXT_V1	1
+ #define FSCRYPT_CONTEXT_V2	2
+ 
++/* Keep this in sync with include/uapi/linux/fscrypt.h */
++#define FSCRYPT_MODE_MAX	FSCRYPT_MODE_ADIANTUM
++
+ struct fscrypt_context_v1 {
+ 	u8 version; /* FSCRYPT_CONTEXT_V1 */
+ 	u8 contents_encryption_mode;
+@@ -491,9 +494,9 @@ struct fscrypt_master_key {
+ 	 * Per-mode encryption keys for the various types of encryption policies
+ 	 * that use them.  Allocated and derived on-demand.
+ 	 */
+-	struct fscrypt_prepared_key mk_direct_keys[__FSCRYPT_MODE_MAX + 1];
+-	struct fscrypt_prepared_key mk_iv_ino_lblk_64_keys[__FSCRYPT_MODE_MAX + 1];
+-	struct fscrypt_prepared_key mk_iv_ino_lblk_32_keys[__FSCRYPT_MODE_MAX + 1];
++	struct fscrypt_prepared_key mk_direct_keys[FSCRYPT_MODE_MAX + 1];
++	struct fscrypt_prepared_key mk_iv_ino_lblk_64_keys[FSCRYPT_MODE_MAX + 1];
++	struct fscrypt_prepared_key mk_iv_ino_lblk_32_keys[FSCRYPT_MODE_MAX + 1];
+ 
+ 	/* Hash key for inode numbers.  Initialized only when needed. */
+ 	siphash_key_t		mk_ino_hash_key;
+diff --git a/fs/crypto/hooks.c b/fs/crypto/hooks.c
+index 20b0df47fe6ab..061418be4b086 100644
+--- a/fs/crypto/hooks.c
++++ b/fs/crypto/hooks.c
+@@ -61,7 +61,7 @@ int __fscrypt_prepare_link(struct inode *inode, struct inode *dir,
+ 		return err;
+ 
+ 	/* ... in case we looked up no-key name before key was added */
+-	if (dentry->d_flags & DCACHE_NOKEY_NAME)
++	if (fscrypt_is_nokey_name(dentry))
+ 		return -ENOKEY;
+ 
+ 	if (!fscrypt_has_permitted_context(dir, inode))
+@@ -86,7 +86,8 @@ int __fscrypt_prepare_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		return err;
+ 
+ 	/* ... in case we looked up no-key name(s) before key was added */
+-	if ((old_dentry->d_flags | new_dentry->d_flags) & DCACHE_NOKEY_NAME)
++	if (fscrypt_is_nokey_name(old_dentry) ||
++	    fscrypt_is_nokey_name(new_dentry))
+ 		return -ENOKEY;
+ 
+ 	if (old_dir != new_dir) {
+diff --git a/fs/crypto/keyring.c b/fs/crypto/keyring.c
+index 53cc552a7b8fd..d7ec52cb3d9af 100644
+--- a/fs/crypto/keyring.c
++++ b/fs/crypto/keyring.c
+@@ -44,7 +44,7 @@ static void free_master_key(struct fscrypt_master_key *mk)
+ 
+ 	wipe_master_key_secret(&mk->mk_secret);
+ 
+-	for (i = 0; i <= __FSCRYPT_MODE_MAX; i++) {
++	for (i = 0; i <= FSCRYPT_MODE_MAX; i++) {
+ 		fscrypt_destroy_prepared_key(&mk->mk_direct_keys[i]);
+ 		fscrypt_destroy_prepared_key(&mk->mk_iv_ino_lblk_64_keys[i]);
+ 		fscrypt_destroy_prepared_key(&mk->mk_iv_ino_lblk_32_keys[i]);
+diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
+index d595abb8ef90d..31fb08d94f874 100644
+--- a/fs/crypto/keysetup.c
++++ b/fs/crypto/keysetup.c
+@@ -56,6 +56,8 @@ static struct fscrypt_mode *
+ select_encryption_mode(const union fscrypt_policy *policy,
+ 		       const struct inode *inode)
+ {
++	BUILD_BUG_ON(ARRAY_SIZE(fscrypt_modes) != FSCRYPT_MODE_MAX + 1);
++
+ 	if (S_ISREG(inode->i_mode))
+ 		return &fscrypt_modes[fscrypt_policy_contents_mode(policy)];
+ 
+@@ -168,7 +170,7 @@ static int setup_per_mode_enc_key(struct fscrypt_info *ci,
+ 	unsigned int hkdf_infolen = 0;
+ 	int err;
+ 
+-	if (WARN_ON(mode_num > __FSCRYPT_MODE_MAX))
++	if (WARN_ON(mode_num > FSCRYPT_MODE_MAX))
+ 		return -EINVAL;
+ 
+ 	prep_key = &keys[mode_num];
+diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
+index 4441d9944b9ef..faa0f21daa684 100644
+--- a/fs/crypto/policy.c
++++ b/fs/crypto/policy.c
+@@ -175,7 +175,10 @@ static bool fscrypt_supported_v2_policy(const struct fscrypt_policy_v2 *policy,
+ 		return false;
+ 	}
+ 
+-	if (policy->flags & ~FSCRYPT_POLICY_FLAGS_VALID) {
++	if (policy->flags & ~(FSCRYPT_POLICY_FLAGS_PAD_MASK |
++			      FSCRYPT_POLICY_FLAG_DIRECT_KEY |
++			      FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 |
++			      FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32)) {
+ 		fscrypt_warn(inode, "Unsupported encryption flags (0x%02x)",
+ 			     policy->flags);
+ 		return false;
+diff --git a/fs/exfat/nls.c b/fs/exfat/nls.c
+index 675d0e7058c5a..314d5407a1be5 100644
+--- a/fs/exfat/nls.c
++++ b/fs/exfat/nls.c
+@@ -659,7 +659,7 @@ static int exfat_load_upcase_table(struct super_block *sb,
+ 	unsigned char skip = false;
+ 	unsigned short *upcase_table;
+ 
+-	upcase_table = kcalloc(UTBL_COUNT, sizeof(unsigned short), GFP_KERNEL);
++	upcase_table = kvcalloc(UTBL_COUNT, sizeof(unsigned short), GFP_KERNEL);
+ 	if (!upcase_table)
+ 		return -ENOMEM;
+ 
+@@ -715,7 +715,7 @@ static int exfat_load_default_upcase_table(struct super_block *sb)
+ 	unsigned short uni = 0, *upcase_table;
+ 	unsigned int index = 0;
+ 
+-	upcase_table = kcalloc(UTBL_COUNT, sizeof(unsigned short), GFP_KERNEL);
++	upcase_table = kvcalloc(UTBL_COUNT, sizeof(unsigned short), GFP_KERNEL);
+ 	if (!upcase_table)
+ 		return -ENOMEM;
+ 
+@@ -803,5 +803,5 @@ load_default:
+ 
+ void exfat_free_upcase_table(struct exfat_sb_info *sbi)
+ {
+-	kfree(sbi->vol_utbl);
++	kvfree(sbi->vol_utbl);
+ }
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 33509266f5a00..793fc7db9d28f 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2195,6 +2195,9 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 	if (!dentry->d_name.len)
+ 		return -EINVAL;
+ 
++	if (fscrypt_is_nokey_name(dentry))
++		return -ENOKEY;
++
+ #ifdef CONFIG_UNICODE
+ 	if (sb_has_strict_encoding(sb) && IS_CASEFOLDED(dir) &&
+ 	    sb->s_encoding && utf8_validate(sb->s_encoding, &dentry->d_name))
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index cb700d7972968..9a321c52facec 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3251,6 +3251,8 @@ bool f2fs_empty_dir(struct inode *dir);
+ 
+ static inline int f2fs_add_link(struct dentry *dentry, struct inode *inode)
+ {
++	if (fscrypt_is_nokey_name(dentry))
++		return -ENOKEY;
+ 	return f2fs_do_add_link(d_inode(dentry->d_parent), &dentry->d_name,
+ 				inode, inode->i_ino, inode->i_mode);
+ }
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index ee861c6d9ff02..fe39e591e5b4c 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -412,9 +412,14 @@ static loff_t f2fs_seek_block(struct file *file, loff_t offset, int whence)
+ 		goto fail;
+ 
+ 	/* handle inline data case */
+-	if (f2fs_has_inline_data(inode) && whence == SEEK_HOLE) {
+-		data_ofs = isize;
+-		goto found;
++	if (f2fs_has_inline_data(inode)) {
++		if (whence == SEEK_HOLE) {
++			data_ofs = isize;
++			goto found;
++		} else if (whence == SEEK_DATA) {
++			data_ofs = offset;
++			goto found;
++		}
+ 	}
+ 
+ 	pgofs = (pgoff_t)(offset >> PAGE_SHIFT);
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 1596502f7375c..f2a4265318f5c 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -4544,7 +4544,7 @@ static void init_dirty_segmap(struct f2fs_sb_info *sbi)
+ 		return;
+ 
+ 	mutex_lock(&dirty_i->seglist_lock);
+-	for (segno = 0; segno < MAIN_SECS(sbi); segno += blks_per_sec) {
++	for (segno = 0; segno < MAIN_SEGS(sbi); segno += sbi->segs_per_sec) {
+ 		valid_blocks = get_valid_blocks(sbi, segno, true);
+ 		secno = GET_SEC_FROM_SEG(sbi, segno);
+ 
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index bb02989d92b61..4f13734637660 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -2455,7 +2455,7 @@ int dquot_resume(struct super_block *sb, int type)
+ 		ret = dquot_load_quota_sb(sb, cnt, dqopt->info[cnt].dqi_fmt_id,
+ 					  flags);
+ 		if (ret < 0)
+-			vfs_cleanup_quota_inode(sb, type);
++			vfs_cleanup_quota_inode(sb, cnt);
+ 	}
+ 
+ 	return ret;
+diff --git a/fs/quota/quota_v2.c b/fs/quota/quota_v2.c
+index e69a2bfdd81c0..c21106557a37e 100644
+--- a/fs/quota/quota_v2.c
++++ b/fs/quota/quota_v2.c
+@@ -157,6 +157,25 @@ static int v2_read_file_info(struct super_block *sb, int type)
+ 		qinfo->dqi_entry_size = sizeof(struct v2r1_disk_dqblk);
+ 		qinfo->dqi_ops = &v2r1_qtree_ops;
+ 	}
++	ret = -EUCLEAN;
++	/* Some sanity checks of the read headers... */
++	if ((loff_t)qinfo->dqi_blocks << qinfo->dqi_blocksize_bits >
++	    i_size_read(sb_dqopt(sb)->files[type])) {
++		quota_error(sb, "Number of blocks too big for quota file size (%llu > %llu).",
++		    (loff_t)qinfo->dqi_blocks << qinfo->dqi_blocksize_bits,
++		    i_size_read(sb_dqopt(sb)->files[type]));
++		goto out;
++	}
++	if (qinfo->dqi_free_blk >= qinfo->dqi_blocks) {
++		quota_error(sb, "Free block number too big (%u >= %u).",
++			    qinfo->dqi_free_blk, qinfo->dqi_blocks);
++		goto out;
++	}
++	if (qinfo->dqi_free_entry >= qinfo->dqi_blocks) {
++		quota_error(sb, "Block with free entry too big (%u >= %u).",
++			    qinfo->dqi_free_entry, qinfo->dqi_blocks);
++		goto out;
++	}
+ 	ret = 0;
+ out:
+ 	up_read(&dqopt->dqio_sem);
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 155521e51ac57..08fde777c3247 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -270,6 +270,15 @@ done:
+ 	return d_splice_alias(inode, dentry);
+ }
+ 
++static int ubifs_prepare_create(struct inode *dir, struct dentry *dentry,
++				struct fscrypt_name *nm)
++{
++	if (fscrypt_is_nokey_name(dentry))
++		return -ENOKEY;
++
++	return fscrypt_setup_filename(dir, &dentry->d_name, 0, nm);
++}
++
+ static int ubifs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
+ 			bool excl)
+ {
+@@ -293,7 +302,7 @@ static int ubifs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
+ 	if (err)
+ 		return err;
+ 
+-	err = fscrypt_setup_filename(dir, &dentry->d_name, 0, &nm);
++	err = ubifs_prepare_create(dir, dentry, &nm);
+ 	if (err)
+ 		goto out_budg;
+ 
+@@ -953,7 +962,7 @@ static int ubifs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+ 	if (err)
+ 		return err;
+ 
+-	err = fscrypt_setup_filename(dir, &dentry->d_name, 0, &nm);
++	err = ubifs_prepare_create(dir, dentry, &nm);
+ 	if (err)
+ 		goto out_budg;
+ 
+@@ -1038,7 +1047,7 @@ static int ubifs_mknod(struct inode *dir, struct dentry *dentry,
+ 		return err;
+ 	}
+ 
+-	err = fscrypt_setup_filename(dir, &dentry->d_name, 0, &nm);
++	err = ubifs_prepare_create(dir, dentry, &nm);
+ 	if (err) {
+ 		kfree(dev);
+ 		goto out_budg;
+@@ -1122,7 +1131,7 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ 	if (err)
+ 		return err;
+ 
+-	err = fscrypt_setup_filename(dir, &dentry->d_name, 0, &nm);
++	err = ubifs_prepare_create(dir, dentry, &nm);
+ 	if (err)
+ 		goto out_budg;
+ 
+diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
+index a8f7a43f031bd..8e1d31c959bfa 100644
+--- a/include/linux/fscrypt.h
++++ b/include/linux/fscrypt.h
+@@ -111,6 +111,35 @@ static inline void fscrypt_handle_d_move(struct dentry *dentry)
+ 	dentry->d_flags &= ~DCACHE_NOKEY_NAME;
+ }
+ 
++/**
++ * fscrypt_is_nokey_name() - test whether a dentry is a no-key name
++ * @dentry: the dentry to check
++ *
++ * This returns true if the dentry is a no-key dentry.  A no-key dentry is a
++ * dentry that was created in an encrypted directory that hasn't had its
++ * encryption key added yet.  Such dentries may be either positive or negative.
++ *
++ * When a filesystem is asked to create a new filename in an encrypted directory
++ * and the new filename's dentry is a no-key dentry, it must fail the operation
++ * with ENOKEY.  This includes ->create(), ->mkdir(), ->mknod(), ->symlink(),
++ * ->rename(), and ->link().  (However, ->rename() and ->link() are already
++ * handled by fscrypt_prepare_rename() and fscrypt_prepare_link().)
++ *
++ * This is necessary because creating a filename requires the directory's
++ * encryption key, but just checking for the key on the directory inode during
++ * the final filesystem operation doesn't guarantee that the key was available
++ * during the preceding dentry lookup.  And the key must have already been
++ * available during the dentry lookup in order for it to have been checked
++ * whether the filename already exists in the directory and for the new file's
++ * dentry not to be invalidated due to it incorrectly having the no-key flag.
++ *
++ * Return: %true if the dentry is a no-key name
++ */
++static inline bool fscrypt_is_nokey_name(const struct dentry *dentry)
++{
++	return dentry->d_flags & DCACHE_NOKEY_NAME;
++}
++
+ /* crypto.c */
+ void fscrypt_enqueue_decrypt_work(struct work_struct *);
+ 
+@@ -244,6 +273,11 @@ static inline void fscrypt_handle_d_move(struct dentry *dentry)
+ {
+ }
+ 
++static inline bool fscrypt_is_nokey_name(const struct dentry *dentry)
++{
++	return false;
++}
++
+ /* crypto.c */
+ static inline void fscrypt_enqueue_decrypt_work(struct work_struct *work)
+ {
+diff --git a/include/uapi/linux/fscrypt.h b/include/uapi/linux/fscrypt.h
+index e5de603369381..9f4428be3e362 100644
+--- a/include/uapi/linux/fscrypt.h
++++ b/include/uapi/linux/fscrypt.h
+@@ -20,7 +20,6 @@
+ #define FSCRYPT_POLICY_FLAG_DIRECT_KEY		0x04
+ #define FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64	0x08
+ #define FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32	0x10
+-#define FSCRYPT_POLICY_FLAGS_VALID		0x1F
+ 
+ /* Encryption algorithms */
+ #define FSCRYPT_MODE_AES_256_XTS		1
+@@ -28,7 +27,7 @@
+ #define FSCRYPT_MODE_AES_128_CBC		5
+ #define FSCRYPT_MODE_AES_128_CTS		6
+ #define FSCRYPT_MODE_ADIANTUM			9
+-#define __FSCRYPT_MODE_MAX			9
++/* If adding a mode number > 9, update FSCRYPT_MODE_MAX in fscrypt_private.h */
+ 
+ /*
+  * Legacy policy version; ad-hoc KDF and no key verification.
+@@ -177,7 +176,7 @@ struct fscrypt_get_key_status_arg {
+ #define FS_POLICY_FLAGS_PAD_32		FSCRYPT_POLICY_FLAGS_PAD_32
+ #define FS_POLICY_FLAGS_PAD_MASK	FSCRYPT_POLICY_FLAGS_PAD_MASK
+ #define FS_POLICY_FLAG_DIRECT_KEY	FSCRYPT_POLICY_FLAG_DIRECT_KEY
+-#define FS_POLICY_FLAGS_VALID		FSCRYPT_POLICY_FLAGS_VALID
++#define FS_POLICY_FLAGS_VALID		0x07	/* contains old flags only */
+ #define FS_ENCRYPTION_MODE_INVALID	0	/* never used */
+ #define FS_ENCRYPTION_MODE_AES_256_XTS	FSCRYPT_MODE_AES_256_XTS
+ #define FS_ENCRYPTION_MODE_AES_256_GCM	2	/* never used */
+diff --git a/include/uapi/linux/if_alg.h b/include/uapi/linux/if_alg.h
+index 60b7c2efd921c..dc52a11ba6d15 100644
+--- a/include/uapi/linux/if_alg.h
++++ b/include/uapi/linux/if_alg.h
+@@ -24,6 +24,22 @@ struct sockaddr_alg {
+ 	__u8	salg_name[64];
+ };
+ 
++/*
++ * Linux v4.12 and later removed the 64-byte limit on salg_name[]; it's now an
++ * arbitrary-length field.  We had to keep the original struct above for source
++ * compatibility with existing userspace programs, though.  Use the new struct
++ * below if support for very long algorithm names is needed.  To do this,
++ * allocate 'sizeof(struct sockaddr_alg_new) + strlen(algname) + 1' bytes, and
++ * copy algname (including the null terminator) into salg_name.
++ */
++struct sockaddr_alg_new {
++	__u16	salg_family;
++	__u8	salg_type[14];
++	__u32	salg_feat;
++	__u32	salg_mask;
++	__u8	salg_name[];
++};
++
+ struct af_alg_iv {
+ 	__u32	ivlen;
+ 	__u8	iv[0];
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index f04963914366e..cbdf2a5559754 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5868,21 +5868,19 @@ static void hci_le_direct_adv_report_evt(struct hci_dev *hdev,
+ 					 struct sk_buff *skb)
+ {
+ 	u8 num_reports = skb->data[0];
+-	void *ptr = &skb->data[1];
++	struct hci_ev_le_direct_adv_info *ev = (void *)&skb->data[1];
+ 
+-	hci_dev_lock(hdev);
++	if (!num_reports || skb->len < num_reports * sizeof(*ev) + 1)
++		return;
+ 
+-	while (num_reports--) {
+-		struct hci_ev_le_direct_adv_info *ev = ptr;
++	hci_dev_lock(hdev);
+ 
++	for (; num_reports; num_reports--, ev++)
+ 		process_adv_report(hdev, ev->evt_type, &ev->bdaddr,
+ 				   ev->bdaddr_type, &ev->direct_addr,
+ 				   ev->direct_addr_type, ev->rssi, NULL, 0,
+ 				   false);
+ 
+-		ptr += sizeof(*ev);
+-	}
+-
+ 	hci_dev_unlock(hdev);
+ }
+ 
+diff --git a/net/ipv4/ipconfig.c b/net/ipv4/ipconfig.c
+index 561f15b5a944e..3cd13e1bc6a70 100644
+--- a/net/ipv4/ipconfig.c
++++ b/net/ipv4/ipconfig.c
+@@ -1441,7 +1441,7 @@ static int __init ip_auto_config(void)
+ 	int retries = CONF_OPEN_RETRIES;
+ #endif
+ 	int err;
+-	unsigned int i;
++	unsigned int i, count;
+ 
+ 	/* Initialise all name servers and NTP servers to NONE (but only if the
+ 	 * "ip=" or "nfsaddrs=" kernel command line parameters weren't decoded,
+@@ -1575,7 +1575,7 @@ static int __init ip_auto_config(void)
+ 	if (ic_dev_mtu)
+ 		pr_cont(", mtu=%d", ic_dev_mtu);
+ 	/* Name servers (if any): */
+-	for (i = 0; i < CONF_NAMESERVERS_MAX; i++) {
++	for (i = 0, count = 0; i < CONF_NAMESERVERS_MAX; i++) {
+ 		if (ic_nameservers[i] != NONE) {
+ 			if (i == 0)
+ 				pr_info("     nameserver%u=%pI4",
+@@ -1583,12 +1583,14 @@ static int __init ip_auto_config(void)
+ 			else
+ 				pr_cont(", nameserver%u=%pI4",
+ 					i, &ic_nameservers[i]);
++
++			count++;
+ 		}
+-		if (i + 1 == CONF_NAMESERVERS_MAX)
++		if ((i + 1 == CONF_NAMESERVERS_MAX) && count > 0)
+ 			pr_cont("\n");
+ 	}
+ 	/* NTP servers (if any): */
+-	for (i = 0; i < CONF_NTP_SERVERS_MAX; i++) {
++	for (i = 0, count = 0; i < CONF_NTP_SERVERS_MAX; i++) {
+ 		if (ic_ntp_servers[i] != NONE) {
+ 			if (i == 0)
+ 				pr_info("     ntpserver%u=%pI4",
+@@ -1596,8 +1598,10 @@ static int __init ip_auto_config(void)
+ 			else
+ 				pr_cont(", ntpserver%u=%pI4",
+ 					i, &ic_ntp_servers[i]);
++
++			count++;
+ 		}
+-		if (i + 1 == CONF_NTP_SERVERS_MAX)
++		if ((i + 1 == CONF_NTP_SERVERS_MAX) && count > 0)
+ 			pr_cont("\n");
+ 	}
+ #endif /* !SILENT */
+diff --git a/net/wireless/core.h b/net/wireless/core.h
+index e3e9686859d45..7df91f9402124 100644
+--- a/net/wireless/core.h
++++ b/net/wireless/core.h
+@@ -433,6 +433,8 @@ void cfg80211_sme_abandon_assoc(struct wireless_dev *wdev);
+ 
+ /* internal helpers */
+ bool cfg80211_supported_cipher_suite(struct wiphy *wiphy, u32 cipher);
++bool cfg80211_valid_key_idx(struct cfg80211_registered_device *rdev,
++			    int key_idx, bool pairwise);
+ int cfg80211_validate_key_settings(struct cfg80211_registered_device *rdev,
+ 				   struct key_params *params, int key_idx,
+ 				   bool pairwise, const u8 *mac_addr);
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index f67ddf2cebcbe..535e34a84d651 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -4260,9 +4260,6 @@ static int nl80211_del_key(struct sk_buff *skb, struct genl_info *info)
+ 	if (err)
+ 		return err;
+ 
+-	if (key.idx < 0)
+-		return -EINVAL;
+-
+ 	if (info->attrs[NL80211_ATTR_MAC])
+ 		mac_addr = nla_data(info->attrs[NL80211_ATTR_MAC]);
+ 
+@@ -4278,6 +4275,10 @@ static int nl80211_del_key(struct sk_buff *skb, struct genl_info *info)
+ 	    key.type != NL80211_KEYTYPE_GROUP)
+ 		return -EINVAL;
+ 
++	if (!cfg80211_valid_key_idx(rdev, key.idx,
++				    key.type == NL80211_KEYTYPE_PAIRWISE))
++		return -EINVAL;
++
+ 	if (!rdev->ops->del_key)
+ 		return -EOPNOTSUPP;
+ 
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index f01746894a4e9..e4247c3543566 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -272,18 +272,53 @@ bool cfg80211_supported_cipher_suite(struct wiphy *wiphy, u32 cipher)
+ 	return false;
+ }
+ 
+-int cfg80211_validate_key_settings(struct cfg80211_registered_device *rdev,
+-				   struct key_params *params, int key_idx,
+-				   bool pairwise, const u8 *mac_addr)
++static bool
++cfg80211_igtk_cipher_supported(struct cfg80211_registered_device *rdev)
+ {
+-	int max_key_idx = 5;
++	struct wiphy *wiphy = &rdev->wiphy;
++	int i;
++
++	for (i = 0; i < wiphy->n_cipher_suites; i++) {
++		switch (wiphy->cipher_suites[i]) {
++		case WLAN_CIPHER_SUITE_AES_CMAC:
++		case WLAN_CIPHER_SUITE_BIP_CMAC_256:
++		case WLAN_CIPHER_SUITE_BIP_GMAC_128:
++		case WLAN_CIPHER_SUITE_BIP_GMAC_256:
++			return true;
++		}
++	}
++
++	return false;
++}
+ 
+-	if (wiphy_ext_feature_isset(&rdev->wiphy,
+-				    NL80211_EXT_FEATURE_BEACON_PROTECTION) ||
+-	    wiphy_ext_feature_isset(&rdev->wiphy,
+-				    NL80211_EXT_FEATURE_BEACON_PROTECTION_CLIENT))
++bool cfg80211_valid_key_idx(struct cfg80211_registered_device *rdev,
++			    int key_idx, bool pairwise)
++{
++	int max_key_idx;
++
++	if (pairwise)
++		max_key_idx = 3;
++	else if (wiphy_ext_feature_isset(&rdev->wiphy,
++					 NL80211_EXT_FEATURE_BEACON_PROTECTION) ||
++		 wiphy_ext_feature_isset(&rdev->wiphy,
++					 NL80211_EXT_FEATURE_BEACON_PROTECTION_CLIENT))
+ 		max_key_idx = 7;
++	else if (cfg80211_igtk_cipher_supported(rdev))
++		max_key_idx = 5;
++	else
++		max_key_idx = 3;
++
+ 	if (key_idx < 0 || key_idx > max_key_idx)
++		return false;
++
++	return true;
++}
++
++int cfg80211_validate_key_settings(struct cfg80211_registered_device *rdev,
++				   struct key_params *params, int key_idx,
++				   bool pairwise, const u8 *mac_addr)
++{
++	if (!cfg80211_valid_key_idx(rdev, key_idx, pairwise))
+ 		return -EINVAL;
+ 
+ 	if (!pairwise && mac_addr && !(rdev->wiphy.flags & WIPHY_FLAG_IBSS_RSN))


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2020-12-26 15:32 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2020-12-26 15:32 UTC (permalink / raw
  To: gentoo-commits

commit:     ea100a00540a07717d77258086cc0c7ae5961b9c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 26 15:32:23 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Dec 26 15:32:23 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ea100a00

Remove redundant f2fs patch.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |  4 --
 1900_f2fs-seek-data-offset-inline-data.patch | 65 ----------------------------
 2 files changed, 69 deletions(-)

diff --git a/0000_README b/0000_README
index 290bc2e..025c3da 100644
--- a/0000_README
+++ b/0000_README
@@ -63,10 +63,6 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
-Patch:  1900_f2fs-seek-data-offset-inline-data.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=7a6e59d719ef0ec9b3d765cba3ba98ee585cbde3
-Desc:   f2fs: fix to seek incorrect data offset in inline data file
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1900_f2fs-seek-data-offset-inline-data.patch b/1900_f2fs-seek-data-offset-inline-data.patch
deleted file mode 100644
index 28b00eb..0000000
--- a/1900_f2fs-seek-data-offset-inline-data.patch
+++ /dev/null
@@ -1,65 +0,0 @@
-From 7a6e59d719ef0ec9b3d765cba3ba98ee585cbde3 Mon Sep 17 00:00:00 2001
-From: Chao Yu <yuchao0@huawei.com>
-Date: Mon, 2 Nov 2020 17:36:58 +0800
-Subject: f2fs: fix to seek incorrect data offset in inline data file
-
-As kitestramuort reported:
-
-F2FS-fs (nvme0n1p4): access invalid blkaddr:1598541474
-[   25.725898] ------------[ cut here ]------------
-[   25.725903] WARNING: CPU: 6 PID: 2018 at f2fs_is_valid_blkaddr+0x23a/0x250
-[   25.725923] Call Trace:
-[   25.725927]  ? f2fs_llseek+0x204/0x620
-[   25.725929]  ? ovl_copy_up_data+0x14f/0x200
-[   25.725931]  ? ovl_copy_up_inode+0x174/0x1e0
-[   25.725933]  ? ovl_copy_up_one+0xa22/0xdf0
-[   25.725936]  ? ovl_copy_up_flags+0xa6/0xf0
-[   25.725938]  ? ovl_aio_cleanup_handler+0xd0/0xd0
-[   25.725939]  ? ovl_maybe_copy_up+0x86/0xa0
-[   25.725941]  ? ovl_open+0x22/0x80
-[   25.725943]  ? do_dentry_open+0x136/0x350
-[   25.725945]  ? path_openat+0xb7e/0xf40
-[   25.725947]  ? __check_sticky+0x40/0x40
-[   25.725948]  ? do_filp_open+0x70/0x100
-[   25.725950]  ? __check_sticky+0x40/0x40
-[   25.725951]  ? __check_sticky+0x40/0x40
-[   25.725953]  ? __x64_sys_openat+0x1db/0x2c0
-[   25.725955]  ? do_syscall_64+0x2d/0x40
-[   25.725957]  ? entry_SYSCALL_64_after_hwframe+0x44/0xa9
-
-llseek() reports invalid block address access, the root cause is if
-file has inline data, f2fs_seek_block() will access inline data regard
-as block address index in inode block, which should be wrong, fix it.
-
-Reported-by: kitestramuort <kitestramuort@autistici.org>
-Signed-off-by: Chao Yu <yuchao0@huawei.com>
-Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
----
- fs/f2fs/file.c | 11 ++++++++---
- 1 file changed, 8 insertions(+), 3 deletions(-)
-
-diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
-index ee861c6d9ff02..fe39e591e5b4c 100644
---- a/fs/f2fs/file.c
-+++ b/fs/f2fs/file.c
-@@ -412,9 +412,14 @@ static loff_t f2fs_seek_block(struct file *file, loff_t offset, int whence)
- 		goto fail;
- 
- 	/* handle inline data case */
--	if (f2fs_has_inline_data(inode) && whence == SEEK_HOLE) {
--		data_ofs = isize;
--		goto found;
-+	if (f2fs_has_inline_data(inode)) {
-+		if (whence == SEEK_HOLE) {
-+			data_ofs = isize;
-+			goto found;
-+		} else if (whence == SEEK_DATA) {
-+			data_ofs = offset;
-+			goto found;
-+		}
- 	}
- 
- 	pgofs = (pgoff_t)(offset >> PAGE_SHIFT);
--- 
-cgit 1.2.3-1.el7
-


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2020-12-30 12:54 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2020-12-30 12:54 UTC (permalink / raw
  To: gentoo-commits

commit:     b6de6417a82978446b2e3e3bec49271f305452e6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 30 12:54:04 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec 30 12:54:04 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b6de6417

Linux patch 5.10.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1003_linux-5.10.4.patch | 23858 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 23862 insertions(+)

diff --git a/0000_README b/0000_README
index 025c3da..ce1d3f7 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-5.10.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.3
 
+Patch:  1003_linux-5.10.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-5.10.4.patch b/1003_linux-5.10.4.patch
new file mode 100644
index 0000000..a623431
--- /dev/null
+++ b/1003_linux-5.10.4.patch
@@ -0,0 +1,23858 @@
+diff --git a/Documentation/locking/seqlock.rst b/Documentation/locking/seqlock.rst
+index a334b584f2b34..64405e5da63e4 100644
+--- a/Documentation/locking/seqlock.rst
++++ b/Documentation/locking/seqlock.rst
+@@ -89,7 +89,7 @@ Read path::
+ 
+ .. _seqcount_locktype_t:
+ 
+-Sequence counters with associated locks (``seqcount_LOCKTYPE_t``)
++Sequence counters with associated locks (``seqcount_LOCKNAME_t``)
+ -----------------------------------------------------------------
+ 
+ As discussed at :ref:`seqcount_t`, sequence count write side critical
+@@ -115,27 +115,26 @@ The following sequence counters with associated locks are defined:
+   - ``seqcount_mutex_t``
+   - ``seqcount_ww_mutex_t``
+ 
+-The plain seqcount read and write APIs branch out to the specific
+-seqcount_LOCKTYPE_t implementation at compile-time. This avoids kernel
+-API explosion per each new seqcount LOCKTYPE.
++The sequence counter read and write APIs can take either a plain
++seqcount_t or any of the seqcount_LOCKNAME_t variants above.
+ 
+-Initialization (replace "LOCKTYPE" with one of the supported locks)::
++Initialization (replace "LOCKNAME" with one of the supported locks)::
+ 
+ 	/* dynamic */
+-	seqcount_LOCKTYPE_t foo_seqcount;
+-	seqcount_LOCKTYPE_init(&foo_seqcount, &lock);
++	seqcount_LOCKNAME_t foo_seqcount;
++	seqcount_LOCKNAME_init(&foo_seqcount, &lock);
+ 
+ 	/* static */
+-	static seqcount_LOCKTYPE_t foo_seqcount =
+-		SEQCNT_LOCKTYPE_ZERO(foo_seqcount, &lock);
++	static seqcount_LOCKNAME_t foo_seqcount =
++		SEQCNT_LOCKNAME_ZERO(foo_seqcount, &lock);
+ 
+ 	/* C99 struct init */
+ 	struct {
+-		.seq   = SEQCNT_LOCKTYPE_ZERO(foo.seq, &lock),
++		.seq   = SEQCNT_LOCKNAME_ZERO(foo.seq, &lock),
+ 	} foo;
+ 
+ Write path: same as in :ref:`seqcount_t`, while running from a context
+-with the associated LOCKTYPE lock acquired.
++with the associated write serialization lock acquired.
+ 
+ Read path: same as in :ref:`seqcount_t`.
+ 
+diff --git a/Documentation/x86/topology.rst b/Documentation/x86/topology.rst
+index e29739904e37e..7f58010ea86af 100644
+--- a/Documentation/x86/topology.rst
++++ b/Documentation/x86/topology.rst
+@@ -41,6 +41,8 @@ Package
+ Packages contain a number of cores plus shared resources, e.g. DRAM
+ controller, shared caches etc.
+ 
++Modern systems may also use the term 'Die' for package.
++
+ AMD nomenclature for package is 'Node'.
+ 
+ Package-related topology information in the kernel:
+@@ -53,11 +55,18 @@ Package-related topology information in the kernel:
+ 
+     The number of dies in a package. This information is retrieved via CPUID.
+ 
++  - cpuinfo_x86.cpu_die_id:
++
++    The physical ID of the die. This information is retrieved via CPUID.
++
+   - cpuinfo_x86.phys_proc_id:
+ 
+     The physical ID of the package. This information is retrieved via CPUID
+     and deduced from the APIC IDs of the cores in the package.
+ 
++    Modern systems use this value for the socket. There may be multiple
++    packages within a socket. This value may differ from cpu_die_id.
++
+   - cpuinfo_x86.logical_proc_id:
+ 
+     The logical ID of the package. As we do not trust BIOSes to enumerate the
+diff --git a/Makefile b/Makefile
+index a72bc404123d5..1e50d6af932ab 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index ba4e966484ab5..ddd4641446bdd 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -143,6 +143,22 @@ config UPROBES
+ 	    managed by the kernel and kept transparent to the probed
+ 	    application. )
+ 
++config HAVE_64BIT_ALIGNED_ACCESS
++	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
++	help
++	  Some architectures require 64 bit accesses to be 64 bit
++	  aligned, which also requires structs containing 64 bit values
++	  to be 64 bit aligned too. This includes some 32 bit
++	  architectures which can do 64 bit accesses, as well as 64 bit
++	  architectures without unaligned access.
++
++	  This symbol should be selected by an architecture if 64 bit
++	  accesses are required to be 64 bit aligned in this way even
++	  though it is not a 64 bit architecture.
++
++	  See Documentation/unaligned-memory-access.txt for more
++	  information on the topic of unaligned memory accesses.
++
+ config HAVE_EFFICIENT_UNALIGNED_ACCESS
+ 	bool
+ 	help
+diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
+index caa27322a0ab7..3a392983ac079 100644
+--- a/arch/arm/boot/compressed/head.S
++++ b/arch/arm/boot/compressed/head.S
+@@ -116,7 +116,7 @@
+ 		/*
+ 		 * Debug print of the final appended DTB location
+ 		 */
+-		.macro dbgadtb, begin, end
++		.macro dbgadtb, begin, size
+ #ifdef DEBUG
+ 		kputc   #'D'
+ 		kputc   #'T'
+@@ -129,7 +129,7 @@
+ 		kputc	#'('
+ 		kputc	#'0'
+ 		kputc	#'x'
+-		kphex	\end, 8		/* End of appended DTB */
++		kphex	\size, 8	/* Size of appended DTB */
+ 		kputc	#')'
+ 		kputc	#'\n'
+ #endif
+diff --git a/arch/arm/boot/dts/armada-xp-98dx3236.dtsi b/arch/arm/boot/dts/armada-xp-98dx3236.dtsi
+index 654648b05c7c2..aeccedd125740 100644
+--- a/arch/arm/boot/dts/armada-xp-98dx3236.dtsi
++++ b/arch/arm/boot/dts/armada-xp-98dx3236.dtsi
+@@ -266,11 +266,6 @@
+ 	reg = <0x11000 0x100>;
+ };
+ 
+-&i2c1 {
+-	compatible = "marvell,mv78230-i2c", "marvell,mv64xxx-i2c";
+-	reg = <0x11100 0x100>;
+-};
+-
+ &mpic {
+ 	reg = <0x20a00 0x2d0>, <0x21070 0x58>;
+ };
+diff --git a/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts b/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts
+index 2d44d9ad4e400..e6ad821a86359 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts
+@@ -82,11 +82,6 @@
+ 	status = "okay";
+ };
+ 
+-&vuart {
+-	// VUART Host Console
+-	status = "okay";
+-};
+-
+ &uart1 {
+ 	// Host Console
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/aspeed-bmc-intel-s2600wf.dts b/arch/arm/boot/dts/aspeed-bmc-intel-s2600wf.dts
+index 1deb30ec912cf..6e9baf3bba531 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-intel-s2600wf.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-intel-s2600wf.dts
+@@ -22,9 +22,9 @@
+ 		#size-cells = <1>;
+ 		ranges;
+ 
+-		vga_memory: framebuffer@7f000000 {
++		vga_memory: framebuffer@9f000000 {
+ 			no-map;
+-			reg = <0x7f000000 0x01000000>;
++			reg = <0x9f000000 0x01000000>; /* 16M */
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts b/arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts
+index 4d070d6ba09f9..e86c22ce6d123 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts
+@@ -26,7 +26,7 @@
+ 		#size-cells = <1>;
+ 		ranges;
+ 
+-		flash_memory: region@ba000000 {
++		flash_memory: region@b8000000 {
+ 			no-map;
+ 			reg = <0xb8000000 0x4000000>; /* 64M */
+ 		};
+diff --git a/arch/arm/boot/dts/aspeed-g6.dtsi b/arch/arm/boot/dts/aspeed-g6.dtsi
+index b58220a49cbd8..bf97aaad7be9b 100644
+--- a/arch/arm/boot/dts/aspeed-g6.dtsi
++++ b/arch/arm/boot/dts/aspeed-g6.dtsi
+@@ -357,7 +357,7 @@
+ 				#gpio-cells = <2>;
+ 				gpio-controller;
+ 				compatible = "aspeed,ast2600-gpio";
+-				reg = <0x1e780000 0x800>;
++				reg = <0x1e780000 0x400>;
+ 				interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
+ 				gpio-ranges = <&pinctrl 0 0 208>;
+ 				ngpios = <208>;
+diff --git a/arch/arm/boot/dts/at91-sam9x60ek.dts b/arch/arm/boot/dts/at91-sam9x60ek.dts
+index eae28b82c7fd0..73b6b1f89de99 100644
+--- a/arch/arm/boot/dts/at91-sam9x60ek.dts
++++ b/arch/arm/boot/dts/at91-sam9x60ek.dts
+@@ -569,11 +569,14 @@
+ 			atmel,pins = <AT91_PIOB 16 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+ 		};
+ 	};
+-}; /* pinctrl */
+ 
+-&pmc {
+-	atmel,osc-bypass;
+-};
++	usb1 {
++		pinctrl_usb_default: usb_default {
++			atmel,pins = <AT91_PIOD 15 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++				      AT91_PIOD 16 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++		};
++	};
++}; /* pinctrl */
+ 
+ &pwm0 {
+ 	pinctrl-names = "default";
+@@ -684,6 +687,8 @@
+ 	atmel,vbus-gpio = <0
+ 			   &pioD 15 GPIO_ACTIVE_HIGH
+ 			   &pioD 16 GPIO_ACTIVE_HIGH>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&pinctrl_usb_default>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+index cf13632edd444..5179258f92470 100644
+--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+@@ -242,6 +242,11 @@
+ 						atmel,pins =
+ 							<AT91_PIOE 9 AT91_PERIPH_GPIO AT91_PINCTRL_DEGLITCH>;	/* PE9, conflicts with A9 */
+ 					};
++					pinctrl_usb_default: usb_default {
++						atmel,pins =
++							<AT91_PIOE 3 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++							 AT91_PIOE 4 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -259,6 +264,8 @@
+ 					   &pioE 3 GPIO_ACTIVE_LOW
+ 					   &pioE 4 GPIO_ACTIVE_LOW
+ 					  >;
++			pinctrl-names = "default";
++			pinctrl-0 = <&pinctrl_usb_default>;
+ 			status = "okay";
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/at91-sama5d4_xplained.dts b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+index e5974a17374cf..0b3ad1b580b83 100644
+--- a/arch/arm/boot/dts/at91-sama5d4_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+@@ -134,6 +134,11 @@
+ 						atmel,pins =
+ 							<AT91_PIOE 31 AT91_PERIPH_GPIO AT91_PINCTRL_DEGLITCH>;
+ 					};
++					pinctrl_usb_default: usb_default {
++						atmel,pins =
++							<AT91_PIOE 11 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++							 AT91_PIOE 14 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
+ 					pinctrl_key_gpio: key_gpio_0 {
+ 						atmel,pins =
+ 							<AT91_PIOE 8 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_UP_DEGLITCH>;
+@@ -159,6 +164,8 @@
+ 					   &pioE 11 GPIO_ACTIVE_HIGH
+ 					   &pioE 14 GPIO_ACTIVE_HIGH
+ 					  >;
++			pinctrl-names = "default";
++			pinctrl-0 = <&pinctrl_usb_default>;
+ 			status = "okay";
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/at91sam9rl.dtsi b/arch/arm/boot/dts/at91sam9rl.dtsi
+index 5653e70c84b4b..36a42a9fe1957 100644
+--- a/arch/arm/boot/dts/at91sam9rl.dtsi
++++ b/arch/arm/boot/dts/at91sam9rl.dtsi
+@@ -282,23 +282,26 @@
+ 				atmel,adc-use-res = "highres";
+ 
+ 				trigger0 {
+-					trigger-name = "timer-counter-0";
++					trigger-name = "external-rising";
+ 					trigger-value = <0x1>;
++					trigger-external;
+ 				};
++
+ 				trigger1 {
+-					trigger-name = "timer-counter-1";
+-					trigger-value = <0x3>;
++					trigger-name = "external-falling";
++					trigger-value = <0x2>;
++					trigger-external;
+ 				};
+ 
+ 				trigger2 {
+-					trigger-name = "timer-counter-2";
+-					trigger-value = <0x5>;
++					trigger-name = "external-any";
++					trigger-value = <0x3>;
++					trigger-external;
+ 				};
+ 
+ 				trigger3 {
+-					trigger-name = "external";
+-					trigger-value = <0x13>;
+-					trigger-external;
++					trigger-name = "continuous";
++					trigger-value = <0x6>;
+ 				};
+ 			};
+ 
+diff --git a/arch/arm/boot/dts/meson8b-odroidc1.dts b/arch/arm/boot/dts/meson8b-odroidc1.dts
+index 0c26467de4d03..5963566dbcc9d 100644
+--- a/arch/arm/boot/dts/meson8b-odroidc1.dts
++++ b/arch/arm/boot/dts/meson8b-odroidc1.dts
+@@ -224,7 +224,7 @@
+ 			reg = <0>;
+ 
+ 			reset-assert-us = <10000>;
+-			reset-deassert-us = <30000>;
++			reset-deassert-us = <80000>;
+ 			reset-gpios = <&gpio GPIOH_4 GPIO_ACTIVE_LOW>;
+ 
+ 			interrupt-parent = <&gpio_intc>;
+diff --git a/arch/arm/boot/dts/meson8m2-mxiii-plus.dts b/arch/arm/boot/dts/meson8m2-mxiii-plus.dts
+index cc498191ddd1d..8f4eb1ed45816 100644
+--- a/arch/arm/boot/dts/meson8m2-mxiii-plus.dts
++++ b/arch/arm/boot/dts/meson8m2-mxiii-plus.dts
+@@ -81,7 +81,7 @@
+ 			reg = <0>;
+ 
+ 			reset-assert-us = <10000>;
+-			reset-deassert-us = <30000>;
++			reset-deassert-us = <80000>;
+ 			reset-gpios = <&gpio GPIOH_4 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/omap4-panda-es.dts b/arch/arm/boot/dts/omap4-panda-es.dts
+index cfa85aa3da085..6afa8fd7c412d 100644
+--- a/arch/arm/boot/dts/omap4-panda-es.dts
++++ b/arch/arm/boot/dts/omap4-panda-es.dts
+@@ -46,7 +46,7 @@
+ 
+ 	button_pins: pinmux_button_pins {
+ 		pinctrl-single,pins = <
+-			OMAP4_IOPAD(0x11b, PIN_INPUT_PULLUP | MUX_MODE3) /* gpio_113 */
++			OMAP4_IOPAD(0x0fc, PIN_INPUT_PULLUP | MUX_MODE3) /* gpio_113 */
+ 		>;
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/sama5d2.dtsi b/arch/arm/boot/dts/sama5d2.dtsi
+index 2ddc85dff8ce9..2c4952427296e 100644
+--- a/arch/arm/boot/dts/sama5d2.dtsi
++++ b/arch/arm/boot/dts/sama5d2.dtsi
+@@ -656,6 +656,7 @@
+ 				clocks = <&pmc PMC_TYPE_PERIPHERAL 51>;
+ 				#address-cells = <1>;
+ 				#size-cells = <1>;
++				no-memory-wc;
+ 				ranges = <0 0xf8044000 0x1420>;
+ 			};
+ 
+@@ -724,7 +725,7 @@
+ 
+ 			can0: can@f8054000 {
+ 				compatible = "bosch,m_can";
+-				reg = <0xf8054000 0x4000>, <0x210000 0x4000>;
++				reg = <0xf8054000 0x4000>, <0x210000 0x1c00>;
+ 				reg-names = "m_can", "message_ram";
+ 				interrupts = <56 IRQ_TYPE_LEVEL_HIGH 7>,
+ 					     <64 IRQ_TYPE_LEVEL_HIGH 7>;
+@@ -1130,7 +1131,7 @@
+ 
+ 			can1: can@fc050000 {
+ 				compatible = "bosch,m_can";
+-				reg = <0xfc050000 0x4000>, <0x210000 0x4000>;
++				reg = <0xfc050000 0x4000>, <0x210000 0x3800>;
+ 				reg-names = "m_can", "message_ram";
+ 				interrupts = <57 IRQ_TYPE_LEVEL_HIGH 7>,
+ 					     <65 IRQ_TYPE_LEVEL_HIGH 7>;
+@@ -1140,7 +1141,7 @@
+ 				assigned-clocks = <&pmc PMC_TYPE_GCK 57>;
+ 				assigned-clock-parents = <&pmc PMC_TYPE_CORE PMC_UTMI>;
+ 				assigned-clock-rates = <40000000>;
+-				bosch,mram-cfg = <0x1100 0 0 64 0 0 32 32>;
++				bosch,mram-cfg = <0x1c00 0 0 64 0 0 32 32>;
+ 				status = "disabled";
+ 			};
+ 
+diff --git a/arch/arm/boot/dts/tegra20-ventana.dts b/arch/arm/boot/dts/tegra20-ventana.dts
+index b158771ac0b7d..055334ae3d288 100644
+--- a/arch/arm/boot/dts/tegra20-ventana.dts
++++ b/arch/arm/boot/dts/tegra20-ventana.dts
+@@ -3,6 +3,7 @@
+ 
+ #include <dt-bindings/input/input.h>
+ #include "tegra20.dtsi"
++#include "tegra20-cpu-opp.dtsi"
+ 
+ / {
+ 	model = "NVIDIA Tegra20 Ventana evaluation board";
+@@ -592,6 +593,16 @@
+ 		#clock-cells = <0>;
+ 	};
+ 
++	cpus {
++		cpu0: cpu@0 {
++			operating-points-v2 = <&cpu0_opp_table>;
++		};
++
++		cpu@1 {
++			operating-points-v2 = <&cpu0_opp_table>;
++		};
++	};
++
+ 	gpio-keys {
+ 		compatible = "gpio-keys";
+ 
+diff --git a/arch/arm/crypto/aes-ce-core.S b/arch/arm/crypto/aes-ce-core.S
+index 4d1707388d941..312428d83eedb 100644
+--- a/arch/arm/crypto/aes-ce-core.S
++++ b/arch/arm/crypto/aes-ce-core.S
+@@ -386,20 +386,32 @@ ENTRY(ce_aes_ctr_encrypt)
+ .Lctrloop4x:
+ 	subs		r4, r4, #4
+ 	bmi		.Lctr1x
+-	add		r6, r6, #1
++
++	/*
++	 * NOTE: the sequence below has been carefully tweaked to avoid
++	 * a silicon erratum that exists in Cortex-A57 (#1742098) and
++	 * Cortex-A72 (#1655431) cores, where AESE/AESMC instruction pairs
++	 * may produce an incorrect result if they take their input from a
++	 * register of which a single 32-bit lane has been updated the last
++	 * time it was modified. To work around this, the lanes of registers
++	 * q0-q3 below are not manipulated individually, and the different
++	 * counter values are prepared by successive manipulations of q7.
++	 */
++	add		ip, r6, #1
+ 	vmov		q0, q7
++	rev		ip, ip
++	add		lr, r6, #2
++	vmov		s31, ip			@ set lane 3 of q1 via q7
++	add		ip, r6, #3
++	rev		lr, lr
+ 	vmov		q1, q7
+-	rev		ip, r6
+-	add		r6, r6, #1
++	vmov		s31, lr			@ set lane 3 of q2 via q7
++	rev		ip, ip
+ 	vmov		q2, q7
+-	vmov		s7, ip
+-	rev		ip, r6
+-	add		r6, r6, #1
++	vmov		s31, ip			@ set lane 3 of q3 via q7
++	add		r6, r6, #4
+ 	vmov		q3, q7
+-	vmov		s11, ip
+-	rev		ip, r6
+-	add		r6, r6, #1
+-	vmov		s15, ip
++
+ 	vld1.8		{q4-q5}, [r1]!
+ 	vld1.8		{q6}, [r1]!
+ 	vld1.8		{q15}, [r1]!
+diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
+index bda8bf17631e1..f70af1d0514b9 100644
+--- a/arch/arm/crypto/aes-neonbs-glue.c
++++ b/arch/arm/crypto/aes-neonbs-glue.c
+@@ -19,7 +19,7 @@ MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
+ MODULE_LICENSE("GPL v2");
+ 
+ MODULE_ALIAS_CRYPTO("ecb(aes)");
+-MODULE_ALIAS_CRYPTO("cbc(aes)");
++MODULE_ALIAS_CRYPTO("cbc(aes)-all");
+ MODULE_ALIAS_CRYPTO("ctr(aes)");
+ MODULE_ALIAS_CRYPTO("xts(aes)");
+ 
+@@ -191,7 +191,8 @@ static int cbc_init(struct crypto_skcipher *tfm)
+ 	struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+ 	unsigned int reqsize;
+ 
+-	ctx->enc_tfm = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
++	ctx->enc_tfm = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC |
++					     CRYPTO_ALG_NEED_FALLBACK);
+ 	if (IS_ERR(ctx->enc_tfm))
+ 		return PTR_ERR(ctx->enc_tfm);
+ 
+@@ -441,7 +442,8 @@ static struct skcipher_alg aes_algs[] = { {
+ 	.base.cra_blocksize	= AES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct aesbs_cbc_ctx),
+ 	.base.cra_module	= THIS_MODULE,
+-	.base.cra_flags		= CRYPTO_ALG_INTERNAL,
++	.base.cra_flags		= CRYPTO_ALG_INTERNAL |
++				  CRYPTO_ALG_NEED_FALLBACK,
+ 
+ 	.min_keysize		= AES_MIN_KEY_SIZE,
+ 	.max_keysize		= AES_MAX_KEY_SIZE,
+diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
+index 55a47df047738..1c9e6d1452c5b 100644
+--- a/arch/arm/kernel/entry-armv.S
++++ b/arch/arm/kernel/entry-armv.S
+@@ -252,31 +252,10 @@ __und_svc:
+ #else
+ 	svc_entry
+ #endif
+-	@
+-	@ call emulation code, which returns using r9 if it has emulated
+-	@ the instruction, or the more conventional lr if we are to treat
+-	@ this as a real undefined instruction
+-	@
+-	@  r0 - instruction
+-	@
+-#ifndef CONFIG_THUMB2_KERNEL
+-	ldr	r0, [r4, #-4]
+-#else
+-	mov	r1, #2
+-	ldrh	r0, [r4, #-2]			@ Thumb instruction at LR - 2
+-	cmp	r0, #0xe800			@ 32-bit instruction if xx >= 0
+-	blo	__und_svc_fault
+-	ldrh	r9, [r4]			@ bottom 16 bits
+-	add	r4, r4, #2
+-	str	r4, [sp, #S_PC]
+-	orr	r0, r9, r0, lsl #16
+-#endif
+-	badr	r9, __und_svc_finish
+-	mov	r2, r4
+-	bl	call_fpe
+ 
+ 	mov	r1, #4				@ PC correction to apply
+-__und_svc_fault:
++ THUMB(	tst	r5, #PSR_T_BIT		)	@ exception taken in Thumb mode?
++ THUMB(	movne	r1, #2			)	@ if so, fix up PC correction
+ 	mov	r0, sp				@ struct pt_regs *regs
+ 	bl	__und_fault
+ 
+diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
+index f8904227e7fdc..98c1e68bdfcbb 100644
+--- a/arch/arm/kernel/head.S
++++ b/arch/arm/kernel/head.S
+@@ -671,12 +671,8 @@ ARM_BE8(rev16	ip, ip)
+ 	ldrcc	r7, [r4], #4	@ use branch for delay slot
+ 	bcc	1b
+ 	bx	lr
+-#else
+-#ifdef CONFIG_CPU_ENDIAN_BE8
+-	moveq	r0, #0x00004000	@ set bit 22, mov to mvn instruction
+ #else
+ 	moveq	r0, #0x400000	@ set bit 22, mov to mvn instruction
+-#endif
+ 	b	2f
+ 1:	ldr	ip, [r7, r3]
+ #ifdef CONFIG_CPU_ENDIAN_BE8
+@@ -685,7 +681,7 @@ ARM_BE8(rev16	ip, ip)
+ 	tst	ip, #0x000f0000	@ check the rotation field
+ 	orrne	ip, ip, r6, lsl #24 @ mask in offset bits 31-24
+ 	biceq	ip, ip, #0x00004000 @ clear bit 22
+-	orreq	ip, ip, r0      @ mask in offset bits 7-0
++	orreq	ip, ip, r0, ror #8  @ mask in offset bits 7-0
+ #else
+ 	bic	ip, ip, #0x000000ff
+ 	tst	ip, #0xf00	@ check the rotation field
+diff --git a/arch/arm/vfp/entry.S b/arch/arm/vfp/entry.S
+index 0186cf9da890b..27b0a1f27fbdf 100644
+--- a/arch/arm/vfp/entry.S
++++ b/arch/arm/vfp/entry.S
+@@ -37,20 +37,3 @@ ENDPROC(vfp_null_entry)
+ 	.align	2
+ .LCvfp:
+ 	.word	vfp_vector
+-
+-@ This code is called if the VFP does not exist. It needs to flag the
+-@ failure to the VFP initialisation code.
+-
+-	__INIT
+-ENTRY(vfp_testing_entry)
+-	dec_preempt_count_ti r10, r4
+-	ldr	r0, VFP_arch_address
+-	str	r0, [r0]		@ set to non-zero value
+-	ret	r9			@ we have handled the fault
+-ENDPROC(vfp_testing_entry)
+-
+-	.align	2
+-VFP_arch_address:
+-	.word	VFP_arch
+-
+-	__FINIT
+diff --git a/arch/arm/vfp/vfphw.S b/arch/arm/vfp/vfphw.S
+index 4fcff9f59947d..d5837bf05a9a5 100644
+--- a/arch/arm/vfp/vfphw.S
++++ b/arch/arm/vfp/vfphw.S
+@@ -79,11 +79,6 @@ ENTRY(vfp_support_entry)
+ 	DBGSTR3	"instr %08x pc %08x state %p", r0, r2, r10
+ 
+ 	.fpu	vfpv2
+-	ldr	r3, [sp, #S_PSR]	@ Neither lazy restore nor FP exceptions
+-	and	r3, r3, #MODE_MASK	@ are supported in kernel mode
+-	teq	r3, #USR_MODE
+-	bne	vfp_kmode_exception	@ Returns through lr
+-
+ 	VFPFMRX	r1, FPEXC		@ Is the VFP enabled?
+ 	DBGSTR1	"fpexc %08x", r1
+ 	tst	r1, #FPEXC_EN
+diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
+index 8c9e7f9f0277d..2cb355c1b5b71 100644
+--- a/arch/arm/vfp/vfpmodule.c
++++ b/arch/arm/vfp/vfpmodule.c
+@@ -23,6 +23,7 @@
+ #include <asm/cputype.h>
+ #include <asm/system_info.h>
+ #include <asm/thread_notify.h>
++#include <asm/traps.h>
+ #include <asm/vfp.h>
+ 
+ #include "vfpinstr.h"
+@@ -31,7 +32,6 @@
+ /*
+  * Our undef handlers (in entry.S)
+  */
+-asmlinkage void vfp_testing_entry(void);
+ asmlinkage void vfp_support_entry(void);
+ asmlinkage void vfp_null_entry(void);
+ 
+@@ -42,7 +42,7 @@ asmlinkage void (*vfp_vector)(void) = vfp_null_entry;
+  * Used in startup: set to non-zero if VFP checks fail
+  * After startup, holds VFP architecture
+  */
+-unsigned int VFP_arch;
++static unsigned int __initdata VFP_arch;
+ 
+ /*
+  * The pointer to the vfpstate structure of the thread which currently
+@@ -436,7 +436,7 @@ static void vfp_enable(void *unused)
+  * present on all CPUs within a SMP complex. Needs to be called prior to
+  * vfp_init().
+  */
+-void vfp_disable(void)
++void __init vfp_disable(void)
+ {
+ 	if (VFP_arch) {
+ 		pr_debug("%s: should be called prior to vfp_init\n", __func__);
+@@ -642,7 +642,9 @@ static int vfp_starting_cpu(unsigned int unused)
+ 	return 0;
+ }
+ 
+-void vfp_kmode_exception(void)
++#ifdef CONFIG_KERNEL_MODE_NEON
++
++static int vfp_kmode_exception(struct pt_regs *regs, unsigned int instr)
+ {
+ 	/*
+ 	 * If we reach this point, a floating point exception has been raised
+@@ -660,9 +662,51 @@ void vfp_kmode_exception(void)
+ 		pr_crit("BUG: unsupported FP instruction in kernel mode\n");
+ 	else
+ 		pr_crit("BUG: FP instruction issued in kernel mode with FP unit disabled\n");
++	pr_crit("FPEXC == 0x%08x\n", fmrx(FPEXC));
++	return 1;
+ }
+ 
+-#ifdef CONFIG_KERNEL_MODE_NEON
++static struct undef_hook vfp_kmode_exception_hook[] = {{
++	.instr_mask	= 0xfe000000,
++	.instr_val	= 0xf2000000,
++	.cpsr_mask	= MODE_MASK | PSR_T_BIT,
++	.cpsr_val	= SVC_MODE,
++	.fn		= vfp_kmode_exception,
++}, {
++	.instr_mask	= 0xff100000,
++	.instr_val	= 0xf4000000,
++	.cpsr_mask	= MODE_MASK | PSR_T_BIT,
++	.cpsr_val	= SVC_MODE,
++	.fn		= vfp_kmode_exception,
++}, {
++	.instr_mask	= 0xef000000,
++	.instr_val	= 0xef000000,
++	.cpsr_mask	= MODE_MASK | PSR_T_BIT,
++	.cpsr_val	= SVC_MODE | PSR_T_BIT,
++	.fn		= vfp_kmode_exception,
++}, {
++	.instr_mask	= 0xff100000,
++	.instr_val	= 0xf9000000,
++	.cpsr_mask	= MODE_MASK | PSR_T_BIT,
++	.cpsr_val	= SVC_MODE | PSR_T_BIT,
++	.fn		= vfp_kmode_exception,
++}, {
++	.instr_mask	= 0x0c000e00,
++	.instr_val	= 0x0c000a00,
++	.cpsr_mask	= MODE_MASK,
++	.cpsr_val	= SVC_MODE,
++	.fn		= vfp_kmode_exception,
++}};
++
++static int __init vfp_kmode_exception_hook_init(void)
++{
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(vfp_kmode_exception_hook); i++)
++		register_undef_hook(&vfp_kmode_exception_hook[i]);
++	return 0;
++}
++subsys_initcall(vfp_kmode_exception_hook_init);
+ 
+ /*
+  * Kernel-side NEON support functions
+@@ -708,6 +752,21 @@ EXPORT_SYMBOL(kernel_neon_end);
+ 
+ #endif /* CONFIG_KERNEL_MODE_NEON */
+ 
++static int __init vfp_detect(struct pt_regs *regs, unsigned int instr)
++{
++	VFP_arch = UINT_MAX;	/* mark as not present */
++	regs->ARM_pc += 4;
++	return 0;
++}
++
++static struct undef_hook vfp_detect_hook __initdata = {
++	.instr_mask	= 0x0c000e00,
++	.instr_val	= 0x0c000a00,
++	.cpsr_mask	= MODE_MASK,
++	.cpsr_val	= SVC_MODE,
++	.fn		= vfp_detect,
++};
++
+ /*
+  * VFP support code initialisation.
+  */
+@@ -728,10 +787,11 @@ static int __init vfp_init(void)
+ 	 * The handler is already setup to just log calls, so
+ 	 * we just need to read the VFPSID register.
+ 	 */
+-	vfp_vector = vfp_testing_entry;
++	register_undef_hook(&vfp_detect_hook);
+ 	barrier();
+ 	vfpsid = fmrx(FPSID);
+ 	barrier();
++	unregister_undef_hook(&vfp_detect_hook);
+ 	vfp_vector = vfp_null_entry;
+ 
+ 	pr_info("VFP support v0.3: ");
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a-x96-max.dts b/arch/arm64/boot/dts/amlogic/meson-g12a-x96-max.dts
+index 1b07c8c06eac5..463a72d6bb7c7 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12a-x96-max.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12a-x96-max.dts
+@@ -340,7 +340,7 @@
+ 		eee-broken-1000t;
+ 
+ 		reset-assert-us = <10000>;
+-		reset-deassert-us = <30000>;
++		reset-deassert-us = <80000>;
+ 		reset-gpios = <&gpio GPIOZ_15 (GPIO_ACTIVE_LOW | GPIO_OPEN_DRAIN)>;
+ 
+ 		interrupt-parent = <&gpio_intc>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+index 6982632ae6461..39a09661c5f62 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+@@ -413,7 +413,7 @@
+ 		max-speed = <1000>;
+ 
+ 		reset-assert-us = <10000>;
+-		reset-deassert-us = <30000>;
++		reset-deassert-us = <80000>;
+ 		reset-gpios = <&gpio GPIOZ_15 (GPIO_ACTIVE_LOW | GPIO_OPEN_DRAIN)>;
+ 
+ 		interrupt-parent = <&gpio_intc>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi
+index 2802ddbb83ac7..feb0885047400 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi
+@@ -264,7 +264,7 @@
+ 		max-speed = <1000>;
+ 
+ 		reset-assert-us = <10000>;
+-		reset-deassert-us = <30000>;
++		reset-deassert-us = <80000>;
+ 		reset-gpios = <&gpio GPIOZ_15 (GPIO_ACTIVE_LOW | GPIO_OPEN_DRAIN)>;
+ 
+ 		interrupt-parent = <&gpio_intc>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts
+index 7be3e354093bf..de27beafe9db9 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts
+@@ -165,7 +165,7 @@
+ 			reg = <0>;
+ 
+ 			reset-assert-us = <10000>;
+-			reset-deassert-us = <30000>;
++			reset-deassert-us = <80000>;
+ 			reset-gpios = <&gpio GPIOZ_14 GPIO_ACTIVE_LOW>;
+ 
+ 			interrupt-parent = <&gpio_intc>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
+index 70fcfb7b0683d..50de1d01e5655 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
+@@ -200,7 +200,7 @@
+ 			reg = <0>;
+ 
+ 			reset-assert-us = <10000>;
+-			reset-deassert-us = <30000>;
++			reset-deassert-us = <80000>;
+ 			reset-gpios = <&gpio GPIOZ_14 GPIO_ACTIVE_LOW>;
+ 
+ 			interrupt-parent = <&gpio_intc>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
+index 222ee8069cfaa..9b0b81f191f1f 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
+@@ -126,7 +126,7 @@
+ 			reg = <0>;
+ 
+ 			reset-assert-us = <10000>;
+-			reset-deassert-us = <30000>;
++			reset-deassert-us = <80000>;
+ 			reset-gpios = <&gpio GPIOZ_14 GPIO_ACTIVE_LOW>;
+ 
+ 			interrupt-parent = <&gpio_intc>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
+index ad812854a107f..a350fee1264d7 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
+@@ -147,7 +147,7 @@
+ 			reg = <0>;
+ 
+ 			reset-assert-us = <10000>;
+-			reset-deassert-us = <30000>;
++			reset-deassert-us = <80000>;
+ 			reset-gpios = <&gpio GPIOZ_14 GPIO_ACTIVE_LOW>;
+ 
+ 			interrupt-parent = <&gpio_intc>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-p230.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-p230.dts
+index b08c4537f260d..b2ab05c220903 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-p230.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-p230.dts
+@@ -82,7 +82,7 @@
+ 
+ 		/* External PHY reset is shared with internal PHY Led signal */
+ 		reset-assert-us = <10000>;
+-		reset-deassert-us = <30000>;
++		reset-deassert-us = <80000>;
+ 		reset-gpios = <&gpio GPIOZ_14 GPIO_ACTIVE_LOW>;
+ 
+ 		interrupt-parent = <&gpio_intc>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxm-khadas-vim2.dts b/arch/arm64/boot/dts/amlogic/meson-gxm-khadas-vim2.dts
+index bff8ec2c1c70c..62d3e04299b67 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxm-khadas-vim2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxm-khadas-vim2.dts
+@@ -194,7 +194,7 @@
+ 		reg = <0>;
+ 
+ 		reset-assert-us = <10000>;
+-		reset-deassert-us = <30000>;
++		reset-deassert-us = <80000>;
+ 		reset-gpios = <&gpio GPIOZ_14 GPIO_ACTIVE_LOW>;
+ 
+ 		interrupt-parent = <&gpio_intc>;
+@@ -341,7 +341,7 @@
+ 		#size-cells = <1>;
+ 		compatible = "winbond,w25q16", "jedec,spi-nor";
+ 		reg = <0>;
+-		spi-max-frequency = <3000000>;
++		spi-max-frequency = <104000000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxm-nexbox-a1.dts b/arch/arm64/boot/dts/amlogic/meson-gxm-nexbox-a1.dts
+index 83eca3af44ce7..dfa7a37a1281f 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxm-nexbox-a1.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxm-nexbox-a1.dts
+@@ -112,7 +112,7 @@
+ 		max-speed = <1000>;
+ 
+ 		reset-assert-us = <10000>;
+-		reset-deassert-us = <30000>;
++		reset-deassert-us = <80000>;
+ 		reset-gpios = <&gpio GPIOZ_14 GPIO_ACTIVE_LOW>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxm-q200.dts b/arch/arm64/boot/dts/amlogic/meson-gxm-q200.dts
+index ea45ae0c71b7f..8edbfe040805c 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxm-q200.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxm-q200.dts
+@@ -64,7 +64,7 @@
+ 
+ 		/* External PHY reset is shared with internal PHY Led signal */
+ 		reset-assert-us = <10000>;
+-		reset-deassert-us = <30000>;
++		reset-deassert-us = <80000>;
+ 		reset-gpios = <&gpio GPIOZ_14 GPIO_ACTIVE_LOW>;
+ 
+ 		interrupt-parent = <&gpio_intc>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts b/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts
+index c89c9f846fb10..dde7cfe12cffa 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts
+@@ -114,7 +114,7 @@
+ 		max-speed = <1000>;
+ 
+ 		reset-assert-us = <10000>;
+-		reset-deassert-us = <30000>;
++		reset-deassert-us = <80000>;
+ 		reset-gpios = <&gpio GPIOZ_14 GPIO_ACTIVE_LOW>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi b/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi
+index 71317f5aada1d..c309517abae32 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi
+@@ -130,7 +130,7 @@
+ 			opp-microvolt = <790000>;
+ 		};
+ 
+-		opp-1512000000 {
++		opp-1500000000 {
+ 			opp-hz = /bits/ 64 <1500000000>;
+ 			opp-microvolt = <800000>;
+ 		};
+diff --git a/arch/arm64/boot/dts/exynos/exynos7.dtsi b/arch/arm64/boot/dts/exynos/exynos7.dtsi
+index b9ed6a33e2901..7599e1a00ff51 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7.dtsi
++++ b/arch/arm64/boot/dts/exynos/exynos7.dtsi
+@@ -79,8 +79,10 @@
+ 	};
+ 
+ 	psci {
+-		compatible = "arm,psci-0.2";
++		compatible = "arm,psci";
+ 		method = "smc";
++		cpu_off = <0x84000002>;
++		cpu_on = <0xC4000003>;
+ 	};
+ 
+ 	soc: soc@0 {
+@@ -481,13 +483,6 @@
+ 		pmu_system_controller: system-controller@105c0000 {
+ 			compatible = "samsung,exynos7-pmu", "syscon";
+ 			reg = <0x105c0000 0x5000>;
+-
+-			reboot: syscon-reboot {
+-				compatible = "syscon-reboot";
+-				regmap = <&pmu_system_controller>;
+-				offset = <0x0400>;
+-				mask = <0x1>;
+-			};
+ 		};
+ 
+ 		rtc: rtc@10590000 {
+@@ -687,3 +682,4 @@
+ };
+ 
+ #include "exynos7-pinctrl.dtsi"
++#include "arm/exynos-syscon-restart.dtsi"
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28.dts b/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28.dts
+index 8161dd2379712..b3fa4dbeebd52 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28.dts
+@@ -155,20 +155,10 @@
+ 		};
+ 
+ 		partition@210000 {
+-			reg = <0x210000 0x0f0000>;
++			reg = <0x210000 0x1d0000>;
+ 			label = "bootloader";
+ 		};
+ 
+-		partition@300000 {
+-			reg = <0x300000 0x040000>;
+-			label = "DP firmware";
+-		};
+-
+-		partition@340000 {
+-			reg = <0x340000 0x0a0000>;
+-			label = "trusted firmware";
+-		};
+-
+ 		partition@3e0000 {
+ 			reg = <0x3e0000 0x020000>;
+ 			label = "bootloader environment";
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+index 7a6fb7e1fb82f..33aa0efa2293a 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+@@ -309,7 +309,7 @@
+ 			      <0x0 0x20000000 0x0 0x10000000>;
+ 			reg-names = "fspi_base", "fspi_mmap";
+ 			interrupts = <GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>;
+-			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
++			clocks = <&clockgen 2 0>, <&clockgen 2 0>;
+ 			clock-names = "fspi_en", "fspi";
+ 			status = "disabled";
+ 		};
+@@ -934,7 +934,7 @@
+ 			ethernet@0,4 {
+ 				compatible = "fsl,enetc-ptp";
+ 				reg = <0x000400 0 0 0 0>;
+-				clocks = <&clockgen 4 0>;
++				clocks = <&clockgen 2 3>;
+ 				little-endian;
+ 				fsl,extts-fifo;
+ 			};
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index f3a678e0fd99b..bf76ebe463794 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -146,7 +146,7 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&rgmii_pins>;
+ 	phy-mode = "rgmii-id";
+-	phy = <&phy1>;
++	phy-handle = <&phy1>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/marvell/armada-7040.dtsi b/arch/arm64/boot/dts/marvell/armada-7040.dtsi
+index 7a3198cd7a071..2f440711d21d2 100644
+--- a/arch/arm64/boot/dts/marvell/armada-7040.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-7040.dtsi
+@@ -15,10 +15,6 @@
+ 		     "marvell,armada-ap806";
+ };
+ 
+-&smmu {
+-	status = "okay";
+-};
+-
+ &cp0_pcie0 {
+ 	iommu-map =
+ 		<0x0   &smmu 0x480 0x20>,
+diff --git a/arch/arm64/boot/dts/marvell/armada-8040.dtsi b/arch/arm64/boot/dts/marvell/armada-8040.dtsi
+index 79e8ce59baa88..22c2d6ebf3818 100644
+--- a/arch/arm64/boot/dts/marvell/armada-8040.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-8040.dtsi
+@@ -15,10 +15,6 @@
+ 		     "marvell,armada-ap806";
+ };
+ 
+-&smmu {
+-	status = "okay";
+-};
+-
+ &cp0_pcie0 {
+ 	iommu-map =
+ 		<0x0   &smmu 0x480 0x20>,
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 9cfd961c45eb3..08a914d3a6435 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -363,7 +363,7 @@
+ 			compatible = "mediatek,mt8183-gce";
+ 			reg = <0 0x10238000 0 0x4000>;
+ 			interrupts = <GIC_SPI 162 IRQ_TYPE_LEVEL_LOW>;
+-			#mbox-cells = <3>;
++			#mbox-cells = <2>;
+ 			clocks = <&infracfg CLK_INFRA_GCE>;
+ 			clock-names = "gce";
+ 		};
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index 93438d2b94696..6946fb210e484 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -378,7 +378,7 @@
+ 					nvidia,schmitt = <TEGRA_PIN_DISABLE>;
+ 					nvidia,lpdr = <TEGRA_PIN_ENABLE>;
+ 					nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+-					nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>;
++					nvidia,io-hv = <TEGRA_PIN_ENABLE>;
+ 					nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ 					nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ 				};
+@@ -390,7 +390,7 @@
+ 					nvidia,schmitt = <TEGRA_PIN_DISABLE>;
+ 					nvidia,lpdr = <TEGRA_PIN_ENABLE>;
+ 					nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+-					nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>;
++					nvidia,io-hv = <TEGRA_PIN_ENABLE>;
+ 					nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ 					nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ 				};
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 59e0cbfa22143..cdc1e3d60c58e 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -156,8 +156,8 @@
+ 			no-map;
+ 		};
+ 
+-		tz: tz@48500000 {
+-			reg = <0x0 0x48500000 0x0 0x00200000>;
++		tz: memory@4a600000 {
++			reg = <0x0 0x4a600000 0x0 0x00400000>;
+ 			no-map;
+ 		};
+ 
+@@ -167,7 +167,7 @@
+ 		};
+ 
+ 		q6_region: memory@4ab00000 {
+-			reg = <0x0 0x4ab00000 0x0 0x02800000>;
++			reg = <0x0 0x4ab00000 0x0 0x05500000>;
+ 			no-map;
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi b/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi
+index b18d21e42f596..f7ac4c4033db6 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi
+@@ -78,6 +78,9 @@
+ 		sda-gpios = <&msmgpio 105 (GPIO_ACTIVE_HIGH|GPIO_OPEN_DRAIN)>;
+ 		scl-gpios = <&msmgpio 106 (GPIO_ACTIVE_HIGH|GPIO_OPEN_DRAIN)>;
+ 
++		pinctrl-names = "default";
++		pinctrl-0 = <&muic_i2c_default>;
++
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+@@ -314,6 +317,14 @@
+ 		};
+ 	};
+ 
++	muic_i2c_default: muic-i2c-default {
++		pins = "gpio105", "gpio106";
++		function = "gpio";
++
++		drive-strength = <2>;
++		bias-disable;
++	};
++
+ 	muic_int_default: muic-int-default {
+ 		pins = "gpio12";
+ 		function = "gpio";
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index 6678f1e8e3958..c71f3afc1cc9f 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -1394,7 +1394,8 @@
+ 		ipa: ipa@1e40000 {
+ 			compatible = "qcom,sc7180-ipa";
+ 
+-			iommus = <&apps_smmu 0x440 0x3>;
++			iommus = <&apps_smmu 0x440 0x0>,
++				 <&apps_smmu 0x442 0x0>;
+ 			reg = <0 0x1e40000 0 0x7000>,
+ 			      <0 0x1e47000 0 0x2000>,
+ 			      <0 0x1e04000 0 0x2c000>;
+@@ -2811,7 +2812,7 @@
+ 			interrupt-controller;
+ 			#interrupt-cells = <1>;
+ 
+-			interconnects = <&mmss_noc MASTER_MDP0 &mc_virt SLAVE_EBI1>;
++			interconnects = <&mmss_noc MASTER_MDP0 0 &mc_virt SLAVE_EBI1 0>;
+ 			interconnect-names = "mdp0-mem";
+ 
+ 			iommus = <&apps_smmu 0x800 0x2>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 40e8c11f23ab0..f97f354af86f4 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -2141,7 +2141,8 @@
+ 		ipa: ipa@1e40000 {
+ 			compatible = "qcom,sdm845-ipa";
+ 
+-			iommus = <&apps_smmu 0x720 0x3>;
++			iommus = <&apps_smmu 0x720 0x0>,
++				 <&apps_smmu 0x722 0x0>;
+ 			reg = <0 0x1e40000 0 0x7000>,
+ 			      <0 0x1e47000 0 0x2000>,
+ 			      <0 0x1e04000 0 0x2c000>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index d03ca31907466..76a8c996d497f 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -264,23 +264,28 @@
+ 	status = "okay";
+ 	clock-frequency = <400000>;
+ 
+-	hid@15 {
++	tsel: hid@15 {
+ 		compatible = "hid-over-i2c";
+ 		reg = <0x15>;
+ 		hid-descr-addr = <0x1>;
+ 
+-		interrupts-extended = <&tlmm 37 IRQ_TYPE_EDGE_RISING>;
++		interrupts-extended = <&tlmm 37 IRQ_TYPE_LEVEL_HIGH>;
++
++		pinctrl-names = "default";
++		pinctrl-0 = <&i2c3_hid_active>;
+ 	};
+ 
+-	hid@2c {
++	tsc2: hid@2c {
+ 		compatible = "hid-over-i2c";
+ 		reg = <0x2c>;
+ 		hid-descr-addr = <0x20>;
+ 
+-		interrupts-extended = <&tlmm 37 IRQ_TYPE_EDGE_RISING>;
++		interrupts-extended = <&tlmm 37 IRQ_TYPE_LEVEL_HIGH>;
+ 
+ 		pinctrl-names = "default";
+-		pinctrl-0 = <&i2c2_hid_active>;
++		pinctrl-0 = <&i2c3_hid_active>;
++
++		status = "disabled";
+ 	};
+ };
+ 
+@@ -288,15 +293,15 @@
+ 	status = "okay";
+ 	clock-frequency = <400000>;
+ 
+-	hid@10 {
++	tsc1: hid@10 {
+ 		compatible = "hid-over-i2c";
+ 		reg = <0x10>;
+ 		hid-descr-addr = <0x1>;
+ 
+-		interrupts-extended = <&tlmm 125 IRQ_TYPE_EDGE_FALLING>;
++		interrupts-extended = <&tlmm 125 IRQ_TYPE_LEVEL_LOW>;
+ 
+ 		pinctrl-names = "default";
+-		pinctrl-0 = <&i2c6_hid_active>;
++		pinctrl-0 = <&i2c5_hid_active>;
+ 	};
+ };
+ 
+@@ -304,7 +309,7 @@
+ 	status = "okay";
+ 	clock-frequency = <400000>;
+ 
+-	hid@5c {
++	ecsh: hid@5c {
+ 		compatible = "hid-over-i2c";
+ 		reg = <0x5c>;
+ 		hid-descr-addr = <0x1>;
+@@ -312,7 +317,7 @@
+ 		interrupts-extended = <&tlmm 92 IRQ_TYPE_LEVEL_LOW>;
+ 
+ 		pinctrl-names = "default";
+-		pinctrl-0 = <&i2c12_hid_active>;
++		pinctrl-0 = <&i2c11_hid_active>;
+ 	};
+ };
+ 
+@@ -426,8 +431,8 @@
+ &tlmm {
+ 	gpio-reserved-ranges = <0 4>, <81 4>;
+ 
+-	i2c2_hid_active: i2c2-hid-active {
+-		pins = <37>;
++	i2c3_hid_active: i2c2-hid-active {
++		pins = "gpio37";
+ 		function = "gpio";
+ 
+ 		input-enable;
+@@ -435,8 +440,8 @@
+ 		drive-strength = <2>;
+ 	};
+ 
+-	i2c6_hid_active: i2c6-hid-active {
+-		pins = <125>;
++	i2c5_hid_active: i2c5-hid-active {
++		pins = "gpio125";
+ 		function = "gpio";
+ 
+ 		input-enable;
+@@ -444,8 +449,8 @@
+ 		drive-strength = <2>;
+ 	};
+ 
+-	i2c12_hid_active: i2c12-hid-active {
+-		pins = <92>;
++	i2c11_hid_active: i2c11-hid-active {
++		pins = "gpio92";
+ 		function = "gpio";
+ 
+ 		input-enable;
+@@ -454,7 +459,7 @@
+ 	};
+ 
+ 	wcd_intr_default: wcd_intr_default {
+-		pins = <54>;
++		pins = "gpio54";
+ 		function = "gpio";
+ 
+ 		input-enable;
+diff --git a/arch/arm64/boot/dts/qcom/sm8250-mtp.dts b/arch/arm64/boot/dts/qcom/sm8250-mtp.dts
+index fd194ed7fbc86..98675e1f8204f 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/sm8250-mtp.dts
+@@ -14,7 +14,7 @@
+ 
+ / {
+ 	model = "Qualcomm Technologies, Inc. SM8250 MTP";
+-	compatible = "qcom,sm8250-mtp";
++	compatible = "qcom,sm8250-mtp", "qcom,sm8250";
+ 
+ 	aliases {
+ 		serial0 = &uart12;
+diff --git a/arch/arm64/boot/dts/renesas/cat875.dtsi b/arch/arm64/boot/dts/renesas/cat875.dtsi
+index 33daa95706840..801ea54b027c4 100644
+--- a/arch/arm64/boot/dts/renesas/cat875.dtsi
++++ b/arch/arm64/boot/dts/renesas/cat875.dtsi
+@@ -21,7 +21,6 @@
+ 	status = "okay";
+ 
+ 	phy0: ethernet-phy@0 {
+-		rxc-skew-ps = <1500>;
+ 		reg = <0>;
+ 		interrupt-parent = <&gpio2>;
+ 		interrupts = <21 IRQ_TYPE_LEVEL_LOW>;
+diff --git a/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi b/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi
+index 178401a34cbf8..b9e46aed53362 100644
+--- a/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi
++++ b/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi
+@@ -23,7 +23,6 @@
+ 	status = "okay";
+ 
+ 	phy0: ethernet-phy@0 {
+-		rxc-skew-ps = <1500>;
+ 		reg = <0>;
+ 		interrupt-parent = <&gpio2>;
+ 		interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
+index b70ffb1c6a630..b76282e704de1 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
+@@ -334,6 +334,7 @@
+ };
+ 
+ &usb20_otg {
++	dr_mode = "host";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index bbdb19a3e85d1..db0d5c8e5f96a 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -1237,8 +1237,8 @@
+ 
+ 		uart0 {
+ 			uart0_xfer: uart0-xfer {
+-				rockchip,pins = <1 RK_PB1 1 &pcfg_pull_up>,
+-						<1 RK_PB0 1 &pcfg_pull_none>;
++				rockchip,pins = <1 RK_PB1 1 &pcfg_pull_none>,
++						<1 RK_PB0 1 &pcfg_pull_up>;
+ 			};
+ 
+ 			uart0_cts: uart0-cts {
+@@ -1256,8 +1256,8 @@
+ 
+ 		uart1 {
+ 			uart1_xfer: uart1-xfer {
+-				rockchip,pins = <3 RK_PA4 4 &pcfg_pull_up>,
+-						<3 RK_PA6 4 &pcfg_pull_none>;
++				rockchip,pins = <3 RK_PA4 4 &pcfg_pull_none>,
++						<3 RK_PA6 4 &pcfg_pull_up>;
+ 			};
+ 
+ 			uart1_cts: uart1-cts {
+@@ -1275,15 +1275,15 @@
+ 
+ 		uart2-0 {
+ 			uart2m0_xfer: uart2m0-xfer {
+-				rockchip,pins = <1 RK_PA0 2 &pcfg_pull_up>,
+-						<1 RK_PA1 2 &pcfg_pull_none>;
++				rockchip,pins = <1 RK_PA0 2 &pcfg_pull_none>,
++						<1 RK_PA1 2 &pcfg_pull_up>;
+ 			};
+ 		};
+ 
+ 		uart2-1 {
+ 			uart2m1_xfer: uart2m1-xfer {
+-				rockchip,pins = <2 RK_PA0 1 &pcfg_pull_up>,
+-						<2 RK_PA1 1 &pcfg_pull_none>;
++				rockchip,pins = <2 RK_PA0 1 &pcfg_pull_none>,
++						<2 RK_PA1 1 &pcfg_pull_up>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index 533525229a8db..b9662205be9bf 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -834,7 +834,7 @@
+ 		};
+ 	};
+ 
+-	dss: dss@04a00000 {
++	dss: dss@4a00000 {
+ 		compatible = "ti,am65x-dss";
+ 		reg =	<0x0 0x04a00000 0x0 0x1000>, /* common */
+ 			<0x0 0x04a02000 0x0 0x1000>, /* vidl1 */
+@@ -867,6 +867,8 @@
+ 
+ 		status = "disabled";
+ 
++		dma-coherent;
++
+ 		dss_ports: ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+index e2a96b2c423c4..c66ded9079be4 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+@@ -1278,7 +1278,7 @@
+ 		};
+ 	};
+ 
+-	dss: dss@04a00000 {
++	dss: dss@4a00000 {
+ 		compatible = "ti,j721e-dss";
+ 		reg =
+ 			<0x00 0x04a00000 0x00 0x10000>, /* common_m */
+diff --git a/arch/arm64/crypto/poly1305-armv8.pl b/arch/arm64/crypto/poly1305-armv8.pl
+index 6e5576d19af8f..cbc980fb02e33 100644
+--- a/arch/arm64/crypto/poly1305-armv8.pl
++++ b/arch/arm64/crypto/poly1305-armv8.pl
+@@ -840,7 +840,6 @@ poly1305_blocks_neon:
+ 	 ldp	d14,d15,[sp,#64]
+ 	addp	$ACC2,$ACC2,$ACC2
+ 	 ldr	x30,[sp,#8]
+-	 .inst	0xd50323bf		// autiasp
+ 
+ 	////////////////////////////////////////////////////////////////
+ 	// lazy reduction, but without narrowing
+@@ -882,6 +881,7 @@ poly1305_blocks_neon:
+ 	str	x4,[$ctx,#8]		// set is_base2_26
+ 
+ 	ldr	x29,[sp],#80
++	 .inst	0xd50323bf		// autiasp
+ 	ret
+ .size	poly1305_blocks_neon,.-poly1305_blocks_neon
+ 
+diff --git a/arch/arm64/crypto/poly1305-core.S_shipped b/arch/arm64/crypto/poly1305-core.S_shipped
+index 8d1c4e420ccdc..fb2822abf63aa 100644
+--- a/arch/arm64/crypto/poly1305-core.S_shipped
++++ b/arch/arm64/crypto/poly1305-core.S_shipped
+@@ -779,7 +779,6 @@ poly1305_blocks_neon:
+ 	 ldp	d14,d15,[sp,#64]
+ 	addp	v21.2d,v21.2d,v21.2d
+ 	 ldr	x30,[sp,#8]
+-	 .inst	0xd50323bf		// autiasp
+ 
+ 	////////////////////////////////////////////////////////////////
+ 	// lazy reduction, but without narrowing
+@@ -821,6 +820,7 @@ poly1305_blocks_neon:
+ 	str	x4,[x0,#8]		// set is_base2_26
+ 
+ 	ldr	x29,[sp],#80
++	 .inst	0xd50323bf		// autiasp
+ 	ret
+ .size	poly1305_blocks_neon,.-poly1305_blocks_neon
+ 
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 0cd9f0f75c135..cc060c41adaab 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -214,6 +214,7 @@ enum vcpu_sysreg {
+ #define c2_TTBR1	(TTBR1_EL1 * 2)	/* Translation Table Base Register 1 */
+ #define c2_TTBR1_high	(c2_TTBR1 + 1)	/* TTBR1 top 32 bits */
+ #define c2_TTBCR	(TCR_EL1 * 2)	/* Translation Table Base Control R. */
++#define c2_TTBCR2	(c2_TTBCR + 1)	/* Translation Table Base Control R. 2 */
+ #define c3_DACR		(DACR32_EL2 * 2)/* Domain Access Control Register */
+ #define c5_DFSR		(ESR_EL1 * 2)	/* Data Fault Status Register */
+ #define c5_IFSR		(IFSR32_EL2 * 2)/* Instruction Fault Status Register */
+diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
+index 52a0638ed967b..ef15c8a2a49dc 100644
+--- a/arch/arm64/kernel/mte.c
++++ b/arch/arm64/kernel/mte.c
+@@ -189,7 +189,8 @@ long get_mte_ctrl(struct task_struct *task)
+ 
+ 	switch (task->thread.sctlr_tcf0) {
+ 	case SCTLR_EL1_TCF0_NONE:
+-		return PR_MTE_TCF_NONE;
++		ret |= PR_MTE_TCF_NONE;
++		break;
+ 	case SCTLR_EL1_TCF0_SYNC:
+ 		ret |= PR_MTE_TCF_SYNC;
+ 		break;
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index c1fac9836af1a..2b28bf1a53266 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1987,6 +1987,7 @@ static const struct sys_reg_desc cp15_regs[] = {
+ 	{ Op1( 0), CRn( 2), CRm( 0), Op2( 0), access_vm_reg, NULL, c2_TTBR0 },
+ 	{ Op1( 0), CRn( 2), CRm( 0), Op2( 1), access_vm_reg, NULL, c2_TTBR1 },
+ 	{ Op1( 0), CRn( 2), CRm( 0), Op2( 2), access_vm_reg, NULL, c2_TTBCR },
++	{ Op1( 0), CRn( 2), CRm( 0), Op2( 3), access_vm_reg, NULL, c2_TTBCR2 },
+ 	{ Op1( 0), CRn( 3), CRm( 0), Op2( 0), access_vm_reg, NULL, c3_DACR },
+ 	{ Op1( 0), CRn( 5), CRm( 0), Op2( 0), access_vm_reg, NULL, c5_DFSR },
+ 	{ Op1( 0), CRn( 5), CRm( 0), Op2( 1), access_vm_reg, NULL, c5_IFSR },
+diff --git a/arch/m68k/mac/config.c b/arch/m68k/mac/config.c
+index 0ac53d87493c8..2bea1799b8de7 100644
+--- a/arch/m68k/mac/config.c
++++ b/arch/m68k/mac/config.c
+@@ -777,16 +777,12 @@ static struct resource scc_b_rsrcs[] = {
+ struct platform_device scc_a_pdev = {
+ 	.name           = "scc",
+ 	.id             = 0,
+-	.num_resources  = ARRAY_SIZE(scc_a_rsrcs),
+-	.resource       = scc_a_rsrcs,
+ };
+ EXPORT_SYMBOL(scc_a_pdev);
+ 
+ struct platform_device scc_b_pdev = {
+ 	.name           = "scc",
+ 	.id             = 1,
+-	.num_resources  = ARRAY_SIZE(scc_b_rsrcs),
+-	.resource       = scc_b_rsrcs,
+ };
+ EXPORT_SYMBOL(scc_b_pdev);
+ 
+@@ -813,10 +809,15 @@ static void __init mac_identify(void)
+ 
+ 	/* Set up serial port resources for the console initcall. */
+ 
+-	scc_a_rsrcs[0].start = (resource_size_t) mac_bi_data.sccbase + 2;
+-	scc_a_rsrcs[0].end   = scc_a_rsrcs[0].start;
+-	scc_b_rsrcs[0].start = (resource_size_t) mac_bi_data.sccbase;
+-	scc_b_rsrcs[0].end   = scc_b_rsrcs[0].start;
++	scc_a_rsrcs[0].start     = (resource_size_t)mac_bi_data.sccbase + 2;
++	scc_a_rsrcs[0].end       = scc_a_rsrcs[0].start;
++	scc_a_pdev.num_resources = ARRAY_SIZE(scc_a_rsrcs);
++	scc_a_pdev.resource      = scc_a_rsrcs;
++
++	scc_b_rsrcs[0].start     = (resource_size_t)mac_bi_data.sccbase;
++	scc_b_rsrcs[0].end       = scc_b_rsrcs[0].start;
++	scc_b_pdev.num_resources = ARRAY_SIZE(scc_b_rsrcs);
++	scc_b_pdev.resource      = scc_b_rsrcs;
+ 
+ 	switch (macintosh_config->scc_type) {
+ 	case MAC_SCC_PSC:
+diff --git a/arch/mips/bcm47xx/Kconfig b/arch/mips/bcm47xx/Kconfig
+index 6889f74e06f54..490bb6da74b7e 100644
+--- a/arch/mips/bcm47xx/Kconfig
++++ b/arch/mips/bcm47xx/Kconfig
+@@ -27,6 +27,7 @@ config BCM47XX_BCMA
+ 	select BCMA
+ 	select BCMA_HOST_SOC
+ 	select BCMA_DRIVER_MIPS
++	select BCMA_DRIVER_PCI if PCI
+ 	select BCMA_DRIVER_PCI_HOSTMODE if PCI
+ 	select BCMA_DRIVER_GPIO
+ 	default y
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index ca579deef9391..9d11f68a9e8bb 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -498,8 +498,8 @@ static void __init request_crashkernel(struct resource *res)
+ 
+ static void __init check_kernel_sections_mem(void)
+ {
+-	phys_addr_t start = PFN_PHYS(PFN_DOWN(__pa_symbol(&_text)));
+-	phys_addr_t size = PFN_PHYS(PFN_UP(__pa_symbol(&_end))) - start;
++	phys_addr_t start = __pa_symbol(&_text);
++	phys_addr_t size = __pa_symbol(&_end) - start;
+ 
+ 	if (!memblock_is_region_memory(start, size)) {
+ 		pr_info("Kernel sections are not in the memory maps\n");
+diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
+index f8ce6d2dde7b1..e4b364b5da9e7 100644
+--- a/arch/powerpc/boot/Makefile
++++ b/arch/powerpc/boot/Makefile
+@@ -368,6 +368,8 @@ initrd-y := $(filter-out $(image-y), $(initrd-y))
+ targets	+= $(image-y) $(initrd-y)
+ targets += $(foreach x, dtbImage uImage cuImage simpleImage treeImage, \
+ 		$(patsubst $(x).%, dts/%.dtb, $(filter $(x).%, $(image-y))))
++targets += $(foreach x, dtbImage uImage cuImage simpleImage treeImage, \
++		$(patsubst $(x).%, dts/fsl/%.dtb, $(filter $(x).%, $(image-y))))
+ 
+ $(addprefix $(obj)/, $(initrd-y)): $(obj)/ramdisk.image.gz
+ 
+diff --git a/arch/powerpc/include/asm/bitops.h b/arch/powerpc/include/asm/bitops.h
+index 4a4d3afd53406..299ab33505a6c 100644
+--- a/arch/powerpc/include/asm/bitops.h
++++ b/arch/powerpc/include/asm/bitops.h
+@@ -216,15 +216,34 @@ static inline void arch___clear_bit_unlock(int nr, volatile unsigned long *addr)
+  */
+ static inline int fls(unsigned int x)
+ {
+-	return 32 - __builtin_clz(x);
++	int lz;
++
++	if (__builtin_constant_p(x))
++		return x ? 32 - __builtin_clz(x) : 0;
++	asm("cntlzw %0,%1" : "=r" (lz) : "r" (x));
++	return 32 - lz;
+ }
+ 
+ #include <asm-generic/bitops/builtin-__fls.h>
+ 
++/*
++ * 64-bit can do this using one cntlzd (count leading zeroes doubleword)
++ * instruction; for 32-bit we use the generic version, which does two
++ * 32-bit fls calls.
++ */
++#ifdef CONFIG_PPC64
+ static inline int fls64(__u64 x)
+ {
+-	return 64 - __builtin_clzll(x);
++	int lz;
++
++	if (__builtin_constant_p(x))
++		return x ? 64 - __builtin_clzll(x) : 0;
++	asm("cntlzd %0,%1" : "=r" (lz) : "r" (x));
++	return 64 - lz;
+ }
++#else
++#include <asm-generic/bitops/fls64.h>
++#endif
+ 
+ #ifdef CONFIG_PPC64
+ unsigned int __arch_hweight8(unsigned int w);
+diff --git a/arch/powerpc/include/asm/book3s/32/mmu-hash.h b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
+index 2e277ca0170fb..a8982d52f6b1d 100644
+--- a/arch/powerpc/include/asm/book3s/32/mmu-hash.h
++++ b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
+@@ -94,6 +94,7 @@ typedef struct {
+ } mm_context_t;
+ 
+ void update_bats(void);
++static inline void cleanup_cpu_mmu_context(void) { };
+ 
+ /* patch sites */
+ extern s32 patch__hash_page_A0, patch__hash_page_A1, patch__hash_page_A2;
+diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
+index 1376be95e975f..523d3e6e24009 100644
+--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
+@@ -524,9 +524,9 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
+ 	if (pte_val(*ptep) & _PAGE_HASHPTE)
+ 		flush_hash_entry(mm, ptep, addr);
+ 	__asm__ __volatile__("\
+-		stw%U0%X0 %2,%0\n\
++		stw%X0 %2,%0\n\
+ 		eieio\n\
+-		stw%U0%X0 %L2,%1"
++		stw%X1 %L2,%1"
+ 	: "=m" (*ptep), "=m" (*((unsigned char *)ptep+4))
+ 	: "r" (pte) : "memory");
+ 
+diff --git a/arch/powerpc/include/asm/cpm1.h b/arch/powerpc/include/asm/cpm1.h
+index a116fe9317892..3bdd74739cb88 100644
+--- a/arch/powerpc/include/asm/cpm1.h
++++ b/arch/powerpc/include/asm/cpm1.h
+@@ -68,6 +68,7 @@ extern void cpm_reset(void);
+ #define PROFF_SPI	((uint)0x0180)
+ #define PROFF_SCC3	((uint)0x0200)
+ #define PROFF_SMC1	((uint)0x0280)
++#define PROFF_DSP1	((uint)0x02c0)
+ #define PROFF_SCC4	((uint)0x0300)
+ #define PROFF_SMC2	((uint)0x0380)
+ 
+diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
+index 3d2f94afc13ae..398eba3998790 100644
+--- a/arch/powerpc/include/asm/cputable.h
++++ b/arch/powerpc/include/asm/cputable.h
+@@ -369,7 +369,7 @@ static inline void cpu_feature_keys_init(void) { }
+ 	    CPU_FTR_PPC_LE | CPU_FTR_NEED_PAIRED_STWCX)
+ #define CPU_FTRS_82XX	(CPU_FTR_COMMON | CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_NOEXECUTE)
+ #define CPU_FTRS_G2_LE	(CPU_FTR_COMMON | CPU_FTR_MAYBE_CAN_DOZE | \
+-	    CPU_FTR_MAYBE_CAN_NAP)
++	    CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_NOEXECUTE)
+ #define CPU_FTRS_E300	(CPU_FTR_MAYBE_CAN_DOZE | \
+ 	    CPU_FTR_MAYBE_CAN_NAP | \
+ 	    CPU_FTR_COMMON  | CPU_FTR_NOEXECUTE)
+@@ -409,7 +409,6 @@ static inline void cpu_feature_keys_init(void) { }
+ 	    CPU_FTR_DBELL | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
+ 	    CPU_FTR_DEBUG_LVL_EXC | CPU_FTR_EMB_HV | CPU_FTR_ALTIVEC_COMP | \
+ 	    CPU_FTR_CELL_TB_BUG | CPU_FTR_SMT)
+-#define CPU_FTRS_GENERIC_32	(CPU_FTR_COMMON | CPU_FTR_NODSISRALIGN)
+ 
+ /* 64-bit CPUs */
+ #define CPU_FTRS_PPC970	(CPU_FTR_LWSYNC | \
+@@ -520,8 +519,6 @@ enum {
+ 	    CPU_FTRS_7447 | CPU_FTRS_7447A | CPU_FTRS_82XX |
+ 	    CPU_FTRS_G2_LE | CPU_FTRS_E300 | CPU_FTRS_E300C2 |
+ 	    CPU_FTRS_CLASSIC32 |
+-#else
+-	    CPU_FTRS_GENERIC_32 |
+ #endif
+ #ifdef CONFIG_PPC_8xx
+ 	    CPU_FTRS_8XX |
+@@ -596,8 +593,6 @@ enum {
+ 	    CPU_FTRS_7447 & CPU_FTRS_7447A & CPU_FTRS_82XX &
+ 	    CPU_FTRS_G2_LE & CPU_FTRS_E300 & CPU_FTRS_E300C2 &
+ 	    CPU_FTRS_CLASSIC32 &
+-#else
+-	    CPU_FTRS_GENERIC_32 &
+ #endif
+ #ifdef CONFIG_PPC_8xx
+ 	    CPU_FTRS_8XX &
+diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
+index 6277e7596ae58..ac75f4ab0dba1 100644
+--- a/arch/powerpc/include/asm/nohash/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/pgtable.h
+@@ -192,9 +192,9 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
+ 	 */
+ 	if (IS_ENABLED(CONFIG_PPC32) && IS_ENABLED(CONFIG_PTE_64BIT) && !percpu) {
+ 		__asm__ __volatile__("\
+-			stw%U0%X0 %2,%0\n\
++			stw%X0 %2,%0\n\
+ 			eieio\n\
+-			stw%U0%X0 %L2,%1"
++			stw%X1 %L2,%1"
+ 		: "=m" (*ptep), "=m" (*((unsigned char *)ptep+4))
+ 		: "r" (pte) : "memory");
+ 		return;
+diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
+index bf0bf1b900d21..fe2ef598e2ead 100644
+--- a/arch/powerpc/kernel/Makefile
++++ b/arch/powerpc/kernel/Makefile
+@@ -173,6 +173,9 @@ KCOV_INSTRUMENT_cputable.o := n
+ KCOV_INSTRUMENT_setup_64.o := n
+ KCOV_INSTRUMENT_paca.o := n
+ 
++CFLAGS_setup_64.o		+= -fno-stack-protector
++CFLAGS_paca.o			+= -fno-stack-protector
++
+ extra-$(CONFIG_PPC_FPU)		+= fpu.o
+ extra-$(CONFIG_ALTIVEC)		+= vector.o
+ extra-$(CONFIG_PPC64)		+= entry_64.o
+diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
+index 7c767765071da..c88e66adecb52 100644
+--- a/arch/powerpc/kernel/head_32.h
++++ b/arch/powerpc/kernel/head_32.h
+@@ -131,18 +131,28 @@
+ #ifdef CONFIG_VMAP_STACK
+ 	mfspr	r11, SPRN_SRR0
+ 	mtctr	r11
+-#endif
+ 	andi.	r11, r9, MSR_PR
+-	lwz	r11,TASK_STACK-THREAD(r12)
++	mr	r11, r1
++	lwz	r1,TASK_STACK-THREAD(r12)
+ 	beq-	99f
+-	addi	r11, r11, THREAD_SIZE - INT_FRAME_SIZE
+-#ifdef CONFIG_VMAP_STACK
++	addi	r1, r1, THREAD_SIZE - INT_FRAME_SIZE
+ 	li	r10, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */
+ 	mtmsr	r10
+ 	isync
++	tovirt(r12, r12)
++	stw	r11,GPR1(r1)
++	stw	r11,0(r1)
++	mr	r11, r1
++#else
++	andi.	r11, r9, MSR_PR
++	lwz	r11,TASK_STACK-THREAD(r12)
++	beq-	99f
++	addi	r11, r11, THREAD_SIZE - INT_FRAME_SIZE
++	tophys(r11, r11)
++	stw	r1,GPR1(r11)
++	stw	r1,0(r11)
++	tovirt(r1, r11)		/* set new kernel sp */
+ #endif
+-	tovirt_vmstack r12, r12
+-	tophys_novmstack r11, r11
+ 	mflr	r10
+ 	stw	r10, _LINK(r11)
+ #ifdef CONFIG_VMAP_STACK
+@@ -150,9 +160,6 @@
+ #else
+ 	mfspr	r10,SPRN_SRR0
+ #endif
+-	stw	r1,GPR1(r11)
+-	stw	r1,0(r11)
+-	tovirt_novmstack r1, r11	/* set new kernel sp */
+ 	stw	r10,_NIP(r11)
+ 	mfcr	r10
+ 	rlwinm	r10,r10,0,4,2	/* Clear SO bit in CR */
+diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
+index 1510b2a56669f..2d6581db0c7b6 100644
+--- a/arch/powerpc/kernel/head_64.S
++++ b/arch/powerpc/kernel/head_64.S
+@@ -417,6 +417,10 @@ generic_secondary_common_init:
+ 	/* From now on, r24 is expected to be logical cpuid */
+ 	mr	r24,r5
+ 
++	/* Create a temp kernel stack for use before relocation is on.	*/
++	ld	r1,PACAEMERGSP(r13)
++	subi	r1,r1,STACK_FRAME_OVERHEAD
++
+ 	/* See if we need to call a cpu state restore handler */
+ 	LOAD_REG_ADDR(r23, cur_cpu_spec)
+ 	ld	r23,0(r23)
+@@ -445,10 +449,6 @@ generic_secondary_common_init:
+ 	sync				/* order paca.run and cur_cpu_spec */
+ 	isync				/* In case code patching happened */
+ 
+-	/* Create a temp kernel stack for use before relocation is on.	*/
+-	ld	r1,PACAEMERGSP(r13)
+-	subi	r1,r1,STACK_FRAME_OVERHEAD
+-
+ 	b	__secondary_start
+ #endif /* SMP */
+ 
+@@ -990,7 +990,7 @@ start_here_common:
+ 	bl	start_kernel
+ 
+ 	/* Not reached */
+-	trap
++0:	trap
+ 	EMIT_BUG_ENTRY 0b, __FILE__, __LINE__, 0
+ 	.previous
+ 
+diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
+index 0ad15768d762c..7f5aae3c387d2 100644
+--- a/arch/powerpc/kernel/paca.c
++++ b/arch/powerpc/kernel/paca.c
+@@ -208,7 +208,7 @@ static struct rtas_args * __init new_rtas_args(int cpu, unsigned long limit)
+ struct paca_struct **paca_ptrs __read_mostly;
+ EXPORT_SYMBOL(paca_ptrs);
+ 
+-void __init __nostackprotector initialise_paca(struct paca_struct *new_paca, int cpu)
++void __init initialise_paca(struct paca_struct *new_paca, int cpu)
+ {
+ #ifdef CONFIG_PPC_PSERIES
+ 	new_paca->lppaca_ptr = NULL;
+@@ -241,7 +241,7 @@ void __init __nostackprotector initialise_paca(struct paca_struct *new_paca, int
+ }
+ 
+ /* Put the paca pointer into r13 and SPRG_PACA */
+-void __nostackprotector setup_paca(struct paca_struct *new_paca)
++void setup_paca(struct paca_struct *new_paca)
+ {
+ 	/* Setup r13 */
+ 	local_paca = new_paca;
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index 954f41676f692..cccb32cf0e08c 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -1030,7 +1030,7 @@ static struct rtas_filter rtas_filters[] __ro_after_init = {
+ 	{ "ibm,display-message", -1, 0, -1, -1, -1 },
+ 	{ "ibm,errinjct", -1, 2, -1, -1, -1, 1024 },
+ 	{ "ibm,close-errinjct", -1, -1, -1, -1, -1 },
+-	{ "ibm,open-errinct", -1, -1, -1, -1, -1 },
++	{ "ibm,open-errinjct", -1, -1, -1, -1, -1 },
+ 	{ "ibm,get-config-addr-info2", -1, -1, -1, -1, -1 },
+ 	{ "ibm,get-dynamic-sensor-state", -1, 1, -1, -1, -1 },
+ 	{ "ibm,get-indices", -1, 2, 3, -1, -1 },
+diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
+index 808ec9fab6052..da8c71f321ad3 100644
+--- a/arch/powerpc/kernel/setup-common.c
++++ b/arch/powerpc/kernel/setup-common.c
+@@ -919,8 +919,6 @@ void __init setup_arch(char **cmdline_p)
+ 
+ 	/* On BookE, setup per-core TLB data structures. */
+ 	setup_tlb_core_data();
+-
+-	smp_release_cpus();
+ #endif
+ 
+ 	/* Print various info about the machine that has been gathered so far. */
+@@ -944,6 +942,8 @@ void __init setup_arch(char **cmdline_p)
+ 	exc_lvl_early_init();
+ 	emergency_stack_init();
+ 
++	smp_release_cpus();
++
+ 	initmem_init();
+ 
+ 	early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
+diff --git a/arch/powerpc/kernel/setup.h b/arch/powerpc/kernel/setup.h
+index 2ec835574cc94..2dd0d9cb5a208 100644
+--- a/arch/powerpc/kernel/setup.h
++++ b/arch/powerpc/kernel/setup.h
+@@ -8,12 +8,6 @@
+ #ifndef __ARCH_POWERPC_KERNEL_SETUP_H
+ #define __ARCH_POWERPC_KERNEL_SETUP_H
+ 
+-#ifdef CONFIG_CC_IS_CLANG
+-#define __nostackprotector
+-#else
+-#define __nostackprotector __attribute__((__optimize__("no-stack-protector")))
+-#endif
+-
+ void initialize_cache_info(void);
+ void irqstack_early_init(void);
+ 
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 74fd47f46fa58..c28e949cc2229 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -283,7 +283,7 @@ void __init record_spr_defaults(void)
+  * device-tree is not accessible via normal means at this point.
+  */
+ 
+-void __init __nostackprotector early_setup(unsigned long dt_ptr)
++void __init early_setup(unsigned long dt_ptr)
+ {
+ 	static __initdata struct paca_struct boot_paca;
+ 
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 8c2857cbd9609..7d6cf75a7fd80 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -919,7 +919,7 @@ static struct sched_domain_topology_level powerpc_topology[] = {
+ 	{ NULL, },
+ };
+ 
+-static int init_big_cores(void)
++static int __init init_big_cores(void)
+ {
+ 	int cpu;
+ 
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index 855457ed09b54..b18bce1a209fa 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -1346,6 +1346,9 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 	switch (opcode) {
+ #ifdef __powerpc64__
+ 	case 1:
++		if (!cpu_has_feature(CPU_FTR_ARCH_31))
++			return -1;
++
+ 		prefix_r = GET_PREFIX_R(word);
+ 		ra = GET_PREFIX_RA(suffix);
+ 		rd = (suffix >> 21) & 0x1f;
+@@ -2733,6 +2736,9 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 		}
+ 		break;
+ 	case 1: /* Prefixed instructions */
++		if (!cpu_has_feature(CPU_FTR_ARCH_31))
++			return -1;
++
+ 		prefix_r = GET_PREFIX_R(word);
+ 		ra = GET_PREFIX_RA(suffix);
+ 		op->update_reg = ra;
+@@ -2751,6 +2757,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			case 41:	/* plwa */
+ 				op->type = MKOP(LOAD, PREFIXED | SIGNEXT, 4);
+ 				break;
++#ifdef CONFIG_VSX
+ 			case 42:        /* plxsd */
+ 				op->reg = rd + 32;
+ 				op->type = MKOP(LOAD_VSX, PREFIXED, 8);
+@@ -2791,13 +2798,14 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 				op->element_size = 16;
+ 				op->vsx_flags = VSX_CHECK_VEC;
+ 				break;
++#endif /* CONFIG_VSX */
+ 			case 56:        /* plq */
+ 				op->type = MKOP(LOAD, PREFIXED, 16);
+ 				break;
+ 			case 57:	/* pld */
+ 				op->type = MKOP(LOAD, PREFIXED, 8);
+ 				break;
+-			case 60:        /* stq */
++			case 60:        /* pstq */
+ 				op->type = MKOP(STORE, PREFIXED, 16);
+ 				break;
+ 			case 61:	/* pstd */
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index 0add963a849b3..72e1b51beb10c 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -303,7 +303,6 @@ static inline void cmo_account_page_fault(void)
+ static inline void cmo_account_page_fault(void) { }
+ #endif /* CONFIG_PPC_SMLPAR */
+ 
+-#ifdef CONFIG_PPC_BOOK3S
+ static void sanity_check_fault(bool is_write, bool is_user,
+ 			       unsigned long error_code, unsigned long address)
+ {
+@@ -320,6 +319,9 @@ static void sanity_check_fault(bool is_write, bool is_user,
+ 		return;
+ 	}
+ 
++	if (!IS_ENABLED(CONFIG_PPC_BOOK3S))
++		return;
++
+ 	/*
+ 	 * For hash translation mode, we should never get a
+ 	 * PROTFAULT. Any update to pte to reduce access will result in us
+@@ -354,10 +356,6 @@ static void sanity_check_fault(bool is_write, bool is_user,
+ 
+ 	WARN_ON_ONCE(error_code & DSISR_PROTFAULT);
+ }
+-#else
+-static void sanity_check_fault(bool is_write, bool is_user,
+-			       unsigned long error_code, unsigned long address) { }
+-#endif /* CONFIG_PPC_BOOK3S */
+ 
+ /*
+  * Define the correct "is_write" bit in error_code based
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index 3fc325bebe4df..22eb1c718e622 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -532,7 +532,7 @@ void __flush_dcache_icache(void *p)
+ 	 * space occurs, before returning to user space.
+ 	 */
+ 
+-	if (cpu_has_feature(MMU_FTR_TYPE_44x))
++	if (mmu_has_feature(MMU_FTR_TYPE_44x))
+ 		return;
+ 
+ 	invalidate_icache_range(addr, addr + PAGE_SIZE);
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index 08643cba14948..43599e671d383 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -137,6 +137,9 @@ static void pmao_restore_workaround(bool ebb) { }
+ 
+ bool is_sier_available(void)
+ {
++	if (!ppmu)
++		return false;
++
+ 	if (ppmu->flags & PPMU_HAS_SIER)
+ 		return true;
+ 
+@@ -2121,6 +2124,16 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
+ 	local64_set(&event->hw.period_left, left);
+ 	perf_event_update_userpage(event);
+ 
++	/*
++	 * Due to hardware limitation, sometimes SIAR could sample a kernel
++	 * address even when freeze on supervisor state (kernel) is set in
++	 * MMCR2. Check attr.exclude_kernel and address to drop the sample in
++	 * these cases.
++	 */
++	if (event->attr.exclude_kernel && record)
++		if (is_kernel_addr(mfspr(SPRN_SIAR)))
++			record = 0;
++
+ 	/*
+ 	 * Finally record data if requested.
+ 	 */
+diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
+index 2848904df6383..e1a21d34c6e49 100644
+--- a/arch/powerpc/perf/isa207-common.c
++++ b/arch/powerpc/perf/isa207-common.c
+@@ -247,6 +247,9 @@ void isa207_get_mem_weight(u64 *weight)
+ 	u64 sier = mfspr(SPRN_SIER);
+ 	u64 val = (sier & ISA207_SIER_TYPE_MASK) >> ISA207_SIER_TYPE_SHIFT;
+ 
++	if (cpu_has_feature(CPU_FTR_ARCH_31))
++		mantissa = P10_MMCRA_THR_CTR_MANT(mmcra);
++
+ 	if (val == 0 || val == 7)
+ 		*weight = 0;
+ 	else
+@@ -311,9 +314,11 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
+ 	}
+ 
+ 	if (unit >= 6 && unit <= 9) {
+-		if (cpu_has_feature(CPU_FTR_ARCH_31) && (unit == 6)) {
+-			mask |= CNST_L2L3_GROUP_MASK;
+-			value |= CNST_L2L3_GROUP_VAL(event >> p10_L2L3_EVENT_SHIFT);
++		if (cpu_has_feature(CPU_FTR_ARCH_31)) {
++			if (unit == 6) {
++				mask |= CNST_L2L3_GROUP_MASK;
++				value |= CNST_L2L3_GROUP_VAL(event >> p10_L2L3_EVENT_SHIFT);
++			}
+ 		} else if (cpu_has_feature(CPU_FTR_ARCH_300)) {
+ 			mask  |= CNST_CACHE_GROUP_MASK;
+ 			value |= CNST_CACHE_GROUP_VAL(event & 0xff);
+@@ -339,12 +344,22 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
+ 		value |= CNST_L1_QUAL_VAL(cache);
+ 	}
+ 
++	if (cpu_has_feature(CPU_FTR_ARCH_31)) {
++		mask |= CNST_RADIX_SCOPE_GROUP_MASK;
++		value |= CNST_RADIX_SCOPE_GROUP_VAL(event >> p10_EVENT_RADIX_SCOPE_QUAL_SHIFT);
++	}
++
+ 	if (is_event_marked(event)) {
+ 		mask  |= CNST_SAMPLE_MASK;
+ 		value |= CNST_SAMPLE_VAL(event >> EVENT_SAMPLE_SHIFT);
+ 	}
+ 
+-	if (cpu_has_feature(CPU_FTR_ARCH_300))  {
++	if (cpu_has_feature(CPU_FTR_ARCH_31)) {
++		if (event_is_threshold(event)) {
++			mask  |= CNST_THRESH_CTL_SEL_MASK;
++			value |= CNST_THRESH_CTL_SEL_VAL(event >> EVENT_THRESH_SHIFT);
++		}
++	} else if (cpu_has_feature(CPU_FTR_ARCH_300))  {
+ 		if (event_is_threshold(event) && is_thresh_cmp_valid(event)) {
+ 			mask  |= CNST_THRESH_MASK;
+ 			value |= CNST_THRESH_VAL(event >> EVENT_THRESH_SHIFT);
+@@ -456,6 +471,13 @@ int isa207_compute_mmcr(u64 event[], int n_ev,
+ 			}
+ 		}
+ 
++		/* Set RADIX_SCOPE_QUAL bit */
++		if (cpu_has_feature(CPU_FTR_ARCH_31)) {
++			val = (event[i] >> p10_EVENT_RADIX_SCOPE_QUAL_SHIFT) &
++				p10_EVENT_RADIX_SCOPE_QUAL_MASK;
++			mmcr1 |= val << p10_MMCR1_RADIX_SCOPE_QUAL_SHIFT;
++		}
++
+ 		if (is_event_marked(event[i])) {
+ 			mmcra |= MMCRA_SAMPLE_ENABLE;
+ 
+diff --git a/arch/powerpc/perf/isa207-common.h b/arch/powerpc/perf/isa207-common.h
+index 7025de5e60e7d..454b32c314406 100644
+--- a/arch/powerpc/perf/isa207-common.h
++++ b/arch/powerpc/perf/isa207-common.h
+@@ -101,6 +101,9 @@
+ #define p10_EVENT_CACHE_SEL_MASK	0x3ull
+ #define p10_EVENT_MMCR3_MASK		0x7fffull
+ #define p10_EVENT_MMCR3_SHIFT		45
++#define p10_EVENT_RADIX_SCOPE_QUAL_SHIFT	9
++#define p10_EVENT_RADIX_SCOPE_QUAL_MASK	0x1
++#define p10_MMCR1_RADIX_SCOPE_QUAL_SHIFT	45
+ 
+ #define p10_EVENT_VALID_MASK		\
+ 	((p10_SDAR_MODE_MASK   << p10_SDAR_MODE_SHIFT		|	\
+@@ -112,6 +115,7 @@
+ 	(p9_EVENT_COMBINE_MASK << p9_EVENT_COMBINE_SHIFT)	|	\
+ 	(p10_EVENT_MMCR3_MASK  << p10_EVENT_MMCR3_SHIFT)	|	\
+ 	(EVENT_MARKED_MASK     << EVENT_MARKED_SHIFT)		|	\
++	(p10_EVENT_RADIX_SCOPE_QUAL_MASK << p10_EVENT_RADIX_SCOPE_QUAL_SHIFT)	|	\
+ 	 EVENT_LINUX_MASK					|	\
+ 	EVENT_PSEL_MASK))
+ /*
+@@ -125,9 +129,9 @@
+  *
+  *        28        24        20        16        12         8         4         0
+  * | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - |
+- *               [ ] |   [ ]   [  sample ]   [     ]   [6] [5]   [4] [3]   [2] [1]
+- *                |  |    |                     |
+- *      BHRB IFM -*  |    |                     |      Count of events for each PMC.
++ *               [ ] |   [ ] |  [  sample ]   [     ]   [6] [5]   [4] [3]   [2] [1]
++ *                |  |    |  |                  |
++ *      BHRB IFM -*  |    |  |*radix_scope      |      Count of events for each PMC.
+  *              EBB -*    |                     |        p1, p2, p3, p4, p5, p6.
+  *      L1 I/D qualifier -*                     |
+  *                     nc - number of counters -*
+@@ -145,6 +149,9 @@
+ #define CNST_THRESH_VAL(v)	(((v) & EVENT_THRESH_MASK) << 32)
+ #define CNST_THRESH_MASK	CNST_THRESH_VAL(EVENT_THRESH_MASK)
+ 
++#define CNST_THRESH_CTL_SEL_VAL(v)	(((v) & 0x7ffull) << 32)
++#define CNST_THRESH_CTL_SEL_MASK	CNST_THRESH_CTL_SEL_VAL(0x7ff)
++
+ #define CNST_EBB_VAL(v)		(((v) & EVENT_EBB_MASK) << 24)
+ #define CNST_EBB_MASK		CNST_EBB_VAL(EVENT_EBB_MASK)
+ 
+@@ -165,6 +172,9 @@
+ #define CNST_L2L3_GROUP_VAL(v)	(((v) & 0x1full) << 55)
+ #define CNST_L2L3_GROUP_MASK	CNST_L2L3_GROUP_VAL(0x1f)
+ 
++#define CNST_RADIX_SCOPE_GROUP_VAL(v)	(((v) & 0x1ull) << 21)
++#define CNST_RADIX_SCOPE_GROUP_MASK	CNST_RADIX_SCOPE_GROUP_VAL(1)
++
+ /*
+  * For NC we are counting up to 4 events. This requires three bits, and we need
+  * the fifth event to overflow and set the 4th bit. To achieve that we bias the
+@@ -221,6 +231,10 @@
+ #define MMCRA_THR_CTR_EXP(v)		(((v) >> MMCRA_THR_CTR_EXP_SHIFT) &\
+ 						MMCRA_THR_CTR_EXP_MASK)
+ 
++#define P10_MMCRA_THR_CTR_MANT_MASK	0xFFul
++#define P10_MMCRA_THR_CTR_MANT(v)	(((v) >> MMCRA_THR_CTR_MANT_SHIFT) &\
++						P10_MMCRA_THR_CTR_MANT_MASK)
++
+ /* MMCRA Threshold Compare bit constant for power9 */
+ #define p9_MMCRA_THR_CMP_SHIFT	45
+ 
+diff --git a/arch/powerpc/perf/power10-pmu.c b/arch/powerpc/perf/power10-pmu.c
+index 9dbe8f9b89b4f..cf44fb7446130 100644
+--- a/arch/powerpc/perf/power10-pmu.c
++++ b/arch/powerpc/perf/power10-pmu.c
+@@ -23,10 +23,10 @@
+  *
+  *        28        24        20        16        12         8         4         0
+  * | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - |
+- *   [   ] [  sample ]   [ ] [ ]   [ pmc ]   [unit ]   [ ]   m   [    pmcxsel    ]
+- *     |        |        |    |                        |     |
+- *     |        |        |    |                        |     *- mark
+- *     |        |        |    *- L1/L2/L3 cache_sel    |
++ *   [   ] [  sample ]   [ ] [ ]   [ pmc ]   [unit ]   [ ] |  m   [    pmcxsel    ]
++ *     |        |        |    |                        |   |  |
++ *     |        |        |    |                        |   |  *- mark
++ *     |        |        |    *- L1/L2/L3 cache_sel    |   |*-radix_scope_qual
+  *     |        |        sdar_mode                     |
+  *     |        *- sampling mode for marked events     *- combine
+  *     |
+@@ -59,6 +59,7 @@
+  *
+  * MMCR1[16] = cache_sel[0]
+  * MMCR1[17] = cache_sel[1]
++ * MMCR1[18] = radix_scope_qual
+  *
+  * if mark:
+  *	MMCRA[63]    = 1		(SAMPLE_ENABLE)
+@@ -175,6 +176,7 @@ PMU_FORMAT_ATTR(src_sel,        "config:45-46");
+ PMU_FORMAT_ATTR(invert_bit,     "config:47");
+ PMU_FORMAT_ATTR(src_mask,       "config:48-53");
+ PMU_FORMAT_ATTR(src_match,      "config:54-59");
++PMU_FORMAT_ATTR(radix_scope,	"config:9");
+ 
+ static struct attribute *power10_pmu_format_attr[] = {
+ 	&format_attr_event.attr,
+@@ -194,6 +196,7 @@ static struct attribute *power10_pmu_format_attr[] = {
+ 	&format_attr_invert_bit.attr,
+ 	&format_attr_src_mask.attr,
+ 	&format_attr_src_match.attr,
++	&format_attr_radix_scope.attr,
+ 	NULL,
+ };
+ 
+diff --git a/arch/powerpc/platforms/8xx/micropatch.c b/arch/powerpc/platforms/8xx/micropatch.c
+index aed4bc75f3520..aef179fcbd4f8 100644
+--- a/arch/powerpc/platforms/8xx/micropatch.c
++++ b/arch/powerpc/platforms/8xx/micropatch.c
+@@ -360,6 +360,17 @@ void __init cpm_load_patch(cpm8xx_t *cp)
+ 	if (IS_ENABLED(CONFIG_SMC_UCODE_PATCH)) {
+ 		smc_uart_t *smp;
+ 
++		if (IS_ENABLED(CONFIG_PPC_EARLY_DEBUG_CPM)) {
++			int i;
++
++			for (i = 0; i < sizeof(*smp); i += 4) {
++				u32 __iomem *src = (u32 __iomem *)&cp->cp_dparam[PROFF_SMC1 + i];
++				u32 __iomem *dst = (u32 __iomem *)&cp->cp_dparam[PROFF_DSP1 + i];
++
++				out_be32(dst, in_be32(src));
++			}
++		}
++
+ 		smp = (smc_uart_t *)&cp->cp_dparam[PROFF_SMC1];
+ 		out_be16(&smp->smc_rpbase, 0x1ec0);
+ 		smp = (smc_uart_t *)&cp->cp_dparam[PROFF_SMC2];
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index c194c4ae8bc7d..32a9c4c09b989 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -36,7 +36,7 @@ config PPC_BOOK3S_6xx
+ 	select PPC_HAVE_PMU_SUPPORT
+ 	select PPC_HAVE_KUEP
+ 	select PPC_HAVE_KUAP
+-	select HAVE_ARCH_VMAP_STACK if !ADB_PMU
++	select HAVE_ARCH_VMAP_STACK
+ 
+ config PPC_85xx
+ 	bool "Freescale 85xx"
+diff --git a/arch/powerpc/platforms/powermac/sleep.S b/arch/powerpc/platforms/powermac/sleep.S
+index 7e0f8ba6e54a5..d497a60003d2d 100644
+--- a/arch/powerpc/platforms/powermac/sleep.S
++++ b/arch/powerpc/platforms/powermac/sleep.S
+@@ -44,7 +44,8 @@
+ #define SL_TB		0xa0
+ #define SL_R2		0xa8
+ #define SL_CR		0xac
+-#define SL_R12		0xb0	/* r12 to r31 */
++#define SL_LR		0xb0
++#define SL_R12		0xb4	/* r12 to r31 */
+ #define SL_SIZE		(SL_R12 + 80)
+ 
+ 	.section .text
+@@ -63,105 +64,107 @@ _GLOBAL(low_sleep_handler)
+ 	blr
+ #else
+ 	mflr	r0
+-	stw	r0,4(r1)
+-	stwu	r1,-SL_SIZE(r1)
++	lis	r11,sleep_storage@ha
++	addi	r11,r11,sleep_storage@l
++	stw	r0,SL_LR(r11)
+ 	mfcr	r0
+-	stw	r0,SL_CR(r1)
+-	stw	r2,SL_R2(r1)
+-	stmw	r12,SL_R12(r1)
++	stw	r0,SL_CR(r11)
++	stw	r1,SL_SP(r11)
++	stw	r2,SL_R2(r11)
++	stmw	r12,SL_R12(r11)
+ 
+ 	/* Save MSR & SDR1 */
+ 	mfmsr	r4
+-	stw	r4,SL_MSR(r1)
++	stw	r4,SL_MSR(r11)
+ 	mfsdr1	r4
+-	stw	r4,SL_SDR1(r1)
++	stw	r4,SL_SDR1(r11)
+ 
+ 	/* Get a stable timebase and save it */
+ 1:	mftbu	r4
+-	stw	r4,SL_TB(r1)
++	stw	r4,SL_TB(r11)
+ 	mftb	r5
+-	stw	r5,SL_TB+4(r1)
++	stw	r5,SL_TB+4(r11)
+ 	mftbu	r3
+ 	cmpw	r3,r4
+ 	bne	1b
+ 
+ 	/* Save SPRGs */
+ 	mfsprg	r4,0
+-	stw	r4,SL_SPRG0(r1)
++	stw	r4,SL_SPRG0(r11)
+ 	mfsprg	r4,1
+-	stw	r4,SL_SPRG0+4(r1)
++	stw	r4,SL_SPRG0+4(r11)
+ 	mfsprg	r4,2
+-	stw	r4,SL_SPRG0+8(r1)
++	stw	r4,SL_SPRG0+8(r11)
+ 	mfsprg	r4,3
+-	stw	r4,SL_SPRG0+12(r1)
++	stw	r4,SL_SPRG0+12(r11)
+ 
+ 	/* Save BATs */
+ 	mfdbatu	r4,0
+-	stw	r4,SL_DBAT0(r1)
++	stw	r4,SL_DBAT0(r11)
+ 	mfdbatl	r4,0
+-	stw	r4,SL_DBAT0+4(r1)
++	stw	r4,SL_DBAT0+4(r11)
+ 	mfdbatu	r4,1
+-	stw	r4,SL_DBAT1(r1)
++	stw	r4,SL_DBAT1(r11)
+ 	mfdbatl	r4,1
+-	stw	r4,SL_DBAT1+4(r1)
++	stw	r4,SL_DBAT1+4(r11)
+ 	mfdbatu	r4,2
+-	stw	r4,SL_DBAT2(r1)
++	stw	r4,SL_DBAT2(r11)
+ 	mfdbatl	r4,2
+-	stw	r4,SL_DBAT2+4(r1)
++	stw	r4,SL_DBAT2+4(r11)
+ 	mfdbatu	r4,3
+-	stw	r4,SL_DBAT3(r1)
++	stw	r4,SL_DBAT3(r11)
+ 	mfdbatl	r4,3
+-	stw	r4,SL_DBAT3+4(r1)
++	stw	r4,SL_DBAT3+4(r11)
+ 	mfibatu	r4,0
+-	stw	r4,SL_IBAT0(r1)
++	stw	r4,SL_IBAT0(r11)
+ 	mfibatl	r4,0
+-	stw	r4,SL_IBAT0+4(r1)
++	stw	r4,SL_IBAT0+4(r11)
+ 	mfibatu	r4,1
+-	stw	r4,SL_IBAT1(r1)
++	stw	r4,SL_IBAT1(r11)
+ 	mfibatl	r4,1
+-	stw	r4,SL_IBAT1+4(r1)
++	stw	r4,SL_IBAT1+4(r11)
+ 	mfibatu	r4,2
+-	stw	r4,SL_IBAT2(r1)
++	stw	r4,SL_IBAT2(r11)
+ 	mfibatl	r4,2
+-	stw	r4,SL_IBAT2+4(r1)
++	stw	r4,SL_IBAT2+4(r11)
+ 	mfibatu	r4,3
+-	stw	r4,SL_IBAT3(r1)
++	stw	r4,SL_IBAT3(r11)
+ 	mfibatl	r4,3
+-	stw	r4,SL_IBAT3+4(r1)
++	stw	r4,SL_IBAT3+4(r11)
+ 
+ BEGIN_MMU_FTR_SECTION
+ 	mfspr	r4,SPRN_DBAT4U
+-	stw	r4,SL_DBAT4(r1)
++	stw	r4,SL_DBAT4(r11)
+ 	mfspr	r4,SPRN_DBAT4L
+-	stw	r4,SL_DBAT4+4(r1)
++	stw	r4,SL_DBAT4+4(r11)
+ 	mfspr	r4,SPRN_DBAT5U
+-	stw	r4,SL_DBAT5(r1)
++	stw	r4,SL_DBAT5(r11)
+ 	mfspr	r4,SPRN_DBAT5L
+-	stw	r4,SL_DBAT5+4(r1)
++	stw	r4,SL_DBAT5+4(r11)
+ 	mfspr	r4,SPRN_DBAT6U
+-	stw	r4,SL_DBAT6(r1)
++	stw	r4,SL_DBAT6(r11)
+ 	mfspr	r4,SPRN_DBAT6L
+-	stw	r4,SL_DBAT6+4(r1)
++	stw	r4,SL_DBAT6+4(r11)
+ 	mfspr	r4,SPRN_DBAT7U
+-	stw	r4,SL_DBAT7(r1)
++	stw	r4,SL_DBAT7(r11)
+ 	mfspr	r4,SPRN_DBAT7L
+-	stw	r4,SL_DBAT7+4(r1)
++	stw	r4,SL_DBAT7+4(r11)
+ 	mfspr	r4,SPRN_IBAT4U
+-	stw	r4,SL_IBAT4(r1)
++	stw	r4,SL_IBAT4(r11)
+ 	mfspr	r4,SPRN_IBAT4L
+-	stw	r4,SL_IBAT4+4(r1)
++	stw	r4,SL_IBAT4+4(r11)
+ 	mfspr	r4,SPRN_IBAT5U
+-	stw	r4,SL_IBAT5(r1)
++	stw	r4,SL_IBAT5(r11)
+ 	mfspr	r4,SPRN_IBAT5L
+-	stw	r4,SL_IBAT5+4(r1)
++	stw	r4,SL_IBAT5+4(r11)
+ 	mfspr	r4,SPRN_IBAT6U
+-	stw	r4,SL_IBAT6(r1)
++	stw	r4,SL_IBAT6(r11)
+ 	mfspr	r4,SPRN_IBAT6L
+-	stw	r4,SL_IBAT6+4(r1)
++	stw	r4,SL_IBAT6+4(r11)
+ 	mfspr	r4,SPRN_IBAT7U
+-	stw	r4,SL_IBAT7(r1)
++	stw	r4,SL_IBAT7(r11)
+ 	mfspr	r4,SPRN_IBAT7L
+-	stw	r4,SL_IBAT7+4(r1)
++	stw	r4,SL_IBAT7+4(r11)
+ END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
+ 
+ 	/* Backup various CPU config stuffs */
+@@ -180,9 +183,9 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
+ 	lis	r5,grackle_wake_up@ha
+ 	addi	r5,r5,grackle_wake_up@l
+ 	tophys(r5,r5)
+-	stw	r5,SL_PC(r1)
++	stw	r5,SL_PC(r11)
+ 	lis	r4,KERNELBASE@h
+-	tophys(r5,r1)
++	tophys(r5,r11)
+ 	addi	r5,r5,SL_PC
+ 	lis	r6,MAGIC@ha
+ 	addi	r6,r6,MAGIC@l
+@@ -194,12 +197,6 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
+ 	tophys(r3,r3)
+ 	stw	r3,0x80(r4)
+ 	stw	r5,0x84(r4)
+-	/* Store a pointer to our backup storage into
+-	 * a kernel global
+-	 */
+-	lis r3,sleep_storage@ha
+-	addi r3,r3,sleep_storage@l
+-	stw r5,0(r3)
+ 
+ 	.globl	low_cpu_offline_self
+ low_cpu_offline_self:
+@@ -279,7 +276,7 @@ _GLOBAL(core99_wake_up)
+ 	lis	r3,sleep_storage@ha
+ 	addi	r3,r3,sleep_storage@l
+ 	tophys(r3,r3)
+-	lwz	r1,0(r3)
++	addi	r1,r3,SL_PC
+ 
+ 	/* Pass thru to older resume code ... */
+ _ASM_NOKPROBE_SYMBOL(core99_wake_up)
+@@ -399,13 +396,6 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
+ 	blt	1b
+ 	sync
+ 
+-	/* restore the MSR and turn on the MMU */
+-	lwz	r3,SL_MSR(r1)
+-	bl	turn_on_mmu
+-
+-	/* get back the stack pointer */
+-	tovirt(r1,r1)
+-
+ 	/* Restore TB */
+ 	li	r3,0
+ 	mttbl	r3
+@@ -419,28 +409,24 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
+ 	mtcr	r0
+ 	lwz	r2,SL_R2(r1)
+ 	lmw	r12,SL_R12(r1)
+-	addi	r1,r1,SL_SIZE
+-	lwz	r0,4(r1)
+-	mtlr	r0
+-	blr
+-_ASM_NOKPROBE_SYMBOL(grackle_wake_up)
+ 
+-turn_on_mmu:
+-	mflr	r4
+-	tovirt(r4,r4)
++	/* restore the MSR and SP and turn on the MMU and return */
++	lwz	r3,SL_MSR(r1)
++	lwz	r4,SL_LR(r1)
++	lwz	r1,SL_SP(r1)
+ 	mtsrr0	r4
+ 	mtsrr1	r3
+ 	sync
+ 	isync
+ 	rfi
+-_ASM_NOKPROBE_SYMBOL(turn_on_mmu)
++_ASM_NOKPROBE_SYMBOL(grackle_wake_up)
+ 
+ #endif /* defined(CONFIG_PM) || defined(CONFIG_CPU_FREQ) */
+ 
+-	.section .data
++	.section .bss
+ 	.balign	L1_CACHE_BYTES
+ sleep_storage:
+-	.long 0
++	.space SL_SIZE
+ 	.balign	L1_CACHE_BYTES, 0
+ 
+ #endif /* CONFIG_PPC_BOOK3S_32 */
+diff --git a/arch/powerpc/platforms/powernv/memtrace.c b/arch/powerpc/platforms/powernv/memtrace.c
+index 6828108486f83..0e42fe2d7b6ac 100644
+--- a/arch/powerpc/platforms/powernv/memtrace.c
++++ b/arch/powerpc/platforms/powernv/memtrace.c
+@@ -30,6 +30,7 @@ struct memtrace_entry {
+ 	char name[16];
+ };
+ 
++static DEFINE_MUTEX(memtrace_mutex);
+ static u64 memtrace_size;
+ 
+ static struct memtrace_entry *memtrace_array;
+@@ -67,6 +68,23 @@ static int change_memblock_state(struct memory_block *mem, void *arg)
+ 	return 0;
+ }
+ 
++static void memtrace_clear_range(unsigned long start_pfn,
++				 unsigned long nr_pages)
++{
++	unsigned long pfn;
++
++	/*
++	 * As pages are offline, we cannot trust the memmap anymore. As HIGHMEM
++	 * does not apply, avoid passing around "struct page" and use
++	 * clear_page() instead directly.
++	 */
++	for (pfn = start_pfn; pfn < start_pfn + nr_pages; pfn++) {
++		if (IS_ALIGNED(pfn, PAGES_PER_SECTION))
++			cond_resched();
++		clear_page(__va(PFN_PHYS(pfn)));
++	}
++}
++
+ /* called with device_hotplug_lock held */
+ static bool memtrace_offline_pages(u32 nid, u64 start_pfn, u64 nr_pages)
+ {
+@@ -111,6 +129,11 @@ static u64 memtrace_alloc_node(u32 nid, u64 size)
+ 	lock_device_hotplug();
+ 	for (base_pfn = end_pfn; base_pfn > start_pfn; base_pfn -= nr_pages) {
+ 		if (memtrace_offline_pages(nid, base_pfn, nr_pages) == true) {
++			/*
++			 * Clear the range while we still have a linear
++			 * mapping.
++			 */
++			memtrace_clear_range(base_pfn, nr_pages);
+ 			/*
+ 			 * Remove memory in memory block size chunks so that
+ 			 * iomem resources are always split to the same size and
+@@ -257,6 +280,7 @@ static int memtrace_online(void)
+ 
+ static int memtrace_enable_set(void *data, u64 val)
+ {
++	int rc = -EAGAIN;
+ 	u64 bytes;
+ 
+ 	/*
+@@ -269,25 +293,31 @@ static int memtrace_enable_set(void *data, u64 val)
+ 		return -EINVAL;
+ 	}
+ 
++	mutex_lock(&memtrace_mutex);
++
+ 	/* Re-add/online previously removed/offlined memory */
+ 	if (memtrace_size) {
+ 		if (memtrace_online())
+-			return -EAGAIN;
++			goto out_unlock;
+ 	}
+ 
+-	if (!val)
+-		return 0;
++	if (!val) {
++		rc = 0;
++		goto out_unlock;
++	}
+ 
+ 	/* Offline and remove memory */
+ 	if (memtrace_init_regions_runtime(val))
+-		return -EINVAL;
++		goto out_unlock;
+ 
+ 	if (memtrace_init_debugfs())
+-		return -EINVAL;
++		goto out_unlock;
+ 
+ 	memtrace_size = val;
+-
+-	return 0;
++	rc = 0;
++out_unlock:
++	mutex_unlock(&memtrace_mutex);
++	return rc;
+ }
+ 
+ static int memtrace_enable_get(void *data, u64 *val)
+diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
+index abeaa533b976b..b711dc3262a30 100644
+--- a/arch/powerpc/platforms/powernv/npu-dma.c
++++ b/arch/powerpc/platforms/powernv/npu-dma.c
+@@ -385,7 +385,8 @@ static void pnv_npu_peers_take_ownership(struct iommu_table_group *table_group)
+ 	for (i = 0; i < npucomp->pe_num; ++i) {
+ 		struct pnv_ioda_pe *pe = npucomp->pe[i];
+ 
+-		if (!pe->table_group.ops->take_ownership)
++		if (!pe->table_group.ops ||
++		    !pe->table_group.ops->take_ownership)
+ 			continue;
+ 		pe->table_group.ops->take_ownership(&pe->table_group);
+ 	}
+@@ -401,7 +402,8 @@ static void pnv_npu_peers_release_ownership(
+ 	for (i = 0; i < npucomp->pe_num; ++i) {
+ 		struct pnv_ioda_pe *pe = npucomp->pe[i];
+ 
+-		if (!pe->table_group.ops->release_ownership)
++		if (!pe->table_group.ops ||
++		    !pe->table_group.ops->release_ownership)
+ 			continue;
+ 		pe->table_group.ops->release_ownership(&pe->table_group);
+ 	}
+@@ -623,6 +625,11 @@ int pnv_npu2_map_lpar_dev(struct pci_dev *gpdev, unsigned int lparid,
+ 		return -ENODEV;
+ 
+ 	hose = pci_bus_to_host(npdev->bus);
++	if (hose->npu == NULL) {
++		dev_info_once(&npdev->dev, "Nvlink1 does not support contexts");
++		return 0;
++	}
++
+ 	nphb = hose->private_data;
+ 
+ 	dev_dbg(&gpdev->dev, "Map LPAR opalid=%llu lparid=%u\n",
+@@ -670,6 +677,11 @@ int pnv_npu2_unmap_lpar_dev(struct pci_dev *gpdev)
+ 		return -ENODEV;
+ 
+ 	hose = pci_bus_to_host(npdev->bus);
++	if (hose->npu == NULL) {
++		dev_info_once(&npdev->dev, "Nvlink1 does not support contexts");
++		return 0;
++	}
++
+ 	nphb = hose->private_data;
+ 
+ 	dev_dbg(&gpdev->dev, "destroy context opalid=%llu\n",
+diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c
+index c4434f20f42fa..28aac933a4391 100644
+--- a/arch/powerpc/platforms/powernv/pci-sriov.c
++++ b/arch/powerpc/platforms/powernv/pci-sriov.c
+@@ -422,7 +422,7 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs)
+ {
+ 	struct pnv_iov_data   *iov;
+ 	struct pnv_phb        *phb;
+-	unsigned int           win;
++	int                    win;
+ 	struct resource       *res;
+ 	int                    i, j;
+ 	int64_t                rc;
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index a02012f1b04af..12cbffd3c2e32 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -746,6 +746,7 @@ static int dlpar_cpu_add_by_count(u32 cpus_to_add)
+ 	parent = of_find_node_by_path("/cpus");
+ 	if (!parent) {
+ 		pr_warn("Could not find CPU root node in device tree\n");
++		kfree(cpu_drcs);
+ 		return -1;
+ 	}
+ 
+diff --git a/arch/powerpc/platforms/pseries/suspend.c b/arch/powerpc/platforms/pseries/suspend.c
+index 81e0ac58d6204..64b36a93c33a6 100644
+--- a/arch/powerpc/platforms/pseries/suspend.c
++++ b/arch/powerpc/platforms/pseries/suspend.c
+@@ -13,7 +13,6 @@
+ #include <asm/mmu.h>
+ #include <asm/rtas.h>
+ #include <asm/topology.h>
+-#include "../../kernel/cacheinfo.h"
+ 
+ static u64 stream_id;
+ static struct device suspend_dev;
+@@ -78,9 +77,7 @@ static void pseries_suspend_enable_irqs(void)
+ 	 * Update configuration which can be modified based on device tree
+ 	 * changes during resume.
+ 	 */
+-	cacheinfo_cpu_offline(smp_processor_id());
+ 	post_mobility_fixup();
+-	cacheinfo_cpu_online(smp_processor_id());
+ }
+ 
+ /**
+@@ -187,7 +184,6 @@ static struct bus_type suspend_subsys = {
+ 
+ static const struct platform_suspend_ops pseries_suspend_ops = {
+ 	.valid		= suspend_valid_only_mem,
+-	.begin		= pseries_suspend_begin,
+ 	.prepare_late	= pseries_prepare_late,
+ 	.enter		= pseries_suspend_enter,
+ };
+diff --git a/arch/powerpc/xmon/nonstdio.c b/arch/powerpc/xmon/nonstdio.c
+index 5c1a50912229a..9b0d85bff021e 100644
+--- a/arch/powerpc/xmon/nonstdio.c
++++ b/arch/powerpc/xmon/nonstdio.c
+@@ -178,7 +178,7 @@ void xmon_printf(const char *format, ...)
+ 
+ 	if (n && rc == 0) {
+ 		/* No udbg hooks, fallback to printk() - dangerous */
+-		printk("%s", xmon_outbuf);
++		pr_cont("%s", xmon_outbuf);
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
+index 55c43a6c91112..5559edf36756c 100644
+--- a/arch/powerpc/xmon/xmon.c
++++ b/arch/powerpc/xmon/xmon.c
+@@ -1383,6 +1383,7 @@ static long check_bp_loc(unsigned long addr)
+ 	return 1;
+ }
+ 
++#ifndef CONFIG_PPC_8xx
+ static int find_free_data_bpt(void)
+ {
+ 	int i;
+@@ -1394,6 +1395,7 @@ static int find_free_data_bpt(void)
+ 	printf("Couldn't find free breakpoint register\n");
+ 	return -1;
+ }
++#endif
+ 
+ static void print_data_bpts(void)
+ {
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 8e577f14f1205..e4133c20744ce 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -174,7 +174,7 @@ void __init setup_bootmem(void)
+ 	 * Make sure that any memory beyond mem_start + (-PAGE_OFFSET) is removed
+ 	 * as it is unusable by kernel.
+ 	 */
+-	memblock_enforce_memory_limit(mem_start - PAGE_OFFSET);
++	memblock_enforce_memory_limit(-PAGE_OFFSET);
+ 
+ 	/* Reserve from the start of the kernel to the end of the kernel */
+ 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 92beb14446449..6343dca0dbeb6 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -110,9 +110,9 @@ _LPP_OFFSET	= __LC_LPP
+ #endif
+ 	.endm
+ 
+-	.macro	SWITCH_ASYNC savearea,timer
++	.macro	SWITCH_ASYNC savearea,timer,clock
+ 	tmhh	%r8,0x0001		# interrupting from user ?
+-	jnz	2f
++	jnz	4f
+ #if IS_ENABLED(CONFIG_KVM)
+ 	lgr	%r14,%r9
+ 	larl	%r13,.Lsie_gmap
+@@ -125,10 +125,26 @@ _LPP_OFFSET	= __LC_LPP
+ #endif
+ 0:	larl	%r13,.Lpsw_idle_exit
+ 	cgr	%r13,%r9
+-	jne	1f
++	jne	3f
+ 
+-	mvc	__CLOCK_IDLE_EXIT(8,%r2), __LC_INT_CLOCK
+-	mvc	__TIMER_IDLE_EXIT(8,%r2), __LC_ASYNC_ENTER_TIMER
++	larl	%r1,smp_cpu_mtid
++	llgf	%r1,0(%r1)
++	ltgr	%r1,%r1
++	jz	2f			# no SMT, skip mt_cycles calculation
++	.insn	rsy,0xeb0000000017,%r1,5,__SF_EMPTY+80(%r15)
++	larl	%r3,mt_cycles
++	ag	%r3,__LC_PERCPU_OFFSET
++	la	%r4,__SF_EMPTY+16(%r15)
++1:	lg	%r0,0(%r3)
++	slg	%r0,0(%r4)
++	alg	%r0,64(%r4)
++	stg	%r0,0(%r3)
++	la	%r3,8(%r3)
++	la	%r4,8(%r4)
++	brct	%r1,1b
++
++2:	mvc	__CLOCK_IDLE_EXIT(8,%r2), \clock
++	mvc	__TIMER_IDLE_EXIT(8,%r2), \timer
+ 	# account system time going idle
+ 	ni	__LC_CPU_FLAGS+7,255-_CIF_ENABLED_WAIT
+ 
+@@ -146,17 +162,17 @@ _LPP_OFFSET	= __LC_LPP
+ 	mvc	__LC_LAST_UPDATE_TIMER(8),__TIMER_IDLE_EXIT(%r2)
+ 
+ 	nihh	%r8,0xfcfd		# clear wait state and irq bits
+-1:	lg	%r14,__LC_ASYNC_STACK	# are we already on the target stack?
++3:	lg	%r14,__LC_ASYNC_STACK	# are we already on the target stack?
+ 	slgr	%r14,%r15
+ 	srag	%r14,%r14,STACK_SHIFT
+-	jnz	3f
++	jnz	5f
+ 	CHECK_STACK \savearea
+ 	aghi	%r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE)
+-	j	4f
+-2:	UPDATE_VTIME %r14,%r15,\timer
++	j	6f
++4:	UPDATE_VTIME %r14,%r15,\timer
+ 	BPENTER __TI_flags(%r12),_TIF_ISOLATE_BP
+-3:	lg	%r15,__LC_ASYNC_STACK	# load async stack
+-4:	la	%r11,STACK_FRAME_OVERHEAD(%r15)
++5:	lg	%r15,__LC_ASYNC_STACK	# load async stack
++6:	la	%r11,STACK_FRAME_OVERHEAD(%r15)
+ 	.endm
+ 
+ 	.macro UPDATE_VTIME w1,w2,enter_timer
+@@ -745,7 +761,7 @@ ENTRY(io_int_handler)
+ 	stmg	%r8,%r15,__LC_SAVE_AREA_ASYNC
+ 	lg	%r12,__LC_CURRENT
+ 	lmg	%r8,%r9,__LC_IO_OLD_PSW
+-	SWITCH_ASYNC __LC_SAVE_AREA_ASYNC,__LC_ASYNC_ENTER_TIMER
++	SWITCH_ASYNC __LC_SAVE_AREA_ASYNC,__LC_ASYNC_ENTER_TIMER,__LC_INT_CLOCK
+ 	stmg	%r0,%r7,__PT_R0(%r11)
+ 	# clear user controlled registers to prevent speculative use
+ 	xgr	%r0,%r0
+@@ -945,7 +961,7 @@ ENTRY(ext_int_handler)
+ 	stmg	%r8,%r15,__LC_SAVE_AREA_ASYNC
+ 	lg	%r12,__LC_CURRENT
+ 	lmg	%r8,%r9,__LC_EXT_OLD_PSW
+-	SWITCH_ASYNC __LC_SAVE_AREA_ASYNC,__LC_ASYNC_ENTER_TIMER
++	SWITCH_ASYNC __LC_SAVE_AREA_ASYNC,__LC_ASYNC_ENTER_TIMER,__LC_INT_CLOCK
+ 	stmg	%r0,%r7,__PT_R0(%r11)
+ 	# clear user controlled registers to prevent speculative use
+ 	xgr	%r0,%r0
+@@ -1167,7 +1183,7 @@ ENTRY(mcck_int_handler)
+ 	TSTMSK	__LC_MCCK_CODE,MCCK_CODE_PSW_IA_VALID
+ 	jno	.Lmcck_panic
+ 4:	ssm	__LC_PGM_NEW_PSW	# turn dat on, keep irqs off
+-	SWITCH_ASYNC __LC_GPREGS_SAVE_AREA+64,__LC_MCCK_ENTER_TIMER
++	SWITCH_ASYNC __LC_GPREGS_SAVE_AREA+64,__LC_MCCK_ENTER_TIMER,__LC_MCCK_CLOCK
+ .Lmcck_skip:
+ 	lghi	%r14,__LC_GPREGS_SAVE_AREA+64
+ 	stmg	%r0,%r7,__PT_R0(%r11)
+diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
+index 390d97daa2b3f..3a0d545f0ce84 100644
+--- a/arch/s390/kernel/smp.c
++++ b/arch/s390/kernel/smp.c
+@@ -896,24 +896,12 @@ static void __no_sanitize_address smp_start_secondary(void *cpuvoid)
+ /* Upping and downing of CPUs */
+ int __cpu_up(unsigned int cpu, struct task_struct *tidle)
+ {
+-	struct pcpu *pcpu;
+-	int base, i, rc;
++	struct pcpu *pcpu = pcpu_devices + cpu;
++	int rc;
+ 
+-	pcpu = pcpu_devices + cpu;
+ 	if (pcpu->state != CPU_STATE_CONFIGURED)
+ 		return -EIO;
+-	base = smp_get_base_cpu(cpu);
+-	for (i = 0; i <= smp_cpu_mtid; i++) {
+-		if (base + i < nr_cpu_ids)
+-			if (cpu_online(base + i))
+-				break;
+-	}
+-	/*
+-	 * If this is the first CPU of the core to get online
+-	 * do an initial CPU reset.
+-	 */
+-	if (i > smp_cpu_mtid &&
+-	    pcpu_sigp_retry(pcpu_devices + base, SIGP_INITIAL_CPU_RESET, 0) !=
++	if (pcpu_sigp_retry(pcpu, SIGP_INITIAL_CPU_RESET, 0) !=
+ 	    SIGP_CC_ORDER_CODE_ACCEPTED)
+ 		return -EIO;
+ 
+diff --git a/arch/s390/lib/test_unwind.c b/arch/s390/lib/test_unwind.c
+index 7c988994931f0..6bad84c372dcb 100644
+--- a/arch/s390/lib/test_unwind.c
++++ b/arch/s390/lib/test_unwind.c
+@@ -205,12 +205,15 @@ static noinline int unwindme_func3(struct unwindme *u)
+ /* This function must appear in the backtrace. */
+ static noinline int unwindme_func2(struct unwindme *u)
+ {
++	unsigned long flags;
+ 	int rc;
+ 
+ 	if (u->flags & UWM_SWITCH_STACK) {
+-		preempt_disable();
++		local_irq_save(flags);
++		local_mcck_disable();
+ 		rc = CALL_ON_STACK(unwindme_func3, S390_lowcore.nodat_stack, 1, u);
+-		preempt_enable();
++		local_mcck_enable();
++		local_irq_restore(flags);
+ 		return rc;
+ 	} else {
+ 		return unwindme_func3(u);
+diff --git a/arch/s390/purgatory/head.S b/arch/s390/purgatory/head.S
+index 5a10ce34b95d1..3d1c31e0cf3dd 100644
+--- a/arch/s390/purgatory/head.S
++++ b/arch/s390/purgatory/head.S
+@@ -62,14 +62,15 @@
+ 	jh	10b
+ .endm
+ 
+-.macro START_NEXT_KERNEL base
++.macro START_NEXT_KERNEL base subcode
+ 	lg	%r4,kernel_entry-\base(%r13)
+ 	lg	%r5,load_psw_mask-\base(%r13)
+ 	ogr	%r4,%r5
+ 	stg	%r4,0(%r0)
+ 
+ 	xgr	%r0,%r0
+-	diag	%r0,%r0,0x308
++	lghi	%r1,\subcode
++	diag	%r0,%r1,0x308
+ .endm
+ 
+ .text
+@@ -123,7 +124,7 @@ ENTRY(purgatory_start)
+ 	je	.start_crash_kernel
+ 
+ 	/* start normal kernel */
+-	START_NEXT_KERNEL .base_crash
++	START_NEXT_KERNEL .base_crash 0
+ 
+ .return_old_kernel:
+ 	lmg	%r6,%r15,gprregs-.base_crash(%r13)
+@@ -227,7 +228,7 @@ ENTRY(purgatory_start)
+ 	MEMCPY	%r9,%r10,%r11
+ 
+ 	/* start crash kernel */
+-	START_NEXT_KERNEL .base_dst
++	START_NEXT_KERNEL .base_dst 1
+ 
+ 
+ load_psw_mask:
+diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
+index 96edf64d4fb30..182bb7bdaa0a1 100644
+--- a/arch/sparc/mm/init_64.c
++++ b/arch/sparc/mm/init_64.c
+@@ -2894,7 +2894,7 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
+ 	if (!page)
+ 		return NULL;
+ 	if (!pgtable_pte_page_ctor(page)) {
+-		free_unref_page(page);
++		__free_page(page);
+ 		return NULL;
+ 	}
+ 	return (pte_t *) page_address(page);
+diff --git a/arch/um/drivers/chan_user.c b/arch/um/drivers/chan_user.c
+index 4d80526a4236e..d8845d4aac6a7 100644
+--- a/arch/um/drivers/chan_user.c
++++ b/arch/um/drivers/chan_user.c
+@@ -26,10 +26,10 @@ int generic_read(int fd, char *c_out, void *unused)
+ 	n = read(fd, c_out, sizeof(*c_out));
+ 	if (n > 0)
+ 		return n;
+-	else if (errno == EAGAIN)
+-		return 0;
+ 	else if (n == 0)
+ 		return -EIO;
++	else if (errno == EAGAIN)
++		return 0;
+ 	return -errno;
+ }
+ 
+diff --git a/arch/um/drivers/xterm.c b/arch/um/drivers/xterm.c
+index fc7f1e7467032..87ca4a47cd66e 100644
+--- a/arch/um/drivers/xterm.c
++++ b/arch/um/drivers/xterm.c
+@@ -18,6 +18,7 @@
+ struct xterm_chan {
+ 	int pid;
+ 	int helper_pid;
++	int chan_fd;
+ 	char *title;
+ 	int device;
+ 	int raw;
+@@ -33,6 +34,7 @@ static void *xterm_init(char *str, int device, const struct chan_opts *opts)
+ 		return NULL;
+ 	*data = ((struct xterm_chan) { .pid 		= -1,
+ 				       .helper_pid 	= -1,
++				       .chan_fd		= -1,
+ 				       .device 		= device,
+ 				       .title 		= opts->xterm_title,
+ 				       .raw  		= opts->raw } );
+@@ -149,6 +151,7 @@ static int xterm_open(int input, int output, int primary, void *d,
+ 		goto out_kill;
+ 	}
+ 
++	data->chan_fd = fd;
+ 	new = xterm_fd(fd, &data->helper_pid);
+ 	if (new < 0) {
+ 		err = new;
+@@ -206,6 +209,8 @@ static void xterm_close(int fd, void *d)
+ 		os_kill_process(data->helper_pid, 0);
+ 	data->helper_pid = -1;
+ 
++	if (data->chan_fd != -1)
++		os_close_file(data->chan_fd);
+ 	os_close_file(fd);
+ }
+ 
+diff --git a/arch/um/kernel/time.c b/arch/um/kernel/time.c
+index 3d109ff3309b2..8dafc3f2add42 100644
+--- a/arch/um/kernel/time.c
++++ b/arch/um/kernel/time.c
+@@ -260,11 +260,6 @@ static void __time_travel_add_event(struct time_travel_event *e,
+ 	struct time_travel_event *tmp;
+ 	bool inserted = false;
+ 
+-	if (WARN(time_travel_mode == TT_MODE_BASIC &&
+-		 e != &time_travel_timer_event,
+-		 "only timer events can be handled in basic mode"))
+-		return;
+-
+ 	if (e->pending)
+ 		return;
+ 
+diff --git a/arch/um/os-Linux/irq.c b/arch/um/os-Linux/irq.c
+index d508310ee5e1e..f1732c308c615 100644
+--- a/arch/um/os-Linux/irq.c
++++ b/arch/um/os-Linux/irq.c
+@@ -48,7 +48,7 @@ int os_epoll_triggered(int index, int events)
+ int os_event_mask(int irq_type)
+ {
+ 	if (irq_type == IRQ_READ)
+-		return EPOLLIN | EPOLLPRI;
++		return EPOLLIN | EPOLLPRI | EPOLLERR | EPOLLHUP | EPOLLRDHUP;
+ 	if (irq_type == IRQ_WRITE)
+ 		return EPOLLOUT;
+ 	return 0;
+diff --git a/arch/um/os-Linux/umid.c b/arch/um/os-Linux/umid.c
+index 1d7558dac75f3..a3dd61521d240 100644
+--- a/arch/um/os-Linux/umid.c
++++ b/arch/um/os-Linux/umid.c
+@@ -137,20 +137,13 @@ static inline int is_umdir_used(char *dir)
+ {
+ 	char pid[sizeof("nnnnnnnnn")], *end, *file;
+ 	int dead, fd, p, n, err;
+-	size_t filelen;
++	size_t filelen = strlen(dir) + sizeof("/pid") + 1;
+ 
+-	err = asprintf(&file, "%s/pid", dir);
+-	if (err < 0)
+-		return 0;
+-
+-	filelen = strlen(file);
++	file = malloc(filelen);
++	if (!file)
++		return -ENOMEM;
+ 
+-	n = snprintf(file, filelen, "%s/pid", dir);
+-	if (n >= filelen) {
+-		printk(UM_KERN_ERR "is_umdir_used - pid filename too long\n");
+-		err = -E2BIG;
+-		goto out;
+-	}
++	snprintf(file, filelen, "%s/pid", dir);
+ 
+ 	dead = 0;
+ 	fd = open(file, O_RDONLY);
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index af457f8cb29dd..7d4d89fa8647a 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -257,7 +257,8 @@ static struct event_constraint intel_icl_event_constraints[] = {
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0x48, 0x54, 0xf),
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0x60, 0x8b, 0xf),
+ 	INTEL_UEVENT_CONSTRAINT(0x04a3, 0xff),  /* CYCLE_ACTIVITY.STALLS_TOTAL */
+-	INTEL_UEVENT_CONSTRAINT(0x10a3, 0xff),  /* CYCLE_ACTIVITY.STALLS_MEM_ANY */
++	INTEL_UEVENT_CONSTRAINT(0x10a3, 0xff),  /* CYCLE_ACTIVITY.CYCLES_MEM_ANY */
++	INTEL_UEVENT_CONSTRAINT(0x14a3, 0xff),  /* CYCLE_ACTIVITY.STALLS_MEM_ANY */
+ 	INTEL_EVENT_CONSTRAINT(0xa3, 0xf),      /* CYCLE_ACTIVITY.* */
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0xa8, 0xb0, 0xf),
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0xb7, 0xbd, 0xf),
+@@ -5464,7 +5465,7 @@ __init int intel_pmu_init(void)
+ 		mem_attr = icl_events_attrs;
+ 		td_attr = icl_td_events_attrs;
+ 		tsx_attr = icl_tsx_events_attrs;
+-		x86_pmu.rtm_abort_event = X86_CONFIG(.event=0xca, .umask=0x02);
++		x86_pmu.rtm_abort_event = X86_CONFIG(.event=0xc9, .umask=0x04);
+ 		x86_pmu.lbr_pt_coexist = true;
+ 		intel_pmu_pebs_data_source_skl(pmem);
+ 		x86_pmu.update_topdown_event = icl_update_topdown_event;
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index 8961653c5dd2b..e2b0efcba1017 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -919,7 +919,7 @@ static __always_inline bool get_lbr_predicted(u64 info)
+ 	return !(info & LBR_INFO_MISPRED);
+ }
+ 
+-static __always_inline bool get_lbr_cycles(u64 info)
++static __always_inline u16 get_lbr_cycles(u64 info)
+ {
+ 	if (static_cpu_has(X86_FEATURE_ARCH_LBR) &&
+ 	    !(x86_pmu.lbr_timed_lbr && info & LBR_INFO_CYC_CNT_VALID))
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 4e3099d9ae625..57af25cb44f63 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -259,6 +259,7 @@ static inline u64 native_x2apic_icr_read(void)
+ 
+ extern int x2apic_mode;
+ extern int x2apic_phys;
++extern void __init x2apic_set_max_apicid(u32 apicid);
+ extern void __init check_x2apic(void);
+ extern void x2apic_setup(void);
+ static inline int x2apic_enabled(void)
+diff --git a/arch/x86/include/asm/cacheinfo.h b/arch/x86/include/asm/cacheinfo.h
+index 86b63c7feab75..86b2e0dcc4bfe 100644
+--- a/arch/x86/include/asm/cacheinfo.h
++++ b/arch/x86/include/asm/cacheinfo.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_X86_CACHEINFO_H
+ #define _ASM_X86_CACHEINFO_H
+ 
+-void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id);
+-void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id);
++void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu);
++void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu);
+ 
+ #endif /* _ASM_X86_CACHEINFO_H */
+diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
+index a0f147893a041..fc25c88c7ff29 100644
+--- a/arch/x86/include/asm/mce.h
++++ b/arch/x86/include/asm/mce.h
+@@ -177,7 +177,8 @@ enum mce_notifier_prios {
+ 	MCE_PRIO_EXTLOG,
+ 	MCE_PRIO_UC,
+ 	MCE_PRIO_EARLY,
+-	MCE_PRIO_CEC
++	MCE_PRIO_CEC,
++	MCE_PRIO_HIGHEST = MCE_PRIO_CEC
+ };
+ 
+ struct notifier_block;
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index b3eef1d5c9037..113f6ca7b8284 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -1841,20 +1841,22 @@ static __init void try_to_enable_x2apic(int remap_mode)
+ 		return;
+ 
+ 	if (remap_mode != IRQ_REMAP_X2APIC_MODE) {
+-		/* IR is required if there is APIC ID > 255 even when running
+-		 * under KVM
++		/*
++		 * Using X2APIC without IR is not architecturally supported
++		 * on bare metal but may be supported in guests.
+ 		 */
+-		if (max_physical_apicid > 255 ||
+-		    !x86_init.hyper.x2apic_available()) {
++		if (!x86_init.hyper.x2apic_available()) {
+ 			pr_info("x2apic: IRQ remapping doesn't support X2APIC mode\n");
+ 			x2apic_disable();
+ 			return;
+ 		}
+ 
+ 		/*
+-		 * without IR all CPUs can be addressed by IOAPIC/MSI
+-		 * only in physical mode
++		 * Without IR, all CPUs can be addressed by IOAPIC/MSI only
++		 * in physical mode, and CPUs with an APIC ID that cannnot
++		 * be addressed must not be brought online.
+ 		 */
++		x2apic_set_max_apicid(255);
+ 		x2apic_phys = 1;
+ 	}
+ 	x2apic_enable();
+diff --git a/arch/x86/kernel/apic/x2apic_phys.c b/arch/x86/kernel/apic/x2apic_phys.c
+index bc9693841353c..e14eae6d6ea71 100644
+--- a/arch/x86/kernel/apic/x2apic_phys.c
++++ b/arch/x86/kernel/apic/x2apic_phys.c
+@@ -8,6 +8,12 @@
+ int x2apic_phys;
+ 
+ static struct apic apic_x2apic_phys;
++static u32 x2apic_max_apicid __ro_after_init;
++
++void __init x2apic_set_max_apicid(u32 apicid)
++{
++	x2apic_max_apicid = apicid;
++}
+ 
+ static int __init set_x2apic_phys_mode(char *arg)
+ {
+@@ -98,6 +104,9 @@ static int x2apic_phys_probe(void)
+ /* Common x2apic functions, also used by x2apic_cluster */
+ int x2apic_apic_id_valid(u32 apicid)
+ {
++	if (x2apic_max_apicid && apicid > x2apic_max_apicid)
++		return 0;
++
+ 	return 1;
+ }
+ 
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 6062ce586b959..2f1fbd8150af7 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -330,7 +330,6 @@ static void legacy_fixup_core_id(struct cpuinfo_x86 *c)
+  */
+ static void amd_get_topology(struct cpuinfo_x86 *c)
+ {
+-	u8 node_id;
+ 	int cpu = smp_processor_id();
+ 
+ 	/* get information required for multi-node processors */
+@@ -340,7 +339,7 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
+ 
+ 		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
+ 
+-		node_id  = ecx & 0xff;
++		c->cpu_die_id  = ecx & 0xff;
+ 
+ 		if (c->x86 == 0x15)
+ 			c->cu_id = ebx & 0xff;
+@@ -360,15 +359,15 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
+ 		if (!err)
+ 			c->x86_coreid_bits = get_count_order(c->x86_max_cores);
+ 
+-		cacheinfo_amd_init_llc_id(c, cpu, node_id);
++		cacheinfo_amd_init_llc_id(c, cpu);
+ 
+ 	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
+ 		u64 value;
+ 
+ 		rdmsrl(MSR_FAM10H_NODE_ID, value);
+-		node_id = value & 7;
++		c->cpu_die_id = value & 7;
+ 
+-		per_cpu(cpu_llc_id, cpu) = node_id;
++		per_cpu(cpu_llc_id, cpu) = c->cpu_die_id;
+ 	} else
+ 		return;
+ 
+@@ -393,7 +392,7 @@ static void amd_detect_cmp(struct cpuinfo_x86 *c)
+ 	/* Convert the initial APIC ID into the socket ID */
+ 	c->phys_proc_id = c->initial_apicid >> bits;
+ 	/* use socket ID also for last level cache */
+-	per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
++	per_cpu(cpu_llc_id, cpu) = c->cpu_die_id = c->phys_proc_id;
+ }
+ 
+ static void amd_detect_ppin(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
+index 57074cf3ad7c1..f9ac682e75e78 100644
+--- a/arch/x86/kernel/cpu/cacheinfo.c
++++ b/arch/x86/kernel/cpu/cacheinfo.c
+@@ -646,7 +646,7 @@ static int find_num_cache_leaves(struct cpuinfo_x86 *c)
+ 	return i;
+ }
+ 
+-void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id)
++void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu)
+ {
+ 	/*
+ 	 * We may have multiple LLCs if L3 caches exist, so check if we
+@@ -657,7 +657,7 @@ void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id)
+ 
+ 	if (c->x86 < 0x17) {
+ 		/* LLC is at the node level. */
+-		per_cpu(cpu_llc_id, cpu) = node_id;
++		per_cpu(cpu_llc_id, cpu) = c->cpu_die_id;
+ 	} else if (c->x86 == 0x17 && c->x86_model <= 0x1F) {
+ 		/*
+ 		 * LLC is at the core complex level.
+@@ -684,7 +684,7 @@ void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id)
+ 	}
+ }
+ 
+-void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id)
++void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu)
+ {
+ 	/*
+ 	 * We may have multiple LLCs if L3 caches exist, so check if we
+diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
+index ac6c30e5801da..dc0840aae26c1 100644
+--- a/arch/x86/kernel/cpu/hygon.c
++++ b/arch/x86/kernel/cpu/hygon.c
+@@ -65,7 +65,6 @@ static void hygon_get_topology_early(struct cpuinfo_x86 *c)
+  */
+ static void hygon_get_topology(struct cpuinfo_x86 *c)
+ {
+-	u8 node_id;
+ 	int cpu = smp_processor_id();
+ 
+ 	/* get information required for multi-node processors */
+@@ -75,7 +74,7 @@ static void hygon_get_topology(struct cpuinfo_x86 *c)
+ 
+ 		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
+ 
+-		node_id  = ecx & 0xff;
++		c->cpu_die_id  = ecx & 0xff;
+ 
+ 		c->cpu_core_id = ebx & 0xff;
+ 
+@@ -93,14 +92,14 @@ static void hygon_get_topology(struct cpuinfo_x86 *c)
+ 		/* Socket ID is ApicId[6] for these processors. */
+ 		c->phys_proc_id = c->apicid >> APICID_SOCKET_ID_BIT;
+ 
+-		cacheinfo_hygon_init_llc_id(c, cpu, node_id);
++		cacheinfo_hygon_init_llc_id(c, cpu);
+ 	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
+ 		u64 value;
+ 
+ 		rdmsrl(MSR_FAM10H_NODE_ID, value);
+-		node_id = value & 7;
++		c->cpu_die_id = value & 7;
+ 
+-		per_cpu(cpu_llc_id, cpu) = node_id;
++		per_cpu(cpu_llc_id, cpu) = c->cpu_die_id;
+ 	} else
+ 		return;
+ 
+@@ -123,7 +122,7 @@ static void hygon_detect_cmp(struct cpuinfo_x86 *c)
+ 	/* Convert the initial APIC ID into the socket ID */
+ 	c->phys_proc_id = c->initial_apicid >> bits;
+ 	/* use socket ID also for last level cache */
+-	per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
++	per_cpu(cpu_llc_id, cpu) = c->cpu_die_id = c->phys_proc_id;
+ }
+ 
+ static void srat_detect_node(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 32b7099e35111..311688202ea51 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -162,7 +162,8 @@ EXPORT_SYMBOL_GPL(mce_log);
+ 
+ void mce_register_decode_chain(struct notifier_block *nb)
+ {
+-	if (WARN_ON(nb->priority > MCE_PRIO_MCELOG && nb->priority < MCE_PRIO_EDAC))
++	if (WARN_ON(nb->priority < MCE_PRIO_LOWEST ||
++		    nb->priority > MCE_PRIO_HIGHEST))
+ 		return;
+ 
+ 	blocking_notifier_chain_register(&x86_mce_decoder_chain, nb);
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 547c7abb39f51..39f7d8c3c064b 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -937,6 +937,11 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
+ 		 * So clear it by resetting the current kprobe:
+ 		 */
+ 		regs->flags &= ~X86_EFLAGS_TF;
++		/*
++		 * Since the single step (trap) has been cancelled,
++		 * we need to restore BTF here.
++		 */
++		restore_btf();
+ 
+ 		/*
+ 		 * If the TF flag was set before the kprobe hit,
+diff --git a/arch/x86/kernel/tboot.c b/arch/x86/kernel/tboot.c
+index ae64f98ec2ab6..4c09ba1102047 100644
+--- a/arch/x86/kernel/tboot.c
++++ b/arch/x86/kernel/tboot.c
+@@ -93,6 +93,7 @@ static struct mm_struct tboot_mm = {
+ 	.pgd            = swapper_pg_dir,
+ 	.mm_users       = ATOMIC_INIT(2),
+ 	.mm_count       = ATOMIC_INIT(1),
++	.write_protect_seq = SEQCNT_ZERO(tboot_mm.write_protect_seq),
+ 	MMAP_LOCK_INITIALIZER(init_mm)
+ 	.page_table_lock =  __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
+ 	.mmlist         = LIST_HEAD_INIT(init_mm.mmlist),
+diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
+index f7a6e8f83783c..dc921d76e42e8 100644
+--- a/arch/x86/kvm/cpuid.h
++++ b/arch/x86/kvm/cpuid.h
+@@ -264,6 +264,20 @@ static inline int guest_cpuid_stepping(struct kvm_vcpu *vcpu)
+ 	return x86_stepping(best->eax);
+ }
+ 
++static inline bool guest_has_spec_ctrl_msr(struct kvm_vcpu *vcpu)
++{
++	return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) ||
++		guest_cpuid_has(vcpu, X86_FEATURE_AMD_STIBP) ||
++		guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) ||
++		guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD));
++}
++
++static inline bool guest_has_pred_cmd_msr(struct kvm_vcpu *vcpu)
++{
++	return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) ||
++		guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB));
++}
++
+ static inline bool supports_cpuid_fault(struct kvm_vcpu *vcpu)
+ {
+ 	return vcpu->arch.msr_platform_info & MSR_PLATFORM_INFO_CPUID_FAULT;
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 566f4d18185b1..5c9630c3f6ba1 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -1127,9 +1127,6 @@ void sev_vm_destroy(struct kvm *kvm)
+ 
+ int __init sev_hardware_setup(void)
+ {
+-	struct sev_user_data_status *status;
+-	int rc;
+-
+ 	/* Maximum number of encrypted guests supported simultaneously */
+ 	max_sev_asid = cpuid_ecx(0x8000001F);
+ 
+@@ -1148,26 +1145,9 @@ int __init sev_hardware_setup(void)
+ 	if (!sev_reclaim_asid_bitmap)
+ 		return 1;
+ 
+-	status = kmalloc(sizeof(*status), GFP_KERNEL);
+-	if (!status)
+-		return 1;
+-
+-	/*
+-	 * Check SEV platform status.
+-	 *
+-	 * PLATFORM_STATUS can be called in any state, if we failed to query
+-	 * the PLATFORM status then either PSP firmware does not support SEV
+-	 * feature or SEV firmware is dead.
+-	 */
+-	rc = sev_platform_status(status, NULL);
+-	if (rc)
+-		goto err;
+-
+ 	pr_info("SEV supported\n");
+ 
+-err:
+-	kfree(status);
+-	return rc;
++	return 0;
+ }
+ 
+ void sev_hardware_teardown(void)
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index da7eb4aaf44f8..94b0cb8330451 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -2543,10 +2543,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		break;
+ 	case MSR_IA32_SPEC_CTRL:
+ 		if (!msr_info->host_initiated &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_STIBP) &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
++		    !guest_has_spec_ctrl_msr(vcpu))
+ 			return 1;
+ 
+ 		msr_info->data = svm->spec_ctrl;
+@@ -2630,10 +2627,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ 		break;
+ 	case MSR_IA32_SPEC_CTRL:
+ 		if (!msr->host_initiated &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_STIBP) &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
++		    !guest_has_spec_ctrl_msr(vcpu))
+ 			return 1;
+ 
+ 		if (kvm_spec_ctrl_test_value(data))
+@@ -2658,12 +2652,12 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ 		break;
+ 	case MSR_IA32_PRED_CMD:
+ 		if (!msr->host_initiated &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB))
++		    !guest_has_pred_cmd_msr(vcpu))
+ 			return 1;
+ 
+ 		if (data & ~PRED_CMD_IBPB)
+ 			return 1;
+-		if (!boot_cpu_has(X86_FEATURE_AMD_IBPB))
++		if (!boot_cpu_has(X86_FEATURE_IBPB))
+ 			return 1;
+ 		if (!data)
+ 			break;
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 47b8357b97517..c01aac2bac37c 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1826,7 +1826,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		break;
+ 	case MSR_IA32_SPEC_CTRL:
+ 		if (!msr_info->host_initiated &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
++		    !guest_has_spec_ctrl_msr(vcpu))
+ 			return 1;
+ 
+ 		msr_info->data = to_vmx(vcpu)->spec_ctrl;
+@@ -2028,7 +2028,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		break;
+ 	case MSR_IA32_SPEC_CTRL:
+ 		if (!msr_info->host_initiated &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
++		    !guest_has_spec_ctrl_msr(vcpu))
+ 			return 1;
+ 
+ 		if (kvm_spec_ctrl_test_value(data))
+@@ -2063,12 +2063,12 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		goto find_uret_msr;
+ 	case MSR_IA32_PRED_CMD:
+ 		if (!msr_info->host_initiated &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
++		    !guest_has_pred_cmd_msr(vcpu))
+ 			return 1;
+ 
+ 		if (data & ~PRED_CMD_IBPB)
+ 			return 1;
+-		if (!boot_cpu_has(X86_FEATURE_SPEC_CTRL))
++		if (!boot_cpu_has(X86_FEATURE_IBPB))
+ 			return 1;
+ 		if (!data)
+ 			break;
+diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
+index fe7a12599d8eb..968d7005f4a72 100644
+--- a/arch/x86/mm/ident_map.c
++++ b/arch/x86/mm/ident_map.c
+@@ -62,6 +62,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
+ 			  unsigned long addr, unsigned long end)
+ {
+ 	unsigned long next;
++	int result;
+ 
+ 	for (; addr < end; addr = next) {
+ 		p4d_t *p4d = p4d_page + p4d_index(addr);
+@@ -73,13 +74,20 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
+ 
+ 		if (p4d_present(*p4d)) {
+ 			pud = pud_offset(p4d, 0);
+-			ident_pud_init(info, pud, addr, next);
++			result = ident_pud_init(info, pud, addr, next);
++			if (result)
++				return result;
++
+ 			continue;
+ 		}
+ 		pud = (pud_t *)info->alloc_pgt_page(info->context);
+ 		if (!pud)
+ 			return -ENOMEM;
+-		ident_pud_init(info, pud, addr, next);
++
++		result = ident_pud_init(info, pud, addr, next);
++		if (result)
++			return result;
++
+ 		set_p4d(p4d, __p4d(__pa(pud) | info->kernpg_flag));
+ 	}
+ 
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 094ef56ab7b42..37de7d006858d 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -145,7 +145,7 @@ config CRYPTO_MANAGER_DISABLE_TESTS
+ 
+ config CRYPTO_MANAGER_EXTRA_TESTS
+ 	bool "Enable extra run-time crypto self tests"
+-	depends on DEBUG_KERNEL && !CRYPTO_MANAGER_DISABLE_TESTS
++	depends on DEBUG_KERNEL && !CRYPTO_MANAGER_DISABLE_TESTS && CRYPTO_MANAGER
+ 	help
+ 	  Enable extra run-time self tests of registered crypto algorithms,
+ 	  including randomized fuzz tests.
+diff --git a/crypto/ecdh.c b/crypto/ecdh.c
+index b0232d6ab4ce7..d56b8603dec95 100644
+--- a/crypto/ecdh.c
++++ b/crypto/ecdh.c
+@@ -53,12 +53,13 @@ static int ecdh_set_secret(struct crypto_kpp *tfm, const void *buf,
+ 		return ecc_gen_privkey(ctx->curve_id, ctx->ndigits,
+ 				       ctx->private_key);
+ 
+-	if (ecc_is_key_valid(ctx->curve_id, ctx->ndigits,
+-			     (const u64 *)params.key, params.key_size) < 0)
+-		return -EINVAL;
+-
+ 	memcpy(ctx->private_key, params.key, params.key_size);
+ 
++	if (ecc_is_key_valid(ctx->curve_id, ctx->ndigits,
++			     ctx->private_key, params.key_size) < 0) {
++		memzero_explicit(ctx->private_key, params.key_size);
++		return -EINVAL;
++	}
+ 	return 0;
+ }
+ 
+diff --git a/drivers/accessibility/speakup/speakup_dectlk.c b/drivers/accessibility/speakup/speakup_dectlk.c
+index 780214b5ca16e..ab6d61e80b1cb 100644
+--- a/drivers/accessibility/speakup/speakup_dectlk.c
++++ b/drivers/accessibility/speakup/speakup_dectlk.c
+@@ -37,7 +37,7 @@ static unsigned char get_index(struct spk_synth *synth);
+ static int in_escape;
+ static int is_flushing;
+ 
+-static spinlock_t flush_lock;
++static DEFINE_SPINLOCK(flush_lock);
+ static DECLARE_WAIT_QUEUE_HEAD(flush);
+ 
+ static struct var_t vars[] = {
+diff --git a/drivers/acpi/acpi_pnp.c b/drivers/acpi/acpi_pnp.c
+index 4ed755a963aa5..8f2dc176bb412 100644
+--- a/drivers/acpi/acpi_pnp.c
++++ b/drivers/acpi/acpi_pnp.c
+@@ -319,6 +319,9 @@ static bool matching_id(const char *idstr, const char *list_id)
+ {
+ 	int i;
+ 
++	if (strlen(idstr) != strlen(list_id))
++		return false;
++
+ 	if (memcmp(idstr, list_id, 3))
+ 		return false;
+ 
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index 94d91c67aeaeb..ef77dbcaf58f6 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -749,7 +749,7 @@ static void acpi_pm_notify_work_func(struct acpi_device_wakeup_context *context)
+ static DEFINE_MUTEX(acpi_wakeup_lock);
+ 
+ static int __acpi_device_wakeup_enable(struct acpi_device *adev,
+-				       u32 target_state, int max_count)
++				       u32 target_state)
+ {
+ 	struct acpi_device_wakeup *wakeup = &adev->wakeup;
+ 	acpi_status status;
+@@ -757,9 +757,10 @@ static int __acpi_device_wakeup_enable(struct acpi_device *adev,
+ 
+ 	mutex_lock(&acpi_wakeup_lock);
+ 
+-	if (wakeup->enable_count >= max_count)
++	if (wakeup->enable_count >= INT_MAX) {
++		acpi_handle_info(adev->handle, "Wakeup enable count out of bounds!\n");
+ 		goto out;
+-
++	}
+ 	if (wakeup->enable_count > 0)
+ 		goto inc;
+ 
+@@ -799,7 +800,7 @@ out:
+  */
+ static int acpi_device_wakeup_enable(struct acpi_device *adev, u32 target_state)
+ {
+-	return __acpi_device_wakeup_enable(adev, target_state, 1);
++	return __acpi_device_wakeup_enable(adev, target_state);
+ }
+ 
+ /**
+@@ -829,8 +830,12 @@ out:
+ 	mutex_unlock(&acpi_wakeup_lock);
+ }
+ 
+-static int __acpi_pm_set_device_wakeup(struct device *dev, bool enable,
+-				       int max_count)
++/**
++ * acpi_pm_set_device_wakeup - Enable/disable remote wakeup for given device.
++ * @dev: Device to enable/disable to generate wakeup events.
++ * @enable: Whether to enable or disable the wakeup functionality.
++ */
++int acpi_pm_set_device_wakeup(struct device *dev, bool enable)
+ {
+ 	struct acpi_device *adev;
+ 	int error;
+@@ -850,36 +855,14 @@ static int __acpi_pm_set_device_wakeup(struct device *dev, bool enable,
+ 		return 0;
+ 	}
+ 
+-	error = __acpi_device_wakeup_enable(adev, acpi_target_system_state(),
+-					    max_count);
++	error = __acpi_device_wakeup_enable(adev, acpi_target_system_state());
+ 	if (!error)
+ 		dev_dbg(dev, "Wakeup enabled by ACPI\n");
+ 
+ 	return error;
+ }
+-
+-/**
+- * acpi_pm_set_device_wakeup - Enable/disable remote wakeup for given device.
+- * @dev: Device to enable/disable to generate wakeup events.
+- * @enable: Whether to enable or disable the wakeup functionality.
+- */
+-int acpi_pm_set_device_wakeup(struct device *dev, bool enable)
+-{
+-	return __acpi_pm_set_device_wakeup(dev, enable, 1);
+-}
+ EXPORT_SYMBOL_GPL(acpi_pm_set_device_wakeup);
+ 
+-/**
+- * acpi_pm_set_bridge_wakeup - Enable/disable remote wakeup for given bridge.
+- * @dev: Bridge device to enable/disable to generate wakeup events.
+- * @enable: Whether to enable or disable the wakeup functionality.
+- */
+-int acpi_pm_set_bridge_wakeup(struct device *dev, bool enable)
+-{
+-	return __acpi_pm_set_device_wakeup(dev, enable, INT_MAX);
+-}
+-EXPORT_SYMBOL_GPL(acpi_pm_set_bridge_wakeup);
+-
+ /**
+  * acpi_dev_pm_low_power - Put ACPI device into a low-power state.
+  * @dev: Device to put into a low-power state.
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 442608220b5c8..4c97b0f44fce2 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -5,6 +5,7 @@
+ #include <linux/list_sort.h>
+ #include <linux/libnvdimm.h>
+ #include <linux/module.h>
++#include <linux/nospec.h>
+ #include <linux/mutex.h>
+ #include <linux/ndctl.h>
+ #include <linux/sysfs.h>
+@@ -478,8 +479,11 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ 		cmd_mask = nd_desc->cmd_mask;
+ 		if (cmd == ND_CMD_CALL && call_pkg->nd_family) {
+ 			family = call_pkg->nd_family;
+-			if (!test_bit(family, &nd_desc->bus_family_mask))
++			if (family > NVDIMM_BUS_FAMILY_MAX ||
++			    !test_bit(family, &nd_desc->bus_family_mask))
+ 				return -EINVAL;
++			family = array_index_nospec(family,
++						    NVDIMM_BUS_FAMILY_MAX + 1);
+ 			dsm_mask = acpi_desc->family_dsm_mask[family];
+ 			guid = to_nfit_bus_uuid(family);
+ 		} else {
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index ad04824ca3baa..f2f5f1dc7c61d 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -541,7 +541,7 @@ static acpi_status acpi_dev_process_resource(struct acpi_resource *ares,
+ 		ret = c->preproc(ares, c->preproc_data);
+ 		if (ret < 0) {
+ 			c->error = ret;
+-			return AE_CTRL_TERMINATE;
++			return AE_ABORT_METHOD;
+ 		} else if (ret > 0) {
+ 			return AE_OK;
+ 		}
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index b5117576792bc..2a3952925855d 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -3146,6 +3146,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 	t->buffer->debug_id = t->debug_id;
+ 	t->buffer->transaction = t;
+ 	t->buffer->target_node = target_node;
++	t->buffer->clear_on_free = !!(t->flags & TF_CLEAR_BUF);
+ 	trace_binder_transaction_alloc_buf(t->buffer);
+ 
+ 	if (binder_alloc_copy_user_to_buffer(
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 2f846b7ae8b82..7caf74ad24053 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -696,6 +696,8 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
+ 	binder_insert_free_buffer(alloc, buffer);
+ }
+ 
++static void binder_alloc_clear_buf(struct binder_alloc *alloc,
++				   struct binder_buffer *buffer);
+ /**
+  * binder_alloc_free_buf() - free a binder buffer
+  * @alloc:	binder_alloc for this proc
+@@ -706,6 +708,18 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
+ void binder_alloc_free_buf(struct binder_alloc *alloc,
+ 			    struct binder_buffer *buffer)
+ {
++	/*
++	 * We could eliminate the call to binder_alloc_clear_buf()
++	 * from binder_alloc_deferred_release() by moving this to
++	 * binder_alloc_free_buf_locked(). However, that could
++	 * increase contention for the alloc mutex if clear_on_free
++	 * is used frequently for large buffers. The mutex is not
++	 * needed for correctness here.
++	 */
++	if (buffer->clear_on_free) {
++		binder_alloc_clear_buf(alloc, buffer);
++		buffer->clear_on_free = false;
++	}
+ 	mutex_lock(&alloc->mutex);
+ 	binder_free_buf_locked(alloc, buffer);
+ 	mutex_unlock(&alloc->mutex);
+@@ -802,6 +816,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
+ 		/* Transaction should already have been freed */
+ 		BUG_ON(buffer->transaction);
+ 
++		if (buffer->clear_on_free) {
++			binder_alloc_clear_buf(alloc, buffer);
++			buffer->clear_on_free = false;
++		}
+ 		binder_free_buf_locked(alloc, buffer);
+ 		buffers++;
+ 	}
+@@ -1135,6 +1153,36 @@ static struct page *binder_alloc_get_page(struct binder_alloc *alloc,
+ 	return lru_page->page_ptr;
+ }
+ 
++/**
++ * binder_alloc_clear_buf() - zero out buffer
++ * @alloc: binder_alloc for this proc
++ * @buffer: binder buffer to be cleared
++ *
++ * memset the given buffer to 0
++ */
++static void binder_alloc_clear_buf(struct binder_alloc *alloc,
++				   struct binder_buffer *buffer)
++{
++	size_t bytes = binder_alloc_buffer_size(alloc, buffer);
++	binder_size_t buffer_offset = 0;
++
++	while (bytes) {
++		unsigned long size;
++		struct page *page;
++		pgoff_t pgoff;
++		void *kptr;
++
++		page = binder_alloc_get_page(alloc, buffer,
++					     buffer_offset, &pgoff);
++		size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
++		kptr = kmap(page) + pgoff;
++		memset(kptr, 0, size);
++		kunmap(page);
++		bytes -= size;
++		buffer_offset += size;
++	}
++}
++
+ /**
+  * binder_alloc_copy_user_to_buffer() - copy src user to tgt user
+  * @alloc: binder_alloc for this proc
+diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
+index 55d8b4106766a..6e8e001381af4 100644
+--- a/drivers/android/binder_alloc.h
++++ b/drivers/android/binder_alloc.h
+@@ -23,6 +23,7 @@ struct binder_transaction;
+  * @entry:              entry alloc->buffers
+  * @rb_node:            node for allocated_buffers/free_buffers rb trees
+  * @free:               %true if buffer is free
++ * @clear_on_free:      %true if buffer must be zeroed after use
+  * @allow_user_free:    %true if user is allowed to free buffer
+  * @async_transaction:  %true if buffer is in use for an async txn
+  * @debug_id:           unique ID for debugging
+@@ -41,9 +42,10 @@ struct binder_buffer {
+ 	struct rb_node rb_node; /* free entry by size or allocated entry */
+ 				/* by address */
+ 	unsigned free:1;
++	unsigned clear_on_free:1;
+ 	unsigned allow_user_free:1;
+ 	unsigned async_transaction:1;
+-	unsigned debug_id:29;
++	unsigned debug_id:28;
+ 
+ 	struct binder_transaction *transaction;
+ 
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index d661ada1518fb..e8cb66093f211 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -1386,7 +1386,7 @@ static void device_links_purge(struct device *dev)
+ 		return;
+ 
+ 	mutex_lock(&wfs_lock);
+-	list_del(&dev->links.needs_suppliers);
++	list_del_init(&dev->links.needs_suppliers);
+ 	mutex_unlock(&wfs_lock);
+ 
+ 	/*
+diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
+index beb34b4f76b0e..172f720b8d637 100644
+--- a/drivers/block/null_blk_zoned.c
++++ b/drivers/block/null_blk_zoned.c
+@@ -6,8 +6,7 @@
+ #define CREATE_TRACE_POINTS
+ #include "null_blk_trace.h"
+ 
+-/* zone_size in MBs to sectors. */
+-#define ZONE_SIZE_SHIFT		11
++#define MB_TO_SECTS(mb) (((sector_t)mb * SZ_1M) >> SECTOR_SHIFT)
+ 
+ static inline unsigned int null_zone_no(struct nullb_device *dev, sector_t sect)
+ {
+@@ -16,7 +15,7 @@ static inline unsigned int null_zone_no(struct nullb_device *dev, sector_t sect)
+ 
+ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
+ {
+-	sector_t dev_size = (sector_t)dev->size * 1024 * 1024;
++	sector_t dev_capacity_sects, zone_capacity_sects;
+ 	sector_t sector = 0;
+ 	unsigned int i;
+ 
+@@ -38,9 +37,13 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
+ 		return -EINVAL;
+ 	}
+ 
+-	dev->zone_size_sects = dev->zone_size << ZONE_SIZE_SHIFT;
+-	dev->nr_zones = dev_size >>
+-				(SECTOR_SHIFT + ilog2(dev->zone_size_sects));
++	zone_capacity_sects = MB_TO_SECTS(dev->zone_capacity);
++	dev_capacity_sects = MB_TO_SECTS(dev->size);
++	dev->zone_size_sects = MB_TO_SECTS(dev->zone_size);
++	dev->nr_zones = dev_capacity_sects >> ilog2(dev->zone_size_sects);
++	if (dev_capacity_sects & (dev->zone_size_sects - 1))
++		dev->nr_zones++;
++
+ 	dev->zones = kvmalloc_array(dev->nr_zones, sizeof(struct blk_zone),
+ 			GFP_KERNEL | __GFP_ZERO);
+ 	if (!dev->zones)
+@@ -101,8 +104,12 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
+ 		struct blk_zone *zone = &dev->zones[i];
+ 
+ 		zone->start = zone->wp = sector;
+-		zone->len = dev->zone_size_sects;
+-		zone->capacity = dev->zone_capacity << ZONE_SIZE_SHIFT;
++		if (zone->start + dev->zone_size_sects > dev_capacity_sects)
++			zone->len = dev_capacity_sects - zone->start;
++		else
++			zone->len = dev->zone_size_sects;
++		zone->capacity =
++			min_t(sector_t, zone->len, zone_capacity_sects);
+ 		zone->type = BLK_ZONE_TYPE_SEQWRITE_REQ;
+ 		zone->cond = BLK_ZONE_COND_EMPTY;
+ 
+@@ -332,8 +339,11 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
+ 
+ 	trace_nullb_zone_op(cmd, zno, zone->cond);
+ 
+-	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
++	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) {
++		if (append)
++			return BLK_STS_IOERR;
+ 		return null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
++	}
+ 
+ 	null_lock_zone(dev, zno);
+ 
+diff --git a/drivers/block/rnbd/rnbd-clt-sysfs.c b/drivers/block/rnbd/rnbd-clt-sysfs.c
+index 4f4474eecadb7..d9dd138ca9c64 100644
+--- a/drivers/block/rnbd/rnbd-clt-sysfs.c
++++ b/drivers/block/rnbd/rnbd-clt-sysfs.c
+@@ -433,8 +433,9 @@ void rnbd_clt_remove_dev_symlink(struct rnbd_clt_dev *dev)
+ 	 * i.e. rnbd_clt_unmap_dev_store() leading to a sysfs warning because
+ 	 * of sysfs link already was removed already.
+ 	 */
+-	if (strlen(dev->blk_symlink_name) && try_module_get(THIS_MODULE)) {
++	if (dev->blk_symlink_name && try_module_get(THIS_MODULE)) {
+ 		sysfs_remove_link(rnbd_devs_kobj, dev->blk_symlink_name);
++		kfree(dev->blk_symlink_name);
+ 		module_put(THIS_MODULE);
+ 	}
+ }
+@@ -487,10 +488,17 @@ static int rnbd_clt_get_path_name(struct rnbd_clt_dev *dev, char *buf,
+ static int rnbd_clt_add_dev_symlink(struct rnbd_clt_dev *dev)
+ {
+ 	struct kobject *gd_kobj = &disk_to_dev(dev->gd)->kobj;
+-	int ret;
++	int ret, len;
++
++	len = strlen(dev->pathname) + strlen(dev->sess->sessname) + 2;
++	dev->blk_symlink_name = kzalloc(len, GFP_KERNEL);
++	if (!dev->blk_symlink_name) {
++		rnbd_clt_err(dev, "Failed to allocate memory for blk_symlink_name\n");
++		return -ENOMEM;
++	}
+ 
+ 	ret = rnbd_clt_get_path_name(dev, dev->blk_symlink_name,
+-				      sizeof(dev->blk_symlink_name));
++				      len);
+ 	if (ret) {
+ 		rnbd_clt_err(dev, "Failed to get /sys/block symlink path, err: %d\n",
+ 			      ret);
+@@ -508,7 +516,8 @@ static int rnbd_clt_add_dev_symlink(struct rnbd_clt_dev *dev)
+ 	return 0;
+ 
+ out_err:
+-	dev->blk_symlink_name[0] = '\0';
++	kfree(dev->blk_symlink_name);
++	dev->blk_symlink_name = NULL ;
+ 	return ret;
+ }
+ 
+diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
+index 8b2411ccbda97..7af1b60582fe5 100644
+--- a/drivers/block/rnbd/rnbd-clt.c
++++ b/drivers/block/rnbd/rnbd-clt.c
+@@ -59,6 +59,7 @@ static void rnbd_clt_put_dev(struct rnbd_clt_dev *dev)
+ 	ida_simple_remove(&index_ida, dev->clt_device_id);
+ 	mutex_unlock(&ida_lock);
+ 	kfree(dev->hw_queues);
++	kfree(dev->pathname);
+ 	rnbd_clt_put_sess(dev->sess);
+ 	mutex_destroy(&dev->lock);
+ 	kfree(dev);
+@@ -1381,10 +1382,16 @@ static struct rnbd_clt_dev *init_dev(struct rnbd_clt_session *sess,
+ 		       pathname, sess->sessname, ret);
+ 		goto out_queues;
+ 	}
++
++	dev->pathname = kstrdup(pathname, GFP_KERNEL);
++	if (!dev->pathname) {
++		ret = -ENOMEM;
++		goto out_queues;
++	}
++
+ 	dev->clt_device_id	= ret;
+ 	dev->sess		= sess;
+ 	dev->access_mode	= access_mode;
+-	strlcpy(dev->pathname, pathname, sizeof(dev->pathname));
+ 	mutex_init(&dev->lock);
+ 	refcount_set(&dev->refcount, 1);
+ 	dev->dev_state = DEV_STATE_INIT;
+@@ -1413,8 +1420,8 @@ static bool __exists_dev(const char *pathname)
+ 	list_for_each_entry(sess, &sess_list, list) {
+ 		mutex_lock(&sess->lock);
+ 		list_for_each_entry(dev, &sess->devs_list, list) {
+-			if (!strncmp(dev->pathname, pathname,
+-				     sizeof(dev->pathname))) {
++			if (strlen(dev->pathname) == strlen(pathname) &&
++			    !strcmp(dev->pathname, pathname)) {
+ 				found = true;
+ 				break;
+ 			}
+diff --git a/drivers/block/rnbd/rnbd-clt.h b/drivers/block/rnbd/rnbd-clt.h
+index ed33654aa4868..b193d59040503 100644
+--- a/drivers/block/rnbd/rnbd-clt.h
++++ b/drivers/block/rnbd/rnbd-clt.h
+@@ -108,7 +108,7 @@ struct rnbd_clt_dev {
+ 	u32			clt_device_id;
+ 	struct mutex		lock;
+ 	enum rnbd_clt_dev_state	dev_state;
+-	char			pathname[NAME_MAX];
++	char			*pathname;
+ 	enum rnbd_access_mode	access_mode;
+ 	bool			read_only;
+ 	bool			rotational;
+@@ -126,7 +126,7 @@ struct rnbd_clt_dev {
+ 	struct list_head        list;
+ 	struct gendisk		*gd;
+ 	struct kobject		kobj;
+-	char			blk_symlink_name[NAME_MAX];
++	char			*blk_symlink_name;
+ 	refcount_t		refcount;
+ 	struct work_struct	unmap_on_rmmod_work;
+ };
+diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
+index 76912c584a76d..9860d4842f36c 100644
+--- a/drivers/block/xen-blkback/xenbus.c
++++ b/drivers/block/xen-blkback/xenbus.c
+@@ -274,6 +274,7 @@ static int xen_blkif_disconnect(struct xen_blkif *blkif)
+ 
+ 		if (ring->xenblkd) {
+ 			kthread_stop(ring->xenblkd);
++			ring->xenblkd = NULL;
+ 			wake_up(&ring->shutdown_wq);
+ 		}
+ 
+@@ -675,7 +676,8 @@ static int xen_blkbk_probe(struct xenbus_device *dev,
+ 	/* setup back pointer */
+ 	be->blkif->be = be;
+ 
+-	err = xenbus_watch_pathfmt(dev, &be->backend_watch, backend_changed,
++	err = xenbus_watch_pathfmt(dev, &be->backend_watch, NULL,
++				   backend_changed,
+ 				   "%s/%s", dev->nodename, "physical-device");
+ 	if (err)
+ 		goto fail;
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index ba45c59bd9f36..5f9f027956317 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -704,7 +704,7 @@ static int mtk_setup_firmware(struct hci_dev *hdev, const char *fwname)
+ 	err = mtk_hci_wmt_sync(hdev, &wmt_params);
+ 	if (err < 0) {
+ 		bt_dev_err(hdev, "Failed to power on data RAM (%d)", err);
+-		return err;
++		goto free_fw;
+ 	}
+ 
+ 	fw_ptr = fw->data;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 1005b6e8ff743..80468745d5c5e 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -1763,6 +1763,8 @@ static int btusb_setup_bcm92035(struct hci_dev *hdev)
+ 
+ static int btusb_setup_csr(struct hci_dev *hdev)
+ {
++	struct btusb_data *data = hci_get_drvdata(hdev);
++	u16 bcdDevice = le16_to_cpu(data->udev->descriptor.bcdDevice);
+ 	struct hci_rp_read_local_version *rp;
+ 	struct sk_buff *skb;
+ 	bool is_fake = false;
+@@ -1832,6 +1834,12 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ 		 le16_to_cpu(rp->hci_ver) > BLUETOOTH_VER_4_0)
+ 		is_fake = true;
+ 
++	/* Other clones which beat all the above checks */
++	else if (bcdDevice == 0x0134 &&
++		 le16_to_cpu(rp->lmp_subver) == 0x0c5c &&
++		 le16_to_cpu(rp->hci_ver) == BLUETOOTH_VER_2_0)
++		is_fake = true;
++
+ 	if (is_fake) {
+ 		bt_dev_warn(hdev, "CSR: Unbranded CSR clone detected; adding workarounds...");
+ 
+@@ -3067,7 +3075,7 @@ static int btusb_mtk_setup_firmware(struct hci_dev *hdev, const char *fwname)
+ 	err = btusb_mtk_hci_wmt_sync(hdev, &wmt_params);
+ 	if (err < 0) {
+ 		bt_dev_err(hdev, "Failed to power on data RAM (%d)", err);
+-		return err;
++		goto err_release_fw;
+ 	}
+ 
+ 	fw_ptr = fw->data;
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index 981d96cc76959..78d635f1d1567 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -245,6 +245,9 @@ static int h5_close(struct hci_uart *hu)
+ 	skb_queue_purge(&h5->rel);
+ 	skb_queue_purge(&h5->unrel);
+ 
++	kfree_skb(h5->rx_skb);
++	h5->rx_skb = NULL;
++
+ 	if (h5->vnd && h5->vnd->close)
+ 		h5->vnd->close(h5);
+ 
+diff --git a/drivers/bus/fsl-mc/fsl-mc-allocator.c b/drivers/bus/fsl-mc/fsl-mc-allocator.c
+index e71a6f52ea0cf..2d7c764bb7dcf 100644
+--- a/drivers/bus/fsl-mc/fsl-mc-allocator.c
++++ b/drivers/bus/fsl-mc/fsl-mc-allocator.c
+@@ -292,8 +292,10 @@ int __must_check fsl_mc_object_allocate(struct fsl_mc_device *mc_dev,
+ 		goto error;
+ 
+ 	mc_adev = resource->data;
+-	if (!mc_adev)
++	if (!mc_adev) {
++		error = -EINVAL;
+ 		goto error;
++	}
+ 
+ 	mc_adev->consumer_link = device_link_add(&mc_dev->dev,
+ 						 &mc_adev->dev,
+diff --git a/drivers/bus/fsl-mc/fsl-mc-bus.c b/drivers/bus/fsl-mc/fsl-mc-bus.c
+index 76a6ee505d33d..806766b1b45f6 100644
+--- a/drivers/bus/fsl-mc/fsl-mc-bus.c
++++ b/drivers/bus/fsl-mc/fsl-mc-bus.c
+@@ -967,8 +967,11 @@ static int fsl_mc_bus_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, mc);
+ 
+ 	plat_res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+-	if (plat_res)
++	if (plat_res) {
+ 		mc->fsl_mc_regs = devm_ioremap_resource(&pdev->dev, plat_res);
++		if (IS_ERR(mc->fsl_mc_regs))
++			return PTR_ERR(mc->fsl_mc_regs);
++	}
+ 
+ 	if (mc->fsl_mc_regs && IS_ENABLED(CONFIG_ACPI) &&
+ 	    !dev_of_node(&pdev->dev)) {
+diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
+index 0ffdebde82657..8cefa359fccd8 100644
+--- a/drivers/bus/mhi/core/init.c
++++ b/drivers/bus/mhi/core/init.c
+@@ -610,7 +610,7 @@ static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
+ {
+ 	struct mhi_event *mhi_event;
+ 	const struct mhi_event_config *event_cfg;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	struct device *dev = mhi_cntrl->cntrl_dev;
+ 	int i, num;
+ 
+ 	num = config->num_events;
+@@ -692,7 +692,7 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
+ 			const struct mhi_controller_config *config)
+ {
+ 	const struct mhi_channel_config *ch_cfg;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	struct device *dev = mhi_cntrl->cntrl_dev;
+ 	int i;
+ 	u32 chan;
+ 
+@@ -1276,10 +1276,8 @@ static int mhi_driver_remove(struct device *dev)
+ 		mutex_unlock(&mhi_chan->mutex);
+ 	}
+ 
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+ 	while (mhi_dev->dev_wake)
+ 		mhi_device_put(mhi_dev);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/bus/mips_cdmm.c b/drivers/bus/mips_cdmm.c
+index 9f7ed1fcd4285..626dedd110cbc 100644
+--- a/drivers/bus/mips_cdmm.c
++++ b/drivers/bus/mips_cdmm.c
+@@ -559,10 +559,8 @@ static void mips_cdmm_bus_discover(struct mips_cdmm_bus *bus)
+ 		dev_set_name(&dev->dev, "cdmm%u-%u", cpu, id);
+ 		++id;
+ 		ret = device_register(&dev->dev);
+-		if (ret) {
++		if (ret)
+ 			put_device(&dev->dev);
+-			kfree(dev);
+-		}
+ 	}
+ }
+ 
+diff --git a/drivers/clk/at91/sam9x60.c b/drivers/clk/at91/sam9x60.c
+index 3c4c956035954..c8cbec5308f02 100644
+--- a/drivers/clk/at91/sam9x60.c
++++ b/drivers/clk/at91/sam9x60.c
+@@ -174,7 +174,6 @@ static void __init sam9x60_pmc_setup(struct device_node *np)
+ 	struct regmap *regmap;
+ 	struct clk_hw *hw;
+ 	int i;
+-	bool bypass;
+ 
+ 	i = of_property_match_string(np, "clock-names", "td_slck");
+ 	if (i < 0)
+@@ -209,10 +208,7 @@ static void __init sam9x60_pmc_setup(struct device_node *np)
+ 	if (IS_ERR(hw))
+ 		goto err_free;
+ 
+-	bypass = of_property_read_bool(np, "atmel,osc-bypass");
+-
+-	hw = at91_clk_register_main_osc(regmap, "main_osc", mainxtal_name,
+-					bypass);
++	hw = at91_clk_register_main_osc(regmap, "main_osc", mainxtal_name, 0);
+ 	if (IS_ERR(hw))
+ 		goto err_free;
+ 	main_osc_hw = hw;
+diff --git a/drivers/clk/at91/sama7g5.c b/drivers/clk/at91/sama7g5.c
+index 0db2ab3eca147..a092a940baa40 100644
+--- a/drivers/clk/at91/sama7g5.c
++++ b/drivers/clk/at91/sama7g5.c
+@@ -838,7 +838,7 @@ static void __init sama7g5_pmc_setup(struct device_node *np)
+ 	sama7g5_pmc = pmc_data_allocate(PMC_I2S1_MUX + 1,
+ 					nck(sama7g5_systemck),
+ 					nck(sama7g5_periphck),
+-					nck(sama7g5_gck));
++					nck(sama7g5_gck), 8);
+ 	if (!sama7g5_pmc)
+ 		return;
+ 
+@@ -980,6 +980,8 @@ static void __init sama7g5_pmc_setup(struct device_node *np)
+ 						    sama7g5_prog_mux_table);
+ 		if (IS_ERR(hw))
+ 			goto err_free;
++
++		sama7g5_pmc->pchws[i] = hw;
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(sama7g5_systemck); i++) {
+@@ -1052,7 +1054,7 @@ err_free:
+ 		kfree(alloc_mem);
+ 	}
+ 
+-	pmc_data_free(sama7g5_pmc);
++	kfree(sama7g5_pmc);
+ }
+ 
+ /* Some clks are used for a clocksource */
+diff --git a/drivers/clk/bcm/clk-bcm2711-dvp.c b/drivers/clk/bcm/clk-bcm2711-dvp.c
+index 8333e20dc9d22..69e2f85f7029d 100644
+--- a/drivers/clk/bcm/clk-bcm2711-dvp.c
++++ b/drivers/clk/bcm/clk-bcm2711-dvp.c
+@@ -108,6 +108,7 @@ static const struct of_device_id clk_dvp_dt_ids[] = {
+ 	{ .compatible = "brcm,brcm2711-dvp", },
+ 	{ /* sentinel */ }
+ };
++MODULE_DEVICE_TABLE(of, clk_dvp_dt_ids);
+ 
+ static struct platform_driver clk_dvp_driver = {
+ 	.probe	= clk_dvp_probe,
+diff --git a/drivers/clk/clk-fsl-sai.c b/drivers/clk/clk-fsl-sai.c
+index 0221180a4dd73..1e81c8d8a6fd3 100644
+--- a/drivers/clk/clk-fsl-sai.c
++++ b/drivers/clk/clk-fsl-sai.c
+@@ -68,9 +68,20 @@ static int fsl_sai_clk_probe(struct platform_device *pdev)
+ 	if (IS_ERR(hw))
+ 		return PTR_ERR(hw);
+ 
++	platform_set_drvdata(pdev, hw);
++
+ 	return devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, hw);
+ }
+ 
++static int fsl_sai_clk_remove(struct platform_device *pdev)
++{
++	struct clk_hw *hw = platform_get_drvdata(pdev);
++
++	clk_hw_unregister_composite(hw);
++
++	return 0;
++}
++
+ static const struct of_device_id of_fsl_sai_clk_ids[] = {
+ 	{ .compatible = "fsl,vf610-sai-clock" },
+ 	{ }
+@@ -79,6 +90,7 @@ MODULE_DEVICE_TABLE(of, of_fsl_sai_clk_ids);
+ 
+ static struct platform_driver fsl_sai_clk_driver = {
+ 	.probe = fsl_sai_clk_probe,
++	.remove = fsl_sai_clk_remove,
+ 	.driver		= {
+ 		.name	= "fsl-sai-clk",
+ 		.of_match_table = of_fsl_sai_clk_ids,
+diff --git a/drivers/clk/clk-s2mps11.c b/drivers/clk/clk-s2mps11.c
+index aa21371f9104c..a3e883a9f4067 100644
+--- a/drivers/clk/clk-s2mps11.c
++++ b/drivers/clk/clk-s2mps11.c
+@@ -195,6 +195,7 @@ static int s2mps11_clk_probe(struct platform_device *pdev)
+ 	return ret;
+ 
+ err_reg:
++	of_node_put(s2mps11_clks[0].clk_np);
+ 	while (--i >= 0)
+ 		clkdev_drop(s2mps11_clks[i].lookup);
+ 
+diff --git a/drivers/clk/clk-versaclock5.c b/drivers/clk/clk-versaclock5.c
+index c90460e7ef215..43db67337bc06 100644
+--- a/drivers/clk/clk-versaclock5.c
++++ b/drivers/clk/clk-versaclock5.c
+@@ -739,8 +739,8 @@ static int vc5_update_power(struct device_node *np_output,
+ {
+ 	u32 value;
+ 
+-	if (!of_property_read_u32(np_output,
+-				  "idt,voltage-microvolts", &value)) {
++	if (!of_property_read_u32(np_output, "idt,voltage-microvolt",
++				  &value)) {
+ 		clk_out->clk_output_cfg0_mask |= VC5_CLK_OUTPUT_CFG0_PWR_MASK;
+ 		switch (value) {
+ 		case 1800000:
+diff --git a/drivers/clk/ingenic/cgu.c b/drivers/clk/ingenic/cgu.c
+index dac6edc670cce..c8e9cb6c8e39c 100644
+--- a/drivers/clk/ingenic/cgu.c
++++ b/drivers/clk/ingenic/cgu.c
+@@ -392,15 +392,21 @@ static unsigned int
+ ingenic_clk_calc_hw_div(const struct ingenic_cgu_clk_info *clk_info,
+ 			unsigned int div)
+ {
+-	unsigned int i;
++	unsigned int i, best_i = 0, best = (unsigned int)-1;
+ 
+ 	for (i = 0; i < (1 << clk_info->div.bits)
+ 				&& clk_info->div.div_table[i]; i++) {
+-		if (clk_info->div.div_table[i] >= div)
+-			return i;
++		if (clk_info->div.div_table[i] >= div &&
++		    clk_info->div.div_table[i] < best) {
++			best = clk_info->div.div_table[i];
++			best_i = i;
++
++			if (div == best)
++				break;
++		}
+ 	}
+ 
+-	return i - 1;
++	return best_i;
+ }
+ 
+ static unsigned
+diff --git a/drivers/clk/meson/Kconfig b/drivers/clk/meson/Kconfig
+index 034da203e8e0e..9a8a548d839d8 100644
+--- a/drivers/clk/meson/Kconfig
++++ b/drivers/clk/meson/Kconfig
+@@ -110,6 +110,7 @@ config COMMON_CLK_G12A
+ 	select COMMON_CLK_MESON_AO_CLKC
+ 	select COMMON_CLK_MESON_EE_CLKC
+ 	select COMMON_CLK_MESON_CPU_DYNDIV
++	select COMMON_CLK_MESON_VID_PLL_DIV
+ 	select MFD_SYSCON
+ 	help
+ 	  Support for the clock controller on Amlogic S905D2, S905X2 and S905Y2
+diff --git a/drivers/clk/mvebu/armada-37xx-xtal.c b/drivers/clk/mvebu/armada-37xx-xtal.c
+index e9e306d4e9af9..41271351cf1f4 100644
+--- a/drivers/clk/mvebu/armada-37xx-xtal.c
++++ b/drivers/clk/mvebu/armada-37xx-xtal.c
+@@ -13,8 +13,8 @@
+ #include <linux/platform_device.h>
+ #include <linux/regmap.h>
+ 
+-#define NB_GPIO1_LATCH	0xC
+-#define XTAL_MODE	    BIT(31)
++#define NB_GPIO1_LATCH	0x8
++#define XTAL_MODE	    BIT(9)
+ 
+ static int armada_3700_xtal_clock_probe(struct platform_device *pdev)
+ {
+diff --git a/drivers/clk/qcom/gcc-sc7180.c b/drivers/clk/qcom/gcc-sc7180.c
+index 68d8f7aaf64e1..b080739ab0c33 100644
+--- a/drivers/clk/qcom/gcc-sc7180.c
++++ b/drivers/clk/qcom/gcc-sc7180.c
+@@ -642,7 +642,7 @@ static struct clk_rcg2 gcc_sdcc1_ice_core_clk_src = {
+ 		.name = "gcc_sdcc1_ice_core_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = 4,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+ 
+@@ -666,7 +666,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ 		.name = "gcc_sdcc2_apps_clk_src",
+ 		.parent_data = gcc_parent_data_5,
+ 		.num_parents = 5,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/renesas/r8a779a0-cpg-mssr.c b/drivers/clk/renesas/r8a779a0-cpg-mssr.c
+index 17ebbac7ddfb4..046d79416b7d0 100644
+--- a/drivers/clk/renesas/r8a779a0-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a779a0-cpg-mssr.c
+@@ -26,7 +26,6 @@
+ #include <dt-bindings/clock/r8a779a0-cpg-mssr.h>
+ 
+ #include "renesas-cpg-mssr.h"
+-#include "rcar-gen3-cpg.h"
+ 
+ enum rcar_r8a779a0_clk_types {
+ 	CLK_TYPE_R8A779A0_MAIN = CLK_TYPE_CUSTOM,
+@@ -84,6 +83,14 @@ enum clk_ids {
+ 	DEF_BASE(_name, _id, CLK_TYPE_R8A779A0_PLL2X_3X, CLK_MAIN, \
+ 		 .offset = _offset)
+ 
++#define DEF_MDSEL(_name, _id, _md, _parent0, _div0, _parent1, _div1) \
++	DEF_BASE(_name, _id, CLK_TYPE_R8A779A0_MDSEL,	\
++		 (_parent0) << 16 | (_parent1),		\
++		 .div = (_div0) << 16 | (_div1), .offset = _md)
++
++#define DEF_OSC(_name, _id, _parent, _div)		\
++	DEF_BASE(_name, _id, CLK_TYPE_R8A779A0_OSC, _parent, .div = _div)
++
+ static const struct cpg_core_clk r8a779a0_core_clks[] __initconst = {
+ 	/* External Clock Inputs */
+ 	DEF_INPUT("extal",  CLK_EXTAL),
+@@ -136,8 +143,8 @@ static const struct cpg_core_clk r8a779a0_core_clks[] __initconst = {
+ 	DEF_DIV6P1("canfd",	R8A779A0_CLK_CANFD,	CLK_PLL5_DIV4,	0x878),
+ 	DEF_DIV6P1("csi0",	R8A779A0_CLK_CSI0,	CLK_PLL5_DIV4,	0x880),
+ 
+-	DEF_GEN3_OSC("osc",	R8A779A0_CLK_OSC,	CLK_EXTAL,	8),
+-	DEF_GEN3_MDSEL("r",	R8A779A0_CLK_R, 29, CLK_EXTALR, 1, CLK_OCO, 1),
++	DEF_OSC("osc",		R8A779A0_CLK_OSC,	CLK_EXTAL,	8),
++	DEF_MDSEL("r",		R8A779A0_CLK_R, 29, CLK_EXTALR, 1, CLK_OCO, 1),
+ };
+ 
+ static const struct mssr_mod_clk r8a779a0_mod_clks[] __initconst = {
+diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-a64.c b/drivers/clk/sunxi-ng/ccu-sun50i-a64.c
+index 5f66bf8797723..149cfde817cba 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun50i-a64.c
++++ b/drivers/clk/sunxi-ng/ccu-sun50i-a64.c
+@@ -389,6 +389,7 @@ static struct clk_div_table ths_div_table[] = {
+ 	{ .val = 1, .div = 2 },
+ 	{ .val = 2, .div = 4 },
+ 	{ .val = 3, .div = 6 },
++	{ /* Sentinel */ },
+ };
+ static const char * const ths_parents[] = { "osc24M" };
+ static struct ccu_div ths_clk = {
+diff --git a/drivers/clk/sunxi-ng/ccu-sun8i-h3.c b/drivers/clk/sunxi-ng/ccu-sun8i-h3.c
+index 6b636362379ee..7e629a4493afd 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun8i-h3.c
++++ b/drivers/clk/sunxi-ng/ccu-sun8i-h3.c
+@@ -322,6 +322,7 @@ static struct clk_div_table ths_div_table[] = {
+ 	{ .val = 1, .div = 2 },
+ 	{ .val = 2, .div = 4 },
+ 	{ .val = 3, .div = 6 },
++	{ /* Sentinel */ },
+ };
+ static SUNXI_CCU_DIV_TABLE_WITH_GATE(ths_clk, "ths", "osc24M",
+ 				     0x074, 0, 2, ths_div_table, BIT(31), 0);
+diff --git a/drivers/clk/tegra/clk-dfll.c b/drivers/clk/tegra/clk-dfll.c
+index cfbaa90c7adbf..a5f526bb0483e 100644
+--- a/drivers/clk/tegra/clk-dfll.c
++++ b/drivers/clk/tegra/clk-dfll.c
+@@ -1856,13 +1856,13 @@ static int dfll_fetch_pwm_params(struct tegra_dfll *td)
+ 			    &td->reg_init_uV);
+ 	if (!ret) {
+ 		dev_err(td->dev, "couldn't get initialized voltage\n");
+-		return ret;
++		return -EINVAL;
+ 	}
+ 
+ 	ret = read_dt_param(td, "nvidia,pwm-period-nanoseconds", &pwm_period);
+ 	if (!ret) {
+ 		dev_err(td->dev, "couldn't get PWM period\n");
+-		return ret;
++		return -EINVAL;
+ 	}
+ 	td->pwm_rate = (NSEC_PER_SEC / pwm_period) * (MAX_DFLL_VOLTAGES - 1);
+ 
+diff --git a/drivers/clk/tegra/clk-id.h b/drivers/clk/tegra/clk-id.h
+index ff7da2d3e94d8..24413812ec5b6 100644
+--- a/drivers/clk/tegra/clk-id.h
++++ b/drivers/clk/tegra/clk-id.h
+@@ -227,6 +227,7 @@ enum clk_id {
+ 	tegra_clk_sdmmc4,
+ 	tegra_clk_sdmmc4_8,
+ 	tegra_clk_se,
++	tegra_clk_se_10,
+ 	tegra_clk_soc_therm,
+ 	tegra_clk_soc_therm_8,
+ 	tegra_clk_sor0,
+diff --git a/drivers/clk/tegra/clk-tegra-periph.c b/drivers/clk/tegra/clk-tegra-periph.c
+index 2b2a3b81c16ba..60cc34f90cb9b 100644
+--- a/drivers/clk/tegra/clk-tegra-periph.c
++++ b/drivers/clk/tegra/clk-tegra-periph.c
+@@ -630,7 +630,7 @@ static struct tegra_periph_init_data periph_clks[] = {
+ 	INT8("host1x", mux_pllm_pllc2_c_c3_pllp_plla, CLK_SOURCE_HOST1X, 28, 0, tegra_clk_host1x_8),
+ 	INT8("host1x", mux_pllc4_out1_pllc_pllc4_out2_pllp_clkm_plla_pllc4_out0, CLK_SOURCE_HOST1X, 28, 0, tegra_clk_host1x_9),
+ 	INT8("se", mux_pllp_pllc2_c_c3_pllm_clkm, CLK_SOURCE_SE, 127, TEGRA_PERIPH_ON_APB, tegra_clk_se),
+-	INT8("se", mux_pllp_pllc2_c_c3_clkm, CLK_SOURCE_SE, 127, TEGRA_PERIPH_ON_APB, tegra_clk_se),
++	INT8("se", mux_pllp_pllc2_c_c3_clkm, CLK_SOURCE_SE, 127, TEGRA_PERIPH_ON_APB, tegra_clk_se_10),
+ 	INT8("2d", mux_pllm_pllc2_c_c3_pllp_plla, CLK_SOURCE_2D, 21, 0, tegra_clk_gr2d_8),
+ 	INT8("3d", mux_pllm_pllc2_c_c3_pllp_plla, CLK_SOURCE_3D, 24, 0, tegra_clk_gr3d_8),
+ 	INT8("vic03", mux_pllm_pllc_pllp_plla_pllc2_c3_clkm, CLK_SOURCE_VIC03, 178, 0, tegra_clk_vic03),
+diff --git a/drivers/clk/ti/fapll.c b/drivers/clk/ti/fapll.c
+index 95e36ba64accf..8024c6d2b9e95 100644
+--- a/drivers/clk/ti/fapll.c
++++ b/drivers/clk/ti/fapll.c
+@@ -498,6 +498,7 @@ static struct clk * __init ti_fapll_synth_setup(struct fapll_data *fd,
+ {
+ 	struct clk_init_data *init;
+ 	struct fapll_synth *synth;
++	struct clk *clk = ERR_PTR(-ENOMEM);
+ 
+ 	init = kzalloc(sizeof(*init), GFP_KERNEL);
+ 	if (!init)
+@@ -520,13 +521,19 @@ static struct clk * __init ti_fapll_synth_setup(struct fapll_data *fd,
+ 	synth->hw.init = init;
+ 	synth->clk_pll = pll_clk;
+ 
+-	return clk_register(NULL, &synth->hw);
++	clk = clk_register(NULL, &synth->hw);
++	if (IS_ERR(clk)) {
++		pr_err("failed to register clock\n");
++		goto free;
++	}
++
++	return clk;
+ 
+ free:
+ 	kfree(synth);
+ 	kfree(init);
+ 
+-	return ERR_PTR(-ENOMEM);
++	return clk;
+ }
+ 
+ static void __init ti_fapll_setup(struct device_node *node)
+diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
+index 68b087bff59cc..2be849bb794ac 100644
+--- a/drivers/clocksource/Kconfig
++++ b/drivers/clocksource/Kconfig
+@@ -654,7 +654,7 @@ config ATCPIT100_TIMER
+ 
+ config RISCV_TIMER
+ 	bool "Timer for the RISC-V platform" if COMPILE_TEST
+-	depends on GENERIC_SCHED_CLOCK && RISCV
++	depends on GENERIC_SCHED_CLOCK && RISCV && RISCV_SBI
+ 	select TIMER_PROBE
+ 	select TIMER_OF
+ 	help
+diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
+index 6c3e841801461..d0177824c518b 100644
+--- a/drivers/clocksource/arm_arch_timer.c
++++ b/drivers/clocksource/arm_arch_timer.c
+@@ -396,10 +396,10 @@ static void erratum_set_next_event_tval_generic(const int access, unsigned long
+ 	ctrl &= ~ARCH_TIMER_CTRL_IT_MASK;
+ 
+ 	if (access == ARCH_TIMER_PHYS_ACCESS) {
+-		cval = evt + arch_counter_get_cntpct();
++		cval = evt + arch_counter_get_cntpct_stable();
+ 		write_sysreg(cval, cntp_cval_el0);
+ 	} else {
+-		cval = evt + arch_counter_get_cntvct();
++		cval = evt + arch_counter_get_cntvct_stable();
+ 		write_sysreg(cval, cntv_cval_el0);
+ 	}
+ 
+@@ -822,15 +822,24 @@ static void arch_timer_evtstrm_enable(int divider)
+ 
+ static void arch_timer_configure_evtstream(void)
+ {
+-	int evt_stream_div, pos;
++	int evt_stream_div, lsb;
++
++	/*
++	 * As the event stream can at most be generated at half the frequency
++	 * of the counter, use half the frequency when computing the divider.
++	 */
++	evt_stream_div = arch_timer_rate / ARCH_TIMER_EVT_STREAM_FREQ / 2;
++
++	/*
++	 * Find the closest power of two to the divisor. If the adjacent bit
++	 * of lsb (last set bit, starts from 0) is set, then we use (lsb + 1).
++	 */
++	lsb = fls(evt_stream_div) - 1;
++	if (lsb > 0 && (evt_stream_div & BIT(lsb - 1)))
++		lsb++;
+ 
+-	/* Find the closest power of two to the divisor */
+-	evt_stream_div = arch_timer_rate / ARCH_TIMER_EVT_STREAM_FREQ;
+-	pos = fls(evt_stream_div);
+-	if (pos > 1 && !(evt_stream_div & (1 << (pos - 2))))
+-		pos--;
+ 	/* enable event stream */
+-	arch_timer_evtstrm_enable(min(pos, 15));
++	arch_timer_evtstrm_enable(max(0, min(lsb, 15)));
+ }
+ 
+ static void arch_counter_set_user_access(void)
+diff --git a/drivers/clocksource/ingenic-timer.c b/drivers/clocksource/ingenic-timer.c
+index 58fd9189fab7f..905fd6b163a81 100644
+--- a/drivers/clocksource/ingenic-timer.c
++++ b/drivers/clocksource/ingenic-timer.c
+@@ -127,7 +127,7 @@ static irqreturn_t ingenic_tcu_cevt_cb(int irq, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
+-static struct clk * __init ingenic_tcu_get_clock(struct device_node *np, int id)
++static struct clk *ingenic_tcu_get_clock(struct device_node *np, int id)
+ {
+ 	struct of_phandle_args args;
+ 
+diff --git a/drivers/clocksource/timer-cadence-ttc.c b/drivers/clocksource/timer-cadence-ttc.c
+index 80e9606020307..4efd0cf3b602d 100644
+--- a/drivers/clocksource/timer-cadence-ttc.c
++++ b/drivers/clocksource/timer-cadence-ttc.c
+@@ -413,10 +413,8 @@ static int __init ttc_setup_clockevent(struct clk *clk,
+ 	ttcce->ttc.clk = clk;
+ 
+ 	err = clk_prepare_enable(ttcce->ttc.clk);
+-	if (err) {
+-		kfree(ttcce);
+-		return err;
+-	}
++	if (err)
++		goto out_kfree;
+ 
+ 	ttcce->ttc.clk_rate_change_nb.notifier_call =
+ 		ttc_rate_change_clockevent_cb;
+@@ -426,7 +424,7 @@ static int __init ttc_setup_clockevent(struct clk *clk,
+ 				    &ttcce->ttc.clk_rate_change_nb);
+ 	if (err) {
+ 		pr_warn("Unable to register clock notifier.\n");
+-		return err;
++		goto out_kfree;
+ 	}
+ 
+ 	ttcce->ttc.freq = clk_get_rate(ttcce->ttc.clk);
+@@ -455,15 +453,17 @@ static int __init ttc_setup_clockevent(struct clk *clk,
+ 
+ 	err = request_irq(irq, ttc_clock_event_interrupt,
+ 			  IRQF_TIMER, ttcce->ce.name, ttcce);
+-	if (err) {
+-		kfree(ttcce);
+-		return err;
+-	}
++	if (err)
++		goto out_kfree;
+ 
+ 	clockevents_config_and_register(&ttcce->ce,
+ 			ttcce->ttc.freq / PRESCALE, 1, 0xfffe);
+ 
+ 	return 0;
++
++out_kfree:
++	kfree(ttcce);
++	return err;
+ }
+ 
+ static int __init ttc_timer_probe(struct platform_device *pdev)
+diff --git a/drivers/clocksource/timer-orion.c b/drivers/clocksource/timer-orion.c
+index d01ff41818676..5101e834d78ff 100644
+--- a/drivers/clocksource/timer-orion.c
++++ b/drivers/clocksource/timer-orion.c
+@@ -143,7 +143,8 @@ static int __init orion_timer_init(struct device_node *np)
+ 	irq = irq_of_parse_and_map(np, 1);
+ 	if (irq <= 0) {
+ 		pr_err("%pOFn: unable to parse timer1 irq\n", np);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out_unprep_clk;
+ 	}
+ 
+ 	rate = clk_get_rate(clk);
+@@ -160,7 +161,7 @@ static int __init orion_timer_init(struct device_node *np)
+ 				    clocksource_mmio_readl_down);
+ 	if (ret) {
+ 		pr_err("Failed to initialize mmio timer\n");
+-		return ret;
++		goto out_unprep_clk;
+ 	}
+ 
+ 	sched_clock_register(orion_read_sched_clock, 32, rate);
+@@ -170,7 +171,7 @@ static int __init orion_timer_init(struct device_node *np)
+ 			  "orion_event", NULL);
+ 	if (ret) {
+ 		pr_err("%pOFn: unable to setup irq\n", np);
+-		return ret;
++		goto out_unprep_clk;
+ 	}
+ 
+ 	ticks_per_jiffy = (clk_get_rate(clk) + HZ/2) / HZ;
+@@ -183,5 +184,9 @@ static int __init orion_timer_init(struct device_node *np)
+ 	orion_delay_timer_init(rate);
+ 
+ 	return 0;
++
++out_unprep_clk:
++	clk_disable_unprepare(clk);
++	return ret;
+ }
+ TIMER_OF_DECLARE(orion_timer, "marvell,orion-timer", orion_timer_init);
+diff --git a/drivers/counter/microchip-tcb-capture.c b/drivers/counter/microchip-tcb-capture.c
+index 039c54a78aa57..710acc0a37044 100644
+--- a/drivers/counter/microchip-tcb-capture.c
++++ b/drivers/counter/microchip-tcb-capture.c
+@@ -183,16 +183,20 @@ static int mchp_tc_count_action_get(struct counter_device *counter,
+ 
+ 	regmap_read(priv->regmap, ATMEL_TC_REG(priv->channel[0], CMR), &cmr);
+ 
+-	*action = MCHP_TC_SYNAPSE_ACTION_NONE;
+-
+-	if (cmr & ATMEL_TC_ETRGEDG_NONE)
++	switch (cmr & ATMEL_TC_ETRGEDG) {
++	default:
+ 		*action = MCHP_TC_SYNAPSE_ACTION_NONE;
+-	else if (cmr & ATMEL_TC_ETRGEDG_RISING)
++		break;
++	case ATMEL_TC_ETRGEDG_RISING:
+ 		*action = MCHP_TC_SYNAPSE_ACTION_RISING_EDGE;
+-	else if (cmr & ATMEL_TC_ETRGEDG_FALLING)
++		break;
++	case ATMEL_TC_ETRGEDG_FALLING:
+ 		*action = MCHP_TC_SYNAPSE_ACTION_FALLING_EDGE;
+-	else if (cmr & ATMEL_TC_ETRGEDG_BOTH)
++		break;
++	case ATMEL_TC_ETRGEDG_BOTH:
+ 		*action = MCHP_TC_SYNAPSE_ACTION_BOTH_EDGE;
++		break;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
+index 015ec0c028358..1f73fa75b1a05 100644
+--- a/drivers/cpufreq/Kconfig.arm
++++ b/drivers/cpufreq/Kconfig.arm
+@@ -94,7 +94,7 @@ config ARM_IMX6Q_CPUFREQ
+ 	tristate "Freescale i.MX6 cpufreq support"
+ 	depends on ARCH_MXC
+ 	depends on REGULATOR_ANATOP
+-	select NVMEM_IMX_OCOTP
++	depends on NVMEM_IMX_OCOTP || COMPILE_TEST
+ 	select PM_OPP
+ 	help
+ 	  This adds cpufreq driver support for Freescale i.MX6 series SoCs.
+diff --git a/drivers/cpufreq/armada-8k-cpufreq.c b/drivers/cpufreq/armada-8k-cpufreq.c
+index 39e34f5066d3d..b0fc5e84f8570 100644
+--- a/drivers/cpufreq/armada-8k-cpufreq.c
++++ b/drivers/cpufreq/armada-8k-cpufreq.c
+@@ -204,6 +204,12 @@ static void __exit armada_8k_cpufreq_exit(void)
+ }
+ module_exit(armada_8k_cpufreq_exit);
+ 
++static const struct of_device_id __maybe_unused armada_8k_cpufreq_of_match[] = {
++	{ .compatible = "marvell,ap806-cpu-clock" },
++	{ },
++};
++MODULE_DEVICE_TABLE(of, armada_8k_cpufreq_of_match);
++
+ MODULE_AUTHOR("Gregory Clement <gregory.clement@bootlin.com>");
+ MODULE_DESCRIPTION("Armada 8K cpufreq driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/cpufreq/highbank-cpufreq.c b/drivers/cpufreq/highbank-cpufreq.c
+index 5a7f6dafcddb6..ac57cddc5f2fe 100644
+--- a/drivers/cpufreq/highbank-cpufreq.c
++++ b/drivers/cpufreq/highbank-cpufreq.c
+@@ -101,6 +101,13 @@ out_put_node:
+ }
+ module_init(hb_cpufreq_driver_init);
+ 
++static const struct of_device_id __maybe_unused hb_cpufreq_of_match[] = {
++	{ .compatible = "calxeda,highbank" },
++	{ .compatible = "calxeda,ecx-2000" },
++	{ },
++};
++MODULE_DEVICE_TABLE(of, hb_cpufreq_of_match);
++
+ MODULE_AUTHOR("Mark Langsdorf <mark.langsdorf@calxeda.com>");
+ MODULE_DESCRIPTION("Calxeda Highbank cpufreq driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 36a3ccfe6d3d1..cb95da684457f 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -2207,9 +2207,9 @@ static void intel_pstate_update_perf_limits(struct cpudata *cpu,
+ 					    unsigned int policy_min,
+ 					    unsigned int policy_max)
+ {
+-	int max_freq = intel_pstate_get_max_freq(cpu);
+ 	int32_t max_policy_perf, min_policy_perf;
+ 	int max_state, turbo_max;
++	int max_freq;
+ 
+ 	/*
+ 	 * HWP needs some special consideration, because on BDX the
+@@ -2223,6 +2223,7 @@ static void intel_pstate_update_perf_limits(struct cpudata *cpu,
+ 			cpu->pstate.max_pstate : cpu->pstate.turbo_pstate;
+ 		turbo_max = cpu->pstate.turbo_pstate;
+ 	}
++	max_freq = max_state * cpu->pstate.scaling;
+ 
+ 	max_policy_perf = max_state * policy_max / max_freq;
+ 	if (policy_max == policy_min) {
+@@ -2325,9 +2326,18 @@ static void intel_pstate_adjust_policy_max(struct cpudata *cpu,
+ static void intel_pstate_verify_cpu_policy(struct cpudata *cpu,
+ 					   struct cpufreq_policy_data *policy)
+ {
++	int max_freq;
++
+ 	update_turbo_state();
+-	cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
+-				     intel_pstate_get_max_freq(cpu));
++	if (hwp_active) {
++		int max_state, turbo_max;
++
++		intel_pstate_get_hwp_max(cpu->cpu, &turbo_max, &max_state);
++		max_freq = max_state * cpu->pstate.scaling;
++	} else {
++		max_freq = intel_pstate_get_max_freq(cpu);
++	}
++	cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq, max_freq);
+ 
+ 	intel_pstate_adjust_policy_max(cpu, policy);
+ }
+diff --git a/drivers/cpufreq/loongson1-cpufreq.c b/drivers/cpufreq/loongson1-cpufreq.c
+index 0ea88778882ac..86f612593e497 100644
+--- a/drivers/cpufreq/loongson1-cpufreq.c
++++ b/drivers/cpufreq/loongson1-cpufreq.c
+@@ -216,6 +216,7 @@ static struct platform_driver ls1x_cpufreq_platdrv = {
+ 
+ module_platform_driver(ls1x_cpufreq_platdrv);
+ 
++MODULE_ALIAS("platform:ls1x-cpufreq");
+ MODULE_AUTHOR("Kelvin Cheung <keguang.zhang@gmail.com>");
+ MODULE_DESCRIPTION("Loongson1 CPUFreq driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
+index 7d1212c9b7c88..a310372dc53e9 100644
+--- a/drivers/cpufreq/mediatek-cpufreq.c
++++ b/drivers/cpufreq/mediatek-cpufreq.c
+@@ -540,6 +540,7 @@ static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
+ 
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, mtk_cpufreq_machines);
+ 
+ static int __init mtk_cpufreq_driver_init(void)
+ {
+diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+index d06b37822c3df..fba9937a406b3 100644
+--- a/drivers/cpufreq/qcom-cpufreq-nvmem.c
++++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+@@ -464,6 +464,7 @@ static const struct of_device_id qcom_cpufreq_match_list[] __initconst = {
+ 	{ .compatible = "qcom,msm8960", .data = &match_data_krait },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, qcom_cpufreq_match_list);
+ 
+ /*
+  * Since the driver depends on smem and nvmem drivers, which may
+diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
+index 43db05b949d95..e5140ad63db83 100644
+--- a/drivers/cpufreq/scpi-cpufreq.c
++++ b/drivers/cpufreq/scpi-cpufreq.c
+@@ -233,6 +233,7 @@ static struct platform_driver scpi_cpufreq_platdrv = {
+ };
+ module_platform_driver(scpi_cpufreq_platdrv);
+ 
++MODULE_ALIAS("platform:scpi-cpufreq");
+ MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
+ MODULE_DESCRIPTION("ARM SCPI CPUFreq interface driver");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/cpufreq/sti-cpufreq.c b/drivers/cpufreq/sti-cpufreq.c
+index 4ac6fb23792a0..c40d3d7d4ea43 100644
+--- a/drivers/cpufreq/sti-cpufreq.c
++++ b/drivers/cpufreq/sti-cpufreq.c
+@@ -292,6 +292,13 @@ register_cpufreq_dt:
+ }
+ module_init(sti_cpufreq_init);
+ 
++static const struct of_device_id __maybe_unused sti_cpufreq_of_match[] = {
++	{ .compatible = "st,stih407" },
++	{ .compatible = "st,stih410" },
++	{ },
++};
++MODULE_DEVICE_TABLE(of, sti_cpufreq_of_match);
++
+ MODULE_DESCRIPTION("STMicroelectronics CPUFreq/OPP driver");
+ MODULE_AUTHOR("Ajitpal Singh <ajitpal.singh@st.com>");
+ MODULE_AUTHOR("Lee Jones <lee.jones@linaro.org>");
+diff --git a/drivers/cpufreq/sun50i-cpufreq-nvmem.c b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+index 9907a165135b7..2deed8d8773fa 100644
+--- a/drivers/cpufreq/sun50i-cpufreq-nvmem.c
++++ b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+@@ -167,6 +167,7 @@ static const struct of_device_id sun50i_cpufreq_match_list[] = {
+ 	{ .compatible = "allwinner,sun50i-h6" },
+ 	{}
+ };
++MODULE_DEVICE_TABLE(of, sun50i_cpufreq_match_list);
+ 
+ static const struct of_device_id *sun50i_cpufreq_match_node(void)
+ {
+diff --git a/drivers/cpufreq/vexpress-spc-cpufreq.c b/drivers/cpufreq/vexpress-spc-cpufreq.c
+index e89b905754d21..f711d8eaea6a2 100644
+--- a/drivers/cpufreq/vexpress-spc-cpufreq.c
++++ b/drivers/cpufreq/vexpress-spc-cpufreq.c
+@@ -591,6 +591,7 @@ static struct platform_driver ve_spc_cpufreq_platdrv = {
+ };
+ module_platform_driver(ve_spc_cpufreq_platdrv);
+ 
++MODULE_ALIAS("platform:vexpress-spc-cpufreq");
+ MODULE_AUTHOR("Viresh Kumar <viresh.kumar@linaro.org>");
+ MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
+ MODULE_DESCRIPTION("Vexpress SPC ARM big LITTLE cpufreq driver");
+diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
+index 37da0c070a883..9d6645b1f0abe 100644
+--- a/drivers/crypto/Kconfig
++++ b/drivers/crypto/Kconfig
+@@ -548,6 +548,7 @@ config CRYPTO_DEV_ATMEL_SHA
+ 
+ config CRYPTO_DEV_ATMEL_I2C
+ 	tristate
++	select BITREVERSE
+ 
+ config CRYPTO_DEV_ATMEL_ECC
+ 	tristate "Support for Microchip / Atmel ECC hw accelerator"
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
+index a94bf28f858a7..4c5a2c11d7141 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
+@@ -262,13 +262,13 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	u32 common;
+ 	u64 byte_count;
+ 	__le32 *bf;
+-	void *buf;
++	void *buf = NULL;
+ 	int j, i, todo;
+ 	int nbw = 0;
+ 	u64 fill, min_fill;
+ 	__be64 *bebits;
+ 	__le64 *lebits;
+-	void *result;
++	void *result = NULL;
+ 	u64 bs;
+ 	int digestsize;
+ 	dma_addr_t addr_res, addr_pad;
+@@ -285,13 +285,17 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 
+ 	/* the padding could be up to two block. */
+ 	buf = kzalloc(bs * 2, GFP_KERNEL | GFP_DMA);
+-	if (!buf)
+-		return -ENOMEM;
++	if (!buf) {
++		err = -ENOMEM;
++		goto theend;
++	}
+ 	bf = (__le32 *)buf;
+ 
+ 	result = kzalloc(digestsize, GFP_KERNEL | GFP_DMA);
+-	if (!result)
+-		return -ENOMEM;
++	if (!result) {
++		err = -ENOMEM;
++		goto theend;
++	}
+ 
+ 	flow = rctx->flow;
+ 	chan = &ce->chanlist[flow];
+@@ -403,11 +407,11 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	dma_unmap_sg(ce->dev, areq->src, nr_sgs, DMA_TO_DEVICE);
+ 	dma_unmap_single(ce->dev, addr_res, digestsize, DMA_FROM_DEVICE);
+ 
+-	kfree(buf);
+ 
+ 	memcpy(areq->result, result, algt->alg.hash.halg.digestsize);
+-	kfree(result);
+ theend:
++	kfree(buf);
++	kfree(result);
+ 	crypto_finalize_hash_request(engine, breq, err);
+ 	return 0;
+ }
+diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c
+index 981de43ea5e24..2e3690f65786d 100644
+--- a/drivers/crypto/amcc/crypto4xx_core.c
++++ b/drivers/crypto/amcc/crypto4xx_core.c
+@@ -917,7 +917,7 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 	}
+ 
+ 	pd->pd_ctl.w = PD_CTL_HOST_READY |
+-		((crypto_tfm_alg_type(req->tfm) == CRYPTO_ALG_TYPE_AHASH) |
++		((crypto_tfm_alg_type(req->tfm) == CRYPTO_ALG_TYPE_AHASH) ||
+ 		 (crypto_tfm_alg_type(req->tfm) == CRYPTO_ALG_TYPE_AEAD) ?
+ 			PD_CTL_HASH_FINAL : 0);
+ 	pd->pd_ctl_len.w = 0x00400000 | (assoclen + datalen);
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index cf5bd7666dfcd..8697ae53b0633 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -3404,8 +3404,8 @@ static int caam_cra_init(struct crypto_skcipher *tfm)
+ 		fallback = crypto_alloc_skcipher(tfm_name, 0,
+ 						 CRYPTO_ALG_NEED_FALLBACK);
+ 		if (IS_ERR(fallback)) {
+-			dev_err(ctx->jrdev, "Failed to allocate %s fallback: %ld\n",
+-				tfm_name, PTR_ERR(fallback));
++			pr_err("Failed to allocate %s fallback: %ld\n",
++			       tfm_name, PTR_ERR(fallback));
+ 			return PTR_ERR(fallback);
+ 		}
+ 
+diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
+index 66f60d78bdc84..a24ae966df4a3 100644
+--- a/drivers/crypto/caam/caamalg_qi.c
++++ b/drivers/crypto/caam/caamalg_qi.c
+@@ -2502,8 +2502,8 @@ static int caam_cra_init(struct crypto_skcipher *tfm)
+ 		fallback = crypto_alloc_skcipher(tfm_name, 0,
+ 						 CRYPTO_ALG_NEED_FALLBACK);
+ 		if (IS_ERR(fallback)) {
+-			dev_err(ctx->jrdev, "Failed to allocate %s fallback: %ld\n",
+-				tfm_name, PTR_ERR(fallback));
++			pr_err("Failed to allocate %s fallback: %ld\n",
++			       tfm_name, PTR_ERR(fallback));
+ 			return PTR_ERR(fallback);
+ 		}
+ 
+diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
+index 98c1ff1744bb1..a780e627838ae 100644
+--- a/drivers/crypto/caam/caamalg_qi2.c
++++ b/drivers/crypto/caam/caamalg_qi2.c
+@@ -1611,7 +1611,8 @@ static int caam_cra_init_skcipher(struct crypto_skcipher *tfm)
+ 		fallback = crypto_alloc_skcipher(tfm_name, 0,
+ 						 CRYPTO_ALG_NEED_FALLBACK);
+ 		if (IS_ERR(fallback)) {
+-			dev_err(ctx->dev, "Failed to allocate %s fallback: %ld\n",
++			dev_err(caam_alg->caam.dev,
++				"Failed to allocate %s fallback: %ld\n",
+ 				tfm_name, PTR_ERR(fallback));
+ 			return PTR_ERR(fallback);
+ 		}
+diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
+index eb2418450f120..2e1562108a858 100644
+--- a/drivers/crypto/inside-secure/safexcel.c
++++ b/drivers/crypto/inside-secure/safexcel.c
+@@ -1639,7 +1639,7 @@ static int safexcel_probe_generic(void *pdev,
+ 
+ 		priv->ring[i].rdr_req = devm_kcalloc(dev,
+ 			EIP197_DEFAULT_RING_SIZE,
+-			sizeof(priv->ring[i].rdr_req),
++			sizeof(*priv->ring[i].rdr_req),
+ 			GFP_KERNEL);
+ 		if (!priv->ring[i].rdr_req)
+ 			return -ENOMEM;
+diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
+index 4fd14d90cc409..1b1e0ab0a831a 100644
+--- a/drivers/crypto/omap-aes.c
++++ b/drivers/crypto/omap-aes.c
+@@ -1137,7 +1137,7 @@ static int omap_aes_probe(struct platform_device *pdev)
+ 	if (err < 0) {
+ 		dev_err(dev, "%s: failed to get_sync(%d)\n",
+ 			__func__, err);
+-		goto err_res;
++		goto err_pm_disable;
+ 	}
+ 
+ 	omap_aes_dma_stop(dd);
+@@ -1246,6 +1246,7 @@ err_engine:
+ 	omap_aes_dma_cleanup(dd);
+ err_irq:
+ 	tasklet_kill(&dd->done_task);
++err_pm_disable:
+ 	pm_runtime_disable(dev);
+ err_res:
+ 	dd = NULL;
+diff --git a/drivers/crypto/qat/qat_common/qat_hal.c b/drivers/crypto/qat/qat_common/qat_hal.c
+index 6b9d47682d04d..52ef80efeddc6 100644
+--- a/drivers/crypto/qat/qat_common/qat_hal.c
++++ b/drivers/crypto/qat/qat_common/qat_hal.c
+@@ -1146,7 +1146,7 @@ static int qat_hal_put_rel_rd_xfer(struct icp_qat_fw_loader_handle *handle,
+ 	unsigned short mask;
+ 	unsigned short dr_offset = 0x10;
+ 
+-	status = ctx_enables = qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES);
++	ctx_enables = qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES);
+ 	if (CE_INUSE_CONTEXTS & ctx_enables) {
+ 		if (ctx & 0x1) {
+ 			pr_err("QAT: bad 4-ctx mode,ctx=0x%x\n", ctx);
+diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
+index 66773892f665d..a713a35dc5022 100644
+--- a/drivers/crypto/talitos.c
++++ b/drivers/crypto/talitos.c
+@@ -460,7 +460,7 @@ DEF_TALITOS2_DONE(ch1_3, TALITOS2_ISR_CH_1_3_DONE)
+ /*
+  * locate current (offending) descriptor
+  */
+-static u32 current_desc_hdr(struct device *dev, int ch)
++static __be32 current_desc_hdr(struct device *dev, int ch)
+ {
+ 	struct talitos_private *priv = dev_get_drvdata(dev);
+ 	int tail, iter;
+@@ -478,7 +478,7 @@ static u32 current_desc_hdr(struct device *dev, int ch)
+ 
+ 	iter = tail;
+ 	while (priv->chan[ch].fifo[iter].dma_desc != cur_desc &&
+-	       priv->chan[ch].fifo[iter].desc->next_desc != cur_desc) {
++	       priv->chan[ch].fifo[iter].desc->next_desc != cpu_to_be32(cur_desc)) {
+ 		iter = (iter + 1) & (priv->fifo_len - 1);
+ 		if (iter == tail) {
+ 			dev_err(dev, "couldn't locate current descriptor\n");
+@@ -486,7 +486,7 @@ static u32 current_desc_hdr(struct device *dev, int ch)
+ 		}
+ 	}
+ 
+-	if (priv->chan[ch].fifo[iter].desc->next_desc == cur_desc) {
++	if (priv->chan[ch].fifo[iter].desc->next_desc == cpu_to_be32(cur_desc)) {
+ 		struct talitos_edesc *edesc;
+ 
+ 		edesc = container_of(priv->chan[ch].fifo[iter].desc,
+@@ -501,13 +501,13 @@ static u32 current_desc_hdr(struct device *dev, int ch)
+ /*
+  * user diagnostics; report root cause of error based on execution unit status
+  */
+-static void report_eu_error(struct device *dev, int ch, u32 desc_hdr)
++static void report_eu_error(struct device *dev, int ch, __be32 desc_hdr)
+ {
+ 	struct talitos_private *priv = dev_get_drvdata(dev);
+ 	int i;
+ 
+ 	if (!desc_hdr)
+-		desc_hdr = in_be32(priv->chan[ch].reg + TALITOS_DESCBUF);
++		desc_hdr = cpu_to_be32(in_be32(priv->chan[ch].reg + TALITOS_DESCBUF));
+ 
+ 	switch (desc_hdr & DESC_HDR_SEL0_MASK) {
+ 	case DESC_HDR_SEL0_AFEU:
+diff --git a/drivers/dax/super.c b/drivers/dax/super.c
+index edc279be3e596..cadbd0a1a1ef0 100644
+--- a/drivers/dax/super.c
++++ b/drivers/dax/super.c
+@@ -752,6 +752,7 @@ err_chrdev:
+ 
+ static void __exit dax_core_exit(void)
+ {
++	dax_bus_exit();
+ 	unregister_chrdev_region(dax_devt, MINORMASK+1);
+ 	ida_destroy(&dax_minor_ida);
+ 	dax_fs_exit();
+diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
+index 1c8f2581cb09a..1187e5e80eded 100644
+--- a/drivers/dma-buf/dma-resv.c
++++ b/drivers/dma-buf/dma-resv.c
+@@ -200,7 +200,7 @@ int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences)
+ 			max = max(old->shared_count + num_fences,
+ 				  old->shared_max * 2);
+ 	} else {
+-		max = 4;
++		max = max(4ul, roundup_pow_of_two(num_fences));
+ 	}
+ 
+ 	new = dma_resv_list_alloc(max);
+diff --git a/drivers/dma/mv_xor_v2.c b/drivers/dma/mv_xor_v2.c
+index 2753a6b916f60..9b0d463f89bbd 100644
+--- a/drivers/dma/mv_xor_v2.c
++++ b/drivers/dma/mv_xor_v2.c
+@@ -771,8 +771,10 @@ static int mv_xor_v2_probe(struct platform_device *pdev)
+ 		goto disable_clk;
+ 
+ 	msi_desc = first_msi_entry(&pdev->dev);
+-	if (!msi_desc)
++	if (!msi_desc) {
++		ret = -ENODEV;
+ 		goto free_msi_irqs;
++	}
+ 	xor_dev->msi_desc = msi_desc;
+ 
+ 	ret = devm_request_irq(&pdev->dev, msi_desc->irq,
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index 82cf6c77f5c93..d3902784cae24 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -3201,8 +3201,7 @@ static int udma_setup_resources(struct udma_dev *ud)
+ 	} else if (UDMA_CAP3_UCHAN_CNT(cap3)) {
+ 		ud->tpl_levels = 3;
+ 		ud->tpl_start_idx[1] = UDMA_CAP3_UCHAN_CNT(cap3);
+-		ud->tpl_start_idx[0] = ud->tpl_start_idx[1] +
+-				       UDMA_CAP3_HCHAN_CNT(cap3);
++		ud->tpl_start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3);
+ 	} else if (UDMA_CAP3_HCHAN_CNT(cap3)) {
+ 		ud->tpl_levels = 2;
+ 		ud->tpl_start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3);
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index 1362274d840b9..620f7041db6b5 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -18,6 +18,9 @@ static struct amd64_family_type *fam_type;
+ /* Per-node stuff */
+ static struct ecc_settings **ecc_stngs;
+ 
++/* Device for the PCI component */
++static struct device *pci_ctl_dev;
++
+ /*
+  * Valid scrub rates for the K8 hardware memory scrubber. We map the scrubbing
+  * bandwidth to a valid bit pattern. The 'set' operation finds the 'matching-
+@@ -2683,6 +2686,9 @@ reserve_mc_sibling_devs(struct amd64_pvt *pvt, u16 pci_id1, u16 pci_id2)
+ 			return -ENODEV;
+ 		}
+ 
++		if (!pci_ctl_dev)
++			pci_ctl_dev = &pvt->F0->dev;
++
+ 		edac_dbg(1, "F0: %s\n", pci_name(pvt->F0));
+ 		edac_dbg(1, "F3: %s\n", pci_name(pvt->F3));
+ 		edac_dbg(1, "F6: %s\n", pci_name(pvt->F6));
+@@ -2707,6 +2713,9 @@ reserve_mc_sibling_devs(struct amd64_pvt *pvt, u16 pci_id1, u16 pci_id2)
+ 		return -ENODEV;
+ 	}
+ 
++	if (!pci_ctl_dev)
++		pci_ctl_dev = &pvt->F2->dev;
++
+ 	edac_dbg(1, "F1: %s\n", pci_name(pvt->F1));
+ 	edac_dbg(1, "F2: %s\n", pci_name(pvt->F2));
+ 	edac_dbg(1, "F3: %s\n", pci_name(pvt->F3));
+@@ -3623,21 +3632,10 @@ static void remove_one_instance(unsigned int nid)
+ 
+ static void setup_pci_device(void)
+ {
+-	struct mem_ctl_info *mci;
+-	struct amd64_pvt *pvt;
+-
+ 	if (pci_ctl)
+ 		return;
+ 
+-	mci = edac_mc_find(0);
+-	if (!mci)
+-		return;
+-
+-	pvt = mci->pvt_info;
+-	if (pvt->umc)
+-		pci_ctl = edac_pci_create_generic_ctl(&pvt->F0->dev, EDAC_MOD_STR);
+-	else
+-		pci_ctl = edac_pci_create_generic_ctl(&pvt->F2->dev, EDAC_MOD_STR);
++	pci_ctl = edac_pci_create_generic_ctl(pci_ctl_dev, EDAC_MOD_STR);
+ 	if (!pci_ctl) {
+ 		pr_warn("%s(): Unable to create PCI control\n", __func__);
+ 		pr_warn("%s(): PCI error report via EDAC not set\n", __func__);
+@@ -3716,6 +3714,8 @@ static int __init amd64_edac_init(void)
+ 	return 0;
+ 
+ err_pci:
++	pci_ctl_dev = NULL;
++
+ 	msrs_free(msrs);
+ 	msrs = NULL;
+ 
+@@ -3745,6 +3745,8 @@ static void __exit amd64_edac_exit(void)
+ 	kfree(ecc_stngs);
+ 	ecc_stngs = NULL;
+ 
++	pci_ctl_dev = NULL;
++
+ 	msrs_free(msrs);
+ 	msrs = NULL;
+ }
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index c8d11da85becf..7b52691c45d26 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -6,6 +6,7 @@
+  */
+ 
+ #include <linux/kernel.h>
++#include <linux/io.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
+ #include <asm/mce.h>
+@@ -19,14 +20,16 @@
+ #define i10nm_printk(level, fmt, arg...)	\
+ 	edac_printk(level, "i10nm", fmt, ##arg)
+ 
+-#define I10NM_GET_SCK_BAR(d, reg)		\
++#define I10NM_GET_SCK_BAR(d, reg)	\
+ 	pci_read_config_dword((d)->uracu, 0xd0, &(reg))
+ #define I10NM_GET_IMC_BAR(d, i, reg)	\
+ 	pci_read_config_dword((d)->uracu, 0xd8 + (i) * 4, &(reg))
+ #define I10NM_GET_DIMMMTR(m, i, j)	\
+-	(*(u32 *)((m)->mbase + 0x2080c + (i) * 0x4000 + (j) * 4))
++	readl((m)->mbase + 0x2080c + (i) * 0x4000 + (j) * 4)
+ #define I10NM_GET_MCDDRTCFG(m, i, j)	\
+-	(*(u32 *)((m)->mbase + 0x20970 + (i) * 0x4000 + (j) * 4))
++	readl((m)->mbase + 0x20970 + (i) * 0x4000 + (j) * 4)
++#define I10NM_GET_MCMTR(m, i)		\
++	readl((m)->mbase + 0x20ef8 + (i) * 0x4000)
+ 
+ #define I10NM_GET_SCK_MMIO_BASE(reg)	(GET_BITFIELD(reg, 0, 28) << 23)
+ #define I10NM_GET_IMC_MMIO_OFFSET(reg)	(GET_BITFIELD(reg, 0, 10) << 12)
+@@ -148,7 +151,7 @@ static bool i10nm_check_ecc(struct skx_imc *imc, int chan)
+ {
+ 	u32 mcmtr;
+ 
+-	mcmtr = *(u32 *)(imc->mbase + 0x20ef8 + chan * 0x4000);
++	mcmtr = I10NM_GET_MCMTR(imc, chan);
+ 	edac_dbg(1, "ch%d mcmtr reg %x\n", chan, mcmtr);
+ 
+ 	return !!GET_BITFIELD(mcmtr, 2, 2);
+diff --git a/drivers/edac/mce_amd.c b/drivers/edac/mce_amd.c
+index 7f28edb070bd0..6c474fbef32af 100644
+--- a/drivers/edac/mce_amd.c
++++ b/drivers/edac/mce_amd.c
+@@ -1003,7 +1003,7 @@ static void decode_smca_error(struct mce *m)
+ 		pr_cont(", %s.\n", smca_mce_descs[bank_type].descs[xec]);
+ 
+ 	if (bank_type == SMCA_UMC && xec == 0 && decode_dram_ecc)
+-		decode_dram_ecc(cpu_to_node(m->extcpu), m);
++		decode_dram_ecc(topology_die_id(m->extcpu), m);
+ }
+ 
+ static inline void amd_decode_err_code(u16 ec)
+diff --git a/drivers/extcon/extcon-max77693.c b/drivers/extcon/extcon-max77693.c
+index 4a410fd2ea9ae..92af97e00828f 100644
+--- a/drivers/extcon/extcon-max77693.c
++++ b/drivers/extcon/extcon-max77693.c
+@@ -1277,4 +1277,4 @@ module_platform_driver(max77693_muic_driver);
+ MODULE_DESCRIPTION("Maxim MAX77693 Extcon driver");
+ MODULE_AUTHOR("Chanwoo Choi <cw00.choi@samsung.com>");
+ MODULE_LICENSE("GPL");
+-MODULE_ALIAS("platform:extcon-max77693");
++MODULE_ALIAS("platform:max77693-muic");
+diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
+index ce336899d6366..66196b293b6c2 100644
+--- a/drivers/firmware/arm_scmi/notify.c
++++ b/drivers/firmware/arm_scmi/notify.c
+@@ -1474,17 +1474,17 @@ int scmi_notification_init(struct scmi_handle *handle)
+ 	ni->gid = gid;
+ 	ni->handle = handle;
+ 
++	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
++						sizeof(char *), GFP_KERNEL);
++	if (!ni->registered_protocols)
++		goto err;
++
+ 	ni->notify_wq = alloc_workqueue(dev_name(handle->dev),
+ 					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
+ 					0);
+ 	if (!ni->notify_wq)
+ 		goto err;
+ 
+-	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
+-						sizeof(char *), GFP_KERNEL);
+-	if (!ni->registered_protocols)
+-		goto err;
+-
+ 	mutex_init(&ni->pending_mtx);
+ 	hash_init(ni->pending_events_handlers);
+ 
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 6c6eec044a978..df3f9bcab581c 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -57,6 +57,7 @@ struct mm_struct efi_mm = {
+ 	.mm_rb			= RB_ROOT,
+ 	.mm_users		= ATOMIC_INIT(2),
+ 	.mm_count		= ATOMIC_INIT(1),
++	.write_protect_seq      = SEQCNT_ZERO(efi_mm.write_protect_seq),
+ 	MMAP_LOCK_INITIALIZER(efi_mm)
+ 	.page_table_lock	= __SPIN_LOCK_UNLOCKED(efi_mm.page_table_lock),
+ 	.mmlist			= LIST_HEAD_INIT(efi_mm.mmlist),
+diff --git a/drivers/firmware/tegra/bpmp-debugfs.c b/drivers/firmware/tegra/bpmp-debugfs.c
+index c1bbba9ee93a3..440d99c63638b 100644
+--- a/drivers/firmware/tegra/bpmp-debugfs.c
++++ b/drivers/firmware/tegra/bpmp-debugfs.c
+@@ -412,16 +412,12 @@ static int bpmp_populate_debugfs_inband(struct tegra_bpmp *bpmp,
+ 				goto out;
+ 			}
+ 
+-			len = strlen(ppath) + strlen(name) + 1;
++			len = snprintf(pathbuf, pathlen, "%s%s/", ppath, name);
+ 			if (len >= pathlen) {
+ 				err = -EINVAL;
+ 				goto out;
+ 			}
+ 
+-			strncpy(pathbuf, ppath, pathlen);
+-			strncat(pathbuf, name, strlen(name));
+-			strcat(pathbuf, "/");
+-
+ 			err = bpmp_populate_debugfs_inband(bpmp, dentry,
+ 							   pathbuf);
+ 			if (err < 0)
+diff --git a/drivers/fsi/fsi-master-aspeed.c b/drivers/fsi/fsi-master-aspeed.c
+index c006ec008a1aa..90dbe58ca1edc 100644
+--- a/drivers/fsi/fsi-master-aspeed.c
++++ b/drivers/fsi/fsi-master-aspeed.c
+@@ -8,6 +8,7 @@
+ #include <linux/io.h>
+ #include <linux/mfd/syscon.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+ #include <linux/regmap.h>
+@@ -19,6 +20,7 @@
+ 
+ struct fsi_master_aspeed {
+ 	struct fsi_master	master;
++	struct mutex		lock;	/* protect HW access */
+ 	struct device		*dev;
+ 	void __iomem		*base;
+ 	struct clk		*clk;
+@@ -254,6 +256,8 @@ static int aspeed_master_read(struct fsi_master *master, int link,
+ 	addr |= id << 21;
+ 	addr += link * FSI_HUB_LINK_SIZE;
+ 
++	mutex_lock(&aspeed->lock);
++
+ 	switch (size) {
+ 	case 1:
+ 		ret = opb_readb(aspeed, fsi_base + addr, val);
+@@ -265,14 +269,14 @@ static int aspeed_master_read(struct fsi_master *master, int link,
+ 		ret = opb_readl(aspeed, fsi_base + addr, val);
+ 		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto done;
+ 	}
+ 
+ 	ret = check_errors(aspeed, ret);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
++done:
++	mutex_unlock(&aspeed->lock);
++	return ret;
+ }
+ 
+ static int aspeed_master_write(struct fsi_master *master, int link,
+@@ -287,6 +291,8 @@ static int aspeed_master_write(struct fsi_master *master, int link,
+ 	addr |= id << 21;
+ 	addr += link * FSI_HUB_LINK_SIZE;
+ 
++	mutex_lock(&aspeed->lock);
++
+ 	switch (size) {
+ 	case 1:
+ 		ret = opb_writeb(aspeed, fsi_base + addr, *(u8 *)val);
+@@ -298,14 +304,14 @@ static int aspeed_master_write(struct fsi_master *master, int link,
+ 		ret = opb_writel(aspeed, fsi_base + addr, *(__be32 *)val);
+ 		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto done;
+ 	}
+ 
+ 	ret = check_errors(aspeed, ret);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
++done:
++	mutex_unlock(&aspeed->lock);
++	return ret;
+ }
+ 
+ static int aspeed_master_link_enable(struct fsi_master *master, int link,
+@@ -320,17 +326,21 @@ static int aspeed_master_link_enable(struct fsi_master *master, int link,
+ 
+ 	reg = cpu_to_be32(0x80000000 >> bit);
+ 
+-	if (!enable)
+-		return opb_writel(aspeed, ctrl_base + FSI_MCENP0 + (4 * idx),
+-				  reg);
++	mutex_lock(&aspeed->lock);
++
++	if (!enable) {
++		ret = opb_writel(aspeed, ctrl_base + FSI_MCENP0 + (4 * idx), reg);
++		goto done;
++	}
+ 
+ 	ret = opb_writel(aspeed, ctrl_base + FSI_MSENP0 + (4 * idx), reg);
+ 	if (ret)
+-		return ret;
++		goto done;
+ 
+ 	mdelay(FSI_LINK_ENABLE_SETUP_TIME);
+-
+-	return 0;
++done:
++	mutex_unlock(&aspeed->lock);
++	return ret;
+ }
+ 
+ static int aspeed_master_term(struct fsi_master *master, int link, uint8_t id)
+@@ -431,9 +441,11 @@ static ssize_t cfam_reset_store(struct device *dev, struct device_attribute *att
+ {
+ 	struct fsi_master_aspeed *aspeed = dev_get_drvdata(dev);
+ 
++	mutex_lock(&aspeed->lock);
+ 	gpiod_set_value(aspeed->cfam_reset_gpio, 1);
+ 	usleep_range(900, 1000);
+ 	gpiod_set_value(aspeed->cfam_reset_gpio, 0);
++	mutex_unlock(&aspeed->lock);
+ 
+ 	return count;
+ }
+@@ -597,6 +609,7 @@ static int fsi_master_aspeed_probe(struct platform_device *pdev)
+ 
+ 	dev_set_drvdata(&pdev->dev, aspeed);
+ 
++	mutex_init(&aspeed->lock);
+ 	aspeed_master_init(aspeed);
+ 
+ 	rc = fsi_master_register(&aspeed->master);
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 6e3c4d7a7d146..4ad3c4b276dcf 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1477,7 +1477,8 @@ static void gpiochip_set_irq_hooks(struct gpio_chip *gc)
+ 	if (WARN_ON(gc->irq.irq_enable))
+ 		return;
+ 	/* Check if the irqchip already has this hook... */
+-	if (irqchip->irq_enable == gpiochip_irq_enable) {
++	if (irqchip->irq_enable == gpiochip_irq_enable ||
++		irqchip->irq_mask == gpiochip_irq_mask) {
+ 		/*
+ 		 * ...and if so, give a gentle warning that this is bad
+ 		 * practice.
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+index 65d1b23d7e746..b9c11c2b2885a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+@@ -1414,10 +1414,12 @@ out:
+ 		pm_runtime_put_autosuspend(connector->dev->dev);
+ 	}
+ 
+-	drm_dp_set_subconnector_property(&amdgpu_connector->base,
+-					 ret,
+-					 amdgpu_dig_connector->dpcd,
+-					 amdgpu_dig_connector->downstream_ports);
++	if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort ||
++	    connector->connector_type == DRM_MODE_CONNECTOR_eDP)
++		drm_dp_set_subconnector_property(&amdgpu_connector->base,
++						 ret,
++						 amdgpu_dig_connector->dpcd,
++						 amdgpu_dig_connector->downstream_ports);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 8c9bacfdbc300..c485ec86804e5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -193,10 +193,14 @@ static bool amdgpu_gfx_is_multipipe_capable(struct amdgpu_device *adev)
+ }
+ 
+ bool amdgpu_gfx_is_high_priority_compute_queue(struct amdgpu_device *adev,
+-					       int queue)
++					       int pipe, int queue)
+ {
+-	/* Policy: make queue 0 of each pipe as high priority compute queue */
+-	return (queue == 0);
++	bool multipipe_policy = amdgpu_gfx_is_multipipe_capable(adev);
++	int cond;
++	/* Policy: alternate between normal and high priority */
++	cond = multipipe_policy ? pipe : queue;
++
++	return ((cond % 2) != 0);
+ 
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
+index 258498cbf1eba..f353a5b71804e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
+@@ -373,7 +373,7 @@ void amdgpu_queue_mask_bit_to_mec_queue(struct amdgpu_device *adev, int bit,
+ bool amdgpu_gfx_is_mec_queue_enabled(struct amdgpu_device *adev, int mec,
+ 				     int pipe, int queue);
+ bool amdgpu_gfx_is_high_priority_compute_queue(struct amdgpu_device *adev,
+-					       int queue);
++					       int pipe, int queue);
+ int amdgpu_gfx_me_queue_to_bit(struct amdgpu_device *adev, int me,
+ 			       int pipe, int queue);
+ void amdgpu_gfx_bit_to_me_queue(struct amdgpu_device *adev, int bit,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+index 3e4892b7b7d3c..ff4e226739308 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+@@ -494,13 +494,14 @@ void amdgpu_gmc_get_vbios_allocations(struct amdgpu_device *adev)
+ 		break;
+ 	}
+ 
+-	if (!amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_DCE))
++	if (!amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_DCE)) {
+ 		size = 0;
+-	else
++	} else {
+ 		size = amdgpu_gmc_get_vbios_fb_size(adev);
+ 
+-	if (adev->mman.keep_stolen_vga_memory)
+-		size = max(size, (unsigned)AMDGPU_VBIOS_VGA_ALLOCATION);
++		if (adev->mman.keep_stolen_vga_memory)
++			size = max(size, (unsigned)AMDGPU_VBIOS_VGA_ALLOCATION);
++	}
+ 
+ 	/* set to 0 if the pre-OS buffer uses up most of vram */
+ 	if ((adev->gmc.real_vram_size - size) < (8 * 1024 * 1024))
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 55f4b8c3b9338..4ebb43e090999 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -4334,7 +4334,8 @@ static int gfx_v10_0_compute_ring_init(struct amdgpu_device *adev, int ring_id,
+ 	irq_type = AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE0_EOP
+ 		+ ((ring->me - 1) * adev->gfx.mec.num_pipe_per_mec)
+ 		+ ring->pipe;
+-	hw_prio = amdgpu_gfx_is_high_priority_compute_queue(adev, ring->queue) ?
++	hw_prio = amdgpu_gfx_is_high_priority_compute_queue(adev, ring->pipe,
++							    ring->queue) ?
+ 			AMDGPU_GFX_PIPE_PRIO_HIGH : AMDGPU_GFX_PIPE_PRIO_NORMAL;
+ 	/* type-2 packets are deprecated on MEC, use type-3 instead */
+ 	r = amdgpu_ring_init(adev, ring, 1024,
+@@ -6360,7 +6361,8 @@ static void gfx_v10_0_compute_mqd_set_priority(struct amdgpu_ring *ring, struct
+ 	struct amdgpu_device *adev = ring->adev;
+ 
+ 	if (ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE) {
+-		if (amdgpu_gfx_is_high_priority_compute_queue(adev, ring->queue)) {
++		if (amdgpu_gfx_is_high_priority_compute_queue(adev, ring->pipe,
++							      ring->queue)) {
+ 			mqd->cp_hqd_pipe_priority = AMDGPU_GFX_PIPE_PRIO_HIGH;
+ 			mqd->cp_hqd_queue_priority =
+ 				AMDGPU_GFX_QUEUE_PRIORITY_MAXIMUM;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+index 94b7e0531d092..c36258d56b445 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+@@ -1915,7 +1915,8 @@ static int gfx_v8_0_compute_ring_init(struct amdgpu_device *adev, int ring_id,
+ 		+ ((ring->me - 1) * adev->gfx.mec.num_pipe_per_mec)
+ 		+ ring->pipe;
+ 
+-	hw_prio = amdgpu_gfx_is_high_priority_compute_queue(adev, ring->queue) ?
++	hw_prio = amdgpu_gfx_is_high_priority_compute_queue(adev, ring->pipe,
++							    ring->queue) ?
+ 			AMDGPU_GFX_PIPE_PRIO_HIGH : AMDGPU_RING_PRIO_DEFAULT;
+ 	/* type-2 packets are deprecated on MEC, use type-3 instead */
+ 	r = amdgpu_ring_init(adev, ring, 1024,
+@@ -4433,7 +4434,8 @@ static void gfx_v8_0_mqd_set_priority(struct amdgpu_ring *ring, struct vi_mqd *m
+ 	struct amdgpu_device *adev = ring->adev;
+ 
+ 	if (ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE) {
+-		if (amdgpu_gfx_is_high_priority_compute_queue(adev, ring->queue)) {
++		if (amdgpu_gfx_is_high_priority_compute_queue(adev, ring->pipe,
++							      ring->queue)) {
+ 			mqd->cp_hqd_pipe_priority = AMDGPU_GFX_PIPE_PRIO_HIGH;
+ 			mqd->cp_hqd_queue_priority =
+ 				AMDGPU_GFX_QUEUE_PRIORITY_MAXIMUM;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 0d8e203b10efb..957c12b727676 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -2228,7 +2228,8 @@ static int gfx_v9_0_compute_ring_init(struct amdgpu_device *adev, int ring_id,
+ 	irq_type = AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE0_EOP
+ 		+ ((ring->me - 1) * adev->gfx.mec.num_pipe_per_mec)
+ 		+ ring->pipe;
+-	hw_prio = amdgpu_gfx_is_high_priority_compute_queue(adev, ring->queue) ?
++	hw_prio = amdgpu_gfx_is_high_priority_compute_queue(adev, ring->pipe,
++							    ring->queue) ?
+ 			AMDGPU_GFX_PIPE_PRIO_HIGH : AMDGPU_GFX_PIPE_PRIO_NORMAL;
+ 	/* type-2 packets are deprecated on MEC, use type-3 instead */
+ 	return amdgpu_ring_init(adev, ring, 1024,
+@@ -3383,7 +3384,9 @@ static void gfx_v9_0_mqd_set_priority(struct amdgpu_ring *ring, struct v9_mqd *m
+ 	struct amdgpu_device *adev = ring->adev;
+ 
+ 	if (ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE) {
+-		if (amdgpu_gfx_is_high_priority_compute_queue(adev, ring->queue)) {
++		if (amdgpu_gfx_is_high_priority_compute_queue(adev,
++							      ring->pipe,
++							      ring->queue)) {
+ 			mqd->cp_hqd_pipe_priority = AMDGPU_GFX_PIPE_PRIO_HIGH;
+ 			mqd->cp_hqd_queue_priority =
+ 				AMDGPU_GFX_QUEUE_PRIORITY_MAXIMUM;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+index 3de5e14c5ae31..d7f67620f57ba 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+@@ -774,6 +774,7 @@ int kfd_create_crat_image_acpi(void **crat_image, size_t *size)
+ 	struct acpi_table_header *crat_table;
+ 	acpi_status status;
+ 	void *pcrat_image;
++	int rc = 0;
+ 
+ 	if (!crat_image)
+ 		return -EINVAL;
+@@ -798,14 +799,17 @@ int kfd_create_crat_image_acpi(void **crat_image, size_t *size)
+ 	}
+ 
+ 	pcrat_image = kvmalloc(crat_table->length, GFP_KERNEL);
+-	if (!pcrat_image)
+-		return -ENOMEM;
++	if (!pcrat_image) {
++		rc = -ENOMEM;
++		goto out;
++	}
+ 
+ 	memcpy(pcrat_image, crat_table, crat_table->length);
+ 	*crat_image = pcrat_image;
+ 	*size = crat_table->length;
+-
+-	return 0;
++out:
++	acpi_put_table(crat_table);
++	return rc;
+ }
+ 
+ /* Memory required to create Virtual CRAT.
+@@ -988,6 +992,7 @@ static int kfd_create_vcrat_image_cpu(void *pcrat_image, size_t *size)
+ 				CRAT_OEMID_LENGTH);
+ 		memcpy(crat_table->oem_table_id, acpi_table->oem_table_id,
+ 				CRAT_OEMTABLEID_LENGTH);
++		acpi_put_table(acpi_table);
+ 	}
+ 	crat_table->total_entries = 0;
+ 	crat_table->num_domains = 0;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0f7749e9424d4..30c6b9edddb50 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2278,7 +2278,8 @@ void amdgpu_dm_update_connector_after_detect(
+ 
+ 			drm_connector_update_edid_property(connector,
+ 							   aconnector->edid);
+-			drm_add_edid_modes(connector, aconnector->edid);
++			aconnector->num_modes = drm_add_edid_modes(connector, aconnector->edid);
++			drm_connector_list_update(connector);
+ 
+ 			if (aconnector->dc_link->aux_mode)
+ 				drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index ff1e9963ec7a2..98464886341f6 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -4230,7 +4230,7 @@ void dp_set_panel_mode(struct dc_link *link, enum dp_panel_mode panel_mode)
+ 
+ 		if (edp_config_set.bits.PANEL_MODE_EDP
+ 			!= panel_mode_edp) {
+-			enum ddc_result result = DDC_RESULT_UNKNOWN;
++			enum dc_status result = DC_ERROR_UNEXPECTED;
+ 
+ 			edp_config_set.bits.PANEL_MODE_EDP =
+ 			panel_mode_edp;
+@@ -4240,7 +4240,7 @@ void dp_set_panel_mode(struct dc_link *link, enum dp_panel_mode panel_mode)
+ 				&edp_config_set.raw,
+ 				sizeof(edp_config_set.raw));
+ 
+-			ASSERT(result == DDC_RESULT_SUCESSFULL);
++			ASSERT(result == DC_OK);
+ 		}
+ 	}
+ 	DC_LOG_DETECTION_DP_CAPS("Link: %d eDP panel mode supported: %d "
+diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+index b8695660b480e..09bc2c249e1af 100644
+--- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
++++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+@@ -1614,7 +1614,7 @@ static void apply_degamma_for_user_regamma(struct pwl_float_data_ex *rgb_regamma
+ 	struct pwl_float_data_ex *rgb = rgb_regamma;
+ 	const struct hw_x_point *coord_x = coordinates_x;
+ 
+-	build_coefficients(&coeff, true);
++	build_coefficients(&coeff, TRANSFER_FUNCTION_SRGB);
+ 
+ 	i = 0;
+ 	while (i != hw_points_num + 1) {
+diff --git a/drivers/gpu/drm/aspeed/Kconfig b/drivers/gpu/drm/aspeed/Kconfig
+index 018383cfcfa79..5e95bcea43e92 100644
+--- a/drivers/gpu/drm/aspeed/Kconfig
++++ b/drivers/gpu/drm/aspeed/Kconfig
+@@ -3,6 +3,7 @@ config DRM_ASPEED_GFX
+ 	tristate "ASPEED BMC Display Controller"
+ 	depends on DRM && OF
+ 	depends on (COMPILE_TEST || ARCH_ASPEED)
++	depends on MMU
+ 	select DRM_KMS_HELPER
+ 	select DRM_KMS_CMA_HELPER
+ 	select DMA_CMA if HAVE_DMA_CONTIGUOUS
+diff --git a/drivers/gpu/drm/bridge/ti-tpd12s015.c b/drivers/gpu/drm/bridge/ti-tpd12s015.c
+index 514cbf0eac75a..e0e015243a602 100644
+--- a/drivers/gpu/drm/bridge/ti-tpd12s015.c
++++ b/drivers/gpu/drm/bridge/ti-tpd12s015.c
+@@ -160,7 +160,7 @@ static int tpd12s015_probe(struct platform_device *pdev)
+ 
+ 	/* Register the IRQ if the HPD GPIO is IRQ-capable. */
+ 	tpd->hpd_irq = gpiod_to_irq(tpd->hpd_gpio);
+-	if (tpd->hpd_irq) {
++	if (tpd->hpd_irq >= 0) {
+ 		ret = devm_request_threaded_irq(&pdev->dev, tpd->hpd_irq, NULL,
+ 						tpd12s015_hpd_isr,
+ 						IRQF_TRIGGER_RISING |
+diff --git a/drivers/gpu/drm/drm_dp_aux_dev.c b/drivers/gpu/drm/drm_dp_aux_dev.c
+index 2510717d5a08f..e25181bf2c480 100644
+--- a/drivers/gpu/drm/drm_dp_aux_dev.c
++++ b/drivers/gpu/drm/drm_dp_aux_dev.c
+@@ -63,7 +63,7 @@ static struct drm_dp_aux_dev *drm_dp_aux_dev_get_by_minor(unsigned index)
+ 
+ 	mutex_lock(&aux_idr_mutex);
+ 	aux_dev = idr_find(&aux_idr, index);
+-	if (!kref_get_unless_zero(&aux_dev->refcount))
++	if (aux_dev && !kref_get_unless_zero(&aux_dev->refcount))
+ 		aux_dev = NULL;
+ 	mutex_unlock(&aux_idr_mutex);
+ 
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 631125b46e04c..b7ddf504e0249 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -3102,6 +3102,8 @@ static int drm_cvt_modes(struct drm_connector *connector,
+ 
+ 		height = (cvt->code[0] + ((cvt->code[1] & 0xf0) << 4) + 1) * 2;
+ 		switch (cvt->code[1] & 0x0c) {
++		/* default - because compiler doesn't see that we've enumerated all cases */
++		default:
+ 		case 0x00:
+ 			width = height * 4 / 3;
+ 			break;
+diff --git a/drivers/gpu/drm/gma500/cdv_intel_dp.c b/drivers/gpu/drm/gma500/cdv_intel_dp.c
+index 720a767118c9c..deb4fd13591d2 100644
+--- a/drivers/gpu/drm/gma500/cdv_intel_dp.c
++++ b/drivers/gpu/drm/gma500/cdv_intel_dp.c
+@@ -2083,7 +2083,7 @@ cdv_intel_dp_init(struct drm_device *dev, struct psb_intel_mode_device *mode_dev
+ 			DRM_INFO("failed to retrieve link info, disabling eDP\n");
+ 			drm_encoder_cleanup(encoder);
+ 			cdv_intel_dp_destroy(connector);
+-			goto err_priv;
++			goto err_connector;
+ 		} else {
+         		DRM_DEBUG_KMS("DPCD: Rev=%x LN_Rate=%x LN_CNT=%x LN_DOWNSP=%x\n",
+ 				intel_dp->dpcd[0], intel_dp->dpcd[1], 
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+index b07dc1156a0e6..bcc80f428172b 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+@@ -382,7 +382,7 @@ eb_vma_misplaced(const struct drm_i915_gem_exec_object2 *entry,
+ 		return true;
+ 
+ 	if (!(flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS) &&
+-	    (vma->node.start + vma->node.size - 1) >> 32)
++	    (vma->node.start + vma->node.size + 4095) >> 32)
+ 		return true;
+ 
+ 	if (flags & __EXEC_OBJECT_NEEDS_MAP &&
+diff --git a/drivers/gpu/drm/imx/dcss/dcss-plane.c b/drivers/gpu/drm/imx/dcss/dcss-plane.c
+index 961d671f171b4..f54087ac44d35 100644
+--- a/drivers/gpu/drm/imx/dcss/dcss-plane.c
++++ b/drivers/gpu/drm/imx/dcss/dcss-plane.c
+@@ -111,7 +111,8 @@ static bool dcss_plane_can_rotate(const struct drm_format_info *format,
+ 		supported_rotation = DRM_MODE_ROTATE_0 | DRM_MODE_ROTATE_180 |
+ 				     DRM_MODE_REFLECT_MASK;
+ 	else if (!format->is_yuv &&
+-		 modifier == DRM_FORMAT_MOD_VIVANTE_TILED)
++		 (modifier == DRM_FORMAT_MOD_VIVANTE_TILED ||
++		  modifier == DRM_FORMAT_MOD_VIVANTE_SUPER_TILED))
+ 		supported_rotation = DRM_MODE_ROTATE_MASK |
+ 				     DRM_MODE_REFLECT_MASK;
+ 	else if (format->is_yuv && linear_format &&
+@@ -273,6 +274,7 @@ static void dcss_plane_atomic_update(struct drm_plane *plane,
+ 	u32 src_w, src_h, dst_w, dst_h;
+ 	struct drm_rect src, dst;
+ 	bool enable = true;
++	bool is_rotation_90_or_270;
+ 
+ 	if (!fb || !state->crtc || !state->visible)
+ 		return;
+@@ -311,8 +313,13 @@ static void dcss_plane_atomic_update(struct drm_plane *plane,
+ 
+ 	dcss_plane_atomic_set_base(dcss_plane);
+ 
++	is_rotation_90_or_270 = state->rotation & (DRM_MODE_ROTATE_90 |
++						   DRM_MODE_ROTATE_270);
++
+ 	dcss_scaler_setup(dcss->scaler, dcss_plane->ch_num,
+-			  state->fb->format, src_w, src_h,
++			  state->fb->format,
++			  is_rotation_90_or_270 ? src_h : src_w,
++			  is_rotation_90_or_270 ? src_w : src_h,
+ 			  dst_w, dst_h,
+ 			  drm_mode_vrefresh(&crtc_state->mode));
+ 
+diff --git a/drivers/gpu/drm/mcde/mcde_drv.c b/drivers/gpu/drm/mcde/mcde_drv.c
+index 92f8bd907193f..210f5e1630081 100644
+--- a/drivers/gpu/drm/mcde/mcde_drv.c
++++ b/drivers/gpu/drm/mcde/mcde_drv.c
+@@ -331,8 +331,8 @@ static int mcde_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (!irq) {
+-		ret = -EINVAL;
++	if (irq < 0) {
++		ret = irq;
+ 		goto clk_disable;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+index 8eba44be3a8ae..3064eac1a7507 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+@@ -359,7 +359,7 @@ static const struct mtk_ddp_comp_funcs ddp_ufoe = {
+ 
+ static const char * const mtk_ddp_comp_stem[MTK_DDP_COMP_TYPE_MAX] = {
+ 	[MTK_DISP_OVL] = "ovl",
+-	[MTK_DISP_OVL_2L] = "ovl_2l",
++	[MTK_DISP_OVL_2L] = "ovl-2l",
+ 	[MTK_DISP_RDMA] = "rdma",
+ 	[MTK_DISP_WDMA] = "wdma",
+ 	[MTK_DISP_COLOR] = "color",
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index 8b9c8dd788c41..3d1de9cbb1c8d 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -389,15 +389,17 @@ static void meson_drv_unbind(struct device *dev)
+ 		meson_canvas_free(priv->canvas, priv->canvas_id_vd1_2);
+ 	}
+ 
++	drm_dev_unregister(drm);
++	drm_kms_helper_poll_fini(drm);
++	drm_atomic_helper_shutdown(drm);
++	component_unbind_all(dev, drm);
++	drm_irq_uninstall(drm);
++	drm_dev_put(drm);
++
+ 	if (priv->afbcd.ops) {
+ 		priv->afbcd.ops->reset(priv);
+ 		meson_rdma_free(priv);
+ 	}
+-
+-	drm_dev_unregister(drm);
+-	drm_irq_uninstall(drm);
+-	drm_kms_helper_poll_fini(drm);
+-	drm_dev_put(drm);
+ }
+ 
+ static const struct component_master_ops meson_drv_master_ops = {
+diff --git a/drivers/gpu/drm/meson/meson_dw_hdmi.c b/drivers/gpu/drm/meson/meson_dw_hdmi.c
+index 29a8ff41595d2..aad75a22dc338 100644
+--- a/drivers/gpu/drm/meson/meson_dw_hdmi.c
++++ b/drivers/gpu/drm/meson/meson_dw_hdmi.c
+@@ -145,8 +145,6 @@ struct meson_dw_hdmi {
+ 	struct reset_control *hdmitx_apb;
+ 	struct reset_control *hdmitx_ctrl;
+ 	struct reset_control *hdmitx_phy;
+-	struct clk *hdmi_pclk;
+-	struct clk *venci_clk;
+ 	struct regulator *hdmi_supply;
+ 	u32 irq_stat;
+ 	struct dw_hdmi *hdmi;
+@@ -941,6 +939,34 @@ static void meson_dw_hdmi_init(struct meson_dw_hdmi *meson_dw_hdmi)
+ 
+ }
+ 
++static void meson_disable_regulator(void *data)
++{
++	regulator_disable(data);
++}
++
++static void meson_disable_clk(void *data)
++{
++	clk_disable_unprepare(data);
++}
++
++static int meson_enable_clk(struct device *dev, char *name)
++{
++	struct clk *clk;
++	int ret;
++
++	clk = devm_clk_get(dev, name);
++	if (IS_ERR(clk)) {
++		dev_err(dev, "Unable to get %s pclk\n", name);
++		return PTR_ERR(clk);
++	}
++
++	ret = clk_prepare_enable(clk);
++	if (!ret)
++		ret = devm_add_action_or_reset(dev, meson_disable_clk, clk);
++
++	return ret;
++}
++
+ static int meson_dw_hdmi_bind(struct device *dev, struct device *master,
+ 				void *data)
+ {
+@@ -989,6 +1015,10 @@ static int meson_dw_hdmi_bind(struct device *dev, struct device *master,
+ 		ret = regulator_enable(meson_dw_hdmi->hdmi_supply);
+ 		if (ret)
+ 			return ret;
++		ret = devm_add_action_or_reset(dev, meson_disable_regulator,
++					       meson_dw_hdmi->hdmi_supply);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	meson_dw_hdmi->hdmitx_apb = devm_reset_control_get_exclusive(dev,
+@@ -1017,19 +1047,17 @@ static int meson_dw_hdmi_bind(struct device *dev, struct device *master,
+ 	if (IS_ERR(meson_dw_hdmi->hdmitx))
+ 		return PTR_ERR(meson_dw_hdmi->hdmitx);
+ 
+-	meson_dw_hdmi->hdmi_pclk = devm_clk_get(dev, "isfr");
+-	if (IS_ERR(meson_dw_hdmi->hdmi_pclk)) {
+-		dev_err(dev, "Unable to get HDMI pclk\n");
+-		return PTR_ERR(meson_dw_hdmi->hdmi_pclk);
+-	}
+-	clk_prepare_enable(meson_dw_hdmi->hdmi_pclk);
++	ret = meson_enable_clk(dev, "isfr");
++	if (ret)
++		return ret;
+ 
+-	meson_dw_hdmi->venci_clk = devm_clk_get(dev, "venci");
+-	if (IS_ERR(meson_dw_hdmi->venci_clk)) {
+-		dev_err(dev, "Unable to get venci clk\n");
+-		return PTR_ERR(meson_dw_hdmi->venci_clk);
+-	}
+-	clk_prepare_enable(meson_dw_hdmi->venci_clk);
++	ret = meson_enable_clk(dev, "iahb");
++	if (ret)
++		return ret;
++
++	ret = meson_enable_clk(dev, "venci");
++	if (ret)
++		return ret;
+ 
+ 	dw_plat_data->regm = devm_regmap_init(dev, NULL, meson_dw_hdmi,
+ 					      &meson_dw_hdmi_regmap_config);
+@@ -1062,10 +1090,10 @@ static int meson_dw_hdmi_bind(struct device *dev, struct device *master,
+ 
+ 	encoder->possible_crtcs = BIT(0);
+ 
+-	DRM_DEBUG_DRIVER("encoder initialized\n");
+-
+ 	meson_dw_hdmi_init(meson_dw_hdmi);
+ 
++	DRM_DEBUG_DRIVER("encoder initialized\n");
++
+ 	/* Bridge / Connector */
+ 
+ 	dw_plat_data->priv_data = meson_dw_hdmi;
+diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
+index e5816b4984942..dabb4a1ccdcf7 100644
+--- a/drivers/gpu/drm/msm/Kconfig
++++ b/drivers/gpu/drm/msm/Kconfig
+@@ -4,8 +4,8 @@ config DRM_MSM
+ 	tristate "MSM DRM"
+ 	depends on DRM
+ 	depends on ARCH_QCOM || SOC_IMX5 || (ARM && COMPILE_TEST)
++	depends on IOMMU_SUPPORT
+ 	depends on OF && COMMON_CLK
+-	depends on MMU
+ 	depends on QCOM_OCMEM || QCOM_OCMEM=n
+ 	select IOMMU_IO_PGTABLE
+ 	select QCOM_MDT_LOADER if ARCH_QCOM
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index d6804a8023555..69ed2c6094665 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -755,12 +755,8 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
+ 	gpu_write(gpu, REG_A5XX_CP_RB_CNTL,
+ 		MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE);
+ 
+-	/* Disable preemption if WHERE_AM_I isn't available */
+-	if (!a5xx_gpu->has_whereami && gpu->nr_rings > 1) {
+-		a5xx_preempt_fini(gpu);
+-		gpu->nr_rings = 1;
+-	} else {
+-		/* Create a privileged buffer for the RPTR shadow */
++	/* Create a privileged buffer for the RPTR shadow */
++	if (a5xx_gpu->has_whereami) {
+ 		if (!a5xx_gpu->shadow_bo) {
+ 			a5xx_gpu->shadow = msm_gem_kernel_new(gpu->dev,
+ 				sizeof(u32) * gpu->nr_rings,
+@@ -774,6 +770,10 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
+ 
+ 		gpu_write64(gpu, REG_A5XX_CP_RB_RPTR_ADDR,
+ 			REG_A5XX_CP_RB_RPTR_ADDR_HI, shadowptr(a5xx_gpu, gpu->rb[0]));
++	} else if (gpu->nr_rings > 1) {
++		/* Disable preemption if WHERE_AM_I isn't available */
++		a5xx_preempt_fini(gpu);
++		gpu->nr_rings = 1;
+ 	}
+ 
+ 	a5xx_preempt_hw_init(gpu);
+@@ -1207,7 +1207,9 @@ static int a5xx_pm_resume(struct msm_gpu *gpu)
+ static int a5xx_pm_suspend(struct msm_gpu *gpu)
+ {
+ 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
++	struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
+ 	u32 mask = 0xf;
++	int i, ret;
+ 
+ 	/* A510 has 3 XIN ports in VBIF */
+ 	if (adreno_is_a510(adreno_gpu))
+@@ -1227,7 +1229,15 @@ static int a5xx_pm_suspend(struct msm_gpu *gpu)
+ 	gpu_write(gpu, REG_A5XX_RBBM_BLOCK_SW_RESET_CMD, 0x003C0000);
+ 	gpu_write(gpu, REG_A5XX_RBBM_BLOCK_SW_RESET_CMD, 0x00000000);
+ 
+-	return msm_gpu_pm_suspend(gpu);
++	ret = msm_gpu_pm_suspend(gpu);
++	if (ret)
++		return ret;
++
++	if (a5xx_gpu->has_whereami)
++		for (i = 0; i < gpu->nr_rings; i++)
++			a5xx_gpu->shadow[i] = 0;
++
++	return 0;
+ }
+ 
+ static int a5xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 948f3656c20ca..420ca4a0eb5f7 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1045,12 +1045,21 @@ static int a6xx_pm_suspend(struct msm_gpu *gpu)
+ {
+ 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
++	int i, ret;
+ 
+ 	trace_msm_gpu_suspend(0);
+ 
+ 	devfreq_suspend_device(gpu->devfreq.devfreq);
+ 
+-	return a6xx_gmu_stop(a6xx_gpu);
++	ret = a6xx_gmu_stop(a6xx_gpu);
++	if (ret)
++		return ret;
++
++	if (adreno_gpu->base.hw_apriv || a6xx_gpu->has_whereami)
++		for (i = 0; i < gpu->nr_rings; i++)
++			a6xx_gpu->shadow[i] = 0;
++
++	return 0;
+ }
+ 
+ static int a6xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+index 393858ef8a832..37c8270681c23 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+@@ -219,9 +219,6 @@ static int _dpu_core_perf_crtc_update_bus(struct dpu_kms *kms,
+ 	int i, ret = 0;
+ 	u64 avg_bw;
+ 
+-	if (!kms->num_paths)
+-		return -EINVAL;
+-
+ 	drm_for_each_crtc(tmp_crtc, crtc->dev) {
+ 		if (tmp_crtc->enabled &&
+ 			curr_client_type ==
+@@ -239,6 +236,9 @@ static int _dpu_core_perf_crtc_update_bus(struct dpu_kms *kms,
+ 		}
+ 	}
+ 
++	if (!kms->num_paths)
++		return 0;
++
+ 	avg_bw = perf.bw_ctl;
+ 	do_div(avg_bw, (kms->num_paths * 1000)); /*Bps_to_icc*/
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c
+index b15b4ce4ba35a..4963bfe6a4726 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
+@@ -572,6 +572,19 @@ void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog)
+ 	dp_write_aux(catalog, REG_DP_DP_HPD_CTRL, DP_DP_HPD_CTRL_HPD_EN);
+ }
+ 
++u32 dp_catalog_hpd_get_state_status(struct dp_catalog *dp_catalog)
++{
++	struct dp_catalog_private *catalog = container_of(dp_catalog,
++				struct dp_catalog_private, dp_catalog);
++	u32 status;
++
++	status = dp_read_aux(catalog, REG_DP_DP_HPD_INT_STATUS);
++	status >>= DP_DP_HPD_STATE_STATUS_BITS_SHIFT;
++	status &= DP_DP_HPD_STATE_STATUS_BITS_MASK;
++
++	return status;
++}
++
+ u32 dp_catalog_hpd_get_intr_status(struct dp_catalog *dp_catalog)
+ {
+ 	struct dp_catalog_private *catalog = container_of(dp_catalog,
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.h b/drivers/gpu/drm/msm/dp/dp_catalog.h
+index 4b7666f1fe6fe..6d257dbebf294 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.h
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.h
+@@ -97,6 +97,7 @@ void dp_catalog_ctrl_enable_irq(struct dp_catalog *dp_catalog, bool enable);
+ void dp_catalog_hpd_config_intr(struct dp_catalog *dp_catalog,
+ 			u32 intr_mask, bool en);
+ void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog);
++u32 dp_catalog_hpd_get_state_status(struct dp_catalog *dp_catalog);
+ u32 dp_catalog_hpd_get_intr_status(struct dp_catalog *dp_catalog);
+ void dp_catalog_ctrl_phy_reset(struct dp_catalog *dp_catalog);
+ int dp_catalog_ctrl_update_vx_px(struct dp_catalog *dp_catalog, u8 v_level,
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 2e3e1917351f0..c83a1650437da 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1061,23 +1061,15 @@ static bool dp_ctrl_train_pattern_set(struct dp_ctrl_private *ctrl,
+ static int dp_ctrl_read_link_status(struct dp_ctrl_private *ctrl,
+ 				    u8 *link_status)
+ {
+-	int len = 0;
+-	u32 const offset = DP_LANE_ALIGN_STATUS_UPDATED - DP_LANE0_1_STATUS;
+-	u32 link_status_read_max_retries = 100;
+-
+-	while (--link_status_read_max_retries) {
+-		len = drm_dp_dpcd_read_link_status(ctrl->aux,
+-			link_status);
+-		if (len != DP_LINK_STATUS_SIZE) {
+-			DRM_ERROR("DP link status read failed, err: %d\n", len);
+-			return len;
+-		}
++	int ret = 0, len;
+ 
+-		if (!(link_status[offset] & DP_LINK_STATUS_UPDATED))
+-			return 0;
++	len = drm_dp_dpcd_read_link_status(ctrl->aux, link_status);
++	if (len != DP_LINK_STATUS_SIZE) {
++		DRM_ERROR("DP link status read failed, err: %d\n", len);
++		ret = -EINVAL;
+ 	}
+ 
+-	return -ETIMEDOUT;
++	return ret;
+ }
+ 
+ static int dp_ctrl_link_train_1(struct dp_ctrl_private *ctrl,
+@@ -1400,6 +1392,8 @@ int dp_ctrl_host_init(struct dp_ctrl *dp_ctrl, bool flip)
+ void dp_ctrl_host_deinit(struct dp_ctrl *dp_ctrl)
+ {
+ 	struct dp_ctrl_private *ctrl;
++	struct dp_io *dp_io;
++	struct phy *phy;
+ 
+ 	if (!dp_ctrl) {
+ 		DRM_ERROR("Invalid input data\n");
+@@ -1407,8 +1401,11 @@ void dp_ctrl_host_deinit(struct dp_ctrl *dp_ctrl)
+ 	}
+ 
+ 	ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
++	dp_io = &ctrl->parser->io;
++	phy = dp_io->phy;
+ 
+ 	dp_catalog_ctrl_enable_irq(ctrl->catalog, false);
++	phy_exit(phy);
+ 
+ 	DRM_DEBUG_DP("Host deinitialized successfully\n");
+ }
+@@ -1643,9 +1640,6 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 	if (rc)
+ 		return rc;
+ 
+-	ctrl->link->phy_params.p_level = 0;
+-	ctrl->link->phy_params.v_level = 0;
+-
+ 	while (--link_train_max_retries &&
+ 		!atomic_read(&ctrl->dp_ctrl.aborted)) {
+ 		rc = dp_ctrl_reinitialize_mainlink(ctrl);
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index e175aa3fd3a93..fe0279542a1c2 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -108,14 +108,12 @@ struct dp_display_private {
+ 	/* event related only access by event thread */
+ 	struct mutex event_mutex;
+ 	wait_queue_head_t event_q;
+-	atomic_t hpd_state;
++	u32 hpd_state;
+ 	u32 event_pndx;
+ 	u32 event_gndx;
+ 	struct dp_event event_list[DP_EVENT_Q_MAX];
+ 	spinlock_t event_lock;
+ 
+-	struct completion resume_comp;
+-
+ 	struct dp_audio *audio;
+ };
+ 
+@@ -335,6 +333,7 @@ static int dp_display_process_hpd_high(struct dp_display_private *dp)
+ 	dp->dp_display.max_pclk_khz = DP_MAX_PIXEL_CLK_KHZ;
+ 	dp->dp_display.max_dp_lanes = dp->parser->max_dp_lanes;
+ 
++	dp_link_reset_phy_params_vx_px(dp->link);
+ 	rc = dp_ctrl_on_link(dp->ctrl);
+ 	if (rc) {
+ 		DRM_ERROR("failed to complete DP link training\n");
+@@ -366,6 +365,20 @@ static void dp_display_host_init(struct dp_display_private *dp)
+ 	dp->core_initialized = true;
+ }
+ 
++static void dp_display_host_deinit(struct dp_display_private *dp)
++{
++	if (!dp->core_initialized) {
++		DRM_DEBUG_DP("DP core not initialized\n");
++		return;
++	}
++
++	dp_ctrl_host_deinit(dp->ctrl);
++	dp_aux_deinit(dp->aux);
++	dp_power_deinit(dp->power);
++
++	dp->core_initialized = false;
++}
++
+ static int dp_display_usbpd_configure_cb(struct device *dev)
+ {
+ 	int rc = 0;
+@@ -490,7 +503,7 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
+ 
+ 	mutex_lock(&dp->event_mutex);
+ 
+-	state =  atomic_read(&dp->hpd_state);
++	state =  dp->hpd_state;
+ 	if (state == ST_SUSPEND_PENDING) {
+ 		mutex_unlock(&dp->event_mutex);
+ 		return 0;
+@@ -508,17 +521,14 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
+ 		return 0;
+ 	}
+ 
+-	if (state == ST_SUSPENDED)
+-		tout = DP_TIMEOUT_NONE;
+-
+-	atomic_set(&dp->hpd_state, ST_CONNECT_PENDING);
++	dp->hpd_state = ST_CONNECT_PENDING;
+ 
+ 	hpd->hpd_high = 1;
+ 
+ 	ret = dp_display_usbpd_configure_cb(&dp->pdev->dev);
+ 	if (ret) {	/* failed */
+ 		hpd->hpd_high = 0;
+-		atomic_set(&dp->hpd_state, ST_DISCONNECTED);
++		dp->hpd_state = ST_DISCONNECTED;
+ 	}
+ 
+ 	/* start sanity checking */
+@@ -539,10 +549,10 @@ static int dp_connect_pending_timeout(struct dp_display_private *dp, u32 data)
+ 
+ 	mutex_lock(&dp->event_mutex);
+ 
+-	state =  atomic_read(&dp->hpd_state);
++	state = dp->hpd_state;
+ 	if (state == ST_CONNECT_PENDING) {
+ 		dp_display_enable(dp, 0);
+-		atomic_set(&dp->hpd_state, ST_CONNECTED);
++		dp->hpd_state = ST_CONNECTED;
+ 	}
+ 
+ 	mutex_unlock(&dp->event_mutex);
+@@ -553,7 +563,14 @@ static int dp_connect_pending_timeout(struct dp_display_private *dp, u32 data)
+ static void dp_display_handle_plugged_change(struct msm_dp *dp_display,
+ 		bool plugged)
+ {
+-	if (dp_display->plugged_cb && dp_display->codec_dev)
++	struct dp_display_private *dp;
++
++	dp = container_of(dp_display,
++			struct dp_display_private, dp_display);
++
++	/* notify audio subsystem only if sink supports audio */
++	if (dp_display->plugged_cb && dp_display->codec_dev &&
++			dp->audio_supported)
+ 		dp_display->plugged_cb(dp_display->codec_dev, plugged);
+ }
+ 
+@@ -567,7 +584,7 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ 
+ 	mutex_lock(&dp->event_mutex);
+ 
+-	state = atomic_read(&dp->hpd_state);
++	state = dp->hpd_state;
+ 	if (state == ST_SUSPEND_PENDING) {
+ 		mutex_unlock(&dp->event_mutex);
+ 		return 0;
+@@ -585,7 +602,7 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ 		return 0;
+ 	}
+ 
+-	atomic_set(&dp->hpd_state, ST_DISCONNECT_PENDING);
++	dp->hpd_state = ST_DISCONNECT_PENDING;
+ 
+ 	/* disable HPD plug interrupt until disconnect is done */
+ 	dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK
+@@ -620,10 +637,10 @@ static int dp_disconnect_pending_timeout(struct dp_display_private *dp, u32 data
+ 
+ 	mutex_lock(&dp->event_mutex);
+ 
+-	state =  atomic_read(&dp->hpd_state);
++	state =  dp->hpd_state;
+ 	if (state == ST_DISCONNECT_PENDING) {
+ 		dp_display_disable(dp, 0);
+-		atomic_set(&dp->hpd_state, ST_DISCONNECTED);
++		dp->hpd_state = ST_DISCONNECTED;
+ 	}
+ 
+ 	mutex_unlock(&dp->event_mutex);
+@@ -638,7 +655,7 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
+ 	mutex_lock(&dp->event_mutex);
+ 
+ 	/* irq_hpd can happen at either connected or disconnected state */
+-	state =  atomic_read(&dp->hpd_state);
++	state =  dp->hpd_state;
+ 	if (state == ST_SUSPEND_PENDING) {
+ 		mutex_unlock(&dp->event_mutex);
+ 		return 0;
+@@ -789,17 +806,10 @@ static int dp_display_enable(struct dp_display_private *dp, u32 data)
+ 
+ 	dp_display = g_dp_display;
+ 
+-	if (dp_display->power_on) {
+-		DRM_DEBUG_DP("Link already setup, return\n");
+-		return 0;
+-	}
+-
+ 	rc = dp_ctrl_on_stream(dp->ctrl);
+ 	if (!rc)
+ 		dp_display->power_on = true;
+ 
+-	/* complete resume_comp regardless it is armed or not */
+-	complete(&dp->resume_comp);
+ 	return rc;
+ }
+ 
+@@ -828,9 +838,6 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data)
+ 
+ 	dp_display = g_dp_display;
+ 
+-	if (!dp_display->power_on)
+-		return -EINVAL;
+-
+ 	/* wait only if audio was enabled */
+ 	if (dp_display->audio_enabled) {
+ 		if (!wait_for_completion_timeout(&dp->audio_comp,
+@@ -1151,9 +1158,6 @@ static int dp_display_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	mutex_init(&dp->event_mutex);
+-
+-	init_completion(&dp->resume_comp);
+-
+ 	g_dp_display = &dp->dp_display;
+ 
+ 	/* Store DP audio handle inside DP display */
+@@ -1189,20 +1193,54 @@ static int dp_display_remove(struct platform_device *pdev)
+ 
+ static int dp_pm_resume(struct device *dev)
+ {
++	struct platform_device *pdev = to_platform_device(dev);
++	struct msm_dp *dp_display = platform_get_drvdata(pdev);
++	struct dp_display_private *dp;
++	u32 status;
++
++	dp = container_of(dp_display, struct dp_display_private, dp_display);
++
++	mutex_lock(&dp->event_mutex);
++
++	/* start from disconnected state */
++	dp->hpd_state = ST_DISCONNECTED;
++
++	/* turn on dp ctrl/phy */
++	dp_display_host_init(dp);
++
++	dp_catalog_ctrl_hpd_config(dp->catalog);
++
++	status = dp_catalog_hpd_get_state_status(dp->catalog);
++
++	if (status) {
++		dp->dp_display.is_connected = true;
++	} else {
++		dp->dp_display.is_connected = false;
++		/* make sure next resume host_init be called */
++		dp->core_initialized = false;
++	}
++
++	mutex_unlock(&dp->event_mutex);
++
+ 	return 0;
+ }
+ 
+ static int dp_pm_suspend(struct device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+-	struct dp_display_private *dp = platform_get_drvdata(pdev);
++	struct msm_dp *dp_display = platform_get_drvdata(pdev);
++	struct dp_display_private *dp;
+ 
+-	if (!dp) {
+-		DRM_ERROR("DP driver bind failed. Invalid driver data\n");
+-		return -EINVAL;
+-	}
++	dp = container_of(dp_display, struct dp_display_private, dp_display);
++
++	mutex_lock(&dp->event_mutex);
+ 
+-	atomic_set(&dp->hpd_state, ST_SUSPENDED);
++	if (dp->core_initialized == true)
++		dp_display_host_deinit(dp);
++
++	dp->hpd_state = ST_SUSPENDED;
++
++	mutex_unlock(&dp->event_mutex);
+ 
+ 	return 0;
+ }
+@@ -1317,19 +1355,6 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev,
+ 	return 0;
+ }
+ 
+-static int dp_display_wait4resume_done(struct dp_display_private *dp)
+-{
+-	int ret = 0;
+-
+-	reinit_completion(&dp->resume_comp);
+-	if (!wait_for_completion_timeout(&dp->resume_comp,
+-				WAIT_FOR_RESUME_TIMEOUT_JIFFIES)) {
+-		DRM_ERROR("wait4resume_done timedout\n");
+-		ret = -ETIMEDOUT;
+-	}
+-	return ret;
+-}
+-
+ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
+ {
+ 	int rc = 0;
+@@ -1344,6 +1369,8 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
+ 
+ 	mutex_lock(&dp_display->event_mutex);
+ 
++	dp_del_event(dp_display, EV_CONNECT_PENDING_TIMEOUT);
++
+ 	rc = dp_display_set_mode(dp, &dp_display->dp_mode);
+ 	if (rc) {
+ 		DRM_ERROR("Failed to perform a mode set, rc=%d\n", rc);
+@@ -1358,15 +1385,10 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
+ 		return rc;
+ 	}
+ 
+-	state =  atomic_read(&dp_display->hpd_state);
+-	if (state == ST_SUSPENDED) {
+-		/* start link training */
+-		dp_add_event(dp_display, EV_HPD_PLUG_INT, 0, 0);
+-		mutex_unlock(&dp_display->event_mutex);
++	state =  dp_display->hpd_state;
+ 
+-		/* wait until dp interface is up */
+-		goto resume_done;
+-	}
++	if (state == ST_SUSPEND_PENDING)
++		dp_display_host_init(dp_display);
+ 
+ 	dp_display_enable(dp_display, 0);
+ 
+@@ -1377,21 +1399,15 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
+ 		dp_display_unprepare(dp);
+ 	}
+ 
+-	dp_del_event(dp_display, EV_CONNECT_PENDING_TIMEOUT);
+-
+ 	if (state == ST_SUSPEND_PENDING)
+ 		dp_add_event(dp_display, EV_IRQ_HPD_INT, 0, 0);
+ 
+ 	/* completed connection */
+-	atomic_set(&dp_display->hpd_state, ST_CONNECTED);
++	dp_display->hpd_state = ST_CONNECTED;
+ 
+ 	mutex_unlock(&dp_display->event_mutex);
+ 
+ 	return rc;
+-
+-resume_done:
+-	dp_display_wait4resume_done(dp_display);
+-	return rc;
+ }
+ 
+ int msm_dp_display_pre_disable(struct msm_dp *dp, struct drm_encoder *encoder)
+@@ -1415,20 +1431,20 @@ int msm_dp_display_disable(struct msm_dp *dp, struct drm_encoder *encoder)
+ 
+ 	mutex_lock(&dp_display->event_mutex);
+ 
++	dp_del_event(dp_display, EV_DISCONNECT_PENDING_TIMEOUT);
++
+ 	dp_display_disable(dp_display, 0);
+ 
+ 	rc = dp_display_unprepare(dp);
+ 	if (rc)
+ 		DRM_ERROR("DP display unprepare failed, rc=%d\n", rc);
+ 
+-	dp_del_event(dp_display, EV_DISCONNECT_PENDING_TIMEOUT);
+-
+-	state =  atomic_read(&dp_display->hpd_state);
++	state =  dp_display->hpd_state;
+ 	if (state == ST_DISCONNECT_PENDING) {
+ 		/* completed disconnection */
+-		atomic_set(&dp_display->hpd_state, ST_DISCONNECTED);
++		dp_display->hpd_state = ST_DISCONNECTED;
+ 	} else {
+-		atomic_set(&dp_display->hpd_state, ST_SUSPEND_PENDING);
++		dp_display->hpd_state = ST_SUSPEND_PENDING;
+ 	}
+ 
+ 	mutex_unlock(&dp_display->event_mutex);
+diff --git a/drivers/gpu/drm/msm/dp/dp_link.c b/drivers/gpu/drm/msm/dp/dp_link.c
+index c811da515fb3b..be986da78c4a5 100644
+--- a/drivers/gpu/drm/msm/dp/dp_link.c
++++ b/drivers/gpu/drm/msm/dp/dp_link.c
+@@ -773,7 +773,8 @@ static int dp_link_process_link_training_request(struct dp_link_private *link)
+ 			link->request.test_lane_count);
+ 
+ 	link->dp_link.link_params.num_lanes = link->request.test_lane_count;
+-	link->dp_link.link_params.rate = link->request.test_link_rate;
++	link->dp_link.link_params.rate = 
++		drm_dp_bw_code_to_link_rate(link->request.test_link_rate);
+ 
+ 	return 0;
+ }
+@@ -869,6 +870,9 @@ static int dp_link_parse_vx_px(struct dp_link_private *link)
+ 		drm_dp_get_adjust_request_voltage(link->link_status, 0);
+ 	link->dp_link.phy_params.p_level =
+ 		drm_dp_get_adjust_request_pre_emphasis(link->link_status, 0);
++
++	link->dp_link.phy_params.p_level >>= DP_TRAIN_PRE_EMPHASIS_SHIFT;
++
+ 	DRM_DEBUG_DP("Requested: v_level = 0x%x, p_level = 0x%x\n",
+ 			link->dp_link.phy_params.v_level,
+ 			link->dp_link.phy_params.p_level);
+@@ -911,7 +915,8 @@ static int dp_link_process_phy_test_pattern_request(
+ 			link->request.test_lane_count);
+ 
+ 	link->dp_link.link_params.num_lanes = link->request.test_lane_count;
+-	link->dp_link.link_params.rate = link->request.test_link_rate;
++	link->dp_link.link_params.rate =
++		drm_dp_bw_code_to_link_rate(link->request.test_link_rate);
+ 
+ 	ret = dp_link_parse_vx_px(link);
+ 
+@@ -939,22 +944,20 @@ static u8 get_link_status(const u8 link_status[DP_LINK_STATUS_SIZE], int r)
+  */
+ static int dp_link_process_link_status_update(struct dp_link_private *link)
+ {
+-	if (!(get_link_status(link->link_status,
+-				DP_LANE_ALIGN_STATUS_UPDATED) &
+-				DP_LINK_STATUS_UPDATED) ||
+-			(drm_dp_clock_recovery_ok(link->link_status,
+-					link->dp_link.link_params.num_lanes) &&
+-			drm_dp_channel_eq_ok(link->link_status,
+-					link->dp_link.link_params.num_lanes)))
+-		return -EINVAL;
++       bool channel_eq_done = drm_dp_channel_eq_ok(link->link_status,
++                       link->dp_link.link_params.num_lanes);
+ 
+-	DRM_DEBUG_DP("channel_eq_done = %d, clock_recovery_done = %d\n",
+-			drm_dp_clock_recovery_ok(link->link_status,
+-			link->dp_link.link_params.num_lanes),
+-			drm_dp_clock_recovery_ok(link->link_status,
+-			link->dp_link.link_params.num_lanes));
++       bool clock_recovery_done = drm_dp_clock_recovery_ok(link->link_status,
++                       link->dp_link.link_params.num_lanes);
+ 
+-	return 0;
++       DRM_DEBUG_DP("channel_eq_done = %d, clock_recovery_done = %d\n",
++                        channel_eq_done, clock_recovery_done);
++
++       if (channel_eq_done && clock_recovery_done)
++               return -EINVAL;
++
++
++       return 0;
+ }
+ 
+ /**
+@@ -1156,6 +1159,12 @@ int dp_link_adjust_levels(struct dp_link *dp_link, u8 *link_status)
+ 	return 0;
+ }
+ 
++void dp_link_reset_phy_params_vx_px(struct dp_link *dp_link)
++{
++	dp_link->phy_params.v_level = 0;
++	dp_link->phy_params.p_level = 0;
++}
++
+ u32 dp_link_get_test_bits_depth(struct dp_link *dp_link, u32 bpp)
+ {
+ 	u32 tbd;
+diff --git a/drivers/gpu/drm/msm/dp/dp_link.h b/drivers/gpu/drm/msm/dp/dp_link.h
+index 49811b6221e53..9dd4dd9265304 100644
+--- a/drivers/gpu/drm/msm/dp/dp_link.h
++++ b/drivers/gpu/drm/msm/dp/dp_link.h
+@@ -135,6 +135,7 @@ static inline u32 dp_link_bit_depth_to_bpc(u32 tbd)
+ 	}
+ }
+ 
++void dp_link_reset_phy_params_vx_px(struct dp_link *dp_link);
+ u32 dp_link_get_test_bits_depth(struct dp_link *dp_link, u32 bpp);
+ int dp_link_process_request(struct dp_link *dp_link);
+ int dp_link_get_colorimetry_config(struct dp_link *dp_link);
+diff --git a/drivers/gpu/drm/msm/dp/dp_reg.h b/drivers/gpu/drm/msm/dp/dp_reg.h
+index 43042ff90a199..268602803d9a3 100644
+--- a/drivers/gpu/drm/msm/dp/dp_reg.h
++++ b/drivers/gpu/drm/msm/dp/dp_reg.h
+@@ -32,6 +32,8 @@
+ #define DP_DP_IRQ_HPD_INT_ACK			(0x00000002)
+ #define DP_DP_HPD_REPLUG_INT_ACK		(0x00000004)
+ #define DP_DP_HPD_UNPLUG_INT_ACK		(0x00000008)
++#define DP_DP_HPD_STATE_STATUS_BITS_MASK	(0x0000000F)
++#define DP_DP_HPD_STATE_STATUS_BITS_SHIFT	(0x1C)
+ 
+ #define REG_DP_DP_HPD_INT_MASK			(0x0000000C)
+ #define DP_DP_HPD_PLUG_INT_MASK			(0x00000001)
+diff --git a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_10nm.c b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_10nm.c
+index 6ac04fc303f56..e4e9bf04b7368 100644
+--- a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_10nm.c
++++ b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_10nm.c
+@@ -559,6 +559,7 @@ static int dsi_pll_10nm_restore_state(struct msm_dsi_pll *pll)
+ 	struct pll_10nm_cached_state *cached = &pll_10nm->cached_state;
+ 	void __iomem *phy_base = pll_10nm->phy_cmn_mmio;
+ 	u32 val;
++	int ret;
+ 
+ 	val = pll_read(pll_10nm->mmio + REG_DSI_10nm_PHY_PLL_PLL_OUTDIV_RATE);
+ 	val &= ~0x3;
+@@ -573,6 +574,13 @@ static int dsi_pll_10nm_restore_state(struct msm_dsi_pll *pll)
+ 	val |= cached->pll_mux;
+ 	pll_write(phy_base + REG_DSI_10nm_PHY_CMN_CLK_CFG1, val);
+ 
++	ret = dsi_pll_10nm_vco_set_rate(&pll->clk_hw, pll_10nm->vco_current_rate, pll_10nm->vco_ref_clk_rate);
++	if (ret) {
++		DRM_DEV_ERROR(&pll_10nm->pdev->dev,
++			"restore vco rate failed. ret=%d\n", ret);
++		return ret;
++	}
++
+ 	DBG("DSI PLL%d", pll_10nm->id);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
+index de0dfb8151258..93bf142e4a4e6 100644
+--- a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
++++ b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
+@@ -585,6 +585,7 @@ static int dsi_pll_7nm_restore_state(struct msm_dsi_pll *pll)
+ 	struct pll_7nm_cached_state *cached = &pll_7nm->cached_state;
+ 	void __iomem *phy_base = pll_7nm->phy_cmn_mmio;
+ 	u32 val;
++	int ret;
+ 
+ 	val = pll_read(pll_7nm->mmio + REG_DSI_7nm_PHY_PLL_PLL_OUTDIV_RATE);
+ 	val &= ~0x3;
+@@ -599,6 +600,13 @@ static int dsi_pll_7nm_restore_state(struct msm_dsi_pll *pll)
+ 	val |= cached->pll_mux;
+ 	pll_write(phy_base + REG_DSI_7nm_PHY_CMN_CLK_CFG1, val);
+ 
++	ret = dsi_pll_7nm_vco_set_rate(&pll->clk_hw, pll_7nm->vco_current_rate, pll_7nm->vco_ref_clk_rate);
++	if (ret) {
++		DRM_DEV_ERROR(&pll_7nm->pdev->dev,
++			"restore vco rate failed. ret=%d\n", ret);
++		return ret;
++	}
++
+ 	DBG("DSI PLL%d", pll_7nm->id);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
+index b9dd8f8f48872..0b2686b060c73 100644
+--- a/drivers/gpu/drm/msm/msm_drv.h
++++ b/drivers/gpu/drm/msm/msm_drv.h
+@@ -423,6 +423,11 @@ static inline int msm_dp_display_disable(struct msm_dp *dp,
+ {
+ 	return -EINVAL;
+ }
++static inline int msm_dp_display_pre_disable(struct msm_dp *dp,
++					struct drm_encoder *encoder)
++{
++	return -EINVAL;
++}
+ static inline void msm_dp_display_mode_set(struct msm_dp *dp,
+ 				struct drm_encoder *encoder,
+ 				struct drm_display_mode *mode,
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_drv.c b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+index 35122aef037b4..17f26052e8450 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_drv.c
++++ b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+@@ -134,11 +134,8 @@ static int mxsfb_attach_bridge(struct mxsfb_drm_private *mxsfb)
+ 		return -ENODEV;
+ 
+ 	ret = drm_bridge_attach(&mxsfb->encoder, bridge, NULL, 0);
+-	if (ret) {
+-		DRM_DEV_ERROR(drm->dev,
+-			      "failed to attach bridge: %d\n", ret);
+-		return ret;
+-	}
++	if (ret)
++		return dev_err_probe(drm->dev, ret, "Failed to attach bridge\n");
+ 
+ 	mxsfb->bridge = bridge;
+ 
+@@ -212,7 +209,8 @@ static int mxsfb_load(struct drm_device *drm,
+ 
+ 	ret = mxsfb_attach_bridge(mxsfb);
+ 	if (ret) {
+-		dev_err(drm->dev, "Cannot connect bridge: %d\n", ret);
++		if (ret != -EPROBE_DEFER)
++			dev_err(drm->dev, "Cannot connect bridge: %d\n", ret);
+ 		goto err_vblank;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
+index 42ec51bb7b1b0..7f43172488123 100644
+--- a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
++++ b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
+@@ -889,6 +889,7 @@ static int omap_dmm_probe(struct platform_device *dev)
+ 					   &omap_dmm->refill_pa, GFP_KERNEL);
+ 	if (!omap_dmm->refill_va) {
+ 		dev_err(&dev->dev, "could not allocate refill memory\n");
++		ret = -ENOMEM;
+ 		goto fail;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 2be358fb46f7d..204674fccd646 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -1327,6 +1327,7 @@ static const struct drm_display_mode boe_nv133fhm_n61_modes = {
+ 	.vsync_start = 1080 + 3,
+ 	.vsync_end = 1080 + 3 + 6,
+ 	.vtotal = 1080 + 3 + 6 + 31,
++	.flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_NVSYNC,
+ };
+ 
+ /* Also used for boe_nv133fhm_n62 */
+diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
+index e6896733838ab..bf7c34cfb84c0 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_device.c
++++ b/drivers/gpu/drm/panfrost/panfrost_device.c
+@@ -206,7 +206,6 @@ int panfrost_device_init(struct panfrost_device *pfdev)
+ 	struct resource *res;
+ 
+ 	mutex_init(&pfdev->sched_lock);
+-	mutex_init(&pfdev->reset_lock);
+ 	INIT_LIST_HEAD(&pfdev->scheduled_jobs);
+ 	INIT_LIST_HEAD(&pfdev->as_lru_list);
+ 
+diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
+index 2e9cbd1c4a58e..67f9f66904be2 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_device.h
++++ b/drivers/gpu/drm/panfrost/panfrost_device.h
+@@ -105,7 +105,11 @@ struct panfrost_device {
+ 	struct panfrost_perfcnt *perfcnt;
+ 
+ 	struct mutex sched_lock;
+-	struct mutex reset_lock;
++
++	struct {
++		struct work_struct work;
++		atomic_t pending;
++	} reset;
+ 
+ 	struct mutex shrinker_lock;
+ 	struct list_head shrinker_list;
+diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
+index 30e7b7196dab0..1ce2001106e56 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_job.c
++++ b/drivers/gpu/drm/panfrost/panfrost_job.c
+@@ -20,12 +20,22 @@
+ #include "panfrost_gpu.h"
+ #include "panfrost_mmu.h"
+ 
++#define JOB_TIMEOUT_MS 500
++
+ #define job_write(dev, reg, data) writel(data, dev->iomem + (reg))
+ #define job_read(dev, reg) readl(dev->iomem + (reg))
+ 
++enum panfrost_queue_status {
++	PANFROST_QUEUE_STATUS_ACTIVE,
++	PANFROST_QUEUE_STATUS_STOPPED,
++	PANFROST_QUEUE_STATUS_STARTING,
++	PANFROST_QUEUE_STATUS_FAULT_PENDING,
++};
++
+ struct panfrost_queue_state {
+ 	struct drm_gpu_scheduler sched;
+-
++	atomic_t status;
++	struct mutex lock;
+ 	u64 fence_context;
+ 	u64 emit_seqno;
+ };
+@@ -369,13 +379,64 @@ void panfrost_job_enable_interrupts(struct panfrost_device *pfdev)
+ 	job_write(pfdev, JOB_INT_MASK, irq_mask);
+ }
+ 
++static bool panfrost_scheduler_stop(struct panfrost_queue_state *queue,
++				    struct drm_sched_job *bad)
++{
++	enum panfrost_queue_status old_status;
++	bool stopped = false;
++
++	mutex_lock(&queue->lock);
++	old_status = atomic_xchg(&queue->status,
++				 PANFROST_QUEUE_STATUS_STOPPED);
++	if (old_status == PANFROST_QUEUE_STATUS_STOPPED)
++		goto out;
++
++	WARN_ON(old_status != PANFROST_QUEUE_STATUS_ACTIVE);
++	drm_sched_stop(&queue->sched, bad);
++	if (bad)
++		drm_sched_increase_karma(bad);
++
++	stopped = true;
++
++	/*
++	 * Set the timeout to max so the timer doesn't get started
++	 * when we return from the timeout handler (restored in
++	 * panfrost_scheduler_start()).
++	 */
++	queue->sched.timeout = MAX_SCHEDULE_TIMEOUT;
++
++out:
++	mutex_unlock(&queue->lock);
++
++	return stopped;
++}
++
++static void panfrost_scheduler_start(struct panfrost_queue_state *queue)
++{
++	enum panfrost_queue_status old_status;
++
++	mutex_lock(&queue->lock);
++	old_status = atomic_xchg(&queue->status,
++				 PANFROST_QUEUE_STATUS_STARTING);
++	WARN_ON(old_status != PANFROST_QUEUE_STATUS_STOPPED);
++
++	/* Restore the original timeout before starting the scheduler. */
++	queue->sched.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
++	drm_sched_resubmit_jobs(&queue->sched);
++	drm_sched_start(&queue->sched, true);
++	old_status = atomic_xchg(&queue->status,
++				 PANFROST_QUEUE_STATUS_ACTIVE);
++	if (old_status == PANFROST_QUEUE_STATUS_FAULT_PENDING)
++		drm_sched_fault(&queue->sched);
++
++	mutex_unlock(&queue->lock);
++}
++
+ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
+ {
+ 	struct panfrost_job *job = to_panfrost_job(sched_job);
+ 	struct panfrost_device *pfdev = job->pfdev;
+ 	int js = panfrost_job_get_slot(job);
+-	unsigned long flags;
+-	int i;
+ 
+ 	/*
+ 	 * If the GPU managed to complete this jobs fence, the timeout is
+@@ -392,40 +453,13 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
+ 		job_read(pfdev, JS_TAIL_LO(js)),
+ 		sched_job);
+ 
+-	if (!mutex_trylock(&pfdev->reset_lock))
++	/* Scheduler is already stopped, nothing to do. */
++	if (!panfrost_scheduler_stop(&pfdev->js->queue[js], sched_job))
+ 		return;
+ 
+-	for (i = 0; i < NUM_JOB_SLOTS; i++) {
+-		struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
+-
+-		drm_sched_stop(sched, sched_job);
+-		if (js != i)
+-			/* Ensure any timeouts on other slots have finished */
+-			cancel_delayed_work_sync(&sched->work_tdr);
+-	}
+-
+-	drm_sched_increase_karma(sched_job);
+-
+-	spin_lock_irqsave(&pfdev->js->job_lock, flags);
+-	for (i = 0; i < NUM_JOB_SLOTS; i++) {
+-		if (pfdev->jobs[i]) {
+-			pm_runtime_put_noidle(pfdev->dev);
+-			panfrost_devfreq_record_idle(&pfdev->pfdevfreq);
+-			pfdev->jobs[i] = NULL;
+-		}
+-	}
+-	spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
+-
+-	panfrost_device_reset(pfdev);
+-
+-	for (i = 0; i < NUM_JOB_SLOTS; i++)
+-		drm_sched_resubmit_jobs(&pfdev->js->queue[i].sched);
+-
+-	/* restart scheduler after GPU is usable again */
+-	for (i = 0; i < NUM_JOB_SLOTS; i++)
+-		drm_sched_start(&pfdev->js->queue[i].sched, true);
+-
+-	mutex_unlock(&pfdev->reset_lock);
++	/* Schedule a reset if there's no reset in progress. */
++	if (!atomic_xchg(&pfdev->reset.pending, 1))
++		schedule_work(&pfdev->reset.work);
+ }
+ 
+ static const struct drm_sched_backend_ops panfrost_sched_ops = {
+@@ -457,6 +491,8 @@ static irqreturn_t panfrost_job_irq_handler(int irq, void *data)
+ 		job_write(pfdev, JOB_INT_CLEAR, mask);
+ 
+ 		if (status & JOB_INT_MASK_ERR(j)) {
++			enum panfrost_queue_status old_status;
++
+ 			job_write(pfdev, JS_COMMAND_NEXT(j), JS_COMMAND_NOP);
+ 
+ 			dev_err(pfdev->dev, "js fault, js=%d, status=%s, head=0x%x, tail=0x%x",
+@@ -465,7 +501,18 @@ static irqreturn_t panfrost_job_irq_handler(int irq, void *data)
+ 				job_read(pfdev, JS_HEAD_LO(j)),
+ 				job_read(pfdev, JS_TAIL_LO(j)));
+ 
+-			drm_sched_fault(&pfdev->js->queue[j].sched);
++			/*
++			 * When the queue is being restarted we don't report
++			 * faults directly to avoid races between the timeout
++			 * and reset handlers. panfrost_scheduler_start() will
++			 * call drm_sched_fault() after the queue has been
++			 * started if status == FAULT_PENDING.
++			 */
++			old_status = atomic_cmpxchg(&pfdev->js->queue[j].status,
++						    PANFROST_QUEUE_STATUS_STARTING,
++						    PANFROST_QUEUE_STATUS_FAULT_PENDING);
++			if (old_status == PANFROST_QUEUE_STATUS_ACTIVE)
++				drm_sched_fault(&pfdev->js->queue[j].sched);
+ 		}
+ 
+ 		if (status & JOB_INT_MASK_DONE(j)) {
+@@ -492,11 +539,66 @@ static irqreturn_t panfrost_job_irq_handler(int irq, void *data)
+ 	return IRQ_HANDLED;
+ }
+ 
++static void panfrost_reset(struct work_struct *work)
++{
++	struct panfrost_device *pfdev = container_of(work,
++						     struct panfrost_device,
++						     reset.work);
++	unsigned long flags;
++	unsigned int i;
++	bool cookie;
++
++	cookie = dma_fence_begin_signalling();
++	for (i = 0; i < NUM_JOB_SLOTS; i++) {
++		/*
++		 * We want pending timeouts to be handled before we attempt
++		 * to stop the scheduler. If we don't do that and the timeout
++		 * handler is in flight, it might have removed the bad job
++		 * from the list, and we'll lose this job if the reset handler
++		 * enters the critical section in panfrost_scheduler_stop()
++		 * before the timeout handler.
++		 *
++		 * Timeout is set to MAX_SCHEDULE_TIMEOUT - 1 because we need
++		 * something big enough to make sure the timer will not expire
++		 * before we manage to stop the scheduler, but we can't use
++		 * MAX_SCHEDULE_TIMEOUT because drm_sched_get_cleanup_job()
++		 * considers that as 'timer is not running' and will dequeue
++		 * the job without making sure the timeout handler is not
++		 * running.
++		 */
++		pfdev->js->queue[i].sched.timeout = MAX_SCHEDULE_TIMEOUT - 1;
++		cancel_delayed_work_sync(&pfdev->js->queue[i].sched.work_tdr);
++		panfrost_scheduler_stop(&pfdev->js->queue[i], NULL);
++	}
++
++	/* All timers have been stopped, we can safely reset the pending state. */
++	atomic_set(&pfdev->reset.pending, 0);
++
++	spin_lock_irqsave(&pfdev->js->job_lock, flags);
++	for (i = 0; i < NUM_JOB_SLOTS; i++) {
++		if (pfdev->jobs[i]) {
++			pm_runtime_put_noidle(pfdev->dev);
++			panfrost_devfreq_record_idle(&pfdev->pfdevfreq);
++			pfdev->jobs[i] = NULL;
++		}
++	}
++	spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
++
++	panfrost_device_reset(pfdev);
++
++	for (i = 0; i < NUM_JOB_SLOTS; i++)
++		panfrost_scheduler_start(&pfdev->js->queue[i]);
++
++	dma_fence_end_signalling(cookie);
++}
++
+ int panfrost_job_init(struct panfrost_device *pfdev)
+ {
+ 	struct panfrost_job_slot *js;
+ 	int ret, j, irq;
+ 
++	INIT_WORK(&pfdev->reset.work, panfrost_reset);
++
+ 	pfdev->js = js = devm_kzalloc(pfdev->dev, sizeof(*js), GFP_KERNEL);
+ 	if (!js)
+ 		return -ENOMEM;
+@@ -519,7 +621,7 @@ int panfrost_job_init(struct panfrost_device *pfdev)
+ 
+ 		ret = drm_sched_init(&js->queue[j].sched,
+ 				     &panfrost_sched_ops,
+-				     1, 0, msecs_to_jiffies(500),
++				     1, 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
+ 				     "pan_js");
+ 		if (ret) {
+ 			dev_err(pfdev->dev, "Failed to create scheduler: %d.", ret);
+@@ -558,6 +660,7 @@ int panfrost_job_open(struct panfrost_file_priv *panfrost_priv)
+ 	int ret, i;
+ 
+ 	for (i = 0; i < NUM_JOB_SLOTS; i++) {
++		mutex_init(&js->queue[i].lock);
+ 		sched = &js->queue[i].sched;
+ 		ret = drm_sched_entity_init(&panfrost_priv->sched_entity[i],
+ 					    DRM_SCHED_PRIORITY_NORMAL, &sched,
+@@ -570,10 +673,14 @@ int panfrost_job_open(struct panfrost_file_priv *panfrost_priv)
+ 
+ void panfrost_job_close(struct panfrost_file_priv *panfrost_priv)
+ {
++	struct panfrost_device *pfdev = panfrost_priv->pfdev;
++	struct panfrost_job_slot *js = pfdev->js;
+ 	int i;
+ 
+-	for (i = 0; i < NUM_JOB_SLOTS; i++)
++	for (i = 0; i < NUM_JOB_SLOTS; i++) {
+ 		drm_sched_entity_destroy(&panfrost_priv->sched_entity[i]);
++		mutex_destroy(&js->queue[i].lock);
++	}
+ }
+ 
+ int panfrost_job_is_idle(struct panfrost_device *pfdev)
+diff --git a/drivers/gpu/drm/tve200/tve200_drv.c b/drivers/gpu/drm/tve200/tve200_drv.c
+index c3aa39bd38ecd..b5259cb1383fc 100644
+--- a/drivers/gpu/drm/tve200/tve200_drv.c
++++ b/drivers/gpu/drm/tve200/tve200_drv.c
+@@ -200,8 +200,8 @@ static int tve200_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (!irq) {
+-		ret = -EINVAL;
++	if (irq < 0) {
++		ret = irq;
+ 		goto clk_disable;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
+index fef43f4e3bac4..edcfd8c120c44 100644
+--- a/drivers/gpu/drm/udl/udl_modeset.c
++++ b/drivers/gpu/drm/udl/udl_modeset.c
+@@ -303,8 +303,10 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
+ 	}
+ 
+ 	urb = udl_get_urb(dev);
+-	if (!urb)
++	if (!urb) {
++		ret = -ENOMEM;
+ 		goto out_drm_gem_shmem_vunmap;
++	}
+ 	cmd = urb->transfer_buffer;
+ 
+ 	for (i = clip.y1; i < clip.y2; i++) {
+diff --git a/drivers/hsi/controllers/omap_ssi_core.c b/drivers/hsi/controllers/omap_ssi_core.c
+index fa69b94debd9b..7596dc1646484 100644
+--- a/drivers/hsi/controllers/omap_ssi_core.c
++++ b/drivers/hsi/controllers/omap_ssi_core.c
+@@ -355,7 +355,7 @@ static int ssi_add_controller(struct hsi_controller *ssi,
+ 
+ 	err = ida_simple_get(&platform_omap_ssi_ida, 0, 0, GFP_KERNEL);
+ 	if (err < 0)
+-		goto out_err;
++		return err;
+ 	ssi->id = err;
+ 
+ 	ssi->owner = THIS_MODULE;
+diff --git a/drivers/hwmon/ina3221.c b/drivers/hwmon/ina3221.c
+index 41fb17e0d6416..ad11cbddc3a7b 100644
+--- a/drivers/hwmon/ina3221.c
++++ b/drivers/hwmon/ina3221.c
+@@ -489,7 +489,7 @@ static int ina3221_write_enable(struct device *dev, int channel, bool enable)
+ 
+ 	/* For enabling routine, increase refcount and resume() at first */
+ 	if (enable) {
+-		ret = pm_runtime_get_sync(ina->pm_dev);
++		ret = pm_runtime_resume_and_get(ina->pm_dev);
+ 		if (ret < 0) {
+ 			dev_err(dev, "Failed to get PM runtime\n");
+ 			return ret;
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index a250481b5a97f..3bc2551577a30 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -11,13 +11,6 @@
+  *   convert raw register values is from https://github.com/ocerman/zenpower.
+  *   The information is not confirmed from chip datasheets, but experiments
+  *   suggest that it provides reasonable temperature values.
+- * - Register addresses to read chip voltage and current are also from
+- *   https://github.com/ocerman/zenpower, and not confirmed from chip
+- *   datasheets. Current calibration is board specific and not typically
+- *   shared by board vendors. For this reason, current values are
+- *   normalized to report 1A/LSB for core current and and 0.25A/LSB for SoC
+- *   current. Reported values can be adjusted using the sensors configuration
+- *   file.
+  */
+ 
+ #include <linux/bitops.h>
+@@ -109,10 +102,7 @@ struct k10temp_data {
+ 	int temp_offset;
+ 	u32 temp_adjust_mask;
+ 	u32 show_temp;
+-	u32 svi_addr[2];
+ 	bool is_zen;
+-	bool show_current;
+-	int cfactor[2];
+ };
+ 
+ #define TCTL_BIT	0
+@@ -137,16 +127,6 @@ static const struct tctl_offset tctl_offset_table[] = {
+ 	{ 0x17, "AMD Ryzen Threadripper 29", 27000 }, /* 29{20,50,70,90}[W]X */
+ };
+ 
+-static bool is_threadripper(void)
+-{
+-	return strstr(boot_cpu_data.x86_model_id, "Threadripper");
+-}
+-
+-static bool is_epyc(void)
+-{
+-	return strstr(boot_cpu_data.x86_model_id, "EPYC");
+-}
+-
+ static void read_htcreg_pci(struct pci_dev *pdev, u32 *regval)
+ {
+ 	pci_read_config_dword(pdev, REG_HARDWARE_THERMAL_CONTROL, regval);
+@@ -211,16 +191,6 @@ static const char *k10temp_temp_label[] = {
+ 	"Tccd8",
+ };
+ 
+-static const char *k10temp_in_label[] = {
+-	"Vcore",
+-	"Vsoc",
+-};
+-
+-static const char *k10temp_curr_label[] = {
+-	"Icore",
+-	"Isoc",
+-};
+-
+ static int k10temp_read_labels(struct device *dev,
+ 			       enum hwmon_sensor_types type,
+ 			       u32 attr, int channel, const char **str)
+@@ -229,50 +199,6 @@ static int k10temp_read_labels(struct device *dev,
+ 	case hwmon_temp:
+ 		*str = k10temp_temp_label[channel];
+ 		break;
+-	case hwmon_in:
+-		*str = k10temp_in_label[channel];
+-		break;
+-	case hwmon_curr:
+-		*str = k10temp_curr_label[channel];
+-		break;
+-	default:
+-		return -EOPNOTSUPP;
+-	}
+-	return 0;
+-}
+-
+-static int k10temp_read_curr(struct device *dev, u32 attr, int channel,
+-			     long *val)
+-{
+-	struct k10temp_data *data = dev_get_drvdata(dev);
+-	u32 regval;
+-
+-	switch (attr) {
+-	case hwmon_curr_input:
+-		amd_smn_read(amd_pci_dev_to_node_id(data->pdev),
+-			     data->svi_addr[channel], &regval);
+-		*val = DIV_ROUND_CLOSEST(data->cfactor[channel] *
+-					 (regval & 0xff),
+-					 1000);
+-		break;
+-	default:
+-		return -EOPNOTSUPP;
+-	}
+-	return 0;
+-}
+-
+-static int k10temp_read_in(struct device *dev, u32 attr, int channel, long *val)
+-{
+-	struct k10temp_data *data = dev_get_drvdata(dev);
+-	u32 regval;
+-
+-	switch (attr) {
+-	case hwmon_in_input:
+-		amd_smn_read(amd_pci_dev_to_node_id(data->pdev),
+-			     data->svi_addr[channel], &regval);
+-		regval = (regval >> 16) & 0xff;
+-		*val = DIV_ROUND_CLOSEST(155000 - regval * 625, 100);
+-		break;
+ 	default:
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -331,10 +257,6 @@ static int k10temp_read(struct device *dev, enum hwmon_sensor_types type,
+ 	switch (type) {
+ 	case hwmon_temp:
+ 		return k10temp_read_temp(dev, attr, channel, val);
+-	case hwmon_in:
+-		return k10temp_read_in(dev, attr, channel, val);
+-	case hwmon_curr:
+-		return k10temp_read_curr(dev, attr, channel, val);
+ 	default:
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -383,11 +305,6 @@ static umode_t k10temp_is_visible(const void *_data,
+ 			return 0;
+ 		}
+ 		break;
+-	case hwmon_in:
+-	case hwmon_curr:
+-		if (!data->show_current)
+-			return 0;
+-		break;
+ 	default:
+ 		return 0;
+ 	}
+@@ -517,20 +434,10 @@ static int k10temp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		case 0x8:	/* Zen+ */
+ 		case 0x11:	/* Zen APU */
+ 		case 0x18:	/* Zen+ APU */
+-			data->show_current = !is_threadripper() && !is_epyc();
+-			data->svi_addr[0] = F17H_M01H_SVI_TEL_PLANE0;
+-			data->svi_addr[1] = F17H_M01H_SVI_TEL_PLANE1;
+-			data->cfactor[0] = F17H_M01H_CFACTOR_ICORE;
+-			data->cfactor[1] = F17H_M01H_CFACTOR_ISOC;
+ 			k10temp_get_ccd_support(pdev, data, 4);
+ 			break;
+ 		case 0x31:	/* Zen2 Threadripper */
+ 		case 0x71:	/* Zen2 */
+-			data->show_current = !is_threadripper() && !is_epyc();
+-			data->cfactor[0] = F17H_M31H_CFACTOR_ICORE;
+-			data->cfactor[1] = F17H_M31H_CFACTOR_ISOC;
+-			data->svi_addr[0] = F17H_M31H_SVI_TEL_PLANE0;
+-			data->svi_addr[1] = F17H_M31H_SVI_TEL_PLANE1;
+ 			k10temp_get_ccd_support(pdev, data, 8);
+ 			break;
+ 		}
+@@ -542,11 +449,6 @@ static int k10temp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 		switch (boot_cpu_data.x86_model) {
+ 		case 0x0 ... 0x1:	/* Zen3 */
+-			data->show_current = true;
+-			data->svi_addr[0] = F19H_M01_SVI_TEL_PLANE0;
+-			data->svi_addr[1] = F19H_M01_SVI_TEL_PLANE1;
+-			data->cfactor[0] = F19H_M01H_CFACTOR_ICORE;
+-			data->cfactor[1] = F19H_M01H_CFACTOR_ISOC;
+ 			k10temp_get_ccd_support(pdev, data, 8);
+ 			break;
+ 		}
+diff --git a/drivers/hwtracing/coresight/coresight-catu.c b/drivers/hwtracing/coresight/coresight-catu.c
+index 99430f6cf5a5d..a61313f320bda 100644
+--- a/drivers/hwtracing/coresight/coresight-catu.c
++++ b/drivers/hwtracing/coresight/coresight-catu.c
+@@ -567,7 +567,7 @@ out:
+ 	return ret;
+ }
+ 
+-static int __exit catu_remove(struct amba_device *adev)
++static int catu_remove(struct amba_device *adev)
+ {
+ 	struct catu_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-cti-core.c b/drivers/hwtracing/coresight/coresight-cti-core.c
+index d28eae93e55c8..61dbc1afd8da5 100644
+--- a/drivers/hwtracing/coresight/coresight-cti-core.c
++++ b/drivers/hwtracing/coresight/coresight-cti-core.c
+@@ -836,7 +836,7 @@ static void cti_device_release(struct device *dev)
+ 	if (drvdata->csdev_release)
+ 		drvdata->csdev_release(dev);
+ }
+-static int __exit cti_remove(struct amba_device *adev)
++static int cti_remove(struct amba_device *adev)
+ {
+ 	struct cti_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
+index 1b320ab581caf..0cf6f0b947b6f 100644
+--- a/drivers/hwtracing/coresight/coresight-etb10.c
++++ b/drivers/hwtracing/coresight/coresight-etb10.c
+@@ -803,7 +803,7 @@ err_misc_register:
+ 	return ret;
+ }
+ 
+-static int __exit etb_remove(struct amba_device *adev)
++static int etb_remove(struct amba_device *adev)
+ {
+ 	struct etb_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etm3x-core.c b/drivers/hwtracing/coresight/coresight-etm3x-core.c
+index 47f610b1c2b18..5bf5a5a4ce6d1 100644
+--- a/drivers/hwtracing/coresight/coresight-etm3x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm3x-core.c
+@@ -902,14 +902,14 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
+ 	return 0;
+ }
+ 
+-static void __exit clear_etmdrvdata(void *info)
++static void clear_etmdrvdata(void *info)
+ {
+ 	int cpu = *(int *)info;
+ 
+ 	etmdrvdata[cpu] = NULL;
+ }
+ 
+-static int __exit etm_remove(struct amba_device *adev)
++static int etm_remove(struct amba_device *adev)
+ {
+ 	struct etm_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+index e516e5b879e3a..95b54b0a36252 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+@@ -1570,14 +1570,14 @@ static struct amba_cs_uci_id uci_id_etm4[] = {
+ 	}
+ };
+ 
+-static void __exit clear_etmdrvdata(void *info)
++static void clear_etmdrvdata(void *info)
+ {
+ 	int cpu = *(int *)info;
+ 
+ 	etmdrvdata[cpu] = NULL;
+ }
+ 
+-static int __exit etm4_remove(struct amba_device *adev)
++static int etm4_remove(struct amba_device *adev)
+ {
+ 	struct etmv4_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c
+index af40814ce5603..3fc6c678b51d8 100644
+--- a/drivers/hwtracing/coresight/coresight-funnel.c
++++ b/drivers/hwtracing/coresight/coresight-funnel.c
+@@ -274,7 +274,7 @@ out_disable_clk:
+ 	return ret;
+ }
+ 
+-static int __exit funnel_remove(struct device *dev)
++static int funnel_remove(struct device *dev)
+ {
+ 	struct funnel_drvdata *drvdata = dev_get_drvdata(dev);
+ 
+@@ -328,7 +328,7 @@ static int static_funnel_probe(struct platform_device *pdev)
+ 	return ret;
+ }
+ 
+-static int __exit static_funnel_remove(struct platform_device *pdev)
++static int static_funnel_remove(struct platform_device *pdev)
+ {
+ 	funnel_remove(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+@@ -370,7 +370,7 @@ static int dynamic_funnel_probe(struct amba_device *adev,
+ 	return funnel_probe(&adev->dev, &adev->res);
+ }
+ 
+-static int __exit dynamic_funnel_remove(struct amba_device *adev)
++static int dynamic_funnel_remove(struct amba_device *adev)
+ {
+ 	return funnel_remove(&adev->dev);
+ }
+diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c
+index 62afdde0e5eab..38008aca2c0f4 100644
+--- a/drivers/hwtracing/coresight/coresight-replicator.c
++++ b/drivers/hwtracing/coresight/coresight-replicator.c
+@@ -291,7 +291,7 @@ out_disable_clk:
+ 	return ret;
+ }
+ 
+-static int __exit replicator_remove(struct device *dev)
++static int replicator_remove(struct device *dev)
+ {
+ 	struct replicator_drvdata *drvdata = dev_get_drvdata(dev);
+ 
+@@ -318,7 +318,7 @@ static int static_replicator_probe(struct platform_device *pdev)
+ 	return ret;
+ }
+ 
+-static int __exit static_replicator_remove(struct platform_device *pdev)
++static int static_replicator_remove(struct platform_device *pdev)
+ {
+ 	replicator_remove(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+@@ -388,7 +388,7 @@ static int dynamic_replicator_probe(struct amba_device *adev,
+ 	return replicator_probe(&adev->dev, &adev->res);
+ }
+ 
+-static int __exit dynamic_replicator_remove(struct amba_device *adev)
++static int dynamic_replicator_remove(struct amba_device *adev)
+ {
+ 	return replicator_remove(&adev->dev);
+ }
+diff --git a/drivers/hwtracing/coresight/coresight-stm.c b/drivers/hwtracing/coresight/coresight-stm.c
+index b0ad912651a99..587c1d7f25208 100644
+--- a/drivers/hwtracing/coresight/coresight-stm.c
++++ b/drivers/hwtracing/coresight/coresight-stm.c
+@@ -951,7 +951,7 @@ stm_unregister:
+ 	return ret;
+ }
+ 
+-static int __exit stm_remove(struct amba_device *adev)
++static int stm_remove(struct amba_device *adev)
+ {
+ 	struct stm_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-core.c b/drivers/hwtracing/coresight/coresight-tmc-core.c
+index 5653e0945c74b..8169dff5a9f6a 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-core.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-core.c
+@@ -559,7 +559,7 @@ out:
+ 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ }
+ 
+-static int __exit tmc_remove(struct amba_device *adev)
++static int tmc_remove(struct amba_device *adev)
+ {
+ 	struct tmc_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c
+index 566c57e035961..5b35029461a0c 100644
+--- a/drivers/hwtracing/coresight/coresight-tpiu.c
++++ b/drivers/hwtracing/coresight/coresight-tpiu.c
+@@ -173,7 +173,7 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id)
+ 	return PTR_ERR(drvdata->csdev);
+ }
+ 
+-static int __exit tpiu_remove(struct amba_device *adev)
++static int tpiu_remove(struct amba_device *adev)
+ {
+ 	struct tpiu_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 8b4c35f47a70f..dce75b85253c1 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -366,6 +366,7 @@ static int geni_i2c_rx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ 		geni_se_select_mode(se, GENI_SE_FIFO);
+ 
+ 	writel_relaxed(len, se->base + SE_I2C_RX_TRANS_LEN);
++	geni_se_setup_m_cmd(se, I2C_READ, m_param);
+ 
+ 	if (dma_buf && geni_se_rx_dma_prep(se, dma_buf, len, &rx_dma)) {
+ 		geni_se_select_mode(se, GENI_SE_FIFO);
+@@ -373,8 +374,6 @@ static int geni_i2c_rx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ 		dma_buf = NULL;
+ 	}
+ 
+-	geni_se_setup_m_cmd(se, I2C_READ, m_param);
+-
+ 	time_left = wait_for_completion_timeout(&gi2c->done, XFER_TIMEOUT);
+ 	if (!time_left)
+ 		geni_i2c_abort_xfer(gi2c);
+@@ -408,6 +407,7 @@ static int geni_i2c_tx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ 		geni_se_select_mode(se, GENI_SE_FIFO);
+ 
+ 	writel_relaxed(len, se->base + SE_I2C_TX_TRANS_LEN);
++	geni_se_setup_m_cmd(se, I2C_WRITE, m_param);
+ 
+ 	if (dma_buf && geni_se_tx_dma_prep(se, dma_buf, len, &tx_dma)) {
+ 		geni_se_select_mode(se, GENI_SE_FIFO);
+@@ -415,8 +415,6 @@ static int geni_i2c_tx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ 		dma_buf = NULL;
+ 	}
+ 
+-	geni_se_setup_m_cmd(se, I2C_WRITE, m_param);
+-
+ 	if (!dma_buf) /* Get FIFO IRQ */
+ 		writel_relaxed(1, se->base + SE_GENI_TX_WATERMARK_REG);
+ 
+diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
+index 91ae90514aff4..17e9ceb9c6c48 100644
+--- a/drivers/iio/adc/Kconfig
++++ b/drivers/iio/adc/Kconfig
+@@ -295,7 +295,7 @@ config ASPEED_ADC
+ config AT91_ADC
+ 	tristate "Atmel AT91 ADC"
+ 	depends on ARCH_AT91 || COMPILE_TEST
+-	depends on INPUT && SYSFS
++	depends on INPUT && SYSFS && OF
+ 	select IIO_BUFFER
+ 	select IIO_TRIGGERED_BUFFER
+ 	help
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index 86039e9ecaca1..3a6f239d4acca 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -57,7 +57,7 @@ EXPORT_SYMBOL_GPL(ad_sd_set_comm);
+ int ad_sd_write_reg(struct ad_sigma_delta *sigma_delta, unsigned int reg,
+ 	unsigned int size, unsigned int val)
+ {
+-	uint8_t *data = sigma_delta->data;
++	uint8_t *data = sigma_delta->tx_buf;
+ 	struct spi_transfer t = {
+ 		.tx_buf		= data,
+ 		.len		= size + 1,
+@@ -99,7 +99,7 @@ EXPORT_SYMBOL_GPL(ad_sd_write_reg);
+ static int ad_sd_read_reg_raw(struct ad_sigma_delta *sigma_delta,
+ 	unsigned int reg, unsigned int size, uint8_t *val)
+ {
+-	uint8_t *data = sigma_delta->data;
++	uint8_t *data = sigma_delta->tx_buf;
+ 	int ret;
+ 	struct spi_transfer t[] = {
+ 		{
+@@ -146,22 +146,22 @@ int ad_sd_read_reg(struct ad_sigma_delta *sigma_delta,
+ {
+ 	int ret;
+ 
+-	ret = ad_sd_read_reg_raw(sigma_delta, reg, size, sigma_delta->data);
++	ret = ad_sd_read_reg_raw(sigma_delta, reg, size, sigma_delta->rx_buf);
+ 	if (ret < 0)
+ 		goto out;
+ 
+ 	switch (size) {
+ 	case 4:
+-		*val = get_unaligned_be32(sigma_delta->data);
++		*val = get_unaligned_be32(sigma_delta->rx_buf);
+ 		break;
+ 	case 3:
+-		*val = get_unaligned_be24(&sigma_delta->data[0]);
++		*val = get_unaligned_be24(sigma_delta->rx_buf);
+ 		break;
+ 	case 2:
+-		*val = get_unaligned_be16(sigma_delta->data);
++		*val = get_unaligned_be16(sigma_delta->rx_buf);
+ 		break;
+ 	case 1:
+-		*val = sigma_delta->data[0];
++		*val = sigma_delta->rx_buf[0];
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+@@ -395,11 +395,9 @@ static irqreturn_t ad_sd_trigger_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct ad_sigma_delta *sigma_delta = iio_device_get_drvdata(indio_dev);
++	uint8_t *data = sigma_delta->rx_buf;
+ 	unsigned int reg_size;
+ 	unsigned int data_reg;
+-	uint8_t data[16];
+-
+-	memset(data, 0x00, 16);
+ 
+ 	reg_size = indio_dev->channels[0].scan_type.realbits +
+ 			indio_dev->channels[0].scan_type.shift;
+diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
+index 9b2c548fae957..0a793e7cd53ee 100644
+--- a/drivers/iio/adc/at91_adc.c
++++ b/drivers/iio/adc/at91_adc.c
+@@ -1469,7 +1469,7 @@ static struct platform_driver at91_adc_driver = {
+ 	.id_table = at91_adc_ids,
+ 	.driver = {
+ 		   .name = DRIVER_NAME,
+-		   .of_match_table = of_match_ptr(at91_adc_dt_ids),
++		   .of_match_table = at91_adc_dt_ids,
+ 		   .pm = &at91_adc_pm_ops,
+ 	},
+ };
+diff --git a/drivers/iio/adc/rockchip_saradc.c b/drivers/iio/adc/rockchip_saradc.c
+index 1f3d7d639d378..12584f1631d88 100644
+--- a/drivers/iio/adc/rockchip_saradc.c
++++ b/drivers/iio/adc/rockchip_saradc.c
+@@ -462,7 +462,7 @@ static int rockchip_saradc_resume(struct device *dev)
+ 
+ 	ret = clk_prepare_enable(info->clk);
+ 	if (ret)
+-		return ret;
++		clk_disable_unprepare(info->pclk);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/iio/adc/ti-ads124s08.c b/drivers/iio/adc/ti-ads124s08.c
+index 4b4fbe33930ce..b4a128b191889 100644
+--- a/drivers/iio/adc/ti-ads124s08.c
++++ b/drivers/iio/adc/ti-ads124s08.c
+@@ -99,6 +99,14 @@ struct ads124s_private {
+ 	struct gpio_desc *reset_gpio;
+ 	struct spi_device *spi;
+ 	struct mutex lock;
++	/*
++	 * Used to correctly align data.
++	 * Ensure timestamp is naturally aligned.
++	 * Note that the full buffer length may not be needed if not
++	 * all channels are enabled, as long as the alignment of the
++	 * timestamp is maintained.
++	 */
++	u32 buffer[ADS124S08_MAX_CHANNELS + sizeof(s64)/sizeof(u32)] __aligned(8);
+ 	u8 data[5] ____cacheline_aligned;
+ };
+ 
+@@ -269,7 +277,6 @@ static irqreturn_t ads124s_trigger_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct ads124s_private *priv = iio_priv(indio_dev);
+-	u32 buffer[ADS124S08_MAX_CHANNELS + sizeof(s64)/sizeof(u16)];
+ 	int scan_index, j = 0;
+ 	int ret;
+ 
+@@ -284,7 +291,7 @@ static irqreturn_t ads124s_trigger_handler(int irq, void *p)
+ 		if (ret)
+ 			dev_err(&priv->spi->dev, "Start ADC conversions failed\n");
+ 
+-		buffer[j] = ads124s_read(indio_dev, scan_index);
++		priv->buffer[j] = ads124s_read(indio_dev, scan_index);
+ 		ret = ads124s_write_cmd(indio_dev, ADS124S08_STOP_CONV);
+ 		if (ret)
+ 			dev_err(&priv->spi->dev, "Stop ADC conversions failed\n");
+@@ -292,7 +299,7 @@ static irqreturn_t ads124s_trigger_handler(int irq, void *p)
+ 		j++;
+ 	}
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, priv->buffer,
+ 			pf->timestamp);
+ 
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/imu/bmi160/bmi160.h b/drivers/iio/imu/bmi160/bmi160.h
+index a82e040bd1098..32c2ea2d71129 100644
+--- a/drivers/iio/imu/bmi160/bmi160.h
++++ b/drivers/iio/imu/bmi160/bmi160.h
+@@ -10,6 +10,13 @@ struct bmi160_data {
+ 	struct iio_trigger *trig;
+ 	struct regulator_bulk_data supplies[2];
+ 	struct iio_mount_matrix orientation;
++	/*
++	 * Ensure natural alignment for timestamp if present.
++	 * Max length needed: 2 * 3 channels + 4 bytes padding + 8 byte ts.
++	 * If fewer channels are enabled, less space may be needed, as
++	 * long as the timestamp is still aligned to 8 bytes.
++	 */
++	__le16 buf[12] __aligned(8);
+ };
+ 
+ extern const struct regmap_config bmi160_regmap_config;
+diff --git a/drivers/iio/imu/bmi160/bmi160_core.c b/drivers/iio/imu/bmi160/bmi160_core.c
+index 222ebb26f0132..82f03a4dc47a7 100644
+--- a/drivers/iio/imu/bmi160/bmi160_core.c
++++ b/drivers/iio/imu/bmi160/bmi160_core.c
+@@ -427,8 +427,6 @@ static irqreturn_t bmi160_trigger_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct bmi160_data *data = iio_priv(indio_dev);
+-	__le16 buf[16];
+-	/* 3 sens x 3 axis x __le16 + 3 x __le16 pad + 4 x __le16 tstamp */
+ 	int i, ret, j = 0, base = BMI160_REG_DATA_MAGN_XOUT_L;
+ 	__le16 sample;
+ 
+@@ -438,10 +436,10 @@ static irqreturn_t bmi160_trigger_handler(int irq, void *p)
+ 				       &sample, sizeof(sample));
+ 		if (ret)
+ 			goto done;
+-		buf[j++] = sample;
++		data->buf[j++] = sample;
+ 	}
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, buf, pf->timestamp);
++	iio_push_to_buffers_with_timestamp(indio_dev, data->buf, pf->timestamp);
+ done:
+ 	iio_trigger_notify_done(indio_dev->trig);
+ 	return IRQ_HANDLED;
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 42f485634d044..2ab1ac5a2412f 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -2255,19 +2255,35 @@ st_lsm6dsx_report_motion_event(struct st_lsm6dsx_hw *hw)
+ static irqreturn_t st_lsm6dsx_handler_thread(int irq, void *private)
+ {
+ 	struct st_lsm6dsx_hw *hw = private;
++	int fifo_len = 0, len;
+ 	bool event;
+-	int count;
+ 
+ 	event = st_lsm6dsx_report_motion_event(hw);
+ 
+ 	if (!hw->settings->fifo_ops.read_fifo)
+ 		return event ? IRQ_HANDLED : IRQ_NONE;
+ 
+-	mutex_lock(&hw->fifo_lock);
+-	count = hw->settings->fifo_ops.read_fifo(hw);
+-	mutex_unlock(&hw->fifo_lock);
++	/*
++	 * If we are using edge IRQs, new samples can arrive while
++	 * processing current interrupt since there are no hw
++	 * guarantees the irq line stays "low" long enough to properly
++	 * detect the new interrupt. In this case the new sample will
++	 * be missed.
++	 * Polling FIFO status register allow us to read new
++	 * samples even if the interrupt arrives while processing
++	 * previous data and the timeslot where the line is "low" is
++	 * too short to be properly detected.
++	 */
++	do {
++		mutex_lock(&hw->fifo_lock);
++		len = hw->settings->fifo_ops.read_fifo(hw);
++		mutex_unlock(&hw->fifo_lock);
++
++		if (len > 0)
++			fifo_len += len;
++	} while (len > 0);
+ 
+-	return count || event ? IRQ_HANDLED : IRQ_NONE;
++	return fifo_len || event ? IRQ_HANDLED : IRQ_NONE;
+ }
+ 
+ static int st_lsm6dsx_irq_setup(struct st_lsm6dsx_hw *hw)
+diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
+index a4f6bb96d4f42..276b609d79174 100644
+--- a/drivers/iio/industrialio-buffer.c
++++ b/drivers/iio/industrialio-buffer.c
+@@ -865,12 +865,12 @@ static int iio_buffer_update_demux(struct iio_dev *indio_dev,
+ 				       indio_dev->masklength,
+ 				       in_ind + 1);
+ 		while (in_ind != out_ind) {
+-			in_ind = find_next_bit(indio_dev->active_scan_mask,
+-					       indio_dev->masklength,
+-					       in_ind + 1);
+ 			length = iio_storage_bytes_for_si(indio_dev, in_ind);
+ 			/* Make sure we are aligned */
+ 			in_loc = roundup(in_loc, length) + length;
++			in_ind = find_next_bit(indio_dev->active_scan_mask,
++					       indio_dev->masklength,
++					       in_ind + 1);
+ 		}
+ 		length = iio_storage_bytes_for_si(indio_dev, in_ind);
+ 		out_loc = roundup(out_loc, length);
+diff --git a/drivers/iio/light/rpr0521.c b/drivers/iio/light/rpr0521.c
+index aa2972b048334..31224a33bade3 100644
+--- a/drivers/iio/light/rpr0521.c
++++ b/drivers/iio/light/rpr0521.c
+@@ -194,6 +194,17 @@ struct rpr0521_data {
+ 	bool pxs_need_dis;
+ 
+ 	struct regmap *regmap;
++
++	/*
++	 * Ensure correct naturally aligned timestamp.
++	 * Note that the read will put garbage data into
++	 * the padding but this should not be a problem
++	 */
++	struct {
++		__le16 channels[3];
++		u8 garbage;
++		s64 ts __aligned(8);
++	} scan;
+ };
+ 
+ static IIO_CONST_ATTR(in_intensity_scale_available, RPR0521_ALS_SCALE_AVAIL);
+@@ -449,8 +460,6 @@ static irqreturn_t rpr0521_trigger_consumer_handler(int irq, void *p)
+ 	struct rpr0521_data *data = iio_priv(indio_dev);
+ 	int err;
+ 
+-	u8 buffer[16]; /* 3 16-bit channels + padding + ts */
+-
+ 	/* Use irq timestamp when reasonable. */
+ 	if (iio_trigger_using_own(indio_dev) && data->irq_timestamp) {
+ 		pf->timestamp = data->irq_timestamp;
+@@ -461,11 +470,11 @@ static irqreturn_t rpr0521_trigger_consumer_handler(int irq, void *p)
+ 		pf->timestamp = iio_get_time_ns(indio_dev);
+ 
+ 	err = regmap_bulk_read(data->regmap, RPR0521_REG_PXS_DATA,
+-		&buffer,
++		data->scan.channels,
+ 		(3 * 2) + 1);	/* 3 * 16-bit + (discarded) int clear reg. */
+ 	if (!err)
+ 		iio_push_to_buffers_with_timestamp(indio_dev,
+-						   buffer, pf->timestamp);
++						   &data->scan, pf->timestamp);
+ 	else
+ 		dev_err(&data->client->dev,
+ 			"Trigger consumer can't read from sensor.\n");
+diff --git a/drivers/iio/light/st_uvis25.h b/drivers/iio/light/st_uvis25.h
+index 78bc56aad1299..283086887caf5 100644
+--- a/drivers/iio/light/st_uvis25.h
++++ b/drivers/iio/light/st_uvis25.h
+@@ -27,6 +27,11 @@ struct st_uvis25_hw {
+ 	struct iio_trigger *trig;
+ 	bool enabled;
+ 	int irq;
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u8 chan;
++		s64 ts __aligned(8);
++	} scan;
+ };
+ 
+ extern const struct dev_pm_ops st_uvis25_pm_ops;
+diff --git a/drivers/iio/light/st_uvis25_core.c b/drivers/iio/light/st_uvis25_core.c
+index a18a82e6bbf5d..1055594b22764 100644
+--- a/drivers/iio/light/st_uvis25_core.c
++++ b/drivers/iio/light/st_uvis25_core.c
+@@ -232,17 +232,19 @@ static const struct iio_buffer_setup_ops st_uvis25_buffer_ops = {
+ 
+ static irqreturn_t st_uvis25_buffer_handler_thread(int irq, void *p)
+ {
+-	u8 buffer[ALIGN(sizeof(u8), sizeof(s64)) + sizeof(s64)];
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *iio_dev = pf->indio_dev;
+ 	struct st_uvis25_hw *hw = iio_priv(iio_dev);
++	unsigned int val;
+ 	int err;
+ 
+-	err = regmap_read(hw->regmap, ST_UVIS25_REG_OUT_ADDR, (int *)buffer);
++	err = regmap_read(hw->regmap, ST_UVIS25_REG_OUT_ADDR, &val);
+ 	if (err < 0)
+ 		goto out;
+ 
+-	iio_push_to_buffers_with_timestamp(iio_dev, buffer,
++	hw->scan.chan = val;
++
++	iio_push_to_buffers_with_timestamp(iio_dev, &hw->scan,
+ 					   iio_get_time_ns(iio_dev));
+ 
+ out:
+diff --git a/drivers/iio/magnetometer/mag3110.c b/drivers/iio/magnetometer/mag3110.c
+index 838b13c8bb3db..c96415a1aeadd 100644
+--- a/drivers/iio/magnetometer/mag3110.c
++++ b/drivers/iio/magnetometer/mag3110.c
+@@ -56,6 +56,12 @@ struct mag3110_data {
+ 	int sleep_val;
+ 	struct regulator *vdd_reg;
+ 	struct regulator *vddio_reg;
++	/* Ensure natural alignment of timestamp */
++	struct {
++		__be16 channels[3];
++		u8 temperature;
++		s64 ts __aligned(8);
++	} scan;
+ };
+ 
+ static int mag3110_request(struct mag3110_data *data)
+@@ -387,10 +393,9 @@ static irqreturn_t mag3110_trigger_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct mag3110_data *data = iio_priv(indio_dev);
+-	u8 buffer[16]; /* 3 16-bit channels + 1 byte temp + padding + ts */
+ 	int ret;
+ 
+-	ret = mag3110_read(data, (__be16 *) buffer);
++	ret = mag3110_read(data, data->scan.channels);
+ 	if (ret < 0)
+ 		goto done;
+ 
+@@ -399,10 +404,10 @@ static irqreturn_t mag3110_trigger_handler(int irq, void *p)
+ 			MAG3110_DIE_TEMP);
+ 		if (ret < 0)
+ 			goto done;
+-		buffer[6] = ret;
++		data->scan.temperature = ret;
+ 	}
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 		iio_get_time_ns(indio_dev));
+ 
+ done:
+diff --git a/drivers/iio/pressure/mpl3115.c b/drivers/iio/pressure/mpl3115.c
+index ccdb0b70e48ca..1eb9e7b29e050 100644
+--- a/drivers/iio/pressure/mpl3115.c
++++ b/drivers/iio/pressure/mpl3115.c
+@@ -144,7 +144,14 @@ static irqreturn_t mpl3115_trigger_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct mpl3115_data *data = iio_priv(indio_dev);
+-	u8 buffer[16]; /* 32-bit channel + 16-bit channel + padding + ts */
++	/*
++	 * 32-bit channel + 16-bit channel + padding + ts
++	 * Note that it is possible for only one of the first 2
++	 * channels to be enabled. If that happens, the first element
++	 * of the buffer may be either 16 or 32-bits.  As such we cannot
++	 * use a simple structure definition to express this data layout.
++	 */
++	u8 buffer[16] __aligned(8);
+ 	int ret, pos = 0;
+ 
+ 	mutex_lock(&data->lock);
+diff --git a/drivers/iio/trigger/iio-trig-hrtimer.c b/drivers/iio/trigger/iio-trig-hrtimer.c
+index f59bf8d585866..410de837d0417 100644
+--- a/drivers/iio/trigger/iio-trig-hrtimer.c
++++ b/drivers/iio/trigger/iio-trig-hrtimer.c
+@@ -102,7 +102,7 @@ static int iio_trig_hrtimer_set_state(struct iio_trigger *trig, bool state)
+ 
+ 	if (state)
+ 		hrtimer_start(&trig_info->timer, trig_info->period,
+-			      HRTIMER_MODE_REL);
++			      HRTIMER_MODE_REL_HARD);
+ 	else
+ 		hrtimer_cancel(&trig_info->timer);
+ 
+@@ -132,7 +132,7 @@ static struct iio_sw_trigger *iio_trig_hrtimer_probe(const char *name)
+ 	trig_info->swt.trigger->ops = &iio_hrtimer_trigger_ops;
+ 	trig_info->swt.trigger->dev.groups = iio_hrtimer_attr_groups;
+ 
+-	hrtimer_init(&trig_info->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++	hrtimer_init(&trig_info->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ 	trig_info->timer.function = iio_hrtimer_trig_handler;
+ 
+ 	trig_info->sampling_frequency = HRTIMER_DEFAULT_SAMPLING_FREQUENCY;
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index a77750b8954db..c51b84b2d2f37 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -477,6 +477,10 @@ static void cma_release_dev(struct rdma_id_private *id_priv)
+ 	list_del(&id_priv->list);
+ 	cma_dev_put(id_priv->cma_dev);
+ 	id_priv->cma_dev = NULL;
++	if (id_priv->id.route.addr.dev_addr.sgid_attr) {
++		rdma_put_gid_attr(id_priv->id.route.addr.dev_addr.sgid_attr);
++		id_priv->id.route.addr.dev_addr.sgid_attr = NULL;
++	}
+ 	mutex_unlock(&lock);
+ }
+ 
+@@ -1861,9 +1865,6 @@ static void _destroy_id(struct rdma_id_private *id_priv,
+ 
+ 	kfree(id_priv->id.route.path_rec);
+ 
+-	if (id_priv->id.route.addr.dev_addr.sgid_attr)
+-		rdma_put_gid_attr(id_priv->id.route.addr.dev_addr.sgid_attr);
+-
+ 	put_net(id_priv->id.route.addr.dev_addr.net);
+ 	rdma_restrack_del(&id_priv->res);
+ 	kfree(id_priv);
+@@ -2495,8 +2496,9 @@ static int cma_listen_handler(struct rdma_cm_id *id,
+ 	return id_priv->id.event_handler(id, event);
+ }
+ 
+-static void cma_listen_on_dev(struct rdma_id_private *id_priv,
+-			      struct cma_device *cma_dev)
++static int cma_listen_on_dev(struct rdma_id_private *id_priv,
++			     struct cma_device *cma_dev,
++			     struct rdma_id_private **to_destroy)
+ {
+ 	struct rdma_id_private *dev_id_priv;
+ 	struct net *net = id_priv->id.route.addr.dev_addr.net;
+@@ -2504,21 +2506,21 @@ static void cma_listen_on_dev(struct rdma_id_private *id_priv,
+ 
+ 	lockdep_assert_held(&lock);
+ 
++	*to_destroy = NULL;
+ 	if (cma_family(id_priv) == AF_IB && !rdma_cap_ib_cm(cma_dev->device, 1))
+-		return;
++		return 0;
+ 
+ 	dev_id_priv =
+ 		__rdma_create_id(net, cma_listen_handler, id_priv,
+ 				 id_priv->id.ps, id_priv->id.qp_type, id_priv);
+ 	if (IS_ERR(dev_id_priv))
+-		return;
++		return PTR_ERR(dev_id_priv);
+ 
+ 	dev_id_priv->state = RDMA_CM_ADDR_BOUND;
+ 	memcpy(cma_src_addr(dev_id_priv), cma_src_addr(id_priv),
+ 	       rdma_addr_size(cma_src_addr(id_priv)));
+ 
+ 	_cma_attach_to_dev(dev_id_priv, cma_dev);
+-	list_add_tail(&dev_id_priv->listen_list, &id_priv->listen_list);
+ 	cma_id_get(id_priv);
+ 	dev_id_priv->internal_id = 1;
+ 	dev_id_priv->afonly = id_priv->afonly;
+@@ -2527,19 +2529,42 @@ static void cma_listen_on_dev(struct rdma_id_private *id_priv,
+ 
+ 	ret = rdma_listen(&dev_id_priv->id, id_priv->backlog);
+ 	if (ret)
+-		dev_warn(&cma_dev->device->dev,
+-			 "RDMA CMA: cma_listen_on_dev, error %d\n", ret);
++		goto err_listen;
++	list_add_tail(&dev_id_priv->listen_list, &id_priv->listen_list);
++	return 0;
++err_listen:
++	/* Caller must destroy this after releasing lock */
++	*to_destroy = dev_id_priv;
++	dev_warn(&cma_dev->device->dev, "RDMA CMA: %s, error %d\n", __func__, ret);
++	return ret;
+ }
+ 
+-static void cma_listen_on_all(struct rdma_id_private *id_priv)
++static int cma_listen_on_all(struct rdma_id_private *id_priv)
+ {
++	struct rdma_id_private *to_destroy;
+ 	struct cma_device *cma_dev;
++	int ret;
+ 
+ 	mutex_lock(&lock);
+ 	list_add_tail(&id_priv->list, &listen_any_list);
+-	list_for_each_entry(cma_dev, &dev_list, list)
+-		cma_listen_on_dev(id_priv, cma_dev);
++	list_for_each_entry(cma_dev, &dev_list, list) {
++		ret = cma_listen_on_dev(id_priv, cma_dev, &to_destroy);
++		if (ret) {
++			/* Prevent racing with cma_process_remove() */
++			if (to_destroy)
++				list_del_init(&to_destroy->list);
++			goto err_listen;
++		}
++	}
+ 	mutex_unlock(&lock);
++	return 0;
++
++err_listen:
++	list_del(&id_priv->list);
++	mutex_unlock(&lock);
++	if (to_destroy)
++		rdma_destroy_id(&to_destroy->id);
++	return ret;
+ }
+ 
+ void rdma_set_service_type(struct rdma_cm_id *id, int tos)
+@@ -3692,8 +3717,11 @@ int rdma_listen(struct rdma_cm_id *id, int backlog)
+ 			ret = -ENOSYS;
+ 			goto err;
+ 		}
+-	} else
+-		cma_listen_on_all(id_priv);
++	} else {
++		ret = cma_listen_on_all(id_priv);
++		if (ret)
++			goto err;
++	}
+ 
+ 	return 0;
+ err:
+@@ -4773,69 +4801,6 @@ static struct notifier_block cma_nb = {
+ 	.notifier_call = cma_netdev_callback
+ };
+ 
+-static int cma_add_one(struct ib_device *device)
+-{
+-	struct cma_device *cma_dev;
+-	struct rdma_id_private *id_priv;
+-	unsigned int i;
+-	unsigned long supported_gids = 0;
+-	int ret;
+-
+-	cma_dev = kmalloc(sizeof *cma_dev, GFP_KERNEL);
+-	if (!cma_dev)
+-		return -ENOMEM;
+-
+-	cma_dev->device = device;
+-	cma_dev->default_gid_type = kcalloc(device->phys_port_cnt,
+-					    sizeof(*cma_dev->default_gid_type),
+-					    GFP_KERNEL);
+-	if (!cma_dev->default_gid_type) {
+-		ret = -ENOMEM;
+-		goto free_cma_dev;
+-	}
+-
+-	cma_dev->default_roce_tos = kcalloc(device->phys_port_cnt,
+-					    sizeof(*cma_dev->default_roce_tos),
+-					    GFP_KERNEL);
+-	if (!cma_dev->default_roce_tos) {
+-		ret = -ENOMEM;
+-		goto free_gid_type;
+-	}
+-
+-	rdma_for_each_port (device, i) {
+-		supported_gids = roce_gid_type_mask_support(device, i);
+-		WARN_ON(!supported_gids);
+-		if (supported_gids & (1 << CMA_PREFERRED_ROCE_GID_TYPE))
+-			cma_dev->default_gid_type[i - rdma_start_port(device)] =
+-				CMA_PREFERRED_ROCE_GID_TYPE;
+-		else
+-			cma_dev->default_gid_type[i - rdma_start_port(device)] =
+-				find_first_bit(&supported_gids, BITS_PER_LONG);
+-		cma_dev->default_roce_tos[i - rdma_start_port(device)] = 0;
+-	}
+-
+-	init_completion(&cma_dev->comp);
+-	refcount_set(&cma_dev->refcount, 1);
+-	INIT_LIST_HEAD(&cma_dev->id_list);
+-	ib_set_client_data(device, &cma_client, cma_dev);
+-
+-	mutex_lock(&lock);
+-	list_add_tail(&cma_dev->list, &dev_list);
+-	list_for_each_entry(id_priv, &listen_any_list, list)
+-		cma_listen_on_dev(id_priv, cma_dev);
+-	mutex_unlock(&lock);
+-
+-	trace_cm_add_one(device);
+-	return 0;
+-
+-free_gid_type:
+-	kfree(cma_dev->default_gid_type);
+-
+-free_cma_dev:
+-	kfree(cma_dev);
+-	return ret;
+-}
+-
+ static void cma_send_device_removal_put(struct rdma_id_private *id_priv)
+ {
+ 	struct rdma_cm_event event = { .event = RDMA_CM_EVENT_DEVICE_REMOVAL };
+@@ -4898,6 +4863,80 @@ static void cma_process_remove(struct cma_device *cma_dev)
+ 	wait_for_completion(&cma_dev->comp);
+ }
+ 
++static int cma_add_one(struct ib_device *device)
++{
++	struct rdma_id_private *to_destroy;
++	struct cma_device *cma_dev;
++	struct rdma_id_private *id_priv;
++	unsigned int i;
++	unsigned long supported_gids = 0;
++	int ret;
++
++	cma_dev = kmalloc(sizeof(*cma_dev), GFP_KERNEL);
++	if (!cma_dev)
++		return -ENOMEM;
++
++	cma_dev->device = device;
++	cma_dev->default_gid_type = kcalloc(device->phys_port_cnt,
++					    sizeof(*cma_dev->default_gid_type),
++					    GFP_KERNEL);
++	if (!cma_dev->default_gid_type) {
++		ret = -ENOMEM;
++		goto free_cma_dev;
++	}
++
++	cma_dev->default_roce_tos = kcalloc(device->phys_port_cnt,
++					    sizeof(*cma_dev->default_roce_tos),
++					    GFP_KERNEL);
++	if (!cma_dev->default_roce_tos) {
++		ret = -ENOMEM;
++		goto free_gid_type;
++	}
++
++	rdma_for_each_port (device, i) {
++		supported_gids = roce_gid_type_mask_support(device, i);
++		WARN_ON(!supported_gids);
++		if (supported_gids & (1 << CMA_PREFERRED_ROCE_GID_TYPE))
++			cma_dev->default_gid_type[i - rdma_start_port(device)] =
++				CMA_PREFERRED_ROCE_GID_TYPE;
++		else
++			cma_dev->default_gid_type[i - rdma_start_port(device)] =
++				find_first_bit(&supported_gids, BITS_PER_LONG);
++		cma_dev->default_roce_tos[i - rdma_start_port(device)] = 0;
++	}
++
++	init_completion(&cma_dev->comp);
++	refcount_set(&cma_dev->refcount, 1);
++	INIT_LIST_HEAD(&cma_dev->id_list);
++	ib_set_client_data(device, &cma_client, cma_dev);
++
++	mutex_lock(&lock);
++	list_add_tail(&cma_dev->list, &dev_list);
++	list_for_each_entry(id_priv, &listen_any_list, list) {
++		ret = cma_listen_on_dev(id_priv, cma_dev, &to_destroy);
++		if (ret)
++			goto free_listen;
++	}
++	mutex_unlock(&lock);
++
++	trace_cm_add_one(device);
++	return 0;
++
++free_listen:
++	list_del(&cma_dev->list);
++	mutex_unlock(&lock);
++
++	/* cma_process_remove() will delete to_destroy */
++	cma_process_remove(cma_dev);
++	kfree(cma_dev->default_roce_tos);
++free_gid_type:
++	kfree(cma_dev->default_gid_type);
++
++free_cma_dev:
++	kfree(cma_dev);
++	return ret;
++}
++
+ static void cma_remove_one(struct ib_device *device, void *client_data)
+ {
+ 	struct cma_device *cma_dev = client_data;
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index a3b1fc84cdcab..4a041511b70ec 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -1374,9 +1374,6 @@ int ib_register_device(struct ib_device *device, const char *name,
+ 	}
+ 
+ 	ret = enable_device_and_get(device);
+-	dev_set_uevent_suppress(&device->dev, false);
+-	/* Mark for userspace that device is ready */
+-	kobject_uevent(&device->dev.kobj, KOBJ_ADD);
+ 	if (ret) {
+ 		void (*dealloc_fn)(struct ib_device *);
+ 
+@@ -1396,8 +1393,12 @@ int ib_register_device(struct ib_device *device, const char *name,
+ 		ib_device_put(device);
+ 		__ib_unregister_device(device);
+ 		device->ops.dealloc_driver = dealloc_fn;
++		dev_set_uevent_suppress(&device->dev, false);
+ 		return ret;
+ 	}
++	dev_set_uevent_suppress(&device->dev, false);
++	/* Mark for userspace that device is ready */
++	kobject_uevent(&device->dev.kobj, KOBJ_ADD);
+ 	ib_device_put(device);
+ 
+ 	return 0;
+diff --git a/drivers/infiniband/core/uverbs_std_types_device.c b/drivers/infiniband/core/uverbs_std_types_device.c
+index 302f898c5833f..9ec6971056fa8 100644
+--- a/drivers/infiniband/core/uverbs_std_types_device.c
++++ b/drivers/infiniband/core/uverbs_std_types_device.c
+@@ -317,8 +317,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_GID_TABLE)(
+ 	struct ib_device *ib_dev;
+ 	size_t user_entry_size;
+ 	ssize_t num_entries;
+-	size_t max_entries;
+-	size_t num_bytes;
++	int max_entries;
+ 	u32 flags;
+ 	int ret;
+ 
+@@ -336,19 +335,16 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_GID_TABLE)(
+ 		attrs, UVERBS_ATTR_QUERY_GID_TABLE_RESP_ENTRIES,
+ 		user_entry_size);
+ 	if (max_entries <= 0)
+-		return -EINVAL;
++		return max_entries ?: -EINVAL;
+ 
+ 	ucontext = ib_uverbs_get_ucontext(attrs);
+ 	if (IS_ERR(ucontext))
+ 		return PTR_ERR(ucontext);
+ 	ib_dev = ucontext->device;
+ 
+-	if (check_mul_overflow(max_entries, sizeof(*entries), &num_bytes))
+-		return -EINVAL;
+-
+-	entries = uverbs_zalloc(attrs, num_bytes);
+-	if (!entries)
+-		return -ENOMEM;
++	entries = uverbs_kcalloc(attrs, max_entries, sizeof(*entries));
++	if (IS_ERR(entries))
++		return PTR_ERR(entries);
+ 
+ 	num_entries = rdma_query_gid_table(ib_dev, entries, max_entries);
+ 	if (num_entries < 0)
+diff --git a/drivers/infiniband/core/uverbs_std_types_mr.c b/drivers/infiniband/core/uverbs_std_types_mr.c
+index 9b22bb553e8b3..dc58564417292 100644
+--- a/drivers/infiniband/core/uverbs_std_types_mr.c
++++ b/drivers/infiniband/core/uverbs_std_types_mr.c
+@@ -33,6 +33,7 @@
+ #include "rdma_core.h"
+ #include "uverbs.h"
+ #include <rdma/uverbs_std_types.h>
++#include "restrack.h"
+ 
+ static int uverbs_free_mr(struct ib_uobject *uobject,
+ 			  enum rdma_remove_reason why,
+@@ -134,6 +135,9 @@ static int UVERBS_HANDLER(UVERBS_METHOD_DM_MR_REG)(
+ 	atomic_inc(&pd->usecnt);
+ 	atomic_inc(&dm->usecnt);
+ 
++	rdma_restrack_new(&mr->res, RDMA_RESTRACK_MR);
++	rdma_restrack_set_name(&mr->res, NULL);
++	rdma_restrack_add(&mr->res);
+ 	uobj->object = mr;
+ 
+ 	uverbs_finalize_uobj_create(attrs, UVERBS_ATTR_REG_DM_MR_HANDLE);
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index 740f8454b6b46..3d895cc41c3ad 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -1698,8 +1698,10 @@ static int _ib_modify_qp(struct ib_qp *qp, struct ib_qp_attr *attr,
+ 			slave = rdma_lag_get_ah_roce_slave(qp->device,
+ 							   &attr->ah_attr,
+ 							   GFP_KERNEL);
+-			if (IS_ERR(slave))
++			if (IS_ERR(slave)) {
++				ret = PTR_ERR(slave);
+ 				goto out_av;
++			}
+ 			attr->xmit_slave = slave;
+ 		}
+ 	}
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index cf3db96283976..266de55f57192 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -1657,8 +1657,8 @@ int bnxt_re_create_srq(struct ib_srq *ib_srq,
+ 	srq->qplib_srq.max_wqe = entries;
+ 
+ 	srq->qplib_srq.max_sge = srq_init_attr->attr.max_sge;
+-	srq->qplib_srq.wqe_size =
+-			bnxt_re_get_rwqe_size(srq->qplib_srq.max_sge);
++	 /* 128 byte wqe size for SRQ . So use max sges */
++	srq->qplib_srq.wqe_size = bnxt_re_get_rwqe_size(dev_attr->max_srq_sges);
+ 	srq->qplib_srq.threshold = srq_init_attr->attr.srq_limit;
+ 	srq->srq_limit = srq_init_attr->attr.srq_limit;
+ 	srq->qplib_srq.eventq_hw_ring_id = rdev->nq[0].ring_id;
+@@ -2078,6 +2078,7 @@ int bnxt_re_query_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
+ 		goto out;
+ 	}
+ 	qp_attr->qp_state = __to_ib_qp_state(qplib_qp->state);
++	qp_attr->cur_qp_state = __to_ib_qp_state(qplib_qp->cur_qp_state);
+ 	qp_attr->en_sqd_async_notify = qplib_qp->en_sqd_async_notify ? 1 : 0;
+ 	qp_attr->qp_access_flags = __to_ib_access_flags(qplib_qp->access);
+ 	qp_attr->pkey_index = qplib_qp->pkey_index;
+diff --git a/drivers/infiniband/hw/cxgb4/cq.c b/drivers/infiniband/hw/cxgb4/cq.c
+index 28349ed508854..d6cfefc269ee3 100644
+--- a/drivers/infiniband/hw/cxgb4/cq.c
++++ b/drivers/infiniband/hw/cxgb4/cq.c
+@@ -1008,6 +1008,9 @@ int c4iw_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ 	if (attr->flags)
+ 		return -EINVAL;
+ 
++	if (entries < 1 || entries > ibdev->attrs.max_cqe)
++		return -EINVAL;
++
+ 	if (vector >= rhp->rdev.lldi.nciq)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c
+index 75b06db60f7c2..7dd3b6097226f 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_ah.c
++++ b/drivers/infiniband/hw/hns/hns_roce_ah.c
+@@ -31,13 +31,13 @@
+  */
+ 
+ #include <linux/platform_device.h>
++#include <linux/pci.h>
+ #include <rdma/ib_addr.h>
+ #include <rdma/ib_cache.h>
+ #include "hns_roce_device.h"
+ 
+-#define HNS_ROCE_PORT_NUM_SHIFT		24
+-#define HNS_ROCE_VLAN_SL_BIT_MASK	7
+-#define HNS_ROCE_VLAN_SL_SHIFT		13
++#define VLAN_SL_MASK 7
++#define VLAN_SL_SHIFT 13
+ 
+ static inline u16 get_ah_udp_sport(const struct rdma_ah_attr *ah_attr)
+ {
+@@ -58,47 +58,44 @@ static inline u16 get_ah_udp_sport(const struct rdma_ah_attr *ah_attr)
+ int hns_roce_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
+ 		       struct ib_udata *udata)
+ {
+-	struct hns_roce_dev *hr_dev = to_hr_dev(ibah->device);
+-	const struct ib_gid_attr *gid_attr;
+-	struct device *dev = hr_dev->dev;
+-	struct hns_roce_ah *ah = to_hr_ah(ibah);
+ 	struct rdma_ah_attr *ah_attr = init_attr->ah_attr;
+ 	const struct ib_global_route *grh = rdma_ah_read_grh(ah_attr);
+-	u16 vlan_id = 0xffff;
+-	bool vlan_en = false;
+-	int ret;
+-
+-	gid_attr = ah_attr->grh.sgid_attr;
+-	ret = rdma_read_gid_l2_fields(gid_attr, &vlan_id, NULL);
+-	if (ret)
+-		return ret;
+-
+-	/* Get mac address */
+-	memcpy(ah->av.mac, ah_attr->roce.dmac, ETH_ALEN);
+-
+-	if (vlan_id < VLAN_N_VID) {
+-		vlan_en = true;
+-		vlan_id |= (rdma_ah_get_sl(ah_attr) &
+-			     HNS_ROCE_VLAN_SL_BIT_MASK) <<
+-			     HNS_ROCE_VLAN_SL_SHIFT;
+-	}
++	struct hns_roce_dev *hr_dev = to_hr_dev(ibah->device);
++	struct hns_roce_ah *ah = to_hr_ah(ibah);
++	int ret = 0;
+ 
+ 	ah->av.port = rdma_ah_get_port_num(ah_attr);
+ 	ah->av.gid_index = grh->sgid_index;
+-	ah->av.vlan_id = vlan_id;
+-	ah->av.vlan_en = vlan_en;
+-	dev_dbg(dev, "gid_index = 0x%x,vlan_id = 0x%x\n", ah->av.gid_index,
+-		ah->av.vlan_id);
+ 
+ 	if (rdma_ah_get_static_rate(ah_attr))
+ 		ah->av.stat_rate = IB_RATE_10_GBPS;
+ 
+-	memcpy(ah->av.dgid, grh->dgid.raw, HNS_ROCE_GID_SIZE);
+-	ah->av.sl = rdma_ah_get_sl(ah_attr);
++	ah->av.hop_limit = grh->hop_limit;
+ 	ah->av.flowlabel = grh->flow_label;
+ 	ah->av.udp_sport = get_ah_udp_sport(ah_attr);
++	ah->av.sl = rdma_ah_get_sl(ah_attr);
++	ah->av.tclass = get_tclass(grh);
+ 
+-	return 0;
++	memcpy(ah->av.dgid, grh->dgid.raw, HNS_ROCE_GID_SIZE);
++	memcpy(ah->av.mac, ah_attr->roce.dmac, ETH_ALEN);
++
++	/* HIP08 needs to record vlan info in Address Vector */
++	if (hr_dev->pci_dev->revision <= PCI_REVISION_ID_HIP08) {
++		ah->av.vlan_en = 0;
++
++		ret = rdma_read_gid_l2_fields(ah_attr->grh.sgid_attr,
++					      &ah->av.vlan_id, NULL);
++		if (ret)
++			return ret;
++
++		if (ah->av.vlan_id < VLAN_N_VID) {
++			ah->av.vlan_en = 1;
++			ah->av.vlan_id |= (rdma_ah_get_sl(ah_attr) & VLAN_SL_MASK) <<
++					  VLAN_SL_SHIFT;
++		}
++	}
++
++	return ret;
+ }
+ 
+ int hns_roce_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
+index 809b22aa5056c..da346129f6e9e 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
+@@ -274,7 +274,7 @@ int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
+ 
+ 	if (udata) {
+ 		ret = ib_copy_from_udata(&ucmd, udata,
+-					 min(sizeof(ucmd), udata->inlen));
++					 min(udata->inlen, sizeof(ucmd)));
+ 		if (ret) {
+ 			ibdev_err(ibdev, "Failed to copy CQ udata, err %d\n",
+ 				  ret);
+@@ -313,7 +313,8 @@ int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
+ 
+ 	if (udata) {
+ 		resp.cqn = hr_cq->cqn;
+-		ret = ib_copy_to_udata(udata, &resp, sizeof(resp));
++		ret = ib_copy_to_udata(udata, &resp,
++				       min(udata->outlen, sizeof(resp)));
+ 		if (ret)
+ 			goto err_cqc;
+ 	}
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 6d2acff69f982..1ea87f92aabbe 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -547,7 +547,7 @@ struct hns_roce_av {
+ 	u8 dgid[HNS_ROCE_GID_SIZE];
+ 	u8 mac[ETH_ALEN];
+ 	u16 vlan_id;
+-	bool vlan_en;
++	u8 vlan_en;
+ };
+ 
+ struct hns_roce_ah {
+@@ -1132,6 +1132,14 @@ static inline u32 to_hr_hem_entries_shift(u32 count, u32 buf_shift)
+ 	return ilog2(to_hr_hem_entries_count(count, buf_shift));
+ }
+ 
++#define DSCP_SHIFT 2
++
++static inline u8 get_tclass(const struct ib_global_route *grh)
++{
++	return grh->sgid_attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP ?
++	       grh->traffic_class >> DSCP_SHIFT : grh->traffic_class;
++}
++
+ int hns_roce_init_uar_table(struct hns_roce_dev *dev);
+ int hns_roce_uar_alloc(struct hns_roce_dev *dev, struct hns_roce_uar *uar);
+ void hns_roce_uar_free(struct hns_roce_dev *dev, struct hns_roce_uar *uar);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index 7487cf3d2c37a..66f9f036ef946 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -1017,7 +1017,7 @@ void hns_roce_cleanup_hem_table(struct hns_roce_dev *hr_dev,
+ 
+ void hns_roce_cleanup_hem(struct hns_roce_dev *hr_dev)
+ {
+-	if (hr_dev->caps.srqc_entry_sz)
++	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ)
+ 		hns_roce_cleanup_hem_table(hr_dev,
+ 					   &hr_dev->srq_table.table);
+ 	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->cq_table.table);
+@@ -1027,7 +1027,7 @@ void hns_roce_cleanup_hem(struct hns_roce_dev *hr_dev)
+ 	if (hr_dev->caps.cqc_timer_entry_sz)
+ 		hns_roce_cleanup_hem_table(hr_dev,
+ 					   &hr_dev->cqc_timer_table);
+-	if (hr_dev->caps.sccc_sz)
++	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_QP_FLOW_CTRL)
+ 		hns_roce_cleanup_hem_table(hr_dev,
+ 					   &hr_dev->qp_table.sccc_table);
+ 	if (hr_dev->caps.trrl_entry_sz)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 0468028ffe390..5c29c7d8c50e6 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -214,25 +214,20 @@ static int fill_ext_sge_inl_data(struct hns_roce_qp *qp,
+ 	return 0;
+ }
+ 
+-static void set_extend_sge(struct hns_roce_qp *qp, const struct ib_send_wr *wr,
+-			   unsigned int *sge_ind, unsigned int valid_num_sge)
++static void set_extend_sge(struct hns_roce_qp *qp, struct ib_sge *sge,
++			   unsigned int *sge_ind, unsigned int cnt)
+ {
+ 	struct hns_roce_v2_wqe_data_seg *dseg;
+-	unsigned int cnt = valid_num_sge;
+-	struct ib_sge *sge = wr->sg_list;
+ 	unsigned int idx = *sge_ind;
+ 
+-	if (qp->ibqp.qp_type == IB_QPT_RC || qp->ibqp.qp_type == IB_QPT_UC) {
+-		cnt -= HNS_ROCE_SGE_IN_WQE;
+-		sge += HNS_ROCE_SGE_IN_WQE;
+-	}
+-
+ 	while (cnt > 0) {
+ 		dseg = hns_roce_get_extend_sge(qp, idx & (qp->sge.sge_cnt - 1));
+-		set_data_seg_v2(dseg, sge);
+-		idx++;
++		if (likely(sge->length)) {
++			set_data_seg_v2(dseg, sge);
++			idx++;
++			cnt--;
++		}
+ 		sge++;
+-		cnt--;
+ 	}
+ 
+ 	*sge_ind = idx;
+@@ -340,7 +335,8 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
+ 			}
+ 		}
+ 
+-		set_extend_sge(qp, wr, sge_ind, valid_num_sge);
++		set_extend_sge(qp, wr->sg_list + i, sge_ind,
++			       valid_num_sge - HNS_ROCE_SGE_IN_WQE);
+ 	}
+ 
+ 	roce_set_field(rc_sq_wqe->byte_16,
+@@ -433,8 +429,6 @@ static inline int set_ud_wqe(struct hns_roce_qp *qp,
+ 	unsigned int curr_idx = *sge_idx;
+ 	int valid_num_sge;
+ 	u32 msg_len = 0;
+-	bool loopback;
+-	u8 *smac;
+ 	int ret;
+ 
+ 	valid_num_sge = calc_wr_sge_num(wr, &msg_len);
+@@ -457,13 +451,6 @@ static inline int set_ud_wqe(struct hns_roce_qp *qp,
+ 	roce_set_field(ud_sq_wqe->byte_48, V2_UD_SEND_WQE_BYTE_48_DMAC_5_M,
+ 		       V2_UD_SEND_WQE_BYTE_48_DMAC_5_S, ah->av.mac[5]);
+ 
+-	/* MAC loopback */
+-	smac = (u8 *)hr_dev->dev_addr[qp->port];
+-	loopback = ether_addr_equal_unaligned(ah->av.mac, smac) ? 1 : 0;
+-
+-	roce_set_bit(ud_sq_wqe->byte_40,
+-		     V2_UD_SEND_WQE_BYTE_40_LBI_S, loopback);
+-
+ 	ud_sq_wqe->msg_len = cpu_to_le32(msg_len);
+ 
+ 	/* Set sig attr */
+@@ -495,8 +482,6 @@ static inline int set_ud_wqe(struct hns_roce_qp *qp,
+ 	roce_set_field(ud_sq_wqe->byte_32, V2_UD_SEND_WQE_BYTE_32_DQPN_M,
+ 		       V2_UD_SEND_WQE_BYTE_32_DQPN_S, ud_wr(wr)->remote_qpn);
+ 
+-	roce_set_field(ud_sq_wqe->byte_36, V2_UD_SEND_WQE_BYTE_36_VLAN_M,
+-		       V2_UD_SEND_WQE_BYTE_36_VLAN_S, ah->av.vlan_id);
+ 	roce_set_field(ud_sq_wqe->byte_36, V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_M,
+ 		       V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_S, ah->av.hop_limit);
+ 	roce_set_field(ud_sq_wqe->byte_36, V2_UD_SEND_WQE_BYTE_36_TCLASS_M,
+@@ -508,14 +493,21 @@ static inline int set_ud_wqe(struct hns_roce_qp *qp,
+ 	roce_set_field(ud_sq_wqe->byte_40, V2_UD_SEND_WQE_BYTE_40_PORTN_M,
+ 		       V2_UD_SEND_WQE_BYTE_40_PORTN_S, qp->port);
+ 
+-	roce_set_bit(ud_sq_wqe->byte_40, V2_UD_SEND_WQE_BYTE_40_UD_VLAN_EN_S,
+-		     ah->av.vlan_en ? 1 : 0);
+ 	roce_set_field(ud_sq_wqe->byte_48, V2_UD_SEND_WQE_BYTE_48_SGID_INDX_M,
+ 		       V2_UD_SEND_WQE_BYTE_48_SGID_INDX_S, ah->av.gid_index);
+ 
++	if (hr_dev->pci_dev->revision <= PCI_REVISION_ID_HIP08) {
++		roce_set_bit(ud_sq_wqe->byte_40,
++			     V2_UD_SEND_WQE_BYTE_40_UD_VLAN_EN_S,
++			     ah->av.vlan_en);
++		roce_set_field(ud_sq_wqe->byte_36,
++			       V2_UD_SEND_WQE_BYTE_36_VLAN_M,
++			       V2_UD_SEND_WQE_BYTE_36_VLAN_S, ah->av.vlan_id);
++	}
++
+ 	memcpy(&ud_sq_wqe->dgid[0], &ah->av.dgid[0], GID_LEN_V2);
+ 
+-	set_extend_sge(qp, wr, &curr_idx, valid_num_sge);
++	set_extend_sge(qp, wr->sg_list, &curr_idx, valid_num_sge);
+ 
+ 	*sge_idx = curr_idx;
+ 
+@@ -4468,15 +4460,11 @@ static int hns_roce_v2_set_path(struct ib_qp *ibqp,
+ 	roce_set_field(qpc_mask->byte_24_mtu_tc, V2_QPC_BYTE_24_HOP_LIMIT_M,
+ 		       V2_QPC_BYTE_24_HOP_LIMIT_S, 0);
+ 
+-	if (is_udp)
+-		roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_TC_M,
+-			       V2_QPC_BYTE_24_TC_S, grh->traffic_class >> 2);
+-	else
+-		roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_TC_M,
+-			       V2_QPC_BYTE_24_TC_S, grh->traffic_class);
+-
++	roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_TC_M,
++		       V2_QPC_BYTE_24_TC_S, get_tclass(&attr->ah_attr.grh));
+ 	roce_set_field(qpc_mask->byte_24_mtu_tc, V2_QPC_BYTE_24_TC_M,
+ 		       V2_QPC_BYTE_24_TC_S, 0);
++
+ 	roce_set_field(context->byte_28_at_fl, V2_QPC_BYTE_28_FL_M,
+ 		       V2_QPC_BYTE_28_FL_S, grh->flow_label);
+ 	roce_set_field(qpc_mask->byte_28_at_fl, V2_QPC_BYTE_28_FL_M,
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index afeffafc59f90..ae721fa61e0e4 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -325,7 +325,8 @@ static int hns_roce_alloc_ucontext(struct ib_ucontext *uctx,
+ 
+ 	resp.cqe_size = hr_dev->caps.cqe_sz;
+ 
+-	ret = ib_copy_to_udata(udata, &resp, sizeof(resp));
++	ret = ib_copy_to_udata(udata, &resp,
++			       min(udata->outlen, sizeof(resp)));
+ 	if (ret)
+ 		goto error_fail_copy_to_udata;
+ 
+@@ -631,7 +632,7 @@ static int hns_roce_init_hem(struct hns_roce_dev *hr_dev)
+ 		goto err_unmap_trrl;
+ 	}
+ 
+-	if (hr_dev->caps.srqc_entry_sz) {
++	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) {
+ 		ret = hns_roce_init_hem_table(hr_dev, &hr_dev->srq_table.table,
+ 					      HEM_TYPE_SRQC,
+ 					      hr_dev->caps.srqc_entry_sz,
+@@ -643,7 +644,7 @@ static int hns_roce_init_hem(struct hns_roce_dev *hr_dev)
+ 		}
+ 	}
+ 
+-	if (hr_dev->caps.sccc_sz) {
++	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_QP_FLOW_CTRL) {
+ 		ret = hns_roce_init_hem_table(hr_dev,
+ 					      &hr_dev->qp_table.sccc_table,
+ 					      HEM_TYPE_SCCC,
+@@ -687,11 +688,11 @@ err_unmap_qpc_timer:
+ 		hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qpc_timer_table);
+ 
+ err_unmap_ctx:
+-	if (hr_dev->caps.sccc_sz)
++	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_QP_FLOW_CTRL)
+ 		hns_roce_cleanup_hem_table(hr_dev,
+ 					   &hr_dev->qp_table.sccc_table);
+ err_unmap_srq:
+-	if (hr_dev->caps.srqc_entry_sz)
++	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ)
+ 		hns_roce_cleanup_hem_table(hr_dev, &hr_dev->srq_table.table);
+ 
+ err_unmap_cq:
+diff --git a/drivers/infiniband/hw/hns/hns_roce_pd.c b/drivers/infiniband/hw/hns/hns_roce_pd.c
+index 98f69496adb49..f78fa1d3d8075 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_pd.c
++++ b/drivers/infiniband/hw/hns/hns_roce_pd.c
+@@ -70,16 +70,17 @@ int hns_roce_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
+ 	}
+ 
+ 	if (udata) {
+-		struct hns_roce_ib_alloc_pd_resp uresp = {.pdn = pd->pdn};
++		struct hns_roce_ib_alloc_pd_resp resp = {.pdn = pd->pdn};
+ 
+-		if (ib_copy_to_udata(udata, &uresp, sizeof(uresp))) {
++		ret = ib_copy_to_udata(udata, &resp,
++				       min(udata->outlen, sizeof(resp)));
++		if (ret) {
+ 			hns_roce_pd_free(to_hr_dev(ib_dev), pd->pdn);
+-			ibdev_err(ib_dev, "failed to copy to udata\n");
+-			return -EFAULT;
++			ibdev_err(ib_dev, "failed to copy to udata, ret = %d\n", ret);
+ 		}
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ int hns_roce_dealloc_pd(struct ib_pd *pd, struct ib_udata *udata)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 6c081dd985fc9..ef1452215b17d 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -286,7 +286,7 @@ static int alloc_qpc(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
+ 		}
+ 	}
+ 
+-	if (hr_dev->caps.sccc_sz) {
++	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_QP_FLOW_CTRL) {
+ 		/* Alloc memory for SCC CTX */
+ 		ret = hns_roce_table_get(hr_dev, &qp_table->sccc_table,
+ 					 hr_qp->qpn);
+@@ -432,7 +432,12 @@ static int set_extend_sge_param(struct hns_roce_dev *hr_dev, u32 sq_wqe_cnt,
+ 	}
+ 
+ 	hr_qp->sge.sge_shift = HNS_ROCE_SGE_SHIFT;
+-	hr_qp->sge.sge_cnt = cnt;
++
++	/* If the number of extended sge is not zero, they MUST use the
++	 * space of HNS_HW_PAGE_SIZE at least.
++	 */
++	hr_qp->sge.sge_cnt = cnt ?
++			max(cnt, (u32)HNS_HW_PAGE_SIZE / HNS_ROCE_SGE_SIZE) : 0;
+ 
+ 	return 0;
+ }
+@@ -860,9 +865,12 @@ static int set_qp_param(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ 	}
+ 
+ 	if (udata) {
+-		if (ib_copy_from_udata(ucmd, udata, sizeof(*ucmd))) {
+-			ibdev_err(ibdev, "Failed to copy QP ucmd\n");
+-			return -EFAULT;
++		ret = ib_copy_from_udata(ucmd, udata,
++					 min(udata->inlen, sizeof(*ucmd)));
++		if (ret) {
++			ibdev_err(ibdev,
++				  "failed to copy QP ucmd, ret = %d\n", ret);
++			return ret;
+ 		}
+ 
+ 		ret = set_user_sq_size(hr_dev, &init_attr->cap, hr_qp, ucmd);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
+index 8caf74e44efd9..75d74f4bb52c9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
+@@ -300,7 +300,8 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
+ 	srq->max_gs = init_attr->attr.max_sge;
+ 
+ 	if (udata) {
+-		ret = ib_copy_from_udata(&ucmd, udata, sizeof(ucmd));
++		ret = ib_copy_from_udata(&ucmd, udata,
++					 min(udata->inlen, sizeof(ucmd)));
+ 		if (ret) {
+ 			ibdev_err(ibdev, "Failed to copy SRQ udata, err %d\n",
+ 				  ret);
+@@ -343,11 +344,10 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
+ 	resp.srqn = srq->srqn;
+ 
+ 	if (udata) {
+-		if (ib_copy_to_udata(udata, &resp,
+-				     min(udata->outlen, sizeof(resp)))) {
+-			ret = -EFAULT;
++		ret = ib_copy_to_udata(udata, &resp,
++				       min(udata->outlen, sizeof(resp)));
++		if (ret)
+ 			goto err_srqc_alloc;
+-		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index b261797b258fd..971694e781b65 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -642,6 +642,7 @@ void mlx5_mr_cache_free(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
+ 	if (mlx5_mr_cache_invalidate(mr)) {
+ 		detach_mr_from_cache(mr);
+ 		destroy_mkey(dev, mr);
++		kfree(mr);
+ 		return;
+ 	}
+ 
+@@ -1247,10 +1248,8 @@ err_1:
+ }
+ 
+ static void set_mr_fields(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr,
+-			  int npages, u64 length, int access_flags)
++			  u64 length, int access_flags)
+ {
+-	mr->npages = npages;
+-	atomic_add(npages, &dev->mdev->priv.reg_pages);
+ 	mr->ibmr.lkey = mr->mmkey.key;
+ 	mr->ibmr.rkey = mr->mmkey.key;
+ 	mr->ibmr.length = length;
+@@ -1290,8 +1289,7 @@ static struct ib_mr *mlx5_ib_get_dm_mr(struct ib_pd *pd, u64 start_addr,
+ 
+ 	kfree(in);
+ 
+-	mr->umem = NULL;
+-	set_mr_fields(dev, mr, 0, length, acc);
++	set_mr_fields(dev, mr, length, acc);
+ 
+ 	return &mr->ibmr;
+ 
+@@ -1419,7 +1417,9 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ 	mlx5_ib_dbg(dev, "mkey 0x%x\n", mr->mmkey.key);
+ 
+ 	mr->umem = umem;
+-	set_mr_fields(dev, mr, npages, length, access_flags);
++	mr->npages = npages;
++	atomic_add(mr->npages, &dev->mdev->priv.reg_pages);
++	set_mr_fields(dev, mr, length, access_flags);
+ 
+ 	if (xlt_with_umr && !(access_flags & IB_ACCESS_ON_DEMAND)) {
+ 		/*
+@@ -1531,8 +1531,6 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
+ 	mlx5_ib_dbg(dev, "start 0x%llx, virt_addr 0x%llx, length 0x%llx, access_flags 0x%x\n",
+ 		    start, virt_addr, length, access_flags);
+ 
+-	atomic_sub(mr->npages, &dev->mdev->priv.reg_pages);
+-
+ 	if (!mr->umem)
+ 		return -EINVAL;
+ 
+@@ -1553,12 +1551,17 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
+ 		 * used.
+ 		 */
+ 		flags |= IB_MR_REREG_TRANS;
++		atomic_sub(mr->npages, &dev->mdev->priv.reg_pages);
++		mr->npages = 0;
+ 		ib_umem_release(mr->umem);
+ 		mr->umem = NULL;
++
+ 		err = mr_umem_get(dev, addr, len, access_flags, &mr->umem,
+ 				  &npages, &page_shift, &ncont, &order);
+ 		if (err)
+ 			goto err;
++		mr->npages = ncont;
++		atomic_add(mr->npages, &dev->mdev->priv.reg_pages);
+ 	}
+ 
+ 	if (!mlx5_ib_can_reconfig_with_umr(dev, mr->access_flags,
+@@ -1609,7 +1612,7 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
+ 			goto err;
+ 	}
+ 
+-	set_mr_fields(dev, mr, npages, len, access_flags);
++	set_mr_fields(dev, mr, len, access_flags);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/infiniband/hw/mthca/mthca_cq.c b/drivers/infiniband/hw/mthca/mthca_cq.c
+index 119b2573c9a08..26c3408dcacae 100644
+--- a/drivers/infiniband/hw/mthca/mthca_cq.c
++++ b/drivers/infiniband/hw/mthca/mthca_cq.c
+@@ -604,7 +604,7 @@ static inline int mthca_poll_one(struct mthca_dev *dev,
+ 			entry->byte_len  = MTHCA_ATOMIC_BYTE_LEN;
+ 			break;
+ 		default:
+-			entry->opcode    = MTHCA_OPCODE_INVALID;
++			entry->opcode = 0xFF;
+ 			break;
+ 		}
+ 	} else {
+diff --git a/drivers/infiniband/hw/mthca/mthca_dev.h b/drivers/infiniband/hw/mthca/mthca_dev.h
+index 9dbbf4d16796a..a445160de3e16 100644
+--- a/drivers/infiniband/hw/mthca/mthca_dev.h
++++ b/drivers/infiniband/hw/mthca/mthca_dev.h
+@@ -105,7 +105,6 @@ enum {
+ 	MTHCA_OPCODE_ATOMIC_CS      = 0x11,
+ 	MTHCA_OPCODE_ATOMIC_FA      = 0x12,
+ 	MTHCA_OPCODE_BIND_MW        = 0x18,
+-	MTHCA_OPCODE_INVALID        = 0xff
+ };
+ 
+ enum {
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index af3923bf0a36b..d4917646641aa 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -634,7 +634,8 @@ next_wqe:
+ 	}
+ 
+ 	if (unlikely(qp_type(qp) == IB_QPT_RC &&
+-		     qp->req.psn > (qp->comp.psn + RXE_MAX_UNACKED_PSNS))) {
++		psn_compare(qp->req.psn, (qp->comp.psn +
++				RXE_MAX_UNACKED_PSNS)) > 0)) {
+ 		qp->req.wait_psn = 1;
+ 		goto exit;
+ 	}
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index f298adc02acba..d54a77ebe1184 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -1640,10 +1640,8 @@ static int rtrs_rdma_addr_resolved(struct rtrs_clt_con *con)
+ 		return err;
+ 	}
+ 	err = rdma_resolve_route(con->c.cm_id, RTRS_CONNECT_TIMEOUT_MS);
+-	if (err) {
++	if (err)
+ 		rtrs_err(s, "Resolving route failed, err: %d\n", err);
+-		destroy_con_cq_qp(con);
+-	}
+ 
+ 	return err;
+ }
+@@ -1837,8 +1835,8 @@ static int rtrs_clt_rdma_cm_handler(struct rdma_cm_id *cm_id,
+ 		cm_err = rtrs_rdma_route_resolved(con);
+ 		break;
+ 	case RDMA_CM_EVENT_ESTABLISHED:
+-		con->cm_err = rtrs_rdma_conn_established(con, ev);
+-		if (likely(!con->cm_err)) {
++		cm_err = rtrs_rdma_conn_established(con, ev);
++		if (likely(!cm_err)) {
+ 			/*
+ 			 * Report success and wake up. Here we abuse state_wq,
+ 			 * i.e. wake up without state change, but we set cm_err.
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index d6f93601712e4..1cb778aff3c59 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -1328,17 +1328,42 @@ static void rtrs_srv_dev_release(struct device *dev)
+ 	kfree(srv);
+ }
+ 
+-static struct rtrs_srv *__alloc_srv(struct rtrs_srv_ctx *ctx,
+-				     const uuid_t *paths_uuid)
++static void free_srv(struct rtrs_srv *srv)
++{
++	int i;
++
++	WARN_ON(refcount_read(&srv->refcount));
++	for (i = 0; i < srv->queue_depth; i++)
++		mempool_free(srv->chunks[i], chunk_pool);
++	kfree(srv->chunks);
++	mutex_destroy(&srv->paths_mutex);
++	mutex_destroy(&srv->paths_ev_mutex);
++	/* last put to release the srv structure */
++	put_device(&srv->dev);
++}
++
++static struct rtrs_srv *get_or_create_srv(struct rtrs_srv_ctx *ctx,
++					   const uuid_t *paths_uuid)
+ {
+ 	struct rtrs_srv *srv;
+ 	int i;
+ 
++	mutex_lock(&ctx->srv_mutex);
++	list_for_each_entry(srv, &ctx->srv_list, ctx_list) {
++		if (uuid_equal(&srv->paths_uuid, paths_uuid) &&
++		    refcount_inc_not_zero(&srv->refcount)) {
++			mutex_unlock(&ctx->srv_mutex);
++			return srv;
++		}
++	}
++
++	/* need to allocate a new srv */
+ 	srv = kzalloc(sizeof(*srv), GFP_KERNEL);
+-	if  (!srv)
++	if  (!srv) {
++		mutex_unlock(&ctx->srv_mutex);
+ 		return NULL;
++	}
+ 
+-	refcount_set(&srv->refcount, 1);
+ 	INIT_LIST_HEAD(&srv->paths_list);
+ 	mutex_init(&srv->paths_mutex);
+ 	mutex_init(&srv->paths_ev_mutex);
+@@ -1347,6 +1372,8 @@ static struct rtrs_srv *__alloc_srv(struct rtrs_srv_ctx *ctx,
+ 	srv->ctx = ctx;
+ 	device_initialize(&srv->dev);
+ 	srv->dev.release = rtrs_srv_dev_release;
++	list_add(&srv->ctx_list, &ctx->srv_list);
++	mutex_unlock(&ctx->srv_mutex);
+ 
+ 	srv->chunks = kcalloc(srv->queue_depth, sizeof(*srv->chunks),
+ 			      GFP_KERNEL);
+@@ -1358,7 +1385,7 @@ static struct rtrs_srv *__alloc_srv(struct rtrs_srv_ctx *ctx,
+ 		if (!srv->chunks[i])
+ 			goto err_free_chunks;
+ 	}
+-	list_add(&srv->ctx_list, &ctx->srv_list);
++	refcount_set(&srv->refcount, 1);
+ 
+ 	return srv;
+ 
+@@ -1369,52 +1396,9 @@ err_free_chunks:
+ 
+ err_free_srv:
+ 	kfree(srv);
+-
+ 	return NULL;
+ }
+ 
+-static void free_srv(struct rtrs_srv *srv)
+-{
+-	int i;
+-
+-	WARN_ON(refcount_read(&srv->refcount));
+-	for (i = 0; i < srv->queue_depth; i++)
+-		mempool_free(srv->chunks[i], chunk_pool);
+-	kfree(srv->chunks);
+-	mutex_destroy(&srv->paths_mutex);
+-	mutex_destroy(&srv->paths_ev_mutex);
+-	/* last put to release the srv structure */
+-	put_device(&srv->dev);
+-}
+-
+-static inline struct rtrs_srv *__find_srv_and_get(struct rtrs_srv_ctx *ctx,
+-						   const uuid_t *paths_uuid)
+-{
+-	struct rtrs_srv *srv;
+-
+-	list_for_each_entry(srv, &ctx->srv_list, ctx_list) {
+-		if (uuid_equal(&srv->paths_uuid, paths_uuid) &&
+-		    refcount_inc_not_zero(&srv->refcount))
+-			return srv;
+-	}
+-
+-	return NULL;
+-}
+-
+-static struct rtrs_srv *get_or_create_srv(struct rtrs_srv_ctx *ctx,
+-					   const uuid_t *paths_uuid)
+-{
+-	struct rtrs_srv *srv;
+-
+-	mutex_lock(&ctx->srv_mutex);
+-	srv = __find_srv_and_get(ctx, paths_uuid);
+-	if (!srv)
+-		srv = __alloc_srv(ctx, paths_uuid);
+-	mutex_unlock(&ctx->srv_mutex);
+-
+-	return srv;
+-}
+-
+ static void put_srv(struct rtrs_srv *srv)
+ {
+ 	if (refcount_dec_and_test(&srv->refcount)) {
+@@ -1813,7 +1797,11 @@ static int rtrs_rdma_connect(struct rdma_cm_id *cm_id,
+ 	}
+ 	recon_cnt = le16_to_cpu(msg->recon_cnt);
+ 	srv = get_or_create_srv(ctx, &msg->paths_uuid);
+-	if (!srv) {
++	/*
++	 * "refcount == 0" happens if a previous thread calls get_or_create_srv
++	 * allocate srv, but chunks of srv are not allocated yet.
++	 */
++	if (!srv || refcount_read(&srv->refcount) == 0) {
+ 		err = -ENOMEM;
+ 		goto reject_w_err;
+ 	}
+diff --git a/drivers/input/keyboard/omap4-keypad.c b/drivers/input/keyboard/omap4-keypad.c
+index d6c924032aaa8..dd16f7b3c7ef6 100644
+--- a/drivers/input/keyboard/omap4-keypad.c
++++ b/drivers/input/keyboard/omap4-keypad.c
+@@ -186,12 +186,8 @@ static int omap4_keypad_open(struct input_dev *input)
+ 	return 0;
+ }
+ 
+-static void omap4_keypad_close(struct input_dev *input)
++static void omap4_keypad_stop(struct omap4_keypad *keypad_data)
+ {
+-	struct omap4_keypad *keypad_data = input_get_drvdata(input);
+-
+-	disable_irq(keypad_data->irq);
+-
+ 	/* Disable interrupts and wake-up events */
+ 	kbd_write_irqreg(keypad_data, OMAP4_KBD_IRQENABLE,
+ 			 OMAP4_VAL_IRQDISABLE);
+@@ -200,7 +196,15 @@ static void omap4_keypad_close(struct input_dev *input)
+ 	/* clear pending interrupts */
+ 	kbd_write_irqreg(keypad_data, OMAP4_KBD_IRQSTATUS,
+ 			 kbd_read_irqreg(keypad_data, OMAP4_KBD_IRQSTATUS));
++}
++
++static void omap4_keypad_close(struct input_dev *input)
++{
++	struct omap4_keypad *keypad_data;
+ 
++	keypad_data = input_get_drvdata(input);
++	disable_irq(keypad_data->irq);
++	omap4_keypad_stop(keypad_data);
+ 	enable_irq(keypad_data->irq);
+ 
+ 	pm_runtime_put_sync(input->dev.parent);
+@@ -223,13 +227,37 @@ static int omap4_keypad_parse_dt(struct device *dev,
+ 	return 0;
+ }
+ 
++static int omap4_keypad_check_revision(struct device *dev,
++				       struct omap4_keypad *keypad_data)
++{
++	unsigned int rev;
++
++	rev = __raw_readl(keypad_data->base + OMAP4_KBD_REVISION);
++	rev &= 0x03 << 30;
++	rev >>= 30;
++	switch (rev) {
++	case KBD_REVISION_OMAP4:
++		keypad_data->reg_offset = 0x00;
++		keypad_data->irqreg_offset = 0x00;
++		break;
++	case KBD_REVISION_OMAP5:
++		keypad_data->reg_offset = 0x10;
++		keypad_data->irqreg_offset = 0x0c;
++		break;
++	default:
++		dev_err(dev, "Keypad reports unsupported revision %d", rev);
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static int omap4_keypad_probe(struct platform_device *pdev)
+ {
+ 	struct omap4_keypad *keypad_data;
+ 	struct input_dev *input_dev;
+ 	struct resource *res;
+ 	unsigned int max_keys;
+-	int rev;
+ 	int irq;
+ 	int error;
+ 
+@@ -269,41 +297,33 @@ static int omap4_keypad_probe(struct platform_device *pdev)
+ 		goto err_release_mem;
+ 	}
+ 
++	pm_runtime_enable(&pdev->dev);
+ 
+ 	/*
+ 	 * Enable clocks for the keypad module so that we can read
+ 	 * revision register.
+ 	 */
+-	pm_runtime_enable(&pdev->dev);
+ 	error = pm_runtime_get_sync(&pdev->dev);
+ 	if (error) {
+ 		dev_err(&pdev->dev, "pm_runtime_get_sync() failed\n");
+-		goto err_unmap;
+-	}
+-	rev = __raw_readl(keypad_data->base + OMAP4_KBD_REVISION);
+-	rev &= 0x03 << 30;
+-	rev >>= 30;
+-	switch (rev) {
+-	case KBD_REVISION_OMAP4:
+-		keypad_data->reg_offset = 0x00;
+-		keypad_data->irqreg_offset = 0x00;
+-		break;
+-	case KBD_REVISION_OMAP5:
+-		keypad_data->reg_offset = 0x10;
+-		keypad_data->irqreg_offset = 0x0c;
+-		break;
+-	default:
+-		dev_err(&pdev->dev,
+-			"Keypad reports unsupported revision %d", rev);
+-		error = -EINVAL;
+-		goto err_pm_put_sync;
++		pm_runtime_put_noidle(&pdev->dev);
++	} else {
++		error = omap4_keypad_check_revision(&pdev->dev,
++						    keypad_data);
++		if (!error) {
++			/* Ensure device does not raise interrupts */
++			omap4_keypad_stop(keypad_data);
++		}
++		pm_runtime_put_sync(&pdev->dev);
+ 	}
++	if (error)
++		goto err_pm_disable;
+ 
+ 	/* input device allocation */
+ 	keypad_data->input = input_dev = input_allocate_device();
+ 	if (!input_dev) {
+ 		error = -ENOMEM;
+-		goto err_pm_put_sync;
++		goto err_pm_disable;
+ 	}
+ 
+ 	input_dev->name = pdev->name;
+@@ -349,28 +369,25 @@ static int omap4_keypad_probe(struct platform_device *pdev)
+ 		goto err_free_keymap;
+ 	}
+ 
+-	device_init_wakeup(&pdev->dev, true);
+-	pm_runtime_put_sync(&pdev->dev);
+-
+ 	error = input_register_device(keypad_data->input);
+ 	if (error < 0) {
+ 		dev_err(&pdev->dev, "failed to register input device\n");
+-		goto err_pm_disable;
++		goto err_free_irq;
+ 	}
+ 
++	device_init_wakeup(&pdev->dev, true);
+ 	platform_set_drvdata(pdev, keypad_data);
++
+ 	return 0;
+ 
+-err_pm_disable:
+-	pm_runtime_disable(&pdev->dev);
++err_free_irq:
+ 	free_irq(keypad_data->irq, keypad_data);
+ err_free_keymap:
+ 	kfree(keypad_data->keymap);
+ err_free_input:
+ 	input_free_device(input_dev);
+-err_pm_put_sync:
+-	pm_runtime_put_sync(&pdev->dev);
+-err_unmap:
++err_pm_disable:
++	pm_runtime_disable(&pdev->dev);
+ 	iounmap(keypad_data->base);
+ err_release_mem:
+ 	release_mem_region(res->start, resource_size(res));
+diff --git a/drivers/input/mouse/cyapa_gen6.c b/drivers/input/mouse/cyapa_gen6.c
+index 7eba66fbef580..812edfced86ee 100644
+--- a/drivers/input/mouse/cyapa_gen6.c
++++ b/drivers/input/mouse/cyapa_gen6.c
+@@ -573,7 +573,7 @@ static int cyapa_pip_retrieve_data_structure(struct cyapa *cyapa,
+ 
+ 	memset(&cmd, 0, sizeof(cmd));
+ 	put_unaligned_le16(PIP_OUTPUT_REPORT_ADDR, &cmd.head.addr);
+-	put_unaligned_le16(sizeof(cmd), &cmd.head.length - 2);
++	put_unaligned_le16(sizeof(cmd) - 2, &cmd.head.length);
+ 	cmd.head.report_id = PIP_APP_CMD_REPORT_ID;
+ 	cmd.head.cmd_code = PIP_RETRIEVE_DATA_STRUCTURE;
+ 	put_unaligned_le16(read_offset, &cmd.read_offset);
+diff --git a/drivers/input/touchscreen/ads7846.c b/drivers/input/touchscreen/ads7846.c
+index 8fd7fc39c4fd7..ff97897feaf2a 100644
+--- a/drivers/input/touchscreen/ads7846.c
++++ b/drivers/input/touchscreen/ads7846.c
+@@ -33,6 +33,7 @@
+ #include <linux/regulator/consumer.h>
+ #include <linux/module.h>
+ #include <asm/irq.h>
++#include <asm/unaligned.h>
+ 
+ /*
+  * This code has been heavily tested on a Nokia 770, and lightly
+@@ -199,6 +200,26 @@ struct ads7846 {
+ #define	REF_ON	(READ_12BIT_DFR(x, 1, 1))
+ #define	REF_OFF	(READ_12BIT_DFR(y, 0, 0))
+ 
++static int get_pendown_state(struct ads7846 *ts)
++{
++	if (ts->get_pendown_state)
++		return ts->get_pendown_state();
++
++	return !gpio_get_value(ts->gpio_pendown);
++}
++
++static void ads7846_report_pen_up(struct ads7846 *ts)
++{
++	struct input_dev *input = ts->input;
++
++	input_report_key(input, BTN_TOUCH, 0);
++	input_report_abs(input, ABS_PRESSURE, 0);
++	input_sync(input);
++
++	ts->pendown = false;
++	dev_vdbg(&ts->spi->dev, "UP\n");
++}
++
+ /* Must be called with ts->lock held */
+ static void ads7846_stop(struct ads7846 *ts)
+ {
+@@ -215,6 +236,10 @@ static void ads7846_stop(struct ads7846 *ts)
+ static void ads7846_restart(struct ads7846 *ts)
+ {
+ 	if (!ts->disabled && !ts->suspended) {
++		/* Check if pen was released since last stop */
++		if (ts->pendown && !get_pendown_state(ts))
++			ads7846_report_pen_up(ts);
++
+ 		/* Tell IRQ thread that it may poll the device. */
+ 		ts->stopped = false;
+ 		mb();
+@@ -411,7 +436,7 @@ static int ads7845_read12_ser(struct device *dev, unsigned command)
+ 
+ 	if (status == 0) {
+ 		/* BE12 value, then padding */
+-		status = be16_to_cpu(*((u16 *)&req->sample[1]));
++		status = get_unaligned_be16(&req->sample[1]);
+ 		status = status >> 3;
+ 		status &= 0x0fff;
+ 	}
+@@ -606,14 +631,6 @@ static const struct attribute_group ads784x_attr_group = {
+ 
+ /*--------------------------------------------------------------------------*/
+ 
+-static int get_pendown_state(struct ads7846 *ts)
+-{
+-	if (ts->get_pendown_state)
+-		return ts->get_pendown_state();
+-
+-	return !gpio_get_value(ts->gpio_pendown);
+-}
+-
+ static void null_wait_for_sync(void)
+ {
+ }
+@@ -786,10 +803,11 @@ static void ads7846_report_state(struct ads7846 *ts)
+ 		/* compute touch pressure resistance using equation #2 */
+ 		Rt = z2;
+ 		Rt -= z1;
+-		Rt *= x;
+ 		Rt *= ts->x_plate_ohms;
++		Rt = DIV_ROUND_CLOSEST(Rt, 16);
++		Rt *= x;
+ 		Rt /= z1;
+-		Rt = (Rt + 2047) >> 12;
++		Rt = DIV_ROUND_CLOSEST(Rt, 256);
+ 	} else {
+ 		Rt = 0;
+ 	}
+@@ -868,16 +886,8 @@ static irqreturn_t ads7846_irq(int irq, void *handle)
+ 				   msecs_to_jiffies(TS_POLL_PERIOD));
+ 	}
+ 
+-	if (ts->pendown && !ts->stopped) {
+-		struct input_dev *input = ts->input;
+-
+-		input_report_key(input, BTN_TOUCH, 0);
+-		input_report_abs(input, ABS_PRESSURE, 0);
+-		input_sync(input);
+-
+-		ts->pendown = false;
+-		dev_vdbg(&ts->spi->dev, "UP\n");
+-	}
++	if (ts->pendown && !ts->stopped)
++		ads7846_report_pen_up(ts);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 702fbaa6c9ada..ef37ccfa82562 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -10,8 +10,15 @@
+ 
+ struct qcom_smmu {
+ 	struct arm_smmu_device smmu;
++	bool bypass_quirk;
++	u8 bypass_cbndx;
+ };
+ 
++static struct qcom_smmu *to_qcom_smmu(struct arm_smmu_device *smmu)
++{
++	return container_of(smmu, struct qcom_smmu, smmu);
++}
++
+ static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
+ 	{ .compatible = "qcom,adreno" },
+ 	{ .compatible = "qcom,mdp4" },
+@@ -23,6 +30,87 @@ static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
+ 	{ }
+ };
+ 
++static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
++{
++	unsigned int last_s2cr = ARM_SMMU_GR0_S2CR(smmu->num_mapping_groups - 1);
++	struct qcom_smmu *qsmmu = to_qcom_smmu(smmu);
++	u32 reg;
++	u32 smr;
++	int i;
++
++	/*
++	 * With some firmware versions writes to S2CR of type FAULT are
++	 * ignored, and writing BYPASS will end up written as FAULT in the
++	 * register. Perform a write to S2CR to detect if this is the case and
++	 * if so reserve a context bank to emulate bypass streams.
++	 */
++	reg = FIELD_PREP(ARM_SMMU_S2CR_TYPE, S2CR_TYPE_BYPASS) |
++	      FIELD_PREP(ARM_SMMU_S2CR_CBNDX, 0xff) |
++	      FIELD_PREP(ARM_SMMU_S2CR_PRIVCFG, S2CR_PRIVCFG_DEFAULT);
++	arm_smmu_gr0_write(smmu, last_s2cr, reg);
++	reg = arm_smmu_gr0_read(smmu, last_s2cr);
++	if (FIELD_GET(ARM_SMMU_S2CR_TYPE, reg) != S2CR_TYPE_BYPASS) {
++		qsmmu->bypass_quirk = true;
++		qsmmu->bypass_cbndx = smmu->num_context_banks - 1;
++
++		set_bit(qsmmu->bypass_cbndx, smmu->context_map);
++
++		reg = FIELD_PREP(ARM_SMMU_CBAR_TYPE, CBAR_TYPE_S1_TRANS_S2_BYPASS);
++		arm_smmu_gr1_write(smmu, ARM_SMMU_GR1_CBAR(qsmmu->bypass_cbndx), reg);
++	}
++
++	for (i = 0; i < smmu->num_mapping_groups; i++) {
++		smr = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_SMR(i));
++
++		if (FIELD_GET(ARM_SMMU_SMR_VALID, smr)) {
++			smmu->smrs[i].id = FIELD_GET(ARM_SMMU_SMR_ID, smr);
++			smmu->smrs[i].mask = FIELD_GET(ARM_SMMU_SMR_MASK, smr);
++			smmu->smrs[i].valid = true;
++
++			smmu->s2crs[i].type = S2CR_TYPE_BYPASS;
++			smmu->s2crs[i].privcfg = S2CR_PRIVCFG_DEFAULT;
++			smmu->s2crs[i].cbndx = 0xff;
++		}
++	}
++
++	return 0;
++}
++
++static void qcom_smmu_write_s2cr(struct arm_smmu_device *smmu, int idx)
++{
++	struct arm_smmu_s2cr *s2cr = smmu->s2crs + idx;
++	struct qcom_smmu *qsmmu = to_qcom_smmu(smmu);
++	u32 cbndx = s2cr->cbndx;
++	u32 type = s2cr->type;
++	u32 reg;
++
++	if (qsmmu->bypass_quirk) {
++		if (type == S2CR_TYPE_BYPASS) {
++			/*
++			 * Firmware with quirky S2CR handling will substitute
++			 * BYPASS writes with FAULT, so point the stream to the
++			 * reserved context bank and ask for translation on the
++			 * stream
++			 */
++			type = S2CR_TYPE_TRANS;
++			cbndx = qsmmu->bypass_cbndx;
++		} else if (type == S2CR_TYPE_FAULT) {
++			/*
++			 * Firmware with quirky S2CR handling will ignore FAULT
++			 * writes, so trick it to write FAULT by asking for a
++			 * BYPASS.
++			 */
++			type = S2CR_TYPE_BYPASS;
++			cbndx = 0xff;
++		}
++	}
++
++	reg = FIELD_PREP(ARM_SMMU_S2CR_TYPE, type) |
++	      FIELD_PREP(ARM_SMMU_S2CR_CBNDX, cbndx) |
++	      FIELD_PREP(ARM_SMMU_S2CR_PRIVCFG, s2cr->privcfg);
++	arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_S2CR(idx), reg);
++}
++
+ static int qcom_smmu_def_domain_type(struct device *dev)
+ {
+ 	const struct of_device_id *match =
+@@ -61,8 +149,10 @@ static int qcom_smmu500_reset(struct arm_smmu_device *smmu)
+ }
+ 
+ static const struct arm_smmu_impl qcom_smmu_impl = {
++	.cfg_probe = qcom_smmu_cfg_probe,
+ 	.def_domain_type = qcom_smmu_def_domain_type,
+ 	.reset = qcom_smmu500_reset,
++	.write_s2cr = qcom_smmu_write_s2cr,
+ };
+ 
+ struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu)
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+index dad7fa86fbd4c..bcbacf22331d6 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+@@ -929,9 +929,16 @@ static void arm_smmu_write_smr(struct arm_smmu_device *smmu, int idx)
+ static void arm_smmu_write_s2cr(struct arm_smmu_device *smmu, int idx)
+ {
+ 	struct arm_smmu_s2cr *s2cr = smmu->s2crs + idx;
+-	u32 reg = FIELD_PREP(ARM_SMMU_S2CR_TYPE, s2cr->type) |
+-		  FIELD_PREP(ARM_SMMU_S2CR_CBNDX, s2cr->cbndx) |
+-		  FIELD_PREP(ARM_SMMU_S2CR_PRIVCFG, s2cr->privcfg);
++	u32 reg;
++
++	if (smmu->impl && smmu->impl->write_s2cr) {
++		smmu->impl->write_s2cr(smmu, idx);
++		return;
++	}
++
++	reg = FIELD_PREP(ARM_SMMU_S2CR_TYPE, s2cr->type) |
++	      FIELD_PREP(ARM_SMMU_S2CR_CBNDX, s2cr->cbndx) |
++	      FIELD_PREP(ARM_SMMU_S2CR_PRIVCFG, s2cr->privcfg);
+ 
+ 	if (smmu->features & ARM_SMMU_FEAT_EXIDS && smmu->smrs &&
+ 	    smmu->smrs[idx].valid)
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.h b/drivers/iommu/arm/arm-smmu/arm-smmu.h
+index 1a746476927c9..b71647eaa319b 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu.h
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu.h
+@@ -436,6 +436,7 @@ struct arm_smmu_impl {
+ 	int (*alloc_context_bank)(struct arm_smmu_domain *smmu_domain,
+ 				  struct arm_smmu_device *smmu,
+ 				  struct device *dev, int start);
++	void (*write_s2cr)(struct arm_smmu_device *smmu, int idx);
+ };
+ 
+ #define INVALID_SMENDX			-1
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index a49afa11673cc..c9da9e93f545c 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -5387,6 +5387,7 @@ static void intel_iommu_aux_detach_device(struct iommu_domain *domain,
+ 	aux_domain_remove_dev(to_dmar_domain(domain), dev);
+ }
+ 
++#ifdef CONFIG_INTEL_IOMMU_SVM
+ /*
+  * 2D array for converting and sanitizing IOMMU generic TLB granularity to
+  * VT-d granularity. Invalidation is typically included in the unmap operation
+@@ -5433,7 +5434,6 @@ static inline u64 to_vtd_size(u64 granu_size, u64 nr_granules)
+ 	return order_base_2(nr_pages);
+ }
+ 
+-#ifdef CONFIG_INTEL_IOMMU_SVM
+ static int
+ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
+ 			   struct iommu_cache_invalidate_info *inv_info)
+diff --git a/drivers/irqchip/irq-alpine-msi.c b/drivers/irqchip/irq-alpine-msi.c
+index 23a3b877f7f1d..ede02dc2bcd0b 100644
+--- a/drivers/irqchip/irq-alpine-msi.c
++++ b/drivers/irqchip/irq-alpine-msi.c
+@@ -165,8 +165,7 @@ static int alpine_msix_middle_domain_alloc(struct irq_domain *domain,
+ 	return 0;
+ 
+ err_sgi:
+-	while (--i >= 0)
+-		irq_domain_free_irqs_parent(domain, virq, i);
++	irq_domain_free_irqs_parent(domain, virq, i - 1);
+ 	alpine_msix_free_sgi(priv, sgi, nr_irqs);
+ 	return err;
+ }
+diff --git a/drivers/irqchip/irq-ti-sci-inta.c b/drivers/irqchip/irq-ti-sci-inta.c
+index b2ab8db439d92..532d0ae172d9f 100644
+--- a/drivers/irqchip/irq-ti-sci-inta.c
++++ b/drivers/irqchip/irq-ti-sci-inta.c
+@@ -726,7 +726,7 @@ static int ti_sci_inta_irq_domain_probe(struct platform_device *pdev)
+ 	INIT_LIST_HEAD(&inta->vint_list);
+ 	mutex_init(&inta->vint_mutex);
+ 
+-	dev_info(dev, "Interrupt Aggregator domain %d created\n", pdev->id);
++	dev_info(dev, "Interrupt Aggregator domain %d created\n", inta->ti_sci_id);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/irqchip/irq-ti-sci-intr.c b/drivers/irqchip/irq-ti-sci-intr.c
+index ac9d6d658e65c..fe8fad22bcf96 100644
+--- a/drivers/irqchip/irq-ti-sci-intr.c
++++ b/drivers/irqchip/irq-ti-sci-intr.c
+@@ -129,7 +129,7 @@ static void ti_sci_intr_irq_domain_free(struct irq_domain *domain,
+  * @virq:	Corresponding Linux virtual IRQ number
+  * @hwirq:	Corresponding hwirq for the IRQ within this IRQ domain
+  *
+- * Returns parent irq if all went well else appropriate error pointer.
++ * Returns intr output irq if all went well else appropriate error pointer.
+  */
+ static int ti_sci_intr_alloc_parent_irq(struct irq_domain *domain,
+ 					unsigned int virq, u32 hwirq)
+@@ -173,7 +173,7 @@ static int ti_sci_intr_alloc_parent_irq(struct irq_domain *domain,
+ 	if (err)
+ 		goto err_msg;
+ 
+-	return p_hwirq;
++	return out_irq;
+ 
+ err_msg:
+ 	irq_domain_free_irqs_parent(domain, virq, 1);
+@@ -198,19 +198,19 @@ static int ti_sci_intr_irq_domain_alloc(struct irq_domain *domain,
+ 	struct irq_fwspec *fwspec = data;
+ 	unsigned long hwirq;
+ 	unsigned int flags;
+-	int err, p_hwirq;
++	int err, out_irq;
+ 
+ 	err = ti_sci_intr_irq_domain_translate(domain, fwspec, &hwirq, &flags);
+ 	if (err)
+ 		return err;
+ 
+-	p_hwirq = ti_sci_intr_alloc_parent_irq(domain, virq, hwirq);
+-	if (p_hwirq < 0)
+-		return p_hwirq;
++	out_irq = ti_sci_intr_alloc_parent_irq(domain, virq, hwirq);
++	if (out_irq < 0)
++		return out_irq;
+ 
+ 	irq_domain_set_hwirq_and_chip(domain, virq, hwirq,
+ 				      &ti_sci_intr_irq_chip,
+-				      (void *)(uintptr_t)p_hwirq);
++				      (void *)(uintptr_t)out_irq);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/irqchip/qcom-pdc.c b/drivers/irqchip/qcom-pdc.c
+index bd39e9de6ecf7..5dc63c20b67ea 100644
+--- a/drivers/irqchip/qcom-pdc.c
++++ b/drivers/irqchip/qcom-pdc.c
+@@ -159,6 +159,8 @@ static int qcom_pdc_gic_set_type(struct irq_data *d, unsigned int type)
+ {
+ 	int pin_out = d->hwirq;
+ 	enum pdc_irq_config_bits pdc_type;
++	enum pdc_irq_config_bits old_pdc_type;
++	int ret;
+ 
+ 	if (pin_out == GPIO_NO_WAKE_IRQ)
+ 		return 0;
+@@ -187,9 +189,26 @@ static int qcom_pdc_gic_set_type(struct irq_data *d, unsigned int type)
+ 		return -EINVAL;
+ 	}
+ 
++	old_pdc_type = pdc_reg_read(IRQ_i_CFG, pin_out);
+ 	pdc_reg_write(IRQ_i_CFG, pin_out, pdc_type);
+ 
+-	return irq_chip_set_type_parent(d, type);
++	ret = irq_chip_set_type_parent(d, type);
++	if (ret)
++		return ret;
++
++	/*
++	 * When we change types the PDC can give a phantom interrupt.
++	 * Clear it.  Specifically the phantom shows up when reconfiguring
++	 * polarity of interrupt without changing the state of the signal
++	 * but let's be consistent and clear it always.
++	 *
++	 * Doing this works because we have IRQCHIP_SET_TYPE_MASKED so the
++	 * interrupt will be cleared before the rest of the system sees it.
++	 */
++	if (old_pdc_type != pdc_type)
++		irq_chip_set_parent_state(d, IRQCHIP_STATE_PENDING, false);
++
++	return 0;
+ }
+ 
+ static struct irq_chip qcom_pdc_gic_chip = {
+diff --git a/drivers/leds/leds-lp50xx.c b/drivers/leds/leds-lp50xx.c
+index 5fb4f24aeb2e8..f13117eed976d 100644
+--- a/drivers/leds/leds-lp50xx.c
++++ b/drivers/leds/leds-lp50xx.c
+@@ -487,8 +487,10 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
+ 		 */
+ 		mc_led_info = devm_kcalloc(priv->dev, LP50XX_LEDS_PER_MODULE,
+ 					   sizeof(*mc_led_info), GFP_KERNEL);
+-		if (!mc_led_info)
+-			return -ENOMEM;
++		if (!mc_led_info) {
++			ret = -ENOMEM;
++			goto child_out;
++		}
+ 
+ 		fwnode_for_each_child_node(child, led_node) {
+ 			ret = fwnode_property_read_u32(led_node, "color",
+diff --git a/drivers/leds/leds-netxbig.c b/drivers/leds/leds-netxbig.c
+index e6fd47365b588..68fbf0b66fadd 100644
+--- a/drivers/leds/leds-netxbig.c
++++ b/drivers/leds/leds-netxbig.c
+@@ -448,31 +448,39 @@ static int netxbig_leds_get_of_pdata(struct device *dev,
+ 	gpio_ext = devm_kzalloc(dev, sizeof(*gpio_ext), GFP_KERNEL);
+ 	if (!gpio_ext) {
+ 		of_node_put(gpio_ext_np);
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto put_device;
+ 	}
+ 	ret = netxbig_gpio_ext_get(dev, gpio_ext_dev, gpio_ext);
+ 	of_node_put(gpio_ext_np);
+ 	if (ret)
+-		return ret;
++		goto put_device;
+ 	pdata->gpio_ext = gpio_ext;
+ 
+ 	/* Timers (optional) */
+ 	ret = of_property_count_u32_elems(np, "timers");
+ 	if (ret > 0) {
+-		if (ret % 3)
+-			return -EINVAL;
++		if (ret % 3) {
++			ret = -EINVAL;
++			goto put_device;
++		}
++
+ 		num_timers = ret / 3;
+ 		timers = devm_kcalloc(dev, num_timers, sizeof(*timers),
+ 				      GFP_KERNEL);
+-		if (!timers)
+-			return -ENOMEM;
++		if (!timers) {
++			ret = -ENOMEM;
++			goto put_device;
++		}
+ 		for (i = 0; i < num_timers; i++) {
+ 			u32 tmp;
+ 
+ 			of_property_read_u32_index(np, "timers", 3 * i,
+ 						   &timers[i].mode);
+-			if (timers[i].mode >= NETXBIG_LED_MODE_NUM)
+-				return -EINVAL;
++			if (timers[i].mode >= NETXBIG_LED_MODE_NUM) {
++				ret = -EINVAL;
++				goto put_device;
++			}
+ 			of_property_read_u32_index(np, "timers",
+ 						   3 * i + 1, &tmp);
+ 			timers[i].delay_on = tmp;
+@@ -488,12 +496,15 @@ static int netxbig_leds_get_of_pdata(struct device *dev,
+ 	num_leds = of_get_available_child_count(np);
+ 	if (!num_leds) {
+ 		dev_err(dev, "No LED subnodes found in DT\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto put_device;
+ 	}
+ 
+ 	leds = devm_kcalloc(dev, num_leds, sizeof(*leds), GFP_KERNEL);
+-	if (!leds)
+-		return -ENOMEM;
++	if (!leds) {
++		ret = -ENOMEM;
++		goto put_device;
++	}
+ 
+ 	led = leds;
+ 	for_each_available_child_of_node(np, child) {
+@@ -574,6 +585,8 @@ static int netxbig_leds_get_of_pdata(struct device *dev,
+ 
+ err_node_put:
+ 	of_node_put(child);
++put_device:
++	put_device(gpio_ext_dev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/leds/leds-turris-omnia.c b/drivers/leds/leds-turris-omnia.c
+index 8c5bdc3847ee7..880fc8def5309 100644
+--- a/drivers/leds/leds-turris-omnia.c
++++ b/drivers/leds/leds-turris-omnia.c
+@@ -98,9 +98,9 @@ static int omnia_led_register(struct i2c_client *client, struct omnia_led *led,
+ 	}
+ 
+ 	ret = of_property_read_u32(np, "color", &color);
+-	if (ret || color != LED_COLOR_ID_MULTI) {
++	if (ret || color != LED_COLOR_ID_RGB) {
+ 		dev_warn(dev,
+-			 "Node %pOF: must contain 'color' property with value LED_COLOR_ID_MULTI\n",
++			 "Node %pOF: must contain 'color' property with value LED_COLOR_ID_RGB\n",
+ 			 np);
+ 		return 0;
+ 	}
+diff --git a/drivers/macintosh/adb-iop.c b/drivers/macintosh/adb-iop.c
+index f3d1a460fbce1..0ee3272491501 100644
+--- a/drivers/macintosh/adb-iop.c
++++ b/drivers/macintosh/adb-iop.c
+@@ -25,6 +25,7 @@
+ static struct adb_request *current_req;
+ static struct adb_request *last_req;
+ static unsigned int autopoll_devs;
++static u8 autopoll_addr;
+ 
+ static enum adb_iop_state {
+ 	idle,
+@@ -41,6 +42,11 @@ static int adb_iop_autopoll(int);
+ static void adb_iop_poll(void);
+ static int adb_iop_reset_bus(void);
+ 
++/* ADB command byte structure */
++#define ADDR_MASK       0xF0
++#define OP_MASK         0x0C
++#define TALK            0x0C
++
+ struct adb_driver adb_iop_driver = {
+ 	.name         = "ISM IOP",
+ 	.probe        = adb_iop_probe,
+@@ -78,10 +84,7 @@ static void adb_iop_complete(struct iop_msg *msg)
+ 
+ 	local_irq_save(flags);
+ 
+-	if (current_req->reply_expected)
+-		adb_iop_state = awaiting_reply;
+-	else
+-		adb_iop_done();
++	adb_iop_state = awaiting_reply;
+ 
+ 	local_irq_restore(flags);
+ }
+@@ -89,38 +92,52 @@ static void adb_iop_complete(struct iop_msg *msg)
+ /*
+  * Listen for ADB messages from the IOP.
+  *
+- * This will be called when unsolicited messages (usually replies to TALK
+- * commands or autopoll packets) are received.
++ * This will be called when unsolicited IOP messages are received.
++ * These IOP messages can carry ADB autopoll responses and also occur
++ * after explicit ADB commands.
+  */
+ 
+ static void adb_iop_listen(struct iop_msg *msg)
+ {
+ 	struct adb_iopmsg *amsg = (struct adb_iopmsg *)msg->message;
++	u8 addr = (amsg->cmd & ADDR_MASK) >> 4;
++	u8 op = amsg->cmd & OP_MASK;
+ 	unsigned long flags;
+ 	bool req_done = false;
+ 
+ 	local_irq_save(flags);
+ 
+-	/* Handle a timeout. Timeout packets seem to occur even after
+-	 * we've gotten a valid reply to a TALK, presumably because of
+-	 * autopolling.
++	/* Responses to Talk commands may be unsolicited as they are
++	 * produced when the IOP polls devices. They are mostly timeouts.
+ 	 */
+-
+-	if (amsg->flags & ADB_IOP_EXPLICIT) {
++	if (op == TALK && ((1 << addr) & autopoll_devs))
++		autopoll_addr = addr;
++
++	switch (amsg->flags & (ADB_IOP_EXPLICIT |
++			       ADB_IOP_AUTOPOLL |
++			       ADB_IOP_TIMEOUT)) {
++	case ADB_IOP_EXPLICIT:
++	case ADB_IOP_EXPLICIT | ADB_IOP_TIMEOUT:
+ 		if (adb_iop_state == awaiting_reply) {
+ 			struct adb_request *req = current_req;
+ 
+-			req->reply_len = amsg->count + 1;
+-			memcpy(req->reply, &amsg->cmd, req->reply_len);
++			if (req->reply_expected) {
++				req->reply_len = amsg->count + 1;
++				memcpy(req->reply, &amsg->cmd, req->reply_len);
++			}
+ 
+ 			req_done = true;
+ 		}
+-	} else if (!(amsg->flags & ADB_IOP_TIMEOUT)) {
+-		adb_input(&amsg->cmd, amsg->count + 1,
+-			  amsg->flags & ADB_IOP_AUTOPOLL);
++		break;
++	case ADB_IOP_AUTOPOLL:
++		if (((1 << addr) & autopoll_devs) &&
++		    amsg->cmd == ADB_READREG(addr, 0))
++			adb_input(&amsg->cmd, amsg->count + 1, 1);
++		break;
+ 	}
+-
+-	msg->reply[0] = autopoll_devs ? ADB_IOP_AUTOPOLL : 0;
++	msg->reply[0] = autopoll_addr ? ADB_IOP_AUTOPOLL : 0;
++	msg->reply[1] = 0;
++	msg->reply[2] = autopoll_addr ? ADB_READREG(autopoll_addr, 0) : 0;
+ 	iop_complete_message(msg);
+ 
+ 	if (req_done)
+@@ -233,6 +250,9 @@ static void adb_iop_set_ap_complete(struct iop_msg *msg)
+ 	struct adb_iopmsg *amsg = (struct adb_iopmsg *)msg->message;
+ 
+ 	autopoll_devs = (amsg->data[1] << 8) | amsg->data[0];
++	if (autopoll_devs & (1 << autopoll_addr))
++		return;
++	autopoll_addr = autopoll_devs ? (ffs(autopoll_devs) - 1) : 0;
+ }
+ 
+ static int adb_iop_autopoll(int devs)
+diff --git a/drivers/mailbox/arm_mhu_db.c b/drivers/mailbox/arm_mhu_db.c
+index 275efe4cca0c2..8eb66c4ecf5bf 100644
+--- a/drivers/mailbox/arm_mhu_db.c
++++ b/drivers/mailbox/arm_mhu_db.c
+@@ -180,7 +180,7 @@ static void mhu_db_shutdown(struct mbox_chan *chan)
+ 
+ 	/* Reset channel */
+ 	mhu_db_mbox_clear_irq(chan);
+-	kfree(chan->con_priv);
++	devm_kfree(mbox->dev, chan->con_priv);
+ 	chan->con_priv = NULL;
+ }
+ 
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index cd0478d44058b..5e306bba43751 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1600,6 +1600,7 @@ static int target_message(struct file *filp, struct dm_ioctl *param, size_t para
+ 
+ 	if (!argc) {
+ 		DMWARN("Empty message received.");
++		r = -EINVAL;
+ 		goto out_argv;
+ 	}
+ 
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index 4aaf4820b6f62..f0e64e76fd793 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -664,9 +664,27 @@ out:
+  * Takes the lock on the TOKEN lock resource so no other
+  * node can communicate while the operation is underway.
+  */
+-static int lock_token(struct md_cluster_info *cinfo, bool mddev_locked)
++static int lock_token(struct md_cluster_info *cinfo)
+ {
+-	int error, set_bit = 0;
++	int error;
++
++	error = dlm_lock_sync(cinfo->token_lockres, DLM_LOCK_EX);
++	if (error) {
++		pr_err("md-cluster(%s:%d): failed to get EX on TOKEN (%d)\n",
++				__func__, __LINE__, error);
++	} else {
++		/* Lock the receive sequence */
++		mutex_lock(&cinfo->recv_mutex);
++	}
++	return error;
++}
++
++/* lock_comm()
++ * Sets the MD_CLUSTER_SEND_LOCK bit to lock the send channel.
++ */
++static int lock_comm(struct md_cluster_info *cinfo, bool mddev_locked)
++{
++	int rv, set_bit = 0;
+ 	struct mddev *mddev = cinfo->mddev;
+ 
+ 	/*
+@@ -677,34 +695,19 @@ static int lock_token(struct md_cluster_info *cinfo, bool mddev_locked)
+ 	 */
+ 	if (mddev_locked && !test_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
+ 				      &cinfo->state)) {
+-		error = test_and_set_bit_lock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
++		rv = test_and_set_bit_lock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
+ 					      &cinfo->state);
+-		WARN_ON_ONCE(error);
++		WARN_ON_ONCE(rv);
+ 		md_wakeup_thread(mddev->thread);
+ 		set_bit = 1;
+ 	}
+-	error = dlm_lock_sync(cinfo->token_lockres, DLM_LOCK_EX);
+-	if (set_bit)
+-		clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
+ 
+-	if (error)
+-		pr_err("md-cluster(%s:%d): failed to get EX on TOKEN (%d)\n",
+-				__func__, __LINE__, error);
+-
+-	/* Lock the receive sequence */
+-	mutex_lock(&cinfo->recv_mutex);
+-	return error;
+-}
+-
+-/* lock_comm()
+- * Sets the MD_CLUSTER_SEND_LOCK bit to lock the send channel.
+- */
+-static int lock_comm(struct md_cluster_info *cinfo, bool mddev_locked)
+-{
+ 	wait_event(cinfo->wait,
+ 		   !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state));
+-
+-	return lock_token(cinfo, mddev_locked);
++	rv = lock_token(cinfo);
++	if (set_bit)
++		clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
++	return rv;
+ }
+ 
+ static void unlock_comm(struct md_cluster_info *cinfo)
+@@ -784,9 +787,11 @@ static int sendmsg(struct md_cluster_info *cinfo, struct cluster_msg *cmsg,
+ {
+ 	int ret;
+ 
+-	lock_comm(cinfo, mddev_locked);
+-	ret = __sendmsg(cinfo, cmsg);
+-	unlock_comm(cinfo);
++	ret = lock_comm(cinfo, mddev_locked);
++	if (!ret) {
++		ret = __sendmsg(cinfo, cmsg);
++		unlock_comm(cinfo);
++	}
+ 	return ret;
+ }
+ 
+@@ -1061,7 +1066,7 @@ static int metadata_update_start(struct mddev *mddev)
+ 		return 0;
+ 	}
+ 
+-	ret = lock_token(cinfo, 1);
++	ret = lock_token(cinfo);
+ 	clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
+ 	return ret;
+ }
+@@ -1255,7 +1260,10 @@ static void update_size(struct mddev *mddev, sector_t old_dev_sectors)
+ 	int raid_slot = -1;
+ 
+ 	md_update_sb(mddev, 1);
+-	lock_comm(cinfo, 1);
++	if (lock_comm(cinfo, 1)) {
++		pr_err("%s: lock_comm failed\n", __func__);
++		return;
++	}
+ 
+ 	memset(&cmsg, 0, sizeof(cmsg));
+ 	cmsg.type = cpu_to_le32(METADATA_UPDATED);
+@@ -1407,7 +1415,8 @@ static int add_new_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 	cmsg.type = cpu_to_le32(NEWDISK);
+ 	memcpy(cmsg.uuid, uuid, 16);
+ 	cmsg.raid_slot = cpu_to_le32(rdev->desc_nr);
+-	lock_comm(cinfo, 1);
++	if (lock_comm(cinfo, 1))
++		return -EAGAIN;
+ 	ret = __sendmsg(cinfo, &cmsg);
+ 	if (ret) {
+ 		unlock_comm(cinfo);
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 4136bd8142894..3be74cf3635fe 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -6948,8 +6948,10 @@ static int hot_remove_disk(struct mddev *mddev, dev_t dev)
+ 		goto busy;
+ 
+ kick_rdev:
+-	if (mddev_is_clustered(mddev))
+-		md_cluster_ops->remove_disk(mddev, rdev);
++	if (mddev_is_clustered(mddev)) {
++		if (md_cluster_ops->remove_disk(mddev, rdev))
++			goto busy;
++	}
+ 
+ 	md_kick_rdev_from_array(rdev);
+ 	set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
+@@ -7278,6 +7280,7 @@ static int update_raid_disks(struct mddev *mddev, int raid_disks)
+ 		return -EINVAL;
+ 	if (mddev->sync_thread ||
+ 	    test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) ||
++	    test_bit(MD_RESYNCING_REMOTE, &mddev->recovery) ||
+ 	    mddev->reshape_position != MaxSector)
+ 		return -EBUSY;
+ 
+@@ -9645,8 +9648,11 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
+ 		}
+ 	}
+ 
+-	if (mddev->raid_disks != le32_to_cpu(sb->raid_disks))
+-		update_raid_disks(mddev, le32_to_cpu(sb->raid_disks));
++	if (mddev->raid_disks != le32_to_cpu(sb->raid_disks)) {
++		ret = update_raid_disks(mddev, le32_to_cpu(sb->raid_disks));
++		if (ret)
++			pr_warn("md: updating array disks failed. %d\n", ret);
++	}
+ 
+ 	/*
+ 	 * Since mddev->delta_disks has already updated in update_raid_disks,
+diff --git a/drivers/media/common/siano/smsdvb-main.c b/drivers/media/common/siano/smsdvb-main.c
+index 88f90dfd368b1..ae17407e477a4 100644
+--- a/drivers/media/common/siano/smsdvb-main.c
++++ b/drivers/media/common/siano/smsdvb-main.c
+@@ -1169,12 +1169,15 @@ static int smsdvb_hotplug(struct smscore_device_t *coredev,
+ 	rc = dvb_create_media_graph(&client->adapter, true);
+ 	if (rc < 0) {
+ 		pr_err("dvb_create_media_graph failed %d\n", rc);
+-		goto client_error;
++		goto media_graph_error;
+ 	}
+ 
+ 	pr_info("DVB interface registered.\n");
+ 	return 0;
+ 
++media_graph_error:
++	smsdvb_debugfs_release(client);
++
+ client_error:
+ 	dvb_unregister_frontend(&client->frontend);
+ 
+diff --git a/drivers/media/i2c/imx214.c b/drivers/media/i2c/imx214.c
+index 1ef5af9a8c8bc..cee1a4817af99 100644
+--- a/drivers/media/i2c/imx214.c
++++ b/drivers/media/i2c/imx214.c
+@@ -786,7 +786,7 @@ static int imx214_s_stream(struct v4l2_subdev *subdev, int enable)
+ 		if (ret < 0)
+ 			goto err_rpm_put;
+ 	} else {
+-		ret = imx214_start_streaming(imx214);
++		ret = imx214_stop_streaming(imx214);
+ 		if (ret < 0)
+ 			goto err_rpm_put;
+ 		pm_runtime_put(imx214->dev);
+diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
+index 1cee45e353554..0ae66091a6962 100644
+--- a/drivers/media/i2c/imx219.c
++++ b/drivers/media/i2c/imx219.c
+@@ -473,8 +473,8 @@ static const struct imx219_mode supported_modes[] = {
+ 		.width = 3280,
+ 		.height = 2464,
+ 		.crop = {
+-			.left = 0,
+-			.top = 0,
++			.left = IMX219_PIXEL_ARRAY_LEFT,
++			.top = IMX219_PIXEL_ARRAY_TOP,
+ 			.width = 3280,
+ 			.height = 2464
+ 		},
+@@ -489,8 +489,8 @@ static const struct imx219_mode supported_modes[] = {
+ 		.width = 1920,
+ 		.height = 1080,
+ 		.crop = {
+-			.left = 680,
+-			.top = 692,
++			.left = 688,
++			.top = 700,
+ 			.width = 1920,
+ 			.height = 1080
+ 		},
+@@ -505,8 +505,8 @@ static const struct imx219_mode supported_modes[] = {
+ 		.width = 1640,
+ 		.height = 1232,
+ 		.crop = {
+-			.left = 0,
+-			.top = 0,
++			.left = IMX219_PIXEL_ARRAY_LEFT,
++			.top = IMX219_PIXEL_ARRAY_TOP,
+ 			.width = 3280,
+ 			.height = 2464
+ 		},
+@@ -521,8 +521,8 @@ static const struct imx219_mode supported_modes[] = {
+ 		.width = 640,
+ 		.height = 480,
+ 		.crop = {
+-			.left = 1000,
+-			.top = 752,
++			.left = 1008,
++			.top = 760,
+ 			.width = 1280,
+ 			.height = 960
+ 		},
+@@ -1008,6 +1008,7 @@ static int imx219_get_selection(struct v4l2_subdev *sd,
+ 		return 0;
+ 
+ 	case V4L2_SEL_TGT_CROP_DEFAULT:
++	case V4L2_SEL_TGT_CROP_BOUNDS:
+ 		sel->r.top = IMX219_PIXEL_ARRAY_TOP;
+ 		sel->r.left = IMX219_PIXEL_ARRAY_LEFT;
+ 		sel->r.width = IMX219_PIXEL_ARRAY_WIDTH;
+diff --git a/drivers/media/i2c/max2175.c b/drivers/media/i2c/max2175.c
+index 03b4ed3a61b83..661208c9bfc5d 100644
+--- a/drivers/media/i2c/max2175.c
++++ b/drivers/media/i2c/max2175.c
+@@ -503,7 +503,7 @@ static void max2175_set_bbfilter(struct max2175 *ctx)
+ 	}
+ }
+ 
+-static bool max2175_set_csm_mode(struct max2175 *ctx,
++static int max2175_set_csm_mode(struct max2175 *ctx,
+ 			  enum max2175_csm_mode new_mode)
+ {
+ 	int ret = max2175_poll_csm_ready(ctx);
+diff --git a/drivers/media/i2c/max9271.c b/drivers/media/i2c/max9271.c
+index 0f6f7a092a463..c247db569bab0 100644
+--- a/drivers/media/i2c/max9271.c
++++ b/drivers/media/i2c/max9271.c
+@@ -223,12 +223,12 @@ int max9271_enable_gpios(struct max9271_device *dev, u8 gpio_mask)
+ {
+ 	int ret;
+ 
+-	ret = max9271_read(dev, 0x0f);
++	ret = max9271_read(dev, 0x0e);
+ 	if (ret < 0)
+ 		return 0;
+ 
+ 	/* BIT(0) reserved: GPO is always enabled. */
+-	ret |= gpio_mask | BIT(0);
++	ret |= (gpio_mask & ~BIT(0));
+ 	ret = max9271_write(dev, 0x0e, ret);
+ 	if (ret < 0) {
+ 		dev_err(&dev->client->dev, "Failed to enable gpio (%d)\n", ret);
+@@ -245,12 +245,12 @@ int max9271_disable_gpios(struct max9271_device *dev, u8 gpio_mask)
+ {
+ 	int ret;
+ 
+-	ret = max9271_read(dev, 0x0f);
++	ret = max9271_read(dev, 0x0e);
+ 	if (ret < 0)
+ 		return 0;
+ 
+ 	/* BIT(0) reserved: GPO cannot be disabled */
+-	ret &= (~gpio_mask | BIT(0));
++	ret &= ~(gpio_mask | BIT(0));
+ 	ret = max9271_write(dev, 0x0e, ret);
+ 	if (ret < 0) {
+ 		dev_err(&dev->client->dev, "Failed to disable gpio (%d)\n", ret);
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index 8d0254d0e5ea7..8f0812e859012 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -1216,20 +1216,6 @@ static int ov5640_set_autogain(struct ov5640_dev *sensor, bool on)
+ 			      BIT(1), on ? 0 : BIT(1));
+ }
+ 
+-static int ov5640_set_stream_bt656(struct ov5640_dev *sensor, bool on)
+-{
+-	int ret;
+-
+-	ret = ov5640_write_reg(sensor, OV5640_REG_CCIR656_CTRL00,
+-			       on ? 0x1 : 0x00);
+-	if (ret)
+-		return ret;
+-
+-	return ov5640_write_reg(sensor, OV5640_REG_SYS_CTRL0, on ?
+-				OV5640_REG_SYS_CTRL0_SW_PWUP :
+-				OV5640_REG_SYS_CTRL0_SW_PWDN);
+-}
+-
+ static int ov5640_set_stream_dvp(struct ov5640_dev *sensor, bool on)
+ {
+ 	return ov5640_write_reg(sensor, OV5640_REG_SYS_CTRL0, on ?
+@@ -1994,13 +1980,13 @@ static int ov5640_set_power_mipi(struct ov5640_dev *sensor, bool on)
+ static int ov5640_set_power_dvp(struct ov5640_dev *sensor, bool on)
+ {
+ 	unsigned int flags = sensor->ep.bus.parallel.flags;
+-	u8 pclk_pol = 0;
+-	u8 hsync_pol = 0;
+-	u8 vsync_pol = 0;
++	bool bt656 = sensor->ep.bus_type == V4L2_MBUS_BT656;
++	u8 polarities = 0;
+ 	int ret;
+ 
+ 	if (!on) {
+ 		/* Reset settings to their default values. */
++		ov5640_write_reg(sensor, OV5640_REG_CCIR656_CTRL00, 0x00);
+ 		ov5640_write_reg(sensor, OV5640_REG_IO_MIPI_CTRL00, 0x58);
+ 		ov5640_write_reg(sensor, OV5640_REG_POLARITY_CTRL00, 0x20);
+ 		ov5640_write_reg(sensor, OV5640_REG_PAD_OUTPUT_ENABLE01, 0x00);
+@@ -2024,7 +2010,35 @@ static int ov5640_set_power_dvp(struct ov5640_dev *sensor, bool on)
+ 	 * - VSYNC:	active high
+ 	 * - HREF:	active low
+ 	 * - PCLK:	active low
++	 *
++	 * VSYNC & HREF are not configured if BT656 bus mode is selected
+ 	 */
++
++	/*
++	 * BT656 embedded synchronization configuration
++	 *
++	 * CCIR656 CTRL00
++	 * - [7]:	SYNC code selection (0: auto generate sync code,
++	 *		1: sync code from regs 0x4732-0x4735)
++	 * - [6]:	f value in CCIR656 SYNC code when fixed f value
++	 * - [5]:	Fixed f value
++	 * - [4:3]:	Blank toggle data options (00: data=1'h040/1'h200,
++	 *		01: data from regs 0x4736-0x4738, 10: always keep 0)
++	 * - [1]:	Clip data disable
++	 * - [0]:	CCIR656 mode enable
++	 *
++	 * Default CCIR656 SAV/EAV mode with default codes
++	 * SAV=0xff000080 & EAV=0xff00009d is enabled here with settings:
++	 * - CCIR656 mode enable
++	 * - auto generation of sync codes
++	 * - blank toggle data 1'h040/1'h200
++	 * - clip reserved data (0x00 & 0xff changed to 0x01 & 0xfe)
++	 */
++	ret = ov5640_write_reg(sensor, OV5640_REG_CCIR656_CTRL00,
++			       bt656 ? 0x01 : 0x00);
++	if (ret)
++		return ret;
++
+ 	/*
+ 	 * configure parallel port control lines polarity
+ 	 *
+@@ -2035,29 +2049,26 @@ static int ov5640_set_power_dvp(struct ov5640_dev *sensor, bool on)
+ 	 *		datasheet and hardware, 0 is active high
+ 	 *		and 1 is active low...)
+ 	 */
+-	if (sensor->ep.bus_type == V4L2_MBUS_PARALLEL) {
+-		if (flags & V4L2_MBUS_PCLK_SAMPLE_RISING)
+-			pclk_pol = 1;
++	if (!bt656) {
+ 		if (flags & V4L2_MBUS_HSYNC_ACTIVE_HIGH)
+-			hsync_pol = 1;
++			polarities |= BIT(1);
+ 		if (flags & V4L2_MBUS_VSYNC_ACTIVE_LOW)
+-			vsync_pol = 1;
+-
+-		ret = ov5640_write_reg(sensor, OV5640_REG_POLARITY_CTRL00,
+-				       (pclk_pol << 5) | (hsync_pol << 1) |
+-				       vsync_pol);
+-
+-		if (ret)
+-			return ret;
++			polarities |= BIT(0);
+ 	}
++	if (flags & V4L2_MBUS_PCLK_SAMPLE_RISING)
++		polarities |= BIT(5);
++
++	ret = ov5640_write_reg(sensor, OV5640_REG_POLARITY_CTRL00, polarities);
++	if (ret)
++		return ret;
+ 
+ 	/*
+-	 * powerdown MIPI TX/RX PHY & disable MIPI
++	 * powerdown MIPI TX/RX PHY & enable DVP
+ 	 *
+ 	 * MIPI CONTROL 00
+-	 * 4:	 PWDN PHY TX
+-	 * 3:	 PWDN PHY RX
+-	 * 2:	 MIPI enable
++	 * [4] = 1	: Power down MIPI HS Tx
++	 * [3] = 1	: Power down MIPI LS Rx
++	 * [2] = 0	: DVP enable (MIPI disable)
+ 	 */
+ 	ret = ov5640_write_reg(sensor, OV5640_REG_IO_MIPI_CTRL00, 0x18);
+ 	if (ret)
+@@ -2074,8 +2085,7 @@ static int ov5640_set_power_dvp(struct ov5640_dev *sensor, bool on)
+ 	 * - [3:0]:	D[9:6] output enable
+ 	 */
+ 	ret = ov5640_write_reg(sensor, OV5640_REG_PAD_OUTPUT_ENABLE01,
+-			       sensor->ep.bus_type == V4L2_MBUS_PARALLEL ?
+-			       0x7f : 0x1f);
++			       bt656 ? 0x1f : 0x7f);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -2925,8 +2935,6 @@ static int ov5640_s_stream(struct v4l2_subdev *sd, int enable)
+ 
+ 		if (sensor->ep.bus_type == V4L2_MBUS_CSI2_DPHY)
+ 			ret = ov5640_set_stream_mipi(sensor, enable);
+-		else if (sensor->ep.bus_type == V4L2_MBUS_BT656)
+-			ret = ov5640_set_stream_bt656(sensor, enable);
+ 		else
+ 			ret = ov5640_set_stream_dvp(sensor, enable);
+ 
+diff --git a/drivers/media/i2c/rdacm20.c b/drivers/media/i2c/rdacm20.c
+index 1ed928c4ca70f..16bcb764b0e0d 100644
+--- a/drivers/media/i2c/rdacm20.c
++++ b/drivers/media/i2c/rdacm20.c
+@@ -487,9 +487,18 @@ static int rdacm20_initialize(struct rdacm20_device *dev)
+ 	 * Reset the sensor by cycling the OV10635 reset signal connected to the
+ 	 * MAX9271 GPIO1 and verify communication with the OV10635.
+ 	 */
+-	max9271_clear_gpios(dev->serializer, MAX9271_GPIO1OUT);
++	ret = max9271_enable_gpios(dev->serializer, MAX9271_GPIO1OUT);
++	if (ret)
++		return ret;
++
++	ret = max9271_clear_gpios(dev->serializer, MAX9271_GPIO1OUT);
++	if (ret)
++		return ret;
+ 	usleep_range(10000, 15000);
+-	max9271_set_gpios(dev->serializer, MAX9271_GPIO1OUT);
++
++	ret = max9271_set_gpios(dev->serializer, MAX9271_GPIO1OUT);
++	if (ret)
++		return ret;
+ 	usleep_range(10000, 15000);
+ 
+ again:
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index 7d9401219a3ac..3b3221fd3fe8f 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -2082,6 +2082,7 @@ static int tvp5150_parse_dt(struct tvp5150 *decoder, struct device_node *np)
+ 
+ 	ep_np = of_graph_get_endpoint_by_regs(np, TVP5150_PAD_VID_OUT, 0);
+ 	if (!ep_np) {
++		ret = -EINVAL;
+ 		dev_err(dev, "Error no output endpoint available\n");
+ 		goto err_free;
+ 	}
+diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2.c b/drivers/media/pci/intel/ipu3/ipu3-cio2.c
+index 4e598e937dfe2..1fcd131482e0e 100644
+--- a/drivers/media/pci/intel/ipu3/ipu3-cio2.c
++++ b/drivers/media/pci/intel/ipu3/ipu3-cio2.c
+@@ -791,6 +791,7 @@ static void cio2_vb2_return_all_buffers(struct cio2_queue *q,
+ 			atomic_dec(&q->bufs_queued);
+ 			vb2_buffer_done(&q->bufs[i]->vbb.vb2_buf,
+ 					state);
++			q->bufs[i] = NULL;
+ 		}
+ 	}
+ }
+@@ -1232,29 +1233,15 @@ static int cio2_subdev_get_fmt(struct v4l2_subdev *sd,
+ 			       struct v4l2_subdev_format *fmt)
+ {
+ 	struct cio2_queue *q = container_of(sd, struct cio2_queue, subdev);
+-	struct v4l2_subdev_format format;
+-	int ret;
+-
+-	if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
+-		fmt->format = *v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
+-		return 0;
+-	}
+ 
+-	if (fmt->pad == CIO2_PAD_SINK) {
+-		format.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+-		ret = v4l2_subdev_call(sd, pad, get_fmt, NULL,
+-				       &format);
++	mutex_lock(&q->subdev_lock);
+ 
+-		if (ret)
+-			return ret;
+-		/* update colorspace etc */
+-		q->subdev_fmt.colorspace = format.format.colorspace;
+-		q->subdev_fmt.ycbcr_enc = format.format.ycbcr_enc;
+-		q->subdev_fmt.quantization = format.format.quantization;
+-		q->subdev_fmt.xfer_func = format.format.xfer_func;
+-	}
++	if (fmt->which == V4L2_SUBDEV_FORMAT_TRY)
++		fmt->format = *v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
++	else
++		fmt->format = q->subdev_fmt;
+ 
+-	fmt->format = q->subdev_fmt;
++	mutex_unlock(&q->subdev_lock);
+ 
+ 	return 0;
+ }
+@@ -1271,6 +1258,9 @@ static int cio2_subdev_set_fmt(struct v4l2_subdev *sd,
+ 			       struct v4l2_subdev_format *fmt)
+ {
+ 	struct cio2_queue *q = container_of(sd, struct cio2_queue, subdev);
++	struct v4l2_mbus_framefmt *mbus;
++	u32 mbus_code = fmt->format.code;
++	unsigned int i;
+ 
+ 	/*
+ 	 * Only allow setting sink pad format;
+@@ -1279,16 +1269,29 @@ static int cio2_subdev_set_fmt(struct v4l2_subdev *sd,
+ 	if (fmt->pad == CIO2_PAD_SOURCE)
+ 		return cio2_subdev_get_fmt(sd, cfg, fmt);
+ 
+-	if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
+-		*v4l2_subdev_get_try_format(sd, cfg, fmt->pad) = fmt->format;
+-	} else {
+-		/* It's the sink, allow changing frame size */
+-		q->subdev_fmt.width = fmt->format.width;
+-		q->subdev_fmt.height = fmt->format.height;
+-		q->subdev_fmt.code = fmt->format.code;
+-		fmt->format = q->subdev_fmt;
++	if (fmt->which == V4L2_SUBDEV_FORMAT_TRY)
++		mbus = v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
++	else
++		mbus = &q->subdev_fmt;
++
++	fmt->format.code = formats[0].mbus_code;
++
++	for (i = 0; i < ARRAY_SIZE(formats); i++) {
++		if (formats[i].mbus_code == fmt->format.code) {
++			fmt->format.code = mbus_code;
++			break;
++		}
+ 	}
+ 
++	fmt->format.width = min_t(u32, fmt->format.width, CIO2_IMAGE_MAX_WIDTH);
++	fmt->format.height = min_t(u32, fmt->format.height,
++				   CIO2_IMAGE_MAX_LENGTH);
++	fmt->format.field = V4L2_FIELD_NONE;
++
++	mutex_lock(&q->subdev_lock);
++	*mbus = fmt->format;
++	mutex_unlock(&q->subdev_lock);
++
+ 	return 0;
+ }
+ 
+@@ -1547,6 +1550,7 @@ static int cio2_queue_init(struct cio2_device *cio2, struct cio2_queue *q)
+ 
+ 	/* Initialize miscellaneous variables */
+ 	mutex_init(&q->lock);
++	mutex_init(&q->subdev_lock);
+ 
+ 	/* Initialize formats to default values */
+ 	fmt = &q->subdev_fmt;
+@@ -1663,6 +1667,7 @@ fail_vdev_media_entity:
+ fail_subdev_media_entity:
+ 	cio2_fbpt_exit(q, &cio2->pci_dev->dev);
+ fail_fbpt:
++	mutex_destroy(&q->subdev_lock);
+ 	mutex_destroy(&q->lock);
+ 
+ 	return r;
+@@ -1675,6 +1680,7 @@ static void cio2_queue_exit(struct cio2_device *cio2, struct cio2_queue *q)
+ 	v4l2_device_unregister_subdev(&q->subdev);
+ 	media_entity_cleanup(&q->subdev.entity);
+ 	cio2_fbpt_exit(q, &cio2->pci_dev->dev);
++	mutex_destroy(&q->subdev_lock);
+ 	mutex_destroy(&q->lock);
+ }
+ 
+diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2.h b/drivers/media/pci/intel/ipu3/ipu3-cio2.h
+index 549b08f88f0c7..146492383aa5b 100644
+--- a/drivers/media/pci/intel/ipu3/ipu3-cio2.h
++++ b/drivers/media/pci/intel/ipu3/ipu3-cio2.h
+@@ -335,6 +335,7 @@ struct cio2_queue {
+ 
+ 	/* Subdev, /dev/v4l-subdevX */
+ 	struct v4l2_subdev subdev;
++	struct mutex subdev_lock; /* Serialise acces to subdev_fmt field */
+ 	struct media_pad subdev_pads[CIO2_PADS];
+ 	struct v4l2_mbus_framefmt subdev_fmt;
+ 	atomic_t frame_sequence;
+diff --git a/drivers/media/pci/netup_unidvb/netup_unidvb_spi.c b/drivers/media/pci/netup_unidvb/netup_unidvb_spi.c
+index d4f12c250f91a..526042d8afae5 100644
+--- a/drivers/media/pci/netup_unidvb/netup_unidvb_spi.c
++++ b/drivers/media/pci/netup_unidvb/netup_unidvb_spi.c
+@@ -175,7 +175,7 @@ int netup_spi_init(struct netup_unidvb_dev *ndev)
+ 	struct spi_master *master;
+ 	struct netup_spi *nspi;
+ 
+-	master = spi_alloc_master(&ndev->pci_dev->dev,
++	master = devm_spi_alloc_master(&ndev->pci_dev->dev,
+ 		sizeof(struct netup_spi));
+ 	if (!master) {
+ 		dev_err(&ndev->pci_dev->dev,
+@@ -208,6 +208,7 @@ int netup_spi_init(struct netup_unidvb_dev *ndev)
+ 		ndev->pci_slot,
+ 		ndev->pci_func);
+ 	if (!spi_new_device(master, &netup_spi_board)) {
++		spi_unregister_master(master);
+ 		ndev->spi = NULL;
+ 		dev_err(&ndev->pci_dev->dev,
+ 			"%s(): unable to create SPI device\n", __func__);
+@@ -226,13 +227,13 @@ void netup_spi_release(struct netup_unidvb_dev *ndev)
+ 	if (!spi)
+ 		return;
+ 
++	spi_unregister_master(spi->master);
+ 	spin_lock_irqsave(&spi->lock, flags);
+ 	reg = readw(&spi->regs->control_stat);
+ 	writew(reg | NETUP_SPI_CTRL_IRQ, &spi->regs->control_stat);
+ 	reg = readw(&spi->regs->control_stat);
+ 	writew(reg & ~NETUP_SPI_CTRL_IMASK, &spi->regs->control_stat);
+ 	spin_unlock_irqrestore(&spi->lock, flags);
+-	spi_unregister_master(spi->master);
+ 	ndev->spi = NULL;
+ }
+ 
+diff --git a/drivers/media/pci/saa7146/mxb.c b/drivers/media/pci/saa7146/mxb.c
+index 129a1f8ebe1ad..73fc901ecf3db 100644
+--- a/drivers/media/pci/saa7146/mxb.c
++++ b/drivers/media/pci/saa7146/mxb.c
+@@ -641,16 +641,17 @@ static int vidioc_s_audio(struct file *file, void *fh, const struct v4l2_audio *
+ 	struct mxb *mxb = (struct mxb *)dev->ext_priv;
+ 
+ 	DEB_D("VIDIOC_S_AUDIO %d\n", a->index);
+-	if (mxb_inputs[mxb->cur_input].audioset & (1 << a->index)) {
+-		if (mxb->cur_audinput != a->index) {
+-			mxb->cur_audinput = a->index;
+-			tea6420_route(mxb, a->index);
+-			if (mxb->cur_audinput == 0)
+-				mxb_update_audmode(mxb);
+-		}
+-		return 0;
++	if (a->index >= 32 ||
++	    !(mxb_inputs[mxb->cur_input].audioset & (1 << a->index)))
++		return -EINVAL;
++
++	if (mxb->cur_audinput != a->index) {
++		mxb->cur_audinput = a->index;
++		tea6420_route(mxb, a->index);
++		if (mxb->cur_audinput == 0)
++			mxb_update_audmode(mxb);
+ 	}
+-	return -EINVAL;
++	return 0;
+ }
+ 
+ #ifdef CONFIG_VIDEO_ADV_DEBUG
+diff --git a/drivers/media/pci/solo6x10/solo6x10-g723.c b/drivers/media/pci/solo6x10/solo6x10-g723.c
+index 906ce86437ae3..d137b94869d82 100644
+--- a/drivers/media/pci/solo6x10/solo6x10-g723.c
++++ b/drivers/media/pci/solo6x10/solo6x10-g723.c
+@@ -385,7 +385,7 @@ int solo_g723_init(struct solo_dev *solo_dev)
+ 
+ 	ret = snd_ctl_add(card, snd_ctl_new1(&kctl, solo_dev));
+ 	if (ret < 0)
+-		return ret;
++		goto snd_error;
+ 
+ 	ret = solo_snd_pcm_init(solo_dev);
+ 	if (ret < 0)
+diff --git a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+index 227245ccaedc7..88a23bce569d9 100644
+--- a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+@@ -1306,6 +1306,7 @@ static int mtk_jpeg_clk_init(struct mtk_jpeg_dev *jpeg)
+ 				jpeg->variant->clks);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to get jpeg clock:%d\n", ret);
++		put_device(&pdev->dev);
+ 		return ret;
+ 	}
+ 
+@@ -1331,6 +1332,12 @@ static void mtk_jpeg_job_timeout_work(struct work_struct *work)
+ 	v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_ERROR);
+ 	v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx);
+ }
++
++static inline void mtk_jpeg_clk_release(struct mtk_jpeg_dev *jpeg)
++{
++	put_device(jpeg->larb);
++}
++
+ static int mtk_jpeg_probe(struct platform_device *pdev)
+ {
+ 	struct mtk_jpeg_dev *jpeg;
+@@ -1435,6 +1442,7 @@ err_m2m_init:
+ 	v4l2_device_unregister(&jpeg->v4l2_dev);
+ 
+ err_dev_register:
++	mtk_jpeg_clk_release(jpeg);
+ 
+ err_clk_init:
+ 
+@@ -1452,6 +1460,7 @@ static int mtk_jpeg_remove(struct platform_device *pdev)
+ 	video_device_release(jpeg->vdev);
+ 	v4l2_m2m_release(jpeg->m2m_dev);
+ 	v4l2_device_unregister(&jpeg->v4l2_dev);
++	mtk_jpeg_clk_release(jpeg);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c
+index 36dfe3fc056a4..ddee7046ce422 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c
+@@ -47,11 +47,14 @@ int mtk_vcodec_init_dec_pm(struct mtk_vcodec_dev *mtkdev)
+ 		dec_clk->clk_info = devm_kcalloc(&pdev->dev,
+ 			dec_clk->clk_num, sizeof(*clk_info),
+ 			GFP_KERNEL);
+-		if (!dec_clk->clk_info)
+-			return -ENOMEM;
++		if (!dec_clk->clk_info) {
++			ret = -ENOMEM;
++			goto put_device;
++		}
+ 	} else {
+ 		mtk_v4l2_err("Failed to get vdec clock count");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_device;
+ 	}
+ 
+ 	for (i = 0; i < dec_clk->clk_num; i++) {
+@@ -60,25 +63,29 @@ int mtk_vcodec_init_dec_pm(struct mtk_vcodec_dev *mtkdev)
+ 			"clock-names", i, &clk_info->clk_name);
+ 		if (ret) {
+ 			mtk_v4l2_err("Failed to get clock name id = %d", i);
+-			return ret;
++			goto put_device;
+ 		}
+ 		clk_info->vcodec_clk = devm_clk_get(&pdev->dev,
+ 			clk_info->clk_name);
+ 		if (IS_ERR(clk_info->vcodec_clk)) {
+ 			mtk_v4l2_err("devm_clk_get (%d)%s fail", i,
+ 				clk_info->clk_name);
+-			return PTR_ERR(clk_info->vcodec_clk);
++			ret = PTR_ERR(clk_info->vcodec_clk);
++			goto put_device;
+ 		}
+ 	}
+ 
+ 	pm_runtime_enable(&pdev->dev);
+-
++	return 0;
++put_device:
++	put_device(pm->larbvdec);
+ 	return ret;
+ }
+ 
+ void mtk_vcodec_release_dec_pm(struct mtk_vcodec_dev *dev)
+ {
+ 	pm_runtime_disable(dev->pm.dev);
++	put_device(dev->pm.larbvdec);
+ }
+ 
+ void mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm)
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_pm.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_pm.c
+index ee22902aaa71c..1a047c25679fa 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_pm.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_pm.c
+@@ -47,14 +47,16 @@ int mtk_vcodec_init_enc_pm(struct mtk_vcodec_dev *mtkdev)
+ 	node = of_parse_phandle(dev->of_node, "mediatek,larb", 1);
+ 	if (!node) {
+ 		mtk_v4l2_err("no mediatek,larb found");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto put_larbvenc;
+ 	}
+ 
+ 	pdev = of_find_device_by_node(node);
+ 	of_node_put(node);
+ 	if (!pdev) {
+ 		mtk_v4l2_err("no mediatek,larb device found");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto put_larbvenc;
+ 	}
+ 
+ 	pm->larbvenclt = &pdev->dev;
+@@ -67,11 +69,14 @@ int mtk_vcodec_init_enc_pm(struct mtk_vcodec_dev *mtkdev)
+ 		enc_clk->clk_info = devm_kcalloc(&pdev->dev,
+ 			enc_clk->clk_num, sizeof(*clk_info),
+ 			GFP_KERNEL);
+-		if (!enc_clk->clk_info)
+-			return -ENOMEM;
++		if (!enc_clk->clk_info) {
++			ret = -ENOMEM;
++			goto put_larbvenclt;
++		}
+ 	} else {
+ 		mtk_v4l2_err("Failed to get venc clock count");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_larbvenclt;
+ 	}
+ 
+ 	for (i = 0; i < enc_clk->clk_num; i++) {
+@@ -80,17 +85,24 @@ int mtk_vcodec_init_enc_pm(struct mtk_vcodec_dev *mtkdev)
+ 			"clock-names", i, &clk_info->clk_name);
+ 		if (ret) {
+ 			mtk_v4l2_err("venc failed to get clk name %d", i);
+-			return ret;
++			goto put_larbvenclt;
+ 		}
+ 		clk_info->vcodec_clk = devm_clk_get(&pdev->dev,
+ 			clk_info->clk_name);
+ 		if (IS_ERR(clk_info->vcodec_clk)) {
+ 			mtk_v4l2_err("venc devm_clk_get (%d)%s fail", i,
+ 				clk_info->clk_name);
+-			return PTR_ERR(clk_info->vcodec_clk);
++			ret = PTR_ERR(clk_info->vcodec_clk);
++			goto put_larbvenclt;
+ 		}
+ 	}
+ 
++	return 0;
++
++put_larbvenclt:
++	put_device(pm->larbvenclt);
++put_larbvenc:
++	put_device(pm->larbvenc);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index 6103aaf43987b..d5bfd6fff85b4 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -355,12 +355,26 @@ static __maybe_unused int venus_runtime_suspend(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
++	if (pm_ops->core_power) {
++		ret = pm_ops->core_power(dev, POWER_OFF);
++		if (ret)
++			return ret;
++	}
++
+ 	ret = icc_set_bw(core->cpucfg_path, 0, 0);
+ 	if (ret)
+-		return ret;
++		goto err_cpucfg_path;
+ 
+-	if (pm_ops->core_power)
+-		ret = pm_ops->core_power(dev, POWER_OFF);
++	ret = icc_set_bw(core->video_path, 0, 0);
++	if (ret)
++		goto err_video_path;
++
++	return ret;
++
++err_video_path:
++	icc_set_bw(core->cpucfg_path, kbps_to_icc(1000), 0);
++err_cpucfg_path:
++	pm_ops->core_power(dev, POWER_ON);
+ 
+ 	return ret;
+ }
+@@ -371,16 +385,20 @@ static __maybe_unused int venus_runtime_resume(struct device *dev)
+ 	const struct venus_pm_ops *pm_ops = core->pm_ops;
+ 	int ret;
+ 
++	ret = icc_set_bw(core->video_path, kbps_to_icc(20000), 0);
++	if (ret)
++		return ret;
++
++	ret = icc_set_bw(core->cpucfg_path, kbps_to_icc(1000), 0);
++	if (ret)
++		return ret;
++
+ 	if (pm_ops->core_power) {
+ 		ret = pm_ops->core_power(dev, POWER_ON);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
+-	ret = icc_set_bw(core->cpucfg_path, 0, kbps_to_icc(1000));
+-	if (ret)
+-		return ret;
+-
+ 	return hfi_core_resume(core, false);
+ }
+ 
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
+index a9538c2cc3c9d..2946547a0df4a 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.c
++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
+@@ -212,6 +212,16 @@ static int load_scale_bw(struct venus_core *core)
+ 	}
+ 	mutex_unlock(&core->lock);
+ 
++	/*
++	 * keep minimum bandwidth vote for "video-mem" path,
++	 * so that clks can be disabled during vdec_session_release().
++	 * Actual bandwidth drop will be done during device supend
++	 * so that device can power down without any warnings.
++	 */
++
++	if (!total_avg && !total_peak)
++		total_avg = kbps_to_icc(1000);
++
+ 	dev_dbg(core->dev, VDBGL "total: avg_bw: %u, peak_bw: %u\n",
+ 		total_avg, total_peak);
+ 
+diff --git a/drivers/media/rc/sunxi-cir.c b/drivers/media/rc/sunxi-cir.c
+index ddee6ee37bab1..4afc5895bee74 100644
+--- a/drivers/media/rc/sunxi-cir.c
++++ b/drivers/media/rc/sunxi-cir.c
+@@ -137,6 +137,8 @@ static irqreturn_t sunxi_ir_irq(int irqno, void *dev_id)
+ 	} else if (status & REG_RXSTA_RPE) {
+ 		ir_raw_event_set_idle(ir->rc, true);
+ 		ir_raw_event_handle(ir->rc);
++	} else {
++		ir_raw_event_handle(ir->rc);
+ 	}
+ 
+ 	spin_unlock(&ir->ir_lock);
+diff --git a/drivers/media/usb/gspca/gspca.c b/drivers/media/usb/gspca/gspca.c
+index c295f642d352c..158c8e28ed2cc 100644
+--- a/drivers/media/usb/gspca/gspca.c
++++ b/drivers/media/usb/gspca/gspca.c
+@@ -1575,6 +1575,7 @@ out:
+ 		input_unregister_device(gspca_dev->input_dev);
+ #endif
+ 	v4l2_ctrl_handler_free(gspca_dev->vdev.ctrl_handler);
++	v4l2_device_unregister(&gspca_dev->v4l2_dev);
+ 	kfree(gspca_dev->usb_buf);
+ 	kfree(gspca_dev);
+ 	return ret;
+diff --git a/drivers/media/usb/tm6000/tm6000-video.c b/drivers/media/usb/tm6000/tm6000-video.c
+index bfba06ea60e9d..2df736c029d6e 100644
+--- a/drivers/media/usb/tm6000/tm6000-video.c
++++ b/drivers/media/usb/tm6000/tm6000-video.c
+@@ -461,11 +461,12 @@ static int tm6000_alloc_urb_buffers(struct tm6000_core *dev)
+ 	if (dev->urb_buffer)
+ 		return 0;
+ 
+-	dev->urb_buffer = kmalloc_array(num_bufs, sizeof(void *), GFP_KERNEL);
++	dev->urb_buffer = kmalloc_array(num_bufs, sizeof(*dev->urb_buffer),
++					GFP_KERNEL);
+ 	if (!dev->urb_buffer)
+ 		return -ENOMEM;
+ 
+-	dev->urb_dma = kmalloc_array(num_bufs, sizeof(dma_addr_t *),
++	dev->urb_dma = kmalloc_array(num_bufs, sizeof(*dev->urb_dma),
+ 				     GFP_KERNEL);
+ 	if (!dev->urb_dma)
+ 		return -ENOMEM;
+diff --git a/drivers/media/v4l2-core/v4l2-fwnode.c b/drivers/media/v4l2-core/v4l2-fwnode.c
+index d7bbe33840cb4..dfc53d11053fc 100644
+--- a/drivers/media/v4l2-core/v4l2-fwnode.c
++++ b/drivers/media/v4l2-core/v4l2-fwnode.c
+@@ -93,7 +93,7 @@ v4l2_fwnode_bus_type_to_mbus(enum v4l2_fwnode_bus_type type)
+ 	const struct v4l2_fwnode_bus_conv *conv =
+ 		get_v4l2_fwnode_bus_conv_by_fwnode_bus(type);
+ 
+-	return conv ? conv->mbus_type : V4L2_MBUS_UNKNOWN;
++	return conv ? conv->mbus_type : V4L2_MBUS_INVALID;
+ }
+ 
+ static const char *
+@@ -436,6 +436,10 @@ static int __v4l2_fwnode_endpoint_parse(struct fwnode_handle *fwnode,
+ 		 v4l2_fwnode_mbus_type_to_string(vep->bus_type),
+ 		 vep->bus_type);
+ 	mbus_type = v4l2_fwnode_bus_type_to_mbus(bus_type);
++	if (mbus_type == V4L2_MBUS_INVALID) {
++		pr_debug("unsupported bus type %u\n", bus_type);
++		return -EINVAL;
++	}
+ 
+ 	if (vep->bus_type != V4L2_MBUS_UNKNOWN) {
+ 		if (mbus_type != V4L2_MBUS_UNKNOWN &&
+diff --git a/drivers/memory/Kconfig b/drivers/memory/Kconfig
+index 00e013b14703e..cc2c83e1accfb 100644
+--- a/drivers/memory/Kconfig
++++ b/drivers/memory/Kconfig
+@@ -128,7 +128,7 @@ config OMAP_GPMC_DEBUG
+ 
+ config TI_EMIF_SRAM
+ 	tristate "Texas Instruments EMIF SRAM driver"
+-	depends on SOC_AM33XX || SOC_AM43XX || (ARM && COMPILE_TEST)
++	depends on SOC_AM33XX || SOC_AM43XX || (ARM && CPU_V7 && COMPILE_TEST)
+ 	depends on SRAM
+ 	help
+ 	  This driver is for the EMIF module available on Texas Instruments
+diff --git a/drivers/memory/jz4780-nemc.c b/drivers/memory/jz4780-nemc.c
+index 3ec5cb0fce1ee..555f7ac3b7dd9 100644
+--- a/drivers/memory/jz4780-nemc.c
++++ b/drivers/memory/jz4780-nemc.c
+@@ -291,6 +291,8 @@ static int jz4780_nemc_probe(struct platform_device *pdev)
+ 	nemc->dev = dev;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -EINVAL;
+ 
+ 	/*
+ 	 * The driver currently only uses the registers up to offset
+@@ -304,9 +306,9 @@ static int jz4780_nemc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	nemc->base = devm_ioremap(dev, res->start, NEMC_REG_LEN);
+-	if (IS_ERR(nemc->base)) {
++	if (!nemc->base) {
+ 		dev_err(dev, "failed to get I/O memory\n");
+-		return PTR_ERR(nemc->base);
++		return -ENOMEM;
+ 	}
+ 
+ 	writel(0, nemc->base + NEMC_NFCSR);
+diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
+index f2a33a1af8361..da0fdb4c75959 100644
+--- a/drivers/memory/renesas-rpc-if.c
++++ b/drivers/memory/renesas-rpc-if.c
+@@ -212,7 +212,7 @@ EXPORT_SYMBOL(rpcif_enable_rpm);
+ 
+ void rpcif_disable_rpm(struct rpcif *rpc)
+ {
+-	pm_runtime_put_sync(rpc->dev);
++	pm_runtime_disable(rpc->dev);
+ }
+ EXPORT_SYMBOL(rpcif_disable_rpm);
+ 
+@@ -508,7 +508,8 @@ exit:
+ 	return ret;
+ 
+ err_out:
+-	ret = reset_control_reset(rpc->rstc);
++	if (reset_control_reset(rpc->rstc))
++		dev_err(rpc->dev, "Failed to reset HW\n");
+ 	rpcif_hw_init(rpc, rpc->bus_size == 2);
+ 	goto exit;
+ }
+@@ -560,9 +561,11 @@ static int rpcif_probe(struct platform_device *pdev)
+ 	} else if (of_device_is_compatible(flash, "cfi-flash")) {
+ 		name = "rpc-if-hyperflash";
+ 	} else	{
++		of_node_put(flash);
+ 		dev_warn(&pdev->dev, "unknown flash type\n");
+ 		return -ENODEV;
+ 	}
++	of_node_put(flash);
+ 
+ 	vdev = platform_device_alloc(name, pdev->id);
+ 	if (!vdev)
+diff --git a/drivers/memstick/core/memstick.c b/drivers/memstick/core/memstick.c
+index ef03d6fafc5ce..12bc3f5a6cbbd 100644
+--- a/drivers/memstick/core/memstick.c
++++ b/drivers/memstick/core/memstick.c
+@@ -468,7 +468,6 @@ static void memstick_check(struct work_struct *work)
+ 			host->card = card;
+ 			if (device_register(&card->dev)) {
+ 				put_device(&card->dev);
+-				kfree(host->card);
+ 				host->card = NULL;
+ 			}
+ 		} else
+diff --git a/drivers/memstick/host/r592.c b/drivers/memstick/host/r592.c
+index dd3a1f3dcc191..d2ef46337191c 100644
+--- a/drivers/memstick/host/r592.c
++++ b/drivers/memstick/host/r592.c
+@@ -759,8 +759,10 @@ static int r592_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		goto error3;
+ 
+ 	dev->mmio = pci_ioremap_bar(pdev, 0);
+-	if (!dev->mmio)
++	if (!dev->mmio) {
++		error = -ENOMEM;
+ 		goto error4;
++	}
+ 
+ 	dev->irq = pdev->irq;
+ 	spin_lock_init(&dev->irq_lock);
+@@ -786,12 +788,14 @@ static int r592_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		&dev->dummy_dma_page_physical_address, GFP_KERNEL);
+ 	r592_stop_dma(dev , 0);
+ 
+-	if (request_irq(dev->irq, &r592_irq, IRQF_SHARED,
+-			  DRV_NAME, dev))
++	error = request_irq(dev->irq, &r592_irq, IRQF_SHARED,
++			  DRV_NAME, dev);
++	if (error)
+ 		goto error6;
+ 
+ 	r592_update_card_detect(dev);
+-	if (memstick_add_host(host))
++	error = memstick_add_host(host);
++	if (error)
+ 		goto error7;
+ 
+ 	message("driver successfully loaded");
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index 8b99a13669bfc..4789507f325b8 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -1189,6 +1189,7 @@ config MFD_SIMPLE_MFD_I2C
+ config MFD_SL28CPLD
+ 	tristate "Kontron sl28cpld Board Management Controller"
+ 	depends on I2C
++	depends on ARCH_LAYERSCAPE || COMPILE_TEST
+ 	select MFD_SIMPLE_MFD_I2C
+ 	help
+ 	  Say yes here to enable support for the Kontron sl28cpld board
+diff --git a/drivers/mfd/htc-i2cpld.c b/drivers/mfd/htc-i2cpld.c
+index 247f9849e54ae..417b0355d904d 100644
+--- a/drivers/mfd/htc-i2cpld.c
++++ b/drivers/mfd/htc-i2cpld.c
+@@ -346,6 +346,7 @@ static int htcpld_register_chip_i2c(
+ 	if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_READ_BYTE_DATA)) {
+ 		dev_warn(dev, "i2c adapter %d non-functional\n",
+ 			 pdata->i2c_adapter_id);
++		i2c_put_adapter(adapter);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -360,6 +361,7 @@ static int htcpld_register_chip_i2c(
+ 		/* I2C device registration failed, contineu with the next */
+ 		dev_warn(dev, "Unable to add I2C device for 0x%x\n",
+ 			 plat_chip_data->addr);
++		i2c_put_adapter(adapter);
+ 		return PTR_ERR(client);
+ 	}
+ 
+diff --git a/drivers/mfd/motorola-cpcap.c b/drivers/mfd/motorola-cpcap.c
+index 2283d88adcc25..30d82bfe5b02f 100644
+--- a/drivers/mfd/motorola-cpcap.c
++++ b/drivers/mfd/motorola-cpcap.c
+@@ -97,7 +97,7 @@ static struct regmap_irq_chip cpcap_irq_chip[CPCAP_NR_IRQ_CHIPS] = {
+ 		.ack_base = CPCAP_REG_MI1,
+ 		.mask_base = CPCAP_REG_MIM1,
+ 		.use_ack = true,
+-		.ack_invert = true,
++		.clear_ack = true,
+ 	},
+ 	{
+ 		.name = "cpcap-m2",
+@@ -106,7 +106,7 @@ static struct regmap_irq_chip cpcap_irq_chip[CPCAP_NR_IRQ_CHIPS] = {
+ 		.ack_base = CPCAP_REG_MI2,
+ 		.mask_base = CPCAP_REG_MIM2,
+ 		.use_ack = true,
+-		.ack_invert = true,
++		.clear_ack = true,
+ 	},
+ 	{
+ 		.name = "cpcap1-4",
+@@ -115,7 +115,7 @@ static struct regmap_irq_chip cpcap_irq_chip[CPCAP_NR_IRQ_CHIPS] = {
+ 		.ack_base = CPCAP_REG_INT1,
+ 		.mask_base = CPCAP_REG_INTM1,
+ 		.use_ack = true,
+-		.ack_invert = true,
++		.clear_ack = true,
+ 	},
+ };
+ 
+diff --git a/drivers/mfd/stmfx.c b/drivers/mfd/stmfx.c
+index 5e680bfdf5c90..988e2ba6dd0f3 100644
+--- a/drivers/mfd/stmfx.c
++++ b/drivers/mfd/stmfx.c
+@@ -329,11 +329,11 @@ static int stmfx_chip_init(struct i2c_client *client)
+ 
+ 	stmfx->vdd = devm_regulator_get_optional(&client->dev, "vdd");
+ 	ret = PTR_ERR_OR_ZERO(stmfx->vdd);
+-	if (ret == -ENODEV) {
+-		stmfx->vdd = NULL;
+-	} else {
+-		return dev_err_probe(&client->dev, ret,
+-				     "Failed to get VDD regulator\n");
++	if (ret) {
++		if (ret == -ENODEV)
++			stmfx->vdd = NULL;
++		else
++			return dev_err_probe(&client->dev, ret, "Failed to get VDD regulator\n");
+ 	}
+ 
+ 	if (stmfx->vdd) {
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index 146ca6fb3260f..d3844730eacaf 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -811,8 +811,10 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
+ 
+ 	pci_set_master(pdev);
+ 
+-	if (!pci_endpoint_test_alloc_irq_vectors(test, irq_type))
++	if (!pci_endpoint_test_alloc_irq_vectors(test, irq_type)) {
++		err = -EINVAL;
+ 		goto err_disable_irq;
++	}
+ 
+ 	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
+ 		if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) {
+@@ -849,8 +851,10 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
+ 		goto err_ida_remove;
+ 	}
+ 
+-	if (!pci_endpoint_test_request_irq(test))
++	if (!pci_endpoint_test_request_irq(test)) {
++		err = -EINVAL;
+ 		goto err_kfree_test_name;
++	}
+ 
+ 	misc_device = &test->miscdev;
+ 	misc_device->minor = MISC_DYNAMIC_MINOR;
+diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c
+index 29f6180a00363..316393c694d7a 100644
+--- a/drivers/mmc/host/pxamci.c
++++ b/drivers/mmc/host/pxamci.c
+@@ -731,6 +731,7 @@ static int pxamci_probe(struct platform_device *pdev)
+ 
+ 		host->power = devm_gpiod_get_optional(dev, "power", GPIOD_OUT_LOW);
+ 		if (IS_ERR(host->power)) {
++			ret = PTR_ERR(host->power);
+ 			dev_err(dev, "Failed requesting gpio_power\n");
+ 			goto out;
+ 		}
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index ed12aacb1c736..41d193fa77bbf 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -1272,7 +1272,7 @@ static void tegra_sdhci_set_timeout(struct sdhci_host *host,
+ 	 * busy wait mode.
+ 	 */
+ 	val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_MISC_CTRL);
+-	if (cmd && cmd->busy_timeout >= 11 * HZ)
++	if (cmd && cmd->busy_timeout >= 11 * MSEC_PER_SEC)
+ 		val |= SDHCI_MISC_CTRL_ERASE_TIMEOUT_LIMIT;
+ 	else
+ 		val &= ~SDHCI_MISC_CTRL_ERASE_TIMEOUT_LIMIT;
+diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
+index e9e163ae9d863..b07cbb0661fb1 100644
+--- a/drivers/mtd/mtdcore.c
++++ b/drivers/mtd/mtdcore.c
+@@ -993,6 +993,8 @@ int __get_mtd_device(struct mtd_info *mtd)
+ 		}
+ 	}
+ 
++	master->usecount++;
++
+ 	while (mtd->parent) {
+ 		mtd->usecount++;
+ 		mtd = mtd->parent;
+@@ -1059,6 +1061,8 @@ void __put_mtd_device(struct mtd_info *mtd)
+ 		mtd = mtd->parent;
+ 	}
+ 
++	master->usecount--;
++
+ 	if (master->_put_device)
+ 		master->_put_device(master);
+ 
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index dc8104e675062..81028ba35f35d 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -149,8 +149,10 @@ static int gpmi_init(struct gpmi_nand_data *this)
+ 	int ret;
+ 
+ 	ret = pm_runtime_get_sync(this->dev);
+-	if (ret < 0)
++	if (ret < 0) {
++		pm_runtime_put_noidle(this->dev);
+ 		return ret;
++	}
+ 
+ 	ret = gpmi_reset_block(r->gpmi_regs, false);
+ 	if (ret)
+@@ -2252,7 +2254,7 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
+ 	void *buf_read = NULL;
+ 	const void *buf_write = NULL;
+ 	bool direct = false;
+-	struct completion *completion;
++	struct completion *dma_completion, *bch_completion;
+ 	unsigned long to;
+ 
+ 	if (check_only)
+@@ -2263,8 +2265,10 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
+ 		this->transfers[i].direction = DMA_NONE;
+ 
+ 	ret = pm_runtime_get_sync(this->dev);
+-	if (ret < 0)
++	if (ret < 0) {
++		pm_runtime_put_noidle(this->dev);
+ 		return ret;
++	}
+ 
+ 	/*
+ 	 * This driver currently supports only one NAND chip. Plus, dies share
+@@ -2347,22 +2351,24 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
+ 		       this->resources.bch_regs + HW_BCH_FLASH0LAYOUT1);
+ 	}
+ 
++	desc->callback = dma_irq_callback;
++	desc->callback_param = this;
++	dma_completion = &this->dma_done;
++	bch_completion = NULL;
++
++	init_completion(dma_completion);
++
+ 	if (this->bch && buf_read) {
+ 		writel(BM_BCH_CTRL_COMPLETE_IRQ_EN,
+ 		       this->resources.bch_regs + HW_BCH_CTRL_SET);
+-		completion = &this->bch_done;
+-	} else {
+-		desc->callback = dma_irq_callback;
+-		desc->callback_param = this;
+-		completion = &this->dma_done;
++		bch_completion = &this->bch_done;
++		init_completion(bch_completion);
+ 	}
+ 
+-	init_completion(completion);
+-
+ 	dmaengine_submit(desc);
+ 	dma_async_issue_pending(get_dma_chan(this));
+ 
+-	to = wait_for_completion_timeout(completion, msecs_to_jiffies(1000));
++	to = wait_for_completion_timeout(dma_completion, msecs_to_jiffies(1000));
+ 	if (!to) {
+ 		dev_err(this->dev, "DMA timeout, last DMA\n");
+ 		gpmi_dump_info(this);
+@@ -2370,6 +2376,16 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
+ 		goto unmap;
+ 	}
+ 
++	if (this->bch && buf_read) {
++		to = wait_for_completion_timeout(bch_completion, msecs_to_jiffies(1000));
++		if (!to) {
++			dev_err(this->dev, "BCH timeout, last DMA\n");
++			gpmi_dump_info(this);
++			ret = -ETIMEDOUT;
++			goto unmap;
++		}
++	}
++
+ 	writel(BM_BCH_CTRL_COMPLETE_IRQ_EN,
+ 	       this->resources.bch_regs + HW_BCH_CTRL_CLR);
+ 	gpmi_clear_bch(this);
+diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
+index 48e6dac96be6d..817bddccb775f 100644
+--- a/drivers/mtd/nand/raw/meson_nand.c
++++ b/drivers/mtd/nand/raw/meson_nand.c
+@@ -510,7 +510,7 @@ static int meson_nfc_dma_buffer_setup(struct nand_chip *nand, void *databuf,
+ }
+ 
+ static void meson_nfc_dma_buffer_release(struct nand_chip *nand,
+-					 int infolen, int datalen,
++					 int datalen, int infolen,
+ 					 enum dma_data_direction dir)
+ {
+ 	struct meson_nfc *nfc = nand_get_controller_data(nand);
+@@ -1044,9 +1044,12 @@ static int meson_nfc_clk_init(struct meson_nfc *nfc)
+ 
+ 	ret = clk_set_rate(nfc->device_clk, 24000000);
+ 	if (ret)
+-		goto err_phase_rx;
++		goto err_disable_rx;
+ 
+ 	return 0;
++
++err_disable_rx:
++	clk_disable_unprepare(nfc->phase_rx);
+ err_phase_rx:
+ 	clk_disable_unprepare(nfc->phase_tx);
+ err_phase_tx:
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index 777fb0de06801..dfc17a28a06b9 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -1570,6 +1570,8 @@ static int check_flash_errors(struct qcom_nand_host *host, int cw_cnt)
+ 	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+ 	int i;
+ 
++	nandc_read_buffer_sync(nandc, true);
++
+ 	for (i = 0; i < cw_cnt; i++) {
+ 		u32 flash = le32_to_cpu(nandc->reg_read_buf[i]);
+ 
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index c352217946455..7900571fc85b3 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -318,6 +318,10 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand,
+ 		buf += ret;
+ 	}
+ 
++	if (req->ooblen)
++		memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs,
++		       req->ooblen);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/mtd/parsers/cmdlinepart.c b/drivers/mtd/parsers/cmdlinepart.c
+index a79e4d866b08a..0ddff1a4b51fb 100644
+--- a/drivers/mtd/parsers/cmdlinepart.c
++++ b/drivers/mtd/parsers/cmdlinepart.c
+@@ -226,7 +226,7 @@ static int mtdpart_setup_real(char *s)
+ 		struct cmdline_mtd_partition *this_mtd;
+ 		struct mtd_partition *parts;
+ 		int mtd_id_len, num_parts;
+-		char *p, *mtd_id, *semicol;
++		char *p, *mtd_id, *semicol, *open_parenth;
+ 
+ 		/*
+ 		 * Replace the first ';' by a NULL char so strrchr can work
+@@ -236,6 +236,14 @@ static int mtdpart_setup_real(char *s)
+ 		if (semicol)
+ 			*semicol = '\0';
+ 
++		/*
++		 * make sure that part-names with ":" will not be handled as
++		 * part of the mtd-id with an ":"
++		 */
++		open_parenth = strchr(s, '(');
++		if (open_parenth)
++			*open_parenth = '\0';
++
+ 		mtd_id = s;
+ 
+ 		/*
+@@ -245,6 +253,10 @@ static int mtdpart_setup_real(char *s)
+ 		 */
+ 		p = strrchr(s, ':');
+ 
++		/* Restore the '(' now. */
++		if (open_parenth)
++			*open_parenth = '(';
++
+ 		/* Restore the ';' now. */
+ 		if (semicol)
+ 			*semicol = ';';
+diff --git a/drivers/mtd/spi-nor/atmel.c b/drivers/mtd/spi-nor/atmel.c
+index 3f5f21a473a69..deacf87a68a06 100644
+--- a/drivers/mtd/spi-nor/atmel.c
++++ b/drivers/mtd/spi-nor/atmel.c
+@@ -8,39 +8,78 @@
+ 
+ #include "core.h"
+ 
++/*
++ * The Atmel AT25FS010/AT25FS040 parts have some weird configuration for the
++ * block protection bits. We don't support them. But legacy behavior in linux
++ * is to unlock the whole flash array on startup. Therefore, we have to support
++ * exactly this operation.
++ */
++static int atmel_at25fs_lock(struct spi_nor *nor, loff_t ofs, uint64_t len)
++{
++	return -EOPNOTSUPP;
++}
++
++static int atmel_at25fs_unlock(struct spi_nor *nor, loff_t ofs, uint64_t len)
++{
++	int ret;
++
++	/* We only support unlocking the whole flash array */
++	if (ofs || len != nor->params->size)
++		return -EINVAL;
++
++	/* Write 0x00 to the status register to disable write protection */
++	ret = spi_nor_write_sr_and_check(nor, 0);
++	if (ret)
++		dev_dbg(nor->dev, "unable to clear BP bits, WP# asserted?\n");
++
++	return ret;
++}
++
++static int atmel_at25fs_is_locked(struct spi_nor *nor, loff_t ofs, uint64_t len)
++{
++	return -EOPNOTSUPP;
++}
++
++static const struct spi_nor_locking_ops atmel_at25fs_locking_ops = {
++	.lock = atmel_at25fs_lock,
++	.unlock = atmel_at25fs_unlock,
++	.is_locked = atmel_at25fs_is_locked,
++};
++
++static void atmel_at25fs_default_init(struct spi_nor *nor)
++{
++	nor->params->locking_ops = &atmel_at25fs_locking_ops;
++}
++
++static const struct spi_nor_fixups atmel_at25fs_fixups = {
++	.default_init = atmel_at25fs_default_init,
++};
++
+ static const struct flash_info atmel_parts[] = {
+ 	/* Atmel -- some are (confusingly) marketed as "DataFlash" */
+-	{ "at25fs010",  INFO(0x1f6601, 0, 32 * 1024,   4, SECT_4K) },
+-	{ "at25fs040",  INFO(0x1f6604, 0, 64 * 1024,   8, SECT_4K) },
++	{ "at25fs010",  INFO(0x1f6601, 0, 32 * 1024,   4, SECT_4K | SPI_NOR_HAS_LOCK)
++		.fixups = &atmel_at25fs_fixups },
++	{ "at25fs040",  INFO(0x1f6604, 0, 64 * 1024,   8, SECT_4K | SPI_NOR_HAS_LOCK)
++		.fixups = &atmel_at25fs_fixups },
+ 
+-	{ "at25df041a", INFO(0x1f4401, 0, 64 * 1024,   8, SECT_4K) },
+-	{ "at25df321",  INFO(0x1f4700, 0, 64 * 1024,  64, SECT_4K) },
+-	{ "at25df321a", INFO(0x1f4701, 0, 64 * 1024,  64, SECT_4K) },
+-	{ "at25df641",  INFO(0x1f4800, 0, 64 * 1024, 128, SECT_4K) },
++	{ "at25df041a", INFO(0x1f4401, 0, 64 * 1024,   8, SECT_4K | SPI_NOR_HAS_LOCK) },
++	{ "at25df321",  INFO(0x1f4700, 0, 64 * 1024,  64, SECT_4K | SPI_NOR_HAS_LOCK) },
++	{ "at25df321a", INFO(0x1f4701, 0, 64 * 1024,  64, SECT_4K | SPI_NOR_HAS_LOCK) },
++	{ "at25df641",  INFO(0x1f4800, 0, 64 * 1024, 128, SECT_4K | SPI_NOR_HAS_LOCK) },
+ 
+ 	{ "at25sl321",	INFO(0x1f4216, 0, 64 * 1024, 64,
+ 			     SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
+ 
+ 	{ "at26f004",   INFO(0x1f0400, 0, 64 * 1024,  8, SECT_4K) },
+-	{ "at26df081a", INFO(0x1f4501, 0, 64 * 1024, 16, SECT_4K) },
+-	{ "at26df161a", INFO(0x1f4601, 0, 64 * 1024, 32, SECT_4K) },
+-	{ "at26df321",  INFO(0x1f4700, 0, 64 * 1024, 64, SECT_4K) },
++	{ "at26df081a", INFO(0x1f4501, 0, 64 * 1024, 16, SECT_4K | SPI_NOR_HAS_LOCK) },
++	{ "at26df161a", INFO(0x1f4601, 0, 64 * 1024, 32, SECT_4K | SPI_NOR_HAS_LOCK) },
++	{ "at26df321",  INFO(0x1f4700, 0, 64 * 1024, 64, SECT_4K | SPI_NOR_HAS_LOCK) },
+ 
+ 	{ "at45db081d", INFO(0x1f2500, 0, 64 * 1024, 16, SECT_4K) },
+ };
+ 
+-static void atmel_default_init(struct spi_nor *nor)
+-{
+-	nor->flags |= SNOR_F_HAS_LOCK;
+-}
+-
+-static const struct spi_nor_fixups atmel_fixups = {
+-	.default_init = atmel_default_init,
+-};
+-
+ const struct spi_nor_manufacturer spi_nor_atmel = {
+ 	.name = "atmel",
+ 	.parts = atmel_parts,
+ 	.nparts = ARRAY_SIZE(atmel_parts),
+-	.fixups = &atmel_fixups,
+ };
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index f0ae7a01703a1..ad6c79d9a7f86 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -906,7 +906,7 @@ static int spi_nor_write_16bit_cr_and_check(struct spi_nor *nor, u8 cr)
+  *
+  * Return: 0 on success, -errno otherwise.
+  */
+-static int spi_nor_write_sr_and_check(struct spi_nor *nor, u8 sr1)
++int spi_nor_write_sr_and_check(struct spi_nor *nor, u8 sr1)
+ {
+ 	if (nor->flags & SNOR_F_HAS_16BIT_SR)
+ 		return spi_nor_write_16bit_sr_and_check(nor, sr1);
+@@ -2915,20 +2915,27 @@ static int spi_nor_quad_enable(struct spi_nor *nor)
+ }
+ 
+ /**
+- * spi_nor_unlock_all() - Unlocks the entire flash memory array.
++ * spi_nor_try_unlock_all() - Tries to unlock the entire flash memory array.
+  * @nor:	pointer to a 'struct spi_nor'.
+  *
+  * Some SPI NOR flashes are write protected by default after a power-on reset
+  * cycle, in order to avoid inadvertent writes during power-up. Backward
+  * compatibility imposes to unlock the entire flash memory array at power-up
+  * by default.
++ *
++ * Unprotecting the entire flash array will fail for boards which are hardware
++ * write-protected. Thus any errors are ignored.
+  */
+-static int spi_nor_unlock_all(struct spi_nor *nor)
++static void spi_nor_try_unlock_all(struct spi_nor *nor)
+ {
+-	if (nor->flags & SNOR_F_HAS_LOCK)
+-		return spi_nor_unlock(&nor->mtd, 0, nor->params->size);
++	int ret;
+ 
+-	return 0;
++	if (!(nor->flags & SNOR_F_HAS_LOCK))
++		return;
++
++	ret = spi_nor_unlock(&nor->mtd, 0, nor->params->size);
++	if (ret)
++		dev_dbg(nor->dev, "Failed to unlock the entire flash memory array\n");
+ }
+ 
+ static int spi_nor_init(struct spi_nor *nor)
+@@ -2941,11 +2948,7 @@ static int spi_nor_init(struct spi_nor *nor)
+ 		return err;
+ 	}
+ 
+-	err = spi_nor_unlock_all(nor);
+-	if (err) {
+-		dev_dbg(nor->dev, "Failed to unlock the entire flash memory array\n");
+-		return err;
+-	}
++	spi_nor_try_unlock_all(nor);
+ 
+ 	if (nor->addr_width == 4 && !(nor->flags & SNOR_F_4B_OPCODES)) {
+ 		/*
+diff --git a/drivers/mtd/spi-nor/core.h b/drivers/mtd/spi-nor/core.h
+index 6f2f6b27173fd..6f62ee861231a 100644
+--- a/drivers/mtd/spi-nor/core.h
++++ b/drivers/mtd/spi-nor/core.h
+@@ -409,6 +409,7 @@ void spi_nor_unlock_and_unprep(struct spi_nor *nor);
+ int spi_nor_sr1_bit6_quad_enable(struct spi_nor *nor);
+ int spi_nor_sr2_bit1_quad_enable(struct spi_nor *nor);
+ int spi_nor_sr2_bit7_quad_enable(struct spi_nor *nor);
++int spi_nor_write_sr_and_check(struct spi_nor *nor, u8 sr1);
+ 
+ int spi_nor_xread_sr(struct spi_nor *nor, u8 *sr);
+ ssize_t spi_nor_read_data(struct spi_nor *nor, loff_t from, size_t len,
+diff --git a/drivers/mtd/spi-nor/sst.c b/drivers/mtd/spi-nor/sst.c
+index e0af6d25d573b..0ab07624fb73f 100644
+--- a/drivers/mtd/spi-nor/sst.c
++++ b/drivers/mtd/spi-nor/sst.c
+@@ -18,7 +18,8 @@ static const struct flash_info sst_parts[] = {
+ 			      SECT_4K | SST_WRITE) },
+ 	{ "sst25vf032b", INFO(0xbf254a, 0, 64 * 1024, 64,
+ 			      SECT_4K | SST_WRITE) },
+-	{ "sst25vf064c", INFO(0xbf254b, 0, 64 * 1024, 128, SECT_4K) },
++	{ "sst25vf064c", INFO(0xbf254b, 0, 64 * 1024, 128,
++			      SECT_4K | SPI_NOR_4BIT_BP) },
+ 	{ "sst25wf512",  INFO(0xbf2501, 0, 64 * 1024,  1,
+ 			      SECT_4K | SST_WRITE) },
+ 	{ "sst25wf010",  INFO(0xbf2502, 0, 64 * 1024,  2,
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 61a93b1920379..7fc4ac1582afc 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -380,10 +380,6 @@ void m_can_config_endisable(struct m_can_classdev *cdev, bool enable)
+ 		cccr &= ~CCCR_CSR;
+ 
+ 	if (enable) {
+-		/* Clear the Clock stop request if it was set */
+-		if (cccr & CCCR_CSR)
+-			cccr &= ~CCCR_CSR;
+-
+ 		/* enable m_can configuration */
+ 		m_can_write(cdev, M_CAN_CCCR, cccr | CCCR_INIT);
+ 		udelay(5);
+diff --git a/drivers/net/dsa/qca/ar9331.c b/drivers/net/dsa/qca/ar9331.c
+index e24a99031b80f..4d49c5f2b7905 100644
+--- a/drivers/net/dsa/qca/ar9331.c
++++ b/drivers/net/dsa/qca/ar9331.c
+@@ -159,6 +159,8 @@ struct ar9331_sw_priv {
+ 	struct dsa_switch ds;
+ 	struct dsa_switch_ops ops;
+ 	struct irq_domain *irqdomain;
++	u32 irq_mask;
++	struct mutex lock_irq;
+ 	struct mii_bus *mbus; /* mdio master */
+ 	struct mii_bus *sbus; /* mdio slave */
+ 	struct regmap *regmap;
+@@ -520,32 +522,44 @@ static irqreturn_t ar9331_sw_irq(int irq, void *data)
+ static void ar9331_sw_mask_irq(struct irq_data *d)
+ {
+ 	struct ar9331_sw_priv *priv = irq_data_get_irq_chip_data(d);
+-	struct regmap *regmap = priv->regmap;
+-	int ret;
+ 
+-	ret = regmap_update_bits(regmap, AR9331_SW_REG_GINT_MASK,
+-				 AR9331_SW_GINT_PHY_INT, 0);
+-	if (ret)
+-		dev_err(priv->dev, "could not mask IRQ\n");
++	priv->irq_mask = 0;
+ }
+ 
+ static void ar9331_sw_unmask_irq(struct irq_data *d)
++{
++	struct ar9331_sw_priv *priv = irq_data_get_irq_chip_data(d);
++
++	priv->irq_mask = AR9331_SW_GINT_PHY_INT;
++}
++
++static void ar9331_sw_irq_bus_lock(struct irq_data *d)
++{
++	struct ar9331_sw_priv *priv = irq_data_get_irq_chip_data(d);
++
++	mutex_lock(&priv->lock_irq);
++}
++
++static void ar9331_sw_irq_bus_sync_unlock(struct irq_data *d)
+ {
+ 	struct ar9331_sw_priv *priv = irq_data_get_irq_chip_data(d);
+ 	struct regmap *regmap = priv->regmap;
+ 	int ret;
+ 
+ 	ret = regmap_update_bits(regmap, AR9331_SW_REG_GINT_MASK,
+-				 AR9331_SW_GINT_PHY_INT,
+-				 AR9331_SW_GINT_PHY_INT);
++				 AR9331_SW_GINT_PHY_INT, priv->irq_mask);
+ 	if (ret)
+-		dev_err(priv->dev, "could not unmask IRQ\n");
++		dev_err(priv->dev, "failed to change IRQ mask\n");
++
++	mutex_unlock(&priv->lock_irq);
+ }
+ 
+ static struct irq_chip ar9331_sw_irq_chip = {
+ 	.name = AR9331_SW_NAME,
+ 	.irq_mask = ar9331_sw_mask_irq,
+ 	.irq_unmask = ar9331_sw_unmask_irq,
++	.irq_bus_lock = ar9331_sw_irq_bus_lock,
++	.irq_bus_sync_unlock = ar9331_sw_irq_bus_sync_unlock,
+ };
+ 
+ static int ar9331_sw_irq_map(struct irq_domain *domain, unsigned int irq,
+@@ -584,6 +598,7 @@ static int ar9331_sw_irq_init(struct ar9331_sw_priv *priv)
+ 		return irq ? irq : -EINVAL;
+ 	}
+ 
++	mutex_init(&priv->lock_irq);
+ 	ret = devm_request_threaded_irq(dev, irq, NULL, ar9331_sw_irq,
+ 					IRQF_ONESHOT, AR9331_SW_NAME, priv);
+ 	if (ret) {
+diff --git a/drivers/net/ethernet/allwinner/sun4i-emac.c b/drivers/net/ethernet/allwinner/sun4i-emac.c
+index 862ea44beea77..5ed80d9a6b9fe 100644
+--- a/drivers/net/ethernet/allwinner/sun4i-emac.c
++++ b/drivers/net/ethernet/allwinner/sun4i-emac.c
+@@ -828,13 +828,13 @@ static int emac_probe(struct platform_device *pdev)
+ 	db->clk = devm_clk_get(&pdev->dev, NULL);
+ 	if (IS_ERR(db->clk)) {
+ 		ret = PTR_ERR(db->clk);
+-		goto out_iounmap;
++		goto out_dispose_mapping;
+ 	}
+ 
+ 	ret = clk_prepare_enable(db->clk);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Error couldn't enable clock (%d)\n", ret);
+-		goto out_iounmap;
++		goto out_dispose_mapping;
+ 	}
+ 
+ 	ret = sunxi_sram_claim(&pdev->dev);
+@@ -893,6 +893,8 @@ out_release_sram:
+ 	sunxi_sram_release(&pdev->dev);
+ out_clk_disable_unprepare:
+ 	clk_disable_unprepare(db->clk);
++out_dispose_mapping:
++	irq_dispose_mapping(ndev->irq);
+ out_iounmap:
+ 	iounmap(db->membase);
+ out:
+@@ -911,6 +913,7 @@ static int emac_remove(struct platform_device *pdev)
+ 	unregister_netdev(ndev);
+ 	sunxi_sram_release(&pdev->dev);
+ 	clk_disable_unprepare(db->clk);
++	irq_dispose_mapping(ndev->irq);
+ 	iounmap(db->membase);
+ 	free_netdev(ndev);
+ 
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index be85dad2e3bc4..fcca023f22e54 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -4069,8 +4069,10 @@ static int bcmgenet_probe(struct platform_device *pdev)
+ 	clk_disable_unprepare(priv->clk);
+ 
+ 	err = register_netdev(dev);
+-	if (err)
++	if (err) {
++		bcmgenet_mii_exit(dev);
+ 		goto err;
++	}
+ 
+ 	return err;
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index cf9400a9886d7..d880ab2a7d962 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -878,7 +878,7 @@ static int dpaa2_eth_build_sg_fd_single_buf(struct dpaa2_eth_priv *priv,
+ 	swa = (struct dpaa2_eth_swa *)sgt_buf;
+ 	swa->type = DPAA2_ETH_SWA_SINGLE;
+ 	swa->single.skb = skb;
+-	swa->sg.sgt_size = sgt_buf_size;
++	swa->single.sgt_size = sgt_buf_size;
+ 
+ 	/* Separately map the SGT buffer */
+ 	sgt_addr = dma_map_single(dev, sgt_buf, sgt_buf_size, DMA_BIDIRECTIONAL);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+index 567fd67e900ef..e402c62eb3137 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+@@ -219,8 +219,11 @@ bool i40e_alloc_rx_buffers_zc(struct i40e_ring *rx_ring, u16 count)
+ 	} while (count);
+ 
+ no_buffers:
+-	if (rx_ring->next_to_use != ntu)
++	if (rx_ring->next_to_use != ntu) {
++		/* clear the status bits for the next_to_use descriptor */
++		rx_desc->wb.qword1.status_error_len = 0;
+ 		i40e_release_rx_desc(rx_ring, ntu);
++	}
+ 
+ 	return ok;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index 797886524054c..98101a8e2952d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -446,8 +446,11 @@ bool ice_alloc_rx_bufs_zc(struct ice_ring *rx_ring, u16 count)
+ 		}
+ 	} while (--count);
+ 
+-	if (rx_ring->next_to_use != ntu)
++	if (rx_ring->next_to_use != ntu) {
++		/* clear the status bits for the next_to_use descriptor */
++		rx_desc->wb.status_error0 = 0;
+ 		ice_release_rx_desc(rx_ring, ntu);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/ethernet/korina.c b/drivers/net/ethernet/korina.c
+index bf48f0ded9c7d..925161959b9ba 100644
+--- a/drivers/net/ethernet/korina.c
++++ b/drivers/net/ethernet/korina.c
+@@ -219,7 +219,7 @@ static int korina_send_packet(struct sk_buff *skb, struct net_device *dev)
+ 			dev_kfree_skb_any(skb);
+ 			spin_unlock_irqrestore(&lp->lock, flags);
+ 
+-			return NETDEV_TX_BUSY;
++			return NETDEV_TX_OK;
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 8ff207aa14792..e455a2f31f070 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -50,6 +50,7 @@
+ #ifdef CONFIG_RFS_ACCEL
+ #include <linux/cpu_rmap.h>
+ #endif
++#include <linux/version.h>
+ #include <net/devlink.h>
+ #include "mlx5_core.h"
+ #include "lib/eq.h"
+@@ -233,7 +234,10 @@ static void mlx5_set_driver_version(struct mlx5_core_dev *dev)
+ 	strncat(string, ",", remaining_size);
+ 
+ 	remaining_size = max_t(int, 0, driver_ver_sz - strlen(string));
+-	strncat(string, DRIVER_VERSION, remaining_size);
++
++	snprintf(string + strlen(string), remaining_size, "%u.%u.%u",
++		 (u8)((LINUX_VERSION_CODE >> 16) & 0xff), (u8)((LINUX_VERSION_CODE >> 8) & 0xff),
++		 (u16)(LINUX_VERSION_CODE & 0xffff));
+ 
+ 	/*Send the command*/
+ 	MLX5_SET(set_driver_version_in, in, opcode,
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index b319c22c211cd..8947c3a628109 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -1962,6 +1962,14 @@ static struct sk_buff *lan743x_rx_allocate_skb(struct lan743x_rx *rx)
+ 				  length, GFP_ATOMIC | GFP_DMA);
+ }
+ 
++static void lan743x_rx_update_tail(struct lan743x_rx *rx, int index)
++{
++	/* update the tail once per 8 descriptors */
++	if ((index & 7) == 7)
++		lan743x_csr_write(rx->adapter, RX_TAIL(rx->channel_number),
++				  index);
++}
++
+ static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index,
+ 					struct sk_buff *skb)
+ {
+@@ -1992,6 +2000,7 @@ static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index,
+ 	descriptor->data0 = (RX_DESC_DATA0_OWN_ |
+ 			    (length & RX_DESC_DATA0_BUF_LENGTH_MASK_));
+ 	skb_reserve(buffer_info->skb, RX_HEAD_PADDING);
++	lan743x_rx_update_tail(rx, index);
+ 
+ 	return 0;
+ }
+@@ -2010,6 +2019,7 @@ static void lan743x_rx_reuse_ring_element(struct lan743x_rx *rx, int index)
+ 	descriptor->data0 = (RX_DESC_DATA0_OWN_ |
+ 			    ((buffer_info->buffer_length) &
+ 			    RX_DESC_DATA0_BUF_LENGTH_MASK_));
++	lan743x_rx_update_tail(rx, index);
+ }
+ 
+ static void lan743x_rx_release_ring_element(struct lan743x_rx *rx, int index)
+@@ -2220,6 +2230,7 @@ static int lan743x_rx_napi_poll(struct napi_struct *napi, int weight)
+ {
+ 	struct lan743x_rx *rx = container_of(napi, struct lan743x_rx, napi);
+ 	struct lan743x_adapter *adapter = rx->adapter;
++	int result = RX_PROCESS_RESULT_NOTHING_TO_DO;
+ 	u32 rx_tail_flags = 0;
+ 	int count;
+ 
+@@ -2228,27 +2239,19 @@ static int lan743x_rx_napi_poll(struct napi_struct *napi, int weight)
+ 		lan743x_csr_write(adapter, DMAC_INT_STS,
+ 				  DMAC_INT_BIT_RXFRM_(rx->channel_number));
+ 	}
+-	count = 0;
+-	while (count < weight) {
+-		int rx_process_result = lan743x_rx_process_packet(rx);
+-
+-		if (rx_process_result == RX_PROCESS_RESULT_PACKET_RECEIVED) {
+-			count++;
+-		} else if (rx_process_result ==
+-			RX_PROCESS_RESULT_NOTHING_TO_DO) {
++	for (count = 0; count < weight; count++) {
++		result = lan743x_rx_process_packet(rx);
++		if (result == RX_PROCESS_RESULT_NOTHING_TO_DO)
+ 			break;
+-		} else if (rx_process_result ==
+-			RX_PROCESS_RESULT_PACKET_DROPPED) {
+-			continue;
+-		}
+ 	}
+ 	rx->frame_count += count;
+-	if (count == weight)
+-		goto done;
++	if (count == weight || result == RX_PROCESS_RESULT_PACKET_RECEIVED)
++		return weight;
+ 
+ 	if (!napi_complete_done(napi, count))
+-		goto done;
++		return count;
+ 
++	/* re-arm interrupts, must write to rx tail on some chip variants */
+ 	if (rx->vector_flags & LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_SET)
+ 		rx_tail_flags |= RX_TAIL_SET_TOP_INT_VEC_EN_;
+ 	if (rx->vector_flags & LAN743X_VECTOR_FLAG_SOURCE_ENABLE_AUTO_SET) {
+@@ -2258,10 +2261,10 @@ static int lan743x_rx_napi_poll(struct napi_struct *napi, int weight)
+ 				  INT_BIT_DMA_RX_(rx->channel_number));
+ 	}
+ 
+-	/* update RX_TAIL */
+-	lan743x_csr_write(adapter, RX_TAIL(rx->channel_number),
+-			  rx_tail_flags | rx->last_tail);
+-done:
++	if (rx_tail_flags)
++		lan743x_csr_write(adapter, RX_TAIL(rx->channel_number),
++				  rx_tail_flags | rx->last_tail);
++
+ 	return count;
+ }
+ 
+@@ -2405,7 +2408,7 @@ static int lan743x_rx_open(struct lan743x_rx *rx)
+ 
+ 	netif_napi_add(adapter->netdev,
+ 		       &rx->napi, lan743x_rx_napi_poll,
+-		       rx->ring_size - 1);
++		       NAPI_POLL_WEIGHT);
+ 
+ 	lan743x_csr_write(adapter, DMAC_CMD,
+ 			  DMAC_CMD_RX_SWR_(rx->channel_number));
+diff --git a/drivers/net/ethernet/mscc/ocelot_vsc7514.c b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+index 1e7729421a825..9cf2bc5f42892 100644
+--- a/drivers/net/ethernet/mscc/ocelot_vsc7514.c
++++ b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+@@ -1267,7 +1267,7 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ 
+ 	err = mscc_ocelot_init_ports(pdev, ports);
+ 	if (err)
+-		goto out_put_ports;
++		goto out_ocelot_deinit;
+ 
+ 	if (ocelot->ptp) {
+ 		err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info);
+@@ -1282,8 +1282,14 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ 	register_switchdev_notifier(&ocelot_switchdev_nb);
+ 	register_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb);
+ 
++	of_node_put(ports);
++
+ 	dev_info(&pdev->dev, "Ocelot switch probed\n");
+ 
++	return 0;
++
++out_ocelot_deinit:
++	ocelot_deinit(ocelot);
+ out_put_ports:
+ 	of_node_put(ports);
+ 	return err;
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.c b/drivers/net/ethernet/netronome/nfp/flower/main.c
+index bb448c82cdc28..c029950a81e20 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/main.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/main.c
+@@ -860,9 +860,6 @@ static void nfp_flower_clean(struct nfp_app *app)
+ 	skb_queue_purge(&app_priv->cmsg_skbs_low);
+ 	flush_work(&app_priv->cmsg_work);
+ 
+-	flow_indr_dev_unregister(nfp_flower_indr_setup_tc_cb, app,
+-				 nfp_flower_setup_indr_tc_release);
+-
+ 	if (app_priv->flower_ext_feats & NFP_FL_FEATS_VF_RLIM)
+ 		nfp_flower_qos_cleanup(app);
+ 
+@@ -951,6 +948,9 @@ static int nfp_flower_start(struct nfp_app *app)
+ static void nfp_flower_stop(struct nfp_app *app)
+ {
+ 	nfp_tunnel_config_stop(app);
++
++	flow_indr_dev_unregister(nfp_flower_indr_setup_tc_cb, app,
++				 nfp_flower_setup_indr_tc_release);
+ }
+ 
+ static int
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index a12df3946a07c..c968c5c5a60a0 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -1129,38 +1129,10 @@ static void ionic_lif_rx_mode(struct ionic_lif *lif, unsigned int rx_mode)
+ 		lif->rx_mode = rx_mode;
+ }
+ 
+-static void _ionic_lif_rx_mode(struct ionic_lif *lif, unsigned int rx_mode,
+-			       bool from_ndo)
+-{
+-	struct ionic_deferred_work *work;
+-
+-	if (from_ndo) {
+-		work = kzalloc(sizeof(*work), GFP_ATOMIC);
+-		if (!work) {
+-			netdev_err(lif->netdev, "%s OOM\n", __func__);
+-			return;
+-		}
+-		work->type = IONIC_DW_TYPE_RX_MODE;
+-		work->rx_mode = rx_mode;
+-		netdev_dbg(lif->netdev, "deferred: rx_mode\n");
+-		ionic_lif_deferred_enqueue(&lif->deferred, work);
+-	} else {
+-		ionic_lif_rx_mode(lif, rx_mode);
+-	}
+-}
+-
+-static void ionic_dev_uc_sync(struct net_device *netdev, bool from_ndo)
+-{
+-	if (from_ndo)
+-		__dev_uc_sync(netdev, ionic_ndo_addr_add, ionic_ndo_addr_del);
+-	else
+-		__dev_uc_sync(netdev, ionic_addr_add, ionic_addr_del);
+-
+-}
+-
+-static void ionic_set_rx_mode(struct net_device *netdev, bool from_ndo)
++static void ionic_set_rx_mode(struct net_device *netdev, bool can_sleep)
+ {
+ 	struct ionic_lif *lif = netdev_priv(netdev);
++	struct ionic_deferred_work *work;
+ 	unsigned int nfilters;
+ 	unsigned int rx_mode;
+ 
+@@ -1177,7 +1149,10 @@ static void ionic_set_rx_mode(struct net_device *netdev, bool from_ndo)
+ 	 *       we remove our overflow flag and check the netdev flags
+ 	 *       to see if we can disable NIC PROMISC
+ 	 */
+-	ionic_dev_uc_sync(netdev, from_ndo);
++	if (can_sleep)
++		__dev_uc_sync(netdev, ionic_addr_add, ionic_addr_del);
++	else
++		__dev_uc_sync(netdev, ionic_ndo_addr_add, ionic_ndo_addr_del);
+ 	nfilters = le32_to_cpu(lif->identity->eth.max_ucast_filters);
+ 	if (netdev_uc_count(netdev) + 1 > nfilters) {
+ 		rx_mode |= IONIC_RX_MODE_F_PROMISC;
+@@ -1189,7 +1164,10 @@ static void ionic_set_rx_mode(struct net_device *netdev, bool from_ndo)
+ 	}
+ 
+ 	/* same for multicast */
+-	ionic_dev_uc_sync(netdev, from_ndo);
++	if (can_sleep)
++		__dev_mc_sync(netdev, ionic_addr_add, ionic_addr_del);
++	else
++		__dev_mc_sync(netdev, ionic_ndo_addr_add, ionic_ndo_addr_del);
+ 	nfilters = le32_to_cpu(lif->identity->eth.max_mcast_filters);
+ 	if (netdev_mc_count(netdev) > nfilters) {
+ 		rx_mode |= IONIC_RX_MODE_F_ALLMULTI;
+@@ -1200,13 +1178,26 @@ static void ionic_set_rx_mode(struct net_device *netdev, bool from_ndo)
+ 			rx_mode &= ~IONIC_RX_MODE_F_ALLMULTI;
+ 	}
+ 
+-	if (lif->rx_mode != rx_mode)
+-		_ionic_lif_rx_mode(lif, rx_mode, from_ndo);
++	if (lif->rx_mode != rx_mode) {
++		if (!can_sleep) {
++			work = kzalloc(sizeof(*work), GFP_ATOMIC);
++			if (!work) {
++				netdev_err(lif->netdev, "%s OOM\n", __func__);
++				return;
++			}
++			work->type = IONIC_DW_TYPE_RX_MODE;
++			work->rx_mode = rx_mode;
++			netdev_dbg(lif->netdev, "deferred: rx_mode\n");
++			ionic_lif_deferred_enqueue(&lif->deferred, work);
++		} else {
++			ionic_lif_rx_mode(lif, rx_mode);
++		}
++	}
+ }
+ 
+ static void ionic_ndo_set_rx_mode(struct net_device *netdev)
+ {
+-	ionic_set_rx_mode(netdev, true);
++	ionic_set_rx_mode(netdev, false);
+ }
+ 
+ static __le64 ionic_netdev_features_to_nic(netdev_features_t features)
+@@ -1773,7 +1764,7 @@ static int ionic_txrx_init(struct ionic_lif *lif)
+ 	if (lif->netdev->features & NETIF_F_RXHASH)
+ 		ionic_lif_rss_init(lif);
+ 
+-	ionic_set_rx_mode(lif->netdev, false);
++	ionic_set_rx_mode(lif->netdev, true);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+index 5a7e240fd4698..c2faf96fcade8 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+@@ -2492,6 +2492,7 @@ qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		qlcnic_sriov_vf_register_map(ahw);
+ 		break;
+ 	default:
++		err = -EINVAL;
+ 		goto err_out_free_hw_res;
+ 	}
+ 
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 21b71148c5324..34bb95dd92392 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -3072,6 +3072,7 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 			dev_err(&vdev->dev,
+ 				"device MTU appears to have changed it is now %d < %d",
+ 				mtu, dev->min_mtu);
++			err = -EINVAL;
+ 			goto free;
+ 		}
+ 
+diff --git a/drivers/net/wireless/admtek/adm8211.c b/drivers/net/wireless/admtek/adm8211.c
+index 5cf2045fadeff..c41e72508d3db 100644
+--- a/drivers/net/wireless/admtek/adm8211.c
++++ b/drivers/net/wireless/admtek/adm8211.c
+@@ -1796,6 +1796,7 @@ static int adm8211_probe(struct pci_dev *pdev,
+ 	if (io_len < 256 || mem_len < 1024) {
+ 		printk(KERN_ERR "%s (adm8211): Too short PCI resources\n",
+ 		       pci_name(pdev));
++		err = -ENOMEM;
+ 		goto err_disable_pdev;
+ 	}
+ 
+@@ -1805,6 +1806,7 @@ static int adm8211_probe(struct pci_dev *pdev,
+ 	if (reg != ADM8211_SIG1 && reg != ADM8211_SIG2) {
+ 		printk(KERN_ERR "%s (adm8211): Invalid signature (0x%x)\n",
+ 		       pci_name(pdev), reg);
++		err = -EINVAL;
+ 		goto err_disable_pdev;
+ 	}
+ 
+@@ -1815,8 +1817,8 @@ static int adm8211_probe(struct pci_dev *pdev,
+ 		return err; /* someone else grabbed it? don't disable it */
+ 	}
+ 
+-	if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)) ||
+-	    dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32))) {
++	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
++	if (err) {
+ 		printk(KERN_ERR "%s (adm8211): No suitable DMA available\n",
+ 		       pci_name(pdev));
+ 		goto err_free_reg;
+diff --git a/drivers/net/wireless/ath/ath10k/usb.c b/drivers/net/wireless/ath/ath10k/usb.c
+index 05a620ff6fe2c..19b9c27e30e20 100644
+--- a/drivers/net/wireless/ath/ath10k/usb.c
++++ b/drivers/net/wireless/ath/ath10k/usb.c
+@@ -997,6 +997,8 @@ static int ath10k_usb_probe(struct usb_interface *interface,
+ 
+ 	ar_usb = ath10k_usb_priv(ar);
+ 	ret = ath10k_usb_create(ar, interface);
++	if (ret)
++		goto err;
+ 	ar_usb->ar = ar;
+ 
+ 	ar->dev_id = product_id;
+@@ -1009,7 +1011,7 @@ static int ath10k_usb_probe(struct usb_interface *interface,
+ 	ret = ath10k_core_register(ar, &bus_params);
+ 	if (ret) {
+ 		ath10k_warn(ar, "failed to register driver core: %d\n", ret);
+-		goto err;
++		goto err_usb_destroy;
+ 	}
+ 
+ 	/* TODO: remove this once USB support is fully implemented */
+@@ -1017,6 +1019,9 @@ static int ath10k_usb_probe(struct usb_interface *interface,
+ 
+ 	return 0;
+ 
++err_usb_destroy:
++	ath10k_usb_destroy(ar);
++
+ err:
+ 	ath10k_core_destroy(ar);
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 932266d1111bd..7b5834157fe51 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -1401,13 +1401,15 @@ static int ath10k_wmi_tlv_svc_avail_parse(struct ath10k *ar, u16 tag, u16 len,
+ 
+ 	switch (tag) {
+ 	case WMI_TLV_TAG_STRUCT_SERVICE_AVAILABLE_EVENT:
++		arg->service_map_ext_valid = true;
+ 		arg->service_map_ext_len = *(__le32 *)ptr;
+ 		arg->service_map_ext = ptr + sizeof(__le32);
+ 		return 0;
+ 	default:
+ 		break;
+ 	}
+-	return -EPROTO;
++
++	return 0;
+ }
+ 
+ static int ath10k_wmi_tlv_op_pull_svc_avail(struct ath10k *ar,
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index 1fa7107a50515..37b53af760d76 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -5751,8 +5751,13 @@ void ath10k_wmi_event_service_available(struct ath10k *ar, struct sk_buff *skb)
+ 			    ret);
+ 	}
+ 
+-	ath10k_wmi_map_svc_ext(ar, arg.service_map_ext, ar->wmi.svc_map,
+-			       __le32_to_cpu(arg.service_map_ext_len));
++	/*
++	 * Initialization of "arg.service_map_ext_valid" to ZERO is necessary
++	 * for the below logic to work.
++	 */
++	if (arg.service_map_ext_valid)
++		ath10k_wmi_map_svc_ext(ar, arg.service_map_ext, ar->wmi.svc_map,
++				       __le32_to_cpu(arg.service_map_ext_len));
+ }
+ 
+ static int ath10k_wmi_event_temperature(struct ath10k *ar, struct sk_buff *skb)
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index 4898e19b0af65..66ecf09068c19 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -6917,6 +6917,7 @@ struct wmi_svc_rdy_ev_arg {
+ };
+ 
+ struct wmi_svc_avail_ev_arg {
++	bool service_map_ext_valid;
+ 	__le32 service_map_ext_len;
+ 	const __le32 *service_map_ext;
+ };
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index 18b97420f0d8a..5a7915f75e1e2 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -75,12 +75,14 @@ static inline enum wme_ac ath11k_tid_to_ac(u32 tid)
+ 
+ enum ath11k_skb_flags {
+ 	ATH11K_SKB_HW_80211_ENCAP = BIT(0),
++	ATH11K_SKB_CIPHER_SET = BIT(1),
+ };
+ 
+ struct ath11k_skb_cb {
+ 	dma_addr_t paddr;
+ 	u8 eid;
+ 	u8 flags;
++	u32 cipher;
+ 	struct ath11k *ar;
+ 	struct ieee80211_vif *vif;
+ } __packed;
+diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.c b/drivers/net/wireless/ath/ath11k/dp_tx.c
+index 3d962eee4d61d..21dfd08d3debb 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_tx.c
+@@ -84,7 +84,6 @@ int ath11k_dp_tx(struct ath11k *ar, struct ath11k_vif *arvif,
+ 	struct ath11k_dp *dp = &ab->dp;
+ 	struct hal_tx_info ti = {0};
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+-	struct ieee80211_key_conf *key = info->control.hw_key;
+ 	struct ath11k_skb_cb *skb_cb = ATH11K_SKB_CB(skb);
+ 	struct hal_srng *tcl_ring;
+ 	struct ieee80211_hdr *hdr = (void *)skb->data;
+@@ -149,9 +148,9 @@ tcl_ring_sel:
+ 	ti.meta_data_flags = arvif->tcl_metadata;
+ 
+ 	if (ti.encap_type == HAL_TCL_ENCAP_TYPE_RAW) {
+-		if (key) {
++		if (skb_cb->flags & ATH11K_SKB_CIPHER_SET) {
+ 			ti.encrypt_type =
+-				ath11k_dp_tx_get_encrypt_type(key->cipher);
++				ath11k_dp_tx_get_encrypt_type(skb_cb->cipher);
+ 
+ 			if (ieee80211_has_protected(hdr->frame_control))
+ 				skb_put(skb, IEEE80211_CCMP_MIC_LEN);
+diff --git a/drivers/net/wireless/ath/ath11k/hw.c b/drivers/net/wireless/ath/ath11k/hw.c
+index 11a411b76fe42..66331da350129 100644
+--- a/drivers/net/wireless/ath/ath11k/hw.c
++++ b/drivers/net/wireless/ath/ath11k/hw.c
+@@ -127,7 +127,7 @@ static void ath11k_init_wmi_config_ipq8074(struct ath11k_base *ab,
+ 	config->beacon_tx_offload_max_vdev = ab->num_radios * TARGET_MAX_BCN_OFFLD;
+ 	config->rx_batchmode = TARGET_RX_BATCHMODE;
+ 	config->peer_map_unmap_v2_support = 1;
+-	config->twt_ap_pdev_count = 2;
++	config->twt_ap_pdev_count = ab->num_radios;
+ 	config->twt_ap_sta_count = 1000;
+ }
+ 
+@@ -157,7 +157,7 @@ static int ath11k_hw_mac_id_to_srng_id_qca6390(struct ath11k_hw_params *hw,
+ 
+ const struct ath11k_hw_ops ipq8074_ops = {
+ 	.get_hw_mac_from_pdev_id = ath11k_hw_ipq8074_mac_from_pdev_id,
+-	.wmi_init_config = ath11k_init_wmi_config_qca6390,
++	.wmi_init_config = ath11k_init_wmi_config_ipq8074,
+ 	.mac_id_to_pdev_id = ath11k_hw_mac_id_to_pdev_id_ipq8074,
+ 	.mac_id_to_srng_id = ath11k_hw_mac_id_to_srng_id_ipq8074,
+ };
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 7f8dd47d23333..af427d9051a07 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -3977,21 +3977,20 @@ static void ath11k_mgmt_over_wmi_tx_purge(struct ath11k *ar)
+ static void ath11k_mgmt_over_wmi_tx_work(struct work_struct *work)
+ {
+ 	struct ath11k *ar = container_of(work, struct ath11k, wmi_mgmt_tx_work);
+-	struct ieee80211_tx_info *info;
++	struct ath11k_skb_cb *skb_cb;
+ 	struct ath11k_vif *arvif;
+ 	struct sk_buff *skb;
+ 	int ret;
+ 
+ 	while ((skb = skb_dequeue(&ar->wmi_mgmt_tx_queue)) != NULL) {
+-		info = IEEE80211_SKB_CB(skb);
+-		if (!info->control.vif) {
+-			ath11k_warn(ar->ab, "no vif found for mgmt frame, flags 0x%x\n",
+-				    info->control.flags);
++		skb_cb = ATH11K_SKB_CB(skb);
++		if (!skb_cb->vif) {
++			ath11k_warn(ar->ab, "no vif found for mgmt frame\n");
+ 			ieee80211_free_txskb(ar->hw, skb);
+ 			continue;
+ 		}
+ 
+-		arvif = ath11k_vif_to_arvif(info->control.vif);
++		arvif = ath11k_vif_to_arvif(skb_cb->vif);
+ 		if (ar->allocated_vdev_map & (1LL << arvif->vdev_id) &&
+ 		    arvif->is_started) {
+ 			ret = ath11k_mac_mgmt_tx_wmi(ar, arvif, skb);
+@@ -4004,8 +4003,8 @@ static void ath11k_mgmt_over_wmi_tx_work(struct work_struct *work)
+ 			}
+ 		} else {
+ 			ath11k_warn(ar->ab,
+-				    "dropping mgmt frame for vdev %d, flags 0x%x is_started %d\n",
+-				    arvif->vdev_id, info->control.flags,
++				    "dropping mgmt frame for vdev %d, is_started %d\n",
++				    arvif->vdev_id,
+ 				    arvif->is_started);
+ 			ieee80211_free_txskb(ar->hw, skb);
+ 		}
+@@ -4053,10 +4052,20 @@ static void ath11k_mac_op_tx(struct ieee80211_hw *hw,
+ 	struct ieee80211_vif *vif = info->control.vif;
+ 	struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif);
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
++	struct ieee80211_key_conf *key = info->control.hw_key;
++	u32 info_flags = info->flags;
+ 	bool is_prb_rsp;
+ 	int ret;
+ 
+-	if (info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP) {
++	memset(skb_cb, 0, sizeof(*skb_cb));
++	skb_cb->vif = vif;
++
++	if (key) {
++		skb_cb->cipher = key->cipher;
++		skb_cb->flags |= ATH11K_SKB_CIPHER_SET;
++	}
++
++	if (info_flags & IEEE80211_TX_CTL_HW_80211_ENCAP) {
+ 		skb_cb->flags |= ATH11K_SKB_HW_80211_ENCAP;
+ 	} else if (ieee80211_is_mgmt(hdr->frame_control)) {
+ 		is_prb_rsp = ieee80211_is_probe_resp(hdr->frame_control);
+@@ -4094,7 +4103,8 @@ static int ath11k_mac_config_mon_status_default(struct ath11k *ar, bool enable)
+ 
+ 	if (enable) {
+ 		tlv_filter = ath11k_mac_mon_status_filter_default;
+-		tlv_filter.rx_filter = ath11k_debugfs_rx_filter(ar);
++		if (ath11k_debugfs_rx_filter(ar))
++			tlv_filter.rx_filter = ath11k_debugfs_rx_filter(ar);
+ 	}
+ 
+ 	for (i = 0; i < ab->hw_params.num_rxmda_per_pdev; i++) {
+@@ -5225,20 +5235,26 @@ ath11k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
+ 	    arvif->vdev_type != WMI_VDEV_TYPE_AP &&
+ 	    arvif->vdev_type != WMI_VDEV_TYPE_MONITOR) {
+ 		memcpy(&arvif->chanctx, ctx, sizeof(*ctx));
+-		mutex_unlock(&ar->conf_mutex);
+-		return 0;
++		ret = 0;
++		goto out;
+ 	}
+ 
+ 	if (WARN_ON(arvif->is_started)) {
+-		mutex_unlock(&ar->conf_mutex);
+-		return -EBUSY;
++		ret = -EBUSY;
++		goto out;
+ 	}
+ 
+ 	if (ab->hw_params.vdev_start_delay) {
+ 		param.vdev_id = arvif->vdev_id;
+ 		param.peer_type = WMI_PEER_TYPE_DEFAULT;
+ 		param.peer_addr = ar->mac_addr;
++
+ 		ret = ath11k_peer_create(ar, arvif, NULL, &param);
++		if (ret) {
++			ath11k_warn(ab, "failed to create peer after vdev start delay: %d",
++				    ret);
++			goto out;
++		}
+ 	}
+ 
+ 	ret = ath11k_mac_vdev_start(arvif, &ctx->def);
+@@ -5246,23 +5262,21 @@ ath11k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
+ 		ath11k_warn(ab, "failed to start vdev %i addr %pM on freq %d: %d\n",
+ 			    arvif->vdev_id, vif->addr,
+ 			    ctx->def.chan->center_freq, ret);
+-		goto err;
++		goto out;
+ 	}
+ 	if (arvif->vdev_type == WMI_VDEV_TYPE_MONITOR) {
+ 		ret = ath11k_monitor_vdev_up(ar, arvif->vdev_id);
+ 		if (ret)
+-			goto err;
++			goto out;
+ 	}
+ 
+ 	arvif->is_started = true;
+ 
+ 	/* TODO: Setup ps and cts/rts protection */
+ 
+-	mutex_unlock(&ar->conf_mutex);
+-
+-	return 0;
++	ret = 0;
+ 
+-err:
++out:
+ 	mutex_unlock(&ar->conf_mutex);
+ 
+ 	return ret;
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index c2b1651582259..99a88ca83deaa 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -1585,15 +1585,17 @@ static int ath11k_qmi_fw_ind_register_send(struct ath11k_base *ab)
+ 	struct qmi_wlanfw_ind_register_resp_msg_v01 *resp;
+ 	struct qmi_handle *handle = &ab->qmi.handle;
+ 	struct qmi_txn txn;
+-	int ret = 0;
++	int ret;
+ 
+ 	req = kzalloc(sizeof(*req), GFP_KERNEL);
+ 	if (!req)
+ 		return -ENOMEM;
+ 
+ 	resp = kzalloc(sizeof(*resp), GFP_KERNEL);
+-	if (!resp)
++	if (!resp) {
++		ret = -ENOMEM;
+ 		goto resp_out;
++	}
+ 
+ 	req->client_id_valid = 1;
+ 	req->client_id = QMI_WLANFW_CLIENT_ID;
+diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
+index f6a1f0352989d..678d0885fcee7 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.c
++++ b/drivers/net/wireless/ath/ath11k/reg.c
+@@ -80,6 +80,7 @@ ath11k_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
+ 	 */
+ 	init_country_param.flags = ALPHA_IS_SET;
+ 	memcpy(&init_country_param.cc_info.alpha2, request->alpha2, 2);
++	init_country_param.cc_info.alpha2[2] = 0;
+ 
+ 	ret = ath11k_wmi_send_init_country_cmd(ar, init_country_param);
+ 	if (ret)
+@@ -584,7 +585,6 @@ ath11k_reg_build_regd(struct ath11k_base *ab,
+ 	if (!tmp_regd)
+ 		goto ret;
+ 
+-	tmp_regd->n_reg_rules = num_rules;
+ 	memcpy(tmp_regd->alpha2, reg_info->alpha2, REG_ALPHA2_LEN + 1);
+ 	memcpy(alpha2, reg_info->alpha2, REG_ALPHA2_LEN + 1);
+ 	alpha2[2] = '\0';
+@@ -597,7 +597,7 @@ ath11k_reg_build_regd(struct ath11k_base *ab,
+ 	/* Update reg_rules[] below. Firmware is expected to
+ 	 * send these rules in order(2G rules first and then 5G)
+ 	 */
+-	for (; i < tmp_regd->n_reg_rules; i++) {
++	for (; i < num_rules; i++) {
+ 		if (reg_info->num_2g_reg_rules &&
+ 		    (i < reg_info->num_2g_reg_rules)) {
+ 			reg_rule = reg_info->reg_rules_2g_ptr + i;
+@@ -652,6 +652,8 @@ ath11k_reg_build_regd(struct ath11k_base *ab,
+ 			   flags);
+ 	}
+ 
++	tmp_regd->n_reg_rules = i;
++
+ 	if (intersect) {
+ 		default_regd = ab->default_regd[reg_info->phy_id];
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 8eca92520837e..04b8b002edfe0 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -2198,37 +2198,6 @@ int ath11k_wmi_send_scan_start_cmd(struct ath11k *ar,
+ 		}
+ 	}
+ 
+-	len = params->num_hint_s_ssid * sizeof(struct hint_short_ssid);
+-	tlv = ptr;
+-	tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_FIXED_STRUCT) |
+-		      FIELD_PREP(WMI_TLV_LEN, len);
+-	ptr += TLV_HDR_SIZE;
+-	if (params->num_hint_s_ssid) {
+-		s_ssid = ptr;
+-		for (i = 0; i < params->num_hint_s_ssid; ++i) {
+-			s_ssid->freq_flags = params->hint_s_ssid[i].freq_flags;
+-			s_ssid->short_ssid = params->hint_s_ssid[i].short_ssid;
+-			s_ssid++;
+-		}
+-	}
+-	ptr += len;
+-
+-	len = params->num_hint_bssid * sizeof(struct hint_bssid);
+-	tlv = ptr;
+-	tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_FIXED_STRUCT) |
+-		      FIELD_PREP(WMI_TLV_LEN, len);
+-	ptr += TLV_HDR_SIZE;
+-	if (params->num_hint_bssid) {
+-		hint_bssid = ptr;
+-		for (i = 0; i < params->num_hint_bssid; ++i) {
+-			hint_bssid->freq_flags =
+-				params->hint_bssid[i].freq_flags;
+-			ether_addr_copy(&params->hint_bssid[i].bssid.addr[0],
+-					&hint_bssid->bssid.addr[0]);
+-			hint_bssid++;
+-		}
+-	}
+-
+ 	ret = ath11k_wmi_cmd_send(wmi, skb,
+ 				  WMI_START_SCAN_CMDID);
+ 	if (ret) {
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index a2dbbb977d0cb..0ee421f30aa24 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -2137,7 +2137,8 @@ brcmf_cfg80211_connect(struct wiphy *wiphy, struct net_device *ndev,
+ 				    BRCMF_WSEC_MAX_PSK_LEN);
+ 	else if (profile->use_fwsup == BRCMF_PROFILE_FWSUP_SAE) {
+ 		/* clean up user-space RSNE */
+-		if (brcmf_fil_iovar_data_set(ifp, "wpaie", NULL, 0)) {
++		err = brcmf_fil_iovar_data_set(ifp, "wpaie", NULL, 0);
++		if (err) {
+ 			bphy_err(drvr, "failed to clean up user-space RSNE\n");
+ 			goto done;
+ 		}
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index 39381cbde89e6..d8db0dbcfe091 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -1936,16 +1936,18 @@ brcmf_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	fwreq = brcmf_pcie_prepare_fw_request(devinfo);
+ 	if (!fwreq) {
+ 		ret = -ENOMEM;
+-		goto fail_bus;
++		goto fail_brcmf;
+ 	}
+ 
+ 	ret = brcmf_fw_get_firmwares(bus->dev, fwreq, brcmf_pcie_setup);
+ 	if (ret < 0) {
+ 		kfree(fwreq);
+-		goto fail_bus;
++		goto fail_brcmf;
+ 	}
+ 	return 0;
+ 
++fail_brcmf:
++	brcmf_free(&devinfo->pdev->dev);
+ fail_bus:
+ 	kfree(bus->msgbuf);
+ 	kfree(bus);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 99987a789e7e3..59c2b2b6027da 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -4541,6 +4541,7 @@ void brcmf_sdio_remove(struct brcmf_sdio *bus)
+ 		brcmf_sdiod_intr_unregister(bus->sdiodev);
+ 
+ 		brcmf_detach(bus->sdiodev->dev);
++		brcmf_free(bus->sdiodev->dev);
+ 
+ 		cancel_work_sync(&bus->datawork);
+ 		if (bus->brcmf_wq)
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index 51ce93d21ffe5..8fa1c22fd96db 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -808,7 +808,7 @@ static bool is_trig_data_contained(struct iwl_ucode_tlv *new,
+ 	struct iwl_fw_ini_trigger_tlv *old_trig = (void *)old->data;
+ 	__le32 *new_data = new_trig->data, *old_data = old_trig->data;
+ 	u32 new_dwords_num = iwl_tlv_array_len(new, new_trig, data);
+-	u32 old_dwords_num = iwl_tlv_array_len(new, new_trig, data);
++	u32 old_dwords_num = iwl_tlv_array_len(old, old_trig, data);
+ 	int i, j;
+ 
+ 	for (i = 0; i < new_dwords_num; i++) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index f1c5b3a9c26f7..0d1118f66f0d5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -315,6 +315,12 @@ static const struct iwl_rx_handlers iwl_mvm_rx_handlers[] = {
+ 		       iwl_mvm_mu_mimo_grp_notif, RX_HANDLER_SYNC),
+ 	RX_HANDLER_GRP(DATA_PATH_GROUP, STA_PM_NOTIF,
+ 		       iwl_mvm_sta_pm_notif, RX_HANDLER_SYNC),
++	RX_HANDLER_GRP(MAC_CONF_GROUP, PROBE_RESPONSE_DATA_NOTIF,
++		       iwl_mvm_probe_resp_data_notif,
++		       RX_HANDLER_ASYNC_LOCKED),
++	RX_HANDLER_GRP(MAC_CONF_GROUP, CHANNEL_SWITCH_NOA_NOTIF,
++		       iwl_mvm_channel_switch_noa_notif,
++		       RX_HANDLER_SYNC),
+ };
+ #undef RX_HANDLER
+ #undef RX_HANDLER_GRP
+diff --git a/drivers/net/wireless/intersil/orinoco/orinoco_usb.c b/drivers/net/wireless/intersil/orinoco/orinoco_usb.c
+index b849d27bd741e..d1fc948364c79 100644
+--- a/drivers/net/wireless/intersil/orinoco/orinoco_usb.c
++++ b/drivers/net/wireless/intersil/orinoco/orinoco_usb.c
+@@ -1223,13 +1223,6 @@ static netdev_tx_t ezusb_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	if (skb->len < ETH_HLEN)
+ 		goto drop;
+ 
+-	ctx = ezusb_alloc_ctx(upriv, EZUSB_RID_TX, 0);
+-	if (!ctx)
+-		goto busy;
+-
+-	memset(ctx->buf, 0, BULK_BUF_SIZE);
+-	buf = ctx->buf->data;
+-
+ 	tx_control = 0;
+ 
+ 	err = orinoco_process_xmit_skb(skb, dev, priv, &tx_control,
+@@ -1237,6 +1230,13 @@ static netdev_tx_t ezusb_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	if (err)
+ 		goto drop;
+ 
++	ctx = ezusb_alloc_ctx(upriv, EZUSB_RID_TX, 0);
++	if (!ctx)
++		goto drop;
++
++	memset(ctx->buf, 0, BULK_BUF_SIZE);
++	buf = ctx->buf->data;
++
+ 	{
+ 		__le16 *tx_cntl = (__le16 *)buf;
+ 		*tx_cntl = cpu_to_le16(tx_control);
+diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c
+index 9ba8a8f64976b..6283df5aaaf8b 100644
+--- a/drivers/net/wireless/marvell/mwifiex/main.c
++++ b/drivers/net/wireless/marvell/mwifiex/main.c
+@@ -1471,6 +1471,8 @@ int mwifiex_shutdown_sw(struct mwifiex_adapter *adapter)
+ 	priv = mwifiex_get_priv(adapter, MWIFIEX_BSS_ROLE_ANY);
+ 	mwifiex_deauthenticate(priv, NULL);
+ 
++	mwifiex_init_shutdown_fw(priv, MWIFIEX_FUNC_SHUTDOWN);
++
+ 	mwifiex_uninit_sw(adapter);
+ 	adapter->is_up = false;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 214fc95b8a33f..145e839fea4e5 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -72,9 +72,11 @@ mt76_free_pending_txwi(struct mt76_dev *dev)
+ {
+ 	struct mt76_txwi_cache *t;
+ 
++	local_bh_disable();
+ 	while ((t = __mt76_get_txwi(dev)) != NULL)
+ 		dma_unmap_single(dev->dev, t->dma_addr, dev->drv->txwi_size,
+ 				 DMA_TO_DEVICE);
++	local_bh_enable();
+ }
+ 
+ static int
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index 4befe7f937a91..466447a5184f8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -305,6 +305,7 @@ mt76_phy_init(struct mt76_dev *dev, struct ieee80211_hw *hw)
+ 	ieee80211_hw_set(hw, SUPPORT_FAST_XMIT);
+ 	ieee80211_hw_set(hw, SUPPORTS_CLONED_SKBS);
+ 	ieee80211_hw_set(hw, SUPPORTS_AMSDU_IN_AMPDU);
++	ieee80211_hw_set(hw, SUPPORTS_REORDERING_BUFFER);
+ 
+ 	if (!(dev->drv->drv_flags & MT_DRV_AMSDU_OFFLOAD)) {
+ 		ieee80211_hw_set(hw, TX_AMSDU);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/pci.c b/drivers/net/wireless/mediatek/mt76/mt7603/pci.c
+index a5845da3547a9..06fa28f645f28 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/pci.c
+@@ -57,7 +57,8 @@ mt76pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	return 0;
+ error:
+-	ieee80211_free_hw(mt76_hw(dev));
++	mt76_free_device(&dev->mt76);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index 8dc645e398fda..3d62fda067e44 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -1046,15 +1046,17 @@ int mt7615_mac_wtbl_update_key(struct mt7615_dev *dev,
+ 	if (cmd == SET_KEY) {
+ 		if (cipher == MT_CIPHER_TKIP) {
+ 			/* Rx/Tx MIC keys are swapped */
++			memcpy(data, key, 16);
+ 			memcpy(data + 16, key + 24, 8);
+ 			memcpy(data + 24, key + 16, 8);
++		} else {
++			if (cipher != MT_CIPHER_BIP_CMAC_128 && wcid->cipher)
++				memmove(data + 16, data, 16);
++			if (cipher != MT_CIPHER_BIP_CMAC_128 || !wcid->cipher)
++				memcpy(data, key, keylen);
++			else if (cipher == MT_CIPHER_BIP_CMAC_128)
++				memcpy(data + 16, key, 16);
+ 		}
+-		if (cipher != MT_CIPHER_BIP_CMAC_128 && wcid->cipher)
+-			memmove(data + 16, data, 16);
+-		if (cipher != MT_CIPHER_BIP_CMAC_128 || !wcid->cipher)
+-			memcpy(data, key, keylen);
+-		else if (cipher == MT_CIPHER_BIP_CMAC_128)
+-			memcpy(data + 16, key, 16);
+ 	} else {
+ 		if (wcid->cipher & ~BIT(cipher)) {
+ 			if (cipher != MT_CIPHER_BIP_CMAC_128)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7615/mmio.c
+index 6de492a4cf025..9b191307e140e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mmio.c
+@@ -240,7 +240,8 @@ int mt7615_mmio_probe(struct device *pdev, void __iomem *mem_base,
+ 
+ 	return 0;
+ error:
+-	ieee80211_free_hw(mt76_hw(dev));
++	mt76_free_device(&dev->mt76);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c b/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
+index 2486cda3243bc..69e38f477b1e4 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
+@@ -150,7 +150,7 @@ static int mt7663s_tx_pick_quota(struct mt76_sdio *sdio, enum mt76_txq_id qid,
+ 			return -EBUSY;
+ 	} else {
+ 		if (sdio->sched.pse_data_quota < *pse_size + pse_sz ||
+-		    sdio->sched.ple_data_quota < *ple_size)
++		    sdio->sched.ple_data_quota < *ple_size + 1)
+ 			return -EBUSY;
+ 
+ 		*ple_size = *ple_size + 1;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c b/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
+index dda11c704abaa..b87d8e136cb9a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
+@@ -194,7 +194,8 @@ mt76x0e_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	return 0;
+ 
+ error:
+-	ieee80211_free_hw(mt76_hw(dev));
++	mt76_free_device(&dev->mt76);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
+index 4d50dad29ddff..ecaf85b483ac3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
+@@ -90,7 +90,8 @@ mt76x2e_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	return 0;
+ 
+ error:
+-	ieee80211_free_hw(mt76_hw(dev));
++	mt76_free_device(&dev->mt76);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+index 1049927faf246..8f2ad32ade180 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+@@ -233,6 +233,7 @@ static const struct file_operations fops_tx_stats = {
+ 	.read = seq_read,
+ 	.llseek = seq_lseek,
+ 	.release = single_release,
++	.owner = THIS_MODULE,
+ };
+ 
+ static int mt7915_read_temperature(struct seq_file *s, void *data)
+@@ -460,6 +461,7 @@ static const struct file_operations fops_sta_stats = {
+ 	.read = seq_read,
+ 	.llseek = seq_lseek,
+ 	.release = single_release,
++	.owner = THIS_MODULE,
+ };
+ 
+ void mt7915_sta_add_debugfs(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/pci.c b/drivers/net/wireless/mediatek/mt76/mt7915/pci.c
+index fe62b4d853e48..3ac5bbb94d294 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/pci.c
+@@ -140,7 +140,7 @@ static int mt7915_pci_probe(struct pci_dev *pdev,
+ 	dev = container_of(mdev, struct mt7915_dev, mt76);
+ 	ret = mt7915_alloc_device(pdev, dev);
+ 	if (ret)
+-		return ret;
++		goto error;
+ 
+ 	mt76_mmio_init(&dev->mt76, pcim_iomap_table(pdev)[0]);
+ 	mdev->rev = (mt7915_l1_rr(dev, MT_HW_CHIPID) << 16) |
+@@ -163,7 +163,8 @@ static int mt7915_pci_probe(struct pci_dev *pdev,
+ 
+ 	return 0;
+ error:
+-	ieee80211_free_hw(mt76_hw(dev));
++	mt76_free_device(&dev->mt76);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c b/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
+index 5337e67092ca6..0f328ce47fee3 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
+@@ -299,19 +299,19 @@ static int qtnf_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	sysctl_bar = qtnf_map_bar(pdev, QTN_SYSCTL_BAR);
+ 	if (IS_ERR(sysctl_bar)) {
+ 		pr_err("failed to map BAR%u\n", QTN_SYSCTL_BAR);
+-		return ret;
++		return PTR_ERR(sysctl_bar);
+ 	}
+ 
+ 	dmareg_bar = qtnf_map_bar(pdev, QTN_DMA_BAR);
+ 	if (IS_ERR(dmareg_bar)) {
+ 		pr_err("failed to map BAR%u\n", QTN_DMA_BAR);
+-		return ret;
++		return PTR_ERR(dmareg_bar);
+ 	}
+ 
+ 	epmem_bar = qtnf_map_bar(pdev, QTN_SHMEM_BAR);
+ 	if (IS_ERR(epmem_bar)) {
+ 		pr_err("failed to map BAR%u\n", QTN_SHMEM_BAR);
+-		return ret;
++		return PTR_ERR(epmem_bar);
+ 	}
+ 
+ 	chipid = qtnf_chip_id_get(sysctl_bar);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index a62d41c0ccbc0..00b5589847985 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -741,24 +741,24 @@ static int rsi_reset_card(struct rsi_hw *adapter)
+ 		if (ret < 0)
+ 			goto fail;
+ 	} else {
+-		if ((rsi_usb_master_reg_write(adapter,
+-					      NWP_WWD_INTERRUPT_TIMER,
+-					      NWP_WWD_INT_TIMER_CLKS,
+-					      RSI_9116_REG_SIZE)) < 0) {
++		ret = rsi_usb_master_reg_write(adapter,
++					       NWP_WWD_INTERRUPT_TIMER,
++					       NWP_WWD_INT_TIMER_CLKS,
++					       RSI_9116_REG_SIZE);
++		if (ret < 0)
+ 			goto fail;
+-		}
+-		if ((rsi_usb_master_reg_write(adapter,
+-					      NWP_WWD_SYSTEM_RESET_TIMER,
+-					      NWP_WWD_SYS_RESET_TIMER_CLKS,
+-					      RSI_9116_REG_SIZE)) < 0) {
++		ret = rsi_usb_master_reg_write(adapter,
++					       NWP_WWD_SYSTEM_RESET_TIMER,
++					       NWP_WWD_SYS_RESET_TIMER_CLKS,
++					       RSI_9116_REG_SIZE);
++		if (ret < 0)
+ 			goto fail;
+-		}
+-		if ((rsi_usb_master_reg_write(adapter,
+-					      NWP_WWD_MODE_AND_RSTART,
+-					      NWP_WWD_TIMER_DISABLE,
+-					      RSI_9116_REG_SIZE)) < 0) {
++		ret = rsi_usb_master_reg_write(adapter,
++					       NWP_WWD_MODE_AND_RSTART,
++					       NWP_WWD_TIMER_DISABLE,
++					       RSI_9116_REG_SIZE);
++		if (ret < 0)
+ 			goto fail;
+-		}
+ 	}
+ 
+ 	rsi_dbg(INFO_ZONE, "Reset card done\n");
+diff --git a/drivers/net/wireless/st/cw1200/main.c b/drivers/net/wireless/st/cw1200/main.c
+index f7fe56affbcd2..326b1cc1d2bcb 100644
+--- a/drivers/net/wireless/st/cw1200/main.c
++++ b/drivers/net/wireless/st/cw1200/main.c
+@@ -381,6 +381,7 @@ static struct ieee80211_hw *cw1200_init_common(const u8 *macaddr,
+ 				    CW1200_LINK_ID_MAX,
+ 				    cw1200_skb_dtor,
+ 				    priv)) {
++		destroy_workqueue(priv->workqueue);
+ 		ieee80211_free_hw(hw);
+ 		return NULL;
+ 	}
+@@ -392,6 +393,7 @@ static struct ieee80211_hw *cw1200_init_common(const u8 *macaddr,
+ 			for (; i > 0; i--)
+ 				cw1200_queue_deinit(&priv->tx_queue[i - 1]);
+ 			cw1200_queue_stats_deinit(&priv->tx_queue_stats);
++			destroy_workqueue(priv->workqueue);
+ 			ieee80211_free_hw(hw);
+ 			return NULL;
+ 		}
+diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
+index f1c1624cec8f5..6f10e0998f1ce 100644
+--- a/drivers/net/xen-netback/xenbus.c
++++ b/drivers/net/xen-netback/xenbus.c
+@@ -557,12 +557,14 @@ static int xen_register_credit_watch(struct xenbus_device *dev,
+ 		return -ENOMEM;
+ 	snprintf(node, maxlen, "%s/rate", dev->nodename);
+ 	vif->credit_watch.node = node;
++	vif->credit_watch.will_handle = NULL;
+ 	vif->credit_watch.callback = xen_net_rate_changed;
+ 	err = register_xenbus_watch(&vif->credit_watch);
+ 	if (err) {
+ 		pr_err("Failed to set watcher %s\n", vif->credit_watch.node);
+ 		kfree(node);
+ 		vif->credit_watch.node = NULL;
++		vif->credit_watch.will_handle = NULL;
+ 		vif->credit_watch.callback = NULL;
+ 	}
+ 	return err;
+@@ -609,6 +611,7 @@ static int xen_register_mcast_ctrl_watch(struct xenbus_device *dev,
+ 	snprintf(node, maxlen, "%s/request-multicast-control",
+ 		 dev->otherend);
+ 	vif->mcast_ctrl_watch.node = node;
++	vif->mcast_ctrl_watch.will_handle = NULL;
+ 	vif->mcast_ctrl_watch.callback = xen_mcast_ctrl_changed;
+ 	err = register_xenbus_watch(&vif->mcast_ctrl_watch);
+ 	if (err) {
+@@ -616,6 +619,7 @@ static int xen_register_mcast_ctrl_watch(struct xenbus_device *dev,
+ 		       vif->mcast_ctrl_watch.node);
+ 		kfree(node);
+ 		vif->mcast_ctrl_watch.node = NULL;
++		vif->mcast_ctrl_watch.will_handle = NULL;
+ 		vif->mcast_ctrl_watch.callback = NULL;
+ 	}
+ 	return err;
+@@ -820,7 +824,7 @@ static void connect(struct backend_info *be)
+ 	xenvif_carrier_on(be->vif);
+ 
+ 	unregister_hotplug_status_watch(be);
+-	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
++	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, NULL,
+ 				   hotplug_status_changed,
+ 				   "%s/%s", dev->nodename, "hotplug-status");
+ 	if (!err)
+diff --git a/drivers/nfc/s3fwrn5/firmware.c b/drivers/nfc/s3fwrn5/firmware.c
+index ec930ee2c847e..64df50827642b 100644
+--- a/drivers/nfc/s3fwrn5/firmware.c
++++ b/drivers/nfc/s3fwrn5/firmware.c
+@@ -293,8 +293,10 @@ static int s3fwrn5_fw_request_firmware(struct s3fwrn5_fw_info *fw_info)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (fw->fw->size < S3FWRN5_FW_IMAGE_HEADER_SIZE)
++	if (fw->fw->size < S3FWRN5_FW_IMAGE_HEADER_SIZE) {
++		release_firmware(fw->fw);
+ 		return -EINVAL;
++	}
+ 
+ 	memcpy(fw->date, fw->fw->data + 0x00, 12);
+ 	fw->date[12] = '\0';
+diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
+index 47a4828b8b310..9251441fd8a35 100644
+--- a/drivers/nvdimm/label.c
++++ b/drivers/nvdimm/label.c
+@@ -980,6 +980,15 @@ static int __blk_label_update(struct nd_region *nd_region,
+ 		}
+ 	}
+ 
++	/* release slots associated with any invalidated UUIDs */
++	mutex_lock(&nd_mapping->lock);
++	list_for_each_entry_safe(label_ent, e, &nd_mapping->labels, list)
++		if (test_and_clear_bit(ND_LABEL_REAP, &label_ent->flags)) {
++			reap_victim(nd_mapping, label_ent);
++			list_move(&label_ent->list, &list);
++		}
++	mutex_unlock(&nd_mapping->lock);
++
+ 	/*
+ 	 * Find the resource associated with the first label in the set
+ 	 * per the v1.2 namespace specification.
+@@ -999,8 +1008,10 @@ static int __blk_label_update(struct nd_region *nd_region,
+ 		if (is_old_resource(res, old_res_list, old_num_resources))
+ 			continue; /* carry-over */
+ 		slot = nd_label_alloc_slot(ndd);
+-		if (slot == UINT_MAX)
++		if (slot == UINT_MAX) {
++			rc = -ENXIO;
+ 			goto abort;
++		}
+ 		dev_dbg(ndd->dev, "allocated: %d\n", slot);
+ 
+ 		nd_label = to_label(ndd, slot);
+diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
+index bea86899bd5df..9c3d2982248d3 100644
+--- a/drivers/pci/controller/pcie-brcmstb.c
++++ b/drivers/pci/controller/pcie-brcmstb.c
+@@ -893,6 +893,7 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
+ 		burst = 0x2; /* 512 bytes */
+ 
+ 	/* Set SCB_MAX_BURST_SIZE, CFG_READ_UR_MODE, SCB_ACCESS_EN */
++	tmp = readl(base + PCIE_MISC_MISC_CTRL);
+ 	u32p_replace_bits(&tmp, 1, PCIE_MISC_MISC_CTRL_SCB_ACCESS_EN_MASK);
+ 	u32p_replace_bits(&tmp, 1, PCIE_MISC_MISC_CTRL_CFG_READ_UR_MODE_MASK);
+ 	u32p_replace_bits(&tmp, burst, PCIE_MISC_MISC_CTRL_MAX_BURST_SIZE_MASK);
+diff --git a/drivers/pci/controller/pcie-iproc.c b/drivers/pci/controller/pcie-iproc.c
+index 905e938082432..cc5b7823edeb7 100644
+--- a/drivers/pci/controller/pcie-iproc.c
++++ b/drivers/pci/controller/pcie-iproc.c
+@@ -192,8 +192,15 @@ static const struct iproc_pcie_ib_map paxb_v2_ib_map[] = {
+ 		.imap_window_offset = 0x4,
+ 	},
+ 	{
+-		/* IARR1/IMAP1 (currently unused) */
+-		.type = IPROC_PCIE_IB_MAP_INVALID,
++		/* IARR1/IMAP1 */
++		.type = IPROC_PCIE_IB_MAP_MEM,
++		.size_unit = SZ_1M,
++		.region_sizes = { 8 },
++		.nr_sizes = 1,
++		.nr_windows = 8,
++		.imap_addr_offset = 0x4,
++		.imap_window_offset = 0x8,
++
+ 	},
+ 	{
+ 		/* IARR2/IMAP2 */
+@@ -307,7 +314,7 @@ enum iproc_pcie_reg {
+ };
+ 
+ /* iProc PCIe PAXB BCMA registers */
+-static const u16 iproc_pcie_reg_paxb_bcma[] = {
++static const u16 iproc_pcie_reg_paxb_bcma[IPROC_PCIE_MAX_NUM_REG] = {
+ 	[IPROC_PCIE_CLK_CTRL]		= 0x000,
+ 	[IPROC_PCIE_CFG_IND_ADDR]	= 0x120,
+ 	[IPROC_PCIE_CFG_IND_DATA]	= 0x124,
+@@ -318,7 +325,7 @@ static const u16 iproc_pcie_reg_paxb_bcma[] = {
+ };
+ 
+ /* iProc PCIe PAXB registers */
+-static const u16 iproc_pcie_reg_paxb[] = {
++static const u16 iproc_pcie_reg_paxb[IPROC_PCIE_MAX_NUM_REG] = {
+ 	[IPROC_PCIE_CLK_CTRL]		= 0x000,
+ 	[IPROC_PCIE_CFG_IND_ADDR]	= 0x120,
+ 	[IPROC_PCIE_CFG_IND_DATA]	= 0x124,
+@@ -334,7 +341,7 @@ static const u16 iproc_pcie_reg_paxb[] = {
+ };
+ 
+ /* iProc PCIe PAXB v2 registers */
+-static const u16 iproc_pcie_reg_paxb_v2[] = {
++static const u16 iproc_pcie_reg_paxb_v2[IPROC_PCIE_MAX_NUM_REG] = {
+ 	[IPROC_PCIE_CLK_CTRL]		= 0x000,
+ 	[IPROC_PCIE_CFG_IND_ADDR]	= 0x120,
+ 	[IPROC_PCIE_CFG_IND_DATA]	= 0x124,
+@@ -351,6 +358,8 @@ static const u16 iproc_pcie_reg_paxb_v2[] = {
+ 	[IPROC_PCIE_OMAP3]		= 0xdf8,
+ 	[IPROC_PCIE_IARR0]		= 0xd00,
+ 	[IPROC_PCIE_IMAP0]		= 0xc00,
++	[IPROC_PCIE_IARR1]		= 0xd08,
++	[IPROC_PCIE_IMAP1]		= 0xd70,
+ 	[IPROC_PCIE_IARR2]		= 0xd10,
+ 	[IPROC_PCIE_IMAP2]		= 0xcc0,
+ 	[IPROC_PCIE_IARR3]		= 0xe00,
+@@ -363,7 +372,7 @@ static const u16 iproc_pcie_reg_paxb_v2[] = {
+ };
+ 
+ /* iProc PCIe PAXC v1 registers */
+-static const u16 iproc_pcie_reg_paxc[] = {
++static const u16 iproc_pcie_reg_paxc[IPROC_PCIE_MAX_NUM_REG] = {
+ 	[IPROC_PCIE_CLK_CTRL]		= 0x000,
+ 	[IPROC_PCIE_CFG_IND_ADDR]	= 0x1f0,
+ 	[IPROC_PCIE_CFG_IND_DATA]	= 0x1f4,
+@@ -372,7 +381,7 @@ static const u16 iproc_pcie_reg_paxc[] = {
+ };
+ 
+ /* iProc PCIe PAXC v2 registers */
+-static const u16 iproc_pcie_reg_paxc_v2[] = {
++static const u16 iproc_pcie_reg_paxc_v2[IPROC_PCIE_MAX_NUM_REG] = {
+ 	[IPROC_PCIE_MSI_GIC_MODE]	= 0x050,
+ 	[IPROC_PCIE_MSI_BASE_ADDR]	= 0x074,
+ 	[IPROC_PCIE_MSI_WINDOW_SIZE]	= 0x078,
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index bf03648c20723..745a4e0c4994f 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -1060,7 +1060,7 @@ static int acpi_pci_propagate_wakeup(struct pci_bus *bus, bool enable)
+ {
+ 	while (bus->parent) {
+ 		if (acpi_pm_device_can_wakeup(&bus->self->dev))
+-			return acpi_pm_set_bridge_wakeup(&bus->self->dev, enable);
++			return acpi_pm_set_device_wakeup(&bus->self->dev, enable);
+ 
+ 		bus = bus->parent;
+ 	}
+@@ -1068,7 +1068,7 @@ static int acpi_pci_propagate_wakeup(struct pci_bus *bus, bool enable)
+ 	/* We have reached the root bus. */
+ 	if (bus->bridge) {
+ 		if (acpi_pm_device_can_wakeup(bus->bridge))
+-			return acpi_pm_set_bridge_wakeup(bus->bridge, enable);
++			return acpi_pm_set_device_wakeup(bus->bridge, enable);
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index e578d34095e91..6427cbd0a5be2 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -6202,19 +6202,21 @@ static resource_size_t pci_specified_resource_alignment(struct pci_dev *dev,
+ 	while (*p) {
+ 		count = 0;
+ 		if (sscanf(p, "%d%n", &align_order, &count) == 1 &&
+-							p[count] == '@') {
++		    p[count] == '@') {
+ 			p += count + 1;
++			if (align_order > 63) {
++				pr_err("PCI: Invalid requested alignment (order %d)\n",
++				       align_order);
++				align_order = PAGE_SHIFT;
++			}
+ 		} else {
+-			align_order = -1;
++			align_order = PAGE_SHIFT;
+ 		}
+ 
+ 		ret = pci_dev_str_match(dev, p, &p);
+ 		if (ret == 1) {
+ 			*resize = true;
+-			if (align_order == -1)
+-				align = PAGE_SIZE;
+-			else
+-				align = 1 << align_order;
++			align = 1ULL << align_order;
+ 			break;
+ 		} else if (ret < 0) {
+ 			pr_err("PCI: Can't parse resource_alignment parameter: %s\n",
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index f70692ac79c56..fb1dc11e7cc52 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5567,17 +5567,26 @@ static void pci_fixup_no_d0_pme(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ASMEDIA, 0x2142, pci_fixup_no_d0_pme);
+ 
+ /*
+- * Device [12d8:0x400e] and [12d8:0x400f]
++ * Device 12d8:0x400e [OHCI] and 12d8:0x400f [EHCI]
++ *
+  * These devices advertise PME# support in all power states but don't
+  * reliably assert it.
++ *
++ * These devices also advertise MSI, but documentation (PI7C9X440SL.pdf)
++ * says "The MSI Function is not implemented on this device" in chapters
++ * 7.3.27, 7.3.29-7.3.31.
+  */
+-static void pci_fixup_no_pme(struct pci_dev *dev)
++static void pci_fixup_no_msi_no_pme(struct pci_dev *dev)
+ {
++#ifdef CONFIG_PCI_MSI
++	pci_info(dev, "MSI is not implemented on this device, disabling it\n");
++	dev->no_msi = 1;
++#endif
+ 	pci_info(dev, "PME# is unreliable, disabling it\n");
+ 	dev->pme_support = 0;
+ }
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_PERICOM, 0x400e, pci_fixup_no_pme);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_PERICOM, 0x400f, pci_fixup_no_pme);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_PERICOM, 0x400e, pci_fixup_no_msi_no_pme);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_PERICOM, 0x400f, pci_fixup_no_msi_no_pme);
+ 
+ static void apex_pci_fixup_class(struct pci_dev *pdev)
+ {
+diff --git a/drivers/pci/slot.c b/drivers/pci/slot.c
+index 3861505741e6d..ed2077e7470ae 100644
+--- a/drivers/pci/slot.c
++++ b/drivers/pci/slot.c
+@@ -272,6 +272,9 @@ placeholder:
+ 		goto err;
+ 	}
+ 
++	INIT_LIST_HEAD(&slot->list);
++	list_add(&slot->list, &parent->slots);
++
+ 	err = kobject_init_and_add(&slot->kobj, &pci_slot_ktype, NULL,
+ 				   "%s", slot_name);
+ 	if (err) {
+@@ -279,9 +282,6 @@ placeholder:
+ 		goto err;
+ 	}
+ 
+-	INIT_LIST_HEAD(&slot->list);
+-	list_add(&slot->list, &parent->slots);
+-
+ 	down_read(&pci_bus_sem);
+ 	list_for_each_entry(dev, &parent->devices, bus_list)
+ 		if (PCI_SLOT(dev->devfn) == slot_nr)
+diff --git a/drivers/phy/mediatek/Kconfig b/drivers/phy/mediatek/Kconfig
+index c8126bde9d7cc..43150608d8b62 100644
+--- a/drivers/phy/mediatek/Kconfig
++++ b/drivers/phy/mediatek/Kconfig
+@@ -38,7 +38,9 @@ config PHY_MTK_XSPHY
+ 
+ config PHY_MTK_HDMI
+ 	tristate "MediaTek HDMI-PHY Driver"
+-	depends on ARCH_MEDIATEK && OF
++	depends on ARCH_MEDIATEK || COMPILE_TEST
++	depends on COMMON_CLK
++	depends on OF
+ 	select GENERIC_PHY
+ 	help
+ 	  Support HDMI PHY for Mediatek SoCs.
+diff --git a/drivers/phy/mediatek/phy-mtk-hdmi.c b/drivers/phy/mediatek/phy-mtk-hdmi.c
+index 47c029d4b270b..206cc34687223 100644
+--- a/drivers/phy/mediatek/phy-mtk-hdmi.c
++++ b/drivers/phy/mediatek/phy-mtk-hdmi.c
+@@ -84,8 +84,9 @@ mtk_hdmi_phy_dev_get_ops(const struct mtk_hdmi_phy *hdmi_phy)
+ 	    hdmi_phy->conf->hdmi_phy_disable_tmds)
+ 		return &mtk_hdmi_phy_dev_ops;
+ 
+-	dev_err(hdmi_phy->dev, "Failed to get dev ops of phy\n");
+-		return NULL;
++	if (hdmi_phy)
++		dev_err(hdmi_phy->dev, "Failed to get dev ops of phy\n");
++	return NULL;
+ }
+ 
+ static void mtk_hdmi_phy_clk_get_data(struct mtk_hdmi_phy *hdmi_phy,
+diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+index e34e4475027ca..2cb949f931b69 100644
+--- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c
++++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+@@ -656,8 +656,10 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ 	 */
+ 	pm_runtime_enable(dev);
+ 	phy_usb2_ops = of_device_get_match_data(dev);
+-	if (!phy_usb2_ops)
+-		return -EINVAL;
++	if (!phy_usb2_ops) {
++		ret = -EINVAL;
++		goto error;
++	}
+ 
+ 	mutex_init(&channel->lock);
+ 	for (i = 0; i < NUM_OF_PHYS; i++) {
+diff --git a/drivers/phy/tegra/xusb.c b/drivers/phy/tegra/xusb.c
+index ad88d74c18842..181a1be5f4917 100644
+--- a/drivers/phy/tegra/xusb.c
++++ b/drivers/phy/tegra/xusb.c
+@@ -688,7 +688,7 @@ static int tegra_xusb_setup_usb_role_switch(struct tegra_xusb_port *port)
+ 	 * reference to retrieve usb-phy details.
+ 	 */
+ 	port->usb_phy.dev = &lane->pad->lanes[port->index]->dev;
+-	port->usb_phy.dev->driver = port->padctl->dev->driver;
++	port->usb_phy.dev->driver = port->dev.driver;
+ 	port->usb_phy.otg->usb_phy = &port->usb_phy;
+ 	port->usb_phy.otg->set_peripheral = tegra_xusb_set_peripheral;
+ 	port->usb_phy.otg->set_host = tegra_xusb_set_host;
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 3663d87f51a01..9fc4433fece4f 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -1602,9 +1602,11 @@ static int pinctrl_pins_show(struct seq_file *s, void *what)
+ 	struct pinctrl_dev *pctldev = s->private;
+ 	const struct pinctrl_ops *ops = pctldev->desc->pctlops;
+ 	unsigned i, pin;
++#ifdef CONFIG_GPIOLIB
+ 	struct pinctrl_gpio_range *range;
+ 	unsigned int gpio_num;
+ 	struct gpio_chip *chip;
++#endif
+ 
+ 	seq_printf(s, "registered pins: %d\n", pctldev->desc->npins);
+ 
+diff --git a/drivers/pinctrl/pinctrl-falcon.c b/drivers/pinctrl/pinctrl-falcon.c
+index 62c02b969327f..7521a924dffb0 100644
+--- a/drivers/pinctrl/pinctrl-falcon.c
++++ b/drivers/pinctrl/pinctrl-falcon.c
+@@ -431,24 +431,28 @@ static int pinctrl_falcon_probe(struct platform_device *pdev)
+ 
+ 	/* load and remap the pad resources of the different banks */
+ 	for_each_compatible_node(np, NULL, "lantiq,pad-falcon") {
+-		struct platform_device *ppdev = of_find_device_by_node(np);
+ 		const __be32 *bank = of_get_property(np, "lantiq,bank", NULL);
+ 		struct resource res;
++		struct platform_device *ppdev;
+ 		u32 avail;
+ 		int pins;
+ 
+ 		if (!of_device_is_available(np))
+ 			continue;
+ 
+-		if (!ppdev) {
+-			dev_err(&pdev->dev, "failed to find pad pdev\n");
+-			continue;
+-		}
+ 		if (!bank || *bank >= PORTS)
+ 			continue;
+ 		if (of_address_to_resource(np, 0, &res))
+ 			continue;
++
++		ppdev = of_find_device_by_node(np);
++		if (!ppdev) {
++			dev_err(&pdev->dev, "failed to find pad pdev\n");
++			continue;
++		}
++
+ 		falcon_info.clk[*bank] = clk_get(&ppdev->dev, NULL);
++		put_device(&ppdev->dev);
+ 		if (IS_ERR(falcon_info.clk[*bank])) {
+ 			dev_err(&ppdev->dev, "failed to get clock\n");
+ 			of_node_put(np);
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sun50i-a100.c b/drivers/pinctrl/sunxi/pinctrl-sun50i-a100.c
+index 19cfd1e76ee2c..e69f6da40dc0a 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sun50i-a100.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sun50i-a100.c
+@@ -677,7 +677,7 @@ static const struct sunxi_desc_pin a100_pins[] = {
+ 		  SUNXI_FUNCTION_IRQ_BANK(0x6, 6, 19)),
+ };
+ 
+-static const unsigned int a100_irq_bank_map[] = { 0, 1, 2, 3, 4, 5, 6};
++static const unsigned int a100_irq_bank_map[] = { 1, 2, 3, 4, 5, 6, 7};
+ 
+ static const struct sunxi_pinctrl_desc a100_pinctrl_data = {
+ 	.pins = a100_pins,
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index 8e792f8e2dc9a..e42a3a0005a72 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -1142,20 +1142,22 @@ static void sunxi_pinctrl_irq_handler(struct irq_desc *desc)
+ 	if (bank == pctl->desc->irq_banks)
+ 		return;
+ 
++	chained_irq_enter(chip, desc);
++
+ 	reg = sunxi_irq_status_reg_from_bank(pctl->desc, bank);
+ 	val = readl(pctl->membase + reg);
+ 
+ 	if (val) {
+ 		int irqoffset;
+ 
+-		chained_irq_enter(chip, desc);
+ 		for_each_set_bit(irqoffset, &val, IRQ_PER_BANK) {
+ 			int pin_irq = irq_find_mapping(pctl->domain,
+ 						       bank * IRQ_PER_BANK + irqoffset);
+ 			generic_handle_irq(pin_irq);
+ 		}
+-		chained_irq_exit(chip, desc);
+ 	}
++
++	chained_irq_exit(chip, desc);
+ }
+ 
+ static int sunxi_pinctrl_add_function(struct sunxi_pinctrl *pctl,
+diff --git a/drivers/platform/chrome/cros_ec_spi.c b/drivers/platform/chrome/cros_ec_spi.c
+index dfa1f816a45f4..f9df218fc2bbe 100644
+--- a/drivers/platform/chrome/cros_ec_spi.c
++++ b/drivers/platform/chrome/cros_ec_spi.c
+@@ -742,7 +742,6 @@ static int cros_ec_spi_probe(struct spi_device *spi)
+ 	int err;
+ 
+ 	spi->bits_per_word = 8;
+-	spi->mode = SPI_MODE_0;
+ 	spi->rt = true;
+ 	err = spi_setup(spi);
+ 	if (err < 0)
+diff --git a/drivers/platform/x86/dell-smbios-base.c b/drivers/platform/x86/dell-smbios-base.c
+index 2e2cd565926aa..3a1dbf1994413 100644
+--- a/drivers/platform/x86/dell-smbios-base.c
++++ b/drivers/platform/x86/dell-smbios-base.c
+@@ -594,6 +594,7 @@ static int __init dell_smbios_init(void)
+ 	if (wmi && smm) {
+ 		pr_err("No SMBIOS backends available (wmi: %d, smm: %d)\n",
+ 			wmi, smm);
++		ret = -ENODEV;
+ 		goto fail_create_group;
+ 	}
+ 
+diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
+index 0419c8001fe33..3b49a1f4061bc 100644
+--- a/drivers/platform/x86/intel-vbtn.c
++++ b/drivers/platform/x86/intel-vbtn.c
+@@ -15,9 +15,13 @@
+ #include <linux/platform_device.h>
+ #include <linux/suspend.h>
+ 
++/* Returned when NOT in tablet mode on some HP Stream x360 11 models */
++#define VGBS_TABLET_MODE_FLAG_ALT	0x10
+ /* When NOT in tablet mode, VGBS returns with the flag 0x40 */
+-#define TABLET_MODE_FLAG 0x40
+-#define DOCK_MODE_FLAG   0x80
++#define VGBS_TABLET_MODE_FLAG		0x40
++#define VGBS_DOCK_MODE_FLAG		0x80
++
++#define VGBS_TABLET_MODE_FLAGS (VGBS_TABLET_MODE_FLAG | VGBS_TABLET_MODE_FLAG_ALT)
+ 
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("AceLan Kao");
+@@ -72,9 +76,9 @@ static void detect_tablet_mode(struct platform_device *device)
+ 	if (ACPI_FAILURE(status))
+ 		return;
+ 
+-	m = !(vgbs & TABLET_MODE_FLAG);
++	m = !(vgbs & VGBS_TABLET_MODE_FLAGS);
+ 	input_report_switch(priv->input_dev, SW_TABLET_MODE, m);
+-	m = (vgbs & DOCK_MODE_FLAG) ? 1 : 0;
++	m = (vgbs & VGBS_DOCK_MODE_FLAG) ? 1 : 0;
+ 	input_report_switch(priv->input_dev, SW_DOCK, m);
+ }
+ 
+@@ -212,6 +216,12 @@ static const struct dmi_system_id dmi_switches_allow_list[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion 13 x360 PC"),
+ 		},
+ 	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Switch SA5-271"),
++		},
++	},
+ 	{} /* Array terminator */
+ };
+ 
+diff --git a/drivers/platform/x86/mlx-platform.c b/drivers/platform/x86/mlx-platform.c
+index 986ad3dda1c10..8bce3da32a42b 100644
+--- a/drivers/platform/x86/mlx-platform.c
++++ b/drivers/platform/x86/mlx-platform.c
+@@ -319,15 +319,6 @@ static struct i2c_mux_reg_platform_data mlxplat_extended_mux_data[] = {
+ };
+ 
+ /* Platform hotplug devices */
+-static struct i2c_board_info mlxplat_mlxcpld_psu[] = {
+-	{
+-		I2C_BOARD_INFO("24c02", 0x51),
+-	},
+-	{
+-		I2C_BOARD_INFO("24c02", 0x50),
+-	},
+-};
+-
+ static struct i2c_board_info mlxplat_mlxcpld_pwr[] = {
+ 	{
+ 		I2C_BOARD_INFO("dps460", 0x59),
+@@ -383,15 +374,13 @@ static struct mlxreg_core_data mlxplat_mlxcpld_default_psu_items_data[] = {
+ 		.label = "psu1",
+ 		.reg = MLXPLAT_CPLD_LPC_REG_PSU_OFFSET,
+ 		.mask = BIT(0),
+-		.hpdev.brdinfo = &mlxplat_mlxcpld_psu[0],
+-		.hpdev.nr = MLXPLAT_CPLD_PSU_DEFAULT_NR,
++		.hpdev.nr = MLXPLAT_CPLD_NR_NONE,
+ 	},
+ 	{
+ 		.label = "psu2",
+ 		.reg = MLXPLAT_CPLD_LPC_REG_PSU_OFFSET,
+ 		.mask = BIT(1),
+-		.hpdev.brdinfo = &mlxplat_mlxcpld_psu[1],
+-		.hpdev.nr = MLXPLAT_CPLD_PSU_DEFAULT_NR,
++		.hpdev.nr = MLXPLAT_CPLD_NR_NONE,
+ 	},
+ };
+ 
+@@ -458,7 +447,7 @@ static struct mlxreg_core_item mlxplat_mlxcpld_default_items[] = {
+ 		.aggr_mask = MLXPLAT_CPLD_AGGR_PSU_MASK_DEF,
+ 		.reg = MLXPLAT_CPLD_LPC_REG_PSU_OFFSET,
+ 		.mask = MLXPLAT_CPLD_PSU_MASK,
+-		.count = ARRAY_SIZE(mlxplat_mlxcpld_psu),
++		.count = ARRAY_SIZE(mlxplat_mlxcpld_default_psu_items_data),
+ 		.inversed = 1,
+ 		.health = false,
+ 	},
+@@ -467,7 +456,7 @@ static struct mlxreg_core_item mlxplat_mlxcpld_default_items[] = {
+ 		.aggr_mask = MLXPLAT_CPLD_AGGR_PWR_MASK_DEF,
+ 		.reg = MLXPLAT_CPLD_LPC_REG_PWR_OFFSET,
+ 		.mask = MLXPLAT_CPLD_PWR_MASK,
+-		.count = ARRAY_SIZE(mlxplat_mlxcpld_pwr),
++		.count = ARRAY_SIZE(mlxplat_mlxcpld_default_pwr_items_data),
+ 		.inversed = 0,
+ 		.health = false,
+ 	},
+@@ -476,7 +465,7 @@ static struct mlxreg_core_item mlxplat_mlxcpld_default_items[] = {
+ 		.aggr_mask = MLXPLAT_CPLD_AGGR_FAN_MASK_DEF,
+ 		.reg = MLXPLAT_CPLD_LPC_REG_FAN_OFFSET,
+ 		.mask = MLXPLAT_CPLD_FAN_MASK,
+-		.count = ARRAY_SIZE(mlxplat_mlxcpld_fan),
++		.count = ARRAY_SIZE(mlxplat_mlxcpld_default_fan_items_data),
+ 		.inversed = 1,
+ 		.health = false,
+ 	},
+@@ -497,7 +486,7 @@ static struct mlxreg_core_item mlxplat_mlxcpld_comex_items[] = {
+ 		.aggr_mask = MLXPLAT_CPLD_AGGR_MASK_CARRIER,
+ 		.reg = MLXPLAT_CPLD_LPC_REG_PSU_OFFSET,
+ 		.mask = MLXPLAT_CPLD_PSU_MASK,
+-		.count = ARRAY_SIZE(mlxplat_mlxcpld_psu),
++		.count = ARRAY_SIZE(mlxplat_mlxcpld_default_psu_items_data),
+ 		.inversed = 1,
+ 		.health = false,
+ 	},
+@@ -506,7 +495,7 @@ static struct mlxreg_core_item mlxplat_mlxcpld_comex_items[] = {
+ 		.aggr_mask = MLXPLAT_CPLD_AGGR_MASK_CARRIER,
+ 		.reg = MLXPLAT_CPLD_LPC_REG_PWR_OFFSET,
+ 		.mask = MLXPLAT_CPLD_PWR_MASK,
+-		.count = ARRAY_SIZE(mlxplat_mlxcpld_pwr),
++		.count = ARRAY_SIZE(mlxplat_mlxcpld_default_pwr_items_data),
+ 		.inversed = 0,
+ 		.health = false,
+ 	},
+@@ -515,7 +504,7 @@ static struct mlxreg_core_item mlxplat_mlxcpld_comex_items[] = {
+ 		.aggr_mask = MLXPLAT_CPLD_AGGR_MASK_CARRIER,
+ 		.reg = MLXPLAT_CPLD_LPC_REG_FAN_OFFSET,
+ 		.mask = MLXPLAT_CPLD_FAN_MASK,
+-		.count = ARRAY_SIZE(mlxplat_mlxcpld_fan),
++		.count = ARRAY_SIZE(mlxplat_mlxcpld_default_fan_items_data),
+ 		.inversed = 1,
+ 		.health = false,
+ 	},
+@@ -603,15 +592,13 @@ static struct mlxreg_core_data mlxplat_mlxcpld_msn274x_psu_items_data[] = {
+ 		.label = "psu1",
+ 		.reg = MLXPLAT_CPLD_LPC_REG_PSU_OFFSET,
+ 		.mask = BIT(0),
+-		.hpdev.brdinfo = &mlxplat_mlxcpld_psu[0],
+-		.hpdev.nr = MLXPLAT_CPLD_PSU_MSNXXXX_NR,
++		.hpdev.nr = MLXPLAT_CPLD_NR_NONE,
+ 	},
+ 	{
+ 		.label = "psu2",
+ 		.reg = MLXPLAT_CPLD_LPC_REG_PSU_OFFSET,
+ 		.mask = BIT(1),
+-		.hpdev.brdinfo = &mlxplat_mlxcpld_psu[1],
+-		.hpdev.nr = MLXPLAT_CPLD_PSU_MSNXXXX_NR,
++		.hpdev.nr = MLXPLAT_CPLD_NR_NONE,
+ 	},
+ };
+ 
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index 9d981b76c1e72..a4df1ea923864 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -548,14 +548,15 @@ out:
+ 
+ /*
+  * The HP Pavilion x2 10 series comes in a number of variants:
+- * Bay Trail SoC    + AXP288 PMIC, DMI_BOARD_NAME: "815D"
+- * Cherry Trail SoC + AXP288 PMIC, DMI_BOARD_NAME: "813E"
+- * Cherry Trail SoC + TI PMIC,     DMI_BOARD_NAME: "827C" or "82F4"
++ * Bay Trail SoC    + AXP288 PMIC, Micro-USB, DMI_BOARD_NAME: "8021"
++ * Bay Trail SoC    + AXP288 PMIC, Type-C,    DMI_BOARD_NAME: "815D"
++ * Cherry Trail SoC + AXP288 PMIC, Type-C,    DMI_BOARD_NAME: "813E"
++ * Cherry Trail SoC + TI PMIC,     Type-C,    DMI_BOARD_NAME: "827C" or "82F4"
+  *
+- * The variants with the AXP288 PMIC are all kinds of special:
++ * The variants with the AXP288 + Type-C connector are all kinds of special:
+  *
+- * 1. All variants use a Type-C connector which the AXP288 does not support, so
+- * when using a Type-C charger it is not recognized. Unlike most AXP288 devices,
++ * 1. They use a Type-C connector which the AXP288 does not support, so when
++ * using a Type-C charger it is not recognized. Unlike most AXP288 devices,
+  * this model actually has mostly working ACPI AC / Battery code, the ACPI code
+  * "solves" this by simply setting the input_current_limit to 3A.
+  * There are still some issues with the ACPI code, so we use this native driver,
+@@ -578,12 +579,17 @@ out:
+  */
+ static const struct dmi_system_id axp288_hp_x2_dmi_ids[] = {
+ 	{
+-		/*
+-		 * Bay Trail model has "Hewlett-Packard" as sys_vendor, Cherry
+-		 * Trail model has "HP", so we only match on product_name.
+-		 */
+ 		.matches = {
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion x2 Detachable"),
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "HP Pavilion x2 Detachable"),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "815D"),
++		},
++	},
++	{
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "HP"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "HP Pavilion x2 Detachable"),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "813E"),
+ 		},
+ 	},
+ 	{} /* Terminating entry */
+diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c
+index d14186525e1e9..845af0f44c022 100644
+--- a/drivers/power/supply/bq24190_charger.c
++++ b/drivers/power/supply/bq24190_charger.c
+@@ -448,8 +448,10 @@ static ssize_t bq24190_sysfs_show(struct device *dev,
+ 		return -EINVAL;
+ 
+ 	ret = pm_runtime_get_sync(bdi->dev);
+-	if (ret < 0)
++	if (ret < 0) {
++		pm_runtime_put_noidle(bdi->dev);
+ 		return ret;
++	}
+ 
+ 	ret = bq24190_read_mask(bdi, info->reg, info->mask, info->shift, &v);
+ 	if (ret)
+@@ -1077,8 +1079,10 @@ static int bq24190_charger_get_property(struct power_supply *psy,
+ 	dev_dbg(bdi->dev, "prop: %d\n", psp);
+ 
+ 	ret = pm_runtime_get_sync(bdi->dev);
+-	if (ret < 0)
++	if (ret < 0) {
++		pm_runtime_put_noidle(bdi->dev);
+ 		return ret;
++	}
+ 
+ 	switch (psp) {
+ 	case POWER_SUPPLY_PROP_CHARGE_TYPE:
+@@ -1149,8 +1153,10 @@ static int bq24190_charger_set_property(struct power_supply *psy,
+ 	dev_dbg(bdi->dev, "prop: %d\n", psp);
+ 
+ 	ret = pm_runtime_get_sync(bdi->dev);
+-	if (ret < 0)
++	if (ret < 0) {
++		pm_runtime_put_noidle(bdi->dev);
+ 		return ret;
++	}
+ 
+ 	switch (psp) {
+ 	case POWER_SUPPLY_PROP_ONLINE:
+@@ -1410,8 +1416,10 @@ static int bq24190_battery_get_property(struct power_supply *psy,
+ 	dev_dbg(bdi->dev, "prop: %d\n", psp);
+ 
+ 	ret = pm_runtime_get_sync(bdi->dev);
+-	if (ret < 0)
++	if (ret < 0) {
++		pm_runtime_put_noidle(bdi->dev);
+ 		return ret;
++	}
+ 
+ 	switch (psp) {
+ 	case POWER_SUPPLY_PROP_STATUS:
+@@ -1456,8 +1464,10 @@ static int bq24190_battery_set_property(struct power_supply *psy,
+ 	dev_dbg(bdi->dev, "prop: %d\n", psp);
+ 
+ 	ret = pm_runtime_get_sync(bdi->dev);
+-	if (ret < 0)
++	if (ret < 0) {
++		pm_runtime_put_noidle(bdi->dev);
+ 		return ret;
++	}
+ 
+ 	switch (psp) {
+ 	case POWER_SUPPLY_PROP_ONLINE:
+diff --git a/drivers/power/supply/bq25890_charger.c b/drivers/power/supply/bq25890_charger.c
+index 34c21c51bac10..945c3257ca931 100644
+--- a/drivers/power/supply/bq25890_charger.c
++++ b/drivers/power/supply/bq25890_charger.c
+@@ -299,7 +299,7 @@ static const union {
+ 	/* TODO: BQ25896 has max ICHG 3008 mA */
+ 	[TBL_ICHG] =	{ .rt = {0,	  5056000, 64000} },	 /* uA */
+ 	[TBL_ITERM] =	{ .rt = {64000,   1024000, 64000} },	 /* uA */
+-	[TBL_IILIM] =   { .rt = {50000,   3200000, 50000} },	 /* uA */
++	[TBL_IILIM] =   { .rt = {100000,  3250000, 50000} },	 /* uA */
+ 	[TBL_VREG] =	{ .rt = {3840000, 4608000, 16000} },	 /* uV */
+ 	[TBL_BOOSTV] =	{ .rt = {4550000, 5510000, 64000} },	 /* uV */
+ 	[TBL_SYSVMIN] = { .rt = {3000000, 3700000, 100000} },	 /* uV */
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index f284547913d6f..2e9672fe4df1f 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -85,9 +85,10 @@ static enum power_supply_property max17042_battery_props[] = {
+ 	POWER_SUPPLY_PROP_TEMP_MAX,
+ 	POWER_SUPPLY_PROP_HEALTH,
+ 	POWER_SUPPLY_PROP_SCOPE,
++	POWER_SUPPLY_PROP_TIME_TO_EMPTY_NOW,
++	// these two have to be at the end on the list
+ 	POWER_SUPPLY_PROP_CURRENT_NOW,
+ 	POWER_SUPPLY_PROP_CURRENT_AVG,
+-	POWER_SUPPLY_PROP_TIME_TO_EMPTY_NOW,
+ };
+ 
+ static int max17042_get_temperature(struct max17042_chip *chip, int *temp)
+diff --git a/drivers/ps3/ps3stor_lib.c b/drivers/ps3/ps3stor_lib.c
+index 333ba83006e48..a12a1ad9b5fe3 100644
+--- a/drivers/ps3/ps3stor_lib.c
++++ b/drivers/ps3/ps3stor_lib.c
+@@ -189,7 +189,7 @@ int ps3stor_setup(struct ps3_storage_device *dev, irq_handler_t handler)
+ 	dev->bounce_lpar = ps3_mm_phys_to_lpar(__pa(dev->bounce_buf));
+ 	dev->bounce_dma = dma_map_single(&dev->sbd.core, dev->bounce_buf,
+ 					 dev->bounce_size, DMA_BIDIRECTIONAL);
+-	if (!dev->bounce_dma) {
++	if (dma_mapping_error(&dev->sbd.core, dev->bounce_dma)) {
+ 		dev_err(&dev->sbd.core, "%s:%u: map DMA region failed\n",
+ 			__func__, __LINE__);
+ 		error = -ENODEV;
+diff --git a/drivers/pwm/pwm-imx27.c b/drivers/pwm/pwm-imx27.c
+index c50d453552bd4..86bcafd23e4f6 100644
+--- a/drivers/pwm/pwm-imx27.c
++++ b/drivers/pwm/pwm-imx27.c
+@@ -235,8 +235,9 @@ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 
+ 	period_cycles /= prescale;
+ 	c = clkrate * state->duty_cycle;
+-	do_div(c, NSEC_PER_SEC * prescale);
++	do_div(c, NSEC_PER_SEC);
+ 	duty_cycles = c;
++	duty_cycles /= prescale;
+ 
+ 	/*
+ 	 * according to imx pwm RM, the real period value should be PERIOD
+diff --git a/drivers/pwm/pwm-lp3943.c b/drivers/pwm/pwm-lp3943.c
+index 7551253ada32b..bf3f14fb5f244 100644
+--- a/drivers/pwm/pwm-lp3943.c
++++ b/drivers/pwm/pwm-lp3943.c
+@@ -275,6 +275,7 @@ static int lp3943_pwm_probe(struct platform_device *pdev)
+ 	lp3943_pwm->chip.dev = &pdev->dev;
+ 	lp3943_pwm->chip.ops = &lp3943_pwm_ops;
+ 	lp3943_pwm->chip.npwm = LP3943_NUM_PWMS;
++	lp3943_pwm->chip.base = -1;
+ 
+ 	platform_set_drvdata(pdev, lp3943_pwm);
+ 
+diff --git a/drivers/pwm/pwm-sun4i.c b/drivers/pwm/pwm-sun4i.c
+index 38a4c5c1317b2..482d5b9cec1fb 100644
+--- a/drivers/pwm/pwm-sun4i.c
++++ b/drivers/pwm/pwm-sun4i.c
+@@ -294,12 +294,8 @@ static int sun4i_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 
+ 	ctrl |= BIT_CH(PWM_CLK_GATING, pwm->hwpwm);
+ 
+-	if (state->enabled) {
++	if (state->enabled)
+ 		ctrl |= BIT_CH(PWM_EN, pwm->hwpwm);
+-	} else {
+-		ctrl &= ~BIT_CH(PWM_EN, pwm->hwpwm);
+-		ctrl &= ~BIT_CH(PWM_CLK_GATING, pwm->hwpwm);
+-	}
+ 
+ 	sun4i_pwm_writel(sun4i_pwm, ctrl, PWM_CTRL_REG);
+ 
+diff --git a/drivers/pwm/pwm-zx.c b/drivers/pwm/pwm-zx.c
+index e2c21cc34a96a..3763ce5311ac2 100644
+--- a/drivers/pwm/pwm-zx.c
++++ b/drivers/pwm/pwm-zx.c
+@@ -238,6 +238,7 @@ static int zx_pwm_probe(struct platform_device *pdev)
+ 	ret = pwmchip_add(&zpc->chip);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "failed to add PWM chip: %d\n", ret);
++		clk_disable_unprepare(zpc->pclk);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/regulator/axp20x-regulator.c b/drivers/regulator/axp20x-regulator.c
+index cd1224182ad74..90cb8445f7216 100644
+--- a/drivers/regulator/axp20x-regulator.c
++++ b/drivers/regulator/axp20x-regulator.c
+@@ -594,7 +594,7 @@ static const struct regulator_desc axp22x_regulators[] = {
+ 		 AXP22X_DLDO1_V_OUT, AXP22X_DLDO1_V_OUT_MASK,
+ 		 AXP22X_PWR_OUT_CTRL2, AXP22X_PWR_OUT_DLDO1_MASK),
+ 	AXP_DESC(AXP22X, DLDO2, "dldo2", "dldoin", 700, 3300, 100,
+-		 AXP22X_DLDO2_V_OUT, AXP22X_PWR_OUT_DLDO2_MASK,
++		 AXP22X_DLDO2_V_OUT, AXP22X_DLDO2_V_OUT_MASK,
+ 		 AXP22X_PWR_OUT_CTRL2, AXP22X_PWR_OUT_DLDO2_MASK),
+ 	AXP_DESC(AXP22X, DLDO3, "dldo3", "dldoin", 700, 3300, 100,
+ 		 AXP22X_DLDO3_V_OUT, AXP22X_DLDO3_V_OUT_MASK,
+diff --git a/drivers/remoteproc/mtk_common.h b/drivers/remoteproc/mtk_common.h
+index 47b4561443a94..f2bcc9d9fda65 100644
+--- a/drivers/remoteproc/mtk_common.h
++++ b/drivers/remoteproc/mtk_common.h
+@@ -32,22 +32,22 @@
+ #define MT8183_SCP_CACHESIZE_8KB	BIT(8)
+ #define MT8183_SCP_CACHE_CON_WAYEN	BIT(10)
+ 
+-#define MT8192_L2TCM_SRAM_PD_0		0x210C0
+-#define MT8192_L2TCM_SRAM_PD_1		0x210C4
+-#define MT8192_L2TCM_SRAM_PD_2		0x210C8
+-#define MT8192_L1TCM_SRAM_PDN		0x2102C
+-#define MT8192_CPU0_SRAM_PD		0x21080
+-
+-#define MT8192_SCP2APMCU_IPC_SET	0x24080
+-#define MT8192_SCP2APMCU_IPC_CLR	0x24084
++#define MT8192_L2TCM_SRAM_PD_0		0x10C0
++#define MT8192_L2TCM_SRAM_PD_1		0x10C4
++#define MT8192_L2TCM_SRAM_PD_2		0x10C8
++#define MT8192_L1TCM_SRAM_PDN		0x102C
++#define MT8192_CPU0_SRAM_PD		0x1080
++
++#define MT8192_SCP2APMCU_IPC_SET	0x4080
++#define MT8192_SCP2APMCU_IPC_CLR	0x4084
+ #define MT8192_SCP_IPC_INT_BIT		BIT(0)
+-#define MT8192_SCP2SPM_IPC_CLR		0x24094
+-#define MT8192_GIPC_IN_SET		0x24098
++#define MT8192_SCP2SPM_IPC_CLR		0x4094
++#define MT8192_GIPC_IN_SET		0x4098
+ #define MT8192_HOST_IPC_INT_BIT		BIT(0)
+ 
+-#define MT8192_CORE0_SW_RSTN_CLR	0x30000
+-#define MT8192_CORE0_SW_RSTN_SET	0x30004
+-#define MT8192_CORE0_WDT_CFG		0x30034
++#define MT8192_CORE0_SW_RSTN_CLR	0x10000
++#define MT8192_CORE0_SW_RSTN_SET	0x10004
++#define MT8192_CORE0_WDT_CFG		0x10034
+ 
+ #define SCP_FW_VER_LEN			32
+ #define SCP_SHARE_BUFFER_SIZE		288
+diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c
+index 577cbd5d421ec..52fa01d67c18e 100644
+--- a/drivers/remoteproc/mtk_scp.c
++++ b/drivers/remoteproc/mtk_scp.c
+@@ -350,9 +350,10 @@ static int scp_load(struct rproc *rproc, const struct firmware *fw)
+ 
+ 	ret = scp->data->scp_before_load(scp);
+ 	if (ret < 0)
+-		return ret;
++		goto leave;
+ 
+ 	ret = scp_elf_load_segments(rproc, fw);
++leave:
+ 	clk_disable_unprepare(scp->clk);
+ 
+ 	return ret;
+@@ -772,12 +773,14 @@ static const struct mtk_scp_of_data mt8192_of_data = {
+ 	.host_to_scp_int_bit = MT8192_HOST_IPC_INT_BIT,
+ };
+ 
++#if defined(CONFIG_OF)
+ static const struct of_device_id mtk_scp_of_match[] = {
+ 	{ .compatible = "mediatek,mt8183-scp", .data = &mt8183_of_data },
+ 	{ .compatible = "mediatek,mt8192-scp", .data = &mt8192_of_data },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, mtk_scp_of_match);
++#endif
+ 
+ static struct platform_driver mtk_scp_driver = {
+ 	.probe = scp_probe,
+diff --git a/drivers/remoteproc/qcom_q6v5_adsp.c b/drivers/remoteproc/qcom_q6v5_adsp.c
+index efb2c1aa80a3c..9eb599701f9b0 100644
+--- a/drivers/remoteproc/qcom_q6v5_adsp.c
++++ b/drivers/remoteproc/qcom_q6v5_adsp.c
+@@ -193,8 +193,10 @@ static int adsp_start(struct rproc *rproc)
+ 
+ 	dev_pm_genpd_set_performance_state(adsp->dev, INT_MAX);
+ 	ret = pm_runtime_get_sync(adsp->dev);
+-	if (ret)
++	if (ret) {
++		pm_runtime_put_noidle(adsp->dev);
+ 		goto disable_xo_clk;
++	}
+ 
+ 	ret = clk_bulk_prepare_enable(adsp->num_clks, adsp->clks);
+ 	if (ret) {
+@@ -362,15 +364,12 @@ static int adsp_init_mmio(struct qcom_adsp *adsp,
+ 				struct platform_device *pdev)
+ {
+ 	struct device_node *syscon;
+-	struct resource *res;
+ 	int ret;
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	adsp->qdsp6ss_base = devm_ioremap(&pdev->dev, res->start,
+-			resource_size(res));
+-	if (!adsp->qdsp6ss_base) {
++	adsp->qdsp6ss_base = devm_platform_ioremap_resource(pdev, 0);
++	if (IS_ERR(adsp->qdsp6ss_base)) {
+ 		dev_err(adsp->dev, "failed to map QDSP6SS registers\n");
+-		return -ENOMEM;
++		return PTR_ERR(adsp->qdsp6ss_base);
+ 	}
+ 
+ 	syscon = of_parse_phandle(pdev->dev.of_node, "qcom,halt-regs", 0);
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index eb3457a6c3b73..ba6f7551242de 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -349,8 +349,11 @@ static int q6v5_pds_enable(struct q6v5 *qproc, struct device **pds,
+ 	for (i = 0; i < pd_count; i++) {
+ 		dev_pm_genpd_set_performance_state(pds[i], INT_MAX);
+ 		ret = pm_runtime_get_sync(pds[i]);
+-		if (ret < 0)
++		if (ret < 0) {
++			pm_runtime_put_noidle(pds[i]);
++			dev_pm_genpd_set_performance_state(pds[i], 0);
+ 			goto unroll_pd_votes;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index 3837f23995e05..0678b417707ef 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -90,8 +90,11 @@ static int adsp_pds_enable(struct qcom_adsp *adsp, struct device **pds,
+ 	for (i = 0; i < pd_count; i++) {
+ 		dev_pm_genpd_set_performance_state(pds[i], INT_MAX);
+ 		ret = pm_runtime_get_sync(pds[i]);
+-		if (ret < 0)
++		if (ret < 0) {
++			pm_runtime_put_noidle(pds[i]);
++			dev_pm_genpd_set_performance_state(pds[i], 0);
+ 			goto unroll_pd_votes;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/remoteproc/qcom_sysmon.c b/drivers/remoteproc/qcom_sysmon.c
+index 9eb2f6bccea63..b37b111b15b39 100644
+--- a/drivers/remoteproc/qcom_sysmon.c
++++ b/drivers/remoteproc/qcom_sysmon.c
+@@ -22,6 +22,9 @@ struct qcom_sysmon {
+ 	struct rproc_subdev subdev;
+ 	struct rproc *rproc;
+ 
++	int state;
++	struct mutex state_lock;
++
+ 	struct list_head node;
+ 
+ 	const char *name;
+@@ -448,7 +451,10 @@ static int sysmon_prepare(struct rproc_subdev *subdev)
+ 		.ssr_event = SSCTL_SSR_EVENT_BEFORE_POWERUP
+ 	};
+ 
++	mutex_lock(&sysmon->state_lock);
++	sysmon->state = SSCTL_SSR_EVENT_BEFORE_POWERUP;
+ 	blocking_notifier_call_chain(&sysmon_notifiers, 0, (void *)&event);
++	mutex_unlock(&sysmon->state_lock);
+ 
+ 	return 0;
+ }
+@@ -472,20 +478,25 @@ static int sysmon_start(struct rproc_subdev *subdev)
+ 		.ssr_event = SSCTL_SSR_EVENT_AFTER_POWERUP
+ 	};
+ 
++	mutex_lock(&sysmon->state_lock);
++	sysmon->state = SSCTL_SSR_EVENT_AFTER_POWERUP;
+ 	blocking_notifier_call_chain(&sysmon_notifiers, 0, (void *)&event);
++	mutex_unlock(&sysmon->state_lock);
+ 
+ 	mutex_lock(&sysmon_lock);
+ 	list_for_each_entry(target, &sysmon_list, node) {
+-		if (target == sysmon ||
+-		    target->rproc->state != RPROC_RUNNING)
++		if (target == sysmon)
+ 			continue;
+ 
++		mutex_lock(&target->state_lock);
+ 		event.subsys_name = target->name;
++		event.ssr_event = target->state;
+ 
+ 		if (sysmon->ssctl_version == 2)
+ 			ssctl_send_event(sysmon, &event);
+ 		else if (sysmon->ept)
+ 			sysmon_send_event(sysmon, &event);
++		mutex_unlock(&target->state_lock);
+ 	}
+ 	mutex_unlock(&sysmon_lock);
+ 
+@@ -500,7 +511,10 @@ static void sysmon_stop(struct rproc_subdev *subdev, bool crashed)
+ 		.ssr_event = SSCTL_SSR_EVENT_BEFORE_SHUTDOWN
+ 	};
+ 
++	mutex_lock(&sysmon->state_lock);
++	sysmon->state = SSCTL_SSR_EVENT_BEFORE_SHUTDOWN;
+ 	blocking_notifier_call_chain(&sysmon_notifiers, 0, (void *)&event);
++	mutex_unlock(&sysmon->state_lock);
+ 
+ 	/* Don't request graceful shutdown if we've crashed */
+ 	if (crashed)
+@@ -521,7 +535,10 @@ static void sysmon_unprepare(struct rproc_subdev *subdev)
+ 		.ssr_event = SSCTL_SSR_EVENT_AFTER_SHUTDOWN
+ 	};
+ 
++	mutex_lock(&sysmon->state_lock);
++	sysmon->state = SSCTL_SSR_EVENT_AFTER_SHUTDOWN;
+ 	blocking_notifier_call_chain(&sysmon_notifiers, 0, (void *)&event);
++	mutex_unlock(&sysmon->state_lock);
+ }
+ 
+ /**
+@@ -534,11 +551,10 @@ static int sysmon_notify(struct notifier_block *nb, unsigned long event,
+ 			 void *data)
+ {
+ 	struct qcom_sysmon *sysmon = container_of(nb, struct qcom_sysmon, nb);
+-	struct rproc *rproc = sysmon->rproc;
+ 	struct sysmon_event *sysmon_event = data;
+ 
+ 	/* Skip non-running rprocs and the originating instance */
+-	if (rproc->state != RPROC_RUNNING ||
++	if (sysmon->state != SSCTL_SSR_EVENT_AFTER_POWERUP ||
+ 	    !strcmp(sysmon_event->subsys_name, sysmon->name)) {
+ 		dev_dbg(sysmon->dev, "not notifying %s\n", sysmon->name);
+ 		return NOTIFY_DONE;
+@@ -591,6 +607,7 @@ struct qcom_sysmon *qcom_add_sysmon_subdev(struct rproc *rproc,
+ 	init_completion(&sysmon->ind_comp);
+ 	init_completion(&sysmon->shutdown_comp);
+ 	mutex_init(&sysmon->lock);
++	mutex_init(&sysmon->state_lock);
+ 
+ 	sysmon->shutdown_irq = of_irq_get_byname(sysmon->dev->of_node,
+ 						 "shutdown-ack");
+diff --git a/drivers/remoteproc/ti_k3_dsp_remoteproc.c b/drivers/remoteproc/ti_k3_dsp_remoteproc.c
+index 9011e477290ce..863c0214e0a8e 100644
+--- a/drivers/remoteproc/ti_k3_dsp_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_dsp_remoteproc.c
+@@ -445,10 +445,10 @@ static int k3_dsp_rproc_of_get_memories(struct platform_device *pdev,
+ 
+ 		kproc->mem[i].cpu_addr = devm_ioremap_wc(dev, res->start,
+ 							 resource_size(res));
+-		if (IS_ERR(kproc->mem[i].cpu_addr)) {
++		if (!kproc->mem[i].cpu_addr) {
+ 			dev_err(dev, "failed to map %s memory\n",
+ 				data->mems[i].name);
+-			return PTR_ERR(kproc->mem[i].cpu_addr);
++			return -ENOMEM;
+ 		}
+ 		kproc->mem[i].bus_addr = res->start;
+ 		kproc->mem[i].dev_addr = data->mems[i].dev_addr;
+diff --git a/drivers/rtc/rtc-ep93xx.c b/drivers/rtc/rtc-ep93xx.c
+index 8ec9ea1ca72e1..6f90b85a58140 100644
+--- a/drivers/rtc/rtc-ep93xx.c
++++ b/drivers/rtc/rtc-ep93xx.c
+@@ -33,7 +33,7 @@ struct ep93xx_rtc {
+ static int ep93xx_rtc_get_swcomp(struct device *dev, unsigned short *preload,
+ 				 unsigned short *delete)
+ {
+-	struct ep93xx_rtc *ep93xx_rtc = dev_get_platdata(dev);
++	struct ep93xx_rtc *ep93xx_rtc = dev_get_drvdata(dev);
+ 	unsigned long comp;
+ 
+ 	comp = readl(ep93xx_rtc->mmio_base + EP93XX_RTC_SWCOMP);
+@@ -51,7 +51,7 @@ static int ep93xx_rtc_get_swcomp(struct device *dev, unsigned short *preload,
+ 
+ static int ep93xx_rtc_read_time(struct device *dev, struct rtc_time *tm)
+ {
+-	struct ep93xx_rtc *ep93xx_rtc = dev_get_platdata(dev);
++	struct ep93xx_rtc *ep93xx_rtc = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+ 	time = readl(ep93xx_rtc->mmio_base + EP93XX_RTC_DATA);
+@@ -62,7 +62,7 @@ static int ep93xx_rtc_read_time(struct device *dev, struct rtc_time *tm)
+ 
+ static int ep93xx_rtc_set_time(struct device *dev, struct rtc_time *tm)
+ {
+-	struct ep93xx_rtc *ep93xx_rtc = dev_get_platdata(dev);
++	struct ep93xx_rtc *ep93xx_rtc = dev_get_drvdata(dev);
+ 	unsigned long secs = rtc_tm_to_time64(tm);
+ 
+ 	writel(secs + 1, ep93xx_rtc->mmio_base + EP93XX_RTC_LOAD);
+diff --git a/drivers/rtc/rtc-pcf2127.c b/drivers/rtc/rtc-pcf2127.c
+index 07a5630ec841f..4d9711d51f8f3 100644
+--- a/drivers/rtc/rtc-pcf2127.c
++++ b/drivers/rtc/rtc-pcf2127.c
+@@ -243,10 +243,8 @@ static int pcf2127_nvmem_read(void *priv, unsigned int offset,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = regmap_bulk_read(pcf2127->regmap, PCF2127_REG_RAM_RD_CMD,
+-			       val, bytes);
+-
+-	return ret ?: bytes;
++	return regmap_bulk_read(pcf2127->regmap, PCF2127_REG_RAM_RD_CMD,
++				val, bytes);
+ }
+ 
+ static int pcf2127_nvmem_write(void *priv, unsigned int offset,
+@@ -261,10 +259,8 @@ static int pcf2127_nvmem_write(void *priv, unsigned int offset,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = regmap_bulk_write(pcf2127->regmap, PCF2127_REG_RAM_WRT_CMD,
+-				val, bytes);
+-
+-	return ret ?: bytes;
++	return regmap_bulk_write(pcf2127->regmap, PCF2127_REG_RAM_WRT_CMD,
++				 val, bytes);
+ }
+ 
+ /* watchdog driver */
+diff --git a/drivers/s390/block/dasd_alias.c b/drivers/s390/block/dasd_alias.c
+index 99f86612f7751..dc78a523a69f2 100644
+--- a/drivers/s390/block/dasd_alias.c
++++ b/drivers/s390/block/dasd_alias.c
+@@ -256,7 +256,6 @@ void dasd_alias_disconnect_device_from_lcu(struct dasd_device *device)
+ 		return;
+ 	device->discipline->get_uid(device, &uid);
+ 	spin_lock_irqsave(&lcu->lock, flags);
+-	list_del_init(&device->alias_list);
+ 	/* make sure that the workers don't use this device */
+ 	if (device == lcu->suc_data.device) {
+ 		spin_unlock_irqrestore(&lcu->lock, flags);
+@@ -283,6 +282,7 @@ void dasd_alias_disconnect_device_from_lcu(struct dasd_device *device)
+ 
+ 	spin_lock_irqsave(&aliastree.lock, flags);
+ 	spin_lock(&lcu->lock);
++	list_del_init(&device->alias_list);
+ 	if (list_empty(&lcu->grouplist) &&
+ 	    list_empty(&lcu->active_devices) &&
+ 	    list_empty(&lcu->inactive_devices)) {
+@@ -462,11 +462,19 @@ static int read_unit_address_configuration(struct dasd_device *device,
+ 	spin_unlock_irqrestore(&lcu->lock, flags);
+ 
+ 	rc = dasd_sleep_on(cqr);
+-	if (rc && !suborder_not_supported(cqr)) {
++	if (!rc)
++		goto out;
++
++	if (suborder_not_supported(cqr)) {
++		/* suborder not supported or device unusable for IO */
++		rc = -EOPNOTSUPP;
++	} else {
++		/* IO failed but should be retried */
+ 		spin_lock_irqsave(&lcu->lock, flags);
+ 		lcu->flags |= NEED_UAC_UPDATE;
+ 		spin_unlock_irqrestore(&lcu->lock, flags);
+ 	}
++out:
+ 	dasd_sfree_request(cqr, cqr->memdev);
+ 	return rc;
+ }
+@@ -503,6 +511,14 @@ static int _lcu_update(struct dasd_device *refdev, struct alias_lcu *lcu)
+ 		return rc;
+ 
+ 	spin_lock_irqsave(&lcu->lock, flags);
++	/*
++	 * there is another update needed skip the remaining handling
++	 * the data might already be outdated
++	 * but especially do not add the device to an LCU with pending
++	 * update
++	 */
++	if (lcu->flags & NEED_UAC_UPDATE)
++		goto out;
+ 	lcu->pav = NO_PAV;
+ 	for (i = 0; i < MAX_DEVICES_PER_LCU; ++i) {
+ 		switch (lcu->uac->unit[i].ua_type) {
+@@ -521,6 +537,7 @@ static int _lcu_update(struct dasd_device *refdev, struct alias_lcu *lcu)
+ 				 alias_list) {
+ 		_add_device_to_lcu(lcu, device, refdev);
+ 	}
++out:
+ 	spin_unlock_irqrestore(&lcu->lock, flags);
+ 	return 0;
+ }
+@@ -625,6 +642,7 @@ int dasd_alias_add_device(struct dasd_device *device)
+ 	}
+ 	if (lcu->flags & UPDATE_PENDING) {
+ 		list_move(&device->alias_list, &lcu->active_devices);
++		private->pavgroup = NULL;
+ 		_schedule_lcu_update(lcu, device);
+ 	}
+ 	spin_unlock_irqrestore(&lcu->lock, flags);
+diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
+index b29fe8d50baf2..33280ca181e95 100644
+--- a/drivers/s390/cio/device.c
++++ b/drivers/s390/cio/device.c
+@@ -1664,10 +1664,10 @@ void __init ccw_device_destroy_console(struct ccw_device *cdev)
+ 	struct io_subchannel_private *io_priv = to_io_private(sch);
+ 
+ 	set_io_private(sch, NULL);
+-	put_device(&sch->dev);
+-	put_device(&cdev->dev);
+ 	dma_free_coherent(&sch->dev, sizeof(*io_priv->dma_area),
+ 			  io_priv->dma_area, io_priv->dma_area_dma);
++	put_device(&sch->dev);
++	put_device(&cdev->dev);
+ 	kfree(io_priv);
+ }
+ 
+diff --git a/drivers/scsi/aacraid/commctrl.c b/drivers/scsi/aacraid/commctrl.c
+index e3e157a749880..1b1da162f5f6b 100644
+--- a/drivers/scsi/aacraid/commctrl.c
++++ b/drivers/scsi/aacraid/commctrl.c
+@@ -25,6 +25,7 @@
+ #include <linux/completion.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/blkdev.h>
++#include <linux/compat.h>
+ #include <linux/delay.h> /* ssleep prototype */
+ #include <linux/kthread.h>
+ #include <linux/uaccess.h>
+@@ -226,6 +227,12 @@ static int open_getadapter_fib(struct aac_dev * dev, void __user *arg)
+ 	return status;
+ }
+ 
++struct compat_fib_ioctl {
++	u32	fibctx;
++	s32	wait;
++	compat_uptr_t fib;
++};
++
+ /**
+  *	next_getadapter_fib	-	get the next fib
+  *	@dev: adapter to use
+@@ -243,8 +250,19 @@ static int next_getadapter_fib(struct aac_dev * dev, void __user *arg)
+ 	struct list_head * entry;
+ 	unsigned long flags;
+ 
+-	if(copy_from_user((void *)&f, arg, sizeof(struct fib_ioctl)))
+-		return -EFAULT;
++	if (in_compat_syscall()) {
++		struct compat_fib_ioctl cf;
++
++		if (copy_from_user(&cf, arg, sizeof(struct compat_fib_ioctl)))
++			return -EFAULT;
++
++		f.fibctx = cf.fibctx;
++		f.wait = cf.wait;
++		f.fib = compat_ptr(cf.fib);
++	} else {
++		if (copy_from_user(&f, arg, sizeof(struct fib_ioctl)))
++			return -EFAULT;
++	}
+ 	/*
+ 	 *	Verify that the HANDLE passed in was a valid AdapterFibContext
+ 	 *
+diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
+index 8f3772480582c..0a82afaf40285 100644
+--- a/drivers/scsi/aacraid/linit.c
++++ b/drivers/scsi/aacraid/linit.c
+@@ -1182,63 +1182,6 @@ static long aac_cfg_ioctl(struct file *file,
+ 	return aac_do_ioctl(aac, cmd, (void __user *)arg);
+ }
+ 
+-#ifdef CONFIG_COMPAT
+-static long aac_compat_do_ioctl(struct aac_dev *dev, unsigned cmd, unsigned long arg)
+-{
+-	long ret;
+-	switch (cmd) {
+-	case FSACTL_MINIPORT_REV_CHECK:
+-	case FSACTL_SENDFIB:
+-	case FSACTL_OPEN_GET_ADAPTER_FIB:
+-	case FSACTL_CLOSE_GET_ADAPTER_FIB:
+-	case FSACTL_SEND_RAW_SRB:
+-	case FSACTL_GET_PCI_INFO:
+-	case FSACTL_QUERY_DISK:
+-	case FSACTL_DELETE_DISK:
+-	case FSACTL_FORCE_DELETE_DISK:
+-	case FSACTL_GET_CONTAINERS:
+-	case FSACTL_SEND_LARGE_FIB:
+-		ret = aac_do_ioctl(dev, cmd, (void __user *)arg);
+-		break;
+-
+-	case FSACTL_GET_NEXT_ADAPTER_FIB: {
+-		struct fib_ioctl __user *f;
+-
+-		f = compat_alloc_user_space(sizeof(*f));
+-		ret = 0;
+-		if (clear_user(f, sizeof(*f)))
+-			ret = -EFAULT;
+-		if (copy_in_user(f, (void __user *)arg, sizeof(struct fib_ioctl) - sizeof(u32)))
+-			ret = -EFAULT;
+-		if (!ret)
+-			ret = aac_do_ioctl(dev, cmd, f);
+-		break;
+-	}
+-
+-	default:
+-		ret = -ENOIOCTLCMD;
+-		break;
+-	}
+-	return ret;
+-}
+-
+-static int aac_compat_ioctl(struct scsi_device *sdev, unsigned int cmd,
+-			    void __user *arg)
+-{
+-	struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata;
+-	if (!capable(CAP_SYS_RAWIO))
+-		return -EPERM;
+-	return aac_compat_do_ioctl(dev, cmd, (unsigned long)arg);
+-}
+-
+-static long aac_compat_cfg_ioctl(struct file *file, unsigned cmd, unsigned long arg)
+-{
+-	if (!capable(CAP_SYS_RAWIO))
+-		return -EPERM;
+-	return aac_compat_do_ioctl(file->private_data, cmd, arg);
+-}
+-#endif
+-
+ static ssize_t aac_show_model(struct device *device,
+ 			      struct device_attribute *attr, char *buf)
+ {
+@@ -1523,7 +1466,7 @@ static const struct file_operations aac_cfg_fops = {
+ 	.owner		= THIS_MODULE,
+ 	.unlocked_ioctl	= aac_cfg_ioctl,
+ #ifdef CONFIG_COMPAT
+-	.compat_ioctl   = aac_compat_cfg_ioctl,
++	.compat_ioctl   = aac_cfg_ioctl,
+ #endif
+ 	.open		= aac_cfg_open,
+ 	.llseek		= noop_llseek,
+@@ -1536,7 +1479,7 @@ static struct scsi_host_template aac_driver_template = {
+ 	.info				= aac_info,
+ 	.ioctl				= aac_ioctl,
+ #ifdef CONFIG_COMPAT
+-	.compat_ioctl			= aac_compat_ioctl,
++	.compat_ioctl			= aac_ioctl,
+ #endif
+ 	.queuecommand			= aac_queuecommand,
+ 	.bios_param			= aac_biosparm,
+diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c
+index 5f8a7ef8f6a8e..4f7befb43d604 100644
+--- a/drivers/scsi/fnic/fnic_main.c
++++ b/drivers/scsi/fnic/fnic_main.c
+@@ -740,6 +740,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	for (i = 0; i < FNIC_IO_LOCKS; i++)
+ 		spin_lock_init(&fnic->io_req_lock[i]);
+ 
++	err = -ENOMEM;
+ 	fnic->io_req_pool = mempool_create_slab_pool(2, fnic_io_req_cache);
+ 	if (!fnic->io_req_pool)
+ 		goto err_out_free_resources;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 960de375ce699..2cbd8a524edab 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2409,8 +2409,7 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
+ 			      DRV_NAME " phy", hisi_hba);
+ 	if (rc) {
+ 		dev_err(dev, "could not request phy interrupt, rc=%d\n", rc);
+-		rc = -ENOENT;
+-		goto free_irq_vectors;
++		return -ENOENT;
+ 	}
+ 
+ 	rc = devm_request_irq(dev, pci_irq_vector(pdev, 2),
+@@ -2418,8 +2417,7 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
+ 			      DRV_NAME " channel", hisi_hba);
+ 	if (rc) {
+ 		dev_err(dev, "could not request chnl interrupt, rc=%d\n", rc);
+-		rc = -ENOENT;
+-		goto free_irq_vectors;
++		return -ENOENT;
+ 	}
+ 
+ 	rc = devm_request_irq(dev, pci_irq_vector(pdev, 11),
+@@ -2427,8 +2425,7 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
+ 			      DRV_NAME " fatal", hisi_hba);
+ 	if (rc) {
+ 		dev_err(dev, "could not request fatal interrupt, rc=%d\n", rc);
+-		rc = -ENOENT;
+-		goto free_irq_vectors;
++		return -ENOENT;
+ 	}
+ 
+ 	if (hisi_sas_intr_conv)
+@@ -2449,8 +2446,7 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
+ 		if (rc) {
+ 			dev_err(dev, "could not request cq%d interrupt, rc=%d\n",
+ 				i, rc);
+-			rc = -ENOENT;
+-			goto free_irq_vectors;
++			return -ENOENT;
+ 		}
+ 		cq->irq_mask = pci_irq_get_affinity(pdev, i + BASE_VECTORS_V3_HW);
+ 		if (!cq->irq_mask) {
+@@ -2460,10 +2456,6 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
+ 	}
+ 
+ 	return 0;
+-
+-free_irq_vectors:
+-	pci_free_irq_vectors(pdev);
+-	return rc;
+ }
+ 
+ static int hisi_sas_v3_init(struct hisi_hba *hisi_hba)
+@@ -3317,11 +3309,11 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	rc = interrupt_preinit_v3_hw(hisi_hba);
+ 	if (rc)
+-		goto err_out_ha;
++		goto err_out_debugfs;
+ 	dev_err(dev, "%d hw queues\n", shost->nr_hw_queues);
+ 	rc = scsi_add_host(shost, dev);
+ 	if (rc)
+-		goto err_out_ha;
++		goto err_out_free_irq_vectors;
+ 
+ 	rc = sas_register_ha(sha);
+ 	if (rc)
+@@ -3348,8 +3340,12 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ err_out_register_ha:
+ 	scsi_remove_host(shost);
+-err_out_ha:
++err_out_free_irq_vectors:
++	pci_free_irq_vectors(pdev);
++err_out_debugfs:
+ 	hisi_sas_debugfs_exit(hisi_hba);
++err_out_ha:
++	hisi_sas_free(hisi_hba);
+ 	scsi_host_put(shost);
+ err_out_regions:
+ 	pci_release_regions(pdev);
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 549adfaa97ce5..93e507677bdcb 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -753,7 +753,7 @@ struct lpfc_hba {
+ #define HBA_SP_QUEUE_EVT	0x8 /* Slow-path qevt posted to worker thread*/
+ #define HBA_POST_RECEIVE_BUFFER 0x10 /* Rcv buffers need to be posted */
+ #define HBA_PERSISTENT_TOPO	0x20 /* Persistent topology support in hba */
+-#define ELS_XRI_ABORT_EVENT	0x40
++#define ELS_XRI_ABORT_EVENT	0x40 /* ELS_XRI abort event was queued */
+ #define ASYNC_EVENT		0x80
+ #define LINK_DISABLED		0x100 /* Link disabled by user */
+ #define FCF_TS_INPROG           0x200 /* FCF table scan in progress */
+diff --git a/drivers/scsi/lpfc/lpfc_disc.h b/drivers/scsi/lpfc/lpfc_disc.h
+index 482e4a888daec..1437e44ade801 100644
+--- a/drivers/scsi/lpfc/lpfc_disc.h
++++ b/drivers/scsi/lpfc/lpfc_disc.h
+@@ -41,6 +41,7 @@ enum lpfc_work_type {
+ 	LPFC_EVT_DEV_LOSS,
+ 	LPFC_EVT_FASTPATH_MGMT_EVT,
+ 	LPFC_EVT_RESET_HBA,
++	LPFC_EVT_RECOVER_PORT
+ };
+ 
+ /* structure used to queue event to the discovery tasklet */
+@@ -128,6 +129,7 @@ struct lpfc_nodelist {
+ 	struct lpfc_vport *vport;
+ 	struct lpfc_work_evt els_retry_evt;
+ 	struct lpfc_work_evt dev_loss_evt;
++	struct lpfc_work_evt recovery_evt;
+ 	struct kref     kref;
+ 	atomic_t cmd_pending;
+ 	uint32_t cmd_qdepth;
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index bb02fd8bc2ddf..9746d2f4fcfad 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -552,6 +552,15 @@ lpfc_work_list_done(struct lpfc_hba *phba)
+ 								    fcf_inuse,
+ 								    nlp_did);
+ 			break;
++		case LPFC_EVT_RECOVER_PORT:
++			ndlp = (struct lpfc_nodelist *)(evtp->evt_arg1);
++			lpfc_sli_abts_recover_port(ndlp->vport, ndlp);
++			free_evt = 0;
++			/* decrement the node reference count held for
++			 * this queued work
++			 */
++			lpfc_nlp_put(ndlp);
++			break;
+ 		case LPFC_EVT_ONLINE:
+ 			if (phba->link_state < LPFC_LINK_DOWN)
+ 				*(int *) (evtp->evt_arg1) = lpfc_online(phba);
+@@ -4515,6 +4524,8 @@ lpfc_initialize_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ 	INIT_LIST_HEAD(&ndlp->els_retry_evt.evt_listp);
+ 	INIT_LIST_HEAD(&ndlp->dev_loss_evt.evt_listp);
+ 	timer_setup(&ndlp->nlp_delayfunc, lpfc_els_retry_delay, 0);
++	INIT_LIST_HEAD(&ndlp->recovery_evt.evt_listp);
++
+ 	ndlp->nlp_DID = did;
+ 	ndlp->vport = vport;
+ 	ndlp->phba = vport->phba;
+@@ -5011,6 +5022,29 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 				mempool_free(mbox, phba->mbox_mem_pool);
+ 				acc_plogi = 1;
+ 			}
++		} else {
++			lpfc_printf_vlog(vport, KERN_INFO,
++					 LOG_NODE | LOG_DISCOVERY,
++					 "1444 Failed to allocate mempool "
++					 "unreg_rpi UNREG x%x, "
++					 "DID x%x, flag x%x, "
++					 "ndlp x%px\n",
++					 ndlp->nlp_rpi, ndlp->nlp_DID,
++					 ndlp->nlp_flag, ndlp);
++
++			/* Because mempool_alloc failed, we
++			 * will issue a LOGO here and keep the rpi alive if
++			 * not unloading.
++			 */
++			if (!(vport->load_flag & FC_UNLOADING)) {
++				ndlp->nlp_flag &= ~NLP_UNREG_INP;
++				lpfc_issue_els_logo(vport, ndlp, 0);
++				ndlp->nlp_prev_state = ndlp->nlp_state;
++				lpfc_nlp_set_state(vport, ndlp,
++						   NLP_STE_NPR_NODE);
++			}
++
++			return 1;
+ 		}
+ 		lpfc_no_rpi(phba, ndlp);
+ out:
+@@ -5214,6 +5248,7 @@ lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 
+ 	list_del_init(&ndlp->els_retry_evt.evt_listp);
+ 	list_del_init(&ndlp->dev_loss_evt.evt_listp);
++	list_del_init(&ndlp->recovery_evt.evt_listp);
+ 	lpfc_cleanup_vports_rrqs(vport, ndlp);
+ 	if (phba->sli_rev == LPFC_SLI_REV4)
+ 		ndlp->nlp_flag |= NLP_RELEASE_RPI;
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index ca25e54bb7824..40fe889033d43 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -5958,18 +5958,21 @@ lpfc_sli4_async_grp5_evt(struct lpfc_hba *phba,
+ void lpfc_sli4_async_event_proc(struct lpfc_hba *phba)
+ {
+ 	struct lpfc_cq_event *cq_event;
++	unsigned long iflags;
+ 
+ 	/* First, declare the async event has been handled */
+-	spin_lock_irq(&phba->hbalock);
++	spin_lock_irqsave(&phba->hbalock, iflags);
+ 	phba->hba_flag &= ~ASYNC_EVENT;
+-	spin_unlock_irq(&phba->hbalock);
++	spin_unlock_irqrestore(&phba->hbalock, iflags);
++
+ 	/* Now, handle all the async events */
++	spin_lock_irqsave(&phba->sli4_hba.asynce_list_lock, iflags);
+ 	while (!list_empty(&phba->sli4_hba.sp_asynce_work_queue)) {
+-		/* Get the first event from the head of the event queue */
+-		spin_lock_irq(&phba->hbalock);
+ 		list_remove_head(&phba->sli4_hba.sp_asynce_work_queue,
+ 				 cq_event, struct lpfc_cq_event, list);
+-		spin_unlock_irq(&phba->hbalock);
++		spin_unlock_irqrestore(&phba->sli4_hba.asynce_list_lock,
++				       iflags);
++
+ 		/* Process the asynchronous event */
+ 		switch (bf_get(lpfc_trailer_code, &cq_event->cqe.mcqe_cmpl)) {
+ 		case LPFC_TRAILER_CODE_LINK:
+@@ -6001,9 +6004,12 @@ void lpfc_sli4_async_event_proc(struct lpfc_hba *phba)
+ 					&cq_event->cqe.mcqe_cmpl));
+ 			break;
+ 		}
++
+ 		/* Free the completion event processed to the free pool */
+ 		lpfc_sli4_cq_event_release(phba, cq_event);
++		spin_lock_irqsave(&phba->sli4_hba.asynce_list_lock, iflags);
+ 	}
++	spin_unlock_irqrestore(&phba->sli4_hba.asynce_list_lock, iflags);
+ }
+ 
+ /**
+@@ -6630,6 +6636,8 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
+ 	/* This abort list used by worker thread */
+ 	spin_lock_init(&phba->sli4_hba.sgl_list_lock);
+ 	spin_lock_init(&phba->sli4_hba.nvmet_io_wait_lock);
++	spin_lock_init(&phba->sli4_hba.asynce_list_lock);
++	spin_lock_init(&phba->sli4_hba.els_xri_abrt_list_lock);
+ 
+ 	/*
+ 	 * Initialize driver internal slow-path work queues
+@@ -6641,8 +6649,6 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
+ 	INIT_LIST_HEAD(&phba->sli4_hba.sp_queue_event);
+ 	/* Asynchronous event CQ Event work queue list */
+ 	INIT_LIST_HEAD(&phba->sli4_hba.sp_asynce_work_queue);
+-	/* Fast-path XRI aborted CQ Event work queue list */
+-	INIT_LIST_HEAD(&phba->sli4_hba.sp_fcp_xri_aborted_work_queue);
+ 	/* Slow-path XRI aborted CQ Event work queue list */
+ 	INIT_LIST_HEAD(&phba->sli4_hba.sp_els_xri_aborted_work_queue);
+ 	/* Receive queue CQ Event work queue list */
+@@ -10174,26 +10180,28 @@ lpfc_sli4_cq_event_release(struct lpfc_hba *phba,
+ static void
+ lpfc_sli4_cq_event_release_all(struct lpfc_hba *phba)
+ {
+-	LIST_HEAD(cqelist);
+-	struct lpfc_cq_event *cqe;
++	LIST_HEAD(cq_event_list);
++	struct lpfc_cq_event *cq_event;
+ 	unsigned long iflags;
+ 
+ 	/* Retrieve all the pending WCQEs from pending WCQE lists */
+-	spin_lock_irqsave(&phba->hbalock, iflags);
+-	/* Pending FCP XRI abort events */
+-	list_splice_init(&phba->sli4_hba.sp_fcp_xri_aborted_work_queue,
+-			 &cqelist);
++
+ 	/* Pending ELS XRI abort events */
++	spin_lock_irqsave(&phba->sli4_hba.els_xri_abrt_list_lock, iflags);
+ 	list_splice_init(&phba->sli4_hba.sp_els_xri_aborted_work_queue,
+-			 &cqelist);
++			 &cq_event_list);
++	spin_unlock_irqrestore(&phba->sli4_hba.els_xri_abrt_list_lock, iflags);
++
+ 	/* Pending asynnc events */
++	spin_lock_irqsave(&phba->sli4_hba.asynce_list_lock, iflags);
+ 	list_splice_init(&phba->sli4_hba.sp_asynce_work_queue,
+-			 &cqelist);
+-	spin_unlock_irqrestore(&phba->hbalock, iflags);
++			 &cq_event_list);
++	spin_unlock_irqrestore(&phba->sli4_hba.asynce_list_lock, iflags);
+ 
+-	while (!list_empty(&cqelist)) {
+-		list_remove_head(&cqelist, cqe, struct lpfc_cq_event, list);
+-		lpfc_sli4_cq_event_release(phba, cqe);
++	while (!list_empty(&cq_event_list)) {
++		list_remove_head(&cq_event_list, cq_event,
++				 struct lpfc_cq_event, list);
++		lpfc_sli4_cq_event_release(phba, cq_event);
+ 	}
+ }
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_mem.c b/drivers/scsi/lpfc/lpfc_mem.c
+index 27ff67e9edae7..be54fbf5146f1 100644
+--- a/drivers/scsi/lpfc/lpfc_mem.c
++++ b/drivers/scsi/lpfc/lpfc_mem.c
+@@ -46,6 +46,7 @@
+ #define LPFC_MEM_POOL_SIZE      64      /* max elem in non-DMA safety pool */
+ #define LPFC_DEVICE_DATA_POOL_SIZE 64   /* max elements in device data pool */
+ #define LPFC_RRQ_POOL_SIZE	256	/* max elements in non-DMA  pool */
++#define LPFC_MBX_POOL_SIZE	256	/* max elements in MBX non-DMA pool */
+ 
+ int
+ lpfc_mem_alloc_active_rrq_pool_s4(struct lpfc_hba *phba) {
+@@ -111,8 +112,8 @@ lpfc_mem_alloc(struct lpfc_hba *phba, int align)
+ 		pool->current_count++;
+ 	}
+ 
+-	phba->mbox_mem_pool = mempool_create_kmalloc_pool(LPFC_MEM_POOL_SIZE,
+-							 sizeof(LPFC_MBOXQ_t));
++	phba->mbox_mem_pool = mempool_create_kmalloc_pool(LPFC_MBX_POOL_SIZE,
++							  sizeof(LPFC_MBOXQ_t));
+ 	if (!phba->mbox_mem_pool)
+ 		goto fail_free_mbuf_pool;
+ 
+@@ -588,8 +589,6 @@ lpfc_sli4_rb_free(struct lpfc_hba *phba, struct hbq_dmabuf *dmab)
+  * Description: Allocates a DMA-mapped receive buffer from the lpfc_hrb_pool PCI
+  * pool along a non-DMA-mapped container for it.
+  *
+- * Notes: Not interrupt-safe.  Must be called with no locks held.
+- *
+  * Returns:
+  *   pointer to HBQ on success
+  *   NULL on failure
+@@ -599,7 +598,7 @@ lpfc_sli4_nvmet_alloc(struct lpfc_hba *phba)
+ {
+ 	struct rqb_dmabuf *dma_buf;
+ 
+-	dma_buf = kzalloc(sizeof(struct rqb_dmabuf), GFP_KERNEL);
++	dma_buf = kzalloc(sizeof(*dma_buf), GFP_KERNEL);
+ 	if (!dma_buf)
+ 		return NULL;
+ 
+@@ -722,7 +721,6 @@ lpfc_rq_buf_free(struct lpfc_hba *phba, struct lpfc_dmabuf *mp)
+ 	drqe.address_hi = putPaddrHigh(rqb_entry->dbuf.phys);
+ 	rc = lpfc_sli4_rq_put(rqb_entry->hrq, rqb_entry->drq, &hrqe, &drqe);
+ 	if (rc < 0) {
+-		(rqbp->rqb_free_buffer)(phba, rqb_entry);
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+ 				"6409 Cannot post to HRQ %d: %x %x %x "
+ 				"DRQ %x %x\n",
+@@ -732,6 +730,7 @@ lpfc_rq_buf_free(struct lpfc_hba *phba, struct lpfc_dmabuf *mp)
+ 				rqb_entry->hrq->entry_count,
+ 				rqb_entry->drq->host_index,
+ 				rqb_entry->drq->hba_index);
++		(rqbp->rqb_free_buffer)(phba, rqb_entry);
+ 	} else {
+ 		list_add_tail(&rqb_entry->hbuf.list, &rqbp->rqb_buffer_list);
+ 		rqbp->buffer_count++;
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index 0c39ed50998c8..69f1a0457f51e 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -2280,6 +2280,8 @@ lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
+ 	int ret, i, pending = 0;
+ 	struct lpfc_sli_ring  *pring;
+ 	struct lpfc_hba  *phba = vport->phba;
++	struct lpfc_sli4_hdw_queue *qp;
++	int abts_scsi, abts_nvme;
+ 
+ 	/* Host transport has to clean up and confirm requiring an indefinite
+ 	 * wait. Print a message if a 10 second wait expires and renew the
+@@ -2290,17 +2292,23 @@ lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
+ 		ret = wait_for_completion_timeout(lport_unreg_cmp, wait_tmo);
+ 		if (unlikely(!ret)) {
+ 			pending = 0;
++			abts_scsi = 0;
++			abts_nvme = 0;
+ 			for (i = 0; i < phba->cfg_hdw_queue; i++) {
+-				pring = phba->sli4_hba.hdwq[i].io_wq->pring;
++				qp = &phba->sli4_hba.hdwq[i];
++				pring = qp->io_wq->pring;
+ 				if (!pring)
+ 					continue;
+-				if (pring->txcmplq_cnt)
+-					pending += pring->txcmplq_cnt;
++				pending += pring->txcmplq_cnt;
++				abts_scsi += qp->abts_scsi_io_bufs;
++				abts_nvme += qp->abts_nvme_io_bufs;
+ 			}
+ 			lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+ 					 "6176 Lport x%px Localport x%px wait "
+-					 "timed out. Pending %d. Renewing.\n",
+-					 lport, vport->localport, pending);
++					 "timed out. Pending %d [%d:%d]. "
++					 "Renewing.\n",
++					 lport, vport->localport, pending,
++					 abts_scsi, abts_nvme);
+ 			continue;
+ 		}
+ 		break;
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index e158cd77d387f..fcaafa564dfcd 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -7248,12 +7248,16 @@ lpfc_post_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hrq,
+ 	struct rqb_dmabuf *rqb_buffer;
+ 	LIST_HEAD(rqb_buf_list);
+ 
+-	spin_lock_irqsave(&phba->hbalock, flags);
+ 	rqbp = hrq->rqbp;
+ 	for (i = 0; i < count; i++) {
++		spin_lock_irqsave(&phba->hbalock, flags);
+ 		/* IF RQ is already full, don't bother */
+-		if (rqbp->buffer_count + i >= rqbp->entry_count - 1)
++		if (rqbp->buffer_count + i >= rqbp->entry_count - 1) {
++			spin_unlock_irqrestore(&phba->hbalock, flags);
+ 			break;
++		}
++		spin_unlock_irqrestore(&phba->hbalock, flags);
++
+ 		rqb_buffer = rqbp->rqb_alloc_buffer(phba);
+ 		if (!rqb_buffer)
+ 			break;
+@@ -7262,6 +7266,8 @@ lpfc_post_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hrq,
+ 		rqb_buffer->idx = idx;
+ 		list_add_tail(&rqb_buffer->hbuf.list, &rqb_buf_list);
+ 	}
++
++	spin_lock_irqsave(&phba->hbalock, flags);
+ 	while (!list_empty(&rqb_buf_list)) {
+ 		list_remove_head(&rqb_buf_list, rqb_buffer, struct rqb_dmabuf,
+ 				 hbuf.list);
+@@ -10364,6 +10370,32 @@ lpfc_extra_ring_setup( struct lpfc_hba *phba)
+ 	return 0;
+ }
+ 
++static void
++lpfc_sli_post_recovery_event(struct lpfc_hba *phba,
++			     struct lpfc_nodelist *ndlp)
++{
++	unsigned long iflags;
++	struct lpfc_work_evt  *evtp = &ndlp->recovery_evt;
++
++	spin_lock_irqsave(&phba->hbalock, iflags);
++	if (!list_empty(&evtp->evt_listp)) {
++		spin_unlock_irqrestore(&phba->hbalock, iflags);
++		return;
++	}
++
++	/* Incrementing the reference count until the queued work is done. */
++	evtp->evt_arg1  = lpfc_nlp_get(ndlp);
++	if (!evtp->evt_arg1) {
++		spin_unlock_irqrestore(&phba->hbalock, iflags);
++		return;
++	}
++	evtp->evt = LPFC_EVT_RECOVER_PORT;
++	list_add_tail(&evtp->evt_listp, &phba->work_list);
++	spin_unlock_irqrestore(&phba->hbalock, iflags);
++
++	lpfc_worker_wake_up(phba);
++}
++
+ /* lpfc_sli_abts_err_handler - handle a failed ABTS request from an SLI3 port.
+  * @phba: Pointer to HBA context object.
+  * @iocbq: Pointer to iocb object.
+@@ -10454,7 +10486,7 @@ lpfc_sli4_abts_err_handler(struct lpfc_hba *phba,
+ 	ext_status = axri->parameter & IOERR_PARAM_MASK;
+ 	if ((bf_get(lpfc_wcqe_xa_status, axri) == IOSTAT_LOCAL_REJECT) &&
+ 	    ((ext_status == IOERR_SEQUENCE_TIMEOUT) || (ext_status == 0)))
+-		lpfc_sli_abts_recover_port(vport, ndlp);
++		lpfc_sli_post_recovery_event(phba, ndlp);
+ }
+ 
+ /**
+@@ -13062,23 +13094,30 @@ lpfc_sli_intr_handler(int irq, void *dev_id)
+ void lpfc_sli4_els_xri_abort_event_proc(struct lpfc_hba *phba)
+ {
+ 	struct lpfc_cq_event *cq_event;
++	unsigned long iflags;
+ 
+ 	/* First, declare the els xri abort event has been handled */
+-	spin_lock_irq(&phba->hbalock);
++	spin_lock_irqsave(&phba->hbalock, iflags);
+ 	phba->hba_flag &= ~ELS_XRI_ABORT_EVENT;
+-	spin_unlock_irq(&phba->hbalock);
++	spin_unlock_irqrestore(&phba->hbalock, iflags);
++
+ 	/* Now, handle all the els xri abort events */
++	spin_lock_irqsave(&phba->sli4_hba.els_xri_abrt_list_lock, iflags);
+ 	while (!list_empty(&phba->sli4_hba.sp_els_xri_aborted_work_queue)) {
+ 		/* Get the first event from the head of the event queue */
+-		spin_lock_irq(&phba->hbalock);
+ 		list_remove_head(&phba->sli4_hba.sp_els_xri_aborted_work_queue,
+ 				 cq_event, struct lpfc_cq_event, list);
+-		spin_unlock_irq(&phba->hbalock);
++		spin_unlock_irqrestore(&phba->sli4_hba.els_xri_abrt_list_lock,
++				       iflags);
+ 		/* Notify aborted XRI for ELS work queue */
+ 		lpfc_sli4_els_xri_aborted(phba, &cq_event->cqe.wcqe_axri);
++
+ 		/* Free the event processed back to the free pool */
+ 		lpfc_sli4_cq_event_release(phba, cq_event);
++		spin_lock_irqsave(&phba->sli4_hba.els_xri_abrt_list_lock,
++				  iflags);
+ 	}
++	spin_unlock_irqrestore(&phba->sli4_hba.els_xri_abrt_list_lock, iflags);
+ }
+ 
+ /**
+@@ -13289,9 +13328,13 @@ lpfc_sli4_sp_handle_async_event(struct lpfc_hba *phba, struct lpfc_mcqe *mcqe)
+ 	cq_event = lpfc_cq_event_setup(phba, mcqe, sizeof(struct lpfc_mcqe));
+ 	if (!cq_event)
+ 		return false;
+-	spin_lock_irqsave(&phba->hbalock, iflags);
++
++	spin_lock_irqsave(&phba->sli4_hba.asynce_list_lock, iflags);
+ 	list_add_tail(&cq_event->list, &phba->sli4_hba.sp_asynce_work_queue);
++	spin_unlock_irqrestore(&phba->sli4_hba.asynce_list_lock, iflags);
++
+ 	/* Set the async event flag */
++	spin_lock_irqsave(&phba->hbalock, iflags);
+ 	phba->hba_flag |= ASYNC_EVENT;
+ 	spin_unlock_irqrestore(&phba->hbalock, iflags);
+ 
+@@ -13566,17 +13609,20 @@ lpfc_sli4_sp_handle_abort_xri_wcqe(struct lpfc_hba *phba,
+ 		break;
+ 	case LPFC_NVME_LS: /* NVME LS uses ELS resources */
+ 	case LPFC_ELS:
+-		cq_event = lpfc_cq_event_setup(
+-			phba, wcqe, sizeof(struct sli4_wcqe_xri_aborted));
+-		if (!cq_event)
+-			return false;
++		cq_event = lpfc_cq_event_setup(phba, wcqe, sizeof(*wcqe));
++		if (!cq_event) {
++			workposted = false;
++			break;
++		}
+ 		cq_event->hdwq = cq->hdwq;
+-		spin_lock_irqsave(&phba->hbalock, iflags);
++		spin_lock_irqsave(&phba->sli4_hba.els_xri_abrt_list_lock,
++				  iflags);
+ 		list_add_tail(&cq_event->list,
+ 			      &phba->sli4_hba.sp_els_xri_aborted_work_queue);
+ 		/* Set the els xri abort event flag */
+ 		phba->hba_flag |= ELS_XRI_ABORT_EVENT;
+-		spin_unlock_irqrestore(&phba->hbalock, iflags);
++		spin_unlock_irqrestore(&phba->sli4_hba.els_xri_abrt_list_lock,
++				       iflags);
+ 		workposted = true;
+ 		break;
+ 	default:
+diff --git a/drivers/scsi/lpfc/lpfc_sli4.h b/drivers/scsi/lpfc/lpfc_sli4.h
+index a966cdeb52ee7..100cb1a94811b 100644
+--- a/drivers/scsi/lpfc/lpfc_sli4.h
++++ b/drivers/scsi/lpfc/lpfc_sli4.h
+@@ -920,8 +920,9 @@ struct lpfc_sli4_hba {
+ 	struct list_head sp_queue_event;
+ 	struct list_head sp_cqe_event_pool;
+ 	struct list_head sp_asynce_work_queue;
+-	struct list_head sp_fcp_xri_aborted_work_queue;
++	spinlock_t asynce_list_lock; /* protect sp_asynce_work_queue list */
+ 	struct list_head sp_els_xri_aborted_work_queue;
++	spinlock_t els_xri_abrt_list_lock; /* protect els_xri_aborted list */
+ 	struct list_head sp_unsol_work_queue;
+ 	struct lpfc_sli4_link link_state;
+ 	struct lpfc_sli4_lnk_info lnk_info;
+@@ -1103,8 +1104,7 @@ void lpfc_sli4_async_event_proc(struct lpfc_hba *);
+ void lpfc_sli4_fcf_redisc_event_proc(struct lpfc_hba *);
+ int lpfc_sli4_resume_rpi(struct lpfc_nodelist *,
+ 			void (*)(struct lpfc_hba *, LPFC_MBOXQ_t *), void *);
+-void lpfc_sli4_fcp_xri_abort_event_proc(struct lpfc_hba *);
+-void lpfc_sli4_els_xri_abort_event_proc(struct lpfc_hba *);
++void lpfc_sli4_els_xri_abort_event_proc(struct lpfc_hba *phba);
+ void lpfc_sli4_nvme_xri_aborted(struct lpfc_hba *phba,
+ 				struct sli4_wcqe_xri_aborted *axri,
+ 				struct lpfc_io_buf *lpfc_ncmd);
+diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
+index 3cf3e58b69799..2025361b36e96 100644
+--- a/drivers/scsi/pm8001/pm8001_init.c
++++ b/drivers/scsi/pm8001/pm8001_init.c
+@@ -1131,7 +1131,8 @@ static int pm8001_pci_probe(struct pci_dev *pdev,
+ 
+ 	pm8001_init_sas_add(pm8001_ha);
+ 	/* phy setting support for motherboard controller */
+-	if (pm8001_configure_phy_settings(pm8001_ha))
++	rc = pm8001_configure_phy_settings(pm8001_ha);
++	if (rc)
+ 		goto err_out_shost;
+ 
+ 	pm8001_post_sas_ha_init(shost, chip);
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 7593f248afb2c..155382ce84698 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -3363,7 +3363,7 @@ hw_event_sas_phy_up(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	pm8001_get_attached_sas_addr(phy, phy->sas_phy.attached_sas_addr);
+ 	spin_unlock_irqrestore(&phy->sas_phy.frame_rcvd_lock, flags);
+ 	if (pm8001_ha->flags == PM8001F_RUN_TIME)
+-		msleep(200);/*delay a moment to wait disk to spinup*/
++		mdelay(200); /* delay a moment to wait for disk to spin up */
+ 	pm8001_bytes_dmaed(pm8001_ha, phy_id);
+ }
+ 
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 61fab01d2d527..f5fc7f518f8af 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -2766,7 +2766,7 @@ retry_probe:
+ 			QEDI_ERR(&qedi->dbg_ctx,
+ 				 "Unable to start offload thread!\n");
+ 			rc = -ENODEV;
+-			goto free_cid_que;
++			goto free_tmf_thread;
+ 		}
+ 
+ 		INIT_DELAYED_WORK(&qedi->recovery_work, qedi_recovery_handler);
+@@ -2790,6 +2790,8 @@ retry_probe:
+ 
+ 	return 0;
+ 
++free_tmf_thread:
++	destroy_workqueue(qedi->tmf_thread);
+ free_cid_que:
+ 	qedi_release_cid_que(qedi);
+ free_uio:
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 898c70b8ebbf6..52e8b555bd1dc 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -1268,9 +1268,10 @@ qla24xx_async_prli(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 		lio->u.logio.flags |= SRB_LOGIN_NVME_PRLI;
+ 
+ 	ql_dbg(ql_dbg_disc, vha, 0x211b,
+-	    "Async-prli - %8phC hdl=%x, loopid=%x portid=%06x retries=%d %s.\n",
++	    "Async-prli - %8phC hdl=%x, loopid=%x portid=%06x retries=%d fc4type %x priority %x %s.\n",
+ 	    fcport->port_name, sp->handle, fcport->loop_id, fcport->d_id.b24,
+-	    fcport->login_retry, NVME_TARGET(vha->hw, fcport) ? "nvme" : "fc");
++	    fcport->login_retry, fcport->fc4_type, vha->hw->fc4_type_priority,
++	    NVME_TARGET(vha->hw, fcport) ? "nvme" : "fcp");
+ 
+ 	rval = qla2x00_start_sp(sp);
+ 	if (rval != QLA_SUCCESS) {
+@@ -1932,26 +1933,58 @@ qla24xx_handle_prli_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ 			break;
+ 		}
+ 
+-		/*
+-		 * Retry PRLI with other FC-4 type if failure occurred on dual
+-		 * FCP/NVMe port
+-		 */
+-		if (NVME_FCP_TARGET(ea->fcport)) {
+-			ql_dbg(ql_dbg_disc, vha, 0x2118,
+-				"%s %d %8phC post %s prli\n",
+-				__func__, __LINE__, ea->fcport->port_name,
+-				(ea->fcport->fc4_type & FS_FC4TYPE_NVME) ?
+-				"NVMe" : "FCP");
+-			if (vha->hw->fc4_type_priority == FC4_PRIORITY_NVME)
++		ql_dbg(ql_dbg_disc, vha, 0x2118,
++		       "%s %d %8phC priority %s, fc4type %x\n",
++		       __func__, __LINE__, ea->fcport->port_name,
++		       vha->hw->fc4_type_priority == FC4_PRIORITY_FCP ?
++		       "FCP" : "NVMe", ea->fcport->fc4_type);
++
++		if (N2N_TOPO(vha->hw)) {
++			if (vha->hw->fc4_type_priority == FC4_PRIORITY_NVME) {
+ 				ea->fcport->fc4_type &= ~FS_FC4TYPE_NVME;
+-			else
++				ea->fcport->fc4_type |= FS_FC4TYPE_FCP;
++			} else {
+ 				ea->fcport->fc4_type &= ~FS_FC4TYPE_FCP;
+-		}
++				ea->fcport->fc4_type |= FS_FC4TYPE_NVME;
++			}
+ 
+-		ea->fcport->flags &= ~FCF_ASYNC_SENT;
+-		ea->fcport->keep_nport_handle = 0;
+-		ea->fcport->logout_on_delete = 1;
+-		qlt_schedule_sess_for_deletion(ea->fcport);
++			if (ea->fcport->n2n_link_reset_cnt < 3) {
++				ea->fcport->n2n_link_reset_cnt++;
++				vha->relogin_jif = jiffies + 2 * HZ;
++				/*
++				 * PRLI failed. Reset link to kick start
++				 * state machine
++				 */
++				set_bit(N2N_LINK_RESET, &vha->dpc_flags);
++			} else {
++				ql_log(ql_log_warn, vha, 0x2119,
++				       "%s %d %8phC Unable to reconnect\n",
++				       __func__, __LINE__,
++				       ea->fcport->port_name);
++			}
++		} else {
++			/*
++			 * switch connect. login failed. Take connection down
++			 * and allow relogin to retrigger
++			 */
++			if (NVME_FCP_TARGET(ea->fcport)) {
++				ql_dbg(ql_dbg_disc, vha, 0x2118,
++				       "%s %d %8phC post %s prli\n",
++				       __func__, __LINE__,
++				       ea->fcport->port_name,
++				       (ea->fcport->fc4_type & FS_FC4TYPE_NVME)
++				       ? "NVMe" : "FCP");
++				if (vha->hw->fc4_type_priority == FC4_PRIORITY_NVME)
++					ea->fcport->fc4_type &= ~FS_FC4TYPE_NVME;
++				else
++					ea->fcport->fc4_type &= ~FS_FC4TYPE_FCP;
++			}
++
++			ea->fcport->flags &= ~FCF_ASYNC_SENT;
++			ea->fcport->keep_nport_handle = 0;
++			ea->fcport->logout_on_delete = 1;
++			qlt_schedule_sess_for_deletion(ea->fcport);
++		}
+ 		break;
+ 	}
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 07afd0d8a8f3e..d6325fb2ef73b 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -1129,7 +1129,7 @@ qla2x00_get_fw_version(scsi_qla_host_t *vha)
+ 		if (ha->flags.scm_supported_a &&
+ 		    (ha->fw_attributes_ext[0] & FW_ATTR_EXT0_SCM_SUPPORTED)) {
+ 			ha->flags.scm_supported_f = 1;
+-			ha->sf_init_cb->flags |= BIT_13;
++			ha->sf_init_cb->flags |= cpu_to_le16(BIT_13);
+ 		}
+ 		ql_log(ql_log_info, vha, 0x11a3, "SCM in FW: %s\n",
+ 		       (ha->flags.scm_supported_f) ? "Supported" :
+@@ -1137,9 +1137,9 @@ qla2x00_get_fw_version(scsi_qla_host_t *vha)
+ 
+ 		if (vha->flags.nvme2_enabled) {
+ 			/* set BIT_15 of special feature control block for SLER */
+-			ha->sf_init_cb->flags |= BIT_15;
++			ha->sf_init_cb->flags |= cpu_to_le16(BIT_15);
+ 			/* set BIT_14 of special feature control block for PI CTRL*/
+-			ha->sf_init_cb->flags |= BIT_14;
++			ha->sf_init_cb->flags |= cpu_to_le16(BIT_14);
+ 		}
+ 	}
+ 
+@@ -3998,9 +3998,6 @@ qla24xx_report_id_acquisition(scsi_qla_host_t *vha,
+ 				fcport->scan_state = QLA_FCPORT_FOUND;
+ 				fcport->n2n_flag = 1;
+ 				fcport->keep_nport_handle = 1;
+-				fcport->fc4_type = FS_FC4TYPE_FCP;
+-				if (vha->flags.nvme_enabled)
+-					fcport->fc4_type |= FS_FC4TYPE_NVME;
+ 
+ 				if (wwn_to_u64(vha->port_name) >
+ 				    wwn_to_u64(fcport->port_name)) {
+diff --git a/drivers/scsi/qla2xxx/qla_tmpl.c b/drivers/scsi/qla2xxx/qla_tmpl.c
+index bd8623ee156a6..26c13a953b975 100644
+--- a/drivers/scsi/qla2xxx/qla_tmpl.c
++++ b/drivers/scsi/qla2xxx/qla_tmpl.c
+@@ -928,7 +928,8 @@ qla27xx_template_checksum(void *p, ulong size)
+ static inline int
+ qla27xx_verify_template_checksum(struct qla27xx_fwdt_template *tmp)
+ {
+-	return qla27xx_template_checksum(tmp, tmp->template_size) == 0;
++	return qla27xx_template_checksum(tmp,
++		le32_to_cpu(tmp->template_size)) == 0;
+ }
+ 
+ static inline int
+@@ -944,7 +945,7 @@ qla27xx_execute_fwdt_template(struct scsi_qla_host *vha,
+ 	ulong len = 0;
+ 
+ 	if (qla27xx_fwdt_template_valid(tmp)) {
+-		len = tmp->template_size;
++		len = le32_to_cpu(tmp->template_size);
+ 		tmp = memcpy(buf, tmp, len);
+ 		ql27xx_edit_template(vha, tmp);
+ 		qla27xx_walk_template(vha, tmp, buf, &len);
+@@ -960,7 +961,7 @@ qla27xx_fwdt_calculate_dump_size(struct scsi_qla_host *vha, void *p)
+ 	ulong len = 0;
+ 
+ 	if (qla27xx_fwdt_template_valid(tmp)) {
+-		len = tmp->template_size;
++		len = le32_to_cpu(tmp->template_size);
+ 		qla27xx_walk_template(vha, tmp, NULL, &len);
+ 	}
+ 
+@@ -972,7 +973,7 @@ qla27xx_fwdt_template_size(void *p)
+ {
+ 	struct qla27xx_fwdt_template *tmp = p;
+ 
+-	return tmp->template_size;
++	return le32_to_cpu(tmp->template_size);
+ }
+ 
+ int
+diff --git a/drivers/scsi/qla2xxx/qla_tmpl.h b/drivers/scsi/qla2xxx/qla_tmpl.h
+index c47184db50813..6e0987edfcebc 100644
+--- a/drivers/scsi/qla2xxx/qla_tmpl.h
++++ b/drivers/scsi/qla2xxx/qla_tmpl.h
+@@ -12,7 +12,7 @@
+ struct __packed qla27xx_fwdt_template {
+ 	__le32 template_type;
+ 	__le32 entry_offset;
+-	uint32_t template_size;
++	__le32 template_size;
+ 	uint32_t count;		/* borrow field for running/residual count */
+ 
+ 	__le32 entry_count;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 03c6d0620bfd0..2d17137f8ff3b 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -2948,6 +2948,78 @@ void sdev_enable_disk_events(struct scsi_device *sdev)
+ }
+ EXPORT_SYMBOL(sdev_enable_disk_events);
+ 
++static unsigned char designator_prio(const unsigned char *d)
++{
++	if (d[1] & 0x30)
++		/* not associated with LUN */
++		return 0;
++
++	if (d[3] == 0)
++		/* invalid length */
++		return 0;
++
++	/*
++	 * Order of preference for lun descriptor:
++	 * - SCSI name string
++	 * - NAA IEEE Registered Extended
++	 * - EUI-64 based 16-byte
++	 * - EUI-64 based 12-byte
++	 * - NAA IEEE Registered
++	 * - NAA IEEE Extended
++	 * - EUI-64 based 8-byte
++	 * - SCSI name string (truncated)
++	 * - T10 Vendor ID
++	 * as longer descriptors reduce the likelyhood
++	 * of identification clashes.
++	 */
++
++	switch (d[1] & 0xf) {
++	case 8:
++		/* SCSI name string, variable-length UTF-8 */
++		return 9;
++	case 3:
++		switch (d[4] >> 4) {
++		case 6:
++			/* NAA registered extended */
++			return 8;
++		case 5:
++			/* NAA registered */
++			return 5;
++		case 4:
++			/* NAA extended */
++			return 4;
++		case 3:
++			/* NAA locally assigned */
++			return 1;
++		default:
++			break;
++		}
++		break;
++	case 2:
++		switch (d[3]) {
++		case 16:
++			/* EUI64-based, 16 byte */
++			return 7;
++		case 12:
++			/* EUI64-based, 12 byte */
++			return 6;
++		case 8:
++			/* EUI64-based, 8 byte */
++			return 3;
++		default:
++			break;
++		}
++		break;
++	case 1:
++		/* T10 vendor ID */
++		return 1;
++	default:
++		break;
++	}
++
++	return 0;
++}
++
+ /**
+  * scsi_vpd_lun_id - return a unique device identification
+  * @sdev: SCSI device
+@@ -2964,7 +3036,7 @@ EXPORT_SYMBOL(sdev_enable_disk_events);
+  */
+ int scsi_vpd_lun_id(struct scsi_device *sdev, char *id, size_t id_len)
+ {
+-	u8 cur_id_type = 0xff;
++	u8 cur_id_prio = 0;
+ 	u8 cur_id_size = 0;
+ 	const unsigned char *d, *cur_id_str;
+ 	const struct scsi_vpd *vpd_pg83;
+@@ -2977,20 +3049,6 @@ int scsi_vpd_lun_id(struct scsi_device *sdev, char *id, size_t id_len)
+ 		return -ENXIO;
+ 	}
+ 
+-	/*
+-	 * Look for the correct descriptor.
+-	 * Order of preference for lun descriptor:
+-	 * - SCSI name string
+-	 * - NAA IEEE Registered Extended
+-	 * - EUI-64 based 16-byte
+-	 * - EUI-64 based 12-byte
+-	 * - NAA IEEE Registered
+-	 * - NAA IEEE Extended
+-	 * - T10 Vendor ID
+-	 * as longer descriptors reduce the likelyhood
+-	 * of identification clashes.
+-	 */
+-
+ 	/* The id string must be at least 20 bytes + terminating NULL byte */
+ 	if (id_len < 21) {
+ 		rcu_read_unlock();
+@@ -3000,8 +3058,9 @@ int scsi_vpd_lun_id(struct scsi_device *sdev, char *id, size_t id_len)
+ 	memset(id, 0, id_len);
+ 	d = vpd_pg83->data + 4;
+ 	while (d < vpd_pg83->data + vpd_pg83->len) {
+-		/* Skip designators not referring to the LUN */
+-		if ((d[1] & 0x30) != 0x00)
++		u8 prio = designator_prio(d);
++
++		if (prio == 0 || cur_id_prio > prio)
+ 			goto next_desig;
+ 
+ 		switch (d[1] & 0xf) {
+@@ -3009,28 +3068,19 @@ int scsi_vpd_lun_id(struct scsi_device *sdev, char *id, size_t id_len)
+ 			/* T10 Vendor ID */
+ 			if (cur_id_size > d[3])
+ 				break;
+-			/* Prefer anything */
+-			if (cur_id_type > 0x01 && cur_id_type != 0xff)
+-				break;
++			cur_id_prio = prio;
+ 			cur_id_size = d[3];
+ 			if (cur_id_size + 4 > id_len)
+ 				cur_id_size = id_len - 4;
+ 			cur_id_str = d + 4;
+-			cur_id_type = d[1] & 0xf;
+ 			id_size = snprintf(id, id_len, "t10.%*pE",
+ 					   cur_id_size, cur_id_str);
+ 			break;
+ 		case 0x2:
+ 			/* EUI-64 */
+-			if (cur_id_size > d[3])
+-				break;
+-			/* Prefer NAA IEEE Registered Extended */
+-			if (cur_id_type == 0x3 &&
+-			    cur_id_size == d[3])
+-				break;
++			cur_id_prio = prio;
+ 			cur_id_size = d[3];
+ 			cur_id_str = d + 4;
+-			cur_id_type = d[1] & 0xf;
+ 			switch (cur_id_size) {
+ 			case 8:
+ 				id_size = snprintf(id, id_len,
+@@ -3048,17 +3098,14 @@ int scsi_vpd_lun_id(struct scsi_device *sdev, char *id, size_t id_len)
+ 						   cur_id_str);
+ 				break;
+ 			default:
+-				cur_id_size = 0;
+ 				break;
+ 			}
+ 			break;
+ 		case 0x3:
+ 			/* NAA */
+-			if (cur_id_size > d[3])
+-				break;
++			cur_id_prio = prio;
+ 			cur_id_size = d[3];
+ 			cur_id_str = d + 4;
+-			cur_id_type = d[1] & 0xf;
+ 			switch (cur_id_size) {
+ 			case 8:
+ 				id_size = snprintf(id, id_len,
+@@ -3071,26 +3118,25 @@ int scsi_vpd_lun_id(struct scsi_device *sdev, char *id, size_t id_len)
+ 						   cur_id_str);
+ 				break;
+ 			default:
+-				cur_id_size = 0;
+ 				break;
+ 			}
+ 			break;
+ 		case 0x8:
+ 			/* SCSI name string */
+-			if (cur_id_size + 4 > d[3])
++			if (cur_id_size > d[3])
+ 				break;
+ 			/* Prefer others for truncated descriptor */
+-			if (cur_id_size && d[3] > id_len)
+-				break;
++			if (d[3] > id_len) {
++				prio = 2;
++				if (cur_id_prio > prio)
++					break;
++			}
++			cur_id_prio = prio;
+ 			cur_id_size = id_size = d[3];
+ 			cur_id_str = d + 4;
+-			cur_id_type = d[1] & 0xf;
+ 			if (cur_id_size >= id_len)
+ 				cur_id_size = id_len - 1;
+ 			memcpy(id, cur_id_str, cur_id_size);
+-			/* Decrease priority for truncated descriptor */
+-			if (cur_id_size != id_size)
+-				cur_id_size = 6;
+ 			break;
+ 		default:
+ 			break;
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 2eb3e4f9375a5..2e68c0a876986 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2313,7 +2313,9 @@ iscsi_create_conn(struct iscsi_cls_session *session, int dd_size, uint32_t cid)
+ 	return conn;
+ 
+ release_conn_ref:
+-	put_device(&conn->dev);
++	device_unregister(&conn->dev);
++	put_device(&session->dev);
++	return NULL;
+ release_parent_ref:
+ 	put_device(&session->dev);
+ free_conn:
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 0c148fcd24deb..911aba3e7675c 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -1751,8 +1751,9 @@ static void __ufshcd_release(struct ufs_hba *hba)
+ 
+ 	if (hba->clk_gating.active_reqs || hba->clk_gating.is_suspended ||
+ 	    hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL ||
+-	    ufshcd_any_tag_in_use(hba) || hba->outstanding_tasks ||
+-	    hba->active_uic_cmd || hba->uic_async_done)
++	    hba->outstanding_tasks ||
++	    hba->active_uic_cmd || hba->uic_async_done ||
++	    hba->clk_gating.state == CLKS_OFF)
+ 		return;
+ 
+ 	hba->clk_gating.state = REQ_CLKS_OFF;
+diff --git a/drivers/slimbus/qcom-ctrl.c b/drivers/slimbus/qcom-ctrl.c
+index 4aad2566f52d2..f04b961b96cd4 100644
+--- a/drivers/slimbus/qcom-ctrl.c
++++ b/drivers/slimbus/qcom-ctrl.c
+@@ -472,15 +472,10 @@ static void qcom_slim_rxwq(struct work_struct *work)
+ static void qcom_slim_prg_slew(struct platform_device *pdev,
+ 				struct qcom_slim_ctrl *ctrl)
+ {
+-	struct resource	*slew_mem;
+-
+ 	if (!ctrl->slew_reg) {
+ 		/* SLEW RATE register for this SLIMbus */
+-		slew_mem = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+-				"slew");
+-		ctrl->slew_reg = devm_ioremap(&pdev->dev, slew_mem->start,
+-				resource_size(slew_mem));
+-		if (!ctrl->slew_reg)
++		ctrl->slew_reg = devm_platform_ioremap_resource_byname(pdev, "slew");
++		if (IS_ERR(ctrl->slew_reg))
+ 			return;
+ 	}
+ 
+diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c
+index 218aefc3531cd..50cfd67c2871e 100644
+--- a/drivers/slimbus/qcom-ngd-ctrl.c
++++ b/drivers/slimbus/qcom-ngd-ctrl.c
+@@ -1205,6 +1205,9 @@ static int qcom_slim_ngd_runtime_resume(struct device *dev)
+ 	struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(dev);
+ 	int ret = 0;
+ 
++	if (!ctrl->qmi.handle)
++		return 0;
++
+ 	if (ctrl->state >= QCOM_SLIM_NGD_CTRL_ASLEEP)
+ 		ret = qcom_slim_ngd_power_up(ctrl);
+ 	if (ret) {
+@@ -1503,6 +1506,9 @@ static int __maybe_unused qcom_slim_ngd_runtime_suspend(struct device *dev)
+ 	struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(dev);
+ 	int ret = 0;
+ 
++	if (!ctrl->qmi.handle)
++		return 0;
++
+ 	ret = qcom_slim_qmi_power_request(ctrl, false);
+ 	if (ret && ret != -EBUSY)
+ 		dev_info(ctrl->dev, "slim resource not idle:%d\n", ret);
+diff --git a/drivers/soc/amlogic/meson-canvas.c b/drivers/soc/amlogic/meson-canvas.c
+index c655f5f92b124..d0329ad170d13 100644
+--- a/drivers/soc/amlogic/meson-canvas.c
++++ b/drivers/soc/amlogic/meson-canvas.c
+@@ -72,8 +72,10 @@ struct meson_canvas *meson_canvas_get(struct device *dev)
+ 	 * current state, this driver probe cannot return -EPROBE_DEFER
+ 	 */
+ 	canvas = dev_get_drvdata(&canvas_pdev->dev);
+-	if (!canvas)
++	if (!canvas) {
++		put_device(&canvas_pdev->dev);
+ 		return ERR_PTR(-EINVAL);
++	}
+ 
+ 	return canvas;
+ }
+diff --git a/drivers/soc/mediatek/mtk-scpsys.c b/drivers/soc/mediatek/mtk-scpsys.c
+index f669d3754627d..ca75b14931ec9 100644
+--- a/drivers/soc/mediatek/mtk-scpsys.c
++++ b/drivers/soc/mediatek/mtk-scpsys.c
+@@ -524,6 +524,7 @@ static void mtk_register_power_domains(struct platform_device *pdev,
+ 	for (i = 0; i < num; i++) {
+ 		struct scp_domain *scpd = &scp->domains[i];
+ 		struct generic_pm_domain *genpd = &scpd->genpd;
++		bool on;
+ 
+ 		/*
+ 		 * Initially turn on all domains to make the domains usable
+@@ -531,9 +532,9 @@ static void mtk_register_power_domains(struct platform_device *pdev,
+ 		 * software.  The unused domains will be switched off during
+ 		 * late_init time.
+ 		 */
+-		genpd->power_on(genpd);
++		on = !WARN_ON(genpd->power_on(genpd) < 0);
+ 
+-		pm_genpd_init(genpd, NULL, false);
++		pm_genpd_init(genpd, NULL, !on);
+ 	}
+ 
+ 	/*
+diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
+index 088dc99f77f3f..f63135c09667f 100644
+--- a/drivers/soc/qcom/pdr_interface.c
++++ b/drivers/soc/qcom/pdr_interface.c
+@@ -569,7 +569,7 @@ EXPORT_SYMBOL(pdr_add_lookup);
+ int pdr_restart_pd(struct pdr_handle *pdr, struct pdr_service *pds)
+ {
+ 	struct servreg_restart_pd_resp resp;
+-	struct servreg_restart_pd_req req;
++	struct servreg_restart_pd_req req = { 0 };
+ 	struct sockaddr_qrtr addr;
+ 	struct pdr_service *tmp;
+ 	struct qmi_txn txn;
+diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
+index d0e4f520cff8c..751a49f6534f4 100644
+--- a/drivers/soc/qcom/qcom-geni-se.c
++++ b/drivers/soc/qcom/qcom-geni-se.c
+@@ -289,10 +289,23 @@ static void geni_se_select_fifo_mode(struct geni_se *se)
+ 
+ static void geni_se_select_dma_mode(struct geni_se *se)
+ {
++	u32 proto = geni_se_read_proto(se);
+ 	u32 val;
+ 
+ 	geni_se_irq_clear(se);
+ 
++	val = readl_relaxed(se->base + SE_GENI_M_IRQ_EN);
++	if (proto != GENI_SE_UART) {
++		val &= ~(M_CMD_DONE_EN | M_TX_FIFO_WATERMARK_EN);
++		val &= ~(M_RX_FIFO_WATERMARK_EN | M_RX_FIFO_LAST_EN);
++	}
++	writel_relaxed(val, se->base + SE_GENI_M_IRQ_EN);
++
++	val = readl_relaxed(se->base + SE_GENI_S_IRQ_EN);
++	if (proto != GENI_SE_UART)
++		val &= ~S_CMD_DONE_EN;
++	writel_relaxed(val, se->base + SE_GENI_S_IRQ_EN);
++
+ 	val = readl_relaxed(se->base + SE_GENI_DMA_MODE_EN);
+ 	val |= GENI_DMA_MODE_EN;
+ 	writel_relaxed(val, se->base + SE_GENI_DMA_MODE_EN);
+@@ -651,7 +664,7 @@ int geni_se_tx_dma_prep(struct geni_se *se, void *buf, size_t len,
+ 	writel_relaxed(lower_32_bits(*iova), se->base + SE_DMA_TX_PTR_L);
+ 	writel_relaxed(upper_32_bits(*iova), se->base + SE_DMA_TX_PTR_H);
+ 	writel_relaxed(GENI_SE_DMA_EOT_BUF, se->base + SE_DMA_TX_ATTR);
+-	writel_relaxed(len, se->base + SE_DMA_TX_LEN);
++	writel(len, se->base + SE_DMA_TX_LEN);
+ 	return 0;
+ }
+ EXPORT_SYMBOL(geni_se_tx_dma_prep);
+@@ -688,7 +701,7 @@ int geni_se_rx_dma_prep(struct geni_se *se, void *buf, size_t len,
+ 	writel_relaxed(upper_32_bits(*iova), se->base + SE_DMA_RX_PTR_H);
+ 	/* RX does not have EOT buffer type bit. So just reset RX_ATTR */
+ 	writel_relaxed(0, se->base + SE_DMA_RX_ATTR);
+-	writel_relaxed(len, se->base + SE_DMA_RX_LEN);
++	writel(len, se->base + SE_DMA_RX_LEN);
+ 	return 0;
+ }
+ EXPORT_SYMBOL(geni_se_rx_dma_prep);
+diff --git a/drivers/soc/qcom/smp2p.c b/drivers/soc/qcom/smp2p.c
+index 07183d731d747..a9709aae54abb 100644
+--- a/drivers/soc/qcom/smp2p.c
++++ b/drivers/soc/qcom/smp2p.c
+@@ -318,15 +318,16 @@ static int qcom_smp2p_inbound_entry(struct qcom_smp2p *smp2p,
+ static int smp2p_update_bits(void *data, u32 mask, u32 value)
+ {
+ 	struct smp2p_entry *entry = data;
++	unsigned long flags;
+ 	u32 orig;
+ 	u32 val;
+ 
+-	spin_lock(&entry->lock);
++	spin_lock_irqsave(&entry->lock, flags);
+ 	val = orig = readl(entry->value);
+ 	val &= ~mask;
+ 	val |= value;
+ 	writel(val, entry->value);
+-	spin_unlock(&entry->lock);
++	spin_unlock_irqrestore(&entry->lock, flags);
+ 
+ 	if (val != orig)
+ 		qcom_smp2p_kick(entry->smp2p);
+diff --git a/drivers/soc/renesas/rmobile-sysc.c b/drivers/soc/renesas/rmobile-sysc.c
+index 54b616ad4a62a..beb1c7211c3d6 100644
+--- a/drivers/soc/renesas/rmobile-sysc.c
++++ b/drivers/soc/renesas/rmobile-sysc.c
+@@ -327,6 +327,7 @@ static int __init rmobile_init_pm_domains(void)
+ 
+ 		pmd = of_get_child_by_name(np, "pm-domains");
+ 		if (!pmd) {
++			iounmap(base);
+ 			pr_warn("%pOF lacks pm-domains node\n", np);
+ 			continue;
+ 		}
+diff --git a/drivers/soc/rockchip/io-domain.c b/drivers/soc/rockchip/io-domain.c
+index eece97f97ef8f..b29e829e815e5 100644
+--- a/drivers/soc/rockchip/io-domain.c
++++ b/drivers/soc/rockchip/io-domain.c
+@@ -547,6 +547,7 @@ static int rockchip_iodomain_probe(struct platform_device *pdev)
+ 		if (uV < 0) {
+ 			dev_err(iod->dev, "Can't determine voltage: %s\n",
+ 				supply_name);
++			ret = uV;
+ 			goto unreg_notify;
+ 		}
+ 
+diff --git a/drivers/soc/ti/knav_dma.c b/drivers/soc/ti/knav_dma.c
+index 8c863ecb1c605..56597f6ea666a 100644
+--- a/drivers/soc/ti/knav_dma.c
++++ b/drivers/soc/ti/knav_dma.c
+@@ -749,8 +749,9 @@ static int knav_dma_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(kdev->dev);
+ 	ret = pm_runtime_get_sync(kdev->dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(kdev->dev);
+ 		dev_err(kdev->dev, "unable to enable pktdma, err %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* Initialise all packet dmas */
+@@ -764,7 +765,8 @@ static int knav_dma_probe(struct platform_device *pdev)
+ 
+ 	if (list_empty(&kdev->list)) {
+ 		dev_err(dev, "no valid dma instance\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err_put_sync;
+ 	}
+ 
+ 	debugfs_create_file("knav_dma", S_IFREG | S_IRUGO, NULL, NULL,
+@@ -772,6 +774,13 @@ static int knav_dma_probe(struct platform_device *pdev)
+ 
+ 	device_ready = true;
+ 	return ret;
++
++err_put_sync:
++	pm_runtime_put_sync(kdev->dev);
++err_pm_disable:
++	pm_runtime_disable(kdev->dev);
++
++	return ret;
+ }
+ 
+ static int knav_dma_remove(struct platform_device *pdev)
+diff --git a/drivers/soc/ti/knav_qmss_queue.c b/drivers/soc/ti/knav_qmss_queue.c
+index a460f201bf8e7..53e36d4328d1e 100644
+--- a/drivers/soc/ti/knav_qmss_queue.c
++++ b/drivers/soc/ti/knav_qmss_queue.c
+@@ -1784,6 +1784,7 @@ static int knav_queue_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(&pdev->dev);
+ 	ret = pm_runtime_get_sync(&pdev->dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(&pdev->dev);
+ 		dev_err(dev, "Failed to enable QMSS\n");
+ 		return ret;
+ 	}
+@@ -1851,9 +1852,10 @@ static int knav_queue_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err;
+ 
+-	regions =  of_get_child_by_name(node, "descriptor-regions");
++	regions = of_get_child_by_name(node, "descriptor-regions");
+ 	if (!regions) {
+ 		dev_err(dev, "descriptor-regions not specified\n");
++		ret = -ENODEV;
+ 		goto err;
+ 	}
+ 	ret = knav_queue_setup_regions(kdev, regions);
+diff --git a/drivers/soc/ti/omap_prm.c b/drivers/soc/ti/omap_prm.c
+index 980b04c38fd94..4d41dc3cdce1f 100644
+--- a/drivers/soc/ti/omap_prm.c
++++ b/drivers/soc/ti/omap_prm.c
+@@ -484,6 +484,10 @@ static int omap_reset_deassert(struct reset_controller_dev *rcdev,
+ 	struct ti_prm_platform_data *pdata = dev_get_platdata(reset->dev);
+ 	int ret = 0;
+ 
++	/* Nothing to do if the reset is already deasserted */
++	if (!omap_reset_status(rcdev, id))
++		return 0;
++
+ 	has_rstst = reset->prm->data->rstst ||
+ 		(reset->prm->data->flags & OMAP_PRM_HAS_RSTST);
+ 
+diff --git a/drivers/soundwire/master.c b/drivers/soundwire/master.c
+index 3488bb824e845..9b05c9e25ebe4 100644
+--- a/drivers/soundwire/master.c
++++ b/drivers/soundwire/master.c
+@@ -8,6 +8,15 @@
+ #include <linux/soundwire/sdw_type.h>
+ #include "bus.h"
+ 
++/*
++ * The 3s value for autosuspend will only be used if there are no
++ * devices physically attached on a bus segment. In practice enabling
++ * the bus operation will result in children devices become active and
++ * the master device will only suspend when all its children are no
++ * longer active.
++ */
++#define SDW_MASTER_SUSPEND_DELAY_MS 3000
++
+ /*
+  * The sysfs for properties reflects the MIPI description as given
+  * in the MIPI DisCo spec
+@@ -154,7 +163,12 @@ int sdw_master_device_add(struct sdw_bus *bus, struct device *parent,
+ 	bus->dev = &md->dev;
+ 	bus->md = md;
+ 
++	pm_runtime_set_autosuspend_delay(&bus->md->dev, SDW_MASTER_SUSPEND_DELAY_MS);
++	pm_runtime_use_autosuspend(&bus->md->dev);
++	pm_runtime_mark_last_busy(&bus->md->dev);
++	pm_runtime_set_active(&bus->md->dev);
+ 	pm_runtime_enable(&bus->md->dev);
++	pm_runtime_idle(&bus->md->dev);
+ device_register_err:
+ 	return ret;
+ }
+diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c
+index fbca4ebf63e92..6d22df01f3547 100644
+--- a/drivers/soundwire/qcom.c
++++ b/drivers/soundwire/qcom.c
+@@ -799,7 +799,7 @@ static int qcom_swrm_probe(struct platform_device *pdev)
+ 	data = of_device_get_match_data(dev);
+ 	ctrl->rows_index = sdw_find_row_index(data->default_rows);
+ 	ctrl->cols_index = sdw_find_col_index(data->default_cols);
+-#if IS_ENABLED(CONFIG_SLIMBUS)
++#if IS_REACHABLE(CONFIG_SLIMBUS)
+ 	if (dev->parent->bus == &slimbus_bus) {
+ #else
+ 	if (false) {
+diff --git a/drivers/soundwire/sysfs_slave_dpn.c b/drivers/soundwire/sysfs_slave_dpn.c
+index 05a721ea9830a..c4b6543c09fd6 100644
+--- a/drivers/soundwire/sysfs_slave_dpn.c
++++ b/drivers/soundwire/sysfs_slave_dpn.c
+@@ -37,6 +37,7 @@ static int field##_attribute_alloc(struct device *dev,			\
+ 		return -ENOMEM;						\
+ 	dpn_attr->N = N;						\
+ 	dpn_attr->dir = dir;						\
++	sysfs_attr_init(&dpn_attr->dev_attr.attr);			\
+ 	dpn_attr->format_string = format_string;			\
+ 	dpn_attr->dev_attr.attr.name = __stringify(field);		\
+ 	dpn_attr->dev_attr.attr.mode = 0444;				\
+diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
+index 5cff60de8e834..3fd16b7f61507 100644
+--- a/drivers/spi/Kconfig
++++ b/drivers/spi/Kconfig
+@@ -255,6 +255,7 @@ config SPI_DW_MMIO
+ config SPI_DW_BT1
+ 	tristate "Baikal-T1 SPI driver for DW SPI core"
+ 	depends on MIPS_BAIKAL_T1 || COMPILE_TEST
++	select MULTIPLEXER
+ 	help
+ 	  Baikal-T1 SoC is equipped with three DW APB SSI-based MMIO SPI
+ 	  controllers. Two of them are pretty much normal: with IRQ, DMA,
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index 8c009c175f2c4..1e63fd4821f96 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -365,10 +365,14 @@ static int atmel_qspi_set_cfg(struct atmel_qspi *aq,
+ 	if (dummy_cycles)
+ 		ifr |= QSPI_IFR_NBDUM(dummy_cycles);
+ 
+-	/* Set data enable */
+-	if (op->data.nbytes)
++	/* Set data enable and data transfer type. */
++	if (op->data.nbytes) {
+ 		ifr |= QSPI_IFR_DATAEN;
+ 
++		if (op->addr.nbytes)
++			ifr |= QSPI_IFR_TFRTYP_MEM;
++	}
++
+ 	/*
+ 	 * If the QSPI controller is set in regular SPI mode, set it in
+ 	 * Serial Memory Mode (SMM).
+@@ -393,7 +397,7 @@ static int atmel_qspi_set_cfg(struct atmel_qspi *aq,
+ 			atmel_qspi_write(icr, aq, QSPI_WICR);
+ 		atmel_qspi_write(ifr, aq, QSPI_IFR);
+ 	} else {
+-		if (op->data.dir == SPI_MEM_DATA_OUT)
++		if (op->data.nbytes && op->data.dir == SPI_MEM_DATA_OUT)
+ 			ifr |= QSPI_IFR_SAMA5D2_WRITE_TRSFR;
+ 
+ 		/* Set QSPI Instruction Frame registers */
+@@ -535,7 +539,7 @@ static int atmel_qspi_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	int irq, err = 0;
+ 
+-	ctrl = spi_alloc_master(&pdev->dev, sizeof(*aq));
++	ctrl = devm_spi_alloc_master(&pdev->dev, sizeof(*aq));
+ 	if (!ctrl)
+ 		return -ENOMEM;
+ 
+@@ -557,8 +561,7 @@ static int atmel_qspi_probe(struct platform_device *pdev)
+ 	aq->regs = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(aq->regs)) {
+ 		dev_err(&pdev->dev, "missing registers\n");
+-		err = PTR_ERR(aq->regs);
+-		goto exit;
++		return PTR_ERR(aq->regs);
+ 	}
+ 
+ 	/* Map the AHB memory */
+@@ -566,8 +569,7 @@ static int atmel_qspi_probe(struct platform_device *pdev)
+ 	aq->mem = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(aq->mem)) {
+ 		dev_err(&pdev->dev, "missing AHB memory\n");
+-		err = PTR_ERR(aq->mem);
+-		goto exit;
++		return PTR_ERR(aq->mem);
+ 	}
+ 
+ 	aq->mmap_size = resource_size(res);
+@@ -579,22 +581,21 @@ static int atmel_qspi_probe(struct platform_device *pdev)
+ 
+ 	if (IS_ERR(aq->pclk)) {
+ 		dev_err(&pdev->dev, "missing peripheral clock\n");
+-		err = PTR_ERR(aq->pclk);
+-		goto exit;
++		return PTR_ERR(aq->pclk);
+ 	}
+ 
+ 	/* Enable the peripheral clock */
+ 	err = clk_prepare_enable(aq->pclk);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "failed to enable the peripheral clock\n");
+-		goto exit;
++		return err;
+ 	}
+ 
+ 	aq->caps = of_device_get_match_data(&pdev->dev);
+ 	if (!aq->caps) {
+ 		dev_err(&pdev->dev, "Could not retrieve QSPI caps\n");
+ 		err = -EINVAL;
+-		goto exit;
++		goto disable_pclk;
+ 	}
+ 
+ 	if (aq->caps->has_qspick) {
+@@ -638,8 +639,6 @@ disable_qspick:
+ 	clk_disable_unprepare(aq->qspick);
+ disable_pclk:
+ 	clk_disable_unprepare(aq->pclk);
+-exit:
+-	spi_controller_put(ctrl);
+ 
+ 	return err;
+ }
+diff --git a/drivers/spi/spi-ar934x.c b/drivers/spi/spi-ar934x.c
+index d08dec09d423d..def32e0aaefe3 100644
+--- a/drivers/spi/spi-ar934x.c
++++ b/drivers/spi/spi-ar934x.c
+@@ -176,10 +176,11 @@ static int ar934x_spi_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	ctlr = spi_alloc_master(&pdev->dev, sizeof(*sp));
++	ctlr = devm_spi_alloc_master(&pdev->dev, sizeof(*sp));
+ 	if (!ctlr) {
+ 		dev_info(&pdev->dev, "failed to allocate spi controller\n");
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto err_clk_disable;
+ 	}
+ 
+ 	/* disable flash mapping and expose spi controller registers */
+@@ -202,7 +203,13 @@ static int ar934x_spi_probe(struct platform_device *pdev)
+ 	sp->clk_freq = clk_get_rate(clk);
+ 	sp->ctlr = ctlr;
+ 
+-	return devm_spi_register_controller(&pdev->dev, ctlr);
++	ret = spi_register_controller(ctlr);
++	if (!ret)
++		return 0;
++
++err_clk_disable:
++	clk_disable_unprepare(clk);
++	return ret;
+ }
+ 
+ static int ar934x_spi_remove(struct platform_device *pdev)
+@@ -213,6 +220,7 @@ static int ar934x_spi_remove(struct platform_device *pdev)
+ 	ctlr = dev_get_drvdata(&pdev->dev);
+ 	sp = spi_controller_get_devdata(ctlr);
+ 
++	spi_unregister_controller(ctlr);
+ 	clk_disable_unprepare(sp->clk);
+ 
+ 	return 0;
+diff --git a/drivers/spi/spi-bcm63xx-hsspi.c b/drivers/spi/spi-bcm63xx-hsspi.c
+index 9909b18f3c5a5..1f08d7553f079 100644
+--- a/drivers/spi/spi-bcm63xx-hsspi.c
++++ b/drivers/spi/spi-bcm63xx-hsspi.c
+@@ -494,8 +494,10 @@ static int bcm63xx_hsspi_resume(struct device *dev)
+ 
+ 	if (bs->pll_clk) {
+ 		ret = clk_prepare_enable(bs->pll_clk);
+-		if (ret)
++		if (ret) {
++			clk_disable_unprepare(bs->clk);
+ 			return ret;
++		}
+ 	}
+ 
+ 	spi_master_resume(master);
+diff --git a/drivers/spi/spi-davinci.c b/drivers/spi/spi-davinci.c
+index 818f2b22875d2..7453a1dbbc061 100644
+--- a/drivers/spi/spi-davinci.c
++++ b/drivers/spi/spi-davinci.c
+@@ -1040,13 +1040,13 @@ static int davinci_spi_remove(struct platform_device *pdev)
+ 	spi_bitbang_stop(&dspi->bitbang);
+ 
+ 	clk_disable_unprepare(dspi->clk);
+-	spi_master_put(master);
+ 
+ 	if (dspi->dma_rx) {
+ 		dma_release_channel(dspi->dma_rx);
+ 		dma_release_channel(dspi->dma_tx);
+ 	}
+ 
++	spi_master_put(master);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/spi/spi-dw-bt1.c b/drivers/spi/spi-dw-bt1.c
+index f382dfad78421..c279b7891e3ac 100644
+--- a/drivers/spi/spi-dw-bt1.c
++++ b/drivers/spi/spi-dw-bt1.c
+@@ -280,8 +280,10 @@ static int dw_spi_bt1_probe(struct platform_device *pdev)
+ 	dws->bus_num = pdev->id;
+ 	dws->reg_io_width = 4;
+ 	dws->max_freq = clk_get_rate(dwsbt1->clk);
+-	if (!dws->max_freq)
++	if (!dws->max_freq) {
++		ret = -EINVAL;
+ 		goto err_disable_clk;
++	}
+ 
+ 	init_func = device_get_match_data(&pdev->dev);
+ 	ret = init_func(pdev, dwsbt1);
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 1a08c1d584abe..0287366874882 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -1165,7 +1165,7 @@ static int dspi_init(struct fsl_dspi *dspi)
+ 	unsigned int mcr;
+ 
+ 	/* Set idle states for all chip select signals to high */
+-	mcr = SPI_MCR_PCSIS(GENMASK(dspi->ctlr->num_chipselect - 1, 0));
++	mcr = SPI_MCR_PCSIS(GENMASK(dspi->ctlr->max_native_cs - 1, 0));
+ 
+ 	if (dspi->devtype_data->trans_mode == DSPI_XSPI_MODE)
+ 		mcr |= SPI_MCR_XSPI;
+@@ -1250,7 +1250,7 @@ static int dspi_probe(struct platform_device *pdev)
+ 
+ 	pdata = dev_get_platdata(&pdev->dev);
+ 	if (pdata) {
+-		ctlr->num_chipselect = pdata->cs_num;
++		ctlr->num_chipselect = ctlr->max_native_cs = pdata->cs_num;
+ 		ctlr->bus_num = pdata->bus_num;
+ 
+ 		/* Only Coldfire uses platform data */
+@@ -1263,7 +1263,7 @@ static int dspi_probe(struct platform_device *pdev)
+ 			dev_err(&pdev->dev, "can't get spi-num-chipselects\n");
+ 			goto out_ctlr_put;
+ 		}
+-		ctlr->num_chipselect = cs_num;
++		ctlr->num_chipselect = ctlr->max_native_cs = cs_num;
+ 
+ 		of_property_read_u32(np, "bus-num", &bus_num);
+ 		ctlr->bus_num = bus_num;
+diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
+index 299e9870cf58d..9494257e1c33f 100644
+--- a/drivers/spi/spi-fsl-spi.c
++++ b/drivers/spi/spi-fsl-spi.c
+@@ -716,10 +716,11 @@ static int of_fsl_spi_probe(struct platform_device *ofdev)
+ 	type = fsl_spi_get_type(&ofdev->dev);
+ 	if (type == TYPE_FSL) {
+ 		struct fsl_spi_platform_data *pdata = dev_get_platdata(dev);
++		bool spisel_boot = false;
+ #if IS_ENABLED(CONFIG_FSL_SOC)
+ 		struct mpc8xxx_spi_probe_info *pinfo = to_of_pinfo(pdata);
+-		bool spisel_boot = of_property_read_bool(np, "fsl,spisel_boot");
+ 
++		spisel_boot = of_property_read_bool(np, "fsl,spisel_boot");
+ 		if (spisel_boot) {
+ 			pinfo->immr_spi_cs = ioremap(get_immrbase() + IMMR_SPI_CS_OFFSET, 4);
+ 			if (!pinfo->immr_spi_cs)
+@@ -734,10 +735,14 @@ static int of_fsl_spi_probe(struct platform_device *ofdev)
+ 		 * supported on the GRLIB variant.
+ 		 */
+ 		ret = gpiod_count(dev, "cs");
+-		if (ret <= 0)
++		if (ret < 0)
++			ret = 0;
++		if (ret == 0 && !spisel_boot) {
+ 			pdata->max_chipselect = 1;
+-		else
++		} else {
++			pdata->max_chipselect = ret + spisel_boot;
+ 			pdata->cs_control = fsl_spi_cs_control;
++		}
+ 	}
+ 
+ 	ret = of_address_to_resource(np, 0, &mem);
+diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
+index 25810a7eef101..0e3d8e6c08f42 100644
+--- a/drivers/spi/spi-geni-qcom.c
++++ b/drivers/spi/spi-geni-qcom.c
+@@ -603,7 +603,7 @@ static int spi_geni_probe(struct platform_device *pdev)
+ 	if (IS_ERR(clk))
+ 		return PTR_ERR(clk);
+ 
+-	spi = spi_alloc_master(dev, sizeof(*mas));
++	spi = devm_spi_alloc_master(dev, sizeof(*mas));
+ 	if (!spi)
+ 		return -ENOMEM;
+ 
+@@ -673,7 +673,6 @@ spi_geni_probe_free_irq:
+ 	free_irq(mas->irq, spi);
+ spi_geni_probe_runtime_disable:
+ 	pm_runtime_disable(dev);
+-	spi_master_put(spi);
+ 	dev_pm_opp_of_remove_table(&pdev->dev);
+ put_clkname:
+ 	dev_pm_opp_put_clkname(mas->se.opp_table);
+diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
+index 7ceb0ba27b755..0584f4d2fde29 100644
+--- a/drivers/spi/spi-gpio.c
++++ b/drivers/spi/spi-gpio.c
+@@ -350,11 +350,6 @@ static int spi_gpio_probe_pdata(struct platform_device *pdev,
+ 	return 0;
+ }
+ 
+-static void spi_gpio_put(void *data)
+-{
+-	spi_master_put(data);
+-}
+-
+ static int spi_gpio_probe(struct platform_device *pdev)
+ {
+ 	int				status;
+@@ -363,16 +358,10 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ 	struct device			*dev = &pdev->dev;
+ 	struct spi_bitbang		*bb;
+ 
+-	master = spi_alloc_master(dev, sizeof(*spi_gpio));
++	master = devm_spi_alloc_master(dev, sizeof(*spi_gpio));
+ 	if (!master)
+ 		return -ENOMEM;
+ 
+-	status = devm_add_action_or_reset(&pdev->dev, spi_gpio_put, master);
+-	if (status) {
+-		spi_master_put(master);
+-		return status;
+-	}
+-
+ 	if (pdev->dev.of_node)
+ 		status = spi_gpio_probe_dt(pdev, master);
+ 	else
+@@ -432,7 +421,7 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ 	if (status)
+ 		return status;
+ 
+-	return devm_spi_register_master(&pdev->dev, spi_master_get(master));
++	return devm_spi_register_master(&pdev->dev, master);
+ }
+ 
+ MODULE_ALIAS("platform:" DRIVER_NAME);
+diff --git a/drivers/spi/spi-img-spfi.c b/drivers/spi/spi-img-spfi.c
+index b068537375d60..5f05d519fbbd0 100644
+--- a/drivers/spi/spi-img-spfi.c
++++ b/drivers/spi/spi-img-spfi.c
+@@ -731,8 +731,10 @@ static int img_spfi_resume(struct device *dev)
+ 	int ret;
+ 
+ 	ret = pm_runtime_get_sync(dev);
+-	if (ret)
++	if (ret) {
++		pm_runtime_put_noidle(dev);
+ 		return ret;
++	}
+ 	spfi_reset(spfi);
+ 	pm_runtime_put(dev);
+ 
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 0b597905ee72c..8df5e973404f0 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -1538,6 +1538,7 @@ spi_imx_prepare_message(struct spi_master *master, struct spi_message *msg)
+ 
+ 	ret = pm_runtime_get_sync(spi_imx->dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(spi_imx->dev);
+ 		dev_err(spi_imx->dev, "failed to enable clock\n");
+ 		return ret;
+ 	}
+@@ -1748,6 +1749,7 @@ static int spi_imx_remove(struct platform_device *pdev)
+ 
+ 	ret = pm_runtime_get_sync(spi_imx->dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(spi_imx->dev);
+ 		dev_err(spi_imx->dev, "failed to enable clock\n");
+ 		return ret;
+ 	}
+diff --git a/drivers/spi/spi-mem.c b/drivers/spi/spi-mem.c
+index ef53290b7d24d..4682f49dc7330 100644
+--- a/drivers/spi/spi-mem.c
++++ b/drivers/spi/spi-mem.c
+@@ -243,6 +243,7 @@ static int spi_mem_access_start(struct spi_mem *mem)
+ 
+ 		ret = pm_runtime_get_sync(ctlr->dev.parent);
+ 		if (ret < 0) {
++			pm_runtime_put_noidle(ctlr->dev.parent);
+ 			dev_err(&ctlr->dev, "Failed to power device: %d\n",
+ 				ret);
+ 			return ret;
+diff --git a/drivers/spi/spi-mt7621.c b/drivers/spi/spi-mt7621.c
+index 2c3b7a2a1ec77..b4b9b7309b5e9 100644
+--- a/drivers/spi/spi-mt7621.c
++++ b/drivers/spi/spi-mt7621.c
+@@ -350,9 +350,10 @@ static int mt7621_spi_probe(struct platform_device *pdev)
+ 	if (status)
+ 		return status;
+ 
+-	master = spi_alloc_master(&pdev->dev, sizeof(*rs));
++	master = devm_spi_alloc_master(&pdev->dev, sizeof(*rs));
+ 	if (!master) {
+ 		dev_info(&pdev->dev, "master allocation failed\n");
++		clk_disable_unprepare(clk);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -377,10 +378,15 @@ static int mt7621_spi_probe(struct platform_device *pdev)
+ 	ret = device_reset(&pdev->dev);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "SPI reset failed!\n");
++		clk_disable_unprepare(clk);
+ 		return ret;
+ 	}
+ 
+-	return devm_spi_register_controller(&pdev->dev, master);
++	ret = spi_register_controller(master);
++	if (ret)
++		clk_disable_unprepare(clk);
++
++	return ret;
+ }
+ 
+ static int mt7621_spi_remove(struct platform_device *pdev)
+@@ -391,6 +397,7 @@ static int mt7621_spi_remove(struct platform_device *pdev)
+ 	master = dev_get_drvdata(&pdev->dev);
+ 	rs = spi_controller_get_devdata(master);
+ 
++	spi_unregister_controller(master);
+ 	clk_disable_unprepare(rs->clk);
+ 
+ 	return 0;
+diff --git a/drivers/spi/spi-mtk-nor.c b/drivers/spi/spi-mtk-nor.c
+index b97f26a60cbef..288f6c2bbd573 100644
+--- a/drivers/spi/spi-mtk-nor.c
++++ b/drivers/spi/spi-mtk-nor.c
+@@ -768,7 +768,7 @@ static int mtk_nor_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	ctlr = spi_alloc_master(&pdev->dev, sizeof(*sp));
++	ctlr = devm_spi_alloc_master(&pdev->dev, sizeof(*sp));
+ 	if (!ctlr) {
+ 		dev_err(&pdev->dev, "failed to allocate spi controller\n");
+ 		return -ENOMEM;
+diff --git a/drivers/spi/spi-mxic.c b/drivers/spi/spi-mxic.c
+index 8c630acb0110b..96b418293bf2a 100644
+--- a/drivers/spi/spi-mxic.c
++++ b/drivers/spi/spi-mxic.c
+@@ -529,7 +529,7 @@ static int mxic_spi_probe(struct platform_device *pdev)
+ 	struct mxic_spi *mxic;
+ 	int ret;
+ 
+-	master = spi_alloc_master(&pdev->dev, sizeof(struct mxic_spi));
++	master = devm_spi_alloc_master(&pdev->dev, sizeof(struct mxic_spi));
+ 	if (!master)
+ 		return -ENOMEM;
+ 
+@@ -574,15 +574,9 @@ static int mxic_spi_probe(struct platform_device *pdev)
+ 	ret = spi_register_master(master);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "spi_register_master failed\n");
+-		goto err_put_master;
++		pm_runtime_disable(&pdev->dev);
+ 	}
+ 
+-	return 0;
+-
+-err_put_master:
+-	spi_master_put(master);
+-	pm_runtime_disable(&pdev->dev);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/spi/spi-mxs.c b/drivers/spi/spi-mxs.c
+index 918918a9e0491..435309b09227e 100644
+--- a/drivers/spi/spi-mxs.c
++++ b/drivers/spi/spi-mxs.c
+@@ -607,6 +607,7 @@ static int mxs_spi_probe(struct platform_device *pdev)
+ 
+ 	ret = pm_runtime_get_sync(ssp->dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(ssp->dev);
+ 		dev_err(ssp->dev, "runtime_get_sync failed\n");
+ 		goto out_pm_runtime_disable;
+ 	}
+diff --git a/drivers/spi/spi-npcm-fiu.c b/drivers/spi/spi-npcm-fiu.c
+index 1cb9329de945e..b62471ab6d7f2 100644
+--- a/drivers/spi/spi-npcm-fiu.c
++++ b/drivers/spi/spi-npcm-fiu.c
+@@ -677,7 +677,7 @@ static int npcm_fiu_probe(struct platform_device *pdev)
+ 	struct npcm_fiu_spi *fiu;
+ 	void __iomem *regbase;
+ 	struct resource *res;
+-	int id;
++	int id, ret;
+ 
+ 	ctrl = devm_spi_alloc_master(dev, sizeof(*fiu));
+ 	if (!ctrl)
+@@ -735,7 +735,11 @@ static int npcm_fiu_probe(struct platform_device *pdev)
+ 	ctrl->num_chipselect = fiu->info->max_cs;
+ 	ctrl->dev.of_node = dev->of_node;
+ 
+-	return devm_spi_register_master(dev, ctrl);
++	ret = devm_spi_register_master(dev, ctrl);
++	if (ret)
++		clk_disable_unprepare(fiu->clk);
++
++	return ret;
+ }
+ 
+ static int npcm_fiu_remove(struct platform_device *pdev)
+diff --git a/drivers/spi/spi-pic32.c b/drivers/spi/spi-pic32.c
+index 156961b4ca86f..104bde153efd2 100644
+--- a/drivers/spi/spi-pic32.c
++++ b/drivers/spi/spi-pic32.c
+@@ -839,6 +839,7 @@ static int pic32_spi_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_bailout:
++	pic32_spi_dma_unprep(pic32s);
+ 	clk_disable_unprepare(pic32s->clk);
+ err_master:
+ 	spi_master_put(master);
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index 814268405ab0b..d6b534d38e5da 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -1686,9 +1686,9 @@ static int pxa2xx_spi_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	if (platform_info->is_slave)
+-		controller = spi_alloc_slave(dev, sizeof(struct driver_data));
++		controller = devm_spi_alloc_slave(dev, sizeof(*drv_data));
+ 	else
+-		controller = spi_alloc_master(dev, sizeof(struct driver_data));
++		controller = devm_spi_alloc_master(dev, sizeof(*drv_data));
+ 
+ 	if (!controller) {
+ 		dev_err(&pdev->dev, "cannot alloc spi_controller\n");
+@@ -1911,7 +1911,6 @@ out_error_dma_irq_alloc:
+ 	free_irq(ssp->irq, drv_data);
+ 
+ out_error_controller_alloc:
+-	spi_controller_put(controller);
+ 	pxa_ssp_free(ssp);
+ 	return status;
+ }
+diff --git a/drivers/spi/spi-qcom-qspi.c b/drivers/spi/spi-qcom-qspi.c
+index 5eed88af6899b..8863be3708845 100644
+--- a/drivers/spi/spi-qcom-qspi.c
++++ b/drivers/spi/spi-qcom-qspi.c
+@@ -462,7 +462,7 @@ static int qcom_qspi_probe(struct platform_device *pdev)
+ 
+ 	dev = &pdev->dev;
+ 
+-	master = spi_alloc_master(dev, sizeof(*ctrl));
++	master = devm_spi_alloc_master(dev, sizeof(*ctrl));
+ 	if (!master)
+ 		return -ENOMEM;
+ 
+@@ -473,54 +473,49 @@ static int qcom_qspi_probe(struct platform_device *pdev)
+ 	spin_lock_init(&ctrl->lock);
+ 	ctrl->dev = dev;
+ 	ctrl->base = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(ctrl->base)) {
+-		ret = PTR_ERR(ctrl->base);
+-		goto exit_probe_master_put;
+-	}
++	if (IS_ERR(ctrl->base))
++		return PTR_ERR(ctrl->base);
+ 
+ 	ctrl->clks = devm_kcalloc(dev, QSPI_NUM_CLKS,
+ 				  sizeof(*ctrl->clks), GFP_KERNEL);
+-	if (!ctrl->clks) {
+-		ret = -ENOMEM;
+-		goto exit_probe_master_put;
+-	}
++	if (!ctrl->clks)
++		return -ENOMEM;
+ 
+ 	ctrl->clks[QSPI_CLK_CORE].id = "core";
+ 	ctrl->clks[QSPI_CLK_IFACE].id = "iface";
+ 	ret = devm_clk_bulk_get(dev, QSPI_NUM_CLKS, ctrl->clks);
+ 	if (ret)
+-		goto exit_probe_master_put;
++		return ret;
+ 
+ 	ctrl->icc_path_cpu_to_qspi = devm_of_icc_get(dev, "qspi-config");
+-	if (IS_ERR(ctrl->icc_path_cpu_to_qspi)) {
+-		ret = dev_err_probe(dev, PTR_ERR(ctrl->icc_path_cpu_to_qspi),
+-				    "Failed to get cpu path\n");
+-		goto exit_probe_master_put;
+-	}
++	if (IS_ERR(ctrl->icc_path_cpu_to_qspi))
++		return dev_err_probe(dev, PTR_ERR(ctrl->icc_path_cpu_to_qspi),
++				     "Failed to get cpu path\n");
++
+ 	/* Set BW vote for register access */
+ 	ret = icc_set_bw(ctrl->icc_path_cpu_to_qspi, Bps_to_icc(1000),
+ 				Bps_to_icc(1000));
+ 	if (ret) {
+ 		dev_err(ctrl->dev, "%s: ICC BW voting failed for cpu: %d\n",
+ 				__func__, ret);
+-		goto exit_probe_master_put;
++		return ret;
+ 	}
+ 
+ 	ret = icc_disable(ctrl->icc_path_cpu_to_qspi);
+ 	if (ret) {
+ 		dev_err(ctrl->dev, "%s: ICC disable failed for cpu: %d\n",
+ 				__func__, ret);
+-		goto exit_probe_master_put;
++		return ret;
+ 	}
+ 
+ 	ret = platform_get_irq(pdev, 0);
+ 	if (ret < 0)
+-		goto exit_probe_master_put;
++		return ret;
+ 	ret = devm_request_irq(dev, ret, qcom_qspi_irq,
+ 			IRQF_TRIGGER_HIGH, dev_name(dev), ctrl);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to request irq %d\n", ret);
+-		goto exit_probe_master_put;
++		return ret;
+ 	}
+ 
+ 	master->max_speed_hz = 300000000;
+@@ -537,10 +532,8 @@ static int qcom_qspi_probe(struct platform_device *pdev)
+ 	master->auto_runtime_pm = true;
+ 
+ 	ctrl->opp_table = dev_pm_opp_set_clkname(&pdev->dev, "core");
+-	if (IS_ERR(ctrl->opp_table)) {
+-		ret = PTR_ERR(ctrl->opp_table);
+-		goto exit_probe_master_put;
+-	}
++	if (IS_ERR(ctrl->opp_table))
++		return PTR_ERR(ctrl->opp_table);
+ 	/* OPP table is optional */
+ 	ret = dev_pm_opp_of_add_table(&pdev->dev);
+ 	if (ret && ret != -ENODEV) {
+@@ -562,9 +555,6 @@ static int qcom_qspi_probe(struct platform_device *pdev)
+ exit_probe_put_clkname:
+ 	dev_pm_opp_put_clkname(ctrl->opp_table);
+ 
+-exit_probe_master_put:
+-	spi_master_put(master);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/spi/spi-rb4xx.c b/drivers/spi/spi-rb4xx.c
+index 8aa51beb4ff3e..9f97d18a05c10 100644
+--- a/drivers/spi/spi-rb4xx.c
++++ b/drivers/spi/spi-rb4xx.c
+@@ -143,7 +143,7 @@ static int rb4xx_spi_probe(struct platform_device *pdev)
+ 	if (IS_ERR(spi_base))
+ 		return PTR_ERR(spi_base);
+ 
+-	master = spi_alloc_master(&pdev->dev, sizeof(*rbspi));
++	master = devm_spi_alloc_master(&pdev->dev, sizeof(*rbspi));
+ 	if (!master)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/spi/spi-rpc-if.c b/drivers/spi/spi-rpc-if.c
+index ed3e548227f47..3579675485a5e 100644
+--- a/drivers/spi/spi-rpc-if.c
++++ b/drivers/spi/spi-rpc-if.c
+@@ -134,7 +134,7 @@ static int rpcif_spi_probe(struct platform_device *pdev)
+ 	struct rpcif *rpc;
+ 	int error;
+ 
+-	ctlr = spi_alloc_master(&pdev->dev, sizeof(*rpc));
++	ctlr = devm_spi_alloc_master(&pdev->dev, sizeof(*rpc));
+ 	if (!ctlr)
+ 		return -ENOMEM;
+ 
+@@ -159,13 +159,8 @@ static int rpcif_spi_probe(struct platform_device *pdev)
+ 	error = spi_register_controller(ctlr);
+ 	if (error) {
+ 		dev_err(&pdev->dev, "spi_register_controller failed\n");
+-		goto err_put_ctlr;
++		rpcif_disable_rpm(rpc);
+ 	}
+-	return 0;
+-
+-err_put_ctlr:
+-	rpcif_disable_rpm(rpc);
+-	spi_controller_put(ctlr);
+ 
+ 	return error;
+ }
+diff --git a/drivers/spi/spi-sc18is602.c b/drivers/spi/spi-sc18is602.c
+index ee0f3edf49cdb..297c512069a57 100644
+--- a/drivers/spi/spi-sc18is602.c
++++ b/drivers/spi/spi-sc18is602.c
+@@ -238,13 +238,12 @@ static int sc18is602_probe(struct i2c_client *client,
+ 	struct sc18is602_platform_data *pdata = dev_get_platdata(dev);
+ 	struct sc18is602 *hw;
+ 	struct spi_master *master;
+-	int error;
+ 
+ 	if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C |
+ 				     I2C_FUNC_SMBUS_WRITE_BYTE_DATA))
+ 		return -EINVAL;
+ 
+-	master = spi_alloc_master(dev, sizeof(struct sc18is602));
++	master = devm_spi_alloc_master(dev, sizeof(struct sc18is602));
+ 	if (!master)
+ 		return -ENOMEM;
+ 
+@@ -298,15 +297,7 @@ static int sc18is602_probe(struct i2c_client *client,
+ 	master->min_speed_hz = hw->freq / 128;
+ 	master->max_speed_hz = hw->freq / 4;
+ 
+-	error = devm_spi_register_master(dev, master);
+-	if (error)
+-		goto error_reg;
+-
+-	return 0;
+-
+-error_reg:
+-	spi_master_put(master);
+-	return error;
++	return devm_spi_register_master(dev, master);
+ }
+ 
+ static const struct i2c_device_id sc18is602_id[] = {
+diff --git a/drivers/spi/spi-sh.c b/drivers/spi/spi-sh.c
+index 20bdae5fdf3b8..15123a8f41e1e 100644
+--- a/drivers/spi/spi-sh.c
++++ b/drivers/spi/spi-sh.c
+@@ -440,7 +440,7 @@ static int spi_sh_probe(struct platform_device *pdev)
+ 	if (irq < 0)
+ 		return irq;
+ 
+-	master = spi_alloc_master(&pdev->dev, sizeof(struct spi_sh_data));
++	master = devm_spi_alloc_master(&pdev->dev, sizeof(struct spi_sh_data));
+ 	if (master == NULL) {
+ 		dev_err(&pdev->dev, "spi_alloc_master error.\n");
+ 		return -ENOMEM;
+@@ -458,16 +458,14 @@ static int spi_sh_probe(struct platform_device *pdev)
+ 		break;
+ 	default:
+ 		dev_err(&pdev->dev, "No support width\n");
+-		ret = -ENODEV;
+-		goto error1;
++		return -ENODEV;
+ 	}
+ 	ss->irq = irq;
+ 	ss->master = master;
+ 	ss->addr = devm_ioremap(&pdev->dev, res->start, resource_size(res));
+ 	if (ss->addr == NULL) {
+ 		dev_err(&pdev->dev, "ioremap error.\n");
+-		ret = -ENOMEM;
+-		goto error1;
++		return -ENOMEM;
+ 	}
+ 	INIT_LIST_HEAD(&ss->queue);
+ 	spin_lock_init(&ss->lock);
+@@ -477,7 +475,7 @@ static int spi_sh_probe(struct platform_device *pdev)
+ 	ret = request_irq(irq, spi_sh_irq, 0, "spi_sh", ss);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "request_irq error\n");
+-		goto error1;
++		return ret;
+ 	}
+ 
+ 	master->num_chipselect = 2;
+@@ -496,9 +494,6 @@ static int spi_sh_probe(struct platform_device *pdev)
+ 
+  error3:
+ 	free_irq(irq, ss);
+- error1:
+-	spi_master_put(master);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/spi/spi-sprd.c b/drivers/spi/spi-sprd.c
+index 635738f54c731..b41a75749b498 100644
+--- a/drivers/spi/spi-sprd.c
++++ b/drivers/spi/spi-sprd.c
+@@ -1010,6 +1010,7 @@ static int sprd_spi_remove(struct platform_device *pdev)
+ 
+ 	ret = pm_runtime_get_sync(ss->dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(ss->dev);
+ 		dev_err(ss->dev, "failed to resume SPI controller\n");
+ 		return ret;
+ 	}
+diff --git a/drivers/spi/spi-st-ssc4.c b/drivers/spi/spi-st-ssc4.c
+index 77d26d64541a5..6c44dda9ee8c5 100644
+--- a/drivers/spi/spi-st-ssc4.c
++++ b/drivers/spi/spi-st-ssc4.c
+@@ -375,13 +375,14 @@ static int spi_st_probe(struct platform_device *pdev)
+ 	ret = devm_spi_register_master(&pdev->dev, master);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to register master\n");
+-		goto clk_disable;
++		goto rpm_disable;
+ 	}
+ 
+ 	return 0;
+ 
+-clk_disable:
++rpm_disable:
+ 	pm_runtime_disable(&pdev->dev);
++clk_disable:
+ 	clk_disable_unprepare(spi_st->clk);
+ put_master:
+ 	spi_master_put(master);
+diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c
+index a900962b4336e..947e6b9dc9f4d 100644
+--- a/drivers/spi/spi-stm32-qspi.c
++++ b/drivers/spi/spi-stm32-qspi.c
+@@ -434,8 +434,10 @@ static int stm32_qspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
+ 	int ret;
+ 
+ 	ret = pm_runtime_get_sync(qspi->dev);
+-	if (ret < 0)
++	if (ret < 0) {
++		pm_runtime_put_noidle(qspi->dev);
+ 		return ret;
++	}
+ 
+ 	mutex_lock(&qspi->lock);
+ 	ret = stm32_qspi_send(mem, op);
+@@ -462,8 +464,10 @@ static int stm32_qspi_setup(struct spi_device *spi)
+ 		return -EINVAL;
+ 
+ 	ret = pm_runtime_get_sync(qspi->dev);
+-	if (ret < 0)
++	if (ret < 0) {
++		pm_runtime_put_noidle(qspi->dev);
+ 		return ret;
++	}
+ 
+ 	presc = DIV_ROUND_UP(qspi->clk_rate, spi->max_speed_hz) - 1;
+ 
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 2cc850eb8922d..471dedf3d3392 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -2062,6 +2062,7 @@ static int stm32_spi_resume(struct device *dev)
+ 
+ 	ret = pm_runtime_get_sync(dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(dev);
+ 		dev_err(dev, "Unable to power device:%d\n", ret);
+ 		return ret;
+ 	}
+diff --git a/drivers/spi/spi-synquacer.c b/drivers/spi/spi-synquacer.c
+index 42e82dbe3d410..8cdca6ab80989 100644
+--- a/drivers/spi/spi-synquacer.c
++++ b/drivers/spi/spi-synquacer.c
+@@ -657,7 +657,8 @@ static int synquacer_spi_probe(struct platform_device *pdev)
+ 
+ 	if (!master->max_speed_hz) {
+ 		dev_err(&pdev->dev, "missing clock source\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto disable_clk;
+ 	}
+ 	master->min_speed_hz = master->max_speed_hz / 254;
+ 
+@@ -670,7 +671,7 @@ static int synquacer_spi_probe(struct platform_device *pdev)
+ 	rx_irq = platform_get_irq(pdev, 0);
+ 	if (rx_irq <= 0) {
+ 		ret = rx_irq;
+-		goto put_spi;
++		goto disable_clk;
+ 	}
+ 	snprintf(sspi->rx_irq_name, SYNQUACER_HSSPI_IRQ_NAME_MAX, "%s-rx",
+ 		 dev_name(&pdev->dev));
+@@ -678,13 +679,13 @@ static int synquacer_spi_probe(struct platform_device *pdev)
+ 				0, sspi->rx_irq_name, sspi);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "request rx_irq failed (%d)\n", ret);
+-		goto put_spi;
++		goto disable_clk;
+ 	}
+ 
+ 	tx_irq = platform_get_irq(pdev, 1);
+ 	if (tx_irq <= 0) {
+ 		ret = tx_irq;
+-		goto put_spi;
++		goto disable_clk;
+ 	}
+ 	snprintf(sspi->tx_irq_name, SYNQUACER_HSSPI_IRQ_NAME_MAX, "%s-tx",
+ 		 dev_name(&pdev->dev));
+@@ -692,7 +693,7 @@ static int synquacer_spi_probe(struct platform_device *pdev)
+ 				0, sspi->tx_irq_name, sspi);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "request tx_irq failed (%d)\n", ret);
+-		goto put_spi;
++		goto disable_clk;
+ 	}
+ 
+ 	master->dev.of_node = np;
+@@ -710,7 +711,7 @@ static int synquacer_spi_probe(struct platform_device *pdev)
+ 
+ 	ret = synquacer_spi_enable(master);
+ 	if (ret)
+-		goto fail_enable;
++		goto disable_clk;
+ 
+ 	pm_runtime_set_active(sspi->dev);
+ 	pm_runtime_enable(sspi->dev);
+@@ -723,7 +724,7 @@ static int synquacer_spi_probe(struct platform_device *pdev)
+ 
+ disable_pm:
+ 	pm_runtime_disable(sspi->dev);
+-fail_enable:
++disable_clk:
+ 	clk_disable_unprepare(sspi->clk);
+ put_spi:
+ 	spi_master_put(master);
+diff --git a/drivers/spi/spi-tegra114.c b/drivers/spi/spi-tegra114.c
+index ca6886aaa5197..a2e5907276e7f 100644
+--- a/drivers/spi/spi-tegra114.c
++++ b/drivers/spi/spi-tegra114.c
+@@ -966,6 +966,7 @@ static int tegra_spi_setup(struct spi_device *spi)
+ 
+ 	ret = pm_runtime_get_sync(tspi->dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(tspi->dev);
+ 		dev_err(tspi->dev, "pm runtime failed, e = %d\n", ret);
+ 		if (cdata)
+ 			tegra_spi_cleanup(spi);
+@@ -1474,6 +1475,7 @@ static int tegra_spi_resume(struct device *dev)
+ 
+ 	ret = pm_runtime_get_sync(dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(dev);
+ 		dev_err(dev, "pm runtime failed, e = %d\n", ret);
+ 		return ret;
+ 	}
+diff --git a/drivers/spi/spi-tegra20-sflash.c b/drivers/spi/spi-tegra20-sflash.c
+index b59015c7c8a80..cfb7de7379376 100644
+--- a/drivers/spi/spi-tegra20-sflash.c
++++ b/drivers/spi/spi-tegra20-sflash.c
+@@ -552,6 +552,7 @@ static int tegra_sflash_resume(struct device *dev)
+ 
+ 	ret = pm_runtime_get_sync(dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(dev);
+ 		dev_err(dev, "pm runtime failed, e = %d\n", ret);
+ 		return ret;
+ 	}
+diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
+index a0810765d4e52..f7c832fd40036 100644
+--- a/drivers/spi/spi-tegra20-slink.c
++++ b/drivers/spi/spi-tegra20-slink.c
+@@ -751,6 +751,7 @@ static int tegra_slink_setup(struct spi_device *spi)
+ 
+ 	ret = pm_runtime_get_sync(tspi->dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(tspi->dev);
+ 		dev_err(tspi->dev, "pm runtime failed, e = %d\n", ret);
+ 		return ret;
+ 	}
+@@ -1188,6 +1189,7 @@ static int tegra_slink_resume(struct device *dev)
+ 
+ 	ret = pm_runtime_get_sync(dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(dev);
+ 		dev_err(dev, "pm runtime failed, e = %d\n", ret);
+ 		return ret;
+ 	}
+diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c
+index 3c41649698a5b..9417385c09217 100644
+--- a/drivers/spi/spi-ti-qspi.c
++++ b/drivers/spi/spi-ti-qspi.c
+@@ -174,6 +174,7 @@ static int ti_qspi_setup(struct spi_device *spi)
+ 
+ 	ret = pm_runtime_get_sync(qspi->dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(qspi->dev);
+ 		dev_err(qspi->dev, "pm_runtime_get_sync() failed\n");
+ 		return ret;
+ 	}
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index fc9a59788d2ea..2eaa7dbb70108 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -405,9 +405,11 @@ static int spi_drv_probe(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = sdrv->probe(spi);
+-	if (ret)
+-		dev_pm_domain_detach(dev, true);
++	if (sdrv->probe) {
++		ret = sdrv->probe(spi);
++		if (ret)
++			dev_pm_domain_detach(dev, true);
++	}
+ 
+ 	return ret;
+ }
+@@ -415,9 +417,10 @@ static int spi_drv_probe(struct device *dev)
+ static int spi_drv_remove(struct device *dev)
+ {
+ 	const struct spi_driver		*sdrv = to_spi_driver(dev->driver);
+-	int ret;
++	int ret = 0;
+ 
+-	ret = sdrv->remove(to_spi_device(dev));
++	if (sdrv->remove)
++		ret = sdrv->remove(to_spi_device(dev));
+ 	dev_pm_domain_detach(dev, true);
+ 
+ 	return ret;
+@@ -442,10 +445,8 @@ int __spi_register_driver(struct module *owner, struct spi_driver *sdrv)
+ {
+ 	sdrv->driver.owner = owner;
+ 	sdrv->driver.bus = &spi_bus_type;
+-	if (sdrv->probe)
+-		sdrv->driver.probe = spi_drv_probe;
+-	if (sdrv->remove)
+-		sdrv->driver.remove = spi_drv_remove;
++	sdrv->driver.probe = spi_drv_probe;
++	sdrv->driver.remove = spi_drv_remove;
+ 	if (sdrv->shutdown)
+ 		sdrv->driver.shutdown = spi_drv_shutdown;
+ 	return driver_register(&sdrv->driver);
+diff --git a/drivers/staging/comedi/drivers/mf6x4.c b/drivers/staging/comedi/drivers/mf6x4.c
+index ea430237efa7f..9da8dd748078d 100644
+--- a/drivers/staging/comedi/drivers/mf6x4.c
++++ b/drivers/staging/comedi/drivers/mf6x4.c
+@@ -112,8 +112,9 @@ static int mf6x4_ai_eoc(struct comedi_device *dev,
+ 	struct mf6x4_private *devpriv = dev->private;
+ 	unsigned int status;
+ 
++	/* EOLC goes low at end of conversion. */
+ 	status = ioread32(devpriv->gpioc_reg);
+-	if (status & MF6X4_GPIOC_EOLC)
++	if ((status & MF6X4_GPIOC_EOLC) == 0)
+ 		return 0;
+ 	return -EBUSY;
+ }
+diff --git a/drivers/staging/gasket/gasket_interrupt.c b/drivers/staging/gasket/gasket_interrupt.c
+index 2d6195f7300e9..864342acfd86e 100644
+--- a/drivers/staging/gasket/gasket_interrupt.c
++++ b/drivers/staging/gasket/gasket_interrupt.c
+@@ -487,14 +487,16 @@ int gasket_interrupt_system_status(struct gasket_dev *gasket_dev)
+ int gasket_interrupt_set_eventfd(struct gasket_interrupt_data *interrupt_data,
+ 				 int interrupt, int event_fd)
+ {
+-	struct eventfd_ctx *ctx = eventfd_ctx_fdget(event_fd);
+-
+-	if (IS_ERR(ctx))
+-		return PTR_ERR(ctx);
++	struct eventfd_ctx *ctx;
+ 
+ 	if (interrupt < 0 || interrupt >= interrupt_data->num_interrupts)
+ 		return -EINVAL;
+ 
++	ctx = eventfd_ctx_fdget(event_fd);
++
++	if (IS_ERR(ctx))
++		return PTR_ERR(ctx);
++
+ 	interrupt_data->eventfd_ctxs[interrupt] = ctx;
+ 	return 0;
+ }
+@@ -505,6 +507,9 @@ int gasket_interrupt_clear_eventfd(struct gasket_interrupt_data *interrupt_data,
+ 	if (interrupt < 0 || interrupt >= interrupt_data->num_interrupts)
+ 		return -EINVAL;
+ 
+-	interrupt_data->eventfd_ctxs[interrupt] = NULL;
++	if (interrupt_data->eventfd_ctxs[interrupt]) {
++		eventfd_ctx_put(interrupt_data->eventfd_ctxs[interrupt]);
++		interrupt_data->eventfd_ctxs[interrupt] = NULL;
++	}
+ 	return 0;
+ }
+diff --git a/drivers/staging/greybus/audio_codec.c b/drivers/staging/greybus/audio_codec.c
+index 494aa823e9984..42ce6c88ea753 100644
+--- a/drivers/staging/greybus/audio_codec.c
++++ b/drivers/staging/greybus/audio_codec.c
+@@ -490,6 +490,7 @@ static int gbcodec_hw_params(struct snd_pcm_substream *substream,
+ 	if (ret) {
+ 		dev_err_ratelimited(dai->dev, "%d: Error during set_config\n",
+ 				    ret);
++		gb_pm_runtime_put_noidle(bundle);
+ 		mutex_unlock(&codec->lock);
+ 		return ret;
+ 	}
+@@ -566,6 +567,7 @@ static int gbcodec_prepare(struct snd_pcm_substream *substream,
+ 		break;
+ 	}
+ 	if (ret) {
++		gb_pm_runtime_put_noidle(bundle);
+ 		mutex_unlock(&codec->lock);
+ 		dev_err_ratelimited(dai->dev, "set_data_size failed:%d\n",
+ 				    ret);
+diff --git a/drivers/staging/greybus/audio_helper.c b/drivers/staging/greybus/audio_helper.c
+index 237531ba60f30..3011b8abce389 100644
+--- a/drivers/staging/greybus/audio_helper.c
++++ b/drivers/staging/greybus/audio_helper.c
+@@ -135,7 +135,8 @@ int gbaudio_dapm_free_controls(struct snd_soc_dapm_context *dapm,
+ 		if (!w) {
+ 			dev_err(dapm->dev, "%s: widget not found\n",
+ 				widget->name);
+-			return -EINVAL;
++			widget++;
++			continue;
+ 		}
+ 		widget++;
+ #ifdef CONFIG_DEBUG_FS
+diff --git a/drivers/staging/hikey9xx/hi6421-spmi-pmic.c b/drivers/staging/hikey9xx/hi6421-spmi-pmic.c
+index 64b30d263c8d0..4f34a52829700 100644
+--- a/drivers/staging/hikey9xx/hi6421-spmi-pmic.c
++++ b/drivers/staging/hikey9xx/hi6421-spmi-pmic.c
+@@ -262,8 +262,10 @@ static int hi6421_spmi_pmic_probe(struct spmi_device *pdev)
+ 	hi6421_spmi_pmic_irq_prc(pmic);
+ 
+ 	pmic->irqs = devm_kzalloc(dev, HISI_IRQ_NUM * sizeof(int), GFP_KERNEL);
+-	if (!pmic->irqs)
++	if (!pmic->irqs) {
++		ret = -ENOMEM;
+ 		goto irq_malloc;
++	}
+ 
+ 	pmic->domain = irq_domain_add_simple(np, HISI_IRQ_NUM, 0,
+ 					     &hi6421_spmi_domain_ops, pmic);
+diff --git a/drivers/staging/media/rkisp1/rkisp1-capture.c b/drivers/staging/media/rkisp1/rkisp1-capture.c
+index b6f497ce3e95c..0c934ca5adaa3 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-capture.c
++++ b/drivers/staging/media/rkisp1/rkisp1-capture.c
+@@ -992,6 +992,7 @@ rkisp1_vb2_start_streaming(struct vb2_queue *queue, unsigned int count)
+ 
+ 	ret = pm_runtime_get_sync(cap->rkisp1->dev);
+ 	if (ret < 0) {
++		pm_runtime_put_noidle(cap->rkisp1->dev);
+ 		dev_err(cap->rkisp1->dev, "power up failed %d\n", ret);
+ 		goto err_destroy_dummy;
+ 	}
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_video.c b/drivers/staging/media/sunxi/cedrus/cedrus_video.c
+index 667b86dde1ee8..911f607d9b092 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_video.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_video.c
+@@ -479,8 +479,10 @@ static int cedrus_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 
+ 	if (V4L2_TYPE_IS_OUTPUT(vq->type)) {
+ 		ret = pm_runtime_get_sync(dev->dev);
+-		if (ret < 0)
++		if (ret < 0) {
++			pm_runtime_put_noidle(dev->dev);
+ 			goto err_cleanup;
++		}
+ 
+ 		if (dev->dec_ops[ctx->current_codec]->start) {
+ 			ret = dev->dec_ops[ctx->current_codec]->start(ctx);
+diff --git a/drivers/staging/vc04_services/vchiq-mmal/Kconfig b/drivers/staging/vc04_services/vchiq-mmal/Kconfig
+index 500c0d12e4ff2..c99525a0bb452 100644
+--- a/drivers/staging/vc04_services/vchiq-mmal/Kconfig
++++ b/drivers/staging/vc04_services/vchiq-mmal/Kconfig
+@@ -1,6 +1,6 @@
+ config BCM2835_VCHIQ_MMAL
+ 	tristate "BCM2835 MMAL VCHIQ service"
+-	depends on (ARCH_BCM2835 || COMPILE_TEST)
++	depends on BCM2835_VCHIQ
+ 	help
+ 	  Enables the MMAL API over VCHIQ interface as used for the
+ 	  majority of the multimedia services on VideoCore.
+diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
+index cc2959f22f01a..612f063c1cfcd 100644
+--- a/drivers/thermal/cpufreq_cooling.c
++++ b/drivers/thermal/cpufreq_cooling.c
+@@ -438,13 +438,11 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
+ 	if (cpufreq_cdev->cpufreq_state == state)
+ 		return 0;
+ 
+-	cpufreq_cdev->cpufreq_state = state;
+-
+ 	frequency = get_state_freq(cpufreq_cdev, state);
+ 
+ 	ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
+-
+ 	if (ret > 0) {
++		cpufreq_cdev->cpufreq_state = state;
+ 		cpus = cpufreq_cdev->policy->cpus;
+ 		max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
+ 		capacity = frequency * max_capacity;
+diff --git a/drivers/tty/serial/8250/8250_mtk.c b/drivers/tty/serial/8250/8250_mtk.c
+index fa876e2c13e5d..f7d3023f860f0 100644
+--- a/drivers/tty/serial/8250/8250_mtk.c
++++ b/drivers/tty/serial/8250/8250_mtk.c
+@@ -572,15 +572,22 @@ static int mtk8250_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(&pdev->dev);
+ 	err = mtk8250_runtime_resume(&pdev->dev);
+ 	if (err)
+-		return err;
++		goto err_pm_disable;
+ 
+ 	data->line = serial8250_register_8250_port(&uart);
+-	if (data->line < 0)
+-		return data->line;
++	if (data->line < 0) {
++		err = data->line;
++		goto err_pm_disable;
++	}
+ 
+ 	data->rx_wakeup_irq = platform_get_irq_optional(pdev, 1);
+ 
+ 	return 0;
++
++err_pm_disable:
++	pm_runtime_disable(&pdev->dev);
++
++	return err;
+ }
+ 
+ static int mtk8250_remove(struct platform_device *pdev)
+diff --git a/drivers/tty/serial/pmac_zilog.c b/drivers/tty/serial/pmac_zilog.c
+index 063484b22523a..d6aef8a1f0a48 100644
+--- a/drivers/tty/serial/pmac_zilog.c
++++ b/drivers/tty/serial/pmac_zilog.c
+@@ -1693,22 +1693,26 @@ static int __init pmz_probe(void)
+ 
+ #else
+ 
++/* On PCI PowerMacs, pmz_probe() does an explicit search of the OpenFirmware
++ * tree to obtain the device_nodes needed to start the console before the
++ * macio driver. On Macs without OpenFirmware, global platform_devices take
++ * the place of those device_nodes.
++ */
+ extern struct platform_device scc_a_pdev, scc_b_pdev;
+ 
+ static int __init pmz_init_port(struct uart_pmac_port *uap)
+ {
+-	struct resource *r_ports;
+-	int irq;
++	struct resource *r_ports, *r_irq;
+ 
+ 	r_ports = platform_get_resource(uap->pdev, IORESOURCE_MEM, 0);
+-	irq = platform_get_irq(uap->pdev, 0);
+-	if (!r_ports || irq <= 0)
++	r_irq = platform_get_resource(uap->pdev, IORESOURCE_IRQ, 0);
++	if (!r_ports || !r_irq)
+ 		return -ENODEV;
+ 
+ 	uap->port.mapbase  = r_ports->start;
+ 	uap->port.membase  = (unsigned char __iomem *) r_ports->start;
+ 	uap->port.iotype   = UPIO_MEM;
+-	uap->port.irq      = irq;
++	uap->port.irq      = r_irq->start;
+ 	uap->port.uartclk  = ZS_CLOCK;
+ 	uap->port.fifosize = 1;
+ 	uap->port.ops      = &pmz_pops;
+diff --git a/drivers/usb/host/ehci-omap.c b/drivers/usb/host/ehci-omap.c
+index 8771a2ed69268..7f4a03e8647af 100644
+--- a/drivers/usb/host/ehci-omap.c
++++ b/drivers/usb/host/ehci-omap.c
+@@ -220,6 +220,7 @@ static int ehci_hcd_omap_probe(struct platform_device *pdev)
+ 
+ err_pm_runtime:
+ 	pm_runtime_put_sync(dev);
++	pm_runtime_disable(dev);
+ 
+ err_phy:
+ 	for (i = 0; i < omap->nports; i++) {
+diff --git a/drivers/usb/host/max3421-hcd.c b/drivers/usb/host/max3421-hcd.c
+index 0894f6caccb2c..ebb8180b52ab1 100644
+--- a/drivers/usb/host/max3421-hcd.c
++++ b/drivers/usb/host/max3421-hcd.c
+@@ -1847,7 +1847,7 @@ max3421_probe(struct spi_device *spi)
+ 	struct max3421_hcd *max3421_hcd;
+ 	struct usb_hcd *hcd = NULL;
+ 	struct max3421_hcd_platform_data *pdata = NULL;
+-	int retval = -ENOMEM;
++	int retval;
+ 
+ 	if (spi_setup(spi) < 0) {
+ 		dev_err(&spi->dev, "Unable to setup SPI bus");
+@@ -1889,6 +1889,7 @@ max3421_probe(struct spi_device *spi)
+ 		goto error;
+ 	}
+ 
++	retval = -ENOMEM;
+ 	hcd = usb_create_hcd(&max3421_hcd_desc, &spi->dev,
+ 			     dev_name(&spi->dev));
+ 	if (!hcd) {
+diff --git a/drivers/usb/host/oxu210hp-hcd.c b/drivers/usb/host/oxu210hp-hcd.c
+index 27dbbe1b28b12..e832909a924fa 100644
+--- a/drivers/usb/host/oxu210hp-hcd.c
++++ b/drivers/usb/host/oxu210hp-hcd.c
+@@ -4151,8 +4151,10 @@ static struct usb_hcd *oxu_create(struct platform_device *pdev,
+ 	oxu->is_otg = otg;
+ 
+ 	ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
+-	if (ret < 0)
++	if (ret < 0) {
++		usb_put_hcd(hcd);
+ 		return ERR_PTR(ret);
++	}
+ 
+ 	device_wakeup_enable(hcd->self.controller);
+ 	return hcd;
+diff --git a/drivers/usb/serial/digi_acceleport.c b/drivers/usb/serial/digi_acceleport.c
+index 91055a191995f..0d606fa9fdca1 100644
+--- a/drivers/usb/serial/digi_acceleport.c
++++ b/drivers/usb/serial/digi_acceleport.c
+@@ -19,7 +19,6 @@
+ #include <linux/tty_flip.h>
+ #include <linux/module.h>
+ #include <linux/spinlock.h>
+-#include <linux/workqueue.h>
+ #include <linux/uaccess.h>
+ #include <linux/usb.h>
+ #include <linux/wait.h>
+@@ -198,14 +197,12 @@ struct digi_port {
+ 	int dp_throttle_restart;
+ 	wait_queue_head_t dp_flush_wait;
+ 	wait_queue_head_t dp_close_wait;	/* wait queue for close */
+-	struct work_struct dp_wakeup_work;
+ 	struct usb_serial_port *dp_port;
+ };
+ 
+ 
+ /* Local Function Declarations */
+ 
+-static void digi_wakeup_write_lock(struct work_struct *work);
+ static int digi_write_oob_command(struct usb_serial_port *port,
+ 	unsigned char *buf, int count, int interruptible);
+ static int digi_write_inb_command(struct usb_serial_port *port,
+@@ -356,26 +353,6 @@ __releases(lock)
+ 	return timeout;
+ }
+ 
+-
+-/*
+- *  Digi Wakeup Write
+- *
+- *  Wake up port, line discipline, and tty processes sleeping
+- *  on writes.
+- */
+-
+-static void digi_wakeup_write_lock(struct work_struct *work)
+-{
+-	struct digi_port *priv =
+-			container_of(work, struct digi_port, dp_wakeup_work);
+-	struct usb_serial_port *port = priv->dp_port;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&priv->dp_port_lock, flags);
+-	tty_port_tty_wakeup(&port->port);
+-	spin_unlock_irqrestore(&priv->dp_port_lock, flags);
+-}
+-
+ /*
+  *  Digi Write OOB Command
+  *
+@@ -986,6 +963,7 @@ static void digi_write_bulk_callback(struct urb *urb)
+ 	unsigned long flags;
+ 	int ret = 0;
+ 	int status = urb->status;
++	bool wakeup;
+ 
+ 	/* port and serial sanity check */
+ 	if (port == NULL || (priv = usb_get_serial_port_data(port)) == NULL) {
+@@ -1012,6 +990,7 @@ static void digi_write_bulk_callback(struct urb *urb)
+ 	}
+ 
+ 	/* try to send any buffered data on this port */
++	wakeup = true;
+ 	spin_lock_irqsave(&priv->dp_port_lock, flags);
+ 	priv->dp_write_urb_in_use = 0;
+ 	if (priv->dp_out_buf_len > 0) {
+@@ -1027,19 +1006,18 @@ static void digi_write_bulk_callback(struct urb *urb)
+ 		if (ret == 0) {
+ 			priv->dp_write_urb_in_use = 1;
+ 			priv->dp_out_buf_len = 0;
++			wakeup = false;
+ 		}
+ 	}
+-	/* wake up processes sleeping on writes immediately */
+-	tty_port_tty_wakeup(&port->port);
+-	/* also queue up a wakeup at scheduler time, in case we */
+-	/* lost the race in write_chan(). */
+-	schedule_work(&priv->dp_wakeup_work);
+-
+ 	spin_unlock_irqrestore(&priv->dp_port_lock, flags);
++
+ 	if (ret && ret != -EPERM)
+ 		dev_err_console(port,
+ 			"%s: usb_submit_urb failed, ret=%d, port=%d\n",
+ 			__func__, ret, priv->dp_port_num);
++
++	if (wakeup)
++		tty_port_tty_wakeup(&port->port);
+ }
+ 
+ static int digi_write_room(struct tty_struct *tty)
+@@ -1239,7 +1217,6 @@ static int digi_port_init(struct usb_serial_port *port, unsigned port_num)
+ 	init_waitqueue_head(&priv->dp_transmit_idle_wait);
+ 	init_waitqueue_head(&priv->dp_flush_wait);
+ 	init_waitqueue_head(&priv->dp_close_wait);
+-	INIT_WORK(&priv->dp_wakeup_work, digi_wakeup_write_lock);
+ 	priv->dp_port = port;
+ 
+ 	init_waitqueue_head(&port->write_wait);
+@@ -1508,13 +1485,14 @@ static int digi_read_oob_callback(struct urb *urb)
+ 			rts = C_CRTSCTS(tty);
+ 
+ 		if (tty && opcode == DIGI_CMD_READ_INPUT_SIGNALS) {
++			bool wakeup = false;
++
+ 			spin_lock_irqsave(&priv->dp_port_lock, flags);
+ 			/* convert from digi flags to termiox flags */
+ 			if (val & DIGI_READ_INPUT_SIGNALS_CTS) {
+ 				priv->dp_modem_signals |= TIOCM_CTS;
+-				/* port must be open to use tty struct */
+ 				if (rts)
+-					tty_port_tty_wakeup(&port->port);
++					wakeup = true;
+ 			} else {
+ 				priv->dp_modem_signals &= ~TIOCM_CTS;
+ 				/* port must be open to use tty struct */
+@@ -1533,6 +1511,9 @@ static int digi_read_oob_callback(struct urb *urb)
+ 				priv->dp_modem_signals &= ~TIOCM_CD;
+ 
+ 			spin_unlock_irqrestore(&priv->dp_port_lock, flags);
++
++			if (wakeup)
++				tty_port_tty_wakeup(&port->port);
+ 		} else if (opcode == DIGI_CMD_TRANSMIT_IDLE) {
+ 			spin_lock_irqsave(&priv->dp_port_lock, flags);
+ 			priv->dp_transmit_idle = 1;
+diff --git a/drivers/usb/serial/keyspan_pda.c b/drivers/usb/serial/keyspan_pda.c
+index c1333919716b6..39ed3ad323651 100644
+--- a/drivers/usb/serial/keyspan_pda.c
++++ b/drivers/usb/serial/keyspan_pda.c
+@@ -40,11 +40,12 @@
+ #define DRIVER_AUTHOR "Brian Warner <warner@lothar.com>"
+ #define DRIVER_DESC "USB Keyspan PDA Converter driver"
+ 
++#define KEYSPAN_TX_THRESHOLD	16
++
+ struct keyspan_pda_private {
+ 	int			tx_room;
+ 	int			tx_throttled;
+-	struct work_struct			wakeup_work;
+-	struct work_struct			unthrottle_work;
++	struct work_struct	unthrottle_work;
+ 	struct usb_serial	*serial;
+ 	struct usb_serial_port	*port;
+ };
+@@ -97,15 +98,6 @@ static const struct usb_device_id id_table_fake_xircom[] = {
+ };
+ #endif
+ 
+-static void keyspan_pda_wakeup_write(struct work_struct *work)
+-{
+-	struct keyspan_pda_private *priv =
+-		container_of(work, struct keyspan_pda_private, wakeup_work);
+-	struct usb_serial_port *port = priv->port;
+-
+-	tty_port_tty_wakeup(&port->port);
+-}
+-
+ static void keyspan_pda_request_unthrottle(struct work_struct *work)
+ {
+ 	struct keyspan_pda_private *priv =
+@@ -120,7 +112,7 @@ static void keyspan_pda_request_unthrottle(struct work_struct *work)
+ 				 7, /* request_unthrottle */
+ 				 USB_TYPE_VENDOR | USB_RECIP_INTERFACE
+ 				 | USB_DIR_OUT,
+-				 16, /* value: threshold */
++				 KEYSPAN_TX_THRESHOLD,
+ 				 0, /* index */
+ 				 NULL,
+ 				 0,
+@@ -139,6 +131,8 @@ static void keyspan_pda_rx_interrupt(struct urb *urb)
+ 	int retval;
+ 	int status = urb->status;
+ 	struct keyspan_pda_private *priv;
++	unsigned long flags;
++
+ 	priv = usb_get_serial_port_data(port);
+ 
+ 	switch (status) {
+@@ -172,18 +166,21 @@ static void keyspan_pda_rx_interrupt(struct urb *urb)
+ 		break;
+ 	case 1:
+ 		/* status interrupt */
+-		if (len < 3) {
++		if (len < 2) {
+ 			dev_warn(&port->dev, "short interrupt message received\n");
+ 			break;
+ 		}
+-		dev_dbg(&port->dev, "rx int, d1=%d, d2=%d\n", data[1], data[2]);
++		dev_dbg(&port->dev, "rx int, d1=%d\n", data[1]);
+ 		switch (data[1]) {
+ 		case 1: /* modemline change */
+ 			break;
+ 		case 2: /* tx unthrottle interrupt */
++			spin_lock_irqsave(&port->lock, flags);
+ 			priv->tx_throttled = 0;
++			priv->tx_room = max(priv->tx_room, KEYSPAN_TX_THRESHOLD);
++			spin_unlock_irqrestore(&port->lock, flags);
+ 			/* queue up a wakeup at scheduler time */
+-			schedule_work(&priv->wakeup_work);
++			usb_serial_port_softint(port);
+ 			break;
+ 		default:
+ 			break;
+@@ -443,6 +440,7 @@ static int keyspan_pda_write(struct tty_struct *tty,
+ 	int request_unthrottle = 0;
+ 	int rc = 0;
+ 	struct keyspan_pda_private *priv;
++	unsigned long flags;
+ 
+ 	priv = usb_get_serial_port_data(port);
+ 	/* guess how much room is left in the device's ring buffer, and if we
+@@ -462,13 +460,13 @@ static int keyspan_pda_write(struct tty_struct *tty,
+ 	   the TX urb is in-flight (wait until it completes)
+ 	   the device is full (wait until it says there is room)
+ 	*/
+-	spin_lock_bh(&port->lock);
++	spin_lock_irqsave(&port->lock, flags);
+ 	if (!test_bit(0, &port->write_urbs_free) || priv->tx_throttled) {
+-		spin_unlock_bh(&port->lock);
++		spin_unlock_irqrestore(&port->lock, flags);
+ 		return 0;
+ 	}
+ 	clear_bit(0, &port->write_urbs_free);
+-	spin_unlock_bh(&port->lock);
++	spin_unlock_irqrestore(&port->lock, flags);
+ 
+ 	/* At this point the URB is in our control, nobody else can submit it
+ 	   again (the only sudden transition was the one from EINPROGRESS to
+@@ -514,7 +512,8 @@ static int keyspan_pda_write(struct tty_struct *tty,
+ 			goto exit;
+ 		}
+ 	}
+-	if (count > priv->tx_room) {
++
++	if (count >= priv->tx_room) {
+ 		/* we're about to completely fill the Tx buffer, so
+ 		   we'll be throttled afterwards. */
+ 		count = priv->tx_room;
+@@ -547,7 +546,7 @@ static int keyspan_pda_write(struct tty_struct *tty,
+ 
+ 	rc = count;
+ exit:
+-	if (rc < 0)
++	if (rc <= 0)
+ 		set_bit(0, &port->write_urbs_free);
+ 	return rc;
+ }
+@@ -562,21 +561,24 @@ static void keyspan_pda_write_bulk_callback(struct urb *urb)
+ 	priv = usb_get_serial_port_data(port);
+ 
+ 	/* queue up a wakeup at scheduler time */
+-	schedule_work(&priv->wakeup_work);
++	usb_serial_port_softint(port);
+ }
+ 
+ 
+ static int keyspan_pda_write_room(struct tty_struct *tty)
+ {
+ 	struct usb_serial_port *port = tty->driver_data;
+-	struct keyspan_pda_private *priv;
+-	priv = usb_get_serial_port_data(port);
+-	/* used by n_tty.c for processing of tabs and such. Giving it our
+-	   conservative guess is probably good enough, but needs testing by
+-	   running a console through the device. */
+-	return priv->tx_room;
+-}
++	struct keyspan_pda_private *priv = usb_get_serial_port_data(port);
++	unsigned long flags;
++	int room = 0;
+ 
++	spin_lock_irqsave(&port->lock, flags);
++	if (test_bit(0, &port->write_urbs_free) && !priv->tx_throttled)
++		room = priv->tx_room;
++	spin_unlock_irqrestore(&port->lock, flags);
++
++	return room;
++}
+ 
+ static int keyspan_pda_chars_in_buffer(struct tty_struct *tty)
+ {
+@@ -656,8 +658,12 @@ error:
+ }
+ static void keyspan_pda_close(struct usb_serial_port *port)
+ {
++	struct keyspan_pda_private *priv = usb_get_serial_port_data(port);
++
+ 	usb_kill_urb(port->write_urb);
+ 	usb_kill_urb(port->interrupt_in_urb);
++
++	cancel_work_sync(&priv->unthrottle_work);
+ }
+ 
+ 
+@@ -714,7 +720,6 @@ static int keyspan_pda_port_probe(struct usb_serial_port *port)
+ 	if (!priv)
+ 		return -ENOMEM;
+ 
+-	INIT_WORK(&priv->wakeup_work, keyspan_pda_wakeup_write);
+ 	INIT_WORK(&priv->unthrottle_work, keyspan_pda_request_unthrottle);
+ 	priv->serial = port->serial;
+ 	priv->port = port;
+diff --git a/drivers/usb/serial/mos7720.c b/drivers/usb/serial/mos7720.c
+index 5eed1078fac87..5a5d2a95070ed 100644
+--- a/drivers/usb/serial/mos7720.c
++++ b/drivers/usb/serial/mos7720.c
+@@ -639,6 +639,8 @@ static void parport_mos7715_restore_state(struct parport *pp,
+ 		spin_unlock(&release_lock);
+ 		return;
+ 	}
++	mos_parport->shadowDCR = s->u.pc.ctr;
++	mos_parport->shadowECR = s->u.pc.ecr;
+ 	write_parport_reg_nonblock(mos_parport, MOS7720_DCR,
+ 				   mos_parport->shadowDCR);
+ 	write_parport_reg_nonblock(mos_parport, MOS7720_ECR,
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 1fa6fcac82992..81b932f72e103 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -464,6 +464,11 @@ static int mlx5_vdpa_poll_one(struct mlx5_vdpa_cq *vcq)
+ static void mlx5_vdpa_handle_completions(struct mlx5_vdpa_virtqueue *mvq, int num)
+ {
+ 	mlx5_cq_set_ci(&mvq->cq.mcq);
++
++	/* make sure CQ cosumer update is visible to the hardware before updating
++	 * RX doorbell record.
++	 */
++	dma_wmb();
+ 	rx_post(&mvq->vqqp, num);
+ 	if (mvq->event_cb.callback)
+ 		mvq->event_cb.callback(mvq->event_cb.private);
+diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
+index e6190173482c7..706de3ef94bbf 100644
+--- a/drivers/vfio/pci/vfio_pci.c
++++ b/drivers/vfio/pci/vfio_pci.c
+@@ -161,8 +161,6 @@ static void vfio_pci_probe_mmaps(struct vfio_pci_device *vdev)
+ 	int i;
+ 	struct vfio_pci_dummy_resource *dummy_res;
+ 
+-	INIT_LIST_HEAD(&vdev->dummy_resources_list);
+-
+ 	for (i = 0; i < PCI_STD_NUM_BARS; i++) {
+ 		int bar = i + PCI_STD_RESOURCES;
+ 
+@@ -1635,8 +1633,8 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
+ 
+ 	mutex_unlock(&vdev->vma_lock);
+ 
+-	if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+-			    vma->vm_end - vma->vm_start, vma->vm_page_prot))
++	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
++			       vma->vm_end - vma->vm_start, vma->vm_page_prot))
+ 		ret = VM_FAULT_SIGBUS;
+ 
+ up_out:
+@@ -1966,6 +1964,7 @@ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	mutex_init(&vdev->igate);
+ 	spin_lock_init(&vdev->irqlock);
+ 	mutex_init(&vdev->ioeventfds_lock);
++	INIT_LIST_HEAD(&vdev->dummy_resources_list);
+ 	INIT_LIST_HEAD(&vdev->ioeventfds_list);
+ 	mutex_init(&vdev->vma_lock);
+ 	INIT_LIST_HEAD(&vdev->vma_list);
+diff --git a/drivers/vfio/pci/vfio_pci_nvlink2.c b/drivers/vfio/pci/vfio_pci_nvlink2.c
+index 65c61710c0e9a..9adcf6a8f8885 100644
+--- a/drivers/vfio/pci/vfio_pci_nvlink2.c
++++ b/drivers/vfio/pci/vfio_pci_nvlink2.c
+@@ -231,7 +231,7 @@ int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
+ 		return -EINVAL;
+ 
+ 	if (of_property_read_u32(npu_node, "memory-region", &mem_phandle))
+-		return -EINVAL;
++		return -ENODEV;
+ 
+ 	mem_node = of_find_node_by_phandle(mem_phandle);
+ 	if (!mem_node)
+@@ -393,7 +393,7 @@ int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
+ 	int ret;
+ 	struct vfio_pci_npu2_data *data;
+ 	struct device_node *nvlink_dn;
+-	u32 nvlink_index = 0;
++	u32 nvlink_index = 0, mem_phandle = 0;
+ 	struct pci_dev *npdev = vdev->pdev;
+ 	struct device_node *npu_node = pci_device_to_OF_node(npdev);
+ 	struct pci_controller *hose = pci_bus_to_host(npdev->bus);
+@@ -408,6 +408,9 @@ int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
+ 	if (!pnv_pci_get_gpu_dev(vdev->pdev))
+ 		return -ENODEV;
+ 
++	if (of_property_read_u32(npu_node, "memory-region", &mem_phandle))
++		return -ENODEV;
++
+ 	/*
+ 	 * NPU2 normally has 8 ATSD registers (for concurrency) and 6 links
+ 	 * so we can allocate one register per link, using nvlink index as
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index 6ff8a50966915..4ce9f00ae10e8 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -1643,7 +1643,8 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
+ 			if (!vhost_vq_is_setup(vq))
+ 				continue;
+ 
+-			if (vhost_scsi_setup_vq_cmds(vq, vq->num))
++			ret = vhost_scsi_setup_vq_cmds(vq, vq->num);
++			if (ret)
+ 				goto destroy_vq_cmds;
+ 		}
+ 
+diff --git a/drivers/video/fbdev/atmel_lcdfb.c b/drivers/video/fbdev/atmel_lcdfb.c
+index 8c1d47e52b1a6..355b6120dc4f0 100644
+--- a/drivers/video/fbdev/atmel_lcdfb.c
++++ b/drivers/video/fbdev/atmel_lcdfb.c
+@@ -987,8 +987,8 @@ static int atmel_lcdfb_of_init(struct atmel_lcdfb_info *sinfo)
+ 	}
+ 
+ 	INIT_LIST_HEAD(&pdata->pwr_gpios);
+-	ret = -ENOMEM;
+ 	for (i = 0; i < gpiod_count(dev, "atmel,power-control"); i++) {
++		ret = -ENOMEM;
+ 		gpiod = devm_gpiod_get_index(dev, "atmel,power-control",
+ 					     i, GPIOD_ASIS);
+ 		if (IS_ERR(gpiod))
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index becc776979602..71e16b53e9c18 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1608,7 +1608,6 @@ static struct virtqueue *vring_create_virtqueue_packed(
+ 	vq->num_added = 0;
+ 	vq->packed_ring = true;
+ 	vq->use_dma_api = vring_use_dma_api(vdev);
+-	list_add_tail(&vq->vq.list, &vdev->vqs);
+ #ifdef DEBUG
+ 	vq->in_use = false;
+ 	vq->last_add_time_valid = false;
+@@ -1669,6 +1668,7 @@ static struct virtqueue *vring_create_virtqueue_packed(
+ 			cpu_to_le16(vq->packed.event_flags_shadow);
+ 	}
+ 
++	list_add_tail(&vq->vq.list, &vdev->vqs);
+ 	return &vq->vq;
+ 
+ err_desc_extra:
+@@ -1676,9 +1676,9 @@ err_desc_extra:
+ err_desc_state:
+ 	kfree(vq);
+ err_vq:
+-	vring_free_queue(vdev, event_size_in_bytes, device, ring_dma_addr);
++	vring_free_queue(vdev, event_size_in_bytes, device, device_event_dma_addr);
+ err_device:
+-	vring_free_queue(vdev, event_size_in_bytes, driver, ring_dma_addr);
++	vring_free_queue(vdev, event_size_in_bytes, driver, driver_event_dma_addr);
+ err_driver:
+ 	vring_free_queue(vdev, ring_size_in_bytes, ring, ring_dma_addr);
+ err_ring:
+@@ -2085,7 +2085,6 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
+ 	vq->last_used_idx = 0;
+ 	vq->num_added = 0;
+ 	vq->use_dma_api = vring_use_dma_api(vdev);
+-	list_add_tail(&vq->vq.list, &vdev->vqs);
+ #ifdef DEBUG
+ 	vq->in_use = false;
+ 	vq->last_add_time_valid = false;
+@@ -2127,6 +2126,7 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
+ 	memset(vq->split.desc_state, 0, vring.num *
+ 			sizeof(struct vring_desc_state_split));
+ 
++	list_add_tail(&vq->vq.list, &vdev->vqs);
+ 	return &vq->vq;
+ }
+ EXPORT_SYMBOL_GPL(__vring_new_virtqueue);
+diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
+index fd7968635e6df..db935d6b10c27 100644
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -386,6 +386,7 @@ config ARM_SBSA_WATCHDOG
+ config ARMADA_37XX_WATCHDOG
+ 	tristate "Armada 37xx watchdog"
+ 	depends on ARCH_MVEBU || COMPILE_TEST
++	depends on HAS_IOMEM
+ 	select MFD_SYSCON
+ 	select WATCHDOG_CORE
+ 	help
+@@ -631,7 +632,7 @@ config SUNXI_WATCHDOG
+ 
+ config COH901327_WATCHDOG
+ 	bool "ST-Ericsson COH 901 327 watchdog"
+-	depends on ARCH_U300 || (ARM && COMPILE_TEST)
++	depends on ARCH_U300 || (ARM && COMMON_CLK && COMPILE_TEST)
+ 	default y if MACH_U300
+ 	select WATCHDOG_CORE
+ 	help
+@@ -789,6 +790,7 @@ config MOXART_WDT
+ 
+ config SIRFSOC_WATCHDOG
+ 	tristate "SiRFSOC watchdog"
++	depends on HAS_IOMEM
+ 	depends on ARCH_SIRF || COMPILE_TEST
+ 	select WATCHDOG_CORE
+ 	default y
+diff --git a/drivers/watchdog/qcom-wdt.c b/drivers/watchdog/qcom-wdt.c
+index ab7465d186fda..cdf754233e53d 100644
+--- a/drivers/watchdog/qcom-wdt.c
++++ b/drivers/watchdog/qcom-wdt.c
+@@ -148,7 +148,7 @@ static int qcom_wdt_restart(struct watchdog_device *wdd, unsigned long action,
+ 	 */
+ 	wmb();
+ 
+-	msleep(150);
++	mdelay(150);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/watchdog/sprd_wdt.c b/drivers/watchdog/sprd_wdt.c
+index 65cb55f3916fc..b9b1daa9e2a4c 100644
+--- a/drivers/watchdog/sprd_wdt.c
++++ b/drivers/watchdog/sprd_wdt.c
+@@ -108,18 +108,6 @@ static int sprd_wdt_load_value(struct sprd_wdt *wdt, u32 timeout,
+ 	u32 tmr_step = timeout * SPRD_WDT_CNT_STEP;
+ 	u32 prtmr_step = pretimeout * SPRD_WDT_CNT_STEP;
+ 
+-	sprd_wdt_unlock(wdt->base);
+-	writel_relaxed((tmr_step >> SPRD_WDT_CNT_HIGH_SHIFT) &
+-		      SPRD_WDT_LOW_VALUE_MASK, wdt->base + SPRD_WDT_LOAD_HIGH);
+-	writel_relaxed((tmr_step & SPRD_WDT_LOW_VALUE_MASK),
+-		       wdt->base + SPRD_WDT_LOAD_LOW);
+-	writel_relaxed((prtmr_step >> SPRD_WDT_CNT_HIGH_SHIFT) &
+-			SPRD_WDT_LOW_VALUE_MASK,
+-		       wdt->base + SPRD_WDT_IRQ_LOAD_HIGH);
+-	writel_relaxed(prtmr_step & SPRD_WDT_LOW_VALUE_MASK,
+-		       wdt->base + SPRD_WDT_IRQ_LOAD_LOW);
+-	sprd_wdt_lock(wdt->base);
+-
+ 	/*
+ 	 * Waiting the load value operation done,
+ 	 * it needs two or three RTC clock cycles.
+@@ -134,6 +122,19 @@ static int sprd_wdt_load_value(struct sprd_wdt *wdt, u32 timeout,
+ 
+ 	if (delay_cnt >= SPRD_WDT_LOAD_TIMEOUT)
+ 		return -EBUSY;
++
++	sprd_wdt_unlock(wdt->base);
++	writel_relaxed((tmr_step >> SPRD_WDT_CNT_HIGH_SHIFT) &
++		      SPRD_WDT_LOW_VALUE_MASK, wdt->base + SPRD_WDT_LOAD_HIGH);
++	writel_relaxed((tmr_step & SPRD_WDT_LOW_VALUE_MASK),
++		       wdt->base + SPRD_WDT_LOAD_LOW);
++	writel_relaxed((prtmr_step >> SPRD_WDT_CNT_HIGH_SHIFT) &
++			SPRD_WDT_LOW_VALUE_MASK,
++		       wdt->base + SPRD_WDT_IRQ_LOAD_HIGH);
++	writel_relaxed(prtmr_step & SPRD_WDT_LOW_VALUE_MASK,
++		       wdt->base + SPRD_WDT_IRQ_LOAD_LOW);
++	sprd_wdt_lock(wdt->base);
++
+ 	return 0;
+ }
+ 
+@@ -345,15 +346,10 @@ static int __maybe_unused sprd_wdt_pm_resume(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (watchdog_active(&wdt->wdd)) {
++	if (watchdog_active(&wdt->wdd))
+ 		ret = sprd_wdt_start(&wdt->wdd);
+-		if (ret) {
+-			sprd_wdt_disable(wdt);
+-			return ret;
+-		}
+-	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static const struct dev_pm_ops sprd_wdt_pm_ops = {
+diff --git a/drivers/watchdog/watchdog_core.c b/drivers/watchdog/watchdog_core.c
+index 4238447578128..0e9a99559609c 100644
+--- a/drivers/watchdog/watchdog_core.c
++++ b/drivers/watchdog/watchdog_core.c
+@@ -267,15 +267,19 @@ static int __watchdog_register_device(struct watchdog_device *wdd)
+ 	}
+ 
+ 	if (test_bit(WDOG_STOP_ON_REBOOT, &wdd->status)) {
+-		wdd->reboot_nb.notifier_call = watchdog_reboot_notifier;
+-
+-		ret = register_reboot_notifier(&wdd->reboot_nb);
+-		if (ret) {
+-			pr_err("watchdog%d: Cannot register reboot notifier (%d)\n",
+-			       wdd->id, ret);
+-			watchdog_dev_unregister(wdd);
+-			ida_simple_remove(&watchdog_ida, id);
+-			return ret;
++		if (!wdd->ops->stop)
++			pr_warn("watchdog%d: stop_on_reboot not supported\n", wdd->id);
++		else {
++			wdd->reboot_nb.notifier_call = watchdog_reboot_notifier;
++
++			ret = register_reboot_notifier(&wdd->reboot_nb);
++			if (ret) {
++				pr_err("watchdog%d: Cannot register reboot notifier (%d)\n",
++					wdd->id, ret);
++				watchdog_dev_unregister(wdd);
++				ida_simple_remove(&watchdog_ida, id);
++				return ret;
++			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/xen/xen-pciback/xenbus.c b/drivers/xen/xen-pciback/xenbus.c
+index 4b99ec3dec58a..e7c692cfb2cf8 100644
+--- a/drivers/xen/xen-pciback/xenbus.c
++++ b/drivers/xen/xen-pciback/xenbus.c
+@@ -689,7 +689,7 @@ static int xen_pcibk_xenbus_probe(struct xenbus_device *dev,
+ 
+ 	/* watch the backend node for backend configuration information */
+ 	err = xenbus_watch_path(dev, dev->nodename, &pdev->be_watch,
+-				xen_pcibk_be_watch);
++				NULL, xen_pcibk_be_watch);
+ 	if (err)
+ 		goto out;
+ 
+diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
+index 5f5b8a7d5b80b..2a93b7c9c1599 100644
+--- a/drivers/xen/xenbus/xenbus.h
++++ b/drivers/xen/xenbus/xenbus.h
+@@ -44,6 +44,8 @@ struct xen_bus_type {
+ 	int (*get_bus_id)(char bus_id[XEN_BUS_ID_SIZE], const char *nodename);
+ 	int (*probe)(struct xen_bus_type *bus, const char *type,
+ 		     const char *dir);
++	bool (*otherend_will_handle)(struct xenbus_watch *watch,
++				     const char *path, const char *token);
+ 	void (*otherend_changed)(struct xenbus_watch *watch, const char *path,
+ 				 const char *token);
+ 	struct bus_type bus;
+diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
+index fd80e318b99cc..0cd728961fce9 100644
+--- a/drivers/xen/xenbus/xenbus_client.c
++++ b/drivers/xen/xenbus/xenbus_client.c
+@@ -127,18 +127,22 @@ EXPORT_SYMBOL_GPL(xenbus_strstate);
+  */
+ int xenbus_watch_path(struct xenbus_device *dev, const char *path,
+ 		      struct xenbus_watch *watch,
++		      bool (*will_handle)(struct xenbus_watch *,
++					  const char *, const char *),
+ 		      void (*callback)(struct xenbus_watch *,
+ 				       const char *, const char *))
+ {
+ 	int err;
+ 
+ 	watch->node = path;
++	watch->will_handle = will_handle;
+ 	watch->callback = callback;
+ 
+ 	err = register_xenbus_watch(watch);
+ 
+ 	if (err) {
+ 		watch->node = NULL;
++		watch->will_handle = NULL;
+ 		watch->callback = NULL;
+ 		xenbus_dev_fatal(dev, err, "adding watch on %s", path);
+ 	}
+@@ -165,6 +169,8 @@ EXPORT_SYMBOL_GPL(xenbus_watch_path);
+  */
+ int xenbus_watch_pathfmt(struct xenbus_device *dev,
+ 			 struct xenbus_watch *watch,
++			 bool (*will_handle)(struct xenbus_watch *,
++					const char *, const char *),
+ 			 void (*callback)(struct xenbus_watch *,
+ 					  const char *, const char *),
+ 			 const char *pathfmt, ...)
+@@ -181,7 +187,7 @@ int xenbus_watch_pathfmt(struct xenbus_device *dev,
+ 		xenbus_dev_fatal(dev, -ENOMEM, "allocating path for watch");
+ 		return -ENOMEM;
+ 	}
+-	err = xenbus_watch_path(dev, path, watch, callback);
++	err = xenbus_watch_path(dev, path, watch, will_handle, callback);
+ 
+ 	if (err)
+ 		kfree(path);
+diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
+index 38725d97d9093..44634d970a5ca 100644
+--- a/drivers/xen/xenbus/xenbus_probe.c
++++ b/drivers/xen/xenbus/xenbus_probe.c
+@@ -136,6 +136,7 @@ static int watch_otherend(struct xenbus_device *dev)
+ 		container_of(dev->dev.bus, struct xen_bus_type, bus);
+ 
+ 	return xenbus_watch_pathfmt(dev, &dev->otherend_watch,
++				    bus->otherend_will_handle,
+ 				    bus->otherend_changed,
+ 				    "%s/%s", dev->otherend, "state");
+ }
+diff --git a/drivers/xen/xenbus/xenbus_probe_backend.c b/drivers/xen/xenbus/xenbus_probe_backend.c
+index 2ba699897e6dd..5abded97e1a7e 100644
+--- a/drivers/xen/xenbus/xenbus_probe_backend.c
++++ b/drivers/xen/xenbus/xenbus_probe_backend.c
+@@ -180,6 +180,12 @@ static int xenbus_probe_backend(struct xen_bus_type *bus, const char *type,
+ 	return err;
+ }
+ 
++static bool frontend_will_handle(struct xenbus_watch *watch,
++				 const char *path, const char *token)
++{
++	return watch->nr_pending == 0;
++}
++
+ static void frontend_changed(struct xenbus_watch *watch,
+ 			     const char *path, const char *token)
+ {
+@@ -191,6 +197,7 @@ static struct xen_bus_type xenbus_backend = {
+ 	.levels = 3,		/* backend/type/<frontend>/<id> */
+ 	.get_bus_id = backend_bus_id,
+ 	.probe = xenbus_probe_backend,
++	.otherend_will_handle = frontend_will_handle,
+ 	.otherend_changed = frontend_changed,
+ 	.bus = {
+ 		.name		= "xen-backend",
+diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
+index 3a06eb699f333..12e02eb01f599 100644
+--- a/drivers/xen/xenbus/xenbus_xs.c
++++ b/drivers/xen/xenbus/xenbus_xs.c
+@@ -705,9 +705,13 @@ int xs_watch_msg(struct xs_watch_event *event)
+ 
+ 	spin_lock(&watches_lock);
+ 	event->handle = find_watch(event->token);
+-	if (event->handle != NULL) {
++	if (event->handle != NULL &&
++			(!event->handle->will_handle ||
++			 event->handle->will_handle(event->handle,
++				 event->path, event->token))) {
+ 		spin_lock(&watch_events_lock);
+ 		list_add_tail(&event->list, &watch_events);
++		event->handle->nr_pending++;
+ 		wake_up(&watch_events_waitq);
+ 		spin_unlock(&watch_events_lock);
+ 	} else
+@@ -765,6 +769,8 @@ int register_xenbus_watch(struct xenbus_watch *watch)
+ 
+ 	sprintf(token, "%lX", (long)watch);
+ 
++	watch->nr_pending = 0;
++
+ 	down_read(&xs_watch_rwsem);
+ 
+ 	spin_lock(&watches_lock);
+@@ -814,11 +820,14 @@ void unregister_xenbus_watch(struct xenbus_watch *watch)
+ 
+ 	/* Cancel pending watch events. */
+ 	spin_lock(&watch_events_lock);
+-	list_for_each_entry_safe(event, tmp, &watch_events, list) {
+-		if (event->handle != watch)
+-			continue;
+-		list_del(&event->list);
+-		kfree(event);
++	if (watch->nr_pending) {
++		list_for_each_entry_safe(event, tmp, &watch_events, list) {
++			if (event->handle != watch)
++				continue;
++			list_del(&event->list);
++			kfree(event);
++		}
++		watch->nr_pending = 0;
+ 	}
+ 	spin_unlock(&watch_events_lock);
+ 
+@@ -865,7 +874,6 @@ void xs_suspend_cancel(void)
+ 
+ static int xenwatch_thread(void *unused)
+ {
+-	struct list_head *ent;
+ 	struct xs_watch_event *event;
+ 
+ 	xenwatch_pid = current->pid;
+@@ -880,13 +888,15 @@ static int xenwatch_thread(void *unused)
+ 		mutex_lock(&xenwatch_mutex);
+ 
+ 		spin_lock(&watch_events_lock);
+-		ent = watch_events.next;
+-		if (ent != &watch_events)
+-			list_del(ent);
++		event = list_first_entry_or_null(&watch_events,
++				struct xs_watch_event, list);
++		if (event) {
++			list_del(&event->list);
++			event->handle->nr_pending--;
++		}
+ 		spin_unlock(&watch_events_lock);
+ 
+-		if (ent != &watch_events) {
+-			event = list_entry(ent, struct xs_watch_event, list);
++		if (event) {
+ 			event->handle->callback(event->handle, event->path,
+ 						event->token);
+ 			kfree(event);
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 0b29bdb251050..62461239600fc 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -2593,7 +2593,6 @@ int btrfs_free_reserved_extent(struct btrfs_fs_info *fs_info,
+ 			       u64 start, u64 len, int delalloc);
+ int btrfs_pin_reserved_extent(struct btrfs_trans_handle *trans, u64 start,
+ 			      u64 len);
+-void btrfs_prepare_extent_commit(struct btrfs_fs_info *fs_info);
+ int btrfs_finish_extent_commit(struct btrfs_trans_handle *trans);
+ int btrfs_inc_extent_ref(struct btrfs_trans_handle *trans,
+ 			 struct btrfs_ref *generic_ref);
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 5fd60b13f4f83..4209dbd6286e4 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -2730,31 +2730,6 @@ btrfs_inc_block_group_reservations(struct btrfs_block_group *bg)
+ 	atomic_inc(&bg->reservations);
+ }
+ 
+-void btrfs_prepare_extent_commit(struct btrfs_fs_info *fs_info)
+-{
+-	struct btrfs_caching_control *next;
+-	struct btrfs_caching_control *caching_ctl;
+-	struct btrfs_block_group *cache;
+-
+-	down_write(&fs_info->commit_root_sem);
+-
+-	list_for_each_entry_safe(caching_ctl, next,
+-				 &fs_info->caching_block_groups, list) {
+-		cache = caching_ctl->block_group;
+-		if (btrfs_block_group_done(cache)) {
+-			cache->last_byte_to_unpin = (u64)-1;
+-			list_del_init(&caching_ctl->list);
+-			btrfs_put_caching_control(caching_ctl);
+-		} else {
+-			cache->last_byte_to_unpin = caching_ctl->progress;
+-		}
+-	}
+-
+-	up_write(&fs_info->commit_root_sem);
+-
+-	btrfs_update_global_block_rsv(fs_info);
+-}
+-
+ /*
+  * Returns the free cluster for the given space info and sets empty_cluster to
+  * what it should be based on the mount options.
+@@ -2816,10 +2791,10 @@ static int unpin_extent_range(struct btrfs_fs_info *fs_info,
+ 		len = cache->start + cache->length - start;
+ 		len = min(len, end + 1 - start);
+ 
+-		if (start < cache->last_byte_to_unpin) {
+-			len = min(len, cache->last_byte_to_unpin - start);
+-			if (return_free_space)
+-				btrfs_add_free_space(cache, start, len);
++		if (start < cache->last_byte_to_unpin && return_free_space) {
++			u64 add_len = min(len, cache->last_byte_to_unpin - start);
++
++			btrfs_add_free_space(cache, start, add_len);
+ 		}
+ 
+ 		start += len;
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 69a384145dc6f..e8ca229a216be 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -1275,6 +1275,7 @@ static int cluster_pages_for_defrag(struct inode *inode,
+ 	u64 page_end;
+ 	u64 page_cnt;
+ 	u64 start = (u64)start_index << PAGE_SHIFT;
++	u64 search_start;
+ 	int ret;
+ 	int i;
+ 	int i_done;
+@@ -1371,6 +1372,40 @@ again:
+ 
+ 	lock_extent_bits(&BTRFS_I(inode)->io_tree,
+ 			 page_start, page_end - 1, &cached_state);
++
++	/*
++	 * When defragmenting we skip ranges that have holes or inline extents,
++	 * (check should_defrag_range()), to avoid unnecessary IO and wasting
++	 * space. At btrfs_defrag_file(), we check if a range should be defragged
++	 * before locking the inode and then, if it should, we trigger a sync
++	 * page cache readahead - we lock the inode only after that to avoid
++	 * blocking for too long other tasks that possibly want to operate on
++	 * other file ranges. But before we were able to get the inode lock,
++	 * some other task may have punched a hole in the range, or we may have
++	 * now an inline extent, in which case we should not defrag. So check
++	 * for that here, where we have the inode and the range locked, and bail
++	 * out if that happened.
++	 */
++	search_start = page_start;
++	while (search_start < page_end) {
++		struct extent_map *em;
++
++		em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, search_start,
++				      page_end - search_start);
++		if (IS_ERR(em)) {
++			ret = PTR_ERR(em);
++			goto out_unlock_range;
++		}
++		if (em->block_start >= EXTENT_MAP_LAST_BYTE) {
++			free_extent_map(em);
++			/* Ok, 0 means we did not defrag anything */
++			ret = 0;
++			goto out_unlock_range;
++		}
++		search_start = extent_map_end(em);
++		free_extent_map(em);
++	}
++
+ 	clear_extent_bit(&BTRFS_I(inode)->io_tree, page_start,
+ 			  page_end - 1, EXTENT_DELALLOC | EXTENT_DO_ACCOUNTING |
+ 			  EXTENT_DEFRAG, 0, 0, &cached_state);
+@@ -1401,6 +1436,10 @@ again:
+ 	btrfs_delalloc_release_extents(BTRFS_I(inode), page_cnt << PAGE_SHIFT);
+ 	extent_changeset_free(data_reserved);
+ 	return i_done;
++
++out_unlock_range:
++	unlock_extent_cached(&BTRFS_I(inode)->io_tree,
++			     page_start, page_end - 1, &cached_state);
+ out:
+ 	for (i = 0; i < i_done; i++) {
+ 		unlock_page(pages[i]);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 52ada47aff50d..96dbfc011f45d 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -155,6 +155,7 @@ static noinline void switch_commit_roots(struct btrfs_trans_handle *trans)
+ 	struct btrfs_transaction *cur_trans = trans->transaction;
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+ 	struct btrfs_root *root, *tmp;
++	struct btrfs_caching_control *caching_ctl, *next;
+ 
+ 	down_write(&fs_info->commit_root_sem);
+ 	list_for_each_entry_safe(root, tmp, &cur_trans->switch_commits,
+@@ -180,6 +181,45 @@ static noinline void switch_commit_roots(struct btrfs_trans_handle *trans)
+ 		spin_lock(&cur_trans->dropped_roots_lock);
+ 	}
+ 	spin_unlock(&cur_trans->dropped_roots_lock);
++
++	/*
++	 * We have to update the last_byte_to_unpin under the commit_root_sem,
++	 * at the same time we swap out the commit roots.
++	 *
++	 * This is because we must have a real view of the last spot the caching
++	 * kthreads were while caching.  Consider the following views of the
++	 * extent tree for a block group
++	 *
++	 * commit root
++	 * +----+----+----+----+----+----+----+
++	 * |\\\\|    |\\\\|\\\\|    |\\\\|\\\\|
++	 * +----+----+----+----+----+----+----+
++	 * 0    1    2    3    4    5    6    7
++	 *
++	 * new commit root
++	 * +----+----+----+----+----+----+----+
++	 * |    |    |    |\\\\|    |    |\\\\|
++	 * +----+----+----+----+----+----+----+
++	 * 0    1    2    3    4    5    6    7
++	 *
++	 * If the cache_ctl->progress was at 3, then we are only allowed to
++	 * unpin [0,1) and [2,3], because the caching thread has already
++	 * processed those extents.  We are not allowed to unpin [5,6), because
++	 * the caching thread will re-start it's search from 3, and thus find
++	 * the hole from [4,6) to add to the free space cache.
++	 */
++	list_for_each_entry_safe(caching_ctl, next,
++				 &fs_info->caching_block_groups, list) {
++		struct btrfs_block_group *cache = caching_ctl->block_group;
++
++		if (btrfs_block_group_done(cache)) {
++			cache->last_byte_to_unpin = (u64)-1;
++			list_del_init(&caching_ctl->list);
++			btrfs_put_caching_control(caching_ctl);
++		} else {
++			cache->last_byte_to_unpin = caching_ctl->progress;
++		}
++	}
+ 	up_write(&fs_info->commit_root_sem);
+ }
+ 
+@@ -2293,8 +2333,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 		goto unlock_tree_log;
+ 	}
+ 
+-	btrfs_prepare_extent_commit(fs_info);
+-
+ 	cur_trans = fs_info->running_transaction;
+ 
+ 	btrfs_set_root_node(&fs_info->tree_root->root_item,
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index ded4229c314a0..2b200b5a44c3a 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1140,12 +1140,19 @@ void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release)
+ {
+ 	struct ceph_mds_session *session = cap->session;
+ 	struct ceph_inode_info *ci = cap->ci;
+-	struct ceph_mds_client *mdsc =
+-		ceph_sb_to_client(ci->vfs_inode.i_sb)->mdsc;
++	struct ceph_mds_client *mdsc;
+ 	int removed = 0;
+ 
++	/* 'ci' being NULL means the remove have already occurred */
++	if (!ci) {
++		dout("%s: cap inode is NULL\n", __func__);
++		return;
++	}
++
+ 	dout("__ceph_remove_cap %p from %p\n", cap, &ci->vfs_inode);
+ 
++	mdsc = ceph_inode_to_client(&ci->vfs_inode)->mdsc;
++
+ 	/* remove from inode's cap rbtree, and clear auth cap */
+ 	rb_erase(&cap->ci_node, &ci->i_caps);
+ 	if (ci->i_auth_cap == cap) {
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index d88e2683626e7..2da6b41cb5526 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -94,6 +94,8 @@ static const __le16 smb2_rsp_struct_sizes[NUMBER_OF_SMB2_COMMANDS] = {
+ 	/* SMB2_OPLOCK_BREAK */ cpu_to_le16(24)
+ };
+ 
++#define SMB311_NEGPROT_BASE_SIZE (sizeof(struct smb2_sync_hdr) + sizeof(struct smb2_negotiate_rsp))
++
+ static __u32 get_neg_ctxt_len(struct smb2_sync_hdr *hdr, __u32 len,
+ 			      __u32 non_ctxlen)
+ {
+@@ -109,11 +111,17 @@ static __u32 get_neg_ctxt_len(struct smb2_sync_hdr *hdr, __u32 len,
+ 
+ 	/* Make sure that negotiate contexts start after gss security blob */
+ 	nc_offset = le32_to_cpu(pneg_rsp->NegotiateContextOffset);
+-	if (nc_offset < non_ctxlen) {
+-		pr_warn_once("Invalid negotiate context offset\n");
++	if (nc_offset + 1 < non_ctxlen) {
++		pr_warn_once("Invalid negotiate context offset %d\n", nc_offset);
+ 		return 0;
+-	}
+-	size_of_pad_before_neg_ctxts = nc_offset - non_ctxlen;
++	} else if (nc_offset + 1 == non_ctxlen) {
++		cifs_dbg(FYI, "no SPNEGO security blob in negprot rsp\n");
++		size_of_pad_before_neg_ctxts = 0;
++	} else if (non_ctxlen == SMB311_NEGPROT_BASE_SIZE)
++		/* has padding, but no SPNEGO blob */
++		size_of_pad_before_neg_ctxts = nc_offset - non_ctxlen + 1;
++	else
++		size_of_pad_before_neg_ctxts = nc_offset - non_ctxlen;
+ 
+ 	/* Verify that at least minimal negotiate contexts fit within frame */
+ 	if (len < nc_offset + (neg_count * sizeof(struct smb2_neg_context))) {
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 3d914d7d0d110..22f1d8dc12b00 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -477,7 +477,8 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+ 		goto out;
+ 	}
+ 
+-	if (bytes_left || p->Next)
++	/* Azure rounds the buffer size up 8, to a 16 byte boundary */
++	if ((bytes_left > 8) || p->Next)
+ 		cifs_dbg(VFS, "%s: incomplete interface info\n", __func__);
+ 
+ 
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index acb72705062dd..fc06c762fbbf6 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -427,8 +427,8 @@ build_preauth_ctxt(struct smb2_preauth_neg_context *pneg_ctxt)
+ 	pneg_ctxt->ContextType = SMB2_PREAUTH_INTEGRITY_CAPABILITIES;
+ 	pneg_ctxt->DataLength = cpu_to_le16(38);
+ 	pneg_ctxt->HashAlgorithmCount = cpu_to_le16(1);
+-	pneg_ctxt->SaltLength = cpu_to_le16(SMB311_SALT_SIZE);
+-	get_random_bytes(pneg_ctxt->Salt, SMB311_SALT_SIZE);
++	pneg_ctxt->SaltLength = cpu_to_le16(SMB311_LINUX_CLIENT_SALT_SIZE);
++	get_random_bytes(pneg_ctxt->Salt, SMB311_LINUX_CLIENT_SALT_SIZE);
+ 	pneg_ctxt->HashAlgorithms = SMB2_PREAUTH_INTEGRITY_SHA512;
+ }
+ 
+@@ -566,6 +566,9 @@ static void decode_preauth_context(struct smb2_preauth_neg_context *ctxt)
+ 	if (len < MIN_PREAUTH_CTXT_DATA_LEN) {
+ 		pr_warn_once("server sent bad preauth context\n");
+ 		return;
++	} else if (len < MIN_PREAUTH_CTXT_DATA_LEN + le16_to_cpu(ctxt->SaltLength)) {
++		pr_warn_once("server sent invalid SaltLength\n");
++		return;
+ 	}
+ 	if (le16_to_cpu(ctxt->HashAlgorithmCount) != 1)
+ 		pr_warn_once("Invalid SMB3 hash algorithm count\n");
+diff --git a/fs/cifs/smb2pdu.h b/fs/cifs/smb2pdu.h
+index fa57b03ca98c4..204a622b89ed3 100644
+--- a/fs/cifs/smb2pdu.h
++++ b/fs/cifs/smb2pdu.h
+@@ -333,12 +333,20 @@ struct smb2_neg_context {
+ 	/* Followed by array of data */
+ } __packed;
+ 
+-#define SMB311_SALT_SIZE			32
++#define SMB311_LINUX_CLIENT_SALT_SIZE			32
+ /* Hash Algorithm Types */
+ #define SMB2_PREAUTH_INTEGRITY_SHA512	cpu_to_le16(0x0001)
+ #define SMB2_PREAUTH_HASH_SIZE 64
+ 
+-#define MIN_PREAUTH_CTXT_DATA_LEN	(SMB311_SALT_SIZE + 6)
++/*
++ * SaltLength that the server send can be zero, so the only three required
++ * fields (all __le16) end up six bytes total, so the minimum context data len
++ * in the response is six bytes which accounts for
++ *
++ *      HashAlgorithmCount, SaltLength, and 1 HashAlgorithm.
++ */
++#define MIN_PREAUTH_CTXT_DATA_LEN 6
++
+ struct smb2_preauth_neg_context {
+ 	__le16	ContextType; /* 1 */
+ 	__le16	DataLength;
+@@ -346,7 +354,7 @@ struct smb2_preauth_neg_context {
+ 	__le16	HashAlgorithmCount; /* 1 */
+ 	__le16	SaltLength;
+ 	__le16	HashAlgorithms; /* HashAlgorithms[0] since only one defined */
+-	__u8	Salt[SMB311_SALT_SIZE];
++	__u8	Salt[SMB311_LINUX_CLIENT_SALT_SIZE];
+ } __packed;
+ 
+ /* Encryption Algorithms Ciphers */
+diff --git a/fs/erofs/data.c b/fs/erofs/data.c
+index 347be146884c3..ea4f693bee224 100644
+--- a/fs/erofs/data.c
++++ b/fs/erofs/data.c
+@@ -312,27 +312,12 @@ static void erofs_raw_access_readahead(struct readahead_control *rac)
+ 		submit_bio(bio);
+ }
+ 
+-static int erofs_get_block(struct inode *inode, sector_t iblock,
+-			   struct buffer_head *bh, int create)
+-{
+-	struct erofs_map_blocks map = {
+-		.m_la = iblock << 9,
+-	};
+-	int err;
+-
+-	err = erofs_map_blocks(inode, &map, EROFS_GET_BLOCKS_RAW);
+-	if (err)
+-		return err;
+-
+-	if (map.m_flags & EROFS_MAP_MAPPED)
+-		bh->b_blocknr = erofs_blknr(map.m_pa);
+-
+-	return err;
+-}
+-
+ static sector_t erofs_bmap(struct address_space *mapping, sector_t block)
+ {
+ 	struct inode *inode = mapping->host;
++	struct erofs_map_blocks map = {
++		.m_la = blknr_to_addr(block),
++	};
+ 
+ 	if (EROFS_I(inode)->datalayout == EROFS_INODE_FLAT_INLINE) {
+ 		erofs_blk_t blks = i_size_read(inode) >> LOG_BLOCK_SIZE;
+@@ -341,7 +326,10 @@ static sector_t erofs_bmap(struct address_space *mapping, sector_t block)
+ 			return 0;
+ 	}
+ 
+-	return generic_block_bmap(mapping, block, erofs_get_block);
++	if (!erofs_map_blocks(inode, &map, EROFS_GET_BLOCKS_RAW))
++		return erofs_blknr(map.m_pa);
++
++	return 0;
+ }
+ 
+ /* for uncompressed (aligned) files and raw access for other files */
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 4df61129566d4..117b1c395ae4a 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1902,23 +1902,30 @@ fetch_events:
+ 		}
+ 		write_unlock_irq(&ep->lock);
+ 
+-		if (eavail || res)
+-			break;
++		if (!eavail && !res)
++			timed_out = !schedule_hrtimeout_range(to, slack,
++							      HRTIMER_MODE_ABS);
+ 
+-		if (!schedule_hrtimeout_range(to, slack, HRTIMER_MODE_ABS)) {
+-			timed_out = 1;
+-			break;
+-		}
+-
+-		/* We were woken up, thus go and try to harvest some events */
++		/*
++		 * We were woken up, thus go and try to harvest some events.
++		 * If timed out and still on the wait queue, recheck eavail
++		 * carefully under lock, below.
++		 */
+ 		eavail = 1;
+-
+ 	} while (0);
+ 
+ 	__set_current_state(TASK_RUNNING);
+ 
+ 	if (!list_empty_careful(&wait.entry)) {
+ 		write_lock_irq(&ep->lock);
++		/*
++		 * If the thread timed out and is not on the wait queue, it
++		 * means that the thread was woken up after its timeout expired
++		 * before it could reacquire the lock. Thus, when wait.entry is
++		 * empty, it needs to harvest events.
++		 */
++		if (timed_out)
++			eavail = list_empty(&wait.entry);
+ 		__remove_wait_queue(&ep->wq, &wait);
+ 		write_unlock_irq(&ep->lock);
+ 	}
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 17d7096b3212d..12eac88373032 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -5815,8 +5815,8 @@ int ext4_ext_replay_update_ex(struct inode *inode, ext4_lblk_t start,
+ 	int ret;
+ 
+ 	path = ext4_find_extent(inode, start, NULL, 0);
+-	if (!path)
+-		return -EINVAL;
++	if (IS_ERR(path))
++		return PTR_ERR(path);
+ 	ex = path[path->p_depth].p_ext;
+ 	if (!ex) {
+ 		ret = -EFSCORRUPTED;
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 0d8385aea8981..0afab6d5c65bd 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -175,6 +175,7 @@ void ext4_evict_inode(struct inode *inode)
+ 	 */
+ 	int extra_credits = 6;
+ 	struct ext4_xattr_inode_array *ea_inode_array = NULL;
++	bool freeze_protected = false;
+ 
+ 	trace_ext4_evict_inode(inode);
+ 
+@@ -232,9 +233,14 @@ void ext4_evict_inode(struct inode *inode)
+ 
+ 	/*
+ 	 * Protect us against freezing - iput() caller didn't have to have any
+-	 * protection against it
++	 * protection against it. When we are in a running transaction though,
++	 * we are already protected against freezing and we cannot grab further
++	 * protection due to lock ordering constraints.
+ 	 */
+-	sb_start_intwrite(inode->i_sb);
++	if (!ext4_journal_current_handle()) {
++		sb_start_intwrite(inode->i_sb);
++		freeze_protected = true;
++	}
+ 
+ 	if (!IS_NOQUOTA(inode))
+ 		extra_credits += EXT4_MAXQUOTAS_DEL_BLOCKS(inode->i_sb);
+@@ -253,7 +259,8 @@ void ext4_evict_inode(struct inode *inode)
+ 		 * cleaned up.
+ 		 */
+ 		ext4_orphan_del(NULL, inode);
+-		sb_end_intwrite(inode->i_sb);
++		if (freeze_protected)
++			sb_end_intwrite(inode->i_sb);
+ 		goto no_delete;
+ 	}
+ 
+@@ -294,7 +301,8 @@ void ext4_evict_inode(struct inode *inode)
+ stop_handle:
+ 		ext4_journal_stop(handle);
+ 		ext4_orphan_del(NULL, inode);
+-		sb_end_intwrite(inode->i_sb);
++		if (freeze_protected)
++			sb_end_intwrite(inode->i_sb);
+ 		ext4_xattr_inode_array_free(ea_inode_array);
+ 		goto no_delete;
+ 	}
+@@ -323,7 +331,8 @@ stop_handle:
+ 	else
+ 		ext4_free_inode(handle, inode);
+ 	ext4_journal_stop(handle);
+-	sb_end_intwrite(inode->i_sb);
++	if (freeze_protected)
++		sb_end_intwrite(inode->i_sb);
+ 	ext4_xattr_inode_array_free(ea_inode_array);
+ 	return;
+ no_delete:
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 24af9ed5c3e52..37a619bf1ac7c 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -5126,6 +5126,7 @@ ext4_mb_free_metadata(handle_t *handle, struct ext4_buddy *e4b,
+ 				ext4_group_first_block_no(sb, group) +
+ 				EXT4_C2B(sbi, cluster),
+ 				"Block already on to-be-freed list");
++			kmem_cache_free(ext4_free_data_cachep, new_entry);
+ 			return 0;
+ 		}
+ 	}
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 94472044f4c1d..2b08b162075c3 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -666,19 +666,17 @@ static bool system_going_down(void)
+ 
+ static void ext4_handle_error(struct super_block *sb)
+ {
++	journal_t *journal = EXT4_SB(sb)->s_journal;
++
+ 	if (test_opt(sb, WARN_ON_ERROR))
+ 		WARN_ON_ONCE(1);
+ 
+-	if (sb_rdonly(sb))
++	if (sb_rdonly(sb) || test_opt(sb, ERRORS_CONT))
+ 		return;
+ 
+-	if (!test_opt(sb, ERRORS_CONT)) {
+-		journal_t *journal = EXT4_SB(sb)->s_journal;
+-
+-		ext4_set_mount_flag(sb, EXT4_MF_FS_ABORTED);
+-		if (journal)
+-			jbd2_journal_abort(journal, -EIO);
+-	}
++	ext4_set_mount_flag(sb, EXT4_MF_FS_ABORTED);
++	if (journal)
++		jbd2_journal_abort(journal, -EIO);
+ 	/*
+ 	 * We force ERRORS_RO behavior when system is rebooting. Otherwise we
+ 	 * could panic during 'reboot -f' as the underlying device got already
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index d5d8ce077f295..42394de6c7eb1 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -109,7 +109,7 @@ static void clear_node_page_dirty(struct page *page)
+ 
+ static struct page *get_current_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
+ {
+-	return f2fs_get_meta_page(sbi, current_nat_addr(sbi, nid));
++	return f2fs_get_meta_page_retry(sbi, current_nat_addr(sbi, nid));
+ }
+ 
+ static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 00eff2f518079..fef22e476c526 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -3918,6 +3918,7 @@ free_bio_info:
+ 
+ #ifdef CONFIG_UNICODE
+ 	utf8_unload(sb->s_encoding);
++	sb->s_encoding = NULL;
+ #endif
+ free_options:
+ #ifdef CONFIG_QUOTA
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index 21a9e534417c0..d2c0e58c6416f 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -1464,6 +1464,8 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ 	if (!sb->s_root) {
+ 		err = virtio_fs_fill_super(sb, fsc);
+ 		if (err) {
++			fuse_mount_put(fm);
++			sb->s_fs_info = NULL;
+ 			deactivate_locked_super(sb);
+ 			return err;
+ 		}
+diff --git a/fs/inode.c b/fs/inode.c
+index 9d78c37b00b81..5eea9912a0b9d 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -1627,7 +1627,9 @@ static void iput_final(struct inode *inode)
+ 	else
+ 		drop = generic_drop_inode(inode);
+ 
+-	if (!drop && (sb->s_flags & SB_ACTIVE)) {
++	if (!drop &&
++	    !(inode->i_state & I_DONTCACHE) &&
++	    (sb->s_flags & SB_ACTIVE)) {
+ 		inode_add_lru(inode);
+ 		spin_unlock(&inode->i_lock);
+ 		return;
+diff --git a/fs/io-wq.h b/fs/io-wq.h
+index cba36f03c3555..aaa363f358916 100644
+--- a/fs/io-wq.h
++++ b/fs/io-wq.h
+@@ -59,6 +59,7 @@ static inline void wq_list_add_tail(struct io_wq_work_node *node,
+ 		list->last->next = node;
+ 		list->last = node;
+ 	}
++	node->next = NULL;
+ }
+ 
+ static inline void wq_list_cut(struct io_wq_work_list *list,
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 86dac2b2e2763..0fcd065baa760 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1641,10 +1641,6 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+ 
+ 	spin_lock_irqsave(&ctx->completion_lock, flags);
+ 
+-	/* if force is set, the ring is going away. always drop after that */
+-	if (force)
+-		ctx->cq_overflow_flushed = 1;
+-
+ 	cqe = NULL;
+ 	list_for_each_entry_safe(req, tmp, &ctx->cq_overflow_list, compl.list) {
+ 		if (tsk && req->task != tsk)
+@@ -2246,7 +2242,7 @@ static unsigned io_cqring_events(struct io_ring_ctx *ctx, bool noflush)
+ 		 * we wake up the task, and the next invocation will flush the
+ 		 * entries. We cannot safely to it from here.
+ 		 */
+-		if (noflush && !list_empty(&ctx->cq_overflow_list))
++		if (noflush)
+ 			return -1U;
+ 
+ 		io_cqring_overflow_flush(ctx, false, NULL, NULL);
+@@ -3052,9 +3048,7 @@ static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
+ 		iov[0].iov_len = kbuf->len;
+ 		return 0;
+ 	}
+-	if (!req->rw.len)
+-		return 0;
+-	else if (req->rw.len > 1)
++	if (req->rw.len != 1)
+ 		return -EINVAL;
+ 
+ #ifdef CONFIG_COMPAT
+@@ -3948,11 +3942,17 @@ static int io_remove_buffers(struct io_kiocb *req, bool force_nonblock,
+ 	head = idr_find(&ctx->io_buffer_idr, p->bgid);
+ 	if (head)
+ 		ret = __io_remove_buffers(ctx, head, p->bgid, p->nbufs);
+-
+-	io_ring_submit_lock(ctx, !force_nonblock);
+ 	if (ret < 0)
+ 		req_set_fail_links(req);
+-	__io_req_complete(req, ret, 0, cs);
++
++	/* need to hold the lock to complete IOPOLL requests */
++	if (ctx->flags & IORING_SETUP_IOPOLL) {
++		__io_req_complete(req, ret, 0, cs);
++		io_ring_submit_unlock(ctx, !force_nonblock);
++	} else {
++		io_ring_submit_unlock(ctx, !force_nonblock);
++		__io_req_complete(req, ret, 0, cs);
++	}
+ 	return 0;
+ }
+ 
+@@ -4037,10 +4037,17 @@ static int io_provide_buffers(struct io_kiocb *req, bool force_nonblock,
+ 		}
+ 	}
+ out:
+-	io_ring_submit_unlock(ctx, !force_nonblock);
+ 	if (ret < 0)
+ 		req_set_fail_links(req);
+-	__io_req_complete(req, ret, 0, cs);
++
++	/* need to hold the lock to complete IOPOLL requests */
++	if (ctx->flags & IORING_SETUP_IOPOLL) {
++		__io_req_complete(req, ret, 0, cs);
++		io_ring_submit_unlock(ctx, !force_nonblock);
++	} else {
++		io_ring_submit_unlock(ctx, !force_nonblock);
++		__io_req_complete(req, ret, 0, cs);
++	}
+ 	return 0;
+ }
+ 
+@@ -6074,8 +6081,28 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
+ 	}
+ 
+ 	if (ret) {
++		struct io_ring_ctx *lock_ctx = NULL;
++
++		if (req->ctx->flags & IORING_SETUP_IOPOLL)
++			lock_ctx = req->ctx;
++
++		/*
++		 * io_iopoll_complete() does not hold completion_lock to
++		 * complete polled io, so here for polled io, we can not call
++		 * io_req_complete() directly, otherwise there maybe concurrent
++		 * access to cqring, defer_list, etc, which is not safe. Given
++		 * that io_iopoll_complete() is always called under uring_lock,
++		 * so here for polled io, we also get uring_lock to complete
++		 * it.
++		 */
++		if (lock_ctx)
++			mutex_lock(&lock_ctx->uring_lock);
++
+ 		req_set_fail_links(req);
+ 		io_req_complete(req, ret);
++
++		if (lock_ctx)
++			mutex_unlock(&lock_ctx->uring_lock);
+ 	}
+ 
+ 	return io_steal_work(req);
+@@ -8369,28 +8396,35 @@ static void io_ring_exit_work(struct work_struct *work)
+ 	 * as nobody else will be looking for them.
+ 	 */
+ 	do {
+-		if (ctx->rings)
+-			io_cqring_overflow_flush(ctx, true, NULL, NULL);
+ 		io_iopoll_try_reap_events(ctx);
+ 	} while (!wait_for_completion_timeout(&ctx->ref_comp, HZ/20));
+ 	io_ring_ctx_free(ctx);
+ }
+ 
++static bool io_cancel_ctx_cb(struct io_wq_work *work, void *data)
++{
++	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++
++	return req->ctx == data;
++}
++
+ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ {
+ 	mutex_lock(&ctx->uring_lock);
+ 	percpu_ref_kill(&ctx->refs);
++	/* if force is set, the ring is going away. always drop after that */
++	ctx->cq_overflow_flushed = 1;
++	if (ctx->rings)
++		io_cqring_overflow_flush(ctx, true, NULL, NULL);
+ 	mutex_unlock(&ctx->uring_lock);
+ 
+ 	io_kill_timeouts(ctx, NULL);
+ 	io_poll_remove_all(ctx, NULL);
+ 
+ 	if (ctx->io_wq)
+-		io_wq_cancel_all(ctx->io_wq);
++		io_wq_cancel_cb(ctx->io_wq, io_cancel_ctx_cb, ctx, true);
+ 
+ 	/* if we failed setting up the ctx, we might not have any rings */
+-	if (ctx->rings)
+-		io_cqring_overflow_flush(ctx, true, NULL, NULL);
+ 	io_iopoll_try_reap_events(ctx);
+ 	idr_for_each(&ctx->personality_idr, io_remove_personalities, ctx);
+ 
+@@ -8421,14 +8455,6 @@ static int io_uring_release(struct inode *inode, struct file *file)
+ 	return 0;
+ }
+ 
+-static bool io_wq_files_match(struct io_wq_work *work, void *data)
+-{
+-	struct files_struct *files = data;
+-
+-	return !files || ((work->flags & IO_WQ_WORK_FILES) &&
+-				work->identity->files == files);
+-}
+-
+ /*
+  * Returns true if 'preq' is the link parent of 'req'
+  */
+@@ -8566,21 +8592,20 @@ static void io_cancel_defer_files(struct io_ring_ctx *ctx,
+  * Returns true if we found and killed one or more files pinning requests
+  */
+ static bool io_uring_cancel_files(struct io_ring_ctx *ctx,
++				  struct task_struct *task,
+ 				  struct files_struct *files)
+ {
+ 	if (list_empty_careful(&ctx->inflight_list))
+ 		return false;
+ 
+-	/* cancel all at once, should be faster than doing it one by one*/
+-	io_wq_cancel_cb(ctx->io_wq, io_wq_files_match, files, true);
+-
+ 	while (!list_empty_careful(&ctx->inflight_list)) {
+ 		struct io_kiocb *cancel_req = NULL, *req;
+ 		DEFINE_WAIT(wait);
+ 
+ 		spin_lock_irq(&ctx->inflight_lock);
+ 		list_for_each_entry(req, &ctx->inflight_list, inflight_entry) {
+-			if (files && (req->work.flags & IO_WQ_WORK_FILES) &&
++			if (req->task == task &&
++			    (req->work.flags & IO_WQ_WORK_FILES) &&
+ 			    req->work.identity->files != files)
+ 				continue;
+ 			/* req is being completed, ignore */
+@@ -8623,7 +8648,7 @@ static bool __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ {
+ 	bool ret;
+ 
+-	ret = io_uring_cancel_files(ctx, files);
++	ret = io_uring_cancel_files(ctx, task, files);
+ 	if (!files) {
+ 		enum io_wq_cancel cret;
+ 
+@@ -8662,12 +8687,10 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ 		io_sq_thread_park(ctx->sq_data);
+ 	}
+ 
+-	if (files)
+-		io_cancel_defer_files(ctx, NULL, files);
+-	else
+-		io_cancel_defer_files(ctx, task, NULL);
+-
++	io_cancel_defer_files(ctx, task, files);
++	io_ring_submit_lock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
+ 	io_cqring_overflow_flush(ctx, true, task, files);
++	io_ring_submit_unlock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
+ 
+ 	while (__io_uring_cancel_task_requests(ctx, task, files)) {
+ 		io_run_task_work();
+@@ -8692,10 +8715,9 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ static int io_uring_add_task_file(struct io_ring_ctx *ctx, struct file *file)
+ {
+ 	struct io_uring_task *tctx = current->io_uring;
++	int ret;
+ 
+ 	if (unlikely(!tctx)) {
+-		int ret;
+-
+ 		ret = io_uring_alloc_task_context(current);
+ 		if (unlikely(ret))
+ 			return ret;
+@@ -8706,7 +8728,12 @@ static int io_uring_add_task_file(struct io_ring_ctx *ctx, struct file *file)
+ 
+ 		if (!old) {
+ 			get_file(file);
+-			xa_store(&tctx->xa, (unsigned long)file, file, GFP_KERNEL);
++			ret = xa_err(xa_store(&tctx->xa, (unsigned long)file,
++						file, GFP_KERNEL));
++			if (ret) {
++				fput(file);
++				return ret;
++			}
+ 		}
+ 		tctx->last = file;
+ 	}
+@@ -8969,8 +8996,10 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+ 	 */
+ 	ret = 0;
+ 	if (ctx->flags & IORING_SETUP_SQPOLL) {
++		io_ring_submit_lock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
+ 		if (!list_empty_careful(&ctx->cq_overflow_list))
+ 			io_cqring_overflow_flush(ctx, false, NULL, NULL);
++		io_ring_submit_unlock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
+ 		if (flags & IORING_ENTER_SQ_WAKEUP)
+ 			wake_up(&ctx->sq_data->wait);
+ 		if (flags & IORING_ENTER_SQ_WAIT)
+@@ -9173,55 +9202,52 @@ static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
+ 	return 0;
+ }
+ 
++static int io_uring_install_fd(struct io_ring_ctx *ctx, struct file *file)
++{
++	int ret, fd;
++
++	fd = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
++	if (fd < 0)
++		return fd;
++
++	ret = io_uring_add_task_file(ctx, file);
++	if (ret) {
++		put_unused_fd(fd);
++		return ret;
++	}
++	fd_install(fd, file);
++	return fd;
++}
++
+ /*
+  * Allocate an anonymous fd, this is what constitutes the application
+  * visible backing of an io_uring instance. The application mmaps this
+  * fd to gain access to the SQ/CQ ring details. If UNIX sockets are enabled,
+  * we have to tie this fd to a socket for file garbage collection purposes.
+  */
+-static int io_uring_get_fd(struct io_ring_ctx *ctx)
++static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
+ {
+ 	struct file *file;
++#if defined(CONFIG_UNIX)
+ 	int ret;
+-	int fd;
+ 
+-#if defined(CONFIG_UNIX)
+ 	ret = sock_create_kern(&init_net, PF_UNIX, SOCK_RAW, IPPROTO_IP,
+ 				&ctx->ring_sock);
+ 	if (ret)
+-		return ret;
++		return ERR_PTR(ret);
+ #endif
+ 
+-	ret = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
+-	if (ret < 0)
+-		goto err;
+-	fd = ret;
+-
+ 	file = anon_inode_getfile("[io_uring]", &io_uring_fops, ctx,
+ 					O_RDWR | O_CLOEXEC);
+-	if (IS_ERR(file)) {
+-		put_unused_fd(fd);
+-		ret = PTR_ERR(file);
+-		goto err;
+-	}
+-
+ #if defined(CONFIG_UNIX)
+-	ctx->ring_sock->file = file;
+-#endif
+-	ret = io_uring_add_task_file(ctx, file);
+-	if (ret) {
+-		fput(file);
+-		put_unused_fd(fd);
+-		goto err;
++	if (IS_ERR(file)) {
++		sock_release(ctx->ring_sock);
++		ctx->ring_sock = NULL;
++	} else {
++		ctx->ring_sock->file = file;
+ 	}
+-	fd_install(fd, file);
+-	return fd;
+-err:
+-#if defined(CONFIG_UNIX)
+-	sock_release(ctx->ring_sock);
+-	ctx->ring_sock = NULL;
+ #endif
+-	return ret;
++	return file;
+ }
+ 
+ static int io_uring_create(unsigned entries, struct io_uring_params *p,
+@@ -9229,6 +9255,7 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p,
+ {
+ 	struct user_struct *user = NULL;
+ 	struct io_ring_ctx *ctx;
++	struct file *file;
+ 	bool limit_mem;
+ 	int ret;
+ 
+@@ -9375,13 +9402,22 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p,
+ 		goto err;
+ 	}
+ 
++	file = io_uring_get_file(ctx);
++	if (IS_ERR(file)) {
++		ret = PTR_ERR(file);
++		goto err;
++	}
++
+ 	/*
+ 	 * Install ring fd as the very last thing, so we don't risk someone
+ 	 * having closed it before we finish setup
+ 	 */
+-	ret = io_uring_get_fd(ctx);
+-	if (ret < 0)
+-		goto err;
++	ret = io_uring_install_fd(ctx, file);
++	if (ret < 0) {
++		/* fput will clean it up */
++		fput(file);
++		return ret;
++	}
+ 
+ 	trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
+ 	return ret;
+diff --git a/fs/jffs2/readinode.c b/fs/jffs2/readinode.c
+index 2f6f0b140c05a..03b4f99614bef 100644
+--- a/fs/jffs2/readinode.c
++++ b/fs/jffs2/readinode.c
+@@ -672,6 +672,22 @@ static inline int read_direntry(struct jffs2_sb_info *c, struct jffs2_raw_node_r
+ 			jffs2_free_full_dirent(fd);
+ 			return -EIO;
+ 		}
++
++#ifdef CONFIG_JFFS2_SUMMARY
++		/*
++		 * we use CONFIG_JFFS2_SUMMARY because without it, we
++		 * have checked it while mounting
++		 */
++		crc = crc32(0, fd->name, rd->nsize);
++		if (unlikely(crc != je32_to_cpu(rd->name_crc))) {
++			JFFS2_NOTICE("name CRC failed on dirent node at"
++			   "%#08x: read %#08x,calculated %#08x\n",
++			   ref_offset(ref), je32_to_cpu(rd->node_crc), crc);
++			jffs2_mark_node_obsolete(c, ref);
++			jffs2_free_full_dirent(fd);
++			return 0;
++		}
++#endif
+ 	}
+ 
+ 	fd->nhash = full_name_hash(NULL, fd->name, rd->nsize);
+diff --git a/fs/jffs2/super.c b/fs/jffs2/super.c
+index 05d7878dfad15..4fd297bdf0f3f 100644
+--- a/fs/jffs2/super.c
++++ b/fs/jffs2/super.c
+@@ -215,11 +215,28 @@ static int jffs2_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 	return 0;
+ }
+ 
++static inline void jffs2_update_mount_opts(struct fs_context *fc)
++{
++	struct jffs2_sb_info *new_c = fc->s_fs_info;
++	struct jffs2_sb_info *c = JFFS2_SB_INFO(fc->root->d_sb);
++
++	mutex_lock(&c->alloc_sem);
++	if (new_c->mount_opts.override_compr) {
++		c->mount_opts.override_compr = new_c->mount_opts.override_compr;
++		c->mount_opts.compr = new_c->mount_opts.compr;
++	}
++	if (new_c->mount_opts.rp_size)
++		c->mount_opts.rp_size = new_c->mount_opts.rp_size;
++	mutex_unlock(&c->alloc_sem);
++}
++
+ static int jffs2_reconfigure(struct fs_context *fc)
+ {
+ 	struct super_block *sb = fc->root->d_sb;
+ 
+ 	sync_filesystem(sb);
++	jffs2_update_mount_opts(fc);
++
+ 	return jffs2_do_remount_fs(sb, fc);
+ }
+ 
+diff --git a/fs/jfs/jfs_dmap.h b/fs/jfs/jfs_dmap.h
+index 29891fad3f095..aa03a904d5ab2 100644
+--- a/fs/jfs/jfs_dmap.h
++++ b/fs/jfs/jfs_dmap.h
+@@ -183,7 +183,7 @@ typedef union dmtree {
+ #define	dmt_leafidx	t1.leafidx
+ #define	dmt_height	t1.height
+ #define	dmt_budmin	t1.budmin
+-#define	dmt_stree	t1.stree
++#define	dmt_stree	t2.stree
+ 
+ /*
+  *	on-disk aggregate disk allocation map descriptor.
+diff --git a/fs/lockd/host.c b/fs/lockd/host.c
+index 0afb6d59bad03..771c289f6df7f 100644
+--- a/fs/lockd/host.c
++++ b/fs/lockd/host.c
+@@ -439,12 +439,7 @@ nlm_bind_host(struct nlm_host *host)
+ 	 * RPC rebind is required
+ 	 */
+ 	if ((clnt = host->h_rpcclnt) != NULL) {
+-		if (time_after_eq(jiffies, host->h_nextrebind)) {
+-			rpc_force_rebind(clnt);
+-			host->h_nextrebind = jiffies + NLM_HOST_REBIND;
+-			dprintk("lockd: next rebind in %lu jiffies\n",
+-					host->h_nextrebind - jiffies);
+-		}
++		nlm_rebind_host(host);
+ 	} else {
+ 		unsigned long increment = nlmsvc_timeout;
+ 		struct rpc_timeout timeparms = {
+@@ -494,13 +489,20 @@ nlm_bind_host(struct nlm_host *host)
+ 	return clnt;
+ }
+ 
+-/*
+- * Force a portmap lookup of the remote lockd port
++/**
++ * nlm_rebind_host - If needed, force a portmap lookup of the peer's lockd port
++ * @host: NLM host handle for peer
++ *
++ * This is not needed when using a connection-oriented protocol, such as TCP.
++ * The existing autobind mechanism is sufficient to force a rebind when
++ * required, e.g. on connection state transitions.
+  */
+ void
+ nlm_rebind_host(struct nlm_host *host)
+ {
+-	dprintk("lockd: rebind host %s\n", host->h_name);
++	if (host->h_proto != IPPROTO_UDP)
++		return;
++
+ 	if (host->h_rpcclnt && time_after_eq(jiffies, host->h_nextrebind)) {
+ 		rpc_force_rebind(host->h_rpcclnt);
+ 		host->h_nextrebind = jiffies + NLM_HOST_REBIND;
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 24bf5797f88ae..fd0eda328943b 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -1056,7 +1056,7 @@ static void ff_layout_resend_pnfs_read(struct nfs_pgio_header *hdr)
+ 	u32 idx = hdr->pgio_mirror_idx + 1;
+ 	u32 new_idx = 0;
+ 
+-	if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx + 1, &new_idx))
++	if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx, &new_idx))
+ 		ff_layout_send_layouterror(hdr->lseg);
+ 	else
+ 		pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg);
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index aa6493905bbe8..43af053f467a7 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -2180,7 +2180,7 @@ static int nfsiod_start(void)
+ {
+ 	struct workqueue_struct *wq;
+ 	dprintk("RPC:       creating workqueue nfsiod\n");
+-	wq = alloc_workqueue("nfsiod", WQ_MEM_RECLAIM, 0);
++	wq = alloc_workqueue("nfsiod", WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
+ 	if (wq == NULL)
+ 		return -ENOMEM;
+ 	nfsiod_workqueue = wq;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index e89468678ae16..6858b4bb556d5 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -4961,12 +4961,12 @@ static int _nfs4_proc_readdir(struct dentry *dentry, const struct cred *cred,
+ 		u64 cookie, struct page **pages, unsigned int count, bool plus)
+ {
+ 	struct inode		*dir = d_inode(dentry);
++	struct nfs_server	*server = NFS_SERVER(dir);
+ 	struct nfs4_readdir_arg args = {
+ 		.fh = NFS_FH(dir),
+ 		.pages = pages,
+ 		.pgbase = 0,
+ 		.count = count,
+-		.bitmask = NFS_SERVER(d_inode(dentry))->attr_bitmask,
+ 		.plus = plus,
+ 	};
+ 	struct nfs4_readdir_res res;
+@@ -4981,9 +4981,15 @@ static int _nfs4_proc_readdir(struct dentry *dentry, const struct cred *cred,
+ 	dprintk("%s: dentry = %pd2, cookie = %Lu\n", __func__,
+ 			dentry,
+ 			(unsigned long long)cookie);
++	if (!(server->caps & NFS_CAP_SECURITY_LABEL))
++		args.bitmask = server->attr_bitmask_nl;
++	else
++		args.bitmask = server->attr_bitmask;
++
+ 	nfs4_setup_readdir(cookie, NFS_I(dir)->cookieverf, dentry, &args);
+ 	res.pgbase = args.pgbase;
+-	status = nfs4_call_sync(NFS_SERVER(dir)->client, NFS_SERVER(dir), &msg, &args.seq_args, &res.seq_res, 0);
++	status = nfs4_call_sync(server->client, server, &msg, &args.seq_args,
++			&res.seq_res, 0);
+ 	if (status >= 0) {
+ 		memcpy(NFS_I(dir)->cookieverf, res.verifier.data, NFS4_VERIFIER_SIZE);
+ 		status += args.pgbase;
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index c6dbfcae75171..c16b93df1bc14 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -3009,15 +3009,19 @@ static void nfs4_xdr_enc_getdeviceinfo(struct rpc_rqst *req,
+ 	struct compound_hdr hdr = {
+ 		.minorversion = nfs4_xdr_minorversion(&args->seq_args),
+ 	};
++	uint32_t replen;
+ 
+ 	encode_compound_hdr(xdr, req, &hdr);
+ 	encode_sequence(xdr, &args->seq_args, &hdr);
++
++	replen = hdr.replen + op_decode_hdr_maxsz;
++
+ 	encode_getdeviceinfo(xdr, args, &hdr);
+ 
+-	/* set up reply kvec. Subtract notification bitmap max size (2)
+-	 * so that notification bitmap is put in xdr_buf tail */
++	/* set up reply kvec. device_addr4 opaque data is read into the
++	 * pages */
+ 	rpc_prepare_reply_pages(req, args->pdev->pages, args->pdev->pgbase,
+-				args->pdev->pglen, hdr.replen - 2);
++				args->pdev->pglen, replen + 2 + 1);
+ 	encode_nops(&hdr);
+ }
+ 
+diff --git a/fs/nfs_common/grace.c b/fs/nfs_common/grace.c
+index b73d9dd37f73c..26f2a50eceac9 100644
+--- a/fs/nfs_common/grace.c
++++ b/fs/nfs_common/grace.c
+@@ -69,10 +69,14 @@ __state_in_grace(struct net *net, bool open)
+ 	if (!open)
+ 		return !list_empty(grace_list);
+ 
++	spin_lock(&grace_lock);
+ 	list_for_each_entry(lm, grace_list, list) {
+-		if (lm->block_opens)
++		if (lm->block_opens) {
++			spin_unlock(&grace_lock);
+ 			return true;
++		}
+ 	}
++	spin_unlock(&grace_lock);
+ 	return false;
+ }
+ 
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index 3c6c2f7d1688b..5849c1bd88f17 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -600,7 +600,7 @@ static struct notifier_block nfsd_file_lease_notifier = {
+ static int
+ nfsd_file_fsnotify_handle_event(struct fsnotify_mark *mark, u32 mask,
+ 				struct inode *inode, struct inode *dir,
+-				const struct qstr *name)
++				const struct qstr *name, u32 cookie)
+ {
+ 	trace_nfsd_file_fsnotify_handle_event(inode, mask);
+ 
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index d7f27ed6b7941..47006eec724e6 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -769,6 +769,7 @@ static int nfs4_init_cp_state(struct nfsd_net *nn, copy_stateid_t *stid,
+ 	spin_lock(&nn->s2s_cp_lock);
+ 	new_id = idr_alloc_cyclic(&nn->s2s_cp_stateids, stid, 0, 0, GFP_NOWAIT);
+ 	stid->stid.si_opaque.so_id = new_id;
++	stid->stid.si_generation = 1;
+ 	spin_unlock(&nn->s2s_cp_lock);
+ 	idr_preload_end();
+ 	if (new_id < 0)
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index 27b1ad1361508..9323e30a7eafe 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -527,8 +527,7 @@ static void nfsd_last_thread(struct svc_serv *serv, struct net *net)
+ 		return;
+ 
+ 	nfsd_shutdown_net(net);
+-	printk(KERN_WARNING "nfsd: last server has exited, flushing export "
+-			    "cache\n");
++	pr_info("nfsd: last server has exited, flushing export cache\n");
+ 	nfsd_export_flush(net);
+ }
+ 
+diff --git a/fs/notify/dnotify/dnotify.c b/fs/notify/dnotify/dnotify.c
+index 5dcda8f20c04f..e45ca6ecba959 100644
+--- a/fs/notify/dnotify/dnotify.c
++++ b/fs/notify/dnotify/dnotify.c
+@@ -72,7 +72,7 @@ static void dnotify_recalc_inode_mask(struct fsnotify_mark *fsn_mark)
+  */
+ static int dnotify_handle_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 				struct inode *inode, struct inode *dir,
+-				const struct qstr *name)
++				const struct qstr *name, u32 cookie)
+ {
+ 	struct dnotify_mark *dn_mark;
+ 	struct dnotify_struct *dn;
+diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
+index 9167884a61eca..1192c99536200 100644
+--- a/fs/notify/fanotify/fanotify.c
++++ b/fs/notify/fanotify/fanotify.c
+@@ -268,12 +268,11 @@ static u32 fanotify_group_event_mask(struct fsnotify_group *group,
+ 			continue;
+ 
+ 		/*
+-		 * If the event is for a child and this mark is on a parent not
++		 * If the event is on a child and this mark is on a parent not
+ 		 * watching children, don't send it!
+ 		 */
+-		if (event_mask & FS_EVENT_ON_CHILD &&
+-		    type == FSNOTIFY_OBJ_TYPE_INODE &&
+-		     !(mark->mask & FS_EVENT_ON_CHILD))
++		if (type == FSNOTIFY_OBJ_TYPE_PARENT &&
++		    !(mark->mask & FS_EVENT_ON_CHILD))
+ 			continue;
+ 
+ 		marks_mask |= mark->mask;
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index 8d3ad5ef29258..30d422b8c0fc7 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -152,6 +152,13 @@ static bool fsnotify_event_needs_parent(struct inode *inode, struct mount *mnt,
+ 	if (mask & FS_ISDIR)
+ 		return false;
+ 
++	/*
++	 * All events that are possible on child can also may be reported with
++	 * parent/name info to inode/sb/mount.  Otherwise, a watching parent
++	 * could result in events reported with unexpected name info to sb/mount.
++	 */
++	BUILD_BUG_ON(FS_EVENTS_POSS_ON_CHILD & ~FS_EVENTS_POSS_TO_PARENT);
++
+ 	/* Did either inode/sb/mount subscribe for events with parent/name? */
+ 	marks_mask |= fsnotify_parent_needed_mask(inode->i_fsnotify_mask);
+ 	marks_mask |= fsnotify_parent_needed_mask(inode->i_sb->s_fsnotify_mask);
+@@ -232,47 +239,76 @@ notify:
+ }
+ EXPORT_SYMBOL_GPL(__fsnotify_parent);
+ 
++static int fsnotify_handle_inode_event(struct fsnotify_group *group,
++				       struct fsnotify_mark *inode_mark,
++				       u32 mask, const void *data, int data_type,
++				       struct inode *dir, const struct qstr *name,
++				       u32 cookie)
++{
++	const struct path *path = fsnotify_data_path(data, data_type);
++	struct inode *inode = fsnotify_data_inode(data, data_type);
++	const struct fsnotify_ops *ops = group->ops;
++
++	if (WARN_ON_ONCE(!ops->handle_inode_event))
++		return 0;
++
++	if ((inode_mark->mask & FS_EXCL_UNLINK) &&
++	    path && d_unlinked(path->dentry))
++		return 0;
++
++	/* Check interest of this mark in case event was sent with two marks */
++	if (!(mask & inode_mark->mask & ALL_FSNOTIFY_EVENTS))
++		return 0;
++
++	return ops->handle_inode_event(inode_mark, mask, inode, dir, name, cookie);
++}
++
+ static int fsnotify_handle_event(struct fsnotify_group *group, __u32 mask,
+ 				 const void *data, int data_type,
+ 				 struct inode *dir, const struct qstr *name,
+ 				 u32 cookie, struct fsnotify_iter_info *iter_info)
+ {
+ 	struct fsnotify_mark *inode_mark = fsnotify_iter_inode_mark(iter_info);
+-	struct fsnotify_mark *child_mark = fsnotify_iter_child_mark(iter_info);
+-	struct inode *inode = fsnotify_data_inode(data, data_type);
+-	const struct fsnotify_ops *ops = group->ops;
++	struct fsnotify_mark *parent_mark = fsnotify_iter_parent_mark(iter_info);
+ 	int ret;
+ 
+-	if (WARN_ON_ONCE(!ops->handle_inode_event))
+-		return 0;
+-
+ 	if (WARN_ON_ONCE(fsnotify_iter_sb_mark(iter_info)) ||
+ 	    WARN_ON_ONCE(fsnotify_iter_vfsmount_mark(iter_info)))
+ 		return 0;
+ 
+-	/*
+-	 * An event can be sent on child mark iterator instead of inode mark
+-	 * iterator because of other groups that have interest of this inode
+-	 * and have marks on both parent and child.  We can simplify this case.
+-	 */
+-	if (!inode_mark) {
+-		inode_mark = child_mark;
+-		child_mark = NULL;
++	if (parent_mark) {
++		/*
++		 * parent_mark indicates that the parent inode is watching
++		 * children and interested in this event, which is an event
++		 * possible on child. But is *this mark* watching children and
++		 * interested in this event?
++		 */
++		if (parent_mark->mask & FS_EVENT_ON_CHILD) {
++			ret = fsnotify_handle_inode_event(group, parent_mark, mask,
++							  data, data_type, dir, name, 0);
++			if (ret)
++				return ret;
++		}
++		if (!inode_mark)
++			return 0;
++	}
++
++	if (mask & FS_EVENT_ON_CHILD) {
++		/*
++		 * Some events can be sent on both parent dir and child marks
++		 * (e.g. FS_ATTRIB).  If both parent dir and child are
++		 * watching, report the event once to parent dir with name (if
++		 * interested) and once to child without name (if interested).
++		 * The child watcher is expecting an event without a file name
++		 * and without the FS_EVENT_ON_CHILD flag.
++		 */
++		mask &= ~FS_EVENT_ON_CHILD;
+ 		dir = NULL;
+ 		name = NULL;
+ 	}
+ 
+-	ret = ops->handle_inode_event(inode_mark, mask, inode, dir, name);
+-	if (ret || !child_mark)
+-		return ret;
+-
+-	/*
+-	 * Some events can be sent on both parent dir and child marks
+-	 * (e.g. FS_ATTRIB).  If both parent dir and child are watching,
+-	 * report the event once to parent dir with name and once to child
+-	 * without name.
+-	 */
+-	return ops->handle_inode_event(child_mark, mask, inode, NULL, NULL);
++	return fsnotify_handle_inode_event(group, inode_mark, mask, data, data_type,
++					   dir, name, cookie);
+ }
+ 
+ static int send_to_group(__u32 mask, const void *data, int data_type,
+@@ -430,7 +466,7 @@ int fsnotify(__u32 mask, const void *data, int data_type, struct inode *dir,
+ 	struct fsnotify_iter_info iter_info = {};
+ 	struct super_block *sb;
+ 	struct mount *mnt = NULL;
+-	struct inode *child = NULL;
++	struct inode *parent = NULL;
+ 	int ret = 0;
+ 	__u32 test_mask, marks_mask;
+ 
+@@ -442,11 +478,10 @@ int fsnotify(__u32 mask, const void *data, int data_type, struct inode *dir,
+ 		inode = dir;
+ 	} else if (mask & FS_EVENT_ON_CHILD) {
+ 		/*
+-		 * Event on child - report on TYPE_INODE to dir if it is
+-		 * watching children and on TYPE_CHILD to child.
++		 * Event on child - report on TYPE_PARENT to dir if it is
++		 * watching children and on TYPE_INODE to child.
+ 		 */
+-		child = inode;
+-		inode = dir;
++		parent = dir;
+ 	}
+ 	sb = inode->i_sb;
+ 
+@@ -460,7 +495,7 @@ int fsnotify(__u32 mask, const void *data, int data_type, struct inode *dir,
+ 	if (!sb->s_fsnotify_marks &&
+ 	    (!mnt || !mnt->mnt_fsnotify_marks) &&
+ 	    (!inode || !inode->i_fsnotify_marks) &&
+-	    (!child || !child->i_fsnotify_marks))
++	    (!parent || !parent->i_fsnotify_marks))
+ 		return 0;
+ 
+ 	marks_mask = sb->s_fsnotify_mask;
+@@ -468,8 +503,8 @@ int fsnotify(__u32 mask, const void *data, int data_type, struct inode *dir,
+ 		marks_mask |= mnt->mnt_fsnotify_mask;
+ 	if (inode)
+ 		marks_mask |= inode->i_fsnotify_mask;
+-	if (child)
+-		marks_mask |= child->i_fsnotify_mask;
++	if (parent)
++		marks_mask |= parent->i_fsnotify_mask;
+ 
+ 
+ 	/*
+@@ -492,9 +527,9 @@ int fsnotify(__u32 mask, const void *data, int data_type, struct inode *dir,
+ 		iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
+ 			fsnotify_first_mark(&inode->i_fsnotify_marks);
+ 	}
+-	if (child) {
+-		iter_info.marks[FSNOTIFY_OBJ_TYPE_CHILD] =
+-			fsnotify_first_mark(&child->i_fsnotify_marks);
++	if (parent) {
++		iter_info.marks[FSNOTIFY_OBJ_TYPE_PARENT] =
++			fsnotify_first_mark(&parent->i_fsnotify_marks);
+ 	}
+ 
+ 	/*
+diff --git a/fs/notify/inotify/inotify.h b/fs/notify/inotify/inotify.h
+index 4327d0e9c3645..2007e37119160 100644
+--- a/fs/notify/inotify/inotify.h
++++ b/fs/notify/inotify/inotify.h
+@@ -24,11 +24,10 @@ static inline struct inotify_event_info *INOTIFY_E(struct fsnotify_event *fse)
+ 
+ extern void inotify_ignored_and_remove_idr(struct fsnotify_mark *fsn_mark,
+ 					   struct fsnotify_group *group);
+-extern int inotify_handle_event(struct fsnotify_group *group, u32 mask,
+-				const void *data, int data_type,
+-				struct inode *dir,
+-				const struct qstr *file_name, u32 cookie,
+-				struct fsnotify_iter_info *iter_info);
++extern int inotify_handle_inode_event(struct fsnotify_mark *inode_mark,
++				      u32 mask, struct inode *inode,
++				      struct inode *dir,
++				      const struct qstr *name, u32 cookie);
+ 
+ extern const struct fsnotify_ops inotify_fsnotify_ops;
+ extern struct kmem_cache *inotify_inode_mark_cachep;
+diff --git a/fs/notify/inotify/inotify_fsnotify.c b/fs/notify/inotify/inotify_fsnotify.c
+index 9ddcbadc98e29..1901d799909b8 100644
+--- a/fs/notify/inotify/inotify_fsnotify.c
++++ b/fs/notify/inotify/inotify_fsnotify.c
+@@ -55,25 +55,21 @@ static int inotify_merge(struct list_head *list,
+ 	return event_compare(last_event, event);
+ }
+ 
+-static int inotify_one_event(struct fsnotify_group *group, u32 mask,
+-			     struct fsnotify_mark *inode_mark,
+-			     const struct path *path,
+-			     const struct qstr *file_name, u32 cookie)
++int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, u32 mask,
++			       struct inode *inode, struct inode *dir,
++			       const struct qstr *name, u32 cookie)
+ {
+ 	struct inotify_inode_mark *i_mark;
+ 	struct inotify_event_info *event;
+ 	struct fsnotify_event *fsn_event;
++	struct fsnotify_group *group = inode_mark->group;
+ 	int ret;
+ 	int len = 0;
+ 	int alloc_len = sizeof(struct inotify_event_info);
+ 	struct mem_cgroup *old_memcg;
+ 
+-	if ((inode_mark->mask & FS_EXCL_UNLINK) &&
+-	    path && d_unlinked(path->dentry))
+-		return 0;
+-
+-	if (file_name) {
+-		len = file_name->len;
++	if (name) {
++		len = name->len;
+ 		alloc_len += len + 1;
+ 	}
+ 
+@@ -117,7 +113,7 @@ static int inotify_one_event(struct fsnotify_group *group, u32 mask,
+ 	event->sync_cookie = cookie;
+ 	event->name_len = len;
+ 	if (len)
+-		strcpy(event->name, file_name->name);
++		strcpy(event->name, name->name);
+ 
+ 	ret = fsnotify_add_event(group, fsn_event, inotify_merge);
+ 	if (ret) {
+@@ -131,37 +127,6 @@ static int inotify_one_event(struct fsnotify_group *group, u32 mask,
+ 	return 0;
+ }
+ 
+-int inotify_handle_event(struct fsnotify_group *group, u32 mask,
+-			 const void *data, int data_type, struct inode *dir,
+-			 const struct qstr *file_name, u32 cookie,
+-			 struct fsnotify_iter_info *iter_info)
+-{
+-	const struct path *path = fsnotify_data_path(data, data_type);
+-	struct fsnotify_mark *inode_mark = fsnotify_iter_inode_mark(iter_info);
+-	struct fsnotify_mark *child_mark = fsnotify_iter_child_mark(iter_info);
+-	int ret = 0;
+-
+-	if (WARN_ON(fsnotify_iter_vfsmount_mark(iter_info)))
+-		return 0;
+-
+-	/*
+-	 * Some events cannot be sent on both parent and child marks
+-	 * (e.g. IN_CREATE).  Those events are always sent on inode_mark.
+-	 * For events that are possible on both parent and child (e.g. IN_OPEN),
+-	 * event is sent on inode_mark with name if the parent is watching and
+-	 * is sent on child_mark without name if child is watching.
+-	 * If both parent and child are watching, report the event with child's
+-	 * name here and report another event without child's name below.
+-	 */
+-	if (inode_mark)
+-		ret = inotify_one_event(group, mask, inode_mark, path,
+-					file_name, cookie);
+-	if (ret || !child_mark)
+-		return ret;
+-
+-	return inotify_one_event(group, mask, child_mark, path, NULL, 0);
+-}
+-
+ static void inotify_freeing_mark(struct fsnotify_mark *fsn_mark, struct fsnotify_group *group)
+ {
+ 	inotify_ignored_and_remove_idr(fsn_mark, group);
+@@ -227,7 +192,7 @@ static void inotify_free_mark(struct fsnotify_mark *fsn_mark)
+ }
+ 
+ const struct fsnotify_ops inotify_fsnotify_ops = {
+-	.handle_event = inotify_handle_event,
++	.handle_inode_event = inotify_handle_inode_event,
+ 	.free_group_priv = inotify_free_group_priv,
+ 	.free_event = inotify_free_event,
+ 	.freeing_mark = inotify_freeing_mark,
+diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
+index 186722ba38947..5f6c6bf65909c 100644
+--- a/fs/notify/inotify/inotify_user.c
++++ b/fs/notify/inotify/inotify_user.c
+@@ -486,14 +486,10 @@ void inotify_ignored_and_remove_idr(struct fsnotify_mark *fsn_mark,
+ 				    struct fsnotify_group *group)
+ {
+ 	struct inotify_inode_mark *i_mark;
+-	struct fsnotify_iter_info iter_info = { };
+-
+-	fsnotify_iter_set_report_type_mark(&iter_info, FSNOTIFY_OBJ_TYPE_INODE,
+-					   fsn_mark);
+ 
+ 	/* Queue ignore event for the watch */
+-	inotify_handle_event(group, FS_IN_IGNORED, NULL, FSNOTIFY_EVENT_NONE,
+-			     NULL, NULL, 0, &iter_info);
++	inotify_handle_inode_event(fsn_mark, FS_IN_IGNORED, NULL, NULL, NULL,
++				   0);
+ 
+ 	i_mark = container_of(fsn_mark, struct inotify_inode_mark, fsn_mark);
+ 	/* remove this mark from the idr */
+diff --git a/fs/open.c b/fs/open.c
+index 9af548fb841b0..4d7537ae59df5 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -1010,6 +1010,10 @@ inline int build_open_flags(const struct open_how *how, struct open_flags *op)
+ 	if (how->resolve & ~VALID_RESOLVE_FLAGS)
+ 		return -EINVAL;
+ 
++	/* Scoping flags are mutually exclusive. */
++	if ((how->resolve & RESOLVE_BENEATH) && (how->resolve & RESOLVE_IN_ROOT))
++		return -EINVAL;
++
+ 	/* Deal with the mode. */
+ 	if (WILL_CREATE(flags)) {
+ 		if (how->mode & ~S_IALLUGO)
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index efccb7c1f9bc5..a1f72ac053e5f 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -541,46 +541,31 @@ static long ovl_real_ioctl(struct file *file, unsigned int cmd,
+ 			   unsigned long arg)
+ {
+ 	struct fd real;
+-	const struct cred *old_cred;
+ 	long ret;
+ 
+ 	ret = ovl_real_fdget(file, &real);
+ 	if (ret)
+ 		return ret;
+ 
+-	old_cred = ovl_override_creds(file_inode(file)->i_sb);
+ 	ret = security_file_ioctl(real.file, cmd, arg);
+-	if (!ret)
++	if (!ret) {
++		/*
++		 * Don't override creds, since we currently can't safely check
++		 * permissions before doing so.
++		 */
+ 		ret = vfs_ioctl(real.file, cmd, arg);
+-	revert_creds(old_cred);
++	}
+ 
+ 	fdput(real);
+ 
+ 	return ret;
+ }
+ 
+-static unsigned int ovl_iflags_to_fsflags(unsigned int iflags)
+-{
+-	unsigned int flags = 0;
+-
+-	if (iflags & S_SYNC)
+-		flags |= FS_SYNC_FL;
+-	if (iflags & S_APPEND)
+-		flags |= FS_APPEND_FL;
+-	if (iflags & S_IMMUTABLE)
+-		flags |= FS_IMMUTABLE_FL;
+-	if (iflags & S_NOATIME)
+-		flags |= FS_NOATIME_FL;
+-
+-	return flags;
+-}
+-
+ static long ovl_ioctl_set_flags(struct file *file, unsigned int cmd,
+-				unsigned long arg, unsigned int flags)
++				unsigned long arg)
+ {
+ 	long ret;
+ 	struct inode *inode = file_inode(file);
+-	unsigned int oldflags;
+ 
+ 	if (!inode_owner_or_capable(inode))
+ 		return -EACCES;
+@@ -591,10 +576,13 @@ static long ovl_ioctl_set_flags(struct file *file, unsigned int cmd,
+ 
+ 	inode_lock(inode);
+ 
+-	/* Check the capability before cred override */
+-	oldflags = ovl_iflags_to_fsflags(READ_ONCE(inode->i_flags));
+-	ret = vfs_ioc_setflags_prepare(inode, oldflags, flags);
+-	if (ret)
++	/*
++	 * Prevent copy up if immutable and has no CAP_LINUX_IMMUTABLE
++	 * capability.
++	 */
++	ret = -EPERM;
++	if (!ovl_has_upperdata(inode) && IS_IMMUTABLE(inode) &&
++	    !capable(CAP_LINUX_IMMUTABLE))
+ 		goto unlock;
+ 
+ 	ret = ovl_maybe_copy_up(file_dentry(file), O_WRONLY);
+@@ -613,46 +601,6 @@ unlock:
+ 
+ }
+ 
+-static long ovl_ioctl_set_fsflags(struct file *file, unsigned int cmd,
+-				  unsigned long arg)
+-{
+-	unsigned int flags;
+-
+-	if (get_user(flags, (int __user *) arg))
+-		return -EFAULT;
+-
+-	return ovl_ioctl_set_flags(file, cmd, arg, flags);
+-}
+-
+-static unsigned int ovl_fsxflags_to_fsflags(unsigned int xflags)
+-{
+-	unsigned int flags = 0;
+-
+-	if (xflags & FS_XFLAG_SYNC)
+-		flags |= FS_SYNC_FL;
+-	if (xflags & FS_XFLAG_APPEND)
+-		flags |= FS_APPEND_FL;
+-	if (xflags & FS_XFLAG_IMMUTABLE)
+-		flags |= FS_IMMUTABLE_FL;
+-	if (xflags & FS_XFLAG_NOATIME)
+-		flags |= FS_NOATIME_FL;
+-
+-	return flags;
+-}
+-
+-static long ovl_ioctl_set_fsxflags(struct file *file, unsigned int cmd,
+-				   unsigned long arg)
+-{
+-	struct fsxattr fa;
+-
+-	memset(&fa, 0, sizeof(fa));
+-	if (copy_from_user(&fa, (void __user *) arg, sizeof(fa)))
+-		return -EFAULT;
+-
+-	return ovl_ioctl_set_flags(file, cmd, arg,
+-				   ovl_fsxflags_to_fsflags(fa.fsx_xflags));
+-}
+-
+ long ovl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ {
+ 	long ret;
+@@ -663,12 +611,9 @@ long ovl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 		ret = ovl_real_ioctl(file, cmd, arg);
+ 		break;
+ 
+-	case FS_IOC_SETFLAGS:
+-		ret = ovl_ioctl_set_fsflags(file, cmd, arg);
+-		break;
+-
+ 	case FS_IOC_FSSETXATTR:
+-		ret = ovl_ioctl_set_fsxflags(file, cmd, arg);
++	case FS_IOC_SETFLAGS:
++		ret = ovl_ioctl_set_flags(file, cmd, arg);
+ 		break;
+ 
+ 	default:
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index b84663252adda..6c0a05f55d6b1 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -349,6 +349,16 @@ static const struct file_operations proc_dir_operations = {
+ 	.iterate_shared		= proc_readdir,
+ };
+ 
++static int proc_net_d_revalidate(struct dentry *dentry, unsigned int flags)
++{
++	return 0;
++}
++
++const struct dentry_operations proc_net_dentry_ops = {
++	.d_revalidate	= proc_net_d_revalidate,
++	.d_delete	= always_delete_dentry,
++};
++
+ /*
+  * proc directories can do almost nothing..
+  */
+@@ -471,8 +481,8 @@ struct proc_dir_entry *proc_symlink(const char *name,
+ }
+ EXPORT_SYMBOL(proc_symlink);
+ 
+-struct proc_dir_entry *proc_mkdir_data(const char *name, umode_t mode,
+-		struct proc_dir_entry *parent, void *data)
++struct proc_dir_entry *_proc_mkdir(const char *name, umode_t mode,
++		struct proc_dir_entry *parent, void *data, bool force_lookup)
+ {
+ 	struct proc_dir_entry *ent;
+ 
+@@ -484,10 +494,20 @@ struct proc_dir_entry *proc_mkdir_data(const char *name, umode_t mode,
+ 		ent->data = data;
+ 		ent->proc_dir_ops = &proc_dir_operations;
+ 		ent->proc_iops = &proc_dir_inode_operations;
++		if (force_lookup) {
++			pde_force_lookup(ent);
++		}
+ 		ent = proc_register(parent, ent);
+ 	}
+ 	return ent;
+ }
++EXPORT_SYMBOL_GPL(_proc_mkdir);
++
++struct proc_dir_entry *proc_mkdir_data(const char *name, umode_t mode,
++		struct proc_dir_entry *parent, void *data)
++{
++	return _proc_mkdir(name, mode, parent, data, false);
++}
+ EXPORT_SYMBOL_GPL(proc_mkdir_data);
+ 
+ struct proc_dir_entry *proc_mkdir_mode(const char *name, umode_t mode,
+diff --git a/fs/proc/internal.h b/fs/proc/internal.h
+index 917cc85e34663..afbe96b6bf77d 100644
+--- a/fs/proc/internal.h
++++ b/fs/proc/internal.h
+@@ -310,3 +310,10 @@ extern unsigned long task_statm(struct mm_struct *,
+ 				unsigned long *, unsigned long *,
+ 				unsigned long *, unsigned long *);
+ extern void task_mem(struct seq_file *, struct mm_struct *);
++
++extern const struct dentry_operations proc_net_dentry_ops;
++static inline void pde_force_lookup(struct proc_dir_entry *pde)
++{
++	/* /proc/net/ entries can be changed under us by setns(CLONE_NEWNET) */
++	pde->proc_dops = &proc_net_dentry_ops;
++}
+diff --git a/fs/proc/proc_net.c b/fs/proc/proc_net.c
+index ed8a6306990c4..1aa9236bf1af5 100644
+--- a/fs/proc/proc_net.c
++++ b/fs/proc/proc_net.c
+@@ -39,22 +39,6 @@ static struct net *get_proc_net(const struct inode *inode)
+ 	return maybe_get_net(PDE_NET(PDE(inode)));
+ }
+ 
+-static int proc_net_d_revalidate(struct dentry *dentry, unsigned int flags)
+-{
+-	return 0;
+-}
+-
+-static const struct dentry_operations proc_net_dentry_ops = {
+-	.d_revalidate	= proc_net_d_revalidate,
+-	.d_delete	= always_delete_dentry,
+-};
+-
+-static void pde_force_lookup(struct proc_dir_entry *pde)
+-{
+-	/* /proc/net/ entries can be changed under us by setns(CLONE_NEWNET) */
+-	pde->proc_dops = &proc_net_dentry_ops;
+-}
+-
+ static int seq_open_net(struct inode *inode, struct file *file)
+ {
+ 	unsigned int state_size = PDE(inode)->state_size;
+diff --git a/fs/proc_namespace.c b/fs/proc_namespace.c
+index e59d4bb3a89e4..eafb75755fa37 100644
+--- a/fs/proc_namespace.c
++++ b/fs/proc_namespace.c
+@@ -320,7 +320,8 @@ static int mountstats_open(struct inode *inode, struct file *file)
+ 
+ const struct file_operations proc_mounts_operations = {
+ 	.open		= mounts_open,
+-	.read		= seq_read,
++	.read_iter	= seq_read_iter,
++	.splice_read	= generic_file_splice_read,
+ 	.llseek		= seq_lseek,
+ 	.release	= mounts_release,
+ 	.poll		= mounts_poll,
+@@ -328,7 +329,8 @@ const struct file_operations proc_mounts_operations = {
+ 
+ const struct file_operations proc_mountinfo_operations = {
+ 	.open		= mountinfo_open,
+-	.read		= seq_read,
++	.read_iter	= seq_read_iter,
++	.splice_read	= generic_file_splice_read,
+ 	.llseek		= seq_lseek,
+ 	.release	= mounts_release,
+ 	.poll		= mounts_poll,
+@@ -336,7 +338,8 @@ const struct file_operations proc_mountinfo_operations = {
+ 
+ const struct file_operations proc_mountstats_operations = {
+ 	.open		= mountstats_open,
+-	.read		= seq_read,
++	.read_iter	= seq_read_iter,
++	.splice_read	= generic_file_splice_read,
+ 	.llseek		= seq_lseek,
+ 	.release	= mounts_release,
+ };
+diff --git a/fs/ubifs/auth.c b/fs/ubifs/auth.c
+index b93b3cd10bfd3..8c50de693e1d4 100644
+--- a/fs/ubifs/auth.c
++++ b/fs/ubifs/auth.c
+@@ -338,8 +338,10 @@ int ubifs_init_authentication(struct ubifs_info *c)
+ 	c->authenticated = true;
+ 
+ 	c->log_hash = ubifs_hash_get_desc(c);
+-	if (IS_ERR(c->log_hash))
++	if (IS_ERR(c->log_hash)) {
++		err = PTR_ERR(c->log_hash);
+ 		goto out_free_hmac;
++	}
+ 
+ 	err = 0;
+ 
+diff --git a/fs/ubifs/io.c b/fs/ubifs/io.c
+index 7e4bfaf2871fa..eae9cf5a57b05 100644
+--- a/fs/ubifs/io.c
++++ b/fs/ubifs/io.c
+@@ -319,7 +319,7 @@ void ubifs_pad(const struct ubifs_info *c, void *buf, int pad)
+ {
+ 	uint32_t crc;
+ 
+-	ubifs_assert(c, pad >= 0 && !(pad & 7));
++	ubifs_assert(c, pad >= 0);
+ 
+ 	if (pad >= UBIFS_PAD_NODE_SZ) {
+ 		struct ubifs_ch *ch = buf;
+@@ -764,6 +764,10 @@ int ubifs_wbuf_write_nolock(struct ubifs_wbuf *wbuf, void *buf, int len)
+ 		 * write-buffer.
+ 		 */
+ 		memcpy(wbuf->buf + wbuf->used, buf, len);
++		if (aligned_len > len) {
++			ubifs_assert(c, aligned_len - len < 8);
++			ubifs_pad(c, wbuf->buf + wbuf->used + len, aligned_len - len);
++		}
+ 
+ 		if (aligned_len == wbuf->avail) {
+ 			dbg_io("flush jhead %s wbuf to LEB %d:%d",
+@@ -856,13 +860,18 @@ int ubifs_wbuf_write_nolock(struct ubifs_wbuf *wbuf, void *buf, int len)
+ 	}
+ 
+ 	spin_lock(&wbuf->lock);
+-	if (aligned_len)
++	if (aligned_len) {
+ 		/*
+ 		 * And now we have what's left and what does not take whole
+ 		 * max. write unit, so write it to the write-buffer and we are
+ 		 * done.
+ 		 */
+ 		memcpy(wbuf->buf, buf + written, len);
++		if (aligned_len > len) {
++			ubifs_assert(c, aligned_len - len < 8);
++			ubifs_pad(c, wbuf->buf + len, aligned_len - len);
++		}
++	}
+ 
+ 	if (c->leb_size - wbuf->offs >= c->max_write_size)
+ 		wbuf->size = c->max_write_size;
+diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
+index a3abcc4b7d9ff..6d1879bf94403 100644
+--- a/include/acpi/acpi_bus.h
++++ b/include/acpi/acpi_bus.h
+@@ -620,7 +620,6 @@ acpi_status acpi_remove_pm_notifier(struct acpi_device *adev);
+ bool acpi_pm_device_can_wakeup(struct device *dev);
+ int acpi_pm_device_sleep_state(struct device *, int *, int);
+ int acpi_pm_set_device_wakeup(struct device *dev, bool enable);
+-int acpi_pm_set_bridge_wakeup(struct device *dev, bool enable);
+ #else
+ static inline void acpi_pm_wakeup_event(struct device *dev)
+ {
+@@ -651,10 +650,6 @@ static inline int acpi_pm_set_device_wakeup(struct device *dev, bool enable)
+ {
+ 	return -ENODEV;
+ }
+-static inline int acpi_pm_set_bridge_wakeup(struct device *dev, bool enable)
+-{
+-	return -ENODEV;
+-}
+ #endif
+ 
+ #ifdef CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 8667d0cdc71e7..8bde32cf97115 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2878,8 +2878,7 @@ extern int inode_needs_sync(struct inode *inode);
+ extern int generic_delete_inode(struct inode *inode);
+ static inline int generic_drop_inode(struct inode *inode)
+ {
+-	return !inode->i_nlink || inode_unhashed(inode) ||
+-		(inode->i_state & I_DONTCACHE);
++	return !inode->i_nlink || inode_unhashed(inode);
+ }
+ extern void d_mark_dontcache(struct inode *inode);
+ 
+diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
+index f8529a3a29234..a2e42d3cd87cf 100644
+--- a/include/linux/fsnotify_backend.h
++++ b/include/linux/fsnotify_backend.h
+@@ -137,6 +137,7 @@ struct mem_cgroup;
+  *		if @file_name is not NULL, this is the directory that
+  *		@file_name is relative to.
+  * @file_name:	optional file name associated with event
++ * @cookie:	inotify rename cookie
+  *
+  * free_group_priv - called when a group refcnt hits 0 to clean up the private union
+  * freeing_mark - called when a mark is being destroyed for some reason.  The group
+@@ -151,7 +152,7 @@ struct fsnotify_ops {
+ 			    struct fsnotify_iter_info *iter_info);
+ 	int (*handle_inode_event)(struct fsnotify_mark *mark, u32 mask,
+ 			    struct inode *inode, struct inode *dir,
+-			    const struct qstr *file_name);
++			    const struct qstr *file_name, u32 cookie);
+ 	void (*free_group_priv)(struct fsnotify_group *group);
+ 	void (*freeing_mark)(struct fsnotify_mark *mark, struct fsnotify_group *group);
+ 	void (*free_event)(struct fsnotify_event *event);
+@@ -277,7 +278,7 @@ static inline const struct path *fsnotify_data_path(const void *data,
+ 
+ enum fsnotify_obj_type {
+ 	FSNOTIFY_OBJ_TYPE_INODE,
+-	FSNOTIFY_OBJ_TYPE_CHILD,
++	FSNOTIFY_OBJ_TYPE_PARENT,
+ 	FSNOTIFY_OBJ_TYPE_VFSMOUNT,
+ 	FSNOTIFY_OBJ_TYPE_SB,
+ 	FSNOTIFY_OBJ_TYPE_COUNT,
+@@ -285,7 +286,7 @@ enum fsnotify_obj_type {
+ };
+ 
+ #define FSNOTIFY_OBJ_TYPE_INODE_FL	(1U << FSNOTIFY_OBJ_TYPE_INODE)
+-#define FSNOTIFY_OBJ_TYPE_CHILD_FL	(1U << FSNOTIFY_OBJ_TYPE_CHILD)
++#define FSNOTIFY_OBJ_TYPE_PARENT_FL	(1U << FSNOTIFY_OBJ_TYPE_PARENT)
+ #define FSNOTIFY_OBJ_TYPE_VFSMOUNT_FL	(1U << FSNOTIFY_OBJ_TYPE_VFSMOUNT)
+ #define FSNOTIFY_OBJ_TYPE_SB_FL		(1U << FSNOTIFY_OBJ_TYPE_SB)
+ #define FSNOTIFY_OBJ_ALL_TYPES_MASK	((1U << FSNOTIFY_OBJ_TYPE_COUNT) - 1)
+@@ -330,7 +331,7 @@ static inline struct fsnotify_mark *fsnotify_iter_##name##_mark( \
+ }
+ 
+ FSNOTIFY_ITER_FUNCS(inode, INODE)
+-FSNOTIFY_ITER_FUNCS(child, CHILD)
++FSNOTIFY_ITER_FUNCS(parent, PARENT)
+ FSNOTIFY_ITER_FUNCS(vfsmount, VFSMOUNT)
+ FSNOTIFY_ITER_FUNCS(sb, SB)
+ 
+diff --git a/include/linux/iio/adc/ad_sigma_delta.h b/include/linux/iio/adc/ad_sigma_delta.h
+index a3a838dcf8e4a..7199280d89ca4 100644
+--- a/include/linux/iio/adc/ad_sigma_delta.h
++++ b/include/linux/iio/adc/ad_sigma_delta.h
+@@ -79,8 +79,12 @@ struct ad_sigma_delta {
+ 	/*
+ 	 * DMA (thus cache coherency maintenance) requires the
+ 	 * transfer buffers to live in their own cache lines.
++	 * 'tx_buf' is up to 32 bits.
++	 * 'rx_buf' is up to 32 bits per sample + 64 bit timestamp,
++	 * rounded to 16 bytes to take into account padding.
+ 	 */
+-	uint8_t				data[4] ____cacheline_aligned;
++	uint8_t				tx_buf[4] ____cacheline_aligned;
++	uint8_t				rx_buf[16] __aligned(8);
+ };
+ 
+ static inline int ad_sigma_delta_set_channel(struct ad_sigma_delta *sd,
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 5a9238f6caad9..915f4f100383b 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -14,6 +14,7 @@
+ #include <linux/uprobes.h>
+ #include <linux/page-flags-layout.h>
+ #include <linux/workqueue.h>
++#include <linux/seqlock.h>
+ 
+ #include <asm/mmu.h>
+ 
+@@ -446,6 +447,13 @@ struct mm_struct {
+ 		 */
+ 		atomic_t has_pinned;
+ 
++		/**
++		 * @write_protect_seq: Locked when any thread is write
++		 * protecting pages mapped by this mm to enforce a later COW,
++		 * for instance during page table copying for fork().
++		 */
++		seqcount_t write_protect_seq;
++
+ #ifdef CONFIG_MMU
+ 		atomic_long_t pgtables_bytes;	/* PTE page table pages */
+ #endif
+diff --git a/include/linux/of.h b/include/linux/of.h
+index 5d51891cbf1a6..af655d264f10f 100644
+--- a/include/linux/of.h
++++ b/include/linux/of.h
+@@ -1300,6 +1300,7 @@ static inline int of_get_available_child_count(const struct device_node *np)
+ #define _OF_DECLARE(table, name, compat, fn, fn_type)			\
+ 	static const struct of_device_id __of_table_##name		\
+ 		__used __section("__" #table "_of_table")		\
++		__aligned(__alignof__(struct of_device_id))		\
+ 		 = { .compatible = compat,				\
+ 		     .data = (fn == (fn_type)NULL) ? fn : fn  }
+ #else
+diff --git a/include/linux/proc_fs.h b/include/linux/proc_fs.h
+index 270cab43ca3da..000cc0533c336 100644
+--- a/include/linux/proc_fs.h
++++ b/include/linux/proc_fs.h
+@@ -80,6 +80,7 @@ extern void proc_flush_pid(struct pid *);
+ 
+ extern struct proc_dir_entry *proc_symlink(const char *,
+ 		struct proc_dir_entry *, const char *);
++struct proc_dir_entry *_proc_mkdir(const char *, umode_t, struct proc_dir_entry *, void *, bool);
+ extern struct proc_dir_entry *proc_mkdir(const char *, struct proc_dir_entry *);
+ extern struct proc_dir_entry *proc_mkdir_data(const char *, umode_t,
+ 					      struct proc_dir_entry *, void *);
+@@ -162,6 +163,11 @@ static inline struct proc_dir_entry *proc_symlink(const char *name,
+ static inline struct proc_dir_entry *proc_mkdir(const char *name,
+ 	struct proc_dir_entry *parent) {return NULL;}
+ static inline struct proc_dir_entry *proc_create_mount_point(const char *name) { return NULL; }
++static inline struct proc_dir_entry *_proc_mkdir(const char *name, umode_t mode,
++		struct proc_dir_entry *parent, void *data, bool force_lookup)
++{
++	return NULL;
++}
+ static inline struct proc_dir_entry *proc_mkdir_data(const char *name,
+ 	umode_t mode, struct proc_dir_entry *parent, void *data) { return NULL; }
+ static inline struct proc_dir_entry *proc_mkdir_mode(const char *name,
+@@ -199,7 +205,7 @@ struct net;
+ static inline struct proc_dir_entry *proc_net_mkdir(
+ 	struct net *net, const char *name, struct proc_dir_entry *parent)
+ {
+-	return proc_mkdir_data(name, 0, parent, net);
++	return _proc_mkdir(name, 0, parent, net, true);
+ }
+ 
+ struct ns_common;
+diff --git a/include/linux/rmap.h b/include/linux/rmap.h
+index 3a6adfa70fb0e..70085ca1a3fc9 100644
+--- a/include/linux/rmap.h
++++ b/include/linux/rmap.h
+@@ -91,7 +91,6 @@ enum ttu_flags {
+ 
+ 	TTU_SPLIT_HUGE_PMD	= 0x4,	/* split huge PMD if any */
+ 	TTU_IGNORE_MLOCK	= 0x8,	/* ignore mlock */
+-	TTU_IGNORE_ACCESS	= 0x10,	/* don't age */
+ 	TTU_IGNORE_HWPOISON	= 0x20,	/* corrupted page is recoverable */
+ 	TTU_BATCH_FLUSH		= 0x40,	/* Batch TLB flushes where possible
+ 					 * and caller guarantees they will
+diff --git a/include/linux/seq_buf.h b/include/linux/seq_buf.h
+index fb0205d87d3c1..9d6c28cc4d8f2 100644
+--- a/include/linux/seq_buf.h
++++ b/include/linux/seq_buf.h
+@@ -30,7 +30,7 @@ static inline void seq_buf_clear(struct seq_buf *s)
+ }
+ 
+ static inline void
+-seq_buf_init(struct seq_buf *s, unsigned char *buf, unsigned int size)
++seq_buf_init(struct seq_buf *s, char *buf, unsigned int size)
+ {
+ 	s->buffer = buf;
+ 	s->size = size;
+diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
+index a603d48d2b2cd..3ac5037d1c3da 100644
+--- a/include/linux/sunrpc/xprt.h
++++ b/include/linux/sunrpc/xprt.h
+@@ -330,6 +330,7 @@ struct xprt_class {
+ 	struct rpc_xprt *	(*setup)(struct xprt_create *);
+ 	struct module		*owner;
+ 	char			name[32];
++	const char *		netid[];
+ };
+ 
+ /*
+diff --git a/include/linux/trace_seq.h b/include/linux/trace_seq.h
+index 6c30508fca198..5a2c650d9e1c1 100644
+--- a/include/linux/trace_seq.h
++++ b/include/linux/trace_seq.h
+@@ -12,7 +12,7 @@
+  */
+ 
+ struct trace_seq {
+-	unsigned char		buffer[PAGE_SIZE];
++	char			buffer[PAGE_SIZE];
+ 	struct seq_buf		seq;
+ 	int			full;
+ };
+@@ -51,7 +51,7 @@ static inline int trace_seq_used(struct trace_seq *s)
+  * that is about to be written to and then return the result
+  * of that write.
+  */
+-static inline unsigned char *
++static inline char *
+ trace_seq_buffer_ptr(struct trace_seq *s)
+ {
+ 	return s->buffer + seq_buf_used(&s->seq);
+diff --git a/include/media/v4l2-fwnode.h b/include/media/v4l2-fwnode.h
+index c090742765431..ed0840f3d5dff 100644
+--- a/include/media/v4l2-fwnode.h
++++ b/include/media/v4l2-fwnode.h
+@@ -231,6 +231,9 @@ struct v4l2_fwnode_connector {
+  * guessing @vep.bus_type between CSI-2 D-PHY, parallel and BT.656 busses is
+  * supported. NEVER RELY ON GUESSING @vep.bus_type IN NEW DRIVERS!
+  *
++ * The caller is required to initialise all fields of @vep, either with
++ * explicitly values, or by zeroing them.
++ *
+  * The function does not change the V4L2 fwnode endpoint state if it fails.
+  *
+  * NOTE: This function does not parse properties the size of which is variable
+@@ -273,6 +276,9 @@ void v4l2_fwnode_endpoint_free(struct v4l2_fwnode_endpoint *vep);
+  * guessing @vep.bus_type between CSI-2 D-PHY, parallel and BT.656 busses is
+  * supported. NEVER RELY ON GUESSING @vep.bus_type IN NEW DRIVERS!
+  *
++ * The caller is required to initialise all fields of @vep, either with
++ * explicitly values, or by zeroing them.
++ *
+  * The function does not change the V4L2 fwnode endpoint state if it fails.
+  *
+  * v4l2_fwnode_endpoint_alloc_parse() has two important differences to
+diff --git a/include/media/v4l2-mediabus.h b/include/media/v4l2-mediabus.h
+index 59b1de1971142..c20e2dc6d4320 100644
+--- a/include/media/v4l2-mediabus.h
++++ b/include/media/v4l2-mediabus.h
+@@ -103,6 +103,7 @@
+  * @V4L2_MBUS_CCP2:	CCP2 (Compact Camera Port 2)
+  * @V4L2_MBUS_CSI2_DPHY: MIPI CSI-2 serial interface, with D-PHY
+  * @V4L2_MBUS_CSI2_CPHY: MIPI CSI-2 serial interface, with C-PHY
++ * @V4L2_MBUS_INVALID:	invalid bus type (keep as last)
+  */
+ enum v4l2_mbus_type {
+ 	V4L2_MBUS_UNKNOWN,
+@@ -112,6 +113,7 @@ enum v4l2_mbus_type {
+ 	V4L2_MBUS_CCP2,
+ 	V4L2_MBUS_CSI2_DPHY,
+ 	V4L2_MBUS_CSI2_CPHY,
++	V4L2_MBUS_INVALID,
+ };
+ 
+ /**
+diff --git a/include/rdma/uverbs_ioctl.h b/include/rdma/uverbs_ioctl.h
+index b00270c72740f..94fac55772f57 100644
+--- a/include/rdma/uverbs_ioctl.h
++++ b/include/rdma/uverbs_ioctl.h
+@@ -862,6 +862,16 @@ static inline __malloc void *uverbs_zalloc(struct uverbs_attr_bundle *bundle,
+ {
+ 	return _uverbs_alloc(bundle, size, GFP_KERNEL | __GFP_ZERO);
+ }
++
++static inline __malloc void *uverbs_kcalloc(struct uverbs_attr_bundle *bundle,
++					    size_t n, size_t size)
++{
++	size_t bytes;
++
++	if (unlikely(check_mul_overflow(n, size, &bytes)))
++		return ERR_PTR(-EOVERFLOW);
++	return uverbs_zalloc(bundle, bytes);
++}
+ int _uverbs_get_const(s64 *to, const struct uverbs_attr_bundle *attrs_bundle,
+ 		      size_t idx, s64 lower_bound, u64 upper_bound,
+ 		      s64 *def_val);
+diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
+index f1ce2c4c077e2..ec84ad1065683 100644
+--- a/include/uapi/linux/android/binder.h
++++ b/include/uapi/linux/android/binder.h
+@@ -248,6 +248,7 @@ enum transaction_flags {
+ 	TF_ROOT_OBJECT	= 0x04,	/* contents are the component's root object */
+ 	TF_STATUS_CODE	= 0x08,	/* contents are a 32-bit status code */
+ 	TF_ACCEPT_FDS	= 0x10,	/* allow replies with file descriptors */
++	TF_CLEAR_BUF	= 0x20,	/* clear buffer on txn complete */
+ };
+ 
+ struct binder_transaction_data {
+diff --git a/include/uapi/linux/devlink.h b/include/uapi/linux/devlink.h
+index 5203f54a2be1c..cf89c318f2ac9 100644
+--- a/include/uapi/linux/devlink.h
++++ b/include/uapi/linux/devlink.h
+@@ -322,7 +322,7 @@ enum devlink_reload_limit {
+ 	DEVLINK_RELOAD_LIMIT_MAX = __DEVLINK_RELOAD_LIMIT_MAX - 1
+ };
+ 
+-#define DEVLINK_RELOAD_LIMITS_VALID_MASK (BIT(__DEVLINK_RELOAD_LIMIT_MAX) - 1)
++#define DEVLINK_RELOAD_LIMITS_VALID_MASK (_BITUL(__DEVLINK_RELOAD_LIMIT_MAX) - 1)
+ 
+ enum devlink_attr {
+ 	/* don't change the order or add anything between, this is ABI! */
+diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
+index 5a8315e6d8a60..00c7235ae93e7 100644
+--- a/include/xen/xenbus.h
++++ b/include/xen/xenbus.h
+@@ -61,6 +61,15 @@ struct xenbus_watch
+ 	/* Path being watched. */
+ 	const char *node;
+ 
++	unsigned int nr_pending;
++
++	/*
++	 * Called just before enqueing new event while a spinlock is held.
++	 * The event will be discarded if this callback returns false.
++	 */
++	bool (*will_handle)(struct xenbus_watch *,
++			      const char *path, const char *token);
++
+ 	/* Callback (executed in a process context with no locks held). */
+ 	void (*callback)(struct xenbus_watch *,
+ 			 const char *path, const char *token);
+@@ -197,10 +206,14 @@ void xenbus_probe(struct work_struct *);
+ 
+ int xenbus_watch_path(struct xenbus_device *dev, const char *path,
+ 		      struct xenbus_watch *watch,
++		      bool (*will_handle)(struct xenbus_watch *,
++					  const char *, const char *),
+ 		      void (*callback)(struct xenbus_watch *,
+ 				       const char *, const char *));
+-__printf(4, 5)
++__printf(5, 6)
+ int xenbus_watch_pathfmt(struct xenbus_device *dev, struct xenbus_watch *watch,
++			 bool (*will_handle)(struct xenbus_watch *,
++					     const char *, const char *),
+ 			 void (*callback)(struct xenbus_watch *,
+ 					  const char *, const char *),
+ 			 const char *pathfmt, ...);
+diff --git a/kernel/audit_fsnotify.c b/kernel/audit_fsnotify.c
+index bfcfcd61adb64..5b3f01da172bc 100644
+--- a/kernel/audit_fsnotify.c
++++ b/kernel/audit_fsnotify.c
+@@ -154,7 +154,7 @@ static void audit_autoremove_mark_rule(struct audit_fsnotify_mark *audit_mark)
+ /* Update mark data in audit rules based on fsnotify events. */
+ static int audit_mark_handle_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 				   struct inode *inode, struct inode *dir,
+-				   const struct qstr *dname)
++				   const struct qstr *dname, u32 cookie)
+ {
+ 	struct audit_fsnotify_mark *audit_mark;
+ 
+diff --git a/kernel/audit_tree.c b/kernel/audit_tree.c
+index 83e1c07fc99e1..6c91902f4f455 100644
+--- a/kernel/audit_tree.c
++++ b/kernel/audit_tree.c
+@@ -1037,7 +1037,7 @@ static void evict_chunk(struct audit_chunk *chunk)
+ 
+ static int audit_tree_handle_event(struct fsnotify_mark *mark, u32 mask,
+ 				   struct inode *inode, struct inode *dir,
+-				   const struct qstr *file_name)
++				   const struct qstr *file_name, u32 cookie)
+ {
+ 	return 0;
+ }
+diff --git a/kernel/audit_watch.c b/kernel/audit_watch.c
+index 246e5ba704c00..2acf7ca491542 100644
+--- a/kernel/audit_watch.c
++++ b/kernel/audit_watch.c
+@@ -466,7 +466,7 @@ void audit_remove_watch_rule(struct audit_krule *krule)
+ /* Update watch data in audit rules based on fsnotify events. */
+ static int audit_watch_handle_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 				    struct inode *inode, struct inode *dir,
+-				    const struct qstr *dname)
++				    const struct qstr *dname, u32 cookie)
+ {
+ 	struct audit_parent *parent;
+ 
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 57b5b5d0a5fdd..53c70c470a38d 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -983,25 +983,48 @@ partition_and_rebuild_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+  */
+ static void rebuild_sched_domains_locked(void)
+ {
++	struct cgroup_subsys_state *pos_css;
+ 	struct sched_domain_attr *attr;
+ 	cpumask_var_t *doms;
++	struct cpuset *cs;
+ 	int ndoms;
+ 
+ 	lockdep_assert_cpus_held();
+ 	percpu_rwsem_assert_held(&cpuset_rwsem);
+ 
+ 	/*
+-	 * We have raced with CPU hotplug. Don't do anything to avoid
++	 * If we have raced with CPU hotplug, return early to avoid
+ 	 * passing doms with offlined cpu to partition_sched_domains().
+-	 * Anyways, hotplug work item will rebuild sched domains.
++	 * Anyways, cpuset_hotplug_workfn() will rebuild sched domains.
++	 *
++	 * With no CPUs in any subpartitions, top_cpuset's effective CPUs
++	 * should be the same as the active CPUs, so checking only top_cpuset
++	 * is enough to detect racing CPU offlines.
+ 	 */
+ 	if (!top_cpuset.nr_subparts_cpus &&
+ 	    !cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
+ 		return;
+ 
+-	if (top_cpuset.nr_subparts_cpus &&
+-	   !cpumask_subset(top_cpuset.effective_cpus, cpu_active_mask))
+-		return;
++	/*
++	 * With subpartition CPUs, however, the effective CPUs of a partition
++	 * root should be only a subset of the active CPUs.  Since a CPU in any
++	 * partition root could be offlined, all must be checked.
++	 */
++	if (top_cpuset.nr_subparts_cpus) {
++		rcu_read_lock();
++		cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
++			if (!is_partition_root(cs)) {
++				pos_css = css_rightmost_descendant(pos_css);
++				continue;
++			}
++			if (!cpumask_subset(cs->effective_cpus,
++					    cpu_active_mask)) {
++				rcu_read_unlock();
++				return;
++			}
++		}
++		rcu_read_unlock();
++	}
+ 
+ 	/* Generate domain masks and attrs */
+ 	ndoms = generate_sched_domains(&doms, &attr);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 6d266388d3804..dc55f68a6ee36 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1007,6 +1007,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
+ 	mm->vmacache_seqnum = 0;
+ 	atomic_set(&mm->mm_users, 1);
+ 	atomic_set(&mm->mm_count, 1);
++	seqcount_init(&mm->write_protect_seq);
+ 	mmap_init_lock(mm);
+ 	INIT_LIST_HEAD(&mm->mmlist);
+ 	mm->core_state = NULL;
+diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
+index e4ca69608f3b8..c6b419db68efc 100644
+--- a/kernel/irq/irqdomain.c
++++ b/kernel/irq/irqdomain.c
+@@ -1373,8 +1373,15 @@ static void irq_domain_free_irqs_hierarchy(struct irq_domain *domain,
+ 					   unsigned int irq_base,
+ 					   unsigned int nr_irqs)
+ {
+-	if (domain->ops->free)
+-		domain->ops->free(domain, irq_base, nr_irqs);
++	unsigned int i;
++
++	if (!domain->ops->free)
++		return;
++
++	for (i = 0; i < nr_irqs; i++) {
++		if (irq_domain_get_irq_data(domain, irq_base + i))
++			domain->ops->free(domain, irq_base + i, 1);
++	}
+ }
+ 
+ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain,
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index bd04b09b84b32..593df7edfe97f 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -177,7 +177,7 @@ module_param(rcu_unlock_delay, int, 0444);
+  * per-CPU. Object size is equal to one page. This value
+  * can be changed at boot time.
+  */
+-static int rcu_min_cached_objs = 2;
++static int rcu_min_cached_objs = 5;
+ module_param(rcu_min_cached_objs, int, 0444);
+ 
+ /* Retrieve RCU kthreads priority for rcutorture */
+@@ -928,8 +928,8 @@ void __rcu_irq_enter_check_tick(void)
+ {
+ 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
+ 
+-	 // Enabling the tick is unsafe in NMI handlers.
+-	if (WARN_ON_ONCE(in_nmi()))
++	// If we're here from NMI there's nothing to do.
++	if (in_nmi())
+ 		return;
+ 
+ 	RCU_LOCKDEP_WARN(rcu_dynticks_curr_cpu_in_eqs(),
+@@ -1093,8 +1093,11 @@ static void rcu_disable_urgency_upon_qs(struct rcu_data *rdp)
+  * CPU can safely enter RCU read-side critical sections.  In other words,
+  * if the current CPU is not in its idle loop or is in an interrupt or
+  * NMI handler, return true.
++ *
++ * Make notrace because it can be called by the internal functions of
++ * ftrace, and making this notrace removes unnecessary recursion calls.
+  */
+-bool rcu_is_watching(void)
++notrace bool rcu_is_watching(void)
+ {
+ 	bool ret;
+ 
+@@ -3084,6 +3087,9 @@ struct kfree_rcu_cpu_work {
+  *	In order to save some per-cpu space the list is singular.
+  *	Even though it is lockless an access has to be protected by the
+  *	per-cpu lock.
++ * @page_cache_work: A work to refill the cache when it is empty
++ * @work_in_progress: Indicates that page_cache_work is running
++ * @hrtimer: A hrtimer for scheduling a page_cache_work
+  * @nr_bkv_objs: number of allocated objects at @bkvcache.
+  *
+  * This is a per-CPU structure.  The reason that it is not included in
+@@ -3100,6 +3106,11 @@ struct kfree_rcu_cpu {
+ 	bool monitor_todo;
+ 	bool initialized;
+ 	int count;
++
++	struct work_struct page_cache_work;
++	atomic_t work_in_progress;
++	struct hrtimer hrtimer;
++
+ 	struct llist_head bkvcache;
+ 	int nr_bkv_objs;
+ };
+@@ -3217,10 +3228,10 @@ static void kfree_rcu_work(struct work_struct *work)
+ 			}
+ 			rcu_lock_release(&rcu_callback_map);
+ 
+-			krcp = krc_this_cpu_lock(&flags);
++			raw_spin_lock_irqsave(&krcp->lock, flags);
+ 			if (put_cached_bnode(krcp, bkvhead[i]))
+ 				bkvhead[i] = NULL;
+-			krc_this_cpu_unlock(krcp, flags);
++			raw_spin_unlock_irqrestore(&krcp->lock, flags);
+ 
+ 			if (bkvhead[i])
+ 				free_page((unsigned long) bkvhead[i]);
+@@ -3347,6 +3358,57 @@ static void kfree_rcu_monitor(struct work_struct *work)
+ 		raw_spin_unlock_irqrestore(&krcp->lock, flags);
+ }
+ 
++static enum hrtimer_restart
++schedule_page_work_fn(struct hrtimer *t)
++{
++	struct kfree_rcu_cpu *krcp =
++		container_of(t, struct kfree_rcu_cpu, hrtimer);
++
++	queue_work(system_highpri_wq, &krcp->page_cache_work);
++	return HRTIMER_NORESTART;
++}
++
++static void fill_page_cache_func(struct work_struct *work)
++{
++	struct kvfree_rcu_bulk_data *bnode;
++	struct kfree_rcu_cpu *krcp =
++		container_of(work, struct kfree_rcu_cpu,
++			page_cache_work);
++	unsigned long flags;
++	bool pushed;
++	int i;
++
++	for (i = 0; i < rcu_min_cached_objs; i++) {
++		bnode = (struct kvfree_rcu_bulk_data *)
++			__get_free_page(GFP_KERNEL | __GFP_NOWARN);
++
++		if (bnode) {
++			raw_spin_lock_irqsave(&krcp->lock, flags);
++			pushed = put_cached_bnode(krcp, bnode);
++			raw_spin_unlock_irqrestore(&krcp->lock, flags);
++
++			if (!pushed) {
++				free_page((unsigned long) bnode);
++				break;
++			}
++		}
++	}
++
++	atomic_set(&krcp->work_in_progress, 0);
++}
++
++static void
++run_page_cache_worker(struct kfree_rcu_cpu *krcp)
++{
++	if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
++			!atomic_xchg(&krcp->work_in_progress, 1)) {
++		hrtimer_init(&krcp->hrtimer, CLOCK_MONOTONIC,
++			HRTIMER_MODE_REL);
++		krcp->hrtimer.function = schedule_page_work_fn;
++		hrtimer_start(&krcp->hrtimer, 0, HRTIMER_MODE_REL);
++	}
++}
++
+ static inline bool
+ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
+ {
+@@ -3363,32 +3425,8 @@ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
+ 	if (!krcp->bkvhead[idx] ||
+ 			krcp->bkvhead[idx]->nr_records == KVFREE_BULK_MAX_ENTR) {
+ 		bnode = get_cached_bnode(krcp);
+-		if (!bnode) {
+-			/*
+-			 * To keep this path working on raw non-preemptible
+-			 * sections, prevent the optional entry into the
+-			 * allocator as it uses sleeping locks. In fact, even
+-			 * if the caller of kfree_rcu() is preemptible, this
+-			 * path still is not, as krcp->lock is a raw spinlock.
+-			 * With additional page pre-allocation in the works,
+-			 * hitting this return is going to be much less likely.
+-			 */
+-			if (IS_ENABLED(CONFIG_PREEMPT_RT))
+-				return false;
+-
+-			/*
+-			 * NOTE: For one argument of kvfree_rcu() we can
+-			 * drop the lock and get the page in sleepable
+-			 * context. That would allow to maintain an array
+-			 * for the CONFIG_PREEMPT_RT as well if no cached
+-			 * pages are available.
+-			 */
+-			bnode = (struct kvfree_rcu_bulk_data *)
+-				__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
+-		}
+-
+ 		/* Switch to emergency path. */
+-		if (unlikely(!bnode))
++		if (!bnode)
+ 			return false;
+ 
+ 		/* Initialize the new block. */
+@@ -3452,12 +3490,10 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
+ 		goto unlock_return;
+ 	}
+ 
+-	/*
+-	 * Under high memory pressure GFP_NOWAIT can fail,
+-	 * in that case the emergency path is maintained.
+-	 */
+ 	success = kvfree_call_rcu_add_ptr_to_bulk(krcp, ptr);
+ 	if (!success) {
++		run_page_cache_worker(krcp);
++
+ 		if (head == NULL)
+ 			// Inline if kvfree_rcu(one_arg) call.
+ 			goto unlock_return;
+@@ -4449,24 +4485,14 @@ static void __init kfree_rcu_batch_init(void)
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
+-		struct kvfree_rcu_bulk_data *bnode;
+ 
+ 		for (i = 0; i < KFREE_N_BATCHES; i++) {
+ 			INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
+ 			krcp->krw_arr[i].krcp = krcp;
+ 		}
+ 
+-		for (i = 0; i < rcu_min_cached_objs; i++) {
+-			bnode = (struct kvfree_rcu_bulk_data *)
+-				__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
+-
+-			if (bnode)
+-				put_cached_bnode(krcp, bnode);
+-			else
+-				pr_err("Failed to preallocate for %d CPU!\n", cpu);
+-		}
+-
+ 		INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor);
++		INIT_WORK(&krcp->page_cache_work, fill_page_cache_func);
+ 		krcp->initialized = true;
+ 	}
+ 	if (register_shrinker(&kfree_rcu_shrinker))
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index e7e453492cffc..77aa0e788b9b7 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -6100,12 +6100,8 @@ static void do_sched_yield(void)
+ 	schedstat_inc(rq->yld_count);
+ 	current->sched_class->yield_task(rq);
+ 
+-	/*
+-	 * Since we are going to call schedule() anyway, there's
+-	 * no need to preempt or enable interrupts:
+-	 */
+ 	preempt_disable();
+-	rq_unlock(rq, &rf);
++	rq_unlock_irq(rq, &rf);
+ 	sched_preempt_enable_no_resched();
+ 
+ 	schedule();
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 1d3c97268ec0d..8d06d1f4e2f7b 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2547,7 +2547,7 @@ int sched_dl_global_validate(void)
+ 	u64 period = global_rt_period();
+ 	u64 new_bw = to_ratio(period, runtime);
+ 	struct dl_bw *dl_b;
+-	int cpu, ret = 0;
++	int cpu, cpus, ret = 0;
+ 	unsigned long flags;
+ 
+ 	/*
+@@ -2562,9 +2562,10 @@ int sched_dl_global_validate(void)
+ 	for_each_possible_cpu(cpu) {
+ 		rcu_read_lock_sched();
+ 		dl_b = dl_bw_of(cpu);
++		cpus = dl_bw_cpus(cpu);
+ 
+ 		raw_spin_lock_irqsave(&dl_b->lock, flags);
+-		if (new_bw < dl_b->total_bw)
++		if (new_bw * cpus < dl_b->total_bw)
+ 			ret = -EBUSY;
+ 		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+ 
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index df80bfcea92eb..c122176c627ec 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -257,30 +257,6 @@ struct rt_bandwidth {
+ 
+ void __dl_clear_params(struct task_struct *p);
+ 
+-/*
+- * To keep the bandwidth of -deadline tasks and groups under control
+- * we need some place where:
+- *  - store the maximum -deadline bandwidth of the system (the group);
+- *  - cache the fraction of that bandwidth that is currently allocated.
+- *
+- * This is all done in the data structure below. It is similar to the
+- * one used for RT-throttling (rt_bandwidth), with the main difference
+- * that, since here we are only interested in admission control, we
+- * do not decrease any runtime while the group "executes", neither we
+- * need a timer to replenish it.
+- *
+- * With respect to SMP, the bandwidth is given on a per-CPU basis,
+- * meaning that:
+- *  - dl_bw (< 100%) is the bandwidth of the system (group) on each CPU;
+- *  - dl_total_bw array contains, in the i-eth element, the currently
+- *    allocated bandwidth on the i-eth CPU.
+- * Moreover, groups consume bandwidth on each CPU, while tasks only
+- * consume bandwidth on the CPU they're running on.
+- * Finally, dl_total_bw_cpu is used to cache the index of dl_total_bw
+- * that will be shown the next time the proc or cgroup controls will
+- * be red. It on its turn can be changed by writing on its own
+- * control.
+- */
+ struct dl_bandwidth {
+ 	raw_spinlock_t		dl_runtime_lock;
+ 	u64			dl_runtime;
+@@ -292,6 +268,24 @@ static inline int dl_bandwidth_enabled(void)
+ 	return sysctl_sched_rt_runtime >= 0;
+ }
+ 
++/*
++ * To keep the bandwidth of -deadline tasks under control
++ * we need some place where:
++ *  - store the maximum -deadline bandwidth of each cpu;
++ *  - cache the fraction of bandwidth that is currently allocated in
++ *    each root domain;
++ *
++ * This is all done in the data structure below. It is similar to the
++ * one used for RT-throttling (rt_bandwidth), with the main difference
++ * that, since here we are only interested in admission control, we
++ * do not decrease any runtime while the group "executes", neither we
++ * need a timer to replenish it.
++ *
++ * With respect to SMP, bandwidth is given on a per root domain basis,
++ * meaning that:
++ *  - bw (< 100%) is the deadline bandwidth of each CPU;
++ *  - total_bw is the currently allocated bandwidth in each root domain;
++ */
+ struct dl_bw {
+ 	raw_spinlock_t		lock;
+ 	u64			bw;
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index a125ea5e04cd7..0dde84b9d29fe 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -2041,10 +2041,12 @@ struct bpf_raw_event_map *bpf_get_raw_tracepoint(const char *name)
+ 
+ void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp)
+ {
+-	struct module *mod = __module_address((unsigned long)btp);
++	struct module *mod;
+ 
+-	if (mod)
+-		module_put(mod);
++	preempt_disable();
++	mod = __module_address((unsigned long)btp);
++	module_put(mod);
++	preempt_enable();
+ }
+ 
+ static __always_inline
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index a6268e09160a5..ddeb865706ba4 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -129,7 +129,16 @@ int ring_buffer_print_entry_header(struct trace_seq *s)
+ #define RB_ALIGNMENT		4U
+ #define RB_MAX_SMALL_DATA	(RB_ALIGNMENT * RINGBUF_TYPE_DATA_TYPE_LEN_MAX)
+ #define RB_EVNT_MIN_SIZE	8U	/* two 32bit words */
+-#define RB_ALIGN_DATA		__aligned(RB_ALIGNMENT)
++
++#ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
++# define RB_FORCE_8BYTE_ALIGNMENT	0
++# define RB_ARCH_ALIGNMENT		RB_ALIGNMENT
++#else
++# define RB_FORCE_8BYTE_ALIGNMENT	1
++# define RB_ARCH_ALIGNMENT		8U
++#endif
++
++#define RB_ALIGN_DATA		__aligned(RB_ARCH_ALIGNMENT)
+ 
+ /* define RINGBUF_TYPE_DATA for 'case RINGBUF_TYPE_DATA:' */
+ #define RINGBUF_TYPE_DATA 0 ... RINGBUF_TYPE_DATA_TYPE_LEN_MAX
+@@ -2719,7 +2728,7 @@ rb_update_event(struct ring_buffer_per_cpu *cpu_buffer,
+ 
+ 	event->time_delta = delta;
+ 	length -= RB_EVNT_HDR_SIZE;
+-	if (length > RB_MAX_SMALL_DATA) {
++	if (length > RB_MAX_SMALL_DATA || RB_FORCE_8BYTE_ALIGNMENT) {
+ 		event->type_len = 0;
+ 		event->array[0] = length;
+ 	} else
+@@ -2734,11 +2743,11 @@ static unsigned rb_calculate_event_length(unsigned length)
+ 	if (!length)
+ 		length++;
+ 
+-	if (length > RB_MAX_SMALL_DATA)
++	if (length > RB_MAX_SMALL_DATA || RB_FORCE_8BYTE_ALIGNMENT)
+ 		length += sizeof(event.array[0]);
+ 
+ 	length += RB_EVNT_HDR_SIZE;
+-	length = ALIGN(length, RB_ALIGNMENT);
++	length = ALIGN(length, RB_ARCH_ALIGNMENT);
+ 
+ 	/*
+ 	 * In case the time delta is larger than the 27 bits for it
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 06134189e9a72..3119d68d012df 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -68,10 +68,21 @@ bool ring_buffer_expanded;
+ static bool __read_mostly tracing_selftest_running;
+ 
+ /*
+- * If a tracer is running, we do not want to run SELFTEST.
++ * If boot-time tracing including tracers/events via kernel cmdline
++ * is running, we do not want to run SELFTEST.
+  */
+ bool __read_mostly tracing_selftest_disabled;
+ 
++#ifdef CONFIG_FTRACE_STARTUP_TEST
++void __init disable_tracing_selftest(const char *reason)
++{
++	if (!tracing_selftest_disabled) {
++		tracing_selftest_disabled = true;
++		pr_info("Ftrace startup test is disabled due to %s\n", reason);
++	}
++}
++#endif
++
+ /* Pipe tracepoints to printk */
+ struct trace_iterator *tracepoint_print_iter;
+ int tracepoint_printk;
+@@ -2113,11 +2124,7 @@ int __init register_tracer(struct tracer *type)
+ 	apply_trace_boot_options();
+ 
+ 	/* disable other selftests, since this will break it. */
+-	tracing_selftest_disabled = true;
+-#ifdef CONFIG_FTRACE_STARTUP_TEST
+-	printk(KERN_INFO "Disabling FTRACE selftests due to running tracer '%s'\n",
+-	       type->name);
+-#endif
++	disable_tracing_selftest("running a tracer");
+ 
+  out_unlock:
+ 	return ret;
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 1dadef445cd1e..6784b572ce597 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -896,6 +896,8 @@ extern bool ring_buffer_expanded;
+ extern bool tracing_selftest_disabled;
+ 
+ #ifdef CONFIG_FTRACE_STARTUP_TEST
++extern void __init disable_tracing_selftest(const char *reason);
++
+ extern int trace_selftest_startup_function(struct tracer *trace,
+ 					   struct trace_array *tr);
+ extern int trace_selftest_startup_function_graph(struct tracer *trace,
+@@ -919,6 +921,9 @@ extern int trace_selftest_startup_branch(struct tracer *trace,
+  */
+ #define __tracer_data		__refdata
+ #else
++static inline void __init disable_tracing_selftest(const char *reason)
++{
++}
+ /* Tracers are seldom changed. Optimize when selftests are disabled. */
+ #define __tracer_data		__read_mostly
+ #endif /* CONFIG_FTRACE_STARTUP_TEST */
+diff --git a/kernel/trace/trace_boot.c b/kernel/trace/trace_boot.c
+index c22a152ef0b4f..a82f03f385f89 100644
+--- a/kernel/trace/trace_boot.c
++++ b/kernel/trace/trace_boot.c
+@@ -344,6 +344,8 @@ static int __init trace_boot_init(void)
+ 	trace_boot_init_one_instance(tr, trace_node);
+ 	trace_boot_init_instances(trace_node);
+ 
++	disable_tracing_selftest("running boot-time tracing");
++
+ 	return 0;
+ }
+ /*
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 47a71f96e5bcc..802f3e7d8b8b5 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -3201,7 +3201,7 @@ static __init int setup_trace_event(char *str)
+ {
+ 	strlcpy(bootup_event_buf, str, COMMAND_LINE_SIZE);
+ 	ring_buffer_expanded = true;
+-	tracing_selftest_disabled = true;
++	disable_tracing_selftest("running event tracing");
+ 
+ 	return 1;
+ }
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index b911e9f6d9f5c..b29f92c51b1a4 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -25,11 +25,12 @@
+ 
+ /* Kprobe early definition from command line */
+ static char kprobe_boot_events_buf[COMMAND_LINE_SIZE] __initdata;
+-static bool kprobe_boot_events_enabled __initdata;
+ 
+ static int __init set_kprobe_boot_events(char *str)
+ {
+ 	strlcpy(kprobe_boot_events_buf, str, COMMAND_LINE_SIZE);
++	disable_tracing_selftest("running kprobe events");
++
+ 	return 0;
+ }
+ __setup("kprobe_event=", set_kprobe_boot_events);
+@@ -1887,8 +1888,6 @@ static __init void setup_boot_kprobe_events(void)
+ 		ret = trace_run_command(cmd, create_or_delete_trace_kprobe);
+ 		if (ret)
+ 			pr_warn("Failed to add event(%d): %s\n", ret, cmd);
+-		else
+-			kprobe_boot_events_enabled = true;
+ 
+ 		cmd = p;
+ 	}
+@@ -1973,10 +1972,8 @@ static __init int kprobe_trace_self_tests_init(void)
+ 	if (tracing_is_disabled())
+ 		return -ENODEV;
+ 
+-	if (kprobe_boot_events_enabled) {
+-		pr_info("Skipping kprobe tests due to kprobe_event on cmdline\n");
++	if (tracing_selftest_disabled)
+ 		return 0;
+-	}
+ 
+ 	target = kprobe_trace_selftest_target;
+ 
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index 4738ad48a6674..6f28b8b11ead6 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -787,7 +787,7 @@ trace_selftest_startup_function_graph(struct tracer *trace,
+ 
+ 	/* Have we just recovered from a hang? */
+ 	if (graph_hang_thresh > GRAPH_MAX_FUNC_TEST) {
+-		tracing_selftest_disabled = true;
++		disable_tracing_selftest("recovering from a hang");
+ 		ret = -1;
+ 		goto out;
+ 	}
+diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
+index bd7b3aaa93c38..c70d6347afa2b 100644
+--- a/lib/dynamic_debug.c
++++ b/lib/dynamic_debug.c
+@@ -561,9 +561,14 @@ static int ddebug_exec_queries(char *query, const char *modname)
+ int dynamic_debug_exec_queries(const char *query, const char *modname)
+ {
+ 	int rc;
+-	char *qry = kstrndup(query, PAGE_SIZE, GFP_KERNEL);
++	char *qry; /* writable copy of query */
+ 
+-	if (!query)
++	if (!query) {
++		pr_err("non-null query/command string expected\n");
++		return -EINVAL;
++	}
++	qry = kstrndup(query, PAGE_SIZE, GFP_KERNEL);
++	if (!qry)
+ 		return -ENOMEM;
+ 
+ 	rc = ddebug_exec_queries(qry, modname);
+diff --git a/mm/gup.c b/mm/gup.c
+index 98eb8e6d2609c..054ff923d3d92 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -123,6 +123,28 @@ static __maybe_unused struct page *try_grab_compound_head(struct page *page,
+ 	return NULL;
+ }
+ 
++static void put_compound_head(struct page *page, int refs, unsigned int flags)
++{
++	if (flags & FOLL_PIN) {
++		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_RELEASED,
++				    refs);
++
++		if (hpage_pincount_available(page))
++			hpage_pincount_sub(page, refs);
++		else
++			refs *= GUP_PIN_COUNTING_BIAS;
++	}
++
++	VM_BUG_ON_PAGE(page_ref_count(page) < refs, page);
++	/*
++	 * Calling put_page() for each ref is unnecessarily slow. Only the last
++	 * ref needs a put_page().
++	 */
++	if (refs > 1)
++		page_ref_sub(page, refs - 1);
++	put_page(page);
++}
++
+ /**
+  * try_grab_page() - elevate a page's refcount by a flag-dependent amount
+  *
+@@ -177,41 +199,6 @@ bool __must_check try_grab_page(struct page *page, unsigned int flags)
+ 	return true;
+ }
+ 
+-#ifdef CONFIG_DEV_PAGEMAP_OPS
+-static bool __unpin_devmap_managed_user_page(struct page *page)
+-{
+-	int count, refs = 1;
+-
+-	if (!page_is_devmap_managed(page))
+-		return false;
+-
+-	if (hpage_pincount_available(page))
+-		hpage_pincount_sub(page, 1);
+-	else
+-		refs = GUP_PIN_COUNTING_BIAS;
+-
+-	count = page_ref_sub_return(page, refs);
+-
+-	mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_RELEASED, 1);
+-	/*
+-	 * devmap page refcounts are 1-based, rather than 0-based: if
+-	 * refcount is 1, then the page is free and the refcount is
+-	 * stable because nobody holds a reference on the page.
+-	 */
+-	if (count == 1)
+-		free_devmap_managed_page(page);
+-	else if (!count)
+-		__put_page(page);
+-
+-	return true;
+-}
+-#else
+-static bool __unpin_devmap_managed_user_page(struct page *page)
+-{
+-	return false;
+-}
+-#endif /* CONFIG_DEV_PAGEMAP_OPS */
+-
+ /**
+  * unpin_user_page() - release a dma-pinned page
+  * @page:            pointer to page to be released
+@@ -223,28 +210,7 @@ static bool __unpin_devmap_managed_user_page(struct page *page)
+  */
+ void unpin_user_page(struct page *page)
+ {
+-	int refs = 1;
+-
+-	page = compound_head(page);
+-
+-	/*
+-	 * For devmap managed pages we need to catch refcount transition from
+-	 * GUP_PIN_COUNTING_BIAS to 1, when refcount reach one it means the
+-	 * page is free and we need to inform the device driver through
+-	 * callback. See include/linux/memremap.h and HMM for details.
+-	 */
+-	if (__unpin_devmap_managed_user_page(page))
+-		return;
+-
+-	if (hpage_pincount_available(page))
+-		hpage_pincount_sub(page, 1);
+-	else
+-		refs = GUP_PIN_COUNTING_BIAS;
+-
+-	if (page_ref_sub_and_test(page, refs))
+-		__put_page(page);
+-
+-	mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_RELEASED, 1);
++	put_compound_head(compound_head(page), 1, FOLL_PIN);
+ }
+ EXPORT_SYMBOL(unpin_user_page);
+ 
+@@ -2062,29 +2028,6 @@ EXPORT_SYMBOL(get_user_pages_unlocked);
+  * This code is based heavily on the PowerPC implementation by Nick Piggin.
+  */
+ #ifdef CONFIG_HAVE_FAST_GUP
+-
+-static void put_compound_head(struct page *page, int refs, unsigned int flags)
+-{
+-	if (flags & FOLL_PIN) {
+-		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_RELEASED,
+-				    refs);
+-
+-		if (hpage_pincount_available(page))
+-			hpage_pincount_sub(page, refs);
+-		else
+-			refs *= GUP_PIN_COUNTING_BIAS;
+-	}
+-
+-	VM_BUG_ON_PAGE(page_ref_count(page) < refs, page);
+-	/*
+-	 * Calling put_page() for each ref is unnecessarily slow. Only the last
+-	 * ref needs a put_page().
+-	 */
+-	if (refs > 1)
+-		page_ref_sub(page, refs - 1);
+-	put_page(page);
+-}
+-
+ #ifdef CONFIG_GUP_GET_PTE_LOW_HIGH
+ 
+ /*
+@@ -2677,13 +2620,61 @@ static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
+ 	return ret;
+ }
+ 
+-static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
++static unsigned long lockless_pages_from_mm(unsigned long start,
++					    unsigned long end,
++					    unsigned int gup_flags,
++					    struct page **pages)
++{
++	unsigned long flags;
++	int nr_pinned = 0;
++	unsigned seq;
++
++	if (!IS_ENABLED(CONFIG_HAVE_FAST_GUP) ||
++	    !gup_fast_permitted(start, end))
++		return 0;
++
++	if (gup_flags & FOLL_PIN) {
++		seq = raw_read_seqcount(&current->mm->write_protect_seq);
++		if (seq & 1)
++			return 0;
++	}
++
++	/*
++	 * Disable interrupts. The nested form is used, in order to allow full,
++	 * general purpose use of this routine.
++	 *
++	 * With interrupts disabled, we block page table pages from being freed
++	 * from under us. See struct mmu_table_batch comments in
++	 * include/asm-generic/tlb.h for more details.
++	 *
++	 * We do not adopt an rcu_read_lock() here as we also want to block IPIs
++	 * that come from THPs splitting.
++	 */
++	local_irq_save(flags);
++	gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
++	local_irq_restore(flags);
++
++	/*
++	 * When pinning pages for DMA there could be a concurrent write protect
++	 * from fork() via copy_page_range(), in this case always fail fast GUP.
++	 */
++	if (gup_flags & FOLL_PIN) {
++		if (read_seqcount_retry(&current->mm->write_protect_seq, seq)) {
++			unpin_user_pages(pages, nr_pinned);
++			return 0;
++		}
++	}
++	return nr_pinned;
++}
++
++static int internal_get_user_pages_fast(unsigned long start,
++					unsigned long nr_pages,
+ 					unsigned int gup_flags,
+ 					struct page **pages)
+ {
+-	unsigned long addr, len, end;
+-	unsigned long flags;
+-	int nr_pinned = 0, ret = 0;
++	unsigned long len, end;
++	unsigned long nr_pinned;
++	int ret;
+ 
+ 	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
+ 				       FOLL_FORCE | FOLL_PIN | FOLL_GET |
+@@ -2697,54 +2688,33 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
+ 		might_lock_read(&current->mm->mmap_lock);
+ 
+ 	start = untagged_addr(start) & PAGE_MASK;
+-	addr = start;
+-	len = (unsigned long) nr_pages << PAGE_SHIFT;
+-	end = start + len;
+-
+-	if (end <= start)
++	len = nr_pages << PAGE_SHIFT;
++	if (check_add_overflow(start, len, &end))
+ 		return 0;
+ 	if (unlikely(!access_ok((void __user *)start, len)))
+ 		return -EFAULT;
+ 
+-	/*
+-	 * Disable interrupts. The nested form is used, in order to allow
+-	 * full, general purpose use of this routine.
+-	 *
+-	 * With interrupts disabled, we block page table pages from being
+-	 * freed from under us. See struct mmu_table_batch comments in
+-	 * include/asm-generic/tlb.h for more details.
+-	 *
+-	 * We do not adopt an rcu_read_lock(.) here as we also want to
+-	 * block IPIs that come from THPs splitting.
+-	 */
+-	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) && gup_fast_permitted(start, end)) {
+-		unsigned long fast_flags = gup_flags;
+-
+-		local_irq_save(flags);
+-		gup_pgd_range(addr, end, fast_flags, pages, &nr_pinned);
+-		local_irq_restore(flags);
+-		ret = nr_pinned;
+-	}
+-
+-	if (nr_pinned < nr_pages && !(gup_flags & FOLL_FAST_ONLY)) {
+-		/* Try to get the remaining pages with get_user_pages */
+-		start += nr_pinned << PAGE_SHIFT;
+-		pages += nr_pinned;
+-
+-		ret = __gup_longterm_unlocked(start, nr_pages - nr_pinned,
+-					      gup_flags, pages);
++	nr_pinned = lockless_pages_from_mm(start, end, gup_flags, pages);
++	if (nr_pinned == nr_pages || gup_flags & FOLL_FAST_ONLY)
++		return nr_pinned;
+ 
+-		/* Have to be a bit careful with return values */
+-		if (nr_pinned > 0) {
+-			if (ret < 0)
+-				ret = nr_pinned;
+-			else
+-				ret += nr_pinned;
+-		}
++	/* Slow path: try to get the remaining pages with get_user_pages */
++	start += nr_pinned << PAGE_SHIFT;
++	pages += nr_pinned;
++	ret = __gup_longterm_unlocked(start, nr_pages - nr_pinned, gup_flags,
++				      pages);
++	if (ret < 0) {
++		/*
++		 * The caller has to unpin the pages we already pinned so
++		 * returning -errno is not an option
++		 */
++		if (nr_pinned)
++			return nr_pinned;
++		return ret;
+ 	}
+-
+-	return ret;
++	return ret + nr_pinned;
+ }
++
+ /**
+  * get_user_pages_fast_only() - pin user pages in memory
+  * @start:      starting user address
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index ec2bb93f74314..85eda66eb625d 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2321,7 +2321,7 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
+ 
+ static void unmap_page(struct page *page)
+ {
+-	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS |
++	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK |
+ 		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
+ 	bool unmap_success;
+ 
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index d029d938d26d6..3b38ea958e954 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5115,6 +5115,7 @@ int hugetlb_reserve_pages(struct inode *inode,
+ 
+ 		if (unlikely(add < 0)) {
+ 			hugetlb_acct_memory(h, -gbl_reserve);
++			ret = add;
+ 			goto out_put_pages;
+ 		} else if (unlikely(chg > add)) {
+ 			/*
+diff --git a/mm/init-mm.c b/mm/init-mm.c
+index 3a613c85f9ede..153162669f806 100644
+--- a/mm/init-mm.c
++++ b/mm/init-mm.c
+@@ -31,6 +31,7 @@ struct mm_struct init_mm = {
+ 	.pgd		= swapper_pg_dir,
+ 	.mm_users	= ATOMIC_INIT(2),
+ 	.mm_count	= ATOMIC_INIT(1),
++	.write_protect_seq = SEQCNT_ZERO(init_mm.write_protect_seq),
+ 	MMAP_LOCK_INITIALIZER(init_mm)
+ 	.page_table_lock =  __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
+ 	.arg_lock	=  __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 13f5677b93222..9abf4c5f2bce2 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -908,14 +908,7 @@ static int madvise_inject_error(int behavior,
+ 		} else {
+ 			pr_info("Injecting memory failure for pfn %#lx at process virtual address %#lx\n",
+ 				 pfn, start);
+-			/*
+-			 * Drop the page reference taken by get_user_pages_fast(). In
+-			 * the absence of MF_COUNT_INCREASED the memory_failure()
+-			 * routine is responsible for pinning the page to prevent it
+-			 * from being released back to the page allocator.
+-			 */
+-			put_page(page);
+-			ret = memory_failure(pfn, 0);
++			ret = memory_failure(pfn, MF_COUNT_INCREASED);
+ 		}
+ 
+ 		if (ret)
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 29459a6ce1c7a..a717728cc7b4a 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -2987,6 +2987,7 @@ __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void)
+ 		objcg = rcu_dereference(memcg->objcg);
+ 		if (objcg && obj_cgroup_tryget(objcg))
+ 			break;
++		objcg = NULL;
+ 	}
+ 	rcu_read_unlock();
+ 
+@@ -3246,8 +3247,10 @@ int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
+ 	 * independently later.
+ 	 */
+ 	rcu_read_lock();
++retry:
+ 	memcg = obj_cgroup_memcg(objcg);
+-	css_get(&memcg->css);
++	if (unlikely(!css_tryget(&memcg->css)))
++		goto retry;
+ 	rcu_read_unlock();
+ 
+ 	nr_pages = size >> PAGE_SHIFT;
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 5d880d4eb9a26..fd653c9953cfd 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -989,7 +989,7 @@ static int get_hwpoison_page(struct page *page)
+ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
+ 				  int flags, struct page **hpagep)
+ {
+-	enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;
++	enum ttu_flags ttu = TTU_IGNORE_MLOCK;
+ 	struct address_space *mapping;
+ 	LIST_HEAD(tokill);
+ 	bool unmap_success = true;
+@@ -1231,6 +1231,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
+ 	loff_t start;
+ 	dax_entry_t cookie;
+ 
++	if (flags & MF_COUNT_INCREASED)
++		/*
++		 * Drop the extra refcount in case we come from madvise().
++		 */
++		put_page(page);
++
+ 	/*
+ 	 * Prevent the inode from being freed while we are interrogating
+ 	 * the address_space, typically this would be handled by
+diff --git a/mm/memory.c b/mm/memory.c
+index c48f8df6e5026..50632c4366b8a 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1171,6 +1171,15 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ 		mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
+ 					0, src_vma, src_mm, addr, end);
+ 		mmu_notifier_invalidate_range_start(&range);
++		/*
++		 * Disabling preemption is not needed for the write side, as
++		 * the read side doesn't spin, but goes to the mmap_lock.
++		 *
++		 * Use the raw variant of the seqcount_t write API to avoid
++		 * lockdep complaining about preemptibility.
++		 */
++		mmap_assert_write_locked(src_mm);
++		raw_write_seqcount_begin(&src_mm->write_protect_seq);
+ 	}
+ 
+ 	ret = 0;
+@@ -1187,8 +1196,10 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ 		}
+ 	} while (dst_pgd++, src_pgd++, addr = next, addr != end);
+ 
+-	if (is_cow)
++	if (is_cow) {
++		raw_write_seqcount_end(&src_mm->write_protect_seq);
+ 		mmu_notifier_invalidate_range_end(&range);
++	}
+ 	return ret;
+ }
+ 
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 63b2e46b65552..0f855deea4b2d 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1304,7 +1304,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+ 			if (WARN_ON(PageLRU(page)))
+ 				isolate_lru_page(page);
+ 			if (page_mapped(page))
+-				try_to_unmap(page, TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS);
++				try_to_unmap(page, TTU_IGNORE_MLOCK);
+ 			continue;
+ 		}
+ 
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 5795cb82e27c3..8ea0c65f10756 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -1122,8 +1122,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
+ 		/* Establish migration ptes */
+ 		VM_BUG_ON_PAGE(PageAnon(page) && !PageKsm(page) && !anon_vma,
+ 				page);
+-		try_to_unmap(page,
+-			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
++		try_to_unmap(page, TTU_MIGRATION|TTU_IGNORE_MLOCK);
+ 		page_was_mapped = 1;
+ 	}
+ 
+@@ -1329,8 +1328,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
+ 
+ 	if (page_mapped(hpage)) {
+ 		bool mapping_locked = false;
+-		enum ttu_flags ttu = TTU_MIGRATION|TTU_IGNORE_MLOCK|
+-					TTU_IGNORE_ACCESS;
++		enum ttu_flags ttu = TTU_MIGRATION|TTU_IGNORE_MLOCK;
+ 
+ 		if (!PageAnon(hpage)) {
+ 			/*
+@@ -2688,7 +2686,7 @@ static void migrate_vma_prepare(struct migrate_vma *migrate)
+  */
+ static void migrate_vma_unmap(struct migrate_vma *migrate)
+ {
+-	int flags = TTU_MIGRATION | TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;
++	int flags = TTU_MIGRATION | TTU_IGNORE_MLOCK;
+ 	const unsigned long npages = migrate->npages;
+ 	const unsigned long start = migrate->start;
+ 	unsigned long addr, i, restore = 0;
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index eaa227a479e4a..32f783ddb5c3a 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -2470,12 +2470,12 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
+ 	return false;
+ }
+ 
+-static inline void boost_watermark(struct zone *zone)
++static inline bool boost_watermark(struct zone *zone)
+ {
+ 	unsigned long max_boost;
+ 
+ 	if (!watermark_boost_factor)
+-		return;
++		return false;
+ 	/*
+ 	 * Don't bother in zones that are unlikely to produce results.
+ 	 * On small machines, including kdump capture kernels running
+@@ -2483,7 +2483,7 @@ static inline void boost_watermark(struct zone *zone)
+ 	 * memory situation immediately.
+ 	 */
+ 	if ((pageblock_nr_pages * 4) > zone_managed_pages(zone))
+-		return;
++		return false;
+ 
+ 	max_boost = mult_frac(zone->_watermark[WMARK_HIGH],
+ 			watermark_boost_factor, 10000);
+@@ -2497,12 +2497,14 @@ static inline void boost_watermark(struct zone *zone)
+ 	 * boosted watermark resulting in a hang.
+ 	 */
+ 	if (!max_boost)
+-		return;
++		return false;
+ 
+ 	max_boost = max(pageblock_nr_pages, max_boost);
+ 
+ 	zone->watermark_boost = min(zone->watermark_boost + pageblock_nr_pages,
+ 		max_boost);
++
++	return true;
+ }
+ 
+ /*
+@@ -2540,8 +2542,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
+ 	 * likelihood of future fallbacks. Wake kswapd now as the node
+ 	 * may be balanced overall and kswapd will not wake naturally.
+ 	 */
+-	boost_watermark(zone);
+-	if (alloc_flags & ALLOC_KSWAPD)
++	if (boost_watermark(zone) && (alloc_flags & ALLOC_KSWAPD))
+ 		set_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
+ 
+ 	/* We are not allowed to try stealing from the whole block */
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 31b29321adfe1..6657000b18d41 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1533,15 +1533,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ 			goto discard;
+ 		}
+ 
+-		if (!(flags & TTU_IGNORE_ACCESS)) {
+-			if (ptep_clear_flush_young_notify(vma, address,
+-						pvmw.pte)) {
+-				ret = false;
+-				page_vma_mapped_walk_done(&pvmw);
+-				break;
+-			}
+-		}
+-
+ 		/* Nuke the page table entry. */
+ 		flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
+ 		if (should_defer_flush(mm, flags)) {
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 6ae491a8b210f..279dc0c96568c 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -2256,7 +2256,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
+ 	debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
+ 	debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
+ 
+-	kasan_poison_vmalloc(area->addr, area->size);
++	kasan_poison_vmalloc(area->addr, get_vm_area_size(area));
+ 
+ 	vm_remove_mappings(area, deallocate_pages);
+ 
+@@ -3448,11 +3448,11 @@ static void *s_next(struct seq_file *m, void *p, loff_t *pos)
+ }
+ 
+ static void s_stop(struct seq_file *m, void *p)
+-	__releases(&vmap_purge_lock)
+ 	__releases(&vmap_area_lock)
++	__releases(&vmap_purge_lock)
+ {
+-	mutex_unlock(&vmap_purge_lock);
+ 	spin_unlock(&vmap_area_lock);
++	mutex_unlock(&vmap_purge_lock);
+ }
+ 
+ static void show_numa_info(struct seq_file *m, struct vm_struct *v)
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 7b4e31eac2cff..0ec6321e98878 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1072,7 +1072,6 @@ static void page_check_dirty_writeback(struct page *page,
+ static unsigned int shrink_page_list(struct list_head *page_list,
+ 				     struct pglist_data *pgdat,
+ 				     struct scan_control *sc,
+-				     enum ttu_flags ttu_flags,
+ 				     struct reclaim_stat *stat,
+ 				     bool ignore_references)
+ {
+@@ -1297,7 +1296,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
+ 		 * processes. Try to unmap it here.
+ 		 */
+ 		if (page_mapped(page)) {
+-			enum ttu_flags flags = ttu_flags | TTU_BATCH_FLUSH;
++			enum ttu_flags flags = TTU_BATCH_FLUSH;
+ 			bool was_swapbacked = PageSwapBacked(page);
+ 
+ 			if (unlikely(PageTransHuge(page)))
+@@ -1514,7 +1513,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
+ 	}
+ 
+ 	nr_reclaimed = shrink_page_list(&clean_pages, zone->zone_pgdat, &sc,
+-			TTU_IGNORE_ACCESS, &stat, true);
++					&stat, true);
+ 	list_splice(&clean_pages, page_list);
+ 	mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE,
+ 			    -(long)nr_reclaimed);
+@@ -1958,8 +1957,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
+ 	if (nr_taken == 0)
+ 		return 0;
+ 
+-	nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, 0,
+-				&stat, false);
++	nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);
+ 
+ 	spin_lock_irq(&pgdat->lru_lock);
+ 
+@@ -2131,8 +2129,7 @@ unsigned long reclaim_pages(struct list_head *page_list)
+ 
+ 		nr_reclaimed += shrink_page_list(&node_page_list,
+ 						NODE_DATA(nid),
+-						&sc, 0,
+-						&dummy_stat, false);
++						&sc, &dummy_stat, false);
+ 		while (!list_empty(&node_page_list)) {
+ 			page = lru_to_page(&node_page_list);
+ 			list_del(&page->lru);
+@@ -2145,8 +2142,7 @@ unsigned long reclaim_pages(struct list_head *page_list)
+ 	if (!list_empty(&node_page_list)) {
+ 		nr_reclaimed += shrink_page_list(&node_page_list,
+ 						NODE_DATA(nid),
+-						&sc, 0,
+-						&dummy_stat, false);
++						&sc, &dummy_stat, false);
+ 		while (!list_empty(&node_page_list)) {
+ 			page = lru_to_page(&node_page_list);
+ 			list_del(&page->lru);
+diff --git a/mm/z3fold.c b/mm/z3fold.c
+index 18feaa0bc5377..0152ad9931a87 100644
+--- a/mm/z3fold.c
++++ b/mm/z3fold.c
+@@ -90,7 +90,7 @@ struct z3fold_buddy_slots {
+ 	 * be enough slots to hold all possible variants
+ 	 */
+ 	unsigned long slot[BUDDY_MASK + 1];
+-	unsigned long pool; /* back link + flags */
++	unsigned long pool; /* back link */
+ 	rwlock_t lock;
+ };
+ #define HANDLE_FLAG_MASK	(0x03)
+@@ -185,7 +185,7 @@ enum z3fold_page_flags {
+  * handle flags, go under HANDLE_FLAG_MASK
+  */
+ enum z3fold_handle_flags {
+-	HANDLES_ORPHANED = 0,
++	HANDLES_NOFREE = 0,
+ };
+ 
+ /*
+@@ -303,10 +303,9 @@ static inline void put_z3fold_header(struct z3fold_header *zhdr)
+ 		z3fold_page_unlock(zhdr);
+ }
+ 
+-static inline void free_handle(unsigned long handle)
++static inline void free_handle(unsigned long handle, struct z3fold_header *zhdr)
+ {
+ 	struct z3fold_buddy_slots *slots;
+-	struct z3fold_header *zhdr;
+ 	int i;
+ 	bool is_free;
+ 
+@@ -316,22 +315,19 @@ static inline void free_handle(unsigned long handle)
+ 	if (WARN_ON(*(unsigned long *)handle == 0))
+ 		return;
+ 
+-	zhdr = handle_to_z3fold_header(handle);
+ 	slots = handle_to_slots(handle);
+ 	write_lock(&slots->lock);
+ 	*(unsigned long *)handle = 0;
+-	if (zhdr->slots == slots) {
++
++	if (test_bit(HANDLES_NOFREE, &slots->pool)) {
+ 		write_unlock(&slots->lock);
+ 		return; /* simple case, nothing else to do */
+ 	}
+ 
+-	/* we are freeing a foreign handle if we are here */
+-	zhdr->foreign_handles--;
++	if (zhdr->slots != slots)
++		zhdr->foreign_handles--;
++
+ 	is_free = true;
+-	if (!test_bit(HANDLES_ORPHANED, &slots->pool)) {
+-		write_unlock(&slots->lock);
+-		return;
+-	}
+ 	for (i = 0; i <= BUDDY_MASK; i++) {
+ 		if (slots->slot[i]) {
+ 			is_free = false;
+@@ -343,6 +339,8 @@ static inline void free_handle(unsigned long handle)
+ 	if (is_free) {
+ 		struct z3fold_pool *pool = slots_to_pool(slots);
+ 
++		if (zhdr->slots == slots)
++			zhdr->slots = NULL;
+ 		kmem_cache_free(pool->c_handle, slots);
+ 	}
+ }
+@@ -525,8 +523,6 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
+ {
+ 	struct page *page = virt_to_page(zhdr);
+ 	struct z3fold_pool *pool = zhdr_to_pool(zhdr);
+-	bool is_free = true;
+-	int i;
+ 
+ 	WARN_ON(!list_empty(&zhdr->buddy));
+ 	set_bit(PAGE_STALE, &page->private);
+@@ -536,21 +532,6 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
+ 		list_del_init(&page->lru);
+ 	spin_unlock(&pool->lock);
+ 
+-	/* If there are no foreign handles, free the handles array */
+-	read_lock(&zhdr->slots->lock);
+-	for (i = 0; i <= BUDDY_MASK; i++) {
+-		if (zhdr->slots->slot[i]) {
+-			is_free = false;
+-			break;
+-		}
+-	}
+-	if (!is_free)
+-		set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
+-	read_unlock(&zhdr->slots->lock);
+-
+-	if (is_free)
+-		kmem_cache_free(pool->c_handle, zhdr->slots);
+-
+ 	if (locked)
+ 		z3fold_page_unlock(zhdr);
+ 
+@@ -653,6 +634,28 @@ static inline void add_to_unbuddied(struct z3fold_pool *pool,
+ 	}
+ }
+ 
++static inline enum buddy get_free_buddy(struct z3fold_header *zhdr, int chunks)
++{
++	enum buddy bud = HEADLESS;
++
++	if (zhdr->middle_chunks) {
++		if (!zhdr->first_chunks &&
++		    chunks <= zhdr->start_middle - ZHDR_CHUNKS)
++			bud = FIRST;
++		else if (!zhdr->last_chunks)
++			bud = LAST;
++	} else {
++		if (!zhdr->first_chunks)
++			bud = FIRST;
++		else if (!zhdr->last_chunks)
++			bud = LAST;
++		else
++			bud = MIDDLE;
++	}
++
++	return bud;
++}
++
+ static inline void *mchunk_memmove(struct z3fold_header *zhdr,
+ 				unsigned short dst_chunk)
+ {
+@@ -714,18 +717,7 @@ static struct z3fold_header *compact_single_buddy(struct z3fold_header *zhdr)
+ 		if (WARN_ON(new_zhdr == zhdr))
+ 			goto out_fail;
+ 
+-		if (new_zhdr->first_chunks == 0) {
+-			if (new_zhdr->middle_chunks != 0 &&
+-					chunks >= new_zhdr->start_middle) {
+-				new_bud = LAST;
+-			} else {
+-				new_bud = FIRST;
+-			}
+-		} else if (new_zhdr->last_chunks == 0) {
+-			new_bud = LAST;
+-		} else if (new_zhdr->middle_chunks == 0) {
+-			new_bud = MIDDLE;
+-		}
++		new_bud = get_free_buddy(new_zhdr, chunks);
+ 		q = new_zhdr;
+ 		switch (new_bud) {
+ 		case FIRST:
+@@ -847,9 +839,8 @@ static void do_compact_page(struct z3fold_header *zhdr, bool locked)
+ 		return;
+ 	}
+ 
+-	if (unlikely(PageIsolated(page) ||
+-		     test_bit(PAGE_CLAIMED, &page->private) ||
+-		     test_bit(PAGE_STALE, &page->private))) {
++	if (test_bit(PAGE_STALE, &page->private) ||
++	    test_and_set_bit(PAGE_CLAIMED, &page->private)) {
+ 		z3fold_page_unlock(zhdr);
+ 		return;
+ 	}
+@@ -858,13 +849,16 @@ static void do_compact_page(struct z3fold_header *zhdr, bool locked)
+ 	    zhdr->mapped_count == 0 && compact_single_buddy(zhdr)) {
+ 		if (kref_put(&zhdr->refcount, release_z3fold_page_locked))
+ 			atomic64_dec(&pool->pages_nr);
+-		else
++		else {
++			clear_bit(PAGE_CLAIMED, &page->private);
+ 			z3fold_page_unlock(zhdr);
++		}
+ 		return;
+ 	}
+ 
+ 	z3fold_compact_page(zhdr);
+ 	add_to_unbuddied(pool, zhdr);
++	clear_bit(PAGE_CLAIMED, &page->private);
+ 	z3fold_page_unlock(zhdr);
+ }
+ 
+@@ -973,6 +967,9 @@ lookup:
+ 		}
+ 	}
+ 
++	if (zhdr && !zhdr->slots)
++		zhdr->slots = alloc_slots(pool,
++					can_sleep ? GFP_NOIO : GFP_ATOMIC);
+ 	return zhdr;
+ }
+ 
+@@ -1109,17 +1106,8 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
+ retry:
+ 		zhdr = __z3fold_alloc(pool, size, can_sleep);
+ 		if (zhdr) {
+-			if (zhdr->first_chunks == 0) {
+-				if (zhdr->middle_chunks != 0 &&
+-				    chunks >= zhdr->start_middle)
+-					bud = LAST;
+-				else
+-					bud = FIRST;
+-			} else if (zhdr->last_chunks == 0)
+-				bud = LAST;
+-			else if (zhdr->middle_chunks == 0)
+-				bud = MIDDLE;
+-			else {
++			bud = get_free_buddy(zhdr, chunks);
++			if (bud == HEADLESS) {
+ 				if (kref_put(&zhdr->refcount,
+ 					     release_z3fold_page_locked))
+ 					atomic64_dec(&pool->pages_nr);
+@@ -1265,12 +1253,11 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
+ 		pr_err("%s: unknown bud %d\n", __func__, bud);
+ 		WARN_ON(1);
+ 		put_z3fold_header(zhdr);
+-		clear_bit(PAGE_CLAIMED, &page->private);
+ 		return;
+ 	}
+ 
+ 	if (!page_claimed)
+-		free_handle(handle);
++		free_handle(handle, zhdr);
+ 	if (kref_put(&zhdr->refcount, release_z3fold_page_locked_list)) {
+ 		atomic64_dec(&pool->pages_nr);
+ 		return;
+@@ -1280,8 +1267,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
+ 		z3fold_page_unlock(zhdr);
+ 		return;
+ 	}
+-	if (unlikely(PageIsolated(page)) ||
+-	    test_and_set_bit(NEEDS_COMPACTING, &page->private)) {
++	if (test_and_set_bit(NEEDS_COMPACTING, &page->private)) {
+ 		put_z3fold_header(zhdr);
+ 		clear_bit(PAGE_CLAIMED, &page->private);
+ 		return;
+@@ -1345,6 +1331,10 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
+ 	struct page *page = NULL;
+ 	struct list_head *pos;
+ 	unsigned long first_handle = 0, middle_handle = 0, last_handle = 0;
++	struct z3fold_buddy_slots slots __attribute__((aligned(SLOTS_ALIGN)));
++
++	rwlock_init(&slots.lock);
++	slots.pool = (unsigned long)pool | (1 << HANDLES_NOFREE);
+ 
+ 	spin_lock(&pool->lock);
+ 	if (!pool->ops || !pool->ops->evict || retries == 0) {
+@@ -1359,35 +1349,36 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
+ 		list_for_each_prev(pos, &pool->lru) {
+ 			page = list_entry(pos, struct page, lru);
+ 
+-			/* this bit could have been set by free, in which case
+-			 * we pass over to the next page in the pool.
+-			 */
+-			if (test_and_set_bit(PAGE_CLAIMED, &page->private)) {
+-				page = NULL;
+-				continue;
+-			}
+-
+-			if (unlikely(PageIsolated(page))) {
+-				clear_bit(PAGE_CLAIMED, &page->private);
+-				page = NULL;
+-				continue;
+-			}
+ 			zhdr = page_address(page);
+ 			if (test_bit(PAGE_HEADLESS, &page->private))
+ 				break;
+ 
++			if (kref_get_unless_zero(&zhdr->refcount) == 0) {
++				zhdr = NULL;
++				break;
++			}
+ 			if (!z3fold_page_trylock(zhdr)) {
+-				clear_bit(PAGE_CLAIMED, &page->private);
++				if (kref_put(&zhdr->refcount,
++						release_z3fold_page))
++					atomic64_dec(&pool->pages_nr);
+ 				zhdr = NULL;
+ 				continue; /* can't evict at this point */
+ 			}
+-			if (zhdr->foreign_handles) {
+-				clear_bit(PAGE_CLAIMED, &page->private);
+-				z3fold_page_unlock(zhdr);
++
++			/* test_and_set_bit is of course atomic, but we still
++			 * need to do it under page lock, otherwise checking
++			 * that bit in __z3fold_alloc wouldn't make sense
++			 */
++			if (zhdr->foreign_handles ||
++			    test_and_set_bit(PAGE_CLAIMED, &page->private)) {
++				if (kref_put(&zhdr->refcount,
++						release_z3fold_page))
++					atomic64_dec(&pool->pages_nr);
++				else
++					z3fold_page_unlock(zhdr);
+ 				zhdr = NULL;
+ 				continue; /* can't evict such page */
+ 			}
+-			kref_get(&zhdr->refcount);
+ 			list_del_init(&zhdr->buddy);
+ 			zhdr->cpu = -1;
+ 			break;
+@@ -1409,12 +1400,16 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
+ 			first_handle = 0;
+ 			last_handle = 0;
+ 			middle_handle = 0;
++			memset(slots.slot, 0, sizeof(slots.slot));
+ 			if (zhdr->first_chunks)
+-				first_handle = encode_handle(zhdr, FIRST);
++				first_handle = __encode_handle(zhdr, &slots,
++								FIRST);
+ 			if (zhdr->middle_chunks)
+-				middle_handle = encode_handle(zhdr, MIDDLE);
++				middle_handle = __encode_handle(zhdr, &slots,
++								MIDDLE);
+ 			if (zhdr->last_chunks)
+-				last_handle = encode_handle(zhdr, LAST);
++				last_handle = __encode_handle(zhdr, &slots,
++								LAST);
+ 			/*
+ 			 * it's safe to unlock here because we hold a
+ 			 * reference to this page
+@@ -1429,19 +1424,16 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
+ 			ret = pool->ops->evict(pool, middle_handle);
+ 			if (ret)
+ 				goto next;
+-			free_handle(middle_handle);
+ 		}
+ 		if (first_handle) {
+ 			ret = pool->ops->evict(pool, first_handle);
+ 			if (ret)
+ 				goto next;
+-			free_handle(first_handle);
+ 		}
+ 		if (last_handle) {
+ 			ret = pool->ops->evict(pool, last_handle);
+ 			if (ret)
+ 				goto next;
+-			free_handle(last_handle);
+ 		}
+ next:
+ 		if (test_bit(PAGE_HEADLESS, &page->private)) {
+@@ -1455,9 +1447,11 @@ next:
+ 			spin_unlock(&pool->lock);
+ 			clear_bit(PAGE_CLAIMED, &page->private);
+ 		} else {
++			struct z3fold_buddy_slots *slots = zhdr->slots;
+ 			z3fold_page_lock(zhdr);
+ 			if (kref_put(&zhdr->refcount,
+ 					release_z3fold_page_locked)) {
++				kmem_cache_free(pool->c_handle, slots);
+ 				atomic64_dec(&pool->pages_nr);
+ 				return 0;
+ 			}
+@@ -1573,8 +1567,7 @@ static bool z3fold_page_isolate(struct page *page, isolate_mode_t mode)
+ 	VM_BUG_ON_PAGE(!PageMovable(page), page);
+ 	VM_BUG_ON_PAGE(PageIsolated(page), page);
+ 
+-	if (test_bit(PAGE_HEADLESS, &page->private) ||
+-	    test_bit(PAGE_CLAIMED, &page->private))
++	if (test_bit(PAGE_HEADLESS, &page->private))
+ 		return false;
+ 
+ 	zhdr = page_address(page);
+@@ -1586,6 +1579,8 @@ static bool z3fold_page_isolate(struct page *page, isolate_mode_t mode)
+ 	if (zhdr->mapped_count != 0 || zhdr->foreign_handles != 0)
+ 		goto out;
+ 
++	if (test_and_set_bit(PAGE_CLAIMED, &page->private))
++		goto out;
+ 	pool = zhdr_to_pool(zhdr);
+ 	spin_lock(&pool->lock);
+ 	if (!list_empty(&zhdr->buddy))
+@@ -1612,16 +1607,17 @@ static int z3fold_page_migrate(struct address_space *mapping, struct page *newpa
+ 
+ 	VM_BUG_ON_PAGE(!PageMovable(page), page);
+ 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
++	VM_BUG_ON_PAGE(!test_bit(PAGE_CLAIMED, &page->private), page);
+ 	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
+ 
+ 	zhdr = page_address(page);
+ 	pool = zhdr_to_pool(zhdr);
+ 
+-	if (!z3fold_page_trylock(zhdr)) {
++	if (!z3fold_page_trylock(zhdr))
+ 		return -EAGAIN;
+-	}
+ 	if (zhdr->mapped_count != 0 || zhdr->foreign_handles != 0) {
+ 		z3fold_page_unlock(zhdr);
++		clear_bit(PAGE_CLAIMED, &page->private);
+ 		return -EBUSY;
+ 	}
+ 	if (work_pending(&zhdr->work)) {
+@@ -1663,6 +1659,7 @@ static int z3fold_page_migrate(struct address_space *mapping, struct page *newpa
+ 	queue_work_on(new_zhdr->cpu, pool->compact_wq, &new_zhdr->work);
+ 
+ 	page_mapcount_reset(page);
++	clear_bit(PAGE_CLAIMED, &page->private);
+ 	put_page(page);
+ 	return 0;
+ }
+@@ -1686,6 +1683,7 @@ static void z3fold_page_putback(struct page *page)
+ 	spin_lock(&pool->lock);
+ 	list_add(&page->lru, &pool->lru);
+ 	spin_unlock(&pool->lock);
++	clear_bit(PAGE_CLAIMED, &page->private);
+ 	z3fold_page_unlock(zhdr);
+ }
+ 
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index cbdf2a5559754..17a72695865b5 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -4941,6 +4941,11 @@ static void hci_phy_link_complete_evt(struct hci_dev *hdev,
+ 		return;
+ 	}
+ 
++	if (!hcon->amp_mgr) {
++		hci_dev_unlock(hdev);
++		return;
++	}
++
+ 	if (ev->status) {
+ 		hci_conn_del(hcon);
+ 		hci_dev_unlock(hdev);
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index 6f12bab4d2fa6..610ed0817bd77 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -698,7 +698,8 @@ static void del_from_white_list(struct hci_request *req, bdaddr_t *bdaddr,
+ 		   cp.bdaddr_type);
+ 	hci_req_add(req, HCI_OP_LE_DEL_FROM_WHITE_LIST, sizeof(cp), &cp);
+ 
+-	if (use_ll_privacy(req->hdev)) {
++	if (use_ll_privacy(req->hdev) &&
++	    hci_dev_test_flag(req->hdev, HCI_ENABLE_LL_PRIVACY)) {
+ 		struct smp_irk *irk;
+ 
+ 		irk = hci_find_irk_by_addr(req->hdev, bdaddr, bdaddr_type);
+@@ -732,7 +733,8 @@ static int add_to_white_list(struct hci_request *req,
+ 		return -1;
+ 
+ 	/* White list can not be used with RPAs */
+-	if (!allow_rpa && !use_ll_privacy(hdev) &&
++	if (!allow_rpa &&
++	    !hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY) &&
+ 	    hci_find_irk_by_addr(hdev, &params->addr, params->addr_type)) {
+ 		return -1;
+ 	}
+@@ -750,7 +752,8 @@ static int add_to_white_list(struct hci_request *req,
+ 		   cp.bdaddr_type);
+ 	hci_req_add(req, HCI_OP_LE_ADD_TO_WHITE_LIST, sizeof(cp), &cp);
+ 
+-	if (use_ll_privacy(hdev)) {
++	if (use_ll_privacy(hdev) &&
++	    hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY)) {
+ 		struct smp_irk *irk;
+ 
+ 		irk = hci_find_irk_by_addr(hdev, &params->addr,
+@@ -812,7 +815,8 @@ static u8 update_white_list(struct hci_request *req)
+ 		}
+ 
+ 		/* White list can not be used with RPAs */
+-		if (!allow_rpa && !use_ll_privacy(hdev) &&
++		if (!allow_rpa &&
++		    !hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY) &&
+ 		    hci_find_irk_by_addr(hdev, &b->bdaddr, b->bdaddr_type)) {
+ 			return 0x00;
+ 		}
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 79ffcdef0b7ad..22a110f37abc6 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -1003,6 +1003,11 @@ static int sco_sock_getsockopt(struct socket *sock, int level, int optname,
+ 
+ 	case BT_SNDMTU:
+ 	case BT_RCVMTU:
++		if (sk->sk_state != BT_CONNECTED) {
++			err = -ENOTCONN;
++			break;
++		}
++
+ 		if (put_user(sco_pi(sk)->conn->mtu, (u32 __user *)optval))
+ 			err = -EFAULT;
+ 		break;
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 1e2e5a406d587..2a5a11f92b03e 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -1758,7 +1758,7 @@ ieee80211_rx_h_sta_process(struct ieee80211_rx_data *rx)
+ 	} else if (rx->sdata->vif.type == NL80211_IFTYPE_OCB) {
+ 		sta->rx_stats.last_rx = jiffies;
+ 	} else if (!ieee80211_is_s1g_beacon(hdr->frame_control) &&
+-		   is_multicast_ether_addr(hdr->addr1)) {
++		   !is_multicast_ether_addr(hdr->addr1)) {
+ 		/*
+ 		 * Mesh beacons will update last_rx when if they are found to
+ 		 * match the current local configuration when processed.
+diff --git a/net/mac80211/vht.c b/net/mac80211/vht.c
+index fb0e3a657d2d3..c3ca973737742 100644
+--- a/net/mac80211/vht.c
++++ b/net/mac80211/vht.c
+@@ -465,12 +465,18 @@ enum ieee80211_sta_rx_bandwidth ieee80211_sta_cur_vht_bw(struct sta_info *sta)
+ 	 * IEEE80211-2016 specification makes higher bandwidth operation
+ 	 * possible on the TDLS link if the peers have wider bandwidth
+ 	 * capability.
++	 *
++	 * However, in this case, and only if the TDLS peer is authorized,
++	 * limit to the tdls_chandef so that the configuration here isn't
++	 * wider than what's actually requested on the channel context.
+ 	 */
+ 	if (test_sta_flag(sta, WLAN_STA_TDLS_PEER) &&
+-	    test_sta_flag(sta, WLAN_STA_TDLS_WIDER_BW))
+-		return bw;
+-
+-	bw = min(bw, ieee80211_chan_width_to_rx_bw(bss_width));
++	    test_sta_flag(sta, WLAN_STA_TDLS_WIDER_BW) &&
++	    test_sta_flag(sta, WLAN_STA_AUTHORIZED) &&
++	    sta->tdls_chandef.chan)
++		bw = min(bw, ieee80211_chan_width_to_rx_bw(sta->tdls_chandef.width));
++	else
++		bw = min(bw, ieee80211_chan_width_to_rx_bw(bss_width));
+ 
+ 	return bw;
+ }
+diff --git a/net/sunrpc/debugfs.c b/net/sunrpc/debugfs.c
+index fd9bca2427242..56029e3af6ff0 100644
+--- a/net/sunrpc/debugfs.c
++++ b/net/sunrpc/debugfs.c
+@@ -128,13 +128,13 @@ static int do_xprt_debugfs(struct rpc_clnt *clnt, struct rpc_xprt *xprt, void *n
+ 		return 0;
+ 	len = snprintf(name, sizeof(name), "../../rpc_xprt/%s",
+ 		       xprt->debugfs->d_name.name);
+-	if (len > sizeof(name))
++	if (len >= sizeof(name))
+ 		return -1;
+ 	if (*nump == 0)
+ 		strcpy(link, "xprt");
+ 	else {
+ 		len = snprintf(link, sizeof(link), "xprt%d", *nump);
+-		if (len > sizeof(link))
++		if (len >= sizeof(link))
+ 			return -1;
+ 	}
+ 	debugfs_create_symlink(link, clnt->cl_debugfs, name);
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index f06d7c315017c..cf702a5f7fe5d 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -675,6 +675,23 @@ struct rpc_task *rpc_wake_up_next(struct rpc_wait_queue *queue)
+ }
+ EXPORT_SYMBOL_GPL(rpc_wake_up_next);
+ 
++/**
++ * rpc_wake_up_locked - wake up all rpc_tasks
++ * @queue: rpc_wait_queue on which the tasks are sleeping
++ *
++ */
++static void rpc_wake_up_locked(struct rpc_wait_queue *queue)
++{
++	struct rpc_task *task;
++
++	for (;;) {
++		task = __rpc_find_next_queued(queue);
++		if (task == NULL)
++			break;
++		rpc_wake_up_task_queue_locked(queue, task);
++	}
++}
++
+ /**
+  * rpc_wake_up - wake up all rpc_tasks
+  * @queue: rpc_wait_queue on which the tasks are sleeping
+@@ -683,25 +700,28 @@ EXPORT_SYMBOL_GPL(rpc_wake_up_next);
+  */
+ void rpc_wake_up(struct rpc_wait_queue *queue)
+ {
+-	struct list_head *head;
+-
+ 	spin_lock(&queue->lock);
+-	head = &queue->tasks[queue->maxpriority];
++	rpc_wake_up_locked(queue);
++	spin_unlock(&queue->lock);
++}
++EXPORT_SYMBOL_GPL(rpc_wake_up);
++
++/**
++ * rpc_wake_up_status_locked - wake up all rpc_tasks and set their status value.
++ * @queue: rpc_wait_queue on which the tasks are sleeping
++ * @status: status value to set
++ */
++static void rpc_wake_up_status_locked(struct rpc_wait_queue *queue, int status)
++{
++	struct rpc_task *task;
++
+ 	for (;;) {
+-		while (!list_empty(head)) {
+-			struct rpc_task *task;
+-			task = list_first_entry(head,
+-					struct rpc_task,
+-					u.tk_wait.list);
+-			rpc_wake_up_task_queue_locked(queue, task);
+-		}
+-		if (head == &queue->tasks[0])
++		task = __rpc_find_next_queued(queue);
++		if (task == NULL)
+ 			break;
+-		head--;
++		rpc_wake_up_task_queue_set_status_locked(queue, task, status);
+ 	}
+-	spin_unlock(&queue->lock);
+ }
+-EXPORT_SYMBOL_GPL(rpc_wake_up);
+ 
+ /**
+  * rpc_wake_up_status - wake up all rpc_tasks and set their status value.
+@@ -712,23 +732,8 @@ EXPORT_SYMBOL_GPL(rpc_wake_up);
+  */
+ void rpc_wake_up_status(struct rpc_wait_queue *queue, int status)
+ {
+-	struct list_head *head;
+-
+ 	spin_lock(&queue->lock);
+-	head = &queue->tasks[queue->maxpriority];
+-	for (;;) {
+-		while (!list_empty(head)) {
+-			struct rpc_task *task;
+-			task = list_first_entry(head,
+-					struct rpc_task,
+-					u.tk_wait.list);
+-			task->tk_status = status;
+-			rpc_wake_up_task_queue_locked(queue, task);
+-		}
+-		if (head == &queue->tasks[0])
+-			break;
+-		head--;
+-	}
++	rpc_wake_up_status_locked(queue, status);
+ 	spin_unlock(&queue->lock);
+ }
+ EXPORT_SYMBOL_GPL(rpc_wake_up_status);
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index f6c17e75f20ed..57f09ea3ef2af 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -151,31 +151,64 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(xprt_unregister_transport);
+ 
++static void
++xprt_class_release(const struct xprt_class *t)
++{
++	module_put(t->owner);
++}
++
++static const struct xprt_class *
++xprt_class_find_by_netid_locked(const char *netid)
++{
++	const struct xprt_class *t;
++	unsigned int i;
++
++	list_for_each_entry(t, &xprt_list, list) {
++		for (i = 0; t->netid[i][0] != '\0'; i++) {
++			if (strcmp(t->netid[i], netid) != 0)
++				continue;
++			if (!try_module_get(t->owner))
++				continue;
++			return t;
++		}
++	}
++	return NULL;
++}
++
++static const struct xprt_class *
++xprt_class_find_by_netid(const char *netid)
++{
++	const struct xprt_class *t;
++
++	spin_lock(&xprt_list_lock);
++	t = xprt_class_find_by_netid_locked(netid);
++	if (!t) {
++		spin_unlock(&xprt_list_lock);
++		request_module("rpc%s", netid);
++		spin_lock(&xprt_list_lock);
++		t = xprt_class_find_by_netid_locked(netid);
++	}
++	spin_unlock(&xprt_list_lock);
++	return t;
++}
++
+ /**
+  * xprt_load_transport - load a transport implementation
+- * @transport_name: transport to load
++ * @netid: transport to load
+  *
+  * Returns:
+  * 0:		transport successfully loaded
+  * -ENOENT:	transport module not available
+  */
+-int xprt_load_transport(const char *transport_name)
++int xprt_load_transport(const char *netid)
+ {
+-	struct xprt_class *t;
+-	int result;
++	const struct xprt_class *t;
+ 
+-	result = 0;
+-	spin_lock(&xprt_list_lock);
+-	list_for_each_entry(t, &xprt_list, list) {
+-		if (strcmp(t->name, transport_name) == 0) {
+-			spin_unlock(&xprt_list_lock);
+-			goto out;
+-		}
+-	}
+-	spin_unlock(&xprt_list_lock);
+-	result = request_module("xprt%s", transport_name);
+-out:
+-	return result;
++	t = xprt_class_find_by_netid(netid);
++	if (!t)
++		return -ENOENT;
++	xprt_class_release(t);
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(xprt_load_transport);
+ 
+diff --git a/net/sunrpc/xprtrdma/module.c b/net/sunrpc/xprtrdma/module.c
+index 620327c01302c..45c5b41ac8dc9 100644
+--- a/net/sunrpc/xprtrdma/module.c
++++ b/net/sunrpc/xprtrdma/module.c
+@@ -24,6 +24,7 @@ MODULE_DESCRIPTION("RPC/RDMA Transport");
+ MODULE_LICENSE("Dual BSD/GPL");
+ MODULE_ALIAS("svcrdma");
+ MODULE_ALIAS("xprtrdma");
++MODULE_ALIAS("rpcrdma6");
+ 
+ static void __exit rpc_rdma_cleanup(void)
+ {
+diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
+index 0f5120c7668ff..c48536f2121fb 100644
+--- a/net/sunrpc/xprtrdma/rpc_rdma.c
++++ b/net/sunrpc/xprtrdma/rpc_rdma.c
+@@ -179,6 +179,31 @@ rpcrdma_nonpayload_inline(const struct rpcrdma_xprt *r_xprt,
+ 		r_xprt->rx_ep->re_max_inline_recv;
+ }
+ 
++/* ACL likes to be lazy in allocating pages. For TCP, these
++ * pages can be allocated during receive processing. Not true
++ * for RDMA, which must always provision receive buffers
++ * up front.
++ */
++static noinline int
++rpcrdma_alloc_sparse_pages(struct xdr_buf *buf)
++{
++	struct page **ppages;
++	int len;
++
++	len = buf->page_len;
++	ppages = buf->pages + (buf->page_base >> PAGE_SHIFT);
++	while (len > 0) {
++		if (!*ppages)
++			*ppages = alloc_page(GFP_NOWAIT | __GFP_NOWARN);
++		if (!*ppages)
++			return -ENOBUFS;
++		ppages++;
++		len -= PAGE_SIZE;
++	}
++
++	return 0;
++}
++
+ /* Split @vec on page boundaries into SGEs. FMR registers pages, not
+  * a byte range. Other modes coalesce these SGEs into a single MR
+  * when they can.
+@@ -233,15 +258,6 @@ rpcrdma_convert_iovs(struct rpcrdma_xprt *r_xprt, struct xdr_buf *xdrbuf,
+ 	ppages = xdrbuf->pages + (xdrbuf->page_base >> PAGE_SHIFT);
+ 	page_base = offset_in_page(xdrbuf->page_base);
+ 	while (len) {
+-		/* ACL likes to be lazy in allocating pages - ACLs
+-		 * are small by default but can get huge.
+-		 */
+-		if (unlikely(xdrbuf->flags & XDRBUF_SPARSE_PAGES)) {
+-			if (!*ppages)
+-				*ppages = alloc_page(GFP_NOWAIT | __GFP_NOWARN);
+-			if (!*ppages)
+-				return -ENOBUFS;
+-		}
+ 		seg->mr_page = *ppages;
+ 		seg->mr_offset = (char *)page_base;
+ 		seg->mr_len = min_t(u32, PAGE_SIZE - page_base, len);
+@@ -867,6 +883,12 @@ rpcrdma_marshal_req(struct rpcrdma_xprt *r_xprt, struct rpc_rqst *rqst)
+ 	__be32 *p;
+ 	int ret;
+ 
++	if (unlikely(rqst->rq_rcv_buf.flags & XDRBUF_SPARSE_PAGES)) {
++		ret = rpcrdma_alloc_sparse_pages(&rqst->rq_rcv_buf);
++		if (ret)
++			return ret;
++	}
++
+ 	rpcrdma_set_xdrlen(&req->rl_hdrbuf, 0);
+ 	xdr_init_encode(xdr, &req->rl_hdrbuf, rdmab_data(req->rl_rdmabuf),
+ 			rqst);
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 8915e42240d38..035060c05fd5a 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -768,6 +768,7 @@ static struct xprt_class xprt_rdma = {
+ 	.owner			= THIS_MODULE,
+ 	.ident			= XPRT_TRANSPORT_RDMA,
+ 	.setup			= xprt_setup_rdma,
++	.netid			= { "rdma", "rdma6", "" },
+ };
+ 
+ void xprt_rdma_cleanup(void)
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 7090bbee0ec59..c56a66cdf4ac8 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -433,7 +433,8 @@ xs_read_xdr_buf(struct socket *sock, struct msghdr *msg, int flags,
+ 		if (ret <= 0)
+ 			goto sock_err;
+ 		xs_flush_bvec(buf->bvec, ret, seek + buf->page_base);
+-		offset += ret - buf->page_base;
++		ret -= buf->page_base;
++		offset += ret;
+ 		if (offset == count || msg->msg_flags & (MSG_EOR|MSG_TRUNC))
+ 			goto out;
+ 		if (ret != want)
+@@ -3059,6 +3060,7 @@ static struct xprt_class	xs_local_transport = {
+ 	.owner		= THIS_MODULE,
+ 	.ident		= XPRT_TRANSPORT_LOCAL,
+ 	.setup		= xs_setup_local,
++	.netid		= { "" },
+ };
+ 
+ static struct xprt_class	xs_udp_transport = {
+@@ -3067,6 +3069,7 @@ static struct xprt_class	xs_udp_transport = {
+ 	.owner		= THIS_MODULE,
+ 	.ident		= XPRT_TRANSPORT_UDP,
+ 	.setup		= xs_setup_udp,
++	.netid		= { "udp", "udp6", "" },
+ };
+ 
+ static struct xprt_class	xs_tcp_transport = {
+@@ -3075,6 +3078,7 @@ static struct xprt_class	xs_tcp_transport = {
+ 	.owner		= THIS_MODULE,
+ 	.ident		= XPRT_TRANSPORT_TCP,
+ 	.setup		= xs_setup_tcp,
++	.netid		= { "tcp", "tcp6", "" },
+ };
+ 
+ static struct xprt_class	xs_bc_tcp_transport = {
+@@ -3083,6 +3087,7 @@ static struct xprt_class	xs_bc_tcp_transport = {
+ 	.owner		= THIS_MODULE,
+ 	.ident		= XPRT_TRANSPORT_BC_TCP,
+ 	.setup		= xs_setup_bc_tcp,
++	.netid		= { "" },
+ };
+ 
+ /**
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 8d0e49c46db37..3409f37d838b3 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -694,7 +694,7 @@ static  void cfg80211_scan_req_add_chan(struct cfg80211_scan_request *request,
+ static bool cfg80211_find_ssid_match(struct cfg80211_colocated_ap *ap,
+ 				     struct cfg80211_scan_request *request)
+ {
+-	u8 i;
++	int i;
+ 	u32 s_ssid;
+ 
+ 	for (i = 0; i < request->n_ssids; i++) {
+diff --git a/samples/bpf/lwt_len_hist.sh b/samples/bpf/lwt_len_hist.sh
+old mode 100644
+new mode 100755
+index 090b96eaf7f76..0eda9754f50b8
+--- a/samples/bpf/lwt_len_hist.sh
++++ b/samples/bpf/lwt_len_hist.sh
+@@ -8,6 +8,8 @@ VETH1=tst_lwt1b
+ TRACE_ROOT=/sys/kernel/debug/tracing
+ 
+ function cleanup {
++	# To reset saved histogram, remove pinned map
++	rm /sys/fs/bpf/tc/globals/lwt_len_hist_map
+ 	ip route del 192.168.253.2/32 dev $VETH0 2> /dev/null
+ 	ip link del $VETH0 2> /dev/null
+ 	ip link del $VETH1 2> /dev/null
+diff --git a/samples/bpf/test_lwt_bpf.sh b/samples/bpf/test_lwt_bpf.sh
+old mode 100644
+new mode 100755
+diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
+index 1149e94ca32fd..33c58de58626c 100644
+--- a/samples/bpf/xdpsock_user.c
++++ b/samples/bpf/xdpsock_user.c
+@@ -1250,6 +1250,8 @@ static void tx_only(struct xsk_socket_info *xsk, u32 *frame_nb, int batch_size)
+ 	while (xsk_ring_prod__reserve(&xsk->tx, batch_size, &idx) <
+ 				      batch_size) {
+ 		complete_tx_only(xsk, batch_size);
++		if (benchmark_done)
++			return;
+ 	}
+ 
+ 	for (i = 0; i < batch_size; i++) {
+diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
+index fab38b493cef7..0ad235ee96f91 100755
+--- a/scripts/checkpatch.pl
++++ b/scripts/checkpatch.pl
+@@ -4384,7 +4384,7 @@ sub process {
+ 			    $fix) {
+ 				fix_delete_line($fixlinenr, $rawline);
+ 				my $fixed_line = $rawline;
+-				$fixed_line =~ /(^..*$Type\s*$Ident\(.*\)\s*){(.*)$/;
++				$fixed_line =~ /(^..*$Type\s*$Ident\(.*\)\s*)\{(.*)$/;
+ 				my $line1 = $1;
+ 				my $line2 = $2;
+ 				fix_insert_line($fixlinenr, ltrim($line1));
+diff --git a/scripts/kconfig/preprocess.c b/scripts/kconfig/preprocess.c
+index 0243086fb1685..0590f86df6e40 100644
+--- a/scripts/kconfig/preprocess.c
++++ b/scripts/kconfig/preprocess.c
+@@ -114,7 +114,7 @@ static char *do_error_if(int argc, char *argv[])
+ 	if (!strcmp(argv[0], "y"))
+ 		pperror("%s", argv[1]);
+ 
+-	return NULL;
++	return xstrdup("");
+ }
+ 
+ static char *do_filename(int argc, char *argv[])
+diff --git a/scripts/kernel-doc b/scripts/kernel-doc
+index f699cf05d4098..6325bec3f66f8 100755
+--- a/scripts/kernel-doc
++++ b/scripts/kernel-doc
+@@ -1390,7 +1390,7 @@ sub dump_enum($$) {
+ 	$members = $2;
+     }
+ 
+-    if ($declaration_name) {
++    if ($members) {
+ 	my %_members;
+ 
+ 	$members =~ s/\s+$//;
+@@ -1431,7 +1431,7 @@ sub dump_enum($$) {
+     }
+ }
+ 
+-my $typedef_type = qr { ((?:\s+[\w\*]+){1,8})\s* }x;
++my $typedef_type = qr { ((?:\s+[\w\*]+\b){1,8})\s* }x;
+ my $typedef_ident = qr { \*?\s*(\w\S+)\s* }x;
+ my $typedef_args = qr { \s*\((.*)\); }x;
+ 
+diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c
+index 21989fa0c1074..f6a7e9643b546 100644
+--- a/security/integrity/ima/ima_crypto.c
++++ b/security/integrity/ima/ima_crypto.c
+@@ -537,7 +537,7 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
+ 	loff_t i_size;
+ 	int rc;
+ 	struct file *f = file;
+-	bool new_file_instance = false, modified_mode = false;
++	bool new_file_instance = false;
+ 
+ 	/*
+ 	 * For consistency, fail file's opened with the O_DIRECT flag on
+@@ -555,18 +555,10 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
+ 				O_TRUNC | O_CREAT | O_NOCTTY | O_EXCL);
+ 		flags |= O_RDONLY;
+ 		f = dentry_open(&file->f_path, flags, file->f_cred);
+-		if (IS_ERR(f)) {
+-			/*
+-			 * Cannot open the file again, lets modify f_mode
+-			 * of original and continue
+-			 */
+-			pr_info_ratelimited("Unable to reopen file for reading.\n");
+-			f = file;
+-			f->f_mode |= FMODE_READ;
+-			modified_mode = true;
+-		} else {
+-			new_file_instance = true;
+-		}
++		if (IS_ERR(f))
++			return PTR_ERR(f);
++
++		new_file_instance = true;
+ 	}
+ 
+ 	i_size = i_size_read(file_inode(f));
+@@ -581,8 +573,6 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
+ out:
+ 	if (new_file_instance)
+ 		fput(f);
+-	else if (modified_mode)
+-		f->f_mode &= ~FMODE_READ;
+ 	return rc;
+ }
+ 
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 6b1826fc3658e..c46312710e73e 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -1451,7 +1451,7 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
+ 			 * inode_doinit with a dentry, before these inodes could
+ 			 * be used again by userspace.
+ 			 */
+-			goto out;
++			goto out_invalid;
+ 		}
+ 
+ 		rc = inode_doinit_use_xattr(inode, dentry, sbsec->def_sid,
+@@ -1508,7 +1508,7 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
+ 			 * could be used again by userspace.
+ 			 */
+ 			if (!dentry)
+-				goto out;
++				goto out_invalid;
+ 			rc = selinux_genfs_get_sid(dentry, sclass,
+ 						   sbsec->flags, &sid);
+ 			if (rc) {
+@@ -1533,11 +1533,10 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
+ out:
+ 	spin_lock(&isec->lock);
+ 	if (isec->initialized == LABEL_PENDING) {
+-		if (!sid || rc) {
++		if (rc) {
+ 			isec->initialized = LABEL_INVALID;
+ 			goto out_unlock;
+ 		}
+-
+ 		isec->initialized = LABEL_INITIALIZED;
+ 		isec->sid = sid;
+ 	}
+@@ -1545,6 +1544,15 @@ out:
+ out_unlock:
+ 	spin_unlock(&isec->lock);
+ 	return rc;
++
++out_invalid:
++	spin_lock(&isec->lock);
++	if (isec->initialized == LABEL_PENDING) {
++		isec->initialized = LABEL_INVALID;
++		isec->sid = sid;
++	}
++	spin_unlock(&isec->lock);
++	return 0;
+ }
+ 
+ /* Convert a Linux signal to an access vector. */
+diff --git a/security/smack/smack_access.c b/security/smack/smack_access.c
+index efe2406a39609..7eabb448acab4 100644
+--- a/security/smack/smack_access.c
++++ b/security/smack/smack_access.c
+@@ -688,9 +688,10 @@ bool smack_privileged_cred(int cap, const struct cred *cred)
+ bool smack_privileged(int cap)
+ {
+ 	/*
+-	 * All kernel tasks are privileged
++	 * Kernel threads may not have credentials we can use.
++	 * The io_uring kernel threads do have reliable credentials.
+ 	 */
+-	if (unlikely(current->flags & PF_KTHREAD))
++	if ((current->flags & (PF_KTHREAD | PF_IO_WORKER)) == PF_KTHREAD)
+ 		return true;
+ 
+ 	return smack_privileged_cred(cap, current_cred());
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 0aeeb6244ff6c..0f335162f87c7 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -77,7 +77,8 @@ static void snd_malloc_dev_iram(struct snd_dma_buffer *dmab, size_t size)
+ 	/* Assign the pool into private_data field */
+ 	dmab->private_data = pool;
+ 
+-	dmab->area = gen_pool_dma_alloc(pool, size, &dmab->addr);
++	dmab->area = gen_pool_dma_alloc_align(pool, size, &dmab->addr,
++					PAGE_SIZE);
+ }
+ 
+ /**
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index de1917484647e..142fc751a8477 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -693,6 +693,8 @@ static int snd_pcm_oss_period_size(struct snd_pcm_substream *substream,
+ 
+ 	oss_buffer_size = snd_pcm_plug_client_size(substream,
+ 						   snd_pcm_hw_param_value_max(slave_params, SNDRV_PCM_HW_PARAM_BUFFER_SIZE, NULL)) * oss_frame_size;
++	if (!oss_buffer_size)
++		return -EINVAL;
+ 	oss_buffer_size = rounddown_pow_of_two(oss_buffer_size);
+ 	if (atomic_read(&substream->mmap_count)) {
+ 		if (oss_buffer_size > runtime->oss.mmap_bytes)
+@@ -728,17 +730,21 @@ static int snd_pcm_oss_period_size(struct snd_pcm_substream *substream,
+ 
+ 	min_period_size = snd_pcm_plug_client_size(substream,
+ 						   snd_pcm_hw_param_value_min(slave_params, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, NULL));
+-	min_period_size *= oss_frame_size;
+-	min_period_size = roundup_pow_of_two(min_period_size);
+-	if (oss_period_size < min_period_size)
+-		oss_period_size = min_period_size;
++	if (min_period_size) {
++		min_period_size *= oss_frame_size;
++		min_period_size = roundup_pow_of_two(min_period_size);
++		if (oss_period_size < min_period_size)
++			oss_period_size = min_period_size;
++	}
+ 
+ 	max_period_size = snd_pcm_plug_client_size(substream,
+ 						   snd_pcm_hw_param_value_max(slave_params, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, NULL));
+-	max_period_size *= oss_frame_size;
+-	max_period_size = rounddown_pow_of_two(max_period_size);
+-	if (oss_period_size > max_period_size)
+-		oss_period_size = max_period_size;
++	if (max_period_size) {
++		max_period_size *= oss_frame_size;
++		max_period_size = rounddown_pow_of_two(max_period_size);
++		if (oss_period_size > max_period_size)
++			oss_period_size = max_period_size;
++	}
+ 
+ 	oss_periods = oss_buffer_size / oss_period_size;
+ 
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 4bb58e8b08a85..687216e745267 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -1803,7 +1803,7 @@ int snd_hda_codec_reset(struct hda_codec *codec)
+ 		return -EBUSY;
+ 
+ 	/* OK, let it free */
+-	snd_hdac_device_unregister(&codec->core);
++	device_release_driver(hda_codec_dev(codec));
+ 
+ 	/* allow device access again */
+ 	snd_hda_unlock_devices(bus);
+diff --git a/sound/pci/hda/hda_sysfs.c b/sound/pci/hda/hda_sysfs.c
+index eb8ec109d7adb..d5ffcba794e50 100644
+--- a/sound/pci/hda/hda_sysfs.c
++++ b/sound/pci/hda/hda_sysfs.c
+@@ -139,7 +139,7 @@ static int reconfig_codec(struct hda_codec *codec)
+ 			   "The codec is being used, can't reconfigure.\n");
+ 		goto error;
+ 	}
+-	err = snd_hda_codec_configure(codec);
++	err = device_reprobe(hda_codec_dev(codec));
+ 	if (err < 0)
+ 		goto error;
+ 	err = snd_card_register(codec->card);
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index d8370a417e3d4..ee500e46dd4f6 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -95,7 +95,7 @@ enum {
+ };
+ 
+ /* Strings for Input Source Enum Control */
+-static const char *const in_src_str[3] = {"Rear Mic", "Line", "Front Mic" };
++static const char *const in_src_str[3] = { "Microphone", "Line In", "Front Microphone" };
+ #define IN_SRC_NUM_OF_INPUTS 3
+ enum {
+ 	REAR_MIC,
+@@ -1223,7 +1223,7 @@ static const struct hda_pintbl ae5_pincfgs[] = {
+ 	{ 0x0e, 0x01c510f0 }, /* SPDIF In */
+ 	{ 0x0f, 0x01017114 }, /* Port A -- Rear L/R. */
+ 	{ 0x10, 0x01017012 }, /* Port D -- Center/LFE or FP Hp */
+-	{ 0x11, 0x01a170ff }, /* Port B -- LineMicIn2 / Rear Headphone */
++	{ 0x11, 0x012170ff }, /* Port B -- LineMicIn2 / Rear Headphone */
+ 	{ 0x12, 0x01a170f0 }, /* Port C -- LineIn1 */
+ 	{ 0x13, 0x908700f0 }, /* What U Hear In*/
+ 	{ 0x18, 0x50d000f0 }, /* N/A */
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index b0068f8ca46dd..2ddc27db8c012 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -78,6 +78,7 @@ struct hdmi_spec_per_pin {
+ 	int pcm_idx; /* which pcm is attached. -1 means no pcm is attached */
+ 	int repoll_count;
+ 	bool setup; /* the stream has been set up by prepare callback */
++	bool silent_stream;
+ 	int channels; /* current number of channels */
+ 	bool non_pcm;
+ 	bool chmap_set;		/* channel-map override by ALSA API? */
+@@ -979,6 +980,13 @@ static int hdmi_choose_cvt(struct hda_codec *codec,
+ 	else
+ 		per_pin = get_pin(spec, pin_idx);
+ 
++	if (per_pin && per_pin->silent_stream) {
++		cvt_idx = cvt_nid_to_cvt_index(codec, per_pin->cvt_nid);
++		if (cvt_id)
++			*cvt_id = cvt_idx;
++		return 0;
++	}
++
+ 	/* Dynamically assign converter to stream */
+ 	for (cvt_idx = 0; cvt_idx < spec->num_cvts; cvt_idx++) {
+ 		per_cvt = get_cvt(spec, cvt_idx);
+@@ -1642,30 +1650,95 @@ static void hdmi_present_sense_via_verbs(struct hdmi_spec_per_pin *per_pin,
+ 	snd_hda_power_down_pm(codec);
+ }
+ 
++#define I915_SILENT_RATE		48000
++#define I915_SILENT_CHANNELS		2
++#define I915_SILENT_FORMAT		SNDRV_PCM_FORMAT_S16_LE
++#define I915_SILENT_FORMAT_BITS	16
++#define I915_SILENT_FMT_MASK		0xf
++
+ static void silent_stream_enable(struct hda_codec *codec,
+-				struct hdmi_spec_per_pin *per_pin)
++				 struct hdmi_spec_per_pin *per_pin)
+ {
+-	unsigned int newval, oldval;
+-
+-	codec_dbg(codec, "hdmi: enabling silent stream for NID %d\n",
+-			per_pin->pin_nid);
++	struct hdmi_spec *spec = codec->spec;
++	struct hdmi_spec_per_cvt *per_cvt;
++	int cvt_idx, pin_idx, err;
++	unsigned int format;
+ 
+ 	mutex_lock(&per_pin->lock);
+ 
+-	if (!per_pin->channels)
+-		per_pin->channels = 2;
++	if (per_pin->setup) {
++		codec_dbg(codec, "hdmi: PCM already open, no silent stream\n");
++		goto unlock_out;
++	}
+ 
+-	oldval = snd_hda_codec_read(codec, per_pin->pin_nid, 0,
+-			AC_VERB_GET_CONV, 0);
+-	newval = (oldval & 0xF0) | 0xF;
+-	snd_hda_codec_write(codec, per_pin->pin_nid, 0,
+-			AC_VERB_SET_CHANNEL_STREAMID, newval);
++	pin_idx = pin_id_to_pin_index(codec, per_pin->pin_nid, per_pin->dev_id);
++	err = hdmi_choose_cvt(codec, pin_idx, &cvt_idx);
++	if (err) {
++		codec_err(codec, "hdmi: no free converter to enable silent mode\n");
++		goto unlock_out;
++	}
++
++	per_cvt = get_cvt(spec, cvt_idx);
++	per_cvt->assigned = 1;
++	per_pin->cvt_nid = per_cvt->cvt_nid;
++	per_pin->silent_stream = true;
+ 
++	codec_dbg(codec, "hdmi: enabling silent stream pin-NID=0x%x cvt-NID=0x%x\n",
++		  per_pin->pin_nid, per_cvt->cvt_nid);
++
++	snd_hda_set_dev_select(codec, per_pin->pin_nid, per_pin->dev_id);
++	snd_hda_codec_write_cache(codec, per_pin->pin_nid, 0,
++				  AC_VERB_SET_CONNECT_SEL,
++				  per_pin->mux_idx);
++
++	/* configure unused pins to choose other converters */
++	pin_cvt_fixup(codec, per_pin, 0);
++
++	snd_hdac_sync_audio_rate(&codec->core, per_pin->pin_nid,
++				 per_pin->dev_id, I915_SILENT_RATE);
++
++	/* trigger silent stream generation in hw */
++	format = snd_hdac_calc_stream_format(I915_SILENT_RATE, I915_SILENT_CHANNELS,
++					     I915_SILENT_FORMAT, I915_SILENT_FORMAT_BITS, 0);
++	snd_hda_codec_setup_stream(codec, per_pin->cvt_nid,
++				   I915_SILENT_FMT_MASK, I915_SILENT_FMT_MASK, format);
++	usleep_range(100, 200);
++	snd_hda_codec_setup_stream(codec, per_pin->cvt_nid, I915_SILENT_FMT_MASK, 0, format);
++
++	per_pin->channels = I915_SILENT_CHANNELS;
+ 	hdmi_setup_audio_infoframe(codec, per_pin, per_pin->non_pcm);
+ 
++ unlock_out:
+ 	mutex_unlock(&per_pin->lock);
+ }
+ 
++static void silent_stream_disable(struct hda_codec *codec,
++				  struct hdmi_spec_per_pin *per_pin)
++{
++	struct hdmi_spec *spec = codec->spec;
++	struct hdmi_spec_per_cvt *per_cvt;
++	int cvt_idx;
++
++	mutex_lock(&per_pin->lock);
++	if (!per_pin->silent_stream)
++		goto unlock_out;
++
++	codec_dbg(codec, "HDMI: disable silent stream on pin-NID=0x%x cvt-NID=0x%x\n",
++		  per_pin->pin_nid, per_pin->cvt_nid);
++
++	cvt_idx = cvt_nid_to_cvt_index(codec, per_pin->cvt_nid);
++	if (cvt_idx >= 0 && cvt_idx < spec->num_cvts) {
++		per_cvt = get_cvt(spec, cvt_idx);
++		per_cvt->assigned = 0;
++	}
++
++	per_pin->cvt_nid = 0;
++	per_pin->silent_stream = false;
++
++ unlock_out:
++	mutex_unlock(&spec->pcm_lock);
++}
++
+ /* update ELD and jack state via audio component */
+ static void sync_eld_via_acomp(struct hda_codec *codec,
+ 			       struct hdmi_spec_per_pin *per_pin)
+@@ -1701,6 +1774,7 @@ static void sync_eld_via_acomp(struct hda_codec *codec,
+ 				pm_ret);
+ 			silent_stream_enable(codec, per_pin);
+ 		} else if (monitor_prev && !monitor_next) {
++			silent_stream_disable(codec, per_pin);
+ 			pm_ret = snd_hda_power_down_pm(codec);
+ 			if (pm_ret < 0)
+ 				codec_err(codec,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8616c56248707..dde5ba2095415 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2516,6 +2516,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x11f7, "MSI-GE63", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950),
++	SND_PCI_QUIRK(0x1462, 0x1229, "MSI-GP73", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1275, "MSI-GL63", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1276, "MSI-GL73", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1293, "MSI-GP65", ALC1220_FIXUP_CLEVO_P950),
+@@ -3104,6 +3105,7 @@ static void alc_disable_headset_jack_key(struct hda_codec *codec)
+ 	case 0x10ec0215:
+ 	case 0x10ec0225:
+ 	case 0x10ec0285:
++	case 0x10ec0287:
+ 	case 0x10ec0295:
+ 	case 0x10ec0289:
+ 	case 0x10ec0299:
+@@ -3130,6 +3132,7 @@ static void alc_enable_headset_jack_key(struct hda_codec *codec)
+ 	case 0x10ec0215:
+ 	case 0x10ec0225:
+ 	case 0x10ec0285:
++	case 0x10ec0287:
+ 	case 0x10ec0295:
+ 	case 0x10ec0289:
+ 	case 0x10ec0299:
+@@ -6366,6 +6369,7 @@ enum {
+ 	ALC287_FIXUP_HP_GPIO_LED,
+ 	ALC256_FIXUP_HP_HEADSET_MIC,
+ 	ALC236_FIXUP_DELL_AIO_HEADSET_MIC,
++	ALC282_FIXUP_ACER_DISABLE_LINEOUT,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7789,6 +7793,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE
+ 	},
++	[ALC282_FIXUP_ACER_DISABLE_LINEOUT] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x1b, 0x411111f0 },
++			{ 0x18, 0x01a1913c }, /* use as headset mic, without its own jack detect */
++			{ },
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7803,11 +7817,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x0762, "Acer Aspire E1-472", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+ 	SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+ 	SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
++	SND_PCI_QUIRK(0x1025, 0x101c, "Acer Veriton N2510G", ALC269_FIXUP_LIFEBOOK),
+ 	SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x1025, 0x1099, "Acer Aspire E5-523G", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x110e, "Acer Aspire ES1-432", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1025, 0x1166, "Acer Veriton N4640G", ALC269_FIXUP_LIFEBOOK),
++	SND_PCI_QUIRK(0x1025, 0x1167, "Acer Veriton N6640G", ALC269_FIXUP_LIFEBOOK),
+ 	SND_PCI_QUIRK(0x1025, 0x1246, "Acer Predator Helios 500", ALC299_FIXUP_PREDATOR_SPK),
+ 	SND_PCI_QUIRK(0x1025, 0x1247, "Acer vCopperbox", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS),
+ 	SND_PCI_QUIRK(0x1025, 0x1248, "Acer Veriton N4660G", ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE),
+@@ -7868,6 +7885,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x09bf, "Dell Precision", ALC233_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0a2e, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0a30, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1028, 0x0a58, "Dell Precision 3650 Tower", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -7956,6 +7974,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x10d0, "ASUS X540LA/X540LJ", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x1043, 0x11c0, "ASUS X556UR", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1043, 0x1271, "ASUS X430UN", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1290, "ASUS X441SA", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
+@@ -7976,6 +7995,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x1043, 0x125e, "ASUS Q524UQK", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
+@@ -8013,6 +8033,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x152d, 0x1082, "Quanta NL3", ALC269_FIXUP_LIFEBOOK),
+ 	SND_PCI_QUIRK(0x1558, 0x1323, "Clevo N130ZU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x1325, "System76 Darter Pro (darp5)", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x1401, "Clevo L140[CZ]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -8560,6 +8581,22 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x12, 0x90a60140},
+ 		{0x19, 0x04a11030},
+ 		{0x21, 0x04211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0282, 0x1025, "Acer", ALC282_FIXUP_ACER_DISABLE_LINEOUT,
++		ALC282_STANDARD_PINS,
++		{0x12, 0x90a609c0},
++		{0x18, 0x03a11830},
++		{0x19, 0x04a19831},
++		{0x1a, 0x0481303f},
++		{0x1b, 0x04211020},
++		{0x21, 0x0321101f}),
++	SND_HDA_PIN_QUIRK(0x10ec0282, 0x1025, "Acer", ALC282_FIXUP_ACER_DISABLE_LINEOUT,
++		ALC282_STANDARD_PINS,
++		{0x12, 0x90a60940},
++		{0x18, 0x03a11830},
++		{0x19, 0x04a19831},
++		{0x1a, 0x0481303f},
++		{0x1b, 0x04211020},
++		{0x21, 0x0321101f}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0283, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		ALC282_STANDARD_PINS,
+ 		{0x12, 0x90a60130},
+@@ -8573,11 +8610,20 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x12, 0x90a60130},
+ 		{0x19, 0x03a11020},
+ 		{0x21, 0x0321101f}),
++	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
++		{0x14, 0x90170110},
++		{0x19, 0x04a11040},
++		{0x21, 0x04211020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
+ 		{0x12, 0x90a60130},
+ 		{0x14, 0x90170110},
+ 		{0x19, 0x04a11040},
+ 		{0x21, 0x04211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0287, 0x17aa, "Lenovo", ALC285_FIXUP_THINKPAD_HEADSET_JACK,
++		{0x14, 0x90170110},
++		{0x17, 0x90170111},
++		{0x19, 0x03a11030},
++		{0x21, 0x03211020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0286, 0x1025, "Acer", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE,
+ 		{0x12, 0x90a60130},
+ 		{0x17, 0x90170110},
+diff --git a/sound/soc/amd/acp-da7219-max98357a.c b/sound/soc/amd/acp-da7219-max98357a.c
+index a7702e64ec512..849288d01c6b4 100644
+--- a/sound/soc/amd/acp-da7219-max98357a.c
++++ b/sound/soc/amd/acp-da7219-max98357a.c
+@@ -73,8 +73,13 @@ static int cz_da7219_init(struct snd_soc_pcm_runtime *rtd)
+ 		return ret;
+ 	}
+ 
+-	da7219_dai_wclk = clk_get(component->dev, "da7219-dai-wclk");
+-	da7219_dai_bclk = clk_get(component->dev, "da7219-dai-bclk");
++	da7219_dai_wclk = devm_clk_get(component->dev, "da7219-dai-wclk");
++	if (IS_ERR(da7219_dai_wclk))
++		return PTR_ERR(da7219_dai_wclk);
++
++	da7219_dai_bclk = devm_clk_get(component->dev, "da7219-dai-bclk");
++	if (IS_ERR(da7219_dai_bclk))
++		return PTR_ERR(da7219_dai_bclk);
+ 
+ 	ret = snd_soc_card_jack_new(card, "Headset Jack",
+ 				SND_JACK_HEADSET | SND_JACK_LINEOUT |
+diff --git a/sound/soc/amd/raven/pci-acp3x.c b/sound/soc/amd/raven/pci-acp3x.c
+index 31b797c8bfe64..77f2d93896067 100644
+--- a/sound/soc/amd/raven/pci-acp3x.c
++++ b/sound/soc/amd/raven/pci-acp3x.c
+@@ -118,6 +118,10 @@ static int snd_acp3x_probe(struct pci_dev *pci,
+ 	int ret, i;
+ 	u32 addr, val;
+ 
++	/* Raven device detection */
++	if (pci->revision != 0x00)
++		return -ENODEV;
++
+ 	if (pci_enable_device(pci)) {
+ 		dev_err(&pci->dev, "pci_enable_device failed\n");
+ 		return -ENODEV;
+diff --git a/sound/soc/amd/renoir/rn-pci-acp3x.c b/sound/soc/amd/renoir/rn-pci-acp3x.c
+index b943e59fc3024..338b78c514ec9 100644
+--- a/sound/soc/amd/renoir/rn-pci-acp3x.c
++++ b/sound/soc/amd/renoir/rn-pci-acp3x.c
+@@ -6,6 +6,7 @@
+ 
+ #include <linux/pci.h>
+ #include <linux/acpi.h>
++#include <linux/dmi.h>
+ #include <linux/module.h>
+ #include <linux/io.h>
+ #include <linux/delay.h>
+@@ -20,14 +21,13 @@ module_param(acp_power_gating, int, 0644);
+ MODULE_PARM_DESC(acp_power_gating, "Enable acp power gating");
+ 
+ /**
+- * dmic_acpi_check = -1 - Checks ACPI method to know DMIC hardware status runtime
+- *                 = 0 - Skips the DMIC device creation and returns probe failure
+- *                 = 1 - Assumes that platform has DMIC support and skips ACPI
+- *                       method check
++ * dmic_acpi_check = -1 - Use ACPI/DMI method to detect the DMIC hardware presence at runtime
++ *                 =  0 - Skip the DMIC device creation and return probe failure
++ *                 =  1 - Force DMIC support
+  */
+ static int dmic_acpi_check = ACP_DMIC_AUTO;
+ module_param(dmic_acpi_check, bint, 0644);
+-MODULE_PARM_DESC(dmic_acpi_check, "checks Dmic hardware runtime");
++MODULE_PARM_DESC(dmic_acpi_check, "Digital microphone presence (-1=auto, 0=none, 1=force)");
+ 
+ struct acp_dev_data {
+ 	void __iomem *acp_base;
+@@ -163,6 +163,17 @@ static int rn_acp_deinit(void __iomem *acp_base)
+ 	return 0;
+ }
+ 
++static const struct dmi_system_id rn_acp_quirk_table[] = {
++	{
++		/* Lenovo IdeaPad Flex 5 14ARE05, IdeaPad 5 15ARE05 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "LNVNB161216"),
++		}
++	},
++	{}
++};
++
+ static int snd_rn_acp_probe(struct pci_dev *pci,
+ 			    const struct pci_device_id *pci_id)
+ {
+@@ -172,10 +183,15 @@ static int snd_rn_acp_probe(struct pci_dev *pci,
+ 	acpi_handle handle;
+ 	acpi_integer dmic_status;
+ #endif
++	const struct dmi_system_id *dmi_id;
+ 	unsigned int irqflags;
+ 	int ret, index;
+ 	u32 addr;
+ 
++	/* Renoir device check */
++	if (pci->revision != 0x01)
++		return -ENODEV;
++
+ 	if (pci_enable_device(pci)) {
+ 		dev_err(&pci->dev, "pci_enable_device failed\n");
+ 		return -ENODEV;
+@@ -232,6 +248,12 @@ static int snd_rn_acp_probe(struct pci_dev *pci,
+ 			goto de_init;
+ 		}
+ #endif
++		dmi_id = dmi_first_match(rn_acp_quirk_table);
++		if (dmi_id && !dmi_id->driver_data) {
++			dev_info(&pci->dev, "ACPI settings override using DMI (ACP mic is not present)");
++			ret = -ENODEV;
++			goto de_init;
++		}
+ 	}
+ 
+ 	adata->res = devm_kzalloc(&pci->dev,
+diff --git a/sound/soc/atmel/Kconfig b/sound/soc/atmel/Kconfig
+index bd8854bfd2ee4..142373ec411ad 100644
+--- a/sound/soc/atmel/Kconfig
++++ b/sound/soc/atmel/Kconfig
+@@ -148,6 +148,7 @@ config SND_MCHP_SOC_SPDIFTX
+ config SND_MCHP_SOC_SPDIFRX
+ 	tristate "Microchip ASoC driver for boards using S/PDIF RX"
+ 	depends on OF && (ARCH_AT91 || COMPILE_TEST)
++	depends on COMMON_CLK
+ 	select SND_SOC_GENERIC_DMAENGINE_PCM
+ 	select REGMAP_MMIO
+ 	help
+diff --git a/sound/soc/codecs/cros_ec_codec.c b/sound/soc/codecs/cros_ec_codec.c
+index 28f039adfa138..5c3b7e5e55d23 100644
+--- a/sound/soc/codecs/cros_ec_codec.c
++++ b/sound/soc/codecs/cros_ec_codec.c
+@@ -332,7 +332,7 @@ static int i2s_rx_event(struct snd_soc_dapm_widget *w,
+ 		snd_soc_dapm_to_component(w->dapm);
+ 	struct cros_ec_codec_priv *priv =
+ 		snd_soc_component_get_drvdata(component);
+-	struct ec_param_ec_codec_i2s_rx p;
++	struct ec_param_ec_codec_i2s_rx p = {};
+ 
+ 	switch (event) {
+ 	case SND_SOC_DAPM_PRE_PMU:
+diff --git a/sound/soc/codecs/cx2072x.c b/sound/soc/codecs/cx2072x.c
+index 2ad00ed21bec6..2f10991a8bdb5 100644
+--- a/sound/soc/codecs/cx2072x.c
++++ b/sound/soc/codecs/cx2072x.c
+@@ -1579,7 +1579,7 @@ static struct snd_soc_dai_driver soc_codec_cx2072x_dai[] = {
+ 		.id	= CX2072X_DAI_DSP,
+ 		.probe = cx2072x_dsp_dai_probe,
+ 		.playback = {
+-			.stream_name = "Playback",
++			.stream_name = "DSP Playback",
+ 			.channels_min = 2,
+ 			.channels_max = 2,
+ 			.rates = CX2072X_RATES_DSP,
+@@ -1591,7 +1591,7 @@ static struct snd_soc_dai_driver soc_codec_cx2072x_dai[] = {
+ 		.name = "cx2072x-aec",
+ 		.id	= 3,
+ 		.capture = {
+-			.stream_name = "Capture",
++			.stream_name = "AEC Capture",
+ 			.channels_min = 2,
+ 			.channels_max = 2,
+ 			.rates = CX2072X_RATES_DSP,
+diff --git a/sound/soc/codecs/max98390.c b/sound/soc/codecs/max98390.c
+index ff5cc9bbec291..bb736c44e68a3 100644
+--- a/sound/soc/codecs/max98390.c
++++ b/sound/soc/codecs/max98390.c
+@@ -784,6 +784,7 @@ static int max98390_dsm_init(struct snd_soc_component *component)
+ 	if (fw->size < MAX98390_DSM_PARAM_MIN_SIZE) {
+ 		dev_err(component->dev,
+ 			"param fw is invalid.\n");
++		ret = -EINVAL;
+ 		goto err_alloc;
+ 	}
+ 	dsm_param = (char *)fw->data;
+@@ -794,6 +795,7 @@ static int max98390_dsm_init(struct snd_soc_component *component)
+ 		fw->size < param_size + MAX98390_DSM_PAYLOAD_OFFSET) {
+ 		dev_err(component->dev,
+ 			"param fw is invalid.\n");
++		ret = -EINVAL;
+ 		goto err_alloc;
+ 	}
+ 	regmap_write(max98390->regmap, MAX98390_R203A_AMP_EN, 0x80);
+diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
+index fc9ea198ac799..f57884113406b 100644
+--- a/sound/soc/codecs/wm8994.c
++++ b/sound/soc/codecs/wm8994.c
+@@ -4645,8 +4645,12 @@ static int wm8994_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(&pdev->dev);
+ 	pm_runtime_idle(&pdev->dev);
+ 
+-	return devm_snd_soc_register_component(&pdev->dev, &soc_component_dev_wm8994,
++	ret = devm_snd_soc_register_component(&pdev->dev, &soc_component_dev_wm8994,
+ 			wm8994_dai, ARRAY_SIZE(wm8994_dai));
++	if (ret < 0)
++		pm_runtime_disable(&pdev->dev);
++
++	return ret;
+ }
+ 
+ static int wm8994_remove(struct platform_device *pdev)
+diff --git a/sound/soc/codecs/wm8997.c b/sound/soc/codecs/wm8997.c
+index 37e4bb3dbd8a9..229f2986cd96b 100644
+--- a/sound/soc/codecs/wm8997.c
++++ b/sound/soc/codecs/wm8997.c
+@@ -1177,6 +1177,8 @@ static int wm8997_probe(struct platform_device *pdev)
+ 		goto err_spk_irqs;
+ 	}
+ 
++	return ret;
++
+ err_spk_irqs:
+ 	arizona_free_spk_irqs(arizona);
+ 
+diff --git a/sound/soc/codecs/wm8998.c b/sound/soc/codecs/wm8998.c
+index f6c5cc80c970b..5413254295b70 100644
+--- a/sound/soc/codecs/wm8998.c
++++ b/sound/soc/codecs/wm8998.c
+@@ -1375,7 +1375,7 @@ static int wm8998_probe(struct platform_device *pdev)
+ 
+ 	ret = arizona_init_spk_irqs(arizona);
+ 	if (ret < 0)
+-		return ret;
++		goto err_pm_disable;
+ 
+ 	ret = devm_snd_soc_register_component(&pdev->dev,
+ 					      &soc_component_dev_wm8998,
+@@ -1390,6 +1390,8 @@ static int wm8998_probe(struct platform_device *pdev)
+ 
+ err_spk_irqs:
+ 	arizona_free_spk_irqs(arizona);
++err_pm_disable:
++	pm_runtime_disable(&pdev->dev);
+ 
+ 	return ret;
+ }
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index e61d00486c653..dec8716aa8ef5 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -1519,7 +1519,7 @@ static int wm_adsp_create_control(struct wm_adsp *dsp,
+ 	ctl_work = kzalloc(sizeof(*ctl_work), GFP_KERNEL);
+ 	if (!ctl_work) {
+ 		ret = -ENOMEM;
+-		goto err_ctl_cache;
++		goto err_list_del;
+ 	}
+ 
+ 	ctl_work->dsp = dsp;
+@@ -1529,7 +1529,8 @@ static int wm_adsp_create_control(struct wm_adsp *dsp,
+ 
+ 	return 0;
+ 
+-err_ctl_cache:
++err_list_del:
++	list_del(&ctl->list);
+ 	kfree(ctl->cache);
+ err_ctl_subname:
+ 	kfree(ctl->subname);
+diff --git a/sound/soc/intel/Kconfig b/sound/soc/intel/Kconfig
+index a5b446d5af19f..c1bf69a0bcfe1 100644
+--- a/sound/soc/intel/Kconfig
++++ b/sound/soc/intel/Kconfig
+@@ -198,7 +198,7 @@ endif ## SND_SOC_INTEL_SST_TOPLEVEL || SND_SOC_SOF_INTEL_TOPLEVEL
+ 
+ config SND_SOC_INTEL_KEEMBAY
+ 	tristate "Keembay Platforms"
+-	depends on ARM64 || COMPILE_TEST
++	depends on ARCH_KEEMBAY || COMPILE_TEST
+ 	depends on COMMON_CLK
+ 	help
+ 	  If you have a Intel Keembay platform then enable this option
+diff --git a/sound/soc/intel/boards/sof_maxim_common.c b/sound/soc/intel/boards/sof_maxim_common.c
+index b6e63ea13d64e..c2a9757181fe1 100644
+--- a/sound/soc/intel/boards/sof_maxim_common.c
++++ b/sound/soc/intel/boards/sof_maxim_common.c
+@@ -49,11 +49,11 @@ static int max98373_hw_params(struct snd_pcm_substream *substream,
+ 	for_each_rtd_codec_dais(rtd, j, codec_dai) {
+ 		if (!strcmp(codec_dai->component->name, MAX_98373_DEV0_NAME)) {
+ 			/* DEV0 tdm slot configuration */
+-			snd_soc_dai_set_tdm_slot(codec_dai, 0x03, 3, 8, 24);
++			snd_soc_dai_set_tdm_slot(codec_dai, 0x03, 3, 8, 32);
+ 		}
+ 		if (!strcmp(codec_dai->component->name, MAX_98373_DEV1_NAME)) {
+ 			/* DEV1 tdm slot configuration */
+-			snd_soc_dai_set_tdm_slot(codec_dai, 0x0C, 3, 8, 24);
++			snd_soc_dai_set_tdm_slot(codec_dai, 0x0C, 3, 8, 32);
+ 		}
+ 	}
+ 	return 0;
+diff --git a/sound/soc/jz4740/jz4740-i2s.c b/sound/soc/jz4740/jz4740-i2s.c
+index c7bd20104b204..0793e284d0e78 100644
+--- a/sound/soc/jz4740/jz4740-i2s.c
++++ b/sound/soc/jz4740/jz4740-i2s.c
+@@ -312,10 +312,14 @@ static int jz4740_i2s_set_sysclk(struct snd_soc_dai *dai, int clk_id,
+ 	switch (clk_id) {
+ 	case JZ4740_I2S_CLKSRC_EXT:
+ 		parent = clk_get(NULL, "ext");
++		if (IS_ERR(parent))
++			return PTR_ERR(parent);
+ 		clk_set_parent(i2s->clk_i2s, parent);
+ 		break;
+ 	case JZ4740_I2S_CLKSRC_PLL:
+ 		parent = clk_get(NULL, "pll half");
++		if (IS_ERR(parent))
++			return PTR_ERR(parent);
+ 		clk_set_parent(i2s->clk_i2s, parent);
+ 		ret = clk_set_rate(i2s->clk_i2s, freq);
+ 		break;
+diff --git a/sound/soc/meson/Kconfig b/sound/soc/meson/Kconfig
+index 363dc3b1bbe47..ce0cbdc69b2ec 100644
+--- a/sound/soc/meson/Kconfig
++++ b/sound/soc/meson/Kconfig
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ menu "ASoC support for Amlogic platforms"
+-	depends on ARCH_MESON || COMPILE_TEST
++	depends on ARCH_MESON || (COMPILE_TEST && COMMON_CLK)
+ 
+ config SND_MESON_AIU
+ 	tristate "Amlogic AIU"
+diff --git a/sound/soc/qcom/Kconfig b/sound/soc/qcom/Kconfig
+index 2696ffcba880f..a824f793811be 100644
+--- a/sound/soc/qcom/Kconfig
++++ b/sound/soc/qcom/Kconfig
+@@ -106,6 +106,7 @@ config SND_SOC_QDSP6
+ config SND_SOC_MSM8996
+ 	tristate "SoC Machine driver for MSM8996 and APQ8096 boards"
+ 	depends on QCOM_APR
++	depends on COMMON_CLK
+ 	select SND_SOC_QDSP6
+ 	select SND_SOC_QCOM_COMMON
+ 	help
+diff --git a/sound/soc/qcom/common.c b/sound/soc/qcom/common.c
+index 54660f126d09e..09af007007007 100644
+--- a/sound/soc/qcom/common.c
++++ b/sound/soc/qcom/common.c
+@@ -58,7 +58,7 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ 		dlc = devm_kzalloc(dev, 2 * sizeof(*dlc), GFP_KERNEL);
+ 		if (!dlc) {
+ 			ret = -ENOMEM;
+-			goto err;
++			goto err_put_np;
+ 		}
+ 
+ 		link->cpus	= &dlc[0];
+@@ -70,7 +70,7 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ 		ret = of_property_read_string(np, "link-name", &link->name);
+ 		if (ret) {
+ 			dev_err(card->dev, "error getting codec dai_link name\n");
+-			goto err;
++			goto err_put_np;
+ 		}
+ 
+ 		cpu = of_get_child_by_name(np, "cpu");
+@@ -130,8 +130,10 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ 		} else {
+ 			/* DPCM frontend */
+ 			dlc = devm_kzalloc(dev, sizeof(*dlc), GFP_KERNEL);
+-			if (!dlc)
+-				return -ENOMEM;
++			if (!dlc) {
++				ret = -ENOMEM;
++				goto err;
++			}
+ 
+ 			link->codecs	 = dlc;
+ 			link->num_codecs = 1;
+@@ -158,10 +160,11 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ 
+ 	return 0;
+ err:
+-	of_node_put(np);
+ 	of_node_put(cpu);
+ 	of_node_put(codec);
+ 	of_node_put(platform);
++err_put_np:
++	of_node_put(np);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(qcom_snd_parse_of);
+diff --git a/sound/soc/qcom/lpass-hdmi.c b/sound/soc/qcom/lpass-hdmi.c
+index 172952d3a5d66..abfb8737a89f4 100644
+--- a/sound/soc/qcom/lpass-hdmi.c
++++ b/sound/soc/qcom/lpass-hdmi.c
+@@ -24,7 +24,7 @@ static int lpass_hdmi_daiops_hw_params(struct snd_pcm_substream *substream,
+ 	unsigned int rate = params_rate(params);
+ 	unsigned int channels = params_channels(params);
+ 	unsigned int ret;
+-	unsigned int bitwidth;
++	int bitwidth;
+ 	unsigned int word_length;
+ 	unsigned int ch_sts_buf0;
+ 	unsigned int ch_sts_buf1;
+diff --git a/sound/soc/qcom/qdsp6/q6afe-clocks.c b/sound/soc/qcom/qdsp6/q6afe-clocks.c
+index 2efc2eaa04243..acfc0c698f6a1 100644
+--- a/sound/soc/qcom/qdsp6/q6afe-clocks.c
++++ b/sound/soc/qcom/qdsp6/q6afe-clocks.c
+@@ -16,6 +16,7 @@
+ 		.afe_clk_id	= Q6AFE_##id,		\
+ 		.name = #id,				\
+ 		.attributes = LPASS_CLK_ATTRIBUTE_COUPLE_NO, \
++		.rate = 19200000,			\
+ 		.hw.init = &(struct clk_init_data) {	\
+ 			.ops = &clk_q6afe_ops,		\
+ 			.name = #id,			\
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index dcab9527ba3d7..91bf339581590 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -2231,6 +2231,7 @@ static int dpcm_fe_dai_do_trigger(struct snd_pcm_substream *substream, int cmd)
+ 		case SNDRV_PCM_TRIGGER_START:
+ 		case SNDRV_PCM_TRIGGER_RESUME:
+ 		case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
++		case SNDRV_PCM_TRIGGER_DRAIN:
+ 			ret = dpcm_dai_trigger_fe_be(substream, cmd, true);
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_STOP:
+@@ -2248,6 +2249,7 @@ static int dpcm_fe_dai_do_trigger(struct snd_pcm_substream *substream, int cmd)
+ 		case SNDRV_PCM_TRIGGER_START:
+ 		case SNDRV_PCM_TRIGGER_RESUME:
+ 		case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
++		case SNDRV_PCM_TRIGGER_DRAIN:
+ 			ret = dpcm_dai_trigger_fe_be(substream, cmd, false);
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_STOP:
+diff --git a/sound/soc/sof/intel/Kconfig b/sound/soc/sof/intel/Kconfig
+index a066e08860cbf..5bfc2f8b13b90 100644
+--- a/sound/soc/sof/intel/Kconfig
++++ b/sound/soc/sof/intel/Kconfig
+@@ -271,6 +271,7 @@ config SND_SOC_SOF_JASPERLAKE
+ 
+ config SND_SOC_SOF_HDA_COMMON
+ 	tristate
++	select SND_INTEL_DSP_CONFIG
+ 	select SND_SOC_SOF_INTEL_COMMON
+ 	select SND_SOC_SOF_HDA_LINK_BASELINE
+ 	help
+@@ -330,7 +331,6 @@ config SND_SOC_SOF_HDA
+ 	tristate
+ 	select SND_HDA_EXT_CORE if SND_SOC_SOF_HDA_LINK
+ 	select SND_SOC_HDAC_HDA if SND_SOC_SOF_HDA_AUDIO_CODEC
+-	select SND_INTEL_DSP_CONFIG
+ 	help
+ 	  This option is not user-selectable but automagically handled by
+ 	  'select' statements at a higher level
+diff --git a/sound/soc/sunxi/sun4i-i2s.c b/sound/soc/sunxi/sun4i-i2s.c
+index f23ff29e7c1d3..a994b5cf87b31 100644
+--- a/sound/soc/sunxi/sun4i-i2s.c
++++ b/sound/soc/sunxi/sun4i-i2s.c
+@@ -450,11 +450,11 @@ static int sun8i_i2s_set_chan_cfg(const struct sun4i_i2s *i2s,
+ 	switch (i2s->format & SND_SOC_DAIFMT_FORMAT_MASK) {
+ 	case SND_SOC_DAIFMT_DSP_A:
+ 	case SND_SOC_DAIFMT_DSP_B:
+-	case SND_SOC_DAIFMT_LEFT_J:
+-	case SND_SOC_DAIFMT_RIGHT_J:
+ 		lrck_period = params_physical_width(params) * slots;
+ 		break;
+ 
++	case SND_SOC_DAIFMT_LEFT_J:
++	case SND_SOC_DAIFMT_RIGHT_J:
+ 	case SND_SOC_DAIFMT_I2S:
+ 		lrck_period = params_physical_width(params);
+ 		break;
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 4457214a3ae62..57d6d4ff01e08 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -382,6 +382,9 @@ static const struct usb_audio_device_name usb_audio_names[] = {
+ 	/* ASUS ROG Strix */
+ 	PROFILE_NAME(0x0b05, 0x1917,
+ 		     "Realtek", "ALC1220-VB-DT", "Realtek-ALC1220-VB-Desktop"),
++	/* ASUS PRIME TRX40 PRO-S */
++	PROFILE_NAME(0x0b05, 0x1918,
++		     "Realtek", "ALC1220-VB-DT", "Realtek-ALC1220-VB-Desktop"),
+ 
+ 	/* Dell WD15 Dock */
+ 	PROFILE_NAME(0x0bda, 0x4014, "Dell", "WD15 Dock", "Dell-WD15-Dock"),
+diff --git a/sound/usb/clock.c b/sound/usb/clock.c
+index f3ca59005d914..674e15bf98ed5 100644
+--- a/sound/usb/clock.c
++++ b/sound/usb/clock.c
+@@ -531,6 +531,12 @@ static int set_sample_rate_v1(struct snd_usb_audio *chip, int iface,
+ 	}
+ 
+ 	crate = data[0] | (data[1] << 8) | (data[2] << 16);
++	if (!crate) {
++		dev_info(&dev->dev, "failed to read current rate; disabling the check\n");
++		chip->sample_rate_read_error = 3; /* three strikes, see above */
++		return 0;
++	}
++
+ 	if (crate != rate) {
+ 		dev_warn(&dev->dev, "current rate %d is different from the runtime rate %d\n", crate, rate);
+ 		// runtime->rate = crate;
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index c50be2f75f702..f82c2ab809c1d 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1799,6 +1799,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ 	case 0x25ce:  /* Mytek devices */
+ 	case 0x278b:  /* Rotel? */
+ 	case 0x292b:  /* Gustard/Ess based devices */
++	case 0x2972:  /* FiiO devices */
+ 	case 0x2ab6:  /* T+A devices */
+ 	case 0x3353:  /* Khadas devices */
+ 	case 0x3842:  /* EVGA */
+diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
+index cdde783f3018b..89ba522e377dc 100644
+--- a/tools/build/feature/Makefile
++++ b/tools/build/feature/Makefile
+@@ -90,7 +90,7 @@ __BUILDXX = $(CXX) $(CXXFLAGS) -MD -Wall -Werror -o $@ $(patsubst %.bin,%.cpp,$(
+ ###############################
+ 
+ $(OUTPUT)test-all.bin:
+-	$(BUILD) -fstack-protector-all -O2 -D_FORTIFY_SOURCE=2 -ldw -lelf -lnuma -lelf -I/usr/include/slang -lslang $(FLAGS_PERL_EMBED) $(FLAGS_PYTHON_EMBED) -DPACKAGE='"perf"' -lbfd -ldl -lz -llzma -lzstd
++	$(BUILD) -fstack-protector-all -O2 -D_FORTIFY_SOURCE=2 -ldw -lelf -lnuma -lelf -I/usr/include/slang -lslang $(FLAGS_PERL_EMBED) $(FLAGS_PYTHON_EMBED) -DPACKAGE='"perf"' -lbfd -ldl -lz -llzma -lzstd -lcap
+ 
+ $(OUTPUT)test-hello.bin:
+ 	$(BUILD)
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 28baee7ba1ca8..ad165e6e74bc0 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -7649,6 +7649,16 @@ bool bpf_map__is_pinned(const struct bpf_map *map)
+ 	return map->pinned;
+ }
+ 
++static void sanitize_pin_path(char *s)
++{
++	/* bpffs disallows periods in path names */
++	while (*s) {
++		if (*s == '.')
++			*s = '_';
++		s++;
++	}
++}
++
+ int bpf_object__pin_maps(struct bpf_object *obj, const char *path)
+ {
+ 	struct bpf_map *map;
+@@ -7678,6 +7688,7 @@ int bpf_object__pin_maps(struct bpf_object *obj, const char *path)
+ 				err = -ENAMETOOLONG;
+ 				goto err_unpin_maps;
+ 			}
++			sanitize_pin_path(buf);
+ 			pin_path = buf;
+ 		} else if (!map->pin_path) {
+ 			continue;
+@@ -7722,6 +7733,7 @@ int bpf_object__unpin_maps(struct bpf_object *obj, const char *path)
+ 				return -EINVAL;
+ 			else if (len >= PATH_MAX)
+ 				return -ENAMETOOLONG;
++			sanitize_pin_path(buf);
+ 			pin_path = buf;
+ 		} else if (!map->pin_path) {
+ 			continue;
+diff --git a/tools/perf/tests/expand-cgroup.c b/tools/perf/tests/expand-cgroup.c
+index d5771e4d094f8..4c59f3ae438fc 100644
+--- a/tools/perf/tests/expand-cgroup.c
++++ b/tools/perf/tests/expand-cgroup.c
+@@ -145,7 +145,7 @@ static int expand_libpfm_events(void)
+ 	int ret;
+ 	struct evlist *evlist;
+ 	struct rblist metric_events;
+-	const char event_str[] = "UNHALTED_CORE_CYCLES";
++	const char event_str[] = "CYCLES";
+ 	struct option opt = {
+ 		.value = &evlist,
+ 	};
+diff --git a/tools/perf/tests/pmu-events.c b/tools/perf/tests/pmu-events.c
+index d3517a74d95e3..31f987bb7ebba 100644
+--- a/tools/perf/tests/pmu-events.c
++++ b/tools/perf/tests/pmu-events.c
+@@ -561,7 +561,7 @@ static int metric_parse_fake(const char *str)
+ 		}
+ 	}
+ 
+-	if (expr__parse(&result, &ctx, str, 1))
++	if (expr__parse(&result, &ctx, str, 0))
+ 		pr_err("expr__parse failed\n");
+ 	else
+ 		ret = 0;
+diff --git a/tools/perf/util/parse-regs-options.c b/tools/perf/util/parse-regs-options.c
+index e687497b3aac0..a4a100425b3a2 100644
+--- a/tools/perf/util/parse-regs-options.c
++++ b/tools/perf/util/parse-regs-options.c
+@@ -54,7 +54,7 @@ __parse_regs(const struct option *opt, const char *str, int unset, bool intr)
+ #endif
+ 				fputc('\n', stderr);
+ 				/* just printing available regs */
+-				return -1;
++				goto error;
+ 			}
+ #ifdef HAVE_PERF_REGS_SUPPORT
+ 			for (r = sample_reg_masks; r->name; r++) {
+diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c
+index 064b63a6a3f31..bbecb449ea944 100644
+--- a/tools/perf/util/probe-file.c
++++ b/tools/perf/util/probe-file.c
+@@ -791,7 +791,7 @@ static char *synthesize_sdt_probe_command(struct sdt_note *note,
+ 					const char *sdtgrp)
+ {
+ 	struct strbuf buf;
+-	char *ret = NULL, **args;
++	char *ret = NULL;
+ 	int i, args_count, err;
+ 	unsigned long long ref_ctr_offset;
+ 
+@@ -813,12 +813,19 @@ static char *synthesize_sdt_probe_command(struct sdt_note *note,
+ 		goto out;
+ 
+ 	if (note->args) {
+-		args = argv_split(note->args, &args_count);
++		char **args = argv_split(note->args, &args_count);
++
++		if (args == NULL)
++			goto error;
+ 
+ 		for (i = 0; i < args_count; ++i) {
+-			if (synthesize_sdt_probe_arg(&buf, i, args[i]) < 0)
++			if (synthesize_sdt_probe_arg(&buf, i, args[i]) < 0) {
++				argv_free(args);
+ 				goto error;
++			}
+ 		}
++
++		argv_free(args);
+ 	}
+ 
+ out:
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 542768f5195b7..136df8c102812 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -220,7 +220,8 @@ $(RESOLVE_BTFIDS): $(BPFOBJ) | $(BUILD_DIR)/resolve_btfids	\
+ # build would have failed anyways.
+ define get_sys_includes
+ $(shell $(1) -v -E - </dev/null 2>&1 \
+-	| sed -n '/<...> search starts here:/,/End of search list./{ s| \(/.*\)|-idirafter \1|p }')
++	| sed -n '/<...> search starts here:/,/End of search list./{ s| \(/.*\)|-idirafter \1|p }') \
++$(shell $(1) -dM -E - </dev/null | grep '#define __riscv_xlen ' | sed 's/#define /-D/' | sed 's/ /=/')
+ endef
+ 
+ # Determine target endianness.
+diff --git a/tools/testing/selftests/bpf/progs/local_storage.c b/tools/testing/selftests/bpf/progs/local_storage.c
+index 0758ba229ae0e..09529e33be982 100644
+--- a/tools/testing/selftests/bpf/progs/local_storage.c
++++ b/tools/testing/selftests/bpf/progs/local_storage.c
+@@ -58,20 +58,22 @@ int BPF_PROG(unlink_hook, struct inode *dir, struct dentry *victim)
+ {
+ 	__u32 pid = bpf_get_current_pid_tgid() >> 32;
+ 	struct dummy_storage *storage;
++	int err;
+ 
+ 	if (pid != monitored_pid)
+ 		return 0;
+ 
+ 	storage = bpf_inode_storage_get(&inode_storage_map, victim->d_inode, 0,
+-				     BPF_SK_STORAGE_GET_F_CREATE);
++					BPF_LOCAL_STORAGE_GET_F_CREATE);
+ 	if (!storage)
+ 		return 0;
+ 
+-	if (storage->value == DUMMY_STORAGE_VALUE)
++	if (storage->value != DUMMY_STORAGE_VALUE)
+ 		inode_storage_result = -1;
+ 
+-	inode_storage_result =
+-		bpf_inode_storage_delete(&inode_storage_map, victim->d_inode);
++	err = bpf_inode_storage_delete(&inode_storage_map, victim->d_inode);
++	if (!err)
++		inode_storage_result = err;
+ 
+ 	return 0;
+ }
+@@ -82,19 +84,23 @@ int BPF_PROG(socket_bind, struct socket *sock, struct sockaddr *address,
+ {
+ 	__u32 pid = bpf_get_current_pid_tgid() >> 32;
+ 	struct dummy_storage *storage;
++	int err;
+ 
+ 	if (pid != monitored_pid)
+ 		return 0;
+ 
+ 	storage = bpf_sk_storage_get(&sk_storage_map, sock->sk, 0,
+-				     BPF_SK_STORAGE_GET_F_CREATE);
++				     BPF_LOCAL_STORAGE_GET_F_CREATE);
+ 	if (!storage)
+ 		return 0;
+ 
+-	if (storage->value == DUMMY_STORAGE_VALUE)
++	if (storage->value != DUMMY_STORAGE_VALUE)
+ 		sk_storage_result = -1;
+ 
+-	sk_storage_result = bpf_sk_storage_delete(&sk_storage_map, sock->sk);
++	err = bpf_sk_storage_delete(&sk_storage_map, sock->sk);
++	if (!err)
++		sk_storage_result = err;
++
+ 	return 0;
+ }
+ 
+@@ -109,7 +115,7 @@ int BPF_PROG(socket_post_create, struct socket *sock, int family, int type,
+ 		return 0;
+ 
+ 	storage = bpf_sk_storage_get(&sk_storage_map, sock->sk, 0,
+-				     BPF_SK_STORAGE_GET_F_CREATE);
++				     BPF_LOCAL_STORAGE_GET_F_CREATE);
+ 	if (!storage)
+ 		return 0;
+ 
+@@ -131,7 +137,7 @@ int BPF_PROG(file_open, struct file *file)
+ 		return 0;
+ 
+ 	storage = bpf_inode_storage_get(&inode_storage_map, file->f_inode, 0,
+-				     BPF_LOCAL_STORAGE_GET_F_CREATE);
++					BPF_LOCAL_STORAGE_GET_F_CREATE);
+ 	if (!storage)
+ 		return 0;
+ 
+diff --git a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
+index f48dbfe24ddc8..a621b58ab079d 100644
+--- a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
++++ b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
+@@ -15,7 +15,6 @@
+ #include <linux/ip.h>
+ #include <linux/ipv6.h>
+ #include <linux/types.h>
+-#include <linux/tcp.h>
+ #include <linux/socket.h>
+ #include <linux/pkt_cls.h>
+ #include <linux/erspan.h>
+@@ -528,12 +527,11 @@ int _ipip_set_tunnel(struct __sk_buff *skb)
+ 	struct bpf_tunnel_key key = {};
+ 	void *data = (void *)(long)skb->data;
+ 	struct iphdr *iph = data;
+-	struct tcphdr *tcp = data + sizeof(*iph);
+ 	void *data_end = (void *)(long)skb->data_end;
+ 	int ret;
+ 
+ 	/* single length check */
+-	if (data + sizeof(*iph) + sizeof(*tcp) > data_end) {
++	if (data + sizeof(*iph) > data_end) {
+ 		ERROR(1);
+ 		return TC_ACT_SHOT;
+ 	}
+@@ -541,16 +539,6 @@ int _ipip_set_tunnel(struct __sk_buff *skb)
+ 	key.tunnel_ttl = 64;
+ 	if (iph->protocol == IPPROTO_ICMP) {
+ 		key.remote_ipv4 = 0xac100164; /* 172.16.1.100 */
+-	} else {
+-		if (iph->protocol != IPPROTO_TCP || iph->ihl != 5)
+-			return TC_ACT_SHOT;
+-
+-		if (tcp->dest == bpf_htons(5200))
+-			key.remote_ipv4 = 0xac100164; /* 172.16.1.100 */
+-		else if (tcp->dest == bpf_htons(5201))
+-			key.remote_ipv4 = 0xac100165; /* 172.16.1.101 */
+-		else
+-			return TC_ACT_SHOT;
+ 	}
+ 
+ 	ret = bpf_skb_set_tunnel_key(skb, &key, sizeof(key), 0);
+@@ -585,19 +573,20 @@ int _ipip6_set_tunnel(struct __sk_buff *skb)
+ 	struct bpf_tunnel_key key = {};
+ 	void *data = (void *)(long)skb->data;
+ 	struct iphdr *iph = data;
+-	struct tcphdr *tcp = data + sizeof(*iph);
+ 	void *data_end = (void *)(long)skb->data_end;
+ 	int ret;
+ 
+ 	/* single length check */
+-	if (data + sizeof(*iph) + sizeof(*tcp) > data_end) {
++	if (data + sizeof(*iph) > data_end) {
+ 		ERROR(1);
+ 		return TC_ACT_SHOT;
+ 	}
+ 
+ 	__builtin_memset(&key, 0x0, sizeof(key));
+-	key.remote_ipv6[3] = bpf_htonl(0x11); /* ::11 */
+ 	key.tunnel_ttl = 64;
++	if (iph->protocol == IPPROTO_ICMP) {
++		key.remote_ipv6[3] = bpf_htonl(0x11); /* ::11 */
++	}
+ 
+ 	ret = bpf_skb_set_tunnel_key(skb, &key, sizeof(key),
+ 				     BPF_F_TUNINFO_IPV6);
+@@ -634,35 +623,18 @@ int _ip6ip6_set_tunnel(struct __sk_buff *skb)
+ 	struct bpf_tunnel_key key = {};
+ 	void *data = (void *)(long)skb->data;
+ 	struct ipv6hdr *iph = data;
+-	struct tcphdr *tcp = data + sizeof(*iph);
+ 	void *data_end = (void *)(long)skb->data_end;
+ 	int ret;
+ 
+ 	/* single length check */
+-	if (data + sizeof(*iph) + sizeof(*tcp) > data_end) {
++	if (data + sizeof(*iph) > data_end) {
+ 		ERROR(1);
+ 		return TC_ACT_SHOT;
+ 	}
+ 
+-	key.remote_ipv6[0] = bpf_htonl(0x2401db00);
+ 	key.tunnel_ttl = 64;
+-
+ 	if (iph->nexthdr == 58 /* NEXTHDR_ICMP */) {
+-		key.remote_ipv6[3] = bpf_htonl(1);
+-	} else {
+-		if (iph->nexthdr != 6 /* NEXTHDR_TCP */) {
+-			ERROR(iph->nexthdr);
+-			return TC_ACT_SHOT;
+-		}
+-
+-		if (tcp->dest == bpf_htons(5200)) {
+-			key.remote_ipv6[3] = bpf_htonl(1);
+-		} else if (tcp->dest == bpf_htons(5201)) {
+-			key.remote_ipv6[3] = bpf_htonl(2);
+-		} else {
+-			ERROR(tcp->dest);
+-			return TC_ACT_SHOT;
+-		}
++		key.remote_ipv6[3] = bpf_htonl(0x11); /* ::11 */
+ 	}
+ 
+ 	ret = bpf_skb_set_tunnel_key(skb, &key, sizeof(key),
+diff --git a/tools/testing/selftests/bpf/test_sockmap.c b/tools/testing/selftests/bpf/test_sockmap.c
+index 0fa1e421c3d7a..427ca00a32177 100644
+--- a/tools/testing/selftests/bpf/test_sockmap.c
++++ b/tools/testing/selftests/bpf/test_sockmap.c
+@@ -1273,6 +1273,16 @@ static char *test_to_str(int test)
+ 	return "unknown";
+ }
+ 
++static void append_str(char *dst, const char *src, size_t dst_cap)
++{
++	size_t avail = dst_cap - strlen(dst);
++
++	if (avail <= 1) /* just zero byte could be written */
++		return;
++
++	strncat(dst, src, avail - 1); /* strncat() adds + 1 for zero byte */
++}
++
+ #define OPTSTRING 60
+ static void test_options(char *options)
+ {
+@@ -1281,42 +1291,42 @@ static void test_options(char *options)
+ 	memset(options, 0, OPTSTRING);
+ 
+ 	if (txmsg_pass)
+-		strncat(options, "pass,", OPTSTRING);
++		append_str(options, "pass,", OPTSTRING);
+ 	if (txmsg_redir)
+-		strncat(options, "redir,", OPTSTRING);
++		append_str(options, "redir,", OPTSTRING);
+ 	if (txmsg_drop)
+-		strncat(options, "drop,", OPTSTRING);
++		append_str(options, "drop,", OPTSTRING);
+ 	if (txmsg_apply) {
+ 		snprintf(tstr, OPTSTRING, "apply %d,", txmsg_apply);
+-		strncat(options, tstr, OPTSTRING);
++		append_str(options, tstr, OPTSTRING);
+ 	}
+ 	if (txmsg_cork) {
+ 		snprintf(tstr, OPTSTRING, "cork %d,", txmsg_cork);
+-		strncat(options, tstr, OPTSTRING);
++		append_str(options, tstr, OPTSTRING);
+ 	}
+ 	if (txmsg_start) {
+ 		snprintf(tstr, OPTSTRING, "start %d,", txmsg_start);
+-		strncat(options, tstr, OPTSTRING);
++		append_str(options, tstr, OPTSTRING);
+ 	}
+ 	if (txmsg_end) {
+ 		snprintf(tstr, OPTSTRING, "end %d,", txmsg_end);
+-		strncat(options, tstr, OPTSTRING);
++		append_str(options, tstr, OPTSTRING);
+ 	}
+ 	if (txmsg_start_pop) {
+ 		snprintf(tstr, OPTSTRING, "pop (%d,%d),",
+ 			 txmsg_start_pop, txmsg_start_pop + txmsg_pop);
+-		strncat(options, tstr, OPTSTRING);
++		append_str(options, tstr, OPTSTRING);
+ 	}
+ 	if (txmsg_ingress)
+-		strncat(options, "ingress,", OPTSTRING);
++		append_str(options, "ingress,", OPTSTRING);
+ 	if (txmsg_redir_skb)
+-		strncat(options, "redir_skb,", OPTSTRING);
++		append_str(options, "redir_skb,", OPTSTRING);
+ 	if (txmsg_ktls_skb)
+-		strncat(options, "ktls_skb,", OPTSTRING);
++		append_str(options, "ktls_skb,", OPTSTRING);
+ 	if (ktls)
+-		strncat(options, "ktls,", OPTSTRING);
++		append_str(options, "ktls,", OPTSTRING);
+ 	if (peek_flag)
+-		strncat(options, "peek,", OPTSTRING);
++		append_str(options, "peek,", OPTSTRING);
+ }
+ 
+ static int __test_exec(int cgrp, int test, struct sockmap_options *opt)
+diff --git a/tools/testing/selftests/bpf/test_tunnel.sh b/tools/testing/selftests/bpf/test_tunnel.sh
+index bd12ec97a44df..1ccbe804e8e1c 100755
+--- a/tools/testing/selftests/bpf/test_tunnel.sh
++++ b/tools/testing/selftests/bpf/test_tunnel.sh
+@@ -24,12 +24,12 @@
+ # Root namespace with metadata-mode tunnel + BPF
+ # Device names and addresses:
+ # 	veth1 IP: 172.16.1.200, IPv6: 00::22 (underlay)
+-# 	tunnel dev <type>11, ex: gre11, IPv4: 10.1.1.200 (overlay)
++# 	tunnel dev <type>11, ex: gre11, IPv4: 10.1.1.200, IPv6: 1::22 (overlay)
+ #
+ # Namespace at_ns0 with native tunnel
+ # Device names and addresses:
+ # 	veth0 IPv4: 172.16.1.100, IPv6: 00::11 (underlay)
+-# 	tunnel dev <type>00, ex: gre00, IPv4: 10.1.1.100 (overlay)
++# 	tunnel dev <type>00, ex: gre00, IPv4: 10.1.1.100, IPv6: 1::11 (overlay)
+ #
+ #
+ # End-to-end ping packet flow
+@@ -250,7 +250,7 @@ add_ipip_tunnel()
+ 	ip addr add dev $DEV 10.1.1.200/24
+ }
+ 
+-add_ipip6tnl_tunnel()
++add_ip6tnl_tunnel()
+ {
+ 	ip netns exec at_ns0 ip addr add ::11/96 dev veth0
+ 	ip netns exec at_ns0 ip link set dev veth0 up
+@@ -262,11 +262,13 @@ add_ipip6tnl_tunnel()
+ 		ip link add dev $DEV_NS type $TYPE \
+ 		local ::11 remote ::22
+ 	ip netns exec at_ns0 ip addr add dev $DEV_NS 10.1.1.100/24
++	ip netns exec at_ns0 ip addr add dev $DEV_NS 1::11/96
+ 	ip netns exec at_ns0 ip link set dev $DEV_NS up
+ 
+ 	# root namespace
+ 	ip link add dev $DEV type $TYPE external
+ 	ip addr add dev $DEV 10.1.1.200/24
++	ip addr add dev $DEV 1::22/96
+ 	ip link set dev $DEV up
+ }
+ 
+@@ -534,7 +536,7 @@ test_ipip6()
+ 
+ 	check $TYPE
+ 	config_device
+-	add_ipip6tnl_tunnel
++	add_ip6tnl_tunnel
+ 	ip link set dev veth1 mtu 1500
+ 	attach_bpf $DEV ipip6_set_tunnel ipip6_get_tunnel
+ 	# underlay
+@@ -553,6 +555,34 @@ test_ipip6()
+         echo -e ${GREEN}"PASS: $TYPE"${NC}
+ }
+ 
++test_ip6ip6()
++{
++	TYPE=ip6tnl
++	DEV_NS=ip6ip6tnl00
++	DEV=ip6ip6tnl11
++	ret=0
++
++	check $TYPE
++	config_device
++	add_ip6tnl_tunnel
++	ip link set dev veth1 mtu 1500
++	attach_bpf $DEV ip6ip6_set_tunnel ip6ip6_get_tunnel
++	# underlay
++	ping6 $PING_ARG ::11
++	# ip6 over ip6
++	ping6 $PING_ARG 1::11
++	check_err $?
++	ip netns exec at_ns0 ping6 $PING_ARG 1::22
++	check_err $?
++	cleanup
++
++	if [ $ret -ne 0 ]; then
++                echo -e ${RED}"FAIL: ip6$TYPE"${NC}
++                return 1
++        fi
++        echo -e ${GREEN}"PASS: ip6$TYPE"${NC}
++}
++
+ setup_xfrm_tunnel()
+ {
+ 	auth=0x$(printf '1%.0s' {1..40})
+@@ -646,6 +676,7 @@ cleanup()
+ 	ip link del veth1 2> /dev/null
+ 	ip link del ipip11 2> /dev/null
+ 	ip link del ipip6tnl11 2> /dev/null
++	ip link del ip6ip6tnl11 2> /dev/null
+ 	ip link del gretap11 2> /dev/null
+ 	ip link del ip6gre11 2> /dev/null
+ 	ip link del ip6gretap11 2> /dev/null
+@@ -742,6 +773,10 @@ bpf_tunnel_test()
+ 	test_ipip6
+ 	errors=$(( $errors + $? ))
+ 
++	echo "Testing IP6IP6 tunnel..."
++	test_ip6ip6
++	errors=$(( $errors + $? ))
++
+ 	echo "Testing IPSec tunnel..."
+ 	test_xfrm_tunnel
+ 	errors=$(( $errors + $? ))
+diff --git a/tools/testing/selftests/run_kselftest.sh b/tools/testing/selftests/run_kselftest.sh
+index 609a4ef9300e3..97165a83df632 100755
+--- a/tools/testing/selftests/run_kselftest.sh
++++ b/tools/testing/selftests/run_kselftest.sh
+@@ -48,7 +48,7 @@ while true; do
+ 		-l | --list)
+ 			echo "$available"
+ 			exit 0 ;;
+-		-n | --dry-run)
++		-d | --dry-run)
+ 			dryrun="echo"
+ 			shift ;;
+ 		-h | --help)
+diff --git a/tools/testing/selftests/seccomp/config b/tools/testing/selftests/seccomp/config
+index 64c19d8eba795..ad431a5178fbe 100644
+--- a/tools/testing/selftests/seccomp/config
++++ b/tools/testing/selftests/seccomp/config
+@@ -1,3 +1,4 @@
++CONFIG_PID_NS=y
+ CONFIG_SECCOMP=y
+ CONFIG_SECCOMP_FILTER=y
+ CONFIG_USER_NS=y


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-01-06 14:54 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-01-06 14:54 UTC (permalink / raw
  To: gentoo-commits

commit:     d2a35690ae40f9b1fa3f2e91ab73c25acff4c71e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jan  6 14:53:50 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jan  6 14:53:50 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d2a35690

Linux patch 5.10.5

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1004_linux-5.10.5.patch | 3149 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3153 insertions(+)

diff --git a/0000_README b/0000_README
index ce1d3f7..53642e2 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-5.10.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.4
 
+Patch:  1004_linux-5.10.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-5.10.5.patch b/1004_linux-5.10.5.patch
new file mode 100644
index 0000000..1af7e50
--- /dev/null
+++ b/1004_linux-5.10.5.patch
@@ -0,0 +1,3149 @@
+diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
+index b0ea17da8ff63..654649556306f 100644
+--- a/Documentation/gpu/todo.rst
++++ b/Documentation/gpu/todo.rst
+@@ -273,6 +273,24 @@ Contact: Daniel Vetter, Noralf Tronnes
+ 
+ Level: Advanced
+ 
++Garbage collect fbdev scrolling acceleration
++--------------------------------------------
++
++Scroll acceleration is disabled in fbcon by hard-wiring p->scrollmode =
++SCROLL_REDRAW. There's a ton of code this will allow us to remove:
++- lots of code in fbcon.c
++- a bunch of the hooks in fbcon_ops, maybe the remaining hooks could be called
++  directly instead of the function table (with a switch on p->rotate)
++- fb_copyarea is unused after this, and can be deleted from all drivers
++
++Note that not all acceleration code can be deleted, since clearing and cursor
++support is still accelerated, which might be good candidates for further
++deletion projects.
++
++Contact: Daniel Vetter
++
++Level: Intermediate
++
+ idr_init_base()
+ ---------------
+ 
+diff --git a/Makefile b/Makefile
+index 1e50d6af932ab..bb431fd473d2c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
+index ef12e097f3184..27ca549ff47ed 100644
+--- a/arch/ia64/mm/init.c
++++ b/arch/ia64/mm/init.c
+@@ -536,7 +536,7 @@ virtual_memmap_init(u64 start, u64 end, void *arg)
+ 
+ 	if (map_start < map_end)
+ 		memmap_init_zone((unsigned long)(map_end - map_start),
+-				 args->nid, args->zone, page_to_pfn(map_start),
++				 args->nid, args->zone, page_to_pfn(map_start), page_to_pfn(map_end),
+ 				 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
+ 	return 0;
+ }
+@@ -546,7 +546,7 @@ memmap_init (unsigned long size, int nid, unsigned long zone,
+ 	     unsigned long start_pfn)
+ {
+ 	if (!vmem_map) {
+-		memmap_init_zone(size, nid, zone, start_pfn,
++		memmap_init_zone(size, nid, zone, start_pfn, start_pfn + size,
+ 				 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
+ 	} else {
+ 		struct page *start;
+diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
+index 7d0f7682d01df..6b1eca53e36cc 100644
+--- a/arch/powerpc/kernel/irq.c
++++ b/arch/powerpc/kernel/irq.c
+@@ -102,14 +102,6 @@ static inline notrace unsigned long get_irq_happened(void)
+ 	return happened;
+ }
+ 
+-static inline notrace int decrementer_check_overflow(void)
+-{
+-	u64 now = get_tb();
+-	u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
+- 
+-	return now >= *next_tb;
+-}
+-
+ #ifdef CONFIG_PPC_BOOK3E
+ 
+ /* This is called whenever we are re-enabling interrupts
+@@ -142,35 +134,6 @@ notrace unsigned int __check_irq_replay(void)
+ 	trace_hardirqs_on();
+ 	trace_hardirqs_off();
+ 
+-	/*
+-	 * We are always hard disabled here, but PACA_IRQ_HARD_DIS may
+-	 * not be set, which means interrupts have only just been hard
+-	 * disabled as part of the local_irq_restore or interrupt return
+-	 * code. In that case, skip the decrementr check becaus it's
+-	 * expensive to read the TB.
+-	 *
+-	 * HARD_DIS then gets cleared here, but it's reconciled later.
+-	 * Either local_irq_disable will replay the interrupt and that
+-	 * will reconcile state like other hard interrupts. Or interrupt
+-	 * retur will replay the interrupt and in that case it sets
+-	 * PACA_IRQ_HARD_DIS by hand (see comments in entry_64.S).
+-	 */
+-	if (happened & PACA_IRQ_HARD_DIS) {
+-		local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
+-
+-		/*
+-		 * We may have missed a decrementer interrupt if hard disabled.
+-		 * Check the decrementer register in case we had a rollover
+-		 * while hard disabled.
+-		 */
+-		if (!(happened & PACA_IRQ_DEC)) {
+-			if (decrementer_check_overflow()) {
+-				local_paca->irq_happened |= PACA_IRQ_DEC;
+-				happened |= PACA_IRQ_DEC;
+-			}
+-		}
+-	}
+-
+ 	if (happened & PACA_IRQ_DEC) {
+ 		local_paca->irq_happened &= ~PACA_IRQ_DEC;
+ 		return 0x900;
+@@ -186,6 +149,9 @@ notrace unsigned int __check_irq_replay(void)
+ 		return 0x280;
+ 	}
+ 
++	if (happened & PACA_IRQ_HARD_DIS)
++		local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
++
+ 	/* There should be nothing left ! */
+ 	BUG_ON(local_paca->irq_happened != 0);
+ 
+@@ -229,18 +195,6 @@ again:
+ 	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
+ 		WARN_ON_ONCE(mfmsr() & MSR_EE);
+ 
+-	if (happened & PACA_IRQ_HARD_DIS) {
+-		/*
+-		 * We may have missed a decrementer interrupt if hard disabled.
+-		 * Check the decrementer register in case we had a rollover
+-		 * while hard disabled.
+-		 */
+-		if (!(happened & PACA_IRQ_DEC)) {
+-			if (decrementer_check_overflow())
+-				happened |= PACA_IRQ_DEC;
+-		}
+-	}
+-
+ 	/*
+ 	 * Force the delivery of pending soft-disabled interrupts on PS3.
+ 	 * Any HV call will have this side effect.
+@@ -345,6 +299,7 @@ notrace void arch_local_irq_restore(unsigned long mask)
+ 		if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
+ 			WARN_ON_ONCE(!(mfmsr() & MSR_EE));
+ 		__hard_irq_disable();
++		local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
+ 	} else {
+ 		/*
+ 		 * We should already be hard disabled here. We had bugs
+diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
+index 74efe46f55327..7d372ff3504b2 100644
+--- a/arch/powerpc/kernel/time.c
++++ b/arch/powerpc/kernel/time.c
+@@ -552,14 +552,11 @@ void timer_interrupt(struct pt_regs *regs)
+ 	struct pt_regs *old_regs;
+ 	u64 now;
+ 
+-	/* Some implementations of hotplug will get timer interrupts while
+-	 * offline, just ignore these and we also need to set
+-	 * decrementers_next_tb as MAX to make sure __check_irq_replay
+-	 * don't replay timer interrupt when return, otherwise we'll trap
+-	 * here infinitely :(
++	/*
++	 * Some implementations of hotplug will get timer interrupts while
++	 * offline, just ignore these.
+ 	 */
+ 	if (unlikely(!cpu_online(smp_processor_id()))) {
+-		*next_tb = ~(u64)0;
+ 		set_dec(decrementer_max);
+ 		return;
+ 	}
+diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
+index d95954ad4c0af..c61c3b62c8c62 100644
+--- a/arch/powerpc/platforms/powernv/opal.c
++++ b/arch/powerpc/platforms/powernv/opal.c
+@@ -731,7 +731,7 @@ int opal_hmi_exception_early2(struct pt_regs *regs)
+ 	return 1;
+ }
+ 
+-/* HMI exception handler called in virtual mode during check_irq_replay. */
++/* HMI exception handler called in virtual mode when irqs are next enabled. */
+ int opal_handle_hmi_exception(struct pt_regs *regs)
+ {
+ 	/*
+diff --git a/arch/powerpc/sysdev/mpic_msgr.c b/arch/powerpc/sysdev/mpic_msgr.c
+index f6b253e2be409..36ec0bdd8b63c 100644
+--- a/arch/powerpc/sysdev/mpic_msgr.c
++++ b/arch/powerpc/sysdev/mpic_msgr.c
+@@ -191,7 +191,7 @@ static int mpic_msgr_probe(struct platform_device *dev)
+ 
+ 	/* IO map the message register block. */
+ 	of_address_to_resource(np, 0, &rsrc);
+-	msgr_block_addr = ioremap(rsrc.start, resource_size(&rsrc));
++	msgr_block_addr = devm_ioremap(&dev->dev, rsrc.start, resource_size(&rsrc));
+ 	if (!msgr_block_addr) {
+ 		dev_err(&dev->dev, "Failed to iomap MPIC message registers");
+ 		return -EFAULT;
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 6343dca0dbeb6..71203324ff42b 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -406,6 +406,7 @@ ENTRY(system_call)
+ 	mvc	__PT_PSW(16,%r11),__LC_SVC_OLD_PSW
+ 	mvc	__PT_INT_CODE(4,%r11),__LC_SVC_ILC
+ 	stg	%r14,__PT_FLAGS(%r11)
++	xc	__SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
+ 	ENABLE_INTS
+ .Lsysc_do_svc:
+ 	# clear user controlled register to prevent speculative use
+@@ -422,7 +423,6 @@ ENTRY(system_call)
+ 	jnl	.Lsysc_nr_ok
+ 	slag	%r8,%r1,3
+ .Lsysc_nr_ok:
+-	xc	__SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
+ 	stg	%r2,__PT_ORIG_GPR2(%r11)
+ 	stg	%r7,STACK_FRAME_OVERHEAD(%r15)
+ 	lg	%r9,0(%r8,%r10)			# get system call add.
+@@ -712,8 +712,8 @@ ENTRY(pgm_check_handler)
+ 	mvc	__THREAD_per_address(8,%r14),__LC_PER_ADDRESS
+ 	mvc	__THREAD_per_cause(2,%r14),__LC_PER_CODE
+ 	mvc	__THREAD_per_paid(1,%r14),__LC_PER_ACCESS_ID
+-6:	RESTORE_SM_CLEAR_PER
+-	xc	__SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
++6:	xc	__SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
++	RESTORE_SM_CLEAR_PER
+ 	larl	%r1,pgm_check_table
+ 	llgh	%r10,__PT_INT_CODE+2(%r11)
+ 	nill	%r10,0x007f
+@@ -734,8 +734,8 @@ ENTRY(pgm_check_handler)
+ # PER event in supervisor state, must be kprobes
+ #
+ .Lpgm_kprobe:
+-	RESTORE_SM_CLEAR_PER
+ 	xc	__SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
++	RESTORE_SM_CLEAR_PER
+ 	lgr	%r2,%r11		# pass pointer to pt_regs
+ 	brasl	%r14,do_per_trap
+ 	j	.Lpgm_return
+@@ -777,10 +777,10 @@ ENTRY(io_int_handler)
+ 	stmg	%r8,%r9,__PT_PSW(%r11)
+ 	mvc	__PT_INT_CODE(12,%r11),__LC_SUBCHANNEL_ID
+ 	xc	__PT_FLAGS(8,%r11),__PT_FLAGS(%r11)
++	xc	__SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
+ 	TSTMSK	__LC_CPU_FLAGS,_CIF_IGNORE_IRQ
+ 	jo	.Lio_restore
+ 	TRACE_IRQS_OFF
+-	xc	__SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
+ .Lio_loop:
+ 	lgr	%r2,%r11		# pass pointer to pt_regs
+ 	lghi	%r3,IO_INTERRUPT
+@@ -980,10 +980,10 @@ ENTRY(ext_int_handler)
+ 	mvc	__PT_INT_PARM(4,%r11),__LC_EXT_PARAMS
+ 	mvc	__PT_INT_PARM_LONG(8,%r11),0(%r1)
+ 	xc	__PT_FLAGS(8,%r11),__PT_FLAGS(%r11)
++	xc	__SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
+ 	TSTMSK	__LC_CPU_FLAGS,_CIF_IGNORE_IRQ
+ 	jo	.Lio_restore
+ 	TRACE_IRQS_OFF
+-	xc	__SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
+ 	lgr	%r2,%r11		# pass pointer to pt_regs
+ 	lghi	%r3,EXT_INTERRUPT
+ 	brasl	%r14,do_IRQ
+diff --git a/arch/um/drivers/random.c b/arch/um/drivers/random.c
+index ce115fce52f02..e4b9b2ce9abf4 100644
+--- a/arch/um/drivers/random.c
++++ b/arch/um/drivers/random.c
+@@ -11,6 +11,7 @@
+ #include <linux/fs.h>
+ #include <linux/interrupt.h>
+ #include <linux/miscdevice.h>
++#include <linux/hw_random.h>
+ #include <linux/delay.h>
+ #include <linux/uaccess.h>
+ #include <init.h>
+@@ -18,9 +19,8 @@
+ #include <os.h>
+ 
+ /*
+- * core module and version information
++ * core module information
+  */
+-#define RNG_VERSION "1.0.0"
+ #define RNG_MODULE_NAME "hw_random"
+ 
+ /* Changed at init time, in the non-modular case, and at module load
+@@ -28,88 +28,36 @@
+  * protects against a module being loaded twice at the same time.
+  */
+ static int random_fd = -1;
+-static DECLARE_WAIT_QUEUE_HEAD(host_read_wait);
++static struct hwrng hwrng = { 0, };
++static DECLARE_COMPLETION(have_data);
+ 
+-static int rng_dev_open (struct inode *inode, struct file *filp)
++static int rng_dev_read(struct hwrng *rng, void *buf, size_t max, bool block)
+ {
+-	/* enforce read-only access to this chrdev */
+-	if ((filp->f_mode & FMODE_READ) == 0)
+-		return -EINVAL;
+-	if ((filp->f_mode & FMODE_WRITE) != 0)
+-		return -EINVAL;
++	int ret;
+ 
+-	return 0;
+-}
+-
+-static atomic_t host_sleep_count = ATOMIC_INIT(0);
+-
+-static ssize_t rng_dev_read (struct file *filp, char __user *buf, size_t size,
+-			     loff_t *offp)
+-{
+-	u32 data;
+-	int n, ret = 0, have_data;
+-
+-	while (size) {
+-		n = os_read_file(random_fd, &data, sizeof(data));
+-		if (n > 0) {
+-			have_data = n;
+-			while (have_data && size) {
+-				if (put_user((u8) data, buf++)) {
+-					ret = ret ? : -EFAULT;
+-					break;
+-				}
+-				size--;
+-				ret++;
+-				have_data--;
+-				data >>= 8;
+-			}
+-		}
+-		else if (n == -EAGAIN) {
+-			DECLARE_WAITQUEUE(wait, current);
+-
+-			if (filp->f_flags & O_NONBLOCK)
+-				return ret ? : -EAGAIN;
+-
+-			atomic_inc(&host_sleep_count);
++	for (;;) {
++		ret = os_read_file(random_fd, buf, max);
++		if (block && ret == -EAGAIN) {
+ 			add_sigio_fd(random_fd);
+ 
+-			add_wait_queue(&host_read_wait, &wait);
+-			set_current_state(TASK_INTERRUPTIBLE);
++			ret = wait_for_completion_killable(&have_data);
+ 
+-			schedule();
+-			remove_wait_queue(&host_read_wait, &wait);
++			ignore_sigio_fd(random_fd);
++			deactivate_fd(random_fd, RANDOM_IRQ);
+ 
+-			if (atomic_dec_and_test(&host_sleep_count)) {
+-				ignore_sigio_fd(random_fd);
+-				deactivate_fd(random_fd, RANDOM_IRQ);
+-			}
++			if (ret < 0)
++				break;
++		} else {
++			break;
+ 		}
+-		else
+-			return n;
+-
+-		if (signal_pending (current))
+-			return ret ? : -ERESTARTSYS;
+ 	}
+-	return ret;
+-}
+ 
+-static const struct file_operations rng_chrdev_ops = {
+-	.owner		= THIS_MODULE,
+-	.open		= rng_dev_open,
+-	.read		= rng_dev_read,
+-	.llseek		= noop_llseek,
+-};
+-
+-/* rng_init shouldn't be called more than once at boot time */
+-static struct miscdevice rng_miscdev = {
+-	HWRNG_MINOR,
+-	RNG_MODULE_NAME,
+-	&rng_chrdev_ops,
+-};
++	return ret != -EAGAIN ? ret : 0;
++}
+ 
+ static irqreturn_t random_interrupt(int irq, void *data)
+ {
+-	wake_up(&host_read_wait);
++	complete(&have_data);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -126,18 +74,19 @@ static int __init rng_init (void)
+ 		goto out;
+ 
+ 	random_fd = err;
+-
+ 	err = um_request_irq(RANDOM_IRQ, random_fd, IRQ_READ, random_interrupt,
+ 			     0, "random", NULL);
+ 	if (err)
+ 		goto err_out_cleanup_hw;
+ 
+ 	sigio_broken(random_fd, 1);
++	hwrng.name = RNG_MODULE_NAME;
++	hwrng.read = rng_dev_read;
++	hwrng.quality = 1024;
+ 
+-	err = misc_register (&rng_miscdev);
++	err = hwrng_register(&hwrng);
+ 	if (err) {
+-		printk (KERN_ERR RNG_MODULE_NAME ": misc device register "
+-			"failed\n");
++		pr_err(RNG_MODULE_NAME " registering failed (%d)\n", err);
+ 		goto err_out_cleanup_hw;
+ 	}
+ out:
+@@ -161,8 +110,8 @@ static void cleanup(void)
+ 
+ static void __exit rng_cleanup(void)
+ {
++	hwrng_unregister(&hwrng);
+ 	os_close_file(random_fd);
+-	misc_deregister (&rng_miscdev);
+ }
+ 
+ module_init (rng_init);
+diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
+index eae8c83364f71..b12c1b0d3e1d0 100644
+--- a/arch/um/drivers/ubd_kern.c
++++ b/arch/um/drivers/ubd_kern.c
+@@ -47,18 +47,25 @@
+ /* Max request size is determined by sector mask - 32K */
+ #define UBD_MAX_REQUEST (8 * sizeof(long))
+ 
++struct io_desc {
++	char *buffer;
++	unsigned long length;
++	unsigned long sector_mask;
++	unsigned long long cow_offset;
++	unsigned long bitmap_words[2];
++};
++
+ struct io_thread_req {
+ 	struct request *req;
+ 	int fds[2];
+ 	unsigned long offsets[2];
+ 	unsigned long long offset;
+-	unsigned long length;
+-	char *buffer;
+ 	int sectorsize;
+-	unsigned long sector_mask;
+-	unsigned long long cow_offset;
+-	unsigned long bitmap_words[2];
+ 	int error;
++
++	int desc_cnt;
++	/* io_desc has to be the last element of the struct */
++	struct io_desc io_desc[];
+ };
+ 
+ 
+@@ -525,12 +532,7 @@ static void ubd_handler(void)
+ 				blk_queue_max_write_zeroes_sectors(io_req->req->q, 0);
+ 				blk_queue_flag_clear(QUEUE_FLAG_DISCARD, io_req->req->q);
+ 			}
+-			if ((io_req->error) || (io_req->buffer == NULL))
+-				blk_mq_end_request(io_req->req, io_req->error);
+-			else {
+-				if (!blk_update_request(io_req->req, io_req->error, io_req->length))
+-					__blk_mq_end_request(io_req->req, io_req->error);
+-			}
++			blk_mq_end_request(io_req->req, io_req->error);
+ 			kfree(io_req);
+ 		}
+ 	}
+@@ -946,6 +948,7 @@ static int ubd_add(int n, char **error_out)
+ 	blk_queue_write_cache(ubd_dev->queue, true, false);
+ 
+ 	blk_queue_max_segments(ubd_dev->queue, MAX_SG);
++	blk_queue_segment_boundary(ubd_dev->queue, PAGE_SIZE - 1);
+ 	err = ubd_disk_register(UBD_MAJOR, ubd_dev->size, n, &ubd_gendisk[n]);
+ 	if(err){
+ 		*error_out = "Failed to register device";
+@@ -1289,37 +1292,74 @@ static void cowify_bitmap(__u64 io_offset, int length, unsigned long *cow_mask,
+ 	*cow_offset += bitmap_offset;
+ }
+ 
+-static void cowify_req(struct io_thread_req *req, unsigned long *bitmap,
++static void cowify_req(struct io_thread_req *req, struct io_desc *segment,
++		       unsigned long offset, unsigned long *bitmap,
+ 		       __u64 bitmap_offset, __u64 bitmap_len)
+ {
+-	__u64 sector = req->offset >> SECTOR_SHIFT;
++	__u64 sector = offset >> SECTOR_SHIFT;
+ 	int i;
+ 
+-	if (req->length > (sizeof(req->sector_mask) * 8) << SECTOR_SHIFT)
++	if (segment->length > (sizeof(segment->sector_mask) * 8) << SECTOR_SHIFT)
+ 		panic("Operation too long");
+ 
+ 	if (req_op(req->req) == REQ_OP_READ) {
+-		for (i = 0; i < req->length >> SECTOR_SHIFT; i++) {
++		for (i = 0; i < segment->length >> SECTOR_SHIFT; i++) {
+ 			if(ubd_test_bit(sector + i, (unsigned char *) bitmap))
+ 				ubd_set_bit(i, (unsigned char *)
+-					    &req->sector_mask);
++					    &segment->sector_mask);
++		}
++	} else {
++		cowify_bitmap(offset, segment->length, &segment->sector_mask,
++			      &segment->cow_offset, bitmap, bitmap_offset,
++			      segment->bitmap_words, bitmap_len);
++	}
++}
++
++static void ubd_map_req(struct ubd *dev, struct io_thread_req *io_req,
++			struct request *req)
++{
++	struct bio_vec bvec;
++	struct req_iterator iter;
++	int i = 0;
++	unsigned long byte_offset = io_req->offset;
++	int op = req_op(req);
++
++	if (op == REQ_OP_WRITE_ZEROES || op == REQ_OP_DISCARD) {
++		io_req->io_desc[0].buffer = NULL;
++		io_req->io_desc[0].length = blk_rq_bytes(req);
++	} else {
++		rq_for_each_segment(bvec, req, iter) {
++			BUG_ON(i >= io_req->desc_cnt);
++
++			io_req->io_desc[i].buffer =
++				page_address(bvec.bv_page) + bvec.bv_offset;
++			io_req->io_desc[i].length = bvec.bv_len;
++			i++;
++		}
++	}
++
++	if (dev->cow.file) {
++		for (i = 0; i < io_req->desc_cnt; i++) {
++			cowify_req(io_req, &io_req->io_desc[i], byte_offset,
++				   dev->cow.bitmap, dev->cow.bitmap_offset,
++				   dev->cow.bitmap_len);
++			byte_offset += io_req->io_desc[i].length;
+ 		}
++
+ 	}
+-	else cowify_bitmap(req->offset, req->length, &req->sector_mask,
+-			   &req->cow_offset, bitmap, bitmap_offset,
+-			   req->bitmap_words, bitmap_len);
+ }
+ 
+-static int ubd_queue_one_vec(struct blk_mq_hw_ctx *hctx, struct request *req,
+-		u64 off, struct bio_vec *bvec)
++static struct io_thread_req *ubd_alloc_req(struct ubd *dev, struct request *req,
++					   int desc_cnt)
+ {
+-	struct ubd *dev = hctx->queue->queuedata;
+ 	struct io_thread_req *io_req;
+-	int ret;
++	int i;
+ 
+-	io_req = kmalloc(sizeof(struct io_thread_req), GFP_ATOMIC);
++	io_req = kmalloc(sizeof(*io_req) +
++			 (desc_cnt * sizeof(struct io_desc)),
++			 GFP_ATOMIC);
+ 	if (!io_req)
+-		return -ENOMEM;
++		return NULL;
+ 
+ 	io_req->req = req;
+ 	if (dev->cow.file)
+@@ -1327,26 +1367,41 @@ static int ubd_queue_one_vec(struct blk_mq_hw_ctx *hctx, struct request *req,
+ 	else
+ 		io_req->fds[0] = dev->fd;
+ 	io_req->error = 0;
+-
+-	if (bvec != NULL) {
+-		io_req->buffer = page_address(bvec->bv_page) + bvec->bv_offset;
+-		io_req->length = bvec->bv_len;
+-	} else {
+-		io_req->buffer = NULL;
+-		io_req->length = blk_rq_bytes(req);
+-	}
+-
+ 	io_req->sectorsize = SECTOR_SIZE;
+ 	io_req->fds[1] = dev->fd;
+-	io_req->cow_offset = -1;
+-	io_req->offset = off;
+-	io_req->sector_mask = 0;
++	io_req->offset = (u64) blk_rq_pos(req) << SECTOR_SHIFT;
+ 	io_req->offsets[0] = 0;
+ 	io_req->offsets[1] = dev->cow.data_offset;
+ 
+-	if (dev->cow.file)
+-		cowify_req(io_req, dev->cow.bitmap,
+-			   dev->cow.bitmap_offset, dev->cow.bitmap_len);
++	for (i = 0 ; i < desc_cnt; i++) {
++		io_req->io_desc[i].sector_mask = 0;
++		io_req->io_desc[i].cow_offset = -1;
++	}
++
++	return io_req;
++}
++
++static int ubd_submit_request(struct ubd *dev, struct request *req)
++{
++	int segs = 0;
++	struct io_thread_req *io_req;
++	int ret;
++	int op = req_op(req);
++
++	if (op == REQ_OP_FLUSH)
++		segs = 0;
++	else if (op == REQ_OP_WRITE_ZEROES || op == REQ_OP_DISCARD)
++		segs = 1;
++	else
++		segs = blk_rq_nr_phys_segments(req);
++
++	io_req = ubd_alloc_req(dev, req, segs);
++	if (!io_req)
++		return -ENOMEM;
++
++	io_req->desc_cnt = segs;
++	if (segs)
++		ubd_map_req(dev, io_req, req);
+ 
+ 	ret = os_write_file(thread_fd, &io_req, sizeof(io_req));
+ 	if (ret != sizeof(io_req)) {
+@@ -1357,22 +1412,6 @@ static int ubd_queue_one_vec(struct blk_mq_hw_ctx *hctx, struct request *req,
+ 	return ret;
+ }
+ 
+-static int queue_rw_req(struct blk_mq_hw_ctx *hctx, struct request *req)
+-{
+-	struct req_iterator iter;
+-	struct bio_vec bvec;
+-	int ret;
+-	u64 off = (u64)blk_rq_pos(req) << SECTOR_SHIFT;
+-
+-	rq_for_each_segment(bvec, req, iter) {
+-		ret = ubd_queue_one_vec(hctx, req, off, &bvec);
+-		if (ret < 0)
+-			return ret;
+-		off += bvec.bv_len;
+-	}
+-	return 0;
+-}
+-
+ static blk_status_t ubd_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 				 const struct blk_mq_queue_data *bd)
+ {
+@@ -1385,17 +1424,12 @@ static blk_status_t ubd_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	spin_lock_irq(&ubd_dev->lock);
+ 
+ 	switch (req_op(req)) {
+-	/* operations with no lentgth/offset arguments */
+ 	case REQ_OP_FLUSH:
+-		ret = ubd_queue_one_vec(hctx, req, 0, NULL);
+-		break;
+ 	case REQ_OP_READ:
+ 	case REQ_OP_WRITE:
+-		ret = queue_rw_req(hctx, req);
+-		break;
+ 	case REQ_OP_DISCARD:
+ 	case REQ_OP_WRITE_ZEROES:
+-		ret = ubd_queue_one_vec(hctx, req, (u64)blk_rq_pos(req) << 9, NULL);
++		ret = ubd_submit_request(ubd_dev, req);
+ 		break;
+ 	default:
+ 		WARN_ON_ONCE(1);
+@@ -1483,22 +1517,22 @@ static int map_error(int error_code)
+  * will result in unpredictable behaviour and/or crashes.
+  */
+ 
+-static int update_bitmap(struct io_thread_req *req)
++static int update_bitmap(struct io_thread_req *req, struct io_desc *segment)
+ {
+ 	int n;
+ 
+-	if(req->cow_offset == -1)
++	if (segment->cow_offset == -1)
+ 		return map_error(0);
+ 
+-	n = os_pwrite_file(req->fds[1], &req->bitmap_words,
+-			  sizeof(req->bitmap_words), req->cow_offset);
+-	if (n != sizeof(req->bitmap_words))
++	n = os_pwrite_file(req->fds[1], &segment->bitmap_words,
++			  sizeof(segment->bitmap_words), segment->cow_offset);
++	if (n != sizeof(segment->bitmap_words))
+ 		return map_error(-n);
+ 
+ 	return map_error(0);
+ }
+ 
+-static void do_io(struct io_thread_req *req)
++static void do_io(struct io_thread_req *req, struct io_desc *desc)
+ {
+ 	char *buf = NULL;
+ 	unsigned long len;
+@@ -1513,21 +1547,20 @@ static void do_io(struct io_thread_req *req)
+ 		return;
+ 	}
+ 
+-	nsectors = req->length / req->sectorsize;
++	nsectors = desc->length / req->sectorsize;
+ 	start = 0;
+ 	do {
+-		bit = ubd_test_bit(start, (unsigned char *) &req->sector_mask);
++		bit = ubd_test_bit(start, (unsigned char *) &desc->sector_mask);
+ 		end = start;
+ 		while((end < nsectors) &&
+-		      (ubd_test_bit(end, (unsigned char *)
+-				    &req->sector_mask) == bit))
++		      (ubd_test_bit(end, (unsigned char *) &desc->sector_mask) == bit))
+ 			end++;
+ 
+ 		off = req->offset + req->offsets[bit] +
+ 			start * req->sectorsize;
+ 		len = (end - start) * req->sectorsize;
+-		if (req->buffer != NULL)
+-			buf = &req->buffer[start * req->sectorsize];
++		if (desc->buffer != NULL)
++			buf = &desc->buffer[start * req->sectorsize];
+ 
+ 		switch (req_op(req->req)) {
+ 		case REQ_OP_READ:
+@@ -1567,7 +1600,8 @@ static void do_io(struct io_thread_req *req)
+ 		start = end;
+ 	} while(start < nsectors);
+ 
+-	req->error = update_bitmap(req);
++	req->offset += len;
++	req->error = update_bitmap(req, desc);
+ }
+ 
+ /* Changed in start_io_thread, which is serialized by being called only
+@@ -1600,8 +1634,13 @@ int io_thread(void *arg)
+ 		}
+ 
+ 		for (count = 0; count < n/sizeof(struct io_thread_req *); count++) {
++			struct io_thread_req *req = (*io_req_buffer)[count];
++			int i;
++
+ 			io_count++;
+-			do_io((*io_req_buffer)[count]);
++			for (i = 0; !req->error && i < req->desc_cnt; i++)
++				do_io(req, &(req->io_desc[i]));
++
+ 		}
+ 
+ 		written = 0;
+diff --git a/block/blk-pm.c b/block/blk-pm.c
+index b85234d758f7b..17bd020268d42 100644
+--- a/block/blk-pm.c
++++ b/block/blk-pm.c
+@@ -67,6 +67,10 @@ int blk_pre_runtime_suspend(struct request_queue *q)
+ 
+ 	WARN_ON_ONCE(q->rpm_status != RPM_ACTIVE);
+ 
++	spin_lock_irq(&q->queue_lock);
++	q->rpm_status = RPM_SUSPENDING;
++	spin_unlock_irq(&q->queue_lock);
++
+ 	/*
+ 	 * Increase the pm_only counter before checking whether any
+ 	 * non-PM blk_queue_enter() calls are in progress to avoid that any
+@@ -89,15 +93,14 @@ int blk_pre_runtime_suspend(struct request_queue *q)
+ 	/* Switch q_usage_counter back to per-cpu mode. */
+ 	blk_mq_unfreeze_queue(q);
+ 
+-	spin_lock_irq(&q->queue_lock);
+-	if (ret < 0)
++	if (ret < 0) {
++		spin_lock_irq(&q->queue_lock);
++		q->rpm_status = RPM_ACTIVE;
+ 		pm_runtime_mark_last_busy(q->dev);
+-	else
+-		q->rpm_status = RPM_SUSPENDING;
+-	spin_unlock_irq(&q->queue_lock);
++		spin_unlock_irq(&q->queue_lock);
+ 
+-	if (ret)
+ 		blk_clear_pm_only(q);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index 78d635f1d1567..376164cdf2ea9 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -251,8 +251,12 @@ static int h5_close(struct hci_uart *hu)
+ 	if (h5->vnd && h5->vnd->close)
+ 		h5->vnd->close(h5);
+ 
+-	if (!hu->serdev)
+-		kfree(h5);
++	if (hu->serdev)
++		serdev_device_close(hu->serdev);
++
++	kfree_skb(h5->rx_skb);
++	kfree(h5);
++	h5 = NULL;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/char/hw_random/Kconfig b/drivers/char/hw_random/Kconfig
+index e92c4d9469d82..5952210526aaa 100644
+--- a/drivers/char/hw_random/Kconfig
++++ b/drivers/char/hw_random/Kconfig
+@@ -540,15 +540,15 @@ endif # HW_RANDOM
+ 
+ config UML_RANDOM
+ 	depends on UML
+-	tristate "Hardware random number generator"
++	select HW_RANDOM
++	tristate "UML Random Number Generator support"
+ 	help
+ 	  This option enables UML's "hardware" random number generator.  It
+ 	  attaches itself to the host's /dev/random, supplying as much entropy
+ 	  as the host has, rather than the small amount the UML gets from its
+-	  own drivers.  It registers itself as a standard hardware random number
+-	  generator, major 10, minor 183, and the canonical device name is
+-	  /dev/hwrng.
+-	  The way to make use of this is to install the rng-tools package
+-	  (check your distro, or download from
+-	  http://sourceforge.net/projects/gkernel/).  rngd periodically reads
+-	  /dev/hwrng and injects the entropy into /dev/random.
++	  own drivers. It registers itself as a rng-core driver thus providing
++	  a device which is usually called /dev/hwrng. This hardware random
++	  number generator does feed into the kernel's random number generator
++	  entropy pool.
++
++	  If unsure, say Y.
+diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
+index 27513d311242e..de7b74505e75e 100644
+--- a/drivers/dax/bus.c
++++ b/drivers/dax/bus.c
+@@ -367,19 +367,28 @@ void kill_dev_dax(struct dev_dax *dev_dax)
+ }
+ EXPORT_SYMBOL_GPL(kill_dev_dax);
+ 
+-static void free_dev_dax_ranges(struct dev_dax *dev_dax)
++static void trim_dev_dax_range(struct dev_dax *dev_dax)
+ {
++	int i = dev_dax->nr_range - 1;
++	struct range *range = &dev_dax->ranges[i].range;
+ 	struct dax_region *dax_region = dev_dax->region;
+-	int i;
+ 
+ 	device_lock_assert(dax_region->dev);
+-	for (i = 0; i < dev_dax->nr_range; i++) {
+-		struct range *range = &dev_dax->ranges[i].range;
+-
+-		__release_region(&dax_region->res, range->start,
+-				range_len(range));
++	dev_dbg(&dev_dax->dev, "delete range[%d]: %#llx:%#llx\n", i,
++		(unsigned long long)range->start,
++		(unsigned long long)range->end);
++
++	__release_region(&dax_region->res, range->start, range_len(range));
++	if (--dev_dax->nr_range == 0) {
++		kfree(dev_dax->ranges);
++		dev_dax->ranges = NULL;
+ 	}
+-	dev_dax->nr_range = 0;
++}
++
++static void free_dev_dax_ranges(struct dev_dax *dev_dax)
++{
++	while (dev_dax->nr_range)
++		trim_dev_dax_range(dev_dax);
+ }
+ 
+ static void unregister_dev_dax(void *dev)
+@@ -804,15 +813,10 @@ static int alloc_dev_dax_range(struct dev_dax *dev_dax, u64 start,
+ 		return 0;
+ 
+ 	rc = devm_register_dax_mapping(dev_dax, dev_dax->nr_range - 1);
+-	if (rc) {
+-		dev_dbg(dev, "delete range[%d]: %pa:%pa\n", dev_dax->nr_range - 1,
+-				&alloc->start, &alloc->end);
+-		dev_dax->nr_range--;
+-		__release_region(res, alloc->start, resource_size(alloc));
+-		return rc;
+-	}
++	if (rc)
++		trim_dev_dax_range(dev_dax);
+ 
+-	return 0;
++	return rc;
+ }
+ 
+ static int adjust_dev_dax_range(struct dev_dax *dev_dax, struct resource *res, resource_size_t size)
+@@ -885,12 +889,7 @@ static int dev_dax_shrink(struct dev_dax *dev_dax, resource_size_t size)
+ 		if (shrink >= range_len(range)) {
+ 			devm_release_action(dax_region->dev,
+ 					unregister_dax_mapping, &mapping->dev);
+-			__release_region(&dax_region->res, range->start,
+-					range_len(range));
+-			dev_dax->nr_range--;
+-			dev_dbg(dev, "delete range[%d]: %#llx:%#llx\n", i,
+-					(unsigned long long) range->start,
+-					(unsigned long long) range->end);
++			trim_dev_dax_range(dev_dax);
+ 			to_shrink -= shrink;
+ 			if (!to_shrink)
+ 				break;
+@@ -1274,7 +1273,6 @@ static void dev_dax_release(struct device *dev)
+ 	put_dax(dax_dev);
+ 	free_dev_dax_id(dev_dax);
+ 	dax_region_put(dax_region);
+-	kfree(dev_dax->ranges);
+ 	kfree(dev_dax->pgmap);
+ 	kfree(dev_dax);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+index 6b431db146cd9..1c6e401dd4cce 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+@@ -704,24 +704,24 @@ static struct wm_table ddr4_wm_table_rn = {
+ 			.wm_inst = WM_B,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.72,
+-			.sr_exit_time_us = 10.12,
+-			.sr_enter_plus_exit_time_us = 11.48,
++			.sr_exit_time_us = 11.12,
++			.sr_enter_plus_exit_time_us = 12.48,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_C,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.72,
+-			.sr_exit_time_us = 10.12,
+-			.sr_enter_plus_exit_time_us = 11.48,
++			.sr_exit_time_us = 11.12,
++			.sr_enter_plus_exit_time_us = 12.48,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_D,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.72,
+-			.sr_exit_time_us = 10.12,
+-			.sr_enter_plus_exit_time_us = 11.48,
++			.sr_exit_time_us = 11.12,
++			.sr_enter_plus_exit_time_us = 12.48,
+ 			.valid = true,
+ 		},
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+index b409f6b2bfd83..210466b2d8631 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+@@ -119,7 +119,8 @@ static const struct link_encoder_funcs dce110_lnk_enc_funcs = {
+ 	.disable_hpd = dce110_link_encoder_disable_hpd,
+ 	.is_dig_enabled = dce110_is_dig_enabled,
+ 	.destroy = dce110_link_encoder_destroy,
+-	.get_max_link_cap = dce110_link_encoder_get_max_link_cap
++	.get_max_link_cap = dce110_link_encoder_get_max_link_cap,
++	.get_dig_frontend = dce110_get_dig_frontend,
+ };
+ 
+ static enum bp_result link_transmitter_control(
+@@ -235,6 +236,44 @@ static void set_link_training_complete(
+ 
+ }
+ 
++unsigned int dce110_get_dig_frontend(struct link_encoder *enc)
++{
++	struct dce110_link_encoder *enc110 = TO_DCE110_LINK_ENC(enc);
++	u32 value;
++	enum engine_id result;
++
++	REG_GET(DIG_BE_CNTL, DIG_FE_SOURCE_SELECT, &value);
++
++	switch (value) {
++	case DCE110_DIG_FE_SOURCE_SELECT_DIGA:
++		result = ENGINE_ID_DIGA;
++		break;
++	case DCE110_DIG_FE_SOURCE_SELECT_DIGB:
++		result = ENGINE_ID_DIGB;
++		break;
++	case DCE110_DIG_FE_SOURCE_SELECT_DIGC:
++		result = ENGINE_ID_DIGC;
++		break;
++	case DCE110_DIG_FE_SOURCE_SELECT_DIGD:
++		result = ENGINE_ID_DIGD;
++		break;
++	case DCE110_DIG_FE_SOURCE_SELECT_DIGE:
++		result = ENGINE_ID_DIGE;
++		break;
++	case DCE110_DIG_FE_SOURCE_SELECT_DIGF:
++		result = ENGINE_ID_DIGF;
++		break;
++	case DCE110_DIG_FE_SOURCE_SELECT_DIGG:
++		result = ENGINE_ID_DIGG;
++		break;
++	default:
++		// invalid source select DIG
++		result = ENGINE_ID_UNKNOWN;
++	}
++
++	return result;
++}
++
+ void dce110_link_encoder_set_dp_phy_pattern_training_pattern(
+ 	struct link_encoder *enc,
+ 	uint32_t index)
+@@ -1665,7 +1704,8 @@ static const struct link_encoder_funcs dce60_lnk_enc_funcs = {
+ 	.disable_hpd = dce110_link_encoder_disable_hpd,
+ 	.is_dig_enabled = dce110_is_dig_enabled,
+ 	.destroy = dce110_link_encoder_destroy,
+-	.get_max_link_cap = dce110_link_encoder_get_max_link_cap
++	.get_max_link_cap = dce110_link_encoder_get_max_link_cap,
++	.get_dig_frontend = dce110_get_dig_frontend
+ };
+ 
+ void dce60_link_encoder_construct(
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.h b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.h
+index cb714a48b171c..fc6ade824c231 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.h
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.h
+@@ -295,6 +295,8 @@ void dce110_link_encoder_connect_dig_be_to_fe(
+ 	enum engine_id engine,
+ 	bool connect);
+ 
++unsigned int dce110_get_dig_frontend(struct link_encoder *enc);
++
+ void dce110_link_encoder_set_dp_phy_pattern_training_pattern(
+ 	struct link_encoder *enc,
+ 	uint32_t index);
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 1c6b78ad5ade4..b61bf53ec07af 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -2537,7 +2537,7 @@ int i3c_master_register(struct i3c_master_controller *master,
+ 
+ 	ret = i3c_master_bus_init(master);
+ 	if (ret)
+-		goto err_put_dev;
++		goto err_destroy_wq;
+ 
+ 	ret = device_add(&master->dev);
+ 	if (ret)
+@@ -2568,6 +2568,9 @@ err_del_dev:
+ err_cleanup_bus:
+ 	i3c_master_bus_cleanup(master);
+ 
++err_destroy_wq:
++	destroy_workqueue(master->wq);
++
+ err_put_dev:
+ 	put_device(&master->dev);
+ 
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index f74982dcbea0d..6b8e5bdd8526d 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -537,6 +537,15 @@ static int verity_verify_io(struct dm_verity_io *io)
+ 	return 0;
+ }
+ 
++/*
++ * Skip verity work in response to I/O error when system is shutting down.
++ */
++static inline bool verity_is_system_shutting_down(void)
++{
++	return system_state == SYSTEM_HALT || system_state == SYSTEM_POWER_OFF
++		|| system_state == SYSTEM_RESTART;
++}
++
+ /*
+  * End one "io" structure with a given error.
+  */
+@@ -564,7 +573,8 @@ static void verity_end_io(struct bio *bio)
+ {
+ 	struct dm_verity_io *io = bio->bi_private;
+ 
+-	if (bio->bi_status && !verity_fec_is_enabled(io->v)) {
++	if (bio->bi_status &&
++	    (!verity_fec_is_enabled(io->v) || verity_is_system_shutting_down())) {
+ 		verity_finish_io(io, bio->bi_status);
+ 		return;
+ 	}
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 3b598a3cb462a..9f9d8b67b5dd1 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1128,7 +1128,7 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
+ 	struct md_rdev *err_rdev = NULL;
+ 	gfp_t gfp = GFP_NOIO;
+ 
+-	if (r10_bio->devs[slot].rdev) {
++	if (slot >= 0 && r10_bio->devs[slot].rdev) {
+ 		/*
+ 		 * This is an error retry, but we cannot
+ 		 * safely dereference the rdev in the r10_bio,
+@@ -1493,6 +1493,7 @@ static void __make_request(struct mddev *mddev, struct bio *bio, int sectors)
+ 	r10_bio->mddev = mddev;
+ 	r10_bio->sector = bio->bi_iter.bi_sector;
+ 	r10_bio->state = 0;
++	r10_bio->read_slot = -1;
+ 	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * conf->copies);
+ 
+ 	if (bio_data_dir(bio) == READ)
+diff --git a/drivers/media/usb/dvb-usb/gp8psk.c b/drivers/media/usb/dvb-usb/gp8psk.c
+index c07f46f5176ea..b4f661bb56481 100644
+--- a/drivers/media/usb/dvb-usb/gp8psk.c
++++ b/drivers/media/usb/dvb-usb/gp8psk.c
+@@ -182,7 +182,7 @@ out_rel_fw:
+ 
+ static int gp8psk_power_ctrl(struct dvb_usb_device *d, int onoff)
+ {
+-	u8 status, buf;
++	u8 status = 0, buf;
+ 	int gp_product_id = le16_to_cpu(d->udev->descriptor.idProduct);
+ 
+ 	if (onoff) {
+diff --git a/drivers/misc/vmw_vmci/vmci_context.c b/drivers/misc/vmw_vmci/vmci_context.c
+index 16695366ec926..26ff49fdf0f7d 100644
+--- a/drivers/misc/vmw_vmci/vmci_context.c
++++ b/drivers/misc/vmw_vmci/vmci_context.c
+@@ -743,7 +743,7 @@ static int vmci_ctx_get_chkpt_doorbells(struct vmci_ctx *context,
+ 			return VMCI_ERROR_MORE_DATA;
+ 		}
+ 
+-		dbells = kmalloc(data_size, GFP_ATOMIC);
++		dbells = kzalloc(data_size, GFP_ATOMIC);
+ 		if (!dbells)
+ 			return VMCI_ERROR_NO_MEM;
+ 
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 0e0a5269dc82f..903b465c8568b 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -1102,7 +1102,7 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
+ 	if (IS_ERR(opp_table->clk)) {
+ 		ret = PTR_ERR(opp_table->clk);
+ 		if (ret == -EPROBE_DEFER)
+-			goto err;
++			goto remove_opp_dev;
+ 
+ 		dev_dbg(dev, "%s: Couldn't find clock: %d\n", __func__, ret);
+ 	}
+@@ -1111,7 +1111,7 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
+ 	ret = dev_pm_opp_of_find_icc_paths(dev, opp_table);
+ 	if (ret) {
+ 		if (ret == -EPROBE_DEFER)
+-			goto err;
++			goto put_clk;
+ 
+ 		dev_warn(dev, "%s: Error finding interconnect paths: %d\n",
+ 			 __func__, ret);
+@@ -1125,6 +1125,11 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
+ 	list_add(&opp_table->node, &opp_tables);
+ 	return opp_table;
+ 
++put_clk:
++	if (!IS_ERR(opp_table->clk))
++		clk_put(opp_table->clk);
++remove_opp_dev:
++	_remove_opp_dev(opp_dev, opp_table);
+ err:
+ 	kfree(opp_table);
+ 	return ERR_PTR(ret);
+diff --git a/drivers/rtc/rtc-pl031.c b/drivers/rtc/rtc-pl031.c
+index c6b89273feba8..d4b2ab7861266 100644
+--- a/drivers/rtc/rtc-pl031.c
++++ b/drivers/rtc/rtc-pl031.c
+@@ -361,8 +361,10 @@ static int pl031_probe(struct amba_device *adev, const struct amba_id *id)
+ 
+ 	device_init_wakeup(&adev->dev, true);
+ 	ldata->rtc = devm_rtc_allocate_device(&adev->dev);
+-	if (IS_ERR(ldata->rtc))
+-		return PTR_ERR(ldata->rtc);
++	if (IS_ERR(ldata->rtc)) {
++		ret = PTR_ERR(ldata->rtc);
++		goto out;
++	}
+ 
+ 	ldata->rtc->ops = ops;
+ 	ldata->rtc->range_min = vendor->range_min;
+diff --git a/drivers/rtc/rtc-sun6i.c b/drivers/rtc/rtc-sun6i.c
+index e2b8b150bcb44..f2818cdd11d82 100644
+--- a/drivers/rtc/rtc-sun6i.c
++++ b/drivers/rtc/rtc-sun6i.c
+@@ -272,7 +272,7 @@ static void __init sun6i_rtc_clk_init(struct device_node *node,
+ 								300000000);
+ 	if (IS_ERR(rtc->int_osc)) {
+ 		pr_crit("Couldn't register the internal oscillator\n");
+-		return;
++		goto err;
+ 	}
+ 
+ 	parents[0] = clk_hw_get_name(rtc->int_osc);
+@@ -290,7 +290,7 @@ static void __init sun6i_rtc_clk_init(struct device_node *node,
+ 	rtc->losc = clk_register(NULL, &rtc->hw);
+ 	if (IS_ERR(rtc->losc)) {
+ 		pr_crit("Couldn't register the LOSC clock\n");
+-		return;
++		goto err_register;
+ 	}
+ 
+ 	of_property_read_string_index(node, "clock-output-names", 1,
+@@ -301,7 +301,7 @@ static void __init sun6i_rtc_clk_init(struct device_node *node,
+ 					  &rtc->lock);
+ 	if (IS_ERR(rtc->ext_losc)) {
+ 		pr_crit("Couldn't register the LOSC external gate\n");
+-		return;
++		goto err_register;
+ 	}
+ 
+ 	clk_data->num = 2;
+@@ -314,6 +314,8 @@ static void __init sun6i_rtc_clk_init(struct device_node *node,
+ 	of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
+ 	return;
+ 
++err_register:
++	clk_hw_unregister_fixed_rate(rtc->int_osc);
+ err:
+ 	kfree(clk_data);
+ }
+diff --git a/drivers/scsi/cxgbi/cxgb4i/Kconfig b/drivers/scsi/cxgbi/cxgb4i/Kconfig
+index b206e266b4e72..8b0deece9758b 100644
+--- a/drivers/scsi/cxgbi/cxgb4i/Kconfig
++++ b/drivers/scsi/cxgbi/cxgb4i/Kconfig
+@@ -4,6 +4,7 @@ config SCSI_CXGB4_ISCSI
+ 	depends on PCI && INET && (IPV6 || IPV6=n)
+ 	depends on THERMAL || !THERMAL
+ 	depends on ETHERNET
++	depends on TLS || TLS=n
+ 	select NET_VENDOR_CHELSIO
+ 	select CHELSIO_T4
+ 	select CHELSIO_LIB
+diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
+index 3fd16b7f61507..aadaea052f51d 100644
+--- a/drivers/spi/Kconfig
++++ b/drivers/spi/Kconfig
+@@ -256,6 +256,7 @@ config SPI_DW_BT1
+ 	tristate "Baikal-T1 SPI driver for DW SPI core"
+ 	depends on MIPS_BAIKAL_T1 || COMPILE_TEST
+ 	select MULTIPLEXER
++	select MUX_MMIO
+ 	help
+ 	  Baikal-T1 SoC is equipped with three DW APB SSI-based MMIO SPI
+ 	  controllers. Two of them are pretty much normal: with IRQ, DMA,
+@@ -269,8 +270,6 @@ config SPI_DW_BT1
+ config SPI_DW_BT1_DIRMAP
+ 	bool "Directly mapped Baikal-T1 Boot SPI flash support"
+ 	depends on SPI_DW_BT1
+-	select MULTIPLEXER
+-	select MUX_MMIO
+ 	help
+ 	  Directly mapped SPI flash memory is an interface specific to the
+ 	  Baikal-T1 System Boot Controller. It is a 16MB MMIO region, which
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index cef437817b0dc..8d1ae973041ae 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -1033,7 +1033,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 	struct vc_data *svc = *default_mode;
+ 	struct fbcon_display *t, *p = &fb_display[vc->vc_num];
+ 	int logo = 1, new_rows, new_cols, rows, cols, charcnt = 256;
+-	int cap, ret;
++	int ret;
+ 
+ 	if (WARN_ON(info_idx == -1))
+ 	    return;
+@@ -1042,7 +1042,6 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 		con2fb_map[vc->vc_num] = info_idx;
+ 
+ 	info = registered_fb[con2fb_map[vc->vc_num]];
+-	cap = info->flags;
+ 
+ 	if (logo_shown < 0 && console_loglevel <= CONSOLE_LOGLEVEL_QUIET)
+ 		logo_shown = FBCON_LOGO_DONTSHOW;
+@@ -1147,11 +1146,13 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 
+ 	ops->graphics = 0;
+ 
+-	if ((cap & FBINFO_HWACCEL_COPYAREA) &&
+-	    !(cap & FBINFO_HWACCEL_DISABLED))
+-		p->scrollmode = SCROLL_MOVE;
+-	else /* default to something safe */
+-		p->scrollmode = SCROLL_REDRAW;
++	/*
++	 * No more hw acceleration for fbcon.
++	 *
++	 * FIXME: Garbage collect all the now dead code after sufficient time
++	 * has passed.
++	 */
++	p->scrollmode = SCROLL_REDRAW;
+ 
+ 	/*
+ 	 *  ++guenther: console.c:vc_allocate() relies on initializing
+@@ -1961,45 +1962,15 @@ static void updatescrollmode(struct fbcon_display *p,
+ {
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	int fh = vc->vc_font.height;
+-	int cap = info->flags;
+-	u16 t = 0;
+-	int ypan = FBCON_SWAP(ops->rotate, info->fix.ypanstep,
+-				  info->fix.xpanstep);
+-	int ywrap = FBCON_SWAP(ops->rotate, info->fix.ywrapstep, t);
+ 	int yres = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 	int vyres = FBCON_SWAP(ops->rotate, info->var.yres_virtual,
+ 				   info->var.xres_virtual);
+-	int good_pan = (cap & FBINFO_HWACCEL_YPAN) &&
+-		divides(ypan, vc->vc_font.height) && vyres > yres;
+-	int good_wrap = (cap & FBINFO_HWACCEL_YWRAP) &&
+-		divides(ywrap, vc->vc_font.height) &&
+-		divides(vc->vc_font.height, vyres) &&
+-		divides(vc->vc_font.height, yres);
+-	int reading_fast = cap & FBINFO_READS_FAST;
+-	int fast_copyarea = (cap & FBINFO_HWACCEL_COPYAREA) &&
+-		!(cap & FBINFO_HWACCEL_DISABLED);
+-	int fast_imageblit = (cap & FBINFO_HWACCEL_IMAGEBLIT) &&
+-		!(cap & FBINFO_HWACCEL_DISABLED);
+ 
+ 	p->vrows = vyres/fh;
+ 	if (yres > (fh * (vc->vc_rows + 1)))
+ 		p->vrows -= (yres - (fh * vc->vc_rows)) / fh;
+ 	if ((yres % fh) && (vyres % fh < yres % fh))
+ 		p->vrows--;
+-
+-	if (good_wrap || good_pan) {
+-		if (reading_fast || fast_copyarea)
+-			p->scrollmode = good_wrap ?
+-				SCROLL_WRAP_MOVE : SCROLL_PAN_MOVE;
+-		else
+-			p->scrollmode = good_wrap ? SCROLL_REDRAW :
+-				SCROLL_PAN_REDRAW;
+-	} else {
+-		if (reading_fast || (fast_copyarea && !fast_imageblit))
+-			p->scrollmode = SCROLL_MOVE;
+-		else
+-			p->scrollmode = SCROLL_REDRAW;
+-	}
+ }
+ 
+ #define PITCH(w) (((w) + 7) >> 3)
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index 836319cbaca9d..359302f71f7ef 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -227,8 +227,10 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(dev);
+ 	ret = pm_runtime_get_sync(dev);
+-	if (ret)
++	if (ret) {
++		pm_runtime_put_noidle(dev);
+ 		return dev_err_probe(dev, ret, "runtime pm failed\n");
++	}
+ 
+ 	platform_set_drvdata(pdev, wdt);
+ 
+diff --git a/fs/bfs/inode.c b/fs/bfs/inode.c
+index 3ac7611ef7ce2..fd691e4815c56 100644
+--- a/fs/bfs/inode.c
++++ b/fs/bfs/inode.c
+@@ -350,7 +350,7 @@ static int bfs_fill_super(struct super_block *s, void *data, int silent)
+ 
+ 	info->si_lasti = (le32_to_cpu(bfs_sb->s_start) - BFS_BSIZE) / sizeof(struct bfs_inode) + BFS_ROOT_INO - 1;
+ 	if (info->si_lasti == BFS_MAX_LASTI)
+-		printf("WARNING: filesystem %s was created with 512 inodes, the real maximum is 511, mounting anyway\n", s->s_id);
++		printf("NOTE: filesystem %s was created with 512 inodes, the real maximum is 511, mounting anyway\n", s->s_id);
+ 	else if (info->si_lasti > BFS_MAX_LASTI) {
+ 		printf("Impossible last inode number %lu > %d on %s\n", info->si_lasti, BFS_MAX_LASTI, s->s_id);
+ 		goto out1;
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 526faf4778ce4..2462a9a84b956 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -1335,6 +1335,8 @@ retry_lookup:
+ 				in, ceph_vinop(in));
+ 			if (in->i_state & I_NEW)
+ 				discard_new_inode(in);
++			else
++				iput(in);
+ 			goto done;
+ 		}
+ 		req->r_target_inode = in;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 37a619bf1ac7c..e67d5de6f28ca 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -2395,9 +2395,9 @@ repeat:
+ 
+ 				nr = sbi->s_mb_prefetch;
+ 				if (ext4_has_feature_flex_bg(sb)) {
+-					nr = (group / sbi->s_mb_prefetch) *
+-						sbi->s_mb_prefetch;
+-					nr = nr + sbi->s_mb_prefetch - group;
++					nr = 1 << sbi->s_log_groups_per_flex;
++					nr -= group & (nr - 1);
++					nr = min(nr, sbi->s_mb_prefetch);
+ 				}
+ 				prefetch_grp = ext4_mb_prefetch(sb, group,
+ 							nr, &prefetch_ios);
+@@ -2733,7 +2733,8 @@ static int ext4_mb_init_backend(struct super_block *sb)
+ 
+ 	if (ext4_has_feature_flex_bg(sb)) {
+ 		/* a single flex group is supposed to be read by a single IO */
+-		sbi->s_mb_prefetch = 1 << sbi->s_es->s_log_groups_per_flex;
++		sbi->s_mb_prefetch = min(1 << sbi->s_es->s_log_groups_per_flex,
++			BLK_MAX_SEGMENT_SIZE >> (sb->s_blocksize_bits - 9));
+ 		sbi->s_mb_prefetch *= 8; /* 8 prefetch IOs in flight at most */
+ 	} else {
+ 		sbi->s_mb_prefetch = 32;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 2b08b162075c3..ea5aefa23a20a 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -4186,18 +4186,25 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	 */
+ 	sbi->s_li_wait_mult = EXT4_DEF_LI_WAIT_MULT;
+ 
+-	blocksize = BLOCK_SIZE << le32_to_cpu(es->s_log_block_size);
+-
+-	if (blocksize == PAGE_SIZE)
+-		set_opt(sb, DIOREAD_NOLOCK);
+-
+-	if (blocksize < EXT4_MIN_BLOCK_SIZE ||
+-	    blocksize > EXT4_MAX_BLOCK_SIZE) {
++	if (le32_to_cpu(es->s_log_block_size) >
++	    (EXT4_MAX_BLOCK_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) {
+ 		ext4_msg(sb, KERN_ERR,
+-		       "Unsupported filesystem blocksize %d (%d log_block_size)",
+-			 blocksize, le32_to_cpu(es->s_log_block_size));
++			 "Invalid log block size: %u",
++			 le32_to_cpu(es->s_log_block_size));
+ 		goto failed_mount;
+ 	}
++	if (le32_to_cpu(es->s_log_cluster_size) >
++	    (EXT4_MAX_CLUSTER_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) {
++		ext4_msg(sb, KERN_ERR,
++			 "Invalid log cluster size: %u",
++			 le32_to_cpu(es->s_log_cluster_size));
++		goto failed_mount;
++	}
++
++	blocksize = EXT4_MIN_BLOCK_SIZE << le32_to_cpu(es->s_log_block_size);
++
++	if (blocksize == PAGE_SIZE)
++		set_opt(sb, DIOREAD_NOLOCK);
+ 
+ 	if (le32_to_cpu(es->s_rev_level) == EXT4_GOOD_OLD_REV) {
+ 		sbi->s_inode_size = EXT4_GOOD_OLD_INODE_SIZE;
+@@ -4416,21 +4423,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	if (!ext4_feature_set_ok(sb, (sb_rdonly(sb))))
+ 		goto failed_mount;
+ 
+-	if (le32_to_cpu(es->s_log_block_size) >
+-	    (EXT4_MAX_BLOCK_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) {
+-		ext4_msg(sb, KERN_ERR,
+-			 "Invalid log block size: %u",
+-			 le32_to_cpu(es->s_log_block_size));
+-		goto failed_mount;
+-	}
+-	if (le32_to_cpu(es->s_log_cluster_size) >
+-	    (EXT4_MAX_CLUSTER_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) {
+-		ext4_msg(sb, KERN_ERR,
+-			 "Invalid log cluster size: %u",
+-			 le32_to_cpu(es->s_log_cluster_size));
+-		goto failed_mount;
+-	}
+-
+ 	if (le16_to_cpu(sbi->s_es->s_reserved_gdt_blocks) > (blocksize / 4)) {
+ 		ext4_msg(sb, KERN_ERR,
+ 			 "Number of reserved GDT blocks insanely large: %d",
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 023462e80e58d..b39bf416d5114 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -1600,7 +1600,7 @@ int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ 			goto out;
+ 		}
+ 
+-		if (NM_I(sbi)->dirty_nat_cnt == 0 &&
++		if (NM_I(sbi)->nat_cnt[DIRTY_NAT] == 0 &&
+ 				SIT_I(sbi)->dirty_sentries == 0 &&
+ 				prefree_segments(sbi) == 0) {
+ 			f2fs_flush_sit_entries(sbi, cpc);
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 14262e0f1cd60..c5fee4d7ea72f 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -798,8 +798,6 @@ destroy_decompress_ctx:
+ 	if (cops->destroy_decompress_ctx)
+ 		cops->destroy_decompress_ctx(dic);
+ out_free_dic:
+-	if (verity)
+-		atomic_set(&dic->pending_pages, dic->nr_cpages);
+ 	if (!verity)
+ 		f2fs_decompress_end_io(dic->rpages, dic->cluster_size,
+ 								ret, false);
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index be4da52604edc..b29243ee1c3e5 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -202,7 +202,7 @@ static void f2fs_verify_bio(struct bio *bio)
+ 		dic = (struct decompress_io_ctx *)page_private(page);
+ 
+ 		if (dic) {
+-			if (atomic_dec_return(&dic->pending_pages))
++			if (atomic_dec_return(&dic->verity_pages))
+ 				continue;
+ 			f2fs_verify_pages(dic->rpages,
+ 						dic->cluster_size);
+@@ -1027,7 +1027,8 @@ static inline bool f2fs_need_verity(const struct inode *inode, pgoff_t idx)
+ 
+ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
+ 				      unsigned nr_pages, unsigned op_flag,
+-				      pgoff_t first_idx, bool for_write)
++				      pgoff_t first_idx, bool for_write,
++				      bool for_verity)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct bio *bio;
+@@ -1049,7 +1050,7 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
+ 		post_read_steps |= 1 << STEP_DECRYPT;
+ 	if (f2fs_compressed_file(inode))
+ 		post_read_steps |= 1 << STEP_DECOMPRESS_NOWQ;
+-	if (f2fs_need_verity(inode, first_idx))
++	if (for_verity && f2fs_need_verity(inode, first_idx))
+ 		post_read_steps |= 1 << STEP_VERITY;
+ 
+ 	if (post_read_steps) {
+@@ -1079,7 +1080,7 @@ static int f2fs_submit_page_read(struct inode *inode, struct page *page,
+ 	struct bio *bio;
+ 
+ 	bio = f2fs_grab_read_bio(inode, blkaddr, 1, op_flags,
+-					page->index, for_write);
++					page->index, for_write, true);
+ 	if (IS_ERR(bio))
+ 		return PTR_ERR(bio);
+ 
+@@ -2133,7 +2134,7 @@ submit_and_realloc:
+ 	if (bio == NULL) {
+ 		bio = f2fs_grab_read_bio(inode, block_nr, nr_pages,
+ 				is_readahead ? REQ_RAHEAD : 0, page->index,
+-				false);
++				false, true);
+ 		if (IS_ERR(bio)) {
+ 			ret = PTR_ERR(bio);
+ 			bio = NULL;
+@@ -2180,6 +2181,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ 	const unsigned blkbits = inode->i_blkbits;
+ 	const unsigned blocksize = 1 << blkbits;
+ 	struct decompress_io_ctx *dic = NULL;
++	struct bio_post_read_ctx *ctx;
++	bool for_verity = false;
+ 	int i;
+ 	int ret = 0;
+ 
+@@ -2245,10 +2248,29 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ 		goto out_put_dnode;
+ 	}
+ 
++	/*
++	 * It's possible to enable fsverity on the fly when handling a cluster,
++	 * which requires complicated error handling. Instead of adding more
++	 * complexity, let's give a rule where end_io post-processes fsverity
++	 * per cluster. In order to do that, we need to submit bio, if previous
++	 * bio sets a different post-process policy.
++	 */
++	if (fsverity_active(cc->inode)) {
++		atomic_set(&dic->verity_pages, cc->nr_cpages);
++		for_verity = true;
++
++		if (bio) {
++			ctx = bio->bi_private;
++			if (!(ctx->enabled_steps & (1 << STEP_VERITY))) {
++				__submit_bio(sbi, bio, DATA);
++				bio = NULL;
++			}
++		}
++	}
++
+ 	for (i = 0; i < dic->nr_cpages; i++) {
+ 		struct page *page = dic->cpages[i];
+ 		block_t blkaddr;
+-		struct bio_post_read_ctx *ctx;
+ 
+ 		blkaddr = data_blkaddr(dn.inode, dn.node_page,
+ 						dn.ofs_in_node + i + 1);
+@@ -2264,17 +2286,31 @@ submit_and_realloc:
+ 		if (!bio) {
+ 			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
+ 					is_readahead ? REQ_RAHEAD : 0,
+-					page->index, for_write);
++					page->index, for_write, for_verity);
+ 			if (IS_ERR(bio)) {
++				unsigned int remained = dic->nr_cpages - i;
++				bool release = false;
++
+ 				ret = PTR_ERR(bio);
+ 				dic->failed = true;
+-				if (!atomic_sub_return(dic->nr_cpages - i,
+-							&dic->pending_pages)) {
++
++				if (for_verity) {
++					if (!atomic_sub_return(remained,
++						&dic->verity_pages))
++						release = true;
++				} else {
++					if (!atomic_sub_return(remained,
++						&dic->pending_pages))
++						release = true;
++				}
++
++				if (release) {
+ 					f2fs_decompress_end_io(dic->rpages,
+-							cc->cluster_size, true,
+-							false);
++						cc->cluster_size, true,
++						false);
+ 					f2fs_free_dic(dic);
+ 				}
++
+ 				f2fs_put_dnode(&dn);
+ 				*bio_ret = NULL;
+ 				return ret;
+diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
+index a8357fd4f5fab..197c914119da8 100644
+--- a/fs/f2fs/debug.c
++++ b/fs/f2fs/debug.c
+@@ -145,8 +145,8 @@ static void update_general_status(struct f2fs_sb_info *sbi)
+ 		si->node_pages = NODE_MAPPING(sbi)->nrpages;
+ 	if (sbi->meta_inode)
+ 		si->meta_pages = META_MAPPING(sbi)->nrpages;
+-	si->nats = NM_I(sbi)->nat_cnt;
+-	si->dirty_nats = NM_I(sbi)->dirty_nat_cnt;
++	si->nats = NM_I(sbi)->nat_cnt[TOTAL_NAT];
++	si->dirty_nats = NM_I(sbi)->nat_cnt[DIRTY_NAT];
+ 	si->sits = MAIN_SEGS(sbi);
+ 	si->dirty_sits = SIT_I(sbi)->dirty_sentries;
+ 	si->free_nids = NM_I(sbi)->nid_cnt[FREE_NID];
+@@ -278,9 +278,10 @@ get_cache:
+ 	si->cache_mem += (NM_I(sbi)->nid_cnt[FREE_NID] +
+ 				NM_I(sbi)->nid_cnt[PREALLOC_NID]) *
+ 				sizeof(struct free_nid);
+-	si->cache_mem += NM_I(sbi)->nat_cnt * sizeof(struct nat_entry);
+-	si->cache_mem += NM_I(sbi)->dirty_nat_cnt *
+-					sizeof(struct nat_entry_set);
++	si->cache_mem += NM_I(sbi)->nat_cnt[TOTAL_NAT] *
++				sizeof(struct nat_entry);
++	si->cache_mem += NM_I(sbi)->nat_cnt[DIRTY_NAT] *
++				sizeof(struct nat_entry_set);
+ 	si->cache_mem += si->inmem_pages * sizeof(struct inmem_pages);
+ 	for (i = 0; i < MAX_INO_ENTRY; i++)
+ 		si->cache_mem += sbi->im[i].ino_num * sizeof(struct ino_entry);
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 9a321c52facec..06e5a6053f3f9 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -894,6 +894,13 @@ enum nid_state {
+ 	MAX_NID_STATE,
+ };
+ 
++enum nat_state {
++	TOTAL_NAT,
++	DIRTY_NAT,
++	RECLAIMABLE_NAT,
++	MAX_NAT_STATE,
++};
++
+ struct f2fs_nm_info {
+ 	block_t nat_blkaddr;		/* base disk address of NAT */
+ 	nid_t max_nid;			/* maximum possible node ids */
+@@ -909,8 +916,7 @@ struct f2fs_nm_info {
+ 	struct rw_semaphore nat_tree_lock;	/* protect nat_tree_lock */
+ 	struct list_head nat_entries;	/* cached nat entry list (clean) */
+ 	spinlock_t nat_list_lock;	/* protect clean nat entry list */
+-	unsigned int nat_cnt;		/* the # of cached nat entries */
+-	unsigned int dirty_nat_cnt;	/* total num of nat entries in set */
++	unsigned int nat_cnt[MAX_NAT_STATE]; /* the # of cached nat entries */
+ 	unsigned int nat_blocks;	/* # of nat blocks */
+ 
+ 	/* free node ids management */
+@@ -1404,6 +1410,7 @@ struct decompress_io_ctx {
+ 	size_t rlen;			/* valid data length in rbuf */
+ 	size_t clen;			/* valid data length in cbuf */
+ 	atomic_t pending_pages;		/* in-flight compressed page count */
++	atomic_t verity_pages;		/* in-flight page count for verity */
+ 	bool failed;			/* indicate IO error during decompression */
+ 	void *private;			/* payload buffer for specified decompression algorithm */
+ 	void *private2;			/* extra payload buffer */
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 42394de6c7eb1..e65d73293a3f6 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -62,8 +62,8 @@ bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type)
+ 				sizeof(struct free_nid)) >> PAGE_SHIFT;
+ 		res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 2);
+ 	} else if (type == NAT_ENTRIES) {
+-		mem_size = (nm_i->nat_cnt * sizeof(struct nat_entry)) >>
+-							PAGE_SHIFT;
++		mem_size = (nm_i->nat_cnt[TOTAL_NAT] *
++				sizeof(struct nat_entry)) >> PAGE_SHIFT;
+ 		res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 2);
+ 		if (excess_cached_nats(sbi))
+ 			res = false;
+@@ -177,7 +177,8 @@ static struct nat_entry *__init_nat_entry(struct f2fs_nm_info *nm_i,
+ 	list_add_tail(&ne->list, &nm_i->nat_entries);
+ 	spin_unlock(&nm_i->nat_list_lock);
+ 
+-	nm_i->nat_cnt++;
++	nm_i->nat_cnt[TOTAL_NAT]++;
++	nm_i->nat_cnt[RECLAIMABLE_NAT]++;
+ 	return ne;
+ }
+ 
+@@ -207,7 +208,8 @@ static unsigned int __gang_lookup_nat_cache(struct f2fs_nm_info *nm_i,
+ static void __del_from_nat_cache(struct f2fs_nm_info *nm_i, struct nat_entry *e)
+ {
+ 	radix_tree_delete(&nm_i->nat_root, nat_get_nid(e));
+-	nm_i->nat_cnt--;
++	nm_i->nat_cnt[TOTAL_NAT]--;
++	nm_i->nat_cnt[RECLAIMABLE_NAT]--;
+ 	__free_nat_entry(e);
+ }
+ 
+@@ -253,7 +255,8 @@ static void __set_nat_cache_dirty(struct f2fs_nm_info *nm_i,
+ 	if (get_nat_flag(ne, IS_DIRTY))
+ 		goto refresh_list;
+ 
+-	nm_i->dirty_nat_cnt++;
++	nm_i->nat_cnt[DIRTY_NAT]++;
++	nm_i->nat_cnt[RECLAIMABLE_NAT]--;
+ 	set_nat_flag(ne, IS_DIRTY, true);
+ refresh_list:
+ 	spin_lock(&nm_i->nat_list_lock);
+@@ -273,7 +276,8 @@ static void __clear_nat_cache_dirty(struct f2fs_nm_info *nm_i,
+ 
+ 	set_nat_flag(ne, IS_DIRTY, false);
+ 	set->entry_cnt--;
+-	nm_i->dirty_nat_cnt--;
++	nm_i->nat_cnt[DIRTY_NAT]--;
++	nm_i->nat_cnt[RECLAIMABLE_NAT]++;
+ }
+ 
+ static unsigned int __gang_lookup_nat_set(struct f2fs_nm_info *nm_i,
+@@ -2944,14 +2948,17 @@ int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ 	LIST_HEAD(sets);
+ 	int err = 0;
+ 
+-	/* during unmount, let's flush nat_bits before checking dirty_nat_cnt */
++	/*
++	 * during unmount, let's flush nat_bits before checking
++	 * nat_cnt[DIRTY_NAT].
++	 */
+ 	if (enabled_nat_bits(sbi, cpc)) {
+ 		down_write(&nm_i->nat_tree_lock);
+ 		remove_nats_in_journal(sbi);
+ 		up_write(&nm_i->nat_tree_lock);
+ 	}
+ 
+-	if (!nm_i->dirty_nat_cnt)
++	if (!nm_i->nat_cnt[DIRTY_NAT])
+ 		return 0;
+ 
+ 	down_write(&nm_i->nat_tree_lock);
+@@ -2962,7 +2969,8 @@ int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ 	 * into nat entry set.
+ 	 */
+ 	if (enabled_nat_bits(sbi, cpc) ||
+-		!__has_cursum_space(journal, nm_i->dirty_nat_cnt, NAT_JOURNAL))
++		!__has_cursum_space(journal,
++			nm_i->nat_cnt[DIRTY_NAT], NAT_JOURNAL))
+ 		remove_nats_in_journal(sbi);
+ 
+ 	while ((found = __gang_lookup_nat_set(nm_i,
+@@ -3086,7 +3094,6 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
+ 						F2FS_RESERVED_NODE_NUM;
+ 	nm_i->nid_cnt[FREE_NID] = 0;
+ 	nm_i->nid_cnt[PREALLOC_NID] = 0;
+-	nm_i->nat_cnt = 0;
+ 	nm_i->ram_thresh = DEF_RAM_THRESHOLD;
+ 	nm_i->ra_nid_pages = DEF_RA_NID_PAGES;
+ 	nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD;
+@@ -3220,7 +3227,7 @@ void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi)
+ 			__del_from_nat_cache(nm_i, natvec[idx]);
+ 		}
+ 	}
+-	f2fs_bug_on(sbi, nm_i->nat_cnt);
++	f2fs_bug_on(sbi, nm_i->nat_cnt[TOTAL_NAT]);
+ 
+ 	/* destroy nat set cache */
+ 	nid = 0;
+diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
+index 69e5859e993cf..f84541b57acbb 100644
+--- a/fs/f2fs/node.h
++++ b/fs/f2fs/node.h
+@@ -126,13 +126,13 @@ static inline void raw_nat_from_node_info(struct f2fs_nat_entry *raw_ne,
+ 
+ static inline bool excess_dirty_nats(struct f2fs_sb_info *sbi)
+ {
+-	return NM_I(sbi)->dirty_nat_cnt >= NM_I(sbi)->max_nid *
++	return NM_I(sbi)->nat_cnt[DIRTY_NAT] >= NM_I(sbi)->max_nid *
+ 					NM_I(sbi)->dirty_nats_ratio / 100;
+ }
+ 
+ static inline bool excess_cached_nats(struct f2fs_sb_info *sbi)
+ {
+-	return NM_I(sbi)->nat_cnt >= DEF_NAT_CACHE_THRESHOLD;
++	return NM_I(sbi)->nat_cnt[TOTAL_NAT] >= DEF_NAT_CACHE_THRESHOLD;
+ }
+ 
+ static inline bool excess_dirty_nodes(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/shrinker.c b/fs/f2fs/shrinker.c
+index d66de5999a26d..dd3c3c7a90ec8 100644
+--- a/fs/f2fs/shrinker.c
++++ b/fs/f2fs/shrinker.c
+@@ -18,9 +18,7 @@ static unsigned int shrinker_run_no;
+ 
+ static unsigned long __count_nat_entries(struct f2fs_sb_info *sbi)
+ {
+-	long count = NM_I(sbi)->nat_cnt - NM_I(sbi)->dirty_nat_cnt;
+-
+-	return count > 0 ? count : 0;
++	return NM_I(sbi)->nat_cnt[RECLAIMABLE_NAT];
+ }
+ 
+ static unsigned long __count_free_nids(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index fef22e476c526..aa284ce7ec00d 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2744,7 +2744,6 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
+ 	block_t total_sections, blocks_per_seg;
+ 	struct f2fs_super_block *raw_super = (struct f2fs_super_block *)
+ 					(bh->b_data + F2FS_SUPER_OFFSET);
+-	unsigned int blocksize;
+ 	size_t crc_offset = 0;
+ 	__u32 crc = 0;
+ 
+@@ -2778,10 +2777,10 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
+ 	}
+ 
+ 	/* Currently, support only 4KB block size */
+-	blocksize = 1 << le32_to_cpu(raw_super->log_blocksize);
+-	if (blocksize != F2FS_BLKSIZE) {
+-		f2fs_info(sbi, "Invalid blocksize (%u), supports only 4KB",
+-			  blocksize);
++	if (le32_to_cpu(raw_super->log_blocksize) != F2FS_BLKSIZE_BITS) {
++		f2fs_info(sbi, "Invalid log_blocksize (%u), supports only %u",
++			  le32_to_cpu(raw_super->log_blocksize),
++			  F2FS_BLKSIZE_BITS);
+ 		return -EFSCORRUPTED;
+ 	}
+ 
+diff --git a/fs/fcntl.c b/fs/fcntl.c
+index 19ac5baad50fd..05b36b28f2e87 100644
+--- a/fs/fcntl.c
++++ b/fs/fcntl.c
+@@ -781,9 +781,10 @@ void send_sigio(struct fown_struct *fown, int fd, int band)
+ {
+ 	struct task_struct *p;
+ 	enum pid_type type;
++	unsigned long flags;
+ 	struct pid *pid;
+ 	
+-	read_lock(&fown->lock);
++	read_lock_irqsave(&fown->lock, flags);
+ 
+ 	type = fown->pid_type;
+ 	pid = fown->pid;
+@@ -804,7 +805,7 @@ void send_sigio(struct fown_struct *fown, int fd, int band)
+ 		read_unlock(&tasklist_lock);
+ 	}
+  out_unlock_fown:
+-	read_unlock(&fown->lock);
++	read_unlock_irqrestore(&fown->lock, flags);
+ }
+ 
+ static void send_sigurg_to_task(struct task_struct *p,
+@@ -819,9 +820,10 @@ int send_sigurg(struct fown_struct *fown)
+ 	struct task_struct *p;
+ 	enum pid_type type;
+ 	struct pid *pid;
++	unsigned long flags;
+ 	int ret = 0;
+ 	
+-	read_lock(&fown->lock);
++	read_lock_irqsave(&fown->lock, flags);
+ 
+ 	type = fown->pid_type;
+ 	pid = fown->pid;
+@@ -844,7 +846,7 @@ int send_sigurg(struct fown_struct *fown)
+ 		read_unlock(&tasklist_lock);
+ 	}
+  out_unlock_fown:
+-	read_unlock(&fown->lock);
++	read_unlock_irqrestore(&fown->lock, flags);
+ 	return ret;
+ }
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 0fcd065baa760..1f798c5c4213e 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -941,6 +941,10 @@ enum io_mem_account {
+ 	ACCT_PINNED,
+ };
+ 
++static void destroy_fixed_file_ref_node(struct fixed_file_ref_node *ref_node);
++static struct fixed_file_ref_node *alloc_fixed_file_ref_node(
++			struct io_ring_ctx *ctx);
++
+ static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
+ 			     struct io_comp_state *cs);
+ static void io_cqring_fill_event(struct io_kiocb *req, long res);
+@@ -1369,6 +1373,13 @@ static bool io_grab_identity(struct io_kiocb *req)
+ 		spin_unlock_irq(&ctx->inflight_lock);
+ 		req->work.flags |= IO_WQ_WORK_FILES;
+ 	}
++	if (!(req->work.flags & IO_WQ_WORK_MM) &&
++	    (def->work_flags & IO_WQ_WORK_MM)) {
++		if (id->mm != current->mm)
++			return false;
++		mmgrab(id->mm);
++		req->work.flags |= IO_WQ_WORK_MM;
++	}
+ 
+ 	return true;
+ }
+@@ -1393,13 +1404,6 @@ static void io_prep_async_work(struct io_kiocb *req)
+ 			req->work.flags |= IO_WQ_WORK_UNBOUND;
+ 	}
+ 
+-	/* ->mm can never change on us */
+-	if (!(req->work.flags & IO_WQ_WORK_MM) &&
+-	    (def->work_flags & IO_WQ_WORK_MM)) {
+-		mmgrab(id->mm);
+-		req->work.flags |= IO_WQ_WORK_MM;
+-	}
+-
+ 	/* if we fail grabbing identity, we must COW, regrab, and retry */
+ 	if (io_grab_identity(req))
+ 		return;
+@@ -1632,8 +1636,6 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+ 	LIST_HEAD(list);
+ 
+ 	if (!force) {
+-		if (list_empty_careful(&ctx->cq_overflow_list))
+-			return true;
+ 		if ((ctx->cached_cq_tail - READ_ONCE(rings->cq.head) ==
+ 		    rings->cq_ring_entries))
+ 			return false;
+@@ -5861,15 +5863,15 @@ static void io_req_drop_files(struct io_kiocb *req)
+ 	struct io_ring_ctx *ctx = req->ctx;
+ 	unsigned long flags;
+ 
++	put_files_struct(req->work.identity->files);
++	put_nsproxy(req->work.identity->nsproxy);
+ 	spin_lock_irqsave(&ctx->inflight_lock, flags);
+ 	list_del(&req->inflight_entry);
+-	if (waitqueue_active(&ctx->inflight_wait))
+-		wake_up(&ctx->inflight_wait);
+ 	spin_unlock_irqrestore(&ctx->inflight_lock, flags);
+ 	req->flags &= ~REQ_F_INFLIGHT;
+-	put_files_struct(req->work.identity->files);
+-	put_nsproxy(req->work.identity->nsproxy);
+ 	req->work.flags &= ~IO_WQ_WORK_FILES;
++	if (waitqueue_active(&ctx->inflight_wait))
++		wake_up(&ctx->inflight_wait);
+ }
+ 
+ static void __io_clean_op(struct io_kiocb *req)
+@@ -6575,8 +6577,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
+ 
+ 	/* if we have a backlog and couldn't flush it all, return BUSY */
+ 	if (test_bit(0, &ctx->sq_check_overflow)) {
+-		if (!list_empty(&ctx->cq_overflow_list) &&
+-		    !io_cqring_overflow_flush(ctx, false, NULL, NULL))
++		if (!io_cqring_overflow_flush(ctx, false, NULL, NULL))
+ 			return -EBUSY;
+ 	}
+ 
+@@ -6798,8 +6799,16 @@ static int io_sq_thread(void *data)
+ 		 * kthread parking. This synchronizes the thread vs users,
+ 		 * the users are synchronized on the sqd->ctx_lock.
+ 		 */
+-		if (kthread_should_park())
++		if (kthread_should_park()) {
+ 			kthread_parkme();
++			/*
++			 * When sq thread is unparked, in case the previous park operation
++			 * comes from io_put_sq_data(), which means that sq thread is going
++			 * to be stopped, so here needs to have a check.
++			 */
++			if (kthread_should_stop())
++				break;
++		}
+ 
+ 		if (unlikely(!list_empty(&sqd->ctx_new_list)))
+ 			io_sqd_init_new(sqd);
+@@ -6991,18 +7000,32 @@ static void io_file_ref_kill(struct percpu_ref *ref)
+ 	complete(&data->done);
+ }
+ 
++static void io_sqe_files_set_node(struct fixed_file_data *file_data,
++				  struct fixed_file_ref_node *ref_node)
++{
++	spin_lock_bh(&file_data->lock);
++	file_data->node = ref_node;
++	list_add_tail(&ref_node->node, &file_data->ref_list);
++	spin_unlock_bh(&file_data->lock);
++	percpu_ref_get(&file_data->refs);
++}
++
+ static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
+ {
+ 	struct fixed_file_data *data = ctx->file_data;
+-	struct fixed_file_ref_node *ref_node = NULL;
++	struct fixed_file_ref_node *backup_node, *ref_node = NULL;
+ 	unsigned nr_tables, i;
++	int ret;
+ 
+ 	if (!data)
+ 		return -ENXIO;
++	backup_node = alloc_fixed_file_ref_node(ctx);
++	if (!backup_node)
++		return -ENOMEM;
+ 
+-	spin_lock(&data->lock);
++	spin_lock_bh(&data->lock);
+ 	ref_node = data->node;
+-	spin_unlock(&data->lock);
++	spin_unlock_bh(&data->lock);
+ 	if (ref_node)
+ 		percpu_ref_kill(&ref_node->refs);
+ 
+@@ -7010,7 +7033,18 @@ static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
+ 
+ 	/* wait for all refs nodes to complete */
+ 	flush_delayed_work(&ctx->file_put_work);
+-	wait_for_completion(&data->done);
++	do {
++		ret = wait_for_completion_interruptible(&data->done);
++		if (!ret)
++			break;
++		ret = io_run_task_work_sig();
++		if (ret < 0) {
++			percpu_ref_resurrect(&data->refs);
++			reinit_completion(&data->done);
++			io_sqe_files_set_node(data, backup_node);
++			return ret;
++		}
++	} while (1);
+ 
+ 	__io_sqe_files_unregister(ctx);
+ 	nr_tables = DIV_ROUND_UP(ctx->nr_user_files, IORING_MAX_FILES_TABLE);
+@@ -7021,6 +7055,7 @@ static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
+ 	kfree(data);
+ 	ctx->file_data = NULL;
+ 	ctx->nr_user_files = 0;
++	destroy_fixed_file_ref_node(backup_node);
+ 	return 0;
+ }
+ 
+@@ -7385,7 +7420,7 @@ static void io_file_data_ref_zero(struct percpu_ref *ref)
+ 	data = ref_node->file_data;
+ 	ctx = data->ctx;
+ 
+-	spin_lock(&data->lock);
++	spin_lock_bh(&data->lock);
+ 	ref_node->done = true;
+ 
+ 	while (!list_empty(&data->ref_list)) {
+@@ -7397,7 +7432,7 @@ static void io_file_data_ref_zero(struct percpu_ref *ref)
+ 		list_del(&ref_node->node);
+ 		first_add |= llist_add(&ref_node->llist, &ctx->file_put_llist);
+ 	}
+-	spin_unlock(&data->lock);
++	spin_unlock_bh(&data->lock);
+ 
+ 	if (percpu_ref_is_dying(&data->refs))
+ 		delay = 0;
+@@ -7519,11 +7554,7 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ 		return PTR_ERR(ref_node);
+ 	}
+ 
+-	file_data->node = ref_node;
+-	spin_lock(&file_data->lock);
+-	list_add_tail(&ref_node->node, &file_data->ref_list);
+-	spin_unlock(&file_data->lock);
+-	percpu_ref_get(&file_data->refs);
++	io_sqe_files_set_node(file_data, ref_node);
+ 	return ret;
+ out_fput:
+ 	for (i = 0; i < ctx->nr_user_files; i++) {
+@@ -7679,11 +7710,7 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+ 
+ 	if (needs_switch) {
+ 		percpu_ref_kill(&data->node->refs);
+-		spin_lock(&data->lock);
+-		list_add_tail(&ref_node->node, &data->ref_list);
+-		data->node = ref_node;
+-		spin_unlock(&data->lock);
+-		percpu_ref_get(&ctx->file_data->refs);
++		io_sqe_files_set_node(data, ref_node);
+ 	} else
+ 		destroy_fixed_file_ref_node(ref_node);
+ 
+diff --git a/fs/jffs2/jffs2_fs_sb.h b/fs/jffs2/jffs2_fs_sb.h
+index 778275f48a879..5a7091746f68b 100644
+--- a/fs/jffs2/jffs2_fs_sb.h
++++ b/fs/jffs2/jffs2_fs_sb.h
+@@ -38,6 +38,7 @@ struct jffs2_mount_opts {
+ 	 * users. This is implemented simply by means of not allowing the
+ 	 * latter users to write to the file system if the amount if the
+ 	 * available space is less then 'rp_size'. */
++	bool set_rp_size;
+ 	unsigned int rp_size;
+ };
+ 
+diff --git a/fs/jffs2/super.c b/fs/jffs2/super.c
+index 4fd297bdf0f3f..81ca58c10b728 100644
+--- a/fs/jffs2/super.c
++++ b/fs/jffs2/super.c
+@@ -88,7 +88,7 @@ static int jffs2_show_options(struct seq_file *s, struct dentry *root)
+ 
+ 	if (opts->override_compr)
+ 		seq_printf(s, ",compr=%s", jffs2_compr_name(opts->compr));
+-	if (opts->rp_size)
++	if (opts->set_rp_size)
+ 		seq_printf(s, ",rp_size=%u", opts->rp_size / 1024);
+ 
+ 	return 0;
+@@ -202,11 +202,8 @@ static int jffs2_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 	case Opt_rp_size:
+ 		if (result.uint_32 > UINT_MAX / 1024)
+ 			return invalf(fc, "jffs2: rp_size unrepresentable");
+-		opt = result.uint_32 * 1024;
+-		if (opt > c->mtd->size)
+-			return invalf(fc, "jffs2: Too large reserve pool specified, max is %llu KB",
+-				      c->mtd->size / 1024);
+-		c->mount_opts.rp_size = opt;
++		c->mount_opts.rp_size = result.uint_32 * 1024;
++		c->mount_opts.set_rp_size = true;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+@@ -225,8 +222,10 @@ static inline void jffs2_update_mount_opts(struct fs_context *fc)
+ 		c->mount_opts.override_compr = new_c->mount_opts.override_compr;
+ 		c->mount_opts.compr = new_c->mount_opts.compr;
+ 	}
+-	if (new_c->mount_opts.rp_size)
++	if (new_c->mount_opts.set_rp_size) {
++		c->mount_opts.set_rp_size = new_c->mount_opts.set_rp_size;
+ 		c->mount_opts.rp_size = new_c->mount_opts.rp_size;
++	}
+ 	mutex_unlock(&c->alloc_sem);
+ }
+ 
+@@ -266,6 +265,10 @@ static int jffs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	c->mtd = sb->s_mtd;
+ 	c->os_priv = sb;
+ 
++	if (c->mount_opts.rp_size > c->mtd->size)
++		return invalf(fc, "jffs2: Too large reserve pool specified, max is %llu KB",
++			      c->mtd->size / 1024);
++
+ 	/* Initialize JFFS2 superblock locks, the further initialization will
+ 	 * be done later */
+ 	mutex_init(&c->alloc_sem);
+diff --git a/fs/namespace.c b/fs/namespace.c
+index cebaa3e817940..93006abe7946a 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -156,10 +156,10 @@ static inline void mnt_add_count(struct mount *mnt, int n)
+ /*
+  * vfsmount lock must be held for write
+  */
+-unsigned int mnt_get_count(struct mount *mnt)
++int mnt_get_count(struct mount *mnt)
+ {
+ #ifdef CONFIG_SMP
+-	unsigned int count = 0;
++	int count = 0;
+ 	int cpu;
+ 
+ 	for_each_possible_cpu(cpu) {
+@@ -1139,6 +1139,7 @@ static DECLARE_DELAYED_WORK(delayed_mntput_work, delayed_mntput);
+ static void mntput_no_expire(struct mount *mnt)
+ {
+ 	LIST_HEAD(list);
++	int count;
+ 
+ 	rcu_read_lock();
+ 	if (likely(READ_ONCE(mnt->mnt_ns))) {
+@@ -1162,7 +1163,9 @@ static void mntput_no_expire(struct mount *mnt)
+ 	 */
+ 	smp_mb();
+ 	mnt_add_count(mnt, -1);
+-	if (mnt_get_count(mnt)) {
++	count = mnt_get_count(mnt);
++	if (count != 0) {
++		WARN_ON(count < 0);
+ 		rcu_read_unlock();
+ 		unlock_mount_hash();
+ 		return;
+diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
+index 8432bd6b95f08..c078f88552695 100644
+--- a/fs/nfs/nfs42xdr.c
++++ b/fs/nfs/nfs42xdr.c
+@@ -1019,29 +1019,24 @@ static int decode_deallocate(struct xdr_stream *xdr, struct nfs42_falloc_res *re
+ 	return decode_op_hdr(xdr, OP_DEALLOCATE);
+ }
+ 
+-static int decode_read_plus_data(struct xdr_stream *xdr, struct nfs_pgio_res *res,
+-				 uint32_t *eof)
++static int decode_read_plus_data(struct xdr_stream *xdr,
++				 struct nfs_pgio_res *res)
+ {
+ 	uint32_t count, recvd;
+ 	uint64_t offset;
+ 	__be32 *p;
+ 
+ 	p = xdr_inline_decode(xdr, 8 + 4);
+-	if (unlikely(!p))
+-		return -EIO;
++	if (!p)
++		return 1;
+ 
+ 	p = xdr_decode_hyper(p, &offset);
+ 	count = be32_to_cpup(p);
+ 	recvd = xdr_align_data(xdr, res->count, count);
+ 	res->count += recvd;
+ 
+-	if (count > recvd) {
+-		dprintk("NFS: server cheating in read reply: "
+-				"count %u > recvd %u\n", count, recvd);
+-		*eof = 0;
++	if (count > recvd)
+ 		return 1;
+-	}
+-
+ 	return 0;
+ }
+ 
+@@ -1052,18 +1047,16 @@ static int decode_read_plus_hole(struct xdr_stream *xdr, struct nfs_pgio_res *re
+ 	__be32 *p;
+ 
+ 	p = xdr_inline_decode(xdr, 8 + 8);
+-	if (unlikely(!p))
+-		return -EIO;
++	if (!p)
++		return 1;
+ 
+ 	p = xdr_decode_hyper(p, &offset);
+ 	p = xdr_decode_hyper(p, &length);
+ 	recvd = xdr_expand_hole(xdr, res->count, length);
+ 	res->count += recvd;
+ 
+-	if (recvd < length) {
+-		*eof = 0;
++	if (recvd < length)
+ 		return 1;
+-	}
+ 	return 0;
+ }
+ 
+@@ -1088,12 +1081,12 @@ static int decode_read_plus(struct xdr_stream *xdr, struct nfs_pgio_res *res)
+ 
+ 	for (i = 0; i < segments; i++) {
+ 		p = xdr_inline_decode(xdr, 4);
+-		if (unlikely(!p))
+-			return -EIO;
++		if (!p)
++			goto early_out;
+ 
+ 		type = be32_to_cpup(p++);
+ 		if (type == NFS4_CONTENT_DATA)
+-			status = decode_read_plus_data(xdr, res, &eof);
++			status = decode_read_plus_data(xdr, res);
+ 		else if (type == NFS4_CONTENT_HOLE)
+ 			status = decode_read_plus_hole(xdr, res, &eof);
+ 		else
+@@ -1102,12 +1095,17 @@ static int decode_read_plus(struct xdr_stream *xdr, struct nfs_pgio_res *res)
+ 		if (status < 0)
+ 			return status;
+ 		if (status > 0)
+-			break;
++			goto early_out;
+ 	}
+ 
+ out:
+ 	res->eof = eof;
+ 	return 0;
++early_out:
++	if (unlikely(!i))
++		return -EIO;
++	res->eof = 0;
++	return 0;
+ }
+ 
+ static int decode_seek(struct xdr_stream *xdr, struct nfs42_seek_res *res)
+diff --git a/fs/nfs/nfs4super.c b/fs/nfs/nfs4super.c
+index 93f5c1678ec29..984cc42ee54d8 100644
+--- a/fs/nfs/nfs4super.c
++++ b/fs/nfs/nfs4super.c
+@@ -67,7 +67,7 @@ static void nfs4_evict_inode(struct inode *inode)
+ 	nfs_inode_evict_delegation(inode);
+ 	/* Note that above delegreturn would trigger pnfs return-on-close */
+ 	pnfs_return_layout(inode);
+-	pnfs_destroy_layout(NFS_I(inode));
++	pnfs_destroy_layout_final(NFS_I(inode));
+ 	/* First call standard NFS clear_inode() code */
+ 	nfs_clear_inode(inode);
+ 	nfs4_xattr_cache_zap(inode);
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 0e50b9d45c320..07f59dc8cb2e7 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -294,6 +294,7 @@ void
+ pnfs_put_layout_hdr(struct pnfs_layout_hdr *lo)
+ {
+ 	struct inode *inode;
++	unsigned long i_state;
+ 
+ 	if (!lo)
+ 		return;
+@@ -304,8 +305,12 @@ pnfs_put_layout_hdr(struct pnfs_layout_hdr *lo)
+ 		if (!list_empty(&lo->plh_segs))
+ 			WARN_ONCE(1, "NFS: BUG unfreed layout segments.\n");
+ 		pnfs_detach_layout_hdr(lo);
++		i_state = inode->i_state;
+ 		spin_unlock(&inode->i_lock);
+ 		pnfs_free_layout_hdr(lo);
++		/* Notify pnfs_destroy_layout_final() that we're done */
++		if (i_state & (I_FREEING | I_CLEAR))
++			wake_up_var(lo);
+ 	}
+ }
+ 
+@@ -734,8 +739,7 @@ pnfs_free_lseg_list(struct list_head *free_me)
+ 	}
+ }
+ 
+-void
+-pnfs_destroy_layout(struct nfs_inode *nfsi)
++static struct pnfs_layout_hdr *__pnfs_destroy_layout(struct nfs_inode *nfsi)
+ {
+ 	struct pnfs_layout_hdr *lo;
+ 	LIST_HEAD(tmp_list);
+@@ -753,9 +757,34 @@ pnfs_destroy_layout(struct nfs_inode *nfsi)
+ 		pnfs_put_layout_hdr(lo);
+ 	} else
+ 		spin_unlock(&nfsi->vfs_inode.i_lock);
++	return lo;
++}
++
++void pnfs_destroy_layout(struct nfs_inode *nfsi)
++{
++	__pnfs_destroy_layout(nfsi);
+ }
+ EXPORT_SYMBOL_GPL(pnfs_destroy_layout);
+ 
++static bool pnfs_layout_removed(struct nfs_inode *nfsi,
++				struct pnfs_layout_hdr *lo)
++{
++	bool ret;
++
++	spin_lock(&nfsi->vfs_inode.i_lock);
++	ret = nfsi->layout != lo;
++	spin_unlock(&nfsi->vfs_inode.i_lock);
++	return ret;
++}
++
++void pnfs_destroy_layout_final(struct nfs_inode *nfsi)
++{
++	struct pnfs_layout_hdr *lo = __pnfs_destroy_layout(nfsi);
++
++	if (lo)
++		wait_var_event(lo, pnfs_layout_removed(nfsi, lo));
++}
++
+ static bool
+ pnfs_layout_add_bulk_destroy_list(struct inode *inode,
+ 		struct list_head *layout_list)
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index 2661c44c62db4..78c3893918486 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -266,6 +266,7 @@ struct pnfs_layout_segment *pnfs_layout_process(struct nfs4_layoutget *lgp);
+ void pnfs_layoutget_free(struct nfs4_layoutget *lgp);
+ void pnfs_free_lseg_list(struct list_head *tmp_list);
+ void pnfs_destroy_layout(struct nfs_inode *);
++void pnfs_destroy_layout_final(struct nfs_inode *);
+ void pnfs_destroy_all_layouts(struct nfs_client *);
+ int pnfs_destroy_layouts_byfsid(struct nfs_client *clp,
+ 		struct nfs_fsid *fsid,
+@@ -710,6 +711,10 @@ static inline void pnfs_destroy_layout(struct nfs_inode *nfsi)
+ {
+ }
+ 
++static inline void pnfs_destroy_layout_final(struct nfs_inode *nfsi)
++{
++}
++
+ static inline struct pnfs_layout_segment *
+ pnfs_get_lseg(struct pnfs_layout_segment *lseg)
+ {
+diff --git a/fs/pnode.h b/fs/pnode.h
+index 49a058c73e4c7..26f74e092bd98 100644
+--- a/fs/pnode.h
++++ b/fs/pnode.h
+@@ -44,7 +44,7 @@ int propagate_mount_busy(struct mount *, int);
+ void propagate_mount_unlock(struct mount *);
+ void mnt_release_group_id(struct mount *);
+ int get_dominating_id(struct mount *mnt, const struct path *root);
+-unsigned int mnt_get_count(struct mount *mnt);
++int mnt_get_count(struct mount *mnt);
+ void mnt_set_mountpoint(struct mount *, struct mountpoint *,
+ 			struct mount *);
+ void mnt_change_mountpoint(struct mount *parent, struct mountpoint *mp,
+diff --git a/fs/quota/quota_tree.c b/fs/quota/quota_tree.c
+index a6f856f341dc7..c5562c871c8be 100644
+--- a/fs/quota/quota_tree.c
++++ b/fs/quota/quota_tree.c
+@@ -62,7 +62,7 @@ static ssize_t read_blk(struct qtree_mem_dqinfo *info, uint blk, char *buf)
+ 
+ 	memset(buf, 0, info->dqi_usable_bs);
+ 	return sb->s_op->quota_read(sb, info->dqi_type, buf,
+-	       info->dqi_usable_bs, blk << info->dqi_blocksize_bits);
++	       info->dqi_usable_bs, (loff_t)blk << info->dqi_blocksize_bits);
+ }
+ 
+ static ssize_t write_blk(struct qtree_mem_dqinfo *info, uint blk, char *buf)
+@@ -71,7 +71,7 @@ static ssize_t write_blk(struct qtree_mem_dqinfo *info, uint blk, char *buf)
+ 	ssize_t ret;
+ 
+ 	ret = sb->s_op->quota_write(sb, info->dqi_type, buf,
+-	       info->dqi_usable_bs, blk << info->dqi_blocksize_bits);
++	       info->dqi_usable_bs, (loff_t)blk << info->dqi_blocksize_bits);
+ 	if (ret != info->dqi_usable_bs) {
+ 		quota_error(sb, "dquota write failed");
+ 		if (ret >= 0)
+@@ -284,7 +284,7 @@ static uint find_free_dqentry(struct qtree_mem_dqinfo *info,
+ 			    blk);
+ 		goto out_buf;
+ 	}
+-	dquot->dq_off = (blk << info->dqi_blocksize_bits) +
++	dquot->dq_off = ((loff_t)blk << info->dqi_blocksize_bits) +
+ 			sizeof(struct qt_disk_dqdbheader) +
+ 			i * info->dqi_entry_size;
+ 	kfree(buf);
+@@ -559,7 +559,7 @@ static loff_t find_block_dqentry(struct qtree_mem_dqinfo *info,
+ 		ret = -EIO;
+ 		goto out_buf;
+ 	} else {
+-		ret = (blk << info->dqi_blocksize_bits) + sizeof(struct
++		ret = ((loff_t)blk << info->dqi_blocksize_bits) + sizeof(struct
+ 		  qt_disk_dqdbheader) + i * info->dqi_entry_size;
+ 	}
+ out_buf:
+diff --git a/fs/reiserfs/stree.c b/fs/reiserfs/stree.c
+index 8bf88d690729e..476a7ff494822 100644
+--- a/fs/reiserfs/stree.c
++++ b/fs/reiserfs/stree.c
+@@ -454,6 +454,12 @@ static int is_leaf(char *buf, int blocksize, struct buffer_head *bh)
+ 					 "(second one): %h", ih);
+ 			return 0;
+ 		}
++		if (is_direntry_le_ih(ih) && (ih_item_len(ih) < (ih_entry_count(ih) * IH_SIZE))) {
++			reiserfs_warning(NULL, "reiserfs-5093",
++					 "item entry count seems wrong %h",
++					 ih);
++			return 0;
++		}
+ 		prev_location = ih_location(ih);
+ 	}
+ 
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index db6ae4d3fb4ed..cd5c313729ea1 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2439,8 +2439,9 @@ extern int __meminit __early_pfn_to_nid(unsigned long pfn,
+ #endif
+ 
+ extern void set_dma_reserve(unsigned long new_dma_reserve);
+-extern void memmap_init_zone(unsigned long, int, unsigned long, unsigned long,
+-		enum meminit_context, struct vmem_altmap *, int migratetype);
++extern void memmap_init_zone(unsigned long, int, unsigned long,
++		unsigned long, unsigned long, enum meminit_context,
++		struct vmem_altmap *, int migratetype);
+ extern void setup_per_zone_wmarks(void);
+ extern int __meminit init_per_zone_wmark_min(void);
+ extern void mem_init(void);
+diff --git a/include/uapi/linux/const.h b/include/uapi/linux/const.h
+index 5ed721ad5b198..af2a44c08683d 100644
+--- a/include/uapi/linux/const.h
++++ b/include/uapi/linux/const.h
+@@ -28,4 +28,9 @@
+ #define _BITUL(x)	(_UL(1) << (x))
+ #define _BITULL(x)	(_ULL(1) << (x))
+ 
++#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1)
++#define __ALIGN_KERNEL_MASK(x, mask)	(((x) + (mask)) & ~(mask))
++
++#define __KERNEL_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
++
+ #endif /* _UAPI_LINUX_CONST_H */
+diff --git a/include/uapi/linux/ethtool.h b/include/uapi/linux/ethtool.h
+index 9ca87bc73c447..cde753bb20935 100644
+--- a/include/uapi/linux/ethtool.h
++++ b/include/uapi/linux/ethtool.h
+@@ -14,7 +14,7 @@
+ #ifndef _UAPI_LINUX_ETHTOOL_H
+ #define _UAPI_LINUX_ETHTOOL_H
+ 
+-#include <linux/kernel.h>
++#include <linux/const.h>
+ #include <linux/types.h>
+ #include <linux/if_ether.h>
+ 
+diff --git a/include/uapi/linux/kernel.h b/include/uapi/linux/kernel.h
+index 0ff8f7477847c..fadf2db71fe8a 100644
+--- a/include/uapi/linux/kernel.h
++++ b/include/uapi/linux/kernel.h
+@@ -3,13 +3,6 @@
+ #define _UAPI_LINUX_KERNEL_H
+ 
+ #include <linux/sysinfo.h>
+-
+-/*
+- * 'kernel.h' contains some often-used function prototypes etc
+- */
+-#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1)
+-#define __ALIGN_KERNEL_MASK(x, mask)	(((x) + (mask)) & ~(mask))
+-
+-#define __KERNEL_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
++#include <linux/const.h>
+ 
+ #endif /* _UAPI_LINUX_KERNEL_H */
+diff --git a/include/uapi/linux/lightnvm.h b/include/uapi/linux/lightnvm.h
+index f9a1be7fc6962..ead2e72e5c88e 100644
+--- a/include/uapi/linux/lightnvm.h
++++ b/include/uapi/linux/lightnvm.h
+@@ -21,7 +21,7 @@
+ #define _UAPI_LINUX_LIGHTNVM_H
+ 
+ #ifdef __KERNEL__
+-#include <linux/kernel.h>
++#include <linux/const.h>
+ #include <linux/ioctl.h>
+ #else /* __KERNEL__ */
+ #include <stdio.h>
+diff --git a/include/uapi/linux/mroute6.h b/include/uapi/linux/mroute6.h
+index c36177a86516e..a1fd6173e2dbe 100644
+--- a/include/uapi/linux/mroute6.h
++++ b/include/uapi/linux/mroute6.h
+@@ -2,7 +2,7 @@
+ #ifndef _UAPI__LINUX_MROUTE6_H
+ #define _UAPI__LINUX_MROUTE6_H
+ 
+-#include <linux/kernel.h>
++#include <linux/const.h>
+ #include <linux/types.h>
+ #include <linux/sockios.h>
+ #include <linux/in6.h>		/* For struct sockaddr_in6. */
+diff --git a/include/uapi/linux/netfilter/x_tables.h b/include/uapi/linux/netfilter/x_tables.h
+index a8283f7dbc519..b8c6bb233ac1c 100644
+--- a/include/uapi/linux/netfilter/x_tables.h
++++ b/include/uapi/linux/netfilter/x_tables.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+ #ifndef _UAPI_X_TABLES_H
+ #define _UAPI_X_TABLES_H
+-#include <linux/kernel.h>
++#include <linux/const.h>
+ #include <linux/types.h>
+ 
+ #define XT_FUNCTION_MAXNAMELEN 30
+diff --git a/include/uapi/linux/netlink.h b/include/uapi/linux/netlink.h
+index c3816ff7bfc32..3d94269bbfa87 100644
+--- a/include/uapi/linux/netlink.h
++++ b/include/uapi/linux/netlink.h
+@@ -2,7 +2,7 @@
+ #ifndef _UAPI__LINUX_NETLINK_H
+ #define _UAPI__LINUX_NETLINK_H
+ 
+-#include <linux/kernel.h>
++#include <linux/const.h>
+ #include <linux/socket.h> /* for __kernel_sa_family_t */
+ #include <linux/types.h>
+ 
+diff --git a/include/uapi/linux/sysctl.h b/include/uapi/linux/sysctl.h
+index 27c1ed2822e69..458179df9b271 100644
+--- a/include/uapi/linux/sysctl.h
++++ b/include/uapi/linux/sysctl.h
+@@ -23,7 +23,7 @@
+ #ifndef _UAPI_LINUX_SYSCTL_H
+ #define _UAPI_LINUX_SYSCTL_H
+ 
+-#include <linux/kernel.h>
++#include <linux/const.h>
+ #include <linux/types.h>
+ #include <linux/compiler.h>
+ 
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index 191c329e482ad..32596fdbcd5b8 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -908,6 +908,8 @@ int cgroup1_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 	opt = fs_parse(fc, cgroup1_fs_parameters, param, &result);
+ 	if (opt == -ENOPARAM) {
+ 		if (strcmp(param->key, "source") == 0) {
++			if (fc->source)
++				return invalf(fc, "Multiple sources not supported");
+ 			fc->source = param->string;
+ 			param->string = NULL;
+ 			return 0;
+diff --git a/kernel/module.c b/kernel/module.c
+index a4fa44a652a75..e20499309b2af 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -1895,7 +1895,6 @@ static int mod_sysfs_init(struct module *mod)
+ 	if (err)
+ 		mod_kobject_put(mod);
+ 
+-	/* delay uevent until full sysfs population */
+ out:
+ 	return err;
+ }
+@@ -1932,7 +1931,6 @@ static int mod_sysfs_setup(struct module *mod,
+ 	add_sect_attrs(mod, info);
+ 	add_notes_attrs(mod, info);
+ 
+-	kobject_uevent(&mod->mkobj.kobj, KOBJ_ADD);
+ 	return 0;
+ 
+ out_unreg_modinfo_attrs:
+@@ -3639,6 +3637,9 @@ static noinline int do_init_module(struct module *mod)
+ 	blocking_notifier_call_chain(&module_notify_list,
+ 				     MODULE_STATE_LIVE, mod);
+ 
++	/* Delay uevent until module has finished its init routine */
++	kobject_uevent(&mod->mkobj.kobj, KOBJ_ADD);
++
+ 	/*
+ 	 * We need to finish all async code before the module init sequence
+ 	 * is done.  This has potential to deadlock.  For example, a newly
+@@ -3991,6 +3992,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
+ 				     MODULE_STATE_GOING, mod);
+ 	klp_module_going(mod);
+  bug_cleanup:
++	mod->state = MODULE_STATE_GOING;
+ 	/* module_bug_cleanup needs module_mutex protection */
+ 	mutex_lock(&module_mutex);
+ 	module_bug_cleanup(mod);
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index 81632cd5e3b72..e8d351b7f9b03 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -941,13 +941,6 @@ static bool can_stop_idle_tick(int cpu, struct tick_sched *ts)
+ 		 */
+ 		if (tick_do_timer_cpu == cpu)
+ 			return false;
+-		/*
+-		 * Boot safety: make sure the timekeeping duty has been
+-		 * assigned before entering dyntick-idle mode,
+-		 * tick_do_timer_cpu is TICK_DO_TIMER_BOOT
+-		 */
+-		if (unlikely(tick_do_timer_cpu == TICK_DO_TIMER_BOOT))
+-			return false;
+ 
+ 		/* Should not happen for nohz-full */
+ 		if (WARN_ON_ONCE(tick_do_timer_cpu == TICK_DO_TIMER_NONE))
+diff --git a/lib/zlib_dfltcc/Makefile b/lib/zlib_dfltcc/Makefile
+index 8e4d5afbbb109..66e1c96387c40 100644
+--- a/lib/zlib_dfltcc/Makefile
++++ b/lib/zlib_dfltcc/Makefile
+@@ -8,4 +8,4 @@
+ 
+ obj-$(CONFIG_ZLIB_DFLTCC) += zlib_dfltcc.o
+ 
+-zlib_dfltcc-objs := dfltcc.o dfltcc_deflate.o dfltcc_inflate.o dfltcc_syms.o
++zlib_dfltcc-objs := dfltcc.o dfltcc_deflate.o dfltcc_inflate.o
+diff --git a/lib/zlib_dfltcc/dfltcc.c b/lib/zlib_dfltcc/dfltcc.c
+index c30de430b30ca..782f76e9d4dab 100644
+--- a/lib/zlib_dfltcc/dfltcc.c
++++ b/lib/zlib_dfltcc/dfltcc.c
+@@ -1,7 +1,8 @@
+ // SPDX-License-Identifier: Zlib
+ /* dfltcc.c - SystemZ DEFLATE CONVERSION CALL support. */
+ 
+-#include <linux/zutil.h>
++#include <linux/export.h>
++#include <linux/module.h>
+ #include "dfltcc_util.h"
+ #include "dfltcc.h"
+ 
+@@ -53,3 +54,6 @@ void dfltcc_reset(
+     dfltcc_state->dht_threshold = DFLTCC_DHT_MIN_SAMPLE_SIZE;
+     dfltcc_state->param.ribm = DFLTCC_RIBM;
+ }
++EXPORT_SYMBOL(dfltcc_reset);
++
++MODULE_LICENSE("GPL");
+diff --git a/lib/zlib_dfltcc/dfltcc_deflate.c b/lib/zlib_dfltcc/dfltcc_deflate.c
+index 00c185101c6d1..6c946e8532eec 100644
+--- a/lib/zlib_dfltcc/dfltcc_deflate.c
++++ b/lib/zlib_dfltcc/dfltcc_deflate.c
+@@ -4,6 +4,7 @@
+ #include "dfltcc_util.h"
+ #include "dfltcc.h"
+ #include <asm/setup.h>
++#include <linux/export.h>
+ #include <linux/zutil.h>
+ 
+ /*
+@@ -34,6 +35,7 @@ int dfltcc_can_deflate(
+ 
+     return 1;
+ }
++EXPORT_SYMBOL(dfltcc_can_deflate);
+ 
+ static void dfltcc_gdht(
+     z_streamp strm
+@@ -277,3 +279,4 @@ again:
+         goto again; /* deflate() must use all input or all output */
+     return 1;
+ }
++EXPORT_SYMBOL(dfltcc_deflate);
+diff --git a/lib/zlib_dfltcc/dfltcc_inflate.c b/lib/zlib_dfltcc/dfltcc_inflate.c
+index db107016d29b3..fb60b5a6a1cb6 100644
+--- a/lib/zlib_dfltcc/dfltcc_inflate.c
++++ b/lib/zlib_dfltcc/dfltcc_inflate.c
+@@ -125,7 +125,7 @@ dfltcc_inflate_action dfltcc_inflate(
+     param->ho = (state->write - state->whave) & ((1 << HB_BITS) - 1);
+     if (param->hl)
+         param->nt = 0; /* Honor history for the first block */
+-    param->cv = state->flags ? REVERSE(state->check) : state->check;
++    param->cv = state->check;
+ 
+     /* Inflate */
+     do {
+@@ -138,7 +138,7 @@ dfltcc_inflate_action dfltcc_inflate(
+     state->bits = param->sbb;
+     state->whave = param->hl;
+     state->write = (param->ho + param->hl) & ((1 << HB_BITS) - 1);
+-    state->check = state->flags ? REVERSE(param->cv) : param->cv;
++    state->check = param->cv;
+     if (cc == DFLTCC_CC_OP2_CORRUPT && param->oesc != 0) {
+         /* Report an error if stream is corrupted */
+         state->mode = BAD;
+diff --git a/lib/zlib_dfltcc/dfltcc_syms.c b/lib/zlib_dfltcc/dfltcc_syms.c
+deleted file mode 100644
+index 6f23481804c1d..0000000000000
+--- a/lib/zlib_dfltcc/dfltcc_syms.c
++++ /dev/null
+@@ -1,17 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * linux/lib/zlib_dfltcc/dfltcc_syms.c
+- *
+- * Exported symbols for the s390 zlib dfltcc support.
+- *
+- */
+-
+-#include <linux/init.h>
+-#include <linux/module.h>
+-#include <linux/zlib.h>
+-#include "dfltcc.h"
+-
+-EXPORT_SYMBOL(dfltcc_can_deflate);
+-EXPORT_SYMBOL(dfltcc_deflate);
+-EXPORT_SYMBOL(dfltcc_reset);
+-MODULE_LICENSE("GPL");
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 3b38ea958e954..1fd11f96a707a 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4106,10 +4106,30 @@ retry_avoidcopy:
+ 		 * may get SIGKILLed if it later faults.
+ 		 */
+ 		if (outside_reserve) {
++			struct address_space *mapping = vma->vm_file->f_mapping;
++			pgoff_t idx;
++			u32 hash;
++
+ 			put_page(old_page);
+ 			BUG_ON(huge_pte_none(pte));
++			/*
++			 * Drop hugetlb_fault_mutex and i_mmap_rwsem before
++			 * unmapping.  unmapping needs to hold i_mmap_rwsem
++			 * in write mode.  Dropping i_mmap_rwsem in read mode
++			 * here is OK as COW mappings do not interact with
++			 * PMD sharing.
++			 *
++			 * Reacquire both after unmap operation.
++			 */
++			idx = vma_hugecache_offset(h, vma, haddr);
++			hash = hugetlb_fault_mutex_hash(mapping, idx);
++			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
++			i_mmap_unlock_read(mapping);
++
+ 			unmap_ref_private(mm, vma, old_page, haddr);
+-			BUG_ON(huge_pte_none(pte));
++
++			i_mmap_lock_read(mapping);
++			mutex_lock(&hugetlb_fault_mutex_table[hash]);
+ 			spin_lock(ptl);
+ 			ptep = huge_pte_offset(mm, haddr, huge_page_size(h));
+ 			if (likely(ptep &&
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 0f855deea4b2d..aa453a4331437 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -714,7 +714,7 @@ void __ref move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
+ 	 * expects the zone spans the pfn range. All the pages in the range
+ 	 * are reserved so nobody should be touching them so we should be safe
+ 	 */
+-	memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn,
++	memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn, 0,
+ 			 MEMINIT_HOTPLUG, altmap, migratetype);
+ 
+ 	set_zone_contiguous(zone);
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 32f783ddb5c3a..14b9e83ff9da2 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -448,6 +448,8 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
+ 	if (end_pfn < pgdat_end_pfn(NODE_DATA(nid)))
+ 		return false;
+ 
++	if (NODE_DATA(nid)->first_deferred_pfn != ULONG_MAX)
++		return true;
+ 	/*
+ 	 * We start only with one section of pages, more pages are added as
+ 	 * needed until the rest of deferred pages are initialized.
+@@ -6050,7 +6052,7 @@ overlap_memmap_init(unsigned long zone, unsigned long *pfn)
+  * zone stats (e.g., nr_isolate_pageblock) are touched.
+  */
+ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
+-		unsigned long start_pfn,
++		unsigned long start_pfn, unsigned long zone_end_pfn,
+ 		enum meminit_context context,
+ 		struct vmem_altmap *altmap, int migratetype)
+ {
+@@ -6086,7 +6088,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
+ 		if (context == MEMINIT_EARLY) {
+ 			if (overlap_memmap_init(zone, &pfn))
+ 				continue;
+-			if (defer_init(nid, pfn, end_pfn))
++			if (defer_init(nid, pfn, zone_end_pfn))
+ 				break;
+ 		}
+ 
+@@ -6200,7 +6202,7 @@ void __meminit __weak memmap_init(unsigned long size, int nid,
+ 
+ 		if (end_pfn > start_pfn) {
+ 			size = end_pfn - start_pfn;
+-			memmap_init_zone(size, nid, zone, start_pfn,
++			memmap_init_zone(size, nid, zone, start_pfn, range_end_pfn,
+ 					 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
+ 		}
+ 	}
+diff --git a/net/ethtool/channels.c b/net/ethtool/channels.c
+index 5635604cb9ba1..25a9e566ef5cd 100644
+--- a/net/ethtool/channels.c
++++ b/net/ethtool/channels.c
+@@ -194,8 +194,9 @@ int ethnl_set_channels(struct sk_buff *skb, struct genl_info *info)
+ 	if (netif_is_rxfh_configured(dev) &&
+ 	    !ethtool_get_max_rxfh_channel(dev, &max_rx_in_use) &&
+ 	    (channels.combined_count + channels.rx_count) <= max_rx_in_use) {
++		ret = -EINVAL;
+ 		GENL_SET_ERR_MSG(info, "requested channel counts are too low for existing indirection table settings");
+-		return -EINVAL;
++		goto out_ops;
+ 	}
+ 
+ 	/* Disabling channels, query zero-copy AF_XDP sockets */
+@@ -203,8 +204,9 @@ int ethnl_set_channels(struct sk_buff *skb, struct genl_info *info)
+ 		       min(channels.rx_count, channels.tx_count);
+ 	for (i = from_channel; i < old_total; i++)
+ 		if (xsk_get_pool_from_qid(dev, i)) {
++			ret = -EINVAL;
+ 			GENL_SET_ERR_MSG(info, "requested channel counts are too low for existing zerocopy AF_XDP sockets");
+-			return -EINVAL;
++			goto out_ops;
+ 		}
+ 
+ 	ret = dev->ethtool_ops->set_channels(dev, &channels);
+diff --git a/net/ethtool/strset.c b/net/ethtool/strset.c
+index 0baad0ce18328..c3a5489964cde 100644
+--- a/net/ethtool/strset.c
++++ b/net/ethtool/strset.c
+@@ -182,7 +182,7 @@ static int strset_parse_request(struct ethnl_req_info *req_base,
+ 		ret = strset_get_id(attr, &id, extack);
+ 		if (ret < 0)
+ 			return ret;
+-		if (ret >= ETH_SS_COUNT) {
++		if (id >= ETH_SS_COUNT) {
+ 			NL_SET_ERR_MSG_ATTR(extack, attr,
+ 					    "unknown string set id");
+ 			return -EOPNOTSUPP;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 88f2a7a0ccb86..967ce9ccfc0da 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2081,6 +2081,8 @@ struct sock *mptcp_sk_clone(const struct sock *sk,
+ 	sock_reset_flag(nsk, SOCK_RCU_FREE);
+ 	/* will be fully established after successful MPC subflow creation */
+ 	inet_sk_state_store(nsk, TCP_SYN_RECV);
++
++	security_inet_csk_clone(nsk, req);
+ 	bh_unlock_sock(nsk);
+ 
+ 	/* keep a single reference */
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index b0ad7687ee2c8..c6653ee7f701b 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1596,6 +1596,21 @@ free_sched:
+ 	return err;
+ }
+ 
++static void taprio_reset(struct Qdisc *sch)
++{
++	struct taprio_sched *q = qdisc_priv(sch);
++	struct net_device *dev = qdisc_dev(sch);
++	int i;
++
++	hrtimer_cancel(&q->advance_timer);
++	if (q->qdiscs) {
++		for (i = 0; i < dev->num_tx_queues && q->qdiscs[i]; i++)
++			qdisc_reset(q->qdiscs[i]);
++	}
++	sch->qstats.backlog = 0;
++	sch->q.qlen = 0;
++}
++
+ static void taprio_destroy(struct Qdisc *sch)
+ {
+ 	struct taprio_sched *q = qdisc_priv(sch);
+@@ -1606,7 +1621,6 @@ static void taprio_destroy(struct Qdisc *sch)
+ 	list_del(&q->taprio_list);
+ 	spin_unlock(&taprio_list_lock);
+ 
+-	hrtimer_cancel(&q->advance_timer);
+ 
+ 	taprio_disable_offload(dev, q, NULL);
+ 
+@@ -1953,6 +1967,7 @@ static struct Qdisc_ops taprio_qdisc_ops __read_mostly = {
+ 	.init		= taprio_init,
+ 	.change		= taprio_change,
+ 	.destroy	= taprio_destroy,
++	.reset		= taprio_reset,
+ 	.peek		= taprio_peek,
+ 	.dequeue	= taprio_dequeue,
+ 	.enqueue	= taprio_enqueue,
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 47b155a49226f..9f3f8e953ff04 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -755,8 +755,13 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ 		runtime->boundary *= 2;
+ 
+ 	/* clear the buffer for avoiding possible kernel info leaks */
+-	if (runtime->dma_area && !substream->ops->copy_user)
+-		memset(runtime->dma_area, 0, runtime->dma_bytes);
++	if (runtime->dma_area && !substream->ops->copy_user) {
++		size_t size = runtime->dma_bytes;
++
++		if (runtime->info & SNDRV_PCM_INFO_MMAP)
++			size = PAGE_ALIGN(size);
++		memset(runtime->dma_area, 0, size);
++	}
+ 
+ 	snd_pcm_timer_resolution_change(substream);
+ 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_SETUP);
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index c78720a3299c4..257ad5206240f 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -95,11 +95,21 @@ static inline unsigned short snd_rawmidi_file_flags(struct file *file)
+ 	}
+ }
+ 
+-static inline int snd_rawmidi_ready(struct snd_rawmidi_substream *substream)
++static inline bool __snd_rawmidi_ready(struct snd_rawmidi_runtime *runtime)
++{
++	return runtime->avail >= runtime->avail_min;
++}
++
++static bool snd_rawmidi_ready(struct snd_rawmidi_substream *substream)
+ {
+ 	struct snd_rawmidi_runtime *runtime = substream->runtime;
++	unsigned long flags;
++	bool ready;
+ 
+-	return runtime->avail >= runtime->avail_min;
++	spin_lock_irqsave(&runtime->lock, flags);
++	ready = __snd_rawmidi_ready(runtime);
++	spin_unlock_irqrestore(&runtime->lock, flags);
++	return ready;
+ }
+ 
+ static inline int snd_rawmidi_ready_append(struct snd_rawmidi_substream *substream,
+@@ -1019,7 +1029,7 @@ int snd_rawmidi_receive(struct snd_rawmidi_substream *substream,
+ 	if (result > 0) {
+ 		if (runtime->event)
+ 			schedule_work(&runtime->event_work);
+-		else if (snd_rawmidi_ready(substream))
++		else if (__snd_rawmidi_ready(runtime))
+ 			wake_up(&runtime->sleep);
+ 	}
+ 	spin_unlock_irqrestore(&runtime->lock, flags);
+@@ -1098,7 +1108,7 @@ static ssize_t snd_rawmidi_read(struct file *file, char __user *buf, size_t coun
+ 	result = 0;
+ 	while (count > 0) {
+ 		spin_lock_irq(&runtime->lock);
+-		while (!snd_rawmidi_ready(substream)) {
++		while (!__snd_rawmidi_ready(runtime)) {
+ 			wait_queue_entry_t wait;
+ 
+ 			if ((file->f_flags & O_NONBLOCK) != 0 || result > 0) {
+@@ -1115,9 +1125,11 @@ static ssize_t snd_rawmidi_read(struct file *file, char __user *buf, size_t coun
+ 				return -ENODEV;
+ 			if (signal_pending(current))
+ 				return result > 0 ? result : -ERESTARTSYS;
+-			if (!runtime->avail)
+-				return result > 0 ? result : -EIO;
+ 			spin_lock_irq(&runtime->lock);
++			if (!runtime->avail) {
++				spin_unlock_irq(&runtime->lock);
++				return result > 0 ? result : -EIO;
++			}
+ 		}
+ 		spin_unlock_irq(&runtime->lock);
+ 		count1 = snd_rawmidi_kernel_read1(substream,
+@@ -1255,7 +1267,7 @@ int __snd_rawmidi_transmit_ack(struct snd_rawmidi_substream *substream, int coun
+ 	runtime->avail += count;
+ 	substream->bytes += count;
+ 	if (count > 0) {
+-		if (runtime->drain || snd_rawmidi_ready(substream))
++		if (runtime->drain || __snd_rawmidi_ready(runtime))
+ 			wake_up(&runtime->sleep);
+ 	}
+ 	return count;
+@@ -1444,9 +1456,11 @@ static ssize_t snd_rawmidi_write(struct file *file, const char __user *buf,
+ 				return -ENODEV;
+ 			if (signal_pending(current))
+ 				return result > 0 ? result : -ERESTARTSYS;
+-			if (!runtime->avail && !timeout)
+-				return result > 0 ? result : -EIO;
+ 			spin_lock_irq(&runtime->lock);
++			if (!runtime->avail && !timeout) {
++				spin_unlock_irq(&runtime->lock);
++				return result > 0 ? result : -EIO;
++			}
+ 		}
+ 		spin_unlock_irq(&runtime->lock);
+ 		count1 = snd_rawmidi_kernel_write1(substream, buf, NULL, count);
+@@ -1526,6 +1540,7 @@ static void snd_rawmidi_proc_info_read(struct snd_info_entry *entry,
+ 	struct snd_rawmidi *rmidi;
+ 	struct snd_rawmidi_substream *substream;
+ 	struct snd_rawmidi_runtime *runtime;
++	unsigned long buffer_size, avail, xruns;
+ 
+ 	rmidi = entry->private_data;
+ 	snd_iprintf(buffer, "%s\n\n", rmidi->name);
+@@ -1544,13 +1559,16 @@ static void snd_rawmidi_proc_info_read(struct snd_info_entry *entry,
+ 				    "  Owner PID    : %d\n",
+ 				    pid_vnr(substream->pid));
+ 				runtime = substream->runtime;
++				spin_lock_irq(&runtime->lock);
++				buffer_size = runtime->buffer_size;
++				avail = runtime->avail;
++				spin_unlock_irq(&runtime->lock);
+ 				snd_iprintf(buffer,
+ 				    "  Mode         : %s\n"
+ 				    "  Buffer size  : %lu\n"
+ 				    "  Avail        : %lu\n",
+ 				    runtime->oss ? "OSS compatible" : "native",
+-				    (unsigned long) runtime->buffer_size,
+-				    (unsigned long) runtime->avail);
++				    buffer_size, avail);
+ 			}
+ 		}
+ 	}
+@@ -1568,13 +1586,16 @@ static void snd_rawmidi_proc_info_read(struct snd_info_entry *entry,
+ 					    "  Owner PID    : %d\n",
+ 					    pid_vnr(substream->pid));
+ 				runtime = substream->runtime;
++				spin_lock_irq(&runtime->lock);
++				buffer_size = runtime->buffer_size;
++				avail = runtime->avail;
++				xruns = runtime->xruns;
++				spin_unlock_irq(&runtime->lock);
+ 				snd_iprintf(buffer,
+ 					    "  Buffer size  : %lu\n"
+ 					    "  Avail        : %lu\n"
+ 					    "  Overruns     : %lu\n",
+-					    (unsigned long) runtime->buffer_size,
+-					    (unsigned long) runtime->avail,
+-					    (unsigned long) runtime->xruns);
++					    buffer_size, avail, xruns);
+ 			}
+ 		}
+ 	}
+diff --git a/sound/core/seq/seq_queue.h b/sound/core/seq/seq_queue.h
+index 9254c8dbe5e37..25d2d6b610079 100644
+--- a/sound/core/seq/seq_queue.h
++++ b/sound/core/seq/seq_queue.h
+@@ -26,10 +26,10 @@ struct snd_seq_queue {
+ 	
+ 	struct snd_seq_timer *timer;	/* time keeper for this queue */
+ 	int	owner;		/* client that 'owns' the timer */
+-	unsigned int	locked:1,	/* timer is only accesibble by owner if set */
+-		klocked:1,	/* kernel lock (after START) */	
+-		check_again:1,
+-		check_blocked:1;
++	bool	locked;		/* timer is only accesibble by owner if set */
++	bool	klocked;	/* kernel lock (after START) */
++	bool	check_again;	/* concurrent access happened during check */
++	bool	check_blocked;	/* queue being checked */
+ 
+ 	unsigned int flags;		/* status flags */
+ 	unsigned int info_flags;	/* info for sync */
+diff --git a/tools/include/uapi/linux/const.h b/tools/include/uapi/linux/const.h
+index 5ed721ad5b198..af2a44c08683d 100644
+--- a/tools/include/uapi/linux/const.h
++++ b/tools/include/uapi/linux/const.h
+@@ -28,4 +28,9 @@
+ #define _BITUL(x)	(_UL(1) << (x))
+ #define _BITULL(x)	(_ULL(1) << (x))
+ 
++#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1)
++#define __ALIGN_KERNEL_MASK(x, mask)	(((x) + (mask)) & ~(mask))
++
++#define __KERNEL_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
++
+ #endif /* _UAPI_LINUX_CONST_H */


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-01-09  0:14 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-01-09  0:14 UTC (permalink / raw
  To: gentoo-commits

commit:     52fcea92547bfbe61325bee94a23e801d4a80453
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jan  9 00:14:25 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jan  9 00:14:25 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=52fcea92

Add support for shiftfs

UID/GID shifting overlay filesystem for containers

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                    |    4 +
 5000_shifts-ubuntu-20.04.patch | 2203 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2207 insertions(+)

diff --git a/0000_README b/0000_README
index 53642e2..2fb3d39 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
 
+Patch:  5000_shifts-ubuntu-20.04.patch
+From:   https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/focal
+Desc:   UID/GID shifting overlay filesystem for containers 
+
 Patch:  5013_enable-cpu-optimizations-for-gcc10.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel patch enables gcc = v10.1+ optimizations for additional CPUs.

diff --git a/5000_shifts-ubuntu-20.04.patch b/5000_shifts-ubuntu-20.04.patch
new file mode 100644
index 0000000..665fc66
--- /dev/null
+++ b/5000_shifts-ubuntu-20.04.patch
@@ -0,0 +1,2203 @@
+--- /dev/null	2021-01-08 13:33:13.190303432 -0500
++++ b/fs/shiftfs.c	2021-01-08 19:02:40.000000000 -0500
+@@ -0,0 +1,2157 @@
++#include <linux/btrfs.h>
++#include <linux/capability.h>
++#include <linux/cred.h>
++#include <linux/mount.h>
++#include <linux/fdtable.h>
++#include <linux/file.h>
++#include <linux/fs.h>
++#include <linux/namei.h>
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/magic.h>
++#include <linux/parser.h>
++#include <linux/security.h>
++#include <linux/seq_file.h>
++#include <linux/statfs.h>
++#include <linux/slab.h>
++#include <linux/user_namespace.h>
++#include <linux/uidgid.h>
++#include <linux/xattr.h>
++#include <linux/posix_acl.h>
++#include <linux/posix_acl_xattr.h>
++#include <linux/uio.h>
++#include <linux/fiemap.h>
++
++struct shiftfs_super_info {
++	struct vfsmount *mnt;
++	struct user_namespace *userns;
++	/* creds of process who created the super block */
++	const struct cred *creator_cred;
++	bool mark;
++	unsigned int passthrough;
++	unsigned int passthrough_mark;
++};
++
++static void shiftfs_fill_inode(struct inode *inode, unsigned long ino,
++			       umode_t mode, dev_t dev, struct dentry *dentry);
++
++#define SHIFTFS_PASSTHROUGH_NONE 0
++#define SHIFTFS_PASSTHROUGH_STAT 1
++#define SHIFTFS_PASSTHROUGH_IOCTL 2
++#define SHIFTFS_PASSTHROUGH_ALL                                                \
++	(SHIFTFS_PASSTHROUGH_STAT | SHIFTFS_PASSTHROUGH_IOCTL)
++
++static inline bool shiftfs_passthrough_ioctls(struct shiftfs_super_info *info)
++{
++	if (!(info->passthrough & SHIFTFS_PASSTHROUGH_IOCTL))
++		return false;
++
++	return true;
++}
++
++static inline bool shiftfs_passthrough_statfs(struct shiftfs_super_info *info)
++{
++	if (!(info->passthrough & SHIFTFS_PASSTHROUGH_STAT))
++		return false;
++
++	return true;
++}
++
++enum {
++	OPT_MARK,
++	OPT_PASSTHROUGH,
++	OPT_LAST,
++};
++
++/* global filesystem options */
++static const match_table_t tokens = {
++	{ OPT_MARK, "mark" },
++	{ OPT_PASSTHROUGH, "passthrough=%u" },
++	{ OPT_LAST, NULL }
++};
++
++static const struct cred *shiftfs_override_creds(const struct super_block *sb)
++{
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++
++	return override_creds(sbinfo->creator_cred);
++}
++
++static inline void shiftfs_revert_object_creds(const struct cred *oldcred,
++					       struct cred *newcred)
++{
++	revert_creds(oldcred);
++	put_cred(newcred);
++}
++
++static kuid_t shift_kuid(struct user_namespace *from, struct user_namespace *to,
++			 kuid_t kuid)
++{
++	uid_t uid = from_kuid(from, kuid);
++	return make_kuid(to, uid);
++}
++
++static kgid_t shift_kgid(struct user_namespace *from, struct user_namespace *to,
++			 kgid_t kgid)
++{
++	gid_t gid = from_kgid(from, kgid);
++	return make_kgid(to, gid);
++}
++
++static int shiftfs_override_object_creds(const struct super_block *sb,
++					 const struct cred **oldcred,
++					 struct cred **newcred,
++					 struct dentry *dentry, umode_t mode,
++					 bool hardlink)
++{
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++	kuid_t fsuid = current_fsuid();
++	kgid_t fsgid = current_fsgid();
++
++	*oldcred = shiftfs_override_creds(sb);
++
++	*newcred = prepare_creds();
++	if (!*newcred) {
++		revert_creds(*oldcred);
++		return -ENOMEM;
++	}
++
++	(*newcred)->fsuid = shift_kuid(sb->s_user_ns, sbinfo->userns, fsuid);
++	(*newcred)->fsgid = shift_kgid(sb->s_user_ns, sbinfo->userns, fsgid);
++
++	if (!hardlink) {
++		int err = security_dentry_create_files_as(dentry, mode,
++							  &dentry->d_name,
++							  *oldcred, *newcred);
++		if (err) {
++			shiftfs_revert_object_creds(*oldcred, *newcred);
++			return err;
++		}
++	}
++
++	put_cred(override_creds(*newcred));
++	return 0;
++}
++
++static void shiftfs_copyattr(struct inode *from, struct inode *to)
++{
++	struct user_namespace *from_ns = from->i_sb->s_user_ns;
++	struct user_namespace *to_ns = to->i_sb->s_user_ns;
++
++	to->i_uid = shift_kuid(from_ns, to_ns, from->i_uid);
++	to->i_gid = shift_kgid(from_ns, to_ns, from->i_gid);
++	to->i_mode = from->i_mode;
++	to->i_atime = from->i_atime;
++	to->i_mtime = from->i_mtime;
++	to->i_ctime = from->i_ctime;
++	i_size_write(to, i_size_read(from));
++}
++
++static void shiftfs_copyflags(struct inode *from, struct inode *to)
++{
++	unsigned int mask = S_SYNC | S_IMMUTABLE | S_APPEND | S_NOATIME;
++
++	inode_set_flags(to, from->i_flags & mask, mask);
++}
++
++static void shiftfs_file_accessed(struct file *file)
++{
++	struct inode *upperi, *loweri;
++
++	if (file->f_flags & O_NOATIME)
++		return;
++
++	upperi = file_inode(file);
++	loweri = upperi->i_private;
++
++	if (!loweri)
++		return;
++
++	upperi->i_mtime = loweri->i_mtime;
++	upperi->i_ctime = loweri->i_ctime;
++
++	touch_atime(&file->f_path);
++}
++
++static int shiftfs_parse_mount_options(struct shiftfs_super_info *sbinfo,
++				       char *options)
++{
++	char *p;
++	substring_t args[MAX_OPT_ARGS];
++
++	sbinfo->mark = false;
++	sbinfo->passthrough = 0;
++
++	while ((p = strsep(&options, ",")) != NULL) {
++		int err, intarg, token;
++
++		if (!*p)
++			continue;
++
++		token = match_token(p, tokens, args);
++		switch (token) {
++		case OPT_MARK:
++			sbinfo->mark = true;
++			break;
++		case OPT_PASSTHROUGH:
++			err = match_int(&args[0], &intarg);
++			if (err)
++				return err;
++
++			if (intarg & ~SHIFTFS_PASSTHROUGH_ALL)
++				return -EINVAL;
++
++			sbinfo->passthrough = intarg;
++			break;
++		default:
++			return -EINVAL;
++		}
++	}
++
++	return 0;
++}
++
++static void shiftfs_d_release(struct dentry *dentry)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++
++	if (lowerd)
++		dput(lowerd);
++}
++
++static struct dentry *shiftfs_d_real(struct dentry *dentry,
++				     const struct inode *inode)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++
++	if (inode && d_inode(dentry) == inode)
++		return dentry;
++
++	lowerd = d_real(lowerd, inode);
++	if (lowerd && (!inode || inode == d_inode(lowerd)))
++		return lowerd;
++
++	WARN(1, "shiftfs_d_real(%pd4, %s:%lu): real dentry not found\n", dentry,
++	     inode ? inode->i_sb->s_id : "NULL", inode ? inode->i_ino : 0);
++	return dentry;
++}
++
++static int shiftfs_d_weak_revalidate(struct dentry *dentry, unsigned int flags)
++{
++	int err = 1;
++	struct dentry *lowerd = dentry->d_fsdata;
++
++	if (d_is_negative(lowerd) != d_is_negative(dentry))
++		return 0;
++
++	if ((lowerd->d_flags & DCACHE_OP_WEAK_REVALIDATE))
++		err = lowerd->d_op->d_weak_revalidate(lowerd, flags);
++
++	if (d_really_is_positive(dentry)) {
++		struct inode *inode = d_inode(dentry);
++		struct inode *loweri = d_inode(lowerd);
++
++		shiftfs_copyattr(loweri, inode);
++	}
++
++	return err;
++}
++
++static int shiftfs_d_revalidate(struct dentry *dentry, unsigned int flags)
++{
++	int err = 1;
++	struct dentry *lowerd = dentry->d_fsdata;
++
++	if (d_unhashed(lowerd) ||
++	    ((d_is_negative(lowerd) != d_is_negative(dentry))))
++		return 0;
++
++	if (flags & LOOKUP_RCU)
++		return -ECHILD;
++
++	if ((lowerd->d_flags & DCACHE_OP_REVALIDATE))
++		err = lowerd->d_op->d_revalidate(lowerd, flags);
++
++	if (d_really_is_positive(dentry)) {
++		struct inode *inode = d_inode(dentry);
++		struct inode *loweri = d_inode(lowerd);
++
++		shiftfs_copyattr(loweri, inode);
++	}
++
++	return err;
++}
++
++static const struct dentry_operations shiftfs_dentry_ops = {
++	.d_release	   = shiftfs_d_release,
++	.d_real		   = shiftfs_d_real,
++	.d_revalidate	   = shiftfs_d_revalidate,
++	.d_weak_revalidate = shiftfs_d_weak_revalidate,
++};
++
++static const char *shiftfs_get_link(struct dentry *dentry, struct inode *inode,
++				    struct delayed_call *done)
++{
++	const char *p;
++	const struct cred *oldcred;
++	struct dentry *lowerd;
++
++	/* RCU lookup not supported */
++	if (!dentry)
++		return ERR_PTR(-ECHILD);
++
++	lowerd = dentry->d_fsdata;
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	p = vfs_get_link(lowerd, done);
++	revert_creds(oldcred);
++
++	return p;
++}
++
++static int shiftfs_setxattr(struct dentry *dentry, struct inode *inode,
++			    const char *name, const void *value,
++			    size_t size, int flags)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++	int err;
++	const struct cred *oldcred;
++
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	err = vfs_setxattr(lowerd, name, value, size, flags);
++	revert_creds(oldcred);
++
++	shiftfs_copyattr(lowerd->d_inode, inode);
++
++	return err;
++}
++
++static int shiftfs_xattr_get(const struct xattr_handler *handler,
++			     struct dentry *dentry, struct inode *inode,
++			     const char *name, void *value, size_t size)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++	int err;
++	const struct cred *oldcred;
++
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	err = vfs_getxattr(lowerd, name, value, size);
++	revert_creds(oldcred);
++
++	return err;
++}
++
++static ssize_t shiftfs_listxattr(struct dentry *dentry, char *list,
++				 size_t size)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++	int err;
++	const struct cred *oldcred;
++
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	err = vfs_listxattr(lowerd, list, size);
++	revert_creds(oldcred);
++
++	return err;
++}
++
++static int shiftfs_removexattr(struct dentry *dentry, const char *name)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++	int err;
++	const struct cred *oldcred;
++
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	err = vfs_removexattr(lowerd, name);
++	revert_creds(oldcred);
++
++	/* update c/mtime */
++	shiftfs_copyattr(lowerd->d_inode, d_inode(dentry));
++
++	return err;
++}
++
++static int shiftfs_xattr_set(const struct xattr_handler *handler,
++			     struct dentry *dentry, struct inode *inode,
++			     const char *name, const void *value, size_t size,
++			     int flags)
++{
++	if (!value)
++		return shiftfs_removexattr(dentry, name);
++	return shiftfs_setxattr(dentry, inode, name, value, size, flags);
++}
++
++static int shiftfs_inode_test(struct inode *inode, void *data)
++{
++	return inode->i_private == data;
++}
++
++static int shiftfs_inode_set(struct inode *inode, void *data)
++{
++	inode->i_private = data;
++	return 0;
++}
++
++static int shiftfs_create_object(struct inode *diri, struct dentry *dentry,
++				 umode_t mode, const char *symlink,
++				 struct dentry *hardlink, bool excl)
++{
++	int err;
++	const struct cred *oldcred;
++	struct cred *newcred;
++	void *loweri_iop_ptr = NULL;
++	umode_t modei = mode;
++	struct super_block *dir_sb = diri->i_sb;
++	struct dentry *lowerd_new = dentry->d_fsdata;
++	struct inode *inode = NULL, *loweri_dir = diri->i_private;
++	const struct inode_operations *loweri_dir_iop = loweri_dir->i_op;
++	struct dentry *lowerd_link = NULL;
++
++	if (hardlink) {
++		loweri_iop_ptr = loweri_dir_iop->link;
++	} else {
++		switch (mode & S_IFMT) {
++		case S_IFDIR:
++			loweri_iop_ptr = loweri_dir_iop->mkdir;
++			break;
++		case S_IFREG:
++			loweri_iop_ptr = loweri_dir_iop->create;
++			break;
++		case S_IFLNK:
++			loweri_iop_ptr = loweri_dir_iop->symlink;
++			break;
++		case S_IFSOCK:
++			/* fall through */
++		case S_IFIFO:
++			loweri_iop_ptr = loweri_dir_iop->mknod;
++			break;
++		}
++	}
++	if (!loweri_iop_ptr) {
++		err = -EINVAL;
++		goto out_iput;
++	}
++
++	inode_lock_nested(loweri_dir, I_MUTEX_PARENT);
++
++	if (!hardlink) {
++		inode = new_inode(dir_sb);
++		if (!inode) {
++			err = -ENOMEM;
++			goto out_iput;
++		}
++
++		/*
++		 * new_inode() will have added the new inode to the super
++		 * block's list of inodes. Further below we will call
++		 * inode_insert5() Which would perform the same operation again
++		 * thereby corrupting the list. To avoid this raise I_CREATING
++		 * in i_state which will cause inode_insert5() to skip this
++		 * step. I_CREATING will be cleared by d_instantiate_new()
++		 * below.
++		 */
++		spin_lock(&inode->i_lock);
++		inode->i_state |= I_CREATING;
++		spin_unlock(&inode->i_lock);
++
++		inode_init_owner(inode, diri, mode);
++		modei = inode->i_mode;
++	}
++
++	err = shiftfs_override_object_creds(dentry->d_sb, &oldcred, &newcred,
++					    dentry, modei, hardlink != NULL);
++	if (err)
++		goto out_iput;
++
++	if (hardlink) {
++		lowerd_link = hardlink->d_fsdata;
++		err = vfs_link(lowerd_link, loweri_dir, lowerd_new, NULL);
++	} else {
++		switch (modei & S_IFMT) {
++		case S_IFDIR:
++			err = vfs_mkdir(loweri_dir, lowerd_new, modei);
++			break;
++		case S_IFREG:
++			err = vfs_create(loweri_dir, lowerd_new, modei, excl);
++			break;
++		case S_IFLNK:
++			err = vfs_symlink(loweri_dir, lowerd_new, symlink);
++			break;
++		case S_IFSOCK:
++			/* fall through */
++		case S_IFIFO:
++			err = vfs_mknod(loweri_dir, lowerd_new, modei, 0);
++			break;
++		default:
++			err = -EINVAL;
++			break;
++		}
++	}
++
++	shiftfs_revert_object_creds(oldcred, newcred);
++
++	if (!err && WARN_ON(!lowerd_new->d_inode))
++		err = -EIO;
++	if (err)
++		goto out_iput;
++
++	if (hardlink) {
++		inode = d_inode(hardlink);
++		ihold(inode);
++
++		/* copy up times from lower inode */
++		shiftfs_copyattr(d_inode(lowerd_link), inode);
++		set_nlink(d_inode(hardlink), d_inode(lowerd_link)->i_nlink);
++		d_instantiate(dentry, inode);
++	} else {
++		struct inode *inode_tmp;
++		struct inode *loweri_new = d_inode(lowerd_new);
++
++		inode_tmp = inode_insert5(inode, (unsigned long)loweri_new,
++					  shiftfs_inode_test, shiftfs_inode_set,
++					  loweri_new);
++		if (unlikely(inode_tmp != inode)) {
++			pr_err_ratelimited("shiftfs: newly created inode found in cache\n");
++			iput(inode_tmp);
++			err = -EINVAL;
++			goto out_iput;
++		}
++
++		ihold(loweri_new);
++		shiftfs_fill_inode(inode, loweri_new->i_ino, loweri_new->i_mode,
++				   0, lowerd_new);
++		d_instantiate_new(dentry, inode);
++	}
++
++	shiftfs_copyattr(loweri_dir, diri);
++	if (loweri_iop_ptr == loweri_dir_iop->mkdir)
++		set_nlink(diri, loweri_dir->i_nlink);
++
++	inode = NULL;
++
++out_iput:
++	iput(inode);
++	inode_unlock(loweri_dir);
++
++	return err;
++}
++
++static int shiftfs_create(struct inode *dir, struct dentry *dentry,
++			  umode_t mode,  bool excl)
++{
++	mode |= S_IFREG;
++
++	return shiftfs_create_object(dir, dentry, mode, NULL, NULL, excl);
++}
++
++static int shiftfs_mkdir(struct inode *dir, struct dentry *dentry,
++			 umode_t mode)
++{
++	mode |= S_IFDIR;
++
++	return shiftfs_create_object(dir, dentry, mode, NULL, NULL, false);
++}
++
++static int shiftfs_link(struct dentry *hardlink, struct inode *dir,
++			struct dentry *dentry)
++{
++	return shiftfs_create_object(dir, dentry, 0, NULL, hardlink, false);
++}
++
++static int shiftfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
++			 dev_t rdev)
++{
++	if (!S_ISFIFO(mode) && !S_ISSOCK(mode))
++		return -EPERM;
++
++	return shiftfs_create_object(dir, dentry, mode, NULL, NULL, false);
++}
++
++static int shiftfs_symlink(struct inode *dir, struct dentry *dentry,
++			   const char *symlink)
++{
++	return shiftfs_create_object(dir, dentry, S_IFLNK, symlink, NULL, false);
++}
++
++static int shiftfs_rm(struct inode *dir, struct dentry *dentry, bool rmdir)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++	struct inode *loweri = dir->i_private;
++	struct inode *inode = d_inode(dentry);
++	int err;
++	const struct cred *oldcred;
++
++	dget(lowerd);
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	inode_lock_nested(loweri, I_MUTEX_PARENT);
++	if (rmdir)
++		err = vfs_rmdir(loweri, lowerd);
++	else
++		err = vfs_unlink(loweri, lowerd, NULL);
++	revert_creds(oldcred);
++
++	if (!err) {
++		d_drop(dentry);
++
++		if (rmdir)
++			clear_nlink(inode);
++		else
++			drop_nlink(inode);
++	}
++	inode_unlock(loweri);
++
++	shiftfs_copyattr(loweri, dir);
++	dput(lowerd);
++
++	return err;
++}
++
++static int shiftfs_unlink(struct inode *dir, struct dentry *dentry)
++{
++	return shiftfs_rm(dir, dentry, false);
++}
++
++static int shiftfs_rmdir(struct inode *dir, struct dentry *dentry)
++{
++	return shiftfs_rm(dir, dentry, true);
++}
++
++static int shiftfs_rename(struct inode *olddir, struct dentry *old,
++			  struct inode *newdir, struct dentry *new,
++			  unsigned int flags)
++{
++	struct dentry *lowerd_dir_old = old->d_parent->d_fsdata,
++		      *lowerd_dir_new = new->d_parent->d_fsdata,
++		      *lowerd_old = old->d_fsdata, *lowerd_new = new->d_fsdata,
++		      *trapd;
++	struct inode *loweri_dir_old = lowerd_dir_old->d_inode,
++		     *loweri_dir_new = lowerd_dir_new->d_inode;
++	int err = -EINVAL;
++	const struct cred *oldcred;
++
++	trapd = lock_rename(lowerd_dir_new, lowerd_dir_old);
++
++	if (trapd == lowerd_old || trapd == lowerd_new)
++		goto out_unlock;
++
++	oldcred = shiftfs_override_creds(old->d_sb);
++	err = vfs_rename(loweri_dir_old, lowerd_old, loweri_dir_new, lowerd_new,
++			 NULL, flags);
++	revert_creds(oldcred);
++
++	shiftfs_copyattr(loweri_dir_old, olddir);
++	shiftfs_copyattr(loweri_dir_new, newdir);
++
++out_unlock:
++	unlock_rename(lowerd_dir_new, lowerd_dir_old);
++
++	return err;
++}
++
++static struct dentry *shiftfs_lookup(struct inode *dir, struct dentry *dentry,
++				     unsigned int flags)
++{
++	struct dentry *new;
++	struct inode *newi;
++	const struct cred *oldcred;
++	struct dentry *lowerd = dentry->d_parent->d_fsdata;
++	struct inode *inode = NULL, *loweri = lowerd->d_inode;
++
++	inode_lock(loweri);
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	new = lookup_one_len(dentry->d_name.name, lowerd, dentry->d_name.len);
++	revert_creds(oldcred);
++	inode_unlock(loweri);
++
++	if (IS_ERR(new))
++		return new;
++
++	dentry->d_fsdata = new;
++
++	newi = new->d_inode;
++	if (!newi)
++		goto out;
++
++	inode = iget5_locked(dentry->d_sb, (unsigned long)newi,
++			     shiftfs_inode_test, shiftfs_inode_set, newi);
++	if (!inode) {
++		dput(new);
++		return ERR_PTR(-ENOMEM);
++	}
++	if (inode->i_state & I_NEW) {
++		/*
++		 * inode->i_private set by shiftfs_inode_set(), but we still
++		 * need to take a reference
++		*/
++		ihold(newi);
++		shiftfs_fill_inode(inode, newi->i_ino, newi->i_mode, 0, new);
++		unlock_new_inode(inode);
++	}
++
++out:
++	return d_splice_alias(inode, dentry);
++}
++
++static int shiftfs_permission(struct inode *inode, int mask)
++{
++	int err;
++	const struct cred *oldcred;
++	struct inode *loweri = inode->i_private;
++
++	if (!loweri) {
++		WARN_ON(!(mask & MAY_NOT_BLOCK));
++		return -ECHILD;
++	}
++
++	err = generic_permission(inode, mask);
++	if (err)
++		return err;
++
++	oldcred = shiftfs_override_creds(inode->i_sb);
++	err = inode_permission(loweri, mask);
++	revert_creds(oldcred);
++
++	return err;
++}
++
++static int shiftfs_fiemap(struct inode *inode,
++			  struct fiemap_extent_info *fieinfo, u64 start,
++			  u64 len)
++{
++	int err;
++	const struct cred *oldcred;
++	struct inode *loweri = inode->i_private;
++
++	if (!loweri->i_op->fiemap)
++		return -EOPNOTSUPP;
++
++	oldcred = shiftfs_override_creds(inode->i_sb);
++	if (fieinfo->fi_flags & FIEMAP_FLAG_SYNC)
++		filemap_write_and_wait(loweri->i_mapping);
++	err = loweri->i_op->fiemap(loweri, fieinfo, start, len);
++	revert_creds(oldcred);
++
++	return err;
++}
++
++static int shiftfs_tmpfile(struct inode *dir, struct dentry *dentry,
++			   umode_t mode)
++{
++	int err;
++	const struct cred *oldcred;
++	struct dentry *lowerd = dentry->d_fsdata;
++	struct inode *loweri = dir->i_private;
++
++	if (!loweri->i_op->tmpfile)
++		return -EOPNOTSUPP;
++
++	oldcred = shiftfs_override_creds(dir->i_sb);
++	err = loweri->i_op->tmpfile(loweri, lowerd, mode);
++	revert_creds(oldcred);
++
++	return err;
++}
++
++static int shiftfs_setattr(struct dentry *dentry, struct iattr *attr)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++	struct inode *loweri = lowerd->d_inode;
++	struct iattr newattr;
++	const struct cred *oldcred;
++	struct super_block *sb = dentry->d_sb;
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++	int err;
++
++	err = setattr_prepare(dentry, attr);
++	if (err)
++		return err;
++
++	newattr = *attr;
++	newattr.ia_uid = shift_kuid(sb->s_user_ns, sbinfo->userns, attr->ia_uid);
++	newattr.ia_gid = shift_kgid(sb->s_user_ns, sbinfo->userns, attr->ia_gid);
++
++	/*
++	 * mode change is for clearing setuid/setgid bits. Allow lower fs
++	 * to interpret this in its own way.
++	 */
++	if (newattr.ia_valid & (ATTR_KILL_SUID|ATTR_KILL_SGID))
++		newattr.ia_valid &= ~ATTR_MODE;
++
++	inode_lock(loweri);
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	err = notify_change(lowerd, &newattr, NULL);
++	revert_creds(oldcred);
++	inode_unlock(loweri);
++
++	shiftfs_copyattr(loweri, d_inode(dentry));
++
++	return err;
++}
++
++static int shiftfs_getattr(const struct path *path, struct kstat *stat,
++			   u32 request_mask, unsigned int query_flags)
++{
++	struct inode *inode = path->dentry->d_inode;
++	struct dentry *lowerd = path->dentry->d_fsdata;
++	struct inode *loweri = lowerd->d_inode;
++	struct shiftfs_super_info *info = path->dentry->d_sb->s_fs_info;
++	struct path newpath = { .mnt = info->mnt, .dentry = lowerd };
++	struct user_namespace *from_ns = loweri->i_sb->s_user_ns;
++	struct user_namespace *to_ns = inode->i_sb->s_user_ns;
++	const struct cred *oldcred;
++	int err;
++
++	oldcred = shiftfs_override_creds(inode->i_sb);
++	err = vfs_getattr(&newpath, stat, request_mask, query_flags);
++	revert_creds(oldcred);
++
++	if (err)
++		return err;
++
++	/* transform the underlying id */
++	stat->uid = shift_kuid(from_ns, to_ns, stat->uid);
++	stat->gid = shift_kgid(from_ns, to_ns, stat->gid);
++	return 0;
++}
++
++#ifdef CONFIG_SHIFT_FS_POSIX_ACL
++
++static int
++shift_acl_ids(struct user_namespace *from, struct user_namespace *to,
++	      struct posix_acl *acl)
++{
++	int i;
++
++	for (i = 0; i < acl->a_count; i++) {
++		struct posix_acl_entry *e = &acl->a_entries[i];
++		switch(e->e_tag) {
++		case ACL_USER:
++			e->e_uid = shift_kuid(from, to, e->e_uid);
++			if (!uid_valid(e->e_uid))
++				return -EOVERFLOW;
++			break;
++		case ACL_GROUP:
++			e->e_gid = shift_kgid(from, to, e->e_gid);
++			if (!gid_valid(e->e_gid))
++				return -EOVERFLOW;
++			break;
++		}
++	}
++	return 0;
++}
++
++static void
++shift_acl_xattr_ids(struct user_namespace *from, struct user_namespace *to,
++		    void *value, size_t size)
++{
++	struct posix_acl_xattr_header *header = value;
++	struct posix_acl_xattr_entry *entry = (void *)(header + 1), *end;
++	int count;
++	kuid_t kuid;
++	kgid_t kgid;
++
++	if (!value)
++		return;
++	if (size < sizeof(struct posix_acl_xattr_header))
++		return;
++	if (header->a_version != cpu_to_le32(POSIX_ACL_XATTR_VERSION))
++		return;
++
++	count = posix_acl_xattr_count(size);
++	if (count < 0)
++		return;
++	if (count == 0)
++		return;
++
++	for (end = entry + count; entry != end; entry++) {
++		switch(le16_to_cpu(entry->e_tag)) {
++		case ACL_USER:
++			kuid = make_kuid(&init_user_ns, le32_to_cpu(entry->e_id));
++			kuid = shift_kuid(from, to, kuid);
++			entry->e_id = cpu_to_le32(from_kuid(&init_user_ns, kuid));
++			break;
++		case ACL_GROUP:
++			kgid = make_kgid(&init_user_ns, le32_to_cpu(entry->e_id));
++			kgid = shift_kgid(from, to, kgid);
++			entry->e_id = cpu_to_le32(from_kgid(&init_user_ns, kgid));
++			break;
++		default:
++			break;
++		}
++	}
++}
++
++static struct posix_acl *shiftfs_get_acl(struct inode *inode, int type)
++{
++	struct inode *loweri = inode->i_private;
++	const struct cred *oldcred;
++	struct posix_acl *lower_acl, *acl = NULL;
++	struct user_namespace *from_ns = loweri->i_sb->s_user_ns;
++	struct user_namespace *to_ns = inode->i_sb->s_user_ns;
++	int size;
++	int err;
++
++	if (!IS_POSIXACL(loweri))
++		return NULL;
++
++	oldcred = shiftfs_override_creds(inode->i_sb);
++	lower_acl = get_acl(loweri, type);
++	revert_creds(oldcred);
++
++	if (lower_acl && !IS_ERR(lower_acl)) {
++		/* XXX: export posix_acl_clone? */
++		size = sizeof(struct posix_acl) +
++		       lower_acl->a_count * sizeof(struct posix_acl_entry);
++		acl = kmemdup(lower_acl, size, GFP_KERNEL);
++		posix_acl_release(lower_acl);
++
++		if (!acl)
++			return ERR_PTR(-ENOMEM);
++
++		refcount_set(&acl->a_refcount, 1);
++
++		err = shift_acl_ids(from_ns, to_ns, acl);
++		if (err) {
++			kfree(acl);
++			return ERR_PTR(err);
++		}
++	}
++
++	return acl;
++}
++
++static int
++shiftfs_posix_acl_xattr_get(const struct xattr_handler *handler,
++			   struct dentry *dentry, struct inode *inode,
++			   const char *name, void *buffer, size_t size)
++{
++	struct inode *loweri = inode->i_private;
++	int ret;
++
++	ret = shiftfs_xattr_get(NULL, dentry, inode, handler->name,
++				buffer, size);
++	if (ret < 0)
++		return ret;
++
++	inode_lock(loweri);
++	shift_acl_xattr_ids(loweri->i_sb->s_user_ns, inode->i_sb->s_user_ns,
++			    buffer, size);
++	inode_unlock(loweri);
++	return ret;
++}
++
++static int
++shiftfs_posix_acl_xattr_set(const struct xattr_handler *handler,
++			    struct dentry *dentry, struct inode *inode,
++			    const char *name, const void *value,
++			    size_t size, int flags)
++{
++	struct inode *loweri = inode->i_private;
++	int err;
++
++	if (!IS_POSIXACL(loweri) || !loweri->i_op->set_acl)
++		return -EOPNOTSUPP;
++	if (handler->flags == ACL_TYPE_DEFAULT && !S_ISDIR(inode->i_mode))
++		return value ? -EACCES : 0;
++	if (!inode_owner_or_capable(inode))
++		return -EPERM;
++
++	if (value) {
++		shift_acl_xattr_ids(inode->i_sb->s_user_ns,
++				    loweri->i_sb->s_user_ns,
++				    (void *)value, size);
++		err = shiftfs_setxattr(dentry, inode, handler->name, value,
++				       size, flags);
++	} else {
++		err = shiftfs_removexattr(dentry, handler->name);
++	}
++
++	if (!err)
++		shiftfs_copyattr(loweri, inode);
++
++	return err;
++}
++
++static const struct xattr_handler
++shiftfs_posix_acl_access_xattr_handler = {
++	.name = XATTR_NAME_POSIX_ACL_ACCESS,
++	.flags = ACL_TYPE_ACCESS,
++	.get = shiftfs_posix_acl_xattr_get,
++	.set = shiftfs_posix_acl_xattr_set,
++};
++
++static const struct xattr_handler
++shiftfs_posix_acl_default_xattr_handler = {
++	.name = XATTR_NAME_POSIX_ACL_DEFAULT,
++	.flags = ACL_TYPE_DEFAULT,
++	.get = shiftfs_posix_acl_xattr_get,
++	.set = shiftfs_posix_acl_xattr_set,
++};
++
++#else /* !CONFIG_SHIFT_FS_POSIX_ACL */
++
++#define shiftfs_get_acl NULL
++
++#endif /* CONFIG_SHIFT_FS_POSIX_ACL */
++
++static const struct inode_operations shiftfs_dir_inode_operations = {
++	.lookup		= shiftfs_lookup,
++	.mkdir		= shiftfs_mkdir,
++	.symlink	= shiftfs_symlink,
++	.unlink		= shiftfs_unlink,
++	.rmdir		= shiftfs_rmdir,
++	.rename		= shiftfs_rename,
++	.link		= shiftfs_link,
++	.setattr	= shiftfs_setattr,
++	.create		= shiftfs_create,
++	.mknod		= shiftfs_mknod,
++	.permission	= shiftfs_permission,
++	.getattr	= shiftfs_getattr,
++	.listxattr	= shiftfs_listxattr,
++	.get_acl	= shiftfs_get_acl,
++};
++
++static const struct inode_operations shiftfs_file_inode_operations = {
++	.fiemap		= shiftfs_fiemap,
++	.getattr	= shiftfs_getattr,
++	.get_acl	= shiftfs_get_acl,
++	.listxattr	= shiftfs_listxattr,
++	.permission	= shiftfs_permission,
++	.setattr	= shiftfs_setattr,
++	.tmpfile	= shiftfs_tmpfile,
++};
++
++static const struct inode_operations shiftfs_special_inode_operations = {
++	.getattr	= shiftfs_getattr,
++	.get_acl	= shiftfs_get_acl,
++	.listxattr	= shiftfs_listxattr,
++	.permission	= shiftfs_permission,
++	.setattr	= shiftfs_setattr,
++};
++
++static const struct inode_operations shiftfs_symlink_inode_operations = {
++	.getattr	= shiftfs_getattr,
++	.get_link	= shiftfs_get_link,
++	.listxattr	= shiftfs_listxattr,
++	.setattr	= shiftfs_setattr,
++};
++
++static struct file *shiftfs_open_realfile(const struct file *file,
++					  struct inode *realinode)
++{
++	struct file *realfile;
++	const struct cred *old_cred;
++	struct inode *inode = file_inode(file);
++	struct dentry *lowerd = file->f_path.dentry->d_fsdata;
++	struct shiftfs_super_info *info = inode->i_sb->s_fs_info;
++	struct path realpath = { .mnt = info->mnt, .dentry = lowerd };
++
++	old_cred = shiftfs_override_creds(inode->i_sb);
++	realfile = open_with_fake_path(&realpath, file->f_flags, realinode,
++				       info->creator_cred);
++	revert_creds(old_cred);
++
++	return realfile;
++}
++
++#define SHIFTFS_SETFL_MASK (O_APPEND | O_NONBLOCK | O_NDELAY | O_DIRECT)
++
++static int shiftfs_change_flags(struct file *file, unsigned int flags)
++{
++	struct inode *inode = file_inode(file);
++	int err;
++
++	/* if some flag changed that cannot be changed then something's amiss */
++	if (WARN_ON((file->f_flags ^ flags) & ~SHIFTFS_SETFL_MASK))
++		return -EIO;
++
++	flags &= SHIFTFS_SETFL_MASK;
++
++	if (((flags ^ file->f_flags) & O_APPEND) && IS_APPEND(inode))
++		return -EPERM;
++
++	if (flags & O_DIRECT) {
++		if (!file->f_mapping->a_ops ||
++		    !file->f_mapping->a_ops->direct_IO)
++			return -EINVAL;
++	}
++
++	if (file->f_op->check_flags) {
++		err = file->f_op->check_flags(flags);
++		if (err)
++			return err;
++	}
++
++	spin_lock(&file->f_lock);
++	file->f_flags = (file->f_flags & ~SHIFTFS_SETFL_MASK) | flags;
++	spin_unlock(&file->f_lock);
++
++	return 0;
++}
++
++static int shiftfs_open(struct inode *inode, struct file *file)
++{
++	struct file *realfile;
++
++	realfile = shiftfs_open_realfile(file, inode->i_private);
++	if (IS_ERR(realfile))
++		return PTR_ERR(realfile);
++
++	file->private_data = realfile;
++	/* For O_DIRECT dentry_open() checks f_mapping->a_ops->direct_IO. */
++	file->f_mapping = realfile->f_mapping;
++
++	return 0;
++}
++
++static int shiftfs_dir_open(struct inode *inode, struct file *file)
++{
++	struct file *realfile;
++	const struct cred *oldcred;
++	struct dentry *lowerd = file->f_path.dentry->d_fsdata;
++	struct shiftfs_super_info *info = inode->i_sb->s_fs_info;
++	struct path realpath = { .mnt = info->mnt, .dentry = lowerd };
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	realfile = dentry_open(&realpath, file->f_flags | O_NOATIME,
++			       info->creator_cred);
++	revert_creds(oldcred);
++	if (IS_ERR(realfile))
++		return PTR_ERR(realfile);
++
++	file->private_data = realfile;
++
++	return 0;
++}
++
++static int shiftfs_release(struct inode *inode, struct file *file)
++{
++	struct file *realfile = file->private_data;
++
++	if (realfile)
++		fput(realfile);
++
++	return 0;
++}
++
++static int shiftfs_dir_release(struct inode *inode, struct file *file)
++{
++	return shiftfs_release(inode, file);
++}
++
++static loff_t shiftfs_dir_llseek(struct file *file, loff_t offset, int whence)
++{
++	struct file *realfile = file->private_data;
++
++	return vfs_llseek(realfile, offset, whence);
++}
++
++static loff_t shiftfs_file_llseek(struct file *file, loff_t offset, int whence)
++{
++	struct inode *realinode = file_inode(file)->i_private;
++
++	return generic_file_llseek_size(file, offset, whence,
++					realinode->i_sb->s_maxbytes,
++					i_size_read(realinode));
++}
++
++/* XXX: Need to figure out what to to about atime updates, maybe other
++ * timestamps too ... ref. ovl_file_accessed() */
++
++static rwf_t shiftfs_iocb_to_rwf(struct kiocb *iocb)
++{
++	int ifl = iocb->ki_flags;
++	rwf_t flags = 0;
++
++	if (ifl & IOCB_NOWAIT)
++		flags |= RWF_NOWAIT;
++	if (ifl & IOCB_HIPRI)
++		flags |= RWF_HIPRI;
++	if (ifl & IOCB_DSYNC)
++		flags |= RWF_DSYNC;
++	if (ifl & IOCB_SYNC)
++		flags |= RWF_SYNC;
++
++	return flags;
++}
++
++static int shiftfs_real_fdget(const struct file *file, struct fd *lowerfd)
++{
++	struct file *realfile;
++
++	if (file->f_op->open != shiftfs_open &&
++	    file->f_op->open != shiftfs_dir_open)
++		return -EINVAL;
++
++	realfile = file->private_data;
++	lowerfd->flags = 0;
++	lowerfd->file = realfile;
++
++	/* Did the flags change since open? */
++	if (unlikely(file->f_flags & ~lowerfd->file->f_flags))
++		return shiftfs_change_flags(lowerfd->file, file->f_flags);
++
++	return 0;
++}
++
++static ssize_t shiftfs_read_iter(struct kiocb *iocb, struct iov_iter *iter)
++{
++	struct file *file = iocb->ki_filp;
++	struct fd lowerfd;
++	const struct cred *oldcred;
++	ssize_t ret;
++
++	if (!iov_iter_count(iter))
++		return 0;
++
++	ret = shiftfs_real_fdget(file, &lowerfd);
++	if (ret)
++		return ret;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	ret = vfs_iter_read(lowerfd.file, iter, &iocb->ki_pos,
++			    shiftfs_iocb_to_rwf(iocb));
++	revert_creds(oldcred);
++
++	shiftfs_file_accessed(file);
++
++	fdput(lowerfd);
++	return ret;
++}
++
++static ssize_t shiftfs_write_iter(struct kiocb *iocb, struct iov_iter *iter)
++{
++	struct file *file = iocb->ki_filp;
++	struct inode *inode = file_inode(file);
++	struct fd lowerfd;
++	const struct cred *oldcred;
++	ssize_t ret;
++
++	if (!iov_iter_count(iter))
++		return 0;
++
++	inode_lock(inode);
++	/* Update mode */
++	shiftfs_copyattr(inode->i_private, inode);
++	ret = file_remove_privs(file);
++	if (ret)
++		goto out_unlock;
++
++	ret = shiftfs_real_fdget(file, &lowerfd);
++	if (ret)
++		goto out_unlock;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	file_start_write(lowerfd.file);
++	ret = vfs_iter_write(lowerfd.file, iter, &iocb->ki_pos,
++			     shiftfs_iocb_to_rwf(iocb));
++	file_end_write(lowerfd.file);
++	revert_creds(oldcred);
++
++	/* Update size */
++	shiftfs_copyattr(inode->i_private, inode);
++
++	fdput(lowerfd);
++
++out_unlock:
++	inode_unlock(inode);
++	return ret;
++}
++
++static int shiftfs_fsync(struct file *file, loff_t start, loff_t end,
++			 int datasync)
++{
++	struct fd lowerfd;
++	const struct cred *oldcred;
++	int ret;
++
++	ret = shiftfs_real_fdget(file, &lowerfd);
++	if (ret)
++		return ret;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	ret = vfs_fsync_range(lowerfd.file, start, end, datasync);
++	revert_creds(oldcred);
++
++	fdput(lowerfd);
++	return ret;
++}
++
++static int shiftfs_mmap(struct file *file, struct vm_area_struct *vma)
++{
++	struct file *realfile = file->private_data;
++	const struct cred *oldcred;
++	int ret;
++
++	if (!realfile->f_op->mmap)
++		return -ENODEV;
++
++	if (WARN_ON(file != vma->vm_file))
++		return -EIO;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	vma->vm_file = get_file(realfile);
++	ret = call_mmap(vma->vm_file, vma);
++	revert_creds(oldcred);
++
++	shiftfs_file_accessed(file);
++
++	if (ret) {
++		/*
++		 * Drop refcount from new vm_file value and restore original
++		 * vm_file value
++		 */
++		vma->vm_file = file;
++		fput(realfile);
++	} else {
++		/* Drop refcount from previous vm_file value */
++		fput(file);
++	}
++
++	return ret;
++}
++
++static long shiftfs_fallocate(struct file *file, int mode, loff_t offset,
++			      loff_t len)
++{
++	struct inode *inode = file_inode(file);
++	struct inode *loweri = inode->i_private;
++	struct fd lowerfd;
++	const struct cred *oldcred;
++	int ret;
++
++	ret = shiftfs_real_fdget(file, &lowerfd);
++	if (ret)
++		return ret;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	ret = vfs_fallocate(lowerfd.file, mode, offset, len);
++	revert_creds(oldcred);
++
++	/* Update size */
++	shiftfs_copyattr(loweri, inode);
++
++	fdput(lowerfd);
++	return ret;
++}
++
++static int shiftfs_fadvise(struct file *file, loff_t offset, loff_t len,
++			   int advice)
++{
++	struct fd lowerfd;
++	const struct cred *oldcred;
++	int ret;
++
++	ret = shiftfs_real_fdget(file, &lowerfd);
++	if (ret)
++		return ret;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	ret = vfs_fadvise(lowerfd.file, offset, len, advice);
++	revert_creds(oldcred);
++
++	fdput(lowerfd);
++	return ret;
++}
++
++static int shiftfs_override_ioctl_creds(int cmd, const struct super_block *sb,
++					const struct cred **oldcred,
++					struct cred **newcred)
++{
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++	kuid_t fsuid = current_fsuid();
++	kgid_t fsgid = current_fsgid();
++
++	*oldcred = shiftfs_override_creds(sb);
++
++	*newcred = prepare_creds();
++	if (!*newcred) {
++		revert_creds(*oldcred);
++		return -ENOMEM;
++	}
++
++	(*newcred)->fsuid = shift_kuid(sb->s_user_ns, sbinfo->userns, fsuid);
++	(*newcred)->fsgid = shift_kgid(sb->s_user_ns, sbinfo->userns, fsgid);
++
++	/* clear all caps to prevent bypassing capable() checks */
++	cap_clear((*newcred)->cap_bset);
++	cap_clear((*newcred)->cap_effective);
++	cap_clear((*newcred)->cap_inheritable);
++	cap_clear((*newcred)->cap_permitted);
++
++	if (cmd == BTRFS_IOC_SNAP_DESTROY) {
++		kuid_t kuid_root = make_kuid(sb->s_user_ns, 0);
++		/*
++		 * Allow the root user in the container to remove subvolumes
++		 * from other users.
++		 */
++		if (uid_valid(kuid_root) && uid_eq(fsuid, kuid_root))
++			cap_raise((*newcred)->cap_effective, CAP_DAC_OVERRIDE);
++	}
++
++	put_cred(override_creds(*newcred));
++	return 0;
++}
++
++static inline void shiftfs_revert_ioctl_creds(const struct cred *oldcred,
++					      struct cred *newcred)
++{
++	return shiftfs_revert_object_creds(oldcred, newcred);
++}
++
++static inline bool is_btrfs_snap_ioctl(int cmd)
++{
++	if ((cmd == BTRFS_IOC_SNAP_CREATE) || (cmd == BTRFS_IOC_SNAP_CREATE_V2))
++		return true;
++
++	return false;
++}
++
++static int shiftfs_btrfs_ioctl_fd_restore(int cmd, int fd, void __user *arg,
++					  struct btrfs_ioctl_vol_args *v1,
++					  struct btrfs_ioctl_vol_args_v2 *v2)
++{
++	int ret;
++
++	if (!is_btrfs_snap_ioctl(cmd))
++		return 0;
++
++	if (cmd == BTRFS_IOC_SNAP_CREATE)
++		ret = copy_to_user(arg, v1, sizeof(*v1));
++	else
++		ret = copy_to_user(arg, v2, sizeof(*v2));
++
++	__close_fd(current->files, fd);
++	kfree(v1);
++	kfree(v2);
++
++	return ret;
++}
++
++static int shiftfs_btrfs_ioctl_fd_replace(int cmd, void __user *arg,
++					  struct btrfs_ioctl_vol_args **b1,
++					  struct btrfs_ioctl_vol_args_v2 **b2,
++					  int *newfd)
++{
++	int oldfd, ret;
++	struct fd src;
++	struct fd lfd = {};
++	struct btrfs_ioctl_vol_args *v1 = NULL;
++	struct btrfs_ioctl_vol_args_v2 *v2 = NULL;
++
++	if (!is_btrfs_snap_ioctl(cmd))
++		return 0;
++
++	if (cmd == BTRFS_IOC_SNAP_CREATE) {
++		v1 = memdup_user(arg, sizeof(*v1));
++		if (IS_ERR(v1))
++			return PTR_ERR(v1);
++		oldfd = v1->fd;
++		*b1 = v1;
++	} else {
++		v2 = memdup_user(arg, sizeof(*v2));
++		if (IS_ERR(v2))
++			return PTR_ERR(v2);
++		oldfd = v2->fd;
++		*b2 = v2;
++	}
++
++	src = fdget(oldfd);
++	if (!src.file)
++		return -EINVAL;
++
++	ret = shiftfs_real_fdget(src.file, &lfd);
++	if (ret) {
++		fdput(src);
++		return ret;
++	}
++
++	/*
++	 * shiftfs_real_fdget() does not take a reference to lfd.file, so
++	 * take a reference here to offset the one which will be put by
++	 * __close_fd(), and make sure that reference is put on fdput(lfd).
++	 */
++	get_file(lfd.file);
++	lfd.flags |= FDPUT_FPUT;
++	fdput(src);
++
++	*newfd = get_unused_fd_flags(lfd.file->f_flags);
++	if (*newfd < 0) {
++		fdput(lfd);
++		return *newfd;
++	}
++
++	fd_install(*newfd, lfd.file);
++
++	if (cmd == BTRFS_IOC_SNAP_CREATE) {
++		v1->fd = *newfd;
++		ret = copy_to_user(arg, v1, sizeof(*v1));
++		v1->fd = oldfd;
++	} else {
++		v2->fd = *newfd;
++		ret = copy_to_user(arg, v2, sizeof(*v2));
++		v2->fd = oldfd;
++	}
++
++	if (ret)
++		shiftfs_btrfs_ioctl_fd_restore(cmd, *newfd, arg, v1, v2);
++
++	return ret;
++}
++
++static long shiftfs_real_ioctl(struct file *file, unsigned int cmd,
++			       unsigned long arg)
++{
++	struct fd lowerfd;
++	struct cred *newcred;
++	const struct cred *oldcred;
++	int newfd = -EBADF;
++	long err = 0, ret = 0;
++	void __user *argp = (void __user *)arg;
++	struct super_block *sb = file->f_path.dentry->d_sb;
++	struct btrfs_ioctl_vol_args *btrfs_v1 = NULL;
++	struct btrfs_ioctl_vol_args_v2 *btrfs_v2 = NULL;
++
++	ret = shiftfs_btrfs_ioctl_fd_replace(cmd, argp, &btrfs_v1, &btrfs_v2,
++					     &newfd);
++	if (ret < 0)
++		return ret;
++
++	ret = shiftfs_real_fdget(file, &lowerfd);
++	if (ret)
++		goto out_restore;
++
++	ret = shiftfs_override_ioctl_creds(cmd, sb, &oldcred, &newcred);
++	if (ret)
++		goto out_fdput;
++
++	ret = vfs_ioctl(lowerfd.file, cmd, arg);
++
++	shiftfs_revert_ioctl_creds(oldcred, newcred);
++
++	shiftfs_copyattr(file_inode(lowerfd.file), file_inode(file));
++	shiftfs_copyflags(file_inode(lowerfd.file), file_inode(file));
++
++out_fdput:
++	fdput(lowerfd);
++
++out_restore:
++	err = shiftfs_btrfs_ioctl_fd_restore(cmd, newfd, argp,
++					     btrfs_v1, btrfs_v2);
++	if (!ret)
++		ret = err;
++
++	return ret;
++}
++
++static bool in_ioctl_whitelist(int flag, unsigned long arg)
++{
++	void __user *argp = (void __user *)arg;
++	u64 flags = 0;
++
++	switch (flag) {
++	case BTRFS_IOC_FS_INFO:
++		return true;
++	case BTRFS_IOC_SNAP_CREATE:
++		return true;
++	case BTRFS_IOC_SNAP_CREATE_V2:
++		return true;
++	case BTRFS_IOC_SUBVOL_CREATE:
++		return true;
++	case BTRFS_IOC_SUBVOL_CREATE_V2:
++		return true;
++	case BTRFS_IOC_SUBVOL_GETFLAGS:
++		return true;
++	case BTRFS_IOC_SUBVOL_SETFLAGS:
++		if (copy_from_user(&flags, argp, sizeof(flags)))
++			return false;
++
++		if (flags & ~BTRFS_SUBVOL_RDONLY)
++			return false;
++
++		return true;
++	case BTRFS_IOC_SNAP_DESTROY:
++		return true;
++	}
++
++	return false;
++}
++
++static long shiftfs_ioctl(struct file *file, unsigned int cmd,
++			  unsigned long arg)
++{
++	switch (cmd) {
++	case FS_IOC_GETVERSION:
++		/* fall through */
++	case FS_IOC_GETFLAGS:
++		/* fall through */
++	case FS_IOC_SETFLAGS:
++		break;
++	default:
++		if (!in_ioctl_whitelist(cmd, arg) ||
++		    !shiftfs_passthrough_ioctls(file->f_path.dentry->d_sb->s_fs_info))
++			return -ENOTTY;
++	}
++
++	return shiftfs_real_ioctl(file, cmd, arg);
++}
++
++static long shiftfs_compat_ioctl(struct file *file, unsigned int cmd,
++				 unsigned long arg)
++{
++	switch (cmd) {
++	case FS_IOC32_GETVERSION:
++		/* fall through */
++	case FS_IOC32_GETFLAGS:
++		/* fall through */
++	case FS_IOC32_SETFLAGS:
++		break;
++	default:
++		if (!in_ioctl_whitelist(cmd, arg) ||
++		    !shiftfs_passthrough_ioctls(file->f_path.dentry->d_sb->s_fs_info))
++			return -ENOIOCTLCMD;
++	}
++
++	return shiftfs_real_ioctl(file, cmd, arg);
++}
++
++enum shiftfs_copyop {
++	SHIFTFS_COPY,
++	SHIFTFS_CLONE,
++	SHIFTFS_DEDUPE,
++};
++
++static ssize_t shiftfs_copyfile(struct file *file_in, loff_t pos_in,
++				struct file *file_out, loff_t pos_out, u64 len,
++				unsigned int flags, enum shiftfs_copyop op)
++{
++	ssize_t ret;
++	struct fd real_in, real_out;
++	const struct cred *oldcred;
++	struct inode *inode_out = file_inode(file_out);
++	struct inode *loweri = inode_out->i_private;
++
++	ret = shiftfs_real_fdget(file_out, &real_out);
++	if (ret)
++		return ret;
++
++	ret = shiftfs_real_fdget(file_in, &real_in);
++	if (ret) {
++		fdput(real_out);
++		return ret;
++	}
++
++	oldcred = shiftfs_override_creds(inode_out->i_sb);
++	switch (op) {
++	case SHIFTFS_COPY:
++		ret = vfs_copy_file_range(real_in.file, pos_in, real_out.file,
++					  pos_out, len, flags);
++		break;
++
++	case SHIFTFS_CLONE:
++		ret = vfs_clone_file_range(real_in.file, pos_in, real_out.file,
++					   pos_out, len, flags);
++		break;
++
++	case SHIFTFS_DEDUPE:
++		ret = vfs_dedupe_file_range_one(real_in.file, pos_in,
++						real_out.file, pos_out, len,
++						flags);
++		break;
++	}
++	revert_creds(oldcred);
++
++	/* Update size */
++	shiftfs_copyattr(loweri, inode_out);
++
++	fdput(real_in);
++	fdput(real_out);
++
++	return ret;
++}
++
++static ssize_t shiftfs_copy_file_range(struct file *file_in, loff_t pos_in,
++				       struct file *file_out, loff_t pos_out,
++				       size_t len, unsigned int flags)
++{
++	return shiftfs_copyfile(file_in, pos_in, file_out, pos_out, len, flags,
++				SHIFTFS_COPY);
++}
++
++static loff_t shiftfs_remap_file_range(struct file *file_in, loff_t pos_in,
++				       struct file *file_out, loff_t pos_out,
++				       loff_t len, unsigned int remap_flags)
++{
++	enum shiftfs_copyop op;
++
++	if (remap_flags & ~(REMAP_FILE_DEDUP | REMAP_FILE_ADVISORY))
++		return -EINVAL;
++
++	if (remap_flags & REMAP_FILE_DEDUP)
++		op = SHIFTFS_DEDUPE;
++	else
++		op = SHIFTFS_CLONE;
++
++	return shiftfs_copyfile(file_in, pos_in, file_out, pos_out, len,
++				remap_flags, op);
++}
++
++static int shiftfs_iterate_shared(struct file *file, struct dir_context *ctx)
++{
++	const struct cred *oldcred;
++	int err = -ENOTDIR;
++	struct file *realfile = file->private_data;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	err = iterate_dir(realfile, ctx);
++	revert_creds(oldcred);
++
++	return err;
++}
++
++const struct file_operations shiftfs_file_operations = {
++	.open			= shiftfs_open,
++	.release		= shiftfs_release,
++	.llseek			= shiftfs_file_llseek,
++	.read_iter		= shiftfs_read_iter,
++	.write_iter		= shiftfs_write_iter,
++	.fsync			= shiftfs_fsync,
++	.mmap			= shiftfs_mmap,
++	.fallocate		= shiftfs_fallocate,
++	.fadvise		= shiftfs_fadvise,
++	.unlocked_ioctl		= shiftfs_ioctl,
++	.compat_ioctl		= shiftfs_compat_ioctl,
++	.copy_file_range	= shiftfs_copy_file_range,
++	.remap_file_range	= shiftfs_remap_file_range,
++};
++
++const struct file_operations shiftfs_dir_operations = {
++	.open			= shiftfs_dir_open,
++	.release		= shiftfs_dir_release,
++	.compat_ioctl		= shiftfs_compat_ioctl,
++	.fsync			= shiftfs_fsync,
++	.iterate_shared		= shiftfs_iterate_shared,
++	.llseek			= shiftfs_dir_llseek,
++	.read			= generic_read_dir,
++	.unlocked_ioctl		= shiftfs_ioctl,
++};
++
++static const struct address_space_operations shiftfs_aops = {
++	/* For O_DIRECT dentry_open() checks f_mapping->a_ops->direct_IO */
++	.direct_IO	= noop_direct_IO,
++};
++
++static void shiftfs_fill_inode(struct inode *inode, unsigned long ino,
++			       umode_t mode, dev_t dev, struct dentry *dentry)
++{
++	struct inode *loweri;
++
++	inode->i_ino = ino;
++	inode->i_flags |= S_NOCMTIME;
++
++	mode &= S_IFMT;
++	inode->i_mode = mode;
++	switch (mode & S_IFMT) {
++	case S_IFDIR:
++		inode->i_op = &shiftfs_dir_inode_operations;
++		inode->i_fop = &shiftfs_dir_operations;
++		break;
++	case S_IFLNK:
++		inode->i_op = &shiftfs_symlink_inode_operations;
++		break;
++	case S_IFREG:
++		inode->i_op = &shiftfs_file_inode_operations;
++		inode->i_fop = &shiftfs_file_operations;
++		inode->i_mapping->a_ops = &shiftfs_aops;
++		break;
++	default:
++		inode->i_op = &shiftfs_special_inode_operations;
++		init_special_inode(inode, mode, dev);
++		break;
++	}
++
++	if (!dentry)
++		return;
++
++	loweri = dentry->d_inode;
++	if (!loweri->i_op->get_link)
++		inode->i_opflags |= IOP_NOFOLLOW;
++
++	shiftfs_copyattr(loweri, inode);
++	shiftfs_copyflags(loweri, inode);
++	set_nlink(inode, loweri->i_nlink);
++}
++
++static int shiftfs_show_options(struct seq_file *m, struct dentry *dentry)
++{
++	struct super_block *sb = dentry->d_sb;
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++
++	if (sbinfo->mark)
++		seq_show_option(m, "mark", NULL);
++
++	if (sbinfo->passthrough)
++		seq_printf(m, ",passthrough=%u", sbinfo->passthrough);
++
++	return 0;
++}
++
++static int shiftfs_statfs(struct dentry *dentry, struct kstatfs *buf)
++{
++	struct super_block *sb = dentry->d_sb;
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++	struct dentry *root = sb->s_root;
++	struct dentry *realroot = root->d_fsdata;
++	struct path realpath = { .mnt = sbinfo->mnt, .dentry = realroot };
++	int err;
++
++	err = vfs_statfs(&realpath, buf);
++	if (err)
++		return err;
++
++	if (!shiftfs_passthrough_statfs(sbinfo))
++		buf->f_type = sb->s_magic;
++
++	return 0;
++}
++
++static void shiftfs_evict_inode(struct inode *inode)
++{
++	struct inode *loweri = inode->i_private;
++
++	clear_inode(inode);
++
++	if (loweri)
++		iput(loweri);
++}
++
++static void shiftfs_put_super(struct super_block *sb)
++{
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++
++	if (sbinfo) {
++		mntput(sbinfo->mnt);
++		put_cred(sbinfo->creator_cred);
++		kfree(sbinfo);
++	}
++}
++
++static const struct xattr_handler shiftfs_xattr_handler = {
++	.prefix = "",
++	.get    = shiftfs_xattr_get,
++	.set    = shiftfs_xattr_set,
++};
++
++const struct xattr_handler *shiftfs_xattr_handlers[] = {
++#ifdef CONFIG_SHIFT_FS_POSIX_ACL
++	&shiftfs_posix_acl_access_xattr_handler,
++	&shiftfs_posix_acl_default_xattr_handler,
++#endif
++	&shiftfs_xattr_handler,
++	NULL
++};
++
++static inline bool passthrough_is_subset(int old_flags, int new_flags)
++{
++	if ((new_flags & old_flags) != new_flags)
++		return false;
++
++	return true;
++}
++
++static int shiftfs_super_check_flags(unsigned long old_flags,
++				     unsigned long new_flags)
++{
++	if ((old_flags & SB_RDONLY) && !(new_flags & SB_RDONLY))
++		return -EPERM;
++
++	if ((old_flags & SB_NOSUID) && !(new_flags & SB_NOSUID))
++		return -EPERM;
++
++	if ((old_flags & SB_NODEV) && !(new_flags & SB_NODEV))
++		return -EPERM;
++
++	if ((old_flags & SB_NOEXEC) && !(new_flags & SB_NOEXEC))
++		return -EPERM;
++
++	if ((old_flags & SB_NOATIME) && !(new_flags & SB_NOATIME))
++		return -EPERM;
++
++	if ((old_flags & SB_NODIRATIME) && !(new_flags & SB_NODIRATIME))
++		return -EPERM;
++
++	if (!(old_flags & SB_POSIXACL) && (new_flags & SB_POSIXACL))
++		return -EPERM;
++
++	return 0;
++}
++
++static int shiftfs_remount(struct super_block *sb, int *flags, char *data)
++{
++	int err;
++	struct shiftfs_super_info new = {};
++	struct shiftfs_super_info *info = sb->s_fs_info;
++
++	err = shiftfs_parse_mount_options(&new, data);
++	if (err)
++		return err;
++
++	err = shiftfs_super_check_flags(sb->s_flags, *flags);
++	if (err)
++		return err;
++
++	/* Mark mount option cannot be changed. */
++	if (info->mark || (info->mark != new.mark))
++		return -EPERM;
++
++	if (info->passthrough != new.passthrough) {
++		/* Don't allow exceeding passthrough options of mark mount. */
++		if (!passthrough_is_subset(info->passthrough_mark,
++					   info->passthrough))
++			return -EPERM;
++
++		info->passthrough = new.passthrough;
++	}
++
++	return 0;
++}
++
++static const struct super_operations shiftfs_super_ops = {
++	.put_super	= shiftfs_put_super,
++	.show_options	= shiftfs_show_options,
++	.statfs		= shiftfs_statfs,
++	.remount_fs	= shiftfs_remount,
++	.evict_inode	= shiftfs_evict_inode,
++};
++
++struct shiftfs_data {
++	void *data;
++	const char *path;
++};
++
++static void shiftfs_super_force_flags(struct super_block *sb,
++				      unsigned long lower_flags)
++{
++	sb->s_flags |= lower_flags & (SB_RDONLY | SB_NOSUID | SB_NODEV |
++				      SB_NOEXEC | SB_NOATIME | SB_NODIRATIME);
++
++	if (!(lower_flags & SB_POSIXACL))
++		sb->s_flags &= ~SB_POSIXACL;
++}
++
++static int shiftfs_fill_super(struct super_block *sb, void *raw_data,
++			      int silent)
++{
++	int err;
++	struct path path = {};
++	struct shiftfs_super_info *sbinfo_mp;
++	char *name = NULL;
++	struct inode *inode = NULL;
++	struct dentry *dentry = NULL;
++	struct shiftfs_data *data = raw_data;
++	struct shiftfs_super_info *sbinfo = NULL;
++
++	if (!data->path)
++		return -EINVAL;
++
++	sb->s_fs_info = kzalloc(sizeof(*sbinfo), GFP_KERNEL);
++	if (!sb->s_fs_info)
++		return -ENOMEM;
++	sbinfo = sb->s_fs_info;
++
++	err = shiftfs_parse_mount_options(sbinfo, data->data);
++	if (err)
++		return err;
++
++	/* to mount a mark, must be userns admin */
++	if (!sbinfo->mark && !ns_capable(current_user_ns(), CAP_SYS_ADMIN))
++		return -EPERM;
++
++	name = kstrdup(data->path, GFP_KERNEL);
++	if (!name)
++		return -ENOMEM;
++
++	err = kern_path(name, LOOKUP_FOLLOW, &path);
++	if (err)
++		goto out_free_name;
++
++	if (!S_ISDIR(path.dentry->d_inode->i_mode)) {
++		err = -ENOTDIR;
++		goto out_put_path;
++	}
++
++	sb->s_flags |= SB_POSIXACL;
++
++	if (sbinfo->mark) {
++		struct cred *cred_tmp;
++		struct super_block *lower_sb = path.mnt->mnt_sb;
++
++		/* to mark a mount point, must root wrt lower s_user_ns */
++		if (!ns_capable(lower_sb->s_user_ns, CAP_SYS_ADMIN)) {
++			err = -EPERM;
++			goto out_put_path;
++		}
++
++		/*
++		 * this part is visible unshifted, so make sure no
++		 * executables that could be used to give suid
++		 * privileges
++		 */
++		sb->s_iflags = SB_I_NOEXEC;
++
++		shiftfs_super_force_flags(sb, lower_sb->s_flags);
++
++		/*
++		 * Handle nesting of shiftfs mounts by referring this mark
++		 * mount back to the original mark mount. This is more
++		 * efficient and alleviates concerns about stack depth.
++		 */
++		if (lower_sb->s_magic == SHIFTFS_MAGIC) {
++			sbinfo_mp = lower_sb->s_fs_info;
++
++			/* Doesn't make sense to mark a mark mount */
++			if (sbinfo_mp->mark) {
++				err = -EINVAL;
++				goto out_put_path;
++			}
++
++			if (!passthrough_is_subset(sbinfo_mp->passthrough,
++						   sbinfo->passthrough)) {
++				err = -EPERM;
++				goto out_put_path;
++			}
++
++			sbinfo->mnt = mntget(sbinfo_mp->mnt);
++			dentry = dget(path.dentry->d_fsdata);
++			/*
++			 * Copy up the passthrough mount options from the
++			 * parent mark mountpoint.
++			 */
++			sbinfo->passthrough_mark = sbinfo_mp->passthrough_mark;
++			sbinfo->creator_cred = get_cred(sbinfo_mp->creator_cred);
++		} else {
++			sbinfo->mnt = mntget(path.mnt);
++			dentry = dget(path.dentry);
++			/*
++			 * For a new mark passthrough_mark and passthrough
++			 * are identical.
++			 */
++			sbinfo->passthrough_mark = sbinfo->passthrough;
++
++			cred_tmp = prepare_creds();
++			if (!cred_tmp) {
++				err = -ENOMEM;
++				goto out_put_path;
++			}
++			/* Don't override disk quota limits or use reserved space. */
++			cap_lower(cred_tmp->cap_effective, CAP_SYS_RESOURCE);
++			sbinfo->creator_cred = cred_tmp;
++		}
++	} else {
++		/*
++		 * This leg executes if we're admin capable in the namespace,
++		 * so be very careful.
++		 */
++		err = -EPERM;
++		if (path.dentry->d_sb->s_magic != SHIFTFS_MAGIC)
++			goto out_put_path;
++
++		sbinfo_mp = path.dentry->d_sb->s_fs_info;
++		if (!sbinfo_mp->mark)
++			goto out_put_path;
++
++		if (!passthrough_is_subset(sbinfo_mp->passthrough,
++					   sbinfo->passthrough))
++			goto out_put_path;
++
++		sbinfo->mnt = mntget(sbinfo_mp->mnt);
++		sbinfo->creator_cred = get_cred(sbinfo_mp->creator_cred);
++		dentry = dget(path.dentry->d_fsdata);
++		/*
++		 * Copy up passthrough settings from mark mountpoint so we can
++		 * verify when the overlay wants to remount with different
++		 * passthrough settings.
++		 */
++		sbinfo->passthrough_mark = sbinfo_mp->passthrough;
++		shiftfs_super_force_flags(sb, path.mnt->mnt_sb->s_flags);
++	}
++
++	sb->s_stack_depth = dentry->d_sb->s_stack_depth + 1;
++	if (sb->s_stack_depth > FILESYSTEM_MAX_STACK_DEPTH) {
++		printk(KERN_ERR "shiftfs: maximum stacking depth exceeded\n");
++		err = -EINVAL;
++		goto out_put_path;
++	}
++
++	inode = new_inode(sb);
++	if (!inode) {
++		err = -ENOMEM;
++		goto out_put_path;
++	}
++	shiftfs_fill_inode(inode, dentry->d_inode->i_ino, S_IFDIR, 0, dentry);
++
++	ihold(dentry->d_inode);
++	inode->i_private = dentry->d_inode;
++
++	sb->s_magic = SHIFTFS_MAGIC;
++	sb->s_maxbytes = MAX_LFS_FILESIZE;
++	sb->s_op = &shiftfs_super_ops;
++	sb->s_xattr = shiftfs_xattr_handlers;
++	sb->s_d_op = &shiftfs_dentry_ops;
++	sb->s_root = d_make_root(inode);
++	if (!sb->s_root) {
++		err = -ENOMEM;
++		goto out_put_path;
++	}
++
++	sb->s_root->d_fsdata = dentry;
++	sbinfo->userns = get_user_ns(dentry->d_sb->s_user_ns);
++	shiftfs_copyattr(dentry->d_inode, sb->s_root->d_inode);
++
++	dentry = NULL;
++	err = 0;
++
++out_put_path:
++	path_put(&path);
++
++out_free_name:
++	kfree(name);
++
++	dput(dentry);
++
++	return err;
++}
++
++static struct dentry *shiftfs_mount(struct file_system_type *fs_type,
++				    int flags, const char *dev_name, void *data)
++{
++	struct shiftfs_data d = { data, dev_name };
++
++	return mount_nodev(fs_type, flags, &d, shiftfs_fill_super);
++}
++
++static struct file_system_type shiftfs_type = {
++	.owner		= THIS_MODULE,
++	.name		= "shiftfs",
++	.mount		= shiftfs_mount,
++	.kill_sb	= kill_anon_super,
++	.fs_flags	= FS_USERNS_MOUNT,
++};
++
++static int __init shiftfs_init(void)
++{
++	return register_filesystem(&shiftfs_type);
++}
++
++static void __exit shiftfs_exit(void)
++{
++	unregister_filesystem(&shiftfs_type);
++}
++
++MODULE_ALIAS_FS("shiftfs");
++MODULE_AUTHOR("James Bottomley");
++MODULE_AUTHOR("Seth Forshee <seth.forshee@canonical.com>");
++MODULE_AUTHOR("Christian Brauner <christian.brauner@ubuntu.com>");
++MODULE_DESCRIPTION("id shifting filesystem");
++MODULE_LICENSE("GPL v2");
++module_init(shiftfs_init)
++module_exit(shiftfs_exit)
+--- a/include/uapi/linux/magic.h	2021-01-06 19:08:45.234777659 -0500
++++ b/include/uapi/linux/magic.h	2021-01-06 19:09:53.900375394 -0500
+@@ -96,4 +96,6 @@
+ #define DEVMEM_MAGIC		0x454d444d	/* "DMEM" */
+ #define Z3FOLD_MAGIC		0x33
+ 
++#define SHIFTFS_MAGIC         0x6a656a62
++
+ #endif /* __LINUX_MAGIC_H__ */
+--- a/fs/Makefile	2021-01-08 18:08:28.187064015 -0500
++++ b/fs/Makefile	2021-01-08 18:09:00.788217579 -0500
+@@ -136,3 +136,4 @@ obj-$(CONFIG_EFIVAR_FS)		+= efivarfs/
+ obj-$(CONFIG_EROFS_FS)		+= erofs/
+ obj-$(CONFIG_VBOXSF_FS)		+= vboxsf/
+ obj-$(CONFIG_ZONEFS_FS)		+= zonefs/
++obj-$(CONFIG_SHIFT_FS)    += shiftfs.o
+--- a/fs/Kconfig	2021-01-06 19:14:17.709697891 -0500
++++ b/fs/Kconfig	2021-01-06 19:15:23.413281282 -0500
+@@ -122,6 +122,24 @@ source "fs/autofs/Kconfig"
+ source "fs/fuse/Kconfig"
+ source "fs/overlayfs/Kconfig"
+ 
++config SHIFT_FS
++  tristate "UID/GID shifting overlay filesystem for containers"
++  help
++    This filesystem can overlay any mounted filesystem and shift
++    the uid/gid the files appear at.  The idea is that
++    unprivileged containers can use this to mount root volumes
++    using this technique.
++
++config SHIFT_FS_POSIX_ACL
++  bool "shiftfs POSIX Access Control Lists"
++  depends on SHIFT_FS
++  select FS_POSIX_ACL
++  help
++    POSIX Access Control Lists (ACLs) support permissions for users and
++    groups beyond the owner/group/world scheme.
++
++    If you don't know what Access Control Lists are, say N.
++
+ menu "Caches"
+ 
+ source "fs/fscache/Kconfig"


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-01-09 17:58 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-01-09 17:58 UTC (permalink / raw
  To: gentoo-commits

commit:     6fb3f2bc4509c9c6cba565cc2c20b0fff199e0a9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jan  9 17:57:37 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jan  9 17:57:37 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6fb3f2bc

Linux patch 5.10.6

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1005_linux-5.10.6.patch | 1736 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1740 insertions(+)

diff --git a/0000_README b/0000_README
index 2fb3d39..4881039 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-5.10.5.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.5
 
+Patch:  1005_linux-5.10.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-5.10.6.patch b/1005_linux-5.10.6.patch
new file mode 100644
index 0000000..f3e7f57
--- /dev/null
+++ b/1005_linux-5.10.6.patch
@@ -0,0 +1,1736 @@
+diff --git a/Documentation/devicetree/bindings/rtc/rtc.yaml b/Documentation/devicetree/bindings/rtc/rtc.yaml
+index 8acd2de3de3ad..d30dc045aac64 100644
+--- a/Documentation/devicetree/bindings/rtc/rtc.yaml
++++ b/Documentation/devicetree/bindings/rtc/rtc.yaml
+@@ -63,6 +63,11 @@ properties:
+     description:
+       Enables wake up of host system on alarm.
+ 
++  reset-source:
++    $ref: /schemas/types.yaml#/definitions/flag
++    description:
++      The RTC is able to reset the machine.
++
+ additionalProperties: true
+ 
+ ...
+diff --git a/Makefile b/Makefile
+index bb431fd473d2c..2b3f0d06b0054 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 30c6b9edddb50..0f7749e9424d4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2278,8 +2278,7 @@ void amdgpu_dm_update_connector_after_detect(
+ 
+ 			drm_connector_update_edid_property(connector,
+ 							   aconnector->edid);
+-			aconnector->num_modes = drm_add_edid_modes(connector, aconnector->edid);
+-			drm_connector_list_update(connector);
++			drm_add_edid_modes(connector, aconnector->edid);
+ 
+ 			if (aconnector->dc_link->aux_mode)
+ 				drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
+diff --git a/drivers/gpu/drm/i915/display/intel_dpll_mgr.c b/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
+index e08684e34078a..91b37b76618d2 100644
+--- a/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
++++ b/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
+@@ -2622,11 +2622,22 @@ static bool cnl_ddi_hdmi_pll_dividers(struct intel_crtc_state *crtc_state)
+ 	return true;
+ }
+ 
++/*
++ * Display WA #22010492432: tgl
++ * Program half of the nominal DCO divider fraction value.
++ */
++static bool
++tgl_combo_pll_div_frac_wa_needed(struct drm_i915_private *i915)
++{
++	return IS_TIGERLAKE(i915) && i915->dpll.ref_clks.nssc == 38400;
++}
++
+ static int __cnl_ddi_wrpll_get_freq(struct drm_i915_private *dev_priv,
+ 				    const struct intel_shared_dpll *pll,
+ 				    int ref_clock)
+ {
+ 	const struct intel_dpll_hw_state *pll_state = &pll->state.hw_state;
++	u32 dco_fraction;
+ 	u32 p0, p1, p2, dco_freq;
+ 
+ 	p0 = pll_state->cfgcr1 & DPLL_CFGCR1_PDIV_MASK;
+@@ -2669,8 +2680,13 @@ static int __cnl_ddi_wrpll_get_freq(struct drm_i915_private *dev_priv,
+ 	dco_freq = (pll_state->cfgcr0 & DPLL_CFGCR0_DCO_INTEGER_MASK) *
+ 		   ref_clock;
+ 
+-	dco_freq += (((pll_state->cfgcr0 & DPLL_CFGCR0_DCO_FRACTION_MASK) >>
+-		      DPLL_CFGCR0_DCO_FRACTION_SHIFT) * ref_clock) / 0x8000;
++	dco_fraction = (pll_state->cfgcr0 & DPLL_CFGCR0_DCO_FRACTION_MASK) >>
++		       DPLL_CFGCR0_DCO_FRACTION_SHIFT;
++
++	if (tgl_combo_pll_div_frac_wa_needed(dev_priv))
++		dco_fraction *= 2;
++
++	dco_freq += (dco_fraction * ref_clock) / 0x8000;
+ 
+ 	if (drm_WARN_ON(&dev_priv->drm, p0 == 0 || p1 == 0 || p2 == 0))
+ 		return 0;
+@@ -2948,16 +2964,6 @@ static const struct skl_wrpll_params tgl_tbt_pll_24MHz_values = {
+ 	/* the following params are unused */
+ };
+ 
+-/*
+- * Display WA #22010492432: tgl
+- * Divide the nominal .dco_fraction value by 2.
+- */
+-static const struct skl_wrpll_params tgl_tbt_pll_38_4MHz_values = {
+-	.dco_integer = 0x54, .dco_fraction = 0x1800,
+-	/* the following params are unused */
+-	.pdiv = 0, .kdiv = 0, .qdiv_mode = 0, .qdiv_ratio = 0,
+-};
+-
+ static bool icl_calc_dp_combo_pll(struct intel_crtc_state *crtc_state,
+ 				  struct skl_wrpll_params *pll_params)
+ {
+@@ -2991,14 +2997,12 @@ static bool icl_calc_tbt_pll(struct intel_crtc_state *crtc_state,
+ 			MISSING_CASE(dev_priv->dpll.ref_clks.nssc);
+ 			fallthrough;
+ 		case 19200:
++		case 38400:
+ 			*pll_params = tgl_tbt_pll_19_2MHz_values;
+ 			break;
+ 		case 24000:
+ 			*pll_params = tgl_tbt_pll_24MHz_values;
+ 			break;
+-		case 38400:
+-			*pll_params = tgl_tbt_pll_38_4MHz_values;
+-			break;
+ 		}
+ 	} else {
+ 		switch (dev_priv->dpll.ref_clks.nssc) {
+@@ -3065,9 +3069,14 @@ static void icl_calc_dpll_state(struct drm_i915_private *i915,
+ 				const struct skl_wrpll_params *pll_params,
+ 				struct intel_dpll_hw_state *pll_state)
+ {
++	u32 dco_fraction = pll_params->dco_fraction;
++
+ 	memset(pll_state, 0, sizeof(*pll_state));
+ 
+-	pll_state->cfgcr0 = DPLL_CFGCR0_DCO_FRACTION(pll_params->dco_fraction) |
++	if (tgl_combo_pll_div_frac_wa_needed(i915))
++		dco_fraction = DIV_ROUND_CLOSEST(dco_fraction, 2);
++
++	pll_state->cfgcr0 = DPLL_CFGCR0_DCO_FRACTION(dco_fraction) |
+ 			    pll_params->dco_integer;
+ 
+ 	pll_state->cfgcr1 = DPLL_CFGCR1_QDIV_RATIO(pll_params->qdiv_ratio) |
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index 4a041511b70ec..76b9c436edcd2 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -1177,25 +1177,6 @@ out:
+ 	return ret;
+ }
+ 
+-static void setup_dma_device(struct ib_device *device,
+-			     struct device *dma_device)
+-{
+-	/*
+-	 * If the caller does not provide a DMA capable device then the IB
+-	 * device will be used. In this case the caller should fully setup the
+-	 * ibdev for DMA. This usually means using dma_virt_ops.
+-	 */
+-#ifdef CONFIG_DMA_VIRT_OPS
+-	if (!dma_device) {
+-		device->dev.dma_ops = &dma_virt_ops;
+-		dma_device = &device->dev;
+-	}
+-#endif
+-	WARN_ON(!dma_device);
+-	device->dma_device = dma_device;
+-	WARN_ON(!device->dma_device->dma_parms);
+-}
+-
+ /*
+  * setup_device() allocates memory and sets up data that requires calling the
+  * device ops, this is the only reason these actions are not done during
+@@ -1341,7 +1322,14 @@ int ib_register_device(struct ib_device *device, const char *name,
+ 	if (ret)
+ 		return ret;
+ 
+-	setup_dma_device(device, dma_device);
++	/*
++	 * If the caller does not provide a DMA capable device then the IB core
++	 * will set up ib_sge and scatterlist structures that stash the kernel
++	 * virtual address into the address field.
++	 */
++	WARN_ON(dma_device && !dma_device->dma_parms);
++	device->dma_device = dma_device;
++
+ 	ret = setup_device(device);
+ 	if (ret)
+ 		return ret;
+@@ -2676,6 +2664,21 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
+ }
+ EXPORT_SYMBOL(ib_set_device_ops);
+ 
++#ifdef CONFIG_INFINIBAND_VIRT_DMA
++int ib_dma_virt_map_sg(struct ib_device *dev, struct scatterlist *sg, int nents)
++{
++	struct scatterlist *s;
++	int i;
++
++	for_each_sg(sg, s, nents, i) {
++		sg_dma_address(s) = (uintptr_t)sg_virt(s);
++		sg_dma_len(s) = s->length;
++	}
++	return nents;
++}
++EXPORT_SYMBOL(ib_dma_virt_map_sg);
++#endif /* CONFIG_INFINIBAND_VIRT_DMA */
++
+ static const struct rdma_nl_cbs ibnl_ls_cb_table[RDMA_NL_LS_NUM_OPS] = {
+ 	[RDMA_NL_LS_OP_RESOLVE] = {
+ 		.doit = ib_nl_handle_resolve_resp,
+diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
+index 13f43ab7220b0..a96030b784eb2 100644
+--- a/drivers/infiniband/core/rw.c
++++ b/drivers/infiniband/core/rw.c
+@@ -285,8 +285,11 @@ static void rdma_rw_unmap_sg(struct ib_device *dev, struct scatterlist *sg,
+ static int rdma_rw_map_sg(struct ib_device *dev, struct scatterlist *sg,
+ 			  u32 sg_cnt, enum dma_data_direction dir)
+ {
+-	if (is_pci_p2pdma_page(sg_page(sg)))
++	if (is_pci_p2pdma_page(sg_page(sg))) {
++		if (WARN_ON_ONCE(ib_uses_virt_dma(dev)))
++			return 0;
+ 		return pci_p2pdma_map_sg(dev->dma_device, sg, sg_cnt, dir);
++	}
+ 	return ib_dma_map_sg(dev, sg, sg_cnt, dir);
+ }
+ 
+diff --git a/drivers/infiniband/sw/rdmavt/Kconfig b/drivers/infiniband/sw/rdmavt/Kconfig
+index c8e268082952b..0df48b3a6b56c 100644
+--- a/drivers/infiniband/sw/rdmavt/Kconfig
++++ b/drivers/infiniband/sw/rdmavt/Kconfig
+@@ -4,6 +4,5 @@ config INFINIBAND_RDMAVT
+ 	depends on INFINIBAND_VIRT_DMA
+ 	depends on X86_64
+ 	depends on PCI
+-	select DMA_VIRT_OPS
+ 	help
+ 	This is a common software verbs provider for RDMA networks.
+diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c
+index 8490fdb9c91e5..90fc234f489ac 100644
+--- a/drivers/infiniband/sw/rdmavt/mr.c
++++ b/drivers/infiniband/sw/rdmavt/mr.c
+@@ -324,8 +324,6 @@ static void __rvt_free_mr(struct rvt_mr *mr)
+  * @acc: access flags
+  *
+  * Return: the memory region on success, otherwise returns an errno.
+- * Note that all DMA addresses should be created via the functions in
+- * struct dma_virt_ops.
+  */
+ struct ib_mr *rvt_get_dma_mr(struct ib_pd *pd, int acc)
+ {
+@@ -766,7 +764,7 @@ int rvt_lkey_ok(struct rvt_lkey_table *rkt, struct rvt_pd *pd,
+ 
+ 	/*
+ 	 * We use LKEY == zero for kernel virtual addresses
+-	 * (see rvt_get_dma_mr() and dma_virt_ops).
++	 * (see rvt_get_dma_mr()).
+ 	 */
+ 	if (sge->lkey == 0) {
+ 		struct rvt_dev_info *dev = ib_to_rvt(pd->ibpd.device);
+@@ -877,7 +875,7 @@ int rvt_rkey_ok(struct rvt_qp *qp, struct rvt_sge *sge,
+ 
+ 	/*
+ 	 * We use RKEY == zero for kernel virtual addresses
+-	 * (see rvt_get_dma_mr() and dma_virt_ops).
++	 * (see rvt_get_dma_mr()).
+ 	 */
+ 	rcu_read_lock();
+ 	if (rkey == 0) {
+diff --git a/drivers/infiniband/sw/rdmavt/vt.c b/drivers/infiniband/sw/rdmavt/vt.c
+index 670a9623b46e1..d1bbe66610cfe 100644
+--- a/drivers/infiniband/sw/rdmavt/vt.c
++++ b/drivers/infiniband/sw/rdmavt/vt.c
+@@ -524,7 +524,6 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb)
+ int rvt_register_device(struct rvt_dev_info *rdi)
+ {
+ 	int ret = 0, i;
+-	u64 dma_mask;
+ 
+ 	if (!rdi)
+ 		return -EINVAL;
+@@ -579,13 +578,6 @@ int rvt_register_device(struct rvt_dev_info *rdi)
+ 	/* Completion queues */
+ 	spin_lock_init(&rdi->n_cqs_lock);
+ 
+-	/* DMA Operations */
+-	rdi->ibdev.dev.dma_parms = rdi->ibdev.dev.parent->dma_parms;
+-	dma_mask = IS_ENABLED(CONFIG_64BIT) ? DMA_BIT_MASK(64) : DMA_BIT_MASK(32);
+-	ret = dma_coerce_mask_and_coherent(&rdi->ibdev.dev, dma_mask);
+-	if (ret)
+-		goto bail_wss;
+-
+ 	/* Protection Domain */
+ 	spin_lock_init(&rdi->n_pds_lock);
+ 	rdi->n_pds_allocated = 0;
+diff --git a/drivers/infiniband/sw/rxe/Kconfig b/drivers/infiniband/sw/rxe/Kconfig
+index 8810bfa680495..4521490667925 100644
+--- a/drivers/infiniband/sw/rxe/Kconfig
++++ b/drivers/infiniband/sw/rxe/Kconfig
+@@ -5,7 +5,6 @@ config RDMA_RXE
+ 	depends on INFINIBAND_VIRT_DMA
+ 	select NET_UDP_TUNNEL
+ 	select CRYPTO_CRC32
+-	select DMA_VIRT_OPS
+ 	help
+ 	This driver implements the InfiniBand RDMA transport over
+ 	the Linux network stack. It enables a system with a
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index 34bef7d8e6b41..943914c2a50c7 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -20,18 +20,6 @@
+ 
+ static struct rxe_recv_sockets recv_sockets;
+ 
+-struct device *rxe_dma_device(struct rxe_dev *rxe)
+-{
+-	struct net_device *ndev;
+-
+-	ndev = rxe->ndev;
+-
+-	if (is_vlan_dev(ndev))
+-		ndev = vlan_dev_real_dev(ndev);
+-
+-	return ndev->dev.parent;
+-}
+-
+ int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid)
+ {
+ 	int err;
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index f9c832e82552f..512868c230238 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -1118,23 +1118,15 @@ int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name)
+ 	int err;
+ 	struct ib_device *dev = &rxe->ib_dev;
+ 	struct crypto_shash *tfm;
+-	u64 dma_mask;
+ 
+ 	strlcpy(dev->node_desc, "rxe", sizeof(dev->node_desc));
+ 
+ 	dev->node_type = RDMA_NODE_IB_CA;
+ 	dev->phys_port_cnt = 1;
+ 	dev->num_comp_vectors = num_possible_cpus();
+-	dev->dev.parent = rxe_dma_device(rxe);
+ 	dev->local_dma_lkey = 0;
+ 	addrconf_addr_eui48((unsigned char *)&dev->node_guid,
+ 			    rxe->ndev->dev_addr);
+-	dev->dev.dma_parms = &rxe->dma_parms;
+-	dma_set_max_seg_size(&dev->dev, UINT_MAX);
+-	dma_mask = IS_ENABLED(CONFIG_64BIT) ? DMA_BIT_MASK(64) : DMA_BIT_MASK(32);
+-	err = dma_coerce_mask_and_coherent(&dev->dev, dma_mask);
+-	if (err)
+-		return err;
+ 
+ 	dev->uverbs_cmd_mask = BIT_ULL(IB_USER_VERBS_CMD_GET_CONTEXT)
+ 	    | BIT_ULL(IB_USER_VERBS_CMD_CREATE_COMP_CHANNEL)
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index 3414b341b7091..4bf5d85a1ab3c 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -352,7 +352,6 @@ struct rxe_port {
+ struct rxe_dev {
+ 	struct ib_device	ib_dev;
+ 	struct ib_device_attr	attr;
+-	struct device_dma_parameters dma_parms;
+ 	int			max_ucontext;
+ 	int			max_inline_data;
+ 	struct mutex	usdev_lock;
+diff --git a/drivers/infiniband/sw/siw/Kconfig b/drivers/infiniband/sw/siw/Kconfig
+index 3450ba5081df5..1b5105cbabaee 100644
+--- a/drivers/infiniband/sw/siw/Kconfig
++++ b/drivers/infiniband/sw/siw/Kconfig
+@@ -2,7 +2,6 @@ config RDMA_SIW
+ 	tristate "Software RDMA over TCP/IP (iWARP) driver"
+ 	depends on INET && INFINIBAND && LIBCRC32C
+ 	depends on INFINIBAND_VIRT_DMA
+-	select DMA_VIRT_OPS
+ 	help
+ 	This driver implements the iWARP RDMA transport over
+ 	the Linux TCP/IP network stack. It enables a system with a
+diff --git a/drivers/infiniband/sw/siw/siw.h b/drivers/infiniband/sw/siw/siw.h
+index e9753831ac3f3..adda789962196 100644
+--- a/drivers/infiniband/sw/siw/siw.h
++++ b/drivers/infiniband/sw/siw/siw.h
+@@ -69,7 +69,6 @@ struct siw_pd {
+ 
+ struct siw_device {
+ 	struct ib_device base_dev;
+-	struct device_dma_parameters dma_parms;
+ 	struct net_device *netdev;
+ 	struct siw_dev_cap attrs;
+ 
+diff --git a/drivers/infiniband/sw/siw/siw_main.c b/drivers/infiniband/sw/siw/siw_main.c
+index 181e06c1c43d7..9d152e198a59b 100644
+--- a/drivers/infiniband/sw/siw/siw_main.c
++++ b/drivers/infiniband/sw/siw/siw_main.c
+@@ -305,25 +305,8 @@ static struct siw_device *siw_device_create(struct net_device *netdev)
+ {
+ 	struct siw_device *sdev = NULL;
+ 	struct ib_device *base_dev;
+-	struct device *parent = netdev->dev.parent;
+-	u64 dma_mask;
+ 	int rv;
+ 
+-	if (!parent) {
+-		/*
+-		 * The loopback device has no parent device,
+-		 * so it appears as a top-level device. To support
+-		 * loopback device connectivity, take this device
+-		 * as the parent device. Skip all other devices
+-		 * w/o parent device.
+-		 */
+-		if (netdev->type != ARPHRD_LOOPBACK) {
+-			pr_warn("siw: device %s error: no parent device\n",
+-				netdev->name);
+-			return NULL;
+-		}
+-		parent = &netdev->dev;
+-	}
+ 	sdev = ib_alloc_device(siw_device, base_dev);
+ 	if (!sdev)
+ 		return NULL;
+@@ -382,13 +365,6 @@ static struct siw_device *siw_device_create(struct net_device *netdev)
+ 	 * per physical port.
+ 	 */
+ 	base_dev->phys_port_cnt = 1;
+-	base_dev->dev.parent = parent;
+-	base_dev->dev.dma_parms = &sdev->dma_parms;
+-	dma_set_max_seg_size(&base_dev->dev, UINT_MAX);
+-	dma_mask = IS_ENABLED(CONFIG_64BIT) ? DMA_BIT_MASK(64) : DMA_BIT_MASK(32);
+-	if (dma_coerce_mask_and_coherent(&base_dev->dev, dma_mask))
+-		goto error;
+-
+ 	base_dev->num_comp_vectors = num_possible_cpus();
+ 
+ 	xa_init_flags(&sdev->qp_xa, XA_FLAGS_ALLOC1);
+@@ -430,7 +406,7 @@ static struct siw_device *siw_device_create(struct net_device *netdev)
+ 	atomic_set(&sdev->num_mr, 0);
+ 	atomic_set(&sdev->num_pd, 0);
+ 
+-	sdev->numa_node = dev_to_node(parent);
++	sdev->numa_node = dev_to_node(&netdev->dev);
+ 	spin_lock_init(&sdev->lock);
+ 
+ 	return sdev;
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index 7900571fc85b3..c352217946455 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -318,10 +318,6 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand,
+ 		buf += ret;
+ 	}
+ 
+-	if (req->ooblen)
+-		memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs,
+-		       req->ooblen);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/marvell/mwifiex/join.c b/drivers/net/wireless/marvell/mwifiex/join.c
+index 5934f71475477..173ccf79cbfcc 100644
+--- a/drivers/net/wireless/marvell/mwifiex/join.c
++++ b/drivers/net/wireless/marvell/mwifiex/join.c
+@@ -877,6 +877,8 @@ mwifiex_cmd_802_11_ad_hoc_start(struct mwifiex_private *priv,
+ 
+ 	memset(adhoc_start->ssid, 0, IEEE80211_MAX_SSID_LEN);
+ 
++	if (req_ssid->ssid_len > IEEE80211_MAX_SSID_LEN)
++		req_ssid->ssid_len = IEEE80211_MAX_SSID_LEN;
+ 	memcpy(adhoc_start->ssid, req_ssid->ssid, req_ssid->ssid_len);
+ 
+ 	mwifiex_dbg(adapter, INFO, "info: ADHOC_S_CMD: SSID = %s\n",
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index ae6620489457d..5c1e7cb7fe0de 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -414,7 +414,8 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
+ 	if (ib_dma_mapping_error(ndev->device, r->send_sge.addr))
+ 		goto out_free_rsp;
+ 
+-	r->req.p2p_client = &ndev->device->dev;
++	if (!ib_uses_virt_dma(ndev->device))
++		r->req.p2p_client = &ndev->device->dev;
+ 	r->send_sge.length = sizeof(*r->req.cqe);
+ 	r->send_sge.lkey = ndev->pd->local_dma_lkey;
+ 
+diff --git a/drivers/rtc/rtc-pcf2127.c b/drivers/rtc/rtc-pcf2127.c
+index 4d9711d51f8f3..f0a6861ff3aef 100644
+--- a/drivers/rtc/rtc-pcf2127.c
++++ b/drivers/rtc/rtc-pcf2127.c
+@@ -331,6 +331,37 @@ static const struct watchdog_ops pcf2127_watchdog_ops = {
+ 	.set_timeout = pcf2127_wdt_set_timeout,
+ };
+ 
++static int pcf2127_watchdog_init(struct device *dev, struct pcf2127 *pcf2127)
++{
++	u32 wdd_timeout;
++	int ret;
++
++	if (!IS_ENABLED(CONFIG_WATCHDOG) ||
++	    !device_property_read_bool(dev, "reset-source"))
++		return 0;
++
++	pcf2127->wdd.parent = dev;
++	pcf2127->wdd.info = &pcf2127_wdt_info;
++	pcf2127->wdd.ops = &pcf2127_watchdog_ops;
++	pcf2127->wdd.min_timeout = PCF2127_WD_VAL_MIN;
++	pcf2127->wdd.max_timeout = PCF2127_WD_VAL_MAX;
++	pcf2127->wdd.timeout = PCF2127_WD_VAL_DEFAULT;
++	pcf2127->wdd.min_hw_heartbeat_ms = 500;
++	pcf2127->wdd.status = WATCHDOG_NOWAYOUT_INIT_STATUS;
++
++	watchdog_set_drvdata(&pcf2127->wdd, pcf2127);
++
++	/* Test if watchdog timer is started by bootloader */
++	ret = regmap_read(pcf2127->regmap, PCF2127_REG_WD_VAL, &wdd_timeout);
++	if (ret)
++		return ret;
++
++	if (wdd_timeout)
++		set_bit(WDOG_HW_RUNNING, &pcf2127->wdd.status);
++
++	return devm_watchdog_register_device(dev, &pcf2127->wdd);
++}
++
+ /* Alarm */
+ static int pcf2127_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm)
+ {
+@@ -532,7 +563,6 @@ static int pcf2127_probe(struct device *dev, struct regmap *regmap,
+ 			 int alarm_irq, const char *name, bool has_nvmem)
+ {
+ 	struct pcf2127 *pcf2127;
+-	u32 wdd_timeout;
+ 	int ret = 0;
+ 
+ 	dev_dbg(dev, "%s\n", __func__);
+@@ -571,17 +601,6 @@ static int pcf2127_probe(struct device *dev, struct regmap *regmap,
+ 		pcf2127->rtc->ops = &pcf2127_rtc_alrm_ops;
+ 	}
+ 
+-	pcf2127->wdd.parent = dev;
+-	pcf2127->wdd.info = &pcf2127_wdt_info;
+-	pcf2127->wdd.ops = &pcf2127_watchdog_ops;
+-	pcf2127->wdd.min_timeout = PCF2127_WD_VAL_MIN;
+-	pcf2127->wdd.max_timeout = PCF2127_WD_VAL_MAX;
+-	pcf2127->wdd.timeout = PCF2127_WD_VAL_DEFAULT;
+-	pcf2127->wdd.min_hw_heartbeat_ms = 500;
+-	pcf2127->wdd.status = WATCHDOG_NOWAYOUT_INIT_STATUS;
+-
+-	watchdog_set_drvdata(&pcf2127->wdd, pcf2127);
+-
+ 	if (has_nvmem) {
+ 		struct nvmem_config nvmem_cfg = {
+ 			.priv = pcf2127,
+@@ -611,19 +630,7 @@ static int pcf2127_probe(struct device *dev, struct regmap *regmap,
+ 		return ret;
+ 	}
+ 
+-	/* Test if watchdog timer is started by bootloader */
+-	ret = regmap_read(pcf2127->regmap, PCF2127_REG_WD_VAL, &wdd_timeout);
+-	if (ret)
+-		return ret;
+-
+-	if (wdd_timeout)
+-		set_bit(WDOG_HW_RUNNING, &pcf2127->wdd.status);
+-
+-#ifdef CONFIG_WATCHDOG
+-	ret = devm_watchdog_register_device(dev, &pcf2127->wdd);
+-	if (ret)
+-		return ret;
+-#endif /* CONFIG_WATCHDOG */
++	pcf2127_watchdog_init(dev, pcf2127);
+ 
+ 	/*
+ 	 * Disable battery low/switch-over timestamp and interrupts.
+diff --git a/drivers/scsi/ufs/ufs-mediatek.c b/drivers/scsi/ufs/ufs-mediatek.c
+index 8df73bc2f8cb2..914a827a93ee8 100644
+--- a/drivers/scsi/ufs/ufs-mediatek.c
++++ b/drivers/scsi/ufs/ufs-mediatek.c
+@@ -743,7 +743,7 @@ static int ufs_mtk_link_startup_notify(struct ufs_hba *hba,
+ 	return ret;
+ }
+ 
+-static void ufs_mtk_device_reset(struct ufs_hba *hba)
++static int ufs_mtk_device_reset(struct ufs_hba *hba)
+ {
+ 	struct arm_smccc_res res;
+ 
+@@ -764,6 +764,8 @@ static void ufs_mtk_device_reset(struct ufs_hba *hba)
+ 	usleep_range(10000, 15000);
+ 
+ 	dev_info(hba->dev, "device reset done\n");
++
++	return 0;
+ }
+ 
+ static int ufs_mtk_link_set_hpm(struct ufs_hba *hba)
+diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
+index f9d6ef3565407..a244c8ae1b4eb 100644
+--- a/drivers/scsi/ufs/ufs-qcom.c
++++ b/drivers/scsi/ufs/ufs-qcom.c
+@@ -1421,13 +1421,13 @@ static void ufs_qcom_dump_dbg_regs(struct ufs_hba *hba)
+  *
+  * Toggles the (optional) reset line to reset the attached device.
+  */
+-static void ufs_qcom_device_reset(struct ufs_hba *hba)
++static int ufs_qcom_device_reset(struct ufs_hba *hba)
+ {
+ 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+ 
+ 	/* reset gpio is optional */
+ 	if (!host->device_reset)
+-		return;
++		return -EOPNOTSUPP;
+ 
+ 	/*
+ 	 * The UFS device shall detect reset pulses of 1us, sleep for 10us to
+@@ -1438,6 +1438,8 @@ static void ufs_qcom_device_reset(struct ufs_hba *hba)
+ 
+ 	gpiod_set_value_cansleep(host->device_reset, 0);
+ 	usleep_range(10, 15);
++
++	return 0;
+ }
+ 
+ #if IS_ENABLED(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND)
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index e0f00a42371c5..cd51553e522da 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -318,7 +318,7 @@ struct ufs_hba_variant_ops {
+ 	int     (*resume)(struct ufs_hba *, enum ufs_pm_op);
+ 	void	(*dbg_register_dump)(struct ufs_hba *hba);
+ 	int	(*phy_initialization)(struct ufs_hba *);
+-	void	(*device_reset)(struct ufs_hba *hba);
++	int	(*device_reset)(struct ufs_hba *hba);
+ 	void	(*config_scaling_param)(struct ufs_hba *hba,
+ 					struct devfreq_dev_profile *profile,
+ 					void *data);
+@@ -1181,9 +1181,17 @@ static inline void ufshcd_vops_dbg_register_dump(struct ufs_hba *hba)
+ static inline void ufshcd_vops_device_reset(struct ufs_hba *hba)
+ {
+ 	if (hba->vops && hba->vops->device_reset) {
+-		hba->vops->device_reset(hba);
+-		ufshcd_set_ufs_dev_active(hba);
+-		ufshcd_update_reg_hist(&hba->ufs_stats.dev_reset, 0);
++		int err = hba->vops->device_reset(hba);
++
++		if (!err) {
++			ufshcd_set_ufs_dev_active(hba);
++			if (ufshcd_is_wb_allowed(hba)) {
++				hba->wb_enabled = false;
++				hba->wb_buf_flush_enabled = false;
++			}
++		}
++		if (err != -EOPNOTSUPP)
++			ufshcd_update_reg_hist(&hba->ufs_stats.dev_reset, err);
+ 	}
+ }
+ 
+diff --git a/fs/exec.c b/fs/exec.c
+index 547a2390baf54..ca89e0e3ef10f 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -965,8 +965,8 @@ EXPORT_SYMBOL(read_code);
+ 
+ /*
+  * Maps the mm_struct mm into the current task struct.
+- * On success, this function returns with the mutex
+- * exec_update_mutex locked.
++ * On success, this function returns with exec_update_lock
++ * held for writing.
+  */
+ static int exec_mmap(struct mm_struct *mm)
+ {
+@@ -981,7 +981,7 @@ static int exec_mmap(struct mm_struct *mm)
+ 	if (old_mm)
+ 		sync_mm_rss(old_mm);
+ 
+-	ret = mutex_lock_killable(&tsk->signal->exec_update_mutex);
++	ret = down_write_killable(&tsk->signal->exec_update_lock);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -995,7 +995,7 @@ static int exec_mmap(struct mm_struct *mm)
+ 		mmap_read_lock(old_mm);
+ 		if (unlikely(old_mm->core_state)) {
+ 			mmap_read_unlock(old_mm);
+-			mutex_unlock(&tsk->signal->exec_update_mutex);
++			up_write(&tsk->signal->exec_update_lock);
+ 			return -EINTR;
+ 		}
+ 	}
+@@ -1382,7 +1382,7 @@ int begin_new_exec(struct linux_binprm * bprm)
+ 	return 0;
+ 
+ out_unlock:
+-	mutex_unlock(&me->signal->exec_update_mutex);
++	up_write(&me->signal->exec_update_lock);
+ out:
+ 	return retval;
+ }
+@@ -1423,7 +1423,7 @@ void setup_new_exec(struct linux_binprm * bprm)
+ 	 * some architectures like powerpc
+ 	 */
+ 	me->mm->task_size = TASK_SIZE;
+-	mutex_unlock(&me->signal->exec_update_mutex);
++	up_write(&me->signal->exec_update_lock);
+ 	mutex_unlock(&me->signal->cred_guard_mutex);
+ }
+ EXPORT_SYMBOL(setup_new_exec);
+diff --git a/fs/fuse/acl.c b/fs/fuse/acl.c
+index 5a48cee6d7d33..f529075a2ce87 100644
+--- a/fs/fuse/acl.c
++++ b/fs/fuse/acl.c
+@@ -19,6 +19,9 @@ struct posix_acl *fuse_get_acl(struct inode *inode, int type)
+ 	void *value = NULL;
+ 	struct posix_acl *acl;
+ 
++	if (fuse_is_bad(inode))
++		return ERR_PTR(-EIO);
++
+ 	if (!fc->posix_acl || fc->no_getxattr)
+ 		return NULL;
+ 
+@@ -53,6 +56,9 @@ int fuse_set_acl(struct inode *inode, struct posix_acl *acl, int type)
+ 	const char *name;
+ 	int ret;
+ 
++	if (fuse_is_bad(inode))
++		return -EIO;
++
+ 	if (!fc->posix_acl || fc->no_setxattr)
+ 		return -EOPNOTSUPP;
+ 
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index ff7dbeb16f88d..ffa031fe52933 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -202,7 +202,7 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags)
+ 	int ret;
+ 
+ 	inode = d_inode_rcu(entry);
+-	if (inode && is_bad_inode(inode))
++	if (inode && fuse_is_bad(inode))
+ 		goto invalid;
+ 	else if (time_before64(fuse_dentry_time(entry), get_jiffies_64()) ||
+ 		 (flags & LOOKUP_REVAL)) {
+@@ -463,6 +463,9 @@ static struct dentry *fuse_lookup(struct inode *dir, struct dentry *entry,
+ 	bool outarg_valid = true;
+ 	bool locked;
+ 
++	if (fuse_is_bad(dir))
++		return ERR_PTR(-EIO);
++
+ 	locked = fuse_lock_inode(dir);
+ 	err = fuse_lookup_name(dir->i_sb, get_node_id(dir), &entry->d_name,
+ 			       &outarg, &inode);
+@@ -606,6 +609,9 @@ static int fuse_atomic_open(struct inode *dir, struct dentry *entry,
+ 	struct fuse_conn *fc = get_fuse_conn(dir);
+ 	struct dentry *res = NULL;
+ 
++	if (fuse_is_bad(dir))
++		return -EIO;
++
+ 	if (d_in_lookup(entry)) {
+ 		res = fuse_lookup(dir, entry, 0);
+ 		if (IS_ERR(res))
+@@ -654,6 +660,9 @@ static int create_new_entry(struct fuse_mount *fm, struct fuse_args *args,
+ 	int err;
+ 	struct fuse_forget_link *forget;
+ 
++	if (fuse_is_bad(dir))
++		return -EIO;
++
+ 	forget = fuse_alloc_forget();
+ 	if (!forget)
+ 		return -ENOMEM;
+@@ -781,6 +790,9 @@ static int fuse_unlink(struct inode *dir, struct dentry *entry)
+ 	struct fuse_mount *fm = get_fuse_mount(dir);
+ 	FUSE_ARGS(args);
+ 
++	if (fuse_is_bad(dir))
++		return -EIO;
++
+ 	args.opcode = FUSE_UNLINK;
+ 	args.nodeid = get_node_id(dir);
+ 	args.in_numargs = 1;
+@@ -817,6 +829,9 @@ static int fuse_rmdir(struct inode *dir, struct dentry *entry)
+ 	struct fuse_mount *fm = get_fuse_mount(dir);
+ 	FUSE_ARGS(args);
+ 
++	if (fuse_is_bad(dir))
++		return -EIO;
++
+ 	args.opcode = FUSE_RMDIR;
+ 	args.nodeid = get_node_id(dir);
+ 	args.in_numargs = 1;
+@@ -895,6 +910,9 @@ static int fuse_rename2(struct inode *olddir, struct dentry *oldent,
+ 	struct fuse_conn *fc = get_fuse_conn(olddir);
+ 	int err;
+ 
++	if (fuse_is_bad(olddir))
++		return -EIO;
++
+ 	if (flags & ~(RENAME_NOREPLACE | RENAME_EXCHANGE | RENAME_WHITEOUT))
+ 		return -EINVAL;
+ 
+@@ -1030,7 +1048,7 @@ static int fuse_do_getattr(struct inode *inode, struct kstat *stat,
+ 	if (!err) {
+ 		if (fuse_invalid_attr(&outarg.attr) ||
+ 		    (inode->i_mode ^ outarg.attr.mode) & S_IFMT) {
+-			make_bad_inode(inode);
++			fuse_make_bad(inode);
+ 			err = -EIO;
+ 		} else {
+ 			fuse_change_attributes(inode, &outarg.attr,
+@@ -1232,6 +1250,9 @@ static int fuse_permission(struct inode *inode, int mask)
+ 	bool refreshed = false;
+ 	int err = 0;
+ 
++	if (fuse_is_bad(inode))
++		return -EIO;
++
+ 	if (!fuse_allow_current_process(fc))
+ 		return -EACCES;
+ 
+@@ -1327,7 +1348,7 @@ static const char *fuse_get_link(struct dentry *dentry, struct inode *inode,
+ 	int err;
+ 
+ 	err = -EIO;
+-	if (is_bad_inode(inode))
++	if (fuse_is_bad(inode))
+ 		goto out_err;
+ 
+ 	if (fc->cache_symlinks)
+@@ -1375,7 +1396,7 @@ static int fuse_dir_fsync(struct file *file, loff_t start, loff_t end,
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 	int err;
+ 
+-	if (is_bad_inode(inode))
++	if (fuse_is_bad(inode))
+ 		return -EIO;
+ 
+ 	if (fc->no_fsyncdir)
+@@ -1664,7 +1685,7 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr,
+ 
+ 	if (fuse_invalid_attr(&outarg.attr) ||
+ 	    (inode->i_mode ^ outarg.attr.mode) & S_IFMT) {
+-		make_bad_inode(inode);
++		fuse_make_bad(inode);
+ 		err = -EIO;
+ 		goto error;
+ 	}
+@@ -1727,6 +1748,9 @@ static int fuse_setattr(struct dentry *entry, struct iattr *attr)
+ 	struct file *file = (attr->ia_valid & ATTR_FILE) ? attr->ia_file : NULL;
+ 	int ret;
+ 
++	if (fuse_is_bad(inode))
++		return -EIO;
++
+ 	if (!fuse_allow_current_process(get_fuse_conn(inode)))
+ 		return -EACCES;
+ 
+@@ -1785,6 +1809,9 @@ static int fuse_getattr(const struct path *path, struct kstat *stat,
+ 	struct inode *inode = d_inode(path->dentry);
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 
++	if (fuse_is_bad(inode))
++		return -EIO;
++
+ 	if (!fuse_allow_current_process(fc)) {
+ 		if (!request_mask) {
+ 			/*
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index c03034e8c1529..8b306005453cc 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -226,6 +226,9 @@ int fuse_open_common(struct inode *inode, struct file *file, bool isdir)
+ 	bool dax_truncate = (file->f_flags & O_TRUNC) &&
+ 			  fc->atomic_o_trunc && FUSE_IS_DAX(inode);
+ 
++	if (fuse_is_bad(inode))
++		return -EIO;
++
+ 	err = generic_file_open(inode, file);
+ 	if (err)
+ 		return err;
+@@ -463,7 +466,7 @@ static int fuse_flush(struct file *file, fl_owner_t id)
+ 	FUSE_ARGS(args);
+ 	int err;
+ 
+-	if (is_bad_inode(inode))
++	if (fuse_is_bad(inode))
+ 		return -EIO;
+ 
+ 	err = write_inode_now(inode, 1);
+@@ -535,7 +538,7 @@ static int fuse_fsync(struct file *file, loff_t start, loff_t end,
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 	int err;
+ 
+-	if (is_bad_inode(inode))
++	if (fuse_is_bad(inode))
+ 		return -EIO;
+ 
+ 	inode_lock(inode);
+@@ -859,7 +862,7 @@ static int fuse_readpage(struct file *file, struct page *page)
+ 	int err;
+ 
+ 	err = -EIO;
+-	if (is_bad_inode(inode))
++	if (fuse_is_bad(inode))
+ 		goto out;
+ 
+ 	err = fuse_do_readpage(file, page);
+@@ -952,7 +955,7 @@ static void fuse_readahead(struct readahead_control *rac)
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 	unsigned int i, max_pages, nr_pages = 0;
+ 
+-	if (is_bad_inode(inode))
++	if (fuse_is_bad(inode))
+ 		return;
+ 
+ 	max_pages = min_t(unsigned int, fc->max_pages,
+@@ -1555,7 +1558,7 @@ static ssize_t fuse_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ 	struct fuse_file *ff = file->private_data;
+ 	struct inode *inode = file_inode(file);
+ 
+-	if (is_bad_inode(inode))
++	if (fuse_is_bad(inode))
+ 		return -EIO;
+ 
+ 	if (FUSE_IS_DAX(inode))
+@@ -1573,7 +1576,7 @@ static ssize_t fuse_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 	struct fuse_file *ff = file->private_data;
+ 	struct inode *inode = file_inode(file);
+ 
+-	if (is_bad_inode(inode))
++	if (fuse_is_bad(inode))
+ 		return -EIO;
+ 
+ 	if (FUSE_IS_DAX(inode))
+@@ -2172,7 +2175,7 @@ static int fuse_writepages(struct address_space *mapping,
+ 	int err;
+ 
+ 	err = -EIO;
+-	if (is_bad_inode(inode))
++	if (fuse_is_bad(inode))
+ 		goto out;
+ 
+ 	data.inode = inode;
+@@ -2954,7 +2957,7 @@ long fuse_ioctl_common(struct file *file, unsigned int cmd,
+ 	if (!fuse_allow_current_process(fc))
+ 		return -EACCES;
+ 
+-	if (is_bad_inode(inode))
++	if (fuse_is_bad(inode))
+ 		return -EIO;
+ 
+ 	return fuse_do_ioctl(file, cmd, arg, flags);
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index d51598017d133..404d66f01e8d7 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -172,6 +172,8 @@ enum {
+ 	FUSE_I_INIT_RDPLUS,
+ 	/** An operation changing file size is in progress  */
+ 	FUSE_I_SIZE_UNSTABLE,
++	/* Bad inode */
++	FUSE_I_BAD,
+ };
+ 
+ struct fuse_conn;
+@@ -858,6 +860,16 @@ static inline u64 fuse_get_attr_version(struct fuse_conn *fc)
+ 	return atomic64_read(&fc->attr_version);
+ }
+ 
++static inline void fuse_make_bad(struct inode *inode)
++{
++	set_bit(FUSE_I_BAD, &get_fuse_inode(inode)->state);
++}
++
++static inline bool fuse_is_bad(struct inode *inode)
++{
++	return unlikely(test_bit(FUSE_I_BAD, &get_fuse_inode(inode)->state));
++}
++
+ /** Device operations */
+ extern const struct file_operations fuse_dev_operations;
+ 
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 1a47afc95f800..f94b0bb57619c 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -132,7 +132,7 @@ static void fuse_evict_inode(struct inode *inode)
+ 			fi->forget = NULL;
+ 		}
+ 	}
+-	if (S_ISREG(inode->i_mode) && !is_bad_inode(inode)) {
++	if (S_ISREG(inode->i_mode) && !fuse_is_bad(inode)) {
+ 		WARN_ON(!list_empty(&fi->write_files));
+ 		WARN_ON(!list_empty(&fi->queued_writes));
+ 	}
+@@ -342,7 +342,7 @@ retry:
+ 		unlock_new_inode(inode);
+ 	} else if ((inode->i_mode ^ attr->mode) & S_IFMT) {
+ 		/* Inode has changed type, any I/O on the old should fail */
+-		make_bad_inode(inode);
++		fuse_make_bad(inode);
+ 		iput(inode);
+ 		goto retry;
+ 	}
+diff --git a/fs/fuse/readdir.c b/fs/fuse/readdir.c
+index 3b5e91045871a..3441ffa740f3d 100644
+--- a/fs/fuse/readdir.c
++++ b/fs/fuse/readdir.c
+@@ -207,7 +207,7 @@ retry:
+ 			dput(dentry);
+ 			goto retry;
+ 		}
+-		if (is_bad_inode(inode)) {
++		if (fuse_is_bad(inode)) {
+ 			dput(dentry);
+ 			return -EIO;
+ 		}
+@@ -568,7 +568,7 @@ int fuse_readdir(struct file *file, struct dir_context *ctx)
+ 	struct inode *inode = file_inode(file);
+ 	int err;
+ 
+-	if (is_bad_inode(inode))
++	if (fuse_is_bad(inode))
+ 		return -EIO;
+ 
+ 	mutex_lock(&ff->readdir.lock);
+diff --git a/fs/fuse/xattr.c b/fs/fuse/xattr.c
+index 371bdcbc72337..cdea18de94f7e 100644
+--- a/fs/fuse/xattr.c
++++ b/fs/fuse/xattr.c
+@@ -113,6 +113,9 @@ ssize_t fuse_listxattr(struct dentry *entry, char *list, size_t size)
+ 	struct fuse_getxattr_out outarg;
+ 	ssize_t ret;
+ 
++	if (fuse_is_bad(inode))
++		return -EIO;
++
+ 	if (!fuse_allow_current_process(fm->fc))
+ 		return -EACCES;
+ 
+@@ -178,6 +181,9 @@ static int fuse_xattr_get(const struct xattr_handler *handler,
+ 			 struct dentry *dentry, struct inode *inode,
+ 			 const char *name, void *value, size_t size)
+ {
++	if (fuse_is_bad(inode))
++		return -EIO;
++
+ 	return fuse_getxattr(inode, name, value, size);
+ }
+ 
+@@ -186,6 +192,9 @@ static int fuse_xattr_set(const struct xattr_handler *handler,
+ 			  const char *name, const void *value, size_t size,
+ 			  int flags)
+ {
++	if (fuse_is_bad(inode))
++		return -EIO;
++
+ 	if (!value)
+ 		return fuse_removexattr(inode, name);
+ 
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index b362523a9829a..55ce0ee9c5c73 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -405,11 +405,11 @@ print0:
+ 
+ static int lock_trace(struct task_struct *task)
+ {
+-	int err = mutex_lock_killable(&task->signal->exec_update_mutex);
++	int err = down_read_killable(&task->signal->exec_update_lock);
+ 	if (err)
+ 		return err;
+ 	if (!ptrace_may_access(task, PTRACE_MODE_ATTACH_FSCREDS)) {
+-		mutex_unlock(&task->signal->exec_update_mutex);
++		up_read(&task->signal->exec_update_lock);
+ 		return -EPERM;
+ 	}
+ 	return 0;
+@@ -417,7 +417,7 @@ static int lock_trace(struct task_struct *task)
+ 
+ static void unlock_trace(struct task_struct *task)
+ {
+-	mutex_unlock(&task->signal->exec_update_mutex);
++	up_read(&task->signal->exec_update_lock);
+ }
+ 
+ #ifdef CONFIG_STACKTRACE
+@@ -2930,7 +2930,7 @@ static int do_io_accounting(struct task_struct *task, struct seq_file *m, int wh
+ 	unsigned long flags;
+ 	int result;
+ 
+-	result = mutex_lock_killable(&task->signal->exec_update_mutex);
++	result = down_read_killable(&task->signal->exec_update_lock);
+ 	if (result)
+ 		return result;
+ 
+@@ -2966,7 +2966,7 @@ static int do_io_accounting(struct task_struct *task, struct seq_file *m, int wh
+ 	result = 0;
+ 
+ out_unlock:
+-	mutex_unlock(&task->signal->exec_update_mutex);
++	up_read(&task->signal->exec_update_lock);
+ 	return result;
+ }
+ 
+diff --git a/include/linux/kdev_t.h b/include/linux/kdev_t.h
+index 85b5151911cfd..4856706fbfeb4 100644
+--- a/include/linux/kdev_t.h
++++ b/include/linux/kdev_t.h
+@@ -21,61 +21,61 @@
+ 	})
+ 
+ /* acceptable for old filesystems */
+-static inline bool old_valid_dev(dev_t dev)
++static __always_inline bool old_valid_dev(dev_t dev)
+ {
+ 	return MAJOR(dev) < 256 && MINOR(dev) < 256;
+ }
+ 
+-static inline u16 old_encode_dev(dev_t dev)
++static __always_inline u16 old_encode_dev(dev_t dev)
+ {
+ 	return (MAJOR(dev) << 8) | MINOR(dev);
+ }
+ 
+-static inline dev_t old_decode_dev(u16 val)
++static __always_inline dev_t old_decode_dev(u16 val)
+ {
+ 	return MKDEV((val >> 8) & 255, val & 255);
+ }
+ 
+-static inline u32 new_encode_dev(dev_t dev)
++static __always_inline u32 new_encode_dev(dev_t dev)
+ {
+ 	unsigned major = MAJOR(dev);
+ 	unsigned minor = MINOR(dev);
+ 	return (minor & 0xff) | (major << 8) | ((minor & ~0xff) << 12);
+ }
+ 
+-static inline dev_t new_decode_dev(u32 dev)
++static __always_inline dev_t new_decode_dev(u32 dev)
+ {
+ 	unsigned major = (dev & 0xfff00) >> 8;
+ 	unsigned minor = (dev & 0xff) | ((dev >> 12) & 0xfff00);
+ 	return MKDEV(major, minor);
+ }
+ 
+-static inline u64 huge_encode_dev(dev_t dev)
++static __always_inline u64 huge_encode_dev(dev_t dev)
+ {
+ 	return new_encode_dev(dev);
+ }
+ 
+-static inline dev_t huge_decode_dev(u64 dev)
++static __always_inline dev_t huge_decode_dev(u64 dev)
+ {
+ 	return new_decode_dev(dev);
+ }
+ 
+-static inline int sysv_valid_dev(dev_t dev)
++static __always_inline int sysv_valid_dev(dev_t dev)
+ {
+ 	return MAJOR(dev) < (1<<14) && MINOR(dev) < (1<<18);
+ }
+ 
+-static inline u32 sysv_encode_dev(dev_t dev)
++static __always_inline u32 sysv_encode_dev(dev_t dev)
+ {
+ 	return MINOR(dev) | (MAJOR(dev) << 18);
+ }
+ 
+-static inline unsigned sysv_major(u32 dev)
++static __always_inline unsigned sysv_major(u32 dev)
+ {
+ 	return (dev >> 18) & 0x3fff;
+ }
+ 
+-static inline unsigned sysv_minor(u32 dev)
++static __always_inline unsigned sysv_minor(u32 dev)
+ {
+ 	return dev & 0x3ffff;
+ }
+diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
+index 25e3fde856178..4c715be487171 100644
+--- a/include/linux/rwsem.h
++++ b/include/linux/rwsem.h
+@@ -123,6 +123,7 @@ static inline int rwsem_is_contended(struct rw_semaphore *sem)
+  * lock for reading
+  */
+ extern void down_read(struct rw_semaphore *sem);
++extern int __must_check down_read_interruptible(struct rw_semaphore *sem);
+ extern int __must_check down_read_killable(struct rw_semaphore *sem);
+ 
+ /*
+@@ -171,6 +172,7 @@ extern void downgrade_write(struct rw_semaphore *sem);
+  * See Documentation/locking/lockdep-design.rst for more details.)
+  */
+ extern void down_read_nested(struct rw_semaphore *sem, int subclass);
++extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass);
+ extern void down_write_nested(struct rw_semaphore *sem, int subclass);
+ extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass);
+ extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock);
+@@ -191,6 +193,7 @@ extern void down_read_non_owner(struct rw_semaphore *sem);
+ extern void up_read_non_owner(struct rw_semaphore *sem);
+ #else
+ # define down_read_nested(sem, subclass)		down_read(sem)
++# define down_read_killable_nested(sem, subclass)	down_read_killable(sem)
+ # define down_write_nest_lock(sem, nest_lock)	down_write(sem)
+ # define down_write_nested(sem, subclass)	down_write(sem)
+ # define down_write_killable_nested(sem, subclass)	down_write_killable(sem)
+diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
+index 1bad18a1d8ba7..4b6a8234d7fc2 100644
+--- a/include/linux/sched/signal.h
++++ b/include/linux/sched/signal.h
+@@ -228,12 +228,13 @@ struct signal_struct {
+ 					 * credential calculations
+ 					 * (notably. ptrace)
+ 					 * Deprecated do not use in new code.
+-					 * Use exec_update_mutex instead.
+-					 */
+-	struct mutex exec_update_mutex;	/* Held while task_struct is being
+-					 * updated during exec, and may have
+-					 * inconsistent permissions.
++					 * Use exec_update_lock instead.
+ 					 */
++	struct rw_semaphore exec_update_lock;	/* Held while task_struct is
++						 * being updated during exec,
++						 * and may have inconsistent
++						 * permissions.
++						 */
+ } __randomize_layout;
+ 
+ /*
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 9bf6c319a670e..65771bef5e654 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -3943,6 +3943,16 @@ static inline int ib_req_ncomp_notif(struct ib_cq *cq, int wc_cnt)
+ 		-ENOSYS;
+ }
+ 
++/*
++ * Drivers that don't need a DMA mapping at the RDMA layer, set dma_device to
++ * NULL. This causes the ib_dma* helpers to just stash the kernel virtual
++ * address into the dma address.
++ */
++static inline bool ib_uses_virt_dma(struct ib_device *dev)
++{
++	return IS_ENABLED(CONFIG_INFINIBAND_VIRT_DMA) && !dev->dma_device;
++}
++
+ /**
+  * ib_dma_mapping_error - check a DMA addr for error
+  * @dev: The device for which the dma_addr was created
+@@ -3950,6 +3960,8 @@ static inline int ib_req_ncomp_notif(struct ib_cq *cq, int wc_cnt)
+  */
+ static inline int ib_dma_mapping_error(struct ib_device *dev, u64 dma_addr)
+ {
++	if (ib_uses_virt_dma(dev))
++		return 0;
+ 	return dma_mapping_error(dev->dma_device, dma_addr);
+ }
+ 
+@@ -3964,6 +3976,8 @@ static inline u64 ib_dma_map_single(struct ib_device *dev,
+ 				    void *cpu_addr, size_t size,
+ 				    enum dma_data_direction direction)
+ {
++	if (ib_uses_virt_dma(dev))
++		return (uintptr_t)cpu_addr;
+ 	return dma_map_single(dev->dma_device, cpu_addr, size, direction);
+ }
+ 
+@@ -3978,7 +3992,8 @@ static inline void ib_dma_unmap_single(struct ib_device *dev,
+ 				       u64 addr, size_t size,
+ 				       enum dma_data_direction direction)
+ {
+-	dma_unmap_single(dev->dma_device, addr, size, direction);
++	if (!ib_uses_virt_dma(dev))
++		dma_unmap_single(dev->dma_device, addr, size, direction);
+ }
+ 
+ /**
+@@ -3995,6 +4010,8 @@ static inline u64 ib_dma_map_page(struct ib_device *dev,
+ 				  size_t size,
+ 					 enum dma_data_direction direction)
+ {
++	if (ib_uses_virt_dma(dev))
++		return (uintptr_t)(page_address(page) + offset);
+ 	return dma_map_page(dev->dma_device, page, offset, size, direction);
+ }
+ 
+@@ -4009,7 +4026,30 @@ static inline void ib_dma_unmap_page(struct ib_device *dev,
+ 				     u64 addr, size_t size,
+ 				     enum dma_data_direction direction)
+ {
+-	dma_unmap_page(dev->dma_device, addr, size, direction);
++	if (!ib_uses_virt_dma(dev))
++		dma_unmap_page(dev->dma_device, addr, size, direction);
++}
++
++int ib_dma_virt_map_sg(struct ib_device *dev, struct scatterlist *sg, int nents);
++static inline int ib_dma_map_sg_attrs(struct ib_device *dev,
++				      struct scatterlist *sg, int nents,
++				      enum dma_data_direction direction,
++				      unsigned long dma_attrs)
++{
++	if (ib_uses_virt_dma(dev))
++		return ib_dma_virt_map_sg(dev, sg, nents);
++	return dma_map_sg_attrs(dev->dma_device, sg, nents, direction,
++				dma_attrs);
++}
++
++static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
++					 struct scatterlist *sg, int nents,
++					 enum dma_data_direction direction,
++					 unsigned long dma_attrs)
++{
++	if (!ib_uses_virt_dma(dev))
++		dma_unmap_sg_attrs(dev->dma_device, sg, nents, direction,
++				   dma_attrs);
+ }
+ 
+ /**
+@@ -4023,7 +4063,7 @@ static inline int ib_dma_map_sg(struct ib_device *dev,
+ 				struct scatterlist *sg, int nents,
+ 				enum dma_data_direction direction)
+ {
+-	return dma_map_sg(dev->dma_device, sg, nents, direction);
++	return ib_dma_map_sg_attrs(dev, sg, nents, direction, 0);
+ }
+ 
+ /**
+@@ -4037,24 +4077,7 @@ static inline void ib_dma_unmap_sg(struct ib_device *dev,
+ 				   struct scatterlist *sg, int nents,
+ 				   enum dma_data_direction direction)
+ {
+-	dma_unmap_sg(dev->dma_device, sg, nents, direction);
+-}
+-
+-static inline int ib_dma_map_sg_attrs(struct ib_device *dev,
+-				      struct scatterlist *sg, int nents,
+-				      enum dma_data_direction direction,
+-				      unsigned long dma_attrs)
+-{
+-	return dma_map_sg_attrs(dev->dma_device, sg, nents, direction,
+-				dma_attrs);
+-}
+-
+-static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
+-					 struct scatterlist *sg, int nents,
+-					 enum dma_data_direction direction,
+-					 unsigned long dma_attrs)
+-{
+-	dma_unmap_sg_attrs(dev->dma_device, sg, nents, direction, dma_attrs);
++	ib_dma_unmap_sg_attrs(dev, sg, nents, direction, 0);
+ }
+ 
+ /**
+@@ -4065,6 +4088,8 @@ static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
+  */
+ static inline unsigned int ib_dma_max_seg_size(struct ib_device *dev)
+ {
++	if (ib_uses_virt_dma(dev))
++		return UINT_MAX;
+ 	return dma_get_max_seg_size(dev->dma_device);
+ }
+ 
+@@ -4080,7 +4105,8 @@ static inline void ib_dma_sync_single_for_cpu(struct ib_device *dev,
+ 					      size_t size,
+ 					      enum dma_data_direction dir)
+ {
+-	dma_sync_single_for_cpu(dev->dma_device, addr, size, dir);
++	if (!ib_uses_virt_dma(dev))
++		dma_sync_single_for_cpu(dev->dma_device, addr, size, dir);
+ }
+ 
+ /**
+@@ -4095,7 +4121,8 @@ static inline void ib_dma_sync_single_for_device(struct ib_device *dev,
+ 						 size_t size,
+ 						 enum dma_data_direction dir)
+ {
+-	dma_sync_single_for_device(dev->dma_device, addr, size, dir);
++	if (!ib_uses_virt_dma(dev))
++		dma_sync_single_for_device(dev->dma_device, addr, size, dir);
+ }
+ 
+ /**
+diff --git a/init/init_task.c b/init/init_task.c
+index a56f0abb63e93..15f6eb93a04fa 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -26,7 +26,7 @@ static struct signal_struct init_signals = {
+ 	.multiprocess	= HLIST_HEAD_INIT,
+ 	.rlim		= INIT_RLIMITS,
+ 	.cred_guard_mutex = __MUTEX_INITIALIZER(init_signals.cred_guard_mutex),
+-	.exec_update_mutex = __MUTEX_INITIALIZER(init_signals.exec_update_mutex),
++	.exec_update_lock = __RWSEM_INITIALIZER(init_signals.exec_update_lock),
+ #ifdef CONFIG_POSIX_TIMERS
+ 	.posix_timers = LIST_HEAD_INIT(init_signals.posix_timers),
+ 	.cputimer	= {
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index dc568ca295bdc..c3ba29d058b73 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -1325,7 +1325,7 @@ static void put_ctx(struct perf_event_context *ctx)
+  * function.
+  *
+  * Lock order:
+- *    exec_update_mutex
++ *    exec_update_lock
+  *	task_struct::perf_event_mutex
+  *	  perf_event_context::mutex
+  *	    perf_event::child_mutex;
+@@ -11720,24 +11720,6 @@ SYSCALL_DEFINE5(perf_event_open,
+ 		goto err_task;
+ 	}
+ 
+-	if (task) {
+-		err = mutex_lock_interruptible(&task->signal->exec_update_mutex);
+-		if (err)
+-			goto err_task;
+-
+-		/*
+-		 * Preserve ptrace permission check for backwards compatibility.
+-		 *
+-		 * We must hold exec_update_mutex across this and any potential
+-		 * perf_install_in_context() call for this new event to
+-		 * serialize against exec() altering our credentials (and the
+-		 * perf_event_exit_task() that could imply).
+-		 */
+-		err = -EACCES;
+-		if (!perfmon_capable() && !ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS))
+-			goto err_cred;
+-	}
+-
+ 	if (flags & PERF_FLAG_PID_CGROUP)
+ 		cgroup_fd = pid;
+ 
+@@ -11745,7 +11727,7 @@ SYSCALL_DEFINE5(perf_event_open,
+ 				 NULL, NULL, cgroup_fd);
+ 	if (IS_ERR(event)) {
+ 		err = PTR_ERR(event);
+-		goto err_cred;
++		goto err_task;
+ 	}
+ 
+ 	if (is_sampling_event(event)) {
+@@ -11864,6 +11846,24 @@ SYSCALL_DEFINE5(perf_event_open,
+ 		goto err_context;
+ 	}
+ 
++	if (task) {
++		err = down_read_interruptible(&task->signal->exec_update_lock);
++		if (err)
++			goto err_file;
++
++		/*
++		 * Preserve ptrace permission check for backwards compatibility.
++		 *
++		 * We must hold exec_update_lock across this and any potential
++		 * perf_install_in_context() call for this new event to
++		 * serialize against exec() altering our credentials (and the
++		 * perf_event_exit_task() that could imply).
++		 */
++		err = -EACCES;
++		if (!perfmon_capable() && !ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS))
++			goto err_cred;
++	}
++
+ 	if (move_group) {
+ 		gctx = __perf_event_ctx_lock_double(group_leader, ctx);
+ 
+@@ -12017,7 +12017,7 @@ SYSCALL_DEFINE5(perf_event_open,
+ 	mutex_unlock(&ctx->mutex);
+ 
+ 	if (task) {
+-		mutex_unlock(&task->signal->exec_update_mutex);
++		up_read(&task->signal->exec_update_lock);
+ 		put_task_struct(task);
+ 	}
+ 
+@@ -12039,7 +12039,10 @@ err_locked:
+ 	if (move_group)
+ 		perf_event_ctx_unlock(group_leader, gctx);
+ 	mutex_unlock(&ctx->mutex);
+-/* err_file: */
++err_cred:
++	if (task)
++		up_read(&task->signal->exec_update_lock);
++err_file:
+ 	fput(event_file);
+ err_context:
+ 	perf_unpin_context(ctx);
+@@ -12051,9 +12054,6 @@ err_alloc:
+ 	 */
+ 	if (!event_file)
+ 		free_event(event);
+-err_cred:
+-	if (task)
+-		mutex_unlock(&task->signal->exec_update_mutex);
+ err_task:
+ 	if (task)
+ 		put_task_struct(task);
+@@ -12358,7 +12358,7 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
+ /*
+  * When a child task exits, feed back event values to parent events.
+  *
+- * Can be called with exec_update_mutex held when called from
++ * Can be called with exec_update_lock held when called from
+  * setup_new_exec().
+  */
+ void perf_event_exit_task(struct task_struct *child)
+diff --git a/kernel/fork.c b/kernel/fork.c
+index dc55f68a6ee36..c675fdbd3dce1 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1222,7 +1222,7 @@ struct mm_struct *mm_access(struct task_struct *task, unsigned int mode)
+ 	struct mm_struct *mm;
+ 	int err;
+ 
+-	err =  mutex_lock_killable(&task->signal->exec_update_mutex);
++	err =  down_read_killable(&task->signal->exec_update_lock);
+ 	if (err)
+ 		return ERR_PTR(err);
+ 
+@@ -1232,7 +1232,7 @@ struct mm_struct *mm_access(struct task_struct *task, unsigned int mode)
+ 		mmput(mm);
+ 		mm = ERR_PTR(-EACCES);
+ 	}
+-	mutex_unlock(&task->signal->exec_update_mutex);
++	up_read(&task->signal->exec_update_lock);
+ 
+ 	return mm;
+ }
+@@ -1592,7 +1592,7 @@ static int copy_signal(unsigned long clone_flags, struct task_struct *tsk)
+ 	sig->oom_score_adj_min = current->signal->oom_score_adj_min;
+ 
+ 	mutex_init(&sig->cred_guard_mutex);
+-	mutex_init(&sig->exec_update_mutex);
++	init_rwsem(&sig->exec_update_lock);
+ 
+ 	return 0;
+ }
+diff --git a/kernel/kcmp.c b/kernel/kcmp.c
+index b3ff9288c6cc9..c0d2ad9b4705d 100644
+--- a/kernel/kcmp.c
++++ b/kernel/kcmp.c
+@@ -75,25 +75,25 @@ get_file_raw_ptr(struct task_struct *task, unsigned int idx)
+ 	return file;
+ }
+ 
+-static void kcmp_unlock(struct mutex *m1, struct mutex *m2)
++static void kcmp_unlock(struct rw_semaphore *l1, struct rw_semaphore *l2)
+ {
+-	if (likely(m2 != m1))
+-		mutex_unlock(m2);
+-	mutex_unlock(m1);
++	if (likely(l2 != l1))
++		up_read(l2);
++	up_read(l1);
+ }
+ 
+-static int kcmp_lock(struct mutex *m1, struct mutex *m2)
++static int kcmp_lock(struct rw_semaphore *l1, struct rw_semaphore *l2)
+ {
+ 	int err;
+ 
+-	if (m2 > m1)
+-		swap(m1, m2);
++	if (l2 > l1)
++		swap(l1, l2);
+ 
+-	err = mutex_lock_killable(m1);
+-	if (!err && likely(m1 != m2)) {
+-		err = mutex_lock_killable_nested(m2, SINGLE_DEPTH_NESTING);
++	err = down_read_killable(l1);
++	if (!err && likely(l1 != l2)) {
++		err = down_read_killable_nested(l2, SINGLE_DEPTH_NESTING);
+ 		if (err)
+-			mutex_unlock(m1);
++			up_read(l1);
+ 	}
+ 
+ 	return err;
+@@ -173,8 +173,8 @@ SYSCALL_DEFINE5(kcmp, pid_t, pid1, pid_t, pid2, int, type,
+ 	/*
+ 	 * One should have enough rights to inspect task details.
+ 	 */
+-	ret = kcmp_lock(&task1->signal->exec_update_mutex,
+-			&task2->signal->exec_update_mutex);
++	ret = kcmp_lock(&task1->signal->exec_update_lock,
++			&task2->signal->exec_update_lock);
+ 	if (ret)
+ 		goto err;
+ 	if (!ptrace_may_access(task1, PTRACE_MODE_READ_REALCREDS) ||
+@@ -229,8 +229,8 @@ SYSCALL_DEFINE5(kcmp, pid_t, pid1, pid_t, pid2, int, type,
+ 	}
+ 
+ err_unlock:
+-	kcmp_unlock(&task1->signal->exec_update_mutex,
+-		    &task2->signal->exec_update_mutex);
++	kcmp_unlock(&task1->signal->exec_update_lock,
++		    &task2->signal->exec_update_lock);
+ err:
+ 	put_task_struct(task1);
+ 	put_task_struct(task2);
+diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
+index f11b9bd3431d2..a163542d178ee 100644
+--- a/kernel/locking/rwsem.c
++++ b/kernel/locking/rwsem.c
+@@ -1345,6 +1345,18 @@ static inline void __down_read(struct rw_semaphore *sem)
+ 	}
+ }
+ 
++static inline int __down_read_interruptible(struct rw_semaphore *sem)
++{
++	if (!rwsem_read_trylock(sem)) {
++		if (IS_ERR(rwsem_down_read_slowpath(sem, TASK_INTERRUPTIBLE)))
++			return -EINTR;
++		DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
++	} else {
++		rwsem_set_reader_owned(sem);
++	}
++	return 0;
++}
++
+ static inline int __down_read_killable(struct rw_semaphore *sem)
+ {
+ 	if (!rwsem_read_trylock(sem)) {
+@@ -1495,6 +1507,20 @@ void __sched down_read(struct rw_semaphore *sem)
+ }
+ EXPORT_SYMBOL(down_read);
+ 
++int __sched down_read_interruptible(struct rw_semaphore *sem)
++{
++	might_sleep();
++	rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
++
++	if (LOCK_CONTENDED_RETURN(sem, __down_read_trylock, __down_read_interruptible)) {
++		rwsem_release(&sem->dep_map, _RET_IP_);
++		return -EINTR;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL(down_read_interruptible);
++
+ int __sched down_read_killable(struct rw_semaphore *sem)
+ {
+ 	might_sleep();
+@@ -1605,6 +1631,20 @@ void down_read_nested(struct rw_semaphore *sem, int subclass)
+ }
+ EXPORT_SYMBOL(down_read_nested);
+ 
++int down_read_killable_nested(struct rw_semaphore *sem, int subclass)
++{
++	might_sleep();
++	rwsem_acquire_read(&sem->dep_map, subclass, 0, _RET_IP_);
++
++	if (LOCK_CONTENDED_RETURN(sem, __down_read_trylock, __down_read_killable)) {
++		rwsem_release(&sem->dep_map, _RET_IP_);
++		return -EINTR;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL(down_read_killable_nested);
++
+ void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest)
+ {
+ 	might_sleep();
+diff --git a/kernel/pid.c b/kernel/pid.c
+index a96bc4bf4f869..4856818c9de1a 100644
+--- a/kernel/pid.c
++++ b/kernel/pid.c
+@@ -628,7 +628,7 @@ static struct file *__pidfd_fget(struct task_struct *task, int fd)
+ 	struct file *file;
+ 	int ret;
+ 
+-	ret = mutex_lock_killable(&task->signal->exec_update_mutex);
++	ret = down_read_killable(&task->signal->exec_update_lock);
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
+@@ -637,7 +637,7 @@ static struct file *__pidfd_fget(struct task_struct *task, int fd)
+ 	else
+ 		file = ERR_PTR(-EPERM);
+ 
+-	mutex_unlock(&task->signal->exec_update_mutex);
++	up_read(&task->signal->exec_update_lock);
+ 
+ 	return file ?: ERR_PTR(-EBADF);
+ }
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 502552d6e9aff..c4aa2cbb92697 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -763,7 +763,7 @@ static int hci_init3_req(struct hci_request *req, unsigned long opt)
+ 			hci_req_add(req, HCI_OP_LE_CLEAR_RESOLV_LIST, 0, NULL);
+ 		}
+ 
+-		if (hdev->commands[35] & 0x40) {
++		if (hdev->commands[35] & 0x04) {
+ 			__le16 rpa_timeout = cpu_to_le16(hdev->rpa_timeout);
+ 
+ 			/* Set RPA timeout */
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 2ddc27db8c012..d12b4799c3cb7 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1736,7 +1736,7 @@ static void silent_stream_disable(struct hda_codec *codec,
+ 	per_pin->silent_stream = false;
+ 
+  unlock_out:
+-	mutex_unlock(&spec->pcm_lock);
++	mutex_unlock(&per_pin->lock);
+ }
+ 
+ /* update ELD and jack state via audio component */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index dde5ba2095415..006af6541dada 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7885,7 +7885,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x09bf, "Dell Precision", ALC233_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0a2e, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0a30, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
+-	SND_PCI_QUIRK(0x1028, 0x0a58, "Dell Precision 3650 Tower", ALC255_FIXUP_DELL_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1028, 0x0a58, "Dell", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-01-12 20:03 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-01-12 20:03 UTC (permalink / raw
  To: gentoo-commits

commit:     1835eb1b0600c0d6bff5d0748356ad023fb6c18d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 12 20:03:36 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jan 12 20:03:36 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1835eb1b

Linux patch 5.10.7

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1006_linux-5.10.7.patch | 5134 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5138 insertions(+)

diff --git a/0000_README b/0000_README
index 4881039..d4ad009 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-5.10.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.6
 
+Patch:  1006_linux-5.10.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-5.10.7.patch b/1006_linux-5.10.7.patch
new file mode 100644
index 0000000..4907b5f
--- /dev/null
+++ b/1006_linux-5.10.7.patch
@@ -0,0 +1,5134 @@
+diff --git a/Makefile b/Makefile
+index 2b3f0d06b0054..9b6c90eed5e9c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+@@ -450,7 +450,7 @@ LEX		= flex
+ YACC		= bison
+ AWK		= awk
+ INSTALLKERNEL  := installkernel
+-DEPMOD		= /sbin/depmod
++DEPMOD		= depmod
+ PERL		= perl
+ PYTHON		= python
+ PYTHON3		= python3
+diff --git a/arch/alpha/include/asm/local64.h b/arch/alpha/include/asm/local64.h
+deleted file mode 100644
+index 36c93b5cc239b..0000000000000
+--- a/arch/alpha/include/asm/local64.h
++++ /dev/null
+@@ -1 +0,0 @@
+-#include <asm-generic/local64.h>
+diff --git a/arch/arc/include/asm/Kbuild b/arch/arc/include/asm/Kbuild
+index 81f4edec0c2a9..3c1afa524b9c2 100644
+--- a/arch/arc/include/asm/Kbuild
++++ b/arch/arc/include/asm/Kbuild
+@@ -1,7 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ generic-y += extable.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+ generic-y += parport.h
+ generic-y += user.h
+diff --git a/arch/arm/boot/dts/omap3-n950-n9.dtsi b/arch/arm/boot/dts/omap3-n950-n9.dtsi
+index 11d41e86f814d..7dde9fbb06d33 100644
+--- a/arch/arm/boot/dts/omap3-n950-n9.dtsi
++++ b/arch/arm/boot/dts/omap3-n950-n9.dtsi
+@@ -494,3 +494,11 @@
+ 		clock-names = "sysclk";
+ 	};
+ };
++
++&aes1_target {
++	status = "disabled";
++};
++
++&aes2_target {
++	status = "disabled";
++};
+diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild
+index 383635b68763c..f1398b9267c08 100644
+--- a/arch/arm/include/asm/Kbuild
++++ b/arch/arm/include/asm/Kbuild
+@@ -2,7 +2,6 @@
+ generic-y += early_ioremap.h
+ generic-y += extable.h
+ generic-y += flat.h
+-generic-y += local64.h
+ generic-y += parport.h
+ generic-y += seccomp.h
+ 
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index 6a87d592bd001..485b7dbd4f9e3 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -10,7 +10,7 @@
+ #
+ # Copyright (C) 1995-2001 by Russell King
+ 
+-LDFLAGS_vmlinux	:=--no-undefined -X -z norelro
++LDFLAGS_vmlinux	:=--no-undefined -X
+ 
+ ifeq ($(CONFIG_RELOCATABLE), y)
+ # Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour
+@@ -110,16 +110,20 @@ KBUILD_CPPFLAGS	+= -mbig-endian
+ CHECKFLAGS	+= -D__AARCH64EB__
+ # Prefer the baremetal ELF build target, but not all toolchains include
+ # it so fall back to the standard linux version if needed.
+-KBUILD_LDFLAGS	+= -EB $(call ld-option, -maarch64elfb, -maarch64linuxb)
++KBUILD_LDFLAGS	+= -EB $(call ld-option, -maarch64elfb, -maarch64linuxb -z norelro)
+ UTS_MACHINE	:= aarch64_be
+ else
+ KBUILD_CPPFLAGS	+= -mlittle-endian
+ CHECKFLAGS	+= -D__AARCH64EL__
+ # Same as above, prefer ELF but fall back to linux target if needed.
+-KBUILD_LDFLAGS	+= -EL $(call ld-option, -maarch64elf, -maarch64linux)
++KBUILD_LDFLAGS	+= -EL $(call ld-option, -maarch64elf, -maarch64linux -z norelro)
+ UTS_MACHINE	:= aarch64
+ endif
+ 
++ifeq ($(CONFIG_LD_IS_LLD), y)
++KBUILD_LDFLAGS	+= -z norelro
++endif
++
+ CHECKFLAGS	+= -D__aarch64__
+ 
+ ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS),y)
+diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
+index ff9cbb6312128..07ac208edc894 100644
+--- a/arch/arm64/include/asm/Kbuild
++++ b/arch/arm64/include/asm/Kbuild
+@@ -1,6 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+ generic-y += early_ioremap.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+ generic-y += qrwlock.h
+ generic-y += qspinlock.h
+diff --git a/arch/csky/include/asm/Kbuild b/arch/csky/include/asm/Kbuild
+index 64876e59e2ef9..2a5a4d94fafad 100644
+--- a/arch/csky/include/asm/Kbuild
++++ b/arch/csky/include/asm/Kbuild
+@@ -2,7 +2,6 @@
+ generic-y += asm-offsets.h
+ generic-y += gpio.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += qrwlock.h
+ generic-y += seccomp.h
+ generic-y += user.h
+diff --git a/arch/h8300/include/asm/Kbuild b/arch/h8300/include/asm/Kbuild
+index ddf04f32b5467..60ee7f0d60a8f 100644
+--- a/arch/h8300/include/asm/Kbuild
++++ b/arch/h8300/include/asm/Kbuild
+@@ -2,7 +2,6 @@
+ generic-y += asm-offsets.h
+ generic-y += extable.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+ generic-y += parport.h
+ generic-y += spinlock.h
+diff --git a/arch/hexagon/include/asm/Kbuild b/arch/hexagon/include/asm/Kbuild
+index 373964bb177e4..3ece3c93fe086 100644
+--- a/arch/hexagon/include/asm/Kbuild
++++ b/arch/hexagon/include/asm/Kbuild
+@@ -2,5 +2,4 @@
+ generic-y += extable.h
+ generic-y += iomap.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+diff --git a/arch/ia64/include/asm/local64.h b/arch/ia64/include/asm/local64.h
+deleted file mode 100644
+index 36c93b5cc239b..0000000000000
+--- a/arch/ia64/include/asm/local64.h
++++ /dev/null
+@@ -1 +0,0 @@
+-#include <asm-generic/local64.h>
+diff --git a/arch/m68k/include/asm/Kbuild b/arch/m68k/include/asm/Kbuild
+index 1bff55aa2d54e..0dbf9c5c6faeb 100644
+--- a/arch/m68k/include/asm/Kbuild
++++ b/arch/m68k/include/asm/Kbuild
+@@ -2,6 +2,5 @@
+ generated-y += syscall_table.h
+ generic-y += extable.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+ generic-y += spinlock.h
+diff --git a/arch/microblaze/include/asm/Kbuild b/arch/microblaze/include/asm/Kbuild
+index 63bce836b9f10..29b0e557aa7c5 100644
+--- a/arch/microblaze/include/asm/Kbuild
++++ b/arch/microblaze/include/asm/Kbuild
+@@ -2,7 +2,6 @@
+ generated-y += syscall_table.h
+ generic-y += extable.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+ generic-y += parport.h
+ generic-y += syscalls.h
+diff --git a/arch/mips/include/asm/Kbuild b/arch/mips/include/asm/Kbuild
+index 198b3bafdac97..95b4fa7bd0d1f 100644
+--- a/arch/mips/include/asm/Kbuild
++++ b/arch/mips/include/asm/Kbuild
+@@ -6,7 +6,6 @@ generated-y += syscall_table_64_n64.h
+ generated-y += syscall_table_64_o32.h
+ generic-y += export.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+ generic-y += parport.h
+ generic-y += qrwlock.h
+diff --git a/arch/nds32/include/asm/Kbuild b/arch/nds32/include/asm/Kbuild
+index ff1e94299317d..82a4453c9c2d5 100644
+--- a/arch/nds32/include/asm/Kbuild
++++ b/arch/nds32/include/asm/Kbuild
+@@ -4,6 +4,5 @@ generic-y += cmpxchg.h
+ generic-y += export.h
+ generic-y += gpio.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += parport.h
+ generic-y += user.h
+diff --git a/arch/parisc/include/asm/Kbuild b/arch/parisc/include/asm/Kbuild
+index e3ee5c0bfe80f..a1bd2adc63e3a 100644
+--- a/arch/parisc/include/asm/Kbuild
++++ b/arch/parisc/include/asm/Kbuild
+@@ -3,7 +3,6 @@ generated-y += syscall_table_32.h
+ generated-y += syscall_table_64.h
+ generated-y += syscall_table_c32.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+ generic-y += seccomp.h
+ generic-y += user.h
+diff --git a/arch/powerpc/include/asm/Kbuild b/arch/powerpc/include/asm/Kbuild
+index 90cd5c53af666..e1f9b4ea1c537 100644
+--- a/arch/powerpc/include/asm/Kbuild
++++ b/arch/powerpc/include/asm/Kbuild
+@@ -5,7 +5,6 @@ generated-y += syscall_table_c32.h
+ generated-y += syscall_table_spu.h
+ generic-y += export.h
+ generic-y += kvm_types.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+ generic-y += qrwlock.h
+ generic-y += vtime.h
+diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
+index 6db90cdf11da8..f887f9d5b9e84 100644
+--- a/arch/powerpc/kernel/vmlinux.lds.S
++++ b/arch/powerpc/kernel/vmlinux.lds.S
+@@ -85,7 +85,7 @@ SECTIONS
+ 		ALIGN_FUNCTION();
+ #endif
+ 		/* careful! __ftr_alt_* sections need to be close to .text */
+-		*(.text.hot TEXT_MAIN .text.fixup .text.unlikely .fixup __ftr_alt_* .ref.text);
++		*(.text.hot .text.hot.* TEXT_MAIN .text.fixup .text.unlikely .text.unlikely.* .fixup __ftr_alt_* .ref.text);
+ #ifdef CONFIG_PPC64
+ 		*(.tramp.ftrace.text);
+ #endif
+diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
+index 59dd7be550054..445ccc97305a5 100644
+--- a/arch/riscv/include/asm/Kbuild
++++ b/arch/riscv/include/asm/Kbuild
+@@ -3,6 +3,5 @@ generic-y += early_ioremap.h
+ generic-y += extable.h
+ generic-y += flat.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += user.h
+ generic-y += vmlinux.lds.h
+diff --git a/arch/s390/include/asm/Kbuild b/arch/s390/include/asm/Kbuild
+index 319efa0e6d024..1a18d7b82f86d 100644
+--- a/arch/s390/include/asm/Kbuild
++++ b/arch/s390/include/asm/Kbuild
+@@ -7,5 +7,4 @@ generated-y += unistd_nr.h
+ generic-y += asm-offsets.h
+ generic-y += export.h
+ generic-y += kvm_types.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+diff --git a/arch/sh/include/asm/Kbuild b/arch/sh/include/asm/Kbuild
+index 7435182ef8465..fc44d9c88b419 100644
+--- a/arch/sh/include/asm/Kbuild
++++ b/arch/sh/include/asm/Kbuild
+@@ -1,6 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+ generated-y += syscall_table.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+ generic-y += parport.h
+diff --git a/arch/sparc/include/asm/Kbuild b/arch/sparc/include/asm/Kbuild
+index 5269a704801fa..3688fdae50e45 100644
+--- a/arch/sparc/include/asm/Kbuild
++++ b/arch/sparc/include/asm/Kbuild
+@@ -6,5 +6,4 @@ generated-y += syscall_table_64.h
+ generated-y += syscall_table_c32.h
+ generic-y += export.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+diff --git a/arch/x86/include/asm/local64.h b/arch/x86/include/asm/local64.h
+deleted file mode 100644
+index 36c93b5cc239b..0000000000000
+--- a/arch/x86/include/asm/local64.h
++++ /dev/null
+@@ -1 +0,0 @@
+-#include <asm-generic/local64.h>
+diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
+index 23ad8e953dfb1..a29997e6cf9e6 100644
+--- a/arch/x86/kernel/cpu/mtrr/generic.c
++++ b/arch/x86/kernel/cpu/mtrr/generic.c
+@@ -167,9 +167,6 @@ static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
+ 	*repeat = 0;
+ 	*uniform = 1;
+ 
+-	/* Make end inclusive instead of exclusive */
+-	end--;
+-
+ 	prev_match = MTRR_TYPE_INVALID;
+ 	for (i = 0; i < num_var_ranges; ++i) {
+ 		unsigned short start_state, end_state, inclusive;
+@@ -261,6 +258,9 @@ u8 mtrr_type_lookup(u64 start, u64 end, u8 *uniform)
+ 	int repeat;
+ 	u64 partial_end;
+ 
++	/* Make end inclusive instead of exclusive */
++	end--;
++
+ 	if (!mtrr_state_set)
+ 		return MTRR_TYPE_INVALID;
+ 
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index f3418428682b1..5a59e3315b340 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -525,89 +525,70 @@ static void rdtgroup_remove(struct rdtgroup *rdtgrp)
+ 	kfree(rdtgrp);
+ }
+ 
+-struct task_move_callback {
+-	struct callback_head	work;
+-	struct rdtgroup		*rdtgrp;
+-};
+-
+-static void move_myself(struct callback_head *head)
++static void _update_task_closid_rmid(void *task)
+ {
+-	struct task_move_callback *callback;
+-	struct rdtgroup *rdtgrp;
+-
+-	callback = container_of(head, struct task_move_callback, work);
+-	rdtgrp = callback->rdtgrp;
+-
+ 	/*
+-	 * If resource group was deleted before this task work callback
+-	 * was invoked, then assign the task to root group and free the
+-	 * resource group.
++	 * If the task is still current on this CPU, update PQR_ASSOC MSR.
++	 * Otherwise, the MSR is updated when the task is scheduled in.
+ 	 */
+-	if (atomic_dec_and_test(&rdtgrp->waitcount) &&
+-	    (rdtgrp->flags & RDT_DELETED)) {
+-		current->closid = 0;
+-		current->rmid = 0;
+-		rdtgroup_remove(rdtgrp);
+-	}
+-
+-	if (unlikely(current->flags & PF_EXITING))
+-		goto out;
+-
+-	preempt_disable();
+-	/* update PQR_ASSOC MSR to make resource group go into effect */
+-	resctrl_sched_in();
+-	preempt_enable();
++	if (task == current)
++		resctrl_sched_in();
++}
+ 
+-out:
+-	kfree(callback);
++static void update_task_closid_rmid(struct task_struct *t)
++{
++	if (IS_ENABLED(CONFIG_SMP) && task_curr(t))
++		smp_call_function_single(task_cpu(t), _update_task_closid_rmid, t, 1);
++	else
++		_update_task_closid_rmid(t);
+ }
+ 
+ static int __rdtgroup_move_task(struct task_struct *tsk,
+ 				struct rdtgroup *rdtgrp)
+ {
+-	struct task_move_callback *callback;
+-	int ret;
+-
+-	callback = kzalloc(sizeof(*callback), GFP_KERNEL);
+-	if (!callback)
+-		return -ENOMEM;
+-	callback->work.func = move_myself;
+-	callback->rdtgrp = rdtgrp;
++	/* If the task is already in rdtgrp, no need to move the task. */
++	if ((rdtgrp->type == RDTCTRL_GROUP && tsk->closid == rdtgrp->closid &&
++	     tsk->rmid == rdtgrp->mon.rmid) ||
++	    (rdtgrp->type == RDTMON_GROUP && tsk->rmid == rdtgrp->mon.rmid &&
++	     tsk->closid == rdtgrp->mon.parent->closid))
++		return 0;
+ 
+ 	/*
+-	 * Take a refcount, so rdtgrp cannot be freed before the
+-	 * callback has been invoked.
++	 * Set the task's closid/rmid before the PQR_ASSOC MSR can be
++	 * updated by them.
++	 *
++	 * For ctrl_mon groups, move both closid and rmid.
++	 * For monitor groups, can move the tasks only from
++	 * their parent CTRL group.
+ 	 */
+-	atomic_inc(&rdtgrp->waitcount);
+-	ret = task_work_add(tsk, &callback->work, TWA_RESUME);
+-	if (ret) {
+-		/*
+-		 * Task is exiting. Drop the refcount and free the callback.
+-		 * No need to check the refcount as the group cannot be
+-		 * deleted before the write function unlocks rdtgroup_mutex.
+-		 */
+-		atomic_dec(&rdtgrp->waitcount);
+-		kfree(callback);
+-		rdt_last_cmd_puts("Task exited\n");
+-	} else {
+-		/*
+-		 * For ctrl_mon groups move both closid and rmid.
+-		 * For monitor groups, can move the tasks only from
+-		 * their parent CTRL group.
+-		 */
+-		if (rdtgrp->type == RDTCTRL_GROUP) {
+-			tsk->closid = rdtgrp->closid;
++
++	if (rdtgrp->type == RDTCTRL_GROUP) {
++		tsk->closid = rdtgrp->closid;
++		tsk->rmid = rdtgrp->mon.rmid;
++	} else if (rdtgrp->type == RDTMON_GROUP) {
++		if (rdtgrp->mon.parent->closid == tsk->closid) {
+ 			tsk->rmid = rdtgrp->mon.rmid;
+-		} else if (rdtgrp->type == RDTMON_GROUP) {
+-			if (rdtgrp->mon.parent->closid == tsk->closid) {
+-				tsk->rmid = rdtgrp->mon.rmid;
+-			} else {
+-				rdt_last_cmd_puts("Can't move task to different control group\n");
+-				ret = -EINVAL;
+-			}
++		} else {
++			rdt_last_cmd_puts("Can't move task to different control group\n");
++			return -EINVAL;
+ 		}
+ 	}
+-	return ret;
++
++	/*
++	 * Ensure the task's closid and rmid are written before determining if
++	 * the task is current that will decide if it will be interrupted.
++	 */
++	barrier();
++
++	/*
++	 * By now, the task's closid and rmid are set. If the task is current
++	 * on a CPU, the PQR_ASSOC MSR needs to be updated to make the resource
++	 * group go into effect. If the task is not current, the MSR will be
++	 * updated when the task is scheduled in.
++	 */
++	update_task_closid_rmid(tsk);
++
++	return 0;
+ }
+ 
+ static bool is_closid_match(struct task_struct *t, struct rdtgroup *r)
+diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
+index 9c4a9c8e43d90..581925e476d6c 100644
+--- a/arch/x86/kvm/mmu.h
++++ b/arch/x86/kvm/mmu.h
+@@ -49,7 +49,7 @@ static inline u64 rsvd_bits(int s, int e)
+ 	if (e < s)
+ 		return 0;
+ 
+-	return ((1ULL << (e - s + 1)) - 1) << s;
++	return ((2ULL << (e - s)) - 1) << s;
+ }
+ 
+ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 access_mask);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 7a6ae9e90bd70..52f36c8790862 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -3485,16 +3485,16 @@ static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
+  * Return the level of the lowest level SPTE added to sptes.
+  * That SPTE may be non-present.
+  */
+-static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
++static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level)
+ {
+ 	struct kvm_shadow_walk_iterator iterator;
+-	int leaf = vcpu->arch.mmu->root_level;
++	int leaf = -1;
+ 	u64 spte;
+ 
+-
+ 	walk_shadow_page_lockless_begin(vcpu);
+ 
+-	for (shadow_walk_init(&iterator, vcpu, addr);
++	for (shadow_walk_init(&iterator, vcpu, addr),
++	     *root_level = iterator.level;
+ 	     shadow_walk_okay(&iterator);
+ 	     __shadow_walk_next(&iterator, spte)) {
+ 		leaf = iterator.level;
+@@ -3504,7 +3504,6 @@ static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
+ 
+ 		if (!is_shadow_present_pte(spte))
+ 			break;
+-
+ 	}
+ 
+ 	walk_shadow_page_lockless_end(vcpu);
+@@ -3517,9 +3516,7 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
+ {
+ 	u64 sptes[PT64_ROOT_MAX_LEVEL];
+ 	struct rsvd_bits_validate *rsvd_check;
+-	int root = vcpu->arch.mmu->shadow_root_level;
+-	int leaf;
+-	int level;
++	int root, leaf, level;
+ 	bool reserved = false;
+ 
+ 	if (!VALID_PAGE(vcpu->arch.mmu->root_hpa)) {
+@@ -3528,9 +3525,14 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
+ 	}
+ 
+ 	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
+-		leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes);
++		leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes, &root);
+ 	else
+-		leaf = get_walk(vcpu, addr, sptes);
++		leaf = get_walk(vcpu, addr, sptes, &root);
++
++	if (unlikely(leaf < 0)) {
++		*sptep = 0ull;
++		return reserved;
++	}
+ 
+ 	rsvd_check = &vcpu->arch.mmu->shadow_zero_check;
+ 
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index 84c8f06bec261..b9265a585ea3c 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -42,7 +42,48 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
+ 	WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
+ }
+ 
+-#define for_each_tdp_mmu_root(_kvm, _root)			    \
++static void tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root)
++{
++	if (kvm_mmu_put_root(kvm, root))
++		kvm_tdp_mmu_free_root(kvm, root);
++}
++
++static inline bool tdp_mmu_next_root_valid(struct kvm *kvm,
++					   struct kvm_mmu_page *root)
++{
++	lockdep_assert_held(&kvm->mmu_lock);
++
++	if (list_entry_is_head(root, &kvm->arch.tdp_mmu_roots, link))
++		return false;
++
++	kvm_mmu_get_root(kvm, root);
++	return true;
++
++}
++
++static inline struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
++						     struct kvm_mmu_page *root)
++{
++	struct kvm_mmu_page *next_root;
++
++	next_root = list_next_entry(root, link);
++	tdp_mmu_put_root(kvm, root);
++	return next_root;
++}
++
++/*
++ * Note: this iterator gets and puts references to the roots it iterates over.
++ * This makes it safe to release the MMU lock and yield within the loop, but
++ * if exiting the loop early, the caller must drop the reference to the most
++ * recent root. (Unless keeping a live reference is desirable.)
++ */
++#define for_each_tdp_mmu_root_yield_safe(_kvm, _root)				\
++	for (_root = list_first_entry(&_kvm->arch.tdp_mmu_roots,	\
++				      typeof(*_root), link);		\
++	     tdp_mmu_next_root_valid(_kvm, _root);			\
++	     _root = tdp_mmu_next_root(_kvm, _root))
++
++#define for_each_tdp_mmu_root(_kvm, _root)				\
+ 	list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link)
+ 
+ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
+@@ -439,18 +480,9 @@ bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end)
+ 	struct kvm_mmu_page *root;
+ 	bool flush = false;
+ 
+-	for_each_tdp_mmu_root(kvm, root) {
+-		/*
+-		 * Take a reference on the root so that it cannot be freed if
+-		 * this thread releases the MMU lock and yields in this loop.
+-		 */
+-		kvm_mmu_get_root(kvm, root);
+-
++	for_each_tdp_mmu_root_yield_safe(kvm, root)
+ 		flush |= zap_gfn_range(kvm, root, start, end, true);
+ 
+-		kvm_mmu_put_root(kvm, root);
+-	}
+-
+ 	return flush;
+ }
+ 
+@@ -609,13 +641,7 @@ static int kvm_tdp_mmu_handle_hva_range(struct kvm *kvm, unsigned long start,
+ 	int ret = 0;
+ 	int as_id;
+ 
+-	for_each_tdp_mmu_root(kvm, root) {
+-		/*
+-		 * Take a reference on the root so that it cannot be freed if
+-		 * this thread releases the MMU lock and yields in this loop.
+-		 */
+-		kvm_mmu_get_root(kvm, root);
+-
++	for_each_tdp_mmu_root_yield_safe(kvm, root) {
+ 		as_id = kvm_mmu_page_as_id(root);
+ 		slots = __kvm_memslots(kvm, as_id);
+ 		kvm_for_each_memslot(memslot, slots) {
+@@ -637,8 +663,6 @@ static int kvm_tdp_mmu_handle_hva_range(struct kvm *kvm, unsigned long start,
+ 			ret |= handler(kvm, memslot, root, gfn_start,
+ 				       gfn_end, data);
+ 		}
+-
+-		kvm_mmu_put_root(kvm, root);
+ 	}
+ 
+ 	return ret;
+@@ -826,21 +850,13 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm, struct kvm_memory_slot *slot,
+ 	int root_as_id;
+ 	bool spte_set = false;
+ 
+-	for_each_tdp_mmu_root(kvm, root) {
++	for_each_tdp_mmu_root_yield_safe(kvm, root) {
+ 		root_as_id = kvm_mmu_page_as_id(root);
+ 		if (root_as_id != slot->as_id)
+ 			continue;
+ 
+-		/*
+-		 * Take a reference on the root so that it cannot be freed if
+-		 * this thread releases the MMU lock and yields in this loop.
+-		 */
+-		kvm_mmu_get_root(kvm, root);
+-
+ 		spte_set |= wrprot_gfn_range(kvm, root, slot->base_gfn,
+ 			     slot->base_gfn + slot->npages, min_level);
+-
+-		kvm_mmu_put_root(kvm, root);
+ 	}
+ 
+ 	return spte_set;
+@@ -894,21 +910,13 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm, struct kvm_memory_slot *slot)
+ 	int root_as_id;
+ 	bool spte_set = false;
+ 
+-	for_each_tdp_mmu_root(kvm, root) {
++	for_each_tdp_mmu_root_yield_safe(kvm, root) {
+ 		root_as_id = kvm_mmu_page_as_id(root);
+ 		if (root_as_id != slot->as_id)
+ 			continue;
+ 
+-		/*
+-		 * Take a reference on the root so that it cannot be freed if
+-		 * this thread releases the MMU lock and yields in this loop.
+-		 */
+-		kvm_mmu_get_root(kvm, root);
+-
+ 		spte_set |= clear_dirty_gfn_range(kvm, root, slot->base_gfn,
+ 				slot->base_gfn + slot->npages);
+-
+-		kvm_mmu_put_root(kvm, root);
+ 	}
+ 
+ 	return spte_set;
+@@ -1017,21 +1025,13 @@ bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot)
+ 	int root_as_id;
+ 	bool spte_set = false;
+ 
+-	for_each_tdp_mmu_root(kvm, root) {
++	for_each_tdp_mmu_root_yield_safe(kvm, root) {
+ 		root_as_id = kvm_mmu_page_as_id(root);
+ 		if (root_as_id != slot->as_id)
+ 			continue;
+ 
+-		/*
+-		 * Take a reference on the root so that it cannot be freed if
+-		 * this thread releases the MMU lock and yields in this loop.
+-		 */
+-		kvm_mmu_get_root(kvm, root);
+-
+ 		spte_set |= set_dirty_gfn_range(kvm, root, slot->base_gfn,
+ 				slot->base_gfn + slot->npages);
+-
+-		kvm_mmu_put_root(kvm, root);
+ 	}
+ 	return spte_set;
+ }
+@@ -1077,21 +1077,13 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
+ 	struct kvm_mmu_page *root;
+ 	int root_as_id;
+ 
+-	for_each_tdp_mmu_root(kvm, root) {
++	for_each_tdp_mmu_root_yield_safe(kvm, root) {
+ 		root_as_id = kvm_mmu_page_as_id(root);
+ 		if (root_as_id != slot->as_id)
+ 			continue;
+ 
+-		/*
+-		 * Take a reference on the root so that it cannot be freed if
+-		 * this thread releases the MMU lock and yields in this loop.
+-		 */
+-		kvm_mmu_get_root(kvm, root);
+-
+ 		zap_collapsible_spte_range(kvm, root, slot->base_gfn,
+ 					   slot->base_gfn + slot->npages);
+-
+-		kvm_mmu_put_root(kvm, root);
+ 	}
+ }
+ 
+@@ -1148,12 +1140,15 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
+  * Return the level of the lowest level SPTE added to sptes.
+  * That SPTE may be non-present.
+  */
+-int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
++int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
++			 int *root_level)
+ {
+ 	struct tdp_iter iter;
+ 	struct kvm_mmu *mmu = vcpu->arch.mmu;
+-	int leaf = vcpu->arch.mmu->shadow_root_level;
+ 	gfn_t gfn = addr >> PAGE_SHIFT;
++	int leaf = -1;
++
++	*root_level = vcpu->arch.mmu->shadow_root_level;
+ 
+ 	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
+ 		leaf = iter.level;
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
+index 556e065503f69..cbbdbadd1526f 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.h
++++ b/arch/x86/kvm/mmu/tdp_mmu.h
+@@ -44,5 +44,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
+ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
+ 				   struct kvm_memory_slot *slot, gfn_t gfn);
+ 
+-int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes);
++int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
++			 int *root_level);
++
+ #endif /* __KVM_X86_MMU_TDP_MMU_H */
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index dfd82f51ba66b..f6a9e2e366425 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -829,6 +829,8 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ 	}
+ 
+ 	free_page((unsigned long)pmd_sv);
++
++	pgtable_pmd_page_dtor(virt_to_page(pmd));
+ 	free_page((unsigned long)pmd);
+ 
+ 	return 1;
+diff --git a/arch/xtensa/include/asm/Kbuild b/arch/xtensa/include/asm/Kbuild
+index c59c42a1221a8..adefb1636f7ae 100644
+--- a/arch/xtensa/include/asm/Kbuild
++++ b/arch/xtensa/include/asm/Kbuild
+@@ -2,7 +2,6 @@
+ generated-y += syscall_table.h
+ generic-y += extable.h
+ generic-y += kvm_para.h
+-generic-y += local64.h
+ generic-y += mcs_spinlock.h
+ generic-y += param.h
+ generic-y += qrwlock.h
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 2db8bda43b6e6..2d53e2ff48ff8 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -18,6 +18,7 @@
+ #include <linux/bio.h>
+ #include <linux/blkdev.h>
+ #include <linux/blk-mq.h>
++#include <linux/blk-pm.h>
+ #include <linux/highmem.h>
+ #include <linux/mm.h>
+ #include <linux/pagemap.h>
+@@ -424,11 +425,11 @@ EXPORT_SYMBOL(blk_cleanup_queue);
+ /**
+  * blk_queue_enter() - try to increase q->q_usage_counter
+  * @q: request queue pointer
+- * @flags: BLK_MQ_REQ_NOWAIT and/or BLK_MQ_REQ_PREEMPT
++ * @flags: BLK_MQ_REQ_NOWAIT and/or BLK_MQ_REQ_PM
+  */
+ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
+ {
+-	const bool pm = flags & BLK_MQ_REQ_PREEMPT;
++	const bool pm = flags & BLK_MQ_REQ_PM;
+ 
+ 	while (true) {
+ 		bool success = false;
+@@ -440,7 +441,8 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
+ 			 * responsible for ensuring that that counter is
+ 			 * globally visible before the queue is unfrozen.
+ 			 */
+-			if (pm || !blk_queue_pm_only(q)) {
++			if ((pm && queue_rpm_status(q) != RPM_SUSPENDED) ||
++			    !blk_queue_pm_only(q)) {
+ 				success = true;
+ 			} else {
+ 				percpu_ref_put(&q->q_usage_counter);
+@@ -465,8 +467,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
+ 
+ 		wait_event(q->mq_freeze_wq,
+ 			   (!q->mq_freeze_depth &&
+-			    (pm || (blk_pm_request_resume(q),
+-				    !blk_queue_pm_only(q)))) ||
++			    blk_pm_resume_queue(pm, q)) ||
+ 			   blk_queue_dying(q));
+ 		if (blk_queue_dying(q))
+ 			return -ENODEV;
+@@ -630,7 +631,7 @@ struct request *blk_get_request(struct request_queue *q, unsigned int op,
+ 	struct request *req;
+ 
+ 	WARN_ON_ONCE(op & REQ_NOWAIT);
+-	WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PREEMPT));
++	WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PM));
+ 
+ 	req = blk_mq_alloc_request(q, op, flags);
+ 	if (!IS_ERR(req) && q->mq_ops->initialize_rq_fn)
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index bbe86d1199dc5..7e963b457f2ec 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2525,8 +2525,8 @@ static void ioc_rqos_throttle(struct rq_qos *rqos, struct bio *bio)
+ 	bool use_debt, ioc_locked;
+ 	unsigned long flags;
+ 
+-	/* bypass IOs if disabled or for root cgroup */
+-	if (!ioc->enabled || !iocg->level)
++	/* bypass IOs if disabled, still initializing, or for root cgroup */
++	if (!ioc->enabled || !iocg || !iocg->level)
+ 		return;
+ 
+ 	/* calculate the absolute vtime cost */
+@@ -2653,14 +2653,14 @@ static void ioc_rqos_merge(struct rq_qos *rqos, struct request *rq,
+ 			   struct bio *bio)
+ {
+ 	struct ioc_gq *iocg = blkg_to_iocg(bio->bi_blkg);
+-	struct ioc *ioc = iocg->ioc;
++	struct ioc *ioc = rqos_to_ioc(rqos);
+ 	sector_t bio_end = bio_end_sector(bio);
+ 	struct ioc_now now;
+ 	u64 vtime, abs_cost, cost;
+ 	unsigned long flags;
+ 
+-	/* bypass if disabled or for root cgroup */
+-	if (!ioc->enabled || !iocg->level)
++	/* bypass if disabled, still initializing, or for root cgroup */
++	if (!ioc->enabled || !iocg || !iocg->level)
+ 		return;
+ 
+ 	abs_cost = calc_vtime_cost(bio, iocg, true);
+@@ -2837,6 +2837,12 @@ static int blk_iocost_init(struct request_queue *q)
+ 	ioc_refresh_params(ioc, true);
+ 	spin_unlock_irq(&ioc->lock);
+ 
++	/*
++	 * rqos must be added before activation to allow iocg_pd_init() to
++	 * lookup the ioc from q. This means that the rqos methods may get
++	 * called before policy activation completion, can't assume that the
++	 * target bio has an iocg associated and need to test for NULL iocg.
++	 */
+ 	rq_qos_add(q, rqos);
+ 	ret = blkcg_activate_policy(q, &blkcg_policy_iocost);
+ 	if (ret) {
+diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
+index 3094542e12ae0..4d6e83e5b4429 100644
+--- a/block/blk-mq-debugfs.c
++++ b/block/blk-mq-debugfs.c
+@@ -129,6 +129,7 @@ static const char *const blk_queue_flag_name[] = {
+ 	QUEUE_FLAG_NAME(PCI_P2PDMA),
+ 	QUEUE_FLAG_NAME(ZONE_RESETALL),
+ 	QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
++	QUEUE_FLAG_NAME(NOWAIT),
+ };
+ #undef QUEUE_FLAG_NAME
+ 
+@@ -297,7 +298,6 @@ static const char *const rqf_name[] = {
+ 	RQF_NAME(MIXED_MERGE),
+ 	RQF_NAME(MQ_INFLIGHT),
+ 	RQF_NAME(DONTPREP),
+-	RQF_NAME(PREEMPT),
+ 	RQF_NAME(FAILED),
+ 	RQF_NAME(QUIET),
+ 	RQF_NAME(ELVPRIV),
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 55bcee5dc0320..2a1eff60c7975 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -292,8 +292,8 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
+ 	rq->mq_hctx = data->hctx;
+ 	rq->rq_flags = 0;
+ 	rq->cmd_flags = data->cmd_flags;
+-	if (data->flags & BLK_MQ_REQ_PREEMPT)
+-		rq->rq_flags |= RQF_PREEMPT;
++	if (data->flags & BLK_MQ_REQ_PM)
++		rq->rq_flags |= RQF_PM;
+ 	if (blk_queue_io_stat(data->q))
+ 		rq->rq_flags |= RQF_IO_STAT;
+ 	INIT_LIST_HEAD(&rq->queuelist);
+diff --git a/block/blk-pm.h b/block/blk-pm.h
+index ea5507d23e759..a2283cc9f716d 100644
+--- a/block/blk-pm.h
++++ b/block/blk-pm.h
+@@ -6,11 +6,14 @@
+ #include <linux/pm_runtime.h>
+ 
+ #ifdef CONFIG_PM
+-static inline void blk_pm_request_resume(struct request_queue *q)
++static inline int blk_pm_resume_queue(const bool pm, struct request_queue *q)
+ {
+-	if (q->dev && (q->rpm_status == RPM_SUSPENDED ||
+-		       q->rpm_status == RPM_SUSPENDING))
+-		pm_request_resume(q->dev);
++	if (!q->dev || !blk_queue_pm_only(q))
++		return 1;	/* Nothing to do */
++	if (pm && q->rpm_status != RPM_SUSPENDED)
++		return 1;	/* Request allowed */
++	pm_request_resume(q->dev);
++	return 0;
+ }
+ 
+ static inline void blk_pm_mark_last_busy(struct request *rq)
+@@ -44,8 +47,9 @@ static inline void blk_pm_put_request(struct request *rq)
+ 		--rq->q->nr_pending;
+ }
+ #else
+-static inline void blk_pm_request_resume(struct request_queue *q)
++static inline int blk_pm_resume_queue(const bool pm, struct request_queue *q)
+ {
++	return 1;
+ }
+ 
+ static inline void blk_pm_mark_last_busy(struct request *rq)
+diff --git a/crypto/asymmetric_keys/asym_tpm.c b/crypto/asymmetric_keys/asym_tpm.c
+index 378b18b9bc342..84a5d6af9609e 100644
+--- a/crypto/asymmetric_keys/asym_tpm.c
++++ b/crypto/asymmetric_keys/asym_tpm.c
+@@ -354,7 +354,7 @@ static uint32_t derive_pub_key(const void *pub_key, uint32_t len, uint8_t *buf)
+ 	memcpy(cur, e, sizeof(e));
+ 	cur += sizeof(e);
+ 	/* Zero parameters to satisfy set_pub_key ABI. */
+-	memset(cur, 0, SETKEY_PARAMS_SIZE);
++	memzero_explicit(cur, SETKEY_PARAMS_SIZE);
+ 
+ 	return cur - buf;
+ }
+diff --git a/crypto/ecdh.c b/crypto/ecdh.c
+index d56b8603dec95..96f80c8f8e304 100644
+--- a/crypto/ecdh.c
++++ b/crypto/ecdh.c
+@@ -39,7 +39,8 @@ static int ecdh_set_secret(struct crypto_kpp *tfm, const void *buf,
+ 	struct ecdh params;
+ 	unsigned int ndigits;
+ 
+-	if (crypto_ecdh_decode_key(buf, len, &params) < 0)
++	if (crypto_ecdh_decode_key(buf, len, &params) < 0 ||
++	    params.key_size > sizeof(ctx->private_key))
+ 		return -EINVAL;
+ 
+ 	ndigits = ecdh_supported_curve(params.curve_id);
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index 65a3886f68c9e..5f0472c18bcbd 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -3607,7 +3607,7 @@ static int idt77252_init_one(struct pci_dev *pcidev,
+ 
+ 	if ((err = dma_set_mask_and_coherent(&pcidev->dev, DMA_BIT_MASK(32)))) {
+ 		printk("idt77252: can't enable DMA for PCI device at %s\n", pci_name(pcidev));
+-		return err;
++		goto err_out_disable_pdev;
+ 	}
+ 
+ 	card = kzalloc(sizeof(struct idt77252_dev), GFP_KERNEL);
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index e8cb66093f211..a6187f6380d8d 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -4278,7 +4278,7 @@ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
+ 		if (fwnode_is_primary(fn)) {
+ 			dev->fwnode = fn->secondary;
+ 			if (!(parent && fn == parent->fwnode))
+-				fn->secondary = ERR_PTR(-ENODEV);
++				fn->secondary = NULL;
+ 		} else {
+ 			dev->fwnode = NULL;
+ 		}
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index 376164cdf2ea9..78d635f1d1567 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -251,12 +251,8 @@ static int h5_close(struct hci_uart *hu)
+ 	if (h5->vnd && h5->vnd->close)
+ 		h5->vnd->close(h5);
+ 
+-	if (hu->serdev)
+-		serdev_device_close(hu->serdev);
+-
+-	kfree_skb(h5->rx_skb);
+-	kfree(h5);
+-	h5 = NULL;
++	if (!hu->serdev)
++		kfree(h5);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index 844967f98866a..922416b3aaceb 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -76,10 +76,6 @@ static void dma_buf_release(struct dentry *dentry)
+ 
+ 	dmabuf->ops->release(dmabuf);
+ 
+-	mutex_lock(&db_list.lock);
+-	list_del(&dmabuf->list_node);
+-	mutex_unlock(&db_list.lock);
+-
+ 	if (dmabuf->resv == (struct dma_resv *)&dmabuf[1])
+ 		dma_resv_fini(dmabuf->resv);
+ 
+@@ -88,6 +84,22 @@ static void dma_buf_release(struct dentry *dentry)
+ 	kfree(dmabuf);
+ }
+ 
++static int dma_buf_file_release(struct inode *inode, struct file *file)
++{
++	struct dma_buf *dmabuf;
++
++	if (!is_dma_buf_file(file))
++		return -EINVAL;
++
++	dmabuf = file->private_data;
++
++	mutex_lock(&db_list.lock);
++	list_del(&dmabuf->list_node);
++	mutex_unlock(&db_list.lock);
++
++	return 0;
++}
++
+ static const struct dentry_operations dma_buf_dentry_ops = {
+ 	.d_dname = dmabuffs_dname,
+ 	.d_release = dma_buf_release,
+@@ -413,6 +425,7 @@ static void dma_buf_show_fdinfo(struct seq_file *m, struct file *file)
+ }
+ 
+ static const struct file_operations dma_buf_fops = {
++	.release	= dma_buf_file_release,
+ 	.mmap		= dma_buf_mmap_internal,
+ 	.llseek		= dma_buf_llseek,
+ 	.poll		= dma_buf_poll,
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index 07a5db06a29ad..fb97c9f319a55 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -379,7 +379,7 @@ int idxd_register_driver(void)
+ 	return 0;
+ 
+ drv_fail:
+-	for (; i > 0; i--)
++	while (--i >= 0)
+ 		driver_unregister(&idxd_drvs[i]->drv);
+ 	return rc;
+ }
+@@ -1639,7 +1639,7 @@ int idxd_register_bus_type(void)
+ 	return 0;
+ 
+ bus_err:
+-	for (; i > 0; i--)
++	while (--i >= 0)
+ 		bus_unregister(idxd_bus_types[i]);
+ 	return rc;
+ }
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+index bcc80f428172b..bd3046e5a9348 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+@@ -1046,7 +1046,7 @@ static void reloc_gpu_flush(struct i915_execbuffer *eb, struct reloc_cache *cach
+ 	GEM_BUG_ON(cache->rq_size >= obj->base.size / sizeof(u32));
+ 	cache->rq_cmd[cache->rq_size] = MI_BATCH_BUFFER_END;
+ 
+-	__i915_gem_object_flush_map(obj, 0, sizeof(u32) * (cache->rq_size + 1));
++	i915_gem_object_flush_map(obj);
+ 	i915_gem_object_unpin_map(obj);
+ 
+ 	intel_gt_chipset_flush(cache->rq->engine->gt);
+@@ -1296,6 +1296,8 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
+ 		goto err_pool;
+ 	}
+ 
++	memset32(cmd, 0, pool->obj->base.size / sizeof(u32));
++
+ 	batch = i915_vma_instance(pool->obj, vma->vm, NULL);
+ 	if (IS_ERR(batch)) {
+ 		err = PTR_ERR(batch);
+diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
+index e88970256e8ef..e7362ec22aded 100644
+--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
++++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
+@@ -1166,7 +1166,7 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
+ 		}
+ 	}
+ 	if (IS_ERR(src)) {
+-		unsigned long x, n;
++		unsigned long x, n, remain;
+ 		void *ptr;
+ 
+ 		/*
+@@ -1177,14 +1177,15 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
+ 		 * We don't care about copying too much here as we only
+ 		 * validate up to the end of the batch.
+ 		 */
++		remain = length;
+ 		if (!(dst_obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ))
+-			length = round_up(length,
++			remain = round_up(remain,
+ 					  boot_cpu_data.x86_clflush_size);
+ 
+ 		ptr = dst;
+ 		x = offset_in_page(offset);
+-		for (n = offset >> PAGE_SHIFT; length; n++) {
+-			int len = min(length, PAGE_SIZE - x);
++		for (n = offset >> PAGE_SHIFT; remain; n++) {
++			int len = min(remain, PAGE_SIZE - x);
+ 
+ 			src = kmap_atomic(i915_gem_object_get_page(src_obj, n));
+ 			if (needs_clflush)
+@@ -1193,13 +1194,15 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
+ 			kunmap_atomic(src);
+ 
+ 			ptr += len;
+-			length -= len;
++			remain -= len;
+ 			x = 0;
+ 		}
+ 	}
+ 
+ 	i915_gem_object_unpin_pages(src_obj);
+ 
++	memset32(dst + length, 0, (dst_obj->base.size - length) / sizeof(u32));
++
+ 	/* dst_obj is returned with vmap pinned */
+ 	return dst;
+ }
+@@ -1392,11 +1395,6 @@ static unsigned long *alloc_whitelist(u32 batch_length)
+ 
+ #define LENGTH_BIAS 2
+ 
+-static bool shadow_needs_clflush(struct drm_i915_gem_object *obj)
+-{
+-	return !(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE);
+-}
+-
+ /**
+  * intel_engine_cmd_parser() - parse a batch buffer for privilege violations
+  * @engine: the engine on which the batch is to execute
+@@ -1539,16 +1537,9 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
+ 				ret = 0; /* allow execution */
+ 			}
+ 		}
+-
+-		if (shadow_needs_clflush(shadow->obj))
+-			drm_clflush_virt_range(batch_end, 8);
+ 	}
+ 
+-	if (shadow_needs_clflush(shadow->obj)) {
+-		void *ptr = page_mask_bits(shadow->obj->mm.mapping);
+-
+-		drm_clflush_virt_range(ptr, (void *)(cmd + 1) - ptr);
+-	}
++	i915_gem_object_flush_map(shadow->obj);
+ 
+ 	if (!IS_ERR_OR_NULL(jump_whitelist))
+ 		kfree(jump_whitelist);
+diff --git a/drivers/hwmon/amd_energy.c b/drivers/hwmon/amd_energy.c
+index 3197cda7bcd9f..f22154863c98a 100644
+--- a/drivers/hwmon/amd_energy.c
++++ b/drivers/hwmon/amd_energy.c
+@@ -222,7 +222,7 @@ static int amd_create_sensor(struct device *dev,
+ 	 */
+ 	cpus = num_present_cpus() / num_siblings;
+ 
+-	s_config = devm_kcalloc(dev, cpus + sockets,
++	s_config = devm_kcalloc(dev, cpus + sockets + 1,
+ 				sizeof(u32), GFP_KERNEL);
+ 	if (!s_config)
+ 		return -ENOMEM;
+@@ -254,6 +254,7 @@ static int amd_create_sensor(struct device *dev,
+ 			scnprintf(label_l[i], 10, "Esocket%u", (i - cpus));
+ 	}
+ 
++	s_config[i] = 0;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/ide/ide-atapi.c b/drivers/ide/ide-atapi.c
+index 2162bc80f09e0..013ad33fbbc81 100644
+--- a/drivers/ide/ide-atapi.c
++++ b/drivers/ide/ide-atapi.c
+@@ -223,7 +223,6 @@ void ide_prep_sense(ide_drive_t *drive, struct request *rq)
+ 	sense_rq->rq_disk = rq->rq_disk;
+ 	sense_rq->cmd_flags = REQ_OP_DRV_IN;
+ 	ide_req(sense_rq)->type = ATA_PRIV_SENSE;
+-	sense_rq->rq_flags |= RQF_PREEMPT;
+ 
+ 	req->cmd[0] = GPCMD_REQUEST_SENSE;
+ 	req->cmd[4] = cmd_len;
+diff --git a/drivers/ide/ide-io.c b/drivers/ide/ide-io.c
+index 1a53c7a752244..4867b67b60d69 100644
+--- a/drivers/ide/ide-io.c
++++ b/drivers/ide/ide-io.c
+@@ -515,15 +515,10 @@ repeat:
+ 		 * above to return us whatever is in the queue. Since we call
+ 		 * ide_do_request() ourselves, we end up taking requests while
+ 		 * the queue is blocked...
+-		 * 
+-		 * We let requests forced at head of queue with ide-preempt
+-		 * though. I hope that doesn't happen too much, hopefully not
+-		 * unless the subdriver triggers such a thing in its own PM
+-		 * state machine.
+ 		 */
+ 		if ((drive->dev_flags & IDE_DFLAG_BLOCKED) &&
+ 		    ata_pm_request(rq) == 0 &&
+-		    (rq->rq_flags & RQF_PREEMPT) == 0) {
++		    (rq->rq_flags & RQF_PM) == 0) {
+ 			/* there should be no pending command at this point */
+ 			ide_unlock_port(hwif);
+ 			goto plug_device;
+diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
+index 192e6c65d34e7..82ab308f1aafe 100644
+--- a/drivers/ide/ide-pm.c
++++ b/drivers/ide/ide-pm.c
+@@ -77,7 +77,7 @@ int generic_ide_resume(struct device *dev)
+ 	}
+ 
+ 	memset(&rqpm, 0, sizeof(rqpm));
+-	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, BLK_MQ_REQ_PREEMPT);
++	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, BLK_MQ_REQ_PM);
+ 	ide_req(rq)->type = ATA_PRIV_PM_RESUME;
+ 	ide_req(rq)->special = &rqpm;
+ 	rqpm.pm_step = IDE_PM_START_RESUME;
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index 3242ebd0bca36..4a10c9ff368c5 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -142,7 +142,7 @@ static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_d
+ 	}
+ 	desc.qw2 = 0;
+ 	desc.qw3 = 0;
+-	qi_submit_sync(svm->iommu, &desc, 1, 0);
++	qi_submit_sync(sdev->iommu, &desc, 1, 0);
+ 
+ 	if (sdev->dev_iotlb) {
+ 		desc.qw0 = QI_DEV_EIOTLB_PASID(svm->pasid) |
+@@ -166,7 +166,7 @@ static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_d
+ 		}
+ 		desc.qw2 = 0;
+ 		desc.qw3 = 0;
+-		qi_submit_sync(svm->iommu, &desc, 1, 0);
++		qi_submit_sync(sdev->iommu, &desc, 1, 0);
+ 	}
+ }
+ 
+@@ -211,7 +211,7 @@ static void intel_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
+ 	 */
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(sdev, &svm->devs, list)
+-		intel_pasid_tear_down_entry(svm->iommu, sdev->dev,
++		intel_pasid_tear_down_entry(sdev->iommu, sdev->dev,
+ 					    svm->pasid, true);
+ 	rcu_read_unlock();
+ 
+@@ -363,6 +363,7 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
+ 	}
+ 	sdev->dev = dev;
+ 	sdev->sid = PCI_DEVID(info->bus, info->devfn);
++	sdev->iommu = iommu;
+ 
+ 	/* Only count users if device has aux domains */
+ 	if (iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX))
+@@ -546,6 +547,7 @@ intel_svm_bind_mm(struct device *dev, unsigned int flags,
+ 		goto out;
+ 	}
+ 	sdev->dev = dev;
++	sdev->iommu = iommu;
+ 
+ 	ret = intel_iommu_enable_pasid(iommu, dev);
+ 	if (ret) {
+@@ -575,7 +577,6 @@ intel_svm_bind_mm(struct device *dev, unsigned int flags,
+ 			kfree(sdev);
+ 			goto out;
+ 		}
+-		svm->iommu = iommu;
+ 
+ 		if (pasid_max > intel_pasid_max_id)
+ 			pasid_max = intel_pasid_max_id;
+diff --git a/drivers/md/bcache/features.c b/drivers/md/bcache/features.c
+index 6469223f0b777..d636b7b2d070c 100644
+--- a/drivers/md/bcache/features.c
++++ b/drivers/md/bcache/features.c
+@@ -17,7 +17,7 @@ struct feature {
+ };
+ 
+ static struct feature feature_list[] = {
+-	{BCH_FEATURE_INCOMPAT, BCH_FEATURE_INCOMPAT_LARGE_BUCKET,
++	{BCH_FEATURE_INCOMPAT, BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE,
+ 		"large_bucket"},
+ 	{0, 0, 0 },
+ };
+diff --git a/drivers/md/bcache/features.h b/drivers/md/bcache/features.h
+index a1653c4780416..84fc2c0f01015 100644
+--- a/drivers/md/bcache/features.h
++++ b/drivers/md/bcache/features.h
+@@ -13,11 +13,15 @@
+ 
+ /* Feature set definition */
+ /* Incompat feature set */
+-#define BCH_FEATURE_INCOMPAT_LARGE_BUCKET	0x0001 /* 32bit bucket size */
++/* 32bit bucket size, obsoleted */
++#define BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET		0x0001
++/* real bucket size is (1 << bucket_size) */
++#define BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE	0x0002
+ 
+-#define BCH_FEATURE_COMPAT_SUUP		0
+-#define BCH_FEATURE_RO_COMPAT_SUUP	0
+-#define BCH_FEATURE_INCOMPAT_SUUP	BCH_FEATURE_INCOMPAT_LARGE_BUCKET
++#define BCH_FEATURE_COMPAT_SUPP		0
++#define BCH_FEATURE_RO_COMPAT_SUPP	0
++#define BCH_FEATURE_INCOMPAT_SUPP	(BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET| \
++					 BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE)
+ 
+ #define BCH_HAS_COMPAT_FEATURE(sb, mask) \
+ 		((sb)->feature_compat & (mask))
+@@ -77,7 +81,23 @@ static inline void bch_clear_feature_##name(struct cache_sb *sb) \
+ 		~BCH##_FEATURE_INCOMPAT_##flagname; \
+ }
+ 
+-BCH_FEATURE_INCOMPAT_FUNCS(large_bucket, LARGE_BUCKET);
++BCH_FEATURE_INCOMPAT_FUNCS(obso_large_bucket, OBSO_LARGE_BUCKET);
++BCH_FEATURE_INCOMPAT_FUNCS(large_bucket, LOG_LARGE_BUCKET_SIZE);
++
++static inline bool bch_has_unknown_compat_features(struct cache_sb *sb)
++{
++	return ((sb->feature_compat & ~BCH_FEATURE_COMPAT_SUPP) != 0);
++}
++
++static inline bool bch_has_unknown_ro_compat_features(struct cache_sb *sb)
++{
++	return ((sb->feature_ro_compat & ~BCH_FEATURE_RO_COMPAT_SUPP) != 0);
++}
++
++static inline bool bch_has_unknown_incompat_features(struct cache_sb *sb)
++{
++	return ((sb->feature_incompat & ~BCH_FEATURE_INCOMPAT_SUPP) != 0);
++}
+ 
+ int bch_print_cache_set_feature_compat(struct cache_set *c, char *buf, int size);
+ int bch_print_cache_set_feature_ro_compat(struct cache_set *c, char *buf, int size);
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 46a00134a36ae..aa4531c2ce0df 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -64,9 +64,25 @@ static unsigned int get_bucket_size(struct cache_sb *sb, struct cache_sb_disk *s
+ {
+ 	unsigned int bucket_size = le16_to_cpu(s->bucket_size);
+ 
+-	if (sb->version >= BCACHE_SB_VERSION_CDEV_WITH_FEATURES &&
+-	     bch_has_feature_large_bucket(sb))
+-		bucket_size |= le16_to_cpu(s->bucket_size_hi) << 16;
++	if (sb->version >= BCACHE_SB_VERSION_CDEV_WITH_FEATURES) {
++		if (bch_has_feature_large_bucket(sb)) {
++			unsigned int max, order;
++
++			max = sizeof(unsigned int) * BITS_PER_BYTE - 1;
++			order = le16_to_cpu(s->bucket_size);
++			/*
++			 * bcache tool will make sure the overflow won't
++			 * happen, an error message here is enough.
++			 */
++			if (order > max)
++				pr_err("Bucket size (1 << %u) overflows\n",
++					order);
++			bucket_size = 1 << order;
++		} else if (bch_has_feature_obso_large_bucket(sb)) {
++			bucket_size +=
++				le16_to_cpu(s->obso_bucket_size_hi) << 16;
++		}
++	}
+ 
+ 	return bucket_size;
+ }
+@@ -228,6 +244,20 @@ static const char *read_super(struct cache_sb *sb, struct block_device *bdev,
+ 		sb->feature_compat = le64_to_cpu(s->feature_compat);
+ 		sb->feature_incompat = le64_to_cpu(s->feature_incompat);
+ 		sb->feature_ro_compat = le64_to_cpu(s->feature_ro_compat);
++
++		/* Check incompatible features */
++		err = "Unsupported compatible feature found";
++		if (bch_has_unknown_compat_features(sb))
++			goto err;
++
++		err = "Unsupported read-only compatible feature found";
++		if (bch_has_unknown_ro_compat_features(sb))
++			goto err;
++
++		err = "Unsupported incompatible feature found";
++		if (bch_has_unknown_incompat_features(sb))
++			goto err;
++
+ 		err = read_super_common(sb, bdev, s);
+ 		if (err)
+ 			goto err;
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index ff0bea1554f9b..3b320f3d48b30 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -380,7 +380,7 @@ static int bareudp6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 		goto free_dst;
+ 
+ 	min_headroom = LL_RESERVED_SPACE(dst->dev) + dst->header_len +
+-		BAREUDP_BASE_HLEN + info->options_len + sizeof(struct iphdr);
++		BAREUDP_BASE_HLEN + info->options_len + sizeof(struct ipv6hdr);
+ 
+ 	err = skb_cow_head(skb, min_headroom);
+ 	if (unlikely(err))
+@@ -534,6 +534,7 @@ static void bareudp_setup(struct net_device *dev)
+ 	SET_NETDEV_DEVTYPE(dev, &bareudp_type);
+ 	dev->features    |= NETIF_F_SG | NETIF_F_HW_CSUM;
+ 	dev->features    |= NETIF_F_RXCSUM;
++	dev->features    |= NETIF_F_LLTX;
+ 	dev->features    |= NETIF_F_GSO_SOFTWARE;
+ 	dev->hw_features |= NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
+ 	dev->hw_features |= NETIF_F_GSO_SOFTWARE;
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 09701c17f3f63..4b36d89bec061 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -92,9 +92,7 @@
+ 					 GSWIP_MDIO_PHY_FDUP_MASK)
+ 
+ /* GSWIP MII Registers */
+-#define GSWIP_MII_CFG0			0x00
+-#define GSWIP_MII_CFG1			0x02
+-#define GSWIP_MII_CFG5			0x04
++#define GSWIP_MII_CFGp(p)		(0x2 * (p))
+ #define  GSWIP_MII_CFG_EN		BIT(14)
+ #define  GSWIP_MII_CFG_LDCLKDIS		BIT(12)
+ #define  GSWIP_MII_CFG_MODE_MIIP	0x0
+@@ -392,17 +390,9 @@ static void gswip_mii_mask(struct gswip_priv *priv, u32 clear, u32 set,
+ static void gswip_mii_mask_cfg(struct gswip_priv *priv, u32 clear, u32 set,
+ 			       int port)
+ {
+-	switch (port) {
+-	case 0:
+-		gswip_mii_mask(priv, clear, set, GSWIP_MII_CFG0);
+-		break;
+-	case 1:
+-		gswip_mii_mask(priv, clear, set, GSWIP_MII_CFG1);
+-		break;
+-	case 5:
+-		gswip_mii_mask(priv, clear, set, GSWIP_MII_CFG5);
+-		break;
+-	}
++	/* There's no MII_CFG register for the CPU port */
++	if (!dsa_is_cpu_port(priv->ds, port))
++		gswip_mii_mask(priv, clear, set, GSWIP_MII_CFGp(port));
+ }
+ 
+ static void gswip_mii_mask_pcdu(struct gswip_priv *priv, u32 clear, u32 set,
+@@ -822,9 +812,8 @@ static int gswip_setup(struct dsa_switch *ds)
+ 	gswip_mdio_mask(priv, 0xff, 0x09, GSWIP_MDIO_MDC_CFG1);
+ 
+ 	/* Disable the xMII link */
+-	gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, 0);
+-	gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, 1);
+-	gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, 5);
++	for (i = 0; i < priv->hw_info->max_ports; i++)
++		gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, i);
+ 
+ 	/* enable special tag insertion on cpu port */
+ 	gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_STEN,
+@@ -1541,9 +1530,7 @@ static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port,
+ {
+ 	struct gswip_priv *priv = ds->priv;
+ 
+-	/* Enable the xMII interface only for the external PHY */
+-	if (interface != PHY_INTERFACE_MODE_INTERNAL)
+-		gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port);
++	gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port);
+ }
+ 
+ static void gswip_get_strings(struct dsa_switch *ds, int port, u32 stringset,
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index 0fdd19d99d99f..b1ae9eb8f2479 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -2577,6 +2577,7 @@ static int bcm_sysport_probe(struct platform_device *pdev)
+ 			 NETIF_F_HW_VLAN_CTAG_TX;
+ 	dev->hw_features |= dev->features;
+ 	dev->vlan_features |= dev->features;
++	dev->max_mtu = UMAC_MAX_MTU_SIZE;
+ 
+ 	/* Request the WOL interrupt and advertise suspend if available */
+ 	priv->wol_irq_disabled = 1;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 0af0af2b70fe4..033bfab24ef2f 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -6790,8 +6790,10 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
+ 		ctx->tqm_fp_rings_count = resp->tqm_fp_rings_count;
+ 		if (!ctx->tqm_fp_rings_count)
+ 			ctx->tqm_fp_rings_count = bp->max_q;
++		else if (ctx->tqm_fp_rings_count > BNXT_MAX_TQM_FP_RINGS)
++			ctx->tqm_fp_rings_count = BNXT_MAX_TQM_FP_RINGS;
+ 
+-		tqm_rings = ctx->tqm_fp_rings_count + 1;
++		tqm_rings = ctx->tqm_fp_rings_count + BNXT_MAX_TQM_SP_RINGS;
+ 		ctx_pg = kcalloc(tqm_rings, sizeof(*ctx_pg), GFP_KERNEL);
+ 		if (!ctx_pg) {
+ 			kfree(ctx);
+@@ -6925,7 +6927,8 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables)
+ 	     pg_attr = &req.tqm_sp_pg_size_tqm_sp_lvl,
+ 	     pg_dir = &req.tqm_sp_page_dir,
+ 	     ena = FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_SP;
+-	     i < 9; i++, num_entries++, pg_attr++, pg_dir++, ena <<= 1) {
++	     i < BNXT_MAX_TQM_RINGS;
++	     i++, num_entries++, pg_attr++, pg_dir++, ena <<= 1) {
+ 		if (!(enables & ena))
+ 			continue;
+ 
+@@ -12887,10 +12890,10 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev,
+  */
+ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
+ {
++	pci_ers_result_t result = PCI_ERS_RESULT_DISCONNECT;
+ 	struct net_device *netdev = pci_get_drvdata(pdev);
+ 	struct bnxt *bp = netdev_priv(netdev);
+ 	int err = 0, off;
+-	pci_ers_result_t result = PCI_ERS_RESULT_DISCONNECT;
+ 
+ 	netdev_info(bp->dev, "PCI Slot Reset\n");
+ 
+@@ -12919,22 +12922,8 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
+ 		pci_save_state(pdev);
+ 
+ 		err = bnxt_hwrm_func_reset(bp);
+-		if (!err) {
+-			err = bnxt_hwrm_func_qcaps(bp);
+-			if (!err && netif_running(netdev))
+-				err = bnxt_open(netdev);
+-		}
+-		bnxt_ulp_start(bp, err);
+-		if (!err) {
+-			bnxt_reenable_sriov(bp);
++		if (!err)
+ 			result = PCI_ERS_RESULT_RECOVERED;
+-		}
+-	}
+-
+-	if (result != PCI_ERS_RESULT_RECOVERED) {
+-		if (netif_running(netdev))
+-			dev_close(netdev);
+-		pci_disable_device(pdev);
+ 	}
+ 
+ 	rtnl_unlock();
+@@ -12952,10 +12941,21 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
+ static void bnxt_io_resume(struct pci_dev *pdev)
+ {
+ 	struct net_device *netdev = pci_get_drvdata(pdev);
++	struct bnxt *bp = netdev_priv(netdev);
++	int err;
+ 
++	netdev_info(bp->dev, "PCI Slot Resume\n");
+ 	rtnl_lock();
+ 
+-	netif_device_attach(netdev);
++	err = bnxt_hwrm_func_qcaps(bp);
++	if (!err && netif_running(netdev))
++		err = bnxt_open(netdev);
++
++	bnxt_ulp_start(bp, err);
++	if (!err) {
++		bnxt_reenable_sriov(bp);
++		netif_device_attach(netdev);
++	}
+ 
+ 	rtnl_unlock();
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 47b3c31278798..e4e926c65118a 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1435,6 +1435,11 @@ struct bnxt_ctx_pg_info {
+ 	struct bnxt_ctx_pg_info **ctx_pg_tbl;
+ };
+ 
++#define BNXT_MAX_TQM_SP_RINGS		1
++#define BNXT_MAX_TQM_FP_RINGS		8
++#define BNXT_MAX_TQM_RINGS		\
++	(BNXT_MAX_TQM_SP_RINGS + BNXT_MAX_TQM_FP_RINGS)
++
+ struct bnxt_ctx_mem_info {
+ 	u32	qp_max_entries;
+ 	u16	qp_min_qp1_entries;
+@@ -1473,7 +1478,7 @@ struct bnxt_ctx_mem_info {
+ 	struct bnxt_ctx_pg_info stat_mem;
+ 	struct bnxt_ctx_pg_info mrav_mem;
+ 	struct bnxt_ctx_pg_info tim_mem;
+-	struct bnxt_ctx_pg_info *tqm_mem[9];
++	struct bnxt_ctx_pg_info *tqm_mem[BNXT_MAX_TQM_RINGS];
+ };
+ 
+ struct bnxt_fw_health {
+diff --git a/drivers/net/ethernet/ethoc.c b/drivers/net/ethernet/ethoc.c
+index 0981fe9652e50..3d9b0b161e241 100644
+--- a/drivers/net/ethernet/ethoc.c
++++ b/drivers/net/ethernet/ethoc.c
+@@ -1211,7 +1211,7 @@ static int ethoc_probe(struct platform_device *pdev)
+ 	ret = mdiobus_register(priv->mdio);
+ 	if (ret) {
+ 		dev_err(&netdev->dev, "failed to register MDIO bus\n");
+-		goto free2;
++		goto free3;
+ 	}
+ 
+ 	ret = ethoc_mdio_probe(netdev);
+@@ -1243,6 +1243,7 @@ error2:
+ 	netif_napi_del(&priv->napi);
+ error:
+ 	mdiobus_unregister(priv->mdio);
++free3:
+ 	mdiobus_free(priv->mdio);
+ free2:
+ 	clk_disable_unprepare(priv->clk);
+diff --git a/drivers/net/ethernet/freescale/ucc_geth.c b/drivers/net/ethernet/freescale/ucc_geth.c
+index ba8869c3d891c..6d853f018d531 100644
+--- a/drivers/net/ethernet/freescale/ucc_geth.c
++++ b/drivers/net/ethernet/freescale/ucc_geth.c
+@@ -3889,6 +3889,7 @@ static int ucc_geth_probe(struct platform_device* ofdev)
+ 	INIT_WORK(&ugeth->timeout_work, ucc_geth_timeout_work);
+ 	netif_napi_add(dev, &ugeth->napi, ucc_geth_poll, 64);
+ 	dev->mtu = 1500;
++	dev->max_mtu = 1518;
+ 
+ 	ugeth->msg_enable = netif_msg_init(debug.msg_enable, UGETH_MSG_DEFAULT);
+ 	ugeth->phy_interface = phy_interface;
+@@ -3934,12 +3935,12 @@ static int ucc_geth_remove(struct platform_device* ofdev)
+ 	struct device_node *np = ofdev->dev.of_node;
+ 
+ 	unregister_netdev(dev);
+-	free_netdev(dev);
+ 	ucc_geth_memclean(ugeth);
+ 	if (of_phy_is_fixed_link(np))
+ 		of_phy_deregister_fixed_link(np);
+ 	of_node_put(ugeth->ug_info->tbi_node);
+ 	of_node_put(ugeth->ug_info->phy_node);
++	free_netdev(dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+index 7165da0ee9aa5..a6e3f07caf99c 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+@@ -415,6 +415,10 @@ static void __lb_other_process(struct hns_nic_ring_data *ring_data,
+ 	/* for mutl buffer*/
+ 	new_skb = skb_copy(skb, GFP_ATOMIC);
+ 	dev_kfree_skb_any(skb);
++	if (!new_skb) {
++		netdev_err(ndev, "skb alloc failed\n");
++		return;
++	}
+ 	skb = new_skb;
+ 
+ 	check_ok = 0;
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index da9450f187176..e2540cc00d34e 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -932,6 +932,7 @@ static void release_resources(struct ibmvnic_adapter *adapter)
+ 	release_rx_pools(adapter);
+ 
+ 	release_napi(adapter);
++	release_login_buffer(adapter);
+ 	release_login_rsp_buffer(adapter);
+ }
+ 
+@@ -2247,8 +2248,7 @@ static void __ibmvnic_reset(struct work_struct *work)
+ 				set_current_state(TASK_UNINTERRUPTIBLE);
+ 				schedule_timeout(60 * HZ);
+ 			}
+-		} else if (!(rwi->reset_reason == VNIC_RESET_FATAL &&
+-				adapter->from_passive_init)) {
++		} else {
+ 			rc = do_reset(adapter, rwi, reset_state);
+ 		}
+ 		kfree(rwi);
+@@ -2869,9 +2869,7 @@ static int reset_one_sub_crq_queue(struct ibmvnic_adapter *adapter,
+ 	int rc;
+ 
+ 	if (!scrq) {
+-		netdev_dbg(adapter->netdev,
+-			   "Invalid scrq reset. irq (%d) or msgs (%p).\n",
+-			   scrq->irq, scrq->msgs);
++		netdev_dbg(adapter->netdev, "Invalid scrq reset.\n");
+ 		return -EINVAL;
+ 	}
+ 
+@@ -3768,7 +3766,9 @@ static int send_login(struct ibmvnic_adapter *adapter)
+ 		return -1;
+ 	}
+ 
++	release_login_buffer(adapter);
+ 	release_login_rsp_buffer(adapter);
++
+ 	client_data_len = vnic_client_data_len(adapter);
+ 
+ 	buffer_size =
+diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h
+index ba7a0f8f69376..5b2143f4b1f85 100644
+--- a/drivers/net/ethernet/intel/e1000e/e1000.h
++++ b/drivers/net/ethernet/intel/e1000e/e1000.h
+@@ -436,6 +436,7 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca);
+ #define FLAG2_DFLT_CRC_STRIPPING          BIT(12)
+ #define FLAG2_CHECK_RX_HWTSTAMP           BIT(13)
+ #define FLAG2_CHECK_SYSTIM_OVERFLOW       BIT(14)
++#define FLAG2_ENABLE_S0IX_FLOWS           BIT(15)
+ 
+ #define E1000_RX_DESC_PS(R, i)	    \
+ 	(&(((union e1000_rx_desc_packet_split *)((R).desc))[i]))
+diff --git a/drivers/net/ethernet/intel/e1000e/ethtool.c b/drivers/net/ethernet/intel/e1000e/ethtool.c
+index 03215b0aee4bd..06442e6bef731 100644
+--- a/drivers/net/ethernet/intel/e1000e/ethtool.c
++++ b/drivers/net/ethernet/intel/e1000e/ethtool.c
+@@ -23,6 +23,13 @@ struct e1000_stats {
+ 	int stat_offset;
+ };
+ 
++static const char e1000e_priv_flags_strings[][ETH_GSTRING_LEN] = {
++#define E1000E_PRIV_FLAGS_S0IX_ENABLED	BIT(0)
++	"s0ix-enabled",
++};
++
++#define E1000E_PRIV_FLAGS_STR_LEN ARRAY_SIZE(e1000e_priv_flags_strings)
++
+ #define E1000_STAT(str, m) { \
+ 		.stat_string = str, \
+ 		.type = E1000_STATS, \
+@@ -1776,6 +1783,8 @@ static int e1000e_get_sset_count(struct net_device __always_unused *netdev,
+ 		return E1000_TEST_LEN;
+ 	case ETH_SS_STATS:
+ 		return E1000_STATS_LEN;
++	case ETH_SS_PRIV_FLAGS:
++		return E1000E_PRIV_FLAGS_STR_LEN;
+ 	default:
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -2097,6 +2106,10 @@ static void e1000_get_strings(struct net_device __always_unused *netdev,
+ 			p += ETH_GSTRING_LEN;
+ 		}
+ 		break;
++	case ETH_SS_PRIV_FLAGS:
++		memcpy(data, e1000e_priv_flags_strings,
++		       E1000E_PRIV_FLAGS_STR_LEN * ETH_GSTRING_LEN);
++		break;
+ 	}
+ }
+ 
+@@ -2305,6 +2318,37 @@ static int e1000e_get_ts_info(struct net_device *netdev,
+ 	return 0;
+ }
+ 
++static u32 e1000e_get_priv_flags(struct net_device *netdev)
++{
++	struct e1000_adapter *adapter = netdev_priv(netdev);
++	u32 priv_flags = 0;
++
++	if (adapter->flags2 & FLAG2_ENABLE_S0IX_FLOWS)
++		priv_flags |= E1000E_PRIV_FLAGS_S0IX_ENABLED;
++
++	return priv_flags;
++}
++
++static int e1000e_set_priv_flags(struct net_device *netdev, u32 priv_flags)
++{
++	struct e1000_adapter *adapter = netdev_priv(netdev);
++	unsigned int flags2 = adapter->flags2;
++
++	flags2 &= ~FLAG2_ENABLE_S0IX_FLOWS;
++	if (priv_flags & E1000E_PRIV_FLAGS_S0IX_ENABLED) {
++		struct e1000_hw *hw = &adapter->hw;
++
++		if (hw->mac.type < e1000_pch_cnp)
++			return -EINVAL;
++		flags2 |= FLAG2_ENABLE_S0IX_FLOWS;
++	}
++
++	if (flags2 != adapter->flags2)
++		adapter->flags2 = flags2;
++
++	return 0;
++}
++
+ static const struct ethtool_ops e1000_ethtool_ops = {
+ 	.supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS,
+ 	.get_drvinfo		= e1000_get_drvinfo,
+@@ -2336,6 +2380,8 @@ static const struct ethtool_ops e1000_ethtool_ops = {
+ 	.set_eee		= e1000e_set_eee,
+ 	.get_link_ksettings	= e1000_get_link_ksettings,
+ 	.set_link_ksettings	= e1000_set_link_ksettings,
++	.get_priv_flags		= e1000e_get_priv_flags,
++	.set_priv_flags		= e1000e_set_priv_flags,
+ };
+ 
+ void e1000e_set_ethtool_ops(struct net_device *netdev)
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 9aa6fad8ed477..6fb46682b058a 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -1240,6 +1240,9 @@ static s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force)
+ 		return 0;
+ 
+ 	if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID) {
++		struct e1000_adapter *adapter = hw->adapter;
++		bool firmware_bug = false;
++
+ 		if (force) {
+ 			/* Request ME un-configure ULP mode in the PHY */
+ 			mac_reg = er32(H2ME);
+@@ -1248,16 +1251,24 @@ static s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force)
+ 			ew32(H2ME, mac_reg);
+ 		}
+ 
+-		/* Poll up to 300msec for ME to clear ULP_CFG_DONE. */
++		/* Poll up to 2.5 seconds for ME to clear ULP_CFG_DONE.
++		 * If this takes more than 1 second, show a warning indicating a
++		 * firmware bug
++		 */
+ 		while (er32(FWSM) & E1000_FWSM_ULP_CFG_DONE) {
+-			if (i++ == 30) {
++			if (i++ == 250) {
+ 				ret_val = -E1000_ERR_PHY;
+ 				goto out;
+ 			}
++			if (i > 100 && !firmware_bug)
++				firmware_bug = true;
+ 
+ 			usleep_range(10000, 11000);
+ 		}
+-		e_dbg("ULP_CONFIG_DONE cleared after %dmsec\n", i * 10);
++		if (firmware_bug)
++			e_warn("ULP_CONFIG_DONE took %dmsec.  This is a firmware bug\n", i * 10);
++		else
++			e_dbg("ULP_CONFIG_DONE cleared after %dmsec\n", i * 10);
+ 
+ 		if (force) {
+ 			mac_reg = er32(H2ME);
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 128ab6898070e..e9b82c209c2df 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -103,45 +103,6 @@ static const struct e1000_reg_info e1000_reg_info_tbl[] = {
+ 	{0, NULL}
+ };
+ 
+-struct e1000e_me_supported {
+-	u16 device_id;		/* supported device ID */
+-};
+-
+-static const struct e1000e_me_supported me_supported[] = {
+-	{E1000_DEV_ID_PCH_LPT_I217_LM},
+-	{E1000_DEV_ID_PCH_LPTLP_I218_LM},
+-	{E1000_DEV_ID_PCH_I218_LM2},
+-	{E1000_DEV_ID_PCH_I218_LM3},
+-	{E1000_DEV_ID_PCH_SPT_I219_LM},
+-	{E1000_DEV_ID_PCH_SPT_I219_LM2},
+-	{E1000_DEV_ID_PCH_LBG_I219_LM3},
+-	{E1000_DEV_ID_PCH_SPT_I219_LM4},
+-	{E1000_DEV_ID_PCH_SPT_I219_LM5},
+-	{E1000_DEV_ID_PCH_CNP_I219_LM6},
+-	{E1000_DEV_ID_PCH_CNP_I219_LM7},
+-	{E1000_DEV_ID_PCH_ICP_I219_LM8},
+-	{E1000_DEV_ID_PCH_ICP_I219_LM9},
+-	{E1000_DEV_ID_PCH_CMP_I219_LM10},
+-	{E1000_DEV_ID_PCH_CMP_I219_LM11},
+-	{E1000_DEV_ID_PCH_CMP_I219_LM12},
+-	{E1000_DEV_ID_PCH_TGP_I219_LM13},
+-	{E1000_DEV_ID_PCH_TGP_I219_LM14},
+-	{E1000_DEV_ID_PCH_TGP_I219_LM15},
+-	{0}
+-};
+-
+-static bool e1000e_check_me(u16 device_id)
+-{
+-	struct e1000e_me_supported *id;
+-
+-	for (id = (struct e1000e_me_supported *)me_supported;
+-	     id->device_id; id++)
+-		if (device_id == id->device_id)
+-			return true;
+-
+-	return false;
+-}
+-
+ /**
+  * __ew32_prepare - prepare to write to MAC CSR register on certain parts
+  * @hw: pointer to the HW structure
+@@ -6962,7 +6923,6 @@ static __maybe_unused int e1000e_pm_suspend(struct device *dev)
+ 	struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev));
+ 	struct e1000_adapter *adapter = netdev_priv(netdev);
+ 	struct pci_dev *pdev = to_pci_dev(dev);
+-	struct e1000_hw *hw = &adapter->hw;
+ 	int rc;
+ 
+ 	e1000e_flush_lpic(pdev);
+@@ -6970,13 +6930,13 @@ static __maybe_unused int e1000e_pm_suspend(struct device *dev)
+ 	e1000e_pm_freeze(dev);
+ 
+ 	rc = __e1000_shutdown(pdev, false);
+-	if (rc)
++	if (rc) {
+ 		e1000e_pm_thaw(dev);
+-
+-	/* Introduce S0ix implementation */
+-	if (hw->mac.type >= e1000_pch_cnp &&
+-	    !e1000e_check_me(hw->adapter->pdev->device))
+-		e1000e_s0ix_entry_flow(adapter);
++	} else {
++		/* Introduce S0ix implementation */
++		if (adapter->flags2 & FLAG2_ENABLE_S0IX_FLOWS)
++			e1000e_s0ix_entry_flow(adapter);
++	}
+ 
+ 	return rc;
+ }
+@@ -6986,12 +6946,10 @@ static __maybe_unused int e1000e_pm_resume(struct device *dev)
+ 	struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev));
+ 	struct e1000_adapter *adapter = netdev_priv(netdev);
+ 	struct pci_dev *pdev = to_pci_dev(dev);
+-	struct e1000_hw *hw = &adapter->hw;
+ 	int rc;
+ 
+ 	/* Introduce S0ix implementation */
+-	if (hw->mac.type >= e1000_pch_cnp &&
+-	    !e1000e_check_me(hw->adapter->pdev->device))
++	if (adapter->flags2 & FLAG2_ENABLE_S0IX_FLOWS)
+ 		e1000e_s0ix_exit_flow(adapter);
+ 
+ 	rc = __e1000_resume(pdev);
+@@ -7655,6 +7613,9 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (!(adapter->flags & FLAG_HAS_AMT))
+ 		e1000e_get_hw_control(adapter);
+ 
++	if (hw->mac.type >= e1000_pch_cnp)
++		adapter->flags2 |= FLAG2_ENABLE_S0IX_FLOWS;
++
+ 	strlcpy(netdev->name, "eth%d", sizeof(netdev->name));
+ 	err = register_netdev(netdev);
+ 	if (err)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index d231a2cdd98ff..118473dfdcbd2 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -120,6 +120,7 @@ enum i40e_state_t {
+ 	__I40E_RESET_INTR_RECEIVED,
+ 	__I40E_REINIT_REQUESTED,
+ 	__I40E_PF_RESET_REQUESTED,
++	__I40E_PF_RESET_AND_REBUILD_REQUESTED,
+ 	__I40E_CORE_RESET_REQUESTED,
+ 	__I40E_GLOBAL_RESET_REQUESTED,
+ 	__I40E_EMP_RESET_INTR_RECEIVED,
+@@ -146,6 +147,8 @@ enum i40e_state_t {
+ };
+ 
+ #define I40E_PF_RESET_FLAG	BIT_ULL(__I40E_PF_RESET_REQUESTED)
++#define I40E_PF_RESET_AND_REBUILD_FLAG	\
++	BIT_ULL(__I40E_PF_RESET_AND_REBUILD_REQUESTED)
+ 
+ /* VSI state flags */
+ enum i40e_vsi_state_t {
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 1337686bd0998..1db482d310c2d 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -36,6 +36,8 @@ static int i40e_setup_misc_vector(struct i40e_pf *pf);
+ static void i40e_determine_queue_usage(struct i40e_pf *pf);
+ static int i40e_setup_pf_filter_control(struct i40e_pf *pf);
+ static void i40e_prep_for_reset(struct i40e_pf *pf, bool lock_acquired);
++static void i40e_reset_and_rebuild(struct i40e_pf *pf, bool reinit,
++				   bool lock_acquired);
+ static int i40e_reset(struct i40e_pf *pf);
+ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired);
+ static int i40e_setup_misc_vector_for_recovery_mode(struct i40e_pf *pf);
+@@ -8536,6 +8538,14 @@ void i40e_do_reset(struct i40e_pf *pf, u32 reset_flags, bool lock_acquired)
+ 			 "FW LLDP is disabled\n" :
+ 			 "FW LLDP is enabled\n");
+ 
++	} else if (reset_flags & I40E_PF_RESET_AND_REBUILD_FLAG) {
++		/* Request a PF Reset
++		 *
++		 * Resets PF and reinitializes PFs VSI.
++		 */
++		i40e_prep_for_reset(pf, lock_acquired);
++		i40e_reset_and_rebuild(pf, true, lock_acquired);
++
+ 	} else if (reset_flags & BIT_ULL(__I40E_REINIT_REQUESTED)) {
+ 		int v;
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 1b5390ec3d78a..61968e9174dab 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1772,7 +1772,7 @@ int i40e_pci_sriov_configure(struct pci_dev *pdev, int num_vfs)
+ 	if (num_vfs) {
+ 		if (!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) {
+ 			pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
+-			i40e_do_reset_safe(pf, I40E_PF_RESET_FLAG);
++			i40e_do_reset_safe(pf, I40E_PF_RESET_AND_REBUILD_FLAG);
+ 		}
+ 		ret = i40e_pci_sriov_enable(pdev, num_vfs);
+ 		goto sriov_configure_out;
+@@ -1781,7 +1781,7 @@ int i40e_pci_sriov_configure(struct pci_dev *pdev, int num_vfs)
+ 	if (!pci_vfs_assigned(pf->pdev)) {
+ 		i40e_free_vfs(pf);
+ 		pf->flags &= ~I40E_FLAG_VEB_MODE_ENABLED;
+-		i40e_do_reset_safe(pf, I40E_PF_RESET_FLAG);
++		i40e_do_reset_safe(pf, I40E_PF_RESET_AND_REBUILD_FLAG);
+ 	} else {
+ 		dev_warn(&pdev->dev, "Unable to free VFs because some are assigned to VMs.\n");
+ 		ret = -EINVAL;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 95543dfd4fe77..0a867d64d4675 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1834,11 +1834,9 @@ static int iavf_init_get_resources(struct iavf_adapter *adapter)
+ 	netif_tx_stop_all_queues(netdev);
+ 	if (CLIENT_ALLOWED(adapter)) {
+ 		err = iavf_lan_add_device(adapter);
+-		if (err) {
+-			rtnl_unlock();
++		if (err)
+ 			dev_info(&pdev->dev, "Failed to add VF to client API service list: %d\n",
+ 				 err);
+-		}
+ 	}
+ 	dev_info(&pdev->dev, "MAC address: %pM\n", adapter->hw.mac.addr);
+ 	if (netdev->features & NETIF_F_GRO)
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 4a9041ee1b391..c3e429445b83d 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -5232,7 +5232,7 @@ static int mvneta_probe(struct platform_device *pdev)
+ 	err = mvneta_port_power_up(pp, pp->phy_interface);
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "can't power up port\n");
+-		return err;
++		goto err_netdev;
+ 	}
+ 
+ 	/* Armada3700 network controller does not support per-cpu
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index cea886c5bcb57..2026c923b5855 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1231,7 +1231,7 @@ static void mvpp22_gop_init_rgmii(struct mvpp2_port *port)
+ 
+ 	regmap_read(priv->sysctrl_base, GENCONF_CTRL0, &val);
+ 	if (port->gop_id == 2)
+-		val |= GENCONF_CTRL0_PORT0_RGMII | GENCONF_CTRL0_PORT1_RGMII;
++		val |= GENCONF_CTRL0_PORT0_RGMII;
+ 	else if (port->gop_id == 3)
+ 		val |= GENCONF_CTRL0_PORT1_RGMII_MII;
+ 	regmap_write(priv->sysctrl_base, GENCONF_CTRL0, val);
+@@ -2370,17 +2370,18 @@ static void mvpp2_rx_pkts_coal_set(struct mvpp2_port *port,
+ static void mvpp2_tx_pkts_coal_set(struct mvpp2_port *port,
+ 				   struct mvpp2_tx_queue *txq)
+ {
+-	unsigned int thread = mvpp2_cpu_to_thread(port->priv, get_cpu());
++	unsigned int thread;
+ 	u32 val;
+ 
+ 	if (txq->done_pkts_coal > MVPP2_TXQ_THRESH_MASK)
+ 		txq->done_pkts_coal = MVPP2_TXQ_THRESH_MASK;
+ 
+ 	val = (txq->done_pkts_coal << MVPP2_TXQ_THRESH_OFFSET);
+-	mvpp2_thread_write(port->priv, thread, MVPP2_TXQ_NUM_REG, txq->id);
+-	mvpp2_thread_write(port->priv, thread, MVPP2_TXQ_THRESH_REG, val);
+-
+-	put_cpu();
++	/* PKT-coalescing registers are per-queue + per-thread */
++	for (thread = 0; thread < MVPP2_MAX_THREADS; thread++) {
++		mvpp2_thread_write(port->priv, thread, MVPP2_TXQ_NUM_REG, txq->id);
++		mvpp2_thread_write(port->priv, thread, MVPP2_TXQ_THRESH_REG, val);
++	}
+ }
+ 
+ static u32 mvpp2_usec_to_cycles(u32 usec, unsigned long clk_hz)
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+index 5692c6087bbb0..a30eb90ba3d28 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+@@ -405,6 +405,38 @@ static int mvpp2_prs_tcam_first_free(struct mvpp2 *priv, unsigned char start,
+ 	return -EINVAL;
+ }
+ 
++/* Drop flow control pause frames */
++static void mvpp2_prs_drop_fc(struct mvpp2 *priv)
++{
++	unsigned char da[ETH_ALEN] = { 0x01, 0x80, 0xC2, 0x00, 0x00, 0x01 };
++	struct mvpp2_prs_entry pe;
++	unsigned int len;
++
++	memset(&pe, 0, sizeof(pe));
++
++	/* For all ports - drop flow control frames */
++	pe.index = MVPP2_PE_FC_DROP;
++	mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_MAC);
++
++	/* Set match on DA */
++	len = ETH_ALEN;
++	while (len--)
++		mvpp2_prs_tcam_data_byte_set(&pe, len, da[len], 0xff);
++
++	mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_DROP_MASK,
++				 MVPP2_PRS_RI_DROP_MASK);
++
++	mvpp2_prs_sram_bits_set(&pe, MVPP2_PRS_SRAM_LU_GEN_BIT, 1);
++	mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_FLOWS);
++
++	/* Mask all ports */
++	mvpp2_prs_tcam_port_map_set(&pe, MVPP2_PRS_PORT_MASK);
++
++	/* Update shadow table and hw entry */
++	mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_MAC);
++	mvpp2_prs_hw_write(priv, &pe);
++}
++
+ /* Enable/disable dropping all mac da's */
+ static void mvpp2_prs_mac_drop_all_set(struct mvpp2 *priv, int port, bool add)
+ {
+@@ -1162,6 +1194,7 @@ static void mvpp2_prs_mac_init(struct mvpp2 *priv)
+ 	mvpp2_prs_hw_write(priv, &pe);
+ 
+ 	/* Create dummy entries for drop all and promiscuous modes */
++	mvpp2_prs_drop_fc(priv);
+ 	mvpp2_prs_mac_drop_all_set(priv, 0, false);
+ 	mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_UNI_CAST, false);
+ 	mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_MULTI_CAST, false);
+@@ -1647,8 +1680,9 @@ static int mvpp2_prs_pppoe_init(struct mvpp2 *priv)
+ 	mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP6);
+ 	mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP6,
+ 				 MVPP2_PRS_RI_L3_PROTO_MASK);
+-	/* Skip eth_type + 4 bytes of IPv6 header */
+-	mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN + 4,
++	/* Jump to DIP of IPV6 header */
++	mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN + 8 +
++				 MVPP2_MAX_L3_ADDR_SIZE,
+ 				 MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
+ 	/* Set L3 offset */
+ 	mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L3,
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h
+index e22f6c85d3803..4b68dd3747338 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h
+@@ -129,7 +129,7 @@
+ #define MVPP2_PE_VID_EDSA_FLTR_DEFAULT	(MVPP2_PRS_TCAM_SRAM_SIZE - 7)
+ #define MVPP2_PE_VLAN_DBL		(MVPP2_PRS_TCAM_SRAM_SIZE - 6)
+ #define MVPP2_PE_VLAN_NONE		(MVPP2_PRS_TCAM_SRAM_SIZE - 5)
+-/* reserved */
++#define MVPP2_PE_FC_DROP		(MVPP2_PRS_TCAM_SRAM_SIZE - 4)
+ #define MVPP2_PE_MAC_MC_PROMISCUOUS	(MVPP2_PRS_TCAM_SRAM_SIZE - 3)
+ #define MVPP2_PE_MAC_UC_PROMISCUOUS	(MVPP2_PRS_TCAM_SRAM_SIZE - 2)
+ #define MVPP2_PE_MAC_NON_PROMISCUOUS	(MVPP2_PRS_TCAM_SRAM_SIZE - 1)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+index 07ee1d236ab3e..2ed17913af80e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+@@ -366,6 +366,15 @@ struct mlx5e_swp_spec {
+ 	u8 tun_l4_proto;
+ };
+ 
++static inline void mlx5e_eseg_swp_offsets_add_vlan(struct mlx5_wqe_eth_seg *eseg)
++{
++	/* SWP offsets are in 2-bytes words */
++	eseg->swp_outer_l3_offset += VLAN_HLEN / 2;
++	eseg->swp_outer_l4_offset += VLAN_HLEN / 2;
++	eseg->swp_inner_l3_offset += VLAN_HLEN / 2;
++	eseg->swp_inner_l4_offset += VLAN_HLEN / 2;
++}
++
+ static inline void
+ mlx5e_set_eseg_swp(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg,
+ 		   struct mlx5e_swp_spec *swp_spec)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
+index 899b98aca0d3f..1fae7fab8297e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
+@@ -51,7 +51,7 @@ static inline bool mlx5_geneve_tx_allowed(struct mlx5_core_dev *mdev)
+ }
+ 
+ static inline void
+-mlx5e_tx_tunnel_accel(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg)
++mlx5e_tx_tunnel_accel(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg, u16 ihs)
+ {
+ 	struct mlx5e_swp_spec swp_spec = {};
+ 	unsigned int offset = 0;
+@@ -85,6 +85,8 @@ mlx5e_tx_tunnel_accel(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg)
+ 	}
+ 
+ 	mlx5e_set_eseg_swp(skb, eseg, &swp_spec);
++	if (skb_vlan_tag_present(skb) &&  ihs)
++		mlx5e_eseg_swp_offsets_add_vlan(eseg);
+ }
+ 
+ #else
+@@ -163,7 +165,7 @@ static inline unsigned int mlx5e_accel_tx_ids_len(struct mlx5e_txqsq *sq,
+ 
+ static inline bool mlx5e_accel_tx_eseg(struct mlx5e_priv *priv,
+ 				       struct sk_buff *skb,
+-				       struct mlx5_wqe_eth_seg *eseg)
++				       struct mlx5_wqe_eth_seg *eseg, u16 ihs)
+ {
+ #ifdef CONFIG_MLX5_EN_IPSEC
+ 	if (xfrm_offload(skb))
+@@ -172,7 +174,7 @@ static inline bool mlx5e_accel_tx_eseg(struct mlx5e_priv *priv,
+ 
+ #if IS_ENABLED(CONFIG_GENEVE)
+ 	if (skb->encapsulation)
+-		mlx5e_tx_tunnel_accel(skb, eseg);
++		mlx5e_tx_tunnel_accel(skb, eseg, ihs);
+ #endif
+ 
+ 	return true;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index d97203cf6a007..38a23d209b33b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -615,9 +615,9 @@ void mlx5e_tx_mpwqe_ensure_complete(struct mlx5e_txqsq *sq)
+ 
+ static bool mlx5e_txwqe_build_eseg(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq,
+ 				   struct sk_buff *skb, struct mlx5e_accel_tx_state *accel,
+-				   struct mlx5_wqe_eth_seg *eseg)
++				   struct mlx5_wqe_eth_seg *eseg, u16 ihs)
+ {
+-	if (unlikely(!mlx5e_accel_tx_eseg(priv, skb, eseg)))
++	if (unlikely(!mlx5e_accel_tx_eseg(priv, skb, eseg, ihs)))
+ 		return false;
+ 
+ 	mlx5e_txwqe_build_eseg_csum(sq, skb, accel, eseg);
+@@ -647,7 +647,8 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		if (mlx5e_tx_skb_supports_mpwqe(skb, &attr)) {
+ 			struct mlx5_wqe_eth_seg eseg = {};
+ 
+-			if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &eseg)))
++			if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &eseg,
++							     attr.ihs)))
+ 				return NETDEV_TX_OK;
+ 
+ 			mlx5e_sq_xmit_mpwqe(sq, skb, &eseg, netdev_xmit_more());
+@@ -664,7 +665,7 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	/* May update the WQE, but may not post other WQEs. */
+ 	mlx5e_accel_tx_finish(sq, wqe, &accel,
+ 			      (struct mlx5_wqe_inline_seg *)(wqe->data + wqe_attr.ds_cnt_inl));
+-	if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &wqe->eth)))
++	if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &wqe->eth, attr.ihs)))
+ 		return NETDEV_TX_OK;
+ 
+ 	mlx5e_sq_xmit_wqe(sq, skb, &attr, &wqe_attr, wqe, pi, netdev_xmit_more());
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+index b3d2250c77d04..a81feffb09b8b 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+@@ -337,7 +337,7 @@ void ionic_rx_fill(struct ionic_queue *q)
+ 	unsigned int i, j;
+ 	unsigned int len;
+ 
+-	len = netdev->mtu + ETH_HLEN;
++	len = netdev->mtu + ETH_HLEN + VLAN_HLEN;
+ 	nfrags = round_up(len, PAGE_SIZE) / PAGE_SIZE;
+ 
+ 	for (i = ionic_q_space_avail(q); i; i--) {
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c
+index a2494bf850079..ca0ee29a57b50 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_fp.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c
+@@ -1799,6 +1799,11 @@ netdev_features_t qede_features_check(struct sk_buff *skb,
+ 			      ntohs(udp_hdr(skb)->dest) != gnv_port))
+ 				return features & ~(NETIF_F_CSUM_MASK |
+ 						    NETIF_F_GSO_MASK);
++		} else if (l4_proto == IPPROTO_IPIP) {
++			/* IPIP tunnels are unknown to the device or at least unsupported natively,
++			 * offloads for them can't be done trivially, so disable them for such skb.
++			 */
++			return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 85d9c3e30c699..762cabf16157b 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -2243,7 +2243,8 @@ static void rtl_pll_power_down(struct rtl8169_private *tp)
+ 	}
+ 
+ 	switch (tp->mac_version) {
+-	case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_33:
++	case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_26:
++	case RTL_GIGA_MAC_VER_32 ... RTL_GIGA_MAC_VER_33:
+ 	case RTL_GIGA_MAC_VER_37:
+ 	case RTL_GIGA_MAC_VER_39:
+ 	case RTL_GIGA_MAC_VER_43:
+@@ -2269,7 +2270,8 @@ static void rtl_pll_power_down(struct rtl8169_private *tp)
+ static void rtl_pll_power_up(struct rtl8169_private *tp)
+ {
+ 	switch (tp->mac_version) {
+-	case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_33:
++	case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_26:
++	case RTL_GIGA_MAC_VER_32 ... RTL_GIGA_MAC_VER_33:
+ 	case RTL_GIGA_MAC_VER_37:
+ 	case RTL_GIGA_MAC_VER_39:
+ 	case RTL_GIGA_MAC_VER_43:
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index 81ee0a071b4e9..e5234bb02dafd 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -725,6 +725,8 @@ static SIMPLE_DEV_PM_OPS(intel_eth_pm_ops, intel_eth_pci_suspend,
+ #define PCI_DEVICE_ID_INTEL_EHL_PSE1_RGMII1G_ID		0x4bb0
+ #define PCI_DEVICE_ID_INTEL_EHL_PSE1_SGMII1G_ID		0x4bb1
+ #define PCI_DEVICE_ID_INTEL_EHL_PSE1_SGMII2G5_ID	0x4bb2
++#define PCI_DEVICE_ID_INTEL_TGLH_SGMII1G_0_ID		0x43ac
++#define PCI_DEVICE_ID_INTEL_TGLH_SGMII1G_1_ID		0x43a2
+ #define PCI_DEVICE_ID_INTEL_TGL_SGMII1G_ID		0xa0ac
+ 
+ static const struct pci_device_id intel_eth_pci_id_table[] = {
+@@ -739,6 +741,8 @@ static const struct pci_device_id intel_eth_pci_id_table[] = {
+ 	{ PCI_DEVICE_DATA(INTEL, EHL_PSE1_SGMII1G_ID, &ehl_pse1_sgmii1g_info) },
+ 	{ PCI_DEVICE_DATA(INTEL, EHL_PSE1_SGMII2G5_ID, &ehl_pse1_sgmii1g_info) },
+ 	{ PCI_DEVICE_DATA(INTEL, TGL_SGMII1G_ID, &tgl_sgmii1g_info) },
++	{ PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_0_ID, &tgl_sgmii1g_info) },
++	{ PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_1_ID, &tgl_sgmii1g_info) },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(pci, intel_eth_pci_id_table);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+index 6d6bd77bb6afc..9ddadae8e4c51 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+@@ -135,7 +135,7 @@ static int meson8b_init_rgmii_tx_clk(struct meson8b_dwmac *dwmac)
+ 	struct device *dev = dwmac->dev;
+ 	static const struct clk_parent_data mux_parents[] = {
+ 		{ .fw_name = "clkin0", },
+-		{ .fw_name = "clkin1", },
++		{ .index = -1, },
+ 	};
+ 	static const struct clk_div_table div_table[] = {
+ 		{ .div = 2, .val = 2, },
+diff --git a/drivers/net/ethernet/ti/cpts.c b/drivers/net/ethernet/ti/cpts.c
+index d1fc7955d4227..43222a34cba06 100644
+--- a/drivers/net/ethernet/ti/cpts.c
++++ b/drivers/net/ethernet/ti/cpts.c
+@@ -599,6 +599,7 @@ void cpts_unregister(struct cpts *cpts)
+ 
+ 	ptp_clock_unregister(cpts->clock);
+ 	cpts->clock = NULL;
++	cpts->phc_index = -1;
+ 
+ 	cpts_write32(cpts, 0, int_enable);
+ 	cpts_write32(cpts, 0, control);
+@@ -784,6 +785,7 @@ struct cpts *cpts_create(struct device *dev, void __iomem *regs,
+ 	cpts->cc.read = cpts_systim_read;
+ 	cpts->cc.mask = CLOCKSOURCE_MASK(32);
+ 	cpts->info = cpts_info;
++	cpts->phc_index = -1;
+ 
+ 	if (n_ext_ts)
+ 		cpts->info.n_ext_ts = n_ext_ts;
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index cd06cae760356..1ac80756e5afa 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1401,7 +1401,7 @@ static struct sk_buff *tun_napi_alloc_frags(struct tun_file *tfile,
+ 	int i;
+ 
+ 	if (it->nr_segs > MAX_SKB_FRAGS + 1)
+-		return ERR_PTR(-ENOMEM);
++		return ERR_PTR(-EMSGSIZE);
+ 
+ 	local_bh_disable();
+ 	skb = napi_get_frags(&tfile->napi);
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index e04f588538ccb..5dc1365dc1f9a 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -1863,9 +1863,6 @@ static void cdc_ncm_status(struct usbnet *dev, struct urb *urb)
+ 		 * USB_CDC_NOTIFY_NETWORK_CONNECTION notification shall be
+ 		 * sent by device after USB_CDC_NOTIFY_SPEED_CHANGE.
+ 		 */
+-		netif_info(dev, link, dev->net,
+-			   "network connection: %sconnected\n",
+-			   !!event->wValue ? "" : "dis");
+ 		usbnet_link_change(dev, !!event->wValue, 0);
+ 		break;
+ 
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index fc378ff56775b..21120b4e5637d 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1036,6 +1036,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0125)},	/* Quectel EC25, EC20 R2.0  Mini PCIe */
+ 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0306)},	/* Quectel EP06/EG06/EM06 */
+ 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0512)},	/* Quectel EG12/EM12 */
++	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0620)},	/* Quectel EM160R-GL */
+ 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0800)},	/* Quectel RM500Q-GL */
+ 
+ 	/* 3. Combined interface devices matching on interface number */
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 34bb95dd92392..038ce4e5e84ba 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -2093,14 +2093,16 @@ static int virtnet_set_channels(struct net_device *dev,
+ 
+ 	get_online_cpus();
+ 	err = _virtnet_set_queues(vi, queue_pairs);
+-	if (!err) {
+-		netif_set_real_num_tx_queues(dev, queue_pairs);
+-		netif_set_real_num_rx_queues(dev, queue_pairs);
+-
+-		virtnet_set_affinity(vi);
++	if (err) {
++		put_online_cpus();
++		goto err;
+ 	}
++	virtnet_set_affinity(vi);
+ 	put_online_cpus();
+ 
++	netif_set_real_num_tx_queues(dev, queue_pairs);
++	netif_set_real_num_rx_queues(dev, queue_pairs);
++ err:
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/wan/hdlc_ppp.c b/drivers/net/wan/hdlc_ppp.c
+index 64f8556513369..261b53fc8e04c 100644
+--- a/drivers/net/wan/hdlc_ppp.c
++++ b/drivers/net/wan/hdlc_ppp.c
+@@ -569,6 +569,13 @@ static void ppp_timer(struct timer_list *t)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&ppp->lock, flags);
++	/* mod_timer could be called after we entered this function but
++	 * before we got the lock.
++	 */
++	if (timer_pending(&proto->timer)) {
++		spin_unlock_irqrestore(&ppp->lock, flags);
++		return;
++	}
+ 	switch (proto->state) {
+ 	case STOPPING:
+ 	case REQ_SENT:
+diff --git a/drivers/net/wireless/realtek/rtlwifi/core.c b/drivers/net/wireless/realtek/rtlwifi/core.c
+index a7259dbc953da..965bd95890459 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/core.c
++++ b/drivers/net/wireless/realtek/rtlwifi/core.c
+@@ -78,7 +78,6 @@ static void rtl_fw_do_work(const struct firmware *firmware, void *context,
+ 
+ 	rtl_dbg(rtlpriv, COMP_ERR, DBG_LOUD,
+ 		"Firmware callback routine entered!\n");
+-	complete(&rtlpriv->firmware_loading_complete);
+ 	if (!firmware) {
+ 		if (rtlpriv->cfg->alt_fw_name) {
+ 			err = request_firmware(&firmware,
+@@ -91,13 +90,13 @@ static void rtl_fw_do_work(const struct firmware *firmware, void *context,
+ 		}
+ 		pr_err("Selected firmware is not available\n");
+ 		rtlpriv->max_fw_size = 0;
+-		return;
++		goto exit;
+ 	}
+ found_alt:
+ 	if (firmware->size > rtlpriv->max_fw_size) {
+ 		pr_err("Firmware is too big!\n");
+ 		release_firmware(firmware);
+-		return;
++		goto exit;
+ 	}
+ 	if (!is_wow) {
+ 		memcpy(rtlpriv->rtlhal.pfirmware, firmware->data,
+@@ -109,6 +108,9 @@ found_alt:
+ 		rtlpriv->rtlhal.wowlan_fwsize = firmware->size;
+ 	}
+ 	release_firmware(firmware);
++
++exit:
++	complete(&rtlpriv->firmware_loading_complete);
+ }
+ 
+ void rtl_fw_cb(const struct firmware *firmware, void *context)
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 2d17137f8ff3b..31d7a6ddc9db7 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -249,7 +249,8 @@ int __scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
+ 
+ 	req = blk_get_request(sdev->request_queue,
+ 			data_direction == DMA_TO_DEVICE ?
+-			REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, BLK_MQ_REQ_PREEMPT);
++			REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN,
++			rq_flags & RQF_PM ? BLK_MQ_REQ_PM : 0);
+ 	if (IS_ERR(req))
+ 		return ret;
+ 	rq = scsi_req(req);
+@@ -1203,6 +1204,8 @@ static blk_status_t
+ scsi_device_state_check(struct scsi_device *sdev, struct request *req)
+ {
+ 	switch (sdev->sdev_state) {
++	case SDEV_CREATED:
++		return BLK_STS_OK;
+ 	case SDEV_OFFLINE:
+ 	case SDEV_TRANSPORT_OFFLINE:
+ 		/*
+@@ -1229,18 +1232,18 @@ scsi_device_state_check(struct scsi_device *sdev, struct request *req)
+ 		return BLK_STS_RESOURCE;
+ 	case SDEV_QUIESCE:
+ 		/*
+-		 * If the devices is blocked we defer normal commands.
++		 * If the device is blocked we only accept power management
++		 * commands.
+ 		 */
+-		if (req && !(req->rq_flags & RQF_PREEMPT))
++		if (req && WARN_ON_ONCE(!(req->rq_flags & RQF_PM)))
+ 			return BLK_STS_RESOURCE;
+ 		return BLK_STS_OK;
+ 	default:
+ 		/*
+ 		 * For any other not fully online state we only allow
+-		 * special commands.  In particular any user initiated
+-		 * command is not allowed.
++		 * power management commands.
+ 		 */
+-		if (req && !(req->rq_flags & RQF_PREEMPT))
++		if (req && !(req->rq_flags & RQF_PM))
+ 			return BLK_STS_IOERR;
+ 		return BLK_STS_OK;
+ 	}
+@@ -2508,15 +2511,13 @@ void sdev_evt_send_simple(struct scsi_device *sdev,
+ EXPORT_SYMBOL_GPL(sdev_evt_send_simple);
+ 
+ /**
+- *	scsi_device_quiesce - Block user issued commands.
++ *	scsi_device_quiesce - Block all commands except power management.
+  *	@sdev:	scsi device to quiesce.
+  *
+  *	This works by trying to transition to the SDEV_QUIESCE state
+  *	(which must be a legal transition).  When the device is in this
+- *	state, only special requests will be accepted, all others will
+- *	be deferred.  Since special requests may also be requeued requests,
+- *	a successful return doesn't guarantee the device will be
+- *	totally quiescent.
++ *	state, only power management requests will be accepted, all others will
++ *	be deferred.
+  *
+  *	Must be called with user context, may sleep.
+  *
+@@ -2578,12 +2579,12 @@ void scsi_device_resume(struct scsi_device *sdev)
+ 	 * device deleted during suspend)
+ 	 */
+ 	mutex_lock(&sdev->state_mutex);
++	if (sdev->sdev_state == SDEV_QUIESCE)
++		scsi_device_set_state(sdev, SDEV_RUNNING);
+ 	if (sdev->quiesced_by) {
+ 		sdev->quiesced_by = NULL;
+ 		blk_clear_pm_only(sdev->request_queue);
+ 	}
+-	if (sdev->sdev_state == SDEV_QUIESCE)
+-		scsi_device_set_state(sdev, SDEV_RUNNING);
+ 	mutex_unlock(&sdev->state_mutex);
+ }
+ EXPORT_SYMBOL(scsi_device_resume);
+diff --git a/drivers/scsi/scsi_transport_spi.c b/drivers/scsi/scsi_transport_spi.c
+index f3d5b1bbd5aa7..c37dd15d16d24 100644
+--- a/drivers/scsi/scsi_transport_spi.c
++++ b/drivers/scsi/scsi_transport_spi.c
+@@ -117,12 +117,16 @@ static int spi_execute(struct scsi_device *sdev, const void *cmd,
+ 		sshdr = &sshdr_tmp;
+ 
+ 	for(i = 0; i < DV_RETRIES; i++) {
++		/*
++		 * The purpose of the RQF_PM flag below is to bypass the
++		 * SDEV_QUIESCE state.
++		 */
+ 		result = scsi_execute(sdev, cmd, dir, buffer, bufflen, sense,
+ 				      sshdr, DV_TIMEOUT, /* retries */ 1,
+ 				      REQ_FAILFAST_DEV |
+ 				      REQ_FAILFAST_TRANSPORT |
+ 				      REQ_FAILFAST_DRIVER,
+-				      0, NULL);
++				      RQF_PM, NULL);
+ 		if (driver_byte(result) != DRIVER_SENSE ||
+ 		    sshdr->sense_key != UNIT_ATTENTION)
+ 			break;
+@@ -1005,23 +1009,26 @@ spi_dv_device(struct scsi_device *sdev)
+ 	 */
+ 	lock_system_sleep();
+ 
++	if (scsi_autopm_get_device(sdev))
++		goto unlock_system_sleep;
++
+ 	if (unlikely(spi_dv_in_progress(starget)))
+-		goto unlock;
++		goto put_autopm;
+ 
+ 	if (unlikely(scsi_device_get(sdev)))
+-		goto unlock;
++		goto put_autopm;
+ 
+ 	spi_dv_in_progress(starget) = 1;
+ 
+ 	buffer = kzalloc(len, GFP_KERNEL);
+ 
+ 	if (unlikely(!buffer))
+-		goto out_put;
++		goto put_sdev;
+ 
+ 	/* We need to verify that the actual device will quiesce; the
+ 	 * later target quiesce is just a nice to have */
+ 	if (unlikely(scsi_device_quiesce(sdev)))
+-		goto out_free;
++		goto free_buffer;
+ 
+ 	scsi_target_quiesce(starget);
+ 
+@@ -1041,12 +1048,16 @@ spi_dv_device(struct scsi_device *sdev)
+ 
+ 	spi_initial_dv(starget) = 1;
+ 
+- out_free:
++free_buffer:
+ 	kfree(buffer);
+- out_put:
++
++put_sdev:
+ 	spi_dv_in_progress(starget) = 0;
+ 	scsi_device_put(sdev);
+-unlock:
++put_autopm:
++	scsi_autopm_put_device(sdev);
++
++unlock_system_sleep:
+ 	unlock_system_sleep();
+ }
+ EXPORT_SYMBOL(spi_dv_device);
+diff --git a/drivers/scsi/ufs/ufshcd-pci.c b/drivers/scsi/ufs/ufshcd-pci.c
+index df3a564c3e334..fadd566025b86 100644
+--- a/drivers/scsi/ufs/ufshcd-pci.c
++++ b/drivers/scsi/ufs/ufshcd-pci.c
+@@ -148,6 +148,8 @@ static int ufs_intel_common_init(struct ufs_hba *hba)
+ {
+ 	struct intel_host *host;
+ 
++	hba->caps |= UFSHCD_CAP_RPM_AUTOSUSPEND;
++
+ 	host = devm_kzalloc(hba->dev, sizeof(*host), GFP_KERNEL);
+ 	if (!host)
+ 		return -ENOMEM;
+@@ -163,6 +165,41 @@ static void ufs_intel_common_exit(struct ufs_hba *hba)
+ 	intel_ltr_hide(hba->dev);
+ }
+ 
++static int ufs_intel_resume(struct ufs_hba *hba, enum ufs_pm_op op)
++{
++	/*
++	 * To support S4 (suspend-to-disk) with spm_lvl other than 5, the base
++	 * address registers must be restored because the restore kernel can
++	 * have used different addresses.
++	 */
++	ufshcd_writel(hba, lower_32_bits(hba->utrdl_dma_addr),
++		      REG_UTP_TRANSFER_REQ_LIST_BASE_L);
++	ufshcd_writel(hba, upper_32_bits(hba->utrdl_dma_addr),
++		      REG_UTP_TRANSFER_REQ_LIST_BASE_H);
++	ufshcd_writel(hba, lower_32_bits(hba->utmrdl_dma_addr),
++		      REG_UTP_TASK_REQ_LIST_BASE_L);
++	ufshcd_writel(hba, upper_32_bits(hba->utmrdl_dma_addr),
++		      REG_UTP_TASK_REQ_LIST_BASE_H);
++
++	if (ufshcd_is_link_hibern8(hba)) {
++		int ret = ufshcd_uic_hibern8_exit(hba);
++
++		if (!ret) {
++			ufshcd_set_link_active(hba);
++		} else {
++			dev_err(hba->dev, "%s: hibern8 exit failed %d\n",
++				__func__, ret);
++			/*
++			 * Force reset and restore. Any other actions can lead
++			 * to an unrecoverable state.
++			 */
++			ufshcd_set_link_off(hba);
++		}
++	}
++
++	return 0;
++}
++
+ static int ufs_intel_ehl_init(struct ufs_hba *hba)
+ {
+ 	hba->quirks |= UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8;
+@@ -174,6 +211,7 @@ static struct ufs_hba_variant_ops ufs_intel_cnl_hba_vops = {
+ 	.init			= ufs_intel_common_init,
+ 	.exit			= ufs_intel_common_exit,
+ 	.link_startup_notify	= ufs_intel_link_startup_notify,
++	.resume			= ufs_intel_resume,
+ };
+ 
+ static struct ufs_hba_variant_ops ufs_intel_ehl_hba_vops = {
+@@ -181,6 +219,7 @@ static struct ufs_hba_variant_ops ufs_intel_ehl_hba_vops = {
+ 	.init			= ufs_intel_ehl_init,
+ 	.exit			= ufs_intel_common_exit,
+ 	.link_startup_notify	= ufs_intel_link_startup_notify,
++	.resume			= ufs_intel_resume,
+ };
+ 
+ #ifdef CONFIG_PM_SLEEP
+@@ -207,6 +246,30 @@ static int ufshcd_pci_resume(struct device *dev)
+ {
+ 	return ufshcd_system_resume(dev_get_drvdata(dev));
+ }
++
++/**
++ * ufshcd_pci_poweroff - suspend-to-disk poweroff function
++ * @dev: pointer to PCI device handle
++ *
++ * Returns 0 if successful
++ * Returns non-zero otherwise
++ */
++static int ufshcd_pci_poweroff(struct device *dev)
++{
++	struct ufs_hba *hba = dev_get_drvdata(dev);
++	int spm_lvl = hba->spm_lvl;
++	int ret;
++
++	/*
++	 * For poweroff we need to set the UFS device to PowerDown mode.
++	 * Force spm_lvl to ensure that.
++	 */
++	hba->spm_lvl = 5;
++	ret = ufshcd_system_suspend(hba);
++	hba->spm_lvl = spm_lvl;
++	return ret;
++}
++
+ #endif /* !CONFIG_PM_SLEEP */
+ 
+ #ifdef CONFIG_PM
+@@ -302,8 +365,14 @@ ufshcd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ }
+ 
+ static const struct dev_pm_ops ufshcd_pci_pm_ops = {
+-	SET_SYSTEM_SLEEP_PM_OPS(ufshcd_pci_suspend,
+-				ufshcd_pci_resume)
++#ifdef CONFIG_PM_SLEEP
++	.suspend	= ufshcd_pci_suspend,
++	.resume		= ufshcd_pci_resume,
++	.freeze		= ufshcd_pci_suspend,
++	.thaw		= ufshcd_pci_resume,
++	.poweroff	= ufshcd_pci_poweroff,
++	.restore	= ufshcd_pci_resume,
++#endif
+ 	SET_RUNTIME_PM_OPS(ufshcd_pci_runtime_suspend,
+ 			   ufshcd_pci_runtime_resume,
+ 			   ufshcd_pci_runtime_idle)
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 911aba3e7675c..02f161468daf5 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -3620,7 +3620,7 @@ static int ufshcd_dme_enable(struct ufs_hba *hba)
+ 	ret = ufshcd_send_uic_cmd(hba, &uic_cmd);
+ 	if (ret)
+ 		dev_err(hba->dev,
+-			"dme-reset: error code %d\n", ret);
++			"dme-enable: error code %d\n", ret);
+ 
+ 	return ret;
+ }
+@@ -7093,7 +7093,6 @@ static inline void ufshcd_blk_pm_runtime_init(struct scsi_device *sdev)
+ static int ufshcd_scsi_add_wlus(struct ufs_hba *hba)
+ {
+ 	int ret = 0;
+-	struct scsi_device *sdev_rpmb;
+ 	struct scsi_device *sdev_boot;
+ 
+ 	hba->sdev_ufs_device = __scsi_add_device(hba->host, 0, 0,
+@@ -7106,14 +7105,14 @@ static int ufshcd_scsi_add_wlus(struct ufs_hba *hba)
+ 	ufshcd_blk_pm_runtime_init(hba->sdev_ufs_device);
+ 	scsi_device_put(hba->sdev_ufs_device);
+ 
+-	sdev_rpmb = __scsi_add_device(hba->host, 0, 0,
++	hba->sdev_rpmb = __scsi_add_device(hba->host, 0, 0,
+ 		ufshcd_upiu_wlun_to_scsi_wlun(UFS_UPIU_RPMB_WLUN), NULL);
+-	if (IS_ERR(sdev_rpmb)) {
+-		ret = PTR_ERR(sdev_rpmb);
++	if (IS_ERR(hba->sdev_rpmb)) {
++		ret = PTR_ERR(hba->sdev_rpmb);
+ 		goto remove_sdev_ufs_device;
+ 	}
+-	ufshcd_blk_pm_runtime_init(sdev_rpmb);
+-	scsi_device_put(sdev_rpmb);
++	ufshcd_blk_pm_runtime_init(hba->sdev_rpmb);
++	scsi_device_put(hba->sdev_rpmb);
+ 
+ 	sdev_boot = __scsi_add_device(hba->host, 0, 0,
+ 		ufshcd_upiu_wlun_to_scsi_wlun(UFS_UPIU_BOOT_WLUN), NULL);
+@@ -7637,6 +7636,63 @@ out:
+ 	return ret;
+ }
+ 
++static int
++ufshcd_send_request_sense(struct ufs_hba *hba, struct scsi_device *sdp);
++
++static int ufshcd_clear_ua_wlun(struct ufs_hba *hba, u8 wlun)
++{
++	struct scsi_device *sdp;
++	unsigned long flags;
++	int ret = 0;
++
++	spin_lock_irqsave(hba->host->host_lock, flags);
++	if (wlun == UFS_UPIU_UFS_DEVICE_WLUN)
++		sdp = hba->sdev_ufs_device;
++	else if (wlun == UFS_UPIU_RPMB_WLUN)
++		sdp = hba->sdev_rpmb;
++	else
++		BUG_ON(1);
++	if (sdp) {
++		ret = scsi_device_get(sdp);
++		if (!ret && !scsi_device_online(sdp)) {
++			ret = -ENODEV;
++			scsi_device_put(sdp);
++		}
++	} else {
++		ret = -ENODEV;
++	}
++	spin_unlock_irqrestore(hba->host->host_lock, flags);
++	if (ret)
++		goto out_err;
++
++	ret = ufshcd_send_request_sense(hba, sdp);
++	scsi_device_put(sdp);
++out_err:
++	if (ret)
++		dev_err(hba->dev, "%s: UAC clear LU=%x ret = %d\n",
++				__func__, wlun, ret);
++	return ret;
++}
++
++static int ufshcd_clear_ua_wluns(struct ufs_hba *hba)
++{
++	int ret = 0;
++
++	if (!hba->wlun_dev_clr_ua)
++		goto out;
++
++	ret = ufshcd_clear_ua_wlun(hba, UFS_UPIU_UFS_DEVICE_WLUN);
++	if (!ret)
++		ret = ufshcd_clear_ua_wlun(hba, UFS_UPIU_RPMB_WLUN);
++	if (!ret)
++		hba->wlun_dev_clr_ua = false;
++out:
++	if (ret)
++		dev_err(hba->dev, "%s: Failed to clear UAC WLUNS ret = %d\n",
++				__func__, ret);
++	return ret;
++}
++
+ /**
+  * ufshcd_probe_hba - probe hba to detect device and initialize
+  * @hba: per-adapter instance
+@@ -7756,6 +7812,8 @@ out:
+ 		pm_runtime_put_sync(hba->dev);
+ 		ufshcd_exit_clk_scaling(hba);
+ 		ufshcd_hba_exit(hba);
++	} else {
++		ufshcd_clear_ua_wluns(hba);
+ 	}
+ }
+ 
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index cd51553e522da..6c62a281c8631 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -683,6 +683,7 @@ struct ufs_hba {
+ 	 * "UFS device" W-LU.
+ 	 */
+ 	struct scsi_device *sdev_ufs_device;
++	struct scsi_device *sdev_rpmb;
+ 
+ 	enum ufs_dev_pwr_mode curr_dev_pwr_mode;
+ 	enum uic_link_state uic_link_state;
+diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c
+index d99231c737fbf..80d74cce2a010 100644
+--- a/drivers/staging/comedi/comedi_fops.c
++++ b/drivers/staging/comedi/comedi_fops.c
+@@ -2987,7 +2987,9 @@ static int put_compat_cmd(struct comedi32_cmd_struct __user *cmd32,
+ 	v32.chanlist_len = cmd->chanlist_len;
+ 	v32.data = ptr_to_compat(cmd->data);
+ 	v32.data_len = cmd->data_len;
+-	return copy_to_user(cmd32, &v32, sizeof(v32));
++	if (copy_to_user(cmd32, &v32, sizeof(v32)))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ /* Handle 32-bit COMEDI_CMD ioctl. */
+diff --git a/drivers/staging/mt7621-dma/mtk-hsdma.c b/drivers/staging/mt7621-dma/mtk-hsdma.c
+index 354536783e1ce..5ad55ca620229 100644
+--- a/drivers/staging/mt7621-dma/mtk-hsdma.c
++++ b/drivers/staging/mt7621-dma/mtk-hsdma.c
+@@ -712,7 +712,7 @@ static int mtk_hsdma_probe(struct platform_device *pdev)
+ 	ret = dma_async_device_register(dd);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to register dma device\n");
+-		return ret;
++		goto err_uninit_hsdma;
+ 	}
+ 
+ 	ret = of_dma_controller_register(pdev->dev.of_node,
+@@ -728,6 +728,8 @@ static int mtk_hsdma_probe(struct platform_device *pdev)
+ 
+ err_unregister:
+ 	dma_async_device_unregister(dd);
++err_uninit_hsdma:
++	mtk_hsdma_uninit(hsdma);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c
+index 44e15d7fb2f09..66d6f1d06f219 100644
+--- a/drivers/target/target_core_xcopy.c
++++ b/drivers/target/target_core_xcopy.c
+@@ -46,60 +46,83 @@ static int target_xcopy_gen_naa_ieee(struct se_device *dev, unsigned char *buf)
+ 	return 0;
+ }
+ 
+-struct xcopy_dev_search_info {
+-	const unsigned char *dev_wwn;
+-	struct se_device *found_dev;
+-};
+-
++/**
++ * target_xcopy_locate_se_dev_e4_iter - compare XCOPY NAA device identifiers
++ *
++ * @se_dev: device being considered for match
++ * @dev_wwn: XCOPY requested NAA dev_wwn
++ * @return: 1 on match, 0 on no-match
++ */
+ static int target_xcopy_locate_se_dev_e4_iter(struct se_device *se_dev,
+-					      void *data)
++					      const unsigned char *dev_wwn)
+ {
+-	struct xcopy_dev_search_info *info = data;
+ 	unsigned char tmp_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN];
+ 	int rc;
+ 
+-	if (!se_dev->dev_attrib.emulate_3pc)
++	if (!se_dev->dev_attrib.emulate_3pc) {
++		pr_debug("XCOPY: emulate_3pc disabled on se_dev %p\n", se_dev);
+ 		return 0;
++	}
+ 
+ 	memset(&tmp_dev_wwn[0], 0, XCOPY_NAA_IEEE_REGEX_LEN);
+ 	target_xcopy_gen_naa_ieee(se_dev, &tmp_dev_wwn[0]);
+ 
+-	rc = memcmp(&tmp_dev_wwn[0], info->dev_wwn, XCOPY_NAA_IEEE_REGEX_LEN);
+-	if (rc != 0)
+-		return 0;
+-
+-	info->found_dev = se_dev;
+-	pr_debug("XCOPY 0xe4: located se_dev: %p\n", se_dev);
+-
+-	rc = target_depend_item(&se_dev->dev_group.cg_item);
++	rc = memcmp(&tmp_dev_wwn[0], dev_wwn, XCOPY_NAA_IEEE_REGEX_LEN);
+ 	if (rc != 0) {
+-		pr_err("configfs_depend_item attempt failed: %d for se_dev: %p\n",
+-		       rc, se_dev);
+-		return rc;
++		pr_debug("XCOPY: skip non-matching: %*ph\n",
++			 XCOPY_NAA_IEEE_REGEX_LEN, tmp_dev_wwn);
++		return 0;
+ 	}
++	pr_debug("XCOPY 0xe4: located se_dev: %p\n", se_dev);
+ 
+-	pr_debug("Called configfs_depend_item for se_dev: %p se_dev->se_dev_group: %p\n",
+-		 se_dev, &se_dev->dev_group);
+ 	return 1;
+ }
+ 
+-static int target_xcopy_locate_se_dev_e4(const unsigned char *dev_wwn,
+-					struct se_device **found_dev)
++static int target_xcopy_locate_se_dev_e4(struct se_session *sess,
++					const unsigned char *dev_wwn,
++					struct se_device **_found_dev,
++					struct percpu_ref **_found_lun_ref)
+ {
+-	struct xcopy_dev_search_info info;
+-	int ret;
+-
+-	memset(&info, 0, sizeof(info));
+-	info.dev_wwn = dev_wwn;
+-
+-	ret = target_for_each_device(target_xcopy_locate_se_dev_e4_iter, &info);
+-	if (ret == 1) {
+-		*found_dev = info.found_dev;
+-		return 0;
+-	} else {
+-		pr_debug_ratelimited("Unable to locate 0xe4 descriptor for EXTENDED_COPY\n");
+-		return -EINVAL;
++	struct se_dev_entry *deve;
++	struct se_node_acl *nacl;
++	struct se_lun *this_lun = NULL;
++	struct se_device *found_dev = NULL;
++
++	/* cmd with NULL sess indicates no associated $FABRIC_MOD */
++	if (!sess)
++		goto err_out;
++
++	pr_debug("XCOPY 0xe4: searching for: %*ph\n",
++		 XCOPY_NAA_IEEE_REGEX_LEN, dev_wwn);
++
++	nacl = sess->se_node_acl;
++	rcu_read_lock();
++	hlist_for_each_entry_rcu(deve, &nacl->lun_entry_hlist, link) {
++		struct se_device *this_dev;
++		int rc;
++
++		this_lun = rcu_dereference(deve->se_lun);
++		this_dev = rcu_dereference_raw(this_lun->lun_se_dev);
++
++		rc = target_xcopy_locate_se_dev_e4_iter(this_dev, dev_wwn);
++		if (rc) {
++			if (percpu_ref_tryget_live(&this_lun->lun_ref))
++				found_dev = this_dev;
++			break;
++		}
+ 	}
++	rcu_read_unlock();
++	if (found_dev == NULL)
++		goto err_out;
++
++	pr_debug("lun_ref held for se_dev: %p se_dev->se_dev_group: %p\n",
++		 found_dev, &found_dev->dev_group);
++	*_found_dev = found_dev;
++	*_found_lun_ref = &this_lun->lun_ref;
++	return 0;
++err_out:
++	pr_debug_ratelimited("Unable to locate 0xe4 descriptor for EXTENDED_COPY\n");
++	return -EINVAL;
+ }
+ 
+ static int target_xcopy_parse_tiddesc_e4(struct se_cmd *se_cmd, struct xcopy_op *xop,
+@@ -246,12 +269,16 @@ static int target_xcopy_parse_target_descriptors(struct se_cmd *se_cmd,
+ 
+ 	switch (xop->op_origin) {
+ 	case XCOL_SOURCE_RECV_OP:
+-		rc = target_xcopy_locate_se_dev_e4(xop->dst_tid_wwn,
+-						&xop->dst_dev);
++		rc = target_xcopy_locate_se_dev_e4(se_cmd->se_sess,
++						xop->dst_tid_wwn,
++						&xop->dst_dev,
++						&xop->remote_lun_ref);
+ 		break;
+ 	case XCOL_DEST_RECV_OP:
+-		rc = target_xcopy_locate_se_dev_e4(xop->src_tid_wwn,
+-						&xop->src_dev);
++		rc = target_xcopy_locate_se_dev_e4(se_cmd->se_sess,
++						xop->src_tid_wwn,
++						&xop->src_dev,
++						&xop->remote_lun_ref);
+ 		break;
+ 	default:
+ 		pr_err("XCOPY CSCD descriptor IDs not found in CSCD list - "
+@@ -391,18 +418,12 @@ static int xcopy_pt_get_cmd_state(struct se_cmd *se_cmd)
+ 
+ static void xcopy_pt_undepend_remotedev(struct xcopy_op *xop)
+ {
+-	struct se_device *remote_dev;
+-
+ 	if (xop->op_origin == XCOL_SOURCE_RECV_OP)
+-		remote_dev = xop->dst_dev;
++		pr_debug("putting dst lun_ref for %p\n", xop->dst_dev);
+ 	else
+-		remote_dev = xop->src_dev;
+-
+-	pr_debug("Calling configfs_undepend_item for"
+-		  " remote_dev: %p remote_dev->dev_group: %p\n",
+-		  remote_dev, &remote_dev->dev_group.cg_item);
++		pr_debug("putting src lun_ref for %p\n", xop->src_dev);
+ 
+-	target_undepend_item(&remote_dev->dev_group.cg_item);
++	percpu_ref_put(xop->remote_lun_ref);
+ }
+ 
+ static void xcopy_pt_release_cmd(struct se_cmd *se_cmd)
+diff --git a/drivers/target/target_core_xcopy.h b/drivers/target/target_core_xcopy.h
+index c56a1bde9417b..e5f20005179a8 100644
+--- a/drivers/target/target_core_xcopy.h
++++ b/drivers/target/target_core_xcopy.h
+@@ -27,6 +27,7 @@ struct xcopy_op {
+ 	struct se_device *dst_dev;
+ 	unsigned char dst_tid_wwn[XCOPY_NAA_IEEE_REGEX_LEN];
+ 	unsigned char local_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN];
++	struct percpu_ref *remote_lun_ref;
+ 
+ 	sector_t src_lba;
+ 	sector_t dst_lba;
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index 5e07a0a86d110..ee565bdb44d65 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -139,9 +139,13 @@ static struct imx_usbmisc_data *usbmisc_get_init_data(struct device *dev)
+ 	misc_pdev = of_find_device_by_node(args.np);
+ 	of_node_put(args.np);
+ 
+-	if (!misc_pdev || !platform_get_drvdata(misc_pdev))
++	if (!misc_pdev)
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 
++	if (!platform_get_drvdata(misc_pdev)) {
++		put_device(&misc_pdev->dev);
++		return ERR_PTR(-EPROBE_DEFER);
++	}
+ 	data->dev = &misc_pdev->dev;
+ 
+ 	/*
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index f52f1bc0559f9..781905745812e 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1895,6 +1895,10 @@ static const struct usb_device_id acm_ids[] = {
+ 	{ USB_DEVICE(0x04d8, 0xfd08),
+ 	.driver_info = IGNORE_DEVICE,
+ 	},
++
++	{ USB_DEVICE(0x04d8, 0xf58b),
++	.driver_info = IGNORE_DEVICE,
++	},
+ #endif
+ 
+ 	/*Samsung phone in firmware update mode */
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 02d0cfd23bb29..508b1c3f8b731 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -465,13 +465,23 @@ static int service_outstanding_interrupt(struct wdm_device *desc)
+ 	if (!desc->resp_count || !--desc->resp_count)
+ 		goto out;
+ 
++	if (test_bit(WDM_DISCONNECTING, &desc->flags)) {
++		rv = -ENODEV;
++		goto out;
++	}
++	if (test_bit(WDM_RESETTING, &desc->flags)) {
++		rv = -EIO;
++		goto out;
++	}
++
+ 	set_bit(WDM_RESPONDING, &desc->flags);
+ 	spin_unlock_irq(&desc->iuspin);
+ 	rv = usb_submit_urb(desc->response, GFP_KERNEL);
+ 	spin_lock_irq(&desc->iuspin);
+ 	if (rv) {
+-		dev_err(&desc->intf->dev,
+-			"usb_submit_urb failed with result %d\n", rv);
++		if (!test_bit(WDM_DISCONNECTING, &desc->flags))
++			dev_err(&desc->intf->dev,
++				"usb_submit_urb failed with result %d\n", rv);
+ 
+ 		/* make sure the next notification trigger a submit */
+ 		clear_bit(WDM_RESPONDING, &desc->flags);
+@@ -1027,9 +1037,9 @@ static void wdm_disconnect(struct usb_interface *intf)
+ 	wake_up_all(&desc->wait);
+ 	mutex_lock(&desc->rlock);
+ 	mutex_lock(&desc->wlock);
+-	kill_urbs(desc);
+ 	cancel_work_sync(&desc->rxwork);
+ 	cancel_work_sync(&desc->service_outs_intr);
++	kill_urbs(desc);
+ 	mutex_unlock(&desc->wlock);
+ 	mutex_unlock(&desc->rlock);
+ 
+diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
+index 67cbd42421bee..134dc2005ce97 100644
+--- a/drivers/usb/class/usblp.c
++++ b/drivers/usb/class/usblp.c
+@@ -274,8 +274,25 @@ static int usblp_ctrl_msg(struct usblp *usblp, int request, int type, int dir, i
+ #define usblp_reset(usblp)\
+ 	usblp_ctrl_msg(usblp, USBLP_REQ_RESET, USB_TYPE_CLASS, USB_DIR_OUT, USB_RECIP_OTHER, 0, NULL, 0)
+ 
+-#define usblp_hp_channel_change_request(usblp, channel, buffer) \
+-	usblp_ctrl_msg(usblp, USBLP_REQ_HP_CHANNEL_CHANGE_REQUEST, USB_TYPE_VENDOR, USB_DIR_IN, USB_RECIP_INTERFACE, channel, buffer, 1)
++static int usblp_hp_channel_change_request(struct usblp *usblp, int channel, u8 *new_channel)
++{
++	u8 *buf;
++	int ret;
++
++	buf = kzalloc(1, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	ret = usblp_ctrl_msg(usblp, USBLP_REQ_HP_CHANNEL_CHANGE_REQUEST,
++			USB_TYPE_VENDOR, USB_DIR_IN, USB_RECIP_INTERFACE,
++			channel, buf, 1);
++	if (ret == 0)
++		*new_channel = buf[0];
++
++	kfree(buf);
++
++	return ret;
++}
+ 
+ /*
+  * See the description for usblp_select_alts() below for the usage
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 2f95f08ca5119..1b241f937d8f4 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -285,6 +285,7 @@
+ 
+ /* Global USB2 PHY Vendor Control Register */
+ #define DWC3_GUSB2PHYACC_NEWREGREQ	BIT(25)
++#define DWC3_GUSB2PHYACC_DONE		BIT(24)
+ #define DWC3_GUSB2PHYACC_BUSY		BIT(23)
+ #define DWC3_GUSB2PHYACC_WRITE		BIT(22)
+ #define DWC3_GUSB2PHYACC_ADDR(n)	(n << 16)
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index 417e05381b5d0..bdf1f98dfad8c 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -754,7 +754,7 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ 
+ 	ret = priv->drvdata->setup_regmaps(priv, base);
+ 	if (ret)
+-		return ret;
++		goto err_disable_clks;
+ 
+ 	if (priv->vbus) {
+ 		ret = regulator_enable(priv->vbus);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 78cb4db8a6e45..ee44321fee386 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1763,6 +1763,8 @@ static int dwc3_gadget_ep_dequeue(struct usb_ep *ep,
+ 			list_for_each_entry_safe(r, t, &dep->started_list, list)
+ 				dwc3_gadget_move_cancelled_request(r);
+ 
++			dep->flags &= ~DWC3_EP_WAIT_TRANSFER_COMPLETE;
++
+ 			goto out;
+ 		}
+ 	}
+@@ -2083,6 +2085,7 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
+ 
+ static void dwc3_gadget_disable_irq(struct dwc3 *dwc);
+ static void __dwc3_gadget_stop(struct dwc3 *dwc);
++static int __dwc3_gadget_start(struct dwc3 *dwc);
+ 
+ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ {
+@@ -2145,6 +2148,8 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 			dwc->ev_buf->lpos = (dwc->ev_buf->lpos + count) %
+ 						dwc->ev_buf->length;
+ 		}
++	} else {
++		__dwc3_gadget_start(dwc);
+ 	}
+ 
+ 	ret = dwc3_gadget_run_stop(dwc, is_on, false);
+@@ -2319,10 +2324,6 @@ static int dwc3_gadget_start(struct usb_gadget *g,
+ 	}
+ 
+ 	dwc->gadget_driver	= driver;
+-
+-	if (pm_runtime_active(dwc->dev))
+-		__dwc3_gadget_start(dwc);
+-
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
+ 	return 0;
+@@ -2348,13 +2349,6 @@ static int dwc3_gadget_stop(struct usb_gadget *g)
+ 	unsigned long		flags;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+-
+-	if (pm_runtime_suspended(dwc->dev))
+-		goto out;
+-
+-	__dwc3_gadget_stop(dwc);
+-
+-out:
+ 	dwc->gadget_driver	= NULL;
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
+diff --git a/drivers/usb/dwc3/ulpi.c b/drivers/usb/dwc3/ulpi.c
+index aa213c9815f67..f23f4c9a557e9 100644
+--- a/drivers/usb/dwc3/ulpi.c
++++ b/drivers/usb/dwc3/ulpi.c
+@@ -7,6 +7,8 @@
+  * Author: Heikki Krogerus <heikki.krogerus@linux.intel.com>
+  */
+ 
++#include <linux/delay.h>
++#include <linux/time64.h>
+ #include <linux/ulpi/regs.h>
+ 
+ #include "core.h"
+@@ -17,14 +19,28 @@
+ 		DWC3_GUSB2PHYACC_ADDR(ULPI_ACCESS_EXTENDED) | \
+ 		DWC3_GUSB2PHYACC_EXTEND_ADDR(a) : DWC3_GUSB2PHYACC_ADDR(a))
+ 
+-static int dwc3_ulpi_busyloop(struct dwc3 *dwc)
++#define DWC3_ULPI_BASE_DELAY	DIV_ROUND_UP(NSEC_PER_SEC, 60000000L)
++
++static int dwc3_ulpi_busyloop(struct dwc3 *dwc, u8 addr, bool read)
+ {
+-	unsigned int count = 1000;
++	unsigned long ns = 5L * DWC3_ULPI_BASE_DELAY;
++	unsigned int count = 10000;
+ 	u32 reg;
+ 
++	if (addr >= ULPI_EXT_VENDOR_SPECIFIC)
++		ns += DWC3_ULPI_BASE_DELAY;
++
++	if (read)
++		ns += DWC3_ULPI_BASE_DELAY;
++
++	reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
++	if (reg & DWC3_GUSB2PHYCFG_SUSPHY)
++		usleep_range(1000, 1200);
++
+ 	while (count--) {
++		ndelay(ns);
+ 		reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYACC(0));
+-		if (!(reg & DWC3_GUSB2PHYACC_BUSY))
++		if (reg & DWC3_GUSB2PHYACC_DONE)
+ 			return 0;
+ 		cpu_relax();
+ 	}
+@@ -38,16 +54,10 @@ static int dwc3_ulpi_read(struct device *dev, u8 addr)
+ 	u32 reg;
+ 	int ret;
+ 
+-	reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+-	if (reg & DWC3_GUSB2PHYCFG_SUSPHY) {
+-		reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
+-		dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
+-	}
+-
+ 	reg = DWC3_GUSB2PHYACC_NEWREGREQ | DWC3_ULPI_ADDR(addr);
+ 	dwc3_writel(dwc->regs, DWC3_GUSB2PHYACC(0), reg);
+ 
+-	ret = dwc3_ulpi_busyloop(dwc);
++	ret = dwc3_ulpi_busyloop(dwc, addr, true);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -61,17 +71,11 @@ static int dwc3_ulpi_write(struct device *dev, u8 addr, u8 val)
+ 	struct dwc3 *dwc = dev_get_drvdata(dev);
+ 	u32 reg;
+ 
+-	reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+-	if (reg & DWC3_GUSB2PHYCFG_SUSPHY) {
+-		reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
+-		dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
+-	}
+-
+ 	reg = DWC3_GUSB2PHYACC_NEWREGREQ | DWC3_ULPI_ADDR(addr);
+ 	reg |= DWC3_GUSB2PHYACC_WRITE | val;
+ 	dwc3_writel(dwc->regs, DWC3_GUSB2PHYACC(0), reg);
+ 
+-	return dwc3_ulpi_busyloop(dwc);
++	return dwc3_ulpi_busyloop(dwc, addr, false);
+ }
+ 
+ static const struct ulpi_ops dwc3_ulpi_ops = {
+diff --git a/drivers/usb/gadget/Kconfig b/drivers/usb/gadget/Kconfig
+index 7e47e6223089c..2d152571a7de8 100644
+--- a/drivers/usb/gadget/Kconfig
++++ b/drivers/usb/gadget/Kconfig
+@@ -265,6 +265,7 @@ config USB_CONFIGFS_NCM
+ 	depends on NET
+ 	select USB_U_ETHER
+ 	select USB_F_NCM
++	select CRC32
+ 	help
+ 	  NCM is an advanced protocol for Ethernet encapsulation, allows
+ 	  grouping of several ethernet frames into one USB transfer and
+@@ -314,6 +315,7 @@ config USB_CONFIGFS_EEM
+ 	depends on NET
+ 	select USB_U_ETHER
+ 	select USB_F_EEM
++	select CRC32
+ 	help
+ 	  CDC EEM is a newer USB standard that is somewhat simpler than CDC ECM
+ 	  and therefore can be supported by more hardware.  Technically ECM and
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index c6d455f2bb928..1a556a628971f 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -392,8 +392,11 @@ int usb_function_deactivate(struct usb_function *function)
+ 
+ 	spin_lock_irqsave(&cdev->lock, flags);
+ 
+-	if (cdev->deactivations == 0)
++	if (cdev->deactivations == 0) {
++		spin_unlock_irqrestore(&cdev->lock, flags);
+ 		status = usb_gadget_deactivate(cdev->gadget);
++		spin_lock_irqsave(&cdev->lock, flags);
++	}
+ 	if (status == 0)
+ 		cdev->deactivations++;
+ 
+@@ -424,8 +427,11 @@ int usb_function_activate(struct usb_function *function)
+ 		status = -EINVAL;
+ 	else {
+ 		cdev->deactivations--;
+-		if (cdev->deactivations == 0)
++		if (cdev->deactivations == 0) {
++			spin_unlock_irqrestore(&cdev->lock, flags);
+ 			status = usb_gadget_activate(cdev->gadget);
++			spin_lock_irqsave(&cdev->lock, flags);
++		}
+ 	}
+ 
+ 	spin_unlock_irqrestore(&cdev->lock, flags);
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index 56051bb973498..36ffb43f9c1a0 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -221,9 +221,16 @@ static ssize_t gadget_dev_desc_bcdUSB_store(struct config_item *item,
+ 
+ static ssize_t gadget_dev_desc_UDC_show(struct config_item *item, char *page)
+ {
+-	char *udc_name = to_gadget_info(item)->composite.gadget_driver.udc_name;
++	struct gadget_info *gi = to_gadget_info(item);
++	char *udc_name;
++	int ret;
++
++	mutex_lock(&gi->lock);
++	udc_name = gi->composite.gadget_driver.udc_name;
++	ret = sprintf(page, "%s\n", udc_name ?: "");
++	mutex_unlock(&gi->lock);
+ 
+-	return sprintf(page, "%s\n", udc_name ?: "");
++	return ret;
+ }
+ 
+ static int unregister_gadget(struct gadget_info *gi)
+@@ -1248,9 +1255,9 @@ static void purge_configs_funcs(struct gadget_info *gi)
+ 
+ 		cfg = container_of(c, struct config_usb_cfg, c);
+ 
+-		list_for_each_entry_safe(f, tmp, &c->functions, list) {
++		list_for_each_entry_safe_reverse(f, tmp, &c->functions, list) {
+ 
+-			list_move_tail(&f->list, &cfg->func_list);
++			list_move(&f->list, &cfg->func_list);
+ 			if (f->unbind) {
+ 				dev_dbg(&gi->cdev.gadget->dev,
+ 					"unbind function '%s'/%p\n",
+@@ -1536,7 +1543,7 @@ static const struct usb_gadget_driver configfs_driver_template = {
+ 	.suspend	= configfs_composite_suspend,
+ 	.resume		= configfs_composite_resume,
+ 
+-	.max_speed	= USB_SPEED_SUPER,
++	.max_speed	= USB_SPEED_SUPER_PLUS,
+ 	.driver = {
+ 		.owner          = THIS_MODULE,
+ 		.name		= "configfs-gadget",
+@@ -1576,7 +1583,7 @@ static struct config_group *gadgets_make(
+ 	gi->composite.unbind = configfs_do_nothing;
+ 	gi->composite.suspend = NULL;
+ 	gi->composite.resume = NULL;
+-	gi->composite.max_speed = USB_SPEED_SUPER;
++	gi->composite.max_speed = USB_SPEED_SUPER_PLUS;
+ 
+ 	spin_lock_init(&gi->spinlock);
+ 	mutex_init(&gi->lock);
+diff --git a/drivers/usb/gadget/function/f_printer.c b/drivers/usb/gadget/function/f_printer.c
+index 64a4112068fc8..2f1eb2e81d306 100644
+--- a/drivers/usb/gadget/function/f_printer.c
++++ b/drivers/usb/gadget/function/f_printer.c
+@@ -1162,6 +1162,7 @@ fail_tx_reqs:
+ 		printer_req_free(dev->in_ep, req);
+ 	}
+ 
++	usb_free_all_descriptors(f);
+ 	return ret;
+ 
+ }
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index 3633df6d7610f..5d960b6603b6f 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -271,7 +271,7 @@ static struct usb_endpoint_descriptor fs_epout_desc = {
+ 
+ 	.bEndpointAddress = USB_DIR_OUT,
+ 	.bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC,
+-	.wMaxPacketSize = cpu_to_le16(1023),
++	/* .wMaxPacketSize = DYNAMIC */
+ 	.bInterval = 1,
+ };
+ 
+@@ -280,7 +280,7 @@ static struct usb_endpoint_descriptor hs_epout_desc = {
+ 	.bDescriptorType = USB_DT_ENDPOINT,
+ 
+ 	.bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC,
+-	.wMaxPacketSize = cpu_to_le16(1024),
++	/* .wMaxPacketSize = DYNAMIC */
+ 	.bInterval = 4,
+ };
+ 
+@@ -348,7 +348,7 @@ static struct usb_endpoint_descriptor fs_epin_desc = {
+ 
+ 	.bEndpointAddress = USB_DIR_IN,
+ 	.bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC,
+-	.wMaxPacketSize = cpu_to_le16(1023),
++	/* .wMaxPacketSize = DYNAMIC */
+ 	.bInterval = 1,
+ };
+ 
+@@ -357,7 +357,7 @@ static struct usb_endpoint_descriptor hs_epin_desc = {
+ 	.bDescriptorType = USB_DT_ENDPOINT,
+ 
+ 	.bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC,
+-	.wMaxPacketSize = cpu_to_le16(1024),
++	/* .wMaxPacketSize = DYNAMIC */
+ 	.bInterval = 4,
+ };
+ 
+@@ -444,12 +444,28 @@ struct cntrl_range_lay3 {
+ 	__le32	dRES;
+ } __packed;
+ 
+-static void set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts,
++static int set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts,
+ 	struct usb_endpoint_descriptor *ep_desc,
+-	unsigned int factor, bool is_playback)
++	enum usb_device_speed speed, bool is_playback)
+ {
+ 	int chmask, srate, ssize;
+-	u16 max_packet_size;
++	u16 max_size_bw, max_size_ep;
++	unsigned int factor;
++
++	switch (speed) {
++	case USB_SPEED_FULL:
++		max_size_ep = 1023;
++		factor = 1000;
++		break;
++
++	case USB_SPEED_HIGH:
++		max_size_ep = 1024;
++		factor = 8000;
++		break;
++
++	default:
++		return -EINVAL;
++	}
+ 
+ 	if (is_playback) {
+ 		chmask = uac2_opts->p_chmask;
+@@ -461,10 +477,12 @@ static void set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts,
+ 		ssize = uac2_opts->c_ssize;
+ 	}
+ 
+-	max_packet_size = num_channels(chmask) * ssize *
++	max_size_bw = num_channels(chmask) * ssize *
+ 		DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1)));
+-	ep_desc->wMaxPacketSize = cpu_to_le16(min_t(u16, max_packet_size,
+-				le16_to_cpu(ep_desc->wMaxPacketSize)));
++	ep_desc->wMaxPacketSize = cpu_to_le16(min_t(u16, max_size_bw,
++						    max_size_ep));
++
++	return 0;
+ }
+ 
+ /* Use macro to overcome line length limitation */
+@@ -670,10 +688,33 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
+ 	}
+ 
+ 	/* Calculate wMaxPacketSize according to audio bandwidth */
+-	set_ep_max_packet_size(uac2_opts, &fs_epin_desc, 1000, true);
+-	set_ep_max_packet_size(uac2_opts, &fs_epout_desc, 1000, false);
+-	set_ep_max_packet_size(uac2_opts, &hs_epin_desc, 8000, true);
+-	set_ep_max_packet_size(uac2_opts, &hs_epout_desc, 8000, false);
++	ret = set_ep_max_packet_size(uac2_opts, &fs_epin_desc, USB_SPEED_FULL,
++				     true);
++	if (ret < 0) {
++		dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
++		return ret;
++	}
++
++	ret = set_ep_max_packet_size(uac2_opts, &fs_epout_desc, USB_SPEED_FULL,
++				     false);
++	if (ret < 0) {
++		dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
++		return ret;
++	}
++
++	ret = set_ep_max_packet_size(uac2_opts, &hs_epin_desc, USB_SPEED_HIGH,
++				     true);
++	if (ret < 0) {
++		dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
++		return ret;
++	}
++
++	ret = set_ep_max_packet_size(uac2_opts, &hs_epout_desc, USB_SPEED_HIGH,
++				     false);
++	if (ret < 0) {
++		dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
++		return ret;
++	}
+ 
+ 	if (EPOUT_EN(uac2_opts)) {
+ 		agdev->out_ep = usb_ep_autoconfig(gadget, &fs_epout_desc);
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index 31ea76adcc0db..c019f2b0c0af3 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -45,9 +45,10 @@
+ #define UETH__VERSION	"29-May-2008"
+ 
+ /* Experiments show that both Linux and Windows hosts allow up to 16k
+- * frame sizes. Set the max size to 15k+52 to prevent allocating 32k
++ * frame sizes. Set the max MTU size to 15k+52 to prevent allocating 32k
+  * blocks and still have efficient handling. */
+-#define GETHER_MAX_ETH_FRAME_LEN 15412
++#define GETHER_MAX_MTU_SIZE 15412
++#define GETHER_MAX_ETH_FRAME_LEN (GETHER_MAX_MTU_SIZE + ETH_HLEN)
+ 
+ struct eth_dev {
+ 	/* lock is held while accessing port_usb
+@@ -786,7 +787,7 @@ struct eth_dev *gether_setup_name(struct usb_gadget *g,
+ 
+ 	/* MTU range: 14 - 15412 */
+ 	net->min_mtu = ETH_HLEN;
+-	net->max_mtu = GETHER_MAX_ETH_FRAME_LEN;
++	net->max_mtu = GETHER_MAX_MTU_SIZE;
+ 
+ 	dev->gadget = g;
+ 	SET_NETDEV_DEV(net, &g->dev);
+@@ -848,7 +849,7 @@ struct net_device *gether_setup_name_default(const char *netname)
+ 
+ 	/* MTU range: 14 - 15412 */
+ 	net->min_mtu = ETH_HLEN;
+-	net->max_mtu = GETHER_MAX_ETH_FRAME_LEN;
++	net->max_mtu = GETHER_MAX_MTU_SIZE;
+ 
+ 	return net;
+ }
+diff --git a/drivers/usb/gadget/legacy/acm_ms.c b/drivers/usb/gadget/legacy/acm_ms.c
+index 59be2d8417c9c..e8033e5f0c18e 100644
+--- a/drivers/usb/gadget/legacy/acm_ms.c
++++ b/drivers/usb/gadget/legacy/acm_ms.c
+@@ -200,8 +200,10 @@ static int acm_ms_bind(struct usb_composite_dev *cdev)
+ 		struct usb_descriptor_header *usb_desc;
+ 
+ 		usb_desc = usb_otg_descriptor_alloc(gadget);
+-		if (!usb_desc)
++		if (!usb_desc) {
++			status = -ENOMEM;
+ 			goto fail_string_ids;
++		}
+ 		usb_otg_descriptor_init(gadget, usb_desc);
+ 		otg_desc[0] = usb_desc;
+ 		otg_desc[1] = NULL;
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index 99c1ebe86f6a2..016937579ed97 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -2114,9 +2114,21 @@ static int dummy_hub_control(
+ 				dum_hcd->port_status &= ~USB_PORT_STAT_POWER;
+ 			set_link_state(dum_hcd);
+ 			break;
+-		default:
++		case USB_PORT_FEAT_ENABLE:
++		case USB_PORT_FEAT_C_ENABLE:
++		case USB_PORT_FEAT_C_SUSPEND:
++			/* Not allowed for USB-3 */
++			if (hcd->speed == HCD_USB3)
++				goto error;
++			fallthrough;
++		case USB_PORT_FEAT_C_CONNECTION:
++		case USB_PORT_FEAT_C_RESET:
+ 			dum_hcd->port_status &= ~(1 << wValue);
+ 			set_link_state(dum_hcd);
++			break;
++		default:
++		/* Disallow INDICATOR and C_OVER_CURRENT */
++			goto error;
+ 		}
+ 		break;
+ 	case GetHubDescriptor:
+@@ -2277,18 +2289,17 @@ static int dummy_hub_control(
+ 			 */
+ 			dum_hcd->re_timeout = jiffies + msecs_to_jiffies(50);
+ 			fallthrough;
++		case USB_PORT_FEAT_C_CONNECTION:
++		case USB_PORT_FEAT_C_RESET:
++		case USB_PORT_FEAT_C_ENABLE:
++		case USB_PORT_FEAT_C_SUSPEND:
++			/* Not allowed for USB-3, and ignored for USB-2 */
++			if (hcd->speed == HCD_USB3)
++				goto error;
++			break;
+ 		default:
+-			if (hcd->speed == HCD_USB3) {
+-				if ((dum_hcd->port_status &
+-				     USB_SS_PORT_STAT_POWER) != 0) {
+-					dum_hcd->port_status |= (1 << wValue);
+-				}
+-			} else
+-				if ((dum_hcd->port_status &
+-				     USB_PORT_STAT_POWER) != 0) {
+-					dum_hcd->port_status |= (1 << wValue);
+-				}
+-			set_link_state(dum_hcd);
++		/* Disallow TEST, INDICATOR, and C_OVER_CURRENT */
++			goto error;
+ 		}
+ 		break;
+ 	case GetPortErrorCount:
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index d4a8d0efbbc4d..73f1373d517a2 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -4646,19 +4646,19 @@ static u16 xhci_calculate_u1_timeout(struct xhci_hcd *xhci,
+ {
+ 	unsigned long long timeout_ns;
+ 
++	if (xhci->quirks & XHCI_INTEL_HOST)
++		timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc);
++	else
++		timeout_ns = udev->u1_params.sel;
++
+ 	/* Prevent U1 if service interval is shorter than U1 exit latency */
+ 	if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) {
+-		if (xhci_service_interval_to_ns(desc) <= udev->u1_params.mel) {
++		if (xhci_service_interval_to_ns(desc) <= timeout_ns) {
+ 			dev_dbg(&udev->dev, "Disable U1, ESIT shorter than exit latency\n");
+ 			return USB3_LPM_DISABLED;
+ 		}
+ 	}
+ 
+-	if (xhci->quirks & XHCI_INTEL_HOST)
+-		timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc);
+-	else
+-		timeout_ns = udev->u1_params.sel;
+-
+ 	/* The U1 timeout is encoded in 1us intervals.
+ 	 * Don't return a timeout of zero, because that's USB3_LPM_DISABLED.
+ 	 */
+@@ -4710,19 +4710,19 @@ static u16 xhci_calculate_u2_timeout(struct xhci_hcd *xhci,
+ {
+ 	unsigned long long timeout_ns;
+ 
++	if (xhci->quirks & XHCI_INTEL_HOST)
++		timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc);
++	else
++		timeout_ns = udev->u2_params.sel;
++
+ 	/* Prevent U2 if service interval is shorter than U2 exit latency */
+ 	if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) {
+-		if (xhci_service_interval_to_ns(desc) <= udev->u2_params.mel) {
++		if (xhci_service_interval_to_ns(desc) <= timeout_ns) {
+ 			dev_dbg(&udev->dev, "Disable U2, ESIT shorter than exit latency\n");
+ 			return USB3_LPM_DISABLED;
+ 		}
+ 	}
+ 
+-	if (xhci->quirks & XHCI_INTEL_HOST)
+-		timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc);
+-	else
+-		timeout_ns = udev->u2_params.sel;
+-
+ 	/* The U2 timeout is encoded in 256us intervals */
+ 	timeout_ns = DIV_ROUND_UP_ULL(timeout_ns, 256 * 1000);
+ 	/* If the necessary timeout value is bigger than what we can set in the
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index e3165d79b5f64..6c3d760bd4dd8 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -495,6 +495,9 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ 		timeout = schedule_timeout(YUREX_WRITE_TIMEOUT);
+ 	finish_wait(&dev->waitq, &wait);
+ 
++	/* make sure URB is idle after timeout or (spurious) CMD_ACK */
++	usb_kill_urb(dev->cntl_urb);
++
+ 	mutex_unlock(&dev->io_mutex);
+ 
+ 	if (retval < 0) {
+diff --git a/drivers/usb/serial/iuu_phoenix.c b/drivers/usb/serial/iuu_phoenix.c
+index b4ba79123d9da..c14205190e7a7 100644
+--- a/drivers/usb/serial/iuu_phoenix.c
++++ b/drivers/usb/serial/iuu_phoenix.c
+@@ -532,23 +532,29 @@ static int iuu_uart_flush(struct usb_serial_port *port)
+ 	struct device *dev = &port->dev;
+ 	int i;
+ 	int status;
+-	u8 rxcmd = IUU_UART_RX;
++	u8 *rxcmd;
+ 	struct iuu_private *priv = usb_get_serial_port_data(port);
+ 
+ 	if (iuu_led(port, 0xF000, 0, 0, 0xFF) < 0)
+ 		return -EIO;
+ 
++	rxcmd = kmalloc(1, GFP_KERNEL);
++	if (!rxcmd)
++		return -ENOMEM;
++
++	rxcmd[0] = IUU_UART_RX;
++
+ 	for (i = 0; i < 2; i++) {
+-		status = bulk_immediate(port, &rxcmd, 1);
++		status = bulk_immediate(port, rxcmd, 1);
+ 		if (status != IUU_OPERATION_OK) {
+ 			dev_dbg(dev, "%s - uart_flush_write error\n", __func__);
+-			return status;
++			goto out_free;
+ 		}
+ 
+ 		status = read_immediate(port, &priv->len, 1);
+ 		if (status != IUU_OPERATION_OK) {
+ 			dev_dbg(dev, "%s - uart_flush_read error\n", __func__);
+-			return status;
++			goto out_free;
+ 		}
+ 
+ 		if (priv->len > 0) {
+@@ -556,12 +562,16 @@ static int iuu_uart_flush(struct usb_serial_port *port)
+ 			status = read_immediate(port, priv->buf, priv->len);
+ 			if (status != IUU_OPERATION_OK) {
+ 				dev_dbg(dev, "%s - uart_flush_read error\n", __func__);
+-				return status;
++				goto out_free;
+ 			}
+ 		}
+ 	}
+ 	dev_dbg(dev, "%s - uart_flush_read OK!\n", __func__);
+ 	iuu_led(port, 0, 0xF000, 0, 0xFF);
++
++out_free:
++	kfree(rxcmd);
++
+ 	return status;
+ }
+ 
+diff --git a/drivers/usb/serial/keyspan_pda.c b/drivers/usb/serial/keyspan_pda.c
+index 39ed3ad323651..aec32bf06e018 100644
+--- a/drivers/usb/serial/keyspan_pda.c
++++ b/drivers/usb/serial/keyspan_pda.c
+@@ -555,10 +555,8 @@ exit:
+ static void keyspan_pda_write_bulk_callback(struct urb *urb)
+ {
+ 	struct usb_serial_port *port = urb->context;
+-	struct keyspan_pda_private *priv;
+ 
+ 	set_bit(0, &port->write_urbs_free);
+-	priv = usb_get_serial_port_data(port);
+ 
+ 	/* queue up a wakeup at scheduler time */
+ 	usb_serial_port_softint(port);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 2c21e34235bbb..3fe959104311b 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1117,6 +1117,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0xff, 0x30) },	/* EM160R-GL */
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10),
+@@ -2057,6 +2059,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0105, 0xff),			/* Fibocom NL678 series */
+ 	  .driver_info = RSVD(6) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) },			/* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) },			/* LongSung M5710 */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) },			/* GosunCn GM500 RNDIS */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) },			/* GosunCn GM500 MBIM */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) },			/* GosunCn GM500 ECM/NCM */
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index 870e9cf3d5dc4..f9677a5ec31b2 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -90,6 +90,13 @@ UNUSUAL_DEV(0x152d, 0x0578, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_BROKEN_FUA),
+ 
++/* Reported-by: Thinh Nguyen <thinhn@synopsys.com> */
++UNUSUAL_DEV(0x154b, 0xf00b, 0x0000, 0x9999,
++		"PNY",
++		"Pro Elite SSD",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_NO_ATA_1X),
++
+ /* Reported-by: Thinh Nguyen <thinhn@synopsys.com> */
+ UNUSUAL_DEV(0x154b, 0xf00d, 0x0000, 0x9999,
+ 		"PNY",
+diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c
+index d7f63b74c6b14..17896bd87fc3f 100644
+--- a/drivers/usb/typec/mux/intel_pmc_mux.c
++++ b/drivers/usb/typec/mux/intel_pmc_mux.c
+@@ -202,10 +202,21 @@ static int
+ pmc_usb_mux_dp_hpd(struct pmc_usb_port *port, struct typec_displayport_data *dp)
+ {
+ 	u8 msg[2] = { };
++	int ret;
+ 
+ 	msg[0] = PMC_USB_DP_HPD;
+ 	msg[0] |= port->usb3_port << PMC_USB_MSG_USB3_PORT_SHIFT;
+ 
++	/* Configure HPD first if HPD,IRQ comes together */
++	if (!IOM_PORT_HPD_ASSERTED(port->iom_status) &&
++	    dp->status & DP_STATUS_IRQ_HPD &&
++	    dp->status & DP_STATUS_HPD_STATE) {
++		msg[1] = PMC_USB_DP_HPD_LVL;
++		ret = pmc_usb_command(port, msg, sizeof(msg));
++		if (ret)
++			return ret;
++	}
++
+ 	if (dp->status & DP_STATUS_IRQ_HPD)
+ 		msg[1] = PMC_USB_DP_HPD_IRQ;
+ 
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index 66cde5e5f7964..3209b5ddd30c9 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -396,6 +396,8 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		default:
+ 			usbip_dbg_vhci_rh(" ClearPortFeature: default %x\n",
+ 					  wValue);
++			if (wValue >= 32)
++				goto error;
+ 			vhci_hcd->port_status[rhport] &= ~(1 << wValue);
+ 			break;
+ 		}
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 531a00d703cdf..c8784dfafdd73 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -863,6 +863,7 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock)
+ 	size_t len, total_len = 0;
+ 	int err;
+ 	struct vhost_net_ubuf_ref *ubufs;
++	struct ubuf_info *ubuf;
+ 	bool zcopy_used;
+ 	int sent_pkts = 0;
+ 
+@@ -895,9 +896,7 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock)
+ 
+ 		/* use msg_control to pass vhost zerocopy ubuf info to skb */
+ 		if (zcopy_used) {
+-			struct ubuf_info *ubuf;
+ 			ubuf = nvq->ubuf_info + nvq->upend_idx;
+-
+ 			vq->heads[nvq->upend_idx].id = cpu_to_vhost32(vq, head);
+ 			vq->heads[nvq->upend_idx].len = VHOST_DMA_IN_PROGRESS;
+ 			ubuf->callback = vhost_zerocopy_callback;
+@@ -927,7 +926,8 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock)
+ 		err = sock->ops->sendmsg(sock, &msg, len);
+ 		if (unlikely(err < 0)) {
+ 			if (zcopy_used) {
+-				vhost_net_ubuf_put(ubufs);
++				if (vq->heads[ubuf->desc].len == VHOST_DMA_IN_PROGRESS)
++					vhost_net_ubuf_put(ubufs);
+ 				nvq->upend_idx = ((unsigned)nvq->upend_idx - 1)
+ 					% UIO_MAXIOV;
+ 			}
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 87bd37b70738e..faed0e96cec23 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -3564,16 +3564,6 @@ static int try_flush_qgroup(struct btrfs_root *root)
+ 	int ret;
+ 	bool can_commit = true;
+ 
+-	/*
+-	 * We don't want to run flush again and again, so if there is a running
+-	 * one, we won't try to start a new flush, but exit directly.
+-	 */
+-	if (test_and_set_bit(BTRFS_ROOT_QGROUP_FLUSHING, &root->state)) {
+-		wait_event(root->qgroup_flush_wait,
+-			!test_bit(BTRFS_ROOT_QGROUP_FLUSHING, &root->state));
+-		return 0;
+-	}
+-
+ 	/*
+ 	 * If current process holds a transaction, we shouldn't flush, as we
+ 	 * assume all space reservation happens before a transaction handle is
+@@ -3588,6 +3578,26 @@ static int try_flush_qgroup(struct btrfs_root *root)
+ 	    current->journal_info != BTRFS_SEND_TRANS_STUB)
+ 		can_commit = false;
+ 
++	/*
++	 * We don't want to run flush again and again, so if there is a running
++	 * one, we won't try to start a new flush, but exit directly.
++	 */
++	if (test_and_set_bit(BTRFS_ROOT_QGROUP_FLUSHING, &root->state)) {
++		/*
++		 * We are already holding a transaction, thus we can block other
++		 * threads from flushing.  So exit right now. This increases
++		 * the chance of EDQUOT for heavy load and near limit cases.
++		 * But we can argue that if we're already near limit, EDQUOT is
++		 * unavoidable anyway.
++		 */
++		if (!can_commit)
++			return 0;
++
++		wait_event(root->qgroup_flush_wait,
++			!test_bit(BTRFS_ROOT_QGROUP_FLUSHING, &root->state));
++		return 0;
++	}
++
+ 	ret = btrfs_start_delalloc_snapshot(root);
+ 	if (ret < 0)
+ 		goto out;
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 340c76a12ce10..9e08ddb629685 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -236,6 +236,7 @@ struct waiting_dir_move {
+ 	 * after this directory is moved, we can try to rmdir the ino rmdir_ino.
+ 	 */
+ 	u64 rmdir_ino;
++	u64 rmdir_gen;
+ 	bool orphanized;
+ };
+ 
+@@ -316,7 +317,7 @@ static int is_waiting_for_move(struct send_ctx *sctx, u64 ino);
+ static struct waiting_dir_move *
+ get_waiting_dir_move(struct send_ctx *sctx, u64 ino);
+ 
+-static int is_waiting_for_rm(struct send_ctx *sctx, u64 dir_ino);
++static int is_waiting_for_rm(struct send_ctx *sctx, u64 dir_ino, u64 gen);
+ 
+ static int need_send_hole(struct send_ctx *sctx)
+ {
+@@ -2299,7 +2300,7 @@ static int get_cur_path(struct send_ctx *sctx, u64 ino, u64 gen,
+ 
+ 		fs_path_reset(name);
+ 
+-		if (is_waiting_for_rm(sctx, ino)) {
++		if (is_waiting_for_rm(sctx, ino, gen)) {
+ 			ret = gen_unique_name(sctx, ino, gen, name);
+ 			if (ret < 0)
+ 				goto out;
+@@ -2858,8 +2859,8 @@ out:
+ 	return ret;
+ }
+ 
+-static struct orphan_dir_info *
+-add_orphan_dir_info(struct send_ctx *sctx, u64 dir_ino)
++static struct orphan_dir_info *add_orphan_dir_info(struct send_ctx *sctx,
++						   u64 dir_ino, u64 dir_gen)
+ {
+ 	struct rb_node **p = &sctx->orphan_dirs.rb_node;
+ 	struct rb_node *parent = NULL;
+@@ -2868,20 +2869,23 @@ add_orphan_dir_info(struct send_ctx *sctx, u64 dir_ino)
+ 	while (*p) {
+ 		parent = *p;
+ 		entry = rb_entry(parent, struct orphan_dir_info, node);
+-		if (dir_ino < entry->ino) {
++		if (dir_ino < entry->ino)
+ 			p = &(*p)->rb_left;
+-		} else if (dir_ino > entry->ino) {
++		else if (dir_ino > entry->ino)
+ 			p = &(*p)->rb_right;
+-		} else {
++		else if (dir_gen < entry->gen)
++			p = &(*p)->rb_left;
++		else if (dir_gen > entry->gen)
++			p = &(*p)->rb_right;
++		else
+ 			return entry;
+-		}
+ 	}
+ 
+ 	odi = kmalloc(sizeof(*odi), GFP_KERNEL);
+ 	if (!odi)
+ 		return ERR_PTR(-ENOMEM);
+ 	odi->ino = dir_ino;
+-	odi->gen = 0;
++	odi->gen = dir_gen;
+ 	odi->last_dir_index_offset = 0;
+ 
+ 	rb_link_node(&odi->node, parent, p);
+@@ -2889,8 +2893,8 @@ add_orphan_dir_info(struct send_ctx *sctx, u64 dir_ino)
+ 	return odi;
+ }
+ 
+-static struct orphan_dir_info *
+-get_orphan_dir_info(struct send_ctx *sctx, u64 dir_ino)
++static struct orphan_dir_info *get_orphan_dir_info(struct send_ctx *sctx,
++						   u64 dir_ino, u64 gen)
+ {
+ 	struct rb_node *n = sctx->orphan_dirs.rb_node;
+ 	struct orphan_dir_info *entry;
+@@ -2901,15 +2905,19 @@ get_orphan_dir_info(struct send_ctx *sctx, u64 dir_ino)
+ 			n = n->rb_left;
+ 		else if (dir_ino > entry->ino)
+ 			n = n->rb_right;
++		else if (gen < entry->gen)
++			n = n->rb_left;
++		else if (gen > entry->gen)
++			n = n->rb_right;
+ 		else
+ 			return entry;
+ 	}
+ 	return NULL;
+ }
+ 
+-static int is_waiting_for_rm(struct send_ctx *sctx, u64 dir_ino)
++static int is_waiting_for_rm(struct send_ctx *sctx, u64 dir_ino, u64 gen)
+ {
+-	struct orphan_dir_info *odi = get_orphan_dir_info(sctx, dir_ino);
++	struct orphan_dir_info *odi = get_orphan_dir_info(sctx, dir_ino, gen);
+ 
+ 	return odi != NULL;
+ }
+@@ -2954,7 +2962,7 @@ static int can_rmdir(struct send_ctx *sctx, u64 dir, u64 dir_gen,
+ 	key.type = BTRFS_DIR_INDEX_KEY;
+ 	key.offset = 0;
+ 
+-	odi = get_orphan_dir_info(sctx, dir);
++	odi = get_orphan_dir_info(sctx, dir, dir_gen);
+ 	if (odi)
+ 		key.offset = odi->last_dir_index_offset;
+ 
+@@ -2985,7 +2993,7 @@ static int can_rmdir(struct send_ctx *sctx, u64 dir, u64 dir_gen,
+ 
+ 		dm = get_waiting_dir_move(sctx, loc.objectid);
+ 		if (dm) {
+-			odi = add_orphan_dir_info(sctx, dir);
++			odi = add_orphan_dir_info(sctx, dir, dir_gen);
+ 			if (IS_ERR(odi)) {
+ 				ret = PTR_ERR(odi);
+ 				goto out;
+@@ -2993,12 +3001,13 @@ static int can_rmdir(struct send_ctx *sctx, u64 dir, u64 dir_gen,
+ 			odi->gen = dir_gen;
+ 			odi->last_dir_index_offset = found_key.offset;
+ 			dm->rmdir_ino = dir;
++			dm->rmdir_gen = dir_gen;
+ 			ret = 0;
+ 			goto out;
+ 		}
+ 
+ 		if (loc.objectid > send_progress) {
+-			odi = add_orphan_dir_info(sctx, dir);
++			odi = add_orphan_dir_info(sctx, dir, dir_gen);
+ 			if (IS_ERR(odi)) {
+ 				ret = PTR_ERR(odi);
+ 				goto out;
+@@ -3038,6 +3047,7 @@ static int add_waiting_dir_move(struct send_ctx *sctx, u64 ino, bool orphanized)
+ 		return -ENOMEM;
+ 	dm->ino = ino;
+ 	dm->rmdir_ino = 0;
++	dm->rmdir_gen = 0;
+ 	dm->orphanized = orphanized;
+ 
+ 	while (*p) {
+@@ -3183,7 +3193,7 @@ static int path_loop(struct send_ctx *sctx, struct fs_path *name,
+ 	while (ino != BTRFS_FIRST_FREE_OBJECTID) {
+ 		fs_path_reset(name);
+ 
+-		if (is_waiting_for_rm(sctx, ino))
++		if (is_waiting_for_rm(sctx, ino, gen))
+ 			break;
+ 		if (is_waiting_for_move(sctx, ino)) {
+ 			if (*ancestor_ino == 0)
+@@ -3223,6 +3233,7 @@ static int apply_dir_move(struct send_ctx *sctx, struct pending_dir_move *pm)
+ 	u64 parent_ino, parent_gen;
+ 	struct waiting_dir_move *dm = NULL;
+ 	u64 rmdir_ino = 0;
++	u64 rmdir_gen;
+ 	u64 ancestor;
+ 	bool is_orphan;
+ 	int ret;
+@@ -3237,6 +3248,7 @@ static int apply_dir_move(struct send_ctx *sctx, struct pending_dir_move *pm)
+ 	dm = get_waiting_dir_move(sctx, pm->ino);
+ 	ASSERT(dm);
+ 	rmdir_ino = dm->rmdir_ino;
++	rmdir_gen = dm->rmdir_gen;
+ 	is_orphan = dm->orphanized;
+ 	free_waiting_dir_move(sctx, dm);
+ 
+@@ -3273,6 +3285,7 @@ static int apply_dir_move(struct send_ctx *sctx, struct pending_dir_move *pm)
+ 			dm = get_waiting_dir_move(sctx, pm->ino);
+ 			ASSERT(dm);
+ 			dm->rmdir_ino = rmdir_ino;
++			dm->rmdir_gen = rmdir_gen;
+ 		}
+ 		goto out;
+ 	}
+@@ -3291,7 +3304,7 @@ static int apply_dir_move(struct send_ctx *sctx, struct pending_dir_move *pm)
+ 		struct orphan_dir_info *odi;
+ 		u64 gen;
+ 
+-		odi = get_orphan_dir_info(sctx, rmdir_ino);
++		odi = get_orphan_dir_info(sctx, rmdir_ino, rmdir_gen);
+ 		if (!odi) {
+ 			/* already deleted */
+ 			goto finish;
+diff --git a/include/asm-generic/Kbuild b/include/asm-generic/Kbuild
+index e78bbb9a07e90..d1300c6e0a471 100644
+--- a/include/asm-generic/Kbuild
++++ b/include/asm-generic/Kbuild
+@@ -34,6 +34,7 @@ mandatory-y += kmap_types.h
+ mandatory-y += kprobes.h
+ mandatory-y += linkage.h
+ mandatory-y += local.h
++mandatory-y += local64.h
+ mandatory-y += mm-arch-hooks.h
+ mandatory-y += mmiowb.h
+ mandatory-y += mmu.h
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index 794b2a33a2c36..f8ea27423d1d8 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -446,8 +446,8 @@ enum {
+ 	BLK_MQ_REQ_NOWAIT	= (__force blk_mq_req_flags_t)(1 << 0),
+ 	/* allocate from reserved pool */
+ 	BLK_MQ_REQ_RESERVED	= (__force blk_mq_req_flags_t)(1 << 1),
+-	/* set RQF_PREEMPT */
+-	BLK_MQ_REQ_PREEMPT	= (__force blk_mq_req_flags_t)(1 << 3),
++	/* set RQF_PM */
++	BLK_MQ_REQ_PM		= (__force blk_mq_req_flags_t)(1 << 2),
+ };
+ 
+ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 033eb5f73b654..542471b76f410 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -79,9 +79,6 @@ typedef __u32 __bitwise req_flags_t;
+ #define RQF_MQ_INFLIGHT		((__force req_flags_t)(1 << 6))
+ /* don't call prep for this one */
+ #define RQF_DONTPREP		((__force req_flags_t)(1 << 7))
+-/* set for "ide_preempt" requests and also for requests for which the SCSI
+-   "quiesce" state must be ignored. */
+-#define RQF_PREEMPT		((__force req_flags_t)(1 << 8))
+ /* vaguely specified driver internal error.  Ignored by the block layer */
+ #define RQF_FAILED		((__force req_flags_t)(1 << 10))
+ /* don't warn about errors */
+@@ -430,8 +427,7 @@ struct request_queue {
+ 	unsigned long		queue_flags;
+ 	/*
+ 	 * Number of contexts that have called blk_set_pm_only(). If this
+-	 * counter is above zero then only RQF_PM and RQF_PREEMPT requests are
+-	 * processed.
++	 * counter is above zero then only RQF_PM requests are processed.
+ 	 */
+ 	atomic_t		pm_only;
+ 
+@@ -696,6 +692,18 @@ static inline bool queue_is_mq(struct request_queue *q)
+ 	return q->mq_ops;
+ }
+ 
++#ifdef CONFIG_PM
++static inline enum rpm_status queue_rpm_status(struct request_queue *q)
++{
++	return q->rpm_status;
++}
++#else
++static inline enum rpm_status queue_rpm_status(struct request_queue *q)
++{
++	return RPM_ACTIVE;
++}
++#endif
++
+ static inline enum blk_zoned_model
+ blk_queue_zoned_model(struct request_queue *q)
+ {
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index d956987ed032d..94522685a0d94 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -758,6 +758,7 @@ struct intel_svm_dev {
+ 	struct list_head list;
+ 	struct rcu_head rcu;
+ 	struct device *dev;
++	struct intel_iommu *iommu;
+ 	struct svm_dev_ops *ops;
+ 	struct iommu_sva sva;
+ 	u32 pasid;
+@@ -771,7 +772,6 @@ struct intel_svm {
+ 	struct mmu_notifier notifier;
+ 	struct mm_struct *mm;
+ 
+-	struct intel_iommu *iommu;
+ 	unsigned int flags;
+ 	u32 pasid;
+ 	int gpasid; /* In case that guest PASID is different from host PASID */
+diff --git a/include/net/red.h b/include/net/red.h
+index fc455445f4b22..932f0d79d60cb 100644
+--- a/include/net/red.h
++++ b/include/net/red.h
+@@ -168,12 +168,14 @@ static inline void red_set_vars(struct red_vars *v)
+ 	v->qcount	= -1;
+ }
+ 
+-static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog)
++static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog, u8 Scell_log)
+ {
+ 	if (fls(qth_min) + Wlog > 32)
+ 		return false;
+ 	if (fls(qth_max) + Wlog > 32)
+ 		return false;
++	if (Scell_log >= 32)
++		return false;
+ 	if (qth_max < qth_min)
+ 		return false;
+ 	return true;
+diff --git a/include/uapi/linux/bcache.h b/include/uapi/linux/bcache.h
+index 52e8bcb339811..cf7399f03b712 100644
+--- a/include/uapi/linux/bcache.h
++++ b/include/uapi/linux/bcache.h
+@@ -213,7 +213,7 @@ struct cache_sb_disk {
+ 		__le16		keys;
+ 	};
+ 	__le64			d[SB_JOURNAL_BUCKETS];	/* journal buckets */
+-	__le16			bucket_size_hi;
++	__le16			obso_bucket_size_hi;	/* obsoleted */
+ };
+ 
+ /*
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 437935e7a1991..0695c7895c892 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3728,17 +3728,24 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
+ 	 * is updated and visible.
+ 	 */
+ 	if (!freezable || !workqueue_freezing) {
++		bool kick = false;
++
+ 		pwq->max_active = wq->saved_max_active;
+ 
+ 		while (!list_empty(&pwq->delayed_works) &&
+-		       pwq->nr_active < pwq->max_active)
++		       pwq->nr_active < pwq->max_active) {
+ 			pwq_activate_first_delayed(pwq);
++			kick = true;
++		}
+ 
+ 		/*
+ 		 * Need to kick a worker after thawed or an unbound wq's
+-		 * max_active is bumped.  It's a slow path.  Do it always.
++		 * max_active is bumped. In realtime scenarios, always kicking a
++		 * worker will cause interference on the isolated cpu cores, so
++		 * let's kick iff work items were activated.
+ 		 */
+-		wake_up_worker(pwq->pool);
++		if (kick)
++			wake_up_worker(pwq->pool);
+ 	} else {
+ 		pwq->max_active = 0;
+ 	}
+diff --git a/lib/genalloc.c b/lib/genalloc.c
+index 7f1244b5294a8..dab97bb69df63 100644
+--- a/lib/genalloc.c
++++ b/lib/genalloc.c
+@@ -81,14 +81,14 @@ static int clear_bits_ll(unsigned long *addr, unsigned long mask_to_clear)
+  * users set the same bit, one user will return remain bits, otherwise
+  * return 0.
+  */
+-static int bitmap_set_ll(unsigned long *map, int start, int nr)
++static int bitmap_set_ll(unsigned long *map, unsigned long start, unsigned long nr)
+ {
+ 	unsigned long *p = map + BIT_WORD(start);
+-	const int size = start + nr;
++	const unsigned long size = start + nr;
+ 	int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
+ 	unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
+ 
+-	while (nr - bits_to_set >= 0) {
++	while (nr >= bits_to_set) {
+ 		if (set_bits_ll(p, mask_to_set))
+ 			return nr;
+ 		nr -= bits_to_set;
+@@ -116,14 +116,15 @@ static int bitmap_set_ll(unsigned long *map, int start, int nr)
+  * users clear the same bit, one user will return remain bits,
+  * otherwise return 0.
+  */
+-static int bitmap_clear_ll(unsigned long *map, int start, int nr)
++static unsigned long
++bitmap_clear_ll(unsigned long *map, unsigned long start, unsigned long nr)
+ {
+ 	unsigned long *p = map + BIT_WORD(start);
+-	const int size = start + nr;
++	const unsigned long size = start + nr;
+ 	int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
+ 	unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
+ 
+-	while (nr - bits_to_clear >= 0) {
++	while (nr >= bits_to_clear) {
+ 		if (clear_bits_ll(p, mask_to_clear))
+ 			return nr;
+ 		nr -= bits_to_clear;
+@@ -183,8 +184,8 @@ int gen_pool_add_owner(struct gen_pool *pool, unsigned long virt, phys_addr_t ph
+ 		 size_t size, int nid, void *owner)
+ {
+ 	struct gen_pool_chunk *chunk;
+-	int nbits = size >> pool->min_alloc_order;
+-	int nbytes = sizeof(struct gen_pool_chunk) +
++	unsigned long nbits = size >> pool->min_alloc_order;
++	unsigned long nbytes = sizeof(struct gen_pool_chunk) +
+ 				BITS_TO_LONGS(nbits) * sizeof(long);
+ 
+ 	chunk = vzalloc_node(nbytes, nid);
+@@ -242,7 +243,7 @@ void gen_pool_destroy(struct gen_pool *pool)
+ 	struct list_head *_chunk, *_next_chunk;
+ 	struct gen_pool_chunk *chunk;
+ 	int order = pool->min_alloc_order;
+-	int bit, end_bit;
++	unsigned long bit, end_bit;
+ 
+ 	list_for_each_safe(_chunk, _next_chunk, &pool->chunks) {
+ 		chunk = list_entry(_chunk, struct gen_pool_chunk, next_chunk);
+@@ -278,7 +279,7 @@ unsigned long gen_pool_alloc_algo_owner(struct gen_pool *pool, size_t size,
+ 	struct gen_pool_chunk *chunk;
+ 	unsigned long addr = 0;
+ 	int order = pool->min_alloc_order;
+-	int nbits, start_bit, end_bit, remain;
++	unsigned long nbits, start_bit, end_bit, remain;
+ 
+ #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
+ 	BUG_ON(in_nmi());
+@@ -487,7 +488,7 @@ void gen_pool_free_owner(struct gen_pool *pool, unsigned long addr, size_t size,
+ {
+ 	struct gen_pool_chunk *chunk;
+ 	int order = pool->min_alloc_order;
+-	int start_bit, nbits, remain;
++	unsigned long start_bit, nbits, remain;
+ 
+ #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
+ 	BUG_ON(in_nmi());
+@@ -755,7 +756,7 @@ unsigned long gen_pool_best_fit(unsigned long *map, unsigned long size,
+ 	index = bitmap_find_next_zero_area(map, size, start, nr, 0);
+ 
+ 	while (index < size) {
+-		int next_bit = find_next_bit(map, size, index + nr);
++		unsigned long next_bit = find_next_bit(map, size, index + nr);
+ 		if ((next_bit - index) < len) {
+ 			len = next_bit - index;
+ 			start_bit = index;
+diff --git a/mm/page-writeback.c b/mm/page-writeback.c
+index 586042472ac90..eb34d204d4ee7 100644
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -2826,7 +2826,7 @@ EXPORT_SYMBOL(__test_set_page_writeback);
+  */
+ void wait_on_page_writeback(struct page *page)
+ {
+-	if (PageWriteback(page)) {
++	while (PageWriteback(page)) {
+ 		trace_wait_on_page_writeback(page, page_mapping(page));
+ 		wait_on_page_bit(page, PG_writeback);
+ 	}
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index 94fff0700bdd3..b4562f9d074cf 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -1317,8 +1317,8 @@ static const struct attribute_group dql_group = {
+ static ssize_t xps_cpus_show(struct netdev_queue *queue,
+ 			     char *buf)
+ {
++	int cpu, len, ret, num_tc = 1, tc = 0;
+ 	struct net_device *dev = queue->dev;
+-	int cpu, len, num_tc = 1, tc = 0;
+ 	struct xps_dev_maps *dev_maps;
+ 	cpumask_var_t mask;
+ 	unsigned long index;
+@@ -1328,22 +1328,31 @@ static ssize_t xps_cpus_show(struct netdev_queue *queue,
+ 
+ 	index = get_netdev_queue_index(queue);
+ 
++	if (!rtnl_trylock())
++		return restart_syscall();
++
+ 	if (dev->num_tc) {
+ 		/* Do not allow XPS on subordinate device directly */
+ 		num_tc = dev->num_tc;
+-		if (num_tc < 0)
+-			return -EINVAL;
++		if (num_tc < 0) {
++			ret = -EINVAL;
++			goto err_rtnl_unlock;
++		}
+ 
+ 		/* If queue belongs to subordinate dev use its map */
+ 		dev = netdev_get_tx_queue(dev, index)->sb_dev ? : dev;
+ 
+ 		tc = netdev_txq_to_tc(dev, index);
+-		if (tc < 0)
+-			return -EINVAL;
++		if (tc < 0) {
++			ret = -EINVAL;
++			goto err_rtnl_unlock;
++		}
+ 	}
+ 
+-	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
+-		return -ENOMEM;
++	if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) {
++		ret = -ENOMEM;
++		goto err_rtnl_unlock;
++	}
+ 
+ 	rcu_read_lock();
+ 	dev_maps = rcu_dereference(dev->xps_cpus_map);
+@@ -1366,9 +1375,15 @@ static ssize_t xps_cpus_show(struct netdev_queue *queue,
+ 	}
+ 	rcu_read_unlock();
+ 
++	rtnl_unlock();
++
+ 	len = snprintf(buf, PAGE_SIZE, "%*pb\n", cpumask_pr_args(mask));
+ 	free_cpumask_var(mask);
+ 	return len < PAGE_SIZE ? len : -EINVAL;
++
++err_rtnl_unlock:
++	rtnl_unlock();
++	return ret;
+ }
+ 
+ static ssize_t xps_cpus_store(struct netdev_queue *queue,
+@@ -1396,7 +1411,13 @@ static ssize_t xps_cpus_store(struct netdev_queue *queue,
+ 		return err;
+ 	}
+ 
++	if (!rtnl_trylock()) {
++		free_cpumask_var(mask);
++		return restart_syscall();
++	}
++
+ 	err = netif_set_xps_queue(dev, mask, index);
++	rtnl_unlock();
+ 
+ 	free_cpumask_var(mask);
+ 
+@@ -1408,22 +1429,29 @@ static struct netdev_queue_attribute xps_cpus_attribute __ro_after_init
+ 
+ static ssize_t xps_rxqs_show(struct netdev_queue *queue, char *buf)
+ {
++	int j, len, ret, num_tc = 1, tc = 0;
+ 	struct net_device *dev = queue->dev;
+ 	struct xps_dev_maps *dev_maps;
+ 	unsigned long *mask, index;
+-	int j, len, num_tc = 1, tc = 0;
+ 
+ 	index = get_netdev_queue_index(queue);
+ 
++	if (!rtnl_trylock())
++		return restart_syscall();
++
+ 	if (dev->num_tc) {
+ 		num_tc = dev->num_tc;
+ 		tc = netdev_txq_to_tc(dev, index);
+-		if (tc < 0)
+-			return -EINVAL;
++		if (tc < 0) {
++			ret = -EINVAL;
++			goto err_rtnl_unlock;
++		}
+ 	}
+ 	mask = bitmap_zalloc(dev->num_rx_queues, GFP_KERNEL);
+-	if (!mask)
+-		return -ENOMEM;
++	if (!mask) {
++		ret = -ENOMEM;
++		goto err_rtnl_unlock;
++	}
+ 
+ 	rcu_read_lock();
+ 	dev_maps = rcu_dereference(dev->xps_rxqs_map);
+@@ -1449,10 +1477,16 @@ static ssize_t xps_rxqs_show(struct netdev_queue *queue, char *buf)
+ out_no_maps:
+ 	rcu_read_unlock();
+ 
++	rtnl_unlock();
++
+ 	len = bitmap_print_to_pagebuf(false, buf, mask, dev->num_rx_queues);
+ 	bitmap_free(mask);
+ 
+ 	return len < PAGE_SIZE ? len : -EINVAL;
++
++err_rtnl_unlock:
++	rtnl_unlock();
++	return ret;
+ }
+ 
+ static ssize_t xps_rxqs_store(struct netdev_queue *queue, const char *buf,
+@@ -1478,10 +1512,17 @@ static ssize_t xps_rxqs_store(struct netdev_queue *queue, const char *buf,
+ 		return err;
+ 	}
+ 
++	if (!rtnl_trylock()) {
++		bitmap_free(mask);
++		return restart_syscall();
++	}
++
+ 	cpus_read_lock();
+ 	err = __netif_set_xps_queue(dev, mask, index, true);
+ 	cpus_read_unlock();
+ 
++	rtnl_unlock();
++
+ 	bitmap_free(mask);
+ 	return err ? : len;
+ }
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index cdf6ec5aa45de..84bb707bd88d8 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -292,7 +292,7 @@ __be32 fib_compute_spec_dst(struct sk_buff *skb)
+ 			.flowi4_iif = LOOPBACK_IFINDEX,
+ 			.flowi4_oif = l3mdev_master_ifindex_rcu(dev),
+ 			.daddr = ip_hdr(skb)->saddr,
+-			.flowi4_tos = RT_TOS(ip_hdr(skb)->tos),
++			.flowi4_tos = ip_hdr(skb)->tos & IPTOS_RT_MASK,
+ 			.flowi4_scope = scope,
+ 			.flowi4_mark = vmark ? skb->mark : 0,
+ 		};
+diff --git a/net/ipv4/gre_demux.c b/net/ipv4/gre_demux.c
+index 66fdbfe5447cd..5d1e6fe9d8387 100644
+--- a/net/ipv4/gre_demux.c
++++ b/net/ipv4/gre_demux.c
+@@ -128,7 +128,7 @@ int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ 	 * to 0 and sets the configured key in the
+ 	 * inner erspan header field
+ 	 */
+-	if (greh->protocol == htons(ETH_P_ERSPAN) ||
++	if ((greh->protocol == htons(ETH_P_ERSPAN) && hdr_len != 4) ||
+ 	    greh->protocol == htons(ETH_P_ERSPAN2)) {
+ 		struct erspan_base_hdr *ershdr;
+ 
+diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c
+index 563b62b76a5f1..c576a63d09db1 100644
+--- a/net/ipv4/netfilter/arp_tables.c
++++ b/net/ipv4/netfilter/arp_tables.c
+@@ -1379,7 +1379,7 @@ static int compat_get_entries(struct net *net,
+ 	xt_compat_lock(NFPROTO_ARP);
+ 	t = xt_find_table_lock(net, NFPROTO_ARP, get.name);
+ 	if (!IS_ERR(t)) {
+-		const struct xt_table_info *private = t->private;
++		const struct xt_table_info *private = xt_table_get_private_protected(t);
+ 		struct xt_table_info info;
+ 
+ 		ret = compat_table_info(private, &info);
+diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
+index 6e2851f8d3a3f..e8f6f9d862376 100644
+--- a/net/ipv4/netfilter/ip_tables.c
++++ b/net/ipv4/netfilter/ip_tables.c
+@@ -1589,7 +1589,7 @@ compat_get_entries(struct net *net, struct compat_ipt_get_entries __user *uptr,
+ 	xt_compat_lock(AF_INET);
+ 	t = xt_find_table_lock(net, AF_INET, get.name);
+ 	if (!IS_ERR(t)) {
+-		const struct xt_table_info *private = t->private;
++		const struct xt_table_info *private = xt_table_get_private_protected(t);
+ 		struct xt_table_info info;
+ 		ret = compat_table_info(private, &info);
+ 		if (!ret && get.size == info.size)
+diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
+index c4f532f4d3118..0d453fa9e327b 100644
+--- a/net/ipv6/netfilter/ip6_tables.c
++++ b/net/ipv6/netfilter/ip6_tables.c
+@@ -1598,7 +1598,7 @@ compat_get_entries(struct net *net, struct compat_ip6t_get_entries __user *uptr,
+ 	xt_compat_lock(AF_INET6);
+ 	t = xt_find_table_lock(net, AF_INET6, get.name);
+ 	if (!IS_ERR(t)) {
+-		const struct xt_table_info *private = t->private;
++		const struct xt_table_info *private = xt_table_get_private_protected(t);
+ 		struct xt_table_info info;
+ 		ret = compat_table_info(private, &info);
+ 		if (!ret && get.size == info.size)
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index 5b1f4ec66dd98..888ccc2d4e34b 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -1120,7 +1120,7 @@ int ncsi_rcv_rsp(struct sk_buff *skb, struct net_device *dev,
+ 	int payload, i, ret;
+ 
+ 	/* Find the NCSI device */
+-	nd = ncsi_find_dev(dev);
++	nd = ncsi_find_dev(orig_dev);
+ 	ndp = nd ? TO_NCSI_DEV_PRIV(nd) : NULL;
+ 	if (!ndp)
+ 		return -ENODEV;
+diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
+index 521e970be4028..7d01086b38f0f 100644
+--- a/net/netfilter/ipset/ip_set_hash_gen.h
++++ b/net/netfilter/ipset/ip_set_hash_gen.h
+@@ -143,20 +143,6 @@ htable_size(u8 hbits)
+ 	return hsize * sizeof(struct hbucket *) + sizeof(struct htable);
+ }
+ 
+-/* Compute htable_bits from the user input parameter hashsize */
+-static u8
+-htable_bits(u32 hashsize)
+-{
+-	/* Assume that hashsize == 2^htable_bits */
+-	u8 bits = fls(hashsize - 1);
+-
+-	if (jhash_size(bits) != hashsize)
+-		/* Round up to the first 2^n value */
+-		bits = fls(hashsize);
+-
+-	return bits;
+-}
+-
+ #ifdef IP_SET_HASH_WITH_NETS
+ #if IPSET_NET_COUNT > 1
+ #define __CIDR(cidr, i)		(cidr[i])
+@@ -1520,7 +1506,11 @@ IPSET_TOKEN(HTYPE, _create)(struct net *net, struct ip_set *set,
+ 	if (!h)
+ 		return -ENOMEM;
+ 
+-	hbits = htable_bits(hashsize);
++	/* Compute htable_bits from the user input parameter hashsize.
++	 * Assume that hashsize == 2^htable_bits,
++	 * otherwise round up to the first 2^n value.
++	 */
++	hbits = fls(hashsize - 1);
+ 	hsize = htable_size(hbits);
+ 	if (hsize == 0) {
+ 		kfree(h);
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 9af4f93c7f0e1..4990f7cbfafdf 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -123,7 +123,7 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ 		u32 flags = ntohl(nla_get_be32(tb[NFTA_DYNSET_FLAGS]));
+ 
+ 		if (flags & ~NFT_DYNSET_F_INV)
+-			return -EINVAL;
++			return -EOPNOTSUPP;
+ 		if (flags & NFT_DYNSET_F_INV)
+ 			priv->invert = true;
+ 	}
+@@ -156,7 +156,7 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ 	timeout = 0;
+ 	if (tb[NFTA_DYNSET_TIMEOUT] != NULL) {
+ 		if (!(set->flags & NFT_SET_TIMEOUT))
+-			return -EINVAL;
++			return -EOPNOTSUPP;
+ 
+ 		err = nf_msecs_to_jiffies64(tb[NFTA_DYNSET_TIMEOUT], &timeout);
+ 		if (err)
+@@ -170,7 +170,7 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ 
+ 	if (tb[NFTA_DYNSET_SREG_DATA] != NULL) {
+ 		if (!(set->flags & NFT_SET_MAP))
+-			return -EINVAL;
++			return -EOPNOTSUPP;
+ 		if (set->dtype == NFT_DATA_VERDICT)
+ 			return -EOPNOTSUPP;
+ 
+diff --git a/net/netfilter/xt_RATEEST.c b/net/netfilter/xt_RATEEST.c
+index 37253d399c6b8..0d5c422f87452 100644
+--- a/net/netfilter/xt_RATEEST.c
++++ b/net/netfilter/xt_RATEEST.c
+@@ -115,6 +115,9 @@ static int xt_rateest_tg_checkentry(const struct xt_tgchk_param *par)
+ 	} cfg;
+ 	int ret;
+ 
++	if (strnlen(info->name, sizeof(est->name)) >= sizeof(est->name))
++		return -ENAMETOOLONG;
++
+ 	net_get_random_once(&jhash_rnd, sizeof(jhash_rnd));
+ 
+ 	mutex_lock(&xn->hash_lock);
+diff --git a/net/sched/sch_choke.c b/net/sched/sch_choke.c
+index bd618b00d3193..50f680f03a547 100644
+--- a/net/sched/sch_choke.c
++++ b/net/sched/sch_choke.c
+@@ -362,7 +362,7 @@ static int choke_change(struct Qdisc *sch, struct nlattr *opt,
+ 
+ 	ctl = nla_data(tb[TCA_CHOKE_PARMS]);
+ 
+-	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog))
++	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log))
+ 		return -EINVAL;
+ 
+ 	if (ctl->limit > CHOKE_MAX_QUEUE)
+diff --git a/net/sched/sch_gred.c b/net/sched/sch_gred.c
+index 8599c6f31b057..e0bc77533acc3 100644
+--- a/net/sched/sch_gred.c
++++ b/net/sched/sch_gred.c
+@@ -480,7 +480,7 @@ static inline int gred_change_vq(struct Qdisc *sch, int dp,
+ 	struct gred_sched *table = qdisc_priv(sch);
+ 	struct gred_sched_data *q = table->tab[dp];
+ 
+-	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog)) {
++	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "invalid RED parameters");
+ 		return -EINVAL;
+ 	}
+diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c
+index e89fab6ccb34f..b4ae34d7aa965 100644
+--- a/net/sched/sch_red.c
++++ b/net/sched/sch_red.c
+@@ -250,7 +250,7 @@ static int __red_change(struct Qdisc *sch, struct nlattr **tb,
+ 	max_P = tb[TCA_RED_MAX_P] ? nla_get_u32(tb[TCA_RED_MAX_P]) : 0;
+ 
+ 	ctl = nla_data(tb[TCA_RED_PARMS]);
+-	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog))
++	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log))
+ 		return -EINVAL;
+ 
+ 	err = red_get_flags(ctl->flags, TC_RED_HISTORIC_FLAGS,
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index bca2be57d9fc1..b25e51440623b 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -647,7 +647,7 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
+ 	}
+ 
+ 	if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max,
+-					ctl_v1->Wlog))
++					ctl_v1->Wlog, ctl_v1->Scell_log))
+ 		return -EINVAL;
+ 	if (ctl_v1 && ctl_v1->qth_min) {
+ 		p = kmalloc(sizeof(*p), GFP_KERNEL);
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index c6653ee7f701b..c966c05a0be92 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1604,8 +1604,9 @@ static void taprio_reset(struct Qdisc *sch)
+ 
+ 	hrtimer_cancel(&q->advance_timer);
+ 	if (q->qdiscs) {
+-		for (i = 0; i < dev->num_tx_queues && q->qdiscs[i]; i++)
+-			qdisc_reset(q->qdiscs[i]);
++		for (i = 0; i < dev->num_tx_queues; i++)
++			if (q->qdiscs[i])
++				qdisc_reset(q->qdiscs[i]);
+ 	}
+ 	sch->qstats.backlog = 0;
+ 	sch->q.qlen = 0;
+@@ -1625,7 +1626,7 @@ static void taprio_destroy(struct Qdisc *sch)
+ 	taprio_disable_offload(dev, q, NULL);
+ 
+ 	if (q->qdiscs) {
+-		for (i = 0; i < dev->num_tx_queues && q->qdiscs[i]; i++)
++		for (i = 0; i < dev->num_tx_queues; i++)
+ 			qdisc_put(q->qdiscs[i]);
+ 
+ 		kfree(q->qdiscs);
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 62504471fd207..189cfbbcccc04 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -772,6 +772,10 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ 		}
+ 	}
+ 
++	/* FQ and CQ are now owned by the buffer pool and cleaned up with it. */
++	xs->fq_tmp = NULL;
++	xs->cq_tmp = NULL;
++
+ 	xs->dev = dev;
+ 	xs->zc = xs->umem->zc;
+ 	xs->queue_id = qid;
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index d5adeee9d5d91..46c2ae7d91d15 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -75,8 +75,6 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
+ 
+ 	pool->fq = xs->fq_tmp;
+ 	pool->cq = xs->cq_tmp;
+-	xs->fq_tmp = NULL;
+-	xs->cq_tmp = NULL;
+ 
+ 	for (i = 0; i < pool->free_heads_cnt; i++) {
+ 		xskb = &pool->heads[i];
+diff --git a/scripts/depmod.sh b/scripts/depmod.sh
+index e083bcae343f3..3643b4f896ede 100755
+--- a/scripts/depmod.sh
++++ b/scripts/depmod.sh
+@@ -15,6 +15,8 @@ if ! test -r System.map ; then
+ 	exit 0
+ fi
+ 
++# legacy behavior: "depmod" in /sbin, no /sbin in PATH
++PATH="$PATH:/sbin"
+ if [ -z $(command -v $DEPMOD) ]; then
+ 	echo "Warning: 'make modules_install' requires $DEPMOD. Please install it." >&2
+ 	echo "This is probably in the kmod package." >&2
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 6852668f1bcb4..770ad25f1907c 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2220,8 +2220,6 @@ static const struct snd_pci_quirk power_save_denylist[] = {
+ 	SND_PCI_QUIRK(0x1849, 0x7662, "Asrock H81M-HDS", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+ 	SND_PCI_QUIRK(0x1043, 0x8733, "Asus Prime X370-Pro", 0),
+-	/* https://bugzilla.redhat.com/show_bug.cgi?id=1581607 */
+-	SND_PCI_QUIRK(0x1558, 0x3501, "Clevo W35xSS_370SS", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+ 	SND_PCI_QUIRK(0x1558, 0x6504, "Clevo W65_67SB", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index be5000dd15853..d49cc4409d59c 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -1070,6 +1070,7 @@ static int patch_conexant_auto(struct hda_codec *codec)
+ static const struct hda_device_id snd_hda_id_conexant[] = {
+ 	HDA_CODEC_ENTRY(0x14f11f86, "CX8070", patch_conexant_auto),
+ 	HDA_CODEC_ENTRY(0x14f12008, "CX8200", patch_conexant_auto),
++	HDA_CODEC_ENTRY(0x14f120d0, "CX11970", patch_conexant_auto),
+ 	HDA_CODEC_ENTRY(0x14f15045, "CX20549 (Venice)", patch_conexant_auto),
+ 	HDA_CODEC_ENTRY(0x14f15047, "CX20551 (Waikiki)", patch_conexant_auto),
+ 	HDA_CODEC_ENTRY(0x14f15051, "CX20561 (Hermosa)", patch_conexant_auto),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 006af6541dada..3c1d2a3fb1a4f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6289,6 +6289,7 @@ enum {
+ 	ALC221_FIXUP_HP_FRONT_MIC,
+ 	ALC292_FIXUP_TPT460,
+ 	ALC298_FIXUP_SPK_VOLUME,
++	ALC298_FIXUP_LENOVO_SPK_VOLUME,
+ 	ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER,
+ 	ALC269_FIXUP_ATIV_BOOK_8,
+ 	ALC221_FIXUP_HP_MIC_NO_PRESENCE,
+@@ -7119,6 +7120,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC298_FIXUP_DELL_AIO_MIC_NO_PRESENCE,
+ 	},
++	[ALC298_FIXUP_LENOVO_SPK_VOLUME] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc298_fixup_speaker_volume,
++	},
+ 	[ALC295_FIXUP_DISABLE_DAC3] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc295_fixup_disable_dac3,
+@@ -7959,11 +7964,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+@@ -8021,6 +8028,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ 	SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
+ 	SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
++	SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
++	SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+@@ -8126,6 +8135,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3151, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
+index 7ef8f3105cdb7..0ab40a8a68fb5 100644
+--- a/sound/pci/hda/patch_via.c
++++ b/sound/pci/hda/patch_via.c
+@@ -1002,6 +1002,7 @@ static const struct hda_verb vt1802_init_verbs[] = {
+ enum {
+ 	VIA_FIXUP_INTMIC_BOOST,
+ 	VIA_FIXUP_ASUS_G75,
++	VIA_FIXUP_POWER_SAVE,
+ };
+ 
+ static void via_fixup_intmic_boost(struct hda_codec *codec,
+@@ -1011,6 +1012,13 @@ static void via_fixup_intmic_boost(struct hda_codec *codec,
+ 		override_mic_boost(codec, 0x30, 0, 2, 40);
+ }
+ 
++static void via_fixup_power_save(struct hda_codec *codec,
++				 const struct hda_fixup *fix, int action)
++{
++	if (action == HDA_FIXUP_ACT_PRE_PROBE)
++		codec->power_save_node = 0;
++}
++
+ static const struct hda_fixup via_fixups[] = {
+ 	[VIA_FIXUP_INTMIC_BOOST] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -1025,11 +1033,16 @@ static const struct hda_fixup via_fixups[] = {
+ 			{ }
+ 		}
+ 	},
++	[VIA_FIXUP_POWER_SAVE] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = via_fixup_power_save,
++	},
+ };
+ 
+ static const struct snd_pci_quirk vt2002p_fixups[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1487, "Asus G75", VIA_FIXUP_ASUS_G75),
+ 	SND_PCI_QUIRK(0x1043, 0x8532, "Asus X202E", VIA_FIXUP_INTMIC_BOOST),
++	SND_PCI_QUIRK(0x1558, 0x3501, "Clevo W35xSS_370SS", VIA_FIXUP_POWER_SAVE),
+ 	{}
+ };
+ 
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index c8213652470c4..0c23fa6d8525d 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1889,6 +1889,8 @@ static int snd_usbmidi_get_ms_info(struct snd_usb_midi *umidi,
+ 		ms_ep = find_usb_ms_endpoint_descriptor(hostep);
+ 		if (!ms_ep)
+ 			continue;
++		if (ms_ep->bNumEmbMIDIJack > 0x10)
++			continue;
+ 		if (usb_endpoint_dir_out(ep)) {
+ 			if (endpoints[epidx].out_ep) {
+ 				if (++epidx >= MIDI_MAX_ENDPOINTS) {
+@@ -2141,6 +2143,8 @@ static int snd_usbmidi_detect_roland(struct snd_usb_midi *umidi,
+ 		    cs_desc[1] == USB_DT_CS_INTERFACE &&
+ 		    cs_desc[2] == 0xf1 &&
+ 		    cs_desc[3] == 0x02) {
++			if (cs_desc[4] > 0x10 || cs_desc[5] > 0x10)
++				continue;
+ 			endpoint->in_cables  = (1 << cs_desc[4]) - 1;
+ 			endpoint->out_cables = (1 << cs_desc[5]) - 1;
+ 			return snd_usbmidi_detect_endpoints(umidi, endpoint, 1);
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh b/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh
+index 4d900bc1f76c6..5c7700212f753 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh
+@@ -230,7 +230,7 @@ switch_create()
+ 	__mlnx_qos -i $swp4 --pfc=0,1,0,0,0,0,0,0 >/dev/null
+ 	# PG0 will get autoconfigured to Xoff, give PG1 arbitrarily 100K, which
+ 	# is (-2*MTU) about 80K of delay provision.
+-	__mlnx_qos -i $swp3 --buffer_size=0,$_100KB,0,0,0,0,0,0 >/dev/null
++	__mlnx_qos -i $swp4 --buffer_size=0,$_100KB,0,0,0,0,0,0 >/dev/null
+ 
+ 	# bridges
+ 	# -------
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index 691893afc15d8..e63f316327080 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Makefile for vm selftests
+ uname_M := $(shell uname -m 2>/dev/null || echo not)
+-MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/')
++MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/ppc64/')
+ 
+ # Without this, failed build products remain, with up-to-date timestamps,
+ # thus tricking Make (and you!) into believing that All Is Well, in subsequent
+@@ -39,7 +39,7 @@ TEST_GEN_FILES += transhuge-stress
+ TEST_GEN_FILES += userfaultfd
+ TEST_GEN_FILES += khugepaged
+ 
+-ifeq ($(ARCH),x86_64)
++ifeq ($(MACHINE),x86_64)
+ CAN_BUILD_I386 := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_32bit_program.c -m32)
+ CAN_BUILD_X86_64 := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_64bit_program.c)
+ CAN_BUILD_WITH_NOPIE := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_program.c -no-pie)
+@@ -61,13 +61,13 @@ TEST_GEN_FILES += $(BINARIES_64)
+ endif
+ else
+ 
+-ifneq (,$(findstring $(ARCH),powerpc))
++ifneq (,$(findstring $(MACHINE),ppc64))
+ TEST_GEN_FILES += protection_keys
+ endif
+ 
+ endif
+ 
+-ifneq (,$(filter $(MACHINE),arm64 ia64 mips64 parisc64 ppc64 ppc64le riscv64 s390x sh64 sparc64 x86_64))
++ifneq (,$(filter $(MACHINE),arm64 ia64 mips64 parisc64 ppc64 riscv64 s390x sh64 sparc64 x86_64))
+ TEST_GEN_FILES += va_128TBswitch
+ TEST_GEN_FILES += virtual_address_range
+ TEST_GEN_FILES += write_to_hugetlbfs
+@@ -82,7 +82,7 @@ include ../lib.mk
+ 
+ $(OUTPUT)/hmm-tests: LDLIBS += -lhugetlbfs -lpthread
+ 
+-ifeq ($(ARCH),x86_64)
++ifeq ($(MACHINE),x86_64)
+ BINARIES_32 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_32))
+ BINARIES_64 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_64))
+ 
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 2541a17ff1c45..3083fb53861df 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -482,9 +482,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
+ 	kvm->mmu_notifier_count++;
+ 	need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end,
+ 					     range->flags);
+-	need_tlb_flush |= kvm->tlbs_dirty;
+ 	/* we've to flush the tlb before the pages can be freed */
+-	if (need_tlb_flush)
++	if (need_tlb_flush || kvm->tlbs_dirty)
+ 		kvm_flush_remote_tlbs(kvm);
+ 
+ 	spin_unlock(&kvm->mmu_lock);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-01-17 16:18 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-01-17 16:18 UTC (permalink / raw
  To: gentoo-commits

commit:     43758b12bec40d0ae26d022ef93a79de156478b9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jan 17 16:18:20 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jan 17 16:18:20 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=43758b12

Linux patch 5.10.8

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1007_linux-5.10.8.patch | 3472 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3476 insertions(+)

diff --git a/0000_README b/0000_README
index d4ad009..b0f1ce8 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-5.10.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.7
 
+Patch:  1007_linux-5.10.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-5.10.8.patch b/1007_linux-5.10.8.patch
new file mode 100644
index 0000000..28a8032
--- /dev/null
+++ b/1007_linux-5.10.8.patch
@@ -0,0 +1,3472 @@
+diff --git a/Makefile b/Makefile
+index 9b6c90eed5e9c..4ee137b5d2416 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index ddd4641446bdd..69fe7133c765d 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -1053,6 +1053,12 @@ config ARCH_WANT_LD_ORPHAN_WARN
+ 	  by the linker, since the locations of such sections can change between linker
+ 	  versions.
+ 
++config ARCH_SPLIT_ARG64
++	bool
++	help
++	   If a 32-bit architecture requires 64-bit arguments to be split into
++	   pairs of 32-bit arguments, select this option.
++
+ source "kernel/gcov/Kconfig"
+ 
+ source "scripts/gcc-plugins/Kconfig"
+diff --git a/arch/arm/mach-omap2/omap_device.c b/arch/arm/mach-omap2/omap_device.c
+index fc7bb2ca16727..64b23b0cd23c7 100644
+--- a/arch/arm/mach-omap2/omap_device.c
++++ b/arch/arm/mach-omap2/omap_device.c
+@@ -230,10 +230,12 @@ static int _omap_device_notifier_call(struct notifier_block *nb,
+ 		break;
+ 	case BUS_NOTIFY_BIND_DRIVER:
+ 		od = to_omap_device(pdev);
+-		if (od && (od->_state == OMAP_DEVICE_STATE_ENABLED) &&
+-		    pm_runtime_status_suspended(dev)) {
++		if (od) {
+ 			od->_driver_status = BUS_NOTIFY_BIND_DRIVER;
+-			pm_runtime_set_active(dev);
++			if (od->_state == OMAP_DEVICE_STATE_ENABLED &&
++			    pm_runtime_status_suspended(dev)) {
++				pm_runtime_set_active(dev);
++			}
+ 		}
+ 		break;
+ 	case BUS_NOTIFY_ADD_DEVICE:
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index fce8cbecd6bc7..a884d77739895 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -96,7 +96,8 @@
+ #endif /* CONFIG_ARM64_FORCE_52BIT */
+ 
+ extern phys_addr_t arm64_dma_phys_limit;
+-#define ARCH_LOW_ADDRESS_LIMIT	(arm64_dma_phys_limit - 1)
++extern phys_addr_t arm64_dma32_phys_limit;
++#define ARCH_LOW_ADDRESS_LIMIT	((arm64_dma_phys_limit ? : arm64_dma32_phys_limit) - 1)
+ 
+ struct debug_info {
+ #ifdef CONFIG_HAVE_HW_BREAKPOINT
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 6f36c4f62f694..0a52e076153bb 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -2552,7 +2552,7 @@ static void verify_hyp_capabilities(void)
+ 	int parange, ipa_max;
+ 	unsigned int safe_vmid_bits, vmid_bits;
+ 
+-	if (!IS_ENABLED(CONFIG_KVM) || !IS_ENABLED(CONFIG_KVM_ARM_HOST))
++	if (!IS_ENABLED(CONFIG_KVM))
+ 		return;
+ 
+ 	safe_mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index 2b28bf1a53266..b246a4acba416 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -663,6 +663,10 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+ {
+ 	u64 pmcr, val;
+ 
++	/* No PMU available, PMCR_EL0 may UNDEF... */
++	if (!kvm_arm_support_pmu_v3())
++		return;
++
+ 	pmcr = read_sysreg(pmcr_el0);
+ 	/*
+ 	 * Writable bits of PMCR_EL0 (ARMV8_PMU_PMCR_MASK) are reset to UNKNOWN
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 095540667f0fd..00576a960f11f 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -60,7 +60,7 @@ EXPORT_SYMBOL(memstart_addr);
+  * bit addressable memory area.
+  */
+ phys_addr_t arm64_dma_phys_limit __ro_after_init;
+-static phys_addr_t arm64_dma32_phys_limit __ro_after_init;
++phys_addr_t arm64_dma32_phys_limit __ro_after_init;
+ 
+ #ifdef CONFIG_KEXEC_CORE
+ /*
+diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
+index a0dda2a1f2df0..d66da35f2e8d3 100644
+--- a/arch/powerpc/kernel/head_book3s_32.S
++++ b/arch/powerpc/kernel/head_book3s_32.S
+@@ -262,10 +262,19 @@ __secondary_hold_acknowledge:
+ MachineCheck:
+ 	EXCEPTION_PROLOG_0
+ #ifdef CONFIG_PPC_CHRP
++#ifdef CONFIG_VMAP_STACK
++	mr	r11, r1
++	mfspr	r1, SPRN_SPRG_THREAD
++	lwz	r1, RTAS_SP(r1)
++	cmpwi	cr1, r1, 0
++	bne	cr1, 7f
++	mr	r1, r11
++#else
+ 	mfspr	r11, SPRN_SPRG_THREAD
+ 	lwz	r11, RTAS_SP(r11)
+ 	cmpwi	cr1, r11, 0
+ 	bne	cr1, 7f
++#endif
+ #endif /* CONFIG_PPC_CHRP */
+ 	EXCEPTION_PROLOG_1 for_rtas=1
+ 7:	EXCEPTION_PROLOG_2
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index fbf26e0f7a6a9..3a5ecb1039bfb 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -18,6 +18,7 @@ config X86_32
+ 	select MODULES_USE_ELF_REL
+ 	select OLD_SIGACTION
+ 	select GENERIC_VDSO_32
++	select ARCH_SPLIT_ARG64
+ 
+ config X86_64
+ 	def_bool y
+diff --git a/block/genhd.c b/block/genhd.c
+index 9387f050c248a..ec6264e2ed671 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -256,14 +256,17 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
+ 		part = rcu_dereference(ptbl->part[piter->idx]);
+ 		if (!part)
+ 			continue;
++		get_device(part_to_dev(part));
++		piter->part = part;
+ 		if (!part_nr_sects_read(part) &&
+ 		    !(piter->flags & DISK_PITER_INCL_EMPTY) &&
+ 		    !(piter->flags & DISK_PITER_INCL_EMPTY_PART0 &&
+-		      piter->idx == 0))
++		      piter->idx == 0)) {
++			put_device(part_to_dev(part));
++			piter->part = NULL;
+ 			continue;
++		}
+ 
+-		get_device(part_to_dev(part));
+-		piter->part = part;
+ 		piter->idx += inc;
+ 		break;
+ 	}
+diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
+index 8dfac7f3ed7aa..ff2ee87987c7e 100644
+--- a/drivers/base/regmap/regmap-debugfs.c
++++ b/drivers/base/regmap/regmap-debugfs.c
+@@ -582,8 +582,12 @@ void regmap_debugfs_init(struct regmap *map)
+ 		devname = dev_name(map->dev);
+ 
+ 	if (name) {
+-		map->debugfs_name = kasprintf(GFP_KERNEL, "%s-%s",
++		if (!map->debugfs_name) {
++			map->debugfs_name = kasprintf(GFP_KERNEL, "%s-%s",
+ 					      devname, name);
++			if (!map->debugfs_name)
++				return;
++		}
+ 		name = map->debugfs_name;
+ 	} else {
+ 		name = devname;
+@@ -591,9 +595,10 @@ void regmap_debugfs_init(struct regmap *map)
+ 
+ 	if (!strcmp(name, "dummy")) {
+ 		kfree(map->debugfs_name);
+-
+ 		map->debugfs_name = kasprintf(GFP_KERNEL, "dummy%d",
+ 						dummy_index);
++		if (!map->debugfs_name)
++				return;
+ 		name = map->debugfs_name;
+ 		dummy_index++;
+ 	}
+diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
+index ecceaaa1a66ff..f40ebe9f50474 100644
+--- a/drivers/block/Kconfig
++++ b/drivers/block/Kconfig
+@@ -451,6 +451,7 @@ config BLK_DEV_RBD
+ config BLK_DEV_RSXX
+ 	tristate "IBM Flash Adapter 900GB Full Height PCIe Device Driver"
+ 	depends on PCI
++	select CRC32
+ 	help
+ 	  Device driver for IBM's high speed PCIe SSD
+ 	  storage device: Flash Adapter 900GB Full Height.
+diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
+index 7af1b60582fe5..ba334fe7626db 100644
+--- a/drivers/block/rnbd/rnbd-clt.c
++++ b/drivers/block/rnbd/rnbd-clt.c
+@@ -1671,7 +1671,8 @@ static void rnbd_destroy_sessions(void)
+ 	 */
+ 
+ 	list_for_each_entry_safe(sess, sn, &sess_list, list) {
+-		WARN_ON(!rnbd_clt_get_sess(sess));
++		if (!rnbd_clt_get_sess(sess))
++			continue;
+ 		close_rtrs(sess);
+ 		list_for_each_entry_safe(dev, tn, &sess->devs_list, list) {
+ 			/*
+diff --git a/drivers/cpufreq/powernow-k8.c b/drivers/cpufreq/powernow-k8.c
+index 0acc9e241cd7d..b9ccb6a3dad98 100644
+--- a/drivers/cpufreq/powernow-k8.c
++++ b/drivers/cpufreq/powernow-k8.c
+@@ -878,9 +878,9 @@ static int get_transition_latency(struct powernow_k8_data *data)
+ 
+ /* Take a frequency, and issue the fid/vid transition command */
+ static int transition_frequency_fidvid(struct powernow_k8_data *data,
+-		unsigned int index)
++		unsigned int index,
++		struct cpufreq_policy *policy)
+ {
+-	struct cpufreq_policy *policy;
+ 	u32 fid = 0;
+ 	u32 vid = 0;
+ 	int res;
+@@ -912,9 +912,6 @@ static int transition_frequency_fidvid(struct powernow_k8_data *data,
+ 	freqs.old = find_khz_freq_from_fid(data->currfid);
+ 	freqs.new = find_khz_freq_from_fid(fid);
+ 
+-	policy = cpufreq_cpu_get(smp_processor_id());
+-	cpufreq_cpu_put(policy);
+-
+ 	cpufreq_freq_transition_begin(policy, &freqs);
+ 	res = transition_fid_vid(data, fid, vid);
+ 	cpufreq_freq_transition_end(policy, &freqs, res);
+@@ -969,7 +966,7 @@ static long powernowk8_target_fn(void *arg)
+ 
+ 	powernow_k8_acpi_pst_values(data, newstate);
+ 
+-	ret = transition_frequency_fidvid(data, newstate);
++	ret = transition_frequency_fidvid(data, newstate, pol);
+ 
+ 	if (ret) {
+ 		pr_err("transition frequency failed\n");
+diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
+index b971505b87152..08d71dafa0015 100644
+--- a/drivers/dma/dw-edma/dw-edma-core.c
++++ b/drivers/dma/dw-edma/dw-edma-core.c
+@@ -86,12 +86,12 @@ static struct dw_edma_chunk *dw_edma_alloc_chunk(struct dw_edma_desc *desc)
+ 
+ 	if (desc->chunk) {
+ 		/* Create and add new element into the linked list */
+-		desc->chunks_alloc++;
+-		list_add_tail(&chunk->list, &desc->chunk->list);
+ 		if (!dw_edma_alloc_burst(chunk)) {
+ 			kfree(chunk);
+ 			return NULL;
+ 		}
++		desc->chunks_alloc++;
++		list_add_tail(&chunk->list, &desc->chunk->list);
+ 	} else {
+ 		/* List head */
+ 		chunk->burst = NULL;
+diff --git a/drivers/dma/mediatek/mtk-hsdma.c b/drivers/dma/mediatek/mtk-hsdma.c
+index f133ae8dece16..6ad8afbb95f2b 100644
+--- a/drivers/dma/mediatek/mtk-hsdma.c
++++ b/drivers/dma/mediatek/mtk-hsdma.c
+@@ -1007,6 +1007,7 @@ static int mtk_hsdma_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_free:
++	mtk_hsdma_hw_deinit(hsdma);
+ 	of_dma_controller_free(pdev->dev.of_node);
+ err_unregister:
+ 	dma_async_device_unregister(dd);
+diff --git a/drivers/dma/milbeaut-xdmac.c b/drivers/dma/milbeaut-xdmac.c
+index 85a597228fb04..748b260bbc976 100644
+--- a/drivers/dma/milbeaut-xdmac.c
++++ b/drivers/dma/milbeaut-xdmac.c
+@@ -351,7 +351,7 @@ static int milbeaut_xdmac_probe(struct platform_device *pdev)
+ 
+ 	ret = dma_async_device_register(ddev);
+ 	if (ret)
+-		return ret;
++		goto disable_xdmac;
+ 
+ 	ret = of_dma_controller_register(dev->of_node,
+ 					 of_dma_simple_xlate, mdev);
+@@ -364,6 +364,8 @@ static int milbeaut_xdmac_probe(struct platform_device *pdev)
+ 
+ unregister_dmac:
+ 	dma_async_device_unregister(ddev);
++disable_xdmac:
++	disable_xdmac(mdev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
+index 22faea653ea82..79777550a6ffc 100644
+--- a/drivers/dma/xilinx/xilinx_dma.c
++++ b/drivers/dma/xilinx/xilinx_dma.c
+@@ -2781,7 +2781,7 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
+ 		has_dre = false;
+ 
+ 	if (!has_dre)
+-		xdev->common.copy_align = fls(width - 1);
++		xdev->common.copy_align = (enum dmaengine_alignment)fls(width - 1);
+ 
+ 	if (of_device_is_compatible(node, "xlnx,axi-vdma-mm2s-channel") ||
+ 	    of_device_is_compatible(node, "xlnx,axi-dma-mm2s-channel") ||
+@@ -2900,7 +2900,8 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
+ static int xilinx_dma_child_probe(struct xilinx_dma_device *xdev,
+ 				    struct device_node *node)
+ {
+-	int ret, i, nr_channels = 1;
++	int ret, i;
++	u32 nr_channels = 1;
+ 
+ 	ret = of_property_read_u32(node, "dma-channels", &nr_channels);
+ 	if (xdev->dma_config->dmatype == XDMA_TYPE_AXIMCDMA && ret < 0)
+@@ -3112,7 +3113,11 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* Register the DMA engine with the core */
+-	dma_async_device_register(&xdev->common);
++	err = dma_async_device_register(&xdev->common);
++	if (err) {
++		dev_err(xdev->dev, "failed to register the dma device\n");
++		goto error;
++	}
+ 
+ 	err = of_dma_controller_register(node, of_dma_xilinx_xlate,
+ 					 xdev);
+diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
+index 3d4bf9b6a0a2c..06d4ce31838a5 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_types.h
++++ b/drivers/gpu/drm/i915/display/intel_display_types.h
+@@ -1382,6 +1382,9 @@ struct intel_dp {
+ 		bool ycbcr_444_to_420;
+ 	} dfp;
+ 
++	/* To control wakeup latency, e.g. for irq-driven dp aux transfers. */
++	struct pm_qos_request pm_qos;
++
+ 	/* Display stream compression testing */
+ 	bool force_dsc_en;
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 9bc59fd2f95f5..1901c88d418fa 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -1411,7 +1411,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
+ 	 * lowest possible wakeup latency and so prevent the cpu from going into
+ 	 * deep sleep states.
+ 	 */
+-	cpu_latency_qos_update_request(&i915->pm_qos, 0);
++	cpu_latency_qos_update_request(&intel_dp->pm_qos, 0);
+ 
+ 	intel_dp_check_edp(intel_dp);
+ 
+@@ -1544,7 +1544,7 @@ done:
+ 
+ 	ret = recv_bytes;
+ out:
+-	cpu_latency_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
++	cpu_latency_qos_update_request(&intel_dp->pm_qos, PM_QOS_DEFAULT_VALUE);
+ 
+ 	if (vdd)
+ 		edp_panel_vdd_off(intel_dp, false);
+@@ -1776,6 +1776,9 @@ static i915_reg_t skl_aux_data_reg(struct intel_dp *intel_dp, int index)
+ static void
+ intel_dp_aux_fini(struct intel_dp *intel_dp)
+ {
++	if (cpu_latency_qos_request_active(&intel_dp->pm_qos))
++		cpu_latency_qos_remove_request(&intel_dp->pm_qos);
++
+ 	kfree(intel_dp->aux.name);
+ }
+ 
+@@ -1818,6 +1821,7 @@ intel_dp_aux_init(struct intel_dp *intel_dp)
+ 				       aux_ch_name(dig_port->aux_ch),
+ 				       port_name(encoder->port));
+ 	intel_dp->aux.transfer = intel_dp_aux_transfer;
++	cpu_latency_qos_add_request(&intel_dp->pm_qos, PM_QOS_DEFAULT_VALUE);
+ }
+ 
+ bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index acc32066cec35..382cf048eefe0 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -577,8 +577,6 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
+ 
+ 	pci_set_master(pdev);
+ 
+-	cpu_latency_qos_add_request(&dev_priv->pm_qos, PM_QOS_DEFAULT_VALUE);
+-
+ 	intel_gt_init_workarounds(dev_priv);
+ 
+ 	/* On the 945G/GM, the chipset reports the MSI capability on the
+@@ -623,7 +621,6 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
+ err_msi:
+ 	if (pdev->msi_enabled)
+ 		pci_disable_msi(pdev);
+-	cpu_latency_qos_remove_request(&dev_priv->pm_qos);
+ err_mem_regions:
+ 	intel_memory_regions_driver_release(dev_priv);
+ err_ggtt:
+@@ -645,8 +642,6 @@ static void i915_driver_hw_remove(struct drm_i915_private *dev_priv)
+ 
+ 	if (pdev->msi_enabled)
+ 		pci_disable_msi(pdev);
+-
+-	cpu_latency_qos_remove_request(&dev_priv->pm_qos);
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 8426d59746693..83f4af097b858 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -892,9 +892,6 @@ struct drm_i915_private {
+ 
+ 	bool display_irqs_enabled;
+ 
+-	/* To control wakeup latency, e.g. for irq-driven dp aux transfers. */
+-	struct pm_qos_request pm_qos;
+-
+ 	/* Sideband mailbox protection */
+ 	struct mutex sb_lock;
+ 	struct pm_qos_request sb_qos;
+diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
+index 1ce2001106e56..04e6f6f9b742e 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_job.c
++++ b/drivers/gpu/drm/panfrost/panfrost_job.c
+@@ -617,6 +617,8 @@ int panfrost_job_init(struct panfrost_device *pfdev)
+ 	}
+ 
+ 	for (j = 0; j < NUM_JOB_SLOTS; j++) {
++		mutex_init(&js->queue[j].lock);
++
+ 		js->queue[j].fence_context = dma_fence_context_alloc(1);
+ 
+ 		ret = drm_sched_init(&js->queue[j].sched,
+@@ -647,8 +649,10 @@ void panfrost_job_fini(struct panfrost_device *pfdev)
+ 
+ 	job_write(pfdev, JOB_INT_MASK, 0);
+ 
+-	for (j = 0; j < NUM_JOB_SLOTS; j++)
++	for (j = 0; j < NUM_JOB_SLOTS; j++) {
+ 		drm_sched_fini(&js->queue[j].sched);
++		mutex_destroy(&js->queue[j].lock);
++	}
+ 
+ }
+ 
+@@ -660,7 +664,6 @@ int panfrost_job_open(struct panfrost_file_priv *panfrost_priv)
+ 	int ret, i;
+ 
+ 	for (i = 0; i < NUM_JOB_SLOTS; i++) {
+-		mutex_init(&js->queue[i].lock);
+ 		sched = &js->queue[i].sched;
+ 		ret = drm_sched_entity_init(&panfrost_priv->sched_entity[i],
+ 					    DRM_SCHED_PRIORITY_NORMAL, &sched,
+@@ -673,14 +676,10 @@ int panfrost_job_open(struct panfrost_file_priv *panfrost_priv)
+ 
+ void panfrost_job_close(struct panfrost_file_priv *panfrost_priv)
+ {
+-	struct panfrost_device *pfdev = panfrost_priv->pfdev;
+-	struct panfrost_job_slot *js = pfdev->js;
+ 	int i;
+ 
+-	for (i = 0; i < NUM_JOB_SLOTS; i++) {
++	for (i = 0; i < NUM_JOB_SLOTS; i++)
+ 		drm_sched_entity_destroy(&panfrost_priv->sched_entity[i]);
+-		mutex_destroy(&js->queue[i].lock);
+-	}
+ }
+ 
+ int panfrost_job_is_idle(struct panfrost_device *pfdev)
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index cd71e71339446..9e852b4bbf92b 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -1270,6 +1270,37 @@ static int wacom_devm_sysfs_create_group(struct wacom *wacom,
+ 					       group);
+ }
+ 
++static void wacom_devm_kfifo_release(struct device *dev, void *res)
++{
++	struct kfifo_rec_ptr_2 *devres = res;
++
++	kfifo_free(devres);
++}
++
++static int wacom_devm_kfifo_alloc(struct wacom *wacom)
++{
++	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
++	struct kfifo_rec_ptr_2 *pen_fifo = &wacom_wac->pen_fifo;
++	int error;
++
++	pen_fifo = devres_alloc(wacom_devm_kfifo_release,
++			      sizeof(struct kfifo_rec_ptr_2),
++			      GFP_KERNEL);
++
++	if (!pen_fifo)
++		return -ENOMEM;
++
++	error = kfifo_alloc(pen_fifo, WACOM_PKGLEN_MAX, GFP_KERNEL);
++	if (error) {
++		devres_free(pen_fifo);
++		return error;
++	}
++
++	devres_add(&wacom->hdev->dev, pen_fifo);
++
++	return 0;
++}
++
+ enum led_brightness wacom_leds_brightness_get(struct wacom_led *led)
+ {
+ 	struct wacom *wacom = led->wacom;
+@@ -2724,7 +2755,7 @@ static int wacom_probe(struct hid_device *hdev,
+ 	if (features->check_for_hid_type && features->hid_type != hdev->type)
+ 		return -ENODEV;
+ 
+-	error = kfifo_alloc(&wacom_wac->pen_fifo, WACOM_PKGLEN_MAX, GFP_KERNEL);
++	error = wacom_devm_kfifo_alloc(wacom);
+ 	if (error)
+ 		return error;
+ 
+@@ -2786,8 +2817,6 @@ static void wacom_remove(struct hid_device *hdev)
+ 
+ 	if (wacom->wacom_wac.features.type != REMOTE)
+ 		wacom_release_resources(wacom);
+-
+-	kfifo_free(&wacom_wac->pen_fifo);
+ }
+ 
+ #ifdef CONFIG_PM
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index ae90713443fa6..877fe3733a42b 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1449,7 +1449,7 @@ static int i801_add_mux(struct i801_priv *priv)
+ 
+ 	/* Register GPIO descriptor lookup table */
+ 	lookup = devm_kzalloc(dev,
+-			      struct_size(lookup, table, mux_config->n_gpios),
++			      struct_size(lookup, table, mux_config->n_gpios + 1),
+ 			      GFP_KERNEL);
+ 	if (!lookup)
+ 		return -ENOMEM;
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 33de99b7bc20c..0818d3e507347 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -38,6 +38,7 @@
+ #define I2C_IO_CONFIG_OPEN_DRAIN	0x0003
+ #define I2C_IO_CONFIG_PUSH_PULL		0x0000
+ #define I2C_SOFT_RST			0x0001
++#define I2C_HANDSHAKE_RST		0x0020
+ #define I2C_FIFO_ADDR_CLR		0x0001
+ #define I2C_DELAY_LEN			0x0002
+ #define I2C_TIME_CLR_VALUE		0x0000
+@@ -45,6 +46,7 @@
+ #define I2C_WRRD_TRANAC_VALUE		0x0002
+ #define I2C_RD_TRANAC_VALUE		0x0001
+ #define I2C_SCL_MIS_COMP_VALUE		0x0000
++#define I2C_CHN_CLR_FLAG		0x0000
+ 
+ #define I2C_DMA_CON_TX			0x0000
+ #define I2C_DMA_CON_RX			0x0001
+@@ -54,7 +56,9 @@
+ #define I2C_DMA_START_EN		0x0001
+ #define I2C_DMA_INT_FLAG_NONE		0x0000
+ #define I2C_DMA_CLR_FLAG		0x0000
++#define I2C_DMA_WARM_RST		0x0001
+ #define I2C_DMA_HARD_RST		0x0002
++#define I2C_DMA_HANDSHAKE_RST		0x0004
+ 
+ #define MAX_SAMPLE_CNT_DIV		8
+ #define MAX_STEP_CNT_DIV		64
+@@ -475,11 +479,24 @@ static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
+ {
+ 	u16 control_reg;
+ 
+-	writel(I2C_DMA_HARD_RST, i2c->pdmabase + OFFSET_RST);
+-	udelay(50);
+-	writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST);
+-
+-	mtk_i2c_writew(i2c, I2C_SOFT_RST, OFFSET_SOFTRESET);
++	if (i2c->dev_comp->dma_sync) {
++		writel(I2C_DMA_WARM_RST, i2c->pdmabase + OFFSET_RST);
++		udelay(10);
++		writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST);
++		udelay(10);
++		writel(I2C_DMA_HANDSHAKE_RST | I2C_DMA_HARD_RST,
++		       i2c->pdmabase + OFFSET_RST);
++		mtk_i2c_writew(i2c, I2C_HANDSHAKE_RST | I2C_SOFT_RST,
++			       OFFSET_SOFTRESET);
++		udelay(10);
++		writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST);
++		mtk_i2c_writew(i2c, I2C_CHN_CLR_FLAG, OFFSET_SOFTRESET);
++	} else {
++		writel(I2C_DMA_HARD_RST, i2c->pdmabase + OFFSET_RST);
++		udelay(50);
++		writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST);
++		mtk_i2c_writew(i2c, I2C_SOFT_RST, OFFSET_SOFTRESET);
++	}
+ 
+ 	/* Set ioconfig */
+ 	if (i2c->use_push_pull)
+diff --git a/drivers/i2c/busses/i2c-sprd.c b/drivers/i2c/busses/i2c-sprd.c
+index 19cda6742423d..2917fecf6c80d 100644
+--- a/drivers/i2c/busses/i2c-sprd.c
++++ b/drivers/i2c/busses/i2c-sprd.c
+@@ -72,6 +72,8 @@
+ 
+ /* timeout (ms) for pm runtime autosuspend */
+ #define SPRD_I2C_PM_TIMEOUT	1000
++/* timeout (ms) for transfer message */
++#define I2C_XFER_TIMEOUT	1000
+ 
+ /* SPRD i2c data structure */
+ struct sprd_i2c {
+@@ -244,6 +246,7 @@ static int sprd_i2c_handle_msg(struct i2c_adapter *i2c_adap,
+ 			       struct i2c_msg *msg, bool is_last_msg)
+ {
+ 	struct sprd_i2c *i2c_dev = i2c_adap->algo_data;
++	unsigned long time_left;
+ 
+ 	i2c_dev->msg = msg;
+ 	i2c_dev->buf = msg->buf;
+@@ -273,7 +276,10 @@ static int sprd_i2c_handle_msg(struct i2c_adapter *i2c_adap,
+ 
+ 	sprd_i2c_opt_start(i2c_dev);
+ 
+-	wait_for_completion(&i2c_dev->complete);
++	time_left = wait_for_completion_timeout(&i2c_dev->complete,
++				msecs_to_jiffies(I2C_XFER_TIMEOUT));
++	if (!time_left)
++		return -ETIMEDOUT;
+ 
+ 	return i2c_dev->err;
+ }
+diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c
+index 7dd3b6097226f..174b19e397124 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_ah.c
++++ b/drivers/infiniband/hw/hns/hns_roce_ah.c
+@@ -36,9 +36,6 @@
+ #include <rdma/ib_cache.h>
+ #include "hns_roce_device.h"
+ 
+-#define VLAN_SL_MASK 7
+-#define VLAN_SL_SHIFT 13
+-
+ static inline u16 get_ah_udp_sport(const struct rdma_ah_attr *ah_attr)
+ {
+ 	u32 fl = ah_attr->grh.flow_label;
+@@ -81,18 +78,12 @@ int hns_roce_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
+ 
+ 	/* HIP08 needs to record vlan info in Address Vector */
+ 	if (hr_dev->pci_dev->revision <= PCI_REVISION_ID_HIP08) {
+-		ah->av.vlan_en = 0;
+-
+ 		ret = rdma_read_gid_l2_fields(ah_attr->grh.sgid_attr,
+ 					      &ah->av.vlan_id, NULL);
+ 		if (ret)
+ 			return ret;
+ 
+-		if (ah->av.vlan_id < VLAN_N_VID) {
+-			ah->av.vlan_en = 1;
+-			ah->av.vlan_id |= (rdma_ah_get_sl(ah_attr) & VLAN_SL_MASK) <<
+-					  VLAN_SL_SHIFT;
+-		}
++		ah->av.vlan_en = ah->av.vlan_id < VLAN_N_VID;
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/interconnect/imx/imx.c b/drivers/interconnect/imx/imx.c
+index 41dba7090c2ae..e398ebf1dbbab 100644
+--- a/drivers/interconnect/imx/imx.c
++++ b/drivers/interconnect/imx/imx.c
+@@ -99,6 +99,7 @@ static int imx_icc_node_init_qos(struct icc_provider *provider,
+ 		if (!dn || !of_device_is_available(dn)) {
+ 			dev_warn(dev, "Missing property %s, skip scaling %s\n",
+ 				 adj->phandle_name, node->name);
++			of_node_put(dn);
+ 			return 0;
+ 		}
+ 
+diff --git a/drivers/interconnect/qcom/Kconfig b/drivers/interconnect/qcom/Kconfig
+index a8f93ba265f81..b3fb5b02bcf1e 100644
+--- a/drivers/interconnect/qcom/Kconfig
++++ b/drivers/interconnect/qcom/Kconfig
+@@ -42,13 +42,23 @@ config INTERCONNECT_QCOM_QCS404
+ 	  This is a driver for the Qualcomm Network-on-Chip on qcs404-based
+ 	  platforms.
+ 
++config INTERCONNECT_QCOM_RPMH_POSSIBLE
++	tristate
++	default INTERCONNECT_QCOM
++	depends on QCOM_RPMH || (COMPILE_TEST && !QCOM_RPMH)
++	depends on QCOM_COMMAND_DB || (COMPILE_TEST && !QCOM_COMMAND_DB)
++	depends on OF || COMPILE_TEST
++	help
++	  Compile-testing RPMH drivers is possible on other platforms,
++	  but in order to avoid link failures, drivers must not be built-in
++	  when QCOM_RPMH or QCOM_COMMAND_DB are loadable modules
++
+ config INTERCONNECT_QCOM_RPMH
+ 	tristate
+ 
+ config INTERCONNECT_QCOM_SC7180
+ 	tristate "Qualcomm SC7180 interconnect driver"
+-	depends on INTERCONNECT_QCOM
+-	depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST
++	depends on INTERCONNECT_QCOM_RPMH_POSSIBLE
+ 	select INTERCONNECT_QCOM_RPMH
+ 	select INTERCONNECT_QCOM_BCM_VOTER
+ 	help
+@@ -57,8 +67,7 @@ config INTERCONNECT_QCOM_SC7180
+ 
+ config INTERCONNECT_QCOM_SDM845
+ 	tristate "Qualcomm SDM845 interconnect driver"
+-	depends on INTERCONNECT_QCOM
+-	depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST
++	depends on INTERCONNECT_QCOM_RPMH_POSSIBLE
+ 	select INTERCONNECT_QCOM_RPMH
+ 	select INTERCONNECT_QCOM_BCM_VOTER
+ 	help
+@@ -67,8 +76,7 @@ config INTERCONNECT_QCOM_SDM845
+ 
+ config INTERCONNECT_QCOM_SM8150
+ 	tristate "Qualcomm SM8150 interconnect driver"
+-	depends on INTERCONNECT_QCOM
+-	depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST
++	depends on INTERCONNECT_QCOM_RPMH_POSSIBLE
+ 	select INTERCONNECT_QCOM_RPMH
+ 	select INTERCONNECT_QCOM_BCM_VOTER
+ 	help
+@@ -77,8 +85,7 @@ config INTERCONNECT_QCOM_SM8150
+ 
+ config INTERCONNECT_QCOM_SM8250
+ 	tristate "Qualcomm SM8250 interconnect driver"
+-	depends on INTERCONNECT_QCOM
+-	depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST
++	depends on INTERCONNECT_QCOM_RPMH_POSSIBLE
+ 	select INTERCONNECT_QCOM_RPMH
+ 	select INTERCONNECT_QCOM_BCM_VOTER
+ 	help
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index ef37ccfa82562..0eba5e883e3f1 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -55,6 +55,8 @@ static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
+ 
+ 		set_bit(qsmmu->bypass_cbndx, smmu->context_map);
+ 
++		arm_smmu_cb_write(smmu, qsmmu->bypass_cbndx, ARM_SMMU_CB_SCTLR, 0);
++
+ 		reg = FIELD_PREP(ARM_SMMU_CBAR_TYPE, CBAR_TYPE_S1_TRANS_S2_BYPASS);
+ 		arm_smmu_gr1_write(smmu, ARM_SMMU_GR1_CBAR(qsmmu->bypass_cbndx), reg);
+ 	}
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index b46dbfa6d0ed6..004feaed3c72c 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -1461,8 +1461,8 @@ void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
+ 		int mask = ilog2(__roundup_pow_of_two(npages));
+ 		unsigned long align = (1ULL << (VTD_PAGE_SHIFT + mask));
+ 
+-		if (WARN_ON_ONCE(!ALIGN(addr, align)))
+-			addr &= ~(align - 1);
++		if (WARN_ON_ONCE(!IS_ALIGNED(addr, align)))
++			addr = ALIGN_DOWN(addr, align);
+ 
+ 		desc.qw0 = QI_EIOTLB_PASID(pasid) |
+ 				QI_EIOTLB_DID(did) |
+diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
+index 0cfce1d3b7bbd..aedaae4630bc8 100644
+--- a/drivers/iommu/intel/irq_remapping.c
++++ b/drivers/iommu/intel/irq_remapping.c
+@@ -1390,6 +1390,8 @@ static int intel_irq_remapping_alloc(struct irq_domain *domain,
+ 		irq_data = irq_domain_get_irq_data(domain, virq + i);
+ 		irq_cfg = irqd_cfg(irq_data);
+ 		if (!irq_data || !irq_cfg) {
++			if (!i)
++				kfree(data);
+ 			ret = -EINVAL;
+ 			goto out_free_data;
+ 		}
+diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
+index 8f39f9ba5c80e..4c2ce210c1237 100644
+--- a/drivers/lightnvm/Kconfig
++++ b/drivers/lightnvm/Kconfig
+@@ -19,6 +19,7 @@ if NVM
+ 
+ config NVM_PBLK
+ 	tristate "Physical Block Device Open-Channel SSD target"
++	select CRC32
+ 	help
+ 	  Allows an open-channel SSD to be exposed as a block device to the
+ 	  host. The target assumes the device exposes raw flash and must be
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index aa4531c2ce0df..a148b92ad8563 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1341,6 +1341,12 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
+ 	bcache_device_link(&dc->disk, c, "bdev");
+ 	atomic_inc(&c->attached_dev_nr);
+ 
++	if (bch_has_feature_obso_large_bucket(&(c->cache->sb))) {
++		pr_err("The obsoleted large bucket layout is unsupported, set the bcache device into read-only\n");
++		pr_err("Please update to the latest bcache-tools to create the cache device\n");
++		set_disk_ro(dc->disk.disk, 1);
++	}
++
+ 	/* Allow the writeback thread to proceed */
+ 	up_write(&dc->writeback_lock);
+ 
+@@ -1564,6 +1570,12 @@ static int flash_dev_run(struct cache_set *c, struct uuid_entry *u)
+ 
+ 	bcache_device_link(d, c, "volume");
+ 
++	if (bch_has_feature_obso_large_bucket(&c->cache->sb)) {
++		pr_err("The obsoleted large bucket layout is unsupported, set the bcache device into read-only\n");
++		pr_err("Please update to the latest bcache-tools to create the cache device\n");
++		set_disk_ro(d->disk, 1);
++	}
++
+ 	return 0;
+ err:
+ 	kobject_put(&d->kobj);
+@@ -2123,6 +2135,9 @@ static int run_cache_set(struct cache_set *c)
+ 	c->cache->sb.last_mount = (u32)ktime_get_real_seconds();
+ 	bcache_write_super(c);
+ 
++	if (bch_has_feature_obso_large_bucket(&c->cache->sb))
++		pr_err("Detect obsoleted large bucket layout, all attached bcache device will be read-only\n");
++
+ 	list_for_each_entry_safe(dc, t, &uncached_devices, list)
+ 		bch_cached_dev_attach(dc, c, NULL);
+ 
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index 3b320f3d48b30..59c1724bcd0ed 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -645,11 +645,20 @@ static int bareudp_link_config(struct net_device *dev,
+ 	return 0;
+ }
+ 
++static void bareudp_dellink(struct net_device *dev, struct list_head *head)
++{
++	struct bareudp_dev *bareudp = netdev_priv(dev);
++
++	list_del(&bareudp->next);
++	unregister_netdevice_queue(dev, head);
++}
++
+ static int bareudp_newlink(struct net *net, struct net_device *dev,
+ 			   struct nlattr *tb[], struct nlattr *data[],
+ 			   struct netlink_ext_ack *extack)
+ {
+ 	struct bareudp_conf conf;
++	LIST_HEAD(list_kill);
+ 	int err;
+ 
+ 	err = bareudp2info(data, &conf, extack);
+@@ -662,17 +671,14 @@ static int bareudp_newlink(struct net *net, struct net_device *dev,
+ 
+ 	err = bareudp_link_config(dev, tb);
+ 	if (err)
+-		return err;
++		goto err_unconfig;
+ 
+ 	return 0;
+-}
+-
+-static void bareudp_dellink(struct net_device *dev, struct list_head *head)
+-{
+-	struct bareudp_dev *bareudp = netdev_priv(dev);
+ 
+-	list_del(&bareudp->next);
+-	unregister_netdevice_queue(dev, head);
++err_unconfig:
++	bareudp_dellink(dev, &list_kill);
++	unregister_netdevice_many(&list_kill);
++	return err;
+ }
+ 
+ static size_t bareudp_get_size(const struct net_device *dev)
+diff --git a/drivers/net/can/Kconfig b/drivers/net/can/Kconfig
+index 424970939fd4c..1c28eade6becc 100644
+--- a/drivers/net/can/Kconfig
++++ b/drivers/net/can/Kconfig
+@@ -123,6 +123,7 @@ config CAN_JANZ_ICAN3
+ config CAN_KVASER_PCIEFD
+ 	depends on PCI
+ 	tristate "Kvaser PCIe FD cards"
++	select CRC32
+ 	  help
+ 	  This is a driver for the Kvaser PCI Express CAN FD family.
+ 
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 7fc4ac1582afc..3c1e379751683 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -1914,8 +1914,6 @@ EXPORT_SYMBOL_GPL(m_can_class_resume);
+ void m_can_class_unregister(struct m_can_classdev *m_can_dev)
+ {
+ 	unregister_candev(m_can_dev->net);
+-
+-	m_can_clk_stop(m_can_dev);
+ }
+ EXPORT_SYMBOL_GPL(m_can_class_unregister);
+ 
+diff --git a/drivers/net/can/m_can/tcan4x5x.c b/drivers/net/can/m_can/tcan4x5x.c
+index 7347ab39c5b65..f726c5112294f 100644
+--- a/drivers/net/can/m_can/tcan4x5x.c
++++ b/drivers/net/can/m_can/tcan4x5x.c
+@@ -129,30 +129,6 @@ struct tcan4x5x_priv {
+ 	int reg_offset;
+ };
+ 
+-static struct can_bittiming_const tcan4x5x_bittiming_const = {
+-	.name = DEVICE_NAME,
+-	.tseg1_min = 2,
+-	.tseg1_max = 31,
+-	.tseg2_min = 2,
+-	.tseg2_max = 16,
+-	.sjw_max = 16,
+-	.brp_min = 1,
+-	.brp_max = 32,
+-	.brp_inc = 1,
+-};
+-
+-static struct can_bittiming_const tcan4x5x_data_bittiming_const = {
+-	.name = DEVICE_NAME,
+-	.tseg1_min = 1,
+-	.tseg1_max = 32,
+-	.tseg2_min = 1,
+-	.tseg2_max = 16,
+-	.sjw_max = 16,
+-	.brp_min = 1,
+-	.brp_max = 32,
+-	.brp_inc = 1,
+-};
+-
+ static void tcan4x5x_check_wake(struct tcan4x5x_priv *priv)
+ {
+ 	int wake_state = 0;
+@@ -479,8 +455,6 @@ static int tcan4x5x_can_probe(struct spi_device *spi)
+ 	mcan_class->dev = &spi->dev;
+ 	mcan_class->ops = &tcan4x5x_ops;
+ 	mcan_class->is_peripheral = true;
+-	mcan_class->bit_timing = &tcan4x5x_bittiming_const;
+-	mcan_class->data_timing = &tcan4x5x_data_bittiming_const;
+ 	mcan_class->net->irq = spi->irq;
+ 
+ 	spi_set_drvdata(spi, priv);
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 4b36d89bec061..662e68a0e7e61 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1436,11 +1436,12 @@ static void gswip_phylink_validate(struct dsa_switch *ds, int port,
+ 	phylink_set(mask, Pause);
+ 	phylink_set(mask, Asym_Pause);
+ 
+-	/* With the exclusion of MII and Reverse MII, we support Gigabit,
+-	 * including Half duplex
++	/* With the exclusion of MII, Reverse MII and Reduced MII, we
++	 * support Gigabit, including Half duplex
+ 	 */
+ 	if (state->interface != PHY_INTERFACE_MODE_MII &&
+-	    state->interface != PHY_INTERFACE_MODE_REVMII) {
++	    state->interface != PHY_INTERFACE_MODE_REVMII &&
++	    state->interface != PHY_INTERFACE_MODE_RMII) {
+ 		phylink_set(mask, 1000baseT_Full);
+ 		phylink_set(mask, 1000baseT_Half);
+ 	}
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+index 50e3a70e5a290..07a956098e11f 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+@@ -621,7 +621,7 @@ static void chtls_reset_synq(struct listen_ctx *listen_ctx)
+ 
+ 	while (!skb_queue_empty(&listen_ctx->synq)) {
+ 		struct chtls_sock *csk =
+-			container_of((struct synq *)__skb_dequeue
++			container_of((struct synq *)skb_peek
+ 				(&listen_ctx->synq), struct chtls_sock, synq);
+ 		struct sock *child = csk->sk;
+ 
+@@ -1109,6 +1109,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
+ 				    const struct cpl_pass_accept_req *req,
+ 				    struct chtls_dev *cdev)
+ {
++	struct adapter *adap = pci_get_drvdata(cdev->pdev);
+ 	struct neighbour *n = NULL;
+ 	struct inet_sock *newinet;
+ 	const struct iphdr *iph;
+@@ -1118,9 +1119,10 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
+ 	struct dst_entry *dst;
+ 	struct tcp_sock *tp;
+ 	struct sock *newsk;
++	bool found = false;
+ 	u16 port_id;
+ 	int rxq_idx;
+-	int step;
++	int step, i;
+ 
+ 	iph = (const struct iphdr *)network_hdr;
+ 	newsk = tcp_create_openreq_child(lsk, oreq, cdev->askb);
+@@ -1152,7 +1154,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
+ 		n = dst_neigh_lookup(dst, &ip6h->saddr);
+ #endif
+ 	}
+-	if (!n)
++	if (!n || !n->dev)
+ 		goto free_sk;
+ 
+ 	ndev = n->dev;
+@@ -1161,6 +1163,13 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
+ 	if (is_vlan_dev(ndev))
+ 		ndev = vlan_dev_real_dev(ndev);
+ 
++	for_each_port(adap, i)
++		if (cdev->ports[i] == ndev)
++			found = true;
++
++	if (!found)
++		goto free_dst;
++
+ 	port_id = cxgb4_port_idx(ndev);
+ 
+ 	csk = chtls_sock_create(cdev);
+@@ -1237,6 +1246,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
+ free_csk:
+ 	chtls_sock_release(&csk->kref);
+ free_dst:
++	neigh_release(n);
+ 	dst_release(dst);
+ free_sk:
+ 	inet_csk_prepare_forced_close(newsk);
+@@ -1386,7 +1396,7 @@ static void chtls_pass_accept_request(struct sock *sk,
+ 
+ 	newsk = chtls_recv_sock(sk, oreq, network_hdr, req, cdev);
+ 	if (!newsk)
+-		goto free_oreq;
++		goto reject;
+ 
+ 	if (chtls_get_module(newsk))
+ 		goto reject;
+@@ -1402,8 +1412,6 @@ static void chtls_pass_accept_request(struct sock *sk,
+ 	kfree_skb(skb);
+ 	return;
+ 
+-free_oreq:
+-	chtls_reqsk_free(oreq);
+ reject:
+ 	mk_tid_release(reply_skb, 0, tid);
+ 	cxgb4_ofld_send(cdev->lldi->ports[0], reply_skb);
+@@ -1588,6 +1596,11 @@ static int chtls_pass_establish(struct chtls_dev *cdev, struct sk_buff *skb)
+ 			sk_wake_async(sk, 0, POLL_OUT);
+ 
+ 		data = lookup_stid(cdev->tids, stid);
++		if (!data) {
++			/* listening server close */
++			kfree_skb(skb);
++			goto unlock;
++		}
+ 		lsk = ((struct listen_ctx *)data)->lsk;
+ 
+ 		bh_lock_sock(lsk);
+@@ -1996,39 +2009,6 @@ static void t4_defer_reply(struct sk_buff *skb, struct chtls_dev *cdev,
+ 	spin_unlock_bh(&cdev->deferq.lock);
+ }
+ 
+-static void send_abort_rpl(struct sock *sk, struct sk_buff *skb,
+-			   struct chtls_dev *cdev, int status, int queue)
+-{
+-	struct cpl_abort_req_rss *req = cplhdr(skb);
+-	struct sk_buff *reply_skb;
+-	struct chtls_sock *csk;
+-
+-	csk = rcu_dereference_sk_user_data(sk);
+-
+-	reply_skb = alloc_skb(sizeof(struct cpl_abort_rpl),
+-			      GFP_KERNEL);
+-
+-	if (!reply_skb) {
+-		req->status = (queue << 1);
+-		t4_defer_reply(skb, cdev, send_defer_abort_rpl);
+-		return;
+-	}
+-
+-	set_abort_rpl_wr(reply_skb, GET_TID(req), status);
+-	kfree_skb(skb);
+-
+-	set_wr_txq(reply_skb, CPL_PRIORITY_DATA, queue);
+-	if (csk_conn_inline(csk)) {
+-		struct l2t_entry *e = csk->l2t_entry;
+-
+-		if (e && sk->sk_state != TCP_SYN_RECV) {
+-			cxgb4_l2t_send(csk->egress_dev, reply_skb, e);
+-			return;
+-		}
+-	}
+-	cxgb4_ofld_send(cdev->lldi->ports[0], reply_skb);
+-}
+-
+ static void chtls_send_abort_rpl(struct sock *sk, struct sk_buff *skb,
+ 				 struct chtls_dev *cdev,
+ 				 int status, int queue)
+@@ -2077,9 +2057,9 @@ static void bl_abort_syn_rcv(struct sock *lsk, struct sk_buff *skb)
+ 	queue = csk->txq_idx;
+ 
+ 	skb->sk	= NULL;
++	chtls_send_abort_rpl(child, skb, BLOG_SKB_CB(skb)->cdev,
++			     CPL_ABORT_NO_RST, queue);
+ 	do_abort_syn_rcv(child, lsk);
+-	send_abort_rpl(child, skb, BLOG_SKB_CB(skb)->cdev,
+-		       CPL_ABORT_NO_RST, queue);
+ }
+ 
+ static int abort_syn_rcv(struct sock *sk, struct sk_buff *skb)
+@@ -2109,8 +2089,8 @@ static int abort_syn_rcv(struct sock *sk, struct sk_buff *skb)
+ 	if (!sock_owned_by_user(psk)) {
+ 		int queue = csk->txq_idx;
+ 
++		chtls_send_abort_rpl(sk, skb, cdev, CPL_ABORT_NO_RST, queue);
+ 		do_abort_syn_rcv(sk, psk);
+-		send_abort_rpl(sk, skb, cdev, CPL_ABORT_NO_RST, queue);
+ 	} else {
+ 		skb->sk = sk;
+ 		BLOG_SKB_CB(skb)->backlog_rcv = bl_abort_syn_rcv;
+@@ -2128,9 +2108,6 @@ static void chtls_abort_req_rss(struct sock *sk, struct sk_buff *skb)
+ 	int queue = csk->txq_idx;
+ 
+ 	if (is_neg_adv(req->status)) {
+-		if (sk->sk_state == TCP_SYN_RECV)
+-			chtls_set_tcb_tflag(sk, 0, 0);
+-
+ 		kfree_skb(skb);
+ 		return;
+ 	}
+@@ -2157,12 +2134,12 @@ static void chtls_abort_req_rss(struct sock *sk, struct sk_buff *skb)
+ 		if (sk->sk_state == TCP_SYN_RECV && !abort_syn_rcv(sk, skb))
+ 			return;
+ 
+-		chtls_release_resources(sk);
+-		chtls_conn_done(sk);
+ 	}
+ 
+ 	chtls_send_abort_rpl(sk, skb, BLOG_SKB_CB(skb)->cdev,
+ 			     rst_status, queue);
++	chtls_release_resources(sk);
++	chtls_conn_done(sk);
+ }
+ 
+ static void chtls_abort_rpl_rss(struct sock *sk, struct sk_buff *skb)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
+index 1ffe8fac702d9..98a9f5e3fe864 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
+@@ -168,7 +168,7 @@ struct hclgevf_mbx_arq_ring {
+ #define hclge_mbx_ring_ptr_move_crq(crq) \
+ 	(crq->next_to_use = (crq->next_to_use + 1) % crq->desc_num)
+ #define hclge_mbx_tail_ptr_move_arq(arq) \
+-	(arq.tail = (arq.tail + 1) % HCLGE_MBX_MAX_ARQ_MSG_SIZE)
++		(arq.tail = (arq.tail + 1) % HCLGE_MBX_MAX_ARQ_MSG_NUM)
+ #define hclge_mbx_head_ptr_move_arq(arq) \
+-		(arq.head = (arq.head + 1) % HCLGE_MBX_MAX_ARQ_MSG_SIZE)
++		(arq.head = (arq.head + 1) % HCLGE_MBX_MAX_ARQ_MSG_NUM)
+ #endif
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 1f026408ad38b..4321132a4f630 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -752,7 +752,8 @@ static int hclge_get_sset_count(struct hnae3_handle *handle, int stringset)
+ 		handle->flags |= HNAE3_SUPPORT_SERDES_SERIAL_LOOPBACK;
+ 		handle->flags |= HNAE3_SUPPORT_SERDES_PARALLEL_LOOPBACK;
+ 
+-		if (hdev->hw.mac.phydev) {
++		if (hdev->hw.mac.phydev && hdev->hw.mac.phydev->drv &&
++		    hdev->hw.mac.phydev->drv->set_loopback) {
+ 			count += 1;
+ 			handle->flags |= HNAE3_SUPPORT_PHY_LOOPBACK;
+ 		}
+@@ -4484,8 +4485,8 @@ static int hclge_set_rss_tuple(struct hnae3_handle *handle,
+ 		req->ipv4_sctp_en = tuple_sets;
+ 		break;
+ 	case SCTP_V6_FLOW:
+-		if ((nfc->data & RXH_L4_B_0_1) ||
+-		    (nfc->data & RXH_L4_B_2_3))
++		if (hdev->ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2 &&
++		    (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)))
+ 			return -EINVAL;
+ 
+ 		req->ipv6_sctp_en = tuple_sets;
+@@ -4665,6 +4666,8 @@ static void hclge_rss_init_cfg(struct hclge_dev *hdev)
+ 		vport[i].rss_tuple_sets.ipv6_udp_en =
+ 			HCLGE_RSS_INPUT_TUPLE_OTHER;
+ 		vport[i].rss_tuple_sets.ipv6_sctp_en =
++			hdev->ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2 ?
++			HCLGE_RSS_INPUT_TUPLE_SCTP_NO_PORT :
+ 			HCLGE_RSS_INPUT_TUPLE_SCTP;
+ 		vport[i].rss_tuple_sets.ipv6_fragment_en =
+ 			HCLGE_RSS_INPUT_TUPLE_OTHER;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+index 64e6afdb61b8d..213ac73f94cdd 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+@@ -105,6 +105,8 @@
+ #define HCLGE_D_IP_BIT			BIT(2)
+ #define HCLGE_S_IP_BIT			BIT(3)
+ #define HCLGE_V_TAG_BIT			BIT(4)
++#define HCLGE_RSS_INPUT_TUPLE_SCTP_NO_PORT	\
++		(HCLGE_D_IP_BIT | HCLGE_S_IP_BIT | HCLGE_V_TAG_BIT)
+ 
+ #define HCLGE_RSS_TC_SIZE_0		1
+ #define HCLGE_RSS_TC_SIZE_1		2
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index c8e3fdd5999c4..dc5d150a9c546 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -901,8 +901,8 @@ static int hclgevf_set_rss_tuple(struct hnae3_handle *handle,
+ 		req->ipv4_sctp_en = tuple_sets;
+ 		break;
+ 	case SCTP_V6_FLOW:
+-		if ((nfc->data & RXH_L4_B_0_1) ||
+-		    (nfc->data & RXH_L4_B_2_3))
++		if (hdev->ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2 &&
++		    (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)))
+ 			return -EINVAL;
+ 
+ 		req->ipv6_sctp_en = tuple_sets;
+@@ -2481,7 +2481,10 @@ static void hclgevf_rss_init_cfg(struct hclgevf_dev *hdev)
+ 		tuple_sets->ipv4_fragment_en = HCLGEVF_RSS_INPUT_TUPLE_OTHER;
+ 		tuple_sets->ipv6_tcp_en = HCLGEVF_RSS_INPUT_TUPLE_OTHER;
+ 		tuple_sets->ipv6_udp_en = HCLGEVF_RSS_INPUT_TUPLE_OTHER;
+-		tuple_sets->ipv6_sctp_en = HCLGEVF_RSS_INPUT_TUPLE_SCTP;
++		tuple_sets->ipv6_sctp_en =
++			hdev->ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2 ?
++					HCLGEVF_RSS_INPUT_TUPLE_SCTP_NO_PORT :
++					HCLGEVF_RSS_INPUT_TUPLE_SCTP;
+ 		tuple_sets->ipv6_fragment_en = HCLGEVF_RSS_INPUT_TUPLE_OTHER;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+index c5bcc3894fd54..526a62f970466 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+@@ -122,6 +122,8 @@
+ #define HCLGEVF_D_IP_BIT		BIT(2)
+ #define HCLGEVF_S_IP_BIT		BIT(3)
+ #define HCLGEVF_V_TAG_BIT		BIT(4)
++#define HCLGEVF_RSS_INPUT_TUPLE_SCTP_NO_PORT	\
++	(HCLGEVF_D_IP_BIT | HCLGEVF_S_IP_BIT | HCLGEVF_V_TAG_BIT)
+ 
+ #define HCLGEVF_STATS_TIMER_INTERVAL	36U
+ 
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index c3e429445b83d..ceb4f27898002 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -4409,7 +4409,7 @@ static int mvneta_xdp_setup(struct net_device *dev, struct bpf_prog *prog,
+ 	struct bpf_prog *old_prog;
+ 
+ 	if (prog && dev->mtu > MVNETA_MAX_RX_BUF_SIZE) {
+-		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
++		NL_SET_ERR_MSG_MOD(extack, "MTU too large for XDP");
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 2026c923b5855..2dcdec3eacc36 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -5480,7 +5480,7 @@ static int mvpp2_port_init(struct mvpp2_port *port)
+ 	struct mvpp2 *priv = port->priv;
+ 	struct mvpp2_txq_pcpu *txq_pcpu;
+ 	unsigned int thread;
+-	int queue, err;
++	int queue, err, val;
+ 
+ 	/* Checks for hardware constraints */
+ 	if (port->first_rxq + port->nrxqs >
+@@ -5494,6 +5494,18 @@ static int mvpp2_port_init(struct mvpp2_port *port)
+ 	mvpp2_egress_disable(port);
+ 	mvpp2_port_disable(port);
+ 
++	if (mvpp2_is_xlg(port->phy_interface)) {
++		val = readl(port->base + MVPP22_XLG_CTRL0_REG);
++		val &= ~MVPP22_XLG_CTRL0_FORCE_LINK_PASS;
++		val |= MVPP22_XLG_CTRL0_FORCE_LINK_DOWN;
++		writel(val, port->base + MVPP22_XLG_CTRL0_REG);
++	} else {
++		val = readl(port->base + MVPP2_GMAC_AUTONEG_CONFIG);
++		val &= ~MVPP2_GMAC_FORCE_LINK_PASS;
++		val |= MVPP2_GMAC_FORCE_LINK_DOWN;
++		writel(val, port->base + MVPP2_GMAC_AUTONEG_CONFIG);
++	}
++
+ 	port->tx_time_coal = MVPP2_TXDONE_COAL_USEC;
+ 
+ 	port->txqs = devm_kcalloc(dev, port->ntxqs, sizeof(*port->txqs),
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 8f17e26dca538..fc27a40202c6d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -862,8 +862,10 @@ static int cgx_lmac_init(struct cgx *cgx)
+ 		if (!lmac)
+ 			return -ENOMEM;
+ 		lmac->name = kcalloc(1, sizeof("cgx_fwi_xxx_yyy"), GFP_KERNEL);
+-		if (!lmac->name)
+-			return -ENOMEM;
++		if (!lmac->name) {
++			err = -ENOMEM;
++			goto err_lmac_free;
++		}
+ 		sprintf(lmac->name, "cgx_fwi_%d_%d", cgx->cgx_id, i);
+ 		lmac->lmac_id = i;
+ 		lmac->cgx = cgx;
+@@ -874,7 +876,7 @@ static int cgx_lmac_init(struct cgx *cgx)
+ 						 CGX_LMAC_FWI + i * 9),
+ 				   cgx_fwi_event_handler, 0, lmac->name, lmac);
+ 		if (err)
+-			return err;
++			goto err_irq;
+ 
+ 		/* Enable interrupt */
+ 		cgx_write(cgx, lmac->lmac_id, CGXX_CMRX_INT_ENA_W1S,
+@@ -886,6 +888,12 @@ static int cgx_lmac_init(struct cgx *cgx)
+ 	}
+ 
+ 	return cgx_lmac_verify_fwi_version(cgx);
++
++err_irq:
++	kfree(lmac->name);
++err_lmac_free:
++	kfree(lmac);
++	return err;
+ }
+ 
+ static int cgx_lmac_exit(struct cgx *cgx)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+index d29af7b9c695a..76177f7c5ec29 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+@@ -626,6 +626,11 @@ bool mlx5e_rep_tc_update_skb(struct mlx5_cqe64 *cqe,
+ 	if (!reg_c0)
+ 		return true;
+ 
++	/* If reg_c0 is not equal to the default flow tag then skb->mark
++	 * is not supported and must be reset back to 0.
++	 */
++	skb->mark = 0;
++
+ 	priv = netdev_priv(skb->dev);
+ 	esw = priv->mdev->priv.eswitch;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index d25a56ec6876a..f01395a9fd8df 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1007,6 +1007,22 @@ static int mlx5e_get_link_ksettings(struct net_device *netdev,
+ 	return mlx5e_ethtool_get_link_ksettings(priv, link_ksettings);
+ }
+ 
++static int mlx5e_speed_validate(struct net_device *netdev, bool ext,
++				const unsigned long link_modes, u8 autoneg)
++{
++	/* Extended link-mode has no speed limitations. */
++	if (ext)
++		return 0;
++
++	if ((link_modes & MLX5E_PROT_MASK(MLX5E_56GBASE_R4)) &&
++	    autoneg != AUTONEG_ENABLE) {
++		netdev_err(netdev, "%s: 56G link speed requires autoneg enabled\n",
++			   __func__);
++		return -EINVAL;
++	}
++	return 0;
++}
++
+ static u32 mlx5e_ethtool2ptys_adver_link(const unsigned long *link_modes)
+ {
+ 	u32 i, ptys_modes = 0;
+@@ -1100,13 +1116,9 @@ int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv,
+ 	link_modes = autoneg == AUTONEG_ENABLE ? ethtool2ptys_adver_func(adver) :
+ 		mlx5e_port_speed2linkmodes(mdev, speed, !ext);
+ 
+-	if ((link_modes & MLX5E_PROT_MASK(MLX5E_56GBASE_R4)) &&
+-	    autoneg != AUTONEG_ENABLE) {
+-		netdev_err(priv->netdev, "%s: 56G link speed requires autoneg enabled\n",
+-			   __func__);
+-		err = -EINVAL;
++	err = mlx5e_speed_validate(priv->netdev, ext, link_modes, autoneg);
++	if (err)
+ 		goto out;
+-	}
+ 
+ 	link_modes = link_modes & eproto.cap;
+ 	if (!link_modes) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+index 1f48f99c0997d..7ad332d8625b9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+@@ -936,6 +936,7 @@ static int mlx5e_create_ttc_table_groups(struct mlx5e_ttc_table *ttc,
+ 	in = kvzalloc(inlen, GFP_KERNEL);
+ 	if (!in) {
+ 		kfree(ft->g);
++		ft->g = NULL;
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -1081,6 +1082,7 @@ static int mlx5e_create_inner_ttc_table_groups(struct mlx5e_ttc_table *ttc)
+ 	in = kvzalloc(inlen, GFP_KERNEL);
+ 	if (!in) {
+ 		kfree(ft->g);
++		ft->g = NULL;
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -1384,6 +1386,7 @@ err_destroy_groups:
+ 	ft->g[ft->num_groups] = NULL;
+ 	mlx5e_destroy_groups(ft);
+ 	kvfree(in);
++	kfree(ft->g);
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+index 33081b24f10aa..9025e5f38bb65 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+@@ -556,7 +556,9 @@ void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev)
+ 	struct mlx5_core_dev *tmp_dev;
+ 	int i, err;
+ 
+-	if (!MLX5_CAP_GEN(dev, vport_group_manager))
++	if (!MLX5_CAP_GEN(dev, vport_group_manager) ||
++	    !MLX5_CAP_GEN(dev, lag_master) ||
++	    MLX5_CAP_GEN(dev, num_lag_ports) != MLX5_MAX_PORTS)
+ 		return;
+ 
+ 	tmp_dev = mlx5_get_next_phys_dev(dev);
+@@ -574,12 +576,9 @@ void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev)
+ 	if (mlx5_lag_dev_add_pf(ldev, dev, netdev) < 0)
+ 		return;
+ 
+-	for (i = 0; i < MLX5_MAX_PORTS; i++) {
+-		tmp_dev = ldev->pf[i].dev;
+-		if (!tmp_dev || !MLX5_CAP_GEN(tmp_dev, lag_master) ||
+-		    MLX5_CAP_GEN(tmp_dev, num_lag_ports) != MLX5_MAX_PORTS)
++	for (i = 0; i < MLX5_MAX_PORTS; i++)
++		if (!ldev->pf[i].dev)
+ 			break;
+-	}
+ 
+ 	if (i >= MLX5_MAX_PORTS)
+ 		ldev->flags |= MLX5_LAG_FLAG_READY;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/rdma.c b/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
+index 0fc7de4aa572f..8e0dddc6383f0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
+@@ -116,7 +116,7 @@ free:
+ static void mlx5_rdma_del_roce_addr(struct mlx5_core_dev *dev)
+ {
+ 	mlx5_core_roce_gid_set(dev, 0, 0, 0,
+-			       NULL, NULL, false, 0, 0);
++			       NULL, NULL, false, 0, 1);
+ }
+ 
+ static void mlx5_rdma_make_default_gid(struct mlx5_core_dev *dev, union ib_gid *gid)
+diff --git a/drivers/net/ethernet/natsemi/macsonic.c b/drivers/net/ethernet/natsemi/macsonic.c
+index 776b7d264dc34..2289e1fe37419 100644
+--- a/drivers/net/ethernet/natsemi/macsonic.c
++++ b/drivers/net/ethernet/natsemi/macsonic.c
+@@ -506,10 +506,14 @@ static int mac_sonic_platform_probe(struct platform_device *pdev)
+ 
+ 	err = register_netdev(dev);
+ 	if (err)
+-		goto out;
++		goto undo_probe;
+ 
+ 	return 0;
+ 
++undo_probe:
++	dma_free_coherent(lp->device,
++			  SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode),
++			  lp->descriptors, lp->descriptors_laddr);
+ out:
+ 	free_netdev(dev);
+ 
+@@ -584,12 +588,16 @@ static int mac_sonic_nubus_probe(struct nubus_board *board)
+ 
+ 	err = register_netdev(ndev);
+ 	if (err)
+-		goto out;
++		goto undo_probe;
+ 
+ 	nubus_set_drvdata(board, ndev);
+ 
+ 	return 0;
+ 
++undo_probe:
++	dma_free_coherent(lp->device,
++			  SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode),
++			  lp->descriptors, lp->descriptors_laddr);
+ out:
+ 	free_netdev(ndev);
+ 	return err;
+diff --git a/drivers/net/ethernet/natsemi/xtsonic.c b/drivers/net/ethernet/natsemi/xtsonic.c
+index afa166ff7aef5..28d9e98db81a8 100644
+--- a/drivers/net/ethernet/natsemi/xtsonic.c
++++ b/drivers/net/ethernet/natsemi/xtsonic.c
+@@ -229,11 +229,14 @@ int xtsonic_probe(struct platform_device *pdev)
+ 	sonic_msg_init(dev);
+ 
+ 	if ((err = register_netdev(dev)))
+-		goto out1;
++		goto undo_probe1;
+ 
+ 	return 0;
+ 
+-out1:
++undo_probe1:
++	dma_free_coherent(lp->device,
++			  SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode),
++			  lp->descriptors, lp->descriptors_laddr);
+ 	release_region(dev->base_addr, SONIC_MEM_SIZE);
+ out:
+ 	free_netdev(dev);
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index c968c5c5a60a0..d0ae1cf43592d 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -123,6 +123,12 @@ static void ionic_link_status_check(struct ionic_lif *lif)
+ 	link_up = link_status == IONIC_PORT_OPER_STATUS_UP;
+ 
+ 	if (link_up) {
++		if (lif->netdev->flags & IFF_UP && netif_running(lif->netdev)) {
++			mutex_lock(&lif->queue_lock);
++			ionic_start_queues(lif);
++			mutex_unlock(&lif->queue_lock);
++		}
++
+ 		if (!netif_carrier_ok(netdev)) {
+ 			u32 link_speed;
+ 
+@@ -132,12 +138,6 @@ static void ionic_link_status_check(struct ionic_lif *lif)
+ 				    link_speed / 1000);
+ 			netif_carrier_on(netdev);
+ 		}
+-
+-		if (lif->netdev->flags & IFF_UP && netif_running(lif->netdev)) {
+-			mutex_lock(&lif->queue_lock);
+-			ionic_start_queues(lif);
+-			mutex_unlock(&lif->queue_lock);
+-		}
+ 	} else {
+ 		if (netif_carrier_ok(netdev)) {
+ 			netdev_info(netdev, "Link down\n");
+diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig
+index 4366c7a8de951..6b5ddb07ee833 100644
+--- a/drivers/net/ethernet/qlogic/Kconfig
++++ b/drivers/net/ethernet/qlogic/Kconfig
+@@ -78,6 +78,7 @@ config QED
+ 	depends on PCI
+ 	select ZLIB_INFLATE
+ 	select CRC8
++	select CRC32
+ 	select NET_DEVLINK
+ 	help
+ 	  This enables the support for Marvell FastLinQ adapters family.
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index 58e0511badba8..a5e0eff4a3874 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -64,6 +64,7 @@ struct emac_variant {
+  * @variant:	reference to the current board variant
+  * @regmap:	regmap for using the syscon
+  * @internal_phy_powered: Does the internal PHY is enabled
++ * @use_internal_phy: Is the internal PHY selected for use
+  * @mux_handle:	Internal pointer used by mdio-mux lib
+  */
+ struct sunxi_priv_data {
+@@ -74,6 +75,7 @@ struct sunxi_priv_data {
+ 	const struct emac_variant *variant;
+ 	struct regmap_field *regmap_field;
+ 	bool internal_phy_powered;
++	bool use_internal_phy;
+ 	void *mux_handle;
+ };
+ 
+@@ -539,8 +541,11 @@ static const struct stmmac_dma_ops sun8i_dwmac_dma_ops = {
+ 	.dma_interrupt = sun8i_dwmac_dma_interrupt,
+ };
+ 
++static int sun8i_dwmac_power_internal_phy(struct stmmac_priv *priv);
++
+ static int sun8i_dwmac_init(struct platform_device *pdev, void *priv)
+ {
++	struct net_device *ndev = platform_get_drvdata(pdev);
+ 	struct sunxi_priv_data *gmac = priv;
+ 	int ret;
+ 
+@@ -554,13 +559,25 @@ static int sun8i_dwmac_init(struct platform_device *pdev, void *priv)
+ 
+ 	ret = clk_prepare_enable(gmac->tx_clk);
+ 	if (ret) {
+-		if (gmac->regulator)
+-			regulator_disable(gmac->regulator);
+ 		dev_err(&pdev->dev, "Could not enable AHB clock\n");
+-		return ret;
++		goto err_disable_regulator;
++	}
++
++	if (gmac->use_internal_phy) {
++		ret = sun8i_dwmac_power_internal_phy(netdev_priv(ndev));
++		if (ret)
++			goto err_disable_clk;
+ 	}
+ 
+ 	return 0;
++
++err_disable_clk:
++	clk_disable_unprepare(gmac->tx_clk);
++err_disable_regulator:
++	if (gmac->regulator)
++		regulator_disable(gmac->regulator);
++
++	return ret;
+ }
+ 
+ static void sun8i_dwmac_core_init(struct mac_device_info *hw,
+@@ -831,7 +848,6 @@ static int mdio_mux_syscon_switch_fn(int current_child, int desired_child,
+ 	struct sunxi_priv_data *gmac = priv->plat->bsp_priv;
+ 	u32 reg, val;
+ 	int ret = 0;
+-	bool need_power_ephy = false;
+ 
+ 	if (current_child ^ desired_child) {
+ 		regmap_field_read(gmac->regmap_field, &reg);
+@@ -839,13 +855,12 @@ static int mdio_mux_syscon_switch_fn(int current_child, int desired_child,
+ 		case DWMAC_SUN8I_MDIO_MUX_INTERNAL_ID:
+ 			dev_info(priv->device, "Switch mux to internal PHY");
+ 			val = (reg & ~H3_EPHY_MUX_MASK) | H3_EPHY_SELECT;
+-
+-			need_power_ephy = true;
++			gmac->use_internal_phy = true;
+ 			break;
+ 		case DWMAC_SUN8I_MDIO_MUX_EXTERNAL_ID:
+ 			dev_info(priv->device, "Switch mux to external PHY");
+ 			val = (reg & ~H3_EPHY_MUX_MASK) | H3_EPHY_SHUTDOWN;
+-			need_power_ephy = false;
++			gmac->use_internal_phy = false;
+ 			break;
+ 		default:
+ 			dev_err(priv->device, "Invalid child ID %x\n",
+@@ -853,7 +868,7 @@ static int mdio_mux_syscon_switch_fn(int current_child, int desired_child,
+ 			return -EINVAL;
+ 		}
+ 		regmap_field_write(gmac->regmap_field, val);
+-		if (need_power_ephy) {
++		if (gmac->use_internal_phy) {
+ 			ret = sun8i_dwmac_power_internal_phy(priv);
+ 			if (ret)
+ 				return ret;
+@@ -883,22 +898,23 @@ static int sun8i_dwmac_register_mdio_mux(struct stmmac_priv *priv)
+ 	return ret;
+ }
+ 
+-static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
++static int sun8i_dwmac_set_syscon(struct device *dev,
++				  struct plat_stmmacenet_data *plat)
+ {
+-	struct sunxi_priv_data *gmac = priv->plat->bsp_priv;
+-	struct device_node *node = priv->device->of_node;
++	struct sunxi_priv_data *gmac = plat->bsp_priv;
++	struct device_node *node = dev->of_node;
+ 	int ret;
+ 	u32 reg, val;
+ 
+ 	ret = regmap_field_read(gmac->regmap_field, &val);
+ 	if (ret) {
+-		dev_err(priv->device, "Fail to read from regmap field.\n");
++		dev_err(dev, "Fail to read from regmap field.\n");
+ 		return ret;
+ 	}
+ 
+ 	reg = gmac->variant->default_syscon_value;
+ 	if (reg != val)
+-		dev_warn(priv->device,
++		dev_warn(dev,
+ 			 "Current syscon value is not the default %x (expect %x)\n",
+ 			 val, reg);
+ 
+@@ -911,9 +927,9 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
+ 		/* Force EPHY xtal frequency to 24MHz. */
+ 		reg |= H3_EPHY_CLK_SEL;
+ 
+-		ret = of_mdio_parse_addr(priv->device, priv->plat->phy_node);
++		ret = of_mdio_parse_addr(dev, plat->phy_node);
+ 		if (ret < 0) {
+-			dev_err(priv->device, "Could not parse MDIO addr\n");
++			dev_err(dev, "Could not parse MDIO addr\n");
+ 			return ret;
+ 		}
+ 		/* of_mdio_parse_addr returns a valid (0 ~ 31) PHY
+@@ -929,17 +945,17 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
+ 
+ 	if (!of_property_read_u32(node, "allwinner,tx-delay-ps", &val)) {
+ 		if (val % 100) {
+-			dev_err(priv->device, "tx-delay must be a multiple of 100\n");
++			dev_err(dev, "tx-delay must be a multiple of 100\n");
+ 			return -EINVAL;
+ 		}
+ 		val /= 100;
+-		dev_dbg(priv->device, "set tx-delay to %x\n", val);
++		dev_dbg(dev, "set tx-delay to %x\n", val);
+ 		if (val <= gmac->variant->tx_delay_max) {
+ 			reg &= ~(gmac->variant->tx_delay_max <<
+ 				 SYSCON_ETXDC_SHIFT);
+ 			reg |= (val << SYSCON_ETXDC_SHIFT);
+ 		} else {
+-			dev_err(priv->device, "Invalid TX clock delay: %d\n",
++			dev_err(dev, "Invalid TX clock delay: %d\n",
+ 				val);
+ 			return -EINVAL;
+ 		}
+@@ -947,17 +963,17 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
+ 
+ 	if (!of_property_read_u32(node, "allwinner,rx-delay-ps", &val)) {
+ 		if (val % 100) {
+-			dev_err(priv->device, "rx-delay must be a multiple of 100\n");
++			dev_err(dev, "rx-delay must be a multiple of 100\n");
+ 			return -EINVAL;
+ 		}
+ 		val /= 100;
+-		dev_dbg(priv->device, "set rx-delay to %x\n", val);
++		dev_dbg(dev, "set rx-delay to %x\n", val);
+ 		if (val <= gmac->variant->rx_delay_max) {
+ 			reg &= ~(gmac->variant->rx_delay_max <<
+ 				 SYSCON_ERXDC_SHIFT);
+ 			reg |= (val << SYSCON_ERXDC_SHIFT);
+ 		} else {
+-			dev_err(priv->device, "Invalid RX clock delay: %d\n",
++			dev_err(dev, "Invalid RX clock delay: %d\n",
+ 				val);
+ 			return -EINVAL;
+ 		}
+@@ -968,7 +984,7 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
+ 	if (gmac->variant->support_rmii)
+ 		reg &= ~SYSCON_RMII_EN;
+ 
+-	switch (priv->plat->interface) {
++	switch (plat->interface) {
+ 	case PHY_INTERFACE_MODE_MII:
+ 		/* default */
+ 		break;
+@@ -982,8 +998,8 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
+ 		reg |= SYSCON_RMII_EN | SYSCON_ETCS_EXT_GMII;
+ 		break;
+ 	default:
+-		dev_err(priv->device, "Unsupported interface mode: %s",
+-			phy_modes(priv->plat->interface));
++		dev_err(dev, "Unsupported interface mode: %s",
++			phy_modes(plat->interface));
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1004,17 +1020,10 @@ static void sun8i_dwmac_exit(struct platform_device *pdev, void *priv)
+ 	struct sunxi_priv_data *gmac = priv;
+ 
+ 	if (gmac->variant->soc_has_internal_phy) {
+-		/* sun8i_dwmac_exit could be called with mdiomux uninit */
+-		if (gmac->mux_handle)
+-			mdio_mux_uninit(gmac->mux_handle);
+ 		if (gmac->internal_phy_powered)
+ 			sun8i_dwmac_unpower_internal_phy(gmac);
+ 	}
+ 
+-	sun8i_dwmac_unset_syscon(gmac);
+-
+-	reset_control_put(gmac->rst_ephy);
+-
+ 	clk_disable_unprepare(gmac->tx_clk);
+ 
+ 	if (gmac->regulator)
+@@ -1049,16 +1058,11 @@ static struct mac_device_info *sun8i_dwmac_setup(void *ppriv)
+ {
+ 	struct mac_device_info *mac;
+ 	struct stmmac_priv *priv = ppriv;
+-	int ret;
+ 
+ 	mac = devm_kzalloc(priv->device, sizeof(*mac), GFP_KERNEL);
+ 	if (!mac)
+ 		return NULL;
+ 
+-	ret = sun8i_dwmac_set_syscon(priv);
+-	if (ret)
+-		return NULL;
+-
+ 	mac->pcsr = priv->ioaddr;
+ 	mac->mac = &sun8i_dwmac_ops;
+ 	mac->dma = &sun8i_dwmac_dma_ops;
+@@ -1134,10 +1138,6 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	plat_dat = stmmac_probe_config_dt(pdev, &stmmac_res.mac);
+-	if (IS_ERR(plat_dat))
+-		return PTR_ERR(plat_dat);
+-
+ 	gmac = devm_kzalloc(dev, sizeof(*gmac), GFP_KERNEL);
+ 	if (!gmac)
+ 		return -ENOMEM;
+@@ -1201,11 +1201,15 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
+ 	ret = of_get_phy_mode(dev->of_node, &interface);
+ 	if (ret)
+ 		return -EINVAL;
+-	plat_dat->interface = interface;
++
++	plat_dat = stmmac_probe_config_dt(pdev, &stmmac_res.mac);
++	if (IS_ERR(plat_dat))
++		return PTR_ERR(plat_dat);
+ 
+ 	/* platform data specifying hardware features and callbacks.
+ 	 * hardware features were copied from Allwinner drivers.
+ 	 */
++	plat_dat->interface = interface;
+ 	plat_dat->rx_coe = STMMAC_RX_COE_TYPE2;
+ 	plat_dat->tx_coe = 1;
+ 	plat_dat->has_sun8i = true;
+@@ -1214,9 +1218,13 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
+ 	plat_dat->exit = sun8i_dwmac_exit;
+ 	plat_dat->setup = sun8i_dwmac_setup;
+ 
++	ret = sun8i_dwmac_set_syscon(&pdev->dev, plat_dat);
++	if (ret)
++		goto dwmac_deconfig;
++
+ 	ret = sun8i_dwmac_init(pdev, plat_dat->bsp_priv);
+ 	if (ret)
+-		return ret;
++		goto dwmac_syscon;
+ 
+ 	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+ 	if (ret)
+@@ -1230,7 +1238,7 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
+ 	if (gmac->variant->soc_has_internal_phy) {
+ 		ret = get_ephy_nodes(priv);
+ 		if (ret)
+-			goto dwmac_exit;
++			goto dwmac_remove;
+ 		ret = sun8i_dwmac_register_mdio_mux(priv);
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "Failed to register mux\n");
+@@ -1239,15 +1247,42 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
+ 	} else {
+ 		ret = sun8i_dwmac_reset(priv);
+ 		if (ret)
+-			goto dwmac_exit;
++			goto dwmac_remove;
+ 	}
+ 
+ 	return ret;
+ dwmac_mux:
+-	sun8i_dwmac_unset_syscon(gmac);
++	reset_control_put(gmac->rst_ephy);
++	clk_put(gmac->ephy_clk);
++dwmac_remove:
++	stmmac_dvr_remove(&pdev->dev);
+ dwmac_exit:
++	sun8i_dwmac_exit(pdev, gmac);
++dwmac_syscon:
++	sun8i_dwmac_unset_syscon(gmac);
++dwmac_deconfig:
++	stmmac_remove_config_dt(pdev, plat_dat);
++
++	return ret;
++}
++
++static int sun8i_dwmac_remove(struct platform_device *pdev)
++{
++	struct net_device *ndev = platform_get_drvdata(pdev);
++	struct stmmac_priv *priv = netdev_priv(ndev);
++	struct sunxi_priv_data *gmac = priv->plat->bsp_priv;
++
++	if (gmac->variant->soc_has_internal_phy) {
++		mdio_mux_uninit(gmac->mux_handle);
++		sun8i_dwmac_unpower_internal_phy(gmac);
++		reset_control_put(gmac->rst_ephy);
++		clk_put(gmac->ephy_clk);
++	}
++
+ 	stmmac_pltfr_remove(pdev);
+-return ret;
++	sun8i_dwmac_unset_syscon(gmac);
++
++	return 0;
+ }
+ 
+ static const struct of_device_id sun8i_dwmac_match[] = {
+@@ -1269,7 +1304,7 @@ MODULE_DEVICE_TABLE(of, sun8i_dwmac_match);
+ 
+ static struct platform_driver sun8i_dwmac_driver = {
+ 	.probe  = sun8i_dwmac_probe,
+-	.remove = stmmac_pltfr_remove,
++	.remove = sun8i_dwmac_remove,
+ 	.driver = {
+ 		.name           = "dwmac-sun8i",
+ 		.pm		= &stmmac_pltfr_pm_ops,
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index 5dc1365dc1f9a..854c6624e6859 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -1199,7 +1199,10 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
+ 	 * accordingly. Otherwise, we should check here.
+ 	 */
+ 	if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)
+-		delayed_ndp_size = ALIGN(ctx->max_ndp_size, ctx->tx_ndp_modulus);
++		delayed_ndp_size = ctx->max_ndp_size +
++			max_t(u32,
++			      ctx->tx_ndp_modulus,
++			      ctx->tx_modulus + ctx->tx_remainder) - 1;
+ 	else
+ 		delayed_ndp_size = 0;
+ 
+@@ -1410,7 +1413,8 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
+ 	if (!(dev->driver_info->flags & FLAG_SEND_ZLP) &&
+ 	    skb_out->len > ctx->min_tx_pkt) {
+ 		padding_count = ctx->tx_curr_size - skb_out->len;
+-		skb_put_zero(skb_out, padding_count);
++		if (!WARN_ON(padding_count > ctx->tx_curr_size))
++			skb_put_zero(skb_out, padding_count);
+ 	} else if (skb_out->len < ctx->tx_curr_size &&
+ 		   (skb_out->len % dev->maxpacket) == 0) {
+ 		skb_put_u8(skb_out, 0);	/* force short packet */
+diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig
+index 39e5ab261d7ce..4be2a5cf022c8 100644
+--- a/drivers/net/wan/Kconfig
++++ b/drivers/net/wan/Kconfig
+@@ -282,6 +282,7 @@ config SLIC_DS26522
+ 	tristate "Slic Maxim ds26522 card support"
+ 	depends on SPI
+ 	depends on FSL_SOC || ARCH_MXC || ARCH_LAYERSCAPE || COMPILE_TEST
++	select BITREVERSE
+ 	help
+ 	  This module initializes and configures the slic maxim card
+ 	  in T1 or E1 mode.
+diff --git a/drivers/net/wireless/ath/wil6210/Kconfig b/drivers/net/wireless/ath/wil6210/Kconfig
+index 6a95b199bf626..f074e9c31aa22 100644
+--- a/drivers/net/wireless/ath/wil6210/Kconfig
++++ b/drivers/net/wireless/ath/wil6210/Kconfig
+@@ -2,6 +2,7 @@
+ config WIL6210
+ 	tristate "Wilocity 60g WiFi card wil6210 support"
+ 	select WANT_DEV_COREDUMP
++	select CRC32
+ 	depends on CFG80211
+ 	depends on PCI
+ 	default n
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index c0c33320fe659..9aa3d9e91c5d1 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -262,6 +262,16 @@ static inline void nvme_tcp_advance_req(struct nvme_tcp_request *req,
+ 	}
+ }
+ 
++static inline void nvme_tcp_send_all(struct nvme_tcp_queue *queue)
++{
++	int ret;
++
++	/* drain the send queue as much as we can... */
++	do {
++		ret = nvme_tcp_try_send(queue);
++	} while (ret > 0);
++}
++
+ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
+ 		bool sync, bool last)
+ {
+@@ -279,7 +289,7 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
+ 	if (queue->io_cpu == smp_processor_id() &&
+ 	    sync && empty && mutex_trylock(&queue->send_mutex)) {
+ 		queue->more_requests = !last;
+-		nvme_tcp_try_send(queue);
++		nvme_tcp_send_all(queue);
+ 		queue->more_requests = false;
+ 		mutex_unlock(&queue->send_mutex);
+ 	} else if (last) {
+diff --git a/drivers/ptp/Kconfig b/drivers/ptp/Kconfig
+index 942f72d8151da..deb429a3dff1d 100644
+--- a/drivers/ptp/Kconfig
++++ b/drivers/ptp/Kconfig
+@@ -64,6 +64,7 @@ config DP83640_PHY
+ 	depends on NETWORK_PHY_TIMESTAMPING
+ 	depends on PHYLIB
+ 	depends on PTP_1588_CLOCK
++	select CRC32
+ 	help
+ 	  Supports the DP83640 PHYTER with IEEE 1588 features.
+ 
+@@ -78,6 +79,7 @@ config DP83640_PHY
+ config PTP_1588_CLOCK_INES
+ 	tristate "ZHAW InES PTP time stamping IP core"
+ 	depends on NETWORK_PHY_TIMESTAMPING
++	depends on HAS_IOMEM
+ 	depends on PHYLIB
+ 	depends on PTP_1588_CLOCK
+ 	help
+diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
+index d488325499a9f..a22c4b5f64f7e 100644
+--- a/drivers/regulator/qcom-rpmh-regulator.c
++++ b/drivers/regulator/qcom-rpmh-regulator.c
+@@ -726,7 +726,7 @@ static const struct rpmh_vreg_hw_data pmic5_ftsmps510 = {
+ static const struct rpmh_vreg_hw_data pmic5_hfsmps515 = {
+ 	.regulator_type = VRM,
+ 	.ops = &rpmh_regulator_vrm_ops,
+-	.voltage_range = REGULATOR_LINEAR_RANGE(2800000, 0, 4, 1600),
++	.voltage_range = REGULATOR_LINEAR_RANGE(2800000, 0, 4, 16000),
+ 	.n_voltages = 5,
+ 	.pmic_mode_map = pmic_mode_map_pmic5_smps,
+ 	.of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode,
+diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
+index b235393e091ca..2f7e06ec9a30e 100644
+--- a/drivers/s390/net/qeth_core.h
++++ b/drivers/s390/net/qeth_core.h
+@@ -1075,7 +1075,8 @@ struct qeth_card *qeth_get_card_by_busid(char *bus_id);
+ void qeth_set_allowed_threads(struct qeth_card *card, unsigned long threads,
+ 			      int clear_start_mask);
+ int qeth_threads_running(struct qeth_card *, unsigned long);
+-int qeth_set_offline(struct qeth_card *card, bool resetting);
++int qeth_set_offline(struct qeth_card *card, const struct qeth_discipline *disc,
++		     bool resetting);
+ 
+ int qeth_send_ipa_cmd(struct qeth_card *, struct qeth_cmd_buffer *,
+ 		  int (*reply_cb)
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index e27319de7b00b..f108232498baf 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -5300,12 +5300,12 @@ out:
+ 	return rc;
+ }
+ 
+-static int qeth_set_online(struct qeth_card *card)
++static int qeth_set_online(struct qeth_card *card,
++			   const struct qeth_discipline *disc)
+ {
+ 	bool carrier_ok;
+ 	int rc;
+ 
+-	mutex_lock(&card->discipline_mutex);
+ 	mutex_lock(&card->conf_mutex);
+ 	QETH_CARD_TEXT(card, 2, "setonlin");
+ 
+@@ -5322,7 +5322,7 @@ static int qeth_set_online(struct qeth_card *card)
+ 		/* no need for locking / error handling at this early stage: */
+ 		qeth_set_real_num_tx_queues(card, qeth_tx_actual_queues(card));
+ 
+-	rc = card->discipline->set_online(card, carrier_ok);
++	rc = disc->set_online(card, carrier_ok);
+ 	if (rc)
+ 		goto err_online;
+ 
+@@ -5330,7 +5330,6 @@ static int qeth_set_online(struct qeth_card *card)
+ 	kobject_uevent(&card->gdev->dev.kobj, KOBJ_CHANGE);
+ 
+ 	mutex_unlock(&card->conf_mutex);
+-	mutex_unlock(&card->discipline_mutex);
+ 	return 0;
+ 
+ err_online:
+@@ -5345,15 +5344,14 @@ err_hardsetup:
+ 	qdio_free(CARD_DDEV(card));
+ 
+ 	mutex_unlock(&card->conf_mutex);
+-	mutex_unlock(&card->discipline_mutex);
+ 	return rc;
+ }
+ 
+-int qeth_set_offline(struct qeth_card *card, bool resetting)
++int qeth_set_offline(struct qeth_card *card, const struct qeth_discipline *disc,
++		     bool resetting)
+ {
+ 	int rc, rc2, rc3;
+ 
+-	mutex_lock(&card->discipline_mutex);
+ 	mutex_lock(&card->conf_mutex);
+ 	QETH_CARD_TEXT(card, 3, "setoffl");
+ 
+@@ -5374,7 +5372,7 @@ int qeth_set_offline(struct qeth_card *card, bool resetting)
+ 
+ 	cancel_work_sync(&card->rx_mode_work);
+ 
+-	card->discipline->set_offline(card);
++	disc->set_offline(card);
+ 
+ 	qeth_qdio_clear_card(card, 0);
+ 	qeth_drain_output_queues(card);
+@@ -5395,16 +5393,19 @@ int qeth_set_offline(struct qeth_card *card, bool resetting)
+ 	kobject_uevent(&card->gdev->dev.kobj, KOBJ_CHANGE);
+ 
+ 	mutex_unlock(&card->conf_mutex);
+-	mutex_unlock(&card->discipline_mutex);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(qeth_set_offline);
+ 
+ static int qeth_do_reset(void *data)
+ {
++	const struct qeth_discipline *disc;
+ 	struct qeth_card *card = data;
+ 	int rc;
+ 
++	/* Lock-free, other users will block until we are done. */
++	disc = card->discipline;
++
+ 	QETH_CARD_TEXT(card, 2, "recover1");
+ 	if (!qeth_do_run_thread(card, QETH_RECOVER_THREAD))
+ 		return 0;
+@@ -5412,8 +5413,8 @@ static int qeth_do_reset(void *data)
+ 	dev_warn(&card->gdev->dev,
+ 		 "A recovery process has been started for the device\n");
+ 
+-	qeth_set_offline(card, true);
+-	rc = qeth_set_online(card);
++	qeth_set_offline(card, disc, true);
++	rc = qeth_set_online(card, disc);
+ 	if (!rc) {
+ 		dev_info(&card->gdev->dev,
+ 			 "Device successfully recovered!\n");
+@@ -6360,6 +6361,7 @@ static int qeth_core_probe_device(struct ccwgroup_device *gdev)
+ 		break;
+ 	default:
+ 		card->info.layer_enforced = true;
++		/* It's so early that we don't need the discipline_mutex yet. */
+ 		rc = qeth_core_load_discipline(card, enforced_disc);
+ 		if (rc)
+ 			goto err_load;
+@@ -6392,10 +6394,12 @@ static void qeth_core_remove_device(struct ccwgroup_device *gdev)
+ 
+ 	QETH_CARD_TEXT(card, 2, "removedv");
+ 
++	mutex_lock(&card->discipline_mutex);
+ 	if (card->discipline) {
+ 		card->discipline->remove(gdev);
+ 		qeth_core_free_discipline(card);
+ 	}
++	mutex_unlock(&card->discipline_mutex);
+ 
+ 	qeth_free_qdio_queues(card);
+ 
+@@ -6410,6 +6414,7 @@ static int qeth_core_set_online(struct ccwgroup_device *gdev)
+ 	int rc = 0;
+ 	enum qeth_discipline_id def_discipline;
+ 
++	mutex_lock(&card->discipline_mutex);
+ 	if (!card->discipline) {
+ 		def_discipline = IS_IQD(card) ? QETH_DISCIPLINE_LAYER3 :
+ 						QETH_DISCIPLINE_LAYER2;
+@@ -6423,16 +6428,23 @@ static int qeth_core_set_online(struct ccwgroup_device *gdev)
+ 		}
+ 	}
+ 
+-	rc = qeth_set_online(card);
++	rc = qeth_set_online(card, card->discipline);
++
+ err:
++	mutex_unlock(&card->discipline_mutex);
+ 	return rc;
+ }
+ 
+ static int qeth_core_set_offline(struct ccwgroup_device *gdev)
+ {
+ 	struct qeth_card *card = dev_get_drvdata(&gdev->dev);
++	int rc;
+ 
+-	return qeth_set_offline(card, false);
++	mutex_lock(&card->discipline_mutex);
++	rc = qeth_set_offline(card, card->discipline, false);
++	mutex_unlock(&card->discipline_mutex);
++
++	return rc;
+ }
+ 
+ static void qeth_core_shutdown(struct ccwgroup_device *gdev)
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index 79939ba5d5235..cfc931f2b7e2c 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -2208,7 +2208,7 @@ static void qeth_l2_remove_device(struct ccwgroup_device *gdev)
+ 	wait_event(card->wait_q, qeth_threads_running(card, 0xffffffff) == 0);
+ 
+ 	if (gdev->state == CCWGROUP_ONLINE)
+-		qeth_set_offline(card, false);
++		qeth_set_offline(card, card->discipline, false);
+ 
+ 	cancel_work_sync(&card->close_dev_work);
+ 	if (card->dev->reg_state == NETREG_REGISTERED)
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index b1c1d2510d55b..291861c9b9569 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -1816,7 +1816,7 @@ static netdev_features_t qeth_l3_osa_features_check(struct sk_buff *skb,
+ 						    struct net_device *dev,
+ 						    netdev_features_t features)
+ {
+-	if (qeth_get_ip_version(skb) != 4)
++	if (vlan_get_protocol(skb) != htons(ETH_P_IP))
+ 		features &= ~NETIF_F_HW_VLAN_CTAG_TX;
+ 	return qeth_features_check(skb, dev, features);
+ }
+@@ -1974,7 +1974,7 @@ static void qeth_l3_remove_device(struct ccwgroup_device *cgdev)
+ 	wait_event(card->wait_q, qeth_threads_running(card, 0xffffffff) == 0);
+ 
+ 	if (cgdev->state == CCWGROUP_ONLINE)
+-		qeth_set_offline(card, false);
++		qeth_set_offline(card, card->discipline, false);
+ 
+ 	cancel_work_sync(&card->close_dev_work);
+ 	if (card->dev->reg_state == NETREG_REGISTERED)
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index fcaafa564dfcd..f103340820c66 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -10459,7 +10459,6 @@ lpfc_sli4_abts_err_handler(struct lpfc_hba *phba,
+ 			   struct lpfc_nodelist *ndlp,
+ 			   struct sli4_wcqe_xri_aborted *axri)
+ {
+-	struct lpfc_vport *vport;
+ 	uint32_t ext_status = 0;
+ 
+ 	if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) {
+@@ -10469,7 +10468,6 @@ lpfc_sli4_abts_err_handler(struct lpfc_hba *phba,
+ 		return;
+ 	}
+ 
+-	vport = ndlp->vport;
+ 	lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
+ 			"3116 Port generated FCP XRI ABORT event on "
+ 			"vpi %d rpi %d xri x%x status 0x%x parameter x%x\n",
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 02f161468daf5..7558b4abebfc5 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -7651,7 +7651,7 @@ static int ufshcd_clear_ua_wlun(struct ufs_hba *hba, u8 wlun)
+ 	else if (wlun == UFS_UPIU_RPMB_WLUN)
+ 		sdp = hba->sdev_rpmb;
+ 	else
+-		BUG_ON(1);
++		BUG();
+ 	if (sdp) {
+ 		ret = scsi_device_get(sdp);
+ 		if (!ret && !scsi_device_online(sdp)) {
+diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
+index 0e3d8e6c08f42..01ef79f15b024 100644
+--- a/drivers/spi/spi-geni-qcom.c
++++ b/drivers/spi/spi-geni-qcom.c
+@@ -83,6 +83,7 @@ struct spi_geni_master {
+ 	spinlock_t lock;
+ 	int irq;
+ 	bool cs_flag;
++	bool abort_failed;
+ };
+ 
+ static int get_spi_clk_cfg(unsigned int speed_hz,
+@@ -141,8 +142,49 @@ static void handle_fifo_timeout(struct spi_master *spi,
+ 	spin_unlock_irq(&mas->lock);
+ 
+ 	time_left = wait_for_completion_timeout(&mas->abort_done, HZ);
+-	if (!time_left)
++	if (!time_left) {
+ 		dev_err(mas->dev, "Failed to cancel/abort m_cmd\n");
++
++		/*
++		 * No need for a lock since SPI core has a lock and we never
++		 * access this from an interrupt.
++		 */
++		mas->abort_failed = true;
++	}
++}
++
++static bool spi_geni_is_abort_still_pending(struct spi_geni_master *mas)
++{
++	struct geni_se *se = &mas->se;
++	u32 m_irq, m_irq_en;
++
++	if (!mas->abort_failed)
++		return false;
++
++	/*
++	 * The only known case where a transfer times out and then a cancel
++	 * times out then an abort times out is if something is blocking our
++	 * interrupt handler from running.  Avoid starting any new transfers
++	 * until that sorts itself out.
++	 */
++	spin_lock_irq(&mas->lock);
++	m_irq = readl(se->base + SE_GENI_M_IRQ_STATUS);
++	m_irq_en = readl(se->base + SE_GENI_M_IRQ_EN);
++	spin_unlock_irq(&mas->lock);
++
++	if (m_irq & m_irq_en) {
++		dev_err(mas->dev, "Interrupts pending after abort: %#010x\n",
++			m_irq & m_irq_en);
++		return true;
++	}
++
++	/*
++	 * If we're here the problem resolved itself so no need to check more
++	 * on future transfers.
++	 */
++	mas->abort_failed = false;
++
++	return false;
+ }
+ 
+ static void spi_geni_set_cs(struct spi_device *slv, bool set_flag)
+@@ -158,9 +200,15 @@ static void spi_geni_set_cs(struct spi_device *slv, bool set_flag)
+ 	if (set_flag == mas->cs_flag)
+ 		return;
+ 
++	pm_runtime_get_sync(mas->dev);
++
++	if (spi_geni_is_abort_still_pending(mas)) {
++		dev_err(mas->dev, "Can't set chip select\n");
++		goto exit;
++	}
++
+ 	mas->cs_flag = set_flag;
+ 
+-	pm_runtime_get_sync(mas->dev);
+ 	spin_lock_irq(&mas->lock);
+ 	reinit_completion(&mas->cs_done);
+ 	if (set_flag)
+@@ -173,6 +221,7 @@ static void spi_geni_set_cs(struct spi_device *slv, bool set_flag)
+ 	if (!time_left)
+ 		handle_fifo_timeout(spi, NULL);
+ 
++exit:
+ 	pm_runtime_put(mas->dev);
+ }
+ 
+@@ -280,6 +329,9 @@ static int spi_geni_prepare_message(struct spi_master *spi,
+ 	int ret;
+ 	struct spi_geni_master *mas = spi_master_get_devdata(spi);
+ 
++	if (spi_geni_is_abort_still_pending(mas))
++		return -EBUSY;
++
+ 	ret = setup_fifo_params(spi_msg->spi, spi);
+ 	if (ret)
+ 		dev_err(mas->dev, "Couldn't select mode %d\n", ret);
+@@ -354,6 +406,12 @@ static bool geni_spi_handle_tx(struct spi_geni_master *mas)
+ 	unsigned int bytes_per_fifo_word = geni_byte_per_fifo_word(mas);
+ 	unsigned int i = 0;
+ 
++	/* Stop the watermark IRQ if nothing to send */
++	if (!mas->cur_xfer) {
++		writel(0, se->base + SE_GENI_TX_WATERMARK_REG);
++		return false;
++	}
++
+ 	max_bytes = (mas->tx_fifo_depth - mas->tx_wm) * bytes_per_fifo_word;
+ 	if (mas->tx_rem_bytes < max_bytes)
+ 		max_bytes = mas->tx_rem_bytes;
+@@ -396,6 +454,14 @@ static void geni_spi_handle_rx(struct spi_geni_master *mas)
+ 		if (rx_last_byte_valid && rx_last_byte_valid < 4)
+ 			rx_bytes -= bytes_per_fifo_word - rx_last_byte_valid;
+ 	}
++
++	/* Clear out the FIFO and bail if nowhere to put it */
++	if (!mas->cur_xfer) {
++		for (i = 0; i < DIV_ROUND_UP(rx_bytes, bytes_per_fifo_word); i++)
++			readl(se->base + SE_GENI_RX_FIFOn);
++		return;
++	}
++
+ 	if (mas->rx_rem_bytes < rx_bytes)
+ 		rx_bytes = mas->rx_rem_bytes;
+ 
+@@ -495,6 +561,9 @@ static int spi_geni_transfer_one(struct spi_master *spi,
+ {
+ 	struct spi_geni_master *mas = spi_master_get_devdata(spi);
+ 
++	if (spi_geni_is_abort_still_pending(mas))
++		return -EBUSY;
++
+ 	/* Terminate and return success for 0 byte length transfer */
+ 	if (!xfer->len)
+ 		return 0;
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 471dedf3d3392..6017209c6d2f7 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -493,9 +493,9 @@ static u32 stm32h7_spi_prepare_fthlv(struct stm32_spi *spi, u32 xfer_len)
+ 
+ 	/* align packet size with data registers access */
+ 	if (spi->cur_bpw > 8)
+-		fthlv -= (fthlv % 2); /* multiple of 2 */
++		fthlv += (fthlv % 2) ? 1 : 0;
+ 	else
+-		fthlv -= (fthlv % 4); /* multiple of 4 */
++		fthlv += (fthlv % 4) ? (4 - (fthlv % 4)) : 0;
+ 
+ 	if (!fthlv)
+ 		fthlv = 1;
+diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
+index 92dd86bceae31..8de4bf8edb9c0 100644
+--- a/fs/btrfs/btrfs_inode.h
++++ b/fs/btrfs/btrfs_inode.h
+@@ -35,6 +35,22 @@ enum {
+ 	BTRFS_INODE_IN_DELALLOC_LIST,
+ 	BTRFS_INODE_HAS_PROPS,
+ 	BTRFS_INODE_SNAPSHOT_FLUSH,
++	/*
++	 * Set and used when logging an inode and it serves to signal that an
++	 * inode does not have xattrs, so subsequent fsyncs can avoid searching
++	 * for xattrs to log. This bit must be cleared whenever a xattr is added
++	 * to an inode.
++	 */
++	BTRFS_INODE_NO_XATTRS,
++	/*
++	 * Set when we are in a context where we need to start a transaction and
++	 * have dirty pages with the respective file range locked. This is to
++	 * ensure that when reserving space for the transaction, if we are low
++	 * on available space and need to flush delalloc, we will not flush
++	 * delalloc for this inode, because that could result in a deadlock (on
++	 * the file range, inode's io_tree).
++	 */
++	BTRFS_INODE_NO_DELALLOC_FLUSH,
+ };
+ 
+ /* in memory btrfs inode */
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 62461239600fc..e01545538e07f 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -3001,7 +3001,8 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans,
+ 			       u32 min_type);
+ 
+ int btrfs_start_delalloc_snapshot(struct btrfs_root *root);
+-int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr);
++int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr,
++			       bool in_reclaim_context);
+ int btrfs_set_extent_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
+ 			      unsigned int extra_bits,
+ 			      struct extent_state **cached_state);
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index 10638537b9ef3..d297804631829 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -703,7 +703,7 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
+ 	 * flush all outstanding I/O and inode extent mappings before the
+ 	 * copy operation is declared as being finished
+ 	 */
+-	ret = btrfs_start_delalloc_roots(fs_info, U64_MAX);
++	ret = btrfs_start_delalloc_roots(fs_info, U64_MAX, false);
+ 	if (ret) {
+ 		mutex_unlock(&dev_replace->lock_finishing_cancel_unmount);
+ 		return ret;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 7e8d8169779d2..acc47e2ffb46b 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -9389,7 +9389,9 @@ static struct btrfs_delalloc_work *btrfs_alloc_delalloc_work(struct inode *inode
+  * some fairly slow code that needs optimization. This walks the list
+  * of all the inodes with pending delalloc and forces them to disk.
+  */
+-static int start_delalloc_inodes(struct btrfs_root *root, u64 *nr, bool snapshot)
++static int start_delalloc_inodes(struct btrfs_root *root,
++				 struct writeback_control *wbc, bool snapshot,
++				 bool in_reclaim_context)
+ {
+ 	struct btrfs_inode *binode;
+ 	struct inode *inode;
+@@ -9397,6 +9399,7 @@ static int start_delalloc_inodes(struct btrfs_root *root, u64 *nr, bool snapshot
+ 	struct list_head works;
+ 	struct list_head splice;
+ 	int ret = 0;
++	bool full_flush = wbc->nr_to_write == LONG_MAX;
+ 
+ 	INIT_LIST_HEAD(&works);
+ 	INIT_LIST_HEAD(&splice);
+@@ -9410,6 +9413,11 @@ static int start_delalloc_inodes(struct btrfs_root *root, u64 *nr, bool snapshot
+ 
+ 		list_move_tail(&binode->delalloc_inodes,
+ 			       &root->delalloc_inodes);
++
++		if (in_reclaim_context &&
++		    test_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &binode->runtime_flags))
++			continue;
++
+ 		inode = igrab(&binode->vfs_inode);
+ 		if (!inode) {
+ 			cond_resched_lock(&root->delalloc_lock);
+@@ -9420,18 +9428,24 @@ static int start_delalloc_inodes(struct btrfs_root *root, u64 *nr, bool snapshot
+ 		if (snapshot)
+ 			set_bit(BTRFS_INODE_SNAPSHOT_FLUSH,
+ 				&binode->runtime_flags);
+-		work = btrfs_alloc_delalloc_work(inode);
+-		if (!work) {
+-			iput(inode);
+-			ret = -ENOMEM;
+-			goto out;
+-		}
+-		list_add_tail(&work->list, &works);
+-		btrfs_queue_work(root->fs_info->flush_workers,
+-				 &work->work);
+-		if (*nr != U64_MAX) {
+-			(*nr)--;
+-			if (*nr == 0)
++		if (full_flush) {
++			work = btrfs_alloc_delalloc_work(inode);
++			if (!work) {
++				iput(inode);
++				ret = -ENOMEM;
++				goto out;
++			}
++			list_add_tail(&work->list, &works);
++			btrfs_queue_work(root->fs_info->flush_workers,
++					 &work->work);
++		} else {
++			ret = sync_inode(inode, wbc);
++			if (!ret &&
++			    test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
++				     &BTRFS_I(inode)->runtime_flags))
++				ret = sync_inode(inode, wbc);
++			btrfs_add_delayed_iput(inode);
++			if (ret || wbc->nr_to_write <= 0)
+ 				goto out;
+ 		}
+ 		cond_resched();
+@@ -9457,17 +9471,29 @@ out:
+ 
+ int btrfs_start_delalloc_snapshot(struct btrfs_root *root)
+ {
++	struct writeback_control wbc = {
++		.nr_to_write = LONG_MAX,
++		.sync_mode = WB_SYNC_NONE,
++		.range_start = 0,
++		.range_end = LLONG_MAX,
++	};
+ 	struct btrfs_fs_info *fs_info = root->fs_info;
+-	u64 nr = U64_MAX;
+ 
+ 	if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state))
+ 		return -EROFS;
+ 
+-	return start_delalloc_inodes(root, &nr, true);
++	return start_delalloc_inodes(root, &wbc, true, false);
+ }
+ 
+-int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr)
++int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr,
++			       bool in_reclaim_context)
+ {
++	struct writeback_control wbc = {
++		.nr_to_write = (nr == U64_MAX) ? LONG_MAX : (unsigned long)nr,
++		.sync_mode = WB_SYNC_NONE,
++		.range_start = 0,
++		.range_end = LLONG_MAX,
++	};
+ 	struct btrfs_root *root;
+ 	struct list_head splice;
+ 	int ret;
+@@ -9481,6 +9507,13 @@ int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr)
+ 	spin_lock(&fs_info->delalloc_root_lock);
+ 	list_splice_init(&fs_info->delalloc_roots, &splice);
+ 	while (!list_empty(&splice) && nr) {
++		/*
++		 * Reset nr_to_write here so we know that we're doing a full
++		 * flush.
++		 */
++		if (nr == U64_MAX)
++			wbc.nr_to_write = LONG_MAX;
++
+ 		root = list_first_entry(&splice, struct btrfs_root,
+ 					delalloc_root);
+ 		root = btrfs_grab_root(root);
+@@ -9489,9 +9522,9 @@ int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr)
+ 			       &fs_info->delalloc_roots);
+ 		spin_unlock(&fs_info->delalloc_root_lock);
+ 
+-		ret = start_delalloc_inodes(root, &nr, false);
++		ret = start_delalloc_inodes(root, &wbc, false, in_reclaim_context);
+ 		btrfs_put_root(root);
+-		if (ret < 0)
++		if (ret < 0 || wbc.nr_to_write <= 0)
+ 			goto out;
+ 		spin_lock(&fs_info->delalloc_root_lock);
+ 	}
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index e8ca229a216be..bd46e107f955e 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -4940,7 +4940,7 @@ long btrfs_ioctl(struct file *file, unsigned int
+ 	case BTRFS_IOC_SYNC: {
+ 		int ret;
+ 
+-		ret = btrfs_start_delalloc_roots(fs_info, U64_MAX);
++		ret = btrfs_start_delalloc_roots(fs_info, U64_MAX, false);
+ 		if (ret)
+ 			return ret;
+ 		ret = btrfs_sync_fs(inode->i_sb, 1);
+diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
+index 99aa87c089121..a646af95dd100 100644
+--- a/fs/btrfs/reflink.c
++++ b/fs/btrfs/reflink.c
+@@ -89,6 +89,19 @@ static int copy_inline_to_page(struct btrfs_inode *inode,
+ 	if (ret)
+ 		goto out_unlock;
+ 
++	/*
++	 * After dirtying the page our caller will need to start a transaction,
++	 * and if we are low on metadata free space, that can cause flushing of
++	 * delalloc for all inodes in order to get metadata space released.
++	 * However we are holding the range locked for the whole duration of
++	 * the clone/dedupe operation, so we may deadlock if that happens and no
++	 * other task releases enough space. So mark this inode as not being
++	 * possible to flush to avoid such deadlock. We will clear that flag
++	 * when we finish cloning all extents, since a transaction is started
++	 * after finding each extent to clone.
++	 */
++	set_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &inode->runtime_flags);
++
+ 	if (comp_type == BTRFS_COMPRESS_NONE) {
+ 		char *map;
+ 
+@@ -547,6 +560,8 @@ process_slot:
+ out:
+ 	btrfs_free_path(path);
+ 	kvfree(buf);
++	clear_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &BTRFS_I(inode)->runtime_flags);
++
+ 	return ret;
+ }
+ 
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index 64099565ab8f5..e8347461c8ddd 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -532,7 +532,9 @@ static void shrink_delalloc(struct btrfs_fs_info *fs_info,
+ 
+ 	loops = 0;
+ 	while ((delalloc_bytes || dio_bytes) && loops < 3) {
+-		btrfs_start_delalloc_roots(fs_info, items);
++		u64 nr_pages = min(delalloc_bytes, to_reclaim) >> PAGE_SHIFT;
++
++		btrfs_start_delalloc_roots(fs_info, nr_pages, true);
+ 
+ 		loops++;
+ 		if (wait_ordered && !trans) {
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 56cbc1706b6f7..5b11bb9770664 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -4571,6 +4571,10 @@ static int btrfs_log_all_xattrs(struct btrfs_trans_handle *trans,
+ 	const u64 ino = btrfs_ino(inode);
+ 	int ins_nr = 0;
+ 	int start_slot = 0;
++	bool found_xattrs = false;
++
++	if (test_bit(BTRFS_INODE_NO_XATTRS, &inode->runtime_flags))
++		return 0;
+ 
+ 	key.objectid = ino;
+ 	key.type = BTRFS_XATTR_ITEM_KEY;
+@@ -4609,6 +4613,7 @@ static int btrfs_log_all_xattrs(struct btrfs_trans_handle *trans,
+ 			start_slot = slot;
+ 		ins_nr++;
+ 		path->slots[0]++;
++		found_xattrs = true;
+ 		cond_resched();
+ 	}
+ 	if (ins_nr > 0) {
+@@ -4618,6 +4623,9 @@ static int btrfs_log_all_xattrs(struct btrfs_trans_handle *trans,
+ 			return ret;
+ 	}
+ 
++	if (!found_xattrs)
++		set_bit(BTRFS_INODE_NO_XATTRS, &inode->runtime_flags);
++
+ 	return 0;
+ }
+ 
+diff --git a/fs/btrfs/xattr.c b/fs/btrfs/xattr.c
+index 95d9aebff2c4b..e51774201d53b 100644
+--- a/fs/btrfs/xattr.c
++++ b/fs/btrfs/xattr.c
+@@ -213,9 +213,11 @@ int btrfs_setxattr(struct btrfs_trans_handle *trans, struct inode *inode,
+ 	}
+ out:
+ 	btrfs_free_path(path);
+-	if (!ret)
++	if (!ret) {
+ 		set_bit(BTRFS_INODE_COPY_EVERYTHING,
+ 			&BTRFS_I(inode)->runtime_flags);
++		clear_bit(BTRFS_INODE_NO_XATTRS, &BTRFS_I(inode)->runtime_flags);
++	}
+ 	return ret;
+ }
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 1f798c5c4213e..4833b68f1a1cc 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1625,9 +1625,9 @@ static bool io_match_files(struct io_kiocb *req,
+ }
+ 
+ /* Returns true if there are no backlogged entries after the flush */
+-static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+-				     struct task_struct *tsk,
+-				     struct files_struct *files)
++static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
++				       struct task_struct *tsk,
++				       struct files_struct *files)
+ {
+ 	struct io_rings *rings = ctx->rings;
+ 	struct io_kiocb *req, *tmp;
+@@ -1681,6 +1681,20 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+ 	return cqe != NULL;
+ }
+ 
++static void io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
++				     struct task_struct *tsk,
++				     struct files_struct *files)
++{
++	if (test_bit(0, &ctx->cq_check_overflow)) {
++		/* iopoll syncs against uring_lock, not completion_lock */
++		if (ctx->flags & IORING_SETUP_IOPOLL)
++			mutex_lock(&ctx->uring_lock);
++		__io_cqring_overflow_flush(ctx, force, tsk, files);
++		if (ctx->flags & IORING_SETUP_IOPOLL)
++			mutex_unlock(&ctx->uring_lock);
++	}
++}
++
+ static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags)
+ {
+ 	struct io_ring_ctx *ctx = req->ctx;
+@@ -2047,14 +2061,15 @@ static void io_req_task_cancel(struct callback_head *cb)
+ static void __io_req_task_submit(struct io_kiocb *req)
+ {
+ 	struct io_ring_ctx *ctx = req->ctx;
++	bool fail;
+ 
+-	if (!__io_sq_thread_acquire_mm(ctx)) {
+-		mutex_lock(&ctx->uring_lock);
++	fail = __io_sq_thread_acquire_mm(ctx);
++	mutex_lock(&ctx->uring_lock);
++	if (!fail)
+ 		__io_queue_sqe(req, NULL);
+-		mutex_unlock(&ctx->uring_lock);
+-	} else {
++	else
+ 		__io_req_task_cancel(req, -EFAULT);
+-	}
++	mutex_unlock(&ctx->uring_lock);
+ }
+ 
+ static void io_req_task_submit(struct callback_head *cb)
+@@ -2234,22 +2249,10 @@ static void io_double_put_req(struct io_kiocb *req)
+ 		io_free_req(req);
+ }
+ 
+-static unsigned io_cqring_events(struct io_ring_ctx *ctx, bool noflush)
++static unsigned io_cqring_events(struct io_ring_ctx *ctx)
+ {
+ 	struct io_rings *rings = ctx->rings;
+ 
+-	if (test_bit(0, &ctx->cq_check_overflow)) {
+-		/*
+-		 * noflush == true is from the waitqueue handler, just ensure
+-		 * we wake up the task, and the next invocation will flush the
+-		 * entries. We cannot safely to it from here.
+-		 */
+-		if (noflush)
+-			return -1U;
+-
+-		io_cqring_overflow_flush(ctx, false, NULL, NULL);
+-	}
+-
+ 	/* See comment at the top of this file */
+ 	smp_rmb();
+ 	return ctx->cached_cq_tail - READ_ONCE(rings->cq.head);
+@@ -2474,7 +2477,9 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
+ 		 * If we do, we can potentially be spinning for commands that
+ 		 * already triggered a CQE (eg in error).
+ 		 */
+-		if (io_cqring_events(ctx, false))
++		if (test_bit(0, &ctx->cq_check_overflow))
++			__io_cqring_overflow_flush(ctx, false, NULL, NULL);
++		if (io_cqring_events(ctx))
+ 			break;
+ 
+ 		/*
+@@ -6577,7 +6582,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
+ 
+ 	/* if we have a backlog and couldn't flush it all, return BUSY */
+ 	if (test_bit(0, &ctx->sq_check_overflow)) {
+-		if (!io_cqring_overflow_flush(ctx, false, NULL, NULL))
++		if (!__io_cqring_overflow_flush(ctx, false, NULL, NULL))
+ 			return -EBUSY;
+ 	}
+ 
+@@ -6866,7 +6871,7 @@ struct io_wait_queue {
+ 	unsigned nr_timeouts;
+ };
+ 
+-static inline bool io_should_wake(struct io_wait_queue *iowq, bool noflush)
++static inline bool io_should_wake(struct io_wait_queue *iowq)
+ {
+ 	struct io_ring_ctx *ctx = iowq->ctx;
+ 
+@@ -6875,7 +6880,7 @@ static inline bool io_should_wake(struct io_wait_queue *iowq, bool noflush)
+ 	 * started waiting. For timeouts, we always want to return to userspace,
+ 	 * regardless of event count.
+ 	 */
+-	return io_cqring_events(ctx, noflush) >= iowq->to_wait ||
++	return io_cqring_events(ctx) >= iowq->to_wait ||
+ 			atomic_read(&ctx->cq_timeouts) != iowq->nr_timeouts;
+ }
+ 
+@@ -6885,11 +6890,13 @@ static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
+ 	struct io_wait_queue *iowq = container_of(curr, struct io_wait_queue,
+ 							wq);
+ 
+-	/* use noflush == true, as we can't safely rely on locking context */
+-	if (!io_should_wake(iowq, true))
+-		return -1;
+-
+-	return autoremove_wake_function(curr, mode, wake_flags, key);
++	/*
++	 * Cannot safely flush overflowed CQEs from here, ensure we wake up
++	 * the task, and the next invocation will do it.
++	 */
++	if (io_should_wake(iowq) || test_bit(0, &iowq->ctx->cq_check_overflow))
++		return autoremove_wake_function(curr, mode, wake_flags, key);
++	return -1;
+ }
+ 
+ static int io_run_task_work_sig(void)
+@@ -6928,7 +6935,8 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 	int ret = 0;
+ 
+ 	do {
+-		if (io_cqring_events(ctx, false) >= min_events)
++		io_cqring_overflow_flush(ctx, false, NULL, NULL);
++		if (io_cqring_events(ctx) >= min_events)
+ 			return 0;
+ 		if (!io_run_task_work())
+ 			break;
+@@ -6950,6 +6958,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 	iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
+ 	trace_io_uring_cqring_wait(ctx, min_events);
+ 	do {
++		io_cqring_overflow_flush(ctx, false, NULL, NULL);
+ 		prepare_to_wait_exclusive(&ctx->wait, &iowq.wq,
+ 						TASK_INTERRUPTIBLE);
+ 		/* make sure we run task_work before checking for signals */
+@@ -6958,8 +6967,10 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 			continue;
+ 		else if (ret < 0)
+ 			break;
+-		if (io_should_wake(&iowq, false))
++		if (io_should_wake(&iowq))
+ 			break;
++		if (test_bit(0, &ctx->cq_check_overflow))
++			continue;
+ 		schedule();
+ 	} while (1);
+ 	finish_wait(&ctx->wait, &iowq.wq);
+@@ -7450,12 +7461,12 @@ static struct fixed_file_ref_node *alloc_fixed_file_ref_node(
+ 
+ 	ref_node = kzalloc(sizeof(*ref_node), GFP_KERNEL);
+ 	if (!ref_node)
+-		return ERR_PTR(-ENOMEM);
++		return NULL;
+ 
+ 	if (percpu_ref_init(&ref_node->refs, io_file_data_ref_zero,
+ 			    0, GFP_KERNEL)) {
+ 		kfree(ref_node);
+-		return ERR_PTR(-ENOMEM);
++		return NULL;
+ 	}
+ 	INIT_LIST_HEAD(&ref_node->node);
+ 	INIT_LIST_HEAD(&ref_node->file_list);
+@@ -7549,9 +7560,9 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ 	}
+ 
+ 	ref_node = alloc_fixed_file_ref_node(ctx);
+-	if (IS_ERR(ref_node)) {
++	if (!ref_node) {
+ 		io_sqe_files_unregister(ctx);
+-		return PTR_ERR(ref_node);
++		return -ENOMEM;
+ 	}
+ 
+ 	io_sqe_files_set_node(file_data, ref_node);
+@@ -7651,8 +7662,8 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+ 		return -EINVAL;
+ 
+ 	ref_node = alloc_fixed_file_ref_node(ctx);
+-	if (IS_ERR(ref_node))
+-		return PTR_ERR(ref_node);
++	if (!ref_node)
++		return -ENOMEM;
+ 
+ 	done = 0;
+ 	fds = u64_to_user_ptr(up->fds);
+@@ -8384,7 +8395,8 @@ static __poll_t io_uring_poll(struct file *file, poll_table *wait)
+ 	smp_rmb();
+ 	if (!io_sqring_full(ctx))
+ 		mask |= EPOLLOUT | EPOLLWRNORM;
+-	if (io_cqring_events(ctx, false))
++	io_cqring_overflow_flush(ctx, false, NULL, NULL);
++	if (io_cqring_events(ctx))
+ 		mask |= EPOLLIN | EPOLLRDNORM;
+ 
+ 	return mask;
+@@ -8442,7 +8454,7 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ 	/* if force is set, the ring is going away. always drop after that */
+ 	ctx->cq_overflow_flushed = 1;
+ 	if (ctx->rings)
+-		io_cqring_overflow_flush(ctx, true, NULL, NULL);
++		__io_cqring_overflow_flush(ctx, true, NULL, NULL);
+ 	mutex_unlock(&ctx->uring_lock);
+ 
+ 	io_kill_timeouts(ctx, NULL);
+@@ -8715,9 +8727,7 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ 	}
+ 
+ 	io_cancel_defer_files(ctx, task, files);
+-	io_ring_submit_lock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
+ 	io_cqring_overflow_flush(ctx, true, task, files);
+-	io_ring_submit_unlock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
+ 
+ 	while (__io_uring_cancel_task_requests(ctx, task, files)) {
+ 		io_run_task_work();
+@@ -9023,10 +9033,8 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+ 	 */
+ 	ret = 0;
+ 	if (ctx->flags & IORING_SETUP_SQPOLL) {
+-		io_ring_submit_lock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
+-		if (!list_empty_careful(&ctx->cq_overflow_list))
+-			io_cqring_overflow_flush(ctx, false, NULL, NULL);
+-		io_ring_submit_unlock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
++		io_cqring_overflow_flush(ctx, false, NULL, NULL);
++
+ 		if (flags & IORING_ENTER_SQ_WAKEUP)
+ 			wake_up(&ctx->sq_data->wait);
+ 		if (flags & IORING_ENTER_SQ_WAIT)
+diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
+index 3e01d8f2ab906..dcab112e1f001 100644
+--- a/fs/notify/fanotify/fanotify_user.c
++++ b/fs/notify/fanotify/fanotify_user.c
+@@ -1285,26 +1285,23 @@ fput_and_out:
+ 	return ret;
+ }
+ 
++#ifndef CONFIG_ARCH_SPLIT_ARG64
+ SYSCALL_DEFINE5(fanotify_mark, int, fanotify_fd, unsigned int, flags,
+ 			      __u64, mask, int, dfd,
+ 			      const char  __user *, pathname)
+ {
+ 	return do_fanotify_mark(fanotify_fd, flags, mask, dfd, pathname);
+ }
++#endif
+ 
+-#ifdef CONFIG_COMPAT
+-COMPAT_SYSCALL_DEFINE6(fanotify_mark,
++#if defined(CONFIG_ARCH_SPLIT_ARG64) || defined(CONFIG_COMPAT)
++SYSCALL32_DEFINE6(fanotify_mark,
+ 				int, fanotify_fd, unsigned int, flags,
+-				__u32, mask0, __u32, mask1, int, dfd,
++				SC_ARG64(mask), int, dfd,
+ 				const char  __user *, pathname)
+ {
+-	return do_fanotify_mark(fanotify_fd, flags,
+-#ifdef __BIG_ENDIAN
+-				((__u64)mask0 << 32) | mask1,
+-#else
+-				((__u64)mask1 << 32) | mask0,
+-#endif
+-				 dfd, pathname);
++	return do_fanotify_mark(fanotify_fd, flags, SC_VAL64(__u64, mask),
++				dfd, pathname);
+ }
+ #endif
+ 
+diff --git a/fs/zonefs/Kconfig b/fs/zonefs/Kconfig
+index ef2697b78820d..827278f937fe7 100644
+--- a/fs/zonefs/Kconfig
++++ b/fs/zonefs/Kconfig
+@@ -3,6 +3,7 @@ config ZONEFS_FS
+ 	depends on BLOCK
+ 	depends on BLK_DEV_ZONED
+ 	select FS_IOMAP
++	select CRC32
+ 	help
+ 	  zonefs is a simple file system which exposes zones of a zoned block
+ 	  device (e.g. host-managed or host-aware SMR disk drives) as files.
+diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
+index 37bea07c12f21..aea0ce9f3b745 100644
+--- a/include/linux/syscalls.h
++++ b/include/linux/syscalls.h
+@@ -251,6 +251,30 @@ static inline int is_syscall_trace_event(struct trace_event_call *tp_event)
+ 	static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
+ #endif /* __SYSCALL_DEFINEx */
+ 
++/* For split 64-bit arguments on 32-bit architectures */
++#ifdef __LITTLE_ENDIAN
++#define SC_ARG64(name) u32, name##_lo, u32, name##_hi
++#else
++#define SC_ARG64(name) u32, name##_hi, u32, name##_lo
++#endif
++#define SC_VAL64(type, name) ((type) name##_hi << 32 | name##_lo)
++
++#ifdef CONFIG_COMPAT
++#define SYSCALL32_DEFINE1 COMPAT_SYSCALL_DEFINE1
++#define SYSCALL32_DEFINE2 COMPAT_SYSCALL_DEFINE2
++#define SYSCALL32_DEFINE3 COMPAT_SYSCALL_DEFINE3
++#define SYSCALL32_DEFINE4 COMPAT_SYSCALL_DEFINE4
++#define SYSCALL32_DEFINE5 COMPAT_SYSCALL_DEFINE5
++#define SYSCALL32_DEFINE6 COMPAT_SYSCALL_DEFINE6
++#else
++#define SYSCALL32_DEFINE1 SYSCALL_DEFINE1
++#define SYSCALL32_DEFINE2 SYSCALL_DEFINE2
++#define SYSCALL32_DEFINE3 SYSCALL_DEFINE3
++#define SYSCALL32_DEFINE4 SYSCALL_DEFINE4
++#define SYSCALL32_DEFINE5 SYSCALL_DEFINE5
++#define SYSCALL32_DEFINE6 SYSCALL_DEFINE6
++#endif
++
+ /*
+  * Called before coming back to user-mode. Returning to user-mode with an
+  * address limit different than USER_DS can allow to overwrite kernel memory.
+diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
+index 4f4e93bf814c3..cc17bc9575482 100644
+--- a/include/net/xdp_sock.h
++++ b/include/net/xdp_sock.h
+@@ -58,10 +58,6 @@ struct xdp_sock {
+ 
+ 	struct xsk_queue *tx ____cacheline_aligned_in_smp;
+ 	struct list_head tx_list;
+-	/* Mutual exclusion of NAPI TX thread and sendmsg error paths
+-	 * in the SKB destructor callback.
+-	 */
+-	spinlock_t tx_completion_lock;
+ 	/* Protects generic receive. */
+ 	spinlock_t rx_lock;
+ 
+diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
+index 01755b838c745..eaa8386dbc630 100644
+--- a/include/net/xsk_buff_pool.h
++++ b/include/net/xsk_buff_pool.h
+@@ -73,6 +73,11 @@ struct xsk_buff_pool {
+ 	bool dma_need_sync;
+ 	bool unaligned;
+ 	void *addrs;
++	/* Mutual exclusion of the completion ring in the SKB mode. Two cases to protect:
++	 * NAPI TX thread and sendmsg error paths in the SKB destructor callback and when
++	 * sockets share a single cq when the same netdev and queue id is shared.
++	 */
++	spinlock_t cq_lock;
+ 	struct xdp_buff_xsk *free_heads[];
+ };
+ 
+diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
+index f292e0267bb9e..15bbfaf943fd1 100644
+--- a/net/8021q/vlan.c
++++ b/net/8021q/vlan.c
+@@ -284,7 +284,8 @@ static int register_vlan_device(struct net_device *real_dev, u16 vlan_id)
+ 	return 0;
+ 
+ out_free_newdev:
+-	if (new_dev->reg_state == NETREG_UNINITIALIZED)
++	if (new_dev->reg_state == NETREG_UNINITIALIZED ||
++	    new_dev->reg_state == NETREG_UNREGISTERED)
+ 		free_netdev(new_dev);
+ 	return err;
+ }
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 26bdc3c20b7e4..8bd565f2073e7 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1139,6 +1139,7 @@ static int isotp_getname(struct socket *sock, struct sockaddr *uaddr, int peer)
+ 	if (peer)
+ 		return -EOPNOTSUPP;
+ 
++	memset(addr, 0, sizeof(*addr));
+ 	addr->can_family = AF_CAN;
+ 	addr->can_ifindex = so->ifindex;
+ 	addr->can_addr.tp.rx_id = so->rxid;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index e578544b2cc71..fbadd93b95ace 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -2011,6 +2011,12 @@ int pskb_trim_rcsum_slow(struct sk_buff *skb, unsigned int len)
+ 		skb->csum = csum_block_sub(skb->csum,
+ 					   skb_checksum(skb, len, delta, 0),
+ 					   len);
++	} else if (skb->ip_summed == CHECKSUM_PARTIAL) {
++		int hdlen = (len > skb_headlen(skb)) ? skb_headlen(skb) : len;
++		int offset = skb_checksum_start_offset(skb) + skb->csum_offset;
++
++		if (offset + sizeof(__sum16) > hdlen)
++			return -EINVAL;
+ 	}
+ 	return __pskb_trim(skb, len);
+ }
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 879b76ae4435c..97975bed491ad 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -302,7 +302,7 @@ static int __ip_finish_output(struct net *net, struct sock *sk, struct sk_buff *
+ 	if (skb_is_gso(skb))
+ 		return ip_finish_output_gso(net, sk, skb, mtu);
+ 
+-	if (skb->len > mtu || (IPCB(skb)->flags & IPSKB_FRAG_PMTU))
++	if (skb->len > mtu || IPCB(skb)->frag_max_size)
+ 		return ip_fragment(net, sk, skb, mtu, ip_finish_output2);
+ 
+ 	return ip_finish_output2(net, sk, skb);
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index ee65c9225178d..64594aa755f05 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -759,8 +759,11 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 		goto tx_error;
+ 	}
+ 
+-	if (tnl_update_pmtu(dev, skb, rt, tnl_params->frag_off, inner_iph,
+-			    0, 0, false)) {
++	df = tnl_params->frag_off;
++	if (skb->protocol == htons(ETH_P_IP) && !tunnel->ignore_df)
++		df |= (inner_iph->frag_off & htons(IP_DF));
++
++	if (tnl_update_pmtu(dev, skb, rt, df, inner_iph, 0, 0, false)) {
+ 		ip_rt_put(rt);
+ 		goto tx_error;
+ 	}
+@@ -788,10 +791,6 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 			ttl = ip4_dst_hoplimit(&rt->dst);
+ 	}
+ 
+-	df = tnl_params->frag_off;
+-	if (skb->protocol == htons(ETH_P_IP) && !tunnel->ignore_df)
+-		df |= (inner_iph->frag_off&htons(IP_DF));
+-
+ 	max_headroom = LL_RESERVED_SPACE(rt->dst.dev) + sizeof(struct iphdr)
+ 			+ rt->dst.header_len + ip_encap_hlen(&tunnel->encap);
+ 	if (max_headroom > dev->needed_headroom)
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 0dc43ad28eb95..f63f7ada51b36 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -496,7 +496,7 @@ static int nh_check_attr_group(struct net *net, struct nlattr *tb[],
+ 	for (i = NHA_GROUP_TYPE + 1; i < __NHA_MAX; ++i) {
+ 		if (!tb[i])
+ 			continue;
+-		if (tb[NHA_FDB])
++		if (i == NHA_FDB)
+ 			continue;
+ 		NL_SET_ERR_MSG(extack,
+ 			       "No other attributes can be set in nexthop groups");
+@@ -1277,8 +1277,10 @@ static struct nexthop *nexthop_create_group(struct net *net,
+ 	return nh;
+ 
+ out_no_nh:
+-	for (; i >= 0; --i)
++	for (i--; i >= 0; --i) {
++		list_del(&nhg->nh_entries[i].nh_list);
+ 		nexthop_put(nhg->nh_entries[i].nh);
++	}
+ 
+ 	kfree(nhg->spare);
+ 	kfree(nhg);
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 605cdd38a919a..f43e275557251 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -1025,6 +1025,8 @@ static void fib6_purge_rt(struct fib6_info *rt, struct fib6_node *fn,
+ {
+ 	struct fib6_table *table = rt->fib6_table;
+ 
++	/* Flush all cached dst in exception table */
++	rt6_flush_exceptions(rt);
+ 	fib6_drop_pcpu_from(rt, table);
+ 
+ 	if (rt->nh && !list_empty(&rt->nh_list))
+@@ -1927,9 +1929,6 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn,
+ 	net->ipv6.rt6_stats->fib_rt_entries--;
+ 	net->ipv6.rt6_stats->fib_discarded_routes++;
+ 
+-	/* Flush all cached dst in exception table */
+-	rt6_flush_exceptions(rt);
+-
+ 	/* Reset round-robin state, if necessary */
+ 	if (rcu_access_pointer(fn->rr_ptr) == rt)
+ 		fn->rr_ptr = NULL;
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 189cfbbcccc04..d5f42c62fd79e 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -364,9 +364,9 @@ static void xsk_destruct_skb(struct sk_buff *skb)
+ 	struct xdp_sock *xs = xdp_sk(skb->sk);
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&xs->tx_completion_lock, flags);
++	spin_lock_irqsave(&xs->pool->cq_lock, flags);
+ 	xskq_prod_submit_addr(xs->pool->cq, addr);
+-	spin_unlock_irqrestore(&xs->tx_completion_lock, flags);
++	spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
+ 
+ 	sock_wfree(skb);
+ }
+@@ -378,6 +378,7 @@ static int xsk_generic_xmit(struct sock *sk)
+ 	bool sent_frame = false;
+ 	struct xdp_desc desc;
+ 	struct sk_buff *skb;
++	unsigned long flags;
+ 	int err = 0;
+ 
+ 	mutex_lock(&xs->mutex);
+@@ -409,10 +410,13 @@ static int xsk_generic_xmit(struct sock *sk)
+ 		 * if there is space in it. This avoids having to implement
+ 		 * any buffering in the Tx path.
+ 		 */
++		spin_lock_irqsave(&xs->pool->cq_lock, flags);
+ 		if (unlikely(err) || xskq_prod_reserve(xs->pool->cq)) {
++			spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
+ 			kfree_skb(skb);
+ 			goto out;
+ 		}
++		spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
+ 
+ 		skb->dev = xs->dev;
+ 		skb->priority = sk->sk_priority;
+@@ -424,6 +428,9 @@ static int xsk_generic_xmit(struct sock *sk)
+ 		if  (err == NETDEV_TX_BUSY) {
+ 			/* Tell user-space to retry the send */
+ 			skb->destructor = sock_wfree;
++			spin_lock_irqsave(&xs->pool->cq_lock, flags);
++			xskq_prod_cancel(xs->pool->cq);
++			spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
+ 			/* Free skb without triggering the perf drop trace */
+ 			consume_skb(skb);
+ 			err = -EAGAIN;
+@@ -1197,7 +1204,6 @@ static int xsk_create(struct net *net, struct socket *sock, int protocol,
+ 	xs->state = XSK_READY;
+ 	mutex_init(&xs->mutex);
+ 	spin_lock_init(&xs->rx_lock);
+-	spin_lock_init(&xs->tx_completion_lock);
+ 
+ 	INIT_LIST_HEAD(&xs->map_list);
+ 	spin_lock_init(&xs->map_list_lock);
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index 46c2ae7d91d15..2ef6f926610ee 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -71,6 +71,7 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
+ 	INIT_LIST_HEAD(&pool->free_list);
+ 	INIT_LIST_HEAD(&pool->xsk_tx_list);
+ 	spin_lock_init(&pool->xsk_tx_list_lock);
++	spin_lock_init(&pool->cq_lock);
+ 	refcount_set(&pool->users, 1);
+ 
+ 	pool->fq = xs->fq_tmp;
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index 9e71b9f27679b..ef6de0fb4e312 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -286,6 +286,11 @@ static inline bool xskq_prod_is_full(struct xsk_queue *q)
+ 	return !free_entries;
+ }
+ 
++static inline void xskq_prod_cancel(struct xsk_queue *q)
++{
++	q->cached_prod--;
++}
++
+ static inline int xskq_prod_reserve(struct xsk_queue *q)
+ {
+ 	if (xskq_prod_is_full(q))
+diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
+index 3fae61ef63396..ff3aa0cf39978 100644
+--- a/tools/bpf/bpftool/net.c
++++ b/tools/bpf/bpftool/net.c
+@@ -11,7 +11,6 @@
+ #include <bpf/bpf.h>
+ #include <bpf/libbpf.h>
+ #include <net/if.h>
+-#include <linux/if.h>
+ #include <linux/rtnetlink.h>
+ #include <linux/socket.h>
+ #include <linux/tc_act/tc_bpf.h>
+diff --git a/tools/include/uapi/linux/fscrypt.h b/tools/include/uapi/linux/fscrypt.h
+index e5de603369381..9f4428be3e362 100644
+--- a/tools/include/uapi/linux/fscrypt.h
++++ b/tools/include/uapi/linux/fscrypt.h
+@@ -20,7 +20,6 @@
+ #define FSCRYPT_POLICY_FLAG_DIRECT_KEY		0x04
+ #define FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64	0x08
+ #define FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32	0x10
+-#define FSCRYPT_POLICY_FLAGS_VALID		0x1F
+ 
+ /* Encryption algorithms */
+ #define FSCRYPT_MODE_AES_256_XTS		1
+@@ -28,7 +27,7 @@
+ #define FSCRYPT_MODE_AES_128_CBC		5
+ #define FSCRYPT_MODE_AES_128_CTS		6
+ #define FSCRYPT_MODE_ADIANTUM			9
+-#define __FSCRYPT_MODE_MAX			9
++/* If adding a mode number > 9, update FSCRYPT_MODE_MAX in fscrypt_private.h */
+ 
+ /*
+  * Legacy policy version; ad-hoc KDF and no key verification.
+@@ -177,7 +176,7 @@ struct fscrypt_get_key_status_arg {
+ #define FS_POLICY_FLAGS_PAD_32		FSCRYPT_POLICY_FLAGS_PAD_32
+ #define FS_POLICY_FLAGS_PAD_MASK	FSCRYPT_POLICY_FLAGS_PAD_MASK
+ #define FS_POLICY_FLAG_DIRECT_KEY	FSCRYPT_POLICY_FLAG_DIRECT_KEY
+-#define FS_POLICY_FLAGS_VALID		FSCRYPT_POLICY_FLAGS_VALID
++#define FS_POLICY_FLAGS_VALID		0x07	/* contains old flags only */
+ #define FS_ENCRYPTION_MODE_INVALID	0	/* never used */
+ #define FS_ENCRYPTION_MODE_AES_256_XTS	FSCRYPT_MODE_AES_256_XTS
+ #define FS_ENCRYPTION_MODE_AES_256_GCM	2	/* never used */
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 136df8c102812..9359377aeb35c 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -146,6 +146,9 @@ VMLINUX_BTF_PATHS ?= $(if $(O),$(O)/vmlinux)				\
+ 		     /sys/kernel/btf/vmlinux				\
+ 		     /boot/vmlinux-$(shell uname -r)
+ VMLINUX_BTF ?= $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
++ifeq ($(VMLINUX_BTF),)
++$(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)")
++endif
+ 
+ DEFAULT_BPFTOOL := $(SCRATCH_DIR)/sbin/bpftool
+ 
+diff --git a/tools/testing/selftests/net/fib_nexthops.sh b/tools/testing/selftests/net/fib_nexthops.sh
+index eb693a3b7b4a1..4c7d33618437c 100755
+--- a/tools/testing/selftests/net/fib_nexthops.sh
++++ b/tools/testing/selftests/net/fib_nexthops.sh
+@@ -869,7 +869,7 @@ ipv6_torture()
+ 	pid3=$!
+ 	ip netns exec me ping -f 2001:db8:101::2 >/dev/null 2>&1 &
+ 	pid4=$!
+-	ip netns exec me mausezahn veth1 -B 2001:db8:101::2 -A 2001:db8:91::1 -c 0 -t tcp "dp=1-1023, flags=syn" >/dev/null 2>&1 &
++	ip netns exec me mausezahn -6 veth1 -B 2001:db8:101::2 -A 2001:db8:91::1 -c 0 -t tcp "dp=1-1023, flags=syn" >/dev/null 2>&1 &
+ 	pid5=$!
+ 
+ 	sleep 300
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 6bbf69a28e128..3367fb5f2feff 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -162,7 +162,15 @@
+ # - list_flush_ipv6_exception
+ #	Using the same topology as in pmtu_ipv6, create exceptions, and check
+ #	they are shown when listing exception caches, gone after flushing them
+-
++#
++# - pmtu_ipv4_route_change
++#	Use the same topology as in pmtu_ipv4, but issue a route replacement
++#	command and delete the corresponding device afterward. This tests for
++#	proper cleanup of the PMTU exceptions by the route replacement path.
++#	Device unregistration should complete successfully
++#
++# - pmtu_ipv6_route_change
++#	Same as above but with IPv6
+ 
+ # Kselftest framework requirement - SKIP code is 4.
+ ksft_skip=4
+@@ -224,7 +232,9 @@ tests="
+ 	cleanup_ipv4_exception		ipv4: cleanup of cached exceptions	1
+ 	cleanup_ipv6_exception		ipv6: cleanup of cached exceptions	1
+ 	list_flush_ipv4_exception	ipv4: list and flush cached exceptions	1
+-	list_flush_ipv6_exception	ipv6: list and flush cached exceptions	1"
++	list_flush_ipv6_exception	ipv6: list and flush cached exceptions	1
++	pmtu_ipv4_route_change		ipv4: PMTU exception w/route replace	1
++	pmtu_ipv6_route_change		ipv6: PMTU exception w/route replace	1"
+ 
+ NS_A="ns-A"
+ NS_B="ns-B"
+@@ -1770,6 +1780,63 @@ test_list_flush_ipv6_exception() {
+ 	return ${fail}
+ }
+ 
++test_pmtu_ipvX_route_change() {
++	family=${1}
++
++	setup namespaces routing || return 2
++	trace "${ns_a}"  veth_A-R1    "${ns_r1}" veth_R1-A \
++	      "${ns_r1}" veth_R1-B    "${ns_b}"  veth_B-R1 \
++	      "${ns_a}"  veth_A-R2    "${ns_r2}" veth_R2-A \
++	      "${ns_r2}" veth_R2-B    "${ns_b}"  veth_B-R2
++
++	if [ ${family} -eq 4 ]; then
++		ping=ping
++		dst1="${prefix4}.${b_r1}.1"
++		dst2="${prefix4}.${b_r2}.1"
++		gw="${prefix4}.${a_r1}.2"
++	else
++		ping=${ping6}
++		dst1="${prefix6}:${b_r1}::1"
++		dst2="${prefix6}:${b_r2}::1"
++		gw="${prefix6}:${a_r1}::2"
++	fi
++
++	# Set up initial MTU values
++	mtu "${ns_a}"  veth_A-R1 2000
++	mtu "${ns_r1}" veth_R1-A 2000
++	mtu "${ns_r1}" veth_R1-B 1400
++	mtu "${ns_b}"  veth_B-R1 1400
++
++	mtu "${ns_a}"  veth_A-R2 2000
++	mtu "${ns_r2}" veth_R2-A 2000
++	mtu "${ns_r2}" veth_R2-B 1500
++	mtu "${ns_b}"  veth_B-R2 1500
++
++	# Create route exceptions
++	run_cmd ${ns_a} ${ping} -q -M want -i 0.1 -w 1 -s 1800 ${dst1}
++	run_cmd ${ns_a} ${ping} -q -M want -i 0.1 -w 1 -s 1800 ${dst2}
++
++	# Check that exceptions have been created with the correct PMTU
++	pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" ${dst1})"
++	check_pmtu_value "1400" "${pmtu_1}" "exceeding MTU" || return 1
++	pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" ${dst2})"
++	check_pmtu_value "1500" "${pmtu_2}" "exceeding MTU" || return 1
++
++	# Replace the route from A to R1
++	run_cmd ${ns_a} ip route change default via ${gw}
++
++	# Delete the device in A
++	run_cmd ${ns_a} ip link del "veth_A-R1"
++}
++
++test_pmtu_ipv4_route_change() {
++	test_pmtu_ipvX_route_change 4
++}
++
++test_pmtu_ipv6_route_change() {
++	test_pmtu_ipvX_route_change 6
++}
++
+ usage() {
+ 	echo
+ 	echo "$0 [OPTIONS] [TEST]..."


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-01-19 20:31 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-01-19 20:31 UTC (permalink / raw
  To: gentoo-commits

commit:     1764de0399b91e57cbac4235ad6f965ce69754a8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 19 20:31:23 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jan 19 20:31:23 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1764de03

Linux patch 5.10.9

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1008_linux-5.10.9.patch | 5583 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5587 insertions(+)

diff --git a/0000_README b/0000_README
index b0f1ce8..e4c8bac 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-5.10.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.8
 
+Patch:  1008_linux-5.10.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-5.10.9.patch b/1008_linux-5.10.9.patch
new file mode 100644
index 0000000..2b7861e
--- /dev/null
+++ b/1008_linux-5.10.9.patch
@@ -0,0 +1,5583 @@
+diff --git a/Documentation/devicetree/bindings/display/bridge/sii902x.txt b/Documentation/devicetree/bindings/display/bridge/sii902x.txt
+index 0d1db3f9da84f..02c21b5847418 100644
+--- a/Documentation/devicetree/bindings/display/bridge/sii902x.txt
++++ b/Documentation/devicetree/bindings/display/bridge/sii902x.txt
+@@ -8,6 +8,8 @@ Optional properties:
+ 	- interrupts: describe the interrupt line used to inform the host
+ 	  about hotplug events.
+ 	- reset-gpios: OF device-tree gpio specification for RST_N pin.
++	- iovcc-supply: I/O Supply Voltage (1.8V or 3.3V)
++	- cvcc12-supply: Digital Core Supply Voltage (1.2V)
+ 
+ 	HDMI audio properties:
+ 	- #sound-dai-cells: <0> or <1>. <0> if only i2s or spdif pin
+@@ -54,6 +56,8 @@ Example:
+ 		compatible = "sil,sii9022";
+ 		reg = <0x39>;
+ 		reset-gpios = <&pioA 1 0>;
++		iovcc-supply = <&v3v3_hdmi>;
++		cvcc12-supply = <&v1v2_hdmi>;
+ 
+ 		#sound-dai-cells = <0>;
+ 		sil,i2s-data-lanes = < 0 1 2 >;
+diff --git a/Documentation/sound/alsa-configuration.rst b/Documentation/sound/alsa-configuration.rst
+index c755b1c5e16f2..32603db7de837 100644
+--- a/Documentation/sound/alsa-configuration.rst
++++ b/Documentation/sound/alsa-configuration.rst
+@@ -1501,7 +1501,7 @@ Module for Digigram miXart8 sound cards.
+ 
+ This module supports multiple cards.
+ Note: One miXart8 board will be represented as 4 alsa cards.
+-See MIXART.txt for details.
++See Documentation/sound/cards/mixart.rst for details.
+ 
+ When the driver is compiled as a module and the hotplug firmware
+ is supported, the firmware data is loaded via hotplug automatically.
+diff --git a/Makefile b/Makefile
+index 4ee137b5d2416..1572ebd192a93 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arc/Makefile b/arch/arc/Makefile
+index 0c6bf0d1df7ad..578bdbbb0fa7f 100644
+--- a/arch/arc/Makefile
++++ b/arch/arc/Makefile
+@@ -102,16 +102,22 @@ libs-y		+= arch/arc/lib/ $(LIBGCC)
+ 
+ boot		:= arch/arc/boot
+ 
+-#default target for make without any arguments.
+-KBUILD_IMAGE	:= $(boot)/bootpImage
+-
+-all:	bootpImage
+-bootpImage: vmlinux
+-
+-boot_targets += uImage uImage.bin uImage.gz
++boot_targets := uImage.bin uImage.gz uImage.lzma
+ 
++PHONY += $(boot_targets)
+ $(boot_targets): vmlinux
+ 	$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
+ 
++uimage-default-y			:= uImage.bin
++uimage-default-$(CONFIG_KERNEL_GZIP)	:= uImage.gz
++uimage-default-$(CONFIG_KERNEL_LZMA)	:= uImage.lzma
++
++PHONY += uImage
++uImage: $(uimage-default-y)
++	@ln -sf $< $(boot)/uImage
++	@$(kecho) '  Image $(boot)/uImage is ready'
++
++CLEAN_FILES += $(boot)/uImage
++
+ archclean:
+ 	$(Q)$(MAKE) $(clean)=$(boot)
+diff --git a/arch/arc/boot/Makefile b/arch/arc/boot/Makefile
+index 538b92f4dd253..3b1f8a69a89ef 100644
+--- a/arch/arc/boot/Makefile
++++ b/arch/arc/boot/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-targets := vmlinux.bin vmlinux.bin.gz uImage
++targets := vmlinux.bin vmlinux.bin.gz
+ 
+ # uImage build relies on mkimage being availble on your host for ARC target
+ # You will need to build u-boot for ARC, rename mkimage to arc-elf32-mkimage
+@@ -13,11 +13,6 @@ LINUX_START_TEXT = $$(readelf -h vmlinux | \
+ UIMAGE_LOADADDR    = $(CONFIG_LINUX_LINK_BASE)
+ UIMAGE_ENTRYADDR   = $(LINUX_START_TEXT)
+ 
+-suffix-y := bin
+-suffix-$(CONFIG_KERNEL_GZIP)	:= gz
+-suffix-$(CONFIG_KERNEL_LZMA)	:= lzma
+-
+-targets += uImage
+ targets += uImage.bin
+ targets += uImage.gz
+ targets += uImage.lzma
+@@ -42,7 +37,3 @@ $(obj)/uImage.gz: $(obj)/vmlinux.bin.gz FORCE
+ 
+ $(obj)/uImage.lzma: $(obj)/vmlinux.bin.lzma FORCE
+ 	$(call if_changed,uimage,lzma)
+-
+-$(obj)/uImage: $(obj)/uImage.$(suffix-y)
+-	@ln -sf $(notdir $<) $@
+-	@echo '  Image $@ is ready'
+diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
+index b0dfed0f12be0..d9c264dc25fcb 100644
+--- a/arch/arc/include/asm/page.h
++++ b/arch/arc/include/asm/page.h
+@@ -10,6 +10,7 @@
+ #ifndef __ASSEMBLY__
+ 
+ #define clear_page(paddr)		memset((paddr), 0, PAGE_SIZE)
++#define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
+ #define copy_page(to, from)		memcpy((to), (from), PAGE_SIZE)
+ 
+ struct vm_area_struct;
+diff --git a/arch/arm/boot/dts/picoxcell-pc3x2.dtsi b/arch/arm/boot/dts/picoxcell-pc3x2.dtsi
+index c4c6c7e9e37b6..5898879a3038e 100644
+--- a/arch/arm/boot/dts/picoxcell-pc3x2.dtsi
++++ b/arch/arm/boot/dts/picoxcell-pc3x2.dtsi
+@@ -45,18 +45,21 @@
+ 		emac: gem@30000 {
+ 			compatible = "cadence,gem";
+ 			reg = <0x30000 0x10000>;
++			interrupt-parent = <&vic0>;
+ 			interrupts = <31>;
+ 		};
+ 
+ 		dmac1: dmac@40000 {
+ 			compatible = "snps,dw-dmac";
+ 			reg = <0x40000 0x10000>;
++			interrupt-parent = <&vic0>;
+ 			interrupts = <25>;
+ 		};
+ 
+ 		dmac2: dmac@50000 {
+ 			compatible = "snps,dw-dmac";
+ 			reg = <0x50000 0x10000>;
++			interrupt-parent = <&vic0>;
+ 			interrupts = <26>;
+ 		};
+ 
+@@ -233,6 +236,7 @@
+ 		axi2pico@c0000000 {
+ 			compatible = "picochip,axi2pico-pc3x2";
+ 			reg = <0xc0000000 0x10000>;
++			interrupt-parent = <&vic0>;
+ 			interrupts = <13 14 15 16 17 18 19 20 21>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/ste-ux500-samsung-golden.dts b/arch/arm/boot/dts/ste-ux500-samsung-golden.dts
+index a1093cb37dc7a..aed1f2d5f2467 100644
+--- a/arch/arm/boot/dts/ste-ux500-samsung-golden.dts
++++ b/arch/arm/boot/dts/ste-ux500-samsung-golden.dts
+@@ -326,6 +326,7 @@
+ 				panel@0 {
+ 					compatible = "samsung,s6e63m0";
+ 					reg = <0>;
++					max-brightness = <15>;
+ 					vdd3-supply = <&panel_reg_3v0>;
+ 					vci-supply = <&panel_reg_1v8>;
+ 					reset-gpios = <&gpio4 11 GPIO_ACTIVE_LOW>;
+diff --git a/arch/arm/mach-omap2/pmic-cpcap.c b/arch/arm/mach-omap2/pmic-cpcap.c
+index eab281a5fc9f7..09076ad0576d9 100644
+--- a/arch/arm/mach-omap2/pmic-cpcap.c
++++ b/arch/arm/mach-omap2/pmic-cpcap.c
+@@ -71,7 +71,7 @@ static struct omap_voltdm_pmic omap_cpcap_iva = {
+ 	.vp_vstepmin = OMAP4_VP_VSTEPMIN_VSTEPMIN,
+ 	.vp_vstepmax = OMAP4_VP_VSTEPMAX_VSTEPMAX,
+ 	.vddmin = 900000,
+-	.vddmax = 1350000,
++	.vddmax = 1375000,
+ 	.vp_timeout_us = OMAP4_VP_VLIMITTO_TIMEOUT_US,
+ 	.i2c_slave_addr = 0x44,
+ 	.volt_reg_addr = 0x0,
+diff --git a/arch/mips/boot/compressed/decompress.c b/arch/mips/boot/compressed/decompress.c
+index c61c641674e6b..e3946b06e840a 100644
+--- a/arch/mips/boot/compressed/decompress.c
++++ b/arch/mips/boot/compressed/decompress.c
+@@ -13,6 +13,7 @@
+ #include <linux/libfdt.h>
+ 
+ #include <asm/addrspace.h>
++#include <asm/unaligned.h>
+ 
+ /*
+  * These two variables specify the free mem region
+@@ -117,7 +118,7 @@ void decompress_kernel(unsigned long boot_heap_start)
+ 		dtb_size = fdt_totalsize((void *)&__appended_dtb);
+ 
+ 		/* last four bytes is always image size in little endian */
+-		image_size = le32_to_cpup((void *)&__image_end - 4);
++		image_size = get_unaligned_le32((void *)&__image_end - 4);
+ 
+ 		/* copy dtb to where the booted kernel will expect it */
+ 		memcpy((void *)VMLINUX_LOAD_ADDRESS_ULL + image_size,
+diff --git a/arch/mips/kernel/binfmt_elfn32.c b/arch/mips/kernel/binfmt_elfn32.c
+index 6ee3f7218c675..c4441416e96b6 100644
+--- a/arch/mips/kernel/binfmt_elfn32.c
++++ b/arch/mips/kernel/binfmt_elfn32.c
+@@ -103,4 +103,11 @@ jiffies_to_old_timeval32(unsigned long jiffies, struct old_timeval32 *value)
+ #undef ns_to_kernel_old_timeval
+ #define ns_to_kernel_old_timeval ns_to_old_timeval32
+ 
++/*
++ * Some data types as stored in coredump.
++ */
++#define user_long_t             compat_long_t
++#define user_siginfo_t          compat_siginfo_t
++#define copy_siginfo_to_external        copy_siginfo_to_external32
++
+ #include "../../../fs/binfmt_elf.c"
+diff --git a/arch/mips/kernel/binfmt_elfo32.c b/arch/mips/kernel/binfmt_elfo32.c
+index 6dd103d3cebba..7b2a23f48c1ac 100644
+--- a/arch/mips/kernel/binfmt_elfo32.c
++++ b/arch/mips/kernel/binfmt_elfo32.c
+@@ -106,4 +106,11 @@ jiffies_to_old_timeval32(unsigned long jiffies, struct old_timeval32 *value)
+ #undef ns_to_kernel_old_timeval
+ #define ns_to_kernel_old_timeval ns_to_old_timeval32
+ 
++/*
++ * Some data types as stored in coredump.
++ */
++#define user_long_t             compat_long_t
++#define user_siginfo_t          compat_siginfo_t
++#define copy_siginfo_to_external        copy_siginfo_to_external32
++
+ #include "../../../fs/binfmt_elf.c"
+diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c
+index 3d80a51256de6..dab8febb57419 100644
+--- a/arch/mips/kernel/relocate.c
++++ b/arch/mips/kernel/relocate.c
+@@ -187,8 +187,14 @@ static int __init relocate_exception_table(long offset)
+ static inline __init unsigned long rotate_xor(unsigned long hash,
+ 					      const void *area, size_t size)
+ {
+-	size_t i;
+-	unsigned long *ptr = (unsigned long *)area;
++	const typeof(hash) *ptr = PTR_ALIGN(area, sizeof(hash));
++	size_t diff, i;
++
++	diff = (void *)ptr - area;
++	if (unlikely(size < diff + sizeof(hash)))
++		return hash;
++
++	size = ALIGN_DOWN(size - diff, sizeof(hash));
+ 
+ 	for (i = 0; i < size / sizeof(hash); i++) {
+ 		/* Rotate by odd number of bits and XOR. */
+diff --git a/arch/mips/lib/uncached.c b/arch/mips/lib/uncached.c
+index 09d5deea747f2..f80a67c092b63 100644
+--- a/arch/mips/lib/uncached.c
++++ b/arch/mips/lib/uncached.c
+@@ -37,10 +37,12 @@
+  */
+ unsigned long run_uncached(void *func)
+ {
+-	register long sp __asm__("$sp");
+ 	register long ret __asm__("$2");
+ 	long lfunc = (long)func, ufunc;
+ 	long usp;
++	long sp;
++
++	__asm__("move %0, $sp" : "=r" (sp));
+ 
+ 	if (sp >= (long)CKSEG0 && sp < (long)CKSEG2)
+ 		usp = CKSEG1ADDR(sp);
+diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
+index 9cede7ce37e66..c9644c38ec28f 100644
+--- a/arch/mips/mm/c-r4k.c
++++ b/arch/mips/mm/c-r4k.c
+@@ -1609,7 +1609,7 @@ static void __init loongson2_sc_init(void)
+ 	c->options |= MIPS_CPU_INCLUSIVE_CACHES;
+ }
+ 
+-static void __init loongson3_sc_init(void)
++static void loongson3_sc_init(void)
+ {
+ 	struct cpuinfo_mips *c = &current_cpu_data;
+ 	unsigned int config2, lsize;
+diff --git a/arch/mips/mm/sc-mips.c b/arch/mips/mm/sc-mips.c
+index dd0a5becaabd8..06ec304ad4d16 100644
+--- a/arch/mips/mm/sc-mips.c
++++ b/arch/mips/mm/sc-mips.c
+@@ -146,7 +146,7 @@ static inline int mips_sc_is_activated(struct cpuinfo_mips *c)
+ 	return 1;
+ }
+ 
+-static int __init mips_sc_probe_cm3(void)
++static int mips_sc_probe_cm3(void)
+ {
+ 	struct cpuinfo_mips *c = &current_cpu_data;
+ 	unsigned long cfg = read_gcr_l2_config();
+@@ -180,7 +180,7 @@ static int __init mips_sc_probe_cm3(void)
+ 	return 0;
+ }
+ 
+-static inline int __init mips_sc_probe(void)
++static inline int mips_sc_probe(void)
+ {
+ 	struct cpuinfo_mips *c = &current_cpu_data;
+ 	unsigned int config1, config2;
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index 183f1f4b2ae66..73e8b5e5bb654 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -99,7 +99,6 @@
+ 				| _PAGE_DIRTY)
+ 
+ #define PAGE_KERNEL		__pgprot(_PAGE_KERNEL)
+-#define PAGE_KERNEL_EXEC	__pgprot(_PAGE_KERNEL | _PAGE_EXEC)
+ #define PAGE_KERNEL_READ	__pgprot(_PAGE_KERNEL & ~_PAGE_WRITE)
+ #define PAGE_KERNEL_EXEC	__pgprot(_PAGE_KERNEL | _PAGE_EXEC)
+ #define PAGE_KERNEL_READ_EXEC	__pgprot((_PAGE_KERNEL & ~_PAGE_WRITE) \
+diff --git a/arch/riscv/include/asm/vdso.h b/arch/riscv/include/asm/vdso.h
+index 8454f746bbfd0..1453a2f563bcc 100644
+--- a/arch/riscv/include/asm/vdso.h
++++ b/arch/riscv/include/asm/vdso.h
+@@ -10,7 +10,7 @@
+ 
+ #include <linux/types.h>
+ 
+-#ifndef GENERIC_TIME_VSYSCALL
++#ifndef CONFIG_GENERIC_TIME_VSYSCALL
+ struct vdso_data {
+ };
+ #endif
+diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
+index 524d918f3601b..835e45bb59c40 100644
+--- a/arch/riscv/kernel/entry.S
++++ b/arch/riscv/kernel/entry.S
+@@ -124,15 +124,15 @@ skip_context_tracking:
+ 	REG_L a1, (a1)
+ 	jr a1
+ 1:
+-#ifdef CONFIG_TRACE_IRQFLAGS
+-	call trace_hardirqs_on
+-#endif
+ 	/*
+ 	 * Exceptions run with interrupts enabled or disabled depending on the
+ 	 * state of SR_PIE in m/sstatus.
+ 	 */
+ 	andi t0, s1, SR_PIE
+ 	beqz t0, 1f
++#ifdef CONFIG_TRACE_IRQFLAGS
++	call trace_hardirqs_on
++#endif
+ 	csrs CSR_STATUS, SR_IE
+ 
+ 1:
+@@ -186,14 +186,7 @@ check_syscall_nr:
+ 	 * Syscall number held in a7.
+ 	 * If syscall number is above allowed value, redirect to ni_syscall.
+ 	 */
+-	bge a7, t0, 1f
+-	/*
+-	 * Check if syscall is rejected by tracer, i.e., a7 == -1.
+-	 * If yes, we pretend it was executed.
+-	 */
+-	li t1, -1
+-	beq a7, t1, ret_from_syscall_rejected
+-	blt a7, t1, 1f
++	bgeu a7, t0, 1f
+ 	/* Call syscall */
+ 	la s0, sys_call_table
+ 	slli t0, a7, RISCV_LGPTR
+diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
+index 678204231700c..3f1d35e7c98a6 100644
+--- a/arch/riscv/kernel/vdso.c
++++ b/arch/riscv/kernel/vdso.c
+@@ -12,7 +12,7 @@
+ #include <linux/binfmts.h>
+ #include <linux/err.h>
+ #include <asm/page.h>
+-#ifdef GENERIC_TIME_VSYSCALL
++#ifdef CONFIG_GENERIC_TIME_VSYSCALL
+ #include <vdso/datapage.h>
+ #else
+ #include <asm/vdso.h>
+diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
+index 12ddd1f6bf70c..a8a2ffd9114aa 100644
+--- a/arch/riscv/mm/kasan_init.c
++++ b/arch/riscv/mm/kasan_init.c
+@@ -93,8 +93,8 @@ void __init kasan_init(void)
+ 								VMALLOC_END));
+ 
+ 	for_each_mem_range(i, &_start, &_end) {
+-		void *start = (void *)_start;
+-		void *end = (void *)_end;
++		void *start = (void *)__va(_start);
++		void *end = (void *)__va(_end);
+ 
+ 		if (start >= end)
+ 			break;
+diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
+index 5208ba49c89a9..2c87350c1fb09 100644
+--- a/arch/x86/hyperv/mmu.c
++++ b/arch/x86/hyperv/mmu.c
+@@ -66,11 +66,17 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
+ 	if (!hv_hypercall_pg)
+ 		goto do_native;
+ 
+-	if (cpumask_empty(cpus))
+-		return;
+-
+ 	local_irq_save(flags);
+ 
++	/*
++	 * Only check the mask _after_ interrupt has been disabled to avoid the
++	 * mask changing under our feet.
++	 */
++	if (cpumask_empty(cpus)) {
++		local_irq_restore(flags);
++		return;
++	}
++
+ 	flush_pcpu = (struct hv_tlb_flush **)
+ 		     this_cpu_ptr(hyperv_pcpu_input_arg);
+ 
+diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
+index 7d04b356d44d3..cdc04d0912423 100644
+--- a/arch/x86/kernel/sev-es-shared.c
++++ b/arch/x86/kernel/sev-es-shared.c
+@@ -305,14 +305,14 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
+ 	case 0xe4:
+ 	case 0xe5:
+ 		*exitinfo |= IOIO_TYPE_IN;
+-		*exitinfo |= (u64)insn->immediate.value << 16;
++		*exitinfo |= (u8)insn->immediate.value << 16;
+ 		break;
+ 
+ 	/* OUT immediate opcodes */
+ 	case 0xe6:
+ 	case 0xe7:
+ 		*exitinfo |= IOIO_TYPE_OUT;
+-		*exitinfo |= (u64)insn->immediate.value << 16;
++		*exitinfo |= (u8)insn->immediate.value << 16;
+ 		break;
+ 
+ 	/* IN register opcodes */
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 9e81d1052091f..9e4eb0fc1c16e 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -6332,13 +6332,13 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
+ 	 * limit 'something'.
+ 	 */
+ 	/* no more than 50% of tags for async I/O */
+-	bfqd->word_depths[0][0] = max((1U << bt->sb.shift) >> 1, 1U);
++	bfqd->word_depths[0][0] = max(bt->sb.depth >> 1, 1U);
+ 	/*
+ 	 * no more than 75% of tags for sync writes (25% extra tags
+ 	 * w.r.t. async I/O, to prevent async I/O from starving sync
+ 	 * writes)
+ 	 */
+-	bfqd->word_depths[0][1] = max(((1U << bt->sb.shift) * 3) >> 2, 1U);
++	bfqd->word_depths[0][1] = max((bt->sb.depth * 3) >> 2, 1U);
+ 
+ 	/*
+ 	 * In-word depths in case some bfq_queue is being weight-
+@@ -6348,9 +6348,9 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
+ 	 * shortage.
+ 	 */
+ 	/* no more than ~18% of tags for async I/O */
+-	bfqd->word_depths[1][0] = max(((1U << bt->sb.shift) * 3) >> 4, 1U);
++	bfqd->word_depths[1][0] = max((bt->sb.depth * 3) >> 4, 1U);
+ 	/* no more than ~37% of tags for sync writes (~20% extra tags) */
+-	bfqd->word_depths[1][1] = max(((1U << bt->sb.shift) * 6) >> 4, 1U);
++	bfqd->word_depths[1][1] = max((bt->sb.depth * 6) >> 4, 1U);
+ 
+ 	for (i = 0; i < 2; i++)
+ 		for (j = 0; j < 2; j++)
+diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
+index 4d6e83e5b4429..4de03da9a624b 100644
+--- a/block/blk-mq-debugfs.c
++++ b/block/blk-mq-debugfs.c
+@@ -246,6 +246,7 @@ static const char *const hctx_flag_name[] = {
+ 	HCTX_FLAG_NAME(BLOCKING),
+ 	HCTX_FLAG_NAME(NO_SCHED),
+ 	HCTX_FLAG_NAME(STACKING),
++	HCTX_FLAG_NAME(TAG_HCTX_SHARED),
+ };
+ #undef HCTX_FLAG_NAME
+ 
+diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
+index e3638bafb9411..aee023ad02375 100644
+--- a/drivers/acpi/internal.h
++++ b/drivers/acpi/internal.h
+@@ -97,7 +97,7 @@ void acpi_scan_table_handler(u32 event, void *table, void *context);
+ extern struct list_head acpi_bus_id_list;
+ 
+ struct acpi_device_bus_id {
+-	char bus_id[15];
++	const char *bus_id;
+ 	unsigned int instance_no;
+ 	struct list_head node;
+ };
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index bc6a79e332209..f23ef508fe88c 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -486,6 +486,7 @@ static void acpi_device_del(struct acpi_device *device)
+ 				acpi_device_bus_id->instance_no--;
+ 			else {
+ 				list_del(&acpi_device_bus_id->node);
++				kfree_const(acpi_device_bus_id->bus_id);
+ 				kfree(acpi_device_bus_id);
+ 			}
+ 			break;
+@@ -674,7 +675,14 @@ int acpi_device_add(struct acpi_device *device,
+ 	}
+ 	if (!found) {
+ 		acpi_device_bus_id = new_bus_id;
+-		strcpy(acpi_device_bus_id->bus_id, acpi_device_hid(device));
++		acpi_device_bus_id->bus_id =
++			kstrdup_const(acpi_device_hid(device), GFP_KERNEL);
++		if (!acpi_device_bus_id->bus_id) {
++			pr_err(PREFIX "Memory allocation error for bus id\n");
++			result = -ENOMEM;
++			goto err_free_new_bus_id;
++		}
++
+ 		acpi_device_bus_id->instance_no = 0;
+ 		list_add_tail(&acpi_device_bus_id->node, &acpi_bus_id_list);
+ 	}
+@@ -709,6 +717,11 @@ int acpi_device_add(struct acpi_device *device,
+ 	if (device->parent)
+ 		list_del(&device->node);
+ 	list_del(&device->wakeup_list);
++
++ err_free_new_bus_id:
++	if (!found)
++		kfree(new_bus_id);
++
+ 	mutex_unlock(&acpi_device_lock);
+ 
+  err_detach:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 026789b466db9..2ddbcfe0a72ff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2524,11 +2524,11 @@ static int amdgpu_device_ip_fini(struct amdgpu_device *adev)
+ 	if (adev->gmc.xgmi.num_physical_nodes > 1)
+ 		amdgpu_xgmi_remove_device(adev);
+ 
+-	amdgpu_amdkfd_device_fini(adev);
+-
+ 	amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
+ 	amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
+ 
++	amdgpu_amdkfd_device_fini(adev);
++
+ 	/* need to disable SMC first */
+ 	for (i = 0; i < adev->num_ip_blocks; i++) {
+ 		if (!adev->ip_blocks[i].status.hw)
+@@ -3008,7 +3008,7 @@ bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type)
+ #endif
+ 	default:
+ 		if (amdgpu_dc > 0)
+-			DRM_INFO("Display Core has been requested via kernel parameter "
++			DRM_INFO_ONCE("Display Core has been requested via kernel parameter "
+ 					 "but isn't supported by ASIC, ignoring\n");
+ 		return false;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 8e988f07f0856..0b786d8dd8bc7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1076,6 +1076,8 @@ static const struct pci_device_id pciidlist[] = {
+ 
+ 	/* Renoir */
+ 	{0x1002, 0x1636, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
++	{0x1002, 0x1638, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
++	{0x1002, 0x164C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
+ 
+ 	/* Navi12 */
+ 	{0x1002, 0x7360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI12},
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index a6dbe4b83533f..2f47f81a74a57 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -1283,8 +1283,12 @@ static int psp_hdcp_terminate(struct psp_context *psp)
+ 	if (amdgpu_sriov_vf(psp->adev))
+ 		return 0;
+ 
+-	if (!psp->hdcp_context.hdcp_initialized)
+-		return 0;
++	if (!psp->hdcp_context.hdcp_initialized) {
++		if (psp->hdcp_context.hdcp_shared_buf)
++			goto out;
++		else
++			return 0;
++	}
+ 
+ 	ret = psp_hdcp_unload(psp);
+ 	if (ret)
+@@ -1292,6 +1296,7 @@ static int psp_hdcp_terminate(struct psp_context *psp)
+ 
+ 	psp->hdcp_context.hdcp_initialized = false;
+ 
++out:
+ 	/* free hdcp shared memory */
+ 	amdgpu_bo_free_kernel(&psp->hdcp_context.hdcp_shared_bo,
+ 			      &psp->hdcp_context.hdcp_shared_mc_addr,
+@@ -1430,8 +1435,12 @@ static int psp_dtm_terminate(struct psp_context *psp)
+ 	if (amdgpu_sriov_vf(psp->adev))
+ 		return 0;
+ 
+-	if (!psp->dtm_context.dtm_initialized)
+-		return 0;
++	if (!psp->dtm_context.dtm_initialized) {
++		if (psp->dtm_context.dtm_shared_buf)
++			goto out;
++		else
++			return 0;
++	}
+ 
+ 	ret = psp_dtm_unload(psp);
+ 	if (ret)
+@@ -1439,6 +1448,7 @@ static int psp_dtm_terminate(struct psp_context *psp)
+ 
+ 	psp->dtm_context.dtm_initialized = false;
+ 
++out:
+ 	/* free hdcp shared memory */
+ 	amdgpu_bo_free_kernel(&psp->dtm_context.dtm_shared_bo,
+ 			      &psp->dtm_context.dtm_shared_mc_addr,
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index f57c5f57efa8a..41cd108214d6d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -1242,7 +1242,8 @@ static int soc15_common_early_init(void *handle)
+ 		break;
+ 	case CHIP_RENOIR:
+ 		adev->asic_funcs = &soc15_asic_funcs;
+-		if (adev->pdev->device == 0x1636)
++		if ((adev->pdev->device == 0x1636) ||
++		    (adev->pdev->device == 0x164c))
+ 			adev->apu_flags |= AMD_APU_IS_RENOIR;
+ 		else
+ 			adev->apu_flags |= AMD_APU_IS_GREEN_SARDINE;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 5b0cedfa824a9..e1e5d81a5e438 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -2471,9 +2471,14 @@ enum dc_status dc_link_validate_mode_timing(
+ static struct abm *get_abm_from_stream_res(const struct dc_link *link)
+ {
+ 	int i;
+-	struct dc *dc = link->ctx->dc;
++	struct dc *dc = NULL;
+ 	struct abm *abm = NULL;
+ 
++	if (!link || !link->ctx)
++		return NULL;
++
++	dc = link->ctx->dc;
++
+ 	for (i = 0; i < MAX_PIPES; i++) {
+ 		struct pipe_ctx pipe_ctx = dc->current_state->res_ctx.pipe_ctx[i];
+ 		struct dc_stream_state *stream = pipe_ctx.stream;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
+index 860e72a51534c..80170f9721ce9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
+@@ -2635,14 +2635,15 @@ static void dml20v2_DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndP
+ 	}
+ 
+ 	if (mode_lib->vba.DRAMClockChangeSupportsVActive &&
+-			mode_lib->vba.MinActiveDRAMClockChangeMargin > 60 &&
+-			mode_lib->vba.PrefetchMode[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb] == 0) {
++			mode_lib->vba.MinActiveDRAMClockChangeMargin > 60) {
+ 		mode_lib->vba.DRAMClockChangeWatermark += 25;
+ 
+ 		for (k = 0; k < mode_lib->vba.NumberOfActivePlanes; ++k) {
+-			if (mode_lib->vba.DRAMClockChangeWatermark >
+-			dml_max(mode_lib->vba.StutterEnterPlusExitWatermark, mode_lib->vba.UrgentWatermark))
+-				mode_lib->vba.MinTTUVBlank[k] += 25;
++			if (mode_lib->vba.PrefetchMode[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb] == 0) {
++				if (mode_lib->vba.DRAMClockChangeWatermark >
++				dml_max(mode_lib->vba.StutterEnterPlusExitWatermark, mode_lib->vba.UrgentWatermark))
++					mode_lib->vba.MinTTUVBlank[k] += 25;
++			}
+ 		}
+ 
+ 		mode_lib->vba.DRAMClockChangeSupport[0][0] = dm_dram_clock_change_vactive;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+index 66c1026489bee..425c48e100e4f 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+@@ -188,6 +188,7 @@ static int renoir_get_dpm_clk_limited(struct smu_context *smu, enum smu_clk_type
+ 			return -EINVAL;
+ 		*freq = clk_table->SocClocks[dpm_level].Freq;
+ 		break;
++	case SMU_UCLK:
+ 	case SMU_MCLK:
+ 		if (dpm_level >= NUM_FCLK_DPM_LEVELS)
+ 			return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
+index 660f403d5770c..7907c9e0b5dec 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
+@@ -222,6 +222,7 @@ int smu_v12_0_set_soft_freq_limited_range(struct smu_context *smu, enum smu_clk_
+ 	break;
+ 	case SMU_FCLK:
+ 	case SMU_MCLK:
++	case SMU_UCLK:
+ 		ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetHardMinFclkByFreq, min, NULL);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/gpu/drm/bridge/sii902x.c b/drivers/gpu/drm/bridge/sii902x.c
+index 33fd33f953ec4..89558e5815303 100644
+--- a/drivers/gpu/drm/bridge/sii902x.c
++++ b/drivers/gpu/drm/bridge/sii902x.c
+@@ -17,6 +17,7 @@
+ #include <linux/i2c.h>
+ #include <linux/module.h>
+ #include <linux/regmap.h>
++#include <linux/regulator/consumer.h>
+ #include <linux/clk.h>
+ 
+ #include <drm/drm_atomic_helper.h>
+@@ -168,6 +169,7 @@ struct sii902x {
+ 	struct drm_connector connector;
+ 	struct gpio_desc *reset_gpio;
+ 	struct i2c_mux_core *i2cmux;
++	struct regulator_bulk_data supplies[2];
+ 	/*
+ 	 * Mutex protects audio and video functions from interfering
+ 	 * each other, by keeping their i2c command sequences atomic.
+@@ -954,41 +956,13 @@ static const struct drm_bridge_timings default_sii902x_timings = {
+ 		 | DRM_BUS_FLAG_DE_HIGH,
+ };
+ 
+-static int sii902x_probe(struct i2c_client *client,
+-			 const struct i2c_device_id *id)
++static int sii902x_init(struct sii902x *sii902x)
+ {
+-	struct device *dev = &client->dev;
++	struct device *dev = &sii902x->i2c->dev;
+ 	unsigned int status = 0;
+-	struct sii902x *sii902x;
+ 	u8 chipid[4];
+ 	int ret;
+ 
+-	ret = i2c_check_functionality(client->adapter,
+-				      I2C_FUNC_SMBUS_BYTE_DATA);
+-	if (!ret) {
+-		dev_err(dev, "I2C adapter not suitable\n");
+-		return -EIO;
+-	}
+-
+-	sii902x = devm_kzalloc(dev, sizeof(*sii902x), GFP_KERNEL);
+-	if (!sii902x)
+-		return -ENOMEM;
+-
+-	sii902x->i2c = client;
+-	sii902x->regmap = devm_regmap_init_i2c(client, &sii902x_regmap_config);
+-	if (IS_ERR(sii902x->regmap))
+-		return PTR_ERR(sii902x->regmap);
+-
+-	sii902x->reset_gpio = devm_gpiod_get_optional(dev, "reset",
+-						      GPIOD_OUT_LOW);
+-	if (IS_ERR(sii902x->reset_gpio)) {
+-		dev_err(dev, "Failed to retrieve/request reset gpio: %ld\n",
+-			PTR_ERR(sii902x->reset_gpio));
+-		return PTR_ERR(sii902x->reset_gpio);
+-	}
+-
+-	mutex_init(&sii902x->mutex);
+-
+ 	sii902x_reset(sii902x);
+ 
+ 	ret = regmap_write(sii902x->regmap, SII902X_REG_TPI_RQB, 0x0);
+@@ -1012,11 +986,11 @@ static int sii902x_probe(struct i2c_client *client,
+ 	regmap_read(sii902x->regmap, SII902X_INT_STATUS, &status);
+ 	regmap_write(sii902x->regmap, SII902X_INT_STATUS, status);
+ 
+-	if (client->irq > 0) {
++	if (sii902x->i2c->irq > 0) {
+ 		regmap_write(sii902x->regmap, SII902X_INT_ENABLE,
+ 			     SII902X_HOTPLUG_EVENT);
+ 
+-		ret = devm_request_threaded_irq(dev, client->irq, NULL,
++		ret = devm_request_threaded_irq(dev, sii902x->i2c->irq, NULL,
+ 						sii902x_interrupt,
+ 						IRQF_ONESHOT, dev_name(dev),
+ 						sii902x);
+@@ -1031,9 +1005,9 @@ static int sii902x_probe(struct i2c_client *client,
+ 
+ 	sii902x_audio_codec_init(sii902x, dev);
+ 
+-	i2c_set_clientdata(client, sii902x);
++	i2c_set_clientdata(sii902x->i2c, sii902x);
+ 
+-	sii902x->i2cmux = i2c_mux_alloc(client->adapter, dev,
++	sii902x->i2cmux = i2c_mux_alloc(sii902x->i2c->adapter, dev,
+ 					1, 0, I2C_MUX_GATE,
+ 					sii902x_i2c_bypass_select,
+ 					sii902x_i2c_bypass_deselect);
+@@ -1044,6 +1018,62 @@ static int sii902x_probe(struct i2c_client *client,
+ 	return i2c_mux_add_adapter(sii902x->i2cmux, 0, 0, 0);
+ }
+ 
++static int sii902x_probe(struct i2c_client *client,
++			 const struct i2c_device_id *id)
++{
++	struct device *dev = &client->dev;
++	struct sii902x *sii902x;
++	int ret;
++
++	ret = i2c_check_functionality(client->adapter,
++				      I2C_FUNC_SMBUS_BYTE_DATA);
++	if (!ret) {
++		dev_err(dev, "I2C adapter not suitable\n");
++		return -EIO;
++	}
++
++	sii902x = devm_kzalloc(dev, sizeof(*sii902x), GFP_KERNEL);
++	if (!sii902x)
++		return -ENOMEM;
++
++	sii902x->i2c = client;
++	sii902x->regmap = devm_regmap_init_i2c(client, &sii902x_regmap_config);
++	if (IS_ERR(sii902x->regmap))
++		return PTR_ERR(sii902x->regmap);
++
++	sii902x->reset_gpio = devm_gpiod_get_optional(dev, "reset",
++						      GPIOD_OUT_LOW);
++	if (IS_ERR(sii902x->reset_gpio)) {
++		dev_err(dev, "Failed to retrieve/request reset gpio: %ld\n",
++			PTR_ERR(sii902x->reset_gpio));
++		return PTR_ERR(sii902x->reset_gpio);
++	}
++
++	mutex_init(&sii902x->mutex);
++
++	sii902x->supplies[0].supply = "iovcc";
++	sii902x->supplies[1].supply = "cvcc12";
++	ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(sii902x->supplies),
++				      sii902x->supplies);
++	if (ret < 0)
++		return ret;
++
++	ret = regulator_bulk_enable(ARRAY_SIZE(sii902x->supplies),
++				    sii902x->supplies);
++	if (ret < 0) {
++		dev_err_probe(dev, ret, "Failed to enable supplies");
++		return ret;
++	}
++
++	ret = sii902x_init(sii902x);
++	if (ret < 0) {
++		regulator_bulk_disable(ARRAY_SIZE(sii902x->supplies),
++				       sii902x->supplies);
++	}
++
++	return ret;
++}
++
+ static int sii902x_remove(struct i2c_client *client)
+ 
+ {
+@@ -1051,6 +1081,8 @@ static int sii902x_remove(struct i2c_client *client)
+ 
+ 	i2c_mux_del_adapters(sii902x->i2cmux);
+ 	drm_bridge_remove(&sii902x->bridge);
++	regulator_bulk_disable(ARRAY_SIZE(sii902x->supplies),
++			       sii902x->supplies);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
+index e5574e506a5cc..6d9e81ea67f4b 100644
+--- a/drivers/gpu/drm/i915/Makefile
++++ b/drivers/gpu/drm/i915/Makefile
+@@ -38,6 +38,7 @@ i915-y += i915_drv.o \
+ 	  i915_config.o \
+ 	  i915_irq.o \
+ 	  i915_getparam.o \
++	  i915_mitigations.o \
+ 	  i915_params.o \
+ 	  i915_pci.o \
+ 	  i915_scatterlist.o \
+diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
+index 520715b7d5b55..1515cf229ed12 100644
+--- a/drivers/gpu/drm/i915/display/icl_dsi.c
++++ b/drivers/gpu/drm/i915/display/icl_dsi.c
+@@ -1585,10 +1585,6 @@ static void gen11_dsi_get_power_domains(struct intel_encoder *encoder,
+ 
+ 	get_dsi_io_power_domains(i915,
+ 				 enc_to_intel_dsi(encoder));
+-
+-	if (crtc_state->dsc.compression_enable)
+-		intel_display_power_get(i915,
+-					intel_dsc_power_domain(crtc_state));
+ }
+ 
+ static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
+diff --git a/drivers/gpu/drm/i915/display/intel_panel.c b/drivers/gpu/drm/i915/display/intel_panel.c
+index 9f23bac0d7924..d64fce1a17cbc 100644
+--- a/drivers/gpu/drm/i915/display/intel_panel.c
++++ b/drivers/gpu/drm/i915/display/intel_panel.c
+@@ -1650,16 +1650,13 @@ static int lpt_setup_backlight(struct intel_connector *connector, enum pipe unus
+ 		val = pch_get_backlight(connector);
+ 	else
+ 		val = lpt_get_backlight(connector);
+-	val = intel_panel_compute_brightness(connector, val);
+-	panel->backlight.level = clamp(val, panel->backlight.min,
+-				       panel->backlight.max);
+ 
+ 	if (cpu_mode) {
+ 		drm_dbg_kms(&dev_priv->drm,
+ 			    "CPU backlight register was enabled, switching to PCH override\n");
+ 
+ 		/* Write converted CPU PWM value to PCH override register */
+-		lpt_set_backlight(connector->base.state, panel->backlight.level);
++		lpt_set_backlight(connector->base.state, val);
+ 		intel_de_write(dev_priv, BLC_PWM_PCH_CTL1,
+ 			       pch_ctl1 | BLM_PCH_OVERRIDE_ENABLE);
+ 
+@@ -1667,6 +1664,10 @@ static int lpt_setup_backlight(struct intel_connector *connector, enum pipe unus
+ 			       cpu_ctl2 & ~BLM_PWM_ENABLE);
+ 	}
+ 
++	val = intel_panel_compute_brightness(connector, val);
++	panel->backlight.level = clamp(val, panel->backlight.min,
++				       panel->backlight.max);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/display/vlv_dsi.c b/drivers/gpu/drm/i915/display/vlv_dsi.c
+index 5e5522923b1e4..690239d3f2e53 100644
+--- a/drivers/gpu/drm/i915/display/vlv_dsi.c
++++ b/drivers/gpu/drm/i915/display/vlv_dsi.c
+@@ -812,10 +812,20 @@ static void intel_dsi_pre_enable(struct intel_atomic_state *state,
+ 		intel_dsi_prepare(encoder, pipe_config);
+ 
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_POWER_ON);
+-	intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay);
+ 
+-	/* Deassert reset */
+-	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET);
++	/*
++	 * Give the panel time to power-on and then deassert its reset.
++	 * Depending on the VBT MIPI sequences version the deassert-seq
++	 * may contain the necessary delay, intel_dsi_msleep() will skip
++	 * the delay in that case. If there is no deassert-seq, then an
++	 * unconditional msleep is used to give the panel time to power-on.
++	 */
++	if (dev_priv->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET]) {
++		intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay);
++		intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET);
++	} else {
++		msleep(intel_dsi->panel_on_delay);
++	}
+ 
+ 	if (IS_GEMINILAKE(dev_priv)) {
+ 		glk_cold_boot = glk_dsi_enable_io(encoder);
+diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
+index d93d85cd30270..94465374ca2fe 100644
+--- a/drivers/gpu/drm/i915/gt/gen7_renderclear.c
++++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
+@@ -7,8 +7,6 @@
+ #include "i915_drv.h"
+ #include "intel_gpu_commands.h"
+ 
+-#define MAX_URB_ENTRIES 64
+-#define STATE_SIZE (4 * 1024)
+ #define GT3_INLINE_DATA_DELAYS 0x1E00
+ #define batch_advance(Y, CS) GEM_BUG_ON((Y)->end != (CS))
+ 
+@@ -34,38 +32,59 @@ struct batch_chunk {
+ };
+ 
+ struct batch_vals {
+-	u32 max_primitives;
+-	u32 max_urb_entries;
+-	u32 cmd_size;
+-	u32 state_size;
++	u32 max_threads;
+ 	u32 state_start;
+-	u32 batch_size;
++	u32 surface_start;
+ 	u32 surface_height;
+ 	u32 surface_width;
+-	u32 scratch_size;
+-	u32 max_size;
++	u32 size;
+ };
+ 
++static inline int num_primitives(const struct batch_vals *bv)
++{
++	/*
++	 * We need to saturate the GPU with work in order to dispatch
++	 * a shader on every HW thread, and clear the thread-local registers.
++	 * In short, we have to dispatch work faster than the shaders can
++	 * run in order to fill the EU and occupy each HW thread.
++	 */
++	return bv->max_threads;
++}
++
+ static void
+ batch_get_defaults(struct drm_i915_private *i915, struct batch_vals *bv)
+ {
+ 	if (IS_HASWELL(i915)) {
+-		bv->max_primitives = 280;
+-		bv->max_urb_entries = MAX_URB_ENTRIES;
++		switch (INTEL_INFO(i915)->gt) {
++		default:
++		case 1:
++			bv->max_threads = 70;
++			break;
++		case 2:
++			bv->max_threads = 140;
++			break;
++		case 3:
++			bv->max_threads = 280;
++			break;
++		}
+ 		bv->surface_height = 16 * 16;
+ 		bv->surface_width = 32 * 2 * 16;
+ 	} else {
+-		bv->max_primitives = 128;
+-		bv->max_urb_entries = MAX_URB_ENTRIES / 2;
++		switch (INTEL_INFO(i915)->gt) {
++		default:
++		case 1: /* including vlv */
++			bv->max_threads = 36;
++			break;
++		case 2:
++			bv->max_threads = 128;
++			break;
++		}
+ 		bv->surface_height = 16 * 8;
+ 		bv->surface_width = 32 * 16;
+ 	}
+-	bv->cmd_size = bv->max_primitives * 4096;
+-	bv->state_size = STATE_SIZE;
+-	bv->state_start = bv->cmd_size;
+-	bv->batch_size = bv->cmd_size + bv->state_size;
+-	bv->scratch_size = bv->surface_height * bv->surface_width;
+-	bv->max_size = bv->batch_size + bv->scratch_size;
++	bv->state_start = round_up(SZ_1K + num_primitives(bv) * 64, SZ_4K);
++	bv->surface_start = bv->state_start + SZ_4K;
++	bv->size = bv->surface_start + bv->surface_height * bv->surface_width;
+ }
+ 
+ static void batch_init(struct batch_chunk *bc,
+@@ -155,7 +174,8 @@ static u32
+ gen7_fill_binding_table(struct batch_chunk *state,
+ 			const struct batch_vals *bv)
+ {
+-	u32 surface_start = gen7_fill_surface_state(state, bv->batch_size, bv);
++	u32 surface_start =
++		gen7_fill_surface_state(state, bv->surface_start, bv);
+ 	u32 *cs = batch_alloc_items(state, 32, 8);
+ 	u32 offset = batch_offset(state, cs);
+ 
+@@ -214,9 +234,9 @@ static void
+ gen7_emit_state_base_address(struct batch_chunk *batch,
+ 			     u32 surface_state_base)
+ {
+-	u32 *cs = batch_alloc_items(batch, 0, 12);
++	u32 *cs = batch_alloc_items(batch, 0, 10);
+ 
+-	*cs++ = STATE_BASE_ADDRESS | (12 - 2);
++	*cs++ = STATE_BASE_ADDRESS | (10 - 2);
+ 	/* general */
+ 	*cs++ = batch_addr(batch) | BASE_ADDRESS_MODIFY;
+ 	/* surface */
+@@ -233,8 +253,6 @@ gen7_emit_state_base_address(struct batch_chunk *batch,
+ 	*cs++ = BASE_ADDRESS_MODIFY;
+ 	*cs++ = 0;
+ 	*cs++ = BASE_ADDRESS_MODIFY;
+-	*cs++ = 0;
+-	*cs++ = 0;
+ 	batch_advance(batch, cs);
+ }
+ 
+@@ -244,8 +262,7 @@ gen7_emit_vfe_state(struct batch_chunk *batch,
+ 		    u32 urb_size, u32 curbe_size,
+ 		    u32 mode)
+ {
+-	u32 urb_entries = bv->max_urb_entries;
+-	u32 threads = bv->max_primitives - 1;
++	u32 threads = bv->max_threads - 1;
+ 	u32 *cs = batch_alloc_items(batch, 32, 8);
+ 
+ 	*cs++ = MEDIA_VFE_STATE | (8 - 2);
+@@ -254,7 +271,7 @@ gen7_emit_vfe_state(struct batch_chunk *batch,
+ 	*cs++ = 0;
+ 
+ 	/* number of threads & urb entries for GPGPU vs Media Mode */
+-	*cs++ = threads << 16 | urb_entries << 8 | mode << 2;
++	*cs++ = threads << 16 | 1 << 8 | mode << 2;
+ 
+ 	*cs++ = 0;
+ 
+@@ -293,17 +310,12 @@ gen7_emit_media_object(struct batch_chunk *batch,
+ {
+ 	unsigned int x_offset = (media_object_index % 16) * 64;
+ 	unsigned int y_offset = (media_object_index / 16) * 16;
+-	unsigned int inline_data_size;
+-	unsigned int media_batch_size;
+-	unsigned int i;
++	unsigned int pkt = 6 + 3;
+ 	u32 *cs;
+ 
+-	inline_data_size = 112 * 8;
+-	media_batch_size = inline_data_size + 6;
+-
+-	cs = batch_alloc_items(batch, 8, media_batch_size);
++	cs = batch_alloc_items(batch, 8, pkt);
+ 
+-	*cs++ = MEDIA_OBJECT | (media_batch_size - 2);
++	*cs++ = MEDIA_OBJECT | (pkt - 2);
+ 
+ 	/* interface descriptor offset */
+ 	*cs++ = 0;
+@@ -317,25 +329,44 @@ gen7_emit_media_object(struct batch_chunk *batch,
+ 	*cs++ = 0;
+ 
+ 	/* inline */
+-	*cs++ = (y_offset << 16) | (x_offset);
++	*cs++ = y_offset << 16 | x_offset;
+ 	*cs++ = 0;
+ 	*cs++ = GT3_INLINE_DATA_DELAYS;
+-	for (i = 3; i < inline_data_size; i++)
+-		*cs++ = 0;
+ 
+ 	batch_advance(batch, cs);
+ }
+ 
+ static void gen7_emit_pipeline_flush(struct batch_chunk *batch)
+ {
+-	u32 *cs = batch_alloc_items(batch, 0, 5);
++	u32 *cs = batch_alloc_items(batch, 0, 4);
+ 
+-	*cs++ = GFX_OP_PIPE_CONTROL(5);
+-	*cs++ = PIPE_CONTROL_STATE_CACHE_INVALIDATE |
+-		PIPE_CONTROL_GLOBAL_GTT_IVB;
++	*cs++ = GFX_OP_PIPE_CONTROL(4);
++	*cs++ = PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH |
++		PIPE_CONTROL_DEPTH_CACHE_FLUSH |
++		PIPE_CONTROL_DC_FLUSH_ENABLE |
++		PIPE_CONTROL_CS_STALL;
+ 	*cs++ = 0;
+ 	*cs++ = 0;
++
++	batch_advance(batch, cs);
++}
++
++static void gen7_emit_pipeline_invalidate(struct batch_chunk *batch)
++{
++	u32 *cs = batch_alloc_items(batch, 0, 8);
++
++	/* ivb: Stall before STATE_CACHE_INVALIDATE */
++	*cs++ = GFX_OP_PIPE_CONTROL(4);
++	*cs++ = PIPE_CONTROL_STALL_AT_SCOREBOARD |
++		PIPE_CONTROL_CS_STALL;
++	*cs++ = 0;
++	*cs++ = 0;
++
++	*cs++ = GFX_OP_PIPE_CONTROL(4);
++	*cs++ = PIPE_CONTROL_STATE_CACHE_INVALIDATE;
+ 	*cs++ = 0;
++	*cs++ = 0;
++
+ 	batch_advance(batch, cs);
+ }
+ 
+@@ -344,34 +375,34 @@ static void emit_batch(struct i915_vma * const vma,
+ 		       const struct batch_vals *bv)
+ {
+ 	struct drm_i915_private *i915 = vma->vm->i915;
+-	unsigned int desc_count = 64;
+-	const u32 urb_size = 112;
++	const unsigned int desc_count = 1;
++	const unsigned int urb_size = 1;
+ 	struct batch_chunk cmds, state;
+-	u32 interface_descriptor;
++	u32 descriptors;
+ 	unsigned int i;
+ 
+-	batch_init(&cmds, vma, start, 0, bv->cmd_size);
+-	batch_init(&state, vma, start, bv->state_start, bv->state_size);
++	batch_init(&cmds, vma, start, 0, bv->state_start);
++	batch_init(&state, vma, start, bv->state_start, SZ_4K);
+ 
+-	interface_descriptor =
+-		gen7_fill_interface_descriptor(&state, bv,
+-					       IS_HASWELL(i915) ?
+-					       &cb_kernel_hsw :
+-					       &cb_kernel_ivb,
+-					       desc_count);
+-	gen7_emit_pipeline_flush(&cmds);
++	descriptors = gen7_fill_interface_descriptor(&state, bv,
++						     IS_HASWELL(i915) ?
++						     &cb_kernel_hsw :
++						     &cb_kernel_ivb,
++						     desc_count);
++
++	gen7_emit_pipeline_invalidate(&cmds);
+ 	batch_add(&cmds, PIPELINE_SELECT | PIPELINE_SELECT_MEDIA);
+ 	batch_add(&cmds, MI_NOOP);
+-	gen7_emit_state_base_address(&cmds, interface_descriptor);
++	gen7_emit_pipeline_invalidate(&cmds);
++
+ 	gen7_emit_pipeline_flush(&cmds);
++	gen7_emit_state_base_address(&cmds, descriptors);
++	gen7_emit_pipeline_invalidate(&cmds);
+ 
+ 	gen7_emit_vfe_state(&cmds, bv, urb_size - 1, 0, 0);
++	gen7_emit_interface_descriptor_load(&cmds, descriptors, desc_count);
+ 
+-	gen7_emit_interface_descriptor_load(&cmds,
+-					    interface_descriptor,
+-					    desc_count);
+-
+-	for (i = 0; i < bv->max_primitives; i++)
++	for (i = 0; i < num_primitives(bv); i++)
+ 		gen7_emit_media_object(&cmds, i);
+ 
+ 	batch_add(&cmds, MI_BATCH_BUFFER_END);
+@@ -385,15 +416,15 @@ int gen7_setup_clear_gpr_bb(struct intel_engine_cs * const engine,
+ 
+ 	batch_get_defaults(engine->i915, &bv);
+ 	if (!vma)
+-		return bv.max_size;
++		return bv.size;
+ 
+-	GEM_BUG_ON(vma->obj->base.size < bv.max_size);
++	GEM_BUG_ON(vma->obj->base.size < bv.size);
+ 
+ 	batch = i915_gem_object_pin_map(vma->obj, I915_MAP_WC);
+ 	if (IS_ERR(batch))
+ 		return PTR_ERR(batch);
+ 
+-	emit_batch(vma, memset(batch, 0, bv.max_size), &bv);
++	emit_batch(vma, memset(batch, 0, bv.size), &bv);
+ 
+ 	i915_gem_object_flush_map(vma->obj);
+ 	__i915_gem_object_release_map(vma->obj);
+diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+index 16b48e72c3691..6aaca73eaee60 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c
++++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+@@ -32,6 +32,7 @@
+ #include "gen6_ppgtt.h"
+ #include "gen7_renderclear.h"
+ #include "i915_drv.h"
++#include "i915_mitigations.h"
+ #include "intel_breadcrumbs.h"
+ #include "intel_context.h"
+ #include "intel_gt.h"
+@@ -885,7 +886,8 @@ static int switch_context(struct i915_request *rq)
+ 	GEM_BUG_ON(HAS_EXECLISTS(engine->i915));
+ 
+ 	if (engine->wa_ctx.vma && ce != engine->kernel_context) {
+-		if (engine->wa_ctx.vma->private != ce) {
++		if (engine->wa_ctx.vma->private != ce &&
++		    i915_mitigate_clear_residuals()) {
+ 			ret = clear_residuals(rq);
+ 			if (ret)
+ 				return ret;
+@@ -1289,7 +1291,7 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine)
+ 
+ 	GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma);
+ 
+-	if (IS_HASWELL(engine->i915) && engine->class == RENDER_CLASS) {
++	if (IS_GEN(engine->i915, 7) && engine->class == RENDER_CLASS) {
+ 		err = gen7_ctx_switch_bb_init(engine);
+ 		if (err)
+ 			goto err_ring_unpin;
+diff --git a/drivers/gpu/drm/i915/i915_mitigations.c b/drivers/gpu/drm/i915/i915_mitigations.c
+new file mode 100644
+index 0000000000000..84f12598d1458
+--- /dev/null
++++ b/drivers/gpu/drm/i915/i915_mitigations.c
+@@ -0,0 +1,146 @@
++// SPDX-License-Identifier: MIT
++/*
++ * Copyright © 2021 Intel Corporation
++ */
++
++#include <linux/kernel.h>
++#include <linux/moduleparam.h>
++#include <linux/slab.h>
++#include <linux/string.h>
++
++#include "i915_drv.h"
++#include "i915_mitigations.h"
++
++static unsigned long mitigations __read_mostly = ~0UL;
++
++enum {
++	CLEAR_RESIDUALS = 0,
++};
++
++static const char * const names[] = {
++	[CLEAR_RESIDUALS] = "residuals",
++};
++
++bool i915_mitigate_clear_residuals(void)
++{
++	return READ_ONCE(mitigations) & BIT(CLEAR_RESIDUALS);
++}
++
++static int mitigations_set(const char *val, const struct kernel_param *kp)
++{
++	unsigned long new = ~0UL;
++	char *str, *sep, *tok;
++	bool first = true;
++	int err = 0;
++
++	BUILD_BUG_ON(ARRAY_SIZE(names) >= BITS_PER_TYPE(mitigations));
++
++	str = kstrdup(val, GFP_KERNEL);
++	if (!str)
++		return -ENOMEM;
++
++	for (sep = str; (tok = strsep(&sep, ","));) {
++		bool enable = true;
++		int i;
++
++		/* Be tolerant of leading/trailing whitespace */
++		tok = strim(tok);
++
++		if (first) {
++			first = false;
++
++			if (!strcmp(tok, "auto"))
++				continue;
++
++			new = 0;
++			if (!strcmp(tok, "off"))
++				continue;
++		}
++
++		if (*tok == '!') {
++			enable = !enable;
++			tok++;
++		}
++
++		if (!strncmp(tok, "no", 2)) {
++			enable = !enable;
++			tok += 2;
++		}
++
++		if (*tok == '\0')
++			continue;
++
++		for (i = 0; i < ARRAY_SIZE(names); i++) {
++			if (!strcmp(tok, names[i])) {
++				if (enable)
++					new |= BIT(i);
++				else
++					new &= ~BIT(i);
++				break;
++			}
++		}
++		if (i == ARRAY_SIZE(names)) {
++			pr_err("Bad \"%s.mitigations=%s\", '%s' is unknown\n",
++			       DRIVER_NAME, val, tok);
++			err = -EINVAL;
++			break;
++		}
++	}
++	kfree(str);
++	if (err)
++		return err;
++
++	WRITE_ONCE(mitigations, new);
++	return 0;
++}
++
++static int mitigations_get(char *buffer, const struct kernel_param *kp)
++{
++	unsigned long local = READ_ONCE(mitigations);
++	int count, i;
++	bool enable;
++
++	if (!local)
++		return scnprintf(buffer, PAGE_SIZE, "%s\n", "off");
++
++	if (local & BIT(BITS_PER_LONG - 1)) {
++		count = scnprintf(buffer, PAGE_SIZE, "%s,", "auto");
++		enable = false;
++	} else {
++		enable = true;
++		count = 0;
++	}
++
++	for (i = 0; i < ARRAY_SIZE(names); i++) {
++		if ((local & BIT(i)) != enable)
++			continue;
++
++		count += scnprintf(buffer + count, PAGE_SIZE - count,
++				   "%s%s,", enable ? "" : "!", names[i]);
++	}
++
++	buffer[count - 1] = '\n';
++	return count;
++}
++
++static const struct kernel_param_ops ops = {
++	.set = mitigations_set,
++	.get = mitigations_get,
++};
++
++module_param_cb_unsafe(mitigations, &ops, NULL, 0600);
++MODULE_PARM_DESC(mitigations,
++"Selectively enable security mitigations for all Intel® GPUs in the system.\n"
++"\n"
++"  auto -- enables all mitigations required for the platform [default]\n"
++"  off  -- disables all mitigations\n"
++"\n"
++"Individual mitigations can be enabled by passing a comma-separated string,\n"
++"e.g. mitigations=residuals to enable only clearing residuals or\n"
++"mitigations=auto,noresiduals to disable only the clear residual mitigation.\n"
++"Either '!' or 'no' may be used to switch from enabling the mitigation to\n"
++"disabling it.\n"
++"\n"
++"Active mitigations for Ivybridge, Baytrail, Haswell:\n"
++"  residuals -- clear all thread-local registers between contexts"
++);
+diff --git a/drivers/gpu/drm/i915/i915_mitigations.h b/drivers/gpu/drm/i915/i915_mitigations.h
+new file mode 100644
+index 0000000000000..1359d8135287a
+--- /dev/null
++++ b/drivers/gpu/drm/i915/i915_mitigations.h
+@@ -0,0 +1,13 @@
++/* SPDX-License-Identifier: MIT */
++/*
++ * Copyright © 2021 Intel Corporation
++ */
++
++#ifndef __I915_MITIGATIONS_H__
++#define __I915_MITIGATIONS_H__
++
++#include <linux/types.h>
++
++bool i915_mitigate_clear_residuals(void);
++
++#endif /* __I915_MITIGATIONS_H__ */
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 49685571dc0ee..d556c353e5aea 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -444,14 +444,14 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv)
+ 
+ 	drm_mode_config_init(ddev);
+ 
+-	/* Bind all our sub-components: */
+-	ret = component_bind_all(dev, ddev);
++	ret = msm_init_vram(ddev);
+ 	if (ret)
+ 		goto err_destroy_mdss;
+ 
+-	ret = msm_init_vram(ddev);
++	/* Bind all our sub-components: */
++	ret = component_bind_all(dev, ddev);
+ 	if (ret)
+-		goto err_msm_uninit;
++		goto err_destroy_mdss;
+ 
+ 	dma_set_max_seg_size(dev, UINT_MAX);
+ 
+diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c
+index 1f63807c0399e..ec171f2b684a1 100644
+--- a/drivers/hwmon/pwm-fan.c
++++ b/drivers/hwmon/pwm-fan.c
+@@ -324,8 +324,18 @@ static int pwm_fan_probe(struct platform_device *pdev)
+ 
+ 	ctx->pwm_value = MAX_PWM;
+ 
+-	/* Set duty cycle to maximum allowed and enable PWM output */
+ 	pwm_init_state(ctx->pwm, &state);
++	/*
++	 * __set_pwm assumes that MAX_PWM * (period - 1) fits into an unsigned
++	 * long. Check this here to prevent the fan running at a too low
++	 * frequency.
++	 */
++	if (state.period > ULONG_MAX / MAX_PWM + 1) {
++		dev_err(dev, "Configured period too big\n");
++		return -EINVAL;
++	}
++
++	/* Set duty cycle to maximum allowed and enable PWM output */
+ 	state.duty_cycle = ctx->pwm->args.period - 1;
+ 	state.enabled = true;
+ 
+diff --git a/drivers/infiniband/core/restrack.c b/drivers/infiniband/core/restrack.c
+index 4aeeaaed0f17d..bbbbec5b15939 100644
+--- a/drivers/infiniband/core/restrack.c
++++ b/drivers/infiniband/core/restrack.c
+@@ -244,6 +244,7 @@ void rdma_restrack_add(struct rdma_restrack_entry *res)
+ 	} else {
+ 		ret = xa_alloc_cyclic(&rt->xa, &res->id, res, xa_limit_32b,
+ 				      &rt->next_id, GFP_KERNEL);
++		ret = (ret < 0) ? ret : 0;
+ 	}
+ 
+ 	if (!ret)
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 246e3cbe0b2c7..fb092ff79d840 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -3950,7 +3950,7 @@ static int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev)
+ 
+ 	err = set_has_smi_cap(dev);
+ 	if (err)
+-		return err;
++		goto err_mp;
+ 
+ 	if (!mlx5_core_mp_enabled(mdev)) {
+ 		for (i = 1; i <= dev->num_ports; i++) {
+@@ -4362,7 +4362,7 @@ static int mlx5_ib_stage_bfrag_init(struct mlx5_ib_dev *dev)
+ 
+ 	err = mlx5_alloc_bfreg(dev->mdev, &dev->fp_bfreg, false, true);
+ 	if (err)
+-		mlx5_free_bfreg(dev->mdev, &dev->fp_bfreg);
++		mlx5_free_bfreg(dev->mdev, &dev->bfreg);
+ 
+ 	return err;
+ }
+diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+index 7350fe16f164d..81a560056cd52 100644
+--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
++++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+@@ -434,9 +434,9 @@ static void ocrdma_dealloc_ucontext_pd(struct ocrdma_ucontext *uctx)
+ 		pr_err("%s(%d) Freeing in use pdid=0x%x.\n",
+ 		       __func__, dev->id, pd->id);
+ 	}
+-	kfree(uctx->cntxt_pd);
+ 	uctx->cntxt_pd = NULL;
+ 	_ocrdma_dealloc_pd(dev, pd);
++	kfree(pd);
+ }
+ 
+ static struct ocrdma_pd *ocrdma_get_ucontext_pd(struct ocrdma_ucontext *uctx)
+diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
+index 9e961f8ffa10d..6a2b7d1d184ca 100644
+--- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
++++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
+@@ -214,6 +214,7 @@ find_free_vf_and_create_qp_grp(struct usnic_ib_dev *us_ibdev,
+ 
+ 		}
+ 		usnic_uiom_free_dev_list(dev_list);
++		dev_list = NULL;
+ 	}
+ 
+ 	/* Try to find resources on an unused vf */
+@@ -239,6 +240,8 @@ find_free_vf_and_create_qp_grp(struct usnic_ib_dev *us_ibdev,
+ qp_grp_check:
+ 	if (IS_ERR_OR_NULL(qp_grp)) {
+ 		usnic_err("Failed to allocate qp_grp\n");
++		if (usnic_ib_share_vf)
++			usnic_uiom_free_dev_list(dev_list);
+ 		return ERR_PTR(qp_grp ? PTR_ERR(qp_grp) : -ENOMEM);
+ 	}
+ 	return qp_grp;
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index c9da9e93f545c..151243fa01ba5 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -67,8 +67,8 @@
+ #define MAX_AGAW_WIDTH 64
+ #define MAX_AGAW_PFN_WIDTH	(MAX_AGAW_WIDTH - VTD_PAGE_SHIFT)
+ 
+-#define __DOMAIN_MAX_PFN(gaw)  ((((uint64_t)1) << (gaw-VTD_PAGE_SHIFT)) - 1)
+-#define __DOMAIN_MAX_ADDR(gaw) ((((uint64_t)1) << gaw) - 1)
++#define __DOMAIN_MAX_PFN(gaw)  ((((uint64_t)1) << ((gaw) - VTD_PAGE_SHIFT)) - 1)
++#define __DOMAIN_MAX_ADDR(gaw) ((((uint64_t)1) << (gaw)) - 1)
+ 
+ /* We limit DOMAIN_MAX_PFN to fit in an unsigned long, and DOMAIN_MAX_ADDR
+    to match. That way, we can use 'unsigned long' for PFNs with impunity. */
+@@ -739,6 +739,18 @@ static void domain_update_iommu_cap(struct dmar_domain *domain)
+ 	 */
+ 	if (domain->nid == NUMA_NO_NODE)
+ 		domain->nid = domain_update_device_node(domain);
++
++	/*
++	 * First-level translation restricts the input-address to a
++	 * canonical address (i.e., address bits 63:N have the same
++	 * value as address bit [N-1], where N is 48-bits with 4-level
++	 * paging and 57-bits with 5-level paging). Hence, skip bit
++	 * [N-1].
++	 */
++	if (domain_use_first_level(domain))
++		domain->domain.geometry.aperture_end = __DOMAIN_MAX_ADDR(domain->gaw - 1);
++	else
++		domain->domain.geometry.aperture_end = __DOMAIN_MAX_ADDR(domain->gaw);
+ }
+ 
+ struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index 4a10c9ff368c5..43f392d27d318 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -118,8 +118,10 @@ void intel_svm_check(struct intel_iommu *iommu)
+ 	iommu->flags |= VTD_FLAG_SVM_CAPABLE;
+ }
+ 
+-static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_dev *sdev,
+-				unsigned long address, unsigned long pages, int ih)
++static void __flush_svm_range_dev(struct intel_svm *svm,
++				  struct intel_svm_dev *sdev,
++				  unsigned long address,
++				  unsigned long pages, int ih)
+ {
+ 	struct qi_desc desc;
+ 
+@@ -170,6 +172,22 @@ static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_d
+ 	}
+ }
+ 
++static void intel_flush_svm_range_dev(struct intel_svm *svm,
++				      struct intel_svm_dev *sdev,
++				      unsigned long address,
++				      unsigned long pages, int ih)
++{
++	unsigned long shift = ilog2(__roundup_pow_of_two(pages));
++	unsigned long align = (1ULL << (VTD_PAGE_SHIFT + shift));
++	unsigned long start = ALIGN_DOWN(address, align);
++	unsigned long end = ALIGN(address + (pages << VTD_PAGE_SHIFT), align);
++
++	while (start < end) {
++		__flush_svm_range_dev(svm, sdev, start, align >> VTD_PAGE_SHIFT, ih);
++		start += align;
++	}
++}
++
+ static void intel_flush_svm_range(struct intel_svm *svm, unsigned long address,
+ 				unsigned long pages, int ih)
+ {
+@@ -281,6 +299,7 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
+ 	struct dmar_domain *dmar_domain;
+ 	struct device_domain_info *info;
+ 	struct intel_svm *svm = NULL;
++	unsigned long iflags;
+ 	int ret = 0;
+ 
+ 	if (WARN_ON(!iommu) || !data)
+@@ -382,12 +401,12 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
+ 	 * each bind of a new device even with an existing PASID, we need to
+ 	 * call the nested mode setup function here.
+ 	 */
+-	spin_lock(&iommu->lock);
++	spin_lock_irqsave(&iommu->lock, iflags);
+ 	ret = intel_pasid_setup_nested(iommu, dev,
+ 				       (pgd_t *)(uintptr_t)data->gpgd,
+ 				       data->hpasid, &data->vendor.vtd, dmar_domain,
+ 				       data->addr_width);
+-	spin_unlock(&iommu->lock);
++	spin_unlock_irqrestore(&iommu->lock, iflags);
+ 	if (ret) {
+ 		dev_err_ratelimited(dev, "Failed to set up PASID %llu in nested mode, Err %d\n",
+ 				    data->hpasid, ret);
+@@ -487,6 +506,7 @@ intel_svm_bind_mm(struct device *dev, unsigned int flags,
+ 	struct device_domain_info *info;
+ 	struct intel_svm_dev *sdev;
+ 	struct intel_svm *svm = NULL;
++	unsigned long iflags;
+ 	int pasid_max;
+ 	int ret;
+ 
+@@ -606,14 +626,14 @@ intel_svm_bind_mm(struct device *dev, unsigned int flags,
+ 			}
+ 		}
+ 
+-		spin_lock(&iommu->lock);
++		spin_lock_irqsave(&iommu->lock, iflags);
+ 		ret = intel_pasid_setup_first_level(iommu, dev,
+ 				mm ? mm->pgd : init_mm.pgd,
+ 				svm->pasid, FLPT_DEFAULT_DID,
+ 				(mm ? 0 : PASID_FLAG_SUPERVISOR_MODE) |
+ 				(cpu_feature_enabled(X86_FEATURE_LA57) ?
+ 				 PASID_FLAG_FL5LP : 0));
+-		spin_unlock(&iommu->lock);
++		spin_unlock_irqrestore(&iommu->lock, iflags);
+ 		if (ret) {
+ 			if (mm)
+ 				mmu_notifier_unregister(&svm->notifier, mm);
+@@ -633,14 +653,14 @@ intel_svm_bind_mm(struct device *dev, unsigned int flags,
+ 		 * Binding a new device with existing PASID, need to setup
+ 		 * the PASID entry.
+ 		 */
+-		spin_lock(&iommu->lock);
++		spin_lock_irqsave(&iommu->lock, iflags);
+ 		ret = intel_pasid_setup_first_level(iommu, dev,
+ 						mm ? mm->pgd : init_mm.pgd,
+ 						svm->pasid, FLPT_DEFAULT_DID,
+ 						(mm ? 0 : PASID_FLAG_SUPERVISOR_MODE) |
+ 						(cpu_feature_enabled(X86_FEATURE_LA57) ?
+ 						PASID_FLAG_FL5LP : 0));
+-		spin_unlock(&iommu->lock);
++		spin_unlock_irqrestore(&iommu->lock, iflags);
+ 		if (ret) {
+ 			kfree(sdev);
+ 			goto out;
+diff --git a/drivers/isdn/mISDN/Kconfig b/drivers/isdn/mISDN/Kconfig
+index 26cf0ac9c4ad0..c9a53c2224728 100644
+--- a/drivers/isdn/mISDN/Kconfig
++++ b/drivers/isdn/mISDN/Kconfig
+@@ -13,6 +13,7 @@ if MISDN != n
+ config MISDN_DSP
+ 	tristate "Digital Audio Processing of transparent data"
+ 	depends on MISDN
++	select BITREVERSE
+ 	help
+ 	  Enable support for digital audio processing capability.
+ 
+diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
+index 30ba3573626c2..0e04d3718af3c 100644
+--- a/drivers/md/Kconfig
++++ b/drivers/md/Kconfig
+@@ -602,6 +602,7 @@ config DM_ZONED
+ 	tristate "Drive-managed zoned block device target support"
+ 	depends on BLK_DEV_DM
+ 	depends on BLK_DEV_ZONED
++	select CRC32
+ 	help
+ 	  This device-mapper target takes a host-managed or host-aware zoned
+ 	  block device and exposes most of its capacity as a regular block
+diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
+index 9c1a86bde658e..fce4cbf9529d6 100644
+--- a/drivers/md/dm-bufio.c
++++ b/drivers/md/dm-bufio.c
+@@ -1534,6 +1534,12 @@ sector_t dm_bufio_get_device_size(struct dm_bufio_client *c)
+ }
+ EXPORT_SYMBOL_GPL(dm_bufio_get_device_size);
+ 
++struct dm_io_client *dm_bufio_get_dm_io_client(struct dm_bufio_client *c)
++{
++	return c->dm_io;
++}
++EXPORT_SYMBOL_GPL(dm_bufio_get_dm_io_client);
++
+ sector_t dm_bufio_get_block_number(struct dm_buffer *b)
+ {
+ 	return b->block;
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 392337f16ecfd..89de9cde02028 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -1454,13 +1454,16 @@ static int crypt_convert_block_skcipher(struct crypt_config *cc,
+ static void kcryptd_async_done(struct crypto_async_request *async_req,
+ 			       int error);
+ 
+-static void crypt_alloc_req_skcipher(struct crypt_config *cc,
++static int crypt_alloc_req_skcipher(struct crypt_config *cc,
+ 				     struct convert_context *ctx)
+ {
+ 	unsigned key_index = ctx->cc_sector & (cc->tfms_count - 1);
+ 
+-	if (!ctx->r.req)
+-		ctx->r.req = mempool_alloc(&cc->req_pool, GFP_NOIO);
++	if (!ctx->r.req) {
++		ctx->r.req = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO);
++		if (!ctx->r.req)
++			return -ENOMEM;
++	}
+ 
+ 	skcipher_request_set_tfm(ctx->r.req, cc->cipher_tfm.tfms[key_index]);
+ 
+@@ -1471,13 +1474,18 @@ static void crypt_alloc_req_skcipher(struct crypt_config *cc,
+ 	skcipher_request_set_callback(ctx->r.req,
+ 	    CRYPTO_TFM_REQ_MAY_BACKLOG,
+ 	    kcryptd_async_done, dmreq_of_req(cc, ctx->r.req));
++
++	return 0;
+ }
+ 
+-static void crypt_alloc_req_aead(struct crypt_config *cc,
++static int crypt_alloc_req_aead(struct crypt_config *cc,
+ 				 struct convert_context *ctx)
+ {
+-	if (!ctx->r.req_aead)
+-		ctx->r.req_aead = mempool_alloc(&cc->req_pool, GFP_NOIO);
++	if (!ctx->r.req) {
++		ctx->r.req = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO);
++		if (!ctx->r.req)
++			return -ENOMEM;
++	}
+ 
+ 	aead_request_set_tfm(ctx->r.req_aead, cc->cipher_tfm.tfms_aead[0]);
+ 
+@@ -1488,15 +1496,17 @@ static void crypt_alloc_req_aead(struct crypt_config *cc,
+ 	aead_request_set_callback(ctx->r.req_aead,
+ 	    CRYPTO_TFM_REQ_MAY_BACKLOG,
+ 	    kcryptd_async_done, dmreq_of_req(cc, ctx->r.req_aead));
++
++	return 0;
+ }
+ 
+-static void crypt_alloc_req(struct crypt_config *cc,
++static int crypt_alloc_req(struct crypt_config *cc,
+ 			    struct convert_context *ctx)
+ {
+ 	if (crypt_integrity_aead(cc))
+-		crypt_alloc_req_aead(cc, ctx);
++		return crypt_alloc_req_aead(cc, ctx);
+ 	else
+-		crypt_alloc_req_skcipher(cc, ctx);
++		return crypt_alloc_req_skcipher(cc, ctx);
+ }
+ 
+ static void crypt_free_req_skcipher(struct crypt_config *cc,
+@@ -1529,17 +1539,28 @@ static void crypt_free_req(struct crypt_config *cc, void *req, struct bio *base_
+  * Encrypt / decrypt data from one bio to another one (can be the same one)
+  */
+ static blk_status_t crypt_convert(struct crypt_config *cc,
+-			 struct convert_context *ctx, bool atomic)
++			 struct convert_context *ctx, bool atomic, bool reset_pending)
+ {
+ 	unsigned int tag_offset = 0;
+ 	unsigned int sector_step = cc->sector_size >> SECTOR_SHIFT;
+ 	int r;
+ 
+-	atomic_set(&ctx->cc_pending, 1);
++	/*
++	 * if reset_pending is set we are dealing with the bio for the first time,
++	 * else we're continuing to work on the previous bio, so don't mess with
++	 * the cc_pending counter
++	 */
++	if (reset_pending)
++		atomic_set(&ctx->cc_pending, 1);
+ 
+ 	while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) {
+ 
+-		crypt_alloc_req(cc, ctx);
++		r = crypt_alloc_req(cc, ctx);
++		if (r) {
++			complete(&ctx->restart);
++			return BLK_STS_DEV_RESOURCE;
++		}
++
+ 		atomic_inc(&ctx->cc_pending);
+ 
+ 		if (crypt_integrity_aead(cc))
+@@ -1553,7 +1574,25 @@ static blk_status_t crypt_convert(struct crypt_config *cc,
+ 		 * but the driver request queue is full, let's wait.
+ 		 */
+ 		case -EBUSY:
+-			wait_for_completion(&ctx->restart);
++			if (in_interrupt()) {
++				if (try_wait_for_completion(&ctx->restart)) {
++					/*
++					 * we don't have to block to wait for completion,
++					 * so proceed
++					 */
++				} else {
++					/*
++					 * we can't wait for completion without blocking
++					 * exit and continue processing in a workqueue
++					 */
++					ctx->r.req = NULL;
++					ctx->cc_sector += sector_step;
++					tag_offset++;
++					return BLK_STS_DEV_RESOURCE;
++				}
++			} else {
++				wait_for_completion(&ctx->restart);
++			}
+ 			reinit_completion(&ctx->restart);
+ 			fallthrough;
+ 		/*
+@@ -1691,6 +1730,12 @@ static void crypt_inc_pending(struct dm_crypt_io *io)
+ 	atomic_inc(&io->io_pending);
+ }
+ 
++static void kcryptd_io_bio_endio(struct work_struct *work)
++{
++	struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work);
++	bio_endio(io->base_bio);
++}
++
+ /*
+  * One of the bios was finished. Check for completion of
+  * the whole request and correctly clean up the buffer.
+@@ -1713,7 +1758,23 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
+ 		kfree(io->integrity_metadata);
+ 
+ 	base_bio->bi_status = error;
+-	bio_endio(base_bio);
++
++	/*
++	 * If we are running this function from our tasklet,
++	 * we can't call bio_endio() here, because it will call
++	 * clone_endio() from dm.c, which in turn will
++	 * free the current struct dm_crypt_io structure with
++	 * our tasklet. In this case we need to delay bio_endio()
++	 * execution to after the tasklet is done and dequeued.
++	 */
++	if (tasklet_trylock(&io->tasklet)) {
++		tasklet_unlock(&io->tasklet);
++		bio_endio(base_bio);
++		return;
++	}
++
++	INIT_WORK(&io->work, kcryptd_io_bio_endio);
++	queue_work(cc->io_queue, &io->work);
+ }
+ 
+ /*
+@@ -1945,6 +2006,37 @@ static bool kcryptd_crypt_write_inline(struct crypt_config *cc,
+ 	}
+ }
+ 
++static void kcryptd_crypt_write_continue(struct work_struct *work)
++{
++	struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work);
++	struct crypt_config *cc = io->cc;
++	struct convert_context *ctx = &io->ctx;
++	int crypt_finished;
++	sector_t sector = io->sector;
++	blk_status_t r;
++
++	wait_for_completion(&ctx->restart);
++	reinit_completion(&ctx->restart);
++
++	r = crypt_convert(cc, &io->ctx, true, false);
++	if (r)
++		io->error = r;
++	crypt_finished = atomic_dec_and_test(&ctx->cc_pending);
++	if (!crypt_finished && kcryptd_crypt_write_inline(cc, ctx)) {
++		/* Wait for completion signaled by kcryptd_async_done() */
++		wait_for_completion(&ctx->restart);
++		crypt_finished = 1;
++	}
++
++	/* Encryption was already finished, submit io now */
++	if (crypt_finished) {
++		kcryptd_crypt_write_io_submit(io, 0);
++		io->sector = sector;
++	}
++
++	crypt_dec_pending(io);
++}
++
+ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
+ {
+ 	struct crypt_config *cc = io->cc;
+@@ -1973,7 +2065,17 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
+ 
+ 	crypt_inc_pending(io);
+ 	r = crypt_convert(cc, ctx,
+-			  test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags));
++			  test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags), true);
++	/*
++	 * Crypto API backlogged the request, because its queue was full
++	 * and we're in softirq context, so continue from a workqueue
++	 * (TODO: is it actually possible to be in softirq in the write path?)
++	 */
++	if (r == BLK_STS_DEV_RESOURCE) {
++		INIT_WORK(&io->work, kcryptd_crypt_write_continue);
++		queue_work(cc->crypt_queue, &io->work);
++		return;
++	}
+ 	if (r)
+ 		io->error = r;
+ 	crypt_finished = atomic_dec_and_test(&ctx->cc_pending);
+@@ -1998,6 +2100,25 @@ static void kcryptd_crypt_read_done(struct dm_crypt_io *io)
+ 	crypt_dec_pending(io);
+ }
+ 
++static void kcryptd_crypt_read_continue(struct work_struct *work)
++{
++	struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work);
++	struct crypt_config *cc = io->cc;
++	blk_status_t r;
++
++	wait_for_completion(&io->ctx.restart);
++	reinit_completion(&io->ctx.restart);
++
++	r = crypt_convert(cc, &io->ctx, true, false);
++	if (r)
++		io->error = r;
++
++	if (atomic_dec_and_test(&io->ctx.cc_pending))
++		kcryptd_crypt_read_done(io);
++
++	crypt_dec_pending(io);
++}
++
+ static void kcryptd_crypt_read_convert(struct dm_crypt_io *io)
+ {
+ 	struct crypt_config *cc = io->cc;
+@@ -2009,7 +2130,16 @@ static void kcryptd_crypt_read_convert(struct dm_crypt_io *io)
+ 			   io->sector);
+ 
+ 	r = crypt_convert(cc, &io->ctx,
+-			  test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags));
++			  test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags), true);
++	/*
++	 * Crypto API backlogged the request, because its queue was full
++	 * and we're in softirq context, so continue from a workqueue
++	 */
++	if (r == BLK_STS_DEV_RESOURCE) {
++		INIT_WORK(&io->work, kcryptd_crypt_read_continue);
++		queue_work(cc->crypt_queue, &io->work);
++		return;
++	}
+ 	if (r)
+ 		io->error = r;
+ 
+@@ -2091,8 +2221,12 @@ static void kcryptd_queue_crypt(struct dm_crypt_io *io)
+ 
+ 	if ((bio_data_dir(io->base_bio) == READ && test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags)) ||
+ 	    (bio_data_dir(io->base_bio) == WRITE && test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags))) {
+-		if (in_irq()) {
+-			/* Crypto API's "skcipher_walk_first() refuses to work in hard IRQ context */
++		/*
++		 * in_irq(): Crypto API's skcipher_walk_first() refuses to work in hard IRQ context.
++		 * irqs_disabled(): the kernel may run some IO completion from the idle thread, but
++		 * it is being executed with irqs disabled.
++		 */
++		if (in_irq() || irqs_disabled()) {
+ 			tasklet_init(&io->tasklet, kcryptd_crypt_tasklet, (unsigned long)&io->work);
+ 			tasklet_schedule(&io->tasklet);
+ 			return;
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 5a7a1b90e671c..81df019ab284a 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -1379,12 +1379,52 @@ thorough_test:
+ #undef MAY_BE_HASH
+ }
+ 
+-static void dm_integrity_flush_buffers(struct dm_integrity_c *ic)
++struct flush_request {
++	struct dm_io_request io_req;
++	struct dm_io_region io_reg;
++	struct dm_integrity_c *ic;
++	struct completion comp;
++};
++
++static void flush_notify(unsigned long error, void *fr_)
++{
++	struct flush_request *fr = fr_;
++	if (unlikely(error != 0))
++		dm_integrity_io_error(fr->ic, "flusing disk cache", -EIO);
++	complete(&fr->comp);
++}
++
++static void dm_integrity_flush_buffers(struct dm_integrity_c *ic, bool flush_data)
+ {
+ 	int r;
++
++	struct flush_request fr;
++
++	if (!ic->meta_dev)
++		flush_data = false;
++	if (flush_data) {
++		fr.io_req.bi_op = REQ_OP_WRITE,
++		fr.io_req.bi_op_flags = REQ_PREFLUSH | REQ_SYNC,
++		fr.io_req.mem.type = DM_IO_KMEM,
++		fr.io_req.mem.ptr.addr = NULL,
++		fr.io_req.notify.fn = flush_notify,
++		fr.io_req.notify.context = &fr;
++		fr.io_req.client = dm_bufio_get_dm_io_client(ic->bufio),
++		fr.io_reg.bdev = ic->dev->bdev,
++		fr.io_reg.sector = 0,
++		fr.io_reg.count = 0,
++		fr.ic = ic;
++		init_completion(&fr.comp);
++		r = dm_io(&fr.io_req, 1, &fr.io_reg, NULL);
++		BUG_ON(r);
++	}
++
+ 	r = dm_bufio_write_dirty_buffers(ic->bufio);
+ 	if (unlikely(r))
+ 		dm_integrity_io_error(ic, "writing tags", r);
++
++	if (flush_data)
++		wait_for_completion(&fr.comp);
+ }
+ 
+ static void sleep_on_endio_wait(struct dm_integrity_c *ic)
+@@ -2110,7 +2150,7 @@ offload_to_thread:
+ 
+ 	if (unlikely(dio->op == REQ_OP_DISCARD) && likely(ic->mode != 'D')) {
+ 		integrity_metadata(&dio->work);
+-		dm_integrity_flush_buffers(ic);
++		dm_integrity_flush_buffers(ic, false);
+ 
+ 		dio->in_flight = (atomic_t)ATOMIC_INIT(1);
+ 		dio->completion = NULL;
+@@ -2195,7 +2235,7 @@ static void integrity_commit(struct work_struct *w)
+ 	flushes = bio_list_get(&ic->flush_bio_list);
+ 	if (unlikely(ic->mode != 'J')) {
+ 		spin_unlock_irq(&ic->endio_wait.lock);
+-		dm_integrity_flush_buffers(ic);
++		dm_integrity_flush_buffers(ic, true);
+ 		goto release_flush_bios;
+ 	}
+ 
+@@ -2409,7 +2449,7 @@ skip_io:
+ 	complete_journal_op(&comp);
+ 	wait_for_completion_io(&comp.comp);
+ 
+-	dm_integrity_flush_buffers(ic);
++	dm_integrity_flush_buffers(ic, true);
+ }
+ 
+ static void integrity_writer(struct work_struct *w)
+@@ -2451,7 +2491,7 @@ static void recalc_write_super(struct dm_integrity_c *ic)
+ {
+ 	int r;
+ 
+-	dm_integrity_flush_buffers(ic);
++	dm_integrity_flush_buffers(ic, false);
+ 	if (dm_integrity_failed(ic))
+ 		return;
+ 
+@@ -2654,7 +2694,7 @@ static void bitmap_flush_work(struct work_struct *work)
+ 	unsigned long limit;
+ 	struct bio *bio;
+ 
+-	dm_integrity_flush_buffers(ic);
++	dm_integrity_flush_buffers(ic, false);
+ 
+ 	range.logical_sector = 0;
+ 	range.n_sectors = ic->provided_data_sectors;
+@@ -2663,9 +2703,7 @@ static void bitmap_flush_work(struct work_struct *work)
+ 	add_new_range_and_wait(ic, &range);
+ 	spin_unlock_irq(&ic->endio_wait.lock);
+ 
+-	dm_integrity_flush_buffers(ic);
+-	if (ic->meta_dev)
+-		blkdev_issue_flush(ic->dev->bdev, GFP_NOIO);
++	dm_integrity_flush_buffers(ic, true);
+ 
+ 	limit = ic->provided_data_sectors;
+ 	if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING)) {
+@@ -2934,11 +2972,11 @@ static void dm_integrity_postsuspend(struct dm_target *ti)
+ 		if (ic->meta_dev)
+ 			queue_work(ic->writer_wq, &ic->writer_work);
+ 		drain_workqueue(ic->writer_wq);
+-		dm_integrity_flush_buffers(ic);
++		dm_integrity_flush_buffers(ic, true);
+ 	}
+ 
+ 	if (ic->mode == 'B') {
+-		dm_integrity_flush_buffers(ic);
++		dm_integrity_flush_buffers(ic, true);
+ #if 1
+ 		/* set to 0 to test bitmap replay code */
+ 		init_journal(ic, 0, ic->journal_sections, 0);
+@@ -3754,7 +3792,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 	unsigned extra_args;
+ 	struct dm_arg_set as;
+ 	static const struct dm_arg _args[] = {
+-		{0, 9, "Invalid number of feature args"},
++		{0, 15, "Invalid number of feature args"},
+ 	};
+ 	unsigned journal_sectors, interleave_sectors, buffer_sectors, journal_watermark, sync_msec;
+ 	bool should_write_sb;
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 56b723d012ac1..6dca932d6f1d1 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -3730,10 +3730,10 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
+ 	blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs));
+ 
+ 	/*
+-	 * RAID1 and RAID10 personalities require bio splitting,
+-	 * RAID0/4/5/6 don't and process large discard bios properly.
++	 * RAID0 and RAID10 personalities require bio splitting,
++	 * RAID1/4/5/6 don't and process large discard bios properly.
+ 	 */
+-	if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
++	if (rs_is_raid0(rs) || rs_is_raid10(rs)) {
+ 		limits->discard_granularity = chunk_size_bytes;
+ 		limits->max_discard_sectors = rs->md.chunk_sectors;
+ 	}
+diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
+index 4668b2cd98f4e..11890db71f3fe 100644
+--- a/drivers/md/dm-snap.c
++++ b/drivers/md/dm-snap.c
+@@ -141,6 +141,11 @@ struct dm_snapshot {
+ 	 * for them to be committed.
+ 	 */
+ 	struct bio_list bios_queued_during_merge;
++
++	/*
++	 * Flush data after merge.
++	 */
++	struct bio flush_bio;
+ };
+ 
+ /*
+@@ -1121,6 +1126,17 @@ shut:
+ 
+ static void error_bios(struct bio *bio);
+ 
++static int flush_data(struct dm_snapshot *s)
++{
++	struct bio *flush_bio = &s->flush_bio;
++
++	bio_reset(flush_bio);
++	bio_set_dev(flush_bio, s->origin->bdev);
++	flush_bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
++
++	return submit_bio_wait(flush_bio);
++}
++
+ static void merge_callback(int read_err, unsigned long write_err, void *context)
+ {
+ 	struct dm_snapshot *s = context;
+@@ -1134,6 +1150,11 @@ static void merge_callback(int read_err, unsigned long write_err, void *context)
+ 		goto shut;
+ 	}
+ 
++	if (flush_data(s) < 0) {
++		DMERR("Flush after merge failed: shutting down merge");
++		goto shut;
++	}
++
+ 	if (s->store->type->commit_merge(s->store,
+ 					 s->num_merging_chunks) < 0) {
+ 		DMERR("Write error in exception store: shutting down merge");
+@@ -1318,6 +1339,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 	s->first_merging_chunk = 0;
+ 	s->num_merging_chunks = 0;
+ 	bio_list_init(&s->bios_queued_during_merge);
++	bio_init(&s->flush_bio, NULL, 0);
+ 
+ 	/* Allocate hash table for COW data */
+ 	if (init_hash_tables(s)) {
+@@ -1504,6 +1526,8 @@ static void snapshot_dtr(struct dm_target *ti)
+ 
+ 	dm_exception_store_destroy(s->store);
+ 
++	bio_uninit(&s->flush_bio);
++
+ 	dm_put_device(ti, s->cow);
+ 
+ 	dm_put_device(ti, s->origin);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 4e0cbfe3f14d4..1e99a4c1eca43 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -562,7 +562,7 @@ static int dm_blk_ioctl(struct block_device *bdev, fmode_t mode,
+ 		 * subset of the parent bdev; require extra privileges.
+ 		 */
+ 		if (!capable(CAP_SYS_RAWIO)) {
+-			DMWARN_LIMIT(
++			DMDEBUG_LIMIT(
+ 	"%s: sending ioctl %x to DM device without required privilege.",
+ 				current->comm, cmd);
+ 			r = -ENOIOCTLCMD;
+diff --git a/drivers/misc/habanalabs/common/device.c b/drivers/misc/habanalabs/common/device.c
+index 783bbdcb1e618..09c328ee65da8 100644
+--- a/drivers/misc/habanalabs/common/device.c
++++ b/drivers/misc/habanalabs/common/device.c
+@@ -1027,6 +1027,7 @@ again:
+ 						GFP_KERNEL);
+ 		if (!hdev->kernel_ctx) {
+ 			rc = -ENOMEM;
++			hl_mmu_fini(hdev);
+ 			goto out_err;
+ 		}
+ 
+@@ -1038,6 +1039,7 @@ again:
+ 				"failed to init kernel ctx in hard reset\n");
+ 			kfree(hdev->kernel_ctx);
+ 			hdev->kernel_ctx = NULL;
++			hl_mmu_fini(hdev);
+ 			goto out_err;
+ 		}
+ 	}
+diff --git a/drivers/misc/habanalabs/common/habanalabs_drv.c b/drivers/misc/habanalabs/common/habanalabs_drv.c
+index f9067d3ef4376..3bcef64a677ae 100644
+--- a/drivers/misc/habanalabs/common/habanalabs_drv.c
++++ b/drivers/misc/habanalabs/common/habanalabs_drv.c
+@@ -528,6 +528,7 @@ static struct pci_driver hl_pci_driver = {
+ 	.id_table = ids,
+ 	.probe = hl_pci_probe,
+ 	.remove = hl_pci_remove,
++	.shutdown = hl_pci_remove,
+ 	.driver.pm = &hl_pm_ops,
+ 	.err_handler = &hl_pci_err_handler,
+ };
+diff --git a/drivers/misc/habanalabs/common/pci.c b/drivers/misc/habanalabs/common/pci.c
+index 4327e5704ebb6..607f9a11fba1a 100644
+--- a/drivers/misc/habanalabs/common/pci.c
++++ b/drivers/misc/habanalabs/common/pci.c
+@@ -130,10 +130,8 @@ static int hl_pci_elbi_write(struct hl_device *hdev, u64 addr, u32 data)
+ 	if ((val & PCI_CONFIG_ELBI_STS_MASK) == PCI_CONFIG_ELBI_STS_DONE)
+ 		return 0;
+ 
+-	if (val & PCI_CONFIG_ELBI_STS_ERR) {
+-		dev_err(hdev->dev, "Error writing to ELBI\n");
++	if (val & PCI_CONFIG_ELBI_STS_ERR)
+ 		return -EIO;
+-	}
+ 
+ 	if (!(val & PCI_CONFIG_ELBI_STS_MASK)) {
+ 		dev_err(hdev->dev, "ELBI write didn't finish in time\n");
+@@ -160,8 +158,12 @@ int hl_pci_iatu_write(struct hl_device *hdev, u32 addr, u32 data)
+ 
+ 	dbi_offset = addr & 0xFFF;
+ 
+-	rc = hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0x00300000);
+-	rc |= hl_pci_elbi_write(hdev, prop->pcie_dbi_base_address + dbi_offset,
++	/* Ignore result of writing to pcie_aux_dbi_reg_addr as it could fail
++	 * in case the firmware security is enabled
++	 */
++	hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0x00300000);
++
++	rc = hl_pci_elbi_write(hdev, prop->pcie_dbi_base_address + dbi_offset,
+ 				data);
+ 
+ 	if (rc)
+@@ -244,9 +246,11 @@ int hl_pci_set_inbound_region(struct hl_device *hdev, u8 region,
+ 
+ 	rc |= hl_pci_iatu_write(hdev, offset + 0x4, ctrl_reg_val);
+ 
+-	/* Return the DBI window to the default location */
+-	rc |= hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0);
+-	rc |= hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr + 4, 0);
++	/* Return the DBI window to the default location
++	 * Ignore result of writing to pcie_aux_dbi_reg_addr as it could fail
++	 * in case the firmware security is enabled
++	 */
++	hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0);
+ 
+ 	if (rc)
+ 		dev_err(hdev->dev, "failed to map bar %u to 0x%08llx\n",
+@@ -294,9 +298,11 @@ int hl_pci_set_outbound_region(struct hl_device *hdev,
+ 	/* Enable */
+ 	rc |= hl_pci_iatu_write(hdev, 0x004, 0x80000000);
+ 
+-	/* Return the DBI window to the default location */
+-	rc |= hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0);
+-	rc |= hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr + 4, 0);
++	/* Return the DBI window to the default location
++	 * Ignore result of writing to pcie_aux_dbi_reg_addr as it could fail
++	 * in case the firmware security is enabled
++	 */
++	hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0);
+ 
+ 	return rc;
+ }
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
+index 7ea6b4368a913..ed1bd41262ecd 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi.c
+@@ -754,11 +754,17 @@ static int gaudi_init_tpc_mem(struct hl_device *hdev)
+ 	size_t fw_size;
+ 	void *cpu_addr;
+ 	dma_addr_t dma_handle;
+-	int rc;
++	int rc, count = 5;
+ 
++again:
+ 	rc = request_firmware(&fw, GAUDI_TPC_FW_FILE, hdev->dev);
++	if (rc == -EINTR && count-- > 0) {
++		msleep(50);
++		goto again;
++	}
++
+ 	if (rc) {
+-		dev_err(hdev->dev, "Firmware file %s is not found!\n",
++		dev_err(hdev->dev, "Failed to load firmware file %s\n",
+ 				GAUDI_TPC_FW_FILE);
+ 		goto out;
+ 	}
+@@ -2893,7 +2899,7 @@ static int gaudi_init_cpu_queues(struct hl_device *hdev, u32 cpu_timeout)
+ static void gaudi_pre_hw_init(struct hl_device *hdev)
+ {
+ 	/* Perform read from the device to make sure device is up */
+-	RREG32(mmPCIE_DBI_DEVICE_ID_VENDOR_ID_REG);
++	RREG32(mmHW_STATE);
+ 
+ 	/* Set the access through PCI bars (Linux driver only) as
+ 	 * secured
+@@ -2996,7 +3002,7 @@ static int gaudi_hw_init(struct hl_device *hdev)
+ 	}
+ 
+ 	/* Perform read from the device to flush all configuration */
+-	RREG32(mmPCIE_DBI_DEVICE_ID_VENDOR_ID_REG);
++	RREG32(mmHW_STATE);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi_coresight.c b/drivers/misc/habanalabs/gaudi/gaudi_coresight.c
+index 3d2b0f0f46507..283d37b76447e 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi_coresight.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi_coresight.c
+@@ -9,6 +9,7 @@
+ #include "../include/gaudi/gaudi_coresight.h"
+ #include "../include/gaudi/asic_reg/gaudi_regs.h"
+ #include "../include/gaudi/gaudi_masks.h"
++#include "../include/gaudi/gaudi_reg_map.h"
+ 
+ #include <uapi/misc/habanalabs.h>
+ #include <linux/coresight.h>
+@@ -876,7 +877,7 @@ int gaudi_debug_coresight(struct hl_device *hdev, void *data)
+ 	}
+ 
+ 	/* Perform read from the device to flush all configuration */
+-	RREG32(mmPCIE_DBI_DEVICE_ID_VENDOR_ID_REG);
++	RREG32(mmHW_STATE);
+ 
+ 	return rc;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index 8c8368c2f335c..64dbbb04b0434 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -222,8 +222,12 @@ int bnxt_get_ulp_msix_base(struct bnxt *bp)
+ 
+ int bnxt_get_ulp_stat_ctxs(struct bnxt *bp)
+ {
+-	if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP))
+-		return BNXT_MIN_ROCE_STAT_CTXS;
++	if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
++		struct bnxt_en_dev *edev = bp->edev;
++
++		if (edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested)
++			return BNXT_MIN_ROCE_STAT_CTXS;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c b/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c
+index c8e5d889bd81f..21de56345503f 100644
+--- a/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c
++++ b/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c
+@@ -223,3 +223,4 @@ static struct platform_driver fs_enet_bb_mdio_driver = {
+ };
+ 
+ module_platform_driver(fs_enet_bb_mdio_driver);
++MODULE_LICENSE("GPL");
+diff --git a/drivers/net/ethernet/freescale/fs_enet/mii-fec.c b/drivers/net/ethernet/freescale/fs_enet/mii-fec.c
+index 8b51ee142fa3c..152f4d83765aa 100644
+--- a/drivers/net/ethernet/freescale/fs_enet/mii-fec.c
++++ b/drivers/net/ethernet/freescale/fs_enet/mii-fec.c
+@@ -224,3 +224,4 @@ static struct platform_driver fs_enet_fec_mdio_driver = {
+ };
+ 
+ module_platform_driver(fs_enet_fec_mdio_driver);
++MODULE_LICENSE("GPL");
+diff --git a/drivers/net/ethernet/freescale/ucc_geth.h b/drivers/net/ethernet/freescale/ucc_geth.h
+index 3fe9039721952..c80bed2c995c1 100644
+--- a/drivers/net/ethernet/freescale/ucc_geth.h
++++ b/drivers/net/ethernet/freescale/ucc_geth.h
+@@ -575,7 +575,14 @@ struct ucc_geth_tx_global_pram {
+ 	u32 vtagtable[0x8];	/* 8 4-byte VLAN tags */
+ 	u32 tqptr;		/* a base pointer to the Tx Queues Memory
+ 				   Region */
+-	u8 res2[0x80 - 0x74];
++	u8 res2[0x78 - 0x74];
++	u64 snums_en;
++	u32 l2l3baseptr;	/* top byte consists of a few other bit fields */
++
++	u16 mtu[8];
++	u8 res3[0xa8 - 0x94];
++	u32 wrrtablebase;	/* top byte is reserved */
++	u8 res4[0xc0 - 0xac];
+ } __packed;
+ 
+ /* structure representing Extended Filtering Global Parameters in PRAM */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index e521254d886ef..072363e73f1ce 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -118,16 +118,17 @@ struct mlx5_ct_tuple {
+ 	u16 zone;
+ };
+ 
+-struct mlx5_ct_shared_counter {
++struct mlx5_ct_counter {
+ 	struct mlx5_fc *counter;
+ 	refcount_t refcount;
++	bool is_shared;
+ };
+ 
+ struct mlx5_ct_entry {
+ 	struct rhash_head node;
+ 	struct rhash_head tuple_node;
+ 	struct rhash_head tuple_nat_node;
+-	struct mlx5_ct_shared_counter *shared_counter;
++	struct mlx5_ct_counter *counter;
+ 	unsigned long cookie;
+ 	unsigned long restore_cookie;
+ 	struct mlx5_ct_tuple tuple;
+@@ -394,13 +395,14 @@ mlx5_tc_ct_set_tuple_match(struct mlx5e_priv *priv, struct mlx5_flow_spec *spec,
+ }
+ 
+ static void
+-mlx5_tc_ct_shared_counter_put(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_entry *entry)
++mlx5_tc_ct_counter_put(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_entry *entry)
+ {
+-	if (!refcount_dec_and_test(&entry->shared_counter->refcount))
++	if (entry->counter->is_shared &&
++	    !refcount_dec_and_test(&entry->counter->refcount))
+ 		return;
+ 
+-	mlx5_fc_destroy(ct_priv->dev, entry->shared_counter->counter);
+-	kfree(entry->shared_counter);
++	mlx5_fc_destroy(ct_priv->dev, entry->counter->counter);
++	kfree(entry->counter);
+ }
+ 
+ static void
+@@ -699,7 +701,7 @@ mlx5_tc_ct_entry_add_rule(struct mlx5_tc_ct_priv *ct_priv,
+ 	attr->dest_ft = ct_priv->post_ct;
+ 	attr->ft = nat ? ct_priv->ct_nat : ct_priv->ct;
+ 	attr->outer_match_level = MLX5_MATCH_L4;
+-	attr->counter = entry->shared_counter->counter;
++	attr->counter = entry->counter->counter;
+ 	attr->flags |= MLX5_ESW_ATTR_FLAG_NO_IN_PORT;
+ 
+ 	mlx5_tc_ct_set_tuple_match(netdev_priv(ct_priv->netdev), spec, flow_rule);
+@@ -732,13 +734,34 @@ err_attr:
+ 	return err;
+ }
+ 
+-static struct mlx5_ct_shared_counter *
++static struct mlx5_ct_counter *
++mlx5_tc_ct_counter_create(struct mlx5_tc_ct_priv *ct_priv)
++{
++	struct mlx5_ct_counter *counter;
++	int ret;
++
++	counter = kzalloc(sizeof(*counter), GFP_KERNEL);
++	if (!counter)
++		return ERR_PTR(-ENOMEM);
++
++	counter->is_shared = false;
++	counter->counter = mlx5_fc_create(ct_priv->dev, true);
++	if (IS_ERR(counter->counter)) {
++		ct_dbg("Failed to create counter for ct entry");
++		ret = PTR_ERR(counter->counter);
++		kfree(counter);
++		return ERR_PTR(ret);
++	}
++
++	return counter;
++}
++
++static struct mlx5_ct_counter *
+ mlx5_tc_ct_shared_counter_get(struct mlx5_tc_ct_priv *ct_priv,
+ 			      struct mlx5_ct_entry *entry)
+ {
+ 	struct mlx5_ct_tuple rev_tuple = entry->tuple;
+-	struct mlx5_ct_shared_counter *shared_counter;
+-	struct mlx5_core_dev *dev = ct_priv->dev;
++	struct mlx5_ct_counter *shared_counter;
+ 	struct mlx5_ct_entry *rev_entry;
+ 	__be16 tmp_port;
+ 	int ret;
+@@ -767,25 +790,20 @@ mlx5_tc_ct_shared_counter_get(struct mlx5_tc_ct_priv *ct_priv,
+ 	rev_entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_ht, &rev_tuple,
+ 					   tuples_ht_params);
+ 	if (rev_entry) {
+-		if (refcount_inc_not_zero(&rev_entry->shared_counter->refcount)) {
++		if (refcount_inc_not_zero(&rev_entry->counter->refcount)) {
+ 			mutex_unlock(&ct_priv->shared_counter_lock);
+-			return rev_entry->shared_counter;
++			return rev_entry->counter;
+ 		}
+ 	}
+ 	mutex_unlock(&ct_priv->shared_counter_lock);
+ 
+-	shared_counter = kzalloc(sizeof(*shared_counter), GFP_KERNEL);
+-	if (!shared_counter)
+-		return ERR_PTR(-ENOMEM);
+-
+-	shared_counter->counter = mlx5_fc_create(dev, true);
+-	if (IS_ERR(shared_counter->counter)) {
+-		ct_dbg("Failed to create counter for ct entry");
+-		ret = PTR_ERR(shared_counter->counter);
+-		kfree(shared_counter);
++	shared_counter = mlx5_tc_ct_counter_create(ct_priv);
++	if (IS_ERR(shared_counter)) {
++		ret = PTR_ERR(shared_counter);
+ 		return ERR_PTR(ret);
+ 	}
+ 
++	shared_counter->is_shared = true;
+ 	refcount_set(&shared_counter->refcount, 1);
+ 	return shared_counter;
+ }
+@@ -798,10 +816,13 @@ mlx5_tc_ct_entry_add_rules(struct mlx5_tc_ct_priv *ct_priv,
+ {
+ 	int err;
+ 
+-	entry->shared_counter = mlx5_tc_ct_shared_counter_get(ct_priv, entry);
+-	if (IS_ERR(entry->shared_counter)) {
+-		err = PTR_ERR(entry->shared_counter);
+-		ct_dbg("Failed to create counter for ct entry");
++	if (nf_ct_acct_enabled(dev_net(ct_priv->netdev)))
++		entry->counter = mlx5_tc_ct_counter_create(ct_priv);
++	else
++		entry->counter = mlx5_tc_ct_shared_counter_get(ct_priv, entry);
++
++	if (IS_ERR(entry->counter)) {
++		err = PTR_ERR(entry->counter);
+ 		return err;
+ 	}
+ 
+@@ -820,7 +841,7 @@ mlx5_tc_ct_entry_add_rules(struct mlx5_tc_ct_priv *ct_priv,
+ err_nat:
+ 	mlx5_tc_ct_entry_del_rule(ct_priv, entry, false);
+ err_orig:
+-	mlx5_tc_ct_shared_counter_put(ct_priv, entry);
++	mlx5_tc_ct_counter_put(ct_priv, entry);
+ 	return err;
+ }
+ 
+@@ -918,7 +939,7 @@ mlx5_tc_ct_del_ft_entry(struct mlx5_tc_ct_priv *ct_priv,
+ 	rhashtable_remove_fast(&ct_priv->ct_tuples_ht, &entry->tuple_node,
+ 			       tuples_ht_params);
+ 	mutex_unlock(&ct_priv->shared_counter_lock);
+-	mlx5_tc_ct_shared_counter_put(ct_priv, entry);
++	mlx5_tc_ct_counter_put(ct_priv, entry);
+ 
+ }
+ 
+@@ -956,7 +977,7 @@ mlx5_tc_ct_block_flow_offload_stats(struct mlx5_ct_ft *ft,
+ 	if (!entry)
+ 		return -ENOENT;
+ 
+-	mlx5_fc_query_cached(entry->shared_counter->counter, &bytes, &packets, &lastuse);
++	mlx5_fc_query_cached(entry->counter->counter, &bytes, &packets, &lastuse);
+ 	flow_stats_update(&f->stats, bytes, packets, 0, lastuse,
+ 			  FLOW_ACTION_HW_STATS_DELAYED);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
+index d46f8b225ebe3..3e19b1721303f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
+@@ -95,22 +95,21 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
+ 		return 0;
+ 	}
+ 
+-	if (!IS_ERR_OR_NULL(vport->egress.acl))
+-		return 0;
+-
+-	vport->egress.acl = esw_acl_table_create(esw, vport->vport,
+-						 MLX5_FLOW_NAMESPACE_ESW_EGRESS,
+-						 table_size);
+-	if (IS_ERR_OR_NULL(vport->egress.acl)) {
+-		err = PTR_ERR(vport->egress.acl);
+-		vport->egress.acl = NULL;
+-		goto out;
++	if (!vport->egress.acl) {
++		vport->egress.acl = esw_acl_table_create(esw, vport->vport,
++							 MLX5_FLOW_NAMESPACE_ESW_EGRESS,
++							 table_size);
++		if (IS_ERR(vport->egress.acl)) {
++			err = PTR_ERR(vport->egress.acl);
++			vport->egress.acl = NULL;
++			goto out;
++		}
++
++		err = esw_acl_egress_lgcy_groups_create(esw, vport);
++		if (err)
++			goto out;
+ 	}
+ 
+-	err = esw_acl_egress_lgcy_groups_create(esw, vport);
+-	if (err)
+-		goto out;
+-
+ 	esw_debug(esw->dev,
+ 		  "vport[%d] configure egress rules, vlan(%d) qos(%d)\n",
+ 		  vport->vport, vport->info.vlan, vport->info.qos);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_ofld.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_ofld.c
+index c3faae67e4d6e..4c74e2690d57b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_ofld.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_ofld.c
+@@ -173,7 +173,7 @@ int esw_acl_egress_ofld_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport
+ 		table_size++;
+ 	vport->egress.acl = esw_acl_table_create(esw, vport->vport,
+ 						 MLX5_FLOW_NAMESPACE_ESW_EGRESS, table_size);
+-	if (IS_ERR_OR_NULL(vport->egress.acl)) {
++	if (IS_ERR(vport->egress.acl)) {
+ 		err = PTR_ERR(vport->egress.acl);
+ 		vport->egress.acl = NULL;
+ 		return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
+index b68976b378b81..d64fad2823e73 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
+@@ -180,7 +180,7 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
+ 		vport->ingress.acl = esw_acl_table_create(esw, vport->vport,
+ 							  MLX5_FLOW_NAMESPACE_ESW_INGRESS,
+ 							  table_size);
+-		if (IS_ERR_OR_NULL(vport->ingress.acl)) {
++		if (IS_ERR(vport->ingress.acl)) {
+ 			err = PTR_ERR(vport->ingress.acl);
+ 			vport->ingress.acl = NULL;
+ 			return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
+index 4e55d7225a265..548c005ea6335 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
+@@ -258,7 +258,7 @@ int esw_acl_ingress_ofld_setup(struct mlx5_eswitch *esw,
+ 	vport->ingress.acl = esw_acl_table_create(esw, vport->vport,
+ 						  MLX5_FLOW_NAMESPACE_ESW_INGRESS,
+ 						  num_ftes);
+-	if (IS_ERR_OR_NULL(vport->ingress.acl)) {
++	if (IS_ERR(vport->ingress.acl)) {
+ 		err = PTR_ERR(vport->ingress.acl);
+ 		vport->ingress.acl = NULL;
+ 		return err;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index e5234bb02dafd..9a6a519426a08 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -236,6 +236,7 @@ static int intel_mgbe_common_data(struct pci_dev *pdev,
+ 	int ret;
+ 	int i;
+ 
++	plat->phy_addr = -1;
+ 	plat->clk_csr = 5;
+ 	plat->has_gmac = 0;
+ 	plat->has_gmac4 = 1;
+@@ -345,7 +346,6 @@ static int ehl_sgmii_data(struct pci_dev *pdev,
+ 			  struct plat_stmmacenet_data *plat)
+ {
+ 	plat->bus_id = 1;
+-	plat->phy_addr = 0;
+ 	plat->phy_interface = PHY_INTERFACE_MODE_SGMII;
+ 
+ 	plat->serdes_powerup = intel_serdes_powerup;
+@@ -362,7 +362,6 @@ static int ehl_rgmii_data(struct pci_dev *pdev,
+ 			  struct plat_stmmacenet_data *plat)
+ {
+ 	plat->bus_id = 1;
+-	plat->phy_addr = 0;
+ 	plat->phy_interface = PHY_INTERFACE_MODE_RGMII;
+ 
+ 	return ehl_common_data(pdev, plat);
+@@ -376,7 +375,6 @@ static int ehl_pse0_common_data(struct pci_dev *pdev,
+ 				struct plat_stmmacenet_data *plat)
+ {
+ 	plat->bus_id = 2;
+-	plat->phy_addr = 1;
+ 	return ehl_common_data(pdev, plat);
+ }
+ 
+@@ -408,7 +406,6 @@ static int ehl_pse1_common_data(struct pci_dev *pdev,
+ 				struct plat_stmmacenet_data *plat)
+ {
+ 	plat->bus_id = 3;
+-	plat->phy_addr = 1;
+ 	return ehl_common_data(pdev, plat);
+ }
+ 
+@@ -450,7 +447,6 @@ static int tgl_sgmii_data(struct pci_dev *pdev,
+ 			  struct plat_stmmacenet_data *plat)
+ {
+ 	plat->bus_id = 1;
+-	plat->phy_addr = 0;
+ 	plat->phy_interface = PHY_INTERFACE_MODE_SGMII;
+ 	plat->serdes_powerup = intel_serdes_powerup;
+ 	plat->serdes_powerdown = intel_serdes_powerdown;
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index 8c1d61c2cbacb..6aaa0675c28a3 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -793,6 +793,13 @@ static const struct usb_device_id	products[] = {
+ 	.driver_info = 0,
+ },
+ 
++/* Lenovo Powered USB-C Travel Hub (4X90S92381, based on Realtek RTL8153) */
++{
++	USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x721e, USB_CLASS_COMM,
++			USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
++	.driver_info = 0,
++},
++
+ /* ThinkPad USB-C Dock Gen 2 (based on Realtek RTL8153) */
+ {
+ 	USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa387, USB_CLASS_COMM,
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index b1770489aca51..88f177aca342e 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -6893,6 +6893,7 @@ static const struct usb_device_id rtl8152_table[] = {
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x7205)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x720c)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x7214)},
++	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x721e)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0xa387)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA,  0x09ff)},
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 01625327eef7c..3638501a09593 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -2272,6 +2272,7 @@ static void ath11k_dp_rx_h_ppdu(struct ath11k *ar, struct hal_rx_desc *rx_desc,
+ {
+ 	u8 channel_num;
+ 	u32 center_freq;
++	struct ieee80211_channel *channel;
+ 
+ 	rx_status->freq = 0;
+ 	rx_status->rate_idx = 0;
+@@ -2292,9 +2293,12 @@ static void ath11k_dp_rx_h_ppdu(struct ath11k *ar, struct hal_rx_desc *rx_desc,
+ 		rx_status->band = NL80211_BAND_5GHZ;
+ 	} else {
+ 		spin_lock_bh(&ar->data_lock);
+-		rx_status->band = ar->rx_channel->band;
+-		channel_num =
+-			ieee80211_frequency_to_channel(ar->rx_channel->center_freq);
++		channel = ar->rx_channel;
++		if (channel) {
++			rx_status->band = channel->band;
++			channel_num =
++				ieee80211_frequency_to_channel(channel->center_freq);
++		}
+ 		spin_unlock_bh(&ar->data_lock);
+ 		ath11k_dbg_dump(ar->ab, ATH11K_DBG_DATA, NULL, "rx_desc: ",
+ 				rx_desc, sizeof(struct hal_rx_desc));
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index 99a88ca83deaa..2ae7c6bf091e9 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -1654,6 +1654,7 @@ static int ath11k_qmi_respond_fw_mem_request(struct ath11k_base *ab)
+ 	struct qmi_wlanfw_respond_mem_resp_msg_v01 resp;
+ 	struct qmi_txn txn = {};
+ 	int ret = 0, i;
++	bool delayed;
+ 
+ 	req = kzalloc(sizeof(*req), GFP_KERNEL);
+ 	if (!req)
+@@ -1666,11 +1667,13 @@ static int ath11k_qmi_respond_fw_mem_request(struct ath11k_base *ab)
+ 	 * failure to FW and FW will then request mulitple blocks of small
+ 	 * chunk size memory.
+ 	 */
+-	if (!ab->bus_params.fixed_mem_region && ab->qmi.mem_seg_count <= 2) {
++	if (!ab->bus_params.fixed_mem_region && ab->qmi.target_mem_delayed) {
++		delayed = true;
+ 		ath11k_dbg(ab, ATH11K_DBG_QMI, "qmi delays mem_request %d\n",
+ 			   ab->qmi.mem_seg_count);
+ 		memset(req, 0, sizeof(*req));
+ 	} else {
++		delayed = false;
+ 		req->mem_seg_len = ab->qmi.mem_seg_count;
+ 
+ 		for (i = 0; i < req->mem_seg_len ; i++) {
+@@ -1702,6 +1705,12 @@ static int ath11k_qmi_respond_fw_mem_request(struct ath11k_base *ab)
+ 	}
+ 
+ 	if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
++		/* the error response is expected when
++		 * target_mem_delayed is true.
++		 */
++		if (delayed && resp.resp.error == 0)
++			goto out;
++
+ 		ath11k_warn(ab, "Respond mem req failed, result: %d, err: %d\n",
+ 			    resp.resp.result, resp.resp.error);
+ 		ret = -EINVAL;
+@@ -1736,6 +1745,8 @@ static int ath11k_qmi_alloc_target_mem_chunk(struct ath11k_base *ab)
+ 	int i;
+ 	struct target_mem_chunk *chunk;
+ 
++	ab->qmi.target_mem_delayed = false;
++
+ 	for (i = 0; i < ab->qmi.mem_seg_count; i++) {
+ 		chunk = &ab->qmi.target_mem[i];
+ 		chunk->vaddr = dma_alloc_coherent(ab->dev,
+@@ -1743,6 +1754,15 @@ static int ath11k_qmi_alloc_target_mem_chunk(struct ath11k_base *ab)
+ 						  &chunk->paddr,
+ 						  GFP_KERNEL);
+ 		if (!chunk->vaddr) {
++			if (ab->qmi.mem_seg_count <= 2) {
++				ath11k_dbg(ab, ATH11K_DBG_QMI,
++					   "qmi dma allocation failed (%d B type %u), will try later with small size\n",
++					    chunk->size,
++					    chunk->type);
++				ath11k_qmi_free_target_mem_chunk(ab);
++				ab->qmi.target_mem_delayed = true;
++				return 0;
++			}
+ 			ath11k_err(ab, "failed to alloc memory, size: 0x%x, type: %u\n",
+ 				   chunk->size,
+ 				   chunk->type);
+@@ -2467,7 +2487,7 @@ static void ath11k_qmi_msg_mem_request_cb(struct qmi_handle *qmi_hdl,
+ 				    ret);
+ 			return;
+ 		}
+-	} else if (msg->mem_seg_len > 2) {
++	} else {
+ 		ret = ath11k_qmi_alloc_target_mem_chunk(ab);
+ 		if (ret) {
+ 			ath11k_warn(ab, "qmi failed to alloc target memory: %d\n",
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.h b/drivers/net/wireless/ath/ath11k/qmi.h
+index b0a818f0401b9..59f1452b3544c 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.h
++++ b/drivers/net/wireless/ath/ath11k/qmi.h
+@@ -121,6 +121,7 @@ struct ath11k_qmi {
+ 	struct target_mem_chunk target_mem[ATH11K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01];
+ 	u32 mem_seg_count;
+ 	u32 target_mem_mode;
++	bool target_mem_delayed;
+ 	u8 cal_done;
+ 	struct target_info target;
+ 	struct m3_mem_region m3_mem;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 9a270e49df179..34cb59b2fcd67 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2802,6 +2802,11 @@ static const struct attribute_group *nvme_subsys_attrs_groups[] = {
+ 	NULL,
+ };
+ 
++static inline bool nvme_discovery_ctrl(struct nvme_ctrl *ctrl)
++{
++	return ctrl->opts && ctrl->opts->discovery_nqn;
++}
++
+ static bool nvme_validate_cntlid(struct nvme_subsystem *subsys,
+ 		struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+ {
+@@ -2821,7 +2826,7 @@ static bool nvme_validate_cntlid(struct nvme_subsystem *subsys,
+ 		}
+ 
+ 		if ((id->cmic & NVME_CTRL_CMIC_MULTI_CTRL) ||
+-		    (ctrl->opts && ctrl->opts->discovery_nqn))
++		    nvme_discovery_ctrl(ctrl))
+ 			continue;
+ 
+ 		dev_err(ctrl->device,
+@@ -3090,7 +3095,7 @@ int nvme_init_identify(struct nvme_ctrl *ctrl)
+ 			goto out_free;
+ 		}
+ 
+-		if (!ctrl->opts->discovery_nqn && !ctrl->kas) {
++		if (!nvme_discovery_ctrl(ctrl) && !ctrl->kas) {
+ 			dev_err(ctrl->device,
+ 				"keep-alive support is mandatory for fabrics\n");
+ 			ret = -EINVAL;
+@@ -3130,7 +3135,7 @@ int nvme_init_identify(struct nvme_ctrl *ctrl)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (!ctrl->identified) {
++	if (!ctrl->identified && !nvme_discovery_ctrl(ctrl)) {
+ 		ret = nvme_hwmon_init(ctrl);
+ 		if (ret < 0)
+ 			return ret;
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index f4c246462658f..5ead217ac2bc8 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -166,6 +166,7 @@ struct nvme_fc_ctrl {
+ 	struct blk_mq_tag_set	admin_tag_set;
+ 	struct blk_mq_tag_set	tag_set;
+ 
++	struct work_struct	ioerr_work;
+ 	struct delayed_work	connect_work;
+ 
+ 	struct kref		ref;
+@@ -1888,6 +1889,15 @@ __nvme_fc_fcpop_chk_teardowns(struct nvme_fc_ctrl *ctrl,
+ 	}
+ }
+ 
++static void
++nvme_fc_ctrl_ioerr_work(struct work_struct *work)
++{
++	struct nvme_fc_ctrl *ctrl =
++			container_of(work, struct nvme_fc_ctrl, ioerr_work);
++
++	nvme_fc_error_recovery(ctrl, "transport detected io error");
++}
++
+ static void
+ nvme_fc_fcpio_done(struct nvmefc_fcp_req *req)
+ {
+@@ -2046,7 +2056,7 @@ done:
+ 
+ check_error:
+ 	if (terminate_assoc)
+-		nvme_fc_error_recovery(ctrl, "transport detected io error");
++		queue_work(nvme_reset_wq, &ctrl->ioerr_work);
+ }
+ 
+ static int
+@@ -3233,6 +3243,7 @@ nvme_fc_delete_ctrl(struct nvme_ctrl *nctrl)
+ {
+ 	struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl);
+ 
++	cancel_work_sync(&ctrl->ioerr_work);
+ 	cancel_delayed_work_sync(&ctrl->connect_work);
+ 	/*
+ 	 * kill the association on the link side.  this will block
+@@ -3449,6 +3460,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
+ 
+ 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_fc_reset_ctrl_work);
+ 	INIT_DELAYED_WORK(&ctrl->connect_work, nvme_fc_connect_ctrl_work);
++	INIT_WORK(&ctrl->ioerr_work, nvme_fc_ctrl_ioerr_work);
+ 	spin_lock_init(&ctrl->lock);
+ 
+ 	/* io queue count */
+@@ -3540,6 +3552,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
+ 
+ fail_ctrl:
+ 	nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING);
++	cancel_work_sync(&ctrl->ioerr_work);
+ 	cancel_work_sync(&ctrl->ctrl.reset_work);
+ 	cancel_delayed_work_sync(&ctrl->connect_work);
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 3be352403839a..a89d74c5cd1a7 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -967,6 +967,7 @@ static inline struct blk_mq_tags *nvme_queue_tagset(struct nvme_queue *nvmeq)
+ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
+ {
+ 	struct nvme_completion *cqe = &nvmeq->cqes[idx];
++	__u16 command_id = READ_ONCE(cqe->command_id);
+ 	struct request *req;
+ 
+ 	/*
+@@ -975,17 +976,17 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
+ 	 * aborts.  We don't even bother to allocate a struct request
+ 	 * for them but rather special case them here.
+ 	 */
+-	if (unlikely(nvme_is_aen_req(nvmeq->qid, cqe->command_id))) {
++	if (unlikely(nvme_is_aen_req(nvmeq->qid, command_id))) {
+ 		nvme_complete_async_event(&nvmeq->dev->ctrl,
+ 				cqe->status, &cqe->result);
+ 		return;
+ 	}
+ 
+-	req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), cqe->command_id);
++	req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), command_id);
+ 	if (unlikely(!req)) {
+ 		dev_warn(nvmeq->dev->ctrl.device,
+ 			"invalid id %d completed on queue %d\n",
+-			cqe->command_id, le16_to_cpu(cqe->sq_id));
++			command_id, le16_to_cpu(cqe->sq_id));
+ 		return;
+ 	}
+ 
+@@ -3201,7 +3202,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 	{ PCI_DEVICE(0x144d, 0xa821),   /* Samsung PM1725 */
+ 		.driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY, },
+ 	{ PCI_DEVICE(0x144d, 0xa822),   /* Samsung PM1725a */
+-		.driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY, },
++		.driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY |
++				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ 	{ PCI_DEVICE(0x1d1d, 0x1f1f),	/* LighNVM qemu device */
+ 		.driver_data = NVME_QUIRK_LIGHTNVM, },
+ 	{ PCI_DEVICE(0x1d1d, 0x2807),	/* CNEX WL */
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 9aa3d9e91c5d1..81db2331f6d78 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -201,7 +201,7 @@ static inline size_t nvme_tcp_req_cur_offset(struct nvme_tcp_request *req)
+ 
+ static inline size_t nvme_tcp_req_cur_length(struct nvme_tcp_request *req)
+ {
+-	return min_t(size_t, req->iter.bvec->bv_len - req->iter.iov_offset,
++	return min_t(size_t, iov_iter_single_seg_count(&req->iter),
+ 			req->pdu_len - req->pdu_sent);
+ }
+ 
+@@ -286,7 +286,7 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
+ 	 * directly, otherwise queue io_work. Also, only do that if we
+ 	 * are on the same cpu, so we don't introduce contention.
+ 	 */
+-	if (queue->io_cpu == smp_processor_id() &&
++	if (queue->io_cpu == __smp_processor_id() &&
+ 	    sync && empty && mutex_trylock(&queue->send_mutex)) {
+ 		queue->more_requests = !last;
+ 		nvme_tcp_send_all(queue);
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 5c1e7cb7fe0de..06b6b742bb213 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -1220,6 +1220,14 @@ nvmet_rdma_find_get_device(struct rdma_cm_id *cm_id)
+ 	}
+ 	ndev->inline_data_size = nport->inline_data_size;
+ 	ndev->inline_page_count = inline_page_count;
++
++	if (nport->pi_enable && !(cm_id->device->attrs.device_cap_flags &
++				  IB_DEVICE_INTEGRITY_HANDOVER)) {
++		pr_warn("T10-PI is not supported by device %s. Disabling it\n",
++			cm_id->device->name);
++		nport->pi_enable = false;
++	}
++
+ 	ndev->device = cm_id->device;
+ 	kref_init(&ndev->ref);
+ 
+@@ -1641,6 +1649,16 @@ static void __nvmet_rdma_queue_disconnect(struct nvmet_rdma_queue *queue)
+ 	spin_lock_irqsave(&queue->state_lock, flags);
+ 	switch (queue->state) {
+ 	case NVMET_RDMA_Q_CONNECTING:
++		while (!list_empty(&queue->rsp_wait_list)) {
++			struct nvmet_rdma_rsp *rsp;
++
++			rsp = list_first_entry(&queue->rsp_wait_list,
++					       struct nvmet_rdma_rsp,
++					       wait_list);
++			list_del(&rsp->wait_list);
++			nvmet_rdma_put_rsp(rsp);
++		}
++		fallthrough;
+ 	case NVMET_RDMA_Q_LIVE:
+ 		queue->state = NVMET_RDMA_Q_DISCONNECTING;
+ 		disconnect = true;
+@@ -1845,14 +1863,6 @@ static int nvmet_rdma_enable_port(struct nvmet_rdma_port *port)
+ 		goto out_destroy_id;
+ 	}
+ 
+-	if (port->nport->pi_enable &&
+-	    !(cm_id->device->attrs.device_cap_flags &
+-	      IB_DEVICE_INTEGRITY_HANDOVER)) {
+-		pr_err("T10-PI is not supported for %pISpcs\n", addr);
+-		ret = -EINVAL;
+-		goto out_destroy_id;
+-	}
+-
+ 	port->cm_id = cm_id;
+ 	return 0;
+ 
+diff --git a/drivers/regulator/bd718x7-regulator.c b/drivers/regulator/bd718x7-regulator.c
+index 0774467994fbe..3333b8905f1b7 100644
+--- a/drivers/regulator/bd718x7-regulator.c
++++ b/drivers/regulator/bd718x7-regulator.c
+@@ -15,6 +15,36 @@
+ #include <linux/regulator/of_regulator.h>
+ #include <linux/slab.h>
+ 
++/* Typical regulator startup times as per data sheet in uS */
++#define BD71847_BUCK1_STARTUP_TIME 144
++#define BD71847_BUCK2_STARTUP_TIME 162
++#define BD71847_BUCK3_STARTUP_TIME 162
++#define BD71847_BUCK4_STARTUP_TIME 240
++#define BD71847_BUCK5_STARTUP_TIME 270
++#define BD71847_BUCK6_STARTUP_TIME 200
++#define BD71847_LDO1_STARTUP_TIME  440
++#define BD71847_LDO2_STARTUP_TIME  370
++#define BD71847_LDO3_STARTUP_TIME  310
++#define BD71847_LDO4_STARTUP_TIME  400
++#define BD71847_LDO5_STARTUP_TIME  530
++#define BD71847_LDO6_STARTUP_TIME  400
++
++#define BD71837_BUCK1_STARTUP_TIME 160
++#define BD71837_BUCK2_STARTUP_TIME 180
++#define BD71837_BUCK3_STARTUP_TIME 180
++#define BD71837_BUCK4_STARTUP_TIME 180
++#define BD71837_BUCK5_STARTUP_TIME 160
++#define BD71837_BUCK6_STARTUP_TIME 240
++#define BD71837_BUCK7_STARTUP_TIME 220
++#define BD71837_BUCK8_STARTUP_TIME 200
++#define BD71837_LDO1_STARTUP_TIME  440
++#define BD71837_LDO2_STARTUP_TIME  370
++#define BD71837_LDO3_STARTUP_TIME  310
++#define BD71837_LDO4_STARTUP_TIME  400
++#define BD71837_LDO5_STARTUP_TIME  310
++#define BD71837_LDO6_STARTUP_TIME  400
++#define BD71837_LDO7_STARTUP_TIME  530
++
+ /*
+  * BD718(37/47/50) have two "enable control modes". ON/OFF can either be
+  * controlled by software - or by PMIC internal HW state machine. Whether
+@@ -613,6 +643,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = {
+ 			.vsel_mask = DVS_BUCK_RUN_MASK,
+ 			.enable_reg = BD718XX_REG_BUCK1_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71847_BUCK1_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 			.of_parse_cb = buck_set_hw_dvs_levels,
+ 		},
+@@ -646,6 +677,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = {
+ 			.vsel_mask = DVS_BUCK_RUN_MASK,
+ 			.enable_reg = BD718XX_REG_BUCK2_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71847_BUCK2_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 			.of_parse_cb = buck_set_hw_dvs_levels,
+ 		},
+@@ -680,6 +712,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = {
+ 			.linear_range_selectors = bd71847_buck3_volt_range_sel,
+ 			.enable_reg = BD718XX_REG_1ST_NODVS_BUCK_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71847_BUCK3_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -706,6 +739,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = {
+ 			.vsel_range_mask = BD71847_BUCK4_RANGE_MASK,
+ 			.linear_range_selectors = bd71847_buck4_volt_range_sel,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71847_BUCK4_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -727,6 +761,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = {
+ 			.vsel_mask = BD718XX_3RD_NODVS_BUCK_MASK,
+ 			.enable_reg = BD718XX_REG_3RD_NODVS_BUCK_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71847_BUCK5_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -750,6 +785,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = {
+ 			.vsel_mask = BD718XX_4TH_NODVS_BUCK_MASK,
+ 			.enable_reg = BD718XX_REG_4TH_NODVS_BUCK_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71847_BUCK6_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -775,6 +811,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = {
+ 			.linear_range_selectors = bd718xx_ldo1_volt_range_sel,
+ 			.enable_reg = BD718XX_REG_LDO1_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71847_LDO1_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -796,6 +833,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = {
+ 			.n_voltages = ARRAY_SIZE(ldo_2_volts),
+ 			.enable_reg = BD718XX_REG_LDO2_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71847_LDO2_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -818,6 +856,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = {
+ 			.vsel_mask = BD718XX_LDO3_MASK,
+ 			.enable_reg = BD718XX_REG_LDO3_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71847_LDO3_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -840,6 +879,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = {
+ 			.vsel_mask = BD718XX_LDO4_MASK,
+ 			.enable_reg = BD718XX_REG_LDO4_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71847_LDO4_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -865,6 +905,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = {
+ 			.linear_range_selectors = bd71847_ldo5_volt_range_sel,
+ 			.enable_reg = BD718XX_REG_LDO5_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71847_LDO5_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -889,6 +930,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = {
+ 			.vsel_mask = BD718XX_LDO6_MASK,
+ 			.enable_reg = BD718XX_REG_LDO6_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71847_LDO6_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -942,6 +984,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.vsel_mask = DVS_BUCK_RUN_MASK,
+ 			.enable_reg = BD718XX_REG_BUCK1_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71837_BUCK1_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 			.of_parse_cb = buck_set_hw_dvs_levels,
+ 		},
+@@ -975,6 +1018,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.vsel_mask = DVS_BUCK_RUN_MASK,
+ 			.enable_reg = BD718XX_REG_BUCK2_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71837_BUCK2_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 			.of_parse_cb = buck_set_hw_dvs_levels,
+ 		},
+@@ -1005,6 +1049,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.vsel_mask = DVS_BUCK_RUN_MASK,
+ 			.enable_reg = BD71837_REG_BUCK3_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71837_BUCK3_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 			.of_parse_cb = buck_set_hw_dvs_levels,
+ 		},
+@@ -1033,6 +1078,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.vsel_mask = DVS_BUCK_RUN_MASK,
+ 			.enable_reg = BD71837_REG_BUCK4_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71837_BUCK4_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 			.of_parse_cb = buck_set_hw_dvs_levels,
+ 		},
+@@ -1065,6 +1111,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.linear_range_selectors = bd71837_buck5_volt_range_sel,
+ 			.enable_reg = BD718XX_REG_1ST_NODVS_BUCK_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71837_BUCK5_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -1088,6 +1135,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.vsel_mask = BD71837_BUCK6_MASK,
+ 			.enable_reg = BD718XX_REG_2ND_NODVS_BUCK_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71837_BUCK6_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -1109,6 +1157,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.vsel_mask = BD718XX_3RD_NODVS_BUCK_MASK,
+ 			.enable_reg = BD718XX_REG_3RD_NODVS_BUCK_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71837_BUCK7_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -1132,6 +1181,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.vsel_mask = BD718XX_4TH_NODVS_BUCK_MASK,
+ 			.enable_reg = BD718XX_REG_4TH_NODVS_BUCK_CTRL,
+ 			.enable_mask = BD718XX_BUCK_EN,
++			.enable_time = BD71837_BUCK8_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -1157,6 +1207,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.linear_range_selectors = bd718xx_ldo1_volt_range_sel,
+ 			.enable_reg = BD718XX_REG_LDO1_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71837_LDO1_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -1178,6 +1229,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.n_voltages = ARRAY_SIZE(ldo_2_volts),
+ 			.enable_reg = BD718XX_REG_LDO2_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71837_LDO2_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -1200,6 +1252,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.vsel_mask = BD718XX_LDO3_MASK,
+ 			.enable_reg = BD718XX_REG_LDO3_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71837_LDO3_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -1222,6 +1275,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.vsel_mask = BD718XX_LDO4_MASK,
+ 			.enable_reg = BD718XX_REG_LDO4_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71837_LDO4_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -1246,6 +1300,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.vsel_mask = BD71837_LDO5_MASK,
+ 			.enable_reg = BD718XX_REG_LDO5_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71837_LDO5_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -1272,6 +1327,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.vsel_mask = BD718XX_LDO6_MASK,
+ 			.enable_reg = BD718XX_REG_LDO6_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71837_LDO6_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+@@ -1296,6 +1352,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = {
+ 			.vsel_mask = BD71837_LDO7_MASK,
+ 			.enable_reg = BD71837_REG_LDO7_VOLT,
+ 			.enable_mask = BD718XX_LDO_EN,
++			.enable_time = BD71837_LDO7_STARTUP_TIME,
+ 			.owner = THIS_MODULE,
+ 		},
+ 		.init = {
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 7558b4abebfc5..7b9a9a771b11b 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -8818,7 +8818,8 @@ int ufshcd_system_suspend(struct ufs_hba *hba)
+ 	if ((ufs_get_pm_lvl_to_dev_pwr_mode(hba->spm_lvl) ==
+ 	     hba->curr_dev_pwr_mode) &&
+ 	    (ufs_get_pm_lvl_to_link_pwr_state(hba->spm_lvl) ==
+-	     hba->uic_link_state))
++	     hba->uic_link_state) &&
++	     !hba->dev_info.b_rpm_dev_flush_capable)
+ 		goto out;
+ 
+ 	if (pm_runtime_suspended(hba->dev)) {
+diff --git a/drivers/spi/spi-altera.c b/drivers/spi/spi-altera.c
+index 809bfff3690ab..cbc4c28c1541c 100644
+--- a/drivers/spi/spi-altera.c
++++ b/drivers/spi/spi-altera.c
+@@ -189,24 +189,26 @@ static int altera_spi_txrx(struct spi_master *master,
+ 
+ 		/* send the first byte */
+ 		altera_spi_tx_word(hw);
+-	} else {
+-		while (hw->count < hw->len) {
+-			altera_spi_tx_word(hw);
+ 
+-			for (;;) {
+-				altr_spi_readl(hw, ALTERA_SPI_STATUS, &val);
+-				if (val & ALTERA_SPI_STATUS_RRDY_MSK)
+-					break;
++		return 1;
++	}
++
++	while (hw->count < hw->len) {
++		altera_spi_tx_word(hw);
+ 
+-				cpu_relax();
+-			}
++		for (;;) {
++			altr_spi_readl(hw, ALTERA_SPI_STATUS, &val);
++			if (val & ALTERA_SPI_STATUS_RRDY_MSK)
++				break;
+ 
+-			altera_spi_rx_word(hw);
++			cpu_relax();
+ 		}
+-		spi_finalize_current_transfer(master);
++
++		altera_spi_rx_word(hw);
+ 	}
++	spi_finalize_current_transfer(master);
+ 
+-	return t->len;
++	return 0;
+ }
+ 
+ static irqreturn_t altera_spi_irq(int irq, void *dev)
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 2eaa7dbb70108..7694e1ae5b0b2 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1100,6 +1100,7 @@ static int spi_transfer_wait(struct spi_controller *ctlr,
+ {
+ 	struct spi_statistics *statm = &ctlr->statistics;
+ 	struct spi_statistics *stats = &msg->spi->statistics;
++	u32 speed_hz = xfer->speed_hz;
+ 	unsigned long long ms;
+ 
+ 	if (spi_controller_is_slave(ctlr)) {
+@@ -1108,8 +1109,11 @@ static int spi_transfer_wait(struct spi_controller *ctlr,
+ 			return -EINTR;
+ 		}
+ 	} else {
++		if (!speed_hz)
++			speed_hz = 100000;
++
+ 		ms = 8LL * 1000LL * xfer->len;
+-		do_div(ms, xfer->speed_hz);
++		do_div(ms, speed_hz);
+ 		ms += ms + 200; /* some tolerance */
+ 
+ 		if (ms > UINT_MAX)
+diff --git a/drivers/staging/hikey9xx/hisi-spmi-controller.c b/drivers/staging/hikey9xx/hisi-spmi-controller.c
+index f831c43f4783f..29f226503668d 100644
+--- a/drivers/staging/hikey9xx/hisi-spmi-controller.c
++++ b/drivers/staging/hikey9xx/hisi-spmi-controller.c
+@@ -278,21 +278,24 @@ static int spmi_controller_probe(struct platform_device *pdev)
+ 	iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (!iores) {
+ 		dev_err(&pdev->dev, "can not get resource!\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_put_controller;
+ 	}
+ 
+ 	spmi_controller->base = devm_ioremap(&pdev->dev, iores->start,
+ 					     resource_size(iores));
+ 	if (!spmi_controller->base) {
+ 		dev_err(&pdev->dev, "can not remap base addr!\n");
+-		return -EADDRNOTAVAIL;
++		ret = -EADDRNOTAVAIL;
++		goto err_put_controller;
+ 	}
+ 
+ 	ret = of_property_read_u32(pdev->dev.of_node, "spmi-channel",
+ 				   &spmi_controller->channel);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "can not get channel\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err_put_controller;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, spmi_controller);
+@@ -309,9 +312,15 @@ static int spmi_controller_probe(struct platform_device *pdev)
+ 	ctrl->write_cmd = spmi_write_cmd;
+ 
+ 	ret = spmi_controller_add(ctrl);
+-	if (ret)
+-		dev_err(&pdev->dev, "spmi_add_controller failed with error %d!\n", ret);
++	if (ret) {
++		dev_err(&pdev->dev, "spmi_controller_add failed with error %d!\n", ret);
++		goto err_put_controller;
++	}
++
++	return 0;
+ 
++err_put_controller:
++	spmi_controller_put(ctrl);
+ 	return ret;
+ }
+ 
+@@ -320,7 +329,7 @@ static int spmi_del_controller(struct platform_device *pdev)
+ 	struct spmi_controller *ctrl = platform_get_drvdata(pdev);
+ 
+ 	spmi_controller_remove(ctrl);
+-	kfree(ctrl);
++	spmi_controller_put(ctrl);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/typec/altmodes/Kconfig b/drivers/usb/typec/altmodes/Kconfig
+index 187690fd1a5bd..60d375e9c3c7c 100644
+--- a/drivers/usb/typec/altmodes/Kconfig
++++ b/drivers/usb/typec/altmodes/Kconfig
+@@ -20,6 +20,6 @@ config TYPEC_NVIDIA_ALTMODE
+ 	  to enable support for VirtualLink devices with NVIDIA GPUs.
+ 
+ 	  To compile this driver as a module, choose M here: the
+-	  module will be called typec_displayport.
++	  module will be called typec_nvidia.
+ 
+ endmenu
+diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
+index b0c73c58f9874..720a7b7abd46d 100644
+--- a/drivers/xen/privcmd.c
++++ b/drivers/xen/privcmd.c
+@@ -717,14 +717,15 @@ static long privcmd_ioctl_restrict(struct file *file, void __user *udata)
+ 	return 0;
+ }
+ 
+-static long privcmd_ioctl_mmap_resource(struct file *file, void __user *udata)
++static long privcmd_ioctl_mmap_resource(struct file *file,
++				struct privcmd_mmap_resource __user *udata)
+ {
+ 	struct privcmd_data *data = file->private_data;
+ 	struct mm_struct *mm = current->mm;
+ 	struct vm_area_struct *vma;
+ 	struct privcmd_mmap_resource kdata;
+ 	xen_pfn_t *pfns = NULL;
+-	struct xen_mem_acquire_resource xdata;
++	struct xen_mem_acquire_resource xdata = { };
+ 	int rc;
+ 
+ 	if (copy_from_user(&kdata, udata, sizeof(kdata)))
+@@ -734,6 +735,22 @@ static long privcmd_ioctl_mmap_resource(struct file *file, void __user *udata)
+ 	if (data->domid != DOMID_INVALID && data->domid != kdata.dom)
+ 		return -EPERM;
+ 
++	/* Both fields must be set or unset */
++	if (!!kdata.addr != !!kdata.num)
++		return -EINVAL;
++
++	xdata.domid = kdata.dom;
++	xdata.type = kdata.type;
++	xdata.id = kdata.id;
++
++	if (!kdata.addr && !kdata.num) {
++		/* Query the size of the resource. */
++		rc = HYPERVISOR_memory_op(XENMEM_acquire_resource, &xdata);
++		if (rc)
++			return rc;
++		return __put_user(xdata.nr_frames, &udata->num);
++	}
++
+ 	mmap_write_lock(mm);
+ 
+ 	vma = find_vma(mm, kdata.addr);
+@@ -768,10 +785,6 @@ static long privcmd_ioctl_mmap_resource(struct file *file, void __user *udata)
+ 	} else
+ 		vma->vm_private_data = PRIV_VMA_LOCKED;
+ 
+-	memset(&xdata, 0, sizeof(xdata));
+-	xdata.domid = kdata.dom;
+-	xdata.type = kdata.type;
+-	xdata.id = kdata.id;
+ 	xdata.frame = kdata.idx;
+ 	xdata.nr_frames = kdata.num;
+ 	set_xen_guest_handle(xdata.frame_list, pfns);
+diff --git a/fs/btrfs/discard.c b/fs/btrfs/discard.c
+index 741c7e19c32f2..9e1a06144e32d 100644
+--- a/fs/btrfs/discard.c
++++ b/fs/btrfs/discard.c
+@@ -199,16 +199,15 @@ static struct btrfs_block_group *find_next_block_group(
+ static struct btrfs_block_group *peek_discard_list(
+ 					struct btrfs_discard_ctl *discard_ctl,
+ 					enum btrfs_discard_state *discard_state,
+-					int *discard_index)
++					int *discard_index, u64 now)
+ {
+ 	struct btrfs_block_group *block_group;
+-	const u64 now = ktime_get_ns();
+ 
+ 	spin_lock(&discard_ctl->lock);
+ again:
+ 	block_group = find_next_block_group(discard_ctl, now);
+ 
+-	if (block_group && now > block_group->discard_eligible_time) {
++	if (block_group && now >= block_group->discard_eligible_time) {
+ 		if (block_group->discard_index == BTRFS_DISCARD_INDEX_UNUSED &&
+ 		    block_group->used != 0) {
+ 			if (btrfs_is_block_group_data_only(block_group))
+@@ -222,12 +221,11 @@ again:
+ 			block_group->discard_state = BTRFS_DISCARD_EXTENTS;
+ 		}
+ 		discard_ctl->block_group = block_group;
++	}
++	if (block_group) {
+ 		*discard_state = block_group->discard_state;
+ 		*discard_index = block_group->discard_index;
+-	} else {
+-		block_group = NULL;
+ 	}
+-
+ 	spin_unlock(&discard_ctl->lock);
+ 
+ 	return block_group;
+@@ -330,28 +328,15 @@ void btrfs_discard_queue_work(struct btrfs_discard_ctl *discard_ctl,
+ 		btrfs_discard_schedule_work(discard_ctl, false);
+ }
+ 
+-/**
+- * btrfs_discard_schedule_work - responsible for scheduling the discard work
+- * @discard_ctl: discard control
+- * @override: override the current timer
+- *
+- * Discards are issued by a delayed workqueue item.  @override is used to
+- * update the current delay as the baseline delay interval is reevaluated on
+- * transaction commit.  This is also maxed with any other rate limit.
+- */
+-void btrfs_discard_schedule_work(struct btrfs_discard_ctl *discard_ctl,
+-				 bool override)
++static void __btrfs_discard_schedule_work(struct btrfs_discard_ctl *discard_ctl,
++					  u64 now, bool override)
+ {
+ 	struct btrfs_block_group *block_group;
+-	const u64 now = ktime_get_ns();
+-
+-	spin_lock(&discard_ctl->lock);
+ 
+ 	if (!btrfs_run_discard_work(discard_ctl))
+-		goto out;
+-
++		return;
+ 	if (!override && delayed_work_pending(&discard_ctl->work))
+-		goto out;
++		return;
+ 
+ 	block_group = find_next_block_group(discard_ctl, now);
+ 	if (block_group) {
+@@ -384,7 +369,24 @@ void btrfs_discard_schedule_work(struct btrfs_discard_ctl *discard_ctl,
+ 		mod_delayed_work(discard_ctl->discard_workers,
+ 				 &discard_ctl->work, delay);
+ 	}
+-out:
++}
++
++/*
++ * btrfs_discard_schedule_work - responsible for scheduling the discard work
++ * @discard_ctl:  discard control
++ * @override:     override the current timer
++ *
++ * Discards are issued by a delayed workqueue item.  @override is used to
++ * update the current delay as the baseline delay interval is reevaluated on
++ * transaction commit.  This is also maxed with any other rate limit.
++ */
++void btrfs_discard_schedule_work(struct btrfs_discard_ctl *discard_ctl,
++				 bool override)
++{
++	const u64 now = ktime_get_ns();
++
++	spin_lock(&discard_ctl->lock);
++	__btrfs_discard_schedule_work(discard_ctl, now, override);
+ 	spin_unlock(&discard_ctl->lock);
+ }
+ 
+@@ -429,13 +431,18 @@ static void btrfs_discard_workfn(struct work_struct *work)
+ 	int discard_index = 0;
+ 	u64 trimmed = 0;
+ 	u64 minlen = 0;
++	u64 now = ktime_get_ns();
+ 
+ 	discard_ctl = container_of(work, struct btrfs_discard_ctl, work.work);
+ 
+ 	block_group = peek_discard_list(discard_ctl, &discard_state,
+-					&discard_index);
++					&discard_index, now);
+ 	if (!block_group || !btrfs_run_discard_work(discard_ctl))
+ 		return;
++	if (now < block_group->discard_eligible_time) {
++		btrfs_discard_schedule_work(discard_ctl, false);
++		return;
++	}
+ 
+ 	/* Perform discarding */
+ 	minlen = discard_minlen[discard_index];
+@@ -484,9 +491,8 @@ static void btrfs_discard_workfn(struct work_struct *work)
+ 
+ 	spin_lock(&discard_ctl->lock);
+ 	discard_ctl->block_group = NULL;
++	__btrfs_discard_schedule_work(discard_ctl, now, false);
+ 	spin_unlock(&discard_ctl->lock);
+-
+-	btrfs_discard_schedule_work(discard_ctl, false);
+ }
+ 
+ /**
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 60f5f68d892df..30cf917a58e92 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -676,9 +676,7 @@ alloc_extent_state_atomic(struct extent_state *prealloc)
+ 
+ static void extent_io_tree_panic(struct extent_io_tree *tree, int err)
+ {
+-	struct inode *inode = tree->private_data;
+-
+-	btrfs_panic(btrfs_sb(inode->i_sb), err,
++	btrfs_panic(tree->fs_info, err,
+ 	"locking error: extent tree was modified by another thread while locked");
+ }
+ 
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index faed0e96cec23..d504a9a207515 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -3224,6 +3224,12 @@ out:
+ 	return ret;
+ }
+ 
++static bool rescan_should_stop(struct btrfs_fs_info *fs_info)
++{
++	return btrfs_fs_closing(fs_info) ||
++		test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state);
++}
++
+ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
+ {
+ 	struct btrfs_fs_info *fs_info = container_of(work, struct btrfs_fs_info,
+@@ -3232,6 +3238,7 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
+ 	struct btrfs_trans_handle *trans = NULL;
+ 	int err = -ENOMEM;
+ 	int ret = 0;
++	bool stopped = false;
+ 
+ 	path = btrfs_alloc_path();
+ 	if (!path)
+@@ -3244,7 +3251,7 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
+ 	path->skip_locking = 1;
+ 
+ 	err = 0;
+-	while (!err && !btrfs_fs_closing(fs_info)) {
++	while (!err && !(stopped = rescan_should_stop(fs_info))) {
+ 		trans = btrfs_start_transaction(fs_info->fs_root, 0);
+ 		if (IS_ERR(trans)) {
+ 			err = PTR_ERR(trans);
+@@ -3287,7 +3294,7 @@ out:
+ 	}
+ 
+ 	mutex_lock(&fs_info->qgroup_rescan_lock);
+-	if (!btrfs_fs_closing(fs_info))
++	if (!stopped)
+ 		fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_RESCAN;
+ 	if (trans) {
+ 		ret = update_qgroup_status_item(trans);
+@@ -3306,7 +3313,7 @@ out:
+ 
+ 	btrfs_end_transaction(trans);
+ 
+-	if (btrfs_fs_closing(fs_info)) {
++	if (stopped) {
+ 		btrfs_info(fs_info, "qgroup scan paused");
+ 	} else if (err >= 0) {
+ 		btrfs_info(fs_info, "qgroup scan completed%s",
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 9ba92d86da0bf..108e93ff6cb6f 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -3027,11 +3027,16 @@ static int delete_v1_space_cache(struct extent_buffer *leaf,
+ 		return 0;
+ 
+ 	for (i = 0; i < btrfs_header_nritems(leaf); i++) {
++		u8 type;
++
+ 		btrfs_item_key_to_cpu(leaf, &key, i);
+ 		if (key.type != BTRFS_EXTENT_DATA_KEY)
+ 			continue;
+ 		ei = btrfs_item_ptr(leaf, i, struct btrfs_file_extent_item);
+-		if (btrfs_file_extent_type(leaf, ei) == BTRFS_FILE_EXTENT_REG &&
++		type = btrfs_file_extent_type(leaf, ei);
++
++		if ((type == BTRFS_FILE_EXTENT_REG ||
++		     type == BTRFS_FILE_EXTENT_PREALLOC) &&
+ 		    btrfs_file_extent_disk_bytenr(leaf, ei) == data_bytenr) {
+ 			found = true;
+ 			space_cache_ino = key.objectid;
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 8840a4fa81eb7..2663485c17cb8 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -1895,6 +1895,14 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data)
+ 		btrfs_scrub_cancel(fs_info);
+ 		btrfs_pause_balance(fs_info);
+ 
++		/*
++		 * Pause the qgroup rescan worker if it is running. We don't want
++		 * it to be still running after we are in RO mode, as after that,
++		 * by the time we unmount, it might have left a transaction open,
++		 * so we would leak the transaction and/or crash.
++		 */
++		btrfs_qgroup_wait_for_completion(fs_info, false);
++
+ 		ret = btrfs_commit_super(fs_info);
+ 		if (ret)
+ 			goto restore;
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index ea2bb4cb58909..40845428b739c 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -754,6 +754,7 @@ int btrfs_check_chunk_valid(struct extent_buffer *leaf,
+ {
+ 	struct btrfs_fs_info *fs_info = leaf->fs_info;
+ 	u64 length;
++	u64 chunk_end;
+ 	u64 stripe_len;
+ 	u16 num_stripes;
+ 	u16 sub_stripes;
+@@ -808,6 +809,12 @@ int btrfs_check_chunk_valid(struct extent_buffer *leaf,
+ 			  "invalid chunk length, have %llu", length);
+ 		return -EUCLEAN;
+ 	}
++	if (unlikely(check_add_overflow(logical, length, &chunk_end))) {
++		chunk_err(leaf, chunk, logical,
++"invalid chunk logical start and length, have logical start %llu length %llu",
++			  logical, length);
++		return -EUCLEAN;
++	}
+ 	if (!is_power_of_2(stripe_len) || stripe_len != BTRFS_STRIPE_LEN) {
+ 		chunk_err(leaf, chunk, logical,
+ 			  "invalid chunk stripe length: %llu",
+diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
+index 6ee849698962d..7b6db272fd0b8 100644
+--- a/fs/cifs/dfs_cache.c
++++ b/fs/cifs/dfs_cache.c
+@@ -1317,7 +1317,8 @@ void dfs_cache_del_vol(const char *fullpath)
+ 	vi = find_vol(fullpath);
+ 	spin_unlock(&vol_list_lock);
+ 
+-	kref_put(&vi->refcnt, vol_release);
++	if (!IS_ERR(vi))
++		kref_put(&vi->refcnt, vol_release);
+ }
+ 
+ /**
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index fc06c762fbbf6..c6f8bc6729aa1 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -3248,7 +3248,7 @@ close_exit:
+ 	free_rsp_buf(resp_buftype, rsp);
+ 
+ 	/* retry close in a worker thread if this one is interrupted */
+-	if (rc == -EINTR) {
++	if (is_interrupt_error(rc)) {
+ 		int tmp_rc;
+ 
+ 		tmp_rc = smb2_handle_cancelled_close(tcon, persistent_fid,
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index f2033e13a273c..a1dd7ca962c3f 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -1207,7 +1207,7 @@ static void ext4_fc_cleanup(journal_t *journal, int full)
+ 	list_splice_init(&sbi->s_fc_dentry_q[FC_Q_STAGING],
+ 				&sbi->s_fc_dentry_q[FC_Q_MAIN]);
+ 	list_splice_init(&sbi->s_fc_q[FC_Q_STAGING],
+-				&sbi->s_fc_q[FC_Q_STAGING]);
++				&sbi->s_fc_q[FC_Q_MAIN]);
+ 
+ 	ext4_clear_mount_flag(sb, EXT4_MF_FC_COMMITTING);
+ 	ext4_clear_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
+@@ -1269,14 +1269,14 @@ static int ext4_fc_replay_unlink(struct super_block *sb, struct ext4_fc_tl *tl)
+ 	entry.len = darg.dname_len;
+ 	inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL);
+ 
+-	if (IS_ERR_OR_NULL(inode)) {
++	if (IS_ERR(inode)) {
+ 		jbd_debug(1, "Inode %d not found", darg.ino);
+ 		return 0;
+ 	}
+ 
+ 	old_parent = ext4_iget(sb, darg.parent_ino,
+ 				EXT4_IGET_NORMAL);
+-	if (IS_ERR_OR_NULL(old_parent)) {
++	if (IS_ERR(old_parent)) {
+ 		jbd_debug(1, "Dir with inode  %d not found", darg.parent_ino);
+ 		iput(inode);
+ 		return 0;
+@@ -1361,7 +1361,7 @@ static int ext4_fc_replay_link(struct super_block *sb, struct ext4_fc_tl *tl)
+ 			darg.parent_ino, darg.dname_len);
+ 
+ 	inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL);
+-	if (IS_ERR_OR_NULL(inode)) {
++	if (IS_ERR(inode)) {
+ 		jbd_debug(1, "Inode not found.");
+ 		return 0;
+ 	}
+@@ -1417,10 +1417,11 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl)
+ 	trace_ext4_fc_replay(sb, tag, ino, 0, 0);
+ 
+ 	inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL);
+-	if (!IS_ERR_OR_NULL(inode)) {
++	if (!IS_ERR(inode)) {
+ 		ext4_ext_clear_bb(inode);
+ 		iput(inode);
+ 	}
++	inode = NULL;
+ 
+ 	ext4_fc_record_modified_inode(sb, ino);
+ 
+@@ -1463,7 +1464,7 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl)
+ 
+ 	/* Given that we just wrote the inode on disk, this SHOULD succeed. */
+ 	inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL);
+-	if (IS_ERR_OR_NULL(inode)) {
++	if (IS_ERR(inode)) {
+ 		jbd_debug(1, "Inode not found.");
+ 		return -EFSCORRUPTED;
+ 	}
+@@ -1515,7 +1516,7 @@ static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl)
+ 		goto out;
+ 
+ 	inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL);
+-	if (IS_ERR_OR_NULL(inode)) {
++	if (IS_ERR(inode)) {
+ 		jbd_debug(1, "inode %d not found.", darg.ino);
+ 		inode = NULL;
+ 		ret = -EINVAL;
+@@ -1528,7 +1529,7 @@ static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl)
+ 		 * dot and dot dot dirents are setup properly.
+ 		 */
+ 		dir = ext4_iget(sb, darg.parent_ino, EXT4_IGET_NORMAL);
+-		if (IS_ERR_OR_NULL(dir)) {
++		if (IS_ERR(dir)) {
+ 			jbd_debug(1, "Dir %d not found.", darg.ino);
+ 			goto out;
+ 		}
+@@ -1604,7 +1605,7 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 
+ 	inode = ext4_iget(sb, le32_to_cpu(fc_add_ex->fc_ino),
+ 				EXT4_IGET_NORMAL);
+-	if (IS_ERR_OR_NULL(inode)) {
++	if (IS_ERR(inode)) {
+ 		jbd_debug(1, "Inode not found.");
+ 		return 0;
+ 	}
+@@ -1728,7 +1729,7 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl)
+ 		le32_to_cpu(lrange->fc_ino), cur, remaining);
+ 
+ 	inode = ext4_iget(sb, le32_to_cpu(lrange->fc_ino), EXT4_IGET_NORMAL);
+-	if (IS_ERR_OR_NULL(inode)) {
++	if (IS_ERR(inode)) {
+ 		jbd_debug(1, "Inode %d not found", le32_to_cpu(lrange->fc_ino));
+ 		return 0;
+ 	}
+@@ -1809,7 +1810,7 @@ static void ext4_fc_set_bitmaps_and_counters(struct super_block *sb)
+ 	for (i = 0; i < state->fc_modified_inodes_used; i++) {
+ 		inode = ext4_iget(sb, state->fc_modified_inodes[i],
+ 			EXT4_IGET_NORMAL);
+-		if (IS_ERR_OR_NULL(inode)) {
++		if (IS_ERR(inode)) {
+ 			jbd_debug(1, "Inode %d not found.",
+ 				state->fc_modified_inodes[i]);
+ 			continue;
+@@ -1826,7 +1827,7 @@ static void ext4_fc_set_bitmaps_and_counters(struct super_block *sb)
+ 
+ 			if (ret > 0) {
+ 				path = ext4_find_extent(inode, map.m_lblk, NULL, 0);
+-				if (!IS_ERR_OR_NULL(path)) {
++				if (!IS_ERR(path)) {
+ 					for (j = 0; j < path->p_depth; j++)
+ 						ext4_mb_mark_bb(inode->i_sb,
+ 							path[j].p_block, 1, 1);
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 3ed8c048fb12c..b692355b8c770 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -809,7 +809,7 @@ static int ext4_sample_last_mounted(struct super_block *sb,
+ 	err = ext4_journal_get_write_access(handle, sbi->s_sbh);
+ 	if (err)
+ 		goto out_journal;
+-	strlcpy(sbi->s_es->s_last_mounted, cp,
++	strncpy(sbi->s_es->s_last_mounted, cp,
+ 		sizeof(sbi->s_es->s_last_mounted));
+ 	ext4_handle_dirty_super(handle, sb);
+ out_journal:
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index f0381876a7e5b..106bf149e8ca8 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -1157,7 +1157,10 @@ resizefs_out:
+ 			err = ext4_journal_get_write_access(handle, sbi->s_sbh);
+ 			if (err)
+ 				goto pwsalt_err_journal;
++			lock_buffer(sbi->s_sbh);
+ 			generate_random_uuid(sbi->s_es->s_encrypt_pw_salt);
++			ext4_superblock_csum_set(sb);
++			unlock_buffer(sbi->s_sbh);
+ 			err = ext4_handle_dirty_metadata(handle, NULL,
+ 							 sbi->s_sbh);
+ 		pwsalt_err_journal:
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 793fc7db9d28f..df0886e08a772 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3602,9 +3602,6 @@ static int ext4_setent(handle_t *handle, struct ext4_renament *ent,
+ 			return retval2;
+ 		}
+ 	}
+-	brelse(ent->bh);
+-	ent->bh = NULL;
+-
+ 	return retval;
+ }
+ 
+@@ -3803,6 +3800,7 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		}
+ 	}
+ 
++	old_file_type = old.de->file_type;
+ 	if (IS_DIRSYNC(old.dir) || IS_DIRSYNC(new.dir))
+ 		ext4_handle_sync(handle);
+ 
+@@ -3830,7 +3828,6 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	force_reread = (new.dir->i_ino == old.dir->i_ino &&
+ 			ext4_test_inode_flag(new.dir, EXT4_INODE_INLINE_DATA));
+ 
+-	old_file_type = old.de->file_type;
+ 	if (whiteout) {
+ 		/*
+ 		 * Do this before adding a new entry, so the old entry is sure
+@@ -3928,15 +3925,19 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	retval = 0;
+ 
+ end_rename:
+-	brelse(old.dir_bh);
+-	brelse(old.bh);
+-	brelse(new.bh);
+ 	if (whiteout) {
+-		if (retval)
++		if (retval) {
++			ext4_setent(handle, &old,
++				old.inode->i_ino, old_file_type);
+ 			drop_nlink(whiteout);
++		}
+ 		unlock_new_inode(whiteout);
+ 		iput(whiteout);
++
+ 	}
++	brelse(old.dir_bh);
++	brelse(old.bh);
++	brelse(new.bh);
+ 	if (handle)
+ 		ext4_journal_stop(handle);
+ 	return retval;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 4833b68f1a1cc..265aea2cd7bc8 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1009,6 +1009,8 @@ static int __io_sq_thread_acquire_mm(struct io_ring_ctx *ctx)
+ {
+ 	struct mm_struct *mm;
+ 
++	if (current->flags & PF_EXITING)
++		return -EFAULT;
+ 	if (current->mm)
+ 		return 0;
+ 
+@@ -6839,6 +6841,7 @@ static int io_sq_thread(void *data)
+ 
+ 		if (ret & SQT_SPIN) {
+ 			io_run_task_work();
++			io_sq_thread_drop_mm();
+ 			cond_resched();
+ 		} else if (ret == SQT_IDLE) {
+ 			if (kthread_should_park())
+@@ -6853,6 +6856,7 @@ static int io_sq_thread(void *data)
+ 	}
+ 
+ 	io_run_task_work();
++	io_sq_thread_drop_mm();
+ 
+ 	if (cur_css)
+ 		io_sq_thread_unassociate_blkcg();
+@@ -8817,6 +8821,15 @@ static void io_uring_attempt_task_drop(struct file *file)
+ 		io_uring_del_task_file(file);
+ }
+ 
++static void io_uring_remove_task_files(struct io_uring_task *tctx)
++{
++	struct file *file;
++	unsigned long index;
++
++	xa_for_each(&tctx->xa, index, file)
++		io_uring_del_task_file(file);
++}
++
+ void __io_uring_files_cancel(struct files_struct *files)
+ {
+ 	struct io_uring_task *tctx = current->io_uring;
+@@ -8825,16 +8838,12 @@ void __io_uring_files_cancel(struct files_struct *files)
+ 
+ 	/* make sure overflow events are dropped */
+ 	atomic_inc(&tctx->in_idle);
+-
+-	xa_for_each(&tctx->xa, index, file) {
+-		struct io_ring_ctx *ctx = file->private_data;
+-
+-		io_uring_cancel_task_requests(ctx, files);
+-		if (files)
+-			io_uring_del_task_file(file);
+-	}
+-
++	xa_for_each(&tctx->xa, index, file)
++		io_uring_cancel_task_requests(file->private_data, files);
+ 	atomic_dec(&tctx->in_idle);
++
++	if (files)
++		io_uring_remove_task_files(tctx);
+ }
+ 
+ static s64 tctx_inflight(struct io_uring_task *tctx)
+@@ -8897,6 +8906,8 @@ void __io_uring_task_cancel(void)
+ 
+ 	finish_wait(&tctx->wait, &wait);
+ 	atomic_dec(&tctx->in_idle);
++
++	io_uring_remove_task_files(tctx);
+ }
+ 
+ static int io_uring_flush(struct file *file, void *data)
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 93006abe7946a..c7fbb50a5aaa5 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1713,8 +1713,6 @@ static int can_umount(const struct path *path, int flags)
+ {
+ 	struct mount *mnt = real_mount(path->mnt);
+ 
+-	if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW))
+-		return -EINVAL;
+ 	if (!may_mount())
+ 		return -EPERM;
+ 	if (path->dentry != path->mnt->mnt_root)
+@@ -1728,6 +1726,7 @@ static int can_umount(const struct path *path, int flags)
+ 	return 0;
+ }
+ 
++// caller is responsible for flags being sane
+ int path_umount(struct path *path, int flags)
+ {
+ 	struct mount *mnt = real_mount(path->mnt);
+@@ -1749,6 +1748,10 @@ static int ksys_umount(char __user *name, int flags)
+ 	struct path path;
+ 	int ret;
+ 
++	// basic validity checks done first
++	if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW))
++		return -EINVAL;
++
+ 	if (!(flags & UMOUNT_NOFOLLOW))
+ 		lookup_flags |= LOOKUP_FOLLOW;
+ 	ret = user_path_at(AT_FDCWD, name, lookup_flags, &path);
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 816e1427f17eb..04bf8066980c1 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -1011,22 +1011,24 @@ nfs_delegation_find_inode_server(struct nfs_server *server,
+ 				 const struct nfs_fh *fhandle)
+ {
+ 	struct nfs_delegation *delegation;
+-	struct inode *freeme, *res = NULL;
++	struct super_block *freeme = NULL;
++	struct inode *res = NULL;
+ 
+ 	list_for_each_entry_rcu(delegation, &server->delegations, super_list) {
+ 		spin_lock(&delegation->lock);
+ 		if (delegation->inode != NULL &&
+ 		    !test_bit(NFS_DELEGATION_REVOKED, &delegation->flags) &&
+ 		    nfs_compare_fh(fhandle, &NFS_I(delegation->inode)->fh) == 0) {
+-			freeme = igrab(delegation->inode);
+-			if (freeme && nfs_sb_active(freeme->i_sb))
+-				res = freeme;
++			if (nfs_sb_active(server->super)) {
++				freeme = server->super;
++				res = igrab(delegation->inode);
++			}
+ 			spin_unlock(&delegation->lock);
+ 			if (res != NULL)
+ 				return res;
+ 			if (freeme) {
+ 				rcu_read_unlock();
+-				iput(freeme);
++				nfs_sb_deactive(freeme);
+ 				rcu_read_lock();
+ 			}
+ 			return ERR_PTR(-EAGAIN);
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 6673a77884d9d..98554dd18a715 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -142,9 +142,29 @@ struct nfs_fs_context {
+ 	} clone_data;
+ };
+ 
+-#define nfs_errorf(fc, fmt, ...) errorf(fc, fmt, ## __VA_ARGS__)
+-#define nfs_invalf(fc, fmt, ...) invalf(fc, fmt, ## __VA_ARGS__)
+-#define nfs_warnf(fc, fmt, ...) warnf(fc, fmt, ## __VA_ARGS__)
++#define nfs_errorf(fc, fmt, ...) ((fc)->log.log ?		\
++	errorf(fc, fmt, ## __VA_ARGS__) :			\
++	({ dprintk(fmt "\n", ## __VA_ARGS__); }))
++
++#define nfs_ferrorf(fc, fac, fmt, ...) ((fc)->log.log ?		\
++	errorf(fc, fmt, ## __VA_ARGS__) :			\
++	({ dfprintk(fac, fmt "\n", ## __VA_ARGS__); }))
++
++#define nfs_invalf(fc, fmt, ...) ((fc)->log.log ?		\
++	invalf(fc, fmt, ## __VA_ARGS__) :			\
++	({ dprintk(fmt "\n", ## __VA_ARGS__);  -EINVAL; }))
++
++#define nfs_finvalf(fc, fac, fmt, ...) ((fc)->log.log ?		\
++	invalf(fc, fmt, ## __VA_ARGS__) :			\
++	({ dfprintk(fac, fmt "\n", ## __VA_ARGS__);  -EINVAL; }))
++
++#define nfs_warnf(fc, fmt, ...) ((fc)->log.log ?		\
++	warnf(fc, fmt, ## __VA_ARGS__) :			\
++	({ dprintk(fmt "\n", ## __VA_ARGS__); }))
++
++#define nfs_fwarnf(fc, fac, fmt, ...) ((fc)->log.log ?		\
++	warnf(fc, fmt, ## __VA_ARGS__) :			\
++	({ dfprintk(fac, fmt "\n", ## __VA_ARGS__); }))
+ 
+ static inline struct nfs_fs_context *nfs_fc2context(const struct fs_context *fc)
+ {
+@@ -585,12 +605,14 @@ extern void nfs4_test_session_trunk(struct rpc_clnt *clnt,
+ 
+ static inline struct inode *nfs_igrab_and_active(struct inode *inode)
+ {
+-	inode = igrab(inode);
+-	if (inode != NULL && !nfs_sb_active(inode->i_sb)) {
+-		iput(inode);
+-		inode = NULL;
++	struct super_block *sb = inode->i_sb;
++
++	if (sb && nfs_sb_active(sb)) {
++		if (igrab(inode))
++			return inode;
++		nfs_sb_deactive(sb);
+ 	}
+-	return inode;
++	return NULL;
+ }
+ 
+ static inline void nfs_iput_and_deactive(struct inode *inode)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 6858b4bb556d5..0cd5b127f3bb9 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -3534,10 +3534,8 @@ static void nfs4_close_done(struct rpc_task *task, void *data)
+ 	trace_nfs4_close(state, &calldata->arg, &calldata->res, task->tk_status);
+ 
+ 	/* Handle Layoutreturn errors */
+-	if (pnfs_roc_done(task, calldata->inode,
+-				&calldata->arg.lr_args,
+-				&calldata->res.lr_res,
+-				&calldata->res.lr_ret) == -EAGAIN)
++	if (pnfs_roc_done(task, &calldata->arg.lr_args, &calldata->res.lr_res,
++			  &calldata->res.lr_ret) == -EAGAIN)
+ 		goto out_restart;
+ 
+ 	/* hmm. we are done with the inode, and in the process of freeing
+@@ -6379,10 +6377,8 @@ static void nfs4_delegreturn_done(struct rpc_task *task, void *calldata)
+ 	trace_nfs4_delegreturn_exit(&data->args, &data->res, task->tk_status);
+ 
+ 	/* Handle Layoutreturn errors */
+-	if (pnfs_roc_done(task, data->inode,
+-				&data->args.lr_args,
+-				&data->res.lr_res,
+-				&data->res.lr_ret) == -EAGAIN)
++	if (pnfs_roc_done(task, &data->args.lr_args, &data->res.lr_res,
++			  &data->res.lr_ret) == -EAGAIN)
+ 		goto out_restart;
+ 
+ 	switch (task->tk_status) {
+@@ -6436,10 +6432,10 @@ static void nfs4_delegreturn_release(void *calldata)
+ 	struct nfs4_delegreturndata *data = calldata;
+ 	struct inode *inode = data->inode;
+ 
++	if (data->lr.roc)
++		pnfs_roc_release(&data->lr.arg, &data->lr.res,
++				 data->res.lr_ret);
+ 	if (inode) {
+-		if (data->lr.roc)
+-			pnfs_roc_release(&data->lr.arg, &data->lr.res,
+-					data->res.lr_ret);
+ 		nfs_post_op_update_inode_force_wcc(inode, &data->fattr);
+ 		nfs_iput_and_deactive(inode);
+ 	}
+@@ -6515,16 +6511,14 @@ static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred,
+ 	nfs_fattr_init(data->res.fattr);
+ 	data->timestamp = jiffies;
+ 	data->rpc_status = 0;
+-	data->lr.roc = pnfs_roc(inode, &data->lr.arg, &data->lr.res, cred);
+ 	data->inode = nfs_igrab_and_active(inode);
+-	if (data->inode) {
++	if (data->inode || issync) {
++		data->lr.roc = pnfs_roc(inode, &data->lr.arg, &data->lr.res,
++					cred);
+ 		if (data->lr.roc) {
+ 			data->args.lr_args = &data->lr.arg;
+ 			data->res.lr_res = &data->lr.res;
+ 		}
+-	} else if (data->lr.roc) {
+-		pnfs_roc_release(&data->lr.arg, &data->lr.res, 0);
+-		data->lr.roc = false;
+ 	}
+ 
+ 	task_setup_data.callback_data = data;
+@@ -7106,9 +7100,9 @@ static int _nfs4_do_setlk(struct nfs4_state *state, int cmd, struct file_lock *f
+ 					data->arg.new_lock_owner, ret);
+ 	} else
+ 		data->cancelled = true;
++	trace_nfs4_set_lock(fl, state, &data->res.stateid, cmd, ret);
+ 	rpc_put_task(task);
+ 	dprintk("%s: done, ret = %d!\n", __func__, ret);
+-	trace_nfs4_set_lock(fl, state, &data->res.stateid, cmd, ret);
+ 	return ret;
+ }
+ 
+diff --git a/fs/nfs/nfs4super.c b/fs/nfs/nfs4super.c
+index 984cc42ee54d8..d09bcfd7db894 100644
+--- a/fs/nfs/nfs4super.c
++++ b/fs/nfs/nfs4super.c
+@@ -227,7 +227,7 @@ int nfs4_try_get_tree(struct fs_context *fc)
+ 			   fc, ctx->nfs_server.hostname,
+ 			   ctx->nfs_server.export_path);
+ 	if (err) {
+-		nfs_errorf(fc, "NFS4: Couldn't follow remote path");
++		nfs_ferrorf(fc, MOUNT, "NFS4: Couldn't follow remote path");
+ 		dfprintk(MOUNT, "<-- nfs4_try_get_tree() = %d [error]\n", err);
+ 	} else {
+ 		dfprintk(MOUNT, "<-- nfs4_try_get_tree() = 0\n");
+@@ -250,7 +250,7 @@ int nfs4_get_referral_tree(struct fs_context *fc)
+ 			    fc, ctx->nfs_server.hostname,
+ 			    ctx->nfs_server.export_path);
+ 	if (err) {
+-		nfs_errorf(fc, "NFS4: Couldn't follow remote path");
++		nfs_ferrorf(fc, MOUNT, "NFS4: Couldn't follow remote path");
+ 		dfprintk(MOUNT, "<-- nfs4_get_referral_tree() = %d [error]\n", err);
+ 	} else {
+ 		dfprintk(MOUNT, "<-- nfs4_get_referral_tree() = 0\n");
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 07f59dc8cb2e7..471bfa273dade 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1509,10 +1509,8 @@ out_noroc:
+ 	return false;
+ }
+ 
+-int pnfs_roc_done(struct rpc_task *task, struct inode *inode,
+-		struct nfs4_layoutreturn_args **argpp,
+-		struct nfs4_layoutreturn_res **respp,
+-		int *ret)
++int pnfs_roc_done(struct rpc_task *task, struct nfs4_layoutreturn_args **argpp,
++		  struct nfs4_layoutreturn_res **respp, int *ret)
+ {
+ 	struct nfs4_layoutreturn_args *arg = *argpp;
+ 	int retval = -EAGAIN;
+@@ -1545,7 +1543,7 @@ int pnfs_roc_done(struct rpc_task *task, struct inode *inode,
+ 		return 0;
+ 	case -NFS4ERR_OLD_STATEID:
+ 		if (!nfs4_layout_refresh_old_stateid(&arg->stateid,
+-					&arg->range, inode))
++						     &arg->range, arg->inode))
+ 			break;
+ 		*ret = -NFS4ERR_NOMATCHING_LAYOUT;
+ 		return -EAGAIN;
+@@ -1560,12 +1558,18 @@ void pnfs_roc_release(struct nfs4_layoutreturn_args *args,
+ 		int ret)
+ {
+ 	struct pnfs_layout_hdr *lo = args->layout;
++	struct inode *inode = args->inode;
+ 	const nfs4_stateid *arg_stateid = NULL;
+ 	const nfs4_stateid *res_stateid = NULL;
+ 	struct nfs4_xdr_opaque_data *ld_private = args->ld_private;
+ 
+ 	switch (ret) {
+ 	case -NFS4ERR_NOMATCHING_LAYOUT:
++		spin_lock(&inode->i_lock);
++		if (pnfs_layout_is_valid(lo) &&
++		    nfs4_stateid_match_other(&args->stateid, &lo->plh_stateid))
++			pnfs_set_plh_return_info(lo, args->range.iomode, 0);
++		spin_unlock(&inode->i_lock);
+ 		break;
+ 	case 0:
+ 		if (res->lrs_present)
+@@ -2015,6 +2019,27 @@ lookup_again:
+ 		goto lookup_again;
+ 	}
+ 
++	/*
++	 * Because we free lsegs when sending LAYOUTRETURN, we need to wait
++	 * for LAYOUTRETURN.
++	 */
++	if (test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags)) {
++		spin_unlock(&ino->i_lock);
++		dprintk("%s wait for layoutreturn\n", __func__);
++		lseg = ERR_PTR(pnfs_prepare_to_retry_layoutget(lo));
++		if (!IS_ERR(lseg)) {
++			pnfs_put_layout_hdr(lo);
++			dprintk("%s retrying\n", __func__);
++			trace_pnfs_update_layout(ino, pos, count, iomode, lo,
++						 lseg,
++						 PNFS_UPDATE_LAYOUT_RETRY);
++			goto lookup_again;
++		}
++		trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg,
++					 PNFS_UPDATE_LAYOUT_RETURN);
++		goto out_put_layout_hdr;
++	}
++
+ 	lseg = pnfs_find_lseg(lo, &arg, strict_iomode);
+ 	if (lseg) {
+ 		trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg,
+@@ -2067,28 +2092,6 @@ lookup_again:
+ 		nfs4_stateid_copy(&stateid, &lo->plh_stateid);
+ 	}
+ 
+-	/*
+-	 * Because we free lsegs before sending LAYOUTRETURN, we need to wait
+-	 * for LAYOUTRETURN even if first is true.
+-	 */
+-	if (test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags)) {
+-		spin_unlock(&ino->i_lock);
+-		dprintk("%s wait for layoutreturn\n", __func__);
+-		lseg = ERR_PTR(pnfs_prepare_to_retry_layoutget(lo));
+-		if (!IS_ERR(lseg)) {
+-			if (first)
+-				pnfs_clear_first_layoutget(lo);
+-			pnfs_put_layout_hdr(lo);
+-			dprintk("%s retrying\n", __func__);
+-			trace_pnfs_update_layout(ino, pos, count, iomode, lo,
+-					lseg, PNFS_UPDATE_LAYOUT_RETRY);
+-			goto lookup_again;
+-		}
+-		trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg,
+-				PNFS_UPDATE_LAYOUT_RETURN);
+-		goto out_put_layout_hdr;
+-	}
+-
+ 	if (pnfs_layoutgets_blocked(lo)) {
+ 		trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg,
+ 				PNFS_UPDATE_LAYOUT_BLOCKED);
+@@ -2242,6 +2245,7 @@ static void _lgopen_prepare_attached(struct nfs4_opendata *data,
+ 					     &rng, GFP_KERNEL);
+ 	if (!lgp) {
+ 		pnfs_clear_first_layoutget(lo);
++		nfs_layoutget_end(lo);
+ 		pnfs_put_layout_hdr(lo);
+ 		return;
+ 	}
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index 78c3893918486..132a345e93731 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -295,10 +295,8 @@ bool pnfs_roc(struct inode *ino,
+ 		struct nfs4_layoutreturn_args *args,
+ 		struct nfs4_layoutreturn_res *res,
+ 		const struct cred *cred);
+-int pnfs_roc_done(struct rpc_task *task, struct inode *inode,
+-		struct nfs4_layoutreturn_args **argpp,
+-		struct nfs4_layoutreturn_res **respp,
+-		int *ret);
++int pnfs_roc_done(struct rpc_task *task, struct nfs4_layoutreturn_args **argpp,
++		  struct nfs4_layoutreturn_res **respp, int *ret);
+ void pnfs_roc_release(struct nfs4_layoutreturn_args *args,
+ 		struct nfs4_layoutreturn_res *res,
+ 		int ret);
+@@ -770,7 +768,7 @@ pnfs_roc(struct inode *ino,
+ }
+ 
+ static inline int
+-pnfs_roc_done(struct rpc_task *task, struct inode *inode,
++pnfs_roc_done(struct rpc_task *task,
+ 		struct nfs4_layoutreturn_args **argpp,
+ 		struct nfs4_layoutreturn_res **respp,
+ 		int *ret)
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index 679767ac258d0..e3b25822e0bb1 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -78,22 +78,18 @@ void
+ pnfs_generic_clear_request_commit(struct nfs_page *req,
+ 				  struct nfs_commit_info *cinfo)
+ {
+-	struct pnfs_layout_segment *freeme = NULL;
++	struct pnfs_commit_bucket *bucket = NULL;
+ 
+ 	if (!test_and_clear_bit(PG_COMMIT_TO_DS, &req->wb_flags))
+ 		goto out;
+ 	cinfo->ds->nwritten--;
+-	if (list_is_singular(&req->wb_list)) {
+-		struct pnfs_commit_bucket *bucket;
+-
++	if (list_is_singular(&req->wb_list))
+ 		bucket = list_first_entry(&req->wb_list,
+-					  struct pnfs_commit_bucket,
+-					  written);
+-		freeme = pnfs_free_bucket_lseg(bucket);
+-	}
++					  struct pnfs_commit_bucket, written);
+ out:
+ 	nfs_request_remove_commit_list(req, cinfo);
+-	pnfs_put_lseg(freeme);
++	if (bucket)
++		pnfs_put_lseg(pnfs_free_bucket_lseg(bucket));
+ }
+ EXPORT_SYMBOL_GPL(pnfs_generic_clear_request_commit);
+ 
+@@ -407,12 +403,16 @@ pnfs_bucket_get_committing(struct list_head *head,
+ 			   struct pnfs_commit_bucket *bucket,
+ 			   struct nfs_commit_info *cinfo)
+ {
++	struct pnfs_layout_segment *lseg;
+ 	struct list_head *pos;
+ 
+ 	list_for_each(pos, &bucket->committing)
+ 		cinfo->ds->ncommitting--;
+ 	list_splice_init(&bucket->committing, head);
+-	return pnfs_free_bucket_lseg(bucket);
++	lseg = pnfs_free_bucket_lseg(bucket);
++	if (!lseg)
++		lseg = pnfs_get_lseg(bucket->lseg);
++	return lseg;
+ }
+ 
+ static struct nfs_commit_data *
+@@ -424,8 +424,6 @@ pnfs_bucket_fetch_commitdata(struct pnfs_commit_bucket *bucket,
+ 	if (!data)
+ 		return NULL;
+ 	data->lseg = pnfs_bucket_get_committing(&data->pages, bucket, cinfo);
+-	if (!data->lseg)
+-		data->lseg = pnfs_get_lseg(bucket->lseg);
+ 	return data;
+ }
+ 
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index ee5a235b30562..602e3a52884d8 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -1035,6 +1035,25 @@ struct clear_refs_private {
+ };
+ 
+ #ifdef CONFIG_MEM_SOFT_DIRTY
++
++#define is_cow_mapping(flags) (((flags) & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE)
++
++static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
++{
++	struct page *page;
++
++	if (!pte_write(pte))
++		return false;
++	if (!is_cow_mapping(vma->vm_flags))
++		return false;
++	if (likely(!atomic_read(&vma->vm_mm->has_pinned)))
++		return false;
++	page = vm_normal_page(vma, addr, pte);
++	if (!page)
++		return false;
++	return page_maybe_dma_pinned(page);
++}
++
+ static inline void clear_soft_dirty(struct vm_area_struct *vma,
+ 		unsigned long addr, pte_t *pte)
+ {
+@@ -1049,6 +1068,8 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
+ 	if (pte_present(ptent)) {
+ 		pte_t old_pte;
+ 
++		if (pte_is_pinned(vma, addr, ptent))
++			return;
+ 		old_pte = ptep_modify_prot_start(vma, addr, pte);
+ 		ptent = pte_wrprotect(old_pte);
+ 		ptent = pte_clear_soft_dirty(ptent);
+@@ -1215,41 +1236,26 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
+ 			.type = type,
+ 		};
+ 
++		if (mmap_write_lock_killable(mm)) {
++			count = -EINTR;
++			goto out_mm;
++		}
+ 		if (type == CLEAR_REFS_MM_HIWATER_RSS) {
+-			if (mmap_write_lock_killable(mm)) {
+-				count = -EINTR;
+-				goto out_mm;
+-			}
+-
+ 			/*
+ 			 * Writing 5 to /proc/pid/clear_refs resets the peak
+ 			 * resident set size to this mm's current rss value.
+ 			 */
+ 			reset_mm_hiwater_rss(mm);
+-			mmap_write_unlock(mm);
+-			goto out_mm;
++			goto out_unlock;
+ 		}
+ 
+-		if (mmap_read_lock_killable(mm)) {
+-			count = -EINTR;
+-			goto out_mm;
+-		}
+ 		tlb_gather_mmu(&tlb, mm, 0, -1);
+ 		if (type == CLEAR_REFS_SOFT_DIRTY) {
+ 			for (vma = mm->mmap; vma; vma = vma->vm_next) {
+ 				if (!(vma->vm_flags & VM_SOFTDIRTY))
+ 					continue;
+-				mmap_read_unlock(mm);
+-				if (mmap_write_lock_killable(mm)) {
+-					count = -EINTR;
+-					goto out_mm;
+-				}
+-				for (vma = mm->mmap; vma; vma = vma->vm_next) {
+-					vma->vm_flags &= ~VM_SOFTDIRTY;
+-					vma_set_page_prot(vma);
+-				}
+-				mmap_write_downgrade(mm);
+-				break;
++				vma->vm_flags &= ~VM_SOFTDIRTY;
++				vma_set_page_prot(vma);
+ 			}
+ 
+ 			mmu_notifier_range_init(&range, MMU_NOTIFY_SOFT_DIRTY,
+@@ -1261,7 +1267,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
+ 		if (type == CLEAR_REFS_SOFT_DIRTY)
+ 			mmu_notifier_invalidate_range_end(&range);
+ 		tlb_finish_mmu(&tlb, 0, -1);
+-		mmap_read_unlock(mm);
++out_unlock:
++		mmap_write_unlock(mm);
+ out_mm:
+ 		mmput(mm);
+ 	}
+diff --git a/fs/select.c b/fs/select.c
+index ebfebdfe5c69a..37aaa8317f3ae 100644
+--- a/fs/select.c
++++ b/fs/select.c
+@@ -1011,14 +1011,17 @@ static int do_sys_poll(struct pollfd __user *ufds, unsigned int nfds,
+ 	fdcount = do_poll(head, &table, end_time);
+ 	poll_freewait(&table);
+ 
++	if (!user_write_access_begin(ufds, nfds * sizeof(*ufds)))
++		goto out_fds;
++
+ 	for (walk = head; walk; walk = walk->next) {
+ 		struct pollfd *fds = walk->entries;
+ 		int j;
+ 
+-		for (j = 0; j < walk->len; j++, ufds++)
+-			if (__put_user(fds[j].revents, &ufds->revents))
+-				goto out_fds;
++		for (j = walk->len; j; fds++, ufds++, j--)
++			unsafe_put_user(fds->revents, &ufds->revents, Efault);
+   	}
++	user_write_access_end();
+ 
+ 	err = fdcount;
+ out_fds:
+@@ -1030,6 +1033,11 @@ out_fds:
+ 	}
+ 
+ 	return err;
++
++Efault:
++	user_write_access_end();
++	err = -EFAULT;
++	goto out_fds;
+ }
+ 
+ static long do_restart_poll(struct restart_block *restart_block)
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index 39263c6b52e1a..5b1dc1ad4fb32 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -885,6 +885,13 @@ static inline int acpi_device_modalias(struct device *dev,
+ 	return -ENODEV;
+ }
+ 
++static inline struct platform_device *
++acpi_create_platform_device(struct acpi_device *adev,
++			    struct property_entry *properties)
++{
++	return NULL;
++}
++
+ static inline bool acpi_dma_supported(struct acpi_device *adev)
+ {
+ 	return false;
+diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
+index 74c6c0486eed7..555ab0fddbef7 100644
+--- a/include/linux/compiler-gcc.h
++++ b/include/linux/compiler-gcc.h
+@@ -13,6 +13,12 @@
+ /* https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 */
+ #if GCC_VERSION < 40900
+ # error Sorry, your version of GCC is too old - please use 4.9 or newer.
++#elif defined(CONFIG_ARM64) && GCC_VERSION < 50100
++/*
++ * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63293
++ * https://lore.kernel.org/r/20210107111841.GN1551@shell.armlinux.org.uk
++ */
++# error Sorry, your version of GCC is too old - please use 5.1 or newer.
+ #endif
+ 
+ /*
+diff --git a/include/linux/dm-bufio.h b/include/linux/dm-bufio.h
+index 29d255fdd5d64..90bd558a17f51 100644
+--- a/include/linux/dm-bufio.h
++++ b/include/linux/dm-bufio.h
+@@ -150,6 +150,7 @@ void dm_bufio_set_minimum_buffers(struct dm_bufio_client *c, unsigned n);
+ 
+ unsigned dm_bufio_get_block_size(struct dm_bufio_client *c);
+ sector_t dm_bufio_get_device_size(struct dm_bufio_client *c);
++struct dm_io_client *dm_bufio_get_dm_io_client(struct dm_bufio_client *c);
+ sector_t dm_bufio_get_block_number(struct dm_buffer *b);
+ void *dm_bufio_get_block_data(struct dm_buffer *b);
+ void *dm_bufio_get_aux_data(struct dm_buffer *b);
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 6cdd0152c253a..5c119d6cecf14 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -86,6 +86,12 @@ void rcu_sched_clock_irq(int user);
+ void rcu_report_dead(unsigned int cpu);
+ void rcutree_migrate_callbacks(int cpu);
+ 
++#ifdef CONFIG_TASKS_RCU_GENERIC
++void rcu_init_tasks_generic(void);
++#else
++static inline void rcu_init_tasks_generic(void) { }
++#endif
++
+ #ifdef CONFIG_RCU_STALL_COMMON
+ void rcu_sysrq_start(void);
+ void rcu_sysrq_end(void);
+diff --git a/init/main.c b/init/main.c
+index 32b2a8affafd1..9d964511fe0c2 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -1512,6 +1512,7 @@ static noinline void __init kernel_init_freeable(void)
+ 
+ 	init_mm_internals();
+ 
++	rcu_init_tasks_generic();
+ 	do_pre_smp_initcalls();
+ 	lockup_detector_init();
+ 
+diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
+index 5b6af30bfbcd8..f3d3a562a802a 100644
+--- a/kernel/bpf/task_iter.c
++++ b/kernel/bpf/task_iter.c
+@@ -136,8 +136,7 @@ struct bpf_iter_seq_task_file_info {
+ };
+ 
+ static struct file *
+-task_file_seq_get_next(struct bpf_iter_seq_task_file_info *info,
+-		       struct task_struct **task, struct files_struct **fstruct)
++task_file_seq_get_next(struct bpf_iter_seq_task_file_info *info)
+ {
+ 	struct pid_namespace *ns = info->common.ns;
+ 	u32 curr_tid = info->tid, max_fds;
+@@ -150,26 +149,29 @@ task_file_seq_get_next(struct bpf_iter_seq_task_file_info *info,
+ 	 * Otherwise, it does not hold any reference.
+ 	 */
+ again:
+-	if (*task) {
+-		curr_task = *task;
+-		curr_files = *fstruct;
++	if (info->task) {
++		curr_task = info->task;
++		curr_files = info->files;
+ 		curr_fd = info->fd;
+ 	} else {
+ 		curr_task = task_seq_get_next(ns, &curr_tid, true);
+-		if (!curr_task)
++		if (!curr_task) {
++			info->task = NULL;
++			info->files = NULL;
++			info->tid = curr_tid;
+ 			return NULL;
++		}
+ 
+ 		curr_files = get_files_struct(curr_task);
+ 		if (!curr_files) {
+ 			put_task_struct(curr_task);
+-			curr_tid = ++(info->tid);
++			curr_tid = curr_tid + 1;
+ 			info->fd = 0;
+ 			goto again;
+ 		}
+ 
+-		/* set *fstruct, *task and info->tid */
+-		*fstruct = curr_files;
+-		*task = curr_task;
++		info->files = curr_files;
++		info->task = curr_task;
+ 		if (curr_tid == info->tid) {
+ 			curr_fd = info->fd;
+ 		} else {
+@@ -199,8 +201,8 @@ again:
+ 	rcu_read_unlock();
+ 	put_files_struct(curr_files);
+ 	put_task_struct(curr_task);
+-	*task = NULL;
+-	*fstruct = NULL;
++	info->task = NULL;
++	info->files = NULL;
+ 	info->fd = 0;
+ 	curr_tid = ++(info->tid);
+ 	goto again;
+@@ -209,21 +211,13 @@ again:
+ static void *task_file_seq_start(struct seq_file *seq, loff_t *pos)
+ {
+ 	struct bpf_iter_seq_task_file_info *info = seq->private;
+-	struct files_struct *files = NULL;
+-	struct task_struct *task = NULL;
+ 	struct file *file;
+ 
+-	file = task_file_seq_get_next(info, &task, &files);
+-	if (!file) {
+-		info->files = NULL;
+-		info->task = NULL;
+-		return NULL;
+-	}
+-
+-	if (*pos == 0)
++	info->task = NULL;
++	info->files = NULL;
++	file = task_file_seq_get_next(info);
++	if (file && *pos == 0)
+ 		++*pos;
+-	info->task = task;
+-	info->files = files;
+ 
+ 	return file;
+ }
+@@ -231,24 +225,11 @@ static void *task_file_seq_start(struct seq_file *seq, loff_t *pos)
+ static void *task_file_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ {
+ 	struct bpf_iter_seq_task_file_info *info = seq->private;
+-	struct files_struct *files = info->files;
+-	struct task_struct *task = info->task;
+-	struct file *file;
+ 
+ 	++*pos;
+ 	++info->fd;
+ 	fput((struct file *)v);
+-	file = task_file_seq_get_next(info, &task, &files);
+-	if (!file) {
+-		info->files = NULL;
+-		info->task = NULL;
+-		return NULL;
+-	}
+-
+-	info->task = task;
+-	info->files = files;
+-
+-	return file;
++	return task_file_seq_get_next(info);
+ }
+ 
+ struct bpf_iter__task_file {
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index d5d9f2d03e8a0..73bbe792fe1e8 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -241,7 +241,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
+ 	}
+ }
+ 
+-/* Spawn RCU-tasks grace-period kthread, e.g., at core_initcall() time. */
++/* Spawn RCU-tasks grace-period kthread. */
+ static void __init rcu_spawn_tasks_kthread_generic(struct rcu_tasks *rtp)
+ {
+ 	struct task_struct *t;
+@@ -569,7 +569,6 @@ static int __init rcu_spawn_tasks_kthread(void)
+ 	rcu_spawn_tasks_kthread_generic(&rcu_tasks);
+ 	return 0;
+ }
+-core_initcall(rcu_spawn_tasks_kthread);
+ 
+ #ifndef CONFIG_TINY_RCU
+ static void show_rcu_tasks_classic_gp_kthread(void)
+@@ -697,7 +696,6 @@ static int __init rcu_spawn_tasks_rude_kthread(void)
+ 	rcu_spawn_tasks_kthread_generic(&rcu_tasks_rude);
+ 	return 0;
+ }
+-core_initcall(rcu_spawn_tasks_rude_kthread);
+ 
+ #ifndef CONFIG_TINY_RCU
+ static void show_rcu_tasks_rude_gp_kthread(void)
+@@ -975,6 +973,11 @@ static void rcu_tasks_trace_pregp_step(void)
+ static void rcu_tasks_trace_pertask(struct task_struct *t,
+ 				    struct list_head *hop)
+ {
++	// During early boot when there is only the one boot CPU, there
++	// is no idle task for the other CPUs. Just return.
++	if (unlikely(t == NULL))
++		return;
++
+ 	WRITE_ONCE(t->trc_reader_special.b.need_qs, false);
+ 	WRITE_ONCE(t->trc_reader_checked, false);
+ 	t->trc_ipi_to_cpu = -1;
+@@ -1200,7 +1203,6 @@ static int __init rcu_spawn_tasks_trace_kthread(void)
+ 	rcu_spawn_tasks_kthread_generic(&rcu_tasks_trace);
+ 	return 0;
+ }
+-core_initcall(rcu_spawn_tasks_trace_kthread);
+ 
+ #ifndef CONFIG_TINY_RCU
+ static void show_rcu_tasks_trace_gp_kthread(void)
+@@ -1229,6 +1231,21 @@ void show_rcu_tasks_gp_kthreads(void)
+ }
+ #endif /* #ifndef CONFIG_TINY_RCU */
+ 
++void __init rcu_init_tasks_generic(void)
++{
++#ifdef CONFIG_TASKS_RCU
++	rcu_spawn_tasks_kthread();
++#endif
++
++#ifdef CONFIG_TASKS_RUDE_RCU
++	rcu_spawn_tasks_rude_kthread();
++#endif
++
++#ifdef CONFIG_TASKS_TRACE_RCU
++	rcu_spawn_tasks_trace_kthread();
++#endif
++}
++
+ #else /* #ifdef CONFIG_TASKS_RCU_GENERIC */
+ static inline void rcu_tasks_bootup_oddness(void) {}
+ void show_rcu_tasks_gp_kthreads(void) {}
+diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
+index e1bf5228fb692..29db703f68806 100644
+--- a/kernel/trace/Kconfig
++++ b/kernel/trace/Kconfig
+@@ -531,7 +531,7 @@ config KPROBE_EVENTS
+ config KPROBE_EVENTS_ON_NOTRACE
+ 	bool "Do NOT protect notrace function from kprobe events"
+ 	depends on KPROBE_EVENTS
+-	depends on KPROBES_ON_FTRACE
++	depends on DYNAMIC_FTRACE
+ 	default n
+ 	help
+ 	  This is only for the developers who want to debug ftrace itself
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index b29f92c51b1a4..5fff39541b8ae 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -434,7 +434,7 @@ static int disable_trace_kprobe(struct trace_event_call *call,
+ 	return 0;
+ }
+ 
+-#if defined(CONFIG_KPROBES_ON_FTRACE) && \
++#if defined(CONFIG_DYNAMIC_FTRACE) && \
+ 	!defined(CONFIG_KPROBE_EVENTS_ON_NOTRACE)
+ static bool __within_notrace_func(unsigned long addr)
+ {
+diff --git a/lib/raid6/Makefile b/lib/raid6/Makefile
+index b4c0df6d706dc..c770570bfe4f2 100644
+--- a/lib/raid6/Makefile
++++ b/lib/raid6/Makefile
+@@ -48,7 +48,7 @@ endif
+ endif
+ 
+ quiet_cmd_unroll = UNROLL  $@
+-      cmd_unroll = $(AWK) -f$(srctree)/$(src)/unroll.awk -vN=$* < $< > $@
++      cmd_unroll = $(AWK) -v N=$* -f $(srctree)/$(src)/unroll.awk < $< > $@
+ 
+ targets += int1.c int2.c int4.c int8.c int16.c int32.c
+ $(obj)/int%.c: $(src)/int.uc $(src)/unroll.awk FORCE
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 1fd11f96a707a..9a3f06cdcc2a8 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4372,7 +4372,7 @@ retry:
+ 		 * So we need to block hugepage fault by PG_hwpoison bit check.
+ 		 */
+ 		if (unlikely(PageHWPoison(page))) {
+-			ret = VM_FAULT_HWPOISON |
++			ret = VM_FAULT_HWPOISON_LARGE |
+ 				VM_FAULT_SET_HINDEX(hstate_index(h));
+ 			goto backout_unlocked;
+ 		}
+diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
+index 702250f148e73..c90d722c61817 100644
+--- a/mm/process_vm_access.c
++++ b/mm/process_vm_access.c
+@@ -9,6 +9,7 @@
+ #include <linux/mm.h>
+ #include <linux/uio.h>
+ #include <linux/sched.h>
++#include <linux/compat.h>
+ #include <linux/sched/mm.h>
+ #include <linux/highmem.h>
+ #include <linux/ptrace.h>
+diff --git a/mm/slub.c b/mm/slub.c
+index 34dcc09e2ec9b..3f4303f4b657d 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -1971,7 +1971,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
+ 
+ 		t = acquire_slab(s, n, page, object == NULL, &objects);
+ 		if (!t)
+-			break;
++			continue; /* cmpxchg raced */
+ 
+ 		available += objects;
+ 		if (!object) {
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 279dc0c96568c..fff03a331314f 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -2405,8 +2405,10 @@ void *vmap(struct page **pages, unsigned int count,
+ 		return NULL;
+ 	}
+ 
+-	if (flags & VM_MAP_PUT_PAGES)
++	if (flags & VM_MAP_PUT_PAGES) {
+ 		area->pages = pages;
++		area->nr_pages = count;
++	}
+ 	return area->addr;
+ }
+ EXPORT_SYMBOL(vmap);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 0ec6321e98878..4c5a9b2286bf5 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1240,6 +1240,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
+ 			if (!PageSwapCache(page)) {
+ 				if (!(sc->gfp_mask & __GFP_IO))
+ 					goto keep_locked;
++				if (page_maybe_dma_pinned(page))
++					goto keep_locked;
+ 				if (PageTransHuge(page)) {
+ 					/* cannot split THP, skip it */
+ 					if (!can_split_huge_page(page, NULL))
+diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
+index 7d01086b38f0f..7cd1d31fb2b88 100644
+--- a/net/netfilter/ipset/ip_set_hash_gen.h
++++ b/net/netfilter/ipset/ip_set_hash_gen.h
+@@ -630,7 +630,7 @@ mtype_resize(struct ip_set *set, bool retried)
+ 	struct htype *h = set->data;
+ 	struct htable *t, *orig;
+ 	u8 htable_bits;
+-	size_t dsize = set->dsize;
++	size_t hsize, dsize = set->dsize;
+ #ifdef IP_SET_HASH_WITH_NETS
+ 	u8 flags;
+ 	struct mtype_elem *tmp;
+@@ -654,14 +654,12 @@ mtype_resize(struct ip_set *set, bool retried)
+ retry:
+ 	ret = 0;
+ 	htable_bits++;
+-	if (!htable_bits) {
+-		/* In case we have plenty of memory :-) */
+-		pr_warn("Cannot increase the hashsize of set %s further\n",
+-			set->name);
+-		ret = -IPSET_ERR_HASH_FULL;
+-		goto out;
+-	}
+-	t = ip_set_alloc(htable_size(htable_bits));
++	if (!htable_bits)
++		goto hbwarn;
++	hsize = htable_size(htable_bits);
++	if (!hsize)
++		goto hbwarn;
++	t = ip_set_alloc(hsize);
+ 	if (!t) {
+ 		ret = -ENOMEM;
+ 		goto out;
+@@ -803,6 +801,12 @@ cleanup:
+ 	if (ret == -EAGAIN)
+ 		goto retry;
+ 	goto out;
++
++hbwarn:
++	/* In case we have plenty of memory :-) */
++	pr_warn("Cannot increase the hashsize of set %s further\n", set->name);
++	ret = -IPSET_ERR_HASH_FULL;
++	goto out;
+ }
+ 
+ /* Get the current number of elements and ext_size in the set  */
+diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c
+index 46c5557c1fecf..0ee702d374b02 100644
+--- a/net/netfilter/nf_conntrack_standalone.c
++++ b/net/netfilter/nf_conntrack_standalone.c
+@@ -523,6 +523,9 @@ nf_conntrack_hash_sysctl(struct ctl_table *table, int write,
+ {
+ 	int ret;
+ 
++	/* module_param hashsize could have changed value */
++	nf_conntrack_htable_size_user = nf_conntrack_htable_size;
++
+ 	ret = proc_dointvec(table, write, buffer, lenp, ppos);
+ 	if (ret < 0 || !write)
+ 		return ret;
+diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
+index ea923f8cf9c42..b7c3c902290f1 100644
+--- a/net/netfilter/nf_nat_core.c
++++ b/net/netfilter/nf_nat_core.c
+@@ -1174,6 +1174,7 @@ static int __init nf_nat_init(void)
+ 	ret = register_pernet_subsys(&nat_net_ops);
+ 	if (ret < 0) {
+ 		nf_ct_extend_unregister(&nat_extend);
++		kvfree(nf_nat_bysource);
+ 		return ret;
+ 	}
+ 
+diff --git a/net/sunrpc/addr.c b/net/sunrpc/addr.c
+index 010dcb876f9d7..6e4dbd577a39f 100644
+--- a/net/sunrpc/addr.c
++++ b/net/sunrpc/addr.c
+@@ -185,7 +185,7 @@ static int rpc_parse_scope_id(struct net *net, const char *buf,
+ 			scope_id = dev->ifindex;
+ 			dev_put(dev);
+ 		} else {
+-			if (kstrtou32(p, 10, &scope_id) == 0) {
++			if (kstrtou32(p, 10, &scope_id) != 0) {
+ 				kfree(p);
+ 				return 0;
+ 			}
+diff --git a/net/wireless/Kconfig b/net/wireless/Kconfig
+index 27026f587fa61..f620acd2a0f5e 100644
+--- a/net/wireless/Kconfig
++++ b/net/wireless/Kconfig
+@@ -21,6 +21,7 @@ config CFG80211
+ 	tristate "cfg80211 - wireless configuration API"
+ 	depends on RFKILL || !RFKILL
+ 	select FW_LOADER
++	select CRC32
+ 	# may need to update this when certificates are changed and are
+ 	# using a different algorithm, though right now they shouldn't
+ 	# (this is here rather than below to allow it to be a module)
+diff --git a/scripts/kconfig/Makefile b/scripts/kconfig/Makefile
+index e46df0a2d4f9d..2c40e68853dde 100644
+--- a/scripts/kconfig/Makefile
++++ b/scripts/kconfig/Makefile
+@@ -94,16 +94,6 @@ configfiles=$(wildcard $(srctree)/kernel/configs/$@ $(srctree)/arch/$(SRCARCH)/c
+ 	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh -m .config $(configfiles)
+ 	$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig
+ 
+-PHONY += kvmconfig
+-kvmconfig: kvm_guest.config
+-	@echo >&2 "WARNING: 'make $@' will be removed after Linux 5.10"
+-	@echo >&2 "         Please use 'make $<' instead."
+-
+-PHONY += xenconfig
+-xenconfig: xen.config
+-	@echo >&2 "WARNING: 'make $@' will be removed after Linux 5.10"
+-	@echo >&2 "         Please use 'make $<' instead."
+-
+ PHONY += tinyconfig
+ tinyconfig:
+ 	$(Q)$(MAKE) -f $(srctree)/Makefile allnoconfig tiny.config
+diff --git a/security/lsm_audit.c b/security/lsm_audit.c
+index 53d0d183db8f8..08d5ef49f2e47 100644
+--- a/security/lsm_audit.c
++++ b/security/lsm_audit.c
+@@ -278,7 +278,9 @@ static void dump_common_audit_data(struct audit_buffer *ab,
+ 		struct inode *inode;
+ 
+ 		audit_log_format(ab, " name=");
++		spin_lock(&a->u.dentry->d_lock);
+ 		audit_log_untrustedstring(ab, a->u.dentry->d_name.name);
++		spin_unlock(&a->u.dentry->d_lock);
+ 
+ 		inode = d_backing_inode(a->u.dentry);
+ 		if (inode) {
+@@ -297,8 +299,9 @@ static void dump_common_audit_data(struct audit_buffer *ab,
+ 		dentry = d_find_alias(inode);
+ 		if (dentry) {
+ 			audit_log_format(ab, " name=");
+-			audit_log_untrustedstring(ab,
+-					 dentry->d_name.name);
++			spin_lock(&dentry->d_lock);
++			audit_log_untrustedstring(ab, dentry->d_name.name);
++			spin_unlock(&dentry->d_lock);
+ 			dput(dentry);
+ 		}
+ 		audit_log_format(ab, " dev=");
+diff --git a/sound/firewire/fireface/ff-transaction.c b/sound/firewire/fireface/ff-transaction.c
+index 7f82762ccc8c8..ee7122c461d46 100644
+--- a/sound/firewire/fireface/ff-transaction.c
++++ b/sound/firewire/fireface/ff-transaction.c
+@@ -88,7 +88,7 @@ static void transmit_midi_msg(struct snd_ff *ff, unsigned int port)
+ 
+ 	/* Set interval to next transaction. */
+ 	ff->next_ktime[port] = ktime_add_ns(ktime_get(),
+-				ff->rx_bytes[port] * 8 * NSEC_PER_SEC / 31250);
++			ff->rx_bytes[port] * 8 * (NSEC_PER_SEC / 31250));
+ 
+ 	if (quad_count == 1)
+ 		tcode = TCODE_WRITE_QUADLET_REQUEST;
+diff --git a/sound/firewire/tascam/tascam-transaction.c b/sound/firewire/tascam/tascam-transaction.c
+index 90288b4b46379..a073cece4a7d5 100644
+--- a/sound/firewire/tascam/tascam-transaction.c
++++ b/sound/firewire/tascam/tascam-transaction.c
+@@ -209,7 +209,7 @@ static void midi_port_work(struct work_struct *work)
+ 
+ 	/* Set interval to next transaction. */
+ 	port->next_ktime = ktime_add_ns(ktime_get(),
+-				port->consume_bytes * 8 * NSEC_PER_SEC / 31250);
++			port->consume_bytes * 8 * (NSEC_PER_SEC / 31250));
+ 
+ 	/* Start this transaction. */
+ 	port->idling = false;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 3c1d2a3fb1a4f..dd82ff2bd5d65 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7970,6 +7970,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x8780, "HP ZBook Fury 17 G7 Mobile Workstation",
++		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
++	SND_PCI_QUIRK(0x103c, 0x8783, "HP ZBook Fury 15 G7 Mobile Workstation",
++		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+diff --git a/sound/soc/amd/renoir/rn-pci-acp3x.c b/sound/soc/amd/renoir/rn-pci-acp3x.c
+index 338b78c514ec9..6f153856657ae 100644
+--- a/sound/soc/amd/renoir/rn-pci-acp3x.c
++++ b/sound/soc/amd/renoir/rn-pci-acp3x.c
+@@ -171,6 +171,13 @@ static const struct dmi_system_id rn_acp_quirk_table[] = {
+ 			DMI_EXACT_MATCH(DMI_BOARD_NAME, "LNVNB161216"),
+ 		}
+ 	},
++	{
++		/* Lenovo ThinkPad X395 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "20NLCTO1WW"),
++		}
++	},
+ 	{}
+ };
+ 
+diff --git a/sound/soc/intel/skylake/cnl-sst.c b/sound/soc/intel/skylake/cnl-sst.c
+index fcd8dff27ae8e..1275c149acc02 100644
+--- a/sound/soc/intel/skylake/cnl-sst.c
++++ b/sound/soc/intel/skylake/cnl-sst.c
+@@ -224,6 +224,7 @@ static int cnl_set_dsp_D0(struct sst_dsp *ctx, unsigned int core_id)
+ 				"dsp boot timeout, status=%#x error=%#x\n",
+ 				sst_dsp_shim_read(ctx, CNL_ADSP_FW_STATUS),
+ 				sst_dsp_shim_read(ctx, CNL_ADSP_ERROR_CODE));
++			ret = -ETIMEDOUT;
+ 			goto err;
+ 		}
+ 	} else {
+diff --git a/sound/soc/meson/axg-tdm-interface.c b/sound/soc/meson/axg-tdm-interface.c
+index c8664ab80d45a..87cac440b3693 100644
+--- a/sound/soc/meson/axg-tdm-interface.c
++++ b/sound/soc/meson/axg-tdm-interface.c
+@@ -467,8 +467,20 @@ static int axg_tdm_iface_set_bias_level(struct snd_soc_component *component,
+ 	return ret;
+ }
+ 
++static const struct snd_soc_dapm_widget axg_tdm_iface_dapm_widgets[] = {
++	SND_SOC_DAPM_SIGGEN("Playback Signal"),
++};
++
++static const struct snd_soc_dapm_route axg_tdm_iface_dapm_routes[] = {
++	{ "Loopback", NULL, "Playback Signal" },
++};
++
+ static const struct snd_soc_component_driver axg_tdm_iface_component_drv = {
+-	.set_bias_level	= axg_tdm_iface_set_bias_level,
++	.dapm_widgets		= axg_tdm_iface_dapm_widgets,
++	.num_dapm_widgets	= ARRAY_SIZE(axg_tdm_iface_dapm_widgets),
++	.dapm_routes		= axg_tdm_iface_dapm_routes,
++	.num_dapm_routes	= ARRAY_SIZE(axg_tdm_iface_dapm_routes),
++	.set_bias_level		= axg_tdm_iface_set_bias_level,
+ };
+ 
+ static const struct of_device_id axg_tdm_iface_of_match[] = {
+diff --git a/sound/soc/meson/axg-tdmin.c b/sound/soc/meson/axg-tdmin.c
+index 88ed95ae886bb..b4faf9d5c1aad 100644
+--- a/sound/soc/meson/axg-tdmin.c
++++ b/sound/soc/meson/axg-tdmin.c
+@@ -224,15 +224,6 @@ static const struct axg_tdm_formatter_ops axg_tdmin_ops = {
+ };
+ 
+ static const struct axg_tdm_formatter_driver axg_tdmin_drv = {
+-	.component_drv	= &axg_tdmin_component_drv,
+-	.regmap_cfg	= &axg_tdmin_regmap_cfg,
+-	.ops		= &axg_tdmin_ops,
+-	.quirks		= &(const struct axg_tdm_formatter_hw) {
+-		.skew_offset	= 2,
+-	},
+-};
+-
+-static const struct axg_tdm_formatter_driver g12a_tdmin_drv = {
+ 	.component_drv	= &axg_tdmin_component_drv,
+ 	.regmap_cfg	= &axg_tdmin_regmap_cfg,
+ 	.ops		= &axg_tdmin_ops,
+@@ -247,10 +238,10 @@ static const struct of_device_id axg_tdmin_of_match[] = {
+ 		.data = &axg_tdmin_drv,
+ 	}, {
+ 		.compatible = "amlogic,g12a-tdmin",
+-		.data = &g12a_tdmin_drv,
++		.data = &axg_tdmin_drv,
+ 	}, {
+ 		.compatible = "amlogic,sm1-tdmin",
+-		.data = &g12a_tdmin_drv,
++		.data = &axg_tdmin_drv,
+ 	}, {}
+ };
+ MODULE_DEVICE_TABLE(of, axg_tdmin_of_match);
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 7f87b449f950b..148c095df27b1 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -2486,6 +2486,7 @@ void snd_soc_dapm_free_widget(struct snd_soc_dapm_widget *w)
+ 	enum snd_soc_dapm_direction dir;
+ 
+ 	list_del(&w->list);
++	list_del(&w->dirty);
+ 	/*
+ 	 * remove source and sink paths associated to this widget.
+ 	 * While removing the path, remove reference to it from both
+diff --git a/tools/bootconfig/scripts/bconf2ftrace.sh b/tools/bootconfig/scripts/bconf2ftrace.sh
+index 595e164dc352f..feb30c2c78815 100755
+--- a/tools/bootconfig/scripts/bconf2ftrace.sh
++++ b/tools/bootconfig/scripts/bconf2ftrace.sh
+@@ -152,6 +152,7 @@ setup_instance() { # [instance]
+ 	set_array_of ${instance}.options ${instancedir}/trace_options
+ 	set_value_of ${instance}.trace_clock ${instancedir}/trace_clock
+ 	set_value_of ${instance}.cpumask ${instancedir}/tracing_cpumask
++	set_value_of ${instance}.tracing_on ${instancedir}/tracing_on
+ 	set_value_of ${instance}.tracer ${instancedir}/current_tracer
+ 	set_array_of ${instance}.ftrace.filters \
+ 		${instancedir}/set_ftrace_filter
+diff --git a/tools/bootconfig/scripts/ftrace2bconf.sh b/tools/bootconfig/scripts/ftrace2bconf.sh
+index 6c0d4b61e0c26..a0c3bcc6da4f3 100755
+--- a/tools/bootconfig/scripts/ftrace2bconf.sh
++++ b/tools/bootconfig/scripts/ftrace2bconf.sh
+@@ -221,6 +221,10 @@ instance_options() { # [instance-name]
+ 	if [ `echo $val | sed -e s/f//g`x != x ]; then
+ 		emit_kv $PREFIX.cpumask = $val
+ 	fi
++	val=`cat $INSTANCE/tracing_on`
++	if [ `echo $val | sed -e s/f//g`x != x ]; then
++		emit_kv $PREFIX.tracing_on = $val
++	fi
+ 
+ 	val=
+ 	for i in `cat $INSTANCE/set_event`; do
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 15385ea00190f..74bf480aa4f05 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2973,7 +2973,7 @@ int machines__for_each_thread(struct machines *machines,
+ 
+ pid_t machine__get_current_tid(struct machine *machine, int cpu)
+ {
+-	int nr_cpus = min(machine->env->nr_cpus_online, MAX_NR_CPUS);
++	int nr_cpus = min(machine->env->nr_cpus_avail, MAX_NR_CPUS);
+ 
+ 	if (cpu < 0 || cpu >= nr_cpus || !machine->current_tid)
+ 		return -1;
+@@ -2985,7 +2985,7 @@ int machine__set_current_tid(struct machine *machine, int cpu, pid_t pid,
+ 			     pid_t tid)
+ {
+ 	struct thread *thread;
+-	int nr_cpus = min(machine->env->nr_cpus_online, MAX_NR_CPUS);
++	int nr_cpus = min(machine->env->nr_cpus_avail, MAX_NR_CPUS);
+ 
+ 	if (cpu < 0)
+ 		return -EINVAL;
+diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
+index 098080287c687..22098fffac4f1 100644
+--- a/tools/perf/util/session.c
++++ b/tools/perf/util/session.c
+@@ -2397,7 +2397,7 @@ int perf_session__cpu_bitmap(struct perf_session *session,
+ {
+ 	int i, err = -1;
+ 	struct perf_cpu_map *map;
+-	int nr_cpus = min(session->header.env.nr_cpus_online, MAX_NR_CPUS);
++	int nr_cpus = min(session->header.env.nr_cpus_avail, MAX_NR_CPUS);
+ 
+ 	for (i = 0; i < PERF_TYPE_MAX; ++i) {
+ 		struct evsel *evsel;
+diff --git a/tools/testing/selftests/net/udpgro.sh b/tools/testing/selftests/net/udpgro.sh
+index ac2a30be9b325..f8a19f548ae9d 100755
+--- a/tools/testing/selftests/net/udpgro.sh
++++ b/tools/testing/selftests/net/udpgro.sh
+@@ -5,6 +5,14 @@
+ 
+ readonly PEER_NS="ns-peer-$(mktemp -u XXXXXX)"
+ 
++# set global exit status, but never reset nonzero one.
++check_err()
++{
++	if [ $ret -eq 0 ]; then
++		ret=$1
++	fi
++}
++
+ cleanup() {
+ 	local -r jobs="$(jobs -p)"
+ 	local -r ns="$(ip netns list|grep $PEER_NS)"
+@@ -44,7 +52,9 @@ run_one() {
+ 	# Hack: let bg programs complete the startup
+ 	sleep 0.1
+ 	./udpgso_bench_tx ${tx_args}
++	ret=$?
+ 	wait $(jobs -p)
++	return $ret
+ }
+ 
+ run_test() {
+@@ -87,8 +97,10 @@ run_one_nat() {
+ 
+ 	sleep 0.1
+ 	./udpgso_bench_tx ${tx_args}
++	ret=$?
+ 	kill -INT $pid
+ 	wait $(jobs -p)
++	return $ret
+ }
+ 
+ run_one_2sock() {
+@@ -110,7 +122,9 @@ run_one_2sock() {
+ 	sleep 0.1
+ 	# first UDP GSO socket should be closed at this point
+ 	./udpgso_bench_tx ${tx_args}
++	ret=$?
+ 	wait $(jobs -p)
++	return $ret
+ }
+ 
+ run_nat_test() {
+@@ -131,36 +145,54 @@ run_all() {
+ 	local -r core_args="-l 4"
+ 	local -r ipv4_args="${core_args} -4 -D 192.168.1.1"
+ 	local -r ipv6_args="${core_args} -6 -D 2001:db8::1"
++	ret=0
+ 
+ 	echo "ipv4"
+ 	run_test "no GRO" "${ipv4_args} -M 10 -s 1400" "-4 -n 10 -l 1400"
++	check_err $?
+ 
+ 	# explicitly check we are not receiving UDP_SEGMENT cmsg (-S -1)
+ 	# when GRO does not take place
+ 	run_test "no GRO chk cmsg" "${ipv4_args} -M 10 -s 1400" "-4 -n 10 -l 1400 -S -1"
++	check_err $?
+ 
+ 	# the GSO packets are aggregated because:
+ 	# * veth schedule napi after each xmit
+ 	# * segmentation happens in BH context, veth napi poll is delayed after
+ 	#   the transmission of the last segment
+ 	run_test "GRO" "${ipv4_args} -M 1 -s 14720 -S 0 " "-4 -n 1 -l 14720"
++	check_err $?
+ 	run_test "GRO chk cmsg" "${ipv4_args} -M 1 -s 14720 -S 0 " "-4 -n 1 -l 14720 -S 1472"
++	check_err $?
+ 	run_test "GRO with custom segment size" "${ipv4_args} -M 1 -s 14720 -S 500 " "-4 -n 1 -l 14720"
++	check_err $?
+ 	run_test "GRO with custom segment size cmsg" "${ipv4_args} -M 1 -s 14720 -S 500 " "-4 -n 1 -l 14720 -S 500"
++	check_err $?
+ 
+ 	run_nat_test "bad GRO lookup" "${ipv4_args} -M 1 -s 14720 -S 0" "-n 10 -l 1472"
++	check_err $?
+ 	run_2sock_test "multiple GRO socks" "${ipv4_args} -M 1 -s 14720 -S 0 " "-4 -n 1 -l 14720 -S 1472"
++	check_err $?
+ 
+ 	echo "ipv6"
+ 	run_test "no GRO" "${ipv6_args} -M 10 -s 1400" "-n 10 -l 1400"
++	check_err $?
+ 	run_test "no GRO chk cmsg" "${ipv6_args} -M 10 -s 1400" "-n 10 -l 1400 -S -1"
++	check_err $?
+ 	run_test "GRO" "${ipv6_args} -M 1 -s 14520 -S 0" "-n 1 -l 14520"
++	check_err $?
+ 	run_test "GRO chk cmsg" "${ipv6_args} -M 1 -s 14520 -S 0" "-n 1 -l 14520 -S 1452"
++	check_err $?
+ 	run_test "GRO with custom segment size" "${ipv6_args} -M 1 -s 14520 -S 500" "-n 1 -l 14520"
++	check_err $?
+ 	run_test "GRO with custom segment size cmsg" "${ipv6_args} -M 1 -s 14520 -S 500" "-n 1 -l 14520 -S 500"
++	check_err $?
+ 
+ 	run_nat_test "bad GRO lookup" "${ipv6_args} -M 1 -s 14520 -S 0" "-n 10 -l 1452"
++	check_err $?
+ 	run_2sock_test "multiple GRO socks" "${ipv6_args} -M 1 -s 14520 -S 0 " "-n 1 -l 14520 -S 1452"
++	check_err $?
++	return $ret
+ }
+ 
+ if [ ! -f ../bpf/xdp_dummy.o ]; then
+@@ -180,3 +212,5 @@ elif [[ $1 == "__subprocess_2sock" ]]; then
+ 	shift
+ 	run_one_2sock $@
+ fi
++
++exit $?
+diff --git a/tools/testing/selftests/netfilter/nft_conntrack_helper.sh b/tools/testing/selftests/netfilter/nft_conntrack_helper.sh
+index edf0a48da6bf8..bf6b9626c7dd2 100755
+--- a/tools/testing/selftests/netfilter/nft_conntrack_helper.sh
++++ b/tools/testing/selftests/netfilter/nft_conntrack_helper.sh
+@@ -94,7 +94,13 @@ check_for_helper()
+ 	local message=$2
+ 	local port=$3
+ 
+-	ip netns exec ${netns} conntrack -L -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
++	if echo $message |grep -q 'ipv6';then
++		local family="ipv6"
++	else
++		local family="ipv4"
++	fi
++
++	ip netns exec ${netns} conntrack -L -f $family -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
+ 	if [ $? -ne 0 ] ; then
+ 		echo "FAIL: ${netns} did not show attached helper $message" 1>&2
+ 		ret=1
+@@ -111,8 +117,8 @@ test_helper()
+ 
+ 	sleep 3 | ip netns exec ${ns2} nc -w 2 -l -p $port > /dev/null &
+ 
+-	sleep 1
+ 	sleep 1 | ip netns exec ${ns1} nc -w 2 10.0.1.2 $port > /dev/null &
++	sleep 1
+ 
+ 	check_for_helper "$ns1" "ip $msg" $port
+ 	check_for_helper "$ns2" "ip $msg" $port
+@@ -128,8 +134,8 @@ test_helper()
+ 
+ 	sleep 3 | ip netns exec ${ns2} nc -w 2 -6 -l -p $port > /dev/null &
+ 
+-	sleep 1
+ 	sleep 1 | ip netns exec ${ns1} nc -w 2 -6 dead:1::2 $port > /dev/null &
++	sleep 1
+ 
+ 	check_for_helper "$ns1" "ipv6 $msg" $port
+ 	check_for_helper "$ns2" "ipv6 $msg" $port


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-01-23 16:38 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-01-23 16:38 UTC (permalink / raw
  To: gentoo-commits

commit:     39c60e26cf9b140173075799b67ca1bbce88f1a9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jan 23 16:38:32 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jan 23 16:38:32 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=39c60e26

Linux patch 5.10.10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1009_linux-5.10.10.patch | 1406 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1410 insertions(+)

diff --git a/0000_README b/0000_README
index e4c8bac..4ad6d69 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-5.10.9.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.9
 
+Patch:  1009_linux-5.10.10.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1009_linux-5.10.10.patch b/1009_linux-5.10.10.patch
new file mode 100644
index 0000000..70f7173
--- /dev/null
+++ b/1009_linux-5.10.10.patch
@@ -0,0 +1,1406 @@
+diff --git a/Documentation/devicetree/bindings/net/renesas,etheravb.yaml b/Documentation/devicetree/bindings/net/renesas,etheravb.yaml
+index 244befb6402aa..de9dd574a2f95 100644
+--- a/Documentation/devicetree/bindings/net/renesas,etheravb.yaml
++++ b/Documentation/devicetree/bindings/net/renesas,etheravb.yaml
+@@ -163,6 +163,7 @@ allOf:
+             enum:
+               - renesas,etheravb-r8a774a1
+               - renesas,etheravb-r8a774b1
++              - renesas,etheravb-r8a774e1
+               - renesas,etheravb-r8a7795
+               - renesas,etheravb-r8a7796
+               - renesas,etheravb-r8a77961
+diff --git a/Makefile b/Makefile
+index 1572ebd192a93..7d86ad6ad36cc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index e04d90af4c27c..6fb8cb7b9bcc6 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -312,6 +312,25 @@ static struct syscore_ops hv_syscore_ops = {
+ 	.resume		= hv_resume,
+ };
+ 
++static void (* __initdata old_setup_percpu_clockev)(void);
++
++static void __init hv_stimer_setup_percpu_clockev(void)
++{
++	/*
++	 * Ignore any errors in setting up stimer clockevents
++	 * as we can run with the LAPIC timer as a fallback.
++	 */
++	(void)hv_stimer_alloc();
++
++	/*
++	 * Still register the LAPIC timer, because the direct-mode STIMER is
++	 * not supported by old versions of Hyper-V. This also allows users
++	 * to switch to LAPIC timer via /sys, if they want to.
++	 */
++	if (old_setup_percpu_clockev)
++		old_setup_percpu_clockev();
++}
++
+ /*
+  * This function is to be invoked early in the boot sequence after the
+  * hypervisor has been detected.
+@@ -390,10 +409,14 @@ void __init hyperv_init(void)
+ 	wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
+ 
+ 	/*
+-	 * Ignore any errors in setting up stimer clockevents
+-	 * as we can run with the LAPIC timer as a fallback.
++	 * hyperv_init() is called before LAPIC is initialized: see
++	 * apic_intr_mode_init() -> x86_platform.apic_post_init() and
++	 * apic_bsp_setup() -> setup_local_APIC(). The direct-mode STIMER
++	 * depends on LAPIC, so hv_stimer_alloc() should be called from
++	 * x86_init.timers.setup_percpu_clockev.
+ 	 */
+-	(void)hv_stimer_alloc();
++	old_setup_percpu_clockev = x86_init.timers.setup_percpu_clockev;
++	x86_init.timers.setup_percpu_clockev = hv_stimer_setup_percpu_clockev;
+ 
+ 	hv_apic_init();
+ 
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index 8892908ad58ce..788a4ba1e2e74 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -356,7 +356,8 @@ int public_key_verify_signature(const struct public_key *pkey,
+ 	if (ret)
+ 		goto error_free_key;
+ 
+-	if (strcmp(sig->pkey_algo, "sm2") == 0 && sig->data_size) {
++	if (sig->pkey_algo && strcmp(sig->pkey_algo, "sm2") == 0 &&
++	    sig->data_size) {
+ 		ret = cert_sig_digest_update(sig, tfm);
+ 		if (ret)
+ 			goto error_free_key;
+diff --git a/drivers/gpu/drm/amd/display/Kconfig b/drivers/gpu/drm/amd/display/Kconfig
+index 60dfdd432aba0..3c410d236c491 100644
+--- a/drivers/gpu/drm/amd/display/Kconfig
++++ b/drivers/gpu/drm/amd/display/Kconfig
+@@ -6,7 +6,7 @@ config DRM_AMD_DC
+ 	bool "AMD DC - Enable new display engine"
+ 	default y
+ 	select SND_HDA_COMPONENT if SND_HDA_CORE
+-	select DRM_AMD_DC_DCN if (X86 || PPC64 || (ARM64 && KERNEL_MODE_NEON)) && !(KCOV_INSTRUMENT_ALL && KCOV_ENABLE_COMPARISONS)
++	select DRM_AMD_DC_DCN if (X86 || PPC64) && !(KCOV_INSTRUMENT_ALL && KCOV_ENABLE_COMPARISONS)
+ 	help
+ 	  Choose this option if you want to use the new display engine
+ 	  support for AMDGPU. This adds required support for Vega and
+diff --git a/drivers/gpu/drm/amd/display/dc/calcs/Makefile b/drivers/gpu/drm/amd/display/dc/calcs/Makefile
+index 64f515d744103..4674aca8f2069 100644
+--- a/drivers/gpu/drm/amd/display/dc/calcs/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/calcs/Makefile
+@@ -33,10 +33,6 @@ ifdef CONFIG_PPC64
+ calcs_ccflags := -mhard-float -maltivec
+ endif
+ 
+-ifdef CONFIG_ARM64
+-calcs_rcflags := -mgeneral-regs-only
+-endif
+-
+ ifdef CONFIG_CC_IS_GCC
+ ifeq ($(call cc-ifversion, -lt, 0701, y), y)
+ IS_OLD_GCC = 1
+@@ -57,9 +53,6 @@ endif
+ CFLAGS_$(AMDDALPATH)/dc/calcs/dcn_calcs.o := $(calcs_ccflags)
+ CFLAGS_$(AMDDALPATH)/dc/calcs/dcn_calc_auto.o := $(calcs_ccflags)
+ CFLAGS_$(AMDDALPATH)/dc/calcs/dcn_calc_math.o := $(calcs_ccflags) -Wno-tautological-compare
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/calcs/dcn_calcs.o := $(calcs_rcflags)
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/calcs/dcn_calc_auto.o := $(calcs_rcflags)
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/calcs/dcn_calc_math.o := $(calcs_rcflags)
+ 
+ BW_CALCS = dce_calcs.o bw_fixed.o custom_float.o
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile b/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
+index 1a495759a0343..52b1ce775a1e8 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
+@@ -104,13 +104,6 @@ ifdef CONFIG_PPC64
+ CFLAGS_$(AMDDALPATH)/dc/clk_mgr/dcn21/rn_clk_mgr.o := $(call cc-option,-mno-gnu-attribute)
+ endif
+ 
+-# prevent build errors:
+-# ...: '-mgeneral-regs-only' is incompatible with the use of floating-point types
+-# this file is unused on arm64, just like on ppc64
+-ifdef CONFIG_ARM64
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/clk_mgr/dcn21/rn_clk_mgr.o := -mgeneral-regs-only
+-endif
+-
+ AMD_DAL_CLK_MGR_DCN21 = $(addprefix $(AMDDALPATH)/dc/clk_mgr/dcn21/,$(CLK_MGR_DCN21))
+ 
+ AMD_DISPLAY_FILES += $(AMD_DAL_CLK_MGR_DCN21)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/Makefile b/drivers/gpu/drm/amd/display/dc/dcn10/Makefile
+index 733e6e6e43bd6..62ad1a11bff9c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/Makefile
+@@ -31,11 +31,4 @@ DCN10 = dcn10_init.o dcn10_resource.o dcn10_ipp.o dcn10_hw_sequencer.o \
+ 
+ AMD_DAL_DCN10 = $(addprefix $(AMDDALPATH)/dc/dcn10/,$(DCN10))
+ 
+-# fix:
+-# ...: '-mgeneral-regs-only' is incompatible with the use of floating-point types
+-# aarch64 does not support soft-float, so use hard-float and handle this in code
+-ifdef CONFIG_ARM64
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dcn10/dcn10_resource.o := -mgeneral-regs-only
+-endif
+-
+ AMD_DISPLAY_FILES += $(AMD_DAL_DCN10)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+index a78712caf1244..462d3d981ea5e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+@@ -1339,47 +1339,6 @@ static uint32_t read_pipe_fuses(struct dc_context *ctx)
+ 	return value;
+ }
+ 
+-/*
+- * Some architectures don't support soft-float (e.g. aarch64), on those
+- * this function has to be called with hardfloat enabled, make sure not
+- * to inline it so whatever fp stuff is done stays inside
+- */
+-static noinline void dcn10_resource_construct_fp(
+-	struct dc *dc)
+-{
+-	if (dc->ctx->dce_version == DCN_VERSION_1_01) {
+-		struct dcn_soc_bounding_box *dcn_soc = dc->dcn_soc;
+-		struct dcn_ip_params *dcn_ip = dc->dcn_ip;
+-		struct display_mode_lib *dml = &dc->dml;
+-
+-		dml->ip.max_num_dpp = 3;
+-		/* TODO how to handle 23.84? */
+-		dcn_soc->dram_clock_change_latency = 23;
+-		dcn_ip->max_num_dpp = 3;
+-	}
+-	if (ASICREV_IS_RV1_F0(dc->ctx->asic_id.hw_internal_rev)) {
+-		dc->dcn_soc->urgent_latency = 3;
+-		dc->debug.disable_dmcu = true;
+-		dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9 = 41.60f;
+-	}
+-
+-
+-	dc->dcn_soc->number_of_channels = dc->ctx->asic_id.vram_width / ddr4_dram_width;
+-	ASSERT(dc->dcn_soc->number_of_channels < 3);
+-	if (dc->dcn_soc->number_of_channels == 0)/*old sbios bug*/
+-		dc->dcn_soc->number_of_channels = 2;
+-
+-	if (dc->dcn_soc->number_of_channels == 1) {
+-		dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9 = 19.2f;
+-		dc->dcn_soc->fabric_and_dram_bandwidth_vnom0p8 = 17.066f;
+-		dc->dcn_soc->fabric_and_dram_bandwidth_vmid0p72 = 14.933f;
+-		dc->dcn_soc->fabric_and_dram_bandwidth_vmin0p65 = 12.8f;
+-		if (ASICREV_IS_RV1_F0(dc->ctx->asic_id.hw_internal_rev)) {
+-			dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9 = 20.80f;
+-		}
+-	}
+-}
+-
+ static bool dcn10_resource_construct(
+ 	uint8_t num_virtual_links,
+ 	struct dc *dc,
+@@ -1531,15 +1490,37 @@ static bool dcn10_resource_construct(
+ 	memcpy(dc->dcn_ip, &dcn10_ip_defaults, sizeof(dcn10_ip_defaults));
+ 	memcpy(dc->dcn_soc, &dcn10_soc_defaults, sizeof(dcn10_soc_defaults));
+ 
+-#if defined(CONFIG_ARM64)
+-	/* Aarch64 does not support -msoft-float/-mfloat-abi=soft */
+-	DC_FP_START();
+-	dcn10_resource_construct_fp(dc);
+-	DC_FP_END();
+-#else
+-	/* Other architectures we build for build this with soft-float */
+-	dcn10_resource_construct_fp(dc);
+-#endif
++	if (dc->ctx->dce_version == DCN_VERSION_1_01) {
++		struct dcn_soc_bounding_box *dcn_soc = dc->dcn_soc;
++		struct dcn_ip_params *dcn_ip = dc->dcn_ip;
++		struct display_mode_lib *dml = &dc->dml;
++
++		dml->ip.max_num_dpp = 3;
++		/* TODO how to handle 23.84? */
++		dcn_soc->dram_clock_change_latency = 23;
++		dcn_ip->max_num_dpp = 3;
++	}
++	if (ASICREV_IS_RV1_F0(dc->ctx->asic_id.hw_internal_rev)) {
++		dc->dcn_soc->urgent_latency = 3;
++		dc->debug.disable_dmcu = true;
++		dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9 = 41.60f;
++	}
++
++
++	dc->dcn_soc->number_of_channels = dc->ctx->asic_id.vram_width / ddr4_dram_width;
++	ASSERT(dc->dcn_soc->number_of_channels < 3);
++	if (dc->dcn_soc->number_of_channels == 0)/*old sbios bug*/
++		dc->dcn_soc->number_of_channels = 2;
++
++	if (dc->dcn_soc->number_of_channels == 1) {
++		dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9 = 19.2f;
++		dc->dcn_soc->fabric_and_dram_bandwidth_vnom0p8 = 17.066f;
++		dc->dcn_soc->fabric_and_dram_bandwidth_vmid0p72 = 14.933f;
++		dc->dcn_soc->fabric_and_dram_bandwidth_vmin0p65 = 12.8f;
++		if (ASICREV_IS_RV1_F0(dc->ctx->asic_id.hw_internal_rev)) {
++			dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9 = 20.80f;
++		}
++	}
+ 
+ 	pool->base.pp_smu = dcn10_pp_smu_create(ctx);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/Makefile b/drivers/gpu/drm/amd/display/dc/dcn20/Makefile
+index 624cb1341ef14..5fcaf78334ff9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/Makefile
+@@ -17,10 +17,6 @@ ifdef CONFIG_PPC64
+ CFLAGS_$(AMDDALPATH)/dc/dcn20/dcn20_resource.o := -mhard-float -maltivec
+ endif
+ 
+-ifdef CONFIG_ARM64
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dcn20/dcn20_resource.o := -mgeneral-regs-only
+-endif
+-
+ ifdef CONFIG_CC_IS_GCC
+ ifeq ($(call cc-ifversion, -lt, 0701, y), y)
+ IS_OLD_GCC = 1
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/Makefile b/drivers/gpu/drm/amd/display/dc/dcn21/Makefile
+index 51a2f3d4c194b..07684d3e375ab 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/Makefile
+@@ -13,10 +13,6 @@ ifdef CONFIG_PPC64
+ CFLAGS_$(AMDDALPATH)/dc/dcn21/dcn21_resource.o := -mhard-float -maltivec
+ endif
+ 
+-ifdef CONFIG_ARM64
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dcn21/dcn21_resource.o := -mgeneral-regs-only
+-endif
+-
+ ifdef CONFIG_CC_IS_GCC
+ ifeq ($(call cc-ifversion, -lt, 0701, y), y)
+ IS_OLD_GCC = 1
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/Makefile b/drivers/gpu/drm/amd/display/dc/dml/Makefile
+index dbc7e2abe3795..417331438c306 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/dml/Makefile
+@@ -33,10 +33,6 @@ ifdef CONFIG_PPC64
+ dml_ccflags := -mhard-float -maltivec
+ endif
+ 
+-ifdef CONFIG_ARM64
+-dml_rcflags := -mgeneral-regs-only
+-endif
+-
+ ifdef CONFIG_CC_IS_GCC
+ ifeq ($(call cc-ifversion, -lt, 0701, y), y)
+ IS_OLD_GCC = 1
+@@ -64,13 +60,6 @@ CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_mode_vba_20v2.o := $(dml_ccflags)
+ CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_rq_dlg_calc_20v2.o := $(dml_ccflags)
+ CFLAGS_$(AMDDALPATH)/dc/dml/dcn21/display_mode_vba_21.o := $(dml_ccflags)
+ CFLAGS_$(AMDDALPATH)/dc/dml/dcn21/display_rq_dlg_calc_21.o := $(dml_ccflags)
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/display_mode_vba.o := $(dml_rcflags)
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn20/display_mode_vba_20.o := $(dml_rcflags)
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn20/display_rq_dlg_calc_20.o := $(dml_rcflags)
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn20/display_mode_vba_20v2.o := $(dml_rcflags)
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn20/display_rq_dlg_calc_20v2.o := $(dml_rcflags)
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn21/display_mode_vba_21.o := $(dml_rcflags)
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn21/display_rq_dlg_calc_21.o := $(dml_rcflags)
+ endif
+ ifdef CONFIG_DRM_AMD_DC_DCN3_0
+ CFLAGS_$(AMDDALPATH)/dc/dml/dcn30/display_mode_vba_30.o := $(dml_ccflags) -Wframe-larger-than=2048
+@@ -78,8 +67,6 @@ CFLAGS_$(AMDDALPATH)/dc/dml/dcn30/display_rq_dlg_calc_30.o := $(dml_ccflags)
+ endif
+ CFLAGS_$(AMDDALPATH)/dc/dml/dml1_display_rq_dlg_calc.o := $(dml_ccflags)
+ CFLAGS_$(AMDDALPATH)/dc/dml/display_rq_dlg_helpers.o := $(dml_ccflags)
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dml1_display_rq_dlg_calc.o := $(dml_rcflags)
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/display_rq_dlg_helpers.o := $(dml_rcflags)
+ 
+ DML = display_mode_lib.o display_rq_dlg_helpers.o dml1_display_rq_dlg_calc.o \
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dsc/Makefile b/drivers/gpu/drm/amd/display/dc/dsc/Makefile
+index f2624a1156e5c..ea29cf95d470b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dsc/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/dsc/Makefile
+@@ -10,10 +10,6 @@ ifdef CONFIG_PPC64
+ dsc_ccflags := -mhard-float -maltivec
+ endif
+ 
+-ifdef CONFIG_ARM64
+-dsc_rcflags := -mgeneral-regs-only
+-endif
+-
+ ifdef CONFIG_CC_IS_GCC
+ ifeq ($(call cc-ifversion, -lt, 0701, y), y)
+ IS_OLD_GCC = 1
+@@ -32,7 +28,6 @@ endif
+ endif
+ 
+ CFLAGS_$(AMDDALPATH)/dc/dsc/rc_calc.o := $(dsc_ccflags)
+-CFLAGS_REMOVE_$(AMDDALPATH)/dc/dsc/rc_calc.o := $(dsc_rcflags)
+ 
+ DSC = dc_dsc.o rc_calc.o rc_calc_dpi.o
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/os_types.h b/drivers/gpu/drm/amd/display/dc/os_types.h
+index 95cb56929e79e..126c2f3a4dd3b 100644
+--- a/drivers/gpu/drm/amd/display/dc/os_types.h
++++ b/drivers/gpu/drm/amd/display/dc/os_types.h
+@@ -55,10 +55,6 @@
+ #include <asm/fpu/api.h>
+ #define DC_FP_START() kernel_fpu_begin()
+ #define DC_FP_END() kernel_fpu_end()
+-#elif defined(CONFIG_ARM64)
+-#include <asm/neon.h>
+-#define DC_FP_START() kernel_neon_begin()
+-#define DC_FP_END() kernel_neon_end()
+ #elif defined(CONFIG_PPC64)
+ #include <asm/switch_to.h>
+ #include <asm/cputable.h>
+diff --git a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
+index b6e377aa1131b..6ac1accade803 100644
+--- a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
++++ b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
+@@ -452,7 +452,7 @@ static int otm8009a_probe(struct mipi_dsi_device *dsi)
+ 	dsi->lanes = 2;
+ 	dsi->format = MIPI_DSI_FMT_RGB888;
+ 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
+-			  MIPI_DSI_MODE_LPM;
++			  MIPI_DSI_MODE_LPM | MIPI_DSI_CLOCK_NON_CONTINUOUS;
+ 
+ 	drm_panel_init(&ctx->panel, dev, &otm8009a_drm_funcs,
+ 		       DRM_MODE_CONNECTOR_DSI);
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index 8a39be076e143..59de6b3b5f026 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -1432,7 +1432,7 @@ mcp251xfd_handle_rxif_one(struct mcp251xfd_priv *priv,
+ 	else
+ 		skb = alloc_can_skb(priv->ndev, (struct can_frame **)&cfd);
+ 
+-	if (!cfd) {
++	if (!skb) {
+ 		stats->rx_dropped++;
+ 		return 0;
+ 	}
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h b/drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h
+index 92473dda55d9f..22a0220123ade 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h
+@@ -40,6 +40,13 @@
+ #define TCB_L2T_IX_M		0xfffULL
+ #define TCB_L2T_IX_V(x)		((x) << TCB_L2T_IX_S)
+ 
++#define TCB_T_FLAGS_W           1
++#define TCB_T_FLAGS_S           0
++#define TCB_T_FLAGS_M           0xffffffffffffffffULL
++#define TCB_T_FLAGS_V(x)        ((__u64)(x) << TCB_T_FLAGS_S)
++
++#define TCB_FIELD_COOKIE_TFLAG	1
++
+ #define TCB_SMAC_SEL_W		0
+ #define TCB_SMAC_SEL_S		24
+ #define TCB_SMAC_SEL_M		0xffULL
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
+index 2d3dfdd2a7163..a7c72fd2f024b 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
+@@ -573,7 +573,11 @@ int send_tx_flowc_wr(struct sock *sk, int compl,
+ void chtls_tcp_push(struct sock *sk, int flags);
+ int chtls_push_frames(struct chtls_sock *csk, int comp);
+ int chtls_set_tcb_tflag(struct sock *sk, unsigned int bit_pos, int val);
++void chtls_set_tcb_field_rpl_skb(struct sock *sk, u16 word,
++				 u64 mask, u64 val, u8 cookie,
++				 int through_l2t);
+ int chtls_setkey(struct chtls_sock *csk, u32 keylen, u32 mode, int cipher_type);
++void chtls_set_quiesce_ctrl(struct sock *sk, int val);
+ void skb_entail(struct sock *sk, struct sk_buff *skb, int flags);
+ unsigned int keyid_to_addr(int start_addr, int keyid);
+ void free_tls_keyid(struct sock *sk);
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+index 07a956098e11f..5beec901713fb 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+@@ -32,6 +32,7 @@
+ #include "chtls.h"
+ #include "chtls_cm.h"
+ #include "clip_tbl.h"
++#include "t4_tcb.h"
+ 
+ /*
+  * State transitions and actions for close.  Note that if we are in SYN_SENT
+@@ -267,7 +268,9 @@ static void chtls_send_reset(struct sock *sk, int mode, struct sk_buff *skb)
+ 	if (sk->sk_state != TCP_SYN_RECV)
+ 		chtls_send_abort(sk, mode, skb);
+ 	else
+-		goto out;
++		chtls_set_tcb_field_rpl_skb(sk, TCB_T_FLAGS_W,
++					    TCB_T_FLAGS_V(TCB_T_FLAGS_M), 0,
++					    TCB_FIELD_COOKIE_TFLAG, 1);
+ 
+ 	return;
+ out:
+@@ -1948,6 +1951,8 @@ static void chtls_close_con_rpl(struct sock *sk, struct sk_buff *skb)
+ 		else if (tcp_sk(sk)->linger2 < 0 &&
+ 			 !csk_flag_nochk(csk, CSK_ABORT_SHUTDOWN))
+ 			chtls_abort_conn(sk, skb);
++		else if (csk_flag_nochk(csk, CSK_TX_DATA_SENT))
++			chtls_set_quiesce_ctrl(sk, 0);
+ 		break;
+ 	default:
+ 		pr_info("close_con_rpl in bad state %d\n", sk->sk_state);
+@@ -2291,6 +2296,28 @@ static int chtls_wr_ack(struct chtls_dev *cdev, struct sk_buff *skb)
+ 	return 0;
+ }
+ 
++static int chtls_set_tcb_rpl(struct chtls_dev *cdev, struct sk_buff *skb)
++{
++	struct cpl_set_tcb_rpl *rpl = cplhdr(skb) + RSS_HDR;
++	unsigned int hwtid = GET_TID(rpl);
++	struct sock *sk;
++
++	sk = lookup_tid(cdev->tids, hwtid);
++
++	/* return EINVAL if socket doesn't exist */
++	if (!sk)
++		return -EINVAL;
++
++	/* Reusing the skb as size of cpl_set_tcb_field structure
++	 * is greater than cpl_abort_req
++	 */
++	if (TCB_COOKIE_G(rpl->cookie) == TCB_FIELD_COOKIE_TFLAG)
++		chtls_send_abort(sk, CPL_ABORT_SEND_RST, NULL);
++
++	kfree_skb(skb);
++	return 0;
++}
++
+ chtls_handler_func chtls_handlers[NUM_CPL_CMDS] = {
+ 	[CPL_PASS_OPEN_RPL]     = chtls_pass_open_rpl,
+ 	[CPL_CLOSE_LISTSRV_RPL] = chtls_close_listsrv_rpl,
+@@ -2303,5 +2330,6 @@ chtls_handler_func chtls_handlers[NUM_CPL_CMDS] = {
+ 	[CPL_CLOSE_CON_RPL]     = chtls_conn_cpl,
+ 	[CPL_ABORT_REQ_RSS]     = chtls_conn_cpl,
+ 	[CPL_ABORT_RPL_RSS]     = chtls_conn_cpl,
+-	[CPL_FW4_ACK]           = chtls_wr_ack,
++	[CPL_FW4_ACK]		= chtls_wr_ack,
++	[CPL_SET_TCB_RPL]	= chtls_set_tcb_rpl,
+ };
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_hw.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_hw.c
+index a4fb463af22ac..1e67140b0f801 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_hw.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_hw.c
+@@ -88,6 +88,24 @@ static int chtls_set_tcb_field(struct sock *sk, u16 word, u64 mask, u64 val)
+ 	return ret < 0 ? ret : 0;
+ }
+ 
++void chtls_set_tcb_field_rpl_skb(struct sock *sk, u16 word,
++				 u64 mask, u64 val, u8 cookie,
++				 int through_l2t)
++{
++	struct sk_buff *skb;
++	unsigned int wrlen;
++
++	wrlen = sizeof(struct cpl_set_tcb_field) + sizeof(struct ulptx_idata);
++	wrlen = roundup(wrlen, 16);
++
++	skb = alloc_skb(wrlen, GFP_KERNEL | __GFP_NOFAIL);
++	if (!skb)
++		return;
++
++	__set_tcb_field(sk, skb, word, mask, val, cookie, 0);
++	send_or_defer(sk, tcp_sk(sk), skb, through_l2t);
++}
++
+ /*
+  * Set one of the t_flags bits in the TCB.
+  */
+@@ -113,6 +131,29 @@ static int chtls_set_tcb_quiesce(struct sock *sk, int val)
+ 				   TF_RX_QUIESCE_V(val));
+ }
+ 
++void chtls_set_quiesce_ctrl(struct sock *sk, int val)
++{
++	struct chtls_sock *csk;
++	struct sk_buff *skb;
++	unsigned int wrlen;
++	int ret;
++
++	wrlen = sizeof(struct cpl_set_tcb_field) + sizeof(struct ulptx_idata);
++	wrlen = roundup(wrlen, 16);
++
++	skb = alloc_skb(wrlen, GFP_ATOMIC);
++	if (!skb)
++		return;
++
++	csk = rcu_dereference_sk_user_data(sk);
++
++	__set_tcb_field(sk, skb, 1, TF_RX_QUIESCE_V(1), 0, 0, 1);
++	set_wr_txq(skb, CPL_PRIORITY_CONTROL, csk->port_id);
++	ret = cxgb4_ofld_send(csk->egress_dev, skb);
++	if (ret < 0)
++		kfree_skb(skb);
++}
++
+ /* TLS Key bitmap processing */
+ int chtls_init_kmap(struct chtls_dev *cdev, struct cxgb4_lld_info *lldi)
+ {
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+index e402c62eb3137..8557807b41717 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+@@ -348,12 +348,12 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
+ 		 * SBP is *not* set in PRT_SBPVSI (default not set).
+ 		 */
+ 		skb = i40e_construct_skb_zc(rx_ring, *bi);
+-		*bi = NULL;
+ 		if (!skb) {
+ 			rx_ring->rx_stats.alloc_buff_failed++;
+ 			break;
+ 		}
+ 
++		*bi = NULL;
+ 		cleaned_count++;
+ 		i40e_inc_ntc(rx_ring);
+ 
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 2dcdec3eacc36..d1f7b51cab620 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -5874,8 +5874,6 @@ static void mvpp2_phylink_validate(struct phylink_config *config,
+ 
+ 	phylink_set(mask, Autoneg);
+ 	phylink_set_port_modes(mask);
+-	phylink_set(mask, Pause);
+-	phylink_set(mask, Asym_Pause);
+ 
+ 	switch (state->interface) {
+ 	case PHY_INTERFACE_MODE_10GBASER:
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+index 8fa286ccdd6bb..bf85ce9835d7f 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+@@ -19,7 +19,7 @@
+ #define MLXSW_THERMAL_ASIC_TEMP_NORM	75000	/* 75C */
+ #define MLXSW_THERMAL_ASIC_TEMP_HIGH	85000	/* 85C */
+ #define MLXSW_THERMAL_ASIC_TEMP_HOT	105000	/* 105C */
+-#define MLXSW_THERMAL_ASIC_TEMP_CRIT	110000	/* 110C */
++#define MLXSW_THERMAL_ASIC_TEMP_CRIT	140000	/* 140C */
+ #define MLXSW_THERMAL_HYSTERESIS_TEMP	5000	/* 5C */
+ #define MLXSW_THERMAL_MODULE_TEMP_SHIFT	(MLXSW_THERMAL_HYSTERESIS_TEMP * 2)
+ #define MLXSW_THERMAL_ZONE_MAX_NAME	16
+@@ -176,6 +176,12 @@ mlxsw_thermal_module_trips_update(struct device *dev, struct mlxsw_core *core,
+ 	if (err)
+ 		return err;
+ 
++	if (crit_temp > emerg_temp) {
++		dev_warn(dev, "%s : Critical threshold %d is above emergency threshold %d\n",
++			 tz->tzdev->type, crit_temp, emerg_temp);
++		return 0;
++	}
++
+ 	/* According to the system thermal requirements, the thermal zones are
+ 	 * defined with four trip points. The critical and emergency
+ 	 * temperature thresholds, provided by QSFP module are set as "active"
+@@ -190,11 +196,8 @@ mlxsw_thermal_module_trips_update(struct device *dev, struct mlxsw_core *core,
+ 		tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temp = crit_temp;
+ 	tz->trips[MLXSW_THERMAL_TEMP_TRIP_HIGH].temp = crit_temp;
+ 	tz->trips[MLXSW_THERMAL_TEMP_TRIP_HOT].temp = emerg_temp;
+-	if (emerg_temp > crit_temp)
+-		tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp +
++	tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp +
+ 					MLXSW_THERMAL_MODULE_TEMP_SHIFT;
+-	else
+-		tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
+index f21847739ef1f..d258e0ccf9465 100644
+--- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
++++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
+@@ -564,11 +564,6 @@ static const struct net_device_ops netxen_netdev_ops = {
+ 	.ndo_set_features = netxen_set_features,
+ };
+ 
+-static inline bool netxen_function_zero(struct pci_dev *pdev)
+-{
+-	return (PCI_FUNC(pdev->devfn) == 0) ? true : false;
+-}
+-
+ static inline void netxen_set_interrupt_mode(struct netxen_adapter *adapter,
+ 					     u32 mode)
+ {
+@@ -664,7 +659,7 @@ static int netxen_setup_intr(struct netxen_adapter *adapter)
+ 	netxen_initialize_interrupt_registers(adapter);
+ 	netxen_set_msix_bit(pdev, 0);
+ 
+-	if (netxen_function_zero(pdev)) {
++	if (adapter->portnum == 0) {
+ 		if (!netxen_setup_msi_interrupts(adapter, num_msix))
+ 			netxen_set_interrupt_mode(adapter, NETXEN_MSI_MODE);
+ 		else
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac5.c b/drivers/net/ethernet/stmicro/stmmac/dwmac5.c
+index 67ba67ed0cb99..de5255b951e14 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac5.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac5.c
+@@ -572,68 +572,24 @@ static int dwmac5_est_write(void __iomem *ioaddr, u32 reg, u32 val, bool gcl)
+ int dwmac5_est_configure(void __iomem *ioaddr, struct stmmac_est *cfg,
+ 			 unsigned int ptp_rate)
+ {
+-	u32 speed, total_offset, offset, ctrl, ctr_low;
+-	u32 extcfg = readl(ioaddr + GMAC_EXT_CONFIG);
+-	u32 mac_cfg = readl(ioaddr + GMAC_CONFIG);
+ 	int i, ret = 0x0;
+-	u64 total_ctr;
+-
+-	if (extcfg & GMAC_CONFIG_EIPG_EN) {
+-		offset = (extcfg & GMAC_CONFIG_EIPG) >> GMAC_CONFIG_EIPG_SHIFT;
+-		offset = 104 + (offset * 8);
+-	} else {
+-		offset = (mac_cfg & GMAC_CONFIG_IPG) >> GMAC_CONFIG_IPG_SHIFT;
+-		offset = 96 - (offset * 8);
+-	}
+-
+-	speed = mac_cfg & (GMAC_CONFIG_PS | GMAC_CONFIG_FES);
+-	speed = speed >> GMAC_CONFIG_FES_SHIFT;
+-
+-	switch (speed) {
+-	case 0x0:
+-		offset = offset * 1000; /* 1G */
+-		break;
+-	case 0x1:
+-		offset = offset * 400; /* 2.5G */
+-		break;
+-	case 0x2:
+-		offset = offset * 100000; /* 10M */
+-		break;
+-	case 0x3:
+-		offset = offset * 10000; /* 100M */
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	offset = offset / 1000;
++	u32 ctrl;
+ 
+ 	ret |= dwmac5_est_write(ioaddr, BTR_LOW, cfg->btr[0], false);
+ 	ret |= dwmac5_est_write(ioaddr, BTR_HIGH, cfg->btr[1], false);
+ 	ret |= dwmac5_est_write(ioaddr, TER, cfg->ter, false);
+ 	ret |= dwmac5_est_write(ioaddr, LLR, cfg->gcl_size, false);
++	ret |= dwmac5_est_write(ioaddr, CTR_LOW, cfg->ctr[0], false);
++	ret |= dwmac5_est_write(ioaddr, CTR_HIGH, cfg->ctr[1], false);
+ 	if (ret)
+ 		return ret;
+ 
+-	total_offset = 0;
+ 	for (i = 0; i < cfg->gcl_size; i++) {
+-		ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i] + offset, true);
++		ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i], true);
+ 		if (ret)
+ 			return ret;
+-
+-		total_offset += offset;
+ 	}
+ 
+-	total_ctr = cfg->ctr[0] + cfg->ctr[1] * 1000000000ULL;
+-	total_ctr += total_offset;
+-
+-	ctr_low = do_div(total_ctr, 1000000000);
+-
+-	ret |= dwmac5_est_write(ioaddr, CTR_LOW, ctr_low, false);
+-	ret |= dwmac5_est_write(ioaddr, CTR_HIGH, total_ctr, false);
+-	if (ret)
+-		return ret;
+-
+ 	ctrl = readl(ioaddr + MTL_EST_CONTROL);
+ 	ctrl &= ~PTOV;
+ 	ctrl |= ((1000000000 / ptp_rate) * 6) << PTOV_SHIFT;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index c33db79cdd0ad..b3d6d8e3f4de9 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2158,7 +2158,7 @@ static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan)
+ 			spin_lock_irqsave(&ch->lock, flags);
+ 			stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 0);
+ 			spin_unlock_irqrestore(&ch->lock, flags);
+-			__napi_schedule_irqoff(&ch->rx_napi);
++			__napi_schedule(&ch->rx_napi);
+ 		}
+ 	}
+ 
+@@ -2167,7 +2167,7 @@ static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan)
+ 			spin_lock_irqsave(&ch->lock, flags);
+ 			stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 0, 1);
+ 			spin_unlock_irqrestore(&ch->lock, flags);
+-			__napi_schedule_irqoff(&ch->tx_napi);
++			__napi_schedule(&ch->tx_napi);
+ 		}
+ 	}
+ 
+@@ -3996,6 +3996,7 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+ 	int txfifosz = priv->plat->tx_fifo_size;
++	const int mtu = new_mtu;
+ 
+ 	if (txfifosz == 0)
+ 		txfifosz = priv->dma_cap.tx_fifo_size;
+@@ -4013,7 +4014,7 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
+ 	if ((txfifosz < new_mtu) || (new_mtu > BUF_SIZE_16KiB))
+ 		return -EINVAL;
+ 
+-	dev->mtu = new_mtu;
++	dev->mtu = mtu;
+ 
+ 	netdev_update_features(dev);
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+index cc27d660a8185..06553d028d746 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+@@ -605,7 +605,8 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ {
+ 	u32 size, wid = priv->dma_cap.estwid, dep = priv->dma_cap.estdep;
+ 	struct plat_stmmacenet_data *plat = priv->plat;
+-	struct timespec64 time;
++	struct timespec64 time, current_time;
++	ktime_t current_time_ns;
+ 	bool fpe = false;
+ 	int i, ret = 0;
+ 	u64 ctr;
+@@ -700,7 +701,22 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ 	}
+ 
+ 	/* Adjust for real system time */
+-	time = ktime_to_timespec64(qopt->base_time);
++	priv->ptp_clock_ops.gettime64(&priv->ptp_clock_ops, &current_time);
++	current_time_ns = timespec64_to_ktime(current_time);
++	if (ktime_after(qopt->base_time, current_time_ns)) {
++		time = ktime_to_timespec64(qopt->base_time);
++	} else {
++		ktime_t base_time;
++		s64 n;
++
++		n = div64_s64(ktime_sub_ns(current_time_ns, qopt->base_time),
++			      qopt->cycle_time);
++		base_time = ktime_add_ns(qopt->base_time,
++					 (n + 1) * qopt->cycle_time);
++
++		time = ktime_to_timespec64(base_time);
++	}
++
+ 	priv->plat->est->btr[0] = (u32)time.tv_nsec;
+ 	priv->plat->est->btr[1] = (u32)time.tv_sec;
+ 
+diff --git a/drivers/net/ipa/ipa_modem.c b/drivers/net/ipa/ipa_modem.c
+index e34fe2d77324e..9b08eb8239846 100644
+--- a/drivers/net/ipa/ipa_modem.c
++++ b/drivers/net/ipa/ipa_modem.c
+@@ -216,6 +216,7 @@ int ipa_modem_start(struct ipa *ipa)
+ 	ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX]->netdev = netdev;
+ 	ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]->netdev = netdev;
+ 
++	SET_NETDEV_DEV(netdev, &ipa->pdev->dev);
+ 	priv = netdev_priv(netdev);
+ 	priv->ipa = ipa;
+ 
+diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
+index 0fc39ac5ca88b..10722fed666de 100644
+--- a/drivers/net/phy/smsc.c
++++ b/drivers/net/phy/smsc.c
+@@ -284,7 +284,8 @@ static int smsc_phy_probe(struct phy_device *phydev)
+ 	/* Make clk optional to keep DTB backward compatibility. */
+ 	priv->refclk = clk_get_optional(dev, NULL);
+ 	if (IS_ERR(priv->refclk))
+-		dev_err_probe(dev, PTR_ERR(priv->refclk), "Failed to request clock\n");
++		return dev_err_probe(dev, PTR_ERR(priv->refclk),
++				     "Failed to request clock\n");
+ 
+ 	ret = clk_prepare_enable(priv->refclk);
+ 	if (ret)
+diff --git a/drivers/net/usb/rndis_host.c b/drivers/net/usb/rndis_host.c
+index 6fa7a009a24a4..f9b359d4e2939 100644
+--- a/drivers/net/usb/rndis_host.c
++++ b/drivers/net/usb/rndis_host.c
+@@ -387,7 +387,7 @@ generic_rndis_bind(struct usbnet *dev, struct usb_interface *intf, int flags)
+ 	reply_len = sizeof *phym;
+ 	retval = rndis_query(dev, intf, u.buf,
+ 			     RNDIS_OID_GEN_PHYSICAL_MEDIUM,
+-			     0, (void **) &phym, &reply_len);
++			     reply_len, (void **)&phym, &reply_len);
+ 	if (retval != 0 || !phym) {
+ 		/* OID is optional so don't fail here. */
+ 		phym_unspec = cpu_to_le32(RNDIS_PHYSICAL_MEDIUM_UNSPECIFIED);
+diff --git a/drivers/spi/spi-cadence.c b/drivers/spi/spi-cadence.c
+index 70467b9d61baa..a3afd1b9ac567 100644
+--- a/drivers/spi/spi-cadence.c
++++ b/drivers/spi/spi-cadence.c
+@@ -115,6 +115,7 @@ struct cdns_spi {
+ 	void __iomem *regs;
+ 	struct clk *ref_clk;
+ 	struct clk *pclk;
++	unsigned int clk_rate;
+ 	u32 speed_hz;
+ 	const u8 *txbuf;
+ 	u8 *rxbuf;
+@@ -250,7 +251,7 @@ static void cdns_spi_config_clock_freq(struct spi_device *spi,
+ 	u32 ctrl_reg, baud_rate_val;
+ 	unsigned long frequency;
+ 
+-	frequency = clk_get_rate(xspi->ref_clk);
++	frequency = xspi->clk_rate;
+ 
+ 	ctrl_reg = cdns_spi_read(xspi, CDNS_SPI_CR);
+ 
+@@ -558,8 +559,9 @@ static int cdns_spi_probe(struct platform_device *pdev)
+ 	master->auto_runtime_pm = true;
+ 	master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH;
+ 
++	xspi->clk_rate = clk_get_rate(xspi->ref_clk);
+ 	/* Set to default valid value */
+-	master->max_speed_hz = clk_get_rate(xspi->ref_clk) / 4;
++	master->max_speed_hz = xspi->clk_rate / 4;
+ 	xspi->speed_hz = master->max_speed_hz;
+ 
+ 	master->bits_per_word_mask = SPI_BPW_MASK(8);
+diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
+index 9494257e1c33f..6d8e0a05a5355 100644
+--- a/drivers/spi/spi-fsl-spi.c
++++ b/drivers/spi/spi-fsl-spi.c
+@@ -115,14 +115,13 @@ static void fsl_spi_chipselect(struct spi_device *spi, int value)
+ {
+ 	struct mpc8xxx_spi *mpc8xxx_spi = spi_master_get_devdata(spi->master);
+ 	struct fsl_spi_platform_data *pdata;
+-	bool pol = spi->mode & SPI_CS_HIGH;
+ 	struct spi_mpc8xxx_cs	*cs = spi->controller_state;
+ 
+ 	pdata = spi->dev.parent->parent->platform_data;
+ 
+ 	if (value == BITBANG_CS_INACTIVE) {
+ 		if (pdata->cs_control)
+-			pdata->cs_control(spi, !pol);
++			pdata->cs_control(spi, false);
+ 	}
+ 
+ 	if (value == BITBANG_CS_ACTIVE) {
+@@ -134,7 +133,7 @@ static void fsl_spi_chipselect(struct spi_device *spi, int value)
+ 		fsl_spi_change_mode(spi);
+ 
+ 		if (pdata->cs_control)
+-			pdata->cs_control(spi, pol);
++			pdata->cs_control(spi, true);
+ 	}
+ }
+ 
+diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
+index 2277f83da2501..716566da400e1 100644
+--- a/fs/nfsd/nfs3xdr.c
++++ b/fs/nfsd/nfs3xdr.c
+@@ -863,9 +863,14 @@ compose_entry_fh(struct nfsd3_readdirres *cd, struct svc_fh *fhp,
+ 	if (isdotent(name, namlen)) {
+ 		if (namlen == 2) {
+ 			dchild = dget_parent(dparent);
+-			/* filesystem root - cannot return filehandle for ".." */
++			/*
++			 * Don't return filehandle for ".." if we're at
++			 * the filesystem or export root:
++			 */
+ 			if (dchild == dparent)
+ 				goto out;
++			if (dparent == exp->ex_path.dentry)
++				goto out;
+ 		} else
+ 			dchild = dget(dparent);
+ 	} else
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 6ec088a96302f..96555a8a2c545 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -1391,12 +1391,13 @@ int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
+ 		if (ctx.optlen != 0) {
+ 			*optlen = ctx.optlen;
+ 			*kernel_optval = ctx.optval;
++			/* export and don't free sockopt buf */
++			return 0;
+ 		}
+ 	}
+ 
+ out:
+-	if (ret)
+-		sockopt_free_buf(&ctx);
++	sockopt_free_buf(&ctx);
+ 	return ret;
+ }
+ 
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index deda1185237b8..c489430cac78c 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -108,7 +108,7 @@ BPF_CALL_2(bpf_map_peek_elem, struct bpf_map *, map, void *, value)
+ }
+ 
+ const struct bpf_func_proto bpf_map_peek_elem_proto = {
+-	.func		= bpf_map_pop_elem,
++	.func		= bpf_map_peek_elem,
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_CONST_MAP_PTR,
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 53fe6ef6d931f..618cb1b451ade 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2214,6 +2214,8 @@ static bool is_spillable_regtype(enum bpf_reg_type type)
+ 	case PTR_TO_RDWR_BUF:
+ 	case PTR_TO_RDWR_BUF_OR_NULL:
+ 	case PTR_TO_PERCPU_BTF_ID:
++	case PTR_TO_MEM:
++	case PTR_TO_MEM_OR_NULL:
+ 		return true;
+ 	default:
+ 		return false;
+@@ -5255,7 +5257,7 @@ static bool signed_add_overflows(s64 a, s64 b)
+ 	return res < a;
+ }
+ 
+-static bool signed_add32_overflows(s64 a, s64 b)
++static bool signed_add32_overflows(s32 a, s32 b)
+ {
+ 	/* Do the add in u32, where overflow is well-defined */
+ 	s32 res = (s32)((u32)a + (u32)b);
+@@ -5265,7 +5267,7 @@ static bool signed_add32_overflows(s64 a, s64 b)
+ 	return res < a;
+ }
+ 
+-static bool signed_sub_overflows(s32 a, s32 b)
++static bool signed_sub_overflows(s64 a, s64 b)
+ {
+ 	/* Do the sub in u64, where overflow is well-defined */
+ 	s64 res = (s64)((u64)a - (u64)b);
+@@ -5277,7 +5279,7 @@ static bool signed_sub_overflows(s32 a, s32 b)
+ 
+ static bool signed_sub32_overflows(s32 a, s32 b)
+ {
+-	/* Do the sub in u64, where overflow is well-defined */
++	/* Do the sub in u32, where overflow is well-defined */
+ 	s32 res = (s32)((u32)a - (u32)b);
+ 
+ 	if (b < 0)
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index fbadd93b95ace..f0d6dba37b43d 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -496,13 +496,17 @@ EXPORT_SYMBOL(__netdev_alloc_skb);
+ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
+ 				 gfp_t gfp_mask)
+ {
+-	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
++	struct napi_alloc_cache *nc;
+ 	struct sk_buff *skb;
+ 	void *data;
+ 
+ 	len += NET_SKB_PAD + NET_IP_ALIGN;
+ 
+-	if ((len > SKB_WITH_OVERHEAD(PAGE_SIZE)) ||
++	/* If requested length is either too small or too big,
++	 * we use kmalloc() for skb->head allocation.
++	 */
++	if (len <= SKB_WITH_OVERHEAD(1024) ||
++	    len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
+ 	    (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
+ 		skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
+ 		if (!skb)
+@@ -510,6 +514,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
+ 		goto skb_success;
+ 	}
+ 
++	nc = this_cpu_ptr(&napi_alloc_cache);
+ 	len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ 	len = SKB_DATA_ALIGN(len);
+ 
+@@ -3648,7 +3653,8 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
+ 	unsigned int delta_truesize = 0;
+ 	unsigned int delta_len = 0;
+ 	struct sk_buff *tail = NULL;
+-	struct sk_buff *nskb;
++	struct sk_buff *nskb, *tmp;
++	int err;
+ 
+ 	skb_push(skb, -skb_network_offset(skb) + offset);
+ 
+@@ -3658,11 +3664,28 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
+ 		nskb = list_skb;
+ 		list_skb = list_skb->next;
+ 
++		err = 0;
++		if (skb_shared(nskb)) {
++			tmp = skb_clone(nskb, GFP_ATOMIC);
++			if (tmp) {
++				consume_skb(nskb);
++				nskb = tmp;
++				err = skb_unclone(nskb, GFP_ATOMIC);
++			} else {
++				err = -ENOMEM;
++			}
++		}
++
+ 		if (!tail)
+ 			skb->next = nskb;
+ 		else
+ 			tail->next = nskb;
+ 
++		if (unlikely(err)) {
++			nskb->next = list_skb;
++			goto err_linearize;
++		}
++
+ 		tail = nskb;
+ 
+ 		delta_len += nskb->len;
+diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
+index bbdd3c7b6cb5b..b065f0a103ed0 100644
+--- a/net/core/sock_reuseport.c
++++ b/net/core/sock_reuseport.c
+@@ -293,7 +293,7 @@ select_by_hash:
+ 			i = j = reciprocal_scale(hash, socks);
+ 			while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) {
+ 				i++;
+-				if (i >= reuse->num_socks)
++				if (i >= socks)
+ 					i = 0;
+ 				if (i == j)
+ 					goto out;
+diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c
+index 16014ad194066..a352ce4f878a3 100644
+--- a/net/dcb/dcbnl.c
++++ b/net/dcb/dcbnl.c
+@@ -1765,6 +1765,8 @@ static int dcb_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	fn = &reply_funcs[dcb->cmd];
+ 	if (!fn->cb)
+ 		return -EOPNOTSUPP;
++	if (fn->type == RTM_SETDCB && !netlink_capable(skb, CAP_NET_ADMIN))
++		return -EPERM;
+ 
+ 	if (!tb[DCB_ATTR_IFNAME])
+ 		return -EINVAL;
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index 183003e45762a..a47e0f9b20d0a 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -353,9 +353,13 @@ static int dsa_port_devlink_setup(struct dsa_port *dp)
+ 
+ static void dsa_port_teardown(struct dsa_port *dp)
+ {
++	struct devlink_port *dlp = &dp->devlink_port;
++
+ 	if (!dp->setup)
+ 		return;
+ 
++	devlink_port_type_clear(dlp);
++
+ 	switch (dp->type) {
+ 	case DSA_PORT_TYPE_UNUSED:
+ 		break;
+diff --git a/net/dsa/master.c b/net/dsa/master.c
+index c91de041a91d8..3a44da35dfeba 100644
+--- a/net/dsa/master.c
++++ b/net/dsa/master.c
+@@ -308,8 +308,18 @@ static struct lock_class_key dsa_master_addr_list_lock_key;
+ 
+ int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
+ {
++	struct dsa_switch *ds = cpu_dp->ds;
++	struct device_link *consumer_link;
+ 	int ret;
+ 
++	/* The DSA master must use SET_NETDEV_DEV for this to work. */
++	consumer_link = device_link_add(ds->dev, dev->dev.parent,
++					DL_FLAG_AUTOREMOVE_CONSUMER);
++	if (!consumer_link)
++		netdev_err(dev,
++			   "Failed to create a device link to DSA switch %s\n",
++			   dev_name(ds->dev));
++
+ 	rtnl_lock();
+ 	ret = dev_set_mtu(dev, ETH_DATA_LEN + cpu_dp->tag_ops->overhead);
+ 	rtnl_unlock();
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 8b07f3a4f2db2..a3271ec3e1627 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -443,7 +443,6 @@ static int esp_output_encap(struct xfrm_state *x, struct sk_buff *skb,
+ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
+ {
+ 	u8 *tail;
+-	u8 *vaddr;
+ 	int nfrags;
+ 	int esph_offset;
+ 	struct page *page;
+@@ -485,14 +484,10 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
+ 			page = pfrag->page;
+ 			get_page(page);
+ 
+-			vaddr = kmap_atomic(page);
+-
+-			tail = vaddr + pfrag->offset;
++			tail = page_address(page) + pfrag->offset;
+ 
+ 			esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto);
+ 
+-			kunmap_atomic(vaddr);
+-
+ 			nfrags = skb_shinfo(skb)->nr_frags;
+ 
+ 			__skb_fill_page_desc(skb, nfrags, page, pfrag->offset,
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 52c2f063529fb..2b804fcebcc65 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -478,7 +478,6 @@ static int esp6_output_encap(struct xfrm_state *x, struct sk_buff *skb,
+ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
+ {
+ 	u8 *tail;
+-	u8 *vaddr;
+ 	int nfrags;
+ 	int esph_offset;
+ 	struct page *page;
+@@ -519,14 +518,10 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
+ 			page = pfrag->page;
+ 			get_page(page);
+ 
+-			vaddr = kmap_atomic(page);
+-
+-			tail = vaddr + pfrag->offset;
++			tail = page_address(page) + pfrag->offset;
+ 
+ 			esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto);
+ 
+-			kunmap_atomic(vaddr);
+-
+ 			nfrags = skb_shinfo(skb)->nr_frags;
+ 
+ 			__skb_fill_page_desc(skb, nfrags, page, pfrag->offset,
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 749ad72386b23..077d43af8226b 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -125,8 +125,43 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
+ 	return -EINVAL;
+ }
+ 
++static int
++ip6_finish_output_gso_slowpath_drop(struct net *net, struct sock *sk,
++				    struct sk_buff *skb, unsigned int mtu)
++{
++	struct sk_buff *segs, *nskb;
++	netdev_features_t features;
++	int ret = 0;
++
++	/* Please see corresponding comment in ip_finish_output_gso
++	 * describing the cases where GSO segment length exceeds the
++	 * egress MTU.
++	 */
++	features = netif_skb_features(skb);
++	segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
++	if (IS_ERR_OR_NULL(segs)) {
++		kfree_skb(skb);
++		return -ENOMEM;
++	}
++
++	consume_skb(skb);
++
++	skb_list_walk_safe(segs, segs, nskb) {
++		int err;
++
++		skb_mark_not_on_list(segs);
++		err = ip6_fragment(net, sk, segs, ip6_finish_output2);
++		if (err && ret == 0)
++			ret = err;
++	}
++
++	return ret;
++}
++
+ static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
++	unsigned int mtu;
++
+ #if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM)
+ 	/* Policy lookup after SNAT yielded a new policy */
+ 	if (skb_dst(skb)->xfrm) {
+@@ -135,7 +170,11 @@ static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff
+ 	}
+ #endif
+ 
+-	if ((skb->len > ip6_skb_dst_mtu(skb) && !skb_is_gso(skb)) ||
++	mtu = ip6_skb_dst_mtu(skb);
++	if (skb_is_gso(skb) && !skb_gso_validate_network_len(skb, mtu))
++		return ip6_finish_output_gso_slowpath_drop(net, sk, skb, mtu);
++
++	if ((skb->len > mtu && !skb_is_gso(skb)) ||
+ 	    dst_allfrag(skb_dst(skb)) ||
+ 	    (IP6CB(skb)->frag_max_size && skb->len > IP6CB(skb)->frag_max_size))
+ 		return ip6_fragment(net, sk, skb, ip6_finish_output2);
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index 5e7983cb61546..ff048cb8d8074 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -1645,8 +1645,11 @@ static int ipip6_newlink(struct net *src_net, struct net_device *dev,
+ 	}
+ 
+ #ifdef CONFIG_IPV6_SIT_6RD
+-	if (ipip6_netlink_6rd_parms(data, &ip6rd))
++	if (ipip6_netlink_6rd_parms(data, &ip6rd)) {
+ 		err = ipip6_tunnel_update_6rd(nt, &ip6rd);
++		if (err < 0)
++			unregister_netdevice_queue(dev, NULL);
++	}
+ #endif
+ 
+ 	return err;
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 56a4d0d20a267..ca1e9de388910 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -662,7 +662,7 @@ ieee80211_tx_h_select_key(struct ieee80211_tx_data *tx)
+ 		if (!skip_hw && tx->key &&
+ 		    tx->key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE)
+ 			info->control.hw_key = &tx->key->conf;
+-	} else if (!ieee80211_is_mgmt(hdr->frame_control) && tx->sta &&
++	} else if (ieee80211_is_data_present(hdr->frame_control) && tx->sta &&
+ 		   test_sta_flag(tx->sta, WLAN_STA_USES_ENCRYPTION)) {
+ 		return TX_DROP;
+ 	}
+@@ -3836,7 +3836,7 @@ void __ieee80211_schedule_txq(struct ieee80211_hw *hw,
+ 		 * get immediately moved to the back of the list on the next
+ 		 * call to ieee80211_next_txq().
+ 		 */
+-		if (txqi->txq.sta &&
++		if (txqi->txq.sta && local->airtime_flags &&
+ 		    wiphy_ext_feature_isset(local->hw.wiphy,
+ 					    NL80211_EXT_FEATURE_AIRTIME_FAIRNESS))
+ 			list_add(&txqi->schedule_order,
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 667c44aa5a63c..dc201363f2c48 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -430,7 +430,7 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ 		return;
+ 	}
+ 
+-	if (call->state == RXRPC_CALL_SERVER_RECV_REQUEST) {
++	if (state == RXRPC_CALL_SERVER_RECV_REQUEST) {
+ 		unsigned long timo = READ_ONCE(call->next_req_timo);
+ 		unsigned long now, expect_req_by;
+ 
+diff --git a/net/rxrpc/key.c b/net/rxrpc/key.c
+index 2e8bd3b97301e..979338a64c0ca 100644
+--- a/net/rxrpc/key.c
++++ b/net/rxrpc/key.c
+@@ -1109,7 +1109,7 @@ static long rxrpc_read(const struct key *key,
+ 		default: /* we have a ticket we can't encode */
+ 			pr_err("Unsupported key token type (%u)\n",
+ 			       token->security_index);
+-			continue;
++			return -ENOPKG;
+ 		}
+ 
+ 		_debug("token[%u]: toksize=%u", ntoks, toksize);
+@@ -1224,7 +1224,9 @@ static long rxrpc_read(const struct key *key,
+ 			break;
+ 
+ 		default:
+-			break;
++			pr_err("Unsupported key token type (%u)\n",
++			       token->security_index);
++			return -ENOPKG;
+ 		}
+ 
+ 		ASSERTCMP((unsigned long)xdr - (unsigned long)oldxdr, ==,
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index 06b880da2a8ea..c92e6984933cb 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -996,7 +996,6 @@ void tipc_link_reset(struct tipc_link *l)
+ int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list,
+ 		   struct sk_buff_head *xmitq)
+ {
+-	struct tipc_msg *hdr = buf_msg(skb_peek(list));
+ 	struct sk_buff_head *backlogq = &l->backlogq;
+ 	struct sk_buff_head *transmq = &l->transmq;
+ 	struct sk_buff *skb, *_skb;
+@@ -1004,13 +1003,18 @@ int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list,
+ 	u16 ack = l->rcv_nxt - 1;
+ 	u16 seqno = l->snd_nxt;
+ 	int pkt_cnt = skb_queue_len(list);
+-	int imp = msg_importance(hdr);
+ 	unsigned int mss = tipc_link_mss(l);
+ 	unsigned int cwin = l->window;
+ 	unsigned int mtu = l->mtu;
++	struct tipc_msg *hdr;
+ 	bool new_bundle;
+ 	int rc = 0;
++	int imp;
++
++	if (pkt_cnt <= 0)
++		return 0;
+ 
++	hdr = buf_msg(skb_peek(list));
+ 	if (unlikely(msg_size(hdr) > mtu)) {
+ 		pr_warn("Too large msg, purging xmit list %d %d %d %d %d!\n",
+ 			skb_queue_len(list), msg_user(hdr),
+@@ -1019,6 +1023,7 @@ int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list,
+ 		return -EMSGSIZE;
+ 	}
+ 
++	imp = msg_importance(hdr);
+ 	/* Allow oversubscription of one data msg per source at congestion */
+ 	if (unlikely(l->backlog[imp].len >= l->backlog[imp].limit)) {
+ 		if (imp == TIPC_SYSTEM_IMPORTANCE) {
+diff --git a/scripts/kconfig/Makefile b/scripts/kconfig/Makefile
+index 2c40e68853dde..e46df0a2d4f9d 100644
+--- a/scripts/kconfig/Makefile
++++ b/scripts/kconfig/Makefile
+@@ -94,6 +94,16 @@ configfiles=$(wildcard $(srctree)/kernel/configs/$@ $(srctree)/arch/$(SRCARCH)/c
+ 	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh -m .config $(configfiles)
+ 	$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig
+ 
++PHONY += kvmconfig
++kvmconfig: kvm_guest.config
++	@echo >&2 "WARNING: 'make $@' will be removed after Linux 5.10"
++	@echo >&2 "         Please use 'make $<' instead."
++
++PHONY += xenconfig
++xenconfig: xen.config
++	@echo >&2 "WARNING: 'make $@' will be removed after Linux 5.10"
++	@echo >&2 "         Please use 'make $<' instead."
++
+ PHONY += tinyconfig
+ tinyconfig:
+ 	$(Q)$(MAKE) -f $(srctree)/Makefile allnoconfig tiny.config
+diff --git a/tools/testing/selftests/bpf/progs/profiler.inc.h b/tools/testing/selftests/bpf/progs/profiler.inc.h
+index 30982a7e4d0f7..4896fdf816f73 100644
+--- a/tools/testing/selftests/bpf/progs/profiler.inc.h
++++ b/tools/testing/selftests/bpf/progs/profiler.inc.h
+@@ -256,6 +256,7 @@ static INLINE void* populate_cgroup_info(struct cgroup_data_t* cgroup_data,
+ 		BPF_CORE_READ(task, nsproxy, cgroup_ns, root_cset, dfl_cgrp, kn);
+ 	struct kernfs_node* proc_kernfs = BPF_CORE_READ(task, cgroups, dfl_cgrp, kn);
+ 
++#if __has_builtin(__builtin_preserve_enum_value)
+ 	if (ENABLE_CGROUP_V1_RESOLVER && CONFIG_CGROUP_PIDS) {
+ 		int cgrp_id = bpf_core_enum_value(enum cgroup_subsys_id___local,
+ 						  pids_cgrp_id___local);
+@@ -275,6 +276,7 @@ static INLINE void* populate_cgroup_info(struct cgroup_data_t* cgroup_data,
+ 			}
+ 		}
+ 	}
++#endif
+ 
+ 	cgroup_data->cgroup_root_inode = get_inode_from_kernfs(root_kernfs);
+ 	cgroup_data->cgroup_proc_inode = get_inode_from_kernfs(proc_kernfs);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-01-27 11:29 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-01-27 11:29 UTC (permalink / raw
  To: gentoo-commits

commit:     42b6a29af6c32b8480be206fc4489da99d58ab2c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jan 27 11:28:59 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jan 27 11:28:59 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=42b6a29a

Linux patch 5.10.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1010_linux-5.10.11.patch | 7021 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7025 insertions(+)

diff --git a/0000_README b/0000_README
index 4ad6d69..fe8a778 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-5.10.10.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.10
 
+Patch:  1010_linux-5.10.11.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.11
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-5.10.11.patch b/1010_linux-5.10.11.patch
new file mode 100644
index 0000000..a4b4a65
--- /dev/null
+++ b/1010_linux-5.10.11.patch
@@ -0,0 +1,7021 @@
+diff --git a/Documentation/ABI/testing/sysfs-class-devlink b/Documentation/ABI/testing/sysfs-class-devlink
+index b662f747c83eb..8a21ce515f61f 100644
+--- a/Documentation/ABI/testing/sysfs-class-devlink
++++ b/Documentation/ABI/testing/sysfs-class-devlink
+@@ -5,8 +5,8 @@ Description:
+ 		Provide a place in sysfs for the device link objects in the
+ 		kernel at any given time.  The name of a device link directory,
+ 		denoted as ... above, is of the form <supplier>--<consumer>
+-		where <supplier> is the supplier device name and <consumer> is
+-		the consumer device name.
++		where <supplier> is the supplier bus:device name and <consumer>
++		is the consumer bus:device name.
+ 
+ What:		/sys/class/devlink/.../auto_remove_on
+ Date:		May 2020
+diff --git a/Documentation/ABI/testing/sysfs-devices-consumer b/Documentation/ABI/testing/sysfs-devices-consumer
+index 1f06d74d1c3cc..0809fda092e66 100644
+--- a/Documentation/ABI/testing/sysfs-devices-consumer
++++ b/Documentation/ABI/testing/sysfs-devices-consumer
+@@ -4,5 +4,6 @@ Contact:	Saravana Kannan <saravanak@google.com>
+ Description:
+ 		The /sys/devices/.../consumer:<consumer> are symlinks to device
+ 		links where this device is the supplier. <consumer> denotes the
+-		name of the consumer in that device link. There can be zero or
+-		more of these symlinks for a given device.
++		name of the consumer in that device link and is of the form
++		bus:device name. There can be zero or more of these symlinks
++		for a given device.
+diff --git a/Documentation/ABI/testing/sysfs-devices-supplier b/Documentation/ABI/testing/sysfs-devices-supplier
+index a919e0db5e902..207f5972e98d8 100644
+--- a/Documentation/ABI/testing/sysfs-devices-supplier
++++ b/Documentation/ABI/testing/sysfs-devices-supplier
+@@ -4,5 +4,6 @@ Contact:	Saravana Kannan <saravanak@google.com>
+ Description:
+ 		The /sys/devices/.../supplier:<supplier> are symlinks to device
+ 		links where this device is the consumer. <supplier> denotes the
+-		name of the supplier in that device link. There can be zero or
+-		more of these symlinks for a given device.
++		name of the supplier in that device link and is of the form
++		bus:device name. There can be zero or more of these symlinks
++		for a given device.
+diff --git a/Documentation/admin-guide/device-mapper/dm-integrity.rst b/Documentation/admin-guide/device-mapper/dm-integrity.rst
+index 3ab4f7756a6e6..bf878c879afb6 100644
+--- a/Documentation/admin-guide/device-mapper/dm-integrity.rst
++++ b/Documentation/admin-guide/device-mapper/dm-integrity.rst
+@@ -177,14 +177,20 @@ bitmap_flush_interval:number
+ 	The bitmap flush interval in milliseconds. The metadata buffers
+ 	are synchronized when this interval expires.
+ 
++allow_discards
++	Allow block discard requests (a.k.a. TRIM) for the integrity device.
++	Discards are only allowed to devices using internal hash.
++
+ fix_padding
+ 	Use a smaller padding of the tag area that is more
+ 	space-efficient. If this option is not present, large padding is
+ 	used - that is for compatibility with older kernels.
+ 
+-allow_discards
+-	Allow block discard requests (a.k.a. TRIM) for the integrity device.
+-	Discards are only allowed to devices using internal hash.
++legacy_recalculate
++	Allow recalculating of volumes with HMAC keys. This is disabled by
++	default for security reasons - an attacker could modify the volume,
++	set recalc_sector to zero, and the kernel would not detect the
++	modification.
+ 
+ The journal mode (D/J), buffer_sectors, journal_watermark, commit_time and
+ allow_discards can be changed when reloading the target (load an inactive
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index f6a1513dfb76c..26bfe7ae711b8 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5965,6 +5965,10 @@
+ 			This option is obsoleted by the "nopv" option, which
+ 			has equivalent effect for XEN platform.
+ 
++	xen_no_vector_callback
++			[KNL,X86,XEN] Disable the vector callback for Xen
++			event channel interrupts.
++
+ 	xen_scrub_pages=	[XEN]
+ 			Boolean option to control scrubbing pages before giving them back
+ 			to Xen, for use by other domains. Can be also changed at runtime
+diff --git a/Makefile b/Makefile
+index 7d86ad6ad36cc..7a5d906f6ee36 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
+index 60e901cd0de6a..5a957a9a09843 100644
+--- a/arch/arm/xen/enlighten.c
++++ b/arch/arm/xen/enlighten.c
+@@ -371,7 +371,7 @@ static int __init xen_guest_init(void)
+ 	}
+ 	gnttab_init();
+ 	if (!xen_initial_domain())
+-		xenbus_probe(NULL);
++		xenbus_probe();
+ 
+ 	/*
+ 	 * Making sure board specific code will not set up ops for
+diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h
+index 015ddffaf6caa..b56a4b2bc2486 100644
+--- a/arch/arm64/include/asm/atomic.h
++++ b/arch/arm64/include/asm/atomic.h
+@@ -17,7 +17,7 @@
+ #include <asm/lse.h>
+ 
+ #define ATOMIC_OP(op)							\
+-static inline void arch_##op(int i, atomic_t *v)			\
++static __always_inline void arch_##op(int i, atomic_t *v)		\
+ {									\
+ 	__lse_ll_sc_body(op, i, v);					\
+ }
+@@ -32,7 +32,7 @@ ATOMIC_OP(atomic_sub)
+ #undef ATOMIC_OP
+ 
+ #define ATOMIC_FETCH_OP(name, op)					\
+-static inline int arch_##op##name(int i, atomic_t *v)			\
++static __always_inline int arch_##op##name(int i, atomic_t *v)		\
+ {									\
+ 	return __lse_ll_sc_body(op##name, i, v);			\
+ }
+@@ -56,7 +56,7 @@ ATOMIC_FETCH_OPS(atomic_sub_return)
+ #undef ATOMIC_FETCH_OPS
+ 
+ #define ATOMIC64_OP(op)							\
+-static inline void arch_##op(long i, atomic64_t *v)			\
++static __always_inline void arch_##op(long i, atomic64_t *v)		\
+ {									\
+ 	__lse_ll_sc_body(op, i, v);					\
+ }
+@@ -71,7 +71,7 @@ ATOMIC64_OP(atomic64_sub)
+ #undef ATOMIC64_OP
+ 
+ #define ATOMIC64_FETCH_OP(name, op)					\
+-static inline long arch_##op##name(long i, atomic64_t *v)		\
++static __always_inline long arch_##op##name(long i, atomic64_t *v)	\
+ {									\
+ 	return __lse_ll_sc_body(op##name, i, v);			\
+ }
+@@ -94,7 +94,7 @@ ATOMIC64_FETCH_OPS(atomic64_sub_return)
+ #undef ATOMIC64_FETCH_OP
+ #undef ATOMIC64_FETCH_OPS
+ 
+-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
++static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
+ {
+ 	return __lse_ll_sc_body(atomic64_dec_if_positive, v);
+ }
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index a8184cad88907..50852992752b0 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -914,13 +914,6 @@ static void do_signal(struct pt_regs *regs)
+ asmlinkage void do_notify_resume(struct pt_regs *regs,
+ 				 unsigned long thread_flags)
+ {
+-	/*
+-	 * The assembly code enters us with IRQs off, but it hasn't
+-	 * informed the tracing code of that for efficiency reasons.
+-	 * Update the trace code with the current status.
+-	 */
+-	trace_hardirqs_off();
+-
+ 	do {
+ 		/* Check valid user FS if needed */
+ 		addr_limit_user_check();
+diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
+index f8f758e4a3064..6fa8cfb8232aa 100644
+--- a/arch/arm64/kernel/syscall.c
++++ b/arch/arm64/kernel/syscall.c
+@@ -165,15 +165,8 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+ 	if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
+ 		local_daif_mask();
+ 		flags = current_thread_info()->flags;
+-		if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP)) {
+-			/*
+-			 * We're off to userspace, where interrupts are
+-			 * always enabled after we restore the flags from
+-			 * the SPSR.
+-			 */
+-			trace_hardirqs_on();
++		if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP))
+ 			return;
+-		}
+ 		local_daif_restore(DAIF_PROCCTX);
+ 	}
+ 
+diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
+index 1d32b174ab6ae..c1a8aac01cf91 100644
+--- a/arch/powerpc/include/asm/exception-64s.h
++++ b/arch/powerpc/include/asm/exception-64s.h
+@@ -63,6 +63,12 @@
+ 	nop;								\
+ 	nop;
+ 
++#define SCV_ENTRY_FLUSH_SLOT						\
++	SCV_ENTRY_FLUSH_FIXUP_SECTION;					\
++	nop;								\
++	nop;								\
++	nop;
++
+ /*
+  * r10 must be free to use, r13 must be paca
+  */
+@@ -70,6 +76,13 @@
+ 	STF_ENTRY_BARRIER_SLOT;						\
+ 	ENTRY_FLUSH_SLOT
+ 
++/*
++ * r10, ctr must be free to use, r13 must be paca
++ */
++#define SCV_INTERRUPT_TO_KERNEL						\
++	STF_ENTRY_BARRIER_SLOT;						\
++	SCV_ENTRY_FLUSH_SLOT
++
+ /*
+  * Macros for annotating the expected destination of (h)rfid
+  *
+diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
+index fbd406cd6916c..8d100059e266c 100644
+--- a/arch/powerpc/include/asm/feature-fixups.h
++++ b/arch/powerpc/include/asm/feature-fixups.h
+@@ -221,6 +221,14 @@ label##3:					       	\
+ 	FTR_ENTRY_OFFSET 957b-958b;			\
+ 	.popsection;
+ 
++#define SCV_ENTRY_FLUSH_FIXUP_SECTION			\
++957:							\
++	.pushsection __scv_entry_flush_fixup,"a";	\
++	.align 2;					\
++958:							\
++	FTR_ENTRY_OFFSET 957b-958b;			\
++	.popsection;
++
+ #define RFI_FLUSH_FIXUP_SECTION				\
+ 951:							\
+ 	.pushsection __rfi_flush_fixup,"a";		\
+@@ -254,10 +262,12 @@ label##3:					       	\
+ 
+ extern long stf_barrier_fallback;
+ extern long entry_flush_fallback;
++extern long scv_entry_flush_fallback;
+ extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
+ extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
+ extern long __start___uaccess_flush_fixup, __stop___uaccess_flush_fixup;
+ extern long __start___entry_flush_fixup, __stop___entry_flush_fixup;
++extern long __start___scv_entry_flush_fixup, __stop___scv_entry_flush_fixup;
+ extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
+ extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
+ extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
+diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
+index 2f3846192ec7d..2831b0aa92b15 100644
+--- a/arch/powerpc/kernel/entry_64.S
++++ b/arch/powerpc/kernel/entry_64.S
+@@ -75,7 +75,7 @@ BEGIN_FTR_SECTION
+ 	bne	.Ltabort_syscall
+ END_FTR_SECTION_IFSET(CPU_FTR_TM)
+ #endif
+-	INTERRUPT_TO_KERNEL
++	SCV_INTERRUPT_TO_KERNEL
+ 	mr	r10,r1
+ 	ld	r1,PACAKSAVE(r13)
+ 	std	r10,0(r1)
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index 4d01f09ecf808..3cde2fbd74fce 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -2993,6 +2993,25 @@ TRAMP_REAL_BEGIN(entry_flush_fallback)
+ 	ld	r11,PACA_EXRFI+EX_R11(r13)
+ 	blr
+ 
++/*
++ * The SCV entry flush happens with interrupts enabled, so it must disable
++ * to prevent EXRFI being clobbered by NMIs (e.g., soft_nmi_common). r10
++ * (containing LR) does not need to be preserved here because scv entry
++ * puts 0 in the pt_regs, CTR can be clobbered for the same reason.
++ */
++TRAMP_REAL_BEGIN(scv_entry_flush_fallback)
++	li	r10,0
++	mtmsrd	r10,1
++	lbz	r10,PACAIRQHAPPENED(r13)
++	ori	r10,r10,PACA_IRQ_HARD_DIS
++	stb	r10,PACAIRQHAPPENED(r13)
++	std	r11,PACA_EXRFI+EX_R11(r13)
++	L1D_DISPLACEMENT_FLUSH
++	ld	r11,PACA_EXRFI+EX_R11(r13)
++	li	r10,MSR_RI
++	mtmsrd	r10,1
++	blr
++
+ TRAMP_REAL_BEGIN(rfi_flush_fallback)
+ 	SET_SCRATCH0(r13);
+ 	GET_PACA(r13);
+diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
+index f887f9d5b9e84..4a1f494ef03f3 100644
+--- a/arch/powerpc/kernel/vmlinux.lds.S
++++ b/arch/powerpc/kernel/vmlinux.lds.S
+@@ -145,6 +145,13 @@ SECTIONS
+ 		__stop___entry_flush_fixup = .;
+ 	}
+ 
++	. = ALIGN(8);
++	__scv_entry_flush_fixup : AT(ADDR(__scv_entry_flush_fixup) - LOAD_OFFSET) {
++		__start___scv_entry_flush_fixup = .;
++		*(__scv_entry_flush_fixup)
++		__stop___scv_entry_flush_fixup = .;
++	}
++
+ 	. = ALIGN(8);
+ 	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
+ 		__start___stf_exit_barrier_fixup = .;
+@@ -187,6 +194,12 @@ SECTIONS
+ 	.init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
+ 		_sinittext = .;
+ 		INIT_TEXT
++
++		/*
++		 *.init.text might be RO so we must ensure this section ends on
++		 * a page boundary.
++		 */
++		. = ALIGN(PAGE_SIZE);
+ 		_einittext = .;
+ #ifdef CONFIG_PPC64
+ 		*(.tramp.ftrace.init);
+@@ -200,21 +213,9 @@ SECTIONS
+ 		EXIT_TEXT
+ 	}
+ 
+-	.init.data : AT(ADDR(.init.data) - LOAD_OFFSET) {
+-		INIT_DATA
+-	}
+-
+-	.init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) {
+-		INIT_SETUP(16)
+-	}
+-
+-	.initcall.init : AT(ADDR(.initcall.init) - LOAD_OFFSET) {
+-		INIT_CALLS
+-	}
++	. = ALIGN(PAGE_SIZE);
+ 
+-	.con_initcall.init : AT(ADDR(.con_initcall.init) - LOAD_OFFSET) {
+-		CON_INITCALL
+-	}
++	INIT_DATA_SECTION(16)
+ 
+ 	. = ALIGN(8);
+ 	__ftr_fixup : AT(ADDR(__ftr_fixup) - LOAD_OFFSET) {
+@@ -242,9 +243,6 @@ SECTIONS
+ 		__stop___fw_ftr_fixup = .;
+ 	}
+ #endif
+-	.init.ramfs : AT(ADDR(.init.ramfs) - LOAD_OFFSET) {
+-		INIT_RAM_FS
+-	}
+ 
+ 	PERCPU_SECTION(L1_CACHE_BYTES)
+ 
+diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
+index 321c12a9ef6b8..92705d6dfb6e0 100644
+--- a/arch/powerpc/lib/feature-fixups.c
++++ b/arch/powerpc/lib/feature-fixups.c
+@@ -290,9 +290,6 @@ void do_entry_flush_fixups(enum l1d_flush_type types)
+ 	long *start, *end;
+ 	int i;
+ 
+-	start = PTRRELOC(&__start___entry_flush_fixup);
+-	end = PTRRELOC(&__stop___entry_flush_fixup);
+-
+ 	instrs[0] = 0x60000000; /* nop */
+ 	instrs[1] = 0x60000000; /* nop */
+ 	instrs[2] = 0x60000000; /* nop */
+@@ -312,6 +309,8 @@ void do_entry_flush_fixups(enum l1d_flush_type types)
+ 	if (types & L1D_FLUSH_MTTRIG)
+ 		instrs[i++] = 0x7c12dba6; /* mtspr TRIG2,r0 (SPR #882) */
+ 
++	start = PTRRELOC(&__start___entry_flush_fixup);
++	end = PTRRELOC(&__stop___entry_flush_fixup);
+ 	for (i = 0; start < end; start++, i++) {
+ 		dest = (void *)start + *start;
+ 
+@@ -328,6 +327,25 @@ void do_entry_flush_fixups(enum l1d_flush_type types)
+ 		patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2]));
+ 	}
+ 
++	start = PTRRELOC(&__start___scv_entry_flush_fixup);
++	end = PTRRELOC(&__stop___scv_entry_flush_fixup);
++	for (; start < end; start++, i++) {
++		dest = (void *)start + *start;
++
++		pr_devel("patching dest %lx\n", (unsigned long)dest);
++
++		patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0]));
++
++		if (types == L1D_FLUSH_FALLBACK)
++			patch_branch((struct ppc_inst *)(dest + 1), (unsigned long)&scv_entry_flush_fallback,
++				     BRANCH_SET_LINK);
++		else
++			patch_instruction((struct ppc_inst *)(dest + 1), ppc_inst(instrs[1]));
++
++		patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2]));
++	}
++
++
+ 	printk(KERN_DEBUG "entry-flush: patched %d locations (%s flush)\n", i,
+ 		(types == L1D_FLUSH_NONE)       ? "no" :
+ 		(types == L1D_FLUSH_FALLBACK)   ? "fallback displacement" :
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 44377fd7860e4..234a21d26f674 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -134,7 +134,7 @@ config PA_BITS
+ 
+ config PAGE_OFFSET
+ 	hex
+-	default 0xC0000000 if 32BIT && MAXPHYSMEM_2GB
++	default 0xC0000000 if 32BIT && MAXPHYSMEM_1GB
+ 	default 0x80000000 if 64BIT && !MMU
+ 	default 0xffffffff80000000 if 64BIT && MAXPHYSMEM_2GB
+ 	default 0xffffffe000000000 if 64BIT && MAXPHYSMEM_128GB
+@@ -247,10 +247,12 @@ config MODULE_SECTIONS
+ 
+ choice
+ 	prompt "Maximum Physical Memory"
+-	default MAXPHYSMEM_2GB if 32BIT
++	default MAXPHYSMEM_1GB if 32BIT
+ 	default MAXPHYSMEM_2GB if 64BIT && CMODEL_MEDLOW
+ 	default MAXPHYSMEM_128GB if 64BIT && CMODEL_MEDANY
+ 
++	config MAXPHYSMEM_1GB
++		bool "1GiB"
+ 	config MAXPHYSMEM_2GB
+ 		bool "2GiB"
+ 	config MAXPHYSMEM_128GB
+diff --git a/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts b/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
+index 4a2729f5ca3f0..24d75a146e02d 100644
+--- a/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
++++ b/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
+@@ -88,7 +88,9 @@
+ 	phy-mode = "gmii";
+ 	phy-handle = <&phy0>;
+ 	phy0: ethernet-phy@0 {
++		compatible = "ethernet-phy-id0007.0771";
+ 		reg = <0>;
++		reset-gpios = <&gpio 12 GPIO_ACTIVE_LOW>;
+ 	};
+ };
+ 
+diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig
+index d222d353d86d4..8c3d1e4517031 100644
+--- a/arch/riscv/configs/defconfig
++++ b/arch/riscv/configs/defconfig
+@@ -64,6 +64,8 @@ CONFIG_HW_RANDOM=y
+ CONFIG_HW_RANDOM_VIRTIO=y
+ CONFIG_SPI=y
+ CONFIG_SPI_SIFIVE=y
++CONFIG_GPIOLIB=y
++CONFIG_GPIO_SIFIVE=y
+ # CONFIG_PTP_1588_CLOCK is not set
+ CONFIG_POWER_RESET=y
+ CONFIG_DRM=y
+diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
+index de59dd457b415..d867813570442 100644
+--- a/arch/riscv/kernel/cacheinfo.c
++++ b/arch/riscv/kernel/cacheinfo.c
+@@ -26,7 +26,16 @@ cache_get_priv_group(struct cacheinfo *this_leaf)
+ 
+ static struct cacheinfo *get_cacheinfo(u32 level, enum cache_type type)
+ {
+-	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(smp_processor_id());
++	/*
++	 * Using raw_smp_processor_id() elides a preemptability check, but this
++	 * is really indicative of a larger problem: the cacheinfo UABI assumes
++	 * that cores have a homonogenous view of the cache hierarchy.  That
++	 * happens to be the case for the current set of RISC-V systems, but
++	 * likely won't be true in general.  Since there's no way to provide
++	 * correct information for these systems via the current UABI we're
++	 * just eliding the check for now.
++	 */
++	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(raw_smp_processor_id());
+ 	struct cacheinfo *this_leaf;
+ 	int index;
+ 
+diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
+index 835e45bb59c40..744f3209c48d0 100644
+--- a/arch/riscv/kernel/entry.S
++++ b/arch/riscv/kernel/entry.S
+@@ -155,6 +155,15 @@ skip_context_tracking:
+ 	tail do_trap_unknown
+ 
+ handle_syscall:
++#ifdef CONFIG_RISCV_M_MODE
++	/*
++	 * When running is M-Mode (no MMU config), MPIE does not get set.
++	 * As a result, we need to force enable interrupts here because
++	 * handle_exception did not do set SR_IE as it always sees SR_PIE
++	 * being cleared.
++	 */
++	csrs CSR_STATUS, SR_IE
++#endif
+ #if defined(CONFIG_TRACE_IRQFLAGS) || defined(CONFIG_CONTEXT_TRACKING)
+ 	/* Recover a0 - a7 for system calls */
+ 	REG_L a0, PT_A0(sp)
+diff --git a/arch/riscv/kernel/time.c b/arch/riscv/kernel/time.c
+index 4d3a1048ad8b1..8a5cf99c07762 100644
+--- a/arch/riscv/kernel/time.c
++++ b/arch/riscv/kernel/time.c
+@@ -4,6 +4,7 @@
+  * Copyright (C) 2017 SiFive
+  */
+ 
++#include <linux/of_clk.h>
+ #include <linux/clocksource.h>
+ #include <linux/delay.h>
+ #include <asm/sbi.h>
+@@ -24,6 +25,8 @@ void __init time_init(void)
+ 	riscv_timebase = prop;
+ 
+ 	lpj_fine = riscv_timebase / HZ;
++
++	of_clk_init(NULL);
+ 	timer_probe();
+ }
+ 
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index e4133c20744ce..608082fb9a6c6 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -155,9 +155,10 @@ disable:
+ void __init setup_bootmem(void)
+ {
+ 	phys_addr_t mem_start = 0;
+-	phys_addr_t start, end = 0;
++	phys_addr_t start, dram_end, end = 0;
+ 	phys_addr_t vmlinux_end = __pa_symbol(&_end);
+ 	phys_addr_t vmlinux_start = __pa_symbol(&_start);
++	phys_addr_t max_mapped_addr = __pa(~(ulong)0);
+ 	u64 i;
+ 
+ 	/* Find the memory region containing the kernel */
+@@ -179,7 +180,18 @@ void __init setup_bootmem(void)
+ 	/* Reserve from the start of the kernel to the end of the kernel */
+ 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
+ 
+-	max_pfn = PFN_DOWN(memblock_end_of_DRAM());
++	dram_end = memblock_end_of_DRAM();
++
++	/*
++	 * memblock allocator is not aware of the fact that last 4K bytes of
++	 * the addressable memory can not be mapped because of IS_ERR_VALUE
++	 * macro. Make sure that last 4k bytes are not usable by memblock
++	 * if end of dram is equal to maximum addressable memory.
++	 */
++	if (max_mapped_addr == (dram_end - 1))
++		memblock_set_current_limit(max_mapped_addr - 4096);
++
++	max_pfn = PFN_DOWN(dram_end);
+ 	max_low_pfn = max_pfn;
+ 	set_max_mapnr(max_low_pfn);
+ 
+diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
+index 159da4ed578f2..b6f3d49991d37 100644
+--- a/arch/sh/Kconfig
++++ b/arch/sh/Kconfig
+@@ -30,7 +30,6 @@ config SUPERH
+ 	select HAVE_ARCH_KGDB
+ 	select HAVE_ARCH_SECCOMP_FILTER
+ 	select HAVE_ARCH_TRACEHOOK
+-	select HAVE_COPY_THREAD_TLS
+ 	select HAVE_DEBUG_BUGVERBOSE
+ 	select HAVE_DEBUG_KMEMLEAK
+ 	select HAVE_DYNAMIC_FTRACE
+diff --git a/arch/sh/drivers/dma/Kconfig b/arch/sh/drivers/dma/Kconfig
+index d0de378beefe5..7d54f284ce10f 100644
+--- a/arch/sh/drivers/dma/Kconfig
++++ b/arch/sh/drivers/dma/Kconfig
+@@ -63,8 +63,7 @@ config PVR2_DMA
+ 
+ config G2_DMA
+ 	tristate "G2 Bus DMA support"
+-	depends on SH_DREAMCAST
+-	select SH_DMA_API
++	depends on SH_DREAMCAST && SH_DMA_API
+ 	help
+ 	  This enables support for the DMA controller for the Dreamcast's
+ 	  G2 bus. Drivers that want this will generally enable this on
+diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
+index 870efeec8bdac..94c6e6330e043 100644
+--- a/arch/x86/entry/common.c
++++ b/arch/x86/entry/common.c
+@@ -73,10 +73,8 @@ static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs,
+ 						  unsigned int nr)
+ {
+ 	if (likely(nr < IA32_NR_syscalls)) {
+-		instrumentation_begin();
+ 		nr = array_index_nospec(nr, IA32_NR_syscalls);
+ 		regs->ax = ia32_sys_call_table[nr](regs);
+-		instrumentation_end();
+ 	}
+ }
+ 
+@@ -91,8 +89,11 @@ __visible noinstr void do_int80_syscall_32(struct pt_regs *regs)
+ 	 * or may not be necessary, but it matches the old asm behavior.
+ 	 */
+ 	nr = (unsigned int)syscall_enter_from_user_mode(regs, nr);
++	instrumentation_begin();
+ 
+ 	do_syscall_32_irqs_on(regs, nr);
++
++	instrumentation_end();
+ 	syscall_exit_to_user_mode(regs);
+ }
+ 
+@@ -121,11 +122,12 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
+ 		res = get_user(*(u32 *)&regs->bp,
+ 		       (u32 __user __force *)(unsigned long)(u32)regs->sp);
+ 	}
+-	instrumentation_end();
+ 
+ 	if (res) {
+ 		/* User code screwed up. */
+ 		regs->ax = -EFAULT;
++
++		instrumentation_end();
+ 		syscall_exit_to_user_mode(regs);
+ 		return false;
+ 	}
+@@ -135,6 +137,8 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
+ 
+ 	/* Now this is just like a normal syscall. */
+ 	do_syscall_32_irqs_on(regs, nr);
++
++	instrumentation_end();
+ 	syscall_exit_to_user_mode(regs);
+ 	return true;
+ }
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index 6fb8cb7b9bcc6..6375967a8244d 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -16,6 +16,7 @@
+ #include <asm/hyperv-tlfs.h>
+ #include <asm/mshyperv.h>
+ #include <asm/idtentry.h>
++#include <linux/kexec.h>
+ #include <linux/version.h>
+ #include <linux/vmalloc.h>
+ #include <linux/mm.h>
+@@ -26,6 +27,8 @@
+ #include <linux/syscore_ops.h>
+ #include <clocksource/hyperv_timer.h>
+ 
++int hyperv_init_cpuhp;
++
+ void *hv_hypercall_pg;
+ EXPORT_SYMBOL_GPL(hv_hypercall_pg);
+ 
+@@ -424,6 +427,7 @@ void __init hyperv_init(void)
+ 
+ 	register_syscore_ops(&hv_syscore_ops);
+ 
++	hyperv_init_cpuhp = cpuhp;
+ 	return;
+ 
+ remove_cpuhp_state:
+diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
+index dcd9503b10983..38f4936045ab6 100644
+--- a/arch/x86/include/asm/fpu/api.h
++++ b/arch/x86/include/asm/fpu/api.h
+@@ -16,14 +16,25 @@
+  * Use kernel_fpu_begin/end() if you intend to use FPU in kernel context. It
+  * disables preemption so be careful if you intend to use it for long periods
+  * of time.
+- * If you intend to use the FPU in softirq you need to check first with
++ * If you intend to use the FPU in irq/softirq you need to check first with
+  * irq_fpu_usable() if it is possible.
+  */
+-extern void kernel_fpu_begin(void);
++
++/* Kernel FPU states to initialize in kernel_fpu_begin_mask() */
++#define KFPU_387	_BITUL(0)	/* 387 state will be initialized */
++#define KFPU_MXCSR	_BITUL(1)	/* MXCSR will be initialized */
++
++extern void kernel_fpu_begin_mask(unsigned int kfpu_mask);
+ extern void kernel_fpu_end(void);
+ extern bool irq_fpu_usable(void);
+ extern void fpregs_mark_activate(void);
+ 
++/* Code that is unaware of kernel_fpu_begin_mask() can use this */
++static inline void kernel_fpu_begin(void)
++{
++	kernel_fpu_begin_mask(KFPU_387 | KFPU_MXCSR);
++}
++
+ /*
+  * Use fpregs_lock() while editing CPU's FPU registers or fpu->state.
+  * A context switch will (and softirq might) save CPU's FPU registers to
+diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
+index ffc289992d1b0..30f76b9668579 100644
+--- a/arch/x86/include/asm/mshyperv.h
++++ b/arch/x86/include/asm/mshyperv.h
+@@ -74,6 +74,8 @@ static inline void hv_disable_stimer0_percpu_irq(int irq) {}
+ 
+ 
+ #if IS_ENABLED(CONFIG_HYPERV)
++extern int hyperv_init_cpuhp;
++
+ extern void *hv_hypercall_pg;
+ extern void  __percpu  **hyperv_pcpu_input_arg;
+ 
+diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
+index f4234575f3fdb..1f6caceccbb02 100644
+--- a/arch/x86/include/asm/topology.h
++++ b/arch/x86/include/asm/topology.h
+@@ -110,6 +110,8 @@ extern const struct cpumask *cpu_coregroup_mask(int cpu);
+ #define topology_die_id(cpu)			(cpu_data(cpu).cpu_die_id)
+ #define topology_core_id(cpu)			(cpu_data(cpu).cpu_core_id)
+ 
++extern unsigned int __max_die_per_package;
++
+ #ifdef CONFIG_SMP
+ #define topology_die_cpumask(cpu)		(per_cpu(cpu_die_map, cpu))
+ #define topology_core_cpumask(cpu)		(per_cpu(cpu_core_map, cpu))
+@@ -118,8 +120,6 @@ extern const struct cpumask *cpu_coregroup_mask(int cpu);
+ extern unsigned int __max_logical_packages;
+ #define topology_max_packages()			(__max_logical_packages)
+ 
+-extern unsigned int __max_die_per_package;
+-
+ static inline int topology_max_die_per_package(void)
+ {
+ 	return __max_die_per_package;
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 2f1fbd8150af7..a2551b10780c6 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -569,12 +569,12 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
+ 		u32 ecx;
+ 
+ 		ecx = cpuid_ecx(0x8000001e);
+-		nodes_per_socket = ((ecx >> 8) & 7) + 1;
++		__max_die_per_package = nodes_per_socket = ((ecx >> 8) & 7) + 1;
+ 	} else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) {
+ 		u64 value;
+ 
+ 		rdmsrl(MSR_FAM10H_NODE_ID, value);
+-		nodes_per_socket = ((value >> 3) & 7) + 1;
++		__max_die_per_package = nodes_per_socket = ((value >> 3) & 7) + 1;
+ 	}
+ 
+ 	if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) &&
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index 05ef1f4550cbd..6cc50ab07bded 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -135,14 +135,32 @@ static void hv_machine_shutdown(void)
+ {
+ 	if (kexec_in_progress && hv_kexec_handler)
+ 		hv_kexec_handler();
++
++	/*
++	 * Call hv_cpu_die() on all the CPUs, otherwise later the hypervisor
++	 * corrupts the old VP Assist Pages and can crash the kexec kernel.
++	 */
++	if (kexec_in_progress && hyperv_init_cpuhp > 0)
++		cpuhp_remove_state(hyperv_init_cpuhp);
++
++	/* The function calls stop_other_cpus(). */
+ 	native_machine_shutdown();
++
++	/* Disable the hypercall page when there is only 1 active CPU. */
++	if (kexec_in_progress)
++		hyperv_cleanup();
+ }
+ 
+ static void hv_machine_crash_shutdown(struct pt_regs *regs)
+ {
+ 	if (hv_crash_handler)
+ 		hv_crash_handler(regs);
++
++	/* The function calls crash_smp_send_stop(). */
+ 	native_machine_crash_shutdown(regs);
++
++	/* Disable the hypercall page when there is only 1 active CPU. */
++	hyperv_cleanup();
+ }
+ #endif /* CONFIG_KEXEC_CORE */
+ #endif /* CONFIG_HYPERV */
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index d3a0791bc052a..91288da295995 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -25,10 +25,10 @@
+ #define BITS_SHIFT_NEXT_LEVEL(eax)	((eax) & 0x1f)
+ #define LEVEL_MAX_SIBLINGS(ebx)		((ebx) & 0xffff)
+ 
+-#ifdef CONFIG_SMP
+ unsigned int __max_die_per_package __read_mostly = 1;
+ EXPORT_SYMBOL(__max_die_per_package);
+ 
++#ifdef CONFIG_SMP
+ /*
+  * Check if given CPUID extended toplogy "leaf" is implemented
+  */
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index eb86a2b831b15..571220ac8beaa 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -121,7 +121,7 @@ int copy_fpregs_to_fpstate(struct fpu *fpu)
+ }
+ EXPORT_SYMBOL(copy_fpregs_to_fpstate);
+ 
+-void kernel_fpu_begin(void)
++void kernel_fpu_begin_mask(unsigned int kfpu_mask)
+ {
+ 	preempt_disable();
+ 
+@@ -141,13 +141,14 @@ void kernel_fpu_begin(void)
+ 	}
+ 	__cpu_invalidate_fpregs_state();
+ 
+-	if (boot_cpu_has(X86_FEATURE_XMM))
++	/* Put sane initial values into the control registers. */
++	if (likely(kfpu_mask & KFPU_MXCSR) && boot_cpu_has(X86_FEATURE_XMM))
+ 		ldmxcsr(MXCSR_DEFAULT);
+ 
+-	if (boot_cpu_has(X86_FEATURE_FPU))
++	if (unlikely(kfpu_mask & KFPU_387) && boot_cpu_has(X86_FEATURE_FPU))
+ 		asm volatile ("fninit");
+ }
+-EXPORT_SYMBOL_GPL(kernel_fpu_begin);
++EXPORT_SYMBOL_GPL(kernel_fpu_begin_mask);
+ 
+ void kernel_fpu_end(void)
+ {
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 84f581c91db45..098015b739993 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -665,17 +665,6 @@ static void __init trim_platform_memory_ranges(void)
+ 
+ static void __init trim_bios_range(void)
+ {
+-	/*
+-	 * A special case is the first 4Kb of memory;
+-	 * This is a BIOS owned area, not kernel ram, but generally
+-	 * not listed as such in the E820 table.
+-	 *
+-	 * This typically reserves additional memory (64KiB by default)
+-	 * since some BIOSes are known to corrupt low memory.  See the
+-	 * Kconfig help text for X86_RESERVE_LOW.
+-	 */
+-	e820__range_update(0, PAGE_SIZE, E820_TYPE_RAM, E820_TYPE_RESERVED);
+-
+ 	/*
+ 	 * special case: Some BIOSes report the PC BIOS
+ 	 * area (640Kb -> 1Mb) as RAM even though it is not.
+@@ -733,6 +722,15 @@ early_param("reservelow", parse_reservelow);
+ 
+ static void __init trim_low_memory_range(void)
+ {
++	/*
++	 * A special case is the first 4Kb of memory;
++	 * This is a BIOS owned area, not kernel ram, but generally
++	 * not listed as such in the E820 table.
++	 *
++	 * This typically reserves additional memory (64KiB by default)
++	 * since some BIOSes are known to corrupt low memory.  See the
++	 * Kconfig help text for X86_RESERVE_LOW.
++	 */
+ 	memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
+ }
+ 	
+diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
+index 0bd1a0fc587e0..84c1821819afb 100644
+--- a/arch/x86/kernel/sev-es.c
++++ b/arch/x86/kernel/sev-es.c
+@@ -225,7 +225,7 @@ static inline u64 sev_es_rd_ghcb_msr(void)
+ 	return __rdmsr(MSR_AMD64_SEV_ES_GHCB);
+ }
+ 
+-static inline void sev_es_wr_ghcb_msr(u64 val)
++static __always_inline void sev_es_wr_ghcb_msr(u64 val)
+ {
+ 	u32 low, high;
+ 
+@@ -286,6 +286,12 @@ static enum es_result vc_write_mem(struct es_em_ctxt *ctxt,
+ 	u16 d2;
+ 	u8  d1;
+ 
++	/* If instruction ran in kernel mode and the I/O buffer is in kernel space */
++	if (!user_mode(ctxt->regs) && !access_ok(target, size)) {
++		memcpy(dst, buf, size);
++		return ES_OK;
++	}
++
+ 	switch (size) {
+ 	case 1:
+ 		memcpy(&d1, buf, 1);
+@@ -335,6 +341,12 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
+ 	u16 d2;
+ 	u8  d1;
+ 
++	/* If instruction ran in kernel mode and the I/O buffer is in kernel space */
++	if (!user_mode(ctxt->regs) && !access_ok(s, size)) {
++		memcpy(buf, src, size);
++		return ES_OK;
++	}
++
+ 	switch (size) {
+ 	case 1:
+ 		if (get_user(d1, s))
+diff --git a/arch/x86/lib/mmx_32.c b/arch/x86/lib/mmx_32.c
+index 4321fa02e18df..419365c48b2ad 100644
+--- a/arch/x86/lib/mmx_32.c
++++ b/arch/x86/lib/mmx_32.c
+@@ -26,6 +26,16 @@
+ #include <asm/fpu/api.h>
+ #include <asm/asm.h>
+ 
++/*
++ * Use KFPU_387.  MMX instructions are not affected by MXCSR,
++ * but both AMD and Intel documentation states that even integer MMX
++ * operations will result in #MF if an exception is pending in FCW.
++ *
++ * EMMS is not needed afterwards because, after calling kernel_fpu_end(),
++ * any subsequent user of the 387 stack will reinitialize it using
++ * KFPU_387.
++ */
++
+ void *_mmx_memcpy(void *to, const void *from, size_t len)
+ {
+ 	void *p;
+@@ -37,7 +47,7 @@ void *_mmx_memcpy(void *to, const void *from, size_t len)
+ 	p = to;
+ 	i = len >> 6; /* len/64 */
+ 
+-	kernel_fpu_begin();
++	kernel_fpu_begin_mask(KFPU_387);
+ 
+ 	__asm__ __volatile__ (
+ 		"1: prefetch (%0)\n"		/* This set is 28 bytes */
+@@ -127,7 +137,7 @@ static void fast_clear_page(void *page)
+ {
+ 	int i;
+ 
+-	kernel_fpu_begin();
++	kernel_fpu_begin_mask(KFPU_387);
+ 
+ 	__asm__ __volatile__ (
+ 		"  pxor %%mm0, %%mm0\n" : :
+@@ -160,7 +170,7 @@ static void fast_copy_page(void *to, void *from)
+ {
+ 	int i;
+ 
+-	kernel_fpu_begin();
++	kernel_fpu_begin_mask(KFPU_387);
+ 
+ 	/*
+ 	 * maybe the prefetch stuff can go before the expensive fnsave...
+@@ -247,7 +257,7 @@ static void fast_clear_page(void *page)
+ {
+ 	int i;
+ 
+-	kernel_fpu_begin();
++	kernel_fpu_begin_mask(KFPU_387);
+ 
+ 	__asm__ __volatile__ (
+ 		"  pxor %%mm0, %%mm0\n" : :
+@@ -282,7 +292,7 @@ static void fast_copy_page(void *to, void *from)
+ {
+ 	int i;
+ 
+-	kernel_fpu_begin();
++	kernel_fpu_begin_mask(KFPU_387);
+ 
+ 	__asm__ __volatile__ (
+ 		"1: prefetch (%0)\n"
+diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
+index 9e87ab010c82b..ec50b7423a4c8 100644
+--- a/arch/x86/xen/enlighten_hvm.c
++++ b/arch/x86/xen/enlighten_hvm.c
+@@ -188,6 +188,8 @@ static int xen_cpu_dead_hvm(unsigned int cpu)
+        return 0;
+ }
+ 
++static bool no_vector_callback __initdata;
++
+ static void __init xen_hvm_guest_init(void)
+ {
+ 	if (xen_pv_domain())
+@@ -207,7 +209,7 @@ static void __init xen_hvm_guest_init(void)
+ 
+ 	xen_panic_handler_init();
+ 
+-	if (xen_feature(XENFEAT_hvm_callback_vector))
++	if (!no_vector_callback && xen_feature(XENFEAT_hvm_callback_vector))
+ 		xen_have_vector_callback = 1;
+ 
+ 	xen_hvm_smp_init();
+@@ -233,6 +235,13 @@ static __init int xen_parse_nopv(char *arg)
+ }
+ early_param("xen_nopv", xen_parse_nopv);
+ 
++static __init int xen_parse_no_vector_callback(char *arg)
++{
++	no_vector_callback = true;
++	return 0;
++}
++early_param("xen_no_vector_callback", xen_parse_no_vector_callback);
++
+ bool __init xen_hvm_need_lapic(void)
+ {
+ 	if (xen_pv_domain())
+diff --git a/arch/x86/xen/smp_hvm.c b/arch/x86/xen/smp_hvm.c
+index f5e7db4f82abb..6ff3c887e0b99 100644
+--- a/arch/x86/xen/smp_hvm.c
++++ b/arch/x86/xen/smp_hvm.c
+@@ -33,9 +33,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
+ 	int cpu;
+ 
+ 	native_smp_prepare_cpus(max_cpus);
+-	WARN_ON(xen_smp_intr_init(0));
+ 
+-	xen_init_lock_cpu(0);
++	if (xen_have_vector_callback) {
++		WARN_ON(xen_smp_intr_init(0));
++		xen_init_lock_cpu(0);
++	}
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		if (cpu == 0)
+@@ -50,9 +52,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
+ static void xen_hvm_cpu_die(unsigned int cpu)
+ {
+ 	if (common_cpu_die(cpu) == 0) {
+-		xen_smp_intr_free(cpu);
+-		xen_uninit_lock_cpu(cpu);
+-		xen_teardown_timer(cpu);
++		if (xen_have_vector_callback) {
++			xen_smp_intr_free(cpu);
++			xen_uninit_lock_cpu(cpu);
++			xen_teardown_timer(cpu);
++		}
+ 	}
+ }
+ #else
+@@ -64,14 +68,19 @@ static void xen_hvm_cpu_die(unsigned int cpu)
+ 
+ void __init xen_hvm_smp_init(void)
+ {
+-	if (!xen_have_vector_callback)
++	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
++	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
++	smp_ops.smp_cpus_done = xen_smp_cpus_done;
++	smp_ops.cpu_die = xen_hvm_cpu_die;
++
++	if (!xen_have_vector_callback) {
++#ifdef CONFIG_PARAVIRT_SPINLOCKS
++		nopvspin = true;
++#endif
+ 		return;
++	}
+ 
+-	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
+ 	smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
+-	smp_ops.cpu_die = xen_hvm_cpu_die;
+ 	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
+ 	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
+-	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
+-	smp_ops.smp_cpus_done = xen_smp_cpus_done;
+ }
+diff --git a/crypto/xor.c b/crypto/xor.c
+index eacbf4f939900..8f899f898ec9f 100644
+--- a/crypto/xor.c
++++ b/crypto/xor.c
+@@ -107,6 +107,8 @@ do_xor_speed(struct xor_block_template *tmpl, void *b1, void *b2)
+ 	preempt_enable();
+ 
+ 	// bytes/ns == GB/s, multiply by 1000 to get MB/s [not MiB/s]
++	if (!min)
++		min = 1;
+ 	speed = (1000 * REPS * BENCH_SIZE) / (unsigned int)ktime_to_ns(min);
+ 	tmpl->speed = speed;
+ 
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index f23ef508fe88c..dca5cc423cd41 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -586,6 +586,8 @@ static int acpi_get_device_data(acpi_handle handle, struct acpi_device **device,
+ 	if (!device)
+ 		return -EINVAL;
+ 
++	*device = NULL;
++
+ 	status = acpi_get_data_full(handle, acpi_scan_drop_device,
+ 				    (void **)device, callback);
+ 	if (ACPI_FAILURE(status) || !*device) {
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index a6187f6380d8d..96f73aaf71da3 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -115,6 +115,16 @@ int device_links_read_lock_held(void)
+ #endif
+ #endif /* !CONFIG_SRCU */
+ 
++static bool device_is_ancestor(struct device *dev, struct device *target)
++{
++	while (target->parent) {
++		target = target->parent;
++		if (dev == target)
++			return true;
++	}
++	return false;
++}
++
+ /**
+  * device_is_dependent - Check if one device depends on another one
+  * @dev: Device to check dependencies for.
+@@ -128,7 +138,12 @@ int device_is_dependent(struct device *dev, void *target)
+ 	struct device_link *link;
+ 	int ret;
+ 
+-	if (dev == target)
++	/*
++	 * The "ancestors" check is needed to catch the case when the target
++	 * device has not been completely initialized yet and it is still
++	 * missing from the list of children of its parent device.
++	 */
++	if (dev == target || device_is_ancestor(dev, target))
+ 		return 1;
+ 
+ 	ret = device_for_each_child(dev, target, device_is_dependent);
+@@ -363,7 +378,9 @@ static int devlink_add_symlinks(struct device *dev,
+ 	struct device *con = link->consumer;
+ 	char *buf;
+ 
+-	len = max(strlen(dev_name(sup)), strlen(dev_name(con)));
++	len = max(strlen(dev_bus_name(sup)) + strlen(dev_name(sup)),
++		  strlen(dev_bus_name(con)) + strlen(dev_name(con)));
++	len += strlen(":");
+ 	len += strlen("supplier:") + 1;
+ 	buf = kzalloc(len, GFP_KERNEL);
+ 	if (!buf)
+@@ -377,12 +394,12 @@ static int devlink_add_symlinks(struct device *dev,
+ 	if (ret)
+ 		goto err_con;
+ 
+-	snprintf(buf, len, "consumer:%s", dev_name(con));
++	snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
+ 	ret = sysfs_create_link(&sup->kobj, &link->link_dev.kobj, buf);
+ 	if (ret)
+ 		goto err_con_dev;
+ 
+-	snprintf(buf, len, "supplier:%s", dev_name(sup));
++	snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
+ 	ret = sysfs_create_link(&con->kobj, &link->link_dev.kobj, buf);
+ 	if (ret)
+ 		goto err_sup_dev;
+@@ -390,7 +407,7 @@ static int devlink_add_symlinks(struct device *dev,
+ 	goto out;
+ 
+ err_sup_dev:
+-	snprintf(buf, len, "consumer:%s", dev_name(con));
++	snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
+ 	sysfs_remove_link(&sup->kobj, buf);
+ err_con_dev:
+ 	sysfs_remove_link(&link->link_dev.kobj, "consumer");
+@@ -413,7 +430,9 @@ static void devlink_remove_symlinks(struct device *dev,
+ 	sysfs_remove_link(&link->link_dev.kobj, "consumer");
+ 	sysfs_remove_link(&link->link_dev.kobj, "supplier");
+ 
+-	len = max(strlen(dev_name(sup)), strlen(dev_name(con)));
++	len = max(strlen(dev_bus_name(sup)) + strlen(dev_name(sup)),
++		  strlen(dev_bus_name(con)) + strlen(dev_name(con)));
++	len += strlen(":");
+ 	len += strlen("supplier:") + 1;
+ 	buf = kzalloc(len, GFP_KERNEL);
+ 	if (!buf) {
+@@ -421,9 +440,9 @@ static void devlink_remove_symlinks(struct device *dev,
+ 		return;
+ 	}
+ 
+-	snprintf(buf, len, "supplier:%s", dev_name(sup));
++	snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
+ 	sysfs_remove_link(&con->kobj, buf);
+-	snprintf(buf, len, "consumer:%s", dev_name(con));
++	snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
+ 	sysfs_remove_link(&sup->kobj, buf);
+ 	kfree(buf);
+ }
+@@ -633,8 +652,9 @@ struct device_link *device_link_add(struct device *consumer,
+ 
+ 	link->link_dev.class = &devlink_class;
+ 	device_set_pm_not_required(&link->link_dev);
+-	dev_set_name(&link->link_dev, "%s--%s",
+-		     dev_name(supplier), dev_name(consumer));
++	dev_set_name(&link->link_dev, "%s:%s--%s:%s",
++		     dev_bus_name(supplier), dev_name(supplier),
++		     dev_bus_name(consumer), dev_name(consumer));
+ 	if (device_register(&link->link_dev)) {
+ 		put_device(consumer);
+ 		put_device(supplier);
+@@ -1652,9 +1672,7 @@ const char *dev_driver_string(const struct device *dev)
+ 	 * never change once they are set, so they don't need special care.
+ 	 */
+ 	drv = READ_ONCE(dev->driver);
+-	return drv ? drv->name :
+-			(dev->bus ? dev->bus->name :
+-			(dev->class ? dev->class->name : ""));
++	return drv ? drv->name : dev_bus_name(dev);
+ }
+ EXPORT_SYMBOL(dev_driver_string);
+ 
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 148e81969e046..3c94ebc8d4bb0 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -612,6 +612,8 @@ dev_groups_failed:
+ 	else if (drv->remove)
+ 		drv->remove(dev);
+ probe_failed:
++	kfree(dev->dma_range_map);
++	dev->dma_range_map = NULL;
+ 	if (dev->bus)
+ 		blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
+ 					     BUS_NOTIFY_DRIVER_NOT_BOUND, dev);
+diff --git a/drivers/clk/tegra/clk-tegra30.c b/drivers/clk/tegra/clk-tegra30.c
+index 37244a7e68c22..9cf249c344d9e 100644
+--- a/drivers/clk/tegra/clk-tegra30.c
++++ b/drivers/clk/tegra/clk-tegra30.c
+@@ -1256,6 +1256,8 @@ static struct tegra_clk_init_table init_table[] __initdata = {
+ 	{ TEGRA30_CLK_I2S3_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
+ 	{ TEGRA30_CLK_I2S4_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
+ 	{ TEGRA30_CLK_VIMCLK_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
++	{ TEGRA30_CLK_HDA, TEGRA30_CLK_PLL_P, 102000000, 0 },
++	{ TEGRA30_CLK_HDA2CODEC_2X, TEGRA30_CLK_PLL_P, 48000000, 0 },
+ 	/* must be the last entry */
+ 	{ TEGRA30_CLK_CLK_MAX, TEGRA30_CLK_CLK_MAX, 0, 0 },
+ };
+diff --git a/drivers/counter/ti-eqep.c b/drivers/counter/ti-eqep.c
+index a60aee1a1a291..65df9ef5b5bc0 100644
+--- a/drivers/counter/ti-eqep.c
++++ b/drivers/counter/ti-eqep.c
+@@ -235,36 +235,6 @@ static ssize_t ti_eqep_position_ceiling_write(struct counter_device *counter,
+ 	return len;
+ }
+ 
+-static ssize_t ti_eqep_position_floor_read(struct counter_device *counter,
+-					   struct counter_count *count,
+-					   void *ext_priv, char *buf)
+-{
+-	struct ti_eqep_cnt *priv = counter->priv;
+-	u32 qposinit;
+-
+-	regmap_read(priv->regmap32, QPOSINIT, &qposinit);
+-
+-	return sprintf(buf, "%u\n", qposinit);
+-}
+-
+-static ssize_t ti_eqep_position_floor_write(struct counter_device *counter,
+-					    struct counter_count *count,
+-					    void *ext_priv, const char *buf,
+-					    size_t len)
+-{
+-	struct ti_eqep_cnt *priv = counter->priv;
+-	int err;
+-	u32 res;
+-
+-	err = kstrtouint(buf, 0, &res);
+-	if (err < 0)
+-		return err;
+-
+-	regmap_write(priv->regmap32, QPOSINIT, res);
+-
+-	return len;
+-}
+-
+ static ssize_t ti_eqep_position_enable_read(struct counter_device *counter,
+ 					    struct counter_count *count,
+ 					    void *ext_priv, char *buf)
+@@ -301,11 +271,6 @@ static struct counter_count_ext ti_eqep_position_ext[] = {
+ 		.read	= ti_eqep_position_ceiling_read,
+ 		.write	= ti_eqep_position_ceiling_write,
+ 	},
+-	{
+-		.name	= "floor",
+-		.read	= ti_eqep_position_floor_read,
+-		.write	= ti_eqep_position_floor_write,
+-	},
+ 	{
+ 		.name	= "enable",
+ 		.read	= ti_eqep_position_enable_read,
+diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
+index 9d6645b1f0abe..ff5e85eefbf69 100644
+--- a/drivers/crypto/Kconfig
++++ b/drivers/crypto/Kconfig
+@@ -366,6 +366,7 @@ if CRYPTO_DEV_OMAP
+ config CRYPTO_DEV_OMAP_SHAM
+ 	tristate "Support for OMAP MD5/SHA1/SHA2 hw accelerator"
+ 	depends on ARCH_OMAP2PLUS
++	select CRYPTO_ENGINE
+ 	select CRYPTO_SHA1
+ 	select CRYPTO_MD5
+ 	select CRYPTO_SHA256
+diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
+index 5d4de5cd67595..f20ac3d694246 100644
+--- a/drivers/gpio/Kconfig
++++ b/drivers/gpio/Kconfig
+@@ -508,7 +508,8 @@ config GPIO_SAMA5D2_PIOBU
+ 
+ config GPIO_SIFIVE
+ 	bool "SiFive GPIO support"
+-	depends on OF_GPIO && IRQ_DOMAIN_HIERARCHY
++	depends on OF_GPIO
++	select IRQ_DOMAIN_HIERARCHY
+ 	select GPIO_GENERIC
+ 	select GPIOLIB_IRQCHIP
+ 	select REGMAP_MMIO
+diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
+index e9faeaf65d14f..689c06cbbb457 100644
+--- a/drivers/gpio/gpiolib-cdev.c
++++ b/drivers/gpio/gpiolib-cdev.c
+@@ -1960,6 +1960,21 @@ struct gpio_chardev_data {
+ #endif
+ };
+ 
++static int chipinfo_get(struct gpio_chardev_data *cdev, void __user *ip)
++{
++	struct gpio_device *gdev = cdev->gdev;
++	struct gpiochip_info chipinfo;
++
++	memset(&chipinfo, 0, sizeof(chipinfo));
++
++	strscpy(chipinfo.name, dev_name(&gdev->dev), sizeof(chipinfo.name));
++	strscpy(chipinfo.label, gdev->label, sizeof(chipinfo.label));
++	chipinfo.lines = gdev->ngpio;
++	if (copy_to_user(ip, &chipinfo, sizeof(chipinfo)))
++		return -EFAULT;
++	return 0;
++}
++
+ #ifdef CONFIG_GPIO_CDEV_V1
+ /*
+  * returns 0 if the versions match, else the previously selected ABI version
+@@ -1974,6 +1989,41 @@ static int lineinfo_ensure_abi_version(struct gpio_chardev_data *cdata,
+ 
+ 	return abiv;
+ }
++
++static int lineinfo_get_v1(struct gpio_chardev_data *cdev, void __user *ip,
++			   bool watch)
++{
++	struct gpio_desc *desc;
++	struct gpioline_info lineinfo;
++	struct gpio_v2_line_info lineinfo_v2;
++
++	if (copy_from_user(&lineinfo, ip, sizeof(lineinfo)))
++		return -EFAULT;
++
++	/* this doubles as a range check on line_offset */
++	desc = gpiochip_get_desc(cdev->gdev->chip, lineinfo.line_offset);
++	if (IS_ERR(desc))
++		return PTR_ERR(desc);
++
++	if (watch) {
++		if (lineinfo_ensure_abi_version(cdev, 1))
++			return -EPERM;
++
++		if (test_and_set_bit(lineinfo.line_offset, cdev->watched_lines))
++			return -EBUSY;
++	}
++
++	gpio_desc_to_lineinfo(desc, &lineinfo_v2);
++	gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo);
++
++	if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) {
++		if (watch)
++			clear_bit(lineinfo.line_offset, cdev->watched_lines);
++		return -EFAULT;
++	}
++
++	return 0;
++}
+ #endif
+ 
+ static int lineinfo_get(struct gpio_chardev_data *cdev, void __user *ip,
+@@ -2011,6 +2061,22 @@ static int lineinfo_get(struct gpio_chardev_data *cdev, void __user *ip,
+ 	return 0;
+ }
+ 
++static int lineinfo_unwatch(struct gpio_chardev_data *cdev, void __user *ip)
++{
++	__u32 offset;
++
++	if (copy_from_user(&offset, ip, sizeof(offset)))
++		return -EFAULT;
++
++	if (offset >= cdev->gdev->ngpio)
++		return -EINVAL;
++
++	if (!test_and_clear_bit(offset, cdev->watched_lines))
++		return -EBUSY;
++
++	return 0;
++}
++
+ /*
+  * gpio_ioctl() - ioctl handler for the GPIO chardev
+  */
+@@ -2018,80 +2084,24 @@ static long gpio_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ {
+ 	struct gpio_chardev_data *cdev = file->private_data;
+ 	struct gpio_device *gdev = cdev->gdev;
+-	struct gpio_chip *gc = gdev->chip;
+ 	void __user *ip = (void __user *)arg;
+-	__u32 offset;
+ 
+ 	/* We fail any subsequent ioctl():s when the chip is gone */
+-	if (!gc)
++	if (!gdev->chip)
+ 		return -ENODEV;
+ 
+ 	/* Fill in the struct and pass to userspace */
+ 	if (cmd == GPIO_GET_CHIPINFO_IOCTL) {
+-		struct gpiochip_info chipinfo;
+-
+-		memset(&chipinfo, 0, sizeof(chipinfo));
+-
+-		strscpy(chipinfo.name, dev_name(&gdev->dev),
+-			sizeof(chipinfo.name));
+-		strscpy(chipinfo.label, gdev->label,
+-			sizeof(chipinfo.label));
+-		chipinfo.lines = gdev->ngpio;
+-		if (copy_to_user(ip, &chipinfo, sizeof(chipinfo)))
+-			return -EFAULT;
+-		return 0;
++		return chipinfo_get(cdev, ip);
+ #ifdef CONFIG_GPIO_CDEV_V1
+-	} else if (cmd == GPIO_GET_LINEINFO_IOCTL) {
+-		struct gpio_desc *desc;
+-		struct gpioline_info lineinfo;
+-		struct gpio_v2_line_info lineinfo_v2;
+-
+-		if (copy_from_user(&lineinfo, ip, sizeof(lineinfo)))
+-			return -EFAULT;
+-
+-		/* this doubles as a range check on line_offset */
+-		desc = gpiochip_get_desc(gc, lineinfo.line_offset);
+-		if (IS_ERR(desc))
+-			return PTR_ERR(desc);
+-
+-		gpio_desc_to_lineinfo(desc, &lineinfo_v2);
+-		gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo);
+-
+-		if (copy_to_user(ip, &lineinfo, sizeof(lineinfo)))
+-			return -EFAULT;
+-		return 0;
+ 	} else if (cmd == GPIO_GET_LINEHANDLE_IOCTL) {
+ 		return linehandle_create(gdev, ip);
+ 	} else if (cmd == GPIO_GET_LINEEVENT_IOCTL) {
+ 		return lineevent_create(gdev, ip);
+-	} else if (cmd == GPIO_GET_LINEINFO_WATCH_IOCTL) {
+-		struct gpio_desc *desc;
+-		struct gpioline_info lineinfo;
+-		struct gpio_v2_line_info lineinfo_v2;
+-
+-		if (copy_from_user(&lineinfo, ip, sizeof(lineinfo)))
+-			return -EFAULT;
+-
+-		/* this doubles as a range check on line_offset */
+-		desc = gpiochip_get_desc(gc, lineinfo.line_offset);
+-		if (IS_ERR(desc))
+-			return PTR_ERR(desc);
+-
+-		if (lineinfo_ensure_abi_version(cdev, 1))
+-			return -EPERM;
+-
+-		if (test_and_set_bit(lineinfo.line_offset, cdev->watched_lines))
+-			return -EBUSY;
+-
+-		gpio_desc_to_lineinfo(desc, &lineinfo_v2);
+-		gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo);
+-
+-		if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) {
+-			clear_bit(lineinfo.line_offset, cdev->watched_lines);
+-			return -EFAULT;
+-		}
+-
+-		return 0;
++	} else if (cmd == GPIO_GET_LINEINFO_IOCTL ||
++		   cmd == GPIO_GET_LINEINFO_WATCH_IOCTL) {
++		return lineinfo_get_v1(cdev, ip,
++				       cmd == GPIO_GET_LINEINFO_WATCH_IOCTL);
+ #endif /* CONFIG_GPIO_CDEV_V1 */
+ 	} else if (cmd == GPIO_V2_GET_LINEINFO_IOCTL ||
+ 		   cmd == GPIO_V2_GET_LINEINFO_WATCH_IOCTL) {
+@@ -2100,16 +2110,7 @@ static long gpio_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	} else if (cmd == GPIO_V2_GET_LINE_IOCTL) {
+ 		return linereq_create(gdev, ip);
+ 	} else if (cmd == GPIO_GET_LINEINFO_UNWATCH_IOCTL) {
+-		if (copy_from_user(&offset, ip, sizeof(offset)))
+-			return -EFAULT;
+-
+-		if (offset >= cdev->gdev->ngpio)
+-			return -EINVAL;
+-
+-		if (!test_and_clear_bit(offset, cdev->watched_lines))
+-			return -EBUSY;
+-
+-		return 0;
++		return lineinfo_unwatch(cdev, ip);
+ 	}
+ 	return -EINVAL;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 2ddbcfe0a72ff..76d10f1c579ba 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -80,7 +80,6 @@ MODULE_FIRMWARE("amdgpu/renoir_gpu_info.bin");
+ MODULE_FIRMWARE("amdgpu/navi10_gpu_info.bin");
+ MODULE_FIRMWARE("amdgpu/navi14_gpu_info.bin");
+ MODULE_FIRMWARE("amdgpu/navi12_gpu_info.bin");
+-MODULE_FIRMWARE("amdgpu/green_sardine_gpu_info.bin");
+ 
+ #define AMDGPU_RESUME_MS		2000
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h b/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
+index 4137dc710aafd..7ad0434be293b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
++++ b/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
+@@ -47,7 +47,7 @@ enum psp_gfx_crtl_cmd_id
+     GFX_CTRL_CMD_ID_DISABLE_INT     = 0x00060000,   /* disable PSP-to-Gfx interrupt */
+     GFX_CTRL_CMD_ID_MODE1_RST       = 0x00070000,   /* trigger the Mode 1 reset */
+     GFX_CTRL_CMD_ID_GBR_IH_SET      = 0x00080000,   /* set Gbr IH_RB_CNTL registers */
+-    GFX_CTRL_CMD_ID_CONSUME_CMD     = 0x000A0000,   /* send interrupt to psp for updating write pointer of vf */
++    GFX_CTRL_CMD_ID_CONSUME_CMD     = 0x00090000,   /* send interrupt to psp for updating write pointer of vf */
+     GFX_CTRL_CMD_ID_DESTROY_GPCOM_RING = 0x000C0000, /* destroy GPCOM ring */
+ 
+     GFX_CTRL_CMD_ID_MAX             = 0x000F0000,   /* max command ID */
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+index d7f67620f57ba..31d793ee0836e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+@@ -1034,11 +1034,14 @@ static int kfd_create_vcrat_image_cpu(void *pcrat_image, size_t *size)
+ 				(struct crat_subtype_iolink *)sub_type_hdr);
+ 		if (ret < 0)
+ 			return ret;
+-		crat_table->length += (sub_type_hdr->length * entries);
+-		crat_table->total_entries += entries;
+ 
+-		sub_type_hdr = (typeof(sub_type_hdr))((char *)sub_type_hdr +
+-				sub_type_hdr->length * entries);
++		if (entries) {
++			crat_table->length += (sub_type_hdr->length * entries);
++			crat_table->total_entries += entries;
++
++			sub_type_hdr = (typeof(sub_type_hdr))((char *)sub_type_hdr +
++					sub_type_hdr->length * entries);
++		}
+ #else
+ 		pr_info("IO link not available for non x86 platforms\n");
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+index d0699e98db929..e00a30e7d2529 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+@@ -113,7 +113,7 @@ int amdgpu_dm_crtc_configure_crc_source(struct drm_crtc *crtc,
+ 	mutex_lock(&adev->dm.dc_lock);
+ 
+ 	/* Enable CRTC CRC generation if necessary. */
+-	if (dm_is_crc_source_crtc(source)) {
++	if (dm_is_crc_source_crtc(source) || source == AMDGPU_DM_PIPE_CRC_SOURCE_NONE) {
+ 		if (!dc_stream_configure_crc(stream_state->ctx->dc,
+ 					     stream_state, enable, enable)) {
+ 			ret = -EINVAL;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+index 462d3d981ea5e..0a01be38ee1b8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+@@ -608,8 +608,8 @@ static const struct dc_debug_options debug_defaults_drv = {
+ 		.disable_pplib_clock_request = false,
+ 		.disable_pplib_wm_range = false,
+ 		.pplib_wm_report_mode = WM_REPORT_DEFAULT,
+-		.pipe_split_policy = MPC_SPLIT_DYNAMIC,
+-		.force_single_disp_pipe_split = true,
++		.pipe_split_policy = MPC_SPLIT_AVOID,
++		.force_single_disp_pipe_split = false,
+ 		.disable_dcc = DCC_ENABLE,
+ 		.voltage_align_fclk = true,
+ 		.disable_stereo_support = true,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index d50a9c3706372..a92f6e4b2eb8f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -2520,8 +2520,7 @@ struct pipe_ctx *dcn20_find_secondary_pipe(struct dc *dc,
+ 		 * if this primary pipe has a bottom pipe in prev. state
+ 		 * and if the bottom pipe is still available (which it should be),
+ 		 * pick that pipe as secondary
+-		 * Same logic applies for ODM pipes. Since mpo is not allowed with odm
+-		 * check in else case.
++		 * Same logic applies for ODM pipes
+ 		 */
+ 		if (dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].bottom_pipe) {
+ 			preferred_pipe_idx = dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].bottom_pipe->pipe_idx;
+@@ -2529,7 +2528,9 @@ struct pipe_ctx *dcn20_find_secondary_pipe(struct dc *dc,
+ 				secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];
+ 				secondary_pipe->pipe_idx = preferred_pipe_idx;
+ 			}
+-		} else if (dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe) {
++		}
++		if (secondary_pipe == NULL &&
++				dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe) {
+ 			preferred_pipe_idx = dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe->pipe_idx;
+ 			if (res_ctx->pipe_ctx[preferred_pipe_idx].stream == NULL) {
+ 				secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index f9170b4b22e7e..8a871e5c3e26b 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -3007,7 +3007,7 @@ int drm_atomic_helper_set_config(struct drm_mode_set *set,
+ 
+ 	ret = handle_conflicting_encoders(state, true);
+ 	if (ret)
+-		return ret;
++		goto fail;
+ 
+ 	ret = drm_atomic_commit(state);
+ 
+diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
+index 6e74e6745ecae..3491460498491 100644
+--- a/drivers/gpu/drm/drm_syncobj.c
++++ b/drivers/gpu/drm/drm_syncobj.c
+@@ -388,19 +388,18 @@ int drm_syncobj_find_fence(struct drm_file *file_private,
+ 		return -ENOENT;
+ 
+ 	*fence = drm_syncobj_fence_get(syncobj);
+-	drm_syncobj_put(syncobj);
+ 
+ 	if (*fence) {
+ 		ret = dma_fence_chain_find_seqno(fence, point);
+ 		if (!ret)
+-			return 0;
++			goto out;
+ 		dma_fence_put(*fence);
+ 	} else {
+ 		ret = -EINVAL;
+ 	}
+ 
+ 	if (!(flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT))
+-		return ret;
++		goto out;
+ 
+ 	memset(&wait, 0, sizeof(wait));
+ 	wait.task = current;
+@@ -432,6 +431,9 @@ int drm_syncobj_find_fence(struct drm_file *file_private,
+ 	if (wait.node.next)
+ 		drm_syncobj_remove_wait(syncobj, &wait);
+ 
++out:
++	drm_syncobj_put(syncobj);
++
+ 	return ret;
+ }
+ EXPORT_SYMBOL(drm_syncobj_find_fence);
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index cdcb7b1034ae4..3f2bbd9370a86 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -3387,7 +3387,7 @@ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state,
+ 	intel_ddi_init_dp_buf_reg(encoder);
+ 
+ 	if (!is_mst)
+-		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
++		intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
+ 
+ 	intel_dp_sink_set_decompression_state(intel_dp, crtc_state, true);
+ 	/*
+@@ -3469,8 +3469,8 @@ static void hsw_ddi_pre_enable_dp(struct intel_atomic_state *state,
+ 
+ 	intel_ddi_init_dp_buf_reg(encoder);
+ 	if (!is_mst)
+-		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+-	intel_dp_configure_protocol_converter(intel_dp);
++		intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
++	intel_dp_configure_protocol_converter(intel_dp, crtc_state);
+ 	intel_dp_sink_set_decompression_state(intel_dp, crtc_state,
+ 					      true);
+ 	intel_dp_sink_set_fec_ready(intel_dp, crtc_state);
+@@ -3647,7 +3647,7 @@ static void intel_ddi_post_disable_dp(struct intel_atomic_state *state,
+ 	 * Power down sink before disabling the port, otherwise we end
+ 	 * up getting interrupts from the sink on detecting link loss.
+ 	 */
+-	intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
++	intel_dp_set_power(intel_dp, DP_SET_POWER_D3);
+ 
+ 	if (INTEL_GEN(dev_priv) >= 12) {
+ 		if (is_mst) {
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 1901c88d418fa..1937b3d6342ae 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -3496,22 +3496,22 @@ void intel_dp_sink_set_decompression_state(struct intel_dp *intel_dp,
+ 			    enable ? "enable" : "disable");
+ }
+ 
+-/* If the sink supports it, try to set the power state appropriately */
+-void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode)
++/* If the device supports it, try to set the power state appropriately */
++void intel_dp_set_power(struct intel_dp *intel_dp, u8 mode)
+ {
+-	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
++	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
++	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+ 	int ret, i;
+ 
+ 	/* Should have a valid DPCD by this point */
+ 	if (intel_dp->dpcd[DP_DPCD_REV] < 0x11)
+ 		return;
+ 
+-	if (mode != DRM_MODE_DPMS_ON) {
++	if (mode != DP_SET_POWER_D0) {
+ 		if (downstream_hpd_needs_d0(intel_dp))
+ 			return;
+ 
+-		ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER,
+-					 DP_SET_POWER_D3);
++		ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, mode);
+ 	} else {
+ 		struct intel_lspcon *lspcon = dp_to_lspcon(intel_dp);
+ 
+@@ -3520,8 +3520,7 @@ void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode)
+ 		 * time to wake up.
+ 		 */
+ 		for (i = 0; i < 3; i++) {
+-			ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER,
+-						 DP_SET_POWER_D0);
++			ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, mode);
+ 			if (ret == 1)
+ 				break;
+ 			msleep(1);
+@@ -3532,8 +3531,9 @@ void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode)
+ 	}
+ 
+ 	if (ret != 1)
+-		drm_dbg_kms(&i915->drm, "failed to %s sink power state\n",
+-			    mode == DRM_MODE_DPMS_ON ? "enable" : "disable");
++		drm_dbg_kms(&i915->drm, "[ENCODER:%d:%s] Set power to %s failed\n",
++			    encoder->base.base.id, encoder->base.name,
++			    mode == DP_SET_POWER_D0 ? "D0" : "D3");
+ }
+ 
+ static bool cpt_dp_port_selected(struct drm_i915_private *dev_priv,
+@@ -3707,7 +3707,7 @@ static void intel_disable_dp(struct intel_atomic_state *state,
+ 	 * ensure that we have vdd while we switch off the panel. */
+ 	intel_edp_panel_vdd_on(intel_dp);
+ 	intel_edp_backlight_off(old_conn_state);
+-	intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
++	intel_dp_set_power(intel_dp, DP_SET_POWER_D3);
+ 	intel_edp_panel_off(intel_dp);
+ }
+ 
+@@ -3856,7 +3856,8 @@ static void intel_dp_enable_port(struct intel_dp *intel_dp,
+ 	intel_de_posting_read(dev_priv, intel_dp->output_reg);
+ }
+ 
+-void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp)
++void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp,
++					   const struct intel_crtc_state *crtc_state)
+ {
+ 	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+ 	u8 tmp;
+@@ -3875,8 +3876,8 @@ void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp)
+ 		drm_dbg_kms(&i915->drm, "Failed to set protocol converter HDMI mode to %s\n",
+ 			    enableddisabled(intel_dp->has_hdmi_sink));
+ 
+-	tmp = intel_dp->dfp.ycbcr_444_to_420 ?
+-		DP_CONVERSION_TO_YCBCR420_ENABLE : 0;
++	tmp = crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR444 &&
++		intel_dp->dfp.ycbcr_444_to_420 ? DP_CONVERSION_TO_YCBCR420_ENABLE : 0;
+ 
+ 	if (drm_dp_dpcd_writeb(&intel_dp->aux,
+ 			       DP_PROTOCOL_CONVERTER_CONTROL_1, tmp) != 1)
+@@ -3929,8 +3930,8 @@ static void intel_enable_dp(struct intel_atomic_state *state,
+ 				    lane_mask);
+ 	}
+ 
+-	intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+-	intel_dp_configure_protocol_converter(intel_dp);
++	intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
++	intel_dp_configure_protocol_converter(intel_dp, pipe_config);
+ 	intel_dp_start_link_train(intel_dp);
+ 	intel_dp_stop_link_train(intel_dp);
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
+index 08a1c0aa8b94b..2dd934182471e 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.h
++++ b/drivers/gpu/drm/i915/display/intel_dp.h
+@@ -50,8 +50,9 @@ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
+ 					    int link_rate, u8 lane_count);
+ int intel_dp_retrain_link(struct intel_encoder *encoder,
+ 			  struct drm_modeset_acquire_ctx *ctx);
+-void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
+-void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp);
++void intel_dp_set_power(struct intel_dp *intel_dp, u8 mode);
++void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp,
++					   const struct intel_crtc_state *crtc_state);
+ void intel_dp_sink_set_decompression_state(struct intel_dp *intel_dp,
+ 					   const struct intel_crtc_state *crtc_state,
+ 					   bool enable);
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+index 64d885539e94a..5d745d9b99b2a 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+@@ -488,7 +488,7 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state,
+ 		    intel_dp->active_mst_links);
+ 
+ 	if (first_mst_stream)
+-		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
++		intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
+ 
+ 	drm_dp_send_power_updown_phy(&intel_dp->mst_mgr, connector->port, true);
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_hdcp.c b/drivers/gpu/drm/i915/display/intel_hdcp.c
+index 5492076d1ae09..17a8c2e73a820 100644
+--- a/drivers/gpu/drm/i915/display/intel_hdcp.c
++++ b/drivers/gpu/drm/i915/display/intel_hdcp.c
+@@ -2187,6 +2187,7 @@ void intel_hdcp_update_pipe(struct intel_atomic_state *state,
+ 	if (content_protection_type_changed) {
+ 		mutex_lock(&hdcp->mutex);
+ 		hdcp->value = DRM_MODE_CONTENT_PROTECTION_DESIRED;
++		drm_connector_get(&connector->base);
+ 		schedule_work(&hdcp->prop_work);
+ 		mutex_unlock(&hdcp->mutex);
+ 	}
+@@ -2198,6 +2199,14 @@ void intel_hdcp_update_pipe(struct intel_atomic_state *state,
+ 		desired_and_not_enabled =
+ 			hdcp->value != DRM_MODE_CONTENT_PROTECTION_ENABLED;
+ 		mutex_unlock(&hdcp->mutex);
++		/*
++		 * If HDCP already ENABLED and CP property is DESIRED, schedule
++		 * prop_work to update correct CP property to user space.
++		 */
++		if (!desired_and_not_enabled && !content_protection_type_changed) {
++			drm_connector_get(&connector->base);
++			schedule_work(&hdcp->prop_work);
++		}
+ 	}
+ 
+ 	if (desired_and_not_enabled || content_protection_type_changed)
+diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+index a24cc1ff08a0c..0625cbb3b4312 100644
+--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
++++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+@@ -134,11 +134,6 @@ static bool remove_signaling_context(struct intel_breadcrumbs *b,
+ 	return true;
+ }
+ 
+-static inline bool __request_completed(const struct i915_request *rq)
+-{
+-	return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno);
+-}
+-
+ __maybe_unused static bool
+ check_signal_order(struct intel_context *ce, struct i915_request *rq)
+ {
+@@ -257,7 +252,7 @@ static void signal_irq_work(struct irq_work *work)
+ 		list_for_each_entry_rcu(rq, &ce->signals, signal_link) {
+ 			bool release;
+ 
+-			if (!__request_completed(rq))
++			if (!__i915_request_is_complete(rq))
+ 				break;
+ 
+ 			if (!test_and_clear_bit(I915_FENCE_FLAG_SIGNAL,
+@@ -379,7 +374,7 @@ static void insert_breadcrumb(struct i915_request *rq)
+ 	 * straight onto a signaled list, and queue the irq worker for
+ 	 * its signal completion.
+ 	 */
+-	if (__request_completed(rq)) {
++	if (__i915_request_is_complete(rq)) {
+ 		if (__signal_request(rq) &&
+ 		    llist_add(&rq->signal_node, &b->signaled_requests))
+ 			irq_work_queue(&b->irq_work);
+diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
+index 724b2cb897d33..ee9b33c3aff83 100644
+--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
++++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
+@@ -3936,6 +3936,9 @@ err:
+ static void lrc_destroy_wa_ctx(struct intel_engine_cs *engine)
+ {
+ 	i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0);
++
++	/* Called on error unwind, clear all flags to prevent further use */
++	memset(&engine->wa_ctx, 0, sizeof(engine->wa_ctx));
+ }
+ 
+ typedef u32 *(*wa_bb_func_t)(struct intel_engine_cs *engine, u32 *batch);
+diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
+index 7ea94d201fe6f..8015964043eb7 100644
+--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
++++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
+@@ -126,6 +126,10 @@ static void __rcu_cacheline_free(struct rcu_head *rcu)
+ 	struct intel_timeline_cacheline *cl =
+ 		container_of(rcu, typeof(*cl), rcu);
+ 
++	/* Must wait until after all *rq->hwsp are complete before removing */
++	i915_gem_object_unpin_map(cl->hwsp->vma->obj);
++	__idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
++
+ 	i915_active_fini(&cl->active);
+ 	kfree(cl);
+ }
+@@ -133,11 +137,6 @@ static void __rcu_cacheline_free(struct rcu_head *rcu)
+ static void __idle_cacheline_free(struct intel_timeline_cacheline *cl)
+ {
+ 	GEM_BUG_ON(!i915_active_is_idle(&cl->active));
+-
+-	i915_gem_object_unpin_map(cl->hwsp->vma->obj);
+-	i915_vma_put(cl->hwsp->vma);
+-	__idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
+-
+ 	call_rcu(&cl->rcu, __rcu_cacheline_free);
+ }
+ 
+@@ -179,7 +178,6 @@ cacheline_alloc(struct intel_timeline_hwsp *hwsp, unsigned int cacheline)
+ 		return ERR_CAST(vaddr);
+ 	}
+ 
+-	i915_vma_get(hwsp->vma);
+ 	cl->hwsp = hwsp;
+ 	cl->vaddr = page_pack_bits(vaddr, cacheline);
+ 
+diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
+index 620b6fab2c5cf..92adfee30c7c0 100644
+--- a/drivers/gpu/drm/i915/i915_request.h
++++ b/drivers/gpu/drm/i915/i915_request.h
+@@ -434,7 +434,7 @@ static inline u32 hwsp_seqno(const struct i915_request *rq)
+ 
+ static inline bool __i915_request_has_started(const struct i915_request *rq)
+ {
+-	return i915_seqno_passed(hwsp_seqno(rq), rq->fence.seqno - 1);
++	return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno - 1);
+ }
+ 
+ /**
+@@ -465,11 +465,19 @@ static inline bool __i915_request_has_started(const struct i915_request *rq)
+  */
+ static inline bool i915_request_started(const struct i915_request *rq)
+ {
++	bool result;
++
+ 	if (i915_request_signaled(rq))
+ 		return true;
+ 
+-	/* Remember: started but may have since been preempted! */
+-	return __i915_request_has_started(rq);
++	result = true;
++	rcu_read_lock(); /* the HWSP may be freed at runtime */
++	if (likely(!i915_request_signaled(rq)))
++		/* Remember: started but may have since been preempted! */
++		result = __i915_request_has_started(rq);
++	rcu_read_unlock();
++
++	return result;
+ }
+ 
+ /**
+@@ -482,10 +490,16 @@ static inline bool i915_request_started(const struct i915_request *rq)
+  */
+ static inline bool i915_request_is_running(const struct i915_request *rq)
+ {
++	bool result;
++
+ 	if (!i915_request_is_active(rq))
+ 		return false;
+ 
+-	return __i915_request_has_started(rq);
++	rcu_read_lock();
++	result = __i915_request_has_started(rq) && i915_request_is_active(rq);
++	rcu_read_unlock();
++
++	return result;
+ }
+ 
+ /**
+@@ -509,12 +523,25 @@ static inline bool i915_request_is_ready(const struct i915_request *rq)
+ 	return !list_empty(&rq->sched.link);
+ }
+ 
++static inline bool __i915_request_is_complete(const struct i915_request *rq)
++{
++	return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno);
++}
++
+ static inline bool i915_request_completed(const struct i915_request *rq)
+ {
++	bool result;
++
+ 	if (i915_request_signaled(rq))
+ 		return true;
+ 
+-	return i915_seqno_passed(hwsp_seqno(rq), rq->fence.seqno);
++	result = true;
++	rcu_read_lock(); /* the HWSP may be freed at runtime */
++	if (likely(!i915_request_signaled(rq)))
++		result = __i915_request_is_complete(rq);
++	rcu_read_unlock();
++
++	return result;
+ }
+ 
+ static inline void i915_request_mark_complete(struct i915_request *rq)
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 36d6b6093d16d..5b8cabb099eb1 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -221,7 +221,7 @@ nv50_dmac_wait(struct nvif_push *push, u32 size)
+ 
+ int
+ nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
+-		 const s32 *oclass, u8 head, void *data, u32 size, u64 syncbuf,
++		 const s32 *oclass, u8 head, void *data, u32 size, s64 syncbuf,
+ 		 struct nv50_dmac *dmac)
+ {
+ 	struct nouveau_cli *cli = (void *)device->object.client;
+@@ -270,7 +270,7 @@ nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (!syncbuf)
++	if (syncbuf < 0)
+ 		return 0;
+ 
+ 	ret = nvif_object_ctor(&dmac->base.user, "kmsSyncCtxDma", NV50_DISP_HANDLE_SYNCBUF,
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.h b/drivers/gpu/drm/nouveau/dispnv50/disp.h
+index 92bddc0836171..38dec11e7dda5 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.h
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.h
+@@ -95,7 +95,7 @@ struct nv50_outp_atom {
+ 
+ int nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
+ 		     const s32 *oclass, u8 head, void *data, u32 size,
+-		     u64 syncbuf, struct nv50_dmac *dmac);
++		     s64 syncbuf, struct nv50_dmac *dmac);
+ void nv50_dmac_destroy(struct nv50_dmac *);
+ 
+ /*
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c b/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c
+index 685b708713242..b390029c69ec1 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c
+@@ -76,7 +76,7 @@ wimmc37b_init_(const struct nv50_wimm_func *func, struct nouveau_drm *drm,
+ 	int ret;
+ 
+ 	ret = nv50_dmac_create(&drm->client.device, &disp->disp->object,
+-			       &oclass, 0, &args, sizeof(args), 0,
++			       &oclass, 0, &args, sizeof(args), -1,
+ 			       &wndw->wimm);
+ 	if (ret) {
+ 		NV_ERROR(drm, "wimm%04x allocation failed: %d\n", oclass, ret);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
+index 7deb81b6dbac6..4b571cc6bc70f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
+@@ -75,7 +75,7 @@ shadow_image(struct nvkm_bios *bios, int idx, u32 offset, struct shadow *mthd)
+ 	nvkm_debug(subdev, "%08x: type %02x, %d bytes\n",
+ 		   image.base, image.type, image.size);
+ 
+-	if (!shadow_fetch(bios, mthd, image.size)) {
++	if (!shadow_fetch(bios, mthd, image.base + image.size)) {
+ 		nvkm_debug(subdev, "%08x: fetch failed\n", image.base);
+ 		return 0;
+ 	}
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c
+index edb6148cbca04..d0e80ad526845 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c
+@@ -33,7 +33,7 @@ static void
+ gm200_i2c_aux_fini(struct gm200_i2c_aux *aux)
+ {
+ 	struct nvkm_device *device = aux->base.pad->i2c->subdev.device;
+-	nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00310000, 0x00000000);
++	nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00710000, 0x00000000);
+ }
+ 
+ static int
+@@ -54,10 +54,10 @@ gm200_i2c_aux_init(struct gm200_i2c_aux *aux)
+ 			AUX_ERR(&aux->base, "begin idle timeout %08x", ctrl);
+ 			return -EBUSY;
+ 		}
+-	} while (ctrl & 0x03010000);
++	} while (ctrl & 0x07010000);
+ 
+ 	/* set some magic, and wait up to 1ms for it to appear */
+-	nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00300000, ureq);
++	nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00700000, ureq);
+ 	timeout = 1000;
+ 	do {
+ 		ctrl = nvkm_rd32(device, 0x00d954 + (aux->ch * 0x50));
+@@ -67,7 +67,7 @@ gm200_i2c_aux_init(struct gm200_i2c_aux *aux)
+ 			gm200_i2c_aux_fini(aux);
+ 			return -EBUSY;
+ 		}
+-	} while ((ctrl & 0x03000000) != urep);
++	} while ((ctrl & 0x07000000) != urep);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c
+index 2340040942c93..1115376bc85f5 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c
+@@ -22,6 +22,7 @@
+  * Authors: Ben Skeggs
+  */
+ #include "priv.h"
++#include <subdev/timer.h>
+ 
+ static void
+ gf100_ibus_intr_hub(struct nvkm_subdev *ibus, int i)
+@@ -31,7 +32,6 @@ gf100_ibus_intr_hub(struct nvkm_subdev *ibus, int i)
+ 	u32 data = nvkm_rd32(device, 0x122124 + (i * 0x0400));
+ 	u32 stat = nvkm_rd32(device, 0x122128 + (i * 0x0400));
+ 	nvkm_debug(ibus, "HUB%d: %06x %08x (%08x)\n", i, addr, data, stat);
+-	nvkm_mask(device, 0x122128 + (i * 0x0400), 0x00000200, 0x00000000);
+ }
+ 
+ static void
+@@ -42,7 +42,6 @@ gf100_ibus_intr_rop(struct nvkm_subdev *ibus, int i)
+ 	u32 data = nvkm_rd32(device, 0x124124 + (i * 0x0400));
+ 	u32 stat = nvkm_rd32(device, 0x124128 + (i * 0x0400));
+ 	nvkm_debug(ibus, "ROP%d: %06x %08x (%08x)\n", i, addr, data, stat);
+-	nvkm_mask(device, 0x124128 + (i * 0x0400), 0x00000200, 0x00000000);
+ }
+ 
+ static void
+@@ -53,7 +52,6 @@ gf100_ibus_intr_gpc(struct nvkm_subdev *ibus, int i)
+ 	u32 data = nvkm_rd32(device, 0x128124 + (i * 0x0400));
+ 	u32 stat = nvkm_rd32(device, 0x128128 + (i * 0x0400));
+ 	nvkm_debug(ibus, "GPC%d: %06x %08x (%08x)\n", i, addr, data, stat);
+-	nvkm_mask(device, 0x128128 + (i * 0x0400), 0x00000200, 0x00000000);
+ }
+ 
+ void
+@@ -90,6 +88,12 @@ gf100_ibus_intr(struct nvkm_subdev *ibus)
+ 			intr1 &= ~stat;
+ 		}
+ 	}
++
++	nvkm_mask(device, 0x121c4c, 0x0000003f, 0x00000002);
++	nvkm_msec(device, 2000,
++		if (!(nvkm_rd32(device, 0x121c4c) & 0x0000003f))
++			break;
++	);
+ }
+ 
+ static int
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c b/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c
+index f3915f85838ed..22e487b493ad1 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c
+@@ -22,6 +22,7 @@
+  * Authors: Ben Skeggs
+  */
+ #include "priv.h"
++#include <subdev/timer.h>
+ 
+ static void
+ gk104_ibus_intr_hub(struct nvkm_subdev *ibus, int i)
+@@ -31,7 +32,6 @@ gk104_ibus_intr_hub(struct nvkm_subdev *ibus, int i)
+ 	u32 data = nvkm_rd32(device, 0x122124 + (i * 0x0800));
+ 	u32 stat = nvkm_rd32(device, 0x122128 + (i * 0x0800));
+ 	nvkm_debug(ibus, "HUB%d: %06x %08x (%08x)\n", i, addr, data, stat);
+-	nvkm_mask(device, 0x122128 + (i * 0x0800), 0x00000200, 0x00000000);
+ }
+ 
+ static void
+@@ -42,7 +42,6 @@ gk104_ibus_intr_rop(struct nvkm_subdev *ibus, int i)
+ 	u32 data = nvkm_rd32(device, 0x124124 + (i * 0x0800));
+ 	u32 stat = nvkm_rd32(device, 0x124128 + (i * 0x0800));
+ 	nvkm_debug(ibus, "ROP%d: %06x %08x (%08x)\n", i, addr, data, stat);
+-	nvkm_mask(device, 0x124128 + (i * 0x0800), 0x00000200, 0x00000000);
+ }
+ 
+ static void
+@@ -53,7 +52,6 @@ gk104_ibus_intr_gpc(struct nvkm_subdev *ibus, int i)
+ 	u32 data = nvkm_rd32(device, 0x128124 + (i * 0x0800));
+ 	u32 stat = nvkm_rd32(device, 0x128128 + (i * 0x0800));
+ 	nvkm_debug(ibus, "GPC%d: %06x %08x (%08x)\n", i, addr, data, stat);
+-	nvkm_mask(device, 0x128128 + (i * 0x0800), 0x00000200, 0x00000000);
+ }
+ 
+ void
+@@ -90,6 +88,12 @@ gk104_ibus_intr(struct nvkm_subdev *ibus)
+ 			intr1 &= ~stat;
+ 		}
+ 	}
++
++	nvkm_mask(device, 0x12004c, 0x0000003f, 0x00000002);
++	nvkm_msec(device, 2000,
++		if (!(nvkm_rd32(device, 0x12004c) & 0x0000003f))
++			break;
++	);
+ }
+ 
+ static int
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c
+index de91e9a261725..6d5212ae2fd57 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c
+@@ -316,9 +316,9 @@ nvkm_mmu_vram(struct nvkm_mmu *mmu)
+ {
+ 	struct nvkm_device *device = mmu->subdev.device;
+ 	struct nvkm_mm *mm = &device->fb->ram->vram;
+-	const u32 sizeN = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NORMAL);
+-	const u32 sizeU = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NOMAP);
+-	const u32 sizeM = nvkm_mm_heap_size(mm, NVKM_RAM_MM_MIXED);
++	const u64 sizeN = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NORMAL);
++	const u64 sizeU = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NOMAP);
++	const u64 sizeM = nvkm_mm_heap_size(mm, NVKM_RAM_MM_MIXED);
+ 	u8 type = NVKM_MEM_KIND * !!mmu->func->kind;
+ 	u8 heap = NVKM_MEM_VRAM;
+ 	int heapM, heapN, heapU;
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index afc178b0d89f4..eaba98e15de46 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1268,6 +1268,7 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi)
+ 	card->dai_link = dai_link;
+ 	card->num_links = 1;
+ 	card->name = vc4_hdmi->variant->card_name;
++	card->driver_name = "vc4-hdmi";
+ 	card->dev = dev;
+ 	card->owner = THIS_MODULE;
+ 
+diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
+index 612629678c845..9b56226ce0d1c 100644
+--- a/drivers/hid/Kconfig
++++ b/drivers/hid/Kconfig
+@@ -899,6 +899,7 @@ config HID_SONY
+ 	depends on NEW_LEDS
+ 	depends on LEDS_CLASS
+ 	select POWER_SUPPLY
++	select CRC32
+ 	help
+ 	Support for
+ 
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index f170feaac40ba..94180c63571ed 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -387,6 +387,7 @@
+ #define USB_DEVICE_ID_TOSHIBA_CLICK_L9W	0x0401
+ #define USB_DEVICE_ID_HP_X2		0x074d
+ #define USB_DEVICE_ID_HP_X2_10_COVER	0x0755
++#define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN	0x2706
+ 
+ #define USB_VENDOR_ID_ELECOM		0x056e
+ #define USB_DEVICE_ID_ELECOM_BM084	0x0061
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 4dca113924593..32024905fd70f 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -322,6 +322,8 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
+ 		USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD),
+ 	  HID_BATTERY_QUIRK_IGNORE },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN),
++	  HID_BATTERY_QUIRK_IGNORE },
+ 	{}
+ };
+ 
+diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
+index 1ffcfc9a1e033..45e7e0bdd382b 100644
+--- a/drivers/hid/hid-logitech-dj.c
++++ b/drivers/hid/hid-logitech-dj.c
+@@ -1869,6 +1869,10 @@ static const struct hid_device_id logi_dj_receivers[] = {
+ 	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
+ 		0xc531),
+ 	 .driver_data = recvr_type_gaming_hidpp},
++	{ /* Logitech G602 receiver (0xc537) */
++	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
++		0xc537),
++	 .driver_data = recvr_type_gaming_hidpp},
+ 	{ /* Logitech lightspeed receiver (0xc539) */
+ 	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
+ 		USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1),
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 0ca7231195473..74ebfb12c360e 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -4051,6 +4051,8 @@ static const struct hid_device_id hidpp_devices[] = {
+ 	{ /* MX Master mouse over Bluetooth */
+ 	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb012),
+ 	  .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
++	{ /* MX Ergo trackball over Bluetooth */
++	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01d) },
+ 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01e),
+ 	  .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
+ 	{ /* MX Master 3 mouse over Bluetooth */
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index d670bcd57bdef..0743ef51d3b24 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2054,6 +2054,10 @@ static const struct hid_device_id mt_devices[] = {
+ 		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+ 			USB_VENDOR_ID_SYNAPTICS, 0xce08) },
+ 
++	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++			USB_VENDOR_ID_SYNAPTICS, 0xce09) },
++
+ 	/* TopSeed panels */
+ 	{ .driver_data = MT_CLS_TOPSEED,
+ 		MT_USB_DEVICE(USB_VENDOR_ID_TOPSEED2,
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 4fad3e6745e53..a5a402e776c77 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -2542,7 +2542,6 @@ static void hv_kexec_handler(void)
+ 	/* Make sure conn_state is set as hv_synic_cleanup checks for it */
+ 	mb();
+ 	cpuhp_remove_state(hyperv_cpuhp_online);
+-	hyperv_cleanup();
+ };
+ 
+ static void hv_crash_handler(struct pt_regs *regs)
+@@ -2558,7 +2557,6 @@ static void hv_crash_handler(struct pt_regs *regs)
+ 	cpu = smp_processor_id();
+ 	hv_stimer_cleanup(cpu);
+ 	hv_synic_disable_regs(cpu);
+-	hyperv_cleanup();
+ };
+ 
+ static int hv_synic_suspend(void)
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index 52acd77438ede..251e75c9ba9d0 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -268,6 +268,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7aa6),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Alder Lake-P */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x51a6),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{
+ 		/* Alder Lake CPU */
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f),
+diff --git a/drivers/hwtracing/stm/heartbeat.c b/drivers/hwtracing/stm/heartbeat.c
+index 3e7df1c0477f7..81d7b21d31ec2 100644
+--- a/drivers/hwtracing/stm/heartbeat.c
++++ b/drivers/hwtracing/stm/heartbeat.c
+@@ -64,7 +64,7 @@ static void stm_heartbeat_unlink(struct stm_source_data *data)
+ 
+ static int stm_heartbeat_init(void)
+ {
+-	int i, ret = -ENOMEM;
++	int i, ret;
+ 
+ 	if (nr_devs < 0 || nr_devs > STM_HEARTBEAT_MAX)
+ 		return -EINVAL;
+@@ -72,8 +72,10 @@ static int stm_heartbeat_init(void)
+ 	for (i = 0; i < nr_devs; i++) {
+ 		stm_heartbeat[i].data.name =
+ 			kasprintf(GFP_KERNEL, "heartbeat.%d", i);
+-		if (!stm_heartbeat[i].data.name)
++		if (!stm_heartbeat[i].data.name) {
++			ret = -ENOMEM;
+ 			goto fail_unregister;
++		}
+ 
+ 		stm_heartbeat[i].data.nr_chans	= 1;
+ 		stm_heartbeat[i].data.link	= stm_heartbeat_link;
+diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
+index a49e0ed4a599d..7e693dcbdd196 100644
+--- a/drivers/i2c/busses/Kconfig
++++ b/drivers/i2c/busses/Kconfig
+@@ -1012,6 +1012,7 @@ config I2C_SIRF
+ config I2C_SPRD
+ 	tristate "Spreadtrum I2C interface"
+ 	depends on I2C=y && (ARCH_SPRD || COMPILE_TEST)
++	depends on COMMON_CLK
+ 	help
+ 	  If you say yes to this option, support will be included for the
+ 	  Spreadtrum I2C interface.
+diff --git a/drivers/i2c/busses/i2c-octeon-core.c b/drivers/i2c/busses/i2c-octeon-core.c
+index d9607905dc2f1..845eda70b8cab 100644
+--- a/drivers/i2c/busses/i2c-octeon-core.c
++++ b/drivers/i2c/busses/i2c-octeon-core.c
+@@ -347,7 +347,7 @@ static int octeon_i2c_read(struct octeon_i2c *i2c, int target,
+ 		if (result)
+ 			return result;
+ 		if (recv_len && i == 0) {
+-			if (data[i] > I2C_SMBUS_BLOCK_MAX + 1)
++			if (data[i] > I2C_SMBUS_BLOCK_MAX)
+ 				return -EPROTO;
+ 			length += data[i];
+ 		}
+diff --git a/drivers/i2c/busses/i2c-tegra-bpmp.c b/drivers/i2c/busses/i2c-tegra-bpmp.c
+index ec7a7e917eddb..c0c7d01473f2b 100644
+--- a/drivers/i2c/busses/i2c-tegra-bpmp.c
++++ b/drivers/i2c/busses/i2c-tegra-bpmp.c
+@@ -80,7 +80,7 @@ static int tegra_bpmp_xlate_flags(u16 flags, u16 *out)
+ 		flags &= ~I2C_M_RECV_LEN;
+ 	}
+ 
+-	return (flags != 0) ? -EINVAL : 0;
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index 6f08c0c3238d5..0727383f49402 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -533,7 +533,7 @@ static int tegra_i2c_poll_register(struct tegra_i2c_dev *i2c_dev,
+ 	void __iomem *addr = i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, reg);
+ 	u32 val;
+ 
+-	if (!i2c_dev->atomic_mode)
++	if (!i2c_dev->atomic_mode && !in_irq())
+ 		return readl_relaxed_poll_timeout(addr, val, !(val & mask),
+ 						  delay_us, timeout_us);
+ 
+diff --git a/drivers/iio/adc/ti_am335x_adc.c b/drivers/iio/adc/ti_am335x_adc.c
+index b11c8c47ba2aa..e946903b09936 100644
+--- a/drivers/iio/adc/ti_am335x_adc.c
++++ b/drivers/iio/adc/ti_am335x_adc.c
+@@ -397,16 +397,12 @@ static int tiadc_iio_buffered_hardware_setup(struct device *dev,
+ 	ret = devm_request_threaded_irq(dev, irq, pollfunc_th, pollfunc_bh,
+ 				flags, indio_dev->name, indio_dev);
+ 	if (ret)
+-		goto error_kfifo_free;
++		return ret;
+ 
+ 	indio_dev->setup_ops = setup_ops;
+ 	indio_dev->modes |= INDIO_BUFFER_SOFTWARE;
+ 
+ 	return 0;
+-
+-error_kfifo_free:
+-	iio_kfifo_free(indio_dev->buffer);
+-	return ret;
+ }
+ 
+ static const char * const chan_name_ain[] = {
+diff --git a/drivers/iio/common/st_sensors/st_sensors_trigger.c b/drivers/iio/common/st_sensors/st_sensors_trigger.c
+index 0507283bd4c1d..2dbd2646e44e9 100644
+--- a/drivers/iio/common/st_sensors/st_sensors_trigger.c
++++ b/drivers/iio/common/st_sensors/st_sensors_trigger.c
+@@ -23,35 +23,31 @@
+  * @sdata: Sensor data.
+  *
+  * returns:
+- * 0 - no new samples available
+- * 1 - new samples available
+- * negative - error or unknown
++ * false - no new samples available or read error
++ * true - new samples available
+  */
+-static int st_sensors_new_samples_available(struct iio_dev *indio_dev,
+-					    struct st_sensor_data *sdata)
++static bool st_sensors_new_samples_available(struct iio_dev *indio_dev,
++					     struct st_sensor_data *sdata)
+ {
+ 	int ret, status;
+ 
+ 	/* How would I know if I can't check it? */
+ 	if (!sdata->sensor_settings->drdy_irq.stat_drdy.addr)
+-		return -EINVAL;
++		return true;
+ 
+ 	/* No scan mask, no interrupt */
+ 	if (!indio_dev->active_scan_mask)
+-		return 0;
++		return false;
+ 
+ 	ret = regmap_read(sdata->regmap,
+ 			  sdata->sensor_settings->drdy_irq.stat_drdy.addr,
+ 			  &status);
+ 	if (ret < 0) {
+ 		dev_err(sdata->dev, "error checking samples available\n");
+-		return ret;
++		return false;
+ 	}
+ 
+-	if (status & sdata->sensor_settings->drdy_irq.stat_drdy.mask)
+-		return 1;
+-
+-	return 0;
++	return !!(status & sdata->sensor_settings->drdy_irq.stat_drdy.mask);
+ }
+ 
+ /**
+@@ -180,9 +176,15 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
+ 
+ 	/* Tell the interrupt handler that we're dealing with edges */
+ 	if (irq_trig == IRQF_TRIGGER_FALLING ||
+-	    irq_trig == IRQF_TRIGGER_RISING)
++	    irq_trig == IRQF_TRIGGER_RISING) {
++		if (!sdata->sensor_settings->drdy_irq.stat_drdy.addr) {
++			dev_err(&indio_dev->dev,
++				"edge IRQ not supported w/o stat register.\n");
++			err = -EOPNOTSUPP;
++			goto iio_trigger_free;
++		}
+ 		sdata->edge_irq = true;
+-	else
++	} else {
+ 		/*
+ 		 * If we're not using edges (i.e. level interrupts) we
+ 		 * just mask off the IRQ, handle one interrupt, then
+@@ -190,6 +192,7 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
+ 		 * interrupt handler top half again and start over.
+ 		 */
+ 		irq_trig |= IRQF_ONESHOT;
++	}
+ 
+ 	/*
+ 	 * If the interrupt pin is Open Drain, by definition this
+diff --git a/drivers/iio/dac/ad5504.c b/drivers/iio/dac/ad5504.c
+index 28921b62e6420..e9297c25d4ef6 100644
+--- a/drivers/iio/dac/ad5504.c
++++ b/drivers/iio/dac/ad5504.c
+@@ -187,9 +187,9 @@ static ssize_t ad5504_write_dac_powerdown(struct iio_dev *indio_dev,
+ 		return ret;
+ 
+ 	if (pwr_down)
+-		st->pwr_down_mask |= (1 << chan->channel);
+-	else
+ 		st->pwr_down_mask &= ~(1 << chan->channel);
++	else
++		st->pwr_down_mask |= (1 << chan->channel);
+ 
+ 	ret = ad5504_spi_write(st, AD5504_ADDR_CTRL,
+ 				AD5504_DAC_PWRDWN_MODE(st->pwr_down_mode) |
+diff --git a/drivers/iio/temperature/mlx90632.c b/drivers/iio/temperature/mlx90632.c
+index 503fe54a0bb93..608ccb1d8bc82 100644
+--- a/drivers/iio/temperature/mlx90632.c
++++ b/drivers/iio/temperature/mlx90632.c
+@@ -248,6 +248,12 @@ static int mlx90632_set_meas_type(struct regmap *regmap, u8 type)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	/*
++	 * Give the mlx90632 some time to reset properly before sending a new I2C command
++	 * if this is not done, the following I2C command(s) will not be accepted.
++	 */
++	usleep_range(150, 200);
++
+ 	ret = regmap_write_bits(regmap, MLX90632_REG_CONTROL,
+ 				 (MLX90632_CFG_MTYP_MASK | MLX90632_CFG_PWR_MASK),
+ 				 (MLX90632_MTYP_STATUS(type) | MLX90632_PWR_STATUS_HALT));
+diff --git a/drivers/infiniband/core/cma_configfs.c b/drivers/infiniband/core/cma_configfs.c
+index 7ec4af2ed87ab..35d1ec1095f9c 100644
+--- a/drivers/infiniband/core/cma_configfs.c
++++ b/drivers/infiniband/core/cma_configfs.c
+@@ -131,8 +131,10 @@ static ssize_t default_roce_mode_store(struct config_item *item,
+ 		return ret;
+ 
+ 	gid_type = ib_cache_gid_parse_type_str(buf);
+-	if (gid_type < 0)
++	if (gid_type < 0) {
++		cma_configfs_params_put(cma_dev);
+ 		return -EINVAL;
++	}
+ 
+ 	ret = cma_set_default_gid_type(cma_dev, group->port_num, gid_type);
+ 
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index ffe2563ad3456..2cc785c1970b4 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -95,8 +95,6 @@ struct ucma_context {
+ 	u64			uid;
+ 
+ 	struct list_head	list;
+-	/* sync between removal event and id destroy, protected by file mut */
+-	int			destroying;
+ 	struct work_struct	close_work;
+ };
+ 
+@@ -122,7 +120,7 @@ static DEFINE_XARRAY_ALLOC(ctx_table);
+ static DEFINE_XARRAY_ALLOC(multicast_table);
+ 
+ static const struct file_operations ucma_fops;
+-static int __destroy_id(struct ucma_context *ctx);
++static int ucma_destroy_private_ctx(struct ucma_context *ctx);
+ 
+ static inline struct ucma_context *_ucma_find_context(int id,
+ 						      struct ucma_file *file)
+@@ -179,19 +177,14 @@ static void ucma_close_id(struct work_struct *work)
+ 
+ 	/* once all inflight tasks are finished, we close all underlying
+ 	 * resources. The context is still alive till its explicit destryoing
+-	 * by its creator.
++	 * by its creator. This puts back the xarray's reference.
+ 	 */
+ 	ucma_put_ctx(ctx);
+ 	wait_for_completion(&ctx->comp);
+ 	/* No new events will be generated after destroying the id. */
+ 	rdma_destroy_id(ctx->cm_id);
+ 
+-	/*
+-	 * At this point ctx->ref is zero so the only place the ctx can be is in
+-	 * a uevent or in __destroy_id(). Since the former doesn't touch
+-	 * ctx->cm_id and the latter sync cancels this, there is no races with
+-	 * this store.
+-	 */
++	/* Reading the cm_id without holding a positive ref is not allowed */
+ 	ctx->cm_id = NULL;
+ }
+ 
+@@ -204,7 +197,6 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file)
+ 		return NULL;
+ 
+ 	INIT_WORK(&ctx->close_work, ucma_close_id);
+-	refcount_set(&ctx->ref, 1);
+ 	init_completion(&ctx->comp);
+ 	/* So list_del() will work if we don't do ucma_finish_ctx() */
+ 	INIT_LIST_HEAD(&ctx->list);
+@@ -218,6 +210,13 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file)
+ 	return ctx;
+ }
+ 
++static void ucma_set_ctx_cm_id(struct ucma_context *ctx,
++			       struct rdma_cm_id *cm_id)
++{
++	refcount_set(&ctx->ref, 1);
++	ctx->cm_id = cm_id;
++}
++
+ static void ucma_finish_ctx(struct ucma_context *ctx)
+ {
+ 	lockdep_assert_held(&ctx->file->mut);
+@@ -303,7 +302,7 @@ static int ucma_connect_event_handler(struct rdma_cm_id *cm_id,
+ 	ctx = ucma_alloc_ctx(listen_ctx->file);
+ 	if (!ctx)
+ 		goto err_backlog;
+-	ctx->cm_id = cm_id;
++	ucma_set_ctx_cm_id(ctx, cm_id);
+ 
+ 	uevent = ucma_create_uevent(listen_ctx, event);
+ 	if (!uevent)
+@@ -321,8 +320,7 @@ static int ucma_connect_event_handler(struct rdma_cm_id *cm_id,
+ 	return 0;
+ 
+ err_alloc:
+-	xa_erase(&ctx_table, ctx->id);
+-	kfree(ctx);
++	ucma_destroy_private_ctx(ctx);
+ err_backlog:
+ 	atomic_inc(&listen_ctx->backlog);
+ 	/* Returning error causes the new ID to be destroyed */
+@@ -356,8 +354,12 @@ static int ucma_event_handler(struct rdma_cm_id *cm_id,
+ 		wake_up_interruptible(&ctx->file->poll_wait);
+ 	}
+ 
+-	if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL && !ctx->destroying)
+-		queue_work(system_unbound_wq, &ctx->close_work);
++	if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL) {
++		xa_lock(&ctx_table);
++		if (xa_load(&ctx_table, ctx->id) == ctx)
++			queue_work(system_unbound_wq, &ctx->close_work);
++		xa_unlock(&ctx_table);
++	}
+ 	return 0;
+ }
+ 
+@@ -461,13 +463,12 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf,
+ 		ret = PTR_ERR(cm_id);
+ 		goto err1;
+ 	}
+-	ctx->cm_id = cm_id;
++	ucma_set_ctx_cm_id(ctx, cm_id);
+ 
+ 	resp.id = ctx->id;
+ 	if (copy_to_user(u64_to_user_ptr(cmd.response),
+ 			 &resp, sizeof(resp))) {
+-		xa_erase(&ctx_table, ctx->id);
+-		__destroy_id(ctx);
++		ucma_destroy_private_ctx(ctx);
+ 		return -EFAULT;
+ 	}
+ 
+@@ -477,8 +478,7 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf,
+ 	return 0;
+ 
+ err1:
+-	xa_erase(&ctx_table, ctx->id);
+-	kfree(ctx);
++	ucma_destroy_private_ctx(ctx);
+ 	return ret;
+ }
+ 
+@@ -516,68 +516,73 @@ static void ucma_cleanup_mc_events(struct ucma_multicast *mc)
+ 	rdma_unlock_handler(mc->ctx->cm_id);
+ }
+ 
+-/*
+- * ucma_free_ctx is called after the underlying rdma CM-ID is destroyed. At
+- * this point, no new events will be reported from the hardware. However, we
+- * still need to cleanup the UCMA context for this ID. Specifically, there
+- * might be events that have not yet been consumed by the user space software.
+- * mutex. After that we release them as needed.
+- */
+-static int ucma_free_ctx(struct ucma_context *ctx)
++static int ucma_cleanup_ctx_events(struct ucma_context *ctx)
+ {
+ 	int events_reported;
+ 	struct ucma_event *uevent, *tmp;
+ 	LIST_HEAD(list);
+ 
+-	ucma_cleanup_multicast(ctx);
+-
+-	/* Cleanup events not yet reported to the user. */
++	/* Cleanup events not yet reported to the user.*/
+ 	mutex_lock(&ctx->file->mut);
+ 	list_for_each_entry_safe(uevent, tmp, &ctx->file->event_list, list) {
+-		if (uevent->ctx == ctx || uevent->conn_req_ctx == ctx)
++		if (uevent->ctx != ctx)
++			continue;
++
++		if (uevent->resp.event == RDMA_CM_EVENT_CONNECT_REQUEST &&
++		    xa_cmpxchg(&ctx_table, uevent->conn_req_ctx->id,
++			       uevent->conn_req_ctx, XA_ZERO_ENTRY,
++			       GFP_KERNEL) == uevent->conn_req_ctx) {
+ 			list_move_tail(&uevent->list, &list);
++			continue;
++		}
++		list_del(&uevent->list);
++		kfree(uevent);
+ 	}
+ 	list_del(&ctx->list);
+ 	events_reported = ctx->events_reported;
+ 	mutex_unlock(&ctx->file->mut);
+ 
+ 	/*
+-	 * If this was a listening ID then any connections spawned from it
+-	 * that have not been delivered to userspace are cleaned up too.
+-	 * Must be done outside any locks.
++	 * If this was a listening ID then any connections spawned from it that
++	 * have not been delivered to userspace are cleaned up too. Must be done
++	 * outside any locks.
+ 	 */
+ 	list_for_each_entry_safe(uevent, tmp, &list, list) {
+-		list_del(&uevent->list);
+-		if (uevent->resp.event == RDMA_CM_EVENT_CONNECT_REQUEST &&
+-		    uevent->conn_req_ctx != ctx)
+-			__destroy_id(uevent->conn_req_ctx);
++		ucma_destroy_private_ctx(uevent->conn_req_ctx);
+ 		kfree(uevent);
+ 	}
+-
+-	mutex_destroy(&ctx->mutex);
+-	kfree(ctx);
+ 	return events_reported;
+ }
+ 
+-static int __destroy_id(struct ucma_context *ctx)
++/*
++ * When this is called the xarray must have a XA_ZERO_ENTRY in the ctx->id (ie
++ * the ctx is not public to the user). This either because:
++ *  - ucma_finish_ctx() hasn't been called
++ *  - xa_cmpxchg() succeed to remove the entry (only one thread can succeed)
++ */
++static int ucma_destroy_private_ctx(struct ucma_context *ctx)
+ {
++	int events_reported;
++
+ 	/*
+-	 * If the refcount is already 0 then ucma_close_id() has already
+-	 * destroyed the cm_id, otherwise holding the refcount keeps cm_id
+-	 * valid. Prevent queue_work() from being called.
++	 * Destroy the underlying cm_id. New work queuing is prevented now by
++	 * the removal from the xarray. Once the work is cancled ref will either
++	 * be 0 because the work ran to completion and consumed the ref from the
++	 * xarray, or it will be positive because we still have the ref from the
++	 * xarray. This can also be 0 in cases where cm_id was never set
+ 	 */
+-	if (refcount_inc_not_zero(&ctx->ref)) {
+-		rdma_lock_handler(ctx->cm_id);
+-		ctx->destroying = 1;
+-		rdma_unlock_handler(ctx->cm_id);
+-		ucma_put_ctx(ctx);
+-	}
+-
+ 	cancel_work_sync(&ctx->close_work);
+-	/* At this point it's guaranteed that there is no inflight closing task */
+-	if (ctx->cm_id)
++	if (refcount_read(&ctx->ref))
+ 		ucma_close_id(&ctx->close_work);
+-	return ucma_free_ctx(ctx);
++
++	events_reported = ucma_cleanup_ctx_events(ctx);
++	ucma_cleanup_multicast(ctx);
++
++	WARN_ON(xa_cmpxchg(&ctx_table, ctx->id, XA_ZERO_ENTRY, NULL,
++			   GFP_KERNEL) != NULL);
++	mutex_destroy(&ctx->mutex);
++	kfree(ctx);
++	return events_reported;
+ }
+ 
+ static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf,
+@@ -596,14 +601,17 @@ static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf,
+ 
+ 	xa_lock(&ctx_table);
+ 	ctx = _ucma_find_context(cmd.id, file);
+-	if (!IS_ERR(ctx))
+-		__xa_erase(&ctx_table, ctx->id);
++	if (!IS_ERR(ctx)) {
++		if (__xa_cmpxchg(&ctx_table, ctx->id, ctx, XA_ZERO_ENTRY,
++				 GFP_KERNEL) != ctx)
++			ctx = ERR_PTR(-ENOENT);
++	}
+ 	xa_unlock(&ctx_table);
+ 
+ 	if (IS_ERR(ctx))
+ 		return PTR_ERR(ctx);
+ 
+-	resp.events_reported = __destroy_id(ctx);
++	resp.events_reported = ucma_destroy_private_ctx(ctx);
+ 	if (copy_to_user(u64_to_user_ptr(cmd.response),
+ 			 &resp, sizeof(resp)))
+ 		ret = -EFAULT;
+@@ -1777,15 +1785,16 @@ static int ucma_close(struct inode *inode, struct file *filp)
+ 	 * prevented by this being a FD release function. The list_add_tail() in
+ 	 * ucma_connect_event_handler() can run concurrently, however it only
+ 	 * adds to the list *after* a listening ID. By only reading the first of
+-	 * the list, and relying on __destroy_id() to block
++	 * the list, and relying on ucma_destroy_private_ctx() to block
+ 	 * ucma_connect_event_handler(), no additional locking is needed.
+ 	 */
+ 	while (!list_empty(&file->ctx_list)) {
+ 		struct ucma_context *ctx = list_first_entry(
+ 			&file->ctx_list, struct ucma_context, list);
+ 
+-		xa_erase(&ctx_table, ctx->id);
+-		__destroy_id(ctx);
++		WARN_ON(xa_cmpxchg(&ctx_table, ctx->id, ctx, XA_ZERO_ENTRY,
++				   GFP_KERNEL) != ctx);
++		ucma_destroy_private_ctx(ctx);
+ 	}
+ 	kfree(file);
+ 	return 0;
+diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
+index e9fecbdf391bc..5157ae29a4460 100644
+--- a/drivers/infiniband/core/umem.c
++++ b/drivers/infiniband/core/umem.c
+@@ -126,7 +126,7 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
+ 	 */
+ 	if (mask)
+ 		pgsz_bitmap &= GENMASK(count_trailing_zeros(mask), 0);
+-	return rounddown_pow_of_two(pgsz_bitmap);
++	return pgsz_bitmap ? rounddown_pow_of_two(pgsz_bitmap) : 0;
+ }
+ EXPORT_SYMBOL(ib_umem_find_best_pgsz);
+ 
+diff --git a/drivers/interconnect/imx/imx8mq.c b/drivers/interconnect/imx/imx8mq.c
+index ba43a15aefec0..d7768d3c6d8aa 100644
+--- a/drivers/interconnect/imx/imx8mq.c
++++ b/drivers/interconnect/imx/imx8mq.c
+@@ -7,6 +7,7 @@
+ 
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
++#include <linux/interconnect-provider.h>
+ #include <dt-bindings/interconnect/imx8mq.h>
+ 
+ #include "imx.h"
+@@ -94,6 +95,7 @@ static struct platform_driver imx8mq_icc_driver = {
+ 	.remove = imx8mq_icc_remove,
+ 	.driver = {
+ 		.name = "imx8mq-interconnect",
++		.sync_state = icc_sync_state,
+ 	},
+ };
+ 
+diff --git a/drivers/irqchip/irq-mips-cpu.c b/drivers/irqchip/irq-mips-cpu.c
+index 95d4fd8f7a968..0bbb0b2d0dd5f 100644
+--- a/drivers/irqchip/irq-mips-cpu.c
++++ b/drivers/irqchip/irq-mips-cpu.c
+@@ -197,6 +197,13 @@ static int mips_cpu_ipi_alloc(struct irq_domain *domain, unsigned int virq,
+ 		if (ret)
+ 			return ret;
+ 
++		ret = irq_domain_set_hwirq_and_chip(domain->parent, virq + i, hwirq,
++						    &mips_mt_cpu_irq_controller,
++						    NULL);
++
++		if (ret)
++			return ret;
++
+ 		ret = irq_set_irq_type(virq + i, IRQ_TYPE_LEVEL_HIGH);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
+index c1bcac71008c6..28ddcaa5358b1 100644
+--- a/drivers/lightnvm/core.c
++++ b/drivers/lightnvm/core.c
+@@ -844,11 +844,10 @@ static int nvm_bb_chunk_sense(struct nvm_dev *dev, struct ppa_addr ppa)
+ 	rqd.ppa_addr = generic_to_dev_addr(dev, ppa);
+ 
+ 	ret = nvm_submit_io_sync_raw(dev, &rqd);
++	__free_page(page);
+ 	if (ret)
+ 		return ret;
+ 
+-	__free_page(page);
+-
+ 	return rqd.error;
+ }
+ 
+diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
+index 0e04d3718af3c..2cefb075b2b84 100644
+--- a/drivers/md/Kconfig
++++ b/drivers/md/Kconfig
+@@ -585,6 +585,7 @@ config DM_INTEGRITY
+ 	select BLK_DEV_INTEGRITY
+ 	select DM_BUFIO
+ 	select CRYPTO
++	select CRYPTO_SKCIPHER
+ 	select ASYNC_XOR
+ 	help
+ 	  This device-mapper target emulates a block device that has
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 89de9cde02028..875823d6ee7e0 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -1481,9 +1481,9 @@ static int crypt_alloc_req_skcipher(struct crypt_config *cc,
+ static int crypt_alloc_req_aead(struct crypt_config *cc,
+ 				 struct convert_context *ctx)
+ {
+-	if (!ctx->r.req) {
+-		ctx->r.req = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO);
+-		if (!ctx->r.req)
++	if (!ctx->r.req_aead) {
++		ctx->r.req_aead = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO);
++		if (!ctx->r.req_aead)
+ 			return -ENOMEM;
+ 	}
+ 
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 81df019ab284a..b64fede032dc5 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -257,8 +257,9 @@ struct dm_integrity_c {
+ 	bool journal_uptodate;
+ 	bool just_formatted;
+ 	bool recalculate_flag;
+-	bool fix_padding;
+ 	bool discard;
++	bool fix_padding;
++	bool legacy_recalculate;
+ 
+ 	struct alg_spec internal_hash_alg;
+ 	struct alg_spec journal_crypt_alg;
+@@ -386,6 +387,14 @@ static int dm_integrity_failed(struct dm_integrity_c *ic)
+ 	return READ_ONCE(ic->failed);
+ }
+ 
++static bool dm_integrity_disable_recalculate(struct dm_integrity_c *ic)
++{
++	if ((ic->internal_hash_alg.key || ic->journal_mac_alg.key) &&
++	    !ic->legacy_recalculate)
++		return true;
++	return false;
++}
++
+ static commit_id_t dm_integrity_commit_id(struct dm_integrity_c *ic, unsigned i,
+ 					  unsigned j, unsigned char seq)
+ {
+@@ -3140,6 +3149,7 @@ static void dm_integrity_status(struct dm_target *ti, status_type_t type,
+ 		arg_count += !!ic->journal_crypt_alg.alg_string;
+ 		arg_count += !!ic->journal_mac_alg.alg_string;
+ 		arg_count += (ic->sb->flags & cpu_to_le32(SB_FLAG_FIXED_PADDING)) != 0;
++		arg_count += ic->legacy_recalculate;
+ 		DMEMIT("%s %llu %u %c %u", ic->dev->name, ic->start,
+ 		       ic->tag_size, ic->mode, arg_count);
+ 		if (ic->meta_dev)
+@@ -3163,6 +3173,8 @@ static void dm_integrity_status(struct dm_target *ti, status_type_t type,
+ 		}
+ 		if ((ic->sb->flags & cpu_to_le32(SB_FLAG_FIXED_PADDING)) != 0)
+ 			DMEMIT(" fix_padding");
++		if (ic->legacy_recalculate)
++			DMEMIT(" legacy_recalculate");
+ 
+ #define EMIT_ALG(a, n)							\
+ 		do {							\
+@@ -3792,7 +3804,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 	unsigned extra_args;
+ 	struct dm_arg_set as;
+ 	static const struct dm_arg _args[] = {
+-		{0, 15, "Invalid number of feature args"},
++		{0, 16, "Invalid number of feature args"},
+ 	};
+ 	unsigned journal_sectors, interleave_sectors, buffer_sectors, journal_watermark, sync_msec;
+ 	bool should_write_sb;
+@@ -3940,6 +3952,8 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 			ic->discard = true;
+ 		} else if (!strcmp(opt_string, "fix_padding")) {
+ 			ic->fix_padding = true;
++		} else if (!strcmp(opt_string, "legacy_recalculate")) {
++			ic->legacy_recalculate = true;
+ 		} else {
+ 			r = -EINVAL;
+ 			ti->error = "Invalid argument";
+@@ -4235,6 +4249,20 @@ try_smaller_buffer:
+ 			r = -ENOMEM;
+ 			goto bad;
+ 		}
++	} else {
++		if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING)) {
++			ti->error = "Recalculate can only be specified with internal_hash";
++			r = -EINVAL;
++			goto bad;
++		}
++	}
++
++	if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING) &&
++	    le64_to_cpu(ic->sb->recalc_sector) < ic->provided_data_sectors &&
++	    dm_integrity_disable_recalculate(ic)) {
++		ti->error = "Recalculating with HMAC is disabled for security reasons - if you really need it, use the argument \"legacy_recalculate\"";
++		r = -EOPNOTSUPP;
++		goto bad;
+ 	}
+ 
+ 	ic->bufio = dm_bufio_client_create(ic->meta_dev ? ic->meta_dev->bdev : ic->dev->bdev,
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 7eeb7c4169c94..09ded08cbb609 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -370,14 +370,23 @@ int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode,
+ {
+ 	int r;
+ 	dev_t dev;
++	unsigned int major, minor;
++	char dummy;
+ 	struct dm_dev_internal *dd;
+ 	struct dm_table *t = ti->table;
+ 
+ 	BUG_ON(!t);
+ 
+-	dev = dm_get_dev_t(path);
+-	if (!dev)
+-		return -ENODEV;
++	if (sscanf(path, "%u:%u%c", &major, &minor, &dummy) == 2) {
++		/* Extract the major/minor numbers */
++		dev = MKDEV(major, minor);
++		if (MAJOR(dev) != major || MINOR(dev) != minor)
++			return -EOVERFLOW;
++	} else {
++		dev = dm_get_dev_t(path);
++		if (!dev)
++			return -ENODEV;
++	}
+ 
+ 	dd = find_device(&t->devices, dev);
+ 	if (!dd) {
+diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
+index de7cb0369c308..002426e3cf76c 100644
+--- a/drivers/mmc/core/queue.c
++++ b/drivers/mmc/core/queue.c
+@@ -384,8 +384,10 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
+ 		     "merging was advertised but not possible");
+ 	blk_queue_max_segments(mq->queue, mmc_get_max_segments(host));
+ 
+-	if (mmc_card_mmc(card))
++	if (mmc_card_mmc(card) && card->ext_csd.data_sector_size) {
+ 		block_size = card->ext_csd.data_sector_size;
++		WARN_ON(block_size != 512 && block_size != 4096);
++	}
+ 
+ 	blk_queue_logical_block_size(mq->queue, block_size);
+ 	/*
+diff --git a/drivers/mmc/host/sdhci-brcmstb.c b/drivers/mmc/host/sdhci-brcmstb.c
+index bbf3496f44955..f9780c65ebe98 100644
+--- a/drivers/mmc/host/sdhci-brcmstb.c
++++ b/drivers/mmc/host/sdhci-brcmstb.c
+@@ -314,11 +314,7 @@ err_clk:
+ 
+ static void sdhci_brcmstb_shutdown(struct platform_device *pdev)
+ {
+-	int ret;
+-
+-	ret = sdhci_pltfm_unregister(pdev);
+-	if (ret)
+-		dev_err(&pdev->dev, "failed to shutdown\n");
++	sdhci_pltfm_suspend(&pdev->dev);
+ }
+ 
+ MODULE_DEVICE_TABLE(of, sdhci_brcm_of_match);
+diff --git a/drivers/mmc/host/sdhci-of-dwcmshc.c b/drivers/mmc/host/sdhci-of-dwcmshc.c
+index 4b673792b5a42..d90020ed36227 100644
+--- a/drivers/mmc/host/sdhci-of-dwcmshc.c
++++ b/drivers/mmc/host/sdhci-of-dwcmshc.c
+@@ -16,6 +16,8 @@
+ 
+ #include "sdhci-pltfm.h"
+ 
++#define SDHCI_DWCMSHC_ARG2_STUFF	GENMASK(31, 16)
++
+ /* DWCMSHC specific Mode Select value */
+ #define DWCMSHC_CTRL_HS400		0x7
+ 
+@@ -49,6 +51,29 @@ static void dwcmshc_adma_write_desc(struct sdhci_host *host, void **desc,
+ 	sdhci_adma_write_desc(host, desc, addr, len, cmd);
+ }
+ 
++static void dwcmshc_check_auto_cmd23(struct mmc_host *mmc,
++				     struct mmc_request *mrq)
++{
++	struct sdhci_host *host = mmc_priv(mmc);
++
++	/*
++	 * No matter V4 is enabled or not, ARGUMENT2 register is 32-bit
++	 * block count register which doesn't support stuff bits of
++	 * CMD23 argument on dwcmsch host controller.
++	 */
++	if (mrq->sbc && (mrq->sbc->arg & SDHCI_DWCMSHC_ARG2_STUFF))
++		host->flags &= ~SDHCI_AUTO_CMD23;
++	else
++		host->flags |= SDHCI_AUTO_CMD23;
++}
++
++static void dwcmshc_request(struct mmc_host *mmc, struct mmc_request *mrq)
++{
++	dwcmshc_check_auto_cmd23(mmc, mrq);
++
++	sdhci_request(mmc, mrq);
++}
++
+ static void dwcmshc_set_uhs_signaling(struct sdhci_host *host,
+ 				      unsigned int timing)
+ {
+@@ -133,6 +158,8 @@ static int dwcmshc_probe(struct platform_device *pdev)
+ 
+ 	sdhci_get_of_property(pdev);
+ 
++	host->mmc_host_ops.request = dwcmshc_request;
++
+ 	err = sdhci_add_host(host);
+ 	if (err)
+ 		goto err_clk;
+diff --git a/drivers/mmc/host/sdhci-xenon.c b/drivers/mmc/host/sdhci-xenon.c
+index 24c978de2a3f1..0e5234a5ca224 100644
+--- a/drivers/mmc/host/sdhci-xenon.c
++++ b/drivers/mmc/host/sdhci-xenon.c
+@@ -167,7 +167,12 @@ static void xenon_reset_exit(struct sdhci_host *host,
+ 	/* Disable tuning request and auto-retuning again */
+ 	xenon_retune_setup(host);
+ 
+-	xenon_set_acg(host, true);
++	/*
++	 * The ACG should be turned off at the early init time, in order
++	 * to solve a possible issues with the 1.8V regulator stabilization.
++	 * The feature is enabled in later stage.
++	 */
++	xenon_set_acg(host, false);
+ 
+ 	xenon_set_sdclk_off_idle(host, sdhc_id, false);
+ 
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index 81028ba35f35d..31a6210eb5d44 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -1613,7 +1613,7 @@ static int gpmi_ecc_read_page_raw(struct nand_chip *chip, uint8_t *buf,
+ 	/* Extract interleaved payload data and ECC bits */
+ 	for (step = 0; step < nfc_geo->ecc_chunk_count; step++) {
+ 		if (buf)
+-			nand_extract_bits(buf, step * eccsize, tmp_buf,
++			nand_extract_bits(buf, step * eccsize * 8, tmp_buf,
+ 					  src_bit_off, eccsize * 8);
+ 		src_bit_off += eccsize * 8;
+ 
+diff --git a/drivers/mtd/nand/raw/nandsim.c b/drivers/mtd/nand/raw/nandsim.c
+index a8048cb8d2205..9a9f1c24d8321 100644
+--- a/drivers/mtd/nand/raw/nandsim.c
++++ b/drivers/mtd/nand/raw/nandsim.c
+@@ -2211,6 +2211,9 @@ static int ns_attach_chip(struct nand_chip *chip)
+ {
+ 	unsigned int eccsteps, eccbytes;
+ 
++	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++	chip->ecc.algo = bch ? NAND_ECC_ALGO_BCH : NAND_ECC_ALGO_HAMMING;
++
+ 	if (!bch)
+ 		return 0;
+ 
+@@ -2234,8 +2237,6 @@ static int ns_attach_chip(struct nand_chip *chip)
+ 		return -EINVAL;
+ 	}
+ 
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-	chip->ecc.algo = NAND_ECC_ALGO_BCH;
+ 	chip->ecc.size = 512;
+ 	chip->ecc.strength = bch;
+ 	chip->ecc.bytes = eccbytes;
+@@ -2274,8 +2275,6 @@ static int __init ns_init_module(void)
+ 	nsmtd       = nand_to_mtd(chip);
+ 	nand_set_controller_data(chip, (void *)ns);
+ 
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-	chip->ecc.algo   = NAND_ECC_ALGO_HAMMING;
+ 	/* The NAND_SKIP_BBTSCAN option is necessary for 'overridesize' */
+ 	/* and 'badblocks' parameters to work */
+ 	chip->options   |= NAND_SKIP_BBTSCAN;
+diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
+index 81e39d7507d8f..09879aea9f7cc 100644
+--- a/drivers/net/can/dev.c
++++ b/drivers/net/can/dev.c
+@@ -592,11 +592,11 @@ static void can_restart(struct net_device *dev)
+ 
+ 	cf->can_id |= CAN_ERR_RESTARTED;
+ 
+-	netif_rx_ni(skb);
+-
+ 	stats->rx_packets++;
+ 	stats->rx_bytes += cf->can_dlc;
+ 
++	netif_rx_ni(skb);
++
+ restart:
+ 	netdev_dbg(dev, "restarted\n");
+ 	priv->can_stats.restarts++;
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+index d29d20525588c..d565922838186 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+@@ -512,11 +512,11 @@ static int pcan_usb_fd_decode_canmsg(struct pcan_usb_fd_if *usb_if,
+ 	else
+ 		memcpy(cfd->data, rm->d, cfd->len);
+ 
+-	peak_usb_netif_rx(skb, &usb_if->time_ref, le32_to_cpu(rm->ts_low));
+-
+ 	netdev->stats.rx_packets++;
+ 	netdev->stats.rx_bytes += cfd->len;
+ 
++	peak_usb_netif_rx(skb, &usb_if->time_ref, le32_to_cpu(rm->ts_low));
++
+ 	return 0;
+ }
+ 
+@@ -578,11 +578,11 @@ static int pcan_usb_fd_decode_status(struct pcan_usb_fd_if *usb_if,
+ 	if (!skb)
+ 		return -ENOMEM;
+ 
+-	peak_usb_netif_rx(skb, &usb_if->time_ref, le32_to_cpu(sm->ts_low));
+-
+ 	netdev->stats.rx_packets++;
+ 	netdev->stats.rx_bytes += cf->can_dlc;
+ 
++	peak_usb_netif_rx(skb, &usb_if->time_ref, le32_to_cpu(sm->ts_low));
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/can/vxcan.c b/drivers/net/can/vxcan.c
+index d6ba9426be4de..b1baa4ac1d537 100644
+--- a/drivers/net/can/vxcan.c
++++ b/drivers/net/can/vxcan.c
+@@ -39,6 +39,7 @@ static netdev_tx_t vxcan_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	struct net_device *peer;
+ 	struct canfd_frame *cfd = (struct canfd_frame *)skb->data;
+ 	struct net_device_stats *peerstats, *srcstats = &dev->stats;
++	u8 len;
+ 
+ 	if (can_dropped_invalid_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+@@ -61,12 +62,13 @@ static netdev_tx_t vxcan_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	skb->dev        = peer;
+ 	skb->ip_summed  = CHECKSUM_UNNECESSARY;
+ 
++	len = cfd->len;
+ 	if (netif_rx_ni(skb) == NET_RX_SUCCESS) {
+ 		srcstats->tx_packets++;
+-		srcstats->tx_bytes += cfd->len;
++		srcstats->tx_bytes += len;
+ 		peerstats = &peer->stats;
+ 		peerstats->rx_packets++;
+-		peerstats->rx_bytes += cfd->len;
++		peerstats->rx_bytes += len;
+ 	}
+ 
+ out_unlock:
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 288b5a5c3e0db..95c7fa171e35a 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1404,7 +1404,7 @@ int b53_vlan_prepare(struct dsa_switch *ds, int port,
+ 	    !(vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED))
+ 		return -EINVAL;
+ 
+-	if (vlan->vid_end > dev->num_vlans)
++	if (vlan->vid_end >= dev->num_vlans)
+ 		return -ERANGE;
+ 
+ 	b53_enable_vlan(dev, true, ds->vlan_filtering);
+diff --git a/drivers/net/dsa/mv88e6xxx/global1_vtu.c b/drivers/net/dsa/mv88e6xxx/global1_vtu.c
+index 1048509a849bc..0938caccc62ac 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1_vtu.c
++++ b/drivers/net/dsa/mv88e6xxx/global1_vtu.c
+@@ -351,6 +351,10 @@ int mv88e6250_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
+ 		if (err)
+ 			return err;
+ 
++		err = mv88e6185_g1_stu_data_read(chip, entry);
++		if (err)
++			return err;
++
+ 		/* VTU DBNum[3:0] are located in VTU Operation 3:0
+ 		 * VTU DBNum[5:4] are located in VTU Operation 9:8
+ 		 */
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index b1ae9eb8f2479..0404aafd5ce56 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -2503,8 +2503,10 @@ static int bcm_sysport_probe(struct platform_device *pdev)
+ 	priv = netdev_priv(dev);
+ 
+ 	priv->clk = devm_clk_get_optional(&pdev->dev, "sw_sysport");
+-	if (IS_ERR(priv->clk))
+-		return PTR_ERR(priv->clk);
++	if (IS_ERR(priv->clk)) {
++		ret = PTR_ERR(priv->clk);
++		goto err_free_netdev;
++	}
+ 
+ 	/* Allocate number of TX rings */
+ 	priv->tx_rings = devm_kcalloc(&pdev->dev, txq,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index fa9152ff5e2a0..f4ecc755eaff1 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -454,6 +454,9 @@ int rvu_mbox_handler_cgx_mac_addr_set(struct rvu *rvu,
+ 	int pf = rvu_get_pf(req->hdr.pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 
++	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
++		return -EPERM;
++
+ 	rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
+ 
+ 	cgx_lmac_addr_set(cgx_id, lmac_id, req->mac_addr);
+@@ -470,6 +473,9 @@ int rvu_mbox_handler_cgx_mac_addr_get(struct rvu *rvu,
+ 	int rc = 0, i;
+ 	u64 cfg;
+ 
++	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
++		return -EPERM;
++
+ 	rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
+ 
+ 	rsp->hdr.rc = rc;
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index a53bd36b11c60..d4768dcb6c699 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -60,14 +60,27 @@ int ocelot_mact_learn(struct ocelot *ocelot, int port,
+ 		      const unsigned char mac[ETH_ALEN],
+ 		      unsigned int vid, enum macaccess_entry_type type)
+ {
++	u32 cmd = ANA_TABLES_MACACCESS_VALID |
++		ANA_TABLES_MACACCESS_DEST_IDX(port) |
++		ANA_TABLES_MACACCESS_ENTRYTYPE(type) |
++		ANA_TABLES_MACACCESS_MAC_TABLE_CMD(MACACCESS_CMD_LEARN);
++	unsigned int mc_ports;
++
++	/* Set MAC_CPU_COPY if the CPU port is used by a multicast entry */
++	if (type == ENTRYTYPE_MACv4)
++		mc_ports = (mac[1] << 8) | mac[2];
++	else if (type == ENTRYTYPE_MACv6)
++		mc_ports = (mac[0] << 8) | mac[1];
++	else
++		mc_ports = 0;
++
++	if (mc_ports & BIT(ocelot->num_phys_ports))
++		cmd |= ANA_TABLES_MACACCESS_MAC_CPU_COPY;
++
+ 	ocelot_mact_select(ocelot, mac, vid);
+ 
+ 	/* Issue a write command */
+-	ocelot_write(ocelot, ANA_TABLES_MACACCESS_VALID |
+-			     ANA_TABLES_MACACCESS_DEST_IDX(port) |
+-			     ANA_TABLES_MACACCESS_ENTRYTYPE(type) |
+-			     ANA_TABLES_MACACCESS_MAC_TABLE_CMD(MACACCESS_CMD_LEARN),
+-			     ANA_TABLES_MACACCESS);
++	ocelot_write(ocelot, cmd, ANA_TABLES_MACACCESS);
+ 
+ 	return ocelot_mact_wait_for_completion(ocelot);
+ }
+diff --git a/drivers/net/ethernet/mscc/ocelot_net.c b/drivers/net/ethernet/mscc/ocelot_net.c
+index b34da11acf65b..d60cd4326f4cd 100644
+--- a/drivers/net/ethernet/mscc/ocelot_net.c
++++ b/drivers/net/ethernet/mscc/ocelot_net.c
+@@ -952,10 +952,8 @@ static int ocelot_netdevice_event(struct notifier_block *unused,
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+ 	int ret = 0;
+ 
+-	if (!ocelot_netdevice_dev_check(dev))
+-		return 0;
+-
+ 	if (event == NETDEV_PRECHANGEUPPER &&
++	    ocelot_netdevice_dev_check(dev) &&
+ 	    netif_is_lag_master(info->upper_dev)) {
+ 		struct netdev_lag_upper_info *lag_upper_info = info->upper_info;
+ 		struct netlink_ext_ack *extack;
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index c633046329352..d5d236d687e9e 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -2606,10 +2606,10 @@ static int sh_eth_close(struct net_device *ndev)
+ 	/* Free all the skbuffs in the Rx queue and the DMA buffer. */
+ 	sh_eth_ring_free(ndev);
+ 
+-	pm_runtime_put_sync(&mdp->pdev->dev);
+-
+ 	mdp->is_opened = 0;
+ 
++	pm_runtime_put(&mdp->pdev->dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index a89d74c5cd1a7..77f615568194d 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -542,50 +542,71 @@ static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req)
+ 	return true;
+ }
+ 
+-static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
++static void nvme_free_prps(struct nvme_dev *dev, struct request *req)
+ {
+-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+ 	const int last_prp = NVME_CTRL_PAGE_SIZE / sizeof(__le64) - 1;
+-	dma_addr_t dma_addr = iod->first_dma, next_dma_addr;
++	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
++	dma_addr_t dma_addr = iod->first_dma;
+ 	int i;
+ 
+-	if (iod->dma_len) {
+-		dma_unmap_page(dev->dev, dma_addr, iod->dma_len,
+-			       rq_dma_dir(req));
+-		return;
++	for (i = 0; i < iod->npages; i++) {
++		__le64 *prp_list = nvme_pci_iod_list(req)[i];
++		dma_addr_t next_dma_addr = le64_to_cpu(prp_list[last_prp]);
++
++		dma_pool_free(dev->prp_page_pool, prp_list, dma_addr);
++		dma_addr = next_dma_addr;
+ 	}
+ 
+-	WARN_ON_ONCE(!iod->nents);
++}
+ 
+-	if (is_pci_p2pdma_page(sg_page(iod->sg)))
+-		pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents,
+-				    rq_dma_dir(req));
+-	else
+-		dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req));
++static void nvme_free_sgls(struct nvme_dev *dev, struct request *req)
++{
++	const int last_sg = SGES_PER_PAGE - 1;
++	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
++	dma_addr_t dma_addr = iod->first_dma;
++	int i;
+ 
++	for (i = 0; i < iod->npages; i++) {
++		struct nvme_sgl_desc *sg_list = nvme_pci_iod_list(req)[i];
++		dma_addr_t next_dma_addr = le64_to_cpu((sg_list[last_sg]).addr);
+ 
+-	if (iod->npages == 0)
+-		dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0],
+-			dma_addr);
++		dma_pool_free(dev->prp_page_pool, sg_list, dma_addr);
++		dma_addr = next_dma_addr;
++	}
+ 
+-	for (i = 0; i < iod->npages; i++) {
+-		void *addr = nvme_pci_iod_list(req)[i];
++}
+ 
+-		if (iod->use_sgl) {
+-			struct nvme_sgl_desc *sg_list = addr;
++static void nvme_unmap_sg(struct nvme_dev *dev, struct request *req)
++{
++	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+ 
+-			next_dma_addr =
+-			    le64_to_cpu((sg_list[SGES_PER_PAGE - 1]).addr);
+-		} else {
+-			__le64 *prp_list = addr;
++	if (is_pci_p2pdma_page(sg_page(iod->sg)))
++		pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents,
++				    rq_dma_dir(req));
++	else
++		dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req));
++}
+ 
+-			next_dma_addr = le64_to_cpu(prp_list[last_prp]);
+-		}
++static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
++{
++	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+ 
+-		dma_pool_free(dev->prp_page_pool, addr, dma_addr);
+-		dma_addr = next_dma_addr;
++	if (iod->dma_len) {
++		dma_unmap_page(dev->dev, iod->first_dma, iod->dma_len,
++			       rq_dma_dir(req));
++		return;
+ 	}
+ 
++	WARN_ON_ONCE(!iod->nents);
++
++	nvme_unmap_sg(dev, req);
++	if (iod->npages == 0)
++		dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0],
++			      iod->first_dma);
++	else if (iod->use_sgl)
++		nvme_free_sgls(dev, req);
++	else
++		nvme_free_prps(dev, req);
+ 	mempool_free(iod->sg, dev->iod_mempool);
+ }
+ 
+@@ -661,7 +682,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
+ 			__le64 *old_prp_list = prp_list;
+ 			prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma);
+ 			if (!prp_list)
+-				return BLK_STS_RESOURCE;
++				goto free_prps;
+ 			list[iod->npages++] = prp_list;
+ 			prp_list[0] = old_prp_list[i - 1];
+ 			old_prp_list[i - 1] = cpu_to_le64(prp_dma);
+@@ -681,14 +702,14 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
+ 		dma_addr = sg_dma_address(sg);
+ 		dma_len = sg_dma_len(sg);
+ 	}
+-
+ done:
+ 	cmnd->dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sg));
+ 	cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma);
+-
+ 	return BLK_STS_OK;
+-
+- bad_sgl:
++free_prps:
++	nvme_free_prps(dev, req);
++	return BLK_STS_RESOURCE;
++bad_sgl:
+ 	WARN(DO_ONCE(nvme_print_sgl, iod->sg, iod->nents),
+ 			"Invalid SGL for payload:%d nents:%d\n",
+ 			blk_rq_payload_bytes(req), iod->nents);
+@@ -760,7 +781,7 @@ static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev,
+ 
+ 			sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &sgl_dma);
+ 			if (!sg_list)
+-				return BLK_STS_RESOURCE;
++				goto free_sgls;
+ 
+ 			i = 0;
+ 			nvme_pci_iod_list(req)[iod->npages++] = sg_list;
+@@ -773,6 +794,9 @@ static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev,
+ 	} while (--entries > 0);
+ 
+ 	return BLK_STS_OK;
++free_sgls:
++	nvme_free_sgls(dev, req);
++	return BLK_STS_RESOURCE;
+ }
+ 
+ static blk_status_t nvme_setup_prp_simple(struct nvme_dev *dev,
+@@ -841,7 +865,7 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
+ 	sg_init_table(iod->sg, blk_rq_nr_phys_segments(req));
+ 	iod->nents = blk_rq_map_sg(req->q, req, iod->sg);
+ 	if (!iod->nents)
+-		goto out;
++		goto out_free_sg;
+ 
+ 	if (is_pci_p2pdma_page(sg_page(iod->sg)))
+ 		nr_mapped = pci_p2pdma_map_sg_attrs(dev->dev, iod->sg,
+@@ -850,16 +874,21 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
+ 		nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents,
+ 					     rq_dma_dir(req), DMA_ATTR_NO_WARN);
+ 	if (!nr_mapped)
+-		goto out;
++		goto out_free_sg;
+ 
+ 	iod->use_sgl = nvme_pci_use_sgls(dev, req);
+ 	if (iod->use_sgl)
+ 		ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw, nr_mapped);
+ 	else
+ 		ret = nvme_pci_setup_prps(dev, req, &cmnd->rw);
+-out:
+ 	if (ret != BLK_STS_OK)
+-		nvme_unmap_data(dev, req);
++		goto out_unmap_sg;
++	return BLK_STS_OK;
++
++out_unmap_sg:
++	nvme_unmap_sg(dev, req);
++out_free_sg:
++	mempool_free(iod->sg, dev->iod_mempool);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
+index 34803a6c76643..5c1a109842a76 100644
+--- a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
++++ b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
+@@ -347,7 +347,7 @@ FUNC_GROUP_DECL(RMII4, F24, E23, E24, E25, C25, C24, B26, B25, B24);
+ 
+ #define D22 40
+ SIG_EXPR_LIST_DECL_SESG(D22, SD1CLK, SD1, SIG_DESC_SET(SCU414, 8));
+-SIG_EXPR_LIST_DECL_SEMG(D22, PWM8, PWM8G0, PWM8, SIG_DESC_SET(SCU414, 8));
++SIG_EXPR_LIST_DECL_SEMG(D22, PWM8, PWM8G0, PWM8, SIG_DESC_SET(SCU4B4, 8));
+ PIN_DECL_2(D22, GPIOF0, SD1CLK, PWM8);
+ GROUP_DECL(PWM8G0, D22);
+ 
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+index 7e950f5d62d0f..7815426e7aeaa 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+@@ -926,6 +926,10 @@ int mtk_pinconf_adv_pull_set(struct mtk_pinctrl *hw,
+ 			err = hw->soc->bias_set(hw, desc, pullup);
+ 			if (err)
+ 				return err;
++		} else if (hw->soc->bias_set_combo) {
++			err = hw->soc->bias_set_combo(hw, desc, pullup, arg);
++			if (err)
++				return err;
+ 		} else {
+ 			return -ENOTSUPP;
+ 		}
+diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c
+index 621909b01debd..033d142f0c272 100644
+--- a/drivers/pinctrl/pinctrl-ingenic.c
++++ b/drivers/pinctrl/pinctrl-ingenic.c
+@@ -2052,7 +2052,7 @@ static inline bool ingenic_gpio_get_value(struct ingenic_gpio_chip *jzgc,
+ static void ingenic_gpio_set_value(struct ingenic_gpio_chip *jzgc,
+ 				   u8 offset, int value)
+ {
+-	if (jzgc->jzpc->info->version >= ID_JZ4760)
++	if (jzgc->jzpc->info->version >= ID_JZ4770)
+ 		ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_PAT0, offset, !!value);
+ 	else
+ 		ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_DATA, offset, !!value);
+@@ -2082,7 +2082,7 @@ static void irq_set_type(struct ingenic_gpio_chip *jzgc,
+ 		break;
+ 	}
+ 
+-	if (jzgc->jzpc->info->version >= ID_JZ4760) {
++	if (jzgc->jzpc->info->version >= ID_JZ4770) {
+ 		reg1 = JZ4760_GPIO_PAT1;
+ 		reg2 = JZ4760_GPIO_PAT0;
+ 	} else {
+@@ -2122,7 +2122,7 @@ static void ingenic_gpio_irq_enable(struct irq_data *irqd)
+ 	struct ingenic_gpio_chip *jzgc = gpiochip_get_data(gc);
+ 	int irq = irqd->hwirq;
+ 
+-	if (jzgc->jzpc->info->version >= ID_JZ4760)
++	if (jzgc->jzpc->info->version >= ID_JZ4770)
+ 		ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_INT, irq, true);
+ 	else
+ 		ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_SELECT, irq, true);
+@@ -2138,7 +2138,7 @@ static void ingenic_gpio_irq_disable(struct irq_data *irqd)
+ 
+ 	ingenic_gpio_irq_mask(irqd);
+ 
+-	if (jzgc->jzpc->info->version >= ID_JZ4760)
++	if (jzgc->jzpc->info->version >= ID_JZ4770)
+ 		ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_INT, irq, false);
+ 	else
+ 		ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_SELECT, irq, false);
+@@ -2163,7 +2163,7 @@ static void ingenic_gpio_irq_ack(struct irq_data *irqd)
+ 			irq_set_type(jzgc, irq, IRQ_TYPE_LEVEL_HIGH);
+ 	}
+ 
+-	if (jzgc->jzpc->info->version >= ID_JZ4760)
++	if (jzgc->jzpc->info->version >= ID_JZ4770)
+ 		ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_FLAG, irq, false);
+ 	else
+ 		ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_DATA, irq, true);
+@@ -2220,7 +2220,7 @@ static void ingenic_gpio_irq_handler(struct irq_desc *desc)
+ 
+ 	chained_irq_enter(irq_chip, desc);
+ 
+-	if (jzgc->jzpc->info->version >= ID_JZ4760)
++	if (jzgc->jzpc->info->version >= ID_JZ4770)
+ 		flag = ingenic_gpio_read_reg(jzgc, JZ4760_GPIO_FLAG);
+ 	else
+ 		flag = ingenic_gpio_read_reg(jzgc, JZ4740_GPIO_FLAG);
+@@ -2302,7 +2302,7 @@ static int ingenic_gpio_get_direction(struct gpio_chip *gc, unsigned int offset)
+ 	struct ingenic_pinctrl *jzpc = jzgc->jzpc;
+ 	unsigned int pin = gc->base + offset;
+ 
+-	if (jzpc->info->version >= ID_JZ4760) {
++	if (jzpc->info->version >= ID_JZ4770) {
+ 		if (ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_INT) ||
+ 		    ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PAT1))
+ 			return GPIO_LINE_DIRECTION_IN;
+@@ -2360,7 +2360,7 @@ static int ingenic_pinmux_set_pin_fn(struct ingenic_pinctrl *jzpc,
+ 		ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, func & 0x2);
+ 		ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_PAT0, func & 0x1);
+ 		ingenic_shadow_config_pin_load(jzpc, pin);
+-	} else if (jzpc->info->version >= ID_JZ4760) {
++	} else if (jzpc->info->version >= ID_JZ4770) {
+ 		ingenic_config_pin(jzpc, pin, JZ4760_GPIO_INT, false);
+ 		ingenic_config_pin(jzpc, pin, GPIO_MSK, false);
+ 		ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, func & 0x2);
+@@ -2368,7 +2368,7 @@ static int ingenic_pinmux_set_pin_fn(struct ingenic_pinctrl *jzpc,
+ 	} else {
+ 		ingenic_config_pin(jzpc, pin, JZ4740_GPIO_FUNC, true);
+ 		ingenic_config_pin(jzpc, pin, JZ4740_GPIO_TRIG, func & 0x2);
+-		ingenic_config_pin(jzpc, pin, JZ4740_GPIO_SELECT, func > 0);
++		ingenic_config_pin(jzpc, pin, JZ4740_GPIO_SELECT, func & 0x1);
+ 	}
+ 
+ 	return 0;
+@@ -2418,7 +2418,7 @@ static int ingenic_pinmux_gpio_set_direction(struct pinctrl_dev *pctldev,
+ 		ingenic_shadow_config_pin(jzpc, pin, GPIO_MSK, true);
+ 		ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, input);
+ 		ingenic_shadow_config_pin_load(jzpc, pin);
+-	} else if (jzpc->info->version >= ID_JZ4760) {
++	} else if (jzpc->info->version >= ID_JZ4770) {
+ 		ingenic_config_pin(jzpc, pin, JZ4760_GPIO_INT, false);
+ 		ingenic_config_pin(jzpc, pin, GPIO_MSK, true);
+ 		ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, input);
+@@ -2448,7 +2448,7 @@ static int ingenic_pinconf_get(struct pinctrl_dev *pctldev,
+ 	unsigned int offt = pin / PINS_PER_GPIO_CHIP;
+ 	bool pull;
+ 
+-	if (jzpc->info->version >= ID_JZ4760)
++	if (jzpc->info->version >= ID_JZ4770)
+ 		pull = !ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PEN);
+ 	else
+ 		pull = !ingenic_get_pin_config(jzpc, pin, JZ4740_GPIO_PULL_DIS);
+@@ -2498,7 +2498,7 @@ static void ingenic_set_bias(struct ingenic_pinctrl *jzpc,
+ 					REG_SET(X1830_GPIO_PEH), bias << idxh);
+ 		}
+ 
+-	} else if (jzpc->info->version >= ID_JZ4760) {
++	} else if (jzpc->info->version >= ID_JZ4770) {
+ 		ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PEN, !bias);
+ 	} else {
+ 		ingenic_config_pin(jzpc, pin, JZ4740_GPIO_PULL_DIS, !bias);
+@@ -2508,7 +2508,7 @@ static void ingenic_set_bias(struct ingenic_pinctrl *jzpc,
+ static void ingenic_set_output_level(struct ingenic_pinctrl *jzpc,
+ 				     unsigned int pin, bool high)
+ {
+-	if (jzpc->info->version >= ID_JZ4760)
++	if (jzpc->info->version >= ID_JZ4770)
+ 		ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT0, high);
+ 	else
+ 		ingenic_config_pin(jzpc, pin, JZ4740_GPIO_DATA, high);
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index 77a25bdf0da70..37526aa1fb2c4 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -51,6 +51,7 @@
+  * @dual_edge_irqs: Bitmap of irqs that need sw emulated dual edge
+  *                  detection.
+  * @skip_wake_irqs: Skip IRQs that are handled by wakeup interrupt controller
++ * @disabled_for_mux: These IRQs were disabled because we muxed away.
+  * @soc:            Reference to soc_data of platform specific data.
+  * @regs:           Base addresses for the TLMM tiles.
+  * @phys_base:      Physical base address
+@@ -72,6 +73,7 @@ struct msm_pinctrl {
+ 	DECLARE_BITMAP(dual_edge_irqs, MAX_NR_GPIO);
+ 	DECLARE_BITMAP(enabled_irqs, MAX_NR_GPIO);
+ 	DECLARE_BITMAP(skip_wake_irqs, MAX_NR_GPIO);
++	DECLARE_BITMAP(disabled_for_mux, MAX_NR_GPIO);
+ 
+ 	const struct msm_pinctrl_soc_data *soc;
+ 	void __iomem *regs[MAX_NR_TILES];
+@@ -96,6 +98,14 @@ MSM_ACCESSOR(intr_cfg)
+ MSM_ACCESSOR(intr_status)
+ MSM_ACCESSOR(intr_target)
+ 
++static void msm_ack_intr_status(struct msm_pinctrl *pctrl,
++				const struct msm_pingroup *g)
++{
++	u32 val = g->intr_ack_high ? BIT(g->intr_status_bit) : 0;
++
++	msm_writel_intr_status(val, pctrl, g);
++}
++
+ static int msm_get_groups_count(struct pinctrl_dev *pctldev)
+ {
+ 	struct msm_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
+@@ -171,6 +181,10 @@ static int msm_pinmux_set_mux(struct pinctrl_dev *pctldev,
+ 			      unsigned group)
+ {
+ 	struct msm_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
++	struct gpio_chip *gc = &pctrl->chip;
++	unsigned int irq = irq_find_mapping(gc->irq.domain, group);
++	struct irq_data *d = irq_get_irq_data(irq);
++	unsigned int gpio_func = pctrl->soc->gpio_func;
+ 	const struct msm_pingroup *g;
+ 	unsigned long flags;
+ 	u32 val, mask;
+@@ -187,6 +201,20 @@ static int msm_pinmux_set_mux(struct pinctrl_dev *pctldev,
+ 	if (WARN_ON(i == g->nfuncs))
+ 		return -EINVAL;
+ 
++	/*
++	 * If an GPIO interrupt is setup on this pin then we need special
++	 * handling.  Specifically interrupt detection logic will still see
++	 * the pin twiddle even when we're muxed away.
++	 *
++	 * When we see a pin with an interrupt setup on it then we'll disable
++	 * (mask) interrupts on it when we mux away until we mux back.  Note
++	 * that disable_irq() refcounts and interrupts are disabled as long as
++	 * at least one disable_irq() has been called.
++	 */
++	if (d && i != gpio_func &&
++	    !test_and_set_bit(d->hwirq, pctrl->disabled_for_mux))
++		disable_irq(irq);
++
+ 	raw_spin_lock_irqsave(&pctrl->lock, flags);
+ 
+ 	val = msm_readl_ctl(pctrl, g);
+@@ -196,6 +224,20 @@ static int msm_pinmux_set_mux(struct pinctrl_dev *pctldev,
+ 
+ 	raw_spin_unlock_irqrestore(&pctrl->lock, flags);
+ 
++	if (d && i == gpio_func &&
++	    test_and_clear_bit(d->hwirq, pctrl->disabled_for_mux)) {
++		/*
++		 * Clear interrupts detected while not GPIO since we only
++		 * masked things.
++		 */
++		if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs))
++			irq_chip_set_parent_state(d, IRQCHIP_STATE_PENDING, false);
++		else
++			msm_ack_intr_status(pctrl, g);
++
++		enable_irq(irq);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -210,8 +252,7 @@ static int msm_pinmux_request_gpio(struct pinctrl_dev *pctldev,
+ 	if (!g->nfuncs)
+ 		return 0;
+ 
+-	/* For now assume function 0 is GPIO because it always is */
+-	return msm_pinmux_set_mux(pctldev, g->funcs[0], offset);
++	return msm_pinmux_set_mux(pctldev, g->funcs[pctrl->soc->gpio_func], offset);
+ }
+ 
+ static const struct pinmux_ops msm_pinmux_ops = {
+@@ -774,7 +815,7 @@ static void msm_gpio_irq_mask(struct irq_data *d)
+ 	raw_spin_unlock_irqrestore(&pctrl->lock, flags);
+ }
+ 
+-static void msm_gpio_irq_clear_unmask(struct irq_data *d, bool status_clear)
++static void msm_gpio_irq_unmask(struct irq_data *d)
+ {
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ 	struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
+@@ -792,17 +833,6 @@ static void msm_gpio_irq_clear_unmask(struct irq_data *d, bool status_clear)
+ 
+ 	raw_spin_lock_irqsave(&pctrl->lock, flags);
+ 
+-	if (status_clear) {
+-		/*
+-		 * clear the interrupt status bit before unmask to avoid
+-		 * any erroneous interrupts that would have got latched
+-		 * when the interrupt is not in use.
+-		 */
+-		val = msm_readl_intr_status(pctrl, g);
+-		val &= ~BIT(g->intr_status_bit);
+-		msm_writel_intr_status(val, pctrl, g);
+-	}
+-
+ 	val = msm_readl_intr_cfg(pctrl, g);
+ 	val |= BIT(g->intr_raw_status_bit);
+ 	val |= BIT(g->intr_enable_bit);
+@@ -822,7 +852,7 @@ static void msm_gpio_irq_enable(struct irq_data *d)
+ 		irq_chip_enable_parent(d);
+ 
+ 	if (!test_bit(d->hwirq, pctrl->skip_wake_irqs))
+-		msm_gpio_irq_clear_unmask(d, true);
++		msm_gpio_irq_unmask(d);
+ }
+ 
+ static void msm_gpio_irq_disable(struct irq_data *d)
+@@ -837,11 +867,6 @@ static void msm_gpio_irq_disable(struct irq_data *d)
+ 		msm_gpio_irq_mask(d);
+ }
+ 
+-static void msm_gpio_irq_unmask(struct irq_data *d)
+-{
+-	msm_gpio_irq_clear_unmask(d, false);
+-}
+-
+ /**
+  * msm_gpio_update_dual_edge_parent() - Prime next edge for IRQs handled by parent.
+  * @d: The irq dta.
+@@ -894,7 +919,6 @@ static void msm_gpio_irq_ack(struct irq_data *d)
+ 	struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
+ 	const struct msm_pingroup *g;
+ 	unsigned long flags;
+-	u32 val;
+ 
+ 	if (test_bit(d->hwirq, pctrl->skip_wake_irqs)) {
+ 		if (test_bit(d->hwirq, pctrl->dual_edge_irqs))
+@@ -906,12 +930,7 @@ static void msm_gpio_irq_ack(struct irq_data *d)
+ 
+ 	raw_spin_lock_irqsave(&pctrl->lock, flags);
+ 
+-	val = msm_readl_intr_status(pctrl, g);
+-	if (g->intr_ack_high)
+-		val |= BIT(g->intr_status_bit);
+-	else
+-		val &= ~BIT(g->intr_status_bit);
+-	msm_writel_intr_status(val, pctrl, g);
++	msm_ack_intr_status(pctrl, g);
+ 
+ 	if (test_bit(d->hwirq, pctrl->dual_edge_irqs))
+ 		msm_gpio_update_dual_edge_pos(pctrl, g, d);
+@@ -936,6 +955,7 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ 	struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
+ 	const struct msm_pingroup *g;
+ 	unsigned long flags;
++	bool was_enabled;
+ 	u32 val;
+ 
+ 	if (msm_gpio_needs_dual_edge_parent_workaround(d, type)) {
+@@ -997,6 +1017,7 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ 	 * could cause the INTR_STATUS to be set for EDGE interrupts.
+ 	 */
+ 	val = msm_readl_intr_cfg(pctrl, g);
++	was_enabled = val & BIT(g->intr_raw_status_bit);
+ 	val |= BIT(g->intr_raw_status_bit);
+ 	if (g->intr_detection_width == 2) {
+ 		val &= ~(3 << g->intr_detection_bit);
+@@ -1046,6 +1067,14 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ 	}
+ 	msm_writel_intr_cfg(val, pctrl, g);
+ 
++	/*
++	 * The first time we set RAW_STATUS_EN it could trigger an interrupt.
++	 * Clear the interrupt.  This is safe because we have
++	 * IRQCHIP_SET_TYPE_MASKED.
++	 */
++	if (!was_enabled)
++		msm_ack_intr_status(pctrl, g);
++
+ 	if (test_bit(d->hwirq, pctrl->dual_edge_irqs))
+ 		msm_gpio_update_dual_edge_pos(pctrl, g, d);
+ 
+@@ -1099,16 +1128,11 @@ static int msm_gpio_irq_reqres(struct irq_data *d)
+ 	}
+ 
+ 	/*
+-	 * Clear the interrupt that may be pending before we enable
+-	 * the line.
+-	 * This is especially a problem with the GPIOs routed to the
+-	 * PDC. These GPIOs are direct-connect interrupts to the GIC.
+-	 * Disabling the interrupt line at the PDC does not prevent
+-	 * the interrupt from being latched at the GIC. The state at
+-	 * GIC needs to be cleared before enabling.
++	 * The disable / clear-enable workaround we do in msm_pinmux_set_mux()
++	 * only works if disable is not lazy since we only clear any bogus
++	 * interrupt in hardware. Explicitly mark the interrupt as UNLAZY.
+ 	 */
+-	if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs))
+-		irq_chip_set_parent_state(d, IRQCHIP_STATE_PENDING, 0);
++	irq_set_status_flags(d->irq, IRQ_DISABLE_UNLAZY);
+ 
+ 	return 0;
+ out:
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.h b/drivers/pinctrl/qcom/pinctrl-msm.h
+index 333f99243c43a..e31a5167c91ec 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.h
++++ b/drivers/pinctrl/qcom/pinctrl-msm.h
+@@ -118,6 +118,7 @@ struct msm_gpio_wakeirq_map {
+  * @wakeirq_dual_edge_errata: If true then GPIOs using the wakeirq_map need
+  *                            to be aware that their parent can't handle dual
+  *                            edge interrupts.
++ * @gpio_func: Which function number is GPIO (usually 0).
+  */
+ struct msm_pinctrl_soc_data {
+ 	const struct pinctrl_pin_desc *pins;
+@@ -134,6 +135,7 @@ struct msm_pinctrl_soc_data {
+ 	const struct msm_gpio_wakeirq_map *wakeirq_map;
+ 	unsigned int nwakeirq_map;
+ 	bool wakeirq_dual_edge_errata;
++	unsigned int gpio_func;
+ };
+ 
+ extern const struct dev_pm_ops msm_pinctrl_dev_pm_ops;
+diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
+index ecd477964d117..18bf8aeb5f870 100644
+--- a/drivers/platform/x86/hp-wmi.c
++++ b/drivers/platform/x86/hp-wmi.c
+@@ -247,7 +247,8 @@ static int hp_wmi_perform_query(int query, enum hp_wmi_command command,
+ 	ret = bios_return->return_code;
+ 
+ 	if (ret) {
+-		if (ret != HPWMI_RET_UNKNOWN_CMDTYPE)
++		if (ret != HPWMI_RET_UNKNOWN_COMMAND &&
++		    ret != HPWMI_RET_UNKNOWN_CMDTYPE)
+ 			pr_warn("query 0x%x returned error 0x%x\n", query, ret);
+ 		goto out_free;
+ 	}
+diff --git a/drivers/platform/x86/i2c-multi-instantiate.c b/drivers/platform/x86/i2c-multi-instantiate.c
+index 6acc8457866e1..d3b5afbe4833e 100644
+--- a/drivers/platform/x86/i2c-multi-instantiate.c
++++ b/drivers/platform/x86/i2c-multi-instantiate.c
+@@ -166,13 +166,29 @@ static const struct i2c_inst_data bsg2150_data[]  = {
+ 	{}
+ };
+ 
+-static const struct i2c_inst_data int3515_data[]  = {
+-	{ "tps6598x", IRQ_RESOURCE_APIC, 0 },
+-	{ "tps6598x", IRQ_RESOURCE_APIC, 1 },
+-	{ "tps6598x", IRQ_RESOURCE_APIC, 2 },
+-	{ "tps6598x", IRQ_RESOURCE_APIC, 3 },
+-	{}
+-};
++/*
++ * Device with _HID INT3515 (TI PD controllers) has some unresolved interrupt
++ * issues. The most common problem seen is interrupt flood.
++ *
++ * There are at least two known causes. Firstly, on some boards, the
++ * I2CSerialBus resource index does not match the Interrupt resource, i.e. they
++ * are not one-to-one mapped like in the array below. Secondly, on some boards
++ * the IRQ line from the PD controller is not actually connected at all. But the
++ * interrupt flood is also seen on some boards where those are not a problem, so
++ * there are some other problems as well.
++ *
++ * Because of the issues with the interrupt, the device is disabled for now. If
++ * you wish to debug the issues, uncomment the below, and add an entry for the
++ * INT3515 device to the i2c_multi_instance_ids table.
++ *
++ * static const struct i2c_inst_data int3515_data[]  = {
++ *	{ "tps6598x", IRQ_RESOURCE_APIC, 0 },
++ *	{ "tps6598x", IRQ_RESOURCE_APIC, 1 },
++ *	{ "tps6598x", IRQ_RESOURCE_APIC, 2 },
++ *	{ "tps6598x", IRQ_RESOURCE_APIC, 3 },
++ *	{ }
++ * };
++ */
+ 
+ /*
+  * Note new device-ids must also be added to i2c_multi_instantiate_ids in
+@@ -181,7 +197,6 @@ static const struct i2c_inst_data int3515_data[]  = {
+ static const struct acpi_device_id i2c_multi_inst_acpi_ids[] = {
+ 	{ "BSG1160", (unsigned long)bsg1160_data },
+ 	{ "BSG2150", (unsigned long)bsg2150_data },
+-	{ "INT3515", (unsigned long)int3515_data },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(acpi, i2c_multi_inst_acpi_ids);
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index 7598cd46cf606..5b81bafa5c165 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -92,6 +92,7 @@ struct ideapad_private {
+ 	struct dentry *debug;
+ 	unsigned long cfg;
+ 	bool has_hw_rfkill_switch;
++	bool has_touchpad_switch;
+ 	const char *fnesc_guid;
+ };
+ 
+@@ -535,7 +536,9 @@ static umode_t ideapad_is_visible(struct kobject *kobj,
+ 	} else if (attr == &dev_attr_fn_lock.attr) {
+ 		supported = acpi_has_method(priv->adev->handle, "HALS") &&
+ 			acpi_has_method(priv->adev->handle, "SALS");
+-	} else
++	} else if (attr == &dev_attr_touchpad.attr)
++		supported = priv->has_touchpad_switch;
++	else
+ 		supported = true;
+ 
+ 	return supported ? attr->mode : 0;
+@@ -867,6 +870,9 @@ static void ideapad_sync_touchpad_state(struct ideapad_private *priv)
+ {
+ 	unsigned long value;
+ 
++	if (!priv->has_touchpad_switch)
++		return;
++
+ 	/* Without reading from EC touchpad LED doesn't switch state */
+ 	if (!read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &value)) {
+ 		/* Some IdeaPads don't really turn off touchpad - they only
+@@ -989,6 +995,9 @@ static int ideapad_acpi_add(struct platform_device *pdev)
+ 	priv->platform_device = pdev;
+ 	priv->has_hw_rfkill_switch = dmi_check_system(hw_rfkill_list);
+ 
++	/* Most ideapads with ELAN0634 touchpad don't use EC touchpad switch */
++	priv->has_touchpad_switch = !acpi_dev_present("ELAN0634", NULL, -1);
++
+ 	ret = ideapad_sysfs_init(priv);
+ 	if (ret)
+ 		return ret;
+@@ -1006,6 +1015,10 @@ static int ideapad_acpi_add(struct platform_device *pdev)
+ 	if (!priv->has_hw_rfkill_switch)
+ 		write_ec_cmd(priv->adev->handle, VPCCMD_W_RF, 1);
+ 
++	/* The same for Touchpad */
++	if (!priv->has_touchpad_switch)
++		write_ec_cmd(priv->adev->handle, VPCCMD_W_TOUCHPAD, 1);
++
+ 	for (i = 0; i < IDEAPAD_RFKILL_DEV_NUM; i++)
+ 		if (test_bit(ideapad_rfk_data[i].cfgbit, &priv->cfg))
+ 			ideapad_register_rfkill(priv, i);
+diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
+index 3b49a1f4061bc..65fb3a3031470 100644
+--- a/drivers/platform/x86/intel-vbtn.c
++++ b/drivers/platform/x86/intel-vbtn.c
+@@ -204,12 +204,6 @@ static const struct dmi_system_id dmi_switches_allow_list[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Venue 11 Pro 7130"),
+ 		},
+ 	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Stream x360 Convertible PC 11"),
+-		},
+-	},
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 9ebeb031329d9..cc45cdac13844 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -8232,11 +8232,9 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
+ 			goto out;
+ 		}
+ 
++		/* always store 64 bits regardless of addressing */
+ 		sense_ptr = (void *)cmd->frame + ioc->sense_off;
+-		if (instance->consistent_mask_64bit)
+-			put_unaligned_le64(sense_handle, sense_ptr);
+-		else
+-			put_unaligned_le32(sense_handle, sense_ptr);
++		put_unaligned_le64(sense_handle, sense_ptr);
+ 	}
+ 
+ 	/*
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index f5fc7f518f8af..47ad64b066236 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -2245,7 +2245,7 @@ qedi_show_boot_tgt_info(struct qedi_ctx *qedi, int type,
+ 			     chap_name);
+ 		break;
+ 	case ISCSI_BOOT_TGT_CHAP_SECRET:
+-		rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN,
++		rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_PWD_MAX_LEN,
+ 			     chap_secret);
+ 		break;
+ 	case ISCSI_BOOT_TGT_REV_CHAP_NAME:
+@@ -2253,7 +2253,7 @@ qedi_show_boot_tgt_info(struct qedi_ctx *qedi, int type,
+ 			     mchap_name);
+ 		break;
+ 	case ISCSI_BOOT_TGT_REV_CHAP_SECRET:
+-		rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN,
++		rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_PWD_MAX_LEN,
+ 			     mchap_secret);
+ 		break;
+ 	case ISCSI_BOOT_TGT_FLAGS:
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 24c0f7ec03511..4a08c450b756f 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -6740,7 +6740,7 @@ static int __init scsi_debug_init(void)
+ 		k = sdeb_zbc_model_str(sdeb_zbc_model_s);
+ 		if (k < 0) {
+ 			ret = k;
+-			goto free_vm;
++			goto free_q_arr;
+ 		}
+ 		sdeb_zbc_model = k;
+ 		switch (sdeb_zbc_model) {
+@@ -6753,7 +6753,8 @@ static int __init scsi_debug_init(void)
+ 			break;
+ 		default:
+ 			pr_err("Invalid ZBC model\n");
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto free_q_arr;
+ 		}
+ 	}
+ 	if (sdeb_zbc_model != BLK_ZONED_NONE) {
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 656bcf4940d6d..fedb89d4ac3f0 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -986,8 +986,10 @@ static blk_status_t sd_setup_write_zeroes_cmnd(struct scsi_cmnd *cmd)
+ 		}
+ 	}
+ 
+-	if (sdp->no_write_same)
++	if (sdp->no_write_same) {
++		rq->rq_flags |= RQF_QUIET;
+ 		return BLK_STS_TARGET;
++	}
+ 
+ 	if (sdkp->ws16 || lba > 0xffffffff || nr_blocks > 0xffff)
+ 		return sd_setup_write_same16_cmnd(cmd, false);
+diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
+index dcdb4eb1f90ba..c339517b7a094 100644
+--- a/drivers/scsi/ufs/Kconfig
++++ b/drivers/scsi/ufs/Kconfig
+@@ -72,6 +72,7 @@ config SCSI_UFS_DWC_TC_PCI
+ config SCSI_UFSHCD_PLATFORM
+ 	tristate "Platform bus based UFS Controller support"
+ 	depends on SCSI_UFSHCD
++	depends on HAS_IOMEM
+ 	help
+ 	This selects the UFS host controller support. Select this if
+ 	you have an UFS controller on Platform bus.
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 7b9a9a771b11b..8132893284670 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -283,7 +283,8 @@ static inline void ufshcd_wb_config(struct ufs_hba *hba)
+ 	if (ret)
+ 		dev_err(hba->dev, "%s: En WB flush during H8: failed: %d\n",
+ 			__func__, ret);
+-	ufshcd_wb_toggle_flush(hba, true);
++	if (!(hba->quirks & UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL))
++		ufshcd_wb_toggle_flush(hba, true);
+ }
+ 
+ static void ufshcd_scsi_unblock_requests(struct ufs_hba *hba)
+@@ -4912,7 +4913,8 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+ 		break;
+ 	} /* end of switch */
+ 
+-	if ((host_byte(result) != DID_OK) && !hba->silence_err_logs)
++	if ((host_byte(result) != DID_OK) &&
++	    (host_byte(result) != DID_REQUEUE) && !hba->silence_err_logs)
+ 		ufshcd_print_trs(hba, 1 << lrbp->task_tag, true);
+ 	return result;
+ }
+@@ -5353,9 +5355,6 @@ static int ufshcd_wb_toggle_flush_during_h8(struct ufs_hba *hba, bool set)
+ 
+ static inline void ufshcd_wb_toggle_flush(struct ufs_hba *hba, bool enable)
+ {
+-	if (hba->quirks & UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL)
+-		return;
+-
+ 	if (enable)
+ 		ufshcd_wb_buf_flush_enable(hba);
+ 	else
+@@ -6210,9 +6209,13 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
+ 		intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
+ 	}
+ 
+-	if (enabled_intr_status && retval == IRQ_NONE) {
+-		dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x\n",
+-					__func__, intr_status);
++	if (enabled_intr_status && retval == IRQ_NONE &&
++				!ufshcd_eh_in_progress(hba)) {
++		dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x (0x%08x, 0x%08x)\n",
++					__func__,
++					intr_status,
++					hba->ufs_stats.last_intr_status,
++					enabled_intr_status);
+ 		ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: ");
+ 	}
+ 
+@@ -6256,7 +6259,10 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
+ 	 * Even though we use wait_event() which sleeps indefinitely,
+ 	 * the maximum wait time is bounded by %TM_CMD_TIMEOUT.
+ 	 */
+-	req = blk_get_request(q, REQ_OP_DRV_OUT, BLK_MQ_REQ_RESERVED);
++	req = blk_get_request(q, REQ_OP_DRV_OUT, 0);
++	if (IS_ERR(req))
++		return PTR_ERR(req);
++
+ 	req->end_io_data = &wait;
+ 	free_slot = req->tag;
+ 	WARN_ON_ONCE(free_slot < 0 || free_slot >= hba->nutmrs);
+@@ -6569,19 +6575,16 @@ static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd)
+ {
+ 	struct Scsi_Host *host;
+ 	struct ufs_hba *hba;
+-	unsigned int tag;
+ 	u32 pos;
+ 	int err;
+-	u8 resp = 0xF;
+-	struct ufshcd_lrb *lrbp;
++	u8 resp = 0xF, lun;
+ 	unsigned long flags;
+ 
+ 	host = cmd->device->host;
+ 	hba = shost_priv(host);
+-	tag = cmd->request->tag;
+ 
+-	lrbp = &hba->lrb[tag];
+-	err = ufshcd_issue_tm_cmd(hba, lrbp->lun, 0, UFS_LOGICAL_RESET, &resp);
++	lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
++	err = ufshcd_issue_tm_cmd(hba, lun, 0, UFS_LOGICAL_RESET, &resp);
+ 	if (err || resp != UPIU_TASK_MANAGEMENT_FUNC_COMPL) {
+ 		if (!err)
+ 			err = resp;
+@@ -6590,7 +6593,7 @@ static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd)
+ 
+ 	/* clear the commands that were pending for corresponding LUN */
+ 	for_each_set_bit(pos, &hba->outstanding_reqs, hba->nutrs) {
+-		if (hba->lrb[pos].lun == lrbp->lun) {
++		if (hba->lrb[pos].lun == lun) {
+ 			err = ufshcd_clear_cmd(hba, pos);
+ 			if (err)
+ 				break;
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index 590e6d0722281..7d5814a95e1ed 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -562,8 +562,6 @@ tcmu_get_block_page(struct tcmu_dev *udev, uint32_t dbi)
+ 
+ static inline void tcmu_free_cmd(struct tcmu_cmd *tcmu_cmd)
+ {
+-	if (tcmu_cmd->se_cmd)
+-		tcmu_cmd->se_cmd->priv = NULL;
+ 	kfree(tcmu_cmd->dbi);
+ 	kmem_cache_free(tcmu_cmd_cache, tcmu_cmd);
+ }
+@@ -1188,11 +1186,12 @@ tcmu_queue_cmd(struct se_cmd *se_cmd)
+ 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ 
+ 	mutex_lock(&udev->cmdr_lock);
+-	se_cmd->priv = tcmu_cmd;
+ 	if (!(se_cmd->transport_state & CMD_T_ABORTED))
+ 		ret = queue_cmd_ring(tcmu_cmd, &scsi_ret);
+ 	if (ret < 0)
+ 		tcmu_free_cmd(tcmu_cmd);
++	else
++		se_cmd->priv = tcmu_cmd;
+ 	mutex_unlock(&udev->cmdr_lock);
+ 	return scsi_ret;
+ }
+@@ -1255,6 +1254,7 @@ tcmu_tmr_notify(struct se_device *se_dev, enum tcm_tmreq_table tmf,
+ 
+ 		list_del_init(&cmd->queue_entry);
+ 		tcmu_free_cmd(cmd);
++		se_cmd->priv = NULL;
+ 		target_complete_cmd(se_cmd, SAM_STAT_TASK_ABORTED);
+ 		unqueued = true;
+ 	}
+@@ -1346,6 +1346,7 @@ static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *
+ 	}
+ 
+ done:
++	se_cmd->priv = NULL;
+ 	if (read_len_valid) {
+ 		pr_debug("read_len = %d\n", read_len);
+ 		target_complete_cmd_with_length(cmd->se_cmd,
+@@ -1492,6 +1493,7 @@ static void tcmu_check_expired_queue_cmd(struct tcmu_cmd *cmd)
+ 	se_cmd = cmd->se_cmd;
+ 	tcmu_free_cmd(cmd);
+ 
++	se_cmd->priv = NULL;
+ 	target_complete_cmd(se_cmd, SAM_STAT_TASK_SET_FULL);
+ }
+ 
+@@ -1606,6 +1608,7 @@ static void run_qfull_queue(struct tcmu_dev *udev, bool fail)
+ 			 * removed then LIO core will do the right thing and
+ 			 * fail the retry.
+ 			 */
++			tcmu_cmd->se_cmd->priv = NULL;
+ 			target_complete_cmd(tcmu_cmd->se_cmd, SAM_STAT_BUSY);
+ 			tcmu_free_cmd(tcmu_cmd);
+ 			continue;
+@@ -1619,6 +1622,7 @@ static void run_qfull_queue(struct tcmu_dev *udev, bool fail)
+ 			 * Ignore scsi_ret for now. target_complete_cmd
+ 			 * drops it.
+ 			 */
++			tcmu_cmd->se_cmd->priv = NULL;
+ 			target_complete_cmd(tcmu_cmd->se_cmd,
+ 					    SAM_STAT_CHECK_CONDITION);
+ 			tcmu_free_cmd(tcmu_cmd);
+@@ -2226,6 +2230,7 @@ static void tcmu_reset_ring(struct tcmu_dev *udev, u8 err_level)
+ 		if (!test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags)) {
+ 			WARN_ON(!cmd->se_cmd);
+ 			list_del_init(&cmd->queue_entry);
++			cmd->se_cmd->priv = NULL;
+ 			if (err_level == 1) {
+ 				/*
+ 				 * Userspace was not able to start the
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index 7e5e363152607..c2869489ba681 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -2079,9 +2079,6 @@ static int canon_copy_from_read_buf(struct tty_struct *tty,
+ 	return 0;
+ }
+ 
+-extern ssize_t redirected_tty_write(struct file *, const char __user *,
+-							size_t, loff_t *);
+-
+ /**
+  *	job_control		-	check job control
+  *	@tty: tty
+@@ -2103,7 +2100,7 @@ static int job_control(struct tty_struct *tty, struct file *file)
+ 	/* NOTE: not yet done after every sleep pending a thorough
+ 	   check of the logic of this change. -- jlc */
+ 	/* don't stop on /dev/console */
+-	if (file->f_op->write == redirected_tty_write)
++	if (file->f_op->write_iter == redirected_tty_write)
+ 		return 0;
+ 
+ 	return __tty_check_change(tty, SIGTTIN);
+@@ -2307,7 +2304,7 @@ static ssize_t n_tty_write(struct tty_struct *tty, struct file *file,
+ 	ssize_t retval = 0;
+ 
+ 	/* Job control check -- must be done at start (POSIX.1 7.1.1.4). */
+-	if (L_TOSTOP(tty) && file->f_op->write != redirected_tty_write) {
++	if (L_TOSTOP(tty) && file->f_op->write_iter != redirected_tty_write) {
+ 		retval = tty_check_change(tty);
+ 		if (retval)
+ 			return retval;
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index 118b299122898..e0c00a1b07639 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -648,6 +648,14 @@ static void wait_for_xmitr(struct uart_port *port)
+ 				  (val & STAT_TX_RDY(port)), 1, 10000);
+ }
+ 
++static void wait_for_xmite(struct uart_port *port)
++{
++	u32 val;
++
++	readl_poll_timeout_atomic(port->membase + UART_STAT, val,
++				  (val & STAT_TX_EMP), 1, 10000);
++}
++
+ static void mvebu_uart_console_putchar(struct uart_port *port, int ch)
+ {
+ 	wait_for_xmitr(port);
+@@ -675,7 +683,7 @@ static void mvebu_uart_console_write(struct console *co, const char *s,
+ 
+ 	uart_console_write(port, s, count, mvebu_uart_console_putchar);
+ 
+-	wait_for_xmitr(port);
++	wait_for_xmite(port);
+ 
+ 	if (ier)
+ 		writel(ier, port->membase + UART_CTRL(port));
+diff --git a/drivers/tty/serial/sifive.c b/drivers/tty/serial/sifive.c
+index 13eadcb8aec4e..214bf3086c68a 100644
+--- a/drivers/tty/serial/sifive.c
++++ b/drivers/tty/serial/sifive.c
+@@ -999,6 +999,7 @@ static int sifive_serial_probe(struct platform_device *pdev)
+ 	/* Set up clock divider */
+ 	ssp->clkin_rate = clk_get_rate(ssp->clk);
+ 	ssp->baud_rate = SIFIVE_DEFAULT_BAUD_RATE;
++	ssp->port.uartclk = ssp->baud_rate * 16;
+ 	__ssp_update_div(ssp);
+ 
+ 	platform_set_drvdata(pdev, ssp);
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 56ade99ef99f4..2f8223b2ffa45 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -143,12 +143,9 @@ LIST_HEAD(tty_drivers);			/* linked list of tty drivers */
+ DEFINE_MUTEX(tty_mutex);
+ 
+ static ssize_t tty_read(struct file *, char __user *, size_t, loff_t *);
+-static ssize_t tty_write(struct file *, const char __user *, size_t, loff_t *);
+-ssize_t redirected_tty_write(struct file *, const char __user *,
+-							size_t, loff_t *);
++static ssize_t tty_write(struct kiocb *, struct iov_iter *);
+ static __poll_t tty_poll(struct file *, poll_table *);
+ static int tty_open(struct inode *, struct file *);
+-long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
+ #ifdef CONFIG_COMPAT
+ static long tty_compat_ioctl(struct file *file, unsigned int cmd,
+ 				unsigned long arg);
+@@ -438,8 +435,7 @@ static ssize_t hung_up_tty_read(struct file *file, char __user *buf,
+ 	return 0;
+ }
+ 
+-static ssize_t hung_up_tty_write(struct file *file, const char __user *buf,
+-				 size_t count, loff_t *ppos)
++static ssize_t hung_up_tty_write(struct kiocb *iocb, struct iov_iter *from)
+ {
+ 	return -EIO;
+ }
+@@ -478,7 +474,8 @@ static void tty_show_fdinfo(struct seq_file *m, struct file *file)
+ static const struct file_operations tty_fops = {
+ 	.llseek		= no_llseek,
+ 	.read		= tty_read,
+-	.write		= tty_write,
++	.write_iter	= tty_write,
++	.splice_write	= iter_file_splice_write,
+ 	.poll		= tty_poll,
+ 	.unlocked_ioctl	= tty_ioctl,
+ 	.compat_ioctl	= tty_compat_ioctl,
+@@ -491,7 +488,8 @@ static const struct file_operations tty_fops = {
+ static const struct file_operations console_fops = {
+ 	.llseek		= no_llseek,
+ 	.read		= tty_read,
+-	.write		= redirected_tty_write,
++	.write_iter	= redirected_tty_write,
++	.splice_write	= iter_file_splice_write,
+ 	.poll		= tty_poll,
+ 	.unlocked_ioctl	= tty_ioctl,
+ 	.compat_ioctl	= tty_compat_ioctl,
+@@ -503,7 +501,7 @@ static const struct file_operations console_fops = {
+ static const struct file_operations hung_up_tty_fops = {
+ 	.llseek		= no_llseek,
+ 	.read		= hung_up_tty_read,
+-	.write		= hung_up_tty_write,
++	.write_iter	= hung_up_tty_write,
+ 	.poll		= hung_up_tty_poll,
+ 	.unlocked_ioctl	= hung_up_tty_ioctl,
+ 	.compat_ioctl	= hung_up_tty_compat_ioctl,
+@@ -607,9 +605,9 @@ static void __tty_hangup(struct tty_struct *tty, int exit_session)
+ 	/* This breaks for file handles being sent over AF_UNIX sockets ? */
+ 	list_for_each_entry(priv, &tty->tty_files, list) {
+ 		filp = priv->file;
+-		if (filp->f_op->write == redirected_tty_write)
++		if (filp->f_op->write_iter == redirected_tty_write)
+ 			cons_filp = filp;
+-		if (filp->f_op->write != tty_write)
++		if (filp->f_op->write_iter != tty_write)
+ 			continue;
+ 		closecount++;
+ 		__tty_fasync(-1, filp, 0);	/* can't block */
+@@ -902,9 +900,9 @@ static inline ssize_t do_tty_write(
+ 	ssize_t (*write)(struct tty_struct *, struct file *, const unsigned char *, size_t),
+ 	struct tty_struct *tty,
+ 	struct file *file,
+-	const char __user *buf,
+-	size_t count)
++	struct iov_iter *from)
+ {
++	size_t count = iov_iter_count(from);
+ 	ssize_t ret, written = 0;
+ 	unsigned int chunk;
+ 
+@@ -956,14 +954,20 @@ static inline ssize_t do_tty_write(
+ 		size_t size = count;
+ 		if (size > chunk)
+ 			size = chunk;
++
+ 		ret = -EFAULT;
+-		if (copy_from_user(tty->write_buf, buf, size))
++		if (copy_from_iter(tty->write_buf, size, from) != size)
+ 			break;
++
+ 		ret = write(tty, file, tty->write_buf, size);
+ 		if (ret <= 0)
+ 			break;
++
++		/* FIXME! Have Al check this! */
++		if (ret != size)
++			iov_iter_revert(from, size-ret);
++
+ 		written += ret;
+-		buf += ret;
+ 		count -= ret;
+ 		if (!count)
+ 			break;
+@@ -1023,9 +1027,9 @@ void tty_write_message(struct tty_struct *tty, char *msg)
+  *	write method will not be invoked in parallel for each device.
+  */
+ 
+-static ssize_t tty_write(struct file *file, const char __user *buf,
+-						size_t count, loff_t *ppos)
++static ssize_t tty_write(struct kiocb *iocb, struct iov_iter *from)
+ {
++	struct file *file = iocb->ki_filp;
+ 	struct tty_struct *tty = file_tty(file);
+  	struct tty_ldisc *ld;
+ 	ssize_t ret;
+@@ -1039,17 +1043,16 @@ static ssize_t tty_write(struct file *file, const char __user *buf,
+ 		tty_err(tty, "missing write_room method\n");
+ 	ld = tty_ldisc_ref_wait(tty);
+ 	if (!ld)
+-		return hung_up_tty_write(file, buf, count, ppos);
++		return hung_up_tty_write(iocb, from);
+ 	if (!ld->ops->write)
+ 		ret = -EIO;
+ 	else
+-		ret = do_tty_write(ld->ops->write, tty, file, buf, count);
++		ret = do_tty_write(ld->ops->write, tty, file, from);
+ 	tty_ldisc_deref(ld);
+ 	return ret;
+ }
+ 
+-ssize_t redirected_tty_write(struct file *file, const char __user *buf,
+-						size_t count, loff_t *ppos)
++ssize_t redirected_tty_write(struct kiocb *iocb, struct iov_iter *iter)
+ {
+ 	struct file *p = NULL;
+ 
+@@ -1060,11 +1063,11 @@ ssize_t redirected_tty_write(struct file *file, const char __user *buf,
+ 
+ 	if (p) {
+ 		ssize_t res;
+-		res = vfs_write(p, buf, count, &p->f_pos);
++		res = vfs_iocb_iter_write(p, iocb, iter);
+ 		fput(p);
+ 		return res;
+ 	}
+-	return tty_write(file, buf, count, ppos);
++	return tty_write(iocb, iter);
+ }
+ 
+ /**
+@@ -2293,7 +2296,7 @@ static int tioccons(struct file *file)
+ {
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+-	if (file->f_op->write == redirected_tty_write) {
++	if (file->f_op->write_iter == redirected_tty_write) {
+ 		struct file *f;
+ 		spin_lock(&redirect_lock);
+ 		f = redirect;
+diff --git a/drivers/usb/cdns3/cdns3-imx.c b/drivers/usb/cdns3/cdns3-imx.c
+index 54a2d70a9c730..7e728aab64755 100644
+--- a/drivers/usb/cdns3/cdns3-imx.c
++++ b/drivers/usb/cdns3/cdns3-imx.c
+@@ -184,7 +184,11 @@ static int cdns_imx_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	data->num_clks = ARRAY_SIZE(imx_cdns3_core_clks);
+-	data->clks = (struct clk_bulk_data *)imx_cdns3_core_clks;
++	data->clks = devm_kmemdup(dev, imx_cdns3_core_clks,
++				sizeof(imx_cdns3_core_clks), GFP_KERNEL);
++	if (!data->clks)
++		return -ENOMEM;
++
+ 	ret = devm_clk_bulk_get(dev, data->num_clks, data->clks);
+ 	if (ret)
+ 		return ret;
+@@ -214,20 +218,11 @@ err:
+ 	return ret;
+ }
+ 
+-static int cdns_imx_remove_core(struct device *dev, void *data)
+-{
+-	struct platform_device *pdev = to_platform_device(dev);
+-
+-	platform_device_unregister(pdev);
+-
+-	return 0;
+-}
+-
+ static int cdns_imx_remove(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 
+-	device_for_each_child(dev, NULL, cdns_imx_remove_core);
++	of_platform_depopulate(dev);
+ 	platform_set_drvdata(pdev, NULL);
+ 
+ 	return 0;
+diff --git a/drivers/usb/gadget/udc/aspeed-vhub/epn.c b/drivers/usb/gadget/udc/aspeed-vhub/epn.c
+index 0bd6b20435b8a..02d8bfae58fb1 100644
+--- a/drivers/usb/gadget/udc/aspeed-vhub/epn.c
++++ b/drivers/usb/gadget/udc/aspeed-vhub/epn.c
+@@ -420,7 +420,10 @@ static void ast_vhub_stop_active_req(struct ast_vhub_ep *ep,
+ 	u32 state, reg, loops;
+ 
+ 	/* Stop DMA activity */
+-	writel(0, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
++	if (ep->epn.desc_mode)
++		writel(VHUB_EP_DMA_CTRL_RESET, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
++	else
++		writel(0, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
+ 
+ 	/* Wait for it to complete */
+ 	for (loops = 0; loops < 1000; loops++) {
+diff --git a/drivers/usb/gadget/udc/bdc/Kconfig b/drivers/usb/gadget/udc/bdc/Kconfig
+index 3e88c7670b2ed..fb01ff47b64cf 100644
+--- a/drivers/usb/gadget/udc/bdc/Kconfig
++++ b/drivers/usb/gadget/udc/bdc/Kconfig
+@@ -17,7 +17,7 @@ if USB_BDC_UDC
+ comment "Platform Support"
+ config	USB_BDC_PCI
+ 	tristate "BDC support for PCIe based platforms"
+-	depends on USB_PCI
++	depends on USB_PCI && BROKEN
+ 	default USB_BDC_UDC
+ 	help
+ 		Enable support for platforms which have BDC connected through PCIe, such as Lego3 FPGA platform.
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index debf54205d22e..da691a69fec10 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1532,10 +1532,13 @@ static ssize_t soft_connect_store(struct device *dev,
+ 		struct device_attribute *attr, const char *buf, size_t n)
+ {
+ 	struct usb_udc		*udc = container_of(dev, struct usb_udc, dev);
++	ssize_t			ret;
+ 
++	mutex_lock(&udc_lock);
+ 	if (!udc->driver) {
+ 		dev_err(dev, "soft-connect without a gadget driver\n");
+-		return -EOPNOTSUPP;
++		ret = -EOPNOTSUPP;
++		goto out;
+ 	}
+ 
+ 	if (sysfs_streq(buf, "connect")) {
+@@ -1546,10 +1549,14 @@ static ssize_t soft_connect_store(struct device *dev,
+ 		usb_gadget_udc_stop(udc);
+ 	} else {
+ 		dev_err(dev, "unsupported command '%s'\n", buf);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 
+-	return n;
++	ret = n;
++out:
++	mutex_unlock(&udc_lock);
++	return ret;
+ }
+ static DEVICE_ATTR_WO(soft_connect);
+ 
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index 016937579ed97..17704ee2d7f54 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -2266,17 +2266,20 @@ static int dummy_hub_control(
+ 			}
+ 			fallthrough;
+ 		case USB_PORT_FEAT_RESET:
++			if (!(dum_hcd->port_status & USB_PORT_STAT_CONNECTION))
++				break;
+ 			/* if it's already enabled, disable */
+ 			if (hcd->speed == HCD_USB3) {
+-				dum_hcd->port_status = 0;
+ 				dum_hcd->port_status =
+ 					(USB_SS_PORT_STAT_POWER |
+ 					 USB_PORT_STAT_CONNECTION |
+ 					 USB_PORT_STAT_RESET);
+-			} else
++			} else {
+ 				dum_hcd->port_status &= ~(USB_PORT_STAT_ENABLE
+ 					| USB_PORT_STAT_LOW_SPEED
+ 					| USB_PORT_STAT_HIGH_SPEED);
++				dum_hcd->port_status |= USB_PORT_STAT_RESET;
++			}
+ 			/*
+ 			 * We want to reset device status. All but the
+ 			 * Self powered feature
+@@ -2288,7 +2291,8 @@ static int dummy_hub_control(
+ 			 * interval? Is it still 50msec as for HS?
+ 			 */
+ 			dum_hcd->re_timeout = jiffies + msecs_to_jiffies(50);
+-			fallthrough;
++			set_link_state(dum_hcd);
++			break;
+ 		case USB_PORT_FEAT_C_CONNECTION:
+ 		case USB_PORT_FEAT_C_RESET:
+ 		case USB_PORT_FEAT_C_ENABLE:
+diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
+index 3575b72018810..b5db2b2d0901a 100644
+--- a/drivers/usb/host/ehci-hcd.c
++++ b/drivers/usb/host/ehci-hcd.c
+@@ -574,6 +574,7 @@ static int ehci_run (struct usb_hcd *hcd)
+ 	struct ehci_hcd		*ehci = hcd_to_ehci (hcd);
+ 	u32			temp;
+ 	u32			hcc_params;
++	int			rc;
+ 
+ 	hcd->uses_new_polling = 1;
+ 
+@@ -629,9 +630,20 @@ static int ehci_run (struct usb_hcd *hcd)
+ 	down_write(&ehci_cf_port_reset_rwsem);
+ 	ehci->rh_state = EHCI_RH_RUNNING;
+ 	ehci_writel(ehci, FLAG_CF, &ehci->regs->configured_flag);
++
++	/* Wait until HC become operational */
+ 	ehci_readl(ehci, &ehci->regs->command);	/* unblock posted writes */
+ 	msleep(5);
++	rc = ehci_handshake(ehci, &ehci->regs->status, STS_HALT, 0, 100 * 1000);
++
+ 	up_write(&ehci_cf_port_reset_rwsem);
++
++	if (rc) {
++		ehci_err(ehci, "USB %x.%x, controller refused to start: %d\n",
++			 ((ehci->sbrn & 0xf0)>>4), (ehci->sbrn & 0x0f), rc);
++		return rc;
++	}
++
+ 	ehci->last_periodic_enable = ktime_get_real();
+ 
+ 	temp = HC_VERSION(ehci, ehci_readl(ehci, &ehci->caps->hc_capbase));
+diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c
+index 087402aec5cbe..9f9ab5ccea889 100644
+--- a/drivers/usb/host/ehci-hub.c
++++ b/drivers/usb/host/ehci-hub.c
+@@ -345,6 +345,9 @@ static int ehci_bus_suspend (struct usb_hcd *hcd)
+ 
+ 	unlink_empty_async_suspended(ehci);
+ 
++	/* Some Synopsys controllers mistakenly leave IAA turned on */
++	ehci_writel(ehci, STS_IAA, &ehci->regs->status);
++
+ 	/* Any IAA cycle that started before the suspend is now invalid */
+ 	end_iaa_cycle(ehci);
+ 	ehci_handle_start_intr_unlinks(ehci);
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 167dae117f738..db8612ec82d3e 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -2930,6 +2930,8 @@ static void queue_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,
+ 	trb->field[0] = cpu_to_le32(field1);
+ 	trb->field[1] = cpu_to_le32(field2);
+ 	trb->field[2] = cpu_to_le32(field3);
++	/* make sure TRB is fully written before giving it to the controller */
++	wmb();
+ 	trb->field[3] = cpu_to_le32(field4);
+ 
+ 	trace_xhci_queue_trb(ring, trb);
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 934be16863523..50bb91b6a4b8d 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -623,6 +623,13 @@ static void tegra_xusb_mbox_handle(struct tegra_xusb *tegra,
+ 								     enable);
+ 			if (err < 0)
+ 				break;
++
++			/*
++			 * wait 500us for LFPS detector to be disabled before
++			 * sending ACK
++			 */
++			if (!enable)
++				usleep_range(500, 1000);
+ 		}
+ 
+ 		if (err < 0) {
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 6038c4c35db5a..bbebe248b7264 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -2010,16 +2010,6 @@ static struct irq_chip xen_percpu_chip __read_mostly = {
+ 	.irq_ack		= ack_dynirq,
+ };
+ 
+-int xen_set_callback_via(uint64_t via)
+-{
+-	struct xen_hvm_param a;
+-	a.domid = DOMID_SELF;
+-	a.index = HVM_PARAM_CALLBACK_IRQ;
+-	a.value = via;
+-	return HYPERVISOR_hvm_op(HVMOP_set_param, &a);
+-}
+-EXPORT_SYMBOL_GPL(xen_set_callback_via);
+-
+ #ifdef CONFIG_XEN_PVHVM
+ /* Vector callbacks are better than PCI interrupts to receive event
+  * channel notifications because we can receive vector callbacks on any
+diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
+index dd911e1ff782c..9db557b76511b 100644
+--- a/drivers/xen/platform-pci.c
++++ b/drivers/xen/platform-pci.c
+@@ -149,7 +149,6 @@ static int platform_pci_probe(struct pci_dev *pdev,
+ 	ret = gnttab_init();
+ 	if (ret)
+ 		goto grant_out;
+-	xenbus_probe(NULL);
+ 	return 0;
+ grant_out:
+ 	gnttab_free_auto_xlat_frames();
+diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
+index 2a93b7c9c1599..dc15373354144 100644
+--- a/drivers/xen/xenbus/xenbus.h
++++ b/drivers/xen/xenbus/xenbus.h
+@@ -115,6 +115,7 @@ int xenbus_probe_node(struct xen_bus_type *bus,
+ 		      const char *type,
+ 		      const char *nodename);
+ int xenbus_probe_devices(struct xen_bus_type *bus);
++void xenbus_probe(void);
+ 
+ void xenbus_dev_changed(const char *node, struct xen_bus_type *bus);
+ 
+diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
+index eb5151fc8efab..e5fda0256feb3 100644
+--- a/drivers/xen/xenbus/xenbus_comms.c
++++ b/drivers/xen/xenbus/xenbus_comms.c
+@@ -57,16 +57,8 @@ DEFINE_MUTEX(xs_response_mutex);
+ static int xenbus_irq;
+ static struct task_struct *xenbus_task;
+ 
+-static DECLARE_WORK(probe_work, xenbus_probe);
+-
+-
+ static irqreturn_t wake_waiting(int irq, void *unused)
+ {
+-	if (unlikely(xenstored_ready == 0)) {
+-		xenstored_ready = 1;
+-		schedule_work(&probe_work);
+-	}
+-
+ 	wake_up(&xb_waitq);
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
+index 44634d970a5ca..c8f0282bb6497 100644
+--- a/drivers/xen/xenbus/xenbus_probe.c
++++ b/drivers/xen/xenbus/xenbus_probe.c
+@@ -683,29 +683,76 @@ void unregister_xenstore_notifier(struct notifier_block *nb)
+ }
+ EXPORT_SYMBOL_GPL(unregister_xenstore_notifier);
+ 
+-void xenbus_probe(struct work_struct *unused)
++void xenbus_probe(void)
+ {
+ 	xenstored_ready = 1;
+ 
++	/*
++	 * In the HVM case, xenbus_init() deferred its call to
++	 * xs_init() in case callbacks were not operational yet.
++	 * So do it now.
++	 */
++	if (xen_store_domain_type == XS_HVM)
++		xs_init();
++
+ 	/* Notify others that xenstore is up */
+ 	blocking_notifier_call_chain(&xenstore_chain, 0, NULL);
+ }
+-EXPORT_SYMBOL_GPL(xenbus_probe);
+ 
+-static int __init xenbus_probe_initcall(void)
++/*
++ * Returns true when XenStore init must be deferred in order to
++ * allow the PCI platform device to be initialised, before we
++ * can actually have event channel interrupts working.
++ */
++static bool xs_hvm_defer_init_for_callback(void)
+ {
+-	if (!xen_domain())
+-		return -ENODEV;
++#ifdef CONFIG_XEN_PVHVM
++	return xen_store_domain_type == XS_HVM &&
++		!xen_have_vector_callback;
++#else
++	return false;
++#endif
++}
+ 
+-	if (xen_initial_domain() || xen_hvm_domain())
+-		return 0;
++static int __init xenbus_probe_initcall(void)
++{
++	/*
++	 * Probe XenBus here in the XS_PV case, and also XS_HVM unless we
++	 * need to wait for the platform PCI device to come up.
++	 */
++	if (xen_store_domain_type == XS_PV ||
++	    (xen_store_domain_type == XS_HVM &&
++	     !xs_hvm_defer_init_for_callback()))
++		xenbus_probe();
+ 
+-	xenbus_probe(NULL);
+ 	return 0;
+ }
+-
+ device_initcall(xenbus_probe_initcall);
+ 
++int xen_set_callback_via(uint64_t via)
++{
++	struct xen_hvm_param a;
++	int ret;
++
++	a.domid = DOMID_SELF;
++	a.index = HVM_PARAM_CALLBACK_IRQ;
++	a.value = via;
++
++	ret = HYPERVISOR_hvm_op(HVMOP_set_param, &a);
++	if (ret)
++		return ret;
++
++	/*
++	 * If xenbus_probe_initcall() deferred the xenbus_probe()
++	 * due to the callback not functioning yet, we can do it now.
++	 */
++	if (!xenstored_ready && xs_hvm_defer_init_for_callback())
++		xenbus_probe();
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(xen_set_callback_via);
++
+ /* Set up event channel for xenstored which is run as a local process
+  * (this is normally used only in dom0)
+  */
+@@ -818,11 +865,17 @@ static int __init xenbus_init(void)
+ 		break;
+ 	}
+ 
+-	/* Initialize the interface to xenstore. */
+-	err = xs_init();
+-	if (err) {
+-		pr_warn("Error initializing xenstore comms: %i\n", err);
+-		goto out_error;
++	/*
++	 * HVM domains may not have a functional callback yet. In that
++	 * case let xs_init() be called from xenbus_probe(), which will
++	 * get invoked at an appropriate time.
++	 */
++	if (xen_store_domain_type != XS_HVM) {
++		err = xs_init();
++		if (err) {
++			pr_warn("Error initializing xenstore comms: %i\n", err);
++			goto out_error;
++		}
+ 	}
+ 
+ 	if ((xen_store_domain_type != XS_LOCAL) &&
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index 771a036867dc0..553b4f6ec8639 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -3124,7 +3124,7 @@ void btrfs_backref_error_cleanup(struct btrfs_backref_cache *cache,
+ 		list_del_init(&lower->list);
+ 		if (lower == node)
+ 			node = NULL;
+-		btrfs_backref_free_node(cache, lower);
++		btrfs_backref_drop_node(cache, lower);
+ 	}
+ 
+ 	btrfs_backref_cleanup_node(cache, node);
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 3ba6f3839d392..cef2f080fdcd5 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -2687,7 +2687,8 @@ again:
+ 	 * Go through delayed refs for all the stuff we've just kicked off
+ 	 * and then loop back (just once)
+ 	 */
+-	ret = btrfs_run_delayed_refs(trans, 0);
++	if (!ret)
++		ret = btrfs_run_delayed_refs(trans, 0);
+ 	if (!ret && loops == 0) {
+ 		loops++;
+ 		spin_lock(&cur_trans->dirty_bgs_lock);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index af97ddcc6b3e8..56f3b9acd2154 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1482,7 +1482,7 @@ void btrfs_check_leaked_roots(struct btrfs_fs_info *fs_info)
+ 		root = list_first_entry(&fs_info->allocated_roots,
+ 					struct btrfs_root, leak_list);
+ 		btrfs_err(fs_info, "leaked root %s refcount %d",
+-			  btrfs_root_name(root->root_key.objectid, buf),
++			  btrfs_root_name(&root->root_key, buf),
+ 			  refcount_read(&root->refs));
+ 		while (refcount_read(&root->refs) > 1)
+ 			btrfs_put_root(root);
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 4209dbd6286e4..8fba1c219b190 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -5571,7 +5571,15 @@ int btrfs_drop_snapshot(struct btrfs_root *root, int update_ref, int for_reloc)
+ 				goto out_free;
+ 			}
+ 
+-			trans = btrfs_start_transaction(tree_root, 0);
++		       /*
++			* Use join to avoid potential EINTR from transaction
++			* start. See wait_reserve_ticket and the whole
++			* reservation callchain.
++			*/
++			if (for_reloc)
++				trans = btrfs_join_transaction(tree_root);
++			else
++				trans = btrfs_start_transaction(tree_root, 0);
+ 			if (IS_ERR(trans)) {
+ 				err = PTR_ERR(trans);
+ 				goto out_free;
+diff --git a/fs/btrfs/print-tree.c b/fs/btrfs/print-tree.c
+index 7695c4783d33b..c62771f3af8c6 100644
+--- a/fs/btrfs/print-tree.c
++++ b/fs/btrfs/print-tree.c
+@@ -26,22 +26,22 @@ static const struct root_name_map root_map[] = {
+ 	{ BTRFS_DATA_RELOC_TREE_OBJECTID,	"DATA_RELOC_TREE"	},
+ };
+ 
+-const char *btrfs_root_name(u64 objectid, char *buf)
++const char *btrfs_root_name(const struct btrfs_key *key, char *buf)
+ {
+ 	int i;
+ 
+-	if (objectid == BTRFS_TREE_RELOC_OBJECTID) {
++	if (key->objectid == BTRFS_TREE_RELOC_OBJECTID) {
+ 		snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN,
+-			 "TREE_RELOC offset=%llu", objectid);
++			 "TREE_RELOC offset=%llu", key->offset);
+ 		return buf;
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(root_map); i++) {
+-		if (root_map[i].id == objectid)
++		if (root_map[i].id == key->objectid)
+ 			return root_map[i].name;
+ 	}
+ 
+-	snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN, "%llu", objectid);
++	snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN, "%llu", key->objectid);
+ 	return buf;
+ }
+ 
+diff --git a/fs/btrfs/print-tree.h b/fs/btrfs/print-tree.h
+index 78b99385a503f..8c3e9319ec4ef 100644
+--- a/fs/btrfs/print-tree.h
++++ b/fs/btrfs/print-tree.h
+@@ -11,6 +11,6 @@
+ 
+ void btrfs_print_leaf(struct extent_buffer *l);
+ void btrfs_print_tree(struct extent_buffer *c, bool follow);
+-const char *btrfs_root_name(u64 objectid, char *buf);
++const char *btrfs_root_name(const struct btrfs_key *key, char *buf);
+ 
+ #endif
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 9e08ddb629685..9e5809118c34d 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5512,6 +5512,21 @@ static int clone_range(struct send_ctx *sctx,
+ 			break;
+ 		offset += clone_len;
+ 		clone_root->offset += clone_len;
++
++		/*
++		 * If we are cloning from the file we are currently processing,
++		 * and using the send root as the clone root, we must stop once
++		 * the current clone offset reaches the current eof of the file
++		 * at the receiver, otherwise we would issue an invalid clone
++		 * operation (source range going beyond eof) and cause the
++		 * receiver to fail. So if we reach the current eof, bail out
++		 * and fallback to a regular write.
++		 */
++		if (clone_root->root == sctx->send_root &&
++		    clone_root->ino == sctx->cur_ino &&
++		    clone_root->offset >= sctx->cur_inode_next_write_offset)
++			break;
++
+ 		data_offset += clone_len;
+ next:
+ 		path->slots[0]++;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 78637665166e0..6311308b32beb 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4288,6 +4288,8 @@ int btrfs_recover_balance(struct btrfs_fs_info *fs_info)
+ 		btrfs_warn(fs_info,
+ 	"balance: cannot set exclusive op status, resume manually");
+ 
++	btrfs_release_path(path);
++
+ 	mutex_lock(&fs_info->balance_mutex);
+ 	BUG_ON(fs_info->balance_ctl);
+ 	spin_lock(&fs_info->balance_lock);
+diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
+index 8bda092e60c5a..e027c718ca01a 100644
+--- a/fs/cachefiles/rdwr.c
++++ b/fs/cachefiles/rdwr.c
+@@ -413,7 +413,6 @@ int cachefiles_read_or_alloc_page(struct fscache_retrieval *op,
+ 
+ 	inode = d_backing_inode(object->backer);
+ 	ASSERT(S_ISREG(inode->i_mode));
+-	ASSERT(inode->i_mapping->a_ops->readpages);
+ 
+ 	/* calculate the shift required to use bmap */
+ 	shift = PAGE_SHIFT - inode->i_sb->s_blocksize_bits;
+@@ -713,7 +712,6 @@ int cachefiles_read_or_alloc_pages(struct fscache_retrieval *op,
+ 
+ 	inode = d_backing_inode(object->backer);
+ 	ASSERT(S_ISREG(inode->i_mode));
+-	ASSERT(inode->i_mapping->a_ops->readpages);
+ 
+ 	/* calculate the shift required to use bmap */
+ 	shift = PAGE_SHIFT - inode->i_sb->s_blocksize_bits;
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 36b2ece434037..b1c2f416b9bd9 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -338,7 +338,7 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
+ 	if (ssocket == NULL)
+ 		return -EAGAIN;
+ 
+-	if (signal_pending(current)) {
++	if (fatal_signal_pending(current)) {
+ 		cifs_dbg(FYI, "signal pending before send request\n");
+ 		return -ERESTARTSYS;
+ 	}
+@@ -429,7 +429,7 @@ unmask:
+ 
+ 	if (signal_pending(current) && (total_len != send_length)) {
+ 		cifs_dbg(FYI, "signal is pending after attempt to send\n");
+-		rc = -EINTR;
++		rc = -ERESTARTSYS;
+ 	}
+ 
+ 	/* uncork it */
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index e6005c78bfa93..90dddb507e4af 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -1474,21 +1474,25 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
+ 	}
+ 
+ 	/*
+-	 * Some filesystems may redirty the inode during the writeback
+-	 * due to delalloc, clear dirty metadata flags right before
+-	 * write_inode()
++	 * If the inode has dirty timestamps and we need to write them, call
++	 * mark_inode_dirty_sync() to notify the filesystem about it and to
++	 * change I_DIRTY_TIME into I_DIRTY_SYNC.
+ 	 */
+-	spin_lock(&inode->i_lock);
+-
+-	dirty = inode->i_state & I_DIRTY;
+ 	if ((inode->i_state & I_DIRTY_TIME) &&
+-	    ((dirty & I_DIRTY_INODE) ||
+-	     wbc->sync_mode == WB_SYNC_ALL || wbc->for_sync ||
++	    (wbc->sync_mode == WB_SYNC_ALL || wbc->for_sync ||
+ 	     time_after(jiffies, inode->dirtied_time_when +
+ 			dirtytime_expire_interval * HZ))) {
+-		dirty |= I_DIRTY_TIME;
+ 		trace_writeback_lazytime(inode);
++		mark_inode_dirty_sync(inode);
+ 	}
++
++	/*
++	 * Some filesystems may redirty the inode during the writeback
++	 * due to delalloc, clear dirty metadata flags right before
++	 * write_inode()
++	 */
++	spin_lock(&inode->i_lock);
++	dirty = inode->i_state & I_DIRTY;
+ 	inode->i_state &= ~dirty;
+ 
+ 	/*
+@@ -1509,8 +1513,6 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
+ 
+ 	spin_unlock(&inode->i_lock);
+ 
+-	if (dirty & I_DIRTY_TIME)
+-		mark_inode_dirty_sync(inode);
+ 	/* Don't write the inode if only I_DIRTY_PAGES was set */
+ 	if (dirty & ~I_DIRTY_PAGES) {
+ 		int err = write_inode(inode, wbc);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 265aea2cd7bc8..8cb0db187d90f 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -353,6 +353,7 @@ struct io_ring_ctx {
+ 		unsigned		cq_entries;
+ 		unsigned		cq_mask;
+ 		atomic_t		cq_timeouts;
++		unsigned		cq_last_tm_flush;
+ 		unsigned long		cq_check_overflow;
+ 		struct wait_queue_head	cq_wait;
+ 		struct fasync_struct	*cq_fasync;
+@@ -1521,19 +1522,38 @@ static void __io_queue_deferred(struct io_ring_ctx *ctx)
+ 
+ static void io_flush_timeouts(struct io_ring_ctx *ctx)
+ {
+-	while (!list_empty(&ctx->timeout_list)) {
++	u32 seq;
++
++	if (list_empty(&ctx->timeout_list))
++		return;
++
++	seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
++
++	do {
++		u32 events_needed, events_got;
+ 		struct io_kiocb *req = list_first_entry(&ctx->timeout_list,
+ 						struct io_kiocb, timeout.list);
+ 
+ 		if (io_is_timeout_noseq(req))
+ 			break;
+-		if (req->timeout.target_seq != ctx->cached_cq_tail
+-					- atomic_read(&ctx->cq_timeouts))
++
++		/*
++		 * Since seq can easily wrap around over time, subtract
++		 * the last seq at which timeouts were flushed before comparing.
++		 * Assuming not more than 2^31-1 events have happened since,
++		 * these subtractions won't have wrapped, so we can check if
++		 * target is in [last_seq, current_seq] by comparing the two.
++		 */
++		events_needed = req->timeout.target_seq - ctx->cq_last_tm_flush;
++		events_got = seq - ctx->cq_last_tm_flush;
++		if (events_got < events_needed)
+ 			break;
+ 
+ 		list_del_init(&req->timeout.list);
+ 		io_kill_timeout(req);
+-	}
++	} while (!list_empty(&ctx->timeout_list));
++
++	ctx->cq_last_tm_flush = seq;
+ }
+ 
+ static void io_commit_cqring(struct io_ring_ctx *ctx)
+@@ -2147,6 +2167,8 @@ static void io_req_free_batch_finish(struct io_ring_ctx *ctx,
+ 		struct io_uring_task *tctx = rb->task->io_uring;
+ 
+ 		percpu_counter_sub(&tctx->inflight, rb->task_refs);
++		if (atomic_read(&tctx->in_idle))
++			wake_up(&tctx->wait);
+ 		put_task_struct_many(rb->task, rb->task_refs);
+ 		rb->task = NULL;
+ 	}
+@@ -2166,6 +2188,8 @@ static void io_req_free_batch(struct req_batch *rb, struct io_kiocb *req)
+ 			struct io_uring_task *tctx = rb->task->io_uring;
+ 
+ 			percpu_counter_sub(&tctx->inflight, rb->task_refs);
++			if (atomic_read(&tctx->in_idle))
++				wake_up(&tctx->wait);
+ 			put_task_struct_many(rb->task, rb->task_refs);
+ 		}
+ 		rb->task = req->task;
+@@ -3437,7 +3461,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
+ 
+ 	/* read it all, or we did blocking attempt. no retry. */
+ 	if (!iov_iter_count(iter) || !force_nonblock ||
+-	    (req->file->f_flags & O_NONBLOCK))
++	    (req->file->f_flags & O_NONBLOCK) || !(req->flags & REQ_F_ISREG))
+ 		goto done;
+ 
+ 	io_size -= ret;
+@@ -4226,7 +4250,6 @@ static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	 * io_wq_work.flags, so initialize io_wq_work firstly.
+ 	 */
+ 	io_req_init_async(req);
+-	req->work.flags |= IO_WQ_WORK_NO_CANCEL;
+ 
+ 	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+ 		return -EINVAL;
+@@ -4259,6 +4282,8 @@ static int io_close(struct io_kiocb *req, bool force_nonblock,
+ 
+ 	/* if the file has a flush method, be safe and punt to async */
+ 	if (close->put_file->f_op->flush && force_nonblock) {
++		/* not safe to cancel at this point */
++		req->work.flags |= IO_WQ_WORK_NO_CANCEL;
+ 		/* was never set, but play safe */
+ 		req->flags &= ~REQ_F_NOWAIT;
+ 		/* avoid grabbing files - we don't need the files */
+@@ -5582,6 +5607,12 @@ static int io_timeout(struct io_kiocb *req)
+ 	tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
+ 	req->timeout.target_seq = tail + off;
+ 
++	/* Update the last seq here in case io_flush_timeouts() hasn't.
++	 * This is safe because ->completion_lock is held, and submissions
++	 * and completions are never mixed in the same ->completion_lock section.
++	 */
++	ctx->cq_last_tm_flush = tail;
++
+ 	/*
+ 	 * Insertion sort, ensuring the first entry in the list is always
+ 	 * the one we need first.
+diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
+index f277d023ebcd1..c757193121475 100644
+--- a/fs/kernfs/file.c
++++ b/fs/kernfs/file.c
+@@ -14,6 +14,7 @@
+ #include <linux/pagemap.h>
+ #include <linux/sched/mm.h>
+ #include <linux/fsnotify.h>
++#include <linux/uio.h>
+ 
+ #include "kernfs-internal.h"
+ 
+@@ -180,11 +181,10 @@ static const struct seq_operations kernfs_seq_ops = {
+  * it difficult to use seq_file.  Implement simplistic custom buffering for
+  * bin files.
+  */
+-static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of,
+-				       char __user *user_buf, size_t count,
+-				       loff_t *ppos)
++static ssize_t kernfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ {
+-	ssize_t len = min_t(size_t, count, PAGE_SIZE);
++	struct kernfs_open_file *of = kernfs_of(iocb->ki_filp);
++	ssize_t len = min_t(size_t, iov_iter_count(iter), PAGE_SIZE);
+ 	const struct kernfs_ops *ops;
+ 	char *buf;
+ 
+@@ -210,7 +210,7 @@ static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of,
+ 	of->event = atomic_read(&of->kn->attr.open->event);
+ 	ops = kernfs_ops(of->kn);
+ 	if (ops->read)
+-		len = ops->read(of, buf, len, *ppos);
++		len = ops->read(of, buf, len, iocb->ki_pos);
+ 	else
+ 		len = -EINVAL;
+ 
+@@ -220,12 +220,12 @@ static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of,
+ 	if (len < 0)
+ 		goto out_free;
+ 
+-	if (copy_to_user(user_buf, buf, len)) {
++	if (copy_to_iter(buf, len, iter) != len) {
+ 		len = -EFAULT;
+ 		goto out_free;
+ 	}
+ 
+-	*ppos += len;
++	iocb->ki_pos += len;
+ 
+  out_free:
+ 	if (buf == of->prealloc_buf)
+@@ -235,31 +235,14 @@ static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of,
+ 	return len;
+ }
+ 
+-/**
+- * kernfs_fop_read - kernfs vfs read callback
+- * @file: file pointer
+- * @user_buf: data to write
+- * @count: number of bytes
+- * @ppos: starting offset
+- */
+-static ssize_t kernfs_fop_read(struct file *file, char __user *user_buf,
+-			       size_t count, loff_t *ppos)
++static ssize_t kernfs_fop_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ {
+-	struct kernfs_open_file *of = kernfs_of(file);
+-
+-	if (of->kn->flags & KERNFS_HAS_SEQ_SHOW)
+-		return seq_read(file, user_buf, count, ppos);
+-	else
+-		return kernfs_file_direct_read(of, user_buf, count, ppos);
++	if (kernfs_of(iocb->ki_filp)->kn->flags & KERNFS_HAS_SEQ_SHOW)
++		return seq_read_iter(iocb, iter);
++	return kernfs_file_read_iter(iocb, iter);
+ }
+ 
+-/**
+- * kernfs_fop_write - kernfs vfs write callback
+- * @file: file pointer
+- * @user_buf: data to write
+- * @count: number of bytes
+- * @ppos: starting offset
+- *
++/*
+  * Copy data in from userland and pass it to the matching kernfs write
+  * operation.
+  *
+@@ -269,20 +252,18 @@ static ssize_t kernfs_fop_read(struct file *file, char __user *user_buf,
+  * modify only the the value you're changing, then write entire buffer
+  * back.
+  */
+-static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf,
+-				size_t count, loff_t *ppos)
++static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ {
+-	struct kernfs_open_file *of = kernfs_of(file);
++	struct kernfs_open_file *of = kernfs_of(iocb->ki_filp);
++	ssize_t len = iov_iter_count(iter);
+ 	const struct kernfs_ops *ops;
+-	ssize_t len;
+ 	char *buf;
+ 
+ 	if (of->atomic_write_len) {
+-		len = count;
+ 		if (len > of->atomic_write_len)
+ 			return -E2BIG;
+ 	} else {
+-		len = min_t(size_t, count, PAGE_SIZE);
++		len = min_t(size_t, len, PAGE_SIZE);
+ 	}
+ 
+ 	buf = of->prealloc_buf;
+@@ -293,7 +274,7 @@ static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf,
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+-	if (copy_from_user(buf, user_buf, len)) {
++	if (copy_from_iter(buf, len, iter) != len) {
+ 		len = -EFAULT;
+ 		goto out_free;
+ 	}
+@@ -312,7 +293,7 @@ static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf,
+ 
+ 	ops = kernfs_ops(of->kn);
+ 	if (ops->write)
+-		len = ops->write(of, buf, len, *ppos);
++		len = ops->write(of, buf, len, iocb->ki_pos);
+ 	else
+ 		len = -EINVAL;
+ 
+@@ -320,7 +301,7 @@ static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf,
+ 	mutex_unlock(&of->mutex);
+ 
+ 	if (len > 0)
+-		*ppos += len;
++		iocb->ki_pos += len;
+ 
+ out_free:
+ 	if (buf == of->prealloc_buf)
+@@ -673,7 +654,7 @@ static int kernfs_fop_open(struct inode *inode, struct file *file)
+ 
+ 	/*
+ 	 * Write path needs to atomic_write_len outside active reference.
+-	 * Cache it in open_file.  See kernfs_fop_write() for details.
++	 * Cache it in open_file.  See kernfs_fop_write_iter() for details.
+ 	 */
+ 	of->atomic_write_len = ops->atomic_write_len;
+ 
+@@ -960,14 +941,16 @@ void kernfs_notify(struct kernfs_node *kn)
+ EXPORT_SYMBOL_GPL(kernfs_notify);
+ 
+ const struct file_operations kernfs_file_fops = {
+-	.read		= kernfs_fop_read,
+-	.write		= kernfs_fop_write,
++	.read_iter	= kernfs_fop_read_iter,
++	.write_iter	= kernfs_fop_write_iter,
+ 	.llseek		= generic_file_llseek,
+ 	.mmap		= kernfs_fop_mmap,
+ 	.open		= kernfs_fop_open,
+ 	.release	= kernfs_fop_release,
+ 	.poll		= kernfs_fop_poll,
+ 	.fsync		= noop_fsync,
++	.splice_read	= generic_file_splice_read,
++	.splice_write	= iter_file_splice_write,
+ };
+ 
+ /**
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 833a2c64dfe80..5f5169b9c2e90 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -4632,6 +4632,7 @@ nfsd4_encode_read_plus_data(struct nfsd4_compoundres *resp,
+ 			    resp->rqstp->rq_vec, read->rd_vlen, maxcount, eof);
+ 	if (nfserr)
+ 		return nfserr;
++	xdr_truncate_encode(xdr, starting_len + 16 + xdr_align_size(*maxcount));
+ 
+ 	tmp = htonl(NFS4_CONTENT_DATA);
+ 	write_bytes_to_xdr_buf(xdr->buf, starting_len,      &tmp,   4);
+@@ -4639,6 +4640,10 @@ nfsd4_encode_read_plus_data(struct nfsd4_compoundres *resp,
+ 	write_bytes_to_xdr_buf(xdr->buf, starting_len + 4,  &tmp64, 8);
+ 	tmp = htonl(*maxcount);
+ 	write_bytes_to_xdr_buf(xdr->buf, starting_len + 12, &tmp,   4);
++
++	tmp = xdr_zero;
++	write_bytes_to_xdr_buf(xdr->buf, starting_len + 16 + *maxcount, &tmp,
++			       xdr_pad_size(*maxcount));
+ 	return nfs_ok;
+ }
+ 
+@@ -4731,14 +4736,15 @@ out:
+ 	if (nfserr && segments == 0)
+ 		xdr_truncate_encode(xdr, starting_len);
+ 	else {
+-		tmp = htonl(eof);
+-		write_bytes_to_xdr_buf(xdr->buf, starting_len,     &tmp, 4);
+-		tmp = htonl(segments);
+-		write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4);
+ 		if (nfserr) {
+ 			xdr_truncate_encode(xdr, last_segment);
+ 			nfserr = nfs_ok;
++			eof = 0;
+ 		}
++		tmp = htonl(eof);
++		write_bytes_to_xdr_buf(xdr->buf, starting_len,     &tmp, 4);
++		tmp = htonl(segments);
++		write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4);
+ 	}
+ 
+ 	return nfserr;
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 0ac197658a2d6..412b3b618994c 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -1206,6 +1206,7 @@ const struct file_operations pipefifo_fops = {
+ 	.unlocked_ioctl	= pipe_ioctl,
+ 	.release	= pipe_release,
+ 	.fasync		= pipe_fasync,
++	.splice_write	= iter_file_splice_write,
+ };
+ 
+ /*
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index 317899222d7fd..d2018f70d1fae 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -1770,6 +1770,12 @@ static int process_sysctl_arg(char *param, char *val,
+ 			return 0;
+ 	}
+ 
++	if (!val)
++		return -EINVAL;
++	len = strlen(val);
++	if (len == 0)
++		return -EINVAL;
++
+ 	/*
+ 	 * To set sysctl options, we use a temporary mount of proc, look up the
+ 	 * respective sys/ file and write to it. To avoid mounting it when no
+@@ -1811,7 +1817,6 @@ static int process_sysctl_arg(char *param, char *val,
+ 				file, param, val);
+ 		goto out;
+ 	}
+-	len = strlen(val);
+ 	wret = kernel_write(file, val, len, &pos);
+ 	if (wret < 0) {
+ 		err = wret;
+diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
+index dd90c9792909d..0e7316a86240b 100644
+--- a/include/asm-generic/bitops/atomic.h
++++ b/include/asm-generic/bitops/atomic.h
+@@ -11,19 +11,19 @@
+  * See Documentation/atomic_bitops.txt for details.
+  */
+ 
+-static inline void set_bit(unsigned int nr, volatile unsigned long *p)
++static __always_inline void set_bit(unsigned int nr, volatile unsigned long *p)
+ {
+ 	p += BIT_WORD(nr);
+ 	atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p);
+ }
+ 
+-static inline void clear_bit(unsigned int nr, volatile unsigned long *p)
++static __always_inline void clear_bit(unsigned int nr, volatile unsigned long *p)
+ {
+ 	p += BIT_WORD(nr);
+ 	atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p);
+ }
+ 
+-static inline void change_bit(unsigned int nr, volatile unsigned long *p)
++static __always_inline void change_bit(unsigned int nr, volatile unsigned long *p)
+ {
+ 	p += BIT_WORD(nr);
+ 	atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p);
+diff --git a/include/linux/device.h b/include/linux/device.h
+index 5ed101be7b2e7..2b39de35525a9 100644
+--- a/include/linux/device.h
++++ b/include/linux/device.h
+@@ -615,6 +615,18 @@ static inline const char *dev_name(const struct device *dev)
+ 	return kobject_name(&dev->kobj);
+ }
+ 
++/**
++ * dev_bus_name - Return a device's bus/class name, if at all possible
++ * @dev: struct device to get the bus/class name of
++ *
++ * Will return the name of the bus/class the device is attached to.  If it is
++ * not attached to a bus/class, an empty string will be returned.
++ */
++static inline const char *dev_bus_name(const struct device *dev)
++{
++	return dev->bus ? dev->bus->name : (dev->class ? dev->class->name : "");
++}
++
+ __printf(2, 3) int dev_set_name(struct device *dev, const char *name, ...);
+ 
+ #ifdef CONFIG_NUMA
+diff --git a/include/linux/tty.h b/include/linux/tty.h
+index eb33d948788cc..bc8caac390fce 100644
+--- a/include/linux/tty.h
++++ b/include/linux/tty.h
+@@ -422,6 +422,7 @@ extern void tty_kclose(struct tty_struct *tty);
+ extern int tty_dev_name_to_number(const char *name, dev_t *number);
+ extern int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout);
+ extern void tty_ldisc_unlock(struct tty_struct *tty);
++extern ssize_t redirected_tty_write(struct kiocb *, struct iov_iter *);
+ #else
+ static inline void tty_kref_put(struct tty_struct *tty)
+ { }
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index 7338b3865a2a3..111d7771b2081 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -76,6 +76,8 @@ struct inet_connection_sock_af_ops {
+  * @icsk_ext_hdr_len:	   Network protocol overhead (IP/IPv6 options)
+  * @icsk_ack:		   Delayed ACK control data
+  * @icsk_mtup;		   MTU probing control data
++ * @icsk_probes_tstamp:    Probe timestamp (cleared by non-zero window ack)
++ * @icsk_user_timeout:	   TCP_USER_TIMEOUT value
+  */
+ struct inet_connection_sock {
+ 	/* inet_sock has to be the first member! */
+@@ -129,6 +131,7 @@ struct inet_connection_sock {
+ 
+ 		u32		  probe_timestamp;
+ 	} icsk_mtup;
++	u32			  icsk_probes_tstamp;
+ 	u32			  icsk_user_timeout;
+ 
+ 	u64			  icsk_ca_priv[104 / sizeof(u64)];
+diff --git a/include/net/sock.h b/include/net/sock.h
+index a5c6ae78df77d..253202dcc5e61 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1903,10 +1903,13 @@ static inline void sk_set_txhash(struct sock *sk)
+ 	sk->sk_txhash = net_tx_rndhash();
+ }
+ 
+-static inline void sk_rethink_txhash(struct sock *sk)
++static inline bool sk_rethink_txhash(struct sock *sk)
+ {
+-	if (sk->sk_txhash)
++	if (sk->sk_txhash) {
+ 		sk_set_txhash(sk);
++		return true;
++	}
++	return false;
+ }
+ 
+ static inline struct dst_entry *
+@@ -1929,12 +1932,10 @@ sk_dst_get(struct sock *sk)
+ 	return dst;
+ }
+ 
+-static inline void dst_negative_advice(struct sock *sk)
++static inline void __dst_negative_advice(struct sock *sk)
+ {
+ 	struct dst_entry *ndst, *dst = __sk_dst_get(sk);
+ 
+-	sk_rethink_txhash(sk);
+-
+ 	if (dst && dst->ops->negative_advice) {
+ 		ndst = dst->ops->negative_advice(dst);
+ 
+@@ -1946,6 +1947,12 @@ static inline void dst_negative_advice(struct sock *sk)
+ 	}
+ }
+ 
++static inline void dst_negative_advice(struct sock *sk)
++{
++	sk_rethink_txhash(sk);
++	__dst_negative_advice(sk);
++}
++
+ static inline void
+ __sk_dst_set(struct sock *sk, struct dst_entry *dst)
+ {
+diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
+index 00c7235ae93e7..2c43b0ef1e4d5 100644
+--- a/include/xen/xenbus.h
++++ b/include/xen/xenbus.h
+@@ -192,7 +192,7 @@ void xs_suspend_cancel(void);
+ 
+ struct work_struct;
+ 
+-void xenbus_probe(struct work_struct *);
++void xenbus_probe(void);
+ 
+ #define XENBUS_IS_ERR_READ(str) ({			\
+ 	if (!IS_ERR(str) && strlen(str) == 0) {		\
+diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c
+index 6edff97ad594b..dbc1dbdd2cbf0 100644
+--- a/kernel/bpf/bpf_inode_storage.c
++++ b/kernel/bpf/bpf_inode_storage.c
+@@ -176,7 +176,7 @@ BPF_CALL_4(bpf_inode_storage_get, struct bpf_map *, map, struct inode *, inode,
+ 	 * bpf_local_storage_update expects the owner to have a
+ 	 * valid storage pointer.
+ 	 */
+-	if (!inode_storage_ptr(inode))
++	if (!inode || !inode_storage_ptr(inode))
+ 		return (unsigned long)NULL;
+ 
+ 	sdata = inode_storage_lookup(inode, map, true);
+@@ -200,6 +200,9 @@ BPF_CALL_4(bpf_inode_storage_get, struct bpf_map *, map, struct inode *, inode,
+ BPF_CALL_2(bpf_inode_storage_delete,
+ 	   struct bpf_map *, map, struct inode *, inode)
+ {
++	if (!inode)
++		return -EINVAL;
++
+ 	/* This helper must only called from where the inode is gurranteed
+ 	 * to have a refcount and cannot be freed.
+ 	 */
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 8f50c9c19f1b0..9433ab9995cd7 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -2717,7 +2717,6 @@ out_unlock:
+ out_put_prog:
+ 	if (tgt_prog_fd && tgt_prog)
+ 		bpf_prog_put(tgt_prog);
+-	bpf_prog_put(prog);
+ 	return err;
+ }
+ 
+@@ -2830,7 +2829,10 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
+ 			tp_name = prog->aux->attach_func_name;
+ 			break;
+ 		}
+-		return bpf_tracing_prog_attach(prog, 0, 0);
++		err = bpf_tracing_prog_attach(prog, 0, 0);
++		if (err >= 0)
++			return err;
++		goto out_put_prog;
+ 	case BPF_PROG_TYPE_RAW_TRACEPOINT:
+ 	case BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE:
+ 		if (strncpy_from_user(buf,
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index c1418b47f625a..02bc5b8f1eb27 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -79,7 +79,7 @@ module_param(lock_stat, int, 0644);
+ DEFINE_PER_CPU(unsigned int, lockdep_recursion);
+ EXPORT_PER_CPU_SYMBOL_GPL(lockdep_recursion);
+ 
+-static inline bool lockdep_enabled(void)
++static __always_inline bool lockdep_enabled(void)
+ {
+ 	if (!debug_locks)
+ 		return false;
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index bc1e3b5a97bdd..801f8bc52b34f 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -3376,7 +3376,7 @@ bool kmsg_dump_get_buffer(struct kmsg_dumper *dumper, bool syslog,
+ 	while (prb_read_valid_info(prb, seq, &info, &line_count)) {
+ 		if (r.info->seq >= dumper->next_seq)
+ 			break;
+-		l += get_record_print_text_size(&info, line_count, true, time);
++		l += get_record_print_text_size(&info, line_count, syslog, time);
+ 		seq = r.info->seq + 1;
+ 	}
+ 
+@@ -3386,7 +3386,7 @@ bool kmsg_dump_get_buffer(struct kmsg_dumper *dumper, bool syslog,
+ 						&info, &line_count)) {
+ 		if (r.info->seq >= dumper->next_seq)
+ 			break;
+-		l -= get_record_print_text_size(&info, line_count, true, time);
++		l -= get_record_print_text_size(&info, line_count, syslog, time);
+ 		seq = r.info->seq + 1;
+ 	}
+ 
+diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c
+index 74e25a1704f2b..617dd63589650 100644
+--- a/kernel/printk/printk_ringbuffer.c
++++ b/kernel/printk/printk_ringbuffer.c
+@@ -1720,7 +1720,7 @@ static bool copy_data(struct prb_data_ring *data_ring,
+ 
+ 	/* Caller interested in the line count? */
+ 	if (line_count)
+-		*line_count = count_lines(data, data_size);
++		*line_count = count_lines(data, len);
+ 
+ 	/* Caller interested in the data content? */
+ 	if (!buf || !buf_size)
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 1635111c5bd2a..a21e6a5792c5a 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -1658,7 +1658,7 @@ static int copy_compat_iovec_from_user(struct iovec *iov,
+ 		(const struct compat_iovec __user *)uvec;
+ 	int ret = -EFAULT, i;
+ 
+-	if (!user_access_begin(uvec, nr_segs * sizeof(*uvec)))
++	if (!user_access_begin(uiov, nr_segs * sizeof(*uiov)))
+ 		return -EFAULT;
+ 
+ 	for (i = 0; i < nr_segs; i++) {
+diff --git a/mm/kasan/init.c b/mm/kasan/init.c
+index fe6be0be1f763..b8c6ec172bb22 100644
+--- a/mm/kasan/init.c
++++ b/mm/kasan/init.c
+@@ -377,9 +377,10 @@ static void kasan_remove_pmd_table(pmd_t *pmd, unsigned long addr,
+ 
+ 		if (kasan_pte_table(*pmd)) {
+ 			if (IS_ALIGNED(addr, PMD_SIZE) &&
+-			    IS_ALIGNED(next, PMD_SIZE))
++			    IS_ALIGNED(next, PMD_SIZE)) {
+ 				pmd_clear(pmd);
+-			continue;
++				continue;
++			}
+ 		}
+ 		pte = pte_offset_kernel(pmd, addr);
+ 		kasan_remove_pte_table(pte, addr, next);
+@@ -402,9 +403,10 @@ static void kasan_remove_pud_table(pud_t *pud, unsigned long addr,
+ 
+ 		if (kasan_pmd_table(*pud)) {
+ 			if (IS_ALIGNED(addr, PUD_SIZE) &&
+-			    IS_ALIGNED(next, PUD_SIZE))
++			    IS_ALIGNED(next, PUD_SIZE)) {
+ 				pud_clear(pud);
+-			continue;
++				continue;
++			}
+ 		}
+ 		pmd = pmd_offset(pud, addr);
+ 		pmd_base = pmd_offset(pud, 0);
+@@ -428,9 +430,10 @@ static void kasan_remove_p4d_table(p4d_t *p4d, unsigned long addr,
+ 
+ 		if (kasan_pud_table(*p4d)) {
+ 			if (IS_ALIGNED(addr, P4D_SIZE) &&
+-			    IS_ALIGNED(next, P4D_SIZE))
++			    IS_ALIGNED(next, P4D_SIZE)) {
+ 				p4d_clear(p4d);
+-			continue;
++				continue;
++			}
+ 		}
+ 		pud = pud_offset(p4d, addr);
+ 		kasan_remove_pud_table(pud, addr, next);
+@@ -462,9 +465,10 @@ void kasan_remove_zero_shadow(void *start, unsigned long size)
+ 
+ 		if (kasan_p4d_table(*pgd)) {
+ 			if (IS_ALIGNED(addr, PGDIR_SIZE) &&
+-			    IS_ALIGNED(next, PGDIR_SIZE))
++			    IS_ALIGNED(next, PGDIR_SIZE)) {
+ 				pgd_clear(pgd);
+-			continue;
++				continue;
++			}
+ 		}
+ 
+ 		p4d = p4d_offset(pgd, addr);
+@@ -488,7 +492,6 @@ int kasan_add_zero_shadow(void *start, unsigned long size)
+ 
+ 	ret = kasan_populate_early_shadow(shadow_start, shadow_end);
+ 	if (ret)
+-		kasan_remove_zero_shadow(shadow_start,
+-					size >> KASAN_SHADOW_SCALE_SHIFT);
++		kasan_remove_zero_shadow(start, size);
+ 	return ret;
+ }
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index a717728cc7b4a..8fc23d53f5500 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -3083,9 +3083,7 @@ void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages)
+ 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+ 		page_counter_uncharge(&memcg->kmem, nr_pages);
+ 
+-	page_counter_uncharge(&memcg->memory, nr_pages);
+-	if (do_memsw_account())
+-		page_counter_uncharge(&memcg->memsw, nr_pages);
++	refill_stock(memcg, nr_pages);
+ }
+ 
+ /**
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 8ea0c65f10756..9d7ca1bd7f4b3 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -406,6 +406,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
+ 	struct zone *oldzone, *newzone;
+ 	int dirty;
+ 	int expected_count = expected_page_refs(mapping, page) + extra_count;
++	int nr = thp_nr_pages(page);
+ 
+ 	if (!mapping) {
+ 		/* Anonymous page without mapping */
+@@ -441,7 +442,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
+ 	 */
+ 	newpage->index = page->index;
+ 	newpage->mapping = page->mapping;
+-	page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */
++	page_ref_add(newpage, nr); /* add cache reference */
+ 	if (PageSwapBacked(page)) {
+ 		__SetPageSwapBacked(newpage);
+ 		if (PageSwapCache(page)) {
+@@ -463,7 +464,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
+ 	if (PageTransHuge(page)) {
+ 		int i;
+ 
+-		for (i = 1; i < HPAGE_PMD_NR; i++) {
++		for (i = 1; i < nr; i++) {
+ 			xas_next(&xas);
+ 			xas_store(&xas, newpage);
+ 		}
+@@ -474,7 +475,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
+ 	 * to one less reference.
+ 	 * We know this isn't the last reference.
+ 	 */
+-	page_ref_unfreeze(page, expected_count - thp_nr_pages(page));
++	page_ref_unfreeze(page, expected_count - nr);
+ 
+ 	xas_unlock(&xas);
+ 	/* Leave irq disabled to prevent preemption while updating stats */
+@@ -497,17 +498,17 @@ int migrate_page_move_mapping(struct address_space *mapping,
+ 		old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
+ 		new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
+ 
+-		__dec_lruvec_state(old_lruvec, NR_FILE_PAGES);
+-		__inc_lruvec_state(new_lruvec, NR_FILE_PAGES);
++		__mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
++		__mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
+ 		if (PageSwapBacked(page) && !PageSwapCache(page)) {
+-			__dec_lruvec_state(old_lruvec, NR_SHMEM);
+-			__inc_lruvec_state(new_lruvec, NR_SHMEM);
++			__mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
++			__mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
+ 		}
+ 		if (dirty && mapping_can_writeback(mapping)) {
+-			__dec_node_state(oldzone->zone_pgdat, NR_FILE_DIRTY);
+-			__dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
+-			__inc_node_state(newzone->zone_pgdat, NR_FILE_DIRTY);
+-			__inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
++			__mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
++			__mod_zone_page_state(oldzone, NR_ZONE_WRITE_PENDING, -nr);
++			__mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
++			__mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
+ 		}
+ 	}
+ 	local_irq_enable();
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index c1c30a9f76f34..8b796c499cbb2 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -272,7 +272,8 @@ int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
+ 	    kattr->test.repeat)
+ 		return -EINVAL;
+ 
+-	if (ctx_size_in < prog->aux->max_ctx_offset)
++	if (ctx_size_in < prog->aux->max_ctx_offset ||
++	    ctx_size_in > MAX_BPF_FUNC_ARGS * sizeof(u64))
+ 		return -EINVAL;
+ 
+ 	if ((kattr->test.flags & BPF_F_TEST_RUN_ON_CPU) == 0 && cpu != 0)
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 38412e70f7618..81e5d482c238e 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -9602,6 +9602,11 @@ static netdev_features_t netdev_fix_features(struct net_device *dev,
+ 		}
+ 	}
+ 
++	if ((features & NETIF_F_HW_TLS_RX) && !(features & NETIF_F_RXCSUM)) {
++		netdev_dbg(dev, "Dropping TLS RX HW offload feature since no RXCSUM feature.\n");
++		features &= ~NETIF_F_HW_TLS_RX;
++	}
++
+ 	return features;
+ }
+ 
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 8c5ddffd707de..5d397838bceb6 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -4134,7 +4134,7 @@ out:
+ static int devlink_nl_cmd_port_param_get_doit(struct sk_buff *skb,
+ 					      struct genl_info *info)
+ {
+-	struct devlink_port *devlink_port = info->user_ptr[0];
++	struct devlink_port *devlink_port = info->user_ptr[1];
+ 	struct devlink_param_item *param_item;
+ 	struct sk_buff *msg;
+ 	int err;
+@@ -4163,7 +4163,7 @@ static int devlink_nl_cmd_port_param_get_doit(struct sk_buff *skb,
+ static int devlink_nl_cmd_port_param_set_doit(struct sk_buff *skb,
+ 					      struct genl_info *info)
+ {
+-	struct devlink_port *devlink_port = info->user_ptr[0];
++	struct devlink_port *devlink_port = info->user_ptr[1];
+ 
+ 	return __devlink_nl_cmd_param_set_doit(devlink_port->devlink,
+ 					       devlink_port->index,
+diff --git a/net/core/gen_estimator.c b/net/core/gen_estimator.c
+index 80dbf2f4016e2..8e582e29a41e3 100644
+--- a/net/core/gen_estimator.c
++++ b/net/core/gen_estimator.c
+@@ -80,11 +80,11 @@ static void est_timer(struct timer_list *t)
+ 	u64 rate, brate;
+ 
+ 	est_fetch_counters(est, &b);
+-	brate = (b.bytes - est->last_bytes) << (10 - est->ewma_log - est->intvl_log);
+-	brate -= (est->avbps >> est->ewma_log);
++	brate = (b.bytes - est->last_bytes) << (10 - est->intvl_log);
++	brate = (brate >> est->ewma_log) - (est->avbps >> est->ewma_log);
+ 
+-	rate = (b.packets - est->last_packets) << (10 - est->ewma_log - est->intvl_log);
+-	rate -= (est->avpps >> est->ewma_log);
++	rate = (b.packets - est->last_packets) << (10 - est->intvl_log);
++	rate = (rate >> est->ewma_log) - (est->avpps >> est->ewma_log);
+ 
+ 	write_seqcount_begin(&est->seq);
+ 	est->avbps += brate;
+@@ -143,6 +143,9 @@ int gen_new_estimator(struct gnet_stats_basic_packed *bstats,
+ 	if (parm->interval < -2 || parm->interval > 3)
+ 		return -EINVAL;
+ 
++	if (parm->ewma_log == 0 || parm->ewma_log >= 31)
++		return -EINVAL;
++
+ 	est = kzalloc(sizeof(*est), GFP_KERNEL);
+ 	if (!est)
+ 		return -ENOBUFS;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index f0d6dba37b43d..7ab56796bd3a9 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -432,7 +432,11 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
+ 
+ 	len += NET_SKB_PAD;
+ 
+-	if ((len > SKB_WITH_OVERHEAD(PAGE_SIZE)) ||
++	/* If requested length is either too small or too big,
++	 * we use kmalloc() for skb->head allocation.
++	 */
++	if (len <= SKB_WITH_OVERHEAD(1024) ||
++	    len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
+ 	    (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
+ 		skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
+ 		if (!skb)
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index f60869acbef02..48d2b615edc26 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -851,6 +851,7 @@ struct sock *inet_csk_clone_lock(const struct sock *sk,
+ 		newicsk->icsk_retransmits = 0;
+ 		newicsk->icsk_backoff	  = 0;
+ 		newicsk->icsk_probes_out  = 0;
++		newicsk->icsk_probes_tstamp = 0;
+ 
+ 		/* Deinitialize accept_queue to trap illegal accesses. */
+ 		memset(&newicsk->icsk_accept_queue, 0, sizeof(newicsk->icsk_accept_queue));
+diff --git a/net/ipv4/netfilter/ipt_rpfilter.c b/net/ipv4/netfilter/ipt_rpfilter.c
+index cc23f1ce239c2..8cd3224d913e0 100644
+--- a/net/ipv4/netfilter/ipt_rpfilter.c
++++ b/net/ipv4/netfilter/ipt_rpfilter.c
+@@ -76,7 +76,7 @@ static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ 	flow.daddr = iph->saddr;
+ 	flow.saddr = rpfilter_get_saddr(iph->daddr);
+ 	flow.flowi4_mark = info->flags & XT_RPFILTER_VALID_MARK ? skb->mark : 0;
+-	flow.flowi4_tos = RT_TOS(iph->tos);
++	flow.flowi4_tos = iph->tos & IPTOS_RT_MASK;
+ 	flow.flowi4_scope = RT_SCOPE_UNIVERSE;
+ 	flow.flowi4_oif = l3mdev_master_ifindex_rcu(xt_in(par));
+ 
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index b2bc3d7fe9e80..41d03683b13d6 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2685,6 +2685,7 @@ int tcp_disconnect(struct sock *sk, int flags)
+ 
+ 	icsk->icsk_backoff = 0;
+ 	icsk->icsk_probes_out = 0;
++	icsk->icsk_probes_tstamp = 0;
+ 	icsk->icsk_rto = TCP_TIMEOUT_INIT;
+ 	icsk->icsk_rto_min = TCP_RTO_MIN;
+ 	icsk->icsk_delack_max = TCP_DELACK_MAX;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index ef4bdb038a4bb..6bf066f924c15 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -3370,6 +3370,7 @@ static void tcp_ack_probe(struct sock *sk)
+ 		return;
+ 	if (!after(TCP_SKB_CB(head)->end_seq, tcp_wnd_end(tp))) {
+ 		icsk->icsk_backoff = 0;
++		icsk->icsk_probes_tstamp = 0;
+ 		inet_csk_clear_xmit_timer(sk, ICSK_TIME_PROBE0);
+ 		/* Socket must be waked up by subsequent tcp_data_snd_check().
+ 		 * This function is not for random using!
+@@ -4379,10 +4380,9 @@ static void tcp_rcv_spurious_retrans(struct sock *sk, const struct sk_buff *skb)
+ 	 * The receiver remembers and reflects via DSACKs. Leverage the
+ 	 * DSACK state and change the txhash to re-route speculatively.
+ 	 */
+-	if (TCP_SKB_CB(skb)->seq == tcp_sk(sk)->duplicate_sack[0].start_seq) {
+-		sk_rethink_txhash(sk);
++	if (TCP_SKB_CB(skb)->seq == tcp_sk(sk)->duplicate_sack[0].start_seq &&
++	    sk_rethink_txhash(sk))
+ 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPDUPLICATEDATAREHASH);
+-	}
+ }
+ 
+ static void tcp_send_dupack(struct sock *sk, const struct sk_buff *skb)
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 595dcc3afac5c..ab8ed0fc47697 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1590,6 +1590,8 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
+ 		tcp_move_syn(newtp, req);
+ 		ireq->ireq_opt = NULL;
+ 	} else {
++		newinet->inet_opt = NULL;
++
+ 		if (!req_unhash && found_dup_sk) {
+ 			/* This code path should only be executed in the
+ 			 * syncookie case only
+@@ -1597,8 +1599,6 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
+ 			bh_unlock_sock(newsk);
+ 			sock_put(newsk);
+ 			newsk = NULL;
+-		} else {
+-			newinet->inet_opt = NULL;
+ 		}
+ 	}
+ 	return newsk;
+@@ -1755,6 +1755,7 @@ int tcp_v4_early_demux(struct sk_buff *skb)
+ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ {
+ 	u32 limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf);
++	u32 tail_gso_size, tail_gso_segs;
+ 	struct skb_shared_info *shinfo;
+ 	const struct tcphdr *th;
+ 	struct tcphdr *thtail;
+@@ -1762,6 +1763,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ 	unsigned int hdrlen;
+ 	bool fragstolen;
+ 	u32 gso_segs;
++	u32 gso_size;
+ 	int delta;
+ 
+ 	/* In case all data was pulled from skb frags (in __pskb_pull_tail()),
+@@ -1787,13 +1789,6 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ 	 */
+ 	th = (const struct tcphdr *)skb->data;
+ 	hdrlen = th->doff * 4;
+-	shinfo = skb_shinfo(skb);
+-
+-	if (!shinfo->gso_size)
+-		shinfo->gso_size = skb->len - hdrlen;
+-
+-	if (!shinfo->gso_segs)
+-		shinfo->gso_segs = 1;
+ 
+ 	tail = sk->sk_backlog.tail;
+ 	if (!tail)
+@@ -1816,6 +1811,15 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ 		goto no_coalesce;
+ 
+ 	__skb_pull(skb, hdrlen);
++
++	shinfo = skb_shinfo(skb);
++	gso_size = shinfo->gso_size ?: skb->len;
++	gso_segs = shinfo->gso_segs ?: 1;
++
++	shinfo = skb_shinfo(tail);
++	tail_gso_size = shinfo->gso_size ?: (tail->len - hdrlen);
++	tail_gso_segs = shinfo->gso_segs ?: 1;
++
+ 	if (skb_try_coalesce(tail, skb, &fragstolen, &delta)) {
+ 		TCP_SKB_CB(tail)->end_seq = TCP_SKB_CB(skb)->end_seq;
+ 
+@@ -1842,11 +1846,8 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ 		}
+ 
+ 		/* Not as strict as GRO. We only need to carry mss max value */
+-		skb_shinfo(tail)->gso_size = max(shinfo->gso_size,
+-						 skb_shinfo(tail)->gso_size);
+-
+-		gso_segs = skb_shinfo(tail)->gso_segs + shinfo->gso_segs;
+-		skb_shinfo(tail)->gso_segs = min_t(u32, gso_segs, 0xFFFF);
++		shinfo->gso_size = max(gso_size, tail_gso_size);
++		shinfo->gso_segs = min_t(u32, gso_segs + tail_gso_segs, 0xFFFF);
+ 
+ 		sk->sk_backlog.len += delta;
+ 		__NET_INC_STATS(sock_net(sk),
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 99011768c2640..e58e2589d7f98 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -4080,6 +4080,7 @@ void tcp_send_probe0(struct sock *sk)
+ 		/* Cancel probe timer, if it is not required. */
+ 		icsk->icsk_probes_out = 0;
+ 		icsk->icsk_backoff = 0;
++		icsk->icsk_probes_tstamp = 0;
+ 		return;
+ 	}
+ 
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index 6c62b9ea1320d..faa92948441ba 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -219,14 +219,8 @@ static int tcp_write_timeout(struct sock *sk)
+ 	int retry_until;
+ 
+ 	if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) {
+-		if (icsk->icsk_retransmits) {
+-			dst_negative_advice(sk);
+-		} else {
+-			sk_rethink_txhash(sk);
+-			tp->timeout_rehash++;
+-			__NET_INC_STATS(sock_net(sk),
+-					LINUX_MIB_TCPTIMEOUTREHASH);
+-		}
++		if (icsk->icsk_retransmits)
++			__dst_negative_advice(sk);
+ 		retry_until = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_syn_retries;
+ 		expired = icsk->icsk_retransmits >= retry_until;
+ 	} else {
+@@ -234,12 +228,7 @@ static int tcp_write_timeout(struct sock *sk)
+ 			/* Black hole detection */
+ 			tcp_mtu_probing(icsk, sk);
+ 
+-			dst_negative_advice(sk);
+-		} else {
+-			sk_rethink_txhash(sk);
+-			tp->timeout_rehash++;
+-			__NET_INC_STATS(sock_net(sk),
+-					LINUX_MIB_TCPTIMEOUTREHASH);
++			__dst_negative_advice(sk);
+ 		}
+ 
+ 		retry_until = net->ipv4.sysctl_tcp_retries2;
+@@ -270,6 +259,11 @@ static int tcp_write_timeout(struct sock *sk)
+ 		return 1;
+ 	}
+ 
++	if (sk_rethink_txhash(sk)) {
++		tp->timeout_rehash++;
++		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPTIMEOUTREHASH);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -349,6 +343,7 @@ static void tcp_probe_timer(struct sock *sk)
+ 
+ 	if (tp->packets_out || !skb) {
+ 		icsk->icsk_probes_out = 0;
++		icsk->icsk_probes_tstamp = 0;
+ 		return;
+ 	}
+ 
+@@ -360,13 +355,12 @@ static void tcp_probe_timer(struct sock *sk)
+ 	 * corresponding system limit. We also implement similar policy when
+ 	 * we use RTO to probe window in tcp_retransmit_timer().
+ 	 */
+-	if (icsk->icsk_user_timeout) {
+-		u32 elapsed = tcp_model_timeout(sk, icsk->icsk_probes_out,
+-						tcp_probe0_base(sk));
+-
+-		if (elapsed >= icsk->icsk_user_timeout)
+-			goto abort;
+-	}
++	if (!icsk->icsk_probes_tstamp)
++		icsk->icsk_probes_tstamp = tcp_jiffies32;
++	else if (icsk->icsk_user_timeout &&
++		 (s32)(tcp_jiffies32 - icsk->icsk_probes_tstamp) >=
++		 msecs_to_jiffies(icsk->icsk_user_timeout))
++		goto abort;
+ 
+ 	max_probes = sock_net(sk)->ipv4.sysctl_tcp_retries2;
+ 	if (sock_flag(sk, SOCK_DEAD)) {
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 9eeebd4a00542..e37a2fa65c294 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2553,7 +2553,8 @@ int udp_v4_early_demux(struct sk_buff *skb)
+ 		 */
+ 		if (!inet_sk(sk)->inet_daddr && in_dev)
+ 			return ip_mc_validate_source(skb, iph->daddr,
+-						     iph->saddr, iph->tos,
++						     iph->saddr,
++						     iph->tos & IPTOS_RT_MASK,
+ 						     skb->dev, in_dev, &itag);
+ 	}
+ 	return 0;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 8b6eb384bac7c..4c881f5d9080c 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -2466,8 +2466,9 @@ static void addrconf_add_mroute(struct net_device *dev)
+ 		.fc_ifindex = dev->ifindex,
+ 		.fc_dst_len = 8,
+ 		.fc_flags = RTF_UP,
+-		.fc_type = RTN_UNICAST,
++		.fc_type = RTN_MULTICAST,
+ 		.fc_nlinfo.nl_net = dev_net(dev),
++		.fc_protocol = RTPROT_KERNEL,
+ 	};
+ 
+ 	ipv6_addr_set(&cfg.fc_dst, htonl(0xFF000000), 0, 0, 0);
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 1319986693fc8..84f932532db7d 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -1272,6 +1272,10 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key,
+ 
+ 		nla_opt_msk = nla_data(tb[TCA_FLOWER_KEY_ENC_OPTS_MASK]);
+ 		msk_depth = nla_len(tb[TCA_FLOWER_KEY_ENC_OPTS_MASK]);
++		if (!nla_ok(nla_opt_msk, msk_depth)) {
++			NL_SET_ERR_MSG(extack, "Invalid nested attribute for masks");
++			return -EINVAL;
++		}
+ 	}
+ 
+ 	nla_for_each_attr(nla_opt_key, nla_enc_key,
+@@ -1307,9 +1311,6 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key,
+ 				NL_SET_ERR_MSG(extack, "Key and mask miss aligned");
+ 				return -EINVAL;
+ 			}
+-
+-			if (msk_depth)
+-				nla_opt_msk = nla_next(nla_opt_msk, &msk_depth);
+ 			break;
+ 		case TCA_FLOWER_KEY_ENC_OPTS_VXLAN:
+ 			if (key->enc_opts.dst_opt_type) {
+@@ -1340,9 +1341,6 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key,
+ 				NL_SET_ERR_MSG(extack, "Key and mask miss aligned");
+ 				return -EINVAL;
+ 			}
+-
+-			if (msk_depth)
+-				nla_opt_msk = nla_next(nla_opt_msk, &msk_depth);
+ 			break;
+ 		case TCA_FLOWER_KEY_ENC_OPTS_ERSPAN:
+ 			if (key->enc_opts.dst_opt_type) {
+@@ -1373,14 +1371,20 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key,
+ 				NL_SET_ERR_MSG(extack, "Key and mask miss aligned");
+ 				return -EINVAL;
+ 			}
+-
+-			if (msk_depth)
+-				nla_opt_msk = nla_next(nla_opt_msk, &msk_depth);
+ 			break;
+ 		default:
+ 			NL_SET_ERR_MSG(extack, "Unknown tunnel option type");
+ 			return -EINVAL;
+ 		}
++
++		if (!msk_depth)
++			continue;
++
++		if (!nla_ok(nla_opt_msk, msk_depth)) {
++			NL_SET_ERR_MSG(extack, "A mask attribute is invalid");
++			return -EINVAL;
++		}
++		nla_opt_msk = nla_next(nla_opt_msk, &msk_depth);
+ 	}
+ 
+ 	return 0;
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index 78bec347b8b66..c4007b9cd16d6 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -366,9 +366,13 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 	if (tb[TCA_TCINDEX_MASK])
+ 		cp->mask = nla_get_u16(tb[TCA_TCINDEX_MASK]);
+ 
+-	if (tb[TCA_TCINDEX_SHIFT])
++	if (tb[TCA_TCINDEX_SHIFT]) {
+ 		cp->shift = nla_get_u32(tb[TCA_TCINDEX_SHIFT]);
+-
++		if (cp->shift > 16) {
++			err = -EINVAL;
++			goto errout;
++		}
++	}
+ 	if (!cp->hash) {
+ 		/* Hash not specified, use perfect hash if the upper limit
+ 		 * of the hashing index is below the threshold.
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 2a76a2f5ed88c..5e8e49c4ab5ca 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -412,7 +412,8 @@ struct qdisc_rate_table *qdisc_get_rtab(struct tc_ratespec *r,
+ {
+ 	struct qdisc_rate_table *rtab;
+ 
+-	if (tab == NULL || r->rate == 0 || r->cell_log == 0 ||
++	if (tab == NULL || r->rate == 0 ||
++	    r->cell_log == 0 || r->cell_log >= 32 ||
+ 	    nla_len(tab) != TC_RTAB_SIZE) {
+ 		NL_SET_ERR_MSG(extack, "Invalid rate table parameters for searching");
+ 		return NULL;
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index c2752e2b9ce34..4404c491eb388 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1062,6 +1062,90 @@ err_noclose:
+ 	return 0;	/* record not complete */
+ }
+ 
++static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec,
++			      int flags)
++{
++	return kernel_sendpage(sock, virt_to_page(vec->iov_base),
++			       offset_in_page(vec->iov_base),
++			       vec->iov_len, flags);
++}
++
++/*
++ * kernel_sendpage() is used exclusively to reduce the number of
++ * copy operations in this path. Therefore the caller must ensure
++ * that the pages backing @xdr are unchanging.
++ *
++ * In addition, the logic assumes that * .bv_len is never larger
++ * than PAGE_SIZE.
++ */
++static int svc_tcp_sendmsg(struct socket *sock, struct msghdr *msg,
++			   struct xdr_buf *xdr, rpc_fraghdr marker,
++			   unsigned int *sentp)
++{
++	const struct kvec *head = xdr->head;
++	const struct kvec *tail = xdr->tail;
++	struct kvec rm = {
++		.iov_base	= &marker,
++		.iov_len	= sizeof(marker),
++	};
++	int flags, ret;
++
++	*sentp = 0;
++	xdr_alloc_bvec(xdr, GFP_KERNEL);
++
++	msg->msg_flags = MSG_MORE;
++	ret = kernel_sendmsg(sock, msg, &rm, 1, rm.iov_len);
++	if (ret < 0)
++		return ret;
++	*sentp += ret;
++	if (ret != rm.iov_len)
++		return -EAGAIN;
++
++	flags = head->iov_len < xdr->len ? MSG_MORE | MSG_SENDPAGE_NOTLAST : 0;
++	ret = svc_tcp_send_kvec(sock, head, flags);
++	if (ret < 0)
++		return ret;
++	*sentp += ret;
++	if (ret != head->iov_len)
++		goto out;
++
++	if (xdr->page_len) {
++		unsigned int offset, len, remaining;
++		struct bio_vec *bvec;
++
++		bvec = xdr->bvec;
++		offset = xdr->page_base;
++		remaining = xdr->page_len;
++		flags = MSG_MORE | MSG_SENDPAGE_NOTLAST;
++		while (remaining > 0) {
++			if (remaining <= PAGE_SIZE && tail->iov_len == 0)
++				flags = 0;
++			len = min(remaining, bvec->bv_len);
++			ret = kernel_sendpage(sock, bvec->bv_page,
++					      bvec->bv_offset + offset,
++					      len, flags);
++			if (ret < 0)
++				return ret;
++			*sentp += ret;
++			if (ret != len)
++				goto out;
++			remaining -= len;
++			offset = 0;
++			bvec++;
++		}
++	}
++
++	if (tail->iov_len) {
++		ret = svc_tcp_send_kvec(sock, tail, 0);
++		if (ret < 0)
++			return ret;
++		*sentp += ret;
++	}
++
++out:
++	return 0;
++}
++
+ /**
+  * svc_tcp_sendto - Send out a reply on a TCP socket
+  * @rqstp: completed svc_rqst
+@@ -1089,7 +1173,7 @@ static int svc_tcp_sendto(struct svc_rqst *rqstp)
+ 	mutex_lock(&xprt->xpt_mutex);
+ 	if (svc_xprt_is_dead(xprt))
+ 		goto out_notconn;
+-	err = xprt_sock_sendmsg(svsk->sk_sock, &msg, xdr, 0, marker, &sent);
++	err = svc_tcp_sendmsg(svsk->sk_sock, &msg, xdr, marker, &sent);
+ 	xdr_free_bvec(xdr);
+ 	trace_svcsock_tcp_send(xprt, err < 0 ? err : sent);
+ 	if (err < 0 || sent != (xdr->len + sizeof(marker)))
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index d5f42c62fd79e..52fd1f96b241e 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -107,9 +107,9 @@ EXPORT_SYMBOL(xsk_get_pool_from_qid);
+ 
+ void xsk_clear_pool_at_qid(struct net_device *dev, u16 queue_id)
+ {
+-	if (queue_id < dev->real_num_rx_queues)
++	if (queue_id < dev->num_rx_queues)
+ 		dev->_rx[queue_id].pool = NULL;
+-	if (queue_id < dev->real_num_tx_queues)
++	if (queue_id < dev->num_tx_queues)
+ 		dev->_tx[queue_id].pool = NULL;
+ }
+ 
+diff --git a/sound/core/seq/oss/seq_oss_synth.c b/sound/core/seq/oss/seq_oss_synth.c
+index 11554d0412f06..1b8409ec2c97f 100644
+--- a/sound/core/seq/oss/seq_oss_synth.c
++++ b/sound/core/seq/oss/seq_oss_synth.c
+@@ -611,7 +611,8 @@ snd_seq_oss_synth_make_info(struct seq_oss_devinfo *dp, int dev, struct synth_in
+ 
+ 	if (info->is_midi) {
+ 		struct midi_info minf;
+-		snd_seq_oss_midi_make_info(dp, info->midi_mapped, &minf);
++		if (snd_seq_oss_midi_make_info(dp, info->midi_mapped, &minf))
++			return -ENXIO;
+ 		inf->synth_type = SYNTH_TYPE_MIDI;
+ 		inf->synth_subtype = 0;
+ 		inf->nr_voices = 16;
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 687216e745267..eec1775dfffe9 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2934,7 +2934,7 @@ static void hda_call_codec_resume(struct hda_codec *codec)
+ 	snd_hdac_leave_pm(&codec->core);
+ }
+ 
+-static int hda_codec_suspend(struct device *dev)
++static int hda_codec_runtime_suspend(struct device *dev)
+ {
+ 	struct hda_codec *codec = dev_to_hda_codec(dev);
+ 	unsigned int state;
+@@ -2953,7 +2953,7 @@ static int hda_codec_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int hda_codec_resume(struct device *dev)
++static int hda_codec_runtime_resume(struct device *dev)
+ {
+ 	struct hda_codec *codec = dev_to_hda_codec(dev);
+ 
+@@ -2968,16 +2968,6 @@ static int hda_codec_resume(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int hda_codec_runtime_suspend(struct device *dev)
+-{
+-	return hda_codec_suspend(dev);
+-}
+-
+-static int hda_codec_runtime_resume(struct device *dev)
+-{
+-	return hda_codec_resume(dev);
+-}
+-
+ #endif /* CONFIG_PM */
+ 
+ #ifdef CONFIG_PM_SLEEP
+@@ -2998,31 +2988,31 @@ static void hda_codec_pm_complete(struct device *dev)
+ static int hda_codec_pm_suspend(struct device *dev)
+ {
+ 	dev->power.power_state = PMSG_SUSPEND;
+-	return hda_codec_suspend(dev);
++	return pm_runtime_force_suspend(dev);
+ }
+ 
+ static int hda_codec_pm_resume(struct device *dev)
+ {
+ 	dev->power.power_state = PMSG_RESUME;
+-	return hda_codec_resume(dev);
++	return pm_runtime_force_resume(dev);
+ }
+ 
+ static int hda_codec_pm_freeze(struct device *dev)
+ {
+ 	dev->power.power_state = PMSG_FREEZE;
+-	return hda_codec_suspend(dev);
++	return pm_runtime_force_suspend(dev);
+ }
+ 
+ static int hda_codec_pm_thaw(struct device *dev)
+ {
+ 	dev->power.power_state = PMSG_THAW;
+-	return hda_codec_resume(dev);
++	return pm_runtime_force_resume(dev);
+ }
+ 
+ static int hda_codec_pm_restore(struct device *dev)
+ {
+ 	dev->power.power_state = PMSG_RESTORE;
+-	return hda_codec_resume(dev);
++	return pm_runtime_force_resume(dev);
+ }
+ #endif /* CONFIG_PM_SLEEP */
+ 
+diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c
+index 70164d1428d40..361cf2041911a 100644
+--- a/sound/pci/hda/hda_tegra.c
++++ b/sound/pci/hda/hda_tegra.c
+@@ -388,7 +388,7 @@ static int hda_tegra_first_init(struct azx *chip, struct platform_device *pdev)
+ 	 * in powers of 2, next available ratio is 16 which can be
+ 	 * used as a limiting factor here.
+ 	 */
+-	if (of_device_is_compatible(np, "nvidia,tegra194-hda"))
++	if (of_device_is_compatible(np, "nvidia,tegra30-hda"))
+ 		chip->bus.core.sdo_limit = 16;
+ 
+ 	/* codec detection */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index dd82ff2bd5d65..ed5b6b894dc19 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6371,6 +6371,7 @@ enum {
+ 	ALC256_FIXUP_HP_HEADSET_MIC,
+ 	ALC236_FIXUP_DELL_AIO_HEADSET_MIC,
+ 	ALC282_FIXUP_ACER_DISABLE_LINEOUT,
++	ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7808,6 +7809,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_HEADSET_MODE
+ 	},
++	[ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269_fixup_limit_int_mic_boost,
++		.chained = true,
++		.chain_id = ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7826,6 +7833,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK),
++	SND_PCI_QUIRK(0x1025, 0x1094, "Acer Aspire E5-575T", ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x1025, 0x1099, "Acer Aspire E5-523G", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x110e, "Acer Aspire ES1-432", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1166, "Acer Veriton N4640G", ALC269_FIXUP_LIFEBOOK),
+diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
+index 0ab40a8a68fb5..834367dd54e1b 100644
+--- a/sound/pci/hda/patch_via.c
++++ b/sound/pci/hda/patch_via.c
+@@ -113,6 +113,7 @@ static struct via_spec *via_new_spec(struct hda_codec *codec)
+ 		spec->codec_type = VT1708S;
+ 	spec->gen.indep_hp = 1;
+ 	spec->gen.keep_eapd_on = 1;
++	spec->gen.dac_min_mute = 1;
+ 	spec->gen.pcm_playback_hook = via_playback_pcm_hook;
+ 	spec->gen.add_stereo_mix_input = HDA_HINT_STEREO_MIX_AUTO;
+ 	codec->power_save_node = 1;
+diff --git a/sound/soc/codecs/rt711.c b/sound/soc/codecs/rt711.c
+index 65b59dbfb43c8..a9b1b4180c471 100644
+--- a/sound/soc/codecs/rt711.c
++++ b/sound/soc/codecs/rt711.c
+@@ -462,6 +462,8 @@ static int rt711_set_amp_gain_put(struct snd_kcontrol *kcontrol,
+ 	unsigned int read_ll, read_rl;
+ 	int i;
+ 
++	mutex_lock(&rt711->calibrate_mutex);
++
+ 	/* Can't use update bit function, so read the original value first */
+ 	addr_h = mc->reg;
+ 	addr_l = mc->rreg;
+@@ -547,6 +549,8 @@ static int rt711_set_amp_gain_put(struct snd_kcontrol *kcontrol,
+ 	if (dapm->bias_level <= SND_SOC_BIAS_STANDBY)
+ 		regmap_write(rt711->regmap,
+ 				RT711_SET_AUDIO_POWER_STATE, AC_PWRST_D3);
++
++	mutex_unlock(&rt711->calibrate_mutex);
+ 	return 0;
+ }
+ 
+@@ -859,9 +863,11 @@ static int rt711_set_bias_level(struct snd_soc_component *component,
+ 		break;
+ 
+ 	case SND_SOC_BIAS_STANDBY:
++		mutex_lock(&rt711->calibrate_mutex);
+ 		regmap_write(rt711->regmap,
+ 			RT711_SET_AUDIO_POWER_STATE,
+ 			AC_PWRST_D3);
++		mutex_unlock(&rt711->calibrate_mutex);
+ 		break;
+ 
+ 	default:
+diff --git a/sound/soc/intel/boards/haswell.c b/sound/soc/intel/boards/haswell.c
+index c55d1239e705b..c763bfeb1f38f 100644
+--- a/sound/soc/intel/boards/haswell.c
++++ b/sound/soc/intel/boards/haswell.c
+@@ -189,6 +189,7 @@ static struct platform_driver haswell_audio = {
+ 	.probe = haswell_audio_probe,
+ 	.driver = {
+ 		.name = "haswell-audio",
++		.pm = &snd_soc_pm_ops,
+ 	},
+ };
+ 
+diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c
+index 6875fa570c2c5..8b0ddc4b8227b 100644
+--- a/sound/soc/sof/intel/hda-codec.c
++++ b/sound/soc/sof/intel/hda-codec.c
+@@ -156,7 +156,8 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ 		if (!hdev->bus->audio_component) {
+ 			dev_dbg(sdev->dev,
+ 				"iDisp hw present but no driver\n");
+-			goto error;
++			ret = -ENOENT;
++			goto out;
+ 		}
+ 		hda_priv->need_display_power = true;
+ 	}
+@@ -173,24 +174,23 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ 		 * other return codes without modification
+ 		 */
+ 		if (ret == 0)
+-			goto error;
++			ret = -ENOENT;
+ 	}
+ 
+-	return ret;
+-
+-error:
+-	snd_hdac_ext_bus_device_exit(hdev);
+-	return -ENOENT;
+-
++out:
++	if (ret < 0) {
++		snd_hdac_device_unregister(hdev);
++		put_device(&hdev->dev);
++	}
+ #else
+ 	hdev = devm_kzalloc(sdev->dev, sizeof(*hdev), GFP_KERNEL);
+ 	if (!hdev)
+ 		return -ENOMEM;
+ 
+ 	ret = snd_hdac_ext_bus_device_init(&hbus->core, address, hdev, HDA_DEV_ASOC);
++#endif
+ 
+ 	return ret;
+-#endif
+ }
+ 
+ /* Codec initialization */
+diff --git a/sound/soc/sof/intel/hda-dsp.c b/sound/soc/sof/intel/hda-dsp.c
+index 18ff1c2f5376e..2dbc1273e56bd 100644
+--- a/sound/soc/sof/intel/hda-dsp.c
++++ b/sound/soc/sof/intel/hda-dsp.c
+@@ -683,8 +683,10 @@ static int hda_resume(struct snd_sof_dev *sdev, bool runtime_resume)
+ 
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA)
+ 	/* check jack status */
+-	if (runtime_resume)
+-		hda_codec_jack_check(sdev);
++	if (runtime_resume) {
++		if (sdev->system_suspend_target == SOF_SUSPEND_NONE)
++			hda_codec_jack_check(sdev);
++	}
+ 
+ 	/* turn off the links that were off before suspend */
+ 	list_for_each_entry(hlink, &bus->hlink_list, list) {
+diff --git a/tools/gpio/gpio-event-mon.c b/tools/gpio/gpio-event-mon.c
+index 90c3155f05b1e..84ae1039b0a87 100644
+--- a/tools/gpio/gpio-event-mon.c
++++ b/tools/gpio/gpio-event-mon.c
+@@ -107,8 +107,8 @@ int monitor_device(const char *device_name,
+ 			ret = -EIO;
+ 			break;
+ 		}
+-		fprintf(stdout, "GPIO EVENT at %llu on line %d (%d|%d) ",
+-			event.timestamp_ns, event.offset, event.line_seqno,
++		fprintf(stdout, "GPIO EVENT at %" PRIu64 " on line %d (%d|%d) ",
++			(uint64_t)event.timestamp_ns, event.offset, event.line_seqno,
+ 			event.seqno);
+ 		switch (event.id) {
+ 		case GPIO_V2_LINE_EVENT_RISING_EDGE:
+diff --git a/tools/gpio/gpio-watch.c b/tools/gpio/gpio-watch.c
+index f229ec62301b7..41e76d2441922 100644
+--- a/tools/gpio/gpio-watch.c
++++ b/tools/gpio/gpio-watch.c
+@@ -10,6 +10,7 @@
+ #include <ctype.h>
+ #include <errno.h>
+ #include <fcntl.h>
++#include <inttypes.h>
+ #include <linux/gpio.h>
+ #include <poll.h>
+ #include <stdbool.h>
+@@ -86,8 +87,8 @@ int main(int argc, char **argv)
+ 				return EXIT_FAILURE;
+ 			}
+ 
+-			printf("line %u: %s at %llu\n",
+-			       chg.info.offset, event, chg.timestamp_ns);
++			printf("line %u: %s at %" PRIu64 "\n",
++			       chg.info.offset, event, (uint64_t)chg.timestamp_ns);
+ 		}
+ 	}
+ 
+diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
+index cfcdbd7be066e..17465d454a0e3 100644
+--- a/tools/lib/perf/evlist.c
++++ b/tools/lib/perf/evlist.c
+@@ -367,21 +367,13 @@ static struct perf_mmap* perf_evlist__alloc_mmap(struct perf_evlist *evlist, boo
+ 	return map;
+ }
+ 
+-static void perf_evlist__set_sid_idx(struct perf_evlist *evlist,
+-				     struct perf_evsel *evsel, int idx, int cpu,
+-				     int thread)
++static void perf_evsel__set_sid_idx(struct perf_evsel *evsel, int idx, int cpu, int thread)
+ {
+ 	struct perf_sample_id *sid = SID(evsel, cpu, thread);
+ 
+ 	sid->idx = idx;
+-	if (evlist->cpus && cpu >= 0)
+-		sid->cpu = evlist->cpus->map[cpu];
+-	else
+-		sid->cpu = -1;
+-	if (!evsel->system_wide && evlist->threads && thread >= 0)
+-		sid->tid = perf_thread_map__pid(evlist->threads, thread);
+-	else
+-		sid->tid = -1;
++	sid->cpu = perf_cpu_map__cpu(evsel->cpus, cpu);
++	sid->tid = perf_thread_map__pid(evsel->threads, thread);
+ }
+ 
+ static struct perf_mmap*
+@@ -500,8 +492,7 @@ mmap_per_evsel(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops,
+ 			if (perf_evlist__id_add_fd(evlist, evsel, cpu, thread,
+ 						   fd) < 0)
+ 				return -1;
+-			perf_evlist__set_sid_idx(evlist, evsel, idx, cpu,
+-						 thread);
++			perf_evsel__set_sid_idx(evsel, idx, cpu, thread);
+ 		}
+ 	}
+ 
+diff --git a/tools/lib/perf/tests/test-cpumap.c b/tools/lib/perf/tests/test-cpumap.c
+index c8d45091e7c26..c70e9e03af3e9 100644
+--- a/tools/lib/perf/tests/test-cpumap.c
++++ b/tools/lib/perf/tests/test-cpumap.c
+@@ -27,5 +27,5 @@ int main(int argc, char **argv)
+ 	perf_cpu_map__put(cpus);
+ 
+ 	__T_END;
+-	return 0;
++	return tests_failed == 0 ? 0 : -1;
+ }
+diff --git a/tools/lib/perf/tests/test-evlist.c b/tools/lib/perf/tests/test-evlist.c
+index 6d8ebe0c25042..bd19cabddaf62 100644
+--- a/tools/lib/perf/tests/test-evlist.c
++++ b/tools/lib/perf/tests/test-evlist.c
+@@ -215,6 +215,7 @@ static int test_mmap_thread(void)
+ 		 sysfs__mountpoint());
+ 
+ 	if (filename__read_int(path, &id)) {
++		tests_failed++;
+ 		fprintf(stderr, "error: failed to get tracepoint id: %s\n", path);
+ 		return -1;
+ 	}
+@@ -409,5 +410,5 @@ int main(int argc, char **argv)
+ 	test_mmap_cpus();
+ 
+ 	__T_END;
+-	return 0;
++	return tests_failed == 0 ? 0 : -1;
+ }
+diff --git a/tools/lib/perf/tests/test-evsel.c b/tools/lib/perf/tests/test-evsel.c
+index 135722ac965bf..0ad82d7a2a51b 100644
+--- a/tools/lib/perf/tests/test-evsel.c
++++ b/tools/lib/perf/tests/test-evsel.c
+@@ -131,5 +131,5 @@ int main(int argc, char **argv)
+ 	test_stat_thread_enable();
+ 
+ 	__T_END;
+-	return 0;
++	return tests_failed == 0 ? 0 : -1;
+ }
+diff --git a/tools/lib/perf/tests/test-threadmap.c b/tools/lib/perf/tests/test-threadmap.c
+index 7dc4d6fbeddee..384471441b484 100644
+--- a/tools/lib/perf/tests/test-threadmap.c
++++ b/tools/lib/perf/tests/test-threadmap.c
+@@ -27,5 +27,5 @@ int main(int argc, char **argv)
+ 	perf_thread_map__put(threads);
+ 
+ 	__T_END;
+-	return 0;
++	return tests_failed == 0 ? 0 : -1;
+ }
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index 84205c3a55ebe..2b5707738609e 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -1055,7 +1055,6 @@ ipv6_addr_metric_test()
+ 
+ 	check_route6 "2001:db8:104::1 dev dummy2 proto kernel metric 260"
+ 	log_test $? 0 "Set metric with peer route on local side"
+-	log_test $? 0 "User specified metric on local address"
+ 	check_route6 "2001:db8:104::2 dev dummy2 proto kernel metric 260"
+ 	log_test $? 0 "Set metric with peer route on peer side"
+ 
+diff --git a/tools/testing/selftests/powerpc/mm/pkey_exec_prot.c b/tools/testing/selftests/powerpc/mm/pkey_exec_prot.c
+index 9e5c7f3f498a7..0af4f02669a11 100644
+--- a/tools/testing/selftests/powerpc/mm/pkey_exec_prot.c
++++ b/tools/testing/selftests/powerpc/mm/pkey_exec_prot.c
+@@ -290,5 +290,5 @@ static int test(void)
+ 
+ int main(void)
+ {
+-	test_harness(test, "pkey_exec_prot");
++	return test_harness(test, "pkey_exec_prot");
+ }
+diff --git a/tools/testing/selftests/powerpc/mm/pkey_siginfo.c b/tools/testing/selftests/powerpc/mm/pkey_siginfo.c
+index 4f815d7c12145..2db76e56d4cb9 100644
+--- a/tools/testing/selftests/powerpc/mm/pkey_siginfo.c
++++ b/tools/testing/selftests/powerpc/mm/pkey_siginfo.c
+@@ -329,5 +329,5 @@ static int test(void)
+ 
+ int main(void)
+ {
+-	test_harness(test, "pkey_siginfo");
++	return test_harness(test, "pkey_siginfo");
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-01-30 13:27 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-01-30 13:27 UTC (permalink / raw
  To: gentoo-commits

commit:     9723a6ca8963f59a83672d3b433b01f598183bcc
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Jan 30 13:26:22 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Jan 30 13:27:06 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9723a6ca

Linux patch 5.10.12

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1011_linux-5.10.12.patch | 1263 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1267 insertions(+)

diff --git a/0000_README b/0000_README
index fe8a778..8c99e2c 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-5.10.11.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.11
 
+Patch:  1011_linux-5.10.12.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-5.10.12.patch b/1011_linux-5.10.12.patch
new file mode 100644
index 0000000..40728cb
--- /dev/null
+++ b/1011_linux-5.10.12.patch
@@ -0,0 +1,1263 @@
+diff --git a/Makefile b/Makefile
+index 7a5d906f6ee36..a6b2e64bcf6c7 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
+index 2f245594a90a6..ed7c5fc47f524 100644
+--- a/drivers/gpio/gpio-mvebu.c
++++ b/drivers/gpio/gpio-mvebu.c
+@@ -660,9 +660,8 @@ static void mvebu_pwm_get_state(struct pwm_chip *chip,
+ 
+ 	spin_lock_irqsave(&mvpwm->lock, flags);
+ 
+-	val = (unsigned long long)
+-		readl_relaxed(mvebu_pwmreg_blink_on_duration(mvpwm));
+-	val *= NSEC_PER_SEC;
++	u = readl_relaxed(mvebu_pwmreg_blink_on_duration(mvpwm));
++	val = (unsigned long long) u * NSEC_PER_SEC;
+ 	do_div(val, mvpwm->clk_rate);
+ 	if (val > UINT_MAX)
+ 		state->duty_cycle = UINT_MAX;
+@@ -671,21 +670,17 @@ static void mvebu_pwm_get_state(struct pwm_chip *chip,
+ 	else
+ 		state->duty_cycle = 1;
+ 
+-	val = (unsigned long long)
+-		readl_relaxed(mvebu_pwmreg_blink_off_duration(mvpwm));
++	val = (unsigned long long) u; /* on duration */
++	/* period = on + off duration */
++	val += readl_relaxed(mvebu_pwmreg_blink_off_duration(mvpwm));
+ 	val *= NSEC_PER_SEC;
+ 	do_div(val, mvpwm->clk_rate);
+-	if (val < state->duty_cycle) {
++	if (val > UINT_MAX)
++		state->period = UINT_MAX;
++	else if (val)
++		state->period = val;
++	else
+ 		state->period = 1;
+-	} else {
+-		val -= state->duty_cycle;
+-		if (val > UINT_MAX)
+-			state->period = UINT_MAX;
+-		else if (val)
+-			state->period = val;
+-		else
+-			state->period = 1;
+-	}
+ 
+ 	regmap_read(mvchip->regs, GPIO_BLINK_EN_OFF + mvchip->offset, &u);
+ 	if (u)
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 0743ef51d3b24..8429ebe7097e4 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -758,7 +758,8 @@ static int mt_touch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 			MT_STORE_FIELD(inrange_state);
+ 			return 1;
+ 		case HID_DG_CONFIDENCE:
+-			if (cls->name == MT_CLS_WIN_8 &&
++			if ((cls->name == MT_CLS_WIN_8 ||
++			     cls->name == MT_CLS_WIN_8_FORCE_MULTI_INPUT) &&
+ 				(field->application == HID_DG_TOUCHPAD ||
+ 				 field->application == HID_DG_TOUCHSCREEN))
+ 				app->quirks |= MT_QUIRK_CONFIDENCE;
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 9e852b4bbf92b..73dafa60080f1 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -147,9 +147,9 @@ static int wacom_wac_pen_serial_enforce(struct hid_device *hdev,
+ 	}
+ 
+ 	if (flush)
+-		wacom_wac_queue_flush(hdev, &wacom_wac->pen_fifo);
++		wacom_wac_queue_flush(hdev, wacom_wac->pen_fifo);
+ 	else if (insert)
+-		wacom_wac_queue_insert(hdev, &wacom_wac->pen_fifo,
++		wacom_wac_queue_insert(hdev, wacom_wac->pen_fifo,
+ 				       raw_data, report_size);
+ 
+ 	return insert && !flush;
+@@ -1280,7 +1280,7 @@ static void wacom_devm_kfifo_release(struct device *dev, void *res)
+ static int wacom_devm_kfifo_alloc(struct wacom *wacom)
+ {
+ 	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+-	struct kfifo_rec_ptr_2 *pen_fifo = &wacom_wac->pen_fifo;
++	struct kfifo_rec_ptr_2 *pen_fifo;
+ 	int error;
+ 
+ 	pen_fifo = devres_alloc(wacom_devm_kfifo_release,
+@@ -1297,6 +1297,7 @@ static int wacom_devm_kfifo_alloc(struct wacom *wacom)
+ 	}
+ 
+ 	devres_add(&wacom->hdev->dev, pen_fifo);
++	wacom_wac->pen_fifo = pen_fifo;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index da612b6e9c779..195910dd2154e 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -342,7 +342,7 @@ struct wacom_wac {
+ 	struct input_dev *pen_input;
+ 	struct input_dev *touch_input;
+ 	struct input_dev *pad_input;
+-	struct kfifo_rec_ptr_2 pen_fifo;
++	struct kfifo_rec_ptr_2 *pen_fifo;
+ 	int pid;
+ 	int num_contacts_left;
+ 	u8 bt_features;
+diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h b/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
+index c142f5e7f25f8..de57f2fed7437 100644
+--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
++++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
+@@ -509,6 +509,20 @@ static inline int ib_send_flags_to_pvrdma(int flags)
+ 	return flags & PVRDMA_MASK(PVRDMA_SEND_FLAGS_MAX);
+ }
+ 
++static inline int pvrdma_network_type_to_ib(enum pvrdma_network_type type)
++{
++	switch (type) {
++	case PVRDMA_NETWORK_ROCE_V1:
++		return RDMA_NETWORK_ROCE_V1;
++	case PVRDMA_NETWORK_IPV4:
++		return RDMA_NETWORK_IPV4;
++	case PVRDMA_NETWORK_IPV6:
++		return RDMA_NETWORK_IPV6;
++	default:
++		return RDMA_NETWORK_IPV6;
++	}
++}
++
+ void pvrdma_qp_cap_to_ib(struct ib_qp_cap *dst,
+ 			 const struct pvrdma_qp_cap *src);
+ void ib_qp_cap_to_pvrdma(struct pvrdma_qp_cap *dst,
+diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
+index 319546a39a0d5..62164db593a4f 100644
+--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
++++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
+@@ -364,7 +364,7 @@ retry:
+ 	wc->dlid_path_bits = cqe->dlid_path_bits;
+ 	wc->port_num = cqe->port_num;
+ 	wc->vendor_err = cqe->vendor_err;
+-	wc->network_hdr_type = cqe->network_hdr_type;
++	wc->network_hdr_type = pvrdma_network_type_to_ib(cqe->network_hdr_type);
+ 
+ 	/* Update shared ring state */
+ 	pvrdma_idx_ring_inc(&cq->ring_state->rx.cons_head, cq->ibcq.cqe);
+diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
+index 96d3b2b2aa318..3f61f5863bf77 100644
+--- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
++++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
+@@ -118,8 +118,7 @@ static int __verify_length(struct vb2_buffer *vb, const struct v4l2_buffer *b)
+ 				return -EINVAL;
+ 		}
+ 	} else {
+-		length = (b->memory == VB2_MEMORY_USERPTR ||
+-			  b->memory == VB2_MEMORY_DMABUF)
++		length = (b->memory == VB2_MEMORY_USERPTR)
+ 			? b->length : vb->planes[0].length;
+ 
+ 		if (b->bytesused > length)
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index 8fa1c22fd96db..fcad5cdcabfa4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -237,13 +237,6 @@ static int iwl_dbg_tlv_alloc_region(struct iwl_trans *trans,
+ 	if (le32_to_cpu(tlv->length) < sizeof(*reg))
+ 		return -EINVAL;
+ 
+-	/* For safe using a string from FW make sure we have a
+-	 * null terminator
+-	 */
+-	reg->name[IWL_FW_INI_MAX_NAME - 1] = 0;
+-
+-	IWL_DEBUG_FW(trans, "WRT: parsing region: %s\n", reg->name);
+-
+ 	if (id >= IWL_FW_INI_MAX_REGION_ID) {
+ 		IWL_ERR(trans, "WRT: Invalid region id %u\n", id);
+ 		return -EINVAL;
+diff --git a/fs/file.c b/fs/file.c
+index 4559b5fec3bd5..21c0893f2f1df 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -21,7 +21,6 @@
+ #include <linux/rcupdate.h>
+ #include <linux/close_range.h>
+ #include <net/sock.h>
+-#include <linux/io_uring.h>
+ 
+ unsigned int sysctl_nr_open __read_mostly = 1024*1024;
+ unsigned int sysctl_nr_open_min = BITS_PER_LONG;
+@@ -453,7 +452,6 @@ void exit_files(struct task_struct *tsk)
+ 	struct files_struct * files = tsk->files;
+ 
+ 	if (files) {
+-		io_uring_files_cancel(files);
+ 		task_lock(tsk);
+ 		tsk->files = NULL;
+ 		task_unlock(tsk);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 8cb0db187d90f..fd12d9327ee5b 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -260,6 +260,7 @@ struct io_ring_ctx {
+ 		unsigned int		drain_next: 1;
+ 		unsigned int		eventfd_async: 1;
+ 		unsigned int		restricted: 1;
++		unsigned int		sqo_dead: 1;
+ 
+ 		/*
+ 		 * Ring buffer of indices into array of io_uring_sqe, which is
+@@ -970,6 +971,7 @@ static ssize_t io_import_iovec(int rw, struct io_kiocb *req,
+ static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
+ 			     const struct iovec *fast_iov,
+ 			     struct iov_iter *iter, bool force);
++static void io_req_drop_files(struct io_kiocb *req);
+ 
+ static struct kmem_cache *req_cachep;
+ 
+@@ -990,8 +992,7 @@ EXPORT_SYMBOL(io_uring_get_socket);
+ 
+ static inline void io_clean_op(struct io_kiocb *req)
+ {
+-	if (req->flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED |
+-			  REQ_F_INFLIGHT))
++	if (req->flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED))
+ 		__io_clean_op(req);
+ }
+ 
+@@ -1213,11 +1214,6 @@ static void __io_commit_cqring(struct io_ring_ctx *ctx)
+ 
+ 	/* order cqe stores with ring update */
+ 	smp_store_release(&rings->cq.tail, ctx->cached_cq_tail);
+-
+-	if (wq_has_sleeper(&ctx->cq_wait)) {
+-		wake_up_interruptible(&ctx->cq_wait);
+-		kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
+-	}
+ }
+ 
+ static void io_put_identity(struct io_uring_task *tctx, struct io_kiocb *req)
+@@ -1260,6 +1256,8 @@ static void io_req_clean_work(struct io_kiocb *req)
+ 			free_fs_struct(fs);
+ 		req->work.flags &= ~IO_WQ_WORK_FS;
+ 	}
++	if (req->flags & REQ_F_INFLIGHT)
++		io_req_drop_files(req);
+ 
+ 	io_put_identity(req->task->io_uring, req);
+ }
+@@ -1603,6 +1601,10 @@ static inline bool io_should_trigger_evfd(struct io_ring_ctx *ctx)
+ 
+ static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
+ {
++	if (wq_has_sleeper(&ctx->cq_wait)) {
++		wake_up_interruptible(&ctx->cq_wait);
++		kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
++	}
+ 	if (waitqueue_active(&ctx->wait))
+ 		wake_up(&ctx->wait);
+ 	if (ctx->sq_data && waitqueue_active(&ctx->sq_data->wait))
+@@ -2083,11 +2085,9 @@ static void io_req_task_cancel(struct callback_head *cb)
+ static void __io_req_task_submit(struct io_kiocb *req)
+ {
+ 	struct io_ring_ctx *ctx = req->ctx;
+-	bool fail;
+ 
+-	fail = __io_sq_thread_acquire_mm(ctx);
+ 	mutex_lock(&ctx->uring_lock);
+-	if (!fail)
++	if (!ctx->sqo_dead && !__io_sq_thread_acquire_mm(ctx))
+ 		__io_queue_sqe(req, NULL);
+ 	else
+ 		__io_req_task_cancel(req, -EFAULT);
+@@ -5962,9 +5962,6 @@ static void __io_clean_op(struct io_kiocb *req)
+ 		}
+ 		req->flags &= ~REQ_F_NEED_CLEANUP;
+ 	}
+-
+-	if (req->flags & REQ_F_INFLIGHT)
+-		io_req_drop_files(req);
+ }
+ 
+ static int io_issue_sqe(struct io_kiocb *req, bool force_nonblock,
+@@ -6796,7 +6793,7 @@ again:
+ 		to_submit = 8;
+ 
+ 	mutex_lock(&ctx->uring_lock);
+-	if (likely(!percpu_ref_is_dying(&ctx->refs)))
++	if (likely(!percpu_ref_is_dying(&ctx->refs) && !ctx->sqo_dead))
+ 		ret = io_submit_sqes(ctx, to_submit);
+ 	mutex_unlock(&ctx->uring_lock);
+ 
+@@ -8487,6 +8484,10 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ 	mutex_lock(&ctx->uring_lock);
+ 	percpu_ref_kill(&ctx->refs);
+ 	/* if force is set, the ring is going away. always drop after that */
++
++	if (WARN_ON_ONCE((ctx->flags & IORING_SETUP_SQPOLL) && !ctx->sqo_dead))
++		ctx->sqo_dead = 1;
++
+ 	ctx->cq_overflow_flushed = 1;
+ 	if (ctx->rings)
+ 		__io_cqring_overflow_flush(ctx, true, NULL, NULL);
+@@ -8698,6 +8699,8 @@ static bool io_uring_cancel_files(struct io_ring_ctx *ctx,
+ 			break;
+ 		/* cancel this request, or head link requests */
+ 		io_attempt_cancel(ctx, cancel_req);
++		io_cqring_overflow_flush(ctx, true, task, files);
++
+ 		io_put_req(cancel_req);
+ 		/* cancellations _may_ trigger task work */
+ 		io_run_task_work();
+@@ -8745,6 +8748,17 @@ static bool __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ 	return ret;
+ }
+ 
++static void io_disable_sqo_submit(struct io_ring_ctx *ctx)
++{
++	mutex_lock(&ctx->uring_lock);
++	ctx->sqo_dead = 1;
++	mutex_unlock(&ctx->uring_lock);
++
++	/* make sure callers enter the ring to get error */
++	if (ctx->rings)
++		io_ring_set_wakeup_flag(ctx);
++}
++
+ /*
+  * We need to iteratively cancel requests, in case a request has dependent
+  * hard links. These persist even for failure of cancelations, hence keep
+@@ -8756,6 +8770,9 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ 	struct task_struct *task = current;
+ 
+ 	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
++		/* for SQPOLL only sqo_task has task notes */
++		WARN_ON_ONCE(ctx->sqo_task != current);
++		io_disable_sqo_submit(ctx);
+ 		task = ctx->sq_data->thread;
+ 		atomic_inc(&task->io_uring->in_idle);
+ 		io_sq_thread_park(ctx->sq_data);
+@@ -8835,23 +8852,6 @@ static void io_uring_del_task_file(struct file *file)
+ 		fput(file);
+ }
+ 
+-/*
+- * Drop task note for this file if we're the only ones that hold it after
+- * pending fput()
+- */
+-static void io_uring_attempt_task_drop(struct file *file)
+-{
+-	if (!current->io_uring)
+-		return;
+-	/*
+-	 * fput() is pending, will be 2 if the only other ref is our potential
+-	 * task file note. If the task is exiting, drop regardless of count.
+-	 */
+-	if (fatal_signal_pending(current) || (current->flags & PF_EXITING) ||
+-	    atomic_long_read(&file->f_count) == 2)
+-		io_uring_del_task_file(file);
+-}
+-
+ static void io_uring_remove_task_files(struct io_uring_task *tctx)
+ {
+ 	struct file *file;
+@@ -8917,6 +8917,10 @@ void __io_uring_task_cancel(void)
+ 	/* make sure overflow events are dropped */
+ 	atomic_inc(&tctx->in_idle);
+ 
++	/* trigger io_disable_sqo_submit() */
++	if (tctx->sqpoll)
++		__io_uring_files_cancel(NULL);
++
+ 	do {
+ 		/* read completions before cancelations */
+ 		inflight = tctx_inflight(tctx);
+@@ -8943,7 +8947,36 @@ void __io_uring_task_cancel(void)
+ 
+ static int io_uring_flush(struct file *file, void *data)
+ {
+-	io_uring_attempt_task_drop(file);
++	struct io_uring_task *tctx = current->io_uring;
++	struct io_ring_ctx *ctx = file->private_data;
++
++	if (!tctx)
++		return 0;
++
++	/* we should have cancelled and erased it before PF_EXITING */
++	WARN_ON_ONCE((current->flags & PF_EXITING) &&
++		     xa_load(&tctx->xa, (unsigned long)file));
++
++	/*
++	 * fput() is pending, will be 2 if the only other ref is our potential
++	 * task file note. If the task is exiting, drop regardless of count.
++	 */
++	if (atomic_long_read(&file->f_count) != 2)
++		return 0;
++
++	if (ctx->flags & IORING_SETUP_SQPOLL) {
++		/* there is only one file note, which is owned by sqo_task */
++		WARN_ON_ONCE(ctx->sqo_task != current &&
++			     xa_load(&tctx->xa, (unsigned long)file));
++		/* sqo_dead check is for when this happens after cancellation */
++		WARN_ON_ONCE(ctx->sqo_task == current && !ctx->sqo_dead &&
++			     !xa_load(&tctx->xa, (unsigned long)file));
++
++		io_disable_sqo_submit(ctx);
++	}
++
++	if (!(ctx->flags & IORING_SETUP_SQPOLL) || ctx->sqo_task == current)
++		io_uring_del_task_file(file);
+ 	return 0;
+ }
+ 
+@@ -9017,8 +9050,9 @@ static unsigned long io_uring_nommu_get_unmapped_area(struct file *file,
+ 
+ #endif /* !CONFIG_MMU */
+ 
+-static void io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
++static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
+ {
++	int ret = 0;
+ 	DEFINE_WAIT(wait);
+ 
+ 	do {
+@@ -9027,6 +9061,11 @@ static void io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
+ 
+ 		prepare_to_wait(&ctx->sqo_sq_wait, &wait, TASK_INTERRUPTIBLE);
+ 
++		if (unlikely(ctx->sqo_dead)) {
++			ret = -EOWNERDEAD;
++			goto out;
++		}
++
+ 		if (!io_sqring_full(ctx))
+ 			break;
+ 
+@@ -9034,6 +9073,8 @@ static void io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
+ 	} while (!signal_pending(current));
+ 
+ 	finish_wait(&ctx->sqo_sq_wait, &wait);
++out:
++	return ret;
+ }
+ 
+ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+@@ -9077,10 +9118,16 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+ 	if (ctx->flags & IORING_SETUP_SQPOLL) {
+ 		io_cqring_overflow_flush(ctx, false, NULL, NULL);
+ 
++		ret = -EOWNERDEAD;
++		if (unlikely(ctx->sqo_dead))
++			goto out;
+ 		if (flags & IORING_ENTER_SQ_WAKEUP)
+ 			wake_up(&ctx->sq_data->wait);
+-		if (flags & IORING_ENTER_SQ_WAIT)
+-			io_sqpoll_wait_sq(ctx);
++		if (flags & IORING_ENTER_SQ_WAIT) {
++			ret = io_sqpoll_wait_sq(ctx);
++			if (ret)
++				goto out;
++		}
+ 		submitted = to_submit;
+ 	} else if (to_submit) {
+ 		ret = io_uring_add_task_file(ctx, f.file);
+@@ -9491,6 +9538,7 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p,
+ 	 */
+ 	ret = io_uring_install_fd(ctx, file);
+ 	if (ret < 0) {
++		io_disable_sqo_submit(ctx);
+ 		/* fput will clean it up */
+ 		fput(file);
+ 		return ret;
+@@ -9499,6 +9547,7 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p,
+ 	trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
+ 	return ret;
+ err:
++	io_disable_sqo_submit(ctx);
+ 	io_ring_ctx_wait_and_kill(ctx);
+ 	return ret;
+ }
+diff --git a/include/uapi/linux/v4l2-subdev.h b/include/uapi/linux/v4l2-subdev.h
+index 00850b98078a2..a38454d9e0f54 100644
+--- a/include/uapi/linux/v4l2-subdev.h
++++ b/include/uapi/linux/v4l2-subdev.h
+@@ -176,7 +176,7 @@ struct v4l2_subdev_capability {
+ };
+ 
+ /* The v4l2 sub-device video device node is registered in read-only mode. */
+-#define V4L2_SUBDEV_CAP_RO_SUBDEV		BIT(0)
++#define V4L2_SUBDEV_CAP_RO_SUBDEV		0x00000001
+ 
+ /* Backwards compatibility define --- to be removed */
+ #define v4l2_subdev_edid v4l2_edid
+diff --git a/include/uapi/rdma/vmw_pvrdma-abi.h b/include/uapi/rdma/vmw_pvrdma-abi.h
+index f8b638c73371d..901a4fd72c09f 100644
+--- a/include/uapi/rdma/vmw_pvrdma-abi.h
++++ b/include/uapi/rdma/vmw_pvrdma-abi.h
+@@ -133,6 +133,13 @@ enum pvrdma_wc_flags {
+ 	PVRDMA_WC_FLAGS_MAX		= PVRDMA_WC_WITH_NETWORK_HDR_TYPE,
+ };
+ 
++enum pvrdma_network_type {
++	PVRDMA_NETWORK_IB,
++	PVRDMA_NETWORK_ROCE_V1 = PVRDMA_NETWORK_IB,
++	PVRDMA_NETWORK_IPV4,
++	PVRDMA_NETWORK_IPV6
++};
++
+ struct pvrdma_alloc_ucontext_resp {
+ 	__u32 qp_tab_size;
+ 	__u32 reserved;
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 1f236ed375f83..d13d67fc5f4e2 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -63,6 +63,7 @@
+ #include <linux/random.h>
+ #include <linux/rcuwait.h>
+ #include <linux/compat.h>
++#include <linux/io_uring.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/unistd.h>
+@@ -762,6 +763,7 @@ void __noreturn do_exit(long code)
+ 		schedule();
+ 	}
+ 
++	io_uring_files_cancel(tsk->files);
+ 	exit_signals(tsk);  /* sets PF_EXITING */
+ 
+ 	/* sync mm's RSS info before statistics gathering */
+diff --git a/kernel/futex.c b/kernel/futex.c
+index 00259c7e288ee..0693b3ea0f9a4 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -765,6 +765,29 @@ static struct futex_pi_state *alloc_pi_state(void)
+ 	return pi_state;
+ }
+ 
++static void pi_state_update_owner(struct futex_pi_state *pi_state,
++				  struct task_struct *new_owner)
++{
++	struct task_struct *old_owner = pi_state->owner;
++
++	lockdep_assert_held(&pi_state->pi_mutex.wait_lock);
++
++	if (old_owner) {
++		raw_spin_lock(&old_owner->pi_lock);
++		WARN_ON(list_empty(&pi_state->list));
++		list_del_init(&pi_state->list);
++		raw_spin_unlock(&old_owner->pi_lock);
++	}
++
++	if (new_owner) {
++		raw_spin_lock(&new_owner->pi_lock);
++		WARN_ON(!list_empty(&pi_state->list));
++		list_add(&pi_state->list, &new_owner->pi_state_list);
++		pi_state->owner = new_owner;
++		raw_spin_unlock(&new_owner->pi_lock);
++	}
++}
++
+ static void get_pi_state(struct futex_pi_state *pi_state)
+ {
+ 	WARN_ON_ONCE(!refcount_inc_not_zero(&pi_state->refcount));
+@@ -787,17 +810,11 @@ static void put_pi_state(struct futex_pi_state *pi_state)
+ 	 * and has cleaned up the pi_state already
+ 	 */
+ 	if (pi_state->owner) {
+-		struct task_struct *owner;
+ 		unsigned long flags;
+ 
+ 		raw_spin_lock_irqsave(&pi_state->pi_mutex.wait_lock, flags);
+-		owner = pi_state->owner;
+-		if (owner) {
+-			raw_spin_lock(&owner->pi_lock);
+-			list_del_init(&pi_state->list);
+-			raw_spin_unlock(&owner->pi_lock);
+-		}
+-		rt_mutex_proxy_unlock(&pi_state->pi_mutex, owner);
++		pi_state_update_owner(pi_state, NULL);
++		rt_mutex_proxy_unlock(&pi_state->pi_mutex);
+ 		raw_spin_unlock_irqrestore(&pi_state->pi_mutex.wait_lock, flags);
+ 	}
+ 
+@@ -943,7 +960,8 @@ static inline void exit_pi_state_list(struct task_struct *curr) { }
+  *	FUTEX_OWNER_DIED bit. See [4]
+  *
+  * [10] There is no transient state which leaves owner and user space
+- *	TID out of sync.
++ *	TID out of sync. Except one error case where the kernel is denied
++ *	write access to the user address, see fixup_pi_state_owner().
+  *
+  *
+  * Serialization and lifetime rules:
+@@ -1523,26 +1541,15 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_
+ 			ret = -EINVAL;
+ 	}
+ 
+-	if (ret)
+-		goto out_unlock;
+-
+-	/*
+-	 * This is a point of no return; once we modify the uval there is no
+-	 * going back and subsequent operations must not fail.
+-	 */
+-
+-	raw_spin_lock(&pi_state->owner->pi_lock);
+-	WARN_ON(list_empty(&pi_state->list));
+-	list_del_init(&pi_state->list);
+-	raw_spin_unlock(&pi_state->owner->pi_lock);
+-
+-	raw_spin_lock(&new_owner->pi_lock);
+-	WARN_ON(!list_empty(&pi_state->list));
+-	list_add(&pi_state->list, &new_owner->pi_state_list);
+-	pi_state->owner = new_owner;
+-	raw_spin_unlock(&new_owner->pi_lock);
+-
+-	postunlock = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q);
++	if (!ret) {
++		/*
++		 * This is a point of no return; once we modified the uval
++		 * there is no going back and subsequent operations must
++		 * not fail.
++		 */
++		pi_state_update_owner(pi_state, new_owner);
++		postunlock = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q);
++	}
+ 
+ out_unlock:
+ 	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+@@ -2325,18 +2332,13 @@ static void unqueue_me_pi(struct futex_q *q)
+ 	spin_unlock(q->lock_ptr);
+ }
+ 
+-static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
+-				struct task_struct *argowner)
++static int __fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
++				  struct task_struct *argowner)
+ {
+ 	struct futex_pi_state *pi_state = q->pi_state;
+-	u32 uval, curval, newval;
+ 	struct task_struct *oldowner, *newowner;
+-	u32 newtid;
+-	int ret, err = 0;
+-
+-	lockdep_assert_held(q->lock_ptr);
+-
+-	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
++	u32 uval, curval, newval, newtid;
++	int err = 0;
+ 
+ 	oldowner = pi_state->owner;
+ 
+@@ -2370,14 +2372,12 @@ retry:
+ 			 * We raced against a concurrent self; things are
+ 			 * already fixed up. Nothing to do.
+ 			 */
+-			ret = 0;
+-			goto out_unlock;
++			return 0;
+ 		}
+ 
+ 		if (__rt_mutex_futex_trylock(&pi_state->pi_mutex)) {
+-			/* We got the lock after all, nothing to fix. */
+-			ret = 0;
+-			goto out_unlock;
++			/* We got the lock. pi_state is correct. Tell caller. */
++			return 1;
+ 		}
+ 
+ 		/*
+@@ -2404,8 +2404,7 @@ retry:
+ 			 * We raced against a concurrent self; things are
+ 			 * already fixed up. Nothing to do.
+ 			 */
+-			ret = 0;
+-			goto out_unlock;
++			return 1;
+ 		}
+ 		newowner = argowner;
+ 	}
+@@ -2435,22 +2434,9 @@ retry:
+ 	 * We fixed up user space. Now we need to fix the pi_state
+ 	 * itself.
+ 	 */
+-	if (pi_state->owner != NULL) {
+-		raw_spin_lock(&pi_state->owner->pi_lock);
+-		WARN_ON(list_empty(&pi_state->list));
+-		list_del_init(&pi_state->list);
+-		raw_spin_unlock(&pi_state->owner->pi_lock);
+-	}
++	pi_state_update_owner(pi_state, newowner);
+ 
+-	pi_state->owner = newowner;
+-
+-	raw_spin_lock(&newowner->pi_lock);
+-	WARN_ON(!list_empty(&pi_state->list));
+-	list_add(&pi_state->list, &newowner->pi_state_list);
+-	raw_spin_unlock(&newowner->pi_lock);
+-	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+-
+-	return 0;
++	return argowner == current;
+ 
+ 	/*
+ 	 * In order to reschedule or handle a page fault, we need to drop the
+@@ -2471,17 +2457,16 @@ handle_err:
+ 
+ 	switch (err) {
+ 	case -EFAULT:
+-		ret = fault_in_user_writeable(uaddr);
++		err = fault_in_user_writeable(uaddr);
+ 		break;
+ 
+ 	case -EAGAIN:
+ 		cond_resched();
+-		ret = 0;
++		err = 0;
+ 		break;
+ 
+ 	default:
+ 		WARN_ON_ONCE(1);
+-		ret = err;
+ 		break;
+ 	}
+ 
+@@ -2491,17 +2476,44 @@ handle_err:
+ 	/*
+ 	 * Check if someone else fixed it for us:
+ 	 */
+-	if (pi_state->owner != oldowner) {
+-		ret = 0;
+-		goto out_unlock;
+-	}
++	if (pi_state->owner != oldowner)
++		return argowner == current;
+ 
+-	if (ret)
+-		goto out_unlock;
++	/* Retry if err was -EAGAIN or the fault in succeeded */
++	if (!err)
++		goto retry;
+ 
+-	goto retry;
++	/*
++	 * fault_in_user_writeable() failed so user state is immutable. At
++	 * best we can make the kernel state consistent but user state will
++	 * be most likely hosed and any subsequent unlock operation will be
++	 * rejected due to PI futex rule [10].
++	 *
++	 * Ensure that the rtmutex owner is also the pi_state owner despite
++	 * the user space value claiming something different. There is no
++	 * point in unlocking the rtmutex if current is the owner as it
++	 * would need to wait until the next waiter has taken the rtmutex
++	 * to guarantee consistent state. Keep it simple. Userspace asked
++	 * for this wreckaged state.
++	 *
++	 * The rtmutex has an owner - either current or some other
++	 * task. See the EAGAIN loop above.
++	 */
++	pi_state_update_owner(pi_state, rt_mutex_owner(&pi_state->pi_mutex));
+ 
+-out_unlock:
++	return err;
++}
++
++static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
++				struct task_struct *argowner)
++{
++	struct futex_pi_state *pi_state = q->pi_state;
++	int ret;
++
++	lockdep_assert_held(q->lock_ptr);
++
++	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
++	ret = __fixup_pi_state_owner(uaddr, q, argowner);
+ 	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+ 	return ret;
+ }
+@@ -2525,8 +2537,6 @@ static long futex_wait_restart(struct restart_block *restart);
+  */
+ static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
+ {
+-	int ret = 0;
+-
+ 	if (locked) {
+ 		/*
+ 		 * Got the lock. We might not be the anticipated owner if we
+@@ -2537,8 +2547,8 @@ static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
+ 		 * stable state, anything else needs more attention.
+ 		 */
+ 		if (q->pi_state->owner != current)
+-			ret = fixup_pi_state_owner(uaddr, q, current);
+-		return ret ? ret : locked;
++			return fixup_pi_state_owner(uaddr, q, current);
++		return 1;
+ 	}
+ 
+ 	/*
+@@ -2549,23 +2559,17 @@ static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
+ 	 * Another speculative read; pi_state->owner == current is unstable
+ 	 * but needs our attention.
+ 	 */
+-	if (q->pi_state->owner == current) {
+-		ret = fixup_pi_state_owner(uaddr, q, NULL);
+-		return ret;
+-	}
++	if (q->pi_state->owner == current)
++		return fixup_pi_state_owner(uaddr, q, NULL);
+ 
+ 	/*
+ 	 * Paranoia check. If we did not take the lock, then we should not be
+-	 * the owner of the rt_mutex.
++	 * the owner of the rt_mutex. Warn and establish consistent state.
+ 	 */
+-	if (rt_mutex_owner(&q->pi_state->pi_mutex) == current) {
+-		printk(KERN_ERR "fixup_owner: ret = %d pi-mutex: %p "
+-				"pi-state %p\n", ret,
+-				q->pi_state->pi_mutex.owner,
+-				q->pi_state->owner);
+-	}
++	if (WARN_ON_ONCE(rt_mutex_owner(&q->pi_state->pi_mutex) == current))
++		return fixup_pi_state_owner(uaddr, q, current);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ /**
+@@ -2773,7 +2777,6 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags,
+ 			 ktime_t *time, int trylock)
+ {
+ 	struct hrtimer_sleeper timeout, *to;
+-	struct futex_pi_state *pi_state = NULL;
+ 	struct task_struct *exiting = NULL;
+ 	struct rt_mutex_waiter rt_waiter;
+ 	struct futex_hash_bucket *hb;
+@@ -2909,23 +2912,8 @@ no_block:
+ 	if (res)
+ 		ret = (res < 0) ? res : 0;
+ 
+-	/*
+-	 * If fixup_owner() faulted and was unable to handle the fault, unlock
+-	 * it and return the fault to userspace.
+-	 */
+-	if (ret && (rt_mutex_owner(&q.pi_state->pi_mutex) == current)) {
+-		pi_state = q.pi_state;
+-		get_pi_state(pi_state);
+-	}
+-
+ 	/* Unqueue and drop the lock */
+ 	unqueue_me_pi(&q);
+-
+-	if (pi_state) {
+-		rt_mutex_futex_unlock(&pi_state->pi_mutex);
+-		put_pi_state(pi_state);
+-	}
+-
+ 	goto out;
+ 
+ out_unlock_put_key:
+@@ -3185,7 +3173,6 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ 				 u32 __user *uaddr2)
+ {
+ 	struct hrtimer_sleeper timeout, *to;
+-	struct futex_pi_state *pi_state = NULL;
+ 	struct rt_mutex_waiter rt_waiter;
+ 	struct futex_hash_bucket *hb;
+ 	union futex_key key2 = FUTEX_KEY_INIT;
+@@ -3263,16 +3250,17 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ 		if (q.pi_state && (q.pi_state->owner != current)) {
+ 			spin_lock(q.lock_ptr);
+ 			ret = fixup_pi_state_owner(uaddr2, &q, current);
+-			if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) {
+-				pi_state = q.pi_state;
+-				get_pi_state(pi_state);
+-			}
+ 			/*
+ 			 * Drop the reference to the pi state which
+ 			 * the requeue_pi() code acquired for us.
+ 			 */
+ 			put_pi_state(q.pi_state);
+ 			spin_unlock(q.lock_ptr);
++			/*
++			 * Adjust the return value. It's either -EFAULT or
++			 * success (1) but the caller expects 0 for success.
++			 */
++			ret = ret < 0 ? ret : 0;
+ 		}
+ 	} else {
+ 		struct rt_mutex *pi_mutex;
+@@ -3303,25 +3291,10 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ 		if (res)
+ 			ret = (res < 0) ? res : 0;
+ 
+-		/*
+-		 * If fixup_pi_state_owner() faulted and was unable to handle
+-		 * the fault, unlock the rt_mutex and return the fault to
+-		 * userspace.
+-		 */
+-		if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) {
+-			pi_state = q.pi_state;
+-			get_pi_state(pi_state);
+-		}
+-
+ 		/* Unqueue and drop the lock. */
+ 		unqueue_me_pi(&q);
+ 	}
+ 
+-	if (pi_state) {
+-		rt_mutex_futex_unlock(&pi_state->pi_mutex);
+-		put_pi_state(pi_state);
+-	}
+-
+ 	if (ret == -EINTR) {
+ 		/*
+ 		 * We've already been requeued, but cannot restart by calling
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index cfdd5b93264d7..2f8cd616d3b29 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -1716,8 +1716,7 @@ void rt_mutex_init_proxy_locked(struct rt_mutex *lock,
+  * possible because it belongs to the pi_state which is about to be freed
+  * and it is not longer visible to other tasks.
+  */
+-void rt_mutex_proxy_unlock(struct rt_mutex *lock,
+-			   struct task_struct *proxy_owner)
++void rt_mutex_proxy_unlock(struct rt_mutex *lock)
+ {
+ 	debug_rt_mutex_proxy_unlock(lock);
+ 	rt_mutex_set_owner(lock, NULL);
+diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h
+index d1d62f942be22..ca6fb489007b6 100644
+--- a/kernel/locking/rtmutex_common.h
++++ b/kernel/locking/rtmutex_common.h
+@@ -133,8 +133,7 @@ enum rtmutex_chainwalk {
+ extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock);
+ extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock,
+ 				       struct task_struct *proxy_owner);
+-extern void rt_mutex_proxy_unlock(struct rt_mutex *lock,
+-				  struct task_struct *proxy_owner);
++extern void rt_mutex_proxy_unlock(struct rt_mutex *lock);
+ extern void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter);
+ extern int __rt_mutex_start_proxy_lock(struct rt_mutex *lock,
+ 				     struct rt_mutex_waiter *waiter,
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 801f8bc52b34f..aafec8cb8637d 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -1338,11 +1338,16 @@ static size_t info_print_prefix(const struct printk_info  *info, bool syslog,
+  * done:
+  *
+  *   - Add prefix for each line.
++ *   - Drop truncated lines that no longer fit into the buffer.
+  *   - Add the trailing newline that has been removed in vprintk_store().
+- *   - Drop truncated lines that do not longer fit into the buffer.
++ *   - Add a string terminator.
++ *
++ * Since the produced string is always terminated, the maximum possible
++ * return value is @r->text_buf_size - 1;
+  *
+  * Return: The length of the updated/prepared text, including the added
+- * prefixes and the newline. The dropped line(s) are not counted.
++ * prefixes and the newline. The terminator is not counted. The dropped
++ * line(s) are not counted.
+  */
+ static size_t record_print_text(struct printk_record *r, bool syslog,
+ 				bool time)
+@@ -1385,26 +1390,31 @@ static size_t record_print_text(struct printk_record *r, bool syslog,
+ 
+ 		/*
+ 		 * Truncate the text if there is not enough space to add the
+-		 * prefix and a trailing newline.
++		 * prefix and a trailing newline and a terminator.
+ 		 */
+-		if (len + prefix_len + text_len + 1 > buf_size) {
++		if (len + prefix_len + text_len + 1 + 1 > buf_size) {
+ 			/* Drop even the current line if no space. */
+-			if (len + prefix_len + line_len + 1 > buf_size)
++			if (len + prefix_len + line_len + 1 + 1 > buf_size)
+ 				break;
+ 
+-			text_len = buf_size - len - prefix_len - 1;
++			text_len = buf_size - len - prefix_len - 1 - 1;
+ 			truncated = true;
+ 		}
+ 
+ 		memmove(text + prefix_len, text, text_len);
+ 		memcpy(text, prefix, prefix_len);
+ 
++		/*
++		 * Increment the prepared length to include the text and
++		 * prefix that were just moved+copied. Also increment for the
++		 * newline at the end of this line. If this is the last line,
++		 * there is no newline, but it will be added immediately below.
++		 */
+ 		len += prefix_len + line_len + 1;
+-
+ 		if (text_len == line_len) {
+ 			/*
+-			 * Add the trailing newline removed in
+-			 * vprintk_store().
++			 * This is the last line. Add the trailing newline
++			 * removed in vprintk_store().
+ 			 */
+ 			text[prefix_len + line_len] = '\n';
+ 			break;
+@@ -1429,6 +1439,14 @@ static size_t record_print_text(struct printk_record *r, bool syslog,
+ 		text_len -= line_len + 1;
+ 	}
+ 
++	/*
++	 * If a buffer was provided, it will be terminated. Space for the
++	 * string terminator is guaranteed to be available. The terminator is
++	 * not counted in the return value.
++	 */
++	if (buf_size > 0)
++		r->text_buf[len] = 0;
++
+ 	return len;
+ }
+ 
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 14b9e83ff9da2..88639706ae177 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -2846,20 +2846,20 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
+ {
+ 	struct page *page;
+ 
+-#ifdef CONFIG_CMA
+-	/*
+-	 * Balance movable allocations between regular and CMA areas by
+-	 * allocating from CMA when over half of the zone's free memory
+-	 * is in the CMA area.
+-	 */
+-	if (alloc_flags & ALLOC_CMA &&
+-	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
+-	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+-		page = __rmqueue_cma_fallback(zone, order);
+-		if (page)
+-			return page;
++	if (IS_ENABLED(CONFIG_CMA)) {
++		/*
++		 * Balance movable allocations between regular and CMA areas by
++		 * allocating from CMA when over half of the zone's free memory
++		 * is in the CMA area.
++		 */
++		if (alloc_flags & ALLOC_CMA &&
++		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
++		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
++			page = __rmqueue_cma_fallback(zone, order);
++			if (page)
++				goto out;
++		}
+ 	}
+-#endif
+ retry:
+ 	page = __rmqueue_smallest(zone, order, migratetype);
+ 	if (unlikely(!page)) {
+@@ -2870,8 +2870,9 @@ retry:
+ 								alloc_flags))
+ 			goto retry;
+ 	}
+-
+-	trace_mm_page_alloc_zone_locked(page, order, migratetype);
++out:
++	if (page)
++		trace_mm_page_alloc_zone_locked(page, order, migratetype);
+ 	return page;
+ }
+ 
+diff --git a/mm/slub.c b/mm/slub.c
+index 3f4303f4b657d..071e41067ea67 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -5620,10 +5620,8 @@ static int sysfs_slab_add(struct kmem_cache *s)
+ 
+ 	s->kobj.kset = kset;
+ 	err = kobject_init_and_add(&s->kobj, &slab_ktype, NULL, "%s", name);
+-	if (err) {
+-		kobject_put(&s->kobj);
++	if (err)
+ 		goto out;
+-	}
+ 
+ 	err = sysfs_create_group(&s->kobj, &slab_attr_group);
+ 	if (err)
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index d58361109066d..16db9d1ebcbf3 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -1045,16 +1045,18 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
+ 	/* Only single cluster request supported */
+ 	WARN_ON_ONCE(n_goal > 1 && size == SWAPFILE_CLUSTER);
+ 
++	spin_lock(&swap_avail_lock);
++
+ 	avail_pgs = atomic_long_read(&nr_swap_pages) / size;
+-	if (avail_pgs <= 0)
++	if (avail_pgs <= 0) {
++		spin_unlock(&swap_avail_lock);
+ 		goto noswap;
++	}
+ 
+ 	n_goal = min3((long)n_goal, (long)SWAP_BATCH, avail_pgs);
+ 
+ 	atomic_long_sub(n_goal * size, &nr_swap_pages);
+ 
+-	spin_lock(&swap_avail_lock);
+-
+ start_over:
+ 	node = numa_node_id();
+ 	plist_for_each_entry_safe(si, next, &swap_avail_heads[node], avail_lists[node]) {
+@@ -1128,14 +1130,13 @@ swp_entry_t get_swap_page_of_type(int type)
+ 
+ 	spin_lock(&si->lock);
+ 	if (si->flags & SWP_WRITEOK) {
+-		atomic_long_dec(&nr_swap_pages);
+ 		/* This is called for allocating swap entry, not cache */
+ 		offset = scan_swap_map(si, 1);
+ 		if (offset) {
++			atomic_long_dec(&nr_swap_pages);
+ 			spin_unlock(&si->lock);
+ 			return swp_entry(type, offset);
+ 		}
+-		atomic_long_inc(&nr_swap_pages);
+ 	}
+ 	spin_unlock(&si->lock);
+ fail:
+diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile
+index 66cb92136de4a..bf656432ad736 100644
+--- a/tools/bpf/resolve_btfids/Makefile
++++ b/tools/bpf/resolve_btfids/Makefile
+@@ -18,15 +18,6 @@ else
+ endif
+ 
+ # always use the host compiler
+-ifneq ($(LLVM),)
+-HOSTAR  ?= llvm-ar
+-HOSTCC  ?= clang
+-HOSTLD  ?= ld.lld
+-else
+-HOSTAR  ?= ar
+-HOSTCC  ?= gcc
+-HOSTLD  ?= ld
+-endif
+ AR       = $(HOSTAR)
+ CC       = $(HOSTCC)
+ LD       = $(HOSTLD)
+diff --git a/tools/build/Makefile b/tools/build/Makefile
+index 722f1700d96a8..bae48e6fa9952 100644
+--- a/tools/build/Makefile
++++ b/tools/build/Makefile
+@@ -15,10 +15,6 @@ endef
+ $(call allow-override,CC,$(CROSS_COMPILE)gcc)
+ $(call allow-override,LD,$(CROSS_COMPILE)ld)
+ 
+-HOSTCC ?= gcc
+-HOSTLD ?= ld
+-HOSTAR ?= ar
+-
+ export HOSTCC HOSTLD HOSTAR
+ 
+ ifeq ($(V),1)
+diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
+index 4ea9a833dde7a..5cdb19036d7f7 100644
+--- a/tools/objtool/Makefile
++++ b/tools/objtool/Makefile
+@@ -3,15 +3,6 @@ include ../scripts/Makefile.include
+ include ../scripts/Makefile.arch
+ 
+ # always use the host compiler
+-ifneq ($(LLVM),)
+-HOSTAR	?= llvm-ar
+-HOSTCC	?= clang
+-HOSTLD	?= ld.lld
+-else
+-HOSTAR	?= ar
+-HOSTCC	?= gcc
+-HOSTLD	?= ld
+-endif
+ AR	 = $(HOSTAR)
+ CC	 = $(HOSTCC)
+ LD	 = $(HOSTLD)
+diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
+index 4e1d7460574b4..9452cfb01ef19 100644
+--- a/tools/objtool/elf.c
++++ b/tools/objtool/elf.c
+@@ -354,8 +354,11 @@ static int read_symbols(struct elf *elf)
+ 
+ 	symtab = find_section_by_name(elf, ".symtab");
+ 	if (!symtab) {
+-		WARN("missing symbol table");
+-		return -1;
++		/*
++		 * A missing symbol table is actually possible if it's an empty
++		 * .o file.  This can happen for thunk_64.o.
++		 */
++		return 0;
+ 	}
+ 
+ 	symtab_shndx = find_section_by_name(elf, ".symtab_shndx");
+diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
+index 7ce3f2e8b9c74..62f3deb1d3a8b 100644
+--- a/tools/perf/Makefile.perf
++++ b/tools/perf/Makefile.perf
+@@ -175,10 +175,6 @@ endef
+ 
+ LD += $(EXTRA_LDFLAGS)
+ 
+-HOSTCC  ?= gcc
+-HOSTLD  ?= ld
+-HOSTAR  ?= ar
+-
+ PKG_CONFIG = $(CROSS_COMPILE)pkg-config
+ LLVM_CONFIG ?= llvm-config
+ 
+diff --git a/tools/power/acpi/Makefile.config b/tools/power/acpi/Makefile.config
+index 54a2857c2510a..331f6d30f4726 100644
+--- a/tools/power/acpi/Makefile.config
++++ b/tools/power/acpi/Makefile.config
+@@ -54,7 +54,6 @@ INSTALL_SCRIPT = ${INSTALL_PROGRAM}
+ CROSS = #/usr/i386-linux-uclibc/usr/bin/i386-uclibc-
+ CROSS_COMPILE ?= $(CROSS)
+ LD = $(CC)
+-HOSTCC = gcc
+ 
+ # check if compiler option is supported
+ cc-supports = ${shell if $(CC) ${1} -S -o /dev/null -x c /dev/null > /dev/null 2>&1; then echo "$(1)"; fi;}
+diff --git a/tools/scripts/Makefile.include b/tools/scripts/Makefile.include
+index a7974638561ca..1358e89cdf7d6 100644
+--- a/tools/scripts/Makefile.include
++++ b/tools/scripts/Makefile.include
+@@ -59,6 +59,16 @@ $(call allow-override,LD,$(CROSS_COMPILE)ld)
+ $(call allow-override,CXX,$(CROSS_COMPILE)g++)
+ $(call allow-override,STRIP,$(CROSS_COMPILE)strip)
+ 
++ifneq ($(LLVM),)
++HOSTAR  ?= llvm-ar
++HOSTCC  ?= clang
++HOSTLD  ?= ld.lld
++else
++HOSTAR  ?= ar
++HOSTCC  ?= gcc
++HOSTLD  ?= ld
++endif
++
+ ifeq ($(CC_NO_CLANG), 1)
+ EXTRA_WARNINGS += -Wstrict-aliasing=3
+ endif


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-03 23:43 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-02-03 23:43 UTC (permalink / raw
  To: gentoo-commits

commit:     813dc8c752e33af6a79f31239aa610cca6157169
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Feb  3 23:42:45 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Feb  3 23:42:58 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=813dc8c7

Linux patch 5.10.13

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1012_linux-5.10.13.patch | 5253 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5257 insertions(+)

diff --git a/0000_README b/0000_README
index 8c99e2c..0a7ffef 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1011_linux-5.10.12.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.12
 
+Patch:  1012_linux-5.10.13.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.13
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1012_linux-5.10.13.patch b/1012_linux-5.10.13.patch
new file mode 100644
index 0000000..59a71aa
--- /dev/null
+++ b/1012_linux-5.10.13.patch
@@ -0,0 +1,5253 @@
+diff --git a/Documentation/asm-annotations.rst b/Documentation/asm-annotations.rst
+index 32ea57483378d..76424e0431f4b 100644
+--- a/Documentation/asm-annotations.rst
++++ b/Documentation/asm-annotations.rst
+@@ -100,6 +100,11 @@ Instruction Macros
+ ~~~~~~~~~~~~~~~~~~
+ This section covers ``SYM_FUNC_*`` and ``SYM_CODE_*`` enumerated above.
+ 
++``objtool`` requires that all code must be contained in an ELF symbol. Symbol
++names that have a ``.L`` prefix do not emit symbol table entries. ``.L``
++prefixed symbols can be used within a code region, but should be avoided for
++denoting a range of code via ``SYM_*_START/END`` annotations.
++
+ * ``SYM_FUNC_START`` and ``SYM_FUNC_START_LOCAL`` are supposed to be **the
+   most frequent markings**. They are used for functions with standard calling
+   conventions -- global and local. Like in C, they both align the functions to
+diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
+index e00a66d723728..4ba0df574eb25 100644
+--- a/Documentation/virt/kvm/api.rst
++++ b/Documentation/virt/kvm/api.rst
+@@ -1264,6 +1264,9 @@ field userspace_addr, which must point at user addressable memory for
+ the entire memory slot size.  Any object may back this memory, including
+ anonymous memory, ordinary files, and hugetlbfs.
+ 
++On architectures that support a form of address tagging, userspace_addr must
++be an untagged address.
++
+ It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr
+ be identical.  This allows large pages in the guest to be backed by large
+ pages in the host.
+@@ -1316,7 +1319,7 @@ documentation when it pops into existence).
+ 
+ :Capability: KVM_CAP_ENABLE_CAP_VM
+ :Architectures: all
+-:Type: vcpu ioctl
++:Type: vm ioctl
+ :Parameters: struct kvm_enable_cap (in)
+ :Returns: 0 on success; -1 on error
+ 
+diff --git a/Makefile b/Makefile
+index a6b2e64bcf6c7..a2d5e953ea40a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/boot/compressed/atags_to_fdt.c b/arch/arm/boot/compressed/atags_to_fdt.c
+index 8452753efebe5..31927d2fe2972 100644
+--- a/arch/arm/boot/compressed/atags_to_fdt.c
++++ b/arch/arm/boot/compressed/atags_to_fdt.c
+@@ -15,7 +15,8 @@ static int node_offset(void *fdt, const char *node_path)
+ {
+ 	int offset = fdt_path_offset(fdt, node_path);
+ 	if (offset == -FDT_ERR_NOTFOUND)
+-		offset = fdt_add_subnode(fdt, 0, node_path);
++		/* Add the node to root if not found, dropping the leading '/' */
++		offset = fdt_add_subnode(fdt, 0, node_path + 1);
+ 	return offset;
+ }
+ 
+diff --git a/arch/arm/boot/dts/imx6q-tbs2910.dts b/arch/arm/boot/dts/imx6q-tbs2910.dts
+index 861e05d53157e..343364d3e4f7d 100644
+--- a/arch/arm/boot/dts/imx6q-tbs2910.dts
++++ b/arch/arm/boot/dts/imx6q-tbs2910.dts
+@@ -16,6 +16,13 @@
+ 		stdout-path = &uart1;
+ 	};
+ 
++	aliases {
++		mmc0 = &usdhc2;
++		mmc1 = &usdhc3;
++		mmc2 = &usdhc4;
++		/delete-property/ mmc3;
++	};
++
+ 	memory@10000000 {
+ 		device_type = "memory";
+ 		reg = <0x10000000 0x80000000>;
+diff --git a/arch/arm/boot/dts/imx6qdl-gw52xx.dtsi b/arch/arm/boot/dts/imx6qdl-gw52xx.dtsi
+index 736074f1c3ef9..959d8ac2e393b 100644
+--- a/arch/arm/boot/dts/imx6qdl-gw52xx.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-gw52xx.dtsi
+@@ -418,7 +418,7 @@
+ 
+ 			/* VDD_AUD_1P8: Audio codec */
+ 			reg_aud_1p8v: ldo3 {
+-				regulator-name = "vdd1p8";
++				regulator-name = "vdd1p8a";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+ 				regulator-boot-on;
+diff --git a/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi b/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
+index 24f793ca28867..92f9977d14822 100644
+--- a/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
+@@ -137,7 +137,7 @@
+ 
+ 	lcd_backlight: lcd-backlight {
+ 		compatible = "pwm-backlight";
+-		pwms = <&pwm4 0 5000000>;
++		pwms = <&pwm4 0 5000000 0>;
+ 		pwm-names = "LCD_BKLT_PWM";
+ 
+ 		brightness-levels = <0 10 20 30 40 50 60 70 80 90 100>;
+@@ -167,7 +167,7 @@
+ 		i2c-gpio,delay-us = <2>; /* ~100 kHz */
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		status = "disabld";
++		status = "disabled";
+ 	};
+ 
+ 	i2c_cam: i2c-gpio-cam {
+@@ -179,7 +179,7 @@
+ 		i2c-gpio,delay-us = <2>; /* ~100 kHz */
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		status = "disabld";
++		status = "disabled";
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-sr-som.dtsi b/arch/arm/boot/dts/imx6qdl-sr-som.dtsi
+index b06577808ff4e..7e4e5fd0143a1 100644
+--- a/arch/arm/boot/dts/imx6qdl-sr-som.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-sr-som.dtsi
+@@ -53,7 +53,6 @@
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_microsom_enet_ar8035>;
+-	phy-handle = <&phy>;
+ 	phy-mode = "rgmii-id";
+ 	phy-reset-duration = <2>;
+ 	phy-reset-gpios = <&gpio4 15 GPIO_ACTIVE_LOW>;
+@@ -63,10 +62,19 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		phy: ethernet-phy@0 {
++		/*
++		 * The PHY can appear at either address 0 or 4 due to the
++		 * configuration (LED) pin not being pulled sufficiently.
++		 */
++		ethernet-phy@0 {
+ 			reg = <0>;
+ 			qca,clk-out-frequency = <125000000>;
+ 		};
++
++		ethernet-phy@4 {
++			reg = <4>;
++			qca,clk-out-frequency = <125000000>;
++		};
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/ste-db8500.dtsi b/arch/arm/boot/dts/ste-db8500.dtsi
+index d309fad32229d..344d29853bf76 100644
+--- a/arch/arm/boot/dts/ste-db8500.dtsi
++++ b/arch/arm/boot/dts/ste-db8500.dtsi
+@@ -12,4 +12,42 @@
+ 					    200000 0>;
+ 		};
+ 	};
++
++	reserved-memory {
++		#address-cells = <1>;
++		#size-cells = <1>;
++		ranges;
++
++		/* Modem trace memory */
++		ram@06000000 {
++			reg = <0x06000000 0x00f00000>;
++			no-map;
++		};
++
++		/* Modem shared memory */
++		ram@06f00000 {
++			reg = <0x06f00000 0x00100000>;
++			no-map;
++		};
++
++		/* Modem private memory */
++		ram@07000000 {
++			reg = <0x07000000 0x01000000>;
++			no-map;
++		};
++
++		/*
++		 * Initial Secure Software ISSW memory
++		 *
++		 * This is probably only used if the kernel tries
++		 * to actually call into trustzone to run secure
++		 * applications, which the mainline kernel probably
++		 * will not do on this old chipset. But you can never
++		 * be too careful, so reserve this memory anyway.
++		 */
++		ram@17f00000 {
++			reg = <0x17f00000 0x00100000>;
++			no-map;
++		};
++	};
+ };
+diff --git a/arch/arm/boot/dts/ste-db8520.dtsi b/arch/arm/boot/dts/ste-db8520.dtsi
+index 48bd8728ae27f..287804e9e1836 100644
+--- a/arch/arm/boot/dts/ste-db8520.dtsi
++++ b/arch/arm/boot/dts/ste-db8520.dtsi
+@@ -12,4 +12,42 @@
+ 					    200000 0>;
+ 		};
+ 	};
++
++	reserved-memory {
++		#address-cells = <1>;
++		#size-cells = <1>;
++		ranges;
++
++		/* Modem trace memory */
++		ram@06000000 {
++			reg = <0x06000000 0x00f00000>;
++			no-map;
++		};
++
++		/* Modem shared memory */
++		ram@06f00000 {
++			reg = <0x06f00000 0x00100000>;
++			no-map;
++		};
++
++		/* Modem private memory */
++		ram@07000000 {
++			reg = <0x07000000 0x01000000>;
++			no-map;
++		};
++
++		/*
++		 * Initial Secure Software ISSW memory
++		 *
++		 * This is probably only used if the kernel tries
++		 * to actually call into trustzone to run secure
++		 * applications, which the mainline kernel probably
++		 * will not do on this old chipset. But you can never
++		 * be too careful, so reserve this memory anyway.
++		 */
++		ram@17f00000 {
++			reg = <0x17f00000 0x00100000>;
++			no-map;
++		};
++	};
+ };
+diff --git a/arch/arm/boot/dts/ste-db9500.dtsi b/arch/arm/boot/dts/ste-db9500.dtsi
+new file mode 100644
+index 0000000000000..0afff703191c6
+--- /dev/null
++++ b/arch/arm/boot/dts/ste-db9500.dtsi
+@@ -0,0 +1,35 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++
++#include "ste-dbx5x0.dtsi"
++
++/ {
++	cpus {
++		cpu@300 {
++			/* cpufreq controls */
++			operating-points = <1152000 0
++					    800000 0
++					    400000 0
++					    200000 0>;
++		};
++	};
++
++	reserved-memory {
++		#address-cells = <1>;
++		#size-cells = <1>;
++		ranges;
++
++		/*
++		 * Initial Secure Software ISSW memory
++		 *
++		 * This is probably only used if the kernel tries
++		 * to actually call into trustzone to run secure
++		 * applications, which the mainline kernel probably
++		 * will not do on this old chipset. But you can never
++		 * be too careful, so reserve this memory anyway.
++		 */
++		ram@17f00000 {
++			reg = <0x17f00000 0x00100000>;
++			no-map;
++		};
++	};
++};
+diff --git a/arch/arm/boot/dts/ste-snowball.dts b/arch/arm/boot/dts/ste-snowball.dts
+index be90e73c923ec..27d8a07718a00 100644
+--- a/arch/arm/boot/dts/ste-snowball.dts
++++ b/arch/arm/boot/dts/ste-snowball.dts
+@@ -4,7 +4,7 @@
+  */
+ 
+ /dts-v1/;
+-#include "ste-db8500.dtsi"
++#include "ste-db9500.dtsi"
+ #include "ste-href-ab8500.dtsi"
+ #include "ste-href-family-pinctrl.dtsi"
+ 
+diff --git a/arch/arm/mach-imx/suspend-imx6.S b/arch/arm/mach-imx/suspend-imx6.S
+index 1eabf2d2834be..e06f946b75b96 100644
+--- a/arch/arm/mach-imx/suspend-imx6.S
++++ b/arch/arm/mach-imx/suspend-imx6.S
+@@ -67,6 +67,7 @@
+ #define MX6Q_CCM_CCR	0x0
+ 
+ 	.align 3
++	.arm
+ 
+ 	.macro  sync_l2_cache
+ 
+diff --git a/arch/arm64/boot/dts/broadcom/stingray/stingray-usb.dtsi b/arch/arm64/boot/dts/broadcom/stingray/stingray-usb.dtsi
+index aef8f2b00778d..5401a646c8406 100644
+--- a/arch/arm64/boot/dts/broadcom/stingray/stingray-usb.dtsi
++++ b/arch/arm64/boot/dts/broadcom/stingray/stingray-usb.dtsi
+@@ -4,11 +4,16 @@
+  */
+ 	usb {
+ 		compatible = "simple-bus";
+-		dma-ranges;
+ 		#address-cells = <2>;
+ 		#size-cells = <2>;
+ 		ranges = <0x0 0x0 0x0 0x68500000 0x0 0x00400000>;
+ 
++		/*
++		 * Internally, USB bus to the interconnect can only address up
++		 * to 40-bit
++		 */
++		dma-ranges = <0 0 0 0 0x100 0x0>;
++
+ 		usbphy0: usb-phy@0 {
+ 			compatible = "brcm,sr-usb-combo-phy";
+ 			reg = <0x0 0x00000000 0x0 0x100>;
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+index 33aa0efa2293a..62f4dcb96e70d 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+@@ -93,7 +93,7 @@
+ 	reboot {
+ 		compatible ="syscon-reboot";
+ 		regmap = <&rst>;
+-		offset = <0xb0>;
++		offset = <0>;
+ 		mask = <0x02>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index 6038f66aefc10..03ef0e5f909e4 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -259,7 +259,7 @@
+ 				#gpio-cells = <2>;
+ 				interrupt-controller;
+ 				#interrupt-cells = <2>;
+-				gpio-ranges = <&iomuxc 0 56 26>, <&iomuxc 0 144 4>;
++				gpio-ranges = <&iomuxc 0 56 26>, <&iomuxc 26 144 4>;
+ 			};
+ 
+ 			gpio4: gpio@30230000 {
+diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
+index 2ed5ef8f274b1..2dd164bb1c5a9 100644
+--- a/arch/arm64/kvm/pmu-emul.c
++++ b/arch/arm64/kvm/pmu-emul.c
+@@ -788,7 +788,7 @@ u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
+ {
+ 	unsigned long *bmap = vcpu->kvm->arch.pmu_filter;
+ 	u64 val, mask = 0;
+-	int base, i;
++	int base, i, nr_events;
+ 
+ 	if (!pmceid1) {
+ 		val = read_sysreg(pmceid0_el0);
+@@ -801,13 +801,17 @@ u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
+ 	if (!bmap)
+ 		return val;
+ 
++	nr_events = kvm_pmu_event_mask(vcpu->kvm) + 1;
++
+ 	for (i = 0; i < 32; i += 8) {
+ 		u64 byte;
+ 
+ 		byte = bitmap_get_value8(bmap, base + i);
+ 		mask |= byte << i;
+-		byte = bitmap_get_value8(bmap, 0x4000 + base + i);
+-		mask |= byte << (32 + i);
++		if (nr_events >= (0x4000 + base + 32)) {
++			byte = bitmap_get_value8(bmap, 0x4000 + base + i);
++			mask |= byte << (32 + i);
++		}
+ 	}
+ 
+ 	return val & mask;
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index b234e8154cbd4..04dc17d52ac2d 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -202,9 +202,8 @@ config PREFETCH
+ 	depends on PA8X00 || PA7200
+ 
+ config MLONGCALLS
+-	bool "Enable the -mlong-calls compiler option for big kernels"
+-	default y if !MODULES || UBSAN || FTRACE
+-	default n
++	def_bool y if !MODULES || UBSAN || FTRACE
++	bool "Enable the -mlong-calls compiler option for big kernels" if MODULES && !UBSAN && !FTRACE
+ 	depends on PA8X00
+ 	help
+ 	  If you configure the kernel to include many drivers built-in instead
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index f6f28e41bb5e0..5d8123eb38ec5 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -997,10 +997,17 @@ intr_do_preempt:
+ 	bb,<,n	%r20, 31 - PSW_SM_I, intr_restore
+ 	nop
+ 
++	/* ssm PSW_SM_I done later in intr_restore */
++#ifdef CONFIG_MLONGCALLS
++	ldil	L%intr_restore, %r2
++	load32	preempt_schedule_irq, %r1
++	bv	%r0(%r1)
++	ldo	R%intr_restore(%r2), %r2
++#else
++	ldil	L%intr_restore, %r1
+ 	BL	preempt_schedule_irq, %r2
+-	nop
+-
+-	b,n	intr_restore		/* ssm PSW_SM_I done by intr_restore */
++	ldo	R%intr_restore(%r1), %r2
++#endif
+ #endif /* CONFIG_PREEMPTION */
+ 
+ 	/*
+diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
+index 6b1eca53e36cc..cc7a6271b6b4e 100644
+--- a/arch/powerpc/kernel/irq.c
++++ b/arch/powerpc/kernel/irq.c
+@@ -180,13 +180,18 @@ void notrace restore_interrupts(void)
+ 
+ void replay_soft_interrupts(void)
+ {
++	struct pt_regs regs;
++
+ 	/*
+-	 * We use local_paca rather than get_paca() to avoid all
+-	 * the debug_smp_processor_id() business in this low level
+-	 * function
++	 * Be careful here, calling these interrupt handlers can cause
++	 * softirqs to be raised, which they may run when calling irq_exit,
++	 * which will cause local_irq_enable() to be run, which can then
++	 * recurse into this function. Don't keep any state across
++	 * interrupt handler calls which may change underneath us.
++	 *
++	 * We use local_paca rather than get_paca() to avoid all the
++	 * debug_smp_processor_id() business in this low level function.
+ 	 */
+-	unsigned char happened = local_paca->irq_happened;
+-	struct pt_regs regs;
+ 
+ 	ppc_save_regs(&regs);
+ 	regs.softe = IRQS_ENABLED;
+@@ -209,7 +214,7 @@ again:
+ 	 * This is a higher priority interrupt than the others, so
+ 	 * replay it first.
+ 	 */
+-	if (IS_ENABLED(CONFIG_PPC_BOOK3S) && (happened & PACA_IRQ_HMI)) {
++	if (IS_ENABLED(CONFIG_PPC_BOOK3S) && (local_paca->irq_happened & PACA_IRQ_HMI)) {
+ 		local_paca->irq_happened &= ~PACA_IRQ_HMI;
+ 		regs.trap = 0xe60;
+ 		handle_hmi_exception(&regs);
+@@ -217,7 +222,7 @@ again:
+ 			hard_irq_disable();
+ 	}
+ 
+-	if (happened & PACA_IRQ_DEC) {
++	if (local_paca->irq_happened & PACA_IRQ_DEC) {
+ 		local_paca->irq_happened &= ~PACA_IRQ_DEC;
+ 		regs.trap = 0x900;
+ 		timer_interrupt(&regs);
+@@ -225,7 +230,7 @@ again:
+ 			hard_irq_disable();
+ 	}
+ 
+-	if (happened & PACA_IRQ_EE) {
++	if (local_paca->irq_happened & PACA_IRQ_EE) {
+ 		local_paca->irq_happened &= ~PACA_IRQ_EE;
+ 		regs.trap = 0x500;
+ 		do_IRQ(&regs);
+@@ -233,7 +238,7 @@ again:
+ 			hard_irq_disable();
+ 	}
+ 
+-	if (IS_ENABLED(CONFIG_PPC_DOORBELL) && (happened & PACA_IRQ_DBELL)) {
++	if (IS_ENABLED(CONFIG_PPC_DOORBELL) && (local_paca->irq_happened & PACA_IRQ_DBELL)) {
+ 		local_paca->irq_happened &= ~PACA_IRQ_DBELL;
+ 		if (IS_ENABLED(CONFIG_PPC_BOOK3E))
+ 			regs.trap = 0x280;
+@@ -245,7 +250,7 @@ again:
+ 	}
+ 
+ 	/* Book3E does not support soft-masking PMI interrupts */
+-	if (IS_ENABLED(CONFIG_PPC_BOOK3S) && (happened & PACA_IRQ_PMI)) {
++	if (IS_ENABLED(CONFIG_PPC_BOOK3S) && (local_paca->irq_happened & PACA_IRQ_PMI)) {
+ 		local_paca->irq_happened &= ~PACA_IRQ_PMI;
+ 		regs.trap = 0xf00;
+ 		performance_monitor_exception(&regs);
+@@ -253,8 +258,7 @@ again:
+ 			hard_irq_disable();
+ 	}
+ 
+-	happened = local_paca->irq_happened;
+-	if (happened & ~PACA_IRQ_HARD_DIS) {
++	if (local_paca->irq_happened & ~PACA_IRQ_HARD_DIS) {
+ 		/*
+ 		 * We are responding to the next interrupt, so interrupt-off
+ 		 * latencies should be reset here.
+diff --git a/arch/s390/boot/uv.c b/arch/s390/boot/uv.c
+index a15c033f53ca4..87641dd65ccf9 100644
+--- a/arch/s390/boot/uv.c
++++ b/arch/s390/boot/uv.c
+@@ -35,7 +35,7 @@ void uv_query_info(void)
+ 		uv_info.guest_cpu_stor_len = uvcb.cpu_stor_len;
+ 		uv_info.max_sec_stor_addr = ALIGN(uvcb.max_guest_stor_addr, PAGE_SIZE);
+ 		uv_info.max_num_sec_conf = uvcb.max_num_sec_conf;
+-		uv_info.max_guest_cpus = uvcb.max_guest_cpus;
++		uv_info.max_guest_cpu_id = uvcb.max_guest_cpu_id;
+ 	}
+ 
+ #ifdef CONFIG_PROTECTED_VIRTUALIZATION_GUEST
+diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
+index 0325fc0469b7b..7b98d4caee779 100644
+--- a/arch/s390/include/asm/uv.h
++++ b/arch/s390/include/asm/uv.h
+@@ -96,7 +96,7 @@ struct uv_cb_qui {
+ 	u32 max_num_sec_conf;
+ 	u64 max_guest_stor_addr;
+ 	u8  reserved88[158 - 136];
+-	u16 max_guest_cpus;
++	u16 max_guest_cpu_id;
+ 	u8  reserveda0[200 - 160];
+ } __packed __aligned(8);
+ 
+@@ -273,7 +273,7 @@ struct uv_info {
+ 	unsigned long guest_cpu_stor_len;
+ 	unsigned long max_sec_stor_addr;
+ 	unsigned int max_num_sec_conf;
+-	unsigned short max_guest_cpus;
++	unsigned short max_guest_cpu_id;
+ };
+ 
+ extern struct uv_info uv_info;
+diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
+index 883bfed9f5c2c..b2d2ad1530676 100644
+--- a/arch/s390/kernel/uv.c
++++ b/arch/s390/kernel/uv.c
+@@ -368,7 +368,7 @@ static ssize_t uv_query_max_guest_cpus(struct kobject *kobj,
+ 				       struct kobj_attribute *attr, char *page)
+ {
+ 	return scnprintf(page, PAGE_SIZE, "%d\n",
+-			uv_info.max_guest_cpus);
++			uv_info.max_guest_cpu_id + 1);
+ }
+ 
+ static struct kobj_attribute uv_query_max_guest_cpus_attr =
+diff --git a/arch/x86/entry/thunk_64.S b/arch/x86/entry/thunk_64.S
+index ccd32877a3c41..c9a9fbf1655f3 100644
+--- a/arch/x86/entry/thunk_64.S
++++ b/arch/x86/entry/thunk_64.S
+@@ -31,7 +31,7 @@ SYM_FUNC_START_NOALIGN(\name)
+ 	.endif
+ 
+ 	call \func
+-	jmp  .L_restore
++	jmp  __thunk_restore
+ SYM_FUNC_END(\name)
+ 	_ASM_NOKPROBE(\name)
+ 	.endm
+@@ -44,7 +44,7 @@ SYM_FUNC_END(\name)
+ #endif
+ 
+ #ifdef CONFIG_PREEMPTION
+-SYM_CODE_START_LOCAL_NOALIGN(.L_restore)
++SYM_CODE_START_LOCAL_NOALIGN(__thunk_restore)
+ 	popq %r11
+ 	popq %r10
+ 	popq %r9
+@@ -56,6 +56,6 @@ SYM_CODE_START_LOCAL_NOALIGN(.L_restore)
+ 	popq %rdi
+ 	popq %rbp
+ 	ret
+-	_ASM_NOKPROBE(.L_restore)
+-SYM_CODE_END(.L_restore)
++	_ASM_NOKPROBE(__thunk_restore)
++SYM_CODE_END(__thunk_restore)
+ #endif
+diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
+index b2442eb0ac2f8..eb01c2618a9df 100644
+--- a/arch/x86/include/asm/idtentry.h
++++ b/arch/x86/include/asm/idtentry.h
+@@ -616,6 +616,7 @@ DECLARE_IDTENTRY_VC(X86_TRAP_VC,	exc_vmm_communication);
+ 
+ #ifdef CONFIG_XEN_PV
+ DECLARE_IDTENTRY_XENCB(X86_TRAP_OTHER,	exc_xen_hypervisor_callback);
++DECLARE_IDTENTRY_RAW(X86_TRAP_OTHER,	exc_xen_unknown_trap);
+ #endif
+ 
+ /* Device interrupts common/spurious */
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index 9e4c226dbf7d9..65e40acde71aa 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -199,6 +199,10 @@ static bool nested_svm_vmrun_msrpm(struct vcpu_svm *svm)
+ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_svm *svm = to_svm(vcpu);
++
++	if (WARN_ON(!is_guest_mode(vcpu)))
++		return true;
++
+ 	if (!nested_svm_vmrun_msrpm(svm)) {
+ 		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+ 		vcpu->run->internal.suberror =
+@@ -595,6 +599,8 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
+ 	svm->nested.vmcb12_gpa = 0;
+ 	WARN_ON_ONCE(svm->nested.nested_run_pending);
+ 
++	kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, &svm->vcpu);
++
+ 	/* in case we halted in L2 */
+ 	svm->vcpu.arch.mp_state = KVM_MP_STATE_RUNNABLE;
+ 
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 89af692deb7ef..f3eca45267781 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -3123,13 +3123,9 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
+ 	return 0;
+ }
+ 
+-static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
++static bool nested_get_evmcs_page(struct kvm_vcpu *vcpu)
+ {
+-	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+-	struct kvm_host_map *map;
+-	struct page *page;
+-	u64 hpa;
+ 
+ 	/*
+ 	 * hv_evmcs may end up being not mapped after migration (when
+@@ -3152,6 +3148,17 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
+ 		}
+ 	}
+ 
++	return true;
++}
++
++static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
++{
++	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
++	struct vcpu_vmx *vmx = to_vmx(vcpu);
++	struct kvm_host_map *map;
++	struct page *page;
++	u64 hpa;
++
+ 	if (nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
+ 		/*
+ 		 * Translate L1 physical address to host physical
+@@ -3220,6 +3227,18 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
+ 		exec_controls_setbit(vmx, CPU_BASED_USE_MSR_BITMAPS);
+ 	else
+ 		exec_controls_clearbit(vmx, CPU_BASED_USE_MSR_BITMAPS);
++
++	return true;
++}
++
++static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
++{
++	if (!nested_get_evmcs_page(vcpu))
++		return false;
++
++	if (is_guest_mode(vcpu) && !nested_get_vmcs12_pages(vcpu))
++		return false;
++
+ 	return true;
+ }
+ 
+@@ -4416,6 +4435,8 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
+ 	/* trying to cancel vmlaunch/vmresume is a bug */
+ 	WARN_ON_ONCE(vmx->nested.nested_run_pending);
+ 
++	kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
++
+ 	/* Service the TLB flush request for L2 before switching to L1. */
+ 	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
+ 		kvm_vcpu_flush_tlb_current(vcpu);
+@@ -6049,11 +6070,14 @@ static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
+ 	if (is_guest_mode(vcpu)) {
+ 		sync_vmcs02_to_vmcs12(vcpu, vmcs12);
+ 		sync_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
+-	} else if (!vmx->nested.need_vmcs12_to_shadow_sync) {
+-		if (vmx->nested.hv_evmcs)
+-			copy_enlightened_to_vmcs12(vmx);
+-		else if (enable_shadow_vmcs)
+-			copy_shadow_to_vmcs12(vmx);
++	} else  {
++		copy_vmcs02_to_vmcs12_rare(vcpu, get_vmcs12(vcpu));
++		if (!vmx->nested.need_vmcs12_to_shadow_sync) {
++			if (vmx->nested.hv_evmcs)
++				copy_enlightened_to_vmcs12(vmx);
++			else if (enable_shadow_vmcs)
++				copy_shadow_to_vmcs12(vmx);
++		}
+ 	}
+ 
+ 	BUILD_BUG_ON(sizeof(user_vmx_nested_state->vmcs12) < VMCS12_SIZE);
+@@ -6573,7 +6597,7 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
+ 	.hv_timer_pending = nested_vmx_preemption_timer_pending,
+ 	.get_state = vmx_get_nested_state,
+ 	.set_state = vmx_set_nested_state,
+-	.get_nested_state_pages = nested_get_vmcs12_pages,
++	.get_nested_state_pages = vmx_get_nested_state_pages,
+ 	.write_log_dirty = nested_vmx_write_pml_buffer,
+ 	.enable_evmcs = nested_enable_evmcs,
+ 	.get_evmcs_version = nested_get_evmcs_version,
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index a886a47daebda..cdf5f34518f43 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -29,7 +29,7 @@ static struct kvm_event_hw_type_mapping intel_arch_events[] = {
+ 	[4] = { 0x2e, 0x41, PERF_COUNT_HW_CACHE_MISSES },
+ 	[5] = { 0xc4, 0x00, PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
+ 	[6] = { 0xc5, 0x00, PERF_COUNT_HW_BRANCH_MISSES },
+-	[7] = { 0x00, 0x30, PERF_COUNT_HW_REF_CPU_CYCLES },
++	[7] = { 0x00, 0x03, PERF_COUNT_HW_REF_CPU_CYCLES },
+ };
+ 
+ /* mapping between fixed pmc index and intel_arch_events array */
+@@ -345,7 +345,9 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ 
+ 	pmu->nr_arch_gp_counters = min_t(int, eax.split.num_counters,
+ 					 x86_pmu.num_counters_gp);
++	eax.split.bit_width = min_t(int, eax.split.bit_width, x86_pmu.bit_width_gp);
+ 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << eax.split.bit_width) - 1;
++	eax.split.mask_length = min_t(int, eax.split.mask_length, x86_pmu.events_mask_len);
+ 	pmu->available_event_types = ~entry->ebx &
+ 					((1ull << eax.split.mask_length) - 1);
+ 
+@@ -355,6 +357,8 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ 		pmu->nr_arch_fixed_counters =
+ 			min_t(int, edx.split.num_counters_fixed,
+ 			      x86_pmu.num_counters_fixed);
++		edx.split.bit_width_fixed = min_t(int,
++			edx.split.bit_width_fixed, x86_pmu.bit_width_fixed);
+ 		pmu->counter_bitmask[KVM_PMC_FIXED] =
+ 			((u64)1 << edx.split.bit_width_fixed) - 1;
+ 	}
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index e545a8a613b19..0a302685e4d62 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -105,6 +105,7 @@ static u64 __read_mostly cr4_reserved_bits = CR4_RESERVED_BITS;
+ 
+ static void update_cr8_intercept(struct kvm_vcpu *vcpu);
+ static void process_nmi(struct kvm_vcpu *vcpu);
++static void process_smi(struct kvm_vcpu *vcpu);
+ static void enter_smm(struct kvm_vcpu *vcpu);
+ static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
+ static void store_regs(struct kvm_vcpu *vcpu);
+@@ -4199,6 +4200,9 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
+ {
+ 	process_nmi(vcpu);
+ 
++	if (kvm_check_request(KVM_REQ_SMI, vcpu))
++		process_smi(vcpu);
++
+ 	/*
+ 	 * In guest mode, payload delivery should be deferred,
+ 	 * so that the L1 hypervisor can intercept #PF before
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 4409306364dc3..9a5a50cdaab59 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -583,6 +583,13 @@ DEFINE_IDTENTRY_RAW(xenpv_exc_debug)
+ 		exc_debug(regs);
+ }
+ 
++DEFINE_IDTENTRY_RAW(exc_xen_unknown_trap)
++{
++	/* This should never happen and there is no way to handle it. */
++	pr_err("Unknown trap in Xen PV mode.");
++	BUG();
++}
++
+ struct trap_array_entry {
+ 	void (*orig)(void);
+ 	void (*xen)(void);
+@@ -631,6 +638,7 @@ static bool __ref get_trap_addr(void **addr, unsigned int ist)
+ {
+ 	unsigned int nr;
+ 	bool ist_okay = false;
++	bool found = false;
+ 
+ 	/*
+ 	 * Replace trap handler addresses by Xen specific ones.
+@@ -645,6 +653,7 @@ static bool __ref get_trap_addr(void **addr, unsigned int ist)
+ 		if (*addr == entry->orig) {
+ 			*addr = entry->xen;
+ 			ist_okay = entry->ist_okay;
++			found = true;
+ 			break;
+ 		}
+ 	}
+@@ -655,9 +664,13 @@ static bool __ref get_trap_addr(void **addr, unsigned int ist)
+ 		nr = (*addr - (void *)early_idt_handler_array[0]) /
+ 		     EARLY_IDT_HANDLER_SIZE;
+ 		*addr = (void *)xen_early_idt_handler_array[nr];
++		found = true;
+ 	}
+ 
+-	if (WARN_ON(ist != 0 && !ist_okay))
++	if (!found)
++		*addr = (void *)xen_asm_exc_xen_unknown_trap;
++
++	if (WARN_ON(found && ist != 0 && !ist_okay))
+ 		return false;
+ 
+ 	return true;
+diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
+index 1cb0e84b91610..53cf8aa35032d 100644
+--- a/arch/x86/xen/xen-asm.S
++++ b/arch/x86/xen/xen-asm.S
+@@ -178,6 +178,7 @@ xen_pv_trap asm_exc_simd_coprocessor_error
+ #ifdef CONFIG_IA32_EMULATION
+ xen_pv_trap entry_INT80_compat
+ #endif
++xen_pv_trap asm_exc_xen_unknown_trap
+ xen_pv_trap asm_exc_xen_hypervisor_callback
+ 
+ 	__INIT
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index a52703c98b773..d2359f7cfd5f2 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -303,7 +303,7 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
+ 		struct request_queue *q = hctx->queue;
+ 		struct blk_mq_tag_set *set = q->tag_set;
+ 
+-		if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &q->queue_flags))
++		if (!test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags))
+ 			return true;
+ 		users = atomic_read(&set->active_queues_shared_sbitmap);
+ 	} else {
+diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
+index 770d84071a328..94f34109695c9 100644
+--- a/drivers/acpi/arm64/iort.c
++++ b/drivers/acpi/arm64/iort.c
+@@ -1107,6 +1107,11 @@ static int nc_dma_get_range(struct device *dev, u64 *size)
+ 
+ 	ncomp = (struct acpi_iort_named_component *)node->node_data;
+ 
++	if (!ncomp->memory_address_limit) {
++		pr_warn(FW_BUG "Named component missing memory address limit\n");
++		return -EINVAL;
++	}
++
+ 	*size = ncomp->memory_address_limit >= 64 ? U64_MAX :
+ 			1ULL<<ncomp->memory_address_limit;
+ 
+@@ -1126,6 +1131,11 @@ static int rc_dma_get_range(struct device *dev, u64 *size)
+ 
+ 	rc = (struct acpi_iort_root_complex *)node->node_data;
+ 
++	if (!rc->memory_address_limit) {
++		pr_warn(FW_BUG "Root complex missing memory address limit\n");
++		return -EINVAL;
++	}
++
+ 	*size = rc->memory_address_limit >= 64 ? U64_MAX :
+ 			1ULL<<rc->memory_address_limit;
+ 
+@@ -1173,8 +1183,8 @@ void iort_dma_setup(struct device *dev, u64 *dma_addr, u64 *dma_size)
+ 		end = dmaaddr + size - 1;
+ 		mask = DMA_BIT_MASK(ilog2(end) + 1);
+ 		dev->bus_dma_limit = end;
+-		dev->coherent_dma_mask = mask;
+-		*dev->dma_mask = mask;
++		dev->coherent_dma_mask = min(dev->coherent_dma_mask, mask);
++		*dev->dma_mask = min(*dev->dma_mask, mask);
+ 	}
+ 
+ 	*dma_addr = dmaaddr;
+diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c
+index 96869f1538b93..bfca116482b8b 100644
+--- a/drivers/acpi/device_sysfs.c
++++ b/drivers/acpi/device_sysfs.c
+@@ -251,20 +251,12 @@ int __acpi_device_uevent_modalias(struct acpi_device *adev,
+ 	if (add_uevent_var(env, "MODALIAS="))
+ 		return -ENOMEM;
+ 
+-	len = create_pnp_modalias(adev, &env->buf[env->buflen - 1],
+-				  sizeof(env->buf) - env->buflen);
+-	if (len < 0)
+-		return len;
+-
+-	env->buflen += len;
+-	if (!adev->data.of_compatible)
+-		return 0;
+-
+-	if (len > 0 && add_uevent_var(env, "MODALIAS="))
+-		return -ENOMEM;
+-
+-	len = create_of_modalias(adev, &env->buf[env->buflen - 1],
+-				 sizeof(env->buf) - env->buflen);
++	if (adev->data.of_compatible)
++		len = create_of_modalias(adev, &env->buf[env->buflen - 1],
++					 sizeof(env->buf) - env->buflen);
++	else
++		len = create_pnp_modalias(adev, &env->buf[env->buflen - 1],
++					  sizeof(env->buf) - env->buflen);
+ 	if (len < 0)
+ 		return len;
+ 
+diff --git a/drivers/acpi/thermal.c b/drivers/acpi/thermal.c
+index 12c0ece746f04..859b1de31ddc0 100644
+--- a/drivers/acpi/thermal.c
++++ b/drivers/acpi/thermal.c
+@@ -174,6 +174,8 @@ struct acpi_thermal {
+ 	struct thermal_zone_device *thermal_zone;
+ 	int kelvin_offset;	/* in millidegrees */
+ 	struct work_struct thermal_check_work;
++	struct mutex thermal_check_lock;
++	refcount_t thermal_check_count;
+ };
+ 
+ /* --------------------------------------------------------------------------
+@@ -495,14 +497,6 @@ static int acpi_thermal_get_trip_points(struct acpi_thermal *tz)
+ 	return 0;
+ }
+ 
+-static void acpi_thermal_check(void *data)
+-{
+-	struct acpi_thermal *tz = data;
+-
+-	thermal_zone_device_update(tz->thermal_zone,
+-				   THERMAL_EVENT_UNSPECIFIED);
+-}
+-
+ /* sys I/F for generic thermal sysfs support */
+ 
+ static int thermal_get_temp(struct thermal_zone_device *thermal, int *temp)
+@@ -900,6 +894,12 @@ static void acpi_thermal_unregister_thermal_zone(struct acpi_thermal *tz)
+                                  Driver Interface
+    -------------------------------------------------------------------------- */
+ 
++static void acpi_queue_thermal_check(struct acpi_thermal *tz)
++{
++	if (!work_pending(&tz->thermal_check_work))
++		queue_work(acpi_thermal_pm_queue, &tz->thermal_check_work);
++}
++
+ static void acpi_thermal_notify(struct acpi_device *device, u32 event)
+ {
+ 	struct acpi_thermal *tz = acpi_driver_data(device);
+@@ -910,17 +910,17 @@ static void acpi_thermal_notify(struct acpi_device *device, u32 event)
+ 
+ 	switch (event) {
+ 	case ACPI_THERMAL_NOTIFY_TEMPERATURE:
+-		acpi_thermal_check(tz);
++		acpi_queue_thermal_check(tz);
+ 		break;
+ 	case ACPI_THERMAL_NOTIFY_THRESHOLDS:
+ 		acpi_thermal_trips_update(tz, ACPI_TRIPS_REFRESH_THRESHOLDS);
+-		acpi_thermal_check(tz);
++		acpi_queue_thermal_check(tz);
+ 		acpi_bus_generate_netlink_event(device->pnp.device_class,
+ 						  dev_name(&device->dev), event, 0);
+ 		break;
+ 	case ACPI_THERMAL_NOTIFY_DEVICES:
+ 		acpi_thermal_trips_update(tz, ACPI_TRIPS_REFRESH_DEVICES);
+-		acpi_thermal_check(tz);
++		acpi_queue_thermal_check(tz);
+ 		acpi_bus_generate_netlink_event(device->pnp.device_class,
+ 						  dev_name(&device->dev), event, 0);
+ 		break;
+@@ -1020,7 +1020,25 @@ static void acpi_thermal_check_fn(struct work_struct *work)
+ {
+ 	struct acpi_thermal *tz = container_of(work, struct acpi_thermal,
+ 					       thermal_check_work);
+-	acpi_thermal_check(tz);
++
++	/*
++	 * In general, it is not sufficient to check the pending bit, because
++	 * subsequent instances of this function may be queued after one of them
++	 * has started running (e.g. if _TMP sleeps).  Avoid bailing out if just
++	 * one of them is running, though, because it may have done the actual
++	 * check some time ago, so allow at least one of them to block on the
++	 * mutex while another one is running the update.
++	 */
++	if (!refcount_dec_not_one(&tz->thermal_check_count))
++		return;
++
++	mutex_lock(&tz->thermal_check_lock);
++
++	thermal_zone_device_update(tz->thermal_zone, THERMAL_EVENT_UNSPECIFIED);
++
++	refcount_inc(&tz->thermal_check_count);
++
++	mutex_unlock(&tz->thermal_check_lock);
+ }
+ 
+ static int acpi_thermal_add(struct acpi_device *device)
+@@ -1052,6 +1070,8 @@ static int acpi_thermal_add(struct acpi_device *device)
+ 	if (result)
+ 		goto free_memory;
+ 
++	refcount_set(&tz->thermal_check_count, 3);
++	mutex_init(&tz->thermal_check_lock);
+ 	INIT_WORK(&tz->thermal_check_work, acpi_thermal_check_fn);
+ 
+ 	pr_info(PREFIX "%s [%s] (%ld C)\n", acpi_device_name(device),
+@@ -1117,7 +1137,7 @@ static int acpi_thermal_resume(struct device *dev)
+ 		tz->state.active |= tz->trips.active[i].flags.enabled;
+ 	}
+ 
+-	queue_work(acpi_thermal_pm_queue, &tz->thermal_check_work);
++	acpi_queue_thermal_check(tz);
+ 
+ 	return AE_OK;
+ }
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index aaae9220f3a00..bd5c04fabdab6 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1029,6 +1029,12 @@ static int nbd_add_socket(struct nbd_device *nbd, unsigned long arg,
+ 	if (!sock)
+ 		return err;
+ 
++	/*
++	 * We need to make sure we don't get any errant requests while we're
++	 * reallocating the ->socks array.
++	 */
++	blk_mq_freeze_queue(nbd->disk->queue);
++
+ 	if (!netlink && !nbd->task_setup &&
+ 	    !test_bit(NBD_RT_BOUND, &config->runtime_flags))
+ 		nbd->task_setup = current;
+@@ -1067,10 +1073,12 @@ static int nbd_add_socket(struct nbd_device *nbd, unsigned long arg,
+ 	nsock->cookie = 0;
+ 	socks[config->num_connections++] = nsock;
+ 	atomic_inc(&config->live_connections);
++	blk_mq_unfreeze_queue(nbd->disk->queue);
+ 
+ 	return 0;
+ 
+ put_socket:
++	blk_mq_unfreeze_queue(nbd->disk->queue);
+ 	sockfd_put(sock);
+ 	return err;
+ }
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 48629d3433b4c..10078a7435644 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -945,7 +945,8 @@ static void blkif_set_queue_limits(struct blkfront_info *info)
+ 	if (info->feature_discard) {
+ 		blk_queue_flag_set(QUEUE_FLAG_DISCARD, rq);
+ 		blk_queue_max_discard_sectors(rq, get_capacity(gd));
+-		rq->limits.discard_granularity = info->discard_granularity;
++		rq->limits.discard_granularity = info->discard_granularity ?:
++						 info->physical_sector_size;
+ 		rq->limits.discard_alignment = info->discard_alignment;
+ 		if (info->feature_secdiscard)
+ 			blk_queue_flag_set(QUEUE_FLAG_SECERASE, rq);
+@@ -2179,19 +2180,12 @@ static void blkfront_closing(struct blkfront_info *info)
+ 
+ static void blkfront_setup_discard(struct blkfront_info *info)
+ {
+-	int err;
+-	unsigned int discard_granularity;
+-	unsigned int discard_alignment;
+-
+ 	info->feature_discard = 1;
+-	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
+-		"discard-granularity", "%u", &discard_granularity,
+-		"discard-alignment", "%u", &discard_alignment,
+-		NULL);
+-	if (!err) {
+-		info->discard_granularity = discard_granularity;
+-		info->discard_alignment = discard_alignment;
+-	}
++	info->discard_granularity = xenbus_read_unsigned(info->xbdev->otherend,
++							 "discard-granularity",
++							 0);
++	info->discard_alignment = xenbus_read_unsigned(info->xbdev->otherend,
++						       "discard-alignment", 0);
+ 	info->feature_secdiscard =
+ 		!!xenbus_read_unsigned(info->xbdev->otherend, "discard-secure",
+ 				       0);
+diff --git a/drivers/clk/imx/Kconfig b/drivers/clk/imx/Kconfig
+index 3061896503f30..47d9ec3abd2f7 100644
+--- a/drivers/clk/imx/Kconfig
++++ b/drivers/clk/imx/Kconfig
+@@ -6,8 +6,6 @@ config MXC_CLK
+ 
+ config MXC_CLK_SCU
+ 	tristate
+-	depends on ARCH_MXC
+-	depends on IMX_SCU && HAVE_ARM_SMCCC
+ 
+ config CLK_IMX1
+ 	def_bool SOC_IMX1
+diff --git a/drivers/clk/mmp/clk-audio.c b/drivers/clk/mmp/clk-audio.c
+index eea69d498bd27..7aa7f4a9564fd 100644
+--- a/drivers/clk/mmp/clk-audio.c
++++ b/drivers/clk/mmp/clk-audio.c
+@@ -392,7 +392,8 @@ static int mmp2_audio_clk_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static int __maybe_unused mmp2_audio_clk_suspend(struct device *dev)
++#ifdef CONFIG_PM
++static int mmp2_audio_clk_suspend(struct device *dev)
+ {
+ 	struct mmp2_audio_clk *priv = dev_get_drvdata(dev);
+ 
+@@ -404,7 +405,7 @@ static int __maybe_unused mmp2_audio_clk_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int __maybe_unused mmp2_audio_clk_resume(struct device *dev)
++static int mmp2_audio_clk_resume(struct device *dev)
+ {
+ 	struct mmp2_audio_clk *priv = dev_get_drvdata(dev);
+ 
+@@ -415,6 +416,7 @@ static int __maybe_unused mmp2_audio_clk_resume(struct device *dev)
+ 
+ 	return 0;
+ }
++#endif
+ 
+ static const struct dev_pm_ops mmp2_audio_clk_pm_ops = {
+ 	SET_RUNTIME_PM_OPS(mmp2_audio_clk_suspend, mmp2_audio_clk_resume, NULL)
+diff --git a/drivers/clk/qcom/gcc-sm8250.c b/drivers/clk/qcom/gcc-sm8250.c
+index 6cb6617b8d88c..ab594a0f0c408 100644
+--- a/drivers/clk/qcom/gcc-sm8250.c
++++ b/drivers/clk/qcom/gcc-sm8250.c
+@@ -722,7 +722,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ 		.name = "gcc_sdcc2_apps_clk_src",
+ 		.parent_data = gcc_parent_data_4,
+ 		.num_parents = 5,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+ 
+@@ -745,7 +745,7 @@ static struct clk_rcg2 gcc_sdcc4_apps_clk_src = {
+ 		.name = "gcc_sdcc4_apps_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = 3,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+ 
+diff --git a/drivers/crypto/marvell/cesa/cesa.h b/drivers/crypto/marvell/cesa/cesa.h
+index fabfaaccca872..fa56b45620c79 100644
+--- a/drivers/crypto/marvell/cesa/cesa.h
++++ b/drivers/crypto/marvell/cesa/cesa.h
+@@ -300,11 +300,11 @@ struct mv_cesa_tdma_desc {
+ 	__le32 byte_cnt;
+ 	union {
+ 		__le32 src;
+-		dma_addr_t src_dma;
++		u32 src_dma;
+ 	};
+ 	union {
+ 		__le32 dst;
+-		dma_addr_t dst_dma;
++		u32 dst_dma;
+ 	};
+ 	__le32 next_dma;
+ 
+diff --git a/drivers/firmware/efi/apple-properties.c b/drivers/firmware/efi/apple-properties.c
+index 34f53d898acb0..e1926483ae2fd 100644
+--- a/drivers/firmware/efi/apple-properties.c
++++ b/drivers/firmware/efi/apple-properties.c
+@@ -3,8 +3,9 @@
+  * apple-properties.c - EFI device properties on Macs
+  * Copyright (C) 2016 Lukas Wunner <lukas@wunner.de>
+  *
+- * Note, all properties are considered as u8 arrays.
+- * To get a value of any of them the caller must use device_property_read_u8_array().
++ * Properties are stored either as:
++ * u8 arrays which can be retrieved with device_property_read_u8_array() or
++ * booleans which can be queried with device_property_present().
+  */
+ 
+ #define pr_fmt(fmt) "apple-properties: " fmt
+@@ -88,8 +89,12 @@ static void __init unmarshal_key_value_pairs(struct dev_header *dev_header,
+ 
+ 		entry_data = ptr + key_len + sizeof(val_len);
+ 		entry_len = val_len - sizeof(val_len);
+-		entry[i] = PROPERTY_ENTRY_U8_ARRAY_LEN(key, entry_data,
+-						       entry_len);
++		if (entry_len)
++			entry[i] = PROPERTY_ENTRY_U8_ARRAY_LEN(key, entry_data,
++							       entry_len);
++		else
++			entry[i] = PROPERTY_ENTRY_BOOL(key);
++
+ 		if (dump_properties) {
+ 			dev_info(dev, "property: %s\n", key);
+ 			print_hex_dump(KERN_INFO, pr_fmt(), DUMP_PREFIX_OFFSET,
+diff --git a/drivers/firmware/imx/Kconfig b/drivers/firmware/imx/Kconfig
+index 1d2e5b85d7ca8..c027d99f2a599 100644
+--- a/drivers/firmware/imx/Kconfig
++++ b/drivers/firmware/imx/Kconfig
+@@ -13,6 +13,7 @@ config IMX_DSP
+ config IMX_SCU
+ 	bool "IMX SCU Protocol driver"
+ 	depends on IMX_MBOX
++	select SOC_BUS
+ 	help
+ 	  The System Controller Firmware (SCFW) is a low-level system function
+ 	  which runs on a dedicated Cortex-M core to provide power, clock, and
+diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
+index 44fd0cd069de6..95d0f18ed0c56 100644
+--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
++++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
+@@ -575,6 +575,7 @@ struct pptable_funcs {
+ 	int (*conv_power_profile_to_pplib_workload)(int power_profile);
+ 	uint32_t (*get_fan_control_mode)(struct smu_context *smu);
+ 	int (*set_fan_control_mode)(struct smu_context *smu, uint32_t mode);
++	int (*set_fan_speed_percent)(struct smu_context *smu, uint32_t speed);
+ 	int (*set_fan_speed_rpm)(struct smu_context *smu, uint32_t speed);
+ 	int (*set_xgmi_pstate)(struct smu_context *smu, uint32_t pstate);
+ 	int (*gfx_off_control)(struct smu_context *smu, bool enable);
+diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0.h b/drivers/gpu/drm/amd/pm/inc/smu_v11_0.h
+index 2d1c3babaa3a0..0046f1c26fc2d 100644
+--- a/drivers/gpu/drm/amd/pm/inc/smu_v11_0.h
++++ b/drivers/gpu/drm/amd/pm/inc/smu_v11_0.h
+@@ -200,6 +200,9 @@ int
+ smu_v11_0_set_fan_control_mode(struct smu_context *smu,
+ 			       uint32_t mode);
+ 
++int
++smu_v11_0_set_fan_speed_percent(struct smu_context *smu, uint32_t speed);
++
+ int smu_v11_0_set_fan_speed_rpm(struct smu_context *smu,
+ 				       uint32_t speed);
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index b1e5ec01527b8..5cc45b1cff7e7 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -2255,19 +2255,14 @@ int smu_get_fan_speed_percent(struct smu_context *smu, uint32_t *speed)
+ int smu_set_fan_speed_percent(struct smu_context *smu, uint32_t speed)
+ {
+ 	int ret = 0;
+-	uint32_t rpm;
+ 
+ 	if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled)
+ 		return -EOPNOTSUPP;
+ 
+ 	mutex_lock(&smu->mutex);
+ 
+-	if (smu->ppt_funcs->set_fan_speed_rpm) {
+-		if (speed > 100)
+-			speed = 100;
+-		rpm = speed * smu->fan_max_rpm / 100;
+-		ret = smu->ppt_funcs->set_fan_speed_rpm(smu, rpm);
+-	}
++	if (smu->ppt_funcs->set_fan_speed_percent)
++		ret = smu->ppt_funcs->set_fan_speed_percent(smu, speed);
+ 
+ 	mutex_unlock(&smu->mutex);
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index fc376281e629a..1c526cb239e03 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -2366,6 +2366,7 @@ static const struct pptable_funcs arcturus_ppt_funcs = {
+ 	.display_clock_voltage_request = smu_v11_0_display_clock_voltage_request,
+ 	.get_fan_control_mode = smu_v11_0_get_fan_control_mode,
+ 	.set_fan_control_mode = smu_v11_0_set_fan_control_mode,
++	.set_fan_speed_percent = smu_v11_0_set_fan_speed_percent,
+ 	.set_fan_speed_rpm = smu_v11_0_set_fan_speed_rpm,
+ 	.set_xgmi_pstate = smu_v11_0_set_xgmi_pstate,
+ 	.gfx_off_control = smu_v11_0_gfx_off_control,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index ef1a62e86a0ee..f2c8719b8395e 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -2710,6 +2710,7 @@ static const struct pptable_funcs navi10_ppt_funcs = {
+ 	.display_clock_voltage_request = smu_v11_0_display_clock_voltage_request,
+ 	.get_fan_control_mode = smu_v11_0_get_fan_control_mode,
+ 	.set_fan_control_mode = smu_v11_0_set_fan_control_mode,
++	.set_fan_speed_percent = smu_v11_0_set_fan_speed_percent,
+ 	.set_fan_speed_rpm = smu_v11_0_set_fan_speed_rpm,
+ 	.set_xgmi_pstate = smu_v11_0_set_xgmi_pstate,
+ 	.gfx_off_control = smu_v11_0_gfx_off_control,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index cf7c4f0e0a0b5..31da8fae6fa9d 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -2776,6 +2776,7 @@ static const struct pptable_funcs sienna_cichlid_ppt_funcs = {
+ 	.display_clock_voltage_request = smu_v11_0_display_clock_voltage_request,
+ 	.get_fan_control_mode = smu_v11_0_get_fan_control_mode,
+ 	.set_fan_control_mode = smu_v11_0_set_fan_control_mode,
++	.set_fan_speed_percent = smu_v11_0_set_fan_speed_percent,
+ 	.set_fan_speed_rpm = smu_v11_0_set_fan_speed_rpm,
+ 	.set_xgmi_pstate = smu_v11_0_set_xgmi_pstate,
+ 	.gfx_off_control = smu_v11_0_gfx_off_control,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+index 6db96fa1df092..e646f5931d795 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+@@ -1122,6 +1122,35 @@ smu_v11_0_set_fan_static_mode(struct smu_context *smu, uint32_t mode)
+ 	return 0;
+ }
+ 
++int
++smu_v11_0_set_fan_speed_percent(struct smu_context *smu, uint32_t speed)
++{
++	struct amdgpu_device *adev = smu->adev;
++	uint32_t duty100, duty;
++	uint64_t tmp64;
++
++	if (speed > 100)
++		speed = 100;
++
++	if (smu_v11_0_auto_fan_control(smu, 0))
++		return -EINVAL;
++
++	duty100 = REG_GET_FIELD(RREG32_SOC15(THM, 0, mmCG_FDO_CTRL1),
++				CG_FDO_CTRL1, FMAX_DUTY100);
++	if (!duty100)
++		return -EINVAL;
++
++	tmp64 = (uint64_t)speed * duty100;
++	do_div(tmp64, 100);
++	duty = (uint32_t)tmp64;
++
++	WREG32_SOC15(THM, 0, mmCG_FDO_CTRL0,
++		     REG_SET_FIELD(RREG32_SOC15(THM, 0, mmCG_FDO_CTRL0),
++				   CG_FDO_CTRL0, FDO_STATIC_DUTY, duty));
++
++	return smu_v11_0_set_fan_static_mode(smu, FDO_PWM_MODE_STATIC);
++}
++
+ int
+ smu_v11_0_set_fan_control_mode(struct smu_context *smu,
+ 			       uint32_t mode)
+@@ -1130,7 +1159,7 @@ smu_v11_0_set_fan_control_mode(struct smu_context *smu,
+ 
+ 	switch (mode) {
+ 	case AMD_FAN_CTRL_NONE:
+-		ret = smu_v11_0_set_fan_speed_rpm(smu, smu->fan_max_rpm);
++		ret = smu_v11_0_set_fan_speed_percent(smu, 100);
+ 		break;
+ 	case AMD_FAN_CTRL_MANUAL:
+ 		ret = smu_v11_0_auto_fan_control(smu, 0);
+diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
+index 94465374ca2fe..e961ad6a31294 100644
+--- a/drivers/gpu/drm/i915/gt/gen7_renderclear.c
++++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
+@@ -390,6 +390,16 @@ static void emit_batch(struct i915_vma * const vma,
+ 						     &cb_kernel_ivb,
+ 						     desc_count);
+ 
++	/* Reset inherited context registers */
++	gen7_emit_pipeline_invalidate(&cmds);
++	batch_add(&cmds, MI_LOAD_REGISTER_IMM(2));
++	batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_0_GEN7));
++	batch_add(&cmds, 0xffff0000);
++	batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_1));
++	batch_add(&cmds, 0xffff0000 | PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
++	gen7_emit_pipeline_flush(&cmds);
++
++	/* Switch to the media pipeline and our base address */
+ 	gen7_emit_pipeline_invalidate(&cmds);
+ 	batch_add(&cmds, PIPELINE_SELECT | PIPELINE_SELECT_MEDIA);
+ 	batch_add(&cmds, MI_NOOP);
+@@ -399,9 +409,11 @@ static void emit_batch(struct i915_vma * const vma,
+ 	gen7_emit_state_base_address(&cmds, descriptors);
+ 	gen7_emit_pipeline_invalidate(&cmds);
+ 
++	/* Set the clear-residual kernel state */
+ 	gen7_emit_vfe_state(&cmds, bv, urb_size - 1, 0, 0);
+ 	gen7_emit_interface_descriptor_load(&cmds, descriptors, desc_count);
+ 
++	/* Execute the kernel on all HW threads */
+ 	for (i = 0; i < num_primitives(bv); i++)
+ 		gen7_emit_media_object(&cmds, i);
+ 
+diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c
+index 81c05f551b9c8..060f826b1d52e 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ggtt.c
++++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c
+@@ -526,16 +526,39 @@ static int init_ggtt(struct i915_ggtt *ggtt)
+ 
+ 	mutex_init(&ggtt->error_mutex);
+ 	if (ggtt->mappable_end) {
+-		/* Reserve a mappable slot for our lockless error capture */
+-		ret = drm_mm_insert_node_in_range(&ggtt->vm.mm,
+-						  &ggtt->error_capture,
+-						  PAGE_SIZE, 0,
+-						  I915_COLOR_UNEVICTABLE,
+-						  0, ggtt->mappable_end,
+-						  DRM_MM_INSERT_LOW);
+-		if (ret)
+-			return ret;
++		/*
++		 * Reserve a mappable slot for our lockless error capture.
++		 *
++		 * We strongly prefer taking address 0x0 in order to protect
++		 * other critical buffers against accidental overwrites,
++		 * as writing to address 0 is a very common mistake.
++		 *
++		 * Since 0 may already be in use by the system (e.g. the BIOS
++		 * framebuffer), we let the reservation fail quietly and hope
++		 * 0 remains reserved always.
++		 *
++		 * If we fail to reserve 0, and then fail to find any space
++		 * for an error-capture, remain silent. We can afford not
++		 * to reserve an error_capture node as we have fallback
++		 * paths, and we trust that 0 will remain reserved. However,
++		 * the only likely reason for failure to insert is a driver
++		 * bug, which we expect to cause other failures...
++		 */
++		ggtt->error_capture.size = I915_GTT_PAGE_SIZE;
++		ggtt->error_capture.color = I915_COLOR_UNEVICTABLE;
++		if (drm_mm_reserve_node(&ggtt->vm.mm, &ggtt->error_capture))
++			drm_mm_insert_node_in_range(&ggtt->vm.mm,
++						    &ggtt->error_capture,
++						    ggtt->error_capture.size, 0,
++						    ggtt->error_capture.color,
++						    0, ggtt->mappable_end,
++						    DRM_MM_INSERT_LOW);
+ 	}
++	if (drm_mm_node_allocated(&ggtt->error_capture))
++		drm_dbg(&ggtt->vm.i915->drm,
++			"Reserved GGTT:[%llx, %llx] for use by error capture\n",
++			ggtt->error_capture.start,
++			ggtt->error_capture.start + ggtt->error_capture.size);
+ 
+ 	/*
+ 	 * The upper portion of the GuC address space has a sizeable hole
+@@ -548,9 +571,9 @@ static int init_ggtt(struct i915_ggtt *ggtt)
+ 
+ 	/* Clear any non-preallocated blocks */
+ 	drm_mm_for_each_hole(entry, &ggtt->vm.mm, hole_start, hole_end) {
+-		drm_dbg_kms(&ggtt->vm.i915->drm,
+-			    "clearing unused GTT space: [%lx, %lx]\n",
+-			    hole_start, hole_end);
++		drm_dbg(&ggtt->vm.i915->drm,
++			"clearing unused GTT space: [%lx, %lx]\n",
++			hole_start, hole_end);
+ 		ggtt->vm.clear_range(&ggtt->vm, hole_start,
+ 				     hole_end - hole_start);
+ 	}
+diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
+index 10a865f3dc09a..9ed19b8bca600 100644
+--- a/drivers/gpu/drm/i915/i915_active.c
++++ b/drivers/gpu/drm/i915/i915_active.c
+@@ -631,24 +631,26 @@ static int flush_lazy_signals(struct i915_active *ref)
+ 
+ int __i915_active_wait(struct i915_active *ref, int state)
+ {
+-	int err;
+-
+ 	might_sleep();
+ 
+-	if (!i915_active_acquire_if_busy(ref))
+-		return 0;
+-
+ 	/* Any fence added after the wait begins will not be auto-signaled */
+-	err = flush_lazy_signals(ref);
+-	i915_active_release(ref);
+-	if (err)
+-		return err;
++	if (i915_active_acquire_if_busy(ref)) {
++		int err;
+ 
+-	if (!i915_active_is_idle(ref) &&
+-	    ___wait_var_event(ref, i915_active_is_idle(ref),
+-			      state, 0, 0, schedule()))
+-		return -EINTR;
++		err = flush_lazy_signals(ref);
++		i915_active_release(ref);
++		if (err)
++			return err;
+ 
++		if (___wait_var_event(ref, i915_active_is_idle(ref),
++				      state, 0, 0, schedule()))
++			return -EINTR;
++	}
++
++	/*
++	 * After the wait is complete, the caller may free the active.
++	 * We have to flush any concurrent retirement before returning.
++	 */
+ 	flush_work(&ref->work);
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 83f4af097b858..fa830e77bb648 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -1347,7 +1347,7 @@ intel_subplatform(const struct intel_runtime_info *info, enum intel_platform p)
+ {
+ 	const unsigned int pi = __platform_mask_index(info, p);
+ 
+-	return info->platform_mask[pi] & INTEL_SUBPLATFORM_BITS;
++	return info->platform_mask[pi] & ((1 << INTEL_SUBPLATFORM_BITS) - 1);
+ }
+ 
+ static __always_inline bool
+diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
+index 69c0fa20eba17..3c9ac6649ead3 100644
+--- a/drivers/gpu/drm/i915/i915_pmu.c
++++ b/drivers/gpu/drm/i915/i915_pmu.c
+@@ -184,13 +184,24 @@ static u64 get_rc6(struct intel_gt *gt)
+ 	return val;
+ }
+ 
+-static void park_rc6(struct drm_i915_private *i915)
++static void init_rc6(struct i915_pmu *pmu)
+ {
+-	struct i915_pmu *pmu = &i915->pmu;
++	struct drm_i915_private *i915 = container_of(pmu, typeof(*i915), pmu);
++	intel_wakeref_t wakeref;
+ 
+-	if (pmu->enable & config_enabled_mask(I915_PMU_RC6_RESIDENCY))
++	with_intel_runtime_pm(i915->gt.uncore->rpm, wakeref) {
+ 		pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt);
++		pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur =
++					pmu->sample[__I915_SAMPLE_RC6].cur;
++		pmu->sleep_last = ktime_get();
++	}
++}
+ 
++static void park_rc6(struct drm_i915_private *i915)
++{
++	struct i915_pmu *pmu = &i915->pmu;
++
++	pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt);
+ 	pmu->sleep_last = ktime_get();
+ }
+ 
+@@ -201,6 +212,7 @@ static u64 get_rc6(struct intel_gt *gt)
+ 	return __get_rc6(gt);
+ }
+ 
++static void init_rc6(struct i915_pmu *pmu) { }
+ static void park_rc6(struct drm_i915_private *i915) {}
+ 
+ #endif
+@@ -613,10 +625,8 @@ static void i915_pmu_enable(struct perf_event *event)
+ 		container_of(event->pmu, typeof(*i915), pmu.base);
+ 	unsigned int bit = event_enabled_bit(event);
+ 	struct i915_pmu *pmu = &i915->pmu;
+-	intel_wakeref_t wakeref;
+ 	unsigned long flags;
+ 
+-	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+ 	spin_lock_irqsave(&pmu->lock, flags);
+ 
+ 	/*
+@@ -627,13 +637,6 @@ static void i915_pmu_enable(struct perf_event *event)
+ 	GEM_BUG_ON(bit >= ARRAY_SIZE(pmu->enable_count));
+ 	GEM_BUG_ON(pmu->enable_count[bit] == ~0);
+ 
+-	if (pmu->enable_count[bit] == 0 &&
+-	    config_enabled_mask(I915_PMU_RC6_RESIDENCY) & BIT_ULL(bit)) {
+-		pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur = 0;
+-		pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt);
+-		pmu->sleep_last = ktime_get();
+-	}
+-
+ 	pmu->enable |= BIT_ULL(bit);
+ 	pmu->enable_count[bit]++;
+ 
+@@ -674,8 +677,6 @@ static void i915_pmu_enable(struct perf_event *event)
+ 	 * an existing non-zero value.
+ 	 */
+ 	local64_set(&event->hw.prev_count, __i915_pmu_event_read(event));
+-
+-	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+ }
+ 
+ static void i915_pmu_disable(struct perf_event *event)
+@@ -1101,6 +1102,7 @@ void i915_pmu_register(struct drm_i915_private *i915)
+ 	hrtimer_init(&pmu->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	pmu->timer.function = i915_sample;
+ 	pmu->cpuhp.slot = CPUHP_INVALID;
++	init_rc6(pmu);
+ 
+ 	if (!is_igp(i915)) {
+ 		pmu->name = kasprintf(GFP_KERNEL,
+diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+index c53a222e3dece..713770fb2b92d 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
++++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+@@ -1880,7 +1880,7 @@ static int igt_cs_tlb(void *arg)
+ 	vma = i915_vma_instance(out, vm, NULL);
+ 	if (IS_ERR(vma)) {
+ 		err = PTR_ERR(vma);
+-		goto out_put_batch;
++		goto out_put_out;
+ 	}
+ 
+ 	err = i915_vma_pin(vma, 0, 0,
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/base507c.c b/drivers/gpu/drm/nouveau/dispnv50/base507c.c
+index 302d4e6fc52f1..788db043a3429 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/base507c.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/base507c.c
+@@ -88,7 +88,11 @@ base507c_image_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
+ 			  NVVAL(NV507C, SET_CONVERSION, OFS, 0x64));
+ 	} else {
+ 		PUSH_MTHD(push, NV507C, SET_PROCESSING,
+-			  NVDEF(NV507C, SET_PROCESSING, USE_GAIN_OFS, DISABLE));
++			  NVDEF(NV507C, SET_PROCESSING, USE_GAIN_OFS, DISABLE),
++
++					SET_CONVERSION,
++			  NVVAL(NV507C, SET_CONVERSION, GAIN, 0) |
++			  NVVAL(NV507C, SET_CONVERSION, OFS, 0));
+ 	}
+ 
+ 	PUSH_MTHD(push, NV507C, SURFACE_SET_OFFSET(0, 0), asyw->image.offset[0] >> 8);
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/base827c.c b/drivers/gpu/drm/nouveau/dispnv50/base827c.c
+index 18d34096f1258..093d4ba6910ec 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/base827c.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/base827c.c
+@@ -49,7 +49,11 @@ base827c_image_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
+ 			  NVVAL(NV827C, SET_CONVERSION, OFS, 0x64));
+ 	} else {
+ 		PUSH_MTHD(push, NV827C, SET_PROCESSING,
+-			  NVDEF(NV827C, SET_PROCESSING, USE_GAIN_OFS, DISABLE));
++			  NVDEF(NV827C, SET_PROCESSING, USE_GAIN_OFS, DISABLE),
++
++					SET_CONVERSION,
++			  NVVAL(NV827C, SET_CONVERSION, GAIN, 0) |
++			  NVVAL(NV827C, SET_CONVERSION, OFS, 0));
+ 	}
+ 
+ 	PUSH_MTHD(push, NV827C, SURFACE_SET_OFFSET(0, 0), asyw->image.offset[0] >> 8,
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/head917d.c b/drivers/gpu/drm/nouveau/dispnv50/head917d.c
+index a5d8274036609..ea9f8667305ec 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/head917d.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/head917d.c
+@@ -22,6 +22,7 @@
+ #include "head.h"
+ #include "core.h"
+ 
++#include "nvif/push.h"
+ #include <nvif/push507c.h>
+ 
+ #include <nvhw/class/cl917d.h>
+@@ -73,6 +74,31 @@ head917d_base(struct nv50_head *head, struct nv50_head_atom *asyh)
+ 	return 0;
+ }
+ 
++static int
++head917d_curs_set(struct nv50_head *head, struct nv50_head_atom *asyh)
++{
++	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
++	const int i = head->base.index;
++	int ret;
++
++	ret = PUSH_WAIT(push, 5);
++	if (ret)
++		return ret;
++
++	PUSH_MTHD(push, NV917D, HEAD_SET_CONTROL_CURSOR(i),
++		  NVDEF(NV917D, HEAD_SET_CONTROL_CURSOR, ENABLE, ENABLE) |
++		  NVVAL(NV917D, HEAD_SET_CONTROL_CURSOR, FORMAT, asyh->curs.format) |
++		  NVVAL(NV917D, HEAD_SET_CONTROL_CURSOR, SIZE, asyh->curs.layout) |
++		  NVVAL(NV917D, HEAD_SET_CONTROL_CURSOR, HOT_SPOT_X, 0) |
++		  NVVAL(NV917D, HEAD_SET_CONTROL_CURSOR, HOT_SPOT_Y, 0) |
++		  NVDEF(NV917D, HEAD_SET_CONTROL_CURSOR, COMPOSITION, ALPHA_BLEND),
++
++				HEAD_SET_OFFSET_CURSOR(i), asyh->curs.offset >> 8);
++
++	PUSH_MTHD(push, NV917D, HEAD_SET_CONTEXT_DMA_CURSOR(i), asyh->curs.handle);
++	return 0;
++}
++
+ int
+ head917d_curs_layout(struct nv50_head *head, struct nv50_wndw_atom *asyw,
+ 		     struct nv50_head_atom *asyh)
+@@ -101,7 +127,7 @@ head917d = {
+ 	.core_clr = head907d_core_clr,
+ 	.curs_layout = head917d_curs_layout,
+ 	.curs_format = head507d_curs_format,
+-	.curs_set = head907d_curs_set,
++	.curs_set = head917d_curs_set,
+ 	.curs_clr = head907d_curs_clr,
+ 	.base = head917d_base,
+ 	.ovly = head907d_ovly,
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+index 0356474ad6f6a..f07916ffe42cb 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+@@ -702,6 +702,11 @@ nv50_wndw_init(struct nv50_wndw *wndw)
+ 	nvif_notify_get(&wndw->notify);
+ }
+ 
++static const u64 nv50_cursor_format_modifiers[] = {
++	DRM_FORMAT_MOD_LINEAR,
++	DRM_FORMAT_MOD_INVALID,
++};
++
+ int
+ nv50_wndw_new_(const struct nv50_wndw_func *func, struct drm_device *dev,
+ 	       enum drm_plane_type type, const char *name, int index,
+@@ -713,6 +718,7 @@ nv50_wndw_new_(const struct nv50_wndw_func *func, struct drm_device *dev,
+ 	struct nvif_mmu *mmu = &drm->client.mmu;
+ 	struct nv50_disp *disp = nv50_disp(dev);
+ 	struct nv50_wndw *wndw;
++	const u64 *format_modifiers;
+ 	int nformat;
+ 	int ret;
+ 
+@@ -728,10 +734,13 @@ nv50_wndw_new_(const struct nv50_wndw_func *func, struct drm_device *dev,
+ 
+ 	for (nformat = 0; format[nformat]; nformat++);
+ 
+-	ret = drm_universal_plane_init(dev, &wndw->plane, heads, &nv50_wndw,
+-				       format, nformat,
+-				       nouveau_display(dev)->format_modifiers,
+-				       type, "%s-%d", name, index);
++	if (type == DRM_PLANE_TYPE_CURSOR)
++		format_modifiers = nv50_cursor_format_modifiers;
++	else
++		format_modifiers = nouveau_display(dev)->format_modifiers;
++
++	ret = drm_universal_plane_init(dev, &wndw->plane, heads, &nv50_wndw, format, nformat,
++				       format_modifiers, type, "%s-%d", name, index);
+ 	if (ret) {
+ 		kfree(*pwndw);
+ 		*pwndw = NULL;
+diff --git a/drivers/gpu/drm/nouveau/include/nvhw/class/cl917d.h b/drivers/gpu/drm/nouveau/include/nvhw/class/cl917d.h
+index 2a2612d6e1e0e..fb223723a38ad 100644
+--- a/drivers/gpu/drm/nouveau/include/nvhw/class/cl917d.h
++++ b/drivers/gpu/drm/nouveau/include/nvhw/class/cl917d.h
+@@ -66,6 +66,10 @@
+ #define NV917D_HEAD_SET_CONTROL_CURSOR_COMPOSITION_ALPHA_BLEND                  (0x00000000)
+ #define NV917D_HEAD_SET_CONTROL_CURSOR_COMPOSITION_PREMULT_ALPHA_BLEND          (0x00000001)
+ #define NV917D_HEAD_SET_CONTROL_CURSOR_COMPOSITION_XOR                          (0x00000002)
++#define NV917D_HEAD_SET_OFFSET_CURSOR(a)                                        (0x00000484 + (a)*0x00000300)
++#define NV917D_HEAD_SET_OFFSET_CURSOR_ORIGIN                                    31:0
++#define NV917D_HEAD_SET_CONTEXT_DMA_CURSOR(a)                                   (0x0000048C + (a)*0x00000300)
++#define NV917D_HEAD_SET_CONTEXT_DMA_CURSOR_HANDLE                               31:0
+ #define NV917D_HEAD_SET_DITHER_CONTROL(a)                                       (0x000004A0 + (a)*0x00000300)
+ #define NV917D_HEAD_SET_DITHER_CONTROL_ENABLE                                   0:0
+ #define NV917D_HEAD_SET_DITHER_CONTROL_ENABLE_DISABLE                           (0x00000000)
+diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
+index 4f69e4c3dafde..1c3f890377d2c 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
+@@ -315,6 +315,10 @@ nouveau_svmm_init(struct drm_device *dev, void *data,
+ 	struct drm_nouveau_svm_init *args = data;
+ 	int ret;
+ 
++	/* We need to fail if svm is disabled */
++	if (!cli->drm->svm)
++		return -ENOSYS;
++
+ 	/* Allocate tracking for SVM-enabled VMM. */
+ 	if (!(svmm = kzalloc(sizeof(*svmm), GFP_KERNEL)))
+ 		return -ENOMEM;
+diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
+index b72b2bd05a815..ad691571d759f 100644
+--- a/drivers/gpu/drm/vc4/vc4_hvs.c
++++ b/drivers/gpu/drm/vc4/vc4_hvs.c
+@@ -618,11 +618,11 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ 	 * for now we just allocate globally.
+ 	 */
+ 	if (!hvs->hvs5)
+-		/* 96kB */
+-		drm_mm_init(&hvs->lbm_mm, 0, 96 * 1024);
++		/* 48k words of 2x12-bit pixels */
++		drm_mm_init(&hvs->lbm_mm, 0, 48 * 1024);
+ 	else
+-		/* 70k words */
+-		drm_mm_init(&hvs->lbm_mm, 0, 70 * 2 * 1024);
++		/* 60k words of 4x12-bit pixels */
++		drm_mm_init(&hvs->lbm_mm, 0, 60 * 1024);
+ 
+ 	/* Upload filter kernels.  We only have the one for now, so we
+ 	 * keep it around for the lifetime of the driver.
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index 6b39cc2ca18d0..5612cab552270 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -437,6 +437,7 @@ static void vc4_write_ppf(struct vc4_plane_state *vc4_state, u32 src, u32 dst)
+ static u32 vc4_lbm_size(struct drm_plane_state *state)
+ {
+ 	struct vc4_plane_state *vc4_state = to_vc4_plane_state(state);
++	struct vc4_dev *vc4 = to_vc4_dev(state->plane->dev);
+ 	u32 pix_per_line;
+ 	u32 lbm;
+ 
+@@ -472,7 +473,11 @@ static u32 vc4_lbm_size(struct drm_plane_state *state)
+ 		lbm = pix_per_line * 16;
+ 	}
+ 
+-	lbm = roundup(lbm, 32);
++	/* Align it to 64 or 128 (hvs5) bytes */
++	lbm = roundup(lbm, vc4->hvs->hvs5 ? 128 : 64);
++
++	/* Each "word" of the LBM memory contains 2 or 4 (hvs5) pixels */
++	lbm /= vc4->hvs->hvs5 ? 4 : 2;
+ 
+ 	return lbm;
+ }
+@@ -912,9 +917,9 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
+ 		if (!vc4_state->is_unity) {
+ 			vc4_dlist_write(vc4_state,
+ 					VC4_SET_FIELD(vc4_state->crtc_w,
+-						      SCALER_POS1_SCL_WIDTH) |
++						      SCALER5_POS1_SCL_WIDTH) |
+ 					VC4_SET_FIELD(vc4_state->crtc_h,
+-						      SCALER_POS1_SCL_HEIGHT));
++						      SCALER5_POS1_SCL_HEIGHT));
+ 		}
+ 
+ 		/* Position Word 2: Source Image Size */
+diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
+index f20379e4e2ec2..5df4bb52bb10f 100644
+--- a/drivers/infiniband/hw/cxgb4/qp.c
++++ b/drivers/infiniband/hw/cxgb4/qp.c
+@@ -2471,7 +2471,7 @@ int c4iw_ib_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 	init_attr->cap.max_send_wr = qhp->attr.sq_num_entries;
+ 	init_attr->cap.max_recv_wr = qhp->attr.rq_num_entries;
+ 	init_attr->cap.max_send_sge = qhp->attr.sq_max_sges;
+-	init_attr->cap.max_recv_sge = qhp->attr.sq_max_sges;
++	init_attr->cap.max_recv_sge = qhp->attr.rq_max_sges;
+ 	init_attr->cap.max_inline_data = T4_MAX_SEND_INLINE;
+ 	init_attr->sq_sig_type = qhp->sq_sig_all ? IB_SIGNAL_ALL_WR : 0;
+ 	return 0;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index fb092ff79d840..e317d7d6d5c0d 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -3305,8 +3305,7 @@ static int mlx5_add_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num)
+ 	int err;
+ 
+ 	dev->port[port_num].roce.nb.notifier_call = mlx5_netdev_event;
+-	err = register_netdevice_notifier_net(mlx5_core_net(dev->mdev),
+-					      &dev->port[port_num].roce.nb);
++	err = register_netdevice_notifier(&dev->port[port_num].roce.nb);
+ 	if (err) {
+ 		dev->port[port_num].roce.nb.notifier_call = NULL;
+ 		return err;
+@@ -3318,8 +3317,7 @@ static int mlx5_add_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num)
+ static void mlx5_remove_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num)
+ {
+ 	if (dev->port[port_num].roce.nb.notifier_call) {
+-		unregister_netdevice_notifier_net(mlx5_core_net(dev->mdev),
+-						  &dev->port[port_num].roce.nb);
++		unregister_netdevice_notifier(&dev->port[port_num].roce.nb);
+ 		dev->port[port_num].roce.nb.notifier_call = NULL;
+ 	}
+ }
+diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
+index 6b8cbdf717140..b4adab6985632 100644
+--- a/drivers/iommu/amd/amd_iommu.h
++++ b/drivers/iommu/amd/amd_iommu.h
+@@ -84,12 +84,9 @@ static inline bool is_rd890_iommu(struct pci_dev *pdev)
+ 	       (pdev->device == PCI_DEVICE_ID_RD890_IOMMU);
+ }
+ 
+-static inline bool iommu_feature(struct amd_iommu *iommu, u64 f)
++static inline bool iommu_feature(struct amd_iommu *iommu, u64 mask)
+ {
+-	if (!(iommu->cap & (1 << IOMMU_CAP_EFR)))
+-		return false;
+-
+-	return !!(iommu->features & f);
++	return !!(iommu->features & mask);
+ }
+ 
+ static inline u64 iommu_virt_to_phys(void *vaddr)
+diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
+index 494b42a31b7ae..33446c9d3bac8 100644
+--- a/drivers/iommu/amd/amd_iommu_types.h
++++ b/drivers/iommu/amd/amd_iommu_types.h
+@@ -379,6 +379,10 @@
+ #define IOMMU_CAP_NPCACHE 26
+ #define IOMMU_CAP_EFR     27
+ 
++/* IOMMU IVINFO */
++#define IOMMU_IVINFO_OFFSET     36
++#define IOMMU_IVINFO_EFRSUP     BIT(0)
++
+ /* IOMMU Feature Reporting Field (for IVHD type 10h */
+ #define IOMMU_FEAT_GASUP_SHIFT	6
+ 
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 23a790f8f5506..c842545368fdd 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -257,6 +257,8 @@ static void init_device_table_dma(void);
+ 
+ static bool amd_iommu_pre_enabled = true;
+ 
++static u32 amd_iommu_ivinfo __initdata;
++
+ bool translation_pre_enabled(struct amd_iommu *iommu)
+ {
+ 	return (iommu->flags & AMD_IOMMU_FLAG_TRANS_PRE_ENABLED);
+@@ -296,6 +298,18 @@ int amd_iommu_get_num_iommus(void)
+ 	return amd_iommus_present;
+ }
+ 
++/*
++ * For IVHD type 0x11/0x40, EFR is also available via IVHD.
++ * Default to IVHD EFR since it is available sooner
++ * (i.e. before PCI init).
++ */
++static void __init early_iommu_features_init(struct amd_iommu *iommu,
++					     struct ivhd_header *h)
++{
++	if (amd_iommu_ivinfo & IOMMU_IVINFO_EFRSUP)
++		iommu->features = h->efr_reg;
++}
++
+ /* Access to l1 and l2 indexed register spaces */
+ 
+ static u32 iommu_read_l1(struct amd_iommu *iommu, u16 l1, u8 address)
+@@ -1584,6 +1598,9 @@ static int __init init_iommu_one(struct amd_iommu *iommu, struct ivhd_header *h)
+ 		if ((h->efr_reg & BIT(IOMMU_EFR_XTSUP_SHIFT)) &&
+ 		    (h->efr_reg & BIT(IOMMU_EFR_MSICAPMMIOSUP_SHIFT)))
+ 			amd_iommu_xt_mode = IRQ_REMAP_X2APIC_MODE;
++
++		early_iommu_features_init(iommu, h);
++
+ 		break;
+ 	default:
+ 		return -EINVAL;
+@@ -1775,6 +1792,35 @@ static const struct attribute_group *amd_iommu_groups[] = {
+ 	NULL,
+ };
+ 
++/*
++ * Note: IVHD 0x11 and 0x40 also contains exact copy
++ * of the IOMMU Extended Feature Register [MMIO Offset 0030h].
++ * Default to EFR in IVHD since it is available sooner (i.e. before PCI init).
++ */
++static void __init late_iommu_features_init(struct amd_iommu *iommu)
++{
++	u64 features;
++
++	if (!(iommu->cap & (1 << IOMMU_CAP_EFR)))
++		return;
++
++	/* read extended feature bits */
++	features = readq(iommu->mmio_base + MMIO_EXT_FEATURES);
++
++	if (!iommu->features) {
++		iommu->features = features;
++		return;
++	}
++
++	/*
++	 * Sanity check and warn if EFR values from
++	 * IVHD and MMIO conflict.
++	 */
++	if (features != iommu->features)
++		pr_warn(FW_WARN "EFR mismatch. Use IVHD EFR (%#llx : %#llx\n).",
++			features, iommu->features);
++}
++
+ static int __init iommu_init_pci(struct amd_iommu *iommu)
+ {
+ 	int cap_ptr = iommu->cap_ptr;
+@@ -1794,8 +1840,7 @@ static int __init iommu_init_pci(struct amd_iommu *iommu)
+ 	if (!(iommu->cap & (1 << IOMMU_CAP_IOTLB)))
+ 		amd_iommu_iotlb_sup = false;
+ 
+-	/* read extended feature bits */
+-	iommu->features = readq(iommu->mmio_base + MMIO_EXT_FEATURES);
++	late_iommu_features_init(iommu);
+ 
+ 	if (iommu_feature(iommu, FEATURE_GT)) {
+ 		int glxval;
+@@ -2525,6 +2570,11 @@ static void __init free_dma_resources(void)
+ 	free_unity_maps();
+ }
+ 
++static void __init ivinfo_init(void *ivrs)
++{
++	amd_iommu_ivinfo = *((u32 *)(ivrs + IOMMU_IVINFO_OFFSET));
++}
++
+ /*
+  * This is the hardware init function for AMD IOMMU in the system.
+  * This function is called either from amd_iommu_init or from the interrupt
+@@ -2579,6 +2629,8 @@ static int __init early_amd_iommu_init(void)
+ 	if (ret)
+ 		goto out;
+ 
++	ivinfo_init(ivrs_base);
++
+ 	amd_iommu_target_ivhd_type = get_highest_supported_ivhd_type(ivrs_base);
+ 	DUMP_printk("Using IVHD type %#x\n", amd_iommu_target_ivhd_type);
+ 
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index 004feaed3c72c..02e7c10a4224b 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -1496,7 +1496,7 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
+ 	 * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
+ 	 * ECAP.
+ 	 */
+-	if (addr & GENMASK_ULL(size_order + VTD_PAGE_SHIFT, 0))
++	if (!IS_ALIGNED(addr, VTD_PAGE_SIZE << size_order))
+ 		pr_warn_ratelimited("Invalidate non-aligned address %llx, order %d\n",
+ 				    addr, size_order);
+ 
+diff --git a/drivers/leds/led-triggers.c b/drivers/leds/led-triggers.c
+index 91da90cfb11d9..4e7b78a84149b 100644
+--- a/drivers/leds/led-triggers.c
++++ b/drivers/leds/led-triggers.c
+@@ -378,14 +378,15 @@ void led_trigger_event(struct led_trigger *trig,
+ 			enum led_brightness brightness)
+ {
+ 	struct led_classdev *led_cdev;
++	unsigned long flags;
+ 
+ 	if (!trig)
+ 		return;
+ 
+-	read_lock(&trig->leddev_list_lock);
++	read_lock_irqsave(&trig->leddev_list_lock, flags);
+ 	list_for_each_entry(led_cdev, &trig->led_cdevs, trig_list)
+ 		led_set_brightness(led_cdev, brightness);
+-	read_unlock(&trig->leddev_list_lock);
++	read_unlock_irqrestore(&trig->leddev_list_lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(led_trigger_event);
+ 
+@@ -396,11 +397,12 @@ static void led_trigger_blink_setup(struct led_trigger *trig,
+ 			     int invert)
+ {
+ 	struct led_classdev *led_cdev;
++	unsigned long flags;
+ 
+ 	if (!trig)
+ 		return;
+ 
+-	read_lock(&trig->leddev_list_lock);
++	read_lock_irqsave(&trig->leddev_list_lock, flags);
+ 	list_for_each_entry(led_cdev, &trig->led_cdevs, trig_list) {
+ 		if (oneshot)
+ 			led_blink_set_oneshot(led_cdev, delay_on, delay_off,
+@@ -408,7 +410,7 @@ static void led_trigger_blink_setup(struct led_trigger *trig,
+ 		else
+ 			led_blink_set(led_cdev, delay_on, delay_off);
+ 	}
+-	read_unlock(&trig->leddev_list_lock);
++	read_unlock_irqrestore(&trig->leddev_list_lock, flags);
+ }
+ 
+ void led_trigger_blink(struct led_trigger *trig,
+diff --git a/drivers/md/bcache/features.h b/drivers/md/bcache/features.h
+index 84fc2c0f01015..d1c8fd3977fc6 100644
+--- a/drivers/md/bcache/features.h
++++ b/drivers/md/bcache/features.h
+@@ -33,6 +33,8 @@
+ #define BCH_FEATURE_COMPAT_FUNCS(name, flagname) \
+ static inline int bch_has_feature_##name(struct cache_sb *sb) \
+ { \
++	if (sb->version < BCACHE_SB_VERSION_CDEV_WITH_FEATURES) \
++		return 0; \
+ 	return (((sb)->feature_compat & \
+ 		BCH##_FEATURE_COMPAT_##flagname) != 0); \
+ } \
+@@ -50,6 +52,8 @@ static inline void bch_clear_feature_##name(struct cache_sb *sb) \
+ #define BCH_FEATURE_RO_COMPAT_FUNCS(name, flagname) \
+ static inline int bch_has_feature_##name(struct cache_sb *sb) \
+ { \
++	if (sb->version < BCACHE_SB_VERSION_CDEV_WITH_FEATURES) \
++		return 0; \
+ 	return (((sb)->feature_ro_compat & \
+ 		BCH##_FEATURE_RO_COMPAT_##flagname) != 0); \
+ } \
+@@ -67,6 +71,8 @@ static inline void bch_clear_feature_##name(struct cache_sb *sb) \
+ #define BCH_FEATURE_INCOMPAT_FUNCS(name, flagname) \
+ static inline int bch_has_feature_##name(struct cache_sb *sb) \
+ { \
++	if (sb->version < BCACHE_SB_VERSION_CDEV_WITH_FEATURES) \
++		return 0; \
+ 	return (((sb)->feature_incompat & \
+ 		BCH##_FEATURE_INCOMPAT_##flagname) != 0); \
+ } \
+diff --git a/drivers/media/cec/platform/Makefile b/drivers/media/cec/platform/Makefile
+index 3a947159b25ac..ea6f8ee8161c9 100644
+--- a/drivers/media/cec/platform/Makefile
++++ b/drivers/media/cec/platform/Makefile
+@@ -10,5 +10,6 @@ obj-$(CONFIG_CEC_MESON_AO)	+= meson/
+ obj-$(CONFIG_CEC_SAMSUNG_S5P)	+= s5p/
+ obj-$(CONFIG_CEC_SECO)		+= seco/
+ obj-$(CONFIG_CEC_STI)		+= sti/
++obj-$(CONFIG_CEC_STM32)		+= stm32/
+ obj-$(CONFIG_CEC_TEGRA)		+= tegra/
+ 
+diff --git a/drivers/media/rc/ir-mce_kbd-decoder.c b/drivers/media/rc/ir-mce_kbd-decoder.c
+index be8f2756a444e..1524dc0fc566e 100644
+--- a/drivers/media/rc/ir-mce_kbd-decoder.c
++++ b/drivers/media/rc/ir-mce_kbd-decoder.c
+@@ -320,7 +320,7 @@ again:
+ 				data->body);
+ 			spin_lock(&data->keylock);
+ 			if (scancode) {
+-				delay = nsecs_to_jiffies(dev->timeout) +
++				delay = usecs_to_jiffies(dev->timeout) +
+ 					msecs_to_jiffies(100);
+ 				mod_timer(&data->rx_timeout, jiffies + delay);
+ 			} else {
+diff --git a/drivers/media/rc/ite-cir.c b/drivers/media/rc/ite-cir.c
+index a905113fef6ea..0c6229592e132 100644
+--- a/drivers/media/rc/ite-cir.c
++++ b/drivers/media/rc/ite-cir.c
+@@ -1551,7 +1551,7 @@ static int ite_probe(struct pnp_dev *pdev, const struct pnp_device_id
+ 	rdev->s_rx_carrier_range = ite_set_rx_carrier_range;
+ 	/* FIFO threshold is 17 bytes, so 17 * 8 samples minimum */
+ 	rdev->min_timeout = 17 * 8 * ITE_BAUDRATE_DIVISOR *
+-			    itdev->params.sample_period;
++			    itdev->params.sample_period / 1000;
+ 	rdev->timeout = IR_DEFAULT_TIMEOUT;
+ 	rdev->max_timeout = 10 * IR_DEFAULT_TIMEOUT;
+ 	rdev->rx_resolution = ITE_BAUDRATE_DIVISOR *
+diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
+index 1d811e5ffb557..1fd62c1dac768 100644
+--- a/drivers/media/rc/rc-main.c
++++ b/drivers/media/rc/rc-main.c
+@@ -737,7 +737,7 @@ static unsigned int repeat_period(int protocol)
+ void rc_repeat(struct rc_dev *dev)
+ {
+ 	unsigned long flags;
+-	unsigned int timeout = nsecs_to_jiffies(dev->timeout) +
++	unsigned int timeout = usecs_to_jiffies(dev->timeout) +
+ 		msecs_to_jiffies(repeat_period(dev->last_protocol));
+ 	struct lirc_scancode sc = {
+ 		.scancode = dev->last_scancode, .rc_proto = dev->last_protocol,
+@@ -855,7 +855,7 @@ void rc_keydown(struct rc_dev *dev, enum rc_proto protocol, u64 scancode,
+ 	ir_do_keydown(dev, protocol, scancode, keycode, toggle);
+ 
+ 	if (dev->keypressed) {
+-		dev->keyup_jiffies = jiffies + nsecs_to_jiffies(dev->timeout) +
++		dev->keyup_jiffies = jiffies + usecs_to_jiffies(dev->timeout) +
+ 			msecs_to_jiffies(repeat_period(protocol));
+ 		mod_timer(&dev->timer_keyup, dev->keyup_jiffies);
+ 	}
+@@ -1928,6 +1928,8 @@ int rc_register_device(struct rc_dev *dev)
+ 			goto out_raw;
+ 	}
+ 
++	dev->registered = true;
++
+ 	rc = device_add(&dev->dev);
+ 	if (rc)
+ 		goto out_rx_free;
+@@ -1937,8 +1939,6 @@ int rc_register_device(struct rc_dev *dev)
+ 		 dev->device_name ?: "Unspecified device", path ?: "N/A");
+ 	kfree(path);
+ 
+-	dev->registered = true;
+-
+ 	/*
+ 	 * once the the input device is registered in rc_setup_rx_device,
+ 	 * userspace can open the input device and rc_open() will be called
+diff --git a/drivers/media/rc/serial_ir.c b/drivers/media/rc/serial_ir.c
+index 8cc28c92d05d6..96ae0294ac102 100644
+--- a/drivers/media/rc/serial_ir.c
++++ b/drivers/media/rc/serial_ir.c
+@@ -385,7 +385,7 @@ static irqreturn_t serial_ir_irq_handler(int i, void *blah)
+ 	} while (!(sinp(UART_IIR) & UART_IIR_NO_INT)); /* still pending ? */
+ 
+ 	mod_timer(&serial_ir.timeout_timer,
+-		  jiffies + nsecs_to_jiffies(serial_ir.rcdev->timeout));
++		  jiffies + usecs_to_jiffies(serial_ir.rcdev->timeout));
+ 
+ 	ir_raw_event_handle(serial_ir.rcdev);
+ 
+diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
+index 09879aea9f7cc..24cd3c1027ecc 100644
+--- a/drivers/net/can/dev.c
++++ b/drivers/net/can/dev.c
+@@ -1163,7 +1163,7 @@ static int can_fill_info(struct sk_buff *skb, const struct net_device *dev)
+ {
+ 	struct can_priv *priv = netdev_priv(dev);
+ 	struct can_ctrlmode cm = {.flags = priv->ctrlmode};
+-	struct can_berr_counter bec;
++	struct can_berr_counter bec = { };
+ 	enum can_state state = priv->state;
+ 
+ 	if (priv->do_get_state)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 61968e9174dab..2872c4dc77f07 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -4046,20 +4046,16 @@ int i40e_ndo_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
+ 		goto error_param;
+ 
+ 	vf = &pf->vf[vf_id];
+-	vsi = pf->vsi[vf->lan_vsi_idx];
+ 
+ 	/* When the VF is resetting wait until it is done.
+ 	 * It can take up to 200 milliseconds,
+ 	 * but wait for up to 300 milliseconds to be safe.
+-	 * If the VF is indeed in reset, the vsi pointer has
+-	 * to show on the newly loaded vsi under pf->vsi[id].
++	 * Acquire the VSI pointer only after the VF has been
++	 * properly initialized.
+ 	 */
+ 	for (i = 0; i < 15; i++) {
+-		if (test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) {
+-			if (i > 0)
+-				vsi = pf->vsi[vf->lan_vsi_idx];
++		if (test_bit(I40E_VF_STATE_INIT, &vf->vf_states))
+ 			break;
+-		}
+ 		msleep(20);
+ 	}
+ 	if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) {
+@@ -4068,6 +4064,7 @@ int i40e_ndo_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
+ 		ret = -EAGAIN;
+ 		goto error_param;
+ 	}
++	vsi = pf->vsi[vf->lan_vsi_idx];
+ 
+ 	if (is_multicast_ether_addr(mac)) {
+ 		dev_err(&pf->pdev->dev,
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index a0723831c4e48..54cf382fddaf9 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -68,7 +68,9 @@
+ #define ICE_INT_NAME_STR_LEN	(IFNAMSIZ + 16)
+ #define ICE_AQ_LEN		64
+ #define ICE_MBXSQ_LEN		64
+-#define ICE_MIN_MSIX		2
++#define ICE_MIN_LAN_TXRX_MSIX	1
++#define ICE_MIN_LAN_OICR_MSIX	1
++#define ICE_MIN_MSIX		(ICE_MIN_LAN_TXRX_MSIX + ICE_MIN_LAN_OICR_MSIX)
+ #define ICE_FDIR_MSIX		1
+ #define ICE_NO_VSI		0xffff
+ #define ICE_VSI_MAP_CONTIG	0
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 9e8e9531cd871..69c113a4de7e6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -3258,8 +3258,8 @@ ice_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key,
+  */
+ static int ice_get_max_txq(struct ice_pf *pf)
+ {
+-	return min_t(int, num_online_cpus(),
+-		     pf->hw.func_caps.common_cap.num_txq);
++	return min3(pf->num_lan_msix, (u16)num_online_cpus(),
++		    (u16)pf->hw.func_caps.common_cap.num_txq);
+ }
+ 
+ /**
+@@ -3268,8 +3268,8 @@ static int ice_get_max_txq(struct ice_pf *pf)
+  */
+ static int ice_get_max_rxq(struct ice_pf *pf)
+ {
+-	return min_t(int, num_online_cpus(),
+-		     pf->hw.func_caps.common_cap.num_rxq);
++	return min3(pf->num_lan_msix, (u16)num_online_cpus(),
++		    (u16)pf->hw.func_caps.common_cap.num_rxq);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+index 2d27f66ac8534..192729546bbfc 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+@@ -1576,7 +1576,13 @@ ice_set_fdir_input_set(struct ice_vsi *vsi, struct ethtool_rx_flow_spec *fsp,
+ 		       sizeof(struct in6_addr));
+ 		input->ip.v6.l4_header = fsp->h_u.usr_ip6_spec.l4_4_bytes;
+ 		input->ip.v6.tc = fsp->h_u.usr_ip6_spec.tclass;
+-		input->ip.v6.proto = fsp->h_u.usr_ip6_spec.l4_proto;
++
++		/* if no protocol requested, use IPPROTO_NONE */
++		if (!fsp->m_u.usr_ip6_spec.l4_proto)
++			input->ip.v6.proto = IPPROTO_NONE;
++		else
++			input->ip.v6.proto = fsp->h_u.usr_ip6_spec.l4_proto;
++
+ 		memcpy(input->mask.v6.dst_ip, fsp->m_u.usr_ip6_spec.ip6dst,
+ 		       sizeof(struct in6_addr));
+ 		memcpy(input->mask.v6.src_ip, fsp->m_u.usr_ip6_spec.ip6src,
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 3df67486d42d9..ad9c22a1b97a0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -161,8 +161,9 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
+ 
+ 	switch (vsi->type) {
+ 	case ICE_VSI_PF:
+-		vsi->alloc_txq = min_t(int, ice_get_avail_txq_count(pf),
+-				       num_online_cpus());
++		vsi->alloc_txq = min3(pf->num_lan_msix,
++				      ice_get_avail_txq_count(pf),
++				      (u16)num_online_cpus());
+ 		if (vsi->req_txq) {
+ 			vsi->alloc_txq = vsi->req_txq;
+ 			vsi->num_txq = vsi->req_txq;
+@@ -174,8 +175,9 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
+ 		if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) {
+ 			vsi->alloc_rxq = 1;
+ 		} else {
+-			vsi->alloc_rxq = min_t(int, ice_get_avail_rxq_count(pf),
+-					       num_online_cpus());
++			vsi->alloc_rxq = min3(pf->num_lan_msix,
++					      ice_get_avail_rxq_count(pf),
++					      (u16)num_online_cpus());
+ 			if (vsi->req_rxq) {
+ 				vsi->alloc_rxq = vsi->req_rxq;
+ 				vsi->num_rxq = vsi->req_rxq;
+@@ -184,7 +186,9 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
+ 
+ 		pf->num_lan_rx = vsi->alloc_rxq;
+ 
+-		vsi->num_q_vectors = max_t(int, vsi->alloc_rxq, vsi->alloc_txq);
++		vsi->num_q_vectors = min_t(int, pf->num_lan_msix,
++					   max_t(int, vsi->alloc_rxq,
++						 vsi->alloc_txq));
+ 		break;
+ 	case ICE_VSI_VF:
+ 		vf = &pf->vf[vsi->vf_id];
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 2dea4d0e9415c..bacb368063e34 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -3433,18 +3433,14 @@ static int ice_ena_msix_range(struct ice_pf *pf)
+ 	if (v_actual < v_budget) {
+ 		dev_warn(dev, "not enough OS MSI-X vectors. requested = %d, obtained = %d\n",
+ 			 v_budget, v_actual);
+-/* 2 vectors each for LAN and RDMA (traffic + OICR), one for flow director */
+-#define ICE_MIN_LAN_VECS 2
+-#define ICE_MIN_RDMA_VECS 2
+-#define ICE_MIN_VECS (ICE_MIN_LAN_VECS + ICE_MIN_RDMA_VECS + 1)
+ 
+-		if (v_actual < ICE_MIN_LAN_VECS) {
++		if (v_actual < ICE_MIN_MSIX) {
+ 			/* error if we can't get minimum vectors */
+ 			pci_disable_msix(pf->pdev);
+ 			err = -ERANGE;
+ 			goto msix_err;
+ 		} else {
+-			pf->num_lan_msix = ICE_MIN_LAN_VECS;
++			pf->num_lan_msix = ICE_MIN_LAN_TXRX_MSIX;
+ 		}
+ 	}
+ 
+@@ -4887,9 +4883,15 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 		goto err_update_filters;
+ 	}
+ 
+-	/* Add filter for new MAC. If filter exists, just return success */
++	/* Add filter for new MAC. If filter exists, return success */
+ 	status = ice_fltr_add_mac(vsi, mac, ICE_FWD_TO_VSI);
+ 	if (status == ICE_ERR_ALREADY_EXISTS) {
++		/* Although this MAC filter is already present in hardware it's
++		 * possible in some cases (e.g. bonding) that dev_addr was
++		 * modified outside of the driver and needs to be restored back
++		 * to this value.
++		 */
++		memcpy(netdev->dev_addr, mac, netdev->addr_len);
+ 		netdev_dbg(netdev, "filter for MAC %pM already exists\n", mac);
+ 		return 0;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 23eca2f0a03b1..af5b7f33db9af 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -1923,12 +1923,15 @@ int ice_tx_csum(struct ice_tx_buf *first, struct ice_tx_offload_params *off)
+ 				  ICE_TX_CTX_EIPT_IPV4_NO_CSUM;
+ 			l4_proto = ip.v4->protocol;
+ 		} else if (first->tx_flags & ICE_TX_FLAGS_IPV6) {
++			int ret;
++
+ 			tunnel |= ICE_TX_CTX_EIPT_IPV6;
+ 			exthdr = ip.hdr + sizeof(*ip.v6);
+ 			l4_proto = ip.v6->nexthdr;
+-			if (l4.hdr != exthdr)
+-				ipv6_skip_exthdr(skb, exthdr - skb->data,
+-						 &l4_proto, &frag_off);
++			ret = ipv6_skip_exthdr(skb, exthdr - skb->data,
++					       &l4_proto, &frag_off);
++			if (ret < 0)
++				return -1;
+ 		}
+ 
+ 		/* define outer transport */
+diff --git a/drivers/net/ethernet/intel/igc/igc_ethtool.c b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+index 61d331ce38cdd..831f2f09de5fb 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ethtool.c
++++ b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+@@ -1675,12 +1675,18 @@ static int igc_ethtool_get_link_ksettings(struct net_device *netdev,
+ 	cmd->base.phy_address = hw->phy.addr;
+ 
+ 	/* advertising link modes */
+-	ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Half);
+-	ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Full);
+-	ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Half);
+-	ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Full);
+-	ethtool_link_ksettings_add_link_mode(cmd, advertising, 1000baseT_Full);
+-	ethtool_link_ksettings_add_link_mode(cmd, advertising, 2500baseT_Full);
++	if (hw->phy.autoneg_advertised & ADVERTISE_10_HALF)
++		ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Half);
++	if (hw->phy.autoneg_advertised & ADVERTISE_10_FULL)
++		ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Full);
++	if (hw->phy.autoneg_advertised & ADVERTISE_100_HALF)
++		ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Half);
++	if (hw->phy.autoneg_advertised & ADVERTISE_100_FULL)
++		ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Full);
++	if (hw->phy.autoneg_advertised & ADVERTISE_1000_FULL)
++		ethtool_link_ksettings_add_link_mode(cmd, advertising, 1000baseT_Full);
++	if (hw->phy.autoneg_advertised & ADVERTISE_2500_FULL)
++		ethtool_link_ksettings_add_link_mode(cmd, advertising, 2500baseT_Full);
+ 
+ 	/* set autoneg settings */
+ 	if (hw->mac.autoneg == 1) {
+@@ -1792,6 +1798,12 @@ igc_ethtool_set_link_ksettings(struct net_device *netdev,
+ 
+ 	ethtool_convert_link_mode_to_legacy_u32(&advertising,
+ 						cmd->link_modes.advertising);
++	/* Converting to legacy u32 drops ETHTOOL_LINK_MODE_2500baseT_Full_BIT.
++	 * We have to check this and convert it to ADVERTISE_2500_FULL
++	 * (aka ETHTOOL_LINK_MODE_2500baseX_Full_BIT) explicitly.
++	 */
++	if (ethtool_link_ksettings_test_link_mode(cmd, advertising, 2500baseT_Full))
++		advertising |= ADVERTISE_2500_FULL;
+ 
+ 	if (cmd->base.autoneg == AUTONEG_ENABLE) {
+ 		hw->mac.autoneg = 1;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
+index 69a05da0e3e3d..e03e78a35df00 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
+@@ -275,7 +275,7 @@ int mlx5e_health_rsc_fmsg_dump(struct mlx5e_priv *priv, struct mlx5_rsc_key *key
+ 
+ 	err = devlink_fmsg_binary_pair_nest_start(fmsg, "data");
+ 	if (err)
+-		return err;
++		goto free_page;
+ 
+ 	cmd = mlx5_rsc_dump_cmd_create(mdev, key);
+ 	if (IS_ERR(cmd)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index 072363e73f1ce..6bc6b48a56dc7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -167,6 +167,12 @@ static const struct rhashtable_params tuples_nat_ht_params = {
+ 	.min_size = 16 * 1024,
+ };
+ 
++static bool
++mlx5_tc_ct_entry_has_nat(struct mlx5_ct_entry *entry)
++{
++	return !!(entry->tuple_nat_node.next);
++}
++
+ static int
+ mlx5_tc_ct_rule_to_tuple(struct mlx5_ct_tuple *tuple, struct flow_rule *rule)
+ {
+@@ -911,13 +917,13 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft,
+ err_insert:
+ 	mlx5_tc_ct_entry_del_rules(ct_priv, entry);
+ err_rules:
+-	rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht,
+-			       &entry->tuple_nat_node, tuples_nat_ht_params);
++	if (mlx5_tc_ct_entry_has_nat(entry))
++		rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht,
++				       &entry->tuple_nat_node, tuples_nat_ht_params);
+ err_tuple_nat:
+-	if (entry->tuple_node.next)
+-		rhashtable_remove_fast(&ct_priv->ct_tuples_ht,
+-				       &entry->tuple_node,
+-				       tuples_ht_params);
++	rhashtable_remove_fast(&ct_priv->ct_tuples_ht,
++			       &entry->tuple_node,
++			       tuples_ht_params);
+ err_tuple:
+ err_set:
+ 	kfree(entry);
+@@ -932,7 +938,7 @@ mlx5_tc_ct_del_ft_entry(struct mlx5_tc_ct_priv *ct_priv,
+ {
+ 	mlx5_tc_ct_entry_del_rules(ct_priv, entry);
+ 	mutex_lock(&ct_priv->shared_counter_lock);
+-	if (entry->tuple_node.next)
++	if (mlx5_tc_ct_entry_has_nat(entry))
+ 		rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht,
+ 				       &entry->tuple_nat_node,
+ 				       tuples_nat_ht_params);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_stats.c
+index 6c5c54bcd9be0..5cb936541b9e9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_stats.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_stats.c
+@@ -76,7 +76,7 @@ static const struct counter_desc mlx5e_ipsec_sw_stats_desc[] = {
+ 
+ static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(ipsec_sw)
+ {
+-	return NUM_IPSEC_SW_COUNTERS;
++	return priv->ipsec ? NUM_IPSEC_SW_COUNTERS : 0;
+ }
+ 
+ static inline MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(ipsec_sw) {}
+@@ -105,7 +105,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STATS(ipsec_sw)
+ 
+ static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(ipsec_hw)
+ {
+-	return (mlx5_fpga_ipsec_device_caps(priv->mdev)) ? NUM_IPSEC_HW_COUNTERS : 0;
++	return (priv->ipsec && mlx5_fpga_ipsec_device_caps(priv->mdev)) ? NUM_IPSEC_HW_COUNTERS : 0;
+ }
+ 
+ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(ipsec_hw)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+index d20243d6a0326..f23c67575073a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+@@ -1151,6 +1151,7 @@ static int mlx5e_set_trust_state(struct mlx5e_priv *priv, u8 trust_state)
+ {
+ 	struct mlx5e_channels new_channels = {};
+ 	bool reset_channels = true;
++	bool opened;
+ 	int err = 0;
+ 
+ 	mutex_lock(&priv->state_lock);
+@@ -1159,22 +1160,24 @@ static int mlx5e_set_trust_state(struct mlx5e_priv *priv, u8 trust_state)
+ 	mlx5e_params_calc_trust_tx_min_inline_mode(priv->mdev, &new_channels.params,
+ 						   trust_state);
+ 
+-	if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
+-		priv->channels.params = new_channels.params;
++	opened = test_bit(MLX5E_STATE_OPENED, &priv->state);
++	if (!opened)
+ 		reset_channels = false;
+-	}
+ 
+ 	/* Skip if tx_min_inline is the same */
+ 	if (new_channels.params.tx_min_inline_mode ==
+ 	    priv->channels.params.tx_min_inline_mode)
+ 		reset_channels = false;
+ 
+-	if (reset_channels)
++	if (reset_channels) {
+ 		err = mlx5e_safe_switch_channels(priv, &new_channels,
+ 						 mlx5e_update_trust_state_hw,
+ 						 &trust_state);
+-	else
++	} else {
+ 		err = mlx5e_update_trust_state_hw(priv, &trust_state);
++		if (!err && !opened)
++			priv->channels.params = new_channels.params;
++	}
+ 
+ 	mutex_unlock(&priv->state_lock);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index f01395a9fd8df..e596f050c4316 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -444,12 +444,18 @@ int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv,
+ 		goto out;
+ 	}
+ 
+-	new_channels.params = priv->channels.params;
++	new_channels.params = *cur_params;
+ 	new_channels.params.num_channels = count;
+ 
+ 	if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
++		struct mlx5e_params old_params;
++
++		old_params = *cur_params;
+ 		*cur_params = new_channels.params;
+ 		err = mlx5e_num_channels_changed(priv);
++		if (err)
++			*cur_params = old_params;
++
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index ebce97921e03c..c9b5d7f29911e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3580,7 +3580,14 @@ static int mlx5e_setup_tc_mqprio(struct mlx5e_priv *priv,
+ 	new_channels.params.num_tc = tc ? tc : 1;
+ 
+ 	if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
++		struct mlx5e_params old_params;
++
++		old_params = priv->channels.params;
+ 		priv->channels.params = new_channels.params;
++		err = mlx5e_num_channels_changed(priv);
++		if (err)
++			priv->channels.params = old_params;
++
+ 		goto out;
+ 	}
+ 
+@@ -3723,7 +3730,7 @@ static int set_feature_lro(struct net_device *netdev, bool enable)
+ 	struct mlx5e_priv *priv = netdev_priv(netdev);
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 	struct mlx5e_channels new_channels = {};
+-	struct mlx5e_params *old_params;
++	struct mlx5e_params *cur_params;
+ 	int err = 0;
+ 	bool reset;
+ 
+@@ -3736,8 +3743,8 @@ static int set_feature_lro(struct net_device *netdev, bool enable)
+ 		goto out;
+ 	}
+ 
+-	old_params = &priv->channels.params;
+-	if (enable && !MLX5E_GET_PFLAG(old_params, MLX5E_PFLAG_RX_STRIDING_RQ)) {
++	cur_params = &priv->channels.params;
++	if (enable && !MLX5E_GET_PFLAG(cur_params, MLX5E_PFLAG_RX_STRIDING_RQ)) {
+ 		netdev_warn(netdev, "can't set LRO with legacy RQ\n");
+ 		err = -EINVAL;
+ 		goto out;
+@@ -3745,18 +3752,23 @@ static int set_feature_lro(struct net_device *netdev, bool enable)
+ 
+ 	reset = test_bit(MLX5E_STATE_OPENED, &priv->state);
+ 
+-	new_channels.params = *old_params;
++	new_channels.params = *cur_params;
+ 	new_channels.params.lro_en = enable;
+ 
+-	if (old_params->rq_wq_type != MLX5_WQ_TYPE_CYCLIC) {
+-		if (mlx5e_rx_mpwqe_is_linear_skb(mdev, old_params, NULL) ==
++	if (cur_params->rq_wq_type != MLX5_WQ_TYPE_CYCLIC) {
++		if (mlx5e_rx_mpwqe_is_linear_skb(mdev, cur_params, NULL) ==
+ 		    mlx5e_rx_mpwqe_is_linear_skb(mdev, &new_channels.params, NULL))
+ 			reset = false;
+ 	}
+ 
+ 	if (!reset) {
+-		*old_params = new_channels.params;
++		struct mlx5e_params old_params;
++
++		old_params = *cur_params;
++		*cur_params = new_channels.params;
+ 		err = mlx5e_modify_tirs_lro(priv);
++		if (err)
++			*cur_params = old_params;
+ 		goto out;
+ 	}
+ 
+@@ -4030,9 +4042,16 @@ int mlx5e_change_mtu(struct net_device *netdev, int new_mtu,
+ 	}
+ 
+ 	if (!reset) {
++		unsigned int old_mtu = params->sw_mtu;
++
+ 		params->sw_mtu = new_mtu;
+-		if (preactivate)
+-			preactivate(priv, NULL);
++		if (preactivate) {
++			err = preactivate(priv, NULL);
++			if (err) {
++				params->sw_mtu = old_mtu;
++				goto out;
++			}
++		}
+ 		netdev->mtu = params->sw_mtu;
+ 		goto out;
+ 	}
+@@ -4990,7 +5009,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ 	    FT_CAP(modify_root) &&
+ 	    FT_CAP(identified_miss_table_mode) &&
+ 	    FT_CAP(flow_table_modify)) {
+-#ifdef CONFIG_MLX5_ESWITCH
++#if IS_ENABLED(CONFIG_MLX5_CLS_ACT)
+ 		netdev->hw_features      |= NETIF_F_HW_TC;
+ #endif
+ #ifdef CONFIG_MLX5_EN_ARFS
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 67247c33b9fd6..304435e561170 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -738,7 +738,9 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev)
+ 
+ 	netdev->features       |= NETIF_F_NETNS_LOCAL;
+ 
++#if IS_ENABLED(CONFIG_MLX5_CLS_ACT)
+ 	netdev->hw_features    |= NETIF_F_HW_TC;
++#endif
+ 	netdev->hw_features    |= NETIF_F_SG;
+ 	netdev->hw_features    |= NETIF_F_IP_CSUM;
+ 	netdev->hw_features    |= NETIF_F_IPV6_CSUM;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index ce710f22b1fff..4b8a442f09cd6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -67,6 +67,7 @@
+ #include "lib/geneve.h"
+ #include "lib/fs_chains.h"
+ #include "diag/en_tc_tracepoint.h"
++#include <asm/div64.h>
+ 
+ #define nic_chains(priv) ((priv)->fs.tc.chains)
+ #define MLX5_MH_ACT_SZ MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)
+@@ -1164,6 +1165,9 @@ mlx5e_tc_offload_fdb_rules(struct mlx5_eswitch *esw,
+ 	struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts;
+ 	struct mlx5_flow_handle *rule;
+ 
++	if (attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH)
++		return mlx5_eswitch_add_offloaded_rule(esw, spec, attr);
++
+ 	if (flow_flag_test(flow, CT)) {
+ 		mod_hdr_acts = &attr->parse_attr->mod_hdr_acts;
+ 
+@@ -1194,6 +1198,9 @@ mlx5e_tc_unoffload_fdb_rules(struct mlx5_eswitch *esw,
+ {
+ 	flow_flag_clear(flow, OFFLOADED);
+ 
++	if (attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH)
++		goto offload_rule_0;
++
+ 	if (flow_flag_test(flow, CT)) {
+ 		mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), flow, attr);
+ 		return;
+@@ -1202,6 +1209,7 @@ mlx5e_tc_unoffload_fdb_rules(struct mlx5_eswitch *esw,
+ 	if (attr->esw_attr->split_count)
+ 		mlx5_eswitch_del_fwd_rule(esw, flow->rule[1], attr);
+ 
++offload_rule_0:
+ 	mlx5_eswitch_del_offloaded_rule(esw, flow->rule[0], attr);
+ }
+ 
+@@ -2271,8 +2279,8 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ 	      BIT(FLOW_DISSECTOR_KEY_ENC_OPTS) |
+ 	      BIT(FLOW_DISSECTOR_KEY_MPLS))) {
+ 		NL_SET_ERR_MSG_MOD(extack, "Unsupported key");
+-		netdev_warn(priv->netdev, "Unsupported key used: 0x%x\n",
+-			    dissector->used_keys);
++		netdev_dbg(priv->netdev, "Unsupported key used: 0x%x\n",
++			   dissector->used_keys);
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+@@ -5009,13 +5017,13 @@ errout:
+ 	return err;
+ }
+ 
+-static int apply_police_params(struct mlx5e_priv *priv, u32 rate,
++static int apply_police_params(struct mlx5e_priv *priv, u64 rate,
+ 			       struct netlink_ext_ack *extack)
+ {
+ 	struct mlx5e_rep_priv *rpriv = priv->ppriv;
+ 	struct mlx5_eswitch *esw;
++	u32 rate_mbps = 0;
+ 	u16 vport_num;
+-	u32 rate_mbps;
+ 	int err;
+ 
+ 	vport_num = rpriv->rep->vport;
+@@ -5032,7 +5040,11 @@ static int apply_police_params(struct mlx5e_priv *priv, u32 rate,
+ 	 * Moreover, if rate is non zero we choose to configure to a minimum of
+ 	 * 1 mbit/sec.
+ 	 */
+-	rate_mbps = rate ? max_t(u32, (rate * 8 + 500000) / 1000000, 1) : 0;
++	if (rate) {
++		rate = (rate * BITS_PER_BYTE) + 500000;
++		rate_mbps = max_t(u32, do_div(rate, 1000000), 1);
++	}
++
+ 	err = mlx5_esw_modify_vport_rate(esw, vport_num, rate_mbps);
+ 	if (err)
+ 		NL_SET_ERR_MSG_MOD(extack, "failed applying action to hardware");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 9fdd99272e310..634c2bfd25be1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1141,6 +1141,7 @@ static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespa
+ destroy_ft:
+ 	root->cmds->destroy_flow_table(root, ft);
+ free_ft:
++	rhltable_destroy(&ft->fgs_hash);
+ 	kfree(ft);
+ unlock_root:
+ 	mutex_unlock(&root->chain_lock);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
+index 3a9fa629503f0..d046db7bb047d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
+@@ -90,4 +90,9 @@ int mlx5_create_encryption_key(struct mlx5_core_dev *mdev,
+ 			       u32 key_type, u32 *p_key_id);
+ void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id);
+ 
++static inline struct net *mlx5_core_net(struct mlx5_core_dev *dev)
++{
++	return devlink_net(priv_to_devlink(dev));
++}
++
+ #endif
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+index 4d7f8a357df76..a3e0c71831928 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+@@ -58,7 +58,7 @@ struct fw_page {
+ 	struct rb_node		rb_node;
+ 	u64			addr;
+ 	struct page	       *page;
+-	u16			func_id;
++	u32			function;
+ 	unsigned long		bitmask;
+ 	struct list_head	list;
+ 	unsigned		free_count;
+@@ -74,12 +74,17 @@ enum {
+ 	MLX5_NUM_4K_IN_PAGE		= PAGE_SIZE / MLX5_ADAPTER_PAGE_SIZE,
+ };
+ 
+-static struct rb_root *page_root_per_func_id(struct mlx5_core_dev *dev, u16 func_id)
++static u32 get_function(u16 func_id, bool ec_function)
++{
++	return func_id & (ec_function << 16);
++}
++
++static struct rb_root *page_root_per_function(struct mlx5_core_dev *dev, u32 function)
+ {
+ 	struct rb_root *root;
+ 	int err;
+ 
+-	root = xa_load(&dev->priv.page_root_xa, func_id);
++	root = xa_load(&dev->priv.page_root_xa, function);
+ 	if (root)
+ 		return root;
+ 
+@@ -87,7 +92,7 @@ static struct rb_root *page_root_per_func_id(struct mlx5_core_dev *dev, u16 func
+ 	if (!root)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	err = xa_insert(&dev->priv.page_root_xa, func_id, root, GFP_KERNEL);
++	err = xa_insert(&dev->priv.page_root_xa, function, root, GFP_KERNEL);
+ 	if (err) {
+ 		kfree(root);
+ 		return ERR_PTR(err);
+@@ -98,7 +103,7 @@ static struct rb_root *page_root_per_func_id(struct mlx5_core_dev *dev, u16 func
+ 	return root;
+ }
+ 
+-static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u16 func_id)
++static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u32 function)
+ {
+ 	struct rb_node *parent = NULL;
+ 	struct rb_root *root;
+@@ -107,7 +112,7 @@ static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u
+ 	struct fw_page *tfp;
+ 	int i;
+ 
+-	root = page_root_per_func_id(dev, func_id);
++	root = page_root_per_function(dev, function);
+ 	if (IS_ERR(root))
+ 		return PTR_ERR(root);
+ 
+@@ -130,7 +135,7 @@ static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u
+ 
+ 	nfp->addr = addr;
+ 	nfp->page = page;
+-	nfp->func_id = func_id;
++	nfp->function = function;
+ 	nfp->free_count = MLX5_NUM_4K_IN_PAGE;
+ 	for (i = 0; i < MLX5_NUM_4K_IN_PAGE; i++)
+ 		set_bit(i, &nfp->bitmask);
+@@ -143,14 +148,14 @@ static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u
+ }
+ 
+ static struct fw_page *find_fw_page(struct mlx5_core_dev *dev, u64 addr,
+-				    u32 func_id)
++				    u32 function)
+ {
+ 	struct fw_page *result = NULL;
+ 	struct rb_root *root;
+ 	struct rb_node *tmp;
+ 	struct fw_page *tfp;
+ 
+-	root = xa_load(&dev->priv.page_root_xa, func_id);
++	root = xa_load(&dev->priv.page_root_xa, function);
+ 	if (WARN_ON_ONCE(!root))
+ 		return NULL;
+ 
+@@ -194,14 +199,14 @@ static int mlx5_cmd_query_pages(struct mlx5_core_dev *dev, u16 *func_id,
+ 	return err;
+ }
+ 
+-static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u16 func_id)
++static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u32 function)
+ {
+ 	struct fw_page *fp = NULL;
+ 	struct fw_page *iter;
+ 	unsigned n;
+ 
+ 	list_for_each_entry(iter, &dev->priv.free_list, list) {
+-		if (iter->func_id != func_id)
++		if (iter->function != function)
+ 			continue;
+ 		fp = iter;
+ 	}
+@@ -231,7 +236,7 @@ static void free_fwp(struct mlx5_core_dev *dev, struct fw_page *fwp,
+ {
+ 	struct rb_root *root;
+ 
+-	root = xa_load(&dev->priv.page_root_xa, fwp->func_id);
++	root = xa_load(&dev->priv.page_root_xa, fwp->function);
+ 	if (WARN_ON_ONCE(!root))
+ 		return;
+ 
+@@ -244,12 +249,12 @@ static void free_fwp(struct mlx5_core_dev *dev, struct fw_page *fwp,
+ 	kfree(fwp);
+ }
+ 
+-static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 func_id)
++static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 function)
+ {
+ 	struct fw_page *fwp;
+ 	int n;
+ 
+-	fwp = find_fw_page(dev, addr & MLX5_U64_4K_PAGE_MASK, func_id);
++	fwp = find_fw_page(dev, addr & MLX5_U64_4K_PAGE_MASK, function);
+ 	if (!fwp) {
+ 		mlx5_core_warn_rl(dev, "page not found\n");
+ 		return;
+@@ -263,7 +268,7 @@ static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 func_id)
+ 		list_add(&fwp->list, &dev->priv.free_list);
+ }
+ 
+-static int alloc_system_page(struct mlx5_core_dev *dev, u16 func_id)
++static int alloc_system_page(struct mlx5_core_dev *dev, u32 function)
+ {
+ 	struct device *device = mlx5_core_dma_dev(dev);
+ 	int nid = dev_to_node(device);
+@@ -291,7 +296,7 @@ map:
+ 		goto map;
+ 	}
+ 
+-	err = insert_page(dev, addr, page, func_id);
++	err = insert_page(dev, addr, page, function);
+ 	if (err) {
+ 		mlx5_core_err(dev, "failed to track allocated page\n");
+ 		dma_unmap_page(device, addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
+@@ -328,6 +333,7 @@ static void page_notify_fail(struct mlx5_core_dev *dev, u16 func_id,
+ static int give_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
+ 		      int notify_fail, bool ec_function)
+ {
++	u32 function = get_function(func_id, ec_function);
+ 	u32 out[MLX5_ST_SZ_DW(manage_pages_out)] = {0};
+ 	int inlen = MLX5_ST_SZ_BYTES(manage_pages_in);
+ 	u64 addr;
+@@ -345,10 +351,10 @@ static int give_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
+ 
+ 	for (i = 0; i < npages; i++) {
+ retry:
+-		err = alloc_4k(dev, &addr, func_id);
++		err = alloc_4k(dev, &addr, function);
+ 		if (err) {
+ 			if (err == -ENOMEM)
+-				err = alloc_system_page(dev, func_id);
++				err = alloc_system_page(dev, function);
+ 			if (err)
+ 				goto out_4k;
+ 
+@@ -384,7 +390,7 @@ retry:
+ 
+ out_4k:
+ 	for (i--; i >= 0; i--)
+-		free_4k(dev, MLX5_GET64(manage_pages_in, in, pas[i]), func_id);
++		free_4k(dev, MLX5_GET64(manage_pages_in, in, pas[i]), function);
+ out_free:
+ 	kvfree(in);
+ 	if (notify_fail)
+@@ -392,14 +398,15 @@ out_free:
+ 	return err;
+ }
+ 
+-static void release_all_pages(struct mlx5_core_dev *dev, u32 func_id,
++static void release_all_pages(struct mlx5_core_dev *dev, u16 func_id,
+ 			      bool ec_function)
+ {
++	u32 function = get_function(func_id, ec_function);
+ 	struct rb_root *root;
+ 	struct rb_node *p;
+ 	int npages = 0;
+ 
+-	root = xa_load(&dev->priv.page_root_xa, func_id);
++	root = xa_load(&dev->priv.page_root_xa, function);
+ 	if (WARN_ON_ONCE(!root))
+ 		return;
+ 
+@@ -446,6 +453,7 @@ static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
+ 	struct rb_root *root;
+ 	struct fw_page *fwp;
+ 	struct rb_node *p;
++	bool ec_function;
+ 	u32 func_id;
+ 	u32 npages;
+ 	u32 i = 0;
+@@ -456,8 +464,9 @@ static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
+ 	/* No hard feelings, we want our pages back! */
+ 	npages = MLX5_GET(manage_pages_in, in, input_num_entries);
+ 	func_id = MLX5_GET(manage_pages_in, in, function_id);
++	ec_function = MLX5_GET(manage_pages_in, in, embedded_cpu_function);
+ 
+-	root = xa_load(&dev->priv.page_root_xa, func_id);
++	root = xa_load(&dev->priv.page_root_xa, get_function(func_id, ec_function));
+ 	if (WARN_ON_ONCE(!root))
+ 		return -EEXIST;
+ 
+@@ -473,9 +482,10 @@ static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
+ 	return 0;
+ }
+ 
+-static int reclaim_pages(struct mlx5_core_dev *dev, u32 func_id, int npages,
++static int reclaim_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
+ 			 int *nclaimed, bool ec_function)
+ {
++	u32 function = get_function(func_id, ec_function);
+ 	int outlen = MLX5_ST_SZ_BYTES(manage_pages_out);
+ 	u32 in[MLX5_ST_SZ_DW(manage_pages_in)] = {};
+ 	int num_claimed;
+@@ -514,7 +524,7 @@ static int reclaim_pages(struct mlx5_core_dev *dev, u32 func_id, int npages,
+ 	}
+ 
+ 	for (i = 0; i < num_claimed; i++)
+-		free_4k(dev, MLX5_GET64(manage_pages_out, out, pas[i]), func_id);
++		free_4k(dev, MLX5_GET64(manage_pages_out, out, pas[i]), function);
+ 
+ 	if (nclaimed)
+ 		*nclaimed = num_claimed;
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 07f1f39339271..615f3776b4bee 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -991,7 +991,8 @@ static void __team_compute_features(struct team *team)
+ 	unsigned int dst_release_flag = IFF_XMIT_DST_RELEASE |
+ 					IFF_XMIT_DST_RELEASE_PERM;
+ 
+-	list_for_each_entry(port, &team->port_list, list) {
++	rcu_read_lock();
++	list_for_each_entry_rcu(port, &team->port_list, list) {
+ 		vlan_features = netdev_increment_features(vlan_features,
+ 					port->dev->vlan_features,
+ 					TEAM_VLAN_FEATURES);
+@@ -1005,6 +1006,7 @@ static void __team_compute_features(struct team *team)
+ 		if (port->dev->hard_header_len > max_hard_header_len)
+ 			max_hard_header_len = port->dev->hard_header_len;
+ 	}
++	rcu_read_unlock();
+ 
+ 	team->dev->vlan_features = vlan_features;
+ 	team->dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL |
+@@ -1020,9 +1022,7 @@ static void __team_compute_features(struct team *team)
+ 
+ static void team_compute_features(struct team *team)
+ {
+-	mutex_lock(&team->lock);
+ 	__team_compute_features(team);
+-	mutex_unlock(&team->lock);
+ 	netdev_change_features(team->dev);
+ }
+ 
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 21120b4e5637d..ce73df4c137ea 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1325,6 +1325,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x0b3c, 0xc00a, 6)},	/* Olivetti Olicard 160 */
+ 	{QMI_FIXED_INTF(0x0b3c, 0xc00b, 4)},	/* Olivetti Olicard 500 */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0060, 4)},	/* Cinterion PLxx */
++	{QMI_QUIRK_SET_DTR(0x1e2d, 0x006f, 8)}, /* Cinterion PLS83/PLS63 */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0053, 4)},	/* Cinterion PHxx,PXxx */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0063, 10)},	/* Cinterion ALASxx (1 RmNet) */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0082, 4)},	/* Cinterion PHxx,PXxx (2 RmNet) */
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+index 6d8f7bff12432..895a907acdf0f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+@@ -224,40 +224,46 @@ static int iwl_pnvm_parse(struct iwl_trans *trans, const u8 *data,
+ int iwl_pnvm_load(struct iwl_trans *trans,
+ 		  struct iwl_notif_wait_data *notif_wait)
+ {
+-	const struct firmware *pnvm;
+ 	struct iwl_notification_wait pnvm_wait;
+ 	static const u16 ntf_cmds[] = { WIDE_ID(REGULATORY_AND_NVM_GROUP,
+ 						PNVM_INIT_COMPLETE_NTFY) };
+-	char pnvm_name[64];
+-	int ret;
+ 
+ 	/* if the SKU_ID is empty, there's nothing to do */
+ 	if (!trans->sku_id[0] && !trans->sku_id[1] && !trans->sku_id[2])
+ 		return 0;
+ 
+-	/* if we already have it, nothing to do either */
+-	if (trans->pnvm_loaded)
+-		return 0;
++	/* load from disk only if we haven't done it (or tried) before */
++	if (!trans->pnvm_loaded) {
++		const struct firmware *pnvm;
++		char pnvm_name[64];
++		int ret;
++
++		/*
++		 * The prefix unfortunately includes a hyphen at the end, so
++		 * don't add the dot here...
++		 */
++		snprintf(pnvm_name, sizeof(pnvm_name), "%spnvm",
++			 trans->cfg->fw_name_pre);
++
++		/* ...but replace the hyphen with the dot here. */
++		if (strlen(trans->cfg->fw_name_pre) < sizeof(pnvm_name))
++			pnvm_name[strlen(trans->cfg->fw_name_pre) - 1] = '.';
++
++		ret = firmware_request_nowarn(&pnvm, pnvm_name, trans->dev);
++		if (ret) {
++			IWL_DEBUG_FW(trans, "PNVM file %s not found %d\n",
++				     pnvm_name, ret);
++			/*
++			 * Pretend we've loaded it - at least we've tried and
++			 * couldn't load it at all, so there's no point in
++			 * trying again over and over.
++			 */
++			trans->pnvm_loaded = true;
++		} else {
++			iwl_pnvm_parse(trans, pnvm->data, pnvm->size);
+ 
+-	/*
+-	 * The prefix unfortunately includes a hyphen at the end, so
+-	 * don't add the dot here...
+-	 */
+-	snprintf(pnvm_name, sizeof(pnvm_name), "%spnvm",
+-		 trans->cfg->fw_name_pre);
+-
+-	/* ...but replace the hyphen with the dot here. */
+-	if (strlen(trans->cfg->fw_name_pre) < sizeof(pnvm_name))
+-		pnvm_name[strlen(trans->cfg->fw_name_pre) - 1] = '.';
+-
+-	ret = firmware_request_nowarn(&pnvm, pnvm_name, trans->dev);
+-	if (ret) {
+-		IWL_DEBUG_FW(trans, "PNVM file %s not found %d\n",
+-			     pnvm_name, ret);
+-	} else {
+-		iwl_pnvm_parse(trans, pnvm->data, pnvm->size);
+-
+-		release_firmware(pnvm);
++			release_firmware(pnvm);
++		}
+ 	}
+ 
+ 	iwl_init_notification_wait(notif_wait, &pnvm_wait,
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index 580b07a43856d..e82e3fc963be2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -498,7 +498,7 @@ struct iwl_cfg {
+ #define IWL_CFG_CORES_BT_GNSS		0x5
+ 
+ #define IWL_SUBDEVICE_RF_ID(subdevice)	((u16)((subdevice) & 0x00F0) >> 4)
+-#define IWL_SUBDEVICE_NO_160(subdevice)	((u16)((subdevice) & 0x0100) >> 9)
++#define IWL_SUBDEVICE_NO_160(subdevice)	((u16)((subdevice) & 0x0200) >> 9)
+ #define IWL_SUBDEVICE_CORES(subdevice)	((u16)((subdevice) & 0x1C00) >> 10)
+ 
+ struct iwl_dev_info {
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
+index fa3f15778fc7b..579578534f9d9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
+@@ -355,6 +355,12 @@
+ #define RADIO_RSP_ADDR_POS		(6)
+ #define RADIO_RSP_RD_CMD		(3)
+ 
++/* LTR control (Qu only) */
++#define HPM_MAC_LTR_CSR			0xa0348c
++#define HPM_MAC_LRT_ENABLE_ALL		0xf
++/* also uses CSR_LTR_* for values */
++#define HPM_UMAC_LTR			0xa03480
++
+ /* FW monitor */
+ #define MON_BUFF_SAMPLE_CTL		(0xa03c00)
+ #define MON_BUFF_BASE_ADDR		(0xa03c1c)
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index fe1c538cd7182..7626117c01fa3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -833,6 +833,7 @@ iwl_mvm_tx_tso_segment(struct sk_buff *skb, unsigned int num_subframes,
+ 
+ 	next = skb_gso_segment(skb, netdev_flags);
+ 	skb_shinfo(skb)->gso_size = mss;
++	skb_shinfo(skb)->gso_type = ipv4 ? SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
+ 	if (WARN_ON_ONCE(IS_ERR(next)))
+ 		return -EINVAL;
+ 	else if (next)
+@@ -855,6 +856,8 @@ iwl_mvm_tx_tso_segment(struct sk_buff *skb, unsigned int num_subframes,
+ 
+ 		if (tcp_payload_len > mss) {
+ 			skb_shinfo(tmp)->gso_size = mss;
++			skb_shinfo(tmp)->gso_type = ipv4 ? SKB_GSO_TCPV4 :
++							   SKB_GSO_TCPV6;
+ 		} else {
+ 			if (qos) {
+ 				u8 *qc;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+index 5512e3c630c31..d719e433a59bf 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+@@ -122,6 +122,15 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 				 const struct fw_img *fw)
+ {
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
++	u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ |
++		      u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
++				      CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) |
++		      u32_encode_bits(250,
++				      CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) |
++		      CSR_LTR_LONG_VAL_AD_SNOOP_REQ |
++		      u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
++				      CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) |
++		      u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL);
+ 	struct iwl_context_info_gen3 *ctxt_info_gen3;
+ 	struct iwl_prph_scratch *prph_scratch;
+ 	struct iwl_prph_scratch_ctrl_cfg *prph_sc_ctrl;
+@@ -253,23 +262,19 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 	iwl_set_bit(trans, CSR_CTXT_INFO_BOOT_CTRL,
+ 		    CSR_AUTO_FUNC_BOOT_ENA);
+ 
+-	if (trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210) {
+-		/*
+-		 * The firmware initializes this again later (to a smaller
+-		 * value), but for the boot process initialize the LTR to
+-		 * ~250 usec.
+-		 */
+-		u32 val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ |
+-			  u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
+-					  CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) |
+-			  u32_encode_bits(250,
+-					  CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) |
+-			  CSR_LTR_LONG_VAL_AD_SNOOP_REQ |
+-			  u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
+-					  CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) |
+-			  u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL);
+-
+-		iwl_write32(trans, CSR_LTR_LONG_VAL_AD, val);
++	/*
++	 * To workaround hardware latency issues during the boot process,
++	 * initialize the LTR to ~250 usec (see ltr_val above).
++	 * The firmware initializes this again later (to a smaller value).
++	 */
++	if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 ||
++	     trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) &&
++	    !trans->trans_cfg->integrated) {
++		iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val);
++	} else if (trans->trans_cfg->integrated &&
++		   trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) {
++		iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL);
++		iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val);
+ 	}
+ 
+ 	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
+@@ -341,6 +346,9 @@ int iwl_trans_pcie_ctx_info_gen3_set_pnvm(struct iwl_trans *trans,
+ 		return ret;
+ 	}
+ 
++	if (WARN_ON(prph_sc_ctrl->pnvm_cfg.pnvm_size))
++		return -EBUSY;
++
+ 	prph_sc_ctrl->pnvm_cfg.pnvm_base_addr =
+ 		cpu_to_le64(trans_pcie->pnvm_dram.physical);
+ 	prph_sc_ctrl->pnvm_cfg.pnvm_size =
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 2fffbbc8462fc..1a222469b5b4e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -2161,7 +2161,8 @@ static int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr,
+ 
+ 	while (offs < dwords) {
+ 		/* limit the time we spin here under lock to 1/2s */
+-		ktime_t timeout = ktime_add_us(ktime_get(), 500 * USEC_PER_MSEC);
++		unsigned long end = jiffies + HZ / 2;
++		bool resched = false;
+ 
+ 		if (iwl_trans_grab_nic_access(trans, &flags)) {
+ 			iwl_write32(trans, HBUS_TARG_MEM_RADDR,
+@@ -2172,14 +2173,15 @@ static int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr,
+ 							HBUS_TARG_MEM_RDAT);
+ 				offs++;
+ 
+-				/* calling ktime_get is expensive so
+-				 * do it once in 128 reads
+-				 */
+-				if (offs % 128 == 0 && ktime_after(ktime_get(),
+-								   timeout))
++				if (time_after(jiffies, end)) {
++					resched = true;
+ 					break;
++				}
+ 			}
+ 			iwl_trans_release_nic_access(trans, &flags);
++
++			if (resched)
++				cond_resched();
+ 		} else {
+ 			return -EBUSY;
+ 		}
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c b/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
+index 69e38f477b1e4..595519c582558 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
+@@ -85,7 +85,7 @@ static int mt7663s_rx_run_queue(struct mt76_dev *dev, enum mt76_rxq_id qid,
+ {
+ 	struct mt76_queue *q = &dev->q_rx[qid];
+ 	struct mt76_sdio *sdio = &dev->sdio;
+-	int len = 0, err, i, order;
++	int len = 0, err, i;
+ 	struct page *page;
+ 	u8 *buf;
+ 
+@@ -98,8 +98,7 @@ static int mt7663s_rx_run_queue(struct mt76_dev *dev, enum mt76_rxq_id qid,
+ 	if (len > sdio->func->cur_blksize)
+ 		len = roundup(len, sdio->func->cur_blksize);
+ 
+-	order = get_order(len);
+-	page = __dev_alloc_pages(GFP_KERNEL, order);
++	page = __dev_alloc_pages(GFP_KERNEL, get_order(len));
+ 	if (!page)
+ 		return -ENOMEM;
+ 
+@@ -111,7 +110,7 @@ static int mt7663s_rx_run_queue(struct mt76_dev *dev, enum mt76_rxq_id qid,
+ 
+ 	if (err < 0) {
+ 		dev_err(dev->dev, "sdio read data failed:%d\n", err);
+-		__free_pages(page, order);
++		put_page(page);
+ 		return err;
+ 	}
+ 
+@@ -128,7 +127,7 @@ static int mt7663s_rx_run_queue(struct mt76_dev *dev, enum mt76_rxq_id qid,
+ 		if (q->queued + i + 1 == q->ndesc)
+ 			break;
+ 	}
+-	__free_pages(page, order);
++	put_page(page);
+ 
+ 	spin_lock_bh(&q->lock);
+ 	q->head = (q->head + i) % q->ndesc;
+diff --git a/drivers/net/wireless/mediatek/mt7601u/dma.c b/drivers/net/wireless/mediatek/mt7601u/dma.c
+index 09f931d4598c2..11071519fce81 100644
+--- a/drivers/net/wireless/mediatek/mt7601u/dma.c
++++ b/drivers/net/wireless/mediatek/mt7601u/dma.c
+@@ -152,8 +152,7 @@ mt7601u_rx_process_entry(struct mt7601u_dev *dev, struct mt7601u_dma_buf_rx *e)
+ 
+ 	if (new_p) {
+ 		/* we have one extra ref from the allocator */
+-		__free_pages(e->p, MT_RX_ORDER);
+-
++		put_page(e->p);
+ 		e->p = new_p;
+ 	}
+ }
+@@ -310,7 +309,6 @@ static int mt7601u_dma_submit_tx(struct mt7601u_dev *dev,
+ 	}
+ 
+ 	e = &q->e[q->end];
+-	e->skb = skb;
+ 	usb_fill_bulk_urb(e->urb, usb_dev, snd_pipe, skb->data, skb->len,
+ 			  mt7601u_complete_tx, q);
+ 	ret = usb_submit_urb(e->urb, GFP_ATOMIC);
+@@ -328,6 +326,7 @@ static int mt7601u_dma_submit_tx(struct mt7601u_dev *dev,
+ 
+ 	q->end = (q->end + 1) % q->entries;
+ 	q->used++;
++	e->skb = skb;
+ 
+ 	if (q->used >= q->entries)
+ 		ieee80211_stop_queue(dev->hw, skb_get_queue_mapping(skb));
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 74896be40c176..292e535a385d4 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -221,7 +221,7 @@ static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head,
+ 	}
+ 
+ 	for (ns = nvme_next_ns(head, old);
+-	     ns != old;
++	     ns && ns != old;
+ 	     ns = nvme_next_ns(head, ns)) {
+ 		if (nvme_path_is_disabled(ns))
+ 			continue;
+diff --git a/drivers/of/device.c b/drivers/of/device.c
+index aedfaaafd3e7e..1122daa8e2736 100644
+--- a/drivers/of/device.c
++++ b/drivers/of/device.c
+@@ -162,9 +162,11 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
+ 	mask = DMA_BIT_MASK(ilog2(end) + 1);
+ 	dev->coherent_dma_mask &= mask;
+ 	*dev->dma_mask &= mask;
+-	/* ...but only set bus limit if we found valid dma-ranges earlier */
+-	if (!ret)
++	/* ...but only set bus limit and range map if we found valid dma-ranges earlier */
++	if (!ret) {
+ 		dev->bus_dma_limit = end;
++		dev->dma_range_map = map;
++	}
+ 
+ 	coherent = of_dma_is_coherent(np);
+ 	dev_dbg(dev, "device is%sdma coherent\n",
+@@ -172,6 +174,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
+ 
+ 	iommu = of_iommu_configure(dev, np, id);
+ 	if (PTR_ERR(iommu) == -EPROBE_DEFER) {
++		/* Don't touch range map if it wasn't set from a valid dma-ranges */
++		if (!ret)
++			dev->dma_range_map = NULL;
+ 		kfree(map);
+ 		return -EPROBE_DEFER;
+ 	}
+@@ -181,7 +186,6 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
+ 
+ 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
+ 
+-	dev->dma_range_map = map;
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(of_dma_configure_id);
+diff --git a/drivers/s390/crypto/vfio_ap_drv.c b/drivers/s390/crypto/vfio_ap_drv.c
+index be2520cc010be..7dc72cb718b0e 100644
+--- a/drivers/s390/crypto/vfio_ap_drv.c
++++ b/drivers/s390/crypto/vfio_ap_drv.c
+@@ -71,15 +71,11 @@ static int vfio_ap_queue_dev_probe(struct ap_device *apdev)
+ static void vfio_ap_queue_dev_remove(struct ap_device *apdev)
+ {
+ 	struct vfio_ap_queue *q;
+-	int apid, apqi;
+ 
+ 	mutex_lock(&matrix_dev->lock);
+ 	q = dev_get_drvdata(&apdev->device);
++	vfio_ap_mdev_reset_queue(q, 1);
+ 	dev_set_drvdata(&apdev->device, NULL);
+-	apid = AP_QID_CARD(q->apqn);
+-	apqi = AP_QID_QUEUE(q->apqn);
+-	vfio_ap_mdev_reset_queue(apid, apqi, 1);
+-	vfio_ap_irq_disable(q);
+ 	kfree(q);
+ 	mutex_unlock(&matrix_dev->lock);
+ }
+diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
+index e0bde85187451..7ceb6c433b3ba 100644
+--- a/drivers/s390/crypto/vfio_ap_ops.c
++++ b/drivers/s390/crypto/vfio_ap_ops.c
+@@ -25,6 +25,7 @@
+ #define VFIO_AP_MDEV_NAME_HWVIRT "VFIO AP Passthrough Device"
+ 
+ static int vfio_ap_mdev_reset_queues(struct mdev_device *mdev);
++static struct vfio_ap_queue *vfio_ap_find_queue(int apqn);
+ 
+ static int match_apqn(struct device *dev, const void *data)
+ {
+@@ -49,20 +50,15 @@ static struct vfio_ap_queue *vfio_ap_get_queue(
+ 					int apqn)
+ {
+ 	struct vfio_ap_queue *q;
+-	struct device *dev;
+ 
+ 	if (!test_bit_inv(AP_QID_CARD(apqn), matrix_mdev->matrix.apm))
+ 		return NULL;
+ 	if (!test_bit_inv(AP_QID_QUEUE(apqn), matrix_mdev->matrix.aqm))
+ 		return NULL;
+ 
+-	dev = driver_find_device(&matrix_dev->vfio_ap_drv->driver, NULL,
+-				 &apqn, match_apqn);
+-	if (!dev)
+-		return NULL;
+-	q = dev_get_drvdata(dev);
+-	q->matrix_mdev = matrix_mdev;
+-	put_device(dev);
++	q = vfio_ap_find_queue(apqn);
++	if (q)
++		q->matrix_mdev = matrix_mdev;
+ 
+ 	return q;
+ }
+@@ -119,13 +115,18 @@ static void vfio_ap_wait_for_irqclear(int apqn)
+  */
+ static void vfio_ap_free_aqic_resources(struct vfio_ap_queue *q)
+ {
+-	if (q->saved_isc != VFIO_AP_ISC_INVALID && q->matrix_mdev)
++	if (!q)
++		return;
++	if (q->saved_isc != VFIO_AP_ISC_INVALID &&
++	    !WARN_ON(!(q->matrix_mdev && q->matrix_mdev->kvm))) {
+ 		kvm_s390_gisc_unregister(q->matrix_mdev->kvm, q->saved_isc);
+-	if (q->saved_pfn && q->matrix_mdev)
++		q->saved_isc = VFIO_AP_ISC_INVALID;
++	}
++	if (q->saved_pfn && !WARN_ON(!q->matrix_mdev)) {
+ 		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev),
+ 				 &q->saved_pfn, 1);
+-	q->saved_pfn = 0;
+-	q->saved_isc = VFIO_AP_ISC_INVALID;
++		q->saved_pfn = 0;
++	}
+ }
+ 
+ /**
+@@ -144,7 +145,7 @@ static void vfio_ap_free_aqic_resources(struct vfio_ap_queue *q)
+  * Returns if ap_aqic function failed with invalid, deconfigured or
+  * checkstopped AP.
+  */
+-struct ap_queue_status vfio_ap_irq_disable(struct vfio_ap_queue *q)
++static struct ap_queue_status vfio_ap_irq_disable(struct vfio_ap_queue *q)
+ {
+ 	struct ap_qirq_ctrl aqic_gisa = {};
+ 	struct ap_queue_status status;
+@@ -1114,48 +1115,70 @@ static int vfio_ap_mdev_group_notifier(struct notifier_block *nb,
+ 	return NOTIFY_OK;
+ }
+ 
+-static void vfio_ap_irq_disable_apqn(int apqn)
++static struct vfio_ap_queue *vfio_ap_find_queue(int apqn)
+ {
+ 	struct device *dev;
+-	struct vfio_ap_queue *q;
++	struct vfio_ap_queue *q = NULL;
+ 
+ 	dev = driver_find_device(&matrix_dev->vfio_ap_drv->driver, NULL,
+ 				 &apqn, match_apqn);
+ 	if (dev) {
+ 		q = dev_get_drvdata(dev);
+-		vfio_ap_irq_disable(q);
+ 		put_device(dev);
+ 	}
++
++	return q;
+ }
+ 
+-int vfio_ap_mdev_reset_queue(unsigned int apid, unsigned int apqi,
++int vfio_ap_mdev_reset_queue(struct vfio_ap_queue *q,
+ 			     unsigned int retry)
+ {
+ 	struct ap_queue_status status;
++	int ret;
+ 	int retry2 = 2;
+-	int apqn = AP_MKQID(apid, apqi);
+ 
+-	do {
+-		status = ap_zapq(apqn);
+-		switch (status.response_code) {
+-		case AP_RESPONSE_NORMAL:
+-			while (!status.queue_empty && retry2--) {
+-				msleep(20);
+-				status = ap_tapq(apqn, NULL);
+-			}
+-			WARN_ON_ONCE(retry2 <= 0);
+-			return 0;
+-		case AP_RESPONSE_RESET_IN_PROGRESS:
+-		case AP_RESPONSE_BUSY:
++	if (!q)
++		return 0;
++
++retry_zapq:
++	status = ap_zapq(q->apqn);
++	switch (status.response_code) {
++	case AP_RESPONSE_NORMAL:
++		ret = 0;
++		break;
++	case AP_RESPONSE_RESET_IN_PROGRESS:
++		if (retry--) {
+ 			msleep(20);
+-			break;
+-		default:
+-			/* things are really broken, give up */
+-			return -EIO;
++			goto retry_zapq;
+ 		}
+-	} while (retry--);
++		ret = -EBUSY;
++		break;
++	case AP_RESPONSE_Q_NOT_AVAIL:
++	case AP_RESPONSE_DECONFIGURED:
++	case AP_RESPONSE_CHECKSTOPPED:
++		WARN_ON_ONCE(status.irq_enabled);
++		ret = -EBUSY;
++		goto free_resources;
++	default:
++		/* things are really broken, give up */
++		WARN(true, "PQAP/ZAPQ completed with invalid rc (%x)\n",
++		     status.response_code);
++		return -EIO;
++	}
++
++	/* wait for the reset to take effect */
++	while (retry2--) {
++		if (status.queue_empty && !status.irq_enabled)
++			break;
++		msleep(20);
++		status = ap_tapq(q->apqn, NULL);
++	}
++	WARN_ON_ONCE(retry2 <= 0);
+ 
+-	return -EBUSY;
++free_resources:
++	vfio_ap_free_aqic_resources(q);
++
++	return ret;
+ }
+ 
+ static int vfio_ap_mdev_reset_queues(struct mdev_device *mdev)
+@@ -1163,13 +1186,15 @@ static int vfio_ap_mdev_reset_queues(struct mdev_device *mdev)
+ 	int ret;
+ 	int rc = 0;
+ 	unsigned long apid, apqi;
++	struct vfio_ap_queue *q;
+ 	struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
+ 
+ 	for_each_set_bit_inv(apid, matrix_mdev->matrix.apm,
+ 			     matrix_mdev->matrix.apm_max + 1) {
+ 		for_each_set_bit_inv(apqi, matrix_mdev->matrix.aqm,
+ 				     matrix_mdev->matrix.aqm_max + 1) {
+-			ret = vfio_ap_mdev_reset_queue(apid, apqi, 1);
++			q = vfio_ap_find_queue(AP_MKQID(apid, apqi));
++			ret = vfio_ap_mdev_reset_queue(q, 1);
+ 			/*
+ 			 * Regardless whether a queue turns out to be busy, or
+ 			 * is not operational, we need to continue resetting
+@@ -1177,7 +1202,6 @@ static int vfio_ap_mdev_reset_queues(struct mdev_device *mdev)
+ 			 */
+ 			if (ret)
+ 				rc = ret;
+-			vfio_ap_irq_disable_apqn(AP_MKQID(apid, apqi));
+ 		}
+ 	}
+ 
+diff --git a/drivers/s390/crypto/vfio_ap_private.h b/drivers/s390/crypto/vfio_ap_private.h
+index f46dde56b4644..28e9d99897682 100644
+--- a/drivers/s390/crypto/vfio_ap_private.h
++++ b/drivers/s390/crypto/vfio_ap_private.h
+@@ -88,11 +88,6 @@ struct ap_matrix_mdev {
+ 	struct mdev_device *mdev;
+ };
+ 
+-extern int vfio_ap_mdev_register(void);
+-extern void vfio_ap_mdev_unregister(void);
+-int vfio_ap_mdev_reset_queue(unsigned int apid, unsigned int apqi,
+-			     unsigned int retry);
+-
+ struct vfio_ap_queue {
+ 	struct ap_matrix_mdev *matrix_mdev;
+ 	unsigned long saved_pfn;
+@@ -100,5 +95,10 @@ struct vfio_ap_queue {
+ #define VFIO_AP_ISC_INVALID 0xff
+ 	unsigned char saved_isc;
+ };
+-struct ap_queue_status vfio_ap_irq_disable(struct vfio_ap_queue *q);
++
++int vfio_ap_mdev_register(void);
++void vfio_ap_mdev_unregister(void);
++int vfio_ap_mdev_reset_queue(struct vfio_ap_queue *q,
++			     unsigned int retry);
++
+ #endif /* _VFIO_AP_PRIVATE_H_ */
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index f9c8ae9d669ef..d389f56fff54a 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -42,7 +42,7 @@ MODULE_PARM_DESC(ql2xfulldump_on_mpifail,
+ int ql2xenforce_iocb_limit = 1;
+ module_param(ql2xenforce_iocb_limit, int, S_IRUGO | S_IWUSR);
+ MODULE_PARM_DESC(ql2xenforce_iocb_limit,
+-		 "Enforce IOCB throttling, to avoid FW congestion. (default: 0)");
++		 "Enforce IOCB throttling, to avoid FW congestion. (default: 1)");
+ 
+ /*
+  * CT6 CTX allocation cache
+diff --git a/drivers/soc/atmel/soc.c b/drivers/soc/atmel/soc.c
+index 55a1f57a4d8cb..5d06ee70a36b9 100644
+--- a/drivers/soc/atmel/soc.c
++++ b/drivers/soc/atmel/soc.c
+@@ -265,8 +265,21 @@ struct soc_device * __init at91_soc_init(const struct at91_soc *socs)
+ 	return soc_dev;
+ }
+ 
++static const struct of_device_id at91_soc_allowed_list[] __initconst = {
++	{ .compatible = "atmel,at91rm9200", },
++	{ .compatible = "atmel,at91sam9", },
++	{ .compatible = "atmel,sama5", },
++	{ .compatible = "atmel,samv7", },
++	{ }
++};
++
+ static int __init atmel_soc_device_init(void)
+ {
++	struct device_node *np = of_find_node_by_path("/");
++
++	if (!of_match_node(at91_soc_allowed_list, np))
++		return 0;
++
+ 	at91_soc_init(socs);
+ 
+ 	return 0;
+diff --git a/drivers/soc/imx/Kconfig b/drivers/soc/imx/Kconfig
+index a9370f4aacca9..05812f8ae7340 100644
+--- a/drivers/soc/imx/Kconfig
++++ b/drivers/soc/imx/Kconfig
+@@ -13,7 +13,7 @@ config SOC_IMX8M
+ 	depends on ARCH_MXC || COMPILE_TEST
+ 	default ARCH_MXC && ARM64
+ 	select SOC_BUS
+-	select ARM_GIC_V3 if ARCH_MXC
++	select ARM_GIC_V3 if ARCH_MXC && ARCH_MULTI_V7
+ 	help
+ 	  If you say yes here you get support for the NXP i.MX8M family
+ 	  support, it will provide the SoC info like SoC family,
+diff --git a/drivers/spi/spi-altera.c b/drivers/spi/spi-altera.c
+index cbc4c28c1541c..62ea0c9e321b4 100644
+--- a/drivers/spi/spi-altera.c
++++ b/drivers/spi/spi-altera.c
+@@ -254,7 +254,8 @@ static int altera_spi_probe(struct platform_device *pdev)
+ 			dev_err(&pdev->dev,
+ 				"Invalid number of chipselect: %hu\n",
+ 				pdata->num_chipselect);
+-			return -EINVAL;
++			err = -EINVAL;
++			goto exit;
+ 		}
+ 
+ 		master->num_chipselect = pdata->num_chipselect;
+diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c
+index b668a82d40ad4..f5fbdbc4ffdb1 100644
+--- a/drivers/staging/media/hantro/hantro_v4l2.c
++++ b/drivers/staging/media/hantro/hantro_v4l2.c
+@@ -367,7 +367,7 @@ hantro_reset_raw_fmt(struct hantro_ctx *ctx)
+ 
+ 	hantro_reset_fmt(raw_fmt, raw_vpu_fmt);
+ 	raw_fmt->width = encoded_fmt->width;
+-	raw_fmt->width = encoded_fmt->width;
++	raw_fmt->height = encoded_fmt->height;
+ 	if (ctx->is_encoder)
+ 		hantro_set_fmt_out(ctx, raw_fmt);
+ 	else
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h264.c b/drivers/staging/media/sunxi/cedrus/cedrus_h264.c
+index 781c84a9b1b79..de7442d4834dc 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h264.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h264.c
+@@ -203,7 +203,7 @@ static void _cedrus_write_ref_list(struct cedrus_ctx *ctx,
+ 		position = cedrus_buf->codec.h264.position;
+ 
+ 		sram_array[i] |= position << 1;
+-		if (ref_list[i].fields & V4L2_H264_BOTTOM_FIELD_REF)
++		if (ref_list[i].fields == V4L2_H264_BOTTOM_FIELD_REF)
+ 			sram_array[i] |= BIT(0);
+ 	}
+ 
+diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c
+index c981757ba0d40..780d7c4fd7565 100644
+--- a/drivers/tee/optee/call.c
++++ b/drivers/tee/optee/call.c
+@@ -7,6 +7,7 @@
+ #include <linux/err.h>
+ #include <linux/errno.h>
+ #include <linux/mm.h>
++#include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/tee_drv.h>
+ #include <linux/types.h>
+@@ -148,7 +149,8 @@ u32 optee_do_call_with_arg(struct tee_context *ctx, phys_addr_t parg)
+ 			 */
+ 			optee_cq_wait_for_completion(&optee->call_queue, &w);
+ 		} else if (OPTEE_SMC_RETURN_IS_RPC(res.a0)) {
+-			might_sleep();
++			if (need_resched())
++				cond_resched();
+ 			param.a0 = res.a0;
+ 			param.a1 = res.a1;
+ 			param.a2 = res.a2;
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 2f8223b2ffa45..ff87cb51747d8 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -1027,9 +1027,8 @@ void tty_write_message(struct tty_struct *tty, char *msg)
+  *	write method will not be invoked in parallel for each device.
+  */
+ 
+-static ssize_t tty_write(struct kiocb *iocb, struct iov_iter *from)
++static ssize_t file_tty_write(struct file *file, struct kiocb *iocb, struct iov_iter *from)
+ {
+-	struct file *file = iocb->ki_filp;
+ 	struct tty_struct *tty = file_tty(file);
+  	struct tty_ldisc *ld;
+ 	ssize_t ret;
+@@ -1052,6 +1051,11 @@ static ssize_t tty_write(struct kiocb *iocb, struct iov_iter *from)
+ 	return ret;
+ }
+ 
++static ssize_t tty_write(struct kiocb *iocb, struct iov_iter *from)
++{
++	return file_tty_write(iocb->ki_filp, iocb, from);
++}
++
+ ssize_t redirected_tty_write(struct kiocb *iocb, struct iov_iter *iter)
+ {
+ 	struct file *p = NULL;
+@@ -1061,9 +1065,13 @@ ssize_t redirected_tty_write(struct kiocb *iocb, struct iov_iter *iter)
+ 		p = get_file(redirect);
+ 	spin_unlock(&redirect_lock);
+ 
++	/*
++	 * We know the redirected tty is just another tty, we can can
++	 * call file_tty_write() directly with that file pointer.
++	 */
+ 	if (p) {
+ 		ssize_t res;
+-		res = vfs_iocb_iter_write(p, iocb, iter);
++		res = file_tty_write(p, iocb, iter);
+ 		fput(p);
+ 		return res;
+ 	}
+@@ -2306,6 +2314,12 @@ static int tioccons(struct file *file)
+ 			fput(f);
+ 		return 0;
+ 	}
++	if (file->f_op->write_iter != tty_write)
++		return -ENOTTY;
++	if (!(file->f_mode & FMODE_WRITE))
++		return -EBADF;
++	if (!(file->f_mode & FMODE_CAN_WRITE))
++		return -EINVAL;
+ 	spin_lock(&redirect_lock);
+ 	if (redirect) {
+ 		spin_unlock(&redirect_lock);
+diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
+index c8f0282bb6497..18ffd0551b542 100644
+--- a/drivers/xen/xenbus/xenbus_probe.c
++++ b/drivers/xen/xenbus/xenbus_probe.c
+@@ -714,6 +714,23 @@ static bool xs_hvm_defer_init_for_callback(void)
+ #endif
+ }
+ 
++static int xenbus_probe_thread(void *unused)
++{
++	DEFINE_WAIT(w);
++
++	/*
++	 * We actually just want to wait for *any* trigger of xb_waitq,
++	 * and run xenbus_probe() the moment it occurs.
++	 */
++	prepare_to_wait(&xb_waitq, &w, TASK_INTERRUPTIBLE);
++	schedule();
++	finish_wait(&xb_waitq, &w);
++
++	DPRINTK("probing");
++	xenbus_probe();
++	return 0;
++}
++
+ static int __init xenbus_probe_initcall(void)
+ {
+ 	/*
+@@ -725,6 +742,20 @@ static int __init xenbus_probe_initcall(void)
+ 	     !xs_hvm_defer_init_for_callback()))
+ 		xenbus_probe();
+ 
++	/*
++	 * For XS_LOCAL, spawn a thread which will wait for xenstored
++	 * or a xenstore-stubdom to be started, then probe. It will be
++	 * triggered when communication starts happening, by waiting
++	 * on xb_waitq.
++	 */
++	if (xen_store_domain_type == XS_LOCAL) {
++		struct task_struct *probe_task;
++
++		probe_task = kthread_run(xenbus_probe_thread, NULL,
++					 "xenbus_probe");
++		if (IS_ERR(probe_task))
++			return PTR_ERR(probe_task);
++	}
+ 	return 0;
+ }
+ device_initcall(xenbus_probe_initcall);
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 9e84b1928b940..2ea189c1b4ffe 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -134,7 +134,15 @@ EXPORT_SYMBOL(truncate_bdev_range);
+ 
+ static void set_init_blocksize(struct block_device *bdev)
+ {
+-	bdev->bd_inode->i_blkbits = blksize_bits(bdev_logical_block_size(bdev));
++	unsigned int bsize = bdev_logical_block_size(bdev);
++	loff_t size = i_size_read(bdev->bd_inode);
++
++	while (bsize < PAGE_SIZE) {
++		if (size & bsize)
++			break;
++		bsize <<= 1;
++	}
++	bdev->bd_inode->i_blkbits = blksize_bits(bsize);
+ }
+ 
+ int set_blocksize(struct block_device *bdev, int size)
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index cef2f080fdcd5..a2111eab614f2 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -639,7 +639,15 @@ static noinline void caching_thread(struct btrfs_work *work)
+ 	mutex_lock(&caching_ctl->mutex);
+ 	down_read(&fs_info->commit_root_sem);
+ 
+-	if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE))
++	/*
++	 * If we are in the transaction that populated the free space tree we
++	 * can't actually cache from the free space tree as our commit root and
++	 * real root are the same, so we could change the contents of the blocks
++	 * while caching.  Instead do the slow caching in this case, and after
++	 * the transaction has committed we will be safe.
++	 */
++	if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE) &&
++	    !(test_bit(BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED, &fs_info->flags)))
+ 		ret = load_free_space_tree(caching_ctl);
+ 	else
+ 		ret = load_extent_tree_free(caching_ctl);
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index e01545538e07f..30ea9780725ff 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -146,6 +146,9 @@ enum {
+ 	BTRFS_FS_STATE_DEV_REPLACING,
+ 	/* The btrfs_fs_info created for self-tests */
+ 	BTRFS_FS_STATE_DUMMY_FS_INFO,
++
++	/* Indicate that we can't trust the free space tree for caching yet */
++	BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED,
+ };
+ 
+ #define BTRFS_BACKREF_REV_MAX		256
+diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c
+index 6b9faf3b0e967..6cf2f7bb30c27 100644
+--- a/fs/btrfs/free-space-tree.c
++++ b/fs/btrfs/free-space-tree.c
+@@ -1152,6 +1152,7 @@ int btrfs_create_free_space_tree(struct btrfs_fs_info *fs_info)
+ 		return PTR_ERR(trans);
+ 
+ 	set_bit(BTRFS_FS_CREATING_FREE_SPACE_TREE, &fs_info->flags);
++	set_bit(BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED, &fs_info->flags);
+ 	free_space_root = btrfs_create_tree(trans,
+ 					    BTRFS_FREE_SPACE_TREE_OBJECTID);
+ 	if (IS_ERR(free_space_root)) {
+@@ -1173,11 +1174,18 @@ int btrfs_create_free_space_tree(struct btrfs_fs_info *fs_info)
+ 	btrfs_set_fs_compat_ro(fs_info, FREE_SPACE_TREE);
+ 	btrfs_set_fs_compat_ro(fs_info, FREE_SPACE_TREE_VALID);
+ 	clear_bit(BTRFS_FS_CREATING_FREE_SPACE_TREE, &fs_info->flags);
++	ret = btrfs_commit_transaction(trans);
+ 
+-	return btrfs_commit_transaction(trans);
++	/*
++	 * Now that we've committed the transaction any reading of our commit
++	 * root will be safe, so we can cache from the free space tree now.
++	 */
++	clear_bit(BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED, &fs_info->flags);
++	return ret;
+ 
+ abort:
+ 	clear_bit(BTRFS_FS_CREATING_FREE_SPACE_TREE, &fs_info->flags);
++	clear_bit(BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED, &fs_info->flags);
+ 	btrfs_abort_transaction(trans, ret);
+ 	btrfs_end_transaction(trans);
+ 	return ret;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 6311308b32beb..f9ae3850526c6 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -431,7 +431,7 @@ static struct btrfs_device *__alloc_device(struct btrfs_fs_info *fs_info)
+ 
+ 	atomic_set(&dev->reada_in_flight, 0);
+ 	atomic_set(&dev->dev_stats_ccnt, 0);
+-	btrfs_device_data_ordered_init(dev, fs_info);
++	btrfs_device_data_ordered_init(dev);
+ 	INIT_RADIX_TREE(&dev->reada_zones, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
+ 	INIT_RADIX_TREE(&dev->reada_extents, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
+ 	extent_io_tree_init(fs_info, &dev->alloc_state,
+diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
+index 232f02bd214fc..f2177263748e8 100644
+--- a/fs/btrfs/volumes.h
++++ b/fs/btrfs/volumes.h
+@@ -39,10 +39,10 @@ struct btrfs_io_geometry {
+ #if BITS_PER_LONG==32 && defined(CONFIG_SMP)
+ #include <linux/seqlock.h>
+ #define __BTRFS_NEED_DEVICE_DATA_ORDERED
+-#define btrfs_device_data_ordered_init(device, info)				\
+-	seqcount_mutex_init(&device->data_seqcount, &info->chunk_mutex)
++#define btrfs_device_data_ordered_init(device)	\
++	seqcount_init(&device->data_seqcount)
+ #else
+-#define btrfs_device_data_ordered_init(device, info) do { } while (0)
++#define btrfs_device_data_ordered_init(device) do { } while (0)
+ #endif
+ 
+ #define BTRFS_DEV_STATE_WRITEABLE	(0)
+@@ -72,8 +72,7 @@ struct btrfs_device {
+ 	blk_status_t last_flush_error;
+ 
+ #ifdef __BTRFS_NEED_DEVICE_DATA_ORDERED
+-	/* A seqcount_t with associated chunk_mutex (for lockdep) */
+-	seqcount_mutex_t data_seqcount;
++	seqcount_t data_seqcount;
+ #endif
+ 
+ 	/* the internal btrfs device id */
+@@ -164,9 +163,11 @@ btrfs_device_get_##name(const struct btrfs_device *dev)			\
+ static inline void							\
+ btrfs_device_set_##name(struct btrfs_device *dev, u64 size)		\
+ {									\
++	preempt_disable();						\
+ 	write_seqcount_begin(&dev->data_seqcount);			\
+ 	dev->name = size;						\
+ 	write_seqcount_end(&dev->data_seqcount);			\
++	preempt_enable();						\
+ }
+ #elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPTION)
+ #define BTRFS_DEVICE_GETSET_FUNCS(name)					\
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index fd12d9327ee5b..907ecaffc3386 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -972,6 +972,7 @@ static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
+ 			     const struct iovec *fast_iov,
+ 			     struct iov_iter *iter, bool force);
+ static void io_req_drop_files(struct io_kiocb *req);
++static void io_req_task_queue(struct io_kiocb *req);
+ 
+ static struct kmem_cache *req_cachep;
+ 
+@@ -1502,18 +1503,11 @@ static void __io_queue_deferred(struct io_ring_ctx *ctx)
+ 	do {
+ 		struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
+ 						struct io_defer_entry, list);
+-		struct io_kiocb *link;
+ 
+ 		if (req_need_defer(de->req, de->seq))
+ 			break;
+ 		list_del_init(&de->list);
+-		/* punt-init is done before queueing for defer */
+-		link = __io_queue_async_work(de->req);
+-		if (link) {
+-			__io_queue_linked_timeout(link);
+-			/* drop submission reference */
+-			io_put_req_deferred(link, 1);
+-		}
++		io_req_task_queue(de->req);
+ 		kfree(de);
+ 	} while (!list_empty(&ctx->defer_list));
+ }
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 471bfa273dade..cbadcf6ca4da2 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -324,6 +324,21 @@ pnfs_grab_inode_layout_hdr(struct pnfs_layout_hdr *lo)
+ 	return NULL;
+ }
+ 
++/*
++ * Compare 2 layout stateid sequence ids, to see which is newer,
++ * taking into account wraparound issues.
++ */
++static bool pnfs_seqid_is_newer(u32 s1, u32 s2)
++{
++	return (s32)(s1 - s2) > 0;
++}
++
++static void pnfs_barrier_update(struct pnfs_layout_hdr *lo, u32 newseq)
++{
++	if (pnfs_seqid_is_newer(newseq, lo->plh_barrier))
++		lo->plh_barrier = newseq;
++}
++
+ static void
+ pnfs_set_plh_return_info(struct pnfs_layout_hdr *lo, enum pnfs_iomode iomode,
+ 			 u32 seq)
+@@ -335,6 +350,7 @@ pnfs_set_plh_return_info(struct pnfs_layout_hdr *lo, enum pnfs_iomode iomode,
+ 	if (seq != 0) {
+ 		WARN_ON_ONCE(lo->plh_return_seq != 0 && lo->plh_return_seq != seq);
+ 		lo->plh_return_seq = seq;
++		pnfs_barrier_update(lo, seq);
+ 	}
+ }
+ 
+@@ -639,15 +655,6 @@ static int mark_lseg_invalid(struct pnfs_layout_segment *lseg,
+ 	return rv;
+ }
+ 
+-/*
+- * Compare 2 layout stateid sequence ids, to see which is newer,
+- * taking into account wraparound issues.
+- */
+-static bool pnfs_seqid_is_newer(u32 s1, u32 s2)
+-{
+-	return (s32)(s1 - s2) > 0;
+-}
+-
+ static bool
+ pnfs_should_free_range(const struct pnfs_layout_range *lseg_range,
+ 		 const struct pnfs_layout_range *recall_range)
+@@ -984,8 +991,7 @@ pnfs_set_layout_stateid(struct pnfs_layout_hdr *lo, const nfs4_stateid *new,
+ 		new_barrier = be32_to_cpu(new->seqid);
+ 	else if (new_barrier == 0)
+ 		return;
+-	if (pnfs_seqid_is_newer(new_barrier, lo->plh_barrier))
+-		lo->plh_barrier = new_barrier;
++	pnfs_barrier_update(lo, new_barrier);
+ }
+ 
+ static bool
+@@ -1183,20 +1189,17 @@ pnfs_prepare_layoutreturn(struct pnfs_layout_hdr *lo,
+ 		return false;
+ 	set_bit(NFS_LAYOUT_RETURN, &lo->plh_flags);
+ 	pnfs_get_layout_hdr(lo);
++	nfs4_stateid_copy(stateid, &lo->plh_stateid);
++	*cred = get_cred(lo->plh_lc_cred);
+ 	if (test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags)) {
+-		nfs4_stateid_copy(stateid, &lo->plh_stateid);
+-		*cred = get_cred(lo->plh_lc_cred);
+ 		if (lo->plh_return_seq != 0)
+ 			stateid->seqid = cpu_to_be32(lo->plh_return_seq);
+ 		if (iomode != NULL)
+ 			*iomode = lo->plh_return_iomode;
+ 		pnfs_clear_layoutreturn_info(lo);
+-		return true;
+-	}
+-	nfs4_stateid_copy(stateid, &lo->plh_stateid);
+-	*cred = get_cred(lo->plh_lc_cred);
+-	if (iomode != NULL)
++	} else if (iomode != NULL)
+ 		*iomode = IOMODE_ANY;
++	pnfs_barrier_update(lo, be32_to_cpu(stateid->seqid));
+ 	return true;
+ }
+ 
+@@ -2418,6 +2421,7 @@ out_forget:
+ 	spin_unlock(&ino->i_lock);
+ 	lseg->pls_layout = lo;
+ 	NFS_SERVER(ino)->pnfs_curr_ld->free_lseg(lseg);
++	pnfs_free_lseg_list(&free_me);
+ 	return ERR_PTR(-EAGAIN);
+ }
+ 
+diff --git a/include/dt-bindings/sound/apq8016-lpass.h b/include/dt-bindings/sound/apq8016-lpass.h
+index 3c3e16c0aadbf..dc605c4bc2249 100644
+--- a/include/dt-bindings/sound/apq8016-lpass.h
++++ b/include/dt-bindings/sound/apq8016-lpass.h
+@@ -2,9 +2,8 @@
+ #ifndef __DT_APQ8016_LPASS_H
+ #define __DT_APQ8016_LPASS_H
+ 
+-#define MI2S_PRIMARY	0
+-#define MI2S_SECONDARY	1
+-#define MI2S_TERTIARY	2
+-#define MI2S_QUATERNARY	3
++#include <dt-bindings/sound/qcom,lpass.h>
++
++/* NOTE: Use qcom,lpass.h to define any AIF ID's for LPASS */
+ 
+ #endif /* __DT_APQ8016_LPASS_H */
+diff --git a/include/dt-bindings/sound/qcom,lpass.h b/include/dt-bindings/sound/qcom,lpass.h
+new file mode 100644
+index 0000000000000..7b0b80b38699e
+--- /dev/null
++++ b/include/dt-bindings/sound/qcom,lpass.h
+@@ -0,0 +1,15 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __DT_QCOM_LPASS_H
++#define __DT_QCOM_LPASS_H
++
++#define MI2S_PRIMARY	0
++#define MI2S_SECONDARY	1
++#define MI2S_TERTIARY	2
++#define MI2S_QUATERNARY	3
++#define MI2S_QUINARY	4
++
++#define LPASS_DP_RX	5
++
++#define LPASS_MCLK0	0
++
++#endif /* __DT_QCOM_LPASS_H */
+diff --git a/include/dt-bindings/sound/sc7180-lpass.h b/include/dt-bindings/sound/sc7180-lpass.h
+index 56ecaafd2dc68..5c1ee8b36b197 100644
+--- a/include/dt-bindings/sound/sc7180-lpass.h
++++ b/include/dt-bindings/sound/sc7180-lpass.h
+@@ -2,10 +2,8 @@
+ #ifndef __DT_SC7180_LPASS_H
+ #define __DT_SC7180_LPASS_H
+ 
+-#define MI2S_PRIMARY	0
+-#define MI2S_SECONDARY	1
+-#define LPASS_DP_RX	2
++#include <dt-bindings/sound/qcom,lpass.h>
+ 
+-#define LPASS_MCLK0	0
++/* NOTE: Use qcom,lpass.h to define any AIF ID's for LPASS */
+ 
+ #endif /* __DT_APQ8016_LPASS_H */
+diff --git a/include/linux/linkage.h b/include/linux/linkage.h
+index 5bcfbd972e970..dbf8506decca0 100644
+--- a/include/linux/linkage.h
++++ b/include/linux/linkage.h
+@@ -178,6 +178,11 @@
+  * Objtool generates debug info for both FUNC & CODE, but needs special
+  * annotations for each CODE's start (to describe the actual stack frame).
+  *
++ * Objtool requires that all code must be contained in an ELF symbol. Symbol
++ * names that have a  .L prefix do not emit symbol table entries. .L
++ * prefixed symbols can be used within a code region, but should be avoided for
++ * denoting a range of code via ``SYM_*_START/END`` annotations.
++ *
+  * ALIAS -- does not generate debug info -- the aliased function will
+  */
+ 
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 0f23e1ed5e710..add85094f9a58 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -1213,22 +1213,4 @@ static inline bool mlx5_is_roce_enabled(struct mlx5_core_dev *dev)
+ 	return val.vbool;
+ }
+ 
+-/**
+- * mlx5_core_net - Provide net namespace of the mlx5_core_dev
+- * @dev: mlx5 core device
+- *
+- * mlx5_core_net() returns the net namespace of mlx5 core device.
+- * This can be called only in below described limited context.
+- * (a) When a devlink instance for mlx5_core is registered and
+- *     when devlink reload operation is disabled.
+- *     or
+- * (b) during devlink reload reload_down() and reload_up callbacks
+- *     where it is ensured that devlink instance's net namespace is
+- *     stable.
+- */
+-static inline struct net *mlx5_core_net(struct mlx5_core_dev *dev)
+-{
+-	return devlink_net(priv_to_devlink(dev));
+-}
+-
+ #endif /* MLX5_DRIVER_H */
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index d4ef5bf941689..fe9747ee70a6f 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -625,6 +625,7 @@ static inline void tcp_clear_xmit_timers(struct sock *sk)
+ 
+ unsigned int tcp_sync_mss(struct sock *sk, u32 pmtu);
+ unsigned int tcp_current_mss(struct sock *sk);
++u32 tcp_clamp_probe0_to_user_timeout(const struct sock *sk, u32 when);
+ 
+ /* Bound MSS / TSO packet size with the half of the window */
+ static inline int tcp_bound_to_half_wnd(struct tcp_sock *tp, int pktsize)
+@@ -2065,7 +2066,7 @@ void tcp_mark_skb_lost(struct sock *sk, struct sk_buff *skb);
+ void tcp_newreno_mark_lost(struct sock *sk, bool snd_una_advanced);
+ extern s32 tcp_rack_skb_timeout(struct tcp_sock *tp, struct sk_buff *skb,
+ 				u32 reo_wnd);
+-extern void tcp_rack_mark_lost(struct sock *sk);
++extern bool tcp_rack_mark_lost(struct sock *sk);
+ extern void tcp_rack_advance(struct tcp_sock *tp, u8 sacked, u32 end_seq,
+ 			     u64 xmit_time);
+ extern void tcp_rack_reo_timeout(struct sock *sk);
+diff --git a/include/uapi/linux/rpl.h b/include/uapi/linux/rpl.h
+index 1dccb55cf8c64..708adddf9f138 100644
+--- a/include/uapi/linux/rpl.h
++++ b/include/uapi/linux/rpl.h
+@@ -28,10 +28,10 @@ struct ipv6_rpl_sr_hdr {
+ 		pad:4,
+ 		reserved1:16;
+ #elif defined(__BIG_ENDIAN_BITFIELD)
+-	__u32	reserved:20,
++	__u32	cmpri:4,
++		cmpre:4,
+ 		pad:4,
+-		cmpri:4,
+-		cmpre:4;
++		reserved:20;
+ #else
+ #error  "Please fix <asm/byteorder.h>"
+ #endif
+diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
+index 8798a8183974e..c589c7a9562ca 100644
+--- a/kernel/kexec_core.c
++++ b/kernel/kexec_core.c
+@@ -1135,7 +1135,6 @@ int kernel_kexec(void)
+ 
+ #ifdef CONFIG_KEXEC_JUMP
+ 	if (kexec_image->preserve_context) {
+-		lock_system_sleep();
+ 		pm_prepare_console();
+ 		error = freeze_processes();
+ 		if (error) {
+@@ -1198,7 +1197,6 @@ int kernel_kexec(void)
+ 		thaw_processes();
+  Restore_console:
+ 		pm_restore_console();
+-		unlock_system_sleep();
+ 	}
+ #endif
+ 
+diff --git a/kernel/power/swap.c b/kernel/power/swap.c
+index c73f2e295167d..72e33054a2e1b 100644
+--- a/kernel/power/swap.c
++++ b/kernel/power/swap.c
+@@ -497,10 +497,10 @@ static int swap_writer_finish(struct swap_map_handle *handle,
+ 		unsigned int flags, int error)
+ {
+ 	if (!error) {
+-		flush_swap_writer(handle);
+ 		pr_info("S");
+ 		error = mark_swapfiles(handle, flags);
+ 		pr_cont("|\n");
++		flush_swap_writer(handle);
+ 	}
+ 
+ 	if (error)
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 6bf066f924c15..fac5c1469ceee 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2845,7 +2845,8 @@ static void tcp_identify_packet_loss(struct sock *sk, int *ack_flag)
+ 	} else if (tcp_is_rack(sk)) {
+ 		u32 prior_retrans = tp->retrans_out;
+ 
+-		tcp_rack_mark_lost(sk);
++		if (tcp_rack_mark_lost(sk))
++			*ack_flag &= ~FLAG_SET_XMIT_TIMER;
+ 		if (prior_retrans > tp->retrans_out)
+ 			*ack_flag |= FLAG_LOST_RETRANS;
+ 	}
+@@ -3378,8 +3379,8 @@ static void tcp_ack_probe(struct sock *sk)
+ 	} else {
+ 		unsigned long when = tcp_probe0_when(sk, TCP_RTO_MAX);
+ 
+-		tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
+-				     when, TCP_RTO_MAX);
++		when = tcp_clamp_probe0_to_user_timeout(sk, when);
++		tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, when, TCP_RTO_MAX);
+ 	}
+ }
+ 
+@@ -3802,9 +3803,6 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
+ 
+ 	if (tp->tlp_high_seq)
+ 		tcp_process_tlp_ack(sk, ack, flag);
+-	/* If needed, reset TLP/RTO timer; RACK may later override this. */
+-	if (flag & FLAG_SET_XMIT_TIMER)
+-		tcp_set_xmit_timer(sk);
+ 
+ 	if (tcp_ack_is_dubious(sk, flag)) {
+ 		if (!(flag & (FLAG_SND_UNA_ADVANCED | FLAG_NOT_DUP))) {
+@@ -3817,6 +3815,10 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
+ 				      &rexmit);
+ 	}
+ 
++	/* If needed, reset TLP/RTO timer when RACK doesn't set. */
++	if (flag & FLAG_SET_XMIT_TIMER)
++		tcp_set_xmit_timer(sk);
++
+ 	if ((flag & FLAG_FORWARD_PROGRESS) || !(flag & FLAG_NOT_DUP))
+ 		sk_dst_confirm(sk);
+ 
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index e58e2589d7f98..f99494637ff47 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -4095,6 +4095,8 @@ void tcp_send_probe0(struct sock *sk)
+ 		 */
+ 		timeout = TCP_RESOURCE_PROBE_INTERVAL;
+ 	}
++
++	timeout = tcp_clamp_probe0_to_user_timeout(sk, timeout);
+ 	tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, timeout, TCP_RTO_MAX);
+ }
+ 
+diff --git a/net/ipv4/tcp_recovery.c b/net/ipv4/tcp_recovery.c
+index f65a3ddd0d58a..31fc178f42c02 100644
+--- a/net/ipv4/tcp_recovery.c
++++ b/net/ipv4/tcp_recovery.c
+@@ -96,13 +96,13 @@ static void tcp_rack_detect_loss(struct sock *sk, u32 *reo_timeout)
+ 	}
+ }
+ 
+-void tcp_rack_mark_lost(struct sock *sk)
++bool tcp_rack_mark_lost(struct sock *sk)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	u32 timeout;
+ 
+ 	if (!tp->rack.advanced)
+-		return;
++		return false;
+ 
+ 	/* Reset the advanced flag to avoid unnecessary queue scanning */
+ 	tp->rack.advanced = 0;
+@@ -112,6 +112,7 @@ void tcp_rack_mark_lost(struct sock *sk)
+ 		inet_csk_reset_xmit_timer(sk, ICSK_TIME_REO_TIMEOUT,
+ 					  timeout, inet_csk(sk)->icsk_rto);
+ 	}
++	return !!timeout;
+ }
+ 
+ /* Record the most recently (re)sent time among the (s)acked packets
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index faa92948441ba..4ef08079ccfa9 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -40,6 +40,24 @@ static u32 tcp_clamp_rto_to_user_timeout(const struct sock *sk)
+ 	return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(remaining));
+ }
+ 
++u32 tcp_clamp_probe0_to_user_timeout(const struct sock *sk, u32 when)
++{
++	struct inet_connection_sock *icsk = inet_csk(sk);
++	u32 remaining;
++	s32 elapsed;
++
++	if (!icsk->icsk_user_timeout || !icsk->icsk_probes_tstamp)
++		return when;
++
++	elapsed = tcp_jiffies32 - icsk->icsk_probes_tstamp;
++	if (unlikely(elapsed < 0))
++		elapsed = 0;
++	remaining = msecs_to_jiffies(icsk->icsk_user_timeout) - elapsed;
++	remaining = max_t(u32, remaining, TCP_TIMEOUT_MIN);
++
++	return min_t(u32, remaining, when);
++}
++
+ /**
+  *  tcp_write_err() - close socket and save error info
+  *  @sk:  The socket the error has appeared on.
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 2a21226fb518a..d6913784be2bd 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1082,6 +1082,7 @@ enum queue_stop_reason {
+ 	IEEE80211_QUEUE_STOP_REASON_FLUSH,
+ 	IEEE80211_QUEUE_STOP_REASON_TDLS_TEARDOWN,
+ 	IEEE80211_QUEUE_STOP_REASON_RESERVE_TID,
++	IEEE80211_QUEUE_STOP_REASON_IFTYPE_CHANGE,
+ 
+ 	IEEE80211_QUEUE_STOP_REASONS,
+ };
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 44154cc596cd4..f3c3557a9e4c4 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -1654,6 +1654,10 @@ static int ieee80211_runtime_change_iftype(struct ieee80211_sub_if_data *sdata,
+ 	if (ret)
+ 		return ret;
+ 
++	ieee80211_stop_vif_queues(local, sdata,
++				  IEEE80211_QUEUE_STOP_REASON_IFTYPE_CHANGE);
++	synchronize_net();
++
+ 	ieee80211_do_stop(sdata, false);
+ 
+ 	ieee80211_teardown_sdata(sdata);
+@@ -1676,6 +1680,8 @@ static int ieee80211_runtime_change_iftype(struct ieee80211_sub_if_data *sdata,
+ 	err = ieee80211_do_open(&sdata->wdev, false);
+ 	WARN(err, "type change: do_open returned %d", err);
+ 
++	ieee80211_wake_vif_queues(local, sdata,
++				  IEEE80211_QUEUE_STOP_REASON_IFTYPE_CHANGE);
+ 	return ret;
+ }
+ 
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 4990f7cbfafdf..5c84a968dae29 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -204,8 +204,10 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ 		nft_set_ext_add_length(&priv->tmpl, NFT_SET_EXT_EXPR,
+ 				       priv->expr->ops->size);
+ 	if (set->flags & NFT_SET_TIMEOUT) {
+-		if (timeout || set->timeout)
++		if (timeout || set->timeout) {
++			nft_set_ext_add(&priv->tmpl, NFT_SET_EXT_TIMEOUT);
+ 			nft_set_ext_add(&priv->tmpl, NFT_SET_EXT_EXPIRATION);
++		}
+ 	}
+ 
+ 	priv->timeout = timeout;
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index 8709f3d4e7c4b..bec7847f8eaac 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -852,6 +852,7 @@ static int nfc_genl_stop_poll(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	if (!dev->polling) {
+ 		device_unlock(&dev->dev);
++		nfc_put_device(dev);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/net/nfc/rawsock.c b/net/nfc/rawsock.c
+index 955c195ae14bc..9c7eb8455ba8e 100644
+--- a/net/nfc/rawsock.c
++++ b/net/nfc/rawsock.c
+@@ -105,7 +105,7 @@ static int rawsock_connect(struct socket *sock, struct sockaddr *_addr,
+ 	if (addr->target_idx > dev->target_next_idx - 1 ||
+ 	    addr->target_idx < dev->target_next_idx - dev->n_targets) {
+ 		rc = -EINVAL;
+-		goto error;
++		goto put_dev;
+ 	}
+ 
+ 	rc = nfc_activate_target(dev, addr->target_idx, addr->nfc_protocol);
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index 8df1964db3332..a0b033954ceac 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -197,6 +197,7 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
+ 	tail = b->peer_backlog_tail;
+ 	while (CIRC_CNT(head, tail, size) > 0) {
+ 		struct rxrpc_peer *peer = b->peer_backlog[tail];
++		rxrpc_put_local(peer->local);
+ 		kfree(peer);
+ 		tail = (tail + 1) & (size - 1);
+ 	}
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index d10916ab45267..f64e681493a59 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -997,9 +997,12 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
+ 			mask |= EPOLLOUT | EPOLLWRNORM | EPOLLWRBAND;
+ 
+ 	} else if (sock->type == SOCK_STREAM) {
+-		const struct vsock_transport *transport = vsk->transport;
++		const struct vsock_transport *transport;
++
+ 		lock_sock(sk);
+ 
++		transport = vsk->transport;
++
+ 		/* Listening sockets that have connections in their accept
+ 		 * queue can be read.
+ 		 */
+@@ -1082,10 +1085,11 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
+ 	err = 0;
+ 	sk = sock->sk;
+ 	vsk = vsock_sk(sk);
+-	transport = vsk->transport;
+ 
+ 	lock_sock(sk);
+ 
++	transport = vsk->transport;
++
+ 	err = vsock_auto_bind(vsk);
+ 	if (err)
+ 		goto out;
+@@ -1544,10 +1548,11 @@ static int vsock_stream_setsockopt(struct socket *sock,
+ 	err = 0;
+ 	sk = sock->sk;
+ 	vsk = vsock_sk(sk);
+-	transport = vsk->transport;
+ 
+ 	lock_sock(sk);
+ 
++	transport = vsk->transport;
++
+ 	switch (optname) {
+ 	case SO_VM_SOCKETS_BUFFER_SIZE:
+ 		COPY_IN(val);
+@@ -1680,7 +1685,6 @@ static int vsock_stream_sendmsg(struct socket *sock, struct msghdr *msg,
+ 
+ 	sk = sock->sk;
+ 	vsk = vsock_sk(sk);
+-	transport = vsk->transport;
+ 	total_written = 0;
+ 	err = 0;
+ 
+@@ -1689,6 +1693,8 @@ static int vsock_stream_sendmsg(struct socket *sock, struct msghdr *msg,
+ 
+ 	lock_sock(sk);
+ 
++	transport = vsk->transport;
++
+ 	/* Callers should not provide a destination with stream sockets. */
+ 	if (msg->msg_namelen) {
+ 		err = sk->sk_state == TCP_ESTABLISHED ? -EISCONN : -EOPNOTSUPP;
+@@ -1823,11 +1829,12 @@ vsock_stream_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 
+ 	sk = sock->sk;
+ 	vsk = vsock_sk(sk);
+-	transport = vsk->transport;
+ 	err = 0;
+ 
+ 	lock_sock(sk);
+ 
++	transport = vsk->transport;
++
+ 	if (!transport || sk->sk_state != TCP_ESTABLISHED) {
+ 		/* Recvmsg is supposed to return 0 if a peer performs an
+ 		 * orderly shutdown. Differentiate between that case and when a
+diff --git a/net/wireless/wext-core.c b/net/wireless/wext-core.c
+index 69102fda9ebd4..76a80a41615be 100644
+--- a/net/wireless/wext-core.c
++++ b/net/wireless/wext-core.c
+@@ -896,8 +896,9 @@ out:
+ int call_commit_handler(struct net_device *dev)
+ {
+ #ifdef CONFIG_WIRELESS_EXT
+-	if ((netif_running(dev)) &&
+-	   (dev->wireless_handlers->standard[0] != NULL))
++	if (netif_running(dev) &&
++	    dev->wireless_handlers &&
++	    dev->wireless_handlers->standard[0])
+ 		/* Call the commit handler on the driver */
+ 		return dev->wireless_handlers->standard[0](dev, NULL,
+ 							   NULL, NULL);
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index 37456d022cfa3..61e6220ddd5ae 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -660,7 +660,7 @@ resume:
+ 		/* only the first xfrm gets the encap type */
+ 		encap_type = 0;
+ 
+-		if (async && x->repl->recheck(x, skb, seq)) {
++		if (x->repl->recheck(x, skb, seq)) {
+ 			XFRM_INC_STATS(net, LINUX_MIB_XFRMINSTATESEQERROR);
+ 			goto drop_unlock;
+ 		}
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index d622c2548d229..b74f28cabe24f 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -793,15 +793,22 @@ static int xfrm_policy_addr_delta(const xfrm_address_t *a,
+ 				  const xfrm_address_t *b,
+ 				  u8 prefixlen, u16 family)
+ {
++	u32 ma, mb, mask;
+ 	unsigned int pdw, pbi;
+ 	int delta = 0;
+ 
+ 	switch (family) {
+ 	case AF_INET:
+-		if (sizeof(long) == 4 && prefixlen == 0)
+-			return ntohl(a->a4) - ntohl(b->a4);
+-		return (ntohl(a->a4) & ((~0UL << (32 - prefixlen)))) -
+-		       (ntohl(b->a4) & ((~0UL << (32 - prefixlen))));
++		if (prefixlen == 0)
++			return 0;
++		mask = ~0U << (32 - prefixlen);
++		ma = ntohl(a->a4) & mask;
++		mb = ntohl(b->a4) & mask;
++		if (ma < mb)
++			delta = -1;
++		else if (ma > mb)
++			delta = 1;
++		break;
+ 	case AF_INET6:
+ 		pdw = prefixlen >> 5;
+ 		pbi = prefixlen & 0x1f;
+@@ -812,10 +819,13 @@ static int xfrm_policy_addr_delta(const xfrm_address_t *a,
+ 				return delta;
+ 		}
+ 		if (pbi) {
+-			u32 mask = ~0u << (32 - pbi);
+-
+-			delta = (ntohl(a->a6[pdw]) & mask) -
+-				(ntohl(b->a6[pdw]) & mask);
++			mask = ~0U << (32 - pbi);
++			ma = ntohl(a->a6[pdw]) & mask;
++			mb = ntohl(b->a6[pdw]) & mask;
++			if (ma < mb)
++				delta = -1;
++			else if (ma > mb)
++				delta = 1;
+ 		}
+ 		break;
+ 	default:
+@@ -3078,8 +3088,8 @@ struct dst_entry *xfrm_lookup_with_ifid(struct net *net,
+ 		xflo.flags = flags;
+ 
+ 		/* To accelerate a bit...  */
+-		if ((dst_orig->flags & DST_NOXFRM) ||
+-		    !net->xfrm.policy_count[XFRM_POLICY_OUT])
++		if (!if_id && ((dst_orig->flags & DST_NOXFRM) ||
++			       !net->xfrm.policy_count[XFRM_POLICY_OUT]))
+ 			goto nopol;
+ 
+ 		xdst = xfrm_bundle_lookup(net, fl, family, dir, &xflo, if_id);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index ed5b6b894dc19..290645516313c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8006,6 +8006,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x194e, "ASUS UX563FD", ALC294_FIXUP_ASUS_HPE),
++	SND_PCI_QUIRK(0x1043, 0x1982, "ASUS B1400CEPE", ALC256_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x19ce, "ASUS B9450FA", ALC294_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x19e1, "ASUS UX581LV", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
+index 834367dd54e1b..a5c1a2c4eae4e 100644
+--- a/sound/pci/hda/patch_via.c
++++ b/sound/pci/hda/patch_via.c
+@@ -1043,7 +1043,7 @@ static const struct hda_fixup via_fixups[] = {
+ static const struct snd_pci_quirk vt2002p_fixups[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1487, "Asus G75", VIA_FIXUP_ASUS_G75),
+ 	SND_PCI_QUIRK(0x1043, 0x8532, "Asus X202E", VIA_FIXUP_INTMIC_BOOST),
+-	SND_PCI_QUIRK(0x1558, 0x3501, "Clevo W35xSS_370SS", VIA_FIXUP_POWER_SAVE),
++	SND_PCI_QUIRK_VENDOR(0x1558, "Clevo", VIA_FIXUP_POWER_SAVE),
+ 	{}
+ };
+ 
+diff --git a/sound/soc/amd/renoir/rn-pci-acp3x.c b/sound/soc/amd/renoir/rn-pci-acp3x.c
+index 6f153856657ae..917536def5f2a 100644
+--- a/sound/soc/amd/renoir/rn-pci-acp3x.c
++++ b/sound/soc/amd/renoir/rn-pci-acp3x.c
+@@ -165,10 +165,24 @@ static int rn_acp_deinit(void __iomem *acp_base)
+ 
+ static const struct dmi_system_id rn_acp_quirk_table[] = {
+ 	{
+-		/* Lenovo IdeaPad Flex 5 14ARE05, IdeaPad 5 15ARE05 */
++		/* Lenovo IdeaPad S340-14API */
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+-			DMI_EXACT_MATCH(DMI_BOARD_NAME, "LNVNB161216"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "81NB"),
++		}
++	},
++	{
++		/* Lenovo IdeaPad Flex 5 14ARE05 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "81X2"),
++		}
++	},
++	{
++		/* Lenovo IdeaPad 5 15ARE05 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "81YQ"),
+ 		}
+ 	},
+ 	{
+diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
+index 40bee10b0c65a..d699e61eca3d0 100644
+--- a/sound/soc/intel/skylake/skl-topology.c
++++ b/sound/soc/intel/skylake/skl-topology.c
+@@ -3619,15 +3619,16 @@ static void skl_tplg_complete(struct snd_soc_component *component)
+ 
+ 	list_for_each_entry(dobj, &component->dobj_list, list) {
+ 		struct snd_kcontrol *kcontrol = dobj->control.kcontrol;
+-		struct soc_enum *se =
+-			(struct soc_enum *)kcontrol->private_value;
+-		char **texts = dobj->control.dtexts;
++		struct soc_enum *se;
++		char **texts;
+ 		char chan_text[4];
+ 
+-		if (dobj->type != SND_SOC_DOBJ_ENUM ||
+-		    dobj->control.kcontrol->put !=
+-		    skl_tplg_multi_config_set_dmic)
++		if (dobj->type != SND_SOC_DOBJ_ENUM || !kcontrol ||
++		    kcontrol->put != skl_tplg_multi_config_set_dmic)
+ 			continue;
++
++		se = (struct soc_enum *)kcontrol->private_value;
++		texts = dobj->control.dtexts;
+ 		sprintf(chan_text, "c%d", mach->mach_params.dmic_num);
+ 
+ 		for (i = 0; i < se->items; i++) {
+diff --git a/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c b/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
+index 26e7d9a7198f8..20d31b69a5c00 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
++++ b/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
+@@ -532,6 +532,7 @@ static struct snd_soc_dai_link mt8183_da7219_dai_links[] = {
+ 		.dpcm_playback = 1,
+ 		.ignore_suspend = 1,
+ 		.be_hw_params_fixup = mt8183_i2s_hw_params_fixup,
++		.ignore = 1,
+ 		.init = mt8183_da7219_max98357_hdmi_init,
+ 		SND_SOC_DAILINK_REG(tdm),
+ 	},
+@@ -754,8 +755,10 @@ static int mt8183_da7219_max98357_dev_probe(struct platform_device *pdev)
+ 			}
+ 		}
+ 
+-		if (hdmi_codec && strcmp(dai_link->name, "TDM") == 0)
++		if (hdmi_codec && strcmp(dai_link->name, "TDM") == 0) {
+ 			dai_link->codecs->of_node = hdmi_codec;
++			dai_link->ignore = 0;
++		}
+ 
+ 		if (!dai_link->platforms->name)
+ 			dai_link->platforms->of_node = platform_node;
+diff --git a/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c b/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
+index 327dfad41e310..79ba2f2d84522 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
++++ b/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
+@@ -515,6 +515,7 @@ static struct snd_soc_dai_link mt8183_mt6358_ts3a227_dai_links[] = {
+ 		.ignore_suspend = 1,
+ 		.be_hw_params_fixup = mt8183_i2s_hw_params_fixup,
+ 		.ops = &mt8183_mt6358_tdm_ops,
++		.ignore = 1,
+ 		.init = mt8183_mt6358_ts3a227_max98357_hdmi_init,
+ 		SND_SOC_DAILINK_REG(tdm),
+ 	},
+@@ -661,8 +662,10 @@ mt8183_mt6358_ts3a227_max98357_dev_probe(struct platform_device *pdev)
+ 						    SND_SOC_DAIFMT_CBM_CFM;
+ 		}
+ 
+-		if (hdmi_codec && strcmp(dai_link->name, "TDM") == 0)
++		if (hdmi_codec && strcmp(dai_link->name, "TDM") == 0) {
+ 			dai_link->codecs->of_node = hdmi_codec;
++			dai_link->ignore = 0;
++		}
+ 
+ 		if (!dai_link->platforms->name)
+ 			dai_link->platforms->of_node = platform_node;
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index 426235a217ec6..46bb24afeacf0 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -270,18 +270,6 @@ static int lpass_cpu_daiops_trigger(struct snd_pcm_substream *substream,
+ 	struct lpaif_i2sctl *i2sctl = drvdata->i2sctl;
+ 	unsigned int id = dai->driver->id;
+ 	int ret = -EINVAL;
+-	unsigned int val = 0;
+-
+-	ret = regmap_read(drvdata->lpaif_map,
+-				LPAIF_I2SCTL_REG(drvdata->variant, dai->driver->id), &val);
+-	if (ret) {
+-		dev_err(dai->dev, "error reading from i2sctl reg: %d\n", ret);
+-		return ret;
+-	}
+-	if (val == LPAIF_I2SCTL_RESET_STATE) {
+-		dev_err(dai->dev, "error in i2sctl register state\n");
+-		return -ENOTRECOVERABLE;
+-	}
+ 
+ 	switch (cmd) {
+ 	case SNDRV_PCM_TRIGGER_START:
+@@ -356,8 +344,30 @@ int asoc_qcom_lpass_cpu_dai_probe(struct snd_soc_dai *dai)
+ }
+ EXPORT_SYMBOL_GPL(asoc_qcom_lpass_cpu_dai_probe);
+ 
++static int asoc_qcom_of_xlate_dai_name(struct snd_soc_component *component,
++				   struct of_phandle_args *args,
++				   const char **dai_name)
++{
++	struct lpass_data *drvdata = snd_soc_component_get_drvdata(component);
++	struct lpass_variant *variant = drvdata->variant;
++	int id = args->args[0];
++	int ret = -EINVAL;
++	int i;
++
++	for (i = 0; i  < variant->num_dai; i++) {
++		if (variant->dai_driver[i].id == id) {
++			*dai_name = variant->dai_driver[i].name;
++			ret = 0;
++			break;
++		}
++	}
++
++	return ret;
++}
++
+ static const struct snd_soc_component_driver lpass_cpu_comp_driver = {
+ 	.name = "lpass-cpu",
++	.of_xlate_dai_name = asoc_qcom_of_xlate_dai_name,
+ };
+ 
+ static bool lpass_cpu_regmap_writeable(struct device *dev, unsigned int reg)
+@@ -454,20 +464,16 @@ static bool lpass_cpu_regmap_volatile(struct device *dev, unsigned int reg)
+ 	struct lpass_variant *v = drvdata->variant;
+ 	int i;
+ 
+-	for (i = 0; i < v->i2s_ports; ++i)
+-		if (reg == LPAIF_I2SCTL_REG(v, i))
+-			return true;
+ 	for (i = 0; i < v->irq_ports; ++i)
+ 		if (reg == LPAIF_IRQSTAT_REG(v, i))
+ 			return true;
+ 
+ 	for (i = 0; i < v->rdma_channels; ++i)
+-		if (reg == LPAIF_RDMACURR_REG(v, i) || reg == LPAIF_RDMACTL_REG(v, i))
++		if (reg == LPAIF_RDMACURR_REG(v, i))
+ 			return true;
+ 
+ 	for (i = 0; i < v->wrdma_channels; ++i)
+-		if (reg == LPAIF_WRDMACURR_REG(v, i + v->wrdma_channel_start) ||
+-			reg == LPAIF_WRDMACTL_REG(v, i + v->wrdma_channel_start))
++		if (reg == LPAIF_WRDMACURR_REG(v, i + v->wrdma_channel_start))
+ 			return true;
+ 
+ 	return false;
+diff --git a/sound/soc/qcom/lpass-ipq806x.c b/sound/soc/qcom/lpass-ipq806x.c
+index 832a9161484e7..3a45e6a26f04b 100644
+--- a/sound/soc/qcom/lpass-ipq806x.c
++++ b/sound/soc/qcom/lpass-ipq806x.c
+@@ -131,7 +131,7 @@ static struct lpass_variant ipq806x_data = {
+ 	.micmode		= REG_FIELD_ID(0x0010, 4, 7, 5, 0x4),
+ 	.micmono		= REG_FIELD_ID(0x0010, 3, 3, 5, 0x4),
+ 	.wssrc			= REG_FIELD_ID(0x0010, 2, 2, 5, 0x4),
+-	.bitwidth		= REG_FIELD_ID(0x0010, 0, 0, 5, 0x4),
++	.bitwidth		= REG_FIELD_ID(0x0010, 0, 1, 5, 0x4),
+ 
+ 	.rdma_dyncclk		= REG_FIELD_ID(0x6000, 12, 12, 4, 0x1000),
+ 	.rdma_bursten		= REG_FIELD_ID(0x6000, 11, 11, 4, 0x1000),
+diff --git a/sound/soc/qcom/lpass-lpaif-reg.h b/sound/soc/qcom/lpass-lpaif-reg.h
+index 405542832e994..baf72f124ea9b 100644
+--- a/sound/soc/qcom/lpass-lpaif-reg.h
++++ b/sound/soc/qcom/lpass-lpaif-reg.h
+@@ -133,7 +133,7 @@
+ #define	LPAIF_WRDMAPERCNT_REG(v, chan)	LPAIF_WRDMA_REG_ADDR(v, 0x14, (chan))
+ 
+ #define LPAIF_INTFDMA_REG(v, chan, reg, dai_id)  \
+-		((v->dai_driver[dai_id].id ==  LPASS_DP_RX) ? \
++	((dai_id ==  LPASS_DP_RX) ? \
+ 		LPAIF_HDMI_RDMA##reg##_REG(v, chan) : \
+ 		 LPAIF_RDMA##reg##_REG(v, chan))
+ 
+diff --git a/sound/soc/qcom/lpass-platform.c b/sound/soc/qcom/lpass-platform.c
+index 80b09dede5f9c..71122e9eb2305 100644
+--- a/sound/soc/qcom/lpass-platform.c
++++ b/sound/soc/qcom/lpass-platform.c
+@@ -257,6 +257,9 @@ static int lpass_platform_pcmops_hw_params(struct snd_soc_component *component,
+ 		break;
+ 	case MI2S_PRIMARY:
+ 	case MI2S_SECONDARY:
++	case MI2S_TERTIARY:
++	case MI2S_QUATERNARY:
++	case MI2S_QUINARY:
+ 		ret = regmap_fields_write(dmactl->intf, id,
+ 						LPAIF_DMACTL_AUDINTF(dma_port));
+ 		if (ret) {
+@@ -452,7 +455,6 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component,
+ 	unsigned int reg_irqclr = 0, val_irqclr = 0;
+ 	unsigned int  reg_irqen = 0, val_irqen = 0, val_mask = 0;
+ 	unsigned int dai_id = cpu_dai->driver->id;
+-	unsigned int dma_ctrl_reg = 0;
+ 
+ 	ch = pcm_data->dma_ch;
+ 	if (dir ==  SNDRV_PCM_STREAM_PLAYBACK) {
+@@ -469,17 +471,7 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component,
+ 		id = pcm_data->dma_ch - v->wrdma_channel_start;
+ 		map = drvdata->lpaif_map;
+ 	}
+-	ret = regmap_read(map, LPAIF_DMACTL_REG(v, ch, dir, dai_id), &dma_ctrl_reg);
+-	if (ret) {
+-		dev_err(soc_runtime->dev, "error reading from rdmactl reg: %d\n", ret);
+-		return ret;
+-	}
+ 
+-	if (dma_ctrl_reg == LPAIF_DMACTL_RESET_STATE ||
+-		dma_ctrl_reg == LPAIF_DMACTL_RESET_STATE + 1) {
+-		dev_err(soc_runtime->dev, "error in rdmactl register state\n");
+-		return -ENOTRECOVERABLE;
+-	}
+ 	switch (cmd) {
+ 	case SNDRV_PCM_TRIGGER_START:
+ 	case SNDRV_PCM_TRIGGER_RESUME:
+@@ -500,7 +492,6 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component,
+ 					"error writing to rdmactl reg: %d\n", ret);
+ 				return ret;
+ 			}
+-			map = drvdata->hdmiif_map;
+ 			reg_irqclr = LPASS_HDMITX_APP_IRQCLEAR_REG(v);
+ 			val_irqclr = (LPAIF_IRQ_ALL(ch) |
+ 					LPAIF_IRQ_HDMI_REQ_ON_PRELOAD(ch) |
+@@ -519,7 +510,9 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component,
+ 			break;
+ 		case MI2S_PRIMARY:
+ 		case MI2S_SECONDARY:
+-			map = drvdata->lpaif_map;
++		case MI2S_TERTIARY:
++		case MI2S_QUATERNARY:
++		case MI2S_QUINARY:
+ 			reg_irqclr = LPAIF_IRQCLEAR_REG(v, LPAIF_IRQ_PORT_HOST);
+ 			val_irqclr = LPAIF_IRQ_ALL(ch);
+ 
+@@ -563,7 +556,6 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component,
+ 					"error writing to rdmactl reg: %d\n", ret);
+ 				return ret;
+ 			}
+-			map = drvdata->hdmiif_map;
+ 			reg_irqen = LPASS_HDMITX_APP_IRQEN_REG(v);
+ 			val_mask = (LPAIF_IRQ_ALL(ch) |
+ 					LPAIF_IRQ_HDMI_REQ_ON_PRELOAD(ch) |
+@@ -573,7 +565,9 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component,
+ 			break;
+ 		case MI2S_PRIMARY:
+ 		case MI2S_SECONDARY:
+-			map = drvdata->lpaif_map;
++		case MI2S_TERTIARY:
++		case MI2S_QUATERNARY:
++		case MI2S_QUINARY:
+ 			reg_irqen = LPAIF_IRQEN_REG(v, LPAIF_IRQ_PORT_HOST);
+ 			val_mask = LPAIF_IRQ_ALL(ch);
+ 			val_irqen = 0;
+@@ -670,6 +664,9 @@ static irqreturn_t lpass_dma_interrupt_handler(
+ 	break;
+ 	case MI2S_PRIMARY:
+ 	case MI2S_SECONDARY:
++	case MI2S_TERTIARY:
++	case MI2S_QUATERNARY:
++	case MI2S_QUINARY:
+ 		map = drvdata->lpaif_map;
+ 		reg = LPAIF_IRQCLEAR_REG(v, LPAIF_IRQ_PORT_HOST);
+ 		val = 0;
+diff --git a/sound/soc/qcom/lpass-sc7180.c b/sound/soc/qcom/lpass-sc7180.c
+index bc998d5016000..c647e627897a2 100644
+--- a/sound/soc/qcom/lpass-sc7180.c
++++ b/sound/soc/qcom/lpass-sc7180.c
+@@ -20,7 +20,7 @@
+ #include "lpass.h"
+ 
+ static struct snd_soc_dai_driver sc7180_lpass_cpu_dai_driver[] = {
+-	[MI2S_PRIMARY] = {
++	{
+ 		.id = MI2S_PRIMARY,
+ 		.name = "Primary MI2S",
+ 		.playback = {
+@@ -43,9 +43,7 @@ static struct snd_soc_dai_driver sc7180_lpass_cpu_dai_driver[] = {
+ 		},
+ 		.probe	= &asoc_qcom_lpass_cpu_dai_probe,
+ 		.ops    = &asoc_qcom_lpass_cpu_dai_ops,
+-	},
+-
+-	[MI2S_SECONDARY] = {
++	}, {
+ 		.id = MI2S_SECONDARY,
+ 		.name = "Secondary MI2S",
+ 		.playback = {
+@@ -59,8 +57,7 @@ static struct snd_soc_dai_driver sc7180_lpass_cpu_dai_driver[] = {
+ 		},
+ 		.probe	= &asoc_qcom_lpass_cpu_dai_probe,
+ 		.ops    = &asoc_qcom_lpass_cpu_dai_ops,
+-	},
+-	[LPASS_DP_RX] = {
++	}, {
+ 		.id = LPASS_DP_RX,
+ 		.name = "Hdmi",
+ 		.playback = {
+diff --git a/sound/soc/qcom/lpass.h b/sound/soc/qcom/lpass.h
+index bccd1a05d771e..868c1c8dbd455 100644
+--- a/sound/soc/qcom/lpass.h
++++ b/sound/soc/qcom/lpass.h
+@@ -12,7 +12,7 @@
+ #include <linux/compiler.h>
+ #include <linux/platform_device.h>
+ #include <linux/regmap.h>
+-#include <dt-bindings/sound/sc7180-lpass.h>
++#include <dt-bindings/sound/qcom,lpass.h>
+ #include "lpass-hdmi.h"
+ 
+ #define LPASS_AHBIX_CLOCK_FREQUENCY		131072000
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index c5ef432a023ba..1030e11017b27 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -506,7 +506,7 @@ static void remove_dai(struct snd_soc_component *comp,
+ {
+ 	struct snd_soc_dai_driver *dai_drv =
+ 		container_of(dobj, struct snd_soc_dai_driver, dobj);
+-	struct snd_soc_dai *dai;
++	struct snd_soc_dai *dai, *_dai;
+ 
+ 	if (pass != SOC_TPLG_PASS_PCM_DAI)
+ 		return;
+@@ -514,9 +514,9 @@ static void remove_dai(struct snd_soc_component *comp,
+ 	if (dobj->ops && dobj->ops->dai_unload)
+ 		dobj->ops->dai_unload(comp, dobj);
+ 
+-	for_each_component_dais(comp, dai)
++	for_each_component_dais_safe(comp, dai, _dai)
+ 		if (dai->driver == dai_drv)
+-			dai->driver = NULL;
++			snd_soc_unregister_dai(dai);
+ 
+ 	kfree(dai_drv->playback.stream_name);
+ 	kfree(dai_drv->capture.stream_name);
+@@ -987,7 +987,7 @@ static int soc_tplg_denum_create_values(struct soc_enum *se,
+ 		return -EINVAL;
+ 
+ 	se->dobj.control.dvalues = kzalloc(le32_to_cpu(ec->items) *
+-					   sizeof(u32),
++					   sizeof(*se->dobj.control.dvalues),
+ 					   GFP_KERNEL);
+ 	if (!se->dobj.control.dvalues)
+ 		return -ENOMEM;
+@@ -1876,7 +1876,7 @@ static int soc_tplg_dai_create(struct soc_tplg *tplg,
+ 	list_add(&dai_drv->dobj.list, &tplg->comp->dobj_list);
+ 
+ 	/* register the DAI to the component */
+-	dai = devm_snd_soc_register_dai(tplg->comp->dev, tplg->comp, dai_drv, false);
++	dai = snd_soc_register_dai(tplg->comp, dai_drv, false);
+ 	if (!dai)
+ 		return -ENOMEM;
+ 
+@@ -1884,6 +1884,7 @@ static int soc_tplg_dai_create(struct soc_tplg *tplg,
+ 	ret = snd_soc_dapm_new_dai_widgets(dapm, dai);
+ 	if (ret != 0) {
+ 		dev_err(dai->dev, "Failed to create DAI widgets %d\n", ret);
++		snd_soc_unregister_dai(dai);
+ 		return ret;
+ 	}
+ 
+diff --git a/sound/soc/sof/intel/Kconfig b/sound/soc/sof/intel/Kconfig
+index 5bfc2f8b13b90..de7ff2d097ab9 100644
+--- a/sound/soc/sof/intel/Kconfig
++++ b/sound/soc/sof/intel/Kconfig
+@@ -337,7 +337,7 @@ config SND_SOC_SOF_HDA
+ 
+ config SND_SOC_SOF_INTEL_SOUNDWIRE_LINK
+ 	bool "SOF support for SoundWire"
+-	depends on SOUNDWIRE && ACPI
++	depends on ACPI
+ 	help
+ 	  This adds support for SoundWire with Sound Open Firmware
+ 		  for Intel(R) platforms.
+@@ -353,6 +353,7 @@ config SND_SOC_SOF_INTEL_SOUNDWIRE_LINK_BASELINE
+ 
+ config SND_SOC_SOF_INTEL_SOUNDWIRE
+ 	tristate
++	select SOUNDWIRE
+ 	select SOUNDWIRE_INTEL
+ 	help
+ 	  This option is not user-selectable but automagically handled by
+diff --git a/tools/testing/selftests/net/forwarding/router_mpath_nh.sh b/tools/testing/selftests/net/forwarding/router_mpath_nh.sh
+index cf3d26c233e8e..7fcc42bc076fa 100755
+--- a/tools/testing/selftests/net/forwarding/router_mpath_nh.sh
++++ b/tools/testing/selftests/net/forwarding/router_mpath_nh.sh
+@@ -197,7 +197,7 @@ multipath4_test()
+ 	t0_rp12=$(link_stats_tx_packets_get $rp12)
+ 	t0_rp13=$(link_stats_tx_packets_get $rp13)
+ 
+-	ip vrf exec vrf-h1 $MZ -q -p 64 -A 192.0.2.2 -B 198.51.100.2 \
++	ip vrf exec vrf-h1 $MZ $h1 -q -p 64 -A 192.0.2.2 -B 198.51.100.2 \
+ 		-d 1msec -t udp "sp=1024,dp=0-32768"
+ 
+ 	t1_rp12=$(link_stats_tx_packets_get $rp12)
+diff --git a/tools/testing/selftests/net/forwarding/router_multipath.sh b/tools/testing/selftests/net/forwarding/router_multipath.sh
+index 79a2099279621..464821c587a5e 100755
+--- a/tools/testing/selftests/net/forwarding/router_multipath.sh
++++ b/tools/testing/selftests/net/forwarding/router_multipath.sh
+@@ -178,7 +178,7 @@ multipath4_test()
+        t0_rp12=$(link_stats_tx_packets_get $rp12)
+        t0_rp13=$(link_stats_tx_packets_get $rp13)
+ 
+-       ip vrf exec vrf-h1 $MZ -q -p 64 -A 192.0.2.2 -B 198.51.100.2 \
++       ip vrf exec vrf-h1 $MZ $h1 -q -p 64 -A 192.0.2.2 -B 198.51.100.2 \
+ 	       -d 1msec -t udp "sp=1024,dp=0-32768"
+ 
+        t1_rp12=$(link_stats_tx_packets_get $rp12)
+diff --git a/tools/testing/selftests/net/xfrm_policy.sh b/tools/testing/selftests/net/xfrm_policy.sh
+index 7a1bf94c5bd38..bdf450eaf60cf 100755
+--- a/tools/testing/selftests/net/xfrm_policy.sh
++++ b/tools/testing/selftests/net/xfrm_policy.sh
+@@ -202,7 +202,7 @@ check_xfrm() {
+ 	# 1: iptables -m policy rule count != 0
+ 	rval=$1
+ 	ip=$2
+-	lret=0
++	local lret=0
+ 
+ 	ip netns exec ns1 ping -q -c 1 10.0.2.$ip > /dev/null
+ 
+@@ -287,6 +287,47 @@ check_hthresh_repeat()
+ 	return 0
+ }
+ 
++# insert non-overlapping policies in a random order and check that
++# all of them can be fetched using the traffic selectors.
++check_random_order()
++{
++	local ns=$1
++	local log=$2
++
++	for i in $(seq 100); do
++		ip -net $ns xfrm policy flush
++		for j in $(seq 0 16 255 | sort -R); do
++			ip -net $ns xfrm policy add dst $j.0.0.0/24 dir out priority 10 action allow
++		done
++		for j in $(seq 0 16 255); do
++			if ! ip -net $ns xfrm policy get dst $j.0.0.0/24 dir out > /dev/null; then
++				echo "FAIL: $log" 1>&2
++				return 1
++			fi
++		done
++	done
++
++	for i in $(seq 100); do
++		ip -net $ns xfrm policy flush
++		for j in $(seq 0 16 255 | sort -R); do
++			local addr=$(printf "e000:0000:%02x00::/56" $j)
++			ip -net $ns xfrm policy add dst $addr dir out priority 10 action allow
++		done
++		for j in $(seq 0 16 255); do
++			local addr=$(printf "e000:0000:%02x00::/56" $j)
++			if ! ip -net $ns xfrm policy get dst $addr dir out > /dev/null; then
++				echo "FAIL: $log" 1>&2
++				return 1
++			fi
++		done
++	done
++
++	ip -net $ns xfrm policy flush
++
++	echo "PASS: $log"
++	return 0
++}
++
+ #check for needed privileges
+ if [ "$(id -u)" -ne 0 ];then
+ 	echo "SKIP: Need root privileges"
+@@ -438,6 +479,8 @@ check_exceptions "exceptions and block policies after htresh change to normal"
+ 
+ check_hthresh_repeat "policies with repeated htresh change"
+ 
++check_random_order ns3 "policies inserted in random order"
++
+ for i in 1 2 3 4;do ip netns del ns$i;done
+ 
+ exit $ret
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 3083fb53861df..cf9cc0ed7e995 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1289,6 +1289,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
+ 		return -EINVAL;
+ 	/* We can read the guest memory with __xxx_user() later on. */
+ 	if ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
++	    (mem->userspace_addr != untagged_addr(mem->userspace_addr)) ||
+ 	     !access_ok((void __user *)(unsigned long)mem->userspace_addr,
+ 			mem->memory_size))
+ 		return -EINVAL;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-07 15:20 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-02-07 15:20 UTC (permalink / raw
  To: gentoo-commits

commit:     ac7c8645fd308101aed87ff91a355aac99de9053
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sun Feb  7 15:19:10 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Sun Feb  7 15:19:31 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ac7c8645

Linux patch 5.10.14

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1013_linux-5.10.14.patch | 1728 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1732 insertions(+)

diff --git a/0000_README b/0000_README
index 0a7ffef..897c945 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  1012_linux-5.10.13.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.13
 
+Patch:  1013_linux-5.10.14.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.14
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1013_linux-5.10.14.patch b/1013_linux-5.10.14.patch
new file mode 100644
index 0000000..0533261
--- /dev/null
+++ b/1013_linux-5.10.14.patch
@@ -0,0 +1,1728 @@
+diff --git a/Makefile b/Makefile
+index a2d5e953ea40a..bb3770be9779d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
+index 65e4482e38498..02692fbe2db5c 100644
+--- a/arch/arm/mm/Kconfig
++++ b/arch/arm/mm/Kconfig
+@@ -743,6 +743,7 @@ config SWP_EMULATE
+ config CPU_BIG_ENDIAN
+ 	bool "Build big-endian kernel"
+ 	depends on ARCH_SUPPORTS_BIG_ENDIAN
++	depends on !LD_IS_LLD
+ 	help
+ 	  Say Y if you plan on running a kernel in big-endian mode.
+ 	  Note that your board must be properly built and your board
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b.dtsi
+index 9b8548e5f6e51..ee8fcae9f9f00 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b.dtsi
+@@ -135,3 +135,7 @@
+ 		};
+ 	};
+ };
++
++&mali {
++	dma-coherent;
++};
+diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
+index cd61239bae8c2..75c8e9a350cc7 100644
+--- a/arch/arm64/include/asm/memory.h
++++ b/arch/arm64/include/asm/memory.h
+@@ -238,11 +238,11 @@ static inline const void *__tag_set(const void *addr, u8 tag)
+ 
+ 
+ /*
+- * The linear kernel range starts at the bottom of the virtual address
+- * space. Testing the top bit for the start of the region is a
+- * sufficient check and avoids having to worry about the tag.
++ * Check whether an arbitrary address is within the linear map, which
++ * lives in the [PAGE_OFFSET, PAGE_END) interval at the bottom of the
++ * kernel's TTBR1 address range.
+  */
+-#define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual - 1)))
++#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
+ 
+ #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
+ #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
+@@ -323,7 +323,7 @@ static inline void *phys_to_virt(phys_addr_t x)
+ #endif /* !CONFIG_SPARSEMEM_VMEMMAP || CONFIG_DEBUG_VIRTUAL */
+ 
+ #define virt_addr_valid(addr)	({					\
+-	__typeof__(addr) __addr = addr;					\
++	__typeof__(addr) __addr = __tag_reset(addr);			\
+ 	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
+ })
+ 
+diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
+index 67a9ba9eaa96b..cde44c13dda1b 100644
+--- a/arch/arm64/mm/physaddr.c
++++ b/arch/arm64/mm/physaddr.c
+@@ -9,7 +9,7 @@
+ 
+ phys_addr_t __virt_to_phys(unsigned long x)
+ {
+-	WARN(!__is_lm_address(x),
++	WARN(!__is_lm_address(__tag_reset(x)),
+ 	     "virt_to_phys used for non-linear address: %pK (%pS)\n",
+ 	      (void *)x,
+ 	      (void *)x);
+diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
+index 5e658ba2654a7..9abe842dbd843 100644
+--- a/arch/x86/include/asm/intel-family.h
++++ b/arch/x86/include/asm/intel-family.h
+@@ -97,6 +97,7 @@
+ 
+ #define	INTEL_FAM6_LAKEFIELD		0x8A
+ #define INTEL_FAM6_ALDERLAKE		0x97
++#define INTEL_FAM6_ALDERLAKE_L		0x9A
+ 
+ /* "Small Core" Processors (Atom) */
+ 
+diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
+index 0b4920a7238e3..e16cccdd04207 100644
+--- a/arch/x86/include/asm/msr.h
++++ b/arch/x86/include/asm/msr.h
+@@ -86,7 +86,7 @@ static inline void do_trace_rdpmc(unsigned int msr, u64 val, int failed) {}
+  * think of extending them - you will be slapped with a stinking trout or a frozen
+  * shark will reach you, wherever you are! You've been warned.
+  */
+-static inline unsigned long long notrace __rdmsr(unsigned int msr)
++static __always_inline unsigned long long __rdmsr(unsigned int msr)
+ {
+ 	DECLARE_ARGS(val, low, high);
+ 
+@@ -98,7 +98,7 @@ static inline unsigned long long notrace __rdmsr(unsigned int msr)
+ 	return EAX_EDX_VAL(val, low, high);
+ }
+ 
+-static inline void notrace __wrmsr(unsigned int msr, u32 low, u32 high)
++static __always_inline void __wrmsr(unsigned int msr, u32 low, u32 high)
+ {
+ 	asm volatile("1: wrmsr\n"
+ 		     "2:\n"
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 098015b739993..84f581c91db45 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -665,6 +665,17 @@ static void __init trim_platform_memory_ranges(void)
+ 
+ static void __init trim_bios_range(void)
+ {
++	/*
++	 * A special case is the first 4Kb of memory;
++	 * This is a BIOS owned area, not kernel ram, but generally
++	 * not listed as such in the E820 table.
++	 *
++	 * This typically reserves additional memory (64KiB by default)
++	 * since some BIOSes are known to corrupt low memory.  See the
++	 * Kconfig help text for X86_RESERVE_LOW.
++	 */
++	e820__range_update(0, PAGE_SIZE, E820_TYPE_RAM, E820_TYPE_RESERVED);
++
+ 	/*
+ 	 * special case: Some BIOSes report the PC BIOS
+ 	 * area (640Kb -> 1Mb) as RAM even though it is not.
+@@ -722,15 +733,6 @@ early_param("reservelow", parse_reservelow);
+ 
+ static void __init trim_low_memory_range(void)
+ {
+-	/*
+-	 * A special case is the first 4Kb of memory;
+-	 * This is a BIOS owned area, not kernel ram, but generally
+-	 * not listed as such in the E820 table.
+-	 *
+-	 * This typically reserves additional memory (64KiB by default)
+-	 * since some BIOSes are known to corrupt low memory.  See the
+-	 * Kconfig help text for X86_RESERVE_LOW.
+-	 */
+ 	memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
+ }
+ 	
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
+index b0e9b0509568c..95d883482227e 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
+@@ -239,6 +239,7 @@ static void dcn3_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	struct dmcu *dmcu = clk_mgr_base->ctx->dc->res_pool->dmcu;
+ 	bool force_reset = false;
+ 	bool update_uclk = false;
++	bool p_state_change_support;
+ 
+ 	if (dc->work_arounds.skip_clock_update || !clk_mgr->smu_present)
+ 		return;
+@@ -279,8 +280,9 @@ static void dcn3_update_clocks(struct clk_mgr *clk_mgr_base,
+ 		clk_mgr_base->clks.socclk_khz = new_clocks->socclk_khz;
+ 
+ 	clk_mgr_base->clks.prev_p_state_change_support = clk_mgr_base->clks.p_state_change_support;
+-	if (should_update_pstate_support(safe_to_lower, new_clocks->p_state_change_support, clk_mgr_base->clks.p_state_change_support)) {
+-		clk_mgr_base->clks.p_state_change_support = new_clocks->p_state_change_support;
++	p_state_change_support = new_clocks->p_state_change_support || (display_count == 0);
++	if (should_update_pstate_support(safe_to_lower, p_state_change_support, clk_mgr_base->clks.p_state_change_support)) {
++		clk_mgr_base->clks.p_state_change_support = p_state_change_support;
+ 
+ 		/* to disable P-State switching, set UCLK min = max */
+ 		if (!clk_mgr_base->clks.p_state_change_support)
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 98464886341f6..17e6fd8201395 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -2375,6 +2375,9 @@ static bool decide_dp_link_settings(struct dc_link *link, struct dc_link_setting
+ 			initial_link_setting;
+ 	uint32_t link_bw;
+ 
++	if (req_bw > dc_link_bandwidth_kbps(link, &link->verified_link_cap))
++		return false;
++
+ 	/* search for the minimum link setting that:
+ 	 * 1. is supported according to the link training result
+ 	 * 2. could support the b/w requested by the timing
+@@ -3020,14 +3023,14 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, union hpd_irq_data *out_hpd
+ 		for (i = 0; i < MAX_PIPES; i++) {
+ 			pipe_ctx = &link->dc->current_state->res_ctx.pipe_ctx[i];
+ 			if (pipe_ctx && pipe_ctx->stream && !pipe_ctx->stream->dpms_off &&
+-					pipe_ctx->stream->link == link)
++					pipe_ctx->stream->link == link && !pipe_ctx->prev_odm_pipe)
+ 				core_link_disable_stream(pipe_ctx);
+ 		}
+ 
+ 		for (i = 0; i < MAX_PIPES; i++) {
+ 			pipe_ctx = &link->dc->current_state->res_ctx.pipe_ctx[i];
+ 			if (pipe_ctx && pipe_ctx->stream && !pipe_ctx->stream->dpms_off &&
+-					pipe_ctx->stream->link == link)
++					pipe_ctx->stream->link == link && !pipe_ctx->prev_odm_pipe)
+ 				core_link_enable_stream(link->dc->current_state, pipe_ctx);
+ 		}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index d0f3bf953d027..0d1e7b56fb395 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -646,8 +646,13 @@ static void power_on_plane(
+ 	if (REG(DC_IP_REQUEST_CNTL)) {
+ 		REG_SET(DC_IP_REQUEST_CNTL, 0,
+ 				IP_REQUEST_EN, 1);
+-		hws->funcs.dpp_pg_control(hws, plane_id, true);
+-		hws->funcs.hubp_pg_control(hws, plane_id, true);
++
++		if (hws->funcs.dpp_pg_control)
++			hws->funcs.dpp_pg_control(hws, plane_id, true);
++
++		if (hws->funcs.hubp_pg_control)
++			hws->funcs.hubp_pg_control(hws, plane_id, true);
++
+ 		REG_SET(DC_IP_REQUEST_CNTL, 0,
+ 				IP_REQUEST_EN, 0);
+ 		DC_LOG_DEBUG(
+@@ -1079,8 +1084,13 @@ void dcn10_plane_atomic_power_down(struct dc *dc,
+ 	if (REG(DC_IP_REQUEST_CNTL)) {
+ 		REG_SET(DC_IP_REQUEST_CNTL, 0,
+ 				IP_REQUEST_EN, 1);
+-		hws->funcs.dpp_pg_control(hws, dpp->inst, false);
+-		hws->funcs.hubp_pg_control(hws, hubp->inst, false);
++
++		if (hws->funcs.dpp_pg_control)
++			hws->funcs.dpp_pg_control(hws, dpp->inst, false);
++
++		if (hws->funcs.hubp_pg_control)
++			hws->funcs.hubp_pg_control(hws, hubp->inst, false);
++
+ 		dpp->funcs->dpp_reset(dpp);
+ 		REG_SET(DC_IP_REQUEST_CNTL, 0,
+ 				IP_REQUEST_EN, 0);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index 01530e686f437..f1e9b3b06b924 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1069,8 +1069,13 @@ static void dcn20_power_on_plane(
+ 	if (REG(DC_IP_REQUEST_CNTL)) {
+ 		REG_SET(DC_IP_REQUEST_CNTL, 0,
+ 				IP_REQUEST_EN, 1);
+-		dcn20_dpp_pg_control(hws, pipe_ctx->plane_res.dpp->inst, true);
+-		dcn20_hubp_pg_control(hws, pipe_ctx->plane_res.hubp->inst, true);
++
++		if (hws->funcs.dpp_pg_control)
++			hws->funcs.dpp_pg_control(hws, pipe_ctx->plane_res.dpp->inst, true);
++
++		if (hws->funcs.hubp_pg_control)
++			hws->funcs.hubp_pg_control(hws, pipe_ctx->plane_res.hubp->inst, true);
++
+ 		REG_SET(DC_IP_REQUEST_CNTL, 0,
+ 				IP_REQUEST_EN, 0);
+ 		DC_LOG_DEBUG(
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index e73785e74cba8..20441127783ba 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -295,7 +295,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn2_1_soc = {
+ 	.num_banks = 8,
+ 	.num_chans = 4,
+ 	.vmm_page_size_bytes = 4096,
+-	.dram_clock_change_latency_us = 23.84,
++	.dram_clock_change_latency_us = 11.72,
+ 	.return_bus_width_bytes = 64,
+ 	.dispclk_dppclk_vco_speed_mhz = 3600,
+ 	.xfc_bus_transport_time_us = 4,
+diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
+index 67f9f66904be2..597cf1459b0a8 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_device.h
++++ b/drivers/gpu/drm/panfrost/panfrost_device.h
+@@ -88,6 +88,7 @@ struct panfrost_device {
+ 	/* pm_domains for devices with more than one. */
+ 	struct device *pm_domain_devs[MAX_PM_DOMAINS];
+ 	struct device_link *pm_domain_links[MAX_PM_DOMAINS];
++	bool coherent;
+ 
+ 	struct panfrost_features features;
+ 	const struct panfrost_compatible *comp;
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index 0fc084110e5ba..689be734ed200 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -587,6 +587,8 @@ static int panfrost_probe(struct platform_device *pdev)
+ 	if (!pfdev->comp)
+ 		return -ENODEV;
+ 
++	pfdev->coherent = device_get_dma_attr(&pdev->dev) == DEV_DMA_COHERENT;
++
+ 	/* Allocate and initialze the DRM device. */
+ 	ddev = drm_dev_alloc(&panfrost_drm_driver, &pdev->dev);
+ 	if (IS_ERR(ddev))
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
+index 62d4d710a5711..57a31dd0ffed1 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
+@@ -218,6 +218,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
+  */
+ struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t size)
+ {
++	struct panfrost_device *pfdev = dev->dev_private;
+ 	struct panfrost_gem_object *obj;
+ 
+ 	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+@@ -227,6 +228,7 @@ struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t
+ 	INIT_LIST_HEAD(&obj->mappings.list);
+ 	mutex_init(&obj->mappings.lock);
+ 	obj->base.base.funcs = &panfrost_gem_funcs;
++	obj->base.map_cached = pfdev->coherent;
+ 
+ 	return &obj->base.base;
+ }
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index 776448c527ea9..be8d68fb0e11e 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -371,6 +371,7 @@ int panfrost_mmu_pgtable_alloc(struct panfrost_file_priv *priv)
+ 		.pgsize_bitmap	= SZ_4K | SZ_2M,
+ 		.ias		= FIELD_GET(0xff, pfdev->features.mmu_features),
+ 		.oas		= FIELD_GET(0xff00, pfdev->features.mmu_features),
++		.coherent_walk	= pfdev->coherent,
+ 		.tlb		= &mmu_tlb_ops,
+ 		.iommu_dev	= pfdev->dev,
+ 	};
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index 0727383f49402..8b113ae32dc71 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -326,6 +326,8 @@ static void i2c_writel(struct tegra_i2c_dev *i2c_dev, u32 val, unsigned int reg)
+ 	/* read back register to make sure that register writes completed */
+ 	if (reg != I2C_TX_FIFO)
+ 		readl_relaxed(i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, reg));
++	else if (i2c_dev->is_vi)
++		readl_relaxed(i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, I2C_INT_STATUS));
+ }
+ 
+ static u32 i2c_readl(struct tegra_i2c_dev *i2c_dev, unsigned int reg)
+@@ -339,6 +341,21 @@ static void i2c_writesl(struct tegra_i2c_dev *i2c_dev, void *data,
+ 	writesl(i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, reg), data, len);
+ }
+ 
++static void i2c_writesl_vi(struct tegra_i2c_dev *i2c_dev, void *data,
++			   unsigned int reg, unsigned int len)
++{
++	u32 *data32 = data;
++
++	/*
++	 * VI I2C controller has known hardware bug where writes get stuck
++	 * when immediate multiple writes happen to TX_FIFO register.
++	 * Recommended software work around is to read I2C register after
++	 * each write to TX_FIFO register to flush out the data.
++	 */
++	while (len--)
++		i2c_writel(i2c_dev, *data32++, reg);
++}
++
+ static void i2c_readsl(struct tegra_i2c_dev *i2c_dev, void *data,
+ 		       unsigned int reg, unsigned int len)
+ {
+@@ -811,7 +828,10 @@ static int tegra_i2c_fill_tx_fifo(struct tegra_i2c_dev *i2c_dev)
+ 		i2c_dev->msg_buf_remaining = buf_remaining;
+ 		i2c_dev->msg_buf = buf + words_to_transfer * BYTES_PER_FIFO_WORD;
+ 
+-		i2c_writesl(i2c_dev, buf, I2C_TX_FIFO, words_to_transfer);
++		if (i2c_dev->is_vi)
++			i2c_writesl_vi(i2c_dev, buf, I2C_TX_FIFO, words_to_transfer);
++		else
++			i2c_writesl(i2c_dev, buf, I2C_TX_FIFO, words_to_transfer);
+ 
+ 		buf += words_to_transfer * BYTES_PER_FIFO_WORD;
+ 	}
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 151243fa01ba5..7e3db4c0324d3 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -3350,6 +3350,11 @@ static int __init init_dmars(void)
+ 
+ 		if (!ecap_pass_through(iommu->ecap))
+ 			hw_pass_through = 0;
++
++		if (!intel_iommu_strict && cap_caching_mode(iommu->cap)) {
++			pr_warn("Disable batched IOTLB flush due to virtualization");
++			intel_iommu_strict = 1;
++		}
+ 		intel_svm_check(iommu);
+ 	}
+ 
+diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
+index a7a9bc08dcd11..bcfbd0e44a4a0 100644
+--- a/drivers/iommu/io-pgtable-arm.c
++++ b/drivers/iommu/io-pgtable-arm.c
+@@ -417,7 +417,13 @@ static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
+ 				<< ARM_LPAE_PTE_ATTRINDX_SHIFT);
+ 	}
+ 
+-	if (prot & IOMMU_CACHE)
++	/*
++	 * Also Mali has its own notions of shareability wherein its Inner
++	 * domain covers the cores within the GPU, and its Outer domain is
++	 * "outside the GPU" (i.e. either the Inner or System domain in CPU
++	 * terms, depending on coherency).
++	 */
++	if (prot & IOMMU_CACHE && data->iop.fmt != ARM_MALI_LPAE)
+ 		pte |= ARM_LPAE_PTE_SH_IS;
+ 	else
+ 		pte |= ARM_LPAE_PTE_SH_OS;
+@@ -1021,6 +1027,9 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+ 	cfg->arm_mali_lpae_cfg.transtab = virt_to_phys(data->pgd) |
+ 					  ARM_MALI_LPAE_TTBR_READ_INNER |
+ 					  ARM_MALI_LPAE_TTBR_ADRMODE_TABLE;
++	if (cfg->coherent_walk)
++		cfg->arm_mali_lpae_cfg.transtab |= ARM_MALI_LPAE_TTBR_SHARE_OUTER;
++
+ 	return &data->iop;
+ 
+ out_free_data:
+diff --git a/drivers/misc/habanalabs/common/device.c b/drivers/misc/habanalabs/common/device.c
+index 09c328ee65da8..71b3a4d5adc65 100644
+--- a/drivers/misc/habanalabs/common/device.c
++++ b/drivers/misc/habanalabs/common/device.c
+@@ -1425,6 +1425,15 @@ void hl_device_fini(struct hl_device *hdev)
+ 		}
+ 	}
+ 
++	/* Disable PCI access from device F/W so it won't send us additional
++	 * interrupts. We disable MSI/MSI-X at the halt_engines function and we
++	 * can't have the F/W sending us interrupts after that. We need to
++	 * disable the access here because if the device is marked disable, the
++	 * message won't be send. Also, in case of heartbeat, the device CPU is
++	 * marked as disable so this message won't be sent
++	 */
++	hl_fw_send_pci_access_msg(hdev,	CPUCP_PACKET_DISABLE_PCI_ACCESS);
++
+ 	/* Mark device as disabled */
+ 	hdev->disabled = true;
+ 
+diff --git a/drivers/misc/habanalabs/common/firmware_if.c b/drivers/misc/habanalabs/common/firmware_if.c
+index cd41c7ceb0e78..13c6eebd4fa63 100644
+--- a/drivers/misc/habanalabs/common/firmware_if.c
++++ b/drivers/misc/habanalabs/common/firmware_if.c
+@@ -385,6 +385,10 @@ int hl_fw_cpucp_pci_counters_get(struct hl_device *hdev,
+ 	}
+ 	counters->rx_throughput = result;
+ 
++	memset(&pkt, 0, sizeof(pkt));
++	pkt.ctl = cpu_to_le32(CPUCP_PACKET_PCIE_THROUGHPUT_GET <<
++			CPUCP_PKT_CTL_OPCODE_SHIFT);
++
+ 	/* Fetch PCI tx counter */
+ 	pkt.index = cpu_to_le32(cpucp_pcie_throughput_tx);
+ 	rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
+@@ -397,6 +401,7 @@ int hl_fw_cpucp_pci_counters_get(struct hl_device *hdev,
+ 	counters->tx_throughput = result;
+ 
+ 	/* Fetch PCI replay counter */
++	memset(&pkt, 0, sizeof(pkt));
+ 	pkt.ctl = cpu_to_le32(CPUCP_PACKET_PCIE_REPLAY_CNT_GET <<
+ 			CPUCP_PKT_CTL_OPCODE_SHIFT);
+ 
+diff --git a/drivers/misc/habanalabs/common/habanalabs_ioctl.c b/drivers/misc/habanalabs/common/habanalabs_ioctl.c
+index 07317ea491295..35401148969f5 100644
+--- a/drivers/misc/habanalabs/common/habanalabs_ioctl.c
++++ b/drivers/misc/habanalabs/common/habanalabs_ioctl.c
+@@ -133,6 +133,8 @@ static int hw_idle(struct hl_device *hdev, struct hl_info_args *args)
+ 
+ 	hw_idle.is_idle = hdev->asic_funcs->is_device_idle(hdev,
+ 					&hw_idle.busy_engines_mask_ext, NULL);
++	hw_idle.busy_engines_mask =
++			lower_32_bits(hw_idle.busy_engines_mask_ext);
+ 
+ 	return copy_to_user(out, &hw_idle,
+ 		min((size_t) max_size, sizeof(hw_idle))) ? -EFAULT : 0;
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
+index ed1bd41262ecd..68f661aca3ff2 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi.c
+@@ -3119,7 +3119,8 @@ static int gaudi_cb_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
+ 	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
+ 			VM_DONTCOPY | VM_NORESERVE;
+ 
+-	rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr, dma_addr, size);
++	rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr,
++				(dma_addr - HOST_PHYS_BASE), size);
+ 	if (rc)
+ 		dev_err(hdev->dev, "dma_mmap_coherent error %d", rc);
+ 
+diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c
+index 235d47b2420f5..986ed3c072088 100644
+--- a/drivers/misc/habanalabs/goya/goya.c
++++ b/drivers/misc/habanalabs/goya/goya.c
+@@ -2675,7 +2675,8 @@ static int goya_cb_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
+ 	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
+ 			VM_DONTCOPY | VM_NORESERVE;
+ 
+-	rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr, dma_addr, size);
++	rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr,
++				(dma_addr - HOST_PHYS_BASE), size);
+ 	if (rc)
+ 		dev_err(hdev->dev, "dma_mmap_coherent error %d", rc);
+ 
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 1e9a0adda2d69..445226720ff29 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -509,15 +509,19 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
+ 	/* Find our integrated MDIO bus node */
+ 	dn = of_find_compatible_node(NULL, NULL, "brcm,unimac-mdio");
+ 	priv->master_mii_bus = of_mdio_find_bus(dn);
+-	if (!priv->master_mii_bus)
++	if (!priv->master_mii_bus) {
++		of_node_put(dn);
+ 		return -EPROBE_DEFER;
++	}
+ 
+ 	get_device(&priv->master_mii_bus->dev);
+ 	priv->master_mii_dn = dn;
+ 
+ 	priv->slave_mii_bus = devm_mdiobus_alloc(ds->dev);
+-	if (!priv->slave_mii_bus)
++	if (!priv->slave_mii_bus) {
++		of_node_put(dn);
+ 		return -ENOMEM;
++	}
+ 
+ 	priv->slave_mii_bus->priv = priv;
+ 	priv->slave_mii_bus->name = "sf2 slave mii";
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 0ef854911f215..d4a64dbde3157 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -400,7 +400,7 @@ int ksz_switch_register(struct ksz_device *dev,
+ 		gpiod_set_value_cansleep(dev->reset_gpio, 1);
+ 		usleep_range(10000, 12000);
+ 		gpiod_set_value_cansleep(dev->reset_gpio, 0);
+-		usleep_range(100, 1000);
++		msleep(100);
+ 	}
+ 
+ 	mutex_init(&dev->dev_mutex);
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 04f24c66cf366..55c28fbc5f9ea 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -2165,9 +2165,9 @@ static int fec_enet_mii_init(struct platform_device *pdev)
+ 	fep->mii_bus->parent = &pdev->dev;
+ 
+ 	err = of_mdiobus_register(fep->mii_bus, node);
+-	of_node_put(node);
+ 	if (err)
+ 		goto err_out_free_mdiobus;
++	of_node_put(node);
+ 
+ 	mii_cnt++;
+ 
+@@ -2180,6 +2180,7 @@ static int fec_enet_mii_init(struct platform_device *pdev)
+ err_out_free_mdiobus:
+ 	mdiobus_free(fep->mii_bus);
+ err_out:
++	of_node_put(node);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index e2540cc00d34e..627ce1a20473a 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -4979,6 +4979,12 @@ static void ibmvnic_tasklet(struct tasklet_struct *t)
+ 	while (!done) {
+ 		/* Pull all the valid messages off the CRQ */
+ 		while ((crq = ibmvnic_next_crq(adapter)) != NULL) {
++			/* This barrier makes sure ibmvnic_next_crq()'s
++			 * crq->generic.first & IBMVNIC_CRQ_CMD_RSP is loaded
++			 * before ibmvnic_handle_crq()'s
++			 * switch(gen_crq->first) and switch(gen_crq->cmd).
++			 */
++			dma_rmb();
+ 			ibmvnic_handle_crq(crq, adapter);
+ 			crq->generic.first = 0;
+ 		}
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index d2581090f9a40..df238e46e2aeb 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -473,10 +473,11 @@ dma_addr_t __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool)
+ 	dma_addr_t iova;
+ 	u8 *buf;
+ 
+-	buf = napi_alloc_frag(pool->rbsize);
++	buf = napi_alloc_frag(pool->rbsize + OTX2_ALIGN);
+ 	if (unlikely(!buf))
+ 		return -ENOMEM;
+ 
++	buf = PTR_ALIGN(buf, OTX2_ALIGN);
+ 	iova = dma_map_single_attrs(pfvf->dev, buf, pool->rbsize,
+ 				    DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
+ 	if (unlikely(dma_mapping_error(pfvf->dev, iova))) {
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+index c6c5826aba41e..1892cea05ee7c 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+@@ -157,6 +157,7 @@ mlxsw_sp1_span_entry_cpu_deconfigure(struct mlxsw_sp_span_entry *span_entry)
+ 
+ static const
+ struct mlxsw_sp_span_entry_ops mlxsw_sp1_span_entry_ops_cpu = {
++	.is_static = true,
+ 	.can_handle = mlxsw_sp1_span_cpu_can_handle,
+ 	.parms_set = mlxsw_sp1_span_entry_cpu_parms,
+ 	.configure = mlxsw_sp1_span_entry_cpu_configure,
+@@ -214,6 +215,7 @@ mlxsw_sp_span_entry_phys_deconfigure(struct mlxsw_sp_span_entry *span_entry)
+ 
+ static const
+ struct mlxsw_sp_span_entry_ops mlxsw_sp_span_entry_ops_phys = {
++	.is_static = true,
+ 	.can_handle = mlxsw_sp_port_dev_check,
+ 	.parms_set = mlxsw_sp_span_entry_phys_parms,
+ 	.configure = mlxsw_sp_span_entry_phys_configure,
+@@ -721,6 +723,7 @@ mlxsw_sp2_span_entry_cpu_deconfigure(struct mlxsw_sp_span_entry *span_entry)
+ 
+ static const
+ struct mlxsw_sp_span_entry_ops mlxsw_sp2_span_entry_ops_cpu = {
++	.is_static = true,
+ 	.can_handle = mlxsw_sp2_span_cpu_can_handle,
+ 	.parms_set = mlxsw_sp2_span_entry_cpu_parms,
+ 	.configure = mlxsw_sp2_span_entry_cpu_configure,
+@@ -1036,6 +1039,9 @@ static void mlxsw_sp_span_respin_work(struct work_struct *work)
+ 		if (!refcount_read(&curr->ref_count))
+ 			continue;
+ 
++		if (curr->ops->is_static)
++			continue;
++
+ 		err = curr->ops->parms_set(mlxsw_sp, curr->to_dev, &sparms);
+ 		if (err)
+ 			continue;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h
+index d907718bc8c58..aa1cd409c0e2e 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h
+@@ -60,6 +60,7 @@ struct mlxsw_sp_span_entry {
+ };
+ 
+ struct mlxsw_sp_span_entry_ops {
++	bool is_static;
+ 	bool (*can_handle)(const struct net_device *to_dev);
+ 	int (*parms_set)(struct mlxsw_sp *mlxsw_sp,
+ 			 const struct net_device *to_dev,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c
+index 82b1c7a5a7a94..ba0e4d2b256a4 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c
+@@ -129,7 +129,7 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
+ 				if (ret) {
+ 					dev_err(&pdev->dev,
+ 						"Failed to set tx_clk\n");
+-					return ret;
++					goto err_remove_config_dt;
+ 				}
+ 			}
+ 		}
+@@ -143,7 +143,7 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
+ 			if (ret) {
+ 				dev_err(&pdev->dev,
+ 					"Failed to set clk_ptp_ref\n");
+-				return ret;
++				goto err_remove_config_dt;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index 9a6a519426a08..103d2448e9e0d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -375,6 +375,7 @@ static int ehl_pse0_common_data(struct pci_dev *pdev,
+ 				struct plat_stmmacenet_data *plat)
+ {
+ 	plat->bus_id = 2;
++	plat->addr64 = 32;
+ 	return ehl_common_data(pdev, plat);
+ }
+ 
+@@ -406,6 +407,7 @@ static int ehl_pse1_common_data(struct pci_dev *pdev,
+ 				struct plat_stmmacenet_data *plat)
+ {
+ 	plat->bus_id = 3;
++	plat->addr64 = 32;
+ 	return ehl_common_data(pdev, plat);
+ }
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 34cb59b2fcd67..4ec5f05dabe1d 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1489,8 +1489,21 @@ static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
+ 	}
+ 
+ 	length = (io.nblocks + 1) << ns->lba_shift;
+-	meta_len = (io.nblocks + 1) * ns->ms;
+-	metadata = nvme_to_user_ptr(io.metadata);
++
++	if ((io.control & NVME_RW_PRINFO_PRACT) &&
++	    ns->ms == sizeof(struct t10_pi_tuple)) {
++		/*
++		 * Protection information is stripped/inserted by the
++		 * controller.
++		 */
++		if (nvme_to_user_ptr(io.metadata))
++			return -EINVAL;
++		meta_len = 0;
++		metadata = NULL;
++	} else {
++		meta_len = (io.nblocks + 1) * ns->ms;
++		metadata = nvme_to_user_ptr(io.metadata);
++	}
+ 
+ 	if (ns->features & NVME_NS_EXT_LBAS) {
+ 		length += meta_len;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 77f615568194d..a3486c1c27f0c 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -23,6 +23,7 @@
+ #include <linux/t10-pi.h>
+ #include <linux/types.h>
+ #include <linux/io-64-nonatomic-lo-hi.h>
++#include <linux/io-64-nonatomic-hi-lo.h>
+ #include <linux/sed-opal.h>
+ #include <linux/pci-p2pdma.h>
+ 
+@@ -1825,6 +1826,9 @@ static void nvme_map_cmb(struct nvme_dev *dev)
+ 	if (dev->cmb_size)
+ 		return;
+ 
++	if (NVME_CAP_CMBS(dev->ctrl.cap))
++		writel(NVME_CMBMSC_CRE, dev->bar + NVME_REG_CMBMSC);
++
+ 	dev->cmbsz = readl(dev->bar + NVME_REG_CMBSZ);
+ 	if (!dev->cmbsz)
+ 		return;
+@@ -1838,6 +1842,16 @@ static void nvme_map_cmb(struct nvme_dev *dev)
+ 	if (offset > bar_size)
+ 		return;
+ 
++	/*
++	 * Tell the controller about the host side address mapping the CMB,
++	 * and enable CMB decoding for the NVMe 1.4+ scheme:
++	 */
++	if (NVME_CAP_CMBS(dev->ctrl.cap)) {
++		hi_lo_writeq(NVME_CMBMSC_CRE | NVME_CMBMSC_CMSE |
++			     (pci_bus_address(pdev, bar) + offset),
++			     dev->bar + NVME_REG_CMBMSC);
++	}
++
+ 	/*
+ 	 * Controllers may support a CMB size larger than their BAR,
+ 	 * for example, due to being behind a bridge. Reduce the CMB to
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 65e3d0ef36e1a..493ed7ba86ed2 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -97,6 +97,7 @@ struct nvme_rdma_queue {
+ 	struct completion	cm_done;
+ 	bool			pi_support;
+ 	int			cq_size;
++	struct mutex		queue_lock;
+ };
+ 
+ struct nvme_rdma_ctrl {
+@@ -579,6 +580,7 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
+ 	int ret;
+ 
+ 	queue = &ctrl->queues[idx];
++	mutex_init(&queue->queue_lock);
+ 	queue->ctrl = ctrl;
+ 	if (idx && ctrl->ctrl.max_integrity_segments)
+ 		queue->pi_support = true;
+@@ -598,7 +600,8 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
+ 	if (IS_ERR(queue->cm_id)) {
+ 		dev_info(ctrl->ctrl.device,
+ 			"failed to create CM ID: %ld\n", PTR_ERR(queue->cm_id));
+-		return PTR_ERR(queue->cm_id);
++		ret = PTR_ERR(queue->cm_id);
++		goto out_destroy_mutex;
+ 	}
+ 
+ 	if (ctrl->ctrl.opts->mask & NVMF_OPT_HOST_TRADDR)
+@@ -628,6 +631,8 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
+ out_destroy_cm_id:
+ 	rdma_destroy_id(queue->cm_id);
+ 	nvme_rdma_destroy_queue_ib(queue);
++out_destroy_mutex:
++	mutex_destroy(&queue->queue_lock);
+ 	return ret;
+ }
+ 
+@@ -639,9 +644,10 @@ static void __nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
+ 
+ static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
+ {
+-	if (!test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags))
+-		return;
+-	__nvme_rdma_stop_queue(queue);
++	mutex_lock(&queue->queue_lock);
++	if (test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags))
++		__nvme_rdma_stop_queue(queue);
++	mutex_unlock(&queue->queue_lock);
+ }
+ 
+ static void nvme_rdma_free_queue(struct nvme_rdma_queue *queue)
+@@ -651,6 +657,7 @@ static void nvme_rdma_free_queue(struct nvme_rdma_queue *queue)
+ 
+ 	nvme_rdma_destroy_queue_ib(queue);
+ 	rdma_destroy_id(queue->cm_id);
++	mutex_destroy(&queue->queue_lock);
+ }
+ 
+ static void nvme_rdma_free_io_queues(struct nvme_rdma_ctrl *ctrl)
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 81db2331f6d78..6487b7897d1fb 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -76,6 +76,7 @@ struct nvme_tcp_queue {
+ 	struct work_struct	io_work;
+ 	int			io_cpu;
+ 
++	struct mutex		queue_lock;
+ 	struct mutex		send_mutex;
+ 	struct llist_head	req_list;
+ 	struct list_head	send_list;
+@@ -1219,6 +1220,7 @@ static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid)
+ 
+ 	sock_release(queue->sock);
+ 	kfree(queue->pdu);
++	mutex_destroy(&queue->queue_lock);
+ }
+ 
+ static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue)
+@@ -1380,6 +1382,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl,
+ 	struct nvme_tcp_queue *queue = &ctrl->queues[qid];
+ 	int ret, rcv_pdu_size;
+ 
++	mutex_init(&queue->queue_lock);
+ 	queue->ctrl = ctrl;
+ 	init_llist_head(&queue->req_list);
+ 	INIT_LIST_HEAD(&queue->send_list);
+@@ -1398,7 +1401,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl,
+ 	if (ret) {
+ 		dev_err(nctrl->device,
+ 			"failed to create socket: %d\n", ret);
+-		return ret;
++		goto err_destroy_mutex;
+ 	}
+ 
+ 	/* Single syn retry */
+@@ -1507,6 +1510,8 @@ err_crypto:
+ err_sock:
+ 	sock_release(queue->sock);
+ 	queue->sock = NULL;
++err_destroy_mutex:
++	mutex_destroy(&queue->queue_lock);
+ 	return ret;
+ }
+ 
+@@ -1534,9 +1539,10 @@ static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
+ 	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
+ 	struct nvme_tcp_queue *queue = &ctrl->queues[qid];
+ 
+-	if (!test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags))
+-		return;
+-	__nvme_tcp_stop_queue(queue);
++	mutex_lock(&queue->queue_lock);
++	if (test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags))
++		__nvme_tcp_stop_queue(queue);
++	mutex_unlock(&queue->queue_lock);
+ }
+ 
+ static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx)
+diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
+index dca34489a1dc9..92ca23bc8dbfc 100644
+--- a/drivers/nvme/target/admin-cmd.c
++++ b/drivers/nvme/target/admin-cmd.c
+@@ -487,8 +487,10 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
+ 
+ 	/* return an all zeroed buffer if we can't find an active namespace */
+ 	ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid);
+-	if (!ns)
++	if (!ns) {
++		status = NVME_SC_INVALID_NS;
+ 		goto done;
++	}
+ 
+ 	nvmet_ns_revalidate(ns);
+ 
+@@ -541,7 +543,9 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
+ 		id->nsattr |= (1 << 0);
+ 	nvmet_put_namespace(ns);
+ done:
+-	status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id));
++	if (!status)
++		status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id));
++
+ 	kfree(id);
+ out:
+ 	nvmet_req_complete(req, status);
+diff --git a/drivers/phy/motorola/phy-cpcap-usb.c b/drivers/phy/motorola/phy-cpcap-usb.c
+index 442522ba487f0..4728e2bff6620 100644
+--- a/drivers/phy/motorola/phy-cpcap-usb.c
++++ b/drivers/phy/motorola/phy-cpcap-usb.c
+@@ -662,35 +662,42 @@ static int cpcap_usb_phy_probe(struct platform_device *pdev)
+ 	generic_phy = devm_phy_create(ddata->dev, NULL, &ops);
+ 	if (IS_ERR(generic_phy)) {
+ 		error = PTR_ERR(generic_phy);
+-		return PTR_ERR(generic_phy);
++		goto out_reg_disable;
+ 	}
+ 
+ 	phy_set_drvdata(generic_phy, ddata);
+ 
+ 	phy_provider = devm_of_phy_provider_register(ddata->dev,
+ 						     of_phy_simple_xlate);
+-	if (IS_ERR(phy_provider))
+-		return PTR_ERR(phy_provider);
++	if (IS_ERR(phy_provider)) {
++		error = PTR_ERR(phy_provider);
++		goto out_reg_disable;
++	}
+ 
+ 	error = cpcap_usb_init_optional_pins(ddata);
+ 	if (error)
+-		return error;
++		goto out_reg_disable;
+ 
+ 	cpcap_usb_init_optional_gpios(ddata);
+ 
+ 	error = cpcap_usb_init_iio(ddata);
+ 	if (error)
+-		return error;
++		goto out_reg_disable;
+ 
+ 	error = cpcap_usb_init_interrupts(pdev, ddata);
+ 	if (error)
+-		return error;
++		goto out_reg_disable;
+ 
+ 	usb_add_phy_dev(&ddata->phy);
+ 	atomic_set(&ddata->active, 1);
+ 	schedule_delayed_work(&ddata->detect_work, msecs_to_jiffies(1));
+ 
+ 	return 0;
++
++out_reg_disable:
++	regulator_disable(ddata->vusb);
++
++	return error;
+ }
+ 
+ static int cpcap_usb_phy_remove(struct platform_device *pdev)
+diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
+index 65fb3a3031470..30a9062d2b4b8 100644
+--- a/drivers/platform/x86/intel-vbtn.c
++++ b/drivers/platform/x86/intel-vbtn.c
+@@ -216,6 +216,12 @@ static const struct dmi_system_id dmi_switches_allow_list[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Switch SA5-271"),
+ 		},
+ 	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7352"),
++		},
++	},
+ 	{} /* Array terminator */
+ };
+ 
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index c404706379d92..69402758b99c3 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -8782,6 +8782,7 @@ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
+ 	TPACPI_Q_LNV3('N', '1', 'T', TPACPI_FAN_2CTL),	/* P71 */
+ 	TPACPI_Q_LNV3('N', '1', 'U', TPACPI_FAN_2CTL),	/* P51 */
+ 	TPACPI_Q_LNV3('N', '2', 'C', TPACPI_FAN_2CTL),	/* P52 / P72 */
++	TPACPI_Q_LNV3('N', '2', 'N', TPACPI_FAN_2CTL),	/* P53 / P73 */
+ 	TPACPI_Q_LNV3('N', '2', 'E', TPACPI_FAN_2CTL),	/* P1 / X1 Extreme (1st gen) */
+ 	TPACPI_Q_LNV3('N', '2', 'O', TPACPI_FAN_2CTL),	/* P1 / X1 Extreme (2nd gen) */
+ 	TPACPI_Q_LNV3('N', '2', 'V', TPACPI_FAN_2CTL),	/* P1 / X1 Extreme (3nd gen) */
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index 5783139d0a119..c4de932302d6b 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -263,6 +263,16 @@ static const struct ts_dmi_data digma_citi_e200_data = {
+ 	.properties	= digma_citi_e200_props,
+ };
+ 
++static const struct property_entry estar_beauty_hd_props[] = {
++	PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"),
++	{ }
++};
++
++static const struct ts_dmi_data estar_beauty_hd_data = {
++	.acpi_name	= "GDIX1001:00",
++	.properties	= estar_beauty_hd_props,
++};
++
+ static const struct property_entry gp_electronic_t701_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-size-x", 960),
+ 	PROPERTY_ENTRY_U32("touchscreen-size-y", 640),
+@@ -942,6 +952,14 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
+ 		},
+ 	},
++	{
++		/* Estar Beauty HD (MID 7316R) */
++		.driver_data = (void *)&estar_beauty_hd_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Estar"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "eSTAR BEAUTY HD Intel Quad core"),
++		},
++	},
+ 	{
+ 		/* GP-electronic T701 */
+ 		.driver_data = (void *)&gp_electronic_t701_data,
+diff --git a/drivers/scsi/fnic/vnic_dev.c b/drivers/scsi/fnic/vnic_dev.c
+index a2beee6e09f06..5988c300cc82e 100644
+--- a/drivers/scsi/fnic/vnic_dev.c
++++ b/drivers/scsi/fnic/vnic_dev.c
+@@ -444,7 +444,8 @@ static int vnic_dev_init_devcmd2(struct vnic_dev *vdev)
+ 	fetch_index = ioread32(&vdev->devcmd2->wq.ctrl->fetch_index);
+ 	if (fetch_index == 0xFFFFFFFF) { /* check for hardware gone  */
+ 		pr_err("error in devcmd2 init");
+-		return -ENODEV;
++		err = -ENODEV;
++		goto err_free_wq;
+ 	}
+ 
+ 	/*
+@@ -460,7 +461,7 @@ static int vnic_dev_init_devcmd2(struct vnic_dev *vdev)
+ 	err = vnic_dev_alloc_desc_ring(vdev, &vdev->devcmd2->results_ring,
+ 			DEVCMD2_RING_SIZE, DEVCMD2_DESC_SIZE);
+ 	if (err)
+-		goto err_free_wq;
++		goto err_disable_wq;
+ 
+ 	vdev->devcmd2->result =
+ 		(struct devcmd2_result *) vdev->devcmd2->results_ring.descs;
+@@ -481,8 +482,9 @@ static int vnic_dev_init_devcmd2(struct vnic_dev *vdev)
+ 
+ err_free_desc_ring:
+ 	vnic_dev_free_desc_ring(vdev, &vdev->devcmd2->results_ring);
+-err_free_wq:
++err_disable_wq:
+ 	vnic_wq_disable(&vdev->devcmd2->wq);
++err_free_wq:
+ 	vnic_wq_free(&vdev->devcmd2->wq);
+ err_free_devcmd2:
+ 	kfree(vdev->devcmd2);
+diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
+index 070cf516b98fe..57c9a71fa33a7 100644
+--- a/drivers/scsi/ibmvscsi/ibmvfc.c
++++ b/drivers/scsi/ibmvscsi/ibmvfc.c
+@@ -2957,8 +2957,10 @@ static int ibmvfc_slave_configure(struct scsi_device *sdev)
+ 	unsigned long flags = 0;
+ 
+ 	spin_lock_irqsave(shost->host_lock, flags);
+-	if (sdev->type == TYPE_DISK)
++	if (sdev->type == TYPE_DISK) {
+ 		sdev->allow_restart = 1;
++		blk_queue_rq_timeout(sdev->request_queue, 120 * HZ);
++	}
+ 	spin_unlock_irqrestore(shost->host_lock, flags);
+ 	return 0;
+ }
+diff --git a/drivers/scsi/libfc/fc_exch.c b/drivers/scsi/libfc/fc_exch.c
+index 96a2952cf626b..a50f1eef0e0cd 100644
+--- a/drivers/scsi/libfc/fc_exch.c
++++ b/drivers/scsi/libfc/fc_exch.c
+@@ -1624,8 +1624,13 @@ static void fc_exch_recv_seq_resp(struct fc_exch_mgr *mp, struct fc_frame *fp)
+ 		rc = fc_exch_done_locked(ep);
+ 		WARN_ON(fc_seq_exch(sp) != ep);
+ 		spin_unlock_bh(&ep->ex_lock);
+-		if (!rc)
++		if (!rc) {
+ 			fc_exch_delete(ep);
++		} else {
++			FC_EXCH_DBG(ep, "ep is completed already,"
++					"hence skip calling the resp\n");
++			goto skip_resp;
++		}
+ 	}
+ 
+ 	/*
+@@ -1644,6 +1649,7 @@ static void fc_exch_recv_seq_resp(struct fc_exch_mgr *mp, struct fc_frame *fp)
+ 	if (!fc_invoke_resp(ep, sp, fp))
+ 		fc_frame_free(fp);
+ 
++skip_resp:
+ 	fc_exch_release(ep);
+ 	return;
+ rel:
+@@ -1900,10 +1906,16 @@ static void fc_exch_reset(struct fc_exch *ep)
+ 
+ 	fc_exch_hold(ep);
+ 
+-	if (!rc)
++	if (!rc) {
+ 		fc_exch_delete(ep);
++	} else {
++		FC_EXCH_DBG(ep, "ep is completed already,"
++				"hence skip calling the resp\n");
++		goto skip_resp;
++	}
+ 
+ 	fc_invoke_resp(ep, sp, ERR_PTR(-FC_EX_CLOSED));
++skip_resp:
+ 	fc_seq_set_resp(sp, NULL, ep->arg);
+ 	fc_exch_release(ep);
+ }
+diff --git a/drivers/scsi/scsi_transport_srp.c b/drivers/scsi/scsi_transport_srp.c
+index cba1cf6a1c12d..1e939a2a387f3 100644
+--- a/drivers/scsi/scsi_transport_srp.c
++++ b/drivers/scsi/scsi_transport_srp.c
+@@ -541,7 +541,14 @@ int srp_reconnect_rport(struct srp_rport *rport)
+ 	res = mutex_lock_interruptible(&rport->mutex);
+ 	if (res)
+ 		goto out;
+-	scsi_target_block(&shost->shost_gendev);
++	if (rport->state != SRP_RPORT_FAIL_FAST)
++		/*
++		 * sdev state must be SDEV_TRANSPORT_OFFLINE, transition
++		 * to SDEV_BLOCK is illegal. Calling scsi_target_unblock()
++		 * later is ok though, scsi_internal_device_unblock_nowait()
++		 * treats SDEV_TRANSPORT_OFFLINE like SDEV_BLOCK.
++		 */
++		scsi_target_block(&shost->shost_gendev);
+ 	res = rport->state != SRP_RPORT_LOST ? i->f->reconnect(rport) : -ENODEV;
+ 	pr_debug("%s (state %d): transport.reconnect() returned %d\n",
+ 		 dev_name(&shost->shost_gendev), rport->state, res);
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 5bef3a68395d8..d0df217f4712a 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -705,6 +705,7 @@ static int udf_check_vsd(struct super_block *sb)
+ 	struct buffer_head *bh = NULL;
+ 	int nsr = 0;
+ 	struct udf_sb_info *sbi;
++	loff_t session_offset;
+ 
+ 	sbi = UDF_SB(sb);
+ 	if (sb->s_blocksize < sizeof(struct volStructDesc))
+@@ -712,7 +713,8 @@ static int udf_check_vsd(struct super_block *sb)
+ 	else
+ 		sectorsize = sb->s_blocksize;
+ 
+-	sector += (((loff_t)sbi->s_session) << sb->s_blocksize_bits);
++	session_offset = (loff_t)sbi->s_session << sb->s_blocksize_bits;
++	sector += session_offset;
+ 
+ 	udf_debug("Starting at sector %u (%lu byte sectors)\n",
+ 		  (unsigned int)(sector >> sb->s_blocksize_bits),
+@@ -757,8 +759,7 @@ static int udf_check_vsd(struct super_block *sb)
+ 
+ 	if (nsr > 0)
+ 		return 1;
+-	else if (!bh && sector - (sbi->s_session << sb->s_blocksize_bits) ==
+-			VSD_FIRST_SECTOR_OFFSET)
++	else if (!bh && sector - session_offset == VSD_FIRST_SECTOR_OFFSET)
+ 		return -1;
+ 	else
+ 		return 0;
+diff --git a/include/linux/kthread.h b/include/linux/kthread.h
+index 65b81e0c494d2..2484ed97e72f5 100644
+--- a/include/linux/kthread.h
++++ b/include/linux/kthread.h
+@@ -33,6 +33,9 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
+ 					  unsigned int cpu,
+ 					  const char *namefmt);
+ 
++void kthread_set_per_cpu(struct task_struct *k, int cpu);
++bool kthread_is_per_cpu(struct task_struct *k);
++
+ /**
+  * kthread_run - create and wake a thread.
+  * @threadfn: the function to run until signal_pending(current).
+diff --git a/include/linux/nvme.h b/include/linux/nvme.h
+index d925359976873..bfed36e342ccb 100644
+--- a/include/linux/nvme.h
++++ b/include/linux/nvme.h
+@@ -116,6 +116,9 @@ enum {
+ 	NVME_REG_BPMBL	= 0x0048,	/* Boot Partition Memory Buffer
+ 					 * Location
+ 					 */
++	NVME_REG_CMBMSC = 0x0050,	/* Controller Memory Buffer Memory
++					 * Space Control
++					 */
+ 	NVME_REG_PMRCAP	= 0x0e00,	/* Persistent Memory Capabilities */
+ 	NVME_REG_PMRCTL	= 0x0e04,	/* Persistent Memory Region Control */
+ 	NVME_REG_PMRSTS	= 0x0e08,	/* Persistent Memory Region Status */
+@@ -135,6 +138,7 @@ enum {
+ #define NVME_CAP_CSS(cap)	(((cap) >> 37) & 0xff)
+ #define NVME_CAP_MPSMIN(cap)	(((cap) >> 48) & 0xf)
+ #define NVME_CAP_MPSMAX(cap)	(((cap) >> 52) & 0xf)
++#define NVME_CAP_CMBS(cap)	(((cap) >> 57) & 0x1)
+ 
+ #define NVME_CMB_BIR(cmbloc)	((cmbloc) & 0x7)
+ #define NVME_CMB_OFST(cmbloc)	(((cmbloc) >> 12) & 0xfffff)
+@@ -192,6 +196,8 @@ enum {
+ 	NVME_CSTS_SHST_OCCUR	= 1 << 2,
+ 	NVME_CSTS_SHST_CMPLT	= 2 << 2,
+ 	NVME_CSTS_SHST_MASK	= 3 << 2,
++	NVME_CMBMSC_CRE		= 1 << 0,
++	NVME_CMBMSC_CMSE	= 1 << 1,
+ };
+ 
+ struct nvme_id_power_state {
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 933a625621b8d..5edf7e19ab262 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -493,11 +493,36 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
+ 		return p;
+ 	kthread_bind(p, cpu);
+ 	/* CPU hotplug need to bind once again when unparking the thread. */
+-	set_bit(KTHREAD_IS_PER_CPU, &to_kthread(p)->flags);
+ 	to_kthread(p)->cpu = cpu;
+ 	return p;
+ }
+ 
++void kthread_set_per_cpu(struct task_struct *k, int cpu)
++{
++	struct kthread *kthread = to_kthread(k);
++	if (!kthread)
++		return;
++
++	WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
++
++	if (cpu < 0) {
++		clear_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
++		return;
++	}
++
++	kthread->cpu = cpu;
++	set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
++}
++
++bool kthread_is_per_cpu(struct task_struct *k)
++{
++	struct kthread *kthread = to_kthread(k);
++	if (!kthread)
++		return false;
++
++	return test_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
++}
++
+ /**
+  * kthread_unpark - unpark a thread created by kthread_create().
+  * @k:		thread created by kthread_create().
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 02bc5b8f1eb27..bdaf4829098c0 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -5271,12 +5271,15 @@ static void __lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie cookie
+ /*
+  * Check whether we follow the irq-flags state precisely:
+  */
+-static void check_flags(unsigned long flags)
++static noinstr void check_flags(unsigned long flags)
+ {
+ #if defined(CONFIG_PROVE_LOCKING) && defined(CONFIG_DEBUG_LOCKDEP)
+ 	if (!debug_locks)
+ 		return;
+ 
++	/* Get the warning out..  */
++	instrumentation_begin();
++
+ 	if (irqs_disabled_flags(flags)) {
+ 		if (DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled())) {
+ 			printk("possible reason: unannotated irqs-off.\n");
+@@ -5304,6 +5307,8 @@ static void check_flags(unsigned long flags)
+ 
+ 	if (!debug_locks)
+ 		print_irqtrace_events(current);
++
++	instrumentation_end();
+ #endif
+ }
+ 
+diff --git a/kernel/smpboot.c b/kernel/smpboot.c
+index 2efe1e206167c..f25208e8df836 100644
+--- a/kernel/smpboot.c
++++ b/kernel/smpboot.c
+@@ -188,6 +188,7 @@ __smpboot_create_thread(struct smp_hotplug_thread *ht, unsigned int cpu)
+ 		kfree(td);
+ 		return PTR_ERR(tsk);
+ 	}
++	kthread_set_per_cpu(tsk, cpu);
+ 	/*
+ 	 * Park the thread so that it could start right on the CPU
+ 	 * when it is available.
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 0695c7895c892..1d99c52cc99a6 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1845,12 +1845,6 @@ static void worker_attach_to_pool(struct worker *worker,
+ {
+ 	mutex_lock(&wq_pool_attach_mutex);
+ 
+-	/*
+-	 * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any
+-	 * online CPUs.  It'll be re-applied when any of the CPUs come up.
+-	 */
+-	set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
+-
+ 	/*
+ 	 * The wq_pool_attach_mutex ensures %POOL_DISASSOCIATED remains
+ 	 * stable across this function.  See the comments above the flag
+@@ -1859,6 +1853,9 @@ static void worker_attach_to_pool(struct worker *worker,
+ 	if (pool->flags & POOL_DISASSOCIATED)
+ 		worker->flags |= WORKER_UNBOUND;
+ 
++	if (worker->rescue_wq)
++		set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
++
+ 	list_add_tail(&worker->node, &pool->workers);
+ 	worker->pool = pool;
+ 
+diff --git a/net/mac80211/debugfs.c b/net/mac80211/debugfs.c
+index 90470392fdaa7..de5cd3818690c 100644
+--- a/net/mac80211/debugfs.c
++++ b/net/mac80211/debugfs.c
+@@ -120,18 +120,17 @@ static ssize_t aqm_write(struct file *file,
+ {
+ 	struct ieee80211_local *local = file->private_data;
+ 	char buf[100];
+-	size_t len;
+ 
+-	if (count > sizeof(buf))
++	if (count >= sizeof(buf))
+ 		return -EINVAL;
+ 
+ 	if (copy_from_user(buf, user_buf, count))
+ 		return -EFAULT;
+ 
+-	buf[sizeof(buf) - 1] = '\0';
+-	len = strlen(buf);
+-	if (len > 0 && buf[len-1] == '\n')
+-		buf[len-1] = 0;
++	if (count && buf[count - 1] == '\n')
++		buf[count - 1] = '\0';
++	else
++		buf[count] = '\0';
+ 
+ 	if (sscanf(buf, "fq_limit %u", &local->fq.limit) == 1)
+ 		return count;
+@@ -177,18 +176,17 @@ static ssize_t airtime_flags_write(struct file *file,
+ {
+ 	struct ieee80211_local *local = file->private_data;
+ 	char buf[16];
+-	size_t len;
+ 
+-	if (count > sizeof(buf))
++	if (count >= sizeof(buf))
+ 		return -EINVAL;
+ 
+ 	if (copy_from_user(buf, user_buf, count))
+ 		return -EFAULT;
+ 
+-	buf[sizeof(buf) - 1] = 0;
+-	len = strlen(buf);
+-	if (len > 0 && buf[len - 1] == '\n')
+-		buf[len - 1] = 0;
++	if (count && buf[count - 1] == '\n')
++		buf[count - 1] = '\0';
++	else
++		buf[count] = '\0';
+ 
+ 	if (kstrtou16(buf, 0, &local->airtime_flags))
+ 		return -EINVAL;
+@@ -237,20 +235,19 @@ static ssize_t aql_txq_limit_write(struct file *file,
+ {
+ 	struct ieee80211_local *local = file->private_data;
+ 	char buf[100];
+-	size_t len;
+ 	u32 ac, q_limit_low, q_limit_high, q_limit_low_old, q_limit_high_old;
+ 	struct sta_info *sta;
+ 
+-	if (count > sizeof(buf))
++	if (count >= sizeof(buf))
+ 		return -EINVAL;
+ 
+ 	if (copy_from_user(buf, user_buf, count))
+ 		return -EFAULT;
+ 
+-	buf[sizeof(buf) - 1] = 0;
+-	len = strlen(buf);
+-	if (len > 0 && buf[len - 1] == '\n')
+-		buf[len - 1] = 0;
++	if (count && buf[count - 1] == '\n')
++		buf[count - 1] = '\0';
++	else
++		buf[count] = '\0';
+ 
+ 	if (sscanf(buf, "%u %u %u", &ac, &q_limit_low, &q_limit_high) != 3)
+ 		return -EINVAL;
+@@ -306,18 +303,17 @@ static ssize_t force_tx_status_write(struct file *file,
+ {
+ 	struct ieee80211_local *local = file->private_data;
+ 	char buf[3];
+-	size_t len;
+ 
+-	if (count > sizeof(buf))
++	if (count >= sizeof(buf))
+ 		return -EINVAL;
+ 
+ 	if (copy_from_user(buf, user_buf, count))
+ 		return -EFAULT;
+ 
+-	buf[sizeof(buf) - 1] = '\0';
+-	len = strlen(buf);
+-	if (len > 0 && buf[len - 1] == '\n')
+-		buf[len - 1] = 0;
++	if (count && buf[count - 1] == '\n')
++		buf[count - 1] = '\0';
++	else
++		buf[count] = '\0';
+ 
+ 	if (buf[0] == '0' && buf[1] == '\0')
+ 		local->force_tx_status = 0;
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 2a5a11f92b03e..98517423b0b76 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -4191,6 +4191,8 @@ void ieee80211_check_fast_rx(struct sta_info *sta)
+ 
+ 	rcu_read_lock();
+ 	key = rcu_dereference(sta->ptk[sta->ptk_idx]);
++	if (!key)
++		key = rcu_dereference(sdata->default_unicast_key);
+ 	if (key) {
+ 		switch (key->conf.cipher) {
+ 		case WLAN_CIPHER_SUITE_TKIP:
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index ca1e9de388910..88868bf300513 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -4278,7 +4278,6 @@ netdev_tx_t ieee80211_subif_start_xmit_8023(struct sk_buff *skb,
+ 	struct ethhdr *ehdr = (struct ethhdr *)skb->data;
+ 	struct ieee80211_key *key;
+ 	struct sta_info *sta;
+-	bool offload = true;
+ 
+ 	if (unlikely(skb->len < ETH_HLEN)) {
+ 		kfree_skb(skb);
+@@ -4294,18 +4293,22 @@ netdev_tx_t ieee80211_subif_start_xmit_8023(struct sk_buff *skb,
+ 
+ 	if (unlikely(IS_ERR_OR_NULL(sta) || !sta->uploaded ||
+ 	    !test_sta_flag(sta, WLAN_STA_AUTHORIZED) ||
+-		sdata->control_port_protocol == ehdr->h_proto))
+-		offload = false;
+-	else if ((key = rcu_dereference(sta->ptk[sta->ptk_idx])) &&
+-		 (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE) ||
+-		  key->conf.cipher == WLAN_CIPHER_SUITE_TKIP))
+-		offload = false;
+-
+-	if (offload)
+-		ieee80211_8023_xmit(sdata, dev, sta, key, skb);
+-	else
+-		ieee80211_subif_start_xmit(skb, dev);
++	    sdata->control_port_protocol == ehdr->h_proto))
++		goto skip_offload;
++
++	key = rcu_dereference(sta->ptk[sta->ptk_idx]);
++	if (!key)
++		key = rcu_dereference(sdata->default_unicast_key);
++
++	if (key && (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE) ||
++		    key->conf.cipher == WLAN_CIPHER_SUITE_TKIP))
++		goto skip_offload;
++
++	ieee80211_8023_xmit(sdata, dev, sta, key, skb);
++	goto out;
+ 
++skip_offload:
++	ieee80211_subif_start_xmit(skb, dev);
+ out:
+ 	rcu_read_unlock();
+ 
+diff --git a/net/switchdev/switchdev.c b/net/switchdev/switchdev.c
+index 23d8685453627..2c1ffc9ba2eb2 100644
+--- a/net/switchdev/switchdev.c
++++ b/net/switchdev/switchdev.c
+@@ -460,10 +460,11 @@ static int __switchdev_handle_port_obj_add(struct net_device *dev,
+ 	extack = switchdev_notifier_info_to_extack(&port_obj_info->info);
+ 
+ 	if (check_cb(dev)) {
+-		/* This flag is only checked if the return value is success. */
+-		port_obj_info->handled = true;
+-		return add_cb(dev, port_obj_info->obj, port_obj_info->trans,
+-			      extack);
++		err = add_cb(dev, port_obj_info->obj, port_obj_info->trans,
++			     extack);
++		if (err != -EOPNOTSUPP)
++			port_obj_info->handled = true;
++		return err;
+ 	}
+ 
+ 	/* Switch ports might be stacked under e.g. a LAG. Ignore the
+@@ -515,9 +516,10 @@ static int __switchdev_handle_port_obj_del(struct net_device *dev,
+ 	int err = -EOPNOTSUPP;
+ 
+ 	if (check_cb(dev)) {
+-		/* This flag is only checked if the return value is success. */
+-		port_obj_info->handled = true;
+-		return del_cb(dev, port_obj_info->obj);
++		err = del_cb(dev, port_obj_info->obj);
++		if (err != -EOPNOTSUPP)
++			port_obj_info->handled = true;
++		return err;
+ 	}
+ 
+ 	/* Switch ports might be stacked under e.g. a LAG. Ignore the
+@@ -568,9 +570,10 @@ static int __switchdev_handle_port_attr_set(struct net_device *dev,
+ 	int err = -EOPNOTSUPP;
+ 
+ 	if (check_cb(dev)) {
+-		port_attr_info->handled = true;
+-		return set_cb(dev, port_attr_info->attr,
+-			      port_attr_info->trans);
++		err = set_cb(dev, port_attr_info->attr, port_attr_info->trans);
++		if (err != -EOPNOTSUPP)
++			port_attr_info->handled = true;
++		return err;
+ 	}
+ 
+ 	/* Switch ports might be stacked under e.g. a LAG. Ignore the
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 770ad25f1907c..d393401db1ec5 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2484,6 +2484,9 @@ static const struct pci_device_id azx_ids[] = {
+ 	/* CometLake-S */
+ 	{ PCI_DEVICE(0x8086, 0xa3f0),
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++	/* CometLake-R */
++	{ PCI_DEVICE(0x8086, 0xf0c8),
++	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+ 	/* Icelake */
+ 	{ PCI_DEVICE(0x8086, 0x34c8),
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+@@ -2507,6 +2510,9 @@ static const struct pci_device_id azx_ids[] = {
+ 	/* Alderlake-S */
+ 	{ PCI_DEVICE(0x8086, 0x7ad0),
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++	/* Alderlake-P */
++	{ PCI_DEVICE(0x8086, 0x51c8),
++	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+ 	/* Elkhart Lake */
+ 	{ PCI_DEVICE(0x8086, 0x4b55),
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index d12b4799c3cb7..dc1ab4fc93a5b 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -4349,6 +4349,7 @@ HDA_CODEC_ENTRY(0x8086280f, "Icelake HDMI",	patch_i915_icl_hdmi),
+ HDA_CODEC_ENTRY(0x80862812, "Tigerlake HDMI",	patch_i915_tgl_hdmi),
+ HDA_CODEC_ENTRY(0x80862814, "DG1 HDMI",	patch_i915_tgl_hdmi),
+ HDA_CODEC_ENTRY(0x80862815, "Alderlake HDMI",	patch_i915_tgl_hdmi),
++HDA_CODEC_ENTRY(0x8086281c, "Alderlake-P HDMI", patch_i915_tgl_hdmi),
+ HDA_CODEC_ENTRY(0x80862816, "Rocketlake HDMI",	patch_i915_tgl_hdmi),
+ HDA_CODEC_ENTRY(0x8086281a, "Jasperlake HDMI",	patch_i915_icl_hdmi),
+ HDA_CODEC_ENTRY(0x8086281b, "Elkhartlake HDMI",	patch_i915_icl_hdmi),
+diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c
+index 8b0ddc4b8227b..8d65004c917a1 100644
+--- a/sound/soc/sof/intel/hda-codec.c
++++ b/sound/soc/sof/intel/hda-codec.c
+@@ -93,8 +93,7 @@ void hda_codec_jack_check(struct snd_sof_dev *sdev)
+ 		 * has been recorded in STATESTS
+ 		 */
+ 		if (codec->jacktbl.used)
+-			schedule_delayed_work(&codec->jackpoll_work,
+-					      codec->jackpoll_interval);
++			pm_request_resume(&codec->core.dev);
+ }
+ #else
+ void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev) {}
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index c6ab44543c92a..956383d5fa62e 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -2921,14 +2921,10 @@ int check(struct objtool_file *file)
+ 	warnings += ret;
+ 
+ out:
+-	if (ret < 0) {
+-		/*
+-		 *  Fatal error.  The binary is corrupt or otherwise broken in
+-		 *  some way, or objtool itself is broken.  Fail the kernel
+-		 *  build.
+-		 */
+-		return ret;
+-	}
+-
++	/*
++	 *  For now, don't fail the kernel build on fatal warnings.  These
++	 *  errors are still fairly common due to the growing matrix of
++	 *  supported toolchains and their recent pace of change.
++	 */
+ 	return 0;
+ }
+diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
+index 9452cfb01ef19..f4f3e8d995930 100644
+--- a/tools/objtool/elf.c
++++ b/tools/objtool/elf.c
+@@ -425,6 +425,13 @@ static int read_symbols(struct elf *elf)
+ 		list_add(&sym->list, entry);
+ 		elf_hash_add(elf->symbol_hash, &sym->hash, sym->idx);
+ 		elf_hash_add(elf->symbol_name_hash, &sym->name_hash, str_hash(sym->name));
++
++		/*
++		 * Don't store empty STT_NOTYPE symbols in the rbtree.  They
++		 * can exist within a function, confusing the sorting.
++		 */
++		if (!sym->len)
++			rb_erase(&sym->node, &sym->sec->symbol_tree);
+ 	}
+ 
+ 	if (stats)
+diff --git a/tools/power/x86/intel-speed-select/isst-config.c b/tools/power/x86/intel-speed-select/isst-config.c
+index cd089a5058594..ead9e51f75ada 100644
+--- a/tools/power/x86/intel-speed-select/isst-config.c
++++ b/tools/power/x86/intel-speed-select/isst-config.c
+@@ -1245,6 +1245,8 @@ static void dump_isst_config(int arg)
+ 	isst_ctdp_display_information_end(outf);
+ }
+ 
++static void adjust_scaling_max_from_base_freq(int cpu);
++
+ static void set_tdp_level_for_cpu(int cpu, void *arg1, void *arg2, void *arg3,
+ 				  void *arg4)
+ {
+@@ -1263,6 +1265,9 @@ static void set_tdp_level_for_cpu(int cpu, void *arg1, void *arg2, void *arg3,
+ 			int pkg_id = get_physical_package_id(cpu);
+ 			int die_id = get_physical_die_id(cpu);
+ 
++			/* Wait for updated base frequencies */
++			usleep(2000);
++
+ 			fprintf(stderr, "Option is set to online/offline\n");
+ 			ctdp_level.core_cpumask_size =
+ 				alloc_cpu_set(&ctdp_level.core_cpumask);
+@@ -1279,6 +1284,7 @@ static void set_tdp_level_for_cpu(int cpu, void *arg1, void *arg2, void *arg3,
+ 					if (CPU_ISSET_S(i, ctdp_level.core_cpumask_size, ctdp_level.core_cpumask)) {
+ 						fprintf(stderr, "online cpu %d\n", i);
+ 						set_cpu_online_offline(i, 1);
++						adjust_scaling_max_from_base_freq(i);
+ 					} else {
+ 						fprintf(stderr, "offline cpu %d\n", i);
+ 						set_cpu_online_offline(i, 0);
+@@ -1436,6 +1442,31 @@ static int set_cpufreq_scaling_min_max(int cpu, int max, int freq)
+ 	return 0;
+ }
+ 
++static int no_turbo(void)
++{
++	return parse_int_file(0, "/sys/devices/system/cpu/intel_pstate/no_turbo");
++}
++
++static void adjust_scaling_max_from_base_freq(int cpu)
++{
++	int base_freq, scaling_max_freq;
++
++	scaling_max_freq = parse_int_file(0, "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_max_freq", cpu);
++	base_freq = get_cpufreq_base_freq(cpu);
++	if (scaling_max_freq < base_freq || no_turbo())
++		set_cpufreq_scaling_min_max(cpu, 1, base_freq);
++}
++
++static void adjust_scaling_min_from_base_freq(int cpu)
++{
++	int base_freq, scaling_min_freq;
++
++	scaling_min_freq = parse_int_file(0, "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_min_freq", cpu);
++	base_freq = get_cpufreq_base_freq(cpu);
++	if (scaling_min_freq < base_freq)
++		set_cpufreq_scaling_min_max(cpu, 0, base_freq);
++}
++
+ static int set_clx_pbf_cpufreq_scaling_min_max(int cpu)
+ {
+ 	struct isst_pkg_ctdp_level_info *ctdp_level;
+@@ -1533,6 +1564,7 @@ static void set_scaling_min_to_cpuinfo_max(int cpu)
+ 			continue;
+ 
+ 		set_cpufreq_scaling_min_max_from_cpuinfo(i, 1, 0);
++		adjust_scaling_min_from_base_freq(i);
+ 	}
+ }
+ 
+diff --git a/tools/testing/selftests/powerpc/alignment/alignment_handler.c b/tools/testing/selftests/powerpc/alignment/alignment_handler.c
+index cb53a8b777e68..c25cf7cd45e9f 100644
+--- a/tools/testing/selftests/powerpc/alignment/alignment_handler.c
++++ b/tools/testing/selftests/powerpc/alignment/alignment_handler.c
+@@ -443,7 +443,6 @@ int test_alignment_handler_integer(void)
+ 	LOAD_DFORM_TEST(ldu);
+ 	LOAD_XFORM_TEST(ldx);
+ 	LOAD_XFORM_TEST(ldux);
+-	LOAD_DFORM_TEST(lmw);
+ 	STORE_DFORM_TEST(stb);
+ 	STORE_XFORM_TEST(stbx);
+ 	STORE_DFORM_TEST(stbu);
+@@ -462,7 +461,11 @@ int test_alignment_handler_integer(void)
+ 	STORE_XFORM_TEST(stdx);
+ 	STORE_DFORM_TEST(stdu);
+ 	STORE_XFORM_TEST(stdux);
++
++#ifdef __BIG_ENDIAN__
++	LOAD_DFORM_TEST(lmw);
+ 	STORE_DFORM_TEST(stmw);
++#endif
+ 
+ 	return rc;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-09 19:10 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-02-09 19:10 UTC (permalink / raw
  To: gentoo-commits

commit:     f2b7022ccc048d545f23b383a7716a0cf9c7304f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Feb  9 19:09:13 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Feb  9 19:09:13 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f2b7022c

SUNRPC: Fix NFS READs that start at non-page-aligned offsets

See https://bugs.gentoo.org/768720
Thanks to DaggyStyle for reporting and testing

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 ++
 ...RPC-NFS-fix-non-page-aligned-offsets-read.patch | 53 ++++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/0000_README b/0000_README
index 897c945..7375e82 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2400_SUNRPC-NFS-fix-non-page-aligned-offsets-read.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/net/sunrpc/svcsock.c?id=bad4c6eb5eaa8300e065bd4426727db5141d687d
+Desc:   SUNRPC: Fix NFS READs that start at non-page-aligned offsets
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2400_SUNRPC-NFS-fix-non-page-aligned-offsets-read.patch b/2400_SUNRPC-NFS-fix-non-page-aligned-offsets-read.patch
new file mode 100644
index 0000000..34d6ebb
--- /dev/null
+++ b/2400_SUNRPC-NFS-fix-non-page-aligned-offsets-read.patch
@@ -0,0 +1,53 @@
+From bad4c6eb5eaa8300e065bd4426727db5141d687d Mon Sep 17 00:00:00 2001
+From: Chuck Lever <chuck.lever@oracle.com>
+Date: Sun, 31 Jan 2021 16:16:23 -0500
+Subject: SUNRPC: Fix NFS READs that start at non-page-aligned offsets
+
+Anj Duvnjak reports that the Kodi.tv NFS client is not able to read
+video files from a v5.10.11 Linux NFS server.
+
+The new sendpage-based TCP sendto logic was not attentive to non-
+zero page_base values. nfsd_splice_read() sets that field when a
+READ payload starts in the middle of a page.
+
+The Linux NFS client rarely emits an NFS READ that is not page-
+aligned. All of my testing so far has been with Linux clients, so I
+missed this one.
+
+Reported-by: A. Duvnjak <avian@extremenerds.net>
+BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=211471
+Fixes: 4a85a6a3320b ("SUNRPC: Handle TCP socket sends with kernel_sendpage() again")
+Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
+Tested-by: A. Duvnjak <avian@extremenerds.net>
+---
+ net/sunrpc/svcsock.c | 7 ++++---
+ 1 file changed, 4 insertions(+), 3 deletions(-)
+
+(limited to 'net/sunrpc/svcsock.c')
+
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index c9766d07eb81a..5a809c64dc7b9 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1113,14 +1113,15 @@ static int svc_tcp_sendmsg(struct socket *sock, struct msghdr *msg,
+ 		unsigned int offset, len, remaining;
+ 		struct bio_vec *bvec;
+ 
+-		bvec = xdr->bvec;
+-		offset = xdr->page_base;
++		bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT);
++		offset = offset_in_page(xdr->page_base);
+ 		remaining = xdr->page_len;
+ 		flags = MSG_MORE | MSG_SENDPAGE_NOTLAST;
+ 		while (remaining > 0) {
+ 			if (remaining <= PAGE_SIZE && tail->iov_len == 0)
+ 				flags = 0;
+-			len = min(remaining, bvec->bv_len);
++
++			len = min(remaining, bvec->bv_len - offset);
+ 			ret = kernel_sendpage(sock, bvec->bv_page,
+ 					      bvec->bv_offset + offset,
+ 					      len, flags);
+-- 
+cgit 1.2.3-1.el7
+


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-10  9:51 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-02-10  9:51 UTC (permalink / raw
  To: gentoo-commits

commit:     6c84e9a9d87af7d00d731b3a4f131091c7393002
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 10 09:51:09 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Feb 10 09:51:15 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6c84e9a9

Linux patch 5.10.15

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1014_linux-5.10.15.patch | 4352 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4356 insertions(+)

diff --git a/0000_README b/0000_README
index 7375e82..7d03d9d 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch:  1013_linux-5.10.14.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.14
 
+Patch:  1014_linux-5.10.15.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.15
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1014_linux-5.10.15.patch b/1014_linux-5.10.15.patch
new file mode 100644
index 0000000..28991a6
--- /dev/null
+++ b/1014_linux-5.10.15.patch
@@ -0,0 +1,4352 @@
+diff --git a/Documentation/filesystems/overlayfs.rst b/Documentation/filesystems/overlayfs.rst
+index 580ab9a0fe319..137afeb3f581c 100644
+--- a/Documentation/filesystems/overlayfs.rst
++++ b/Documentation/filesystems/overlayfs.rst
+@@ -575,6 +575,14 @@ without significant effort.
+ The advantage of mounting with the "volatile" option is that all forms of
+ sync calls to the upper filesystem are omitted.
+ 
++In order to avoid a giving a false sense of safety, the syncfs (and fsync)
++semantics of volatile mounts are slightly different than that of the rest of
++VFS.  If any writeback error occurs on the upperdir's filesystem after a
++volatile mount takes place, all sync functions will return an error.  Once this
++condition is reached, the filesystem will not recover, and every subsequent sync
++call will return an error, even if the upperdir has not experience a new error
++since the last sync call.
++
+ When overlay is mounted with "volatile" option, the directory
+ "$workdir/work/incompat/volatile" is created.  During next mount, overlay
+ checks for this directory and refuses to mount if present. This is a strong
+diff --git a/Makefile b/Makefile
+index bb3770be9779d..b62d2d4ea7b02 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+@@ -812,10 +812,12 @@ KBUILD_CFLAGS	+= -ftrivial-auto-var-init=zero
+ KBUILD_CFLAGS	+= -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang
+ endif
+ 
++DEBUG_CFLAGS	:=
++
+ # Workaround for GCC versions < 5.0
+ # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61801
+ ifdef CONFIG_CC_IS_GCC
+-DEBUG_CFLAGS	:= $(call cc-ifversion, -lt, 0500, $(call cc-option, -fno-var-tracking-assignments))
++DEBUG_CFLAGS	+= $(call cc-ifversion, -lt, 0500, $(call cc-option, -fno-var-tracking-assignments))
+ endif
+ 
+ ifdef CONFIG_DEBUG_INFO
+@@ -948,12 +950,6 @@ KBUILD_CFLAGS   += $(call cc-option,-Werror=designated-init)
+ # change __FILE__ to the relative path from the srctree
+ KBUILD_CPPFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
+ 
+-# ensure -fcf-protection is disabled when using retpoline as it is
+-# incompatible with -mindirect-branch=thunk-extern
+-ifdef CONFIG_RETPOLINE
+-KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none)
+-endif
+-
+ # include additional Makefiles when needed
+ include-y			:= scripts/Makefile.extrawarn
+ include-$(CONFIG_KASAN)		+= scripts/Makefile.kasan
+diff --git a/arch/arm/boot/dts/omap3-gta04.dtsi b/arch/arm/boot/dts/omap3-gta04.dtsi
+index c8745bc800f71..7b8c18e6605e4 100644
+--- a/arch/arm/boot/dts/omap3-gta04.dtsi
++++ b/arch/arm/boot/dts/omap3-gta04.dtsi
+@@ -114,7 +114,7 @@
+ 		gpio-sck = <&gpio1 12 GPIO_ACTIVE_HIGH>;
+ 		gpio-miso = <&gpio1 18 GPIO_ACTIVE_HIGH>;
+ 		gpio-mosi = <&gpio1 20 GPIO_ACTIVE_HIGH>;
+-		cs-gpios = <&gpio1 19 GPIO_ACTIVE_HIGH>;
++		cs-gpios = <&gpio1 19 GPIO_ACTIVE_LOW>;
+ 		num-chipselects = <1>;
+ 
+ 		/* lcd panel */
+@@ -124,7 +124,6 @@
+ 			spi-max-frequency = <100000>;
+ 			spi-cpol;
+ 			spi-cpha;
+-			spi-cs-high;
+ 
+ 			backlight= <&backlight>;
+ 			label = "lcd";
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-drc02.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-drc02.dtsi
+index 62ab23824a3e7..e4d287d994214 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-drc02.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-drc02.dtsi
+@@ -35,7 +35,7 @@
+ 	 */
+ 	rs485-rx-en {
+ 		gpio-hog;
+-		gpios = <8 GPIO_ACTIVE_HIGH>;
++		gpios = <8 0>;
+ 		output-low;
+ 		line-name = "rs485-rx-en";
+ 	};
+@@ -63,7 +63,7 @@
+ 	 */
+ 	usb-hub {
+ 		gpio-hog;
+-		gpios = <2 GPIO_ACTIVE_HIGH>;
++		gpios = <2 0>;
+ 		output-high;
+ 		line-name = "usb-hub-reset";
+ 	};
+@@ -87,6 +87,12 @@
+ 	};
+ };
+ 
++&i2c4 {
++	touchscreen@49 {
++		status = "disabled";
++	};
++};
++
+ &i2c5 {	/* TP7/TP8 */
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&i2c5_pins_a>;
+@@ -104,7 +110,7 @@
+ 	 * are used for on-board microSD slot instead.
+ 	 */
+ 	/delete-property/broken-cd;
+-	cd-gpios = <&gpioi 10 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>;
++	cd-gpios = <&gpioi 10 GPIO_ACTIVE_HIGH>;
+ 	disable-wp;
+ };
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+index f796a6150313e..2d027dafb7bce 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+@@ -353,7 +353,8 @@
+ 	pinctrl-0 = <&sdmmc1_b4_pins_a &sdmmc1_dir_pins_a>;
+ 	pinctrl-1 = <&sdmmc1_b4_od_pins_a &sdmmc1_dir_pins_a>;
+ 	pinctrl-2 = <&sdmmc1_b4_sleep_pins_a &sdmmc1_dir_sleep_pins_a>;
+-	broken-cd;
++	cd-gpios = <&gpiog 1 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>;
++	disable-wp;
+ 	st,sig-dir;
+ 	st,neg-edge;
+ 	st,use-ckin;
+diff --git a/arch/arm/boot/dts/sun7i-a20-bananapro.dts b/arch/arm/boot/dts/sun7i-a20-bananapro.dts
+index 01ccff756996d..5740f9442705c 100644
+--- a/arch/arm/boot/dts/sun7i-a20-bananapro.dts
++++ b/arch/arm/boot/dts/sun7i-a20-bananapro.dts
+@@ -110,7 +110,7 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&gmac_rgmii_pins>;
+ 	phy-handle = <&phy1>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	phy-supply = <&reg_gmac_3v3>;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/include/debug/tegra.S b/arch/arm/include/debug/tegra.S
+index 98daa7f483148..7454480d084b2 100644
+--- a/arch/arm/include/debug/tegra.S
++++ b/arch/arm/include/debug/tegra.S
+@@ -149,7 +149,34 @@
+ 
+ 		.align
+ 99:		.word	.
++#if defined(ZIMAGE)
++		.word	. + 4
++/*
++ * Storage for the state maintained by the macro.
++ *
++ * In the kernel proper, this data is located in arch/arm/mach-tegra/tegra.c.
++ * That's because this header is included from multiple files, and we only
++ * want a single copy of the data. In particular, the UART probing code above
++ * assumes it's running using physical addresses. This is true when this file
++ * is included from head.o, but not when included from debug.o. So we need
++ * to share the probe results between the two copies, rather than having
++ * to re-run the probing again later.
++ *
++ * In the decompressor, we put the storage right here, since common.c
++ * isn't included in the decompressor build. This storage data gets put in
++ * .text even though it's really data, since .data is discarded from the
++ * decompressor. Luckily, .text is writeable in the decompressor, unless
++ * CONFIG_ZBOOT_ROM. That dependency is handled in arch/arm/Kconfig.debug.
++ */
++		/* Debug UART initialization required */
++		.word	1
++		/* Debug UART physical address */
++		.word	0
++		/* Debug UART virtual address */
++		.word	0
++#else
+ 		.word	tegra_uart_config
++#endif
+ 		.ltorg
+ 
+ 		/* Load previously selected UART address */
+@@ -189,30 +216,3 @@
+ 
+ 		.macro	waituarttxrdy,rd,rx
+ 		.endm
+-
+-/*
+- * Storage for the state maintained by the macros above.
+- *
+- * In the kernel proper, this data is located in arch/arm/mach-tegra/tegra.c.
+- * That's because this header is included from multiple files, and we only
+- * want a single copy of the data. In particular, the UART probing code above
+- * assumes it's running using physical addresses. This is true when this file
+- * is included from head.o, but not when included from debug.o. So we need
+- * to share the probe results between the two copies, rather than having
+- * to re-run the probing again later.
+- *
+- * In the decompressor, we put the symbol/storage right here, since common.c
+- * isn't included in the decompressor build. This symbol gets put in .text
+- * even though it's really data, since .data is discarded from the
+- * decompressor. Luckily, .text is writeable in the decompressor, unless
+- * CONFIG_ZBOOT_ROM. That dependency is handled in arch/arm/Kconfig.debug.
+- */
+-#if defined(ZIMAGE)
+-tegra_uart_config:
+-	/* Debug UART initialization required */
+-	.word 1
+-	/* Debug UART physical address */
+-	.word 0
+-	/* Debug UART virtual address */
+-	.word 0
+-#endif
+diff --git a/arch/arm/mach-footbridge/dc21285.c b/arch/arm/mach-footbridge/dc21285.c
+index 416462e3f5d63..f9713dc561cf7 100644
+--- a/arch/arm/mach-footbridge/dc21285.c
++++ b/arch/arm/mach-footbridge/dc21285.c
+@@ -65,15 +65,15 @@ dc21285_read_config(struct pci_bus *bus, unsigned int devfn, int where,
+ 	if (addr)
+ 		switch (size) {
+ 		case 1:
+-			asm("ldrb	%0, [%1, %2]"
++			asm volatile("ldrb	%0, [%1, %2]"
+ 				: "=r" (v) : "r" (addr), "r" (where) : "cc");
+ 			break;
+ 		case 2:
+-			asm("ldrh	%0, [%1, %2]"
++			asm volatile("ldrh	%0, [%1, %2]"
+ 				: "=r" (v) : "r" (addr), "r" (where) : "cc");
+ 			break;
+ 		case 4:
+-			asm("ldr	%0, [%1, %2]"
++			asm volatile("ldr	%0, [%1, %2]"
+ 				: "=r" (v) : "r" (addr), "r" (where) : "cc");
+ 			break;
+ 		}
+@@ -99,17 +99,17 @@ dc21285_write_config(struct pci_bus *bus, unsigned int devfn, int where,
+ 	if (addr)
+ 		switch (size) {
+ 		case 1:
+-			asm("strb	%0, [%1, %2]"
++			asm volatile("strb	%0, [%1, %2]"
+ 				: : "r" (value), "r" (addr), "r" (where)
+ 				: "cc");
+ 			break;
+ 		case 2:
+-			asm("strh	%0, [%1, %2]"
++			asm volatile("strh	%0, [%1, %2]"
+ 				: : "r" (value), "r" (addr), "r" (where)
+ 				: "cc");
+ 			break;
+ 		case 4:
+-			asm("str	%0, [%1, %2]"
++			asm volatile("str	%0, [%1, %2]"
+ 				: : "r" (value), "r" (addr), "r" (where)
+ 				: "cc");
+ 			break;
+diff --git a/arch/arm/mach-omap1/board-osk.c b/arch/arm/mach-omap1/board-osk.c
+index a720259099edf..0a4c9b0b13b0c 100644
+--- a/arch/arm/mach-omap1/board-osk.c
++++ b/arch/arm/mach-omap1/board-osk.c
+@@ -203,6 +203,8 @@ static int osk_tps_setup(struct i2c_client *client, void *context)
+ 	 */
+ 	gpio_request(OSK_TPS_GPIO_USB_PWR_EN, "n_vbus_en");
+ 	gpio_direction_output(OSK_TPS_GPIO_USB_PWR_EN, 1);
++	/* Free the GPIO again as the driver will request it */
++	gpio_free(OSK_TPS_GPIO_USB_PWR_EN);
+ 
+ 	/* Set GPIO 2 high so LED D3 is off by default */
+ 	tps65010_set_gpio_out_value(GPIO2, HIGH);
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index 8514fe6a275a3..a6127002573bd 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -2384,7 +2384,7 @@
+ 				interrupts = <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>;
+ 				dr_mode = "host";
+ 				snps,dis_u2_susphy_quirk;
+-				snps,quirk-frame-length-adjustment;
++				snps,quirk-frame-length-adjustment = <0x20>;
+ 				snps,parkmode-disable-ss-quirk;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-c4.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-c4.dts
+index cf5a98f0e47c8..a712273c905af 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-c4.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-c4.dts
+@@ -52,7 +52,7 @@
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
+ 
+-		gpio = <&gpio_ao GPIOAO_3 GPIO_ACTIVE_HIGH>;
++		gpio = <&gpio_ao GPIOAO_3 GPIO_OPEN_DRAIN>;
+ 		enable-active-high;
+ 		regulator-always-on;
+ 	};
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
+index 1fa39bacff4b3..0b4545012d43e 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
+@@ -385,7 +385,7 @@
+ 
+ 		dcfg: dcfg@1ee0000 {
+ 			compatible = "fsl,ls1046a-dcfg", "syscon";
+-			reg = <0x0 0x1ee0000 0x0 0x10000>;
++			reg = <0x0 0x1ee0000 0x0 0x1000>;
+ 			big-endian;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index 76a8c996d497f..d70aae77a6e84 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -263,6 +263,8 @@
+ &i2c3 {
+ 	status = "okay";
+ 	clock-frequency = <400000>;
++	/* Overwrite pinctrl-0 from sdm845.dtsi */
++	pinctrl-0 = <&qup_i2c3_default &i2c3_hid_active>;
+ 
+ 	tsel: hid@15 {
+ 		compatible = "hid-over-i2c";
+@@ -270,9 +272,6 @@
+ 		hid-descr-addr = <0x1>;
+ 
+ 		interrupts-extended = <&tlmm 37 IRQ_TYPE_LEVEL_HIGH>;
+-
+-		pinctrl-names = "default";
+-		pinctrl-0 = <&i2c3_hid_active>;
+ 	};
+ 
+ 	tsc2: hid@2c {
+@@ -281,11 +280,6 @@
+ 		hid-descr-addr = <0x20>;
+ 
+ 		interrupts-extended = <&tlmm 37 IRQ_TYPE_LEVEL_HIGH>;
+-
+-		pinctrl-names = "default";
+-		pinctrl-0 = <&i2c3_hid_active>;
+-
+-		status = "disabled";
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/px30.dtsi b/arch/arm64/boot/dts/rockchip/px30.dtsi
+index 2695ea8cda142..64193292d26c3 100644
+--- a/arch/arm64/boot/dts/rockchip/px30.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30.dtsi
+@@ -1097,7 +1097,7 @@
+ 	vopl_mmu: iommu@ff470f00 {
+ 		compatible = "rockchip,iommu";
+ 		reg = <0x0 0xff470f00 0x0 0x100>;
+-		interrupts = <GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>;
++		interrupts = <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>;
+ 		interrupt-names = "vopl_mmu";
+ 		clocks = <&cru ACLK_VOPL>, <&cru HCLK_VOPL>;
+ 		clock-names = "aclk", "iface";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+index 06d48338c8362..219b7507a10fb 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+@@ -790,7 +790,6 @@
+ &pcie0 {
+ 	bus-scan-delay-ms = <1000>;
+ 	ep-gpios = <&gpio2 RK_PD4 GPIO_ACTIVE_HIGH>;
+-	max-link-speed = <2>;
+ 	num-lanes = <4>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pcie_clkreqn_cpm>;
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 234a21d26f674..3474286e59db7 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -252,8 +252,10 @@ choice
+ 	default MAXPHYSMEM_128GB if 64BIT && CMODEL_MEDANY
+ 
+ 	config MAXPHYSMEM_1GB
++		depends on 32BIT
+ 		bool "1GiB"
+ 	config MAXPHYSMEM_2GB
++		depends on 64BIT && CMODEL_MEDLOW
+ 		bool "2GiB"
+ 	config MAXPHYSMEM_128GB
+ 		depends on 64BIT && CMODEL_MEDANY
+diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
+index a6c4bb6c2c012..c17b8e5ec1869 100644
+--- a/arch/um/drivers/virtio_uml.c
++++ b/arch/um/drivers/virtio_uml.c
+@@ -1083,6 +1083,7 @@ static void virtio_uml_release_dev(struct device *d)
+ 	}
+ 
+ 	os_close_file(vu_dev->sock);
++	kfree(vu_dev);
+ }
+ 
+ /* Platform device */
+@@ -1096,7 +1097,7 @@ static int virtio_uml_probe(struct platform_device *pdev)
+ 	if (!pdata)
+ 		return -EINVAL;
+ 
+-	vu_dev = devm_kzalloc(&pdev->dev, sizeof(*vu_dev), GFP_KERNEL);
++	vu_dev = kzalloc(sizeof(*vu_dev), GFP_KERNEL);
+ 	if (!vu_dev)
+ 		return -ENOMEM;
+ 
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 1bf21746f4cea..6a7efa78eba22 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -127,6 +127,9 @@ else
+ 
+         KBUILD_CFLAGS += -mno-red-zone
+         KBUILD_CFLAGS += -mcmodel=kernel
++
++	# Intel CET isn't enabled in the kernel
++	KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none)
+ endif
+ 
+ ifdef CONFIG_X86_X32
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 57af25cb44f63..51abd44ab8c2d 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -197,16 +197,6 @@ static inline bool apic_needs_pit(void) { return true; }
+ #endif /* !CONFIG_X86_LOCAL_APIC */
+ 
+ #ifdef CONFIG_X86_X2APIC
+-/*
+- * Make previous memory operations globally visible before
+- * sending the IPI through x2apic wrmsr. We need a serializing instruction or
+- * mfence for this.
+- */
+-static inline void x2apic_wrmsr_fence(void)
+-{
+-	asm volatile("mfence" : : : "memory");
+-}
+-
+ static inline void native_apic_msr_write(u32 reg, u32 v)
+ {
+ 	if (reg == APIC_DFR || reg == APIC_ID || reg == APIC_LDR ||
+diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
+index 7f828fe497978..4819d5e5a3353 100644
+--- a/arch/x86/include/asm/barrier.h
++++ b/arch/x86/include/asm/barrier.h
+@@ -84,4 +84,22 @@ do {									\
+ 
+ #include <asm-generic/barrier.h>
+ 
++/*
++ * Make previous memory operations globally visible before
++ * a WRMSR.
++ *
++ * MFENCE makes writes visible, but only affects load/store
++ * instructions.  WRMSR is unfortunately not a load/store
++ * instruction and is unaffected by MFENCE.  The LFENCE ensures
++ * that the WRMSR is not reordered.
++ *
++ * Most WRMSRs are full serializing instructions themselves and
++ * do not require this barrier.  This is only required for the
++ * IA32_TSC_DEADLINE and X2APIC MSRs.
++ */
++static inline void weak_wrmsr_fence(void)
++{
++	asm volatile("mfence; lfence" : : : "memory");
++}
++
+ #endif /* _ASM_X86_BARRIER_H */
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 113f6ca7b8284..f4c0514fc5108 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -41,6 +41,7 @@
+ #include <asm/perf_event.h>
+ #include <asm/x86_init.h>
+ #include <linux/atomic.h>
++#include <asm/barrier.h>
+ #include <asm/mpspec.h>
+ #include <asm/i8259.h>
+ #include <asm/proto.h>
+@@ -472,6 +473,9 @@ static int lapic_next_deadline(unsigned long delta,
+ {
+ 	u64 tsc;
+ 
++	/* This MSR is special and need a special fence: */
++	weak_wrmsr_fence();
++
+ 	tsc = rdtsc();
+ 	wrmsrl(MSR_IA32_TSC_DEADLINE, tsc + (((u64) delta) * TSC_DIVISOR));
+ 	return 0;
+diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
+index b0889c48a2ac5..7eec3c154fa24 100644
+--- a/arch/x86/kernel/apic/x2apic_cluster.c
++++ b/arch/x86/kernel/apic/x2apic_cluster.c
+@@ -29,7 +29,8 @@ static void x2apic_send_IPI(int cpu, int vector)
+ {
+ 	u32 dest = per_cpu(x86_cpu_to_logical_apicid, cpu);
+ 
+-	x2apic_wrmsr_fence();
++	/* x2apic MSRs are special and need a special fence: */
++	weak_wrmsr_fence();
+ 	__x2apic_send_IPI_dest(dest, vector, APIC_DEST_LOGICAL);
+ }
+ 
+@@ -41,7 +42,8 @@ __x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest)
+ 	unsigned long flags;
+ 	u32 dest;
+ 
+-	x2apic_wrmsr_fence();
++	/* x2apic MSRs are special and need a special fence: */
++	weak_wrmsr_fence();
+ 	local_irq_save(flags);
+ 
+ 	tmpmsk = this_cpu_cpumask_var_ptr(ipi_mask);
+diff --git a/arch/x86/kernel/apic/x2apic_phys.c b/arch/x86/kernel/apic/x2apic_phys.c
+index e14eae6d6ea71..032a00e5d9fa6 100644
+--- a/arch/x86/kernel/apic/x2apic_phys.c
++++ b/arch/x86/kernel/apic/x2apic_phys.c
+@@ -43,7 +43,8 @@ static void x2apic_send_IPI(int cpu, int vector)
+ {
+ 	u32 dest = per_cpu(x86_cpu_to_apicid, cpu);
+ 
+-	x2apic_wrmsr_fence();
++	/* x2apic MSRs are special and need a special fence: */
++	weak_wrmsr_fence();
+ 	__x2apic_send_IPI_dest(dest, vector, APIC_DEST_PHYSICAL);
+ }
+ 
+@@ -54,7 +55,8 @@ __x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest)
+ 	unsigned long this_cpu;
+ 	unsigned long flags;
+ 
+-	x2apic_wrmsr_fence();
++	/* x2apic MSRs are special and need a special fence: */
++	weak_wrmsr_fence();
+ 
+ 	local_irq_save(flags);
+ 
+@@ -125,7 +127,8 @@ void __x2apic_send_IPI_shorthand(int vector, u32 which)
+ {
+ 	unsigned long cfg = __prepare_ICR(which, vector, 0);
+ 
+-	x2apic_wrmsr_fence();
++	/* x2apic MSRs are special and need a special fence: */
++	weak_wrmsr_fence();
+ 	native_x2apic_icr_write(cfg, 0);
+ }
+ 
+diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c
+index 03aa33b581658..668a4a6533d92 100644
+--- a/arch/x86/kernel/hw_breakpoint.c
++++ b/arch/x86/kernel/hw_breakpoint.c
+@@ -269,6 +269,20 @@ static inline bool within_cpu_entry(unsigned long addr, unsigned long end)
+ 			CPU_ENTRY_AREA_TOTAL_SIZE))
+ 		return true;
+ 
++	/*
++	 * When FSGSBASE is enabled, paranoid_entry() fetches the per-CPU
++	 * GSBASE value via __per_cpu_offset or pcpu_unit_offsets.
++	 */
++#ifdef CONFIG_SMP
++	if (within_area(addr, end, (unsigned long)__per_cpu_offset,
++			sizeof(unsigned long) * nr_cpu_ids))
++		return true;
++#else
++	if (within_area(addr, end, (unsigned long)&pcpu_unit_offsets,
++			sizeof(pcpu_unit_offsets)))
++		return true;
++#endif
++
+ 	for_each_possible_cpu(cpu) {
+ 		/* The original rw GDT is being used after load_direct_gdt() */
+ 		if (within_area(addr, end, (unsigned long)get_cpu_gdt_rw(cpu),
+@@ -293,6 +307,14 @@ static inline bool within_cpu_entry(unsigned long addr, unsigned long end)
+ 				(unsigned long)&per_cpu(cpu_tlbstate, cpu),
+ 				sizeof(struct tlb_state)))
+ 			return true;
++
++		/*
++		 * When in guest (X86_FEATURE_HYPERVISOR), local_db_save()
++		 * will read per-cpu cpu_dr7 before clear dr7 register.
++		 */
++		if (within_area(addr, end, (unsigned long)&per_cpu(cpu_dr7, cpu),
++				sizeof(cpu_dr7)))
++			return true;
+ 	}
+ 
+ 	return false;
+@@ -491,15 +513,12 @@ static int hw_breakpoint_handler(struct die_args *args)
+ 	struct perf_event *bp;
+ 	unsigned long *dr6_p;
+ 	unsigned long dr6;
++	bool bpx;
+ 
+ 	/* The DR6 value is pointed by args->err */
+ 	dr6_p = (unsigned long *)ERR_PTR(args->err);
+ 	dr6 = *dr6_p;
+ 
+-	/* If it's a single step, TRAP bits are random */
+-	if (dr6 & DR_STEP)
+-		return NOTIFY_DONE;
+-
+ 	/* Do an early return if no trap bits are set in DR6 */
+ 	if ((dr6 & DR_TRAP_BITS) == 0)
+ 		return NOTIFY_DONE;
+@@ -509,28 +528,29 @@ static int hw_breakpoint_handler(struct die_args *args)
+ 		if (likely(!(dr6 & (DR_TRAP0 << i))))
+ 			continue;
+ 
++		bp = this_cpu_read(bp_per_reg[i]);
++		if (!bp)
++			continue;
++
++		bpx = bp->hw.info.type == X86_BREAKPOINT_EXECUTE;
++
+ 		/*
+-		 * The counter may be concurrently released but that can only
+-		 * occur from a call_rcu() path. We can then safely fetch
+-		 * the breakpoint, use its callback, touch its counter
+-		 * while we are in an rcu_read_lock() path.
++		 * TF and data breakpoints are traps and can be merged, however
++		 * instruction breakpoints are faults and will be raised
++		 * separately.
++		 *
++		 * However DR6 can indicate both TF and instruction
++		 * breakpoints. In that case take TF as that has precedence and
++		 * delay the instruction breakpoint for the next exception.
+ 		 */
+-		rcu_read_lock();
++		if (bpx && (dr6 & DR_STEP))
++			continue;
+ 
+-		bp = this_cpu_read(bp_per_reg[i]);
+ 		/*
+ 		 * Reset the 'i'th TRAP bit in dr6 to denote completion of
+ 		 * exception handling
+ 		 */
+ 		(*dr6_p) &= ~(DR_TRAP0 << i);
+-		/*
+-		 * bp can be NULL due to lazy debug register switching
+-		 * or due to concurrent perf counter removing.
+-		 */
+-		if (!bp) {
+-			rcu_read_unlock();
+-			break;
+-		}
+ 
+ 		perf_bp_event(bp, args->regs);
+ 
+@@ -538,11 +558,10 @@ static int hw_breakpoint_handler(struct die_args *args)
+ 		 * Set up resume flag to avoid breakpoint recursion when
+ 		 * returning back to origin.
+ 		 */
+-		if (bp->hw.info.type == X86_BREAKPOINT_EXECUTE)
++		if (bpx)
+ 			args->regs->flags |= X86_EFLAGS_RF;
+-
+-		rcu_read_unlock();
+ 	}
++
+ 	/*
+ 	 * Further processing in do_debug() is needed for a) user-space
+ 	 * breakpoints (to generate signals) and b) when the system has
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 83637a2ff6052..62157b1000f08 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -320,7 +320,7 @@ int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
+ 	if (cpuid->nent < vcpu->arch.cpuid_nent)
+ 		goto out;
+ 	r = -EFAULT;
+-	if (copy_to_user(entries, &vcpu->arch.cpuid_entries,
++	if (copy_to_user(entries, vcpu->arch.cpuid_entries,
+ 			 vcpu->arch.cpuid_nent * sizeof(struct kvm_cpuid_entry2)))
+ 		goto out;
+ 	return 0;
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 56cae1ff9e3fe..66a08322988f2 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -2879,6 +2879,8 @@ static int em_sysenter(struct x86_emulate_ctxt *ctxt)
+ 	ops->get_msr(ctxt, MSR_IA32_SYSENTER_ESP, &msr_data);
+ 	*reg_write(ctxt, VCPU_REGS_RSP) = (efer & EFER_LMA) ? msr_data :
+ 							      (u32)msr_data;
++	if (efer & EFER_LMA)
++		ctxt->mode = X86EMUL_MODE_PROT64;
+ 
+ 	return X86EMUL_CONTINUE;
+ }
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index b9265a585ea3c..c842d17240ccb 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -1037,8 +1037,8 @@ bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot)
+ }
+ 
+ /*
+- * Clear non-leaf entries (and free associated page tables) which could
+- * be replaced by large mappings, for GFNs within the slot.
++ * Clear leaf entries which could be replaced by large mappings, for
++ * GFNs within the slot.
+  */
+ static void zap_collapsible_spte_range(struct kvm *kvm,
+ 				       struct kvm_mmu_page *root,
+@@ -1050,7 +1050,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
+ 
+ 	tdp_root_for_each_pte(iter, root, start, end) {
+ 		if (!is_shadow_present_pte(iter.old_spte) ||
+-		    is_last_spte(iter.old_spte, iter.level))
++		    !is_last_spte(iter.old_spte, iter.level))
+ 			continue;
+ 
+ 		pfn = spte_to_pfn(iter.old_spte);
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 5c9630c3f6ba1..e3e04988fdabe 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -320,6 +320,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
+ 	unsigned long first, last;
+ 	int ret;
+ 
++	lockdep_assert_held(&kvm->lock);
++
+ 	if (ulen == 0 || uaddr + ulen < uaddr)
+ 		return ERR_PTR(-EINVAL);
+ 
+@@ -1001,12 +1003,20 @@ int svm_register_enc_region(struct kvm *kvm,
+ 	if (!region)
+ 		return -ENOMEM;
+ 
++	mutex_lock(&kvm->lock);
+ 	region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
+ 	if (IS_ERR(region->pages)) {
+ 		ret = PTR_ERR(region->pages);
++		mutex_unlock(&kvm->lock);
+ 		goto e_free;
+ 	}
+ 
++	region->uaddr = range->addr;
++	region->size = range->size;
++
++	list_add_tail(&region->list, &sev->regions_list);
++	mutex_unlock(&kvm->lock);
++
+ 	/*
+ 	 * The guest may change the memory encryption attribute from C=0 -> C=1
+ 	 * or vice versa for this memory range. Lets make sure caches are
+@@ -1015,13 +1025,6 @@ int svm_register_enc_region(struct kvm *kvm,
+ 	 */
+ 	sev_clflush_pages(region->pages, region->npages);
+ 
+-	region->uaddr = range->addr;
+-	region->size = range->size;
+-
+-	mutex_lock(&kvm->lock);
+-	list_add_tail(&region->list, &sev->regions_list);
+-	mutex_unlock(&kvm->lock);
+-
+ 	return ret;
+ 
+ e_free:
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 94b0cb8330451..f4ae3871e412a 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -438,6 +438,11 @@ static int has_svm(void)
+ 		return 0;
+ 	}
+ 
++	if (sev_active()) {
++		pr_info("KVM is unsupported when running as an SEV guest\n");
++		return 0;
++	}
++
+ 	return 1;
+ }
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index c01aac2bac37c..82af43e14b09c 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6874,11 +6874,20 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
+ 		switch (index) {
+ 		case MSR_IA32_TSX_CTRL:
+ 			/*
+-			 * No need to pass TSX_CTRL_CPUID_CLEAR through, so
+-			 * let's avoid changing CPUID bits under the host
+-			 * kernel's feet.
++			 * TSX_CTRL_CPUID_CLEAR is handled in the CPUID
++			 * interception.  Keep the host value unchanged to avoid
++			 * changing CPUID bits under the host kernel's feet.
++			 *
++			 * hle=0, rtm=0, tsx_ctrl=1 can be found with some
++			 * combinations of new kernel and old userspace.  If
++			 * those guests run on a tsx=off host, do allow guests
++			 * to use TSX_CTRL, but do not change the value on the
++			 * host so that TSX remains always disabled.
+ 			 */
+-			vmx->guest_uret_msrs[j].mask = ~(u64)TSX_CTRL_CPUID_CLEAR;
++			if (boot_cpu_has(X86_FEATURE_RTM))
++				vmx->guest_uret_msrs[j].mask = ~(u64)TSX_CTRL_CPUID_CLEAR;
++			else
++				vmx->guest_uret_msrs[j].mask = 0;
+ 			break;
+ 		default:
+ 			vmx->guest_uret_msrs[j].mask = -1ull;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0a302685e4d62..18a315bbcb79e 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1376,16 +1376,24 @@ static u64 kvm_get_arch_capabilities(void)
+ 	if (!boot_cpu_has_bug(X86_BUG_MDS))
+ 		data |= ARCH_CAP_MDS_NO;
+ 
+-	/*
+-	 * On TAA affected systems:
+-	 *      - nothing to do if TSX is disabled on the host.
+-	 *      - we emulate TSX_CTRL if present on the host.
+-	 *	  This lets the guest use VERW to clear CPU buffers.
+-	 */
+-	if (!boot_cpu_has(X86_FEATURE_RTM))
+-		data &= ~(ARCH_CAP_TAA_NO | ARCH_CAP_TSX_CTRL_MSR);
+-	else if (!boot_cpu_has_bug(X86_BUG_TAA))
++	if (!boot_cpu_has(X86_FEATURE_RTM)) {
++		/*
++		 * If RTM=0 because the kernel has disabled TSX, the host might
++		 * have TAA_NO or TSX_CTRL.  Clear TAA_NO (the guest sees RTM=0
++		 * and therefore knows that there cannot be TAA) but keep
++		 * TSX_CTRL: some buggy userspaces leave it set on tsx=on hosts,
++		 * and we want to allow migrating those guests to tsx=off hosts.
++		 */
++		data &= ~ARCH_CAP_TAA_NO;
++	} else if (!boot_cpu_has_bug(X86_BUG_TAA)) {
+ 		data |= ARCH_CAP_TAA_NO;
++	} else {
++		/*
++		 * Nothing to do here; we emulate TSX_CTRL if present on the
++		 * host so the guest can choose between disabling TSX or
++		 * using VERW to clear CPU buffers.
++		 */
++	}
+ 
+ 	return data;
+ }
+@@ -9907,6 +9915,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+ 	fx_init(vcpu);
+ 
+ 	vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);
++	vcpu->arch.cr3_lm_rsvd_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63);
+ 
+ 	vcpu->arch.pat = MSR_IA32_CR_PAT_DEFAULT;
+ 
+diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
+index bc0833713be95..f80d10d39cf6d 100644
+--- a/arch/x86/mm/mem_encrypt.c
++++ b/arch/x86/mm/mem_encrypt.c
+@@ -351,6 +351,7 @@ bool sev_active(void)
+ {
+ 	return sev_status & MSR_AMD64_SEV_ENABLED;
+ }
++EXPORT_SYMBOL_GPL(sev_active);
+ 
+ /* Needs to be called from non-instrumentable code */
+ bool noinstr sev_es_active(void)
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 4ad3c4b276dcf..7e17d4edccb12 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -602,7 +602,11 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
+ 		ret = gdev->id;
+ 		goto err_free_gdev;
+ 	}
+-	dev_set_name(&gdev->dev, GPIOCHIP_NAME "%d", gdev->id);
++
++	ret = dev_set_name(&gdev->dev, GPIOCHIP_NAME "%d", gdev->id);
++	if (ret)
++		goto err_free_ida;
++
+ 	device_initialize(&gdev->dev);
+ 	dev_set_drvdata(&gdev->dev, gdev);
+ 	if (gc->parent && gc->parent->driver)
+@@ -616,7 +620,7 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
+ 	gdev->descs = kcalloc(gc->ngpio, sizeof(gdev->descs[0]), GFP_KERNEL);
+ 	if (!gdev->descs) {
+ 		ret = -ENOMEM;
+-		goto err_free_ida;
++		goto err_free_dev_name;
+ 	}
+ 
+ 	if (gc->ngpio == 0) {
+@@ -767,6 +771,8 @@ err_free_label:
+ 	kfree_const(gdev->label);
+ err_free_descs:
+ 	kfree(gdev->descs);
++err_free_dev_name:
++	kfree(dev_name(&gdev->dev));
+ err_free_ida:
+ 	ida_free(&gpio_ida, gdev->id);
+ err_free_gdev:
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0f7749e9424d4..580880212e551 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2278,8 +2278,6 @@ void amdgpu_dm_update_connector_after_detect(
+ 
+ 			drm_connector_update_edid_property(connector,
+ 							   aconnector->edid);
+-			drm_add_edid_modes(connector, aconnector->edid);
+-
+ 			if (aconnector->dc_link->aux_mode)
+ 				drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
+ 						    aconnector->edid);
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index e875425336406..7749b0ceabba9 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -3629,14 +3629,26 @@ static int drm_dp_send_up_ack_reply(struct drm_dp_mst_topology_mgr *mgr,
+ 	return 0;
+ }
+ 
+-static int drm_dp_get_vc_payload_bw(u8 dp_link_bw, u8  dp_link_count)
++/**
++ * drm_dp_get_vc_payload_bw - get the VC payload BW for an MST link
++ * @link_rate: link rate in 10kbits/s units
++ * @link_lane_count: lane count
++ *
++ * Calculate the total bandwidth of a MultiStream Transport link. The returned
++ * value is in units of PBNs/(timeslots/1 MTP). This value can be used to
++ * convert the number of PBNs required for a given stream to the number of
++ * timeslots this stream requires in each MTP.
++ */
++int drm_dp_get_vc_payload_bw(int link_rate, int link_lane_count)
+ {
+-	if (dp_link_bw == 0 || dp_link_count == 0)
+-		DRM_DEBUG_KMS("invalid link bandwidth in DPCD: %x (link count: %d)\n",
+-			      dp_link_bw, dp_link_count);
++	if (link_rate == 0 || link_lane_count == 0)
++		DRM_DEBUG_KMS("invalid link rate/lane count: (%d / %d)\n",
++			      link_rate, link_lane_count);
+ 
+-	return dp_link_bw * dp_link_count / 2;
++	/* See DP v2.0 2.6.4.2, VCPayload_Bandwidth_for_OneTimeSlotPer_MTP_Allocation */
++	return link_rate * link_lane_count / 54000;
+ }
++EXPORT_SYMBOL(drm_dp_get_vc_payload_bw);
+ 
+ /**
+  * drm_dp_read_mst_cap() - check whether or not a sink supports MST
+@@ -3692,7 +3704,7 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
+ 			goto out_unlock;
+ 		}
+ 
+-		mgr->pbn_div = drm_dp_get_vc_payload_bw(mgr->dpcd[1],
++		mgr->pbn_div = drm_dp_get_vc_payload_bw(drm_dp_bw_code_to_link_rate(mgr->dpcd[1]),
+ 							mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK);
+ 		if (mgr->pbn_div == 0) {
+ 			ret = -EINVAL;
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 3f2bbd9370a86..40dfb4d0ffbec 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -3274,6 +3274,23 @@ static void intel_ddi_disable_fec_state(struct intel_encoder *encoder,
+ 	intel_de_posting_read(dev_priv, intel_dp->regs.dp_tp_ctl);
+ }
+ 
++static void intel_ddi_power_up_lanes(struct intel_encoder *encoder,
++				     const struct intel_crtc_state *crtc_state)
++{
++	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
++	struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
++	enum phy phy = intel_port_to_phy(i915, encoder->port);
++
++	if (intel_phy_is_combo(i915, phy)) {
++		bool lane_reversal =
++			dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL;
++
++		intel_combo_phy_power_up_lanes(i915, phy, false,
++					       crtc_state->lane_count,
++					       lane_reversal);
++	}
++}
++
+ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state,
+ 				  struct intel_encoder *encoder,
+ 				  const struct intel_crtc_state *crtc_state,
+@@ -3367,14 +3384,7 @@ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state,
+ 	 * 7.f Combo PHY: Configure PORT_CL_DW10 Static Power Down to power up
+ 	 * the used lanes of the DDI.
+ 	 */
+-	if (intel_phy_is_combo(dev_priv, phy)) {
+-		bool lane_reversal =
+-			dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL;
+-
+-		intel_combo_phy_power_up_lanes(dev_priv, phy, false,
+-					       crtc_state->lane_count,
+-					       lane_reversal);
+-	}
++	intel_ddi_power_up_lanes(encoder, crtc_state);
+ 
+ 	/*
+ 	 * 7.g Configure and enable DDI_BUF_CTL
+@@ -3458,14 +3468,7 @@ static void hsw_ddi_pre_enable_dp(struct intel_atomic_state *state,
+ 	else
+ 		intel_prepare_dp_ddi_buffers(encoder, crtc_state);
+ 
+-	if (intel_phy_is_combo(dev_priv, phy)) {
+-		bool lane_reversal =
+-			dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL;
+-
+-		intel_combo_phy_power_up_lanes(dev_priv, phy, false,
+-					       crtc_state->lane_count,
+-					       lane_reversal);
+-	}
++	intel_ddi_power_up_lanes(encoder, crtc_state);
+ 
+ 	intel_ddi_init_dp_buf_reg(encoder);
+ 	if (!is_mst)
+@@ -3933,6 +3936,8 @@ static void intel_enable_ddi_hdmi(struct intel_atomic_state *state,
+ 		intel_de_write(dev_priv, reg, val);
+ 	}
+ 
++	intel_ddi_power_up_lanes(encoder, crtc_state);
++
+ 	/* In HDMI/DVI mode, the port width, and swing/emphasis values
+ 	 * are ignored so nothing special needs to be done besides
+ 	 * enabling the port.
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index aabf09f89cada..45c2556d63955 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -2294,7 +2294,7 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
+ 		 */
+ 		ret = i915_vma_pin_fence(vma);
+ 		if (ret != 0 && INTEL_GEN(dev_priv) < 4) {
+-			i915_gem_object_unpin_from_display_plane(vma);
++			i915_vma_unpin(vma);
+ 			vma = ERR_PTR(ret);
+ 			goto err;
+ 		}
+@@ -2312,12 +2312,9 @@ err:
+ 
+ void intel_unpin_fb_vma(struct i915_vma *vma, unsigned long flags)
+ {
+-	i915_gem_object_lock(vma->obj, NULL);
+ 	if (flags & PLANE_HAS_FENCE)
+ 		i915_vma_unpin_fence(vma);
+-	i915_gem_object_unpin_from_display_plane(vma);
+-	i915_gem_object_unlock(vma->obj);
+-
++	i915_vma_unpin(vma);
+ 	i915_vma_put(vma);
+ }
+ 
+@@ -4883,6 +4880,8 @@ u32 glk_plane_color_ctl(const struct intel_crtc_state *crtc_state,
+ 			plane_color_ctl |= PLANE_COLOR_YUV_RANGE_CORRECTION_DISABLE;
+ 	} else if (fb->format->is_yuv) {
+ 		plane_color_ctl |= PLANE_COLOR_INPUT_CSC_ENABLE;
++		if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
++			plane_color_ctl |= PLANE_COLOR_YUV_RANGE_CORRECTION_DISABLE;
+ 	}
+ 
+ 	return plane_color_ctl;
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+index 5d745d9b99b2a..ecaa538b2d357 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+@@ -68,7 +68,9 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
+ 
+ 		slots = drm_dp_atomic_find_vcpi_slots(state, &intel_dp->mst_mgr,
+ 						      connector->port,
+-						      crtc_state->pbn, 0);
++						      crtc_state->pbn,
++						      drm_dp_get_vc_payload_bw(crtc_state->port_clock,
++									       crtc_state->lane_count));
+ 		if (slots == -EDEADLK)
+ 			return slots;
+ 		if (slots >= 0)
+diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
+index 52b4f6193b4ce..0095c8cac9b40 100644
+--- a/drivers/gpu/drm/i915/display/intel_overlay.c
++++ b/drivers/gpu/drm/i915/display/intel_overlay.c
+@@ -359,7 +359,7 @@ static void intel_overlay_release_old_vma(struct intel_overlay *overlay)
+ 	intel_frontbuffer_flip_complete(overlay->i915,
+ 					INTEL_FRONTBUFFER_OVERLAY(overlay->crtc->pipe));
+ 
+-	i915_gem_object_unpin_from_display_plane(vma);
++	i915_vma_unpin(vma);
+ 	i915_vma_put(vma);
+ }
+ 
+@@ -860,7 +860,7 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay,
+ 	return 0;
+ 
+ out_unpin:
+-	i915_gem_object_unpin_from_display_plane(vma);
++	i915_vma_unpin(vma);
+ out_pin_section:
+ 	atomic_dec(&dev_priv->gpu_error.pending_fb_pin);
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_sprite.c b/drivers/gpu/drm/i915/display/intel_sprite.c
+index 63040cb0d4e10..12f7128b777f6 100644
+--- a/drivers/gpu/drm/i915/display/intel_sprite.c
++++ b/drivers/gpu/drm/i915/display/intel_sprite.c
+@@ -469,13 +469,19 @@ skl_program_scaler(struct intel_plane *plane,
+ 
+ /* Preoffset values for YUV to RGB Conversion */
+ #define PREOFF_YUV_TO_RGB_HI		0x1800
+-#define PREOFF_YUV_TO_RGB_ME		0x1F00
++#define PREOFF_YUV_TO_RGB_ME		0x0000
+ #define PREOFF_YUV_TO_RGB_LO		0x1800
+ 
+ #define  ROFF(x)          (((x) & 0xffff) << 16)
+ #define  GOFF(x)          (((x) & 0xffff) << 0)
+ #define  BOFF(x)          (((x) & 0xffff) << 16)
+ 
++/*
++ * Programs the input color space conversion stage for ICL HDR planes.
++ * Note that it is assumed that this stage always happens after YUV
++ * range correction. Thus, the input to this stage is assumed to be
++ * in full-range YCbCr.
++ */
+ static void
+ icl_program_input_csc(struct intel_plane *plane,
+ 		      const struct intel_crtc_state *crtc_state,
+@@ -523,52 +529,7 @@ icl_program_input_csc(struct intel_plane *plane,
+ 			0x0, 0x7800, 0x7F10,
+ 		},
+ 	};
+-
+-	/* Matrix for Limited Range to Full Range Conversion */
+-	static const u16 input_csc_matrix_lr[][9] = {
+-		/*
+-		 * BT.601 Limted range YCbCr -> full range RGB
+-		 * The matrix required is :
+-		 * [1.164384, 0.000, 1.596027,
+-		 *  1.164384, -0.39175, -0.812813,
+-		 *  1.164384, 2.017232, 0.0000]
+-		 */
+-		[DRM_COLOR_YCBCR_BT601] = {
+-			0x7CC8, 0x7950, 0x0,
+-			0x8D00, 0x7950, 0x9C88,
+-			0x0, 0x7950, 0x6810,
+-		},
+-		/*
+-		 * BT.709 Limited range YCbCr -> full range RGB
+-		 * The matrix required is :
+-		 * [1.164384, 0.000, 1.792741,
+-		 *  1.164384, -0.213249, -0.532909,
+-		 *  1.164384, 2.112402, 0.0000]
+-		 */
+-		[DRM_COLOR_YCBCR_BT709] = {
+-			0x7E58, 0x7950, 0x0,
+-			0x8888, 0x7950, 0xADA8,
+-			0x0, 0x7950,  0x6870,
+-		},
+-		/*
+-		 * BT.2020 Limited range YCbCr -> full range RGB
+-		 * The matrix required is :
+-		 * [1.164, 0.000, 1.678,
+-		 *  1.164, -0.1873, -0.6504,
+-		 *  1.164, 2.1417, 0.0000]
+-		 */
+-		[DRM_COLOR_YCBCR_BT2020] = {
+-			0x7D70, 0x7950, 0x0,
+-			0x8A68, 0x7950, 0xAC00,
+-			0x0, 0x7950, 0x6890,
+-		},
+-	};
+-	const u16 *csc;
+-
+-	if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
+-		csc = input_csc_matrix[plane_state->hw.color_encoding];
+-	else
+-		csc = input_csc_matrix_lr[plane_state->hw.color_encoding];
++	const u16 *csc = input_csc_matrix[plane_state->hw.color_encoding];
+ 
+ 	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 0),
+ 			  ROFF(csc[0]) | GOFF(csc[1]));
+@@ -585,14 +546,8 @@ icl_program_input_csc(struct intel_plane *plane,
+ 
+ 	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0),
+ 			  PREOFF_YUV_TO_RGB_HI);
+-	if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
+-		intel_de_write_fw(dev_priv,
+-				  PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
+-				  0);
+-	else
+-		intel_de_write_fw(dev_priv,
+-				  PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
+-				  PREOFF_YUV_TO_RGB_ME);
++	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
++			  PREOFF_YUV_TO_RGB_ME);
+ 	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 2),
+ 			  PREOFF_YUV_TO_RGB_LO);
+ 	intel_de_write_fw(dev_priv,
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+index fcce6909f2017..3d435bfff7649 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+@@ -387,48 +387,6 @@ err:
+ 	return vma;
+ }
+ 
+-static void i915_gem_object_bump_inactive_ggtt(struct drm_i915_gem_object *obj)
+-{
+-	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+-	struct i915_vma *vma;
+-
+-	if (list_empty(&obj->vma.list))
+-		return;
+-
+-	mutex_lock(&i915->ggtt.vm.mutex);
+-	spin_lock(&obj->vma.lock);
+-	for_each_ggtt_vma(vma, obj) {
+-		if (!drm_mm_node_allocated(&vma->node))
+-			continue;
+-
+-		GEM_BUG_ON(vma->vm != &i915->ggtt.vm);
+-		list_move_tail(&vma->vm_link, &vma->vm->bound_list);
+-	}
+-	spin_unlock(&obj->vma.lock);
+-	mutex_unlock(&i915->ggtt.vm.mutex);
+-
+-	if (i915_gem_object_is_shrinkable(obj)) {
+-		unsigned long flags;
+-
+-		spin_lock_irqsave(&i915->mm.obj_lock, flags);
+-
+-		if (obj->mm.madv == I915_MADV_WILLNEED &&
+-		    !atomic_read(&obj->mm.shrink_pin))
+-			list_move_tail(&obj->mm.link, &i915->mm.shrink_list);
+-
+-		spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
+-	}
+-}
+-
+-void
+-i915_gem_object_unpin_from_display_plane(struct i915_vma *vma)
+-{
+-	/* Bump the LRU to try and avoid premature eviction whilst flipping  */
+-	i915_gem_object_bump_inactive_ggtt(vma->obj);
+-
+-	i915_vma_unpin(vma);
+-}
+-
+ /**
+  * Moves a single object to the CPU read, and possibly write domain.
+  * @obj: object to act on
+@@ -569,9 +527,6 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
+ 	else
+ 		err = i915_gem_object_set_to_cpu_domain(obj, write_domain);
+ 
+-	/* And bump the LRU for this access */
+-	i915_gem_object_bump_inactive_ggtt(obj);
+-
+ 	i915_gem_object_unlock(obj);
+ 
+ 	if (write_domain)
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
+index d46db8d8f38e4..bc48717971204 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
++++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
+@@ -471,7 +471,6 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
+ 				     u32 alignment,
+ 				     const struct i915_ggtt_view *view,
+ 				     unsigned int flags);
+-void i915_gem_object_unpin_from_display_plane(struct i915_vma *vma);
+ 
+ void i915_gem_object_make_unshrinkable(struct drm_i915_gem_object *obj);
+ void i915_gem_object_make_shrinkable(struct drm_i915_gem_object *obj);
+diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+index 0625cbb3b4312..0040b4765a54d 100644
+--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
++++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+@@ -451,10 +451,12 @@ void i915_request_cancel_breadcrumb(struct i915_request *rq)
+ 	struct intel_context *ce = rq->context;
+ 	bool release;
+ 
+-	if (!test_and_clear_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags))
++	spin_lock(&ce->signal_lock);
++	if (!test_and_clear_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags)) {
++		spin_unlock(&ce->signal_lock);
+ 		return;
++	}
+ 
+-	spin_lock(&ce->signal_lock);
+ 	list_del_rcu(&rq->signal_link);
+ 	release = remove_signaling_context(rq->engine->breadcrumbs, ce);
+ 	spin_unlock(&ce->signal_lock);
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 8c73377ac82ca..3d004ca76b6ed 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -215,9 +215,17 @@ static const struct xpad_device {
+ 	{ 0x0e6f, 0x0213, "Afterglow Gamepad for Xbox 360", 0, XTYPE_XBOX360 },
+ 	{ 0x0e6f, 0x021f, "Rock Candy Gamepad for Xbox 360", 0, XTYPE_XBOX360 },
+ 	{ 0x0e6f, 0x0246, "Rock Candy Gamepad for Xbox One 2015", 0, XTYPE_XBOXONE },
+-	{ 0x0e6f, 0x02ab, "PDP Controller for Xbox One", 0, XTYPE_XBOXONE },
++	{ 0x0e6f, 0x02a0, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
++	{ 0x0e6f, 0x02a1, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
++	{ 0x0e6f, 0x02a2, "PDP Wired Controller for Xbox One - Crimson Red", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x02a4, "PDP Wired Controller for Xbox One - Stealth Series", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x02a6, "PDP Wired Controller for Xbox One - Camo Series", 0, XTYPE_XBOXONE },
++	{ 0x0e6f, 0x02a7, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
++	{ 0x0e6f, 0x02a8, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
++	{ 0x0e6f, 0x02ab, "PDP Controller for Xbox One", 0, XTYPE_XBOXONE },
++	{ 0x0e6f, 0x02ad, "PDP Wired Controller for Xbox One - Stealth Series", 0, XTYPE_XBOXONE },
++	{ 0x0e6f, 0x02b3, "Afterglow Prismatic Wired Controller", 0, XTYPE_XBOXONE },
++	{ 0x0e6f, 0x02b8, "Afterglow Prismatic Wired Controller", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x0301, "Logic3 Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x0e6f, 0x0346, "Rock Candy Gamepad for Xbox One 2016", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x0401, "Logic3 Controller", 0, XTYPE_XBOX360 },
+@@ -296,6 +304,9 @@ static const struct xpad_device {
+ 	{ 0x1bad, 0xfa01, "MadCatz GamePad", 0, XTYPE_XBOX360 },
+ 	{ 0x1bad, 0xfd00, "Razer Onza TE", 0, XTYPE_XBOX360 },
+ 	{ 0x1bad, 0xfd01, "Razer Onza", 0, XTYPE_XBOX360 },
++	{ 0x20d6, 0x2001, "BDA Xbox Series X Wired Controller", 0, XTYPE_XBOXONE },
++	{ 0x20d6, 0x281f, "PowerA Wired Controller For Xbox 360", 0, XTYPE_XBOX360 },
++	{ 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE },
+ 	{ 0x24c6, 0x5000, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0x5300, "PowerA MINI PROEX Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0x5303, "Xbox Airflo wired controller", 0, XTYPE_XBOX360 },
+@@ -429,8 +440,12 @@ static const struct usb_device_id xpad_table[] = {
+ 	XPAD_XBOX360_VENDOR(0x162e),		/* Joytech X-Box 360 controllers */
+ 	XPAD_XBOX360_VENDOR(0x1689),		/* Razer Onza */
+ 	XPAD_XBOX360_VENDOR(0x1bad),		/* Harminix Rock Band Guitar and Drums */
++	XPAD_XBOX360_VENDOR(0x20d6),		/* PowerA Controllers */
++	XPAD_XBOXONE_VENDOR(0x20d6),		/* PowerA Controllers */
+ 	XPAD_XBOX360_VENDOR(0x24c6),		/* PowerA Controllers */
+ 	XPAD_XBOXONE_VENDOR(0x24c6),		/* PowerA Controllers */
++	XPAD_XBOXONE_VENDOR(0x2e24),		/* Hyperkin Duke X-Box One pad */
++	XPAD_XBOX360_VENDOR(0x2f24),		/* GameSir Controllers */
+ 	{ }
+ };
+ 
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index 3a2dcf0805f12..c74b020796a94 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -219,6 +219,8 @@ static const struct dmi_system_id __initconst i8042_dmi_noloop_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "PEGATRON CORPORATION"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "C15B"),
+ 		},
++	},
++	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "ByteSpeed LLC"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "ByteSpeed Laptop C15B"),
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index 6612f9e2d7e83..45113767db964 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -157,6 +157,7 @@ static const struct goodix_chip_id goodix_chip_ids[] = {
+ 	{ .id = "5663", .data = &gt1x_chip_data },
+ 	{ .id = "5688", .data = &gt1x_chip_data },
+ 	{ .id = "917S", .data = &gt1x_chip_data },
++	{ .id = "9286", .data = &gt1x_chip_data },
+ 
+ 	{ .id = "911", .data = &gt911_chip_data },
+ 	{ .id = "9271", .data = &gt911_chip_data },
+@@ -1445,6 +1446,7 @@ static const struct of_device_id goodix_of_match[] = {
+ 	{ .compatible = "goodix,gt927" },
+ 	{ .compatible = "goodix,gt9271" },
+ 	{ .compatible = "goodix,gt928" },
++	{ .compatible = "goodix,gt9286" },
+ 	{ .compatible = "goodix,gt967" },
+ 	{ }
+ };
+diff --git a/drivers/input/touchscreen/ili210x.c b/drivers/input/touchscreen/ili210x.c
+index 199cf3daec106..d8fccf048bf44 100644
+--- a/drivers/input/touchscreen/ili210x.c
++++ b/drivers/input/touchscreen/ili210x.c
+@@ -29,11 +29,13 @@ struct ili2xxx_chip {
+ 			void *buf, size_t len);
+ 	int (*get_touch_data)(struct i2c_client *client, u8 *data);
+ 	bool (*parse_touch_data)(const u8 *data, unsigned int finger,
+-				 unsigned int *x, unsigned int *y);
++				 unsigned int *x, unsigned int *y,
++				 unsigned int *z);
+ 	bool (*continue_polling)(const u8 *data, bool touch);
+ 	unsigned int max_touches;
+ 	unsigned int resolution;
+ 	bool has_calibrate_reg;
++	bool has_pressure_reg;
+ };
+ 
+ struct ili210x {
+@@ -82,7 +84,8 @@ static int ili210x_read_touch_data(struct i2c_client *client, u8 *data)
+ 
+ static bool ili210x_touchdata_to_coords(const u8 *touchdata,
+ 					unsigned int finger,
+-					unsigned int *x, unsigned int *y)
++					unsigned int *x, unsigned int *y,
++					unsigned int *z)
+ {
+ 	if (touchdata[0] & BIT(finger))
+ 		return false;
+@@ -137,7 +140,8 @@ static int ili211x_read_touch_data(struct i2c_client *client, u8 *data)
+ 
+ static bool ili211x_touchdata_to_coords(const u8 *touchdata,
+ 					unsigned int finger,
+-					unsigned int *x, unsigned int *y)
++					unsigned int *x, unsigned int *y,
++					unsigned int *z)
+ {
+ 	u32 data;
+ 
+@@ -169,7 +173,8 @@ static const struct ili2xxx_chip ili211x_chip = {
+ 
+ static bool ili212x_touchdata_to_coords(const u8 *touchdata,
+ 					unsigned int finger,
+-					unsigned int *x, unsigned int *y)
++					unsigned int *x, unsigned int *y,
++					unsigned int *z)
+ {
+ 	u16 val;
+ 
+@@ -235,7 +240,8 @@ static int ili251x_read_touch_data(struct i2c_client *client, u8 *data)
+ 
+ static bool ili251x_touchdata_to_coords(const u8 *touchdata,
+ 					unsigned int finger,
+-					unsigned int *x, unsigned int *y)
++					unsigned int *x, unsigned int *y,
++					unsigned int *z)
+ {
+ 	u16 val;
+ 
+@@ -245,6 +251,7 @@ static bool ili251x_touchdata_to_coords(const u8 *touchdata,
+ 
+ 	*x = val & 0x3fff;
+ 	*y = get_unaligned_be16(touchdata + 1 + (finger * 5) + 2);
++	*z = touchdata[1 + (finger * 5) + 4];
+ 
+ 	return true;
+ }
+@@ -261,6 +268,7 @@ static const struct ili2xxx_chip ili251x_chip = {
+ 	.continue_polling	= ili251x_check_continue_polling,
+ 	.max_touches		= 10,
+ 	.has_calibrate_reg	= true,
++	.has_pressure_reg	= true,
+ };
+ 
+ static bool ili210x_report_events(struct ili210x *priv, u8 *touchdata)
+@@ -268,14 +276,16 @@ static bool ili210x_report_events(struct ili210x *priv, u8 *touchdata)
+ 	struct input_dev *input = priv->input;
+ 	int i;
+ 	bool contact = false, touch;
+-	unsigned int x = 0, y = 0;
++	unsigned int x = 0, y = 0, z = 0;
+ 
+ 	for (i = 0; i < priv->chip->max_touches; i++) {
+-		touch = priv->chip->parse_touch_data(touchdata, i, &x, &y);
++		touch = priv->chip->parse_touch_data(touchdata, i, &x, &y, &z);
+ 
+ 		input_mt_slot(input, i);
+ 		if (input_mt_report_slot_state(input, MT_TOOL_FINGER, touch)) {
+ 			touchscreen_report_pos(input, &priv->prop, x, y, true);
++			if (priv->chip->has_pressure_reg)
++				input_report_abs(input, ABS_MT_PRESSURE, z);
+ 			contact = true;
+ 		}
+ 	}
+@@ -437,6 +447,8 @@ static int ili210x_i2c_probe(struct i2c_client *client,
+ 	max_xy = (chip->resolution ?: SZ_64K) - 1;
+ 	input_set_abs_params(input, ABS_MT_POSITION_X, 0, max_xy, 0, 0);
+ 	input_set_abs_params(input, ABS_MT_POSITION_Y, 0, max_xy, 0, 0);
++	if (priv->chip->has_pressure_reg)
++		input_set_abs_params(input, ABS_MT_PRESSURE, 0, 0xa, 0, 0);
+ 	touchscreen_parse_properties(input, true, &priv->prop);
+ 
+ 	error = input_mt_init_slots(input, priv->chip->max_touches,
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 3be74cf3635fe..7a0a228d64bbe 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -639,8 +639,10 @@ static void md_submit_flush_data(struct work_struct *ws)
+ 	 * could wait for this and below md_handle_request could wait for those
+ 	 * bios because of suspend check
+ 	 */
++	spin_lock_irq(&mddev->lock);
+ 	mddev->last_flush = mddev->start_flush;
+ 	mddev->flush_bio = NULL;
++	spin_unlock_irq(&mddev->lock);
+ 	wake_up(&mddev->sb_wait);
+ 
+ 	if (bio->bi_iter.bi_size == 0) {
+diff --git a/drivers/mmc/core/sdio_cis.c b/drivers/mmc/core/sdio_cis.c
+index 44bea5e4aeda1..b23773583179d 100644
+--- a/drivers/mmc/core/sdio_cis.c
++++ b/drivers/mmc/core/sdio_cis.c
+@@ -20,6 +20,8 @@
+ #include "sdio_cis.h"
+ #include "sdio_ops.h"
+ 
++#define SDIO_READ_CIS_TIMEOUT_MS  (10 * 1000) /* 10s */
++
+ static int cistpl_vers_1(struct mmc_card *card, struct sdio_func *func,
+ 			 const unsigned char *buf, unsigned size)
+ {
+@@ -274,6 +276,8 @@ static int sdio_read_cis(struct mmc_card *card, struct sdio_func *func)
+ 
+ 	do {
+ 		unsigned char tpl_code, tpl_link;
++		unsigned long timeout = jiffies +
++			msecs_to_jiffies(SDIO_READ_CIS_TIMEOUT_MS);
+ 
+ 		ret = mmc_io_rw_direct(card, 0, 0, ptr++, 0, &tpl_code);
+ 		if (ret)
+@@ -326,6 +330,8 @@ static int sdio_read_cis(struct mmc_card *card, struct sdio_func *func)
+ 			prev = &this->next;
+ 
+ 			if (ret == -ENOENT) {
++				if (time_after(jiffies, timeout))
++					break;
+ 				/* warn about unknown tuples */
+ 				pr_warn_ratelimited("%s: queuing unknown"
+ 				       " CIS tuple 0x%02x (%u bytes)\n",
+diff --git a/drivers/mmc/host/sdhci-pltfm.h b/drivers/mmc/host/sdhci-pltfm.h
+index 6301b81cf5731..9bd717ff784be 100644
+--- a/drivers/mmc/host/sdhci-pltfm.h
++++ b/drivers/mmc/host/sdhci-pltfm.h
+@@ -111,8 +111,13 @@ static inline void *sdhci_pltfm_priv(struct sdhci_pltfm_host *host)
+ 	return host->private;
+ }
+ 
++extern const struct dev_pm_ops sdhci_pltfm_pmops;
++#ifdef CONFIG_PM_SLEEP
+ int sdhci_pltfm_suspend(struct device *dev);
+ int sdhci_pltfm_resume(struct device *dev);
+-extern const struct dev_pm_ops sdhci_pltfm_pmops;
++#else
++static inline int sdhci_pltfm_suspend(struct device *dev) { return 0; }
++static inline int sdhci_pltfm_resume(struct device *dev) { return 0; }
++#endif
+ 
+ #endif /* _DRIVERS_MMC_SDHCI_PLTFM_H */
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 34cca0a4b31c7..87160e723dfcf 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -1669,7 +1669,11 @@ static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port,
+ 		if (!entry.portvec)
+ 			entry.state = 0;
+ 	} else {
+-		entry.portvec |= BIT(port);
++		if (state == MV88E6XXX_G1_ATU_DATA_STATE_UC_STATIC)
++			entry.portvec = BIT(port);
++		else
++			entry.portvec |= BIT(port);
++
+ 		entry.state = state;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 627ce1a20473a..2f281d0f98070 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -5339,11 +5339,6 @@ static int ibmvnic_remove(struct vio_dev *dev)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&adapter->state_lock, flags);
+-	if (test_bit(0, &adapter->resetting)) {
+-		spin_unlock_irqrestore(&adapter->state_lock, flags);
+-		return -EBUSY;
+-	}
+-
+ 	adapter->state = VNIC_REMOVING;
+ 	spin_unlock_irqrestore(&adapter->state_lock, flags);
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 2872c4dc77f07..3b269c70dcfe1 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -55,12 +55,7 @@ static void i40e_vc_notify_vf_link_state(struct i40e_vf *vf)
+ 
+ 	pfe.event = VIRTCHNL_EVENT_LINK_CHANGE;
+ 	pfe.severity = PF_EVENT_SEVERITY_INFO;
+-
+-	/* Always report link is down if the VF queues aren't enabled */
+-	if (!vf->queues_enabled) {
+-		pfe.event_data.link_event.link_status = false;
+-		pfe.event_data.link_event.link_speed = 0;
+-	} else if (vf->link_forced) {
++	if (vf->link_forced) {
+ 		pfe.event_data.link_event.link_status = vf->link_up;
+ 		pfe.event_data.link_event.link_speed =
+ 			(vf->link_up ? VIRTCHNL_LINK_SPEED_40GB : 0);
+@@ -70,7 +65,6 @@ static void i40e_vc_notify_vf_link_state(struct i40e_vf *vf)
+ 		pfe.event_data.link_event.link_speed =
+ 			i40e_virtchnl_link_speed(ls->link_speed);
+ 	}
+-
+ 	i40e_aq_send_msg_to_vf(hw, abs_vf_id, VIRTCHNL_OP_EVENT,
+ 			       0, (u8 *)&pfe, sizeof(pfe), NULL);
+ }
+@@ -2443,8 +2437,6 @@ static int i40e_vc_enable_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 		}
+ 	}
+ 
+-	vf->queues_enabled = true;
+-
+ error_param:
+ 	/* send the response to the VF */
+ 	return i40e_vc_send_resp_to_vf(vf, VIRTCHNL_OP_ENABLE_QUEUES,
+@@ -2466,9 +2458,6 @@ static int i40e_vc_disable_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 	struct i40e_pf *pf = vf->pf;
+ 	i40e_status aq_ret = 0;
+ 
+-	/* Immediately mark queues as disabled */
+-	vf->queues_enabled = false;
+-
+ 	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto error_param;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index 5491215d81deb..091e32c1bb46f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -98,7 +98,6 @@ struct i40e_vf {
+ 	unsigned int tx_rate;	/* Tx bandwidth limit in Mbps */
+ 	bool link_forced;
+ 	bool link_up;		/* only valid if VF link is forced */
+-	bool queues_enabled;	/* true if the VF queues are enabled */
+ 	bool spoofchk;
+ 	u16 num_vlan;
+ 
+diff --git a/drivers/net/ethernet/intel/igc/igc_ethtool.c b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+index 831f2f09de5fb..ec8cd69d49928 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ethtool.c
++++ b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+@@ -1714,7 +1714,8 @@ static int igc_ethtool_get_link_ksettings(struct net_device *netdev,
+ 						     Asym_Pause);
+ 	}
+ 
+-	status = rd32(IGC_STATUS);
++	status = pm_runtime_suspended(&adapter->pdev->dev) ?
++		 0 : rd32(IGC_STATUS);
+ 
+ 	if (status & IGC_STATUS_LU) {
+ 		if (status & IGC_STATUS_SPEED_1000) {
+diff --git a/drivers/net/ethernet/intel/igc/igc_i225.c b/drivers/net/ethernet/intel/igc/igc_i225.c
+index 8b67d9b49a83a..7ec04e48860c6 100644
+--- a/drivers/net/ethernet/intel/igc/igc_i225.c
++++ b/drivers/net/ethernet/intel/igc/igc_i225.c
+@@ -219,9 +219,9 @@ static s32 igc_write_nvm_srwr(struct igc_hw *hw, u16 offset, u16 words,
+ 			      u16 *data)
+ {
+ 	struct igc_nvm_info *nvm = &hw->nvm;
++	s32 ret_val = -IGC_ERR_NVM;
+ 	u32 attempts = 100000;
+ 	u32 i, k, eewr = 0;
+-	s32 ret_val = 0;
+ 
+ 	/* A check for invalid values:  offset too large, too many words,
+ 	 * too many words for the offset, and not enough words.
+@@ -229,7 +229,6 @@ static s32 igc_write_nvm_srwr(struct igc_hw *hw, u16 offset, u16 words,
+ 	if (offset >= nvm->word_size || (words > (nvm->word_size - offset)) ||
+ 	    words == 0) {
+ 		hw_dbg("nvm parameter(s) out of bounds\n");
+-		ret_val = -IGC_ERR_NVM;
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/igc/igc_mac.c b/drivers/net/ethernet/intel/igc/igc_mac.c
+index 09cd0ec7ee87d..67b8ffd21d8af 100644
+--- a/drivers/net/ethernet/intel/igc/igc_mac.c
++++ b/drivers/net/ethernet/intel/igc/igc_mac.c
+@@ -638,7 +638,7 @@ s32 igc_config_fc_after_link_up(struct igc_hw *hw)
+ 	}
+ 
+ out:
+-	return 0;
++	return ret_val;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+index a30eb90ba3d28..dd590086fe6a5 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+@@ -29,16 +29,16 @@ static int mvpp2_prs_hw_write(struct mvpp2 *priv, struct mvpp2_prs_entry *pe)
+ 	/* Clear entry invalidation bit */
+ 	pe->tcam[MVPP2_PRS_TCAM_INV_WORD] &= ~MVPP2_PRS_TCAM_INV_MASK;
+ 
+-	/* Write tcam index - indirect access */
+-	mvpp2_write(priv, MVPP2_PRS_TCAM_IDX_REG, pe->index);
+-	for (i = 0; i < MVPP2_PRS_TCAM_WORDS; i++)
+-		mvpp2_write(priv, MVPP2_PRS_TCAM_DATA_REG(i), pe->tcam[i]);
+-
+ 	/* Write sram index - indirect access */
+ 	mvpp2_write(priv, MVPP2_PRS_SRAM_IDX_REG, pe->index);
+ 	for (i = 0; i < MVPP2_PRS_SRAM_WORDS; i++)
+ 		mvpp2_write(priv, MVPP2_PRS_SRAM_DATA_REG(i), pe->sram[i]);
+ 
++	/* Write tcam index - indirect access */
++	mvpp2_write(priv, MVPP2_PRS_TCAM_IDX_REG, pe->index);
++	for (i = 0; i < MVPP2_PRS_TCAM_WORDS; i++)
++		mvpp2_write(priv, MVPP2_PRS_TCAM_DATA_REG(i), pe->tcam[i]);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index c9b5d7f29911e..42848db8f8dd6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3593,12 +3593,10 @@ static int mlx5e_setup_tc_mqprio(struct mlx5e_priv *priv,
+ 
+ 	err = mlx5e_safe_switch_channels(priv, &new_channels,
+ 					 mlx5e_num_channels_changed_ctx, NULL);
+-	if (err)
+-		goto out;
+ 
+-	priv->max_opened_tc = max_t(u8, priv->max_opened_tc,
+-				    new_channels.params.num_tc);
+ out:
++	priv->max_opened_tc = max_t(u8, priv->max_opened_tc,
++				    priv->channels.params.num_tc);
+ 	mutex_unlock(&priv->state_lock);
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 6628a0197b4e0..6d2ba8b84187c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1262,8 +1262,10 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+ 	mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+ 
+ 	if (mlx5e_cqe_regb_chain(cqe))
+-		if (!mlx5e_tc_update_skb(cqe, skb))
++		if (!mlx5e_tc_update_skb(cqe, skb)) {
++			dev_kfree_skb_any(skb);
+ 			goto free_wqe;
++		}
+ 
+ 	napi_gro_receive(rq->cq.napi, skb);
+ 
+@@ -1316,8 +1318,10 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+ 	if (rep->vlan && skb_vlan_tag_present(skb))
+ 		skb_vlan_pop(skb);
+ 
+-	if (!mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv))
++	if (!mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv)) {
++		dev_kfree_skb_any(skb);
+ 		goto free_wqe;
++	}
+ 
+ 	napi_gro_receive(rq->cq.napi, skb);
+ 
+@@ -1371,8 +1375,10 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
+ 
+ 	mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+ 
+-	if (!mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv))
++	if (!mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv)) {
++		dev_kfree_skb_any(skb);
+ 		goto mpwrq_cqe_out;
++	}
+ 
+ 	napi_gro_receive(rq->cq.napi, skb);
+ 
+@@ -1528,8 +1534,10 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq
+ 	mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+ 
+ 	if (mlx5e_cqe_regb_chain(cqe))
+-		if (!mlx5e_tc_update_skb(cqe, skb))
++		if (!mlx5e_tc_update_skb(cqe, skb)) {
++			dev_kfree_skb_any(skb);
+ 			goto mpwrq_cqe_out;
++		}
+ 
+ 	napi_gro_receive(rq->cq.napi, skb);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 634c2bfd25be1..79fc5755735fa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1764,6 +1764,7 @@ search_again_locked:
+ 		if (!fte_tmp)
+ 			continue;
+ 		rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte_tmp);
++		/* No error check needed here, because insert_fte() is not called */
+ 		up_write_ref_node(&fte_tmp->node, false);
+ 		tree_put_node(&fte_tmp->node, false);
+ 		kmem_cache_free(steering->ftes_cache, fte);
+@@ -1816,6 +1817,8 @@ skip_search:
+ 		up_write_ref_node(&g->node, false);
+ 		rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
+ 		up_write_ref_node(&fte->node, false);
++		if (IS_ERR(rule))
++			tree_put_node(&fte->node, false);
+ 		return rule;
+ 	}
+ 	rule = ERR_PTR(-ENOENT);
+@@ -1914,6 +1917,8 @@ search_again_locked:
+ 	up_write_ref_node(&g->node, false);
+ 	rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
+ 	up_write_ref_node(&fte->node, false);
++	if (IS_ERR(rule))
++		tree_put_node(&fte->node, false);
+ 	tree_put_node(&g->node, false);
+ 	return rule;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+index a3e0c71831928..a44a2bad5bbb5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+@@ -76,7 +76,7 @@ enum {
+ 
+ static u32 get_function(u16 func_id, bool ec_function)
+ {
+-	return func_id & (ec_function << 16);
++	return (u32)func_id | (ec_function << 16);
+ }
+ 
+ static struct rb_root *page_root_per_function(struct mlx5_core_dev *dev, u32 function)
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 762cabf16157b..75f774347f6d1 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4082,17 +4082,72 @@ err_out:
+ 	return -EIO;
+ }
+ 
+-static bool rtl_test_hw_pad_bug(struct rtl8169_private *tp)
++static bool rtl_skb_is_udp(struct sk_buff *skb)
++{
++	int no = skb_network_offset(skb);
++	struct ipv6hdr *i6h, _i6h;
++	struct iphdr *ih, _ih;
++
++	switch (vlan_get_protocol(skb)) {
++	case htons(ETH_P_IP):
++		ih = skb_header_pointer(skb, no, sizeof(_ih), &_ih);
++		return ih && ih->protocol == IPPROTO_UDP;
++	case htons(ETH_P_IPV6):
++		i6h = skb_header_pointer(skb, no, sizeof(_i6h), &_i6h);
++		return i6h && i6h->nexthdr == IPPROTO_UDP;
++	default:
++		return false;
++	}
++}
++
++#define RTL_MIN_PATCH_LEN	47
++
++/* see rtl8125_get_patch_pad_len() in r8125 vendor driver */
++static unsigned int rtl8125_quirk_udp_padto(struct rtl8169_private *tp,
++					    struct sk_buff *skb)
+ {
++	unsigned int padto = 0, len = skb->len;
++
++	if (rtl_is_8125(tp) && len < 128 + RTL_MIN_PATCH_LEN &&
++	    rtl_skb_is_udp(skb) && skb_transport_header_was_set(skb)) {
++		unsigned int trans_data_len = skb_tail_pointer(skb) -
++					      skb_transport_header(skb);
++
++		if (trans_data_len >= offsetof(struct udphdr, len) &&
++		    trans_data_len < RTL_MIN_PATCH_LEN) {
++			u16 dest = ntohs(udp_hdr(skb)->dest);
++
++			/* dest is a standard PTP port */
++			if (dest == 319 || dest == 320)
++				padto = len + RTL_MIN_PATCH_LEN - trans_data_len;
++		}
++
++		if (trans_data_len < sizeof(struct udphdr))
++			padto = max_t(unsigned int, padto,
++				      len + sizeof(struct udphdr) - trans_data_len);
++	}
++
++	return padto;
++}
++
++static unsigned int rtl_quirk_packet_padto(struct rtl8169_private *tp,
++					   struct sk_buff *skb)
++{
++	unsigned int padto;
++
++	padto = rtl8125_quirk_udp_padto(tp, skb);
++
+ 	switch (tp->mac_version) {
+ 	case RTL_GIGA_MAC_VER_34:
+ 	case RTL_GIGA_MAC_VER_60:
+ 	case RTL_GIGA_MAC_VER_61:
+ 	case RTL_GIGA_MAC_VER_63:
+-		return true;
++		padto = max_t(unsigned int, padto, ETH_ZLEN);
+ 	default:
+-		return false;
++		break;
+ 	}
++
++	return padto;
+ }
+ 
+ static void rtl8169_tso_csum_v1(struct sk_buff *skb, u32 *opts)
+@@ -4164,9 +4219,10 @@ static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp,
+ 
+ 		opts[1] |= transport_offset << TCPHO_SHIFT;
+ 	} else {
+-		if (unlikely(skb->len < ETH_ZLEN && rtl_test_hw_pad_bug(tp)))
+-			/* eth_skb_pad would free the skb on error */
+-			return !__skb_put_padto(skb, ETH_ZLEN, false);
++		unsigned int padto = rtl_quirk_packet_padto(tp, skb);
++
++		/* skb_padto would free the skb on error */
++		return !__skb_put_padto(skb, padto, false);
+ 	}
+ 
+ 	return true;
+@@ -4349,6 +4405,9 @@ static netdev_features_t rtl8169_features_check(struct sk_buff *skb,
+ 		if (skb->len < ETH_ZLEN)
+ 			features &= ~NETIF_F_CSUM_MASK;
+ 
++		if (rtl_quirk_packet_padto(tp, skb))
++			features &= ~NETIF_F_CSUM_MASK;
++
+ 		if (transport_offset > TCPHO_MAX &&
+ 		    rtl_chip_supports_csum_v2(tp))
+ 			features &= ~NETIF_F_CSUM_MASK;
+@@ -4694,10 +4753,10 @@ static int rtl8169_close(struct net_device *dev)
+ 
+ 	cancel_work_sync(&tp->wk.work);
+ 
+-	phy_disconnect(tp->phydev);
+-
+ 	free_irq(pci_irq_vector(pdev, 0), tp);
+ 
++	phy_disconnect(tp->phydev);
++
+ 	dma_free_coherent(&pdev->dev, R8169_RX_RING_BYTES, tp->RxDescArray,
+ 			  tp->RxPhyAddr);
+ 	dma_free_coherent(&pdev->dev, R8169_TX_RING_BYTES, tp->TxDescArray,
+diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
+index 6bfac1efe037c..4a68da7115d19 100644
+--- a/drivers/net/ipa/gsi.c
++++ b/drivers/net/ipa/gsi.c
+@@ -1256,7 +1256,7 @@ static int gsi_ring_alloc(struct gsi *gsi, struct gsi_ring *ring, u32 count)
+ 	/* Hardware requires a 2^n ring size, with alignment equal to size */
+ 	ring->virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
+ 	if (ring->virt && addr % size) {
+-		dma_free_coherent(dev, size, ring->virt, ring->addr);
++		dma_free_coherent(dev, size, ring->virt, addr);
+ 		dev_err(dev, "unable to alloc 0x%zx-aligned ring buffer\n",
+ 			size);
+ 		return -EINVAL;	/* Not a good error value, but distinct */
+diff --git a/drivers/nvdimm/dimm_devs.c b/drivers/nvdimm/dimm_devs.c
+index b59032e0859b7..9d208570d059a 100644
+--- a/drivers/nvdimm/dimm_devs.c
++++ b/drivers/nvdimm/dimm_devs.c
+@@ -335,16 +335,16 @@ static ssize_t state_show(struct device *dev, struct device_attribute *attr,
+ }
+ static DEVICE_ATTR_RO(state);
+ 
+-static ssize_t available_slots_show(struct device *dev,
+-		struct device_attribute *attr, char *buf)
++static ssize_t __available_slots_show(struct nvdimm_drvdata *ndd, char *buf)
+ {
+-	struct nvdimm_drvdata *ndd = dev_get_drvdata(dev);
++	struct device *dev;
+ 	ssize_t rc;
+ 	u32 nfree;
+ 
+ 	if (!ndd)
+ 		return -ENXIO;
+ 
++	dev = ndd->dev;
+ 	nvdimm_bus_lock(dev);
+ 	nfree = nd_label_nfree(ndd);
+ 	if (nfree - 1 > nfree) {
+@@ -356,6 +356,18 @@ static ssize_t available_slots_show(struct device *dev,
+ 	nvdimm_bus_unlock(dev);
+ 	return rc;
+ }
++
++static ssize_t available_slots_show(struct device *dev,
++				    struct device_attribute *attr, char *buf)
++{
++	ssize_t rc;
++
++	nd_device_lock(dev);
++	rc = __available_slots_show(dev_get_drvdata(dev), buf);
++	nd_device_unlock(dev);
++
++	return rc;
++}
+ static DEVICE_ATTR_RO(available_slots);
+ 
+ __weak ssize_t security_show(struct device *dev,
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index 6da67f4d641a2..2403b71b601e9 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -1635,11 +1635,11 @@ static umode_t namespace_visible(struct kobject *kobj,
+ 		return a->mode;
+ 	}
+ 
+-	if (a == &dev_attr_nstype.attr || a == &dev_attr_size.attr
+-			|| a == &dev_attr_holder.attr
+-			|| a == &dev_attr_holder_class.attr
+-			|| a == &dev_attr_force_raw.attr
+-			|| a == &dev_attr_mode.attr)
++	/* base is_namespace_io() attributes */
++	if (a == &dev_attr_nstype.attr || a == &dev_attr_size.attr ||
++	    a == &dev_attr_holder.attr || a == &dev_attr_holder_class.attr ||
++	    a == &dev_attr_force_raw.attr || a == &dev_attr_mode.attr ||
++	    a == &dev_attr_resource.attr)
+ 		return a->mode;
+ 
+ 	return 0;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index a3486c1c27f0c..a32494cde61f7 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3262,6 +3262,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ 	{ PCI_DEVICE(0x15b7, 0x2001),   /*  Sandisk Skyhawk */
+ 		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
++	{ PCI_DEVICE(0x2646, 0x2263),   /* KINGSTON A2000 NVMe SSD  */
++		.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2001),
+ 		.driver_data = NVME_QUIRK_SINGLE_VECTOR },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2003) },
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index dc1f0f6471896..aacf06f0b4312 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -305,7 +305,7 @@ static void nvmet_tcp_map_pdu_iovec(struct nvmet_tcp_cmd *cmd)
+ 	length = cmd->pdu_len;
+ 	cmd->nr_mapped = DIV_ROUND_UP(length, PAGE_SIZE);
+ 	offset = cmd->rbytes_done;
+-	cmd->sg_idx = DIV_ROUND_UP(offset, PAGE_SIZE);
++	cmd->sg_idx = offset / PAGE_SIZE;
+ 	sg_offset = offset % PAGE_SIZE;
+ 	sg = &cmd->req.sg[cmd->sg_idx];
+ 
+@@ -318,6 +318,7 @@ static void nvmet_tcp_map_pdu_iovec(struct nvmet_tcp_cmd *cmd)
+ 		length -= iov_len;
+ 		sg = sg_next(sg);
+ 		iov++;
++		sg_offset = 0;
+ 	}
+ 
+ 	iov_iter_kvec(&cmd->recv_msg.msg_iter, READ, cmd->iov,
+diff --git a/drivers/thunderbolt/acpi.c b/drivers/thunderbolt/acpi.c
+index a5f988a9f9482..b5442f979b4d0 100644
+--- a/drivers/thunderbolt/acpi.c
++++ b/drivers/thunderbolt/acpi.c
+@@ -56,7 +56,7 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
+ 	 * managed with the xHCI and the SuperSpeed hub so we create the
+ 	 * link from xHCI instead.
+ 	 */
+-	while (!dev_is_pci(dev))
++	while (dev && !dev_is_pci(dev))
+ 		dev = dev->parent;
+ 
+ 	if (!dev)
+diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
+index 134dc2005ce97..c9f6e97582885 100644
+--- a/drivers/usb/class/usblp.c
++++ b/drivers/usb/class/usblp.c
+@@ -1329,14 +1329,17 @@ static int usblp_set_protocol(struct usblp *usblp, int protocol)
+ 	if (protocol < USBLP_FIRST_PROTOCOL || protocol > USBLP_LAST_PROTOCOL)
+ 		return -EINVAL;
+ 
+-	alts = usblp->protocol[protocol].alt_setting;
+-	if (alts < 0)
+-		return -EINVAL;
+-	r = usb_set_interface(usblp->dev, usblp->ifnum, alts);
+-	if (r < 0) {
+-		printk(KERN_ERR "usblp: can't set desired altsetting %d on interface %d\n",
+-			alts, usblp->ifnum);
+-		return r;
++	/* Don't unnecessarily set the interface if there's a single alt. */
++	if (usblp->intf->num_altsetting > 1) {
++		alts = usblp->protocol[protocol].alt_setting;
++		if (alts < 0)
++			return -EINVAL;
++		r = usb_set_interface(usblp->dev, usblp->ifnum, alts);
++		if (r < 0) {
++			printk(KERN_ERR "usblp: can't set desired altsetting %d on interface %d\n",
++				alts, usblp->ifnum);
++			return r;
++		}
+ 	}
+ 
+ 	usblp->bidir = (usblp->protocol[protocol].epread != NULL);
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 0a0d11151cfb8..ad4c94366dadf 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -1543,7 +1543,6 @@ static void dwc2_hsotg_complete_oursetup(struct usb_ep *ep,
+ static struct dwc2_hsotg_ep *ep_from_windex(struct dwc2_hsotg *hsotg,
+ 					    u32 windex)
+ {
+-	struct dwc2_hsotg_ep *ep;
+ 	int dir = (windex & USB_DIR_IN) ? 1 : 0;
+ 	int idx = windex & 0x7F;
+ 
+@@ -1553,12 +1552,7 @@ static struct dwc2_hsotg_ep *ep_from_windex(struct dwc2_hsotg *hsotg,
+ 	if (idx > hsotg->num_of_eps)
+ 		return NULL;
+ 
+-	ep = index_to_ep(hsotg, idx, dir);
+-
+-	if (idx && ep->dir_in != dir)
+-		return NULL;
+-
+-	return ep;
++	return index_to_ep(hsotg, idx, dir);
+ }
+ 
+ /**
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 841daec70b6ef..3101f0dcf6ae8 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1758,7 +1758,7 @@ static int dwc3_resume_common(struct dwc3 *dwc, pm_message_t msg)
+ 		if (PMSG_IS_AUTO(msg))
+ 			break;
+ 
+-		ret = dwc3_core_init(dwc);
++		ret = dwc3_core_init_for_resume(dwc);
+ 		if (ret)
+ 			return ret;
+ 
+diff --git a/drivers/usb/gadget/legacy/ether.c b/drivers/usb/gadget/legacy/ether.c
+index 30313b233680d..99c7fc0d1d597 100644
+--- a/drivers/usb/gadget/legacy/ether.c
++++ b/drivers/usb/gadget/legacy/ether.c
+@@ -403,8 +403,10 @@ static int eth_bind(struct usb_composite_dev *cdev)
+ 		struct usb_descriptor_header *usb_desc;
+ 
+ 		usb_desc = usb_otg_descriptor_alloc(gadget);
+-		if (!usb_desc)
++		if (!usb_desc) {
++			status = -ENOMEM;
+ 			goto fail1;
++		}
+ 		usb_otg_descriptor_init(gadget, usb_desc);
+ 		otg_desc[0] = usb_desc;
+ 		otg_desc[1] = NULL;
+diff --git a/drivers/usb/gadget/udc/aspeed-vhub/hub.c b/drivers/usb/gadget/udc/aspeed-vhub/hub.c
+index 6497185ec4e7a..bfd8e77788e29 100644
+--- a/drivers/usb/gadget/udc/aspeed-vhub/hub.c
++++ b/drivers/usb/gadget/udc/aspeed-vhub/hub.c
+@@ -999,8 +999,10 @@ static int ast_vhub_of_parse_str_desc(struct ast_vhub *vhub,
+ 		str_array[offset].s = NULL;
+ 
+ 		ret = ast_vhub_str_alloc_add(vhub, &lang_str);
+-		if (ret)
++		if (ret) {
++			of_node_put(child);
+ 			break;
++		}
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
+index 45c54d56ecbd5..b45e5bf089979 100644
+--- a/drivers/usb/host/xhci-mtk-sch.c
++++ b/drivers/usb/host/xhci-mtk-sch.c
+@@ -200,6 +200,8 @@ static struct mu3h_sch_ep_info *create_sch_ep(struct usb_device *udev,
+ 
+ 	sch_ep->sch_tt = tt;
+ 	sch_ep->ep = ep;
++	INIT_LIST_HEAD(&sch_ep->endpoint);
++	INIT_LIST_HEAD(&sch_ep->tt_endpoint);
+ 
+ 	return sch_ep;
+ }
+@@ -373,6 +375,7 @@ static void update_bus_bw(struct mu3h_sch_bw_info *sch_bw,
+ 					sch_ep->bw_budget_table[j];
+ 		}
+ 	}
++	sch_ep->allocated = used;
+ }
+ 
+ static int check_sch_tt(struct usb_device *udev,
+@@ -541,6 +544,22 @@ static int check_sch_bw(struct usb_device *udev,
+ 	return 0;
+ }
+ 
++static void destroy_sch_ep(struct usb_device *udev,
++	struct mu3h_sch_bw_info *sch_bw, struct mu3h_sch_ep_info *sch_ep)
++{
++	/* only release ep bw check passed by check_sch_bw() */
++	if (sch_ep->allocated)
++		update_bus_bw(sch_bw, sch_ep, 0);
++
++	list_del(&sch_ep->endpoint);
++
++	if (sch_ep->sch_tt) {
++		list_del(&sch_ep->tt_endpoint);
++		drop_tt(udev);
++	}
++	kfree(sch_ep);
++}
++
+ static bool need_bw_sch(struct usb_host_endpoint *ep,
+ 	enum usb_device_speed speed, int has_tt)
+ {
+@@ -583,6 +602,8 @@ int xhci_mtk_sch_init(struct xhci_hcd_mtk *mtk)
+ 
+ 	mtk->sch_array = sch_array;
+ 
++	INIT_LIST_HEAD(&mtk->bw_ep_chk_list);
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(xhci_mtk_sch_init);
+@@ -601,19 +622,14 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ 	struct xhci_ep_ctx *ep_ctx;
+ 	struct xhci_slot_ctx *slot_ctx;
+ 	struct xhci_virt_device *virt_dev;
+-	struct mu3h_sch_bw_info *sch_bw;
+ 	struct mu3h_sch_ep_info *sch_ep;
+-	struct mu3h_sch_bw_info *sch_array;
+ 	unsigned int ep_index;
+-	int bw_index;
+-	int ret = 0;
+ 
+ 	xhci = hcd_to_xhci(hcd);
+ 	virt_dev = xhci->devs[udev->slot_id];
+ 	ep_index = xhci_get_endpoint_index(&ep->desc);
+ 	slot_ctx = xhci_get_slot_ctx(xhci, virt_dev->in_ctx);
+ 	ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
+-	sch_array = mtk->sch_array;
+ 
+ 	xhci_dbg(xhci, "%s() type:%d, speed:%d, mpkt:%d, dir:%d, ep:%p\n",
+ 		__func__, usb_endpoint_type(&ep->desc), udev->speed,
+@@ -632,35 +648,13 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ 		return 0;
+ 	}
+ 
+-	bw_index = get_bw_index(xhci, udev, ep);
+-	sch_bw = &sch_array[bw_index];
+-
+ 	sch_ep = create_sch_ep(udev, ep, ep_ctx);
+ 	if (IS_ERR_OR_NULL(sch_ep))
+ 		return -ENOMEM;
+ 
+ 	setup_sch_info(udev, ep_ctx, sch_ep);
+ 
+-	ret = check_sch_bw(udev, sch_bw, sch_ep);
+-	if (ret) {
+-		xhci_err(xhci, "Not enough bandwidth!\n");
+-		if (is_fs_or_ls(udev->speed))
+-			drop_tt(udev);
+-
+-		kfree(sch_ep);
+-		return -ENOSPC;
+-	}
+-
+-	list_add_tail(&sch_ep->endpoint, &sch_bw->bw_ep_list);
+-
+-	ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(sch_ep->pkts)
+-		| EP_BCSCOUNT(sch_ep->cs_count) | EP_BBM(sch_ep->burst_mode));
+-	ep_ctx->reserved[1] |= cpu_to_le32(EP_BOFFSET(sch_ep->offset)
+-		| EP_BREPEAT(sch_ep->repeat));
+-
+-	xhci_dbg(xhci, " PKTS:%x, CSCOUNT:%x, BM:%x, OFFSET:%x, REPEAT:%x\n",
+-			sch_ep->pkts, sch_ep->cs_count, sch_ep->burst_mode,
+-			sch_ep->offset, sch_ep->repeat);
++	list_add_tail(&sch_ep->endpoint, &mtk->bw_ep_chk_list);
+ 
+ 	return 0;
+ }
+@@ -675,7 +669,7 @@ void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ 	struct xhci_virt_device *virt_dev;
+ 	struct mu3h_sch_bw_info *sch_array;
+ 	struct mu3h_sch_bw_info *sch_bw;
+-	struct mu3h_sch_ep_info *sch_ep;
++	struct mu3h_sch_ep_info *sch_ep, *tmp;
+ 	int bw_index;
+ 
+ 	xhci = hcd_to_xhci(hcd);
+@@ -694,17 +688,79 @@ void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ 	bw_index = get_bw_index(xhci, udev, ep);
+ 	sch_bw = &sch_array[bw_index];
+ 
+-	list_for_each_entry(sch_ep, &sch_bw->bw_ep_list, endpoint) {
++	list_for_each_entry_safe(sch_ep, tmp, &sch_bw->bw_ep_list, endpoint) {
+ 		if (sch_ep->ep == ep) {
+-			update_bus_bw(sch_bw, sch_ep, 0);
+-			list_del(&sch_ep->endpoint);
+-			if (is_fs_or_ls(udev->speed)) {
+-				list_del(&sch_ep->tt_endpoint);
+-				drop_tt(udev);
+-			}
+-			kfree(sch_ep);
++			destroy_sch_ep(udev, sch_bw, sch_ep);
+ 			break;
+ 		}
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(xhci_mtk_drop_ep_quirk);
++
++int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
++{
++	struct xhci_hcd_mtk *mtk = hcd_to_mtk(hcd);
++	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
++	struct xhci_virt_device *virt_dev = xhci->devs[udev->slot_id];
++	struct mu3h_sch_bw_info *sch_bw;
++	struct mu3h_sch_ep_info *sch_ep, *tmp;
++	int bw_index, ret;
++
++	xhci_dbg(xhci, "%s() udev %s\n", __func__, dev_name(&udev->dev));
++
++	list_for_each_entry(sch_ep, &mtk->bw_ep_chk_list, endpoint) {
++		bw_index = get_bw_index(xhci, udev, sch_ep->ep);
++		sch_bw = &mtk->sch_array[bw_index];
++
++		ret = check_sch_bw(udev, sch_bw, sch_ep);
++		if (ret) {
++			xhci_err(xhci, "Not enough bandwidth!\n");
++			return -ENOSPC;
++		}
++	}
++
++	list_for_each_entry_safe(sch_ep, tmp, &mtk->bw_ep_chk_list, endpoint) {
++		struct xhci_ep_ctx *ep_ctx;
++		struct usb_host_endpoint *ep = sch_ep->ep;
++		unsigned int ep_index = xhci_get_endpoint_index(&ep->desc);
++
++		bw_index = get_bw_index(xhci, udev, ep);
++		sch_bw = &mtk->sch_array[bw_index];
++
++		list_move_tail(&sch_ep->endpoint, &sch_bw->bw_ep_list);
++
++		ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
++		ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(sch_ep->pkts)
++			| EP_BCSCOUNT(sch_ep->cs_count)
++			| EP_BBM(sch_ep->burst_mode));
++		ep_ctx->reserved[1] |= cpu_to_le32(EP_BOFFSET(sch_ep->offset)
++			| EP_BREPEAT(sch_ep->repeat));
++
++		xhci_dbg(xhci, " PKTS:%x, CSCOUNT:%x, BM:%x, OFFSET:%x, REPEAT:%x\n",
++			sch_ep->pkts, sch_ep->cs_count, sch_ep->burst_mode,
++			sch_ep->offset, sch_ep->repeat);
++	}
++
++	return xhci_check_bandwidth(hcd, udev);
++}
++EXPORT_SYMBOL_GPL(xhci_mtk_check_bandwidth);
++
++void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
++{
++	struct xhci_hcd_mtk *mtk = hcd_to_mtk(hcd);
++	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
++	struct mu3h_sch_bw_info *sch_bw;
++	struct mu3h_sch_ep_info *sch_ep, *tmp;
++	int bw_index;
++
++	xhci_dbg(xhci, "%s() udev %s\n", __func__, dev_name(&udev->dev));
++
++	list_for_each_entry_safe(sch_ep, tmp, &mtk->bw_ep_chk_list, endpoint) {
++		bw_index = get_bw_index(xhci, udev, sch_ep->ep);
++		sch_bw = &mtk->sch_array[bw_index];
++		destroy_sch_ep(udev, sch_bw, sch_ep);
++	}
++
++	xhci_reset_bandwidth(hcd, udev);
++}
++EXPORT_SYMBOL_GPL(xhci_mtk_reset_bandwidth);
+diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
+index 8f321f39ab960..fe010cc61f19b 100644
+--- a/drivers/usb/host/xhci-mtk.c
++++ b/drivers/usb/host/xhci-mtk.c
+@@ -347,6 +347,8 @@ static void usb_wakeup_set(struct xhci_hcd_mtk *mtk, bool enable)
+ static int xhci_mtk_setup(struct usb_hcd *hcd);
+ static const struct xhci_driver_overrides xhci_mtk_overrides __initconst = {
+ 	.reset = xhci_mtk_setup,
++	.check_bandwidth = xhci_mtk_check_bandwidth,
++	.reset_bandwidth = xhci_mtk_reset_bandwidth,
+ };
+ 
+ static struct hc_driver __read_mostly xhci_mtk_hc_driver;
+diff --git a/drivers/usb/host/xhci-mtk.h b/drivers/usb/host/xhci-mtk.h
+index a93cfe8179049..cbb09dfea62e0 100644
+--- a/drivers/usb/host/xhci-mtk.h
++++ b/drivers/usb/host/xhci-mtk.h
+@@ -59,6 +59,7 @@ struct mu3h_sch_bw_info {
+  * @ep_type: endpoint type
+  * @maxpkt: max packet size of endpoint
+  * @ep: address of usb_host_endpoint struct
++ * @allocated: the bandwidth is aready allocated from bus_bw
+  * @offset: which uframe of the interval that transfer should be
+  *		scheduled first time within the interval
+  * @repeat: the time gap between two uframes that transfers are
+@@ -86,6 +87,7 @@ struct mu3h_sch_ep_info {
+ 	u32 ep_type;
+ 	u32 maxpkt;
+ 	void *ep;
++	bool allocated;
+ 	/*
+ 	 * mtk xHCI scheduling information put into reserved DWs
+ 	 * in ep context
+@@ -131,6 +133,7 @@ struct xhci_hcd_mtk {
+ 	struct device *dev;
+ 	struct usb_hcd *hcd;
+ 	struct mu3h_sch_bw_info *sch_array;
++	struct list_head bw_ep_chk_list;
+ 	struct mu3c_ippc_regs __iomem *ippc_regs;
+ 	bool has_ippc;
+ 	int num_u2_ports;
+@@ -166,6 +169,8 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ 		struct usb_host_endpoint *ep);
+ void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ 		struct usb_host_endpoint *ep);
++int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
++void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+ 
+ #else
+ static inline int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd,
+@@ -179,6 +184,16 @@ static inline void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd,
+ {
+ }
+ 
++static inline int xhci_mtk_check_bandwidth(struct usb_hcd *hcd,
++		struct usb_device *udev)
++{
++	return 0;
++}
++
++static inline void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd,
++		struct usb_device *udev)
++{
++}
+ #endif
+ 
+ #endif		/* _XHCI_MTK_H_ */
+diff --git a/drivers/usb/host/xhci-mvebu.c b/drivers/usb/host/xhci-mvebu.c
+index 60651a50770f9..8ca1a235d1645 100644
+--- a/drivers/usb/host/xhci-mvebu.c
++++ b/drivers/usb/host/xhci-mvebu.c
+@@ -8,6 +8,7 @@
+ #include <linux/mbus.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
++#include <linux/phy/phy.h>
+ 
+ #include <linux/usb.h>
+ #include <linux/usb/hcd.h>
+@@ -74,6 +75,47 @@ int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd)
+ 	return 0;
+ }
+ 
++int xhci_mvebu_a3700_plat_setup(struct usb_hcd *hcd)
++{
++	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
++	struct device *dev = hcd->self.controller;
++	struct phy *phy;
++	int ret;
++
++	/* Old bindings miss the PHY handle */
++	phy = of_phy_get(dev->of_node, "usb3-phy");
++	if (IS_ERR(phy) && PTR_ERR(phy) == -EPROBE_DEFER)
++		return -EPROBE_DEFER;
++	else if (IS_ERR(phy))
++		goto phy_out;
++
++	ret = phy_init(phy);
++	if (ret)
++		goto phy_put;
++
++	ret = phy_set_mode(phy, PHY_MODE_USB_HOST_SS);
++	if (ret)
++		goto phy_exit;
++
++	ret = phy_power_on(phy);
++	if (ret == -EOPNOTSUPP) {
++		/* Skip initializatin of XHCI PHY when it is unsupported by firmware */
++		dev_warn(dev, "PHY unsupported by firmware\n");
++		xhci->quirks |= XHCI_SKIP_PHY_INIT;
++	}
++	if (ret)
++		goto phy_exit;
++
++	phy_power_off(phy);
++phy_exit:
++	phy_exit(phy);
++phy_put:
++	of_phy_put(phy);
++phy_out:
++
++	return 0;
++}
++
+ int xhci_mvebu_a3700_init_quirk(struct usb_hcd *hcd)
+ {
+ 	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+diff --git a/drivers/usb/host/xhci-mvebu.h b/drivers/usb/host/xhci-mvebu.h
+index 3be021793cc8b..01bf3fcb3eca5 100644
+--- a/drivers/usb/host/xhci-mvebu.h
++++ b/drivers/usb/host/xhci-mvebu.h
+@@ -12,6 +12,7 @@ struct usb_hcd;
+ 
+ #if IS_ENABLED(CONFIG_USB_XHCI_MVEBU)
+ int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd);
++int xhci_mvebu_a3700_plat_setup(struct usb_hcd *hcd);
+ int xhci_mvebu_a3700_init_quirk(struct usb_hcd *hcd);
+ #else
+ static inline int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd)
+@@ -19,6 +20,11 @@ static inline int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd)
+ 	return 0;
+ }
+ 
++static inline int xhci_mvebu_a3700_plat_setup(struct usb_hcd *hcd)
++{
++	return 0;
++}
++
+ static inline int xhci_mvebu_a3700_init_quirk(struct usb_hcd *hcd)
+ {
+ 	return 0;
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 4d34f6005381e..c1edcc9b13cec 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -44,6 +44,16 @@ static void xhci_priv_plat_start(struct usb_hcd *hcd)
+ 		priv->plat_start(hcd);
+ }
+ 
++static int xhci_priv_plat_setup(struct usb_hcd *hcd)
++{
++	struct xhci_plat_priv *priv = hcd_to_xhci_priv(hcd);
++
++	if (!priv->plat_setup)
++		return 0;
++
++	return priv->plat_setup(hcd);
++}
++
+ static int xhci_priv_init_quirk(struct usb_hcd *hcd)
+ {
+ 	struct xhci_plat_priv *priv = hcd_to_xhci_priv(hcd);
+@@ -111,6 +121,7 @@ static const struct xhci_plat_priv xhci_plat_marvell_armada = {
+ };
+ 
+ static const struct xhci_plat_priv xhci_plat_marvell_armada3700 = {
++	.plat_setup = xhci_mvebu_a3700_plat_setup,
+ 	.init_quirk = xhci_mvebu_a3700_init_quirk,
+ };
+ 
+@@ -330,7 +341,14 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ 
+ 	hcd->tpl_support = of_usb_host_tpl_support(sysdev->of_node);
+ 	xhci->shared_hcd->tpl_support = hcd->tpl_support;
+-	if (priv && (priv->quirks & XHCI_SKIP_PHY_INIT))
++
++	if (priv) {
++		ret = xhci_priv_plat_setup(hcd);
++		if (ret)
++			goto disable_usb_phy;
++	}
++
++	if ((xhci->quirks & XHCI_SKIP_PHY_INIT) || (priv && (priv->quirks & XHCI_SKIP_PHY_INIT)))
+ 		hcd->skip_phy_initialization = 1;
+ 
+ 	if (priv && (priv->quirks & XHCI_SG_TRB_CACHE_SIZE_QUIRK))
+diff --git a/drivers/usb/host/xhci-plat.h b/drivers/usb/host/xhci-plat.h
+index 1fb149d1fbcea..561d0b7bce098 100644
+--- a/drivers/usb/host/xhci-plat.h
++++ b/drivers/usb/host/xhci-plat.h
+@@ -13,6 +13,7 @@
+ struct xhci_plat_priv {
+ 	const char *firmware_name;
+ 	unsigned long long quirks;
++	int (*plat_setup)(struct usb_hcd *);
+ 	void (*plat_start)(struct usb_hcd *);
+ 	int (*init_quirk)(struct usb_hcd *);
+ 	int (*suspend_quirk)(struct usb_hcd *);
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index db8612ec82d3e..061d5c51405fb 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -699,11 +699,16 @@ static void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci,
+ 	dma_unmap_single(dev, seg->bounce_dma, ring->bounce_buf_len,
+ 			 DMA_FROM_DEVICE);
+ 	/* for in tranfers we need to copy the data from bounce to sg */
+-	len = sg_pcopy_from_buffer(urb->sg, urb->num_sgs, seg->bounce_buf,
+-			     seg->bounce_len, seg->bounce_offs);
+-	if (len != seg->bounce_len)
+-		xhci_warn(xhci, "WARN Wrong bounce buffer read length: %zu != %d\n",
+-				len, seg->bounce_len);
++	if (urb->num_sgs) {
++		len = sg_pcopy_from_buffer(urb->sg, urb->num_sgs, seg->bounce_buf,
++					   seg->bounce_len, seg->bounce_offs);
++		if (len != seg->bounce_len)
++			xhci_warn(xhci, "WARN Wrong bounce buffer read length: %zu != %d\n",
++				  len, seg->bounce_len);
++	} else {
++		memcpy(urb->transfer_buffer + seg->bounce_offs, seg->bounce_buf,
++		       seg->bounce_len);
++	}
+ 	seg->bounce_len = 0;
+ 	seg->bounce_offs = 0;
+ }
+@@ -3275,12 +3280,16 @@ static int xhci_align_td(struct xhci_hcd *xhci, struct urb *urb, u32 enqd_len,
+ 
+ 	/* create a max max_pkt sized bounce buffer pointed to by last trb */
+ 	if (usb_urb_dir_out(urb)) {
+-		len = sg_pcopy_to_buffer(urb->sg, urb->num_sgs,
+-				   seg->bounce_buf, new_buff_len, enqd_len);
+-		if (len != new_buff_len)
+-			xhci_warn(xhci,
+-				"WARN Wrong bounce buffer write length: %zu != %d\n",
+-				len, new_buff_len);
++		if (urb->num_sgs) {
++			len = sg_pcopy_to_buffer(urb->sg, urb->num_sgs,
++						 seg->bounce_buf, new_buff_len, enqd_len);
++			if (len != new_buff_len)
++				xhci_warn(xhci, "WARN Wrong bounce buffer write length: %zu != %d\n",
++					  len, new_buff_len);
++		} else {
++			memcpy(seg->bounce_buf, urb->transfer_buffer + enqd_len, new_buff_len);
++		}
++
+ 		seg->bounce_dma = dma_map_single(dev, seg->bounce_buf,
+ 						 max_pkt, DMA_TO_DEVICE);
+ 	} else {
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 73f1373d517a2..d17bbb162810a 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -2861,7 +2861,7 @@ static void xhci_check_bw_drop_ep_streams(struct xhci_hcd *xhci,
+  * else should be touching the xhci->devs[slot_id] structure, so we
+  * don't need to take the xhci->lock for manipulating that.
+  */
+-static int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
++int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+ {
+ 	int i;
+ 	int ret = 0;
+@@ -2959,7 +2959,7 @@ command_cleanup:
+ 	return ret;
+ }
+ 
+-static void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
++void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+ {
+ 	struct xhci_hcd *xhci;
+ 	struct xhci_virt_device	*virt_dev;
+@@ -5385,6 +5385,10 @@ void xhci_init_driver(struct hc_driver *drv,
+ 			drv->reset = over->reset;
+ 		if (over->start)
+ 			drv->start = over->start;
++		if (over->check_bandwidth)
++			drv->check_bandwidth = over->check_bandwidth;
++		if (over->reset_bandwidth)
++			drv->reset_bandwidth = over->reset_bandwidth;
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(xhci_init_driver);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index d90c0d5df3b37..045740ad9c1ec 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1916,6 +1916,8 @@ struct xhci_driver_overrides {
+ 	size_t extra_priv_size;
+ 	int (*reset)(struct usb_hcd *hcd);
+ 	int (*start)(struct usb_hcd *hcd);
++	int (*check_bandwidth)(struct usb_hcd *, struct usb_device *);
++	void (*reset_bandwidth)(struct usb_hcd *, struct usb_device *);
+ };
+ 
+ #define	XHCI_CFC_DELAY		10
+@@ -2070,6 +2072,8 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks);
+ void xhci_shutdown(struct usb_hcd *hcd);
+ void xhci_init_driver(struct hc_driver *drv,
+ 		      const struct xhci_driver_overrides *over);
++int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
++void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id);
+ int xhci_ext_cap_init(struct xhci_hcd *xhci);
+ 
+diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c
+index ac9a81ae82164..e6fa137018082 100644
+--- a/drivers/usb/renesas_usbhs/fifo.c
++++ b/drivers/usb/renesas_usbhs/fifo.c
+@@ -126,6 +126,7 @@ struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt)
+ 		}
+ 
+ 		usbhs_pipe_clear_without_sequence(pipe, 0, 0);
++		usbhs_pipe_running(pipe, 0);
+ 
+ 		__usbhsf_pkt_del(pkt);
+ 	}
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index d0c05aa8a0d6e..bf11f86896837 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -64,6 +64,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
+ 	{ USB_DEVICE(0x08FD, 0x000A) }, /* Digianswer A/S , ZigBee/802.15.4 MAC Device */
+ 	{ USB_DEVICE(0x0908, 0x01FF) }, /* Siemens RUGGEDCOM USB Serial Console */
++	{ USB_DEVICE(0x0988, 0x0578) }, /* Teraoka AD2000 */
+ 	{ USB_DEVICE(0x0B00, 0x3070) }, /* Ingenico 3070 */
+ 	{ USB_DEVICE(0x0BED, 0x1100) }, /* MEI (TM) Cashflow-SC Bill/Voucher Acceptor */
+ 	{ USB_DEVICE(0x0BED, 0x1101) }, /* MEI series 2000 Combo Acceptor */
+@@ -204,6 +205,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x1901, 0x0194) },	/* GE Healthcare Remote Alarm Box */
+ 	{ USB_DEVICE(0x1901, 0x0195) },	/* GE B850/B650/B450 CP2104 DP UART interface */
+ 	{ USB_DEVICE(0x1901, 0x0196) },	/* GE B850 CP2105 DP UART interface */
++	{ USB_DEVICE(0x199B, 0xBA30) }, /* LORD WSDA-200-USB */
+ 	{ USB_DEVICE(0x19CF, 0x3000) }, /* Parrot NMEA GPS Flight Recorder */
+ 	{ USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
+ 	{ USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 3fe959104311b..2049e66f34a3f 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -425,6 +425,8 @@ static void option_instat_callback(struct urb *urb);
+ #define CINTERION_PRODUCT_AHXX_2RMNET		0x0084
+ #define CINTERION_PRODUCT_AHXX_AUDIO		0x0085
+ #define CINTERION_PRODUCT_CLS8			0x00b0
++#define CINTERION_PRODUCT_MV31_MBIM		0x00b3
++#define CINTERION_PRODUCT_MV31_RMNET		0x00b7
+ 
+ /* Olivetti products */
+ #define OLIVETTI_VENDOR_ID			0x0b3c
+@@ -1914,6 +1916,10 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDMNET) },
+ 	{ USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) }, /* HC28 enumerates with Siemens or Cinterion VID depending on FW revision */
+ 	{ USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) },
++	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_MBIM, 0xff),
++	  .driver_info = RSVD(3)},
++	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_RMNET, 0xff),
++	  .driver_info = RSVD(0)},
+ 	{ USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100),
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD120),
+diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+index 5c92a576edae8..08f742fd24099 100644
+--- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
++++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+@@ -15,6 +15,7 @@ struct mlx5_vdpa_direct_mr {
+ 	struct sg_table sg_head;
+ 	int log_size;
+ 	int nsg;
++	int nent;
+ 	struct list_head list;
+ 	u64 offset;
+ };
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index 4b6195666c589..d300f799efcd1 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -25,17 +25,6 @@ static int get_octo_len(u64 len, int page_shift)
+ 	return (npages + 1) / 2;
+ }
+ 
+-static void fill_sg(struct mlx5_vdpa_direct_mr *mr, void *in)
+-{
+-	struct scatterlist *sg;
+-	__be64 *pas;
+-	int i;
+-
+-	pas = MLX5_ADDR_OF(create_mkey_in, in, klm_pas_mtt);
+-	for_each_sg(mr->sg_head.sgl, sg, mr->nsg, i)
+-		(*pas) = cpu_to_be64(sg_dma_address(sg));
+-}
+-
+ static void mlx5_set_access_mode(void *mkc, int mode)
+ {
+ 	MLX5_SET(mkc, mkc, access_mode_1_0, mode & 0x3);
+@@ -45,10 +34,18 @@ static void mlx5_set_access_mode(void *mkc, int mode)
+ static void populate_mtts(struct mlx5_vdpa_direct_mr *mr, __be64 *mtt)
+ {
+ 	struct scatterlist *sg;
++	int nsg = mr->nsg;
++	u64 dma_addr;
++	u64 dma_len;
++	int j = 0;
+ 	int i;
+ 
+-	for_each_sg(mr->sg_head.sgl, sg, mr->nsg, i)
+-		mtt[i] = cpu_to_be64(sg_dma_address(sg));
++	for_each_sg(mr->sg_head.sgl, sg, mr->nent, i) {
++		for (dma_addr = sg_dma_address(sg), dma_len = sg_dma_len(sg);
++		     nsg && dma_len;
++		     nsg--, dma_addr += BIT(mr->log_size), dma_len -= BIT(mr->log_size))
++			mtt[j++] = cpu_to_be64(dma_addr);
++	}
+ }
+ 
+ static int create_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr *mr)
+@@ -64,7 +61,6 @@ static int create_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct
+ 		return -ENOMEM;
+ 
+ 	MLX5_SET(create_mkey_in, in, uid, mvdev->res.uid);
+-	fill_sg(mr, in);
+ 	mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry);
+ 	MLX5_SET(mkc, mkc, lw, !!(mr->perm & VHOST_MAP_WO));
+ 	MLX5_SET(mkc, mkc, lr, !!(mr->perm & VHOST_MAP_RO));
+@@ -276,8 +272,8 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ done:
+ 	mr->log_size = log_entity_size;
+ 	mr->nsg = nsg;
+-	err = dma_map_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
+-	if (!err)
++	mr->nent = dma_map_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
++	if (!mr->nent)
+ 		goto err_map;
+ 
+ 	err = create_direct_mr(mvdev, mr);
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 81b932f72e103..c6529f7c3034a 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -77,6 +77,7 @@ struct mlx5_vq_restore_info {
+ 	u64 device_addr;
+ 	u64 driver_addr;
+ 	u16 avail_index;
++	u16 used_index;
+ 	bool ready;
+ 	struct vdpa_callback cb;
+ 	bool restore;
+@@ -111,6 +112,7 @@ struct mlx5_vdpa_virtqueue {
+ 	u32 virtq_id;
+ 	struct mlx5_vdpa_net *ndev;
+ 	u16 avail_idx;
++	u16 used_idx;
+ 	int fw_state;
+ 
+ 	/* keep last in the struct */
+@@ -789,6 +791,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
+ 
+ 	obj_context = MLX5_ADDR_OF(create_virtio_net_q_in, in, obj_context);
+ 	MLX5_SET(virtio_net_q_object, obj_context, hw_available_index, mvq->avail_idx);
++	MLX5_SET(virtio_net_q_object, obj_context, hw_used_index, mvq->used_idx);
+ 	MLX5_SET(virtio_net_q_object, obj_context, queue_feature_bit_mask_12_3,
+ 		 get_features_12_3(ndev->mvdev.actual_features));
+ 	vq_ctx = MLX5_ADDR_OF(virtio_net_q_object, obj_context, virtio_q_context);
+@@ -1007,6 +1010,7 @@ static int connect_qps(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
+ struct mlx5_virtq_attr {
+ 	u8 state;
+ 	u16 available_index;
++	u16 used_index;
+ };
+ 
+ static int query_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq,
+@@ -1037,6 +1041,7 @@ static int query_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueu
+ 	memset(attr, 0, sizeof(*attr));
+ 	attr->state = MLX5_GET(virtio_net_q_object, obj_context, state);
+ 	attr->available_index = MLX5_GET(virtio_net_q_object, obj_context, hw_available_index);
++	attr->used_index = MLX5_GET(virtio_net_q_object, obj_context, hw_used_index);
+ 	kfree(out);
+ 	return 0;
+ 
+@@ -1520,6 +1525,16 @@ static void teardown_virtqueues(struct mlx5_vdpa_net *ndev)
+ 	}
+ }
+ 
++static void clear_virtqueues(struct mlx5_vdpa_net *ndev)
++{
++	int i;
++
++	for (i = ndev->mvdev.max_vqs - 1; i >= 0; i--) {
++		ndev->vqs[i].avail_idx = 0;
++		ndev->vqs[i].used_idx = 0;
++	}
++}
++
+ /* TODO: cross-endian support */
+ static inline bool mlx5_vdpa_is_little_endian(struct mlx5_vdpa_dev *mvdev)
+ {
+@@ -1595,6 +1610,7 @@ static int save_channel_info(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqu
+ 		return err;
+ 
+ 	ri->avail_index = attr.available_index;
++	ri->used_index = attr.used_index;
+ 	ri->ready = mvq->ready;
+ 	ri->num_ent = mvq->num_ent;
+ 	ri->desc_addr = mvq->desc_addr;
+@@ -1639,6 +1655,7 @@ static void restore_channels_info(struct mlx5_vdpa_net *ndev)
+ 			continue;
+ 
+ 		mvq->avail_idx = ri->avail_index;
++		mvq->used_idx = ri->used_index;
+ 		mvq->ready = ri->ready;
+ 		mvq->num_ent = ri->num_ent;
+ 		mvq->desc_addr = ri->desc_addr;
+@@ -1753,6 +1770,7 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
+ 	if (!status) {
+ 		mlx5_vdpa_info(mvdev, "performing device reset\n");
+ 		teardown_driver(ndev);
++		clear_virtqueues(ndev);
+ 		mlx5_vdpa_destroy_mr(&ndev->mvdev);
+ 		ndev->mvdev.status = 0;
+ 		ndev->mvdev.mlx_features = 0;
+diff --git a/fs/afs/main.c b/fs/afs/main.c
+index accdd8970e7c0..b2975256dadbd 100644
+--- a/fs/afs/main.c
++++ b/fs/afs/main.c
+@@ -193,7 +193,7 @@ static int __init afs_init(void)
+ 		goto error_cache;
+ #endif
+ 
+-	ret = register_pernet_subsys(&afs_net_ops);
++	ret = register_pernet_device(&afs_net_ops);
+ 	if (ret < 0)
+ 		goto error_net;
+ 
+@@ -213,7 +213,7 @@ static int __init afs_init(void)
+ error_proc:
+ 	afs_fs_exit();
+ error_fs:
+-	unregister_pernet_subsys(&afs_net_ops);
++	unregister_pernet_device(&afs_net_ops);
+ error_net:
+ #ifdef CONFIG_AFS_FSCACHE
+ 	fscache_unregister_netfs(&afs_cache_netfs);
+@@ -244,7 +244,7 @@ static void __exit afs_exit(void)
+ 
+ 	proc_remove(afs_proc_symlink);
+ 	afs_fs_exit();
+-	unregister_pernet_subsys(&afs_net_ops);
++	unregister_pernet_device(&afs_net_ops);
+ #ifdef CONFIG_AFS_FSCACHE
+ 	fscache_unregister_netfs(&afs_cache_netfs);
+ #endif
+diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c
+index 398c1eef71906..0d7238cb45b56 100644
+--- a/fs/cifs/dir.c
++++ b/fs/cifs/dir.c
+@@ -736,6 +736,7 @@ static int
+ cifs_d_revalidate(struct dentry *direntry, unsigned int flags)
+ {
+ 	struct inode *inode;
++	int rc;
+ 
+ 	if (flags & LOOKUP_RCU)
+ 		return -ECHILD;
+@@ -745,8 +746,25 @@ cifs_d_revalidate(struct dentry *direntry, unsigned int flags)
+ 		if ((flags & LOOKUP_REVAL) && !CIFS_CACHE_READ(CIFS_I(inode)))
+ 			CIFS_I(inode)->time = 0; /* force reval */
+ 
+-		if (cifs_revalidate_dentry(direntry))
+-			return 0;
++		rc = cifs_revalidate_dentry(direntry);
++		if (rc) {
++			cifs_dbg(FYI, "cifs_revalidate_dentry failed with rc=%d", rc);
++			switch (rc) {
++			case -ENOENT:
++			case -ESTALE:
++				/*
++				 * Those errors mean the dentry is invalid
++				 * (file was deleted or recreated)
++				 */
++				return 0;
++			default:
++				/*
++				 * Otherwise some unexpected error happened
++				 * report it as-is to VFS layer
++				 */
++				return rc;
++			}
++		}
+ 		else {
+ 			/*
+ 			 * If the inode wasn't known to be a dfs entry when
+diff --git a/fs/cifs/smb2pdu.h b/fs/cifs/smb2pdu.h
+index 204a622b89ed3..56ec9fba3925b 100644
+--- a/fs/cifs/smb2pdu.h
++++ b/fs/cifs/smb2pdu.h
+@@ -286,7 +286,7 @@ struct smb2_negotiate_req {
+ 	__le32 NegotiateContextOffset; /* SMB3.1.1 only. MBZ earlier */
+ 	__le16 NegotiateContextCount;  /* SMB3.1.1 only. MBZ earlier */
+ 	__le16 Reserved2;
+-	__le16 Dialects[1]; /* One dialect (vers=) at a time for now */
++	__le16 Dialects[4]; /* BB expand this if autonegotiate > 4 dialects */
+ } __packed;
+ 
+ /* Dialects */
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index b1c2f416b9bd9..9391cd17a2b55 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -655,10 +655,22 @@ wait_for_compound_request(struct TCP_Server_Info *server, int num,
+ 	spin_lock(&server->req_lock);
+ 	if (*credits < num) {
+ 		/*
+-		 * Return immediately if not too many requests in flight since
+-		 * we will likely be stuck on waiting for credits.
++		 * If the server is tight on resources or just gives us less
++		 * credits for other reasons (e.g. requests are coming out of
++		 * order and the server delays granting more credits until it
++		 * processes a missing mid) and we exhausted most available
++		 * credits there may be situations when we try to send
++		 * a compound request but we don't have enough credits. At this
++		 * point the client needs to decide if it should wait for
++		 * additional credits or fail the request. If at least one
++		 * request is in flight there is a high probability that the
++		 * server will return enough credits to satisfy this compound
++		 * request.
++		 *
++		 * Return immediately if no requests in flight since we will be
++		 * stuck on waiting for credits.
+ 		 */
+-		if (server->in_flight < num - *credits) {
++		if (server->in_flight == 0) {
+ 			spin_unlock(&server->req_lock);
+ 			return -ENOTSUPP;
+ 		}
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index b5c109703daaf..21c20fd5f9ee7 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -735,9 +735,10 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
+ 
+ 		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+ 
++		set_page_huge_active(page);
+ 		/*
+ 		 * unlock_page because locked by add_to_page_cache()
+-		 * page_put due to reference from alloc_huge_page()
++		 * put_page() due to reference from alloc_huge_page()
+ 		 */
+ 		unlock_page(page);
+ 		put_page(page);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 907ecaffc3386..3b6307f6bd93d 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -8782,12 +8782,6 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ 
+ 	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
+ 		atomic_dec(&task->io_uring->in_idle);
+-		/*
+-		 * If the files that are going away are the ones in the thread
+-		 * identity, clear them out.
+-		 */
+-		if (task->io_uring->identity->files == files)
+-			task->io_uring->identity->files = NULL;
+ 		io_sq_thread_unpark(ctx->sq_data);
+ 	}
+ }
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 28a075b5f5b2e..d1efa3a5a5032 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -992,8 +992,8 @@ static char *ovl_get_redirect(struct dentry *dentry, bool abs_redirect)
+ 
+ 		buflen -= thislen;
+ 		memcpy(&buf[buflen], name, thislen);
+-		tmp = dget_dlock(d->d_parent);
+ 		spin_unlock(&d->d_lock);
++		tmp = dget_parent(d);
+ 
+ 		dput(d);
+ 		d = tmp;
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index a1f72ac053e5f..5c5c3972ebd0a 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -445,8 +445,9 @@ static int ovl_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ 	const struct cred *old_cred;
+ 	int ret;
+ 
+-	if (!ovl_should_sync(OVL_FS(file_inode(file)->i_sb)))
+-		return 0;
++	ret = ovl_sync_status(OVL_FS(file_inode(file)->i_sb));
++	if (ret <= 0)
++		return ret;
+ 
+ 	ret = ovl_real_fdget_meta(file, &real, !datasync);
+ 	if (ret)
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index f8880aa2ba0ec..9f7af98ae2005 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -322,6 +322,7 @@ int ovl_check_metacopy_xattr(struct ovl_fs *ofs, struct dentry *dentry);
+ bool ovl_is_metacopy_dentry(struct dentry *dentry);
+ char *ovl_get_redirect_xattr(struct ovl_fs *ofs, struct dentry *dentry,
+ 			     int padding);
++int ovl_sync_status(struct ovl_fs *ofs);
+ 
+ static inline bool ovl_is_impuredir(struct super_block *sb,
+ 				    struct dentry *dentry)
+diff --git a/fs/overlayfs/ovl_entry.h b/fs/overlayfs/ovl_entry.h
+index 1b5a2094df8eb..b208eba5d0b64 100644
+--- a/fs/overlayfs/ovl_entry.h
++++ b/fs/overlayfs/ovl_entry.h
+@@ -79,6 +79,8 @@ struct ovl_fs {
+ 	atomic_long_t last_ino;
+ 	/* Whiteout dentry cache */
+ 	struct dentry *whiteout;
++	/* r/o snapshot of upperdir sb's only taken on volatile mounts */
++	errseq_t errseq;
+ };
+ 
+ static inline struct vfsmount *ovl_upper_mnt(struct ovl_fs *ofs)
+diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
+index 01620ebae1bd4..f404a78e6b607 100644
+--- a/fs/overlayfs/readdir.c
++++ b/fs/overlayfs/readdir.c
+@@ -865,7 +865,7 @@ struct file *ovl_dir_real_file(const struct file *file, bool want_upper)
+ 
+ 	struct ovl_dir_file *od = file->private_data;
+ 	struct dentry *dentry = file->f_path.dentry;
+-	struct file *realfile = od->realfile;
++	struct file *old, *realfile = od->realfile;
+ 
+ 	if (!OVL_TYPE_UPPER(ovl_path_type(dentry)))
+ 		return want_upper ? NULL : realfile;
+@@ -874,29 +874,20 @@ struct file *ovl_dir_real_file(const struct file *file, bool want_upper)
+ 	 * Need to check if we started out being a lower dir, but got copied up
+ 	 */
+ 	if (!od->is_upper) {
+-		struct inode *inode = file_inode(file);
+-
+ 		realfile = READ_ONCE(od->upperfile);
+ 		if (!realfile) {
+ 			struct path upperpath;
+ 
+ 			ovl_path_upper(dentry, &upperpath);
+ 			realfile = ovl_dir_open_realfile(file, &upperpath);
++			if (IS_ERR(realfile))
++				return realfile;
+ 
+-			inode_lock(inode);
+-			if (!od->upperfile) {
+-				if (IS_ERR(realfile)) {
+-					inode_unlock(inode);
+-					return realfile;
+-				}
+-				smp_store_release(&od->upperfile, realfile);
+-			} else {
+-				/* somebody has beaten us to it */
+-				if (!IS_ERR(realfile))
+-					fput(realfile);
+-				realfile = od->upperfile;
++			old = cmpxchg_release(&od->upperfile, NULL, realfile);
++			if (old) {
++				fput(realfile);
++				realfile = old;
+ 			}
+-			inode_unlock(inode);
+ 		}
+ 	}
+ 
+@@ -909,8 +900,9 @@ static int ovl_dir_fsync(struct file *file, loff_t start, loff_t end,
+ 	struct file *realfile;
+ 	int err;
+ 
+-	if (!ovl_should_sync(OVL_FS(file->f_path.dentry->d_sb)))
+-		return 0;
++	err = ovl_sync_status(OVL_FS(file->f_path.dentry->d_sb));
++	if (err <= 0)
++		return err;
+ 
+ 	realfile = ovl_dir_real_file(file, true);
+ 	err = PTR_ERR_OR_ZERO(realfile);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 290983bcfbb35..d23177a53c95f 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -261,11 +261,20 @@ static int ovl_sync_fs(struct super_block *sb, int wait)
+ 	struct super_block *upper_sb;
+ 	int ret;
+ 
+-	if (!ovl_upper_mnt(ofs))
+-		return 0;
++	ret = ovl_sync_status(ofs);
++	/*
++	 * We have to always set the err, because the return value isn't
++	 * checked in syncfs, and instead indirectly return an error via
++	 * the sb's writeback errseq, which VFS inspects after this call.
++	 */
++	if (ret < 0) {
++		errseq_set(&sb->s_wb_err, -EIO);
++		return -EIO;
++	}
++
++	if (!ret)
++		return ret;
+ 
+-	if (!ovl_should_sync(ofs))
+-		return 0;
+ 	/*
+ 	 * Not called for sync(2) call or an emergency sync (SB_I_SKIP_SYNC).
+ 	 * All the super blocks will be iterated, including upper_sb.
+@@ -1927,6 +1936,8 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ 	sb->s_op = &ovl_super_operations;
+ 
+ 	if (ofs->config.upperdir) {
++		struct super_block *upper_sb;
++
+ 		if (!ofs->config.workdir) {
+ 			pr_err("missing 'workdir'\n");
+ 			goto out_err;
+@@ -1936,6 +1947,16 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ 		if (err)
+ 			goto out_err;
+ 
++		upper_sb = ovl_upper_mnt(ofs)->mnt_sb;
++		if (!ovl_should_sync(ofs)) {
++			ofs->errseq = errseq_sample(&upper_sb->s_wb_err);
++			if (errseq_check(&upper_sb->s_wb_err, ofs->errseq)) {
++				err = -EIO;
++				pr_err("Cannot mount volatile when upperdir has an unseen error. Sync upperdir fs to clear state.\n");
++				goto out_err;
++			}
++		}
++
+ 		err = ovl_get_workdir(sb, ofs, &upperpath);
+ 		if (err)
+ 			goto out_err;
+@@ -1943,9 +1964,8 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ 		if (!ofs->workdir)
+ 			sb->s_flags |= SB_RDONLY;
+ 
+-		sb->s_stack_depth = ovl_upper_mnt(ofs)->mnt_sb->s_stack_depth;
+-		sb->s_time_gran = ovl_upper_mnt(ofs)->mnt_sb->s_time_gran;
+-
++		sb->s_stack_depth = upper_sb->s_stack_depth;
++		sb->s_time_gran = upper_sb->s_time_gran;
+ 	}
+ 	oe = ovl_get_lowerstack(sb, splitlower, numlower, ofs, layers);
+ 	err = PTR_ERR(oe);
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 23f475627d07f..6e7b8c882045c 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -950,3 +950,30 @@ err_free:
+ 	kfree(buf);
+ 	return ERR_PTR(res);
+ }
++
++/*
++ * ovl_sync_status() - Check fs sync status for volatile mounts
++ *
++ * Returns 1 if this is not a volatile mount and a real sync is required.
++ *
++ * Returns 0 if syncing can be skipped because mount is volatile, and no errors
++ * have occurred on the upperdir since the mount.
++ *
++ * Returns -errno if it is a volatile mount, and the error that occurred since
++ * the last mount. If the error code changes, it'll return the latest error
++ * code.
++ */
++
++int ovl_sync_status(struct ovl_fs *ofs)
++{
++	struct vfsmount *mnt;
++
++	if (ovl_should_sync(ofs))
++		return 1;
++
++	mnt = ovl_upper_mnt(ofs);
++	if (!mnt)
++		return 0;
++
++	return errseq_check(&mnt->mnt_sb->s_wb_err, ofs->errseq);
++}
+diff --git a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h
+index f5e92fe9151c3..bd1c39907b924 100644
+--- a/include/drm/drm_dp_mst_helper.h
++++ b/include/drm/drm_dp_mst_helper.h
+@@ -783,6 +783,7 @@ drm_dp_mst_detect_port(struct drm_connector *connector,
+ 
+ struct edid *drm_dp_mst_get_edid(struct drm_connector *connector, struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
+ 
++int drm_dp_get_vc_payload_bw(int link_rate, int link_lane_count);
+ 
+ int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc);
+ 
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index ebca2ef022127..b5807f23caf80 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -770,6 +770,8 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
+ }
+ #endif
+ 
++void set_page_huge_active(struct page *page);
++
+ #else	/* CONFIG_HUGETLB_PAGE */
+ struct hstate {};
+ 
+diff --git a/include/linux/iommu.h b/include/linux/iommu.h
+index b95a6f8db6ff9..9bbcfe3b0bb12 100644
+--- a/include/linux/iommu.h
++++ b/include/linux/iommu.h
+@@ -614,7 +614,10 @@ static inline void dev_iommu_fwspec_set(struct device *dev,
+ 
+ static inline void *dev_iommu_priv_get(struct device *dev)
+ {
+-	return dev->iommu->priv;
++	if (dev->iommu)
++		return dev->iommu->priv;
++	else
++		return NULL;
+ }
+ 
+ static inline void dev_iommu_priv_set(struct device *dev, void *priv)
+diff --git a/include/linux/irq.h b/include/linux/irq.h
+index c54365309e975..a36d35c259963 100644
+--- a/include/linux/irq.h
++++ b/include/linux/irq.h
+@@ -922,7 +922,7 @@ int __devm_irq_alloc_descs(struct device *dev, int irq, unsigned int from,
+ 	__irq_alloc_descs(irq, from, cnt, node, THIS_MODULE, NULL)
+ 
+ #define irq_alloc_desc(node)			\
+-	irq_alloc_descs(-1, 0, 1, node)
++	irq_alloc_descs(-1, 1, 1, node)
+ 
+ #define irq_alloc_desc_at(at, node)		\
+ 	irq_alloc_descs(at, at, 1, node)
+@@ -937,7 +937,7 @@ int __devm_irq_alloc_descs(struct device *dev, int irq, unsigned int from,
+ 	__devm_irq_alloc_descs(dev, irq, from, cnt, node, THIS_MODULE, NULL)
+ 
+ #define devm_irq_alloc_desc(dev, node)				\
+-	devm_irq_alloc_descs(dev, -1, 0, 1, node)
++	devm_irq_alloc_descs(dev, -1, 1, 1, node)
+ 
+ #define devm_irq_alloc_desc_at(dev, at, node)			\
+ 	devm_irq_alloc_descs(dev, at, at, 1, node)
+diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
+index 629abaf25681d..21f21f7f878ce 100644
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -251,7 +251,7 @@ extern void kprobes_inc_nmissed_count(struct kprobe *p);
+ extern bool arch_within_kprobe_blacklist(unsigned long addr);
+ extern int arch_populate_kprobe_blacklist(void);
+ extern bool arch_kprobe_on_func_entry(unsigned long offset);
+-extern bool kprobe_on_func_entry(kprobe_opcode_t *addr, const char *sym, unsigned long offset);
++extern int kprobe_on_func_entry(kprobe_opcode_t *addr, const char *sym, unsigned long offset);
+ 
+ extern bool within_kprobe_blacklist(unsigned long addr);
+ extern int kprobe_add_ksym_blacklist(unsigned long entry);
+diff --git a/include/linux/msi.h b/include/linux/msi.h
+index 6b584cc4757cd..2a3e997751cea 100644
+--- a/include/linux/msi.h
++++ b/include/linux/msi.h
+@@ -139,6 +139,12 @@ struct msi_desc {
+ 	list_for_each_entry((desc), dev_to_msi_list((dev)), list)
+ #define for_each_msi_entry_safe(desc, tmp, dev)	\
+ 	list_for_each_entry_safe((desc), (tmp), dev_to_msi_list((dev)), list)
++#define for_each_msi_vector(desc, __irq, dev)				\
++	for_each_msi_entry((desc), (dev))				\
++		if ((desc)->irq)					\
++			for (__irq = (desc)->irq;			\
++			     __irq < ((desc)->irq + (desc)->nvec_used);	\
++			     __irq++)
+ 
+ #ifdef CONFIG_IRQ_MSI_IOMMU
+ static inline const void *msi_desc_get_iommu_cookie(struct msi_desc *desc)
+diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
+index 0f21617f1a668..966ed89803274 100644
+--- a/include/linux/tracepoint.h
++++ b/include/linux/tracepoint.h
+@@ -307,11 +307,13 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
+ 									\
+ 		it_func_ptr =						\
+ 			rcu_dereference_raw((&__tracepoint_##_name)->funcs); \
+-		do {							\
+-			it_func = (it_func_ptr)->func;			\
+-			__data = (it_func_ptr)->data;			\
+-			((void(*)(void *, proto))(it_func))(__data, args); \
+-		} while ((++it_func_ptr)->func);			\
++		if (it_func_ptr) {					\
++			do {						\
++				it_func = (it_func_ptr)->func;		\
++				__data = (it_func_ptr)->data;		\
++				((void(*)(void *, proto))(it_func))(__data, args); \
++			} while ((++it_func_ptr)->func);		\
++		}							\
+ 		return 0;						\
+ 	}								\
+ 	DEFINE_STATIC_CALL(tp_func_##_name, __traceiter_##_name);
+diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
+index 938eaf9517e26..76dad53a410ac 100644
+--- a/include/linux/vmalloc.h
++++ b/include/linux/vmalloc.h
+@@ -24,7 +24,8 @@ struct notifier_block;		/* in notifier.h */
+ #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
+ #define VM_NO_GUARD		0x00000040      /* don't add guard page */
+ #define VM_KASAN		0x00000080      /* has allocated kasan shadow memory */
+-#define VM_MAP_PUT_PAGES	0x00000100	/* put pages and free array in vfree */
++#define VM_FLUSH_RESET_PERMS	0x00000100	/* reset direct map and flush TLB on unmap, can't be freed in atomic context */
++#define VM_MAP_PUT_PAGES	0x00000200	/* put pages and free array in vfree */
+ 
+ /*
+  * VM_KASAN is used slighly differently depending on CONFIG_KASAN_VMALLOC.
+@@ -37,12 +38,6 @@ struct notifier_block;		/* in notifier.h */
+  * determine which allocations need the module shadow freed.
+  */
+ 
+-/*
+- * Memory with VM_FLUSH_RESET_PERMS cannot be freed in an interrupt or with
+- * vfree_atomic().
+- */
+-#define VM_FLUSH_RESET_PERMS	0x00000100      /* Reset direct map and flush TLB on unmap */
+-
+ /* bits [20..32] reserved for arch specific ioremap internals */
+ 
+ /*
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index d8fd8676fc724..3648164faa060 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -1155,7 +1155,7 @@ static inline struct Qdisc *qdisc_replace(struct Qdisc *sch, struct Qdisc *new,
+ 	old = *pold;
+ 	*pold = new;
+ 	if (old != NULL)
+-		qdisc_tree_flush_backlog(old);
++		qdisc_purge_queue(old);
+ 	sch_tree_unlock(sch);
+ 
+ 	return old;
+diff --git a/include/net/udp.h b/include/net/udp.h
+index 295d52a735982..949ae14a54250 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -178,7 +178,7 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
+ int udp_gro_complete(struct sk_buff *skb, int nhoff, udp_lookup_t lookup);
+ 
+ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+-				  netdev_features_t features);
++				  netdev_features_t features, bool is_ipv6);
+ 
+ static inline struct udphdr *udp_gro_udphdr(struct sk_buff *skb)
+ {
+diff --git a/init/init_task.c b/init/init_task.c
+index 15f6eb93a04fa..16d14c2ebb552 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -198,7 +198,8 @@ struct task_struct init_task
+ 	.lockdep_recursion = 0,
+ #endif
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+-	.ret_stack	= NULL,
++	.ret_stack		= NULL,
++	.tracing_graph_pause	= ATOMIC_INIT(0),
+ #endif
+ #if defined(CONFIG_TRACING) && defined(CONFIG_PREEMPTION)
+ 	.trace_recursion = 0,
+diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c
+index dbc1dbdd2cbf0..c2a501cd90eba 100644
+--- a/kernel/bpf/bpf_inode_storage.c
++++ b/kernel/bpf/bpf_inode_storage.c
+@@ -125,8 +125,12 @@ static int bpf_fd_inode_storage_update_elem(struct bpf_map *map, void *key,
+ 
+ 	fd = *(int *)key;
+ 	f = fget_raw(fd);
+-	if (!f || !inode_storage_ptr(f->f_inode))
++	if (!f)
++		return -EBADF;
++	if (!inode_storage_ptr(f->f_inode)) {
++		fput(f);
+ 		return -EBADF;
++	}
+ 
+ 	sdata = bpf_local_storage_update(f->f_inode,
+ 					 (struct bpf_local_storage_map *)map,
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 96555a8a2c545..6aa9e10c6335a 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -1442,6 +1442,11 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 			goto out;
+ 		}
+ 
++		if (ctx.optlen < 0) {
++			ret = -EFAULT;
++			goto out;
++		}
++
+ 		if (copy_from_user(ctx.optval, optval,
+ 				   min(ctx.optlen, max_optlen)) != 0) {
+ 			ret = -EFAULT;
+@@ -1459,7 +1464,7 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 		goto out;
+ 	}
+ 
+-	if (ctx.optlen > max_optlen) {
++	if (ctx.optlen > max_optlen || ctx.optlen < 0) {
+ 		ret = -EFAULT;
+ 		goto out;
+ 	}
+diff --git a/kernel/bpf/preload/Makefile b/kernel/bpf/preload/Makefile
+index 23ee310b6eb49..1951332dd15f5 100644
+--- a/kernel/bpf/preload/Makefile
++++ b/kernel/bpf/preload/Makefile
+@@ -4,8 +4,11 @@ LIBBPF_SRCS = $(srctree)/tools/lib/bpf/
+ LIBBPF_A = $(obj)/libbpf.a
+ LIBBPF_OUT = $(abspath $(obj))
+ 
++# Although not in use by libbpf's Makefile, set $(O) so that the "dummy" test
++# in tools/scripts/Makefile.include always succeeds when building the kernel
++# with $(O) pointing to a relative path, as in "make O=build bindeb-pkg".
+ $(LIBBPF_A):
+-	$(Q)$(MAKE) -C $(LIBBPF_SRCS) OUTPUT=$(LIBBPF_OUT)/ $(LIBBPF_OUT)/libbpf.a
++	$(Q)$(MAKE) -C $(LIBBPF_SRCS) O=$(LIBBPF_OUT)/ OUTPUT=$(LIBBPF_OUT)/ $(LIBBPF_OUT)/libbpf.a
+ 
+ userccflags += -I $(srctree)/tools/include/ -I $(srctree)/tools/include/uapi \
+ 	-I $(srctree)/tools/lib/ -Wno-unused-result
+diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
+index 2c0c4d6d0f83a..d924676c8781b 100644
+--- a/kernel/irq/msi.c
++++ b/kernel/irq/msi.c
+@@ -436,22 +436,22 @@ int __msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
+ 
+ 	can_reserve = msi_check_reservation_mode(domain, info, dev);
+ 
+-	for_each_msi_entry(desc, dev) {
+-		virq = desc->irq;
+-		if (desc->nvec_used == 1)
+-			dev_dbg(dev, "irq %d for MSI\n", virq);
+-		else
++	/*
++	 * This flag is set by the PCI layer as we need to activate
++	 * the MSI entries before the PCI layer enables MSI in the
++	 * card. Otherwise the card latches a random msi message.
++	 */
++	if (!(info->flags & MSI_FLAG_ACTIVATE_EARLY))
++		goto skip_activate;
++
++	for_each_msi_vector(desc, i, dev) {
++		if (desc->irq == i) {
++			virq = desc->irq;
+ 			dev_dbg(dev, "irq [%d-%d] for MSI\n",
+ 				virq, virq + desc->nvec_used - 1);
+-		/*
+-		 * This flag is set by the PCI layer as we need to activate
+-		 * the MSI entries before the PCI layer enables MSI in the
+-		 * card. Otherwise the card latches a random msi message.
+-		 */
+-		if (!(info->flags & MSI_FLAG_ACTIVATE_EARLY))
+-			continue;
++		}
+ 
+-		irq_data = irq_domain_get_irq_data(domain, desc->irq);
++		irq_data = irq_domain_get_irq_data(domain, i);
+ 		if (!can_reserve) {
+ 			irqd_clr_can_reserve(irq_data);
+ 			if (domain->flags & IRQ_DOMAIN_MSI_NOMASK_QUIRK)
+@@ -462,28 +462,24 @@ int __msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
+ 			goto cleanup;
+ 	}
+ 
++skip_activate:
+ 	/*
+ 	 * If these interrupts use reservation mode, clear the activated bit
+ 	 * so request_irq() will assign the final vector.
+ 	 */
+ 	if (can_reserve) {
+-		for_each_msi_entry(desc, dev) {
+-			irq_data = irq_domain_get_irq_data(domain, desc->irq);
++		for_each_msi_vector(desc, i, dev) {
++			irq_data = irq_domain_get_irq_data(domain, i);
+ 			irqd_clr_activated(irq_data);
+ 		}
+ 	}
+ 	return 0;
+ 
+ cleanup:
+-	for_each_msi_entry(desc, dev) {
+-		struct irq_data *irqd;
+-
+-		if (desc->irq == virq)
+-			break;
+-
+-		irqd = irq_domain_get_irq_data(domain, desc->irq);
+-		if (irqd_is_activated(irqd))
+-			irq_domain_deactivate_irq(irqd);
++	for_each_msi_vector(desc, i, dev) {
++		irq_data = irq_domain_get_irq_data(domain, i);
++		if (irqd_is_activated(irq_data))
++			irq_domain_deactivate_irq(irq_data);
+ 	}
+ 	msi_domain_free_irqs(domain, dev);
+ 	return ret;
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 41fdbb7953c60..911c77ef5bbcd 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -2082,28 +2082,48 @@ bool __weak arch_kprobe_on_func_entry(unsigned long offset)
+ 	return !offset;
+ }
+ 
+-bool kprobe_on_func_entry(kprobe_opcode_t *addr, const char *sym, unsigned long offset)
++/**
++ * kprobe_on_func_entry() -- check whether given address is function entry
++ * @addr: Target address
++ * @sym:  Target symbol name
++ * @offset: The offset from the symbol or the address
++ *
++ * This checks whether the given @addr+@offset or @sym+@offset is on the
++ * function entry address or not.
++ * This returns 0 if it is the function entry, or -EINVAL if it is not.
++ * And also it returns -ENOENT if it fails the symbol or address lookup.
++ * Caller must pass @addr or @sym (either one must be NULL), or this
++ * returns -EINVAL.
++ */
++int kprobe_on_func_entry(kprobe_opcode_t *addr, const char *sym, unsigned long offset)
+ {
+ 	kprobe_opcode_t *kp_addr = _kprobe_addr(addr, sym, offset);
+ 
+ 	if (IS_ERR(kp_addr))
+-		return false;
++		return PTR_ERR(kp_addr);
+ 
+-	if (!kallsyms_lookup_size_offset((unsigned long)kp_addr, NULL, &offset) ||
+-						!arch_kprobe_on_func_entry(offset))
+-		return false;
++	if (!kallsyms_lookup_size_offset((unsigned long)kp_addr, NULL, &offset))
++		return -ENOENT;
+ 
+-	return true;
++	if (!arch_kprobe_on_func_entry(offset))
++		return -EINVAL;
++
++	return 0;
+ }
+ 
+ int register_kretprobe(struct kretprobe *rp)
+ {
+-	int ret = 0;
++	int ret;
+ 	struct kretprobe_instance *inst;
+ 	int i;
+ 	void *addr;
+ 
+-	if (!kprobe_on_func_entry(rp->kp.addr, rp->kp.symbol_name, rp->kp.offset))
++	ret = kprobe_on_func_entry(rp->kp.addr, rp->kp.symbol_name, rp->kp.offset);
++	if (ret)
++		return ret;
++
++	/* If only rp->kp.addr is specified, check reregistering kprobes */
++	if (rp->kp.addr && check_kprobe_rereg(&rp->kp))
+ 		return -EINVAL;
+ 
+ 	if (kretprobe_blacklist_size) {
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index 5658f13037b3d..a58da91eadb5c 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -395,7 +395,6 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
+ 		}
+ 
+ 		if (t->ret_stack == NULL) {
+-			atomic_set(&t->tracing_graph_pause, 0);
+ 			atomic_set(&t->trace_overrun, 0);
+ 			t->curr_ret_stack = -1;
+ 			t->curr_ret_depth = -1;
+@@ -490,7 +489,6 @@ static DEFINE_PER_CPU(struct ftrace_ret_stack *, idle_ret_stack);
+ static void
+ graph_init_task(struct task_struct *t, struct ftrace_ret_stack *ret_stack)
+ {
+-	atomic_set(&t->tracing_graph_pause, 0);
+ 	atomic_set(&t->trace_overrun, 0);
+ 	t->ftrace_timestamp = 0;
+ 	/* make curr_ret_stack visible before we add the ret_stack */
+diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
+index 10bbb0f381d56..ee4571b624bcb 100644
+--- a/kernel/trace/trace_irqsoff.c
++++ b/kernel/trace/trace_irqsoff.c
+@@ -562,6 +562,8 @@ static int __irqsoff_tracer_init(struct trace_array *tr)
+ 	/* non overwrite screws up the latency tracers */
+ 	set_tracer_flag(tr, TRACE_ITER_OVERWRITE, 1);
+ 	set_tracer_flag(tr, TRACE_ITER_LATENCY_FMT, 1);
++	/* without pause, we will produce garbage if another latency occurs */
++	set_tracer_flag(tr, TRACE_ITER_PAUSE_ON_TRACE, 1);
+ 
+ 	tr->max_latency = 0;
+ 	irqsoff_trace = tr;
+@@ -583,11 +585,13 @@ static void __irqsoff_tracer_reset(struct trace_array *tr)
+ {
+ 	int lat_flag = save_flags & TRACE_ITER_LATENCY_FMT;
+ 	int overwrite_flag = save_flags & TRACE_ITER_OVERWRITE;
++	int pause_flag = save_flags & TRACE_ITER_PAUSE_ON_TRACE;
+ 
+ 	stop_irqsoff_tracer(tr, is_graph(tr));
+ 
+ 	set_tracer_flag(tr, TRACE_ITER_LATENCY_FMT, lat_flag);
+ 	set_tracer_flag(tr, TRACE_ITER_OVERWRITE, overwrite_flag);
++	set_tracer_flag(tr, TRACE_ITER_PAUSE_ON_TRACE, pause_flag);
+ 	ftrace_reset_array_ops(tr);
+ 
+ 	irqsoff_busy = false;
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 5fff39541b8ae..68150b9cbde92 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -221,9 +221,9 @@ bool trace_kprobe_on_func_entry(struct trace_event_call *call)
+ {
+ 	struct trace_kprobe *tk = trace_kprobe_primary_from_call(call);
+ 
+-	return tk ? kprobe_on_func_entry(tk->rp.kp.addr,
++	return tk ? (kprobe_on_func_entry(tk->rp.kp.addr,
+ 			tk->rp.kp.addr ? NULL : tk->rp.kp.symbol_name,
+-			tk->rp.kp.addr ? 0 : tk->rp.kp.offset) : false;
++			tk->rp.kp.addr ? 0 : tk->rp.kp.offset) == 0) : false;
+ }
+ 
+ bool trace_kprobe_error_injectable(struct trace_event_call *call)
+@@ -828,9 +828,11 @@ static int trace_kprobe_create(int argc, const char *argv[])
+ 		}
+ 		if (is_return)
+ 			flags |= TPARG_FL_RETURN;
+-		if (kprobe_on_func_entry(NULL, symbol, offset))
++		ret = kprobe_on_func_entry(NULL, symbol, offset);
++		if (ret == 0)
+ 			flags |= TPARG_FL_FENTRY;
+-		if (offset && is_return && !(flags & TPARG_FL_FENTRY)) {
++		/* Defer the ENOENT case until register kprobe */
++		if (ret == -EINVAL && is_return) {
+ 			trace_probe_log_err(0, BAD_RETPROBE);
+ 			goto parse_error;
+ 		}
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 13cb7a961b319..0846d4ffa3387 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -1302,7 +1302,7 @@ fast_isolate_freepages(struct compact_control *cc)
+ {
+ 	unsigned int limit = min(1U, freelist_scan_limit(cc) >> 1);
+ 	unsigned int nr_scanned = 0;
+-	unsigned long low_pfn, min_pfn, high_pfn = 0, highest = 0;
++	unsigned long low_pfn, min_pfn, highest = 0;
+ 	unsigned long nr_isolated = 0;
+ 	unsigned long distance;
+ 	struct page *page = NULL;
+@@ -1347,6 +1347,7 @@ fast_isolate_freepages(struct compact_control *cc)
+ 		struct page *freepage;
+ 		unsigned long flags;
+ 		unsigned int order_scanned = 0;
++		unsigned long high_pfn = 0;
+ 
+ 		if (!area->nr_free)
+ 			continue;
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 0b2067b3c3283..125b69f59caad 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -835,6 +835,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
+ 	XA_STATE(xas, &mapping->i_pages, offset);
+ 	int huge = PageHuge(page);
+ 	int error;
++	bool charged = false;
+ 
+ 	VM_BUG_ON_PAGE(!PageLocked(page), page);
+ 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+@@ -848,6 +849,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
+ 		error = mem_cgroup_charge(page, current->mm, gfp);
+ 		if (error)
+ 			goto error;
++		charged = true;
+ 	}
+ 
+ 	gfp &= GFP_RECLAIM_MASK;
+@@ -896,6 +898,8 @@ unlock:
+ 
+ 	if (xas_error(&xas)) {
+ 		error = xas_error(&xas);
++		if (charged)
++			mem_cgroup_uncharge(page);
+ 		goto error;
+ 	}
+ 
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 85eda66eb625d..4a78514830d5a 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2188,7 +2188,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ {
+ 	spinlock_t *ptl;
+ 	struct mmu_notifier_range range;
+-	bool was_locked = false;
++	bool do_unlock_page = false;
+ 	pmd_t _pmd;
+ 
+ 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+@@ -2204,7 +2204,6 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ 	VM_BUG_ON(freeze && !page);
+ 	if (page) {
+ 		VM_WARN_ON_ONCE(!PageLocked(page));
+-		was_locked = true;
+ 		if (page != pmd_page(*pmd))
+ 			goto out;
+ 	}
+@@ -2213,19 +2212,29 @@ repeat:
+ 	if (pmd_trans_huge(*pmd)) {
+ 		if (!page) {
+ 			page = pmd_page(*pmd);
+-			if (unlikely(!trylock_page(page))) {
+-				get_page(page);
+-				_pmd = *pmd;
+-				spin_unlock(ptl);
+-				lock_page(page);
+-				spin_lock(ptl);
+-				if (unlikely(!pmd_same(*pmd, _pmd))) {
+-					unlock_page(page);
++			/*
++			 * An anonymous page must be locked, to ensure that a
++			 * concurrent reuse_swap_page() sees stable mapcount;
++			 * but reuse_swap_page() is not used on shmem or file,
++			 * and page lock must not be taken when zap_pmd_range()
++			 * calls __split_huge_pmd() while i_mmap_lock is held.
++			 */
++			if (PageAnon(page)) {
++				if (unlikely(!trylock_page(page))) {
++					get_page(page);
++					_pmd = *pmd;
++					spin_unlock(ptl);
++					lock_page(page);
++					spin_lock(ptl);
++					if (unlikely(!pmd_same(*pmd, _pmd))) {
++						unlock_page(page);
++						put_page(page);
++						page = NULL;
++						goto repeat;
++					}
+ 					put_page(page);
+-					page = NULL;
+-					goto repeat;
+ 				}
+-				put_page(page);
++				do_unlock_page = true;
+ 			}
+ 		}
+ 		if (PageMlocked(page))
+@@ -2235,7 +2244,7 @@ repeat:
+ 	__split_huge_pmd_locked(vma, pmd, range.start, freeze);
+ out:
+ 	spin_unlock(ptl);
+-	if (!was_locked && page)
++	if (do_unlock_page)
+ 		unlock_page(page);
+ 	/*
+ 	 * No need to double call mmu_notifier->invalidate_range() callback.
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 9a3f06cdcc2a8..26909396898b6 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -79,6 +79,21 @@ DEFINE_SPINLOCK(hugetlb_lock);
+ static int num_fault_mutexes;
+ struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;
+ 
++static inline bool PageHugeFreed(struct page *head)
++{
++	return page_private(head + 4) == -1UL;
++}
++
++static inline void SetPageHugeFreed(struct page *head)
++{
++	set_page_private(head + 4, -1UL);
++}
++
++static inline void ClearPageHugeFreed(struct page *head)
++{
++	set_page_private(head + 4, 0);
++}
++
+ /* Forward declaration */
+ static int hugetlb_acct_memory(struct hstate *h, long delta);
+ 
+@@ -1028,6 +1043,7 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
+ 	list_move(&page->lru, &h->hugepage_freelists[nid]);
+ 	h->free_huge_pages++;
+ 	h->free_huge_pages_node[nid]++;
++	SetPageHugeFreed(page);
+ }
+ 
+ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
+@@ -1044,6 +1060,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
+ 
+ 		list_move(&page->lru, &h->hugepage_activelist);
+ 		set_page_refcounted(page);
++		ClearPageHugeFreed(page);
+ 		h->free_huge_pages--;
+ 		h->free_huge_pages_node[nid]--;
+ 		return page;
+@@ -1344,12 +1361,11 @@ struct hstate *size_to_hstate(unsigned long size)
+  */
+ bool page_huge_active(struct page *page)
+ {
+-	VM_BUG_ON_PAGE(!PageHuge(page), page);
+-	return PageHead(page) && PagePrivate(&page[1]);
++	return PageHeadHuge(page) && PagePrivate(&page[1]);
+ }
+ 
+ /* never called for tail page */
+-static void set_page_huge_active(struct page *page)
++void set_page_huge_active(struct page *page)
+ {
+ 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
+ 	SetPagePrivate(&page[1]);
+@@ -1505,6 +1521,7 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
+ 	spin_lock(&hugetlb_lock);
+ 	h->nr_huge_pages++;
+ 	h->nr_huge_pages_node[nid]++;
++	ClearPageHugeFreed(page);
+ 	spin_unlock(&hugetlb_lock);
+ }
+ 
+@@ -1755,6 +1772,7 @@ int dissolve_free_huge_page(struct page *page)
+ {
+ 	int rc = -EBUSY;
+ 
++retry:
+ 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
+ 	if (!PageHuge(page))
+ 		return 0;
+@@ -1771,6 +1789,26 @@ int dissolve_free_huge_page(struct page *page)
+ 		int nid = page_to_nid(head);
+ 		if (h->free_huge_pages - h->resv_huge_pages == 0)
+ 			goto out;
++
++		/*
++		 * We should make sure that the page is already on the free list
++		 * when it is dissolved.
++		 */
++		if (unlikely(!PageHugeFreed(head))) {
++			spin_unlock(&hugetlb_lock);
++			cond_resched();
++
++			/*
++			 * Theoretically, we should return -EBUSY when we
++			 * encounter this race. In fact, we have a chance
++			 * to successfully dissolve the page if we do a
++			 * retry. Because the race window is quite small.
++			 * If we seize this opportunity, it is an optimization
++			 * for increasing the success rate of dissolving page.
++			 */
++			goto retry;
++		}
++
+ 		/*
+ 		 * Move PageHWPoison flag from head page to the raw error page,
+ 		 * which makes any subpages rather than the error page reusable.
+@@ -5556,9 +5594,9 @@ bool isolate_huge_page(struct page *page, struct list_head *list)
+ {
+ 	bool ret = true;
+ 
+-	VM_BUG_ON_PAGE(!PageHead(page), page);
+ 	spin_lock(&hugetlb_lock);
+-	if (!page_huge_active(page) || !get_page_unless_zero(page)) {
++	if (!PageHeadHuge(page) || !page_huge_active(page) ||
++	    !get_page_unless_zero(page)) {
+ 		ret = false;
+ 		goto unlock;
+ 	}
+diff --git a/mm/memblock.c b/mm/memblock.c
+index b68ee86788af9..10bd7d1ef0f49 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -275,14 +275,6 @@ __memblock_find_range_top_down(phys_addr_t start, phys_addr_t end,
+  *
+  * Find @size free area aligned to @align in the specified range and node.
+  *
+- * When allocation direction is bottom-up, the @start should be greater
+- * than the end of the kernel image. Otherwise, it will be trimmed. The
+- * reason is that we want the bottom-up allocation just near the kernel
+- * image so it is highly likely that the allocated memory and the kernel
+- * will reside in the same node.
+- *
+- * If bottom-up allocation failed, will try to allocate memory top-down.
+- *
+  * Return:
+  * Found address on success, 0 on failure.
+  */
+@@ -291,8 +283,6 @@ static phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
+ 					phys_addr_t end, int nid,
+ 					enum memblock_flags flags)
+ {
+-	phys_addr_t kernel_end, ret;
+-
+ 	/* pump up @end */
+ 	if (end == MEMBLOCK_ALLOC_ACCESSIBLE ||
+ 	    end == MEMBLOCK_ALLOC_KASAN)
+@@ -301,40 +291,13 @@ static phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
+ 	/* avoid allocating the first page */
+ 	start = max_t(phys_addr_t, start, PAGE_SIZE);
+ 	end = max(start, end);
+-	kernel_end = __pa_symbol(_end);
+-
+-	/*
+-	 * try bottom-up allocation only when bottom-up mode
+-	 * is set and @end is above the kernel image.
+-	 */
+-	if (memblock_bottom_up() && end > kernel_end) {
+-		phys_addr_t bottom_up_start;
+-
+-		/* make sure we will allocate above the kernel */
+-		bottom_up_start = max(start, kernel_end);
+ 
+-		/* ok, try bottom-up allocation first */
+-		ret = __memblock_find_range_bottom_up(bottom_up_start, end,
+-						      size, align, nid, flags);
+-		if (ret)
+-			return ret;
+-
+-		/*
+-		 * we always limit bottom-up allocation above the kernel,
+-		 * but top-down allocation doesn't have the limit, so
+-		 * retrying top-down allocation may succeed when bottom-up
+-		 * allocation failed.
+-		 *
+-		 * bottom-up allocation is expected to be fail very rarely,
+-		 * so we use WARN_ONCE() here to see the stack trace if
+-		 * fail happens.
+-		 */
+-		WARN_ONCE(IS_ENABLED(CONFIG_MEMORY_HOTREMOVE),
+-			  "memblock: bottom-up allocation failed, memory hotremove may be affected\n");
+-	}
+-
+-	return __memblock_find_range_top_down(start, end, size, align, nid,
+-					      flags);
++	if (memblock_bottom_up())
++		return __memblock_find_range_bottom_up(start, end, size, align,
++						       nid, flags);
++	else
++		return __memblock_find_range_top_down(start, end, size, align,
++						      nid, flags);
+ }
+ 
+ /**
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 9500d28a43b0e..2fe4bbb6b80cf 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -1245,13 +1245,14 @@ static int __neigh_update(struct neighbour *neigh, const u8 *lladdr,
+ 	old    = neigh->nud_state;
+ 	err    = -EPERM;
+ 
+-	if (!(flags & NEIGH_UPDATE_F_ADMIN) &&
+-	    (old & (NUD_NOARP | NUD_PERMANENT)))
+-		goto out;
+ 	if (neigh->dead) {
+ 		NL_SET_ERR_MSG(extack, "Neighbor entry is now dead");
++		new = old;
+ 		goto out;
+ 	}
++	if (!(flags & NEIGH_UPDATE_F_ADMIN) &&
++	    (old & (NUD_NOARP | NUD_PERMANENT)))
++		goto out;
+ 
+ 	ext_learn_change = neigh_update_ext_learned(neigh, flags, &notify);
+ 
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 64594aa755f05..76a420c76f16e 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -317,7 +317,7 @@ static int ip_tunnel_bind_dev(struct net_device *dev)
+ 	}
+ 
+ 	dev->needed_headroom = t_hlen + hlen;
+-	mtu -= (dev->hard_header_len + t_hlen);
++	mtu -= t_hlen;
+ 
+ 	if (mtu < IPV4_MIN_MTU)
+ 		mtu = IPV4_MIN_MTU;
+@@ -347,7 +347,7 @@ static struct ip_tunnel *ip_tunnel_create(struct net *net,
+ 	nt = netdev_priv(dev);
+ 	t_hlen = nt->hlen + sizeof(struct iphdr);
+ 	dev->min_mtu = ETH_MIN_MTU;
+-	dev->max_mtu = IP_MAX_MTU - dev->hard_header_len - t_hlen;
++	dev->max_mtu = IP_MAX_MTU - t_hlen;
+ 	ip_tunnel_add(itn, nt);
+ 	return nt;
+ 
+@@ -488,11 +488,10 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
+ 	int mtu;
+ 
+ 	tunnel_hlen = md ? tunnel_hlen : tunnel->hlen;
+-	pkt_size = skb->len - tunnel_hlen - dev->hard_header_len;
++	pkt_size = skb->len - tunnel_hlen;
+ 
+ 	if (df)
+-		mtu = dst_mtu(&rt->dst) - dev->hard_header_len
+-					- sizeof(struct iphdr) - tunnel_hlen;
++		mtu = dst_mtu(&rt->dst) - (sizeof(struct iphdr) + tunnel_hlen);
+ 	else
+ 		mtu = skb_valid_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
+ 
+@@ -972,7 +971,7 @@ int __ip_tunnel_change_mtu(struct net_device *dev, int new_mtu, bool strict)
+ {
+ 	struct ip_tunnel *tunnel = netdev_priv(dev);
+ 	int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+-	int max_mtu = IP_MAX_MTU - dev->hard_header_len - t_hlen;
++	int max_mtu = IP_MAX_MTU - t_hlen;
+ 
+ 	if (new_mtu < ETH_MIN_MTU)
+ 		return -EINVAL;
+@@ -1149,10 +1148,9 @@ int ip_tunnel_newlink(struct net_device *dev, struct nlattr *tb[],
+ 
+ 	mtu = ip_tunnel_bind_dev(dev);
+ 	if (tb[IFLA_MTU]) {
+-		unsigned int max = IP_MAX_MTU - dev->hard_header_len - nt->hlen;
++		unsigned int max = IP_MAX_MTU - (nt->hlen + sizeof(struct iphdr));
+ 
+-		mtu = clamp(dev->mtu, (unsigned int)ETH_MIN_MTU,
+-			    (unsigned int)(max - sizeof(struct iphdr)));
++		mtu = clamp(dev->mtu, (unsigned int)ETH_MIN_MTU, max);
+ 	}
+ 
+ 	err = dev_set_mtu(dev, mtu);
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index c62805cd31319..cfdaac4a57e41 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -184,8 +184,67 @@ out_unlock:
+ }
+ EXPORT_SYMBOL(skb_udp_tunnel_segment);
+ 
++static void __udpv4_gso_segment_csum(struct sk_buff *seg,
++				     __be32 *oldip, __be32 *newip,
++				     __be16 *oldport, __be16 *newport)
++{
++	struct udphdr *uh;
++	struct iphdr *iph;
++
++	if (*oldip == *newip && *oldport == *newport)
++		return;
++
++	uh = udp_hdr(seg);
++	iph = ip_hdr(seg);
++
++	if (uh->check) {
++		inet_proto_csum_replace4(&uh->check, seg, *oldip, *newip,
++					 true);
++		inet_proto_csum_replace2(&uh->check, seg, *oldport, *newport,
++					 false);
++		if (!uh->check)
++			uh->check = CSUM_MANGLED_0;
++	}
++	*oldport = *newport;
++
++	csum_replace4(&iph->check, *oldip, *newip);
++	*oldip = *newip;
++}
++
++static struct sk_buff *__udpv4_gso_segment_list_csum(struct sk_buff *segs)
++{
++	struct sk_buff *seg;
++	struct udphdr *uh, *uh2;
++	struct iphdr *iph, *iph2;
++
++	seg = segs;
++	uh = udp_hdr(seg);
++	iph = ip_hdr(seg);
++
++	if ((udp_hdr(seg)->dest == udp_hdr(seg->next)->dest) &&
++	    (udp_hdr(seg)->source == udp_hdr(seg->next)->source) &&
++	    (ip_hdr(seg)->daddr == ip_hdr(seg->next)->daddr) &&
++	    (ip_hdr(seg)->saddr == ip_hdr(seg->next)->saddr))
++		return segs;
++
++	while ((seg = seg->next)) {
++		uh2 = udp_hdr(seg);
++		iph2 = ip_hdr(seg);
++
++		__udpv4_gso_segment_csum(seg,
++					 &iph2->saddr, &iph->saddr,
++					 &uh2->source, &uh->source);
++		__udpv4_gso_segment_csum(seg,
++					 &iph2->daddr, &iph->daddr,
++					 &uh2->dest, &uh->dest);
++	}
++
++	return segs;
++}
++
+ static struct sk_buff *__udp_gso_segment_list(struct sk_buff *skb,
+-					      netdev_features_t features)
++					      netdev_features_t features,
++					      bool is_ipv6)
+ {
+ 	unsigned int mss = skb_shinfo(skb)->gso_size;
+ 
+@@ -195,11 +254,11 @@ static struct sk_buff *__udp_gso_segment_list(struct sk_buff *skb,
+ 
+ 	udp_hdr(skb)->len = htons(sizeof(struct udphdr) + mss);
+ 
+-	return skb;
++	return is_ipv6 ? skb : __udpv4_gso_segment_list_csum(skb);
+ }
+ 
+ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+-				  netdev_features_t features)
++				  netdev_features_t features, bool is_ipv6)
+ {
+ 	struct sock *sk = gso_skb->sk;
+ 	unsigned int sum_truesize = 0;
+@@ -211,7 +270,7 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+ 	__be16 newlen;
+ 
+ 	if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST)
+-		return __udp_gso_segment_list(gso_skb, features);
++		return __udp_gso_segment_list(gso_skb, features, is_ipv6);
+ 
+ 	mss = skb_shinfo(gso_skb)->gso_size;
+ 	if (gso_skb->len <= sizeof(*uh) + mss)
+@@ -325,7 +384,7 @@ static struct sk_buff *udp4_ufo_fragment(struct sk_buff *skb,
+ 		goto out;
+ 
+ 	if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4)
+-		return __udp_gso_segment(skb, features);
++		return __udp_gso_segment(skb, features, false);
+ 
+ 	mss = skb_shinfo(skb)->gso_size;
+ 	if (unlikely(skb->len <= mss))
+diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c
+index f9e888d1b9af8..ebee748f25b9e 100644
+--- a/net/ipv6/udp_offload.c
++++ b/net/ipv6/udp_offload.c
+@@ -46,7 +46,7 @@ static struct sk_buff *udp6_ufo_fragment(struct sk_buff *skb,
+ 			goto out;
+ 
+ 		if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4)
+-			return __udp_gso_segment(skb, features);
++			return __udp_gso_segment(skb, features, true);
+ 
+ 		/* Do software UFO. Complete and fill in the UDP checksum as HW cannot
+ 		 * do checksum of UDP packets sent as multiple IP fragments.
+diff --git a/net/lapb/lapb_out.c b/net/lapb/lapb_out.c
+index 7a4d0715d1c32..a966d29c772d9 100644
+--- a/net/lapb/lapb_out.c
++++ b/net/lapb/lapb_out.c
+@@ -82,7 +82,8 @@ void lapb_kick(struct lapb_cb *lapb)
+ 		skb = skb_dequeue(&lapb->write_queue);
+ 
+ 		do {
+-			if ((skbn = skb_clone(skb, GFP_ATOMIC)) == NULL) {
++			skbn = skb_copy(skb, GFP_ATOMIC);
++			if (!skbn) {
+ 				skb_queue_head(&lapb->write_queue, skb);
+ 				break;
+ 			}
+diff --git a/net/mac80211/driver-ops.c b/net/mac80211/driver-ops.c
+index c9a8a2433e8ac..48322e45e7ddb 100644
+--- a/net/mac80211/driver-ops.c
++++ b/net/mac80211/driver-ops.c
+@@ -125,8 +125,11 @@ int drv_sta_state(struct ieee80211_local *local,
+ 	} else if (old_state == IEEE80211_STA_AUTH &&
+ 		   new_state == IEEE80211_STA_ASSOC) {
+ 		ret = drv_sta_add(local, sdata, &sta->sta);
+-		if (ret == 0)
++		if (ret == 0) {
+ 			sta->uploaded = true;
++			if (rcu_access_pointer(sta->sta.rates))
++				drv_sta_rate_tbl_update(local, sdata, &sta->sta);
++		}
+ 	} else if (old_state == IEEE80211_STA_ASSOC &&
+ 		   new_state == IEEE80211_STA_AUTH) {
+ 		drv_sta_remove(local, sdata, &sta->sta);
+diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c
+index 45927202c71c6..63652c39c8e07 100644
+--- a/net/mac80211/rate.c
++++ b/net/mac80211/rate.c
+@@ -960,7 +960,8 @@ int rate_control_set_rates(struct ieee80211_hw *hw,
+ 	if (old)
+ 		kfree_rcu(old, rcu_head);
+ 
+-	drv_sta_rate_tbl_update(hw_to_local(hw), sta->sdata, pubsta);
++	if (sta->uploaded)
++		drv_sta_rate_tbl_update(hw_to_local(hw), sta->sdata, pubsta);
+ 
+ 	ieee80211_sta_set_expected_throughput(pubsta, sta_get_expected_throughput(sta));
+ 
+diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
+index 0a2f4817ec6cf..41671af6b33f9 100644
+--- a/net/rxrpc/af_rxrpc.c
++++ b/net/rxrpc/af_rxrpc.c
+@@ -990,7 +990,7 @@ static int __init af_rxrpc_init(void)
+ 		goto error_security;
+ 	}
+ 
+-	ret = register_pernet_subsys(&rxrpc_net_ops);
++	ret = register_pernet_device(&rxrpc_net_ops);
+ 	if (ret)
+ 		goto error_pernet;
+ 
+@@ -1035,7 +1035,7 @@ error_key_type:
+ error_sock:
+ 	proto_unregister(&rxrpc_proto);
+ error_proto:
+-	unregister_pernet_subsys(&rxrpc_net_ops);
++	unregister_pernet_device(&rxrpc_net_ops);
+ error_pernet:
+ 	rxrpc_exit_security();
+ error_security:
+@@ -1057,7 +1057,7 @@ static void __exit af_rxrpc_exit(void)
+ 	unregister_key_type(&key_type_rxrpc);
+ 	sock_unregister(PF_RXRPC);
+ 	proto_unregister(&rxrpc_proto);
+-	unregister_pernet_subsys(&rxrpc_net_ops);
++	unregister_pernet_device(&rxrpc_net_ops);
+ 	ASSERTCMP(atomic_read(&rxrpc_n_tx_skbs), ==, 0);
+ 	ASSERTCMP(atomic_read(&rxrpc_n_rx_skbs), ==, 0);
+ 
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 4404c491eb388..fa7b7ae2c2c5f 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1113,14 +1113,15 @@ static int svc_tcp_sendmsg(struct socket *sock, struct msghdr *msg,
+ 		unsigned int offset, len, remaining;
+ 		struct bio_vec *bvec;
+ 
+-		bvec = xdr->bvec;
+-		offset = xdr->page_base;
++		bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT);
++		offset = offset_in_page(xdr->page_base);
+ 		remaining = xdr->page_len;
+ 		flags = MSG_MORE | MSG_SENDPAGE_NOTLAST;
+ 		while (remaining > 0) {
+ 			if (remaining <= PAGE_SIZE && tail->iov_len == 0)
+ 				flags = 0;
+-			len = min(remaining, bvec->bv_len);
++
++			len = min(remaining, bvec->bv_len - offset);
+ 			ret = kernel_sendpage(sock, bvec->bv_page,
+ 					      bvec->bv_offset + offset,
+ 					      len, flags);
+diff --git a/scripts/Makefile b/scripts/Makefile
+index b5418ec587fbd..9de3c03b94aa7 100644
+--- a/scripts/Makefile
++++ b/scripts/Makefile
+@@ -3,6 +3,9 @@
+ # scripts contains sources for various helper programs used throughout
+ # the kernel for the build process.
+ 
++CRYPTO_LIBS = $(shell pkg-config --libs libcrypto 2> /dev/null || echo -lcrypto)
++CRYPTO_CFLAGS = $(shell pkg-config --cflags libcrypto 2> /dev/null)
++
+ hostprogs-always-$(CONFIG_BUILD_BIN2C)			+= bin2c
+ hostprogs-always-$(CONFIG_KALLSYMS)			+= kallsyms
+ hostprogs-always-$(BUILD_C_RECORDMCOUNT)		+= recordmcount
+@@ -14,8 +17,9 @@ hostprogs-always-$(CONFIG_SYSTEM_EXTRA_CERTIFICATE)	+= insert-sys-cert
+ 
+ HOSTCFLAGS_sorttable.o = -I$(srctree)/tools/include
+ HOSTCFLAGS_asn1_compiler.o = -I$(srctree)/include
+-HOSTLDLIBS_sign-file = -lcrypto
+-HOSTLDLIBS_extract-cert = -lcrypto
++HOSTLDLIBS_sign-file = $(CRYPTO_LIBS)
++HOSTCFLAGS_extract-cert.o = $(CRYPTO_CFLAGS)
++HOSTLDLIBS_extract-cert = $(CRYPTO_LIBS)
+ 
+ ifdef CONFIG_UNWINDER_ORC
+ ifeq ($(ARCH),x86_64)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-10 10:23 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-02-10 10:23 UTC (permalink / raw
  To: gentoo-commits

commit:     48021b103f4ba0eb61d3617182f0126b41f720a4
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 10 10:20:37 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Feb 10 10:23:22 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=48021b10

Drop SUNRPC: Fix NFS READs that start at non-page-aligned offsets

Already added upstream

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README                                        |  4 --
 ...RPC-NFS-fix-non-page-aligned-offsets-read.patch | 53 ----------------------
 2 files changed, 57 deletions(-)

diff --git a/0000_README b/0000_README
index 7d03d9d..9214e4d 100644
--- a/0000_README
+++ b/0000_README
@@ -115,10 +115,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2400_SUNRPC-NFS-fix-non-page-aligned-offsets-read.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/net/sunrpc/svcsock.c?id=bad4c6eb5eaa8300e065bd4426727db5141d687d
-Desc:   SUNRPC: Fix NFS READs that start at non-page-aligned offsets
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2400_SUNRPC-NFS-fix-non-page-aligned-offsets-read.patch b/2400_SUNRPC-NFS-fix-non-page-aligned-offsets-read.patch
deleted file mode 100644
index 34d6ebb..0000000
--- a/2400_SUNRPC-NFS-fix-non-page-aligned-offsets-read.patch
+++ /dev/null
@@ -1,53 +0,0 @@
-From bad4c6eb5eaa8300e065bd4426727db5141d687d Mon Sep 17 00:00:00 2001
-From: Chuck Lever <chuck.lever@oracle.com>
-Date: Sun, 31 Jan 2021 16:16:23 -0500
-Subject: SUNRPC: Fix NFS READs that start at non-page-aligned offsets
-
-Anj Duvnjak reports that the Kodi.tv NFS client is not able to read
-video files from a v5.10.11 Linux NFS server.
-
-The new sendpage-based TCP sendto logic was not attentive to non-
-zero page_base values. nfsd_splice_read() sets that field when a
-READ payload starts in the middle of a page.
-
-The Linux NFS client rarely emits an NFS READ that is not page-
-aligned. All of my testing so far has been with Linux clients, so I
-missed this one.
-
-Reported-by: A. Duvnjak <avian@extremenerds.net>
-BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=211471
-Fixes: 4a85a6a3320b ("SUNRPC: Handle TCP socket sends with kernel_sendpage() again")
-Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-Tested-by: A. Duvnjak <avian@extremenerds.net>
----
- net/sunrpc/svcsock.c | 7 ++++---
- 1 file changed, 4 insertions(+), 3 deletions(-)
-
-(limited to 'net/sunrpc/svcsock.c')
-
-diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
-index c9766d07eb81a..5a809c64dc7b9 100644
---- a/net/sunrpc/svcsock.c
-+++ b/net/sunrpc/svcsock.c
-@@ -1113,14 +1113,15 @@ static int svc_tcp_sendmsg(struct socket *sock, struct msghdr *msg,
- 		unsigned int offset, len, remaining;
- 		struct bio_vec *bvec;
- 
--		bvec = xdr->bvec;
--		offset = xdr->page_base;
-+		bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT);
-+		offset = offset_in_page(xdr->page_base);
- 		remaining = xdr->page_len;
- 		flags = MSG_MORE | MSG_SENDPAGE_NOTLAST;
- 		while (remaining > 0) {
- 			if (remaining <= PAGE_SIZE && tail->iov_len == 0)
- 				flags = 0;
--			len = min(remaining, bvec->bv_len);
-+
-+			len = min(remaining, bvec->bv_len - offset);
- 			ret = kernel_sendpage(sock, bvec->bv_page,
- 					      bvec->bv_offset + offset,
- 					      len, flags);
--- 
-cgit 1.2.3-1.el7
-


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-13 14:42 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-02-13 14:42 UTC (permalink / raw
  To: gentoo-commits

commit:     ff99c69ccc56172fa7a303b47421e86f08ed0a8f
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 13 14:42:05 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Feb 13 14:42:13 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ff99c69c

Linux patch 5.10.16

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1015_linux-5.10.16.patch | 2316 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2320 insertions(+)

diff --git a/0000_README b/0000_README
index 9214e4d..bbc23e0 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1014_linux-5.10.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.15
 
+Patch:  1015_linux-5.10.16.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-5.10.16.patch b/1015_linux-5.10.16.patch
new file mode 100644
index 0000000..01d52a4
--- /dev/null
+++ b/1015_linux-5.10.16.patch
@@ -0,0 +1,2316 @@
+diff --git a/Makefile b/Makefile
+index b62d2d4ea7b02..9a1f26680d836 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
+index 8dad44262e751..495ffc9cf5e22 100644
+--- a/arch/powerpc/kernel/vdso.c
++++ b/arch/powerpc/kernel/vdso.c
+@@ -475,7 +475,7 @@ static __init void vdso_setup_trampolines(struct lib32_elfinfo *v32,
+ 	 */
+ 
+ #ifdef CONFIG_PPC64
+-	vdso64_rt_sigtramp = find_function64(v64, "__kernel_sigtramp_rt64");
++	vdso64_rt_sigtramp = find_function64(v64, "__kernel_start_sigtramp_rt64");
+ #endif
+ 	vdso32_sigtramp	   = find_function32(v32, "__kernel_sigtramp32");
+ 	vdso32_rt_sigtramp = find_function32(v32, "__kernel_sigtramp_rt32");
+diff --git a/arch/powerpc/kernel/vdso64/sigtramp.S b/arch/powerpc/kernel/vdso64/sigtramp.S
+index bbf68cd01088b..2d4067561293e 100644
+--- a/arch/powerpc/kernel/vdso64/sigtramp.S
++++ b/arch/powerpc/kernel/vdso64/sigtramp.S
+@@ -15,11 +15,20 @@
+ 
+ 	.text
+ 
++/*
++ * __kernel_start_sigtramp_rt64 and __kernel_sigtramp_rt64 together
++ * are one function split in two parts. The kernel jumps to the former
++ * and the signal handler indirectly (by blr) returns to the latter.
++ * __kernel_sigtramp_rt64 needs to point to the return address so
++ * glibc can correctly identify the trampoline stack frame.
++ */
+ 	.balign 8
+ 	.balign IFETCH_ALIGN_BYTES
+-V_FUNCTION_BEGIN(__kernel_sigtramp_rt64)
++V_FUNCTION_BEGIN(__kernel_start_sigtramp_rt64)
+ .Lsigrt_start:
+ 	bctrl	/* call the handler */
++V_FUNCTION_END(__kernel_start_sigtramp_rt64)
++V_FUNCTION_BEGIN(__kernel_sigtramp_rt64)
+ 	addi	r1, r1, __SIGNAL_FRAMESIZE
+ 	li	r0,__NR_rt_sigreturn
+ 	sc
+diff --git a/arch/powerpc/kernel/vdso64/vdso64.lds.S b/arch/powerpc/kernel/vdso64/vdso64.lds.S
+index 256fb97202987..bd120f590b9ed 100644
+--- a/arch/powerpc/kernel/vdso64/vdso64.lds.S
++++ b/arch/powerpc/kernel/vdso64/vdso64.lds.S
+@@ -150,6 +150,7 @@ VERSION
+ 		__kernel_get_tbfreq;
+ 		__kernel_sync_dicache;
+ 		__kernel_sync_dicache_p5;
++		__kernel_start_sigtramp_rt64;
+ 		__kernel_sigtramp_rt64;
+ 		__kernel_getcpu;
+ 		__kernel_time;
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 54fbe1e80cc41..f13688c4b9317 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1017,6 +1017,8 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
+  */
+ void blkcg_destroy_blkgs(struct blkcg *blkcg)
+ {
++	might_sleep();
++
+ 	spin_lock_irq(&blkcg->lock);
+ 
+ 	while (!hlist_empty(&blkcg->blkg_list)) {
+@@ -1024,14 +1026,20 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
+ 						struct blkcg_gq, blkcg_node);
+ 		struct request_queue *q = blkg->q;
+ 
+-		if (spin_trylock(&q->queue_lock)) {
+-			blkg_destroy(blkg);
+-			spin_unlock(&q->queue_lock);
+-		} else {
++		if (need_resched() || !spin_trylock(&q->queue_lock)) {
++			/*
++			 * Given that the system can accumulate a huge number
++			 * of blkgs in pathological cases, check to see if we
++			 * need to rescheduling to avoid softlockup.
++			 */
+ 			spin_unlock_irq(&blkcg->lock);
+-			cpu_relax();
++			cond_resched();
+ 			spin_lock_irq(&blkcg->lock);
++			continue;
+ 		}
++
++		blkg_destroy(blkg);
++		spin_unlock(&q->queue_lock);
+ 	}
+ 
+ 	spin_unlock_irq(&blkcg->lock);
+diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
+index 689c06cbbb457..ade3ecf2ee495 100644
+--- a/drivers/gpio/gpiolib-cdev.c
++++ b/drivers/gpio/gpiolib-cdev.c
+@@ -756,6 +756,8 @@ static void edge_detector_stop(struct line *line)
+ 	cancel_delayed_work_sync(&line->work);
+ 	WRITE_ONCE(line->sw_debounced, 0);
+ 	line->eflags = 0;
++	if (line->desc)
++		WRITE_ONCE(line->desc->debounce_period_us, 0);
+ 	/* do not change line->level - see comment in debounced_value() */
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 40dfb4d0ffbec..db62e6a934d91 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -2597,6 +2597,9 @@ static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
+ 	u32 n_entries, val;
+ 	int ln, rate = 0;
+ 
++	if (enc_to_dig_port(encoder)->tc_mode == TC_PORT_TBT_ALT)
++		return;
++
+ 	if (type != INTEL_OUTPUT_HDMI) {
+ 		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+ 
+@@ -2605,12 +2608,11 @@ static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
+ 
+ 	ddi_translations = icl_get_mg_buf_trans(encoder, type, rate,
+ 						&n_entries);
+-	/* The table does not have values for level 3 and level 9. */
+-	if (level >= n_entries || level == 3 || level == 9) {
++	if (level >= n_entries) {
+ 		drm_dbg_kms(&dev_priv->drm,
+ 			    "DDI translation not found for level %d. Using %d instead.",
+-			    level, n_entries - 2);
+-		level = n_entries - 2;
++			    level, n_entries - 1);
++		level = n_entries - 1;
+ 	}
+ 
+ 	/* Set MG_TX_LINK_PARAMS cri_use_fs32 to 0. */
+@@ -2742,6 +2744,9 @@ tgl_dkl_phy_ddi_vswing_sequence(struct intel_encoder *encoder, int link_clock,
+ 	u32 n_entries, val, ln, dpcnt_mask, dpcnt_val;
+ 	int rate = 0;
+ 
++	if (enc_to_dig_port(encoder)->tc_mode == TC_PORT_TBT_ALT)
++		return;
++
+ 	if (type != INTEL_OUTPUT_HDMI) {
+ 		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+ 
+diff --git a/drivers/gpu/drm/nouveau/include/nvif/push.h b/drivers/gpu/drm/nouveau/include/nvif/push.h
+index 168d7694ede5c..6d3a8a3d2087b 100644
+--- a/drivers/gpu/drm/nouveau/include/nvif/push.h
++++ b/drivers/gpu/drm/nouveau/include/nvif/push.h
+@@ -123,131 +123,131 @@ PUSH_KICK(struct nvif_push *push)
+ } while(0)
+ #endif
+ 
+-#define PUSH_1(X,f,ds,n,c,o,p,s,mA,dA) do {                            \
+-	PUSH_##o##_HDR((p), s, mA, (c)+(n));                           \
+-	PUSH_##f(X, (p), X##mA, 1, o, (dA), ds, "");                   \
++#define PUSH_1(X,f,ds,n,o,p,s,mA,dA) do {                             \
++	PUSH_##o##_HDR((p), s, mA, (ds)+(n));                         \
++	PUSH_##f(X, (p), X##mA, 1, o, (dA), ds, "");                  \
+ } while(0)
+-#define PUSH_2(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (1?PUSH_##o##_INC), "mthd1");       \
+-	PUSH_1(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_2(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (1?PUSH_##o##_INC), "mthd1");      \
++	PUSH_1(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_3(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd2");       \
+-	PUSH_2(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_3(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd2");      \
++	PUSH_2(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_4(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd3");       \
+-	PUSH_3(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_4(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd3");      \
++	PUSH_3(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_5(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd4");       \
+-	PUSH_4(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_5(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd4");      \
++	PUSH_4(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_6(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd5");       \
+-	PUSH_5(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_6(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd5");      \
++	PUSH_5(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_7(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd6");       \
+-	PUSH_6(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_7(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd6");      \
++	PUSH_6(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_8(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd7");       \
+-	PUSH_7(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_8(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd7");      \
++	PUSH_7(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_9(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd8");       \
+-	PUSH_8(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_9(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd8");      \
++	PUSH_8(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_10(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd9");       \
+-	PUSH_9(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_10(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                 \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd9");      \
++	PUSH_9(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+ 
+-#define PUSH_1D(X,o,p,s,mA,dA)                            \
+-	PUSH_1(X, DATA_, 1, 1, 0, o, (p), s, X##mA, (dA))
+-#define PUSH_2D(X,o,p,s,mA,dA,mB,dB)                      \
+-	PUSH_2(X, DATA_, 1, 1, 0, o, (p), s, X##mB, (dB), \
+-					     X##mA, (dA))
+-#define PUSH_3D(X,o,p,s,mA,dA,mB,dB,mC,dC)                \
+-	PUSH_3(X, DATA_, 1, 1, 0, o, (p), s, X##mC, (dC), \
+-					     X##mB, (dB), \
+-					     X##mA, (dA))
+-#define PUSH_4D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD)          \
+-	PUSH_4(X, DATA_, 1, 1, 0, o, (p), s, X##mD, (dD), \
+-					     X##mC, (dC), \
+-					     X##mB, (dB), \
+-					     X##mA, (dA))
+-#define PUSH_5D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE)    \
+-	PUSH_5(X, DATA_, 1, 1, 0, o, (p), s, X##mE, (dE), \
+-					     X##mD, (dD), \
+-					     X##mC, (dC), \
+-					     X##mB, (dB), \
+-					     X##mA, (dA))
++#define PUSH_1D(X,o,p,s,mA,dA)                         \
++	PUSH_1(X, DATA_, 1, 0, o, (p), s, X##mA, (dA))
++#define PUSH_2D(X,o,p,s,mA,dA,mB,dB)                   \
++	PUSH_2(X, DATA_, 1, 0, o, (p), s, X##mB, (dB), \
++					  X##mA, (dA))
++#define PUSH_3D(X,o,p,s,mA,dA,mB,dB,mC,dC)             \
++	PUSH_3(X, DATA_, 1, 0, o, (p), s, X##mC, (dC), \
++					  X##mB, (dB), \
++					  X##mA, (dA))
++#define PUSH_4D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD)       \
++	PUSH_4(X, DATA_, 1, 0, o, (p), s, X##mD, (dD), \
++					  X##mC, (dC), \
++					  X##mB, (dB), \
++					  X##mA, (dA))
++#define PUSH_5D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE) \
++	PUSH_5(X, DATA_, 1, 0, o, (p), s, X##mE, (dE), \
++					  X##mD, (dD), \
++					  X##mC, (dC), \
++					  X##mB, (dB), \
++					  X##mA, (dA))
+ #define PUSH_6D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF) \
+-	PUSH_6(X, DATA_, 1, 1, 0, o, (p), s, X##mF, (dF),    \
+-					     X##mE, (dE),    \
+-					     X##mD, (dD),    \
+-					     X##mC, (dC),    \
+-					     X##mB, (dB),    \
+-					     X##mA, (dA))
++	PUSH_6(X, DATA_, 1, 0, o, (p), s, X##mF, (dF),       \
++					  X##mE, (dE),       \
++					  X##mD, (dD),       \
++					  X##mC, (dC),       \
++					  X##mB, (dB),       \
++					  X##mA, (dA))
+ #define PUSH_7D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF,mG,dG) \
+-	PUSH_7(X, DATA_, 1, 1, 0, o, (p), s, X##mG, (dG),          \
+-					     X##mF, (dF),          \
+-					     X##mE, (dE),          \
+-					     X##mD, (dD),          \
+-					     X##mC, (dC),          \
+-					     X##mB, (dB),          \
+-					     X##mA, (dA))
++	PUSH_7(X, DATA_, 1, 0, o, (p), s, X##mG, (dG),             \
++					  X##mF, (dF),             \
++					  X##mE, (dE),             \
++					  X##mD, (dD),             \
++					  X##mC, (dC),             \
++					  X##mB, (dB),             \
++					  X##mA, (dA))
+ #define PUSH_8D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF,mG,dG,mH,dH) \
+-	PUSH_8(X, DATA_, 1, 1, 0, o, (p), s, X##mH, (dH),                \
+-					     X##mG, (dG),                \
+-					     X##mF, (dF),                \
+-					     X##mE, (dE),                \
+-					     X##mD, (dD),                \
+-					     X##mC, (dC),                \
+-					     X##mB, (dB),                \
+-					     X##mA, (dA))
++	PUSH_8(X, DATA_, 1, 0, o, (p), s, X##mH, (dH),                   \
++					  X##mG, (dG),                   \
++					  X##mF, (dF),                   \
++					  X##mE, (dE),                   \
++					  X##mD, (dD),                   \
++					  X##mC, (dC),                   \
++					  X##mB, (dB),                   \
++					  X##mA, (dA))
+ #define PUSH_9D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF,mG,dG,mH,dH,mI,dI) \
+-	PUSH_9(X, DATA_, 1, 1, 0, o, (p), s, X##mI, (dI),                      \
+-					     X##mH, (dH),                      \
+-					     X##mG, (dG),                      \
+-					     X##mF, (dF),                      \
+-					     X##mE, (dE),                      \
+-					     X##mD, (dD),                      \
+-					     X##mC, (dC),                      \
+-					     X##mB, (dB),                      \
+-					     X##mA, (dA))
++	PUSH_9(X, DATA_, 1, 0, o, (p), s, X##mI, (dI),                         \
++					  X##mH, (dH),                         \
++					  X##mG, (dG),                         \
++					  X##mF, (dF),                         \
++					  X##mE, (dE),                         \
++					  X##mD, (dD),                         \
++					  X##mC, (dC),                         \
++					  X##mB, (dB),                         \
++					  X##mA, (dA))
+ #define PUSH_10D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF,mG,dG,mH,dH,mI,dI,mJ,dJ) \
+-	PUSH_10(X, DATA_, 1, 1, 0, o, (p), s, X##mJ, (dJ),                            \
+-					      X##mI, (dI),                            \
+-					      X##mH, (dH),                            \
+-					      X##mG, (dG),                            \
+-					      X##mF, (dF),                            \
+-					      X##mE, (dE),                            \
+-					      X##mD, (dD),                            \
+-					      X##mC, (dC),                            \
+-					      X##mB, (dB),                            \
+-					      X##mA, (dA))
++	PUSH_10(X, DATA_, 1, 0, o, (p), s, X##mJ, (dJ),                               \
++					   X##mI, (dI),                               \
++					   X##mH, (dH),                               \
++					   X##mG, (dG),                               \
++					   X##mF, (dF),                               \
++					   X##mE, (dE),                               \
++					   X##mD, (dD),                               \
++					   X##mC, (dC),                               \
++					   X##mB, (dB),                               \
++					   X##mA, (dA))
+ 
+-#define PUSH_1P(X,o,p,s,mA,dp,ds)                           \
+-	PUSH_1(X, DATAp, ds, ds, 0, o, (p), s, X##mA, (dp))
+-#define PUSH_2P(X,o,p,s,mA,dA,mB,dp,ds)                     \
+-	PUSH_2(X, DATAp, ds, ds, 0, o, (p), s, X##mB, (dp), \
+-					       X##mA, (dA))
+-#define PUSH_3P(X,o,p,s,mA,dA,mB,dB,mC,dp,ds)               \
+-	PUSH_3(X, DATAp, ds, ds, 0, o, (p), s, X##mC, (dp), \
+-					       X##mB, (dB), \
+-					       X##mA, (dA))
++#define PUSH_1P(X,o,p,s,mA,dp,ds)                       \
++	PUSH_1(X, DATAp, ds, 0, o, (p), s, X##mA, (dp))
++#define PUSH_2P(X,o,p,s,mA,dA,mB,dp,ds)                 \
++	PUSH_2(X, DATAp, ds, 0, o, (p), s, X##mB, (dp), \
++					   X##mA, (dA))
++#define PUSH_3P(X,o,p,s,mA,dA,mB,dB,mC,dp,ds)           \
++	PUSH_3(X, DATAp, ds, 0, o, (p), s, X##mC, (dp), \
++					   X##mB, (dB), \
++					   X##mA, (dA))
+ 
+ #define PUSH_(A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,IMPL,...) IMPL
+ #define PUSH(A...) PUSH_(A, PUSH_10P, PUSH_10D,          \
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 0818d3e507347..2ffd2f354d0ae 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -1275,7 +1275,8 @@ static int mtk_i2c_probe(struct platform_device *pdev)
+ 	mtk_i2c_clock_disable(i2c);
+ 
+ 	ret = devm_request_irq(&pdev->dev, irq, mtk_i2c_irq,
+-			       IRQF_TRIGGER_NONE, I2C_DRV_NAME, i2c);
++			       IRQF_NO_SUSPEND | IRQF_TRIGGER_NONE,
++			       I2C_DRV_NAME, i2c);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev,
+ 			"Request I2C IRQ %d fail\n", irq);
+@@ -1302,7 +1303,16 @@ static int mtk_i2c_remove(struct platform_device *pdev)
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+-static int mtk_i2c_resume(struct device *dev)
++static int mtk_i2c_suspend_noirq(struct device *dev)
++{
++	struct mtk_i2c *i2c = dev_get_drvdata(dev);
++
++	i2c_mark_adapter_suspended(&i2c->adap);
++
++	return 0;
++}
++
++static int mtk_i2c_resume_noirq(struct device *dev)
+ {
+ 	int ret;
+ 	struct mtk_i2c *i2c = dev_get_drvdata(dev);
+@@ -1317,12 +1327,15 @@ static int mtk_i2c_resume(struct device *dev)
+ 
+ 	mtk_i2c_clock_disable(i2c);
+ 
++	i2c_mark_adapter_resumed(&i2c->adap);
++
+ 	return 0;
+ }
+ #endif
+ 
+ static const struct dev_pm_ops mtk_i2c_pm = {
+-	SET_SYSTEM_SLEEP_PM_OPS(NULL, mtk_i2c_resume)
++	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mtk_i2c_suspend_noirq,
++				      mtk_i2c_resume_noirq)
+ };
+ 
+ static struct platform_driver mtk_i2c_driver = {
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+index 5beec901713fb..a262c949ed76b 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+@@ -1158,11 +1158,9 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
+ #endif
+ 	}
+ 	if (!n || !n->dev)
+-		goto free_sk;
++		goto free_dst;
+ 
+ 	ndev = n->dev;
+-	if (!ndev)
+-		goto free_dst;
+ 	if (is_vlan_dev(ndev))
+ 		ndev = vlan_dev_real_dev(ndev);
+ 
+@@ -1249,7 +1247,8 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
+ free_csk:
+ 	chtls_sock_release(&csk->kref);
+ free_dst:
+-	neigh_release(n);
++	if (n)
++		neigh_release(n);
+ 	dst_release(dst);
+ free_sk:
+ 	inet_csk_prepare_forced_close(newsk);
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
+index d2bbe6a735142..92c50efd48fc3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
+@@ -358,6 +358,7 @@ const struct iwl_cfg_trans_params iwl_ma_trans_cfg = {
+ const char iwl_ax101_name[] = "Intel(R) Wi-Fi 6 AX101";
+ const char iwl_ax200_name[] = "Intel(R) Wi-Fi 6 AX200 160MHz";
+ const char iwl_ax201_name[] = "Intel(R) Wi-Fi 6 AX201 160MHz";
++const char iwl_ax203_name[] = "Intel(R) Wi-Fi 6 AX203";
+ const char iwl_ax211_name[] = "Intel(R) Wi-Fi 6 AX211 160MHz";
+ const char iwl_ax411_name[] = "Intel(R) Wi-Fi 6 AX411 160MHz";
+ const char iwl_ma_name[] = "Intel(R) Wi-Fi 6";
+@@ -384,6 +385,18 @@ const struct iwl_cfg iwl_qu_b0_hr1_b0 = {
+ 	.num_rbds = IWL_NUM_RBDS_22000_HE,
+ };
+ 
++const struct iwl_cfg iwl_qu_b0_hr_b0 = {
++	.fw_name_pre = IWL_QU_B_HR_B_FW_PRE,
++	IWL_DEVICE_22500,
++	/*
++	 * This device doesn't support receiving BlockAck with a large bitmap
++	 * so we need to restrict the size of transmitted aggregation to the
++	 * HT size; mac80211 would otherwise pick the HE max (256) by default.
++	 */
++	.max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
++	.num_rbds = IWL_NUM_RBDS_22000_HE,
++};
++
+ const struct iwl_cfg iwl_ax201_cfg_qu_hr = {
+ 	.name = "Intel(R) Wi-Fi 6 AX201 160MHz",
+ 	.fw_name_pre = IWL_QU_B_HR_B_FW_PRE,
+@@ -410,6 +423,18 @@ const struct iwl_cfg iwl_qu_c0_hr1_b0 = {
+ 	.num_rbds = IWL_NUM_RBDS_22000_HE,
+ };
+ 
++const struct iwl_cfg iwl_qu_c0_hr_b0 = {
++	.fw_name_pre = IWL_QU_C_HR_B_FW_PRE,
++	IWL_DEVICE_22500,
++	/*
++	 * This device doesn't support receiving BlockAck with a large bitmap
++	 * so we need to restrict the size of transmitted aggregation to the
++	 * HT size; mac80211 would otherwise pick the HE max (256) by default.
++	 */
++	.max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
++	.num_rbds = IWL_NUM_RBDS_22000_HE,
++};
++
+ const struct iwl_cfg iwl_ax201_cfg_qu_c0_hr_b0 = {
+ 	.name = "Intel(R) Wi-Fi 6 AX201 160MHz",
+ 	.fw_name_pre = IWL_QU_C_HR_B_FW_PRE,
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index e82e3fc963be2..9b91aa9b2e7f1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -544,6 +544,7 @@ extern const char iwl9260_killer_1550_name[];
+ extern const char iwl9560_killer_1550i_name[];
+ extern const char iwl9560_killer_1550s_name[];
+ extern const char iwl_ax200_name[];
++extern const char iwl_ax203_name[];
+ extern const char iwl_ax201_name[];
+ extern const char iwl_ax101_name[];
+ extern const char iwl_ax200_killer_1650w_name[];
+@@ -627,6 +628,8 @@ extern const struct iwl_cfg iwl9560_2ac_cfg_soc;
+ extern const struct iwl_cfg iwl_qu_b0_hr1_b0;
+ extern const struct iwl_cfg iwl_qu_c0_hr1_b0;
+ extern const struct iwl_cfg iwl_quz_a0_hr1_b0;
++extern const struct iwl_cfg iwl_qu_b0_hr_b0;
++extern const struct iwl_cfg iwl_qu_c0_hr_b0;
+ extern const struct iwl_cfg iwl_ax200_cfg_cc;
+ extern const struct iwl_cfg iwl_ax201_cfg_qu_hr;
+ extern const struct iwl_cfg iwl_ax201_cfg_qu_hr;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
+index f043eefabb4ec..7b1d2dac6ceb8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
+@@ -514,7 +514,10 @@ static ssize_t iwl_dbgfs_os_device_timediff_read(struct file *file,
+ 	const size_t bufsz = sizeof(buf);
+ 	int pos = 0;
+ 
++	mutex_lock(&mvm->mutex);
+ 	iwl_mvm_get_sync_time(mvm, &curr_gp2, &curr_os);
++	mutex_unlock(&mvm->mutex);
++
+ 	do_div(curr_os, NSEC_PER_USEC);
+ 	diff = curr_os - curr_gp2;
+ 	pos += scnprintf(buf + pos, bufsz - pos, "diff=%lld\n", diff);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index b627e7da7ac9d..d42165559df6e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -4249,6 +4249,9 @@ static void __iwl_mvm_unassign_vif_chanctx(struct iwl_mvm *mvm,
+ 	iwl_mvm_binding_remove_vif(mvm, vif);
+ 
+ out:
++	if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_CHANNEL_SWITCH_CMD) &&
++	    switching_chanctx)
++		return;
+ 	mvmvif->phy_ctxt = NULL;
+ 	iwl_mvm_power_update_mac(mvm);
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 0d1118f66f0d5..cb83490f1016f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -845,6 +845,10 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
+ 	if (!mvm->scan_cmd)
+ 		goto out_free;
+ 
++	/* invalidate ids to prevent accidental removal of sta_id 0 */
++	mvm->aux_sta.sta_id = IWL_MVM_INVALID_STA;
++	mvm->snif_sta.sta_id = IWL_MVM_INVALID_STA;
++
+ 	/* Set EBS as successful as long as not stated otherwise by the FW. */
+ 	mvm->last_ebs_successful = true;
+ 
+@@ -1245,6 +1249,7 @@ static void iwl_mvm_reprobe_wk(struct work_struct *wk)
+ 	reprobe = container_of(wk, struct iwl_mvm_reprobe, work);
+ 	if (device_reprobe(reprobe->dev))
+ 		dev_err(reprobe->dev, "reprobe failed!\n");
++	put_device(reprobe->dev);
+ 	kfree(reprobe);
+ 	module_put(THIS_MODULE);
+ }
+@@ -1295,7 +1300,7 @@ void iwl_mvm_nic_restart(struct iwl_mvm *mvm, bool fw_error)
+ 			module_put(THIS_MODULE);
+ 			return;
+ 		}
+-		reprobe->dev = mvm->trans->dev;
++		reprobe->dev = get_device(mvm->trans->dev);
+ 		INIT_WORK(&reprobe->work, iwl_mvm_reprobe_wk);
+ 		schedule_work(&reprobe->work);
+ 	} else if (test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 799d8219463cb..a66a5c19474a9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -2103,6 +2103,9 @@ int iwl_mvm_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
++	if (WARN_ON_ONCE(mvm->snif_sta.sta_id == IWL_MVM_INVALID_STA))
++		return -EINVAL;
++
+ 	iwl_mvm_disable_txq(mvm, NULL, mvm->snif_queue, IWL_MAX_TID_COUNT, 0);
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvm->snif_sta.sta_id);
+ 	if (ret)
+@@ -2117,6 +2120,9 @@ int iwl_mvm_rm_aux_sta(struct iwl_mvm *mvm)
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
++	if (WARN_ON_ONCE(mvm->aux_sta.sta_id == IWL_MVM_INVALID_STA))
++		return -EINVAL;
++
+ 	iwl_mvm_disable_txq(mvm, NULL, mvm->aux_queue, IWL_MAX_TID_COUNT, 0);
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvm->aux_sta.sta_id);
+ 	if (ret)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+index d719e433a59bf..2d43899fbdd7a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+@@ -245,8 +245,10 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 	/* Allocate IML */
+ 	iml_img = dma_alloc_coherent(trans->dev, trans->iml_len,
+ 				     &trans_pcie->iml_dma_addr, GFP_KERNEL);
+-	if (!iml_img)
+-		return -ENOMEM;
++	if (!iml_img) {
++		ret = -ENOMEM;
++		goto err_free_ctxt_info;
++	}
+ 
+ 	memcpy(iml_img, trans->iml, trans->iml_len);
+ 
+@@ -284,6 +286,11 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 
+ 	return 0;
+ 
++err_free_ctxt_info:
++	dma_free_coherent(trans->dev, sizeof(*trans_pcie->ctxt_info_gen3),
++			  trans_pcie->ctxt_info_gen3,
++			  trans_pcie->ctxt_info_dma_addr);
++	trans_pcie->ctxt_info_gen3 = NULL;
+ err_free_prph_info:
+ 	dma_free_coherent(trans->dev,
+ 			  sizeof(*prph_info),
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 7b5ece380fbfb..2823a1e81656d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -966,6 +966,11 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY,
+ 		      IWL_CFG_ANY, IWL_CFG_ANY,
+ 		      iwl_qu_b0_hr1_b0, iwl_ax101_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
++		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
++		      IWL_CFG_ANY, IWL_CFG_ANY,
++		      iwl_qu_b0_hr_b0, iwl_ax203_name),
+ 
+ 	/* Qu C step */
+ 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+@@ -973,6 +978,11 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY,
+ 		      IWL_CFG_ANY, IWL_CFG_ANY,
+ 		      iwl_qu_c0_hr1_b0, iwl_ax101_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
++		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
++		      IWL_CFG_ANY, IWL_CFG_ANY,
++		      iwl_qu_c0_hr_b0, iwl_ax203_name),
+ 
+ 	/* QuZ */
+ 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index 966be5689d63a..ed54d04e43964 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -299,6 +299,11 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 	struct iwl_txq *txq = trans->txqs.txq[txq_id];
+ 
++	if (!txq) {
++		IWL_ERR(trans, "Trying to free a queue that wasn't allocated?\n");
++		return;
++	}
++
+ 	spin_lock_bh(&txq->lock);
+ 	while (txq->write_ptr != txq->read_ptr) {
+ 		IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n",
+diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.c b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
+index af0b27a68d84d..9181221a2434d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
+@@ -887,10 +887,8 @@ void iwl_txq_gen2_unmap(struct iwl_trans *trans, int txq_id)
+ 			int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr);
+ 			struct sk_buff *skb = txq->entries[idx].skb;
+ 
+-			if (WARN_ON_ONCE(!skb))
+-				continue;
+-
+-			iwl_txq_free_tso_page(trans, skb);
++			if (!WARN_ON_ONCE(!skb))
++				iwl_txq_free_tso_page(trans, skb);
+ 		}
+ 		iwl_txq_gen2_free_tfd(trans, txq);
+ 		txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 42bbd99a36acf..35098dbd32a3c 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1813,13 +1813,13 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ {
+ 	struct regulator_dev *r;
+ 	struct device *dev = rdev->dev.parent;
+-	int ret;
++	int ret = 0;
+ 
+ 	/* No supply to resolve? */
+ 	if (!rdev->supply_name)
+ 		return 0;
+ 
+-	/* Supply already resolved? */
++	/* Supply already resolved? (fast-path without locking contention) */
+ 	if (rdev->supply)
+ 		return 0;
+ 
+@@ -1829,7 +1829,7 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 
+ 		/* Did the lookup explicitly defer for us? */
+ 		if (ret == -EPROBE_DEFER)
+-			return ret;
++			goto out;
+ 
+ 		if (have_full_constraints()) {
+ 			r = dummy_regulator_rdev;
+@@ -1837,15 +1837,18 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 		} else {
+ 			dev_err(dev, "Failed to resolve %s-supply for %s\n",
+ 				rdev->supply_name, rdev->desc->name);
+-			return -EPROBE_DEFER;
++			ret = -EPROBE_DEFER;
++			goto out;
+ 		}
+ 	}
+ 
+ 	if (r == rdev) {
+ 		dev_err(dev, "Supply for %s (%s) resolved to itself\n",
+ 			rdev->desc->name, rdev->supply_name);
+-		if (!have_full_constraints())
+-			return -EINVAL;
++		if (!have_full_constraints()) {
++			ret = -EINVAL;
++			goto out;
++		}
+ 		r = dummy_regulator_rdev;
+ 		get_device(&r->dev);
+ 	}
+@@ -1859,7 +1862,8 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 	if (r->dev.parent && r->dev.parent != rdev->dev.parent) {
+ 		if (!device_is_bound(r->dev.parent)) {
+ 			put_device(&r->dev);
+-			return -EPROBE_DEFER;
++			ret = -EPROBE_DEFER;
++			goto out;
+ 		}
+ 	}
+ 
+@@ -1867,15 +1871,32 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 	ret = regulator_resolve_supply(r);
+ 	if (ret < 0) {
+ 		put_device(&r->dev);
+-		return ret;
++		goto out;
++	}
++
++	/*
++	 * Recheck rdev->supply with rdev->mutex lock held to avoid a race
++	 * between rdev->supply null check and setting rdev->supply in
++	 * set_supply() from concurrent tasks.
++	 */
++	regulator_lock(rdev);
++
++	/* Supply just resolved by a concurrent task? */
++	if (rdev->supply) {
++		regulator_unlock(rdev);
++		put_device(&r->dev);
++		goto out;
+ 	}
+ 
+ 	ret = set_supply(rdev, r);
+ 	if (ret < 0) {
++		regulator_unlock(rdev);
+ 		put_device(&r->dev);
+-		return ret;
++		goto out;
+ 	}
+ 
++	regulator_unlock(rdev);
++
+ 	/*
+ 	 * In set_machine_constraints() we may have turned this regulator on
+ 	 * but we couldn't propagate to the supply if it hadn't been resolved
+@@ -1886,11 +1907,12 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 		if (ret < 0) {
+ 			_regulator_put(rdev->supply);
+ 			rdev->supply = NULL;
+-			return ret;
++			goto out;
+ 		}
+ 	}
+ 
+-	return 0;
++out:
++	return ret;
+ }
+ 
+ /* Internal regulator request function */
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index b53c055bea6a3..f72d53848dcbc 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -1078,16 +1078,6 @@ enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
+ 	return IO_WQ_CANCEL_NOTFOUND;
+ }
+ 
+-static bool io_wq_io_cb_cancel_data(struct io_wq_work *work, void *data)
+-{
+-	return work == data;
+-}
+-
+-enum io_wq_cancel io_wq_cancel_work(struct io_wq *wq, struct io_wq_work *cwork)
+-{
+-	return io_wq_cancel_cb(wq, io_wq_io_cb_cancel_data, (void *)cwork, false);
+-}
+-
+ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
+ {
+ 	int ret = -ENOMEM, node;
+diff --git a/fs/io-wq.h b/fs/io-wq.h
+index aaa363f358916..75113bcd5889f 100644
+--- a/fs/io-wq.h
++++ b/fs/io-wq.h
+@@ -130,7 +130,6 @@ static inline bool io_wq_is_hashed(struct io_wq_work *work)
+ }
+ 
+ void io_wq_cancel_all(struct io_wq *wq);
+-enum io_wq_cancel io_wq_cancel_work(struct io_wq *wq, struct io_wq_work *cwork);
+ 
+ typedef bool (work_cancel_fn)(struct io_wq_work *, void *);
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 3b6307f6bd93d..d0b7332ca7033 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -286,7 +286,6 @@ struct io_ring_ctx {
+ 		struct list_head	timeout_list;
+ 		struct list_head	cq_overflow_list;
+ 
+-		wait_queue_head_t	inflight_wait;
+ 		struct io_uring_sqe	*sq_sqes;
+ 	} ____cacheline_aligned_in_smp;
+ 
+@@ -997,6 +996,43 @@ static inline void io_clean_op(struct io_kiocb *req)
+ 		__io_clean_op(req);
+ }
+ 
++static inline bool __io_match_files(struct io_kiocb *req,
++				    struct files_struct *files)
++{
++	if (req->file && req->file->f_op == &io_uring_fops)
++		return true;
++
++	return ((req->flags & REQ_F_WORK_INITIALIZED) &&
++	        (req->work.flags & IO_WQ_WORK_FILES)) &&
++		req->work.identity->files == files;
++}
++
++static bool io_match_task(struct io_kiocb *head,
++			  struct task_struct *task,
++			  struct files_struct *files)
++{
++	struct io_kiocb *link;
++
++	if (task && head->task != task) {
++		/* in terms of cancelation, always match if req task is dead */
++		if (head->task->flags & PF_EXITING)
++			return true;
++		return false;
++	}
++	if (!files)
++		return true;
++	if (__io_match_files(head, files))
++		return true;
++	if (head->flags & REQ_F_LINK_HEAD) {
++		list_for_each_entry(link, &head->link_list, link_list) {
++			if (__io_match_files(link, files))
++				return true;
++		}
++	}
++	return false;
++}
++
++
+ static void io_sq_thread_drop_mm(void)
+ {
+ 	struct mm_struct *mm = current->mm;
+@@ -1183,7 +1219,6 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+ 	INIT_LIST_HEAD(&ctx->iopoll_list);
+ 	INIT_LIST_HEAD(&ctx->defer_list);
+ 	INIT_LIST_HEAD(&ctx->timeout_list);
+-	init_waitqueue_head(&ctx->inflight_wait);
+ 	spin_lock_init(&ctx->inflight_lock);
+ 	INIT_LIST_HEAD(&ctx->inflight_list);
+ 	INIT_DELAYED_WORK(&ctx->file_put_work, io_file_put_work);
+@@ -1368,11 +1403,14 @@ static bool io_grab_identity(struct io_kiocb *req)
+ 			return false;
+ 		atomic_inc(&id->files->count);
+ 		get_nsproxy(id->nsproxy);
+-		req->flags |= REQ_F_INFLIGHT;
+ 
+-		spin_lock_irq(&ctx->inflight_lock);
+-		list_add(&req->inflight_entry, &ctx->inflight_list);
+-		spin_unlock_irq(&ctx->inflight_lock);
++		if (!(req->flags & REQ_F_INFLIGHT)) {
++			req->flags |= REQ_F_INFLIGHT;
++
++			spin_lock_irq(&ctx->inflight_lock);
++			list_add(&req->inflight_entry, &ctx->inflight_list);
++			spin_unlock_irq(&ctx->inflight_lock);
++		}
+ 		req->work.flags |= IO_WQ_WORK_FILES;
+ 	}
+ 	if (!(req->work.flags & IO_WQ_WORK_MM) &&
+@@ -1466,30 +1504,18 @@ static void io_kill_timeout(struct io_kiocb *req)
+ 	}
+ }
+ 
+-static bool io_task_match(struct io_kiocb *req, struct task_struct *tsk)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	if (!tsk || req->task == tsk)
+-		return true;
+-	if (ctx->flags & IORING_SETUP_SQPOLL) {
+-		if (ctx->sq_data && req->task == ctx->sq_data->thread)
+-			return true;
+-	}
+-	return false;
+-}
+-
+ /*
+  * Returns true if we found and killed one or more timeouts
+  */
+-static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk)
++static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
++			     struct files_struct *files)
+ {
+ 	struct io_kiocb *req, *tmp;
+ 	int canceled = 0;
+ 
+ 	spin_lock_irq(&ctx->completion_lock);
+ 	list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
+-		if (io_task_match(req, tsk)) {
++		if (io_match_task(req, tsk, files)) {
+ 			io_kill_timeout(req);
+ 			canceled++;
+ 		}
+@@ -1616,32 +1642,6 @@ static void io_cqring_mark_overflow(struct io_ring_ctx *ctx)
+ 	}
+ }
+ 
+-static inline bool __io_match_files(struct io_kiocb *req,
+-				    struct files_struct *files)
+-{
+-	return ((req->flags & REQ_F_WORK_INITIALIZED) &&
+-	        (req->work.flags & IO_WQ_WORK_FILES)) &&
+-		req->work.identity->files == files;
+-}
+-
+-static bool io_match_files(struct io_kiocb *req,
+-			   struct files_struct *files)
+-{
+-	struct io_kiocb *link;
+-
+-	if (!files)
+-		return true;
+-	if (__io_match_files(req, files))
+-		return true;
+-	if (req->flags & REQ_F_LINK_HEAD) {
+-		list_for_each_entry(link, &req->link_list, link_list) {
+-			if (__io_match_files(link, files))
+-				return true;
+-		}
+-	}
+-	return false;
+-}
+-
+ /* Returns true if there are no backlogged entries after the flush */
+ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+ 				       struct task_struct *tsk,
+@@ -1663,9 +1663,7 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+ 
+ 	cqe = NULL;
+ 	list_for_each_entry_safe(req, tmp, &ctx->cq_overflow_list, compl.list) {
+-		if (tsk && req->task != tsk)
+-			continue;
+-		if (!io_match_files(req, files))
++		if (!io_match_task(req, tsk, files))
+ 			continue;
+ 
+ 		cqe = io_get_cqring(ctx);
+@@ -2086,6 +2084,9 @@ static void __io_req_task_submit(struct io_kiocb *req)
+ 	else
+ 		__io_req_task_cancel(req, -EFAULT);
+ 	mutex_unlock(&ctx->uring_lock);
++
++	if (ctx->flags & IORING_SETUP_SQPOLL)
++		io_sq_thread_drop_mm();
+ }
+ 
+ static void io_req_task_submit(struct callback_head *cb)
+@@ -5314,7 +5315,8 @@ static bool io_poll_remove_one(struct io_kiocb *req)
+ /*
+  * Returns true if we found and killed one or more poll requests
+  */
+-static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk)
++static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk,
++			       struct files_struct *files)
+ {
+ 	struct hlist_node *tmp;
+ 	struct io_kiocb *req;
+@@ -5326,7 +5328,7 @@ static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk)
+ 
+ 		list = &ctx->cancel_hash[i];
+ 		hlist_for_each_entry_safe(req, tmp, list, hash_node) {
+-			if (io_task_match(req, tsk))
++			if (io_match_task(req, tsk, files))
+ 				posted += io_poll_remove_one(req);
+ 		}
+ 	}
+@@ -5893,17 +5895,20 @@ static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ static void io_req_drop_files(struct io_kiocb *req)
+ {
+ 	struct io_ring_ctx *ctx = req->ctx;
++	struct io_uring_task *tctx = req->task->io_uring;
+ 	unsigned long flags;
+ 
+-	put_files_struct(req->work.identity->files);
+-	put_nsproxy(req->work.identity->nsproxy);
++	if (req->work.flags & IO_WQ_WORK_FILES) {
++		put_files_struct(req->work.identity->files);
++		put_nsproxy(req->work.identity->nsproxy);
++	}
+ 	spin_lock_irqsave(&ctx->inflight_lock, flags);
+ 	list_del(&req->inflight_entry);
+ 	spin_unlock_irqrestore(&ctx->inflight_lock, flags);
+ 	req->flags &= ~REQ_F_INFLIGHT;
+ 	req->work.flags &= ~IO_WQ_WORK_FILES;
+-	if (waitqueue_active(&ctx->inflight_wait))
+-		wake_up(&ctx->inflight_wait);
++	if (atomic_read(&tctx->in_idle))
++		wake_up(&tctx->wait);
+ }
+ 
+ static void __io_clean_op(struct io_kiocb *req)
+@@ -6168,6 +6173,16 @@ static struct file *io_file_get(struct io_submit_state *state,
+ 		file = __io_file_get(state, fd);
+ 	}
+ 
++	if (file && file->f_op == &io_uring_fops &&
++	    !(req->flags & REQ_F_INFLIGHT)) {
++		io_req_init_async(req);
++		req->flags |= REQ_F_INFLIGHT;
++
++		spin_lock_irq(&ctx->inflight_lock);
++		list_add(&req->inflight_entry, &ctx->inflight_list);
++		spin_unlock_irq(&ctx->inflight_lock);
++	}
++
+ 	return file;
+ }
+ 
+@@ -6989,14 +7004,18 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 						TASK_INTERRUPTIBLE);
+ 		/* make sure we run task_work before checking for signals */
+ 		ret = io_run_task_work_sig();
+-		if (ret > 0)
++		if (ret > 0) {
++			finish_wait(&ctx->wait, &iowq.wq);
+ 			continue;
++		}
+ 		else if (ret < 0)
+ 			break;
+ 		if (io_should_wake(&iowq))
+ 			break;
+-		if (test_bit(0, &ctx->cq_check_overflow))
++		if (test_bit(0, &ctx->cq_check_overflow)) {
++			finish_wait(&ctx->wait, &iowq.wq);
+ 			continue;
++		}
+ 		schedule();
+ 	} while (1);
+ 	finish_wait(&ctx->wait, &iowq.wq);
+@@ -8487,8 +8506,8 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ 		__io_cqring_overflow_flush(ctx, true, NULL, NULL);
+ 	mutex_unlock(&ctx->uring_lock);
+ 
+-	io_kill_timeouts(ctx, NULL);
+-	io_poll_remove_all(ctx, NULL);
++	io_kill_timeouts(ctx, NULL, NULL);
++	io_poll_remove_all(ctx, NULL, NULL);
+ 
+ 	if (ctx->io_wq)
+ 		io_wq_cancel_cb(ctx->io_wq, io_cancel_ctx_cb, ctx, true);
+@@ -8524,112 +8543,31 @@ static int io_uring_release(struct inode *inode, struct file *file)
+ 	return 0;
+ }
+ 
+-/*
+- * Returns true if 'preq' is the link parent of 'req'
+- */
+-static bool io_match_link(struct io_kiocb *preq, struct io_kiocb *req)
+-{
+-	struct io_kiocb *link;
+-
+-	if (!(preq->flags & REQ_F_LINK_HEAD))
+-		return false;
+-
+-	list_for_each_entry(link, &preq->link_list, link_list) {
+-		if (link == req)
+-			return true;
+-	}
+-
+-	return false;
+-}
+-
+-/*
+- * We're looking to cancel 'req' because it's holding on to our files, but
+- * 'req' could be a link to another request. See if it is, and cancel that
+- * parent request if so.
+- */
+-static bool io_poll_remove_link(struct io_ring_ctx *ctx, struct io_kiocb *req)
+-{
+-	struct hlist_node *tmp;
+-	struct io_kiocb *preq;
+-	bool found = false;
+-	int i;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
+-		struct hlist_head *list;
+-
+-		list = &ctx->cancel_hash[i];
+-		hlist_for_each_entry_safe(preq, tmp, list, hash_node) {
+-			found = io_match_link(preq, req);
+-			if (found) {
+-				io_poll_remove_one(preq);
+-				break;
+-			}
+-		}
+-	}
+-	spin_unlock_irq(&ctx->completion_lock);
+-	return found;
+-}
+-
+-static bool io_timeout_remove_link(struct io_ring_ctx *ctx,
+-				   struct io_kiocb *req)
+-{
+-	struct io_kiocb *preq;
+-	bool found = false;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	list_for_each_entry(preq, &ctx->timeout_list, timeout.list) {
+-		found = io_match_link(preq, req);
+-		if (found) {
+-			__io_timeout_cancel(preq);
+-			break;
+-		}
+-	}
+-	spin_unlock_irq(&ctx->completion_lock);
+-	return found;
+-}
++struct io_task_cancel {
++	struct task_struct *task;
++	struct files_struct *files;
++};
+ 
+-static bool io_cancel_link_cb(struct io_wq_work *work, void *data)
++static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
+ {
+ 	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++	struct io_task_cancel *cancel = data;
+ 	bool ret;
+ 
+-	if (req->flags & REQ_F_LINK_TIMEOUT) {
++	if (cancel->files && (req->flags & REQ_F_LINK_TIMEOUT)) {
+ 		unsigned long flags;
+ 		struct io_ring_ctx *ctx = req->ctx;
+ 
+ 		/* protect against races with linked timeouts */
+ 		spin_lock_irqsave(&ctx->completion_lock, flags);
+-		ret = io_match_link(req, data);
++		ret = io_match_task(req, cancel->task, cancel->files);
+ 		spin_unlock_irqrestore(&ctx->completion_lock, flags);
+ 	} else {
+-		ret = io_match_link(req, data);
++		ret = io_match_task(req, cancel->task, cancel->files);
+ 	}
+ 	return ret;
+ }
+ 
+-static void io_attempt_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)
+-{
+-	enum io_wq_cancel cret;
+-
+-	/* cancel this particular work, if it's running */
+-	cret = io_wq_cancel_work(ctx->io_wq, &req->work);
+-	if (cret != IO_WQ_CANCEL_NOTFOUND)
+-		return;
+-
+-	/* find links that hold this pending, cancel those */
+-	cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_link_cb, req, true);
+-	if (cret != IO_WQ_CANCEL_NOTFOUND)
+-		return;
+-
+-	/* if we have a poll link holding this pending, cancel that */
+-	if (io_poll_remove_link(ctx, req))
+-		return;
+-
+-	/* final option, timeout link is holding this req pending */
+-	io_timeout_remove_link(ctx, req);
+-}
+-
+ static void io_cancel_defer_files(struct io_ring_ctx *ctx,
+ 				  struct task_struct *task,
+ 				  struct files_struct *files)
+@@ -8639,8 +8577,7 @@ static void io_cancel_defer_files(struct io_ring_ctx *ctx,
+ 
+ 	spin_lock_irq(&ctx->completion_lock);
+ 	list_for_each_entry_reverse(de, &ctx->defer_list, list) {
+-		if (io_task_match(de->req, task) &&
+-		    io_match_files(de->req, files)) {
++		if (io_match_task(de->req, task, files)) {
+ 			list_cut_position(&list, &ctx->defer_list, &de->list);
+ 			break;
+ 		}
+@@ -8657,73 +8594,56 @@ static void io_cancel_defer_files(struct io_ring_ctx *ctx,
+ 	}
+ }
+ 
+-/*
+- * Returns true if we found and killed one or more files pinning requests
+- */
+-static bool io_uring_cancel_files(struct io_ring_ctx *ctx,
++static int io_uring_count_inflight(struct io_ring_ctx *ctx,
++				   struct task_struct *task,
++				   struct files_struct *files)
++{
++	struct io_kiocb *req;
++	int cnt = 0;
++
++	spin_lock_irq(&ctx->inflight_lock);
++	list_for_each_entry(req, &ctx->inflight_list, inflight_entry)
++		cnt += io_match_task(req, task, files);
++	spin_unlock_irq(&ctx->inflight_lock);
++	return cnt;
++}
++
++static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ 				  struct task_struct *task,
+ 				  struct files_struct *files)
+ {
+-	if (list_empty_careful(&ctx->inflight_list))
+-		return false;
+-
+ 	while (!list_empty_careful(&ctx->inflight_list)) {
+-		struct io_kiocb *cancel_req = NULL, *req;
++		struct io_task_cancel cancel = { .task = task, .files = files };
+ 		DEFINE_WAIT(wait);
++		int inflight;
+ 
+-		spin_lock_irq(&ctx->inflight_lock);
+-		list_for_each_entry(req, &ctx->inflight_list, inflight_entry) {
+-			if (req->task == task &&
+-			    (req->work.flags & IO_WQ_WORK_FILES) &&
+-			    req->work.identity->files != files)
+-				continue;
+-			/* req is being completed, ignore */
+-			if (!refcount_inc_not_zero(&req->refs))
+-				continue;
+-			cancel_req = req;
+-			break;
+-		}
+-		if (cancel_req)
+-			prepare_to_wait(&ctx->inflight_wait, &wait,
+-						TASK_UNINTERRUPTIBLE);
+-		spin_unlock_irq(&ctx->inflight_lock);
+-
+-		/* We need to keep going until we don't find a matching req */
+-		if (!cancel_req)
++		inflight = io_uring_count_inflight(ctx, task, files);
++		if (!inflight)
+ 			break;
+-		/* cancel this request, or head link requests */
+-		io_attempt_cancel(ctx, cancel_req);
+-		io_cqring_overflow_flush(ctx, true, task, files);
+ 
+-		io_put_req(cancel_req);
++		io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, &cancel, true);
++		io_poll_remove_all(ctx, task, files);
++		io_kill_timeouts(ctx, task, files);
+ 		/* cancellations _may_ trigger task work */
+ 		io_run_task_work();
+-		schedule();
+-		finish_wait(&ctx->inflight_wait, &wait);
+-	}
+-
+-	return true;
+-}
+ 
+-static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
+-{
+-	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+-	struct task_struct *task = data;
+-
+-	return io_task_match(req, task);
++		prepare_to_wait(&task->io_uring->wait, &wait,
++				TASK_UNINTERRUPTIBLE);
++		if (inflight == io_uring_count_inflight(ctx, task, files))
++			schedule();
++		finish_wait(&task->io_uring->wait, &wait);
++	}
+ }
+ 
+-static bool __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+-					    struct task_struct *task,
+-					    struct files_struct *files)
++static void __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
++					    struct task_struct *task)
+ {
+-	bool ret;
+-
+-	ret = io_uring_cancel_files(ctx, task, files);
+-	if (!files) {
++	while (1) {
++		struct io_task_cancel cancel = { .task = task, .files = NULL, };
+ 		enum io_wq_cancel cret;
++		bool ret = false;
+ 
+-		cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, task, true);
++		cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, &cancel, true);
+ 		if (cret != IO_WQ_CANCEL_NOTFOUND)
+ 			ret = true;
+ 
+@@ -8735,11 +8655,13 @@ static bool __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ 			}
+ 		}
+ 
+-		ret |= io_poll_remove_all(ctx, task);
+-		ret |= io_kill_timeouts(ctx, task);
++		ret |= io_poll_remove_all(ctx, task, NULL);
++		ret |= io_kill_timeouts(ctx, task, NULL);
++		if (!ret)
++			break;
++		io_run_task_work();
++		cond_resched();
+ 	}
+-
+-	return ret;
+ }
+ 
+ static void io_disable_sqo_submit(struct io_ring_ctx *ctx)
+@@ -8764,8 +8686,6 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ 	struct task_struct *task = current;
+ 
+ 	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
+-		/* for SQPOLL only sqo_task has task notes */
+-		WARN_ON_ONCE(ctx->sqo_task != current);
+ 		io_disable_sqo_submit(ctx);
+ 		task = ctx->sq_data->thread;
+ 		atomic_inc(&task->io_uring->in_idle);
+@@ -8775,10 +8695,9 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ 	io_cancel_defer_files(ctx, task, files);
+ 	io_cqring_overflow_flush(ctx, true, task, files);
+ 
+-	while (__io_uring_cancel_task_requests(ctx, task, files)) {
+-		io_run_task_work();
+-		cond_resched();
+-	}
++	io_uring_cancel_files(ctx, task, files);
++	if (!files)
++		__io_uring_cancel_task_requests(ctx, task);
+ 
+ 	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
+ 		atomic_dec(&task->io_uring->in_idle);
+@@ -8919,15 +8838,15 @@ void __io_uring_task_cancel(void)
+ 		prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
+ 
+ 		/*
+-		 * If we've seen completions, retry. This avoids a race where
+-		 * a completion comes in before we did prepare_to_wait().
++		 * If we've seen completions, retry without waiting. This
++		 * avoids a race where a completion comes in before we did
++		 * prepare_to_wait().
+ 		 */
+-		if (inflight != tctx_inflight(tctx))
+-			continue;
+-		schedule();
++		if (inflight == tctx_inflight(tctx))
++			schedule();
++		finish_wait(&tctx->wait, &wait);
+ 	} while (1);
+ 
+-	finish_wait(&tctx->wait, &wait);
+ 	atomic_dec(&tctx->in_idle);
+ 
+ 	io_uring_remove_task_files(tctx);
+@@ -8938,6 +8857,9 @@ static int io_uring_flush(struct file *file, void *data)
+ 	struct io_uring_task *tctx = current->io_uring;
+ 	struct io_ring_ctx *ctx = file->private_data;
+ 
++	if (fatal_signal_pending(current) || (current->flags & PF_EXITING))
++		io_uring_cancel_task_requests(ctx, NULL);
++
+ 	if (!tctx)
+ 		return 0;
+ 
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index cbadcf6ca4da2..b8712b835b105 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1000,7 +1000,7 @@ pnfs_layout_stateid_blocked(const struct pnfs_layout_hdr *lo,
+ {
+ 	u32 seqid = be32_to_cpu(stateid->seqid);
+ 
+-	return !pnfs_seqid_is_newer(seqid, lo->plh_barrier);
++	return !pnfs_seqid_is_newer(seqid, lo->plh_barrier) && lo->plh_barrier;
+ }
+ 
+ /* lget is set to 1 if called from inside send_layoutget call chain */
+@@ -1913,6 +1913,11 @@ static void nfs_layoutget_end(struct pnfs_layout_hdr *lo)
+ 		wake_up_var(&lo->plh_outstanding);
+ }
+ 
++static bool pnfs_is_first_layoutget(struct pnfs_layout_hdr *lo)
++{
++	return test_bit(NFS_LAYOUT_FIRST_LAYOUTGET, &lo->plh_flags);
++}
++
+ static void pnfs_clear_first_layoutget(struct pnfs_layout_hdr *lo)
+ {
+ 	unsigned long *bitlock = &lo->plh_flags;
+@@ -2387,23 +2392,34 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
+ 		goto out_forget;
+ 	}
+ 
+-	if (!pnfs_layout_is_valid(lo)) {
+-		/* We have a completely new layout */
+-		pnfs_set_layout_stateid(lo, &res->stateid, lgp->cred, true);
+-	} else if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
++	if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
+ 		/* existing state ID, make sure the sequence number matches. */
+ 		if (pnfs_layout_stateid_blocked(lo, &res->stateid)) {
++			if (!pnfs_layout_is_valid(lo) &&
++			    pnfs_is_first_layoutget(lo))
++				lo->plh_barrier = 0;
+ 			dprintk("%s forget reply due to sequence\n", __func__);
+ 			goto out_forget;
+ 		}
+ 		pnfs_set_layout_stateid(lo, &res->stateid, lgp->cred, false);
+-	} else {
++	} else if (pnfs_layout_is_valid(lo)) {
+ 		/*
+ 		 * We got an entirely new state ID.  Mark all segments for the
+ 		 * inode invalid, and retry the layoutget
+ 		 */
+-		pnfs_mark_layout_stateid_invalid(lo, &free_me);
++		struct pnfs_layout_range range = {
++			.iomode = IOMODE_ANY,
++			.length = NFS4_MAX_UINT64,
++		};
++		pnfs_set_plh_return_info(lo, IOMODE_ANY, 0);
++		pnfs_mark_matching_lsegs_return(lo, &lo->plh_return_segs,
++						&range, 0);
+ 		goto out_forget;
++	} else {
++		/* We have a completely new layout */
++		if (!pnfs_is_first_layoutget(lo))
++			goto out_forget;
++		pnfs_set_layout_stateid(lo, &res->stateid, lgp->cred, true);
+ 	}
+ 
+ 	pnfs_get_lseg(lseg);
+diff --git a/fs/nilfs2/file.c b/fs/nilfs2/file.c
+index 64bc81363c6cc..e1bd592ce7001 100644
+--- a/fs/nilfs2/file.c
++++ b/fs/nilfs2/file.c
+@@ -141,6 +141,7 @@ const struct file_operations nilfs_file_operations = {
+ 	/* .release	= nilfs_release_file, */
+ 	.fsync		= nilfs_sync_file,
+ 	.splice_read	= generic_file_splice_read,
++	.splice_write   = iter_file_splice_write,
+ };
+ 
+ const struct inode_operations nilfs_file_inode_operations = {
+diff --git a/fs/squashfs/block.c b/fs/squashfs/block.c
+index 8a19773b5a0b7..45f44425d8560 100644
+--- a/fs/squashfs/block.c
++++ b/fs/squashfs/block.c
+@@ -196,9 +196,15 @@ int squashfs_read_data(struct super_block *sb, u64 index, int length,
+ 		length = SQUASHFS_COMPRESSED_SIZE(length);
+ 		index += 2;
+ 
+-		TRACE("Block @ 0x%llx, %scompressed size %d\n", index,
++		TRACE("Block @ 0x%llx, %scompressed size %d\n", index - 2,
+ 		      compressed ? "" : "un", length);
+ 	}
++	if (length < 0 || length > output->length ||
++			(index + length) > msblk->bytes_used) {
++		res = -EIO;
++		goto out;
++	}
++
+ 	if (next_index)
+ 		*next_index = index + length;
+ 
+diff --git a/fs/squashfs/export.c b/fs/squashfs/export.c
+index ae2c87bb0fbec..eb02072d28dd6 100644
+--- a/fs/squashfs/export.c
++++ b/fs/squashfs/export.c
+@@ -41,12 +41,17 @@ static long long squashfs_inode_lookup(struct super_block *sb, int ino_num)
+ 	struct squashfs_sb_info *msblk = sb->s_fs_info;
+ 	int blk = SQUASHFS_LOOKUP_BLOCK(ino_num - 1);
+ 	int offset = SQUASHFS_LOOKUP_BLOCK_OFFSET(ino_num - 1);
+-	u64 start = le64_to_cpu(msblk->inode_lookup_table[blk]);
++	u64 start;
+ 	__le64 ino;
+ 	int err;
+ 
+ 	TRACE("Entered squashfs_inode_lookup, inode_number = %d\n", ino_num);
+ 
++	if (ino_num == 0 || (ino_num - 1) >= msblk->inodes)
++		return -EINVAL;
++
++	start = le64_to_cpu(msblk->inode_lookup_table[blk]);
++
+ 	err = squashfs_read_metadata(sb, &ino, &start, &offset, sizeof(ino));
+ 	if (err < 0)
+ 		return err;
+@@ -111,7 +116,10 @@ __le64 *squashfs_read_inode_lookup_table(struct super_block *sb,
+ 		u64 lookup_table_start, u64 next_table, unsigned int inodes)
+ {
+ 	unsigned int length = SQUASHFS_LOOKUP_BLOCK_BYTES(inodes);
++	unsigned int indexes = SQUASHFS_LOOKUP_BLOCKS(inodes);
++	int n;
+ 	__le64 *table;
++	u64 start, end;
+ 
+ 	TRACE("In read_inode_lookup_table, length %d\n", length);
+ 
+@@ -121,20 +129,37 @@ __le64 *squashfs_read_inode_lookup_table(struct super_block *sb,
+ 	if (inodes == 0)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	/* length bytes should not extend into the next table - this check
+-	 * also traps instances where lookup_table_start is incorrectly larger
+-	 * than the next table start
++	/*
++	 * The computed size of the lookup table (length bytes) should exactly
++	 * match the table start and end points
+ 	 */
+-	if (lookup_table_start + length > next_table)
++	if (length != (next_table - lookup_table_start))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	table = squashfs_read_table(sb, lookup_table_start, length);
++	if (IS_ERR(table))
++		return table;
+ 
+ 	/*
+-	 * table[0] points to the first inode lookup table metadata block,
+-	 * this should be less than lookup_table_start
++	 * table0], table[1], ... table[indexes - 1] store the locations
++	 * of the compressed inode lookup blocks.  Each entry should be
++	 * less than the next (i.e. table[0] < table[1]), and the difference
++	 * between them should be SQUASHFS_METADATA_SIZE or less.
++	 * table[indexes - 1] should  be less than lookup_table_start, and
++	 * again the difference should be SQUASHFS_METADATA_SIZE or less
+ 	 */
+-	if (!IS_ERR(table) && le64_to_cpu(table[0]) >= lookup_table_start) {
++	for (n = 0; n < (indexes - 1); n++) {
++		start = le64_to_cpu(table[n]);
++		end = le64_to_cpu(table[n + 1]);
++
++		if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
++			kfree(table);
++			return ERR_PTR(-EINVAL);
++		}
++	}
++
++	start = le64_to_cpu(table[indexes - 1]);
++	if (start >= lookup_table_start || (lookup_table_start - start) > SQUASHFS_METADATA_SIZE) {
+ 		kfree(table);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+diff --git a/fs/squashfs/id.c b/fs/squashfs/id.c
+index 6be5afe7287d6..11581bf31af41 100644
+--- a/fs/squashfs/id.c
++++ b/fs/squashfs/id.c
+@@ -35,10 +35,15 @@ int squashfs_get_id(struct super_block *sb, unsigned int index,
+ 	struct squashfs_sb_info *msblk = sb->s_fs_info;
+ 	int block = SQUASHFS_ID_BLOCK(index);
+ 	int offset = SQUASHFS_ID_BLOCK_OFFSET(index);
+-	u64 start_block = le64_to_cpu(msblk->id_table[block]);
++	u64 start_block;
+ 	__le32 disk_id;
+ 	int err;
+ 
++	if (index >= msblk->ids)
++		return -EINVAL;
++
++	start_block = le64_to_cpu(msblk->id_table[block]);
++
+ 	err = squashfs_read_metadata(sb, &disk_id, &start_block, &offset,
+ 							sizeof(disk_id));
+ 	if (err < 0)
+@@ -56,7 +61,10 @@ __le64 *squashfs_read_id_index_table(struct super_block *sb,
+ 		u64 id_table_start, u64 next_table, unsigned short no_ids)
+ {
+ 	unsigned int length = SQUASHFS_ID_BLOCK_BYTES(no_ids);
++	unsigned int indexes = SQUASHFS_ID_BLOCKS(no_ids);
++	int n;
+ 	__le64 *table;
++	u64 start, end;
+ 
+ 	TRACE("In read_id_index_table, length %d\n", length);
+ 
+@@ -67,20 +75,36 @@ __le64 *squashfs_read_id_index_table(struct super_block *sb,
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	/*
+-	 * length bytes should not extend into the next table - this check
+-	 * also traps instances where id_table_start is incorrectly larger
+-	 * than the next table start
++	 * The computed size of the index table (length bytes) should exactly
++	 * match the table start and end points
+ 	 */
+-	if (id_table_start + length > next_table)
++	if (length != (next_table - id_table_start))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	table = squashfs_read_table(sb, id_table_start, length);
++	if (IS_ERR(table))
++		return table;
+ 
+ 	/*
+-	 * table[0] points to the first id lookup table metadata block, this
+-	 * should be less than id_table_start
++	 * table[0], table[1], ... table[indexes - 1] store the locations
++	 * of the compressed id blocks.   Each entry should be less than
++	 * the next (i.e. table[0] < table[1]), and the difference between them
++	 * should be SQUASHFS_METADATA_SIZE or less.  table[indexes - 1]
++	 * should be less than id_table_start, and again the difference
++	 * should be SQUASHFS_METADATA_SIZE or less
+ 	 */
+-	if (!IS_ERR(table) && le64_to_cpu(table[0]) >= id_table_start) {
++	for (n = 0; n < (indexes - 1); n++) {
++		start = le64_to_cpu(table[n]);
++		end = le64_to_cpu(table[n + 1]);
++
++		if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
++			kfree(table);
++			return ERR_PTR(-EINVAL);
++		}
++	}
++
++	start = le64_to_cpu(table[indexes - 1]);
++	if (start >= id_table_start || (id_table_start - start) > SQUASHFS_METADATA_SIZE) {
+ 		kfree(table);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+diff --git a/fs/squashfs/squashfs_fs_sb.h b/fs/squashfs/squashfs_fs_sb.h
+index 34c21ffb6df37..166e98806265b 100644
+--- a/fs/squashfs/squashfs_fs_sb.h
++++ b/fs/squashfs/squashfs_fs_sb.h
+@@ -64,5 +64,6 @@ struct squashfs_sb_info {
+ 	unsigned int				inodes;
+ 	unsigned int				fragments;
+ 	int					xattr_ids;
++	unsigned int				ids;
+ };
+ #endif
+diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c
+index d6c6593ec169e..88cc94be10765 100644
+--- a/fs/squashfs/super.c
++++ b/fs/squashfs/super.c
+@@ -166,6 +166,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	msblk->directory_table = le64_to_cpu(sblk->directory_table_start);
+ 	msblk->inodes = le32_to_cpu(sblk->inodes);
+ 	msblk->fragments = le32_to_cpu(sblk->fragments);
++	msblk->ids = le16_to_cpu(sblk->no_ids);
+ 	flags = le16_to_cpu(sblk->flags);
+ 
+ 	TRACE("Found valid superblock on %pg\n", sb->s_bdev);
+@@ -177,7 +178,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	TRACE("Block size %d\n", msblk->block_size);
+ 	TRACE("Number of inodes %d\n", msblk->inodes);
+ 	TRACE("Number of fragments %d\n", msblk->fragments);
+-	TRACE("Number of ids %d\n", le16_to_cpu(sblk->no_ids));
++	TRACE("Number of ids %d\n", msblk->ids);
+ 	TRACE("sblk->inode_table_start %llx\n", msblk->inode_table);
+ 	TRACE("sblk->directory_table_start %llx\n", msblk->directory_table);
+ 	TRACE("sblk->fragment_table_start %llx\n",
+@@ -236,8 +237,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ allocate_id_index_table:
+ 	/* Allocate and read id index table */
+ 	msblk->id_table = squashfs_read_id_index_table(sb,
+-		le64_to_cpu(sblk->id_table_start), next_table,
+-		le16_to_cpu(sblk->no_ids));
++		le64_to_cpu(sblk->id_table_start), next_table, msblk->ids);
+ 	if (IS_ERR(msblk->id_table)) {
+ 		errorf(fc, "unable to read id index table");
+ 		err = PTR_ERR(msblk->id_table);
+diff --git a/fs/squashfs/xattr.h b/fs/squashfs/xattr.h
+index 184129afd4566..d8a270d3ac4cb 100644
+--- a/fs/squashfs/xattr.h
++++ b/fs/squashfs/xattr.h
+@@ -17,8 +17,16 @@ extern int squashfs_xattr_lookup(struct super_block *, unsigned int, int *,
+ static inline __le64 *squashfs_read_xattr_id_table(struct super_block *sb,
+ 		u64 start, u64 *xattr_table_start, int *xattr_ids)
+ {
++	struct squashfs_xattr_id_table *id_table;
++
++	id_table = squashfs_read_table(sb, start, sizeof(*id_table));
++	if (IS_ERR(id_table))
++		return (__le64 *) id_table;
++
++	*xattr_table_start = le64_to_cpu(id_table->xattr_table_start);
++	kfree(id_table);
++
+ 	ERROR("Xattrs in filesystem, these will be ignored\n");
+-	*xattr_table_start = start;
+ 	return ERR_PTR(-ENOTSUPP);
+ }
+ 
+diff --git a/fs/squashfs/xattr_id.c b/fs/squashfs/xattr_id.c
+index d99e08464554f..ead66670b41a5 100644
+--- a/fs/squashfs/xattr_id.c
++++ b/fs/squashfs/xattr_id.c
+@@ -31,10 +31,15 @@ int squashfs_xattr_lookup(struct super_block *sb, unsigned int index,
+ 	struct squashfs_sb_info *msblk = sb->s_fs_info;
+ 	int block = SQUASHFS_XATTR_BLOCK(index);
+ 	int offset = SQUASHFS_XATTR_BLOCK_OFFSET(index);
+-	u64 start_block = le64_to_cpu(msblk->xattr_id_table[block]);
++	u64 start_block;
+ 	struct squashfs_xattr_id id;
+ 	int err;
+ 
++	if (index >= msblk->xattr_ids)
++		return -EINVAL;
++
++	start_block = le64_to_cpu(msblk->xattr_id_table[block]);
++
+ 	err = squashfs_read_metadata(sb, &id, &start_block, &offset,
+ 							sizeof(id));
+ 	if (err < 0)
+@@ -50,13 +55,17 @@ int squashfs_xattr_lookup(struct super_block *sb, unsigned int index,
+ /*
+  * Read uncompressed xattr id lookup table indexes from disk into memory
+  */
+-__le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 start,
++__le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start,
+ 		u64 *xattr_table_start, int *xattr_ids)
+ {
+-	unsigned int len;
++	struct squashfs_sb_info *msblk = sb->s_fs_info;
++	unsigned int len, indexes;
+ 	struct squashfs_xattr_id_table *id_table;
++	__le64 *table;
++	u64 start, end;
++	int n;
+ 
+-	id_table = squashfs_read_table(sb, start, sizeof(*id_table));
++	id_table = squashfs_read_table(sb, table_start, sizeof(*id_table));
+ 	if (IS_ERR(id_table))
+ 		return (__le64 *) id_table;
+ 
+@@ -70,13 +79,52 @@ __le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 start,
+ 	if (*xattr_ids == 0)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	/* xattr_table should be less than start */
+-	if (*xattr_table_start >= start)
++	len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
++	indexes = SQUASHFS_XATTR_BLOCKS(*xattr_ids);
++
++	/*
++	 * The computed size of the index table (len bytes) should exactly
++	 * match the table start and end points
++	 */
++	start = table_start + sizeof(*id_table);
++	end = msblk->bytes_used;
++
++	if (len != (end - start))
+ 		return ERR_PTR(-EINVAL);
+ 
+-	len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
++	table = squashfs_read_table(sb, start, len);
++	if (IS_ERR(table))
++		return table;
++
++	/* table[0], table[1], ... table[indexes - 1] store the locations
++	 * of the compressed xattr id blocks.  Each entry should be less than
++	 * the next (i.e. table[0] < table[1]), and the difference between them
++	 * should be SQUASHFS_METADATA_SIZE or less.  table[indexes - 1]
++	 * should be less than table_start, and again the difference
++	 * shouls be SQUASHFS_METADATA_SIZE or less.
++	 *
++	 * Finally xattr_table_start should be less than table[0].
++	 */
++	for (n = 0; n < (indexes - 1); n++) {
++		start = le64_to_cpu(table[n]);
++		end = le64_to_cpu(table[n + 1]);
++
++		if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
++			kfree(table);
++			return ERR_PTR(-EINVAL);
++		}
++	}
++
++	start = le64_to_cpu(table[indexes - 1]);
++	if (start >= table_start || (table_start - start) > SQUASHFS_METADATA_SIZE) {
++		kfree(table);
++		return ERR_PTR(-EINVAL);
++	}
+ 
+-	TRACE("In read_xattr_index_table, length %d\n", len);
++	if (*xattr_table_start >= le64_to_cpu(table[0])) {
++		kfree(table);
++		return ERR_PTR(-EINVAL);
++	}
+ 
+-	return squashfs_read_table(sb, start + sizeof(*id_table), len);
++	return table;
+ }
+diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
+index 9548d075e06da..b998e4b736912 100644
+--- a/include/linux/sunrpc/xdr.h
++++ b/include/linux/sunrpc/xdr.h
+@@ -25,8 +25,7 @@ struct rpc_rqst;
+ #define XDR_QUADLEN(l)		(((l) + 3) >> 2)
+ 
+ /*
+- * Generic opaque `network object.' At the kernel level, this type
+- * is used only by lockd.
++ * Generic opaque `network object.'
+  */
+ #define XDR_MAX_NETOBJ		1024
+ struct xdr_netobj {
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 618cb1b451ade..8c017f8c0c6d6 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -6822,7 +6822,7 @@ static int is_branch32_taken(struct bpf_reg_state *reg, u32 val, u8 opcode)
+ 	case BPF_JSGT:
+ 		if (reg->s32_min_value > sval)
+ 			return 1;
+-		else if (reg->s32_max_value < sval)
++		else if (reg->s32_max_value <= sval)
+ 			return 0;
+ 		break;
+ 	case BPF_JLT:
+@@ -6895,7 +6895,7 @@ static int is_branch64_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
+ 	case BPF_JSGT:
+ 		if (reg->smin_value > sval)
+ 			return 1;
+-		else if (reg->smax_value < sval)
++		else if (reg->smax_value <= sval)
+ 			return 0;
+ 		break;
+ 	case BPF_JLT:
+@@ -8465,7 +8465,11 @@ static bool range_within(struct bpf_reg_state *old,
+ 	return old->umin_value <= cur->umin_value &&
+ 	       old->umax_value >= cur->umax_value &&
+ 	       old->smin_value <= cur->smin_value &&
+-	       old->smax_value >= cur->smax_value;
++	       old->smax_value >= cur->smax_value &&
++	       old->u32_min_value <= cur->u32_min_value &&
++	       old->u32_max_value >= cur->u32_max_value &&
++	       old->s32_min_value <= cur->s32_min_value &&
++	       old->s32_max_value >= cur->s32_max_value;
+ }
+ 
+ /* Maximum number of register states that can exist at once */
+@@ -10862,30 +10866,28 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
+ 		    insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
+ 		    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
+ 			bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
+-			struct bpf_insn mask_and_div[] = {
+-				BPF_MOV32_REG(insn->src_reg, insn->src_reg),
++			bool isdiv = BPF_OP(insn->code) == BPF_DIV;
++			struct bpf_insn *patchlet;
++			struct bpf_insn chk_and_div[] = {
+ 				/* Rx div 0 -> 0 */
+-				BPF_JMP_IMM(BPF_JNE, insn->src_reg, 0, 2),
++				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
++					     BPF_JNE | BPF_K, insn->src_reg,
++					     0, 2, 0),
+ 				BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
+ 				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+ 				*insn,
+ 			};
+-			struct bpf_insn mask_and_mod[] = {
+-				BPF_MOV32_REG(insn->src_reg, insn->src_reg),
++			struct bpf_insn chk_and_mod[] = {
+ 				/* Rx mod 0 -> Rx */
+-				BPF_JMP_IMM(BPF_JEQ, insn->src_reg, 0, 1),
++				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
++					     BPF_JEQ | BPF_K, insn->src_reg,
++					     0, 1, 0),
+ 				*insn,
+ 			};
+-			struct bpf_insn *patchlet;
+ 
+-			if (insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
+-			    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
+-				patchlet = mask_and_div + (is64 ? 1 : 0);
+-				cnt = ARRAY_SIZE(mask_and_div) - (is64 ? 1 : 0);
+-			} else {
+-				patchlet = mask_and_mod + (is64 ? 1 : 0);
+-				cnt = ARRAY_SIZE(mask_and_mod) - (is64 ? 1 : 0);
+-			}
++			patchlet = isdiv ? chk_and_div : chk_and_mod;
++			cnt = isdiv ? ARRAY_SIZE(chk_and_div) :
++				      ARRAY_SIZE(chk_and_mod);
+ 
+ 			new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
+ 			if (!new_prog)
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 8fc23d53f5500..a604e69ecfa57 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -6320,6 +6320,8 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
+ 	if (err)
+ 		return err;
+ 
++	page_counter_set_high(&memcg->memory, high);
++
+ 	for (;;) {
+ 		unsigned long nr_pages = page_counter_read(&memcg->memory);
+ 		unsigned long reclaimed;
+@@ -6343,10 +6345,7 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
+ 			break;
+ 	}
+ 
+-	page_counter_set_high(&memcg->memory, high);
+-
+ 	memcg_wb_domain_size_changed(memcg);
+-
+ 	return nbytes;
+ }
+ 
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index c12dbc51ef5fe..ef9b4ac03e7b7 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2902,7 +2902,7 @@ static int count_ah_combs(const struct xfrm_tmpl *t)
+ 			break;
+ 		if (!aalg->pfkey_supported)
+ 			continue;
+-		if (aalg_tmpl_set(t, aalg) && aalg->available)
++		if (aalg_tmpl_set(t, aalg))
+ 			sz += sizeof(struct sadb_comb);
+ 	}
+ 	return sz + sizeof(struct sadb_prop);
+@@ -2920,7 +2920,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
+ 		if (!ealg->pfkey_supported)
+ 			continue;
+ 
+-		if (!(ealg_tmpl_set(t, ealg) && ealg->available))
++		if (!(ealg_tmpl_set(t, ealg)))
+ 			continue;
+ 
+ 		for (k = 1; ; k++) {
+@@ -2931,7 +2931,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
+ 			if (!aalg->pfkey_supported)
+ 				continue;
+ 
+-			if (aalg_tmpl_set(t, aalg) && aalg->available)
++			if (aalg_tmpl_set(t, aalg))
+ 				sz += sizeof(struct sadb_comb);
+ 		}
+ 	}
+diff --git a/net/mac80211/spectmgmt.c b/net/mac80211/spectmgmt.c
+index ae1cb2c687224..76747bfdaddd0 100644
+--- a/net/mac80211/spectmgmt.c
++++ b/net/mac80211/spectmgmt.c
+@@ -133,16 +133,20 @@ int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata,
+ 	}
+ 
+ 	if (wide_bw_chansw_ie) {
++		u8 new_seg1 = wide_bw_chansw_ie->new_center_freq_seg1;
+ 		struct ieee80211_vht_operation vht_oper = {
+ 			.chan_width =
+ 				wide_bw_chansw_ie->new_channel_width,
+ 			.center_freq_seg0_idx =
+ 				wide_bw_chansw_ie->new_center_freq_seg0,
+-			.center_freq_seg1_idx =
+-				wide_bw_chansw_ie->new_center_freq_seg1,
++			.center_freq_seg1_idx = new_seg1,
+ 			/* .basic_mcs_set doesn't matter */
+ 		};
+-		struct ieee80211_ht_operation ht_oper = {};
++		struct ieee80211_ht_operation ht_oper = {
++			.operation_mode =
++				cpu_to_le16(new_seg1 <<
++					    IEEE80211_HT_OP_MODE_CCFS2_SHIFT),
++		};
+ 
+ 		/* default, for the case of IEEE80211_VHT_CHANWIDTH_USE_HT,
+ 		 * to the previously parsed chandef
+diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
+index 4ecc2a9595674..5f42aa5fc6128 100644
+--- a/net/sunrpc/auth_gss/auth_gss.c
++++ b/net/sunrpc/auth_gss/auth_gss.c
+@@ -29,6 +29,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/hashtable.h>
+ 
++#include "auth_gss_internal.h"
+ #include "../netns.h"
+ 
+ #include <trace/events/rpcgss.h>
+@@ -125,35 +126,6 @@ gss_cred_set_ctx(struct rpc_cred *cred, struct gss_cl_ctx *ctx)
+ 	clear_bit(RPCAUTH_CRED_NEW, &cred->cr_flags);
+ }
+ 
+-static const void *
+-simple_get_bytes(const void *p, const void *end, void *res, size_t len)
+-{
+-	const void *q = (const void *)((const char *)p + len);
+-	if (unlikely(q > end || q < p))
+-		return ERR_PTR(-EFAULT);
+-	memcpy(res, p, len);
+-	return q;
+-}
+-
+-static inline const void *
+-simple_get_netobj(const void *p, const void *end, struct xdr_netobj *dest)
+-{
+-	const void *q;
+-	unsigned int len;
+-
+-	p = simple_get_bytes(p, end, &len, sizeof(len));
+-	if (IS_ERR(p))
+-		return p;
+-	q = (const void *)((const char *)p + len);
+-	if (unlikely(q > end || q < p))
+-		return ERR_PTR(-EFAULT);
+-	dest->data = kmemdup(p, len, GFP_NOFS);
+-	if (unlikely(dest->data == NULL))
+-		return ERR_PTR(-ENOMEM);
+-	dest->len = len;
+-	return q;
+-}
+-
+ static struct gss_cl_ctx *
+ gss_cred_get_ctx(struct rpc_cred *cred)
+ {
+diff --git a/net/sunrpc/auth_gss/auth_gss_internal.h b/net/sunrpc/auth_gss/auth_gss_internal.h
+new file mode 100644
+index 0000000000000..f6d9631bd9d00
+--- /dev/null
++++ b/net/sunrpc/auth_gss/auth_gss_internal.h
+@@ -0,0 +1,45 @@
++// SPDX-License-Identifier: BSD-3-Clause
++/*
++ * linux/net/sunrpc/auth_gss/auth_gss_internal.h
++ *
++ * Internal definitions for RPCSEC_GSS client authentication
++ *
++ * Copyright (c) 2000 The Regents of the University of Michigan.
++ * All rights reserved.
++ *
++ */
++#include <linux/err.h>
++#include <linux/string.h>
++#include <linux/sunrpc/xdr.h>
++
++static inline const void *
++simple_get_bytes(const void *p, const void *end, void *res, size_t len)
++{
++	const void *q = (const void *)((const char *)p + len);
++	if (unlikely(q > end || q < p))
++		return ERR_PTR(-EFAULT);
++	memcpy(res, p, len);
++	return q;
++}
++
++static inline const void *
++simple_get_netobj(const void *p, const void *end, struct xdr_netobj *dest)
++{
++	const void *q;
++	unsigned int len;
++
++	p = simple_get_bytes(p, end, &len, sizeof(len));
++	if (IS_ERR(p))
++		return p;
++	q = (const void *)((const char *)p + len);
++	if (unlikely(q > end || q < p))
++		return ERR_PTR(-EFAULT);
++	if (len) {
++		dest->data = kmemdup(p, len, GFP_NOFS);
++		if (unlikely(dest->data == NULL))
++			return ERR_PTR(-ENOMEM);
++	} else
++		dest->data = NULL;
++	dest->len = len;
++	return q;
++}
+diff --git a/net/sunrpc/auth_gss/gss_krb5_mech.c b/net/sunrpc/auth_gss/gss_krb5_mech.c
+index ae9acf3a73898..1c092b05c2bba 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_mech.c
++++ b/net/sunrpc/auth_gss/gss_krb5_mech.c
+@@ -21,6 +21,8 @@
+ #include <linux/sunrpc/xdr.h>
+ #include <linux/sunrpc/gss_krb5_enctypes.h>
+ 
++#include "auth_gss_internal.h"
++
+ #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
+ # define RPCDBG_FACILITY	RPCDBG_AUTH
+ #endif
+@@ -143,35 +145,6 @@ get_gss_krb5_enctype(int etype)
+ 	return NULL;
+ }
+ 
+-static const void *
+-simple_get_bytes(const void *p, const void *end, void *res, int len)
+-{
+-	const void *q = (const void *)((const char *)p + len);
+-	if (unlikely(q > end || q < p))
+-		return ERR_PTR(-EFAULT);
+-	memcpy(res, p, len);
+-	return q;
+-}
+-
+-static const void *
+-simple_get_netobj(const void *p, const void *end, struct xdr_netobj *res)
+-{
+-	const void *q;
+-	unsigned int len;
+-
+-	p = simple_get_bytes(p, end, &len, sizeof(len));
+-	if (IS_ERR(p))
+-		return p;
+-	q = (const void *)((const char *)p + len);
+-	if (unlikely(q > end || q < p))
+-		return ERR_PTR(-EFAULT);
+-	res->data = kmemdup(p, len, GFP_NOFS);
+-	if (unlikely(res->data == NULL))
+-		return ERR_PTR(-ENOMEM);
+-	res->len = len;
+-	return q;
+-}
+-
+ static inline const void *
+ get_key(const void *p, const void *end,
+ 	struct krb5_ctx *ctx, struct crypto_sync_skcipher **res)
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 1c5114dedda92..fe49e9a97f0ec 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -306,6 +306,10 @@ static const struct config_entry config_table[] = {
+ 		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ 		.device = 0xa0c8,
+ 	},
++	{
++		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++		.device = 0x43c8,
++	},
+ #endif
+ 
+ /* Elkhart Lake */
+diff --git a/sound/soc/codecs/ak4458.c b/sound/soc/codecs/ak4458.c
+index 1010c9ee2e836..472caad17012e 100644
+--- a/sound/soc/codecs/ak4458.c
++++ b/sound/soc/codecs/ak4458.c
+@@ -595,18 +595,10 @@ static struct snd_soc_dai_driver ak4497_dai = {
+ 	.ops = &ak4458_dai_ops,
+ };
+ 
+-static void ak4458_power_off(struct ak4458_priv *ak4458)
++static void ak4458_reset(struct ak4458_priv *ak4458, bool active)
+ {
+ 	if (ak4458->reset_gpiod) {
+-		gpiod_set_value_cansleep(ak4458->reset_gpiod, 0);
+-		usleep_range(1000, 2000);
+-	}
+-}
+-
+-static void ak4458_power_on(struct ak4458_priv *ak4458)
+-{
+-	if (ak4458->reset_gpiod) {
+-		gpiod_set_value_cansleep(ak4458->reset_gpiod, 1);
++		gpiod_set_value_cansleep(ak4458->reset_gpiod, active);
+ 		usleep_range(1000, 2000);
+ 	}
+ }
+@@ -620,7 +612,7 @@ static int ak4458_init(struct snd_soc_component *component)
+ 	if (ak4458->mute_gpiod)
+ 		gpiod_set_value_cansleep(ak4458->mute_gpiod, 1);
+ 
+-	ak4458_power_on(ak4458);
++	ak4458_reset(ak4458, false);
+ 
+ 	ret = snd_soc_component_update_bits(component, AK4458_00_CONTROL1,
+ 			    0x80, 0x80);   /* ACKS bit = 1; 10000000 */
+@@ -650,7 +642,7 @@ static void ak4458_remove(struct snd_soc_component *component)
+ {
+ 	struct ak4458_priv *ak4458 = snd_soc_component_get_drvdata(component);
+ 
+-	ak4458_power_off(ak4458);
++	ak4458_reset(ak4458, true);
+ }
+ 
+ #ifdef CONFIG_PM
+@@ -660,7 +652,7 @@ static int __maybe_unused ak4458_runtime_suspend(struct device *dev)
+ 
+ 	regcache_cache_only(ak4458->regmap, true);
+ 
+-	ak4458_power_off(ak4458);
++	ak4458_reset(ak4458, true);
+ 
+ 	if (ak4458->mute_gpiod)
+ 		gpiod_set_value_cansleep(ak4458->mute_gpiod, 0);
+@@ -685,8 +677,8 @@ static int __maybe_unused ak4458_runtime_resume(struct device *dev)
+ 	if (ak4458->mute_gpiod)
+ 		gpiod_set_value_cansleep(ak4458->mute_gpiod, 1);
+ 
+-	ak4458_power_off(ak4458);
+-	ak4458_power_on(ak4458);
++	ak4458_reset(ak4458, true);
++	ak4458_reset(ak4458, false);
+ 
+ 	regcache_cache_only(ak4458->regmap, false);
+ 	regcache_mark_dirty(ak4458->regmap);
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index dec8716aa8ef5..985b2dcecf138 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -2031,11 +2031,14 @@ static struct wm_coeff_ctl *wm_adsp_get_ctl(struct wm_adsp *dsp,
+ 					     unsigned int alg)
+ {
+ 	struct wm_coeff_ctl *pos, *rslt = NULL;
++	const char *fw_txt = wm_adsp_fw_text[dsp->fw];
+ 
+ 	list_for_each_entry(pos, &dsp->ctl_list, list) {
+ 		if (!pos->subname)
+ 			continue;
+ 		if (strncmp(pos->subname, name, pos->subname_len) == 0 &&
++		    strncmp(pos->fw_name, fw_txt,
++			    SNDRV_CTL_ELEM_ID_NAME_MAXLEN) == 0 &&
+ 				pos->alg_region.alg == alg &&
+ 				pos->alg_region.type == type) {
+ 			rslt = pos;
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index b29946eb43551..a8d43c87cb5a2 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -57,6 +57,16 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 		.driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
+ 					SOF_RT715_DAI_ID_FIX),
+ 	},
++	{
++		.callback = sof_sdw_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A5E")
++		},
++		.driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
++					SOF_RT715_DAI_ID_FIX |
++					SOF_SDW_FOUR_SPK),
++	},
+ 	{
+ 		.callback = sof_sdw_quirk_cb,
+ 		.matches = {
+diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
+index d699e61eca3d0..0955cbb4e9187 100644
+--- a/sound/soc/intel/skylake/skl-topology.c
++++ b/sound/soc/intel/skylake/skl-topology.c
+@@ -3632,7 +3632,7 @@ static void skl_tplg_complete(struct snd_soc_component *component)
+ 		sprintf(chan_text, "c%d", mach->mach_params.dmic_num);
+ 
+ 		for (i = 0; i < se->items; i++) {
+-			struct snd_ctl_elem_value val;
++			struct snd_ctl_elem_value val = {};
+ 
+ 			if (strstr(texts[i], chan_text)) {
+ 				val.value.enumerated.item[0] = i;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-13 15:48 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-02-13 15:48 UTC (permalink / raw
  To: gentoo-commits

commit:     034e677f3ab6a057d20e7b28723df4ddc4298f86
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 13 15:47:09 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb 13 15:48:10 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=034e677f

Revert "Linux patch 5.10.16"

This reverts commit ff99c69ccc56172fa7a303b47421e86f08ed0a8f.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 -
 1015_linux-5.10.16.patch | 2316 ----------------------------------------------
 2 files changed, 2320 deletions(-)

diff --git a/0000_README b/0000_README
index bbc23e0..9214e4d 100644
--- a/0000_README
+++ b/0000_README
@@ -103,10 +103,6 @@ Patch:  1014_linux-5.10.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.15
 
-Patch:  1015_linux-5.10.16.patch
-From:   http://www.kernel.org
-Desc:   Linux 5.10.16
-
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-5.10.16.patch b/1015_linux-5.10.16.patch
deleted file mode 100644
index 01d52a4..0000000
--- a/1015_linux-5.10.16.patch
+++ /dev/null
@@ -1,2316 +0,0 @@
-diff --git a/Makefile b/Makefile
-index b62d2d4ea7b02..9a1f26680d836 100644
---- a/Makefile
-+++ b/Makefile
-@@ -1,7 +1,7 @@
- # SPDX-License-Identifier: GPL-2.0
- VERSION = 5
- PATCHLEVEL = 10
--SUBLEVEL = 15
-+SUBLEVEL = 16
- EXTRAVERSION =
- NAME = Kleptomaniac Octopus
- 
-diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
-index 8dad44262e751..495ffc9cf5e22 100644
---- a/arch/powerpc/kernel/vdso.c
-+++ b/arch/powerpc/kernel/vdso.c
-@@ -475,7 +475,7 @@ static __init void vdso_setup_trampolines(struct lib32_elfinfo *v32,
- 	 */
- 
- #ifdef CONFIG_PPC64
--	vdso64_rt_sigtramp = find_function64(v64, "__kernel_sigtramp_rt64");
-+	vdso64_rt_sigtramp = find_function64(v64, "__kernel_start_sigtramp_rt64");
- #endif
- 	vdso32_sigtramp	   = find_function32(v32, "__kernel_sigtramp32");
- 	vdso32_rt_sigtramp = find_function32(v32, "__kernel_sigtramp_rt32");
-diff --git a/arch/powerpc/kernel/vdso64/sigtramp.S b/arch/powerpc/kernel/vdso64/sigtramp.S
-index bbf68cd01088b..2d4067561293e 100644
---- a/arch/powerpc/kernel/vdso64/sigtramp.S
-+++ b/arch/powerpc/kernel/vdso64/sigtramp.S
-@@ -15,11 +15,20 @@
- 
- 	.text
- 
-+/*
-+ * __kernel_start_sigtramp_rt64 and __kernel_sigtramp_rt64 together
-+ * are one function split in two parts. The kernel jumps to the former
-+ * and the signal handler indirectly (by blr) returns to the latter.
-+ * __kernel_sigtramp_rt64 needs to point to the return address so
-+ * glibc can correctly identify the trampoline stack frame.
-+ */
- 	.balign 8
- 	.balign IFETCH_ALIGN_BYTES
--V_FUNCTION_BEGIN(__kernel_sigtramp_rt64)
-+V_FUNCTION_BEGIN(__kernel_start_sigtramp_rt64)
- .Lsigrt_start:
- 	bctrl	/* call the handler */
-+V_FUNCTION_END(__kernel_start_sigtramp_rt64)
-+V_FUNCTION_BEGIN(__kernel_sigtramp_rt64)
- 	addi	r1, r1, __SIGNAL_FRAMESIZE
- 	li	r0,__NR_rt_sigreturn
- 	sc
-diff --git a/arch/powerpc/kernel/vdso64/vdso64.lds.S b/arch/powerpc/kernel/vdso64/vdso64.lds.S
-index 256fb97202987..bd120f590b9ed 100644
---- a/arch/powerpc/kernel/vdso64/vdso64.lds.S
-+++ b/arch/powerpc/kernel/vdso64/vdso64.lds.S
-@@ -150,6 +150,7 @@ VERSION
- 		__kernel_get_tbfreq;
- 		__kernel_sync_dicache;
- 		__kernel_sync_dicache_p5;
-+		__kernel_start_sigtramp_rt64;
- 		__kernel_sigtramp_rt64;
- 		__kernel_getcpu;
- 		__kernel_time;
-diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
-index 54fbe1e80cc41..f13688c4b9317 100644
---- a/block/blk-cgroup.c
-+++ b/block/blk-cgroup.c
-@@ -1017,6 +1017,8 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
-  */
- void blkcg_destroy_blkgs(struct blkcg *blkcg)
- {
-+	might_sleep();
-+
- 	spin_lock_irq(&blkcg->lock);
- 
- 	while (!hlist_empty(&blkcg->blkg_list)) {
-@@ -1024,14 +1026,20 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
- 						struct blkcg_gq, blkcg_node);
- 		struct request_queue *q = blkg->q;
- 
--		if (spin_trylock(&q->queue_lock)) {
--			blkg_destroy(blkg);
--			spin_unlock(&q->queue_lock);
--		} else {
-+		if (need_resched() || !spin_trylock(&q->queue_lock)) {
-+			/*
-+			 * Given that the system can accumulate a huge number
-+			 * of blkgs in pathological cases, check to see if we
-+			 * need to rescheduling to avoid softlockup.
-+			 */
- 			spin_unlock_irq(&blkcg->lock);
--			cpu_relax();
-+			cond_resched();
- 			spin_lock_irq(&blkcg->lock);
-+			continue;
- 		}
-+
-+		blkg_destroy(blkg);
-+		spin_unlock(&q->queue_lock);
- 	}
- 
- 	spin_unlock_irq(&blkcg->lock);
-diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
-index 689c06cbbb457..ade3ecf2ee495 100644
---- a/drivers/gpio/gpiolib-cdev.c
-+++ b/drivers/gpio/gpiolib-cdev.c
-@@ -756,6 +756,8 @@ static void edge_detector_stop(struct line *line)
- 	cancel_delayed_work_sync(&line->work);
- 	WRITE_ONCE(line->sw_debounced, 0);
- 	line->eflags = 0;
-+	if (line->desc)
-+		WRITE_ONCE(line->desc->debounce_period_us, 0);
- 	/* do not change line->level - see comment in debounced_value() */
- }
- 
-diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
-index 40dfb4d0ffbec..db62e6a934d91 100644
---- a/drivers/gpu/drm/i915/display/intel_ddi.c
-+++ b/drivers/gpu/drm/i915/display/intel_ddi.c
-@@ -2597,6 +2597,9 @@ static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
- 	u32 n_entries, val;
- 	int ln, rate = 0;
- 
-+	if (enc_to_dig_port(encoder)->tc_mode == TC_PORT_TBT_ALT)
-+		return;
-+
- 	if (type != INTEL_OUTPUT_HDMI) {
- 		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
- 
-@@ -2605,12 +2608,11 @@ static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
- 
- 	ddi_translations = icl_get_mg_buf_trans(encoder, type, rate,
- 						&n_entries);
--	/* The table does not have values for level 3 and level 9. */
--	if (level >= n_entries || level == 3 || level == 9) {
-+	if (level >= n_entries) {
- 		drm_dbg_kms(&dev_priv->drm,
- 			    "DDI translation not found for level %d. Using %d instead.",
--			    level, n_entries - 2);
--		level = n_entries - 2;
-+			    level, n_entries - 1);
-+		level = n_entries - 1;
- 	}
- 
- 	/* Set MG_TX_LINK_PARAMS cri_use_fs32 to 0. */
-@@ -2742,6 +2744,9 @@ tgl_dkl_phy_ddi_vswing_sequence(struct intel_encoder *encoder, int link_clock,
- 	u32 n_entries, val, ln, dpcnt_mask, dpcnt_val;
- 	int rate = 0;
- 
-+	if (enc_to_dig_port(encoder)->tc_mode == TC_PORT_TBT_ALT)
-+		return;
-+
- 	if (type != INTEL_OUTPUT_HDMI) {
- 		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
- 
-diff --git a/drivers/gpu/drm/nouveau/include/nvif/push.h b/drivers/gpu/drm/nouveau/include/nvif/push.h
-index 168d7694ede5c..6d3a8a3d2087b 100644
---- a/drivers/gpu/drm/nouveau/include/nvif/push.h
-+++ b/drivers/gpu/drm/nouveau/include/nvif/push.h
-@@ -123,131 +123,131 @@ PUSH_KICK(struct nvif_push *push)
- } while(0)
- #endif
- 
--#define PUSH_1(X,f,ds,n,c,o,p,s,mA,dA) do {                            \
--	PUSH_##o##_HDR((p), s, mA, (c)+(n));                           \
--	PUSH_##f(X, (p), X##mA, 1, o, (dA), ds, "");                   \
-+#define PUSH_1(X,f,ds,n,o,p,s,mA,dA) do {                             \
-+	PUSH_##o##_HDR((p), s, mA, (ds)+(n));                         \
-+	PUSH_##f(X, (p), X##mA, 1, o, (dA), ds, "");                  \
- } while(0)
--#define PUSH_2(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
--	PUSH_ASSERT((mB) - (mA) == (1?PUSH_##o##_INC), "mthd1");       \
--	PUSH_1(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
--	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
-+#define PUSH_2(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
-+	PUSH_ASSERT((mB) - (mA) == (1?PUSH_##o##_INC), "mthd1");      \
-+	PUSH_1(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
-+	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
- } while(0)
--#define PUSH_3(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
--	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd2");       \
--	PUSH_2(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
--	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
-+#define PUSH_3(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
-+	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd2");      \
-+	PUSH_2(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
-+	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
- } while(0)
--#define PUSH_4(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
--	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd3");       \
--	PUSH_3(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
--	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
-+#define PUSH_4(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
-+	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd3");      \
-+	PUSH_3(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
-+	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
- } while(0)
--#define PUSH_5(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
--	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd4");       \
--	PUSH_4(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
--	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
-+#define PUSH_5(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
-+	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd4");      \
-+	PUSH_4(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
-+	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
- } while(0)
--#define PUSH_6(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
--	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd5");       \
--	PUSH_5(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
--	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
-+#define PUSH_6(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
-+	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd5");      \
-+	PUSH_5(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
-+	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
- } while(0)
--#define PUSH_7(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
--	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd6");       \
--	PUSH_6(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
--	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
-+#define PUSH_7(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
-+	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd6");      \
-+	PUSH_6(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
-+	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
- } while(0)
--#define PUSH_8(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
--	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd7");       \
--	PUSH_7(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
--	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
-+#define PUSH_8(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
-+	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd7");      \
-+	PUSH_7(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
-+	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
- } while(0)
--#define PUSH_9(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
--	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd8");       \
--	PUSH_8(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
--	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
-+#define PUSH_9(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
-+	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd8");      \
-+	PUSH_8(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
-+	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
- } while(0)
--#define PUSH_10(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                \
--	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd9");       \
--	PUSH_9(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
--	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
-+#define PUSH_10(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                 \
-+	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd9");      \
-+	PUSH_9(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
-+	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
- } while(0)
- 
--#define PUSH_1D(X,o,p,s,mA,dA)                            \
--	PUSH_1(X, DATA_, 1, 1, 0, o, (p), s, X##mA, (dA))
--#define PUSH_2D(X,o,p,s,mA,dA,mB,dB)                      \
--	PUSH_2(X, DATA_, 1, 1, 0, o, (p), s, X##mB, (dB), \
--					     X##mA, (dA))
--#define PUSH_3D(X,o,p,s,mA,dA,mB,dB,mC,dC)                \
--	PUSH_3(X, DATA_, 1, 1, 0, o, (p), s, X##mC, (dC), \
--					     X##mB, (dB), \
--					     X##mA, (dA))
--#define PUSH_4D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD)          \
--	PUSH_4(X, DATA_, 1, 1, 0, o, (p), s, X##mD, (dD), \
--					     X##mC, (dC), \
--					     X##mB, (dB), \
--					     X##mA, (dA))
--#define PUSH_5D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE)    \
--	PUSH_5(X, DATA_, 1, 1, 0, o, (p), s, X##mE, (dE), \
--					     X##mD, (dD), \
--					     X##mC, (dC), \
--					     X##mB, (dB), \
--					     X##mA, (dA))
-+#define PUSH_1D(X,o,p,s,mA,dA)                         \
-+	PUSH_1(X, DATA_, 1, 0, o, (p), s, X##mA, (dA))
-+#define PUSH_2D(X,o,p,s,mA,dA,mB,dB)                   \
-+	PUSH_2(X, DATA_, 1, 0, o, (p), s, X##mB, (dB), \
-+					  X##mA, (dA))
-+#define PUSH_3D(X,o,p,s,mA,dA,mB,dB,mC,dC)             \
-+	PUSH_3(X, DATA_, 1, 0, o, (p), s, X##mC, (dC), \
-+					  X##mB, (dB), \
-+					  X##mA, (dA))
-+#define PUSH_4D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD)       \
-+	PUSH_4(X, DATA_, 1, 0, o, (p), s, X##mD, (dD), \
-+					  X##mC, (dC), \
-+					  X##mB, (dB), \
-+					  X##mA, (dA))
-+#define PUSH_5D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE) \
-+	PUSH_5(X, DATA_, 1, 0, o, (p), s, X##mE, (dE), \
-+					  X##mD, (dD), \
-+					  X##mC, (dC), \
-+					  X##mB, (dB), \
-+					  X##mA, (dA))
- #define PUSH_6D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF) \
--	PUSH_6(X, DATA_, 1, 1, 0, o, (p), s, X##mF, (dF),    \
--					     X##mE, (dE),    \
--					     X##mD, (dD),    \
--					     X##mC, (dC),    \
--					     X##mB, (dB),    \
--					     X##mA, (dA))
-+	PUSH_6(X, DATA_, 1, 0, o, (p), s, X##mF, (dF),       \
-+					  X##mE, (dE),       \
-+					  X##mD, (dD),       \
-+					  X##mC, (dC),       \
-+					  X##mB, (dB),       \
-+					  X##mA, (dA))
- #define PUSH_7D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF,mG,dG) \
--	PUSH_7(X, DATA_, 1, 1, 0, o, (p), s, X##mG, (dG),          \
--					     X##mF, (dF),          \
--					     X##mE, (dE),          \
--					     X##mD, (dD),          \
--					     X##mC, (dC),          \
--					     X##mB, (dB),          \
--					     X##mA, (dA))
-+	PUSH_7(X, DATA_, 1, 0, o, (p), s, X##mG, (dG),             \
-+					  X##mF, (dF),             \
-+					  X##mE, (dE),             \
-+					  X##mD, (dD),             \
-+					  X##mC, (dC),             \
-+					  X##mB, (dB),             \
-+					  X##mA, (dA))
- #define PUSH_8D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF,mG,dG,mH,dH) \
--	PUSH_8(X, DATA_, 1, 1, 0, o, (p), s, X##mH, (dH),                \
--					     X##mG, (dG),                \
--					     X##mF, (dF),                \
--					     X##mE, (dE),                \
--					     X##mD, (dD),                \
--					     X##mC, (dC),                \
--					     X##mB, (dB),                \
--					     X##mA, (dA))
-+	PUSH_8(X, DATA_, 1, 0, o, (p), s, X##mH, (dH),                   \
-+					  X##mG, (dG),                   \
-+					  X##mF, (dF),                   \
-+					  X##mE, (dE),                   \
-+					  X##mD, (dD),                   \
-+					  X##mC, (dC),                   \
-+					  X##mB, (dB),                   \
-+					  X##mA, (dA))
- #define PUSH_9D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF,mG,dG,mH,dH,mI,dI) \
--	PUSH_9(X, DATA_, 1, 1, 0, o, (p), s, X##mI, (dI),                      \
--					     X##mH, (dH),                      \
--					     X##mG, (dG),                      \
--					     X##mF, (dF),                      \
--					     X##mE, (dE),                      \
--					     X##mD, (dD),                      \
--					     X##mC, (dC),                      \
--					     X##mB, (dB),                      \
--					     X##mA, (dA))
-+	PUSH_9(X, DATA_, 1, 0, o, (p), s, X##mI, (dI),                         \
-+					  X##mH, (dH),                         \
-+					  X##mG, (dG),                         \
-+					  X##mF, (dF),                         \
-+					  X##mE, (dE),                         \
-+					  X##mD, (dD),                         \
-+					  X##mC, (dC),                         \
-+					  X##mB, (dB),                         \
-+					  X##mA, (dA))
- #define PUSH_10D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF,mG,dG,mH,dH,mI,dI,mJ,dJ) \
--	PUSH_10(X, DATA_, 1, 1, 0, o, (p), s, X##mJ, (dJ),                            \
--					      X##mI, (dI),                            \
--					      X##mH, (dH),                            \
--					      X##mG, (dG),                            \
--					      X##mF, (dF),                            \
--					      X##mE, (dE),                            \
--					      X##mD, (dD),                            \
--					      X##mC, (dC),                            \
--					      X##mB, (dB),                            \
--					      X##mA, (dA))
-+	PUSH_10(X, DATA_, 1, 0, o, (p), s, X##mJ, (dJ),                               \
-+					   X##mI, (dI),                               \
-+					   X##mH, (dH),                               \
-+					   X##mG, (dG),                               \
-+					   X##mF, (dF),                               \
-+					   X##mE, (dE),                               \
-+					   X##mD, (dD),                               \
-+					   X##mC, (dC),                               \
-+					   X##mB, (dB),                               \
-+					   X##mA, (dA))
- 
--#define PUSH_1P(X,o,p,s,mA,dp,ds)                           \
--	PUSH_1(X, DATAp, ds, ds, 0, o, (p), s, X##mA, (dp))
--#define PUSH_2P(X,o,p,s,mA,dA,mB,dp,ds)                     \
--	PUSH_2(X, DATAp, ds, ds, 0, o, (p), s, X##mB, (dp), \
--					       X##mA, (dA))
--#define PUSH_3P(X,o,p,s,mA,dA,mB,dB,mC,dp,ds)               \
--	PUSH_3(X, DATAp, ds, ds, 0, o, (p), s, X##mC, (dp), \
--					       X##mB, (dB), \
--					       X##mA, (dA))
-+#define PUSH_1P(X,o,p,s,mA,dp,ds)                       \
-+	PUSH_1(X, DATAp, ds, 0, o, (p), s, X##mA, (dp))
-+#define PUSH_2P(X,o,p,s,mA,dA,mB,dp,ds)                 \
-+	PUSH_2(X, DATAp, ds, 0, o, (p), s, X##mB, (dp), \
-+					   X##mA, (dA))
-+#define PUSH_3P(X,o,p,s,mA,dA,mB,dB,mC,dp,ds)           \
-+	PUSH_3(X, DATAp, ds, 0, o, (p), s, X##mC, (dp), \
-+					   X##mB, (dB), \
-+					   X##mA, (dA))
- 
- #define PUSH_(A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,IMPL,...) IMPL
- #define PUSH(A...) PUSH_(A, PUSH_10P, PUSH_10D,          \
-diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
-index 0818d3e507347..2ffd2f354d0ae 100644
---- a/drivers/i2c/busses/i2c-mt65xx.c
-+++ b/drivers/i2c/busses/i2c-mt65xx.c
-@@ -1275,7 +1275,8 @@ static int mtk_i2c_probe(struct platform_device *pdev)
- 	mtk_i2c_clock_disable(i2c);
- 
- 	ret = devm_request_irq(&pdev->dev, irq, mtk_i2c_irq,
--			       IRQF_TRIGGER_NONE, I2C_DRV_NAME, i2c);
-+			       IRQF_NO_SUSPEND | IRQF_TRIGGER_NONE,
-+			       I2C_DRV_NAME, i2c);
- 	if (ret < 0) {
- 		dev_err(&pdev->dev,
- 			"Request I2C IRQ %d fail\n", irq);
-@@ -1302,7 +1303,16 @@ static int mtk_i2c_remove(struct platform_device *pdev)
- }
- 
- #ifdef CONFIG_PM_SLEEP
--static int mtk_i2c_resume(struct device *dev)
-+static int mtk_i2c_suspend_noirq(struct device *dev)
-+{
-+	struct mtk_i2c *i2c = dev_get_drvdata(dev);
-+
-+	i2c_mark_adapter_suspended(&i2c->adap);
-+
-+	return 0;
-+}
-+
-+static int mtk_i2c_resume_noirq(struct device *dev)
- {
- 	int ret;
- 	struct mtk_i2c *i2c = dev_get_drvdata(dev);
-@@ -1317,12 +1327,15 @@ static int mtk_i2c_resume(struct device *dev)
- 
- 	mtk_i2c_clock_disable(i2c);
- 
-+	i2c_mark_adapter_resumed(&i2c->adap);
-+
- 	return 0;
- }
- #endif
- 
- static const struct dev_pm_ops mtk_i2c_pm = {
--	SET_SYSTEM_SLEEP_PM_OPS(NULL, mtk_i2c_resume)
-+	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mtk_i2c_suspend_noirq,
-+				      mtk_i2c_resume_noirq)
- };
- 
- static struct platform_driver mtk_i2c_driver = {
-diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
-index 5beec901713fb..a262c949ed76b 100644
---- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
-+++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
-@@ -1158,11 +1158,9 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
- #endif
- 	}
- 	if (!n || !n->dev)
--		goto free_sk;
-+		goto free_dst;
- 
- 	ndev = n->dev;
--	if (!ndev)
--		goto free_dst;
- 	if (is_vlan_dev(ndev))
- 		ndev = vlan_dev_real_dev(ndev);
- 
-@@ -1249,7 +1247,8 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
- free_csk:
- 	chtls_sock_release(&csk->kref);
- free_dst:
--	neigh_release(n);
-+	if (n)
-+		neigh_release(n);
- 	dst_release(dst);
- free_sk:
- 	inet_csk_prepare_forced_close(newsk);
-diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
-index d2bbe6a735142..92c50efd48fc3 100644
---- a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
-+++ b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
-@@ -358,6 +358,7 @@ const struct iwl_cfg_trans_params iwl_ma_trans_cfg = {
- const char iwl_ax101_name[] = "Intel(R) Wi-Fi 6 AX101";
- const char iwl_ax200_name[] = "Intel(R) Wi-Fi 6 AX200 160MHz";
- const char iwl_ax201_name[] = "Intel(R) Wi-Fi 6 AX201 160MHz";
-+const char iwl_ax203_name[] = "Intel(R) Wi-Fi 6 AX203";
- const char iwl_ax211_name[] = "Intel(R) Wi-Fi 6 AX211 160MHz";
- const char iwl_ax411_name[] = "Intel(R) Wi-Fi 6 AX411 160MHz";
- const char iwl_ma_name[] = "Intel(R) Wi-Fi 6";
-@@ -384,6 +385,18 @@ const struct iwl_cfg iwl_qu_b0_hr1_b0 = {
- 	.num_rbds = IWL_NUM_RBDS_22000_HE,
- };
- 
-+const struct iwl_cfg iwl_qu_b0_hr_b0 = {
-+	.fw_name_pre = IWL_QU_B_HR_B_FW_PRE,
-+	IWL_DEVICE_22500,
-+	/*
-+	 * This device doesn't support receiving BlockAck with a large bitmap
-+	 * so we need to restrict the size of transmitted aggregation to the
-+	 * HT size; mac80211 would otherwise pick the HE max (256) by default.
-+	 */
-+	.max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
-+	.num_rbds = IWL_NUM_RBDS_22000_HE,
-+};
-+
- const struct iwl_cfg iwl_ax201_cfg_qu_hr = {
- 	.name = "Intel(R) Wi-Fi 6 AX201 160MHz",
- 	.fw_name_pre = IWL_QU_B_HR_B_FW_PRE,
-@@ -410,6 +423,18 @@ const struct iwl_cfg iwl_qu_c0_hr1_b0 = {
- 	.num_rbds = IWL_NUM_RBDS_22000_HE,
- };
- 
-+const struct iwl_cfg iwl_qu_c0_hr_b0 = {
-+	.fw_name_pre = IWL_QU_C_HR_B_FW_PRE,
-+	IWL_DEVICE_22500,
-+	/*
-+	 * This device doesn't support receiving BlockAck with a large bitmap
-+	 * so we need to restrict the size of transmitted aggregation to the
-+	 * HT size; mac80211 would otherwise pick the HE max (256) by default.
-+	 */
-+	.max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
-+	.num_rbds = IWL_NUM_RBDS_22000_HE,
-+};
-+
- const struct iwl_cfg iwl_ax201_cfg_qu_c0_hr_b0 = {
- 	.name = "Intel(R) Wi-Fi 6 AX201 160MHz",
- 	.fw_name_pre = IWL_QU_C_HR_B_FW_PRE,
-diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
-index e82e3fc963be2..9b91aa9b2e7f1 100644
---- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
-+++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
-@@ -544,6 +544,7 @@ extern const char iwl9260_killer_1550_name[];
- extern const char iwl9560_killer_1550i_name[];
- extern const char iwl9560_killer_1550s_name[];
- extern const char iwl_ax200_name[];
-+extern const char iwl_ax203_name[];
- extern const char iwl_ax201_name[];
- extern const char iwl_ax101_name[];
- extern const char iwl_ax200_killer_1650w_name[];
-@@ -627,6 +628,8 @@ extern const struct iwl_cfg iwl9560_2ac_cfg_soc;
- extern const struct iwl_cfg iwl_qu_b0_hr1_b0;
- extern const struct iwl_cfg iwl_qu_c0_hr1_b0;
- extern const struct iwl_cfg iwl_quz_a0_hr1_b0;
-+extern const struct iwl_cfg iwl_qu_b0_hr_b0;
-+extern const struct iwl_cfg iwl_qu_c0_hr_b0;
- extern const struct iwl_cfg iwl_ax200_cfg_cc;
- extern const struct iwl_cfg iwl_ax201_cfg_qu_hr;
- extern const struct iwl_cfg iwl_ax201_cfg_qu_hr;
-diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
-index f043eefabb4ec..7b1d2dac6ceb8 100644
---- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
-+++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
-@@ -514,7 +514,10 @@ static ssize_t iwl_dbgfs_os_device_timediff_read(struct file *file,
- 	const size_t bufsz = sizeof(buf);
- 	int pos = 0;
- 
-+	mutex_lock(&mvm->mutex);
- 	iwl_mvm_get_sync_time(mvm, &curr_gp2, &curr_os);
-+	mutex_unlock(&mvm->mutex);
-+
- 	do_div(curr_os, NSEC_PER_USEC);
- 	diff = curr_os - curr_gp2;
- 	pos += scnprintf(buf + pos, bufsz - pos, "diff=%lld\n", diff);
-diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
-index b627e7da7ac9d..d42165559df6e 100644
---- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
-+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
-@@ -4249,6 +4249,9 @@ static void __iwl_mvm_unassign_vif_chanctx(struct iwl_mvm *mvm,
- 	iwl_mvm_binding_remove_vif(mvm, vif);
- 
- out:
-+	if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_CHANNEL_SWITCH_CMD) &&
-+	    switching_chanctx)
-+		return;
- 	mvmvif->phy_ctxt = NULL;
- 	iwl_mvm_power_update_mac(mvm);
- }
-diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
-index 0d1118f66f0d5..cb83490f1016f 100644
---- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
-+++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
-@@ -845,6 +845,10 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
- 	if (!mvm->scan_cmd)
- 		goto out_free;
- 
-+	/* invalidate ids to prevent accidental removal of sta_id 0 */
-+	mvm->aux_sta.sta_id = IWL_MVM_INVALID_STA;
-+	mvm->snif_sta.sta_id = IWL_MVM_INVALID_STA;
-+
- 	/* Set EBS as successful as long as not stated otherwise by the FW. */
- 	mvm->last_ebs_successful = true;
- 
-@@ -1245,6 +1249,7 @@ static void iwl_mvm_reprobe_wk(struct work_struct *wk)
- 	reprobe = container_of(wk, struct iwl_mvm_reprobe, work);
- 	if (device_reprobe(reprobe->dev))
- 		dev_err(reprobe->dev, "reprobe failed!\n");
-+	put_device(reprobe->dev);
- 	kfree(reprobe);
- 	module_put(THIS_MODULE);
- }
-@@ -1295,7 +1300,7 @@ void iwl_mvm_nic_restart(struct iwl_mvm *mvm, bool fw_error)
- 			module_put(THIS_MODULE);
- 			return;
- 		}
--		reprobe->dev = mvm->trans->dev;
-+		reprobe->dev = get_device(mvm->trans->dev);
- 		INIT_WORK(&reprobe->work, iwl_mvm_reprobe_wk);
- 		schedule_work(&reprobe->work);
- 	} else if (test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
-diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
-index 799d8219463cb..a66a5c19474a9 100644
---- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
-+++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
-@@ -2103,6 +2103,9 @@ int iwl_mvm_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
- 
- 	lockdep_assert_held(&mvm->mutex);
- 
-+	if (WARN_ON_ONCE(mvm->snif_sta.sta_id == IWL_MVM_INVALID_STA))
-+		return -EINVAL;
-+
- 	iwl_mvm_disable_txq(mvm, NULL, mvm->snif_queue, IWL_MAX_TID_COUNT, 0);
- 	ret = iwl_mvm_rm_sta_common(mvm, mvm->snif_sta.sta_id);
- 	if (ret)
-@@ -2117,6 +2120,9 @@ int iwl_mvm_rm_aux_sta(struct iwl_mvm *mvm)
- 
- 	lockdep_assert_held(&mvm->mutex);
- 
-+	if (WARN_ON_ONCE(mvm->aux_sta.sta_id == IWL_MVM_INVALID_STA))
-+		return -EINVAL;
-+
- 	iwl_mvm_disable_txq(mvm, NULL, mvm->aux_queue, IWL_MAX_TID_COUNT, 0);
- 	ret = iwl_mvm_rm_sta_common(mvm, mvm->aux_sta.sta_id);
- 	if (ret)
-diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
-index d719e433a59bf..2d43899fbdd7a 100644
---- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
-+++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
-@@ -245,8 +245,10 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
- 	/* Allocate IML */
- 	iml_img = dma_alloc_coherent(trans->dev, trans->iml_len,
- 				     &trans_pcie->iml_dma_addr, GFP_KERNEL);
--	if (!iml_img)
--		return -ENOMEM;
-+	if (!iml_img) {
-+		ret = -ENOMEM;
-+		goto err_free_ctxt_info;
-+	}
- 
- 	memcpy(iml_img, trans->iml, trans->iml_len);
- 
-@@ -284,6 +286,11 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
- 
- 	return 0;
- 
-+err_free_ctxt_info:
-+	dma_free_coherent(trans->dev, sizeof(*trans_pcie->ctxt_info_gen3),
-+			  trans_pcie->ctxt_info_gen3,
-+			  trans_pcie->ctxt_info_dma_addr);
-+	trans_pcie->ctxt_info_gen3 = NULL;
- err_free_prph_info:
- 	dma_free_coherent(trans->dev,
- 			  sizeof(*prph_info),
-diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
-index 7b5ece380fbfb..2823a1e81656d 100644
---- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
-+++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
-@@ -966,6 +966,11 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
- 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY,
- 		      IWL_CFG_ANY, IWL_CFG_ANY,
- 		      iwl_qu_b0_hr1_b0, iwl_ax101_name),
-+	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
-+		      IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
-+		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
-+		      IWL_CFG_ANY, IWL_CFG_ANY,
-+		      iwl_qu_b0_hr_b0, iwl_ax203_name),
- 
- 	/* Qu C step */
- 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
-@@ -973,6 +978,11 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
- 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY,
- 		      IWL_CFG_ANY, IWL_CFG_ANY,
- 		      iwl_qu_c0_hr1_b0, iwl_ax101_name),
-+	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
-+		      IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
-+		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
-+		      IWL_CFG_ANY, IWL_CFG_ANY,
-+		      iwl_qu_c0_hr_b0, iwl_ax203_name),
- 
- 	/* QuZ */
- 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
-diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
-index 966be5689d63a..ed54d04e43964 100644
---- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
-+++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
-@@ -299,6 +299,11 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
- 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
- 	struct iwl_txq *txq = trans->txqs.txq[txq_id];
- 
-+	if (!txq) {
-+		IWL_ERR(trans, "Trying to free a queue that wasn't allocated?\n");
-+		return;
-+	}
-+
- 	spin_lock_bh(&txq->lock);
- 	while (txq->write_ptr != txq->read_ptr) {
- 		IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n",
-diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.c b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
-index af0b27a68d84d..9181221a2434d 100644
---- a/drivers/net/wireless/intel/iwlwifi/queue/tx.c
-+++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
-@@ -887,10 +887,8 @@ void iwl_txq_gen2_unmap(struct iwl_trans *trans, int txq_id)
- 			int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr);
- 			struct sk_buff *skb = txq->entries[idx].skb;
- 
--			if (WARN_ON_ONCE(!skb))
--				continue;
--
--			iwl_txq_free_tso_page(trans, skb);
-+			if (!WARN_ON_ONCE(!skb))
-+				iwl_txq_free_tso_page(trans, skb);
- 		}
- 		iwl_txq_gen2_free_tfd(trans, txq);
- 		txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);
-diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
-index 42bbd99a36acf..35098dbd32a3c 100644
---- a/drivers/regulator/core.c
-+++ b/drivers/regulator/core.c
-@@ -1813,13 +1813,13 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
- {
- 	struct regulator_dev *r;
- 	struct device *dev = rdev->dev.parent;
--	int ret;
-+	int ret = 0;
- 
- 	/* No supply to resolve? */
- 	if (!rdev->supply_name)
- 		return 0;
- 
--	/* Supply already resolved? */
-+	/* Supply already resolved? (fast-path without locking contention) */
- 	if (rdev->supply)
- 		return 0;
- 
-@@ -1829,7 +1829,7 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
- 
- 		/* Did the lookup explicitly defer for us? */
- 		if (ret == -EPROBE_DEFER)
--			return ret;
-+			goto out;
- 
- 		if (have_full_constraints()) {
- 			r = dummy_regulator_rdev;
-@@ -1837,15 +1837,18 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
- 		} else {
- 			dev_err(dev, "Failed to resolve %s-supply for %s\n",
- 				rdev->supply_name, rdev->desc->name);
--			return -EPROBE_DEFER;
-+			ret = -EPROBE_DEFER;
-+			goto out;
- 		}
- 	}
- 
- 	if (r == rdev) {
- 		dev_err(dev, "Supply for %s (%s) resolved to itself\n",
- 			rdev->desc->name, rdev->supply_name);
--		if (!have_full_constraints())
--			return -EINVAL;
-+		if (!have_full_constraints()) {
-+			ret = -EINVAL;
-+			goto out;
-+		}
- 		r = dummy_regulator_rdev;
- 		get_device(&r->dev);
- 	}
-@@ -1859,7 +1862,8 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
- 	if (r->dev.parent && r->dev.parent != rdev->dev.parent) {
- 		if (!device_is_bound(r->dev.parent)) {
- 			put_device(&r->dev);
--			return -EPROBE_DEFER;
-+			ret = -EPROBE_DEFER;
-+			goto out;
- 		}
- 	}
- 
-@@ -1867,15 +1871,32 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
- 	ret = regulator_resolve_supply(r);
- 	if (ret < 0) {
- 		put_device(&r->dev);
--		return ret;
-+		goto out;
-+	}
-+
-+	/*
-+	 * Recheck rdev->supply with rdev->mutex lock held to avoid a race
-+	 * between rdev->supply null check and setting rdev->supply in
-+	 * set_supply() from concurrent tasks.
-+	 */
-+	regulator_lock(rdev);
-+
-+	/* Supply just resolved by a concurrent task? */
-+	if (rdev->supply) {
-+		regulator_unlock(rdev);
-+		put_device(&r->dev);
-+		goto out;
- 	}
- 
- 	ret = set_supply(rdev, r);
- 	if (ret < 0) {
-+		regulator_unlock(rdev);
- 		put_device(&r->dev);
--		return ret;
-+		goto out;
- 	}
- 
-+	regulator_unlock(rdev);
-+
- 	/*
- 	 * In set_machine_constraints() we may have turned this regulator on
- 	 * but we couldn't propagate to the supply if it hadn't been resolved
-@@ -1886,11 +1907,12 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
- 		if (ret < 0) {
- 			_regulator_put(rdev->supply);
- 			rdev->supply = NULL;
--			return ret;
-+			goto out;
- 		}
- 	}
- 
--	return 0;
-+out:
-+	return ret;
- }
- 
- /* Internal regulator request function */
-diff --git a/fs/io-wq.c b/fs/io-wq.c
-index b53c055bea6a3..f72d53848dcbc 100644
---- a/fs/io-wq.c
-+++ b/fs/io-wq.c
-@@ -1078,16 +1078,6 @@ enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
- 	return IO_WQ_CANCEL_NOTFOUND;
- }
- 
--static bool io_wq_io_cb_cancel_data(struct io_wq_work *work, void *data)
--{
--	return work == data;
--}
--
--enum io_wq_cancel io_wq_cancel_work(struct io_wq *wq, struct io_wq_work *cwork)
--{
--	return io_wq_cancel_cb(wq, io_wq_io_cb_cancel_data, (void *)cwork, false);
--}
--
- struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
- {
- 	int ret = -ENOMEM, node;
-diff --git a/fs/io-wq.h b/fs/io-wq.h
-index aaa363f358916..75113bcd5889f 100644
---- a/fs/io-wq.h
-+++ b/fs/io-wq.h
-@@ -130,7 +130,6 @@ static inline bool io_wq_is_hashed(struct io_wq_work *work)
- }
- 
- void io_wq_cancel_all(struct io_wq *wq);
--enum io_wq_cancel io_wq_cancel_work(struct io_wq *wq, struct io_wq_work *cwork);
- 
- typedef bool (work_cancel_fn)(struct io_wq_work *, void *);
- 
-diff --git a/fs/io_uring.c b/fs/io_uring.c
-index 3b6307f6bd93d..d0b7332ca7033 100644
---- a/fs/io_uring.c
-+++ b/fs/io_uring.c
-@@ -286,7 +286,6 @@ struct io_ring_ctx {
- 		struct list_head	timeout_list;
- 		struct list_head	cq_overflow_list;
- 
--		wait_queue_head_t	inflight_wait;
- 		struct io_uring_sqe	*sq_sqes;
- 	} ____cacheline_aligned_in_smp;
- 
-@@ -997,6 +996,43 @@ static inline void io_clean_op(struct io_kiocb *req)
- 		__io_clean_op(req);
- }
- 
-+static inline bool __io_match_files(struct io_kiocb *req,
-+				    struct files_struct *files)
-+{
-+	if (req->file && req->file->f_op == &io_uring_fops)
-+		return true;
-+
-+	return ((req->flags & REQ_F_WORK_INITIALIZED) &&
-+	        (req->work.flags & IO_WQ_WORK_FILES)) &&
-+		req->work.identity->files == files;
-+}
-+
-+static bool io_match_task(struct io_kiocb *head,
-+			  struct task_struct *task,
-+			  struct files_struct *files)
-+{
-+	struct io_kiocb *link;
-+
-+	if (task && head->task != task) {
-+		/* in terms of cancelation, always match if req task is dead */
-+		if (head->task->flags & PF_EXITING)
-+			return true;
-+		return false;
-+	}
-+	if (!files)
-+		return true;
-+	if (__io_match_files(head, files))
-+		return true;
-+	if (head->flags & REQ_F_LINK_HEAD) {
-+		list_for_each_entry(link, &head->link_list, link_list) {
-+			if (__io_match_files(link, files))
-+				return true;
-+		}
-+	}
-+	return false;
-+}
-+
-+
- static void io_sq_thread_drop_mm(void)
- {
- 	struct mm_struct *mm = current->mm;
-@@ -1183,7 +1219,6 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
- 	INIT_LIST_HEAD(&ctx->iopoll_list);
- 	INIT_LIST_HEAD(&ctx->defer_list);
- 	INIT_LIST_HEAD(&ctx->timeout_list);
--	init_waitqueue_head(&ctx->inflight_wait);
- 	spin_lock_init(&ctx->inflight_lock);
- 	INIT_LIST_HEAD(&ctx->inflight_list);
- 	INIT_DELAYED_WORK(&ctx->file_put_work, io_file_put_work);
-@@ -1368,11 +1403,14 @@ static bool io_grab_identity(struct io_kiocb *req)
- 			return false;
- 		atomic_inc(&id->files->count);
- 		get_nsproxy(id->nsproxy);
--		req->flags |= REQ_F_INFLIGHT;
- 
--		spin_lock_irq(&ctx->inflight_lock);
--		list_add(&req->inflight_entry, &ctx->inflight_list);
--		spin_unlock_irq(&ctx->inflight_lock);
-+		if (!(req->flags & REQ_F_INFLIGHT)) {
-+			req->flags |= REQ_F_INFLIGHT;
-+
-+			spin_lock_irq(&ctx->inflight_lock);
-+			list_add(&req->inflight_entry, &ctx->inflight_list);
-+			spin_unlock_irq(&ctx->inflight_lock);
-+		}
- 		req->work.flags |= IO_WQ_WORK_FILES;
- 	}
- 	if (!(req->work.flags & IO_WQ_WORK_MM) &&
-@@ -1466,30 +1504,18 @@ static void io_kill_timeout(struct io_kiocb *req)
- 	}
- }
- 
--static bool io_task_match(struct io_kiocb *req, struct task_struct *tsk)
--{
--	struct io_ring_ctx *ctx = req->ctx;
--
--	if (!tsk || req->task == tsk)
--		return true;
--	if (ctx->flags & IORING_SETUP_SQPOLL) {
--		if (ctx->sq_data && req->task == ctx->sq_data->thread)
--			return true;
--	}
--	return false;
--}
--
- /*
-  * Returns true if we found and killed one or more timeouts
-  */
--static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk)
-+static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
-+			     struct files_struct *files)
- {
- 	struct io_kiocb *req, *tmp;
- 	int canceled = 0;
- 
- 	spin_lock_irq(&ctx->completion_lock);
- 	list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
--		if (io_task_match(req, tsk)) {
-+		if (io_match_task(req, tsk, files)) {
- 			io_kill_timeout(req);
- 			canceled++;
- 		}
-@@ -1616,32 +1642,6 @@ static void io_cqring_mark_overflow(struct io_ring_ctx *ctx)
- 	}
- }
- 
--static inline bool __io_match_files(struct io_kiocb *req,
--				    struct files_struct *files)
--{
--	return ((req->flags & REQ_F_WORK_INITIALIZED) &&
--	        (req->work.flags & IO_WQ_WORK_FILES)) &&
--		req->work.identity->files == files;
--}
--
--static bool io_match_files(struct io_kiocb *req,
--			   struct files_struct *files)
--{
--	struct io_kiocb *link;
--
--	if (!files)
--		return true;
--	if (__io_match_files(req, files))
--		return true;
--	if (req->flags & REQ_F_LINK_HEAD) {
--		list_for_each_entry(link, &req->link_list, link_list) {
--			if (__io_match_files(link, files))
--				return true;
--		}
--	}
--	return false;
--}
--
- /* Returns true if there are no backlogged entries after the flush */
- static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
- 				       struct task_struct *tsk,
-@@ -1663,9 +1663,7 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
- 
- 	cqe = NULL;
- 	list_for_each_entry_safe(req, tmp, &ctx->cq_overflow_list, compl.list) {
--		if (tsk && req->task != tsk)
--			continue;
--		if (!io_match_files(req, files))
-+		if (!io_match_task(req, tsk, files))
- 			continue;
- 
- 		cqe = io_get_cqring(ctx);
-@@ -2086,6 +2084,9 @@ static void __io_req_task_submit(struct io_kiocb *req)
- 	else
- 		__io_req_task_cancel(req, -EFAULT);
- 	mutex_unlock(&ctx->uring_lock);
-+
-+	if (ctx->flags & IORING_SETUP_SQPOLL)
-+		io_sq_thread_drop_mm();
- }
- 
- static void io_req_task_submit(struct callback_head *cb)
-@@ -5314,7 +5315,8 @@ static bool io_poll_remove_one(struct io_kiocb *req)
- /*
-  * Returns true if we found and killed one or more poll requests
-  */
--static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk)
-+static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk,
-+			       struct files_struct *files)
- {
- 	struct hlist_node *tmp;
- 	struct io_kiocb *req;
-@@ -5326,7 +5328,7 @@ static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk)
- 
- 		list = &ctx->cancel_hash[i];
- 		hlist_for_each_entry_safe(req, tmp, list, hash_node) {
--			if (io_task_match(req, tsk))
-+			if (io_match_task(req, tsk, files))
- 				posted += io_poll_remove_one(req);
- 		}
- 	}
-@@ -5893,17 +5895,20 @@ static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
- static void io_req_drop_files(struct io_kiocb *req)
- {
- 	struct io_ring_ctx *ctx = req->ctx;
-+	struct io_uring_task *tctx = req->task->io_uring;
- 	unsigned long flags;
- 
--	put_files_struct(req->work.identity->files);
--	put_nsproxy(req->work.identity->nsproxy);
-+	if (req->work.flags & IO_WQ_WORK_FILES) {
-+		put_files_struct(req->work.identity->files);
-+		put_nsproxy(req->work.identity->nsproxy);
-+	}
- 	spin_lock_irqsave(&ctx->inflight_lock, flags);
- 	list_del(&req->inflight_entry);
- 	spin_unlock_irqrestore(&ctx->inflight_lock, flags);
- 	req->flags &= ~REQ_F_INFLIGHT;
- 	req->work.flags &= ~IO_WQ_WORK_FILES;
--	if (waitqueue_active(&ctx->inflight_wait))
--		wake_up(&ctx->inflight_wait);
-+	if (atomic_read(&tctx->in_idle))
-+		wake_up(&tctx->wait);
- }
- 
- static void __io_clean_op(struct io_kiocb *req)
-@@ -6168,6 +6173,16 @@ static struct file *io_file_get(struct io_submit_state *state,
- 		file = __io_file_get(state, fd);
- 	}
- 
-+	if (file && file->f_op == &io_uring_fops &&
-+	    !(req->flags & REQ_F_INFLIGHT)) {
-+		io_req_init_async(req);
-+		req->flags |= REQ_F_INFLIGHT;
-+
-+		spin_lock_irq(&ctx->inflight_lock);
-+		list_add(&req->inflight_entry, &ctx->inflight_list);
-+		spin_unlock_irq(&ctx->inflight_lock);
-+	}
-+
- 	return file;
- }
- 
-@@ -6989,14 +7004,18 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
- 						TASK_INTERRUPTIBLE);
- 		/* make sure we run task_work before checking for signals */
- 		ret = io_run_task_work_sig();
--		if (ret > 0)
-+		if (ret > 0) {
-+			finish_wait(&ctx->wait, &iowq.wq);
- 			continue;
-+		}
- 		else if (ret < 0)
- 			break;
- 		if (io_should_wake(&iowq))
- 			break;
--		if (test_bit(0, &ctx->cq_check_overflow))
-+		if (test_bit(0, &ctx->cq_check_overflow)) {
-+			finish_wait(&ctx->wait, &iowq.wq);
- 			continue;
-+		}
- 		schedule();
- 	} while (1);
- 	finish_wait(&ctx->wait, &iowq.wq);
-@@ -8487,8 +8506,8 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
- 		__io_cqring_overflow_flush(ctx, true, NULL, NULL);
- 	mutex_unlock(&ctx->uring_lock);
- 
--	io_kill_timeouts(ctx, NULL);
--	io_poll_remove_all(ctx, NULL);
-+	io_kill_timeouts(ctx, NULL, NULL);
-+	io_poll_remove_all(ctx, NULL, NULL);
- 
- 	if (ctx->io_wq)
- 		io_wq_cancel_cb(ctx->io_wq, io_cancel_ctx_cb, ctx, true);
-@@ -8524,112 +8543,31 @@ static int io_uring_release(struct inode *inode, struct file *file)
- 	return 0;
- }
- 
--/*
-- * Returns true if 'preq' is the link parent of 'req'
-- */
--static bool io_match_link(struct io_kiocb *preq, struct io_kiocb *req)
--{
--	struct io_kiocb *link;
--
--	if (!(preq->flags & REQ_F_LINK_HEAD))
--		return false;
--
--	list_for_each_entry(link, &preq->link_list, link_list) {
--		if (link == req)
--			return true;
--	}
--
--	return false;
--}
--
--/*
-- * We're looking to cancel 'req' because it's holding on to our files, but
-- * 'req' could be a link to another request. See if it is, and cancel that
-- * parent request if so.
-- */
--static bool io_poll_remove_link(struct io_ring_ctx *ctx, struct io_kiocb *req)
--{
--	struct hlist_node *tmp;
--	struct io_kiocb *preq;
--	bool found = false;
--	int i;
--
--	spin_lock_irq(&ctx->completion_lock);
--	for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
--		struct hlist_head *list;
--
--		list = &ctx->cancel_hash[i];
--		hlist_for_each_entry_safe(preq, tmp, list, hash_node) {
--			found = io_match_link(preq, req);
--			if (found) {
--				io_poll_remove_one(preq);
--				break;
--			}
--		}
--	}
--	spin_unlock_irq(&ctx->completion_lock);
--	return found;
--}
--
--static bool io_timeout_remove_link(struct io_ring_ctx *ctx,
--				   struct io_kiocb *req)
--{
--	struct io_kiocb *preq;
--	bool found = false;
--
--	spin_lock_irq(&ctx->completion_lock);
--	list_for_each_entry(preq, &ctx->timeout_list, timeout.list) {
--		found = io_match_link(preq, req);
--		if (found) {
--			__io_timeout_cancel(preq);
--			break;
--		}
--	}
--	spin_unlock_irq(&ctx->completion_lock);
--	return found;
--}
-+struct io_task_cancel {
-+	struct task_struct *task;
-+	struct files_struct *files;
-+};
- 
--static bool io_cancel_link_cb(struct io_wq_work *work, void *data)
-+static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
- {
- 	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
-+	struct io_task_cancel *cancel = data;
- 	bool ret;
- 
--	if (req->flags & REQ_F_LINK_TIMEOUT) {
-+	if (cancel->files && (req->flags & REQ_F_LINK_TIMEOUT)) {
- 		unsigned long flags;
- 		struct io_ring_ctx *ctx = req->ctx;
- 
- 		/* protect against races with linked timeouts */
- 		spin_lock_irqsave(&ctx->completion_lock, flags);
--		ret = io_match_link(req, data);
-+		ret = io_match_task(req, cancel->task, cancel->files);
- 		spin_unlock_irqrestore(&ctx->completion_lock, flags);
- 	} else {
--		ret = io_match_link(req, data);
-+		ret = io_match_task(req, cancel->task, cancel->files);
- 	}
- 	return ret;
- }
- 
--static void io_attempt_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)
--{
--	enum io_wq_cancel cret;
--
--	/* cancel this particular work, if it's running */
--	cret = io_wq_cancel_work(ctx->io_wq, &req->work);
--	if (cret != IO_WQ_CANCEL_NOTFOUND)
--		return;
--
--	/* find links that hold this pending, cancel those */
--	cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_link_cb, req, true);
--	if (cret != IO_WQ_CANCEL_NOTFOUND)
--		return;
--
--	/* if we have a poll link holding this pending, cancel that */
--	if (io_poll_remove_link(ctx, req))
--		return;
--
--	/* final option, timeout link is holding this req pending */
--	io_timeout_remove_link(ctx, req);
--}
--
- static void io_cancel_defer_files(struct io_ring_ctx *ctx,
- 				  struct task_struct *task,
- 				  struct files_struct *files)
-@@ -8639,8 +8577,7 @@ static void io_cancel_defer_files(struct io_ring_ctx *ctx,
- 
- 	spin_lock_irq(&ctx->completion_lock);
- 	list_for_each_entry_reverse(de, &ctx->defer_list, list) {
--		if (io_task_match(de->req, task) &&
--		    io_match_files(de->req, files)) {
-+		if (io_match_task(de->req, task, files)) {
- 			list_cut_position(&list, &ctx->defer_list, &de->list);
- 			break;
- 		}
-@@ -8657,73 +8594,56 @@ static void io_cancel_defer_files(struct io_ring_ctx *ctx,
- 	}
- }
- 
--/*
-- * Returns true if we found and killed one or more files pinning requests
-- */
--static bool io_uring_cancel_files(struct io_ring_ctx *ctx,
-+static int io_uring_count_inflight(struct io_ring_ctx *ctx,
-+				   struct task_struct *task,
-+				   struct files_struct *files)
-+{
-+	struct io_kiocb *req;
-+	int cnt = 0;
-+
-+	spin_lock_irq(&ctx->inflight_lock);
-+	list_for_each_entry(req, &ctx->inflight_list, inflight_entry)
-+		cnt += io_match_task(req, task, files);
-+	spin_unlock_irq(&ctx->inflight_lock);
-+	return cnt;
-+}
-+
-+static void io_uring_cancel_files(struct io_ring_ctx *ctx,
- 				  struct task_struct *task,
- 				  struct files_struct *files)
- {
--	if (list_empty_careful(&ctx->inflight_list))
--		return false;
--
- 	while (!list_empty_careful(&ctx->inflight_list)) {
--		struct io_kiocb *cancel_req = NULL, *req;
-+		struct io_task_cancel cancel = { .task = task, .files = files };
- 		DEFINE_WAIT(wait);
-+		int inflight;
- 
--		spin_lock_irq(&ctx->inflight_lock);
--		list_for_each_entry(req, &ctx->inflight_list, inflight_entry) {
--			if (req->task == task &&
--			    (req->work.flags & IO_WQ_WORK_FILES) &&
--			    req->work.identity->files != files)
--				continue;
--			/* req is being completed, ignore */
--			if (!refcount_inc_not_zero(&req->refs))
--				continue;
--			cancel_req = req;
--			break;
--		}
--		if (cancel_req)
--			prepare_to_wait(&ctx->inflight_wait, &wait,
--						TASK_UNINTERRUPTIBLE);
--		spin_unlock_irq(&ctx->inflight_lock);
--
--		/* We need to keep going until we don't find a matching req */
--		if (!cancel_req)
-+		inflight = io_uring_count_inflight(ctx, task, files);
-+		if (!inflight)
- 			break;
--		/* cancel this request, or head link requests */
--		io_attempt_cancel(ctx, cancel_req);
--		io_cqring_overflow_flush(ctx, true, task, files);
- 
--		io_put_req(cancel_req);
-+		io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, &cancel, true);
-+		io_poll_remove_all(ctx, task, files);
-+		io_kill_timeouts(ctx, task, files);
- 		/* cancellations _may_ trigger task work */
- 		io_run_task_work();
--		schedule();
--		finish_wait(&ctx->inflight_wait, &wait);
--	}
--
--	return true;
--}
- 
--static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
--{
--	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
--	struct task_struct *task = data;
--
--	return io_task_match(req, task);
-+		prepare_to_wait(&task->io_uring->wait, &wait,
-+				TASK_UNINTERRUPTIBLE);
-+		if (inflight == io_uring_count_inflight(ctx, task, files))
-+			schedule();
-+		finish_wait(&task->io_uring->wait, &wait);
-+	}
- }
- 
--static bool __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
--					    struct task_struct *task,
--					    struct files_struct *files)
-+static void __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
-+					    struct task_struct *task)
- {
--	bool ret;
--
--	ret = io_uring_cancel_files(ctx, task, files);
--	if (!files) {
-+	while (1) {
-+		struct io_task_cancel cancel = { .task = task, .files = NULL, };
- 		enum io_wq_cancel cret;
-+		bool ret = false;
- 
--		cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, task, true);
-+		cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, &cancel, true);
- 		if (cret != IO_WQ_CANCEL_NOTFOUND)
- 			ret = true;
- 
-@@ -8735,11 +8655,13 @@ static bool __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
- 			}
- 		}
- 
--		ret |= io_poll_remove_all(ctx, task);
--		ret |= io_kill_timeouts(ctx, task);
-+		ret |= io_poll_remove_all(ctx, task, NULL);
-+		ret |= io_kill_timeouts(ctx, task, NULL);
-+		if (!ret)
-+			break;
-+		io_run_task_work();
-+		cond_resched();
- 	}
--
--	return ret;
- }
- 
- static void io_disable_sqo_submit(struct io_ring_ctx *ctx)
-@@ -8764,8 +8686,6 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
- 	struct task_struct *task = current;
- 
- 	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
--		/* for SQPOLL only sqo_task has task notes */
--		WARN_ON_ONCE(ctx->sqo_task != current);
- 		io_disable_sqo_submit(ctx);
- 		task = ctx->sq_data->thread;
- 		atomic_inc(&task->io_uring->in_idle);
-@@ -8775,10 +8695,9 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
- 	io_cancel_defer_files(ctx, task, files);
- 	io_cqring_overflow_flush(ctx, true, task, files);
- 
--	while (__io_uring_cancel_task_requests(ctx, task, files)) {
--		io_run_task_work();
--		cond_resched();
--	}
-+	io_uring_cancel_files(ctx, task, files);
-+	if (!files)
-+		__io_uring_cancel_task_requests(ctx, task);
- 
- 	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
- 		atomic_dec(&task->io_uring->in_idle);
-@@ -8919,15 +8838,15 @@ void __io_uring_task_cancel(void)
- 		prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
- 
- 		/*
--		 * If we've seen completions, retry. This avoids a race where
--		 * a completion comes in before we did prepare_to_wait().
-+		 * If we've seen completions, retry without waiting. This
-+		 * avoids a race where a completion comes in before we did
-+		 * prepare_to_wait().
- 		 */
--		if (inflight != tctx_inflight(tctx))
--			continue;
--		schedule();
-+		if (inflight == tctx_inflight(tctx))
-+			schedule();
-+		finish_wait(&tctx->wait, &wait);
- 	} while (1);
- 
--	finish_wait(&tctx->wait, &wait);
- 	atomic_dec(&tctx->in_idle);
- 
- 	io_uring_remove_task_files(tctx);
-@@ -8938,6 +8857,9 @@ static int io_uring_flush(struct file *file, void *data)
- 	struct io_uring_task *tctx = current->io_uring;
- 	struct io_ring_ctx *ctx = file->private_data;
- 
-+	if (fatal_signal_pending(current) || (current->flags & PF_EXITING))
-+		io_uring_cancel_task_requests(ctx, NULL);
-+
- 	if (!tctx)
- 		return 0;
- 
-diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
-index cbadcf6ca4da2..b8712b835b105 100644
---- a/fs/nfs/pnfs.c
-+++ b/fs/nfs/pnfs.c
-@@ -1000,7 +1000,7 @@ pnfs_layout_stateid_blocked(const struct pnfs_layout_hdr *lo,
- {
- 	u32 seqid = be32_to_cpu(stateid->seqid);
- 
--	return !pnfs_seqid_is_newer(seqid, lo->plh_barrier);
-+	return !pnfs_seqid_is_newer(seqid, lo->plh_barrier) && lo->plh_barrier;
- }
- 
- /* lget is set to 1 if called from inside send_layoutget call chain */
-@@ -1913,6 +1913,11 @@ static void nfs_layoutget_end(struct pnfs_layout_hdr *lo)
- 		wake_up_var(&lo->plh_outstanding);
- }
- 
-+static bool pnfs_is_first_layoutget(struct pnfs_layout_hdr *lo)
-+{
-+	return test_bit(NFS_LAYOUT_FIRST_LAYOUTGET, &lo->plh_flags);
-+}
-+
- static void pnfs_clear_first_layoutget(struct pnfs_layout_hdr *lo)
- {
- 	unsigned long *bitlock = &lo->plh_flags;
-@@ -2387,23 +2392,34 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
- 		goto out_forget;
- 	}
- 
--	if (!pnfs_layout_is_valid(lo)) {
--		/* We have a completely new layout */
--		pnfs_set_layout_stateid(lo, &res->stateid, lgp->cred, true);
--	} else if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
-+	if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
- 		/* existing state ID, make sure the sequence number matches. */
- 		if (pnfs_layout_stateid_blocked(lo, &res->stateid)) {
-+			if (!pnfs_layout_is_valid(lo) &&
-+			    pnfs_is_first_layoutget(lo))
-+				lo->plh_barrier = 0;
- 			dprintk("%s forget reply due to sequence\n", __func__);
- 			goto out_forget;
- 		}
- 		pnfs_set_layout_stateid(lo, &res->stateid, lgp->cred, false);
--	} else {
-+	} else if (pnfs_layout_is_valid(lo)) {
- 		/*
- 		 * We got an entirely new state ID.  Mark all segments for the
- 		 * inode invalid, and retry the layoutget
- 		 */
--		pnfs_mark_layout_stateid_invalid(lo, &free_me);
-+		struct pnfs_layout_range range = {
-+			.iomode = IOMODE_ANY,
-+			.length = NFS4_MAX_UINT64,
-+		};
-+		pnfs_set_plh_return_info(lo, IOMODE_ANY, 0);
-+		pnfs_mark_matching_lsegs_return(lo, &lo->plh_return_segs,
-+						&range, 0);
- 		goto out_forget;
-+	} else {
-+		/* We have a completely new layout */
-+		if (!pnfs_is_first_layoutget(lo))
-+			goto out_forget;
-+		pnfs_set_layout_stateid(lo, &res->stateid, lgp->cred, true);
- 	}
- 
- 	pnfs_get_lseg(lseg);
-diff --git a/fs/nilfs2/file.c b/fs/nilfs2/file.c
-index 64bc81363c6cc..e1bd592ce7001 100644
---- a/fs/nilfs2/file.c
-+++ b/fs/nilfs2/file.c
-@@ -141,6 +141,7 @@ const struct file_operations nilfs_file_operations = {
- 	/* .release	= nilfs_release_file, */
- 	.fsync		= nilfs_sync_file,
- 	.splice_read	= generic_file_splice_read,
-+	.splice_write   = iter_file_splice_write,
- };
- 
- const struct inode_operations nilfs_file_inode_operations = {
-diff --git a/fs/squashfs/block.c b/fs/squashfs/block.c
-index 8a19773b5a0b7..45f44425d8560 100644
---- a/fs/squashfs/block.c
-+++ b/fs/squashfs/block.c
-@@ -196,9 +196,15 @@ int squashfs_read_data(struct super_block *sb, u64 index, int length,
- 		length = SQUASHFS_COMPRESSED_SIZE(length);
- 		index += 2;
- 
--		TRACE("Block @ 0x%llx, %scompressed size %d\n", index,
-+		TRACE("Block @ 0x%llx, %scompressed size %d\n", index - 2,
- 		      compressed ? "" : "un", length);
- 	}
-+	if (length < 0 || length > output->length ||
-+			(index + length) > msblk->bytes_used) {
-+		res = -EIO;
-+		goto out;
-+	}
-+
- 	if (next_index)
- 		*next_index = index + length;
- 
-diff --git a/fs/squashfs/export.c b/fs/squashfs/export.c
-index ae2c87bb0fbec..eb02072d28dd6 100644
---- a/fs/squashfs/export.c
-+++ b/fs/squashfs/export.c
-@@ -41,12 +41,17 @@ static long long squashfs_inode_lookup(struct super_block *sb, int ino_num)
- 	struct squashfs_sb_info *msblk = sb->s_fs_info;
- 	int blk = SQUASHFS_LOOKUP_BLOCK(ino_num - 1);
- 	int offset = SQUASHFS_LOOKUP_BLOCK_OFFSET(ino_num - 1);
--	u64 start = le64_to_cpu(msblk->inode_lookup_table[blk]);
-+	u64 start;
- 	__le64 ino;
- 	int err;
- 
- 	TRACE("Entered squashfs_inode_lookup, inode_number = %d\n", ino_num);
- 
-+	if (ino_num == 0 || (ino_num - 1) >= msblk->inodes)
-+		return -EINVAL;
-+
-+	start = le64_to_cpu(msblk->inode_lookup_table[blk]);
-+
- 	err = squashfs_read_metadata(sb, &ino, &start, &offset, sizeof(ino));
- 	if (err < 0)
- 		return err;
-@@ -111,7 +116,10 @@ __le64 *squashfs_read_inode_lookup_table(struct super_block *sb,
- 		u64 lookup_table_start, u64 next_table, unsigned int inodes)
- {
- 	unsigned int length = SQUASHFS_LOOKUP_BLOCK_BYTES(inodes);
-+	unsigned int indexes = SQUASHFS_LOOKUP_BLOCKS(inodes);
-+	int n;
- 	__le64 *table;
-+	u64 start, end;
- 
- 	TRACE("In read_inode_lookup_table, length %d\n", length);
- 
-@@ -121,20 +129,37 @@ __le64 *squashfs_read_inode_lookup_table(struct super_block *sb,
- 	if (inodes == 0)
- 		return ERR_PTR(-EINVAL);
- 
--	/* length bytes should not extend into the next table - this check
--	 * also traps instances where lookup_table_start is incorrectly larger
--	 * than the next table start
-+	/*
-+	 * The computed size of the lookup table (length bytes) should exactly
-+	 * match the table start and end points
- 	 */
--	if (lookup_table_start + length > next_table)
-+	if (length != (next_table - lookup_table_start))
- 		return ERR_PTR(-EINVAL);
- 
- 	table = squashfs_read_table(sb, lookup_table_start, length);
-+	if (IS_ERR(table))
-+		return table;
- 
- 	/*
--	 * table[0] points to the first inode lookup table metadata block,
--	 * this should be less than lookup_table_start
-+	 * table0], table[1], ... table[indexes - 1] store the locations
-+	 * of the compressed inode lookup blocks.  Each entry should be
-+	 * less than the next (i.e. table[0] < table[1]), and the difference
-+	 * between them should be SQUASHFS_METADATA_SIZE or less.
-+	 * table[indexes - 1] should  be less than lookup_table_start, and
-+	 * again the difference should be SQUASHFS_METADATA_SIZE or less
- 	 */
--	if (!IS_ERR(table) && le64_to_cpu(table[0]) >= lookup_table_start) {
-+	for (n = 0; n < (indexes - 1); n++) {
-+		start = le64_to_cpu(table[n]);
-+		end = le64_to_cpu(table[n + 1]);
-+
-+		if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
-+			kfree(table);
-+			return ERR_PTR(-EINVAL);
-+		}
-+	}
-+
-+	start = le64_to_cpu(table[indexes - 1]);
-+	if (start >= lookup_table_start || (lookup_table_start - start) > SQUASHFS_METADATA_SIZE) {
- 		kfree(table);
- 		return ERR_PTR(-EINVAL);
- 	}
-diff --git a/fs/squashfs/id.c b/fs/squashfs/id.c
-index 6be5afe7287d6..11581bf31af41 100644
---- a/fs/squashfs/id.c
-+++ b/fs/squashfs/id.c
-@@ -35,10 +35,15 @@ int squashfs_get_id(struct super_block *sb, unsigned int index,
- 	struct squashfs_sb_info *msblk = sb->s_fs_info;
- 	int block = SQUASHFS_ID_BLOCK(index);
- 	int offset = SQUASHFS_ID_BLOCK_OFFSET(index);
--	u64 start_block = le64_to_cpu(msblk->id_table[block]);
-+	u64 start_block;
- 	__le32 disk_id;
- 	int err;
- 
-+	if (index >= msblk->ids)
-+		return -EINVAL;
-+
-+	start_block = le64_to_cpu(msblk->id_table[block]);
-+
- 	err = squashfs_read_metadata(sb, &disk_id, &start_block, &offset,
- 							sizeof(disk_id));
- 	if (err < 0)
-@@ -56,7 +61,10 @@ __le64 *squashfs_read_id_index_table(struct super_block *sb,
- 		u64 id_table_start, u64 next_table, unsigned short no_ids)
- {
- 	unsigned int length = SQUASHFS_ID_BLOCK_BYTES(no_ids);
-+	unsigned int indexes = SQUASHFS_ID_BLOCKS(no_ids);
-+	int n;
- 	__le64 *table;
-+	u64 start, end;
- 
- 	TRACE("In read_id_index_table, length %d\n", length);
- 
-@@ -67,20 +75,36 @@ __le64 *squashfs_read_id_index_table(struct super_block *sb,
- 		return ERR_PTR(-EINVAL);
- 
- 	/*
--	 * length bytes should not extend into the next table - this check
--	 * also traps instances where id_table_start is incorrectly larger
--	 * than the next table start
-+	 * The computed size of the index table (length bytes) should exactly
-+	 * match the table start and end points
- 	 */
--	if (id_table_start + length > next_table)
-+	if (length != (next_table - id_table_start))
- 		return ERR_PTR(-EINVAL);
- 
- 	table = squashfs_read_table(sb, id_table_start, length);
-+	if (IS_ERR(table))
-+		return table;
- 
- 	/*
--	 * table[0] points to the first id lookup table metadata block, this
--	 * should be less than id_table_start
-+	 * table[0], table[1], ... table[indexes - 1] store the locations
-+	 * of the compressed id blocks.   Each entry should be less than
-+	 * the next (i.e. table[0] < table[1]), and the difference between them
-+	 * should be SQUASHFS_METADATA_SIZE or less.  table[indexes - 1]
-+	 * should be less than id_table_start, and again the difference
-+	 * should be SQUASHFS_METADATA_SIZE or less
- 	 */
--	if (!IS_ERR(table) && le64_to_cpu(table[0]) >= id_table_start) {
-+	for (n = 0; n < (indexes - 1); n++) {
-+		start = le64_to_cpu(table[n]);
-+		end = le64_to_cpu(table[n + 1]);
-+
-+		if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
-+			kfree(table);
-+			return ERR_PTR(-EINVAL);
-+		}
-+	}
-+
-+	start = le64_to_cpu(table[indexes - 1]);
-+	if (start >= id_table_start || (id_table_start - start) > SQUASHFS_METADATA_SIZE) {
- 		kfree(table);
- 		return ERR_PTR(-EINVAL);
- 	}
-diff --git a/fs/squashfs/squashfs_fs_sb.h b/fs/squashfs/squashfs_fs_sb.h
-index 34c21ffb6df37..166e98806265b 100644
---- a/fs/squashfs/squashfs_fs_sb.h
-+++ b/fs/squashfs/squashfs_fs_sb.h
-@@ -64,5 +64,6 @@ struct squashfs_sb_info {
- 	unsigned int				inodes;
- 	unsigned int				fragments;
- 	int					xattr_ids;
-+	unsigned int				ids;
- };
- #endif
-diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c
-index d6c6593ec169e..88cc94be10765 100644
---- a/fs/squashfs/super.c
-+++ b/fs/squashfs/super.c
-@@ -166,6 +166,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
- 	msblk->directory_table = le64_to_cpu(sblk->directory_table_start);
- 	msblk->inodes = le32_to_cpu(sblk->inodes);
- 	msblk->fragments = le32_to_cpu(sblk->fragments);
-+	msblk->ids = le16_to_cpu(sblk->no_ids);
- 	flags = le16_to_cpu(sblk->flags);
- 
- 	TRACE("Found valid superblock on %pg\n", sb->s_bdev);
-@@ -177,7 +178,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
- 	TRACE("Block size %d\n", msblk->block_size);
- 	TRACE("Number of inodes %d\n", msblk->inodes);
- 	TRACE("Number of fragments %d\n", msblk->fragments);
--	TRACE("Number of ids %d\n", le16_to_cpu(sblk->no_ids));
-+	TRACE("Number of ids %d\n", msblk->ids);
- 	TRACE("sblk->inode_table_start %llx\n", msblk->inode_table);
- 	TRACE("sblk->directory_table_start %llx\n", msblk->directory_table);
- 	TRACE("sblk->fragment_table_start %llx\n",
-@@ -236,8 +237,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
- allocate_id_index_table:
- 	/* Allocate and read id index table */
- 	msblk->id_table = squashfs_read_id_index_table(sb,
--		le64_to_cpu(sblk->id_table_start), next_table,
--		le16_to_cpu(sblk->no_ids));
-+		le64_to_cpu(sblk->id_table_start), next_table, msblk->ids);
- 	if (IS_ERR(msblk->id_table)) {
- 		errorf(fc, "unable to read id index table");
- 		err = PTR_ERR(msblk->id_table);
-diff --git a/fs/squashfs/xattr.h b/fs/squashfs/xattr.h
-index 184129afd4566..d8a270d3ac4cb 100644
---- a/fs/squashfs/xattr.h
-+++ b/fs/squashfs/xattr.h
-@@ -17,8 +17,16 @@ extern int squashfs_xattr_lookup(struct super_block *, unsigned int, int *,
- static inline __le64 *squashfs_read_xattr_id_table(struct super_block *sb,
- 		u64 start, u64 *xattr_table_start, int *xattr_ids)
- {
-+	struct squashfs_xattr_id_table *id_table;
-+
-+	id_table = squashfs_read_table(sb, start, sizeof(*id_table));
-+	if (IS_ERR(id_table))
-+		return (__le64 *) id_table;
-+
-+	*xattr_table_start = le64_to_cpu(id_table->xattr_table_start);
-+	kfree(id_table);
-+
- 	ERROR("Xattrs in filesystem, these will be ignored\n");
--	*xattr_table_start = start;
- 	return ERR_PTR(-ENOTSUPP);
- }
- 
-diff --git a/fs/squashfs/xattr_id.c b/fs/squashfs/xattr_id.c
-index d99e08464554f..ead66670b41a5 100644
---- a/fs/squashfs/xattr_id.c
-+++ b/fs/squashfs/xattr_id.c
-@@ -31,10 +31,15 @@ int squashfs_xattr_lookup(struct super_block *sb, unsigned int index,
- 	struct squashfs_sb_info *msblk = sb->s_fs_info;
- 	int block = SQUASHFS_XATTR_BLOCK(index);
- 	int offset = SQUASHFS_XATTR_BLOCK_OFFSET(index);
--	u64 start_block = le64_to_cpu(msblk->xattr_id_table[block]);
-+	u64 start_block;
- 	struct squashfs_xattr_id id;
- 	int err;
- 
-+	if (index >= msblk->xattr_ids)
-+		return -EINVAL;
-+
-+	start_block = le64_to_cpu(msblk->xattr_id_table[block]);
-+
- 	err = squashfs_read_metadata(sb, &id, &start_block, &offset,
- 							sizeof(id));
- 	if (err < 0)
-@@ -50,13 +55,17 @@ int squashfs_xattr_lookup(struct super_block *sb, unsigned int index,
- /*
-  * Read uncompressed xattr id lookup table indexes from disk into memory
-  */
--__le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 start,
-+__le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start,
- 		u64 *xattr_table_start, int *xattr_ids)
- {
--	unsigned int len;
-+	struct squashfs_sb_info *msblk = sb->s_fs_info;
-+	unsigned int len, indexes;
- 	struct squashfs_xattr_id_table *id_table;
-+	__le64 *table;
-+	u64 start, end;
-+	int n;
- 
--	id_table = squashfs_read_table(sb, start, sizeof(*id_table));
-+	id_table = squashfs_read_table(sb, table_start, sizeof(*id_table));
- 	if (IS_ERR(id_table))
- 		return (__le64 *) id_table;
- 
-@@ -70,13 +79,52 @@ __le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 start,
- 	if (*xattr_ids == 0)
- 		return ERR_PTR(-EINVAL);
- 
--	/* xattr_table should be less than start */
--	if (*xattr_table_start >= start)
-+	len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
-+	indexes = SQUASHFS_XATTR_BLOCKS(*xattr_ids);
-+
-+	/*
-+	 * The computed size of the index table (len bytes) should exactly
-+	 * match the table start and end points
-+	 */
-+	start = table_start + sizeof(*id_table);
-+	end = msblk->bytes_used;
-+
-+	if (len != (end - start))
- 		return ERR_PTR(-EINVAL);
- 
--	len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
-+	table = squashfs_read_table(sb, start, len);
-+	if (IS_ERR(table))
-+		return table;
-+
-+	/* table[0], table[1], ... table[indexes - 1] store the locations
-+	 * of the compressed xattr id blocks.  Each entry should be less than
-+	 * the next (i.e. table[0] < table[1]), and the difference between them
-+	 * should be SQUASHFS_METADATA_SIZE or less.  table[indexes - 1]
-+	 * should be less than table_start, and again the difference
-+	 * shouls be SQUASHFS_METADATA_SIZE or less.
-+	 *
-+	 * Finally xattr_table_start should be less than table[0].
-+	 */
-+	for (n = 0; n < (indexes - 1); n++) {
-+		start = le64_to_cpu(table[n]);
-+		end = le64_to_cpu(table[n + 1]);
-+
-+		if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
-+			kfree(table);
-+			return ERR_PTR(-EINVAL);
-+		}
-+	}
-+
-+	start = le64_to_cpu(table[indexes - 1]);
-+	if (start >= table_start || (table_start - start) > SQUASHFS_METADATA_SIZE) {
-+		kfree(table);
-+		return ERR_PTR(-EINVAL);
-+	}
- 
--	TRACE("In read_xattr_index_table, length %d\n", len);
-+	if (*xattr_table_start >= le64_to_cpu(table[0])) {
-+		kfree(table);
-+		return ERR_PTR(-EINVAL);
-+	}
- 
--	return squashfs_read_table(sb, start + sizeof(*id_table), len);
-+	return table;
- }
-diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
-index 9548d075e06da..b998e4b736912 100644
---- a/include/linux/sunrpc/xdr.h
-+++ b/include/linux/sunrpc/xdr.h
-@@ -25,8 +25,7 @@ struct rpc_rqst;
- #define XDR_QUADLEN(l)		(((l) + 3) >> 2)
- 
- /*
-- * Generic opaque `network object.' At the kernel level, this type
-- * is used only by lockd.
-+ * Generic opaque `network object.'
-  */
- #define XDR_MAX_NETOBJ		1024
- struct xdr_netobj {
-diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
-index 618cb1b451ade..8c017f8c0c6d6 100644
---- a/kernel/bpf/verifier.c
-+++ b/kernel/bpf/verifier.c
-@@ -6822,7 +6822,7 @@ static int is_branch32_taken(struct bpf_reg_state *reg, u32 val, u8 opcode)
- 	case BPF_JSGT:
- 		if (reg->s32_min_value > sval)
- 			return 1;
--		else if (reg->s32_max_value < sval)
-+		else if (reg->s32_max_value <= sval)
- 			return 0;
- 		break;
- 	case BPF_JLT:
-@@ -6895,7 +6895,7 @@ static int is_branch64_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
- 	case BPF_JSGT:
- 		if (reg->smin_value > sval)
- 			return 1;
--		else if (reg->smax_value < sval)
-+		else if (reg->smax_value <= sval)
- 			return 0;
- 		break;
- 	case BPF_JLT:
-@@ -8465,7 +8465,11 @@ static bool range_within(struct bpf_reg_state *old,
- 	return old->umin_value <= cur->umin_value &&
- 	       old->umax_value >= cur->umax_value &&
- 	       old->smin_value <= cur->smin_value &&
--	       old->smax_value >= cur->smax_value;
-+	       old->smax_value >= cur->smax_value &&
-+	       old->u32_min_value <= cur->u32_min_value &&
-+	       old->u32_max_value >= cur->u32_max_value &&
-+	       old->s32_min_value <= cur->s32_min_value &&
-+	       old->s32_max_value >= cur->s32_max_value;
- }
- 
- /* Maximum number of register states that can exist at once */
-@@ -10862,30 +10866,28 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
- 		    insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
- 		    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
- 			bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
--			struct bpf_insn mask_and_div[] = {
--				BPF_MOV32_REG(insn->src_reg, insn->src_reg),
-+			bool isdiv = BPF_OP(insn->code) == BPF_DIV;
-+			struct bpf_insn *patchlet;
-+			struct bpf_insn chk_and_div[] = {
- 				/* Rx div 0 -> 0 */
--				BPF_JMP_IMM(BPF_JNE, insn->src_reg, 0, 2),
-+				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
-+					     BPF_JNE | BPF_K, insn->src_reg,
-+					     0, 2, 0),
- 				BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
- 				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
- 				*insn,
- 			};
--			struct bpf_insn mask_and_mod[] = {
--				BPF_MOV32_REG(insn->src_reg, insn->src_reg),
-+			struct bpf_insn chk_and_mod[] = {
- 				/* Rx mod 0 -> Rx */
--				BPF_JMP_IMM(BPF_JEQ, insn->src_reg, 0, 1),
-+				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
-+					     BPF_JEQ | BPF_K, insn->src_reg,
-+					     0, 1, 0),
- 				*insn,
- 			};
--			struct bpf_insn *patchlet;
- 
--			if (insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
--			    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
--				patchlet = mask_and_div + (is64 ? 1 : 0);
--				cnt = ARRAY_SIZE(mask_and_div) - (is64 ? 1 : 0);
--			} else {
--				patchlet = mask_and_mod + (is64 ? 1 : 0);
--				cnt = ARRAY_SIZE(mask_and_mod) - (is64 ? 1 : 0);
--			}
-+			patchlet = isdiv ? chk_and_div : chk_and_mod;
-+			cnt = isdiv ? ARRAY_SIZE(chk_and_div) :
-+				      ARRAY_SIZE(chk_and_mod);
- 
- 			new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
- 			if (!new_prog)
-diff --git a/mm/memcontrol.c b/mm/memcontrol.c
-index 8fc23d53f5500..a604e69ecfa57 100644
---- a/mm/memcontrol.c
-+++ b/mm/memcontrol.c
-@@ -6320,6 +6320,8 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
- 	if (err)
- 		return err;
- 
-+	page_counter_set_high(&memcg->memory, high);
-+
- 	for (;;) {
- 		unsigned long nr_pages = page_counter_read(&memcg->memory);
- 		unsigned long reclaimed;
-@@ -6343,10 +6345,7 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
- 			break;
- 	}
- 
--	page_counter_set_high(&memcg->memory, high);
--
- 	memcg_wb_domain_size_changed(memcg);
--
- 	return nbytes;
- }
- 
-diff --git a/net/key/af_key.c b/net/key/af_key.c
-index c12dbc51ef5fe..ef9b4ac03e7b7 100644
---- a/net/key/af_key.c
-+++ b/net/key/af_key.c
-@@ -2902,7 +2902,7 @@ static int count_ah_combs(const struct xfrm_tmpl *t)
- 			break;
- 		if (!aalg->pfkey_supported)
- 			continue;
--		if (aalg_tmpl_set(t, aalg) && aalg->available)
-+		if (aalg_tmpl_set(t, aalg))
- 			sz += sizeof(struct sadb_comb);
- 	}
- 	return sz + sizeof(struct sadb_prop);
-@@ -2920,7 +2920,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
- 		if (!ealg->pfkey_supported)
- 			continue;
- 
--		if (!(ealg_tmpl_set(t, ealg) && ealg->available))
-+		if (!(ealg_tmpl_set(t, ealg)))
- 			continue;
- 
- 		for (k = 1; ; k++) {
-@@ -2931,7 +2931,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
- 			if (!aalg->pfkey_supported)
- 				continue;
- 
--			if (aalg_tmpl_set(t, aalg) && aalg->available)
-+			if (aalg_tmpl_set(t, aalg))
- 				sz += sizeof(struct sadb_comb);
- 		}
- 	}
-diff --git a/net/mac80211/spectmgmt.c b/net/mac80211/spectmgmt.c
-index ae1cb2c687224..76747bfdaddd0 100644
---- a/net/mac80211/spectmgmt.c
-+++ b/net/mac80211/spectmgmt.c
-@@ -133,16 +133,20 @@ int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata,
- 	}
- 
- 	if (wide_bw_chansw_ie) {
-+		u8 new_seg1 = wide_bw_chansw_ie->new_center_freq_seg1;
- 		struct ieee80211_vht_operation vht_oper = {
- 			.chan_width =
- 				wide_bw_chansw_ie->new_channel_width,
- 			.center_freq_seg0_idx =
- 				wide_bw_chansw_ie->new_center_freq_seg0,
--			.center_freq_seg1_idx =
--				wide_bw_chansw_ie->new_center_freq_seg1,
-+			.center_freq_seg1_idx = new_seg1,
- 			/* .basic_mcs_set doesn't matter */
- 		};
--		struct ieee80211_ht_operation ht_oper = {};
-+		struct ieee80211_ht_operation ht_oper = {
-+			.operation_mode =
-+				cpu_to_le16(new_seg1 <<
-+					    IEEE80211_HT_OP_MODE_CCFS2_SHIFT),
-+		};
- 
- 		/* default, for the case of IEEE80211_VHT_CHANWIDTH_USE_HT,
- 		 * to the previously parsed chandef
-diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
-index 4ecc2a9595674..5f42aa5fc6128 100644
---- a/net/sunrpc/auth_gss/auth_gss.c
-+++ b/net/sunrpc/auth_gss/auth_gss.c
-@@ -29,6 +29,7 @@
- #include <linux/uaccess.h>
- #include <linux/hashtable.h>
- 
-+#include "auth_gss_internal.h"
- #include "../netns.h"
- 
- #include <trace/events/rpcgss.h>
-@@ -125,35 +126,6 @@ gss_cred_set_ctx(struct rpc_cred *cred, struct gss_cl_ctx *ctx)
- 	clear_bit(RPCAUTH_CRED_NEW, &cred->cr_flags);
- }
- 
--static const void *
--simple_get_bytes(const void *p, const void *end, void *res, size_t len)
--{
--	const void *q = (const void *)((const char *)p + len);
--	if (unlikely(q > end || q < p))
--		return ERR_PTR(-EFAULT);
--	memcpy(res, p, len);
--	return q;
--}
--
--static inline const void *
--simple_get_netobj(const void *p, const void *end, struct xdr_netobj *dest)
--{
--	const void *q;
--	unsigned int len;
--
--	p = simple_get_bytes(p, end, &len, sizeof(len));
--	if (IS_ERR(p))
--		return p;
--	q = (const void *)((const char *)p + len);
--	if (unlikely(q > end || q < p))
--		return ERR_PTR(-EFAULT);
--	dest->data = kmemdup(p, len, GFP_NOFS);
--	if (unlikely(dest->data == NULL))
--		return ERR_PTR(-ENOMEM);
--	dest->len = len;
--	return q;
--}
--
- static struct gss_cl_ctx *
- gss_cred_get_ctx(struct rpc_cred *cred)
- {
-diff --git a/net/sunrpc/auth_gss/auth_gss_internal.h b/net/sunrpc/auth_gss/auth_gss_internal.h
-new file mode 100644
-index 0000000000000..f6d9631bd9d00
---- /dev/null
-+++ b/net/sunrpc/auth_gss/auth_gss_internal.h
-@@ -0,0 +1,45 @@
-+// SPDX-License-Identifier: BSD-3-Clause
-+/*
-+ * linux/net/sunrpc/auth_gss/auth_gss_internal.h
-+ *
-+ * Internal definitions for RPCSEC_GSS client authentication
-+ *
-+ * Copyright (c) 2000 The Regents of the University of Michigan.
-+ * All rights reserved.
-+ *
-+ */
-+#include <linux/err.h>
-+#include <linux/string.h>
-+#include <linux/sunrpc/xdr.h>
-+
-+static inline const void *
-+simple_get_bytes(const void *p, const void *end, void *res, size_t len)
-+{
-+	const void *q = (const void *)((const char *)p + len);
-+	if (unlikely(q > end || q < p))
-+		return ERR_PTR(-EFAULT);
-+	memcpy(res, p, len);
-+	return q;
-+}
-+
-+static inline const void *
-+simple_get_netobj(const void *p, const void *end, struct xdr_netobj *dest)
-+{
-+	const void *q;
-+	unsigned int len;
-+
-+	p = simple_get_bytes(p, end, &len, sizeof(len));
-+	if (IS_ERR(p))
-+		return p;
-+	q = (const void *)((const char *)p + len);
-+	if (unlikely(q > end || q < p))
-+		return ERR_PTR(-EFAULT);
-+	if (len) {
-+		dest->data = kmemdup(p, len, GFP_NOFS);
-+		if (unlikely(dest->data == NULL))
-+			return ERR_PTR(-ENOMEM);
-+	} else
-+		dest->data = NULL;
-+	dest->len = len;
-+	return q;
-+}
-diff --git a/net/sunrpc/auth_gss/gss_krb5_mech.c b/net/sunrpc/auth_gss/gss_krb5_mech.c
-index ae9acf3a73898..1c092b05c2bba 100644
---- a/net/sunrpc/auth_gss/gss_krb5_mech.c
-+++ b/net/sunrpc/auth_gss/gss_krb5_mech.c
-@@ -21,6 +21,8 @@
- #include <linux/sunrpc/xdr.h>
- #include <linux/sunrpc/gss_krb5_enctypes.h>
- 
-+#include "auth_gss_internal.h"
-+
- #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
- # define RPCDBG_FACILITY	RPCDBG_AUTH
- #endif
-@@ -143,35 +145,6 @@ get_gss_krb5_enctype(int etype)
- 	return NULL;
- }
- 
--static const void *
--simple_get_bytes(const void *p, const void *end, void *res, int len)
--{
--	const void *q = (const void *)((const char *)p + len);
--	if (unlikely(q > end || q < p))
--		return ERR_PTR(-EFAULT);
--	memcpy(res, p, len);
--	return q;
--}
--
--static const void *
--simple_get_netobj(const void *p, const void *end, struct xdr_netobj *res)
--{
--	const void *q;
--	unsigned int len;
--
--	p = simple_get_bytes(p, end, &len, sizeof(len));
--	if (IS_ERR(p))
--		return p;
--	q = (const void *)((const char *)p + len);
--	if (unlikely(q > end || q < p))
--		return ERR_PTR(-EFAULT);
--	res->data = kmemdup(p, len, GFP_NOFS);
--	if (unlikely(res->data == NULL))
--		return ERR_PTR(-ENOMEM);
--	res->len = len;
--	return q;
--}
--
- static inline const void *
- get_key(const void *p, const void *end,
- 	struct krb5_ctx *ctx, struct crypto_sync_skcipher **res)
-diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
-index 1c5114dedda92..fe49e9a97f0ec 100644
---- a/sound/hda/intel-dsp-config.c
-+++ b/sound/hda/intel-dsp-config.c
-@@ -306,6 +306,10 @@ static const struct config_entry config_table[] = {
- 		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
- 		.device = 0xa0c8,
- 	},
-+	{
-+		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
-+		.device = 0x43c8,
-+	},
- #endif
- 
- /* Elkhart Lake */
-diff --git a/sound/soc/codecs/ak4458.c b/sound/soc/codecs/ak4458.c
-index 1010c9ee2e836..472caad17012e 100644
---- a/sound/soc/codecs/ak4458.c
-+++ b/sound/soc/codecs/ak4458.c
-@@ -595,18 +595,10 @@ static struct snd_soc_dai_driver ak4497_dai = {
- 	.ops = &ak4458_dai_ops,
- };
- 
--static void ak4458_power_off(struct ak4458_priv *ak4458)
-+static void ak4458_reset(struct ak4458_priv *ak4458, bool active)
- {
- 	if (ak4458->reset_gpiod) {
--		gpiod_set_value_cansleep(ak4458->reset_gpiod, 0);
--		usleep_range(1000, 2000);
--	}
--}
--
--static void ak4458_power_on(struct ak4458_priv *ak4458)
--{
--	if (ak4458->reset_gpiod) {
--		gpiod_set_value_cansleep(ak4458->reset_gpiod, 1);
-+		gpiod_set_value_cansleep(ak4458->reset_gpiod, active);
- 		usleep_range(1000, 2000);
- 	}
- }
-@@ -620,7 +612,7 @@ static int ak4458_init(struct snd_soc_component *component)
- 	if (ak4458->mute_gpiod)
- 		gpiod_set_value_cansleep(ak4458->mute_gpiod, 1);
- 
--	ak4458_power_on(ak4458);
-+	ak4458_reset(ak4458, false);
- 
- 	ret = snd_soc_component_update_bits(component, AK4458_00_CONTROL1,
- 			    0x80, 0x80);   /* ACKS bit = 1; 10000000 */
-@@ -650,7 +642,7 @@ static void ak4458_remove(struct snd_soc_component *component)
- {
- 	struct ak4458_priv *ak4458 = snd_soc_component_get_drvdata(component);
- 
--	ak4458_power_off(ak4458);
-+	ak4458_reset(ak4458, true);
- }
- 
- #ifdef CONFIG_PM
-@@ -660,7 +652,7 @@ static int __maybe_unused ak4458_runtime_suspend(struct device *dev)
- 
- 	regcache_cache_only(ak4458->regmap, true);
- 
--	ak4458_power_off(ak4458);
-+	ak4458_reset(ak4458, true);
- 
- 	if (ak4458->mute_gpiod)
- 		gpiod_set_value_cansleep(ak4458->mute_gpiod, 0);
-@@ -685,8 +677,8 @@ static int __maybe_unused ak4458_runtime_resume(struct device *dev)
- 	if (ak4458->mute_gpiod)
- 		gpiod_set_value_cansleep(ak4458->mute_gpiod, 1);
- 
--	ak4458_power_off(ak4458);
--	ak4458_power_on(ak4458);
-+	ak4458_reset(ak4458, true);
-+	ak4458_reset(ak4458, false);
- 
- 	regcache_cache_only(ak4458->regmap, false);
- 	regcache_mark_dirty(ak4458->regmap);
-diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
-index dec8716aa8ef5..985b2dcecf138 100644
---- a/sound/soc/codecs/wm_adsp.c
-+++ b/sound/soc/codecs/wm_adsp.c
-@@ -2031,11 +2031,14 @@ static struct wm_coeff_ctl *wm_adsp_get_ctl(struct wm_adsp *dsp,
- 					     unsigned int alg)
- {
- 	struct wm_coeff_ctl *pos, *rslt = NULL;
-+	const char *fw_txt = wm_adsp_fw_text[dsp->fw];
- 
- 	list_for_each_entry(pos, &dsp->ctl_list, list) {
- 		if (!pos->subname)
- 			continue;
- 		if (strncmp(pos->subname, name, pos->subname_len) == 0 &&
-+		    strncmp(pos->fw_name, fw_txt,
-+			    SNDRV_CTL_ELEM_ID_NAME_MAXLEN) == 0 &&
- 				pos->alg_region.alg == alg &&
- 				pos->alg_region.type == type) {
- 			rslt = pos;
-diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
-index b29946eb43551..a8d43c87cb5a2 100644
---- a/sound/soc/intel/boards/sof_sdw.c
-+++ b/sound/soc/intel/boards/sof_sdw.c
-@@ -57,6 +57,16 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
- 		.driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
- 					SOF_RT715_DAI_ID_FIX),
- 	},
-+	{
-+		.callback = sof_sdw_quirk_cb,
-+		.matches = {
-+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
-+			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A5E")
-+		},
-+		.driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
-+					SOF_RT715_DAI_ID_FIX |
-+					SOF_SDW_FOUR_SPK),
-+	},
- 	{
- 		.callback = sof_sdw_quirk_cb,
- 		.matches = {
-diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
-index d699e61eca3d0..0955cbb4e9187 100644
---- a/sound/soc/intel/skylake/skl-topology.c
-+++ b/sound/soc/intel/skylake/skl-topology.c
-@@ -3632,7 +3632,7 @@ static void skl_tplg_complete(struct snd_soc_component *component)
- 		sprintf(chan_text, "c%d", mach->mach_params.dmic_num);
- 
- 		for (i = 0; i < se->items; i++) {
--			struct snd_ctl_elem_value val;
-+			struct snd_ctl_elem_value val = {};
- 
- 			if (strstr(texts[i], chan_text)) {
- 				val.value.enumerated.item[0] = i;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-13 15:51 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-02-13 15:51 UTC (permalink / raw
  To: gentoo-commits

commit:     445e65a30fda0ff858a9109eacae19586bddaa71
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 13 15:51:18 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb 13 15:51:18 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=445e65a3

Add back Linux patch 5.10.16

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1015_linux-5.10.16.patch | 2316 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2320 insertions(+)

diff --git a/0000_README b/0000_README
index 9214e4d..bbc23e0 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1014_linux-5.10.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.15
 
+Patch:  1015_linux-5.10.16.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-5.10.16.patch b/1015_linux-5.10.16.patch
new file mode 100644
index 0000000..01d52a4
--- /dev/null
+++ b/1015_linux-5.10.16.patch
@@ -0,0 +1,2316 @@
+diff --git a/Makefile b/Makefile
+index b62d2d4ea7b02..9a1f26680d836 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
+index 8dad44262e751..495ffc9cf5e22 100644
+--- a/arch/powerpc/kernel/vdso.c
++++ b/arch/powerpc/kernel/vdso.c
+@@ -475,7 +475,7 @@ static __init void vdso_setup_trampolines(struct lib32_elfinfo *v32,
+ 	 */
+ 
+ #ifdef CONFIG_PPC64
+-	vdso64_rt_sigtramp = find_function64(v64, "__kernel_sigtramp_rt64");
++	vdso64_rt_sigtramp = find_function64(v64, "__kernel_start_sigtramp_rt64");
+ #endif
+ 	vdso32_sigtramp	   = find_function32(v32, "__kernel_sigtramp32");
+ 	vdso32_rt_sigtramp = find_function32(v32, "__kernel_sigtramp_rt32");
+diff --git a/arch/powerpc/kernel/vdso64/sigtramp.S b/arch/powerpc/kernel/vdso64/sigtramp.S
+index bbf68cd01088b..2d4067561293e 100644
+--- a/arch/powerpc/kernel/vdso64/sigtramp.S
++++ b/arch/powerpc/kernel/vdso64/sigtramp.S
+@@ -15,11 +15,20 @@
+ 
+ 	.text
+ 
++/*
++ * __kernel_start_sigtramp_rt64 and __kernel_sigtramp_rt64 together
++ * are one function split in two parts. The kernel jumps to the former
++ * and the signal handler indirectly (by blr) returns to the latter.
++ * __kernel_sigtramp_rt64 needs to point to the return address so
++ * glibc can correctly identify the trampoline stack frame.
++ */
+ 	.balign 8
+ 	.balign IFETCH_ALIGN_BYTES
+-V_FUNCTION_BEGIN(__kernel_sigtramp_rt64)
++V_FUNCTION_BEGIN(__kernel_start_sigtramp_rt64)
+ .Lsigrt_start:
+ 	bctrl	/* call the handler */
++V_FUNCTION_END(__kernel_start_sigtramp_rt64)
++V_FUNCTION_BEGIN(__kernel_sigtramp_rt64)
+ 	addi	r1, r1, __SIGNAL_FRAMESIZE
+ 	li	r0,__NR_rt_sigreturn
+ 	sc
+diff --git a/arch/powerpc/kernel/vdso64/vdso64.lds.S b/arch/powerpc/kernel/vdso64/vdso64.lds.S
+index 256fb97202987..bd120f590b9ed 100644
+--- a/arch/powerpc/kernel/vdso64/vdso64.lds.S
++++ b/arch/powerpc/kernel/vdso64/vdso64.lds.S
+@@ -150,6 +150,7 @@ VERSION
+ 		__kernel_get_tbfreq;
+ 		__kernel_sync_dicache;
+ 		__kernel_sync_dicache_p5;
++		__kernel_start_sigtramp_rt64;
+ 		__kernel_sigtramp_rt64;
+ 		__kernel_getcpu;
+ 		__kernel_time;
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 54fbe1e80cc41..f13688c4b9317 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1017,6 +1017,8 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
+  */
+ void blkcg_destroy_blkgs(struct blkcg *blkcg)
+ {
++	might_sleep();
++
+ 	spin_lock_irq(&blkcg->lock);
+ 
+ 	while (!hlist_empty(&blkcg->blkg_list)) {
+@@ -1024,14 +1026,20 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
+ 						struct blkcg_gq, blkcg_node);
+ 		struct request_queue *q = blkg->q;
+ 
+-		if (spin_trylock(&q->queue_lock)) {
+-			blkg_destroy(blkg);
+-			spin_unlock(&q->queue_lock);
+-		} else {
++		if (need_resched() || !spin_trylock(&q->queue_lock)) {
++			/*
++			 * Given that the system can accumulate a huge number
++			 * of blkgs in pathological cases, check to see if we
++			 * need to rescheduling to avoid softlockup.
++			 */
+ 			spin_unlock_irq(&blkcg->lock);
+-			cpu_relax();
++			cond_resched();
+ 			spin_lock_irq(&blkcg->lock);
++			continue;
+ 		}
++
++		blkg_destroy(blkg);
++		spin_unlock(&q->queue_lock);
+ 	}
+ 
+ 	spin_unlock_irq(&blkcg->lock);
+diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
+index 689c06cbbb457..ade3ecf2ee495 100644
+--- a/drivers/gpio/gpiolib-cdev.c
++++ b/drivers/gpio/gpiolib-cdev.c
+@@ -756,6 +756,8 @@ static void edge_detector_stop(struct line *line)
+ 	cancel_delayed_work_sync(&line->work);
+ 	WRITE_ONCE(line->sw_debounced, 0);
+ 	line->eflags = 0;
++	if (line->desc)
++		WRITE_ONCE(line->desc->debounce_period_us, 0);
+ 	/* do not change line->level - see comment in debounced_value() */
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 40dfb4d0ffbec..db62e6a934d91 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -2597,6 +2597,9 @@ static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
+ 	u32 n_entries, val;
+ 	int ln, rate = 0;
+ 
++	if (enc_to_dig_port(encoder)->tc_mode == TC_PORT_TBT_ALT)
++		return;
++
+ 	if (type != INTEL_OUTPUT_HDMI) {
+ 		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+ 
+@@ -2605,12 +2608,11 @@ static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
+ 
+ 	ddi_translations = icl_get_mg_buf_trans(encoder, type, rate,
+ 						&n_entries);
+-	/* The table does not have values for level 3 and level 9. */
+-	if (level >= n_entries || level == 3 || level == 9) {
++	if (level >= n_entries) {
+ 		drm_dbg_kms(&dev_priv->drm,
+ 			    "DDI translation not found for level %d. Using %d instead.",
+-			    level, n_entries - 2);
+-		level = n_entries - 2;
++			    level, n_entries - 1);
++		level = n_entries - 1;
+ 	}
+ 
+ 	/* Set MG_TX_LINK_PARAMS cri_use_fs32 to 0. */
+@@ -2742,6 +2744,9 @@ tgl_dkl_phy_ddi_vswing_sequence(struct intel_encoder *encoder, int link_clock,
+ 	u32 n_entries, val, ln, dpcnt_mask, dpcnt_val;
+ 	int rate = 0;
+ 
++	if (enc_to_dig_port(encoder)->tc_mode == TC_PORT_TBT_ALT)
++		return;
++
+ 	if (type != INTEL_OUTPUT_HDMI) {
+ 		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+ 
+diff --git a/drivers/gpu/drm/nouveau/include/nvif/push.h b/drivers/gpu/drm/nouveau/include/nvif/push.h
+index 168d7694ede5c..6d3a8a3d2087b 100644
+--- a/drivers/gpu/drm/nouveau/include/nvif/push.h
++++ b/drivers/gpu/drm/nouveau/include/nvif/push.h
+@@ -123,131 +123,131 @@ PUSH_KICK(struct nvif_push *push)
+ } while(0)
+ #endif
+ 
+-#define PUSH_1(X,f,ds,n,c,o,p,s,mA,dA) do {                            \
+-	PUSH_##o##_HDR((p), s, mA, (c)+(n));                           \
+-	PUSH_##f(X, (p), X##mA, 1, o, (dA), ds, "");                   \
++#define PUSH_1(X,f,ds,n,o,p,s,mA,dA) do {                             \
++	PUSH_##o##_HDR((p), s, mA, (ds)+(n));                         \
++	PUSH_##f(X, (p), X##mA, 1, o, (dA), ds, "");                  \
+ } while(0)
+-#define PUSH_2(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (1?PUSH_##o##_INC), "mthd1");       \
+-	PUSH_1(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_2(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (1?PUSH_##o##_INC), "mthd1");      \
++	PUSH_1(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_3(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd2");       \
+-	PUSH_2(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_3(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd2");      \
++	PUSH_2(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_4(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd3");       \
+-	PUSH_3(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_4(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd3");      \
++	PUSH_3(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_5(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd4");       \
+-	PUSH_4(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_5(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd4");      \
++	PUSH_4(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_6(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd5");       \
+-	PUSH_5(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_6(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd5");      \
++	PUSH_5(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_7(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd6");       \
+-	PUSH_6(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_7(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd6");      \
++	PUSH_6(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_8(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd7");       \
+-	PUSH_7(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_8(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd7");      \
++	PUSH_7(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_9(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                 \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd8");       \
+-	PUSH_8(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_9(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                  \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd8");      \
++	PUSH_8(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+-#define PUSH_10(X,f,ds,n,c,o,p,s,mB,dB,mA,dA,a...) do {                \
+-	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd9");       \
+-	PUSH_9(X, DATA_, 1, ds, (c)+(n), o, (p), s, X##mA, (dA), ##a); \
+-	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                   \
++#define PUSH_10(X,f,ds,n,o,p,s,mB,dB,mA,dA,a...) do {                 \
++	PUSH_ASSERT((mB) - (mA) == (0?PUSH_##o##_INC), "mthd9");      \
++	PUSH_9(X, DATA_, 1, (ds) + (n), o, (p), s, X##mA, (dA), ##a); \
++	PUSH_##f(X, (p), X##mB, 0, o, (dB), ds, "");                  \
+ } while(0)
+ 
+-#define PUSH_1D(X,o,p,s,mA,dA)                            \
+-	PUSH_1(X, DATA_, 1, 1, 0, o, (p), s, X##mA, (dA))
+-#define PUSH_2D(X,o,p,s,mA,dA,mB,dB)                      \
+-	PUSH_2(X, DATA_, 1, 1, 0, o, (p), s, X##mB, (dB), \
+-					     X##mA, (dA))
+-#define PUSH_3D(X,o,p,s,mA,dA,mB,dB,mC,dC)                \
+-	PUSH_3(X, DATA_, 1, 1, 0, o, (p), s, X##mC, (dC), \
+-					     X##mB, (dB), \
+-					     X##mA, (dA))
+-#define PUSH_4D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD)          \
+-	PUSH_4(X, DATA_, 1, 1, 0, o, (p), s, X##mD, (dD), \
+-					     X##mC, (dC), \
+-					     X##mB, (dB), \
+-					     X##mA, (dA))
+-#define PUSH_5D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE)    \
+-	PUSH_5(X, DATA_, 1, 1, 0, o, (p), s, X##mE, (dE), \
+-					     X##mD, (dD), \
+-					     X##mC, (dC), \
+-					     X##mB, (dB), \
+-					     X##mA, (dA))
++#define PUSH_1D(X,o,p,s,mA,dA)                         \
++	PUSH_1(X, DATA_, 1, 0, o, (p), s, X##mA, (dA))
++#define PUSH_2D(X,o,p,s,mA,dA,mB,dB)                   \
++	PUSH_2(X, DATA_, 1, 0, o, (p), s, X##mB, (dB), \
++					  X##mA, (dA))
++#define PUSH_3D(X,o,p,s,mA,dA,mB,dB,mC,dC)             \
++	PUSH_3(X, DATA_, 1, 0, o, (p), s, X##mC, (dC), \
++					  X##mB, (dB), \
++					  X##mA, (dA))
++#define PUSH_4D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD)       \
++	PUSH_4(X, DATA_, 1, 0, o, (p), s, X##mD, (dD), \
++					  X##mC, (dC), \
++					  X##mB, (dB), \
++					  X##mA, (dA))
++#define PUSH_5D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE) \
++	PUSH_5(X, DATA_, 1, 0, o, (p), s, X##mE, (dE), \
++					  X##mD, (dD), \
++					  X##mC, (dC), \
++					  X##mB, (dB), \
++					  X##mA, (dA))
+ #define PUSH_6D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF) \
+-	PUSH_6(X, DATA_, 1, 1, 0, o, (p), s, X##mF, (dF),    \
+-					     X##mE, (dE),    \
+-					     X##mD, (dD),    \
+-					     X##mC, (dC),    \
+-					     X##mB, (dB),    \
+-					     X##mA, (dA))
++	PUSH_6(X, DATA_, 1, 0, o, (p), s, X##mF, (dF),       \
++					  X##mE, (dE),       \
++					  X##mD, (dD),       \
++					  X##mC, (dC),       \
++					  X##mB, (dB),       \
++					  X##mA, (dA))
+ #define PUSH_7D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF,mG,dG) \
+-	PUSH_7(X, DATA_, 1, 1, 0, o, (p), s, X##mG, (dG),          \
+-					     X##mF, (dF),          \
+-					     X##mE, (dE),          \
+-					     X##mD, (dD),          \
+-					     X##mC, (dC),          \
+-					     X##mB, (dB),          \
+-					     X##mA, (dA))
++	PUSH_7(X, DATA_, 1, 0, o, (p), s, X##mG, (dG),             \
++					  X##mF, (dF),             \
++					  X##mE, (dE),             \
++					  X##mD, (dD),             \
++					  X##mC, (dC),             \
++					  X##mB, (dB),             \
++					  X##mA, (dA))
+ #define PUSH_8D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF,mG,dG,mH,dH) \
+-	PUSH_8(X, DATA_, 1, 1, 0, o, (p), s, X##mH, (dH),                \
+-					     X##mG, (dG),                \
+-					     X##mF, (dF),                \
+-					     X##mE, (dE),                \
+-					     X##mD, (dD),                \
+-					     X##mC, (dC),                \
+-					     X##mB, (dB),                \
+-					     X##mA, (dA))
++	PUSH_8(X, DATA_, 1, 0, o, (p), s, X##mH, (dH),                   \
++					  X##mG, (dG),                   \
++					  X##mF, (dF),                   \
++					  X##mE, (dE),                   \
++					  X##mD, (dD),                   \
++					  X##mC, (dC),                   \
++					  X##mB, (dB),                   \
++					  X##mA, (dA))
+ #define PUSH_9D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF,mG,dG,mH,dH,mI,dI) \
+-	PUSH_9(X, DATA_, 1, 1, 0, o, (p), s, X##mI, (dI),                      \
+-					     X##mH, (dH),                      \
+-					     X##mG, (dG),                      \
+-					     X##mF, (dF),                      \
+-					     X##mE, (dE),                      \
+-					     X##mD, (dD),                      \
+-					     X##mC, (dC),                      \
+-					     X##mB, (dB),                      \
+-					     X##mA, (dA))
++	PUSH_9(X, DATA_, 1, 0, o, (p), s, X##mI, (dI),                         \
++					  X##mH, (dH),                         \
++					  X##mG, (dG),                         \
++					  X##mF, (dF),                         \
++					  X##mE, (dE),                         \
++					  X##mD, (dD),                         \
++					  X##mC, (dC),                         \
++					  X##mB, (dB),                         \
++					  X##mA, (dA))
+ #define PUSH_10D(X,o,p,s,mA,dA,mB,dB,mC,dC,mD,dD,mE,dE,mF,dF,mG,dG,mH,dH,mI,dI,mJ,dJ) \
+-	PUSH_10(X, DATA_, 1, 1, 0, o, (p), s, X##mJ, (dJ),                            \
+-					      X##mI, (dI),                            \
+-					      X##mH, (dH),                            \
+-					      X##mG, (dG),                            \
+-					      X##mF, (dF),                            \
+-					      X##mE, (dE),                            \
+-					      X##mD, (dD),                            \
+-					      X##mC, (dC),                            \
+-					      X##mB, (dB),                            \
+-					      X##mA, (dA))
++	PUSH_10(X, DATA_, 1, 0, o, (p), s, X##mJ, (dJ),                               \
++					   X##mI, (dI),                               \
++					   X##mH, (dH),                               \
++					   X##mG, (dG),                               \
++					   X##mF, (dF),                               \
++					   X##mE, (dE),                               \
++					   X##mD, (dD),                               \
++					   X##mC, (dC),                               \
++					   X##mB, (dB),                               \
++					   X##mA, (dA))
+ 
+-#define PUSH_1P(X,o,p,s,mA,dp,ds)                           \
+-	PUSH_1(X, DATAp, ds, ds, 0, o, (p), s, X##mA, (dp))
+-#define PUSH_2P(X,o,p,s,mA,dA,mB,dp,ds)                     \
+-	PUSH_2(X, DATAp, ds, ds, 0, o, (p), s, X##mB, (dp), \
+-					       X##mA, (dA))
+-#define PUSH_3P(X,o,p,s,mA,dA,mB,dB,mC,dp,ds)               \
+-	PUSH_3(X, DATAp, ds, ds, 0, o, (p), s, X##mC, (dp), \
+-					       X##mB, (dB), \
+-					       X##mA, (dA))
++#define PUSH_1P(X,o,p,s,mA,dp,ds)                       \
++	PUSH_1(X, DATAp, ds, 0, o, (p), s, X##mA, (dp))
++#define PUSH_2P(X,o,p,s,mA,dA,mB,dp,ds)                 \
++	PUSH_2(X, DATAp, ds, 0, o, (p), s, X##mB, (dp), \
++					   X##mA, (dA))
++#define PUSH_3P(X,o,p,s,mA,dA,mB,dB,mC,dp,ds)           \
++	PUSH_3(X, DATAp, ds, 0, o, (p), s, X##mC, (dp), \
++					   X##mB, (dB), \
++					   X##mA, (dA))
+ 
+ #define PUSH_(A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,IMPL,...) IMPL
+ #define PUSH(A...) PUSH_(A, PUSH_10P, PUSH_10D,          \
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 0818d3e507347..2ffd2f354d0ae 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -1275,7 +1275,8 @@ static int mtk_i2c_probe(struct platform_device *pdev)
+ 	mtk_i2c_clock_disable(i2c);
+ 
+ 	ret = devm_request_irq(&pdev->dev, irq, mtk_i2c_irq,
+-			       IRQF_TRIGGER_NONE, I2C_DRV_NAME, i2c);
++			       IRQF_NO_SUSPEND | IRQF_TRIGGER_NONE,
++			       I2C_DRV_NAME, i2c);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev,
+ 			"Request I2C IRQ %d fail\n", irq);
+@@ -1302,7 +1303,16 @@ static int mtk_i2c_remove(struct platform_device *pdev)
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+-static int mtk_i2c_resume(struct device *dev)
++static int mtk_i2c_suspend_noirq(struct device *dev)
++{
++	struct mtk_i2c *i2c = dev_get_drvdata(dev);
++
++	i2c_mark_adapter_suspended(&i2c->adap);
++
++	return 0;
++}
++
++static int mtk_i2c_resume_noirq(struct device *dev)
+ {
+ 	int ret;
+ 	struct mtk_i2c *i2c = dev_get_drvdata(dev);
+@@ -1317,12 +1327,15 @@ static int mtk_i2c_resume(struct device *dev)
+ 
+ 	mtk_i2c_clock_disable(i2c);
+ 
++	i2c_mark_adapter_resumed(&i2c->adap);
++
+ 	return 0;
+ }
+ #endif
+ 
+ static const struct dev_pm_ops mtk_i2c_pm = {
+-	SET_SYSTEM_SLEEP_PM_OPS(NULL, mtk_i2c_resume)
++	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mtk_i2c_suspend_noirq,
++				      mtk_i2c_resume_noirq)
+ };
+ 
+ static struct platform_driver mtk_i2c_driver = {
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+index 5beec901713fb..a262c949ed76b 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+@@ -1158,11 +1158,9 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
+ #endif
+ 	}
+ 	if (!n || !n->dev)
+-		goto free_sk;
++		goto free_dst;
+ 
+ 	ndev = n->dev;
+-	if (!ndev)
+-		goto free_dst;
+ 	if (is_vlan_dev(ndev))
+ 		ndev = vlan_dev_real_dev(ndev);
+ 
+@@ -1249,7 +1247,8 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
+ free_csk:
+ 	chtls_sock_release(&csk->kref);
+ free_dst:
+-	neigh_release(n);
++	if (n)
++		neigh_release(n);
+ 	dst_release(dst);
+ free_sk:
+ 	inet_csk_prepare_forced_close(newsk);
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
+index d2bbe6a735142..92c50efd48fc3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
+@@ -358,6 +358,7 @@ const struct iwl_cfg_trans_params iwl_ma_trans_cfg = {
+ const char iwl_ax101_name[] = "Intel(R) Wi-Fi 6 AX101";
+ const char iwl_ax200_name[] = "Intel(R) Wi-Fi 6 AX200 160MHz";
+ const char iwl_ax201_name[] = "Intel(R) Wi-Fi 6 AX201 160MHz";
++const char iwl_ax203_name[] = "Intel(R) Wi-Fi 6 AX203";
+ const char iwl_ax211_name[] = "Intel(R) Wi-Fi 6 AX211 160MHz";
+ const char iwl_ax411_name[] = "Intel(R) Wi-Fi 6 AX411 160MHz";
+ const char iwl_ma_name[] = "Intel(R) Wi-Fi 6";
+@@ -384,6 +385,18 @@ const struct iwl_cfg iwl_qu_b0_hr1_b0 = {
+ 	.num_rbds = IWL_NUM_RBDS_22000_HE,
+ };
+ 
++const struct iwl_cfg iwl_qu_b0_hr_b0 = {
++	.fw_name_pre = IWL_QU_B_HR_B_FW_PRE,
++	IWL_DEVICE_22500,
++	/*
++	 * This device doesn't support receiving BlockAck with a large bitmap
++	 * so we need to restrict the size of transmitted aggregation to the
++	 * HT size; mac80211 would otherwise pick the HE max (256) by default.
++	 */
++	.max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
++	.num_rbds = IWL_NUM_RBDS_22000_HE,
++};
++
+ const struct iwl_cfg iwl_ax201_cfg_qu_hr = {
+ 	.name = "Intel(R) Wi-Fi 6 AX201 160MHz",
+ 	.fw_name_pre = IWL_QU_B_HR_B_FW_PRE,
+@@ -410,6 +423,18 @@ const struct iwl_cfg iwl_qu_c0_hr1_b0 = {
+ 	.num_rbds = IWL_NUM_RBDS_22000_HE,
+ };
+ 
++const struct iwl_cfg iwl_qu_c0_hr_b0 = {
++	.fw_name_pre = IWL_QU_C_HR_B_FW_PRE,
++	IWL_DEVICE_22500,
++	/*
++	 * This device doesn't support receiving BlockAck with a large bitmap
++	 * so we need to restrict the size of transmitted aggregation to the
++	 * HT size; mac80211 would otherwise pick the HE max (256) by default.
++	 */
++	.max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
++	.num_rbds = IWL_NUM_RBDS_22000_HE,
++};
++
+ const struct iwl_cfg iwl_ax201_cfg_qu_c0_hr_b0 = {
+ 	.name = "Intel(R) Wi-Fi 6 AX201 160MHz",
+ 	.fw_name_pre = IWL_QU_C_HR_B_FW_PRE,
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index e82e3fc963be2..9b91aa9b2e7f1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -544,6 +544,7 @@ extern const char iwl9260_killer_1550_name[];
+ extern const char iwl9560_killer_1550i_name[];
+ extern const char iwl9560_killer_1550s_name[];
+ extern const char iwl_ax200_name[];
++extern const char iwl_ax203_name[];
+ extern const char iwl_ax201_name[];
+ extern const char iwl_ax101_name[];
+ extern const char iwl_ax200_killer_1650w_name[];
+@@ -627,6 +628,8 @@ extern const struct iwl_cfg iwl9560_2ac_cfg_soc;
+ extern const struct iwl_cfg iwl_qu_b0_hr1_b0;
+ extern const struct iwl_cfg iwl_qu_c0_hr1_b0;
+ extern const struct iwl_cfg iwl_quz_a0_hr1_b0;
++extern const struct iwl_cfg iwl_qu_b0_hr_b0;
++extern const struct iwl_cfg iwl_qu_c0_hr_b0;
+ extern const struct iwl_cfg iwl_ax200_cfg_cc;
+ extern const struct iwl_cfg iwl_ax201_cfg_qu_hr;
+ extern const struct iwl_cfg iwl_ax201_cfg_qu_hr;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
+index f043eefabb4ec..7b1d2dac6ceb8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
+@@ -514,7 +514,10 @@ static ssize_t iwl_dbgfs_os_device_timediff_read(struct file *file,
+ 	const size_t bufsz = sizeof(buf);
+ 	int pos = 0;
+ 
++	mutex_lock(&mvm->mutex);
+ 	iwl_mvm_get_sync_time(mvm, &curr_gp2, &curr_os);
++	mutex_unlock(&mvm->mutex);
++
+ 	do_div(curr_os, NSEC_PER_USEC);
+ 	diff = curr_os - curr_gp2;
+ 	pos += scnprintf(buf + pos, bufsz - pos, "diff=%lld\n", diff);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index b627e7da7ac9d..d42165559df6e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -4249,6 +4249,9 @@ static void __iwl_mvm_unassign_vif_chanctx(struct iwl_mvm *mvm,
+ 	iwl_mvm_binding_remove_vif(mvm, vif);
+ 
+ out:
++	if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_CHANNEL_SWITCH_CMD) &&
++	    switching_chanctx)
++		return;
+ 	mvmvif->phy_ctxt = NULL;
+ 	iwl_mvm_power_update_mac(mvm);
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 0d1118f66f0d5..cb83490f1016f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -845,6 +845,10 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
+ 	if (!mvm->scan_cmd)
+ 		goto out_free;
+ 
++	/* invalidate ids to prevent accidental removal of sta_id 0 */
++	mvm->aux_sta.sta_id = IWL_MVM_INVALID_STA;
++	mvm->snif_sta.sta_id = IWL_MVM_INVALID_STA;
++
+ 	/* Set EBS as successful as long as not stated otherwise by the FW. */
+ 	mvm->last_ebs_successful = true;
+ 
+@@ -1245,6 +1249,7 @@ static void iwl_mvm_reprobe_wk(struct work_struct *wk)
+ 	reprobe = container_of(wk, struct iwl_mvm_reprobe, work);
+ 	if (device_reprobe(reprobe->dev))
+ 		dev_err(reprobe->dev, "reprobe failed!\n");
++	put_device(reprobe->dev);
+ 	kfree(reprobe);
+ 	module_put(THIS_MODULE);
+ }
+@@ -1295,7 +1300,7 @@ void iwl_mvm_nic_restart(struct iwl_mvm *mvm, bool fw_error)
+ 			module_put(THIS_MODULE);
+ 			return;
+ 		}
+-		reprobe->dev = mvm->trans->dev;
++		reprobe->dev = get_device(mvm->trans->dev);
+ 		INIT_WORK(&reprobe->work, iwl_mvm_reprobe_wk);
+ 		schedule_work(&reprobe->work);
+ 	} else if (test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 799d8219463cb..a66a5c19474a9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -2103,6 +2103,9 @@ int iwl_mvm_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
++	if (WARN_ON_ONCE(mvm->snif_sta.sta_id == IWL_MVM_INVALID_STA))
++		return -EINVAL;
++
+ 	iwl_mvm_disable_txq(mvm, NULL, mvm->snif_queue, IWL_MAX_TID_COUNT, 0);
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvm->snif_sta.sta_id);
+ 	if (ret)
+@@ -2117,6 +2120,9 @@ int iwl_mvm_rm_aux_sta(struct iwl_mvm *mvm)
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
++	if (WARN_ON_ONCE(mvm->aux_sta.sta_id == IWL_MVM_INVALID_STA))
++		return -EINVAL;
++
+ 	iwl_mvm_disable_txq(mvm, NULL, mvm->aux_queue, IWL_MAX_TID_COUNT, 0);
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvm->aux_sta.sta_id);
+ 	if (ret)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+index d719e433a59bf..2d43899fbdd7a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+@@ -245,8 +245,10 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 	/* Allocate IML */
+ 	iml_img = dma_alloc_coherent(trans->dev, trans->iml_len,
+ 				     &trans_pcie->iml_dma_addr, GFP_KERNEL);
+-	if (!iml_img)
+-		return -ENOMEM;
++	if (!iml_img) {
++		ret = -ENOMEM;
++		goto err_free_ctxt_info;
++	}
+ 
+ 	memcpy(iml_img, trans->iml, trans->iml_len);
+ 
+@@ -284,6 +286,11 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 
+ 	return 0;
+ 
++err_free_ctxt_info:
++	dma_free_coherent(trans->dev, sizeof(*trans_pcie->ctxt_info_gen3),
++			  trans_pcie->ctxt_info_gen3,
++			  trans_pcie->ctxt_info_dma_addr);
++	trans_pcie->ctxt_info_gen3 = NULL;
+ err_free_prph_info:
+ 	dma_free_coherent(trans->dev,
+ 			  sizeof(*prph_info),
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 7b5ece380fbfb..2823a1e81656d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -966,6 +966,11 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY,
+ 		      IWL_CFG_ANY, IWL_CFG_ANY,
+ 		      iwl_qu_b0_hr1_b0, iwl_ax101_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
++		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
++		      IWL_CFG_ANY, IWL_CFG_ANY,
++		      iwl_qu_b0_hr_b0, iwl_ax203_name),
+ 
+ 	/* Qu C step */
+ 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+@@ -973,6 +978,11 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY,
+ 		      IWL_CFG_ANY, IWL_CFG_ANY,
+ 		      iwl_qu_c0_hr1_b0, iwl_ax101_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
++		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
++		      IWL_CFG_ANY, IWL_CFG_ANY,
++		      iwl_qu_c0_hr_b0, iwl_ax203_name),
+ 
+ 	/* QuZ */
+ 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index 966be5689d63a..ed54d04e43964 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -299,6 +299,11 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 	struct iwl_txq *txq = trans->txqs.txq[txq_id];
+ 
++	if (!txq) {
++		IWL_ERR(trans, "Trying to free a queue that wasn't allocated?\n");
++		return;
++	}
++
+ 	spin_lock_bh(&txq->lock);
+ 	while (txq->write_ptr != txq->read_ptr) {
+ 		IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n",
+diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.c b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
+index af0b27a68d84d..9181221a2434d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
+@@ -887,10 +887,8 @@ void iwl_txq_gen2_unmap(struct iwl_trans *trans, int txq_id)
+ 			int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr);
+ 			struct sk_buff *skb = txq->entries[idx].skb;
+ 
+-			if (WARN_ON_ONCE(!skb))
+-				continue;
+-
+-			iwl_txq_free_tso_page(trans, skb);
++			if (!WARN_ON_ONCE(!skb))
++				iwl_txq_free_tso_page(trans, skb);
+ 		}
+ 		iwl_txq_gen2_free_tfd(trans, txq);
+ 		txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 42bbd99a36acf..35098dbd32a3c 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1813,13 +1813,13 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ {
+ 	struct regulator_dev *r;
+ 	struct device *dev = rdev->dev.parent;
+-	int ret;
++	int ret = 0;
+ 
+ 	/* No supply to resolve? */
+ 	if (!rdev->supply_name)
+ 		return 0;
+ 
+-	/* Supply already resolved? */
++	/* Supply already resolved? (fast-path without locking contention) */
+ 	if (rdev->supply)
+ 		return 0;
+ 
+@@ -1829,7 +1829,7 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 
+ 		/* Did the lookup explicitly defer for us? */
+ 		if (ret == -EPROBE_DEFER)
+-			return ret;
++			goto out;
+ 
+ 		if (have_full_constraints()) {
+ 			r = dummy_regulator_rdev;
+@@ -1837,15 +1837,18 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 		} else {
+ 			dev_err(dev, "Failed to resolve %s-supply for %s\n",
+ 				rdev->supply_name, rdev->desc->name);
+-			return -EPROBE_DEFER;
++			ret = -EPROBE_DEFER;
++			goto out;
+ 		}
+ 	}
+ 
+ 	if (r == rdev) {
+ 		dev_err(dev, "Supply for %s (%s) resolved to itself\n",
+ 			rdev->desc->name, rdev->supply_name);
+-		if (!have_full_constraints())
+-			return -EINVAL;
++		if (!have_full_constraints()) {
++			ret = -EINVAL;
++			goto out;
++		}
+ 		r = dummy_regulator_rdev;
+ 		get_device(&r->dev);
+ 	}
+@@ -1859,7 +1862,8 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 	if (r->dev.parent && r->dev.parent != rdev->dev.parent) {
+ 		if (!device_is_bound(r->dev.parent)) {
+ 			put_device(&r->dev);
+-			return -EPROBE_DEFER;
++			ret = -EPROBE_DEFER;
++			goto out;
+ 		}
+ 	}
+ 
+@@ -1867,15 +1871,32 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 	ret = regulator_resolve_supply(r);
+ 	if (ret < 0) {
+ 		put_device(&r->dev);
+-		return ret;
++		goto out;
++	}
++
++	/*
++	 * Recheck rdev->supply with rdev->mutex lock held to avoid a race
++	 * between rdev->supply null check and setting rdev->supply in
++	 * set_supply() from concurrent tasks.
++	 */
++	regulator_lock(rdev);
++
++	/* Supply just resolved by a concurrent task? */
++	if (rdev->supply) {
++		regulator_unlock(rdev);
++		put_device(&r->dev);
++		goto out;
+ 	}
+ 
+ 	ret = set_supply(rdev, r);
+ 	if (ret < 0) {
++		regulator_unlock(rdev);
+ 		put_device(&r->dev);
+-		return ret;
++		goto out;
+ 	}
+ 
++	regulator_unlock(rdev);
++
+ 	/*
+ 	 * In set_machine_constraints() we may have turned this regulator on
+ 	 * but we couldn't propagate to the supply if it hadn't been resolved
+@@ -1886,11 +1907,12 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 		if (ret < 0) {
+ 			_regulator_put(rdev->supply);
+ 			rdev->supply = NULL;
+-			return ret;
++			goto out;
+ 		}
+ 	}
+ 
+-	return 0;
++out:
++	return ret;
+ }
+ 
+ /* Internal regulator request function */
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index b53c055bea6a3..f72d53848dcbc 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -1078,16 +1078,6 @@ enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
+ 	return IO_WQ_CANCEL_NOTFOUND;
+ }
+ 
+-static bool io_wq_io_cb_cancel_data(struct io_wq_work *work, void *data)
+-{
+-	return work == data;
+-}
+-
+-enum io_wq_cancel io_wq_cancel_work(struct io_wq *wq, struct io_wq_work *cwork)
+-{
+-	return io_wq_cancel_cb(wq, io_wq_io_cb_cancel_data, (void *)cwork, false);
+-}
+-
+ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
+ {
+ 	int ret = -ENOMEM, node;
+diff --git a/fs/io-wq.h b/fs/io-wq.h
+index aaa363f358916..75113bcd5889f 100644
+--- a/fs/io-wq.h
++++ b/fs/io-wq.h
+@@ -130,7 +130,6 @@ static inline bool io_wq_is_hashed(struct io_wq_work *work)
+ }
+ 
+ void io_wq_cancel_all(struct io_wq *wq);
+-enum io_wq_cancel io_wq_cancel_work(struct io_wq *wq, struct io_wq_work *cwork);
+ 
+ typedef bool (work_cancel_fn)(struct io_wq_work *, void *);
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 3b6307f6bd93d..d0b7332ca7033 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -286,7 +286,6 @@ struct io_ring_ctx {
+ 		struct list_head	timeout_list;
+ 		struct list_head	cq_overflow_list;
+ 
+-		wait_queue_head_t	inflight_wait;
+ 		struct io_uring_sqe	*sq_sqes;
+ 	} ____cacheline_aligned_in_smp;
+ 
+@@ -997,6 +996,43 @@ static inline void io_clean_op(struct io_kiocb *req)
+ 		__io_clean_op(req);
+ }
+ 
++static inline bool __io_match_files(struct io_kiocb *req,
++				    struct files_struct *files)
++{
++	if (req->file && req->file->f_op == &io_uring_fops)
++		return true;
++
++	return ((req->flags & REQ_F_WORK_INITIALIZED) &&
++	        (req->work.flags & IO_WQ_WORK_FILES)) &&
++		req->work.identity->files == files;
++}
++
++static bool io_match_task(struct io_kiocb *head,
++			  struct task_struct *task,
++			  struct files_struct *files)
++{
++	struct io_kiocb *link;
++
++	if (task && head->task != task) {
++		/* in terms of cancelation, always match if req task is dead */
++		if (head->task->flags & PF_EXITING)
++			return true;
++		return false;
++	}
++	if (!files)
++		return true;
++	if (__io_match_files(head, files))
++		return true;
++	if (head->flags & REQ_F_LINK_HEAD) {
++		list_for_each_entry(link, &head->link_list, link_list) {
++			if (__io_match_files(link, files))
++				return true;
++		}
++	}
++	return false;
++}
++
++
+ static void io_sq_thread_drop_mm(void)
+ {
+ 	struct mm_struct *mm = current->mm;
+@@ -1183,7 +1219,6 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+ 	INIT_LIST_HEAD(&ctx->iopoll_list);
+ 	INIT_LIST_HEAD(&ctx->defer_list);
+ 	INIT_LIST_HEAD(&ctx->timeout_list);
+-	init_waitqueue_head(&ctx->inflight_wait);
+ 	spin_lock_init(&ctx->inflight_lock);
+ 	INIT_LIST_HEAD(&ctx->inflight_list);
+ 	INIT_DELAYED_WORK(&ctx->file_put_work, io_file_put_work);
+@@ -1368,11 +1403,14 @@ static bool io_grab_identity(struct io_kiocb *req)
+ 			return false;
+ 		atomic_inc(&id->files->count);
+ 		get_nsproxy(id->nsproxy);
+-		req->flags |= REQ_F_INFLIGHT;
+ 
+-		spin_lock_irq(&ctx->inflight_lock);
+-		list_add(&req->inflight_entry, &ctx->inflight_list);
+-		spin_unlock_irq(&ctx->inflight_lock);
++		if (!(req->flags & REQ_F_INFLIGHT)) {
++			req->flags |= REQ_F_INFLIGHT;
++
++			spin_lock_irq(&ctx->inflight_lock);
++			list_add(&req->inflight_entry, &ctx->inflight_list);
++			spin_unlock_irq(&ctx->inflight_lock);
++		}
+ 		req->work.flags |= IO_WQ_WORK_FILES;
+ 	}
+ 	if (!(req->work.flags & IO_WQ_WORK_MM) &&
+@@ -1466,30 +1504,18 @@ static void io_kill_timeout(struct io_kiocb *req)
+ 	}
+ }
+ 
+-static bool io_task_match(struct io_kiocb *req, struct task_struct *tsk)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	if (!tsk || req->task == tsk)
+-		return true;
+-	if (ctx->flags & IORING_SETUP_SQPOLL) {
+-		if (ctx->sq_data && req->task == ctx->sq_data->thread)
+-			return true;
+-	}
+-	return false;
+-}
+-
+ /*
+  * Returns true if we found and killed one or more timeouts
+  */
+-static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk)
++static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
++			     struct files_struct *files)
+ {
+ 	struct io_kiocb *req, *tmp;
+ 	int canceled = 0;
+ 
+ 	spin_lock_irq(&ctx->completion_lock);
+ 	list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
+-		if (io_task_match(req, tsk)) {
++		if (io_match_task(req, tsk, files)) {
+ 			io_kill_timeout(req);
+ 			canceled++;
+ 		}
+@@ -1616,32 +1642,6 @@ static void io_cqring_mark_overflow(struct io_ring_ctx *ctx)
+ 	}
+ }
+ 
+-static inline bool __io_match_files(struct io_kiocb *req,
+-				    struct files_struct *files)
+-{
+-	return ((req->flags & REQ_F_WORK_INITIALIZED) &&
+-	        (req->work.flags & IO_WQ_WORK_FILES)) &&
+-		req->work.identity->files == files;
+-}
+-
+-static bool io_match_files(struct io_kiocb *req,
+-			   struct files_struct *files)
+-{
+-	struct io_kiocb *link;
+-
+-	if (!files)
+-		return true;
+-	if (__io_match_files(req, files))
+-		return true;
+-	if (req->flags & REQ_F_LINK_HEAD) {
+-		list_for_each_entry(link, &req->link_list, link_list) {
+-			if (__io_match_files(link, files))
+-				return true;
+-		}
+-	}
+-	return false;
+-}
+-
+ /* Returns true if there are no backlogged entries after the flush */
+ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+ 				       struct task_struct *tsk,
+@@ -1663,9 +1663,7 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+ 
+ 	cqe = NULL;
+ 	list_for_each_entry_safe(req, tmp, &ctx->cq_overflow_list, compl.list) {
+-		if (tsk && req->task != tsk)
+-			continue;
+-		if (!io_match_files(req, files))
++		if (!io_match_task(req, tsk, files))
+ 			continue;
+ 
+ 		cqe = io_get_cqring(ctx);
+@@ -2086,6 +2084,9 @@ static void __io_req_task_submit(struct io_kiocb *req)
+ 	else
+ 		__io_req_task_cancel(req, -EFAULT);
+ 	mutex_unlock(&ctx->uring_lock);
++
++	if (ctx->flags & IORING_SETUP_SQPOLL)
++		io_sq_thread_drop_mm();
+ }
+ 
+ static void io_req_task_submit(struct callback_head *cb)
+@@ -5314,7 +5315,8 @@ static bool io_poll_remove_one(struct io_kiocb *req)
+ /*
+  * Returns true if we found and killed one or more poll requests
+  */
+-static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk)
++static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk,
++			       struct files_struct *files)
+ {
+ 	struct hlist_node *tmp;
+ 	struct io_kiocb *req;
+@@ -5326,7 +5328,7 @@ static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk)
+ 
+ 		list = &ctx->cancel_hash[i];
+ 		hlist_for_each_entry_safe(req, tmp, list, hash_node) {
+-			if (io_task_match(req, tsk))
++			if (io_match_task(req, tsk, files))
+ 				posted += io_poll_remove_one(req);
+ 		}
+ 	}
+@@ -5893,17 +5895,20 @@ static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ static void io_req_drop_files(struct io_kiocb *req)
+ {
+ 	struct io_ring_ctx *ctx = req->ctx;
++	struct io_uring_task *tctx = req->task->io_uring;
+ 	unsigned long flags;
+ 
+-	put_files_struct(req->work.identity->files);
+-	put_nsproxy(req->work.identity->nsproxy);
++	if (req->work.flags & IO_WQ_WORK_FILES) {
++		put_files_struct(req->work.identity->files);
++		put_nsproxy(req->work.identity->nsproxy);
++	}
+ 	spin_lock_irqsave(&ctx->inflight_lock, flags);
+ 	list_del(&req->inflight_entry);
+ 	spin_unlock_irqrestore(&ctx->inflight_lock, flags);
+ 	req->flags &= ~REQ_F_INFLIGHT;
+ 	req->work.flags &= ~IO_WQ_WORK_FILES;
+-	if (waitqueue_active(&ctx->inflight_wait))
+-		wake_up(&ctx->inflight_wait);
++	if (atomic_read(&tctx->in_idle))
++		wake_up(&tctx->wait);
+ }
+ 
+ static void __io_clean_op(struct io_kiocb *req)
+@@ -6168,6 +6173,16 @@ static struct file *io_file_get(struct io_submit_state *state,
+ 		file = __io_file_get(state, fd);
+ 	}
+ 
++	if (file && file->f_op == &io_uring_fops &&
++	    !(req->flags & REQ_F_INFLIGHT)) {
++		io_req_init_async(req);
++		req->flags |= REQ_F_INFLIGHT;
++
++		spin_lock_irq(&ctx->inflight_lock);
++		list_add(&req->inflight_entry, &ctx->inflight_list);
++		spin_unlock_irq(&ctx->inflight_lock);
++	}
++
+ 	return file;
+ }
+ 
+@@ -6989,14 +7004,18 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 						TASK_INTERRUPTIBLE);
+ 		/* make sure we run task_work before checking for signals */
+ 		ret = io_run_task_work_sig();
+-		if (ret > 0)
++		if (ret > 0) {
++			finish_wait(&ctx->wait, &iowq.wq);
+ 			continue;
++		}
+ 		else if (ret < 0)
+ 			break;
+ 		if (io_should_wake(&iowq))
+ 			break;
+-		if (test_bit(0, &ctx->cq_check_overflow))
++		if (test_bit(0, &ctx->cq_check_overflow)) {
++			finish_wait(&ctx->wait, &iowq.wq);
+ 			continue;
++		}
+ 		schedule();
+ 	} while (1);
+ 	finish_wait(&ctx->wait, &iowq.wq);
+@@ -8487,8 +8506,8 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ 		__io_cqring_overflow_flush(ctx, true, NULL, NULL);
+ 	mutex_unlock(&ctx->uring_lock);
+ 
+-	io_kill_timeouts(ctx, NULL);
+-	io_poll_remove_all(ctx, NULL);
++	io_kill_timeouts(ctx, NULL, NULL);
++	io_poll_remove_all(ctx, NULL, NULL);
+ 
+ 	if (ctx->io_wq)
+ 		io_wq_cancel_cb(ctx->io_wq, io_cancel_ctx_cb, ctx, true);
+@@ -8524,112 +8543,31 @@ static int io_uring_release(struct inode *inode, struct file *file)
+ 	return 0;
+ }
+ 
+-/*
+- * Returns true if 'preq' is the link parent of 'req'
+- */
+-static bool io_match_link(struct io_kiocb *preq, struct io_kiocb *req)
+-{
+-	struct io_kiocb *link;
+-
+-	if (!(preq->flags & REQ_F_LINK_HEAD))
+-		return false;
+-
+-	list_for_each_entry(link, &preq->link_list, link_list) {
+-		if (link == req)
+-			return true;
+-	}
+-
+-	return false;
+-}
+-
+-/*
+- * We're looking to cancel 'req' because it's holding on to our files, but
+- * 'req' could be a link to another request. See if it is, and cancel that
+- * parent request if so.
+- */
+-static bool io_poll_remove_link(struct io_ring_ctx *ctx, struct io_kiocb *req)
+-{
+-	struct hlist_node *tmp;
+-	struct io_kiocb *preq;
+-	bool found = false;
+-	int i;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
+-		struct hlist_head *list;
+-
+-		list = &ctx->cancel_hash[i];
+-		hlist_for_each_entry_safe(preq, tmp, list, hash_node) {
+-			found = io_match_link(preq, req);
+-			if (found) {
+-				io_poll_remove_one(preq);
+-				break;
+-			}
+-		}
+-	}
+-	spin_unlock_irq(&ctx->completion_lock);
+-	return found;
+-}
+-
+-static bool io_timeout_remove_link(struct io_ring_ctx *ctx,
+-				   struct io_kiocb *req)
+-{
+-	struct io_kiocb *preq;
+-	bool found = false;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	list_for_each_entry(preq, &ctx->timeout_list, timeout.list) {
+-		found = io_match_link(preq, req);
+-		if (found) {
+-			__io_timeout_cancel(preq);
+-			break;
+-		}
+-	}
+-	spin_unlock_irq(&ctx->completion_lock);
+-	return found;
+-}
++struct io_task_cancel {
++	struct task_struct *task;
++	struct files_struct *files;
++};
+ 
+-static bool io_cancel_link_cb(struct io_wq_work *work, void *data)
++static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
+ {
+ 	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++	struct io_task_cancel *cancel = data;
+ 	bool ret;
+ 
+-	if (req->flags & REQ_F_LINK_TIMEOUT) {
++	if (cancel->files && (req->flags & REQ_F_LINK_TIMEOUT)) {
+ 		unsigned long flags;
+ 		struct io_ring_ctx *ctx = req->ctx;
+ 
+ 		/* protect against races with linked timeouts */
+ 		spin_lock_irqsave(&ctx->completion_lock, flags);
+-		ret = io_match_link(req, data);
++		ret = io_match_task(req, cancel->task, cancel->files);
+ 		spin_unlock_irqrestore(&ctx->completion_lock, flags);
+ 	} else {
+-		ret = io_match_link(req, data);
++		ret = io_match_task(req, cancel->task, cancel->files);
+ 	}
+ 	return ret;
+ }
+ 
+-static void io_attempt_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)
+-{
+-	enum io_wq_cancel cret;
+-
+-	/* cancel this particular work, if it's running */
+-	cret = io_wq_cancel_work(ctx->io_wq, &req->work);
+-	if (cret != IO_WQ_CANCEL_NOTFOUND)
+-		return;
+-
+-	/* find links that hold this pending, cancel those */
+-	cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_link_cb, req, true);
+-	if (cret != IO_WQ_CANCEL_NOTFOUND)
+-		return;
+-
+-	/* if we have a poll link holding this pending, cancel that */
+-	if (io_poll_remove_link(ctx, req))
+-		return;
+-
+-	/* final option, timeout link is holding this req pending */
+-	io_timeout_remove_link(ctx, req);
+-}
+-
+ static void io_cancel_defer_files(struct io_ring_ctx *ctx,
+ 				  struct task_struct *task,
+ 				  struct files_struct *files)
+@@ -8639,8 +8577,7 @@ static void io_cancel_defer_files(struct io_ring_ctx *ctx,
+ 
+ 	spin_lock_irq(&ctx->completion_lock);
+ 	list_for_each_entry_reverse(de, &ctx->defer_list, list) {
+-		if (io_task_match(de->req, task) &&
+-		    io_match_files(de->req, files)) {
++		if (io_match_task(de->req, task, files)) {
+ 			list_cut_position(&list, &ctx->defer_list, &de->list);
+ 			break;
+ 		}
+@@ -8657,73 +8594,56 @@ static void io_cancel_defer_files(struct io_ring_ctx *ctx,
+ 	}
+ }
+ 
+-/*
+- * Returns true if we found and killed one or more files pinning requests
+- */
+-static bool io_uring_cancel_files(struct io_ring_ctx *ctx,
++static int io_uring_count_inflight(struct io_ring_ctx *ctx,
++				   struct task_struct *task,
++				   struct files_struct *files)
++{
++	struct io_kiocb *req;
++	int cnt = 0;
++
++	spin_lock_irq(&ctx->inflight_lock);
++	list_for_each_entry(req, &ctx->inflight_list, inflight_entry)
++		cnt += io_match_task(req, task, files);
++	spin_unlock_irq(&ctx->inflight_lock);
++	return cnt;
++}
++
++static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ 				  struct task_struct *task,
+ 				  struct files_struct *files)
+ {
+-	if (list_empty_careful(&ctx->inflight_list))
+-		return false;
+-
+ 	while (!list_empty_careful(&ctx->inflight_list)) {
+-		struct io_kiocb *cancel_req = NULL, *req;
++		struct io_task_cancel cancel = { .task = task, .files = files };
+ 		DEFINE_WAIT(wait);
++		int inflight;
+ 
+-		spin_lock_irq(&ctx->inflight_lock);
+-		list_for_each_entry(req, &ctx->inflight_list, inflight_entry) {
+-			if (req->task == task &&
+-			    (req->work.flags & IO_WQ_WORK_FILES) &&
+-			    req->work.identity->files != files)
+-				continue;
+-			/* req is being completed, ignore */
+-			if (!refcount_inc_not_zero(&req->refs))
+-				continue;
+-			cancel_req = req;
+-			break;
+-		}
+-		if (cancel_req)
+-			prepare_to_wait(&ctx->inflight_wait, &wait,
+-						TASK_UNINTERRUPTIBLE);
+-		spin_unlock_irq(&ctx->inflight_lock);
+-
+-		/* We need to keep going until we don't find a matching req */
+-		if (!cancel_req)
++		inflight = io_uring_count_inflight(ctx, task, files);
++		if (!inflight)
+ 			break;
+-		/* cancel this request, or head link requests */
+-		io_attempt_cancel(ctx, cancel_req);
+-		io_cqring_overflow_flush(ctx, true, task, files);
+ 
+-		io_put_req(cancel_req);
++		io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, &cancel, true);
++		io_poll_remove_all(ctx, task, files);
++		io_kill_timeouts(ctx, task, files);
+ 		/* cancellations _may_ trigger task work */
+ 		io_run_task_work();
+-		schedule();
+-		finish_wait(&ctx->inflight_wait, &wait);
+-	}
+-
+-	return true;
+-}
+ 
+-static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
+-{
+-	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+-	struct task_struct *task = data;
+-
+-	return io_task_match(req, task);
++		prepare_to_wait(&task->io_uring->wait, &wait,
++				TASK_UNINTERRUPTIBLE);
++		if (inflight == io_uring_count_inflight(ctx, task, files))
++			schedule();
++		finish_wait(&task->io_uring->wait, &wait);
++	}
+ }
+ 
+-static bool __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+-					    struct task_struct *task,
+-					    struct files_struct *files)
++static void __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
++					    struct task_struct *task)
+ {
+-	bool ret;
+-
+-	ret = io_uring_cancel_files(ctx, task, files);
+-	if (!files) {
++	while (1) {
++		struct io_task_cancel cancel = { .task = task, .files = NULL, };
+ 		enum io_wq_cancel cret;
++		bool ret = false;
+ 
+-		cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, task, true);
++		cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, &cancel, true);
+ 		if (cret != IO_WQ_CANCEL_NOTFOUND)
+ 			ret = true;
+ 
+@@ -8735,11 +8655,13 @@ static bool __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ 			}
+ 		}
+ 
+-		ret |= io_poll_remove_all(ctx, task);
+-		ret |= io_kill_timeouts(ctx, task);
++		ret |= io_poll_remove_all(ctx, task, NULL);
++		ret |= io_kill_timeouts(ctx, task, NULL);
++		if (!ret)
++			break;
++		io_run_task_work();
++		cond_resched();
+ 	}
+-
+-	return ret;
+ }
+ 
+ static void io_disable_sqo_submit(struct io_ring_ctx *ctx)
+@@ -8764,8 +8686,6 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ 	struct task_struct *task = current;
+ 
+ 	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
+-		/* for SQPOLL only sqo_task has task notes */
+-		WARN_ON_ONCE(ctx->sqo_task != current);
+ 		io_disable_sqo_submit(ctx);
+ 		task = ctx->sq_data->thread;
+ 		atomic_inc(&task->io_uring->in_idle);
+@@ -8775,10 +8695,9 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ 	io_cancel_defer_files(ctx, task, files);
+ 	io_cqring_overflow_flush(ctx, true, task, files);
+ 
+-	while (__io_uring_cancel_task_requests(ctx, task, files)) {
+-		io_run_task_work();
+-		cond_resched();
+-	}
++	io_uring_cancel_files(ctx, task, files);
++	if (!files)
++		__io_uring_cancel_task_requests(ctx, task);
+ 
+ 	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
+ 		atomic_dec(&task->io_uring->in_idle);
+@@ -8919,15 +8838,15 @@ void __io_uring_task_cancel(void)
+ 		prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
+ 
+ 		/*
+-		 * If we've seen completions, retry. This avoids a race where
+-		 * a completion comes in before we did prepare_to_wait().
++		 * If we've seen completions, retry without waiting. This
++		 * avoids a race where a completion comes in before we did
++		 * prepare_to_wait().
+ 		 */
+-		if (inflight != tctx_inflight(tctx))
+-			continue;
+-		schedule();
++		if (inflight == tctx_inflight(tctx))
++			schedule();
++		finish_wait(&tctx->wait, &wait);
+ 	} while (1);
+ 
+-	finish_wait(&tctx->wait, &wait);
+ 	atomic_dec(&tctx->in_idle);
+ 
+ 	io_uring_remove_task_files(tctx);
+@@ -8938,6 +8857,9 @@ static int io_uring_flush(struct file *file, void *data)
+ 	struct io_uring_task *tctx = current->io_uring;
+ 	struct io_ring_ctx *ctx = file->private_data;
+ 
++	if (fatal_signal_pending(current) || (current->flags & PF_EXITING))
++		io_uring_cancel_task_requests(ctx, NULL);
++
+ 	if (!tctx)
+ 		return 0;
+ 
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index cbadcf6ca4da2..b8712b835b105 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1000,7 +1000,7 @@ pnfs_layout_stateid_blocked(const struct pnfs_layout_hdr *lo,
+ {
+ 	u32 seqid = be32_to_cpu(stateid->seqid);
+ 
+-	return !pnfs_seqid_is_newer(seqid, lo->plh_barrier);
++	return !pnfs_seqid_is_newer(seqid, lo->plh_barrier) && lo->plh_barrier;
+ }
+ 
+ /* lget is set to 1 if called from inside send_layoutget call chain */
+@@ -1913,6 +1913,11 @@ static void nfs_layoutget_end(struct pnfs_layout_hdr *lo)
+ 		wake_up_var(&lo->plh_outstanding);
+ }
+ 
++static bool pnfs_is_first_layoutget(struct pnfs_layout_hdr *lo)
++{
++	return test_bit(NFS_LAYOUT_FIRST_LAYOUTGET, &lo->plh_flags);
++}
++
+ static void pnfs_clear_first_layoutget(struct pnfs_layout_hdr *lo)
+ {
+ 	unsigned long *bitlock = &lo->plh_flags;
+@@ -2387,23 +2392,34 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
+ 		goto out_forget;
+ 	}
+ 
+-	if (!pnfs_layout_is_valid(lo)) {
+-		/* We have a completely new layout */
+-		pnfs_set_layout_stateid(lo, &res->stateid, lgp->cred, true);
+-	} else if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
++	if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
+ 		/* existing state ID, make sure the sequence number matches. */
+ 		if (pnfs_layout_stateid_blocked(lo, &res->stateid)) {
++			if (!pnfs_layout_is_valid(lo) &&
++			    pnfs_is_first_layoutget(lo))
++				lo->plh_barrier = 0;
+ 			dprintk("%s forget reply due to sequence\n", __func__);
+ 			goto out_forget;
+ 		}
+ 		pnfs_set_layout_stateid(lo, &res->stateid, lgp->cred, false);
+-	} else {
++	} else if (pnfs_layout_is_valid(lo)) {
+ 		/*
+ 		 * We got an entirely new state ID.  Mark all segments for the
+ 		 * inode invalid, and retry the layoutget
+ 		 */
+-		pnfs_mark_layout_stateid_invalid(lo, &free_me);
++		struct pnfs_layout_range range = {
++			.iomode = IOMODE_ANY,
++			.length = NFS4_MAX_UINT64,
++		};
++		pnfs_set_plh_return_info(lo, IOMODE_ANY, 0);
++		pnfs_mark_matching_lsegs_return(lo, &lo->plh_return_segs,
++						&range, 0);
+ 		goto out_forget;
++	} else {
++		/* We have a completely new layout */
++		if (!pnfs_is_first_layoutget(lo))
++			goto out_forget;
++		pnfs_set_layout_stateid(lo, &res->stateid, lgp->cred, true);
+ 	}
+ 
+ 	pnfs_get_lseg(lseg);
+diff --git a/fs/nilfs2/file.c b/fs/nilfs2/file.c
+index 64bc81363c6cc..e1bd592ce7001 100644
+--- a/fs/nilfs2/file.c
++++ b/fs/nilfs2/file.c
+@@ -141,6 +141,7 @@ const struct file_operations nilfs_file_operations = {
+ 	/* .release	= nilfs_release_file, */
+ 	.fsync		= nilfs_sync_file,
+ 	.splice_read	= generic_file_splice_read,
++	.splice_write   = iter_file_splice_write,
+ };
+ 
+ const struct inode_operations nilfs_file_inode_operations = {
+diff --git a/fs/squashfs/block.c b/fs/squashfs/block.c
+index 8a19773b5a0b7..45f44425d8560 100644
+--- a/fs/squashfs/block.c
++++ b/fs/squashfs/block.c
+@@ -196,9 +196,15 @@ int squashfs_read_data(struct super_block *sb, u64 index, int length,
+ 		length = SQUASHFS_COMPRESSED_SIZE(length);
+ 		index += 2;
+ 
+-		TRACE("Block @ 0x%llx, %scompressed size %d\n", index,
++		TRACE("Block @ 0x%llx, %scompressed size %d\n", index - 2,
+ 		      compressed ? "" : "un", length);
+ 	}
++	if (length < 0 || length > output->length ||
++			(index + length) > msblk->bytes_used) {
++		res = -EIO;
++		goto out;
++	}
++
+ 	if (next_index)
+ 		*next_index = index + length;
+ 
+diff --git a/fs/squashfs/export.c b/fs/squashfs/export.c
+index ae2c87bb0fbec..eb02072d28dd6 100644
+--- a/fs/squashfs/export.c
++++ b/fs/squashfs/export.c
+@@ -41,12 +41,17 @@ static long long squashfs_inode_lookup(struct super_block *sb, int ino_num)
+ 	struct squashfs_sb_info *msblk = sb->s_fs_info;
+ 	int blk = SQUASHFS_LOOKUP_BLOCK(ino_num - 1);
+ 	int offset = SQUASHFS_LOOKUP_BLOCK_OFFSET(ino_num - 1);
+-	u64 start = le64_to_cpu(msblk->inode_lookup_table[blk]);
++	u64 start;
+ 	__le64 ino;
+ 	int err;
+ 
+ 	TRACE("Entered squashfs_inode_lookup, inode_number = %d\n", ino_num);
+ 
++	if (ino_num == 0 || (ino_num - 1) >= msblk->inodes)
++		return -EINVAL;
++
++	start = le64_to_cpu(msblk->inode_lookup_table[blk]);
++
+ 	err = squashfs_read_metadata(sb, &ino, &start, &offset, sizeof(ino));
+ 	if (err < 0)
+ 		return err;
+@@ -111,7 +116,10 @@ __le64 *squashfs_read_inode_lookup_table(struct super_block *sb,
+ 		u64 lookup_table_start, u64 next_table, unsigned int inodes)
+ {
+ 	unsigned int length = SQUASHFS_LOOKUP_BLOCK_BYTES(inodes);
++	unsigned int indexes = SQUASHFS_LOOKUP_BLOCKS(inodes);
++	int n;
+ 	__le64 *table;
++	u64 start, end;
+ 
+ 	TRACE("In read_inode_lookup_table, length %d\n", length);
+ 
+@@ -121,20 +129,37 @@ __le64 *squashfs_read_inode_lookup_table(struct super_block *sb,
+ 	if (inodes == 0)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	/* length bytes should not extend into the next table - this check
+-	 * also traps instances where lookup_table_start is incorrectly larger
+-	 * than the next table start
++	/*
++	 * The computed size of the lookup table (length bytes) should exactly
++	 * match the table start and end points
+ 	 */
+-	if (lookup_table_start + length > next_table)
++	if (length != (next_table - lookup_table_start))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	table = squashfs_read_table(sb, lookup_table_start, length);
++	if (IS_ERR(table))
++		return table;
+ 
+ 	/*
+-	 * table[0] points to the first inode lookup table metadata block,
+-	 * this should be less than lookup_table_start
++	 * table0], table[1], ... table[indexes - 1] store the locations
++	 * of the compressed inode lookup blocks.  Each entry should be
++	 * less than the next (i.e. table[0] < table[1]), and the difference
++	 * between them should be SQUASHFS_METADATA_SIZE or less.
++	 * table[indexes - 1] should  be less than lookup_table_start, and
++	 * again the difference should be SQUASHFS_METADATA_SIZE or less
+ 	 */
+-	if (!IS_ERR(table) && le64_to_cpu(table[0]) >= lookup_table_start) {
++	for (n = 0; n < (indexes - 1); n++) {
++		start = le64_to_cpu(table[n]);
++		end = le64_to_cpu(table[n + 1]);
++
++		if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
++			kfree(table);
++			return ERR_PTR(-EINVAL);
++		}
++	}
++
++	start = le64_to_cpu(table[indexes - 1]);
++	if (start >= lookup_table_start || (lookup_table_start - start) > SQUASHFS_METADATA_SIZE) {
+ 		kfree(table);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+diff --git a/fs/squashfs/id.c b/fs/squashfs/id.c
+index 6be5afe7287d6..11581bf31af41 100644
+--- a/fs/squashfs/id.c
++++ b/fs/squashfs/id.c
+@@ -35,10 +35,15 @@ int squashfs_get_id(struct super_block *sb, unsigned int index,
+ 	struct squashfs_sb_info *msblk = sb->s_fs_info;
+ 	int block = SQUASHFS_ID_BLOCK(index);
+ 	int offset = SQUASHFS_ID_BLOCK_OFFSET(index);
+-	u64 start_block = le64_to_cpu(msblk->id_table[block]);
++	u64 start_block;
+ 	__le32 disk_id;
+ 	int err;
+ 
++	if (index >= msblk->ids)
++		return -EINVAL;
++
++	start_block = le64_to_cpu(msblk->id_table[block]);
++
+ 	err = squashfs_read_metadata(sb, &disk_id, &start_block, &offset,
+ 							sizeof(disk_id));
+ 	if (err < 0)
+@@ -56,7 +61,10 @@ __le64 *squashfs_read_id_index_table(struct super_block *sb,
+ 		u64 id_table_start, u64 next_table, unsigned short no_ids)
+ {
+ 	unsigned int length = SQUASHFS_ID_BLOCK_BYTES(no_ids);
++	unsigned int indexes = SQUASHFS_ID_BLOCKS(no_ids);
++	int n;
+ 	__le64 *table;
++	u64 start, end;
+ 
+ 	TRACE("In read_id_index_table, length %d\n", length);
+ 
+@@ -67,20 +75,36 @@ __le64 *squashfs_read_id_index_table(struct super_block *sb,
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	/*
+-	 * length bytes should not extend into the next table - this check
+-	 * also traps instances where id_table_start is incorrectly larger
+-	 * than the next table start
++	 * The computed size of the index table (length bytes) should exactly
++	 * match the table start and end points
+ 	 */
+-	if (id_table_start + length > next_table)
++	if (length != (next_table - id_table_start))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	table = squashfs_read_table(sb, id_table_start, length);
++	if (IS_ERR(table))
++		return table;
+ 
+ 	/*
+-	 * table[0] points to the first id lookup table metadata block, this
+-	 * should be less than id_table_start
++	 * table[0], table[1], ... table[indexes - 1] store the locations
++	 * of the compressed id blocks.   Each entry should be less than
++	 * the next (i.e. table[0] < table[1]), and the difference between them
++	 * should be SQUASHFS_METADATA_SIZE or less.  table[indexes - 1]
++	 * should be less than id_table_start, and again the difference
++	 * should be SQUASHFS_METADATA_SIZE or less
+ 	 */
+-	if (!IS_ERR(table) && le64_to_cpu(table[0]) >= id_table_start) {
++	for (n = 0; n < (indexes - 1); n++) {
++		start = le64_to_cpu(table[n]);
++		end = le64_to_cpu(table[n + 1]);
++
++		if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
++			kfree(table);
++			return ERR_PTR(-EINVAL);
++		}
++	}
++
++	start = le64_to_cpu(table[indexes - 1]);
++	if (start >= id_table_start || (id_table_start - start) > SQUASHFS_METADATA_SIZE) {
+ 		kfree(table);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+diff --git a/fs/squashfs/squashfs_fs_sb.h b/fs/squashfs/squashfs_fs_sb.h
+index 34c21ffb6df37..166e98806265b 100644
+--- a/fs/squashfs/squashfs_fs_sb.h
++++ b/fs/squashfs/squashfs_fs_sb.h
+@@ -64,5 +64,6 @@ struct squashfs_sb_info {
+ 	unsigned int				inodes;
+ 	unsigned int				fragments;
+ 	int					xattr_ids;
++	unsigned int				ids;
+ };
+ #endif
+diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c
+index d6c6593ec169e..88cc94be10765 100644
+--- a/fs/squashfs/super.c
++++ b/fs/squashfs/super.c
+@@ -166,6 +166,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	msblk->directory_table = le64_to_cpu(sblk->directory_table_start);
+ 	msblk->inodes = le32_to_cpu(sblk->inodes);
+ 	msblk->fragments = le32_to_cpu(sblk->fragments);
++	msblk->ids = le16_to_cpu(sblk->no_ids);
+ 	flags = le16_to_cpu(sblk->flags);
+ 
+ 	TRACE("Found valid superblock on %pg\n", sb->s_bdev);
+@@ -177,7 +178,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	TRACE("Block size %d\n", msblk->block_size);
+ 	TRACE("Number of inodes %d\n", msblk->inodes);
+ 	TRACE("Number of fragments %d\n", msblk->fragments);
+-	TRACE("Number of ids %d\n", le16_to_cpu(sblk->no_ids));
++	TRACE("Number of ids %d\n", msblk->ids);
+ 	TRACE("sblk->inode_table_start %llx\n", msblk->inode_table);
+ 	TRACE("sblk->directory_table_start %llx\n", msblk->directory_table);
+ 	TRACE("sblk->fragment_table_start %llx\n",
+@@ -236,8 +237,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ allocate_id_index_table:
+ 	/* Allocate and read id index table */
+ 	msblk->id_table = squashfs_read_id_index_table(sb,
+-		le64_to_cpu(sblk->id_table_start), next_table,
+-		le16_to_cpu(sblk->no_ids));
++		le64_to_cpu(sblk->id_table_start), next_table, msblk->ids);
+ 	if (IS_ERR(msblk->id_table)) {
+ 		errorf(fc, "unable to read id index table");
+ 		err = PTR_ERR(msblk->id_table);
+diff --git a/fs/squashfs/xattr.h b/fs/squashfs/xattr.h
+index 184129afd4566..d8a270d3ac4cb 100644
+--- a/fs/squashfs/xattr.h
++++ b/fs/squashfs/xattr.h
+@@ -17,8 +17,16 @@ extern int squashfs_xattr_lookup(struct super_block *, unsigned int, int *,
+ static inline __le64 *squashfs_read_xattr_id_table(struct super_block *sb,
+ 		u64 start, u64 *xattr_table_start, int *xattr_ids)
+ {
++	struct squashfs_xattr_id_table *id_table;
++
++	id_table = squashfs_read_table(sb, start, sizeof(*id_table));
++	if (IS_ERR(id_table))
++		return (__le64 *) id_table;
++
++	*xattr_table_start = le64_to_cpu(id_table->xattr_table_start);
++	kfree(id_table);
++
+ 	ERROR("Xattrs in filesystem, these will be ignored\n");
+-	*xattr_table_start = start;
+ 	return ERR_PTR(-ENOTSUPP);
+ }
+ 
+diff --git a/fs/squashfs/xattr_id.c b/fs/squashfs/xattr_id.c
+index d99e08464554f..ead66670b41a5 100644
+--- a/fs/squashfs/xattr_id.c
++++ b/fs/squashfs/xattr_id.c
+@@ -31,10 +31,15 @@ int squashfs_xattr_lookup(struct super_block *sb, unsigned int index,
+ 	struct squashfs_sb_info *msblk = sb->s_fs_info;
+ 	int block = SQUASHFS_XATTR_BLOCK(index);
+ 	int offset = SQUASHFS_XATTR_BLOCK_OFFSET(index);
+-	u64 start_block = le64_to_cpu(msblk->xattr_id_table[block]);
++	u64 start_block;
+ 	struct squashfs_xattr_id id;
+ 	int err;
+ 
++	if (index >= msblk->xattr_ids)
++		return -EINVAL;
++
++	start_block = le64_to_cpu(msblk->xattr_id_table[block]);
++
+ 	err = squashfs_read_metadata(sb, &id, &start_block, &offset,
+ 							sizeof(id));
+ 	if (err < 0)
+@@ -50,13 +55,17 @@ int squashfs_xattr_lookup(struct super_block *sb, unsigned int index,
+ /*
+  * Read uncompressed xattr id lookup table indexes from disk into memory
+  */
+-__le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 start,
++__le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start,
+ 		u64 *xattr_table_start, int *xattr_ids)
+ {
+-	unsigned int len;
++	struct squashfs_sb_info *msblk = sb->s_fs_info;
++	unsigned int len, indexes;
+ 	struct squashfs_xattr_id_table *id_table;
++	__le64 *table;
++	u64 start, end;
++	int n;
+ 
+-	id_table = squashfs_read_table(sb, start, sizeof(*id_table));
++	id_table = squashfs_read_table(sb, table_start, sizeof(*id_table));
+ 	if (IS_ERR(id_table))
+ 		return (__le64 *) id_table;
+ 
+@@ -70,13 +79,52 @@ __le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 start,
+ 	if (*xattr_ids == 0)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	/* xattr_table should be less than start */
+-	if (*xattr_table_start >= start)
++	len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
++	indexes = SQUASHFS_XATTR_BLOCKS(*xattr_ids);
++
++	/*
++	 * The computed size of the index table (len bytes) should exactly
++	 * match the table start and end points
++	 */
++	start = table_start + sizeof(*id_table);
++	end = msblk->bytes_used;
++
++	if (len != (end - start))
+ 		return ERR_PTR(-EINVAL);
+ 
+-	len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
++	table = squashfs_read_table(sb, start, len);
++	if (IS_ERR(table))
++		return table;
++
++	/* table[0], table[1], ... table[indexes - 1] store the locations
++	 * of the compressed xattr id blocks.  Each entry should be less than
++	 * the next (i.e. table[0] < table[1]), and the difference between them
++	 * should be SQUASHFS_METADATA_SIZE or less.  table[indexes - 1]
++	 * should be less than table_start, and again the difference
++	 * shouls be SQUASHFS_METADATA_SIZE or less.
++	 *
++	 * Finally xattr_table_start should be less than table[0].
++	 */
++	for (n = 0; n < (indexes - 1); n++) {
++		start = le64_to_cpu(table[n]);
++		end = le64_to_cpu(table[n + 1]);
++
++		if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
++			kfree(table);
++			return ERR_PTR(-EINVAL);
++		}
++	}
++
++	start = le64_to_cpu(table[indexes - 1]);
++	if (start >= table_start || (table_start - start) > SQUASHFS_METADATA_SIZE) {
++		kfree(table);
++		return ERR_PTR(-EINVAL);
++	}
+ 
+-	TRACE("In read_xattr_index_table, length %d\n", len);
++	if (*xattr_table_start >= le64_to_cpu(table[0])) {
++		kfree(table);
++		return ERR_PTR(-EINVAL);
++	}
+ 
+-	return squashfs_read_table(sb, start + sizeof(*id_table), len);
++	return table;
+ }
+diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
+index 9548d075e06da..b998e4b736912 100644
+--- a/include/linux/sunrpc/xdr.h
++++ b/include/linux/sunrpc/xdr.h
+@@ -25,8 +25,7 @@ struct rpc_rqst;
+ #define XDR_QUADLEN(l)		(((l) + 3) >> 2)
+ 
+ /*
+- * Generic opaque `network object.' At the kernel level, this type
+- * is used only by lockd.
++ * Generic opaque `network object.'
+  */
+ #define XDR_MAX_NETOBJ		1024
+ struct xdr_netobj {
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 618cb1b451ade..8c017f8c0c6d6 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -6822,7 +6822,7 @@ static int is_branch32_taken(struct bpf_reg_state *reg, u32 val, u8 opcode)
+ 	case BPF_JSGT:
+ 		if (reg->s32_min_value > sval)
+ 			return 1;
+-		else if (reg->s32_max_value < sval)
++		else if (reg->s32_max_value <= sval)
+ 			return 0;
+ 		break;
+ 	case BPF_JLT:
+@@ -6895,7 +6895,7 @@ static int is_branch64_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
+ 	case BPF_JSGT:
+ 		if (reg->smin_value > sval)
+ 			return 1;
+-		else if (reg->smax_value < sval)
++		else if (reg->smax_value <= sval)
+ 			return 0;
+ 		break;
+ 	case BPF_JLT:
+@@ -8465,7 +8465,11 @@ static bool range_within(struct bpf_reg_state *old,
+ 	return old->umin_value <= cur->umin_value &&
+ 	       old->umax_value >= cur->umax_value &&
+ 	       old->smin_value <= cur->smin_value &&
+-	       old->smax_value >= cur->smax_value;
++	       old->smax_value >= cur->smax_value &&
++	       old->u32_min_value <= cur->u32_min_value &&
++	       old->u32_max_value >= cur->u32_max_value &&
++	       old->s32_min_value <= cur->s32_min_value &&
++	       old->s32_max_value >= cur->s32_max_value;
+ }
+ 
+ /* Maximum number of register states that can exist at once */
+@@ -10862,30 +10866,28 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
+ 		    insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
+ 		    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
+ 			bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
+-			struct bpf_insn mask_and_div[] = {
+-				BPF_MOV32_REG(insn->src_reg, insn->src_reg),
++			bool isdiv = BPF_OP(insn->code) == BPF_DIV;
++			struct bpf_insn *patchlet;
++			struct bpf_insn chk_and_div[] = {
+ 				/* Rx div 0 -> 0 */
+-				BPF_JMP_IMM(BPF_JNE, insn->src_reg, 0, 2),
++				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
++					     BPF_JNE | BPF_K, insn->src_reg,
++					     0, 2, 0),
+ 				BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
+ 				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+ 				*insn,
+ 			};
+-			struct bpf_insn mask_and_mod[] = {
+-				BPF_MOV32_REG(insn->src_reg, insn->src_reg),
++			struct bpf_insn chk_and_mod[] = {
+ 				/* Rx mod 0 -> Rx */
+-				BPF_JMP_IMM(BPF_JEQ, insn->src_reg, 0, 1),
++				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
++					     BPF_JEQ | BPF_K, insn->src_reg,
++					     0, 1, 0),
+ 				*insn,
+ 			};
+-			struct bpf_insn *patchlet;
+ 
+-			if (insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
+-			    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
+-				patchlet = mask_and_div + (is64 ? 1 : 0);
+-				cnt = ARRAY_SIZE(mask_and_div) - (is64 ? 1 : 0);
+-			} else {
+-				patchlet = mask_and_mod + (is64 ? 1 : 0);
+-				cnt = ARRAY_SIZE(mask_and_mod) - (is64 ? 1 : 0);
+-			}
++			patchlet = isdiv ? chk_and_div : chk_and_mod;
++			cnt = isdiv ? ARRAY_SIZE(chk_and_div) :
++				      ARRAY_SIZE(chk_and_mod);
+ 
+ 			new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
+ 			if (!new_prog)
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 8fc23d53f5500..a604e69ecfa57 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -6320,6 +6320,8 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
+ 	if (err)
+ 		return err;
+ 
++	page_counter_set_high(&memcg->memory, high);
++
+ 	for (;;) {
+ 		unsigned long nr_pages = page_counter_read(&memcg->memory);
+ 		unsigned long reclaimed;
+@@ -6343,10 +6345,7 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
+ 			break;
+ 	}
+ 
+-	page_counter_set_high(&memcg->memory, high);
+-
+ 	memcg_wb_domain_size_changed(memcg);
+-
+ 	return nbytes;
+ }
+ 
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index c12dbc51ef5fe..ef9b4ac03e7b7 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2902,7 +2902,7 @@ static int count_ah_combs(const struct xfrm_tmpl *t)
+ 			break;
+ 		if (!aalg->pfkey_supported)
+ 			continue;
+-		if (aalg_tmpl_set(t, aalg) && aalg->available)
++		if (aalg_tmpl_set(t, aalg))
+ 			sz += sizeof(struct sadb_comb);
+ 	}
+ 	return sz + sizeof(struct sadb_prop);
+@@ -2920,7 +2920,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
+ 		if (!ealg->pfkey_supported)
+ 			continue;
+ 
+-		if (!(ealg_tmpl_set(t, ealg) && ealg->available))
++		if (!(ealg_tmpl_set(t, ealg)))
+ 			continue;
+ 
+ 		for (k = 1; ; k++) {
+@@ -2931,7 +2931,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
+ 			if (!aalg->pfkey_supported)
+ 				continue;
+ 
+-			if (aalg_tmpl_set(t, aalg) && aalg->available)
++			if (aalg_tmpl_set(t, aalg))
+ 				sz += sizeof(struct sadb_comb);
+ 		}
+ 	}
+diff --git a/net/mac80211/spectmgmt.c b/net/mac80211/spectmgmt.c
+index ae1cb2c687224..76747bfdaddd0 100644
+--- a/net/mac80211/spectmgmt.c
++++ b/net/mac80211/spectmgmt.c
+@@ -133,16 +133,20 @@ int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata,
+ 	}
+ 
+ 	if (wide_bw_chansw_ie) {
++		u8 new_seg1 = wide_bw_chansw_ie->new_center_freq_seg1;
+ 		struct ieee80211_vht_operation vht_oper = {
+ 			.chan_width =
+ 				wide_bw_chansw_ie->new_channel_width,
+ 			.center_freq_seg0_idx =
+ 				wide_bw_chansw_ie->new_center_freq_seg0,
+-			.center_freq_seg1_idx =
+-				wide_bw_chansw_ie->new_center_freq_seg1,
++			.center_freq_seg1_idx = new_seg1,
+ 			/* .basic_mcs_set doesn't matter */
+ 		};
+-		struct ieee80211_ht_operation ht_oper = {};
++		struct ieee80211_ht_operation ht_oper = {
++			.operation_mode =
++				cpu_to_le16(new_seg1 <<
++					    IEEE80211_HT_OP_MODE_CCFS2_SHIFT),
++		};
+ 
+ 		/* default, for the case of IEEE80211_VHT_CHANWIDTH_USE_HT,
+ 		 * to the previously parsed chandef
+diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
+index 4ecc2a9595674..5f42aa5fc6128 100644
+--- a/net/sunrpc/auth_gss/auth_gss.c
++++ b/net/sunrpc/auth_gss/auth_gss.c
+@@ -29,6 +29,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/hashtable.h>
+ 
++#include "auth_gss_internal.h"
+ #include "../netns.h"
+ 
+ #include <trace/events/rpcgss.h>
+@@ -125,35 +126,6 @@ gss_cred_set_ctx(struct rpc_cred *cred, struct gss_cl_ctx *ctx)
+ 	clear_bit(RPCAUTH_CRED_NEW, &cred->cr_flags);
+ }
+ 
+-static const void *
+-simple_get_bytes(const void *p, const void *end, void *res, size_t len)
+-{
+-	const void *q = (const void *)((const char *)p + len);
+-	if (unlikely(q > end || q < p))
+-		return ERR_PTR(-EFAULT);
+-	memcpy(res, p, len);
+-	return q;
+-}
+-
+-static inline const void *
+-simple_get_netobj(const void *p, const void *end, struct xdr_netobj *dest)
+-{
+-	const void *q;
+-	unsigned int len;
+-
+-	p = simple_get_bytes(p, end, &len, sizeof(len));
+-	if (IS_ERR(p))
+-		return p;
+-	q = (const void *)((const char *)p + len);
+-	if (unlikely(q > end || q < p))
+-		return ERR_PTR(-EFAULT);
+-	dest->data = kmemdup(p, len, GFP_NOFS);
+-	if (unlikely(dest->data == NULL))
+-		return ERR_PTR(-ENOMEM);
+-	dest->len = len;
+-	return q;
+-}
+-
+ static struct gss_cl_ctx *
+ gss_cred_get_ctx(struct rpc_cred *cred)
+ {
+diff --git a/net/sunrpc/auth_gss/auth_gss_internal.h b/net/sunrpc/auth_gss/auth_gss_internal.h
+new file mode 100644
+index 0000000000000..f6d9631bd9d00
+--- /dev/null
++++ b/net/sunrpc/auth_gss/auth_gss_internal.h
+@@ -0,0 +1,45 @@
++// SPDX-License-Identifier: BSD-3-Clause
++/*
++ * linux/net/sunrpc/auth_gss/auth_gss_internal.h
++ *
++ * Internal definitions for RPCSEC_GSS client authentication
++ *
++ * Copyright (c) 2000 The Regents of the University of Michigan.
++ * All rights reserved.
++ *
++ */
++#include <linux/err.h>
++#include <linux/string.h>
++#include <linux/sunrpc/xdr.h>
++
++static inline const void *
++simple_get_bytes(const void *p, const void *end, void *res, size_t len)
++{
++	const void *q = (const void *)((const char *)p + len);
++	if (unlikely(q > end || q < p))
++		return ERR_PTR(-EFAULT);
++	memcpy(res, p, len);
++	return q;
++}
++
++static inline const void *
++simple_get_netobj(const void *p, const void *end, struct xdr_netobj *dest)
++{
++	const void *q;
++	unsigned int len;
++
++	p = simple_get_bytes(p, end, &len, sizeof(len));
++	if (IS_ERR(p))
++		return p;
++	q = (const void *)((const char *)p + len);
++	if (unlikely(q > end || q < p))
++		return ERR_PTR(-EFAULT);
++	if (len) {
++		dest->data = kmemdup(p, len, GFP_NOFS);
++		if (unlikely(dest->data == NULL))
++			return ERR_PTR(-ENOMEM);
++	} else
++		dest->data = NULL;
++	dest->len = len;
++	return q;
++}
+diff --git a/net/sunrpc/auth_gss/gss_krb5_mech.c b/net/sunrpc/auth_gss/gss_krb5_mech.c
+index ae9acf3a73898..1c092b05c2bba 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_mech.c
++++ b/net/sunrpc/auth_gss/gss_krb5_mech.c
+@@ -21,6 +21,8 @@
+ #include <linux/sunrpc/xdr.h>
+ #include <linux/sunrpc/gss_krb5_enctypes.h>
+ 
++#include "auth_gss_internal.h"
++
+ #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
+ # define RPCDBG_FACILITY	RPCDBG_AUTH
+ #endif
+@@ -143,35 +145,6 @@ get_gss_krb5_enctype(int etype)
+ 	return NULL;
+ }
+ 
+-static const void *
+-simple_get_bytes(const void *p, const void *end, void *res, int len)
+-{
+-	const void *q = (const void *)((const char *)p + len);
+-	if (unlikely(q > end || q < p))
+-		return ERR_PTR(-EFAULT);
+-	memcpy(res, p, len);
+-	return q;
+-}
+-
+-static const void *
+-simple_get_netobj(const void *p, const void *end, struct xdr_netobj *res)
+-{
+-	const void *q;
+-	unsigned int len;
+-
+-	p = simple_get_bytes(p, end, &len, sizeof(len));
+-	if (IS_ERR(p))
+-		return p;
+-	q = (const void *)((const char *)p + len);
+-	if (unlikely(q > end || q < p))
+-		return ERR_PTR(-EFAULT);
+-	res->data = kmemdup(p, len, GFP_NOFS);
+-	if (unlikely(res->data == NULL))
+-		return ERR_PTR(-ENOMEM);
+-	res->len = len;
+-	return q;
+-}
+-
+ static inline const void *
+ get_key(const void *p, const void *end,
+ 	struct krb5_ctx *ctx, struct crypto_sync_skcipher **res)
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 1c5114dedda92..fe49e9a97f0ec 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -306,6 +306,10 @@ static const struct config_entry config_table[] = {
+ 		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ 		.device = 0xa0c8,
+ 	},
++	{
++		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++		.device = 0x43c8,
++	},
+ #endif
+ 
+ /* Elkhart Lake */
+diff --git a/sound/soc/codecs/ak4458.c b/sound/soc/codecs/ak4458.c
+index 1010c9ee2e836..472caad17012e 100644
+--- a/sound/soc/codecs/ak4458.c
++++ b/sound/soc/codecs/ak4458.c
+@@ -595,18 +595,10 @@ static struct snd_soc_dai_driver ak4497_dai = {
+ 	.ops = &ak4458_dai_ops,
+ };
+ 
+-static void ak4458_power_off(struct ak4458_priv *ak4458)
++static void ak4458_reset(struct ak4458_priv *ak4458, bool active)
+ {
+ 	if (ak4458->reset_gpiod) {
+-		gpiod_set_value_cansleep(ak4458->reset_gpiod, 0);
+-		usleep_range(1000, 2000);
+-	}
+-}
+-
+-static void ak4458_power_on(struct ak4458_priv *ak4458)
+-{
+-	if (ak4458->reset_gpiod) {
+-		gpiod_set_value_cansleep(ak4458->reset_gpiod, 1);
++		gpiod_set_value_cansleep(ak4458->reset_gpiod, active);
+ 		usleep_range(1000, 2000);
+ 	}
+ }
+@@ -620,7 +612,7 @@ static int ak4458_init(struct snd_soc_component *component)
+ 	if (ak4458->mute_gpiod)
+ 		gpiod_set_value_cansleep(ak4458->mute_gpiod, 1);
+ 
+-	ak4458_power_on(ak4458);
++	ak4458_reset(ak4458, false);
+ 
+ 	ret = snd_soc_component_update_bits(component, AK4458_00_CONTROL1,
+ 			    0x80, 0x80);   /* ACKS bit = 1; 10000000 */
+@@ -650,7 +642,7 @@ static void ak4458_remove(struct snd_soc_component *component)
+ {
+ 	struct ak4458_priv *ak4458 = snd_soc_component_get_drvdata(component);
+ 
+-	ak4458_power_off(ak4458);
++	ak4458_reset(ak4458, true);
+ }
+ 
+ #ifdef CONFIG_PM
+@@ -660,7 +652,7 @@ static int __maybe_unused ak4458_runtime_suspend(struct device *dev)
+ 
+ 	regcache_cache_only(ak4458->regmap, true);
+ 
+-	ak4458_power_off(ak4458);
++	ak4458_reset(ak4458, true);
+ 
+ 	if (ak4458->mute_gpiod)
+ 		gpiod_set_value_cansleep(ak4458->mute_gpiod, 0);
+@@ -685,8 +677,8 @@ static int __maybe_unused ak4458_runtime_resume(struct device *dev)
+ 	if (ak4458->mute_gpiod)
+ 		gpiod_set_value_cansleep(ak4458->mute_gpiod, 1);
+ 
+-	ak4458_power_off(ak4458);
+-	ak4458_power_on(ak4458);
++	ak4458_reset(ak4458, true);
++	ak4458_reset(ak4458, false);
+ 
+ 	regcache_cache_only(ak4458->regmap, false);
+ 	regcache_mark_dirty(ak4458->regmap);
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index dec8716aa8ef5..985b2dcecf138 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -2031,11 +2031,14 @@ static struct wm_coeff_ctl *wm_adsp_get_ctl(struct wm_adsp *dsp,
+ 					     unsigned int alg)
+ {
+ 	struct wm_coeff_ctl *pos, *rslt = NULL;
++	const char *fw_txt = wm_adsp_fw_text[dsp->fw];
+ 
+ 	list_for_each_entry(pos, &dsp->ctl_list, list) {
+ 		if (!pos->subname)
+ 			continue;
+ 		if (strncmp(pos->subname, name, pos->subname_len) == 0 &&
++		    strncmp(pos->fw_name, fw_txt,
++			    SNDRV_CTL_ELEM_ID_NAME_MAXLEN) == 0 &&
+ 				pos->alg_region.alg == alg &&
+ 				pos->alg_region.type == type) {
+ 			rslt = pos;
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index b29946eb43551..a8d43c87cb5a2 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -57,6 +57,16 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 		.driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
+ 					SOF_RT715_DAI_ID_FIX),
+ 	},
++	{
++		.callback = sof_sdw_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A5E")
++		},
++		.driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
++					SOF_RT715_DAI_ID_FIX |
++					SOF_SDW_FOUR_SPK),
++	},
+ 	{
+ 		.callback = sof_sdw_quirk_cb,
+ 		.matches = {
+diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
+index d699e61eca3d0..0955cbb4e9187 100644
+--- a/sound/soc/intel/skylake/skl-topology.c
++++ b/sound/soc/intel/skylake/skl-topology.c
+@@ -3632,7 +3632,7 @@ static void skl_tplg_complete(struct snd_soc_component *component)
+ 		sprintf(chan_text, "c%d", mach->mach_params.dmic_num);
+ 
+ 		for (i = 0; i < se->items; i++) {
+-			struct snd_ctl_elem_value val;
++			struct snd_ctl_elem_value val = {};
+ 
+ 			if (strstr(texts[i], chan_text)) {
+ 				val.value.enumerated.item[0] = i;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-17 11:14 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-02-17 11:14 UTC (permalink / raw
  To: gentoo-commits

commit:     be34eda606dc6582a1ebc082ed07a0e81c11f4f6
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 17 11:13:34 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Feb 17 11:13:56 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=be34eda6

Linux patch 5.10.17

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1016_linux-5.10.17.patch | 3937 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3941 insertions(+)

diff --git a/0000_README b/0000_README
index bbc23e0..9372f82 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  1015_linux-5.10.16.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.16
 
+Patch:  1016_linux-5.10.17.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.17
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-5.10.17.patch b/1016_linux-5.10.17.patch
new file mode 100644
index 0000000..df4cc46
--- /dev/null
+++ b/1016_linux-5.10.17.patch
@@ -0,0 +1,3937 @@
+diff --git a/Makefile b/Makefile
+index 9a1f26680d836..b740f9c933cb7 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/boot/dts/lpc32xx.dtsi b/arch/arm/boot/dts/lpc32xx.dtsi
+index 7b7ec7b1217b8..824393e1bcfb7 100644
+--- a/arch/arm/boot/dts/lpc32xx.dtsi
++++ b/arch/arm/boot/dts/lpc32xx.dtsi
+@@ -329,9 +329,6 @@
+ 
+ 					clocks = <&xtal_32k>, <&xtal>;
+ 					clock-names = "xtal_32k", "xtal";
+-
+-					assigned-clocks = <&clk LPC32XX_CLK_HCLK_PLL>;
+-					assigned-clock-rates = <208000000>;
+ 				};
+ 			};
+ 
+diff --git a/arch/arm/include/asm/kexec-internal.h b/arch/arm/include/asm/kexec-internal.h
+new file mode 100644
+index 0000000000000..ecc2322db7aa1
+--- /dev/null
++++ b/arch/arm/include/asm/kexec-internal.h
+@@ -0,0 +1,12 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ARM_KEXEC_INTERNAL_H
++#define _ARM_KEXEC_INTERNAL_H
++
++struct kexec_relocate_data {
++	unsigned long kexec_start_address;
++	unsigned long kexec_indirection_page;
++	unsigned long kexec_mach_type;
++	unsigned long kexec_r2;
++};
++
++#endif
+diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
+index a1570c8bab25a..be8050b0c3dfb 100644
+--- a/arch/arm/kernel/asm-offsets.c
++++ b/arch/arm/kernel/asm-offsets.c
+@@ -12,6 +12,7 @@
+ #include <linux/mm.h>
+ #include <linux/dma-mapping.h>
+ #include <asm/cacheflush.h>
++#include <asm/kexec-internal.h>
+ #include <asm/glue-df.h>
+ #include <asm/glue-pf.h>
+ #include <asm/mach/arch.h>
+@@ -170,5 +171,9 @@ int main(void)
+   DEFINE(MPU_RGN_PRBAR,	offsetof(struct mpu_rgn, prbar));
+   DEFINE(MPU_RGN_PRLAR,	offsetof(struct mpu_rgn, prlar));
+ #endif
++  DEFINE(KEXEC_START_ADDR,	offsetof(struct kexec_relocate_data, kexec_start_address));
++  DEFINE(KEXEC_INDIR_PAGE,	offsetof(struct kexec_relocate_data, kexec_indirection_page));
++  DEFINE(KEXEC_MACH_TYPE,	offsetof(struct kexec_relocate_data, kexec_mach_type));
++  DEFINE(KEXEC_R2,		offsetof(struct kexec_relocate_data, kexec_r2));
+   return 0; 
+ }
+diff --git a/arch/arm/kernel/machine_kexec.c b/arch/arm/kernel/machine_kexec.c
+index 5d84ad333f050..2b09dad7935eb 100644
+--- a/arch/arm/kernel/machine_kexec.c
++++ b/arch/arm/kernel/machine_kexec.c
+@@ -13,6 +13,7 @@
+ #include <linux/of_fdt.h>
+ #include <asm/mmu_context.h>
+ #include <asm/cacheflush.h>
++#include <asm/kexec-internal.h>
+ #include <asm/fncpy.h>
+ #include <asm/mach-types.h>
+ #include <asm/smp_plat.h>
+@@ -22,11 +23,6 @@
+ extern void relocate_new_kernel(void);
+ extern const unsigned int relocate_new_kernel_size;
+ 
+-extern unsigned long kexec_start_address;
+-extern unsigned long kexec_indirection_page;
+-extern unsigned long kexec_mach_type;
+-extern unsigned long kexec_boot_atags;
+-
+ static atomic_t waiting_for_crash_ipi;
+ 
+ /*
+@@ -159,6 +155,7 @@ void (*kexec_reinit)(void);
+ void machine_kexec(struct kimage *image)
+ {
+ 	unsigned long page_list, reboot_entry_phys;
++	struct kexec_relocate_data *data;
+ 	void (*reboot_entry)(void);
+ 	void *reboot_code_buffer;
+ 
+@@ -174,18 +171,17 @@ void machine_kexec(struct kimage *image)
+ 
+ 	reboot_code_buffer = page_address(image->control_code_page);
+ 
+-	/* Prepare parameters for reboot_code_buffer*/
+-	set_kernel_text_rw();
+-	kexec_start_address = image->start;
+-	kexec_indirection_page = page_list;
+-	kexec_mach_type = machine_arch_type;
+-	kexec_boot_atags = image->arch.kernel_r2;
+-
+ 	/* copy our kernel relocation code to the control code page */
+ 	reboot_entry = fncpy(reboot_code_buffer,
+ 			     &relocate_new_kernel,
+ 			     relocate_new_kernel_size);
+ 
++	data = reboot_code_buffer + relocate_new_kernel_size;
++	data->kexec_start_address = image->start;
++	data->kexec_indirection_page = page_list;
++	data->kexec_mach_type = machine_arch_type;
++	data->kexec_r2 = image->arch.kernel_r2;
++
+ 	/* get the identity mapping physical address for the reboot code */
+ 	reboot_entry_phys = virt_to_idmap(reboot_entry);
+ 
+diff --git a/arch/arm/kernel/relocate_kernel.S b/arch/arm/kernel/relocate_kernel.S
+index 72a08786e16eb..218d524360fcd 100644
+--- a/arch/arm/kernel/relocate_kernel.S
++++ b/arch/arm/kernel/relocate_kernel.S
+@@ -5,14 +5,16 @@
+ 
+ #include <linux/linkage.h>
+ #include <asm/assembler.h>
++#include <asm/asm-offsets.h>
+ #include <asm/kexec.h>
+ 
+ 	.align	3	/* not needed for this code, but keeps fncpy() happy */
+ 
+ ENTRY(relocate_new_kernel)
+ 
+-	ldr	r0,kexec_indirection_page
+-	ldr	r1,kexec_start_address
++	adr	r7, relocate_new_kernel_end
++	ldr	r0, [r7, #KEXEC_INDIR_PAGE]
++	ldr	r1, [r7, #KEXEC_START_ADDR]
+ 
+ 	/*
+ 	 * If there is no indirection page (we are doing crashdumps)
+@@ -57,34 +59,16 @@ ENTRY(relocate_new_kernel)
+ 
+ 2:
+ 	/* Jump to relocated kernel */
+-	mov lr,r1
+-	mov r0,#0
+-	ldr r1,kexec_mach_type
+-	ldr r2,kexec_boot_atags
+- ARM(	ret lr	)
+- THUMB(	bx lr		)
+-
+-	.align
+-
+-	.globl kexec_start_address
+-kexec_start_address:
+-	.long	0x0
+-
+-	.globl kexec_indirection_page
+-kexec_indirection_page:
+-	.long	0x0
+-
+-	.globl kexec_mach_type
+-kexec_mach_type:
+-	.long	0x0
+-
+-	/* phy addr of the atags for the new kernel */
+-	.globl kexec_boot_atags
+-kexec_boot_atags:
+-	.long	0x0
++	mov	lr, r1
++	mov	r0, #0
++	ldr	r1, [r7, #KEXEC_MACH_TYPE]
++	ldr	r2, [r7, #KEXEC_R2]
++ ARM(	ret	lr	)
++ THUMB(	bx	lr	)
+ 
+ ENDPROC(relocate_new_kernel)
+ 
++	.align	3
+ relocate_new_kernel_end:
+ 
+ 	.globl relocate_new_kernel_size
+diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
+index 585edbfccf6df..2f81d3af5f9af 100644
+--- a/arch/arm/kernel/signal.c
++++ b/arch/arm/kernel/signal.c
+@@ -693,18 +693,20 @@ struct page *get_signal_page(void)
+ 
+ 	addr = page_address(page);
+ 
++	/* Poison the entire page */
++	memset32(addr, __opcode_to_mem_arm(0xe7fddef1),
++		 PAGE_SIZE / sizeof(u32));
++
+ 	/* Give the signal return code some randomness */
+ 	offset = 0x200 + (get_random_int() & 0x7fc);
+ 	signal_return_offset = offset;
+ 
+-	/*
+-	 * Copy signal return handlers into the vector page, and
+-	 * set sigreturn to be a pointer to these.
+-	 */
++	/* Copy signal return handlers into the page */
+ 	memcpy(addr + offset, sigreturn_codes, sizeof(sigreturn_codes));
+ 
+-	ptr = (unsigned long)addr + offset;
+-	flush_icache_range(ptr, ptr + sizeof(sigreturn_codes));
++	/* Flush out all instructions in this page */
++	ptr = (unsigned long)addr;
++	flush_icache_range(ptr, ptr + PAGE_SIZE);
+ 
+ 	return page;
+ }
+diff --git a/arch/arm/mach-omap2/cpuidle44xx.c b/arch/arm/mach-omap2/cpuidle44xx.c
+index c8d317fafe2ea..de37027ad7587 100644
+--- a/arch/arm/mach-omap2/cpuidle44xx.c
++++ b/arch/arm/mach-omap2/cpuidle44xx.c
+@@ -151,10 +151,10 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev,
+ 				 (cx->mpu_logic_state == PWRDM_POWER_OFF);
+ 
+ 	/* Enter broadcast mode for periodic timers */
+-	tick_broadcast_enable();
++	RCU_NONIDLE(tick_broadcast_enable());
+ 
+ 	/* Enter broadcast mode for one-shot timers */
+-	tick_broadcast_enter();
++	RCU_NONIDLE(tick_broadcast_enter());
+ 
+ 	/*
+ 	 * Call idle CPU PM enter notifier chain so that
+@@ -166,7 +166,7 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev,
+ 
+ 	if (dev->cpu == 0) {
+ 		pwrdm_set_logic_retst(mpu_pd, cx->mpu_logic_state);
+-		omap_set_pwrdm_state(mpu_pd, cx->mpu_state);
++		RCU_NONIDLE(omap_set_pwrdm_state(mpu_pd, cx->mpu_state));
+ 
+ 		/*
+ 		 * Call idle CPU cluster PM enter notifier chain
+@@ -178,7 +178,7 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev,
+ 				index = 0;
+ 				cx = state_ptr + index;
+ 				pwrdm_set_logic_retst(mpu_pd, cx->mpu_logic_state);
+-				omap_set_pwrdm_state(mpu_pd, cx->mpu_state);
++				RCU_NONIDLE(omap_set_pwrdm_state(mpu_pd, cx->mpu_state));
+ 				mpuss_can_lose_context = 0;
+ 			}
+ 		}
+@@ -194,9 +194,9 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev,
+ 		    mpuss_can_lose_context)
+ 			gic_dist_disable();
+ 
+-		clkdm_deny_idle(cpu_clkdm[1]);
+-		omap_set_pwrdm_state(cpu_pd[1], PWRDM_POWER_ON);
+-		clkdm_allow_idle(cpu_clkdm[1]);
++		RCU_NONIDLE(clkdm_deny_idle(cpu_clkdm[1]));
++		RCU_NONIDLE(omap_set_pwrdm_state(cpu_pd[1], PWRDM_POWER_ON));
++		RCU_NONIDLE(clkdm_allow_idle(cpu_clkdm[1]));
+ 
+ 		if (IS_PM44XX_ERRATUM(PM_OMAP4_ROM_SMP_BOOT_ERRATUM_GICD) &&
+ 		    mpuss_can_lose_context) {
+@@ -222,7 +222,7 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev,
+ 	cpu_pm_exit();
+ 
+ cpu_pm_out:
+-	tick_broadcast_exit();
++	RCU_NONIDLE(tick_broadcast_exit());
+ 
+ fail:
+ 	cpuidle_coupled_parallel_barrier(dev, &abort_barrier);
+diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
+index 5a957a9a09843..8ad576ecd0f1d 100644
+--- a/arch/arm/xen/enlighten.c
++++ b/arch/arm/xen/enlighten.c
+@@ -370,8 +370,6 @@ static int __init xen_guest_init(void)
+ 		return -ENOMEM;
+ 	}
+ 	gnttab_init();
+-	if (!xen_initial_domain())
+-		xenbus_probe();
+ 
+ 	/*
+ 	 * Making sure board specific code will not set up ops for
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+index 7cc236575ee20..c0b93813ea9ac 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+@@ -415,7 +415,9 @@
+ &gcc {
+ 	protected-clocks = <GCC_QSPI_CORE_CLK>,
+ 			   <GCC_QSPI_CORE_CLK_SRC>,
+-			   <GCC_QSPI_CNOC_PERIPH_AHB_CLK>;
++			   <GCC_QSPI_CNOC_PERIPH_AHB_CLK>,
++			   <GCC_LPASS_Q6_AXI_CLK>,
++			   <GCC_LPASS_SWAY_CLK>;
+ };
+ 
+ &gpu {
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index d70aae77a6e84..888dc23a530e6 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -245,7 +245,9 @@
+ &gcc {
+ 	protected-clocks = <GCC_QSPI_CORE_CLK>,
+ 			   <GCC_QSPI_CORE_CLK_SRC>,
+-			   <GCC_QSPI_CNOC_PERIPH_AHB_CLK>;
++			   <GCC_QSPI_CNOC_PERIPH_AHB_CLK>,
++			   <GCC_LPASS_Q6_AXI_CLK>,
++			   <GCC_LPASS_SWAY_CLK>;
+ };
+ 
+ &gpu {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts b/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts
+index 2ee07d15a6e37..1eecad724f04c 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts
+@@ -114,6 +114,10 @@
+ 	cpu-supply = <&vdd_arm>;
+ };
+ 
++&display_subsystem {
++	status = "disabled";
++};
++
+ &gmac2io {
+ 	assigned-clocks = <&cru SCLK_MAC2IO>, <&cru SCLK_MAC2IO_EXT>;
+ 	assigned-clock-parents = <&gmac_clk>, <&gmac_clk>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index 7a9a7aca86c6a..7e69603fb41c0 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -234,6 +234,7 @@
+ 		reg = <0x0 0xf8000000 0x0 0x2000000>,
+ 		      <0x0 0xfd000000 0x0 0x1000000>;
+ 		reg-names = "axi-base", "apb-base";
++		device_type = "pci";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+ 		#interrupt-cells = <1>;
+@@ -252,7 +253,6 @@
+ 				<0 0 0 2 &pcie0_intc 1>,
+ 				<0 0 0 3 &pcie0_intc 2>,
+ 				<0 0 0 4 &pcie0_intc 3>;
+-		linux,pci-domain = <0>;
+ 		max-link-speed = <1>;
+ 		msi-map = <0x0 &its 0x0 0x1000>;
+ 		phys = <&pcie_phy 0>, <&pcie_phy 1>,
+@@ -1278,7 +1278,6 @@
+ 		compatible = "rockchip,rk3399-vdec";
+ 		reg = <0x0 0xff660000 0x0 0x400>;
+ 		interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH 0>;
+-		interrupt-names = "vdpu";
+ 		clocks = <&cru ACLK_VDU>, <&cru HCLK_VDU>,
+ 			 <&cru SCLK_VDU_CA>, <&cru SCLK_VDU_CORE>;
+ 		clock-names = "axi", "ahb", "cabac", "core";
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 0a52e076153bb..65a522fbd8743 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1696,16 +1696,12 @@ static void bti_enable(const struct arm64_cpu_capabilities *__unused)
+ #ifdef CONFIG_ARM64_MTE
+ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
+ {
+-	static bool cleared_zero_page = false;
+-
+ 	/*
+ 	 * Clear the tags in the zero page. This needs to be done via the
+ 	 * linear map which has the Tagged attribute.
+ 	 */
+-	if (!cleared_zero_page) {
+-		cleared_zero_page = true;
++	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
+ 		mte_clear_page_tags(lm_alias(empty_zero_page));
+-	}
+ }
+ #endif /* CONFIG_ARM64_MTE */
+ 
+diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
+index ef15c8a2a49dc..7a66a7d9c1ffc 100644
+--- a/arch/arm64/kernel/mte.c
++++ b/arch/arm64/kernel/mte.c
+@@ -239,11 +239,12 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
+ 		 * would cause the existing tags to be cleared if the page
+ 		 * was never mapped with PROT_MTE.
+ 		 */
+-		if (!test_bit(PG_mte_tagged, &page->flags)) {
++		if (!(vma->vm_flags & VM_MTE)) {
+ 			ret = -EOPNOTSUPP;
+ 			put_page(page);
+ 			break;
+ 		}
++		WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
+ 
+ 		/* limit access to the end of the page */
+ 		offset = offset_in_page(addr);
+diff --git a/arch/h8300/kernel/asm-offsets.c b/arch/h8300/kernel/asm-offsets.c
+index 85e60509f0a83..d4b53af657c84 100644
+--- a/arch/h8300/kernel/asm-offsets.c
++++ b/arch/h8300/kernel/asm-offsets.c
+@@ -63,6 +63,9 @@ int main(void)
+ 	OFFSET(TI_FLAGS, thread_info, flags);
+ 	OFFSET(TI_CPU, thread_info, cpu);
+ 	OFFSET(TI_PRE, thread_info, preempt_count);
++#ifdef CONFIG_PREEMPTION
++	DEFINE(TI_PRE_COUNT, offsetof(struct thread_info, preempt_count));
++#endif
+ 
+ 	return 0;
+ }
+diff --git a/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts b/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
+index 24d75a146e02d..60846e88ae4b1 100644
+--- a/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
++++ b/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
+@@ -90,7 +90,6 @@
+ 	phy0: ethernet-phy@0 {
+ 		compatible = "ethernet-phy-id0007.0771";
+ 		reg = <0>;
+-		reset-gpios = <&gpio 12 GPIO_ACTIVE_LOW>;
+ 	};
+ };
+ 
+diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
+index 2d50f76efe481..64a675c5c30ac 100644
+--- a/arch/riscv/include/asm/page.h
++++ b/arch/riscv/include/asm/page.h
+@@ -135,7 +135,10 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
+ 
+ #endif /* __ASSEMBLY__ */
+ 
+-#define virt_addr_valid(vaddr)	(pfn_valid(virt_to_pfn(vaddr)))
++#define virt_addr_valid(vaddr)	({						\
++	unsigned long _addr = (unsigned long)vaddr;				\
++	(unsigned long)(_addr) >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr));	\
++})
+ 
+ #define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+ 
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 6a7efa78eba22..0a6d497221e49 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -57,6 +57,9 @@ export BITS
+ KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow
+ KBUILD_CFLAGS += $(call cc-option,-mno-avx,)
+ 
++# Intel CET isn't enabled in the kernel
++KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none)
++
+ ifeq ($(CONFIG_X86_32),y)
+         BITS := 32
+         UTS_MACHINE := i386
+@@ -127,9 +130,6 @@ else
+ 
+         KBUILD_CFLAGS += -mno-red-zone
+         KBUILD_CFLAGS += -mcmodel=kernel
+-
+-	# Intel CET isn't enabled in the kernel
+-	KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none)
+ endif
+ 
+ ifdef CONFIG_X86_X32
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index 59a1e3ce3f145..816fdbec795a4 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -1159,6 +1159,7 @@ static const struct x86_cpu_id split_lock_cpu_ids[] __initconst = {
+ 	X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE,		1),
+ 	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X,	1),
+ 	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE,		1),
++	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L,		1),
+ 	{}
+ };
+ 
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index de776b2e60467..c65642c10aaea 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -1829,6 +1829,7 @@ void arch_set_max_freq_ratio(bool turbo_disabled)
+ 	arch_max_freq_ratio = turbo_disabled ? SCHED_CAPACITY_SCALE :
+ 					arch_turbo_freq_ratio;
+ }
++EXPORT_SYMBOL_GPL(arch_set_max_freq_ratio);
+ 
+ static bool turbo_disabled(void)
+ {
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index 65e40acde71aa..4fbe190c79159 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -231,6 +231,7 @@ static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
+ 
+ static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
+ {
++	struct kvm_vcpu *vcpu = &svm->vcpu;
+ 	bool vmcb12_lma;
+ 
+ 	if ((vmcb12->save.efer & EFER_SVME) == 0)
+@@ -244,18 +245,10 @@ static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
+ 
+ 	vmcb12_lma = (vmcb12->save.efer & EFER_LME) && (vmcb12->save.cr0 & X86_CR0_PG);
+ 
+-	if (!vmcb12_lma) {
+-		if (vmcb12->save.cr4 & X86_CR4_PAE) {
+-			if (vmcb12->save.cr3 & MSR_CR3_LEGACY_PAE_RESERVED_MASK)
+-				return false;
+-		} else {
+-			if (vmcb12->save.cr3 & MSR_CR3_LEGACY_RESERVED_MASK)
+-				return false;
+-		}
+-	} else {
++	if (vmcb12_lma) {
+ 		if (!(vmcb12->save.cr4 & X86_CR4_PAE) ||
+ 		    !(vmcb12->save.cr0 & X86_CR0_PE) ||
+-		    (vmcb12->save.cr3 & MSR_CR3_LONG_MBZ_MASK))
++		    (vmcb12->save.cr3 & vcpu->arch.cr3_lm_rsvd_bits))
+ 			return false;
+ 	}
+ 	if (kvm_valid_cr4(&svm->vcpu, vmcb12->save.cr4))
+diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
+index 1d853fe4c778b..be74e22b82ea7 100644
+--- a/arch/x86/kvm/svm/svm.h
++++ b/arch/x86/kvm/svm/svm.h
+@@ -346,9 +346,6 @@ static inline bool gif_set(struct vcpu_svm *svm)
+ }
+ 
+ /* svm.c */
+-#define MSR_CR3_LEGACY_RESERVED_MASK		0xfe7U
+-#define MSR_CR3_LEGACY_PAE_RESERVED_MASK	0x7U
+-#define MSR_CR3_LONG_MBZ_MASK			0xfff0000000000000U
+ #define MSR_INVALID				0xffffffffU
+ 
+ u32 svm_msrpm_offset(u32 msr);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 18a315bbcb79e..fa5f059c2b940 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -9558,6 +9558,8 @@ static int kvm_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+ 		if (!(sregs->cr4 & X86_CR4_PAE)
+ 		    || !(sregs->efer & EFER_LMA))
+ 			return -EINVAL;
++		if (sregs->cr3 & vcpu->arch.cr3_lm_rsvd_bits)
++			return -EINVAL;
+ 	} else {
+ 		/*
+ 		 * Not in 64-bit mode: EFER.LMA is clear and the code
+diff --git a/arch/x86/pci/init.c b/arch/x86/pci/init.c
+index 00bfa1ebad6c7..0bb3b8b44e4e2 100644
+--- a/arch/x86/pci/init.c
++++ b/arch/x86/pci/init.c
+@@ -9,16 +9,23 @@
+    in the right sequence from here. */
+ static __init int pci_arch_init(void)
+ {
+-	int type;
+-
+-	x86_create_pci_msi_domain();
++	int type, pcbios = 1;
+ 
+ 	type = pci_direct_probe();
+ 
+ 	if (!(pci_probe & PCI_PROBE_NOEARLY))
+ 		pci_mmcfg_early_init();
+ 
+-	if (x86_init.pci.arch_init && !x86_init.pci.arch_init())
++	if (x86_init.pci.arch_init)
++		pcbios = x86_init.pci.arch_init();
++
++	/*
++	 * Must happen after x86_init.pci.arch_init(). Xen sets up the
++	 * x86_init.irqs.create_pci_msi_domain there.
++	 */
++	x86_create_pci_msi_domain();
++
++	if (!pcbios)
+ 		return 0;
+ 
+ 	pci_pcbios_init();
+diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
+index e1e8d4e3a2139..8efd003540cae 100644
+--- a/arch/x86/platform/efi/efi_64.c
++++ b/arch/x86/platform/efi/efi_64.c
+@@ -115,31 +115,12 @@ void efi_sync_low_kernel_mappings(void)
+ 	pud_t *pud_k, *pud_efi;
+ 	pgd_t *efi_pgd = efi_mm.pgd;
+ 
+-	/*
+-	 * We can share all PGD entries apart from the one entry that
+-	 * covers the EFI runtime mapping space.
+-	 *
+-	 * Make sure the EFI runtime region mappings are guaranteed to
+-	 * only span a single PGD entry and that the entry also maps
+-	 * other important kernel regions.
+-	 */
+-	MAYBE_BUILD_BUG_ON(pgd_index(EFI_VA_END) != pgd_index(MODULES_END));
+-	MAYBE_BUILD_BUG_ON((EFI_VA_START & PGDIR_MASK) !=
+-			(EFI_VA_END & PGDIR_MASK));
+-
+ 	pgd_efi = efi_pgd + pgd_index(PAGE_OFFSET);
+ 	pgd_k = pgd_offset_k(PAGE_OFFSET);
+ 
+ 	num_entries = pgd_index(EFI_VA_END) - pgd_index(PAGE_OFFSET);
+ 	memcpy(pgd_efi, pgd_k, sizeof(pgd_t) * num_entries);
+ 
+-	/*
+-	 * As with PGDs, we share all P4D entries apart from the one entry
+-	 * that covers the EFI runtime mapping space.
+-	 */
+-	BUILD_BUG_ON(p4d_index(EFI_VA_END) != p4d_index(MODULES_END));
+-	BUILD_BUG_ON((EFI_VA_START & P4D_MASK) != (EFI_VA_END & P4D_MASK));
+-
+ 	pgd_efi = efi_pgd + pgd_index(EFI_VA_END);
+ 	pgd_k = pgd_offset_k(EFI_VA_END);
+ 	p4d_efi = p4d_offset(pgd_efi, 0);
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 9e4eb0fc1c16e..9e81d1052091f 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -6332,13 +6332,13 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
+ 	 * limit 'something'.
+ 	 */
+ 	/* no more than 50% of tags for async I/O */
+-	bfqd->word_depths[0][0] = max(bt->sb.depth >> 1, 1U);
++	bfqd->word_depths[0][0] = max((1U << bt->sb.shift) >> 1, 1U);
+ 	/*
+ 	 * no more than 75% of tags for sync writes (25% extra tags
+ 	 * w.r.t. async I/O, to prevent async I/O from starving sync
+ 	 * writes)
+ 	 */
+-	bfqd->word_depths[0][1] = max((bt->sb.depth * 3) >> 2, 1U);
++	bfqd->word_depths[0][1] = max(((1U << bt->sb.shift) * 3) >> 2, 1U);
+ 
+ 	/*
+ 	 * In-word depths in case some bfq_queue is being weight-
+@@ -6348,9 +6348,9 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
+ 	 * shortage.
+ 	 */
+ 	/* no more than ~18% of tags for async I/O */
+-	bfqd->word_depths[1][0] = max((bt->sb.depth * 3) >> 4, 1U);
++	bfqd->word_depths[1][0] = max(((1U << bt->sb.shift) * 3) >> 4, 1U);
+ 	/* no more than ~37% of tags for sync writes (~20% extra tags) */
+-	bfqd->word_depths[1][1] = max((bt->sb.depth * 6) >> 4, 1U);
++	bfqd->word_depths[1][1] = max(((1U << bt->sb.shift) * 6) >> 4, 1U);
+ 
+ 	for (i = 0; i < 2; i++)
+ 		for (j = 0; j < 2; j++)
+diff --git a/drivers/clk/sunxi-ng/ccu_mp.c b/drivers/clk/sunxi-ng/ccu_mp.c
+index fa4ecb9155909..9d3a76604d94c 100644
+--- a/drivers/clk/sunxi-ng/ccu_mp.c
++++ b/drivers/clk/sunxi-ng/ccu_mp.c
+@@ -108,7 +108,7 @@ static unsigned long ccu_mp_round_rate(struct ccu_mux_internal *mux,
+ 	max_m = cmp->m.max ?: 1 << cmp->m.width;
+ 	max_p = cmp->p.max ?: 1 << ((1 << cmp->p.width) - 1);
+ 
+-	if (!(clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT)) {
++	if (!clk_hw_can_set_rate_parent(&cmp->common.hw)) {
+ 		ccu_mp_find_best(*parent_rate, rate, max_m, max_p, &m, &p);
+ 		rate = *parent_rate / p / m;
+ 	} else {
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index 1e4fbb002a31d..d3e5a6fceb61b 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -26,6 +26,7 @@
+ #include <linux/uaccess.h>
+ 
+ #include <acpi/processor.h>
++#include <acpi/cppc_acpi.h>
+ 
+ #include <asm/msr.h>
+ #include <asm/processor.h>
+@@ -53,6 +54,7 @@ struct acpi_cpufreq_data {
+ 	unsigned int resume;
+ 	unsigned int cpu_feature;
+ 	unsigned int acpi_perf_cpu;
++	unsigned int first_perf_state;
+ 	cpumask_var_t freqdomain_cpus;
+ 	void (*cpu_freq_write)(struct acpi_pct_register *reg, u32 val);
+ 	u32 (*cpu_freq_read)(struct acpi_pct_register *reg);
+@@ -221,10 +223,10 @@ static unsigned extract_msr(struct cpufreq_policy *policy, u32 msr)
+ 
+ 	perf = to_perf_data(data);
+ 
+-	cpufreq_for_each_entry(pos, policy->freq_table)
++	cpufreq_for_each_entry(pos, policy->freq_table + data->first_perf_state)
+ 		if (msr == perf->states[pos->driver_data].status)
+ 			return pos->frequency;
+-	return policy->freq_table[0].frequency;
++	return policy->freq_table[data->first_perf_state].frequency;
+ }
+ 
+ static unsigned extract_freq(struct cpufreq_policy *policy, u32 val)
+@@ -363,6 +365,7 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
+ 	struct cpufreq_policy *policy;
+ 	unsigned int freq;
+ 	unsigned int cached_freq;
++	unsigned int state;
+ 
+ 	pr_debug("%s (%d)\n", __func__, cpu);
+ 
+@@ -374,7 +377,11 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
+ 	if (unlikely(!data || !policy->freq_table))
+ 		return 0;
+ 
+-	cached_freq = policy->freq_table[to_perf_data(data)->state].frequency;
++	state = to_perf_data(data)->state;
++	if (state < data->first_perf_state)
++		state = data->first_perf_state;
++
++	cached_freq = policy->freq_table[state].frequency;
+ 	freq = extract_freq(policy, get_cur_val(cpumask_of(cpu), data));
+ 	if (freq != cached_freq) {
+ 		/*
+@@ -628,16 +635,54 @@ static int acpi_cpufreq_blacklist(struct cpuinfo_x86 *c)
+ }
+ #endif
+ 
++#ifdef CONFIG_ACPI_CPPC_LIB
++static u64 get_max_boost_ratio(unsigned int cpu)
++{
++	struct cppc_perf_caps perf_caps;
++	u64 highest_perf, nominal_perf;
++	int ret;
++
++	if (acpi_pstate_strict)
++		return 0;
++
++	ret = cppc_get_perf_caps(cpu, &perf_caps);
++	if (ret) {
++		pr_debug("CPU%d: Unable to get performance capabilities (%d)\n",
++			 cpu, ret);
++		return 0;
++	}
++
++	highest_perf = perf_caps.highest_perf;
++	nominal_perf = perf_caps.nominal_perf;
++
++	if (!highest_perf || !nominal_perf) {
++		pr_debug("CPU%d: highest or nominal performance missing\n", cpu);
++		return 0;
++	}
++
++	if (highest_perf < nominal_perf) {
++		pr_debug("CPU%d: nominal performance above highest\n", cpu);
++		return 0;
++	}
++
++	return div_u64(highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
++}
++#else
++static inline u64 get_max_boost_ratio(unsigned int cpu) { return 0; }
++#endif
++
+ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ {
+-	unsigned int i;
+-	unsigned int valid_states = 0;
+-	unsigned int cpu = policy->cpu;
++	struct cpufreq_frequency_table *freq_table;
++	struct acpi_processor_performance *perf;
+ 	struct acpi_cpufreq_data *data;
++	unsigned int cpu = policy->cpu;
++	struct cpuinfo_x86 *c = &cpu_data(cpu);
++	unsigned int valid_states = 0;
+ 	unsigned int result = 0;
+-	struct cpuinfo_x86 *c = &cpu_data(policy->cpu);
+-	struct acpi_processor_performance *perf;
+-	struct cpufreq_frequency_table *freq_table;
++	unsigned int state_count;
++	u64 max_boost_ratio;
++	unsigned int i;
+ #ifdef CONFIG_SMP
+ 	static int blacklisted;
+ #endif
+@@ -750,8 +795,28 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ 		goto err_unreg;
+ 	}
+ 
+-	freq_table = kcalloc(perf->state_count + 1, sizeof(*freq_table),
+-			     GFP_KERNEL);
++	state_count = perf->state_count + 1;
++
++	max_boost_ratio = get_max_boost_ratio(cpu);
++	if (max_boost_ratio) {
++		/*
++		 * Make a room for one more entry to represent the highest
++		 * available "boost" frequency.
++		 */
++		state_count++;
++		valid_states++;
++		data->first_perf_state = valid_states;
++	} else {
++		/*
++		 * If the maximum "boost" frequency is unknown, ask the arch
++		 * scale-invariance code to use the "nominal" performance for
++		 * CPU utilization scaling so as to prevent the schedutil
++		 * governor from selecting inadequate CPU frequencies.
++		 */
++		arch_set_max_freq_ratio(true);
++	}
++
++	freq_table = kcalloc(state_count, sizeof(*freq_table), GFP_KERNEL);
+ 	if (!freq_table) {
+ 		result = -ENOMEM;
+ 		goto err_unreg;
+@@ -785,6 +850,30 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ 		valid_states++;
+ 	}
+ 	freq_table[valid_states].frequency = CPUFREQ_TABLE_END;
++
++	if (max_boost_ratio) {
++		unsigned int state = data->first_perf_state;
++		unsigned int freq = freq_table[state].frequency;
++
++		/*
++		 * Because the loop above sorts the freq_table entries in the
++		 * descending order, freq is the maximum frequency in the table.
++		 * Assume that it corresponds to the CPPC nominal frequency and
++		 * use it to populate the frequency field of the extra "boost"
++		 * frequency entry.
++		 */
++		freq_table[0].frequency = freq * max_boost_ratio >> SCHED_CAPACITY_SHIFT;
++		/*
++		 * The purpose of the extra "boost" frequency entry is to make
++		 * the rest of cpufreq aware of the real maximum frequency, but
++		 * the way to request it is the same as for the first_perf_state
++		 * entry that is expected to cover the entire range of "boost"
++		 * frequencies of the CPU, so copy the driver_data value from
++		 * that entry.
++		 */
++		freq_table[0].driver_data = freq_table[state].driver_data;
++	}
++
+ 	policy->freq_table = freq_table;
+ 	perf->state = 0;
+ 
+@@ -858,8 +947,10 @@ static void acpi_cpufreq_cpu_ready(struct cpufreq_policy *policy)
+ {
+ 	struct acpi_processor_performance *perf = per_cpu_ptr(acpi_perf_data,
+ 							      policy->cpu);
++	struct acpi_cpufreq_data *data = policy->driver_data;
++	unsigned int freq = policy->freq_table[data->first_perf_state].frequency;
+ 
+-	if (perf->states[0].core_frequency * 1000 != policy->cpuinfo.max_freq)
++	if (perf->states[0].core_frequency * 1000 != freq)
+ 		pr_warn(FW_WARN "P-state 0 is not max freq\n");
+ }
+ 
+diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
+index 962cbb5e5f7fc..fe6a460c43735 100644
+--- a/drivers/dma/dmaengine.c
++++ b/drivers/dma/dmaengine.c
+@@ -1110,7 +1110,6 @@ static void __dma_async_device_channel_unregister(struct dma_device *device,
+ 		  "%s called while %d clients hold a reference\n",
+ 		  __func__, chan->client_count);
+ 	mutex_lock(&dma_list_mutex);
+-	list_del(&chan->device_node);
+ 	device->chancnt--;
+ 	chan->dev->chan = NULL;
+ 	mutex_unlock(&dma_list_mutex);
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index 663344987e3f3..a6704838ffcb7 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -325,17 +325,31 @@ static inline bool idxd_is_enabled(struct idxd_device *idxd)
+ 	return false;
+ }
+ 
++static inline bool idxd_device_is_halted(struct idxd_device *idxd)
++{
++	union gensts_reg gensts;
++
++	gensts.bits = ioread32(idxd->reg_base + IDXD_GENSTATS_OFFSET);
++
++	return (gensts.state == IDXD_DEVICE_STATE_HALT);
++}
++
+ /*
+  * This is function is only used for reset during probe and will
+  * poll for completion. Once the device is setup with interrupts,
+  * all commands will be done via interrupt completion.
+  */
+-void idxd_device_init_reset(struct idxd_device *idxd)
++int idxd_device_init_reset(struct idxd_device *idxd)
+ {
+ 	struct device *dev = &idxd->pdev->dev;
+ 	union idxd_command_reg cmd;
+ 	unsigned long flags;
+ 
++	if (idxd_device_is_halted(idxd)) {
++		dev_warn(&idxd->pdev->dev, "Device is HALTED!\n");
++		return -ENXIO;
++	}
++
+ 	memset(&cmd, 0, sizeof(cmd));
+ 	cmd.cmd = IDXD_CMD_RESET_DEVICE;
+ 	dev_dbg(dev, "%s: sending reset for init.\n", __func__);
+@@ -346,6 +360,7 @@ void idxd_device_init_reset(struct idxd_device *idxd)
+ 	       IDXD_CMDSTS_ACTIVE)
+ 		cpu_relax();
+ 	spin_unlock_irqrestore(&idxd->dev_lock, flags);
++	return 0;
+ }
+ 
+ static void idxd_cmd_exec(struct idxd_device *idxd, int cmd_code, u32 operand,
+@@ -355,6 +370,12 @@ static void idxd_cmd_exec(struct idxd_device *idxd, int cmd_code, u32 operand,
+ 	DECLARE_COMPLETION_ONSTACK(done);
+ 	unsigned long flags;
+ 
++	if (idxd_device_is_halted(idxd)) {
++		dev_warn(&idxd->pdev->dev, "Device is HALTED!\n");
++		*status = IDXD_CMDSTS_HW_ERR;
++		return;
++	}
++
+ 	memset(&cmd, 0, sizeof(cmd));
+ 	cmd.cmd = cmd_code;
+ 	cmd.operand = operand;
+diff --git a/drivers/dma/idxd/dma.c b/drivers/dma/idxd/dma.c
+index 0c892cbd72e01..8b14ba0bae1cd 100644
+--- a/drivers/dma/idxd/dma.c
++++ b/drivers/dma/idxd/dma.c
+@@ -214,5 +214,8 @@ int idxd_register_dma_channel(struct idxd_wq *wq)
+ 
+ void idxd_unregister_dma_channel(struct idxd_wq *wq)
+ {
+-	dma_async_device_channel_unregister(&wq->idxd->dma_dev, &wq->dma_chan);
++	struct dma_chan *chan = &wq->dma_chan;
++
++	dma_async_device_channel_unregister(&wq->idxd->dma_dev, chan);
++	list_del(&chan->device_node);
+ }
+diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
+index d48f193daacc0..953ef6536aac4 100644
+--- a/drivers/dma/idxd/idxd.h
++++ b/drivers/dma/idxd/idxd.h
+@@ -281,7 +281,7 @@ void idxd_mask_msix_vector(struct idxd_device *idxd, int vec_id);
+ void idxd_unmask_msix_vector(struct idxd_device *idxd, int vec_id);
+ 
+ /* device control */
+-void idxd_device_init_reset(struct idxd_device *idxd);
++int idxd_device_init_reset(struct idxd_device *idxd);
+ int idxd_device_enable(struct idxd_device *idxd);
+ int idxd_device_disable(struct idxd_device *idxd);
+ void idxd_device_reset(struct idxd_device *idxd);
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index 0a4432b063b5c..fa8c4228f358a 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -289,7 +289,10 @@ static int idxd_probe(struct idxd_device *idxd)
+ 	int rc;
+ 
+ 	dev_dbg(dev, "%s entered and resetting device\n", __func__);
+-	idxd_device_init_reset(idxd);
++	rc = idxd_device_init_reset(idxd);
++	if (rc < 0)
++		return rc;
++
+ 	dev_dbg(dev, "IDXD reset complete\n");
+ 
+ 	idxd_read_caps(idxd);
+diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
+index 17a65a13fb649..552e2e2707058 100644
+--- a/drivers/dma/idxd/irq.c
++++ b/drivers/dma/idxd/irq.c
+@@ -53,19 +53,14 @@ irqreturn_t idxd_irq_handler(int vec, void *data)
+ 	return IRQ_WAKE_THREAD;
+ }
+ 
+-irqreturn_t idxd_misc_thread(int vec, void *data)
++static int process_misc_interrupts(struct idxd_device *idxd, u32 cause)
+ {
+-	struct idxd_irq_entry *irq_entry = data;
+-	struct idxd_device *idxd = irq_entry->idxd;
+ 	struct device *dev = &idxd->pdev->dev;
+ 	union gensts_reg gensts;
+-	u32 cause, val = 0;
++	u32 val = 0;
+ 	int i;
+ 	bool err = false;
+ 
+-	cause = ioread32(idxd->reg_base + IDXD_INTCAUSE_OFFSET);
+-	iowrite32(cause, idxd->reg_base + IDXD_INTCAUSE_OFFSET);
+-
+ 	if (cause & IDXD_INTC_ERR) {
+ 		spin_lock_bh(&idxd->dev_lock);
+ 		for (i = 0; i < 4; i++)
+@@ -123,7 +118,7 @@ irqreturn_t idxd_misc_thread(int vec, void *data)
+ 			      val);
+ 
+ 	if (!err)
+-		goto out;
++		return 0;
+ 
+ 	gensts.bits = ioread32(idxd->reg_base + IDXD_GENSTATS_OFFSET);
+ 	if (gensts.state == IDXD_DEVICE_STATE_HALT) {
+@@ -144,10 +139,33 @@ irqreturn_t idxd_misc_thread(int vec, void *data)
+ 				gensts.reset_type == IDXD_DEVICE_RESET_FLR ?
+ 				"FLR" : "system reset");
+ 			spin_unlock_bh(&idxd->dev_lock);
++			return -ENXIO;
+ 		}
+ 	}
+ 
+- out:
++	return 0;
++}
++
++irqreturn_t idxd_misc_thread(int vec, void *data)
++{
++	struct idxd_irq_entry *irq_entry = data;
++	struct idxd_device *idxd = irq_entry->idxd;
++	int rc;
++	u32 cause;
++
++	cause = ioread32(idxd->reg_base + IDXD_INTCAUSE_OFFSET);
++	if (cause)
++		iowrite32(cause, idxd->reg_base + IDXD_INTCAUSE_OFFSET);
++
++	while (cause) {
++		rc = process_misc_interrupts(idxd, cause);
++		if (rc < 0)
++			break;
++		cause = ioread32(idxd->reg_base + IDXD_INTCAUSE_OFFSET);
++		if (cause)
++			iowrite32(cause, idxd->reg_base + IDXD_INTCAUSE_OFFSET);
++	}
++
+ 	idxd_unmask_msix_vector(idxd, irq_entry->id);
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
+index f20ac3d694246..14751c7ccd1f4 100644
+--- a/drivers/gpio/Kconfig
++++ b/drivers/gpio/Kconfig
+@@ -428,8 +428,9 @@ config GPIO_MXC
+ 	select GENERIC_IRQ_CHIP
+ 
+ config GPIO_MXS
+-	def_bool y
++	bool "Freescale MXS GPIO support" if COMPILE_TEST
+ 	depends on ARCH_MXS || COMPILE_TEST
++	default y if ARCH_MXS
+ 	select GPIO_GENERIC
+ 	select GENERIC_IRQ_CHIP
+ 
+diff --git a/drivers/gpio/gpio-ep93xx.c b/drivers/gpio/gpio-ep93xx.c
+index 226da8df6f100..94d9fa0d6aa70 100644
+--- a/drivers/gpio/gpio-ep93xx.c
++++ b/drivers/gpio/gpio-ep93xx.c
+@@ -25,6 +25,9 @@
+ /* Maximum value for gpio line identifiers */
+ #define EP93XX_GPIO_LINE_MAX 63
+ 
++/* Number of GPIO chips in EP93XX */
++#define EP93XX_GPIO_CHIP_NUM 8
++
+ /* Maximum value for irq capable line identifiers */
+ #define EP93XX_GPIO_LINE_MAX_IRQ 23
+ 
+@@ -34,74 +37,75 @@
+  */
+ #define EP93XX_GPIO_F_IRQ_BASE 80
+ 
+-struct ep93xx_gpio {
+-	void __iomem		*base;
+-	struct gpio_chip	gc[8];
++struct ep93xx_gpio_irq_chip {
++	struct irq_chip ic;
++	u8 irq_offset;
++	u8 int_unmasked;
++	u8 int_enabled;
++	u8 int_type1;
++	u8 int_type2;
++	u8 int_debounce;
+ };
+ 
+-/*************************************************************************
+- * Interrupt handling for EP93xx on-chip GPIOs
+- *************************************************************************/
+-static unsigned char gpio_int_unmasked[3];
+-static unsigned char gpio_int_enabled[3];
+-static unsigned char gpio_int_type1[3];
+-static unsigned char gpio_int_type2[3];
+-static unsigned char gpio_int_debounce[3];
+-
+-/* Port ordering is: A B F */
+-static const u8 int_type1_register_offset[3]	= { 0x90, 0xac, 0x4c };
+-static const u8 int_type2_register_offset[3]	= { 0x94, 0xb0, 0x50 };
+-static const u8 eoi_register_offset[3]		= { 0x98, 0xb4, 0x54 };
+-static const u8 int_en_register_offset[3]	= { 0x9c, 0xb8, 0x58 };
+-static const u8 int_debounce_register_offset[3]	= { 0xa8, 0xc4, 0x64 };
+-
+-static void ep93xx_gpio_update_int_params(struct ep93xx_gpio *epg, unsigned port)
+-{
+-	BUG_ON(port > 2);
++struct ep93xx_gpio_chip {
++	struct gpio_chip		gc;
++	struct ep93xx_gpio_irq_chip	*eic;
++};
+ 
+-	writeb_relaxed(0, epg->base + int_en_register_offset[port]);
++struct ep93xx_gpio {
++	void __iomem		*base;
++	struct ep93xx_gpio_chip	gc[EP93XX_GPIO_CHIP_NUM];
++};
+ 
+-	writeb_relaxed(gpio_int_type2[port],
+-		       epg->base + int_type2_register_offset[port]);
++#define to_ep93xx_gpio_chip(x) container_of(x, struct ep93xx_gpio_chip, gc)
+ 
+-	writeb_relaxed(gpio_int_type1[port],
+-		       epg->base + int_type1_register_offset[port]);
++static struct ep93xx_gpio_irq_chip *to_ep93xx_gpio_irq_chip(struct gpio_chip *gc)
++{
++	struct ep93xx_gpio_chip *egc = to_ep93xx_gpio_chip(gc);
+ 
+-	writeb(gpio_int_unmasked[port] & gpio_int_enabled[port],
+-	       epg->base + int_en_register_offset[port]);
++	return egc->eic;
+ }
+ 
+-static int ep93xx_gpio_port(struct gpio_chip *gc)
++/*************************************************************************
++ * Interrupt handling for EP93xx on-chip GPIOs
++ *************************************************************************/
++#define EP93XX_INT_TYPE1_OFFSET		0x00
++#define EP93XX_INT_TYPE2_OFFSET		0x04
++#define EP93XX_INT_EOI_OFFSET		0x08
++#define EP93XX_INT_EN_OFFSET		0x0c
++#define EP93XX_INT_STATUS_OFFSET	0x10
++#define EP93XX_INT_RAW_STATUS_OFFSET	0x14
++#define EP93XX_INT_DEBOUNCE_OFFSET	0x18
++
++static void ep93xx_gpio_update_int_params(struct ep93xx_gpio *epg,
++					  struct ep93xx_gpio_irq_chip *eic)
+ {
+-	struct ep93xx_gpio *epg = gpiochip_get_data(gc);
+-	int port = 0;
++	writeb_relaxed(0, epg->base + eic->irq_offset + EP93XX_INT_EN_OFFSET);
+ 
+-	while (port < ARRAY_SIZE(epg->gc) && gc != &epg->gc[port])
+-		port++;
++	writeb_relaxed(eic->int_type2,
++		       epg->base + eic->irq_offset + EP93XX_INT_TYPE2_OFFSET);
+ 
+-	/* This should not happen but is there as a last safeguard */
+-	if (port == ARRAY_SIZE(epg->gc)) {
+-		pr_crit("can't find the GPIO port\n");
+-		return 0;
+-	}
++	writeb_relaxed(eic->int_type1,
++		       epg->base + eic->irq_offset + EP93XX_INT_TYPE1_OFFSET);
+ 
+-	return port;
++	writeb_relaxed(eic->int_unmasked & eic->int_enabled,
++		       epg->base + eic->irq_offset + EP93XX_INT_EN_OFFSET);
+ }
+ 
+ static void ep93xx_gpio_int_debounce(struct gpio_chip *gc,
+ 				     unsigned int offset, bool enable)
+ {
+ 	struct ep93xx_gpio *epg = gpiochip_get_data(gc);
+-	int port = ep93xx_gpio_port(gc);
++	struct ep93xx_gpio_irq_chip *eic = to_ep93xx_gpio_irq_chip(gc);
+ 	int port_mask = BIT(offset);
+ 
+ 	if (enable)
+-		gpio_int_debounce[port] |= port_mask;
++		eic->int_debounce |= port_mask;
+ 	else
+-		gpio_int_debounce[port] &= ~port_mask;
++		eic->int_debounce &= ~port_mask;
+ 
+-	writeb(gpio_int_debounce[port],
+-	       epg->base + int_debounce_register_offset[port]);
++	writeb(eic->int_debounce,
++	       epg->base + eic->irq_offset + EP93XX_INT_DEBOUNCE_OFFSET);
+ }
+ 
+ static void ep93xx_gpio_ab_irq_handler(struct irq_desc *desc)
+@@ -122,12 +126,12 @@ static void ep93xx_gpio_ab_irq_handler(struct irq_desc *desc)
+ 	 */
+ 	stat = readb(epg->base + EP93XX_GPIO_A_INT_STATUS);
+ 	for_each_set_bit(offset, &stat, 8)
+-		generic_handle_irq(irq_find_mapping(epg->gc[0].irq.domain,
++		generic_handle_irq(irq_find_mapping(epg->gc[0].gc.irq.domain,
+ 						    offset));
+ 
+ 	stat = readb(epg->base + EP93XX_GPIO_B_INT_STATUS);
+ 	for_each_set_bit(offset, &stat, 8)
+-		generic_handle_irq(irq_find_mapping(epg->gc[1].irq.domain,
++		generic_handle_irq(irq_find_mapping(epg->gc[1].gc.irq.domain,
+ 						    offset));
+ 
+ 	chained_irq_exit(irqchip, desc);
+@@ -153,52 +157,52 @@ static void ep93xx_gpio_f_irq_handler(struct irq_desc *desc)
+ static void ep93xx_gpio_irq_ack(struct irq_data *d)
+ {
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++	struct ep93xx_gpio_irq_chip *eic = to_ep93xx_gpio_irq_chip(gc);
+ 	struct ep93xx_gpio *epg = gpiochip_get_data(gc);
+-	int port = ep93xx_gpio_port(gc);
+ 	int port_mask = BIT(d->irq & 7);
+ 
+ 	if (irqd_get_trigger_type(d) == IRQ_TYPE_EDGE_BOTH) {
+-		gpio_int_type2[port] ^= port_mask; /* switch edge direction */
+-		ep93xx_gpio_update_int_params(epg, port);
++		eic->int_type2 ^= port_mask; /* switch edge direction */
++		ep93xx_gpio_update_int_params(epg, eic);
+ 	}
+ 
+-	writeb(port_mask, epg->base + eoi_register_offset[port]);
++	writeb(port_mask, epg->base + eic->irq_offset + EP93XX_INT_EOI_OFFSET);
+ }
+ 
+ static void ep93xx_gpio_irq_mask_ack(struct irq_data *d)
+ {
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++	struct ep93xx_gpio_irq_chip *eic = to_ep93xx_gpio_irq_chip(gc);
+ 	struct ep93xx_gpio *epg = gpiochip_get_data(gc);
+-	int port = ep93xx_gpio_port(gc);
+ 	int port_mask = BIT(d->irq & 7);
+ 
+ 	if (irqd_get_trigger_type(d) == IRQ_TYPE_EDGE_BOTH)
+-		gpio_int_type2[port] ^= port_mask; /* switch edge direction */
++		eic->int_type2 ^= port_mask; /* switch edge direction */
+ 
+-	gpio_int_unmasked[port] &= ~port_mask;
+-	ep93xx_gpio_update_int_params(epg, port);
++	eic->int_unmasked &= ~port_mask;
++	ep93xx_gpio_update_int_params(epg, eic);
+ 
+-	writeb(port_mask, epg->base + eoi_register_offset[port]);
++	writeb(port_mask, epg->base + eic->irq_offset + EP93XX_INT_EOI_OFFSET);
+ }
+ 
+ static void ep93xx_gpio_irq_mask(struct irq_data *d)
+ {
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++	struct ep93xx_gpio_irq_chip *eic = to_ep93xx_gpio_irq_chip(gc);
+ 	struct ep93xx_gpio *epg = gpiochip_get_data(gc);
+-	int port = ep93xx_gpio_port(gc);
+ 
+-	gpio_int_unmasked[port] &= ~BIT(d->irq & 7);
+-	ep93xx_gpio_update_int_params(epg, port);
++	eic->int_unmasked &= ~BIT(d->irq & 7);
++	ep93xx_gpio_update_int_params(epg, eic);
+ }
+ 
+ static void ep93xx_gpio_irq_unmask(struct irq_data *d)
+ {
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++	struct ep93xx_gpio_irq_chip *eic = to_ep93xx_gpio_irq_chip(gc);
+ 	struct ep93xx_gpio *epg = gpiochip_get_data(gc);
+-	int port = ep93xx_gpio_port(gc);
+ 
+-	gpio_int_unmasked[port] |= BIT(d->irq & 7);
+-	ep93xx_gpio_update_int_params(epg, port);
++	eic->int_unmasked |= BIT(d->irq & 7);
++	ep93xx_gpio_update_int_params(epg, eic);
+ }
+ 
+ /*
+@@ -209,8 +213,8 @@ static void ep93xx_gpio_irq_unmask(struct irq_data *d)
+ static int ep93xx_gpio_irq_type(struct irq_data *d, unsigned int type)
+ {
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++	struct ep93xx_gpio_irq_chip *eic = to_ep93xx_gpio_irq_chip(gc);
+ 	struct ep93xx_gpio *epg = gpiochip_get_data(gc);
+-	int port = ep93xx_gpio_port(gc);
+ 	int offset = d->irq & 7;
+ 	int port_mask = BIT(offset);
+ 	irq_flow_handler_t handler;
+@@ -219,32 +223,32 @@ static int ep93xx_gpio_irq_type(struct irq_data *d, unsigned int type)
+ 
+ 	switch (type) {
+ 	case IRQ_TYPE_EDGE_RISING:
+-		gpio_int_type1[port] |= port_mask;
+-		gpio_int_type2[port] |= port_mask;
++		eic->int_type1 |= port_mask;
++		eic->int_type2 |= port_mask;
+ 		handler = handle_edge_irq;
+ 		break;
+ 	case IRQ_TYPE_EDGE_FALLING:
+-		gpio_int_type1[port] |= port_mask;
+-		gpio_int_type2[port] &= ~port_mask;
++		eic->int_type1 |= port_mask;
++		eic->int_type2 &= ~port_mask;
+ 		handler = handle_edge_irq;
+ 		break;
+ 	case IRQ_TYPE_LEVEL_HIGH:
+-		gpio_int_type1[port] &= ~port_mask;
+-		gpio_int_type2[port] |= port_mask;
++		eic->int_type1 &= ~port_mask;
++		eic->int_type2 |= port_mask;
+ 		handler = handle_level_irq;
+ 		break;
+ 	case IRQ_TYPE_LEVEL_LOW:
+-		gpio_int_type1[port] &= ~port_mask;
+-		gpio_int_type2[port] &= ~port_mask;
++		eic->int_type1 &= ~port_mask;
++		eic->int_type2 &= ~port_mask;
+ 		handler = handle_level_irq;
+ 		break;
+ 	case IRQ_TYPE_EDGE_BOTH:
+-		gpio_int_type1[port] |= port_mask;
++		eic->int_type1 |= port_mask;
+ 		/* set initial polarity based on current input level */
+ 		if (gc->get(gc, offset))
+-			gpio_int_type2[port] &= ~port_mask; /* falling */
++			eic->int_type2 &= ~port_mask; /* falling */
+ 		else
+-			gpio_int_type2[port] |= port_mask; /* rising */
++			eic->int_type2 |= port_mask; /* rising */
+ 		handler = handle_edge_irq;
+ 		break;
+ 	default:
+@@ -253,22 +257,13 @@ static int ep93xx_gpio_irq_type(struct irq_data *d, unsigned int type)
+ 
+ 	irq_set_handler_locked(d, handler);
+ 
+-	gpio_int_enabled[port] |= port_mask;
++	eic->int_enabled |= port_mask;
+ 
+-	ep93xx_gpio_update_int_params(epg, port);
++	ep93xx_gpio_update_int_params(epg, eic);
+ 
+ 	return 0;
+ }
+ 
+-static struct irq_chip ep93xx_gpio_irq_chip = {
+-	.name		= "GPIO",
+-	.irq_ack	= ep93xx_gpio_irq_ack,
+-	.irq_mask_ack	= ep93xx_gpio_irq_mask_ack,
+-	.irq_mask	= ep93xx_gpio_irq_mask,
+-	.irq_unmask	= ep93xx_gpio_irq_unmask,
+-	.irq_set_type	= ep93xx_gpio_irq_type,
+-};
+-
+ /*************************************************************************
+  * gpiolib interface for EP93xx on-chip GPIOs
+  *************************************************************************/
+@@ -276,17 +271,19 @@ struct ep93xx_gpio_bank {
+ 	const char	*label;
+ 	int		data;
+ 	int		dir;
++	int		irq;
+ 	int		base;
+ 	bool		has_irq;
+ 	bool		has_hierarchical_irq;
+ 	unsigned int	irq_base;
+ };
+ 
+-#define EP93XX_GPIO_BANK(_label, _data, _dir, _base, _has_irq, _has_hier, _irq_base) \
++#define EP93XX_GPIO_BANK(_label, _data, _dir, _irq, _base, _has_irq, _has_hier, _irq_base) \
+ 	{							\
+ 		.label		= _label,			\
+ 		.data		= _data,			\
+ 		.dir		= _dir,				\
++		.irq		= _irq,				\
+ 		.base		= _base,			\
+ 		.has_irq	= _has_irq,			\
+ 		.has_hierarchical_irq = _has_hier,		\
+@@ -295,16 +292,16 @@ struct ep93xx_gpio_bank {
+ 
+ static struct ep93xx_gpio_bank ep93xx_gpio_banks[] = {
+ 	/* Bank A has 8 IRQs */
+-	EP93XX_GPIO_BANK("A", 0x00, 0x10, 0, true, false, 64),
++	EP93XX_GPIO_BANK("A", 0x00, 0x10, 0x90, 0, true, false, 64),
+ 	/* Bank B has 8 IRQs */
+-	EP93XX_GPIO_BANK("B", 0x04, 0x14, 8, true, false, 72),
+-	EP93XX_GPIO_BANK("C", 0x08, 0x18, 40, false, false, 0),
+-	EP93XX_GPIO_BANK("D", 0x0c, 0x1c, 24, false, false, 0),
+-	EP93XX_GPIO_BANK("E", 0x20, 0x24, 32, false, false, 0),
++	EP93XX_GPIO_BANK("B", 0x04, 0x14, 0xac, 8, true, false, 72),
++	EP93XX_GPIO_BANK("C", 0x08, 0x18, 0x00, 40, false, false, 0),
++	EP93XX_GPIO_BANK("D", 0x0c, 0x1c, 0x00, 24, false, false, 0),
++	EP93XX_GPIO_BANK("E", 0x20, 0x24, 0x00, 32, false, false, 0),
+ 	/* Bank F has 8 IRQs */
+-	EP93XX_GPIO_BANK("F", 0x30, 0x34, 16, false, true, 0),
+-	EP93XX_GPIO_BANK("G", 0x38, 0x3c, 48, false, false, 0),
+-	EP93XX_GPIO_BANK("H", 0x40, 0x44, 56, false, false, 0),
++	EP93XX_GPIO_BANK("F", 0x30, 0x34, 0x4c, 16, false, true, 0),
++	EP93XX_GPIO_BANK("G", 0x38, 0x3c, 0x00, 48, false, false, 0),
++	EP93XX_GPIO_BANK("H", 0x40, 0x44, 0x00, 56, false, false, 0),
+ };
+ 
+ static int ep93xx_gpio_set_config(struct gpio_chip *gc, unsigned offset,
+@@ -326,13 +323,23 @@ static int ep93xx_gpio_f_to_irq(struct gpio_chip *gc, unsigned offset)
+ 	return EP93XX_GPIO_F_IRQ_BASE + offset;
+ }
+ 
+-static int ep93xx_gpio_add_bank(struct gpio_chip *gc,
++static void ep93xx_init_irq_chip(struct device *dev, struct irq_chip *ic)
++{
++	ic->irq_ack = ep93xx_gpio_irq_ack;
++	ic->irq_mask_ack = ep93xx_gpio_irq_mask_ack;
++	ic->irq_mask = ep93xx_gpio_irq_mask;
++	ic->irq_unmask = ep93xx_gpio_irq_unmask;
++	ic->irq_set_type = ep93xx_gpio_irq_type;
++}
++
++static int ep93xx_gpio_add_bank(struct ep93xx_gpio_chip *egc,
+ 				struct platform_device *pdev,
+ 				struct ep93xx_gpio *epg,
+ 				struct ep93xx_gpio_bank *bank)
+ {
+ 	void __iomem *data = epg->base + bank->data;
+ 	void __iomem *dir = epg->base + bank->dir;
++	struct gpio_chip *gc = &egc->gc;
+ 	struct device *dev = &pdev->dev;
+ 	struct gpio_irq_chip *girq;
+ 	int err;
+@@ -346,8 +353,21 @@ static int ep93xx_gpio_add_bank(struct gpio_chip *gc,
+ 
+ 	girq = &gc->irq;
+ 	if (bank->has_irq || bank->has_hierarchical_irq) {
++		struct irq_chip *ic;
++
+ 		gc->set_config = ep93xx_gpio_set_config;
+-		girq->chip = &ep93xx_gpio_irq_chip;
++		egc->eic = devm_kcalloc(dev, 1,
++					sizeof(*egc->eic),
++					GFP_KERNEL);
++		if (!egc->eic)
++			return -ENOMEM;
++		egc->eic->irq_offset = bank->irq;
++		ic = &egc->eic->ic;
++		ic->name = devm_kasprintf(dev, GFP_KERNEL, "gpio-irq-%s", bank->label);
++		if (!ic->name)
++			return -ENOMEM;
++		ep93xx_init_irq_chip(dev, ic);
++		girq->chip = ic;
+ 	}
+ 
+ 	if (bank->has_irq) {
+@@ -389,7 +409,7 @@ static int ep93xx_gpio_add_bank(struct gpio_chip *gc,
+ 			gpio_irq = EP93XX_GPIO_F_IRQ_BASE + i;
+ 			irq_set_chip_data(gpio_irq, &epg->gc[5]);
+ 			irq_set_chip_and_handler(gpio_irq,
+-						 &ep93xx_gpio_irq_chip,
++						 girq->chip,
+ 						 handle_level_irq);
+ 			irq_clear_status_flags(gpio_irq, IRQ_NOREQUEST);
+ 		}
+@@ -415,7 +435,7 @@ static int ep93xx_gpio_probe(struct platform_device *pdev)
+ 		return PTR_ERR(epg->base);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(ep93xx_gpio_banks); i++) {
+-		struct gpio_chip *gc = &epg->gc[i];
++		struct ep93xx_gpio_chip *gc = &epg->gc[i];
+ 		struct ep93xx_gpio_bank *bank = &ep93xx_gpio_banks[i];
+ 
+ 		if (ep93xx_gpio_add_bank(gc, pdev, epg, bank))
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 580880212e551..fdca76fc598c0 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1792,8 +1792,8 @@ static void emulated_link_detect(struct dc_link *link)
+ 	link->type = dc_connection_none;
+ 	prev_sink = link->local_sink;
+ 
+-	if (prev_sink != NULL)
+-		dc_sink_retain(prev_sink);
++	if (prev_sink)
++		dc_sink_release(prev_sink);
+ 
+ 	switch (link->connector_signal) {
+ 	case SIGNAL_TYPE_HDMI_TYPE_A: {
+@@ -2261,8 +2261,10 @@ void amdgpu_dm_update_connector_after_detect(
+ 		 * TODO: check if we still need the S3 mode update workaround.
+ 		 * If yes, put it here.
+ 		 */
+-		if (aconnector->dc_sink)
++		if (aconnector->dc_sink) {
+ 			amdgpu_dm_update_freesync_caps(connector, NULL);
++			dc_sink_release(aconnector->dc_sink);
++		}
+ 
+ 		aconnector->dc_sink = sink;
+ 		dc_sink_retain(aconnector->dc_sink);
+@@ -7870,14 +7872,14 @@ static int dm_force_atomic_commit(struct drm_connector *connector)
+ 
+ 	ret = PTR_ERR_OR_ZERO(conn_state);
+ 	if (ret)
+-		goto err;
++		goto out;
+ 
+ 	/* Attach crtc to drm_atomic_state*/
+ 	crtc_state = drm_atomic_get_crtc_state(state, &disconnected_acrtc->base);
+ 
+ 	ret = PTR_ERR_OR_ZERO(crtc_state);
+ 	if (ret)
+-		goto err;
++		goto out;
+ 
+ 	/* force a restore */
+ 	crtc_state->mode_changed = true;
+@@ -7887,17 +7889,15 @@ static int dm_force_atomic_commit(struct drm_connector *connector)
+ 
+ 	ret = PTR_ERR_OR_ZERO(plane_state);
+ 	if (ret)
+-		goto err;
+-
++		goto out;
+ 
+ 	/* Call commit internally with the state we just constructed */
+ 	ret = drm_atomic_commit(state);
+-	if (!ret)
+-		return 0;
+ 
+-err:
+-	DRM_ERROR("Restoring old state failed with %i\n", ret);
++out:
+ 	drm_atomic_state_put(state);
++	if (ret)
++		DRM_ERROR("Restoring old state failed with %i\n", ret);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index eee19edeeee5c..1e448f1b39a18 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -828,6 +828,9 @@ bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
+ 		if (computed_streams[i])
+ 			continue;
+ 
++		if (dcn20_remove_stream_from_ctx(stream->ctx->dc, dc_state, stream) != DC_OK)
++			return false;
++
+ 		mutex_lock(&aconnector->mst_mgr.lock);
+ 		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link)) {
+ 			mutex_unlock(&aconnector->mst_mgr.lock);
+@@ -845,7 +848,8 @@ bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
+ 		stream = dc_state->streams[i];
+ 
+ 		if (stream->timing.flags.DSC == 1)
+-			dc_stream_add_dsc_to_resource(stream->ctx->dc, dc_state, stream);
++			if (dc_stream_add_dsc_to_resource(stream->ctx->dc, dc_state, stream) != DC_OK)
++				return false;
+ 	}
+ 
+ 	return true;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 17e6fd8201395..32b73ea866737 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -877,13 +877,13 @@ static uint32_t translate_training_aux_read_interval(uint32_t dpcd_aux_read_inte
+ 
+ 	switch (dpcd_aux_read_interval) {
+ 	case 0x01:
+-		aux_rd_interval_us = 400;
++		aux_rd_interval_us = 4000;
+ 		break;
+ 	case 0x02:
+-		aux_rd_interval_us = 4000;
++		aux_rd_interval_us = 8000;
+ 		break;
+ 	case 0x03:
+-		aux_rd_interval_us = 8000;
++		aux_rd_interval_us = 12000;
+ 		break;
+ 	case 0x04:
+ 		aux_rd_interval_us = 16000;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index a92f6e4b2eb8f..121643ddb719b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -297,8 +297,8 @@ static struct _vcs_dpi_soc_bounding_box_st dcn2_0_soc = {
+ 			},
+ 		},
+ 	.num_states = 5,
+-	.sr_exit_time_us = 11.6,
+-	.sr_enter_plus_exit_time_us = 13.9,
++	.sr_exit_time_us = 8.6,
++	.sr_enter_plus_exit_time_us = 10.9,
+ 	.urgent_latency_us = 4.0,
+ 	.urgent_latency_pixel_data_only_us = 4.0,
+ 	.urgent_latency_pixel_mixed_with_vm_data_us = 4.0,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index 20441127783ba..c993854404124 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -902,6 +902,8 @@ enum dcn20_clk_src_array_id {
+ 	DCN20_CLK_SRC_PLL0,
+ 	DCN20_CLK_SRC_PLL1,
+ 	DCN20_CLK_SRC_PLL2,
++	DCN20_CLK_SRC_PLL3,
++	DCN20_CLK_SRC_PLL4,
+ 	DCN20_CLK_SRC_TOTAL_DCN21
+ };
+ 
+@@ -1880,6 +1882,14 @@ static bool dcn21_resource_construct(
+ 			dcn21_clock_source_create(ctx, ctx->dc_bios,
+ 				CLOCK_SOURCE_COMBO_PHY_PLL2,
+ 				&clk_src_regs[2], false);
++	pool->base.clock_sources[DCN20_CLK_SRC_PLL3] =
++			dcn21_clock_source_create(ctx, ctx->dc_bios,
++				CLOCK_SOURCE_COMBO_PHY_PLL3,
++				&clk_src_regs[3], false);
++	pool->base.clock_sources[DCN20_CLK_SRC_PLL4] =
++			dcn21_clock_source_create(ctx, ctx->dc_bios,
++				CLOCK_SOURCE_COMBO_PHY_PLL4,
++				&clk_src_regs[4], false);
+ 
+ 	pool->base.clk_src_count = DCN20_CLK_SRC_TOTAL_DCN21;
+ 
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 7749b0ceabba9..17bdad95978a1 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -4224,6 +4224,7 @@ drm_dp_mst_detect_port(struct drm_connector *connector,
+ 
+ 	switch (port->pdt) {
+ 	case DP_PEER_DEVICE_NONE:
++		break;
+ 	case DP_PEER_DEVICE_MST_BRANCHING:
+ 		if (!port->mcs)
+ 			ret = connector_status_connected;
+diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
+index 0095c8cac9b40..b73d51e766ce8 100644
+--- a/drivers/gpu/drm/i915/display/intel_overlay.c
++++ b/drivers/gpu/drm/i915/display/intel_overlay.c
+@@ -182,6 +182,7 @@ struct intel_overlay {
+ 	struct intel_crtc *crtc;
+ 	struct i915_vma *vma;
+ 	struct i915_vma *old_vma;
++	struct intel_frontbuffer *frontbuffer;
+ 	bool active;
+ 	bool pfit_active;
+ 	u32 pfit_vscale_ratio; /* shifted-point number, (1<<12) == 1.0 */
+@@ -282,21 +283,19 @@ static void intel_overlay_flip_prepare(struct intel_overlay *overlay,
+ 				       struct i915_vma *vma)
+ {
+ 	enum pipe pipe = overlay->crtc->pipe;
+-	struct intel_frontbuffer *from = NULL, *to = NULL;
++	struct intel_frontbuffer *frontbuffer = NULL;
+ 
+ 	drm_WARN_ON(&overlay->i915->drm, overlay->old_vma);
+ 
+-	if (overlay->vma)
+-		from = intel_frontbuffer_get(overlay->vma->obj);
+ 	if (vma)
+-		to = intel_frontbuffer_get(vma->obj);
++		frontbuffer = intel_frontbuffer_get(vma->obj);
+ 
+-	intel_frontbuffer_track(from, to, INTEL_FRONTBUFFER_OVERLAY(pipe));
++	intel_frontbuffer_track(overlay->frontbuffer, frontbuffer,
++				INTEL_FRONTBUFFER_OVERLAY(pipe));
+ 
+-	if (to)
+-		intel_frontbuffer_put(to);
+-	if (from)
+-		intel_frontbuffer_put(from);
++	if (overlay->frontbuffer)
++		intel_frontbuffer_put(overlay->frontbuffer);
++	overlay->frontbuffer = frontbuffer;
+ 
+ 	intel_frontbuffer_flip_prepare(overlay->i915,
+ 				       INTEL_FRONTBUFFER_OVERLAY(pipe));
+diff --git a/drivers/gpu/drm/i915/display/intel_tc.c b/drivers/gpu/drm/i915/display/intel_tc.c
+index 8f67aef18b2da..1d81da31796e3 100644
+--- a/drivers/gpu/drm/i915/display/intel_tc.c
++++ b/drivers/gpu/drm/i915/display/intel_tc.c
+@@ -23,36 +23,6 @@ static const char *tc_port_mode_name(enum tc_port_mode mode)
+ 	return names[mode];
+ }
+ 
+-static void
+-tc_port_load_fia_params(struct drm_i915_private *i915,
+-			struct intel_digital_port *dig_port)
+-{
+-	enum port port = dig_port->base.port;
+-	enum tc_port tc_port = intel_port_to_tc(i915, port);
+-	u32 modular_fia;
+-
+-	if (INTEL_INFO(i915)->display.has_modular_fia) {
+-		modular_fia = intel_uncore_read(&i915->uncore,
+-						PORT_TX_DFLEXDPSP(FIA1));
+-		drm_WARN_ON(&i915->drm, modular_fia == 0xffffffff);
+-		modular_fia &= MODULAR_FIA_MASK;
+-	} else {
+-		modular_fia = 0;
+-	}
+-
+-	/*
+-	 * Each Modular FIA instance houses 2 TC ports. In SOC that has more
+-	 * than two TC ports, there are multiple instances of Modular FIA.
+-	 */
+-	if (modular_fia) {
+-		dig_port->tc_phy_fia = tc_port / 2;
+-		dig_port->tc_phy_fia_idx = tc_port % 2;
+-	} else {
+-		dig_port->tc_phy_fia = FIA1;
+-		dig_port->tc_phy_fia_idx = tc_port;
+-	}
+-}
+-
+ static enum intel_display_power_domain
+ tc_cold_get_power_domain(struct intel_digital_port *dig_port)
+ {
+@@ -646,6 +616,43 @@ void intel_tc_port_put_link(struct intel_digital_port *dig_port)
+ 	mutex_unlock(&dig_port->tc_lock);
+ }
+ 
++static bool
++tc_has_modular_fia(struct drm_i915_private *i915, struct intel_digital_port *dig_port)
++{
++	intel_wakeref_t wakeref;
++	u32 val;
++
++	if (!INTEL_INFO(i915)->display.has_modular_fia)
++		return false;
++
++	wakeref = tc_cold_block(dig_port);
++	val = intel_uncore_read(&i915->uncore, PORT_TX_DFLEXDPSP(FIA1));
++	tc_cold_unblock(dig_port, wakeref);
++
++	drm_WARN_ON(&i915->drm, val == 0xffffffff);
++
++	return val & MODULAR_FIA_MASK;
++}
++
++static void
++tc_port_load_fia_params(struct drm_i915_private *i915, struct intel_digital_port *dig_port)
++{
++	enum port port = dig_port->base.port;
++	enum tc_port tc_port = intel_port_to_tc(i915, port);
++
++	/*
++	 * Each Modular FIA instance houses 2 TC ports. In SOC that has more
++	 * than two TC ports, there are multiple instances of Modular FIA.
++	 */
++	if (tc_has_modular_fia(i915, dig_port)) {
++		dig_port->tc_phy_fia = tc_port / 2;
++		dig_port->tc_phy_fia_idx = tc_port % 2;
++	} else {
++		dig_port->tc_phy_fia = FIA1;
++		dig_port->tc_phy_fia_idx = tc_port;
++	}
++}
++
+ void intel_tc_port_init(struct intel_digital_port *dig_port, bool is_legacy)
+ {
+ 	struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
+diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+index eaaf5d70e3529..1e643bc7e786a 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+@@ -689,6 +689,30 @@ static void sun4i_tcon1_mode_set(struct sun4i_tcon *tcon,
+ 		     SUN4I_TCON1_BASIC5_V_SYNC(vsync) |
+ 		     SUN4I_TCON1_BASIC5_H_SYNC(hsync));
+ 
++	/* Setup the polarity of multiple signals */
++	if (tcon->quirks->polarity_in_ch0) {
++		val = 0;
++
++		if (mode->flags & DRM_MODE_FLAG_PHSYNC)
++			val |= SUN4I_TCON0_IO_POL_HSYNC_POSITIVE;
++
++		if (mode->flags & DRM_MODE_FLAG_PVSYNC)
++			val |= SUN4I_TCON0_IO_POL_VSYNC_POSITIVE;
++
++		regmap_write(tcon->regs, SUN4I_TCON0_IO_POL_REG, val);
++	} else {
++		/* according to vendor driver, this bit must be always set */
++		val = SUN4I_TCON1_IO_POL_UNKNOWN;
++
++		if (mode->flags & DRM_MODE_FLAG_PHSYNC)
++			val |= SUN4I_TCON1_IO_POL_HSYNC_POSITIVE;
++
++		if (mode->flags & DRM_MODE_FLAG_PVSYNC)
++			val |= SUN4I_TCON1_IO_POL_VSYNC_POSITIVE;
++
++		regmap_write(tcon->regs, SUN4I_TCON1_IO_POL_REG, val);
++	}
++
+ 	/* Map output pins to channel 1 */
+ 	regmap_update_bits(tcon->regs, SUN4I_TCON_GCTL_REG,
+ 			   SUN4I_TCON_GCTL_IOMAP_MASK,
+@@ -1517,6 +1541,7 @@ static const struct sun4i_tcon_quirks sun8i_a83t_tv_quirks = {
+ 
+ static const struct sun4i_tcon_quirks sun8i_r40_tv_quirks = {
+ 	.has_channel_1		= true,
++	.polarity_in_ch0	= true,
+ 	.set_mux		= sun8i_r40_tcon_tv_set_mux,
+ };
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.h b/drivers/gpu/drm/sun4i/sun4i_tcon.h
+index cfbf4e6c16799..ee555318e3c2f 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_tcon.h
++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.h
+@@ -153,6 +153,11 @@
+ #define SUN4I_TCON1_BASIC5_V_SYNC(height)		(((height) - 1) & 0x3ff)
+ 
+ #define SUN4I_TCON1_IO_POL_REG			0xf0
++/* there is no documentation about this bit */
++#define SUN4I_TCON1_IO_POL_UNKNOWN			BIT(26)
++#define SUN4I_TCON1_IO_POL_HSYNC_POSITIVE		BIT(25)
++#define SUN4I_TCON1_IO_POL_VSYNC_POSITIVE		BIT(24)
++
+ #define SUN4I_TCON1_IO_TRI_REG			0xf4
+ 
+ #define SUN4I_TCON_ECC_FIFO_REG			0xf8
+@@ -235,6 +240,7 @@ struct sun4i_tcon_quirks {
+ 	bool	needs_de_be_mux; /* sun6i needs mux to select backend */
+ 	bool    needs_edp_reset; /* a80 edp reset needed for tcon0 access */
+ 	bool	supports_lvds;   /* Does the TCON support an LVDS output? */
++	bool	polarity_in_ch0; /* some tcon1 channels have polarity bits in tcon0 pol register */
+ 	u8	dclk_min_div;	/* minimum divider for TCON0 DCLK */
+ 
+ 	/* callback to handle tcon muxing options */
+diff --git a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+index 92add2cef2e7d..bbdfd5e26ec88 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
++++ b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+@@ -21,8 +21,7 @@ static void sun8i_dw_hdmi_encoder_mode_set(struct drm_encoder *encoder,
+ {
+ 	struct sun8i_dw_hdmi *hdmi = encoder_to_sun8i_dw_hdmi(encoder);
+ 
+-	if (hdmi->quirks->set_rate)
+-		clk_set_rate(hdmi->clk_tmds, mode->crtc_clock * 1000);
++	clk_set_rate(hdmi->clk_tmds, mode->crtc_clock * 1000);
+ }
+ 
+ static const struct drm_encoder_helper_funcs
+@@ -48,11 +47,9 @@ sun8i_dw_hdmi_mode_valid_h6(struct dw_hdmi *hdmi, void *data,
+ {
+ 	/*
+ 	 * Controller support maximum of 594 MHz, which correlates to
+-	 * 4K@60Hz 4:4:4 or RGB. However, for frequencies greater than
+-	 * 340 MHz scrambling has to be enabled. Because scrambling is
+-	 * not yet implemented, just limit to 340 MHz for now.
++	 * 4K@60Hz 4:4:4 or RGB.
+ 	 */
+-	if (mode->clock > 340000)
++	if (mode->clock > 594000)
+ 		return MODE_CLOCK_HIGH;
+ 
+ 	return MODE_OK;
+@@ -295,7 +292,6 @@ static int sun8i_dw_hdmi_remove(struct platform_device *pdev)
+ 
+ static const struct sun8i_dw_hdmi_quirks sun8i_a83t_quirks = {
+ 	.mode_valid = sun8i_dw_hdmi_mode_valid_a83t,
+-	.set_rate = true,
+ };
+ 
+ static const struct sun8i_dw_hdmi_quirks sun50i_h6_quirks = {
+diff --git a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
+index d983746fa194c..d4b55af0592f8 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
++++ b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
+@@ -179,7 +179,6 @@ struct sun8i_dw_hdmi_quirks {
+ 	enum drm_mode_status (*mode_valid)(struct dw_hdmi *hdmi, void *data,
+ 					   const struct drm_display_info *info,
+ 					   const struct drm_display_mode *mode);
+-	unsigned int set_rate : 1;
+ 	unsigned int use_drm_infoframe : 1;
+ };
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+index 35c2133724e2d..9994edf675096 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
++++ b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+@@ -104,29 +104,21 @@ static const struct dw_hdmi_mpll_config sun50i_h6_mpll_cfg[] = {
+ 
+ static const struct dw_hdmi_curr_ctrl sun50i_h6_cur_ctr[] = {
+ 	/* pixelclk    bpp8    bpp10   bpp12 */
+-	{ 25175000,  { 0x0000, 0x0000, 0x0000 }, },
+ 	{ 27000000,  { 0x0012, 0x0000, 0x0000 }, },
+-	{ 59400000,  { 0x0008, 0x0008, 0x0008 }, },
+-	{ 72000000,  { 0x0008, 0x0008, 0x001b }, },
+-	{ 74250000,  { 0x0013, 0x0013, 0x0013 }, },
+-	{ 90000000,  { 0x0008, 0x001a, 0x001b }, },
+-	{ 118800000, { 0x001b, 0x001a, 0x001b }, },
+-	{ 144000000, { 0x001b, 0x001a, 0x0034 }, },
+-	{ 180000000, { 0x001b, 0x0033, 0x0034 }, },
+-	{ 216000000, { 0x0036, 0x0033, 0x0034 }, },
+-	{ 237600000, { 0x0036, 0x0033, 0x001b }, },
+-	{ 288000000, { 0x0036, 0x001b, 0x001b }, },
+-	{ 297000000, { 0x0019, 0x001b, 0x0019 }, },
+-	{ 330000000, { 0x0036, 0x001b, 0x001b }, },
+-	{ 594000000, { 0x003f, 0x001b, 0x001b }, },
++	{ 74250000,  { 0x0013, 0x001a, 0x001b }, },
++	{ 148500000, { 0x0019, 0x0033, 0x0034 }, },
++	{ 297000000, { 0x0019, 0x001b, 0x001b }, },
++	{ 594000000, { 0x0010, 0x001b, 0x001b }, },
+ 	{ ~0UL,      { 0x0000, 0x0000, 0x0000 }, }
+ };
+ 
+ static const struct dw_hdmi_phy_config sun50i_h6_phy_config[] = {
+ 	/*pixelclk   symbol   term   vlev*/
+-	{ 74250000,  0x8009, 0x0004, 0x0232},
+-	{ 148500000, 0x8029, 0x0004, 0x0273},
+-	{ 594000000, 0x8039, 0x0004, 0x014a},
++	{ 27000000,  0x8009, 0x0007, 0x02b0 },
++	{ 74250000,  0x8009, 0x0006, 0x022d },
++	{ 148500000, 0x8029, 0x0006, 0x0270 },
++	{ 297000000, 0x8039, 0x0005, 0x01ab },
++	{ 594000000, 0x8029, 0x0000, 0x008a },
+ 	{ ~0UL,	     0x0000, 0x0000, 0x0000}
+ };
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index 5612cab552270..af4b8944a6032 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -220,7 +220,7 @@ static void vc4_plane_reset(struct drm_plane *plane)
+ 	__drm_atomic_helper_plane_reset(plane, &vc4_state->base);
+ }
+ 
+-static void vc4_dlist_write(struct vc4_plane_state *vc4_state, u32 val)
++static void vc4_dlist_counter_increment(struct vc4_plane_state *vc4_state)
+ {
+ 	if (vc4_state->dlist_count == vc4_state->dlist_size) {
+ 		u32 new_size = max(4u, vc4_state->dlist_count * 2);
+@@ -235,7 +235,15 @@ static void vc4_dlist_write(struct vc4_plane_state *vc4_state, u32 val)
+ 		vc4_state->dlist_size = new_size;
+ 	}
+ 
+-	vc4_state->dlist[vc4_state->dlist_count++] = val;
++	vc4_state->dlist_count++;
++}
++
++static void vc4_dlist_write(struct vc4_plane_state *vc4_state, u32 val)
++{
++	unsigned int idx = vc4_state->dlist_count;
++
++	vc4_dlist_counter_increment(vc4_state);
++	vc4_state->dlist[idx] = val;
+ }
+ 
+ /* Returns the scl0/scl1 field based on whether the dimensions need to
+@@ -978,8 +986,10 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
+ 		 * be set when calling vc4_plane_allocate_lbm().
+ 		 */
+ 		if (vc4_state->y_scaling[0] != VC4_SCALING_NONE ||
+-		    vc4_state->y_scaling[1] != VC4_SCALING_NONE)
+-			vc4_state->lbm_offset = vc4_state->dlist_count++;
++		    vc4_state->y_scaling[1] != VC4_SCALING_NONE) {
++			vc4_state->lbm_offset = vc4_state->dlist_count;
++			vc4_dlist_counter_increment(vc4_state);
++		}
+ 
+ 		if (num_planes > 1) {
+ 			/* Emit Cb/Cr as channel 0 and Y as channel
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index f41f51a176a1d..6747353345475 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -57,6 +57,8 @@
+ #define STM32F7_I2C_CR1_RXDMAEN			BIT(15)
+ #define STM32F7_I2C_CR1_TXDMAEN			BIT(14)
+ #define STM32F7_I2C_CR1_ANFOFF			BIT(12)
++#define STM32F7_I2C_CR1_DNF_MASK		GENMASK(11, 8)
++#define STM32F7_I2C_CR1_DNF(n)			(((n) & 0xf) << 8)
+ #define STM32F7_I2C_CR1_ERRIE			BIT(7)
+ #define STM32F7_I2C_CR1_TCIE			BIT(6)
+ #define STM32F7_I2C_CR1_STOPIE			BIT(5)
+@@ -160,7 +162,7 @@ enum {
+ };
+ 
+ #define STM32F7_I2C_DNF_DEFAULT			0
+-#define STM32F7_I2C_DNF_MAX			16
++#define STM32F7_I2C_DNF_MAX			15
+ 
+ #define STM32F7_I2C_ANALOG_FILTER_ENABLE	1
+ #define STM32F7_I2C_ANALOG_FILTER_DELAY_MIN	50	/* ns */
+@@ -725,6 +727,13 @@ static void stm32f7_i2c_hw_config(struct stm32f7_i2c_dev *i2c_dev)
+ 	else
+ 		stm32f7_i2c_set_bits(i2c_dev->base + STM32F7_I2C_CR1,
+ 				     STM32F7_I2C_CR1_ANFOFF);
++
++	/* Program the Digital Filter */
++	stm32f7_i2c_clr_bits(i2c_dev->base + STM32F7_I2C_CR1,
++			     STM32F7_I2C_CR1_DNF_MASK);
++	stm32f7_i2c_set_bits(i2c_dev->base + STM32F7_I2C_CR1,
++			     STM32F7_I2C_CR1_DNF(i2c_dev->setup.dnf));
++
+ 	stm32f7_i2c_set_bits(i2c_dev->base + STM32F7_I2C_CR1,
+ 			     STM32F7_I2C_CR1_PE);
+ }
+diff --git a/drivers/misc/lkdtm/Makefile b/drivers/misc/lkdtm/Makefile
+index c70b3822013f4..30c8ac24635d4 100644
+--- a/drivers/misc/lkdtm/Makefile
++++ b/drivers/misc/lkdtm/Makefile
+@@ -16,7 +16,7 @@ KCOV_INSTRUMENT_rodata.o	:= n
+ 
+ OBJCOPYFLAGS :=
+ OBJCOPYFLAGS_rodata_objcopy.o	:= \
+-			--rename-section .text=.rodata,alloc,readonly,load
++			--rename-section .noinstr.text=.rodata,alloc,readonly,load
+ targets += rodata.o rodata_objcopy.o
+ $(obj)/rodata_objcopy.o: $(obj)/rodata.o FORCE
+ 	$(call if_changed,objcopy)
+diff --git a/drivers/misc/lkdtm/rodata.c b/drivers/misc/lkdtm/rodata.c
+index 58d180af72cf0..baacb876d1d94 100644
+--- a/drivers/misc/lkdtm/rodata.c
++++ b/drivers/misc/lkdtm/rodata.c
+@@ -5,7 +5,7 @@
+  */
+ #include "lkdtm.h"
+ 
+-void notrace lkdtm_rodata_do_nothing(void)
++void noinstr lkdtm_rodata_do_nothing(void)
+ {
+ 	/* Does nothing. We just want an architecture agnostic "return". */
+ }
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index c444ef3da3e24..89d7c9b231863 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -214,9 +214,24 @@ static void felix_phylink_mac_link_down(struct dsa_switch *ds, int port,
+ {
+ 	struct ocelot *ocelot = ds->priv;
+ 	struct ocelot_port *ocelot_port = ocelot->ports[port];
++	int err;
++
++	ocelot_port_rmwl(ocelot_port, 0, DEV_MAC_ENA_CFG_RX_ENA,
++			 DEV_MAC_ENA_CFG);
+ 
+-	ocelot_port_writel(ocelot_port, 0, DEV_MAC_ENA_CFG);
+ 	ocelot_fields_write(ocelot, port, QSYS_SWITCH_PORT_MODE_PORT_ENA, 0);
++
++	err = ocelot_port_flush(ocelot, port);
++	if (err)
++		dev_err(ocelot->dev, "failed to flush port %d: %d\n",
++			port, err);
++
++	/* Put the port in reset. */
++	ocelot_port_writel(ocelot_port,
++			   DEV_CLOCK_CFG_MAC_TX_RST |
++			   DEV_CLOCK_CFG_MAC_RX_RST |
++			   DEV_CLOCK_CFG_LINK_SPEED(OCELOT_SPEED_1000),
++			   DEV_CLOCK_CFG);
+ }
+ 
+ static void felix_phylink_mac_link_up(struct dsa_switch *ds, int port,
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+index 4cbf1667d7ff4..014ca6ae121f8 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+@@ -196,6 +196,8 @@ enum enetc_bdr_type {TX, RX};
+ #define ENETC_CBS_BW_MASK	GENMASK(6, 0)
+ #define ENETC_PTCCBSR1(n)	(0x1114 + (n) * 8) /* n = 0 to 7*/
+ #define ENETC_RSSHASH_KEY_SIZE	40
++#define ENETC_PRSSCAPR		0x1404
++#define ENETC_PRSSCAPR_GET_NUM_RSS(val)	(BIT((val) & 0xf) * 32)
+ #define ENETC_PRSSK(n)		(0x1410 + (n) * 4) /* n = [0..9] */
+ #define ENETC_PSIVLANFMR	0x1700
+ #define ENETC_PSIVLANFMR_VS	BIT(0)
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index 419306342ac51..06514af0df106 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -1004,6 +1004,51 @@ static void enetc_phylink_destroy(struct enetc_ndev_priv *priv)
+ 		phylink_destroy(priv->phylink);
+ }
+ 
++/* Initialize the entire shared memory for the flow steering entries
++ * of this port (PF + VFs)
++ */
++static int enetc_init_port_rfs_memory(struct enetc_si *si)
++{
++	struct enetc_cmd_rfse rfse = {0};
++	struct enetc_hw *hw = &si->hw;
++	int num_rfs, i, err = 0;
++	u32 val;
++
++	val = enetc_port_rd(hw, ENETC_PRFSCAPR);
++	num_rfs = ENETC_PRFSCAPR_GET_NUM_RFS(val);
++
++	for (i = 0; i < num_rfs; i++) {
++		err = enetc_set_fs_entry(si, &rfse, i);
++		if (err)
++			break;
++	}
++
++	return err;
++}
++
++static int enetc_init_port_rss_memory(struct enetc_si *si)
++{
++	struct enetc_hw *hw = &si->hw;
++	int num_rss, err;
++	int *rss_table;
++	u32 val;
++
++	val = enetc_port_rd(hw, ENETC_PRSSCAPR);
++	num_rss = ENETC_PRSSCAPR_GET_NUM_RSS(val);
++	if (!num_rss)
++		return 0;
++
++	rss_table = kcalloc(num_rss, sizeof(*rss_table), GFP_KERNEL);
++	if (!rss_table)
++		return -ENOMEM;
++
++	err = enetc_set_rss_table(si, rss_table, num_rss);
++
++	kfree(rss_table);
++
++	return err;
++}
++
+ static int enetc_pf_probe(struct pci_dev *pdev,
+ 			  const struct pci_device_id *ent)
+ {
+@@ -1058,6 +1103,18 @@ static int enetc_pf_probe(struct pci_dev *pdev,
+ 		goto err_alloc_si_res;
+ 	}
+ 
++	err = enetc_init_port_rfs_memory(si);
++	if (err) {
++		dev_err(&pdev->dev, "Failed to initialize RFS memory\n");
++		goto err_init_port_rfs;
++	}
++
++	err = enetc_init_port_rss_memory(si);
++	if (err) {
++		dev_err(&pdev->dev, "Failed to initialize RSS memory\n");
++		goto err_init_port_rss;
++	}
++
+ 	err = enetc_alloc_msix(priv);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "MSIX alloc failed\n");
+@@ -1086,6 +1143,8 @@ err_phylink_create:
+ 	enetc_mdiobus_destroy(pf);
+ err_mdiobus_create:
+ 	enetc_free_msix(priv);
++err_init_port_rss:
++err_init_port_rfs:
+ err_alloc_msix:
+ 	enetc_free_si_resources(priv);
+ err_alloc_si_res:
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 4321132a4f630..c40820baf48a6 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -9404,12 +9404,19 @@ int hclge_reset_tqp(struct hnae3_handle *handle, u16 queue_id)
+ 
+ void hclge_reset_vf_queue(struct hclge_vport *vport, u16 queue_id)
+ {
++	struct hnae3_handle *handle = &vport->nic;
+ 	struct hclge_dev *hdev = vport->back;
+ 	int reset_try_times = 0;
+ 	int reset_status;
+ 	u16 queue_gid;
+ 	int ret;
+ 
++	if (queue_id >= handle->kinfo.num_tqps) {
++		dev_warn(&hdev->pdev->dev, "Invalid vf queue id(%u)\n",
++			 queue_id);
++		return;
++	}
++
+ 	queue_gid = hclge_covert_handle_qid_global(&vport->nic, queue_id);
+ 
+ 	ret = hclge_send_reset_tqp_cmd(hdev, queue_gid, true);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index 3ab6db2588d31..9c8004fc9dc4f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -158,21 +158,31 @@ static int hclge_get_ring_chain_from_mbx(
+ 			struct hclge_vport *vport)
+ {
+ 	struct hnae3_ring_chain_node *cur_chain, *new_chain;
++	struct hclge_dev *hdev = vport->back;
+ 	int ring_num;
+-	int i = 0;
++	int i;
+ 
+ 	ring_num = req->msg.ring_num;
+ 
+ 	if (ring_num > HCLGE_MBX_MAX_RING_CHAIN_PARAM_NUM)
+ 		return -ENOMEM;
+ 
++	for (i = 0; i < ring_num; i++) {
++		if (req->msg.param[i].tqp_index >= vport->nic.kinfo.rss_size) {
++			dev_err(&hdev->pdev->dev, "tqp index(%u) is out of range(0-%u)\n",
++				req->msg.param[i].tqp_index,
++				vport->nic.kinfo.rss_size - 1);
++			return -EINVAL;
++		}
++	}
++
+ 	hnae3_set_bit(ring_chain->flag, HNAE3_RING_TYPE_B,
+-		      req->msg.param[i].ring_type);
++		      req->msg.param[0].ring_type);
+ 	ring_chain->tqp_index =
+ 		hclge_get_queue_id(vport->nic.kinfo.tqp
+-				   [req->msg.param[i].tqp_index]);
++				   [req->msg.param[0].tqp_index]);
+ 	hnae3_set_field(ring_chain->int_gl_idx, HNAE3_RING_GL_IDX_M,
+-			HNAE3_RING_GL_IDX_S, req->msg.param[i].int_gl_index);
++			HNAE3_RING_GL_IDX_S, req->msg.param[0].int_gl_index);
+ 
+ 	cur_chain = ring_chain;
+ 
+@@ -581,6 +591,17 @@ static void hclge_get_rss_key(struct hclge_vport *vport,
+ 
+ 	index = mbx_req->msg.data[0];
+ 
++	/* Check the query index of rss_hash_key from VF, make sure no
++	 * more than the size of rss_hash_key.
++	 */
++	if (((index + 1) * HCLGE_RSS_MBX_RESP_LEN) >
++	      sizeof(vport[0].rss_hash_key)) {
++		dev_warn(&hdev->pdev->dev,
++			 "failed to get the rss hash key, the index(%u) invalid !\n",
++			 index);
++		return;
++	}
++
+ 	memcpy(resp_msg->data,
+ 	       &hdev->vport[0].rss_hash_key[index * HCLGE_RSS_MBX_RESP_LEN],
+ 	       HCLGE_RSS_MBX_RESP_LEN);
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 2f281d0f98070..ee16e0e4fa5fc 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -4813,7 +4813,22 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
+ 				complete(&adapter->init_done);
+ 				adapter->init_done_rc = -EIO;
+ 			}
+-			ibmvnic_reset(adapter, VNIC_RESET_FAILOVER);
++			rc = ibmvnic_reset(adapter, VNIC_RESET_FAILOVER);
++			if (rc && rc != -EBUSY) {
++				/* We were unable to schedule the failover
++				 * reset either because the adapter was still
++				 * probing (eg: during kexec) or we could not
++				 * allocate memory. Clear the failover_pending
++				 * flag since no one else will. We ignore
++				 * EBUSY because it means either FAILOVER reset
++				 * is already scheduled or the adapter is
++				 * being removed.
++				 */
++				netdev_err(netdev,
++					   "Error %ld scheduling failover reset\n",
++					   rc);
++				adapter->failover_pending = false;
++			}
+ 			break;
+ 		case IBMVNIC_CRQ_INIT_COMPLETE:
+ 			dev_info(dev, "Partner initialization complete\n");
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index d4768dcb6c699..aa400b925b08e 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -348,6 +348,60 @@ static void ocelot_vlan_init(struct ocelot *ocelot)
+ 	}
+ }
+ 
++static u32 ocelot_read_eq_avail(struct ocelot *ocelot, int port)
++{
++	return ocelot_read_rix(ocelot, QSYS_SW_STATUS, port);
++}
++
++int ocelot_port_flush(struct ocelot *ocelot, int port)
++{
++	int err, val;
++
++	/* Disable dequeuing from the egress queues */
++	ocelot_rmw_rix(ocelot, QSYS_PORT_MODE_DEQUEUE_DIS,
++		       QSYS_PORT_MODE_DEQUEUE_DIS,
++		       QSYS_PORT_MODE, port);
++
++	/* Disable flow control */
++	ocelot_fields_write(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA, 0);
++
++	/* Disable priority flow control */
++	ocelot_fields_write(ocelot, port,
++			    QSYS_SWITCH_PORT_MODE_TX_PFC_ENA, 0);
++
++	/* Wait at least the time it takes to receive a frame of maximum length
++	 * at the port.
++	 * Worst-case delays for 10 kilobyte jumbo frames are:
++	 * 8 ms on a 10M port
++	 * 800 μs on a 100M port
++	 * 80 μs on a 1G port
++	 * 32 μs on a 2.5G port
++	 */
++	usleep_range(8000, 10000);
++
++	/* Disable half duplex backpressure. */
++	ocelot_rmw_rix(ocelot, 0, SYS_FRONT_PORT_MODE_HDX_MODE,
++		       SYS_FRONT_PORT_MODE, port);
++
++	/* Flush the queues associated with the port. */
++	ocelot_rmw_gix(ocelot, REW_PORT_CFG_FLUSH_ENA, REW_PORT_CFG_FLUSH_ENA,
++		       REW_PORT_CFG, port);
++
++	/* Enable dequeuing from the egress queues. */
++	ocelot_rmw_rix(ocelot, 0, QSYS_PORT_MODE_DEQUEUE_DIS, QSYS_PORT_MODE,
++		       port);
++
++	/* Wait until flushing is complete. */
++	err = read_poll_timeout(ocelot_read_eq_avail, val, !val,
++				100, 2000000, false, ocelot, port);
++
++	/* Clear flushing again. */
++	ocelot_rmw_gix(ocelot, 0, REW_PORT_CFG_FLUSH_ENA, REW_PORT_CFG, port);
++
++	return err;
++}
++EXPORT_SYMBOL(ocelot_port_flush);
++
+ void ocelot_adjust_link(struct ocelot *ocelot, int port,
+ 			struct phy_device *phydev)
+ {
+diff --git a/drivers/net/ethernet/mscc/ocelot_io.c b/drivers/net/ethernet/mscc/ocelot_io.c
+index 0acb459484185..ea4e83410fe4d 100644
+--- a/drivers/net/ethernet/mscc/ocelot_io.c
++++ b/drivers/net/ethernet/mscc/ocelot_io.c
+@@ -71,6 +71,14 @@ void ocelot_port_writel(struct ocelot_port *port, u32 val, u32 reg)
+ }
+ EXPORT_SYMBOL(ocelot_port_writel);
+ 
++void ocelot_port_rmwl(struct ocelot_port *port, u32 val, u32 mask, u32 reg)
++{
++	u32 cur = ocelot_port_readl(port, reg);
++
++	ocelot_port_writel(port, (cur & (~mask)) | val, reg);
++}
++EXPORT_SYMBOL(ocelot_port_rmwl);
++
+ u32 __ocelot_target_read_ix(struct ocelot *ocelot, enum ocelot_target target,
+ 			    u32 reg, u32 offset)
+ {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+index 06553d028d746..6088071cb1923 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+@@ -330,7 +330,12 @@ static int tc_setup_cbs(struct stmmac_priv *priv,
+ 
+ 		priv->plat->tx_queues_cfg[queue].mode_to_use = MTL_QUEUE_AVB;
+ 	} else if (!qopt->enable) {
+-		return stmmac_dma_qmode(priv, priv->ioaddr, queue, MTL_QUEUE_DCB);
++		ret = stmmac_dma_qmode(priv, priv->ioaddr, queue,
++				       MTL_QUEUE_DCB);
++		if (ret)
++			return ret;
++
++		priv->plat->tx_queues_cfg[queue].mode_to_use = MTL_QUEUE_DCB;
+ 	}
+ 
+ 	/* Port Transmit Rate and Speed Divider */
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index 0c3de94b51787..6a7ab930ef70d 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -1253,8 +1253,11 @@ static int netvsc_receive(struct net_device *ndev,
+ 		ret = rndis_filter_receive(ndev, net_device,
+ 					   nvchan, data, buflen);
+ 
+-		if (unlikely(ret != NVSP_STAT_SUCCESS))
++		if (unlikely(ret != NVSP_STAT_SUCCESS)) {
++			/* Drop incomplete packet */
++			nvchan->rsc.cnt = 0;
+ 			status = NVSP_STAT_FAIL;
++		}
+ 	}
+ 
+ 	enq_receive_complete(ndev, net_device, q_idx,
+diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
+index b22e47bcfeca1..90bc0008fa2fd 100644
+--- a/drivers/net/hyperv/rndis_filter.c
++++ b/drivers/net/hyperv/rndis_filter.c
+@@ -508,8 +508,6 @@ static int rndis_filter_receive_data(struct net_device *ndev,
+ 	return ret;
+ 
+ drop:
+-	/* Drop incomplete packet */
+-	nvchan->rsc.cnt = 0;
+ 	return NVSP_STAT_FAIL;
+ }
+ 
+diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
+index 4a68da7115d19..2a65efd3e8da9 100644
+--- a/drivers/net/ipa/gsi.c
++++ b/drivers/net/ipa/gsi.c
+@@ -1573,6 +1573,7 @@ static int gsi_channel_setup(struct gsi *gsi, bool legacy)
+ 		if (!channel->gsi)
+ 			continue;	/* Ignore uninitialized channels */
+ 
++		ret = -EINVAL;
+ 		dev_err(gsi->dev, "channel %u not supported by hardware\n",
+ 			channel_id - 1);
+ 		channel_id = gsi->channel_count;
+diff --git a/drivers/net/wan/hdlc_x25.c b/drivers/net/wan/hdlc_x25.c
+index f52b9fed05931..34bc53facd11c 100644
+--- a/drivers/net/wan/hdlc_x25.c
++++ b/drivers/net/wan/hdlc_x25.c
+@@ -171,11 +171,11 @@ static int x25_open(struct net_device *dev)
+ 
+ 	result = lapb_register(dev, &cb);
+ 	if (result != LAPB_OK)
+-		return result;
++		return -ENOMEM;
+ 
+ 	result = lapb_getparms(dev, &params);
+ 	if (result != LAPB_OK)
+-		return result;
++		return -EINVAL;
+ 
+ 	if (state(hdlc)->settings.dce)
+ 		params.mode = params.mode | LAPB_DCE;
+@@ -190,7 +190,7 @@ static int x25_open(struct net_device *dev)
+ 
+ 	result = lapb_setparms(dev, &params);
+ 	if (result != LAPB_OK)
+-		return result;
++		return -EINVAL;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/wireless/ath/ath9k/Kconfig b/drivers/net/wireless/ath/ath9k/Kconfig
+index a84bb9b6573f8..e150d82eddb6c 100644
+--- a/drivers/net/wireless/ath/ath9k/Kconfig
++++ b/drivers/net/wireless/ath/ath9k/Kconfig
+@@ -21,11 +21,9 @@ config ATH9K_BTCOEX_SUPPORT
+ config ATH9K
+ 	tristate "Atheros 802.11n wireless cards support"
+ 	depends on MAC80211 && HAS_DMA
++	select MAC80211_LEDS if LEDS_CLASS=y || LEDS_CLASS=MAC80211
+ 	select ATH9K_HW
+ 	select ATH9K_COMMON
+-	imply NEW_LEDS
+-	imply LEDS_CLASS
+-	imply MAC80211_LEDS
+ 	help
+ 	  This module adds support for wireless adapters based on
+ 	  Atheros IEEE 802.11n AR5008, AR9001 and AR9002 family
+@@ -176,11 +174,9 @@ config ATH9K_PCI_NO_EEPROM
+ config ATH9K_HTC
+ 	tristate "Atheros HTC based wireless cards support"
+ 	depends on USB && MAC80211
++	select MAC80211_LEDS if LEDS_CLASS=y || LEDS_CLASS=MAC80211
+ 	select ATH9K_HW
+ 	select ATH9K_COMMON
+-	imply NEW_LEDS
+-	imply LEDS_CLASS
+-	imply MAC80211_LEDS
+ 	help
+ 	  Support for Atheros HTC based cards.
+ 	  Chipsets supported: AR9271
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 145e839fea4e5..917617aad8d3c 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -519,15 +519,17 @@ static void
+ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
+ 		  int len, bool more)
+ {
+-	struct page *page = virt_to_head_page(data);
+-	int offset = data - page_address(page);
+ 	struct sk_buff *skb = q->rx_head;
+ 	struct skb_shared_info *shinfo = skb_shinfo(skb);
+ 
+ 	if (shinfo->nr_frags < ARRAY_SIZE(shinfo->frags)) {
+-		offset += q->buf_offset;
++		struct page *page = virt_to_head_page(data);
++		int offset = data - page_address(page) + q->buf_offset;
++
+ 		skb_add_rx_frag(skb, shinfo->nr_frags, page, offset, len,
+ 				q->buf_size);
++	} else {
++		skb_free_frag(data);
+ 	}
+ 
+ 	if (more)
+diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
+index b8febe1d1bfd3..accc991d153f7 100644
+--- a/drivers/net/xen-netback/rx.c
++++ b/drivers/net/xen-netback/rx.c
+@@ -38,10 +38,15 @@ static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
+ 	RING_IDX prod, cons;
+ 	struct sk_buff *skb;
+ 	int needed;
++	unsigned long flags;
++
++	spin_lock_irqsave(&queue->rx_queue.lock, flags);
+ 
+ 	skb = skb_peek(&queue->rx_queue);
+-	if (!skb)
++	if (!skb) {
++		spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
+ 		return false;
++	}
+ 
+ 	needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
+ 	if (skb_is_gso(skb))
+@@ -49,6 +54,8 @@ static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
+ 	if (skb->sw_hash)
+ 		needed++;
+ 
++	spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
++
+ 	do {
+ 		prod = queue->rx.sring->req_prod;
+ 		cons = queue->rx.req_cons;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index a32494cde61f7..4a33287371bda 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3247,6 +3247,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 	{ PCI_DEVICE(0x144d, 0xa822),   /* Samsung PM1725a */
+ 		.driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY |
+ 				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
++	{ PCI_DEVICE(0x1987, 0x5016),	/* Phison E16 */
++		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ 	{ PCI_DEVICE(0x1d1d, 0x1f1f),	/* LighNVM qemu device */
+ 		.driver_data = NVME_QUIRK_LIGHTNVM, },
+ 	{ PCI_DEVICE(0x1d1d, 0x2807),	/* CNEX WL */
+diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
+index 18bf8aeb5f870..e94e59283ecb9 100644
+--- a/drivers/platform/x86/hp-wmi.c
++++ b/drivers/platform/x86/hp-wmi.c
+@@ -32,6 +32,10 @@ MODULE_LICENSE("GPL");
+ MODULE_ALIAS("wmi:95F24279-4D7B-4334-9387-ACCDC67EF61C");
+ MODULE_ALIAS("wmi:5FB7F034-2C63-45e9-BE91-3D44E2C707E4");
+ 
++static int enable_tablet_mode_sw = -1;
++module_param(enable_tablet_mode_sw, int, 0444);
++MODULE_PARM_DESC(enable_tablet_mode_sw, "Enable SW_TABLET_MODE reporting (-1=auto, 0=no, 1=yes)");
++
+ #define HPWMI_EVENT_GUID "95F24279-4D7B-4334-9387-ACCDC67EF61C"
+ #define HPWMI_BIOS_GUID "5FB7F034-2C63-45e9-BE91-3D44E2C707E4"
+ 
+@@ -654,10 +658,12 @@ static int __init hp_wmi_input_setup(void)
+ 	}
+ 
+ 	/* Tablet mode */
+-	val = hp_wmi_hw_state(HPWMI_TABLET_MASK);
+-	if (!(val < 0)) {
+-		__set_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit);
+-		input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE, val);
++	if (enable_tablet_mode_sw > 0) {
++		val = hp_wmi_hw_state(HPWMI_TABLET_MASK);
++		if (val >= 0) {
++			__set_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit);
++			input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE, val);
++		}
+ 	}
+ 
+ 	err = sparse_keymap_setup(hp_wmi_input_dev, hp_wmi_keymap, NULL);
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index 69f1a0457f51e..03c81cec6bc98 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -714,6 +714,9 @@ __lpfc_nvme_ls_req(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ 		return -ENODEV;
+ 	}
+ 
++	if (!vport->phba->sli4_hba.nvmels_wq)
++		return -ENOMEM;
++
+ 	/*
+ 	 * there are two dma buf in the request, actually there is one and
+ 	 * the second one is just the start address + cmd size.
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 4a08c450b756f..b6540b92f5661 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -6881,6 +6881,7 @@ static void __exit scsi_debug_exit(void)
+ 
+ 	sdebug_erase_all_stores(false);
+ 	xa_destroy(per_store_ap);
++	kfree(sdebug_q_arr);
+ }
+ 
+ device_initcall(scsi_debug_init);
+diff --git a/drivers/soc/ti/omap_prm.c b/drivers/soc/ti/omap_prm.c
+index 4d41dc3cdce1f..c8b14b3a171f7 100644
+--- a/drivers/soc/ti/omap_prm.c
++++ b/drivers/soc/ti/omap_prm.c
+@@ -552,6 +552,7 @@ static int omap_prm_reset_init(struct platform_device *pdev,
+ 	const struct omap_rst_map *map;
+ 	struct ti_prm_platform_data *pdata = dev_get_platdata(&pdev->dev);
+ 	char buf[32];
++	u32 v;
+ 
+ 	/*
+ 	 * Check if we have controllable resets. If either rstctrl is non-zero
+@@ -599,6 +600,16 @@ static int omap_prm_reset_init(struct platform_device *pdev,
+ 		map++;
+ 	}
+ 
++	/* Quirk handling to assert rst_map_012 bits on reset and avoid errors */
++	if (prm->data->rstmap == rst_map_012) {
++		v = readl_relaxed(reset->prm->base + reset->prm->data->rstctrl);
++		if ((v & reset->mask) != reset->mask) {
++			dev_dbg(&pdev->dev, "Asserting all resets: %08x\n", v);
++			writel_relaxed(reset->mask, reset->prm->base +
++				       reset->prm->data->rstctrl);
++		}
++	}
++
+ 	return devm_reset_controller_register(&pdev->dev, &reset->rcdev);
+ }
+ 
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 2c6b9578a7d36..99908d8d2dd36 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -1646,9 +1646,16 @@ static void __usb_hcd_giveback_urb(struct urb *urb)
+ 
+ 	/* pass ownership to the completion handler */
+ 	urb->status = status;
+-	kcov_remote_start_usb((u64)urb->dev->bus->busnum);
++	/*
++	 * This function can be called in task context inside another remote
++	 * coverage collection section, but KCOV doesn't support that kind of
++	 * recursion yet. Only collect coverage in softirq context for now.
++	 */
++	if (in_serving_softirq())
++		kcov_remote_start_usb((u64)urb->dev->bus->busnum);
+ 	urb->complete(urb);
+-	kcov_remote_stop();
++	if (in_serving_softirq())
++		kcov_remote_stop();
+ 
+ 	usb_anchor_resume_wakeups(anchor);
+ 	atomic_dec(&urb->use_count);
+diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
+index dc15373354144..2a93b7c9c1599 100644
+--- a/drivers/xen/xenbus/xenbus.h
++++ b/drivers/xen/xenbus/xenbus.h
+@@ -115,7 +115,6 @@ int xenbus_probe_node(struct xen_bus_type *bus,
+ 		      const char *type,
+ 		      const char *nodename);
+ int xenbus_probe_devices(struct xen_bus_type *bus);
+-void xenbus_probe(void);
+ 
+ void xenbus_dev_changed(const char *node, struct xen_bus_type *bus);
+ 
+diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
+index 18ffd0551b542..8a75092bb148b 100644
+--- a/drivers/xen/xenbus/xenbus_probe.c
++++ b/drivers/xen/xenbus/xenbus_probe.c
+@@ -683,7 +683,7 @@ void unregister_xenstore_notifier(struct notifier_block *nb)
+ }
+ EXPORT_SYMBOL_GPL(unregister_xenstore_notifier);
+ 
+-void xenbus_probe(void)
++static void xenbus_probe(void)
+ {
+ 	xenstored_ready = 1;
+ 
+diff --git a/fs/Kconfig b/fs/Kconfig
+index aa4c122823018..da524c4d7b7e0 100644
+--- a/fs/Kconfig
++++ b/fs/Kconfig
+@@ -203,7 +203,7 @@ config TMPFS_XATTR
+ 
+ config TMPFS_INODE64
+ 	bool "Use 64-bit ino_t by default in tmpfs"
+-	depends on TMPFS && 64BIT
++	depends on TMPFS && 64BIT && !(S390 || ALPHA)
+ 	default n
+ 	help
+ 	  tmpfs has historically used only inode numbers as wide as an unsigned
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 955ecd4030f04..89d5d59c7d7a4 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -84,6 +84,14 @@ int ovl_copy_xattr(struct super_block *sb, struct dentry *old,
+ 
+ 		if (ovl_is_private_xattr(sb, name))
+ 			continue;
++
++		error = security_inode_copy_up_xattr(name);
++		if (error < 0 && error != -EOPNOTSUPP)
++			break;
++		if (error == 1) {
++			error = 0;
++			continue; /* Discard */
++		}
+ retry:
+ 		size = vfs_getxattr(old, name, value, value_size);
+ 		if (size == -ERANGE)
+@@ -107,13 +115,6 @@ retry:
+ 			goto retry;
+ 		}
+ 
+-		error = security_inode_copy_up_xattr(name);
+-		if (error < 0 && error != -EOPNOTSUPP)
+-			break;
+-		if (error == 1) {
+-			error = 0;
+-			continue; /* Discard */
+-		}
+ 		error = vfs_setxattr(new, name, value, size, 0);
+ 		if (error) {
+ 			if (error != -EOPNOTSUPP || ovl_must_copy_xattr(name))
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index b584dca845baa..4fadafd8bdc12 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -346,7 +346,9 @@ int ovl_xattr_set(struct dentry *dentry, struct inode *inode, const char *name,
+ 		goto out;
+ 
+ 	if (!value && !upperdentry) {
++		old_cred = ovl_override_creds(dentry->d_sb);
+ 		err = vfs_getxattr(realdentry, name, NULL, 0);
++		revert_creds(old_cred);
+ 		if (err < 0)
+ 			goto out_drop_write;
+ 	}
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index d23177a53c95f..50529a4e7bf39 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -79,7 +79,7 @@ static void ovl_dentry_release(struct dentry *dentry)
+ static struct dentry *ovl_d_real(struct dentry *dentry,
+ 				 const struct inode *inode)
+ {
+-	struct dentry *real;
++	struct dentry *real = NULL, *lower;
+ 
+ 	/* It's an overlay file */
+ 	if (inode && d_inode(dentry) == inode)
+@@ -98,9 +98,10 @@ static struct dentry *ovl_d_real(struct dentry *dentry,
+ 	if (real && !inode && ovl_has_upperdata(d_inode(dentry)))
+ 		return real;
+ 
+-	real = ovl_dentry_lowerdata(dentry);
+-	if (!real)
++	lower = ovl_dentry_lowerdata(dentry);
++	if (!lower)
+ 		goto bug;
++	real = lower;
+ 
+ 	/* Handle recursion */
+ 	real = d_real(real, inode);
+@@ -108,8 +109,10 @@ static struct dentry *ovl_d_real(struct dentry *dentry,
+ 	if (!inode || inode == d_inode(real))
+ 		return real;
+ bug:
+-	WARN(1, "ovl_d_real(%pd4, %s:%lu): real dentry not found\n", dentry,
+-	     inode ? inode->i_sb->s_id : "NULL", inode ? inode->i_ino : 0);
++	WARN(1, "%s(%pd4, %s:%lu): real dentry (%p/%lu) not found\n",
++	     __func__, dentry, inode ? inode->i_sb->s_id : "NULL",
++	     inode ? inode->i_ino : 0, real,
++	     real && d_inode(real) ? d_inode(real)->i_ino : 0);
+ 	return dentry;
+ }
+ 
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index b2b3d81b1535a..b97c628ad91ff 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -459,7 +459,7 @@
+ 	}								\
+ 									\
+ 	/* Built-in firmware blobs */					\
+-	.builtin_fw        : AT(ADDR(.builtin_fw) - LOAD_OFFSET) {	\
++	.builtin_fw : AT(ADDR(.builtin_fw) - LOAD_OFFSET) ALIGN(8) {	\
+ 		__start_builtin_fw = .;					\
+ 		KEEP(*(.builtin_fw))					\
+ 		__end_builtin_fw = .;					\
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 7c3da0e1ea9d4..9de5312edeb86 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -4313,6 +4313,7 @@ static inline void netif_tx_disable(struct net_device *dev)
+ 
+ 	local_bh_disable();
+ 	cpu = smp_processor_id();
++	spin_lock(&dev->tx_global_lock);
+ 	for (i = 0; i < dev->num_tx_queues; i++) {
+ 		struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
+ 
+@@ -4320,6 +4321,7 @@ static inline void netif_tx_disable(struct net_device *dev)
+ 		netif_tx_stop_queue(txq);
+ 		__netif_tx_unlock(txq);
+ 	}
++	spin_unlock(&dev->tx_global_lock);
+ 	local_bh_enable();
+ }
+ 
+diff --git a/include/linux/uio.h b/include/linux/uio.h
+index 72d88566694ee..27ff8eb786dc3 100644
+--- a/include/linux/uio.h
++++ b/include/linux/uio.h
+@@ -260,7 +260,13 @@ static inline void iov_iter_reexpand(struct iov_iter *i, size_t count)
+ {
+ 	i->count = count;
+ }
+-size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *csump, struct iov_iter *i);
++
++struct csum_state {
++	__wsum csum;
++	size_t off;
++};
++
++size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *csstate, struct iov_iter *i);
+ size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i);
+ bool csum_and_copy_from_iter_full(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i);
+ size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp,
+diff --git a/include/net/switchdev.h b/include/net/switchdev.h
+index 53e8b4994296d..8528015590e44 100644
+--- a/include/net/switchdev.h
++++ b/include/net/switchdev.h
+@@ -41,7 +41,6 @@ enum switchdev_attr_id {
+ 	SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED,
+ 	SWITCHDEV_ATTR_ID_BRIDGE_MROUTER,
+ #if IS_ENABLED(CONFIG_BRIDGE_MRP)
+-	SWITCHDEV_ATTR_ID_MRP_PORT_STATE,
+ 	SWITCHDEV_ATTR_ID_MRP_PORT_ROLE,
+ #endif
+ };
+@@ -60,7 +59,6 @@ struct switchdev_attr {
+ 		bool vlan_filtering;			/* BRIDGE_VLAN_FILTERING */
+ 		bool mc_disabled;			/* MC_DISABLED */
+ #if IS_ENABLED(CONFIG_BRIDGE_MRP)
+-		u8 mrp_port_state;			/* MRP_PORT_STATE */
+ 		u8 mrp_port_role;			/* MRP_PORT_ROLE */
+ #endif
+ 	} u;
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index 49b46df476f2c..4971b45860a4d 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -703,6 +703,7 @@ struct ocelot_policer {
+ /* I/O */
+ u32 ocelot_port_readl(struct ocelot_port *port, u32 reg);
+ void ocelot_port_writel(struct ocelot_port *port, u32 val, u32 reg);
++void ocelot_port_rmwl(struct ocelot_port *port, u32 val, u32 mask, u32 reg);
+ u32 __ocelot_read_ix(struct ocelot *ocelot, u32 reg, u32 offset);
+ void __ocelot_write_ix(struct ocelot *ocelot, u32 val, u32 reg, u32 offset);
+ void __ocelot_rmw_ix(struct ocelot *ocelot, u32 val, u32 mask, u32 reg,
+@@ -731,6 +732,7 @@ int ocelot_get_sset_count(struct ocelot *ocelot, int port, int sset);
+ int ocelot_get_ts_info(struct ocelot *ocelot, int port,
+ 		       struct ethtool_ts_info *info);
+ void ocelot_set_ageing_time(struct ocelot *ocelot, unsigned int msecs);
++int ocelot_port_flush(struct ocelot *ocelot, int port);
+ void ocelot_adjust_link(struct ocelot *ocelot, int port,
+ 			struct phy_device *phydev);
+ int ocelot_port_vlan_filtering(struct ocelot *ocelot, int port, bool enabled,
+diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
+index 2c43b0ef1e4d5..bf3cfc7c35d0b 100644
+--- a/include/xen/xenbus.h
++++ b/include/xen/xenbus.h
+@@ -192,8 +192,6 @@ void xs_suspend_cancel(void);
+ 
+ struct work_struct;
+ 
+-void xenbus_probe(void);
+-
+ #define XENBUS_IS_ERR_READ(str) ({			\
+ 	if (!IS_ERR(str) && strlen(str) == 0) {		\
+ 		kfree(str);				\
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index 06065fa271241..6e83bf8c080db 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -116,6 +116,8 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
+ 
+ 	/* hash table size must be power of 2 */
+ 	n_buckets = roundup_pow_of_two(attr->max_entries);
++	if (!n_buckets)
++		return ERR_PTR(-E2BIG);
+ 
+ 	cost = n_buckets * sizeof(struct stack_map_bucket *) + sizeof(*smap);
+ 	cost += n_buckets * (value_size + sizeof(struct stack_map_bucket));
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index 32596fdbcd5b8..a5751784ad740 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -917,6 +917,9 @@ int cgroup1_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 		for_each_subsys(ss, i) {
+ 			if (strcmp(param->key, ss->legacy_name))
+ 				continue;
++			if (!cgroup_ssid_enabled(i) || cgroup1_ssid_disabled(i))
++				return invalfc(fc, "Disabled controller '%s'",
++					       param->key);
+ 			ctx->subsys_mask |= (1 << i);
+ 			return 0;
+ 		}
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index e41c21819ba08..5d1fdf7c3ec65 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -3567,6 +3567,7 @@ static ssize_t cgroup_pressure_write(struct kernfs_open_file *of, char *buf,
+ {
+ 	struct psi_trigger *new;
+ 	struct cgroup *cgrp;
++	struct psi_group *psi;
+ 
+ 	cgrp = cgroup_kn_lock_live(of->kn, false);
+ 	if (!cgrp)
+@@ -3575,7 +3576,8 @@ static ssize_t cgroup_pressure_write(struct kernfs_open_file *of, char *buf,
+ 	cgroup_get(cgrp);
+ 	cgroup_kn_unlock(of->kn);
+ 
+-	new = psi_trigger_create(&cgrp->psi, buf, nbytes, res);
++	psi = cgroup_ino(cgrp) == 1 ? &psi_system : &cgrp->psi;
++	new = psi_trigger_create(psi, buf, nbytes, res);
+ 	if (IS_ERR(new)) {
+ 		cgroup_put(cgrp);
+ 		return PTR_ERR(new);
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 0dde84b9d29fe..fcbfc95649967 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -93,9 +93,6 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
+ {
+ 	unsigned int ret;
+ 
+-	if (in_nmi()) /* not supported yet */
+-		return 1;
+-
+ 	cant_sleep();
+ 
+ 	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 3119d68d012df..ee4be813ba85b 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2745,7 +2745,7 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
+ 	    (entry = this_cpu_read(trace_buffered_event))) {
+ 		/* Try to use the per cpu buffer first */
+ 		val = this_cpu_inc_return(trace_buffered_event_cnt);
+-		if (val == 1) {
++		if ((len < (PAGE_SIZE - sizeof(*entry))) && val == 1) {
+ 			trace_event_setup(entry, type, flags, pc);
+ 			entry->array[0] = len;
+ 			return entry;
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 802f3e7d8b8b5..ab3cb67b869e5 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -1212,7 +1212,8 @@ system_enable_read(struct file *filp, char __user *ubuf, size_t cnt,
+ 	mutex_lock(&event_mutex);
+ 	list_for_each_entry(file, &tr->events, list) {
+ 		call = file->event_call;
+-		if (!trace_event_name(call) || !call->class || !call->class->reg)
++		if ((call->flags & TRACE_EVENT_FL_IGNORE_ENABLE) ||
++		    !trace_event_name(call) || !call->class || !call->class->reg)
+ 			continue;
+ 
+ 		if (system && strcmp(call->class->system, system->name) != 0)
+diff --git a/lib/cpumask.c b/lib/cpumask.c
+index 85da6ab4fbb5a..fb22fb266f937 100644
+--- a/lib/cpumask.c
++++ b/lib/cpumask.c
+@@ -6,7 +6,6 @@
+ #include <linux/export.h>
+ #include <linux/memblock.h>
+ #include <linux/numa.h>
+-#include <linux/sched/isolation.h>
+ 
+ /**
+  * cpumask_next - get the next cpu in a cpumask
+@@ -206,27 +205,22 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
+  */
+ unsigned int cpumask_local_spread(unsigned int i, int node)
+ {
+-	int cpu, hk_flags;
+-	const struct cpumask *mask;
++	int cpu;
+ 
+-	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
+-	mask = housekeeping_cpumask(hk_flags);
+ 	/* Wrap: we always want a cpu. */
+-	i %= cpumask_weight(mask);
++	i %= num_online_cpus();
+ 
+ 	if (node == NUMA_NO_NODE) {
+-		for_each_cpu(cpu, mask) {
++		for_each_cpu(cpu, cpu_online_mask)
+ 			if (i-- == 0)
+ 				return cpu;
+-		}
+ 	} else {
+ 		/* NUMA first. */
+-		for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
++		for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
+ 			if (i-- == 0)
+ 				return cpu;
+-		}
+ 
+-		for_each_cpu(cpu, mask) {
++		for_each_cpu(cpu, cpu_online_mask) {
+ 			/* Skip NUMA nodes, done above. */
+ 			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
+ 				continue;
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index a21e6a5792c5a..f0b2ccb1bb018 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -592,14 +592,15 @@ static __wsum csum_and_memcpy(void *to, const void *from, size_t len,
+ }
+ 
+ static size_t csum_and_copy_to_pipe_iter(const void *addr, size_t bytes,
+-				__wsum *csum, struct iov_iter *i)
++					 struct csum_state *csstate,
++					 struct iov_iter *i)
+ {
+ 	struct pipe_inode_info *pipe = i->pipe;
+ 	unsigned int p_mask = pipe->ring_size - 1;
++	__wsum sum = csstate->csum;
++	size_t off = csstate->off;
+ 	unsigned int i_head;
+ 	size_t n, r;
+-	size_t off = 0;
+-	__wsum sum = *csum;
+ 
+ 	if (!sanity(i))
+ 		return 0;
+@@ -621,7 +622,8 @@ static size_t csum_and_copy_to_pipe_iter(const void *addr, size_t bytes,
+ 		i_head++;
+ 	} while (n);
+ 	i->count -= bytes;
+-	*csum = sum;
++	csstate->csum = sum;
++	csstate->off = off;
+ 	return bytes;
+ }
+ 
+@@ -1522,18 +1524,19 @@ bool csum_and_copy_from_iter_full(void *addr, size_t bytes, __wsum *csum,
+ }
+ EXPORT_SYMBOL(csum_and_copy_from_iter_full);
+ 
+-size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *csump,
++size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate,
+ 			     struct iov_iter *i)
+ {
++	struct csum_state *csstate = _csstate;
+ 	const char *from = addr;
+-	__wsum *csum = csump;
+ 	__wsum sum, next;
+-	size_t off = 0;
++	size_t off;
+ 
+ 	if (unlikely(iov_iter_is_pipe(i)))
+-		return csum_and_copy_to_pipe_iter(addr, bytes, csum, i);
++		return csum_and_copy_to_pipe_iter(addr, bytes, _csstate, i);
+ 
+-	sum = *csum;
++	sum = csstate->csum;
++	off = csstate->off;
+ 	if (unlikely(iov_iter_is_discard(i))) {
+ 		WARN_ON(1);	/* for now */
+ 		return 0;
+@@ -1561,7 +1564,8 @@ size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *csump,
+ 		off += v.iov_len;
+ 	})
+ 	)
+-	*csum = sum;
++	csstate->csum = sum;
++	csstate->off = off;
+ 	return bytes;
+ }
+ EXPORT_SYMBOL(csum_and_copy_to_iter);
+diff --git a/lib/ubsan.c b/lib/ubsan.c
+index cb9af3f6b77e3..adf8dcf3c84e6 100644
+--- a/lib/ubsan.c
++++ b/lib/ubsan.c
+@@ -427,3 +427,34 @@ void __ubsan_handle_load_invalid_value(void *_data, void *val)
+ 	ubsan_epilogue();
+ }
+ EXPORT_SYMBOL(__ubsan_handle_load_invalid_value);
++
++void __ubsan_handle_alignment_assumption(void *_data, unsigned long ptr,
++					 unsigned long align,
++					 unsigned long offset);
++void __ubsan_handle_alignment_assumption(void *_data, unsigned long ptr,
++					 unsigned long align,
++					 unsigned long offset)
++{
++	struct alignment_assumption_data *data = _data;
++	unsigned long real_ptr;
++
++	if (suppress_report(&data->location))
++		return;
++
++	ubsan_prologue(&data->location, "alignment-assumption");
++
++	if (offset)
++		pr_err("assumption of %lu byte alignment (with offset of %lu byte) for pointer of type %s failed",
++		       align, offset, data->type->type_name);
++	else
++		pr_err("assumption of %lu byte alignment for pointer of type %s failed",
++		       align, data->type->type_name);
++
++	real_ptr = ptr - offset;
++	pr_err("%saddress is %lu aligned, misalignment offset is %lu bytes",
++	       offset ? "offset " : "", BIT(real_ptr ? __ffs(real_ptr) : 0),
++	       real_ptr & (align - 1));
++
++	ubsan_epilogue();
++}
++EXPORT_SYMBOL(__ubsan_handle_alignment_assumption);
+diff --git a/lib/ubsan.h b/lib/ubsan.h
+index 7b56c09473a98..9a0b71c5ff9fb 100644
+--- a/lib/ubsan.h
++++ b/lib/ubsan.h
+@@ -78,6 +78,12 @@ struct invalid_value_data {
+ 	struct type_descriptor *type;
+ };
+ 
++struct alignment_assumption_data {
++	struct source_location location;
++	struct source_location assumption_location;
++	struct type_descriptor *type;
++};
++
+ #if defined(CONFIG_ARCH_SUPPORTS_INT128)
+ typedef __int128 s_max;
+ typedef unsigned __int128 u_max;
+diff --git a/net/bridge/br_mrp.c b/net/bridge/br_mrp.c
+index b36689e6e7cba..d1336a7ad7ff2 100644
+--- a/net/bridge/br_mrp.c
++++ b/net/bridge/br_mrp.c
+@@ -544,19 +544,22 @@ int br_mrp_del(struct net_bridge *br, struct br_mrp_instance *instance)
+ int br_mrp_set_port_state(struct net_bridge_port *p,
+ 			  enum br_mrp_port_state_type state)
+ {
++	u32 port_state;
++
+ 	if (!p || !(p->flags & BR_MRP_AWARE))
+ 		return -EINVAL;
+ 
+ 	spin_lock_bh(&p->br->lock);
+ 
+ 	if (state == BR_MRP_PORT_STATE_FORWARDING)
+-		p->state = BR_STATE_FORWARDING;
++		port_state = BR_STATE_FORWARDING;
+ 	else
+-		p->state = BR_STATE_BLOCKING;
++		port_state = BR_STATE_BLOCKING;
+ 
++	p->state = port_state;
+ 	spin_unlock_bh(&p->br->lock);
+ 
+-	br_mrp_port_switchdev_set_state(p, state);
++	br_mrp_port_switchdev_set_state(p, port_state);
+ 
+ 	return 0;
+ }
+diff --git a/net/bridge/br_mrp_switchdev.c b/net/bridge/br_mrp_switchdev.c
+index ed547e03ace17..75a7e8d0a2685 100644
+--- a/net/bridge/br_mrp_switchdev.c
++++ b/net/bridge/br_mrp_switchdev.c
+@@ -169,13 +169,12 @@ int br_mrp_switchdev_send_in_test(struct net_bridge *br, struct br_mrp *mrp,
+ 	return err;
+ }
+ 
+-int br_mrp_port_switchdev_set_state(struct net_bridge_port *p,
+-				    enum br_mrp_port_state_type state)
++int br_mrp_port_switchdev_set_state(struct net_bridge_port *p, u32 state)
+ {
+ 	struct switchdev_attr attr = {
+ 		.orig_dev = p->dev,
+-		.id = SWITCHDEV_ATTR_ID_MRP_PORT_STATE,
+-		.u.mrp_port_state = state,
++		.id = SWITCHDEV_ATTR_ID_PORT_STP_STATE,
++		.u.stp_state = state,
+ 	};
+ 	int err;
+ 
+diff --git a/net/bridge/br_private_mrp.h b/net/bridge/br_private_mrp.h
+index af0e9eff65493..6657705b94b10 100644
+--- a/net/bridge/br_private_mrp.h
++++ b/net/bridge/br_private_mrp.h
+@@ -72,8 +72,7 @@ int br_mrp_switchdev_set_ring_state(struct net_bridge *br, struct br_mrp *mrp,
+ int br_mrp_switchdev_send_ring_test(struct net_bridge *br, struct br_mrp *mrp,
+ 				    u32 interval, u8 max_miss, u32 period,
+ 				    bool monitor);
+-int br_mrp_port_switchdev_set_state(struct net_bridge_port *p,
+-				    enum br_mrp_port_state_type state);
++int br_mrp_port_switchdev_set_state(struct net_bridge_port *p, u32 state);
+ int br_mrp_port_switchdev_set_role(struct net_bridge_port *p,
+ 				   enum br_mrp_port_role_type role);
+ int br_mrp_switchdev_set_in_role(struct net_bridge *br, struct br_mrp *mrp,
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index 9fcaa544f11a9..bc92683fdcdb4 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -721,8 +721,16 @@ static int skb_copy_and_csum_datagram(const struct sk_buff *skb, int offset,
+ 				      struct iov_iter *to, int len,
+ 				      __wsum *csump)
+ {
+-	return __skb_datagram_iter(skb, offset, to, len, true,
+-			csum_and_copy_to_iter, csump);
++	struct csum_state csdata = { .csum = *csump };
++	int ret;
++
++	ret = __skb_datagram_iter(skb, offset, to, len, true,
++				  csum_and_copy_to_iter, &csdata);
++	if (ret)
++		return ret;
++
++	*csump = csdata.csum;
++	return 0;
+ }
+ 
+ /**
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 81e5d482c238e..da85cb9398693 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5723,10 +5723,11 @@ static void gro_normal_list(struct napi_struct *napi)
+ /* Queue one GRO_NORMAL SKB up for list processing. If batch size exceeded,
+  * pass the whole batch up to the stack.
+  */
+-static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb)
++static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb, int segs)
+ {
+ 	list_add_tail(&skb->list, &napi->rx_list);
+-	if (++napi->rx_count >= gro_normal_batch)
++	napi->rx_count += segs;
++	if (napi->rx_count >= gro_normal_batch)
+ 		gro_normal_list(napi);
+ }
+ 
+@@ -5765,7 +5766,7 @@ static int napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb)
+ 	}
+ 
+ out:
+-	gro_normal_one(napi, skb);
++	gro_normal_one(napi, skb, NAPI_GRO_CB(skb)->count);
+ 	return NET_RX_SUCCESS;
+ }
+ 
+@@ -6055,7 +6056,7 @@ static gro_result_t napi_skb_finish(struct napi_struct *napi,
+ {
+ 	switch (ret) {
+ 	case GRO_NORMAL:
+-		gro_normal_one(napi, skb);
++		gro_normal_one(napi, skb, 1);
+ 		break;
+ 
+ 	case GRO_DROP:
+@@ -6143,7 +6144,7 @@ static gro_result_t napi_frags_finish(struct napi_struct *napi,
+ 		__skb_push(skb, ETH_HLEN);
+ 		skb->protocol = eth_type_trans(skb, skb->dev);
+ 		if (ret == GRO_NORMAL)
+-			gro_normal_one(napi, skb);
++			gro_normal_one(napi, skb, 1);
+ 		break;
+ 
+ 	case GRO_DROP:
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index a47e0f9b20d0a..a04fd637b4cdc 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -462,20 +462,23 @@ static int dsa_switch_setup(struct dsa_switch *ds)
+ 		ds->slave_mii_bus = devm_mdiobus_alloc(ds->dev);
+ 		if (!ds->slave_mii_bus) {
+ 			err = -ENOMEM;
+-			goto unregister_notifier;
++			goto teardown;
+ 		}
+ 
+ 		dsa_slave_mii_bus_init(ds);
+ 
+ 		err = mdiobus_register(ds->slave_mii_bus);
+ 		if (err < 0)
+-			goto unregister_notifier;
++			goto teardown;
+ 	}
+ 
+ 	ds->setup = true;
+ 
+ 	return 0;
+ 
++teardown:
++	if (ds->ops->teardown)
++		ds->ops->teardown(ds);
+ unregister_notifier:
+ 	dsa_switch_unregister_notifier(ds);
+ unregister_devlink_ports:
+diff --git a/net/mac80211/Kconfig b/net/mac80211/Kconfig
+index cd9a9bd242bab..51ec8256b7fa9 100644
+--- a/net/mac80211/Kconfig
++++ b/net/mac80211/Kconfig
+@@ -69,7 +69,7 @@ config MAC80211_MESH
+ config MAC80211_LEDS
+ 	bool "Enable LED triggers"
+ 	depends on MAC80211
+-	depends on LEDS_CLASS
++	depends on LEDS_CLASS=y || LEDS_CLASS=MAC80211
+ 	select LEDS_TRIGGERS
+ 	help
+ 	  This option enables a few LED triggers for different
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 234b7cab37c30..ff0168736f6ea 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -1229,7 +1229,8 @@ nf_conntrack_tuple_taken(const struct nf_conntrack_tuple *tuple,
+ 			 * Let nf_ct_resolve_clash() deal with this later.
+ 			 */
+ 			if (nf_ct_tuple_equal(&ignored_conntrack->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
+-					      &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple))
++					      &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple) &&
++					      nf_ct_zone_equal(ct, zone, IP_CT_DIR_ORIGINAL))
+ 				continue;
+ 
+ 			NF_CT_STAT_INC_ATOMIC(net, found);
+diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
+index 513f78db3cb2f..4a4acbba78ff7 100644
+--- a/net/netfilter/nf_flow_table_core.c
++++ b/net/netfilter/nf_flow_table_core.c
+@@ -399,7 +399,7 @@ static int nf_flow_nat_port_tcp(struct sk_buff *skb, unsigned int thoff,
+ 		return -1;
+ 
+ 	tcph = (void *)(skb_network_header(skb) + thoff);
+-	inet_proto_csum_replace2(&tcph->check, skb, port, new_port, true);
++	inet_proto_csum_replace2(&tcph->check, skb, port, new_port, false);
+ 
+ 	return 0;
+ }
+@@ -415,7 +415,7 @@ static int nf_flow_nat_port_udp(struct sk_buff *skb, unsigned int thoff,
+ 	udph = (void *)(skb_network_header(skb) + thoff);
+ 	if (udph->check || skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		inet_proto_csum_replace2(&udph->check, skb, port,
+-					 new_port, true);
++					 new_port, false);
+ 		if (!udph->check)
+ 			udph->check = CSUM_MANGLED_0;
+ 	}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 9a080767667b7..8739ef135156b 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -8775,6 +8775,17 @@ int __nft_release_basechain(struct nft_ctx *ctx)
+ }
+ EXPORT_SYMBOL_GPL(__nft_release_basechain);
+ 
++static void __nft_release_hooks(struct net *net)
++{
++	struct nft_table *table;
++	struct nft_chain *chain;
++
++	list_for_each_entry(table, &net->nft.tables, list) {
++		list_for_each_entry(chain, &table->chains, list)
++			nf_tables_unregister_hook(net, table, chain);
++	}
++}
++
+ static void __nft_release_tables(struct net *net)
+ {
+ 	struct nft_flowtable *flowtable, *nf;
+@@ -8790,10 +8801,6 @@ static void __nft_release_tables(struct net *net)
+ 
+ 	list_for_each_entry_safe(table, nt, &net->nft.tables, list) {
+ 		ctx.family = table->family;
+-
+-		list_for_each_entry(chain, &table->chains, list)
+-			nf_tables_unregister_hook(net, table, chain);
+-		/* No packets are walking on these chains anymore. */
+ 		ctx.table = table;
+ 		list_for_each_entry(chain, &table->chains, list) {
+ 			ctx.chain = chain;
+@@ -8842,6 +8849,11 @@ static int __net_init nf_tables_init_net(struct net *net)
+ 	return 0;
+ }
+ 
++static void __net_exit nf_tables_pre_exit_net(struct net *net)
++{
++	__nft_release_hooks(net);
++}
++
+ static void __net_exit nf_tables_exit_net(struct net *net)
+ {
+ 	mutex_lock(&net->nft.commit_mutex);
+@@ -8855,8 +8867,9 @@ static void __net_exit nf_tables_exit_net(struct net *net)
+ }
+ 
+ static struct pernet_operations nf_tables_net_ops = {
+-	.init	= nf_tables_init_net,
+-	.exit	= nf_tables_exit_net,
++	.init		= nf_tables_init_net,
++	.pre_exit	= nf_tables_pre_exit_net,
++	.exit		= nf_tables_exit_net,
+ };
+ 
+ static int __init nf_tables_module_init(void)
+diff --git a/net/netfilter/xt_recent.c b/net/netfilter/xt_recent.c
+index 606411869698e..0446307516cdf 100644
+--- a/net/netfilter/xt_recent.c
++++ b/net/netfilter/xt_recent.c
+@@ -152,7 +152,8 @@ static void recent_entry_remove(struct recent_table *t, struct recent_entry *e)
+ /*
+  * Drop entries with timestamps older then 'time'.
+  */
+-static void recent_entry_reap(struct recent_table *t, unsigned long time)
++static void recent_entry_reap(struct recent_table *t, unsigned long time,
++			      struct recent_entry *working, bool update)
+ {
+ 	struct recent_entry *e;
+ 
+@@ -161,6 +162,12 @@ static void recent_entry_reap(struct recent_table *t, unsigned long time)
+ 	 */
+ 	e = list_entry(t->lru_list.next, struct recent_entry, lru_list);
+ 
++	/*
++	 * Do not reap the entry which are going to be updated.
++	 */
++	if (e == working && update)
++		return;
++
+ 	/*
+ 	 * The last time stamp is the most recent.
+ 	 */
+@@ -303,7 +310,8 @@ recent_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ 
+ 		/* info->seconds must be non-zero */
+ 		if (info->check_set & XT_RECENT_REAP)
+-			recent_entry_reap(t, time);
++			recent_entry_reap(t, time, e,
++				info->check_set & XT_RECENT_UPDATE && ret);
+ 	}
+ 
+ 	if (info->check_set & XT_RECENT_SET ||
+diff --git a/net/qrtr/tun.c b/net/qrtr/tun.c
+index 15ce9b642b25f..b238c40a99842 100644
+--- a/net/qrtr/tun.c
++++ b/net/qrtr/tun.c
+@@ -80,6 +80,12 @@ static ssize_t qrtr_tun_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 	ssize_t ret;
+ 	void *kbuf;
+ 
++	if (!len)
++		return -EINVAL;
++
++	if (len > KMALLOC_MAX_SIZE)
++		return -ENOMEM;
++
+ 	kbuf = kzalloc(len, GFP_KERNEL);
+ 	if (!kbuf)
+ 		return -ENOMEM;
+diff --git a/net/rds/rdma.c b/net/rds/rdma.c
+index 1d0afb1dd77b5..6f1a50d50d06d 100644
+--- a/net/rds/rdma.c
++++ b/net/rds/rdma.c
+@@ -565,6 +565,9 @@ int rds_rdma_extra_size(struct rds_rdma_args *args,
+ 	if (args->nr_local == 0)
+ 		return -EINVAL;
+ 
++	if (args->nr_local > UIO_MAXIOV)
++		return -EMSGSIZE;
++
+ 	iov->iov = kcalloc(args->nr_local,
+ 			   sizeof(struct rds_iovec),
+ 			   GFP_KERNEL);
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index c845594b663fb..4eb91d958a48d 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -548,8 +548,6 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
+ 		rxrpc_disconnect_call(call);
+ 	if (call->security)
+ 		call->security->free_call_crypto(call);
+-
+-	rxrpc_cleanup_ring(call);
+ 	_leave("");
+ }
+ 
+diff --git a/net/sctp/proc.c b/net/sctp/proc.c
+index f7da88ae20a57..982a87b3e11f8 100644
+--- a/net/sctp/proc.c
++++ b/net/sctp/proc.c
+@@ -215,6 +215,12 @@ static void sctp_transport_seq_stop(struct seq_file *seq, void *v)
+ {
+ 	struct sctp_ht_iter *iter = seq->private;
+ 
++	if (v && v != SEQ_START_TOKEN) {
++		struct sctp_transport *transport = v;
++
++		sctp_transport_put(transport);
++	}
++
+ 	sctp_transport_walk_stop(&iter->hti);
+ }
+ 
+@@ -222,6 +228,12 @@ static void *sctp_transport_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ {
+ 	struct sctp_ht_iter *iter = seq->private;
+ 
++	if (v && v != SEQ_START_TOKEN) {
++		struct sctp_transport *transport = v;
++
++		sctp_transport_put(transport);
++	}
++
+ 	++*pos;
+ 
+ 	return sctp_transport_get_next(seq_file_net(seq), &iter->hti);
+@@ -277,8 +289,6 @@ static int sctp_assocs_seq_show(struct seq_file *seq, void *v)
+ 		sk->sk_rcvbuf);
+ 	seq_printf(seq, "\n");
+ 
+-	sctp_transport_put(transport);
+-
+ 	return 0;
+ }
+ 
+@@ -354,8 +364,6 @@ static int sctp_remaddr_seq_show(struct seq_file *seq, void *v)
+ 		seq_printf(seq, "\n");
+ 	}
+ 
+-	sctp_transport_put(transport);
+-
+ 	return 0;
+ }
+ 
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index f64e681493a59..791955f5e7ec0 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -926,10 +926,12 @@ static int vsock_shutdown(struct socket *sock, int mode)
+ 	 */
+ 
+ 	sk = sock->sk;
++
++	lock_sock(sk);
+ 	if (sock->state == SS_UNCONNECTED) {
+ 		err = -ENOTCONN;
+ 		if (sk->sk_type == SOCK_STREAM)
+-			return err;
++			goto out;
+ 	} else {
+ 		sock->state = SS_DISCONNECTING;
+ 		err = 0;
+@@ -938,10 +940,8 @@ static int vsock_shutdown(struct socket *sock, int mode)
+ 	/* Receive and send shutdowns are treated alike. */
+ 	mode = mode & (RCV_SHUTDOWN | SEND_SHUTDOWN);
+ 	if (mode) {
+-		lock_sock(sk);
+ 		sk->sk_shutdown |= mode;
+ 		sk->sk_state_change(sk);
+-		release_sock(sk);
+ 
+ 		if (sk->sk_type == SOCK_STREAM) {
+ 			sock_reset_flag(sk, SOCK_DONE);
+@@ -949,6 +949,8 @@ static int vsock_shutdown(struct socket *sock, int mode)
+ 		}
+ 	}
+ 
++out:
++	release_sock(sk);
+ 	return err;
+ }
+ 
+@@ -1216,7 +1218,7 @@ static int vsock_transport_cancel_pkt(struct vsock_sock *vsk)
+ {
+ 	const struct vsock_transport *transport = vsk->transport;
+ 
+-	if (!transport->cancel_pkt)
++	if (!transport || !transport->cancel_pkt)
+ 		return -EOPNOTSUPP;
+ 
+ 	return transport->cancel_pkt(vsk);
+@@ -1226,7 +1228,6 @@ static void vsock_connect_timeout(struct work_struct *work)
+ {
+ 	struct sock *sk;
+ 	struct vsock_sock *vsk;
+-	int cancel = 0;
+ 
+ 	vsk = container_of(work, struct vsock_sock, connect_work.work);
+ 	sk = sk_vsock(vsk);
+@@ -1237,11 +1238,9 @@ static void vsock_connect_timeout(struct work_struct *work)
+ 		sk->sk_state = TCP_CLOSE;
+ 		sk->sk_err = ETIMEDOUT;
+ 		sk->sk_error_report(sk);
+-		cancel = 1;
++		vsock_transport_cancel_pkt(vsk);
+ 	}
+ 	release_sock(sk);
+-	if (cancel)
+-		vsock_transport_cancel_pkt(vsk);
+ 
+ 	sock_put(sk);
+ }
+diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
+index 630b851f8150f..cc3bae2659e79 100644
+--- a/net/vmw_vsock/hyperv_transport.c
++++ b/net/vmw_vsock/hyperv_transport.c
+@@ -474,14 +474,10 @@ static void hvs_shutdown_lock_held(struct hvsock *hvs, int mode)
+ 
+ static int hvs_shutdown(struct vsock_sock *vsk, int mode)
+ {
+-	struct sock *sk = sk_vsock(vsk);
+-
+ 	if (!(mode & SEND_SHUTDOWN))
+ 		return 0;
+ 
+-	lock_sock(sk);
+ 	hvs_shutdown_lock_held(vsk->trans, mode);
+-	release_sock(sk);
+ 	return 0;
+ }
+ 
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 5956939eebb78..e4370b1b74947 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -1130,8 +1130,6 @@ void virtio_transport_recv_pkt(struct virtio_transport *t,
+ 
+ 	vsk = vsock_sk(sk);
+ 
+-	space_available = virtio_transport_space_update(sk, pkt);
+-
+ 	lock_sock(sk);
+ 
+ 	/* Check if sk has been closed before lock_sock */
+@@ -1142,6 +1140,8 @@ void virtio_transport_recv_pkt(struct virtio_transport *t,
+ 		goto free_pkt;
+ 	}
+ 
++	space_available = virtio_transport_space_update(sk, pkt);
++
+ 	/* Update CID in case it has changed after a transport reset event */
+ 	vsk->local_addr.svm_cid = dst.svm_cid;
+ 
+diff --git a/scripts/Makefile b/scripts/Makefile
+index 9de3c03b94aa7..c36106bce80ee 100644
+--- a/scripts/Makefile
++++ b/scripts/Makefile
+@@ -17,6 +17,7 @@ hostprogs-always-$(CONFIG_SYSTEM_EXTRA_CERTIFICATE)	+= insert-sys-cert
+ 
+ HOSTCFLAGS_sorttable.o = -I$(srctree)/tools/include
+ HOSTCFLAGS_asn1_compiler.o = -I$(srctree)/include
++HOSTCFLAGS_sign-file.o = $(CRYPTO_CFLAGS)
+ HOSTLDLIBS_sign-file = $(CRYPTO_LIBS)
+ HOSTCFLAGS_extract-cert.o = $(CRYPTO_CFLAGS)
+ HOSTLDLIBS_extract-cert = $(CRYPTO_LIBS)
+diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
+index 7ecd2ccba531b..54ad86d137849 100644
+--- a/scripts/kallsyms.c
++++ b/scripts/kallsyms.c
+@@ -112,6 +112,12 @@ static bool is_ignored_symbol(const char *name, char type)
+ 		"__crc_",		/* modversions */
+ 		"__efistub_",		/* arm64 EFI stub namespace */
+ 		"__kvm_nvhe_",		/* arm64 non-VHE KVM namespace */
++		"__AArch64ADRPThunk_",	/* arm64 lld */
++		"__ARMV5PILongThunk_",	/* arm lld */
++		"__ARMV7PILongThunk_",
++		"__ThumbV7PILongThunk_",
++		"__LA25Thunk_",		/* mips lld */
++		"__microLA25Thunk_",
+ 		NULL
+ 	};
+ 
+diff --git a/security/commoncap.c b/security/commoncap.c
+index 59bf3c1674c8b..a6c9bb4441d54 100644
+--- a/security/commoncap.c
++++ b/security/commoncap.c
+@@ -371,10 +371,11 @@ int cap_inode_getsecurity(struct inode *inode, const char *name, void **buffer,
+ {
+ 	int size, ret;
+ 	kuid_t kroot;
++	u32 nsmagic, magic;
+ 	uid_t root, mappedroot;
+ 	char *tmpbuf = NULL;
+ 	struct vfs_cap_data *cap;
+-	struct vfs_ns_cap_data *nscap;
++	struct vfs_ns_cap_data *nscap = NULL;
+ 	struct dentry *dentry;
+ 	struct user_namespace *fs_ns;
+ 
+@@ -396,46 +397,61 @@ int cap_inode_getsecurity(struct inode *inode, const char *name, void **buffer,
+ 	fs_ns = inode->i_sb->s_user_ns;
+ 	cap = (struct vfs_cap_data *) tmpbuf;
+ 	if (is_v2header((size_t) ret, cap)) {
+-		/* If this is sizeof(vfs_cap_data) then we're ok with the
+-		 * on-disk value, so return that.  */
+-		if (alloc)
+-			*buffer = tmpbuf;
+-		else
+-			kfree(tmpbuf);
+-		return ret;
+-	} else if (!is_v3header((size_t) ret, cap)) {
+-		kfree(tmpbuf);
+-		return -EINVAL;
++		root = 0;
++	} else if (is_v3header((size_t) ret, cap)) {
++		nscap = (struct vfs_ns_cap_data *) tmpbuf;
++		root = le32_to_cpu(nscap->rootid);
++	} else {
++		size = -EINVAL;
++		goto out_free;
+ 	}
+ 
+-	nscap = (struct vfs_ns_cap_data *) tmpbuf;
+-	root = le32_to_cpu(nscap->rootid);
+ 	kroot = make_kuid(fs_ns, root);
+ 
+ 	/* If the root kuid maps to a valid uid in current ns, then return
+ 	 * this as a nscap. */
+ 	mappedroot = from_kuid(current_user_ns(), kroot);
+ 	if (mappedroot != (uid_t)-1 && mappedroot != (uid_t)0) {
++		size = sizeof(struct vfs_ns_cap_data);
+ 		if (alloc) {
+-			*buffer = tmpbuf;
++			if (!nscap) {
++				/* v2 -> v3 conversion */
++				nscap = kzalloc(size, GFP_ATOMIC);
++				if (!nscap) {
++					size = -ENOMEM;
++					goto out_free;
++				}
++				nsmagic = VFS_CAP_REVISION_3;
++				magic = le32_to_cpu(cap->magic_etc);
++				if (magic & VFS_CAP_FLAGS_EFFECTIVE)
++					nsmagic |= VFS_CAP_FLAGS_EFFECTIVE;
++				memcpy(&nscap->data, &cap->data, sizeof(__le32) * 2 * VFS_CAP_U32);
++				nscap->magic_etc = cpu_to_le32(nsmagic);
++			} else {
++				/* use allocated v3 buffer */
++				tmpbuf = NULL;
++			}
+ 			nscap->rootid = cpu_to_le32(mappedroot);
+-		} else
+-			kfree(tmpbuf);
+-		return size;
++			*buffer = nscap;
++		}
++		goto out_free;
+ 	}
+ 
+ 	if (!rootid_owns_currentns(kroot)) {
+-		kfree(tmpbuf);
+-		return -EOPNOTSUPP;
++		size = -EOVERFLOW;
++		goto out_free;
+ 	}
+ 
+ 	/* This comes from a parent namespace.  Return as a v2 capability */
+ 	size = sizeof(struct vfs_cap_data);
+ 	if (alloc) {
+-		*buffer = kmalloc(size, GFP_ATOMIC);
+-		if (*buffer) {
+-			struct vfs_cap_data *cap = *buffer;
+-			__le32 nsmagic, magic;
++		if (nscap) {
++			/* v3 -> v2 conversion */
++			cap = kzalloc(size, GFP_ATOMIC);
++			if (!cap) {
++				size = -ENOMEM;
++				goto out_free;
++			}
+ 			magic = VFS_CAP_REVISION_2;
+ 			nsmagic = le32_to_cpu(nscap->magic_etc);
+ 			if (nsmagic & VFS_CAP_FLAGS_EFFECTIVE)
+@@ -443,9 +459,12 @@ int cap_inode_getsecurity(struct inode *inode, const char *name, void **buffer,
+ 			memcpy(&cap->data, &nscap->data, sizeof(__le32) * 2 * VFS_CAP_U32);
+ 			cap->magic_etc = cpu_to_le32(magic);
+ 		} else {
+-			size = -ENOMEM;
++			/* use unconverted v2 */
++			tmpbuf = NULL;
+ 		}
++		*buffer = cap;
+ 	}
++out_free:
+ 	kfree(tmpbuf);
+ 	return size;
+ }
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 956383d5fa62e..4bd30315eb62b 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -467,13 +467,20 @@ static int create_static_call_sections(struct objtool_file *file)
+ 
+ 		/* populate reloc for 'addr' */
+ 		reloc = malloc(sizeof(*reloc));
++
+ 		if (!reloc) {
+ 			perror("malloc");
+ 			return -1;
+ 		}
+ 		memset(reloc, 0, sizeof(*reloc));
+-		reloc->sym = insn->sec->sym;
+-		reloc->addend = insn->offset;
++
++		insn_to_reloc_sym_addend(insn->sec, insn->offset, reloc);
++		if (!reloc->sym) {
++			WARN_FUNC("static call tramp: missing containing symbol",
++				  insn->sec, insn->offset);
++			return -1;
++		}
++
+ 		reloc->type = R_X86_64_PC32;
+ 		reloc->offset = idx * sizeof(struct static_call_site);
+ 		reloc->sec = reloc_sec;
+diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
+index f4f3e8d995930..d8421e1d06bed 100644
+--- a/tools/objtool/elf.c
++++ b/tools/objtool/elf.c
+@@ -262,6 +262,32 @@ struct reloc *find_reloc_by_dest(const struct elf *elf, struct section *sec, uns
+ 	return find_reloc_by_dest_range(elf, sec, offset, 1);
+ }
+ 
++void insn_to_reloc_sym_addend(struct section *sec, unsigned long offset,
++			      struct reloc *reloc)
++{
++	if (sec->sym) {
++		reloc->sym = sec->sym;
++		reloc->addend = offset;
++		return;
++	}
++
++	/*
++	 * The Clang assembler strips section symbols, so we have to reference
++	 * the function symbol instead:
++	 */
++	reloc->sym = find_symbol_containing(sec, offset);
++	if (!reloc->sym) {
++		/*
++		 * Hack alert.  This happens when we need to reference the NOP
++		 * pad insn immediately after the function.
++		 */
++		reloc->sym = find_symbol_containing(sec, offset - 1);
++	}
++
++	if (reloc->sym)
++		reloc->addend = offset - reloc->sym->offset;
++}
++
+ static int read_sections(struct elf *elf)
+ {
+ 	Elf_Scn *s = NULL;
+diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h
+index 807f8c6700974..e6890cc70a25b 100644
+--- a/tools/objtool/elf.h
++++ b/tools/objtool/elf.h
+@@ -140,6 +140,8 @@ struct reloc *find_reloc_by_dest(const struct elf *elf, struct section *sec, uns
+ struct reloc *find_reloc_by_dest_range(const struct elf *elf, struct section *sec,
+ 				     unsigned long offset, unsigned int len);
+ struct symbol *find_func_containing(struct section *sec, unsigned long offset);
++void insn_to_reloc_sym_addend(struct section *sec, unsigned long offset,
++			      struct reloc *reloc);
+ int elf_rebuild_reloc_section(struct elf *elf, struct section *sec);
+ 
+ #define for_each_sec(file, sec)						\
+diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c
+index 235663b96adc7..9ce68b385a1b8 100644
+--- a/tools/objtool/orc_gen.c
++++ b/tools/objtool/orc_gen.c
+@@ -105,30 +105,11 @@ static int create_orc_entry(struct elf *elf, struct section *u_sec, struct secti
+ 	}
+ 	memset(reloc, 0, sizeof(*reloc));
+ 
+-	if (insn_sec->sym) {
+-		reloc->sym = insn_sec->sym;
+-		reloc->addend = insn_off;
+-	} else {
+-		/*
+-		 * The Clang assembler doesn't produce section symbols, so we
+-		 * have to reference the function symbol instead:
+-		 */
+-		reloc->sym = find_symbol_containing(insn_sec, insn_off);
+-		if (!reloc->sym) {
+-			/*
+-			 * Hack alert.  This happens when we need to reference
+-			 * the NOP pad insn immediately after the function.
+-			 */
+-			reloc->sym = find_symbol_containing(insn_sec,
+-							   insn_off - 1);
+-		}
+-		if (!reloc->sym) {
+-			WARN("missing symbol for insn at offset 0x%lx\n",
+-			     insn_off);
+-			return -1;
+-		}
+-
+-		reloc->addend = insn_off - reloc->sym->offset;
++	insn_to_reloc_sym_addend(insn_sec, insn_off, reloc);
++	if (!reloc->sym) {
++		WARN("missing symbol for insn at offset 0x%lx",
++		     insn_off);
++		return -1;
+ 	}
+ 
+ 	reloc->type = R_X86_64_PC32;
+diff --git a/tools/testing/selftests/net/txtimestamp.c b/tools/testing/selftests/net/txtimestamp.c
+index 490a8cca708a8..fabb1d555ee5c 100644
+--- a/tools/testing/selftests/net/txtimestamp.c
++++ b/tools/testing/selftests/net/txtimestamp.c
+@@ -26,6 +26,7 @@
+ #include <inttypes.h>
+ #include <linux/errqueue.h>
+ #include <linux/if_ether.h>
++#include <linux/if_packet.h>
+ #include <linux/ipv6.h>
+ #include <linux/net_tstamp.h>
+ #include <netdb.h>
+@@ -34,7 +35,6 @@
+ #include <netinet/ip.h>
+ #include <netinet/udp.h>
+ #include <netinet/tcp.h>
+-#include <netpacket/packet.h>
+ #include <poll.h>
+ #include <stdarg.h>
+ #include <stdbool.h>
+@@ -495,12 +495,12 @@ static void do_test(int family, unsigned int report_opt)
+ 	total_len = cfg_payload_len;
+ 	if (cfg_use_pf_packet || cfg_proto == SOCK_RAW) {
+ 		total_len += sizeof(struct udphdr);
+-		if (cfg_use_pf_packet || cfg_ipproto == IPPROTO_RAW)
++		if (cfg_use_pf_packet || cfg_ipproto == IPPROTO_RAW) {
+ 			if (family == PF_INET)
+ 				total_len += sizeof(struct iphdr);
+ 			else
+ 				total_len += sizeof(struct ipv6hdr);
+-
++		}
+ 		/* special case, only rawv6_sendmsg:
+ 		 * pass proto in sin6_port if not connected
+ 		 * also see ANK comment in net/ipv4/raw.c
+diff --git a/tools/testing/selftests/netfilter/nft_meta.sh b/tools/testing/selftests/netfilter/nft_meta.sh
+index 087f0e6e71ce7..f33154c04d344 100755
+--- a/tools/testing/selftests/netfilter/nft_meta.sh
++++ b/tools/testing/selftests/netfilter/nft_meta.sh
+@@ -23,7 +23,7 @@ ip -net "$ns0" addr add 127.0.0.1 dev lo
+ 
+ trap cleanup EXIT
+ 
+-currentyear=$(date +%G)
++currentyear=$(date +%Y)
+ lastyear=$((currentyear-1))
+ ip netns exec "$ns0" nft -f /dev/stdin <<EOF
+ table inet filter {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-18 14:48 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-02-18 14:48 UTC (permalink / raw
  To: gentoo-commits

commit:     ee1c74eb741fb092f39bd76494cae68ffbdc77b9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 18 14:48:01 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Feb 18 14:48:01 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ee1c74eb

Kernel patch enables v9.1 >= gcc < v10 optimizations for additional CPUs.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                   |   4 +
 5012_enable-cpu-optimizations-for-gcc91.patch | 641 ++++++++++++++++++++++++++
 2 files changed, 645 insertions(+)

diff --git a/0000_README b/0000_README
index 9372f82..927bfd9 100644
--- a/0000_README
+++ b/0000_README
@@ -139,6 +139,10 @@ Patch:  5000_shifts-ubuntu-20.04.patch
 From:   https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/focal
 Desc:   UID/GID shifting overlay filesystem for containers 
 
+Patch:  5012_enable-cpu-optimizations-for-gcc91.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel patch enables gcc >= v9.1 optimizations for additional CPUs.
+
 Patch:  5013_enable-cpu-optimizations-for-gcc10.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel patch enables gcc = v10.1+ optimizations for additional CPUs.

diff --git a/5012_enable-cpu-optimizations-for-gcc91.patch b/5012_enable-cpu-optimizations-for-gcc91.patch
new file mode 100644
index 0000000..564eede
--- /dev/null
+++ b/5012_enable-cpu-optimizations-for-gcc91.patch
@@ -0,0 +1,641 @@
+WARNING
+This patch works with gcc versions 9.1+ and with kernel version 5.8+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
+Use the older version of this patch hosted on the same github for older
+versions of gcc.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features  --->
+  Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* AMD Family 17h (Zen 2)
+* Intel Silvermont low-power processors
+* Intel Goldmont low-power processors (Apollo Lake and Denverton)
+* Intel Goldmont Plus low-power processors (Gemini Lake)
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
+* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+* Intel 10th Gen Core i7/i9 (Ice Lake)
+* Intel Xeon (Cascade Lake)
+
+It also offers to compile passing the 'native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[2]
+
+Do NOT try using the 'native' option on AMD Piledriver, Steamroller, or
+Excavator CPUs (-march=bdver{2,3,4} flag). The build will error out due the
+kernel's objtool issue with these.[3a,b]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'match=atom' flags when I
+believe it should use the newer 'march=bonnell' flag for atom processors.[4]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[5] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make endpoint comparing
+a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=5.8
+gcc version >=9.1 and <10
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[6]
+
+REFERENCES
+1.  https://gcc.gnu.org/gcc-4.9/changes.html
+2.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+3a. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95671#c11
+3b. https://github.com/graysky2/kernel_gcc_patch/issues/55
+4.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
+5.  https://github.com/graysky2/kernel_gcc_patch/issues/15
+6.  http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+--- a/arch/x86/include/asm/vermagic.h	2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/include/asm/vermagic.h	2020-06-15 10:44:10.437477053 -0400
+@@ -17,6 +17,36 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -35,6 +65,28 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu	2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Kconfig.cpu	2020-06-15 10:44:10.437477053 -0400
+@@ -123,6 +123,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ 	bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ 	depends on X86_32
++	select X86_P6_NOP
+ 	help
+ 	  Select this for Intel Pentium 4 chips.  This includes the
+ 	  Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -155,9 +156,8 @@ config MPENTIUM4
+ 		-Paxville
+ 		-Dempsey
+ 
+-
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	help
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -165,7 +165,7 @@ config MK6
+ 	  flags to GCC.
+ 
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	help
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -173,12 +173,90 @@ config MK7
+ 	  flags to GCC.
+ 
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	help
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
+ 
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	help
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	help
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	help
++	  Select this for AMD Family 10h Barcelona processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	help
++	  Select this for AMD Family 14h Bobcat processors.
++
++	  Enables -march=btver1
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	help
++	  Select this for AMD Family 16h Jaguar processors.
++
++	  Enables -march=btver2
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	help
++	  Select this for AMD Family 15h Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	help
++	  Select this for AMD Family 15h Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MSTEAMROLLER
++	bool "AMD Steamroller"
++	help
++	  Select this for AMD Family 15h Steamroller processors.
++
++	  Enables -march=bdver3
++
++config MEXCAVATOR
++	bool "AMD Excavator"
++	help
++	  Select this for AMD Family 15h Excavator processors.
++
++	  Enables -march=bdver4
++
++config MZEN
++	bool "AMD Zen"
++	help
++	  Select this for AMD Family 17h Zen processors.
++
++	  Enables -march=znver1
++
++config MZEN2
++	bool "AMD Zen 2"
++	help
++	  Select this for AMD Family 17h Zen 2 processors.
++
++	  Enables -march=znver2
++
+ config MCRUSOE
+ 	bool "Crusoe"
+ 	depends on X86_32
+@@ -260,6 +338,7 @@ config MVIAC7
+ 
+ config MPSC
+ 	bool "Intel P4 / older Netburst based Xeon"
++	select X86_P6_NOP
+ 	depends on X86_64
+ 	help
+ 	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -269,8 +348,19 @@ config MPSC
+ 	  using the cpu family field
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+ 
++config MATOM
++	bool "Intel Atom"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Atom platform. Intel Atom CPUs have an
++	  in-order pipelining architecture and thus can benefit from
++	  accordingly optimized code. Use a recent GCC with specific Atom
++	  support in order to fully benefit from selecting this option.
++
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
++	select X86_P6_NOP
+ 	help
+ 
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -278,14 +368,133 @@ config MCORE2
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
+ 
+-config MATOM
+-	bool "Intel Atom"
++	  Enables -march=core2
++
++config MNEHALEM
++	bool "Intel Nehalem"
++	select X86_P6_NOP
+ 	help
+ 
+-	  Select this for the Intel Atom platform. Intel Atom CPUs have an
+-	  in-order pipelining architecture and thus can benefit from
+-	  accordingly optimized code. Use a recent GCC with specific Atom
+-	  support in order to fully benefit from selecting this option.
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Westmere formerly Nehalem-C family.
++
++	  Enables -march=westmere
++
++config MSILVERMONT
++	bool "Intel Silvermont"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
++config MGOLDMONT
++	bool "Intel Goldmont"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++	  Enables -march=goldmont
++
++config MGOLDMONTPLUS
++	bool "Intel Goldmont Plus"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++	  Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	select X86_P6_NOP
++	help
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	select X86_P6_NOP
++	help
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	select X86_P6_NOP
++	help
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	select X86_P6_NOP
++	help
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
++
++config MSKYLAKE
++	bool "Intel Skylake"
++	select X86_P6_NOP
++	help
++
++	  Select this for 6th Gen Core processors in the Skylake family.
++
++	  Enables -march=skylake
++
++config MSKYLAKEX
++	bool "Intel Skylake X"
++	select X86_P6_NOP
++	help
++
++	  Select this for 6th Gen Core processors in the Skylake X family.
++
++	  Enables -march=skylake-avx512
++
++config MCANNONLAKE
++	bool "Intel Cannon Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for 8th Gen Core processors
++
++	  Enables -march=cannonlake
++
++config MICELAKE
++	bool "Intel Ice Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for 10th Gen Core processors in the Ice Lake family.
++
++	  Enables -march=icelake-client
++
++config MCASCADELAKE
++	bool "Intel Cascade Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for Xeon processors in the Cascade Lake family.
++
++	  Enables -march=cascadelake
+ 
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+@@ -294,6 +503,19 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+ 
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ help
++
++   GCC 4.2 and above support -march=native, which automatically detects
++   the optimum settings to use based on your processor. -march=native
++   also detects and applies additional settings beyond -march specific
++   to your CPU, (eg. -msse4). Unless you have a specific reason not to
++   (e.g. distcc cross-compiling), you should probably be using
++   -march=native rather than anything listed below.
++
++   Enables -march=native
++
+ endchoice
+ 
+ config X86_GENERIC
+@@ -318,7 +540,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ 	default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -336,35 +558,36 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MATOM || MNATIVE
+ 
+ config X86_USE_3DNOW
+ 	def_bool y
+ 	depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+ 
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs).  In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+-	def_bool y
+-	depends on X86_64
+-	depends on (MCORE2 || MPENTIUM4 || MPSC)
++	default n
++	bool "Support for P6_NOPs on Intel chips"
++	depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS  || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
++	help
++	P6_NOPs are a relatively minor optimization that require a family >=
++	6 processor, except that it is broken on certain VIA chips.
++	Furthermore, AMD chips prefer a totally different sequence of NOPs
++	(which work on all CPUs).  In addition, it looks like Virtual PC
++	does not understand them.
++
++	As a result, disallow these if we're not compiling for X86_64 (these
++	NOPs do work on all x86-64 capable chips); the list of processors in
++	the right-hand clause are the cores that benefit from this optimization.
++
++	Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
+ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM) || X86_64
+ 
+ config X86_CMPXCHG64
+ 	def_bool y
+@@ -374,7 +597,7 @@ config X86_CMPXCHG64
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+ 
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+--- a/arch/x86/Makefile	2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Makefile	2020-06-15 10:44:35.608035680 -0400
+@@ -119,13 +119,56 @@ else
+ 	KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+ 
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
++        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+ 
+         cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++        cflags-$(CONFIG_MNEHALEM) += \
++                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++        cflags-$(CONFIG_MWESTMERE) += \
++                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++        cflags-$(CONFIG_MSILVERMONT) += \
++                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++        cflags-$(CONFIG_MGOLDMONT) += \
++                $(call cc-option,-march=goldmont,$(call cc-option,-mtune=goldmont))
++        cflags-$(CONFIG_MGOLDMONTPLUS) += \
++                $(call cc-option,-march=goldmont-plus,$(call cc-option,-mtune=goldmont-plus))
++        cflags-$(CONFIG_MSANDYBRIDGE) += \
++                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++        cflags-$(CONFIG_MIVYBRIDGE) += \
++                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++        cflags-$(CONFIG_MHASWELL) += \
++                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++        cflags-$(CONFIG_MBROADWELL) += \
++                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++        cflags-$(CONFIG_MSKYLAKE) += \
++                $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++        cflags-$(CONFIG_MSKYLAKEX) += \
++                $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
++        cflags-$(CONFIG_MCANNONLAKE) += \
++                $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
++        cflags-$(CONFIG_MICELAKE) += \
++                $(call cc-option,-march=icelake-client,$(call cc-option,-mtune=icelake-client))
++        cflags-$(CONFIG_MCASCADELAKE) += \
++                $(call cc-option,-march=cascadelake,$(call cc-option,-mtune=cascadelake))
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         KBUILD_CFLAGS += $(cflags-y)
+ 
+--- a/arch/x86/Makefile_32.cpu	2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu	2020-06-15 10:44:10.437477053 -0400
+@@ -24,7 +24,19 @@ cflags-$(CONFIG_MK6)		+= -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7)		+= -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER)	+= $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR)	+= $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN)	+= $(call cc-option,-march=znver1,-march=athlon)
++cflags-$(CONFIG_MZEN2)	+= $(call cc-option,-march=znver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE)	+= -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -33,8 +45,22 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
+ cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7)		+= -march=i686
+ cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+-	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MGOLDMONT)	+= -march=i686 $(call tune,goldmont)
++cflags-$(CONFIG_MGOLDMONTPLUS)	+= -march=i686 $(call tune,goldmont-plus)
++cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE)	+= -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX)	+= -march=i686 $(call tune,skylake-avx512)
++cflags-$(CONFIG_MCANNONLAKE)	+= -march=i686 $(call tune,cannonlake)
++cflags-$(CONFIG_MICELAKE)	+= -march=i686 $(call tune,icelake-client)
++cflags-$(CONFIG_MCASCADELAKE)	+= -march=i686 $(call tune,cascadelake)
++cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ 
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN)		+= -march=i486


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-18 20:45 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-02-18 20:45 UTC (permalink / raw
  To: gentoo-commits

commit:     202b38dc2229eb5d6c8c7434d0fb47296fde4e3f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 18 20:44:47 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Feb 18 20:44:47 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=202b38dc

Bluetooth: btusb: Some Qualcomm Bluetooth adapters stop working

See bug #762041

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                              |  4 +++
 2010_btusb-rome-firmware-error-fix.patch | 50 ++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/0000_README b/0000_README
index 927bfd9..a45f62f 100644
--- a/0000_README
+++ b/0000_README
@@ -123,6 +123,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2010_BT-btusb-rome-firmware-error-fix.patch
+From:   https://www.spinics.net/lists/linux-bluetooth/msg90197.html
+Desc:   Bluetooth: btusb: Some Qualcomm Bluetooth adapters stop working
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2010_btusb-rome-firmware-error-fix.patch b/2010_btusb-rome-firmware-error-fix.patch
new file mode 100644
index 0000000..91c18b7
--- /dev/null
+++ b/2010_btusb-rome-firmware-error-fix.patch
@@ -0,0 +1,50 @@
+From 234f414efd1164786269849b4fbb533d6c9cdbbf Mon Sep 17 00:00:00 2001
+From: Hui Wang <hui.wang@canonical.com>
+Date: Mon, 8 Feb 2021 13:02:37 +0800
+Subject: Bluetooth: btusb: Some Qualcomm Bluetooth adapters stop working
+
+This issue starts from linux-5.10-rc1, I reproduced this issue on my
+Dell Inspiron 7447 with BT adapter 0cf3:e005, the kernel will print
+out: "Bluetooth: hci0: don't support firmware rome 0x31010000", and
+someone else also reported the similar issue to bugzilla #211571.
+
+I found this is a regression introduced by 'commit b40f58b97386
+("Bluetooth: btusb: Add Qualcomm Bluetooth SoC WCN6855 support"), the
+patch assumed that if high ROM version is not zero, it is an adapter
+on WCN6855, but many old adapters don't need to load rampatch or nvm,
+and they have non-zero high ROM version.
+
+To fix it, let the driver match the rom_version in the
+qca_devices_table first, if there is no entry matched, check the
+high ROM version, if it is not zero, we assume this adapter is ready
+to work and no need to load rampatch and nvm like previously.
+
+BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=211571
+Fixes: b40f58b97386 ("Bluetooth: btusb: Add Qualcomm Bluetooth SoC WCN6855 support")
+Signed-off-by: Hui Wang <hui.wang@canonical.com>
+Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
+---
+ drivers/bluetooth/btusb.c | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 9c6836ee3c9b3..52683fd22e050 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -4273,6 +4273,13 @@ static int btusb_setup_qca(struct hci_dev *hdev)
+ 			info = &qca_devices_table[i];
+ 	}
+ 	if (!info) {
++		/* If the rom_version is not matched in the qca_devices_table
++		 * and the high ROM version is not zero, we assume this chip no
++		 * need to load the rampatch and nvm.
++		 */
++		if (ver_rom & ~0xffffU)
++			return 0;
++
+ 		bt_dev_err(hdev, "don't support firmware rome 0x%x", ver_rom);
+ 		return -ENODEV;
+ 	}
+-- 
+cgit 1.2.3-1.el7
+


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-23 15:16 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-02-23 15:16 UTC (permalink / raw
  To: gentoo-commits

commit:     a05ac93762062b3cb2a076e452987e997fd4962b
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Tue Feb 23 15:16:06 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Tue Feb 23 15:16:14 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a05ac937

Linux patch 5.10.18

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1017_linux-5.10.18.patch | 1145 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1149 insertions(+)

diff --git a/0000_README b/0000_README
index a45f62f..ae786d2 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch:  1016_linux-5.10.17.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.17
 
+Patch:  1017_linux-5.10.18.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.18
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1017_linux-5.10.18.patch b/1017_linux-5.10.18.patch
new file mode 100644
index 0000000..7c61ec0
--- /dev/null
+++ b/1017_linux-5.10.18.patch
@@ -0,0 +1,1145 @@
+diff --git a/Makefile b/Makefile
+index b740f9c933cb7..822a8e10d4325 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/xen/p2m.c b/arch/arm/xen/p2m.c
+index e52950a43f2ed..fd6e3aafe2724 100644
+--- a/arch/arm/xen/p2m.c
++++ b/arch/arm/xen/p2m.c
+@@ -95,8 +95,10 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+ 	for (i = 0; i < count; i++) {
+ 		if (map_ops[i].status)
+ 			continue;
+-		set_phys_to_machine(map_ops[i].host_addr >> XEN_PAGE_SHIFT,
+-				    map_ops[i].dev_bus_addr >> XEN_PAGE_SHIFT);
++		if (unlikely(!set_phys_to_machine(map_ops[i].host_addr >> XEN_PAGE_SHIFT,
++				    map_ops[i].dev_bus_addr >> XEN_PAGE_SHIFT))) {
++			return -ENOMEM;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
+index be4151f42611f..d71942a3c65af 100644
+--- a/arch/x86/xen/p2m.c
++++ b/arch/x86/xen/p2m.c
+@@ -712,7 +712,8 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+ 		unsigned long mfn, pfn;
+ 
+ 		/* Do not add to override if the map failed. */
+-		if (map_ops[i].status)
++		if (map_ops[i].status != GNTST_okay ||
++		    (kmap_ops && kmap_ops[i].status != GNTST_okay))
+ 			continue;
+ 
+ 		if (map_ops[i].flags & GNTMAP_contains_pte) {
+@@ -750,17 +751,15 @@ int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+ 		unsigned long mfn = __pfn_to_mfn(page_to_pfn(pages[i]));
+ 		unsigned long pfn = page_to_pfn(pages[i]);
+ 
+-		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
++		if (mfn != INVALID_P2M_ENTRY && (mfn & FOREIGN_FRAME_BIT))
++			set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
++		else
+ 			ret = -EINVAL;
+-			goto out;
+-		}
+-
+-		set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
+ 	}
+ 	if (kunmap_ops)
+ 		ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
+-						kunmap_ops, count);
+-out:
++						kunmap_ops, count) ?: ret;
++
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(clear_foreign_p2m_mapping);
+diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
+index 9ebf53903d7bf..da16121140cab 100644
+--- a/drivers/block/xen-blkback/blkback.c
++++ b/drivers/block/xen-blkback/blkback.c
+@@ -794,8 +794,13 @@ again:
+ 			pages[i]->persistent_gnt = persistent_gnt;
+ 		} else {
+ 			if (gnttab_page_cache_get(&ring->free_pages,
+-						  &pages[i]->page))
+-				goto out_of_memory;
++						  &pages[i]->page)) {
++				gnttab_page_cache_put(&ring->free_pages,
++						      pages_to_gnt,
++						      segs_to_map);
++				ret = -ENOMEM;
++				goto out;
++			}
+ 			addr = vaddr(pages[i]->page);
+ 			pages_to_gnt[segs_to_map] = pages[i]->page;
+ 			pages[i]->persistent_gnt = NULL;
+@@ -811,10 +816,8 @@ again:
+ 			break;
+ 	}
+ 
+-	if (segs_to_map) {
++	if (segs_to_map)
+ 		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+-		BUG_ON(ret);
+-	}
+ 
+ 	/*
+ 	 * Now swizzle the MFN in our domain with the MFN from the other domain
+@@ -830,7 +833,7 @@ again:
+ 				gnttab_page_cache_put(&ring->free_pages,
+ 						      &pages[seg_idx]->page, 1);
+ 				pages[seg_idx]->handle = BLKBACK_INVALID_HANDLE;
+-				ret |= 1;
++				ret |= !ret;
+ 				goto next;
+ 			}
+ 			pages[seg_idx]->handle = map[new_map_idx].handle;
+@@ -882,17 +885,18 @@ next:
+ 	}
+ 	segs_to_map = 0;
+ 	last_map = map_until;
+-	if (map_until != num)
++	if (!ret && map_until != num)
+ 		goto again;
+ 
+-	return ret;
+-
+-out_of_memory:
+-	pr_alert("%s: out of memory\n", __func__);
+-	gnttab_page_cache_put(&ring->free_pages, pages_to_gnt, segs_to_map);
+-	for (i = last_map; i < num; i++)
++out:
++	for (i = last_map; i < num; i++) {
++		/* Don't zap current batch's valid persistent grants. */
++		if(i >= last_map + segs_to_map)
++			pages[i]->persistent_gnt = NULL;
+ 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
+-	return -ENOMEM;
++	}
++
++	return ret;
+ }
+ 
+ static int xen_blkbk_map_seg(struct pending_req *pending_req)
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 80468745d5c5e..9f958699141e2 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -480,7 +480,6 @@ static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
+ #define BTUSB_HW_RESET_ACTIVE	12
+ #define BTUSB_TX_WAIT_VND_EVT	13
+ #define BTUSB_WAKEUP_DISABLE	14
+-#define BTUSB_USE_ALT1_FOR_WBS	15
+ 
+ struct btusb_data {
+ 	struct hci_dev       *hdev;
+@@ -1710,15 +1709,12 @@ static void btusb_work(struct work_struct *work)
+ 				new_alts = data->sco_num;
+ 			}
+ 		} else if (data->air_mode == HCI_NOTIFY_ENABLE_SCO_TRANSP) {
+-			/* Check if Alt 6 is supported for Transparent audio */
+-			if (btusb_find_altsetting(data, 6)) {
+-				data->usb_alt6_packet_flow = true;
+-				new_alts = 6;
+-			} else if (test_bit(BTUSB_USE_ALT1_FOR_WBS, &data->flags)) {
+-				new_alts = 1;
+-			} else {
+-				bt_dev_err(hdev, "Device does not support ALT setting 6");
+-			}
++			/* Bluetooth USB spec recommends alt 6 (63 bytes), but
++			 * many adapters do not support it.  Alt 1 appears to
++			 * work for all adapters that do not have alt 6, and
++			 * which work with WBS at all.
++			 */
++			new_alts = btusb_find_altsetting(data, 6) ? 6 : 1;
+ 		}
+ 
+ 		if (btusb_switch_alt_setting(hdev, new_alts) < 0)
+@@ -4149,10 +4145,6 @@ static int btusb_probe(struct usb_interface *intf,
+ 		 * (DEVICE_REMOTE_WAKEUP)
+ 		 */
+ 		set_bit(BTUSB_WAKEUP_DISABLE, &data->flags);
+-		if (btusb_find_altsetting(data, 1))
+-			set_bit(BTUSB_USE_ALT1_FOR_WBS, &data->flags);
+-		else
+-			bt_dev_err(hdev, "Device does not support ALT setting 1");
+ 	}
+ 
+ 	if (!reset)
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 436e17f1d0e53..bd478947b93a5 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -28,6 +28,18 @@ static int isert_debug_level;
+ module_param_named(debug_level, isert_debug_level, int, 0644);
+ MODULE_PARM_DESC(debug_level, "Enable debug tracing if > 0 (default:0)");
+ 
++static int isert_sg_tablesize_set(const char *val,
++				  const struct kernel_param *kp);
++static const struct kernel_param_ops sg_tablesize_ops = {
++	.set = isert_sg_tablesize_set,
++	.get = param_get_int,
++};
++
++static int isert_sg_tablesize = ISCSI_ISER_DEF_SG_TABLESIZE;
++module_param_cb(sg_tablesize, &sg_tablesize_ops, &isert_sg_tablesize, 0644);
++MODULE_PARM_DESC(sg_tablesize,
++		 "Number of gather/scatter entries in a single scsi command, should >= 128 (default: 256, max: 4096)");
++
+ static DEFINE_MUTEX(device_list_mutex);
+ static LIST_HEAD(device_list);
+ static struct workqueue_struct *isert_comp_wq;
+@@ -47,6 +59,19 @@ static void isert_send_done(struct ib_cq *cq, struct ib_wc *wc);
+ static void isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc);
+ static void isert_login_send_done(struct ib_cq *cq, struct ib_wc *wc);
+ 
++static int isert_sg_tablesize_set(const char *val, const struct kernel_param *kp)
++{
++	int n = 0, ret;
++
++	ret = kstrtoint(val, 10, &n);
++	if (ret != 0 || n < ISCSI_ISER_MIN_SG_TABLESIZE ||
++	    n > ISCSI_ISER_MAX_SG_TABLESIZE)
++		return -EINVAL;
++
++	return param_set_int(val, kp);
++}
++
++
+ static inline bool
+ isert_prot_cmd(struct isert_conn *conn, struct se_cmd *cmd)
+ {
+@@ -101,7 +126,7 @@ isert_create_qp(struct isert_conn *isert_conn,
+ 	attr.cap.max_send_wr = ISERT_QP_MAX_REQ_DTOS + 1;
+ 	attr.cap.max_recv_wr = ISERT_QP_MAX_RECV_DTOS + 1;
+ 	factor = rdma_rw_mr_factor(device->ib_device, cma_id->port_num,
+-				   ISCSI_ISER_MAX_SG_TABLESIZE);
++				   isert_sg_tablesize);
+ 	attr.cap.max_rdma_ctxs = ISCSI_DEF_XMIT_CMDS_MAX * factor;
+ 	attr.cap.max_send_sge = device->ib_device->attrs.max_send_sge;
+ 	attr.cap.max_recv_sge = 1;
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.h b/drivers/infiniband/ulp/isert/ib_isert.h
+index 7fee4a65e181a..6c5af13db4e0d 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.h
++++ b/drivers/infiniband/ulp/isert/ib_isert.h
+@@ -65,6 +65,12 @@
+  */
+ #define ISER_RX_SIZE		(ISCSI_DEF_MAX_RECV_SEG_LEN + 1024)
+ 
++/* Default I/O size is 1MB */
++#define ISCSI_ISER_DEF_SG_TABLESIZE 256
++
++/* Minimum I/O size is 512KB */
++#define ISCSI_ISER_MIN_SG_TABLESIZE 128
++
+ /* Maximum support is 16MB I/O size */
+ #define ISCSI_ISER_MAX_SG_TABLESIZE	4096
+ 
+diff --git a/drivers/media/usb/pwc/pwc-if.c b/drivers/media/usb/pwc/pwc-if.c
+index 61869636ec613..5e3339cc31c07 100644
+--- a/drivers/media/usb/pwc/pwc-if.c
++++ b/drivers/media/usb/pwc/pwc-if.c
+@@ -155,16 +155,17 @@ static const struct video_device pwc_template = {
+ /***************************************************************************/
+ /* Private functions */
+ 
+-static void *pwc_alloc_urb_buffer(struct device *dev,
++static void *pwc_alloc_urb_buffer(struct usb_device *dev,
+ 				  size_t size, dma_addr_t *dma_handle)
+ {
++	struct device *dmadev = dev->bus->sysdev;
+ 	void *buffer = kmalloc(size, GFP_KERNEL);
+ 
+ 	if (!buffer)
+ 		return NULL;
+ 
+-	*dma_handle = dma_map_single(dev, buffer, size, DMA_FROM_DEVICE);
+-	if (dma_mapping_error(dev, *dma_handle)) {
++	*dma_handle = dma_map_single(dmadev, buffer, size, DMA_FROM_DEVICE);
++	if (dma_mapping_error(dmadev, *dma_handle)) {
+ 		kfree(buffer);
+ 		return NULL;
+ 	}
+@@ -172,12 +173,14 @@ static void *pwc_alloc_urb_buffer(struct device *dev,
+ 	return buffer;
+ }
+ 
+-static void pwc_free_urb_buffer(struct device *dev,
++static void pwc_free_urb_buffer(struct usb_device *dev,
+ 				size_t size,
+ 				void *buffer,
+ 				dma_addr_t dma_handle)
+ {
+-	dma_unmap_single(dev, dma_handle, size, DMA_FROM_DEVICE);
++	struct device *dmadev = dev->bus->sysdev;
++
++	dma_unmap_single(dmadev, dma_handle, size, DMA_FROM_DEVICE);
+ 	kfree(buffer);
+ }
+ 
+@@ -282,6 +285,7 @@ static void pwc_frame_complete(struct pwc_device *pdev)
+ static void pwc_isoc_handler(struct urb *urb)
+ {
+ 	struct pwc_device *pdev = (struct pwc_device *)urb->context;
++	struct device *dmadev = urb->dev->bus->sysdev;
+ 	int i, fst, flen;
+ 	unsigned char *iso_buf = NULL;
+ 
+@@ -328,7 +332,7 @@ static void pwc_isoc_handler(struct urb *urb)
+ 	/* Reset ISOC error counter. We did get here, after all. */
+ 	pdev->visoc_errors = 0;
+ 
+-	dma_sync_single_for_cpu(&urb->dev->dev,
++	dma_sync_single_for_cpu(dmadev,
+ 				urb->transfer_dma,
+ 				urb->transfer_buffer_length,
+ 				DMA_FROM_DEVICE);
+@@ -379,7 +383,7 @@ static void pwc_isoc_handler(struct urb *urb)
+ 		pdev->vlast_packet_size = flen;
+ 	}
+ 
+-	dma_sync_single_for_device(&urb->dev->dev,
++	dma_sync_single_for_device(dmadev,
+ 				   urb->transfer_dma,
+ 				   urb->transfer_buffer_length,
+ 				   DMA_FROM_DEVICE);
+@@ -461,7 +465,7 @@ retry:
+ 		urb->pipe = usb_rcvisocpipe(udev, pdev->vendpoint);
+ 		urb->transfer_flags = URB_ISO_ASAP | URB_NO_TRANSFER_DMA_MAP;
+ 		urb->transfer_buffer_length = ISO_BUFFER_SIZE;
+-		urb->transfer_buffer = pwc_alloc_urb_buffer(&udev->dev,
++		urb->transfer_buffer = pwc_alloc_urb_buffer(udev,
+ 							    urb->transfer_buffer_length,
+ 							    &urb->transfer_dma);
+ 		if (urb->transfer_buffer == NULL) {
+@@ -524,7 +528,7 @@ static void pwc_iso_free(struct pwc_device *pdev)
+ 		if (urb) {
+ 			PWC_DEBUG_MEMORY("Freeing URB\n");
+ 			if (urb->transfer_buffer)
+-				pwc_free_urb_buffer(&urb->dev->dev,
++				pwc_free_urb_buffer(urb->dev,
+ 						    urb->transfer_buffer_length,
+ 						    urb->transfer_buffer,
+ 						    urb->transfer_dma);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index 31b40fb83f6c1..c31036f57aef8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -2718,11 +2718,11 @@ int mt7615_mcu_rdd_cmd(struct mt7615_dev *dev,
+ int mt7615_mcu_set_fcc5_lpn(struct mt7615_dev *dev, int val)
+ {
+ 	struct {
+-		u16 tag;
+-		u16 min_lpn;
++		__le16 tag;
++		__le16 min_lpn;
+ 	} req = {
+-		.tag = 0x1,
+-		.min_lpn = val,
++		.tag = cpu_to_le16(0x1),
++		.min_lpn = cpu_to_le16(val),
+ 	};
+ 
+ 	return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RDD_TH,
+@@ -2733,14 +2733,27 @@ int mt7615_mcu_set_pulse_th(struct mt7615_dev *dev,
+ 			    const struct mt7615_dfs_pulse *pulse)
+ {
+ 	struct {
+-		u16 tag;
+-		struct mt7615_dfs_pulse pulse;
++		__le16 tag;
++		__le32 max_width;	/* us */
++		__le32 max_pwr;		/* dbm */
++		__le32 min_pwr;		/* dbm */
++		__le32 min_stgr_pri;	/* us */
++		__le32 max_stgr_pri;	/* us */
++		__le32 min_cr_pri;	/* us */
++		__le32 max_cr_pri;	/* us */
+ 	} req = {
+-		.tag = 0x3,
++		.tag = cpu_to_le16(0x3),
++#define __req_field(field) .field = cpu_to_le32(pulse->field)
++		__req_field(max_width),
++		__req_field(max_pwr),
++		__req_field(min_pwr),
++		__req_field(min_stgr_pri),
++		__req_field(max_stgr_pri),
++		__req_field(min_cr_pri),
++		__req_field(max_cr_pri),
++#undef  __req_field
+ 	};
+ 
+-	memcpy(&req.pulse, pulse, sizeof(*pulse));
+-
+ 	return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RDD_TH,
+ 				   &req, sizeof(req), true);
+ }
+@@ -2749,16 +2762,45 @@ int mt7615_mcu_set_radar_th(struct mt7615_dev *dev, int index,
+ 			    const struct mt7615_dfs_pattern *pattern)
+ {
+ 	struct {
+-		u16 tag;
+-		u16 radar_type;
+-		struct mt7615_dfs_pattern pattern;
++		__le16 tag;
++		__le16 radar_type;
++		u8 enb;
++		u8 stgr;
++		u8 min_crpn;
++		u8 max_crpn;
++		u8 min_crpr;
++		u8 min_pw;
++		u8 max_pw;
++		__le32 min_pri;
++		__le32 max_pri;
++		u8 min_crbn;
++		u8 max_crbn;
++		u8 min_stgpn;
++		u8 max_stgpn;
++		u8 min_stgpr;
+ 	} req = {
+-		.tag = 0x2,
+-		.radar_type = index,
++		.tag = cpu_to_le16(0x2),
++		.radar_type = cpu_to_le16(index),
++#define __req_field_u8(field) .field = pattern->field
++#define __req_field_u32(field) .field = cpu_to_le32(pattern->field)
++		__req_field_u8(enb),
++		__req_field_u8(stgr),
++		__req_field_u8(min_crpn),
++		__req_field_u8(max_crpn),
++		__req_field_u8(min_crpr),
++		__req_field_u8(min_pw),
++		__req_field_u8(max_pw),
++		__req_field_u32(min_pri),
++		__req_field_u32(max_pri),
++		__req_field_u8(min_crbn),
++		__req_field_u8(max_crbn),
++		__req_field_u8(min_stgpn),
++		__req_field_u8(max_stgpn),
++		__req_field_u8(min_stgpr),
++#undef __req_field_u8
++#undef __req_field_u32
+ 	};
+ 
+-	memcpy(&req.pattern, pattern, sizeof(*pattern));
+-
+ 	return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RDD_TH,
+ 				   &req, sizeof(req), true);
+ }
+@@ -2769,9 +2811,9 @@ int mt7615_mcu_rdd_send_pattern(struct mt7615_dev *dev)
+ 		u8 pulse_num;
+ 		u8 rsv[3];
+ 		struct {
+-			u32 start_time;
+-			u16 width;
+-			s16 power;
++			__le32 start_time;
++			__le16 width;
++			__le16 power;
+ 		} pattern[32];
+ 	} req = {
+ 		.pulse_num = dev->radar_pattern.n_pulses,
+@@ -2784,10 +2826,11 @@ int mt7615_mcu_rdd_send_pattern(struct mt7615_dev *dev)
+ 
+ 	/* TODO: add some noise here */
+ 	for (i = 0; i < dev->radar_pattern.n_pulses; i++) {
+-		req.pattern[i].width = dev->radar_pattern.width;
+-		req.pattern[i].power = dev->radar_pattern.power;
+-		req.pattern[i].start_time = start_time +
+-					    i * dev->radar_pattern.period;
++		u32 ts = start_time + i * dev->radar_pattern.period;
++
++		req.pattern[i].width = cpu_to_le16(dev->radar_pattern.width);
++		req.pattern[i].power = cpu_to_le16(dev->radar_pattern.power);
++		req.pattern[i].start_time = cpu_to_le32(ts);
+ 	}
+ 
+ 	return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RDD_PATTERN,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index a3ccc17856615..ea71409751519 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -2835,7 +2835,7 @@ int mt7915_mcu_fw_dbg_ctrl(struct mt7915_dev *dev, u32 module, u8 level)
+ 	struct {
+ 		u8 ver;
+ 		u8 pad;
+-		u16 len;
++		__le16 len;
+ 		u8 level;
+ 		u8 rsv[3];
+ 		__le32 module_idx;
+@@ -3070,12 +3070,12 @@ int mt7915_mcu_rdd_cmd(struct mt7915_dev *dev,
+ int mt7915_mcu_set_fcc5_lpn(struct mt7915_dev *dev, int val)
+ {
+ 	struct {
+-		u32 tag;
+-		u16 min_lpn;
++		__le32 tag;
++		__le16 min_lpn;
+ 		u8 rsv[2];
+ 	} __packed req = {
+-		.tag = 0x1,
+-		.min_lpn = val,
++		.tag = cpu_to_le32(0x1),
++		.min_lpn = cpu_to_le16(val),
+ 	};
+ 
+ 	return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RDD_TH,
+@@ -3086,14 +3086,29 @@ int mt7915_mcu_set_pulse_th(struct mt7915_dev *dev,
+ 			    const struct mt7915_dfs_pulse *pulse)
+ {
+ 	struct {
+-		u32 tag;
+-		struct mt7915_dfs_pulse pulse;
++		__le32 tag;
++
++		__le32 max_width;		/* us */
++		__le32 max_pwr;			/* dbm */
++		__le32 min_pwr;			/* dbm */
++		__le32 min_stgr_pri;		/* us */
++		__le32 max_stgr_pri;		/* us */
++		__le32 min_cr_pri;		/* us */
++		__le32 max_cr_pri;		/* us */
+ 	} __packed req = {
+-		.tag = 0x3,
++		.tag = cpu_to_le32(0x3),
++
++#define __req_field(field) .field = cpu_to_le32(pulse->field)
++		__req_field(max_width),
++		__req_field(max_pwr),
++		__req_field(min_pwr),
++		__req_field(min_stgr_pri),
++		__req_field(max_stgr_pri),
++		__req_field(min_cr_pri),
++		__req_field(max_cr_pri),
++#undef __req_field
+ 	};
+ 
+-	memcpy(&req.pulse, pulse, sizeof(*pulse));
+-
+ 	return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RDD_TH,
+ 				   &req, sizeof(req), true);
+ }
+@@ -3102,16 +3117,50 @@ int mt7915_mcu_set_radar_th(struct mt7915_dev *dev, int index,
+ 			    const struct mt7915_dfs_pattern *pattern)
+ {
+ 	struct {
+-		u32 tag;
+-		u16 radar_type;
+-		struct mt7915_dfs_pattern pattern;
++		__le32 tag;
++		__le16 radar_type;
++
++		u8 enb;
++		u8 stgr;
++		u8 min_crpn;
++		u8 max_crpn;
++		u8 min_crpr;
++		u8 min_pw;
++		u32 min_pri;
++		u32 max_pri;
++		u8 max_pw;
++		u8 min_crbn;
++		u8 max_crbn;
++		u8 min_stgpn;
++		u8 max_stgpn;
++		u8 min_stgpr;
++		u8 rsv[2];
++		u32 min_stgpr_diff;
+ 	} __packed req = {
+-		.tag = 0x2,
+-		.radar_type = index,
++		.tag = cpu_to_le32(0x2),
++		.radar_type = cpu_to_le16(index),
++
++#define __req_field_u8(field) .field = pattern->field
++#define __req_field_u32(field) .field = cpu_to_le32(pattern->field)
++		__req_field_u8(enb),
++		__req_field_u8(stgr),
++		__req_field_u8(min_crpn),
++		__req_field_u8(max_crpn),
++		__req_field_u8(min_crpr),
++		__req_field_u8(min_pw),
++		__req_field_u32(min_pri),
++		__req_field_u32(max_pri),
++		__req_field_u8(max_pw),
++		__req_field_u8(min_crbn),
++		__req_field_u8(max_crbn),
++		__req_field_u8(min_stgpn),
++		__req_field_u8(max_stgpn),
++		__req_field_u8(min_stgpr),
++		__req_field_u32(min_stgpr_diff),
++#undef __req_field_u8
++#undef __req_field_u32
+ 	};
+ 
+-	memcpy(&req.pattern, pattern, sizeof(*pattern));
+-
+ 	return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RDD_TH,
+ 				   &req, sizeof(req), true);
+ }
+@@ -3342,12 +3391,12 @@ int mt7915_mcu_add_obss_spr(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+ 		u8 drop_tx_idx;
+ 		u8 sta_idx;	/* 256 sta */
+ 		u8 rsv[2];
+-		u32 val;
++		__le32 val;
+ 	} __packed req = {
+ 		.action = MT_SPR_ENABLE,
+ 		.arg_num = 1,
+ 		.band_idx = mvif->band_idx,
+-		.val = enable,
++		.val = cpu_to_le32(enable),
+ 	};
+ 
+ 	return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_SPR,
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index bc3421d145768..423667b837510 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -1342,13 +1342,11 @@ int xenvif_tx_action(struct xenvif_queue *queue, int budget)
+ 		return 0;
+ 
+ 	gnttab_batch_copy(queue->tx_copy_ops, nr_cops);
+-	if (nr_mops != 0) {
++	if (nr_mops != 0)
+ 		ret = gnttab_map_refs(queue->tx_map_ops,
+ 				      NULL,
+ 				      queue->pages_to_map,
+ 				      nr_mops);
+-		BUG_ON(ret);
+-	}
+ 
+ 	work_done = xenvif_tx_submit(queue);
+ 
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index ff87cb51747d8..21cd5ac6ca8b5 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -963,11 +963,14 @@ static inline ssize_t do_tty_write(
+ 		if (ret <= 0)
+ 			break;
+ 
++		written += ret;
++		if (ret > size)
++			break;
++
+ 		/* FIXME! Have Al check this! */
+ 		if (ret != size)
+ 			iov_iter_revert(from, size-ret);
+ 
+-		written += ret;
+ 		count -= ret;
+ 		if (!count)
+ 			break;
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index 6a90fdb9cbfc6..f2ad450db5478 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -42,6 +42,8 @@ static char *macaddr;
+ module_param(macaddr, charp, 0);
+ MODULE_PARM_DESC(macaddr, "Ethernet MAC address");
+ 
++u8 macaddr_buf[ETH_ALEN];
++
+ struct vdpasim_virtqueue {
+ 	struct vringh vring;
+ 	struct vringh_kiov iov;
+@@ -67,14 +69,24 @@ static u64 vdpasim_features = (1ULL << VIRTIO_F_ANY_LAYOUT) |
+ 			      (1ULL << VIRTIO_F_ACCESS_PLATFORM) |
+ 			      (1ULL << VIRTIO_NET_F_MAC);
+ 
++struct vdpasim;
++
++struct vdpasim_dev_attr {
++	size_t config_size;
++	int nvqs;
++	void (*get_config)(struct vdpasim *vdpasim, void *config);
++};
++
+ /* State of each vdpasim device */
+ struct vdpasim {
+ 	struct vdpa_device vdpa;
+-	struct vdpasim_virtqueue vqs[VDPASIM_VQ_NUM];
++	struct vdpasim_virtqueue *vqs;
+ 	struct work_struct work;
++	struct vdpasim_dev_attr dev_attr;
+ 	/* spinlock to synchronize virtqueue state */
+ 	spinlock_t lock;
+-	struct virtio_net_config config;
++	/* virtio config according to device type */
++	void *config;
+ 	struct vhost_iotlb *iommu;
+ 	void *buffer;
+ 	u32 status;
+@@ -144,7 +156,7 @@ static void vdpasim_reset(struct vdpasim *vdpasim)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < VDPASIM_VQ_NUM; i++)
++	for (i = 0; i < vdpasim->dev_attr.nvqs; i++)
+ 		vdpasim_vq_reset(&vdpasim->vqs[i]);
+ 
+ 	spin_lock(&vdpasim->iommu_lock);
+@@ -345,22 +357,24 @@ static const struct dma_map_ops vdpasim_dma_ops = {
+ static const struct vdpa_config_ops vdpasim_net_config_ops;
+ static const struct vdpa_config_ops vdpasim_net_batch_config_ops;
+ 
+-static struct vdpasim *vdpasim_create(void)
++static struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
+ {
+ 	const struct vdpa_config_ops *ops;
+ 	struct vdpasim *vdpasim;
+ 	struct device *dev;
+-	int ret = -ENOMEM;
++	int i, ret = -ENOMEM;
+ 
+ 	if (batch_mapping)
+ 		ops = &vdpasim_net_batch_config_ops;
+ 	else
+ 		ops = &vdpasim_net_config_ops;
+ 
+-	vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops, VDPASIM_VQ_NUM);
++	vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops,
++				    dev_attr->nvqs);
+ 	if (!vdpasim)
+ 		goto err_alloc;
+ 
++	vdpasim->dev_attr = *dev_attr;
+ 	INIT_WORK(&vdpasim->work, vdpasim_work);
+ 	spin_lock_init(&vdpasim->lock);
+ 	spin_lock_init(&vdpasim->iommu_lock);
+@@ -371,6 +385,15 @@ static struct vdpasim *vdpasim_create(void)
+ 		goto err_iommu;
+ 	set_dma_ops(dev, &vdpasim_dma_ops);
+ 
++	vdpasim->config = kzalloc(dev_attr->config_size, GFP_KERNEL);
++	if (!vdpasim->config)
++		goto err_iommu;
++
++	vdpasim->vqs = kcalloc(dev_attr->nvqs, sizeof(struct vdpasim_virtqueue),
++			       GFP_KERNEL);
++	if (!vdpasim->vqs)
++		goto err_iommu;
++
+ 	vdpasim->iommu = vhost_iotlb_alloc(2048, 0);
+ 	if (!vdpasim->iommu)
+ 		goto err_iommu;
+@@ -380,17 +403,17 @@ static struct vdpasim *vdpasim_create(void)
+ 		goto err_iommu;
+ 
+ 	if (macaddr) {
+-		mac_pton(macaddr, vdpasim->config.mac);
+-		if (!is_valid_ether_addr(vdpasim->config.mac)) {
++		mac_pton(macaddr, macaddr_buf);
++		if (!is_valid_ether_addr(macaddr_buf)) {
+ 			ret = -EADDRNOTAVAIL;
+ 			goto err_iommu;
+ 		}
+ 	} else {
+-		eth_random_addr(vdpasim->config.mac);
++		eth_random_addr(macaddr_buf);
+ 	}
+ 
+-	vringh_set_iotlb(&vdpasim->vqs[0].vring, vdpasim->iommu);
+-	vringh_set_iotlb(&vdpasim->vqs[1].vring, vdpasim->iommu);
++	for (i = 0; i < dev_attr->nvqs; i++)
++		vringh_set_iotlb(&vdpasim->vqs[i].vring, vdpasim->iommu);
+ 
+ 	vdpasim->vdpa.dma_dev = dev;
+ 	ret = vdpa_register_device(&vdpasim->vdpa);
+@@ -504,7 +527,6 @@ static u64 vdpasim_get_features(struct vdpa_device *vdpa)
+ static int vdpasim_set_features(struct vdpa_device *vdpa, u64 features)
+ {
+ 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+-	struct virtio_net_config *config = &vdpasim->config;
+ 
+ 	/* DMA mapping must be done by driver */
+ 	if (!(features & (1ULL << VIRTIO_F_ACCESS_PLATFORM)))
+@@ -512,14 +534,6 @@ static int vdpasim_set_features(struct vdpa_device *vdpa, u64 features)
+ 
+ 	vdpasim->features = features & vdpasim_features;
+ 
+-	/* We generally only know whether guest is using the legacy interface
+-	 * here, so generally that's the earliest we can set config fields.
+-	 * Note: We actually require VIRTIO_F_ACCESS_PLATFORM above which
+-	 * implies VIRTIO_F_VERSION_1, but let's not try to be clever here.
+-	 */
+-
+-	config->mtu = cpu_to_vdpasim16(vdpasim, 1500);
+-	config->status = cpu_to_vdpasim16(vdpasim, VIRTIO_NET_S_LINK_UP);
+ 	return 0;
+ }
+ 
+@@ -572,8 +586,13 @@ static void vdpasim_get_config(struct vdpa_device *vdpa, unsigned int offset,
+ {
+ 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+ 
+-	if (offset + len < sizeof(struct virtio_net_config))
+-		memcpy(buf, (u8 *)&vdpasim->config + offset, len);
++	if (offset + len > vdpasim->dev_attr.config_size)
++		return;
++
++	if (vdpasim->dev_attr.get_config)
++		vdpasim->dev_attr.get_config(vdpasim, vdpasim->config);
++
++	memcpy(buf, vdpasim->config + offset, len);
+ }
+ 
+ static void vdpasim_set_config(struct vdpa_device *vdpa, unsigned int offset,
+@@ -659,6 +678,8 @@ static void vdpasim_free(struct vdpa_device *vdpa)
+ 	kfree(vdpasim->buffer);
+ 	if (vdpasim->iommu)
+ 		vhost_iotlb_free(vdpasim->iommu);
++	kfree(vdpasim->vqs);
++	kfree(vdpasim->config);
+ }
+ 
+ static const struct vdpa_config_ops vdpasim_net_config_ops = {
+@@ -714,9 +735,25 @@ static const struct vdpa_config_ops vdpasim_net_batch_config_ops = {
+ 	.free                   = vdpasim_free,
+ };
+ 
++static void vdpasim_net_get_config(struct vdpasim *vdpasim, void *config)
++{
++	struct virtio_net_config *net_config =
++		(struct virtio_net_config *)config;
++
++	net_config->mtu = cpu_to_vdpasim16(vdpasim, 1500);
++	net_config->status = cpu_to_vdpasim16(vdpasim, VIRTIO_NET_S_LINK_UP);
++	memcpy(net_config->mac, macaddr_buf, ETH_ALEN);
++}
++
+ static int __init vdpasim_dev_init(void)
+ {
+-	vdpasim_dev = vdpasim_create();
++	struct vdpasim_dev_attr dev_attr = {};
++
++	dev_attr.nvqs = VDPASIM_VQ_NUM;
++	dev_attr.config_size = sizeof(struct virtio_net_config);
++	dev_attr.get_config = vdpasim_net_get_config;
++
++	vdpasim_dev = vdpasim_create(&dev_attr);
+ 
+ 	if (!IS_ERR(vdpasim_dev))
+ 		return 0;
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index a36b71286bcf8..5447c5156b2e6 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -309,44 +309,47 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
+ 		 * to the kernel linear addresses of the struct pages.
+ 		 * These ptes are completely different from the user ptes dealt
+ 		 * with find_grant_ptes.
++		 * Note that GNTMAP_device_map isn't needed here: The
++		 * dev_bus_addr output field gets consumed only from ->map_ops,
++		 * and by not requesting it when mapping we also avoid needing
++		 * to mirror dev_bus_addr into ->unmap_ops (and holding an extra
++		 * reference to the page in the hypervisor).
+ 		 */
++		unsigned int flags = (map->flags & ~GNTMAP_device_map) |
++				     GNTMAP_host_map;
++
+ 		for (i = 0; i < map->count; i++) {
+ 			unsigned long address = (unsigned long)
+ 				pfn_to_kaddr(page_to_pfn(map->pages[i]));
+ 			BUG_ON(PageHighMem(map->pages[i]));
+ 
+-			gnttab_set_map_op(&map->kmap_ops[i], address,
+-				map->flags | GNTMAP_host_map,
++			gnttab_set_map_op(&map->kmap_ops[i], address, flags,
+ 				map->grants[i].ref,
+ 				map->grants[i].domid);
+ 			gnttab_set_unmap_op(&map->kunmap_ops[i], address,
+-				map->flags | GNTMAP_host_map, -1);
++				flags, -1);
+ 		}
+ 	}
+ 
+ 	pr_debug("map %d+%d\n", map->index, map->count);
+ 	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
+ 			map->pages, map->count);
+-	if (err)
+-		return err;
+ 
+ 	for (i = 0; i < map->count; i++) {
+-		if (map->map_ops[i].status) {
++		if (map->map_ops[i].status == GNTST_okay)
++			map->unmap_ops[i].handle = map->map_ops[i].handle;
++		else if (!err)
+ 			err = -EINVAL;
+-			continue;
+-		}
+ 
+-		map->unmap_ops[i].handle = map->map_ops[i].handle;
+-		if (use_ptemod)
+-			map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
+-#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+-		else if (map->dma_vaddr) {
+-			unsigned long bfn;
++		if (map->flags & GNTMAP_device_map)
++			map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr;
+ 
+-			bfn = pfn_to_bfn(page_to_pfn(map->pages[i]));
+-			map->unmap_ops[i].dev_bus_addr = __pfn_to_phys(bfn);
++		if (use_ptemod) {
++			if (map->kmap_ops[i].status == GNTST_okay)
++				map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
++			else if (!err)
++				err = -EINVAL;
+ 		}
+-#endif
+ 	}
+ 	return err;
+ }
+diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
+index 862162dca33cf..9cd4fe8ce6803 100644
+--- a/drivers/xen/xen-scsiback.c
++++ b/drivers/xen/xen-scsiback.c
+@@ -386,12 +386,12 @@ static int scsiback_gnttab_data_map_batch(struct gnttab_map_grant_ref *map,
+ 		return 0;
+ 
+ 	err = gnttab_map_refs(map, NULL, pg, cnt);
+-	BUG_ON(err);
+ 	for (i = 0; i < cnt; i++) {
+ 		if (unlikely(map[i].status != GNTST_okay)) {
+ 			pr_err("invalid buffer -- could not remap it\n");
+ 			map[i].handle = SCSIBACK_INVALID_HANDLE;
+-			err = -ENOMEM;
++			if (!err)
++				err = -ENOMEM;
+ 		} else {
+ 			get_page(pg[i]);
+ 		}
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 30ea9780725ff..b6884eda9ff67 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -146,9 +146,6 @@ enum {
+ 	BTRFS_FS_STATE_DEV_REPLACING,
+ 	/* The btrfs_fs_info created for self-tests */
+ 	BTRFS_FS_STATE_DUMMY_FS_INFO,
+-
+-	/* Indicate that we can't trust the free space tree for caching yet */
+-	BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED,
+ };
+ 
+ #define BTRFS_BACKREF_REV_MAX		256
+@@ -562,6 +559,9 @@ enum {
+ 
+ 	/* Indicate that the discard workqueue can service discards. */
+ 	BTRFS_FS_DISCARD_RUNNING,
++
++	/* Indicate that we can't trust the free space tree for caching yet */
++	BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED,
+ };
+ 
+ /*
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index acc47e2ffb46b..b536d21541a9f 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -8026,8 +8026,12 @@ ssize_t btrfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 	bool relock = false;
+ 	ssize_t ret;
+ 
+-	if (check_direct_IO(fs_info, iter, offset))
++	if (check_direct_IO(fs_info, iter, offset)) {
++		ASSERT(current->journal_info == NULL ||
++		       current->journal_info == BTRFS_DIO_SYNC_STUB);
++		current->journal_info = NULL;
+ 		return 0;
++	}
+ 
+ 	count = iov_iter_count(iter);
+ 	if (iov_iter_rw(iter) == WRITE) {
+diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
+index b9c937b3a1499..0b1182a3cf412 100644
+--- a/include/xen/grant_table.h
++++ b/include/xen/grant_table.h
+@@ -157,6 +157,7 @@ gnttab_set_map_op(struct gnttab_map_grant_ref *map, phys_addr_t addr,
+ 	map->flags = flags;
+ 	map->ref = ref;
+ 	map->dom = domid;
++	map->status = 1; /* arbitrary positive value */
+ }
+ 
+ static inline void
+diff --git a/net/bridge/br.c b/net/bridge/br.c
+index 401eeb9142eb6..1b169f8e74919 100644
+--- a/net/bridge/br.c
++++ b/net/bridge/br.c
+@@ -43,7 +43,10 @@ static int br_device_event(struct notifier_block *unused, unsigned long event, v
+ 
+ 		if (event == NETDEV_REGISTER) {
+ 			/* register of bridge completed, add sysfs entries */
+-			br_sysfs_addbr(dev);
++			err = br_sysfs_addbr(dev);
++			if (err)
++				return notifier_from_errno(err);
++
+ 			return NOTIFY_DONE;
+ 		}
+ 	}
+diff --git a/net/core/dev.c b/net/core/dev.c
+index da85cb9398693..210d0fce58e17 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3867,6 +3867,7 @@ sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
+ 		return skb;
+ 
+ 	/* qdisc_skb_cb(skb)->pkt_len was already set by the caller. */
++	qdisc_skb_cb(skb)->mru = 0;
+ 	mini_qdisc_bstats_cpu_update(miniq, skb);
+ 
+ 	switch (tcf_classify(skb, miniq->filter_list, &cl_res, false)) {
+@@ -4950,6 +4951,7 @@ sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret,
+ 	}
+ 
+ 	qdisc_skb_cb(skb)->pkt_len = skb->len;
++	qdisc_skb_cb(skb)->mru = 0;
+ 	skb->tc_at_ingress = 1;
+ 	mini_qdisc_bstats_cpu_update(miniq, skb);
+ 
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 967ce9ccfc0da..f56b2e331bb6b 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1648,8 +1648,11 @@ static struct sock *mptcp_subflow_get_retrans(const struct mptcp_sock *msk)
+ 			continue;
+ 
+ 		/* still data outstanding at TCP level?  Don't retransmit. */
+-		if (!tcp_write_queue_empty(ssk))
++		if (!tcp_write_queue_empty(ssk)) {
++			if (inet_csk(ssk)->icsk_ca_state >= TCP_CA_Loss)
++				continue;
+ 			return NULL;
++		}
+ 
+ 		if (subflow->backup) {
+ 			if (!backup)
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index c3a664871cb5a..e8902a7e60f24 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -959,16 +959,13 @@ static int dec_ttl_exception_handler(struct datapath *dp, struct sk_buff *skb,
+ 				     struct sw_flow_key *key,
+ 				     const struct nlattr *attr, bool last)
+ {
+-	/* The first action is always 'OVS_DEC_TTL_ATTR_ARG'. */
+-	struct nlattr *dec_ttl_arg = nla_data(attr);
++	/* The first attribute is always 'OVS_DEC_TTL_ATTR_ACTION'. */
++	struct nlattr *actions = nla_data(attr);
+ 
+-	if (nla_len(dec_ttl_arg)) {
+-		struct nlattr *actions = nla_data(dec_ttl_arg);
++	if (nla_len(actions))
++		return clone_execute(dp, skb, key, 0, nla_data(actions),
++				     nla_len(actions), last, false);
+ 
+-		if (actions)
+-			return clone_execute(dp, skb, key, 0, nla_data(actions),
+-					     nla_len(actions), last, false);
+-	}
+ 	consume_skb(skb);
+ 	return 0;
+ }
+@@ -1212,7 +1209,7 @@ static int execute_dec_ttl(struct sk_buff *skb, struct sw_flow_key *key)
+ 			return -EHOSTUNREACH;
+ 
+ 		key->ip.ttl = --nh->hop_limit;
+-	} else {
++	} else if (skb->protocol == htons(ETH_P_IP)) {
+ 		struct iphdr *nh;
+ 		u8 old_ttl;
+ 
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 7a18ffff85514..a0121e7c98b14 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4615,9 +4615,11 @@ static int __net_init packet_net_init(struct net *net)
+ 	mutex_init(&net->packet.sklist_lock);
+ 	INIT_HLIST_HEAD(&net->packet.sklist);
+ 
++#ifdef CONFIG_PROC_FS
+ 	if (!proc_create_net("packet", 0, net->proc_net, &packet_seq_ops,
+ 			sizeof(struct seq_net_private)))
+ 		return -ENOMEM;
++#endif /* CONFIG_PROC_FS */
+ 
+ 	return 0;
+ }
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 957aa9263ba4c..d7134c558993c 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -347,7 +347,7 @@ static int qrtr_node_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+ 	hdr->src_port_id = cpu_to_le32(from->sq_port);
+ 	if (to->sq_port == QRTR_PORT_CTRL) {
+ 		hdr->dst_node_id = cpu_to_le32(node->nid);
+-		hdr->dst_port_id = cpu_to_le32(QRTR_NODE_BCAST);
++		hdr->dst_port_id = cpu_to_le32(QRTR_PORT_CTRL);
+ 	} else {
+ 		hdr->dst_node_id = cpu_to_le32(to->sq_node);
+ 		hdr->dst_port_id = cpu_to_le32(to->sq_port);
+diff --git a/net/sched/Kconfig b/net/sched/Kconfig
+index a3b37d88800eb..d762e89ab74f7 100644
+--- a/net/sched/Kconfig
++++ b/net/sched/Kconfig
+@@ -813,7 +813,7 @@ config NET_ACT_SAMPLE
+ 
+ config NET_ACT_IPT
+ 	tristate "IPtables targets"
+-	depends on NET_CLS_ACT && NETFILTER && IP_NF_IPTABLES
++	depends on NET_CLS_ACT && NETFILTER && NETFILTER_XTABLES
+ 	help
+ 	  Say Y here to be able to invoke iptables targets after successful
+ 	  classification.
+@@ -912,7 +912,7 @@ config NET_ACT_BPF
+ 
+ config NET_ACT_CONNMARK
+ 	tristate "Netfilter Connection Mark Retriever"
+-	depends on NET_CLS_ACT && NETFILTER && IP_NF_IPTABLES
++	depends on NET_CLS_ACT && NETFILTER
+ 	depends on NF_CONNTRACK && NF_CONNTRACK_MARK
+ 	help
+ 	  Say Y here to allow retrieving of conn mark
+@@ -924,7 +924,7 @@ config NET_ACT_CONNMARK
+ 
+ config NET_ACT_CTINFO
+ 	tristate "Netfilter Connection Mark Actions"
+-	depends on NET_CLS_ACT && NETFILTER && IP_NF_IPTABLES
++	depends on NET_CLS_ACT && NETFILTER
+ 	depends on NF_CONNTRACK && NF_CONNTRACK_MARK
+ 	help
+ 	  Say Y here to allow transfer of a connmark stored information.
+diff --git a/net/tls/tls_proc.c b/net/tls/tls_proc.c
+index 3a5dd1e072332..feeceb0e4cb48 100644
+--- a/net/tls/tls_proc.c
++++ b/net/tls/tls_proc.c
+@@ -37,9 +37,12 @@ static int tls_statistics_seq_show(struct seq_file *seq, void *v)
+ 
+ int __net_init tls_proc_init(struct net *net)
+ {
++#ifdef CONFIG_PROC_FS
+ 	if (!proc_create_net_single("tls_stat", 0444, net->proc_net,
+ 				    tls_statistics_seq_show, NULL))
+ 		return -ENOMEM;
++#endif /* CONFIG_PROC_FS */
++
+ 	return 0;
+ }
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-26 10:42 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-02-26 10:42 UTC (permalink / raw
  To: gentoo-commits

commit:     f6d488f641a3289902abf03543bb035ce13d70e6
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 26 10:42:06 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Feb 26 10:42:24 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f6d488f6

Linux patch 5.10.19

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |   4 +
 1018_linux-5.10.19.patch | 544 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 548 insertions(+)

diff --git a/0000_README b/0000_README
index ae786d2..0f03575 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch:  1017_linux-5.10.18.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.18
 
+Patch:  1018_linux-5.10.19.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.19
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1018_linux-5.10.19.patch b/1018_linux-5.10.19.patch
new file mode 100644
index 0000000..b8b2ca4
--- /dev/null
+++ b/1018_linux-5.10.19.patch
@@ -0,0 +1,544 @@
+diff --git a/Makefile b/Makefile
+index 822a8e10d4325..f700bdea626d9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,9 +1,9 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+-NAME = Kleptomaniac Octopus
++NAME = Dare mighty things
+ 
+ # *DOCUMENTATION*
+ # To see a list of typical targets execute "make help"
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210.dtsi b/arch/arm64/boot/dts/nvidia/tegra210.dtsi
+index d47c88950d38d..7fd47d8f166a6 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210.dtsi
+@@ -997,6 +997,7 @@
+ 			 <&tegra_car 128>, /* hda2hdmi */
+ 			 <&tegra_car 111>; /* hda2codec_2x */
+ 		reset-names = "hda", "hda2hdmi", "hda2codec_2x";
++		power-domains = <&pd_sor>;
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 52f36c8790862..dacbd13d32c69 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -2409,7 +2409,7 @@ static unsigned long kvm_mmu_zap_oldest_mmu_pages(struct kvm *kvm,
+ 		return 0;
+ 
+ restart:
+-	list_for_each_entry_safe(sp, tmp, &kvm->arch.active_mmu_pages, link) {
++	list_for_each_entry_safe_reverse(sp, tmp, &kvm->arch.active_mmu_pages, link) {
+ 		/*
+ 		 * Don't zap active root pages, the page itself can't be freed
+ 		 * and zapping it will just force vCPUs to realloc and reload.
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 9f958699141e2..1c942869baacc 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -3689,6 +3689,13 @@ static int btusb_setup_qca(struct hci_dev *hdev)
+ 			info = &qca_devices_table[i];
+ 	}
+ 	if (!info) {
++		/* If the rom_version is not matched in the qca_devices_table
++		 * and the high ROM version is not zero, we assume this chip no
++		 * need to load the rampatch and nvm.
++		 */
++		if (ver_rom & ~0xffffU)
++			return 0;
++
+ 		bt_dev_err(hdev, "don't support firmware rome 0x%x", ver_rom);
+ 		return -ENODEV;
+ 	}
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_disp.c b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+index 98bd48f13fd11..8cd8af35cfaac 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_disp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+@@ -1398,19 +1398,11 @@ static void zynqmp_disp_enable(struct zynqmp_disp *disp)
+  */
+ static void zynqmp_disp_disable(struct zynqmp_disp *disp)
+ {
+-	struct drm_crtc *crtc = &disp->crtc;
+-
+ 	zynqmp_disp_audio_disable(&disp->audio);
+ 
+ 	zynqmp_disp_avbuf_disable_audio(&disp->avbuf);
+ 	zynqmp_disp_avbuf_disable_channels(&disp->avbuf);
+ 	zynqmp_disp_avbuf_disable(&disp->avbuf);
+-
+-	/* Mark the flip is done as crtc is disabled anyway */
+-	if (crtc->state->event) {
+-		complete_all(crtc->state->event->base.completion);
+-		crtc->state->event = NULL;
+-	}
+ }
+ 
+ static inline struct zynqmp_disp *crtc_to_disp(struct drm_crtc *crtc)
+@@ -1499,6 +1491,13 @@ zynqmp_disp_crtc_atomic_disable(struct drm_crtc *crtc,
+ 
+ 	drm_crtc_vblank_off(&disp->crtc);
+ 
++	spin_lock_irq(&crtc->dev->event_lock);
++	if (crtc->state->event) {
++		drm_crtc_send_vblank_event(crtc, crtc->state->event);
++		crtc->state->event = NULL;
++	}
++	spin_unlock_irq(&crtc->dev->event_lock);
++
+ 	clk_disable_unprepare(disp->pclk);
+ 	pm_runtime_put_sync(disp->dev);
+ }
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 56172fe6995cd..8a8b2b982f83c 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -90,7 +90,7 @@ EXPORT_SYMBOL_GPL(hid_register_report);
+  * Register a new field for this report.
+  */
+ 
+-static struct hid_field *hid_register_field(struct hid_report *report, unsigned usages, unsigned values)
++static struct hid_field *hid_register_field(struct hid_report *report, unsigned usages)
+ {
+ 	struct hid_field *field;
+ 
+@@ -101,7 +101,7 @@ static struct hid_field *hid_register_field(struct hid_report *report, unsigned
+ 
+ 	field = kzalloc((sizeof(struct hid_field) +
+ 			 usages * sizeof(struct hid_usage) +
+-			 values * sizeof(unsigned)), GFP_KERNEL);
++			 usages * sizeof(unsigned)), GFP_KERNEL);
+ 	if (!field)
+ 		return NULL;
+ 
+@@ -300,7 +300,7 @@ static int hid_add_field(struct hid_parser *parser, unsigned report_type, unsign
+ 	usages = max_t(unsigned, parser->local.usage_index,
+ 				 parser->global.report_count);
+ 
+-	field = hid_register_field(report, usages, parser->global.report_count);
++	field = hid_register_field(report, usages);
+ 	if (!field)
+ 		return 0;
+ 
+diff --git a/drivers/hwmon/dell-smm-hwmon.c b/drivers/hwmon/dell-smm-hwmon.c
+index ec448f5f2dc33..73b9db9e3aab6 100644
+--- a/drivers/hwmon/dell-smm-hwmon.c
++++ b/drivers/hwmon/dell-smm-hwmon.c
+@@ -1159,6 +1159,13 @@ static struct dmi_system_id i8k_blacklist_fan_support_dmi_table[] __initdata = {
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "XPS13 9333"),
+ 		},
+ 	},
++	{
++		.ident = "Dell XPS 15 L502X",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Dell System XPS L502X"),
++		},
++	},
+ 	{ }
+ };
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h b/drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h
+index 0c5373462cedb..0b1b5f9c67d47 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h
+@@ -219,6 +219,7 @@ CH_PCI_DEVICE_ID_TABLE_DEFINE_BEGIN
+ 	CH_PCI_ID_TABLE_FENTRY(0x6089), /* Custom T62100-KR */
+ 	CH_PCI_ID_TABLE_FENTRY(0x608a), /* Custom T62100-CR */
+ 	CH_PCI_ID_TABLE_FENTRY(0x608b), /* Custom T6225-CR */
++	CH_PCI_ID_TABLE_FENTRY(0x6092), /* Custom T62100-CR-LOM */
+ CH_PCI_DEVICE_ID_TABLE_DEFINE_END;
+ 
+ #endif /* __T4_PCI_ID_TBL_H__ */
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index ce73df4c137ea..b223536e07bed 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1332,6 +1332,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0082, 5)},	/* Cinterion PHxx,PXxx (2 RmNet) */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0083, 4)},	/* Cinterion PHxx,PXxx (1 RmNet + USB Audio)*/
+ 	{QMI_QUIRK_SET_DTR(0x1e2d, 0x00b0, 4)},	/* Cinterion CLS8 */
++	{QMI_FIXED_INTF(0x1e2d, 0x00b7, 0)},	/* Cinterion MV31 RmNet */
+ 	{QMI_FIXED_INTF(0x413c, 0x81a2, 8)},	/* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */
+ 	{QMI_FIXED_INTF(0x413c, 0x81a3, 8)},	/* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */
+ 	{QMI_FIXED_INTF(0x413c, 0x81a4, 8)},	/* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 493ed7ba86ed2..4eb867804b6ab 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -860,7 +860,7 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ 		return error;
+ 
+ 	ctrl->device = ctrl->queues[0].device;
+-	ctrl->ctrl.numa_node = dev_to_node(ctrl->device->dev->dma_device);
++	ctrl->ctrl.numa_node = ibdev_to_node(ctrl->device->dev);
+ 
+ 	/* T10-PI support */
+ 	if (ctrl->device->dev->attrs.device_cap_flags &
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 1b4eb7046b078..6ade3daf78584 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -391,6 +391,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* X-Rite/Gretag-Macbeth Eye-One Pro display colorimeter */
+ 	{ USB_DEVICE(0x0971, 0x2000), .driver_info = USB_QUIRK_NO_SET_INTF },
+ 
++	/* ELMO L-12F document camera */
++	{ USB_DEVICE(0x09a1, 0x0028), .driver_info = USB_QUIRK_DELAY_CTRL_MSG },
++
+ 	/* Broadcom BCM92035DGROM BT dongle */
+ 	{ USB_DEVICE(0x0a5c, 0x2021), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+@@ -415,6 +418,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x10d6, 0x2200), .driver_info =
+ 			USB_QUIRK_STRING_FETCH_255 },
+ 
++	/* novation SoundControl XL */
++	{ USB_DEVICE(0x1235, 0x0061), .driver_info = USB_QUIRK_RESET_RESUME },
++
+ 	/* Huawei 4G LTE module */
+ 	{ USB_DEVICE(0x12d1, 0x15bb), .driver_info =
+ 			USB_QUIRK_DISCONNECT_SUSPEND },
+@@ -495,9 +501,6 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* INTEL VALUE SSD */
+ 	{ USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+-	/* novation SoundControl XL */
+-	{ USB_DEVICE(0x1235, 0x0061), .driver_info = USB_QUIRK_RESET_RESUME },
+-
+ 	{ }  /* terminating entry must be last */
+ };
+ 
+diff --git a/fs/ceph/mdsmap.c b/fs/ceph/mdsmap.c
+index e4aba6c6d3b59..1096d1d3a84c4 100644
+--- a/fs/ceph/mdsmap.c
++++ b/fs/ceph/mdsmap.c
+@@ -243,8 +243,8 @@ struct ceph_mdsmap *ceph_mdsmap_decode(void **p, void *end)
+ 		}
+ 
+ 		if (state <= 0) {
+-			pr_warn("mdsmap_decode got incorrect state(%s)\n",
+-				ceph_mds_state_name(state));
++			dout("mdsmap_decode got incorrect state(%s)\n",
++			     ceph_mds_state_name(state));
+ 			continue;
+ 		}
+ 
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 44f9cce570995..ad3ecda1314d9 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -4007,6 +4007,7 @@ int cifs_setup_cifs_sb(struct smb_vol *pvolume_info,
+ 		cifs_sb->prepath = kstrdup(pvolume_info->prepath, GFP_KERNEL);
+ 		if (cifs_sb->prepath == NULL)
+ 			return -ENOMEM;
++		cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;
+ 	}
+ 
+ 	return 0;
+diff --git a/fs/dax.c b/fs/dax.c
+index 5b47834f2e1bb..b3d27fdc67752 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -810,12 +810,12 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
+ 		address = pgoff_address(index, vma);
+ 
+ 		/*
+-		 * Note because we provide range to follow_pte_pmd it will
+-		 * call mmu_notifier_invalidate_range_start() on our behalf
+-		 * before taking any lock.
++		 * follow_invalidate_pte() will use the range to call
++		 * mmu_notifier_invalidate_range_start() on our behalf before
++		 * taking any lock.
+ 		 */
+-		if (follow_pte_pmd(vma->vm_mm, address, &range,
+-				   &ptep, &pmdp, &ptl))
++		if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
++					  &pmdp, &ptl))
+ 			continue;
+ 
+ 		/*
+diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c
+index caf563981532b..e9d5c8e638b01 100644
+--- a/fs/ntfs/inode.c
++++ b/fs/ntfs/inode.c
+@@ -629,6 +629,12 @@ static int ntfs_read_locked_inode(struct inode *vi)
+ 	}
+ 	a = ctx->attr;
+ 	/* Get the standard information attribute value. */
++	if ((u8 *)a + le16_to_cpu(a->data.resident.value_offset)
++			+ le32_to_cpu(a->data.resident.value_length) >
++			(u8 *)ctx->mrec + vol->mft_record_size) {
++		ntfs_error(vi->i_sb, "Corrupt standard information attribute in inode.");
++		goto unm_err_out;
++	}
+ 	si = (STANDARD_INFORMATION*)((u8*)a +
+ 			le16_to_cpu(a->data.resident.value_offset));
+ 
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index cd5c313729ea1..b8eadd9f96802 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -1655,9 +1655,11 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
+ 		unsigned long end, unsigned long floor, unsigned long ceiling);
+ int
+ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
+-int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+-		   struct mmu_notifier_range *range,
+-		   pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
++int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
++			  struct mmu_notifier_range *range, pte_t **ptepp,
++			  pmd_t **pmdpp, spinlock_t **ptlp);
++int follow_pte(struct mm_struct *mm, unsigned long address,
++	       pte_t **ptepp, spinlock_t **ptlp);
+ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
+ 	unsigned long *pfn);
+ int follow_phys(struct vm_area_struct *vma, unsigned long address,
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 65771bef5e654..ac6ffa5618843 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -4642,6 +4642,19 @@ static inline struct ib_device *rdma_device_to_ibdev(struct device *device)
+ 	return coredev->owner;
+ }
+ 
++/**
++ * ibdev_to_node - return the NUMA node for a given ib_device
++ * @dev:	device to get the NUMA node for.
++ */
++static inline int ibdev_to_node(struct ib_device *ibdev)
++{
++	struct device *parent = ibdev->dev.parent;
++
++	if (!parent)
++		return NUMA_NO_NODE;
++	return dev_to_node(parent);
++}
++
+ /**
+  * rdma_device_to_drv_device - Helper macro to reach back to driver's
+  *			       ib_device holder structure from device pointer.
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 8c017f8c0c6d6..c09594e70f90a 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -10869,7 +10869,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
+ 			bool isdiv = BPF_OP(insn->code) == BPF_DIV;
+ 			struct bpf_insn *patchlet;
+ 			struct bpf_insn chk_and_div[] = {
+-				/* Rx div 0 -> 0 */
++				/* [R,W]x div 0 -> 0 */
+ 				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
+ 					     BPF_JNE | BPF_K, insn->src_reg,
+ 					     0, 2, 0),
+@@ -10878,16 +10878,18 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
+ 				*insn,
+ 			};
+ 			struct bpf_insn chk_and_mod[] = {
+-				/* Rx mod 0 -> Rx */
++				/* [R,W]x mod 0 -> [R,W]x */
+ 				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
+ 					     BPF_JEQ | BPF_K, insn->src_reg,
+-					     0, 1, 0),
++					     0, 1 + (is64 ? 0 : 1), 0),
+ 				*insn,
++				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++				BPF_MOV32_REG(insn->dst_reg, insn->dst_reg),
+ 			};
+ 
+ 			patchlet = isdiv ? chk_and_div : chk_and_mod;
+ 			cnt = isdiv ? ARRAY_SIZE(chk_and_div) :
+-				      ARRAY_SIZE(chk_and_mod);
++				      ARRAY_SIZE(chk_and_mod) - (is64 ? 2 : 0);
+ 
+ 			new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
+ 			if (!new_prog)
+diff --git a/mm/memory.c b/mm/memory.c
+index 50632c4366b8a..eb5722027160a 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -4707,9 +4707,9 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
+ }
+ #endif /* __PAGETABLE_PMD_FOLDED */
+ 
+-static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+-			    struct mmu_notifier_range *range,
+-			    pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
++int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
++			  struct mmu_notifier_range *range, pte_t **ptepp,
++			  pmd_t **pmdpp, spinlock_t **ptlp)
+ {
+ 	pgd_t *pgd;
+ 	p4d_t *p4d;
+@@ -4774,31 +4774,33 @@ out:
+ 	return -EINVAL;
+ }
+ 
+-static inline int follow_pte(struct mm_struct *mm, unsigned long address,
+-			     pte_t **ptepp, spinlock_t **ptlp)
+-{
+-	int res;
+-
+-	/* (void) is needed to make gcc happy */
+-	(void) __cond_lock(*ptlp,
+-			   !(res = __follow_pte_pmd(mm, address, NULL,
+-						    ptepp, NULL, ptlp)));
+-	return res;
+-}
+-
+-int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+-		   struct mmu_notifier_range *range,
+-		   pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
++/**
++ * follow_pte - look up PTE at a user virtual address
++ * @mm: the mm_struct of the target address space
++ * @address: user virtual address
++ * @ptepp: location to store found PTE
++ * @ptlp: location to store the lock for the PTE
++ *
++ * On a successful return, the pointer to the PTE is stored in @ptepp;
++ * the corresponding lock is taken and its location is stored in @ptlp.
++ * The contents of the PTE are only stable until @ptlp is released;
++ * any further use, if any, must be protected against invalidation
++ * with MMU notifiers.
++ *
++ * Only IO mappings and raw PFN mappings are allowed.  The mmap semaphore
++ * should be taken for read.
++ *
++ * KVM uses this function.  While it is arguably less bad than ``follow_pfn``,
++ * it is not a good general-purpose API.
++ *
++ * Return: zero on success, -ve otherwise.
++ */
++int follow_pte(struct mm_struct *mm, unsigned long address,
++	       pte_t **ptepp, spinlock_t **ptlp)
+ {
+-	int res;
+-
+-	/* (void) is needed to make gcc happy */
+-	(void) __cond_lock(*ptlp,
+-			   !(res = __follow_pte_pmd(mm, address, range,
+-						    ptepp, pmdpp, ptlp)));
+-	return res;
++	return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
+ }
+-EXPORT_SYMBOL(follow_pte_pmd);
++EXPORT_SYMBOL_GPL(follow_pte);
+ 
+ /**
+  * follow_pfn - look up PFN at a user virtual address
+@@ -4808,6 +4810,9 @@ EXPORT_SYMBOL(follow_pte_pmd);
+  *
+  * Only IO mappings and raw PFN mappings are allowed.
+  *
++ * This function does not allow the caller to read the permissions
++ * of the PTE.  Do not use it.
++ *
+  * Return: zero and the pfn at @pfn on success, -ve otherwise.
+  */
+ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
+diff --git a/net/rds/ib.h b/net/rds/ib.h
+index 8dfff43cf07f4..c23a11d9ad362 100644
+--- a/net/rds/ib.h
++++ b/net/rds/ib.h
+@@ -264,13 +264,6 @@ struct rds_ib_device {
+ 	int			*vector_load;
+ };
+ 
+-static inline int ibdev_to_node(struct ib_device *ibdev)
+-{
+-	struct device *parent;
+-
+-	parent = ibdev->dev.parent;
+-	return parent ? dev_to_node(parent) : NUMA_NO_NODE;
+-}
+ #define rdsibdev_to_node(rdsibdev) ibdev_to_node(rdsibdev->dev)
+ 
+ /* bits for i_ack_flags */
+diff --git a/scripts/gen_autoksyms.sh b/scripts/gen_autoksyms.sh
+index 16c0b2ddaa4c9..d54dfba15bf25 100755
+--- a/scripts/gen_autoksyms.sh
++++ b/scripts/gen_autoksyms.sh
+@@ -43,6 +43,9 @@ EOT
+ sed 's/ko$/mod/' $modlist |
+ xargs -n1 sed -n -e '2{s/ /\n/g;/^$/!p;}' -- |
+ cat - "$ksym_wl" |
++# Remove the dot prefix for ppc64; symbol names with a dot (.) hold entry
++# point addresses.
++sed -e 's/^\.//' |
+ sort -u |
+ sed -e 's/\(.*\)/#define __KSYM_\1 1/' >> "$output_file"
+ 
+diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
+index 3f77a5d695c13..0bafed857e171 100755
+--- a/scripts/recordmcount.pl
++++ b/scripts/recordmcount.pl
+@@ -268,7 +268,11 @@ if ($arch eq "x86_64") {
+ 
+     # force flags for this arch
+     $ld .= " -m shlelf_linux";
+-    $objcopy .= " -O elf32-sh-linux";
++    if ($endian eq "big") {
++        $objcopy .= " -O elf32-shbig-linux";
++    } else {
++        $objcopy .= " -O elf32-sh-linux";
++    }
+ 
+ } elsif ($arch eq "powerpc") {
+     my $ldemulation;
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index cf9cc0ed7e995..ed4d2e3a00718 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1888,10 +1888,12 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
+ 			       bool write_fault, bool *writable,
+ 			       kvm_pfn_t *p_pfn)
+ {
+-	unsigned long pfn;
++	kvm_pfn_t pfn;
++	pte_t *ptep;
++	spinlock_t *ptl;
+ 	int r;
+ 
+-	r = follow_pfn(vma, addr, &pfn);
++	r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
+ 	if (r) {
+ 		/*
+ 		 * get_user_pages fails for VM_IO and VM_PFNMAP vmas and does
+@@ -1906,14 +1908,19 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
+ 		if (r)
+ 			return r;
+ 
+-		r = follow_pfn(vma, addr, &pfn);
++		r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
+ 		if (r)
+ 			return r;
++	}
+ 
++	if (write_fault && !pte_write(*ptep)) {
++		pfn = KVM_PFN_ERR_RO_FAULT;
++		goto out;
+ 	}
+ 
+ 	if (writable)
+-		*writable = true;
++		*writable = pte_write(*ptep);
++	pfn = pte_pfn(*ptep);
+ 
+ 	/*
+ 	 * Get a reference here because callers of *hva_to_pfn* and
+@@ -1928,6 +1935,8 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
+ 	 */ 
+ 	kvm_get_pfn(pfn);
+ 
++out:
++	pte_unmap_unlock(ptep, ptl);
+ 	*p_pfn = pfn;
+ 	return 0;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-02-26 13:22 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-02-26 13:22 UTC (permalink / raw
  To: gentoo-commits

commit:     50522221fefb61988b4c71bc07bd33fbcb015076
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 26 13:22:39 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb 26 13:22:39 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=50522221

Remove redundant patch

2010_btusb-rome-firmware-error-fix.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                              |  4 ---
 2010_btusb-rome-firmware-error-fix.patch | 50 --------------------------------
 2 files changed, 54 deletions(-)

diff --git a/0000_README b/0000_README
index 0f03575..19abff2 100644
--- a/0000_README
+++ b/0000_README
@@ -131,10 +131,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2010_BT-btusb-rome-firmware-error-fix.patch
-From:   https://www.spinics.net/lists/linux-bluetooth/msg90197.html
-Desc:   Bluetooth: btusb: Some Qualcomm Bluetooth adapters stop working
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2010_btusb-rome-firmware-error-fix.patch b/2010_btusb-rome-firmware-error-fix.patch
deleted file mode 100644
index 91c18b7..0000000
--- a/2010_btusb-rome-firmware-error-fix.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-From 234f414efd1164786269849b4fbb533d6c9cdbbf Mon Sep 17 00:00:00 2001
-From: Hui Wang <hui.wang@canonical.com>
-Date: Mon, 8 Feb 2021 13:02:37 +0800
-Subject: Bluetooth: btusb: Some Qualcomm Bluetooth adapters stop working
-
-This issue starts from linux-5.10-rc1, I reproduced this issue on my
-Dell Inspiron 7447 with BT adapter 0cf3:e005, the kernel will print
-out: "Bluetooth: hci0: don't support firmware rome 0x31010000", and
-someone else also reported the similar issue to bugzilla #211571.
-
-I found this is a regression introduced by 'commit b40f58b97386
-("Bluetooth: btusb: Add Qualcomm Bluetooth SoC WCN6855 support"), the
-patch assumed that if high ROM version is not zero, it is an adapter
-on WCN6855, but many old adapters don't need to load rampatch or nvm,
-and they have non-zero high ROM version.
-
-To fix it, let the driver match the rom_version in the
-qca_devices_table first, if there is no entry matched, check the
-high ROM version, if it is not zero, we assume this adapter is ready
-to work and no need to load rampatch and nvm like previously.
-
-BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=211571
-Fixes: b40f58b97386 ("Bluetooth: btusb: Add Qualcomm Bluetooth SoC WCN6855 support")
-Signed-off-by: Hui Wang <hui.wang@canonical.com>
-Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
----
- drivers/bluetooth/btusb.c | 7 +++++++
- 1 file changed, 7 insertions(+)
-
-diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
-index 9c6836ee3c9b3..52683fd22e050 100644
---- a/drivers/bluetooth/btusb.c
-+++ b/drivers/bluetooth/btusb.c
-@@ -4273,6 +4273,13 @@ static int btusb_setup_qca(struct hci_dev *hdev)
- 			info = &qca_devices_table[i];
- 	}
- 	if (!info) {
-+		/* If the rom_version is not matched in the qca_devices_table
-+		 * and the high ROM version is not zero, we assume this chip no
-+		 * need to load the rampatch and nvm.
-+		 */
-+		if (ver_rom & ~0xffffU)
-+			return 0;
-+
- 		bt_dev_err(hdev, "don't support firmware rome 0x%x", ver_rom);
- 		return -ENODEV;
- 	}
--- 
-cgit 1.2.3-1.el7
-


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-03-04 12:04 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-03-04 12:04 UTC (permalink / raw
  To: gentoo-commits

commit:     8f0e5b98da760bc5b682dd22470dd161336ed39c
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Mar  4 12:04:11 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Mar  4 12:04:24 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8f0e5b98

Linux patch 5.10.20

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |     4 +
 1019_linux-5.10.20.patch | 25078 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 25082 insertions(+)

diff --git a/0000_README b/0000_README
index 19abff2..c338847 100644
--- a/0000_README
+++ b/0000_README
@@ -119,6 +119,10 @@ Patch:  1018_linux-5.10.19.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.19
 
+Patch:  1019_linux-5.10.20.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.20
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1019_linux-5.10.20.patch b/1019_linux-5.10.20.patch
new file mode 100644
index 0000000..4f20d62
--- /dev/null
+++ b/1019_linux-5.10.20.patch
@@ -0,0 +1,25078 @@
+diff --git a/Documentation/admin-guide/perf/arm-cmn.rst b/Documentation/admin-guide/perf/arm-cmn.rst
+index 0e48093460140..796e25b7027b2 100644
+--- a/Documentation/admin-guide/perf/arm-cmn.rst
++++ b/Documentation/admin-guide/perf/arm-cmn.rst
+@@ -17,7 +17,7 @@ PMU events
+ ----------
+ 
+ The PMU driver registers a single PMU device for the whole interconnect,
+-see /sys/bus/event_source/devices/arm_cmn. Multi-chip systems may link
++see /sys/bus/event_source/devices/arm_cmn_0. Multi-chip systems may link
+ more than one CMN together via external CCIX links - in this situation,
+ each mesh counts its own events entirely independently, and additional
+ PMU devices will be named arm_cmn_{1..n}.
+diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
+index f455fa00c00fa..06027c6a233ab 100644
+--- a/Documentation/admin-guide/sysctl/vm.rst
++++ b/Documentation/admin-guide/sysctl/vm.rst
+@@ -978,11 +978,11 @@ that benefit from having their data cached, zone_reclaim_mode should be
+ left disabled as the caching effect is likely to be more important than
+ data locality.
+ 
+-zone_reclaim may be enabled if it's known that the workload is partitioned
+-such that each partition fits within a NUMA node and that accessing remote
+-memory would cause a measurable performance reduction.  The page allocator
+-will then reclaim easily reusable pages (those page cache pages that are
+-currently not used) before allocating off node pages.
++Consider enabling one or more zone_reclaim mode bits if it's known that the
++workload is partitioned such that each partition fits within a NUMA node
++and that accessing remote memory would cause a measurable performance
++reduction.  The page allocator will take additional actions before
++allocating off node pages.
+ 
+ Allowing zone reclaim to write out pages stops processes that are
+ writing large amounts of data from dirtying pages on other nodes. Zone
+diff --git a/Documentation/filesystems/seq_file.rst b/Documentation/filesystems/seq_file.rst
+index 56856481dc8d8..a6726082a7c25 100644
+--- a/Documentation/filesystems/seq_file.rst
++++ b/Documentation/filesystems/seq_file.rst
+@@ -217,6 +217,12 @@ between the calls to start() and stop(), so holding a lock during that time
+ is a reasonable thing to do. The seq_file code will also avoid taking any
+ other locks while the iterator is active.
+ 
++The iterater value returned by start() or next() is guaranteed to be
++passed to a subsequent next() or stop() call.  This allows resources
++such as locks that were taken to be reliably released.  There is *no*
++guarantee that the iterator will be passed to show(), though in practice
++it often will be.
++
+ 
+ Formatted output
+ ================
+diff --git a/Documentation/scsi/libsas.rst b/Documentation/scsi/libsas.rst
+index 7216b5d258001..f9b77c7879dbb 100644
+--- a/Documentation/scsi/libsas.rst
++++ b/Documentation/scsi/libsas.rst
+@@ -189,7 +189,6 @@ num_phys
+ The event interface::
+ 
+ 	/* LLDD calls these to notify the class of an event. */
+-	void (*notify_ha_event)(struct sas_ha_struct *, enum ha_event);
+ 	void (*notify_port_event)(struct sas_phy *, enum port_event);
+ 	void (*notify_phy_event)(struct sas_phy *, enum phy_event);
+ 
+diff --git a/Documentation/security/keys/core.rst b/Documentation/security/keys/core.rst
+index aa0081685ee11..b3ed5c581034c 100644
+--- a/Documentation/security/keys/core.rst
++++ b/Documentation/security/keys/core.rst
+@@ -1040,8 +1040,8 @@ The keyctl syscall functions are:
+ 
+      "key" is the ID of the key to be watched.
+ 
+-     "queue_fd" is a file descriptor referring to an open "/dev/watch_queue"
+-     which manages the buffer into which notifications will be delivered.
++     "queue_fd" is a file descriptor referring to an open pipe which
++     manages the buffer into which notifications will be delivered.
+ 
+      "filter" is either NULL to remove a watch or a filter specification to
+      indicate what events are required from the key.
+diff --git a/Makefile b/Makefile
+index f700bdea626d9..1ebc8a6bf9b06 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 19
++SUBLEVEL = 20
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
+index 3a392983ac079..a0de09f994d88 100644
+--- a/arch/arm/boot/compressed/head.S
++++ b/arch/arm/boot/compressed/head.S
+@@ -1175,9 +1175,9 @@ __armv4_mmu_cache_off:
+ __armv7_mmu_cache_off:
+ 		mrc	p15, 0, r0, c1, c0
+ #ifdef CONFIG_MMU
+-		bic	r0, r0, #0x000d
++		bic	r0, r0, #0x0005
+ #else
+-		bic	r0, r0, #0x000c
++		bic	r0, r0, #0x0004
+ #endif
+ 		mcr	p15, 0, r0, c1, c0	@ turn MMU and cache off
+ 		mov	r0, #0
+diff --git a/arch/arm/boot/dts/armada-388-helios4.dts b/arch/arm/boot/dts/armada-388-helios4.dts
+index fb49df2a3bce7..a7ff774d797c8 100644
+--- a/arch/arm/boot/dts/armada-388-helios4.dts
++++ b/arch/arm/boot/dts/armada-388-helios4.dts
+@@ -70,6 +70,9 @@
+ 
+ 	system-leds {
+ 		compatible = "gpio-leds";
++		pinctrl-names = "default";
++		pinctrl-0 = <&helios_system_led_pins>;
++
+ 		status-led {
+ 			label = "helios4:green:status";
+ 			gpios = <&gpio0 24 GPIO_ACTIVE_LOW>;
+@@ -86,6 +89,9 @@
+ 
+ 	io-leds {
+ 		compatible = "gpio-leds";
++		pinctrl-names = "default";
++		pinctrl-0 = <&helios_io_led_pins>;
++
+ 		sata1-led {
+ 			label = "helios4:green:ata1";
+ 			gpios = <&gpio1 17 GPIO_ACTIVE_LOW>;
+@@ -121,11 +127,15 @@
+ 	fan1: j10-pwm {
+ 		compatible = "pwm-fan";
+ 		pwms = <&gpio1 9 40000>;	/* Target freq:25 kHz */
++		pinctrl-names = "default";
++		pinctrl-0 = <&helios_fan1_pins>;
+ 	};
+ 
+ 	fan2: j17-pwm {
+ 		compatible = "pwm-fan";
+ 		pwms = <&gpio1 23 40000>;	/* Target freq:25 kHz */
++		pinctrl-names = "default";
++		pinctrl-0 = <&helios_fan2_pins>;
+ 	};
+ 
+ 	usb2_phy: usb2-phy {
+@@ -286,16 +296,22 @@
+ 						       "mpp39", "mpp40";
+ 					marvell,function = "sd0";
+ 				};
+-				helios_led_pins: helios-led-pins {
+-					marvell,pins = "mpp24", "mpp25",
+-						       "mpp49", "mpp50",
++				helios_system_led_pins: helios-system-led-pins {
++					marvell,pins = "mpp24", "mpp25";
++					marvell,function = "gpio";
++				};
++				helios_io_led_pins: helios-io-led-pins {
++					marvell,pins = "mpp49", "mpp50",
+ 						       "mpp52", "mpp53",
+ 						       "mpp54";
+ 					marvell,function = "gpio";
+ 				};
+-				helios_fan_pins: helios-fan-pins {
+-					marvell,pins = "mpp41", "mpp43",
+-						       "mpp48", "mpp55";
++				helios_fan1_pins: helios_fan1_pins {
++					marvell,pins = "mpp41", "mpp43";
++					marvell,function = "gpio";
++				};
++				helios_fan2_pins: helios_fan2_pins {
++					marvell,pins = "mpp48", "mpp55";
+ 					marvell,function = "gpio";
+ 				};
+ 				microsom_spi1_cs_pins: spi1-cs-pins {
+diff --git a/arch/arm/boot/dts/aspeed-g4.dtsi b/arch/arm/boot/dts/aspeed-g4.dtsi
+index 82f0213e3a3c3..f81a540a35296 100644
+--- a/arch/arm/boot/dts/aspeed-g4.dtsi
++++ b/arch/arm/boot/dts/aspeed-g4.dtsi
+@@ -370,6 +370,7 @@
+ 						compatible = "aspeed,ast2400-lpc-snoop";
+ 						reg = <0x10 0x8>;
+ 						interrupts = <8>;
++						clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
+ 						status = "disabled";
+ 					};
+ 
+diff --git a/arch/arm/boot/dts/aspeed-g5.dtsi b/arch/arm/boot/dts/aspeed-g5.dtsi
+index a93009aa2f040..39e690be17044 100644
+--- a/arch/arm/boot/dts/aspeed-g5.dtsi
++++ b/arch/arm/boot/dts/aspeed-g5.dtsi
+@@ -492,6 +492,7 @@
+ 						compatible = "aspeed,ast2500-lpc-snoop";
+ 						reg = <0x10 0x8>;
+ 						interrupts = <8>;
++						clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
+ 						status = "disabled";
+ 					};
+ 
+diff --git a/arch/arm/boot/dts/aspeed-g6.dtsi b/arch/arm/boot/dts/aspeed-g6.dtsi
+index bf97aaad7be9b..1cf71bdb4fabe 100644
+--- a/arch/arm/boot/dts/aspeed-g6.dtsi
++++ b/arch/arm/boot/dts/aspeed-g6.dtsi
+@@ -513,6 +513,7 @@
+ 						compatible = "aspeed,ast2600-lpc-snoop";
+ 						reg = <0x0 0x80>;
+ 						interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>;
++						clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
+ 						status = "disabled";
+ 					};
+ 
+diff --git a/arch/arm/boot/dts/exynos3250-artik5.dtsi b/arch/arm/boot/dts/exynos3250-artik5.dtsi
+index 12887b3924af8..ad525f2accbb4 100644
+--- a/arch/arm/boot/dts/exynos3250-artik5.dtsi
++++ b/arch/arm/boot/dts/exynos3250-artik5.dtsi
+@@ -79,7 +79,7 @@
+ 	s2mps14_pmic@66 {
+ 		compatible = "samsung,s2mps14-pmic";
+ 		interrupt-parent = <&gpx3>;
+-		interrupts = <5 IRQ_TYPE_NONE>;
++		interrupts = <5 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&s2mps14_irq>;
+ 		reg = <0x66>;
+diff --git a/arch/arm/boot/dts/exynos3250-monk.dts b/arch/arm/boot/dts/exynos3250-monk.dts
+index c1a68e6120370..7e99e5812a4d3 100644
+--- a/arch/arm/boot/dts/exynos3250-monk.dts
++++ b/arch/arm/boot/dts/exynos3250-monk.dts
+@@ -200,7 +200,7 @@
+ 	s2mps14_pmic@66 {
+ 		compatible = "samsung,s2mps14-pmic";
+ 		interrupt-parent = <&gpx0>;
+-		interrupts = <7 IRQ_TYPE_NONE>;
++		interrupts = <7 IRQ_TYPE_LEVEL_LOW>;
+ 		reg = <0x66>;
+ 		wakeup-source;
+ 
+diff --git a/arch/arm/boot/dts/exynos3250-rinato.dts b/arch/arm/boot/dts/exynos3250-rinato.dts
+index b55afaaa691e8..f9e3b13d3aac2 100644
+--- a/arch/arm/boot/dts/exynos3250-rinato.dts
++++ b/arch/arm/boot/dts/exynos3250-rinato.dts
+@@ -270,7 +270,7 @@
+ 	s2mps14_pmic@66 {
+ 		compatible = "samsung,s2mps14-pmic";
+ 		interrupt-parent = <&gpx0>;
+-		interrupts = <7 IRQ_TYPE_NONE>;
++		interrupts = <7 IRQ_TYPE_LEVEL_LOW>;
+ 		reg = <0x66>;
+ 		wakeup-source;
+ 
+diff --git a/arch/arm/boot/dts/exynos5250-spring.dts b/arch/arm/boot/dts/exynos5250-spring.dts
+index a92ade33779cf..5a9c936407ea3 100644
+--- a/arch/arm/boot/dts/exynos5250-spring.dts
++++ b/arch/arm/boot/dts/exynos5250-spring.dts
+@@ -109,7 +109,7 @@
+ 		compatible = "samsung,s5m8767-pmic";
+ 		reg = <0x66>;
+ 		interrupt-parent = <&gpx3>;
+-		interrupts = <2 IRQ_TYPE_NONE>;
++		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&s5m8767_irq &s5m8767_dvs &s5m8767_ds>;
+ 		wakeup-source;
+diff --git a/arch/arm/boot/dts/exynos5420-arndale-octa.dts b/arch/arm/boot/dts/exynos5420-arndale-octa.dts
+index dd7f8385d81e7..3d9b93d2b242c 100644
+--- a/arch/arm/boot/dts/exynos5420-arndale-octa.dts
++++ b/arch/arm/boot/dts/exynos5420-arndale-octa.dts
+@@ -349,7 +349,7 @@
+ 		reg = <0x66>;
+ 
+ 		interrupt-parent = <&gpx3>;
+-		interrupts = <2 IRQ_TYPE_EDGE_FALLING>;
++		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&s2mps11_irq>;
+ 
+diff --git a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
+index b1cf9414ce17f..d51c1d8620a09 100644
+--- a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
++++ b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
+@@ -509,7 +509,7 @@
+ 		samsung,s2mps11-acokb-ground;
+ 
+ 		interrupt-parent = <&gpx0>;
+-		interrupts = <4 IRQ_TYPE_EDGE_FALLING>;
++		interrupts = <4 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&s2mps11_irq>;
+ 
+diff --git a/arch/arm/boot/dts/omap443x.dtsi b/arch/arm/boot/dts/omap443x.dtsi
+index cb309743de5da..dd8ef58cbaed4 100644
+--- a/arch/arm/boot/dts/omap443x.dtsi
++++ b/arch/arm/boot/dts/omap443x.dtsi
+@@ -33,10 +33,12 @@
+ 	};
+ 
+ 	ocp {
++		/* 4430 has only gpio_86 tshut and no talert interrupt */
+ 		bandgap: bandgap@4a002260 {
+ 			reg = <0x4a002260 0x4
+ 			       0x4a00232C 0x4>;
+ 			compatible = "ti,omap4430-bandgap";
++			gpios = <&gpio3 22 GPIO_ACTIVE_HIGH>;
+ 
+ 			#thermal-sensor-cells = <0>;
+ 		};
+diff --git a/arch/arm/kernel/sys_oabi-compat.c b/arch/arm/kernel/sys_oabi-compat.c
+index 0203e545bbc8d..075a2e0ed2c15 100644
+--- a/arch/arm/kernel/sys_oabi-compat.c
++++ b/arch/arm/kernel/sys_oabi-compat.c
+@@ -248,6 +248,7 @@ struct oabi_epoll_event {
+ 	__u64 data;
+ } __attribute__ ((packed,aligned(4)));
+ 
++#ifdef CONFIG_EPOLL
+ asmlinkage long sys_oabi_epoll_ctl(int epfd, int op, int fd,
+ 				   struct oabi_epoll_event __user *event)
+ {
+@@ -298,6 +299,20 @@ asmlinkage long sys_oabi_epoll_wait(int epfd,
+ 	kfree(kbuf);
+ 	return err ? -EFAULT : ret;
+ }
++#else
++asmlinkage long sys_oabi_epoll_ctl(int epfd, int op, int fd,
++				   struct oabi_epoll_event __user *event)
++{
++	return -EINVAL;
++}
++
++asmlinkage long sys_oabi_epoll_wait(int epfd,
++				    struct oabi_epoll_event __user *events,
++				    int maxevents, int timeout)
++{
++	return -EINVAL;
++}
++#endif
+ 
+ struct oabi_sembuf {
+ 	unsigned short	sem_num;
+diff --git a/arch/arm/mach-at91/pm_suspend.S b/arch/arm/mach-at91/pm_suspend.S
+index 0184de05c1be1..b683c2caa40b9 100644
+--- a/arch/arm/mach-at91/pm_suspend.S
++++ b/arch/arm/mach-at91/pm_suspend.S
+@@ -442,7 +442,7 @@ ENDPROC(at91_backup_mode)
+ 	str	tmp1, [pmc, #AT91_PMC_PLL_UPDT]
+ 
+ 	/* step 2. */
+-	ldr	tmp1, =#AT91_PMC_PLL_ACR_DEFAULT_PLLA
++	ldr	tmp1, =AT91_PMC_PLL_ACR_DEFAULT_PLLA
+ 	str	tmp1, [pmc, #AT91_PMC_PLL_ACR]
+ 
+ 	/* step 3. */
+diff --git a/arch/arm/mach-ixp4xx/Kconfig b/arch/arm/mach-ixp4xx/Kconfig
+index f7211b57b1e78..165c184801e19 100644
+--- a/arch/arm/mach-ixp4xx/Kconfig
++++ b/arch/arm/mach-ixp4xx/Kconfig
+@@ -13,7 +13,6 @@ config MACH_IXP4XX_OF
+ 	select I2C
+ 	select I2C_IOP3XX
+ 	select PCI
+-	select TIMER_OF
+ 	select USE_OF
+ 	help
+ 	  Say 'Y' here to support Device Tree-based IXP4xx platforms.
+diff --git a/arch/arm/mach-s3c/irq-s3c24xx-fiq.S b/arch/arm/mach-s3c/irq-s3c24xx-fiq.S
+index b54cbd0122413..5d238d9a798e1 100644
+--- a/arch/arm/mach-s3c/irq-s3c24xx-fiq.S
++++ b/arch/arm/mach-s3c/irq-s3c24xx-fiq.S
+@@ -35,7 +35,6 @@
+ 	@ and an offset to the irq acknowledgment word
+ 
+ ENTRY(s3c24xx_spi_fiq_rx)
+-s3c24xx_spi_fix_rx:
+ 	.word	fiq_rx_end - fiq_rx_start
+ 	.word	fiq_rx_irq_ack - fiq_rx_start
+ fiq_rx_start:
+@@ -49,7 +48,7 @@ fiq_rx_start:
+ 	strb	fiq_rtmp, [ fiq_rspi, # S3C2410_SPTDAT ]
+ 
+ 	subs	fiq_rcount, fiq_rcount, #1
+-	subnes	pc, lr, #4		@@ return, still have work to do
++	subsne	pc, lr, #4		@@ return, still have work to do
+ 
+ 	@@ set IRQ controller so that next op will trigger IRQ
+ 	mov	fiq_rtmp, #0
+@@ -61,7 +60,6 @@ fiq_rx_irq_ack:
+ fiq_rx_end:
+ 
+ ENTRY(s3c24xx_spi_fiq_txrx)
+-s3c24xx_spi_fiq_txrx:
+ 	.word	fiq_txrx_end - fiq_txrx_start
+ 	.word	fiq_txrx_irq_ack - fiq_txrx_start
+ fiq_txrx_start:
+@@ -76,7 +74,7 @@ fiq_txrx_start:
+ 	strb	fiq_rtmp, [ fiq_rspi, # S3C2410_SPTDAT ]
+ 
+ 	subs	fiq_rcount, fiq_rcount, #1
+-	subnes	pc, lr, #4		@@ return, still have work to do
++	subsne	pc, lr, #4		@@ return, still have work to do
+ 
+ 	mov	fiq_rtmp, #0
+ 	str	fiq_rtmp, [ fiq_rirq, # S3C2410_INTMOD  - S3C24XX_VA_IRQ ]
+@@ -88,7 +86,6 @@ fiq_txrx_irq_ack:
+ fiq_txrx_end:
+ 
+ ENTRY(s3c24xx_spi_fiq_tx)
+-s3c24xx_spi_fix_tx:
+ 	.word	fiq_tx_end - fiq_tx_start
+ 	.word	fiq_tx_irq_ack - fiq_tx_start
+ fiq_tx_start:
+@@ -101,7 +98,7 @@ fiq_tx_start:
+ 	strb	fiq_rtmp, [ fiq_rspi, # S3C2410_SPTDAT ]
+ 
+ 	subs	fiq_rcount, fiq_rcount, #1
+-	subnes	pc, lr, #4		@@ return, still have work to do
++	subsne	pc, lr, #4		@@ return, still have work to do
+ 
+ 	mov	fiq_rtmp, #0
+ 	str	fiq_rtmp, [ fiq_rirq, # S3C2410_INTMOD  - S3C24XX_VA_IRQ ]
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index a6b5b7ef40aea..afe4bc55d4eba 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -520,7 +520,7 @@ config ARM64_ERRATUM_1024718
+ 	help
+ 	  This option adds a workaround for ARM Cortex-A55 Erratum 1024718.
+ 
+-	  Affected Cortex-A55 cores (r0p0, r0p1, r1p0) could cause incorrect
++	  Affected Cortex-A55 cores (all revisions) could cause incorrect
+ 	  update of the hardware dirty bit when the DBM/AP bits are updated
+ 	  without a break-before-make. The workaround is to disable the usage
+ 	  of hardware DBM locally on the affected cores. CPUs not affected by
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts
+index 896f34fd9fc3a..7ae16541d14f5 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts
+@@ -126,8 +126,6 @@
+ };
+ 
+ &ehci0 {
+-	phys = <&usbphy 0>;
+-	phy-names = "usb";
+ 	status = "okay";
+ };
+ 
+@@ -169,6 +167,7 @@
+ 	pinctrl-0 = <&mmc2_pins>, <&mmc2_ds_pin>;
+ 	vmmc-supply = <&reg_dcdc1>;
+ 	vqmmc-supply = <&reg_eldo1>;
++	max-frequency = <200000000>;
+ 	bus-width = <8>;
+ 	non-removable;
+ 	cap-mmc-hw-reset;
+@@ -177,8 +176,6 @@
+ };
+ 
+ &ohci0 {
+-	phys = <&usbphy 0>;
+-	phy-names = "usb";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi
+index c48692b06e1fa..3402cec87035b 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi
+@@ -32,7 +32,6 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&mmc0_pins>;
+ 	vmmc-supply = <&reg_dcdc1>;
+-	non-removable;
+ 	disable-wp;
+ 	bus-width = <4>;
+ 	cd-gpios = <&pio 5 6 GPIO_ACTIVE_LOW>; /* PF6 */
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+index dc238814013cb..7a41015a9ce59 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+@@ -514,7 +514,7 @@
+ 			resets = <&ccu RST_BUS_MMC2>;
+ 			reset-names = "ahb";
+ 			interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>;
+-			max-frequency = <200000000>;
++			max-frequency = <150000000>;
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -593,6 +593,8 @@
+ 				 <&ccu CLK_USB_OHCI0>;
+ 			resets = <&ccu RST_BUS_OHCI0>,
+ 				 <&ccu RST_BUS_EHCI0>;
++			phys = <&usbphy 0>;
++			phy-names = "usb";
+ 			status = "disabled";
+ 		};
+ 
+@@ -603,6 +605,8 @@
+ 			clocks = <&ccu CLK_BUS_OHCI0>,
+ 				 <&ccu CLK_USB_OHCI0>;
+ 			resets = <&ccu RST_BUS_OHCI0>;
++			phys = <&usbphy 0>;
++			phy-names = "usb";
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
+index 28c77d6872f64..4592fb7a6161d 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
+@@ -436,6 +436,7 @@
+ 			interrupts = <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&mmc0_pins>;
++			max-frequency = <150000000>;
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -452,6 +453,7 @@
+ 			interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&mmc1_pins>;
++			max-frequency = <150000000>;
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -468,6 +470,7 @@
+ 			interrupts = <GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&mmc2_pins>;
++			max-frequency = <150000000>;
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -667,6 +670,8 @@
+ 				 <&ccu CLK_USB_OHCI0>;
+ 			resets = <&ccu RST_BUS_OHCI0>,
+ 				 <&ccu RST_BUS_EHCI0>;
++			phys = <&usb2phy 0>;
++			phy-names = "usb";
+ 			status = "disabled";
+ 		};
+ 
+@@ -677,6 +682,8 @@
+ 			clocks = <&ccu CLK_BUS_OHCI0>,
+ 				 <&ccu CLK_USB_OHCI0>;
+ 			resets = <&ccu RST_BUS_OHCI0>;
++			phys = <&usb2phy 0>;
++			phy-names = "usb";
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts
+index 4b517ca720597..06de0b1ce7267 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts
+@@ -89,13 +89,12 @@
+ 	status = "okay";
+ };
+ 
+-&sd_emmc_a {
+-	sd-uhs-sdr50;
+-};
+-
+ &usb {
+ 	phys = <&usb2_phy0>, <&usb2_phy1>;
+ 	phy-names = "usb2-phy0", "usb2-phy1";
+ };
+  */
+ 
++&sd_emmc_a {
++	sd-uhs-sdr50;
++};
+diff --git a/arch/arm64/boot/dts/exynos/exynos5433-tm2-common.dtsi b/arch/arm64/boot/dts/exynos/exynos5433-tm2-common.dtsi
+index 829fea23d4ab1..106397a99da6b 100644
+--- a/arch/arm64/boot/dts/exynos/exynos5433-tm2-common.dtsi
++++ b/arch/arm64/boot/dts/exynos/exynos5433-tm2-common.dtsi
+@@ -389,7 +389,7 @@
+ 	s2mps13-pmic@66 {
+ 		compatible = "samsung,s2mps13-pmic";
+ 		interrupt-parent = <&gpa0>;
+-		interrupts = <7 IRQ_TYPE_NONE>;
++		interrupts = <7 IRQ_TYPE_LEVEL_LOW>;
+ 		reg = <0x66>;
+ 		samsung,s2mps11-wrstbi-ground;
+ 
+diff --git a/arch/arm64/boot/dts/exynos/exynos7-espresso.dts b/arch/arm64/boot/dts/exynos/exynos7-espresso.dts
+index 92fecc539c6c7..358b7b6ea84f1 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7-espresso.dts
++++ b/arch/arm64/boot/dts/exynos/exynos7-espresso.dts
+@@ -90,7 +90,7 @@
+ 	s2mps15_pmic@66 {
+ 		compatible = "samsung,s2mps15-pmic";
+ 		reg = <0x66>;
+-		interrupts = <2 IRQ_TYPE_NONE>;
++		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-parent = <&gpa0>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pmic_irq>;
+diff --git a/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi b/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi
+index e1c0fcba5c206..07c099b4ed5b5 100644
+--- a/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi
++++ b/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi
+@@ -166,7 +166,7 @@
+ 			rx-fifo-depth = <16384>;
+ 			snps,multicast-filter-bins = <256>;
+ 			iommus = <&smmu 2>;
+-			altr,sysmgr-syscon = <&sysmgr 0x48 8>;
++			altr,sysmgr-syscon = <&sysmgr 0x48 0>;
+ 			clocks = <&clkmgr AGILEX_EMAC1_CLK>, <&clkmgr AGILEX_EMAC_PTP_CLK>;
+ 			clock-names = "stmmaceth", "ptp_ref";
+ 			status = "disabled";
+@@ -184,7 +184,7 @@
+ 			rx-fifo-depth = <16384>;
+ 			snps,multicast-filter-bins = <256>;
+ 			iommus = <&smmu 3>;
+-			altr,sysmgr-syscon = <&sysmgr 0x4c 16>;
++			altr,sysmgr-syscon = <&sysmgr 0x4c 0>;
+ 			clocks = <&clkmgr AGILEX_EMAC2_CLK>, <&clkmgr AGILEX_EMAC_PTP_CLK>;
+ 			clock-names = "stmmaceth", "ptp_ref";
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index bf76ebe463794..cca143e4b6bf8 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -204,7 +204,7 @@
+ 			};
+ 
+ 			partition@20000 {
+-				label = "u-boot";
++				label = "a53-firmware";
+ 				reg = <0x20000 0x160000>;
+ 			};
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+index 5b9ec032ce8d8..7c6d871538a63 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+@@ -698,6 +698,8 @@
+ 		clocks = <&pericfg CLK_PERI_MSDC30_1_PD>,
+ 			 <&topckgen CLK_TOP_AXI_SEL>;
+ 		clock-names = "source", "hclk";
++		resets = <&pericfg MT7622_PERI_MSDC1_SW_RST>;
++		reset-names = "hrst";
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi b/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi
+index f7ac4c4033db6..7bf2cb01513e3 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi
+@@ -106,6 +106,9 @@
+ 		interrupt-parent = <&msmgpio>;
+ 		interrupts = <115 IRQ_TYPE_EDGE_RISING>;
+ 
++		vdd-supply = <&pm8916_l17>;
++		vddio-supply = <&pm8916_l5>;
++
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&accel_int_default>;
+ 	};
+@@ -113,6 +116,9 @@
+ 	magnetometer@12 {
+ 		compatible = "bosch,bmc150_magn";
+ 		reg = <0x12>;
++
++		vdd-supply = <&pm8916_l17>;
++		vddio-supply = <&pm8916_l5>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-a5u-eur.dts b/arch/arm64/boot/dts/qcom/msm8916-samsung-a5u-eur.dts
+index e39c04d977c25..dd35c3344358c 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-a5u-eur.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-a5u-eur.dts
+@@ -38,7 +38,7 @@
+ 
+ &pronto {
+ 	iris {
+-		compatible = "qcom,wcn3680";
++		compatible = "qcom,wcn3660b";
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index aaa21899f1a63..0e34ed48b9fae 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -55,7 +55,7 @@
+ 			no-map;
+ 		};
+ 
+-		reserved@8668000 {
++		reserved@86680000 {
+ 			reg = <0x0 0x86680000 0x0 0x80000>;
+ 			no-map;
+ 		};
+@@ -68,7 +68,7 @@
+ 			qcom,client-id = <1>;
+ 		};
+ 
+-		rfsa@867e00000 {
++		rfsa@867e0000 {
+ 			reg = <0x0 0x867e0000 0x0 0x20000>;
+ 			no-map;
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts b/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
+index 1528a865f1f8e..949fee6949e61 100644
+--- a/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
++++ b/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
+@@ -114,7 +114,7 @@
+ 
+ &apps_rsc {
+ 	pm8009-rpmh-regulators {
+-		compatible = "qcom,pm8009-rpmh-regulators";
++		compatible = "qcom,pm8009-1-rpmh-regulators";
+ 		qcom,pmic-id = "f";
+ 
+ 		vdd-s1-supply = <&vph_pwr>;
+@@ -123,6 +123,13 @@
+ 		vdd-l5-l6-supply = <&vreg_bob>;
+ 		vdd-l7-supply = <&vreg_s4a_1p8>;
+ 
++		vreg_s2f_0p95: smps2 {
++			regulator-name = "vreg_s2f_0p95";
++			regulator-min-microvolt = <900000>;
++			regulator-max-microvolt = <952000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_AUTO>;
++		};
++
+ 		vreg_l1f_1p1: ldo1 {
+ 			regulator-name = "vreg_l1f_1p1";
+ 			regulator-min-microvolt = <1104000>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+index c0b93813ea9ac..c4ac6f5dc008d 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+@@ -1114,11 +1114,11 @@
+ 		reg = <0x10>;
+ 
+ 		// CAM0_RST_N
+-		reset-gpios = <&tlmm 9 0>;
++		reset-gpios = <&tlmm 9 GPIO_ACTIVE_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&cam0_default>;
+ 		gpios = <&tlmm 13 0>,
+-			<&tlmm 9 0>;
++			<&tlmm 9 GPIO_ACTIVE_LOW>;
+ 
+ 		clocks = <&clock_camcc CAM_CC_MCLK0_CLK>;
+ 		clock-names = "xvclk";
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+index 66c9153b31015..597388f871272 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+@@ -150,7 +150,7 @@
+ 		regulator-name = "audio-1.8V";
+ 		regulator-min-microvolt = <1800000>;
+ 		regulator-max-microvolt = <1800000>;
+-		gpio = <&gpio_exp2 7 GPIO_ACTIVE_HIGH>;
++		gpio = <&gpio_exp4 1 GPIO_ACTIVE_HIGH>;
+ 		enable-active-high;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
+index 97272f5fa0abf..289cf711307d6 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
+@@ -88,7 +88,6 @@
+ 	pinctrl-names = "default";
+ 	uart-has-rtscts;
+ 	status = "okay";
+-	max-speed = <4000000>;
+ 
+ 	bluetooth {
+ 		compatible = "brcm,bcm43438-bt";
+@@ -97,6 +96,7 @@
+ 		device-wakeup-gpios = <&pca9654 5 GPIO_ACTIVE_HIGH>;
+ 		clocks = <&osc_32k>;
+ 		clock-names = "extclk";
++		max-speed = <4000000>;
+ 	};
+ };
+ 
+@@ -147,7 +147,7 @@
+ 	};
+ 
+ 	eeprom@50 {
+-		compatible = "microchip,at24c64", "atmel,24c64";
++		compatible = "microchip,24c64", "atmel,24c64";
+ 		pagesize = <32>;
+ 		read-only;	/* Manufacturing EEPROM programmed at factory */
+ 		reg = <0x50>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index db0d5c8e5f96a..93c734d8a46c2 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -928,6 +928,7 @@
+ 		phy-mode = "rmii";
+ 		phy-handle = <&phy>;
+ 		snps,txpbl = <0x4>;
++		clock_in_out = "output";
+ 		status = "disabled";
+ 
+ 		mdio {
+diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
+index 395bbf64b2abb..53c92e060c3dd 100644
+--- a/arch/arm64/crypto/aes-glue.c
++++ b/arch/arm64/crypto/aes-glue.c
+@@ -55,7 +55,7 @@ MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions");
+ #define aes_mac_update		neon_aes_mac_update
+ MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 NEON");
+ #endif
+-#if defined(USE_V8_CRYPTO_EXTENSIONS) || !defined(CONFIG_CRYPTO_AES_ARM64_BS)
++#if defined(USE_V8_CRYPTO_EXTENSIONS) || !IS_ENABLED(CONFIG_CRYPTO_AES_ARM64_BS)
+ MODULE_ALIAS_CRYPTO("ecb(aes)");
+ MODULE_ALIAS_CRYPTO("cbc(aes)");
+ MODULE_ALIAS_CRYPTO("ctr(aes)");
+@@ -650,7 +650,7 @@ static int __maybe_unused xts_decrypt(struct skcipher_request *req)
+ }
+ 
+ static struct skcipher_alg aes_algs[] = { {
+-#if defined(USE_V8_CRYPTO_EXTENSIONS) || !defined(CONFIG_CRYPTO_AES_ARM64_BS)
++#if defined(USE_V8_CRYPTO_EXTENSIONS) || !IS_ENABLED(CONFIG_CRYPTO_AES_ARM64_BS)
+ 	.base = {
+ 		.cra_name		= "__ecb(aes)",
+ 		.cra_driver_name	= "__ecb-aes-" MODE,
+diff --git a/arch/arm64/crypto/sha1-ce-glue.c b/arch/arm64/crypto/sha1-ce-glue.c
+index c63b99211db3d..8baf8d1846b64 100644
+--- a/arch/arm64/crypto/sha1-ce-glue.c
++++ b/arch/arm64/crypto/sha1-ce-glue.c
+@@ -19,6 +19,7 @@
+ MODULE_DESCRIPTION("SHA1 secure hash using ARMv8 Crypto Extensions");
+ MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
+ MODULE_LICENSE("GPL v2");
++MODULE_ALIAS_CRYPTO("sha1");
+ 
+ struct sha1_ce_state {
+ 	struct sha1_state	sst;
+diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
+index 5e956d7582a56..d33d3ee92cc98 100644
+--- a/arch/arm64/crypto/sha2-ce-glue.c
++++ b/arch/arm64/crypto/sha2-ce-glue.c
+@@ -19,6 +19,8 @@
+ MODULE_DESCRIPTION("SHA-224/SHA-256 secure hash using ARMv8 Crypto Extensions");
+ MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
+ MODULE_LICENSE("GPL v2");
++MODULE_ALIAS_CRYPTO("sha224");
++MODULE_ALIAS_CRYPTO("sha256");
+ 
+ struct sha256_ce_state {
+ 	struct sha256_state	sst;
+diff --git a/arch/arm64/crypto/sha3-ce-glue.c b/arch/arm64/crypto/sha3-ce-glue.c
+index 9a4bbfc45f407..ddf7aca9ff459 100644
+--- a/arch/arm64/crypto/sha3-ce-glue.c
++++ b/arch/arm64/crypto/sha3-ce-glue.c
+@@ -23,6 +23,10 @@
+ MODULE_DESCRIPTION("SHA3 secure hash using ARMv8 Crypto Extensions");
+ MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
+ MODULE_LICENSE("GPL v2");
++MODULE_ALIAS_CRYPTO("sha3-224");
++MODULE_ALIAS_CRYPTO("sha3-256");
++MODULE_ALIAS_CRYPTO("sha3-384");
++MODULE_ALIAS_CRYPTO("sha3-512");
+ 
+ asmlinkage void sha3_ce_transform(u64 *st, const u8 *data, int blocks,
+ 				  int md_len);
+diff --git a/arch/arm64/crypto/sha512-ce-glue.c b/arch/arm64/crypto/sha512-ce-glue.c
+index dc890a719f54c..57c6f086dfb4c 100644
+--- a/arch/arm64/crypto/sha512-ce-glue.c
++++ b/arch/arm64/crypto/sha512-ce-glue.c
+@@ -23,6 +23,8 @@
+ MODULE_DESCRIPTION("SHA-384/SHA-512 secure hash using ARMv8 Crypto Extensions");
+ MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
+ MODULE_LICENSE("GPL v2");
++MODULE_ALIAS_CRYPTO("sha384");
++MODULE_ALIAS_CRYPTO("sha512");
+ 
+ asmlinkage void sha512_ce_transform(struct sha512_state *sst, u8 const *src,
+ 				    int blocks);
+diff --git a/arch/arm64/include/asm/module.lds.h b/arch/arm64/include/asm/module.lds.h
+index 691f15af788e4..810045628c66e 100644
+--- a/arch/arm64/include/asm/module.lds.h
++++ b/arch/arm64/include/asm/module.lds.h
+@@ -1,7 +1,7 @@
+ #ifdef CONFIG_ARM64_MODULE_PLTS
+ SECTIONS {
+-	.plt (NOLOAD) : { BYTE(0) }
+-	.init.plt (NOLOAD) : { BYTE(0) }
+-	.text.ftrace_trampoline (NOLOAD) : { BYTE(0) }
++	.plt 0 (NOLOAD) : { BYTE(0) }
++	.init.plt 0 (NOLOAD) : { BYTE(0) }
++	.text.ftrace_trampoline 0 (NOLOAD) : { BYTE(0) }
+ }
+ #endif
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 65a522fbd8743..7da9a7cee4cef 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1457,7 +1457,7 @@ static bool cpu_has_broken_dbm(void)
+ 	/* List of CPUs which have broken DBM support. */
+ 	static const struct midr_range cpus[] = {
+ #ifdef CONFIG_ARM64_ERRATUM_1024718
+-		MIDR_RANGE(MIDR_CORTEX_A55, 0, 0, 1, 0),  // A55 r0p0 -r1p0
++		MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+ 		/* Kryo4xx Silver (rdpe => r1p0) */
+ 		MIDR_REV(MIDR_QCOM_KRYO_4XX_SILVER, 0xd, 0xe),
+ #endif
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index d8d9caf02834e..e7550a5289fef 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -985,6 +985,7 @@ SYM_FUNC_START_LOCAL(__primary_switch)
+ 
+ 	tlbi	vmalle1				// Remove any stale TLB entries
+ 	dsb	nsh
++	isb
+ 
+ 	msr	sctlr_el1, x19			// re-enable the MMU
+ 	isb
+diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
+index 03210f6447900..0cde47a63bebf 100644
+--- a/arch/arm64/kernel/machine_kexec_file.c
++++ b/arch/arm64/kernel/machine_kexec_file.c
+@@ -182,8 +182,10 @@ static int create_dtb(struct kimage *image,
+ 
+ 		/* duplicate a device tree blob */
+ 		ret = fdt_open_into(initial_boot_params, buf, buf_size);
+-		if (ret)
++		if (ret) {
++			vfree(buf);
+ 			return -EINVAL;
++		}
+ 
+ 		ret = setup_dtb(image, initrd_load_addr, initrd_len,
+ 				cmdline, buf);
+diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
+index a412d8edbcd24..2c247634552b1 100644
+--- a/arch/arm64/kernel/probes/uprobes.c
++++ b/arch/arm64/kernel/probes/uprobes.c
+@@ -38,7 +38,7 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
+ 
+ 	/* TODO: Currently we do not support AARCH32 instruction probing */
+ 	if (mm->context.flags & MMCF_AARCH32)
+-		return -ENOTSUPP;
++		return -EOPNOTSUPP;
+ 	else if (!IS_ALIGNED(addr, AARCH64_INSN_SIZE))
+ 		return -EINVAL;
+ 
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index f49b349e16a34..66256603bd596 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -1799,7 +1799,7 @@ int syscall_trace_enter(struct pt_regs *regs)
+ 
+ 	if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
+ 		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
+-		if (!in_syscall(regs) || (flags & _TIF_SYSCALL_EMU))
++		if (flags & _TIF_SYSCALL_EMU)
+ 			return NO_SYSCALL;
+ 	}
+ 
+diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
+index 96cd347c7a465..9f8cdeccd1ba9 100644
+--- a/arch/arm64/kernel/suspend.c
++++ b/arch/arm64/kernel/suspend.c
+@@ -120,7 +120,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
+ 		if (!ret)
+ 			ret = -EOPNOTSUPP;
+ 	} else {
+-		__cpu_suspend_exit();
++		RCU_NONIDLE(__cpu_suspend_exit());
+ 	}
+ 
+ 	unpause_graph_tracing();
+diff --git a/arch/csky/kernel/ptrace.c b/arch/csky/kernel/ptrace.c
+index d822144906ac1..a4cf2e2ac15ac 100644
+--- a/arch/csky/kernel/ptrace.c
++++ b/arch/csky/kernel/ptrace.c
+@@ -83,7 +83,7 @@ static int gpr_get(struct task_struct *target,
+ 	/* Abiv1 regs->tls is fake and we need sync here. */
+ 	regs->tls = task_thread_info(target)->tp_value;
+ 
+-	return membuf_write(&to, regs, sizeof(regs));
++	return membuf_write(&to, regs, sizeof(*regs));
+ }
+ 
+ static int gpr_set(struct task_struct *target,
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index 0d0f29d662c9a..686990fcc5f0f 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -136,6 +136,25 @@ cflags-$(CONFIG_SB1XXX_CORELIS)	+= $(call cc-option,-mno-sched-prolog) \
+ #
+ cflags-y += -fno-stack-check
+ 
++# binutils from v2.35 when built with --enable-mips-fix-loongson3-llsc=yes,
++# supports an -mfix-loongson3-llsc flag which emits a sync prior to each ll
++# instruction to work around a CPU bug (see __SYNC_loongson3_war in asm/sync.h
++# for a description).
++#
++# We disable this in order to prevent the assembler meddling with the
++# instruction that labels refer to, ie. if we label an ll instruction:
++#
++# 1: ll v0, 0(a0)
++#
++# ...then with the assembler fix applied the label may actually point at a sync
++# instruction inserted by the assembler, and if we were using the label in an
++# exception table the table would no longer contain the address of the ll
++# instruction.
++#
++# Avoid this by explicitly disabling that assembler behaviour.
++#
++cflags-y += $(call as-option,-Wa$(comma)-mno-fix-loongson3-llsc,)
++
+ #
+ # CPU-dependent compiler/assembler options for optimization.
+ #
+diff --git a/arch/mips/cavium-octeon/setup.c b/arch/mips/cavium-octeon/setup.c
+index 561389d3fadb2..b329cdb6134d2 100644
+--- a/arch/mips/cavium-octeon/setup.c
++++ b/arch/mips/cavium-octeon/setup.c
+@@ -1158,12 +1158,15 @@ void __init device_tree_init(void)
+ 	bool do_prune;
+ 	bool fill_mac;
+ 
+-	if (fw_passed_dtb) {
+-		fdt = (void *)fw_passed_dtb;
++#ifdef CONFIG_MIPS_ELF_APPENDED_DTB
++	if (!fdt_check_header(&__appended_dtb)) {
++		fdt = &__appended_dtb;
+ 		do_prune = false;
+ 		fill_mac = true;
+ 		pr_info("Using appended Device Tree.\n");
+-	} else if (octeon_bootinfo->minor_version >= 3 && octeon_bootinfo->fdt_addr) {
++	} else
++#endif
++	if (octeon_bootinfo->minor_version >= 3 && octeon_bootinfo->fdt_addr) {
+ 		fdt = phys_to_virt(octeon_bootinfo->fdt_addr);
+ 		if (fdt_check_header(fdt))
+ 			panic("Corrupt Device Tree passed to kernel.");
+diff --git a/arch/mips/include/asm/asm.h b/arch/mips/include/asm/asm.h
+index 3682d1a0bb808..ea4b62ece3366 100644
+--- a/arch/mips/include/asm/asm.h
++++ b/arch/mips/include/asm/asm.h
+@@ -20,10 +20,27 @@
+ #include <asm/sgidefs.h>
+ #include <asm/asm-eva.h>
+ 
++#ifndef __VDSO__
++/*
++ * Emit CFI data in .debug_frame sections, not .eh_frame sections.
++ * We don't do DWARF unwinding at runtime, so only the offline DWARF
++ * information is useful to anyone. Note we should change this if we
++ * ever decide to enable DWARF unwinding at runtime.
++ */
++#define CFI_SECTIONS	.cfi_sections .debug_frame
++#else
++ /*
++  * For the vDSO, emit both runtime unwind information and debug
++  * symbols for the .dbg file.
++  */
++#define CFI_SECTIONS
++#endif
++
+ /*
+  * LEAF - declare leaf routine
+  */
+ #define LEAF(symbol)					\
++		CFI_SECTIONS;				\
+ 		.globl	symbol;				\
+ 		.align	2;				\
+ 		.type	symbol, @function;		\
+@@ -36,6 +53,7 @@ symbol:		.frame	sp, 0, ra;			\
+  * NESTED - declare nested routine entry point
+  */
+ #define NESTED(symbol, framesize, rpc)			\
++		CFI_SECTIONS;				\
+ 		.globl	symbol;				\
+ 		.align	2;				\
+ 		.type	symbol, @function;		\
+diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
+index f904084fcb1fd..27ad767915390 100644
+--- a/arch/mips/include/asm/atomic.h
++++ b/arch/mips/include/asm/atomic.h
+@@ -248,7 +248,7 @@ static __inline__ int pfx##_sub_if_positive(type i, pfx##_t * v)	\
+ 	 * bltz that can branch	to code outside of the LL/SC loop. As	\
+ 	 * such, we don't need to emit another barrier here.		\
+ 	 */								\
+-	if (!__SYNC_loongson3_war)					\
++	if (__SYNC_loongson3_war == 0)					\
+ 		smp_mb__after_atomic();					\
+ 									\
+ 	return result;							\
+diff --git a/arch/mips/include/asm/cmpxchg.h b/arch/mips/include/asm/cmpxchg.h
+index 5b0b3a6777ea5..ed8f3f3c4304a 100644
+--- a/arch/mips/include/asm/cmpxchg.h
++++ b/arch/mips/include/asm/cmpxchg.h
+@@ -99,7 +99,7 @@ unsigned long __xchg(volatile void *ptr, unsigned long x, int size)
+ 	 * contains a completion barrier prior to the LL, so we don't	\
+ 	 * need to emit an extra one here.				\
+ 	 */								\
+-	if (!__SYNC_loongson3_war)					\
++	if (__SYNC_loongson3_war == 0)					\
+ 		smp_mb__before_llsc();					\
+ 									\
+ 	__res = (__typeof__(*(ptr)))					\
+@@ -191,7 +191,7 @@ unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
+ 	 * contains a completion barrier prior to the LL, so we don't	\
+ 	 * need to emit an extra one here.				\
+ 	 */								\
+-	if (!__SYNC_loongson3_war)					\
++	if (__SYNC_loongson3_war == 0)					\
+ 		smp_mb__before_llsc();					\
+ 									\
+ 	__res = cmpxchg_local((ptr), (old), (new));			\
+@@ -201,7 +201,7 @@ unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
+ 	 * contains a completion barrier after the SC, so we don't	\
+ 	 * need to emit an extra one here.				\
+ 	 */								\
+-	if (!__SYNC_loongson3_war)					\
++	if (__SYNC_loongson3_war == 0)					\
+ 		smp_llsc_mb();						\
+ 									\
+ 	__res;								\
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index e6853697a0561..31cb9199197ca 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1830,16 +1830,17 @@ static inline void cpu_probe_ingenic(struct cpuinfo_mips *c, unsigned int cpu)
+ 		 */
+ 		case PRID_COMP_INGENIC_D0:
+ 			c->isa_level &= ~MIPS_CPU_ISA_M32R2;
+-			break;
++			fallthrough;
+ 
+ 		/*
+ 		 * The config0 register in the XBurst CPUs with a processor ID of
+-		 * PRID_COMP_INGENIC_D1 has an abandoned huge page tlb mode, this
+-		 * mode is not compatible with the MIPS standard, it will cause
+-		 * tlbmiss and into an infinite loop (line 21 in the tlb-funcs.S)
+-		 * when starting the init process. After chip reset, the default
+-		 * is HPTLB mode, Write 0xa9000000 to cp0 register 5 sel 4 to
+-		 * switch back to VTLB mode to prevent getting stuck.
++		 * PRID_COMP_INGENIC_D0 or PRID_COMP_INGENIC_D1 has an abandoned
++		 * huge page tlb mode, this mode is not compatible with the MIPS
++		 * standard, it will cause tlbmiss and into an infinite loop
++		 * (line 21 in the tlb-funcs.S) when starting the init process.
++		 * After chip reset, the default is HPTLB mode, Write 0xa9000000
++		 * to cp0 register 5 sel 4 to switch back to VTLB mode to prevent
++		 * getting stuck.
+ 		 */
+ 		case PRID_COMP_INGENIC_D1:
+ 			write_c0_page_ctrl(XBURST_PAGECTRL_HPTLB_DIS);
+diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S
+index 5e97e9d02f98d..09fa4705ce8eb 100644
+--- a/arch/mips/kernel/vmlinux.lds.S
++++ b/arch/mips/kernel/vmlinux.lds.S
+@@ -90,6 +90,7 @@ SECTIONS
+ 
+ 		INIT_TASK_DATA(THREAD_SIZE)
+ 		NOSAVE_DATA
++		PAGE_ALIGNED_DATA(PAGE_SIZE)
+ 		CACHELINE_ALIGNED_DATA(1 << CONFIG_MIPS_L1_CACHE_SHIFT)
+ 		READ_MOSTLY_DATA(1 << CONFIG_MIPS_L1_CACHE_SHIFT)
+ 		DATA_DATA
+@@ -223,6 +224,5 @@ SECTIONS
+ 		*(.options)
+ 		*(.pdr)
+ 		*(.reginfo)
+-		*(.eh_frame)
+ 	}
+ }
+diff --git a/arch/mips/lantiq/irq.c b/arch/mips/lantiq/irq.c
+index df8eed3875f6d..43c2f271e6ab4 100644
+--- a/arch/mips/lantiq/irq.c
++++ b/arch/mips/lantiq/irq.c
+@@ -302,7 +302,7 @@ static void ltq_hw_irq_handler(struct irq_desc *desc)
+ 	generic_handle_irq(irq_linear_revmap(ltq_domain, hwirq));
+ 
+ 	/* if this is a EBU irq, we need to ack it or get a deadlock */
+-	if ((irq == LTQ_ICU_EBU_IRQ) && (module == 0) && LTQ_EBU_PCC_ISTAT)
++	if (irq == LTQ_ICU_EBU_IRQ && !module && LTQ_EBU_PCC_ISTAT != 0)
+ 		ltq_ebu_w32(ltq_ebu_r32(LTQ_EBU_PCC_ISTAT) | 0x10,
+ 			LTQ_EBU_PCC_ISTAT);
+ }
+diff --git a/arch/mips/loongson64/Platform b/arch/mips/loongson64/Platform
+index ec42c5085905c..e2354e128d9a0 100644
+--- a/arch/mips/loongson64/Platform
++++ b/arch/mips/loongson64/Platform
+@@ -5,28 +5,6 @@
+ 
+ cflags-$(CONFIG_CPU_LOONGSON64)	+= -Wa,--trap
+ 
+-#
+-# Some versions of binutils, not currently mainline as of 2019/02/04, support
+-# an -mfix-loongson3-llsc flag which emits a sync prior to each ll instruction
+-# to work around a CPU bug (see __SYNC_loongson3_war in asm/sync.h for a
+-# description).
+-#
+-# We disable this in order to prevent the assembler meddling with the
+-# instruction that labels refer to, ie. if we label an ll instruction:
+-#
+-# 1: ll v0, 0(a0)
+-#
+-# ...then with the assembler fix applied the label may actually point at a sync
+-# instruction inserted by the assembler, and if we were using the label in an
+-# exception table the table would no longer contain the address of the ll
+-# instruction.
+-#
+-# Avoid this by explicitly disabling that assembler behaviour. If upstream
+-# binutils does not merge support for the flag then we can revisit & remove
+-# this later - for now it ensures vendor toolchains don't cause problems.
+-#
+-cflags-$(CONFIG_CPU_LOONGSON64)	+= $(call as-option,-Wa$(comma)-mno-fix-loongson3-llsc,)
+-
+ #
+ # binutils from v2.25 on and gcc starting from v4.9.0 treat -march=loongson3a
+ # as MIPS64 R2; older versions as just R1.  This leaves the possibility open
+diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
+index c9644c38ec28f..96adc3d23bd2d 100644
+--- a/arch/mips/mm/c-r4k.c
++++ b/arch/mips/mm/c-r4k.c
+@@ -1593,7 +1593,7 @@ static int probe_scache(void)
+ 	return 1;
+ }
+ 
+-static void __init loongson2_sc_init(void)
++static void loongson2_sc_init(void)
+ {
+ 	struct cpuinfo_mips *c = &current_cpu_data;
+ 
+diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
+index 5810cc12bc1d9..2131d3fd73333 100644
+--- a/arch/mips/vdso/Makefile
++++ b/arch/mips/vdso/Makefile
+@@ -16,16 +16,13 @@ ccflags-vdso := \
+ 	$(filter -march=%,$(KBUILD_CFLAGS)) \
+ 	$(filter -m%-float,$(KBUILD_CFLAGS)) \
+ 	$(filter -mno-loongson-%,$(KBUILD_CFLAGS)) \
++	$(CLANG_FLAGS) \
+ 	-D__VDSO__
+ 
+ ifndef CONFIG_64BIT
+ ccflags-vdso += -DBUILD_VDSO32
+ endif
+ 
+-ifdef CONFIG_CC_IS_CLANG
+-ccflags-vdso += $(filter --target=%,$(KBUILD_CFLAGS))
+-endif
+-
+ #
+ # The -fno-jump-tables flag only prevents the compiler from generating
+ # jump tables but does not prevent the compiler from emitting absolute
+diff --git a/arch/nios2/kernel/entry.S b/arch/nios2/kernel/entry.S
+index da8442450e460..0794cd7803dfe 100644
+--- a/arch/nios2/kernel/entry.S
++++ b/arch/nios2/kernel/entry.S
+@@ -389,7 +389,10 @@ ENTRY(ret_from_interrupt)
+  */
+ ENTRY(sys_clone)
+ 	SAVE_SWITCH_STACK
++	subi    sp, sp, 4 /* make space for tls pointer */
++	stw     r8, 0(sp) /* pass tls pointer (r8) via stack (5th argument) */
+ 	call	nios2_clone
++	addi    sp, sp, 4
+ 	RESTORE_SWITCH_STACK
+ 	ret
+ 
+diff --git a/arch/nios2/kernel/sys_nios2.c b/arch/nios2/kernel/sys_nios2.c
+index cd390ec4f88bf..b1ca856999521 100644
+--- a/arch/nios2/kernel/sys_nios2.c
++++ b/arch/nios2/kernel/sys_nios2.c
+@@ -22,6 +22,7 @@ asmlinkage int sys_cacheflush(unsigned long addr, unsigned long len,
+ 				unsigned int op)
+ {
+ 	struct vm_area_struct *vma;
++	struct mm_struct *mm = current->mm;
+ 
+ 	if (len == 0)
+ 		return 0;
+@@ -34,16 +35,22 @@ asmlinkage int sys_cacheflush(unsigned long addr, unsigned long len,
+ 	if (addr + len < addr)
+ 		return -EFAULT;
+ 
++	if (mmap_read_lock_killable(mm))
++		return -EINTR;
++
+ 	/*
+ 	 * Verify that the specified address region actually belongs
+ 	 * to this process.
+ 	 */
+-	vma = find_vma(current->mm, addr);
+-	if (vma == NULL || addr < vma->vm_start || addr + len > vma->vm_end)
++	vma = find_vma(mm, addr);
++	if (vma == NULL || addr < vma->vm_start || addr + len > vma->vm_end) {
++		mmap_read_unlock(mm);
+ 		return -EFAULT;
++	}
+ 
+ 	flush_cache_range(vma, addr, addr + len);
+ 
++	mmap_read_unlock(mm);
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 5181872f94523..31ed8083571ff 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -761,7 +761,7 @@ config PPC_64K_PAGES
+ 
+ config PPC_256K_PAGES
+ 	bool "256k page size"
+-	depends on 44x && !STDBINUTILS
++	depends on 44x && !STDBINUTILS && !PPC_47x
+ 	help
+ 	  Make the page size 256k.
+ 
+diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
+index 55d6ede30c19a..9ab344d29a545 100644
+--- a/arch/powerpc/include/asm/kexec.h
++++ b/arch/powerpc/include/asm/kexec.h
+@@ -136,6 +136,7 @@ int load_crashdump_segments_ppc64(struct kimage *image,
+ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
+ 			  const void *fdt, unsigned long kernel_load_addr,
+ 			  unsigned long fdt_load_addr);
++unsigned int kexec_fdt_totalsize_ppc64(struct kimage *image);
+ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
+ 			unsigned long initrd_load_addr,
+ 			unsigned long initrd_len, const char *cmdline);
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index 501c9a79038c0..f53bfefb4a577 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -216,8 +216,6 @@ do {								\
+ #define __put_user_nocheck_goto(x, ptr, size, label)		\
+ do {								\
+ 	__typeof__(*(ptr)) __user *__pu_addr = (ptr);		\
+-	if (!is_kernel_addr((unsigned long)__pu_addr))		\
+-		might_fault();					\
+ 	__chk_user_ptr(ptr);					\
+ 	__put_user_size_goto((x), __pu_addr, (size), label);	\
+ } while (0)
+@@ -313,7 +311,7 @@ do {								\
+ 	__typeof__(size) __gu_size = (size);			\
+ 								\
+ 	__chk_user_ptr(__gu_addr);				\
+-	if (!is_kernel_addr((unsigned long)__gu_addr))		\
++	if (do_allow && !is_kernel_addr((unsigned long)__gu_addr)) \
+ 		might_fault();					\
+ 	barrier_nospec();					\
+ 	if (do_allow)								\
+@@ -508,6 +506,9 @@ static __must_check inline bool user_access_begin(const void __user *ptr, size_t
+ {
+ 	if (unlikely(!access_ok(ptr, len)))
+ 		return false;
++
++	might_fault();
++
+ 	allow_read_write_user((void __user *)ptr, ptr, len);
+ 	return true;
+ }
+@@ -521,6 +522,9 @@ user_read_access_begin(const void __user *ptr, size_t len)
+ {
+ 	if (unlikely(!access_ok(ptr, len)))
+ 		return false;
++
++	might_fault();
++
+ 	allow_read_from_user(ptr, len);
+ 	return true;
+ }
+@@ -532,6 +536,9 @@ user_write_access_begin(const void __user *ptr, size_t len)
+ {
+ 	if (unlikely(!access_ok(ptr, len)))
+ 		return false;
++
++	might_fault();
++
+ 	allow_write_to_user((void __user *)ptr, len);
+ 	return true;
+ }
+diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
+index 8cdc8bcde7038..459f5d00b9904 100644
+--- a/arch/powerpc/kernel/entry_32.S
++++ b/arch/powerpc/kernel/entry_32.S
+@@ -347,6 +347,9 @@ trace_syscall_entry_irq_off:
+ 
+ 	.globl	transfer_to_syscall
+ transfer_to_syscall:
++#ifdef CONFIG_PPC_BOOK3S_32
++	kuep_lock r11, r12
++#endif
+ #ifdef CONFIG_TRACE_IRQFLAGS
+ 	andi.	r12,r9,MSR_EE
+ 	beq-	trace_syscall_entry_irq_off
+diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
+index c88e66adecb52..fef0b34a77c9d 100644
+--- a/arch/powerpc/kernel/head_32.h
++++ b/arch/powerpc/kernel/head_32.h
+@@ -56,7 +56,7 @@
+ 1:
+ 	tophys_novmstack r11, r11
+ #ifdef CONFIG_VMAP_STACK
+-	mtcrf	0x7f, r1
++	mtcrf	0x3f, r1
+ 	bt	32 - THREAD_ALIGN_SHIFT, stack_overflow
+ #endif
+ .endm
+diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
+index ee0bfebc375f2..ce5fd93499a74 100644
+--- a/arch/powerpc/kernel/head_8xx.S
++++ b/arch/powerpc/kernel/head_8xx.S
+@@ -175,7 +175,7 @@ SystemCall:
+ /* On the MPC8xx, this is a software emulation interrupt.  It occurs
+  * for all unimplemented and illegal instructions.
+  */
+-	EXCEPTION(0x1000, SoftEmu, program_check_exception, EXC_XFER_STD)
++	EXCEPTION(0x1000, SoftEmu, emulation_assist_interrupt, EXC_XFER_STD)
+ 
+ 	. = 0x1100
+ /*
+diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
+index d66da35f2e8d3..2729d8fa6e77c 100644
+--- a/arch/powerpc/kernel/head_book3s_32.S
++++ b/arch/powerpc/kernel/head_book3s_32.S
+@@ -280,12 +280,6 @@ MachineCheck:
+ 7:	EXCEPTION_PROLOG_2
+ 	addi	r3,r1,STACK_FRAME_OVERHEAD
+ #ifdef CONFIG_PPC_CHRP
+-#ifdef CONFIG_VMAP_STACK
+-	mfspr	r4, SPRN_SPRG_THREAD
+-	tovirt(r4, r4)
+-	lwz	r4, RTAS_SP(r4)
+-	cmpwi	cr1, r4, 0
+-#endif
+ 	beq	cr1, machine_check_tramp
+ 	twi	31, 0, 0
+ #else
+diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
+index cc7a6271b6b4e..e8a548447dd68 100644
+--- a/arch/powerpc/kernel/irq.c
++++ b/arch/powerpc/kernel/irq.c
+@@ -269,6 +269,31 @@ again:
+ 	}
+ }
+ 
++#if defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_PPC_KUAP)
++static inline void replay_soft_interrupts_irqrestore(void)
++{
++	unsigned long kuap_state = get_kuap();
++
++	/*
++	 * Check if anything calls local_irq_enable/restore() when KUAP is
++	 * disabled (user access enabled). We handle that case here by saving
++	 * and re-locking AMR but we shouldn't get here in the first place,
++	 * hence the warning.
++	 */
++	kuap_check_amr();
++
++	if (kuap_state != AMR_KUAP_BLOCKED)
++		set_kuap(AMR_KUAP_BLOCKED);
++
++	replay_soft_interrupts();
++
++	if (kuap_state != AMR_KUAP_BLOCKED)
++		set_kuap(kuap_state);
++}
++#else
++#define replay_soft_interrupts_irqrestore() replay_soft_interrupts()
++#endif
++
+ notrace void arch_local_irq_restore(unsigned long mask)
+ {
+ 	unsigned char irq_happened;
+@@ -332,7 +357,7 @@ notrace void arch_local_irq_restore(unsigned long mask)
+ 	irq_soft_mask_set(IRQS_ALL_DISABLED);
+ 	trace_hardirqs_off();
+ 
+-	replay_soft_interrupts();
++	replay_soft_interrupts_irqrestore();
+ 	local_paca->irq_happened = 0;
+ 
+ 	trace_hardirqs_on();
+diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
+index 38ae5933d9174..7e337c570ea6b 100644
+--- a/arch/powerpc/kernel/prom_init.c
++++ b/arch/powerpc/kernel/prom_init.c
+@@ -1330,14 +1330,10 @@ static void __init prom_check_platform_support(void)
+ 		if (prop_len > sizeof(vec))
+ 			prom_printf("WARNING: ibm,arch-vec-5-platform-support longer than expected (len: %d)\n",
+ 				    prop_len);
+-		prom_getprop(prom.chosen, "ibm,arch-vec-5-platform-support",
+-			     &vec, sizeof(vec));
+-		for (i = 0; i < sizeof(vec); i += 2) {
+-			prom_debug("%d: index = 0x%x val = 0x%x\n", i / 2
+-								  , vec[i]
+-								  , vec[i + 1]);
+-			prom_parse_platform_support(vec[i], vec[i + 1],
+-						    &supported);
++		prom_getprop(prom.chosen, "ibm,arch-vec-5-platform-support", &vec, sizeof(vec));
++		for (i = 0; i < prop_len; i += 2) {
++			prom_debug("%d: index = 0x%x val = 0x%x\n", i / 2, vec[i], vec[i + 1]);
++			prom_parse_platform_support(vec[i], vec[i + 1], &supported);
+ 		}
+ 	}
+ 
+diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
+index 7d372ff3504b2..1d20f0f77a920 100644
+--- a/arch/powerpc/kernel/time.c
++++ b/arch/powerpc/kernel/time.c
+@@ -53,6 +53,7 @@
+ #include <linux/of_clk.h>
+ #include <linux/suspend.h>
+ #include <linux/sched/cputime.h>
++#include <linux/sched/clock.h>
+ #include <linux/processor.h>
+ #include <asm/trace.h>
+ 
+@@ -1095,6 +1096,7 @@ void __init time_init(void)
+ 	tick_setup_hrtimer_broadcast();
+ 
+ 	of_clk_init(NULL);
++	enable_sched_clock_irqtime();
+ }
+ 
+ /*
+diff --git a/arch/powerpc/kexec/elf_64.c b/arch/powerpc/kexec/elf_64.c
+index d0e459bb2f05a..9842e33533df1 100644
+--- a/arch/powerpc/kexec/elf_64.c
++++ b/arch/powerpc/kexec/elf_64.c
+@@ -102,7 +102,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,
+ 		pr_debug("Loaded initrd at 0x%lx\n", initrd_load_addr);
+ 	}
+ 
+-	fdt_size = fdt_totalsize(initial_boot_params) * 2;
++	fdt_size = kexec_fdt_totalsize_ppc64(image);
+ 	fdt = kmalloc(fdt_size, GFP_KERNEL);
+ 	if (!fdt) {
+ 		pr_err("Not enough memory for the device tree.\n");
+diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
+index c69bcf9b547a8..02b9e4d0dc40b 100644
+--- a/arch/powerpc/kexec/file_load_64.c
++++ b/arch/powerpc/kexec/file_load_64.c
+@@ -21,6 +21,7 @@
+ #include <linux/memblock.h>
+ #include <linux/slab.h>
+ #include <linux/vmalloc.h>
++#include <asm/setup.h>
+ #include <asm/drmem.h>
+ #include <asm/kexec_ranges.h>
+ #include <asm/crashdump-ppc64.h>
+@@ -925,6 +926,40 @@ out:
+ 	return ret;
+ }
+ 
++/**
++ * kexec_fdt_totalsize_ppc64 - Return the estimated size needed to setup FDT
++ *                             for kexec/kdump kernel.
++ * @image:                     kexec image being loaded.
++ *
++ * Returns the estimated size needed for kexec/kdump kernel FDT.
++ */
++unsigned int kexec_fdt_totalsize_ppc64(struct kimage *image)
++{
++	unsigned int fdt_size;
++	u64 usm_entries;
++
++	/*
++	 * The below estimate more than accounts for a typical kexec case where
++	 * the additional space is to accommodate things like kexec cmdline,
++	 * chosen node with properties for initrd start & end addresses and
++	 * a property to indicate kexec boot..
++	 */
++	fdt_size = fdt_totalsize(initial_boot_params) + (2 * COMMAND_LINE_SIZE);
++	if (image->type != KEXEC_TYPE_CRASH)
++		return fdt_size;
++
++	/*
++	 * For kdump kernel, also account for linux,usable-memory and
++	 * linux,drconf-usable-memory properties. Get an approximate on the
++	 * number of usable memory entries and use for FDT size estimation.
++	 */
++	usm_entries = ((memblock_end_of_DRAM() / drmem_lmb_size()) +
++		       (2 * (resource_size(&crashk_res) / drmem_lmb_size())));
++	fdt_size += (unsigned int)(usm_entries * sizeof(u64));
++
++	return fdt_size;
++}
++
+ /**
+  * setup_new_fdt_ppc64 - Update the flattend device-tree of the kernel
+  *                       being loaded.
+diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
+index 13999123b7358..32fa0fa3d4ff5 100644
+--- a/arch/powerpc/kvm/powerpc.c
++++ b/arch/powerpc/kvm/powerpc.c
+@@ -1518,7 +1518,7 @@ int kvmppc_handle_vmx_load(struct kvm_vcpu *vcpu,
+ 	return emulated;
+ }
+ 
+-int kvmppc_get_vmx_dword(struct kvm_vcpu *vcpu, int index, u64 *val)
++static int kvmppc_get_vmx_dword(struct kvm_vcpu *vcpu, int index, u64 *val)
+ {
+ 	union kvmppc_one_reg reg;
+ 	int vmx_offset = 0;
+@@ -1536,7 +1536,7 @@ int kvmppc_get_vmx_dword(struct kvm_vcpu *vcpu, int index, u64 *val)
+ 	return result;
+ }
+ 
+-int kvmppc_get_vmx_word(struct kvm_vcpu *vcpu, int index, u64 *val)
++static int kvmppc_get_vmx_word(struct kvm_vcpu *vcpu, int index, u64 *val)
+ {
+ 	union kvmppc_one_reg reg;
+ 	int vmx_offset = 0;
+@@ -1554,7 +1554,7 @@ int kvmppc_get_vmx_word(struct kvm_vcpu *vcpu, int index, u64 *val)
+ 	return result;
+ }
+ 
+-int kvmppc_get_vmx_hword(struct kvm_vcpu *vcpu, int index, u64 *val)
++static int kvmppc_get_vmx_hword(struct kvm_vcpu *vcpu, int index, u64 *val)
+ {
+ 	union kvmppc_one_reg reg;
+ 	int vmx_offset = 0;
+@@ -1572,7 +1572,7 @@ int kvmppc_get_vmx_hword(struct kvm_vcpu *vcpu, int index, u64 *val)
+ 	return result;
+ }
+ 
+-int kvmppc_get_vmx_byte(struct kvm_vcpu *vcpu, int index, u64 *val)
++static int kvmppc_get_vmx_byte(struct kvm_vcpu *vcpu, int index, u64 *val)
+ {
+ 	union kvmppc_one_reg reg;
+ 	int vmx_offset = 0;
+diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c
+index 16e86ba8aa209..f6b7749d6ada7 100644
+--- a/arch/powerpc/platforms/pseries/dlpar.c
++++ b/arch/powerpc/platforms/pseries/dlpar.c
+@@ -127,7 +127,6 @@ void dlpar_free_cc_nodes(struct device_node *dn)
+ #define NEXT_PROPERTY   3
+ #define PREV_PARENT     4
+ #define MORE_MEMORY     5
+-#define CALL_AGAIN	-2
+ #define ERR_CFG_USE     -9003
+ 
+ struct device_node *dlpar_configure_connector(__be32 drc_index,
+@@ -168,6 +167,9 @@ struct device_node *dlpar_configure_connector(__be32 drc_index,
+ 
+ 		spin_unlock(&rtas_data_buf_lock);
+ 
++		if (rtas_busy_delay(rc))
++			continue;
++
+ 		switch (rc) {
+ 		case COMPLETE:
+ 			break;
+@@ -216,9 +218,6 @@ struct device_node *dlpar_configure_connector(__be32 drc_index,
+ 			last_dn = last_dn->parent;
+ 			break;
+ 
+-		case CALL_AGAIN:
+-			break;
+-
+ 		case MORE_MEMORY:
+ 		case ERR_CFG_USE:
+ 		default:
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index 0cfd6da784f84..71a315e73cbe7 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -32,9 +32,10 @@ CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+ # Disable -pg to prevent insert call site
+ CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os
+ 
+-# Disable gcov profiling for VDSO code
++# Disable profiling and instrumentation for VDSO code
+ GCOV_PROFILE := n
+ KCOV_INSTRUMENT := n
++KASAN_SANITIZE := n
+ 
+ # Force dependency
+ $(obj)/vdso.o: $(obj)/vdso.so
+diff --git a/arch/s390/kernel/vtime.c b/arch/s390/kernel/vtime.c
+index 8df10d3c8f6cf..7b3af2d6b9baa 100644
+--- a/arch/s390/kernel/vtime.c
++++ b/arch/s390/kernel/vtime.c
+@@ -136,7 +136,8 @@ static int do_account_vtime(struct task_struct *tsk)
+ 		"	stck	%1"	/* Store current tod clock value */
+ #endif
+ 		: "=Q" (S390_lowcore.last_update_timer),
+-		  "=Q" (S390_lowcore.last_update_clock));
++		  "=Q" (S390_lowcore.last_update_clock)
++		: : "cc");
+ 	clock = S390_lowcore.last_update_clock - clock;
+ 	timer -= S390_lowcore.last_update_timer;
+ 
+diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
+index a6ca135442f9a..530b7ec5d3ca9 100644
+--- a/arch/sparc/Kconfig
++++ b/arch/sparc/Kconfig
+@@ -496,7 +496,7 @@ config COMPAT
+ 	bool
+ 	depends on SPARC64
+ 	default y
+-	select COMPAT_BINFMT_ELF
++	select COMPAT_BINFMT_ELF if BINFMT_ELF
+ 	select HAVE_UID16
+ 	select ARCH_WANT_OLD_COMPAT_IPC
+ 	select COMPAT_OLD_SIGACTION
+diff --git a/arch/sparc/kernel/led.c b/arch/sparc/kernel/led.c
+index bd48575172c32..3a66e62eb2a0e 100644
+--- a/arch/sparc/kernel/led.c
++++ b/arch/sparc/kernel/led.c
+@@ -50,6 +50,7 @@ static void led_blink(struct timer_list *unused)
+ 	add_timer(&led_blink_timer);
+ }
+ 
++#ifdef CONFIG_PROC_FS
+ static int led_proc_show(struct seq_file *m, void *v)
+ {
+ 	if (get_auxio() & AUXIO_LED)
+@@ -111,6 +112,7 @@ static const struct proc_ops led_proc_ops = {
+ 	.proc_release	= single_release,
+ 	.proc_write	= led_proc_write,
+ };
++#endif
+ 
+ static struct proc_dir_entry *led;
+ 
+diff --git a/arch/sparc/lib/memset.S b/arch/sparc/lib/memset.S
+index b89d42b29e344..f427f34b8b79b 100644
+--- a/arch/sparc/lib/memset.S
++++ b/arch/sparc/lib/memset.S
+@@ -142,6 +142,7 @@ __bzero:
+ 	ZERO_LAST_BLOCKS(%o0, 0x48, %g2)
+ 	ZERO_LAST_BLOCKS(%o0, 0x08, %g2)
+ 13:
++	EXT(12b, 13b, 21f)
+ 	be	8f
+ 	 andcc	%o1, 4, %g0
+ 
+diff --git a/arch/um/include/shared/skas/mm_id.h b/arch/um/include/shared/skas/mm_id.h
+index 4337b4ced0954..e82e203f5f419 100644
+--- a/arch/um/include/shared/skas/mm_id.h
++++ b/arch/um/include/shared/skas/mm_id.h
+@@ -12,6 +12,7 @@ struct mm_id {
+ 		int pid;
+ 	} u;
+ 	unsigned long stack;
++	int kill;
+ };
+ 
+ #endif
+diff --git a/arch/um/kernel/tlb.c b/arch/um/kernel/tlb.c
+index 61776790cd678..5be1b0da9f3be 100644
+--- a/arch/um/kernel/tlb.c
++++ b/arch/um/kernel/tlb.c
+@@ -125,6 +125,9 @@ static int add_mmap(unsigned long virt, unsigned long phys, unsigned long len,
+ 	struct host_vm_op *last;
+ 	int fd = -1, ret = 0;
+ 
++	if (virt + len > STUB_START && virt < STUB_END)
++		return -EINVAL;
++
+ 	if (hvc->userspace)
+ 		fd = phys_mapping(phys, &offset);
+ 	else
+@@ -162,7 +165,7 @@ static int add_munmap(unsigned long addr, unsigned long len,
+ 	struct host_vm_op *last;
+ 	int ret = 0;
+ 
+-	if ((addr >= STUB_START) && (addr < STUB_END))
++	if (addr + len > STUB_START && addr < STUB_END)
+ 		return -EINVAL;
+ 
+ 	if (hvc->index != 0) {
+@@ -192,6 +195,9 @@ static int add_mprotect(unsigned long addr, unsigned long len,
+ 	struct host_vm_op *last;
+ 	int ret = 0;
+ 
++	if (addr + len > STUB_START && addr < STUB_END)
++		return -EINVAL;
++
+ 	if (hvc->index != 0) {
+ 		last = &hvc->ops[hvc->index - 1];
+ 		if ((last->type == MPROTECT) &&
+@@ -346,12 +352,11 @@ void fix_range_common(struct mm_struct *mm, unsigned long start_addr,
+ 
+ 	/* This is not an else because ret is modified above */
+ 	if (ret) {
++		struct mm_id *mm_idp = &current->mm->context.id;
++
+ 		printk(KERN_ERR "fix_range_common: failed, killing current "
+ 		       "process: %d\n", task_tgid_vnr(current));
+-		/* We are under mmap_lock, release it such that current can terminate */
+-		mmap_write_unlock(current->mm);
+-		force_sig(SIGKILL);
+-		do_signal(&current->thread.regs);
++		mm_idp->kill = 1;
+ 	}
+ }
+ 
+@@ -472,6 +477,10 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long address)
+ 	struct mm_id *mm_id;
+ 
+ 	address &= PAGE_MASK;
++
++	if (address >= STUB_START && address < STUB_END)
++		goto kill;
++
+ 	pgd = pgd_offset(mm, address);
+ 	if (!pgd_present(*pgd))
+ 		goto kill;
+diff --git a/arch/um/os-Linux/skas/process.c b/arch/um/os-Linux/skas/process.c
+index 4fb877b99dded..94a7c4125ebc8 100644
+--- a/arch/um/os-Linux/skas/process.c
++++ b/arch/um/os-Linux/skas/process.c
+@@ -249,6 +249,7 @@ static int userspace_tramp(void *stack)
+ }
+ 
+ int userspace_pid[NR_CPUS];
++int kill_userspace_mm[NR_CPUS];
+ 
+ /**
+  * start_userspace() - prepare a new userspace process
+@@ -342,6 +343,8 @@ void userspace(struct uml_pt_regs *regs, unsigned long *aux_fp_regs)
+ 	interrupt_end();
+ 
+ 	while (1) {
++		if (kill_userspace_mm[0])
++			fatal_sigsegv();
+ 
+ 		/*
+ 		 * This can legitimately fail if the process loads a
+@@ -650,4 +653,5 @@ void reboot_skas(void)
+ void __switch_mm(struct mm_id *mm_idp)
+ {
+ 	userspace_pid[0] = mm_idp->u.pid;
++	kill_userspace_mm[0] = mm_idp->kill;
+ }
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index ad8a7188a2bf7..f9a1d98e75349 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -686,7 +686,8 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	unsigned long auth_tag_len = crypto_aead_authsize(tfm);
+ 	const struct aesni_gcm_tfm_s *gcm_tfm = aesni_gcm_tfm;
+-	struct gcm_context_data data AESNI_ALIGN_ATTR;
++	u8 databuf[sizeof(struct gcm_context_data) + (AESNI_ALIGN - 8)] __aligned(8);
++	struct gcm_context_data *data = PTR_ALIGN((void *)databuf, AESNI_ALIGN);
+ 	struct scatter_walk dst_sg_walk = {};
+ 	unsigned long left = req->cryptlen;
+ 	unsigned long len, srclen, dstlen;
+@@ -735,8 +736,7 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
+ 	}
+ 
+ 	kernel_fpu_begin();
+-	gcm_tfm->init(aes_ctx, &data, iv,
+-		hash_subkey, assoc, assoclen);
++	gcm_tfm->init(aes_ctx, data, iv, hash_subkey, assoc, assoclen);
+ 	if (req->src != req->dst) {
+ 		while (left) {
+ 			src = scatterwalk_map(&src_sg_walk);
+@@ -746,10 +746,10 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
+ 			len = min(srclen, dstlen);
+ 			if (len) {
+ 				if (enc)
+-					gcm_tfm->enc_update(aes_ctx, &data,
++					gcm_tfm->enc_update(aes_ctx, data,
+ 							     dst, src, len);
+ 				else
+-					gcm_tfm->dec_update(aes_ctx, &data,
++					gcm_tfm->dec_update(aes_ctx, data,
+ 							     dst, src, len);
+ 			}
+ 			left -= len;
+@@ -767,10 +767,10 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
+ 			len = scatterwalk_clamp(&src_sg_walk, left);
+ 			if (len) {
+ 				if (enc)
+-					gcm_tfm->enc_update(aes_ctx, &data,
++					gcm_tfm->enc_update(aes_ctx, data,
+ 							     src, src, len);
+ 				else
+-					gcm_tfm->dec_update(aes_ctx, &data,
++					gcm_tfm->dec_update(aes_ctx, data,
+ 							     src, src, len);
+ 			}
+ 			left -= len;
+@@ -779,7 +779,7 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
+ 			scatterwalk_done(&src_sg_walk, 1, left);
+ 		}
+ 	}
+-	gcm_tfm->finalize(aes_ctx, &data, authTag, auth_tag_len);
++	gcm_tfm->finalize(aes_ctx, data, authTag, auth_tag_len);
+ 	kernel_fpu_end();
+ 
+ 	if (!assocmem)
+@@ -828,7 +828,8 @@ static int helper_rfc4106_encrypt(struct aead_request *req)
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm);
+ 	void *aes_ctx = &(ctx->aes_key_expanded);
+-	u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
++	u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8);
++	u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN);
+ 	unsigned int i;
+ 	__be32 counter = cpu_to_be32(1);
+ 
+@@ -855,7 +856,8 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm);
+ 	void *aes_ctx = &(ctx->aes_key_expanded);
+-	u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
++	u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8);
++	u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN);
+ 	unsigned int i;
+ 
+ 	if (unlikely(req->assoclen != 16 && req->assoclen != 20))
+@@ -985,7 +987,8 @@ static int generic_gcmaes_encrypt(struct aead_request *req)
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct generic_gcmaes_ctx *ctx = generic_gcmaes_ctx_get(tfm);
+ 	void *aes_ctx = &(ctx->aes_key_expanded);
+-	u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
++	u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8);
++	u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN);
+ 	__be32 counter = cpu_to_be32(1);
+ 
+ 	memcpy(iv, req->iv, 12);
+@@ -1001,7 +1004,8 @@ static int generic_gcmaes_decrypt(struct aead_request *req)
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct generic_gcmaes_ctx *ctx = generic_gcmaes_ctx_get(tfm);
+ 	void *aes_ctx = &(ctx->aes_key_expanded);
+-	u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
++	u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8);
++	u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN);
+ 
+ 	memcpy(iv, req->iv, 12);
+ 	*((__be32 *)(iv+12)) = counter;
+diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
+index 94c6e6330e043..de5358671750d 100644
+--- a/arch/x86/entry/common.c
++++ b/arch/x86/entry/common.c
+@@ -304,7 +304,7 @@ __visible noinstr void xen_pv_evtchn_do_upcall(struct pt_regs *regs)
+ 
+ 	instrumentation_begin();
+ 	run_on_irqstack_cond(__xen_pv_evtchn_do_upcall, regs);
+-	instrumentation_begin();
++	instrumentation_end();
+ 
+ 	set_irq_regs(old_regs);
+ 
+diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
+index 9aad0e0876fba..fda3e7747c223 100644
+--- a/arch/x86/include/asm/virtext.h
++++ b/arch/x86/include/asm/virtext.h
+@@ -30,15 +30,22 @@ static inline int cpu_has_vmx(void)
+ }
+ 
+ 
+-/** Disable VMX on the current CPU
++/**
++ * cpu_vmxoff() - Disable VMX on the current CPU
+  *
+- * vmxoff causes a undefined-opcode exception if vmxon was not run
+- * on the CPU previously. Only call this function if you know VMX
+- * is enabled.
++ * Disable VMX and clear CR4.VMXE (even if VMXOFF faults)
++ *
++ * Note, VMXOFF causes a #UD if the CPU is !post-VMXON, but it's impossible to
++ * atomically track post-VMXON state, e.g. this may be called in NMI context.
++ * Eat all faults as all other faults on VMXOFF faults are mode related, i.e.
++ * faults are guaranteed to be due to the !post-VMXON check unless the CPU is
++ * magically in RM, VM86, compat mode, or at CPL>0.
+  */
+ static inline void cpu_vmxoff(void)
+ {
+-	asm volatile ("vmxoff");
++	asm_volatile_goto("1: vmxoff\n\t"
++			  _ASM_EXTABLE(1b, %l[fault]) :::: fault);
++fault:
+ 	cr4_clear_bits(X86_CR4_VMXE);
+ }
+ 
+diff --git a/arch/x86/kernel/msr.c b/arch/x86/kernel/msr.c
+index c0d4098106589..79f900ffde4c5 100644
+--- a/arch/x86/kernel/msr.c
++++ b/arch/x86/kernel/msr.c
+@@ -184,6 +184,13 @@ static long msr_ioctl(struct file *file, unsigned int ioc, unsigned long arg)
+ 		err = security_locked_down(LOCKDOWN_MSR);
+ 		if (err)
+ 			break;
++
++		err = filter_write(regs[1]);
++		if (err)
++			return err;
++
++		add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
++
+ 		err = wrmsr_safe_regs_on_cpu(cpu, regs);
+ 		if (err)
+ 			break;
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index db115943e8bdc..efbaef8b4de98 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -538,31 +538,21 @@ static void emergency_vmx_disable_all(void)
+ 	local_irq_disable();
+ 
+ 	/*
+-	 * We need to disable VMX on all CPUs before rebooting, otherwise
+-	 * we risk hanging up the machine, because the CPU ignores INIT
+-	 * signals when VMX is enabled.
++	 * Disable VMX on all CPUs before rebooting, otherwise we risk hanging
++	 * the machine, because the CPU blocks INIT when it's in VMX root.
+ 	 *
+-	 * We can't take any locks and we may be on an inconsistent
+-	 * state, so we use NMIs as IPIs to tell the other CPUs to disable
+-	 * VMX and halt.
++	 * We can't take any locks and we may be on an inconsistent state, so
++	 * use NMIs as IPIs to tell the other CPUs to exit VMX root and halt.
+ 	 *
+-	 * For safety, we will avoid running the nmi_shootdown_cpus()
+-	 * stuff unnecessarily, but we don't have a way to check
+-	 * if other CPUs have VMX enabled. So we will call it only if the
+-	 * CPU we are running on has VMX enabled.
+-	 *
+-	 * We will miss cases where VMX is not enabled on all CPUs. This
+-	 * shouldn't do much harm because KVM always enable VMX on all
+-	 * CPUs anyway. But we can miss it on the small window where KVM
+-	 * is still enabling VMX.
++	 * Do the NMI shootdown even if VMX if off on _this_ CPU, as that
++	 * doesn't prevent a different CPU from being in VMX root operation.
+ 	 */
+-	if (cpu_has_vmx() && cpu_vmx_enabled()) {
+-		/* Disable VMX on this CPU. */
+-		cpu_vmxoff();
++	if (cpu_has_vmx()) {
++		/* Safely force _this_ CPU out of VMX root operation. */
++		__cpu_emergency_vmxoff();
+ 
+-		/* Halt and disable VMX on the other CPUs */
++		/* Halt and exit VMX root operation on the other CPUs. */
+ 		nmi_shootdown_cpus(vmxoff_nmi);
+-
+ 	}
+ }
+ 
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 66a08322988f2..1453b9b794425 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -2564,12 +2564,12 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
+ 	ctxt->_eip   = GET_SMSTATE(u64, smstate, 0x7f78);
+ 	ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7f70) | X86_EFLAGS_FIXED;
+ 
+-	val = GET_SMSTATE(u32, smstate, 0x7f68);
++	val = GET_SMSTATE(u64, smstate, 0x7f68);
+ 
+ 	if (ctxt->ops->set_dr(ctxt, 6, (val & DR6_VOLATILE) | DR6_FIXED_1))
+ 		return X86EMUL_UNHANDLEABLE;
+ 
+-	val = GET_SMSTATE(u32, smstate, 0x7f60);
++	val = GET_SMSTATE(u64, smstate, 0x7f60);
+ 
+ 	if (ctxt->ops->set_dr(ctxt, 7, (val & DR7_VOLATILE) | DR7_FIXED_1))
+ 		return X86EMUL_UNHANDLEABLE;
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index c842d17240ccb..ffa0bd0e033fb 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -1055,7 +1055,8 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
+ 
+ 		pfn = spte_to_pfn(iter.old_spte);
+ 		if (kvm_is_reserved_pfn(pfn) ||
+-		    !PageTransCompoundMap(pfn_to_page(pfn)))
++		    (!PageCompound(pfn_to_page(pfn)) &&
++		     !kvm_is_zone_device_pfn(pfn)))
+ 			continue;
+ 
+ 		tdp_mmu_set_spte(kvm, &iter, 0);
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index 4fbe190c79159..1008cc6cb66c5 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -51,6 +51,23 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu,
+ 	nested_svm_vmexit(svm);
+ }
+ 
++static void svm_inject_page_fault_nested(struct kvm_vcpu *vcpu, struct x86_exception *fault)
++{
++       struct vcpu_svm *svm = to_svm(vcpu);
++       WARN_ON(!is_guest_mode(vcpu));
++
++       if (vmcb_is_intercept(&svm->nested.ctl, INTERCEPT_EXCEPTION_OFFSET + PF_VECTOR) &&
++	   !svm->nested.nested_run_pending) {
++               svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + PF_VECTOR;
++               svm->vmcb->control.exit_code_hi = 0;
++               svm->vmcb->control.exit_info_1 = fault->error_code;
++               svm->vmcb->control.exit_info_2 = fault->address;
++               nested_svm_vmexit(svm);
++       } else {
++               kvm_inject_page_fault(vcpu, fault);
++       }
++}
++
+ static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index)
+ {
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+@@ -58,7 +75,7 @@ static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index)
+ 	u64 pdpte;
+ 	int ret;
+ 
+-	ret = kvm_vcpu_read_guest_page(vcpu, gpa_to_gfn(__sme_clr(cr3)), &pdpte,
++	ret = kvm_vcpu_read_guest_page(vcpu, gpa_to_gfn(cr3), &pdpte,
+ 				       offset_in_page(cr3) + index * 8, 8);
+ 	if (ret)
+ 		return 0;
+@@ -446,6 +463,9 @@ int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb12_gpa,
+ 	if (ret)
+ 		return ret;
+ 
++	if (!npt_enabled)
++		svm->vcpu.arch.mmu->inject_page_fault = svm_inject_page_fault_nested;
++
+ 	svm_set_gif(svm, true);
+ 
+ 	return 0;
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index f4ae3871e412a..76ab1ee0784ae 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1092,12 +1092,12 @@ static u64 svm_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+ static void svm_check_invpcid(struct vcpu_svm *svm)
+ {
+ 	/*
+-	 * Intercept INVPCID instruction only if shadow page table is
+-	 * enabled. Interception is not required with nested page table
+-	 * enabled.
++	 * Intercept INVPCID if shadow paging is enabled to sync/free shadow
++	 * roots, or if INVPCID is disabled in the guest to inject #UD.
+ 	 */
+ 	if (kvm_cpu_cap_has(X86_FEATURE_INVPCID)) {
+-		if (!npt_enabled)
++		if (!npt_enabled ||
++		    !guest_cpuid_has(&svm->vcpu, X86_FEATURE_INVPCID))
+ 			svm_set_intercept(svm, INTERCEPT_INVPCID);
+ 		else
+ 			svm_clr_intercept(svm, INTERCEPT_INVPCID);
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 82bf37a5c9ecc..9c1545c376e9b 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -53,7 +53,7 @@ kmmio_fault(struct pt_regs *regs, unsigned long addr)
+  * 32-bit mode:
+  *
+  *   Sometimes AMD Athlon/Opteron CPUs report invalid exceptions on prefetch.
+- *   Check that here and ignore it.
++ *   Check that here and ignore it.  This is AMD erratum #91.
+  *
+  * 64-bit mode:
+  *
+@@ -82,11 +82,7 @@ check_prefetch_opcode(struct pt_regs *regs, unsigned char *instr,
+ #ifdef CONFIG_X86_64
+ 	case 0x40:
+ 		/*
+-		 * In AMD64 long mode 0x40..0x4F are valid REX prefixes
+-		 * Need to figure out under what instruction mode the
+-		 * instruction was issued. Could check the LDT for lm,
+-		 * but for now it's good enough to assume that long
+-		 * mode only uses well known segments or kernel.
++		 * In 64-bit mode 0x40..0x4F are valid REX prefixes
+ 		 */
+ 		return (!user_mode(regs) || user_64bit_mode(regs));
+ #endif
+@@ -126,20 +122,31 @@ is_prefetch(struct pt_regs *regs, unsigned long error_code, unsigned long addr)
+ 	instr = (void *)convert_ip_to_linear(current, regs);
+ 	max_instr = instr + 15;
+ 
+-	if (user_mode(regs) && instr >= (unsigned char *)TASK_SIZE_MAX)
+-		return 0;
++	/*
++	 * This code has historically always bailed out if IP points to a
++	 * not-present page (e.g. due to a race).  No one has ever
++	 * complained about this.
++	 */
++	pagefault_disable();
+ 
+ 	while (instr < max_instr) {
+ 		unsigned char opcode;
+ 
+-		if (get_kernel_nofault(opcode, instr))
+-			break;
++		if (user_mode(regs)) {
++			if (get_user(opcode, instr))
++				break;
++		} else {
++			if (get_kernel_nofault(opcode, instr))
++				break;
++		}
+ 
+ 		instr++;
+ 
+ 		if (!check_prefetch_opcode(regs, instr, opcode, &prefetch))
+ 			break;
+ 	}
++
++	pagefault_enable();
+ 	return prefetch;
+ }
+ 
+diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
+index 8f665c352bf0d..ca311aaa67b88 100644
+--- a/arch/x86/mm/pat/memtype.c
++++ b/arch/x86/mm/pat/memtype.c
+@@ -1164,12 +1164,14 @@ static void *memtype_seq_start(struct seq_file *seq, loff_t *pos)
+ 
+ static void *memtype_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ {
++	kfree(v);
+ 	++*pos;
+ 	return memtype_get_idx(*pos);
+ }
+ 
+ static void memtype_seq_stop(struct seq_file *seq, void *v)
+ {
++	kfree(v);
+ }
+ 
+ static int memtype_seq_show(struct seq_file *seq, void *v)
+@@ -1181,8 +1183,6 @@ static int memtype_seq_show(struct seq_file *seq, void *v)
+ 			entry_print->end,
+ 			cattr_name(entry_print->type));
+ 
+-	kfree(entry_print);
+-
+ 	return 0;
+ }
+ 
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 9e81d1052091f..5720978e4d09b 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2937,6 +2937,7 @@ static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
+ 	}
+ 
+ 	bfqd->in_service_queue = bfqq;
++	bfqd->in_serv_last_pos = 0;
+ }
+ 
+ /*
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index 659cdb8a07fef..c3aa7f8ee3883 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -468,6 +468,14 @@ void blk_queue_io_opt(struct request_queue *q, unsigned int opt)
+ }
+ EXPORT_SYMBOL(blk_queue_io_opt);
+ 
++static unsigned int blk_round_down_sectors(unsigned int sectors, unsigned int lbs)
++{
++	sectors = round_down(sectors, lbs >> SECTOR_SHIFT);
++	if (sectors < PAGE_SIZE >> SECTOR_SHIFT)
++		sectors = PAGE_SIZE >> SECTOR_SHIFT;
++	return sectors;
++}
++
+ /**
+  * blk_stack_limits - adjust queue_limits for stacked devices
+  * @t:	the stacking driver limits (top device)
+@@ -594,6 +602,10 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
+ 		ret = -1;
+ 	}
+ 
++	t->max_sectors = blk_round_down_sectors(t->max_sectors, t->logical_block_size);
++	t->max_hw_sectors = blk_round_down_sectors(t->max_hw_sectors, t->logical_block_size);
++	t->max_dev_sectors = blk_round_down_sectors(t->max_dev_sectors, t->logical_block_size);
++
+ 	/* Discard alignment and granularity */
+ 	if (b->discard_granularity) {
+ 		alignment = queue_limit_discard_alignment(b, start);
+diff --git a/block/bsg.c b/block/bsg.c
+index d7bae94b64d95..3d78e843a83f6 100644
+--- a/block/bsg.c
++++ b/block/bsg.c
+@@ -157,8 +157,10 @@ static int bsg_sg_io(struct request_queue *q, fmode_t mode, void __user *uarg)
+ 		return PTR_ERR(rq);
+ 
+ 	ret = q->bsg_dev.ops->fill_hdr(rq, &hdr, mode);
+-	if (ret)
++	if (ret) {
++		blk_put_request(rq);
+ 		return ret;
++	}
+ 
+ 	rq->timeout = msecs_to_jiffies(hdr.timeout);
+ 	if (!rq->timeout)
+diff --git a/block/ioctl.c b/block/ioctl.c
+index 3fbc382eb926d..3be4d0e2a96c3 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -90,20 +90,27 @@ static int compat_blkpg_ioctl(struct block_device *bdev,
+ }
+ #endif
+ 
+-static int blkdev_reread_part(struct block_device *bdev)
++static int blkdev_reread_part(struct block_device *bdev, fmode_t mode)
+ {
+-	int ret;
++	struct block_device *tmp;
+ 
+ 	if (!disk_part_scan_enabled(bdev->bd_disk) || bdev_is_partition(bdev))
+ 		return -EINVAL;
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EACCES;
+ 
+-	mutex_lock(&bdev->bd_mutex);
+-	ret = bdev_disk_changed(bdev, false);
+-	mutex_unlock(&bdev->bd_mutex);
++	/*
++	 * Reopen the device to revalidate the driver state and force a
++	 * partition rescan.
++	 */
++	mode &= ~FMODE_EXCL;
++	set_bit(GD_NEED_PART_SCAN, &bdev->bd_disk->state);
+ 
+-	return ret;
++	tmp = blkdev_get_by_dev(bdev->bd_dev, mode, NULL);
++	if (IS_ERR(tmp))
++		return PTR_ERR(tmp);
++	blkdev_put(tmp, mode);
++	return 0;
+ }
+ 
+ static int blk_ioctl_discard(struct block_device *bdev, fmode_t mode,
+@@ -549,7 +556,7 @@ static int blkdev_common_ioctl(struct block_device *bdev, fmode_t mode,
+ 		bdev->bd_bdi->ra_pages = (arg * 512) / PAGE_SIZE;
+ 		return 0;
+ 	case BLKRRPART:
+-		return blkdev_reread_part(bdev);
++		return blkdev_reread_part(bdev, mode);
+ 	case BLKTRACESTART:
+ 	case BLKTRACESTOP:
+ 	case BLKTRACETEARDOWN:
+diff --git a/certs/blacklist.c b/certs/blacklist.c
+index 6514f9ebc943f..f1c434b04b5e4 100644
+--- a/certs/blacklist.c
++++ b/certs/blacklist.c
+@@ -162,7 +162,7 @@ static int __init blacklist_init(void)
+ 			      KEY_USR_VIEW | KEY_USR_READ |
+ 			      KEY_USR_SEARCH,
+ 			      KEY_ALLOC_NOT_IN_QUOTA |
+-			      KEY_FLAG_KEEP,
++			      KEY_ALLOC_SET_KEEP,
+ 			      NULL, NULL);
+ 	if (IS_ERR(blacklist_keyring))
+ 		panic("Can't allocate system blacklist keyring\n");
+diff --git a/crypto/ecdh_helper.c b/crypto/ecdh_helper.c
+index 66fcb2ea81544..fca63b559f655 100644
+--- a/crypto/ecdh_helper.c
++++ b/crypto/ecdh_helper.c
+@@ -67,6 +67,9 @@ int crypto_ecdh_decode_key(const char *buf, unsigned int len,
+ 	if (secret.type != CRYPTO_KPP_SECRET_TYPE_ECDH)
+ 		return -EINVAL;
+ 
++	if (unlikely(len < secret.len))
++		return -EINVAL;
++
+ 	ptr = ecdh_unpack_data(&params->curve_id, ptr, sizeof(params->curve_id));
+ 	ptr = ecdh_unpack_data(&params->key_size, ptr, sizeof(params->key_size));
+ 	if (secret.len != crypto_ecdh_key_len(params))
+diff --git a/crypto/michael_mic.c b/crypto/michael_mic.c
+index 63350c4ad4617..f4c31049601c9 100644
+--- a/crypto/michael_mic.c
++++ b/crypto/michael_mic.c
+@@ -7,7 +7,7 @@
+  * Copyright (c) 2004 Jouni Malinen <j@w1.fi>
+  */
+ #include <crypto/internal/hash.h>
+-#include <asm/byteorder.h>
++#include <asm/unaligned.h>
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/string.h>
+@@ -19,7 +19,7 @@ struct michael_mic_ctx {
+ };
+ 
+ struct michael_mic_desc_ctx {
+-	u8 pending[4];
++	__le32 pending;
+ 	size_t pending_len;
+ 
+ 	u32 l, r;
+@@ -60,13 +60,12 @@ static int michael_update(struct shash_desc *desc, const u8 *data,
+ 			   unsigned int len)
+ {
+ 	struct michael_mic_desc_ctx *mctx = shash_desc_ctx(desc);
+-	const __le32 *src;
+ 
+ 	if (mctx->pending_len) {
+ 		int flen = 4 - mctx->pending_len;
+ 		if (flen > len)
+ 			flen = len;
+-		memcpy(&mctx->pending[mctx->pending_len], data, flen);
++		memcpy((u8 *)&mctx->pending + mctx->pending_len, data, flen);
+ 		mctx->pending_len += flen;
+ 		data += flen;
+ 		len -= flen;
+@@ -74,23 +73,21 @@ static int michael_update(struct shash_desc *desc, const u8 *data,
+ 		if (mctx->pending_len < 4)
+ 			return 0;
+ 
+-		src = (const __le32 *)mctx->pending;
+-		mctx->l ^= le32_to_cpup(src);
++		mctx->l ^= le32_to_cpu(mctx->pending);
+ 		michael_block(mctx->l, mctx->r);
+ 		mctx->pending_len = 0;
+ 	}
+ 
+-	src = (const __le32 *)data;
+-
+ 	while (len >= 4) {
+-		mctx->l ^= le32_to_cpup(src++);
++		mctx->l ^= get_unaligned_le32(data);
+ 		michael_block(mctx->l, mctx->r);
++		data += 4;
+ 		len -= 4;
+ 	}
+ 
+ 	if (len > 0) {
+ 		mctx->pending_len = len;
+-		memcpy(mctx->pending, src, len);
++		memcpy(&mctx->pending, data, len);
+ 	}
+ 
+ 	return 0;
+@@ -100,8 +97,7 @@ static int michael_update(struct shash_desc *desc, const u8 *data,
+ static int michael_final(struct shash_desc *desc, u8 *out)
+ {
+ 	struct michael_mic_desc_ctx *mctx = shash_desc_ctx(desc);
+-	u8 *data = mctx->pending;
+-	__le32 *dst = (__le32 *)out;
++	u8 *data = (u8 *)&mctx->pending;
+ 
+ 	/* Last block and padding (0x5a, 4..7 x 0) */
+ 	switch (mctx->pending_len) {
+@@ -123,8 +119,8 @@ static int michael_final(struct shash_desc *desc, u8 *out)
+ 	/* l ^= 0; */
+ 	michael_block(mctx->l, mctx->r);
+ 
+-	dst[0] = cpu_to_le32(mctx->l);
+-	dst[1] = cpu_to_le32(mctx->r);
++	put_unaligned_le32(mctx->l, out);
++	put_unaligned_le32(mctx->r, out + 4);
+ 
+ 	return 0;
+ }
+@@ -135,13 +131,11 @@ static int michael_setkey(struct crypto_shash *tfm, const u8 *key,
+ {
+ 	struct michael_mic_ctx *mctx = crypto_shash_ctx(tfm);
+ 
+-	const __le32 *data = (const __le32 *)key;
+-
+ 	if (keylen != 8)
+ 		return -EINVAL;
+ 
+-	mctx->l = le32_to_cpu(data[0]);
+-	mctx->r = le32_to_cpu(data[1]);
++	mctx->l = get_unaligned_le32(key);
++	mctx->r = get_unaligned_le32(key + 4);
+ 	return 0;
+ }
+ 
+@@ -156,7 +150,6 @@ static struct shash_alg alg = {
+ 		.cra_name		=	"michael_mic",
+ 		.cra_driver_name	=	"michael_mic-generic",
+ 		.cra_blocksize		=	8,
+-		.cra_alignmask		=	3,
+ 		.cra_ctxsize		=	sizeof(struct michael_mic_ctx),
+ 		.cra_module		=	THIS_MODULE,
+ 	}
+diff --git a/drivers/acpi/acpi_configfs.c b/drivers/acpi/acpi_configfs.c
+index cf91f49101eac..3a14859dbb757 100644
+--- a/drivers/acpi/acpi_configfs.c
++++ b/drivers/acpi/acpi_configfs.c
+@@ -268,7 +268,12 @@ static int __init acpi_configfs_init(void)
+ 
+ 	acpi_table_group = configfs_register_default_group(root, "table",
+ 							   &acpi_tables_type);
+-	return PTR_ERR_OR_ZERO(acpi_table_group);
++	if (IS_ERR(acpi_table_group)) {
++		configfs_unregister_subsystem(&acpi_configfs);
++		return PTR_ERR(acpi_table_group);
++	}
++
++	return 0;
+ }
+ module_init(acpi_configfs_init);
+ 
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index d04de10a63e4d..e3dd64aa43737 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -787,9 +787,6 @@ static int acpi_data_prop_read_single(const struct acpi_device_data *data,
+ 	const union acpi_object *obj;
+ 	int ret;
+ 
+-	if (!val)
+-		return -EINVAL;
+-
+ 	if (proptype >= DEV_PROP_U8 && proptype <= DEV_PROP_U64) {
+ 		ret = acpi_data_get_property(data, propname, ACPI_TYPE_INTEGER, &obj);
+ 		if (ret)
+@@ -799,28 +796,43 @@ static int acpi_data_prop_read_single(const struct acpi_device_data *data,
+ 		case DEV_PROP_U8:
+ 			if (obj->integer.value > U8_MAX)
+ 				return -EOVERFLOW;
+-			*(u8 *)val = obj->integer.value;
++
++			if (val)
++				*(u8 *)val = obj->integer.value;
++
+ 			break;
+ 		case DEV_PROP_U16:
+ 			if (obj->integer.value > U16_MAX)
+ 				return -EOVERFLOW;
+-			*(u16 *)val = obj->integer.value;
++
++			if (val)
++				*(u16 *)val = obj->integer.value;
++
+ 			break;
+ 		case DEV_PROP_U32:
+ 			if (obj->integer.value > U32_MAX)
+ 				return -EOVERFLOW;
+-			*(u32 *)val = obj->integer.value;
++
++			if (val)
++				*(u32 *)val = obj->integer.value;
++
+ 			break;
+ 		default:
+-			*(u64 *)val = obj->integer.value;
++			if (val)
++				*(u64 *)val = obj->integer.value;
++
+ 			break;
+ 		}
++
++		if (!val)
++			return 1;
+ 	} else if (proptype == DEV_PROP_STRING) {
+ 		ret = acpi_data_get_property(data, propname, ACPI_TYPE_STRING, &obj);
+ 		if (ret)
+ 			return ret;
+ 
+-		*(char **)val = obj->string.pointer;
++		if (val)
++			*(char **)val = obj->string.pointer;
+ 
+ 		return 1;
+ 	} else {
+@@ -834,7 +846,7 @@ int acpi_dev_prop_read_single(struct acpi_device *adev, const char *propname,
+ {
+ 	int ret;
+ 
+-	if (!adev)
++	if (!adev || !val)
+ 		return -EINVAL;
+ 
+ 	ret = acpi_data_prop_read_single(&adev->data, propname, proptype, val);
+@@ -928,10 +940,20 @@ static int acpi_data_prop_read(const struct acpi_device_data *data,
+ 	const union acpi_object *items;
+ 	int ret;
+ 
+-	if (val && nval == 1) {
++	if (nval == 1 || !val) {
+ 		ret = acpi_data_prop_read_single(data, propname, proptype, val);
+-		if (ret >= 0)
++		/*
++		 * The overflow error means that the property is there and it is
++		 * single-value, but its type does not match, so return.
++		 */
++		if (ret >= 0 || ret == -EOVERFLOW)
+ 			return ret;
++
++		/*
++		 * Reading this property as a single-value one failed, but its
++		 * value may still be represented as one-element array, so
++		 * continue.
++		 */
+ 	}
+ 
+ 	ret = acpi_data_get_property_array(data, propname, ACPI_TYPE_ANY, &obj);
+diff --git a/drivers/amba/bus.c b/drivers/amba/bus.c
+index ecc304149067c..b5f5ca4e3f343 100644
+--- a/drivers/amba/bus.c
++++ b/drivers/amba/bus.c
+@@ -299,10 +299,11 @@ static int amba_remove(struct device *dev)
+ {
+ 	struct amba_device *pcdev = to_amba_device(dev);
+ 	struct amba_driver *drv = to_amba_driver(dev->driver);
+-	int ret;
++	int ret = 0;
+ 
+ 	pm_runtime_get_sync(dev);
+-	ret = drv->remove(pcdev);
++	if (drv->remove)
++		ret = drv->remove(pcdev);
+ 	pm_runtime_put_noidle(dev);
+ 
+ 	/* Undo the runtime PM settings in amba_probe() */
+@@ -319,7 +320,9 @@ static int amba_remove(struct device *dev)
+ static void amba_shutdown(struct device *dev)
+ {
+ 	struct amba_driver *drv = to_amba_driver(dev->driver);
+-	drv->shutdown(to_amba_device(dev));
++
++	if (drv->shutdown)
++		drv->shutdown(to_amba_device(dev));
+ }
+ 
+ /**
+@@ -332,12 +335,13 @@ static void amba_shutdown(struct device *dev)
+  */
+ int amba_driver_register(struct amba_driver *drv)
+ {
+-	drv->drv.bus = &amba_bustype;
++	if (!drv->probe)
++		return -EINVAL;
+ 
+-#define SETFN(fn)	if (drv->fn) drv->drv.fn = amba_##fn
+-	SETFN(probe);
+-	SETFN(remove);
+-	SETFN(shutdown);
++	drv->drv.bus = &amba_bustype;
++	drv->drv.probe = amba_probe;
++	drv->drv.remove = amba_remove;
++	drv->drv.shutdown = amba_shutdown;
+ 
+ 	return driver_register(&drv->drv);
+ }
+diff --git a/drivers/ata/ahci_brcm.c b/drivers/ata/ahci_brcm.c
+index 49f7acbfcf01e..5b32df5d33adc 100644
+--- a/drivers/ata/ahci_brcm.c
++++ b/drivers/ata/ahci_brcm.c
+@@ -377,6 +377,10 @@ static int __maybe_unused brcm_ahci_resume(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
++	ret = ahci_platform_enable_regulators(hpriv);
++	if (ret)
++		goto out_disable_clks;
++
+ 	brcm_sata_init(priv);
+ 	brcm_sata_phys_enable(priv);
+ 	brcm_sata_alpm_init(hpriv);
+@@ -406,6 +410,8 @@ out_disable_platform_phys:
+ 	ahci_platform_disable_phys(hpriv);
+ out_disable_phys:
+ 	brcm_sata_phys_disable(priv);
++	ahci_platform_disable_regulators(hpriv);
++out_disable_clks:
+ 	ahci_platform_disable_clks(hpriv);
+ 	return ret;
+ }
+@@ -490,6 +496,10 @@ static int brcm_ahci_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto out_reset;
+ 
++	ret = ahci_platform_enable_regulators(hpriv);
++	if (ret)
++		goto out_disable_clks;
++
+ 	/* Must be first so as to configure endianness including that
+ 	 * of the standard AHCI register space.
+ 	 */
+@@ -499,7 +509,7 @@ static int brcm_ahci_probe(struct platform_device *pdev)
+ 	priv->port_mask = brcm_ahci_get_portmask(hpriv, priv);
+ 	if (!priv->port_mask) {
+ 		ret = -ENODEV;
+-		goto out_disable_clks;
++		goto out_disable_regulators;
+ 	}
+ 
+ 	/* Must be done before ahci_platform_enable_phys() */
+@@ -524,6 +534,8 @@ out_disable_platform_phys:
+ 	ahci_platform_disable_phys(hpriv);
+ out_disable_phys:
+ 	brcm_sata_phys_disable(priv);
++out_disable_regulators:
++	ahci_platform_disable_regulators(hpriv);
+ out_disable_clks:
+ 	ahci_platform_disable_clks(hpriv);
+ out_reset:
+diff --git a/drivers/auxdisplay/ht16k33.c b/drivers/auxdisplay/ht16k33.c
+index d951d54b26f52..d8602843e8a53 100644
+--- a/drivers/auxdisplay/ht16k33.c
++++ b/drivers/auxdisplay/ht16k33.c
+@@ -117,8 +117,7 @@ static void ht16k33_fb_queue(struct ht16k33_priv *priv)
+ {
+ 	struct ht16k33_fbdev *fbdev = &priv->fbdev;
+ 
+-	schedule_delayed_work(&fbdev->work,
+-			      msecs_to_jiffies(HZ / fbdev->refresh_rate));
++	schedule_delayed_work(&fbdev->work, HZ / fbdev->refresh_rate);
+ }
+ 
+ /*
+diff --git a/drivers/base/regmap/regmap-sdw.c b/drivers/base/regmap/regmap-sdw.c
+index c92d614b49432..4b8d2d010cab9 100644
+--- a/drivers/base/regmap/regmap-sdw.c
++++ b/drivers/base/regmap/regmap-sdw.c
+@@ -11,7 +11,7 @@ static int regmap_sdw_write(void *context, unsigned int reg, unsigned int val)
+ 	struct device *dev = context;
+ 	struct sdw_slave *slave = dev_to_sdw_dev(dev);
+ 
+-	return sdw_write(slave, reg, val);
++	return sdw_write_no_pm(slave, reg, val);
+ }
+ 
+ static int regmap_sdw_read(void *context, unsigned int reg, unsigned int *val)
+@@ -20,7 +20,7 @@ static int regmap_sdw_read(void *context, unsigned int reg, unsigned int *val)
+ 	struct sdw_slave *slave = dev_to_sdw_dev(dev);
+ 	int read;
+ 
+-	read = sdw_read(slave, reg);
++	read = sdw_read_no_pm(slave, reg);
+ 	if (read < 0)
+ 		return read;
+ 
+diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
+index 010828fc785bc..615a0c93e1166 100644
+--- a/drivers/base/swnode.c
++++ b/drivers/base/swnode.c
+@@ -443,14 +443,18 @@ software_node_get_next_child(const struct fwnode_handle *fwnode,
+ 	struct swnode *c = to_swnode(child);
+ 
+ 	if (!p || list_empty(&p->children) ||
+-	    (c && list_is_last(&c->entry, &p->children)))
++	    (c && list_is_last(&c->entry, &p->children))) {
++		fwnode_handle_put(child);
+ 		return NULL;
++	}
+ 
+ 	if (c)
+ 		c = list_next_entry(c, entry);
+ 	else
+ 		c = list_first_entry(&p->children, struct swnode, entry);
+-	return &c->fwnode;
++
++	fwnode_handle_put(child);
++	return fwnode_handle_get(&c->fwnode);
+ }
+ 
+ static struct fwnode_handle *
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 7df79ae6b0a1e..295da442329f3 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -4120,23 +4120,23 @@ static int floppy_open(struct block_device *bdev, fmode_t mode)
+ 	if (fdc_state[FDC(drive)].rawcmd == 1)
+ 		fdc_state[FDC(drive)].rawcmd = 2;
+ 
+-	if (!(mode & FMODE_NDELAY)) {
+-		if (mode & (FMODE_READ|FMODE_WRITE)) {
+-			drive_state[drive].last_checked = 0;
+-			clear_bit(FD_OPEN_SHOULD_FAIL_BIT,
+-				  &drive_state[drive].flags);
+-			if (bdev_check_media_change(bdev))
+-				floppy_revalidate(bdev->bd_disk);
+-			if (test_bit(FD_DISK_CHANGED_BIT, &drive_state[drive].flags))
+-				goto out;
+-			if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags))
+-				goto out;
+-		}
+-		res = -EROFS;
+-		if ((mode & FMODE_WRITE) &&
+-		    !test_bit(FD_DISK_WRITABLE_BIT, &drive_state[drive].flags))
++	if (mode & (FMODE_READ|FMODE_WRITE)) {
++		drive_state[drive].last_checked = 0;
++		clear_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags);
++		if (bdev_check_media_change(bdev))
++			floppy_revalidate(bdev->bd_disk);
++		if (test_bit(FD_DISK_CHANGED_BIT, &drive_state[drive].flags))
++			goto out;
++		if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags))
+ 			goto out;
+ 	}
++
++	res = -EROFS;
++
++	if ((mode & FMODE_WRITE) &&
++			!test_bit(FD_DISK_WRITABLE_BIT, &drive_state[drive].flags))
++		goto out;
++
+ 	mutex_unlock(&open_lock);
+ 	mutex_unlock(&floppy_mutex);
+ 	return 0;
+diff --git a/drivers/bluetooth/btqcomsmd.c b/drivers/bluetooth/btqcomsmd.c
+index 98d53764871f5..2acb719e596f5 100644
+--- a/drivers/bluetooth/btqcomsmd.c
++++ b/drivers/bluetooth/btqcomsmd.c
+@@ -142,12 +142,16 @@ static int btqcomsmd_probe(struct platform_device *pdev)
+ 
+ 	btq->cmd_channel = qcom_wcnss_open_channel(wcnss, "APPS_RIVA_BT_CMD",
+ 						   btqcomsmd_cmd_callback, btq);
+-	if (IS_ERR(btq->cmd_channel))
+-		return PTR_ERR(btq->cmd_channel);
++	if (IS_ERR(btq->cmd_channel)) {
++		ret = PTR_ERR(btq->cmd_channel);
++		goto destroy_acl_channel;
++	}
+ 
+ 	hdev = hci_alloc_dev();
+-	if (!hdev)
+-		return -ENOMEM;
++	if (!hdev) {
++		ret = -ENOMEM;
++		goto destroy_cmd_channel;
++	}
+ 
+ 	hci_set_drvdata(hdev, btq);
+ 	btq->hdev = hdev;
+@@ -161,14 +165,21 @@ static int btqcomsmd_probe(struct platform_device *pdev)
+ 	hdev->set_bdaddr = qca_set_bdaddr_rome;
+ 
+ 	ret = hci_register_dev(hdev);
+-	if (ret < 0) {
+-		hci_free_dev(hdev);
+-		return ret;
+-	}
++	if (ret < 0)
++		goto hci_free_dev;
+ 
+ 	platform_set_drvdata(pdev, btq);
+ 
+ 	return 0;
++
++hci_free_dev:
++	hci_free_dev(hdev);
++destroy_cmd_channel:
++	rpmsg_destroy_ept(btq->cmd_channel);
++destroy_acl_channel:
++	rpmsg_destroy_ept(btq->acl_channel);
++
++	return ret;
+ }
+ 
+ static int btqcomsmd_remove(struct platform_device *pdev)
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 1c942869baacc..2953b96b3ceda 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2827,7 +2827,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 		skb = bt_skb_alloc(HCI_WMT_MAX_EVENT_SIZE, GFP_ATOMIC);
+ 		if (!skb) {
+ 			hdev->stat.err_rx++;
+-			goto err_out;
++			return;
+ 		}
+ 
+ 		hci_skb_pkt_type(skb) = HCI_EVENT_PKT;
+@@ -2845,13 +2845,18 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 		 */
+ 		if (test_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags)) {
+ 			data->evt_skb = skb_clone(skb, GFP_ATOMIC);
+-			if (!data->evt_skb)
+-				goto err_out;
++			if (!data->evt_skb) {
++				kfree_skb(skb);
++				return;
++			}
+ 		}
+ 
+ 		err = hci_recv_frame(hdev, skb);
+-		if (err < 0)
+-			goto err_free_skb;
++		if (err < 0) {
++			kfree_skb(data->evt_skb);
++			data->evt_skb = NULL;
++			return;
++		}
+ 
+ 		if (test_and_clear_bit(BTUSB_TX_WAIT_VND_EVT,
+ 				       &data->flags)) {
+@@ -2860,11 +2865,6 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 			wake_up_bit(&data->flags,
+ 				    BTUSB_TX_WAIT_VND_EVT);
+ 		}
+-err_out:
+-		return;
+-err_free_skb:
+-		kfree_skb(data->evt_skb);
+-		data->evt_skb = NULL;
+ 		return;
+ 	} else if (urb->status == -ENOENT) {
+ 		/* Avoid suspend failed when usb_kill_urb */
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index f83d67eafc9f0..637c5b8c2aa1a 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -127,10 +127,9 @@ int hci_uart_tx_wakeup(struct hci_uart *hu)
+ 	if (!test_bit(HCI_UART_PROTO_READY, &hu->flags))
+ 		goto no_schedule;
+ 
+-	if (test_and_set_bit(HCI_UART_SENDING, &hu->tx_state)) {
+-		set_bit(HCI_UART_TX_WAKEUP, &hu->tx_state);
++	set_bit(HCI_UART_TX_WAKEUP, &hu->tx_state);
++	if (test_and_set_bit(HCI_UART_SENDING, &hu->tx_state))
+ 		goto no_schedule;
+-	}
+ 
+ 	BT_DBG("");
+ 
+@@ -174,10 +173,10 @@ restart:
+ 		kfree_skb(skb);
+ 	}
+ 
++	clear_bit(HCI_UART_SENDING, &hu->tx_state);
+ 	if (test_bit(HCI_UART_TX_WAKEUP, &hu->tx_state))
+ 		goto restart;
+ 
+-	clear_bit(HCI_UART_SENDING, &hu->tx_state);
+ 	wake_up_bit(&hu->tx_state, HCI_UART_SENDING);
+ }
+ 
+@@ -802,7 +801,8 @@ static int hci_uart_tty_ioctl(struct tty_struct *tty, struct file *file,
+  * We don't provide read/write/poll interface for user space.
+  */
+ static ssize_t hci_uart_tty_read(struct tty_struct *tty, struct file *file,
+-				 unsigned char __user *buf, size_t nr)
++				 unsigned char *buf, size_t nr,
++				 void **cookie, unsigned long offset)
+ {
+ 	return 0;
+ }
+@@ -819,29 +819,28 @@ static __poll_t hci_uart_tty_poll(struct tty_struct *tty,
+ 	return 0;
+ }
+ 
++static struct tty_ldisc_ops hci_uart_ldisc = {
++	.owner		= THIS_MODULE,
++	.magic		= TTY_LDISC_MAGIC,
++	.name		= "n_hci",
++	.open		= hci_uart_tty_open,
++	.close		= hci_uart_tty_close,
++	.read		= hci_uart_tty_read,
++	.write		= hci_uart_tty_write,
++	.ioctl		= hci_uart_tty_ioctl,
++	.compat_ioctl	= hci_uart_tty_ioctl,
++	.poll		= hci_uart_tty_poll,
++	.receive_buf	= hci_uart_tty_receive,
++	.write_wakeup	= hci_uart_tty_wakeup,
++};
++
+ static int __init hci_uart_init(void)
+ {
+-	static struct tty_ldisc_ops hci_uart_ldisc;
+ 	int err;
+ 
+ 	BT_INFO("HCI UART driver ver %s", VERSION);
+ 
+ 	/* Register the tty discipline */
+-
+-	memset(&hci_uart_ldisc, 0, sizeof(hci_uart_ldisc));
+-	hci_uart_ldisc.magic		= TTY_LDISC_MAGIC;
+-	hci_uart_ldisc.name		= "n_hci";
+-	hci_uart_ldisc.open		= hci_uart_tty_open;
+-	hci_uart_ldisc.close		= hci_uart_tty_close;
+-	hci_uart_ldisc.read		= hci_uart_tty_read;
+-	hci_uart_ldisc.write		= hci_uart_tty_write;
+-	hci_uart_ldisc.ioctl		= hci_uart_tty_ioctl;
+-	hci_uart_ldisc.compat_ioctl	= hci_uart_tty_ioctl;
+-	hci_uart_ldisc.poll		= hci_uart_tty_poll;
+-	hci_uart_ldisc.receive_buf	= hci_uart_tty_receive;
+-	hci_uart_ldisc.write_wakeup	= hci_uart_tty_wakeup;
+-	hci_uart_ldisc.owner		= THIS_MODULE;
+-
+ 	err = tty_register_ldisc(N_HCI, &hci_uart_ldisc);
+ 	if (err) {
+ 		BT_ERR("HCI line discipline registration failed. (%d)", err);
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 244b8feba5232..5c26c7d941731 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -1020,7 +1020,9 @@ static void qca_controller_memdump(struct work_struct *work)
+ 			dump_size = __le32_to_cpu(dump->dump_size);
+ 			if (!(dump_size)) {
+ 				bt_dev_err(hu->hdev, "Rx invalid memdump size");
++				kfree(qca_memdump);
+ 				kfree_skb(skb);
++				qca->qca_memdump = NULL;
+ 				mutex_unlock(&qca->hci_memdump_lock);
+ 				return;
+ 			}
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index ef96ad06fa54e..9e03402ef1b37 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -83,9 +83,9 @@ static void hci_uart_write_work(struct work_struct *work)
+ 			hci_uart_tx_complete(hu, hci_skb_pkt_type(skb));
+ 			kfree_skb(skb);
+ 		}
+-	} while (test_bit(HCI_UART_TX_WAKEUP, &hu->tx_state));
+ 
+-	clear_bit(HCI_UART_SENDING, &hu->tx_state);
++		clear_bit(HCI_UART_SENDING, &hu->tx_state);
++	} while (test_bit(HCI_UART_TX_WAKEUP, &hu->tx_state));
+ }
+ 
+ /* ------- Interface to HCI layer ------ */
+diff --git a/drivers/char/hw_random/ingenic-trng.c b/drivers/char/hw_random/ingenic-trng.c
+index 954a8411d67d2..0eb80f786f4dd 100644
+--- a/drivers/char/hw_random/ingenic-trng.c
++++ b/drivers/char/hw_random/ingenic-trng.c
+@@ -113,13 +113,17 @@ static int ingenic_trng_probe(struct platform_device *pdev)
+ 	ret = hwrng_register(&trng->rng);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to register hwrng\n");
+-		return ret;
++		goto err_unprepare_clk;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, trng);
+ 
+ 	dev_info(&pdev->dev, "Ingenic DTRNG driver registered\n");
+ 	return 0;
++
++err_unprepare_clk:
++	clk_disable_unprepare(trng->clk);
++	return ret;
+ }
+ 
+ static int ingenic_trng_remove(struct platform_device *pdev)
+diff --git a/drivers/char/hw_random/timeriomem-rng.c b/drivers/char/hw_random/timeriomem-rng.c
+index e262445fed5f5..f35f0f31f52ad 100644
+--- a/drivers/char/hw_random/timeriomem-rng.c
++++ b/drivers/char/hw_random/timeriomem-rng.c
+@@ -69,7 +69,7 @@ static int timeriomem_rng_read(struct hwrng *hwrng, void *data,
+ 		 */
+ 		if (retval > 0)
+ 			usleep_range(period_us,
+-					period_us + min(1, period_us / 100));
++					period_us + max(1, period_us / 100));
+ 
+ 		*(u32 *)data = readl(priv->io_base);
+ 		retval += sizeof(u32);
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 2a41b21623ae4..f462b9d2f5a52 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -1972,7 +1972,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+ 			return -EPERM;
+ 		if (crng_init < 2)
+ 			return -ENODATA;
+-		crng_reseed(&primary_crng, NULL);
++		crng_reseed(&primary_crng, &input_pool);
+ 		crng_global_init_time = jiffies - 1;
+ 		return 0;
+ 	default:
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 947d1db0a5ccf..283f78211c3a7 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -164,8 +164,6 @@ extern const struct file_operations tpmrm_fops;
+ extern struct idr dev_nums_idr;
+ 
+ ssize_t tpm_transmit(struct tpm_chip *chip, u8 *buf, size_t bufsiz);
+-ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_buf *buf,
+-			 size_t min_rsp_body_length, const char *desc);
+ int tpm_get_timeouts(struct tpm_chip *);
+ int tpm_auto_startup(struct tpm_chip *chip);
+ 
+@@ -194,8 +192,6 @@ static inline void tpm_msleep(unsigned int delay_msec)
+ int tpm_chip_start(struct tpm_chip *chip);
+ void tpm_chip_stop(struct tpm_chip *chip);
+ struct tpm_chip *tpm_find_get_ops(struct tpm_chip *chip);
+-__must_check int tpm_try_get_ops(struct tpm_chip *chip);
+-void tpm_put_ops(struct tpm_chip *chip);
+ 
+ struct tpm_chip *tpm_chip_alloc(struct device *dev,
+ 				const struct tpm_class_ops *ops);
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 92c51c6cfd1b7..431919d5f48af 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -125,7 +125,8 @@ static bool check_locality(struct tpm_chip *chip, int l)
+ 	if (rc < 0)
+ 		return false;
+ 
+-	if ((access & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) ==
++	if ((access & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID
++		       | TPM_ACCESS_REQUEST_USE)) ==
+ 	    (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) {
+ 		priv->locality = l;
+ 		return true;
+@@ -134,58 +135,13 @@ static bool check_locality(struct tpm_chip *chip, int l)
+ 	return false;
+ }
+ 
+-static bool locality_inactive(struct tpm_chip *chip, int l)
+-{
+-	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+-	int rc;
+-	u8 access;
+-
+-	rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access);
+-	if (rc < 0)
+-		return false;
+-
+-	if ((access & (TPM_ACCESS_VALID | TPM_ACCESS_ACTIVE_LOCALITY))
+-	    == TPM_ACCESS_VALID)
+-		return true;
+-
+-	return false;
+-}
+-
+ static int release_locality(struct tpm_chip *chip, int l)
+ {
+ 	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+-	unsigned long stop, timeout;
+-	long rc;
+ 
+ 	tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY);
+ 
+-	stop = jiffies + chip->timeout_a;
+-
+-	if (chip->flags & TPM_CHIP_FLAG_IRQ) {
+-again:
+-		timeout = stop - jiffies;
+-		if ((long)timeout <= 0)
+-			return -1;
+-
+-		rc = wait_event_interruptible_timeout(priv->int_queue,
+-						      (locality_inactive(chip, l)),
+-						      timeout);
+-
+-		if (rc > 0)
+-			return 0;
+-
+-		if (rc == -ERESTARTSYS && freezing(current)) {
+-			clear_thread_flag(TIF_SIGPENDING);
+-			goto again;
+-		}
+-	} else {
+-		do {
+-			if (locality_inactive(chip, l))
+-				return 0;
+-			tpm_msleep(TPM_TIMEOUT);
+-		} while (time_before(jiffies, stop));
+-	}
+-	return -1;
++	return 0;
+ }
+ 
+ static int request_locality(struct tpm_chip *chip, int l)
+diff --git a/drivers/clk/clk-ast2600.c b/drivers/clk/clk-ast2600.c
+index 177368cac6dd6..a55b37fc2c8bd 100644
+--- a/drivers/clk/clk-ast2600.c
++++ b/drivers/clk/clk-ast2600.c
+@@ -17,7 +17,8 @@
+ 
+ #define ASPEED_G6_NUM_CLKS		71
+ 
+-#define ASPEED_G6_SILICON_REV		0x004
++#define ASPEED_G6_SILICON_REV		0x014
++#define CHIP_REVISION_ID			GENMASK(23, 16)
+ 
+ #define ASPEED_G6_RESET_CTRL		0x040
+ #define ASPEED_G6_RESET_CTRL2		0x050
+@@ -190,18 +191,34 @@ static struct clk_hw *ast2600_calc_pll(const char *name, u32 val)
+ static struct clk_hw *ast2600_calc_apll(const char *name, u32 val)
+ {
+ 	unsigned int mult, div;
++	u32 chip_id = readl(scu_g6_base + ASPEED_G6_SILICON_REV);
+ 
+-	if (val & BIT(20)) {
+-		/* Pass through mode */
+-		mult = div = 1;
++	if (((chip_id & CHIP_REVISION_ID) >> 16) >= 2) {
++		if (val & BIT(24)) {
++			/* Pass through mode */
++			mult = div = 1;
++		} else {
++			/* F = 25Mhz * [(m + 1) / (n + 1)] / (p + 1) */
++			u32 m = val & 0x1fff;
++			u32 n = (val >> 13) & 0x3f;
++			u32 p = (val >> 19) & 0xf;
++
++			mult = (m + 1);
++			div = (n + 1) * (p + 1);
++		}
+ 	} else {
+-		/* F = 25Mhz * (2-od) * [(m + 2) / (n + 1)] */
+-		u32 m = (val >> 5) & 0x3f;
+-		u32 od = (val >> 4) & 0x1;
+-		u32 n = val & 0xf;
++		if (val & BIT(20)) {
++			/* Pass through mode */
++			mult = div = 1;
++		} else {
++			/* F = 25Mhz * (2-od) * [(m + 2) / (n + 1)] */
++			u32 m = (val >> 5) & 0x3f;
++			u32 od = (val >> 4) & 0x1;
++			u32 n = val & 0xf;
+ 
+-		mult = (2 - od) * (m + 2);
+-		div = n + 1;
++			mult = (2 - od) * (m + 2);
++			div = n + 1;
++		}
+ 	}
+ 	return clk_hw_register_fixed_factor(NULL, name, "clkin", 0,
+ 			mult, div);
+diff --git a/drivers/clk/clk-divider.c b/drivers/clk/clk-divider.c
+index 8de12cb0c43d8..f32157cb40138 100644
+--- a/drivers/clk/clk-divider.c
++++ b/drivers/clk/clk-divider.c
+@@ -493,8 +493,13 @@ struct clk_hw *__clk_hw_register_divider(struct device *dev,
+ 	else
+ 		init.ops = &clk_divider_ops;
+ 	init.flags = flags;
+-	init.parent_names = (parent_name ? &parent_name: NULL);
+-	init.num_parents = (parent_name ? 1 : 0);
++	init.parent_names = parent_name ? &parent_name : NULL;
++	init.parent_hws = parent_hw ? &parent_hw : NULL;
++	init.parent_data = parent_data;
++	if (parent_name || parent_hw || parent_data)
++		init.num_parents = 1;
++	else
++		init.num_parents = 0;
+ 
+ 	/* struct clk_divider assignments */
+ 	div->reg = reg;
+diff --git a/drivers/clk/meson/clk-pll.c b/drivers/clk/meson/clk-pll.c
+index b17a13e9337c4..49f27fe532139 100644
+--- a/drivers/clk/meson/clk-pll.c
++++ b/drivers/clk/meson/clk-pll.c
+@@ -365,13 +365,14 @@ static int meson_clk_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ {
+ 	struct clk_regmap *clk = to_clk_regmap(hw);
+ 	struct meson_clk_pll_data *pll = meson_clk_pll_data(clk);
+-	unsigned int enabled, m, n, frac = 0, ret;
++	unsigned int enabled, m, n, frac = 0;
+ 	unsigned long old_rate;
++	int ret;
+ 
+ 	if (parent_rate == 0 || rate == 0)
+ 		return -EINVAL;
+ 
+-	old_rate = rate;
++	old_rate = clk_hw_get_rate(hw);
+ 
+ 	ret = meson_clk_get_pll_settings(rate, parent_rate, &m, &n, pll);
+ 	if (ret)
+@@ -393,7 +394,8 @@ static int meson_clk_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	if (!enabled)
+ 		return 0;
+ 
+-	if (meson_clk_pll_enable(hw)) {
++	ret = meson_clk_pll_enable(hw);
++	if (ret) {
+ 		pr_warn("%s: pll did not lock, trying to restore old rate %lu\n",
+ 			__func__, old_rate);
+ 		/*
+@@ -405,7 +407,7 @@ static int meson_clk_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ 		meson_clk_pll_set_rate(hw, old_rate, parent_rate);
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ /*
+diff --git a/drivers/clk/qcom/gcc-msm8998.c b/drivers/clk/qcom/gcc-msm8998.c
+index 9d7016bcd6800..b8dcfe62312bb 100644
+--- a/drivers/clk/qcom/gcc-msm8998.c
++++ b/drivers/clk/qcom/gcc-msm8998.c
+@@ -135,7 +135,7 @@ static struct pll_vco fabia_vco[] = {
+ 
+ static struct clk_alpha_pll gpll0 = {
+ 	.offset = 0x0,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.vco_table = fabia_vco,
+ 	.num_vco = ARRAY_SIZE(fabia_vco),
+ 	.clkr = {
+@@ -145,58 +145,58 @@ static struct clk_alpha_pll gpll0 = {
+ 			.name = "gpll0",
+ 			.parent_names = (const char *[]){ "xo" },
+ 			.num_parents = 1,
+-			.ops = &clk_alpha_pll_ops,
++			.ops = &clk_alpha_pll_fixed_fabia_ops,
+ 		}
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll0_out_even = {
+ 	.offset = 0x0,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll0_out_even",
+ 		.parent_names = (const char *[]){ "gpll0" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll0_out_main = {
+ 	.offset = 0x0,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll0_out_main",
+ 		.parent_names = (const char *[]){ "gpll0" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll0_out_odd = {
+ 	.offset = 0x0,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll0_out_odd",
+ 		.parent_names = (const char *[]){ "gpll0" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll0_out_test = {
+ 	.offset = 0x0,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll0_out_test",
+ 		.parent_names = (const char *[]){ "gpll0" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll gpll1 = {
+ 	.offset = 0x1000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.vco_table = fabia_vco,
+ 	.num_vco = ARRAY_SIZE(fabia_vco),
+ 	.clkr = {
+@@ -206,58 +206,58 @@ static struct clk_alpha_pll gpll1 = {
+ 			.name = "gpll1",
+ 			.parent_names = (const char *[]){ "xo" },
+ 			.num_parents = 1,
+-			.ops = &clk_alpha_pll_ops,
++			.ops = &clk_alpha_pll_fixed_fabia_ops,
+ 		}
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll1_out_even = {
+ 	.offset = 0x1000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll1_out_even",
+ 		.parent_names = (const char *[]){ "gpll1" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll1_out_main = {
+ 	.offset = 0x1000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll1_out_main",
+ 		.parent_names = (const char *[]){ "gpll1" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll1_out_odd = {
+ 	.offset = 0x1000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll1_out_odd",
+ 		.parent_names = (const char *[]){ "gpll1" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll1_out_test = {
+ 	.offset = 0x1000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll1_out_test",
+ 		.parent_names = (const char *[]){ "gpll1" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll gpll2 = {
+ 	.offset = 0x2000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.vco_table = fabia_vco,
+ 	.num_vco = ARRAY_SIZE(fabia_vco),
+ 	.clkr = {
+@@ -267,58 +267,58 @@ static struct clk_alpha_pll gpll2 = {
+ 			.name = "gpll2",
+ 			.parent_names = (const char *[]){ "xo" },
+ 			.num_parents = 1,
+-			.ops = &clk_alpha_pll_ops,
++			.ops = &clk_alpha_pll_fixed_fabia_ops,
+ 		}
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll2_out_even = {
+ 	.offset = 0x2000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll2_out_even",
+ 		.parent_names = (const char *[]){ "gpll2" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll2_out_main = {
+ 	.offset = 0x2000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll2_out_main",
+ 		.parent_names = (const char *[]){ "gpll2" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll2_out_odd = {
+ 	.offset = 0x2000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll2_out_odd",
+ 		.parent_names = (const char *[]){ "gpll2" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll2_out_test = {
+ 	.offset = 0x2000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll2_out_test",
+ 		.parent_names = (const char *[]){ "gpll2" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll gpll3 = {
+ 	.offset = 0x3000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.vco_table = fabia_vco,
+ 	.num_vco = ARRAY_SIZE(fabia_vco),
+ 	.clkr = {
+@@ -328,58 +328,58 @@ static struct clk_alpha_pll gpll3 = {
+ 			.name = "gpll3",
+ 			.parent_names = (const char *[]){ "xo" },
+ 			.num_parents = 1,
+-			.ops = &clk_alpha_pll_ops,
++			.ops = &clk_alpha_pll_fixed_fabia_ops,
+ 		}
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll3_out_even = {
+ 	.offset = 0x3000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll3_out_even",
+ 		.parent_names = (const char *[]){ "gpll3" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll3_out_main = {
+ 	.offset = 0x3000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll3_out_main",
+ 		.parent_names = (const char *[]){ "gpll3" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll3_out_odd = {
+ 	.offset = 0x3000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll3_out_odd",
+ 		.parent_names = (const char *[]){ "gpll3" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll3_out_test = {
+ 	.offset = 0x3000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll3_out_test",
+ 		.parent_names = (const char *[]){ "gpll3" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll gpll4 = {
+ 	.offset = 0x77000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.vco_table = fabia_vco,
+ 	.num_vco = ARRAY_SIZE(fabia_vco),
+ 	.clkr = {
+@@ -389,52 +389,52 @@ static struct clk_alpha_pll gpll4 = {
+ 			.name = "gpll4",
+ 			.parent_names = (const char *[]){ "xo" },
+ 			.num_parents = 1,
+-			.ops = &clk_alpha_pll_ops,
++			.ops = &clk_alpha_pll_fixed_fabia_ops,
+ 		}
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll4_out_even = {
+ 	.offset = 0x77000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll4_out_even",
+ 		.parent_names = (const char *[]){ "gpll4" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll4_out_main = {
+ 	.offset = 0x77000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll4_out_main",
+ 		.parent_names = (const char *[]){ "gpll4" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll4_out_odd = {
+ 	.offset = 0x77000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll4_out_odd",
+ 		.parent_names = (const char *[]){ "gpll4" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+ static struct clk_alpha_pll_postdiv gpll4_out_test = {
+ 	.offset = 0x77000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll4_out_test",
+ 		.parent_names = (const char *[]){ "gpll4" },
+ 		.num_parents = 1,
+-		.ops = &clk_alpha_pll_postdiv_ops,
++		.ops = &clk_alpha_pll_postdiv_fabia_ops,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/renesas/r8a779a0-cpg-mssr.c b/drivers/clk/renesas/r8a779a0-cpg-mssr.c
+index 046d79416b7d0..4ee2706c9c6a0 100644
+--- a/drivers/clk/renesas/r8a779a0-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a779a0-cpg-mssr.c
+@@ -69,7 +69,6 @@ enum clk_ids {
+ 	CLK_PLL5_DIV2,
+ 	CLK_PLL5_DIV4,
+ 	CLK_S1,
+-	CLK_S2,
+ 	CLK_S3,
+ 	CLK_SDSRC,
+ 	CLK_RPCSRC,
+@@ -137,7 +136,7 @@ static const struct cpg_core_clk r8a779a0_core_clks[] __initconst = {
+ 	DEF_FIXED("icu",	R8A779A0_CLK_ICU,	CLK_PLL5_DIV4,	2, 1),
+ 	DEF_FIXED("icud2",	R8A779A0_CLK_ICUD2,	CLK_PLL5_DIV4,	4, 1),
+ 	DEF_FIXED("vcbus",	R8A779A0_CLK_VCBUS,	CLK_PLL5_DIV4,	1, 1),
+-	DEF_FIXED("cbfusa",	R8A779A0_CLK_CBFUSA,	CLK_MAIN,	2, 1),
++	DEF_FIXED("cbfusa",	R8A779A0_CLK_CBFUSA,	CLK_EXTAL,	2, 1),
+ 
+ 	DEF_DIV6P1("mso",	R8A779A0_CLK_MSO,	CLK_PLL5_DIV4,	0x87c),
+ 	DEF_DIV6P1("canfd",	R8A779A0_CLK_CANFD,	CLK_PLL5_DIV4,	0x878),
+diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-h6.c b/drivers/clk/sunxi-ng/ccu-sun50i-h6.c
+index f2497d0a4683a..bff446b782907 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun50i-h6.c
++++ b/drivers/clk/sunxi-ng/ccu-sun50i-h6.c
+@@ -237,7 +237,7 @@ static const char * const psi_ahb1_ahb2_parents[] = { "osc24M", "osc32k",
+ static SUNXI_CCU_MP_WITH_MUX(psi_ahb1_ahb2_clk, "psi-ahb1-ahb2",
+ 			     psi_ahb1_ahb2_parents,
+ 			     0x510,
+-			     0, 5,	/* M */
++			     0, 2,	/* M */
+ 			     8, 2,	/* P */
+ 			     24, 2,	/* mux */
+ 			     0);
+@@ -246,19 +246,19 @@ static const char * const ahb3_apb1_apb2_parents[] = { "osc24M", "osc32k",
+ 						       "psi-ahb1-ahb2",
+ 						       "pll-periph0" };
+ static SUNXI_CCU_MP_WITH_MUX(ahb3_clk, "ahb3", ahb3_apb1_apb2_parents, 0x51c,
+-			     0, 5,	/* M */
++			     0, 2,	/* M */
+ 			     8, 2,	/* P */
+ 			     24, 2,	/* mux */
+ 			     0);
+ 
+ static SUNXI_CCU_MP_WITH_MUX(apb1_clk, "apb1", ahb3_apb1_apb2_parents, 0x520,
+-			     0, 5,	/* M */
++			     0, 2,	/* M */
+ 			     8, 2,	/* P */
+ 			     24, 2,	/* mux */
+ 			     0);
+ 
+ static SUNXI_CCU_MP_WITH_MUX(apb2_clk, "apb2", ahb3_apb1_apb2_parents, 0x524,
+-			     0, 5,	/* M */
++			     0, 2,	/* M */
+ 			     8, 2,	/* P */
+ 			     24, 2,	/* mux */
+ 			     0);
+@@ -682,7 +682,7 @@ static struct ccu_mux hdmi_cec_clk = {
+ 
+ 	.common		= {
+ 		.reg		= 0xb10,
+-		.features	= CCU_FEATURE_VARIABLE_PREDIV,
++		.features	= CCU_FEATURE_FIXED_PREDIV,
+ 		.hw.init	= CLK_HW_INIT_PARENTS("hdmi-cec",
+ 						      hdmi_cec_parents,
+ 						      &ccu_mux_ops,
+diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
+index 2be849bb794ac..39f4d88662002 100644
+--- a/drivers/clocksource/Kconfig
++++ b/drivers/clocksource/Kconfig
+@@ -79,6 +79,7 @@ config IXP4XX_TIMER
+ 	bool "Intel XScale IXP4xx timer driver" if COMPILE_TEST
+ 	depends on HAS_IOMEM
+ 	select CLKSRC_MMIO
++	select TIMER_OF if OF
+ 	help
+ 	  Enables support for the Intel XScale IXP4xx SoC timer.
+ 
+diff --git a/drivers/clocksource/mxs_timer.c b/drivers/clocksource/mxs_timer.c
+index bc96a4cbf26c6..e52e12d27d2aa 100644
+--- a/drivers/clocksource/mxs_timer.c
++++ b/drivers/clocksource/mxs_timer.c
+@@ -131,10 +131,7 @@ static void mxs_irq_clear(char *state)
+ 
+ 	/* Clear pending interrupt */
+ 	timrot_irq_acknowledge();
+-
+-#ifdef DEBUG
+-	pr_info("%s: changing mode to %s\n", __func__, state)
+-#endif /* DEBUG */
++	pr_debug("%s: changing mode to %s\n", __func__, state);
+ }
+ 
+ static int mxs_shutdown(struct clock_event_device *evt)
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index d3e5a6fceb61b..d1bbc16fba4b4 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -54,7 +54,6 @@ struct acpi_cpufreq_data {
+ 	unsigned int resume;
+ 	unsigned int cpu_feature;
+ 	unsigned int acpi_perf_cpu;
+-	unsigned int first_perf_state;
+ 	cpumask_var_t freqdomain_cpus;
+ 	void (*cpu_freq_write)(struct acpi_pct_register *reg, u32 val);
+ 	u32 (*cpu_freq_read)(struct acpi_pct_register *reg);
+@@ -223,10 +222,10 @@ static unsigned extract_msr(struct cpufreq_policy *policy, u32 msr)
+ 
+ 	perf = to_perf_data(data);
+ 
+-	cpufreq_for_each_entry(pos, policy->freq_table + data->first_perf_state)
++	cpufreq_for_each_entry(pos, policy->freq_table)
+ 		if (msr == perf->states[pos->driver_data].status)
+ 			return pos->frequency;
+-	return policy->freq_table[data->first_perf_state].frequency;
++	return policy->freq_table[0].frequency;
+ }
+ 
+ static unsigned extract_freq(struct cpufreq_policy *policy, u32 val)
+@@ -365,7 +364,6 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
+ 	struct cpufreq_policy *policy;
+ 	unsigned int freq;
+ 	unsigned int cached_freq;
+-	unsigned int state;
+ 
+ 	pr_debug("%s (%d)\n", __func__, cpu);
+ 
+@@ -377,11 +375,7 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
+ 	if (unlikely(!data || !policy->freq_table))
+ 		return 0;
+ 
+-	state = to_perf_data(data)->state;
+-	if (state < data->first_perf_state)
+-		state = data->first_perf_state;
+-
+-	cached_freq = policy->freq_table[state].frequency;
++	cached_freq = policy->freq_table[to_perf_data(data)->state].frequency;
+ 	freq = extract_freq(policy, get_cur_val(cpumask_of(cpu), data));
+ 	if (freq != cached_freq) {
+ 		/*
+@@ -680,7 +674,6 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ 	struct cpuinfo_x86 *c = &cpu_data(cpu);
+ 	unsigned int valid_states = 0;
+ 	unsigned int result = 0;
+-	unsigned int state_count;
+ 	u64 max_boost_ratio;
+ 	unsigned int i;
+ #ifdef CONFIG_SMP
+@@ -795,28 +788,8 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ 		goto err_unreg;
+ 	}
+ 
+-	state_count = perf->state_count + 1;
+-
+-	max_boost_ratio = get_max_boost_ratio(cpu);
+-	if (max_boost_ratio) {
+-		/*
+-		 * Make a room for one more entry to represent the highest
+-		 * available "boost" frequency.
+-		 */
+-		state_count++;
+-		valid_states++;
+-		data->first_perf_state = valid_states;
+-	} else {
+-		/*
+-		 * If the maximum "boost" frequency is unknown, ask the arch
+-		 * scale-invariance code to use the "nominal" performance for
+-		 * CPU utilization scaling so as to prevent the schedutil
+-		 * governor from selecting inadequate CPU frequencies.
+-		 */
+-		arch_set_max_freq_ratio(true);
+-	}
+-
+-	freq_table = kcalloc(state_count, sizeof(*freq_table), GFP_KERNEL);
++	freq_table = kcalloc(perf->state_count + 1, sizeof(*freq_table),
++			     GFP_KERNEL);
+ 	if (!freq_table) {
+ 		result = -ENOMEM;
+ 		goto err_unreg;
+@@ -851,27 +824,25 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ 	}
+ 	freq_table[valid_states].frequency = CPUFREQ_TABLE_END;
+ 
++	max_boost_ratio = get_max_boost_ratio(cpu);
+ 	if (max_boost_ratio) {
+-		unsigned int state = data->first_perf_state;
+-		unsigned int freq = freq_table[state].frequency;
++		unsigned int freq = freq_table[0].frequency;
+ 
+ 		/*
+ 		 * Because the loop above sorts the freq_table entries in the
+ 		 * descending order, freq is the maximum frequency in the table.
+ 		 * Assume that it corresponds to the CPPC nominal frequency and
+-		 * use it to populate the frequency field of the extra "boost"
+-		 * frequency entry.
++		 * use it to set cpuinfo.max_freq.
+ 		 */
+-		freq_table[0].frequency = freq * max_boost_ratio >> SCHED_CAPACITY_SHIFT;
++		policy->cpuinfo.max_freq = freq * max_boost_ratio >> SCHED_CAPACITY_SHIFT;
++	} else {
+ 		/*
+-		 * The purpose of the extra "boost" frequency entry is to make
+-		 * the rest of cpufreq aware of the real maximum frequency, but
+-		 * the way to request it is the same as for the first_perf_state
+-		 * entry that is expected to cover the entire range of "boost"
+-		 * frequencies of the CPU, so copy the driver_data value from
+-		 * that entry.
++		 * If the maximum "boost" frequency is unknown, ask the arch
++		 * scale-invariance code to use the "nominal" performance for
++		 * CPU utilization scaling so as to prevent the schedutil
++		 * governor from selecting inadequate CPU frequencies.
+ 		 */
+-		freq_table[0].driver_data = freq_table[state].driver_data;
++		arch_set_max_freq_ratio(true);
+ 	}
+ 
+ 	policy->freq_table = freq_table;
+@@ -947,8 +918,7 @@ static void acpi_cpufreq_cpu_ready(struct cpufreq_policy *policy)
+ {
+ 	struct acpi_processor_performance *perf = per_cpu_ptr(acpi_perf_data,
+ 							      policy->cpu);
+-	struct acpi_cpufreq_data *data = policy->driver_data;
+-	unsigned int freq = policy->freq_table[data->first_perf_state].frequency;
++	unsigned int freq = policy->freq_table[0].frequency;
+ 
+ 	if (perf->states[0].core_frequency * 1000 != freq)
+ 		pr_warn(FW_WARN "P-state 0 is not max freq\n");
+diff --git a/drivers/cpufreq/brcmstb-avs-cpufreq.c b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+index 3e31e5d28b79c..4153150e20db5 100644
+--- a/drivers/cpufreq/brcmstb-avs-cpufreq.c
++++ b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+@@ -597,6 +597,16 @@ unmap_base:
+ 	return ret;
+ }
+ 
++static void brcm_avs_prepare_uninit(struct platform_device *pdev)
++{
++	struct private_data *priv;
++
++	priv = platform_get_drvdata(pdev);
++
++	iounmap(priv->avs_intr_base);
++	iounmap(priv->base);
++}
++
+ static int brcm_avs_cpufreq_init(struct cpufreq_policy *policy)
+ {
+ 	struct cpufreq_frequency_table *freq_table;
+@@ -732,21 +742,21 @@ static int brcm_avs_cpufreq_probe(struct platform_device *pdev)
+ 
+ 	brcm_avs_driver.driver_data = pdev;
+ 
+-	return cpufreq_register_driver(&brcm_avs_driver);
++	ret = cpufreq_register_driver(&brcm_avs_driver);
++	if (ret)
++		brcm_avs_prepare_uninit(pdev);
++
++	return ret;
+ }
+ 
+ static int brcm_avs_cpufreq_remove(struct platform_device *pdev)
+ {
+-	struct private_data *priv;
+ 	int ret;
+ 
+ 	ret = cpufreq_unregister_driver(&brcm_avs_driver);
+-	if (ret)
+-		return ret;
++	WARN_ON(ret);
+ 
+-	priv = platform_get_drvdata(pdev);
+-	iounmap(priv->base);
+-	iounmap(priv->avs_intr_base);
++	brcm_avs_prepare_uninit(pdev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/cpufreq/freq_table.c b/drivers/cpufreq/freq_table.c
+index f839dc9852c08..d3f756f7b5a05 100644
+--- a/drivers/cpufreq/freq_table.c
++++ b/drivers/cpufreq/freq_table.c
+@@ -52,7 +52,13 @@ int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,
+ 	}
+ 
+ 	policy->min = policy->cpuinfo.min_freq = min_freq;
+-	policy->max = policy->cpuinfo.max_freq = max_freq;
++	policy->max = max_freq;
++	/*
++	 * If the driver has set its own cpuinfo.max_freq above max_freq, leave
++	 * it as is.
++	 */
++	if (policy->cpuinfo.max_freq < max_freq)
++		policy->max = policy->cpuinfo.max_freq = max_freq;
+ 
+ 	if (policy->min == ~0)
+ 		return -EINVAL;
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index cb95da684457f..c8ae8554f4c91 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -829,13 +829,13 @@ static struct freq_attr *hwp_cpufreq_attrs[] = {
+ 	NULL,
+ };
+ 
+-static void intel_pstate_get_hwp_max(unsigned int cpu, int *phy_max,
++static void intel_pstate_get_hwp_max(struct cpudata *cpu, int *phy_max,
+ 				     int *current_max)
+ {
+ 	u64 cap;
+ 
+-	rdmsrl_on_cpu(cpu, MSR_HWP_CAPABILITIES, &cap);
+-	WRITE_ONCE(all_cpu_data[cpu]->hwp_cap_cached, cap);
++	rdmsrl_on_cpu(cpu->cpu, MSR_HWP_CAPABILITIES, &cap);
++	WRITE_ONCE(cpu->hwp_cap_cached, cap);
+ 	if (global.no_turbo || global.turbo_disabled)
+ 		*current_max = HWP_GUARANTEED_PERF(cap);
+ 	else
+@@ -1223,7 +1223,7 @@ static void update_qos_request(enum freq_qos_req_type type)
+ 			continue;
+ 
+ 		if (hwp_active)
+-			intel_pstate_get_hwp_max(i, &turbo_max, &max_state);
++			intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state);
+ 		else
+ 			turbo_max = cpu->pstate.turbo_pstate;
+ 
+@@ -1724,21 +1724,22 @@ static void intel_pstate_max_within_limits(struct cpudata *cpu)
+ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
+ {
+ 	cpu->pstate.min_pstate = pstate_funcs.get_min();
+-	cpu->pstate.max_pstate = pstate_funcs.get_max();
+ 	cpu->pstate.max_pstate_physical = pstate_funcs.get_max_physical();
+ 	cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
+ 	cpu->pstate.scaling = pstate_funcs.get_scaling();
+-	cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling;
+ 
+ 	if (hwp_active && !hwp_mode_bdw) {
+ 		unsigned int phy_max, current_max;
+ 
+-		intel_pstate_get_hwp_max(cpu->cpu, &phy_max, &current_max);
++		intel_pstate_get_hwp_max(cpu, &phy_max, &current_max);
+ 		cpu->pstate.turbo_freq = phy_max * cpu->pstate.scaling;
+ 		cpu->pstate.turbo_pstate = phy_max;
++		cpu->pstate.max_pstate = HWP_GUARANTEED_PERF(READ_ONCE(cpu->hwp_cap_cached));
+ 	} else {
+ 		cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
++		cpu->pstate.max_pstate = pstate_funcs.get_max();
+ 	}
++	cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling;
+ 
+ 	if (pstate_funcs.get_aperf_mperf_shift)
+ 		cpu->aperf_mperf_shift = pstate_funcs.get_aperf_mperf_shift();
+@@ -2217,7 +2218,7 @@ static void intel_pstate_update_perf_limits(struct cpudata *cpu,
+ 	 * rather than pure ratios.
+ 	 */
+ 	if (hwp_active) {
+-		intel_pstate_get_hwp_max(cpu->cpu, &turbo_max, &max_state);
++		intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state);
+ 	} else {
+ 		max_state = global.no_turbo || global.turbo_disabled ?
+ 			cpu->pstate.max_pstate : cpu->pstate.turbo_pstate;
+@@ -2332,7 +2333,7 @@ static void intel_pstate_verify_cpu_policy(struct cpudata *cpu,
+ 	if (hwp_active) {
+ 		int max_state, turbo_max;
+ 
+-		intel_pstate_get_hwp_max(cpu->cpu, &turbo_max, &max_state);
++		intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state);
+ 		max_freq = max_state * cpu->pstate.scaling;
+ 	} else {
+ 		max_freq = intel_pstate_get_max_freq(cpu);
+@@ -2675,7 +2676,7 @@ static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ 	if (hwp_active) {
+ 		u64 value;
+ 
+-		intel_pstate_get_hwp_max(policy->cpu, &turbo_max, &max_state);
++		intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state);
+ 		policy->transition_delay_us = INTEL_CPUFREQ_TRANSITION_DELAY_HWP;
+ 		rdmsrl_on_cpu(cpu->cpu, MSR_HWP_REQUEST, &value);
+ 		WRITE_ONCE(cpu->hwp_req_cached, value);
+diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
+index 9ed5341dc515b..2726e77c9e5a9 100644
+--- a/drivers/cpufreq/qcom-cpufreq-hw.c
++++ b/drivers/cpufreq/qcom-cpufreq-hw.c
+@@ -32,6 +32,7 @@ struct qcom_cpufreq_soc_data {
+ 
+ struct qcom_cpufreq_data {
+ 	void __iomem *base;
++	struct resource *res;
+ 	const struct qcom_cpufreq_soc_data *soc_data;
+ };
+ 
+@@ -280,6 +281,7 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
+ 	struct of_phandle_args args;
+ 	struct device_node *cpu_np;
+ 	struct device *cpu_dev;
++	struct resource *res;
+ 	void __iomem *base;
+ 	struct qcom_cpufreq_data *data;
+ 	int ret, index;
+@@ -303,18 +305,33 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
+ 
+ 	index = args.args[0];
+ 
+-	base = devm_platform_ioremap_resource(pdev, index);
+-	if (IS_ERR(base))
+-		return PTR_ERR(base);
++	res = platform_get_resource(pdev, IORESOURCE_MEM, index);
++	if (!res) {
++		dev_err(dev, "failed to get mem resource %d\n", index);
++		return -ENODEV;
++	}
++
++	if (!request_mem_region(res->start, resource_size(res), res->name)) {
++		dev_err(dev, "failed to request resource %pR\n", res);
++		return -EBUSY;
++	}
+ 
+-	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
++	base = ioremap(res->start, resource_size(res));
++	if (IS_ERR(base)) {
++		dev_err(dev, "failed to map resource %pR\n", res);
++		ret = PTR_ERR(base);
++		goto release_region;
++	}
++
++	data = kzalloc(sizeof(*data), GFP_KERNEL);
+ 	if (!data) {
+ 		ret = -ENOMEM;
+-		goto error;
++		goto unmap_base;
+ 	}
+ 
+ 	data->soc_data = of_device_get_match_data(&pdev->dev);
+ 	data->base = base;
++	data->res = res;
+ 
+ 	/* HW should be in enabled state to proceed */
+ 	if (!(readl_relaxed(base + data->soc_data->reg_enable) & 0x1)) {
+@@ -349,7 +366,11 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
+ 
+ 	return 0;
+ error:
+-	devm_iounmap(dev, base);
++	kfree(data);
++unmap_base:
++	iounmap(data->base);
++release_region:
++	release_mem_region(res->start, resource_size(res));
+ 	return ret;
+ }
+ 
+@@ -357,12 +378,15 @@ static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
+ {
+ 	struct device *cpu_dev = get_cpu_device(policy->cpu);
+ 	struct qcom_cpufreq_data *data = policy->driver_data;
+-	struct platform_device *pdev = cpufreq_get_driver_data();
++	struct resource *res = data->res;
++	void __iomem *base = data->base;
+ 
+ 	dev_pm_opp_remove_all_dynamic(cpu_dev);
+ 	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
+ 	kfree(policy->freq_table);
+-	devm_iounmap(&pdev->dev, data->base);
++	kfree(data);
++	iounmap(base);
++	release_mem_region(res->start, resource_size(res));
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c
+index b72de8939497b..ffa628c89e21f 100644
+--- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c
+@@ -20,6 +20,7 @@ static int noinline_for_stack sun4i_ss_opti_poll(struct skcipher_request *areq)
+ 	unsigned int ivsize = crypto_skcipher_ivsize(tfm);
+ 	struct sun4i_cipher_req_ctx *ctx = skcipher_request_ctx(areq);
+ 	u32 mode = ctx->mode;
++	void *backup_iv = NULL;
+ 	/* when activating SS, the default FIFO space is SS_RX_DEFAULT(32) */
+ 	u32 rx_cnt = SS_RX_DEFAULT;
+ 	u32 tx_cnt = 0;
+@@ -30,6 +31,8 @@ static int noinline_for_stack sun4i_ss_opti_poll(struct skcipher_request *areq)
+ 	unsigned int ileft = areq->cryptlen;
+ 	unsigned int oleft = areq->cryptlen;
+ 	unsigned int todo;
++	unsigned long pi = 0, po = 0; /* progress for in and out */
++	bool miter_err;
+ 	struct sg_mapping_iter mi, mo;
+ 	unsigned int oi, oo; /* offset for in and out */
+ 	unsigned long flags;
+@@ -42,52 +45,71 @@ static int noinline_for_stack sun4i_ss_opti_poll(struct skcipher_request *areq)
+ 		return -EINVAL;
+ 	}
+ 
++	if (areq->iv && ivsize > 0 && mode & SS_DECRYPTION) {
++		backup_iv = kzalloc(ivsize, GFP_KERNEL);
++		if (!backup_iv)
++			return -ENOMEM;
++		scatterwalk_map_and_copy(backup_iv, areq->src, areq->cryptlen - ivsize, ivsize, 0);
++	}
++
+ 	spin_lock_irqsave(&ss->slock, flags);
+ 
+-	for (i = 0; i < op->keylen; i += 4)
+-		writel(*(op->key + i / 4), ss->base + SS_KEY0 + i);
++	for (i = 0; i < op->keylen / 4; i++)
++		writesl(ss->base + SS_KEY0 + i * 4, &op->key[i], 1);
+ 
+ 	if (areq->iv) {
+ 		for (i = 0; i < 4 && i < ivsize / 4; i++) {
+ 			v = *(u32 *)(areq->iv + i * 4);
+-			writel(v, ss->base + SS_IV0 + i * 4);
++			writesl(ss->base + SS_IV0 + i * 4, &v, 1);
+ 		}
+ 	}
+ 	writel(mode, ss->base + SS_CTL);
+ 
+-	sg_miter_start(&mi, areq->src, sg_nents(areq->src),
+-		       SG_MITER_FROM_SG | SG_MITER_ATOMIC);
+-	sg_miter_start(&mo, areq->dst, sg_nents(areq->dst),
+-		       SG_MITER_TO_SG | SG_MITER_ATOMIC);
+-	sg_miter_next(&mi);
+-	sg_miter_next(&mo);
+-	if (!mi.addr || !mo.addr) {
+-		dev_err_ratelimited(ss->dev, "ERROR: sg_miter return null\n");
+-		err = -EINVAL;
+-		goto release_ss;
+-	}
+ 
+ 	ileft = areq->cryptlen / 4;
+ 	oleft = areq->cryptlen / 4;
+ 	oi = 0;
+ 	oo = 0;
+ 	do {
+-		todo = min(rx_cnt, ileft);
+-		todo = min_t(size_t, todo, (mi.length - oi) / 4);
+-		if (todo) {
+-			ileft -= todo;
+-			writesl(ss->base + SS_RXFIFO, mi.addr + oi, todo);
+-			oi += todo * 4;
+-		}
+-		if (oi == mi.length) {
+-			sg_miter_next(&mi);
+-			oi = 0;
++		if (ileft) {
++			sg_miter_start(&mi, areq->src, sg_nents(areq->src),
++					SG_MITER_FROM_SG | SG_MITER_ATOMIC);
++			if (pi)
++				sg_miter_skip(&mi, pi);
++			miter_err = sg_miter_next(&mi);
++			if (!miter_err || !mi.addr) {
++				dev_err_ratelimited(ss->dev, "ERROR: sg_miter return null\n");
++				err = -EINVAL;
++				goto release_ss;
++			}
++			todo = min(rx_cnt, ileft);
++			todo = min_t(size_t, todo, (mi.length - oi) / 4);
++			if (todo) {
++				ileft -= todo;
++				writesl(ss->base + SS_RXFIFO, mi.addr + oi, todo);
++				oi += todo * 4;
++			}
++			if (oi == mi.length) {
++				pi += mi.length;
++				oi = 0;
++			}
++			sg_miter_stop(&mi);
+ 		}
+ 
+ 		spaces = readl(ss->base + SS_FCSR);
+ 		rx_cnt = SS_RXFIFO_SPACES(spaces);
+ 		tx_cnt = SS_TXFIFO_SPACES(spaces);
+ 
++		sg_miter_start(&mo, areq->dst, sg_nents(areq->dst),
++			       SG_MITER_TO_SG | SG_MITER_ATOMIC);
++		if (po)
++			sg_miter_skip(&mo, po);
++		miter_err = sg_miter_next(&mo);
++		if (!miter_err || !mo.addr) {
++			dev_err_ratelimited(ss->dev, "ERROR: sg_miter return null\n");
++			err = -EINVAL;
++			goto release_ss;
++		}
+ 		todo = min(tx_cnt, oleft);
+ 		todo = min_t(size_t, todo, (mo.length - oo) / 4);
+ 		if (todo) {
+@@ -96,21 +118,23 @@ static int noinline_for_stack sun4i_ss_opti_poll(struct skcipher_request *areq)
+ 			oo += todo * 4;
+ 		}
+ 		if (oo == mo.length) {
+-			sg_miter_next(&mo);
+ 			oo = 0;
++			po += mo.length;
+ 		}
++		sg_miter_stop(&mo);
+ 	} while (oleft);
+ 
+ 	if (areq->iv) {
+-		for (i = 0; i < 4 && i < ivsize / 4; i++) {
+-			v = readl(ss->base + SS_IV0 + i * 4);
+-			*(u32 *)(areq->iv + i * 4) = v;
++		if (mode & SS_DECRYPTION) {
++			memcpy(areq->iv, backup_iv, ivsize);
++			kfree_sensitive(backup_iv);
++		} else {
++			scatterwalk_map_and_copy(areq->iv, areq->dst, areq->cryptlen - ivsize,
++						 ivsize, 0);
+ 		}
+ 	}
+ 
+ release_ss:
+-	sg_miter_stop(&mi);
+-	sg_miter_stop(&mo);
+ 	writel(0, ss->base + SS_CTL);
+ 	spin_unlock_irqrestore(&ss->slock, flags);
+ 	return err;
+@@ -161,13 +185,16 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq)
+ 	unsigned int ileft = areq->cryptlen;
+ 	unsigned int oleft = areq->cryptlen;
+ 	unsigned int todo;
++	void *backup_iv = NULL;
+ 	struct sg_mapping_iter mi, mo;
++	unsigned long pi = 0, po = 0; /* progress for in and out */
++	bool miter_err;
+ 	unsigned int oi, oo;	/* offset for in and out */
+ 	unsigned int ob = 0;	/* offset in buf */
+ 	unsigned int obo = 0;	/* offset in bufo*/
+ 	unsigned int obl = 0;	/* length of data in bufo */
+ 	unsigned long flags;
+-	bool need_fallback;
++	bool need_fallback = false;
+ 
+ 	if (!areq->cryptlen)
+ 		return 0;
+@@ -186,12 +213,12 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq)
+ 	 * we can use the SS optimized function
+ 	 */
+ 	while (in_sg && no_chunk == 1) {
+-		if (in_sg->length % 4)
++		if ((in_sg->length | in_sg->offset) & 3u)
+ 			no_chunk = 0;
+ 		in_sg = sg_next(in_sg);
+ 	}
+ 	while (out_sg && no_chunk == 1) {
+-		if (out_sg->length % 4)
++		if ((out_sg->length | out_sg->offset) & 3u)
+ 			no_chunk = 0;
+ 		out_sg = sg_next(out_sg);
+ 	}
+@@ -202,30 +229,26 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq)
+ 	if (need_fallback)
+ 		return sun4i_ss_cipher_poll_fallback(areq);
+ 
++	if (areq->iv && ivsize > 0 && mode & SS_DECRYPTION) {
++		backup_iv = kzalloc(ivsize, GFP_KERNEL);
++		if (!backup_iv)
++			return -ENOMEM;
++		scatterwalk_map_and_copy(backup_iv, areq->src, areq->cryptlen - ivsize, ivsize, 0);
++	}
++
+ 	spin_lock_irqsave(&ss->slock, flags);
+ 
+-	for (i = 0; i < op->keylen; i += 4)
+-		writel(*(op->key + i / 4), ss->base + SS_KEY0 + i);
++	for (i = 0; i < op->keylen / 4; i++)
++		writesl(ss->base + SS_KEY0 + i * 4, &op->key[i], 1);
+ 
+ 	if (areq->iv) {
+ 		for (i = 0; i < 4 && i < ivsize / 4; i++) {
+ 			v = *(u32 *)(areq->iv + i * 4);
+-			writel(v, ss->base + SS_IV0 + i * 4);
++			writesl(ss->base + SS_IV0 + i * 4, &v, 1);
+ 		}
+ 	}
+ 	writel(mode, ss->base + SS_CTL);
+ 
+-	sg_miter_start(&mi, areq->src, sg_nents(areq->src),
+-		       SG_MITER_FROM_SG | SG_MITER_ATOMIC);
+-	sg_miter_start(&mo, areq->dst, sg_nents(areq->dst),
+-		       SG_MITER_TO_SG | SG_MITER_ATOMIC);
+-	sg_miter_next(&mi);
+-	sg_miter_next(&mo);
+-	if (!mi.addr || !mo.addr) {
+-		dev_err_ratelimited(ss->dev, "ERROR: sg_miter return null\n");
+-		err = -EINVAL;
+-		goto release_ss;
+-	}
+ 	ileft = areq->cryptlen;
+ 	oleft = areq->cryptlen;
+ 	oi = 0;
+@@ -233,8 +256,16 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq)
+ 
+ 	while (oleft) {
+ 		if (ileft) {
+-			char buf[4 * SS_RX_MAX];/* buffer for linearize SG src */
+-
++			sg_miter_start(&mi, areq->src, sg_nents(areq->src),
++				       SG_MITER_FROM_SG | SG_MITER_ATOMIC);
++			if (pi)
++				sg_miter_skip(&mi, pi);
++			miter_err = sg_miter_next(&mi);
++			if (!miter_err || !mi.addr) {
++				dev_err_ratelimited(ss->dev, "ERROR: sg_miter return null\n");
++				err = -EINVAL;
++				goto release_ss;
++			}
+ 			/*
+ 			 * todo is the number of consecutive 4byte word that we
+ 			 * can read from current SG
+@@ -256,52 +287,57 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq)
+ 				 */
+ 				todo = min(rx_cnt * 4 - ob, ileft);
+ 				todo = min_t(size_t, todo, mi.length - oi);
+-				memcpy(buf + ob, mi.addr + oi, todo);
++				memcpy(ss->buf + ob, mi.addr + oi, todo);
+ 				ileft -= todo;
+ 				oi += todo;
+ 				ob += todo;
+ 				if (!(ob % 4)) {
+-					writesl(ss->base + SS_RXFIFO, buf,
++					writesl(ss->base + SS_RXFIFO, ss->buf,
+ 						ob / 4);
+ 					ob = 0;
+ 				}
+ 			}
+ 			if (oi == mi.length) {
+-				sg_miter_next(&mi);
++				pi += mi.length;
+ 				oi = 0;
+ 			}
++			sg_miter_stop(&mi);
+ 		}
+ 
+ 		spaces = readl(ss->base + SS_FCSR);
+ 		rx_cnt = SS_RXFIFO_SPACES(spaces);
+ 		tx_cnt = SS_TXFIFO_SPACES(spaces);
+-		dev_dbg(ss->dev,
+-			"%x %u/%zu %u/%u cnt=%u %u/%zu %u/%u cnt=%u %u\n",
+-			mode,
+-			oi, mi.length, ileft, areq->cryptlen, rx_cnt,
+-			oo, mo.length, oleft, areq->cryptlen, tx_cnt, ob);
+ 
+ 		if (!tx_cnt)
+ 			continue;
++		sg_miter_start(&mo, areq->dst, sg_nents(areq->dst),
++			       SG_MITER_TO_SG | SG_MITER_ATOMIC);
++		if (po)
++			sg_miter_skip(&mo, po);
++		miter_err = sg_miter_next(&mo);
++		if (!miter_err || !mo.addr) {
++			dev_err_ratelimited(ss->dev, "ERROR: sg_miter return null\n");
++			err = -EINVAL;
++			goto release_ss;
++		}
+ 		/* todo in 4bytes word */
+ 		todo = min(tx_cnt, oleft / 4);
+ 		todo = min_t(size_t, todo, (mo.length - oo) / 4);
++
+ 		if (todo) {
+ 			readsl(ss->base + SS_TXFIFO, mo.addr + oo, todo);
+ 			oleft -= todo * 4;
+ 			oo += todo * 4;
+ 			if (oo == mo.length) {
+-				sg_miter_next(&mo);
++				po += mo.length;
+ 				oo = 0;
+ 			}
+ 		} else {
+-			char bufo[4 * SS_TX_MAX]; /* buffer for linearize SG dst */
+-
+ 			/*
+ 			 * read obl bytes in bufo, we read at maximum for
+ 			 * emptying the device
+ 			 */
+-			readsl(ss->base + SS_TXFIFO, bufo, tx_cnt);
++			readsl(ss->base + SS_TXFIFO, ss->bufo, tx_cnt);
+ 			obl = tx_cnt * 4;
+ 			obo = 0;
+ 			do {
+@@ -313,28 +349,31 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq)
+ 				 */
+ 				todo = min_t(size_t,
+ 					     mo.length - oo, obl - obo);
+-				memcpy(mo.addr + oo, bufo + obo, todo);
++				memcpy(mo.addr + oo, ss->bufo + obo, todo);
+ 				oleft -= todo;
+ 				obo += todo;
+ 				oo += todo;
+ 				if (oo == mo.length) {
++					po += mo.length;
+ 					sg_miter_next(&mo);
+ 					oo = 0;
+ 				}
+ 			} while (obo < obl);
+ 			/* bufo must be fully used here */
+ 		}
++		sg_miter_stop(&mo);
+ 	}
+ 	if (areq->iv) {
+-		for (i = 0; i < 4 && i < ivsize / 4; i++) {
+-			v = readl(ss->base + SS_IV0 + i * 4);
+-			*(u32 *)(areq->iv + i * 4) = v;
++		if (mode & SS_DECRYPTION) {
++			memcpy(areq->iv, backup_iv, ivsize);
++			kfree_sensitive(backup_iv);
++		} else {
++			scatterwalk_map_and_copy(areq->iv, areq->dst, areq->cryptlen - ivsize,
++						 ivsize, 0);
+ 		}
+ 	}
+ 
+ release_ss:
+-	sg_miter_stop(&mi);
+-	sg_miter_stop(&mo);
+ 	writel(0, ss->base + SS_CTL);
+ 	spin_unlock_irqrestore(&ss->slock, flags);
+ 
+diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h
+index 163962f9e2845..02105b39fbfec 100644
+--- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h
++++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h
+@@ -148,6 +148,8 @@ struct sun4i_ss_ctx {
+ 	struct reset_control *reset;
+ 	struct device *dev;
+ 	struct resource *res;
++	char buf[4 * SS_RX_MAX];/* buffer for linearize SG src */
++	char bufo[4 * SS_TX_MAX]; /* buffer for linearize SG dst */
+ 	spinlock_t slock; /* control the use of the device */
+ #ifdef CONFIG_CRYPTO_DEV_SUN4I_SS_PRNG
+ 	u32 seed[SS_SEED_LEN / BITS_PER_LONG];
+diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c
+index 50d169e61b41d..1cb310a133b3f 100644
+--- a/drivers/crypto/bcm/cipher.c
++++ b/drivers/crypto/bcm/cipher.c
+@@ -41,7 +41,7 @@
+ 
+ /* ================= Device Structure ================== */
+ 
+-struct device_private iproc_priv;
++struct bcm_device_private iproc_priv;
+ 
+ /* ==================== Parameters ===================== */
+ 
+diff --git a/drivers/crypto/bcm/cipher.h b/drivers/crypto/bcm/cipher.h
+index 035c8389cb3dd..892823ef4a019 100644
+--- a/drivers/crypto/bcm/cipher.h
++++ b/drivers/crypto/bcm/cipher.h
+@@ -419,7 +419,7 @@ struct spu_hw {
+ 	u32 num_chan;
+ };
+ 
+-struct device_private {
++struct bcm_device_private {
+ 	struct platform_device *pdev;
+ 
+ 	struct spu_hw spu;
+@@ -466,6 +466,6 @@ struct device_private {
+ 	struct mbox_chan **mbox;
+ };
+ 
+-extern struct device_private iproc_priv;
++extern struct bcm_device_private iproc_priv;
+ 
+ #endif
+diff --git a/drivers/crypto/bcm/util.c b/drivers/crypto/bcm/util.c
+index 2b304fc780595..77aeedb840555 100644
+--- a/drivers/crypto/bcm/util.c
++++ b/drivers/crypto/bcm/util.c
+@@ -348,7 +348,7 @@ char *spu_alg_name(enum spu_cipher_alg alg, enum spu_cipher_mode mode)
+ static ssize_t spu_debugfs_read(struct file *filp, char __user *ubuf,
+ 				size_t count, loff_t *offp)
+ {
+-	struct device_private *ipriv;
++	struct bcm_device_private *ipriv;
+ 	char *buf;
+ 	ssize_t ret, out_offset, out_count;
+ 	int i;
+diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
+index a713a35dc5022..ae86557291c3f 100644
+--- a/drivers/crypto/talitos.c
++++ b/drivers/crypto/talitos.c
+@@ -1092,11 +1092,12 @@ static void ipsec_esp_decrypt_hwauth_done(struct device *dev,
+  */
+ static int sg_to_link_tbl_offset(struct scatterlist *sg, int sg_count,
+ 				 unsigned int offset, int datalen, int elen,
+-				 struct talitos_ptr *link_tbl_ptr)
++				 struct talitos_ptr *link_tbl_ptr, int align)
+ {
+ 	int n_sg = elen ? sg_count + 1 : sg_count;
+ 	int count = 0;
+ 	int cryptlen = datalen + elen;
++	int padding = ALIGN(cryptlen, align) - cryptlen;
+ 
+ 	while (cryptlen && sg && n_sg--) {
+ 		unsigned int len = sg_dma_len(sg);
+@@ -1120,7 +1121,7 @@ static int sg_to_link_tbl_offset(struct scatterlist *sg, int sg_count,
+ 			offset += datalen;
+ 		}
+ 		to_talitos_ptr(link_tbl_ptr + count,
+-			       sg_dma_address(sg) + offset, len, 0);
++			       sg_dma_address(sg) + offset, sg_next(sg) ? len : len + padding, 0);
+ 		to_talitos_ptr_ext_set(link_tbl_ptr + count, 0, 0);
+ 		count++;
+ 		cryptlen -= len;
+@@ -1143,10 +1144,11 @@ static int talitos_sg_map_ext(struct device *dev, struct scatterlist *src,
+ 			      unsigned int len, struct talitos_edesc *edesc,
+ 			      struct talitos_ptr *ptr, int sg_count,
+ 			      unsigned int offset, int tbl_off, int elen,
+-			      bool force)
++			      bool force, int align)
+ {
+ 	struct talitos_private *priv = dev_get_drvdata(dev);
+ 	bool is_sec1 = has_ftr_sec1(priv);
++	int aligned_len = ALIGN(len, align);
+ 
+ 	if (!src) {
+ 		to_talitos_ptr(ptr, 0, 0, is_sec1);
+@@ -1154,22 +1156,22 @@ static int talitos_sg_map_ext(struct device *dev, struct scatterlist *src,
+ 	}
+ 	to_talitos_ptr_ext_set(ptr, elen, is_sec1);
+ 	if (sg_count == 1 && !force) {
+-		to_talitos_ptr(ptr, sg_dma_address(src) + offset, len, is_sec1);
++		to_talitos_ptr(ptr, sg_dma_address(src) + offset, aligned_len, is_sec1);
+ 		return sg_count;
+ 	}
+ 	if (is_sec1) {
+-		to_talitos_ptr(ptr, edesc->dma_link_tbl + offset, len, is_sec1);
++		to_talitos_ptr(ptr, edesc->dma_link_tbl + offset, aligned_len, is_sec1);
+ 		return sg_count;
+ 	}
+ 	sg_count = sg_to_link_tbl_offset(src, sg_count, offset, len, elen,
+-					 &edesc->link_tbl[tbl_off]);
++					 &edesc->link_tbl[tbl_off], align);
+ 	if (sg_count == 1 && !force) {
+ 		/* Only one segment now, so no link tbl needed*/
+ 		copy_talitos_ptr(ptr, &edesc->link_tbl[tbl_off], is_sec1);
+ 		return sg_count;
+ 	}
+ 	to_talitos_ptr(ptr, edesc->dma_link_tbl +
+-			    tbl_off * sizeof(struct talitos_ptr), len, is_sec1);
++			    tbl_off * sizeof(struct talitos_ptr), aligned_len, is_sec1);
+ 	to_talitos_ptr_ext_or(ptr, DESC_PTR_LNKTBL_JUMP, is_sec1);
+ 
+ 	return sg_count;
+@@ -1181,7 +1183,7 @@ static int talitos_sg_map(struct device *dev, struct scatterlist *src,
+ 			  unsigned int offset, int tbl_off)
+ {
+ 	return talitos_sg_map_ext(dev, src, len, edesc, ptr, sg_count, offset,
+-				  tbl_off, 0, false);
++				  tbl_off, 0, false, 1);
+ }
+ 
+ /*
+@@ -1250,7 +1252,7 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
+ 
+ 	ret = talitos_sg_map_ext(dev, areq->src, cryptlen, edesc, &desc->ptr[4],
+ 				 sg_count, areq->assoclen, tbl_off, elen,
+-				 false);
++				 false, 1);
+ 
+ 	if (ret > 1) {
+ 		tbl_off += ret;
+@@ -1270,7 +1272,7 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
+ 		elen = 0;
+ 	ret = talitos_sg_map_ext(dev, areq->dst, cryptlen, edesc, &desc->ptr[5],
+ 				 sg_count, areq->assoclen, tbl_off, elen,
+-				 is_ipsec_esp && !encrypt);
++				 is_ipsec_esp && !encrypt, 1);
+ 	tbl_off += ret;
+ 
+ 	if (!encrypt && is_ipsec_esp) {
+@@ -1576,6 +1578,8 @@ static int common_nonsnoop(struct talitos_edesc *edesc,
+ 	bool sync_needed = false;
+ 	struct talitos_private *priv = dev_get_drvdata(dev);
+ 	bool is_sec1 = has_ftr_sec1(priv);
++	bool is_ctr = (desc->hdr & DESC_HDR_SEL0_MASK) == DESC_HDR_SEL0_AESU &&
++		      (desc->hdr & DESC_HDR_MODE0_AESU_MASK) == DESC_HDR_MODE0_AESU_CTR;
+ 
+ 	/* first DWORD empty */
+ 
+@@ -1596,8 +1600,8 @@ static int common_nonsnoop(struct talitos_edesc *edesc,
+ 	/*
+ 	 * cipher in
+ 	 */
+-	sg_count = talitos_sg_map(dev, areq->src, cryptlen, edesc,
+-				  &desc->ptr[3], sg_count, 0, 0);
++	sg_count = talitos_sg_map_ext(dev, areq->src, cryptlen, edesc, &desc->ptr[3],
++				      sg_count, 0, 0, 0, false, is_ctr ? 16 : 1);
+ 	if (sg_count > 1)
+ 		sync_needed = true;
+ 
+@@ -2760,6 +2764,22 @@ static struct talitos_alg_template driver_algs[] = {
+ 				     DESC_HDR_SEL0_AESU |
+ 				     DESC_HDR_MODE0_AESU_CTR,
+ 	},
++	{	.type = CRYPTO_ALG_TYPE_SKCIPHER,
++		.alg.skcipher = {
++			.base.cra_name = "ctr(aes)",
++			.base.cra_driver_name = "ctr-aes-talitos",
++			.base.cra_blocksize = 1,
++			.base.cra_flags = CRYPTO_ALG_ASYNC |
++					  CRYPTO_ALG_ALLOCATES_MEMORY,
++			.min_keysize = AES_MIN_KEY_SIZE,
++			.max_keysize = AES_MAX_KEY_SIZE,
++			.ivsize = AES_BLOCK_SIZE,
++			.setkey = skcipher_aes_setkey,
++		},
++		.desc_hdr_template = DESC_HDR_TYPE_COMMON_NONSNOOP_NO_AFEU |
++				     DESC_HDR_SEL0_AESU |
++				     DESC_HDR_MODE0_AESU_CTR,
++	},
+ 	{	.type = CRYPTO_ALG_TYPE_SKCIPHER,
+ 		.alg.skcipher = {
+ 			.base.cra_name = "ecb(des)",
+@@ -3177,6 +3197,12 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev,
+ 			t_alg->algt.alg.skcipher.setkey ?: skcipher_setkey;
+ 		t_alg->algt.alg.skcipher.encrypt = skcipher_encrypt;
+ 		t_alg->algt.alg.skcipher.decrypt = skcipher_decrypt;
++		if (!strcmp(alg->cra_name, "ctr(aes)") && !has_ftr_sec1(priv) &&
++		    DESC_TYPE(t_alg->algt.desc_hdr_template) !=
++		    DESC_TYPE(DESC_HDR_TYPE_AESU_CTR_NONSNOOP)) {
++			devm_kfree(dev, t_alg);
++			return ERR_PTR(-ENOTSUPP);
++		}
+ 		break;
+ 	case CRYPTO_ALG_TYPE_AEAD:
+ 		alg = &t_alg->algt.alg.aead.base;
+diff --git a/drivers/crypto/talitos.h b/drivers/crypto/talitos.h
+index 1469b956948ab..32825119e8805 100644
+--- a/drivers/crypto/talitos.h
++++ b/drivers/crypto/talitos.h
+@@ -344,6 +344,7 @@ static inline bool has_ftr_sec1(struct talitos_private *priv)
+ 
+ /* primary execution unit mode (MODE0) and derivatives */
+ #define	DESC_HDR_MODE0_ENCRYPT		cpu_to_be32(0x00100000)
++#define	DESC_HDR_MODE0_AESU_MASK	cpu_to_be32(0x00600000)
+ #define	DESC_HDR_MODE0_AESU_CBC		cpu_to_be32(0x00200000)
+ #define	DESC_HDR_MODE0_AESU_CTR		cpu_to_be32(0x00600000)
+ #define	DESC_HDR_MODE0_DEU_CBC		cpu_to_be32(0x00400000)
+diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
+index de7b74505e75e..c1d379bd7af33 100644
+--- a/drivers/dax/bus.c
++++ b/drivers/dax/bus.c
+@@ -1046,7 +1046,7 @@ static ssize_t range_parse(const char *opt, size_t len, struct range *range)
+ {
+ 	unsigned long long addr = 0;
+ 	char *start, *end, *str;
+-	ssize_t rc = EINVAL;
++	ssize_t rc = -EINVAL;
+ 
+ 	str = kstrdup(opt, GFP_KERNEL);
+ 	if (!str)
+diff --git a/drivers/dma/fsldma.c b/drivers/dma/fsldma.c
+index 0feb323bae1e3..f8459cc5315df 100644
+--- a/drivers/dma/fsldma.c
++++ b/drivers/dma/fsldma.c
+@@ -1214,6 +1214,7 @@ static int fsldma_of_probe(struct platform_device *op)
+ {
+ 	struct fsldma_device *fdev;
+ 	struct device_node *child;
++	unsigned int i;
+ 	int err;
+ 
+ 	fdev = kzalloc(sizeof(*fdev), GFP_KERNEL);
+@@ -1292,6 +1293,10 @@ static int fsldma_of_probe(struct platform_device *op)
+ 	return 0;
+ 
+ out_free_fdev:
++	for (i = 0; i < FSL_DMA_MAX_CHANS_PER_DEVICE; i++) {
++		if (fdev->chan[i])
++			fsl_dma_chan_remove(fdev->chan[i]);
++	}
+ 	irq_dispose_mapping(fdev->irq);
+ 	iounmap(fdev->regs);
+ out_free:
+@@ -1314,6 +1319,7 @@ static int fsldma_of_remove(struct platform_device *op)
+ 		if (fdev->chan[i])
+ 			fsl_dma_chan_remove(fdev->chan[i]);
+ 	}
++	irq_dispose_mapping(fdev->irq);
+ 
+ 	iounmap(fdev->regs);
+ 	kfree(fdev);
+diff --git a/drivers/dma/hsu/pci.c b/drivers/dma/hsu/pci.c
+index 07cc7320a614f..9045a6f7f5893 100644
+--- a/drivers/dma/hsu/pci.c
++++ b/drivers/dma/hsu/pci.c
+@@ -26,22 +26,12 @@
+ static irqreturn_t hsu_pci_irq(int irq, void *dev)
+ {
+ 	struct hsu_dma_chip *chip = dev;
+-	struct pci_dev *pdev = to_pci_dev(chip->dev);
+ 	u32 dmaisr;
+ 	u32 status;
+ 	unsigned short i;
+ 	int ret = 0;
+ 	int err;
+ 
+-	/*
+-	 * On Intel Tangier B0 and Anniedale the interrupt line, disregarding
+-	 * to have different numbers, is shared between HSU DMA and UART IPs.
+-	 * Thus on such SoCs we are expecting that IRQ handler is called in
+-	 * UART driver only.
+-	 */
+-	if (pdev->device == PCI_DEVICE_ID_INTEL_MRFLD_HSU_DMA)
+-		return IRQ_HANDLED;
+-
+ 	dmaisr = readl(chip->regs + HSU_PCI_DMAISR);
+ 	for (i = 0; i < chip->hsu->nr_channels; i++) {
+ 		if (dmaisr & 0x1) {
+@@ -105,6 +95,17 @@ static int hsu_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	if (ret)
+ 		goto err_register_irq;
+ 
++	/*
++	 * On Intel Tangier B0 and Anniedale the interrupt line, disregarding
++	 * to have different numbers, is shared between HSU DMA and UART IPs.
++	 * Thus on such SoCs we are expecting that IRQ handler is called in
++	 * UART driver only. Instead of handling the spurious interrupt
++	 * from HSU DMA here and waste CPU time and delay HSU UART interrupt
++	 * handling, disable the interrupt entirely.
++	 */
++	if (pdev->device == PCI_DEVICE_ID_INTEL_MRFLD_HSU_DMA)
++		disable_irq_nosync(chip->irq);
++
+ 	pci_set_drvdata(pdev, chip);
+ 
+ 	return 0;
+diff --git a/drivers/dma/idxd/dma.c b/drivers/dma/idxd/dma.c
+index 8b14ba0bae1cd..ec177a535d6dd 100644
+--- a/drivers/dma/idxd/dma.c
++++ b/drivers/dma/idxd/dma.c
+@@ -174,6 +174,7 @@ int idxd_register_dma_device(struct idxd_device *idxd)
+ 	INIT_LIST_HEAD(&dma->channels);
+ 	dma->dev = &idxd->pdev->dev;
+ 
++	dma_cap_set(DMA_PRIVATE, dma->cap_mask);
+ 	dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask);
+ 	dma->device_release = idxd_dma_release;
+ 
+diff --git a/drivers/dma/owl-dma.c b/drivers/dma/owl-dma.c
+index 9fede32641e9e..04202d75f4eed 100644
+--- a/drivers/dma/owl-dma.c
++++ b/drivers/dma/owl-dma.c
+@@ -1245,6 +1245,7 @@ static int owl_dma_remove(struct platform_device *pdev)
+ 	owl_dma_free(od);
+ 
+ 	clk_disable_unprepare(od->clk);
++	dma_pool_destroy(od->lli_pool);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index 3dfd8b6a0ebf7..6b2ce3f28f7b9 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -847,8 +847,6 @@ static int scmi_remove(struct platform_device *pdev)
+ 	struct scmi_info *info = platform_get_drvdata(pdev);
+ 	struct idr *idr = &info->tx_idr;
+ 
+-	scmi_notification_exit(&info->handle);
+-
+ 	mutex_lock(&scmi_list_mutex);
+ 	if (info->users)
+ 		ret = -EBUSY;
+@@ -859,6 +857,8 @@ static int scmi_remove(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
++	scmi_notification_exit(&info->handle);
++
+ 	/* Safe to free channels since no more users */
+ 	ret = idr_for_each(idr, info->desc->ops->chan_free, idr);
+ 	idr_destroy(&info->tx_idr);
+diff --git a/drivers/gpio/gpio-pcf857x.c b/drivers/gpio/gpio-pcf857x.c
+index a2a8d155c75e3..b7568ee33696d 100644
+--- a/drivers/gpio/gpio-pcf857x.c
++++ b/drivers/gpio/gpio-pcf857x.c
+@@ -332,7 +332,7 @@ static int pcf857x_probe(struct i2c_client *client,
+ 	 * reset state.  Otherwise it flags pins to be driven low.
+ 	 */
+ 	gpio->out = ~n_latch;
+-	gpio->status = gpio->out;
++	gpio->status = gpio->read(gpio->client);
+ 
+ 	/* Enable irqchip if we have an interrupt */
+ 	if (client->irq) {
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index 147d61b9674ea..16f73c1023943 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -15,6 +15,9 @@ menuconfig DRM
+ 	select I2C_ALGOBIT
+ 	select DMA_SHARED_BUFFER
+ 	select SYNC_FILE
++# gallium uses SYS_kcmp for os_same_file_description() to de-duplicate
++# device and dmabuf fd. Let's make sure that is available for our userspace.
++	select KCMP
+ 	help
+ 	  Kernel-level support for the Direct Rendering Infrastructure (DRI)
+ 	  introduced in XFree86 4.0. If you say Y here, you need to select
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index 82cd8e55595af..eb22a190c2423 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -844,7 +844,7 @@ static int amdgpu_ras_error_inject_xgmi(struct amdgpu_device *adev,
+ 	if (amdgpu_dpm_allow_xgmi_power_down(adev, true))
+ 		dev_warn(adev->dev, "Failed to allow XGMI power down");
+ 
+-	if (amdgpu_dpm_set_df_cstate(adev, DF_CSTATE_DISALLOW))
++	if (amdgpu_dpm_set_df_cstate(adev, DF_CSTATE_ALLOW))
+ 		dev_warn(adev->dev, "Failed to allow df cstate");
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
+index ee9480d14cbc3..86cfb3d55477f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
+@@ -21,7 +21,7 @@
+  *
+  */
+ 
+-#if !defined(_AMDGPU_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
++#if !defined(_AMDGPU_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
+ #define _AMDGPU_TRACE_H_
+ 
+ #include <linux/stringify.h>
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index 41cd108214d6d..7efc618887e21 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -246,6 +246,8 @@ static u32 soc15_get_xclk(struct amdgpu_device *adev)
+ {
+ 	u32 reference_clock = adev->clock.spll.reference_freq;
+ 
++	if (adev->asic_type == CHIP_RENOIR)
++		return 10000;
+ 	if (adev->asic_type == CHIP_RAVEN)
+ 		return reference_clock / 4;
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
+index 16262e5d93f5c..7351dd195274e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
+@@ -243,11 +243,11 @@ get_sh_mem_bases_nybble_64(struct kfd_process_device *pdd)
+ static inline void dqm_lock(struct device_queue_manager *dqm)
+ {
+ 	mutex_lock(&dqm->lock_hidden);
+-	dqm->saved_flags = memalloc_nofs_save();
++	dqm->saved_flags = memalloc_noreclaim_save();
+ }
+ static inline void dqm_unlock(struct device_queue_manager *dqm)
+ {
+-	memalloc_nofs_restore(dqm->saved_flags);
++	memalloc_noreclaim_restore(dqm->saved_flags);
+ 	mutex_unlock(&dqm->lock_hidden);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index fdca76fc598c0..bffaefaf5a292 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1096,7 +1096,7 @@ static void amdgpu_dm_fini(struct amdgpu_device *adev)
+ 
+ #ifdef CONFIG_DRM_AMD_DC_HDCP
+ 	if (adev->dm.hdcp_workqueue) {
+-		hdcp_destroy(adev->dm.hdcp_workqueue);
++		hdcp_destroy(&adev->dev->kobj, adev->dm.hdcp_workqueue);
+ 		adev->dm.hdcp_workqueue = NULL;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+index c2cd184f0bbd4..79de68ac03f20 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+@@ -376,7 +376,7 @@ static void event_cpirq(struct work_struct *work)
+ }
+ 
+ 
+-void hdcp_destroy(struct hdcp_workqueue *hdcp_work)
++void hdcp_destroy(struct kobject *kobj, struct hdcp_workqueue *hdcp_work)
+ {
+ 	int i = 0;
+ 
+@@ -385,6 +385,7 @@ void hdcp_destroy(struct hdcp_workqueue *hdcp_work)
+ 		cancel_delayed_work_sync(&hdcp_work[i].watchdog_timer_dwork);
+ 	}
+ 
++	sysfs_remove_bin_file(kobj, &hdcp_work[0].attr);
+ 	kfree(hdcp_work->srm);
+ 	kfree(hdcp_work->srm_temp);
+ 	kfree(hdcp_work);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.h
+index 5159b3a5e5b03..09294ff122fea 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.h
+@@ -69,7 +69,7 @@ void hdcp_update_display(struct hdcp_workqueue *hdcp_work,
+ 
+ void hdcp_reset_display(struct hdcp_workqueue *work, unsigned int link_index);
+ void hdcp_handle_cpirq(struct hdcp_workqueue *work, unsigned int link_index);
+-void hdcp_destroy(struct hdcp_workqueue *work);
++void hdcp_destroy(struct kobject *kobj, struct hdcp_workqueue *work);
+ 
+ struct hdcp_workqueue *hdcp_create_workqueue(struct amdgpu_device *adev, struct cp_psp *cp_psp, struct dc *dc);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table.c b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+index 070459e3e4070..afc10b954ffa7 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/command_table.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+@@ -245,6 +245,23 @@ static enum bp_result encoder_control_digx_v3(
+ 					cntl->enable_dp_audio);
+ 	params.ucLaneNum = (uint8_t)(cntl->lanes_number);
+ 
++	switch (cntl->color_depth) {
++	case COLOR_DEPTH_888:
++		params.ucBitPerColor = PANEL_8BIT_PER_COLOR;
++		break;
++	case COLOR_DEPTH_101010:
++		params.ucBitPerColor = PANEL_10BIT_PER_COLOR;
++		break;
++	case COLOR_DEPTH_121212:
++		params.ucBitPerColor = PANEL_12BIT_PER_COLOR;
++		break;
++	case COLOR_DEPTH_161616:
++		params.ucBitPerColor = PANEL_16BIT_PER_COLOR;
++		break;
++	default:
++		break;
++	}
++
+ 	if (EXEC_BIOS_CMD_TABLE(DIGxEncoderControl, params))
+ 		result = BP_RESULT_OK;
+ 
+@@ -274,6 +291,23 @@ static enum bp_result encoder_control_digx_v4(
+ 					cntl->enable_dp_audio));
+ 	params.ucLaneNum = (uint8_t)(cntl->lanes_number);
+ 
++	switch (cntl->color_depth) {
++	case COLOR_DEPTH_888:
++		params.ucBitPerColor = PANEL_8BIT_PER_COLOR;
++		break;
++	case COLOR_DEPTH_101010:
++		params.ucBitPerColor = PANEL_10BIT_PER_COLOR;
++		break;
++	case COLOR_DEPTH_121212:
++		params.ucBitPerColor = PANEL_12BIT_PER_COLOR;
++		break;
++	case COLOR_DEPTH_161616:
++		params.ucBitPerColor = PANEL_16BIT_PER_COLOR;
++		break;
++	default:
++		break;
++	}
++
+ 	if (EXEC_BIOS_CMD_TABLE(DIGxEncoderControl, params))
+ 		result = BP_RESULT_OK;
+ 
+@@ -1057,6 +1091,19 @@ static enum bp_result set_pixel_clock_v5(
+ 		 * driver choose program it itself, i.e. here we program it
+ 		 * to 888 by default.
+ 		 */
++		if (bp_params->signal_type == SIGNAL_TYPE_HDMI_TYPE_A)
++			switch (bp_params->color_depth) {
++			case TRANSMITTER_COLOR_DEPTH_30:
++				/* yes this is correct, the atom define is wrong */
++				clk.sPCLKInput.ucMiscInfo |= PIXEL_CLOCK_V5_MISC_HDMI_32BPP;
++				break;
++			case TRANSMITTER_COLOR_DEPTH_36:
++				/* yes this is correct, the atom define is wrong */
++				clk.sPCLKInput.ucMiscInfo |= PIXEL_CLOCK_V5_MISC_HDMI_30BPP;
++				break;
++			default:
++				break;
++			}
+ 
+ 		if (EXEC_BIOS_CMD_TABLE(SetPixelClock, clk))
+ 			result = BP_RESULT_OK;
+@@ -1135,6 +1182,20 @@ static enum bp_result set_pixel_clock_v6(
+ 		 * driver choose program it itself, i.e. here we pass required
+ 		 * target rate that includes deep color.
+ 		 */
++		if (bp_params->signal_type == SIGNAL_TYPE_HDMI_TYPE_A)
++			switch (bp_params->color_depth) {
++			case TRANSMITTER_COLOR_DEPTH_30:
++				clk.sPCLKInput.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_30BPP_V6;
++				break;
++			case TRANSMITTER_COLOR_DEPTH_36:
++				clk.sPCLKInput.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_36BPP_V6;
++				break;
++			case TRANSMITTER_COLOR_DEPTH_48:
++				clk.sPCLKInput.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_48BPP;
++				break;
++			default:
++				break;
++			}
+ 
+ 		if (EXEC_BIOS_CMD_TABLE(SetPixelClock, clk))
+ 			result = BP_RESULT_OK;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+index 49ae5ff12da63..bae3a146b2cc2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+@@ -871,6 +871,20 @@ static bool dce110_program_pix_clk(
+ 	bp_pc_params.flags.SET_EXTERNAL_REF_DIV_SRC =
+ 					pll_settings->use_external_clk;
+ 
++	switch (pix_clk_params->color_depth) {
++	case COLOR_DEPTH_101010:
++		bp_pc_params.color_depth = TRANSMITTER_COLOR_DEPTH_30;
++		break;
++	case COLOR_DEPTH_121212:
++		bp_pc_params.color_depth = TRANSMITTER_COLOR_DEPTH_36;
++		break;
++	case COLOR_DEPTH_161616:
++		bp_pc_params.color_depth = TRANSMITTER_COLOR_DEPTH_48;
++		break;
++	default:
++		break;
++	}
++
+ 	if (clk_src->bios->funcs->set_pixel_clock(
+ 			clk_src->bios, &bp_pc_params) != BP_RESULT_OK)
+ 		return false;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c
+index 5054bb567b748..99ad475fc1ff5 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c
+@@ -564,6 +564,7 @@ static void dce110_stream_encoder_hdmi_set_stream_attribute(
+ 	cntl.enable_dp_audio = enable_audio;
+ 	cntl.pixel_clock = actual_pix_clk_khz;
+ 	cntl.lanes_number = LANE_COUNT_FOUR;
++	cntl.color_depth = crtc_timing->display_color_depth;
+ 
+ 	if (enc110->base.bp->funcs->encoder_control(
+ 			enc110->base.bp, &cntl) != BP_RESULT_OK)
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
+index 2a32b66959ba2..e2e79025825f8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
+@@ -601,12 +601,12 @@ static void set_clamp(
+ 		clamp_max = 0x3FC0;
+ 		break;
+ 	case COLOR_DEPTH_101010:
+-		/* 10bit MSB aligned on 14 bit bus '11 1111 1111 1100' */
+-		clamp_max = 0x3FFC;
++		/* 10bit MSB aligned on 14 bit bus '11 1111 1111 0000' */
++		clamp_max = 0x3FF0;
+ 		break;
+ 	case COLOR_DEPTH_121212:
+-		/* 12bit MSB aligned on 14 bit bus '11 1111 1111 1111' */
+-		clamp_max = 0x3FFF;
++		/* 12bit MSB aligned on 14 bit bus '11 1111 1111 1100' */
++		clamp_max = 0x3FFC;
+ 		break;
+ 	default:
+ 		clamp_max = 0x3FC0;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.c
+index 81db0179f7ea8..85dc2b16c9418 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.c
+@@ -480,7 +480,6 @@ unsigned int dcn10_get_dig_frontend(struct link_encoder *enc)
+ 		break;
+ 	default:
+ 		// invalid source select DIG
+-		ASSERT(false);
+ 		result = ENGINE_ID_UNKNOWN;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 121643ddb719b..4ea53c543e082 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -408,8 +408,8 @@ static struct _vcs_dpi_soc_bounding_box_st dcn2_0_nv14_soc = {
+ 			},
+ 		},
+ 	.num_states = 5,
+-	.sr_exit_time_us = 11.6,
+-	.sr_enter_plus_exit_time_us = 13.9,
++	.sr_exit_time_us = 8.6,
++	.sr_enter_plus_exit_time_us = 10.9,
+ 	.urgent_latency_us = 4.0,
+ 	.urgent_latency_pixel_data_only_us = 4.0,
+ 	.urgent_latency_pixel_mixed_with_vm_data_us = 4.0,
+@@ -3248,7 +3248,7 @@ restore_dml_state:
+ bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context,
+ 		bool fast_validate)
+ {
+-	bool voltage_supported = false;
++	bool voltage_supported;
+ 	DC_FP_START();
+ 	voltage_supported = dcn20_validate_bandwidth_fp(dc, context, fast_validate);
+ 	DC_FP_END();
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index c993854404124..4e2dcf259428f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -1173,8 +1173,8 @@ void dcn21_calculate_wm(
+ }
+ 
+ 
+-bool dcn21_validate_bandwidth(struct dc *dc, struct dc_state *context,
+-		bool fast_validate)
++static noinline bool dcn21_validate_bandwidth_fp(struct dc *dc,
++		struct dc_state *context, bool fast_validate)
+ {
+ 	bool out = false;
+ 
+@@ -1227,6 +1227,22 @@ validate_out:
+ 
+ 	return out;
+ }
++
++/*
++ * Some of the functions further below use the FPU, so we need to wrap this
++ * with DC_FP_START()/DC_FP_END(). Use the same approach as for
++ * dcn20_validate_bandwidth in dcn20_resource.c.
++ */
++bool dcn21_validate_bandwidth(struct dc *dc, struct dc_state *context,
++		bool fast_validate)
++{
++	bool voltage_supported;
++	DC_FP_START();
++	voltage_supported = dcn21_validate_bandwidth_fp(dc, context, fast_validate);
++	DC_FP_END();
++	return voltage_supported;
++}
++
+ static void dcn21_destroy_resource_pool(struct resource_pool **pool)
+ {
+ 	struct dcn21_resource_pool *dcn21_pool = TO_DCN21_RES_POOL(*pool);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+index 204773ffc376f..97909d5aab344 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+@@ -526,6 +526,8 @@ void dcn30_init_hw(struct dc *dc)
+ 
+ 					fe = dc->links[i]->link_enc->funcs->get_dig_frontend(
+ 										dc->links[i]->link_enc);
++					if (fe == ENGINE_ID_UNKNOWN)
++						continue;
+ 
+ 					for (j = 0; j < dc->res_pool->stream_enc_count; j++) {
+ 						if (fe == dc->res_pool->stream_enc[j]->id) {
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
+index 1b971265418b6..0e0f494fbb5e1 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
++++ b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
+@@ -168,6 +168,11 @@ static const struct irq_source_info_funcs vblank_irq_info_funcs = {
+ 	.ack = NULL
+ };
+ 
++static const struct irq_source_info_funcs vupdate_no_lock_irq_info_funcs = {
++	.set = NULL,
++	.ack = NULL
++};
++
+ #undef BASE_INNER
+ #define BASE_INNER(seg) DMU_BASE__INST0_SEG ## seg
+ 
+@@ -230,6 +235,17 @@ static const struct irq_source_info_funcs vblank_irq_info_funcs = {
+ 		.funcs = &vblank_irq_info_funcs\
+ 	}
+ 
++/* vupdate_no_lock_int_entry maps to DC_IRQ_SOURCE_VUPDATEx, to match semantic
++ * of DCE's DC_IRQ_SOURCE_VUPDATEx.
++ */
++#define vupdate_no_lock_int_entry(reg_num)\
++	[DC_IRQ_SOURCE_VUPDATE1 + reg_num] = {\
++		IRQ_REG_ENTRY(OTG, reg_num,\
++			OTG_GLOBAL_SYNC_STATUS, VUPDATE_NO_LOCK_INT_EN,\
++			OTG_GLOBAL_SYNC_STATUS, VUPDATE_NO_LOCK_EVENT_CLEAR),\
++		.funcs = &vupdate_no_lock_irq_info_funcs\
++	}
++
+ #define vblank_int_entry(reg_num)\
+ 	[DC_IRQ_SOURCE_VBLANK1 + reg_num] = {\
+ 		IRQ_REG_ENTRY(OTG, reg_num,\
+@@ -338,6 +354,12 @@ irq_source_info_dcn21[DAL_IRQ_SOURCES_NUMBER] = {
+ 	vupdate_int_entry(3),
+ 	vupdate_int_entry(4),
+ 	vupdate_int_entry(5),
++	vupdate_no_lock_int_entry(0),
++	vupdate_no_lock_int_entry(1),
++	vupdate_no_lock_int_entry(2),
++	vupdate_no_lock_int_entry(3),
++	vupdate_no_lock_int_entry(4),
++	vupdate_no_lock_int_entry(5),
+ 	vblank_int_entry(0),
+ 	vblank_int_entry(1),
+ 	vblank_int_entry(2),
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index 529816637c731..9f383b9041d28 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -1070,7 +1070,7 @@ static ssize_t amdgpu_get_pp_dpm_sclk(struct device *dev,
+ static ssize_t amdgpu_read_mask(const char *buf, size_t count, uint32_t *mask)
+ {
+ 	int ret;
+-	long level;
++	unsigned long level;
+ 	char *sub_str = NULL;
+ 	char *tmp;
+ 	char buf_cpy[AMDGPU_MASK_BUF_MAX + 1];
+@@ -1086,8 +1086,8 @@ static ssize_t amdgpu_read_mask(const char *buf, size_t count, uint32_t *mask)
+ 	while (tmp[0]) {
+ 		sub_str = strsep(&tmp, delimiter);
+ 		if (strlen(sub_str)) {
+-			ret = kstrtol(sub_str, 0, &level);
+-			if (ret)
++			ret = kstrtoul(sub_str, 0, &level);
++			if (ret || level > 31)
+ 				return -EINVAL;
+ 			*mask |= 1 << level;
+ 		} else
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 17bdad95978a1..9cf35dab25273 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -2302,7 +2302,8 @@ drm_dp_mst_port_add_connector(struct drm_dp_mst_branch *mstb,
+ 	}
+ 
+ 	if (port->pdt != DP_PEER_DEVICE_NONE &&
+-	    drm_dp_mst_is_end_device(port->pdt, port->mcs)) {
++	    drm_dp_mst_is_end_device(port->pdt, port->mcs) &&
++	    port->port_num >= DP_MST_LOGICAL_PORT_0) {
+ 		port->cached_edid = drm_get_edid(port->connector,
+ 						 &port->aux.ddc);
+ 		drm_connector_set_tile_property(port->connector);
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 1543d9d109705..8033467db4bee 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -923,11 +923,15 @@ static int setcmap_legacy(struct fb_cmap *cmap, struct fb_info *info)
+ 	drm_modeset_lock_all(fb_helper->dev);
+ 	drm_client_for_each_modeset(modeset, &fb_helper->client) {
+ 		crtc = modeset->crtc;
+-		if (!crtc->funcs->gamma_set || !crtc->gamma_size)
+-			return -EINVAL;
++		if (!crtc->funcs->gamma_set || !crtc->gamma_size) {
++			ret = -EINVAL;
++			goto out;
++		}
+ 
+-		if (cmap->start + cmap->len > crtc->gamma_size)
+-			return -EINVAL;
++		if (cmap->start + cmap->len > crtc->gamma_size) {
++			ret = -EINVAL;
++			goto out;
++		}
+ 
+ 		r = crtc->gamma_store;
+ 		g = r + crtc->gamma_size;
+@@ -940,8 +944,9 @@ static int setcmap_legacy(struct fb_cmap *cmap, struct fb_info *info)
+ 		ret = crtc->funcs->gamma_set(crtc, r, g, b,
+ 					     crtc->gamma_size, NULL);
+ 		if (ret)
+-			return ret;
++			goto out;
+ 	}
++out:
+ 	drm_modeset_unlock_all(fb_helper->dev);
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/drm_modes.c b/drivers/gpu/drm/drm_modes.c
+index 501b4fe55a3db..511cde5c7fa6f 100644
+--- a/drivers/gpu/drm/drm_modes.c
++++ b/drivers/gpu/drm/drm_modes.c
+@@ -762,7 +762,7 @@ int drm_mode_vrefresh(const struct drm_display_mode *mode)
+ 	if (mode->htotal == 0 || mode->vtotal == 0)
+ 		return 0;
+ 
+-	num = mode->clock * 1000;
++	num = mode->clock;
+ 	den = mode->htotal * mode->vtotal;
+ 
+ 	if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+@@ -772,7 +772,7 @@ int drm_mode_vrefresh(const struct drm_display_mode *mode)
+ 	if (mode->vscan > 1)
+ 		den *= mode->vscan;
+ 
+-	return DIV_ROUND_CLOSEST(num, den);
++	return DIV_ROUND_CLOSEST_ULL(mul_u32_u32(num, 1000), den);
+ }
+ EXPORT_SYMBOL(drm_mode_vrefresh);
+ 
+diff --git a/drivers/gpu/drm/gma500/oaktrail_hdmi_i2c.c b/drivers/gpu/drm/gma500/oaktrail_hdmi_i2c.c
+index e281070611480..fc9a34ed58bd1 100644
+--- a/drivers/gpu/drm/gma500/oaktrail_hdmi_i2c.c
++++ b/drivers/gpu/drm/gma500/oaktrail_hdmi_i2c.c
+@@ -279,11 +279,8 @@ int oaktrail_hdmi_i2c_init(struct pci_dev *dev)
+ 	hdmi_dev = pci_get_drvdata(dev);
+ 
+ 	i2c_dev = kzalloc(sizeof(struct hdmi_i2c_dev), GFP_KERNEL);
+-	if (i2c_dev == NULL) {
+-		DRM_ERROR("Can't allocate interface\n");
+-		ret = -ENOMEM;
+-		goto exit;
+-	}
++	if (!i2c_dev)
++		return -ENOMEM;
+ 
+ 	i2c_dev->adap = &oaktrail_hdmi_i2c_adapter;
+ 	i2c_dev->status = I2C_STAT_INIT;
+@@ -300,16 +297,23 @@ int oaktrail_hdmi_i2c_init(struct pci_dev *dev)
+ 			  oaktrail_hdmi_i2c_adapter.name, hdmi_dev);
+ 	if (ret) {
+ 		DRM_ERROR("Failed to request IRQ for I2C controller\n");
+-		goto err;
++		goto free_dev;
+ 	}
+ 
+ 	/* Adapter registration */
+ 	ret = i2c_add_numbered_adapter(&oaktrail_hdmi_i2c_adapter);
+-	return ret;
++	if (ret) {
++		DRM_ERROR("Failed to add I2C adapter\n");
++		goto free_irq;
++	}
+ 
+-err:
++	return 0;
++
++free_irq:
++	free_irq(dev->irq, hdmi_dev);
++free_dev:
+ 	kfree(i2c_dev);
+-exit:
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/gma500/psb_drv.c b/drivers/gpu/drm/gma500/psb_drv.c
+index 34b4aae9a15e3..074f403d7ca07 100644
+--- a/drivers/gpu/drm/gma500/psb_drv.c
++++ b/drivers/gpu/drm/gma500/psb_drv.c
+@@ -313,6 +313,8 @@ static int psb_driver_load(struct drm_device *dev, unsigned long flags)
+ 	if (ret)
+ 		goto out_err;
+ 
++	ret = -ENOMEM;
++
+ 	dev_priv->mmu = psb_mmu_driver_init(dev, 1, 0, 0);
+ 	if (!dev_priv->mmu)
+ 		goto out_err;
+diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c b/drivers/gpu/drm/i915/display/intel_hdmi.c
+index 3f2008d845c20..1d616da4f1657 100644
+--- a/drivers/gpu/drm/i915/display/intel_hdmi.c
++++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
+@@ -2216,7 +2216,11 @@ hdmi_port_clock_valid(struct intel_hdmi *hdmi,
+ 					  has_hdmi_sink))
+ 		return MODE_CLOCK_HIGH;
+ 
+-	/* BXT DPLL can't generate 223-240 MHz */
++	/* GLK DPLL can't generate 446-480 MHz */
++	if (IS_GEMINILAKE(dev_priv) && clock > 446666 && clock < 480000)
++		return MODE_CLOCK_RANGE;
++
++	/* BXT/GLK DPLL can't generate 223-240 MHz */
+ 	if (IS_GEN9_LP(dev_priv) && clock > 223333 && clock < 240000)
+ 		return MODE_CLOCK_RANGE;
+ 
+diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
+index e961ad6a31294..4adbc2bba97fb 100644
+--- a/drivers/gpu/drm/i915/gt/gen7_renderclear.c
++++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
+@@ -240,7 +240,7 @@ gen7_emit_state_base_address(struct batch_chunk *batch,
+ 	/* general */
+ 	*cs++ = batch_addr(batch) | BASE_ADDRESS_MODIFY;
+ 	/* surface */
+-	*cs++ = batch_addr(batch) | surface_state_base | BASE_ADDRESS_MODIFY;
++	*cs++ = (batch_addr(batch) + surface_state_base) | BASE_ADDRESS_MODIFY;
+ 	/* dynamic */
+ 	*cs++ = batch_addr(batch) | BASE_ADDRESS_MODIFY;
+ 	/* indirect */
+@@ -353,19 +353,21 @@ static void gen7_emit_pipeline_flush(struct batch_chunk *batch)
+ 
+ static void gen7_emit_pipeline_invalidate(struct batch_chunk *batch)
+ {
+-	u32 *cs = batch_alloc_items(batch, 0, 8);
++	u32 *cs = batch_alloc_items(batch, 0, 10);
+ 
+ 	/* ivb: Stall before STATE_CACHE_INVALIDATE */
+-	*cs++ = GFX_OP_PIPE_CONTROL(4);
++	*cs++ = GFX_OP_PIPE_CONTROL(5);
+ 	*cs++ = PIPE_CONTROL_STALL_AT_SCOREBOARD |
+ 		PIPE_CONTROL_CS_STALL;
+ 	*cs++ = 0;
+ 	*cs++ = 0;
++	*cs++ = 0;
+ 
+-	*cs++ = GFX_OP_PIPE_CONTROL(4);
++	*cs++ = GFX_OP_PIPE_CONTROL(5);
+ 	*cs++ = PIPE_CONTROL_STATE_CACHE_INVALIDATE;
+ 	*cs++ = 0;
+ 	*cs++ = 0;
++	*cs++ = 0;
+ 
+ 	batch_advance(batch, cs);
+ }
+@@ -391,12 +393,14 @@ static void emit_batch(struct i915_vma * const vma,
+ 						     desc_count);
+ 
+ 	/* Reset inherited context registers */
++	gen7_emit_pipeline_flush(&cmds);
+ 	gen7_emit_pipeline_invalidate(&cmds);
+ 	batch_add(&cmds, MI_LOAD_REGISTER_IMM(2));
+ 	batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_0_GEN7));
+ 	batch_add(&cmds, 0xffff0000);
+ 	batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_1));
+ 	batch_add(&cmds, 0xffff0000 | PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
++	gen7_emit_pipeline_invalidate(&cmds);
+ 	gen7_emit_pipeline_flush(&cmds);
+ 
+ 	/* Switch to the media pipeline and our base address */
+diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
+index dc6df9e9a40d8..f6e7a88a56f1b 100644
+--- a/drivers/gpu/drm/lima/lima_sched.c
++++ b/drivers/gpu/drm/lima/lima_sched.c
+@@ -200,7 +200,7 @@ static int lima_pm_busy(struct lima_device *ldev)
+ 	int ret;
+ 
+ 	/* resume GPU if it has been suspended by runtime PM */
+-	ret = pm_runtime_get_sync(ldev->dev);
++	ret = pm_runtime_resume_and_get(ldev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+index 28651bc579bc9..faff41183d173 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+@@ -266,7 +266,7 @@ static void mtk_ovl_layer_config(struct mtk_ddp_comp *comp, unsigned int idx,
+ 	}
+ 
+ 	con = ovl_fmt_convert(ovl, fmt);
+-	if (state->base.fb->format->has_alpha)
++	if (state->base.fb && state->base.fb->format->has_alpha)
+ 		con |= OVL_CON_AEN | OVL_CON_ALPHA;
+ 
+ 	if (pending->rotation & DRM_MODE_REFLECT_Y) {
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index 491fee410dafe..8d78d95d29fcd 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -266,6 +266,16 @@ int a6xx_gmu_set_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state)
+ 		}
+ 		name = "GPU_SET";
+ 		break;
++	case GMU_OOB_PERFCOUNTER_SET:
++		if (gmu->legacy) {
++			request = GMU_OOB_PERFCOUNTER_REQUEST;
++			ack = GMU_OOB_PERFCOUNTER_ACK;
++		} else {
++			request = GMU_OOB_PERFCOUNTER_REQUEST_NEW;
++			ack = GMU_OOB_PERFCOUNTER_ACK_NEW;
++		}
++		name = "PERFCOUNTER";
++		break;
+ 	case GMU_OOB_BOOT_SLUMBER:
+ 		request = GMU_OOB_BOOT_SLUMBER_REQUEST;
+ 		ack = GMU_OOB_BOOT_SLUMBER_ACK;
+@@ -303,9 +313,14 @@ int a6xx_gmu_set_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state)
+ void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state)
+ {
+ 	if (!gmu->legacy) {
+-		WARN_ON(state != GMU_OOB_GPU_SET);
+-		gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET,
+-			1 << GMU_OOB_GPU_SET_CLEAR_NEW);
++		if (state == GMU_OOB_GPU_SET) {
++			gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET,
++				1 << GMU_OOB_GPU_SET_CLEAR_NEW);
++		} else {
++			WARN_ON(state != GMU_OOB_PERFCOUNTER_SET);
++			gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET,
++				1 << GMU_OOB_PERFCOUNTER_CLEAR_NEW);
++		}
+ 		return;
+ 	}
+ 
+@@ -314,6 +329,10 @@ void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state)
+ 		gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET,
+ 			1 << GMU_OOB_GPU_SET_CLEAR);
+ 		break;
++	case GMU_OOB_PERFCOUNTER_SET:
++		gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET,
++			1 << GMU_OOB_PERFCOUNTER_CLEAR);
++		break;
+ 	case GMU_OOB_BOOT_SLUMBER:
+ 		gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET,
+ 			1 << GMU_OOB_BOOT_SLUMBER_CLEAR);
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+index c6d2bced8e5de..9fa278de2106a 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+@@ -156,6 +156,7 @@ enum a6xx_gmu_oob_state {
+ 	GMU_OOB_BOOT_SLUMBER = 0,
+ 	GMU_OOB_GPU_SET,
+ 	GMU_OOB_DCVS_SET,
++	GMU_OOB_PERFCOUNTER_SET,
+ };
+ 
+ /* These are the interrupt / ack bits for each OOB request that are set
+@@ -190,6 +191,13 @@ enum a6xx_gmu_oob_state {
+ #define GMU_OOB_GPU_SET_ACK_NEW		31
+ #define GMU_OOB_GPU_SET_CLEAR_NEW	31
+ 
++#define GMU_OOB_PERFCOUNTER_REQUEST	17
++#define GMU_OOB_PERFCOUNTER_ACK		25
++#define GMU_OOB_PERFCOUNTER_CLEAR	25
++
++#define GMU_OOB_PERFCOUNTER_REQUEST_NEW	28
++#define GMU_OOB_PERFCOUNTER_ACK_NEW	30
++#define GMU_OOB_PERFCOUNTER_CLEAR_NEW	30
+ 
+ void a6xx_hfi_init(struct a6xx_gmu *gmu);
+ int a6xx_hfi_start(struct a6xx_gmu *gmu, int boot_state);
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 420ca4a0eb5f7..83b50f6d6bb78 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1066,14 +1066,18 @@ static int a6xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
+ {
+ 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
++	static DEFINE_MUTEX(perfcounter_oob);
++
++	mutex_lock(&perfcounter_oob);
+ 
+ 	/* Force the GPU power on so we can read this register */
+-	a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET);
++	a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
+ 
+ 	*value = gpu_read64(gpu, REG_A6XX_RBBM_PERFCTR_CP_0_LO,
+ 		REG_A6XX_RBBM_PERFCTR_CP_0_HI);
+ 
+-	a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET);
++	a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
++	mutex_unlock(&perfcounter_oob);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+index c39dad151bb6d..7d7668998501a 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+@@ -1176,7 +1176,7 @@ static void mdp5_crtc_pp_done_irq(struct mdp_irq *irq, uint32_t irqstatus)
+ 	struct mdp5_crtc *mdp5_crtc = container_of(irq, struct mdp5_crtc,
+ 								pp_done);
+ 
+-	complete(&mdp5_crtc->pp_completion);
++	complete_all(&mdp5_crtc->pp_completion);
+ }
+ 
+ static void mdp5_crtc_wait_for_pp_done(struct drm_crtc *crtc)
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index fe0279542a1c2..a2db14f852f11 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -620,8 +620,8 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ 	dp_add_event(dp, EV_DISCONNECT_PENDING_TIMEOUT, 0, DP_TIMEOUT_5_SECOND);
+ 
+ 	/* signal the disconnect event early to ensure proper teardown */
+-	dp_display_handle_plugged_change(g_dp_display, false);
+ 	reinit_completion(&dp->audio_comp);
++	dp_display_handle_plugged_change(g_dp_display, false);
+ 
+ 	dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK |
+ 					DP_DP_IRQ_HPD_INT_MASK, true);
+@@ -840,6 +840,9 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data)
+ 
+ 	/* wait only if audio was enabled */
+ 	if (dp_display->audio_enabled) {
++		/* signal the disconnect event */
++		reinit_completion(&dp->audio_comp);
++		dp_display_handle_plugged_change(dp_display, false);
+ 		if (!wait_for_completion_timeout(&dp->audio_comp,
+ 				HZ * 5))
+ 			DRM_ERROR("audio comp timeout\n");
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c
+index 1afb7c579dbbb..eca86bf448f74 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c
+@@ -139,7 +139,7 @@ const struct msm_dsi_phy_cfg dsi_phy_20nm_cfgs = {
+ 		.disable = dsi_20nm_phy_disable,
+ 		.init = msm_dsi_phy_init_common,
+ 	},
+-	.io_start = { 0xfd998300, 0xfd9a0300 },
++	.io_start = { 0xfd998500, 0xfd9a0500 },
+ 	.num_dsi_phy = 2,
+ };
+ 
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index d556c353e5aea..3d0adfa6736a5 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -775,9 +775,10 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
+ 		struct drm_file *file, struct drm_gem_object *obj,
+ 		uint64_t *iova)
+ {
++	struct msm_drm_private *priv = dev->dev_private;
+ 	struct msm_file_private *ctx = file->driver_priv;
+ 
+-	if (!ctx->aspace)
++	if (!priv->gpu)
+ 		return -EINVAL;
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h
+index f5f59261ea819..d1beaad0c82b6 100644
+--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h
+@@ -14,6 +14,7 @@ enum dcb_connector_type {
+ 	DCB_CONNECTOR_LVDS_SPWG = 0x41,
+ 	DCB_CONNECTOR_DP = 0x46,
+ 	DCB_CONNECTOR_eDP = 0x47,
++	DCB_CONNECTOR_mDP = 0x48,
+ 	DCB_CONNECTOR_HDMI_0 = 0x60,
+ 	DCB_CONNECTOR_HDMI_1 = 0x61,
+ 	DCB_CONNECTOR_HDMI_C = 0x63,
+diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c b/drivers/gpu/drm/nouveau/nouveau_chan.c
+index 8f099601d2f2d..9b6f2c1414d72 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
++++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
+@@ -533,6 +533,7 @@ nouveau_channel_new(struct nouveau_drm *drm, struct nvif_device *device,
+ 	if (ret) {
+ 		NV_PRINTK(err, cli, "channel failed to initialise, %d\n", ret);
+ 		nouveau_channel_del(pchan);
++		goto done;
+ 	}
+ 
+ 	ret = nouveau_svmm_join((*pchan)->vmm->svmm, (*pchan)->inst);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index 8b4b3688c7ae3..4c992fd5bd68a 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -1210,6 +1210,7 @@ drm_conntype_from_dcb(enum dcb_connector_type dcb)
+ 	case DCB_CONNECTOR_DMS59_DP0:
+ 	case DCB_CONNECTOR_DMS59_DP1:
+ 	case DCB_CONNECTOR_DP       :
++	case DCB_CONNECTOR_mDP      :
+ 	case DCB_CONNECTOR_USB_C    : return DRM_MODE_CONNECTOR_DisplayPort;
+ 	case DCB_CONNECTOR_eDP      : return DRM_MODE_CONNECTOR_eDP;
+ 	case DCB_CONNECTOR_HDMI_0   :
+diff --git a/drivers/gpu/drm/panel/panel-elida-kd35t133.c b/drivers/gpu/drm/panel/panel-elida-kd35t133.c
+index bc36aa3c11234..fe5ac3ef90185 100644
+--- a/drivers/gpu/drm/panel/panel-elida-kd35t133.c
++++ b/drivers/gpu/drm/panel/panel-elida-kd35t133.c
+@@ -265,7 +265,8 @@ static int kd35t133_probe(struct mipi_dsi_device *dsi)
+ 	dsi->lanes = 1;
+ 	dsi->format = MIPI_DSI_FMT_RGB888;
+ 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
+-			  MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_EOT_PACKET;
++			  MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_EOT_PACKET |
++			  MIPI_DSI_CLOCK_NON_CONTINUOUS;
+ 
+ 	drm_panel_init(&ctx->panel, &dsi->dev, &kd35t133_funcs,
+ 		       DRM_MODE_CONNECTOR_DSI);
+diff --git a/drivers/gpu/drm/panel/panel-mantix-mlaf057we51.c b/drivers/gpu/drm/panel/panel-mantix-mlaf057we51.c
+index 0c5f22e95c2db..624d17b96a693 100644
+--- a/drivers/gpu/drm/panel/panel-mantix-mlaf057we51.c
++++ b/drivers/gpu/drm/panel/panel-mantix-mlaf057we51.c
+@@ -22,6 +22,7 @@
+ /* Manufacturer specific Commands send via DSI */
+ #define MANTIX_CMD_OTP_STOP_RELOAD_MIPI 0x41
+ #define MANTIX_CMD_INT_CANCEL           0x4C
++#define MANTIX_CMD_SPI_FINISH           0x90
+ 
+ struct mantix {
+ 	struct device *dev;
+@@ -66,6 +67,10 @@ static int mantix_init_sequence(struct mantix *ctx)
+ 	dsi_generic_write_seq(dsi, 0x80, 0x64, 0x00, 0x64, 0x00, 0x00);
+ 	msleep(20);
+ 
++	dsi_generic_write_seq(dsi, MANTIX_CMD_SPI_FINISH, 0xA5);
++	dsi_generic_write_seq(dsi, MANTIX_CMD_OTP_STOP_RELOAD_MIPI, 0x00, 0x2F);
++	msleep(20);
++
+ 	dev_dbg(dev, "Panel init sequence done\n");
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/rcar-du/rcar_cmm.c b/drivers/gpu/drm/rcar-du/rcar_cmm.c
+index c578095b09a53..382d53f8a22e8 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_cmm.c
++++ b/drivers/gpu/drm/rcar-du/rcar_cmm.c
+@@ -122,7 +122,7 @@ int rcar_cmm_enable(struct platform_device *pdev)
+ {
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
+index fe86a3e677571..1b9738e44909d 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
+@@ -727,13 +727,10 @@ static void rcar_du_crtc_atomic_enable(struct drm_crtc *crtc,
+ 	 */
+ 	if (rcdu->info->lvds_clk_mask & BIT(rcrtc->index) &&
+ 	    rstate->outputs == BIT(RCAR_DU_OUTPUT_DPAD0)) {
+-		struct rcar_du_encoder *encoder =
+-			rcdu->encoders[RCAR_DU_OUTPUT_LVDS0 + rcrtc->index];
++		struct drm_bridge *bridge = rcdu->lvds[rcrtc->index];
+ 		const struct drm_display_mode *mode =
+ 			&crtc->state->adjusted_mode;
+-		struct drm_bridge *bridge;
+ 
+-		bridge = drm_bridge_chain_get_first_bridge(&encoder->base);
+ 		rcar_lvds_clk_enable(bridge, mode->clock * 1000);
+ 	}
+ 
+@@ -759,15 +756,12 @@ static void rcar_du_crtc_atomic_disable(struct drm_crtc *crtc,
+ 
+ 	if (rcdu->info->lvds_clk_mask & BIT(rcrtc->index) &&
+ 	    rstate->outputs == BIT(RCAR_DU_OUTPUT_DPAD0)) {
+-		struct rcar_du_encoder *encoder =
+-			rcdu->encoders[RCAR_DU_OUTPUT_LVDS0 + rcrtc->index];
+-		struct drm_bridge *bridge;
++		struct drm_bridge *bridge = rcdu->lvds[rcrtc->index];
+ 
+ 		/*
+ 		 * Disable the LVDS clock output, see
+ 		 * rcar_du_crtc_atomic_enable().
+ 		 */
+-		bridge = drm_bridge_chain_get_first_bridge(&encoder->base);
+ 		rcar_lvds_clk_disable(bridge);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.h b/drivers/gpu/drm/rcar-du/rcar_du_drv.h
+index 61504c54e2ecf..3597a179bfb78 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_drv.h
++++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.h
+@@ -20,10 +20,10 @@
+ 
+ struct clk;
+ struct device;
++struct drm_bridge;
+ struct drm_device;
+ struct drm_property;
+ struct rcar_du_device;
+-struct rcar_du_encoder;
+ 
+ #define RCAR_DU_FEATURE_CRTC_IRQ_CLOCK	BIT(0)	/* Per-CRTC IRQ and clock */
+ #define RCAR_DU_FEATURE_VSP1_SOURCE	BIT(1)	/* Has inputs from VSP1 */
+@@ -71,6 +71,7 @@ struct rcar_du_device_info {
+ #define RCAR_DU_MAX_CRTCS		4
+ #define RCAR_DU_MAX_GROUPS		DIV_ROUND_UP(RCAR_DU_MAX_CRTCS, 2)
+ #define RCAR_DU_MAX_VSPS		4
++#define RCAR_DU_MAX_LVDS		2
+ 
+ struct rcar_du_device {
+ 	struct device *dev;
+@@ -83,11 +84,10 @@ struct rcar_du_device {
+ 	struct rcar_du_crtc crtcs[RCAR_DU_MAX_CRTCS];
+ 	unsigned int num_crtcs;
+ 
+-	struct rcar_du_encoder *encoders[RCAR_DU_OUTPUT_MAX];
+-
+ 	struct rcar_du_group groups[RCAR_DU_MAX_GROUPS];
+ 	struct platform_device *cmms[RCAR_DU_MAX_CRTCS];
+ 	struct rcar_du_vsp vsps[RCAR_DU_MAX_VSPS];
++	struct drm_bridge *lvds[RCAR_DU_MAX_LVDS];
+ 
+ 	struct {
+ 		struct drm_property *colorkey;
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_encoder.c b/drivers/gpu/drm/rcar-du/rcar_du_encoder.c
+index b0335da0c1614..50fc14534fa4d 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_encoder.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_encoder.c
+@@ -57,7 +57,6 @@ int rcar_du_encoder_init(struct rcar_du_device *rcdu,
+ 	if (renc == NULL)
+ 		return -ENOMEM;
+ 
+-	rcdu->encoders[output] = renc;
+ 	renc->output = output;
+ 	encoder = rcar_encoder_to_drm_encoder(renc);
+ 
+@@ -91,6 +90,10 @@ int rcar_du_encoder_init(struct rcar_du_device *rcdu,
+ 			ret = -EPROBE_DEFER;
+ 			goto done;
+ 		}
++
++		if (output == RCAR_DU_OUTPUT_LVDS0 ||
++		    output == RCAR_DU_OUTPUT_LVDS1)
++			rcdu->lvds[output - RCAR_DU_OUTPUT_LVDS0] = bridge;
+ 	}
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_kms.c b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
+index 72dda446355fe..7015e22872bbe 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_kms.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
+@@ -700,10 +700,10 @@ static int rcar_du_cmm_init(struct rcar_du_device *rcdu)
+ 		int ret;
+ 
+ 		cmm = of_parse_phandle(np, "renesas,cmms", i);
+-		if (IS_ERR(cmm)) {
++		if (!cmm) {
+ 			dev_err(rcdu->dev,
+ 				"Failed to parse 'renesas,cmms' property\n");
+-			return PTR_ERR(cmm);
++			return -EINVAL;
+ 		}
+ 
+ 		if (!of_device_is_available(cmm)) {
+@@ -713,10 +713,10 @@ static int rcar_du_cmm_init(struct rcar_du_device *rcdu)
+ 		}
+ 
+ 		pdev = of_find_device_by_node(cmm);
+-		if (IS_ERR(pdev)) {
++		if (!pdev) {
+ 			dev_err(rcdu->dev, "No device found for CMM%u\n", i);
+ 			of_node_put(cmm);
+-			return PTR_ERR(pdev);
++			return -EINVAL;
+ 		}
+ 
+ 		of_node_put(cmm);
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.h b/drivers/gpu/drm/rockchip/rockchip_drm_vop.h
+index 4a2099cb582e1..857d97cdc67c6 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.h
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.h
+@@ -17,9 +17,20 @@
+ 
+ #define NUM_YUV2YUV_COEFFICIENTS 12
+ 
++/* AFBC supports a number of configurable modes. Relevant to us is block size
++ * (16x16 or 32x8), storage modifiers (SPARSE, SPLIT), and the YUV-like
++ * colourspace transform (YTR). 16x16 SPARSE mode is always used. SPLIT mode
++ * could be enabled via the hreg_block_split register, but is not currently
++ * handled. The colourspace transform is implicitly always assumed by the
++ * decoder, so consumers must use this transform as well.
++ *
++ * Failure to match modifiers will cause errors displaying AFBC buffers
++ * produced by conformant AFBC producers, including Mesa.
++ */
+ #define ROCKCHIP_AFBC_MOD \
+ 	DRM_FORMAT_MOD_ARM_AFBC( \
+ 		AFBC_FORMAT_MOD_BLOCK_SIZE_16x16 | AFBC_FORMAT_MOD_SPARSE \
++			| AFBC_FORMAT_MOD_YTR \
+ 	)
+ 
+ enum vop_data_format {
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index 9a0d77a680180..7111e0f527b0b 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -890,6 +890,9 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
+ 	if (sched->thread)
+ 		kthread_stop(sched->thread);
+ 
++	/* Confirm no work left behind accessing device structures */
++	cancel_delayed_work_sync(&sched->work_tdr);
++
+ 	sched->ready = false;
+ }
+ EXPORT_SYMBOL(drm_sched_fini);
+diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+index 1e643bc7e786a..9f06dec0fc61d 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+@@ -569,30 +569,13 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon *tcon,
+ 	if (info->bus_flags & DRM_BUS_FLAG_DE_LOW)
+ 		val |= SUN4I_TCON0_IO_POL_DE_NEGATIVE;
+ 
+-	/*
+-	 * On A20 and similar SoCs, the only way to achieve Positive Edge
+-	 * (Rising Edge), is setting dclk clock phase to 2/3(240°).
+-	 * By default TCON works in Negative Edge(Falling Edge),
+-	 * this is why phase is set to 0 in that case.
+-	 * Unfortunately there's no way to logically invert dclk through
+-	 * IO_POL register.
+-	 * The only acceptable way to work, triple checked with scope,
+-	 * is using clock phase set to 0° for Negative Edge and set to 240°
+-	 * for Positive Edge.
+-	 * On A33 and similar SoCs there would be a 90° phase option,
+-	 * but it divides also dclk by 2.
+-	 * Following code is a way to avoid quirks all around TCON
+-	 * and DOTCLOCK drivers.
+-	 */
+-	if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE)
+-		clk_set_phase(tcon->dclk, 240);
+-
+ 	if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
+-		clk_set_phase(tcon->dclk, 0);
++		val |= SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE;
+ 
+ 	regmap_update_bits(tcon->regs, SUN4I_TCON0_IO_POL_REG,
+ 			   SUN4I_TCON0_IO_POL_HSYNC_POSITIVE |
+ 			   SUN4I_TCON0_IO_POL_VSYNC_POSITIVE |
++			   SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE |
+ 			   SUN4I_TCON0_IO_POL_DE_NEGATIVE,
+ 			   val);
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.h b/drivers/gpu/drm/sun4i/sun4i_tcon.h
+index ee555318e3c2f..e624f6977eb84 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_tcon.h
++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.h
+@@ -113,6 +113,7 @@
+ #define SUN4I_TCON0_IO_POL_REG			0x88
+ #define SUN4I_TCON0_IO_POL_DCLK_PHASE(phase)		((phase & 3) << 28)
+ #define SUN4I_TCON0_IO_POL_DE_NEGATIVE			BIT(27)
++#define SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE		BIT(26)
+ #define SUN4I_TCON0_IO_POL_HSYNC_POSITIVE		BIT(25)
+ #define SUN4I_TCON0_IO_POL_VSYNC_POSITIVE		BIT(24)
+ 
+diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
+index 424ad60b4f388..b2c8c68b7e261 100644
+--- a/drivers/gpu/drm/tegra/dc.c
++++ b/drivers/gpu/drm/tegra/dc.c
+@@ -2184,7 +2184,7 @@ static int tegra_dc_runtime_resume(struct host1x_client *client)
+ 	struct device *dev = client->dev;
+ 	int err;
+ 
+-	err = pm_runtime_get_sync(dev);
++	err = pm_runtime_resume_and_get(dev);
+ 	if (err < 0) {
+ 		dev_err(dev, "failed to get runtime PM: %d\n", err);
+ 		return err;
+diff --git a/drivers/gpu/drm/tegra/dsi.c b/drivers/gpu/drm/tegra/dsi.c
+index 5691ef1b0e586..f46d377f0c304 100644
+--- a/drivers/gpu/drm/tegra/dsi.c
++++ b/drivers/gpu/drm/tegra/dsi.c
+@@ -1111,7 +1111,7 @@ static int tegra_dsi_runtime_resume(struct host1x_client *client)
+ 	struct device *dev = client->dev;
+ 	int err;
+ 
+-	err = pm_runtime_get_sync(dev);
++	err = pm_runtime_resume_and_get(dev);
+ 	if (err < 0) {
+ 		dev_err(dev, "failed to get runtime PM: %d\n", err);
+ 		return err;
+diff --git a/drivers/gpu/drm/tegra/hdmi.c b/drivers/gpu/drm/tegra/hdmi.c
+index d09a24931c87c..e5d2a40260288 100644
+--- a/drivers/gpu/drm/tegra/hdmi.c
++++ b/drivers/gpu/drm/tegra/hdmi.c
+@@ -1510,7 +1510,7 @@ static int tegra_hdmi_runtime_resume(struct host1x_client *client)
+ 	struct device *dev = client->dev;
+ 	int err;
+ 
+-	err = pm_runtime_get_sync(dev);
++	err = pm_runtime_resume_and_get(dev);
+ 	if (err < 0) {
+ 		dev_err(dev, "failed to get runtime PM: %d\n", err);
+ 		return err;
+diff --git a/drivers/gpu/drm/tegra/hub.c b/drivers/gpu/drm/tegra/hub.c
+index 22a03f7ffdc12..5ce771cba1335 100644
+--- a/drivers/gpu/drm/tegra/hub.c
++++ b/drivers/gpu/drm/tegra/hub.c
+@@ -789,7 +789,7 @@ static int tegra_display_hub_runtime_resume(struct host1x_client *client)
+ 	unsigned int i;
+ 	int err;
+ 
+-	err = pm_runtime_get_sync(dev);
++	err = pm_runtime_resume_and_get(dev);
+ 	if (err < 0) {
+ 		dev_err(dev, "failed to get runtime PM: %d\n", err);
+ 		return err;
+diff --git a/drivers/gpu/drm/tegra/sor.c b/drivers/gpu/drm/tegra/sor.c
+index cc2aa2308a515..f02a035dda453 100644
+--- a/drivers/gpu/drm/tegra/sor.c
++++ b/drivers/gpu/drm/tegra/sor.c
+@@ -3218,7 +3218,7 @@ static int tegra_sor_runtime_resume(struct host1x_client *client)
+ 	struct device *dev = client->dev;
+ 	int err;
+ 
+-	err = pm_runtime_get_sync(dev);
++	err = pm_runtime_resume_and_get(dev);
+ 	if (err < 0) {
+ 		dev_err(dev, "failed to get runtime PM: %d\n", err);
+ 		return err;
+diff --git a/drivers/gpu/drm/tegra/vic.c b/drivers/gpu/drm/tegra/vic.c
+index ade56b860cf9d..b77f726303d89 100644
+--- a/drivers/gpu/drm/tegra/vic.c
++++ b/drivers/gpu/drm/tegra/vic.c
+@@ -314,7 +314,7 @@ static int vic_open_channel(struct tegra_drm_client *client,
+ 	struct vic *vic = to_vic(client);
+ 	int err;
+ 
+-	err = pm_runtime_get_sync(vic->dev);
++	err = pm_runtime_resume_and_get(vic->dev);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index eaba98e15de46..af5f01eff872c 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -119,24 +119,57 @@ static void vc5_hdmi_reset(struct vc4_hdmi *vc4_hdmi)
+ 		   HDMI_READ(HDMI_CLOCK_STOP) | VC4_DVP_HT_CLOCK_STOP_PIXEL);
+ }
+ 
++#ifdef CONFIG_DRM_VC4_HDMI_CEC
++static void vc4_hdmi_cec_update_clk_div(struct vc4_hdmi *vc4_hdmi)
++{
++	u16 clk_cnt;
++	u32 value;
++
++	value = HDMI_READ(HDMI_CEC_CNTRL_1);
++	value &= ~VC4_HDMI_CEC_DIV_CLK_CNT_MASK;
++
++	/*
++	 * Set the clock divider: the hsm_clock rate and this divider
++	 * setting will give a 40 kHz CEC clock.
++	 */
++	clk_cnt = clk_get_rate(vc4_hdmi->hsm_clock) / CEC_CLOCK_FREQ;
++	value |= clk_cnt << VC4_HDMI_CEC_DIV_CLK_CNT_SHIFT;
++	HDMI_WRITE(HDMI_CEC_CNTRL_1, value);
++}
++#else
++static void vc4_hdmi_cec_update_clk_div(struct vc4_hdmi *vc4_hdmi) {}
++#endif
++
+ static enum drm_connector_status
+ vc4_hdmi_connector_detect(struct drm_connector *connector, bool force)
+ {
+ 	struct vc4_hdmi *vc4_hdmi = connector_to_vc4_hdmi(connector);
++	bool connected = false;
+ 
+ 	if (vc4_hdmi->hpd_gpio) {
+ 		if (gpio_get_value_cansleep(vc4_hdmi->hpd_gpio) ^
+ 		    vc4_hdmi->hpd_active_low)
+-			return connector_status_connected;
+-		cec_phys_addr_invalidate(vc4_hdmi->cec_adap);
+-		return connector_status_disconnected;
++			connected = true;
++	} else if (drm_probe_ddc(vc4_hdmi->ddc)) {
++		connected = true;
++	} else if (HDMI_READ(HDMI_HOTPLUG) & VC4_HDMI_HOTPLUG_CONNECTED) {
++		connected = true;
+ 	}
+ 
+-	if (drm_probe_ddc(vc4_hdmi->ddc))
+-		return connector_status_connected;
++	if (connected) {
++		if (connector->status != connector_status_connected) {
++			struct edid *edid = drm_get_edid(connector, vc4_hdmi->ddc);
++
++			if (edid) {
++				cec_s_phys_addr_from_edid(vc4_hdmi->cec_adap, edid);
++				vc4_hdmi->encoder.hdmi_monitor = drm_detect_hdmi_monitor(edid);
++				kfree(edid);
++			}
++		}
+ 
+-	if (HDMI_READ(HDMI_HOTPLUG) & VC4_HDMI_HOTPLUG_CONNECTED)
+ 		return connector_status_connected;
++	}
++
+ 	cec_phys_addr_invalidate(vc4_hdmi->cec_adap);
+ 	return connector_status_disconnected;
+ }
+@@ -640,6 +673,8 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder)
+ 		return;
+ 	}
+ 
++	vc4_hdmi_cec_update_clk_div(vc4_hdmi);
++
+ 	/*
+ 	 * FIXME: When the pixel freq is 594MHz (4k60), this needs to be setup
+ 	 * at 300MHz.
+@@ -661,9 +696,6 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder)
+ 		return;
+ 	}
+ 
+-	if (vc4_hdmi->variant->reset)
+-		vc4_hdmi->variant->reset(vc4_hdmi);
+-
+ 	if (vc4_hdmi->variant->phy_init)
+ 		vc4_hdmi->variant->phy_init(vc4_hdmi, mode);
+ 
+@@ -791,6 +823,9 @@ static int vc4_hdmi_encoder_atomic_check(struct drm_encoder *encoder,
+ 		pixel_rate = mode->clock * 1000;
+ 	}
+ 
++	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
++		pixel_rate = pixel_rate * 2;
++
+ 	if (pixel_rate > vc4_hdmi->variant->max_pixel_clock)
+ 		return -EINVAL;
+ 
+@@ -1313,13 +1348,20 @@ static irqreturn_t vc4_cec_irq_handler_thread(int irq, void *priv)
+ 
+ static void vc4_cec_read_msg(struct vc4_hdmi *vc4_hdmi, u32 cntrl1)
+ {
++	struct drm_device *dev = vc4_hdmi->connector.dev;
+ 	struct cec_msg *msg = &vc4_hdmi->cec_rx_msg;
+ 	unsigned int i;
+ 
+ 	msg->len = 1 + ((cntrl1 & VC4_HDMI_CEC_REC_WRD_CNT_MASK) >>
+ 					VC4_HDMI_CEC_REC_WRD_CNT_SHIFT);
++
++	if (msg->len > 16) {
++		drm_err(dev, "Attempting to read too much data (%d)\n", msg->len);
++		return;
++	}
++
+ 	for (i = 0; i < msg->len; i += 4) {
+-		u32 val = HDMI_READ(HDMI_CEC_RX_DATA_1 + i);
++		u32 val = HDMI_READ(HDMI_CEC_RX_DATA_1 + (i >> 2));
+ 
+ 		msg->msg[i] = val & 0xff;
+ 		msg->msg[i + 1] = (val >> 8) & 0xff;
+@@ -1412,11 +1454,17 @@ static int vc4_hdmi_cec_adap_transmit(struct cec_adapter *adap, u8 attempts,
+ 				      u32 signal_free_time, struct cec_msg *msg)
+ {
+ 	struct vc4_hdmi *vc4_hdmi = cec_get_drvdata(adap);
++	struct drm_device *dev = vc4_hdmi->connector.dev;
+ 	u32 val;
+ 	unsigned int i;
+ 
++	if (msg->len > 16) {
++		drm_err(dev, "Attempting to transmit too much data (%d)\n", msg->len);
++		return -ENOMEM;
++	}
++
+ 	for (i = 0; i < msg->len; i += 4)
+-		HDMI_WRITE(HDMI_CEC_TX_DATA_1 + i,
++		HDMI_WRITE(HDMI_CEC_TX_DATA_1 + (i >> 2),
+ 			   (msg->msg[i]) |
+ 			   (msg->msg[i + 1] << 8) |
+ 			   (msg->msg[i + 2] << 16) |
+@@ -1461,16 +1509,14 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
+ 	cec_s_conn_info(vc4_hdmi->cec_adap, &conn_info);
+ 
+ 	HDMI_WRITE(HDMI_CEC_CPU_MASK_SET, 0xffffffff);
++
+ 	value = HDMI_READ(HDMI_CEC_CNTRL_1);
+-	value &= ~VC4_HDMI_CEC_DIV_CLK_CNT_MASK;
+-	/*
+-	 * Set the logical address to Unregistered and set the clock
+-	 * divider: the hsm_clock rate and this divider setting will
+-	 * give a 40 kHz CEC clock.
+-	 */
+-	value |= VC4_HDMI_CEC_ADDR_MASK |
+-		 (4091 << VC4_HDMI_CEC_DIV_CLK_CNT_SHIFT);
++	/* Set the logical address to Unregistered */
++	value |= VC4_HDMI_CEC_ADDR_MASK;
+ 	HDMI_WRITE(HDMI_CEC_CNTRL_1, value);
++
++	vc4_hdmi_cec_update_clk_div(vc4_hdmi);
++
+ 	ret = devm_request_threaded_irq(&pdev->dev, platform_get_irq(pdev, 0),
+ 					vc4_cec_irq_handler,
+ 					vc4_cec_irq_handler_thread, 0,
+@@ -1741,6 +1787,9 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ 	vc4_hdmi->disable_wifi_frequencies =
+ 		of_property_read_bool(dev->of_node, "wifi-2.4ghz-coexistence");
+ 
++	if (vc4_hdmi->variant->reset)
++		vc4_hdmi->variant->reset(vc4_hdmi);
++
+ 	pm_runtime_enable(dev);
+ 
+ 	drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS);
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi_regs.h b/drivers/gpu/drm/vc4/vc4_hdmi_regs.h
+index 7c6b4818f2455..6c0dfbbe1a7ef 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi_regs.h
++++ b/drivers/gpu/drm/vc4/vc4_hdmi_regs.h
+@@ -29,6 +29,7 @@ enum vc4_hdmi_field {
+ 	HDMI_CEC_CPU_MASK_SET,
+ 	HDMI_CEC_CPU_MASK_STATUS,
+ 	HDMI_CEC_CPU_STATUS,
++	HDMI_CEC_CPU_SET,
+ 
+ 	/*
+ 	 * Transmit data, first byte is low byte of the 32-bit reg.
+@@ -196,9 +197,10 @@ static const struct vc4_hdmi_register vc4_hdmi_fields[] = {
+ 	VC4_HDMI_REG(HDMI_TX_PHY_RESET_CTL, 0x02c0),
+ 	VC4_HDMI_REG(HDMI_TX_PHY_CTL_0, 0x02c4),
+ 	VC4_HDMI_REG(HDMI_CEC_CPU_STATUS, 0x0340),
++	VC4_HDMI_REG(HDMI_CEC_CPU_SET, 0x0344),
+ 	VC4_HDMI_REG(HDMI_CEC_CPU_CLEAR, 0x0348),
+ 	VC4_HDMI_REG(HDMI_CEC_CPU_MASK_STATUS, 0x034c),
+-	VC4_HDMI_REG(HDMI_CEC_CPU_MASK_SET, 0x034c),
++	VC4_HDMI_REG(HDMI_CEC_CPU_MASK_SET, 0x0350),
+ 	VC4_HDMI_REG(HDMI_CEC_CPU_MASK_CLEAR, 0x0354),
+ 	VC4_HDMI_REG(HDMI_RAM_PACKET_START, 0x0400),
+ };
+diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
+index c30c75ee83fce..8502400b2f9c9 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
++++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
+@@ -39,9 +39,6 @@ static int virtio_gpu_gem_create(struct drm_file *file,
+ 	int ret;
+ 	u32 handle;
+ 
+-	if (vgdev->has_virgl_3d)
+-		virtio_gpu_create_context(dev, file);
+-
+ 	ret = virtio_gpu_object_create(vgdev, params, &obj, NULL);
+ 	if (ret < 0)
+ 		return ret;
+@@ -119,6 +116,11 @@ int virtio_gpu_gem_object_open(struct drm_gem_object *obj,
+ 	if (!vgdev->has_virgl_3d)
+ 		goto out_notify;
+ 
++	/* the context might still be missing when the first ioctl is
++	 * DRM_IOCTL_MODE_CREATE_DUMB or DRM_IOCTL_PRIME_FD_TO_HANDLE
++	 */
++	virtio_gpu_create_context(obj->dev, file);
++
+ 	objs = virtio_gpu_array_alloc(1);
+ 	if (!objs)
+ 		return -ENOMEM;
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 8a8b2b982f83c..097cb1ee31268 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1307,6 +1307,9 @@ EXPORT_SYMBOL_GPL(hid_open_report);
+ 
+ static s32 snto32(__u32 value, unsigned n)
+ {
++	if (!value || !n)
++		return 0;
++
+ 	switch (n) {
+ 	case 8:  return ((__s8)value);
+ 	case 16: return ((__s16)value);
+diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
+index 45e7e0bdd382b..fcdc922bc9733 100644
+--- a/drivers/hid/hid-logitech-dj.c
++++ b/drivers/hid/hid-logitech-dj.c
+@@ -980,6 +980,7 @@ static void logi_hidpp_recv_queue_notif(struct hid_device *hdev,
+ 	case 0x07:
+ 		device_type = "eQUAD step 4 Gaming";
+ 		logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem);
++		workitem.reports_supported |= STD_KEYBOARD;
+ 		break;
+ 	case 0x08:
+ 		device_type = "eQUAD step 4 for gamepads";
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 1bd0eb71559ca..44d715c12f6ab 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2600,7 +2600,12 @@ static void wacom_wac_finger_event(struct hid_device *hdev,
+ 		wacom_wac->is_invalid_bt_frame = !value;
+ 		return;
+ 	case HID_DG_CONTACTMAX:
+-		features->touch_max = value;
++		if (!features->touch_max) {
++			features->touch_max = value;
++		} else {
++			hid_warn(hdev, "%s: ignoring attempt to overwrite non-zero touch_max "
++				 "%d -> %d\n", __func__, features->touch_max, value);
++		}
+ 		return;
+ 	}
+ 
+diff --git a/drivers/hsi/controllers/omap_ssi_core.c b/drivers/hsi/controllers/omap_ssi_core.c
+index 7596dc1646484..44a3f5660c109 100644
+--- a/drivers/hsi/controllers/omap_ssi_core.c
++++ b/drivers/hsi/controllers/omap_ssi_core.c
+@@ -424,7 +424,7 @@ static int ssi_hw_init(struct hsi_controller *ssi)
+ 	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+ 	int err;
+ 
+-	err = pm_runtime_get_sync(ssi->device.parent);
++	err = pm_runtime_resume_and_get(ssi->device.parent);
+ 	if (err < 0) {
+ 		dev_err(&ssi->device, "runtime PM failed %d\n", err);
+ 		return err;
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 1d44bb635bb84..6be9f56cb6270 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -1102,8 +1102,7 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
+ 			vmbus_device_unregister(channel->device_obj);
+ 			put_device(dev);
+ 		}
+-	}
+-	if (channel->primary_channel != NULL) {
++	} else if (channel->primary_channel != NULL) {
+ 		/*
+ 		 * Sub-channel is being rescinded. Following is the channel
+ 		 * close sequence when initiated from the driveri (refer to
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+index 95b54b0a36252..74d3e2fe43d46 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+@@ -131,7 +131,8 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ 	writel_relaxed(0x0, drvdata->base + TRCAUXCTLR);
+ 	writel_relaxed(config->eventctrl0, drvdata->base + TRCEVENTCTL0R);
+ 	writel_relaxed(config->eventctrl1, drvdata->base + TRCEVENTCTL1R);
+-	writel_relaxed(config->stall_ctrl, drvdata->base + TRCSTALLCTLR);
++	if (drvdata->stallctl)
++		writel_relaxed(config->stall_ctrl, drvdata->base + TRCSTALLCTLR);
+ 	writel_relaxed(config->ts_ctrl, drvdata->base + TRCTSCTLR);
+ 	writel_relaxed(config->syncfreq, drvdata->base + TRCSYNCPR);
+ 	writel_relaxed(config->ccctlr, drvdata->base + TRCCCCTLR);
+@@ -1187,7 +1188,8 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ 	state->trcauxctlr = readl(drvdata->base + TRCAUXCTLR);
+ 	state->trceventctl0r = readl(drvdata->base + TRCEVENTCTL0R);
+ 	state->trceventctl1r = readl(drvdata->base + TRCEVENTCTL1R);
+-	state->trcstallctlr = readl(drvdata->base + TRCSTALLCTLR);
++	if (drvdata->stallctl)
++		state->trcstallctlr = readl(drvdata->base + TRCSTALLCTLR);
+ 	state->trctsctlr = readl(drvdata->base + TRCTSCTLR);
+ 	state->trcsyncpr = readl(drvdata->base + TRCSYNCPR);
+ 	state->trcccctlr = readl(drvdata->base + TRCCCCTLR);
+@@ -1254,7 +1256,8 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ 
+ 	state->trcclaimset = readl(drvdata->base + TRCCLAIMCLR);
+ 
+-	state->trcpdcr = readl(drvdata->base + TRCPDCR);
++	if (!drvdata->skip_power_up)
++		state->trcpdcr = readl(drvdata->base + TRCPDCR);
+ 
+ 	/* wait for TRCSTATR.IDLE to go up */
+ 	if (coresight_timeout(drvdata->base, TRCSTATR, TRCSTATR_IDLE_BIT, 1)) {
+@@ -1272,9 +1275,9 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ 	 * potentially save power on systems that respect the TRCPDCR_PU
+ 	 * despite requesting software to save/restore state.
+ 	 */
+-	writel_relaxed((state->trcpdcr & ~TRCPDCR_PU),
+-			drvdata->base + TRCPDCR);
+-
++	if (!drvdata->skip_power_up)
++		writel_relaxed((state->trcpdcr & ~TRCPDCR_PU),
++				drvdata->base + TRCPDCR);
+ out:
+ 	CS_LOCK(drvdata->base);
+ 	return ret;
+@@ -1296,7 +1299,8 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
+ 	writel_relaxed(state->trcauxctlr, drvdata->base + TRCAUXCTLR);
+ 	writel_relaxed(state->trceventctl0r, drvdata->base + TRCEVENTCTL0R);
+ 	writel_relaxed(state->trceventctl1r, drvdata->base + TRCEVENTCTL1R);
+-	writel_relaxed(state->trcstallctlr, drvdata->base + TRCSTALLCTLR);
++	if (drvdata->stallctl)
++		writel_relaxed(state->trcstallctlr, drvdata->base + TRCSTALLCTLR);
+ 	writel_relaxed(state->trctsctlr, drvdata->base + TRCTSCTLR);
+ 	writel_relaxed(state->trcsyncpr, drvdata->base + TRCSYNCPR);
+ 	writel_relaxed(state->trcccctlr, drvdata->base + TRCCCCTLR);
+@@ -1368,7 +1372,8 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
+ 
+ 	writel_relaxed(state->trcclaimset, drvdata->base + TRCCLAIMSET);
+ 
+-	writel_relaxed(state->trcpdcr, drvdata->base + TRCPDCR);
++	if (!drvdata->skip_power_up)
++		writel_relaxed(state->trcpdcr, drvdata->base + TRCPDCR);
+ 
+ 	drvdata->state_needs_restore = false;
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+index 989ce7b8ade7c..4682f26139961 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+@@ -389,7 +389,7 @@ static ssize_t mode_store(struct device *dev,
+ 		config->eventctrl1 &= ~BIT(12);
+ 
+ 	/* bit[8], Instruction stall bit */
+-	if (config->mode & ETM_MODE_ISTALL_EN)
++	if ((config->mode & ETM_MODE_ISTALL_EN) && (drvdata->stallctl == true))
+ 		config->stall_ctrl |= BIT(8);
+ 	else
+ 		config->stall_ctrl &= ~BIT(8);
+diff --git a/drivers/i2c/busses/i2c-bcm-iproc.c b/drivers/i2c/busses/i2c-bcm-iproc.c
+index d8295b1c379d1..35baca2f62c4e 100644
+--- a/drivers/i2c/busses/i2c-bcm-iproc.c
++++ b/drivers/i2c/busses/i2c-bcm-iproc.c
+@@ -159,6 +159,11 @@
+ 
+ #define IE_S_ALL_INTERRUPT_SHIFT     21
+ #define IE_S_ALL_INTERRUPT_MASK      0x3f
++/*
++ * It takes ~18us to reading 10bytes of data, hence to keep tasklet
++ * running for less time, max slave read per tasklet is set to 10 bytes.
++ */
++#define MAX_SLAVE_RX_PER_INT         10
+ 
+ enum i2c_slave_read_status {
+ 	I2C_SLAVE_RX_FIFO_EMPTY = 0,
+@@ -205,8 +210,18 @@ struct bcm_iproc_i2c_dev {
+ 	/* bytes that have been read */
+ 	unsigned int rx_bytes;
+ 	unsigned int thld_bytes;
++
++	bool slave_rx_only;
++	bool rx_start_rcvd;
++	bool slave_read_complete;
++	u32 tx_underrun;
++	u32 slave_int_mask;
++	struct tasklet_struct slave_rx_tasklet;
+ };
+ 
++/* tasklet to process slave rx data */
++static void slave_rx_tasklet_fn(unsigned long);
++
+ /*
+  * Can be expanded in the future if more interrupt status bits are utilized
+  */
+@@ -215,7 +230,8 @@ struct bcm_iproc_i2c_dev {
+ 
+ #define ISR_MASK_SLAVE (BIT(IS_S_START_BUSY_SHIFT)\
+ 		| BIT(IS_S_RX_EVENT_SHIFT) | BIT(IS_S_RD_EVENT_SHIFT)\
+-		| BIT(IS_S_TX_UNDERRUN_SHIFT))
++		| BIT(IS_S_TX_UNDERRUN_SHIFT) | BIT(IS_S_RX_FIFO_FULL_SHIFT)\
++		| BIT(IS_S_RX_THLD_SHIFT))
+ 
+ static int bcm_iproc_i2c_reg_slave(struct i2c_client *slave);
+ static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave);
+@@ -259,6 +275,7 @@ static void bcm_iproc_i2c_slave_init(
+ {
+ 	u32 val;
+ 
++	iproc_i2c->tx_underrun = 0;
+ 	if (need_reset) {
+ 		/* put controller in reset */
+ 		val = iproc_i2c_rd_reg(iproc_i2c, CFG_OFFSET);
+@@ -295,8 +312,11 @@ static void bcm_iproc_i2c_slave_init(
+ 
+ 	/* Enable interrupt register to indicate a valid byte in receive fifo */
+ 	val = BIT(IE_S_RX_EVENT_SHIFT);
++	/* Enable interrupt register to indicate a Master read transaction */
++	val |= BIT(IE_S_RD_EVENT_SHIFT);
+ 	/* Enable interrupt register for the Slave BUSY command */
+ 	val |= BIT(IE_S_START_BUSY_SHIFT);
++	iproc_i2c->slave_int_mask = val;
+ 	iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, val);
+ }
+ 
+@@ -321,76 +341,176 @@ static void bcm_iproc_i2c_check_slave_status(
+ 	}
+ }
+ 
+-static bool bcm_iproc_i2c_slave_isr(struct bcm_iproc_i2c_dev *iproc_i2c,
+-				    u32 status)
++static void bcm_iproc_i2c_slave_read(struct bcm_iproc_i2c_dev *iproc_i2c)
+ {
++	u8 rx_data, rx_status;
++	u32 rx_bytes = 0;
+ 	u32 val;
+-	u8 value, rx_status;
+ 
+-	/* Slave RX byte receive */
+-	if (status & BIT(IS_S_RX_EVENT_SHIFT)) {
++	while (rx_bytes < MAX_SLAVE_RX_PER_INT) {
+ 		val = iproc_i2c_rd_reg(iproc_i2c, S_RX_OFFSET);
+ 		rx_status = (val >> S_RX_STATUS_SHIFT) & S_RX_STATUS_MASK;
++		rx_data = ((val >> S_RX_DATA_SHIFT) & S_RX_DATA_MASK);
++
+ 		if (rx_status == I2C_SLAVE_RX_START) {
+-			/* Start of SMBUS for Master write */
++			/* Start of SMBUS Master write */
+ 			i2c_slave_event(iproc_i2c->slave,
+-					I2C_SLAVE_WRITE_REQUESTED, &value);
+-
+-			val = iproc_i2c_rd_reg(iproc_i2c, S_RX_OFFSET);
+-			value = (u8)((val >> S_RX_DATA_SHIFT) & S_RX_DATA_MASK);
++					I2C_SLAVE_WRITE_REQUESTED, &rx_data);
++			iproc_i2c->rx_start_rcvd = true;
++			iproc_i2c->slave_read_complete = false;
++		} else if (rx_status == I2C_SLAVE_RX_DATA &&
++			   iproc_i2c->rx_start_rcvd) {
++			/* Middle of SMBUS Master write */
+ 			i2c_slave_event(iproc_i2c->slave,
+-					I2C_SLAVE_WRITE_RECEIVED, &value);
+-		} else if (status & BIT(IS_S_RD_EVENT_SHIFT)) {
+-			/* Start of SMBUS for Master Read */
+-			i2c_slave_event(iproc_i2c->slave,
+-					I2C_SLAVE_READ_REQUESTED, &value);
+-			iproc_i2c_wr_reg(iproc_i2c, S_TX_OFFSET, value);
++					I2C_SLAVE_WRITE_RECEIVED, &rx_data);
++		} else if (rx_status == I2C_SLAVE_RX_END &&
++			   iproc_i2c->rx_start_rcvd) {
++			/* End of SMBUS Master write */
++			if (iproc_i2c->slave_rx_only)
++				i2c_slave_event(iproc_i2c->slave,
++						I2C_SLAVE_WRITE_RECEIVED,
++						&rx_data);
++
++			i2c_slave_event(iproc_i2c->slave, I2C_SLAVE_STOP,
++					&rx_data);
++		} else if (rx_status == I2C_SLAVE_RX_FIFO_EMPTY) {
++			iproc_i2c->rx_start_rcvd = false;
++			iproc_i2c->slave_read_complete = true;
++			break;
++		}
+ 
+-			val = BIT(S_CMD_START_BUSY_SHIFT);
+-			iproc_i2c_wr_reg(iproc_i2c, S_CMD_OFFSET, val);
++		rx_bytes++;
++	}
++}
+ 
+-			/*
+-			 * Enable interrupt for TX FIFO becomes empty and
+-			 * less than PKT_LENGTH bytes were output on the SMBUS
+-			 */
+-			val = iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET);
+-			val |= BIT(IE_S_TX_UNDERRUN_SHIFT);
+-			iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, val);
+-		} else {
+-			/* Master write other than start */
+-			value = (u8)((val >> S_RX_DATA_SHIFT) & S_RX_DATA_MASK);
++static void slave_rx_tasklet_fn(unsigned long data)
++{
++	struct bcm_iproc_i2c_dev *iproc_i2c = (struct bcm_iproc_i2c_dev *)data;
++	u32 int_clr;
++
++	bcm_iproc_i2c_slave_read(iproc_i2c);
++
++	/* clear pending IS_S_RX_EVENT_SHIFT interrupt */
++	int_clr = BIT(IS_S_RX_EVENT_SHIFT);
++
++	if (!iproc_i2c->slave_rx_only && iproc_i2c->slave_read_complete) {
++		/*
++		 * In case of single byte master-read request,
++		 * IS_S_TX_UNDERRUN_SHIFT event is generated before
++		 * IS_S_START_BUSY_SHIFT event. Hence start slave data send
++		 * from first IS_S_TX_UNDERRUN_SHIFT event.
++		 *
++		 * This means don't send any data from slave when
++		 * IS_S_RD_EVENT_SHIFT event is generated else it will increment
++		 * eeprom or other backend slave driver read pointer twice.
++		 */
++		iproc_i2c->tx_underrun = 0;
++		iproc_i2c->slave_int_mask |= BIT(IE_S_TX_UNDERRUN_SHIFT);
++
++		/* clear IS_S_RD_EVENT_SHIFT interrupt */
++		int_clr |= BIT(IS_S_RD_EVENT_SHIFT);
++	}
++
++	/* clear slave interrupt */
++	iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET, int_clr);
++	/* enable slave interrupts */
++	iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, iproc_i2c->slave_int_mask);
++}
++
++static bool bcm_iproc_i2c_slave_isr(struct bcm_iproc_i2c_dev *iproc_i2c,
++				    u32 status)
++{
++	u32 val;
++	u8 value;
++
++	/*
++	 * Slave events in case of master-write, master-write-read and,
++	 * master-read
++	 *
++	 * Master-write     : only IS_S_RX_EVENT_SHIFT event
++	 * Master-write-read: both IS_S_RX_EVENT_SHIFT and IS_S_RD_EVENT_SHIFT
++	 *                    events
++	 * Master-read      : both IS_S_RX_EVENT_SHIFT and IS_S_RD_EVENT_SHIFT
++	 *                    events or only IS_S_RD_EVENT_SHIFT
++	 */
++	if (status & BIT(IS_S_RX_EVENT_SHIFT) ||
++	    status & BIT(IS_S_RD_EVENT_SHIFT)) {
++		/* disable slave interrupts */
++		val = iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET);
++		val &= ~iproc_i2c->slave_int_mask;
++		iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, val);
++
++		if (status & BIT(IS_S_RD_EVENT_SHIFT))
++			/* Master-write-read request */
++			iproc_i2c->slave_rx_only = false;
++		else
++			/* Master-write request only */
++			iproc_i2c->slave_rx_only = true;
++
++		/* schedule tasklet to read data later */
++		tasklet_schedule(&iproc_i2c->slave_rx_tasklet);
++
++		/* clear only IS_S_RX_EVENT_SHIFT interrupt */
++		iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET,
++				 BIT(IS_S_RX_EVENT_SHIFT));
++	}
++
++	if (status & BIT(IS_S_TX_UNDERRUN_SHIFT)) {
++		iproc_i2c->tx_underrun++;
++		if (iproc_i2c->tx_underrun == 1)
++			/* Start of SMBUS for Master Read */
+ 			i2c_slave_event(iproc_i2c->slave,
+-					I2C_SLAVE_WRITE_RECEIVED, &value);
+-			if (rx_status == I2C_SLAVE_RX_END)
+-				i2c_slave_event(iproc_i2c->slave,
+-						I2C_SLAVE_STOP, &value);
+-		}
+-	} else if (status & BIT(IS_S_TX_UNDERRUN_SHIFT)) {
+-		/* Master read other than start */
+-		i2c_slave_event(iproc_i2c->slave,
+-				I2C_SLAVE_READ_PROCESSED, &value);
++					I2C_SLAVE_READ_REQUESTED,
++					&value);
++		else
++			/* Master read other than start */
++			i2c_slave_event(iproc_i2c->slave,
++					I2C_SLAVE_READ_PROCESSED,
++					&value);
+ 
+ 		iproc_i2c_wr_reg(iproc_i2c, S_TX_OFFSET, value);
++		/* start transfer */
+ 		val = BIT(S_CMD_START_BUSY_SHIFT);
+ 		iproc_i2c_wr_reg(iproc_i2c, S_CMD_OFFSET, val);
++
++		/* clear interrupt */
++		iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET,
++				 BIT(IS_S_TX_UNDERRUN_SHIFT));
+ 	}
+ 
+-	/* Stop */
++	/* Stop received from master in case of master read transaction */
+ 	if (status & BIT(IS_S_START_BUSY_SHIFT)) {
+-		i2c_slave_event(iproc_i2c->slave, I2C_SLAVE_STOP, &value);
+ 		/*
+ 		 * Enable interrupt for TX FIFO becomes empty and
+ 		 * less than PKT_LENGTH bytes were output on the SMBUS
+ 		 */
+-		val = iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET);
+-		val &= ~BIT(IE_S_TX_UNDERRUN_SHIFT);
+-		iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, val);
++		iproc_i2c->slave_int_mask &= ~BIT(IE_S_TX_UNDERRUN_SHIFT);
++		iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET,
++				 iproc_i2c->slave_int_mask);
++
++		/* End of SMBUS for Master Read */
++		val = BIT(S_TX_WR_STATUS_SHIFT);
++		iproc_i2c_wr_reg(iproc_i2c, S_TX_OFFSET, val);
++
++		val = BIT(S_CMD_START_BUSY_SHIFT);
++		iproc_i2c_wr_reg(iproc_i2c, S_CMD_OFFSET, val);
++
++		/* flush TX FIFOs */
++		val = iproc_i2c_rd_reg(iproc_i2c, S_FIFO_CTRL_OFFSET);
++		val |= (BIT(S_FIFO_TX_FLUSH_SHIFT));
++		iproc_i2c_wr_reg(iproc_i2c, S_FIFO_CTRL_OFFSET, val);
++
++		i2c_slave_event(iproc_i2c->slave, I2C_SLAVE_STOP, &value);
++
++		/* clear interrupt */
++		iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET,
++				 BIT(IS_S_START_BUSY_SHIFT));
+ 	}
+ 
+-	/* clear interrupt status */
+-	iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET, status);
++	/* check slave transmit status only if slave is transmitting */
++	if (!iproc_i2c->slave_rx_only)
++		bcm_iproc_i2c_check_slave_status(iproc_i2c);
+ 
+-	bcm_iproc_i2c_check_slave_status(iproc_i2c);
+ 	return true;
+ }
+ 
+@@ -505,12 +625,17 @@ static void bcm_iproc_i2c_process_m_event(struct bcm_iproc_i2c_dev *iproc_i2c,
+ static irqreturn_t bcm_iproc_i2c_isr(int irq, void *data)
+ {
+ 	struct bcm_iproc_i2c_dev *iproc_i2c = data;
+-	u32 status = iproc_i2c_rd_reg(iproc_i2c, IS_OFFSET);
++	u32 slave_status;
++	u32 status;
+ 	bool ret;
+-	u32 sl_status = status & ISR_MASK_SLAVE;
+ 
+-	if (sl_status) {
+-		ret = bcm_iproc_i2c_slave_isr(iproc_i2c, sl_status);
++	status = iproc_i2c_rd_reg(iproc_i2c, IS_OFFSET);
++	/* process only slave interrupt which are enabled */
++	slave_status = status & iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET) &
++		       ISR_MASK_SLAVE;
++
++	if (slave_status) {
++		ret = bcm_iproc_i2c_slave_isr(iproc_i2c, slave_status);
+ 		if (ret)
+ 			return IRQ_HANDLED;
+ 		else
+@@ -1066,6 +1191,10 @@ static int bcm_iproc_i2c_reg_slave(struct i2c_client *slave)
+ 		return -EAFNOSUPPORT;
+ 
+ 	iproc_i2c->slave = slave;
++
++	tasklet_init(&iproc_i2c->slave_rx_tasklet, slave_rx_tasklet_fn,
++		     (unsigned long)iproc_i2c);
++
+ 	bcm_iproc_i2c_slave_init(iproc_i2c, false);
+ 	return 0;
+ }
+@@ -1086,6 +1215,8 @@ static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave)
+ 			IE_S_ALL_INTERRUPT_SHIFT);
+ 	iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, tmp);
+ 
++	tasklet_kill(&iproc_i2c->slave_rx_tasklet);
++
+ 	/* Erase the slave address programmed */
+ 	tmp = iproc_i2c_rd_reg(iproc_i2c, S_CFG_SMBUS_ADDR_OFFSET);
+ 	tmp &= ~BIT(S_CFG_EN_NIC_SMB_ADDR3_SHIFT);
+diff --git a/drivers/i2c/busses/i2c-brcmstb.c b/drivers/i2c/busses/i2c-brcmstb.c
+index d4e0a0f6732ae..ba766d24219ef 100644
+--- a/drivers/i2c/busses/i2c-brcmstb.c
++++ b/drivers/i2c/busses/i2c-brcmstb.c
+@@ -316,7 +316,7 @@ static int brcmstb_send_i2c_cmd(struct brcmstb_i2c_dev *dev,
+ 		goto cmd_out;
+ 	}
+ 
+-	if ((CMD_RD || CMD_WR) &&
++	if ((cmd == CMD_RD || cmd == CMD_WR) &&
+ 	    bsc_readl(dev, iic_enable) & BSC_IIC_EN_NOACK_MASK) {
+ 		rc = -EREMOTEIO;
+ 		dev_dbg(dev->device, "controller received NOACK intr for %s\n",
+diff --git a/drivers/i2c/busses/i2c-exynos5.c b/drivers/i2c/busses/i2c-exynos5.c
+index 6ce3ec03b5952..b6f2c63776140 100644
+--- a/drivers/i2c/busses/i2c-exynos5.c
++++ b/drivers/i2c/busses/i2c-exynos5.c
+@@ -606,6 +606,7 @@ static void exynos5_i2c_message_start(struct exynos5_i2c *i2c, int stop)
+ 	u32 i2c_ctl;
+ 	u32 int_en = 0;
+ 	u32 i2c_auto_conf = 0;
++	u32 i2c_addr = 0;
+ 	u32 fifo_ctl;
+ 	unsigned long flags;
+ 	unsigned short trig_lvl;
+@@ -640,7 +641,12 @@ static void exynos5_i2c_message_start(struct exynos5_i2c *i2c, int stop)
+ 		int_en |= HSI2C_INT_TX_ALMOSTEMPTY_EN;
+ 	}
+ 
+-	writel(HSI2C_SLV_ADDR_MAS(i2c->msg->addr), i2c->regs + HSI2C_ADDR);
++	i2c_addr = HSI2C_SLV_ADDR_MAS(i2c->msg->addr);
++
++	if (i2c->op_clock >= I2C_MAX_FAST_MODE_PLUS_FREQ)
++		i2c_addr |= HSI2C_MASTER_ID(MASTER_ID(i2c->adap.nr));
++
++	writel(i2c_addr, i2c->regs + HSI2C_ADDR);
+ 
+ 	writel(fifo_ctl, i2c->regs + HSI2C_FIFO_CTL);
+ 	writel(i2c_ctl, i2c->regs + HSI2C_CTL);
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index dce75b85253c1..4a6dd05d6dbf9 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -86,6 +86,9 @@ struct geni_i2c_dev {
+ 	u32 clk_freq_out;
+ 	const struct geni_i2c_clk_fld *clk_fld;
+ 	int suspended;
++	void *dma_buf;
++	size_t xfer_len;
++	dma_addr_t dma_addr;
+ };
+ 
+ struct geni_i2c_err_log {
+@@ -348,14 +351,39 @@ static void geni_i2c_tx_fsm_rst(struct geni_i2c_dev *gi2c)
+ 		dev_err(gi2c->se.dev, "Timeout resetting TX_FSM\n");
+ }
+ 
++static void geni_i2c_rx_msg_cleanup(struct geni_i2c_dev *gi2c,
++				     struct i2c_msg *cur)
++{
++	gi2c->cur_rd = 0;
++	if (gi2c->dma_buf) {
++		if (gi2c->err)
++			geni_i2c_rx_fsm_rst(gi2c);
++		geni_se_rx_dma_unprep(&gi2c->se, gi2c->dma_addr, gi2c->xfer_len);
++		i2c_put_dma_safe_msg_buf(gi2c->dma_buf, cur, !gi2c->err);
++	}
++}
++
++static void geni_i2c_tx_msg_cleanup(struct geni_i2c_dev *gi2c,
++				     struct i2c_msg *cur)
++{
++	gi2c->cur_wr = 0;
++	if (gi2c->dma_buf) {
++		if (gi2c->err)
++			geni_i2c_tx_fsm_rst(gi2c);
++		geni_se_tx_dma_unprep(&gi2c->se, gi2c->dma_addr, gi2c->xfer_len);
++		i2c_put_dma_safe_msg_buf(gi2c->dma_buf, cur, !gi2c->err);
++	}
++}
++
+ static int geni_i2c_rx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ 				u32 m_param)
+ {
+-	dma_addr_t rx_dma;
++	dma_addr_t rx_dma = 0;
+ 	unsigned long time_left;
+ 	void *dma_buf = NULL;
+ 	struct geni_se *se = &gi2c->se;
+ 	size_t len = msg->len;
++	struct i2c_msg *cur;
+ 
+ 	if (!of_machine_is_compatible("lenovo,yoga-c630"))
+ 		dma_buf = i2c_get_dma_safe_msg_buf(msg, 32);
+@@ -372,19 +400,18 @@ static int geni_i2c_rx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ 		geni_se_select_mode(se, GENI_SE_FIFO);
+ 		i2c_put_dma_safe_msg_buf(dma_buf, msg, false);
+ 		dma_buf = NULL;
++	} else {
++		gi2c->xfer_len = len;
++		gi2c->dma_addr = rx_dma;
++		gi2c->dma_buf = dma_buf;
+ 	}
+ 
++	cur = gi2c->cur;
+ 	time_left = wait_for_completion_timeout(&gi2c->done, XFER_TIMEOUT);
+ 	if (!time_left)
+ 		geni_i2c_abort_xfer(gi2c);
+ 
+-	gi2c->cur_rd = 0;
+-	if (dma_buf) {
+-		if (gi2c->err)
+-			geni_i2c_rx_fsm_rst(gi2c);
+-		geni_se_rx_dma_unprep(se, rx_dma, len);
+-		i2c_put_dma_safe_msg_buf(dma_buf, msg, !gi2c->err);
+-	}
++	geni_i2c_rx_msg_cleanup(gi2c, cur);
+ 
+ 	return gi2c->err;
+ }
+@@ -392,11 +419,12 @@ static int geni_i2c_rx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ static int geni_i2c_tx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ 				u32 m_param)
+ {
+-	dma_addr_t tx_dma;
++	dma_addr_t tx_dma = 0;
+ 	unsigned long time_left;
+ 	void *dma_buf = NULL;
+ 	struct geni_se *se = &gi2c->se;
+ 	size_t len = msg->len;
++	struct i2c_msg *cur;
+ 
+ 	if (!of_machine_is_compatible("lenovo,yoga-c630"))
+ 		dma_buf = i2c_get_dma_safe_msg_buf(msg, 32);
+@@ -413,22 +441,21 @@ static int geni_i2c_tx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ 		geni_se_select_mode(se, GENI_SE_FIFO);
+ 		i2c_put_dma_safe_msg_buf(dma_buf, msg, false);
+ 		dma_buf = NULL;
++	} else {
++		gi2c->xfer_len = len;
++		gi2c->dma_addr = tx_dma;
++		gi2c->dma_buf = dma_buf;
+ 	}
+ 
+ 	if (!dma_buf) /* Get FIFO IRQ */
+ 		writel_relaxed(1, se->base + SE_GENI_TX_WATERMARK_REG);
+ 
++	cur = gi2c->cur;
+ 	time_left = wait_for_completion_timeout(&gi2c->done, XFER_TIMEOUT);
+ 	if (!time_left)
+ 		geni_i2c_abort_xfer(gi2c);
+ 
+-	gi2c->cur_wr = 0;
+-	if (dma_buf) {
+-		if (gi2c->err)
+-			geni_i2c_tx_fsm_rst(gi2c);
+-		geni_se_tx_dma_unprep(se, tx_dma, len);
+-		i2c_put_dma_safe_msg_buf(dma_buf, msg, !gi2c->err);
+-	}
++	geni_i2c_tx_msg_cleanup(gi2c, cur);
+ 
+ 	return gi2c->err;
+ }
+diff --git a/drivers/ide/falconide.c b/drivers/ide/falconide.c
+index dbeb2605e5f6e..607c44bc50f1b 100644
+--- a/drivers/ide/falconide.c
++++ b/drivers/ide/falconide.c
+@@ -166,6 +166,7 @@ static int __init falconide_init(struct platform_device *pdev)
+ 	if (rc)
+ 		goto err_free;
+ 
++	platform_set_drvdata(pdev, host);
+ 	return 0;
+ err_free:
+ 	ide_host_free(host);
+@@ -176,7 +177,7 @@ err:
+ 
+ static int falconide_remove(struct platform_device *pdev)
+ {
+-	struct ide_host *host = dev_get_drvdata(&pdev->dev);
++	struct ide_host *host = platform_get_drvdata(pdev);
+ 
+ 	ide_host_remove(host);
+ 
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 5afd142fe8c78..8e578f73a074c 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -4332,7 +4332,7 @@ static int cm_add_one(struct ib_device *ib_device)
+ 	unsigned long flags;
+ 	int ret;
+ 	int count = 0;
+-	u8 i;
++	unsigned int i;
+ 
+ 	cm_dev = kzalloc(struct_size(cm_dev, port, ib_device->phys_port_cnt),
+ 			 GFP_KERNEL);
+@@ -4344,7 +4344,7 @@ static int cm_add_one(struct ib_device *ib_device)
+ 	cm_dev->going_down = 0;
+ 
+ 	set_bit(IB_MGMT_METHOD_SEND, reg_req.method_mask);
+-	for (i = 1; i <= ib_device->phys_port_cnt; i++) {
++	rdma_for_each_port (ib_device, i) {
+ 		if (!rdma_cap_ib_cm(ib_device, i))
+ 			continue;
+ 
+@@ -4430,7 +4430,7 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data)
+ 		.clr_port_cap_mask = IB_PORT_CM_SUP
+ 	};
+ 	unsigned long flags;
+-	int i;
++	unsigned int i;
+ 
+ 	write_lock_irqsave(&cm.device_lock, flags);
+ 	list_del(&cm_dev->list);
+@@ -4440,7 +4440,7 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data)
+ 	cm_dev->going_down = 1;
+ 	spin_unlock_irq(&cm.lock);
+ 
+-	for (i = 1; i <= ib_device->phys_port_cnt; i++) {
++	rdma_for_each_port (ib_device, i) {
+ 		if (!rdma_cap_ib_cm(ib_device, i))
+ 			continue;
+ 
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index c51b84b2d2f37..e3638f80e1d52 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -352,7 +352,13 @@ struct ib_device *cma_get_ib_dev(struct cma_device *cma_dev)
+ 
+ struct cma_multicast {
+ 	struct rdma_id_private *id_priv;
+-	struct ib_sa_multicast *sa_mc;
++	union {
++		struct ib_sa_multicast *sa_mc;
++		struct {
++			struct work_struct work;
++			struct rdma_cm_event event;
++		} iboe_join;
++	};
+ 	struct list_head	list;
+ 	void			*context;
+ 	struct sockaddr_storage	addr;
+@@ -1823,6 +1829,8 @@ static void destroy_mc(struct rdma_id_private *id_priv,
+ 			cma_igmp_send(ndev, &mgid, false);
+ 			dev_put(ndev);
+ 		}
++
++		cancel_work_sync(&mc->iboe_join.work);
+ 	}
+ 	kfree(mc);
+ }
+@@ -2683,6 +2691,28 @@ static int cma_query_ib_route(struct rdma_id_private *id_priv,
+ 	return (id_priv->query_id < 0) ? id_priv->query_id : 0;
+ }
+ 
++static void cma_iboe_join_work_handler(struct work_struct *work)
++{
++	struct cma_multicast *mc =
++		container_of(work, struct cma_multicast, iboe_join.work);
++	struct rdma_cm_event *event = &mc->iboe_join.event;
++	struct rdma_id_private *id_priv = mc->id_priv;
++	int ret;
++
++	mutex_lock(&id_priv->handler_mutex);
++	if (READ_ONCE(id_priv->state) == RDMA_CM_DESTROYING ||
++	    READ_ONCE(id_priv->state) == RDMA_CM_DEVICE_REMOVAL)
++		goto out_unlock;
++
++	ret = cma_cm_event_handler(id_priv, event);
++	WARN_ON(ret);
++
++out_unlock:
++	mutex_unlock(&id_priv->handler_mutex);
++	if (event->event == RDMA_CM_EVENT_MULTICAST_JOIN)
++		rdma_destroy_ah_attr(&event->param.ud.ah_attr);
++}
++
+ static void cma_work_handler(struct work_struct *_work)
+ {
+ 	struct cma_work *work = container_of(_work, struct cma_work, work);
+@@ -4478,10 +4508,7 @@ static int cma_ib_mc_handler(int status, struct ib_sa_multicast *multicast)
+ 	cma_make_mc_event(status, id_priv, multicast, &event, mc);
+ 	ret = cma_cm_event_handler(id_priv, &event);
+ 	rdma_destroy_ah_attr(&event.param.ud.ah_attr);
+-	if (ret) {
+-		destroy_id_handler_unlock(id_priv);
+-		return 0;
+-	}
++	WARN_ON(ret);
+ 
+ out:
+ 	mutex_unlock(&id_priv->handler_mutex);
+@@ -4604,7 +4631,6 @@ static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid,
+ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
+ 				   struct cma_multicast *mc)
+ {
+-	struct cma_work *work;
+ 	struct rdma_dev_addr *dev_addr = &id_priv->id.route.addr.dev_addr;
+ 	int err = 0;
+ 	struct sockaddr *addr = (struct sockaddr *)&mc->addr;
+@@ -4618,10 +4644,6 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
+ 	if (cma_zero_addr(addr))
+ 		return -EINVAL;
+ 
+-	work = kzalloc(sizeof *work, GFP_KERNEL);
+-	if (!work)
+-		return -ENOMEM;
+-
+ 	gid_type = id_priv->cma_dev->default_gid_type[id_priv->id.port_num -
+ 		   rdma_start_port(id_priv->cma_dev->device)];
+ 	cma_iboe_set_mgid(addr, &ib.rec.mgid, gid_type);
+@@ -4632,10 +4654,9 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
+ 
+ 	if (dev_addr->bound_dev_if)
+ 		ndev = dev_get_by_index(dev_addr->net, dev_addr->bound_dev_if);
+-	if (!ndev) {
+-		err = -ENODEV;
+-		goto err_free;
+-	}
++	if (!ndev)
++		return -ENODEV;
++
+ 	ib.rec.rate = iboe_get_rate(ndev);
+ 	ib.rec.hop_limit = 1;
+ 	ib.rec.mtu = iboe_get_mtu(ndev->mtu);
+@@ -4653,24 +4674,15 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
+ 			err = -ENOTSUPP;
+ 	}
+ 	dev_put(ndev);
+-	if (err || !ib.rec.mtu) {
+-		if (!err)
+-			err = -EINVAL;
+-		goto err_free;
+-	}
++	if (err || !ib.rec.mtu)
++		return err ?: -EINVAL;
++
+ 	rdma_ip2gid((struct sockaddr *)&id_priv->id.route.addr.src_addr,
+ 		    &ib.rec.port_gid);
+-	work->id = id_priv;
+-	INIT_WORK(&work->work, cma_work_handler);
+-	cma_make_mc_event(0, id_priv, &ib, &work->event, mc);
+-	/* Balances with cma_id_put() in cma_work_handler */
+-	cma_id_get(id_priv);
+-	queue_work(cma_wq, &work->work);
++	INIT_WORK(&mc->iboe_join.work, cma_iboe_join_work_handler);
++	cma_make_mc_event(0, id_priv, &ib, &mc->iboe_join.event, mc);
++	queue_work(cma_wq, &mc->iboe_join.work);
+ 	return 0;
+-
+-err_free:
+-	kfree(work);
+-	return err;
+ }
+ 
+ int rdma_join_multicast(struct rdma_cm_id *id, struct sockaddr *addr,
+diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
+index b0d0b522cc764..4688a6657c875 100644
+--- a/drivers/infiniband/core/user_mad.c
++++ b/drivers/infiniband/core/user_mad.c
+@@ -379,6 +379,11 @@ static ssize_t ib_umad_read(struct file *filp, char __user *buf,
+ 
+ 	mutex_lock(&file->mutex);
+ 
++	if (file->agents_dead) {
++		mutex_unlock(&file->mutex);
++		return -EIO;
++	}
++
+ 	while (list_empty(&file->recv_list)) {
+ 		mutex_unlock(&file->mutex);
+ 
+@@ -392,6 +397,11 @@ static ssize_t ib_umad_read(struct file *filp, char __user *buf,
+ 		mutex_lock(&file->mutex);
+ 	}
+ 
++	if (file->agents_dead) {
++		mutex_unlock(&file->mutex);
++		return -EIO;
++	}
++
+ 	packet = list_entry(file->recv_list.next, struct ib_umad_packet, list);
+ 	list_del(&packet->list);
+ 
+@@ -524,7 +534,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
+ 
+ 	agent = __get_agent(file, packet->mad.hdr.id);
+ 	if (!agent) {
+-		ret = -EINVAL;
++		ret = -EIO;
+ 		goto err_up;
+ 	}
+ 
+@@ -653,10 +663,14 @@ static __poll_t ib_umad_poll(struct file *filp, struct poll_table_struct *wait)
+ 	/* we will always be able to post a MAD send */
+ 	__poll_t mask = EPOLLOUT | EPOLLWRNORM;
+ 
++	mutex_lock(&file->mutex);
+ 	poll_wait(filp, &file->recv_wait, wait);
+ 
+ 	if (!list_empty(&file->recv_list))
+ 		mask |= EPOLLIN | EPOLLRDNORM;
++	if (file->agents_dead)
++		mask = EPOLLERR;
++	mutex_unlock(&file->mutex);
+ 
+ 	return mask;
+ }
+@@ -1336,6 +1350,7 @@ static void ib_umad_kill_port(struct ib_umad_port *port)
+ 	list_for_each_entry(file, &port->file_list, port_list) {
+ 		mutex_lock(&file->mutex);
+ 		file->agents_dead = 1;
++		wake_up_interruptible(&file->recv_wait);
+ 		mutex_unlock(&file->mutex);
+ 
+ 		for (id = 0; id < IB_UMAD_MAX_AGENTS; ++id)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 1ea87f92aabbe..d9aa7424d2902 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -632,7 +632,7 @@ struct hns_roce_qp {
+ 	struct hns_roce_db	sdb;
+ 	unsigned long		en_flags;
+ 	u32			doorbell_qpn;
+-	u32			sq_signal_bits;
++	enum ib_sig_type	sq_signal_bits;
+ 	struct hns_roce_wq	sq;
+ 
+ 	struct hns_roce_mtr	mtr;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 5c29c7d8c50e6..ebcf26dec1e30 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -1232,7 +1232,7 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ 	u32 timeout = 0;
+ 	int handle = 0;
+ 	u16 desc_ret;
+-	int ret = 0;
++	int ret;
+ 	int ntc;
+ 
+ 	spin_lock_bh(&csq->lock);
+@@ -1277,15 +1277,14 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ 	if (hns_roce_cmq_csq_done(hr_dev)) {
+ 		complete = true;
+ 		handle = 0;
++		ret = 0;
+ 		while (handle < num) {
+ 			/* get the result of hardware write back */
+ 			desc_to_use = &csq->desc[ntc];
+ 			desc[handle] = *desc_to_use;
+ 			dev_dbg(hr_dev->dev, "Get cmq desc:\n");
+ 			desc_ret = le16_to_cpu(desc[handle].retval);
+-			if (desc_ret == CMD_EXEC_SUCCESS)
+-				ret = 0;
+-			else
++			if (unlikely(desc_ret != CMD_EXEC_SUCCESS))
+ 				ret = -EIO;
+ 			priv->cmq.last_status = desc_ret;
+ 			ntc++;
+@@ -1847,7 +1846,6 @@ static void set_default_caps(struct hns_roce_dev *hr_dev)
+ 
+ 	caps->flags		= HNS_ROCE_CAP_FLAG_REREG_MR |
+ 				  HNS_ROCE_CAP_FLAG_ROCE_V1_V2 |
+-				  HNS_ROCE_CAP_FLAG_RQ_INLINE |
+ 				  HNS_ROCE_CAP_FLAG_RECORD_DB |
+ 				  HNS_ROCE_CAP_FLAG_SQ_RECORD_DB;
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index ae721fa61e0e4..ba65823a5c0bb 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -781,8 +781,7 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
+ 	return 0;
+ 
+ err_qp_table_free:
+-	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ)
+-		hns_roce_cleanup_qp_table(hr_dev);
++	hns_roce_cleanup_qp_table(hr_dev);
+ 
+ err_cq_table_free:
+ 	hns_roce_cleanup_cq_table(hr_dev);
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 9e3d8b8264980..26564e7d34572 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -1067,7 +1067,9 @@ static void devx_obj_build_destroy_cmd(void *in, void *out, void *din,
+ 		MLX5_SET(general_obj_in_cmd_hdr, din, opcode, MLX5_CMD_OP_DESTROY_RQT);
+ 		break;
+ 	case MLX5_CMD_OP_CREATE_TIR:
+-		MLX5_SET(general_obj_in_cmd_hdr, din, opcode, MLX5_CMD_OP_DESTROY_TIR);
++		*obj_id = MLX5_GET(create_tir_out, out, tirn);
++		MLX5_SET(destroy_tir_in, din, opcode, MLX5_CMD_OP_DESTROY_TIR);
++		MLX5_SET(destroy_tir_in, din, tirn, *obj_id);
+ 		break;
+ 	case MLX5_CMD_OP_CREATE_TIS:
+ 		MLX5_SET(general_obj_in_cmd_hdr, din, opcode, MLX5_CMD_OP_DESTROY_TIS);
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index e317d7d6d5c0d..beec0d7c0d6e8 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -3921,7 +3921,7 @@ static void mlx5_ib_stage_init_cleanup(struct mlx5_ib_dev *dev)
+ 	mlx5_ib_cleanup_multiport_master(dev);
+ 	WARN_ON(!xa_empty(&dev->odp_mkeys));
+ 	cleanup_srcu_struct(&dev->odp_srcu);
+-
++	mutex_destroy(&dev->cap_mask_mutex);
+ 	WARN_ON(!xa_empty(&dev->sig_mrs));
+ 	WARN_ON(!bitmap_empty(dev->dm.memic_alloc_pages, MLX5_MAX_MEMIC_PAGES));
+ }
+@@ -3972,6 +3972,10 @@ static int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev)
+ 	dev->ib_dev.dev.parent		= mdev->device;
+ 	dev->ib_dev.lag_flags		= RDMA_LAG_FLAGS_HASH_ALL_SLAVES;
+ 
++	err = init_srcu_struct(&dev->odp_srcu);
++	if (err)
++		goto err_mp;
++
+ 	mutex_init(&dev->cap_mask_mutex);
+ 	INIT_LIST_HEAD(&dev->qp_list);
+ 	spin_lock_init(&dev->reset_flow_resource_lock);
+@@ -3981,17 +3985,11 @@ static int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev)
+ 
+ 	spin_lock_init(&dev->dm.lock);
+ 	dev->dm.dev = mdev;
+-
+-	err = init_srcu_struct(&dev->odp_srcu);
+-	if (err)
+-		goto err_mp;
+-
+ 	return 0;
+ 
+ err_mp:
+ 	mlx5_ib_cleanup_multiport_master(dev);
+-
+-	return -ENOMEM;
++	return err;
+ }
+ 
+ static int mlx5_ib_enable_driver(struct ib_device *dev)
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index 943914c2a50c7..bce44502ab0ed 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -414,6 +414,11 @@ int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb)
+ 
+ void rxe_loopback(struct sk_buff *skb)
+ {
++	if (skb->protocol == htons(ETH_P_IP))
++		skb_pull(skb, sizeof(struct iphdr));
++	else
++		skb_pull(skb, sizeof(struct ipv6hdr));
++
+ 	rxe_rcv(skb);
+ }
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
+index c9984a28eecc7..cb69a125e2806 100644
+--- a/drivers/infiniband/sw/rxe/rxe_recv.c
++++ b/drivers/infiniband/sw/rxe/rxe_recv.c
+@@ -9,21 +9,26 @@
+ #include "rxe.h"
+ #include "rxe_loc.h"
+ 
++/* check that QP matches packet opcode type and is in a valid state */
+ static int check_type_state(struct rxe_dev *rxe, struct rxe_pkt_info *pkt,
+ 			    struct rxe_qp *qp)
+ {
++	unsigned int pkt_type;
++
+ 	if (unlikely(!qp->valid))
+ 		goto err1;
+ 
++	pkt_type = pkt->opcode & 0xe0;
++
+ 	switch (qp_type(qp)) {
+ 	case IB_QPT_RC:
+-		if (unlikely((pkt->opcode & IB_OPCODE_RC) != 0)) {
++		if (unlikely(pkt_type != IB_OPCODE_RC)) {
+ 			pr_warn_ratelimited("bad qp type\n");
+ 			goto err1;
+ 		}
+ 		break;
+ 	case IB_QPT_UC:
+-		if (unlikely(!(pkt->opcode & IB_OPCODE_UC))) {
++		if (unlikely(pkt_type != IB_OPCODE_UC)) {
+ 			pr_warn_ratelimited("bad qp type\n");
+ 			goto err1;
+ 		}
+@@ -31,7 +36,7 @@ static int check_type_state(struct rxe_dev *rxe, struct rxe_pkt_info *pkt,
+ 	case IB_QPT_UD:
+ 	case IB_QPT_SMI:
+ 	case IB_QPT_GSI:
+-		if (unlikely(!(pkt->opcode & IB_OPCODE_UD))) {
++		if (unlikely(pkt_type != IB_OPCODE_UD)) {
+ 			pr_warn_ratelimited("bad qp type\n");
+ 			goto err1;
+ 		}
+@@ -252,7 +257,6 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
+ 
+ 	list_for_each_entry(mce, &mcg->qp_list, qp_list) {
+ 		qp = mce->qp;
+-		pkt = SKB_TO_PKT(skb);
+ 
+ 		/* validate qp for incoming packet */
+ 		err = check_type_state(rxe, pkt, qp);
+@@ -264,12 +268,18 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
+ 			continue;
+ 
+ 		/* for all but the last qp create a new clone of the
+-		 * skb and pass to the qp.
++		 * skb and pass to the qp. If an error occurs in the
++		 * checks for the last qp in the list we need to
++		 * free the skb since it hasn't been passed on to
++		 * rxe_rcv_pkt() which would free it later.
+ 		 */
+-		if (mce->qp_list.next != &mcg->qp_list)
++		if (mce->qp_list.next != &mcg->qp_list) {
+ 			per_qp_skb = skb_clone(skb, GFP_ATOMIC);
+-		else
++		} else {
+ 			per_qp_skb = skb;
++			/* show we have consumed the skb */
++			skb = NULL;
++		}
+ 
+ 		if (unlikely(!per_qp_skb))
+ 			continue;
+@@ -284,9 +294,8 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
+ 
+ 	rxe_drop_ref(mcg);	/* drop ref from rxe_pool_get_key. */
+ 
+-	return;
+-
+ err1:
++	/* free skb if not consumed */
+ 	kfree_skb(skb);
+ }
+ 
+diff --git a/drivers/infiniband/sw/siw/siw.h b/drivers/infiniband/sw/siw/siw.h
+index adda789962196..368959ae9a8cc 100644
+--- a/drivers/infiniband/sw/siw/siw.h
++++ b/drivers/infiniband/sw/siw/siw.h
+@@ -653,7 +653,7 @@ static inline struct siw_sqe *orq_get_free(struct siw_qp *qp)
+ {
+ 	struct siw_sqe *orq_e = orq_get_tail(qp);
+ 
+-	if (orq_e && READ_ONCE(orq_e->flags) == 0)
++	if (READ_ONCE(orq_e->flags) == 0)
+ 		return orq_e;
+ 
+ 	return NULL;
+diff --git a/drivers/infiniband/sw/siw/siw_main.c b/drivers/infiniband/sw/siw/siw_main.c
+index 9d152e198a59b..32a553a1b905e 100644
+--- a/drivers/infiniband/sw/siw/siw_main.c
++++ b/drivers/infiniband/sw/siw/siw_main.c
+@@ -135,7 +135,7 @@ static struct {
+ 
+ static int siw_init_cpulist(void)
+ {
+-	int i, num_nodes = num_possible_nodes();
++	int i, num_nodes = nr_node_ids;
+ 
+ 	memset(siw_tx_thread, 0, sizeof(siw_tx_thread));
+ 
+diff --git a/drivers/infiniband/sw/siw/siw_qp.c b/drivers/infiniband/sw/siw/siw_qp.c
+index 875d36d4b1c61..ddb2e66f9f133 100644
+--- a/drivers/infiniband/sw/siw/siw_qp.c
++++ b/drivers/infiniband/sw/siw/siw_qp.c
+@@ -199,26 +199,26 @@ void siw_qp_llp_write_space(struct sock *sk)
+ 
+ static int siw_qp_readq_init(struct siw_qp *qp, int irq_size, int orq_size)
+ {
+-	irq_size = roundup_pow_of_two(irq_size);
+-	orq_size = roundup_pow_of_two(orq_size);
+-
+-	qp->attrs.irq_size = irq_size;
+-	qp->attrs.orq_size = orq_size;
+-
+-	qp->irq = vzalloc(irq_size * sizeof(struct siw_sqe));
+-	if (!qp->irq) {
+-		siw_dbg_qp(qp, "irq malloc for %d failed\n", irq_size);
+-		qp->attrs.irq_size = 0;
+-		return -ENOMEM;
++	if (irq_size) {
++		irq_size = roundup_pow_of_two(irq_size);
++		qp->irq = vzalloc(irq_size * sizeof(struct siw_sqe));
++		if (!qp->irq) {
++			qp->attrs.irq_size = 0;
++			return -ENOMEM;
++		}
+ 	}
+-	qp->orq = vzalloc(orq_size * sizeof(struct siw_sqe));
+-	if (!qp->orq) {
+-		siw_dbg_qp(qp, "orq malloc for %d failed\n", orq_size);
+-		qp->attrs.orq_size = 0;
+-		qp->attrs.irq_size = 0;
+-		vfree(qp->irq);
+-		return -ENOMEM;
++	if (orq_size) {
++		orq_size = roundup_pow_of_two(orq_size);
++		qp->orq = vzalloc(orq_size * sizeof(struct siw_sqe));
++		if (!qp->orq) {
++			qp->attrs.orq_size = 0;
++			qp->attrs.irq_size = 0;
++			vfree(qp->irq);
++			return -ENOMEM;
++		}
+ 	}
++	qp->attrs.irq_size = irq_size;
++	qp->attrs.orq_size = orq_size;
+ 	siw_dbg_qp(qp, "ORD %d, IRD %d\n", orq_size, irq_size);
+ 	return 0;
+ }
+@@ -288,13 +288,14 @@ int siw_qp_mpa_rts(struct siw_qp *qp, enum mpa_v2_ctrl ctrl)
+ 	if (ctrl & MPA_V2_RDMA_WRITE_RTR)
+ 		wqe->sqe.opcode = SIW_OP_WRITE;
+ 	else if (ctrl & MPA_V2_RDMA_READ_RTR) {
+-		struct siw_sqe *rreq;
++		struct siw_sqe *rreq = NULL;
+ 
+ 		wqe->sqe.opcode = SIW_OP_READ;
+ 
+ 		spin_lock(&qp->orq_lock);
+ 
+-		rreq = orq_get_free(qp);
++		if (qp->attrs.orq_size)
++			rreq = orq_get_free(qp);
+ 		if (rreq) {
+ 			siw_read_to_orq(rreq, &wqe->sqe);
+ 			qp->orq_put++;
+@@ -877,135 +878,88 @@ void siw_read_to_orq(struct siw_sqe *rreq, struct siw_sqe *sqe)
+ 	rreq->num_sge = 1;
+ }
+ 
+-/*
+- * Must be called with SQ locked.
+- * To avoid complete SQ starvation by constant inbound READ requests,
+- * the active IRQ will not be served after qp->irq_burst, if the
+- * SQ has pending work.
+- */
+-int siw_activate_tx(struct siw_qp *qp)
++static int siw_activate_tx_from_sq(struct siw_qp *qp)
+ {
+-	struct siw_sqe *irqe, *sqe;
++	struct siw_sqe *sqe;
+ 	struct siw_wqe *wqe = tx_wqe(qp);
+ 	int rv = 1;
+ 
+-	irqe = &qp->irq[qp->irq_get % qp->attrs.irq_size];
+-
+-	if (irqe->flags & SIW_WQE_VALID) {
+-		sqe = sq_get_next(qp);
+-
+-		/*
+-		 * Avoid local WQE processing starvation in case
+-		 * of constant inbound READ request stream
+-		 */
+-		if (sqe && ++qp->irq_burst >= SIW_IRQ_MAXBURST_SQ_ACTIVE) {
+-			qp->irq_burst = 0;
+-			goto skip_irq;
+-		}
+-		memset(wqe->mem, 0, sizeof(*wqe->mem) * SIW_MAX_SGE);
+-		wqe->wr_status = SIW_WR_QUEUED;
+-
+-		/* start READ RESPONSE */
+-		wqe->sqe.opcode = SIW_OP_READ_RESPONSE;
+-		wqe->sqe.flags = 0;
+-		if (irqe->num_sge) {
+-			wqe->sqe.num_sge = 1;
+-			wqe->sqe.sge[0].length = irqe->sge[0].length;
+-			wqe->sqe.sge[0].laddr = irqe->sge[0].laddr;
+-			wqe->sqe.sge[0].lkey = irqe->sge[0].lkey;
+-		} else {
+-			wqe->sqe.num_sge = 0;
+-		}
+-
+-		/* Retain original RREQ's message sequence number for
+-		 * potential error reporting cases.
+-		 */
+-		wqe->sqe.sge[1].length = irqe->sge[1].length;
+-
+-		wqe->sqe.rkey = irqe->rkey;
+-		wqe->sqe.raddr = irqe->raddr;
++	sqe = sq_get_next(qp);
++	if (!sqe)
++		return 0;
+ 
+-		wqe->processed = 0;
+-		qp->irq_get++;
++	memset(wqe->mem, 0, sizeof(*wqe->mem) * SIW_MAX_SGE);
++	wqe->wr_status = SIW_WR_QUEUED;
+ 
+-		/* mark current IRQ entry free */
+-		smp_store_mb(irqe->flags, 0);
++	/* First copy SQE to kernel private memory */
++	memcpy(&wqe->sqe, sqe, sizeof(*sqe));
+ 
++	if (wqe->sqe.opcode >= SIW_NUM_OPCODES) {
++		rv = -EINVAL;
+ 		goto out;
+ 	}
+-	sqe = sq_get_next(qp);
+-	if (sqe) {
+-skip_irq:
+-		memset(wqe->mem, 0, sizeof(*wqe->mem) * SIW_MAX_SGE);
+-		wqe->wr_status = SIW_WR_QUEUED;
+-
+-		/* First copy SQE to kernel private memory */
+-		memcpy(&wqe->sqe, sqe, sizeof(*sqe));
+-
+-		if (wqe->sqe.opcode >= SIW_NUM_OPCODES) {
++	if (wqe->sqe.flags & SIW_WQE_INLINE) {
++		if (wqe->sqe.opcode != SIW_OP_SEND &&
++		    wqe->sqe.opcode != SIW_OP_WRITE) {
+ 			rv = -EINVAL;
+ 			goto out;
+ 		}
+-		if (wqe->sqe.flags & SIW_WQE_INLINE) {
+-			if (wqe->sqe.opcode != SIW_OP_SEND &&
+-			    wqe->sqe.opcode != SIW_OP_WRITE) {
+-				rv = -EINVAL;
+-				goto out;
+-			}
+-			if (wqe->sqe.sge[0].length > SIW_MAX_INLINE) {
+-				rv = -EINVAL;
+-				goto out;
+-			}
+-			wqe->sqe.sge[0].laddr = (uintptr_t)&wqe->sqe.sge[1];
+-			wqe->sqe.sge[0].lkey = 0;
+-			wqe->sqe.num_sge = 1;
++		if (wqe->sqe.sge[0].length > SIW_MAX_INLINE) {
++			rv = -EINVAL;
++			goto out;
+ 		}
+-		if (wqe->sqe.flags & SIW_WQE_READ_FENCE) {
+-			/* A READ cannot be fenced */
+-			if (unlikely(wqe->sqe.opcode == SIW_OP_READ ||
+-				     wqe->sqe.opcode ==
+-					     SIW_OP_READ_LOCAL_INV)) {
+-				siw_dbg_qp(qp, "cannot fence read\n");
+-				rv = -EINVAL;
+-				goto out;
+-			}
+-			spin_lock(&qp->orq_lock);
++		wqe->sqe.sge[0].laddr = (uintptr_t)&wqe->sqe.sge[1];
++		wqe->sqe.sge[0].lkey = 0;
++		wqe->sqe.num_sge = 1;
++	}
++	if (wqe->sqe.flags & SIW_WQE_READ_FENCE) {
++		/* A READ cannot be fenced */
++		if (unlikely(wqe->sqe.opcode == SIW_OP_READ ||
++			     wqe->sqe.opcode ==
++				     SIW_OP_READ_LOCAL_INV)) {
++			siw_dbg_qp(qp, "cannot fence read\n");
++			rv = -EINVAL;
++			goto out;
++		}
++		spin_lock(&qp->orq_lock);
+ 
+-			if (!siw_orq_empty(qp)) {
+-				qp->tx_ctx.orq_fence = 1;
+-				rv = 0;
+-			}
+-			spin_unlock(&qp->orq_lock);
++		if (qp->attrs.orq_size && !siw_orq_empty(qp)) {
++			qp->tx_ctx.orq_fence = 1;
++			rv = 0;
++		}
++		spin_unlock(&qp->orq_lock);
+ 
+-		} else if (wqe->sqe.opcode == SIW_OP_READ ||
+-			   wqe->sqe.opcode == SIW_OP_READ_LOCAL_INV) {
+-			struct siw_sqe *rreq;
++	} else if (wqe->sqe.opcode == SIW_OP_READ ||
++		   wqe->sqe.opcode == SIW_OP_READ_LOCAL_INV) {
++		struct siw_sqe *rreq;
+ 
+-			wqe->sqe.num_sge = 1;
++		if (unlikely(!qp->attrs.orq_size)) {
++			/* We negotiated not to send READ req's */
++			rv = -EINVAL;
++			goto out;
++		}
++		wqe->sqe.num_sge = 1;
+ 
+-			spin_lock(&qp->orq_lock);
++		spin_lock(&qp->orq_lock);
+ 
+-			rreq = orq_get_free(qp);
+-			if (rreq) {
+-				/*
+-				 * Make an immediate copy in ORQ to be ready
+-				 * to process loopback READ reply
+-				 */
+-				siw_read_to_orq(rreq, &wqe->sqe);
+-				qp->orq_put++;
+-			} else {
+-				qp->tx_ctx.orq_fence = 1;
+-				rv = 0;
+-			}
+-			spin_unlock(&qp->orq_lock);
++		rreq = orq_get_free(qp);
++		if (rreq) {
++			/*
++			 * Make an immediate copy in ORQ to be ready
++			 * to process loopback READ reply
++			 */
++			siw_read_to_orq(rreq, &wqe->sqe);
++			qp->orq_put++;
++		} else {
++			qp->tx_ctx.orq_fence = 1;
++			rv = 0;
+ 		}
+-
+-		/* Clear SQE, can be re-used by application */
+-		smp_store_mb(sqe->flags, 0);
+-		qp->sq_get++;
+-	} else {
+-		rv = 0;
++		spin_unlock(&qp->orq_lock);
+ 	}
++
++	/* Clear SQE, can be re-used by application */
++	smp_store_mb(sqe->flags, 0);
++	qp->sq_get++;
+ out:
+ 	if (unlikely(rv < 0)) {
+ 		siw_dbg_qp(qp, "error %d\n", rv);
+@@ -1014,6 +968,65 @@ out:
+ 	return rv;
+ }
+ 
++/*
++ * Must be called with SQ locked.
++ * To avoid complete SQ starvation by constant inbound READ requests,
++ * the active IRQ will not be served after qp->irq_burst, if the
++ * SQ has pending work.
++ */
++int siw_activate_tx(struct siw_qp *qp)
++{
++	struct siw_sqe *irqe;
++	struct siw_wqe *wqe = tx_wqe(qp);
++
++	if (!qp->attrs.irq_size)
++		return siw_activate_tx_from_sq(qp);
++
++	irqe = &qp->irq[qp->irq_get % qp->attrs.irq_size];
++
++	if (!(irqe->flags & SIW_WQE_VALID))
++		return siw_activate_tx_from_sq(qp);
++
++	/*
++	 * Avoid local WQE processing starvation in case
++	 * of constant inbound READ request stream
++	 */
++	if (sq_get_next(qp) && ++qp->irq_burst >= SIW_IRQ_MAXBURST_SQ_ACTIVE) {
++		qp->irq_burst = 0;
++		return siw_activate_tx_from_sq(qp);
++	}
++	memset(wqe->mem, 0, sizeof(*wqe->mem) * SIW_MAX_SGE);
++	wqe->wr_status = SIW_WR_QUEUED;
++
++	/* start READ RESPONSE */
++	wqe->sqe.opcode = SIW_OP_READ_RESPONSE;
++	wqe->sqe.flags = 0;
++	if (irqe->num_sge) {
++		wqe->sqe.num_sge = 1;
++		wqe->sqe.sge[0].length = irqe->sge[0].length;
++		wqe->sqe.sge[0].laddr = irqe->sge[0].laddr;
++		wqe->sqe.sge[0].lkey = irqe->sge[0].lkey;
++	} else {
++		wqe->sqe.num_sge = 0;
++	}
++
++	/* Retain original RREQ's message sequence number for
++	 * potential error reporting cases.
++	 */
++	wqe->sqe.sge[1].length = irqe->sge[1].length;
++
++	wqe->sqe.rkey = irqe->rkey;
++	wqe->sqe.raddr = irqe->raddr;
++
++	wqe->processed = 0;
++	qp->irq_get++;
++
++	/* mark current IRQ entry free */
++	smp_store_mb(irqe->flags, 0);
++
++	return 1;
++}
++
+ /*
+  * Check if current CQ state qualifies for calling CQ completion
+  * handler. Must be called with CQ lock held.
+diff --git a/drivers/infiniband/sw/siw/siw_qp_rx.c b/drivers/infiniband/sw/siw/siw_qp_rx.c
+index 4bd1f1f84057b..60116f20653c7 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_rx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_rx.c
+@@ -680,6 +680,10 @@ static int siw_init_rresp(struct siw_qp *qp, struct siw_rx_stream *srx)
+ 	}
+ 	spin_lock_irqsave(&qp->sq_lock, flags);
+ 
++	if (unlikely(!qp->attrs.irq_size)) {
++		run_sq = 0;
++		goto error_irq;
++	}
+ 	if (tx_work->wr_status == SIW_WR_IDLE) {
+ 		/*
+ 		 * immediately schedule READ response w/o
+@@ -712,8 +716,9 @@ static int siw_init_rresp(struct siw_qp *qp, struct siw_rx_stream *srx)
+ 		/* RRESP now valid as current TX wqe or placed into IRQ */
+ 		smp_store_mb(resp->flags, SIW_WQE_VALID);
+ 	} else {
+-		pr_warn("siw: [QP %u]: irq %d exceeded %d\n", qp_id(qp),
+-			qp->irq_put % qp->attrs.irq_size, qp->attrs.irq_size);
++error_irq:
++		pr_warn("siw: [QP %u]: IRQ exceeded or null, size %d\n",
++			qp_id(qp), qp->attrs.irq_size);
+ 
+ 		siw_init_terminate(qp, TERM_ERROR_LAYER_RDMAP,
+ 				   RDMAP_ETYPE_REMOTE_OPERATION,
+@@ -740,6 +745,9 @@ static int siw_orqe_start_rx(struct siw_qp *qp)
+ 	struct siw_sqe *orqe;
+ 	struct siw_wqe *wqe = NULL;
+ 
++	if (unlikely(!qp->attrs.orq_size))
++		return -EPROTO;
++
+ 	/* make sure ORQ indices are current */
+ 	smp_mb();
+ 
+@@ -796,8 +804,8 @@ int siw_proc_rresp(struct siw_qp *qp)
+ 		 */
+ 		rv = siw_orqe_start_rx(qp);
+ 		if (rv) {
+-			pr_warn("siw: [QP %u]: ORQ empty at idx %d\n",
+-				qp_id(qp), qp->orq_get % qp->attrs.orq_size);
++			pr_warn("siw: [QP %u]: ORQ empty, size %d\n",
++				qp_id(qp), qp->attrs.orq_size);
+ 			goto error_term;
+ 		}
+ 		rv = siw_rresp_check_ntoh(srx, frx);
+@@ -1290,11 +1298,13 @@ static int siw_rdmap_complete(struct siw_qp *qp, int error)
+ 					      wc_status);
+ 		siw_wqe_put_mem(wqe, SIW_OP_READ);
+ 
+-		if (!error)
++		if (!error) {
+ 			rv = siw_check_tx_fence(qp);
+-		else
+-			/* Disable current ORQ eleement */
+-			WRITE_ONCE(orq_get_current(qp)->flags, 0);
++		} else {
++			/* Disable current ORQ element */
++			if (qp->attrs.orq_size)
++				WRITE_ONCE(orq_get_current(qp)->flags, 0);
++		}
+ 		break;
+ 
+ 	case RDMAP_RDMA_READ_REQ:
+diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
+index d19d8325588b5..7989c4043db4e 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
+@@ -1107,8 +1107,8 @@ next_wqe:
+ 		/*
+ 		 * RREQ may have already been completed by inbound RRESP!
+ 		 */
+-		if (tx_type == SIW_OP_READ ||
+-		    tx_type == SIW_OP_READ_LOCAL_INV) {
++		if ((tx_type == SIW_OP_READ ||
++		     tx_type == SIW_OP_READ_LOCAL_INV) && qp->attrs.orq_size) {
+ 			/* Cleanup pending entry in ORQ */
+ 			qp->orq_put--;
+ 			qp->orq[qp->orq_put % qp->attrs.orq_size].flags = 0;
+diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
+index 7cf3242ffb41f..fb25e8011f5a4 100644
+--- a/drivers/infiniband/sw/siw/siw_verbs.c
++++ b/drivers/infiniband/sw/siw/siw_verbs.c
+@@ -362,13 +362,23 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd,
+ 	if (rv)
+ 		goto err_out;
+ 
++	num_sqe = attrs->cap.max_send_wr;
++	num_rqe = attrs->cap.max_recv_wr;
++
+ 	/* All queue indices are derived from modulo operations
+ 	 * on a free running 'get' (consumer) and 'put' (producer)
+ 	 * unsigned counter. Having queue sizes at power of two
+ 	 * avoids handling counter wrap around.
+ 	 */
+-	num_sqe = roundup_pow_of_two(attrs->cap.max_send_wr);
+-	num_rqe = roundup_pow_of_two(attrs->cap.max_recv_wr);
++	if (num_sqe)
++		num_sqe = roundup_pow_of_two(num_sqe);
++	else {
++		/* Zero sized SQ is not supported */
++		rv = -EINVAL;
++		goto err_out;
++	}
++	if (num_rqe)
++		num_rqe = roundup_pow_of_two(num_rqe);
+ 
+ 	if (udata)
+ 		qp->sendq = vmalloc_user(num_sqe * sizeof(struct siw_sqe));
+@@ -376,7 +386,6 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd,
+ 		qp->sendq = vzalloc(num_sqe * sizeof(struct siw_sqe));
+ 
+ 	if (qp->sendq == NULL) {
+-		siw_dbg(base_dev, "SQ size %d alloc failed\n", num_sqe);
+ 		rv = -ENOMEM;
+ 		goto err_out_xa;
+ 	}
+@@ -410,7 +419,6 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd,
+ 			qp->recvq = vzalloc(num_rqe * sizeof(struct siw_rqe));
+ 
+ 		if (qp->recvq == NULL) {
+-			siw_dbg(base_dev, "RQ size %d alloc failed\n", num_rqe);
+ 			rv = -ENOMEM;
+ 			goto err_out_xa;
+ 		}
+@@ -960,9 +968,9 @@ int siw_post_receive(struct ib_qp *base_qp, const struct ib_recv_wr *wr,
+ 	unsigned long flags;
+ 	int rv = 0;
+ 
+-	if (qp->srq) {
++	if (qp->srq || qp->attrs.rq_size == 0) {
+ 		*bad_wr = wr;
+-		return -EOPNOTSUPP; /* what else from errno.h? */
++		return -EINVAL;
+ 	}
+ 	if (!rdma_is_kernel_res(&qp->base_qp.res)) {
+ 		siw_dbg_qp(qp, "no kernel post_recv for user mapped rq\n");
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt-sysfs.c b/drivers/infiniband/ulp/rtrs/rtrs-clt-sysfs.c
+index ac4c49cbf1538..2ee3806f2df5b 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt-sysfs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt-sysfs.c
+@@ -408,6 +408,7 @@ int rtrs_clt_create_sess_files(struct rtrs_clt_sess *sess)
+ 				   "%s", str);
+ 	if (err) {
+ 		pr_err("kobject_init_and_add: %d\n", err);
++		kobject_put(&sess->kobj);
+ 		return err;
+ 	}
+ 	err = sysfs_create_group(&sess->kobj, &rtrs_clt_sess_attr_group);
+@@ -419,6 +420,7 @@ int rtrs_clt_create_sess_files(struct rtrs_clt_sess *sess)
+ 				   &sess->kobj, "stats");
+ 	if (err) {
+ 		pr_err("kobject_init_and_add: %d\n", err);
++		kobject_put(&sess->stats->kobj_stats);
+ 		goto remove_group;
+ 	}
+ 
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index d54a77ebe1184..fc0e90915678a 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -31,6 +31,8 @@
+  */
+ #define RTRS_RECONNECT_SEED 8
+ 
++#define FIRST_CONN 0x01
++
+ MODULE_DESCRIPTION("RDMA Transport Client");
+ MODULE_LICENSE("GPL");
+ 
+@@ -1516,7 +1518,7 @@ static void destroy_con(struct rtrs_clt_con *con)
+ static int create_con_cq_qp(struct rtrs_clt_con *con)
+ {
+ 	struct rtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+-	u16 wr_queue_size;
++	u32 max_send_wr, max_recv_wr, cq_size;
+ 	int err, cq_vector;
+ 	struct rtrs_msg_rkey_rsp *rsp;
+ 
+@@ -1536,7 +1538,8 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
+ 		 * + 2 for drain and heartbeat
+ 		 * in case qp gets into error state
+ 		 */
+-		wr_queue_size = SERVICE_CON_QUEUE_DEPTH * 3 + 2;
++		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
++		max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
+ 		/* We must be the first here */
+ 		if (WARN_ON(sess->s.dev))
+ 			return -EINVAL;
+@@ -1568,25 +1571,29 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
+ 
+ 		/* Shared between connections */
+ 		sess->s.dev_ref++;
+-		wr_queue_size =
++		max_send_wr =
+ 			min_t(int, sess->s.dev->ib_dev->attrs.max_qp_wr,
+ 			      /* QD * (REQ + RSP + FR REGS or INVS) + drain */
+ 			      sess->queue_depth * 3 + 1);
++		max_recv_wr =
++			min_t(int, sess->s.dev->ib_dev->attrs.max_qp_wr,
++			      sess->queue_depth * 3 + 1);
+ 	}
+ 	/* alloc iu to recv new rkey reply when server reports flags set */
+ 	if (sess->flags == RTRS_MSG_NEW_RKEY_F || con->c.cid == 0) {
+-		con->rsp_ius = rtrs_iu_alloc(wr_queue_size, sizeof(*rsp),
++		con->rsp_ius = rtrs_iu_alloc(max_recv_wr, sizeof(*rsp),
+ 					      GFP_KERNEL, sess->s.dev->ib_dev,
+ 					      DMA_FROM_DEVICE,
+ 					      rtrs_clt_rdma_done);
+ 		if (!con->rsp_ius)
+ 			return -ENOMEM;
+-		con->queue_size = wr_queue_size;
++		con->queue_size = max_recv_wr;
+ 	}
++	cq_size = max_send_wr + max_recv_wr;
+ 	cq_vector = con->cpu % sess->s.dev->ib_dev->num_comp_vectors;
+ 	err = rtrs_cq_qp_create(&sess->s, &con->c, sess->max_send_sge,
+-				 cq_vector, wr_queue_size, wr_queue_size,
+-				 IB_POLL_SOFTIRQ);
++				 cq_vector, cq_size, max_send_wr,
++				 max_recv_wr, IB_POLL_SOFTIRQ);
+ 	/*
+ 	 * In case of error we do not bother to clean previous allocations,
+ 	 * since destroy_con_cq_qp() must be called.
+@@ -1669,6 +1676,7 @@ static int rtrs_rdma_route_resolved(struct rtrs_clt_con *con)
+ 		.cid_num = cpu_to_le16(sess->s.con_num),
+ 		.recon_cnt = cpu_to_le16(sess->s.recon_cnt),
+ 	};
++	msg.first_conn = sess->for_new_clt ? FIRST_CONN : 0;
+ 	uuid_copy(&msg.sess_uuid, &sess->s.uuid);
+ 	uuid_copy(&msg.paths_uuid, &clt->paths_uuid);
+ 
+@@ -1754,6 +1762,8 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con,
+ 		scnprintf(sess->hca_name, sizeof(sess->hca_name),
+ 			  sess->s.dev->ib_dev->name);
+ 		sess->s.src_addr = con->c.cm_id->route.addr.src_addr;
++		/* set for_new_clt, to allow future reconnect on any path */
++		sess->for_new_clt = 1;
+ 	}
+ 
+ 	return 0;
+@@ -2571,11 +2581,8 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
+ 	clt->dev.class = rtrs_clt_dev_class;
+ 	clt->dev.release = rtrs_clt_dev_release;
+ 	err = dev_set_name(&clt->dev, "%s", sessname);
+-	if (err) {
+-		free_percpu(clt->pcpu_path);
+-		kfree(clt);
+-		return ERR_PTR(err);
+-	}
++	if (err)
++		goto err;
+ 	/*
+ 	 * Suppress user space notification until
+ 	 * sysfs files are created
+@@ -2583,29 +2590,31 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
+ 	dev_set_uevent_suppress(&clt->dev, true);
+ 	err = device_register(&clt->dev);
+ 	if (err) {
+-		free_percpu(clt->pcpu_path);
+ 		put_device(&clt->dev);
+-		return ERR_PTR(err);
++		goto err;
+ 	}
+ 
+ 	clt->kobj_paths = kobject_create_and_add("paths", &clt->dev.kobj);
+ 	if (!clt->kobj_paths) {
+-		free_percpu(clt->pcpu_path);
+-		device_unregister(&clt->dev);
+-		return NULL;
++		err = -ENOMEM;
++		goto err_dev;
+ 	}
+ 	err = rtrs_clt_create_sysfs_root_files(clt);
+ 	if (err) {
+-		free_percpu(clt->pcpu_path);
+ 		kobject_del(clt->kobj_paths);
+ 		kobject_put(clt->kobj_paths);
+-		device_unregister(&clt->dev);
+-		return ERR_PTR(err);
++		goto err_dev;
+ 	}
+ 	dev_set_uevent_suppress(&clt->dev, false);
+ 	kobject_uevent(&clt->dev.kobj, KOBJ_ADD);
+ 
+ 	return clt;
++err_dev:
++	device_unregister(&clt->dev);
++err:
++	free_percpu(clt->pcpu_path);
++	kfree(clt);
++	return ERR_PTR(err);
+ }
+ 
+ static void wait_for_inflight_permits(struct rtrs_clt *clt)
+@@ -2678,6 +2687,8 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
+ 			err = PTR_ERR(sess);
+ 			goto close_all_sess;
+ 		}
++		if (!i)
++			sess->for_new_clt = 1;
+ 		list_add_tail_rcu(&sess->s.entry, &clt->paths_list);
+ 
+ 		err = init_sess(sess);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.h b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
+index 167acd3c90fcc..22da5d50c22c4 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.h
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
+@@ -142,6 +142,7 @@ struct rtrs_clt_sess {
+ 	int			max_send_sge;
+ 	u32			flags;
+ 	struct kobject		kobj;
++	u8			for_new_clt;
+ 	struct rtrs_clt_stats	*stats;
+ 	/* cache hca_port and hca_name to display in sysfs */
+ 	u8			hca_port;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-pri.h b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+index b8e43dc4d95ab..2e1d2f7e372ac 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-pri.h
++++ b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+@@ -188,7 +188,9 @@ struct rtrs_msg_conn_req {
+ 	__le16		recon_cnt;
+ 	uuid_t		sess_uuid;
+ 	uuid_t		paths_uuid;
+-	u8		reserved[12];
++	u8		first_conn : 1;
++	u8		reserved_bits : 7;
++	u8		reserved[11];
+ };
+ 
+ /**
+@@ -304,8 +306,9 @@ int rtrs_post_rdma_write_imm_empty(struct rtrs_con *con, struct ib_cqe *cqe,
+ 				   struct ib_send_wr *head);
+ 
+ int rtrs_cq_qp_create(struct rtrs_sess *rtrs_sess, struct rtrs_con *con,
+-		      u32 max_send_sge, int cq_vector, u16 cq_size,
+-		      u16 wr_queue_size, enum ib_poll_context poll_ctx);
++		      u32 max_send_sge, int cq_vector, int cq_size,
++		      u32 max_send_wr, u32 max_recv_wr,
++		      enum ib_poll_context poll_ctx);
+ void rtrs_cq_qp_destroy(struct rtrs_con *con);
+ 
+ void rtrs_init_hb(struct rtrs_sess *sess, struct ib_cqe *cqe,
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c b/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
+index 07fbb063555d3..39708ab4f26e5 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
+@@ -53,6 +53,8 @@ static ssize_t rtrs_srv_disconnect_store(struct kobject *kobj,
+ 	sockaddr_to_str((struct sockaddr *)&sess->s.dst_addr, str, sizeof(str));
+ 
+ 	rtrs_info(s, "disconnect for path %s requested\n", str);
++	/* first remove sysfs itself to avoid deadlock */
++	sysfs_remove_file_self(&sess->kobj, &attr->attr);
+ 	close_sess(sess);
+ 
+ 	return count;
+@@ -184,6 +186,7 @@ static int rtrs_srv_create_once_sysfs_root_folders(struct rtrs_srv_sess *sess)
+ 		err = -ENOMEM;
+ 		pr_err("kobject_create_and_add(): %d\n", err);
+ 		device_del(&srv->dev);
++		put_device(&srv->dev);
+ 		goto unlock;
+ 	}
+ 	dev_set_uevent_suppress(&srv->dev, false);
+@@ -209,6 +212,7 @@ rtrs_srv_destroy_once_sysfs_root_folders(struct rtrs_srv_sess *sess)
+ 		kobject_put(srv->kobj_paths);
+ 		mutex_unlock(&srv->paths_mutex);
+ 		device_del(&srv->dev);
++		put_device(&srv->dev);
+ 	} else {
+ 		mutex_unlock(&srv->paths_mutex);
+ 	}
+@@ -237,6 +241,7 @@ static int rtrs_srv_create_stats_files(struct rtrs_srv_sess *sess)
+ 				   &sess->kobj, "stats");
+ 	if (err) {
+ 		rtrs_err(s, "kobject_init_and_add(): %d\n", err);
++		kobject_put(&sess->stats->kobj_stats);
+ 		return err;
+ 	}
+ 	err = sysfs_create_group(&sess->stats->kobj_stats,
+@@ -293,8 +298,8 @@ remove_group:
+ 	sysfs_remove_group(&sess->kobj, &rtrs_srv_sess_attr_group);
+ put_kobj:
+ 	kobject_del(&sess->kobj);
+-	kobject_put(&sess->kobj);
+ destroy_root:
++	kobject_put(&sess->kobj);
+ 	rtrs_srv_destroy_once_sysfs_root_folders(sess);
+ 
+ 	return err;
+@@ -305,7 +310,7 @@ void rtrs_srv_destroy_sess_files(struct rtrs_srv_sess *sess)
+ 	if (sess->kobj.state_in_sysfs) {
+ 		kobject_del(&sess->stats->kobj_stats);
+ 		kobject_put(&sess->stats->kobj_stats);
+-		kobject_del(&sess->kobj);
++		sysfs_remove_group(&sess->kobj, &rtrs_srv_sess_attr_group);
+ 		kobject_put(&sess->kobj);
+ 
+ 		rtrs_srv_destroy_once_sysfs_root_folders(sess);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index 1cb778aff3c59..f009a6907169c 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -232,7 +232,8 @@ static int rdma_write_sg(struct rtrs_srv_op *id)
+ 	dma_addr_t dma_addr = sess->dma_addr[id->msg_id];
+ 	struct rtrs_srv_mr *srv_mr;
+ 	struct rtrs_srv *srv = sess->srv;
+-	struct ib_send_wr inv_wr, imm_wr;
++	struct ib_send_wr inv_wr;
++	struct ib_rdma_wr imm_wr;
+ 	struct ib_rdma_wr *wr = NULL;
+ 	enum ib_send_flags flags;
+ 	size_t sg_cnt;
+@@ -277,21 +278,22 @@ static int rdma_write_sg(struct rtrs_srv_op *id)
+ 		WARN_ON_ONCE(rkey != wr->rkey);
+ 
+ 	wr->wr.opcode = IB_WR_RDMA_WRITE;
++	wr->wr.wr_cqe   = &io_comp_cqe;
+ 	wr->wr.ex.imm_data = 0;
+ 	wr->wr.send_flags  = 0;
+ 
+ 	if (need_inval && always_invalidate) {
+ 		wr->wr.next = &rwr.wr;
+ 		rwr.wr.next = &inv_wr;
+-		inv_wr.next = &imm_wr;
++		inv_wr.next = &imm_wr.wr;
+ 	} else if (always_invalidate) {
+ 		wr->wr.next = &rwr.wr;
+-		rwr.wr.next = &imm_wr;
++		rwr.wr.next = &imm_wr.wr;
+ 	} else if (need_inval) {
+ 		wr->wr.next = &inv_wr;
+-		inv_wr.next = &imm_wr;
++		inv_wr.next = &imm_wr.wr;
+ 	} else {
+-		wr->wr.next = &imm_wr;
++		wr->wr.next = &imm_wr.wr;
+ 	}
+ 	/*
+ 	 * From time to time we have to post signaled sends,
+@@ -304,16 +306,18 @@ static int rdma_write_sg(struct rtrs_srv_op *id)
+ 		inv_wr.sg_list = NULL;
+ 		inv_wr.num_sge = 0;
+ 		inv_wr.opcode = IB_WR_SEND_WITH_INV;
++		inv_wr.wr_cqe   = &io_comp_cqe;
+ 		inv_wr.send_flags = 0;
+ 		inv_wr.ex.invalidate_rkey = rkey;
+ 	}
+ 
+-	imm_wr.next = NULL;
++	imm_wr.wr.next = NULL;
+ 	if (always_invalidate) {
+ 		struct rtrs_msg_rkey_rsp *msg;
+ 
+ 		srv_mr = &sess->mrs[id->msg_id];
+ 		rwr.wr.opcode = IB_WR_REG_MR;
++		rwr.wr.wr_cqe = &local_reg_cqe;
+ 		rwr.wr.num_sge = 0;
+ 		rwr.mr = srv_mr->mr;
+ 		rwr.wr.send_flags = 0;
+@@ -328,22 +332,22 @@ static int rdma_write_sg(struct rtrs_srv_op *id)
+ 		list.addr   = srv_mr->iu->dma_addr;
+ 		list.length = sizeof(*msg);
+ 		list.lkey   = sess->s.dev->ib_pd->local_dma_lkey;
+-		imm_wr.sg_list = &list;
+-		imm_wr.num_sge = 1;
+-		imm_wr.opcode = IB_WR_SEND_WITH_IMM;
++		imm_wr.wr.sg_list = &list;
++		imm_wr.wr.num_sge = 1;
++		imm_wr.wr.opcode = IB_WR_SEND_WITH_IMM;
+ 		ib_dma_sync_single_for_device(sess->s.dev->ib_dev,
+ 					      srv_mr->iu->dma_addr,
+ 					      srv_mr->iu->size, DMA_TO_DEVICE);
+ 	} else {
+-		imm_wr.sg_list = NULL;
+-		imm_wr.num_sge = 0;
+-		imm_wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM;
++		imm_wr.wr.sg_list = NULL;
++		imm_wr.wr.num_sge = 0;
++		imm_wr.wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM;
+ 	}
+-	imm_wr.send_flags = flags;
+-	imm_wr.ex.imm_data = cpu_to_be32(rtrs_to_io_rsp_imm(id->msg_id,
++	imm_wr.wr.send_flags = flags;
++	imm_wr.wr.ex.imm_data = cpu_to_be32(rtrs_to_io_rsp_imm(id->msg_id,
+ 							     0, need_inval));
+ 
+-	imm_wr.wr_cqe   = &io_comp_cqe;
++	imm_wr.wr.wr_cqe   = &io_comp_cqe;
+ 	ib_dma_sync_single_for_device(sess->s.dev->ib_dev, dma_addr,
+ 				      offset, DMA_BIDIRECTIONAL);
+ 
+@@ -370,7 +374,8 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id,
+ {
+ 	struct rtrs_sess *s = con->c.sess;
+ 	struct rtrs_srv_sess *sess = to_srv_sess(s);
+-	struct ib_send_wr inv_wr, imm_wr, *wr = NULL;
++	struct ib_send_wr inv_wr, *wr = NULL;
++	struct ib_rdma_wr imm_wr;
+ 	struct ib_reg_wr rwr;
+ 	struct rtrs_srv *srv = sess->srv;
+ 	struct rtrs_srv_mr *srv_mr;
+@@ -389,6 +394,7 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id,
+ 
+ 		if (need_inval) {
+ 			if (likely(sg_cnt)) {
++				inv_wr.wr_cqe   = &io_comp_cqe;
+ 				inv_wr.sg_list = NULL;
+ 				inv_wr.num_sge = 0;
+ 				inv_wr.opcode = IB_WR_SEND_WITH_INV;
+@@ -406,15 +412,15 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id,
+ 	if (need_inval && always_invalidate) {
+ 		wr = &inv_wr;
+ 		inv_wr.next = &rwr.wr;
+-		rwr.wr.next = &imm_wr;
++		rwr.wr.next = &imm_wr.wr;
+ 	} else if (always_invalidate) {
+ 		wr = &rwr.wr;
+-		rwr.wr.next = &imm_wr;
++		rwr.wr.next = &imm_wr.wr;
+ 	} else if (need_inval) {
+ 		wr = &inv_wr;
+-		inv_wr.next = &imm_wr;
++		inv_wr.next = &imm_wr.wr;
+ 	} else {
+-		wr = &imm_wr;
++		wr = &imm_wr.wr;
+ 	}
+ 	/*
+ 	 * From time to time we have to post signalled sends,
+@@ -423,14 +429,15 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id,
+ 	flags = (atomic_inc_return(&con->wr_cnt) % srv->queue_depth) ?
+ 		0 : IB_SEND_SIGNALED;
+ 	imm = rtrs_to_io_rsp_imm(id->msg_id, errno, need_inval);
+-	imm_wr.next = NULL;
++	imm_wr.wr.next = NULL;
+ 	if (always_invalidate) {
+ 		struct ib_sge list;
+ 		struct rtrs_msg_rkey_rsp *msg;
+ 
+ 		srv_mr = &sess->mrs[id->msg_id];
+-		rwr.wr.next = &imm_wr;
++		rwr.wr.next = &imm_wr.wr;
+ 		rwr.wr.opcode = IB_WR_REG_MR;
++		rwr.wr.wr_cqe = &local_reg_cqe;
+ 		rwr.wr.num_sge = 0;
+ 		rwr.wr.send_flags = 0;
+ 		rwr.mr = srv_mr->mr;
+@@ -445,21 +452,21 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id,
+ 		list.addr   = srv_mr->iu->dma_addr;
+ 		list.length = sizeof(*msg);
+ 		list.lkey   = sess->s.dev->ib_pd->local_dma_lkey;
+-		imm_wr.sg_list = &list;
+-		imm_wr.num_sge = 1;
+-		imm_wr.opcode = IB_WR_SEND_WITH_IMM;
++		imm_wr.wr.sg_list = &list;
++		imm_wr.wr.num_sge = 1;
++		imm_wr.wr.opcode = IB_WR_SEND_WITH_IMM;
+ 		ib_dma_sync_single_for_device(sess->s.dev->ib_dev,
+ 					      srv_mr->iu->dma_addr,
+ 					      srv_mr->iu->size, DMA_TO_DEVICE);
+ 	} else {
+-		imm_wr.sg_list = NULL;
+-		imm_wr.num_sge = 0;
+-		imm_wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM;
++		imm_wr.wr.sg_list = NULL;
++		imm_wr.wr.num_sge = 0;
++		imm_wr.wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM;
+ 	}
+-	imm_wr.send_flags = flags;
+-	imm_wr.wr_cqe   = &io_comp_cqe;
++	imm_wr.wr.send_flags = flags;
++	imm_wr.wr.wr_cqe   = &io_comp_cqe;
+ 
+-	imm_wr.ex.imm_data = cpu_to_be32(imm);
++	imm_wr.wr.ex.imm_data = cpu_to_be32(imm);
+ 
+ 	err = ib_post_send(id->con->c.qp, wr, NULL);
+ 	if (unlikely(err))
+@@ -1343,7 +1350,8 @@ static void free_srv(struct rtrs_srv *srv)
+ }
+ 
+ static struct rtrs_srv *get_or_create_srv(struct rtrs_srv_ctx *ctx,
+-					   const uuid_t *paths_uuid)
++					  const uuid_t *paths_uuid,
++					  bool first_conn)
+ {
+ 	struct rtrs_srv *srv;
+ 	int i;
+@@ -1356,13 +1364,18 @@ static struct rtrs_srv *get_or_create_srv(struct rtrs_srv_ctx *ctx,
+ 			return srv;
+ 		}
+ 	}
++	mutex_unlock(&ctx->srv_mutex);
++	/*
++	 * If this request is not the first connection request from the
++	 * client for this session then fail and return error.
++	 */
++	if (!first_conn)
++		return ERR_PTR(-ENXIO);
+ 
+ 	/* need to allocate a new srv */
+ 	srv = kzalloc(sizeof(*srv), GFP_KERNEL);
+-	if  (!srv) {
+-		mutex_unlock(&ctx->srv_mutex);
+-		return NULL;
+-	}
++	if  (!srv)
++		return ERR_PTR(-ENOMEM);
+ 
+ 	INIT_LIST_HEAD(&srv->paths_list);
+ 	mutex_init(&srv->paths_mutex);
+@@ -1372,8 +1385,6 @@ static struct rtrs_srv *get_or_create_srv(struct rtrs_srv_ctx *ctx,
+ 	srv->ctx = ctx;
+ 	device_initialize(&srv->dev);
+ 	srv->dev.release = rtrs_srv_dev_release;
+-	list_add(&srv->ctx_list, &ctx->srv_list);
+-	mutex_unlock(&ctx->srv_mutex);
+ 
+ 	srv->chunks = kcalloc(srv->queue_depth, sizeof(*srv->chunks),
+ 			      GFP_KERNEL);
+@@ -1386,6 +1397,9 @@ static struct rtrs_srv *get_or_create_srv(struct rtrs_srv_ctx *ctx,
+ 			goto err_free_chunks;
+ 	}
+ 	refcount_set(&srv->refcount, 1);
++	mutex_lock(&ctx->srv_mutex);
++	list_add(&srv->ctx_list, &ctx->srv_list);
++	mutex_unlock(&ctx->srv_mutex);
+ 
+ 	return srv;
+ 
+@@ -1396,7 +1410,7 @@ err_free_chunks:
+ 
+ err_free_srv:
+ 	kfree(srv);
+-	return NULL;
++	return ERR_PTR(-ENOMEM);
+ }
+ 
+ static void put_srv(struct rtrs_srv *srv)
+@@ -1476,10 +1490,12 @@ static bool __is_path_w_addr_exists(struct rtrs_srv *srv,
+ 
+ static void free_sess(struct rtrs_srv_sess *sess)
+ {
+-	if (sess->kobj.state_in_sysfs)
++	if (sess->kobj.state_in_sysfs) {
++		kobject_del(&sess->kobj);
+ 		kobject_put(&sess->kobj);
+-	else
++	} else {
+ 		kfree(sess);
++	}
+ }
+ 
+ static void rtrs_srv_close_work(struct work_struct *work)
+@@ -1601,7 +1617,7 @@ static int create_con(struct rtrs_srv_sess *sess,
+ 	struct rtrs_sess *s = &sess->s;
+ 	struct rtrs_srv_con *con;
+ 
+-	u16 cq_size, wr_queue_size;
++	u32 cq_size, wr_queue_size;
+ 	int err, cq_vector;
+ 
+ 	con = kzalloc(sizeof(*con), GFP_KERNEL);
+@@ -1615,7 +1631,7 @@ static int create_con(struct rtrs_srv_sess *sess,
+ 	con->c.cm_id = cm_id;
+ 	con->c.sess = &sess->s;
+ 	con->c.cid = cid;
+-	atomic_set(&con->wr_cnt, 0);
++	atomic_set(&con->wr_cnt, 1);
+ 
+ 	if (con->c.cid == 0) {
+ 		/*
+@@ -1645,7 +1661,8 @@ static int create_con(struct rtrs_srv_sess *sess,
+ 
+ 	/* TODO: SOFTIRQ can be faster, but be careful with softirq context */
+ 	err = rtrs_cq_qp_create(&sess->s, &con->c, 1, cq_vector, cq_size,
+-				 wr_queue_size, IB_POLL_WORKQUEUE);
++				 wr_queue_size, wr_queue_size,
++				 IB_POLL_WORKQUEUE);
+ 	if (err) {
+ 		rtrs_err(s, "rtrs_cq_qp_create(), err: %d\n", err);
+ 		goto free_con;
+@@ -1796,13 +1813,9 @@ static int rtrs_rdma_connect(struct rdma_cm_id *cm_id,
+ 		goto reject_w_econnreset;
+ 	}
+ 	recon_cnt = le16_to_cpu(msg->recon_cnt);
+-	srv = get_or_create_srv(ctx, &msg->paths_uuid);
+-	/*
+-	 * "refcount == 0" happens if a previous thread calls get_or_create_srv
+-	 * allocate srv, but chunks of srv are not allocated yet.
+-	 */
+-	if (!srv || refcount_read(&srv->refcount) == 0) {
+-		err = -ENOMEM;
++	srv = get_or_create_srv(ctx, &msg->paths_uuid, msg->first_conn);
++	if (IS_ERR(srv)) {
++		err = PTR_ERR(srv);
+ 		goto reject_w_err;
+ 	}
+ 	mutex_lock(&srv->paths_mutex);
+@@ -1877,8 +1890,8 @@ reject_w_econnreset:
+ 	return rtrs_rdma_do_reject(cm_id, -ECONNRESET);
+ 
+ close_and_return_err:
+-	close_sess(sess);
+ 	mutex_unlock(&srv->paths_mutex);
++	close_sess(sess);
+ 
+ 	return err;
+ }
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
+index ff1093d6e4bc9..23e5452e10c46 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs.c
+@@ -246,14 +246,14 @@ static int create_cq(struct rtrs_con *con, int cq_vector, u16 cq_size,
+ }
+ 
+ static int create_qp(struct rtrs_con *con, struct ib_pd *pd,
+-		     u16 wr_queue_size, u32 max_sge)
++		     u32 max_send_wr, u32 max_recv_wr, u32 max_sge)
+ {
+ 	struct ib_qp_init_attr init_attr = {NULL};
+ 	struct rdma_cm_id *cm_id = con->cm_id;
+ 	int ret;
+ 
+-	init_attr.cap.max_send_wr = wr_queue_size;
+-	init_attr.cap.max_recv_wr = wr_queue_size;
++	init_attr.cap.max_send_wr = max_send_wr;
++	init_attr.cap.max_recv_wr = max_recv_wr;
+ 	init_attr.cap.max_recv_sge = 1;
+ 	init_attr.event_handler = qp_event_handler;
+ 	init_attr.qp_context = con;
+@@ -275,8 +275,9 @@ static int create_qp(struct rtrs_con *con, struct ib_pd *pd,
+ }
+ 
+ int rtrs_cq_qp_create(struct rtrs_sess *sess, struct rtrs_con *con,
+-		       u32 max_send_sge, int cq_vector, u16 cq_size,
+-		       u16 wr_queue_size, enum ib_poll_context poll_ctx)
++		       u32 max_send_sge, int cq_vector, int cq_size,
++		       u32 max_send_wr, u32 max_recv_wr,
++		       enum ib_poll_context poll_ctx)
+ {
+ 	int err;
+ 
+@@ -284,7 +285,8 @@ int rtrs_cq_qp_create(struct rtrs_sess *sess, struct rtrs_con *con,
+ 	if (err)
+ 		return err;
+ 
+-	err = create_qp(con, sess->dev->ib_pd, wr_queue_size, max_send_sge);
++	err = create_qp(con, sess->dev->ib_pd, max_send_wr, max_recv_wr,
++			max_send_sge);
+ 	if (err) {
+ 		ib_free_cq(con->cq);
+ 		con->cq = NULL;
+diff --git a/drivers/input/joydev.c b/drivers/input/joydev.c
+index a2b5fbba2d3b3..430dc69750048 100644
+--- a/drivers/input/joydev.c
++++ b/drivers/input/joydev.c
+@@ -456,7 +456,7 @@ static int joydev_handle_JSIOCSAXMAP(struct joydev *joydev,
+ 	if (IS_ERR(abspam))
+ 		return PTR_ERR(abspam);
+ 
+-	for (i = 0; i < joydev->nabs; i++) {
++	for (i = 0; i < len && i < joydev->nabs; i++) {
+ 		if (abspam[i] > ABS_MAX) {
+ 			retval = -EINVAL;
+ 			goto out;
+@@ -480,6 +480,9 @@ static int joydev_handle_JSIOCSBTNMAP(struct joydev *joydev,
+ 	int i;
+ 	int retval = 0;
+ 
++	if (len % sizeof(*keypam))
++		return -EINVAL;
++
+ 	len = min(len, sizeof(joydev->keypam));
+ 
+ 	/* Validate the map. */
+@@ -487,7 +490,7 @@ static int joydev_handle_JSIOCSBTNMAP(struct joydev *joydev,
+ 	if (IS_ERR(keypam))
+ 		return PTR_ERR(keypam);
+ 
+-	for (i = 0; i < joydev->nkey; i++) {
++	for (i = 0; i < (len / 2) && i < joydev->nkey; i++) {
+ 		if (keypam[i] > KEY_MAX || keypam[i] < BTN_MISC) {
+ 			retval = -EINVAL;
+ 			goto out;
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 3d004ca76b6ed..e5f1e3cf9179f 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -305,6 +305,7 @@ static const struct xpad_device {
+ 	{ 0x1bad, 0xfd00, "Razer Onza TE", 0, XTYPE_XBOX360 },
+ 	{ 0x1bad, 0xfd01, "Razer Onza", 0, XTYPE_XBOX360 },
+ 	{ 0x20d6, 0x2001, "BDA Xbox Series X Wired Controller", 0, XTYPE_XBOXONE },
++	{ 0x20d6, 0x2009, "PowerA Enhanced Wired Controller for Xbox Series X|S", 0, XTYPE_XBOXONE },
+ 	{ 0x20d6, 0x281f, "PowerA Wired Controller For Xbox 360", 0, XTYPE_XBOX360 },
+ 	{ 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE },
+ 	{ 0x24c6, 0x5000, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index c74b020796a94..9119e12a57784 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -588,6 +588,10 @@ static const struct dmi_system_id i8042_dmi_noselftest_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 			DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */
+ 		},
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_CHASSIS_TYPE, "31"), /* Convertible Notebook */
++		},
+ 	},
+ 	{ }
+ };
+diff --git a/drivers/input/serio/serport.c b/drivers/input/serio/serport.c
+index 8ac970a423de6..33e9d9bfd036f 100644
+--- a/drivers/input/serio/serport.c
++++ b/drivers/input/serio/serport.c
+@@ -156,7 +156,9 @@ out:
+  * returning 0 characters.
+  */
+ 
+-static ssize_t serport_ldisc_read(struct tty_struct * tty, struct file * file, unsigned char __user * buf, size_t nr)
++static ssize_t serport_ldisc_read(struct tty_struct * tty, struct file * file,
++				  unsigned char *kbuf, size_t nr,
++				  void **cookie, unsigned long offset)
+ {
+ 	struct serport *serport = (struct serport*) tty->disc_data;
+ 	struct serio *serio;
+diff --git a/drivers/input/touchscreen/elo.c b/drivers/input/touchscreen/elo.c
+index e0bacd34866ad..96173232e53fe 100644
+--- a/drivers/input/touchscreen/elo.c
++++ b/drivers/input/touchscreen/elo.c
+@@ -341,8 +341,10 @@ static int elo_connect(struct serio *serio, struct serio_driver *drv)
+ 	switch (elo->id) {
+ 
+ 	case 0: /* 10-byte protocol */
+-		if (elo_setup_10(elo))
++		if (elo_setup_10(elo)) {
++			err = -EIO;
+ 			goto fail3;
++		}
+ 
+ 		break;
+ 
+diff --git a/drivers/input/touchscreen/raydium_i2c_ts.c b/drivers/input/touchscreen/raydium_i2c_ts.c
+index 603a948460d64..4d2d22a869773 100644
+--- a/drivers/input/touchscreen/raydium_i2c_ts.c
++++ b/drivers/input/touchscreen/raydium_i2c_ts.c
+@@ -445,6 +445,7 @@ static int raydium_i2c_write_object(struct i2c_client *client,
+ 				    enum raydium_bl_ack state)
+ {
+ 	int error;
++	static const u8 cmd[] = { 0xFF, 0x39 };
+ 
+ 	error = raydium_i2c_send(client, RM_CMD_BOOT_WRT, data, len);
+ 	if (error) {
+@@ -453,7 +454,7 @@ static int raydium_i2c_write_object(struct i2c_client *client,
+ 		return error;
+ 	}
+ 
+-	error = raydium_i2c_send(client, RM_CMD_BOOT_ACK, NULL, 0);
++	error = raydium_i2c_send(client, RM_CMD_BOOT_ACK, cmd, sizeof(cmd));
+ 	if (error) {
+ 		dev_err(&client->dev, "Ack obj command failed: %d\n", error);
+ 		return error;
+diff --git a/drivers/input/touchscreen/sur40.c b/drivers/input/touchscreen/sur40.c
+index 620cdd7d214a6..12f2562b0141b 100644
+--- a/drivers/input/touchscreen/sur40.c
++++ b/drivers/input/touchscreen/sur40.c
+@@ -787,6 +787,7 @@ static int sur40_probe(struct usb_interface *interface,
+ 		dev_err(&interface->dev,
+ 			"Unable to register video controls.");
+ 		v4l2_ctrl_handler_free(&sur40->hdl);
++		error = sur40->hdl.error;
+ 		goto err_unreg_v4l2;
+ 	}
+ 
+diff --git a/drivers/input/touchscreen/zinitix.c b/drivers/input/touchscreen/zinitix.c
+index 1acc2eb2bcb33..fd8b4e9f08a21 100644
+--- a/drivers/input/touchscreen/zinitix.c
++++ b/drivers/input/touchscreen/zinitix.c
+@@ -190,7 +190,7 @@ static int zinitix_write_cmd(struct i2c_client *client, u16 reg)
+ 	return 0;
+ }
+ 
+-static bool zinitix_init_touch(struct bt541_ts_data *bt541)
++static int zinitix_init_touch(struct bt541_ts_data *bt541)
+ {
+ 	struct i2c_client *client = bt541->client;
+ 	int i;
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index e634bbe605730..7067b7c116260 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -2267,7 +2267,7 @@ static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
+ {
+ 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ 
+-	arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start,
++	arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start + 1,
+ 			       gather->pgsize, true, smmu_domain);
+ }
+ 
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 0eba5e883e3f1..63f7173b241f0 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -65,6 +65,8 @@ static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
+ 		smr = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_SMR(i));
+ 
+ 		if (FIELD_GET(ARM_SMMU_SMR_VALID, smr)) {
++			/* Ignore valid bit for SMR mask extraction. */
++			smr &= ~ARM_SMMU_SMR_VALID;
+ 			smmu->smrs[i].id = FIELD_GET(ARM_SMMU_SMR_ID, smr);
+ 			smmu->smrs[i].mask = FIELD_GET(ARM_SMMU_SMR_MASK, smr);
+ 			smmu->smrs[i].valid = true;
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 0f4dc25d46c92..0d9adce6d812f 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -2409,9 +2409,6 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
+ 		size -= pgsize;
+ 	}
+ 
+-	if (ops->iotlb_sync_map)
+-		ops->iotlb_sync_map(domain);
+-
+ 	/* unroll mapping in case something went wrong */
+ 	if (ret)
+ 		iommu_unmap(domain, orig_iova, orig_size - size);
+@@ -2421,18 +2418,31 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
+ 	return ret;
+ }
+ 
++static int _iommu_map(struct iommu_domain *domain, unsigned long iova,
++		      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
++{
++	const struct iommu_ops *ops = domain->ops;
++	int ret;
++
++	ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
++	if (ret == 0 && ops->iotlb_sync_map)
++		ops->iotlb_sync_map(domain);
++
++	return ret;
++}
++
+ int iommu_map(struct iommu_domain *domain, unsigned long iova,
+ 	      phys_addr_t paddr, size_t size, int prot)
+ {
+ 	might_sleep();
+-	return __iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL);
++	return _iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL);
+ }
+ EXPORT_SYMBOL_GPL(iommu_map);
+ 
+ int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
+ 	      phys_addr_t paddr, size_t size, int prot)
+ {
+-	return __iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC);
++	return _iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC);
+ }
+ EXPORT_SYMBOL_GPL(iommu_map_atomic);
+ 
+@@ -2516,6 +2526,7 @@ static size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
+ 			     struct scatterlist *sg, unsigned int nents, int prot,
+ 			     gfp_t gfp)
+ {
++	const struct iommu_ops *ops = domain->ops;
+ 	size_t len = 0, mapped = 0;
+ 	phys_addr_t start;
+ 	unsigned int i = 0;
+@@ -2546,6 +2557,8 @@ static size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
+ 			sg = sg_next(sg);
+ 	}
+ 
++	if (ops->iotlb_sync_map)
++		ops->iotlb_sync_map(domain);
+ 	return mapped;
+ 
+ out_err:
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index c072cee532c20..19387d2bc4b4f 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -445,7 +445,7 @@ static void mtk_iommu_iotlb_sync(struct iommu_domain *domain,
+ 				 struct iommu_iotlb_gather *gather)
+ {
+ 	struct mtk_iommu_data *data = mtk_iommu_get_m4u_data();
+-	size_t length = gather->end - gather->start;
++	size_t length = gather->end - gather->start + 1;
+ 
+ 	if (gather->start == ULONG_MAX)
+ 		return;
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index 2aa79c32ee228..6156a065681bc 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -464,7 +464,8 @@ config IMX_IRQSTEER
+ 	  Support for the i.MX IRQSTEER interrupt multiplexer/remapper.
+ 
+ config IMX_INTMUX
+-	def_bool y if ARCH_MXC || COMPILE_TEST
++	bool "i.MX INTMUX support" if COMPILE_TEST
++	default y if ARCH_MXC
+ 	select IRQ_DOMAIN
+ 	help
+ 	  Support for the i.MX INTMUX interrupt multiplexer.
+diff --git a/drivers/irqchip/irq-loongson-pch-msi.c b/drivers/irqchip/irq-loongson-pch-msi.c
+index 12aeeab432893..32562b7e681b5 100644
+--- a/drivers/irqchip/irq-loongson-pch-msi.c
++++ b/drivers/irqchip/irq-loongson-pch-msi.c
+@@ -225,7 +225,7 @@ static int pch_msi_init(struct device_node *node,
+ 		goto err_priv;
+ 	}
+ 
+-	priv->msi_map = bitmap_alloc(priv->num_irqs, GFP_KERNEL);
++	priv->msi_map = bitmap_zalloc(priv->num_irqs, GFP_KERNEL);
+ 	if (!priv->msi_map) {
+ 		ret = -ENOMEM;
+ 		goto err_priv;
+diff --git a/drivers/macintosh/adb-iop.c b/drivers/macintosh/adb-iop.c
+index 0ee3272491501..2633bc254935c 100644
+--- a/drivers/macintosh/adb-iop.c
++++ b/drivers/macintosh/adb-iop.c
+@@ -19,6 +19,7 @@
+ #include <asm/macints.h>
+ #include <asm/mac_iop.h>
+ #include <asm/adb_iop.h>
++#include <asm/unaligned.h>
+ 
+ #include <linux/adb.h>
+ 
+@@ -249,7 +250,7 @@ static void adb_iop_set_ap_complete(struct iop_msg *msg)
+ {
+ 	struct adb_iopmsg *amsg = (struct adb_iopmsg *)msg->message;
+ 
+-	autopoll_devs = (amsg->data[1] << 8) | amsg->data[0];
++	autopoll_devs = get_unaligned_be16(amsg->data);
+ 	if (autopoll_devs & (1 << autopoll_addr))
+ 		return;
+ 	autopoll_addr = autopoll_devs ? (ffs(autopoll_devs) - 1) : 0;
+@@ -266,8 +267,7 @@ static int adb_iop_autopoll(int devs)
+ 	amsg.flags = ADB_IOP_SET_AUTOPOLL | (mask ? ADB_IOP_AUTOPOLL : 0);
+ 	amsg.count = 2;
+ 	amsg.cmd = 0;
+-	amsg.data[0] = mask & 0xFF;
+-	amsg.data[1] = (mask >> 8) & 0xFF;
++	put_unaligned_be16(mask, amsg.data);
+ 
+ 	iop_send_message(ADB_IOP, ADB_CHAN, NULL, sizeof(amsg), (__u8 *)&amsg,
+ 			 adb_iop_set_ap_complete);
+diff --git a/drivers/mailbox/sprd-mailbox.c b/drivers/mailbox/sprd-mailbox.c
+index f6fab24ae8a9a..4c325301a2fe8 100644
+--- a/drivers/mailbox/sprd-mailbox.c
++++ b/drivers/mailbox/sprd-mailbox.c
+@@ -35,7 +35,7 @@
+ #define SPRD_MBOX_IRQ_CLR			BIT(0)
+ 
+ /* Bit and mask definiation for outbox's SPRD_MBOX_FIFO_STS register */
+-#define SPRD_OUTBOX_FIFO_FULL			BIT(0)
++#define SPRD_OUTBOX_FIFO_FULL			BIT(2)
+ #define SPRD_OUTBOX_FIFO_WR_SHIFT		16
+ #define SPRD_OUTBOX_FIFO_RD_SHIFT		24
+ #define SPRD_OUTBOX_FIFO_POS_MASK		GENMASK(7, 0)
+diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
+index 1d57f48307e66..e8bf4f752e8be 100644
+--- a/drivers/md/bcache/bcache.h
++++ b/drivers/md/bcache/bcache.h
+@@ -1001,6 +1001,7 @@ void bch_write_bdev_super(struct cached_dev *dc, struct closure *parent);
+ 
+ extern struct workqueue_struct *bcache_wq;
+ extern struct workqueue_struct *bch_journal_wq;
++extern struct workqueue_struct *bch_flush_wq;
+ extern struct mutex bch_register_lock;
+ extern struct list_head bch_cache_sets;
+ 
+@@ -1042,5 +1043,7 @@ void bch_debug_exit(void);
+ void bch_debug_init(void);
+ void bch_request_exit(void);
+ int bch_request_init(void);
++void bch_btree_exit(void);
++int bch_btree_init(void);
+ 
+ #endif /* _BCACHE_H */
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 910df242c83df..fe6dce125aba2 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -99,6 +99,8 @@
+ #define PTR_HASH(c, k)							\
+ 	(((k)->ptr[0] >> c->bucket_bits) | PTR_GEN(k, 0))
+ 
++static struct workqueue_struct *btree_io_wq;
++
+ #define insert_lock(s, b)	((b)->level <= (s)->lock)
+ 
+ 
+@@ -308,7 +310,7 @@ static void __btree_node_write_done(struct closure *cl)
+ 	btree_complete_write(b, w);
+ 
+ 	if (btree_node_dirty(b))
+-		schedule_delayed_work(&b->work, 30 * HZ);
++		queue_delayed_work(btree_io_wq, &b->work, 30 * HZ);
+ 
+ 	closure_return_with_destructor(cl, btree_node_write_unlock);
+ }
+@@ -481,7 +483,7 @@ static void bch_btree_leaf_dirty(struct btree *b, atomic_t *journal_ref)
+ 	BUG_ON(!i->keys);
+ 
+ 	if (!btree_node_dirty(b))
+-		schedule_delayed_work(&b->work, 30 * HZ);
++		queue_delayed_work(btree_io_wq, &b->work, 30 * HZ);
+ 
+ 	set_btree_node_dirty(b);
+ 
+@@ -2764,3 +2766,18 @@ void bch_keybuf_init(struct keybuf *buf)
+ 	spin_lock_init(&buf->lock);
+ 	array_allocator_init(&buf->freelist);
+ }
++
++void bch_btree_exit(void)
++{
++	if (btree_io_wq)
++		destroy_workqueue(btree_io_wq);
++}
++
++int __init bch_btree_init(void)
++{
++	btree_io_wq = alloc_workqueue("bch_btree_io", WQ_MEM_RECLAIM, 0);
++	if (!btree_io_wq)
++		return -ENOMEM;
++
++	return 0;
++}
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index aefbdb7e003bc..c6613e8173337 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -932,8 +932,8 @@ atomic_t *bch_journal(struct cache_set *c,
+ 		journal_try_write(c);
+ 	} else if (!w->dirty) {
+ 		w->dirty = true;
+-		schedule_delayed_work(&c->journal.work,
+-				      msecs_to_jiffies(c->journal_delay_ms));
++		queue_delayed_work(bch_flush_wq, &c->journal.work,
++				   msecs_to_jiffies(c->journal_delay_ms));
+ 		spin_unlock(&c->journal.lock);
+ 	} else {
+ 		spin_unlock(&c->journal.lock);
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index a148b92ad8563..248bda63f0852 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -49,6 +49,7 @@ static int bcache_major;
+ static DEFINE_IDA(bcache_device_idx);
+ static wait_queue_head_t unregister_wait;
+ struct workqueue_struct *bcache_wq;
++struct workqueue_struct *bch_flush_wq;
+ struct workqueue_struct *bch_journal_wq;
+ 
+ 
+@@ -2833,6 +2834,9 @@ static void bcache_exit(void)
+ 		destroy_workqueue(bcache_wq);
+ 	if (bch_journal_wq)
+ 		destroy_workqueue(bch_journal_wq);
++	if (bch_flush_wq)
++		destroy_workqueue(bch_flush_wq);
++	bch_btree_exit();
+ 
+ 	if (bcache_major)
+ 		unregister_blkdev(bcache_major, "bcache");
+@@ -2888,10 +2892,26 @@ static int __init bcache_init(void)
+ 		return bcache_major;
+ 	}
+ 
++	if (bch_btree_init())
++		goto err;
++
+ 	bcache_wq = alloc_workqueue("bcache", WQ_MEM_RECLAIM, 0);
+ 	if (!bcache_wq)
+ 		goto err;
+ 
++	/*
++	 * Let's not make this `WQ_MEM_RECLAIM` for the following reasons:
++	 *
++	 * 1. It used `system_wq` before which also does no memory reclaim.
++	 * 2. With `WQ_MEM_RECLAIM` desktop stalls, increased boot times, and
++	 *    reduced throughput can be observed.
++	 *
++	 * We still want to user our own queue to not congest the `system_wq`.
++	 */
++	bch_flush_wq = alloc_workqueue("bch_flush", 0, 0);
++	if (!bch_flush_wq)
++		goto err;
++
+ 	bch_journal_wq = alloc_workqueue("bch_journal", WQ_MEM_RECLAIM, 0);
+ 	if (!bch_journal_wq)
+ 		goto err;
+diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
+index d522093cb39dd..3db92d9a030b9 100644
+--- a/drivers/md/dm-core.h
++++ b/drivers/md/dm-core.h
+@@ -109,6 +109,10 @@ struct mapped_device {
+ 
+ 	struct block_device *bdev;
+ 
++	int swap_bios;
++	struct semaphore swap_bios_semaphore;
++	struct mutex swap_bios_lock;
++
+ 	struct dm_stats stats;
+ 
+ 	/* for blk-mq request-based DM support */
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 875823d6ee7e0..70ae6f3aede94 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -3324,6 +3324,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 	wake_up_process(cc->write_thread);
+ 
+ 	ti->num_flush_bios = 1;
++	ti->limit_swap_bios = true;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/md/dm-era-target.c b/drivers/md/dm-era-target.c
+index b24e3839bb3a1..d9ac7372108c9 100644
+--- a/drivers/md/dm-era-target.c
++++ b/drivers/md/dm-era-target.c
+@@ -47,6 +47,7 @@ struct writeset {
+ static void writeset_free(struct writeset *ws)
+ {
+ 	vfree(ws->bits);
++	ws->bits = NULL;
+ }
+ 
+ static int setup_on_disk_bitset(struct dm_disk_bitset *info,
+@@ -71,8 +72,6 @@ static size_t bitset_size(unsigned nr_bits)
+  */
+ static int writeset_alloc(struct writeset *ws, dm_block_t nr_blocks)
+ {
+-	ws->md.nr_bits = nr_blocks;
+-	ws->md.root = INVALID_WRITESET_ROOT;
+ 	ws->bits = vzalloc(bitset_size(nr_blocks));
+ 	if (!ws->bits) {
+ 		DMERR("%s: couldn't allocate in memory bitset", __func__);
+@@ -85,12 +84,14 @@ static int writeset_alloc(struct writeset *ws, dm_block_t nr_blocks)
+ /*
+  * Wipes the in-core bitset, and creates a new on disk bitset.
+  */
+-static int writeset_init(struct dm_disk_bitset *info, struct writeset *ws)
++static int writeset_init(struct dm_disk_bitset *info, struct writeset *ws,
++			 dm_block_t nr_blocks)
+ {
+ 	int r;
+ 
+-	memset(ws->bits, 0, bitset_size(ws->md.nr_bits));
++	memset(ws->bits, 0, bitset_size(nr_blocks));
+ 
++	ws->md.nr_bits = nr_blocks;
+ 	r = setup_on_disk_bitset(info, ws->md.nr_bits, &ws->md.root);
+ 	if (r) {
+ 		DMERR("%s: setup_on_disk_bitset failed", __func__);
+@@ -134,7 +135,7 @@ static int writeset_test_and_set(struct dm_disk_bitset *info,
+ {
+ 	int r;
+ 
+-	if (!test_and_set_bit(block, ws->bits)) {
++	if (!test_bit(block, ws->bits)) {
+ 		r = dm_bitset_set_bit(info, ws->md.root, block, &ws->md.root);
+ 		if (r) {
+ 			/* FIXME: fail mode */
+@@ -388,7 +389,7 @@ static void ws_dec(void *context, const void *value)
+ 
+ static int ws_eq(void *context, const void *value1, const void *value2)
+ {
+-	return !memcmp(value1, value2, sizeof(struct writeset_metadata));
++	return !memcmp(value1, value2, sizeof(struct writeset_disk));
+ }
+ 
+ /*----------------------------------------------------------------*/
+@@ -564,6 +565,15 @@ static int open_metadata(struct era_metadata *md)
+ 	}
+ 
+ 	disk = dm_block_data(sblock);
++
++	/* Verify the data block size hasn't changed */
++	if (le32_to_cpu(disk->data_block_size) != md->block_size) {
++		DMERR("changing the data block size (from %u to %llu) is not supported",
++		      le32_to_cpu(disk->data_block_size), md->block_size);
++		r = -EINVAL;
++		goto bad;
++	}
++
+ 	r = dm_tm_open_with_sm(md->bm, SUPERBLOCK_LOCATION,
+ 			       disk->metadata_space_map_root,
+ 			       sizeof(disk->metadata_space_map_root),
+@@ -575,10 +585,10 @@ static int open_metadata(struct era_metadata *md)
+ 
+ 	setup_infos(md);
+ 
+-	md->block_size = le32_to_cpu(disk->data_block_size);
+ 	md->nr_blocks = le32_to_cpu(disk->nr_blocks);
+ 	md->current_era = le32_to_cpu(disk->current_era);
+ 
++	ws_unpack(&disk->current_writeset, &md->current_writeset->md);
+ 	md->writeset_tree_root = le64_to_cpu(disk->writeset_tree_root);
+ 	md->era_array_root = le64_to_cpu(disk->era_array_root);
+ 	md->metadata_snap = le64_to_cpu(disk->metadata_snap);
+@@ -746,6 +756,12 @@ static int metadata_digest_lookup_writeset(struct era_metadata *md,
+ 	ws_unpack(&disk, &d->writeset);
+ 	d->value = cpu_to_le32(key);
+ 
++	/*
++	 * We initialise another bitset info to avoid any caching side effects
++	 * with the previous one.
++	 */
++	dm_disk_bitset_init(md->tm, &d->info);
++
+ 	d->nr_bits = min(d->writeset.nr_bits, md->nr_blocks);
+ 	d->current_bit = 0;
+ 	d->step = metadata_digest_transcribe_writeset;
+@@ -759,12 +775,6 @@ static int metadata_digest_start(struct era_metadata *md, struct digest *d)
+ 		return 0;
+ 
+ 	memset(d, 0, sizeof(*d));
+-
+-	/*
+-	 * We initialise another bitset info to avoid any caching side
+-	 * effects with the previous one.
+-	 */
+-	dm_disk_bitset_init(md->tm, &d->info);
+ 	d->step = metadata_digest_lookup_writeset;
+ 
+ 	return 0;
+@@ -802,6 +812,8 @@ static struct era_metadata *metadata_open(struct block_device *bdev,
+ 
+ static void metadata_close(struct era_metadata *md)
+ {
++	writeset_free(&md->writesets[0]);
++	writeset_free(&md->writesets[1]);
+ 	destroy_persistent_data_objects(md);
+ 	kfree(md);
+ }
+@@ -839,6 +851,7 @@ static int metadata_resize(struct era_metadata *md, void *arg)
+ 	r = writeset_alloc(&md->writesets[1], *new_size);
+ 	if (r) {
+ 		DMERR("%s: writeset_alloc failed for writeset 1", __func__);
++		writeset_free(&md->writesets[0]);
+ 		return r;
+ 	}
+ 
+@@ -849,6 +862,8 @@ static int metadata_resize(struct era_metadata *md, void *arg)
+ 			    &value, &md->era_array_root);
+ 	if (r) {
+ 		DMERR("%s: dm_array_resize failed", __func__);
++		writeset_free(&md->writesets[0]);
++		writeset_free(&md->writesets[1]);
+ 		return r;
+ 	}
+ 
+@@ -870,7 +885,6 @@ static int metadata_era_archive(struct era_metadata *md)
+ 	}
+ 
+ 	ws_pack(&md->current_writeset->md, &value);
+-	md->current_writeset->md.root = INVALID_WRITESET_ROOT;
+ 
+ 	keys[0] = md->current_era;
+ 	__dm_bless_for_disk(&value);
+@@ -882,6 +896,7 @@ static int metadata_era_archive(struct era_metadata *md)
+ 		return r;
+ 	}
+ 
++	md->current_writeset->md.root = INVALID_WRITESET_ROOT;
+ 	md->archived_writesets = true;
+ 
+ 	return 0;
+@@ -898,7 +913,7 @@ static int metadata_new_era(struct era_metadata *md)
+ 	int r;
+ 	struct writeset *new_writeset = next_writeset(md);
+ 
+-	r = writeset_init(&md->bitset_info, new_writeset);
++	r = writeset_init(&md->bitset_info, new_writeset, md->nr_blocks);
+ 	if (r) {
+ 		DMERR("%s: writeset_init failed", __func__);
+ 		return r;
+@@ -951,7 +966,7 @@ static int metadata_commit(struct era_metadata *md)
+ 	int r;
+ 	struct dm_block *sblock;
+ 
+-	if (md->current_writeset->md.root != SUPERBLOCK_LOCATION) {
++	if (md->current_writeset->md.root != INVALID_WRITESET_ROOT) {
+ 		r = dm_bitset_flush(&md->bitset_info, md->current_writeset->md.root,
+ 				    &md->current_writeset->md.root);
+ 		if (r) {
+@@ -1225,8 +1240,10 @@ static void process_deferred_bios(struct era *era)
+ 	int r;
+ 	struct bio_list deferred_bios, marked_bios;
+ 	struct bio *bio;
++	struct blk_plug plug;
+ 	bool commit_needed = false;
+ 	bool failed = false;
++	struct writeset *ws = era->md->current_writeset;
+ 
+ 	bio_list_init(&deferred_bios);
+ 	bio_list_init(&marked_bios);
+@@ -1236,9 +1253,11 @@ static void process_deferred_bios(struct era *era)
+ 	bio_list_init(&era->deferred_bios);
+ 	spin_unlock(&era->deferred_lock);
+ 
++	if (bio_list_empty(&deferred_bios))
++		return;
++
+ 	while ((bio = bio_list_pop(&deferred_bios))) {
+-		r = writeset_test_and_set(&era->md->bitset_info,
+-					  era->md->current_writeset,
++		r = writeset_test_and_set(&era->md->bitset_info, ws,
+ 					  get_block(era, bio));
+ 		if (r < 0) {
+ 			/*
+@@ -1246,7 +1265,6 @@ static void process_deferred_bios(struct era *era)
+ 			 * FIXME: finish.
+ 			 */
+ 			failed = true;
+-
+ 		} else if (r == 0)
+ 			commit_needed = true;
+ 
+@@ -1262,9 +1280,19 @@ static void process_deferred_bios(struct era *era)
+ 	if (failed)
+ 		while ((bio = bio_list_pop(&marked_bios)))
+ 			bio_io_error(bio);
+-	else
+-		while ((bio = bio_list_pop(&marked_bios)))
++	else {
++		blk_start_plug(&plug);
++		while ((bio = bio_list_pop(&marked_bios))) {
++			/*
++			 * Only update the in-core writeset if the on-disk one
++			 * was updated too.
++			 */
++			if (commit_needed)
++				set_bit(get_block(era, bio), ws->bits);
+ 			submit_bio_noacct(bio);
++		}
++		blk_finish_plug(&plug);
++	}
+ }
+ 
+ static void process_rpc_calls(struct era *era)
+@@ -1473,15 +1501,6 @@ static int era_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 	}
+ 	era->md = md;
+ 
+-	era->nr_blocks = calc_nr_blocks(era);
+-
+-	r = metadata_resize(era->md, &era->nr_blocks);
+-	if (r) {
+-		ti->error = "couldn't resize metadata";
+-		era_destroy(era);
+-		return -ENOMEM;
+-	}
+-
+ 	era->wq = alloc_ordered_workqueue("dm-" DM_MSG_PREFIX, WQ_MEM_RECLAIM);
+ 	if (!era->wq) {
+ 		ti->error = "could not create workqueue for metadata object";
+@@ -1556,16 +1575,24 @@ static int era_preresume(struct dm_target *ti)
+ 	dm_block_t new_size = calc_nr_blocks(era);
+ 
+ 	if (era->nr_blocks != new_size) {
+-		r = in_worker1(era, metadata_resize, &new_size);
+-		if (r)
++		r = metadata_resize(era->md, &new_size);
++		if (r) {
++			DMERR("%s: metadata_resize failed", __func__);
++			return r;
++		}
++
++		r = metadata_commit(era->md);
++		if (r) {
++			DMERR("%s: metadata_commit failed", __func__);
+ 			return r;
++		}
+ 
+ 		era->nr_blocks = new_size;
+ 	}
+ 
+ 	start_worker(era);
+ 
+-	r = in_worker0(era, metadata_new_era);
++	r = in_worker0(era, metadata_era_rollover);
+ 	if (r) {
+ 		DMERR("%s: metadata_era_rollover failed", __func__);
+ 		return r;
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 09ded08cbb609..9b824c21580a4 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -827,24 +827,24 @@ void dm_table_set_type(struct dm_table *t, enum dm_queue_mode type)
+ EXPORT_SYMBOL_GPL(dm_table_set_type);
+ 
+ /* validate the dax capability of the target device span */
+-int device_supports_dax(struct dm_target *ti, struct dm_dev *dev,
++int device_not_dax_capable(struct dm_target *ti, struct dm_dev *dev,
+ 			sector_t start, sector_t len, void *data)
+ {
+ 	int blocksize = *(int *) data, id;
+ 	bool rc;
+ 
+ 	id = dax_read_lock();
+-	rc = dax_supported(dev->dax_dev, dev->bdev, blocksize, start, len);
++	rc = !dax_supported(dev->dax_dev, dev->bdev, blocksize, start, len);
+ 	dax_read_unlock(id);
+ 
+ 	return rc;
+ }
+ 
+ /* Check devices support synchronous DAX */
+-static int device_dax_synchronous(struct dm_target *ti, struct dm_dev *dev,
+-				  sector_t start, sector_t len, void *data)
++static int device_not_dax_synchronous_capable(struct dm_target *ti, struct dm_dev *dev,
++					      sector_t start, sector_t len, void *data)
+ {
+-	return dev->dax_dev && dax_synchronous(dev->dax_dev);
++	return !dev->dax_dev || !dax_synchronous(dev->dax_dev);
+ }
+ 
+ bool dm_table_supports_dax(struct dm_table *t,
+@@ -861,7 +861,7 @@ bool dm_table_supports_dax(struct dm_table *t,
+ 			return false;
+ 
+ 		if (!ti->type->iterate_devices ||
+-		    !ti->type->iterate_devices(ti, iterate_fn, blocksize))
++		    ti->type->iterate_devices(ti, iterate_fn, blocksize))
+ 			return false;
+ 	}
+ 
+@@ -932,7 +932,7 @@ static int dm_table_determine_type(struct dm_table *t)
+ verify_bio_based:
+ 		/* We must use this table as bio-based */
+ 		t->type = DM_TYPE_BIO_BASED;
+-		if (dm_table_supports_dax(t, device_supports_dax, &page_size) ||
++		if (dm_table_supports_dax(t, device_not_dax_capable, &page_size) ||
+ 		    (list_empty(devices) && live_md_type == DM_TYPE_DAX_BIO_BASED)) {
+ 			t->type = DM_TYPE_DAX_BIO_BASED;
+ 		}
+@@ -1302,6 +1302,46 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
+ 	return &t->targets[(KEYS_PER_NODE * n) + k];
+ }
+ 
++/*
++ * type->iterate_devices() should be called when the sanity check needs to
++ * iterate and check all underlying data devices. iterate_devices() will
++ * iterate all underlying data devices until it encounters a non-zero return
++ * code, returned by whether the input iterate_devices_callout_fn, or
++ * iterate_devices() itself internally.
++ *
++ * For some target type (e.g. dm-stripe), one call of iterate_devices() may
++ * iterate multiple underlying devices internally, in which case a non-zero
++ * return code returned by iterate_devices_callout_fn will stop the iteration
++ * in advance.
++ *
++ * Cases requiring _any_ underlying device supporting some kind of attribute,
++ * should use the iteration structure like dm_table_any_dev_attr(), or call
++ * it directly. @func should handle semantics of positive examples, e.g.
++ * capable of something.
++ *
++ * Cases requiring _all_ underlying devices supporting some kind of attribute,
++ * should use the iteration structure like dm_table_supports_nowait() or
++ * dm_table_supports_discards(). Or introduce dm_table_all_devs_attr() that
++ * uses an @anti_func that handle semantics of counter examples, e.g. not
++ * capable of something. So: return !dm_table_any_dev_attr(t, anti_func, data);
++ */
++static bool dm_table_any_dev_attr(struct dm_table *t,
++				  iterate_devices_callout_fn func, void *data)
++{
++	struct dm_target *ti;
++	unsigned int i;
++
++	for (i = 0; i < dm_table_get_num_targets(t); i++) {
++		ti = dm_table_get_target(t, i);
++
++		if (ti->type->iterate_devices &&
++		    ti->type->iterate_devices(ti, func, data))
++			return true;
++        }
++
++	return false;
++}
++
+ static int count_device(struct dm_target *ti, struct dm_dev *dev,
+ 			sector_t start, sector_t len, void *data)
+ {
+@@ -1338,13 +1378,13 @@ bool dm_table_has_no_data_devices(struct dm_table *table)
+ 	return true;
+ }
+ 
+-static int device_is_zoned_model(struct dm_target *ti, struct dm_dev *dev,
+-				 sector_t start, sector_t len, void *data)
++static int device_not_zoned_model(struct dm_target *ti, struct dm_dev *dev,
++				  sector_t start, sector_t len, void *data)
+ {
+ 	struct request_queue *q = bdev_get_queue(dev->bdev);
+ 	enum blk_zoned_model *zoned_model = data;
+ 
+-	return q && blk_queue_zoned_model(q) == *zoned_model;
++	return !q || blk_queue_zoned_model(q) != *zoned_model;
+ }
+ 
+ static bool dm_table_supports_zoned_model(struct dm_table *t,
+@@ -1361,37 +1401,20 @@ static bool dm_table_supports_zoned_model(struct dm_table *t,
+ 			return false;
+ 
+ 		if (!ti->type->iterate_devices ||
+-		    !ti->type->iterate_devices(ti, device_is_zoned_model, &zoned_model))
++		    ti->type->iterate_devices(ti, device_not_zoned_model, &zoned_model))
+ 			return false;
+ 	}
+ 
+ 	return true;
+ }
+ 
+-static int device_matches_zone_sectors(struct dm_target *ti, struct dm_dev *dev,
+-				       sector_t start, sector_t len, void *data)
++static int device_not_matches_zone_sectors(struct dm_target *ti, struct dm_dev *dev,
++					   sector_t start, sector_t len, void *data)
+ {
+ 	struct request_queue *q = bdev_get_queue(dev->bdev);
+ 	unsigned int *zone_sectors = data;
+ 
+-	return q && blk_queue_zone_sectors(q) == *zone_sectors;
+-}
+-
+-static bool dm_table_matches_zone_sectors(struct dm_table *t,
+-					  unsigned int zone_sectors)
+-{
+-	struct dm_target *ti;
+-	unsigned i;
+-
+-	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+-		ti = dm_table_get_target(t, i);
+-
+-		if (!ti->type->iterate_devices ||
+-		    !ti->type->iterate_devices(ti, device_matches_zone_sectors, &zone_sectors))
+-			return false;
+-	}
+-
+-	return true;
++	return !q || blk_queue_zone_sectors(q) != *zone_sectors;
+ }
+ 
+ static int validate_hardware_zoned_model(struct dm_table *table,
+@@ -1411,7 +1434,7 @@ static int validate_hardware_zoned_model(struct dm_table *table,
+ 	if (!zone_sectors || !is_power_of_2(zone_sectors))
+ 		return -EINVAL;
+ 
+-	if (!dm_table_matches_zone_sectors(table, zone_sectors)) {
++	if (dm_table_any_dev_attr(table, device_not_matches_zone_sectors, &zone_sectors)) {
+ 		DMERR("%s: zone sectors is not consistent across all devices",
+ 		      dm_device_name(table->md));
+ 		return -EINVAL;
+@@ -1585,29 +1608,12 @@ static int device_dax_write_cache_enabled(struct dm_target *ti,
+ 	return false;
+ }
+ 
+-static int dm_table_supports_dax_write_cache(struct dm_table *t)
+-{
+-	struct dm_target *ti;
+-	unsigned i;
+-
+-	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+-		ti = dm_table_get_target(t, i);
+-
+-		if (ti->type->iterate_devices &&
+-		    ti->type->iterate_devices(ti,
+-				device_dax_write_cache_enabled, NULL))
+-			return true;
+-	}
+-
+-	return false;
+-}
+-
+-static int device_is_nonrot(struct dm_target *ti, struct dm_dev *dev,
+-			    sector_t start, sector_t len, void *data)
++static int device_is_rotational(struct dm_target *ti, struct dm_dev *dev,
++				sector_t start, sector_t len, void *data)
+ {
+ 	struct request_queue *q = bdev_get_queue(dev->bdev);
+ 
+-	return q && blk_queue_nonrot(q);
++	return q && !blk_queue_nonrot(q);
+ }
+ 
+ static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev,
+@@ -1618,23 +1624,6 @@ static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev,
+ 	return q && !blk_queue_add_random(q);
+ }
+ 
+-static bool dm_table_all_devices_attribute(struct dm_table *t,
+-					   iterate_devices_callout_fn func)
+-{
+-	struct dm_target *ti;
+-	unsigned i;
+-
+-	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+-		ti = dm_table_get_target(t, i);
+-
+-		if (!ti->type->iterate_devices ||
+-		    !ti->type->iterate_devices(ti, func, NULL))
+-			return false;
+-	}
+-
+-	return true;
+-}
+-
+ static int device_not_write_same_capable(struct dm_target *ti, struct dm_dev *dev,
+ 					 sector_t start, sector_t len, void *data)
+ {
+@@ -1786,27 +1775,6 @@ static int device_requires_stable_pages(struct dm_target *ti,
+ 	return q && blk_queue_stable_writes(q);
+ }
+ 
+-/*
+- * If any underlying device requires stable pages, a table must require
+- * them as well.  Only targets that support iterate_devices are considered:
+- * don't want error, zero, etc to require stable pages.
+- */
+-static bool dm_table_requires_stable_pages(struct dm_table *t)
+-{
+-	struct dm_target *ti;
+-	unsigned i;
+-
+-	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+-		ti = dm_table_get_target(t, i);
+-
+-		if (ti->type->iterate_devices &&
+-		    ti->type->iterate_devices(ti, device_requires_stable_pages, NULL))
+-			return true;
+-	}
+-
+-	return false;
+-}
+-
+ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 			       struct queue_limits *limits)
+ {
+@@ -1844,22 +1812,22 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 	}
+ 	blk_queue_write_cache(q, wc, fua);
+ 
+-	if (dm_table_supports_dax(t, device_supports_dax, &page_size)) {
++	if (dm_table_supports_dax(t, device_not_dax_capable, &page_size)) {
+ 		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
+-		if (dm_table_supports_dax(t, device_dax_synchronous, NULL))
++		if (dm_table_supports_dax(t, device_not_dax_synchronous_capable, NULL))
+ 			set_dax_synchronous(t->md->dax_dev);
+ 	}
+ 	else
+ 		blk_queue_flag_clear(QUEUE_FLAG_DAX, q);
+ 
+-	if (dm_table_supports_dax_write_cache(t))
++	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
+ 		dax_write_cache(t->md->dax_dev, true);
+ 
+ 	/* Ensure that all underlying devices are non-rotational. */
+-	if (dm_table_all_devices_attribute(t, device_is_nonrot))
+-		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
+-	else
++	if (dm_table_any_dev_attr(t, device_is_rotational, NULL))
+ 		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
++	else
++		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
+ 
+ 	if (!dm_table_supports_write_same(t))
+ 		q->limits.max_write_same_sectors = 0;
+@@ -1871,8 +1839,11 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 	/*
+ 	 * Some devices don't use blk_integrity but still want stable pages
+ 	 * because they do their own checksumming.
++	 * If any underlying device requires stable pages, a table must require
++	 * them as well.  Only targets that support iterate_devices are considered:
++	 * don't want error, zero, etc to require stable pages.
+ 	 */
+-	if (dm_table_requires_stable_pages(t))
++	if (dm_table_any_dev_attr(t, device_requires_stable_pages, NULL))
+ 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q);
+ 	else
+ 		blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, q);
+@@ -1883,7 +1854,8 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 	 * Clear QUEUE_FLAG_ADD_RANDOM if any underlying device does not
+ 	 * have it set.
+ 	 */
+-	if (blk_queue_add_random(q) && dm_table_all_devices_attribute(t, device_is_not_random))
++	if (blk_queue_add_random(q) &&
++	    dm_table_any_dev_attr(t, device_is_not_random, NULL))
+ 		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
+ 
+ 	/*
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index d5223a0e5cc51..8628c4aa2e854 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -148,6 +148,7 @@ struct dm_writecache {
+ 	size_t metadata_sectors;
+ 	size_t n_blocks;
+ 	uint64_t seq_count;
++	sector_t data_device_sectors;
+ 	void *block_start;
+ 	struct wc_entry *entries;
+ 	unsigned block_size;
+@@ -159,14 +160,22 @@ struct dm_writecache {
+ 	bool overwrote_committed:1;
+ 	bool memory_vmapped:1;
+ 
++	bool start_sector_set:1;
+ 	bool high_wm_percent_set:1;
+ 	bool low_wm_percent_set:1;
+ 	bool max_writeback_jobs_set:1;
+ 	bool autocommit_blocks_set:1;
+ 	bool autocommit_time_set:1;
++	bool max_age_set:1;
+ 	bool writeback_fua_set:1;
+ 	bool flush_on_suspend:1;
+ 	bool cleaner:1;
++	bool cleaner_set:1;
++
++	unsigned high_wm_percent_value;
++	unsigned low_wm_percent_value;
++	unsigned autocommit_time_value;
++	unsigned max_age_value;
+ 
+ 	unsigned writeback_all;
+ 	struct workqueue_struct *writeback_wq;
+@@ -523,7 +532,7 @@ static void ssd_commit_superblock(struct dm_writecache *wc)
+ 
+ 	region.bdev = wc->ssd_dev->bdev;
+ 	region.sector = 0;
+-	region.count = PAGE_SIZE;
++	region.count = PAGE_SIZE >> SECTOR_SHIFT;
+ 
+ 	if (unlikely(region.sector + region.count > wc->metadata_sectors))
+ 		region.count = wc->metadata_sectors - region.sector;
+@@ -969,6 +978,8 @@ static void writecache_resume(struct dm_target *ti)
+ 
+ 	wc_lock(wc);
+ 
++	wc->data_device_sectors = i_size_read(wc->dev->bdev->bd_inode) >> SECTOR_SHIFT;
++
+ 	if (WC_MODE_PMEM(wc)) {
+ 		persistent_memory_invalidate_cache(wc->memory_map, wc->memory_map_size);
+ 	} else {
+@@ -1638,6 +1649,10 @@ static bool wc_add_block(struct writeback_struct *wb, struct wc_entry *e, gfp_t
+ 	void *address = memory_data(wc, e);
+ 
+ 	persistent_memory_flush_cache(address, block_size);
++
++	if (unlikely(bio_end_sector(&wb->bio) >= wc->data_device_sectors))
++		return true;
++
+ 	return bio_add_page(&wb->bio, persistent_memory_page(address),
+ 			    block_size, persistent_memory_page_offset(address)) != 0;
+ }
+@@ -1709,6 +1724,9 @@ static void __writecache_writeback_pmem(struct dm_writecache *wc, struct writeba
+ 		if (writecache_has_error(wc)) {
+ 			bio->bi_status = BLK_STS_IOERR;
+ 			bio_endio(bio);
++		} else if (unlikely(!bio_sectors(bio))) {
++			bio->bi_status = BLK_STS_OK;
++			bio_endio(bio);
+ 		} else {
+ 			submit_bio(bio);
+ 		}
+@@ -1752,6 +1770,14 @@ static void __writecache_writeback_ssd(struct dm_writecache *wc, struct writebac
+ 			e = f;
+ 		}
+ 
++		if (unlikely(to.sector + to.count > wc->data_device_sectors)) {
++			if (to.sector >= wc->data_device_sectors) {
++				writecache_copy_endio(0, 0, c);
++				continue;
++			}
++			from.count = to.count = wc->data_device_sectors - to.sector;
++		}
++
+ 		dm_kcopyd_copy(wc->dm_kcopyd, &from, 1, &to, 0, writecache_copy_endio, c);
+ 
+ 		__writeback_throttle(wc, wbl);
+@@ -2205,6 +2231,7 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 			if (sscanf(string, "%llu%c", &start_sector, &dummy) != 1)
+ 				goto invalid_optional;
+ 			wc->start_sector = start_sector;
++			wc->start_sector_set = true;
+ 			if (wc->start_sector != start_sector ||
+ 			    wc->start_sector >= wc->memory_map_size >> SECTOR_SHIFT)
+ 				goto invalid_optional;
+@@ -2214,6 +2241,7 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 				goto invalid_optional;
+ 			if (high_wm_percent < 0 || high_wm_percent > 100)
+ 				goto invalid_optional;
++			wc->high_wm_percent_value = high_wm_percent;
+ 			wc->high_wm_percent_set = true;
+ 		} else if (!strcasecmp(string, "low_watermark") && opt_params >= 1) {
+ 			string = dm_shift_arg(&as), opt_params--;
+@@ -2221,6 +2249,7 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 				goto invalid_optional;
+ 			if (low_wm_percent < 0 || low_wm_percent > 100)
+ 				goto invalid_optional;
++			wc->low_wm_percent_value = low_wm_percent;
+ 			wc->low_wm_percent_set = true;
+ 		} else if (!strcasecmp(string, "writeback_jobs") && opt_params >= 1) {
+ 			string = dm_shift_arg(&as), opt_params--;
+@@ -2240,6 +2269,7 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 			if (autocommit_msecs > 3600000)
+ 				goto invalid_optional;
+ 			wc->autocommit_jiffies = msecs_to_jiffies(autocommit_msecs);
++			wc->autocommit_time_value = autocommit_msecs;
+ 			wc->autocommit_time_set = true;
+ 		} else if (!strcasecmp(string, "max_age") && opt_params >= 1) {
+ 			unsigned max_age_msecs;
+@@ -2249,7 +2279,10 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 			if (max_age_msecs > 86400000)
+ 				goto invalid_optional;
+ 			wc->max_age = msecs_to_jiffies(max_age_msecs);
++			wc->max_age_set = true;
++			wc->max_age_value = max_age_msecs;
+ 		} else if (!strcasecmp(string, "cleaner")) {
++			wc->cleaner_set = true;
+ 			wc->cleaner = true;
+ 		} else if (!strcasecmp(string, "fua")) {
+ 			if (WC_MODE_PMEM(wc)) {
+@@ -2455,7 +2488,6 @@ static void writecache_status(struct dm_target *ti, status_type_t type,
+ 	struct dm_writecache *wc = ti->private;
+ 	unsigned extra_args;
+ 	unsigned sz = 0;
+-	uint64_t x;
+ 
+ 	switch (type) {
+ 	case STATUSTYPE_INFO:
+@@ -2467,11 +2499,11 @@ static void writecache_status(struct dm_target *ti, status_type_t type,
+ 		DMEMIT("%c %s %s %u ", WC_MODE_PMEM(wc) ? 'p' : 's',
+ 				wc->dev->name, wc->ssd_dev->name, wc->block_size);
+ 		extra_args = 0;
+-		if (wc->start_sector)
++		if (wc->start_sector_set)
+ 			extra_args += 2;
+-		if (wc->high_wm_percent_set && !wc->cleaner)
++		if (wc->high_wm_percent_set)
+ 			extra_args += 2;
+-		if (wc->low_wm_percent_set && !wc->cleaner)
++		if (wc->low_wm_percent_set)
+ 			extra_args += 2;
+ 		if (wc->max_writeback_jobs_set)
+ 			extra_args += 2;
+@@ -2479,37 +2511,29 @@ static void writecache_status(struct dm_target *ti, status_type_t type,
+ 			extra_args += 2;
+ 		if (wc->autocommit_time_set)
+ 			extra_args += 2;
+-		if (wc->max_age != MAX_AGE_UNSPECIFIED)
++		if (wc->max_age_set)
+ 			extra_args += 2;
+-		if (wc->cleaner)
++		if (wc->cleaner_set)
+ 			extra_args++;
+ 		if (wc->writeback_fua_set)
+ 			extra_args++;
+ 
+ 		DMEMIT("%u", extra_args);
+-		if (wc->start_sector)
++		if (wc->start_sector_set)
+ 			DMEMIT(" start_sector %llu", (unsigned long long)wc->start_sector);
+-		if (wc->high_wm_percent_set && !wc->cleaner) {
+-			x = (uint64_t)wc->freelist_high_watermark * 100;
+-			x += wc->n_blocks / 2;
+-			do_div(x, (size_t)wc->n_blocks);
+-			DMEMIT(" high_watermark %u", 100 - (unsigned)x);
+-		}
+-		if (wc->low_wm_percent_set && !wc->cleaner) {
+-			x = (uint64_t)wc->freelist_low_watermark * 100;
+-			x += wc->n_blocks / 2;
+-			do_div(x, (size_t)wc->n_blocks);
+-			DMEMIT(" low_watermark %u", 100 - (unsigned)x);
+-		}
++		if (wc->high_wm_percent_set)
++			DMEMIT(" high_watermark %u", wc->high_wm_percent_value);
++		if (wc->low_wm_percent_set)
++			DMEMIT(" low_watermark %u", wc->low_wm_percent_value);
+ 		if (wc->max_writeback_jobs_set)
+ 			DMEMIT(" writeback_jobs %u", wc->max_writeback_jobs);
+ 		if (wc->autocommit_blocks_set)
+ 			DMEMIT(" autocommit_blocks %u", wc->autocommit_blocks);
+ 		if (wc->autocommit_time_set)
+-			DMEMIT(" autocommit_time %u", jiffies_to_msecs(wc->autocommit_jiffies));
+-		if (wc->max_age != MAX_AGE_UNSPECIFIED)
+-			DMEMIT(" max_age %u", jiffies_to_msecs(wc->max_age));
+-		if (wc->cleaner)
++			DMEMIT(" autocommit_time %u", wc->autocommit_time_value);
++		if (wc->max_age_set)
++			DMEMIT(" max_age %u", wc->max_age_value);
++		if (wc->cleaner_set)
+ 			DMEMIT(" cleaner");
+ 		if (wc->writeback_fua_set)
+ 			DMEMIT(" %sfua", wc->writeback_fua ? "" : "no");
+@@ -2519,7 +2543,7 @@ static void writecache_status(struct dm_target *ti, status_type_t type,
+ 
+ static struct target_type writecache_target = {
+ 	.name			= "writecache",
+-	.version		= {1, 3, 0},
++	.version		= {1, 4, 0},
+ 	.module			= THIS_MODULE,
+ 	.ctr			= writecache_ctr,
+ 	.dtr			= writecache_dtr,
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 1e99a4c1eca43..638c04f9e832c 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -148,6 +148,16 @@ EXPORT_SYMBOL_GPL(dm_bio_get_target_bio_nr);
+ #define DM_NUMA_NODE NUMA_NO_NODE
+ static int dm_numa_node = DM_NUMA_NODE;
+ 
++#define DEFAULT_SWAP_BIOS	(8 * 1048576 / PAGE_SIZE)
++static int swap_bios = DEFAULT_SWAP_BIOS;
++static int get_swap_bios(void)
++{
++	int latch = READ_ONCE(swap_bios);
++	if (unlikely(latch <= 0))
++		latch = DEFAULT_SWAP_BIOS;
++	return latch;
++}
++
+ /*
+  * For mempools pre-allocation at the table loading time.
+  */
+@@ -966,6 +976,11 @@ void disable_write_zeroes(struct mapped_device *md)
+ 	limits->max_write_zeroes_sectors = 0;
+ }
+ 
++static bool swap_bios_limit(struct dm_target *ti, struct bio *bio)
++{
++	return unlikely((bio->bi_opf & REQ_SWAP) != 0) && unlikely(ti->limit_swap_bios);
++}
++
+ static void clone_endio(struct bio *bio)
+ {
+ 	blk_status_t error = bio->bi_status;
+@@ -1016,6 +1031,11 @@ static void clone_endio(struct bio *bio)
+ 		}
+ 	}
+ 
++	if (unlikely(swap_bios_limit(tio->ti, bio))) {
++		struct mapped_device *md = io->md;
++		up(&md->swap_bios_semaphore);
++	}
++
+ 	free_tio(tio);
+ 	dec_pending(io, error);
+ }
+@@ -1125,7 +1145,7 @@ static bool dm_dax_supported(struct dax_device *dax_dev, struct block_device *bd
+ 	if (!map)
+ 		goto out;
+ 
+-	ret = dm_table_supports_dax(map, device_supports_dax, &blocksize);
++	ret = dm_table_supports_dax(map, device_not_dax_capable, &blocksize);
+ 
+ out:
+ 	dm_put_live_table(md, srcu_idx);
+@@ -1249,6 +1269,22 @@ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors)
+ }
+ EXPORT_SYMBOL_GPL(dm_accept_partial_bio);
+ 
++static noinline void __set_swap_bios_limit(struct mapped_device *md, int latch)
++{
++	mutex_lock(&md->swap_bios_lock);
++	while (latch < md->swap_bios) {
++		cond_resched();
++		down(&md->swap_bios_semaphore);
++		md->swap_bios--;
++	}
++	while (latch > md->swap_bios) {
++		cond_resched();
++		up(&md->swap_bios_semaphore);
++		md->swap_bios++;
++	}
++	mutex_unlock(&md->swap_bios_lock);
++}
++
+ static blk_qc_t __map_bio(struct dm_target_io *tio)
+ {
+ 	int r;
+@@ -1268,6 +1304,14 @@ static blk_qc_t __map_bio(struct dm_target_io *tio)
+ 	atomic_inc(&io->io_count);
+ 	sector = clone->bi_iter.bi_sector;
+ 
++	if (unlikely(swap_bios_limit(ti, clone))) {
++		struct mapped_device *md = io->md;
++		int latch = get_swap_bios();
++		if (unlikely(latch != md->swap_bios))
++			__set_swap_bios_limit(md, latch);
++		down(&md->swap_bios_semaphore);
++	}
++
+ 	r = ti->type->map(ti, clone);
+ 	switch (r) {
+ 	case DM_MAPIO_SUBMITTED:
+@@ -1279,10 +1323,18 @@ static blk_qc_t __map_bio(struct dm_target_io *tio)
+ 		ret = submit_bio_noacct(clone);
+ 		break;
+ 	case DM_MAPIO_KILL:
++		if (unlikely(swap_bios_limit(ti, clone))) {
++			struct mapped_device *md = io->md;
++			up(&md->swap_bios_semaphore);
++		}
+ 		free_tio(tio);
+ 		dec_pending(io, BLK_STS_IOERR);
+ 		break;
+ 	case DM_MAPIO_REQUEUE:
++		if (unlikely(swap_bios_limit(ti, clone))) {
++			struct mapped_device *md = io->md;
++			up(&md->swap_bios_semaphore);
++		}
+ 		free_tio(tio);
+ 		dec_pending(io, BLK_STS_DM_REQUEUE);
+ 		break;
+@@ -1756,6 +1808,7 @@ static void cleanup_mapped_device(struct mapped_device *md)
+ 	mutex_destroy(&md->suspend_lock);
+ 	mutex_destroy(&md->type_lock);
+ 	mutex_destroy(&md->table_devices_lock);
++	mutex_destroy(&md->swap_bios_lock);
+ 
+ 	dm_mq_cleanup_mapped_device(md);
+ }
+@@ -1823,6 +1876,10 @@ static struct mapped_device *alloc_dev(int minor)
+ 	init_waitqueue_head(&md->eventq);
+ 	init_completion(&md->kobj_holder.completion);
+ 
++	md->swap_bios = get_swap_bios();
++	sema_init(&md->swap_bios_semaphore, md->swap_bios);
++	mutex_init(&md->swap_bios_lock);
++
+ 	md->disk->major = _major;
+ 	md->disk->first_minor = minor;
+ 	md->disk->fops = &dm_blk_dops;
+@@ -3119,6 +3176,9 @@ MODULE_PARM_DESC(reserved_bio_based_ios, "Reserved IOs in bio-based mempools");
+ module_param(dm_numa_node, int, S_IRUGO | S_IWUSR);
+ MODULE_PARM_DESC(dm_numa_node, "NUMA node for DM device memory allocations");
+ 
++module_param(swap_bios, int, S_IRUGO | S_IWUSR);
++MODULE_PARM_DESC(swap_bios, "Maximum allowed inflight swap IOs");
++
+ MODULE_DESCRIPTION(DM_NAME " driver");
+ MODULE_AUTHOR("Joe Thornber <dm-devel@redhat.com>");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/md/dm.h b/drivers/md/dm.h
+index fffe1e289c533..b441ad772c188 100644
+--- a/drivers/md/dm.h
++++ b/drivers/md/dm.h
+@@ -73,7 +73,7 @@ void dm_table_free_md_mempools(struct dm_table *t);
+ struct dm_md_mempools *dm_table_get_md_mempools(struct dm_table *t);
+ bool dm_table_supports_dax(struct dm_table *t, iterate_devices_callout_fn fn,
+ 			   int *blocksize);
+-int device_supports_dax(struct dm_target *ti, struct dm_dev *dev,
++int device_not_dax_capable(struct dm_target *ti, struct dm_dev *dev,
+ 			   sector_t start, sector_t len, void *data);
+ 
+ void dm_lock_md_type(struct mapped_device *md);
+diff --git a/drivers/media/i2c/max9286.c b/drivers/media/i2c/max9286.c
+index c82c1493e099d..b1e2476d3c9e6 100644
+--- a/drivers/media/i2c/max9286.c
++++ b/drivers/media/i2c/max9286.c
+@@ -580,7 +580,7 @@ static int max9286_v4l2_notifier_register(struct max9286_priv *priv)
+ 
+ 		asd = v4l2_async_notifier_add_fwnode_subdev(&priv->notifier,
+ 							    source->fwnode,
+-							    sizeof(*asd));
++							    sizeof(struct max9286_asd));
+ 		if (IS_ERR(asd)) {
+ 			dev_err(dev, "Failed to add subdev for source %u: %ld",
+ 				i, PTR_ERR(asd));
+diff --git a/drivers/media/i2c/ov5670.c b/drivers/media/i2c/ov5670.c
+index f26252e35e08d..04d3f14902017 100644
+--- a/drivers/media/i2c/ov5670.c
++++ b/drivers/media/i2c/ov5670.c
+@@ -2084,7 +2084,8 @@ static int ov5670_init_controls(struct ov5670 *ov5670)
+ 
+ 	/* By default, V4L2_CID_PIXEL_RATE is read only */
+ 	ov5670->pixel_rate = v4l2_ctrl_new_std(ctrl_hdlr, &ov5670_ctrl_ops,
+-					       V4L2_CID_PIXEL_RATE, 0,
++					       V4L2_CID_PIXEL_RATE,
++					       link_freq_configs[0].pixel_rate,
+ 					       link_freq_configs[0].pixel_rate,
+ 					       1,
+ 					       link_freq_configs[0].pixel_rate);
+diff --git a/drivers/media/pci/cx25821/cx25821-core.c b/drivers/media/pci/cx25821/cx25821-core.c
+index 55018d9e439fb..285047b32c44a 100644
+--- a/drivers/media/pci/cx25821/cx25821-core.c
++++ b/drivers/media/pci/cx25821/cx25821-core.c
+@@ -976,8 +976,10 @@ int cx25821_riscmem_alloc(struct pci_dev *pci,
+ 	__le32 *cpu;
+ 	dma_addr_t dma = 0;
+ 
+-	if (NULL != risc->cpu && risc->size < size)
++	if (risc->cpu && risc->size < size) {
+ 		pci_free_consistent(pci, risc->size, risc->cpu, risc->dma);
++		risc->cpu = NULL;
++	}
+ 	if (NULL == risc->cpu) {
+ 		cpu = pci_zalloc_consistent(pci, size, &dma);
+ 		if (NULL == cpu)
+diff --git a/drivers/media/pci/intel/ipu3/Kconfig b/drivers/media/pci/intel/ipu3/Kconfig
+index 82d7f17e6a024..7a805201034b7 100644
+--- a/drivers/media/pci/intel/ipu3/Kconfig
++++ b/drivers/media/pci/intel/ipu3/Kconfig
+@@ -2,7 +2,8 @@
+ config VIDEO_IPU3_CIO2
+ 	tristate "Intel ipu3-cio2 driver"
+ 	depends on VIDEO_V4L2 && PCI
+-	depends on (X86 && ACPI) || COMPILE_TEST
++	depends on ACPI || COMPILE_TEST
++	depends on X86
+ 	select MEDIA_CONTROLLER
+ 	select VIDEO_V4L2_SUBDEV_API
+ 	select V4L2_FWNODE
+diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2.c b/drivers/media/pci/intel/ipu3/ipu3-cio2.c
+index 1fcd131482e0e..dcbfe8c9abc72 100644
+--- a/drivers/media/pci/intel/ipu3/ipu3-cio2.c
++++ b/drivers/media/pci/intel/ipu3/ipu3-cio2.c
+@@ -1277,7 +1277,7 @@ static int cio2_subdev_set_fmt(struct v4l2_subdev *sd,
+ 	fmt->format.code = formats[0].mbus_code;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(formats); i++) {
+-		if (formats[i].mbus_code == fmt->format.code) {
++		if (formats[i].mbus_code == mbus_code) {
+ 			fmt->format.code = mbus_code;
+ 			break;
+ 		}
+diff --git a/drivers/media/pci/saa7134/saa7134-empress.c b/drivers/media/pci/saa7134/saa7134-empress.c
+index 39e3c7f8c5b46..76a37fbd84587 100644
+--- a/drivers/media/pci/saa7134/saa7134-empress.c
++++ b/drivers/media/pci/saa7134/saa7134-empress.c
+@@ -282,8 +282,11 @@ static int empress_init(struct saa7134_dev *dev)
+ 	q->lock = &dev->lock;
+ 	q->dev = &dev->pci->dev;
+ 	err = vb2_queue_init(q);
+-	if (err)
++	if (err) {
++		video_device_release(dev->empress_dev);
++		dev->empress_dev = NULL;
+ 		return err;
++	}
+ 	dev->empress_dev->queue = q;
+ 	dev->empress_dev->device_caps = V4L2_CAP_READWRITE | V4L2_CAP_STREAMING |
+ 					V4L2_CAP_VIDEO_CAPTURE;
+diff --git a/drivers/media/pci/smipcie/smipcie-ir.c b/drivers/media/pci/smipcie/smipcie-ir.c
+index e6b74e161a055..c0604d9c70119 100644
+--- a/drivers/media/pci/smipcie/smipcie-ir.c
++++ b/drivers/media/pci/smipcie/smipcie-ir.c
+@@ -60,38 +60,44 @@ static void smi_ir_decode(struct smi_rc *ir)
+ {
+ 	struct smi_dev *dev = ir->dev;
+ 	struct rc_dev *rc_dev = ir->rc_dev;
+-	u32 dwIRControl, dwIRData;
+-	u8 index, ucIRCount, readLoop;
++	u32 control, data;
++	u8 index, ir_count, read_loop;
+ 
+-	dwIRControl = smi_read(IR_Init_Reg);
++	control = smi_read(IR_Init_Reg);
+ 
+-	if (dwIRControl & rbIRVld) {
+-		ucIRCount = (u8) smi_read(IR_Data_Cnt);
++	dev_dbg(&rc_dev->dev, "ircontrol: 0x%08x\n", control);
+ 
+-		readLoop = ucIRCount/4;
+-		if (ucIRCount % 4)
+-			readLoop += 1;
+-		for (index = 0; index < readLoop; index++) {
+-			dwIRData = smi_read(IR_DATA_BUFFER_BASE + (index * 4));
++	if (control & rbIRVld) {
++		ir_count = (u8)smi_read(IR_Data_Cnt);
+ 
+-			ir->irData[index*4 + 0] = (u8)(dwIRData);
+-			ir->irData[index*4 + 1] = (u8)(dwIRData >> 8);
+-			ir->irData[index*4 + 2] = (u8)(dwIRData >> 16);
+-			ir->irData[index*4 + 3] = (u8)(dwIRData >> 24);
++		dev_dbg(&rc_dev->dev, "ircount %d\n", ir_count);
++
++		read_loop = ir_count / 4;
++		if (ir_count % 4)
++			read_loop += 1;
++		for (index = 0; index < read_loop; index++) {
++			data = smi_read(IR_DATA_BUFFER_BASE + (index * 4));
++			dev_dbg(&rc_dev->dev, "IRData 0x%08x\n", data);
++
++			ir->irData[index * 4 + 0] = (u8)(data);
++			ir->irData[index * 4 + 1] = (u8)(data >> 8);
++			ir->irData[index * 4 + 2] = (u8)(data >> 16);
++			ir->irData[index * 4 + 3] = (u8)(data >> 24);
+ 		}
+-		smi_raw_process(rc_dev, ir->irData, ucIRCount);
+-		smi_set(IR_Init_Reg, rbIRVld);
++		smi_raw_process(rc_dev, ir->irData, ir_count);
+ 	}
+ 
+-	if (dwIRControl & rbIRhighidle) {
++	if (control & rbIRhighidle) {
+ 		struct ir_raw_event rawir = {};
+ 
++		dev_dbg(&rc_dev->dev, "high idle\n");
++
+ 		rawir.pulse = 0;
+ 		rawir.duration = SMI_SAMPLE_PERIOD * SMI_SAMPLE_IDLEMIN;
+ 		ir_raw_event_store_with_filter(rc_dev, &rawir);
+-		smi_set(IR_Init_Reg, rbIRhighidle);
+ 	}
+ 
++	smi_set(IR_Init_Reg, rbIRVld);
+ 	ir_raw_event_handle(rc_dev);
+ }
+ 
+@@ -150,7 +156,7 @@ int smi_ir_init(struct smi_dev *dev)
+ 	rc_dev->dev.parent = &dev->pci_dev->dev;
+ 
+ 	rc_dev->map_name = dev->info->rc_map;
+-	rc_dev->timeout = MS_TO_US(100);
++	rc_dev->timeout = SMI_SAMPLE_PERIOD * SMI_SAMPLE_IDLEMIN;
+ 	rc_dev->rx_resolution = SMI_SAMPLE_PERIOD;
+ 
+ 	ir->rc_dev = rc_dev;
+@@ -173,7 +179,7 @@ void smi_ir_exit(struct smi_dev *dev)
+ 	struct smi_rc *ir = &dev->ir;
+ 	struct rc_dev *rc_dev = ir->rc_dev;
+ 
+-	smi_ir_stop(ir);
+ 	rc_unregister_device(rc_dev);
++	smi_ir_stop(ir);
+ 	ir->rc_dev = NULL;
+ }
+diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
+index c46a79eace98b..f2c4dadd6a0eb 100644
+--- a/drivers/media/platform/aspeed-video.c
++++ b/drivers/media/platform/aspeed-video.c
+@@ -1551,12 +1551,12 @@ static int aspeed_video_setup_video(struct aspeed_video *video)
+ 			       V4L2_JPEG_CHROMA_SUBSAMPLING_420, mask,
+ 			       V4L2_JPEG_CHROMA_SUBSAMPLING_444);
+ 
+-	if (video->ctrl_handler.error) {
++	rc = video->ctrl_handler.error;
++	if (rc) {
+ 		v4l2_ctrl_handler_free(&video->ctrl_handler);
+ 		v4l2_device_unregister(v4l2_dev);
+ 
+-		dev_err(video->dev, "Failed to init controls: %d\n",
+-			video->ctrl_handler.error);
++		dev_err(video->dev, "Failed to init controls: %d\n", rc);
+ 		return rc;
+ 	}
+ 
+diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c
+index c012fd2e1d291..34266fba824f2 100644
+--- a/drivers/media/platform/marvell-ccic/mcam-core.c
++++ b/drivers/media/platform/marvell-ccic/mcam-core.c
+@@ -931,6 +931,7 @@ static int mclk_enable(struct clk_hw *hw)
+ 		mclk_div = 2;
+ 	}
+ 
++	pm_runtime_get_sync(cam->dev);
+ 	clk_enable(cam->clk[0]);
+ 	mcam_reg_write(cam, REG_CLKCTRL, (mclk_src << 29) | mclk_div);
+ 	mcam_ctlr_power_up(cam);
+@@ -944,6 +945,7 @@ static void mclk_disable(struct clk_hw *hw)
+ 
+ 	mcam_ctlr_power_down(cam);
+ 	clk_disable(cam->clk[0]);
++	pm_runtime_put(cam->dev);
+ }
+ 
+ static unsigned long mclk_recalc_rate(struct clk_hw *hw,
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
+index 3be8a04c4c679..219c2c5b78efc 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
+@@ -310,7 +310,7 @@ static int mtk_vcodec_probe(struct platform_device *pdev)
+ 		ret = PTR_ERR((__force void *)dev->reg_base[VENC_SYS]);
+ 		goto err_res;
+ 	}
+-	mtk_v4l2_debug(2, "reg[%d] base=0x%p", i, dev->reg_base[VENC_SYS]);
++	mtk_v4l2_debug(2, "reg[%d] base=0x%p", VENC_SYS, dev->reg_base[VENC_SYS]);
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ 	if (res == NULL) {
+@@ -339,7 +339,7 @@ static int mtk_vcodec_probe(struct platform_device *pdev)
+ 			ret = PTR_ERR((__force void *)dev->reg_base[VENC_LT_SYS]);
+ 			goto err_res;
+ 		}
+-		mtk_v4l2_debug(2, "reg[%d] base=0x%p", i, dev->reg_base[VENC_LT_SYS]);
++		mtk_v4l2_debug(2, "reg[%d] base=0x%p", VENC_LT_SYS, dev->reg_base[VENC_LT_SYS]);
+ 
+ 		dev->enc_lt_irq = platform_get_irq(pdev, 1);
+ 		irq_set_status_flags(dev->enc_lt_irq, IRQ_NOAUTOEN);
+diff --git a/drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c b/drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c
+index 5ea153a685225..d9880210b2ab6 100644
+--- a/drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c
++++ b/drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c
+@@ -890,7 +890,8 @@ static int vdec_vp9_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
+ 			memset(inst->seg_id_buf.va, 0, inst->seg_id_buf.size);
+ 
+ 			if (vsi->show_frame & BIT(2)) {
+-				if (vpu_dec_start(&inst->vpu, NULL, 0)) {
++				ret = vpu_dec_start(&inst->vpu, NULL, 0);
++				if (ret) {
+ 					mtk_vcodec_err(inst, "vpu trig decoder failed");
+ 					goto DECODE_ERROR;
+ 				}
+diff --git a/drivers/media/platform/pxa_camera.c b/drivers/media/platform/pxa_camera.c
+index e47520fcb93c0..4ee7d5327df05 100644
+--- a/drivers/media/platform/pxa_camera.c
++++ b/drivers/media/platform/pxa_camera.c
+@@ -1386,6 +1386,9 @@ static int pxac_vb2_prepare(struct vb2_buffer *vb)
+ 	struct pxa_camera_dev *pcdev = vb2_get_drv_priv(vb->vb2_queue);
+ 	struct pxa_buffer *buf = vb2_to_pxa_buffer(vb);
+ 	int ret = 0;
++#ifdef DEBUG
++	int i;
++#endif
+ 
+ 	switch (pcdev->channels) {
+ 	case 1:
+diff --git a/drivers/media/platform/qcom/camss/camss-video.c b/drivers/media/platform/qcom/camss/camss-video.c
+index 114c3ae4a4abb..15965e63cb619 100644
+--- a/drivers/media/platform/qcom/camss/camss-video.c
++++ b/drivers/media/platform/qcom/camss/camss-video.c
+@@ -979,6 +979,7 @@ int msm_video_register(struct camss_video *video, struct v4l2_device *v4l2_dev,
+ 			video->nformats = ARRAY_SIZE(formats_rdi_8x96);
+ 		}
+ 	} else {
++		ret = -EINVAL;
+ 		goto error_video_register;
+ 	}
+ 
+diff --git a/drivers/media/platform/ti-vpe/cal.c b/drivers/media/platform/ti-vpe/cal.c
+index 59a0266b1f399..2eef245c31a17 100644
+--- a/drivers/media/platform/ti-vpe/cal.c
++++ b/drivers/media/platform/ti-vpe/cal.c
+@@ -406,7 +406,7 @@ static irqreturn_t cal_irq(int irq_cal, void *data)
+  */
+ 
+ struct cal_v4l2_async_subdev {
+-	struct v4l2_async_subdev asd;
++	struct v4l2_async_subdev asd; /* Must be first */
+ 	struct cal_camerarx *phy;
+ };
+ 
+@@ -472,7 +472,7 @@ static int cal_async_notifier_register(struct cal_dev *cal)
+ 		fwnode = of_fwnode_handle(phy->sensor_node);
+ 		asd = v4l2_async_notifier_add_fwnode_subdev(&cal->notifier,
+ 							    fwnode,
+-							    sizeof(*asd));
++							    sizeof(*casd));
+ 		if (IS_ERR(asd)) {
+ 			phy_err(phy, "Failed to add subdev to notifier\n");
+ 			ret = PTR_ERR(asd);
+diff --git a/drivers/media/platform/vsp1/vsp1_drv.c b/drivers/media/platform/vsp1/vsp1_drv.c
+index dc62533cf32ce..aa66e4f5f3f34 100644
+--- a/drivers/media/platform/vsp1/vsp1_drv.c
++++ b/drivers/media/platform/vsp1/vsp1_drv.c
+@@ -882,8 +882,10 @@ static int vsp1_probe(struct platform_device *pdev)
+ 	}
+ 
+ done:
+-	if (ret)
++	if (ret) {
+ 		pm_runtime_disable(&pdev->dev);
++		rcar_fcp_put(vsp1->fcp);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/rc/ir_toy.c b/drivers/media/rc/ir_toy.c
+index e0242c9b6aeb1..3e729a17b35ff 100644
+--- a/drivers/media/rc/ir_toy.c
++++ b/drivers/media/rc/ir_toy.c
+@@ -491,6 +491,7 @@ static void irtoy_disconnect(struct usb_interface *intf)
+ 
+ static const struct usb_device_id irtoy_table[] = {
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x04d8, 0xfd08, USB_CLASS_CDC_DATA) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x04d8, 0xf58b, USB_CLASS_CDC_DATA) },
+ 	{ }
+ };
+ 
+diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
+index f1dbd059ed087..c8d63673e131d 100644
+--- a/drivers/media/rc/mceusb.c
++++ b/drivers/media/rc/mceusb.c
+@@ -1169,7 +1169,7 @@ static void mceusb_handle_command(struct mceusb_dev *ir, u8 *buf_in)
+ 		switch (subcmd) {
+ 		/* the one and only 5-byte return value command */
+ 		case MCE_RSP_GETPORTSTATUS:
+-			if (buf_in[5] == 0)
++			if (buf_in[5] == 0 && *hi < 8)
+ 				ir->txports_cabled |= 1 << *hi;
+ 			break;
+ 
+diff --git a/drivers/media/test-drivers/vidtv/vidtv_psi.c b/drivers/media/test-drivers/vidtv/vidtv_psi.c
+index 4511a2a98405d..1724bb485e670 100644
+--- a/drivers/media/test-drivers/vidtv/vidtv_psi.c
++++ b/drivers/media/test-drivers/vidtv/vidtv_psi.c
+@@ -1164,6 +1164,8 @@ u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args *args)
+ 	struct vidtv_psi_desc *table_descriptor   = args->pmt->descriptor;
+ 	struct vidtv_psi_table_pmt_stream *stream = args->pmt->stream;
+ 	struct vidtv_psi_desc *stream_descriptor;
++	u32 crc = INITIAL_CRC;
++	u32 nbytes = 0;
+ 	struct header_write_args h_args = {
+ 		.dest_buf           = args->buf,
+ 		.dest_offset        = args->offset,
+@@ -1181,6 +1183,7 @@ u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args *args)
+ 		.new_psi_section    = false,
+ 		.is_crc             = false,
+ 		.dest_buf_sz        = args->buf_sz,
++		.crc                = &crc,
+ 	};
+ 	struct desc_write_args d_args   = {
+ 		.dest_buf           = args->buf,
+@@ -1193,8 +1196,6 @@ u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args *args)
+ 		.pid                = args->pid,
+ 		.dest_buf_sz        = args->buf_sz,
+ 	};
+-	u32 crc = INITIAL_CRC;
+-	u32 nbytes = 0;
+ 
+ 	vidtv_psi_pmt_table_update_sec_len(args->pmt);
+ 
+diff --git a/drivers/media/tuners/qm1d1c0042.c b/drivers/media/tuners/qm1d1c0042.c
+index 0e26d22f0b268..53aa2558f71e1 100644
+--- a/drivers/media/tuners/qm1d1c0042.c
++++ b/drivers/media/tuners/qm1d1c0042.c
+@@ -343,8 +343,10 @@ static int qm1d1c0042_init(struct dvb_frontend *fe)
+ 		if (val == reg_initval[reg_index][0x00])
+ 			break;
+ 	}
+-	if (reg_index >= QM1D1C0042_NUM_REG_ROWS)
++	if (reg_index >= QM1D1C0042_NUM_REG_ROWS) {
++		ret = -EINVAL;
+ 		goto failed;
++	}
+ 	memcpy(state->regs, reg_initval[reg_index], QM1D1C0042_NUM_REGS);
+ 	usleep_range(2000, 3000);
+ 
+diff --git a/drivers/media/usb/dvb-usb-v2/lmedm04.c b/drivers/media/usb/dvb-usb-v2/lmedm04.c
+index 5a7a9522d46da..9ddda8d68ee0f 100644
+--- a/drivers/media/usb/dvb-usb-v2/lmedm04.c
++++ b/drivers/media/usb/dvb-usb-v2/lmedm04.c
+@@ -391,7 +391,7 @@ static int lme2510_int_read(struct dvb_usb_adapter *adap)
+ 	ep = usb_pipe_endpoint(d->udev, lme_int->lme_urb->pipe);
+ 
+ 	if (usb_endpoint_type(&ep->desc) == USB_ENDPOINT_XFER_BULK)
+-		lme_int->lme_urb->pipe = usb_rcvbulkpipe(d->udev, 0xa),
++		lme_int->lme_urb->pipe = usb_rcvbulkpipe(d->udev, 0xa);
+ 
+ 	usb_submit_urb(lme_int->lme_urb, GFP_ATOMIC);
+ 	info("INT Interrupt Service Started");
+diff --git a/drivers/media/usb/em28xx/em28xx-core.c b/drivers/media/usb/em28xx/em28xx-core.c
+index e6088b5d1b805..3daa64bb1e1d9 100644
+--- a/drivers/media/usb/em28xx/em28xx-core.c
++++ b/drivers/media/usb/em28xx/em28xx-core.c
+@@ -956,14 +956,10 @@ int em28xx_alloc_urbs(struct em28xx *dev, enum em28xx_mode mode, int xfer_bulk,
+ 
+ 		usb_bufs->buf[i] = kzalloc(sb_size, GFP_KERNEL);
+ 		if (!usb_bufs->buf[i]) {
+-			em28xx_uninit_usb_xfer(dev, mode);
+-
+ 			for (i--; i >= 0; i--)
+ 				kfree(usb_bufs->buf[i]);
+ 
+-			kfree(usb_bufs->buf);
+-			usb_bufs->buf = NULL;
+-
++			em28xx_uninit_usb_xfer(dev, mode);
+ 			return -ENOMEM;
+ 		}
+ 
+diff --git a/drivers/media/usb/tm6000/tm6000-dvb.c b/drivers/media/usb/tm6000/tm6000-dvb.c
+index 19c90fa9e443d..293a460f4616c 100644
+--- a/drivers/media/usb/tm6000/tm6000-dvb.c
++++ b/drivers/media/usb/tm6000/tm6000-dvb.c
+@@ -141,6 +141,10 @@ static int tm6000_start_stream(struct tm6000_core *dev)
+ 	if (ret < 0) {
+ 		printk(KERN_ERR "tm6000: error %i in %s during pipe reset\n",
+ 							ret, __func__);
++
++		kfree(dvb->bulk_urb->transfer_buffer);
++		usb_free_urb(dvb->bulk_urb);
++		dvb->bulk_urb = NULL;
+ 		return ret;
+ 	} else
+ 		printk(KERN_ERR "tm6000: pipe reset\n");
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index fa06bfa174ad3..c7172b8952a96 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -248,7 +248,9 @@ static int uvc_v4l2_try_format(struct uvc_streaming *stream,
+ 		goto done;
+ 
+ 	/* After the probe, update fmt with the values returned from
+-	 * negotiation with the device.
++	 * negotiation with the device. Some devices return invalid bFormatIndex
++	 * and bFrameIndex values, in which case we can only assume they have
++	 * accepted the requested format as-is.
+ 	 */
+ 	for (i = 0; i < stream->nformats; ++i) {
+ 		if (probe->bFormatIndex == stream->format[i].index) {
+@@ -257,11 +259,10 @@ static int uvc_v4l2_try_format(struct uvc_streaming *stream,
+ 		}
+ 	}
+ 
+-	if (i == stream->nformats) {
+-		uvc_trace(UVC_TRACE_FORMAT, "Unknown bFormatIndex %u\n",
++	if (i == stream->nformats)
++		uvc_trace(UVC_TRACE_FORMAT,
++			  "Unknown bFormatIndex %u, using default\n",
+ 			  probe->bFormatIndex);
+-		return -EINVAL;
+-	}
+ 
+ 	for (i = 0; i < format->nframes; ++i) {
+ 		if (probe->bFrameIndex == format->frame[i].bFrameIndex) {
+@@ -270,11 +271,10 @@ static int uvc_v4l2_try_format(struct uvc_streaming *stream,
+ 		}
+ 	}
+ 
+-	if (i == format->nframes) {
+-		uvc_trace(UVC_TRACE_FORMAT, "Unknown bFrameIndex %u\n",
++	if (i == format->nframes)
++		uvc_trace(UVC_TRACE_FORMAT,
++			  "Unknown bFrameIndex %u, using default\n",
+ 			  probe->bFrameIndex);
+-		return -EINVAL;
+-	}
+ 
+ 	fmt->fmt.pix.width = frame->wWidth;
+ 	fmt->fmt.pix.height = frame->wHeight;
+diff --git a/drivers/memory/mtk-smi.c b/drivers/memory/mtk-smi.c
+index 691e4c344cf84..75f8e0f60d81d 100644
+--- a/drivers/memory/mtk-smi.c
++++ b/drivers/memory/mtk-smi.c
+@@ -130,7 +130,7 @@ static void mtk_smi_clk_disable(const struct mtk_smi *smi)
+ 
+ int mtk_smi_larb_get(struct device *larbdev)
+ {
+-	int ret = pm_runtime_get_sync(larbdev);
++	int ret = pm_runtime_resume_and_get(larbdev);
+ 
+ 	return (ret < 0) ? ret : 0;
+ }
+@@ -366,7 +366,7 @@ static int __maybe_unused mtk_smi_larb_resume(struct device *dev)
+ 	int ret;
+ 
+ 	/* Power on smi-common. */
+-	ret = pm_runtime_get_sync(larb->smi_common_dev);
++	ret = pm_runtime_resume_and_get(larb->smi_common_dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Failed to pm get for smi-common(%d).\n", ret);
+ 		return ret;
+diff --git a/drivers/memory/ti-aemif.c b/drivers/memory/ti-aemif.c
+index 159a16f5e7d67..51d20c2ccb755 100644
+--- a/drivers/memory/ti-aemif.c
++++ b/drivers/memory/ti-aemif.c
+@@ -378,8 +378,10 @@ static int aemif_probe(struct platform_device *pdev)
+ 		 */
+ 		for_each_available_child_of_node(np, child_np) {
+ 			ret = of_aemif_parse_abus_config(pdev, child_np);
+-			if (ret < 0)
++			if (ret < 0) {
++				of_node_put(child_np);
+ 				goto error;
++			}
+ 		}
+ 	} else if (pdata && pdata->num_abus_data > 0) {
+ 		for (i = 0; i < pdata->num_abus_data; i++, aemif->num_cs++) {
+@@ -405,8 +407,10 @@ static int aemif_probe(struct platform_device *pdev)
+ 		for_each_available_child_of_node(np, child_np) {
+ 			ret = of_platform_populate(child_np, NULL,
+ 						   dev_lookup, dev);
+-			if (ret < 0)
++			if (ret < 0) {
++				of_node_put(child_np);
+ 				goto error;
++			}
+ 		}
+ 	} else if (pdata) {
+ 		for (i = 0; i < pdata->num_sub_devices; i++) {
+diff --git a/drivers/mfd/altera-sysmgr.c b/drivers/mfd/altera-sysmgr.c
+index 41076d121dd54..591b300d90953 100644
+--- a/drivers/mfd/altera-sysmgr.c
++++ b/drivers/mfd/altera-sysmgr.c
+@@ -145,7 +145,8 @@ static int sysmgr_probe(struct platform_device *pdev)
+ 		sysmgr_config.reg_write = s10_protected_reg_write;
+ 
+ 		/* Need physical address for SMCC call */
+-		regmap = devm_regmap_init(dev, NULL, (void *)res->start,
++		regmap = devm_regmap_init(dev, NULL,
++					  (void *)(uintptr_t)res->start,
+ 					  &sysmgr_config);
+ 	} else {
+ 		base = devm_ioremap(dev, res->start, resource_size(res));
+diff --git a/drivers/mfd/bd9571mwv.c b/drivers/mfd/bd9571mwv.c
+index fab3cdc27ed64..19d57a45134c6 100644
+--- a/drivers/mfd/bd9571mwv.c
++++ b/drivers/mfd/bd9571mwv.c
+@@ -185,9 +185,9 @@ static int bd9571mwv_probe(struct i2c_client *client,
+ 		return ret;
+ 	}
+ 
+-	ret = mfd_add_devices(bd->dev, PLATFORM_DEVID_AUTO, bd9571mwv_cells,
+-			      ARRAY_SIZE(bd9571mwv_cells), NULL, 0,
+-			      regmap_irq_get_domain(bd->irq_data));
++	ret = devm_mfd_add_devices(bd->dev, PLATFORM_DEVID_AUTO,
++				   bd9571mwv_cells, ARRAY_SIZE(bd9571mwv_cells),
++				   NULL, 0, regmap_irq_get_domain(bd->irq_data));
+ 	if (ret) {
+ 		regmap_del_irq_chip(bd->irq, bd->irq_data);
+ 		return ret;
+diff --git a/drivers/mfd/gateworks-gsc.c b/drivers/mfd/gateworks-gsc.c
+index 576da62fbb0ce..d87876747b913 100644
+--- a/drivers/mfd/gateworks-gsc.c
++++ b/drivers/mfd/gateworks-gsc.c
+@@ -234,7 +234,7 @@ static int gsc_probe(struct i2c_client *client)
+ 
+ 	ret = devm_regmap_add_irq_chip(dev, gsc->regmap, client->irq,
+ 				       IRQF_ONESHOT | IRQF_SHARED |
+-				       IRQF_TRIGGER_FALLING, 0,
++				       IRQF_TRIGGER_LOW, 0,
+ 				       &gsc_irq_chip, &irq_data);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/mfd/wm831x-auxadc.c b/drivers/mfd/wm831x-auxadc.c
+index 8a7cc0f86958b..65b98f3fbd929 100644
+--- a/drivers/mfd/wm831x-auxadc.c
++++ b/drivers/mfd/wm831x-auxadc.c
+@@ -93,11 +93,10 @@ static int wm831x_auxadc_read_irq(struct wm831x *wm831x,
+ 	wait_for_completion_timeout(&req->done, msecs_to_jiffies(500));
+ 
+ 	mutex_lock(&wm831x->auxadc_lock);
+-
+-	list_del(&req->list);
+ 	ret = req->val;
+ 
+ out:
++	list_del(&req->list);
+ 	mutex_unlock(&wm831x->auxadc_lock);
+ 
+ 	kfree(req);
+diff --git a/drivers/misc/cardreader/rts5227.c b/drivers/misc/cardreader/rts5227.c
+index 8859011672cb9..8200af22b529e 100644
+--- a/drivers/misc/cardreader/rts5227.c
++++ b/drivers/misc/cardreader/rts5227.c
+@@ -398,6 +398,11 @@ static int rts522a_extra_init_hw(struct rtsx_pcr *pcr)
+ {
+ 	rts5227_extra_init_hw(pcr);
+ 
++	/* Power down OCP for power consumption */
++	if (!pcr->card_exist)
++		rtsx_pci_write_register(pcr, FPDCTL, OC_POWER_DOWN,
++				OC_POWER_DOWN);
++
+ 	rtsx_pci_write_register(pcr, FUNC_FORCE_CTL, FUNC_FORCE_UPME_XMT_DBG,
+ 		FUNC_FORCE_UPME_XMT_DBG);
+ 	rtsx_pci_write_register(pcr, PCLK_CTL, 0x04, 0x04);
+diff --git a/drivers/misc/eeprom/eeprom_93xx46.c b/drivers/misc/eeprom/eeprom_93xx46.c
+index 7c45f82b43027..d92c4d2c521a3 100644
+--- a/drivers/misc/eeprom/eeprom_93xx46.c
++++ b/drivers/misc/eeprom/eeprom_93xx46.c
+@@ -512,3 +512,4 @@ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Driver for 93xx46 EEPROMs");
+ MODULE_AUTHOR("Anatolij Gustschin <agust@denx.de>");
+ MODULE_ALIAS("spi:93xx46");
++MODULE_ALIAS("spi:eeprom-93xx46");
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 994ab67bc2dce..815d01f785dff 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -520,12 +520,13 @@ fastrpc_map_dma_buf(struct dma_buf_attachment *attachment,
+ {
+ 	struct fastrpc_dma_buf_attachment *a = attachment->priv;
+ 	struct sg_table *table;
++	int ret;
+ 
+ 	table = &a->sgt;
+ 
+-	if (!dma_map_sgtable(attachment->dev, table, dir, 0))
+-		return ERR_PTR(-ENOMEM);
+-
++	ret = dma_map_sgtable(attachment->dev, table, dir, 0);
++	if (ret)
++		table = ERR_PTR(ret);
+ 	return table;
+ }
+ 
+diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
+index a97eb5d47705d..33579d9795c32 100644
+--- a/drivers/misc/mei/hbm.c
++++ b/drivers/misc/mei/hbm.c
+@@ -1373,7 +1373,7 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
+ 			return -EPROTO;
+ 		}
+ 
+-		dev->dev_state = MEI_DEV_POWER_DOWN;
++		mei_set_devstate(dev, MEI_DEV_POWER_DOWN);
+ 		dev_info(dev->dev, "hbm: stop response: resetting.\n");
+ 		/* force the reset */
+ 		return -EPROTO;
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 9cf8d8f60cfef..14be76d4c2e61 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -101,6 +101,11 @@
+ #define MEI_DEV_ID_MCC        0x4B70  /* Mule Creek Canyon (EHL) */
+ #define MEI_DEV_ID_MCC_4      0x4B75  /* Mule Creek Canyon 4 (EHL) */
+ 
++#define MEI_DEV_ID_EBG        0x1BE0  /* Emmitsburg WS */
++
++#define MEI_DEV_ID_ADP_S      0x7AE8  /* Alder Lake Point S */
++#define MEI_DEV_ID_ADP_LP     0x7A60  /* Alder Lake Point LP */
++
+ /*
+  * MEI HW Section
+  */
+diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c
+index 326955b04fda9..2161c1234ad72 100644
+--- a/drivers/misc/mei/interrupt.c
++++ b/drivers/misc/mei/interrupt.c
+@@ -295,12 +295,17 @@ static inline bool hdr_is_fixed(struct mei_msg_hdr *mei_hdr)
+ static inline int hdr_is_valid(u32 msg_hdr)
+ {
+ 	struct mei_msg_hdr *mei_hdr;
++	u32 expected_len = 0;
+ 
+ 	mei_hdr = (struct mei_msg_hdr *)&msg_hdr;
+ 	if (!msg_hdr || mei_hdr->reserved)
+ 		return -EBADMSG;
+ 
+-	if (mei_hdr->dma_ring && mei_hdr->length != MEI_SLOT_SIZE)
++	if (mei_hdr->dma_ring)
++		expected_len += MEI_SLOT_SIZE;
++	if (mei_hdr->extended)
++		expected_len += MEI_SLOT_SIZE;
++	if (mei_hdr->length < expected_len)
+ 		return -EBADMSG;
+ 
+ 	return 0;
+@@ -324,6 +329,8 @@ int mei_irq_read_handler(struct mei_device *dev,
+ 	struct mei_cl *cl;
+ 	int ret;
+ 	u32 ext_meta_hdr_u32;
++	u32 hdr_size_left;
++	u32 hdr_size_ext;
+ 	int i;
+ 	int ext_hdr_end;
+ 
+@@ -353,6 +360,7 @@ int mei_irq_read_handler(struct mei_device *dev,
+ 	}
+ 
+ 	ext_hdr_end = 1;
++	hdr_size_left = mei_hdr->length;
+ 
+ 	if (mei_hdr->extended) {
+ 		if (!dev->rd_msg_hdr[1]) {
+@@ -363,8 +371,21 @@ int mei_irq_read_handler(struct mei_device *dev,
+ 			dev_dbg(dev->dev, "extended header is %08x\n",
+ 				ext_meta_hdr_u32);
+ 		}
+-		meta_hdr = ((struct mei_ext_meta_hdr *)
+-				dev->rd_msg_hdr + 1);
++		meta_hdr = ((struct mei_ext_meta_hdr *)dev->rd_msg_hdr + 1);
++		if (check_add_overflow((u32)sizeof(*meta_hdr),
++				       mei_slots2data(meta_hdr->size),
++				       &hdr_size_ext)) {
++			dev_err(dev->dev, "extended message size too big %d\n",
++				meta_hdr->size);
++			return -EBADMSG;
++		}
++		if (hdr_size_left < hdr_size_ext) {
++			dev_err(dev->dev, "corrupted message header len %d\n",
++				mei_hdr->length);
++			return -EBADMSG;
++		}
++		hdr_size_left -= hdr_size_ext;
++
+ 		ext_hdr_end = meta_hdr->size + 2;
+ 		for (i = dev->rd_msg_hdr_count; i < ext_hdr_end; i++) {
+ 			dev->rd_msg_hdr[i] = mei_read_hdr(dev);
+@@ -376,6 +397,12 @@ int mei_irq_read_handler(struct mei_device *dev,
+ 	}
+ 
+ 	if (mei_hdr->dma_ring) {
++		if (hdr_size_left != sizeof(dev->rd_msg_hdr[ext_hdr_end])) {
++			dev_err(dev->dev, "corrupted message header len %d\n",
++				mei_hdr->length);
++			return -EBADMSG;
++		}
++
+ 		dev->rd_msg_hdr[ext_hdr_end] = mei_read_hdr(dev);
+ 		dev->rd_msg_hdr_count++;
+ 		(*slots)--;
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 1de9ef7a272ba..a7e179626b635 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -107,6 +107,11 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_CDF, MEI_ME_PCH8_CFG)},
+ 
++	{MEI_PCI_DEVICE(MEI_DEV_ID_EBG, MEI_ME_PCH15_SPS_CFG)},
++
++	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_S, MEI_ME_PCH15_CFG)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_LP, MEI_ME_PCH15_CFG)},
++
+ 	/* required last entry */
+ 	{0, }
+ };
+diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+index c49065887e8f5..c2338750313c4 100644
+--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
++++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+@@ -537,6 +537,9 @@ static struct vmci_queue *qp_host_alloc_queue(u64 size)
+ 
+ 	queue_page_size = num_pages * sizeof(*queue->kernel_if->u.h.page);
+ 
++	if (queue_size + queue_page_size > KMALLOC_MAX_SIZE)
++		return NULL;
++
+ 	queue = kzalloc(queue_size + queue_page_size, GFP_KERNEL);
+ 	if (queue) {
+ 		queue->q_header = NULL;
+@@ -630,7 +633,7 @@ static void qp_release_pages(struct page **pages,
+ 
+ 	for (i = 0; i < num_pages; i++) {
+ 		if (dirty)
+-			set_page_dirty(pages[i]);
++			set_page_dirty_lock(pages[i]);
+ 
+ 		put_page(pages[i]);
+ 		pages[i] = NULL;
+diff --git a/drivers/mmc/host/owl-mmc.c b/drivers/mmc/host/owl-mmc.c
+index ccf214a89eda9..3d4abf175b1d8 100644
+--- a/drivers/mmc/host/owl-mmc.c
++++ b/drivers/mmc/host/owl-mmc.c
+@@ -641,7 +641,7 @@ static int owl_mmc_probe(struct platform_device *pdev)
+ 	owl_host->irq = platform_get_irq(pdev, 0);
+ 	if (owl_host->irq < 0) {
+ 		ret = -EINVAL;
+-		goto err_free_host;
++		goto err_release_channel;
+ 	}
+ 
+ 	ret = devm_request_irq(&pdev->dev, owl_host->irq, owl_irq_handler,
+@@ -649,19 +649,21 @@ static int owl_mmc_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to request irq %d\n",
+ 			owl_host->irq);
+-		goto err_free_host;
++		goto err_release_channel;
+ 	}
+ 
+ 	ret = mmc_add_host(mmc);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to add host\n");
+-		goto err_free_host;
++		goto err_release_channel;
+ 	}
+ 
+ 	dev_dbg(&pdev->dev, "Owl MMC Controller Initialized\n");
+ 
+ 	return 0;
+ 
++err_release_channel:
++	dma_release_channel(owl_host->dma);
+ err_free_host:
+ 	mmc_free_host(mmc);
+ 
+@@ -675,6 +677,7 @@ static int owl_mmc_remove(struct platform_device *pdev)
+ 
+ 	mmc_remove_host(mmc);
+ 	disable_irq(owl_host->irq);
++	dma_release_channel(owl_host->dma);
+ 	mmc_free_host(mmc);
+ 
+ 	return 0;
+diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+index fe13e1ea22dcc..f3e76d6b3e3fe 100644
+--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+@@ -186,8 +186,8 @@ renesas_sdhi_internal_dmac_start_dma(struct tmio_mmc_host *host,
+ 			mmc_get_dma_dir(data)))
+ 		goto force_pio;
+ 
+-	/* This DMAC cannot handle if buffer is not 8-bytes alignment */
+-	if (!IS_ALIGNED(sg_dma_address(sg), 8))
++	/* This DMAC cannot handle if buffer is not 128-bytes alignment */
++	if (!IS_ALIGNED(sg_dma_address(sg), 128))
+ 		goto force_pio_with_unmap;
+ 
+ 	if (data->flags & MMC_DATA_READ) {
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index fce8fa7e6b309..5d9b3106d2f70 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -1752,9 +1752,10 @@ static int sdhci_esdhc_imx_remove(struct platform_device *pdev)
+ 	struct sdhci_host *host = platform_get_drvdata(pdev);
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host);
+-	int dead = (readl(host->ioaddr + SDHCI_INT_STATUS) == 0xffffffff);
++	int dead;
+ 
+ 	pm_runtime_get_sync(&pdev->dev);
++	dead = (readl(host->ioaddr + SDHCI_INT_STATUS) == 0xffffffff);
+ 	pm_runtime_disable(&pdev->dev);
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 
+diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
+index fa76748d89293..94e3f72f6405d 100644
+--- a/drivers/mmc/host/sdhci-pci-o2micro.c
++++ b/drivers/mmc/host/sdhci-pci-o2micro.c
+@@ -33,6 +33,8 @@
+ #define O2_SD_ADMA2		0xE7
+ #define O2_SD_INF_MOD		0xF1
+ #define O2_SD_MISC_CTRL4	0xFC
++#define O2_SD_MISC_CTRL		0x1C0
++#define O2_SD_PWR_FORCE_L0	0x0002
+ #define O2_SD_TUNING_CTRL	0x300
+ #define O2_SD_PLL_SETTING	0x304
+ #define O2_SD_MISC_SETTING	0x308
+@@ -300,6 +302,8 @@ static int sdhci_o2_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ {
+ 	struct sdhci_host *host = mmc_priv(mmc);
+ 	int current_bus_width = 0;
++	u32 scratch32 = 0;
++	u16 scratch = 0;
+ 
+ 	/*
+ 	 * This handler only implements the eMMC tuning that is specific to
+@@ -312,6 +316,17 @@ static int sdhci_o2_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ 	if (WARN_ON((opcode != MMC_SEND_TUNING_BLOCK_HS200) &&
+ 			(opcode != MMC_SEND_TUNING_BLOCK)))
+ 		return -EINVAL;
++
++	/* Force power mode enter L0 */
++	scratch = sdhci_readw(host, O2_SD_MISC_CTRL);
++	scratch |= O2_SD_PWR_FORCE_L0;
++	sdhci_writew(host, scratch, O2_SD_MISC_CTRL);
++
++	/* wait DLL lock, timeout value 5ms */
++	if (readx_poll_timeout(sdhci_o2_pll_dll_wdt_control, host,
++		scratch32, (scratch32 & O2_DLL_LOCK_STATUS), 1, 5000))
++		pr_warn("%s: DLL can't lock in 5ms after force L0 during tuning.\n",
++				mmc_hostname(host->mmc));
+ 	/*
+ 	 * Judge the tuning reason, whether caused by dll shift
+ 	 * If cause by dll shift, should call sdhci_o2_dll_recovery
+@@ -344,6 +359,11 @@ static int sdhci_o2_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ 		sdhci_set_bus_width(host, current_bus_width);
+ 	}
+ 
++	/* Cancel force power mode enter L0 */
++	scratch = sdhci_readw(host, O2_SD_MISC_CTRL);
++	scratch &= ~(O2_SD_PWR_FORCE_L0);
++	sdhci_writew(host, scratch, O2_SD_MISC_CTRL);
++
+ 	sdhci_reset(host, SDHCI_RESET_CMD);
+ 	sdhci_reset(host, SDHCI_RESET_DATA);
+ 
+diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c
+index 58109c5b53e2e..19cbb6171b358 100644
+--- a/drivers/mmc/host/sdhci-sprd.c
++++ b/drivers/mmc/host/sdhci-sprd.c
+@@ -708,14 +708,14 @@ static int sdhci_sprd_remove(struct platform_device *pdev)
+ {
+ 	struct sdhci_host *host = platform_get_drvdata(pdev);
+ 	struct sdhci_sprd_host *sprd_host = TO_SPRD_HOST(host);
+-	struct mmc_host *mmc = host->mmc;
+ 
+-	mmc_remove_host(mmc);
++	sdhci_remove_host(host, 0);
++
+ 	clk_disable_unprepare(sprd_host->clk_sdio);
+ 	clk_disable_unprepare(sprd_host->clk_enable);
+ 	clk_disable_unprepare(sprd_host->clk_2x_enable);
+ 
+-	mmc_free_host(mmc);
++	sdhci_pltfm_free(pdev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mmc/host/usdhi6rol0.c b/drivers/mmc/host/usdhi6rol0.c
+index e2d5112d809dc..615f3d008af1e 100644
+--- a/drivers/mmc/host/usdhi6rol0.c
++++ b/drivers/mmc/host/usdhi6rol0.c
+@@ -1858,10 +1858,12 @@ static int usdhi6_probe(struct platform_device *pdev)
+ 
+ 	ret = mmc_add_host(mmc);
+ 	if (ret < 0)
+-		goto e_clk_off;
++		goto e_release_dma;
+ 
+ 	return 0;
+ 
++e_release_dma:
++	usdhi6_dma_release(host);
+ e_clk_off:
+ 	clk_disable_unprepare(host->clk);
+ e_free_mmc:
+diff --git a/drivers/mtd/parsers/afs.c b/drivers/mtd/parsers/afs.c
+index 980e332bdac48..26116694c821b 100644
+--- a/drivers/mtd/parsers/afs.c
++++ b/drivers/mtd/parsers/afs.c
+@@ -370,10 +370,8 @@ static int parse_afs_partitions(struct mtd_info *mtd,
+ 	return i;
+ 
+ out_free_parts:
+-	while (i >= 0) {
++	while (--i >= 0)
+ 		kfree(parts[i].name);
+-		i--;
+-	}
+ 	kfree(parts);
+ 	*pparts = NULL;
+ 	return ret;
+diff --git a/drivers/mtd/parsers/parser_imagetag.c b/drivers/mtd/parsers/parser_imagetag.c
+index d69607b482272..fab0949aabba1 100644
+--- a/drivers/mtd/parsers/parser_imagetag.c
++++ b/drivers/mtd/parsers/parser_imagetag.c
+@@ -83,6 +83,7 @@ static int bcm963xx_parse_imagetag_partitions(struct mtd_info *master,
+ 			pr_err("invalid rootfs address: %*ph\n",
+ 				(int)sizeof(buf->flash_image_start),
+ 				buf->flash_image_start);
++			ret = -EINVAL;
+ 			goto out;
+ 		}
+ 
+@@ -92,6 +93,7 @@ static int bcm963xx_parse_imagetag_partitions(struct mtd_info *master,
+ 			pr_err("invalid kernel address: %*ph\n",
+ 				(int)sizeof(buf->kernel_address),
+ 				buf->kernel_address);
++			ret = -EINVAL;
+ 			goto out;
+ 		}
+ 
+@@ -100,6 +102,7 @@ static int bcm963xx_parse_imagetag_partitions(struct mtd_info *master,
+ 			pr_err("invalid kernel length: %*ph\n",
+ 				(int)sizeof(buf->kernel_length),
+ 				buf->kernel_length);
++			ret = -EINVAL;
+ 			goto out;
+ 		}
+ 
+@@ -108,6 +111,7 @@ static int bcm963xx_parse_imagetag_partitions(struct mtd_info *master,
+ 			pr_err("invalid total length: %*ph\n",
+ 				(int)sizeof(buf->total_length),
+ 				buf->total_length);
++			ret = -EINVAL;
+ 			goto out;
+ 		}
+ 
+diff --git a/drivers/mtd/spi-nor/controllers/hisi-sfc.c b/drivers/mtd/spi-nor/controllers/hisi-sfc.c
+index 95c502173cbda..440fc5ae7d34c 100644
+--- a/drivers/mtd/spi-nor/controllers/hisi-sfc.c
++++ b/drivers/mtd/spi-nor/controllers/hisi-sfc.c
+@@ -399,8 +399,10 @@ static int hisi_spi_nor_register_all(struct hifmc_host *host)
+ 
+ 	for_each_available_child_of_node(dev->of_node, np) {
+ 		ret = hisi_spi_nor_register(np, host);
+-		if (ret)
++		if (ret) {
++			of_node_put(np);
+ 			goto fail;
++		}
+ 
+ 		if (host->num_chip == HIFMC_MAX_CHIP_NUM) {
+ 			dev_warn(dev, "Flash device number exceeds the maximum chipselect number\n");
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index ad6c79d9a7f86..06e1bf01fd920 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -1212,14 +1212,15 @@ spi_nor_find_best_erase_type(const struct spi_nor_erase_map *map,
+ 
+ 		erase = &map->erase_type[i];
+ 
++		/* Alignment is not mandatory for overlaid regions */
++		if (region->offset & SNOR_OVERLAID_REGION &&
++		    region->size <= len)
++			return erase;
++
+ 		/* Don't erase more than what the user has asked for. */
+ 		if (erase->size > len)
+ 			continue;
+ 
+-		/* Alignment is not mandatory for overlaid regions */
+-		if (region->offset & SNOR_OVERLAID_REGION)
+-			return erase;
+-
+ 		spi_nor_div_by_erase_size(erase, addr, &rem);
+ 		if (rem)
+ 			continue;
+@@ -1363,6 +1364,7 @@ static int spi_nor_init_erase_cmd_list(struct spi_nor *nor,
+ 			goto destroy_erase_cmd_list;
+ 
+ 		if (prev_erase != erase ||
++		    erase->size != cmd->size ||
+ 		    region->offset & SNOR_OVERLAID_REGION) {
+ 			cmd = spi_nor_init_erase_cmd(region, erase);
+ 			if (IS_ERR(cmd)) {
+diff --git a/drivers/mtd/spi-nor/sfdp.c b/drivers/mtd/spi-nor/sfdp.c
+index e2a43d39eb5f4..08de2a2b44520 100644
+--- a/drivers/mtd/spi-nor/sfdp.c
++++ b/drivers/mtd/spi-nor/sfdp.c
+@@ -760,7 +760,7 @@ spi_nor_region_check_overlay(struct spi_nor_erase_region *region,
+ 	int i;
+ 
+ 	for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) {
+-		if (!(erase_type & BIT(i)))
++		if (!(erase[i].size && erase_type & BIT(erase[i].idx)))
+ 			continue;
+ 		if (region->size & erase[i].size_mask) {
+ 			spi_nor_region_mark_overlay(region);
+@@ -830,6 +830,7 @@ spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
+ 		offset = (region[i].offset & ~SNOR_ERASE_FLAGS_MASK) +
+ 			 region[i].size;
+ 	}
++	spi_nor_region_mark_end(&region[i - 1]);
+ 
+ 	save_uniform_erase_type = map->uniform_erase_type;
+ 	map->uniform_erase_type = spi_nor_sort_erase_mask(map,
+@@ -853,8 +854,6 @@ spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
+ 		if (!(regions_erase_type & BIT(erase[i].idx)))
+ 			spi_nor_set_erase_type(&erase[i], 0, 0xFF);
+ 
+-	spi_nor_region_mark_end(&region[i - 1]);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
+index c3dbe64e628ea..13e0a8caf3b6f 100644
+--- a/drivers/net/Kconfig
++++ b/drivers/net/Kconfig
+@@ -87,7 +87,7 @@ config WIREGUARD
+ 	select CRYPTO_CURVE25519_X86 if X86 && 64BIT
+ 	select ARM_CRYPTO if ARM
+ 	select ARM64_CRYPTO if ARM64
+-	select CRYPTO_CHACHA20_NEON if (ARM || ARM64) && KERNEL_MODE_NEON
++	select CRYPTO_CHACHA20_NEON if ARM || (ARM64 && KERNEL_MODE_NEON)
+ 	select CRYPTO_POLY1305_NEON if ARM64 && KERNEL_MODE_NEON
+ 	select CRYPTO_POLY1305_ARM if ARM
+ 	select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index 59de6b3b5f026..096d818c167e2 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -2824,7 +2824,7 @@ static int mcp251xfd_probe(struct spi_device *spi)
+ 			spi_get_device_id(spi)->driver_data;
+ 
+ 	/* Errata Reference:
+-	 * mcp2517fd: DS80000789B, mcp2518fd: DS80000792C 4.
++	 * mcp2517fd: DS80000792C 5., mcp2518fd: DS80000789C 4.
+ 	 *
+ 	 * The SPI can write corrupted data to the RAM at fast SPI
+ 	 * speeds:
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index 89d7c9b231863..4e53464411edf 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -635,14 +635,18 @@ static void felix_teardown(struct dsa_switch *ds)
+ 	struct felix *felix = ocelot_to_felix(ocelot);
+ 	int port;
+ 
+-	if (felix->info->mdio_bus_free)
+-		felix->info->mdio_bus_free(ocelot);
+-
+-	for (port = 0; port < ocelot->num_phys_ports; port++)
+-		ocelot_deinit_port(ocelot, port);
+ 	ocelot_deinit_timestamp(ocelot);
+-	/* stop workqueue thread */
+ 	ocelot_deinit(ocelot);
++
++	for (port = 0; port < ocelot->num_phys_ports; port++) {
++		if (dsa_is_unused_port(ds, port))
++			continue;
++
++		ocelot_deinit_port(ocelot, port);
++	}
++
++	if (felix->info->mdio_bus_free)
++		felix->info->mdio_bus_free(ocelot);
+ }
+ 
+ static int felix_hwtstamp_get(struct dsa_switch *ds, int port,
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+index b40d4377cc71d..b2cd3bdba9f89 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+@@ -1279,10 +1279,18 @@
+ #define MDIO_PMA_10GBR_FECCTRL		0x00ab
+ #endif
+ 
++#ifndef MDIO_PMA_RX_CTRL1
++#define MDIO_PMA_RX_CTRL1		0x8051
++#endif
++
+ #ifndef MDIO_PCS_DIG_CTRL
+ #define MDIO_PCS_DIG_CTRL		0x8000
+ #endif
+ 
++#ifndef MDIO_PCS_DIGITAL_STAT
++#define MDIO_PCS_DIGITAL_STAT		0x8010
++#endif
++
+ #ifndef MDIO_AN_XNP
+ #define MDIO_AN_XNP			0x0016
+ #endif
+@@ -1358,6 +1366,8 @@
+ #define XGBE_KR_TRAINING_ENABLE		BIT(1)
+ 
+ #define XGBE_PCS_CL37_BP		BIT(12)
++#define XGBE_PCS_PSEQ_STATE_MASK	0x1c
++#define XGBE_PCS_PSEQ_STATE_POWER_GOOD	0x10
+ 
+ #define XGBE_AN_CL37_INT_CMPLT		BIT(0)
+ #define XGBE_AN_CL37_INT_MASK		0x01
+@@ -1375,6 +1385,10 @@
+ #define XGBE_PMA_CDR_TRACK_EN_OFF	0x00
+ #define XGBE_PMA_CDR_TRACK_EN_ON	0x01
+ 
++#define XGBE_PMA_RX_RST_0_MASK		BIT(4)
++#define XGBE_PMA_RX_RST_0_RESET_ON	0x10
++#define XGBE_PMA_RX_RST_0_RESET_OFF	0x00
++
+ /* Bit setting and getting macros
+  *  The get macro will extract the current bit field value from within
+  *  the variable
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+index 2709a2db56577..395eb0b526802 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+@@ -1368,6 +1368,7 @@ static void xgbe_stop(struct xgbe_prv_data *pdata)
+ 		return;
+ 
+ 	netif_tx_stop_all_queues(netdev);
++	netif_carrier_off(pdata->netdev);
+ 
+ 	xgbe_stop_timers(pdata);
+ 	flush_workqueue(pdata->dev_workqueue);
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+index 93ef5a30cb8d9..4e97b48695220 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+@@ -1345,7 +1345,7 @@ static void xgbe_phy_status(struct xgbe_prv_data *pdata)
+ 							     &an_restart);
+ 	if (an_restart) {
+ 		xgbe_phy_config_aneg(pdata);
+-		return;
++		goto adjust_link;
+ 	}
+ 
+ 	if (pdata->phy.link) {
+@@ -1396,7 +1396,6 @@ static void xgbe_phy_stop(struct xgbe_prv_data *pdata)
+ 	pdata->phy_if.phy_impl.stop(pdata);
+ 
+ 	pdata->phy.link = 0;
+-	netif_carrier_off(pdata->netdev);
+ 
+ 	xgbe_phy_adjust_link(pdata);
+ }
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+index 859ded0c06b05..18e48b3bc402b 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+@@ -922,6 +922,9 @@ static bool xgbe_phy_belfuse_phy_quirks(struct xgbe_prv_data *pdata)
+ 	if ((phy_id & 0xfffffff0) != 0x03625d10)
+ 		return false;
+ 
++	/* Reset PHY - wait for self-clearing reset bit to clear */
++	genphy_soft_reset(phy_data->phydev);
++
+ 	/* Disable RGMII mode */
+ 	phy_write(phy_data->phydev, 0x18, 0x7007);
+ 	reg = phy_read(phy_data->phydev, 0x18);
+@@ -1953,6 +1956,27 @@ static void xgbe_phy_set_redrv_mode(struct xgbe_prv_data *pdata)
+ 	xgbe_phy_put_comm_ownership(pdata);
+ }
+ 
++static void xgbe_phy_rx_reset(struct xgbe_prv_data *pdata)
++{
++	int reg;
++
++	reg = XMDIO_READ_BITS(pdata, MDIO_MMD_PCS, MDIO_PCS_DIGITAL_STAT,
++			      XGBE_PCS_PSEQ_STATE_MASK);
++	if (reg == XGBE_PCS_PSEQ_STATE_POWER_GOOD) {
++		/* Mailbox command timed out, reset of RX block is required.
++		 * This can be done by asseting the reset bit and wait for
++		 * its compeletion.
++		 */
++		XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_CTRL1,
++				 XGBE_PMA_RX_RST_0_MASK, XGBE_PMA_RX_RST_0_RESET_ON);
++		ndelay(20);
++		XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_CTRL1,
++				 XGBE_PMA_RX_RST_0_MASK, XGBE_PMA_RX_RST_0_RESET_OFF);
++		usleep_range(40, 50);
++		netif_err(pdata, link, pdata->netdev, "firmware mailbox reset performed\n");
++	}
++}
++
+ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
+ 					unsigned int cmd, unsigned int sub_cmd)
+ {
+@@ -1960,9 +1984,11 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
+ 	unsigned int wait;
+ 
+ 	/* Log if a previous command did not complete */
+-	if (XP_IOREAD_BITS(pdata, XP_DRIVER_INT_RO, STATUS))
++	if (XP_IOREAD_BITS(pdata, XP_DRIVER_INT_RO, STATUS)) {
+ 		netif_dbg(pdata, link, pdata->netdev,
+ 			  "firmware mailbox not ready for command\n");
++		xgbe_phy_rx_reset(pdata);
++	}
+ 
+ 	/* Construct the command */
+ 	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, COMMAND, cmd);
+@@ -1984,6 +2010,9 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
+ 
+ 	netif_dbg(pdata, link, pdata->netdev,
+ 		  "firmware mailbox command did not complete\n");
++
++	/* Reset on error */
++	xgbe_phy_rx_reset(pdata);
+ }
+ 
+ static void xgbe_phy_rrc(struct xgbe_prv_data *pdata)
+@@ -2584,6 +2613,14 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart)
+ 	if (reg & MDIO_STAT1_LSTATUS)
+ 		return 1;
+ 
++	if (pdata->phy.autoneg == AUTONEG_ENABLE &&
++	    phy_data->port_mode == XGBE_PORT_MODE_BACKPLANE) {
++		if (!test_bit(XGBE_LINK_INIT, &pdata->dev_state)) {
++			netif_carrier_off(pdata->netdev);
++			*an_restart = 1;
++		}
++	}
++
+ 	/* No link, attempt a receiver reset cycle */
+ 	if (phy_data->rrc_count++ > XGBE_RRC_FREQUENCY) {
+ 		phy_data->rrc_count = 0;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 033bfab24ef2f..c7c5c01a783a0 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -8856,9 +8856,10 @@ void bnxt_tx_disable(struct bnxt *bp)
+ 			txr->dev_state = BNXT_DEV_STATE_CLOSING;
+ 		}
+ 	}
++	/* Drop carrier first to prevent TX timeout */
++	netif_carrier_off(bp->dev);
+ 	/* Stop all TX queues */
+ 	netif_tx_disable(bp->dev);
+-	netif_carrier_off(bp->dev);
+ }
+ 
+ void bnxt_tx_enable(struct bnxt *bp)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+index 184b6d0513b2a..8b0e916afe6b1 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+@@ -474,8 +474,8 @@ static int bnxt_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
+ 	if (BNXT_PF(bp) && !bnxt_hwrm_get_nvm_cfg_ver(bp, &nvm_cfg_ver)) {
+ 		u32 ver = nvm_cfg_ver.vu32;
+ 
+-		sprintf(buf, "%X.%X.%X", (ver >> 16) & 0xF, (ver >> 8) & 0xF,
+-			ver & 0xF);
++		sprintf(buf, "%d.%d.%d", (ver >> 16) & 0xf, (ver >> 8) & 0xf,
++			ver & 0xf);
+ 		rc = bnxt_dl_info_put(bp, req, BNXT_VERSION_STORED,
+ 				      DEVLINK_INFO_VERSION_GENERIC_FW_PSID,
+ 				      buf);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
+index 1b49f2fa9b185..34546f5312eee 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
+@@ -46,6 +46,9 @@
+ #define MAX_ULD_QSETS 16
+ #define MAX_ULD_NPORTS 4
+ 
++/* ulp_mem_io + ulptx_idata + payload + padding */
++#define MAX_IMM_ULPTX_WR_LEN (32 + 8 + 256 + 8)
++
+ /* CPL message priority levels */
+ enum {
+ 	CPL_PRIORITY_DATA     = 0,  /* data messages */
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+index 196652a114c5f..3334c9e2152ab 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+@@ -2842,17 +2842,22 @@ int t4_mgmt_tx(struct adapter *adap, struct sk_buff *skb)
+  *	@skb: the packet
+  *
+  *	Returns true if a packet can be sent as an offload WR with immediate
+- *	data.  We currently use the same limit as for Ethernet packets.
++ *	data.
++ *	FW_OFLD_TX_DATA_WR limits the payload to 255 bytes due to 8-bit field.
++ *      However, FW_ULPTX_WR commands have a 256 byte immediate only
++ *      payload limit.
+  */
+ static inline int is_ofld_imm(const struct sk_buff *skb)
+ {
+ 	struct work_request_hdr *req = (struct work_request_hdr *)skb->data;
+ 	unsigned long opcode = FW_WR_OP_G(ntohl(req->wr_hi));
+ 
+-	if (opcode == FW_CRYPTO_LOOKASIDE_WR)
++	if (unlikely(opcode == FW_ULPTX_WR))
++		return skb->len <= MAX_IMM_ULPTX_WR_LEN;
++	else if (opcode == FW_CRYPTO_LOOKASIDE_WR)
+ 		return skb->len <= SGE_MAX_WR_LEN;
+ 	else
+-		return skb->len <= MAX_IMM_TX_PKT_LEN;
++		return skb->len <= MAX_IMM_OFLD_TX_DATA_WR_LEN;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h
+index 47ba81e42f5d0..b1161bdeda4dc 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h
+@@ -50,9 +50,6 @@
+ #define MIN_RCV_WND (24 * 1024U)
+ #define LOOPBACK(x)     (((x) & htonl(0xff000000)) == htonl(0x7f000000))
+ 
+-/* ulp_mem_io + ulptx_idata + payload + padding */
+-#define MAX_IMM_ULPTX_WR_LEN (32 + 8 + 256 + 8)
+-
+ /* for TX: a skb must have a headroom of at least TX_HEADER_LEN bytes */
+ #define TX_HEADER_LEN \
+ 	(sizeof(struct fw_ofld_tx_data_wr) + sizeof(struct sge_opaque_hdr))
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index d880ab2a7d962..f91c67489e629 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -399,10 +399,20 @@ static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv,
+ 		xdp.frame_sz = DPAA2_ETH_RX_BUF_RAW_SIZE;
+ 
+ 		err = xdp_do_redirect(priv->net_dev, &xdp, xdp_prog);
+-		if (unlikely(err))
++		if (unlikely(err)) {
++			addr = dma_map_page(priv->net_dev->dev.parent,
++					    virt_to_page(vaddr), 0,
++					    priv->rx_buf_size, DMA_BIDIRECTIONAL);
++			if (unlikely(dma_mapping_error(priv->net_dev->dev.parent, addr))) {
++				free_pages((unsigned long)vaddr, 0);
++			} else {
++				ch->buf_count++;
++				dpaa2_eth_xdp_release_buf(priv, ch, addr);
++			}
+ 			ch->stats.xdp_drop++;
+-		else
++		} else {
+ 			ch->stats.xdp_redirect++;
++		}
+ 		break;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index 06514af0df106..796e3d6f23f09 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -1164,14 +1164,15 @@ static void enetc_pf_remove(struct pci_dev *pdev)
+ 	struct enetc_ndev_priv *priv;
+ 
+ 	priv = netdev_priv(si->ndev);
+-	enetc_phylink_destroy(priv);
+-	enetc_mdiobus_destroy(pf);
+ 
+ 	if (pf->num_vfs)
+ 		enetc_sriov_configure(pdev, 0);
+ 
+ 	unregister_netdev(si->ndev);
+ 
++	enetc_phylink_destroy(priv);
++	enetc_mdiobus_destroy(pf);
++
+ 	enetc_free_msix(priv);
+ 
+ 	enetc_free_si_resources(priv);
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index ee16e0e4fa5fc..5e1f4e71af7bc 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -249,8 +249,13 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
+ 	if (!ltb->buff)
+ 		return;
+ 
++	/* VIOS automatically unmaps the long term buffer at remote
++	 * end for the following resets:
++	 * FAILOVER, MOBILITY, TIMEOUT.
++	 */
+ 	if (adapter->reset_reason != VNIC_RESET_FAILOVER &&
+-	    adapter->reset_reason != VNIC_RESET_MOBILITY)
++	    adapter->reset_reason != VNIC_RESET_MOBILITY &&
++	    adapter->reset_reason != VNIC_RESET_TIMEOUT)
+ 		send_request_unmap(adapter, ltb->map_id);
+ 	dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
+ }
+@@ -1329,10 +1334,8 @@ static int __ibmvnic_close(struct net_device *netdev)
+ 
+ 	adapter->state = VNIC_CLOSING;
+ 	rc = set_link_state(adapter, IBMVNIC_LOGICAL_LNK_DN);
+-	if (rc)
+-		return rc;
+ 	adapter->state = VNIC_CLOSED;
+-	return 0;
++	return rc;
+ }
+ 
+ static int ibmvnic_close(struct net_device *netdev)
+@@ -1594,6 +1597,9 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 		skb_copy_from_linear_data(skb, dst, skb->len);
+ 	}
+ 
++	/* post changes to long_term_buff *dst before VIOS accessing it */
++	dma_wmb();
++
+ 	tx_pool->consumer_index =
+ 	    (tx_pool->consumer_index + 1) % tx_pool->num_buffers;
+ 
+@@ -2434,6 +2440,8 @@ restart_poll:
+ 		offset = be16_to_cpu(next->rx_comp.off_frame_data);
+ 		flags = next->rx_comp.flags;
+ 		skb = rx_buff->skb;
++		/* load long_term_buff before copying to skb */
++		dma_rmb();
+ 		skb_copy_to_linear_data(skb, rx_buff->data + offset,
+ 					length);
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 26ba1f3eb2d85..9e81f85ee2d8d 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -4878,7 +4878,7 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
+ 	enum i40e_admin_queue_err adq_err;
+ 	struct i40e_vsi *vsi = np->vsi;
+ 	struct i40e_pf *pf = vsi->back;
+-	bool is_reset_needed;
++	u32 reset_needed = 0;
+ 	i40e_status status;
+ 	u32 i, j;
+ 
+@@ -4923,9 +4923,11 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
+ flags_complete:
+ 	changed_flags = orig_flags ^ new_flags;
+ 
+-	is_reset_needed = !!(changed_flags & (I40E_FLAG_VEB_STATS_ENABLED |
+-		I40E_FLAG_LEGACY_RX | I40E_FLAG_SOURCE_PRUNING_DISABLED |
+-		I40E_FLAG_DISABLE_FW_LLDP));
++	if (changed_flags & I40E_FLAG_DISABLE_FW_LLDP)
++		reset_needed = I40E_PF_RESET_AND_REBUILD_FLAG;
++	if (changed_flags & (I40E_FLAG_VEB_STATS_ENABLED |
++	    I40E_FLAG_LEGACY_RX | I40E_FLAG_SOURCE_PRUNING_DISABLED))
++		reset_needed = BIT(__I40E_PF_RESET_REQUESTED);
+ 
+ 	/* Before we finalize any flag changes, we need to perform some
+ 	 * checks to ensure that the changes are supported and safe.
+@@ -5057,7 +5059,7 @@ flags_complete:
+ 				case I40E_AQ_RC_EEXIST:
+ 					dev_warn(&pf->pdev->dev,
+ 						 "FW LLDP agent is already running\n");
+-					is_reset_needed = false;
++					reset_needed = 0;
+ 					break;
+ 				case I40E_AQ_RC_EPERM:
+ 					dev_warn(&pf->pdev->dev,
+@@ -5086,8 +5088,8 @@ flags_complete:
+ 	/* Issue reset to cause things to take effect, as additional bits
+ 	 * are added we will need to create a mask of bits requiring reset
+ 	 */
+-	if (is_reset_needed)
+-		i40e_do_reset(pf, BIT(__I40E_PF_RESET_REQUESTED), true);
++	if (reset_needed)
++		i40e_do_reset(pf, reset_needed, true);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 1db482d310c2d..59971f62e6268 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -2616,7 +2616,7 @@ static void i40e_sync_filters_subtask(struct i40e_pf *pf)
+ 		return;
+ 	if (!test_and_clear_bit(__I40E_MACVLAN_SYNC_PENDING, pf->state))
+ 		return;
+-	if (test_and_set_bit(__I40E_VF_DISABLE, pf->state)) {
++	if (test_bit(__I40E_VF_DISABLE, pf->state)) {
+ 		set_bit(__I40E_MACVLAN_SYNC_PENDING, pf->state);
+ 		return;
+ 	}
+@@ -2634,7 +2634,6 @@ static void i40e_sync_filters_subtask(struct i40e_pf *pf)
+ 			}
+ 		}
+ 	}
+-	clear_bit(__I40E_VF_DISABLE, pf->state);
+ }
+ 
+ /**
+@@ -7667,6 +7666,8 @@ int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
+ 	if (filter->flags >= ARRAY_SIZE(flag_table))
+ 		return I40E_ERR_CONFIG;
+ 
++	memset(&cld_filter, 0, sizeof(cld_filter));
++
+ 	/* copy element needed to add cloud filter from filter */
+ 	i40e_set_cld_element(filter, &cld_filter);
+ 
+@@ -7730,10 +7731,13 @@ int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
+ 		return -EOPNOTSUPP;
+ 
+ 	/* adding filter using src_port/src_ip is not supported at this stage */
+-	if (filter->src_port || filter->src_ipv4 ||
++	if (filter->src_port ||
++	    (filter->src_ipv4 && filter->n_proto != ETH_P_IPV6) ||
+ 	    !ipv6_addr_any(&filter->ip.v6.src_ip6))
+ 		return -EOPNOTSUPP;
+ 
++	memset(&cld_filter, 0, sizeof(cld_filter));
++
+ 	/* copy element needed to add cloud filter from filter */
+ 	i40e_set_cld_element(filter, &cld_filter.element);
+ 
+@@ -7757,7 +7761,7 @@ int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
+ 			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT);
+ 		}
+ 
+-	} else if (filter->dst_ipv4 ||
++	} else if ((filter->dst_ipv4 && filter->n_proto != ETH_P_IPV6) ||
+ 		   !ipv6_addr_any(&filter->ip.v6.dst_ip6)) {
+ 		cld_filter.element.flags =
+ 				cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_IP_PORT);
+@@ -8533,11 +8537,6 @@ void i40e_do_reset(struct i40e_pf *pf, u32 reset_flags, bool lock_acquired)
+ 		dev_dbg(&pf->pdev->dev, "PFR requested\n");
+ 		i40e_handle_reset_warning(pf, lock_acquired);
+ 
+-		dev_info(&pf->pdev->dev,
+-			 pf->flags & I40E_FLAG_DISABLE_FW_LLDP ?
+-			 "FW LLDP is disabled\n" :
+-			 "FW LLDP is enabled\n");
+-
+ 	} else if (reset_flags & I40E_PF_RESET_AND_REBUILD_FLAG) {
+ 		/* Request a PF Reset
+ 		 *
+@@ -8545,6 +8544,10 @@ void i40e_do_reset(struct i40e_pf *pf, u32 reset_flags, bool lock_acquired)
+ 		 */
+ 		i40e_prep_for_reset(pf, lock_acquired);
+ 		i40e_reset_and_rebuild(pf, true, lock_acquired);
++		dev_info(&pf->pdev->dev,
++			 pf->flags & I40E_FLAG_DISABLE_FW_LLDP ?
++			 "FW LLDP is disabled\n" :
++			 "FW LLDP is enabled\n");
+ 
+ 	} else if (reset_flags & BIT_ULL(__I40E_REINIT_REQUESTED)) {
+ 		int v;
+@@ -10001,7 +10004,6 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ 	int old_recovery_mode_bit = test_bit(__I40E_RECOVERY_MODE, pf->state);
+ 	struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
+ 	struct i40e_hw *hw = &pf->hw;
+-	u8 set_fc_aq_fail = 0;
+ 	i40e_status ret;
+ 	u32 val;
+ 	int v;
+@@ -10127,13 +10129,6 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ 			 i40e_stat_str(&pf->hw, ret),
+ 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
+ 
+-	/* make sure our flow control settings are restored */
+-	ret = i40e_set_fc(&pf->hw, &set_fc_aq_fail, true);
+-	if (ret)
+-		dev_dbg(&pf->pdev->dev, "setting flow control: ret = %s last_status = %s\n",
+-			i40e_stat_str(&pf->hw, ret),
+-			i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
+-
+ 	/* Rebuild the VSIs and VEBs that existed before reset.
+ 	 * They are still in our local switch element arrays, so only
+ 	 * need to rebuild the switch model in the HW.
+@@ -11709,6 +11704,8 @@ i40e_status i40e_set_partition_bw_setting(struct i40e_pf *pf)
+ 	struct i40e_aqc_configure_partition_bw_data bw_data;
+ 	i40e_status status;
+ 
++	memset(&bw_data, 0, sizeof(bw_data));
++
+ 	/* Set the valid bit for this PF */
+ 	bw_data.pf_valid_bits = cpu_to_le16(BIT(pf->hw.pf_id));
+ 	bw_data.max_bw[pf->hw.pf_id] = pf->max_bw & I40E_ALT_BW_VALUE_MASK;
+@@ -14714,7 +14711,6 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	int err;
+ 	u32 val;
+ 	u32 i;
+-	u8 set_fc_aq_fail;
+ 
+ 	err = pci_enable_device_mem(pdev);
+ 	if (err)
+@@ -15048,24 +15044,6 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	}
+ 	INIT_LIST_HEAD(&pf->vsi[pf->lan_vsi]->ch_list);
+ 
+-	/* Make sure flow control is set according to current settings */
+-	err = i40e_set_fc(hw, &set_fc_aq_fail, true);
+-	if (set_fc_aq_fail & I40E_SET_FC_AQ_FAIL_GET)
+-		dev_dbg(&pf->pdev->dev,
+-			"Set fc with err %s aq_err %s on get_phy_cap\n",
+-			i40e_stat_str(hw, err),
+-			i40e_aq_str(hw, hw->aq.asq_last_status));
+-	if (set_fc_aq_fail & I40E_SET_FC_AQ_FAIL_SET)
+-		dev_dbg(&pf->pdev->dev,
+-			"Set fc with err %s aq_err %s on set_phy_config\n",
+-			i40e_stat_str(hw, err),
+-			i40e_aq_str(hw, hw->aq.asq_last_status));
+-	if (set_fc_aq_fail & I40E_SET_FC_AQ_FAIL_UPDATE)
+-		dev_dbg(&pf->pdev->dev,
+-			"Set fc with err %s aq_err %s on get_link_info\n",
+-			i40e_stat_str(hw, err),
+-			i40e_aq_str(hw, hw->aq.asq_last_status));
+-
+ 	/* if FDIR VSI was set up, start it now */
+ 	for (i = 0; i < pf->num_alloc_vsi; i++) {
+ 		if (pf->vsi[i] && pf->vsi[i]->type == I40E_VSI_FDIR) {
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 3f5825fa67c99..38dec49ac64d2 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -3102,13 +3102,16 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
+ 
+ 			l4_proto = ip.v4->protocol;
+ 		} else if (*tx_flags & I40E_TX_FLAGS_IPV6) {
++			int ret;
++
+ 			tunnel |= I40E_TX_CTX_EXT_IP_IPV6;
+ 
+ 			exthdr = ip.hdr + sizeof(*ip.v6);
+ 			l4_proto = ip.v6->nexthdr;
+-			if (l4.hdr != exthdr)
+-				ipv6_skip_exthdr(skb, exthdr - skb->data,
+-						 &l4_proto, &frag_off);
++			ret = ipv6_skip_exthdr(skb, exthdr - skb->data,
++					       &l4_proto, &frag_off);
++			if (ret < 0)
++				return -1;
+ 		}
+ 
+ 		/* define outer transport */
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 54cf382fddaf9..5b3f2bb22eba7 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -444,9 +444,7 @@ struct ice_pf {
+ 	struct ice_hw_port_stats stats_prev;
+ 	struct ice_hw hw;
+ 	u8 stat_prev_loaded:1; /* has previous stats been loaded */
+-#ifdef CONFIG_DCB
+ 	u16 dcbx_cap;
+-#endif /* CONFIG_DCB */
+ 	u32 tx_timeout_count;
+ 	unsigned long tx_timeout_last_recovery;
+ 	u32 tx_timeout_recovery_level;
+diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_nl.c b/drivers/net/ethernet/intel/ice/ice_dcb_nl.c
+index 87f91b750d59a..8c133a8be6add 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dcb_nl.c
++++ b/drivers/net/ethernet/intel/ice/ice_dcb_nl.c
+@@ -136,7 +136,7 @@ ice_dcbnl_getnumtcs(struct net_device *dev, int __always_unused tcid, u8 *num)
+ 	if (!test_bit(ICE_FLAG_DCB_CAPABLE, pf->flags))
+ 		return -EINVAL;
+ 
+-	*num = IEEE_8021QAZ_MAX_TCS;
++	*num = pf->hw.func_caps.common_cap.maxtc;
+ 	return 0;
+ }
+ 
+@@ -160,6 +160,10 @@ static u8 ice_dcbnl_setdcbx(struct net_device *netdev, u8 mode)
+ {
+ 	struct ice_pf *pf = ice_netdev_to_pf(netdev);
+ 
++	/* if FW LLDP agent is running, DCBNL not allowed to change mode */
++	if (test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags))
++		return ICE_DCB_NO_HW_CHG;
++
+ 	/* No support for LLD_MANAGED modes or CEE+IEEE */
+ 	if ((mode & DCB_CAP_DCBX_LLD_MANAGED) ||
+ 	    ((mode & DCB_CAP_DCBX_VER_IEEE) && (mode & DCB_CAP_DCBX_VER_CEE)) ||
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 69c113a4de7e6..aebebd2102da0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -8,6 +8,7 @@
+ #include "ice_fltr.h"
+ #include "ice_lib.h"
+ #include "ice_dcb_lib.h"
++#include <net/dcbnl.h>
+ 
+ struct ice_stats {
+ 	char stat_string[ETH_GSTRING_LEN];
+@@ -1238,6 +1239,9 @@ static int ice_set_priv_flags(struct net_device *netdev, u32 flags)
+ 			status = ice_init_pf_dcb(pf, true);
+ 			if (status)
+ 				dev_warn(dev, "Fail to init DCB\n");
++
++			pf->dcbx_cap &= ~DCB_CAP_DCBX_LLD_MANAGED;
++			pf->dcbx_cap |= DCB_CAP_DCBX_HOST;
+ 		} else {
+ 			enum ice_status status;
+ 			bool dcbx_agent_status;
+@@ -1280,6 +1284,9 @@ static int ice_set_priv_flags(struct net_device *netdev, u32 flags)
+ 			if (status)
+ 				dev_dbg(dev, "Fail to enable MIB change events\n");
+ 
++			pf->dcbx_cap &= ~DCB_CAP_DCBX_HOST;
++			pf->dcbx_cap |= DCB_CAP_DCBX_LLD_MANAGED;
++
+ 			ice_nway_reset(netdev);
+ 		}
+ 	}
+@@ -3321,6 +3328,18 @@ ice_get_channels(struct net_device *dev, struct ethtool_channels *ch)
+ 	ch->max_other = ch->other_count;
+ }
+ 
++/**
++ * ice_get_valid_rss_size - return valid number of RSS queues
++ * @hw: pointer to the HW structure
++ * @new_size: requested RSS queues
++ */
++static int ice_get_valid_rss_size(struct ice_hw *hw, int new_size)
++{
++	struct ice_hw_common_caps *caps = &hw->func_caps.common_cap;
++
++	return min_t(int, new_size, BIT(caps->rss_table_entry_width));
++}
++
+ /**
+  * ice_vsi_set_dflt_rss_lut - set default RSS LUT with requested RSS size
+  * @vsi: VSI to reconfigure RSS LUT on
+@@ -3348,14 +3367,10 @@ static int ice_vsi_set_dflt_rss_lut(struct ice_vsi *vsi, int req_rss_size)
+ 		return -ENOMEM;
+ 
+ 	/* set RSS LUT parameters */
+-	if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) {
++	if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags))
+ 		vsi->rss_size = 1;
+-	} else {
+-		struct ice_hw_common_caps *caps = &hw->func_caps.common_cap;
+-
+-		vsi->rss_size = min_t(int, req_rss_size,
+-				      BIT(caps->rss_table_entry_width));
+-	}
++	else
++		vsi->rss_size = ice_get_valid_rss_size(hw, req_rss_size);
+ 
+ 	/* create/set RSS LUT */
+ 	ice_fill_rss_lut(lut, vsi->rss_table_size, vsi->rss_size);
+@@ -3434,9 +3449,12 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch)
+ 
+ 	ice_vsi_recfg_qs(vsi, new_rx, new_tx);
+ 
+-	if (new_rx && !netif_is_rxfh_configured(dev))
++	if (!netif_is_rxfh_configured(dev))
+ 		return ice_vsi_set_dflt_rss_lut(vsi, new_rx);
+ 
++	/* Update rss_size due to change in Rx queues */
++	vsi->rss_size = ice_get_valid_rss_size(&pf->hw, new_rx);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index ec7f6c64132ee..b3161c5def465 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -1878,6 +1878,29 @@ static int ice_vc_get_ver_msg(struct ice_vf *vf, u8 *msg)
+ 				     sizeof(struct virtchnl_version_info));
+ }
+ 
++/**
++ * ice_vc_get_max_frame_size - get max frame size allowed for VF
++ * @vf: VF used to determine max frame size
++ *
++ * Max frame size is determined based on the current port's max frame size and
++ * whether a port VLAN is configured on this VF. The VF is not aware whether
++ * it's in a port VLAN so the PF needs to account for this in max frame size
++ * checks and sending the max frame size to the VF.
++ */
++static u16 ice_vc_get_max_frame_size(struct ice_vf *vf)
++{
++	struct ice_vsi *vsi = vf->pf->vsi[vf->lan_vsi_idx];
++	struct ice_port_info *pi = vsi->port_info;
++	u16 max_frame_size;
++
++	max_frame_size = pi->phy.link_info.max_frame_size;
++
++	if (vf->port_vlan_info)
++		max_frame_size -= VLAN_HLEN;
++
++	return max_frame_size;
++}
++
+ /**
+  * ice_vc_get_vf_res_msg
+  * @vf: pointer to the VF info
+@@ -1960,6 +1983,7 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
+ 	vfres->max_vectors = pf->num_msix_per_vf;
+ 	vfres->rss_key_size = ICE_VSIQF_HKEY_ARRAY_SIZE;
+ 	vfres->rss_lut_size = ICE_VSIQF_HLUT_ARRAY_SIZE;
++	vfres->max_mtu = ice_vc_get_max_frame_size(vf);
+ 
+ 	vfres->vsi_res[0].vsi_id = vf->lan_vsi_num;
+ 	vfres->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV;
+@@ -2952,6 +2976,8 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+ 
+ 		/* copy Rx queue info from VF into VSI */
+ 		if (qpi->rxq.ring_len > 0) {
++			u16 max_frame_size = ice_vc_get_max_frame_size(vf);
++
+ 			num_rxq++;
+ 			vsi->rx_rings[i]->dma = qpi->rxq.dma_ring_addr;
+ 			vsi->rx_rings[i]->count = qpi->rxq.ring_len;
+@@ -2964,7 +2990,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+ 			}
+ 			vsi->rx_buf_len = qpi->rxq.databuffer_size;
+ 			vsi->rx_rings[i]->rx_buf_len = vsi->rx_buf_len;
+-			if (qpi->rxq.max_pkt_size >= (16 * 1024) ||
++			if (qpi->rxq.max_pkt_size > max_frame_size ||
+ 			    qpi->rxq.max_pkt_size < 64) {
+ 				v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+ 				goto error_param;
+@@ -2972,6 +2998,11 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+ 		}
+ 
+ 		vsi->max_frame = qpi->rxq.max_pkt_size;
++		/* add space for the port VLAN since the VF driver is not
++		 * expected to account for it in the MTU calculation
++		 */
++		if (vf->port_vlan_info)
++			vsi->max_frame += VLAN_HLEN;
+ 	}
+ 
+ 	/* VF can request to configure less than allocated queues or default
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index ceb4f27898002..c6b735b305156 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -3409,7 +3409,9 @@ static int mvneta_txq_sw_init(struct mvneta_port *pp,
+ 		return -ENOMEM;
+ 
+ 	/* Setup XPS mapping */
+-	if (txq_number > 1)
++	if (pp->neta_armada3700)
++		cpu = 0;
++	else if (txq_number > 1)
+ 		cpu = txq->id % num_present_cpus();
+ 	else
+ 		cpu = pp->rxq_def % num_present_cpus();
+@@ -4187,6 +4189,11 @@ static int mvneta_cpu_online(unsigned int cpu, struct hlist_node *node)
+ 						  node_online);
+ 	struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
+ 
++	/* Armada 3700's per-cpu interrupt for mvneta is broken, all interrupts
++	 * are routed to CPU 0, so we don't need all the cpu-hotplug support
++	 */
++	if (pp->neta_armada3700)
++		return 0;
+ 
+ 	spin_lock(&pp->lock);
+ 	/*
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+index 77adad4adb1bc..809f50ab0432e 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+@@ -332,7 +332,7 @@ static ssize_t rvu_dbg_qsize_write(struct file *filp,
+ 	u16 pcifunc;
+ 	int ret, lf;
+ 
+-	cmd_buf = memdup_user(buffer, count);
++	cmd_buf = memdup_user(buffer, count + 1);
+ 	if (IS_ERR(cmd_buf))
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+index 1187ef1375e29..cb341372d5a35 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
++++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+@@ -4986,6 +4986,7 @@ static int mlx4_do_mirror_rule(struct mlx4_dev *dev, struct res_fs_rule *fs_rule
+ 
+ 	if (!fs_rule->mirr_mbox) {
+ 		mlx4_err(dev, "rule mirroring mailbox is null\n");
++		mlx4_free_cmd_mailbox(dev, mailbox);
+ 		return -EINVAL;
+ 	}
+ 	memcpy(mailbox->buf, fs_rule->mirr_mbox, fs_rule->mirr_mbox_size);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index a28f95df2901d..bf5cf022e279d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -137,6 +137,11 @@ static int mlx5_devlink_reload_down(struct devlink *devlink, bool netns_change,
+ {
+ 	struct mlx5_core_dev *dev = devlink_priv(devlink);
+ 
++	if (mlx5_lag_is_active(dev)) {
++		NL_SET_ERR_MSG_MOD(extack, "reload is unsupported in Lag mode\n");
++		return -EOPNOTSUPP;
++	}
++
+ 	switch (action) {
+ 	case DEVLINK_RELOAD_ACTION_DRIVER_REINIT:
+ 		mlx5_unload_one(dev, false);
+@@ -282,6 +287,10 @@ static int mlx5_devlink_enable_roce_validate(struct devlink *devlink, u32 id,
+ 		NL_SET_ERR_MSG_MOD(extack, "Device doesn't support RoCE");
+ 		return -EOPNOTSUPP;
+ 	}
++	if (mlx5_core_is_mp_slave(dev) || mlx5_lag_is_active(dev)) {
++		NL_SET_ERR_MSG_MOD(extack, "Multi port slave/Lag device can't configure RoCE");
++		return -EOPNOTSUPP;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index 6bc6b48a56dc7..24e2c0d955b99 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -12,6 +12,7 @@
+ #include <net/flow_offload.h>
+ #include <net/netfilter/nf_flow_table.h>
+ #include <linux/workqueue.h>
++#include <linux/refcount.h>
+ #include <linux/xarray.h>
+ 
+ #include "lib/fs_chains.h"
+@@ -51,11 +52,11 @@ struct mlx5_tc_ct_priv {
+ 	struct mlx5_flow_table *ct_nat;
+ 	struct mlx5_flow_table *post_ct;
+ 	struct mutex control_lock; /* guards parallel adds/dels */
+-	struct mutex shared_counter_lock;
+ 	struct mapping_ctx *zone_mapping;
+ 	struct mapping_ctx *labels_mapping;
+ 	enum mlx5_flow_namespace_type ns_type;
+ 	struct mlx5_fs_chains *chains;
++	spinlock_t ht_lock; /* protects ft entries */
+ };
+ 
+ struct mlx5_ct_flow {
+@@ -124,6 +125,10 @@ struct mlx5_ct_counter {
+ 	bool is_shared;
+ };
+ 
++enum {
++	MLX5_CT_ENTRY_FLAG_VALID,
++};
++
+ struct mlx5_ct_entry {
+ 	struct rhash_head node;
+ 	struct rhash_head tuple_node;
+@@ -134,6 +139,12 @@ struct mlx5_ct_entry {
+ 	struct mlx5_ct_tuple tuple;
+ 	struct mlx5_ct_tuple tuple_nat;
+ 	struct mlx5_ct_zone_rule zone_rules[2];
++
++	struct mlx5_tc_ct_priv *ct_priv;
++	struct work_struct work;
++
++	refcount_t refcnt;
++	unsigned long flags;
+ };
+ 
+ static const struct rhashtable_params cts_ht_params = {
+@@ -740,6 +751,87 @@ err_attr:
+ 	return err;
+ }
+ 
++static bool
++mlx5_tc_ct_entry_valid(struct mlx5_ct_entry *entry)
++{
++	return test_bit(MLX5_CT_ENTRY_FLAG_VALID, &entry->flags);
++}
++
++static struct mlx5_ct_entry *
++mlx5_tc_ct_entry_get(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_tuple *tuple)
++{
++	struct mlx5_ct_entry *entry;
++
++	entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_ht, tuple,
++				       tuples_ht_params);
++	if (entry && mlx5_tc_ct_entry_valid(entry) &&
++	    refcount_inc_not_zero(&entry->refcnt)) {
++		return entry;
++	} else if (!entry) {
++		entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_nat_ht,
++					       tuple, tuples_nat_ht_params);
++		if (entry && mlx5_tc_ct_entry_valid(entry) &&
++		    refcount_inc_not_zero(&entry->refcnt))
++			return entry;
++	}
++
++	return entry ? ERR_PTR(-EINVAL) : NULL;
++}
++
++static void mlx5_tc_ct_entry_remove_from_tuples(struct mlx5_ct_entry *entry)
++{
++	struct mlx5_tc_ct_priv *ct_priv = entry->ct_priv;
++
++	rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht,
++			       &entry->tuple_nat_node,
++			       tuples_nat_ht_params);
++	rhashtable_remove_fast(&ct_priv->ct_tuples_ht, &entry->tuple_node,
++			       tuples_ht_params);
++}
++
++static void mlx5_tc_ct_entry_del(struct mlx5_ct_entry *entry)
++{
++	struct mlx5_tc_ct_priv *ct_priv = entry->ct_priv;
++
++	mlx5_tc_ct_entry_del_rules(ct_priv, entry);
++
++	spin_lock_bh(&ct_priv->ht_lock);
++	mlx5_tc_ct_entry_remove_from_tuples(entry);
++	spin_unlock_bh(&ct_priv->ht_lock);
++
++	mlx5_tc_ct_counter_put(ct_priv, entry);
++	kfree(entry);
++}
++
++static void
++mlx5_tc_ct_entry_put(struct mlx5_ct_entry *entry)
++{
++	if (!refcount_dec_and_test(&entry->refcnt))
++		return;
++
++	mlx5_tc_ct_entry_del(entry);
++}
++
++static void mlx5_tc_ct_entry_del_work(struct work_struct *work)
++{
++	struct mlx5_ct_entry *entry = container_of(work, struct mlx5_ct_entry, work);
++
++	mlx5_tc_ct_entry_del(entry);
++}
++
++static void
++__mlx5_tc_ct_entry_put(struct mlx5_ct_entry *entry)
++{
++	struct mlx5e_priv *priv;
++
++	if (!refcount_dec_and_test(&entry->refcnt))
++		return;
++
++	priv = netdev_priv(entry->ct_priv->netdev);
++	INIT_WORK(&entry->work, mlx5_tc_ct_entry_del_work);
++	queue_work(priv->wq, &entry->work);
++}
++
+ static struct mlx5_ct_counter *
+ mlx5_tc_ct_counter_create(struct mlx5_tc_ct_priv *ct_priv)
+ {
+@@ -792,16 +884,26 @@ mlx5_tc_ct_shared_counter_get(struct mlx5_tc_ct_priv *ct_priv,
+ 	}
+ 
+ 	/* Use the same counter as the reverse direction */
+-	mutex_lock(&ct_priv->shared_counter_lock);
+-	rev_entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_ht, &rev_tuple,
+-					   tuples_ht_params);
+-	if (rev_entry) {
+-		if (refcount_inc_not_zero(&rev_entry->counter->refcount)) {
+-			mutex_unlock(&ct_priv->shared_counter_lock);
+-			return rev_entry->counter;
+-		}
++	spin_lock_bh(&ct_priv->ht_lock);
++	rev_entry = mlx5_tc_ct_entry_get(ct_priv, &rev_tuple);
++
++	if (IS_ERR(rev_entry)) {
++		spin_unlock_bh(&ct_priv->ht_lock);
++		goto create_counter;
++	}
++
++	if (rev_entry && refcount_inc_not_zero(&rev_entry->counter->refcount)) {
++		ct_dbg("Using shared counter entry=0x%p rev=0x%p\n", entry, rev_entry);
++		shared_counter = rev_entry->counter;
++		spin_unlock_bh(&ct_priv->ht_lock);
++
++		mlx5_tc_ct_entry_put(rev_entry);
++		return shared_counter;
+ 	}
+-	mutex_unlock(&ct_priv->shared_counter_lock);
++
++	spin_unlock_bh(&ct_priv->ht_lock);
++
++create_counter:
+ 
+ 	shared_counter = mlx5_tc_ct_counter_create(ct_priv);
+ 	if (IS_ERR(shared_counter)) {
+@@ -866,10 +968,14 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft,
+ 	if (!meta_action)
+ 		return -EOPNOTSUPP;
+ 
+-	entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie,
+-				       cts_ht_params);
+-	if (entry)
+-		return 0;
++	spin_lock_bh(&ct_priv->ht_lock);
++	entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, cts_ht_params);
++	if (entry && refcount_inc_not_zero(&entry->refcnt)) {
++		spin_unlock_bh(&ct_priv->ht_lock);
++		mlx5_tc_ct_entry_put(entry);
++		return -EEXIST;
++	}
++	spin_unlock_bh(&ct_priv->ht_lock);
+ 
+ 	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
+ 	if (!entry)
+@@ -878,6 +984,8 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft,
+ 	entry->tuple.zone = ft->zone;
+ 	entry->cookie = flow->cookie;
+ 	entry->restore_cookie = meta_action->ct_metadata.cookie;
++	refcount_set(&entry->refcnt, 2);
++	entry->ct_priv = ct_priv;
+ 
+ 	err = mlx5_tc_ct_rule_to_tuple(&entry->tuple, flow_rule);
+ 	if (err)
+@@ -888,35 +996,40 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft,
+ 	if (err)
+ 		goto err_set;
+ 
+-	err = rhashtable_insert_fast(&ct_priv->ct_tuples_ht,
+-				     &entry->tuple_node,
+-				     tuples_ht_params);
++	spin_lock_bh(&ct_priv->ht_lock);
++
++	err = rhashtable_lookup_insert_fast(&ft->ct_entries_ht, &entry->node,
++					    cts_ht_params);
++	if (err)
++		goto err_entries;
++
++	err = rhashtable_lookup_insert_fast(&ct_priv->ct_tuples_ht,
++					    &entry->tuple_node,
++					    tuples_ht_params);
+ 	if (err)
+ 		goto err_tuple;
+ 
+ 	if (memcmp(&entry->tuple, &entry->tuple_nat, sizeof(entry->tuple))) {
+-		err = rhashtable_insert_fast(&ct_priv->ct_tuples_nat_ht,
+-					     &entry->tuple_nat_node,
+-					     tuples_nat_ht_params);
++		err = rhashtable_lookup_insert_fast(&ct_priv->ct_tuples_nat_ht,
++						    &entry->tuple_nat_node,
++						    tuples_nat_ht_params);
+ 		if (err)
+ 			goto err_tuple_nat;
+ 	}
++	spin_unlock_bh(&ct_priv->ht_lock);
+ 
+ 	err = mlx5_tc_ct_entry_add_rules(ct_priv, flow_rule, entry,
+ 					 ft->zone_restore_id);
+ 	if (err)
+ 		goto err_rules;
+ 
+-	err = rhashtable_insert_fast(&ft->ct_entries_ht, &entry->node,
+-				     cts_ht_params);
+-	if (err)
+-		goto err_insert;
++	set_bit(MLX5_CT_ENTRY_FLAG_VALID, &entry->flags);
++	mlx5_tc_ct_entry_put(entry); /* this function reference */
+ 
+ 	return 0;
+ 
+-err_insert:
+-	mlx5_tc_ct_entry_del_rules(ct_priv, entry);
+ err_rules:
++	spin_lock_bh(&ct_priv->ht_lock);
+ 	if (mlx5_tc_ct_entry_has_nat(entry))
+ 		rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht,
+ 				       &entry->tuple_nat_node, tuples_nat_ht_params);
+@@ -925,47 +1038,43 @@ err_tuple_nat:
+ 			       &entry->tuple_node,
+ 			       tuples_ht_params);
+ err_tuple:
++	rhashtable_remove_fast(&ft->ct_entries_ht,
++			       &entry->node,
++			       cts_ht_params);
++err_entries:
++	spin_unlock_bh(&ct_priv->ht_lock);
+ err_set:
+ 	kfree(entry);
+-	netdev_warn(ct_priv->netdev,
+-		    "Failed to offload ct entry, err: %d\n", err);
++	if (err != -EEXIST)
++		netdev_warn(ct_priv->netdev, "Failed to offload ct entry, err: %d\n", err);
+ 	return err;
+ }
+ 
+-static void
+-mlx5_tc_ct_del_ft_entry(struct mlx5_tc_ct_priv *ct_priv,
+-			struct mlx5_ct_entry *entry)
+-{
+-	mlx5_tc_ct_entry_del_rules(ct_priv, entry);
+-	mutex_lock(&ct_priv->shared_counter_lock);
+-	if (mlx5_tc_ct_entry_has_nat(entry))
+-		rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht,
+-				       &entry->tuple_nat_node,
+-				       tuples_nat_ht_params);
+-	rhashtable_remove_fast(&ct_priv->ct_tuples_ht, &entry->tuple_node,
+-			       tuples_ht_params);
+-	mutex_unlock(&ct_priv->shared_counter_lock);
+-	mlx5_tc_ct_counter_put(ct_priv, entry);
+-
+-}
+-
+ static int
+ mlx5_tc_ct_block_flow_offload_del(struct mlx5_ct_ft *ft,
+ 				  struct flow_cls_offload *flow)
+ {
++	struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv;
+ 	unsigned long cookie = flow->cookie;
+ 	struct mlx5_ct_entry *entry;
+ 
+-	entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie,
+-				       cts_ht_params);
+-	if (!entry)
++	spin_lock_bh(&ct_priv->ht_lock);
++	entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, cts_ht_params);
++	if (!entry) {
++		spin_unlock_bh(&ct_priv->ht_lock);
+ 		return -ENOENT;
++	}
+ 
+-	mlx5_tc_ct_del_ft_entry(ft->ct_priv, entry);
+-	WARN_ON(rhashtable_remove_fast(&ft->ct_entries_ht,
+-				       &entry->node,
+-				       cts_ht_params));
+-	kfree(entry);
++	if (!mlx5_tc_ct_entry_valid(entry)) {
++		spin_unlock_bh(&ct_priv->ht_lock);
++		return -EINVAL;
++	}
++
++	rhashtable_remove_fast(&ft->ct_entries_ht, &entry->node, cts_ht_params);
++	mlx5_tc_ct_entry_remove_from_tuples(entry);
++	spin_unlock_bh(&ct_priv->ht_lock);
++
++	mlx5_tc_ct_entry_put(entry);
+ 
+ 	return 0;
+ }
+@@ -974,19 +1083,30 @@ static int
+ mlx5_tc_ct_block_flow_offload_stats(struct mlx5_ct_ft *ft,
+ 				    struct flow_cls_offload *f)
+ {
++	struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv;
+ 	unsigned long cookie = f->cookie;
+ 	struct mlx5_ct_entry *entry;
+ 	u64 lastuse, packets, bytes;
+ 
+-	entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie,
+-				       cts_ht_params);
+-	if (!entry)
++	spin_lock_bh(&ct_priv->ht_lock);
++	entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, cts_ht_params);
++	if (!entry) {
++		spin_unlock_bh(&ct_priv->ht_lock);
+ 		return -ENOENT;
++	}
++
++	if (!mlx5_tc_ct_entry_valid(entry) || !refcount_inc_not_zero(&entry->refcnt)) {
++		spin_unlock_bh(&ct_priv->ht_lock);
++		return -EINVAL;
++	}
++
++	spin_unlock_bh(&ct_priv->ht_lock);
+ 
+ 	mlx5_fc_query_cached(entry->counter->counter, &bytes, &packets, &lastuse);
+ 	flow_stats_update(&f->stats, bytes, packets, 0, lastuse,
+ 			  FLOW_ACTION_HW_STATS_DELAYED);
+ 
++	mlx5_tc_ct_entry_put(entry);
+ 	return 0;
+ }
+ 
+@@ -1478,11 +1598,9 @@ err_mapping:
+ static void
+ mlx5_tc_ct_flush_ft_entry(void *ptr, void *arg)
+ {
+-	struct mlx5_tc_ct_priv *ct_priv = arg;
+ 	struct mlx5_ct_entry *entry = ptr;
+ 
+-	mlx5_tc_ct_del_ft_entry(ct_priv, entry);
+-	kfree(entry);
++	mlx5_tc_ct_entry_put(entry);
+ }
+ 
+ static void
+@@ -1960,6 +2078,7 @@ mlx5_tc_ct_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains,
+ 		goto err_mapping_labels;
+ 	}
+ 
++	spin_lock_init(&ct_priv->ht_lock);
+ 	ct_priv->ns_type = ns_type;
+ 	ct_priv->chains = chains;
+ 	ct_priv->netdev = priv->netdev;
+@@ -1994,7 +2113,6 @@ mlx5_tc_ct_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains,
+ 
+ 	idr_init(&ct_priv->fte_ids);
+ 	mutex_init(&ct_priv->control_lock);
+-	mutex_init(&ct_priv->shared_counter_lock);
+ 	rhashtable_init(&ct_priv->zone_ht, &zone_params);
+ 	rhashtable_init(&ct_priv->ct_tuples_ht, &tuples_ht_params);
+ 	rhashtable_init(&ct_priv->ct_tuples_nat_ht, &tuples_nat_ht_params);
+@@ -2037,7 +2155,6 @@ mlx5_tc_ct_clean(struct mlx5_tc_ct_priv *ct_priv)
+ 	rhashtable_destroy(&ct_priv->ct_tuples_nat_ht);
+ 	rhashtable_destroy(&ct_priv->zone_ht);
+ 	mutex_destroy(&ct_priv->control_lock);
+-	mutex_destroy(&ct_priv->shared_counter_lock);
+ 	idr_destroy(&ct_priv->fte_ids);
+ 	kfree(ct_priv);
+ }
+@@ -2059,14 +2176,22 @@ mlx5e_tc_ct_restore_flow(struct mlx5_tc_ct_priv *ct_priv,
+ 	if (!mlx5_tc_ct_skb_to_tuple(skb, &tuple, zone))
+ 		return false;
+ 
+-	entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_ht, &tuple,
+-				       tuples_ht_params);
+-	if (!entry)
+-		entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_nat_ht,
+-					       &tuple, tuples_nat_ht_params);
+-	if (!entry)
++	spin_lock(&ct_priv->ht_lock);
++
++	entry = mlx5_tc_ct_entry_get(ct_priv, &tuple);
++	if (!entry) {
++		spin_unlock(&ct_priv->ht_lock);
++		return false;
++	}
++
++	if (IS_ERR(entry)) {
++		spin_unlock(&ct_priv->ht_lock);
+ 		return false;
++	}
++	spin_unlock(&ct_priv->ht_lock);
+ 
+ 	tcf_ct_flow_table_restore_skb(skb, entry->restore_cookie);
++	__mlx5_tc_ct_entry_put(entry);
++
+ 	return true;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+index d487e5e371625..8d991c3b7a503 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+@@ -83,7 +83,7 @@ static inline void mlx5e_xdp_tx_disable(struct mlx5e_priv *priv)
+ 
+ 	clear_bit(MLX5E_STATE_XDP_TX_ENABLED, &priv->state);
+ 	/* Let other device's napi(s) and XSK wakeups see our new state. */
+-	synchronize_rcu();
++	synchronize_net();
+ }
+ 
+ static inline bool mlx5e_xdp_tx_is_enabled(struct mlx5e_priv *priv)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+index be3465ba38ca1..f95905fc4979e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+@@ -106,7 +106,7 @@ err_free_cparam:
+ void mlx5e_close_xsk(struct mlx5e_channel *c)
+ {
+ 	clear_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
+-	synchronize_rcu(); /* Sync with the XSK wakeup and with NAPI. */
++	synchronize_net(); /* Sync with the XSK wakeup and with NAPI. */
+ 
+ 	mlx5e_close_rq(&c->xskrq);
+ 	mlx5e_close_cq(&c->xskrq.cq);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
+index 1fae7fab8297e..ff81b69a59a9b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
+@@ -173,7 +173,7 @@ static inline bool mlx5e_accel_tx_eseg(struct mlx5e_priv *priv,
+ #endif
+ 
+ #if IS_ENABLED(CONFIG_GENEVE)
+-	if (skb->encapsulation)
++	if (skb->encapsulation && skb->ip_summed == CHECKSUM_PARTIAL)
+ 		mlx5e_tx_tunnel_accel(skb, eseg, ihs);
+ #endif
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
+index 6a1d82503ef8f..d06532d0baa43 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
+@@ -57,6 +57,20 @@ struct mlx5e_ktls_offload_context_rx {
+ 	struct mlx5e_ktls_rx_resync_ctx resync;
+ };
+ 
++static bool mlx5e_ktls_priv_rx_put(struct mlx5e_ktls_offload_context_rx *priv_rx)
++{
++	if (!refcount_dec_and_test(&priv_rx->resync.refcnt))
++		return false;
++
++	kfree(priv_rx);
++	return true;
++}
++
++static void mlx5e_ktls_priv_rx_get(struct mlx5e_ktls_offload_context_rx *priv_rx)
++{
++	refcount_inc(&priv_rx->resync.refcnt);
++}
++
+ static int mlx5e_ktls_create_tir(struct mlx5_core_dev *mdev, u32 *tirn, u32 rqtn)
+ {
+ 	int err, inlen;
+@@ -326,7 +340,7 @@ static void resync_handle_work(struct work_struct *work)
+ 	priv_rx = container_of(resync, struct mlx5e_ktls_offload_context_rx, resync);
+ 
+ 	if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags))) {
+-		refcount_dec(&resync->refcnt);
++		mlx5e_ktls_priv_rx_put(priv_rx);
+ 		return;
+ 	}
+ 
+@@ -334,7 +348,7 @@ static void resync_handle_work(struct work_struct *work)
+ 	sq = &c->async_icosq;
+ 
+ 	if (resync_post_get_progress_params(sq, priv_rx))
+-		refcount_dec(&resync->refcnt);
++		mlx5e_ktls_priv_rx_put(priv_rx);
+ }
+ 
+ static void resync_init(struct mlx5e_ktls_rx_resync_ctx *resync,
+@@ -377,7 +391,11 @@ unlock:
+ 	return err;
+ }
+ 
+-/* Function is called with elevated refcount, it decreases it. */
++/* Function can be called with the refcount being either elevated or not.
++ * It decreases the refcount and may free the kTLS priv context.
++ * Refcount is not elevated only if tls_dev_del has been called, but GET_PSV was
++ * already in flight.
++ */
+ void mlx5e_ktls_handle_get_psv_completion(struct mlx5e_icosq_wqe_info *wi,
+ 					  struct mlx5e_icosq *sq)
+ {
+@@ -410,7 +428,7 @@ void mlx5e_ktls_handle_get_psv_completion(struct mlx5e_icosq_wqe_info *wi,
+ 	tls_offload_rx_resync_async_request_end(priv_rx->sk, cpu_to_be32(hw_seq));
+ 	priv_rx->stats->tls_resync_req_end++;
+ out:
+-	refcount_dec(&resync->refcnt);
++	mlx5e_ktls_priv_rx_put(priv_rx);
+ 	dma_unmap_single(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE);
+ 	kfree(buf);
+ }
+@@ -431,9 +449,9 @@ static bool resync_queue_get_psv(struct sock *sk)
+ 		return false;
+ 
+ 	resync = &priv_rx->resync;
+-	refcount_inc(&resync->refcnt);
++	mlx5e_ktls_priv_rx_get(priv_rx);
+ 	if (unlikely(!queue_work(resync->priv->tls->rx_wq, &resync->work)))
+-		refcount_dec(&resync->refcnt);
++		mlx5e_ktls_priv_rx_put(priv_rx);
+ 
+ 	return true;
+ }
+@@ -625,31 +643,6 @@ err_create_key:
+ 	return err;
+ }
+ 
+-/* Elevated refcount on the resync object means there are
+- * outstanding operations (uncompleted GET_PSV WQEs) that
+- * will read the resync / priv_rx objects once completed.
+- * Wait for them to avoid use-after-free.
+- */
+-static void wait_for_resync(struct net_device *netdev,
+-			    struct mlx5e_ktls_rx_resync_ctx *resync)
+-{
+-#define MLX5E_KTLS_RX_RESYNC_TIMEOUT 20000 /* msecs */
+-	unsigned long exp_time = jiffies + msecs_to_jiffies(MLX5E_KTLS_RX_RESYNC_TIMEOUT);
+-	unsigned int refcnt;
+-
+-	do {
+-		refcnt = refcount_read(&resync->refcnt);
+-		if (refcnt == 1)
+-			return;
+-
+-		msleep(20);
+-	} while (time_before(jiffies, exp_time));
+-
+-	netdev_warn(netdev,
+-		    "Failed waiting for kTLS RX resync refcnt to be released (%u).\n",
+-		    refcnt);
+-}
+-
+ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx)
+ {
+ 	struct mlx5e_ktls_offload_context_rx *priv_rx;
+@@ -663,7 +656,7 @@ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx)
+ 	priv_rx = mlx5e_get_ktls_rx_priv_ctx(tls_ctx);
+ 	set_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags);
+ 	mlx5e_set_ktls_rx_priv_ctx(tls_ctx, NULL);
+-	synchronize_rcu(); /* Sync with NAPI */
++	synchronize_net(); /* Sync with NAPI */
+ 	if (!cancel_work_sync(&priv_rx->rule.work))
+ 		/* completion is needed, as the priv_rx in the add flow
+ 		 * is maintained on the wqe info (wi), not on the socket.
+@@ -671,8 +664,7 @@ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx)
+ 		wait_for_completion(&priv_rx->add_ctx);
+ 	resync = &priv_rx->resync;
+ 	if (cancel_work_sync(&resync->work))
+-		refcount_dec(&resync->refcnt);
+-	wait_for_resync(netdev, resync);
++		mlx5e_ktls_priv_rx_put(priv_rx);
+ 
+ 	priv_rx->stats->tls_del++;
+ 	if (priv_rx->rule.rule)
+@@ -680,5 +672,9 @@ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx)
+ 
+ 	mlx5_core_destroy_tir(mdev, priv_rx->tirn);
+ 	mlx5_ktls_destroy_key(mdev, priv_rx->key_id);
+-	kfree(priv_rx);
++	/* priv_rx should normally be freed here, but if there is an outstanding
++	 * GET_PSV, deallocation will be delayed until the CQE for GET_PSV is
++	 * processed.
++	 */
++	mlx5e_ktls_priv_rx_put(priv_rx);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index e596f050c4316..b8622440243b4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -522,7 +522,7 @@ static int mlx5e_get_coalesce(struct net_device *netdev,
+ #define MLX5E_MAX_COAL_FRAMES		MLX5_MAX_CQ_COUNT
+ 
+ static void
+-mlx5e_set_priv_channels_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesce *coal)
++mlx5e_set_priv_channels_tx_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesce *coal)
+ {
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 	int tc;
+@@ -537,6 +537,17 @@ mlx5e_set_priv_channels_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesc
+ 						coal->tx_coalesce_usecs,
+ 						coal->tx_max_coalesced_frames);
+ 		}
++	}
++}
++
++static void
++mlx5e_set_priv_channels_rx_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesce *coal)
++{
++	struct mlx5_core_dev *mdev = priv->mdev;
++	int i;
++
++	for (i = 0; i < priv->channels.num; ++i) {
++		struct mlx5e_channel *c = priv->channels.c[i];
+ 
+ 		mlx5_core_modify_cq_moderation(mdev, &c->rq.cq.mcq,
+ 					       coal->rx_coalesce_usecs,
+@@ -583,21 +594,9 @@ int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv,
+ 	tx_moder->pkts    = coal->tx_max_coalesced_frames;
+ 	new_channels.params.tx_dim_enabled = !!coal->use_adaptive_tx_coalesce;
+ 
+-	if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
+-		priv->channels.params = new_channels.params;
+-		goto out;
+-	}
+-	/* we are opened */
+-
+ 	reset_rx = !!coal->use_adaptive_rx_coalesce != priv->channels.params.rx_dim_enabled;
+ 	reset_tx = !!coal->use_adaptive_tx_coalesce != priv->channels.params.tx_dim_enabled;
+ 
+-	if (!reset_rx && !reset_tx) {
+-		mlx5e_set_priv_channels_coalesce(priv, coal);
+-		priv->channels.params = new_channels.params;
+-		goto out;
+-	}
+-
+ 	if (reset_rx) {
+ 		u8 mode = MLX5E_GET_PFLAG(&new_channels.params,
+ 					  MLX5E_PFLAG_RX_CQE_BASED_MODER);
+@@ -611,6 +610,20 @@ int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv,
+ 		mlx5e_reset_tx_moderation(&new_channels.params, mode);
+ 	}
+ 
++	if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
++		priv->channels.params = new_channels.params;
++		goto out;
++	}
++
++	if (!reset_rx && !reset_tx) {
++		if (!coal->use_adaptive_rx_coalesce)
++			mlx5e_set_priv_channels_rx_coalesce(priv, coal);
++		if (!coal->use_adaptive_tx_coalesce)
++			mlx5e_set_priv_channels_tx_coalesce(priv, coal);
++		priv->channels.params = new_channels.params;
++		goto out;
++	}
++
+ 	err = mlx5e_safe_switch_channels(priv, &new_channels, NULL, NULL);
+ 
+ out:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 42848db8f8dd6..6394f9d8c6851 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -919,7 +919,7 @@ void mlx5e_activate_rq(struct mlx5e_rq *rq)
+ void mlx5e_deactivate_rq(struct mlx5e_rq *rq)
+ {
+ 	clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state);
+-	synchronize_rcu(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */
++	synchronize_net(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */
+ }
+ 
+ void mlx5e_close_rq(struct mlx5e_rq *rq)
+@@ -1380,7 +1380,7 @@ static void mlx5e_deactivate_txqsq(struct mlx5e_txqsq *sq)
+ 	struct mlx5_wq_cyc *wq = &sq->wq;
+ 
+ 	clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
+-	synchronize_rcu(); /* Sync with NAPI to prevent netif_tx_wake_queue. */
++	synchronize_net(); /* Sync with NAPI to prevent netif_tx_wake_queue. */
+ 
+ 	mlx5e_tx_disable_queue(sq->txq);
+ 
+@@ -1456,7 +1456,7 @@ void mlx5e_activate_icosq(struct mlx5e_icosq *icosq)
+ void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq)
+ {
+ 	clear_bit(MLX5E_SQ_STATE_ENABLED, &icosq->state);
+-	synchronize_rcu(); /* Sync with NAPI. */
++	synchronize_net(); /* Sync with NAPI. */
+ }
+ 
+ void mlx5e_close_icosq(struct mlx5e_icosq *sq)
+@@ -1535,7 +1535,7 @@ void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq)
+ 	struct mlx5e_channel *c = sq->channel;
+ 
+ 	clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
+-	synchronize_rcu(); /* Sync with NAPI. */
++	synchronize_net(); /* Sync with NAPI. */
+ 
+ 	mlx5e_destroy_sq(c->mdev, sq->sqn);
+ 	mlx5e_free_xdpsq_descs(sq);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index 54523bed16cd3..0c32c485eb588 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -190,6 +190,16 @@ static bool reset_fw_if_needed(struct mlx5_core_dev *dev)
+ 	return true;
+ }
+ 
++static void enter_error_state(struct mlx5_core_dev *dev, bool force)
++{
++	if (mlx5_health_check_fatal_sensors(dev) || force) { /* protected state setting */
++		dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
++		mlx5_cmd_flush(dev);
++	}
++
++	mlx5_notifier_call_chain(dev->priv.events, MLX5_DEV_EVENT_SYS_ERROR, (void *)1);
++}
++
+ void mlx5_enter_error_state(struct mlx5_core_dev *dev, bool force)
+ {
+ 	bool err_detected = false;
+@@ -208,12 +218,7 @@ void mlx5_enter_error_state(struct mlx5_core_dev *dev, bool force)
+ 		goto unlock;
+ 	}
+ 
+-	if (mlx5_health_check_fatal_sensors(dev) || force) { /* protected state setting */
+-		dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
+-		mlx5_cmd_flush(dev);
+-	}
+-
+-	mlx5_notifier_call_chain(dev->priv.events, MLX5_DEV_EVENT_SYS_ERROR, (void *)1);
++	enter_error_state(dev, force);
+ unlock:
+ 	mutex_unlock(&dev->intf_state_mutex);
+ }
+@@ -613,7 +618,7 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work)
+ 	priv = container_of(health, struct mlx5_priv, health);
+ 	dev = container_of(priv, struct mlx5_core_dev, priv);
+ 
+-	mlx5_enter_error_state(dev, false);
++	enter_error_state(dev, false);
+ 	if (IS_ERR_OR_NULL(health->fw_fatal_reporter)) {
+ 		if (mlx5_health_try_recover(dev))
+ 			mlx5_core_err(dev, "health recovery failed\n");
+@@ -707,8 +712,9 @@ static void poll_health(struct timer_list *t)
+ 		mlx5_core_err(dev, "Fatal error %u detected\n", fatal_error);
+ 		dev->priv.health.fatal_error = fatal_error;
+ 		print_health_info(dev);
++		dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
+ 		mlx5_trigger_health_work(dev);
+-		goto out;
++		return;
+ 	}
+ 
+ 	count = ioread32be(health->health_counter);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index e455a2f31f070..8246b6285d5a4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1380,7 +1380,8 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		dev_err(&pdev->dev, "mlx5_crdump_enable failed with error code %d\n", err);
+ 
+ 	pci_save_state(pdev);
+-	devlink_reload_enable(devlink);
++	if (!mlx5_core_is_mp_slave(dev))
++		devlink_reload_enable(devlink);
+ 	return 0;
+ 
+ err_load_one:
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 75f774347f6d1..cfcc3ac613189 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -2351,14 +2351,14 @@ static void r8168dp_hw_jumbo_disable(struct rtl8169_private *tp)
+ 
+ static void r8168e_hw_jumbo_enable(struct rtl8169_private *tp)
+ {
+-	RTL_W8(tp, MaxTxPacketSize, 0x3f);
++	RTL_W8(tp, MaxTxPacketSize, 0x24);
+ 	RTL_W8(tp, Config3, RTL_R8(tp, Config3) | Jumbo_En0);
+ 	RTL_W8(tp, Config4, RTL_R8(tp, Config4) | 0x01);
+ }
+ 
+ static void r8168e_hw_jumbo_disable(struct rtl8169_private *tp)
+ {
+-	RTL_W8(tp, MaxTxPacketSize, 0x0c);
++	RTL_W8(tp, MaxTxPacketSize, 0x3f);
+ 	RTL_W8(tp, Config3, RTL_R8(tp, Config3) & ~Jumbo_En0);
+ 	RTL_W8(tp, Config4, RTL_R8(tp, Config4) & ~0x01);
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+index 9ddadae8e4c51..752658ec7beeb 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+@@ -301,7 +301,7 @@ static int meson8b_init_prg_eth(struct meson8b_dwmac *dwmac)
+ 		return -EINVAL;
+ 	};
+ 
+-	if (rx_dly_config & PRG_ETH0_ADJ_ENABLE) {
++	if (delay_config & PRG_ETH0_ADJ_ENABLE) {
+ 		if (!dwmac->timing_adj_clk) {
+ 			dev_err(dwmac->dev,
+ 				"The timing-adjustment clock is mandatory for the RX delay re-timing\n");
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+index 6088071cb1923..40dc14d1415f3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+@@ -322,6 +322,32 @@ static int tc_setup_cbs(struct stmmac_priv *priv,
+ 	if (!priv->dma_cap.av)
+ 		return -EOPNOTSUPP;
+ 
++	/* Port Transmit Rate and Speed Divider */
++	switch (priv->speed) {
++	case SPEED_10000:
++		ptr = 32;
++		speed_div = 10000000;
++		break;
++	case SPEED_5000:
++		ptr = 32;
++		speed_div = 5000000;
++		break;
++	case SPEED_2500:
++		ptr = 8;
++		speed_div = 2500000;
++		break;
++	case SPEED_1000:
++		ptr = 8;
++		speed_div = 1000000;
++		break;
++	case SPEED_100:
++		ptr = 4;
++		speed_div = 100000;
++		break;
++	default:
++		return -EOPNOTSUPP;
++	}
++
+ 	mode_to_use = priv->plat->tx_queues_cfg[queue].mode_to_use;
+ 	if (mode_to_use == MTL_QUEUE_DCB && qopt->enable) {
+ 		ret = stmmac_dma_qmode(priv, priv->ioaddr, queue, MTL_QUEUE_AVB);
+@@ -338,10 +364,6 @@ static int tc_setup_cbs(struct stmmac_priv *priv,
+ 		priv->plat->tx_queues_cfg[queue].mode_to_use = MTL_QUEUE_DCB;
+ 	}
+ 
+-	/* Port Transmit Rate and Speed Divider */
+-	ptr = (priv->speed == SPEED_100) ? 4 : 8;
+-	speed_div = (priv->speed == SPEED_100) ? 100000 : 1000000;
+-
+ 	/* Final adjustments for HW */
+ 	value = div_s64(qopt->idleslope * 1024ll * ptr, speed_div);
+ 	priv->plat->tx_queues_cfg[queue].idle_slope = value & GENMASK(31, 0);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 9aafd3ecdaa4d..eea0bb7c23ede 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -1805,6 +1805,18 @@ static int axienet_probe(struct platform_device *pdev)
+ 	lp->options = XAE_OPTION_DEFAULTS;
+ 	lp->rx_bd_num = RX_BD_NUM_DEFAULT;
+ 	lp->tx_bd_num = TX_BD_NUM_DEFAULT;
++
++	lp->clk = devm_clk_get_optional(&pdev->dev, NULL);
++	if (IS_ERR(lp->clk)) {
++		ret = PTR_ERR(lp->clk);
++		goto free_netdev;
++	}
++	ret = clk_prepare_enable(lp->clk);
++	if (ret) {
++		dev_err(&pdev->dev, "Unable to enable clock: %d\n", ret);
++		goto free_netdev;
++	}
++
+ 	/* Map device registers */
+ 	ethres = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	lp->regs = devm_ioremap_resource(&pdev->dev, ethres);
+@@ -1980,20 +1992,6 @@ static int axienet_probe(struct platform_device *pdev)
+ 
+ 	lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+ 	if (lp->phy_node) {
+-		lp->clk = devm_clk_get(&pdev->dev, NULL);
+-		if (IS_ERR(lp->clk)) {
+-			dev_warn(&pdev->dev, "Failed to get clock: %ld\n",
+-				 PTR_ERR(lp->clk));
+-			lp->clk = NULL;
+-		} else {
+-			ret = clk_prepare_enable(lp->clk);
+-			if (ret) {
+-				dev_err(&pdev->dev, "Unable to enable clock: %d\n",
+-					ret);
+-				goto free_netdev;
+-			}
+-		}
+-
+ 		ret = axienet_mdio_setup(lp);
+ 		if (ret)
+ 			dev_warn(&pdev->dev,
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index dc668ed280b9f..1c46bc4d27058 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -539,7 +539,6 @@ static int gtp_build_skb_ip4(struct sk_buff *skb, struct net_device *dev,
+ 	if (!skb_is_gso(skb) && (iph->frag_off & htons(IP_DF)) &&
+ 	    mtu < ntohs(iph->tot_len)) {
+ 		netdev_dbg(dev, "packet too big, fragmentation needed\n");
+-		memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+ 		icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+ 			      htonl(mtu));
+ 		goto err_rt;
+diff --git a/drivers/net/phy/mscc/mscc.h b/drivers/net/phy/mscc/mscc.h
+index 9481bce94c2ed..c2023f93c0b24 100644
+--- a/drivers/net/phy/mscc/mscc.h
++++ b/drivers/net/phy/mscc/mscc.h
+@@ -102,6 +102,7 @@ enum rgmii_clock_delay {
+ #define PHY_MCB_S6G_READ		  BIT(30)
+ 
+ #define PHY_S6G_PLL5G_CFG0		  0x06
++#define PHY_S6G_PLL5G_CFG2		  0x08
+ #define PHY_S6G_LCPLL_CFG		  0x11
+ #define PHY_S6G_PLL_CFG			  0x2b
+ #define PHY_S6G_COMMON_CFG		  0x2c
+@@ -121,6 +122,9 @@ enum rgmii_clock_delay {
+ #define PHY_S6G_PLL_FSM_CTRL_DATA_POS	  8
+ #define PHY_S6G_PLL_FSM_ENA_POS		  7
+ 
++#define PHY_S6G_CFG2_FSM_DIS              1
++#define PHY_S6G_CFG2_FSM_CLK_BP          23
++
+ #define MSCC_EXT_PAGE_ACCESS		  31
+ #define MSCC_PHY_PAGE_STANDARD		  0x0000 /* Standard registers */
+ #define MSCC_PHY_PAGE_EXTENDED		  0x0001 /* Extended registers */
+@@ -412,6 +416,10 @@ struct vsc8531_edge_rate_table {
+ };
+ #endif /* CONFIG_OF_MDIO */
+ 
++enum csr_target {
++	MACRO_CTRL  = 0x07,
++};
++
+ #if IS_ENABLED(CONFIG_MACSEC)
+ int vsc8584_macsec_init(struct phy_device *phydev);
+ void vsc8584_handle_macsec_interrupt(struct phy_device *phydev);
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index 6bc7406a1ce73..41a410124437d 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -710,6 +710,113 @@ static int phy_base_read(struct phy_device *phydev, u32 regnum)
+ 	return __phy_package_read(phydev, regnum);
+ }
+ 
++static u32 vsc85xx_csr_read(struct phy_device *phydev,
++			    enum csr_target target, u32 reg)
++{
++	unsigned long deadline;
++	u32 val, val_l, val_h;
++
++	phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, MSCC_PHY_PAGE_CSR_CNTL);
++
++	/* CSR registers are grouped under different Target IDs.
++	 * 6-bit Target_ID is split between MSCC_EXT_PAGE_CSR_CNTL_20 and
++	 * MSCC_EXT_PAGE_CSR_CNTL_19 registers.
++	 * Target_ID[5:2] maps to bits[3:0] of MSCC_EXT_PAGE_CSR_CNTL_20
++	 * and Target_ID[1:0] maps to bits[13:12] of MSCC_EXT_PAGE_CSR_CNTL_19.
++	 */
++
++	/* Setup the Target ID */
++	phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_20,
++		       MSCC_PHY_CSR_CNTL_20_TARGET(target >> 2));
++
++	if ((target >> 2 == 0x1) || (target >> 2 == 0x3))
++		/* non-MACsec access */
++		target &= 0x3;
++	else
++		target = 0;
++
++	/* Trigger CSR Action - Read into the CSR's */
++	phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_19,
++		       MSCC_PHY_CSR_CNTL_19_CMD | MSCC_PHY_CSR_CNTL_19_READ |
++		       MSCC_PHY_CSR_CNTL_19_REG_ADDR(reg) |
++		       MSCC_PHY_CSR_CNTL_19_TARGET(target));
++
++	/* Wait for register access*/
++	deadline = jiffies + msecs_to_jiffies(PROC_CMD_NCOMPLETED_TIMEOUT_MS);
++	do {
++		usleep_range(500, 1000);
++		val = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_19);
++	} while (time_before(jiffies, deadline) &&
++		!(val & MSCC_PHY_CSR_CNTL_19_CMD));
++
++	if (!(val & MSCC_PHY_CSR_CNTL_19_CMD))
++		return 0xffffffff;
++
++	/* Read the Least Significant Word (LSW) (17) */
++	val_l = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_17);
++
++	/* Read the Most Significant Word (MSW) (18) */
++	val_h = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_18);
++
++	phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS,
++		       MSCC_PHY_PAGE_STANDARD);
++
++	return (val_h << 16) | val_l;
++}
++
++static int vsc85xx_csr_write(struct phy_device *phydev,
++			     enum csr_target target, u32 reg, u32 val)
++{
++	unsigned long deadline;
++
++	phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, MSCC_PHY_PAGE_CSR_CNTL);
++
++	/* CSR registers are grouped under different Target IDs.
++	 * 6-bit Target_ID is split between MSCC_EXT_PAGE_CSR_CNTL_20 and
++	 * MSCC_EXT_PAGE_CSR_CNTL_19 registers.
++	 * Target_ID[5:2] maps to bits[3:0] of MSCC_EXT_PAGE_CSR_CNTL_20
++	 * and Target_ID[1:0] maps to bits[13:12] of MSCC_EXT_PAGE_CSR_CNTL_19.
++	 */
++
++	/* Setup the Target ID */
++	phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_20,
++		       MSCC_PHY_CSR_CNTL_20_TARGET(target >> 2));
++
++	/* Write the Least Significant Word (LSW) (17) */
++	phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_17, (u16)val);
++
++	/* Write the Most Significant Word (MSW) (18) */
++	phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_18, (u16)(val >> 16));
++
++	if ((target >> 2 == 0x1) || (target >> 2 == 0x3))
++		/* non-MACsec access */
++		target &= 0x3;
++	else
++		target = 0;
++
++	/* Trigger CSR Action - Write into the CSR's */
++	phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_19,
++		       MSCC_PHY_CSR_CNTL_19_CMD |
++		       MSCC_PHY_CSR_CNTL_19_REG_ADDR(reg) |
++		       MSCC_PHY_CSR_CNTL_19_TARGET(target));
++
++	/* Wait for register access */
++	deadline = jiffies + msecs_to_jiffies(PROC_CMD_NCOMPLETED_TIMEOUT_MS);
++	do {
++		usleep_range(500, 1000);
++		val = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_19);
++	} while (time_before(jiffies, deadline) &&
++		 !(val & MSCC_PHY_CSR_CNTL_19_CMD));
++
++	if (!(val & MSCC_PHY_CSR_CNTL_19_CMD))
++		return -ETIMEDOUT;
++
++	phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS,
++		       MSCC_PHY_PAGE_STANDARD);
++
++	return 0;
++}
++
+ /* bus->mdio_lock should be locked when using this function */
+ static void vsc8584_csr_write(struct phy_device *phydev, u16 addr, u32 val)
+ {
+@@ -1131,6 +1238,92 @@ out:
+ 	return ret;
+ }
+ 
++/* Access LCPLL Cfg_2 */
++static void vsc8584_pll5g_cfg2_wr(struct phy_device *phydev,
++				  bool disable_fsm)
++{
++	u32 rd_dat;
++
++	rd_dat = vsc85xx_csr_read(phydev, MACRO_CTRL, PHY_S6G_PLL5G_CFG2);
++	rd_dat &= ~BIT(PHY_S6G_CFG2_FSM_DIS);
++	rd_dat |= (disable_fsm << PHY_S6G_CFG2_FSM_DIS);
++	vsc85xx_csr_write(phydev, MACRO_CTRL, PHY_S6G_PLL5G_CFG2, rd_dat);
++}
++
++/* trigger a read to the spcified MCB */
++static int vsc8584_mcb_rd_trig(struct phy_device *phydev,
++			       u32 mcb_reg_addr, u8 mcb_slave_num)
++{
++	u32 rd_dat = 0;
++
++	/* read MCB */
++	vsc85xx_csr_write(phydev, MACRO_CTRL, mcb_reg_addr,
++			  (0x40000000 | (1L << mcb_slave_num)));
++
++	return read_poll_timeout(vsc85xx_csr_read, rd_dat,
++				 !(rd_dat & 0x40000000),
++				 4000, 200000, 0,
++				 phydev, MACRO_CTRL, mcb_reg_addr);
++}
++
++/* trigger a write to the spcified MCB */
++static int vsc8584_mcb_wr_trig(struct phy_device *phydev,
++			       u32 mcb_reg_addr,
++			       u8 mcb_slave_num)
++{
++	u32 rd_dat = 0;
++
++	/* write back MCB */
++	vsc85xx_csr_write(phydev, MACRO_CTRL, mcb_reg_addr,
++			  (0x80000000 | (1L << mcb_slave_num)));
++
++	return read_poll_timeout(vsc85xx_csr_read, rd_dat,
++				 !(rd_dat & 0x80000000),
++				 4000, 200000, 0,
++				 phydev, MACRO_CTRL, mcb_reg_addr);
++}
++
++/* Sequence to Reset LCPLL for the VIPER and ELISE PHY */
++static int vsc8584_pll5g_reset(struct phy_device *phydev)
++{
++	bool dis_fsm;
++	int ret = 0;
++
++	ret = vsc8584_mcb_rd_trig(phydev, 0x11, 0);
++	if (ret < 0)
++		goto done;
++	dis_fsm = 1;
++
++	/* Reset LCPLL */
++	vsc8584_pll5g_cfg2_wr(phydev, dis_fsm);
++
++	/* write back LCPLL MCB */
++	ret = vsc8584_mcb_wr_trig(phydev, 0x11, 0);
++	if (ret < 0)
++		goto done;
++
++	/* 10 mSec sleep while LCPLL is hold in reset */
++	usleep_range(10000, 20000);
++
++	/* read LCPLL MCB into CSRs */
++	ret = vsc8584_mcb_rd_trig(phydev, 0x11, 0);
++	if (ret < 0)
++		goto done;
++	dis_fsm = 0;
++
++	/* Release the Reset of LCPLL */
++	vsc8584_pll5g_cfg2_wr(phydev, dis_fsm);
++
++	/* write back LCPLL MCB */
++	ret = vsc8584_mcb_wr_trig(phydev, 0x11, 0);
++	if (ret < 0)
++		goto done;
++
++	usleep_range(110000, 200000);
++done:
++	return ret;
++}
++
+ /* bus->mdio_lock should be locked when using this function */
+ static int vsc8584_config_pre_init(struct phy_device *phydev)
+ {
+@@ -1579,8 +1772,16 @@ static int vsc8514_config_pre_init(struct phy_device *phydev)
+ 		{0x16b2, 0x00007000},
+ 		{0x16b4, 0x00000814},
+ 	};
++	struct device *dev = &phydev->mdio.dev;
+ 	unsigned int i;
+ 	u16 reg;
++	int ret;
++
++	ret = vsc8584_pll5g_reset(phydev);
++	if (ret < 0) {
++		dev_err(dev, "failed LCPLL reset, ret: %d\n", ret);
++		return ret;
++	}
+ 
+ 	phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, MSCC_PHY_PAGE_STANDARD);
+ 
+@@ -1615,101 +1816,6 @@ static int vsc8514_config_pre_init(struct phy_device *phydev)
+ 	return 0;
+ }
+ 
+-static u32 vsc85xx_csr_ctrl_phy_read(struct phy_device *phydev,
+-				     u32 target, u32 reg)
+-{
+-	unsigned long deadline;
+-	u32 val, val_l, val_h;
+-
+-	phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, MSCC_PHY_PAGE_CSR_CNTL);
+-
+-	/* CSR registers are grouped under different Target IDs.
+-	 * 6-bit Target_ID is split between MSCC_EXT_PAGE_CSR_CNTL_20 and
+-	 * MSCC_EXT_PAGE_CSR_CNTL_19 registers.
+-	 * Target_ID[5:2] maps to bits[3:0] of MSCC_EXT_PAGE_CSR_CNTL_20
+-	 * and Target_ID[1:0] maps to bits[13:12] of MSCC_EXT_PAGE_CSR_CNTL_19.
+-	 */
+-
+-	/* Setup the Target ID */
+-	phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_20,
+-		       MSCC_PHY_CSR_CNTL_20_TARGET(target >> 2));
+-
+-	/* Trigger CSR Action - Read into the CSR's */
+-	phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_19,
+-		       MSCC_PHY_CSR_CNTL_19_CMD | MSCC_PHY_CSR_CNTL_19_READ |
+-		       MSCC_PHY_CSR_CNTL_19_REG_ADDR(reg) |
+-		       MSCC_PHY_CSR_CNTL_19_TARGET(target & 0x3));
+-
+-	/* Wait for register access*/
+-	deadline = jiffies + msecs_to_jiffies(PROC_CMD_NCOMPLETED_TIMEOUT_MS);
+-	do {
+-		usleep_range(500, 1000);
+-		val = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_19);
+-	} while (time_before(jiffies, deadline) &&
+-		!(val & MSCC_PHY_CSR_CNTL_19_CMD));
+-
+-	if (!(val & MSCC_PHY_CSR_CNTL_19_CMD))
+-		return 0xffffffff;
+-
+-	/* Read the Least Significant Word (LSW) (17) */
+-	val_l = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_17);
+-
+-	/* Read the Most Significant Word (MSW) (18) */
+-	val_h = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_18);
+-
+-	phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS,
+-		       MSCC_PHY_PAGE_STANDARD);
+-
+-	return (val_h << 16) | val_l;
+-}
+-
+-static int vsc85xx_csr_ctrl_phy_write(struct phy_device *phydev,
+-				      u32 target, u32 reg, u32 val)
+-{
+-	unsigned long deadline;
+-
+-	phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, MSCC_PHY_PAGE_CSR_CNTL);
+-
+-	/* CSR registers are grouped under different Target IDs.
+-	 * 6-bit Target_ID is split between MSCC_EXT_PAGE_CSR_CNTL_20 and
+-	 * MSCC_EXT_PAGE_CSR_CNTL_19 registers.
+-	 * Target_ID[5:2] maps to bits[3:0] of MSCC_EXT_PAGE_CSR_CNTL_20
+-	 * and Target_ID[1:0] maps to bits[13:12] of MSCC_EXT_PAGE_CSR_CNTL_19.
+-	 */
+-
+-	/* Setup the Target ID */
+-	phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_20,
+-		       MSCC_PHY_CSR_CNTL_20_TARGET(target >> 2));
+-
+-	/* Write the Least Significant Word (LSW) (17) */
+-	phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_17, (u16)val);
+-
+-	/* Write the Most Significant Word (MSW) (18) */
+-	phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_18, (u16)(val >> 16));
+-
+-	/* Trigger CSR Action - Write into the CSR's */
+-	phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_19,
+-		       MSCC_PHY_CSR_CNTL_19_CMD |
+-		       MSCC_PHY_CSR_CNTL_19_REG_ADDR(reg) |
+-		       MSCC_PHY_CSR_CNTL_19_TARGET(target & 0x3));
+-
+-	/* Wait for register access */
+-	deadline = jiffies + msecs_to_jiffies(PROC_CMD_NCOMPLETED_TIMEOUT_MS);
+-	do {
+-		usleep_range(500, 1000);
+-		val = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_19);
+-	} while (time_before(jiffies, deadline) &&
+-		 !(val & MSCC_PHY_CSR_CNTL_19_CMD));
+-
+-	if (!(val & MSCC_PHY_CSR_CNTL_19_CMD))
+-		return -ETIMEDOUT;
+-
+-	phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS,
+-		       MSCC_PHY_PAGE_STANDARD);
+-
+-	return 0;
+-}
+-
+ static int __phy_write_mcb_s6g(struct phy_device *phydev, u32 reg, u8 mcb,
+ 			       u32 op)
+ {
+@@ -1717,15 +1823,15 @@ static int __phy_write_mcb_s6g(struct phy_device *phydev, u32 reg, u8 mcb,
+ 	u32 val;
+ 	int ret;
+ 
+-	ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET, reg,
+-					 op | (1 << mcb));
++	ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET, reg,
++				op | (1 << mcb));
+ 	if (ret)
+ 		return -EINVAL;
+ 
+ 	deadline = jiffies + msecs_to_jiffies(PROC_CMD_NCOMPLETED_TIMEOUT_MS);
+ 	do {
+ 		usleep_range(500, 1000);
+-		val = vsc85xx_csr_ctrl_phy_read(phydev, PHY_MCB_TARGET, reg);
++		val = vsc85xx_csr_read(phydev, PHY_MCB_TARGET, reg);
+ 
+ 		if (val == 0xffffffff)
+ 			return -EIO;
+@@ -1806,41 +1912,41 @@ static int vsc8514_config_init(struct phy_device *phydev)
+ 	/* lcpll mcb */
+ 	phy_update_mcb_s6g(phydev, PHY_S6G_LCPLL_CFG, 0);
+ 	/* pll5gcfg0 */
+-	ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET,
+-					 PHY_S6G_PLL5G_CFG0, 0x7036f145);
++	ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET,
++				PHY_S6G_PLL5G_CFG0, 0x7036f145);
+ 	if (ret)
+ 		goto err;
+ 
+ 	phy_commit_mcb_s6g(phydev, PHY_S6G_LCPLL_CFG, 0);
+ 	/* pllcfg */
+-	ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET,
+-					 PHY_S6G_PLL_CFG,
+-					 (3 << PHY_S6G_PLL_ENA_OFFS_POS) |
+-					 (120 << PHY_S6G_PLL_FSM_CTRL_DATA_POS)
+-					 | (0 << PHY_S6G_PLL_FSM_ENA_POS));
++	ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET,
++				PHY_S6G_PLL_CFG,
++				(3 << PHY_S6G_PLL_ENA_OFFS_POS) |
++				(120 << PHY_S6G_PLL_FSM_CTRL_DATA_POS)
++				| (0 << PHY_S6G_PLL_FSM_ENA_POS));
+ 	if (ret)
+ 		goto err;
+ 
+ 	/* commoncfg */
+-	ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET,
+-					 PHY_S6G_COMMON_CFG,
+-					 (0 << PHY_S6G_SYS_RST_POS) |
+-					 (0 << PHY_S6G_ENA_LANE_POS) |
+-					 (0 << PHY_S6G_ENA_LOOP_POS) |
+-					 (0 << PHY_S6G_QRATE_POS) |
+-					 (3 << PHY_S6G_IF_MODE_POS));
++	ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET,
++				PHY_S6G_COMMON_CFG,
++				(0 << PHY_S6G_SYS_RST_POS) |
++				(0 << PHY_S6G_ENA_LANE_POS) |
++				(0 << PHY_S6G_ENA_LOOP_POS) |
++				(0 << PHY_S6G_QRATE_POS) |
++				(3 << PHY_S6G_IF_MODE_POS));
+ 	if (ret)
+ 		goto err;
+ 
+ 	/* misccfg */
+-	ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET,
+-					 PHY_S6G_MISC_CFG, 1);
++	ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET,
++				PHY_S6G_MISC_CFG, 1);
+ 	if (ret)
+ 		goto err;
+ 
+ 	/* gpcfg */
+-	ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET,
+-					 PHY_S6G_GPC_CFG, 768);
++	ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET,
++				PHY_S6G_GPC_CFG, 768);
+ 	if (ret)
+ 		goto err;
+ 
+@@ -1851,8 +1957,8 @@ static int vsc8514_config_init(struct phy_device *phydev)
+ 		usleep_range(500, 1000);
+ 		phy_update_mcb_s6g(phydev, PHY_MCB_S6G_CFG,
+ 				   0); /* read 6G MCB into CSRs */
+-		reg = vsc85xx_csr_ctrl_phy_read(phydev, PHY_MCB_TARGET,
+-						PHY_S6G_PLL_STATUS);
++		reg = vsc85xx_csr_read(phydev, PHY_MCB_TARGET,
++				       PHY_S6G_PLL_STATUS);
+ 		if (reg == 0xffffffff) {
+ 			phy_unlock_mdio_bus(phydev);
+ 			return -EIO;
+@@ -1866,8 +1972,8 @@ static int vsc8514_config_init(struct phy_device *phydev)
+ 	}
+ 
+ 	/* misccfg */
+-	ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET,
+-					 PHY_S6G_MISC_CFG, 0);
++	ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET,
++				PHY_S6G_MISC_CFG, 0);
+ 	if (ret)
+ 		goto err;
+ 
+@@ -1878,8 +1984,8 @@ static int vsc8514_config_init(struct phy_device *phydev)
+ 		usleep_range(500, 1000);
+ 		phy_update_mcb_s6g(phydev, PHY_MCB_S6G_CFG,
+ 				   0); /* read 6G MCB into CSRs */
+-		reg = vsc85xx_csr_ctrl_phy_read(phydev, PHY_MCB_TARGET,
+-						PHY_S6G_IB_STATUS0);
++		reg = vsc85xx_csr_read(phydev, PHY_MCB_TARGET,
++				       PHY_S6G_IB_STATUS0);
+ 		if (reg == 0xffffffff) {
+ 			phy_unlock_mdio_bus(phydev);
+ 			return -EIO;
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 5dab6be6fc383..dd1f711140c3d 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -300,50 +300,22 @@ static int mdio_bus_phy_resume(struct device *dev)
+ 
+ 	phydev->suspended_by_mdio_bus = 0;
+ 
+-	ret = phy_resume(phydev);
++	ret = phy_init_hw(phydev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-no_resume:
+-	if (phydev->attached_dev && phydev->adjust_link)
+-		phy_start_machine(phydev);
+-
+-	return 0;
+-}
+-
+-static int mdio_bus_phy_restore(struct device *dev)
+-{
+-	struct phy_device *phydev = to_phy_device(dev);
+-	struct net_device *netdev = phydev->attached_dev;
+-	int ret;
+-
+-	if (!netdev)
+-		return 0;
+-
+-	ret = phy_init_hw(phydev);
++	ret = phy_resume(phydev);
+ 	if (ret < 0)
+ 		return ret;
+-
++no_resume:
+ 	if (phydev->attached_dev && phydev->adjust_link)
+ 		phy_start_machine(phydev);
+ 
+ 	return 0;
+ }
+ 
+-static const struct dev_pm_ops mdio_bus_phy_pm_ops = {
+-	.suspend = mdio_bus_phy_suspend,
+-	.resume = mdio_bus_phy_resume,
+-	.freeze = mdio_bus_phy_suspend,
+-	.thaw = mdio_bus_phy_resume,
+-	.restore = mdio_bus_phy_restore,
+-};
+-
+-#define MDIO_BUS_PHY_PM_OPS (&mdio_bus_phy_pm_ops)
+-
+-#else
+-
+-#define MDIO_BUS_PHY_PM_OPS NULL
+-
++static SIMPLE_DEV_PM_OPS(mdio_bus_phy_pm_ops, mdio_bus_phy_suspend,
++			 mdio_bus_phy_resume);
+ #endif /* CONFIG_PM */
+ 
+ /**
+@@ -554,7 +526,7 @@ static const struct device_type mdio_bus_phy_type = {
+ 	.name = "PHY",
+ 	.groups = phy_dev_groups,
+ 	.release = phy_device_release,
+-	.pm = MDIO_BUS_PHY_PM_OPS,
++	.pm = pm_ptr(&mdio_bus_phy_pm_ops),
+ };
+ 
+ static int phy_request_driver_module(struct phy_device *dev, u32 phy_id)
+@@ -1143,10 +1115,19 @@ int phy_init_hw(struct phy_device *phydev)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (phydev->drv->config_init)
++	if (phydev->drv->config_init) {
+ 		ret = phydev->drv->config_init(phydev);
++		if (ret < 0)
++			return ret;
++	}
+ 
+-	return ret;
++	if (phydev->drv->config_intr) {
++		ret = phydev->drv->config_intr(phydev);
++		if (ret < 0)
++			return ret;
++	}
++
++	return 0;
+ }
+ EXPORT_SYMBOL(phy_init_hw);
+ 
+diff --git a/drivers/net/ppp/ppp_async.c b/drivers/net/ppp/ppp_async.c
+index 29a0917a81e60..f14a9d190de91 100644
+--- a/drivers/net/ppp/ppp_async.c
++++ b/drivers/net/ppp/ppp_async.c
+@@ -259,7 +259,8 @@ static int ppp_asynctty_hangup(struct tty_struct *tty)
+  */
+ static ssize_t
+ ppp_asynctty_read(struct tty_struct *tty, struct file *file,
+-		  unsigned char __user *buf, size_t count)
++		  unsigned char *buf, size_t count,
++		  void **cookie, unsigned long offset)
+ {
+ 	return -EAGAIN;
+ }
+diff --git a/drivers/net/ppp/ppp_synctty.c b/drivers/net/ppp/ppp_synctty.c
+index 0f338752c38b9..f774b7e52da44 100644
+--- a/drivers/net/ppp/ppp_synctty.c
++++ b/drivers/net/ppp/ppp_synctty.c
+@@ -257,7 +257,8 @@ static int ppp_sync_hangup(struct tty_struct *tty)
+  */
+ static ssize_t
+ ppp_sync_read(struct tty_struct *tty, struct file *file,
+-	       unsigned char __user *buf, size_t count)
++	      unsigned char *buf, size_t count,
++	      void **cookie, unsigned long offset)
+ {
+ 	return -EAGAIN;
+ }
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 977f77e2c2ce6..50cb8f045a1e5 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -4721,7 +4721,6 @@ static void vxlan_destroy_tunnels(struct net *net, struct list_head *head)
+ 	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
+ 	struct vxlan_dev *vxlan, *next;
+ 	struct net_device *dev, *aux;
+-	unsigned int h;
+ 
+ 	for_each_netdev_safe(net, dev, aux)
+ 		if (dev->rtnl_link_ops == &vxlan_link_ops)
+@@ -4735,14 +4734,13 @@ static void vxlan_destroy_tunnels(struct net *net, struct list_head *head)
+ 			unregister_netdevice_queue(vxlan->dev, head);
+ 	}
+ 
+-	for (h = 0; h < PORT_HASH_SIZE; ++h)
+-		WARN_ON_ONCE(!hlist_empty(&vn->sock_list[h]));
+ }
+ 
+ static void __net_exit vxlan_exit_batch_net(struct list_head *net_list)
+ {
+ 	struct net *net;
+ 	LIST_HEAD(list);
++	unsigned int h;
+ 
+ 	rtnl_lock();
+ 	list_for_each_entry(net, net_list, exit_list)
+@@ -4752,6 +4750,13 @@ static void __net_exit vxlan_exit_batch_net(struct list_head *net_list)
+ 
+ 	unregister_netdevice_many(&list);
+ 	rtnl_unlock();
++
++	list_for_each_entry(net, net_list, exit_list) {
++		struct vxlan_net *vn = net_generic(net, vxlan_net_id);
++
++		for (h = 0; h < PORT_HASH_SIZE; ++h)
++			WARN_ON_ONCE(!hlist_empty(&vn->sock_list[h]));
++	}
+ }
+ 
+ static struct pernet_operations vxlan_net_ops = {
+diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
+index c9f65e96ccb04..e01ab0742738a 100644
+--- a/drivers/net/wireguard/device.c
++++ b/drivers/net/wireguard/device.c
+@@ -138,7 +138,7 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		else if (skb->protocol == htons(ETH_P_IPV6))
+ 			net_dbg_ratelimited("%s: No peer has allowed IPs matching %pI6\n",
+ 					    dev->name, &ipv6_hdr(skb)->daddr);
+-		goto err;
++		goto err_icmp;
+ 	}
+ 
+ 	family = READ_ONCE(peer->endpoint.addr.sa_family);
+@@ -201,12 +201,13 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ err_peer:
+ 	wg_peer_put(peer);
+-err:
+-	++dev->stats.tx_errors;
++err_icmp:
+ 	if (skb->protocol == htons(ETH_P_IP))
+ 		icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
+ 	else if (skb->protocol == htons(ETH_P_IPV6))
+ 		icmpv6_ndo_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0);
++err:
++	++dev->stats.tx_errors;
+ 	kfree_skb(skb);
+ 	return ret;
+ }
+@@ -234,8 +235,8 @@ static void wg_destruct(struct net_device *dev)
+ 	destroy_workqueue(wg->handshake_receive_wq);
+ 	destroy_workqueue(wg->handshake_send_wq);
+ 	destroy_workqueue(wg->packet_crypt_wq);
+-	wg_packet_queue_free(&wg->decrypt_queue, true);
+-	wg_packet_queue_free(&wg->encrypt_queue, true);
++	wg_packet_queue_free(&wg->decrypt_queue);
++	wg_packet_queue_free(&wg->encrypt_queue);
+ 	rcu_barrier(); /* Wait for all the peers to be actually freed. */
+ 	wg_ratelimiter_uninit();
+ 	memzero_explicit(&wg->static_identity, sizeof(wg->static_identity));
+@@ -337,12 +338,12 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
+ 		goto err_destroy_handshake_send;
+ 
+ 	ret = wg_packet_queue_init(&wg->encrypt_queue, wg_packet_encrypt_worker,
+-				   true, MAX_QUEUED_PACKETS);
++				   MAX_QUEUED_PACKETS);
+ 	if (ret < 0)
+ 		goto err_destroy_packet_crypt;
+ 
+ 	ret = wg_packet_queue_init(&wg->decrypt_queue, wg_packet_decrypt_worker,
+-				   true, MAX_QUEUED_PACKETS);
++				   MAX_QUEUED_PACKETS);
+ 	if (ret < 0)
+ 		goto err_free_encrypt_queue;
+ 
+@@ -367,9 +368,9 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
+ err_uninit_ratelimiter:
+ 	wg_ratelimiter_uninit();
+ err_free_decrypt_queue:
+-	wg_packet_queue_free(&wg->decrypt_queue, true);
++	wg_packet_queue_free(&wg->decrypt_queue);
+ err_free_encrypt_queue:
+-	wg_packet_queue_free(&wg->encrypt_queue, true);
++	wg_packet_queue_free(&wg->encrypt_queue);
+ err_destroy_packet_crypt:
+ 	destroy_workqueue(wg->packet_crypt_wq);
+ err_destroy_handshake_send:
+diff --git a/drivers/net/wireguard/device.h b/drivers/net/wireguard/device.h
+index 4d0144e169478..854bc3d97150e 100644
+--- a/drivers/net/wireguard/device.h
++++ b/drivers/net/wireguard/device.h
+@@ -27,13 +27,14 @@ struct multicore_worker {
+ 
+ struct crypt_queue {
+ 	struct ptr_ring ring;
+-	union {
+-		struct {
+-			struct multicore_worker __percpu *worker;
+-			int last_cpu;
+-		};
+-		struct work_struct work;
+-	};
++	struct multicore_worker __percpu *worker;
++	int last_cpu;
++};
++
++struct prev_queue {
++	struct sk_buff *head, *tail, *peeked;
++	struct { struct sk_buff *next, *prev; } empty; // Match first 2 members of struct sk_buff.
++	atomic_t count;
+ };
+ 
+ struct wg_device {
+diff --git a/drivers/net/wireguard/peer.c b/drivers/net/wireguard/peer.c
+index b3b6370e6b959..cd5cb0292cb67 100644
+--- a/drivers/net/wireguard/peer.c
++++ b/drivers/net/wireguard/peer.c
+@@ -32,27 +32,22 @@ struct wg_peer *wg_peer_create(struct wg_device *wg,
+ 	peer = kzalloc(sizeof(*peer), GFP_KERNEL);
+ 	if (unlikely(!peer))
+ 		return ERR_PTR(ret);
+-	peer->device = wg;
++	if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL))
++		goto err;
+ 
++	peer->device = wg;
+ 	wg_noise_handshake_init(&peer->handshake, &wg->static_identity,
+ 				public_key, preshared_key, peer);
+-	if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL))
+-		goto err_1;
+-	if (wg_packet_queue_init(&peer->tx_queue, wg_packet_tx_worker, false,
+-				 MAX_QUEUED_PACKETS))
+-		goto err_2;
+-	if (wg_packet_queue_init(&peer->rx_queue, NULL, false,
+-				 MAX_QUEUED_PACKETS))
+-		goto err_3;
+-
+ 	peer->internal_id = atomic64_inc_return(&peer_counter);
+ 	peer->serial_work_cpu = nr_cpumask_bits;
+ 	wg_cookie_init(&peer->latest_cookie);
+ 	wg_timers_init(peer);
+ 	wg_cookie_checker_precompute_peer_keys(peer);
+ 	spin_lock_init(&peer->keypairs.keypair_update_lock);
+-	INIT_WORK(&peer->transmit_handshake_work,
+-		  wg_packet_handshake_send_worker);
++	INIT_WORK(&peer->transmit_handshake_work, wg_packet_handshake_send_worker);
++	INIT_WORK(&peer->transmit_packet_work, wg_packet_tx_worker);
++	wg_prev_queue_init(&peer->tx_queue);
++	wg_prev_queue_init(&peer->rx_queue);
+ 	rwlock_init(&peer->endpoint_lock);
+ 	kref_init(&peer->refcount);
+ 	skb_queue_head_init(&peer->staged_packet_queue);
+@@ -68,11 +63,7 @@ struct wg_peer *wg_peer_create(struct wg_device *wg,
+ 	pr_debug("%s: Peer %llu created\n", wg->dev->name, peer->internal_id);
+ 	return peer;
+ 
+-err_3:
+-	wg_packet_queue_free(&peer->tx_queue, false);
+-err_2:
+-	dst_cache_destroy(&peer->endpoint_cache);
+-err_1:
++err:
+ 	kfree(peer);
+ 	return ERR_PTR(ret);
+ }
+@@ -197,8 +188,7 @@ static void rcu_release(struct rcu_head *rcu)
+ 	struct wg_peer *peer = container_of(rcu, struct wg_peer, rcu);
+ 
+ 	dst_cache_destroy(&peer->endpoint_cache);
+-	wg_packet_queue_free(&peer->rx_queue, false);
+-	wg_packet_queue_free(&peer->tx_queue, false);
++	WARN_ON(wg_prev_queue_peek(&peer->tx_queue) || wg_prev_queue_peek(&peer->rx_queue));
+ 
+ 	/* The final zeroing takes care of clearing any remaining handshake key
+ 	 * material and other potentially sensitive information.
+diff --git a/drivers/net/wireguard/peer.h b/drivers/net/wireguard/peer.h
+index 23af409229972..0809cda08bfa4 100644
+--- a/drivers/net/wireguard/peer.h
++++ b/drivers/net/wireguard/peer.h
+@@ -36,7 +36,7 @@ struct endpoint {
+ 
+ struct wg_peer {
+ 	struct wg_device *device;
+-	struct crypt_queue tx_queue, rx_queue;
++	struct prev_queue tx_queue, rx_queue;
+ 	struct sk_buff_head staged_packet_queue;
+ 	int serial_work_cpu;
+ 	struct noise_keypairs keypairs;
+@@ -45,7 +45,7 @@ struct wg_peer {
+ 	rwlock_t endpoint_lock;
+ 	struct noise_handshake handshake;
+ 	atomic64_t last_sent_handshake;
+-	struct work_struct transmit_handshake_work, clear_peer_work;
++	struct work_struct transmit_handshake_work, clear_peer_work, transmit_packet_work;
+ 	struct cookie latest_cookie;
+ 	struct hlist_node pubkey_hash;
+ 	u64 rx_bytes, tx_bytes;
+diff --git a/drivers/net/wireguard/queueing.c b/drivers/net/wireguard/queueing.c
+index 71b8e80b58e12..48e7b982a3073 100644
+--- a/drivers/net/wireguard/queueing.c
++++ b/drivers/net/wireguard/queueing.c
+@@ -9,8 +9,7 @@ struct multicore_worker __percpu *
+ wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
+ {
+ 	int cpu;
+-	struct multicore_worker __percpu *worker =
+-		alloc_percpu(struct multicore_worker);
++	struct multicore_worker __percpu *worker = alloc_percpu(struct multicore_worker);
+ 
+ 	if (!worker)
+ 		return NULL;
+@@ -23,7 +22,7 @@ wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
+ }
+ 
+ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
+-			 bool multicore, unsigned int len)
++			 unsigned int len)
+ {
+ 	int ret;
+ 
+@@ -31,25 +30,78 @@ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
+ 	ret = ptr_ring_init(&queue->ring, len, GFP_KERNEL);
+ 	if (ret)
+ 		return ret;
+-	if (function) {
+-		if (multicore) {
+-			queue->worker = wg_packet_percpu_multicore_worker_alloc(
+-				function, queue);
+-			if (!queue->worker) {
+-				ptr_ring_cleanup(&queue->ring, NULL);
+-				return -ENOMEM;
+-			}
+-		} else {
+-			INIT_WORK(&queue->work, function);
+-		}
++	queue->worker = wg_packet_percpu_multicore_worker_alloc(function, queue);
++	if (!queue->worker) {
++		ptr_ring_cleanup(&queue->ring, NULL);
++		return -ENOMEM;
+ 	}
+ 	return 0;
+ }
+ 
+-void wg_packet_queue_free(struct crypt_queue *queue, bool multicore)
++void wg_packet_queue_free(struct crypt_queue *queue)
+ {
+-	if (multicore)
+-		free_percpu(queue->worker);
++	free_percpu(queue->worker);
+ 	WARN_ON(!__ptr_ring_empty(&queue->ring));
+ 	ptr_ring_cleanup(&queue->ring, NULL);
+ }
++
++#define NEXT(skb) ((skb)->prev)
++#define STUB(queue) ((struct sk_buff *)&queue->empty)
++
++void wg_prev_queue_init(struct prev_queue *queue)
++{
++	NEXT(STUB(queue)) = NULL;
++	queue->head = queue->tail = STUB(queue);
++	queue->peeked = NULL;
++	atomic_set(&queue->count, 0);
++	BUILD_BUG_ON(
++		offsetof(struct sk_buff, next) != offsetof(struct prev_queue, empty.next) -
++							offsetof(struct prev_queue, empty) ||
++		offsetof(struct sk_buff, prev) != offsetof(struct prev_queue, empty.prev) -
++							 offsetof(struct prev_queue, empty));
++}
++
++static void __wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb)
++{
++	WRITE_ONCE(NEXT(skb), NULL);
++	WRITE_ONCE(NEXT(xchg_release(&queue->head, skb)), skb);
++}
++
++bool wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb)
++{
++	if (!atomic_add_unless(&queue->count, 1, MAX_QUEUED_PACKETS))
++		return false;
++	__wg_prev_queue_enqueue(queue, skb);
++	return true;
++}
++
++struct sk_buff *wg_prev_queue_dequeue(struct prev_queue *queue)
++{
++	struct sk_buff *tail = queue->tail, *next = smp_load_acquire(&NEXT(tail));
++
++	if (tail == STUB(queue)) {
++		if (!next)
++			return NULL;
++		queue->tail = next;
++		tail = next;
++		next = smp_load_acquire(&NEXT(next));
++	}
++	if (next) {
++		queue->tail = next;
++		atomic_dec(&queue->count);
++		return tail;
++	}
++	if (tail != READ_ONCE(queue->head))
++		return NULL;
++	__wg_prev_queue_enqueue(queue, STUB(queue));
++	next = smp_load_acquire(&NEXT(tail));
++	if (next) {
++		queue->tail = next;
++		atomic_dec(&queue->count);
++		return tail;
++	}
++	return NULL;
++}
++
++#undef NEXT
++#undef STUB
+diff --git a/drivers/net/wireguard/queueing.h b/drivers/net/wireguard/queueing.h
+index dfb674e030764..4ef2944a68bc9 100644
+--- a/drivers/net/wireguard/queueing.h
++++ b/drivers/net/wireguard/queueing.h
+@@ -17,12 +17,13 @@ struct wg_device;
+ struct wg_peer;
+ struct multicore_worker;
+ struct crypt_queue;
++struct prev_queue;
+ struct sk_buff;
+ 
+ /* queueing.c APIs: */
+ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
+-			 bool multicore, unsigned int len);
+-void wg_packet_queue_free(struct crypt_queue *queue, bool multicore);
++			 unsigned int len);
++void wg_packet_queue_free(struct crypt_queue *queue);
+ struct multicore_worker __percpu *
+ wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr);
+ 
+@@ -135,8 +136,31 @@ static inline int wg_cpumask_next_online(int *next)
+ 	return cpu;
+ }
+ 
++void wg_prev_queue_init(struct prev_queue *queue);
++
++/* Multi producer */
++bool wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb);
++
++/* Single consumer */
++struct sk_buff *wg_prev_queue_dequeue(struct prev_queue *queue);
++
++/* Single consumer */
++static inline struct sk_buff *wg_prev_queue_peek(struct prev_queue *queue)
++{
++	if (queue->peeked)
++		return queue->peeked;
++	queue->peeked = wg_prev_queue_dequeue(queue);
++	return queue->peeked;
++}
++
++/* Single consumer */
++static inline void wg_prev_queue_drop_peeked(struct prev_queue *queue)
++{
++	queue->peeked = NULL;
++}
++
+ static inline int wg_queue_enqueue_per_device_and_peer(
+-	struct crypt_queue *device_queue, struct crypt_queue *peer_queue,
++	struct crypt_queue *device_queue, struct prev_queue *peer_queue,
+ 	struct sk_buff *skb, struct workqueue_struct *wq, int *next_cpu)
+ {
+ 	int cpu;
+@@ -145,8 +169,9 @@ static inline int wg_queue_enqueue_per_device_and_peer(
+ 	/* We first queue this up for the peer ingestion, but the consumer
+ 	 * will wait for the state to change to CRYPTED or DEAD before.
+ 	 */
+-	if (unlikely(ptr_ring_produce_bh(&peer_queue->ring, skb)))
++	if (unlikely(!wg_prev_queue_enqueue(peer_queue, skb)))
+ 		return -ENOSPC;
++
+ 	/* Then we queue it up in the device queue, which consumes the
+ 	 * packet as soon as it can.
+ 	 */
+@@ -157,9 +182,7 @@ static inline int wg_queue_enqueue_per_device_and_peer(
+ 	return 0;
+ }
+ 
+-static inline void wg_queue_enqueue_per_peer(struct crypt_queue *queue,
+-					     struct sk_buff *skb,
+-					     enum packet_state state)
++static inline void wg_queue_enqueue_per_peer_tx(struct sk_buff *skb, enum packet_state state)
+ {
+ 	/* We take a reference, because as soon as we call atomic_set, the
+ 	 * peer can be freed from below us.
+@@ -167,14 +190,12 @@ static inline void wg_queue_enqueue_per_peer(struct crypt_queue *queue,
+ 	struct wg_peer *peer = wg_peer_get(PACKET_PEER(skb));
+ 
+ 	atomic_set_release(&PACKET_CB(skb)->state, state);
+-	queue_work_on(wg_cpumask_choose_online(&peer->serial_work_cpu,
+-					       peer->internal_id),
+-		      peer->device->packet_crypt_wq, &queue->work);
++	queue_work_on(wg_cpumask_choose_online(&peer->serial_work_cpu, peer->internal_id),
++		      peer->device->packet_crypt_wq, &peer->transmit_packet_work);
+ 	wg_peer_put(peer);
+ }
+ 
+-static inline void wg_queue_enqueue_per_peer_napi(struct sk_buff *skb,
+-						  enum packet_state state)
++static inline void wg_queue_enqueue_per_peer_rx(struct sk_buff *skb, enum packet_state state)
+ {
+ 	/* We take a reference, because as soon as we call atomic_set, the
+ 	 * peer can be freed from below us.
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
+index 2c9551ea6dc73..7dc84bcca2613 100644
+--- a/drivers/net/wireguard/receive.c
++++ b/drivers/net/wireguard/receive.c
+@@ -444,7 +444,6 @@ packet_processed:
+ int wg_packet_rx_poll(struct napi_struct *napi, int budget)
+ {
+ 	struct wg_peer *peer = container_of(napi, struct wg_peer, napi);
+-	struct crypt_queue *queue = &peer->rx_queue;
+ 	struct noise_keypair *keypair;
+ 	struct endpoint endpoint;
+ 	enum packet_state state;
+@@ -455,11 +454,10 @@ int wg_packet_rx_poll(struct napi_struct *napi, int budget)
+ 	if (unlikely(budget <= 0))
+ 		return 0;
+ 
+-	while ((skb = __ptr_ring_peek(&queue->ring)) != NULL &&
++	while ((skb = wg_prev_queue_peek(&peer->rx_queue)) != NULL &&
+ 	       (state = atomic_read_acquire(&PACKET_CB(skb)->state)) !=
+ 		       PACKET_STATE_UNCRYPTED) {
+-		__ptr_ring_discard_one(&queue->ring);
+-		peer = PACKET_PEER(skb);
++		wg_prev_queue_drop_peeked(&peer->rx_queue);
+ 		keypair = PACKET_CB(skb)->keypair;
+ 		free = true;
+ 
+@@ -508,7 +506,7 @@ void wg_packet_decrypt_worker(struct work_struct *work)
+ 		enum packet_state state =
+ 			likely(decrypt_packet(skb, PACKET_CB(skb)->keypair)) ?
+ 				PACKET_STATE_CRYPTED : PACKET_STATE_DEAD;
+-		wg_queue_enqueue_per_peer_napi(skb, state);
++		wg_queue_enqueue_per_peer_rx(skb, state);
+ 		if (need_resched())
+ 			cond_resched();
+ 	}
+@@ -531,12 +529,10 @@ static void wg_packet_consume_data(struct wg_device *wg, struct sk_buff *skb)
+ 	if (unlikely(READ_ONCE(peer->is_dead)))
+ 		goto err;
+ 
+-	ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue,
+-						   &peer->rx_queue, skb,
+-						   wg->packet_crypt_wq,
+-						   &wg->decrypt_queue.last_cpu);
++	ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue, &peer->rx_queue, skb,
++						   wg->packet_crypt_wq, &wg->decrypt_queue.last_cpu);
+ 	if (unlikely(ret == -EPIPE))
+-		wg_queue_enqueue_per_peer_napi(skb, PACKET_STATE_DEAD);
++		wg_queue_enqueue_per_peer_rx(skb, PACKET_STATE_DEAD);
+ 	if (likely(!ret || ret == -EPIPE)) {
+ 		rcu_read_unlock_bh();
+ 		return;
+diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c
+index f74b9341ab0fe..5368f7c35b4bf 100644
+--- a/drivers/net/wireguard/send.c
++++ b/drivers/net/wireguard/send.c
+@@ -239,8 +239,7 @@ void wg_packet_send_keepalive(struct wg_peer *peer)
+ 	wg_packet_send_staged_packets(peer);
+ }
+ 
+-static void wg_packet_create_data_done(struct sk_buff *first,
+-				       struct wg_peer *peer)
++static void wg_packet_create_data_done(struct wg_peer *peer, struct sk_buff *first)
+ {
+ 	struct sk_buff *skb, *next;
+ 	bool is_keepalive, data_sent = false;
+@@ -262,22 +261,19 @@ static void wg_packet_create_data_done(struct sk_buff *first,
+ 
+ void wg_packet_tx_worker(struct work_struct *work)
+ {
+-	struct crypt_queue *queue = container_of(work, struct crypt_queue,
+-						 work);
++	struct wg_peer *peer = container_of(work, struct wg_peer, transmit_packet_work);
+ 	struct noise_keypair *keypair;
+ 	enum packet_state state;
+ 	struct sk_buff *first;
+-	struct wg_peer *peer;
+ 
+-	while ((first = __ptr_ring_peek(&queue->ring)) != NULL &&
++	while ((first = wg_prev_queue_peek(&peer->tx_queue)) != NULL &&
+ 	       (state = atomic_read_acquire(&PACKET_CB(first)->state)) !=
+ 		       PACKET_STATE_UNCRYPTED) {
+-		__ptr_ring_discard_one(&queue->ring);
+-		peer = PACKET_PEER(first);
++		wg_prev_queue_drop_peeked(&peer->tx_queue);
+ 		keypair = PACKET_CB(first)->keypair;
+ 
+ 		if (likely(state == PACKET_STATE_CRYPTED))
+-			wg_packet_create_data_done(first, peer);
++			wg_packet_create_data_done(peer, first);
+ 		else
+ 			kfree_skb_list(first);
+ 
+@@ -306,16 +302,14 @@ void wg_packet_encrypt_worker(struct work_struct *work)
+ 				break;
+ 			}
+ 		}
+-		wg_queue_enqueue_per_peer(&PACKET_PEER(first)->tx_queue, first,
+-					  state);
++		wg_queue_enqueue_per_peer_tx(first, state);
+ 		if (need_resched())
+ 			cond_resched();
+ 	}
+ }
+ 
+-static void wg_packet_create_data(struct sk_buff *first)
++static void wg_packet_create_data(struct wg_peer *peer, struct sk_buff *first)
+ {
+-	struct wg_peer *peer = PACKET_PEER(first);
+ 	struct wg_device *wg = peer->device;
+ 	int ret = -EINVAL;
+ 
+@@ -323,13 +317,10 @@ static void wg_packet_create_data(struct sk_buff *first)
+ 	if (unlikely(READ_ONCE(peer->is_dead)))
+ 		goto err;
+ 
+-	ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue,
+-						   &peer->tx_queue, first,
+-						   wg->packet_crypt_wq,
+-						   &wg->encrypt_queue.last_cpu);
++	ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue, &peer->tx_queue, first,
++						   wg->packet_crypt_wq, &wg->encrypt_queue.last_cpu);
+ 	if (unlikely(ret == -EPIPE))
+-		wg_queue_enqueue_per_peer(&peer->tx_queue, first,
+-					  PACKET_STATE_DEAD);
++		wg_queue_enqueue_per_peer_tx(first, PACKET_STATE_DEAD);
+ err:
+ 	rcu_read_unlock_bh();
+ 	if (likely(!ret || ret == -EPIPE))
+@@ -393,7 +384,7 @@ void wg_packet_send_staged_packets(struct wg_peer *peer)
+ 	packets.prev->next = NULL;
+ 	wg_peer_get(keypair->entry.peer);
+ 	PACKET_CB(packets.next)->keypair = keypair;
+-	wg_packet_create_data(packets.next);
++	wg_packet_create_data(peer, packets.next);
+ 	return;
+ 
+ out_invalid:
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 2e3eb5bbe49c8..4bc84cc5e824b 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -9116,7 +9116,9 @@ static void ath10k_sta_statistics(struct ieee80211_hw *hw,
+ 	if (!ath10k_peer_stats_enabled(ar))
+ 		return;
+ 
++	mutex_lock(&ar->conf_mutex);
+ 	ath10k_debug_fw_stats_request(ar);
++	mutex_unlock(&ar->conf_mutex);
+ 
+ 	sinfo->rx_duration = arsta->rx_duration;
+ 	sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_DURATION);
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index fd41f25456dc4..daae470ecf5aa 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -1045,12 +1045,13 @@ static int ath10k_snoc_hif_power_up(struct ath10k *ar,
+ 	ret = ath10k_snoc_init_pipes(ar);
+ 	if (ret) {
+ 		ath10k_err(ar, "failed to initialize CE: %d\n", ret);
+-		goto err_wlan_enable;
++		goto err_free_rri;
+ 	}
+ 
+ 	return 0;
+ 
+-err_wlan_enable:
++err_free_rri:
++	ath10k_ce_free_rri(ar);
+ 	ath10k_snoc_wlan_disable(ar);
+ 
+ 	return ret;
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 7b5834157fe51..e6135795719a1 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -240,8 +240,10 @@ static int ath10k_wmi_tlv_parse_peer_stats_info(struct ath10k *ar, u16 tag, u16
+ 		   __le32_to_cpu(stat->last_tx_rate_code),
+ 		   __le32_to_cpu(stat->last_tx_bitrate_kbps));
+ 
++	rcu_read_lock();
+ 	sta = ieee80211_find_sta_by_ifaddr(ar->hw, stat->peer_macaddr.addr, NULL);
+ 	if (!sta) {
++		rcu_read_unlock();
+ 		ath10k_warn(ar, "not found station for peer stats\n");
+ 		return -EINVAL;
+ 	}
+@@ -251,6 +253,7 @@ static int ath10k_wmi_tlv_parse_peer_stats_info(struct ath10k *ar, u16 tag, u16
+ 	arsta->rx_bitrate_kbps = __le32_to_cpu(stat->last_rx_bitrate_kbps);
+ 	arsta->tx_rate_code = __le32_to_cpu(stat->last_tx_rate_code);
+ 	arsta->tx_bitrate_kbps = __le32_to_cpu(stat->last_tx_bitrate_kbps);
++	rcu_read_unlock();
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index af427d9051a07..b5bd9b06da89e 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -4213,11 +4213,6 @@ static int ath11k_mac_op_start(struct ieee80211_hw *hw)
+ 	/* Configure the hash seed for hash based reo dest ring selection */
+ 	ath11k_wmi_pdev_lro_cfg(ar, ar->pdev->pdev_id);
+ 
+-	mutex_unlock(&ar->conf_mutex);
+-
+-	rcu_assign_pointer(ab->pdevs_active[ar->pdev_idx],
+-			   &ab->pdevs[ar->pdev_idx]);
+-
+ 	/* allow device to enter IMPS */
+ 	if (ab->hw_params.idle_ps) {
+ 		ret = ath11k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_IDLE_PS_CONFIG,
+@@ -4227,6 +4222,12 @@ static int ath11k_mac_op_start(struct ieee80211_hw *hw)
+ 			goto err;
+ 		}
+ 	}
++
++	mutex_unlock(&ar->conf_mutex);
++
++	rcu_assign_pointer(ab->pdevs_active[ar->pdev_idx],
++			   &ab->pdevs[ar->pdev_idx]);
++
+ 	return 0;
+ 
+ err:
+diff --git a/drivers/net/wireless/ath/ath9k/debug.c b/drivers/net/wireless/ath/ath9k/debug.c
+index 26ea51a721564..859a865c59950 100644
+--- a/drivers/net/wireless/ath/ath9k/debug.c
++++ b/drivers/net/wireless/ath/ath9k/debug.c
+@@ -1223,8 +1223,11 @@ static ssize_t write_file_nf_override(struct file *file,
+ 
+ 	ah->nf_override = val;
+ 
+-	if (ah->curchan)
++	if (ah->curchan) {
++		ath9k_ps_wakeup(sc);
+ 		ath9k_hw_loadnf(ah, ah->curchan);
++		ath9k_ps_restore(sc);
++	}
+ 
+ 	return count;
+ }
+diff --git a/drivers/net/wireless/broadcom/b43/phy_n.c b/drivers/net/wireless/broadcom/b43/phy_n.c
+index b669dff24b6e0..665b737fbb0d8 100644
+--- a/drivers/net/wireless/broadcom/b43/phy_n.c
++++ b/drivers/net/wireless/broadcom/b43/phy_n.c
+@@ -5311,7 +5311,7 @@ static void b43_nphy_restore_cal(struct b43_wldev *dev)
+ 
+ 	for (i = 0; i < 4; i++) {
+ 		if (dev->phy.rev >= 3)
+-			table[i] = coef[i];
++			coef[i] = table[i];
+ 		else
+ 			coef[i] = 0;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+index 895a907acdf0f..37ce4fe136c5e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+@@ -198,14 +198,14 @@ static int iwl_pnvm_parse(struct iwl_trans *trans, const u8 *data,
+ 				     le32_to_cpu(sku_id->data[1]),
+ 				     le32_to_cpu(sku_id->data[2]));
+ 
++			data += sizeof(*tlv) + ALIGN(tlv_len, 4);
++			len -= ALIGN(tlv_len, 4);
++
+ 			if (trans->sku_id[0] == le32_to_cpu(sku_id->data[0]) &&
+ 			    trans->sku_id[1] == le32_to_cpu(sku_id->data[1]) &&
+ 			    trans->sku_id[2] == le32_to_cpu(sku_id->data[2])) {
+ 				int ret;
+ 
+-				data += sizeof(*tlv) + ALIGN(tlv_len, 4);
+-				len -= ALIGN(tlv_len, 4);
+-
+ 				ret = iwl_pnvm_handle_section(trans, data, len);
+ 				if (!ret)
+ 					return 0;
+@@ -227,6 +227,7 @@ int iwl_pnvm_load(struct iwl_trans *trans,
+ 	struct iwl_notification_wait pnvm_wait;
+ 	static const u16 ntf_cmds[] = { WIDE_ID(REGULATORY_AND_NVM_GROUP,
+ 						PNVM_INIT_COMPLETE_NTFY) };
++	int ret;
+ 
+ 	/* if the SKU_ID is empty, there's nothing to do */
+ 	if (!trans->sku_id[0] && !trans->sku_id[1] && !trans->sku_id[2])
+@@ -236,7 +237,6 @@ int iwl_pnvm_load(struct iwl_trans *trans,
+ 	if (!trans->pnvm_loaded) {
+ 		const struct firmware *pnvm;
+ 		char pnvm_name[64];
+-		int ret;
+ 
+ 		/*
+ 		 * The prefix unfortunately includes a hyphen at the end, so
+@@ -264,6 +264,11 @@ int iwl_pnvm_load(struct iwl_trans *trans,
+ 
+ 			release_firmware(pnvm);
+ 		}
++	} else {
++		/* if we already loaded, we need to set it again */
++		ret = iwl_trans_set_pnvm(trans, NULL, 0);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	iwl_init_notification_wait(notif_wait, &pnvm_wait,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 6385b9641126b..ad374b25e2550 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -896,12 +896,10 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)
+ 	if (cmd_ver == 3) {
+ 		len = sizeof(cmd.v3);
+ 		n_bands = ARRAY_SIZE(cmd.v3.table[0]);
+-		cmd.v3.table_revision = cpu_to_le32(mvm->fwrt.geo_rev);
+ 	} else if (fw_has_api(&mvm->fwrt.fw->ucode_capa,
+ 			      IWL_UCODE_TLV_API_SAR_TABLE_VER)) {
+ 		len = sizeof(cmd.v2);
+ 		n_bands = ARRAY_SIZE(cmd.v2.table[0]);
+-		cmd.v2.table_revision = cpu_to_le32(mvm->fwrt.geo_rev);
+ 	} else {
+ 		len = sizeof(cmd.v1);
+ 		n_bands = ARRAY_SIZE(cmd.v1.table[0]);
+@@ -921,6 +919,16 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)
+ 	if (ret)
+ 		return 0;
+ 
++	/*
++	 * Set the revision on versions that contain it.
++	 * This must be done after calling iwl_sar_geo_init().
++	 */
++	if (cmd_ver == 3)
++		cmd.v3.table_revision = cpu_to_le32(mvm->fwrt.geo_rev);
++	else if (fw_has_api(&mvm->fwrt.fw->ucode_capa,
++			    IWL_UCODE_TLV_API_SAR_TABLE_VER))
++		cmd.v2.table_revision = cpu_to_le32(mvm->fwrt.geo_rev);
++
+ 	return iwl_mvm_send_cmd_pdu(mvm,
+ 				    WIDE_ID(PHY_OPS_GROUP, GEO_TX_POWER_LIMIT),
+ 				    0, len, &cmd);
+@@ -929,7 +937,6 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)
+ static int iwl_mvm_get_ppag_table(struct iwl_mvm *mvm)
+ {
+ 	union acpi_object *wifi_pkg, *data, *enabled;
+-	union iwl_ppag_table_cmd ppag_table;
+ 	int i, j, ret, tbl_rev, num_sub_bands;
+ 	int idx = 2;
+ 	s8 *gain;
+@@ -983,8 +990,8 @@ read_table:
+ 		goto out_free;
+ 	}
+ 
+-	ppag_table.v1.enabled = cpu_to_le32(enabled->integer.value);
+-	if (!ppag_table.v1.enabled) {
++	mvm->fwrt.ppag_table.v1.enabled = cpu_to_le32(enabled->integer.value);
++	if (!mvm->fwrt.ppag_table.v1.enabled) {
+ 		ret = 0;
+ 		goto out_free;
+ 	}
+@@ -999,16 +1006,23 @@ read_table:
+ 			union acpi_object *ent;
+ 
+ 			ent = &wifi_pkg->package.elements[idx++];
+-			if (ent->type != ACPI_TYPE_INTEGER ||
+-			    (j == 0 && ent->integer.value > ACPI_PPAG_MAX_LB) ||
+-			    (j == 0 && ent->integer.value < ACPI_PPAG_MIN_LB) ||
+-			    (j != 0 && ent->integer.value > ACPI_PPAG_MAX_HB) ||
+-			    (j != 0 && ent->integer.value < ACPI_PPAG_MIN_HB)) {
+-				ppag_table.v1.enabled = cpu_to_le32(0);
++			if (ent->type != ACPI_TYPE_INTEGER) {
+ 				ret = -EINVAL;
+ 				goto out_free;
+ 			}
++
+ 			gain[i * num_sub_bands + j] = ent->integer.value;
++
++			if ((j == 0 &&
++			     (gain[i * num_sub_bands + j] > ACPI_PPAG_MAX_LB ||
++			      gain[i * num_sub_bands + j] < ACPI_PPAG_MIN_LB)) ||
++			    (j != 0 &&
++			     (gain[i * num_sub_bands + j] > ACPI_PPAG_MAX_HB ||
++			      gain[i * num_sub_bands + j] < ACPI_PPAG_MIN_HB))) {
++				mvm->fwrt.ppag_table.v1.enabled = cpu_to_le32(0);
++				ret = -EINVAL;
++				goto out_free;
++			}
+ 		}
+ 	}
+ 	ret = 0;
+@@ -1021,7 +1035,6 @@ int iwl_mvm_ppag_send_cmd(struct iwl_mvm *mvm)
+ {
+ 	u8 cmd_ver;
+ 	int i, j, ret, num_sub_bands, cmd_size;
+-	union iwl_ppag_table_cmd ppag_table;
+ 	s8 *gain;
+ 
+ 	if (!fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_SET_PPAG)) {
+@@ -1040,7 +1053,7 @@ int iwl_mvm_ppag_send_cmd(struct iwl_mvm *mvm)
+ 	if (cmd_ver == 1) {
+ 		num_sub_bands = IWL_NUM_SUB_BANDS;
+ 		gain = mvm->fwrt.ppag_table.v1.gain[0];
+-		cmd_size = sizeof(ppag_table.v1);
++		cmd_size = sizeof(mvm->fwrt.ppag_table.v1);
+ 		if (mvm->fwrt.ppag_ver == 2) {
+ 			IWL_DEBUG_RADIO(mvm,
+ 					"PPAG table is v2 but FW supports v1, sending truncated table\n");
+@@ -1048,7 +1061,7 @@ int iwl_mvm_ppag_send_cmd(struct iwl_mvm *mvm)
+ 	} else if (cmd_ver == 2) {
+ 		num_sub_bands = IWL_NUM_SUB_BANDS_V2;
+ 		gain = mvm->fwrt.ppag_table.v2.gain[0];
+-		cmd_size = sizeof(ppag_table.v2);
++		cmd_size = sizeof(mvm->fwrt.ppag_table.v2);
+ 		if (mvm->fwrt.ppag_ver == 1) {
+ 			IWL_DEBUG_RADIO(mvm,
+ 					"PPAG table is v1 but FW supports v2, sending padded table\n");
+@@ -1068,7 +1081,7 @@ int iwl_mvm_ppag_send_cmd(struct iwl_mvm *mvm)
+ 	IWL_DEBUG_RADIO(mvm, "Sending PER_PLATFORM_ANT_GAIN_CMD\n");
+ 	ret = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(PHY_OPS_GROUP,
+ 						PER_PLATFORM_ANT_GAIN_CMD),
+-				   0, cmd_size, &ppag_table);
++				   0, cmd_size, &mvm->fwrt.ppag_table);
+ 	if (ret < 0)
+ 		IWL_ERR(mvm, "failed to send PER_PLATFORM_ANT_GAIN_CMD (%d)\n",
+ 			ret);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index 1db6d8d38822a..3939eccd3d5ac 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -1055,9 +1055,6 @@ void iwl_mvm_remove_csa_period(struct iwl_mvm *mvm,
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+-	if (!te_data->running)
+-		return;
+-
+ 	spin_lock_bh(&mvm->time_event_lock);
+ 	id = te_data->id;
+ 	spin_unlock_bh(&mvm->time_event_lock);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+index 2d43899fbdd7a..81ef4fc8d7831 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+@@ -345,17 +345,20 @@ int iwl_trans_pcie_ctx_info_gen3_set_pnvm(struct iwl_trans *trans,
+ 	if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
+ 		return 0;
+ 
+-	ret = iwl_pcie_ctxt_info_alloc_dma(trans, data, len,
+-					   &trans_pcie->pnvm_dram);
+-	if (ret < 0) {
+-		IWL_DEBUG_FW(trans, "Failed to allocate PNVM DMA %d.\n",
+-			     ret);
+-		return ret;
++	/* only allocate the DRAM if not allocated yet */
++	if (!trans->pnvm_loaded) {
++		if (WARN_ON(prph_sc_ctrl->pnvm_cfg.pnvm_size))
++			return -EBUSY;
++
++		ret = iwl_pcie_ctxt_info_alloc_dma(trans, data, len,
++						   &trans_pcie->pnvm_dram);
++		if (ret < 0) {
++			IWL_DEBUG_FW(trans, "Failed to allocate PNVM DMA %d.\n",
++				     ret);
++			return ret;
++		}
+ 	}
+ 
+-	if (WARN_ON(prph_sc_ctrl->pnvm_cfg.pnvm_size))
+-		return -EBUSY;
+-
+ 	prph_sc_ctrl->pnvm_cfg.pnvm_base_addr =
+ 		cpu_to_le64(trans_pcie->pnvm_dram.physical);
+ 	prph_sc_ctrl->pnvm_cfg.pnvm_size =
+diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
+index acb786d8b1d8f..e02a4fbb74de5 100644
+--- a/drivers/net/xen-netback/interface.c
++++ b/drivers/net/xen-netback/interface.c
+@@ -162,13 +162,15 @@ irqreturn_t xenvif_interrupt(int irq, void *dev_id)
+ {
+ 	struct xenvif_queue *queue = dev_id;
+ 	int old;
++	bool has_rx, has_tx;
+ 
+ 	old = atomic_fetch_or(NETBK_COMMON_EOI, &queue->eoi_pending);
+ 	WARN(old, "Interrupt while EOI pending\n");
+ 
+-	/* Use bitwise or as we need to call both functions. */
+-	if ((!xenvif_handle_tx_interrupt(queue) |
+-	     !xenvif_handle_rx_interrupt(queue))) {
++	has_tx = xenvif_handle_tx_interrupt(queue);
++	has_rx = xenvif_handle_rx_interrupt(queue);
++
++	if (!has_rx && !has_tx) {
+ 		atomic_andnot(NETBK_COMMON_EOI, &queue->eoi_pending);
+ 		xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
+ 	}
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 292e535a385d4..e812a0d0fdb3d 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -676,6 +676,10 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id)
+ 	if (blk_queue_stable_writes(ns->queue) && ns->head->disk)
+ 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES,
+ 				   ns->head->disk->queue);
++#ifdef CONFIG_BLK_DEV_ZONED
++	if (blk_queue_is_zoned(ns->queue) && ns->head->disk)
++		ns->head->disk->queue->nr_zones = ns->queue->nr_zones;
++#endif
+ }
+ 
+ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
+diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
+index 92ca23bc8dbfc..e20dea5c44f7b 100644
+--- a/drivers/nvme/target/admin-cmd.c
++++ b/drivers/nvme/target/admin-cmd.c
+@@ -469,7 +469,6 @@ out:
+ static void nvmet_execute_identify_ns(struct nvmet_req *req)
+ {
+ 	struct nvmet_ctrl *ctrl = req->sq->ctrl;
+-	struct nvmet_ns *ns;
+ 	struct nvme_id_ns *id;
+ 	u16 status = 0;
+ 
+@@ -486,20 +485,21 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
+ 	}
+ 
+ 	/* return an all zeroed buffer if we can't find an active namespace */
+-	ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid);
+-	if (!ns) {
+-		status = NVME_SC_INVALID_NS;
++	req->ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid);
++	if (!req->ns) {
++		status = 0;
+ 		goto done;
+ 	}
+ 
+-	nvmet_ns_revalidate(ns);
++	nvmet_ns_revalidate(req->ns);
+ 
+ 	/*
+ 	 * nuse = ncap = nsze isn't always true, but we have no way to find
+ 	 * that out from the underlying device.
+ 	 */
+-	id->ncap = id->nsze = cpu_to_le64(ns->size >> ns->blksize_shift);
+-	switch (req->port->ana_state[ns->anagrpid]) {
++	id->ncap = id->nsze =
++		cpu_to_le64(req->ns->size >> req->ns->blksize_shift);
++	switch (req->port->ana_state[req->ns->anagrpid]) {
+ 	case NVME_ANA_INACCESSIBLE:
+ 	case NVME_ANA_PERSISTENT_LOSS:
+ 		break;
+@@ -508,8 +508,8 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
+ 		break;
+         }
+ 
+-	if (ns->bdev)
+-		nvmet_bdev_set_limits(ns->bdev, id);
++	if (req->ns->bdev)
++		nvmet_bdev_set_limits(req->ns->bdev, id);
+ 
+ 	/*
+ 	 * We just provide a single LBA format that matches what the
+@@ -523,25 +523,24 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
+ 	 * controllers, but also with any other user of the block device.
+ 	 */
+ 	id->nmic = (1 << 0);
+-	id->anagrpid = cpu_to_le32(ns->anagrpid);
++	id->anagrpid = cpu_to_le32(req->ns->anagrpid);
+ 
+-	memcpy(&id->nguid, &ns->nguid, sizeof(id->nguid));
++	memcpy(&id->nguid, &req->ns->nguid, sizeof(id->nguid));
+ 
+-	id->lbaf[0].ds = ns->blksize_shift;
++	id->lbaf[0].ds = req->ns->blksize_shift;
+ 
+-	if (ctrl->pi_support && nvmet_ns_has_pi(ns)) {
++	if (ctrl->pi_support && nvmet_ns_has_pi(req->ns)) {
+ 		id->dpc = NVME_NS_DPC_PI_FIRST | NVME_NS_DPC_PI_LAST |
+ 			  NVME_NS_DPC_PI_TYPE1 | NVME_NS_DPC_PI_TYPE2 |
+ 			  NVME_NS_DPC_PI_TYPE3;
+ 		id->mc = NVME_MC_EXTENDED_LBA;
+-		id->dps = ns->pi_type;
++		id->dps = req->ns->pi_type;
+ 		id->flbas = NVME_NS_FLBAS_META_EXT;
+-		id->lbaf[0].ms = cpu_to_le16(ns->metadata_size);
++		id->lbaf[0].ms = cpu_to_le16(req->ns->metadata_size);
+ 	}
+ 
+-	if (ns->readonly)
++	if (req->ns->readonly)
+ 		id->nsattr |= (1 << 0);
+-	nvmet_put_namespace(ns);
+ done:
+ 	if (!status)
+ 		status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id));
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index aacf06f0b4312..8b0485ada315b 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -379,7 +379,7 @@ err:
+ 	return NVME_SC_INTERNAL;
+ }
+ 
+-static void nvmet_tcp_ddgst(struct ahash_request *hash,
++static void nvmet_tcp_send_ddgst(struct ahash_request *hash,
+ 		struct nvmet_tcp_cmd *cmd)
+ {
+ 	ahash_request_set_crypt(hash, cmd->req.sg,
+@@ -387,6 +387,23 @@ static void nvmet_tcp_ddgst(struct ahash_request *hash,
+ 	crypto_ahash_digest(hash);
+ }
+ 
++static void nvmet_tcp_recv_ddgst(struct ahash_request *hash,
++		struct nvmet_tcp_cmd *cmd)
++{
++	struct scatterlist sg;
++	struct kvec *iov;
++	int i;
++
++	crypto_ahash_init(hash);
++	for (i = 0, iov = cmd->iov; i < cmd->nr_mapped; i++, iov++) {
++		sg_init_one(&sg, iov->iov_base, iov->iov_len);
++		ahash_request_set_crypt(hash, &sg, NULL, iov->iov_len);
++		crypto_ahash_update(hash);
++	}
++	ahash_request_set_crypt(hash, NULL, (void *)&cmd->exp_ddgst, 0);
++	crypto_ahash_final(hash);
++}
++
+ static void nvmet_setup_c2h_data_pdu(struct nvmet_tcp_cmd *cmd)
+ {
+ 	struct nvme_tcp_data_pdu *pdu = cmd->data_pdu;
+@@ -411,7 +428,7 @@ static void nvmet_setup_c2h_data_pdu(struct nvmet_tcp_cmd *cmd)
+ 
+ 	if (queue->data_digest) {
+ 		pdu->hdr.flags |= NVME_TCP_F_DDGST;
+-		nvmet_tcp_ddgst(queue->snd_hash, cmd);
++		nvmet_tcp_send_ddgst(queue->snd_hash, cmd);
+ 	}
+ 
+ 	if (cmd->queue->hdr_digest) {
+@@ -1060,7 +1077,7 @@ static void nvmet_tcp_prep_recv_ddgst(struct nvmet_tcp_cmd *cmd)
+ {
+ 	struct nvmet_tcp_queue *queue = cmd->queue;
+ 
+-	nvmet_tcp_ddgst(queue->rcv_hash, cmd);
++	nvmet_tcp_recv_ddgst(queue->rcv_hash, cmd);
+ 	queue->offset = 0;
+ 	queue->left = NVME_TCP_DIGEST_LENGTH;
+ 	queue->rcv_state = NVMET_TCP_RECV_DDGST;
+@@ -1081,14 +1098,14 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
+ 		cmd->rbytes_done += ret;
+ 	}
+ 
++	if (queue->data_digest) {
++		nvmet_tcp_prep_recv_ddgst(cmd);
++		return 0;
++	}
+ 	nvmet_tcp_unmap_pdu_iovec(cmd);
+ 
+ 	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+ 	    cmd->rbytes_done == cmd->req.transfer_len) {
+-		if (queue->data_digest) {
+-			nvmet_tcp_prep_recv_ddgst(cmd);
+-			return 0;
+-		}
+ 		cmd->req.execute(&cmd->req);
+ 	}
+ 
+@@ -1468,17 +1485,27 @@ static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue)
+ 	if (inet->rcv_tos > 0)
+ 		ip_sock_set_tos(sock->sk, inet->rcv_tos);
+ 
++	ret = 0;
+ 	write_lock_bh(&sock->sk->sk_callback_lock);
+-	sock->sk->sk_user_data = queue;
+-	queue->data_ready = sock->sk->sk_data_ready;
+-	sock->sk->sk_data_ready = nvmet_tcp_data_ready;
+-	queue->state_change = sock->sk->sk_state_change;
+-	sock->sk->sk_state_change = nvmet_tcp_state_change;
+-	queue->write_space = sock->sk->sk_write_space;
+-	sock->sk->sk_write_space = nvmet_tcp_write_space;
++	if (sock->sk->sk_state != TCP_ESTABLISHED) {
++		/*
++		 * If the socket is already closing, don't even start
++		 * consuming it
++		 */
++		ret = -ENOTCONN;
++	} else {
++		sock->sk->sk_user_data = queue;
++		queue->data_ready = sock->sk->sk_data_ready;
++		sock->sk->sk_data_ready = nvmet_tcp_data_ready;
++		queue->state_change = sock->sk->sk_state_change;
++		sock->sk->sk_state_change = nvmet_tcp_state_change;
++		queue->write_space = sock->sk->sk_write_space;
++		sock->sk->sk_write_space = nvmet_tcp_write_space;
++		queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work);
++	}
+ 	write_unlock_bh(&sock->sk->sk_callback_lock);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port,
+@@ -1526,8 +1553,6 @@ static int nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port,
+ 	if (ret)
+ 		goto out_destroy_sq;
+ 
+-	queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work);
+-
+ 	return 0;
+ out_destroy_sq:
+ 	mutex_lock(&nvmet_tcp_queue_mutex);
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index a09ff8409f600..9b6ab83956c3b 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -545,7 +545,9 @@ static int nvmem_add_cells_from_of(struct nvmem_device *nvmem)
+ 
+ 	for_each_child_of_node(parent, child) {
+ 		addr = of_get_property(child, "reg", &len);
+-		if (!addr || (len < 2 * sizeof(u32))) {
++		if (!addr)
++			continue;
++		if (len < 2 * sizeof(u32)) {
+ 			dev_err(dev, "nvmem: invalid reg on %pOF\n", child);
+ 			return -EINVAL;
+ 		}
+@@ -576,6 +578,7 @@ static int nvmem_add_cells_from_of(struct nvmem_device *nvmem)
+ 				cell->name, nvmem->stride);
+ 			/* Cells already added will be freed later. */
+ 			kfree_const(cell->name);
++			of_node_put(cell->np);
+ 			kfree(cell);
+ 			return -EINVAL;
+ 		}
+diff --git a/drivers/nvmem/qcom-spmi-sdam.c b/drivers/nvmem/qcom-spmi-sdam.c
+index a72704cd04681..f6e9f96933ca2 100644
+--- a/drivers/nvmem/qcom-spmi-sdam.c
++++ b/drivers/nvmem/qcom-spmi-sdam.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2017, 2020 The Linux Foundation. All rights reserved.
++ * Copyright (c) 2017, 2020-2021, The Linux Foundation. All rights reserved.
+  */
+ 
+ #include <linux/device.h>
+@@ -18,7 +18,6 @@
+ #define SDAM_PBS_TRIG_CLR		0xE6
+ 
+ struct sdam_chip {
+-	struct platform_device		*pdev;
+ 	struct regmap			*regmap;
+ 	struct nvmem_config		sdam_config;
+ 	unsigned int			base;
+@@ -65,7 +64,7 @@ static int sdam_read(void *priv, unsigned int offset, void *val,
+ 				size_t bytes)
+ {
+ 	struct sdam_chip *sdam = priv;
+-	struct device *dev = &sdam->pdev->dev;
++	struct device *dev = sdam->sdam_config.dev;
+ 	int rc;
+ 
+ 	if (!sdam_is_valid(sdam, offset, bytes)) {
+@@ -86,7 +85,7 @@ static int sdam_write(void *priv, unsigned int offset, void *val,
+ 				size_t bytes)
+ {
+ 	struct sdam_chip *sdam = priv;
+-	struct device *dev = &sdam->pdev->dev;
++	struct device *dev = sdam->sdam_config.dev;
+ 	int rc;
+ 
+ 	if (!sdam_is_valid(sdam, offset, bytes)) {
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index 4602e467ca8b9..f2e697000b96f 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -1149,8 +1149,16 @@ int __init __weak early_init_dt_mark_hotplug_memory_arch(u64 base, u64 size)
+ int __init __weak early_init_dt_reserve_memory_arch(phys_addr_t base,
+ 					phys_addr_t size, bool nomap)
+ {
+-	if (nomap)
+-		return memblock_remove(base, size);
++	if (nomap) {
++		/*
++		 * If the memory is already reserved (by another region), we
++		 * should not allow it to be marked nomap.
++		 */
++		if (memblock_is_region_reserved(base, size))
++			return -EBUSY;
++
++		return memblock_mark_nomap(base, size);
++	}
+ 	return memblock_reserve(base, size);
+ }
+ 
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 9faeb83e4b326..363277b31ecbb 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -751,7 +751,6 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
+ 		struct device *dev, struct device_node *np)
+ {
+ 	struct dev_pm_opp *new_opp;
+-	u64 rate = 0;
+ 	u32 val;
+ 	int ret;
+ 	bool rate_not_available = false;
+@@ -768,7 +767,8 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
+ 
+ 	/* Check if the OPP supports hardware's hierarchy of versions or not */
+ 	if (!_opp_is_supported(dev, opp_table, np)) {
+-		dev_dbg(dev, "OPP not supported by hardware: %llu\n", rate);
++		dev_dbg(dev, "OPP not supported by hardware: %lu\n",
++			new_opp->rate);
+ 		goto free_opp;
+ 	}
+ 
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index 811c1cb2e8deb..1cb7cfc75d6e4 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -321,9 +321,10 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc)
+ 
+ 	resource_list_for_each_entry(entry, &bridge->dma_ranges) {
+ 		err = cdns_pcie_host_bar_config(rc, entry);
+-		if (err)
++		if (err) {
+ 			dev_err(dev, "Fail to configure IB using dma-ranges\n");
+-		return err;
++			return err;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index b4761640ffd99..557554f53ce9c 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -395,7 +395,9 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ 
+ 	/* enable external reference clock */
+ 	val = readl(pcie->parf + PCIE20_PARF_PHY_REFCLK);
+-	val &= ~PHY_REFCLK_USE_PAD;
++	/* USE_PAD is required only for ipq806x */
++	if (!of_device_is_compatible(node, "qcom,pcie-apq8064"))
++		val &= ~PHY_REFCLK_USE_PAD;
+ 	val |= PHY_REFCLK_SSP_EN;
+ 	writel(val, pcie->parf + PCIE20_PARF_PHY_REFCLK);
+ 
+diff --git a/drivers/pci/controller/pcie-rcar-host.c b/drivers/pci/controller/pcie-rcar-host.c
+index cdc0963f154e3..2bee09b16255d 100644
+--- a/drivers/pci/controller/pcie-rcar-host.c
++++ b/drivers/pci/controller/pcie-rcar-host.c
+@@ -737,7 +737,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)
+ 	}
+ 
+ 	/* setup MSI data target */
+-	msi->pages = __get_free_pages(GFP_KERNEL, 0);
++	msi->pages = __get_free_pages(GFP_KERNEL | GFP_DMA32, 0);
+ 	rcar_pcie_hw_enable_msi(host);
+ 
+ 	return 0;
+diff --git a/drivers/pci/controller/pcie-rockchip.c b/drivers/pci/controller/pcie-rockchip.c
+index 904dec0d3a88f..990a00e08bc5b 100644
+--- a/drivers/pci/controller/pcie-rockchip.c
++++ b/drivers/pci/controller/pcie-rockchip.c
+@@ -82,7 +82,7 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
+ 	}
+ 
+ 	rockchip->mgmt_sticky_rst = devm_reset_control_get_exclusive(dev,
+-								     "mgmt-sticky");
++								"mgmt-sticky");
+ 	if (IS_ERR(rockchip->mgmt_sticky_rst)) {
+ 		if (PTR_ERR(rockchip->mgmt_sticky_rst) != -EPROBE_DEFER)
+ 			dev_err(dev, "missing mgmt-sticky reset property in node\n");
+@@ -118,11 +118,11 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
+ 	}
+ 
+ 	if (rockchip->is_rc) {
+-		rockchip->ep_gpio = devm_gpiod_get(dev, "ep", GPIOD_OUT_HIGH);
+-		if (IS_ERR(rockchip->ep_gpio)) {
+-			dev_err(dev, "missing ep-gpios property in node\n");
+-			return PTR_ERR(rockchip->ep_gpio);
+-		}
++		rockchip->ep_gpio = devm_gpiod_get_optional(dev, "ep",
++							    GPIOD_OUT_HIGH);
++		if (IS_ERR(rockchip->ep_gpio))
++			return dev_err_probe(dev, PTR_ERR(rockchip->ep_gpio),
++					     "failed to get ep GPIO\n");
+ 	}
+ 
+ 	rockchip->aclk_pcie = devm_clk_get(dev, "aclk");
+diff --git a/drivers/pci/controller/pcie-xilinx-cpm.c b/drivers/pci/controller/pcie-xilinx-cpm.c
+index f92e0152e65e3..67937facd90cd 100644
+--- a/drivers/pci/controller/pcie-xilinx-cpm.c
++++ b/drivers/pci/controller/pcie-xilinx-cpm.c
+@@ -404,6 +404,7 @@ static int xilinx_cpm_pcie_init_irq_domain(struct xilinx_cpm_pcie_port *port)
+ 	return 0;
+ out:
+ 	xilinx_cpm_free_irq_domains(port);
++	of_node_put(pcie_intc_node);
+ 	dev_err(dev, "Failed to allocate IRQ domains\n");
+ 
+ 	return -ENOMEM;
+diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
+index 139869d50eb26..fdaf86a888b73 100644
+--- a/drivers/pci/pci-bridge-emul.c
++++ b/drivers/pci/pci-bridge-emul.c
+@@ -21,8 +21,9 @@
+ #include "pci-bridge-emul.h"
+ 
+ #define PCI_BRIDGE_CONF_END	PCI_STD_HEADER_SIZEOF
++#define PCI_CAP_PCIE_SIZEOF	(PCI_EXP_SLTSTA2 + 2)
+ #define PCI_CAP_PCIE_START	PCI_BRIDGE_CONF_END
+-#define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2)
++#define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_CAP_PCIE_SIZEOF)
+ 
+ /**
+  * struct pci_bridge_reg_behavior - register bits behaviors
+@@ -46,7 +47,8 @@ struct pci_bridge_reg_behavior {
+ 	u32 w1c;
+ };
+ 
+-static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
++static const
++struct pci_bridge_reg_behavior pci_regs_behavior[PCI_STD_HEADER_SIZEOF / 4] = {
+ 	[PCI_VENDOR_ID / 4] = { .ro = ~0 },
+ 	[PCI_COMMAND / 4] = {
+ 		.rw = (PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
+@@ -164,7 +166,8 @@ static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
+ 	},
+ };
+ 
+-static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
++static const
++struct pci_bridge_reg_behavior pcie_cap_regs_behavior[PCI_CAP_PCIE_SIZEOF / 4] = {
+ 	[PCI_CAP_LIST_ID / 4] = {
+ 		/*
+ 		 * Capability ID, Next Capability Pointer and
+@@ -260,6 +263,8 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
+ 			 unsigned int flags)
+ {
++	BUILD_BUG_ON(sizeof(bridge->conf) != PCI_BRIDGE_CONF_END);
++
+ 	bridge->conf.class_revision |= cpu_to_le32(PCI_CLASS_BRIDGE_PCI << 16);
+ 	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
+ 	bridge->conf.cache_line_size = 0x10;
+diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c
+index 43eda101fcf40..7f1acb3918d0c 100644
+--- a/drivers/pci/setup-res.c
++++ b/drivers/pci/setup-res.c
+@@ -410,10 +410,16 @@ EXPORT_SYMBOL(pci_release_resource);
+ int pci_resize_resource(struct pci_dev *dev, int resno, int size)
+ {
+ 	struct resource *res = dev->resource + resno;
++	struct pci_host_bridge *host;
+ 	int old, ret;
+ 	u32 sizes;
+ 	u16 cmd;
+ 
++	/* Check if we must preserve the firmware's resource assignment */
++	host = pci_find_host_bridge(dev->bus);
++	if (host->preserve_config)
++		return -ENOTSUPP;
++
+ 	/* Make sure the resource isn't assigned before resizing it. */
+ 	if (!(res->flags & IORESOURCE_UNSET))
+ 		return -EBUSY;
+diff --git a/drivers/pci/syscall.c b/drivers/pci/syscall.c
+index 31e39558d49d8..8b003c890b87b 100644
+--- a/drivers/pci/syscall.c
++++ b/drivers/pci/syscall.c
+@@ -20,7 +20,7 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
+ 	u16 word;
+ 	u32 dword;
+ 	long err;
+-	long cfg_ret;
++	int cfg_ret;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+@@ -46,7 +46,7 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
+ 	}
+ 
+ 	err = -EIO;
+-	if (cfg_ret != PCIBIOS_SUCCESSFUL)
++	if (cfg_ret)
+ 		goto error;
+ 
+ 	switch (len) {
+@@ -105,7 +105,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
+ 		if (err)
+ 			break;
+ 		err = pci_user_write_config_byte(dev, off, byte);
+-		if (err != PCIBIOS_SUCCESSFUL)
++		if (err)
+ 			err = -EIO;
+ 		break;
+ 
+@@ -114,7 +114,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
+ 		if (err)
+ 			break;
+ 		err = pci_user_write_config_word(dev, off, word);
+-		if (err != PCIBIOS_SUCCESSFUL)
++		if (err)
+ 			err = -EIO;
+ 		break;
+ 
+@@ -123,7 +123,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
+ 		if (err)
+ 			break;
+ 		err = pci_user_write_config_dword(dev, off, dword);
+-		if (err != PCIBIOS_SUCCESSFUL)
++		if (err)
+ 			err = -EIO;
+ 		break;
+ 
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index a76ff594f3ca4..46defb1dcf867 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -1150,7 +1150,7 @@ static int arm_cmn_commit_txn(struct pmu *pmu)
+ static int arm_cmn_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
+ {
+ 	struct arm_cmn *cmn;
+-	unsigned int target;
++	unsigned int i, target;
+ 
+ 	cmn = hlist_entry_safe(node, struct arm_cmn, cpuhp_node);
+ 	if (cpu != cmn->cpu)
+@@ -1161,6 +1161,8 @@ static int arm_cmn_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
+ 		return 0;
+ 
+ 	perf_pmu_migrate_context(&cmn->pmu, cpu, target);
++	for (i = 0; i < cmn->num_dtcs; i++)
++		irq_set_affinity_hint(cmn->dtc[i].irq, cpumask_of(target));
+ 	cmn->cpu = target;
+ 	return 0;
+ }
+@@ -1502,7 +1504,7 @@ static int arm_cmn_probe(struct platform_device *pdev)
+ 	struct arm_cmn *cmn;
+ 	const char *name;
+ 	static atomic_t id;
+-	int err, rootnode, this_id;
++	int err, rootnode;
+ 
+ 	cmn = devm_kzalloc(&pdev->dev, sizeof(*cmn), GFP_KERNEL);
+ 	if (!cmn)
+@@ -1549,14 +1551,9 @@ static int arm_cmn_probe(struct platform_device *pdev)
+ 		.cancel_txn = arm_cmn_end_txn,
+ 	};
+ 
+-	this_id = atomic_fetch_inc(&id);
+-	if (this_id == 0) {
+-		name = "arm_cmn";
+-	} else {
+-		name = devm_kasprintf(cmn->dev, GFP_KERNEL, "arm_cmn_%d", this_id);
+-		if (!name)
+-			return -ENOMEM;
+-	}
++	name = devm_kasprintf(cmn->dev, GFP_KERNEL, "arm_cmn_%d", atomic_fetch_inc(&id));
++	if (!name)
++		return -ENOMEM;
+ 
+ 	err = cpuhp_state_add_instance(arm_cmn_hp_state, &cmn->cpuhp_node);
+ 	if (err)
+diff --git a/drivers/phy/Kconfig b/drivers/phy/Kconfig
+index 01b53f86004cb..9ed5f167a9f3c 100644
+--- a/drivers/phy/Kconfig
++++ b/drivers/phy/Kconfig
+@@ -52,6 +52,7 @@ config PHY_XGENE
+ config USB_LGM_PHY
+ 	tristate "INTEL Lightning Mountain USB PHY Driver"
+ 	depends on USB_SUPPORT
++	depends on X86 || COMPILE_TEST
+ 	select USB_PHY
+ 	select REGULATOR
+ 	select REGULATOR_FIXED_VOLTAGE
+diff --git a/drivers/phy/cadence/phy-cadence-torrent.c b/drivers/phy/cadence/phy-cadence-torrent.c
+index f310e15d94cbc..591a15834b48f 100644
+--- a/drivers/phy/cadence/phy-cadence-torrent.c
++++ b/drivers/phy/cadence/phy-cadence-torrent.c
+@@ -2298,6 +2298,7 @@ static int cdns_torrent_phy_probe(struct platform_device *pdev)
+ 
+ 	if (total_num_lanes > MAX_NUM_LANES) {
+ 		dev_err(dev, "Invalid lane configuration\n");
++		ret = -EINVAL;
+ 		goto put_lnk_rst;
+ 	}
+ 
+diff --git a/drivers/phy/lantiq/phy-lantiq-rcu-usb2.c b/drivers/phy/lantiq/phy-lantiq-rcu-usb2.c
+index a7d126192cf12..29d246ea24b47 100644
+--- a/drivers/phy/lantiq/phy-lantiq-rcu-usb2.c
++++ b/drivers/phy/lantiq/phy-lantiq-rcu-usb2.c
+@@ -124,8 +124,16 @@ static int ltq_rcu_usb2_phy_power_on(struct phy *phy)
+ 	reset_control_deassert(priv->phy_reset);
+ 
+ 	ret = clk_prepare_enable(priv->phy_gate_clk);
+-	if (ret)
++	if (ret) {
+ 		dev_err(dev, "failed to enable PHY gate\n");
++		return ret;
++	}
++
++	/*
++	 * at least the xrx200 usb2 phy requires some extra time to be
++	 * operational after enabling the clock
++	 */
++	usleep_range(100, 200);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/phy/rockchip/phy-rockchip-emmc.c b/drivers/phy/rockchip/phy-rockchip-emmc.c
+index 2dc19ddd120f5..a005fc58bbf02 100644
+--- a/drivers/phy/rockchip/phy-rockchip-emmc.c
++++ b/drivers/phy/rockchip/phy-rockchip-emmc.c
+@@ -240,15 +240,17 @@ static int rockchip_emmc_phy_init(struct phy *phy)
+ 	 * - SDHCI driver to get the PHY
+ 	 * - SDHCI driver to init the PHY
+ 	 *
+-	 * The clock is optional, so upon any error we just set to NULL.
++	 * The clock is optional, using clk_get_optional() to get the clock
++	 * and do error processing if the return value != NULL
+ 	 *
+ 	 * NOTE: we don't do anything special for EPROBE_DEFER here.  Given the
+ 	 * above expected use case, EPROBE_DEFER isn't sensible to expect, so
+ 	 * it's just like any other error.
+ 	 */
+-	rk_phy->emmcclk = clk_get(&phy->dev, "emmcclk");
++	rk_phy->emmcclk = clk_get_optional(&phy->dev, "emmcclk");
+ 	if (IS_ERR(rk_phy->emmcclk)) {
+-		dev_dbg(&phy->dev, "Error getting emmcclk: %d\n", ret);
++		ret = PTR_ERR(rk_phy->emmcclk);
++		dev_err(&phy->dev, "Error getting emmcclk: %d\n", ret);
+ 		rk_phy->emmcclk = NULL;
+ 	}
+ 
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index 0ecee8b8773d0..ea5149efcbeae 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -526,11 +526,13 @@ int cros_ec_query_all(struct cros_ec_device *ec_dev)
+ 		 * power), not wake up.
+ 		 */
+ 		ec_dev->host_event_wake_mask = U32_MAX &
+-			~(BIT(EC_HOST_EVENT_AC_DISCONNECTED) |
+-			  BIT(EC_HOST_EVENT_BATTERY_LOW) |
+-			  BIT(EC_HOST_EVENT_BATTERY_CRITICAL) |
+-			  BIT(EC_HOST_EVENT_PD_MCU) |
+-			  BIT(EC_HOST_EVENT_BATTERY_STATUS));
++			~(EC_HOST_EVENT_MASK(EC_HOST_EVENT_LID_CLOSED) |
++			  EC_HOST_EVENT_MASK(EC_HOST_EVENT_AC_DISCONNECTED) |
++			  EC_HOST_EVENT_MASK(EC_HOST_EVENT_BATTERY_LOW) |
++			  EC_HOST_EVENT_MASK(EC_HOST_EVENT_BATTERY_CRITICAL) |
++			  EC_HOST_EVENT_MASK(EC_HOST_EVENT_BATTERY) |
++			  EC_HOST_EVENT_MASK(EC_HOST_EVENT_PD_MCU) |
++			  EC_HOST_EVENT_MASK(EC_HOST_EVENT_BATTERY_STATUS));
+ 		/*
+ 		 * Old ECs may not support this command. Complain about all
+ 		 * other errors.
+diff --git a/drivers/power/reset/at91-sama5d2_shdwc.c b/drivers/power/reset/at91-sama5d2_shdwc.c
+index 2fe3a627cb535..d9cf91e5b06d0 100644
+--- a/drivers/power/reset/at91-sama5d2_shdwc.c
++++ b/drivers/power/reset/at91-sama5d2_shdwc.c
+@@ -37,7 +37,7 @@
+ 
+ #define AT91_SHDW_MR	0x04		/* Shut Down Mode Register */
+ #define AT91_SHDW_WKUPDBC_SHIFT	24
+-#define AT91_SHDW_WKUPDBC_MASK	GENMASK(31, 16)
++#define AT91_SHDW_WKUPDBC_MASK	GENMASK(26, 24)
+ #define AT91_SHDW_WKUPDBC(x)	(((x) << AT91_SHDW_WKUPDBC_SHIFT) \
+ 						& AT91_SHDW_WKUPDBC_MASK)
+ 
+diff --git a/drivers/power/supply/Kconfig b/drivers/power/supply/Kconfig
+index eec646c568b7b..1699b9269a78e 100644
+--- a/drivers/power/supply/Kconfig
++++ b/drivers/power/supply/Kconfig
+@@ -229,6 +229,7 @@ config BATTERY_SBS
+ config CHARGER_SBS
+ 	tristate "SBS Compliant charger"
+ 	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  Say Y to include support for SBS compliant battery chargers.
+ 
+diff --git a/drivers/power/supply/axp20x_usb_power.c b/drivers/power/supply/axp20x_usb_power.c
+index 0eaa86c52874a..25e288388edad 100644
+--- a/drivers/power/supply/axp20x_usb_power.c
++++ b/drivers/power/supply/axp20x_usb_power.c
+@@ -593,6 +593,7 @@ static int axp20x_usb_power_probe(struct platform_device *pdev)
+ 	power->axp20x_id = axp_data->axp20x_id;
+ 	power->regmap = axp20x->regmap;
+ 	power->num_irqs = axp_data->num_irq_names;
++	INIT_DELAYED_WORK(&power->vbus_detect, axp20x_usb_power_poll_vbus);
+ 
+ 	if (power->axp20x_id == AXP202_ID) {
+ 		/* Enable vbus valid checking */
+@@ -645,7 +646,6 @@ static int axp20x_usb_power_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	INIT_DELAYED_WORK(&power->vbus_detect, axp20x_usb_power_poll_vbus);
+ 	if (axp20x_usb_vbus_needs_polling(power))
+ 		queue_delayed_work(system_wq, &power->vbus_detect, 0);
+ 
+diff --git a/drivers/power/supply/cpcap-battery.c b/drivers/power/supply/cpcap-battery.c
+index 295611b3b15e9..cebc5c8fda1b5 100644
+--- a/drivers/power/supply/cpcap-battery.c
++++ b/drivers/power/supply/cpcap-battery.c
+@@ -561,17 +561,21 @@ static int cpcap_battery_update_charger(struct cpcap_battery_ddata *ddata,
+ 				POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE,
+ 				&prop);
+ 	if (error)
+-		return error;
++		goto out_put;
+ 
+ 	/* Allow charger const voltage lower than battery const voltage */
+ 	if (const_charge_voltage > prop.intval)
+-		return 0;
++		goto out_put;
+ 
+ 	val.intval = const_charge_voltage;
+ 
+-	return power_supply_set_property(charger,
++	error = power_supply_set_property(charger,
+ 			POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE,
+ 			&val);
++out_put:
++	power_supply_put(charger);
++
++	return error;
+ }
+ 
+ static int cpcap_battery_set_property(struct power_supply *psy,
+@@ -666,7 +670,7 @@ static int cpcap_battery_init_irq(struct platform_device *pdev,
+ 
+ 	error = devm_request_threaded_irq(ddata->dev, irq, NULL,
+ 					  cpcap_battery_irq_thread,
+-					  IRQF_SHARED,
++					  IRQF_SHARED | IRQF_ONESHOT,
+ 					  name, ddata);
+ 	if (error) {
+ 		dev_err(ddata->dev, "could not get irq %s: %i\n",
+diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c
+index c0d452e3dc8b0..22fff01425d63 100644
+--- a/drivers/power/supply/cpcap-charger.c
++++ b/drivers/power/supply/cpcap-charger.c
+@@ -301,6 +301,8 @@ cpcap_charger_get_bat_const_charge_voltage(struct cpcap_charger_ddata *ddata)
+ 				&prop);
+ 		if (!error)
+ 			voltage = prop.intval;
++
++		power_supply_put(battery);
+ 	}
+ 
+ 	return voltage;
+@@ -708,7 +710,7 @@ static int cpcap_usb_init_irq(struct platform_device *pdev,
+ 
+ 	error = devm_request_threaded_irq(ddata->dev, irq, NULL,
+ 					  cpcap_charger_irq_thread,
+-					  IRQF_SHARED,
++					  IRQF_SHARED | IRQF_ONESHOT,
+ 					  name, ddata);
+ 	if (error) {
+ 		dev_err(ddata->dev, "could not get irq %s: %i\n",
+diff --git a/drivers/power/supply/smb347-charger.c b/drivers/power/supply/smb347-charger.c
+index d3bf35ed12cee..8cfbd8d6b4786 100644
+--- a/drivers/power/supply/smb347-charger.c
++++ b/drivers/power/supply/smb347-charger.c
+@@ -137,6 +137,7 @@
+  * @mains_online: is AC/DC input connected
+  * @usb_online: is USB input connected
+  * @charging_enabled: is charging enabled
++ * @irq_unsupported: is interrupt unsupported by SMB hardware
+  * @max_charge_current: maximum current (in uA) the battery can be charged
+  * @max_charge_voltage: maximum voltage (in uV) the battery can be charged
+  * @pre_charge_current: current (in uA) to use in pre-charging phase
+@@ -193,6 +194,7 @@ struct smb347_charger {
+ 	bool			mains_online;
+ 	bool			usb_online;
+ 	bool			charging_enabled;
++	bool			irq_unsupported;
+ 
+ 	unsigned int		max_charge_current;
+ 	unsigned int		max_charge_voltage;
+@@ -862,6 +864,9 @@ static int smb347_irq_set(struct smb347_charger *smb, bool enable)
+ {
+ 	int ret;
+ 
++	if (smb->irq_unsupported)
++		return 0;
++
+ 	ret = smb347_set_writable(smb, true);
+ 	if (ret < 0)
+ 		return ret;
+@@ -923,8 +928,6 @@ static int smb347_irq_init(struct smb347_charger *smb,
+ 	ret = regmap_update_bits(smb->regmap, CFG_STAT,
+ 				 CFG_STAT_ACTIVE_HIGH | CFG_STAT_DISABLED,
+ 				 CFG_STAT_DISABLED);
+-	if (ret < 0)
+-		client->irq = 0;
+ 
+ 	smb347_set_writable(smb, false);
+ 
+@@ -1345,6 +1348,7 @@ static int smb347_probe(struct i2c_client *client,
+ 		if (ret < 0) {
+ 			dev_warn(dev, "failed to initialize IRQ: %d\n", ret);
+ 			dev_warn(dev, "disabling IRQ support\n");
++			smb->irq_unsupported = true;
+ 		} else {
+ 			smb347_irq_enable(smb);
+ 		}
+@@ -1357,8 +1361,8 @@ static int smb347_remove(struct i2c_client *client)
+ {
+ 	struct smb347_charger *smb = i2c_get_clientdata(client);
+ 
+-	if (client->irq)
+-		smb347_irq_disable(smb);
++	smb347_irq_disable(smb);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pwm/pwm-iqs620a.c b/drivers/pwm/pwm-iqs620a.c
+index 7d33e36464360..3e967a12458c6 100644
+--- a/drivers/pwm/pwm-iqs620a.c
++++ b/drivers/pwm/pwm-iqs620a.c
+@@ -46,7 +46,8 @@ static int iqs620_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ {
+ 	struct iqs620_pwm_private *iqs620_pwm;
+ 	struct iqs62x_core *iqs62x;
+-	u64 duty_scale;
++	unsigned int duty_cycle;
++	unsigned int duty_scale;
+ 	int ret;
+ 
+ 	if (state->polarity != PWM_POLARITY_NORMAL)
+@@ -70,7 +71,8 @@ static int iqs620_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	 * For lower duty cycles (e.g. 0), the PWM output is simply disabled to
+ 	 * allow an external pull-down resistor to hold the GPIO3/LTX pin low.
+ 	 */
+-	duty_scale = div_u64(state->duty_cycle * 256, IQS620_PWM_PERIOD_NS);
++	duty_cycle = min_t(u64, state->duty_cycle, IQS620_PWM_PERIOD_NS);
++	duty_scale = duty_cycle * 256 / IQS620_PWM_PERIOD_NS;
+ 
+ 	mutex_lock(&iqs620_pwm->lock);
+ 
+@@ -82,7 +84,7 @@ static int iqs620_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	}
+ 
+ 	if (duty_scale) {
+-		u8 duty_val = min_t(u64, duty_scale - 1, 0xff);
++		u8 duty_val = duty_scale - 1;
+ 
+ 		ret = regmap_write(iqs62x->regmap, IQS620_PWM_DUTY_CYCLE,
+ 				   duty_val);
+diff --git a/drivers/pwm/pwm-rockchip.c b/drivers/pwm/pwm-rockchip.c
+index 77c23a2c6d71e..3b8da7b0091b1 100644
+--- a/drivers/pwm/pwm-rockchip.c
++++ b/drivers/pwm/pwm-rockchip.c
+@@ -289,6 +289,7 @@ static int rockchip_pwm_probe(struct platform_device *pdev)
+ 	struct rockchip_pwm_chip *pc;
+ 	struct resource *r;
+ 	u32 enable_conf, ctrl;
++	bool enabled;
+ 	int ret, count;
+ 
+ 	id = of_match_device(rockchip_pwm_dt_ids, &pdev->dev);
+@@ -332,9 +333,9 @@ static int rockchip_pwm_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	ret = clk_prepare(pc->pclk);
++	ret = clk_prepare_enable(pc->pclk);
+ 	if (ret) {
+-		dev_err(&pdev->dev, "Can't prepare APB clk: %d\n", ret);
++		dev_err(&pdev->dev, "Can't prepare enable APB clk: %d\n", ret);
+ 		goto err_clk;
+ 	}
+ 
+@@ -351,23 +352,26 @@ static int rockchip_pwm_probe(struct platform_device *pdev)
+ 		pc->chip.of_pwm_n_cells = 3;
+ 	}
+ 
++	enable_conf = pc->data->enable_conf;
++	ctrl = readl_relaxed(pc->base + pc->data->regs.ctrl);
++	enabled = (ctrl & enable_conf) == enable_conf;
++
+ 	ret = pwmchip_add(&pc->chip);
+ 	if (ret < 0) {
+-		clk_unprepare(pc->clk);
+ 		dev_err(&pdev->dev, "pwmchip_add() failed: %d\n", ret);
+ 		goto err_pclk;
+ 	}
+ 
+ 	/* Keep the PWM clk enabled if the PWM appears to be up and running. */
+-	enable_conf = pc->data->enable_conf;
+-	ctrl = readl_relaxed(pc->base + pc->data->regs.ctrl);
+-	if ((ctrl & enable_conf) != enable_conf)
++	if (!enabled)
+ 		clk_disable(pc->clk);
+ 
++	clk_disable(pc->pclk);
++
+ 	return 0;
+ 
+ err_pclk:
+-	clk_unprepare(pc->pclk);
++	clk_disable_unprepare(pc->pclk);
+ err_clk:
+ 	clk_disable_unprepare(pc->clk);
+ 
+diff --git a/drivers/regulator/axp20x-regulator.c b/drivers/regulator/axp20x-regulator.c
+index 90cb8445f7216..d260c442b788d 100644
+--- a/drivers/regulator/axp20x-regulator.c
++++ b/drivers/regulator/axp20x-regulator.c
+@@ -1070,7 +1070,7 @@ static int axp20x_set_dcdc_freq(struct platform_device *pdev, u32 dcdcfreq)
+ static int axp20x_regulator_parse_dt(struct platform_device *pdev)
+ {
+ 	struct device_node *np, *regulators;
+-	int ret;
++	int ret = 0;
+ 	u32 dcdcfreq = 0;
+ 
+ 	np = of_node_get(pdev->dev.parent->of_node);
+@@ -1085,13 +1085,12 @@ static int axp20x_regulator_parse_dt(struct platform_device *pdev)
+ 		ret = axp20x_set_dcdc_freq(pdev, dcdcfreq);
+ 		if (ret < 0) {
+ 			dev_err(&pdev->dev, "Error setting dcdc frequency: %d\n", ret);
+-			return ret;
+ 		}
+-
+ 		of_node_put(regulators);
+ 	}
+ 
+-	return 0;
++	of_node_put(np);
++	return ret;
+ }
+ 
+ static int axp20x_set_dcdc_workmode(struct regulator_dev *rdev, int id, u32 workmode)
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 35098dbd32a3c..7b3de8b0b1caf 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1617,7 +1617,7 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ 					  const char *supply_name)
+ {
+ 	struct regulator *regulator;
+-	int err;
++	int err = 0;
+ 
+ 	if (dev) {
+ 		char buf[REG_STR_SIZE];
+@@ -1663,8 +1663,8 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ 		}
+ 	}
+ 
+-	regulator->debugfs = debugfs_create_dir(supply_name,
+-						rdev->debugfs);
++	if (err != -EEXIST)
++		regulator->debugfs = debugfs_create_dir(supply_name, rdev->debugfs);
+ 	if (!regulator->debugfs) {
+ 		rdev_dbg(rdev, "Failed to create debugfs directory\n");
+ 	} else {
+diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
+index a22c4b5f64f7e..52e4396d40717 100644
+--- a/drivers/regulator/qcom-rpmh-regulator.c
++++ b/drivers/regulator/qcom-rpmh-regulator.c
+@@ -732,6 +732,15 @@ static const struct rpmh_vreg_hw_data pmic5_hfsmps515 = {
+ 	.of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode,
+ };
+ 
++static const struct rpmh_vreg_hw_data pmic5_hfsmps515_1 = {
++	.regulator_type = VRM,
++	.ops = &rpmh_regulator_vrm_ops,
++	.voltage_range = REGULATOR_LINEAR_RANGE(900000, 0, 4, 16000),
++	.n_voltages = 5,
++	.pmic_mode_map = pmic_mode_map_pmic5_smps,
++	.of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode,
++};
++
+ static const struct rpmh_vreg_hw_data pmic5_bob = {
+ 	.regulator_type = VRM,
+ 	.ops = &rpmh_regulator_vrm_bypass_ops,
+@@ -874,6 +883,19 @@ static const struct rpmh_vreg_init_data pm8009_vreg_data[] = {
+ 	RPMH_VREG("ldo4",   "ldo%s4",  &pmic5_nldo,      "vdd-l4"),
+ 	RPMH_VREG("ldo5",   "ldo%s5",  &pmic5_pldo,      "vdd-l5-l6"),
+ 	RPMH_VREG("ldo6",   "ldo%s6",  &pmic5_pldo,      "vdd-l5-l6"),
++	RPMH_VREG("ldo7",   "ldo%s7",  &pmic5_pldo_lv,   "vdd-l7"),
++	{},
++};
++
++static const struct rpmh_vreg_init_data pm8009_1_vreg_data[] = {
++	RPMH_VREG("smps1",  "smp%s1",  &pmic5_hfsmps510, "vdd-s1"),
++	RPMH_VREG("smps2",  "smp%s2",  &pmic5_hfsmps515_1, "vdd-s2"),
++	RPMH_VREG("ldo1",   "ldo%s1",  &pmic5_nldo,      "vdd-l1"),
++	RPMH_VREG("ldo2",   "ldo%s2",  &pmic5_nldo,      "vdd-l2"),
++	RPMH_VREG("ldo3",   "ldo%s3",  &pmic5_nldo,      "vdd-l3"),
++	RPMH_VREG("ldo4",   "ldo%s4",  &pmic5_nldo,      "vdd-l4"),
++	RPMH_VREG("ldo5",   "ldo%s5",  &pmic5_pldo,      "vdd-l5-l6"),
++	RPMH_VREG("ldo6",   "ldo%s6",  &pmic5_pldo,      "vdd-l5-l6"),
+ 	RPMH_VREG("ldo7",   "ldo%s6",  &pmic5_pldo_lv,   "vdd-l7"),
+ 	{},
+ };
+@@ -976,6 +998,10 @@ static const struct of_device_id __maybe_unused rpmh_regulator_match_table[] = {
+ 		.compatible = "qcom,pm8009-rpmh-regulators",
+ 		.data = pm8009_vreg_data,
+ 	},
++	{
++		.compatible = "qcom,pm8009-1-rpmh-regulators",
++		.data = pm8009_1_vreg_data,
++	},
+ 	{
+ 		.compatible = "qcom,pm8150-rpmh-regulators",
+ 		.data = pm8150_vreg_data,
+diff --git a/drivers/regulator/rohm-regulator.c b/drivers/regulator/rohm-regulator.c
+index 399002383b28b..5c558b153d55e 100644
+--- a/drivers/regulator/rohm-regulator.c
++++ b/drivers/regulator/rohm-regulator.c
+@@ -52,9 +52,12 @@ int rohm_regulator_set_dvs_levels(const struct rohm_dvs_config *dvs,
+ 	char *prop;
+ 	unsigned int reg, mask, omask, oreg = desc->enable_reg;
+ 
+-	for (i = 0; i < ROHM_DVS_LEVEL_MAX && !ret; i++) {
+-		if (dvs->level_map & (1 << i)) {
+-			switch (i + 1) {
++	for (i = 0; i < ROHM_DVS_LEVEL_VALID_AMOUNT && !ret; i++) {
++		int bit;
++
++		bit = BIT(i);
++		if (dvs->level_map & bit) {
++			switch (bit) {
+ 			case ROHM_DVS_LEVEL_RUN:
+ 				prop = "rohm,dvs-run-voltage";
+ 				reg = dvs->run_reg;
+diff --git a/drivers/regulator/s5m8767.c b/drivers/regulator/s5m8767.c
+index 3fa472127e9a1..7c111bbdc2afa 100644
+--- a/drivers/regulator/s5m8767.c
++++ b/drivers/regulator/s5m8767.c
+@@ -544,14 +544,18 @@ static int s5m8767_pmic_dt_parse_pdata(struct platform_device *pdev,
+ 	rdata = devm_kcalloc(&pdev->dev,
+ 			     pdata->num_regulators, sizeof(*rdata),
+ 			     GFP_KERNEL);
+-	if (!rdata)
++	if (!rdata) {
++		of_node_put(regulators_np);
+ 		return -ENOMEM;
++	}
+ 
+ 	rmode = devm_kcalloc(&pdev->dev,
+ 			     pdata->num_regulators, sizeof(*rmode),
+ 			     GFP_KERNEL);
+-	if (!rmode)
++	if (!rmode) {
++		of_node_put(regulators_np);
+ 		return -ENOMEM;
++	}
+ 
+ 	pdata->regulators = rdata;
+ 	pdata->opmode = rmode;
+@@ -573,10 +577,13 @@ static int s5m8767_pmic_dt_parse_pdata(struct platform_device *pdev,
+ 			"s5m8767,pmic-ext-control",
+ 			GPIOD_OUT_HIGH | GPIOD_FLAGS_BIT_NONEXCLUSIVE,
+ 			"s5m8767");
+-		if (PTR_ERR(rdata->ext_control_gpiod) == -ENOENT)
++		if (PTR_ERR(rdata->ext_control_gpiod) == -ENOENT) {
+ 			rdata->ext_control_gpiod = NULL;
+-		else if (IS_ERR(rdata->ext_control_gpiod))
++		} else if (IS_ERR(rdata->ext_control_gpiod)) {
++			of_node_put(reg_np);
++			of_node_put(regulators_np);
+ 			return PTR_ERR(rdata->ext_control_gpiod);
++		}
+ 
+ 		rdata->id = i;
+ 		rdata->initdata = of_get_regulator_init_data(
+diff --git a/drivers/remoteproc/mtk_common.h b/drivers/remoteproc/mtk_common.h
+index f2bcc9d9fda65..58388057062a2 100644
+--- a/drivers/remoteproc/mtk_common.h
++++ b/drivers/remoteproc/mtk_common.h
+@@ -47,6 +47,7 @@
+ 
+ #define MT8192_CORE0_SW_RSTN_CLR	0x10000
+ #define MT8192_CORE0_SW_RSTN_SET	0x10004
++#define MT8192_CORE0_WDT_IRQ		0x10030
+ #define MT8192_CORE0_WDT_CFG		0x10034
+ 
+ #define SCP_FW_VER_LEN			32
+diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c
+index 52fa01d67c18e..00a6e57dfa16b 100644
+--- a/drivers/remoteproc/mtk_scp.c
++++ b/drivers/remoteproc/mtk_scp.c
+@@ -184,17 +184,19 @@ static void mt8192_scp_irq_handler(struct mtk_scp *scp)
+ 
+ 	scp_to_host = readl(scp->reg_base + MT8192_SCP2APMCU_IPC_SET);
+ 
+-	if (scp_to_host & MT8192_SCP_IPC_INT_BIT)
++	if (scp_to_host & MT8192_SCP_IPC_INT_BIT) {
+ 		scp_ipi_handler(scp);
+-	else
+-		scp_wdt_handler(scp, scp_to_host);
+ 
+-	/*
+-	 * SCP won't send another interrupt until we clear
+-	 * MT8192_SCP2APMCU_IPC.
+-	 */
+-	writel(MT8192_SCP_IPC_INT_BIT,
+-	       scp->reg_base + MT8192_SCP2APMCU_IPC_CLR);
++		/*
++		 * SCP won't send another interrupt until we clear
++		 * MT8192_SCP2APMCU_IPC.
++		 */
++		writel(MT8192_SCP_IPC_INT_BIT,
++		       scp->reg_base + MT8192_SCP2APMCU_IPC_CLR);
++	} else {
++		scp_wdt_handler(scp, scp_to_host);
++		writel(1, scp->reg_base + MT8192_CORE0_WDT_IRQ);
++	}
+ }
+ 
+ static irqreturn_t scp_irq_handler(int irq, void *priv)
+diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
+index 65ad9d0b47ab1..33e4ecd6c6659 100644
+--- a/drivers/rtc/Kconfig
++++ b/drivers/rtc/Kconfig
+@@ -692,6 +692,7 @@ config RTC_DRV_S5M
+ 	tristate "Samsung S2M/S5M series"
+ 	depends on MFD_SEC_CORE || COMPILE_TEST
+ 	select REGMAP_IRQ
++	select REGMAP_I2C
+ 	help
+ 	  If you say yes here you will get support for the
+ 	  RTC of Samsung S2MPS14 and S5M PMIC series.
+@@ -1296,7 +1297,7 @@ config RTC_DRV_OPAL
+ 
+ config RTC_DRV_ZYNQMP
+ 	tristate "Xilinx Zynq Ultrascale+ MPSoC RTC"
+-	depends on OF
++	depends on OF && HAS_IOMEM
+ 	help
+ 	  If you say yes here you get support for the RTC controller found on
+ 	  Xilinx Zynq Ultrascale+ MPSoC.
+diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c
+index f60f9fb252142..3b9eda311c273 100644
+--- a/drivers/s390/crypto/zcrypt_api.c
++++ b/drivers/s390/crypto/zcrypt_api.c
+@@ -1438,6 +1438,8 @@ static int icarsamodexpo_ioctl(struct ap_perms *perms, unsigned long arg)
+ 			if (rc == -EAGAIN)
+ 				tr.again_counter++;
+ 		} while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
++	if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
++		rc = -EIO;
+ 	if (rc) {
+ 		ZCRYPT_DBF(DBF_DEBUG, "ioctl ICARSAMODEXPO rc=%d\n", rc);
+ 		return rc;
+@@ -1481,6 +1483,8 @@ static int icarsacrt_ioctl(struct ap_perms *perms, unsigned long arg)
+ 			if (rc == -EAGAIN)
+ 				tr.again_counter++;
+ 		} while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
++	if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
++		rc = -EIO;
+ 	if (rc) {
+ 		ZCRYPT_DBF(DBF_DEBUG, "ioctl ICARSACRT rc=%d\n", rc);
+ 		return rc;
+@@ -1524,6 +1528,8 @@ static int zsecsendcprb_ioctl(struct ap_perms *perms, unsigned long arg)
+ 			if (rc == -EAGAIN)
+ 				tr.again_counter++;
+ 		} while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
++	if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
++		rc = -EIO;
+ 	if (rc)
+ 		ZCRYPT_DBF(DBF_DEBUG, "ioctl ZSENDCPRB rc=%d status=0x%x\n",
+ 			   rc, xcRB.status);
+@@ -1568,6 +1574,8 @@ static int zsendep11cprb_ioctl(struct ap_perms *perms, unsigned long arg)
+ 			if (rc == -EAGAIN)
+ 				tr.again_counter++;
+ 		} while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
++	if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
++		rc = -EIO;
+ 	if (rc)
+ 		ZCRYPT_DBF(DBF_DEBUG, "ioctl ZSENDEP11CPRB rc=%d\n", rc);
+ 	if (copy_to_user(uxcrb, &xcrb, sizeof(xcrb)))
+@@ -1744,6 +1752,8 @@ static long trans_modexpo32(struct ap_perms *perms, struct file *filp,
+ 			if (rc == -EAGAIN)
+ 				tr.again_counter++;
+ 		} while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
++	if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
++		rc = -EIO;
+ 	if (rc)
+ 		return rc;
+ 	return put_user(mex64.outputdatalength,
+@@ -1795,6 +1805,8 @@ static long trans_modexpo_crt32(struct ap_perms *perms, struct file *filp,
+ 			if (rc == -EAGAIN)
+ 				tr.again_counter++;
+ 		} while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
++	if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
++		rc = -EIO;
+ 	if (rc)
+ 		return rc;
+ 	return put_user(crt64.outputdatalength,
+@@ -1865,6 +1877,8 @@ static long trans_xcRB32(struct ap_perms *perms, struct file *filp,
+ 			if (rc == -EAGAIN)
+ 				tr.again_counter++;
+ 		} while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
++	if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
++		rc = -EIO;
+ 	xcRB32.reply_control_blk_length = xcRB64.reply_control_blk_length;
+ 	xcRB32.reply_data_length = xcRB64.reply_data_length;
+ 	xcRB32.status = xcRB64.status;
+diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
+index 5730572b52cd5..54e686dca6dea 100644
+--- a/drivers/s390/virtio/virtio_ccw.c
++++ b/drivers/s390/virtio/virtio_ccw.c
+@@ -117,7 +117,7 @@ struct virtio_rev_info {
+ };
+ 
+ /* the highest virtio-ccw revision we support */
+-#define VIRTIO_CCW_REV_MAX 1
++#define VIRTIO_CCW_REV_MAX 2
+ 
+ struct virtio_ccw_vq_info {
+ 	struct virtqueue *vq;
+@@ -952,7 +952,7 @@ static u8 virtio_ccw_get_status(struct virtio_device *vdev)
+ 	u8 old_status = vcdev->dma_area->status;
+ 	struct ccw1 *ccw;
+ 
+-	if (vcdev->revision < 1)
++	if (vcdev->revision < 2)
+ 		return vcdev->dma_area->status;
+ 
+ 	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
+diff --git a/drivers/scsi/bnx2fc/Kconfig b/drivers/scsi/bnx2fc/Kconfig
+index 3cf7e08df8093..ecdc0f0f4f4e6 100644
+--- a/drivers/scsi/bnx2fc/Kconfig
++++ b/drivers/scsi/bnx2fc/Kconfig
+@@ -5,6 +5,7 @@ config SCSI_BNX2X_FCOE
+ 	depends on (IPV6 || IPV6=n)
+ 	depends on LIBFC
+ 	depends on LIBFCOE
++	depends on MMU
+ 	select NETDEVICES
+ 	select ETHERNET
+ 	select NET_VENDOR_BROADCOM
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 9746d2f4fcfad..f4a672e549716 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -1154,13 +1154,14 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ 	struct lpfc_vport *vport = pmb->vport;
+ 	LPFC_MBOXQ_t *sparam_mb;
+ 	struct lpfc_dmabuf *sparam_mp;
++	u16 status = pmb->u.mb.mbxStatus;
+ 	int rc;
+ 
+-	if (pmb->u.mb.mbxStatus)
+-		goto out;
+-
+ 	mempool_free(pmb, phba->mbox_mem_pool);
+ 
++	if (status)
++		goto out;
++
+ 	/* don't perform discovery for SLI4 loopback diagnostic test */
+ 	if ((phba->sli_rev == LPFC_SLI_REV4) &&
+ 	    !(phba->hba_flag & HBA_FCOE_MODE) &&
+@@ -1223,12 +1224,10 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ 
+ out:
+ 	lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+-			 "0306 CONFIG_LINK mbxStatus error x%x "
+-			 "HBA state x%x\n",
+-			 pmb->u.mb.mbxStatus, vport->port_state);
+-sparam_out:
+-	mempool_free(pmb, phba->mbox_mem_pool);
++			 "0306 CONFIG_LINK mbxStatus error x%x HBA state x%x\n",
++			 status, vport->port_state);
+ 
++sparam_out:
+ 	lpfc_linkdown(phba);
+ 
+ 	lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c
+index bb7431912d410..144a893e7335b 100644
+--- a/drivers/scsi/qla2xxx/qla_dbg.c
++++ b/drivers/scsi/qla2xxx/qla_dbg.c
+@@ -202,6 +202,7 @@ qla24xx_dump_ram(struct qla_hw_data *ha, uint32_t addr, __be32 *ram,
+ 		wrt_reg_word(&reg->mailbox0, MBC_DUMP_RISC_RAM_EXTENDED);
+ 		wrt_reg_word(&reg->mailbox1, LSW(addr));
+ 		wrt_reg_word(&reg->mailbox8, MSW(addr));
++		wrt_reg_word(&reg->mailbox10, 0);
+ 
+ 		wrt_reg_word(&reg->mailbox2, MSW(LSD(dump_dma)));
+ 		wrt_reg_word(&reg->mailbox3, LSW(LSD(dump_dma)));
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index d6325fb2ef73b..4ebd8851a0c9f 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -4277,7 +4277,8 @@ qla2x00_dump_ram(scsi_qla_host_t *vha, dma_addr_t req_dma, uint32_t addr,
+ 	if (MSW(addr) || IS_FWI2_CAPABLE(vha->hw)) {
+ 		mcp->mb[0] = MBC_DUMP_RISC_RAM_EXTENDED;
+ 		mcp->mb[8] = MSW(addr);
+-		mcp->out_mb = MBX_8|MBX_0;
++		mcp->mb[10] = 0;
++		mcp->out_mb = MBX_10|MBX_8|MBX_0;
+ 	} else {
+ 		mcp->mb[0] = MBC_DUMP_RISC_RAM;
+ 		mcp->out_mb = MBX_0;
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index fedb89d4ac3f0..20a6564f87d9f 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -709,9 +709,9 @@ static int sd_sec_submit(void *data, u16 spsp, u8 secp, void *buffer,
+ 	put_unaligned_be16(spsp, &cdb[2]);
+ 	put_unaligned_be32(len, &cdb[6]);
+ 
+-	ret = scsi_execute_req(sdev, cdb,
+-			send ? DMA_TO_DEVICE : DMA_FROM_DEVICE,
+-			buffer, len, NULL, SD_TIMEOUT, sdkp->max_retries, NULL);
++	ret = scsi_execute(sdev, cdb, send ? DMA_TO_DEVICE : DMA_FROM_DEVICE,
++		buffer, len, NULL, NULL, SD_TIMEOUT, sdkp->max_retries, 0,
++		RQF_PM, NULL);
+ 	return ret <= 0 ? ret : -EIO;
+ }
+ #endif /* CONFIG_BLK_SED_OPAL */
+diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
+index cf07b7f935790..87a7274e4632b 100644
+--- a/drivers/scsi/sd_zbc.c
++++ b/drivers/scsi/sd_zbc.c
+@@ -688,6 +688,7 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp)
+ 	unsigned int nr_zones = sdkp->rev_nr_zones;
+ 	u32 max_append;
+ 	int ret = 0;
++	unsigned int flags;
+ 
+ 	/*
+ 	 * For all zoned disks, initialize zone append emulation data if not
+@@ -720,16 +721,19 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp)
+ 	    disk->queue->nr_zones == nr_zones)
+ 		goto unlock;
+ 
++	flags = memalloc_noio_save();
+ 	sdkp->zone_blocks = zone_blocks;
+ 	sdkp->nr_zones = nr_zones;
+-	sdkp->rev_wp_offset = kvcalloc(nr_zones, sizeof(u32), GFP_NOIO);
++	sdkp->rev_wp_offset = kvcalloc(nr_zones, sizeof(u32), GFP_KERNEL);
+ 	if (!sdkp->rev_wp_offset) {
+ 		ret = -ENOMEM;
++		memalloc_noio_restore(flags);
+ 		goto unlock;
+ 	}
+ 
+ 	ret = blk_revalidate_disk_zones(disk, sd_zbc_revalidate_zones_cb);
+ 
++	memalloc_noio_restore(flags);
+ 	kvfree(sdkp->rev_wp_offset);
+ 	sdkp->rev_wp_offset = NULL;
+ 
+diff --git a/drivers/soc/aspeed/aspeed-lpc-snoop.c b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+index f3d8d53ab84de..dbe5325a324d5 100644
+--- a/drivers/soc/aspeed/aspeed-lpc-snoop.c
++++ b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+@@ -11,6 +11,7 @@
+  */
+ 
+ #include <linux/bitops.h>
++#include <linux/clk.h>
+ #include <linux/interrupt.h>
+ #include <linux/fs.h>
+ #include <linux/kfifo.h>
+@@ -67,6 +68,7 @@ struct aspeed_lpc_snoop_channel {
+ struct aspeed_lpc_snoop {
+ 	struct regmap		*regmap;
+ 	int			irq;
++	struct clk		*clk;
+ 	struct aspeed_lpc_snoop_channel chan[NUM_SNOOP_CHANNELS];
+ };
+ 
+@@ -282,22 +284,42 @@ static int aspeed_lpc_snoop_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
++	lpc_snoop->clk = devm_clk_get(dev, NULL);
++	if (IS_ERR(lpc_snoop->clk)) {
++		rc = PTR_ERR(lpc_snoop->clk);
++		if (rc != -EPROBE_DEFER)
++			dev_err(dev, "couldn't get clock\n");
++		return rc;
++	}
++	rc = clk_prepare_enable(lpc_snoop->clk);
++	if (rc) {
++		dev_err(dev, "couldn't enable clock\n");
++		return rc;
++	}
++
+ 	rc = aspeed_lpc_snoop_config_irq(lpc_snoop, pdev);
+ 	if (rc)
+-		return rc;
++		goto err;
+ 
+ 	rc = aspeed_lpc_enable_snoop(lpc_snoop, dev, 0, port);
+ 	if (rc)
+-		return rc;
++		goto err;
+ 
+ 	/* Configuration of 2nd snoop channel port is optional */
+ 	if (of_property_read_u32_index(dev->of_node, "snoop-ports",
+ 				       1, &port) == 0) {
+ 		rc = aspeed_lpc_enable_snoop(lpc_snoop, dev, 1, port);
+-		if (rc)
++		if (rc) {
+ 			aspeed_lpc_disable_snoop(lpc_snoop, 0);
++			goto err;
++		}
+ 	}
+ 
++	return 0;
++
++err:
++	clk_disable_unprepare(lpc_snoop->clk);
++
+ 	return rc;
+ }
+ 
+@@ -309,6 +331,8 @@ static int aspeed_lpc_snoop_remove(struct platform_device *pdev)
+ 	aspeed_lpc_disable_snoop(lpc_snoop, 0);
+ 	aspeed_lpc_disable_snoop(lpc_snoop, 1);
+ 
++	clk_disable_unprepare(lpc_snoop->clk);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/soc/qcom/ocmem.c b/drivers/soc/qcom/ocmem.c
+index 7f9e9944d1eae..f1875dc31ae2c 100644
+--- a/drivers/soc/qcom/ocmem.c
++++ b/drivers/soc/qcom/ocmem.c
+@@ -189,6 +189,7 @@ struct ocmem *of_get_ocmem(struct device *dev)
+ {
+ 	struct platform_device *pdev;
+ 	struct device_node *devnode;
++	struct ocmem *ocmem;
+ 
+ 	devnode = of_parse_phandle(dev->of_node, "sram", 0);
+ 	if (!devnode || !devnode->parent) {
+@@ -202,7 +203,12 @@ struct ocmem *of_get_ocmem(struct device *dev)
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 	}
+ 
+-	return platform_get_drvdata(pdev);
++	ocmem = platform_get_drvdata(pdev);
++	if (!ocmem) {
++		dev_err(dev, "Cannot get ocmem\n");
++		return ERR_PTR(-ENODEV);
++	}
++	return ocmem;
+ }
+ EXPORT_SYMBOL(of_get_ocmem);
+ 
+diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
+index b44ede48decc0..e0620416e5743 100644
+--- a/drivers/soc/qcom/socinfo.c
++++ b/drivers/soc/qcom/socinfo.c
+@@ -280,7 +280,7 @@ static int qcom_show_pmic_model(struct seq_file *seq, void *p)
+ 	if (model < 0)
+ 		return -EINVAL;
+ 
+-	if (model <= ARRAY_SIZE(pmic_models) && pmic_models[model])
++	if (model < ARRAY_SIZE(pmic_models) && pmic_models[model])
+ 		seq_printf(seq, "%s\n", pmic_models[model]);
+ 	else
+ 		seq_printf(seq, "unknown (%d)\n", model);
+diff --git a/drivers/soc/samsung/exynos-asv.c b/drivers/soc/samsung/exynos-asv.c
+index 8abf4dfaa5c59..5daeadc363829 100644
+--- a/drivers/soc/samsung/exynos-asv.c
++++ b/drivers/soc/samsung/exynos-asv.c
+@@ -119,11 +119,6 @@ static int exynos_asv_probe(struct platform_device *pdev)
+ 	u32 product_id = 0;
+ 	int ret, i;
+ 
+-	cpu_dev = get_cpu_device(0);
+-	ret = dev_pm_opp_get_opp_count(cpu_dev);
+-	if (ret < 0)
+-		return -EPROBE_DEFER;
+-
+ 	asv = devm_kzalloc(&pdev->dev, sizeof(*asv), GFP_KERNEL);
+ 	if (!asv)
+ 		return -ENOMEM;
+@@ -134,7 +129,13 @@ static int exynos_asv_probe(struct platform_device *pdev)
+ 		return PTR_ERR(asv->chipid_regmap);
+ 	}
+ 
+-	regmap_read(asv->chipid_regmap, EXYNOS_CHIPID_REG_PRO_ID, &product_id);
++	ret = regmap_read(asv->chipid_regmap, EXYNOS_CHIPID_REG_PRO_ID,
++			  &product_id);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Cannot read revision from ChipID: %d\n",
++			ret);
++		return -ENODEV;
++	}
+ 
+ 	switch (product_id & EXYNOS_MASK) {
+ 	case 0xE5422000:
+@@ -144,6 +145,11 @@ static int exynos_asv_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
++	cpu_dev = get_cpu_device(0);
++	ret = dev_pm_opp_get_opp_count(cpu_dev);
++	if (ret < 0)
++		return -EPROBE_DEFER;
++
+ 	ret = of_property_read_u32(pdev->dev.of_node, "samsung,asv-bin",
+ 				   &asv->of_bin);
+ 	if (ret < 0)
+diff --git a/drivers/soc/ti/pm33xx.c b/drivers/soc/ti/pm33xx.c
+index d2f5e7001a93c..dc21aa855a458 100644
+--- a/drivers/soc/ti/pm33xx.c
++++ b/drivers/soc/ti/pm33xx.c
+@@ -536,7 +536,7 @@ static int am33xx_pm_probe(struct platform_device *pdev)
+ 
+ 	ret = am33xx_push_sram_idle();
+ 	if (ret)
+-		goto err_free_sram;
++		goto err_unsetup_rtc;
+ 
+ 	am33xx_pm_set_ipc_ops();
+ 
+@@ -566,6 +566,9 @@ static int am33xx_pm_probe(struct platform_device *pdev)
+ 
+ err_put_wkup_m3_ipc:
+ 	wkup_m3_ipc_put(m3_ipc);
++err_unsetup_rtc:
++	iounmap(rtc_base_virt);
++	clk_put(rtc_fck);
+ err_free_sram:
+ 	am33xx_pm_free_sram();
+ 	pm33xx_dev = NULL;
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index 8eaf31e766773..1fe786855095a 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -405,10 +405,11 @@ sdw_nwrite_no_pm(struct sdw_slave *slave, u32 addr, size_t count, u8 *val)
+ 	return sdw_transfer(slave->bus, &msg);
+ }
+ 
+-static int sdw_write_no_pm(struct sdw_slave *slave, u32 addr, u8 value)
++int sdw_write_no_pm(struct sdw_slave *slave, u32 addr, u8 value)
+ {
+ 	return sdw_nwrite_no_pm(slave, addr, 1, &value);
+ }
++EXPORT_SYMBOL(sdw_write_no_pm);
+ 
+ static int
+ sdw_bread_no_pm(struct sdw_bus *bus, u16 dev_num, u32 addr)
+@@ -476,8 +477,7 @@ int sdw_bwrite_no_pm_unlocked(struct sdw_bus *bus, u16 dev_num, u32 addr, u8 val
+ }
+ EXPORT_SYMBOL(sdw_bwrite_no_pm_unlocked);
+ 
+-static int
+-sdw_read_no_pm(struct sdw_slave *slave, u32 addr)
++int sdw_read_no_pm(struct sdw_slave *slave, u32 addr)
+ {
+ 	u8 buf;
+ 	int ret;
+@@ -488,6 +488,19 @@ sdw_read_no_pm(struct sdw_slave *slave, u32 addr)
+ 	else
+ 		return buf;
+ }
++EXPORT_SYMBOL(sdw_read_no_pm);
++
++static int sdw_update_no_pm(struct sdw_slave *slave, u32 addr, u8 mask, u8 val)
++{
++	int tmp;
++
++	tmp = sdw_read_no_pm(slave, addr);
++	if (tmp < 0)
++		return tmp;
++
++	tmp = (tmp & ~mask) | val;
++	return sdw_write_no_pm(slave, addr, tmp);
++}
+ 
+ /**
+  * sdw_nread() - Read "n" contiguous SDW Slave registers
+@@ -500,16 +513,16 @@ int sdw_nread(struct sdw_slave *slave, u32 addr, size_t count, u8 *val)
+ {
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(slave->bus->dev);
++	ret = pm_runtime_get_sync(&slave->dev);
+ 	if (ret < 0 && ret != -EACCES) {
+-		pm_runtime_put_noidle(slave->bus->dev);
++		pm_runtime_put_noidle(&slave->dev);
+ 		return ret;
+ 	}
+ 
+ 	ret = sdw_nread_no_pm(slave, addr, count, val);
+ 
+-	pm_runtime_mark_last_busy(slave->bus->dev);
+-	pm_runtime_put(slave->bus->dev);
++	pm_runtime_mark_last_busy(&slave->dev);
++	pm_runtime_put(&slave->dev);
+ 
+ 	return ret;
+ }
+@@ -526,16 +539,16 @@ int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, u8 *val)
+ {
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(slave->bus->dev);
++	ret = pm_runtime_get_sync(&slave->dev);
+ 	if (ret < 0 && ret != -EACCES) {
+-		pm_runtime_put_noidle(slave->bus->dev);
++		pm_runtime_put_noidle(&slave->dev);
+ 		return ret;
+ 	}
+ 
+ 	ret = sdw_nwrite_no_pm(slave, addr, count, val);
+ 
+-	pm_runtime_mark_last_busy(slave->bus->dev);
+-	pm_runtime_put(slave->bus->dev);
++	pm_runtime_mark_last_busy(&slave->dev);
++	pm_runtime_put(&slave->dev);
+ 
+ 	return ret;
+ }
+@@ -1210,7 +1223,7 @@ static int sdw_slave_set_frequency(struct sdw_slave *slave)
+ 	}
+ 	scale_index++;
+ 
+-	ret = sdw_write(slave, SDW_SCP_BUS_CLOCK_BASE, base);
++	ret = sdw_write_no_pm(slave, SDW_SCP_BUS_CLOCK_BASE, base);
+ 	if (ret < 0) {
+ 		dev_err(&slave->dev,
+ 			"SDW_SCP_BUS_CLOCK_BASE write failed:%d\n", ret);
+@@ -1218,13 +1231,13 @@ static int sdw_slave_set_frequency(struct sdw_slave *slave)
+ 	}
+ 
+ 	/* initialize scale for both banks */
+-	ret = sdw_write(slave, SDW_SCP_BUSCLOCK_SCALE_B0, scale_index);
++	ret = sdw_write_no_pm(slave, SDW_SCP_BUSCLOCK_SCALE_B0, scale_index);
+ 	if (ret < 0) {
+ 		dev_err(&slave->dev,
+ 			"SDW_SCP_BUSCLOCK_SCALE_B0 write failed:%d\n", ret);
+ 		return ret;
+ 	}
+-	ret = sdw_write(slave, SDW_SCP_BUSCLOCK_SCALE_B1, scale_index);
++	ret = sdw_write_no_pm(slave, SDW_SCP_BUSCLOCK_SCALE_B1, scale_index);
+ 	if (ret < 0)
+ 		dev_err(&slave->dev,
+ 			"SDW_SCP_BUSCLOCK_SCALE_B1 write failed:%d\n", ret);
+@@ -1256,7 +1269,7 @@ static int sdw_initialize_slave(struct sdw_slave *slave)
+ 	val = slave->prop.scp_int1_mask;
+ 
+ 	/* Enable SCP interrupts */
+-	ret = sdw_update(slave, SDW_SCP_INTMASK1, val, val);
++	ret = sdw_update_no_pm(slave, SDW_SCP_INTMASK1, val, val);
+ 	if (ret < 0) {
+ 		dev_err(slave->bus->dev,
+ 			"SDW_SCP_INTMASK1 write failed:%d\n", ret);
+@@ -1271,7 +1284,7 @@ static int sdw_initialize_slave(struct sdw_slave *slave)
+ 	val = prop->dp0_prop->imp_def_interrupts;
+ 	val |= SDW_DP0_INT_PORT_READY | SDW_DP0_INT_BRA_FAILURE;
+ 
+-	ret = sdw_update(slave, SDW_DP0_INTMASK, val, val);
++	ret = sdw_update_no_pm(slave, SDW_DP0_INTMASK, val, val);
+ 	if (ret < 0)
+ 		dev_err(slave->bus->dev,
+ 			"SDW_DP0_INTMASK read failed:%d\n", ret);
+@@ -1433,7 +1446,7 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave)
+ 	ret = pm_runtime_get_sync(&slave->dev);
+ 	if (ret < 0 && ret != -EACCES) {
+ 		dev_err(&slave->dev, "Failed to resume device: %d\n", ret);
+-		pm_runtime_put_noidle(slave->bus->dev);
++		pm_runtime_put_noidle(&slave->dev);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
+index 9fa55164354a2..580660599f461 100644
+--- a/drivers/soundwire/cadence_master.c
++++ b/drivers/soundwire/cadence_master.c
+@@ -484,10 +484,10 @@ cdns_fill_msg_resp(struct sdw_cdns *cdns,
+ 		if (!(cdns->response_buf[i] & CDNS_MCP_RESP_ACK)) {
+ 			no_ack = 1;
+ 			dev_dbg_ratelimited(cdns->dev, "Msg Ack not received\n");
+-			if (cdns->response_buf[i] & CDNS_MCP_RESP_NACK) {
+-				nack = 1;
+-				dev_err_ratelimited(cdns->dev, "Msg NACK received\n");
+-			}
++		}
++		if (cdns->response_buf[i] & CDNS_MCP_RESP_NACK) {
++			nack = 1;
++			dev_err_ratelimited(cdns->dev, "Msg NACK received\n");
+ 		}
+ 	}
+ 
+diff --git a/drivers/soundwire/intel_init.c b/drivers/soundwire/intel_init.c
+index cabdadb09a1bb..bc8520eb385ec 100644
+--- a/drivers/soundwire/intel_init.c
++++ b/drivers/soundwire/intel_init.c
+@@ -405,11 +405,12 @@ int sdw_intel_acpi_scan(acpi_handle *parent_handle,
+ {
+ 	acpi_status status;
+ 
++	info->handle = NULL;
+ 	status = acpi_walk_namespace(ACPI_TYPE_DEVICE,
+ 				     parent_handle, 1,
+ 				     sdw_intel_acpi_cb,
+ 				     NULL, info, NULL);
+-	if (ACPI_FAILURE(status))
++	if (ACPI_FAILURE(status) || info->handle == NULL)
+ 		return -ENODEV;
+ 
+ 	return sdw_intel_scan_controller(info);
+diff --git a/drivers/spi/spi-atmel.c b/drivers/spi/spi-atmel.c
+index 0e5e64a80848d..1db43cbead575 100644
+--- a/drivers/spi/spi-atmel.c
++++ b/drivers/spi/spi-atmel.c
+@@ -1590,7 +1590,7 @@ static int atmel_spi_probe(struct platform_device *pdev)
+ 		if (ret == 0) {
+ 			as->use_dma = true;
+ 		} else if (ret == -EPROBE_DEFER) {
+-			return ret;
++			goto out_unmap_regs;
+ 		}
+ 	} else if (as->caps.has_pdc_support) {
+ 		as->use_pdc = true;
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index ba7d40c2922f7..826b01f346246 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -461,7 +461,7 @@ static int cqspi_read_setup(struct cqspi_flash_pdata *f_pdata,
+ 	/* Setup dummy clock cycles */
+ 	dummy_clk = op->dummy.nbytes * 8;
+ 	if (dummy_clk > CQSPI_DUMMY_CLKS_MAX)
+-		dummy_clk = CQSPI_DUMMY_CLKS_MAX;
++		return -EOPNOTSUPP;
+ 
+ 	if (dummy_clk)
+ 		reg |= (dummy_clk & CQSPI_REG_RD_INSTR_DUMMY_MASK)
+diff --git a/drivers/spi/spi-dw-bt1.c b/drivers/spi/spi-dw-bt1.c
+index c279b7891e3ac..bc9d5eab3c589 100644
+--- a/drivers/spi/spi-dw-bt1.c
++++ b/drivers/spi/spi-dw-bt1.c
+@@ -84,7 +84,7 @@ static void dw_spi_bt1_dirmap_copy_from_map(void *to, void __iomem *from, size_t
+ 	if (shift) {
+ 		chunk = min_t(size_t, 4 - shift, len);
+ 		data = readl_relaxed(from - shift);
+-		memcpy(to, &data + shift, chunk);
++		memcpy(to, (char *)&data + shift, chunk);
+ 		from += chunk;
+ 		to += chunk;
+ 		len -= chunk;
+diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
+index 6d8e0a05a5355..e4a8d203f9408 100644
+--- a/drivers/spi/spi-fsl-spi.c
++++ b/drivers/spi/spi-fsl-spi.c
+@@ -695,7 +695,7 @@ static void fsl_spi_cs_control(struct spi_device *spi, bool on)
+ 
+ 		if (WARN_ON_ONCE(!pinfo->immr_spi_cs))
+ 			return;
+-		iowrite32be(on ? SPI_BOOT_SEL_BIT : 0, pinfo->immr_spi_cs);
++		iowrite32be(on ? 0 : SPI_BOOT_SEL_BIT, pinfo->immr_spi_cs);
+ 	}
+ }
+ 
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 8df5e973404f0..831a38920fa98 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -1713,7 +1713,7 @@ static int spi_imx_probe(struct platform_device *pdev)
+ 	master->dev.of_node = pdev->dev.of_node;
+ 	ret = spi_bitbang_start(&spi_imx->bitbang);
+ 	if (ret) {
+-		dev_err(&pdev->dev, "bitbang start failed with %d\n", ret);
++		dev_err_probe(&pdev->dev, ret, "bitbang start failed\n");
+ 		goto out_bitbang_start;
+ 	}
+ 
+diff --git a/drivers/spi/spi-pxa2xx-pci.c b/drivers/spi/spi-pxa2xx-pci.c
+index f236e3034cf85..aafac128bb5f1 100644
+--- a/drivers/spi/spi-pxa2xx-pci.c
++++ b/drivers/spi/spi-pxa2xx-pci.c
+@@ -21,7 +21,8 @@ enum {
+ 	PORT_BSW1,
+ 	PORT_BSW2,
+ 	PORT_CE4100,
+-	PORT_LPT,
++	PORT_LPT0,
++	PORT_LPT1,
+ };
+ 
+ struct pxa_spi_info {
+@@ -57,8 +58,10 @@ static struct dw_dma_slave bsw1_rx_param = { .src_id = 7 };
+ static struct dw_dma_slave bsw2_tx_param = { .dst_id = 8 };
+ static struct dw_dma_slave bsw2_rx_param = { .src_id = 9 };
+ 
+-static struct dw_dma_slave lpt_tx_param = { .dst_id = 0 };
+-static struct dw_dma_slave lpt_rx_param = { .src_id = 1 };
++static struct dw_dma_slave lpt1_tx_param = { .dst_id = 0 };
++static struct dw_dma_slave lpt1_rx_param = { .src_id = 1 };
++static struct dw_dma_slave lpt0_tx_param = { .dst_id = 2 };
++static struct dw_dma_slave lpt0_rx_param = { .src_id = 3 };
+ 
+ static bool lpss_dma_filter(struct dma_chan *chan, void *param)
+ {
+@@ -185,12 +188,19 @@ static struct pxa_spi_info spi_info_configs[] = {
+ 		.num_chipselect = 1,
+ 		.max_clk_rate = 50000000,
+ 	},
+-	[PORT_LPT] = {
++	[PORT_LPT0] = {
+ 		.type = LPSS_LPT_SSP,
+ 		.port_id = 0,
+ 		.setup = lpss_spi_setup,
+-		.tx_param = &lpt_tx_param,
+-		.rx_param = &lpt_rx_param,
++		.tx_param = &lpt0_tx_param,
++		.rx_param = &lpt0_rx_param,
++	},
++	[PORT_LPT1] = {
++		.type = LPSS_LPT_SSP,
++		.port_id = 1,
++		.setup = lpss_spi_setup,
++		.tx_param = &lpt1_tx_param,
++		.rx_param = &lpt1_rx_param,
+ 	},
+ };
+ 
+@@ -285,8 +295,9 @@ static const struct pci_device_id pxa2xx_spi_pci_devices[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x2290), PORT_BSW1 },
+ 	{ PCI_VDEVICE(INTEL, 0x22ac), PORT_BSW2 },
+ 	{ PCI_VDEVICE(INTEL, 0x2e6a), PORT_CE4100 },
+-	{ PCI_VDEVICE(INTEL, 0x9ce6), PORT_LPT },
+-	{ },
++	{ PCI_VDEVICE(INTEL, 0x9ce5), PORT_LPT0 },
++	{ PCI_VDEVICE(INTEL, 0x9ce6), PORT_LPT1 },
++	{ }
+ };
+ MODULE_DEVICE_TABLE(pci, pxa2xx_spi_pci_devices);
+ 
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 6017209c6d2f7..6eeb39669a866 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -1677,6 +1677,10 @@ static int stm32_spi_transfer_one(struct spi_master *master,
+ 	struct stm32_spi *spi = spi_master_get_devdata(master);
+ 	int ret;
+ 
++	/* Don't do anything on 0 bytes transfers */
++	if (transfer->len == 0)
++		return 0;
++
+ 	spi->tx_buf = transfer->tx_buf;
+ 	spi->rx_buf = transfer->rx_buf;
+ 	spi->tx_len = spi->tx_buf ? transfer->len : 0;
+diff --git a/drivers/spi/spi-synquacer.c b/drivers/spi/spi-synquacer.c
+index 8cdca6ab80989..ea706d9629cb1 100644
+--- a/drivers/spi/spi-synquacer.c
++++ b/drivers/spi/spi-synquacer.c
+@@ -490,6 +490,10 @@ static void synquacer_spi_set_cs(struct spi_device *spi, bool enable)
+ 	val &= ~(SYNQUACER_HSSPI_DMPSEL_CS_MASK <<
+ 		 SYNQUACER_HSSPI_DMPSEL_CS_SHIFT);
+ 	val |= spi->chip_select << SYNQUACER_HSSPI_DMPSEL_CS_SHIFT;
++
++	if (!enable)
++		val |= SYNQUACER_HSSPI_DMSTOP_STOP;
++
+ 	writel(val, sspi->regs + SYNQUACER_HSSPI_REG_DMSTART);
+ }
+ 
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 7694e1ae5b0b2..4257a2d368f71 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1259,7 +1259,7 @@ static int spi_transfer_one_message(struct spi_controller *ctlr,
+ 			ptp_read_system_prets(xfer->ptp_sts);
+ 		}
+ 
+-		if (xfer->tx_buf || xfer->rx_buf) {
++		if ((xfer->tx_buf || xfer->rx_buf) && xfer->len) {
+ 			reinit_completion(&ctlr->xfer_completion);
+ 
+ fallback_pio:
+diff --git a/drivers/spmi/spmi-pmic-arb.c b/drivers/spmi/spmi-pmic-arb.c
+index de844b4121107..bbbd311eda030 100644
+--- a/drivers/spmi/spmi-pmic-arb.c
++++ b/drivers/spmi/spmi-pmic-arb.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2012-2015, 2017, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2012-2015, 2017, 2021, The Linux Foundation. All rights reserved.
+  */
+ #include <linux/bitmap.h>
+ #include <linux/delay.h>
+@@ -505,8 +505,7 @@ static void cleanup_irq(struct spmi_pmic_arb *pmic_arb, u16 apid, int id)
+ static void periph_interrupt(struct spmi_pmic_arb *pmic_arb, u16 apid)
+ {
+ 	unsigned int irq;
+-	u32 status;
+-	int id;
++	u32 status, id;
+ 	u8 sid = (pmic_arb->apid_data[apid].ppid >> 8) & 0xF;
+ 	u8 per = pmic_arb->apid_data[apid].ppid & 0xFF;
+ 
+diff --git a/drivers/staging/gdm724x/gdm_usb.c b/drivers/staging/gdm724x/gdm_usb.c
+index dc4da66c3695b..54bdb64f52e88 100644
+--- a/drivers/staging/gdm724x/gdm_usb.c
++++ b/drivers/staging/gdm724x/gdm_usb.c
+@@ -56,20 +56,24 @@ static int gdm_usb_recv(void *priv_dev,
+ 
+ static int request_mac_address(struct lte_udev *udev)
+ {
+-	u8 buf[16] = {0,};
+-	struct hci_packet *hci = (struct hci_packet *)buf;
++	struct hci_packet *hci;
+ 	struct usb_device *usbdev = udev->usbdev;
+ 	int actual;
+ 	int ret = -1;
+ 
++	hci = kmalloc(struct_size(hci, data, 1), GFP_KERNEL);
++	if (!hci)
++		return -ENOMEM;
++
+ 	hci->cmd_evt = gdm_cpu_to_dev16(udev->gdm_ed, LTE_GET_INFORMATION);
+ 	hci->len = gdm_cpu_to_dev16(udev->gdm_ed, 1);
+ 	hci->data[0] = MAC_ADDRESS;
+ 
+-	ret = usb_bulk_msg(usbdev, usb_sndbulkpipe(usbdev, 2), buf, 5,
++	ret = usb_bulk_msg(usbdev, usb_sndbulkpipe(usbdev, 2), hci, 5,
+ 			   &actual, 1000);
+ 
+ 	udev->request_mac_addr = 1;
++	kfree(hci);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/staging/media/allegro-dvt/allegro-core.c b/drivers/staging/media/allegro-dvt/allegro-core.c
+index 9f718f43282bc..640451134072b 100644
+--- a/drivers/staging/media/allegro-dvt/allegro-core.c
++++ b/drivers/staging/media/allegro-dvt/allegro-core.c
+@@ -2483,8 +2483,6 @@ static int allegro_open(struct file *file)
+ 	INIT_LIST_HEAD(&channel->buffers_reference);
+ 	INIT_LIST_HEAD(&channel->buffers_intermediate);
+ 
+-	list_add(&channel->list, &dev->channels);
+-
+ 	channel->fh.m2m_ctx = v4l2_m2m_ctx_init(dev->m2m_dev, channel,
+ 						allegro_queue_init);
+ 
+@@ -2493,6 +2491,7 @@ static int allegro_open(struct file *file)
+ 		goto error;
+ 	}
+ 
++	list_add(&channel->list, &dev->channels);
+ 	file->private_data = &channel->fh;
+ 	v4l2_fh_add(&channel->fh);
+ 
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_subdev.c b/drivers/staging/media/atomisp/pci/atomisp_subdev.c
+index 52b9fb18c87f0..dcc2dd981ca60 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_subdev.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_subdev.c
+@@ -349,12 +349,20 @@ static int isp_subdev_get_selection(struct v4l2_subdev *sd,
+ 	return 0;
+ }
+ 
+-static char *atomisp_pad_str[] = { "ATOMISP_SUBDEV_PAD_SINK",
+-				   "ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE",
+-				   "ATOMISP_SUBDEV_PAD_SOURCE_VF",
+-				   "ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW",
+-				   "ATOMISP_SUBDEV_PAD_SOURCE_VIDEO"
+-				 };
++static const char *atomisp_pad_str(unsigned int pad)
++{
++	static const char *const pad_str[] = {
++		"ATOMISP_SUBDEV_PAD_SINK",
++		"ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE",
++		"ATOMISP_SUBDEV_PAD_SOURCE_VF",
++		"ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW",
++		"ATOMISP_SUBDEV_PAD_SOURCE_VIDEO",
++	};
++
++	if (pad >= ARRAY_SIZE(pad_str))
++		return "ATOMISP_INVALID_PAD";
++	return pad_str[pad];
++}
+ 
+ int atomisp_subdev_set_selection(struct v4l2_subdev *sd,
+ 				 struct v4l2_subdev_pad_config *cfg,
+@@ -378,7 +386,7 @@ int atomisp_subdev_set_selection(struct v4l2_subdev *sd,
+ 
+ 	dev_dbg(isp->dev,
+ 		"sel: pad %s tgt %s l %d t %d w %d h %d which %s f 0x%8.8x\n",
+-		atomisp_pad_str[pad], target == V4L2_SEL_TGT_CROP
++		atomisp_pad_str(pad), target == V4L2_SEL_TGT_CROP
+ 		? "V4L2_SEL_TGT_CROP" : "V4L2_SEL_TGT_COMPOSE",
+ 		r->left, r->top, r->width, r->height,
+ 		which == V4L2_SUBDEV_FORMAT_TRY ? "V4L2_SUBDEV_FORMAT_TRY"
+@@ -612,7 +620,7 @@ void atomisp_subdev_set_ffmt(struct v4l2_subdev *sd,
+ 	enum atomisp_input_stream_id stream_id;
+ 
+ 	dev_dbg(isp->dev, "ffmt: pad %s w %d h %d code 0x%8.8x which %s\n",
+-		atomisp_pad_str[pad], ffmt->width, ffmt->height, ffmt->code,
++		atomisp_pad_str(pad), ffmt->width, ffmt->height, ffmt->code,
+ 		which == V4L2_SUBDEV_FORMAT_TRY ? "V4L2_SUBDEV_FORMAT_TRY"
+ 		: "V4L2_SUBDEV_FORMAT_ACTIVE");
+ 
+diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm.c b/drivers/staging/media/atomisp/pci/hmm/hmm.c
+index e0eaff0f8a228..6a5ee46070898 100644
+--- a/drivers/staging/media/atomisp/pci/hmm/hmm.c
++++ b/drivers/staging/media/atomisp/pci/hmm/hmm.c
+@@ -269,7 +269,7 @@ ia_css_ptr hmm_alloc(size_t bytes, enum hmm_bo_type type,
+ 		hmm_set(bo->start, 0, bytes);
+ 
+ 	dev_dbg(atomisp_dev,
+-		"%s: pages: 0x%08x (%ld bytes), type: %d from highmem %d, user ptr %p, cached %d\n",
++		"%s: pages: 0x%08x (%zu bytes), type: %d from highmem %d, user ptr %p, cached %d\n",
+ 		__func__, bo->start, bytes, type, from_highmem, userptr, cached);
+ 
+ 	return bo->start;
+diff --git a/drivers/staging/media/imx/imx-media-csc-scaler.c b/drivers/staging/media/imx/imx-media-csc-scaler.c
+index fab1155a5958c..63a0204502a8b 100644
+--- a/drivers/staging/media/imx/imx-media-csc-scaler.c
++++ b/drivers/staging/media/imx/imx-media-csc-scaler.c
+@@ -869,11 +869,7 @@ void imx_media_csc_scaler_device_unregister(struct imx_media_video_dev *vdev)
+ 	struct ipu_csc_scaler_priv *priv = vdev_to_priv(vdev);
+ 	struct video_device *vfd = priv->vdev.vfd;
+ 
+-	mutex_lock(&priv->mutex);
+-
+ 	video_unregister_device(vfd);
+-
+-	mutex_unlock(&priv->mutex);
+ }
+ 
+ struct imx_media_video_dev *
+diff --git a/drivers/staging/media/imx/imx-media-dev.c b/drivers/staging/media/imx/imx-media-dev.c
+index 6d2205461e565..338b8bd0bb076 100644
+--- a/drivers/staging/media/imx/imx-media-dev.c
++++ b/drivers/staging/media/imx/imx-media-dev.c
+@@ -53,6 +53,7 @@ static int imx6_media_probe_complete(struct v4l2_async_notifier *notifier)
+ 	imxmd->m2m_vdev = imx_media_csc_scaler_device_init(imxmd);
+ 	if (IS_ERR(imxmd->m2m_vdev)) {
+ 		ret = PTR_ERR(imxmd->m2m_vdev);
++		imxmd->m2m_vdev = NULL;
+ 		goto unlock;
+ 	}
+ 
+@@ -107,10 +108,14 @@ static int imx_media_remove(struct platform_device *pdev)
+ 
+ 	v4l2_info(&imxmd->v4l2_dev, "Removing imx-media\n");
+ 
++	if (imxmd->m2m_vdev) {
++		imx_media_csc_scaler_device_unregister(imxmd->m2m_vdev);
++		imxmd->m2m_vdev = NULL;
++	}
++
+ 	v4l2_async_notifier_unregister(&imxmd->notifier);
+ 	imx_media_unregister_ipu_internal_subdevs(imxmd);
+ 	v4l2_async_notifier_cleanup(&imxmd->notifier);
+-	imx_media_csc_scaler_device_unregister(imxmd->m2m_vdev);
+ 	media_device_unregister(&imxmd->md);
+ 	v4l2_device_unregister(&imxmd->v4l2_dev);
+ 	media_device_cleanup(&imxmd->md);
+diff --git a/drivers/staging/media/imx/imx7-media-csi.c b/drivers/staging/media/imx/imx7-media-csi.c
+index a3f3df9017046..ac52b1daf9914 100644
+--- a/drivers/staging/media/imx/imx7-media-csi.c
++++ b/drivers/staging/media/imx/imx7-media-csi.c
+@@ -499,6 +499,7 @@ static int imx7_csi_pad_link_validate(struct v4l2_subdev *sd,
+ 				      struct v4l2_subdev_format *sink_fmt)
+ {
+ 	struct imx7_csi *csi = v4l2_get_subdevdata(sd);
++	struct media_entity *src;
+ 	struct media_pad *pad;
+ 	int ret;
+ 
+@@ -509,11 +510,21 @@ static int imx7_csi_pad_link_validate(struct v4l2_subdev *sd,
+ 	if (!csi->src_sd)
+ 		return -EPIPE;
+ 
++	src = &csi->src_sd->entity;
++
++	/*
++	 * if the source is neither a CSI MUX or CSI-2 get the one directly
++	 * upstream from this CSI
++	 */
++	if (src->function != MEDIA_ENT_F_VID_IF_BRIDGE &&
++	    src->function != MEDIA_ENT_F_VID_MUX)
++		src = &csi->sd.entity;
++
+ 	/*
+-	 * find the entity that is selected by the CSI mux. This is needed
++	 * find the entity that is selected by the source. This is needed
+ 	 * to distinguish between a parallel or CSI-2 pipeline.
+ 	 */
+-	pad = imx_media_pipeline_pad(&csi->src_sd->entity, 0, 0, true);
++	pad = imx_media_pipeline_pad(src, 0, 0, true);
+ 	if (!pad)
+ 		return -ENODEV;
+ 
+@@ -1164,12 +1175,12 @@ static int imx7_csi_notify_bound(struct v4l2_async_notifier *notifier,
+ 	struct imx7_csi *csi = imx7_csi_notifier_to_dev(notifier);
+ 	struct media_pad *sink = &csi->sd.entity.pads[IMX7_CSI_PAD_SINK];
+ 
+-	/* The bound subdev must always be the CSI mux */
+-	if (WARN_ON(sd->entity.function != MEDIA_ENT_F_VID_MUX))
+-		return -ENXIO;
+-
+-	/* Mark it as such via its group id */
+-	sd->grp_id = IMX_MEDIA_GRP_ID_CSI_MUX;
++	/*
++	 * If the subdev is a video mux, it must be one of the CSI
++	 * muxes. Mark it as such via its group id.
++	 */
++	if (sd->entity.function == MEDIA_ENT_F_VID_MUX)
++		sd->grp_id = IMX_MEDIA_GRP_ID_CSI_MUX;
+ 
+ 	return v4l2_create_fwnode_links_to_pad(sd, sink);
+ }
+diff --git a/drivers/staging/mt7621-dma/Makefile b/drivers/staging/mt7621-dma/Makefile
+index 66da1bf10c32e..23256d1286f3e 100644
+--- a/drivers/staging/mt7621-dma/Makefile
++++ b/drivers/staging/mt7621-dma/Makefile
+@@ -1,4 +1,4 @@
+ # SPDX-License-Identifier: GPL-2.0
+-obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
++obj-$(CONFIG_MTK_HSDMA) += hsdma-mt7621.o
+ 
+ ccflags-y += -I$(srctree)/drivers/dma
+diff --git a/drivers/staging/mt7621-dma/hsdma-mt7621.c b/drivers/staging/mt7621-dma/hsdma-mt7621.c
+new file mode 100644
+index 0000000000000..28f1c2446be16
+--- /dev/null
++++ b/drivers/staging/mt7621-dma/hsdma-mt7621.c
+@@ -0,0 +1,760 @@
++// SPDX-License-Identifier: GPL-2.0+
++/*
++ *  Copyright (C) 2015, Michael Lee <igvtee@gmail.com>
++ *  MTK HSDMA support
++ */
++
++#include <linux/dmaengine.h>
++#include <linux/dma-mapping.h>
++#include <linux/err.h>
++#include <linux/init.h>
++#include <linux/list.h>
++#include <linux/module.h>
++#include <linux/platform_device.h>
++#include <linux/slab.h>
++#include <linux/spinlock.h>
++#include <linux/irq.h>
++#include <linux/of_dma.h>
++#include <linux/reset.h>
++#include <linux/of_device.h>
++
++#include "virt-dma.h"
++
++#define HSDMA_BASE_OFFSET		0x800
++
++#define HSDMA_REG_TX_BASE		0x00
++#define HSDMA_REG_TX_CNT		0x04
++#define HSDMA_REG_TX_CTX		0x08
++#define HSDMA_REG_TX_DTX		0x0c
++#define HSDMA_REG_RX_BASE		0x100
++#define HSDMA_REG_RX_CNT		0x104
++#define HSDMA_REG_RX_CRX		0x108
++#define HSDMA_REG_RX_DRX		0x10c
++#define HSDMA_REG_INFO			0x200
++#define HSDMA_REG_GLO_CFG		0x204
++#define HSDMA_REG_RST_CFG		0x208
++#define HSDMA_REG_DELAY_INT		0x20c
++#define HSDMA_REG_FREEQ_THRES		0x210
++#define HSDMA_REG_INT_STATUS		0x220
++#define HSDMA_REG_INT_MASK		0x228
++#define HSDMA_REG_SCH_Q01		0x280
++#define HSDMA_REG_SCH_Q23		0x284
++
++#define HSDMA_DESCS_MAX			0xfff
++#define HSDMA_DESCS_NUM			8
++#define HSDMA_DESCS_MASK		(HSDMA_DESCS_NUM - 1)
++#define HSDMA_NEXT_DESC(x)		(((x) + 1) & HSDMA_DESCS_MASK)
++
++/* HSDMA_REG_INFO */
++#define HSDMA_INFO_INDEX_MASK		0xf
++#define HSDMA_INFO_INDEX_SHIFT		24
++#define HSDMA_INFO_BASE_MASK		0xff
++#define HSDMA_INFO_BASE_SHIFT		16
++#define HSDMA_INFO_RX_MASK		0xff
++#define HSDMA_INFO_RX_SHIFT		8
++#define HSDMA_INFO_TX_MASK		0xff
++#define HSDMA_INFO_TX_SHIFT		0
++
++/* HSDMA_REG_GLO_CFG */
++#define HSDMA_GLO_TX_2B_OFFSET		BIT(31)
++#define HSDMA_GLO_CLK_GATE		BIT(30)
++#define HSDMA_GLO_BYTE_SWAP		BIT(29)
++#define HSDMA_GLO_MULTI_DMA		BIT(10)
++#define HSDMA_GLO_TWO_BUF		BIT(9)
++#define HSDMA_GLO_32B_DESC		BIT(8)
++#define HSDMA_GLO_BIG_ENDIAN		BIT(7)
++#define HSDMA_GLO_TX_DONE		BIT(6)
++#define HSDMA_GLO_BT_MASK		0x3
++#define HSDMA_GLO_BT_SHIFT		4
++#define HSDMA_GLO_RX_BUSY		BIT(3)
++#define HSDMA_GLO_RX_DMA		BIT(2)
++#define HSDMA_GLO_TX_BUSY		BIT(1)
++#define HSDMA_GLO_TX_DMA		BIT(0)
++
++#define HSDMA_BT_SIZE_16BYTES		(0 << HSDMA_GLO_BT_SHIFT)
++#define HSDMA_BT_SIZE_32BYTES		(1 << HSDMA_GLO_BT_SHIFT)
++#define HSDMA_BT_SIZE_64BYTES		(2 << HSDMA_GLO_BT_SHIFT)
++#define HSDMA_BT_SIZE_128BYTES		(3 << HSDMA_GLO_BT_SHIFT)
++
++#define HSDMA_GLO_DEFAULT		(HSDMA_GLO_MULTI_DMA | \
++		HSDMA_GLO_RX_DMA | HSDMA_GLO_TX_DMA | HSDMA_BT_SIZE_32BYTES)
++
++/* HSDMA_REG_RST_CFG */
++#define HSDMA_RST_RX_SHIFT		16
++#define HSDMA_RST_TX_SHIFT		0
++
++/* HSDMA_REG_DELAY_INT */
++#define HSDMA_DELAY_INT_EN		BIT(15)
++#define HSDMA_DELAY_PEND_OFFSET		8
++#define HSDMA_DELAY_TIME_OFFSET		0
++#define HSDMA_DELAY_TX_OFFSET		16
++#define HSDMA_DELAY_RX_OFFSET		0
++
++#define HSDMA_DELAY_INIT(x)		(HSDMA_DELAY_INT_EN | \
++		((x) << HSDMA_DELAY_PEND_OFFSET))
++#define HSDMA_DELAY(x)			((HSDMA_DELAY_INIT(x) << \
++		HSDMA_DELAY_TX_OFFSET) | HSDMA_DELAY_INIT(x))
++
++/* HSDMA_REG_INT_STATUS */
++#define HSDMA_INT_DELAY_RX_COH		BIT(31)
++#define HSDMA_INT_DELAY_RX_INT		BIT(30)
++#define HSDMA_INT_DELAY_TX_COH		BIT(29)
++#define HSDMA_INT_DELAY_TX_INT		BIT(28)
++#define HSDMA_INT_RX_MASK		0x3
++#define HSDMA_INT_RX_SHIFT		16
++#define HSDMA_INT_RX_Q0			BIT(16)
++#define HSDMA_INT_TX_MASK		0xf
++#define HSDMA_INT_TX_SHIFT		0
++#define HSDMA_INT_TX_Q0			BIT(0)
++
++/* tx/rx dma desc flags */
++#define HSDMA_PLEN_MASK			0x3fff
++#define HSDMA_DESC_DONE			BIT(31)
++#define HSDMA_DESC_LS0			BIT(30)
++#define HSDMA_DESC_PLEN0(_x)		(((_x) & HSDMA_PLEN_MASK) << 16)
++#define HSDMA_DESC_TAG			BIT(15)
++#define HSDMA_DESC_LS1			BIT(14)
++#define HSDMA_DESC_PLEN1(_x)		((_x) & HSDMA_PLEN_MASK)
++
++/* align 4 bytes */
++#define HSDMA_ALIGN_SIZE		3
++/* align size 128bytes */
++#define HSDMA_MAX_PLEN			0x3f80
++
++struct hsdma_desc {
++	u32 addr0;
++	u32 flags;
++	u32 addr1;
++	u32 unused;
++};
++
++struct mtk_hsdma_sg {
++	dma_addr_t src_addr;
++	dma_addr_t dst_addr;
++	u32 len;
++};
++
++struct mtk_hsdma_desc {
++	struct virt_dma_desc vdesc;
++	unsigned int num_sgs;
++	struct mtk_hsdma_sg sg[1];
++};
++
++struct mtk_hsdma_chan {
++	struct virt_dma_chan vchan;
++	unsigned int id;
++	dma_addr_t desc_addr;
++	int tx_idx;
++	int rx_idx;
++	struct hsdma_desc *tx_ring;
++	struct hsdma_desc *rx_ring;
++	struct mtk_hsdma_desc *desc;
++	unsigned int next_sg;
++};
++
++struct mtk_hsdam_engine {
++	struct dma_device ddev;
++	struct device_dma_parameters dma_parms;
++	void __iomem *base;
++	struct tasklet_struct task;
++	volatile unsigned long chan_issued;
++
++	struct mtk_hsdma_chan chan[1];
++};
++
++static inline struct mtk_hsdam_engine *mtk_hsdma_chan_get_dev(
++		struct mtk_hsdma_chan *chan)
++{
++	return container_of(chan->vchan.chan.device, struct mtk_hsdam_engine,
++			ddev);
++}
++
++static inline struct mtk_hsdma_chan *to_mtk_hsdma_chan(struct dma_chan *c)
++{
++	return container_of(c, struct mtk_hsdma_chan, vchan.chan);
++}
++
++static inline struct mtk_hsdma_desc *to_mtk_hsdma_desc(
++		struct virt_dma_desc *vdesc)
++{
++	return container_of(vdesc, struct mtk_hsdma_desc, vdesc);
++}
++
++static inline u32 mtk_hsdma_read(struct mtk_hsdam_engine *hsdma, u32 reg)
++{
++	return readl(hsdma->base + reg);
++}
++
++static inline void mtk_hsdma_write(struct mtk_hsdam_engine *hsdma,
++				   unsigned int reg, u32 val)
++{
++	writel(val, hsdma->base + reg);
++}
++
++static void mtk_hsdma_reset_chan(struct mtk_hsdam_engine *hsdma,
++				 struct mtk_hsdma_chan *chan)
++{
++	chan->tx_idx = 0;
++	chan->rx_idx = HSDMA_DESCS_NUM - 1;
++
++	mtk_hsdma_write(hsdma, HSDMA_REG_TX_CTX, chan->tx_idx);
++	mtk_hsdma_write(hsdma, HSDMA_REG_RX_CRX, chan->rx_idx);
++
++	mtk_hsdma_write(hsdma, HSDMA_REG_RST_CFG,
++			0x1 << (chan->id + HSDMA_RST_TX_SHIFT));
++	mtk_hsdma_write(hsdma, HSDMA_REG_RST_CFG,
++			0x1 << (chan->id + HSDMA_RST_RX_SHIFT));
++}
++
++static void hsdma_dump_reg(struct mtk_hsdam_engine *hsdma)
++{
++	dev_dbg(hsdma->ddev.dev, "tbase %08x, tcnt %08x, "
++			"tctx %08x, tdtx: %08x, rbase %08x, "
++			"rcnt %08x, rctx %08x, rdtx %08x\n",
++			mtk_hsdma_read(hsdma, HSDMA_REG_TX_BASE),
++			mtk_hsdma_read(hsdma, HSDMA_REG_TX_CNT),
++			mtk_hsdma_read(hsdma, HSDMA_REG_TX_CTX),
++			mtk_hsdma_read(hsdma, HSDMA_REG_TX_DTX),
++			mtk_hsdma_read(hsdma, HSDMA_REG_RX_BASE),
++			mtk_hsdma_read(hsdma, HSDMA_REG_RX_CNT),
++			mtk_hsdma_read(hsdma, HSDMA_REG_RX_CRX),
++			mtk_hsdma_read(hsdma, HSDMA_REG_RX_DRX));
++
++	dev_dbg(hsdma->ddev.dev, "info %08x, glo %08x, delay %08x, intr_stat %08x, intr_mask %08x\n",
++			mtk_hsdma_read(hsdma, HSDMA_REG_INFO),
++			mtk_hsdma_read(hsdma, HSDMA_REG_GLO_CFG),
++			mtk_hsdma_read(hsdma, HSDMA_REG_DELAY_INT),
++			mtk_hsdma_read(hsdma, HSDMA_REG_INT_STATUS),
++			mtk_hsdma_read(hsdma, HSDMA_REG_INT_MASK));
++}
++
++static void hsdma_dump_desc(struct mtk_hsdam_engine *hsdma,
++			    struct mtk_hsdma_chan *chan)
++{
++	struct hsdma_desc *tx_desc;
++	struct hsdma_desc *rx_desc;
++	int i;
++
++	dev_dbg(hsdma->ddev.dev, "tx idx: %d, rx idx: %d\n",
++		chan->tx_idx, chan->rx_idx);
++
++	for (i = 0; i < HSDMA_DESCS_NUM; i++) {
++		tx_desc = &chan->tx_ring[i];
++		rx_desc = &chan->rx_ring[i];
++
++		dev_dbg(hsdma->ddev.dev, "%d tx addr0: %08x, flags %08x, "
++				"tx addr1: %08x, rx addr0 %08x, flags %08x\n",
++				i, tx_desc->addr0, tx_desc->flags,
++				tx_desc->addr1, rx_desc->addr0, rx_desc->flags);
++	}
++}
++
++static void mtk_hsdma_reset(struct mtk_hsdam_engine *hsdma,
++			    struct mtk_hsdma_chan *chan)
++{
++	int i;
++
++	/* disable dma */
++	mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, 0);
++
++	/* disable intr */
++	mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, 0);
++
++	/* init desc value */
++	for (i = 0; i < HSDMA_DESCS_NUM; i++) {
++		chan->tx_ring[i].addr0 = 0;
++		chan->tx_ring[i].flags = HSDMA_DESC_LS0 | HSDMA_DESC_DONE;
++	}
++	for (i = 0; i < HSDMA_DESCS_NUM; i++) {
++		chan->rx_ring[i].addr0 = 0;
++		chan->rx_ring[i].flags = 0;
++	}
++
++	/* reset */
++	mtk_hsdma_reset_chan(hsdma, chan);
++
++	/* enable intr */
++	mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, HSDMA_INT_RX_Q0);
++
++	/* enable dma */
++	mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, HSDMA_GLO_DEFAULT);
++}
++
++static int mtk_hsdma_terminate_all(struct dma_chan *c)
++{
++	struct mtk_hsdma_chan *chan = to_mtk_hsdma_chan(c);
++	struct mtk_hsdam_engine *hsdma = mtk_hsdma_chan_get_dev(chan);
++	unsigned long timeout;
++	LIST_HEAD(head);
++
++	spin_lock_bh(&chan->vchan.lock);
++	chan->desc = NULL;
++	clear_bit(chan->id, &hsdma->chan_issued);
++	vchan_get_all_descriptors(&chan->vchan, &head);
++	spin_unlock_bh(&chan->vchan.lock);
++
++	vchan_dma_desc_free_list(&chan->vchan, &head);
++
++	/* wait dma transfer complete */
++	timeout = jiffies + msecs_to_jiffies(2000);
++	while (mtk_hsdma_read(hsdma, HSDMA_REG_GLO_CFG) &
++			(HSDMA_GLO_RX_BUSY | HSDMA_GLO_TX_BUSY)) {
++		if (time_after_eq(jiffies, timeout)) {
++			hsdma_dump_desc(hsdma, chan);
++			mtk_hsdma_reset(hsdma, chan);
++			dev_err(hsdma->ddev.dev, "timeout, reset it\n");
++			break;
++		}
++		cpu_relax();
++	}
++
++	return 0;
++}
++
++static int mtk_hsdma_start_transfer(struct mtk_hsdam_engine *hsdma,
++				    struct mtk_hsdma_chan *chan)
++{
++	dma_addr_t src, dst;
++	size_t len, tlen;
++	struct hsdma_desc *tx_desc, *rx_desc;
++	struct mtk_hsdma_sg *sg;
++	unsigned int i;
++	int rx_idx;
++
++	sg = &chan->desc->sg[0];
++	len = sg->len;
++	chan->desc->num_sgs = DIV_ROUND_UP(len, HSDMA_MAX_PLEN);
++
++	/* tx desc */
++	src = sg->src_addr;
++	for (i = 0; i < chan->desc->num_sgs; i++) {
++		tx_desc = &chan->tx_ring[chan->tx_idx];
++
++		if (len > HSDMA_MAX_PLEN)
++			tlen = HSDMA_MAX_PLEN;
++		else
++			tlen = len;
++
++		if (i & 0x1) {
++			tx_desc->addr1 = src;
++			tx_desc->flags |= HSDMA_DESC_PLEN1(tlen);
++		} else {
++			tx_desc->addr0 = src;
++			tx_desc->flags = HSDMA_DESC_PLEN0(tlen);
++
++			/* update index */
++			chan->tx_idx = HSDMA_NEXT_DESC(chan->tx_idx);
++		}
++
++		src += tlen;
++		len -= tlen;
++	}
++	if (i & 0x1)
++		tx_desc->flags |= HSDMA_DESC_LS0;
++	else
++		tx_desc->flags |= HSDMA_DESC_LS1;
++
++	/* rx desc */
++	rx_idx = HSDMA_NEXT_DESC(chan->rx_idx);
++	len = sg->len;
++	dst = sg->dst_addr;
++	for (i = 0; i < chan->desc->num_sgs; i++) {
++		rx_desc = &chan->rx_ring[rx_idx];
++		if (len > HSDMA_MAX_PLEN)
++			tlen = HSDMA_MAX_PLEN;
++		else
++			tlen = len;
++
++		rx_desc->addr0 = dst;
++		rx_desc->flags = HSDMA_DESC_PLEN0(tlen);
++
++		dst += tlen;
++		len -= tlen;
++
++		/* update index */
++		rx_idx = HSDMA_NEXT_DESC(rx_idx);
++	}
++
++	/* make sure desc and index all up to date */
++	wmb();
++	mtk_hsdma_write(hsdma, HSDMA_REG_TX_CTX, chan->tx_idx);
++
++	return 0;
++}
++
++static int gdma_next_desc(struct mtk_hsdma_chan *chan)
++{
++	struct virt_dma_desc *vdesc;
++
++	vdesc = vchan_next_desc(&chan->vchan);
++	if (!vdesc) {
++		chan->desc = NULL;
++		return 0;
++	}
++	chan->desc = to_mtk_hsdma_desc(vdesc);
++	chan->next_sg = 0;
++
++	return 1;
++}
++
++static void mtk_hsdma_chan_done(struct mtk_hsdam_engine *hsdma,
++				struct mtk_hsdma_chan *chan)
++{
++	struct mtk_hsdma_desc *desc;
++	int chan_issued;
++
++	chan_issued = 0;
++	spin_lock_bh(&chan->vchan.lock);
++	desc = chan->desc;
++	if (likely(desc)) {
++		if (chan->next_sg == desc->num_sgs) {
++			list_del(&desc->vdesc.node);
++			vchan_cookie_complete(&desc->vdesc);
++			chan_issued = gdma_next_desc(chan);
++		}
++	} else {
++		dev_dbg(hsdma->ddev.dev, "no desc to complete\n");
++	}
++
++	if (chan_issued)
++		set_bit(chan->id, &hsdma->chan_issued);
++	spin_unlock_bh(&chan->vchan.lock);
++}
++
++static irqreturn_t mtk_hsdma_irq(int irq, void *devid)
++{
++	struct mtk_hsdam_engine *hsdma = devid;
++	u32 status;
++
++	status = mtk_hsdma_read(hsdma, HSDMA_REG_INT_STATUS);
++	if (unlikely(!status))
++		return IRQ_NONE;
++
++	if (likely(status & HSDMA_INT_RX_Q0))
++		tasklet_schedule(&hsdma->task);
++	else
++		dev_dbg(hsdma->ddev.dev, "unhandle irq status %08x\n", status);
++	/* clean intr bits */
++	mtk_hsdma_write(hsdma, HSDMA_REG_INT_STATUS, status);
++
++	return IRQ_HANDLED;
++}
++
++static void mtk_hsdma_issue_pending(struct dma_chan *c)
++{
++	struct mtk_hsdma_chan *chan = to_mtk_hsdma_chan(c);
++	struct mtk_hsdam_engine *hsdma = mtk_hsdma_chan_get_dev(chan);
++
++	spin_lock_bh(&chan->vchan.lock);
++	if (vchan_issue_pending(&chan->vchan) && !chan->desc) {
++		if (gdma_next_desc(chan)) {
++			set_bit(chan->id, &hsdma->chan_issued);
++			tasklet_schedule(&hsdma->task);
++		} else {
++			dev_dbg(hsdma->ddev.dev, "no desc to issue\n");
++		}
++	}
++	spin_unlock_bh(&chan->vchan.lock);
++}
++
++static struct dma_async_tx_descriptor *mtk_hsdma_prep_dma_memcpy(
++		struct dma_chan *c, dma_addr_t dest, dma_addr_t src,
++		size_t len, unsigned long flags)
++{
++	struct mtk_hsdma_chan *chan = to_mtk_hsdma_chan(c);
++	struct mtk_hsdma_desc *desc;
++
++	if (len <= 0)
++		return NULL;
++
++	desc = kzalloc(sizeof(*desc), GFP_ATOMIC);
++	if (!desc) {
++		dev_err(c->device->dev, "alloc memcpy decs error\n");
++		return NULL;
++	}
++
++	desc->sg[0].src_addr = src;
++	desc->sg[0].dst_addr = dest;
++	desc->sg[0].len = len;
++
++	return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
++}
++
++static enum dma_status mtk_hsdma_tx_status(struct dma_chan *c,
++					   dma_cookie_t cookie,
++					   struct dma_tx_state *state)
++{
++	return dma_cookie_status(c, cookie, state);
++}
++
++static void mtk_hsdma_free_chan_resources(struct dma_chan *c)
++{
++	vchan_free_chan_resources(to_virt_chan(c));
++}
++
++static void mtk_hsdma_desc_free(struct virt_dma_desc *vdesc)
++{
++	kfree(container_of(vdesc, struct mtk_hsdma_desc, vdesc));
++}
++
++static void mtk_hsdma_tx(struct mtk_hsdam_engine *hsdma)
++{
++	struct mtk_hsdma_chan *chan;
++
++	if (test_and_clear_bit(0, &hsdma->chan_issued)) {
++		chan = &hsdma->chan[0];
++		if (chan->desc)
++			mtk_hsdma_start_transfer(hsdma, chan);
++		else
++			dev_dbg(hsdma->ddev.dev, "chan 0 no desc to issue\n");
++	}
++}
++
++static void mtk_hsdma_rx(struct mtk_hsdam_engine *hsdma)
++{
++	struct mtk_hsdma_chan *chan;
++	int next_idx, drx_idx, cnt;
++
++	chan = &hsdma->chan[0];
++	next_idx = HSDMA_NEXT_DESC(chan->rx_idx);
++	drx_idx = mtk_hsdma_read(hsdma, HSDMA_REG_RX_DRX);
++
++	cnt = (drx_idx - next_idx) & HSDMA_DESCS_MASK;
++	if (!cnt)
++		return;
++
++	chan->next_sg += cnt;
++	chan->rx_idx = (chan->rx_idx + cnt) & HSDMA_DESCS_MASK;
++
++	/* update rx crx */
++	wmb();
++	mtk_hsdma_write(hsdma, HSDMA_REG_RX_CRX, chan->rx_idx);
++
++	mtk_hsdma_chan_done(hsdma, chan);
++}
++
++static void mtk_hsdma_tasklet(struct tasklet_struct *t)
++{
++	struct mtk_hsdam_engine *hsdma = from_tasklet(hsdma, t, task);
++
++	mtk_hsdma_rx(hsdma);
++	mtk_hsdma_tx(hsdma);
++}
++
++static int mtk_hsdam_alloc_desc(struct mtk_hsdam_engine *hsdma,
++				struct mtk_hsdma_chan *chan)
++{
++	int i;
++
++	chan->tx_ring = dma_alloc_coherent(hsdma->ddev.dev,
++					   2 * HSDMA_DESCS_NUM *
++					   sizeof(*chan->tx_ring),
++			&chan->desc_addr, GFP_ATOMIC | __GFP_ZERO);
++	if (!chan->tx_ring)
++		goto no_mem;
++
++	chan->rx_ring = &chan->tx_ring[HSDMA_DESCS_NUM];
++
++	/* init tx ring value */
++	for (i = 0; i < HSDMA_DESCS_NUM; i++)
++		chan->tx_ring[i].flags = HSDMA_DESC_LS0 | HSDMA_DESC_DONE;
++
++	return 0;
++no_mem:
++	return -ENOMEM;
++}
++
++static void mtk_hsdam_free_desc(struct mtk_hsdam_engine *hsdma,
++				struct mtk_hsdma_chan *chan)
++{
++	if (chan->tx_ring) {
++		dma_free_coherent(hsdma->ddev.dev,
++				  2 * HSDMA_DESCS_NUM * sizeof(*chan->tx_ring),
++				  chan->tx_ring, chan->desc_addr);
++		chan->tx_ring = NULL;
++		chan->rx_ring = NULL;
++	}
++}
++
++static int mtk_hsdma_init(struct mtk_hsdam_engine *hsdma)
++{
++	struct mtk_hsdma_chan *chan;
++	int ret;
++	u32 reg;
++
++	/* init desc */
++	chan = &hsdma->chan[0];
++	ret = mtk_hsdam_alloc_desc(hsdma, chan);
++	if (ret)
++		return ret;
++
++	/* tx */
++	mtk_hsdma_write(hsdma, HSDMA_REG_TX_BASE, chan->desc_addr);
++	mtk_hsdma_write(hsdma, HSDMA_REG_TX_CNT, HSDMA_DESCS_NUM);
++	/* rx */
++	mtk_hsdma_write(hsdma, HSDMA_REG_RX_BASE, chan->desc_addr +
++			(sizeof(struct hsdma_desc) * HSDMA_DESCS_NUM));
++	mtk_hsdma_write(hsdma, HSDMA_REG_RX_CNT, HSDMA_DESCS_NUM);
++	/* reset */
++	mtk_hsdma_reset_chan(hsdma, chan);
++
++	/* enable rx intr */
++	mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, HSDMA_INT_RX_Q0);
++
++	/* enable dma */
++	mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, HSDMA_GLO_DEFAULT);
++
++	/* hardware info */
++	reg = mtk_hsdma_read(hsdma, HSDMA_REG_INFO);
++	dev_info(hsdma->ddev.dev, "rx: %d, tx: %d\n",
++		 (reg >> HSDMA_INFO_RX_SHIFT) & HSDMA_INFO_RX_MASK,
++		 (reg >> HSDMA_INFO_TX_SHIFT) & HSDMA_INFO_TX_MASK);
++
++	hsdma_dump_reg(hsdma);
++
++	return ret;
++}
++
++static void mtk_hsdma_uninit(struct mtk_hsdam_engine *hsdma)
++{
++	struct mtk_hsdma_chan *chan;
++
++	/* disable dma */
++	mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, 0);
++
++	/* disable intr */
++	mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, 0);
++
++	/* free desc */
++	chan = &hsdma->chan[0];
++	mtk_hsdam_free_desc(hsdma, chan);
++
++	/* tx */
++	mtk_hsdma_write(hsdma, HSDMA_REG_TX_BASE, 0);
++	mtk_hsdma_write(hsdma, HSDMA_REG_TX_CNT, 0);
++	/* rx */
++	mtk_hsdma_write(hsdma, HSDMA_REG_RX_BASE, 0);
++	mtk_hsdma_write(hsdma, HSDMA_REG_RX_CNT, 0);
++	/* reset */
++	mtk_hsdma_reset_chan(hsdma, chan);
++}
++
++static const struct of_device_id mtk_hsdma_of_match[] = {
++	{ .compatible = "mediatek,mt7621-hsdma" },
++	{ },
++};
++
++static int mtk_hsdma_probe(struct platform_device *pdev)
++{
++	const struct of_device_id *match;
++	struct mtk_hsdma_chan *chan;
++	struct mtk_hsdam_engine *hsdma;
++	struct dma_device *dd;
++	int ret;
++	int irq;
++	void __iomem *base;
++
++	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
++	if (ret)
++		return ret;
++
++	match = of_match_device(mtk_hsdma_of_match, &pdev->dev);
++	if (!match)
++		return -EINVAL;
++
++	hsdma = devm_kzalloc(&pdev->dev, sizeof(*hsdma), GFP_KERNEL);
++	if (!hsdma)
++		return -EINVAL;
++
++	base = devm_platform_ioremap_resource(pdev, 0);
++	if (IS_ERR(base))
++		return PTR_ERR(base);
++	hsdma->base = base + HSDMA_BASE_OFFSET;
++	tasklet_setup(&hsdma->task, mtk_hsdma_tasklet);
++
++	irq = platform_get_irq(pdev, 0);
++	if (irq < 0)
++		return -EINVAL;
++	ret = devm_request_irq(&pdev->dev, irq, mtk_hsdma_irq,
++			       0, dev_name(&pdev->dev), hsdma);
++	if (ret) {
++		dev_err(&pdev->dev, "failed to request irq\n");
++		return ret;
++	}
++
++	device_reset(&pdev->dev);
++
++	dd = &hsdma->ddev;
++	dma_cap_set(DMA_MEMCPY, dd->cap_mask);
++	dd->copy_align = HSDMA_ALIGN_SIZE;
++	dd->device_free_chan_resources = mtk_hsdma_free_chan_resources;
++	dd->device_prep_dma_memcpy = mtk_hsdma_prep_dma_memcpy;
++	dd->device_terminate_all = mtk_hsdma_terminate_all;
++	dd->device_tx_status = mtk_hsdma_tx_status;
++	dd->device_issue_pending = mtk_hsdma_issue_pending;
++	dd->dev = &pdev->dev;
++	dd->dev->dma_parms = &hsdma->dma_parms;
++	dma_set_max_seg_size(dd->dev, HSDMA_MAX_PLEN);
++	INIT_LIST_HEAD(&dd->channels);
++
++	chan = &hsdma->chan[0];
++	chan->id = 0;
++	chan->vchan.desc_free = mtk_hsdma_desc_free;
++	vchan_init(&chan->vchan, dd);
++
++	/* init hardware */
++	ret = mtk_hsdma_init(hsdma);
++	if (ret) {
++		dev_err(&pdev->dev, "failed to alloc ring descs\n");
++		return ret;
++	}
++
++	ret = dma_async_device_register(dd);
++	if (ret) {
++		dev_err(&pdev->dev, "failed to register dma device\n");
++		goto err_uninit_hsdma;
++	}
++
++	ret = of_dma_controller_register(pdev->dev.of_node,
++					 of_dma_xlate_by_chan_id, hsdma);
++	if (ret) {
++		dev_err(&pdev->dev, "failed to register of dma controller\n");
++		goto err_unregister;
++	}
++
++	platform_set_drvdata(pdev, hsdma);
++
++	return 0;
++
++err_unregister:
++	dma_async_device_unregister(dd);
++err_uninit_hsdma:
++	mtk_hsdma_uninit(hsdma);
++	return ret;
++}
++
++static int mtk_hsdma_remove(struct platform_device *pdev)
++{
++	struct mtk_hsdam_engine *hsdma = platform_get_drvdata(pdev);
++
++	mtk_hsdma_uninit(hsdma);
++
++	of_dma_controller_free(pdev->dev.of_node);
++	dma_async_device_unregister(&hsdma->ddev);
++
++	return 0;
++}
++
++static struct platform_driver mtk_hsdma_driver = {
++	.probe = mtk_hsdma_probe,
++	.remove = mtk_hsdma_remove,
++	.driver = {
++		.name = KBUILD_MODNAME,
++		.of_match_table = mtk_hsdma_of_match,
++	},
++};
++module_platform_driver(mtk_hsdma_driver);
++
++MODULE_AUTHOR("Michael Lee <igvtee@gmail.com>");
++MODULE_DESCRIPTION("MTK HSDMA driver");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/staging/mt7621-dma/mtk-hsdma.c b/drivers/staging/mt7621-dma/mtk-hsdma.c
+deleted file mode 100644
+index 5ad55ca620229..0000000000000
+--- a/drivers/staging/mt7621-dma/mtk-hsdma.c
++++ /dev/null
+@@ -1,760 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0+
+-/*
+- *  Copyright (C) 2015, Michael Lee <igvtee@gmail.com>
+- *  MTK HSDMA support
+- */
+-
+-#include <linux/dmaengine.h>
+-#include <linux/dma-mapping.h>
+-#include <linux/err.h>
+-#include <linux/init.h>
+-#include <linux/list.h>
+-#include <linux/module.h>
+-#include <linux/platform_device.h>
+-#include <linux/slab.h>
+-#include <linux/spinlock.h>
+-#include <linux/irq.h>
+-#include <linux/of_dma.h>
+-#include <linux/reset.h>
+-#include <linux/of_device.h>
+-
+-#include "virt-dma.h"
+-
+-#define HSDMA_BASE_OFFSET		0x800
+-
+-#define HSDMA_REG_TX_BASE		0x00
+-#define HSDMA_REG_TX_CNT		0x04
+-#define HSDMA_REG_TX_CTX		0x08
+-#define HSDMA_REG_TX_DTX		0x0c
+-#define HSDMA_REG_RX_BASE		0x100
+-#define HSDMA_REG_RX_CNT		0x104
+-#define HSDMA_REG_RX_CRX		0x108
+-#define HSDMA_REG_RX_DRX		0x10c
+-#define HSDMA_REG_INFO			0x200
+-#define HSDMA_REG_GLO_CFG		0x204
+-#define HSDMA_REG_RST_CFG		0x208
+-#define HSDMA_REG_DELAY_INT		0x20c
+-#define HSDMA_REG_FREEQ_THRES		0x210
+-#define HSDMA_REG_INT_STATUS		0x220
+-#define HSDMA_REG_INT_MASK		0x228
+-#define HSDMA_REG_SCH_Q01		0x280
+-#define HSDMA_REG_SCH_Q23		0x284
+-
+-#define HSDMA_DESCS_MAX			0xfff
+-#define HSDMA_DESCS_NUM			8
+-#define HSDMA_DESCS_MASK		(HSDMA_DESCS_NUM - 1)
+-#define HSDMA_NEXT_DESC(x)		(((x) + 1) & HSDMA_DESCS_MASK)
+-
+-/* HSDMA_REG_INFO */
+-#define HSDMA_INFO_INDEX_MASK		0xf
+-#define HSDMA_INFO_INDEX_SHIFT		24
+-#define HSDMA_INFO_BASE_MASK		0xff
+-#define HSDMA_INFO_BASE_SHIFT		16
+-#define HSDMA_INFO_RX_MASK		0xff
+-#define HSDMA_INFO_RX_SHIFT		8
+-#define HSDMA_INFO_TX_MASK		0xff
+-#define HSDMA_INFO_TX_SHIFT		0
+-
+-/* HSDMA_REG_GLO_CFG */
+-#define HSDMA_GLO_TX_2B_OFFSET		BIT(31)
+-#define HSDMA_GLO_CLK_GATE		BIT(30)
+-#define HSDMA_GLO_BYTE_SWAP		BIT(29)
+-#define HSDMA_GLO_MULTI_DMA		BIT(10)
+-#define HSDMA_GLO_TWO_BUF		BIT(9)
+-#define HSDMA_GLO_32B_DESC		BIT(8)
+-#define HSDMA_GLO_BIG_ENDIAN		BIT(7)
+-#define HSDMA_GLO_TX_DONE		BIT(6)
+-#define HSDMA_GLO_BT_MASK		0x3
+-#define HSDMA_GLO_BT_SHIFT		4
+-#define HSDMA_GLO_RX_BUSY		BIT(3)
+-#define HSDMA_GLO_RX_DMA		BIT(2)
+-#define HSDMA_GLO_TX_BUSY		BIT(1)
+-#define HSDMA_GLO_TX_DMA		BIT(0)
+-
+-#define HSDMA_BT_SIZE_16BYTES		(0 << HSDMA_GLO_BT_SHIFT)
+-#define HSDMA_BT_SIZE_32BYTES		(1 << HSDMA_GLO_BT_SHIFT)
+-#define HSDMA_BT_SIZE_64BYTES		(2 << HSDMA_GLO_BT_SHIFT)
+-#define HSDMA_BT_SIZE_128BYTES		(3 << HSDMA_GLO_BT_SHIFT)
+-
+-#define HSDMA_GLO_DEFAULT		(HSDMA_GLO_MULTI_DMA | \
+-		HSDMA_GLO_RX_DMA | HSDMA_GLO_TX_DMA | HSDMA_BT_SIZE_32BYTES)
+-
+-/* HSDMA_REG_RST_CFG */
+-#define HSDMA_RST_RX_SHIFT		16
+-#define HSDMA_RST_TX_SHIFT		0
+-
+-/* HSDMA_REG_DELAY_INT */
+-#define HSDMA_DELAY_INT_EN		BIT(15)
+-#define HSDMA_DELAY_PEND_OFFSET		8
+-#define HSDMA_DELAY_TIME_OFFSET		0
+-#define HSDMA_DELAY_TX_OFFSET		16
+-#define HSDMA_DELAY_RX_OFFSET		0
+-
+-#define HSDMA_DELAY_INIT(x)		(HSDMA_DELAY_INT_EN | \
+-		((x) << HSDMA_DELAY_PEND_OFFSET))
+-#define HSDMA_DELAY(x)			((HSDMA_DELAY_INIT(x) << \
+-		HSDMA_DELAY_TX_OFFSET) | HSDMA_DELAY_INIT(x))
+-
+-/* HSDMA_REG_INT_STATUS */
+-#define HSDMA_INT_DELAY_RX_COH		BIT(31)
+-#define HSDMA_INT_DELAY_RX_INT		BIT(30)
+-#define HSDMA_INT_DELAY_TX_COH		BIT(29)
+-#define HSDMA_INT_DELAY_TX_INT		BIT(28)
+-#define HSDMA_INT_RX_MASK		0x3
+-#define HSDMA_INT_RX_SHIFT		16
+-#define HSDMA_INT_RX_Q0			BIT(16)
+-#define HSDMA_INT_TX_MASK		0xf
+-#define HSDMA_INT_TX_SHIFT		0
+-#define HSDMA_INT_TX_Q0			BIT(0)
+-
+-/* tx/rx dma desc flags */
+-#define HSDMA_PLEN_MASK			0x3fff
+-#define HSDMA_DESC_DONE			BIT(31)
+-#define HSDMA_DESC_LS0			BIT(30)
+-#define HSDMA_DESC_PLEN0(_x)		(((_x) & HSDMA_PLEN_MASK) << 16)
+-#define HSDMA_DESC_TAG			BIT(15)
+-#define HSDMA_DESC_LS1			BIT(14)
+-#define HSDMA_DESC_PLEN1(_x)		((_x) & HSDMA_PLEN_MASK)
+-
+-/* align 4 bytes */
+-#define HSDMA_ALIGN_SIZE		3
+-/* align size 128bytes */
+-#define HSDMA_MAX_PLEN			0x3f80
+-
+-struct hsdma_desc {
+-	u32 addr0;
+-	u32 flags;
+-	u32 addr1;
+-	u32 unused;
+-};
+-
+-struct mtk_hsdma_sg {
+-	dma_addr_t src_addr;
+-	dma_addr_t dst_addr;
+-	u32 len;
+-};
+-
+-struct mtk_hsdma_desc {
+-	struct virt_dma_desc vdesc;
+-	unsigned int num_sgs;
+-	struct mtk_hsdma_sg sg[1];
+-};
+-
+-struct mtk_hsdma_chan {
+-	struct virt_dma_chan vchan;
+-	unsigned int id;
+-	dma_addr_t desc_addr;
+-	int tx_idx;
+-	int rx_idx;
+-	struct hsdma_desc *tx_ring;
+-	struct hsdma_desc *rx_ring;
+-	struct mtk_hsdma_desc *desc;
+-	unsigned int next_sg;
+-};
+-
+-struct mtk_hsdam_engine {
+-	struct dma_device ddev;
+-	struct device_dma_parameters dma_parms;
+-	void __iomem *base;
+-	struct tasklet_struct task;
+-	volatile unsigned long chan_issued;
+-
+-	struct mtk_hsdma_chan chan[1];
+-};
+-
+-static inline struct mtk_hsdam_engine *mtk_hsdma_chan_get_dev(
+-		struct mtk_hsdma_chan *chan)
+-{
+-	return container_of(chan->vchan.chan.device, struct mtk_hsdam_engine,
+-			ddev);
+-}
+-
+-static inline struct mtk_hsdma_chan *to_mtk_hsdma_chan(struct dma_chan *c)
+-{
+-	return container_of(c, struct mtk_hsdma_chan, vchan.chan);
+-}
+-
+-static inline struct mtk_hsdma_desc *to_mtk_hsdma_desc(
+-		struct virt_dma_desc *vdesc)
+-{
+-	return container_of(vdesc, struct mtk_hsdma_desc, vdesc);
+-}
+-
+-static inline u32 mtk_hsdma_read(struct mtk_hsdam_engine *hsdma, u32 reg)
+-{
+-	return readl(hsdma->base + reg);
+-}
+-
+-static inline void mtk_hsdma_write(struct mtk_hsdam_engine *hsdma,
+-				   unsigned int reg, u32 val)
+-{
+-	writel(val, hsdma->base + reg);
+-}
+-
+-static void mtk_hsdma_reset_chan(struct mtk_hsdam_engine *hsdma,
+-				 struct mtk_hsdma_chan *chan)
+-{
+-	chan->tx_idx = 0;
+-	chan->rx_idx = HSDMA_DESCS_NUM - 1;
+-
+-	mtk_hsdma_write(hsdma, HSDMA_REG_TX_CTX, chan->tx_idx);
+-	mtk_hsdma_write(hsdma, HSDMA_REG_RX_CRX, chan->rx_idx);
+-
+-	mtk_hsdma_write(hsdma, HSDMA_REG_RST_CFG,
+-			0x1 << (chan->id + HSDMA_RST_TX_SHIFT));
+-	mtk_hsdma_write(hsdma, HSDMA_REG_RST_CFG,
+-			0x1 << (chan->id + HSDMA_RST_RX_SHIFT));
+-}
+-
+-static void hsdma_dump_reg(struct mtk_hsdam_engine *hsdma)
+-{
+-	dev_dbg(hsdma->ddev.dev, "tbase %08x, tcnt %08x, "
+-			"tctx %08x, tdtx: %08x, rbase %08x, "
+-			"rcnt %08x, rctx %08x, rdtx %08x\n",
+-			mtk_hsdma_read(hsdma, HSDMA_REG_TX_BASE),
+-			mtk_hsdma_read(hsdma, HSDMA_REG_TX_CNT),
+-			mtk_hsdma_read(hsdma, HSDMA_REG_TX_CTX),
+-			mtk_hsdma_read(hsdma, HSDMA_REG_TX_DTX),
+-			mtk_hsdma_read(hsdma, HSDMA_REG_RX_BASE),
+-			mtk_hsdma_read(hsdma, HSDMA_REG_RX_CNT),
+-			mtk_hsdma_read(hsdma, HSDMA_REG_RX_CRX),
+-			mtk_hsdma_read(hsdma, HSDMA_REG_RX_DRX));
+-
+-	dev_dbg(hsdma->ddev.dev, "info %08x, glo %08x, delay %08x, intr_stat %08x, intr_mask %08x\n",
+-			mtk_hsdma_read(hsdma, HSDMA_REG_INFO),
+-			mtk_hsdma_read(hsdma, HSDMA_REG_GLO_CFG),
+-			mtk_hsdma_read(hsdma, HSDMA_REG_DELAY_INT),
+-			mtk_hsdma_read(hsdma, HSDMA_REG_INT_STATUS),
+-			mtk_hsdma_read(hsdma, HSDMA_REG_INT_MASK));
+-}
+-
+-static void hsdma_dump_desc(struct mtk_hsdam_engine *hsdma,
+-			    struct mtk_hsdma_chan *chan)
+-{
+-	struct hsdma_desc *tx_desc;
+-	struct hsdma_desc *rx_desc;
+-	int i;
+-
+-	dev_dbg(hsdma->ddev.dev, "tx idx: %d, rx idx: %d\n",
+-		chan->tx_idx, chan->rx_idx);
+-
+-	for (i = 0; i < HSDMA_DESCS_NUM; i++) {
+-		tx_desc = &chan->tx_ring[i];
+-		rx_desc = &chan->rx_ring[i];
+-
+-		dev_dbg(hsdma->ddev.dev, "%d tx addr0: %08x, flags %08x, "
+-				"tx addr1: %08x, rx addr0 %08x, flags %08x\n",
+-				i, tx_desc->addr0, tx_desc->flags,
+-				tx_desc->addr1, rx_desc->addr0, rx_desc->flags);
+-	}
+-}
+-
+-static void mtk_hsdma_reset(struct mtk_hsdam_engine *hsdma,
+-			    struct mtk_hsdma_chan *chan)
+-{
+-	int i;
+-
+-	/* disable dma */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, 0);
+-
+-	/* disable intr */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, 0);
+-
+-	/* init desc value */
+-	for (i = 0; i < HSDMA_DESCS_NUM; i++) {
+-		chan->tx_ring[i].addr0 = 0;
+-		chan->tx_ring[i].flags = HSDMA_DESC_LS0 | HSDMA_DESC_DONE;
+-	}
+-	for (i = 0; i < HSDMA_DESCS_NUM; i++) {
+-		chan->rx_ring[i].addr0 = 0;
+-		chan->rx_ring[i].flags = 0;
+-	}
+-
+-	/* reset */
+-	mtk_hsdma_reset_chan(hsdma, chan);
+-
+-	/* enable intr */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, HSDMA_INT_RX_Q0);
+-
+-	/* enable dma */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, HSDMA_GLO_DEFAULT);
+-}
+-
+-static int mtk_hsdma_terminate_all(struct dma_chan *c)
+-{
+-	struct mtk_hsdma_chan *chan = to_mtk_hsdma_chan(c);
+-	struct mtk_hsdam_engine *hsdma = mtk_hsdma_chan_get_dev(chan);
+-	unsigned long timeout;
+-	LIST_HEAD(head);
+-
+-	spin_lock_bh(&chan->vchan.lock);
+-	chan->desc = NULL;
+-	clear_bit(chan->id, &hsdma->chan_issued);
+-	vchan_get_all_descriptors(&chan->vchan, &head);
+-	spin_unlock_bh(&chan->vchan.lock);
+-
+-	vchan_dma_desc_free_list(&chan->vchan, &head);
+-
+-	/* wait dma transfer complete */
+-	timeout = jiffies + msecs_to_jiffies(2000);
+-	while (mtk_hsdma_read(hsdma, HSDMA_REG_GLO_CFG) &
+-			(HSDMA_GLO_RX_BUSY | HSDMA_GLO_TX_BUSY)) {
+-		if (time_after_eq(jiffies, timeout)) {
+-			hsdma_dump_desc(hsdma, chan);
+-			mtk_hsdma_reset(hsdma, chan);
+-			dev_err(hsdma->ddev.dev, "timeout, reset it\n");
+-			break;
+-		}
+-		cpu_relax();
+-	}
+-
+-	return 0;
+-}
+-
+-static int mtk_hsdma_start_transfer(struct mtk_hsdam_engine *hsdma,
+-				    struct mtk_hsdma_chan *chan)
+-{
+-	dma_addr_t src, dst;
+-	size_t len, tlen;
+-	struct hsdma_desc *tx_desc, *rx_desc;
+-	struct mtk_hsdma_sg *sg;
+-	unsigned int i;
+-	int rx_idx;
+-
+-	sg = &chan->desc->sg[0];
+-	len = sg->len;
+-	chan->desc->num_sgs = DIV_ROUND_UP(len, HSDMA_MAX_PLEN);
+-
+-	/* tx desc */
+-	src = sg->src_addr;
+-	for (i = 0; i < chan->desc->num_sgs; i++) {
+-		tx_desc = &chan->tx_ring[chan->tx_idx];
+-
+-		if (len > HSDMA_MAX_PLEN)
+-			tlen = HSDMA_MAX_PLEN;
+-		else
+-			tlen = len;
+-
+-		if (i & 0x1) {
+-			tx_desc->addr1 = src;
+-			tx_desc->flags |= HSDMA_DESC_PLEN1(tlen);
+-		} else {
+-			tx_desc->addr0 = src;
+-			tx_desc->flags = HSDMA_DESC_PLEN0(tlen);
+-
+-			/* update index */
+-			chan->tx_idx = HSDMA_NEXT_DESC(chan->tx_idx);
+-		}
+-
+-		src += tlen;
+-		len -= tlen;
+-	}
+-	if (i & 0x1)
+-		tx_desc->flags |= HSDMA_DESC_LS0;
+-	else
+-		tx_desc->flags |= HSDMA_DESC_LS1;
+-
+-	/* rx desc */
+-	rx_idx = HSDMA_NEXT_DESC(chan->rx_idx);
+-	len = sg->len;
+-	dst = sg->dst_addr;
+-	for (i = 0; i < chan->desc->num_sgs; i++) {
+-		rx_desc = &chan->rx_ring[rx_idx];
+-		if (len > HSDMA_MAX_PLEN)
+-			tlen = HSDMA_MAX_PLEN;
+-		else
+-			tlen = len;
+-
+-		rx_desc->addr0 = dst;
+-		rx_desc->flags = HSDMA_DESC_PLEN0(tlen);
+-
+-		dst += tlen;
+-		len -= tlen;
+-
+-		/* update index */
+-		rx_idx = HSDMA_NEXT_DESC(rx_idx);
+-	}
+-
+-	/* make sure desc and index all up to date */
+-	wmb();
+-	mtk_hsdma_write(hsdma, HSDMA_REG_TX_CTX, chan->tx_idx);
+-
+-	return 0;
+-}
+-
+-static int gdma_next_desc(struct mtk_hsdma_chan *chan)
+-{
+-	struct virt_dma_desc *vdesc;
+-
+-	vdesc = vchan_next_desc(&chan->vchan);
+-	if (!vdesc) {
+-		chan->desc = NULL;
+-		return 0;
+-	}
+-	chan->desc = to_mtk_hsdma_desc(vdesc);
+-	chan->next_sg = 0;
+-
+-	return 1;
+-}
+-
+-static void mtk_hsdma_chan_done(struct mtk_hsdam_engine *hsdma,
+-				struct mtk_hsdma_chan *chan)
+-{
+-	struct mtk_hsdma_desc *desc;
+-	int chan_issued;
+-
+-	chan_issued = 0;
+-	spin_lock_bh(&chan->vchan.lock);
+-	desc = chan->desc;
+-	if (likely(desc)) {
+-		if (chan->next_sg == desc->num_sgs) {
+-			list_del(&desc->vdesc.node);
+-			vchan_cookie_complete(&desc->vdesc);
+-			chan_issued = gdma_next_desc(chan);
+-		}
+-	} else {
+-		dev_dbg(hsdma->ddev.dev, "no desc to complete\n");
+-	}
+-
+-	if (chan_issued)
+-		set_bit(chan->id, &hsdma->chan_issued);
+-	spin_unlock_bh(&chan->vchan.lock);
+-}
+-
+-static irqreturn_t mtk_hsdma_irq(int irq, void *devid)
+-{
+-	struct mtk_hsdam_engine *hsdma = devid;
+-	u32 status;
+-
+-	status = mtk_hsdma_read(hsdma, HSDMA_REG_INT_STATUS);
+-	if (unlikely(!status))
+-		return IRQ_NONE;
+-
+-	if (likely(status & HSDMA_INT_RX_Q0))
+-		tasklet_schedule(&hsdma->task);
+-	else
+-		dev_dbg(hsdma->ddev.dev, "unhandle irq status %08x\n", status);
+-	/* clean intr bits */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_INT_STATUS, status);
+-
+-	return IRQ_HANDLED;
+-}
+-
+-static void mtk_hsdma_issue_pending(struct dma_chan *c)
+-{
+-	struct mtk_hsdma_chan *chan = to_mtk_hsdma_chan(c);
+-	struct mtk_hsdam_engine *hsdma = mtk_hsdma_chan_get_dev(chan);
+-
+-	spin_lock_bh(&chan->vchan.lock);
+-	if (vchan_issue_pending(&chan->vchan) && !chan->desc) {
+-		if (gdma_next_desc(chan)) {
+-			set_bit(chan->id, &hsdma->chan_issued);
+-			tasklet_schedule(&hsdma->task);
+-		} else {
+-			dev_dbg(hsdma->ddev.dev, "no desc to issue\n");
+-		}
+-	}
+-	spin_unlock_bh(&chan->vchan.lock);
+-}
+-
+-static struct dma_async_tx_descriptor *mtk_hsdma_prep_dma_memcpy(
+-		struct dma_chan *c, dma_addr_t dest, dma_addr_t src,
+-		size_t len, unsigned long flags)
+-{
+-	struct mtk_hsdma_chan *chan = to_mtk_hsdma_chan(c);
+-	struct mtk_hsdma_desc *desc;
+-
+-	if (len <= 0)
+-		return NULL;
+-
+-	desc = kzalloc(sizeof(*desc), GFP_ATOMIC);
+-	if (!desc) {
+-		dev_err(c->device->dev, "alloc memcpy decs error\n");
+-		return NULL;
+-	}
+-
+-	desc->sg[0].src_addr = src;
+-	desc->sg[0].dst_addr = dest;
+-	desc->sg[0].len = len;
+-
+-	return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
+-}
+-
+-static enum dma_status mtk_hsdma_tx_status(struct dma_chan *c,
+-					   dma_cookie_t cookie,
+-					   struct dma_tx_state *state)
+-{
+-	return dma_cookie_status(c, cookie, state);
+-}
+-
+-static void mtk_hsdma_free_chan_resources(struct dma_chan *c)
+-{
+-	vchan_free_chan_resources(to_virt_chan(c));
+-}
+-
+-static void mtk_hsdma_desc_free(struct virt_dma_desc *vdesc)
+-{
+-	kfree(container_of(vdesc, struct mtk_hsdma_desc, vdesc));
+-}
+-
+-static void mtk_hsdma_tx(struct mtk_hsdam_engine *hsdma)
+-{
+-	struct mtk_hsdma_chan *chan;
+-
+-	if (test_and_clear_bit(0, &hsdma->chan_issued)) {
+-		chan = &hsdma->chan[0];
+-		if (chan->desc)
+-			mtk_hsdma_start_transfer(hsdma, chan);
+-		else
+-			dev_dbg(hsdma->ddev.dev, "chan 0 no desc to issue\n");
+-	}
+-}
+-
+-static void mtk_hsdma_rx(struct mtk_hsdam_engine *hsdma)
+-{
+-	struct mtk_hsdma_chan *chan;
+-	int next_idx, drx_idx, cnt;
+-
+-	chan = &hsdma->chan[0];
+-	next_idx = HSDMA_NEXT_DESC(chan->rx_idx);
+-	drx_idx = mtk_hsdma_read(hsdma, HSDMA_REG_RX_DRX);
+-
+-	cnt = (drx_idx - next_idx) & HSDMA_DESCS_MASK;
+-	if (!cnt)
+-		return;
+-
+-	chan->next_sg += cnt;
+-	chan->rx_idx = (chan->rx_idx + cnt) & HSDMA_DESCS_MASK;
+-
+-	/* update rx crx */
+-	wmb();
+-	mtk_hsdma_write(hsdma, HSDMA_REG_RX_CRX, chan->rx_idx);
+-
+-	mtk_hsdma_chan_done(hsdma, chan);
+-}
+-
+-static void mtk_hsdma_tasklet(struct tasklet_struct *t)
+-{
+-	struct mtk_hsdam_engine *hsdma = from_tasklet(hsdma, t, task);
+-
+-	mtk_hsdma_rx(hsdma);
+-	mtk_hsdma_tx(hsdma);
+-}
+-
+-static int mtk_hsdam_alloc_desc(struct mtk_hsdam_engine *hsdma,
+-				struct mtk_hsdma_chan *chan)
+-{
+-	int i;
+-
+-	chan->tx_ring = dma_alloc_coherent(hsdma->ddev.dev,
+-					   2 * HSDMA_DESCS_NUM *
+-					   sizeof(*chan->tx_ring),
+-			&chan->desc_addr, GFP_ATOMIC | __GFP_ZERO);
+-	if (!chan->tx_ring)
+-		goto no_mem;
+-
+-	chan->rx_ring = &chan->tx_ring[HSDMA_DESCS_NUM];
+-
+-	/* init tx ring value */
+-	for (i = 0; i < HSDMA_DESCS_NUM; i++)
+-		chan->tx_ring[i].flags = HSDMA_DESC_LS0 | HSDMA_DESC_DONE;
+-
+-	return 0;
+-no_mem:
+-	return -ENOMEM;
+-}
+-
+-static void mtk_hsdam_free_desc(struct mtk_hsdam_engine *hsdma,
+-				struct mtk_hsdma_chan *chan)
+-{
+-	if (chan->tx_ring) {
+-		dma_free_coherent(hsdma->ddev.dev,
+-				  2 * HSDMA_DESCS_NUM * sizeof(*chan->tx_ring),
+-				  chan->tx_ring, chan->desc_addr);
+-		chan->tx_ring = NULL;
+-		chan->rx_ring = NULL;
+-	}
+-}
+-
+-static int mtk_hsdma_init(struct mtk_hsdam_engine *hsdma)
+-{
+-	struct mtk_hsdma_chan *chan;
+-	int ret;
+-	u32 reg;
+-
+-	/* init desc */
+-	chan = &hsdma->chan[0];
+-	ret = mtk_hsdam_alloc_desc(hsdma, chan);
+-	if (ret)
+-		return ret;
+-
+-	/* tx */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_TX_BASE, chan->desc_addr);
+-	mtk_hsdma_write(hsdma, HSDMA_REG_TX_CNT, HSDMA_DESCS_NUM);
+-	/* rx */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_RX_BASE, chan->desc_addr +
+-			(sizeof(struct hsdma_desc) * HSDMA_DESCS_NUM));
+-	mtk_hsdma_write(hsdma, HSDMA_REG_RX_CNT, HSDMA_DESCS_NUM);
+-	/* reset */
+-	mtk_hsdma_reset_chan(hsdma, chan);
+-
+-	/* enable rx intr */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, HSDMA_INT_RX_Q0);
+-
+-	/* enable dma */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, HSDMA_GLO_DEFAULT);
+-
+-	/* hardware info */
+-	reg = mtk_hsdma_read(hsdma, HSDMA_REG_INFO);
+-	dev_info(hsdma->ddev.dev, "rx: %d, tx: %d\n",
+-		 (reg >> HSDMA_INFO_RX_SHIFT) & HSDMA_INFO_RX_MASK,
+-		 (reg >> HSDMA_INFO_TX_SHIFT) & HSDMA_INFO_TX_MASK);
+-
+-	hsdma_dump_reg(hsdma);
+-
+-	return ret;
+-}
+-
+-static void mtk_hsdma_uninit(struct mtk_hsdam_engine *hsdma)
+-{
+-	struct mtk_hsdma_chan *chan;
+-
+-	/* disable dma */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, 0);
+-
+-	/* disable intr */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, 0);
+-
+-	/* free desc */
+-	chan = &hsdma->chan[0];
+-	mtk_hsdam_free_desc(hsdma, chan);
+-
+-	/* tx */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_TX_BASE, 0);
+-	mtk_hsdma_write(hsdma, HSDMA_REG_TX_CNT, 0);
+-	/* rx */
+-	mtk_hsdma_write(hsdma, HSDMA_REG_RX_BASE, 0);
+-	mtk_hsdma_write(hsdma, HSDMA_REG_RX_CNT, 0);
+-	/* reset */
+-	mtk_hsdma_reset_chan(hsdma, chan);
+-}
+-
+-static const struct of_device_id mtk_hsdma_of_match[] = {
+-	{ .compatible = "mediatek,mt7621-hsdma" },
+-	{ },
+-};
+-
+-static int mtk_hsdma_probe(struct platform_device *pdev)
+-{
+-	const struct of_device_id *match;
+-	struct mtk_hsdma_chan *chan;
+-	struct mtk_hsdam_engine *hsdma;
+-	struct dma_device *dd;
+-	int ret;
+-	int irq;
+-	void __iomem *base;
+-
+-	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+-	if (ret)
+-		return ret;
+-
+-	match = of_match_device(mtk_hsdma_of_match, &pdev->dev);
+-	if (!match)
+-		return -EINVAL;
+-
+-	hsdma = devm_kzalloc(&pdev->dev, sizeof(*hsdma), GFP_KERNEL);
+-	if (!hsdma)
+-		return -EINVAL;
+-
+-	base = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(base))
+-		return PTR_ERR(base);
+-	hsdma->base = base + HSDMA_BASE_OFFSET;
+-	tasklet_setup(&hsdma->task, mtk_hsdma_tasklet);
+-
+-	irq = platform_get_irq(pdev, 0);
+-	if (irq < 0)
+-		return -EINVAL;
+-	ret = devm_request_irq(&pdev->dev, irq, mtk_hsdma_irq,
+-			       0, dev_name(&pdev->dev), hsdma);
+-	if (ret) {
+-		dev_err(&pdev->dev, "failed to request irq\n");
+-		return ret;
+-	}
+-
+-	device_reset(&pdev->dev);
+-
+-	dd = &hsdma->ddev;
+-	dma_cap_set(DMA_MEMCPY, dd->cap_mask);
+-	dd->copy_align = HSDMA_ALIGN_SIZE;
+-	dd->device_free_chan_resources = mtk_hsdma_free_chan_resources;
+-	dd->device_prep_dma_memcpy = mtk_hsdma_prep_dma_memcpy;
+-	dd->device_terminate_all = mtk_hsdma_terminate_all;
+-	dd->device_tx_status = mtk_hsdma_tx_status;
+-	dd->device_issue_pending = mtk_hsdma_issue_pending;
+-	dd->dev = &pdev->dev;
+-	dd->dev->dma_parms = &hsdma->dma_parms;
+-	dma_set_max_seg_size(dd->dev, HSDMA_MAX_PLEN);
+-	INIT_LIST_HEAD(&dd->channels);
+-
+-	chan = &hsdma->chan[0];
+-	chan->id = 0;
+-	chan->vchan.desc_free = mtk_hsdma_desc_free;
+-	vchan_init(&chan->vchan, dd);
+-
+-	/* init hardware */
+-	ret = mtk_hsdma_init(hsdma);
+-	if (ret) {
+-		dev_err(&pdev->dev, "failed to alloc ring descs\n");
+-		return ret;
+-	}
+-
+-	ret = dma_async_device_register(dd);
+-	if (ret) {
+-		dev_err(&pdev->dev, "failed to register dma device\n");
+-		goto err_uninit_hsdma;
+-	}
+-
+-	ret = of_dma_controller_register(pdev->dev.of_node,
+-					 of_dma_xlate_by_chan_id, hsdma);
+-	if (ret) {
+-		dev_err(&pdev->dev, "failed to register of dma controller\n");
+-		goto err_unregister;
+-	}
+-
+-	platform_set_drvdata(pdev, hsdma);
+-
+-	return 0;
+-
+-err_unregister:
+-	dma_async_device_unregister(dd);
+-err_uninit_hsdma:
+-	mtk_hsdma_uninit(hsdma);
+-	return ret;
+-}
+-
+-static int mtk_hsdma_remove(struct platform_device *pdev)
+-{
+-	struct mtk_hsdam_engine *hsdma = platform_get_drvdata(pdev);
+-
+-	mtk_hsdma_uninit(hsdma);
+-
+-	of_dma_controller_free(pdev->dev.of_node);
+-	dma_async_device_unregister(&hsdma->ddev);
+-
+-	return 0;
+-}
+-
+-static struct platform_driver mtk_hsdma_driver = {
+-	.probe = mtk_hsdma_probe,
+-	.remove = mtk_hsdma_remove,
+-	.driver = {
+-		.name = "hsdma-mt7621",
+-		.of_match_table = mtk_hsdma_of_match,
+-	},
+-};
+-module_platform_driver(mtk_hsdma_driver);
+-
+-MODULE_AUTHOR("Michael Lee <igvtee@gmail.com>");
+-MODULE_DESCRIPTION("MTK HSDMA driver");
+-MODULE_LICENSE("GPL v2");
+diff --git a/drivers/staging/rtl8188eu/os_dep/usb_intf.c b/drivers/staging/rtl8188eu/os_dep/usb_intf.c
+index 99bfc828672c2..497b4e0c358cc 100644
+--- a/drivers/staging/rtl8188eu/os_dep/usb_intf.c
++++ b/drivers/staging/rtl8188eu/os_dep/usb_intf.c
+@@ -41,6 +41,7 @@ static const struct usb_device_id rtw_usb_id_tbl[] = {
+ 	{USB_DEVICE(0x2357, 0x0111)}, /* TP-Link TL-WN727N v5.21 */
+ 	{USB_DEVICE(0x2C4E, 0x0102)}, /* MERCUSYS MW150US v2 */
+ 	{USB_DEVICE(0x0df6, 0x0076)}, /* Sitecom N150 v2 */
++	{USB_DEVICE(0x7392, 0xb811)}, /* Edimax EW-7811UN V2 */
+ 	{USB_DEVICE(USB_VENDER_ID_REALTEK, 0xffef)}, /* Rosewill RNX-N150NUB */
+ 	{}	/* Terminating entry */
+ };
+diff --git a/drivers/staging/rtl8723bs/os_dep/wifi_regd.c b/drivers/staging/rtl8723bs/os_dep/wifi_regd.c
+index 578b9f734231e..65592bf84f380 100644
+--- a/drivers/staging/rtl8723bs/os_dep/wifi_regd.c
++++ b/drivers/staging/rtl8723bs/os_dep/wifi_regd.c
+@@ -34,7 +34,7 @@
+ 	NL80211_RRF_PASSIVE_SCAN)
+ 
+ static const struct ieee80211_regdomain rtw_regdom_rd = {
+-	.n_reg_rules = 3,
++	.n_reg_rules = 2,
+ 	.alpha2 = "99",
+ 	.reg_rules = {
+ 		RTW_2GHZ_CH01_11,
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 01125d9f991bb..3d378da119e7a 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -953,7 +953,7 @@ static int vchiq_irq_queue_bulk_tx_rx(struct vchiq_instance *instance,
+ 	struct vchiq_service *service;
+ 	struct bulk_waiter_node *waiter = NULL;
+ 	bool found = false;
+-	void *userdata = NULL;
++	void *userdata;
+ 	int status = 0;
+ 	int ret;
+ 
+@@ -992,6 +992,8 @@ static int vchiq_irq_queue_bulk_tx_rx(struct vchiq_instance *instance,
+ 			"found bulk_waiter %pK for pid %d", waiter,
+ 			current->pid);
+ 		userdata = &waiter->bulk_waiter;
++	} else {
++		userdata = args->userdata;
+ 	}
+ 
+ 	/*
+@@ -1712,7 +1714,7 @@ vchiq_compat_ioctl_queue_bulk(struct file *file,
+ {
+ 	struct vchiq_queue_bulk_transfer32 args32;
+ 	struct vchiq_queue_bulk_transfer args;
+-	enum vchiq_bulk_dir dir = (cmd == VCHIQ_IOC_QUEUE_BULK_TRANSMIT) ?
++	enum vchiq_bulk_dir dir = (cmd == VCHIQ_IOC_QUEUE_BULK_TRANSMIT32) ?
+ 				  VCHIQ_BULK_TRANSMIT : VCHIQ_BULK_RECEIVE;
+ 
+ 	if (copy_from_user(&args32, argp, sizeof(args32)))
+diff --git a/drivers/staging/wfx/data_tx.c b/drivers/staging/wfx/data_tx.c
+index 36b36ef39d053..77fb104efdec1 100644
+--- a/drivers/staging/wfx/data_tx.c
++++ b/drivers/staging/wfx/data_tx.c
+@@ -331,6 +331,7 @@ static int wfx_tx_inner(struct wfx_vif *wvif, struct ieee80211_sta *sta,
+ {
+ 	struct hif_msg *hif_msg;
+ 	struct hif_req_tx *req;
++	struct wfx_tx_priv *tx_priv;
+ 	struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ 	struct ieee80211_key_conf *hw_key = tx_info->control.hw_key;
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+@@ -344,11 +345,14 @@ static int wfx_tx_inner(struct wfx_vif *wvif, struct ieee80211_sta *sta,
+ 
+ 	// From now tx_info->control is unusable
+ 	memset(tx_info->rate_driver_data, 0, sizeof(struct wfx_tx_priv));
++	// Fill tx_priv
++	tx_priv = (struct wfx_tx_priv *)tx_info->rate_driver_data;
++	tx_priv->icv_size = wfx_tx_get_icv_len(hw_key);
+ 
+ 	// Fill hif_msg
+ 	WARN(skb_headroom(skb) < wmsg_len, "not enough space in skb");
+ 	WARN(offset & 1, "attempt to transmit an unaligned frame");
+-	skb_put(skb, wfx_tx_get_icv_len(hw_key));
++	skb_put(skb, tx_priv->icv_size);
+ 	skb_push(skb, wmsg_len);
+ 	memset(skb->data, 0, wmsg_len);
+ 	hif_msg = (struct hif_msg *)skb->data;
+@@ -484,6 +488,7 @@ static void wfx_tx_fill_rates(struct wfx_dev *wdev,
+ 
+ void wfx_tx_confirm_cb(struct wfx_dev *wdev, const struct hif_cnf_tx *arg)
+ {
++	const struct wfx_tx_priv *tx_priv;
+ 	struct ieee80211_tx_info *tx_info;
+ 	struct wfx_vif *wvif;
+ 	struct sk_buff *skb;
+@@ -495,6 +500,7 @@ void wfx_tx_confirm_cb(struct wfx_dev *wdev, const struct hif_cnf_tx *arg)
+ 		return;
+ 	}
+ 	tx_info = IEEE80211_SKB_CB(skb);
++	tx_priv = wfx_skb_tx_priv(skb);
+ 	wvif = wdev_to_wvif(wdev, ((struct hif_msg *)skb->data)->interface);
+ 	WARN_ON(!wvif);
+ 	if (!wvif)
+@@ -503,6 +509,8 @@ void wfx_tx_confirm_cb(struct wfx_dev *wdev, const struct hif_cnf_tx *arg)
+ 	// Note that wfx_pending_get_pkt_us_delay() get data from tx_info
+ 	_trace_tx_stats(arg, skb, wfx_pending_get_pkt_us_delay(wdev, skb));
+ 	wfx_tx_fill_rates(wdev, tx_info, arg);
++	skb_trim(skb, skb->len - tx_priv->icv_size);
++
+ 	// From now, you can touch to tx_info->status, but do not touch to
+ 	// tx_priv anymore
+ 	// FIXME: use ieee80211_tx_info_clear_status()
+diff --git a/drivers/staging/wfx/data_tx.h b/drivers/staging/wfx/data_tx.h
+index 46c9fff7a870e..401363d6b563a 100644
+--- a/drivers/staging/wfx/data_tx.h
++++ b/drivers/staging/wfx/data_tx.h
+@@ -35,6 +35,7 @@ struct tx_policy_cache {
+ 
+ struct wfx_tx_priv {
+ 	ktime_t xmit_timestamp;
++	unsigned char icv_size;
+ };
+ 
+ void wfx_tx_policy_init(struct wfx_vif *wvif);
+diff --git a/drivers/target/iscsi/cxgbit/cxgbit_target.c b/drivers/target/iscsi/cxgbit/cxgbit_target.c
+index 9b3eb2e8c92ad..b926e1d6c7b8e 100644
+--- a/drivers/target/iscsi/cxgbit/cxgbit_target.c
++++ b/drivers/target/iscsi/cxgbit/cxgbit_target.c
+@@ -86,8 +86,7 @@ static int cxgbit_is_ofld_imm(const struct sk_buff *skb)
+ 	if (likely(cxgbit_skcb_flags(skb) & SKCBF_TX_ISO))
+ 		length += sizeof(struct cpl_tx_data_iso);
+ 
+-#define MAX_IMM_TX_PKT_LEN	256
+-	return length <= MAX_IMM_TX_PKT_LEN;
++	return length <= MAX_IMM_OFLD_TX_DATA_WR_LEN;
+ }
+ 
+ /*
+diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c
+index 1e3614e4798f0..6cbb3643c6c48 100644
+--- a/drivers/tee/optee/rpc.c
++++ b/drivers/tee/optee/rpc.c
+@@ -54,8 +54,9 @@ bad:
+ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx,
+ 					     struct optee_msg_arg *arg)
+ {
+-	struct i2c_client client = { 0 };
+ 	struct tee_param *params;
++	struct i2c_adapter *adapter;
++	struct i2c_msg msg = { };
+ 	size_t i;
+ 	int ret = -EOPNOTSUPP;
+ 	u8 attr[] = {
+@@ -85,48 +86,48 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx,
+ 			goto bad;
+ 	}
+ 
+-	client.adapter = i2c_get_adapter(params[0].u.value.b);
+-	if (!client.adapter)
++	adapter = i2c_get_adapter(params[0].u.value.b);
++	if (!adapter)
+ 		goto bad;
+ 
+ 	if (params[1].u.value.a & OPTEE_MSG_RPC_CMD_I2C_FLAGS_TEN_BIT) {
+-		if (!i2c_check_functionality(client.adapter,
++		if (!i2c_check_functionality(adapter,
+ 					     I2C_FUNC_10BIT_ADDR)) {
+-			i2c_put_adapter(client.adapter);
++			i2c_put_adapter(adapter);
+ 			goto bad;
+ 		}
+ 
+-		client.flags = I2C_CLIENT_TEN;
++		msg.flags = I2C_M_TEN;
+ 	}
+ 
+-	client.addr = params[0].u.value.c;
+-	snprintf(client.name, I2C_NAME_SIZE, "i2c%d", client.adapter->nr);
++	msg.addr = params[0].u.value.c;
++	msg.buf  = params[2].u.memref.shm->kaddr;
++	msg.len  = params[2].u.memref.size;
+ 
+ 	switch (params[0].u.value.a) {
+ 	case OPTEE_MSG_RPC_CMD_I2C_TRANSFER_RD:
+-		ret = i2c_master_recv(&client, params[2].u.memref.shm->kaddr,
+-				      params[2].u.memref.size);
++		msg.flags |= I2C_M_RD;
+ 		break;
+ 	case OPTEE_MSG_RPC_CMD_I2C_TRANSFER_WR:
+-		ret = i2c_master_send(&client, params[2].u.memref.shm->kaddr,
+-				      params[2].u.memref.size);
+ 		break;
+ 	default:
+-		i2c_put_adapter(client.adapter);
++		i2c_put_adapter(adapter);
+ 		goto bad;
+ 	}
+ 
++	ret = i2c_transfer(adapter, &msg, 1);
++
+ 	if (ret < 0) {
+ 		arg->ret = TEEC_ERROR_COMMUNICATION;
+ 	} else {
+-		params[3].u.value.a = ret;
++		params[3].u.value.a = msg.len;
+ 		if (optee_to_msg_param(arg->params, arg->num_params, params))
+ 			arg->ret = TEEC_ERROR_BAD_PARAMETERS;
+ 		else
+ 			arg->ret = TEEC_SUCCESS;
+ 	}
+ 
+-	i2c_put_adapter(client.adapter);
++	i2c_put_adapter(adapter);
+ 	kfree(params);
+ 	return;
+ bad:
+diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
+index 612f063c1cfcd..ddc166e3a93eb 100644
+--- a/drivers/thermal/cpufreq_cooling.c
++++ b/drivers/thermal/cpufreq_cooling.c
+@@ -441,7 +441,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
+ 	frequency = get_state_freq(cpufreq_cdev, state);
+ 
+ 	ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
+-	if (ret > 0) {
++	if (ret >= 0) {
+ 		cpufreq_cdev->cpufreq_state = state;
+ 		cpus = cpufreq_cdev->policy->cpus;
+ 		max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index 25f3152089c2a..fea1eeac5b907 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -2557,7 +2557,8 @@ static void gsmld_write_wakeup(struct tty_struct *tty)
+  */
+ 
+ static ssize_t gsmld_read(struct tty_struct *tty, struct file *file,
+-			 unsigned char __user *buf, size_t nr)
++			  unsigned char *buf, size_t nr,
++			  void **cookie, unsigned long offset)
+ {
+ 	return -EOPNOTSUPP;
+ }
+diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c
+index 12557ee1edb68..1363e659dc1db 100644
+--- a/drivers/tty/n_hdlc.c
++++ b/drivers/tty/n_hdlc.c
+@@ -416,13 +416,19 @@ static void n_hdlc_tty_receive(struct tty_struct *tty, const __u8 *data,
+  * Returns the number of bytes returned or error code.
+  */
+ static ssize_t n_hdlc_tty_read(struct tty_struct *tty, struct file *file,
+-			   __u8 __user *buf, size_t nr)
++			   __u8 *kbuf, size_t nr,
++			   void **cookie, unsigned long offset)
+ {
+ 	struct n_hdlc *n_hdlc = tty->disc_data;
+ 	int ret = 0;
+ 	struct n_hdlc_buf *rbuf;
+ 	DECLARE_WAITQUEUE(wait, current);
+ 
++	/* Is this a repeated call for an rbuf we already found earlier? */
++	rbuf = *cookie;
++	if (rbuf)
++		goto have_rbuf;
++
+ 	add_wait_queue(&tty->read_wait, &wait);
+ 
+ 	for (;;) {
+@@ -436,25 +442,8 @@ static ssize_t n_hdlc_tty_read(struct tty_struct *tty, struct file *file,
+ 		set_current_state(TASK_INTERRUPTIBLE);
+ 
+ 		rbuf = n_hdlc_buf_get(&n_hdlc->rx_buf_list);
+-		if (rbuf) {
+-			if (rbuf->count > nr) {
+-				/* too large for caller's buffer */
+-				ret = -EOVERFLOW;
+-			} else {
+-				__set_current_state(TASK_RUNNING);
+-				if (copy_to_user(buf, rbuf->buf, rbuf->count))
+-					ret = -EFAULT;
+-				else
+-					ret = rbuf->count;
+-			}
+-
+-			if (n_hdlc->rx_free_buf_list.count >
+-			    DEFAULT_RX_BUF_COUNT)
+-				kfree(rbuf);
+-			else
+-				n_hdlc_buf_put(&n_hdlc->rx_free_buf_list, rbuf);
++		if (rbuf)
+ 			break;
+-		}
+ 
+ 		/* no data */
+ 		if (tty_io_nonblock(tty, file)) {
+@@ -473,6 +462,39 @@ static ssize_t n_hdlc_tty_read(struct tty_struct *tty, struct file *file,
+ 	remove_wait_queue(&tty->read_wait, &wait);
+ 	__set_current_state(TASK_RUNNING);
+ 
++	if (!rbuf)
++		return ret;
++	*cookie = rbuf;
++
++have_rbuf:
++	/* Have we used it up entirely? */
++	if (offset >= rbuf->count)
++		goto done_with_rbuf;
++
++	/* More data to go, but can't copy any more? EOVERFLOW */
++	ret = -EOVERFLOW;
++	if (!nr)
++		goto done_with_rbuf;
++
++	/* Copy as much data as possible */
++	ret = rbuf->count - offset;
++	if (ret > nr)
++		ret = nr;
++	memcpy(kbuf, rbuf->buf+offset, ret);
++	offset += ret;
++
++	/* If we still have data left, we leave the rbuf in the cookie */
++	if (offset < rbuf->count)
++		return ret;
++
++done_with_rbuf:
++	*cookie = NULL;
++
++	if (n_hdlc->rx_free_buf_list.count > DEFAULT_RX_BUF_COUNT)
++		kfree(rbuf);
++	else
++		n_hdlc_buf_put(&n_hdlc->rx_free_buf_list, rbuf);
++
+ 	return ret;
+ 
+ }	/* end of n_hdlc_tty_read() */
+diff --git a/drivers/tty/n_null.c b/drivers/tty/n_null.c
+index 96feabae47407..ce03ae78f5c6a 100644
+--- a/drivers/tty/n_null.c
++++ b/drivers/tty/n_null.c
+@@ -20,7 +20,8 @@ static void n_null_close(struct tty_struct *tty)
+ }
+ 
+ static ssize_t n_null_read(struct tty_struct *tty, struct file *file,
+-			   unsigned char __user * buf, size_t nr)
++			   unsigned char *buf, size_t nr,
++			   void **cookie, unsigned long offset)
+ {
+ 	return -EOPNOTSUPP;
+ }
+diff --git a/drivers/tty/n_r3964.c b/drivers/tty/n_r3964.c
+index 934dd2fb2ec80..3161f0a535e37 100644
+--- a/drivers/tty/n_r3964.c
++++ b/drivers/tty/n_r3964.c
+@@ -129,7 +129,7 @@ static void remove_client_block(struct r3964_info *pInfo,
+ static int r3964_open(struct tty_struct *tty);
+ static void r3964_close(struct tty_struct *tty);
+ static ssize_t r3964_read(struct tty_struct *tty, struct file *file,
+-		unsigned char __user * buf, size_t nr);
++		void *cookie, unsigned char *buf, size_t nr);
+ static ssize_t r3964_write(struct tty_struct *tty, struct file *file,
+ 		const unsigned char *buf, size_t nr);
+ static int r3964_ioctl(struct tty_struct *tty, struct file *file,
+@@ -1058,7 +1058,8 @@ static void r3964_close(struct tty_struct *tty)
+ }
+ 
+ static ssize_t r3964_read(struct tty_struct *tty, struct file *file,
+-			  unsigned char __user * buf, size_t nr)
++			  unsigned char *kbuf, size_t nr,
++			  void **cookie, unsigned long offset)
+ {
+ 	struct r3964_info *pInfo = tty->disc_data;
+ 	struct r3964_client_info *pClient;
+@@ -1109,10 +1110,7 @@ static ssize_t r3964_read(struct tty_struct *tty, struct file *file,
+ 		kfree(pMsg);
+ 		TRACE_M("r3964_read - msg kfree %p", pMsg);
+ 
+-		if (copy_to_user(buf, &theMsg, ret)) {
+-			ret = -EFAULT;
+-			goto unlock;
+-		}
++		memcpy(kbuf, &theMsg, ret);
+ 
+ 		TRACE_PS("read - return %d", ret);
+ 		goto unlock;
+diff --git a/drivers/tty/n_tracerouter.c b/drivers/tty/n_tracerouter.c
+index 4479af4d2fa5c..3490ed51b1a3c 100644
+--- a/drivers/tty/n_tracerouter.c
++++ b/drivers/tty/n_tracerouter.c
+@@ -118,7 +118,9 @@ static void n_tracerouter_close(struct tty_struct *tty)
+  *	 -EINVAL
+  */
+ static ssize_t n_tracerouter_read(struct tty_struct *tty, struct file *file,
+-				  unsigned char __user *buf, size_t nr) {
++				  unsigned char *buf, size_t nr,
++				  void **cookie, unsigned long offset)
++{
+ 	return -EINVAL;
+ }
+ 
+diff --git a/drivers/tty/n_tracesink.c b/drivers/tty/n_tracesink.c
+index d96ba82cc3569..1d9931041fd8b 100644
+--- a/drivers/tty/n_tracesink.c
++++ b/drivers/tty/n_tracesink.c
+@@ -115,7 +115,9 @@ static void n_tracesink_close(struct tty_struct *tty)
+  *	 -EINVAL
+  */
+ static ssize_t n_tracesink_read(struct tty_struct *tty, struct file *file,
+-				unsigned char __user *buf, size_t nr) {
++				unsigned char *buf, size_t nr,
++				void **cookie, unsigned long offset)
++{
+ 	return -EINVAL;
+ }
+ 
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index c2869489ba681..e8963165082ee 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -164,29 +164,24 @@ static void zero_buffer(struct tty_struct *tty, u8 *buffer, int size)
+ 		memset(buffer, 0x00, size);
+ }
+ 
+-static int tty_copy_to_user(struct tty_struct *tty, void __user *to,
+-			    size_t tail, size_t n)
++static void tty_copy(struct tty_struct *tty, void *to, size_t tail, size_t n)
+ {
+ 	struct n_tty_data *ldata = tty->disc_data;
+ 	size_t size = N_TTY_BUF_SIZE - tail;
+ 	void *from = read_buf_addr(ldata, tail);
+-	int uncopied;
+ 
+ 	if (n > size) {
+ 		tty_audit_add_data(tty, from, size);
+-		uncopied = copy_to_user(to, from, size);
+-		zero_buffer(tty, from, size - uncopied);
+-		if (uncopied)
+-			return uncopied;
++		memcpy(to, from, size);
++		zero_buffer(tty, from, size);
+ 		to += size;
+ 		n -= size;
+ 		from = ldata->read_buf;
+ 	}
+ 
+ 	tty_audit_add_data(tty, from, n);
+-	uncopied = copy_to_user(to, from, n);
+-	zero_buffer(tty, from, n - uncopied);
+-	return uncopied;
++	memcpy(to, from, n);
++	zero_buffer(tty, from, n);
+ }
+ 
+ /**
+@@ -1942,15 +1937,16 @@ static inline int input_available_p(struct tty_struct *tty, int poll)
+ /**
+  *	copy_from_read_buf	-	copy read data directly
+  *	@tty: terminal device
+- *	@b: user data
++ *	@kbp: data
+  *	@nr: size of data
+  *
+  *	Helper function to speed up n_tty_read.  It is only called when
+- *	ICANON is off; it copies characters straight from the tty queue to
+- *	user space directly.  It can be profitably called twice; once to
+- *	drain the space from the tail pointer to the (physical) end of the
+- *	buffer, and once to drain the space from the (physical) beginning of
+- *	the buffer to head pointer.
++ *	ICANON is off; it copies characters straight from the tty queue.
++ *
++ *	It can be profitably called twice; once to drain the space from
++ *	the tail pointer to the (physical) end of the buffer, and once
++ *	to drain the space from the (physical) beginning of the buffer
++ *	to head pointer.
+  *
+  *	Called under the ldata->atomic_read_lock sem
+  *
+@@ -1960,7 +1956,7 @@ static inline int input_available_p(struct tty_struct *tty, int poll)
+  */
+ 
+ static int copy_from_read_buf(struct tty_struct *tty,
+-				      unsigned char __user **b,
++				      unsigned char **kbp,
+ 				      size_t *nr)
+ 
+ {
+@@ -1976,8 +1972,7 @@ static int copy_from_read_buf(struct tty_struct *tty,
+ 	n = min(*nr, n);
+ 	if (n) {
+ 		unsigned char *from = read_buf_addr(ldata, tail);
+-		retval = copy_to_user(*b, from, n);
+-		n -= retval;
++		memcpy(*kbp, from, n);
+ 		is_eof = n == 1 && *from == EOF_CHAR(tty);
+ 		tty_audit_add_data(tty, from, n);
+ 		zero_buffer(tty, from, n);
+@@ -1986,7 +1981,7 @@ static int copy_from_read_buf(struct tty_struct *tty,
+ 		if (L_EXTPROC(tty) && ldata->icanon && is_eof &&
+ 		    (head == ldata->read_tail))
+ 			n = 0;
+-		*b += n;
++		*kbp += n;
+ 		*nr -= n;
+ 	}
+ 	return retval;
+@@ -1995,12 +1990,12 @@ static int copy_from_read_buf(struct tty_struct *tty,
+ /**
+  *	canon_copy_from_read_buf	-	copy read data in canonical mode
+  *	@tty: terminal device
+- *	@b: user data
++ *	@kbp: data
+  *	@nr: size of data
+  *
+  *	Helper function for n_tty_read.  It is only called when ICANON is on;
+  *	it copies one line of input up to and including the line-delimiting
+- *	character into the user-space buffer.
++ *	character into the result buffer.
+  *
+  *	NB: When termios is changed from non-canonical to canonical mode and
+  *	the read buffer contains data, n_tty_set_termios() simulates an EOF
+@@ -2016,14 +2011,14 @@ static int copy_from_read_buf(struct tty_struct *tty,
+  */
+ 
+ static int canon_copy_from_read_buf(struct tty_struct *tty,
+-				    unsigned char __user **b,
++				    unsigned char **kbp,
+ 				    size_t *nr)
+ {
+ 	struct n_tty_data *ldata = tty->disc_data;
+ 	size_t n, size, more, c;
+ 	size_t eol;
+ 	size_t tail;
+-	int ret, found = 0;
++	int found = 0;
+ 
+ 	/* N.B. avoid overrun if nr == 0 */
+ 	if (!*nr)
+@@ -2059,10 +2054,8 @@ static int canon_copy_from_read_buf(struct tty_struct *tty,
+ 	n_tty_trace("%s: eol:%zu found:%d n:%zu c:%zu tail:%zu more:%zu\n",
+ 		    __func__, eol, found, n, c, tail, more);
+ 
+-	ret = tty_copy_to_user(tty, *b, tail, n);
+-	if (ret)
+-		return -EFAULT;
+-	*b += n;
++	tty_copy(tty, *kbp, tail, n);
++	*kbp += n;
+ 	*nr -= n;
+ 
+ 	if (found)
+@@ -2127,10 +2120,11 @@ static int job_control(struct tty_struct *tty, struct file *file)
+  */
+ 
+ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+-			 unsigned char __user *buf, size_t nr)
++			  unsigned char *kbuf, size_t nr,
++			  void **cookie, unsigned long offset)
+ {
+ 	struct n_tty_data *ldata = tty->disc_data;
+-	unsigned char __user *b = buf;
++	unsigned char *kb = kbuf;
+ 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ 	int c;
+ 	int minimum, time;
+@@ -2176,17 +2170,13 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+ 		/* First test for status change. */
+ 		if (packet && tty->link->ctrl_status) {
+ 			unsigned char cs;
+-			if (b != buf)
++			if (kb != kbuf)
+ 				break;
+ 			spin_lock_irq(&tty->link->ctrl_lock);
+ 			cs = tty->link->ctrl_status;
+ 			tty->link->ctrl_status = 0;
+ 			spin_unlock_irq(&tty->link->ctrl_lock);
+-			if (put_user(cs, b)) {
+-				retval = -EFAULT;
+-				break;
+-			}
+-			b++;
++			*kb++ = cs;
+ 			nr--;
+ 			break;
+ 		}
+@@ -2229,24 +2219,20 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+ 		}
+ 
+ 		if (ldata->icanon && !L_EXTPROC(tty)) {
+-			retval = canon_copy_from_read_buf(tty, &b, &nr);
++			retval = canon_copy_from_read_buf(tty, &kb, &nr);
+ 			if (retval)
+ 				break;
+ 		} else {
+ 			int uncopied;
+ 
+ 			/* Deal with packet mode. */
+-			if (packet && b == buf) {
+-				if (put_user(TIOCPKT_DATA, b)) {
+-					retval = -EFAULT;
+-					break;
+-				}
+-				b++;
++			if (packet && kb == kbuf) {
++				*kb++ = TIOCPKT_DATA;
+ 				nr--;
+ 			}
+ 
+-			uncopied = copy_from_read_buf(tty, &b, &nr);
+-			uncopied += copy_from_read_buf(tty, &b, &nr);
++			uncopied = copy_from_read_buf(tty, &kb, &nr);
++			uncopied += copy_from_read_buf(tty, &kb, &nr);
+ 			if (uncopied) {
+ 				retval = -EFAULT;
+ 				break;
+@@ -2255,7 +2241,7 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+ 
+ 		n_tty_check_unthrottle(tty);
+ 
+-		if (b - buf >= minimum)
++		if (kb - kbuf >= minimum)
+ 			break;
+ 		if (time)
+ 			timeout = time;
+@@ -2267,8 +2253,8 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+ 	remove_wait_queue(&tty->read_wait, &wait);
+ 	mutex_unlock(&ldata->atomic_read_lock);
+ 
+-	if (b - buf)
+-		retval = b - buf;
++	if (kb - kbuf)
++		retval = kb - kbuf;
+ 
+ 	return retval;
+ }
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 21cd5ac6ca8b5..3f55fe7293f31 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -142,7 +142,7 @@ LIST_HEAD(tty_drivers);			/* linked list of tty drivers */
+ /* Mutex to protect creating and releasing a tty */
+ DEFINE_MUTEX(tty_mutex);
+ 
+-static ssize_t tty_read(struct file *, char __user *, size_t, loff_t *);
++static ssize_t tty_read(struct kiocb *, struct iov_iter *);
+ static ssize_t tty_write(struct kiocb *, struct iov_iter *);
+ static __poll_t tty_poll(struct file *, poll_table *);
+ static int tty_open(struct inode *, struct file *);
+@@ -473,8 +473,9 @@ static void tty_show_fdinfo(struct seq_file *m, struct file *file)
+ 
+ static const struct file_operations tty_fops = {
+ 	.llseek		= no_llseek,
+-	.read		= tty_read,
++	.read_iter	= tty_read,
+ 	.write_iter	= tty_write,
++	.splice_read	= generic_file_splice_read,
+ 	.splice_write	= iter_file_splice_write,
+ 	.poll		= tty_poll,
+ 	.unlocked_ioctl	= tty_ioctl,
+@@ -487,8 +488,9 @@ static const struct file_operations tty_fops = {
+ 
+ static const struct file_operations console_fops = {
+ 	.llseek		= no_llseek,
+-	.read		= tty_read,
++	.read_iter	= tty_read,
+ 	.write_iter	= redirected_tty_write,
++	.splice_read	= generic_file_splice_read,
+ 	.splice_write	= iter_file_splice_write,
+ 	.poll		= tty_poll,
+ 	.unlocked_ioctl	= tty_ioctl,
+@@ -830,6 +832,65 @@ static void tty_update_time(struct timespec64 *time)
+ 		time->tv_sec = sec;
+ }
+ 
++/*
++ * Iterate on the ldisc ->read() function until we've gotten all
++ * the data the ldisc has for us.
++ *
++ * The "cookie" is something that the ldisc read function can fill
++ * in to let us know that there is more data to be had.
++ *
++ * We promise to continue to call the ldisc until it stops returning
++ * data or clears the cookie. The cookie may be something that the
++ * ldisc maintains state for and needs to free.
++ */
++static int iterate_tty_read(struct tty_ldisc *ld, struct tty_struct *tty,
++		struct file *file, struct iov_iter *to)
++{
++	int retval = 0;
++	void *cookie = NULL;
++	unsigned long offset = 0;
++	char kernel_buf[64];
++	size_t count = iov_iter_count(to);
++
++	do {
++		int size, copied;
++
++		size = count > sizeof(kernel_buf) ? sizeof(kernel_buf) : count;
++		size = ld->ops->read(tty, file, kernel_buf, size, &cookie, offset);
++		if (!size)
++			break;
++
++		/*
++		 * A ldisc read error return will override any previously copied
++		 * data (eg -EOVERFLOW from HDLC)
++		 */
++		if (size < 0) {
++			memzero_explicit(kernel_buf, sizeof(kernel_buf));
++			return size;
++		}
++
++		copied = copy_to_iter(kernel_buf, size, to);
++		offset += copied;
++		count -= copied;
++
++		/*
++		 * If the user copy failed, we still need to do another ->read()
++		 * call if we had a cookie to let the ldisc clear up.
++		 *
++		 * But make sure size is zeroed.
++		 */
++		if (unlikely(copied != size)) {
++			count = 0;
++			retval = -EFAULT;
++		}
++	} while (cookie);
++
++	/* We always clear tty buffer in case they contained passwords */
++	memzero_explicit(kernel_buf, sizeof(kernel_buf));
++	return offset ? offset : retval;
++}
++
++
+ /**
+  *	tty_read	-	read method for tty device files
+  *	@file: pointer to tty file
+@@ -845,10 +906,10 @@ static void tty_update_time(struct timespec64 *time)
+  *	read calls may be outstanding in parallel.
+  */
+ 
+-static ssize_t tty_read(struct file *file, char __user *buf, size_t count,
+-			loff_t *ppos)
++static ssize_t tty_read(struct kiocb *iocb, struct iov_iter *to)
+ {
+ 	int i;
++	struct file *file = iocb->ki_filp;
+ 	struct inode *inode = file_inode(file);
+ 	struct tty_struct *tty = file_tty(file);
+ 	struct tty_ldisc *ld;
+@@ -861,12 +922,9 @@ static ssize_t tty_read(struct file *file, char __user *buf, size_t count,
+ 	/* We want to wait for the line discipline to sort out in this
+ 	   situation */
+ 	ld = tty_ldisc_ref_wait(tty);
+-	if (!ld)
+-		return hung_up_tty_read(file, buf, count, ppos);
+-	if (ld->ops->read)
+-		i = ld->ops->read(tty, file, buf, count);
+-	else
+-		i = -EIO;
++	i = -EIO;
++	if (ld && ld->ops->read)
++		i = iterate_tty_read(ld, tty, file, to);
+ 	tty_ldisc_deref(ld);
+ 
+ 	if (i > 0)
+@@ -2885,7 +2943,7 @@ static long tty_compat_ioctl(struct file *file, unsigned int cmd,
+ 
+ static int this_tty(const void *t, struct file *file, unsigned fd)
+ {
+-	if (likely(file->f_op->read != tty_read))
++	if (likely(file->f_op->read_iter != tty_read))
+ 		return 0;
+ 	return file_tty(file) != t ? 0 : fd + 1;
+ }
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index e9ac215b96633..fc3269f5faf19 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -1313,19 +1313,20 @@ static void dwc2_hc_start_transfer(struct dwc2_hsotg *hsotg,
+ 			if (num_packets > max_hc_pkt_count) {
+ 				num_packets = max_hc_pkt_count;
+ 				chan->xfer_len = num_packets * chan->max_packet;
++			} else if (chan->ep_is_in) {
++				/*
++				 * Always program an integral # of max packets
++				 * for IN transfers.
++				 * Note: This assumes that the input buffer is
++				 * aligned and sized accordingly.
++				 */
++				chan->xfer_len = num_packets * chan->max_packet;
+ 			}
+ 		} else {
+ 			/* Need 1 packet for transfer length of 0 */
+ 			num_packets = 1;
+ 		}
+ 
+-		if (chan->ep_is_in)
+-			/*
+-			 * Always program an integral # of max packets for IN
+-			 * transfers
+-			 */
+-			chan->xfer_len = num_packets * chan->max_packet;
+-
+ 		if (chan->ep_type == USB_ENDPOINT_XFER_INT ||
+ 		    chan->ep_type == USB_ENDPOINT_XFER_ISOC)
+ 			/*
+diff --git a/drivers/usb/dwc2/hcd_intr.c b/drivers/usb/dwc2/hcd_intr.c
+index a052d39b4375e..d5f4ec1b73b15 100644
+--- a/drivers/usb/dwc2/hcd_intr.c
++++ b/drivers/usb/dwc2/hcd_intr.c
+@@ -500,7 +500,7 @@ static int dwc2_update_urb_state(struct dwc2_hsotg *hsotg,
+ 						      &short_read);
+ 
+ 	if (urb->actual_length + xfer_length > urb->length) {
+-		dev_warn(hsotg->dev, "%s(): trimming xfer length\n", __func__);
++		dev_dbg(hsotg->dev, "%s(): trimming xfer length\n", __func__);
+ 		xfer_length = urb->length - urb->actual_length;
+ 	}
+ 
+@@ -1977,6 +1977,18 @@ error:
+ 		qtd->error_count++;
+ 		dwc2_update_urb_state_abn(hsotg, chan, chnum, qtd->urb,
+ 					  qtd, DWC2_HC_XFER_XACT_ERR);
++		/*
++		 * We can get here after a completed transaction
++		 * (urb->actual_length >= urb->length) which was not reported
++		 * as completed. If that is the case, and we do not abort
++		 * the transfer, a transfer of size 0 will be enqueued
++		 * subsequently. If urb->actual_length is not DMA-aligned,
++		 * the buffer will then point to an unaligned address, and
++		 * the resulting behavior is undefined. Bail out in that
++		 * situation.
++		 */
++		if (qtd->urb->actual_length >= qtd->urb->length)
++			qtd->error_count = 3;
+ 		dwc2_hcd_save_data_toggle(hsotg, chan, chnum, qtd);
+ 		dwc2_halt_channel(hsotg, chan, qtd, DWC2_HC_XFER_XACT_ERR);
+ 	}
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index ee44321fee386..56f7235bc068c 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -605,8 +605,23 @@ static int dwc3_gadget_set_ep_config(struct dwc3_ep *dep, unsigned int action)
+ 		params.param0 |= DWC3_DEPCFG_FIFO_NUMBER(dep->number >> 1);
+ 
+ 	if (desc->bInterval) {
+-		params.param1 |= DWC3_DEPCFG_BINTERVAL_M1(desc->bInterval - 1);
+-		dep->interval = 1 << (desc->bInterval - 1);
++		u8 bInterval_m1;
++
++		/*
++		 * Valid range for DEPCFG.bInterval_m1 is from 0 to 13, and it
++		 * must be set to 0 when the controller operates in full-speed.
++		 */
++		bInterval_m1 = min_t(u8, desc->bInterval - 1, 13);
++		if (dwc->gadget->speed == USB_SPEED_FULL)
++			bInterval_m1 = 0;
++
++		if (usb_endpoint_type(desc) == USB_ENDPOINT_XFER_INT &&
++		    dwc->gadget->speed == USB_SPEED_FULL)
++			dep->interval = desc->bInterval;
++		else
++			dep->interval = 1 << (desc->bInterval - 1);
++
++		params.param1 |= DWC3_DEPCFG_BINTERVAL_M1(bInterval_m1);
+ 	}
+ 
+ 	return dwc3_send_gadget_ep_cmd(dep, DWC3_DEPCMD_SETEPCONFIG, &params);
+diff --git a/drivers/usb/gadget/function/u_audio.c b/drivers/usb/gadget/function/u_audio.c
+index e6d32c5367812..908e49dafd620 100644
+--- a/drivers/usb/gadget/function/u_audio.c
++++ b/drivers/usb/gadget/function/u_audio.c
+@@ -89,7 +89,12 @@ static void u_audio_iso_complete(struct usb_ep *ep, struct usb_request *req)
+ 	struct snd_uac_chip *uac = prm->uac;
+ 
+ 	/* i/f shutting down */
+-	if (!prm->ep_enabled || req->status == -ESHUTDOWN)
++	if (!prm->ep_enabled) {
++		usb_ep_free_request(ep, req);
++		return;
++	}
++
++	if (req->status == -ESHUTDOWN)
+ 		return;
+ 
+ 	/*
+@@ -336,8 +341,14 @@ static inline void free_ep(struct uac_rtd_params *prm, struct usb_ep *ep)
+ 
+ 	for (i = 0; i < params->req_number; i++) {
+ 		if (prm->ureq[i].req) {
+-			usb_ep_dequeue(ep, prm->ureq[i].req);
+-			usb_ep_free_request(ep, prm->ureq[i].req);
++			if (usb_ep_dequeue(ep, prm->ureq[i].req))
++				usb_ep_free_request(ep, prm->ureq[i].req);
++			/*
++			 * If usb_ep_dequeue() cannot successfully dequeue the
++			 * request, the request will be freed by the completion
++			 * callback.
++			 */
++
+ 			prm->ureq[i].req = NULL;
+ 		}
+ 	}
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index 849e0b770130a..1cd87729ba604 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -2240,32 +2240,35 @@ int musb_queue_resume_work(struct musb *musb,
+ {
+ 	struct musb_pending_work *w;
+ 	unsigned long flags;
++	bool is_suspended;
+ 	int error;
+ 
+ 	if (WARN_ON(!callback))
+ 		return -EINVAL;
+ 
+-	if (pm_runtime_active(musb->controller))
+-		return callback(musb, data);
++	spin_lock_irqsave(&musb->list_lock, flags);
++	is_suspended = musb->is_runtime_suspended;
++
++	if (is_suspended) {
++		w = devm_kzalloc(musb->controller, sizeof(*w), GFP_ATOMIC);
++		if (!w) {
++			error = -ENOMEM;
++			goto out_unlock;
++		}
+ 
+-	w = devm_kzalloc(musb->controller, sizeof(*w), GFP_ATOMIC);
+-	if (!w)
+-		return -ENOMEM;
++		w->callback = callback;
++		w->data = data;
+ 
+-	w->callback = callback;
+-	w->data = data;
+-	spin_lock_irqsave(&musb->list_lock, flags);
+-	if (musb->is_runtime_suspended) {
+ 		list_add_tail(&w->node, &musb->pending_list);
+ 		error = 0;
+-	} else {
+-		dev_err(musb->controller, "could not add resume work %p\n",
+-			callback);
+-		devm_kfree(musb->controller, w);
+-		error = -EINPROGRESS;
+ 	}
++
++out_unlock:
+ 	spin_unlock_irqrestore(&musb->list_lock, flags);
+ 
++	if (!is_suspended)
++		error = callback(musb, data);
++
+ 	return error;
+ }
+ EXPORT_SYMBOL_GPL(musb_queue_resume_work);
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index e0f4c3d9649cd..56cd70ba201c7 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1386,8 +1386,9 @@ static int change_speed(struct tty_struct *tty, struct usb_serial_port *port)
+ 	index_value = get_ftdi_divisor(tty, port);
+ 	value = (u16)index_value;
+ 	index = (u16)(index_value >> 16);
+-	if ((priv->chip_type == FT2232C) || (priv->chip_type == FT2232H) ||
+-		(priv->chip_type == FT4232H) || (priv->chip_type == FT232H)) {
++	if (priv->chip_type == FT2232C || priv->chip_type == FT2232H ||
++			priv->chip_type == FT4232H || priv->chip_type == FT232H ||
++			priv->chip_type == FTX) {
+ 		/* Probably the BM type needs the MSB of the encoded fractional
+ 		 * divider also moved like for the chips above. Any infos? */
+ 		index = (u16)((index << 8) | priv->interface);
+diff --git a/drivers/usb/serial/mos7720.c b/drivers/usb/serial/mos7720.c
+index 5a5d2a95070ed..b418a0d4adb89 100644
+--- a/drivers/usb/serial/mos7720.c
++++ b/drivers/usb/serial/mos7720.c
+@@ -1250,8 +1250,10 @@ static int mos7720_write(struct tty_struct *tty, struct usb_serial_port *port,
+ 	if (urb->transfer_buffer == NULL) {
+ 		urb->transfer_buffer = kmalloc(URB_TRANSFER_BUFFER_SIZE,
+ 					       GFP_ATOMIC);
+-		if (!urb->transfer_buffer)
++		if (!urb->transfer_buffer) {
++			bytes_sent = -ENOMEM;
+ 			goto exit;
++		}
+ 	}
+ 	transfer_size = min(count, URB_TRANSFER_BUFFER_SIZE);
+ 
+diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
+index 23f91d658cb46..30c25ef0dacd2 100644
+--- a/drivers/usb/serial/mos7840.c
++++ b/drivers/usb/serial/mos7840.c
+@@ -883,8 +883,10 @@ static int mos7840_write(struct tty_struct *tty, struct usb_serial_port *port,
+ 	if (urb->transfer_buffer == NULL) {
+ 		urb->transfer_buffer = kmalloc(URB_TRANSFER_BUFFER_SIZE,
+ 					       GFP_ATOMIC);
+-		if (!urb->transfer_buffer)
++		if (!urb->transfer_buffer) {
++			bytes_sent = -ENOMEM;
+ 			goto exit;
++		}
+ 	}
+ 	transfer_size = min(count, URB_TRANSFER_BUFFER_SIZE);
+ 
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 2049e66f34a3f..c6969ca728390 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1569,7 +1569,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1272, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1273, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1274, 0xff, 0xff, 0xff) },
+-	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1275, 0xff, 0xff, 0xff) },
++	{ USB_DEVICE(ZTE_VENDOR_ID, 0x1275),	/* ZTE P685M */
++	  .driver_info = RSVD(3) | RSVD(4) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1276, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1277, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1278, 0xff, 0xff, 0xff) },
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index be8067017eaa5..29dda60e3bcde 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -183,6 +183,7 @@ struct pl2303_type_data {
+ 	speed_t max_baud_rate;
+ 	unsigned long quirks;
+ 	unsigned int no_autoxonxoff:1;
++	unsigned int no_divisors:1;
+ };
+ 
+ struct pl2303_serial_private {
+@@ -209,6 +210,7 @@ static const struct pl2303_type_data pl2303_type_data[TYPE_COUNT] = {
+ 	},
+ 	[TYPE_HXN] = {
+ 		.max_baud_rate		= 12000000,
++		.no_divisors		= true,
+ 	},
+ };
+ 
+@@ -571,8 +573,12 @@ static void pl2303_encode_baud_rate(struct tty_struct *tty,
+ 		baud = min_t(speed_t, baud, spriv->type->max_baud_rate);
+ 	/*
+ 	 * Use direct method for supported baud rates, otherwise use divisors.
++	 * Newer chip types do not support divisor encoding.
+ 	 */
+-	baud_sup = pl2303_get_supported_baud_rate(baud);
++	if (spriv->type->no_divisors)
++		baud_sup = baud;
++	else
++		baud_sup = pl2303_get_supported_baud_rate(baud);
+ 
+ 	if (baud == baud_sup)
+ 		baud = pl2303_encode_baud_rate_direct(buf, baud);
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index c6529f7c3034a..5a86ede36c1ca 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -1805,7 +1805,7 @@ static void mlx5_vdpa_get_config(struct vdpa_device *vdev, unsigned int offset,
+ 	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
+ 	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
+ 
+-	if (offset + len < sizeof(struct virtio_net_config))
++	if (offset + len <= sizeof(struct virtio_net_config))
+ 		memcpy(buf, (u8 *)&ndev->config + offset, len);
+ }
+ 
+diff --git a/drivers/vfio/pci/vfio_pci_zdev.c b/drivers/vfio/pci/vfio_pci_zdev.c
+index 2296856340311..1bb7edac56899 100644
+--- a/drivers/vfio/pci/vfio_pci_zdev.c
++++ b/drivers/vfio/pci/vfio_pci_zdev.c
+@@ -74,6 +74,8 @@ static int zpci_util_cap(struct zpci_dev *zdev, struct vfio_pci_device *vdev,
+ 	int ret;
+ 
+ 	cap = kmalloc(cap_size, GFP_KERNEL);
++	if (!cap)
++		return -ENOMEM;
+ 
+ 	cap->header.id = VFIO_DEVICE_INFO_CAP_ZPCI_UTIL;
+ 	cap->header.version = 1;
+@@ -98,6 +100,8 @@ static int zpci_pfip_cap(struct zpci_dev *zdev, struct vfio_pci_device *vdev,
+ 	int ret;
+ 
+ 	cap = kmalloc(cap_size, GFP_KERNEL);
++	if (!cap)
++		return -ENOMEM;
+ 
+ 	cap->header.id = VFIO_DEVICE_INFO_CAP_ZPCI_PFIP;
+ 	cap->header.version = 1;
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index 67e8276389951..fbd438e9b9b03 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -24,6 +24,7 @@
+ #include <linux/compat.h>
+ #include <linux/device.h>
+ #include <linux/fs.h>
++#include <linux/highmem.h>
+ #include <linux/iommu.h>
+ #include <linux/module.h>
+ #include <linux/mm.h>
+@@ -236,6 +237,18 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
+ 	}
+ }
+ 
++static void vfio_iommu_populate_bitmap_full(struct vfio_iommu *iommu)
++{
++	struct rb_node *n;
++	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
++
++	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
++		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
++
++		bitmap_set(dma->bitmap, 0, dma->size >> pgshift);
++	}
++}
++
+ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
+ {
+ 	struct rb_node *n;
+@@ -419,9 +432,11 @@ static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
+ 			    unsigned long vaddr, unsigned long *pfn,
+ 			    bool write_fault)
+ {
++	pte_t *ptep;
++	spinlock_t *ptl;
+ 	int ret;
+ 
+-	ret = follow_pfn(vma, vaddr, pfn);
++	ret = follow_pte(vma->vm_mm, vaddr, &ptep, &ptl);
+ 	if (ret) {
+ 		bool unlocked = false;
+ 
+@@ -435,9 +450,17 @@ static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
+ 		if (ret)
+ 			return ret;
+ 
+-		ret = follow_pfn(vma, vaddr, pfn);
++		ret = follow_pte(vma->vm_mm, vaddr, &ptep, &ptl);
++		if (ret)
++			return ret;
+ 	}
+ 
++	if (write_fault && !pte_write(*ptep))
++		ret = -EFAULT;
++	else
++		*pfn = pte_pfn(*ptep);
++
++	pte_unmap_unlock(ptep, ptl);
+ 	return ret;
+ }
+ 
+@@ -945,6 +968,7 @@ static long vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma,
+ 
+ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
+ {
++	WARN_ON(!RB_EMPTY_ROOT(&dma->pfn_list));
+ 	vfio_unmap_unpin(iommu, dma, true);
+ 	vfio_unlink_dma(iommu, dma);
+ 	put_task_struct(dma->task);
+@@ -2238,23 +2262,6 @@ static void vfio_iommu_unmap_unpin_reaccount(struct vfio_iommu *iommu)
+ 	}
+ }
+ 
+-static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu)
+-{
+-	struct rb_node *n;
+-
+-	n = rb_first(&iommu->dma_list);
+-	for (; n; n = rb_next(n)) {
+-		struct vfio_dma *dma;
+-
+-		dma = rb_entry(n, struct vfio_dma, node);
+-
+-		if (WARN_ON(!RB_EMPTY_ROOT(&dma->pfn_list)))
+-			break;
+-	}
+-	/* mdev vendor driver must unregister notifier */
+-	WARN_ON(iommu->notifier.head);
+-}
+-
+ /*
+  * Called when a domain is removed in detach. It is possible that
+  * the removed domain decided the iova aperture window. Modify the
+@@ -2354,10 +2361,10 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
+ 			kfree(group);
+ 
+ 			if (list_empty(&iommu->external_domain->group_list)) {
+-				vfio_sanity_check_pfn_list(iommu);
+-
+-				if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu))
++				if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) {
++					WARN_ON(iommu->notifier.head);
+ 					vfio_iommu_unmap_unpin_all(iommu);
++				}
+ 
+ 				kfree(iommu->external_domain);
+ 				iommu->external_domain = NULL;
+@@ -2391,10 +2398,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
+ 		 */
+ 		if (list_empty(&domain->group_list)) {
+ 			if (list_is_singular(&iommu->domain_list)) {
+-				if (!iommu->external_domain)
++				if (!iommu->external_domain) {
++					WARN_ON(iommu->notifier.head);
+ 					vfio_iommu_unmap_unpin_all(iommu);
+-				else
++				} else {
+ 					vfio_iommu_unmap_unpin_reaccount(iommu);
++				}
+ 			}
+ 			iommu_domain_free(domain->domain);
+ 			list_del(&domain->next);
+@@ -2415,8 +2424,11 @@ detach_group_done:
+ 	 * Removal of a group without dirty tracking may allow the iommu scope
+ 	 * to be promoted.
+ 	 */
+-	if (update_dirty_scope)
++	if (update_dirty_scope) {
+ 		update_pinned_page_dirty_scope(iommu);
++		if (iommu->dirty_page_tracking)
++			vfio_iommu_populate_bitmap_full(iommu);
++	}
+ 	mutex_unlock(&iommu->lock);
+ }
+ 
+@@ -2475,7 +2487,6 @@ static void vfio_iommu_type1_release(void *iommu_data)
+ 
+ 	if (iommu->external_domain) {
+ 		vfio_release_domain(iommu->external_domain, true);
+-		vfio_sanity_check_pfn_list(iommu);
+ 		kfree(iommu->external_domain);
+ 	}
+ 
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index cfb7f5612ef0f..4f02db65dedec 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -1269,6 +1269,7 @@ config FB_ATY
+ 	select FB_CFB_IMAGEBLIT
+ 	select FB_BACKLIGHT if FB_ATY_BACKLIGHT
+ 	select FB_MACMODES if PPC
++	select FB_ATY_CT if SPARC64 && PCI
+ 	help
+ 	  This driver supports graphics boards with the ATI Mach64 chips.
+ 	  Say Y if you have such a graphics board.
+@@ -1279,7 +1280,6 @@ config FB_ATY
+ config FB_ATY_CT
+ 	bool "Mach64 CT/VT/GT/LT (incl. 3D RAGE) support"
+ 	depends on PCI && FB_ATY
+-	default y if SPARC64 && PCI
+ 	help
+ 	  Say Y here to support use of ATI's 64-bit Rage boards (or other
+ 	  boards based on the Mach64 CT, VT, GT, and LT chipsets) as a
+diff --git a/drivers/virt/vboxguest/vboxguest_utils.c b/drivers/virt/vboxguest/vboxguest_utils.c
+index ea05af41ec69e..8d195e3f83012 100644
+--- a/drivers/virt/vboxguest/vboxguest_utils.c
++++ b/drivers/virt/vboxguest/vboxguest_utils.c
+@@ -468,7 +468,7 @@ static int hgcm_cancel_call(struct vbg_dev *gdev, struct vmmdev_hgcm_call *call)
+  *               Cancellation fun.
+  */
+ static int vbg_hgcm_do_call(struct vbg_dev *gdev, struct vmmdev_hgcm_call *call,
+-			    u32 timeout_ms, bool *leak_it)
++			    u32 timeout_ms, bool interruptible, bool *leak_it)
+ {
+ 	int rc, cancel_rc, ret;
+ 	long timeout;
+@@ -495,10 +495,15 @@ static int vbg_hgcm_do_call(struct vbg_dev *gdev, struct vmmdev_hgcm_call *call,
+ 	else
+ 		timeout = msecs_to_jiffies(timeout_ms);
+ 
+-	timeout = wait_event_interruptible_timeout(
+-					gdev->hgcm_wq,
+-					hgcm_req_done(gdev, &call->header),
+-					timeout);
++	if (interruptible) {
++		timeout = wait_event_interruptible_timeout(gdev->hgcm_wq,
++							   hgcm_req_done(gdev, &call->header),
++							   timeout);
++	} else {
++		timeout = wait_event_timeout(gdev->hgcm_wq,
++					     hgcm_req_done(gdev, &call->header),
++					     timeout);
++	}
+ 
+ 	/* timeout > 0 means hgcm_req_done has returned true, so success */
+ 	if (timeout > 0)
+@@ -631,7 +636,8 @@ int vbg_hgcm_call(struct vbg_dev *gdev, u32 requestor, u32 client_id,
+ 	hgcm_call_init_call(call, client_id, function, parms, parm_count,
+ 			    bounce_bufs);
+ 
+-	ret = vbg_hgcm_do_call(gdev, call, timeout_ms, &leak_it);
++	ret = vbg_hgcm_do_call(gdev, call, timeout_ms,
++			       requestor & VMMDEV_REQUESTOR_USERMODE, &leak_it);
+ 	if (ret == 0) {
+ 		*vbox_status = call->header.result;
+ 		ret = hgcm_call_copy_back_result(call, parms, parm_count,
+diff --git a/drivers/w1/slaves/w1_therm.c b/drivers/w1/slaves/w1_therm.c
+index cddf60b7309ca..974d02bb3a45c 100644
+--- a/drivers/w1/slaves/w1_therm.c
++++ b/drivers/w1/slaves/w1_therm.c
+@@ -667,28 +667,24 @@ static inline int w1_DS18B20_get_resolution(struct w1_slave *sl)
+  */
+ static inline int w1_DS18B20_convert_temp(u8 rom[9])
+ {
+-	int t;
+-	u32 bv;
++	u16 bv;
++	s16 t;
++
++	/* Signed 16-bit value to unsigned, cpu order */
++	bv = le16_to_cpup((__le16 *)rom);
+ 
+ 	/* Config register bit R2 = 1 - GX20MH01 in 13 or 14 bit resolution mode */
+ 	if (rom[4] & 0x80) {
+-		/* Signed 16-bit value to unsigned, cpu order */
+-		bv = le16_to_cpup((__le16 *)rom);
+-
+ 		/* Insert two temperature bits from config register */
+ 		/* Avoid arithmetic shift of signed value */
+ 		bv = (bv << 2) | (rom[4] & 3);
+-
+-		t = (int) sign_extend32(bv, 17); /* Degrees, lowest bit is 2^-6 */
+-		return (t*1000)/64;  /* Millidegrees */
++		t = (s16) bv;	/* Degrees, lowest bit is 2^-6 */
++		return (int)t * 1000 / 64;	/* Sign-extend to int; millidegrees */
+ 	}
+-
+-	t = (int)le16_to_cpup((__le16 *)rom);
+-	return t*1000/16;
++	t = (s16)bv;	/* Degrees, lowest bit is 2^-4 */
++	return (int)t * 1000 / 16;	/* Sign-extend to int; millidegrees */
+ }
+ 
+-
+-
+ /**
+  * w1_DS18S20_convert_temp() - temperature computation for DS18S20
+  * @rom: data read from device RAM (8 data bytes + 1 CRC byte)
+diff --git a/drivers/watchdog/intel-mid_wdt.c b/drivers/watchdog/intel-mid_wdt.c
+index 1ae03b64ef8bf..9b2173f765c8c 100644
+--- a/drivers/watchdog/intel-mid_wdt.c
++++ b/drivers/watchdog/intel-mid_wdt.c
+@@ -154,6 +154,10 @@ static int mid_wdt_probe(struct platform_device *pdev)
+ 	watchdog_set_nowayout(wdt_dev, WATCHDOG_NOWAYOUT);
+ 	watchdog_set_drvdata(wdt_dev, mid);
+ 
++	mid->scu = devm_intel_scu_ipc_dev_get(dev);
++	if (!mid->scu)
++		return -EPROBE_DEFER;
++
+ 	ret = devm_request_irq(dev, pdata->irq, mid_wdt_irq,
+ 			       IRQF_SHARED | IRQF_NO_SUSPEND, "watchdog",
+ 			       wdt_dev);
+@@ -162,10 +166,6 @@ static int mid_wdt_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	mid->scu = devm_intel_scu_ipc_dev_get(dev);
+-	if (!mid->scu)
+-		return -EPROBE_DEFER;
+-
+ 	/*
+ 	 * The firmware followed by U-Boot leaves the watchdog running
+ 	 * with the default threshold which may vary. When we get here
+diff --git a/drivers/watchdog/mei_wdt.c b/drivers/watchdog/mei_wdt.c
+index 5391bf3e6b11d..c5967d8b4256a 100644
+--- a/drivers/watchdog/mei_wdt.c
++++ b/drivers/watchdog/mei_wdt.c
+@@ -382,6 +382,7 @@ static int mei_wdt_register(struct mei_wdt *wdt)
+ 
+ 	watchdog_set_drvdata(&wdt->wdd, wdt);
+ 	watchdog_stop_on_reboot(&wdt->wdd);
++	watchdog_stop_on_unregister(&wdt->wdd);
+ 
+ 	ret = watchdog_register_device(&wdt->wdd);
+ 	if (ret)
+diff --git a/drivers/watchdog/qcom-wdt.c b/drivers/watchdog/qcom-wdt.c
+index cdf754233e53d..bdab184215d27 100644
+--- a/drivers/watchdog/qcom-wdt.c
++++ b/drivers/watchdog/qcom-wdt.c
+@@ -22,7 +22,6 @@ enum wdt_reg {
+ };
+ 
+ #define QCOM_WDT_ENABLE		BIT(0)
+-#define QCOM_WDT_ENABLE_IRQ	BIT(1)
+ 
+ static const u32 reg_offset_data_apcs_tmr[] = {
+ 	[WDT_RST] = 0x38,
+@@ -63,16 +62,6 @@ struct qcom_wdt *to_qcom_wdt(struct watchdog_device *wdd)
+ 	return container_of(wdd, struct qcom_wdt, wdd);
+ }
+ 
+-static inline int qcom_get_enable(struct watchdog_device *wdd)
+-{
+-	int enable = QCOM_WDT_ENABLE;
+-
+-	if (wdd->pretimeout)
+-		enable |= QCOM_WDT_ENABLE_IRQ;
+-
+-	return enable;
+-}
+-
+ static irqreturn_t qcom_wdt_isr(int irq, void *arg)
+ {
+ 	struct watchdog_device *wdd = arg;
+@@ -91,7 +80,7 @@ static int qcom_wdt_start(struct watchdog_device *wdd)
+ 	writel(1, wdt_addr(wdt, WDT_RST));
+ 	writel(bark * wdt->rate, wdt_addr(wdt, WDT_BARK_TIME));
+ 	writel(wdd->timeout * wdt->rate, wdt_addr(wdt, WDT_BITE_TIME));
+-	writel(qcom_get_enable(wdd), wdt_addr(wdt, WDT_EN));
++	writel(QCOM_WDT_ENABLE, wdt_addr(wdt, WDT_EN));
+ 	return 0;
+ }
+ 
+diff --git a/fs/affs/namei.c b/fs/affs/namei.c
+index 41c5749f4db78..5400a876d73fb 100644
+--- a/fs/affs/namei.c
++++ b/fs/affs/namei.c
+@@ -460,8 +460,10 @@ affs_xrename(struct inode *old_dir, struct dentry *old_dentry,
+ 		return -EIO;
+ 
+ 	bh_new = affs_bread(sb, d_inode(new_dentry)->i_ino);
+-	if (!bh_new)
++	if (!bh_new) {
++		affs_brelse(bh_old);
+ 		return -EIO;
++	}
+ 
+ 	/* Remove old header from its parent directory. */
+ 	affs_lock_dir(old_dir);
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index 553b4f6ec8639..6e447bdaf9ec8 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -2548,13 +2548,6 @@ void btrfs_backref_cleanup_node(struct btrfs_backref_cache *cache,
+ 		list_del(&edge->list[UPPER]);
+ 		btrfs_backref_free_edge(cache, edge);
+ 
+-		if (RB_EMPTY_NODE(&upper->rb_node)) {
+-			BUG_ON(!list_empty(&node->upper));
+-			btrfs_backref_drop_node(cache, node);
+-			node = upper;
+-			node->lowest = 1;
+-			continue;
+-		}
+ 		/*
+ 		 * Add the node to leaf node list if no other child block
+ 		 * cached.
+@@ -2631,7 +2624,7 @@ static int handle_direct_tree_backref(struct btrfs_backref_cache *cache,
+ 		/* Only reloc backref cache cares about a specific root */
+ 		if (cache->is_reloc) {
+ 			root = find_reloc_root(cache->fs_info, cur->bytenr);
+-			if (WARN_ON(!root))
++			if (!root)
+ 				return -ENOENT;
+ 			cur->root = root;
+ 		} else {
+diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
+index ff705cc564a9a..17abde7f794ce 100644
+--- a/fs/btrfs/backref.h
++++ b/fs/btrfs/backref.h
+@@ -296,6 +296,9 @@ static inline void btrfs_backref_free_node(struct btrfs_backref_cache *cache,
+ 					   struct btrfs_backref_node *node)
+ {
+ 	if (node) {
++		ASSERT(list_empty(&node->list));
++		ASSERT(list_empty(&node->lower));
++		ASSERT(node->eb == NULL);
+ 		cache->nr_nodes--;
+ 		btrfs_put_root(node->root);
+ 		kfree(node);
+@@ -340,11 +343,11 @@ static inline void btrfs_backref_drop_node_buffer(
+ static inline void btrfs_backref_drop_node(struct btrfs_backref_cache *tree,
+ 					   struct btrfs_backref_node *node)
+ {
+-	BUG_ON(!list_empty(&node->upper));
++	ASSERT(list_empty(&node->upper));
+ 
+ 	btrfs_backref_drop_node_buffer(node);
+-	list_del(&node->list);
+-	list_del(&node->lower);
++	list_del_init(&node->list);
++	list_del_init(&node->lower);
+ 	if (!RB_EMPTY_NODE(&node->rb_node))
+ 		rb_erase(&node->rb_node, &tree->rb_root);
+ 	btrfs_backref_free_node(tree, node);
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index a2111eab614f2..9a5d652c1672e 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1450,9 +1450,7 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 		btrfs_space_info_update_bytes_pinned(fs_info, space_info,
+ 						     -block_group->pinned);
+ 		space_info->bytes_readonly += block_group->pinned;
+-		percpu_counter_add_batch(&space_info->total_bytes_pinned,
+-				   -block_group->pinned,
+-				   BTRFS_TOTAL_BYTES_PINNED_BATCH);
++		__btrfs_mod_total_bytes_pinned(space_info, -block_group->pinned);
+ 		block_group->pinned = 0;
+ 
+ 		spin_unlock(&block_group->lock);
+@@ -2582,8 +2580,10 @@ again:
+ 
+ 	if (!path) {
+ 		path = btrfs_alloc_path();
+-		if (!path)
+-			return -ENOMEM;
++		if (!path) {
++			ret = -ENOMEM;
++			goto out;
++		}
+ 	}
+ 
+ 	/*
+@@ -2677,16 +2677,14 @@ again:
+ 			btrfs_put_block_group(cache);
+ 		if (drop_reserve)
+ 			btrfs_delayed_refs_rsv_release(fs_info, 1);
+-
+-		if (ret)
+-			break;
+-
+ 		/*
+ 		 * Avoid blocking other tasks for too long. It might even save
+ 		 * us from writing caches for block groups that are going to be
+ 		 * removed.
+ 		 */
+ 		mutex_unlock(&trans->transaction->cache_write_mutex);
++		if (ret)
++			goto out;
+ 		mutex_lock(&trans->transaction->cache_write_mutex);
+ 	}
+ 	mutex_unlock(&trans->transaction->cache_write_mutex);
+@@ -2710,7 +2708,12 @@ again:
+ 			goto again;
+ 		}
+ 		spin_unlock(&cur_trans->dirty_bgs_lock);
+-	} else if (ret < 0) {
++	}
++out:
++	if (ret < 0) {
++		spin_lock(&cur_trans->dirty_bgs_lock);
++		list_splice_init(&dirty, &cur_trans->dirty_bgs);
++		spin_unlock(&cur_trans->dirty_bgs_lock);
+ 		btrfs_cleanup_dirty_bgs(cur_trans, fs_info);
+ 	}
+ 
+@@ -2914,10 +2917,8 @@ int btrfs_update_block_group(struct btrfs_trans_handle *trans,
+ 			spin_unlock(&cache->lock);
+ 			spin_unlock(&cache->space_info->lock);
+ 
+-			percpu_counter_add_batch(
+-					&cache->space_info->total_bytes_pinned,
+-					num_bytes,
+-					BTRFS_TOTAL_BYTES_PINNED_BATCH);
++			__btrfs_mod_total_bytes_pinned(cache->space_info,
++						       num_bytes);
+ 			set_extent_dirty(&trans->transaction->pinned_extents,
+ 					 bytenr, bytenr + num_bytes - 1,
+ 					 GFP_NOFS | __GFP_NOFAIL);
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 113da62dc17f6..f2f6f65038923 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -221,9 +221,12 @@ int btrfs_copy_root(struct btrfs_trans_handle *trans,
+ 		ret = btrfs_inc_ref(trans, root, cow, 1);
+ 	else
+ 		ret = btrfs_inc_ref(trans, root, cow, 0);
+-
+-	if (ret)
++	if (ret) {
++		btrfs_tree_unlock(cow);
++		free_extent_buffer(cow);
++		btrfs_abort_transaction(trans, ret);
+ 		return ret;
++	}
+ 
+ 	btrfs_mark_buffer_dirty(cow);
+ 	*cow_ret = cow;
+diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
+index 353cc2994d106..30883b9a26d84 100644
+--- a/fs/btrfs/delayed-ref.c
++++ b/fs/btrfs/delayed-ref.c
+@@ -648,12 +648,12 @@ inserted:
+  */
+ static noinline void update_existing_head_ref(struct btrfs_trans_handle *trans,
+ 			 struct btrfs_delayed_ref_head *existing,
+-			 struct btrfs_delayed_ref_head *update,
+-			 int *old_ref_mod_ret)
++			 struct btrfs_delayed_ref_head *update)
+ {
+ 	struct btrfs_delayed_ref_root *delayed_refs =
+ 		&trans->transaction->delayed_refs;
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
++	u64 flags = btrfs_ref_head_to_space_flags(existing);
+ 	int old_ref_mod;
+ 
+ 	BUG_ON(existing->is_data != update->is_data);
+@@ -701,8 +701,6 @@ static noinline void update_existing_head_ref(struct btrfs_trans_handle *trans,
+ 	 * currently, for refs we just added we know we're a-ok.
+ 	 */
+ 	old_ref_mod = existing->total_ref_mod;
+-	if (old_ref_mod_ret)
+-		*old_ref_mod_ret = old_ref_mod;
+ 	existing->ref_mod += update->ref_mod;
+ 	existing->total_ref_mod += update->ref_mod;
+ 
+@@ -724,6 +722,27 @@ static noinline void update_existing_head_ref(struct btrfs_trans_handle *trans,
+ 			trans->delayed_ref_updates += csum_leaves;
+ 		}
+ 	}
++
++	/*
++	 * This handles the following conditions:
++	 *
++	 * 1. We had a ref mod of 0 or more and went negative, indicating that
++	 *    we may be freeing space, so add our space to the
++	 *    total_bytes_pinned counter.
++	 * 2. We were negative and went to 0 or positive, so no longer can say
++	 *    that the space would be pinned, decrement our counter from the
++	 *    total_bytes_pinned counter.
++	 * 3. We are now at 0 and have ->must_insert_reserved set, which means
++	 *    this was a new allocation and then we dropped it, and thus must
++	 *    add our space to the total_bytes_pinned counter.
++	 */
++	if (existing->total_ref_mod < 0 && old_ref_mod >= 0)
++		btrfs_mod_total_bytes_pinned(fs_info, flags, existing->num_bytes);
++	else if (existing->total_ref_mod >= 0 && old_ref_mod < 0)
++		btrfs_mod_total_bytes_pinned(fs_info, flags, -existing->num_bytes);
++	else if (existing->total_ref_mod == 0 && existing->must_insert_reserved)
++		btrfs_mod_total_bytes_pinned(fs_info, flags, existing->num_bytes);
++
+ 	spin_unlock(&existing->lock);
+ }
+ 
+@@ -798,8 +817,7 @@ static noinline struct btrfs_delayed_ref_head *
+ add_delayed_ref_head(struct btrfs_trans_handle *trans,
+ 		     struct btrfs_delayed_ref_head *head_ref,
+ 		     struct btrfs_qgroup_extent_record *qrecord,
+-		     int action, int *qrecord_inserted_ret,
+-		     int *old_ref_mod, int *new_ref_mod)
++		     int action, int *qrecord_inserted_ret)
+ {
+ 	struct btrfs_delayed_ref_head *existing;
+ 	struct btrfs_delayed_ref_root *delayed_refs;
+@@ -821,8 +839,7 @@ add_delayed_ref_head(struct btrfs_trans_handle *trans,
+ 	existing = htree_insert(&delayed_refs->href_root,
+ 				&head_ref->href_node);
+ 	if (existing) {
+-		update_existing_head_ref(trans, existing, head_ref,
+-					 old_ref_mod);
++		update_existing_head_ref(trans, existing, head_ref);
+ 		/*
+ 		 * we've updated the existing ref, free the newly
+ 		 * allocated ref
+@@ -830,14 +847,17 @@ add_delayed_ref_head(struct btrfs_trans_handle *trans,
+ 		kmem_cache_free(btrfs_delayed_ref_head_cachep, head_ref);
+ 		head_ref = existing;
+ 	} else {
+-		if (old_ref_mod)
+-			*old_ref_mod = 0;
++		u64 flags = btrfs_ref_head_to_space_flags(head_ref);
++
+ 		if (head_ref->is_data && head_ref->ref_mod < 0) {
+ 			delayed_refs->pending_csums += head_ref->num_bytes;
+ 			trans->delayed_ref_updates +=
+ 				btrfs_csum_bytes_to_leaves(trans->fs_info,
+ 							   head_ref->num_bytes);
+ 		}
++		if (head_ref->ref_mod < 0)
++			btrfs_mod_total_bytes_pinned(trans->fs_info, flags,
++						     head_ref->num_bytes);
+ 		delayed_refs->num_heads++;
+ 		delayed_refs->num_heads_ready++;
+ 		atomic_inc(&delayed_refs->num_entries);
+@@ -845,8 +865,6 @@ add_delayed_ref_head(struct btrfs_trans_handle *trans,
+ 	}
+ 	if (qrecord_inserted_ret)
+ 		*qrecord_inserted_ret = qrecord_inserted;
+-	if (new_ref_mod)
+-		*new_ref_mod = head_ref->total_ref_mod;
+ 
+ 	return head_ref;
+ }
+@@ -909,8 +927,7 @@ static void init_delayed_ref_common(struct btrfs_fs_info *fs_info,
+  */
+ int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans,
+ 			       struct btrfs_ref *generic_ref,
+-			       struct btrfs_delayed_extent_op *extent_op,
+-			       int *old_ref_mod, int *new_ref_mod)
++			       struct btrfs_delayed_extent_op *extent_op)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+ 	struct btrfs_delayed_tree_ref *ref;
+@@ -977,8 +994,7 @@ int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans,
+ 	 * the spin lock
+ 	 */
+ 	head_ref = add_delayed_ref_head(trans, head_ref, record,
+-					action, &qrecord_inserted,
+-					old_ref_mod, new_ref_mod);
++					action, &qrecord_inserted);
+ 
+ 	ret = insert_delayed_ref(trans, delayed_refs, head_ref, &ref->node);
+ 	spin_unlock(&delayed_refs->lock);
+@@ -1006,8 +1022,7 @@ int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans,
+  */
+ int btrfs_add_delayed_data_ref(struct btrfs_trans_handle *trans,
+ 			       struct btrfs_ref *generic_ref,
+-			       u64 reserved, int *old_ref_mod,
+-			       int *new_ref_mod)
++			       u64 reserved)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+ 	struct btrfs_delayed_data_ref *ref;
+@@ -1073,8 +1088,7 @@ int btrfs_add_delayed_data_ref(struct btrfs_trans_handle *trans,
+ 	 * the spin lock
+ 	 */
+ 	head_ref = add_delayed_ref_head(trans, head_ref, record,
+-					action, &qrecord_inserted,
+-					old_ref_mod, new_ref_mod);
++					action, &qrecord_inserted);
+ 
+ 	ret = insert_delayed_ref(trans, delayed_refs, head_ref, &ref->node);
+ 	spin_unlock(&delayed_refs->lock);
+@@ -1117,7 +1131,7 @@ int btrfs_add_delayed_extent_op(struct btrfs_trans_handle *trans,
+ 	spin_lock(&delayed_refs->lock);
+ 
+ 	add_delayed_ref_head(trans, head_ref, NULL, BTRFS_UPDATE_DELAYED_HEAD,
+-			     NULL, NULL, NULL);
++			     NULL);
+ 
+ 	spin_unlock(&delayed_refs->lock);
+ 
+diff --git a/fs/btrfs/delayed-ref.h b/fs/btrfs/delayed-ref.h
+index 1c977e6d45dc3..3ba140468f126 100644
+--- a/fs/btrfs/delayed-ref.h
++++ b/fs/btrfs/delayed-ref.h
+@@ -326,6 +326,16 @@ static inline void btrfs_put_delayed_ref(struct btrfs_delayed_ref_node *ref)
+ 	}
+ }
+ 
++static inline u64 btrfs_ref_head_to_space_flags(
++				struct btrfs_delayed_ref_head *head_ref)
++{
++	if (head_ref->is_data)
++		return BTRFS_BLOCK_GROUP_DATA;
++	else if (head_ref->is_system)
++		return BTRFS_BLOCK_GROUP_SYSTEM;
++	return BTRFS_BLOCK_GROUP_METADATA;
++}
++
+ static inline void btrfs_put_delayed_ref_head(struct btrfs_delayed_ref_head *head)
+ {
+ 	if (refcount_dec_and_test(&head->refs))
+@@ -334,12 +344,10 @@ static inline void btrfs_put_delayed_ref_head(struct btrfs_delayed_ref_head *hea
+ 
+ int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans,
+ 			       struct btrfs_ref *generic_ref,
+-			       struct btrfs_delayed_extent_op *extent_op,
+-			       int *old_ref_mod, int *new_ref_mod);
++			       struct btrfs_delayed_extent_op *extent_op);
+ int btrfs_add_delayed_data_ref(struct btrfs_trans_handle *trans,
+ 			       struct btrfs_ref *generic_ref,
+-			       u64 reserved, int *old_ref_mod,
+-			       int *new_ref_mod);
++			       u64 reserved);
+ int btrfs_add_delayed_extent_op(struct btrfs_trans_handle *trans,
+ 				u64 bytenr, u64 num_bytes,
+ 				struct btrfs_delayed_extent_op *extent_op);
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 8fba1c219b190..51c18da4792ec 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -82,41 +82,6 @@ void btrfs_free_excluded_extents(struct btrfs_block_group *cache)
+ 			  EXTENT_UPTODATE);
+ }
+ 
+-static u64 generic_ref_to_space_flags(struct btrfs_ref *ref)
+-{
+-	if (ref->type == BTRFS_REF_METADATA) {
+-		if (ref->tree_ref.root == BTRFS_CHUNK_TREE_OBJECTID)
+-			return BTRFS_BLOCK_GROUP_SYSTEM;
+-		else
+-			return BTRFS_BLOCK_GROUP_METADATA;
+-	}
+-	return BTRFS_BLOCK_GROUP_DATA;
+-}
+-
+-static void add_pinned_bytes(struct btrfs_fs_info *fs_info,
+-			     struct btrfs_ref *ref)
+-{
+-	struct btrfs_space_info *space_info;
+-	u64 flags = generic_ref_to_space_flags(ref);
+-
+-	space_info = btrfs_find_space_info(fs_info, flags);
+-	ASSERT(space_info);
+-	percpu_counter_add_batch(&space_info->total_bytes_pinned, ref->len,
+-		    BTRFS_TOTAL_BYTES_PINNED_BATCH);
+-}
+-
+-static void sub_pinned_bytes(struct btrfs_fs_info *fs_info,
+-			     struct btrfs_ref *ref)
+-{
+-	struct btrfs_space_info *space_info;
+-	u64 flags = generic_ref_to_space_flags(ref);
+-
+-	space_info = btrfs_find_space_info(fs_info, flags);
+-	ASSERT(space_info);
+-	percpu_counter_add_batch(&space_info->total_bytes_pinned, -ref->len,
+-		    BTRFS_TOTAL_BYTES_PINNED_BATCH);
+-}
+-
+ /* simple helper to search for an existing data extent at a given offset */
+ int btrfs_lookup_data_extent(struct btrfs_fs_info *fs_info, u64 start, u64 len)
+ {
+@@ -1386,7 +1351,6 @@ int btrfs_inc_extent_ref(struct btrfs_trans_handle *trans,
+ 			 struct btrfs_ref *generic_ref)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+-	int old_ref_mod, new_ref_mod;
+ 	int ret;
+ 
+ 	ASSERT(generic_ref->type != BTRFS_REF_NOT_SET &&
+@@ -1395,17 +1359,12 @@ int btrfs_inc_extent_ref(struct btrfs_trans_handle *trans,
+ 	       generic_ref->tree_ref.root == BTRFS_TREE_LOG_OBJECTID);
+ 
+ 	if (generic_ref->type == BTRFS_REF_METADATA)
+-		ret = btrfs_add_delayed_tree_ref(trans, generic_ref,
+-				NULL, &old_ref_mod, &new_ref_mod);
++		ret = btrfs_add_delayed_tree_ref(trans, generic_ref, NULL);
+ 	else
+-		ret = btrfs_add_delayed_data_ref(trans, generic_ref, 0,
+-						 &old_ref_mod, &new_ref_mod);
++		ret = btrfs_add_delayed_data_ref(trans, generic_ref, 0);
+ 
+ 	btrfs_ref_tree_mod(fs_info, generic_ref);
+ 
+-	if (ret == 0 && old_ref_mod < 0 && new_ref_mod >= 0)
+-		sub_pinned_bytes(fs_info, generic_ref);
+-
+ 	return ret;
+ }
+ 
+@@ -1796,34 +1755,28 @@ void btrfs_cleanup_ref_head_accounting(struct btrfs_fs_info *fs_info,
+ {
+ 	int nr_items = 1;	/* Dropping this ref head update. */
+ 
+-	if (head->total_ref_mod < 0) {
+-		struct btrfs_space_info *space_info;
+-		u64 flags;
++	/*
++	 * We had csum deletions accounted for in our delayed refs rsv, we need
++	 * to drop the csum leaves for this update from our delayed_refs_rsv.
++	 */
++	if (head->total_ref_mod < 0 && head->is_data) {
++		spin_lock(&delayed_refs->lock);
++		delayed_refs->pending_csums -= head->num_bytes;
++		spin_unlock(&delayed_refs->lock);
++		nr_items += btrfs_csum_bytes_to_leaves(fs_info, head->num_bytes);
++	}
+ 
+-		if (head->is_data)
+-			flags = BTRFS_BLOCK_GROUP_DATA;
+-		else if (head->is_system)
+-			flags = BTRFS_BLOCK_GROUP_SYSTEM;
+-		else
+-			flags = BTRFS_BLOCK_GROUP_METADATA;
+-		space_info = btrfs_find_space_info(fs_info, flags);
+-		ASSERT(space_info);
+-		percpu_counter_add_batch(&space_info->total_bytes_pinned,
+-				   -head->num_bytes,
+-				   BTRFS_TOTAL_BYTES_PINNED_BATCH);
++	/*
++	 * We were dropping refs, or had a new ref and dropped it, and thus must
++	 * adjust down our total_bytes_pinned, the space may or may not have
++	 * been pinned and so is accounted for properly in the pinned space by
++	 * now.
++	 */
++	if (head->total_ref_mod < 0 ||
++	    (head->total_ref_mod == 0 && head->must_insert_reserved)) {
++		u64 flags = btrfs_ref_head_to_space_flags(head);
+ 
+-		/*
+-		 * We had csum deletions accounted for in our delayed refs rsv,
+-		 * we need to drop the csum leaves for this update from our
+-		 * delayed_refs_rsv.
+-		 */
+-		if (head->is_data) {
+-			spin_lock(&delayed_refs->lock);
+-			delayed_refs->pending_csums -= head->num_bytes;
+-			spin_unlock(&delayed_refs->lock);
+-			nr_items += btrfs_csum_bytes_to_leaves(fs_info,
+-				head->num_bytes);
+-		}
++		btrfs_mod_total_bytes_pinned(fs_info, flags, -head->num_bytes);
+ 	}
+ 
+ 	btrfs_delayed_refs_rsv_release(fs_info, nr_items);
+@@ -2592,8 +2545,7 @@ static int pin_down_extent(struct btrfs_trans_handle *trans,
+ 	spin_unlock(&cache->lock);
+ 	spin_unlock(&cache->space_info->lock);
+ 
+-	percpu_counter_add_batch(&cache->space_info->total_bytes_pinned,
+-		    num_bytes, BTRFS_TOTAL_BYTES_PINNED_BATCH);
++	__btrfs_mod_total_bytes_pinned(cache->space_info, num_bytes);
+ 	set_extent_dirty(&trans->transaction->pinned_extents, bytenr,
+ 			 bytenr + num_bytes - 1, GFP_NOFS | __GFP_NOFAIL);
+ 	return 0;
+@@ -2819,8 +2771,7 @@ static int unpin_extent_range(struct btrfs_fs_info *fs_info,
+ 		cache->pinned -= len;
+ 		btrfs_space_info_update_bytes_pinned(fs_info, space_info, -len);
+ 		space_info->max_extent_size = 0;
+-		percpu_counter_add_batch(&space_info->total_bytes_pinned,
+-			    -len, BTRFS_TOTAL_BYTES_PINNED_BATCH);
++		__btrfs_mod_total_bytes_pinned(space_info, -len);
+ 		if (cache->ro) {
+ 			space_info->bytes_readonly += len;
+ 			readonly = true;
+@@ -3359,7 +3310,6 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
+ {
+ 	struct btrfs_fs_info *fs_info = root->fs_info;
+ 	struct btrfs_ref generic_ref = { 0 };
+-	int pin = 1;
+ 	int ret;
+ 
+ 	btrfs_init_generic_ref(&generic_ref, BTRFS_DROP_DELAYED_REF,
+@@ -3368,13 +3318,9 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
+ 			    root->root_key.objectid);
+ 
+ 	if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID) {
+-		int old_ref_mod, new_ref_mod;
+-
+ 		btrfs_ref_tree_mod(fs_info, &generic_ref);
+-		ret = btrfs_add_delayed_tree_ref(trans, &generic_ref, NULL,
+-						 &old_ref_mod, &new_ref_mod);
++		ret = btrfs_add_delayed_tree_ref(trans, &generic_ref, NULL);
+ 		BUG_ON(ret); /* -ENOMEM */
+-		pin = old_ref_mod >= 0 && new_ref_mod < 0;
+ 	}
+ 
+ 	if (last_ref && btrfs_header_generation(buf) == trans->transid) {
+@@ -3386,7 +3332,6 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
+ 				goto out;
+ 		}
+ 
+-		pin = 0;
+ 		cache = btrfs_lookup_block_group(fs_info, buf->start);
+ 
+ 		if (btrfs_header_flag(buf, BTRFS_HEADER_FLAG_WRITTEN)) {
+@@ -3403,9 +3348,6 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
+ 		trace_btrfs_reserved_extent_free(fs_info, buf->start, buf->len);
+ 	}
+ out:
+-	if (pin)
+-		add_pinned_bytes(fs_info, &generic_ref);
+-
+ 	if (last_ref) {
+ 		/*
+ 		 * Deleting the buffer, clear the corrupt flag since it doesn't
+@@ -3419,7 +3361,6 @@ out:
+ int btrfs_free_extent(struct btrfs_trans_handle *trans, struct btrfs_ref *ref)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+-	int old_ref_mod, new_ref_mod;
+ 	int ret;
+ 
+ 	if (btrfs_is_testing(fs_info))
+@@ -3435,14 +3376,11 @@ int btrfs_free_extent(struct btrfs_trans_handle *trans, struct btrfs_ref *ref)
+ 	     ref->data_ref.ref_root == BTRFS_TREE_LOG_OBJECTID)) {
+ 		/* unlocks the pinned mutex */
+ 		btrfs_pin_extent(trans, ref->bytenr, ref->len, 1);
+-		old_ref_mod = new_ref_mod = 0;
+ 		ret = 0;
+ 	} else if (ref->type == BTRFS_REF_METADATA) {
+-		ret = btrfs_add_delayed_tree_ref(trans, ref, NULL,
+-						 &old_ref_mod, &new_ref_mod);
++		ret = btrfs_add_delayed_tree_ref(trans, ref, NULL);
+ 	} else {
+-		ret = btrfs_add_delayed_data_ref(trans, ref, 0,
+-						 &old_ref_mod, &new_ref_mod);
++		ret = btrfs_add_delayed_data_ref(trans, ref, 0);
+ 	}
+ 
+ 	if (!((ref->type == BTRFS_REF_METADATA &&
+@@ -3451,9 +3389,6 @@ int btrfs_free_extent(struct btrfs_trans_handle *trans, struct btrfs_ref *ref)
+ 	       ref->data_ref.ref_root == BTRFS_TREE_LOG_OBJECTID)))
+ 		btrfs_ref_tree_mod(fs_info, ref);
+ 
+-	if (ret == 0 && old_ref_mod >= 0 && new_ref_mod < 0)
+-		add_pinned_bytes(fs_info, ref);
+-
+ 	return ret;
+ }
+ 
+@@ -4571,7 +4506,6 @@ int btrfs_alloc_reserved_file_extent(struct btrfs_trans_handle *trans,
+ 				     struct btrfs_key *ins)
+ {
+ 	struct btrfs_ref generic_ref = { 0 };
+-	int ret;
+ 
+ 	BUG_ON(root->root_key.objectid == BTRFS_TREE_LOG_OBJECTID);
+ 
+@@ -4579,9 +4513,8 @@ int btrfs_alloc_reserved_file_extent(struct btrfs_trans_handle *trans,
+ 			       ins->objectid, ins->offset, 0);
+ 	btrfs_init_data_ref(&generic_ref, root->root_key.objectid, owner, offset);
+ 	btrfs_ref_tree_mod(root->fs_info, &generic_ref);
+-	ret = btrfs_add_delayed_data_ref(trans, &generic_ref,
+-					 ram_bytes, NULL, NULL);
+-	return ret;
++
++	return btrfs_add_delayed_data_ref(trans, &generic_ref, ram_bytes);
+ }
+ 
+ /*
+@@ -4769,8 +4702,7 @@ struct extent_buffer *btrfs_alloc_tree_block(struct btrfs_trans_handle *trans,
+ 		generic_ref.real_root = root->root_key.objectid;
+ 		btrfs_init_tree_ref(&generic_ref, level, root_objectid);
+ 		btrfs_ref_tree_mod(fs_info, &generic_ref);
+-		ret = btrfs_add_delayed_tree_ref(trans, &generic_ref,
+-						 extent_op, NULL, NULL);
++		ret = btrfs_add_delayed_tree_ref(trans, &generic_ref, extent_op);
+ 		if (ret)
+ 			goto out_free_delayed;
+ 	}
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index af0013d3df63f..ae4059ce2f84c 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -744,8 +744,10 @@ static int __load_free_space_cache(struct btrfs_root *root, struct inode *inode,
+ 	while (num_entries) {
+ 		e = kmem_cache_zalloc(btrfs_free_space_cachep,
+ 				      GFP_NOFS);
+-		if (!e)
++		if (!e) {
++			ret = -ENOMEM;
+ 			goto free_cache;
++		}
+ 
+ 		ret = io_ctl_read_entry(&io_ctl, e, &type);
+ 		if (ret) {
+@@ -764,6 +766,7 @@ static int __load_free_space_cache(struct btrfs_root *root, struct inode *inode,
+ 			e->trim_state = BTRFS_TRIM_STATE_TRIMMED;
+ 
+ 		if (!e->bytes) {
++			ret = -1;
+ 			kmem_cache_free(btrfs_free_space_cachep, e);
+ 			goto free_cache;
+ 		}
+@@ -784,6 +787,7 @@ static int __load_free_space_cache(struct btrfs_root *root, struct inode *inode,
+ 			e->bitmap = kmem_cache_zalloc(
+ 					btrfs_free_space_bitmap_cachep, GFP_NOFS);
+ 			if (!e->bitmap) {
++				ret = -ENOMEM;
+ 				kmem_cache_free(
+ 					btrfs_free_space_cachep, e);
+ 				goto free_cache;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index b536d21541a9f..4d85f3a6695d1 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -8207,8 +8207,9 @@ static void btrfs_invalidatepage(struct page *page, unsigned int offset,
+ 
+ 	if (!inode_evicting)
+ 		lock_extent_bits(tree, page_start, page_end, &cached_state);
+-again:
++
+ 	start = page_start;
++again:
+ 	ordered = btrfs_lookup_ordered_range(inode, start, page_end - start + 1);
+ 	if (ordered) {
+ 		end = min(page_end,
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 108e93ff6cb6f..6a44d8f5e12e2 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -669,9 +669,7 @@ static void __del_reloc_root(struct btrfs_root *root)
+ 			RB_CLEAR_NODE(&node->rb_node);
+ 		}
+ 		spin_unlock(&rc->reloc_root_tree.lock);
+-		if (!node)
+-			return;
+-		BUG_ON((struct btrfs_root *)node->data != root);
++		ASSERT(!node || (struct btrfs_root *)node->data == root);
+ 	}
+ 
+ 	/*
+diff --git a/fs/btrfs/space-info.h b/fs/btrfs/space-info.h
+index 5646393b928c9..74706f604bce1 100644
+--- a/fs/btrfs/space-info.h
++++ b/fs/btrfs/space-info.h
+@@ -152,4 +152,21 @@ static inline void btrfs_space_info_free_bytes_may_use(
+ int btrfs_reserve_data_bytes(struct btrfs_fs_info *fs_info, u64 bytes,
+ 			     enum btrfs_reserve_flush_enum flush);
+ 
++static inline void __btrfs_mod_total_bytes_pinned(
++					struct btrfs_space_info *space_info,
++					s64 mod)
++{
++	percpu_counter_add_batch(&space_info->total_bytes_pinned, mod,
++				 BTRFS_TOTAL_BYTES_PINNED_BATCH);
++}
++
++static inline void btrfs_mod_total_bytes_pinned(struct btrfs_fs_info *fs_info,
++						u64 flags, s64 mod)
++{
++	struct btrfs_space_info *space_info = btrfs_find_space_info(fs_info, flags);
++
++	ASSERT(space_info);
++	__btrfs_mod_total_bytes_pinned(space_info, mod);
++}
++
+ #endif /* BTRFS_SPACE_INFO_H */
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 2b200b5a44c3a..576d01275bbd7 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -3092,10 +3092,12 @@ static void __ceph_put_cap_refs(struct ceph_inode_info *ci, int had,
+ 	dout("put_cap_refs %p had %s%s%s\n", inode, ceph_cap_string(had),
+ 	     last ? " last" : "", put ? " put" : "");
+ 
+-	if (last && !skip_checking_caps)
+-		ceph_check_caps(ci, 0, NULL);
+-	else if (flushsnaps)
+-		ceph_flush_snaps(ci, NULL);
++	if (!skip_checking_caps) {
++		if (last)
++			ceph_check_caps(ci, 0, NULL);
++		else if (flushsnaps)
++			ceph_flush_snaps(ci, NULL);
++	}
+ 	if (wake)
+ 		wake_up_all(&ci->i_cap_wq);
+ 	while (put-- > 0)
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 2fcf66473436b..86c7f04896207 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -297,7 +297,7 @@ struct dentry *debugfs_lookup(const char *name, struct dentry *parent)
+ {
+ 	struct dentry *dentry;
+ 
+-	if (IS_ERR(parent))
++	if (!debugfs_initialized() || IS_ERR_OR_NULL(name) || IS_ERR(parent))
+ 		return NULL;
+ 
+ 	if (!parent)
+@@ -318,6 +318,9 @@ static struct dentry *start_creating(const char *name, struct dentry *parent)
+ 	if (!(debugfs_allow & DEBUGFS_ALLOW_API))
+ 		return ERR_PTR(-EPERM);
+ 
++	if (!debugfs_initialized())
++		return ERR_PTR(-ENOENT);
++
+ 	pr_debug("creating file '%s'\n", name);
+ 
+ 	if (IS_ERR(parent))
+diff --git a/fs/erofs/xattr.c b/fs/erofs/xattr.c
+index 5bde77d708524..47314a26767a8 100644
+--- a/fs/erofs/xattr.c
++++ b/fs/erofs/xattr.c
+@@ -48,8 +48,14 @@ static int init_inode_xattrs(struct inode *inode)
+ 	int ret = 0;
+ 
+ 	/* the most case is that xattrs of this inode are initialized. */
+-	if (test_bit(EROFS_I_EA_INITED_BIT, &vi->flags))
++	if (test_bit(EROFS_I_EA_INITED_BIT, &vi->flags)) {
++		/*
++		 * paired with smp_mb() at the end of the function to ensure
++		 * fields will only be observed after the bit is set.
++		 */
++		smp_mb();
+ 		return 0;
++	}
+ 
+ 	if (wait_on_bit_lock(&vi->flags, EROFS_I_BL_XATTR_BIT, TASK_KILLABLE))
+ 		return -ERESTARTSYS;
+@@ -137,6 +143,8 @@ static int init_inode_xattrs(struct inode *inode)
+ 	}
+ 	xattr_iter_end(&it, atomic_map);
+ 
++	/* paired with smp_mb() at the beginning of the function. */
++	smp_mb();
+ 	set_bit(EROFS_I_EA_INITED_BIT, &vi->flags);
+ 
+ out_unlock:
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index ae325541884e3..14d2de35110cc 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -36,8 +36,14 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
+ 	void *kaddr;
+ 	struct z_erofs_map_header *h;
+ 
+-	if (test_bit(EROFS_I_Z_INITED_BIT, &vi->flags))
++	if (test_bit(EROFS_I_Z_INITED_BIT, &vi->flags)) {
++		/*
++		 * paired with smp_mb() at the end of the function to ensure
++		 * fields will only be observed after the bit is set.
++		 */
++		smp_mb();
+ 		return 0;
++	}
+ 
+ 	if (wait_on_bit_lock(&vi->flags, EROFS_I_BL_Z_BIT, TASK_KILLABLE))
+ 		return -ERESTARTSYS;
+@@ -83,6 +89,8 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
+ 
+ 	vi->z_physical_clusterbits[1] = vi->z_logical_clusterbits +
+ 					((h->h_clusterbits >> 5) & 7);
++	/* paired with smp_mb() at the beginning of the function */
++	smp_mb();
+ 	set_bit(EROFS_I_Z_INITED_BIT, &vi->flags);
+ unmap_done:
+ 	kunmap_atomic(kaddr);
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 117b1c395ae4a..2d5b172f490e0 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1062,7 +1062,7 @@ static struct epitem *ep_find(struct eventpoll *ep, struct file *file, int fd)
+ 	return epir;
+ }
+ 
+-#ifdef CONFIG_CHECKPOINT_RESTORE
++#ifdef CONFIG_KCMP
+ static struct epitem *ep_find_tfd(struct eventpoll *ep, int tfd, unsigned long toff)
+ {
+ 	struct rb_node *rbp;
+@@ -1104,7 +1104,7 @@ struct file *get_epoll_tfile_raw_ptr(struct file *file, int tfd,
+ 
+ 	return file_raw;
+ }
+-#endif /* CONFIG_CHECKPOINT_RESTORE */
++#endif /* CONFIG_KCMP */
+ 
+ /**
+  * Adds a new entry to the tail of the list in a lockless way, i.e.
+diff --git a/fs/exfat/exfat_raw.h b/fs/exfat/exfat_raw.h
+index 6aec6288e1f21..7f39b1c6469c4 100644
+--- a/fs/exfat/exfat_raw.h
++++ b/fs/exfat/exfat_raw.h
+@@ -77,6 +77,10 @@
+ 
+ #define EXFAT_FILE_NAME_LEN		15
+ 
++#define EXFAT_MIN_SECT_SIZE_BITS		9
++#define EXFAT_MAX_SECT_SIZE_BITS		12
++#define EXFAT_MAX_SECT_PER_CLUS_BITS(x)		(25 - (x)->sect_size_bits)
++
+ /* EXFAT: Main and Backup Boot Sector (512 bytes) */
+ struct boot_sector {
+ 	__u8	jmp_boot[BOOTSEC_JUMP_BOOT_LEN];
+diff --git a/fs/exfat/super.c b/fs/exfat/super.c
+index 87be5bfc31eb4..c6d8d2e534865 100644
+--- a/fs/exfat/super.c
++++ b/fs/exfat/super.c
+@@ -381,8 +381,7 @@ static int exfat_calibrate_blocksize(struct super_block *sb, int logical_sect)
+ {
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+ 
+-	if (!is_power_of_2(logical_sect) ||
+-	    logical_sect < 512 || logical_sect > 4096) {
++	if (!is_power_of_2(logical_sect)) {
+ 		exfat_err(sb, "bogus logical sector size %u", logical_sect);
+ 		return -EIO;
+ 	}
+@@ -451,6 +450,25 @@ static int exfat_read_boot_sector(struct super_block *sb)
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * sect_size_bits could be at least 9 and at most 12.
++	 */
++	if (p_boot->sect_size_bits < EXFAT_MIN_SECT_SIZE_BITS ||
++	    p_boot->sect_size_bits > EXFAT_MAX_SECT_SIZE_BITS) {
++		exfat_err(sb, "bogus sector size bits : %u\n",
++				p_boot->sect_size_bits);
++		return -EINVAL;
++	}
++
++	/*
++	 * sect_per_clus_bits could be at least 0 and at most 25 - sect_size_bits.
++	 */
++	if (p_boot->sect_per_clus_bits > EXFAT_MAX_SECT_PER_CLUS_BITS(p_boot)) {
++		exfat_err(sb, "bogus sectors bits per cluster : %u\n",
++				p_boot->sect_per_clus_bits);
++		return -EINVAL;
++	}
++
+ 	sbi->sect_per_clus = 1 << p_boot->sect_per_clus_bits;
+ 	sbi->sect_per_clus_bits = p_boot->sect_per_clus_bits;
+ 	sbi->cluster_size_bits = p_boot->sect_per_clus_bits +
+@@ -477,16 +495,19 @@ static int exfat_read_boot_sector(struct super_block *sb)
+ 	sbi->used_clusters = EXFAT_CLUSTERS_UNTRACKED;
+ 
+ 	/* check consistencies */
+-	if (sbi->num_FAT_sectors << p_boot->sect_size_bits <
+-	    sbi->num_clusters * 4) {
++	if ((u64)sbi->num_FAT_sectors << p_boot->sect_size_bits <
++	    (u64)sbi->num_clusters * 4) {
+ 		exfat_err(sb, "bogus fat length");
+ 		return -EINVAL;
+ 	}
++
+ 	if (sbi->data_start_sector <
+-	    sbi->FAT1_start_sector + sbi->num_FAT_sectors * p_boot->num_fats) {
++	    (u64)sbi->FAT1_start_sector +
++	    (u64)sbi->num_FAT_sectors * p_boot->num_fats) {
+ 		exfat_err(sb, "bogus data start sector");
+ 		return -EINVAL;
+ 	}
++
+ 	if (sbi->vol_flags & VOLUME_DIRTY)
+ 		exfat_warn(sb, "Volume was not properly unmounted. Some data may be corrupt. Please run fsck.");
+ 	if (sbi->vol_flags & MEDIA_FAILURE)
+diff --git a/fs/ext4/Kconfig b/fs/ext4/Kconfig
+index 619dd35ddd48a..86699c8cab281 100644
+--- a/fs/ext4/Kconfig
++++ b/fs/ext4/Kconfig
+@@ -103,8 +103,7 @@ config EXT4_DEBUG
+ 
+ config EXT4_KUNIT_TESTS
+ 	tristate "KUnit tests for ext4" if !KUNIT_ALL_TESTS
+-	select EXT4_FS
+-	depends on KUNIT
++	depends on EXT4_FS && KUNIT
+ 	default KUNIT_ALL_TESTS
+ 	help
+ 	  This builds the ext4 KUnit tests.
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index df0886e08a772..14783f7dcbe98 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2410,11 +2410,10 @@ again:
+ 						   (frame - 1)->bh);
+ 			if (err)
+ 				goto journal_error;
+-			if (restart) {
+-				err = ext4_handle_dirty_dx_node(handle, dir,
+-							   frame->bh);
++			err = ext4_handle_dirty_dx_node(handle, dir,
++							frame->bh);
++			if (err)
+ 				goto journal_error;
+-			}
+ 		} else {
+ 			struct dx_root *dxroot;
+ 			memcpy((char *) entries2, (char *) entries,
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index c5fee4d7ea72f..d3f407ba64c9e 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1393,7 +1393,7 @@ retry_write:
+ 
+ 		ret = f2fs_write_single_data_page(cc->rpages[i], &_submitted,
+ 						NULL, NULL, wbc, io_type,
+-						compr_blocks);
++						compr_blocks, false);
+ 		if (ret) {
+ 			if (ret == AOP_WRITEPAGE_ACTIVATE) {
+ 				unlock_page(cc->rpages[i]);
+@@ -1428,6 +1428,9 @@ retry_write:
+ 
+ 		*submitted += _submitted;
+ 	}
++
++	f2fs_balance_fs(F2FS_M_SB(mapping), true);
++
+ 	return 0;
+ out_err:
+ 	for (++i; i < cc->cluster_size; i++) {
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index b29243ee1c3e5..901bd1d963ee8 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -499,7 +499,7 @@ static inline void __submit_bio(struct f2fs_sb_info *sbi,
+ 		if (f2fs_lfs_mode(sbi) && current->plug)
+ 			blk_finish_plug(current->plug);
+ 
+-		if (F2FS_IO_ALIGNED(sbi))
++		if (!F2FS_IO_ALIGNED(sbi))
+ 			goto submit_io;
+ 
+ 		start = bio->bi_iter.bi_size >> F2FS_BLKSIZE_BITS;
+@@ -2757,7 +2757,8 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
+ 				sector_t *last_block,
+ 				struct writeback_control *wbc,
+ 				enum iostat_type io_type,
+-				int compr_blocks)
++				int compr_blocks,
++				bool allow_balance)
+ {
+ 	struct inode *inode = page->mapping->host;
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+@@ -2895,7 +2896,7 @@ out:
+ 	}
+ 	unlock_page(page);
+ 	if (!S_ISDIR(inode->i_mode) && !IS_NOQUOTA(inode) &&
+-					!F2FS_I(inode)->cp_task)
++			!F2FS_I(inode)->cp_task && allow_balance)
+ 		f2fs_balance_fs(sbi, need_balance_fs);
+ 
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+@@ -2942,7 +2943,7 @@ out:
+ #endif
+ 
+ 	return f2fs_write_single_data_page(page, NULL, NULL, NULL,
+-						wbc, FS_DATA_IO, 0);
++						wbc, FS_DATA_IO, 0, true);
+ }
+ 
+ /*
+@@ -3110,7 +3111,8 @@ continue_unlock:
+ 			}
+ #endif
+ 			ret = f2fs_write_single_data_page(page, &submitted,
+-					&bio, &last_block, wbc, io_type, 0);
++					&bio, &last_block, wbc, io_type,
++					0, true);
+ 			if (ret == AOP_WRITEPAGE_ACTIVATE)
+ 				unlock_page(page);
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 06e5a6053f3f9..699815e94bd30 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3507,7 +3507,7 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
+ 				struct bio **bio, sector_t *last_block,
+ 				struct writeback_control *wbc,
+ 				enum iostat_type io_type,
+-				int compr_blocks);
++				int compr_blocks, bool allow_balance);
+ void f2fs_invalidate_page(struct page *page, unsigned int offset,
+ 			unsigned int length);
+ int f2fs_release_page(struct page *page, gfp_t wait);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index fe39e591e5b4c..498e3aac79340 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -59,6 +59,9 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
+ 	bool need_alloc = true;
+ 	int err = 0;
+ 
++	if (unlikely(IS_IMMUTABLE(inode)))
++		return VM_FAULT_SIGBUS;
++
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+ 		err = -EIO;
+ 		goto err;
+@@ -766,6 +769,10 @@ int f2fs_truncate(struct inode *inode)
+ 		return -EIO;
+ 	}
+ 
++	err = dquot_initialize(inode);
++	if (err)
++		return err;
++
+ 	/* we should check inline_data size */
+ 	if (!f2fs_may_inline_data(inode)) {
+ 		err = f2fs_convert_inline_inode(inode);
+@@ -847,7 +854,8 @@ static void __setattr_copy(struct inode *inode, const struct iattr *attr)
+ 	if (ia_valid & ATTR_MODE) {
+ 		umode_t mode = attr->ia_mode;
+ 
+-		if (!in_group_p(inode->i_gid) && !capable(CAP_FSETID))
++		if (!in_group_p(inode->i_gid) &&
++			!capable_wrt_inode_uidgid(inode, CAP_FSETID))
+ 			mode &= ~S_ISGID;
+ 		set_acl_inode(inode, mode);
+ 	}
+@@ -864,6 +872,14 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr)
+ 	if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
+ 		return -EIO;
+ 
++	if (unlikely(IS_IMMUTABLE(inode)))
++		return -EPERM;
++
++	if (unlikely(IS_APPEND(inode) &&
++			(attr->ia_valid & (ATTR_MODE | ATTR_UID |
++				  ATTR_GID | ATTR_TIMES_SET))))
++		return -EPERM;
++
+ 	if ((attr->ia_valid & ATTR_SIZE) &&
+ 		!f2fs_is_compress_backend_ready(inode))
+ 		return -EOPNOTSUPP;
+@@ -4079,6 +4095,11 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 		inode_lock(inode);
+ 	}
+ 
++	if (unlikely(IS_IMMUTABLE(inode))) {
++		ret = -EPERM;
++		goto unlock;
++	}
++
+ 	ret = generic_write_checks(iocb, from);
+ 	if (ret > 0) {
+ 		bool preallocated = false;
+@@ -4143,6 +4164,7 @@ write:
+ 		if (ret > 0)
+ 			f2fs_update_iostat(F2FS_I_SB(inode), APP_WRITE_IO, ret);
+ 	}
++unlock:
+ 	inode_unlock(inode);
+ out:
+ 	trace_f2fs_file_write_iter(inode, iocb->ki_pos,
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 70384e31788db..b9e37f0b3e093 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -191,6 +191,10 @@ int f2fs_convert_inline_inode(struct inode *inode)
+ 	if (!f2fs_has_inline_data(inode))
+ 		return 0;
+ 
++	err = dquot_initialize(inode);
++	if (err)
++		return err;
++
+ 	page = f2fs_grab_cache_page(inode->i_mapping, 0, false);
+ 	if (!page)
+ 		return -ENOMEM;
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index aa284ce7ec00d..4fffbef216af8 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1764,6 +1764,9 @@ restore_flag:
+ 
+ static void f2fs_enable_checkpoint(struct f2fs_sb_info *sbi)
+ {
++	/* we should flush all the data to keep data consistency */
++	sync_inodes_sb(sbi->sb);
++
+ 	down_write(&sbi->gc_lock);
+ 	f2fs_dirty_to_prefree(sbi);
+ 
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index 62d9081d1e26e..a1f9dde33058f 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -1230,6 +1230,9 @@ static int gfs2_iomap_end(struct inode *inode, loff_t pos, loff_t length,
+ 
+ 	gfs2_inplace_release(ip);
+ 
++	if (ip->i_qadata && ip->i_qadata->qa_qd_num)
++		gfs2_quota_unlock(ip);
++
+ 	if (length != written && (iomap->flags & IOMAP_F_NEW)) {
+ 		/* Deallocate blocks that were just allocated. */
+ 		loff_t blockmask = i_blocksize(inode) - 1;
+@@ -1242,9 +1245,6 @@ static int gfs2_iomap_end(struct inode *inode, loff_t pos, loff_t length,
+ 		}
+ 	}
+ 
+-	if (ip->i_qadata && ip->i_qadata->qa_qd_num)
+-		gfs2_quota_unlock(ip);
+-
+ 	if (unlikely(!written))
+ 		goto out_unlock;
+ 
+diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c
+index 9f2b5609f225d..153272f82984b 100644
+--- a/fs/gfs2/lock_dlm.c
++++ b/fs/gfs2/lock_dlm.c
+@@ -284,7 +284,6 @@ static void gdlm_put_lock(struct gfs2_glock *gl)
+ {
+ 	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
+ 	struct lm_lockstruct *ls = &sdp->sd_lockstruct;
+-	int lvb_needs_unlock = 0;
+ 	int error;
+ 
+ 	if (gl->gl_lksb.sb_lkid == 0) {
+@@ -297,13 +296,10 @@ static void gdlm_put_lock(struct gfs2_glock *gl)
+ 	gfs2_sbstats_inc(gl, GFS2_LKS_DCOUNT);
+ 	gfs2_update_request_times(gl);
+ 
+-	/* don't want to skip dlm_unlock writing the lvb when lock is ex */
+-
+-	if (gl->gl_lksb.sb_lvbptr && (gl->gl_state == LM_ST_EXCLUSIVE))
+-		lvb_needs_unlock = 1;
++	/* don't want to skip dlm_unlock writing the lvb when lock has one */
+ 
+ 	if (test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags) &&
+-	    !lvb_needs_unlock) {
++	    !gl->gl_lksb.sb_lvbptr) {
+ 		gfs2_glock_free(gl);
+ 		return;
+ 	}
+diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
+index c26c68ebd29d4..a3c1911862f01 100644
+--- a/fs/gfs2/recovery.c
++++ b/fs/gfs2/recovery.c
+@@ -514,8 +514,10 @@ void gfs2_recover_func(struct work_struct *work)
+ 			error = foreach_descriptor(jd, head.lh_tail,
+ 						   head.lh_blkno, pass);
+ 			lops_after_scan(jd, error, pass);
+-			if (error)
++			if (error) {
++				up_read(&sdp->sd_log_flush_lock);
+ 				goto fail_gunlock_thaw;
++			}
+ 		}
+ 
+ 		recover_local_statfs(jd, &head);
+diff --git a/fs/gfs2/util.c b/fs/gfs2/util.c
+index 0fba3bf641890..b7d4e4550880d 100644
+--- a/fs/gfs2/util.c
++++ b/fs/gfs2/util.c
+@@ -93,9 +93,10 @@ out_unlock:
+ 
+ static void signal_our_withdraw(struct gfs2_sbd *sdp)
+ {
+-	struct gfs2_glock *gl = sdp->sd_live_gh.gh_gl;
++	struct gfs2_glock *live_gl = sdp->sd_live_gh.gh_gl;
+ 	struct inode *inode = sdp->sd_jdesc->jd_inode;
+ 	struct gfs2_inode *ip = GFS2_I(inode);
++	struct gfs2_glock *i_gl = ip->i_gl;
+ 	u64 no_formal_ino = ip->i_no_formal_ino;
+ 	int ret = 0;
+ 	int tries;
+@@ -141,7 +142,8 @@ static void signal_our_withdraw(struct gfs2_sbd *sdp)
+ 		atomic_set(&sdp->sd_freeze_state, SFS_FROZEN);
+ 		thaw_super(sdp->sd_vfs);
+ 	} else {
+-		wait_on_bit(&gl->gl_flags, GLF_DEMOTE, TASK_UNINTERRUPTIBLE);
++		wait_on_bit(&i_gl->gl_flags, GLF_DEMOTE,
++			    TASK_UNINTERRUPTIBLE);
+ 	}
+ 
+ 	/*
+@@ -161,15 +163,15 @@ static void signal_our_withdraw(struct gfs2_sbd *sdp)
+ 	 * on other nodes to be successful, otherwise we remain the owner of
+ 	 * the glock as far as dlm is concerned.
+ 	 */
+-	if (gl->gl_ops->go_free) {
+-		set_bit(GLF_FREEING, &gl->gl_flags);
+-		wait_on_bit(&gl->gl_flags, GLF_FREEING, TASK_UNINTERRUPTIBLE);
++	if (i_gl->gl_ops->go_free) {
++		set_bit(GLF_FREEING, &i_gl->gl_flags);
++		wait_on_bit(&i_gl->gl_flags, GLF_FREEING, TASK_UNINTERRUPTIBLE);
+ 	}
+ 
+ 	/*
+ 	 * Dequeue the "live" glock, but keep a reference so it's never freed.
+ 	 */
+-	gfs2_glock_hold(gl);
++	gfs2_glock_hold(live_gl);
+ 	gfs2_glock_dq_wait(&sdp->sd_live_gh);
+ 	/*
+ 	 * We enqueue the "live" glock in EX so that all other nodes
+@@ -208,7 +210,7 @@ static void signal_our_withdraw(struct gfs2_sbd *sdp)
+ 		gfs2_glock_nq(&sdp->sd_live_gh);
+ 	}
+ 
+-	gfs2_glock_queue_put(gl); /* drop the extra reference we acquired */
++	gfs2_glock_queue_put(live_gl); /* drop extra reference we acquired */
+ 	clear_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags);
+ 
+ 	/*
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index d0b7332ca7033..d0172cc4f6427 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -8440,8 +8440,21 @@ static __poll_t io_uring_poll(struct file *file, poll_table *wait)
+ 	smp_rmb();
+ 	if (!io_sqring_full(ctx))
+ 		mask |= EPOLLOUT | EPOLLWRNORM;
+-	io_cqring_overflow_flush(ctx, false, NULL, NULL);
+-	if (io_cqring_events(ctx))
++
++	/*
++	 * Don't flush cqring overflow list here, just do a simple check.
++	 * Otherwise there could possible be ABBA deadlock:
++	 *      CPU0                    CPU1
++	 *      ----                    ----
++	 * lock(&ctx->uring_lock);
++	 *                              lock(&ep->mtx);
++	 *                              lock(&ctx->uring_lock);
++	 * lock(&ep->mtx);
++	 *
++	 * Users may get EPOLLIN meanwhile seeing nothing in cqring, this
++	 * pushs them to do the flush.
++	 */
++	if (io_cqring_events(ctx) || test_bit(0, &ctx->cq_check_overflow))
+ 		mask |= EPOLLIN | EPOLLRDNORM;
+ 
+ 	return mask;
+diff --git a/fs/isofs/dir.c b/fs/isofs/dir.c
+index f0fe641893a5e..b9e6a7ec78be4 100644
+--- a/fs/isofs/dir.c
++++ b/fs/isofs/dir.c
+@@ -152,6 +152,7 @@ static int do_isofs_readdir(struct inode *inode, struct file *file,
+ 			printk(KERN_NOTICE "iso9660: Corrupted directory entry"
+ 			       " in block %lu of inode %lu\n", block,
+ 			       inode->i_ino);
++			brelse(bh);
+ 			return -EIO;
+ 		}
+ 
+diff --git a/fs/isofs/namei.c b/fs/isofs/namei.c
+index 402769881c32b..58f80e1b3ac0d 100644
+--- a/fs/isofs/namei.c
++++ b/fs/isofs/namei.c
+@@ -102,6 +102,7 @@ isofs_find_entry(struct inode *dir, struct dentry *dentry,
+ 			printk(KERN_NOTICE "iso9660: Corrupted directory entry"
+ 			       " in block %lu of inode %lu\n", block,
+ 			       dir->i_ino);
++			brelse(bh);
+ 			return 0;
+ 		}
+ 
+diff --git a/fs/jffs2/summary.c b/fs/jffs2/summary.c
+index be7c8a6a57480..4fe64519870f1 100644
+--- a/fs/jffs2/summary.c
++++ b/fs/jffs2/summary.c
+@@ -783,6 +783,8 @@ static int jffs2_sum_write_data(struct jffs2_sb_info *c, struct jffs2_eraseblock
+ 					dbg_summary("Writing unknown RWCOMPAT_COPY node type %x\n",
+ 						    je16_to_cpu(temp->u.nodetype));
+ 					jffs2_sum_disable_collecting(c->summary);
++					/* The above call removes the list, nothing more to do */
++					goto bail_rwcompat;
+ 				} else {
+ 					BUG();	/* unknown node in summary information */
+ 				}
+@@ -794,6 +796,7 @@ static int jffs2_sum_write_data(struct jffs2_sb_info *c, struct jffs2_eraseblock
+ 
+ 		c->summary->sum_num--;
+ 	}
++ bail_rwcompat:
+ 
+ 	jffs2_sum_reset_collected(c->summary);
+ 
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 7dfcab2a2da68..aedad59f8a458 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -1656,7 +1656,7 @@ s64 dbDiscardAG(struct inode *ip, int agno, s64 minlen)
+ 		} else if (rc == -ENOSPC) {
+ 			/* search for next smaller log2 block */
+ 			l2nb = BLKSTOL2(nblocks) - 1;
+-			nblocks = 1 << l2nb;
++			nblocks = 1LL << l2nb;
+ 		} else {
+ 			/* Trim any already allocated blocks */
+ 			jfs_error(bmp->db_ipbmap->i_sb, "-EIO\n");
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 0cd5b127f3bb9..a811d42ffbd11 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -5433,15 +5433,16 @@ static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode,
+ 
+ 	if (cache_validity & NFS_INO_INVALID_ATIME)
+ 		bitmask[1] |= FATTR4_WORD1_TIME_ACCESS;
+-	if (cache_validity & NFS_INO_INVALID_ACCESS)
+-		bitmask[0] |= FATTR4_WORD1_MODE | FATTR4_WORD1_OWNER |
+-				FATTR4_WORD1_OWNER_GROUP;
+-	if (cache_validity & NFS_INO_INVALID_ACL)
+-		bitmask[0] |= FATTR4_WORD0_ACL;
+-	if (cache_validity & NFS_INO_INVALID_LABEL)
++	if (cache_validity & NFS_INO_INVALID_OTHER)
++		bitmask[1] |= FATTR4_WORD1_MODE | FATTR4_WORD1_OWNER |
++				FATTR4_WORD1_OWNER_GROUP |
++				FATTR4_WORD1_NUMLINKS;
++	if (label && label->len && cache_validity & NFS_INO_INVALID_LABEL)
+ 		bitmask[2] |= FATTR4_WORD2_SECURITY_LABEL;
+-	if (cache_validity & NFS_INO_INVALID_CTIME)
++	if (cache_validity & NFS_INO_INVALID_CHANGE)
+ 		bitmask[0] |= FATTR4_WORD0_CHANGE;
++	if (cache_validity & NFS_INO_INVALID_CTIME)
++		bitmask[1] |= FATTR4_WORD1_TIME_METADATA;
+ 	if (cache_validity & NFS_INO_INVALID_MTIME)
+ 		bitmask[1] |= FATTR4_WORD1_TIME_MODIFY;
+ 	if (cache_validity & NFS_INO_INVALID_SIZE)
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index f6d5d783f4a45..0759e589ab52b 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1522,12 +1522,9 @@ static int __init init_nfsd(void)
+ 	int retval;
+ 	printk(KERN_INFO "Installing knfsd (copyright (C) 1996 okir@monad.swb.de).\n");
+ 
+-	retval = register_pernet_subsys(&nfsd_net_ops);
+-	if (retval < 0)
+-		return retval;
+ 	retval = register_cld_notifier();
+ 	if (retval)
+-		goto out_unregister_pernet;
++		return retval;
+ 	retval = nfsd4_init_slabs();
+ 	if (retval)
+ 		goto out_unregister_notifier;
+@@ -1544,9 +1541,14 @@ static int __init init_nfsd(void)
+ 		goto out_free_lockd;
+ 	retval = register_filesystem(&nfsd_fs_type);
+ 	if (retval)
++		goto out_free_exports;
++	retval = register_pernet_subsys(&nfsd_net_ops);
++	if (retval < 0)
+ 		goto out_free_all;
+ 	return 0;
+ out_free_all:
++	unregister_pernet_subsys(&nfsd_net_ops);
++out_free_exports:
+ 	remove_proc_entry("fs/nfs/exports", NULL);
+ 	remove_proc_entry("fs/nfs", NULL);
+ out_free_lockd:
+@@ -1559,13 +1561,12 @@ out_free_slabs:
+ 	nfsd4_free_slabs();
+ out_unregister_notifier:
+ 	unregister_cld_notifier();
+-out_unregister_pernet:
+-	unregister_pernet_subsys(&nfsd_net_ops);
+ 	return retval;
+ }
+ 
+ static void __exit exit_nfsd(void)
+ {
++	unregister_pernet_subsys(&nfsd_net_ops);
+ 	nfsd_drc_slab_free();
+ 	remove_proc_entry("fs/nfs/exports", NULL);
+ 	remove_proc_entry("fs/nfs", NULL);
+@@ -1575,7 +1576,6 @@ static void __exit exit_nfsd(void)
+ 	nfsd4_exit_pnfs();
+ 	unregister_filesystem(&nfsd_fs_type);
+ 	unregister_cld_notifier();
+-	unregister_pernet_subsys(&nfsd_net_ops);
+ }
+ 
+ MODULE_AUTHOR("Olaf Kirch <okir@monad.swb.de>");
+diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
+index 0179a73a3fa2c..12a7590601ddb 100644
+--- a/fs/ocfs2/cluster/heartbeat.c
++++ b/fs/ocfs2/cluster/heartbeat.c
+@@ -2042,7 +2042,7 @@ static struct config_item *o2hb_heartbeat_group_make_item(struct config_group *g
+ 			o2hb_nego_timeout_handler,
+ 			reg, NULL, &reg->hr_handler_list);
+ 	if (ret)
+-		goto free;
++		goto remove_item;
+ 
+ 	ret = o2net_register_handler(O2HB_NEGO_APPROVE_MSG, reg->hr_key,
+ 			sizeof(struct o2hb_nego_msg),
+@@ -2057,6 +2057,12 @@ static struct config_item *o2hb_heartbeat_group_make_item(struct config_group *g
+ 
+ unregister_handler:
+ 	o2net_unregister_handler_list(&reg->hr_handler_list);
++remove_item:
++	spin_lock(&o2hb_live_lock);
++	list_del(&reg->hr_all_item);
++	if (o2hb_global_heartbeat_active())
++		clear_bit(reg->hr_region_num, o2hb_region_bitmap);
++	spin_unlock(&o2hb_live_lock);
+ free:
+ 	kfree(reg);
+ 	return ERR_PTR(ret);
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index d2018f70d1fae..070d2df8ab9cf 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -571,7 +571,7 @@ static ssize_t proc_sys_call_handler(struct kiocb *iocb, struct iov_iter *iter,
+ 	error = -ENOMEM;
+ 	if (count >= KMALLOC_MAX_SIZE)
+ 		goto out;
+-	kbuf = kzalloc(count + 1, GFP_KERNEL);
++	kbuf = kvzalloc(count + 1, GFP_KERNEL);
+ 	if (!kbuf)
+ 		goto out;
+ 
+@@ -600,7 +600,7 @@ static ssize_t proc_sys_call_handler(struct kiocb *iocb, struct iov_iter *iter,
+ 
+ 	error = count;
+ out_free_buf:
+-	kfree(kbuf);
++	kvfree(kbuf);
+ out:
+ 	sysctl_head_finish(head);
+ 
+diff --git a/fs/proc/self.c b/fs/proc/self.c
+index cc71ce3466dc0..a4012154e1096 100644
+--- a/fs/proc/self.c
++++ b/fs/proc/self.c
+@@ -20,7 +20,7 @@ static const char *proc_self_get_link(struct dentry *dentry,
+ 	 * Not currently supported. Once we can inherit all of struct pid,
+ 	 * we can allow this.
+ 	 */
+-	if (current->flags & PF_KTHREAD)
++	if (current->flags & PF_IO_WORKER)
+ 		return ERR_PTR(-EOPNOTSUPP);
+ 
+ 	if (!tgid)
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 602e3a52884d8..3cec6fbef725e 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -1210,7 +1210,6 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
+ 	struct mm_struct *mm;
+ 	struct vm_area_struct *vma;
+ 	enum clear_refs_types type;
+-	struct mmu_gather tlb;
+ 	int itype;
+ 	int rv;
+ 
+@@ -1249,7 +1248,6 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
+ 			goto out_unlock;
+ 		}
+ 
+-		tlb_gather_mmu(&tlb, mm, 0, -1);
+ 		if (type == CLEAR_REFS_SOFT_DIRTY) {
+ 			for (vma = mm->mmap; vma; vma = vma->vm_next) {
+ 				if (!(vma->vm_flags & VM_SOFTDIRTY))
+@@ -1258,15 +1256,18 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
+ 				vma_set_page_prot(vma);
+ 			}
+ 
++			inc_tlb_flush_pending(mm);
+ 			mmu_notifier_range_init(&range, MMU_NOTIFY_SOFT_DIRTY,
+ 						0, NULL, mm, 0, -1UL);
+ 			mmu_notifier_invalidate_range_start(&range);
+ 		}
+ 		walk_page_range(mm, 0, mm->highest_vm_end, &clear_refs_walk_ops,
+ 				&cp);
+-		if (type == CLEAR_REFS_SOFT_DIRTY)
++		if (type == CLEAR_REFS_SOFT_DIRTY) {
+ 			mmu_notifier_invalidate_range_end(&range);
+-		tlb_finish_mmu(&tlb, 0, -1);
++			flush_tlb_mm(mm);
++			dec_tlb_flush_pending(mm);
++		}
+ out_unlock:
+ 		mmap_write_unlock(mm);
+ out_mm:
+diff --git a/fs/proc/thread_self.c b/fs/proc/thread_self.c
+index a553273fbd417..d56681d86d28a 100644
+--- a/fs/proc/thread_self.c
++++ b/fs/proc/thread_self.c
+@@ -17,6 +17,13 @@ static const char *proc_thread_self_get_link(struct dentry *dentry,
+ 	pid_t pid = task_pid_nr_ns(current, ns);
+ 	char *name;
+ 
++	/*
++	 * Not currently supported. Once we can inherit all of struct pid,
++	 * we can allow this.
++	 */
++	if (current->flags & PF_IO_WORKER)
++		return ERR_PTR(-EOPNOTSUPP);
++
+ 	if (!pid)
+ 		return ERR_PTR(-ENOENT);
+ 	name = kmalloc(10 + 6 + 10 + 1, dentry ? GFP_KERNEL : GFP_ATOMIC);
+diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
+index 36714df37d5d8..b1ebf7b61732c 100644
+--- a/fs/pstore/platform.c
++++ b/fs/pstore/platform.c
+@@ -269,7 +269,7 @@ static int pstore_compress(const void *in, void *out,
+ {
+ 	int ret;
+ 
+-	if (!IS_ENABLED(CONFIG_PSTORE_COMPRESSION))
++	if (!IS_ENABLED(CONFIG_PSTORE_COMPRESS))
+ 		return -EINVAL;
+ 
+ 	ret = crypto_comp_compress(tfm, in, inlen, out, &outlen);
+@@ -671,7 +671,7 @@ static void decompress_record(struct pstore_record *record)
+ 	int unzipped_len;
+ 	char *unzipped, *workspace;
+ 
+-	if (!IS_ENABLED(CONFIG_PSTORE_COMPRESSION) || !record->compressed)
++	if (!IS_ENABLED(CONFIG_PSTORE_COMPRESS) || !record->compressed)
+ 		return;
+ 
+ 	/* Only PSTORE_TYPE_DMESG support compression. */
+diff --git a/fs/quota/quota_v2.c b/fs/quota/quota_v2.c
+index c21106557a37e..b1467f3921c28 100644
+--- a/fs/quota/quota_v2.c
++++ b/fs/quota/quota_v2.c
+@@ -164,19 +164,24 @@ static int v2_read_file_info(struct super_block *sb, int type)
+ 		quota_error(sb, "Number of blocks too big for quota file size (%llu > %llu).",
+ 		    (loff_t)qinfo->dqi_blocks << qinfo->dqi_blocksize_bits,
+ 		    i_size_read(sb_dqopt(sb)->files[type]));
+-		goto out;
++		goto out_free;
+ 	}
+ 	if (qinfo->dqi_free_blk >= qinfo->dqi_blocks) {
+ 		quota_error(sb, "Free block number too big (%u >= %u).",
+ 			    qinfo->dqi_free_blk, qinfo->dqi_blocks);
+-		goto out;
++		goto out_free;
+ 	}
+ 	if (qinfo->dqi_free_entry >= qinfo->dqi_blocks) {
+ 		quota_error(sb, "Block with free entry too big (%u >= %u).",
+ 			    qinfo->dqi_free_entry, qinfo->dqi_blocks);
+-		goto out;
++		goto out_free;
+ 	}
+ 	ret = 0;
++out_free:
++	if (ret) {
++		kfree(info->dqi_priv);
++		info->dqi_priv = NULL;
++	}
+ out:
+ 	up_read(&dqopt->dqio_sem);
+ 	return ret;
+diff --git a/fs/ubifs/auth.c b/fs/ubifs/auth.c
+index 8c50de693e1d4..50e88a2ab88ff 100644
+--- a/fs/ubifs/auth.c
++++ b/fs/ubifs/auth.c
+@@ -328,7 +328,7 @@ int ubifs_init_authentication(struct ubifs_info *c)
+ 		ubifs_err(c, "hmac %s is bigger than maximum allowed hmac size (%d > %d)",
+ 			  hmac_name, c->hmac_desc_len, UBIFS_HMAC_ARR_SZ);
+ 		err = -EINVAL;
+-		goto out_free_hash;
++		goto out_free_hmac;
+ 	}
+ 
+ 	err = crypto_shash_setkey(c->hmac_tfm, ukp->data, ukp->datalen);
+diff --git a/fs/ubifs/replay.c b/fs/ubifs/replay.c
+index 2f8d8f4f411ab..9a151a1f5e260 100644
+--- a/fs/ubifs/replay.c
++++ b/fs/ubifs/replay.c
+@@ -559,7 +559,9 @@ static int is_last_bud(struct ubifs_info *c, struct ubifs_bud *bud)
+ }
+ 
+ /* authenticate_sleb_hash is split out for stack usage */
+-static int authenticate_sleb_hash(struct ubifs_info *c, struct shash_desc *log_hash, u8 *hash)
++static int noinline_for_stack
++authenticate_sleb_hash(struct ubifs_info *c,
++		       struct shash_desc *log_hash, u8 *hash)
+ {
+ 	SHASH_DESC_ON_STACK(hash_desc, c->hash_tfm);
+ 
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index cb3acfb7dd1fc..dacbb999ae34d 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -838,8 +838,10 @@ static int alloc_wbufs(struct ubifs_info *c)
+ 		c->jheads[i].wbuf.jhead = i;
+ 		c->jheads[i].grouped = 1;
+ 		c->jheads[i].log_hash = ubifs_hash_get_desc(c);
+-		if (IS_ERR(c->jheads[i].log_hash))
++		if (IS_ERR(c->jheads[i].log_hash)) {
++			err = PTR_ERR(c->jheads[i].log_hash);
+ 			goto out;
++		}
+ 	}
+ 
+ 	/*
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index bec47f2d074be..3fe933b1010c3 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -250,6 +250,9 @@ static loff_t zonefs_check_zone_condition(struct inode *inode,
+ 		}
+ 		inode->i_mode &= ~0222;
+ 		return i_size_read(inode);
++	case BLK_ZONE_COND_FULL:
++		/* The write pointer of full zones is invalid. */
++		return zi->i_max_size;
+ 	default:
+ 		if (zi->i_ztype == ZONEFS_ZTYPE_CNV)
+ 			return zi->i_max_size;
+diff --git a/include/acpi/acexcep.h b/include/acpi/acexcep.h
+index 2fc624a617690..f8a4afb0279a3 100644
+--- a/include/acpi/acexcep.h
++++ b/include/acpi/acexcep.h
+@@ -59,11 +59,11 @@ struct acpi_exception_info {
+ 
+ #define AE_OK                           (acpi_status) 0x0000
+ 
+-#define ACPI_ENV_EXCEPTION(status)      (status & AE_CODE_ENVIRONMENTAL)
+-#define ACPI_AML_EXCEPTION(status)      (status & AE_CODE_AML)
+-#define ACPI_PROG_EXCEPTION(status)     (status & AE_CODE_PROGRAMMER)
+-#define ACPI_TABLE_EXCEPTION(status)    (status & AE_CODE_ACPI_TABLES)
+-#define ACPI_CNTL_EXCEPTION(status)     (status & AE_CODE_CONTROL)
++#define ACPI_ENV_EXCEPTION(status)      (((status) & AE_CODE_MASK) == AE_CODE_ENVIRONMENTAL)
++#define ACPI_AML_EXCEPTION(status)      (((status) & AE_CODE_MASK) == AE_CODE_AML)
++#define ACPI_PROG_EXCEPTION(status)     (((status) & AE_CODE_MASK) == AE_CODE_PROGRAMMER)
++#define ACPI_TABLE_EXCEPTION(status)    (((status) & AE_CODE_MASK) == AE_CODE_ACPI_TABLES)
++#define ACPI_CNTL_EXCEPTION(status)     (((status) & AE_CODE_MASK) == AE_CODE_CONTROL)
+ 
+ /*
+  * Environmental exceptions
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index b97c628ad91ff..34d8287cd7749 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -828,8 +828,13 @@
+ 		/* DWARF 4 */						\
+ 		.debug_types	0 : { *(.debug_types) }			\
+ 		/* DWARF 5 */						\
++		.debug_addr	0 : { *(.debug_addr) }			\
++		.debug_line_str	0 : { *(.debug_line_str) }		\
++		.debug_loclists	0 : { *(.debug_loclists) }		\
+ 		.debug_macro	0 : { *(.debug_macro) }			\
+-		.debug_addr	0 : { *(.debug_addr) }
++		.debug_names	0 : { *(.debug_names) }			\
++		.debug_rnglists	0 : { *(.debug_rnglists) }		\
++		.debug_str_offsets	0 : { *(.debug_str_offsets) }
+ 
+ /* Stabs debugging sections. */
+ #define STABS_DEBUG							\
+@@ -988,12 +993,13 @@
+ #endif
+ 
+ /*
+- * Clang's -fsanitize=kernel-address and -fsanitize=thread produce
+- * unwanted sections (.eh_frame and .init_array.*), but
+- * CONFIG_CONSTRUCTORS wants to keep any .init_array.* sections.
++ * Clang's -fprofile-arcs, -fsanitize=kernel-address, and
++ * -fsanitize=thread produce unwanted sections (.eh_frame
++ * and .init_array.*), but CONFIG_CONSTRUCTORS wants to
++ * keep any .init_array.* sections.
+  * https://bugs.llvm.org/show_bug.cgi?id=46478
+  */
+-#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KCSAN)
++#if defined(CONFIG_GCOV_KERNEL) || defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KCSAN)
+ # ifdef CONFIG_CONSTRUCTORS
+ #  define SANITIZER_DISCARDS						\
+ 	*(.eh_frame)
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 2b16bf48aab61..642ce03f19c4c 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1371,7 +1371,10 @@ static inline void bpf_long_memcpy(void *dst, const void *src, u32 size)
+ /* verify correctness of eBPF program */
+ int bpf_check(struct bpf_prog **fp, union bpf_attr *attr,
+ 	      union bpf_attr __user *uattr);
++
++#ifndef CONFIG_BPF_JIT_ALWAYS_ON
+ void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth);
++#endif
+ 
+ struct btf *bpf_get_btf_vmlinux(void);
+ 
+diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
+index 61a66fb8ebb34..d2d7f9b6a2761 100644
+--- a/include/linux/device-mapper.h
++++ b/include/linux/device-mapper.h
+@@ -325,6 +325,11 @@ struct dm_target {
+ 	 * whether or not its underlying devices have support.
+ 	 */
+ 	bool discards_supported:1;
++
++	/*
++	 * Set if we need to limit the number of in-flight bios when swapping.
++	 */
++	bool limit_swap_bios:1;
+ };
+ 
+ void *dm_per_bio_data(struct bio *bio, size_t data_size);
+diff --git a/include/linux/eventpoll.h b/include/linux/eventpoll.h
+index 8f000fada5a46..0df0de0cf45e3 100644
+--- a/include/linux/eventpoll.h
++++ b/include/linux/eventpoll.h
+@@ -18,7 +18,7 @@ struct file;
+ 
+ #ifdef CONFIG_EPOLL
+ 
+-#ifdef CONFIG_CHECKPOINT_RESTORE
++#ifdef CONFIG_KCMP
+ struct file *get_epoll_tfile_raw_ptr(struct file *file, int tfd, unsigned long toff);
+ #endif
+ 
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index 1b62397bd1247..e2ffa02f9067a 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -886,7 +886,7 @@ void sk_filter_uncharge(struct sock *sk, struct sk_filter *fp);
+ u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
+ #define __bpf_call_base_args \
+ 	((u64 (*)(u64, u64, u64, u64, u64, const struct bpf_insn *)) \
+-	 __bpf_call_base)
++	 (void *)__bpf_call_base)
+ 
+ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog);
+ void bpf_jit_compile(struct bpf_prog *prog);
+diff --git a/include/linux/icmpv6.h b/include/linux/icmpv6.h
+index 1b3371ae81936..9055cb380ee24 100644
+--- a/include/linux/icmpv6.h
++++ b/include/linux/icmpv6.h
+@@ -3,6 +3,7 @@
+ #define _LINUX_ICMPV6_H
+ 
+ #include <linux/skbuff.h>
++#include <linux/ipv6.h>
+ #include <uapi/linux/icmpv6.h>
+ 
+ static inline struct icmp6hdr *icmp6_hdr(const struct sk_buff *skb)
+@@ -15,13 +16,16 @@ static inline struct icmp6hdr *icmp6_hdr(const struct sk_buff *skb)
+ #if IS_ENABLED(CONFIG_IPV6)
+ 
+ typedef void ip6_icmp_send_t(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+-			     const struct in6_addr *force_saddr);
+-#if IS_BUILTIN(CONFIG_IPV6)
++			     const struct in6_addr *force_saddr,
++			     const struct inet6_skb_parm *parm);
+ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+-		const struct in6_addr *force_saddr);
+-static inline void icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info)
++		const struct in6_addr *force_saddr,
++		const struct inet6_skb_parm *parm);
++#if IS_BUILTIN(CONFIG_IPV6)
++static inline void __icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
++				 const struct inet6_skb_parm *parm)
+ {
+-	icmp6_send(skb, type, code, info, NULL);
++	icmp6_send(skb, type, code, info, NULL, parm);
+ }
+ static inline int inet6_register_icmp_sender(ip6_icmp_send_t *fn)
+ {
+@@ -34,18 +38,28 @@ static inline int inet6_unregister_icmp_sender(ip6_icmp_send_t *fn)
+ 	return 0;
+ }
+ #else
+-extern void icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info);
++extern void __icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
++			  const struct inet6_skb_parm *parm);
+ extern int inet6_register_icmp_sender(ip6_icmp_send_t *fn);
+ extern int inet6_unregister_icmp_sender(ip6_icmp_send_t *fn);
+ #endif
+ 
++static inline void icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info)
++{
++	__icmpv6_send(skb, type, code, info, IP6CB(skb));
++}
++
+ int ip6_err_gen_icmpv6_unreach(struct sk_buff *skb, int nhs, int type,
+ 			       unsigned int data_len);
+ 
+ #if IS_ENABLED(CONFIG_NF_NAT)
+ void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info);
+ #else
+-#define icmpv6_ndo_send icmpv6_send
++static inline void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info)
++{
++	struct inet6_skb_parm parm = { 0 };
++	__icmpv6_send(skb_in, type, code, info, &parm);
++}
+ #endif
+ 
+ #else
+diff --git a/include/linux/iommu.h b/include/linux/iommu.h
+index 9bbcfe3b0bb12..f11f5072af5dc 100644
+--- a/include/linux/iommu.h
++++ b/include/linux/iommu.h
+@@ -169,7 +169,7 @@ enum iommu_dev_features {
+  * struct iommu_iotlb_gather - Range information for a pending IOTLB flush
+  *
+  * @start: IOVA representing the start of the range to be flushed
+- * @end: IOVA representing the end of the range to be flushed (exclusive)
++ * @end: IOVA representing the end of the range to be flushed (inclusive)
+  * @pgsize: The interval at which to perform the flush
+  *
+  * This structure is intended to be updated by multiple calls to the
+@@ -536,7 +536,7 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
+ 					       struct iommu_iotlb_gather *gather,
+ 					       unsigned long iova, size_t size)
+ {
+-	unsigned long start = iova, end = start + size;
++	unsigned long start = iova, end = start + size - 1;
+ 
+ 	/*
+ 	 * If the new page is disjoint from the current range or is mapped at
+diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
+index dda61d150a138..f514a7dd8c9cf 100644
+--- a/include/linux/ipv6.h
++++ b/include/linux/ipv6.h
+@@ -84,7 +84,6 @@ struct ipv6_params {
+ 	__s32 autoconf;
+ };
+ extern struct ipv6_params ipv6_defaults;
+-#include <linux/icmpv6.h>
+ #include <linux/tcp.h>
+ #include <linux/udp.h>
+ 
+diff --git a/include/linux/kexec.h b/include/linux/kexec.h
+index 9e93bef529680..5f61389f5f361 100644
+--- a/include/linux/kexec.h
++++ b/include/linux/kexec.h
+@@ -300,6 +300,11 @@ struct kimage {
+ 	/* Information for loading purgatory */
+ 	struct purgatory_info purgatory_info;
+ #endif
++
++#ifdef CONFIG_IMA_KEXEC
++	/* Virtual address of IMA measurement buffer for kexec syscall */
++	void *ima_buffer;
++#endif
+ };
+ 
+ /* kexec interface functions */
+diff --git a/include/linux/key.h b/include/linux/key.h
+index 0f2e24f13c2bd..eed3ce139a32e 100644
+--- a/include/linux/key.h
++++ b/include/linux/key.h
+@@ -289,6 +289,7 @@ extern struct key *key_alloc(struct key_type *type,
+ #define KEY_ALLOC_BUILT_IN		0x0004	/* Key is built into kernel */
+ #define KEY_ALLOC_BYPASS_RESTRICTION	0x0008	/* Override the check on restricted keyrings */
+ #define KEY_ALLOC_UID_KEYRING		0x0010	/* allocating a user or user session keyring */
++#define KEY_ALLOC_SET_KEEP		0x0020	/* Set the KEEP flag on the key/keyring */
+ 
+ extern void key_revoke(struct key *key);
+ extern void key_invalidate(struct key *key);
+diff --git a/include/linux/kgdb.h b/include/linux/kgdb.h
+index 0d6cf64c8bb12..3c755f6eaefd8 100644
+--- a/include/linux/kgdb.h
++++ b/include/linux/kgdb.h
+@@ -360,9 +360,11 @@ extern atomic_t			kgdb_active;
+ extern bool dbg_is_early;
+ extern void __init dbg_late_init(void);
+ extern void kgdb_panic(const char *msg);
++extern void kgdb_free_init_mem(void);
+ #else /* ! CONFIG_KGDB */
+ #define in_dbg_master() (0)
+ #define dbg_late_init()
+ static inline void kgdb_panic(const char *msg) {}
++static inline void kgdb_free_init_mem(void) { }
+ #endif /* ! CONFIG_KGDB */
+ #endif /* _KGDB_H_ */
+diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
+index c941b73773216..2fcc01891b474 100644
+--- a/include/linux/khugepaged.h
++++ b/include/linux/khugepaged.h
+@@ -3,6 +3,7 @@
+ #define _LINUX_KHUGEPAGED_H
+ 
+ #include <linux/sched/coredump.h> /* MMF_VM_HUGEPAGE */
++#include <linux/shmem_fs.h>
+ 
+ 
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+@@ -57,6 +58,7 @@ static inline int khugepaged_enter(struct vm_area_struct *vma,
+ {
+ 	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags))
+ 		if ((khugepaged_always() ||
++		     (shmem_file(vma->vm_file) && shmem_huge_enabled(vma)) ||
+ 		     (khugepaged_req_madv() && (vm_flags & VM_HUGEPAGE))) &&
+ 		    !(vm_flags & VM_NOHUGEPAGE) &&
+ 		    !test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+diff --git a/include/linux/memremap.h b/include/linux/memremap.h
+index 79c49e7f5c304..f5b464daeeca5 100644
+--- a/include/linux/memremap.h
++++ b/include/linux/memremap.h
+@@ -137,6 +137,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
+ void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap);
+ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
+ 		struct dev_pagemap *pgmap);
++bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
+ 
+ unsigned long vmem_altmap_offset(struct vmem_altmap *altmap);
+ void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns);
+@@ -165,6 +166,11 @@ static inline struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
+ 	return NULL;
+ }
+ 
++static inline bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn)
++{
++	return false;
++}
++
+ static inline unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
+ {
+ 	return 0;
+diff --git a/include/linux/mfd/rohm-generic.h b/include/linux/mfd/rohm-generic.h
+index 4283b5b33e040..2b85b9deb03ae 100644
+--- a/include/linux/mfd/rohm-generic.h
++++ b/include/linux/mfd/rohm-generic.h
+@@ -20,14 +20,12 @@ struct rohm_regmap_dev {
+ 	struct regmap *regmap;
+ };
+ 
+-enum {
+-	ROHM_DVS_LEVEL_UNKNOWN,
+-	ROHM_DVS_LEVEL_RUN,
+-	ROHM_DVS_LEVEL_IDLE,
+-	ROHM_DVS_LEVEL_SUSPEND,
+-	ROHM_DVS_LEVEL_LPSR,
+-	ROHM_DVS_LEVEL_MAX = ROHM_DVS_LEVEL_LPSR,
+-};
++#define ROHM_DVS_LEVEL_RUN		BIT(0)
++#define ROHM_DVS_LEVEL_IDLE		BIT(1)
++#define ROHM_DVS_LEVEL_SUSPEND		BIT(2)
++#define ROHM_DVS_LEVEL_LPSR		BIT(3)
++#define ROHM_DVS_LEVEL_VALID_AMOUNT	4
++#define ROHM_DVS_LEVEL_UNKNOWN		0
+ 
+ /**
+  * struct rohm_dvs_config - dynamic voltage scaling register descriptions
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 5c119d6cecf14..c5adba5e79e7e 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -110,8 +110,10 @@ static inline void rcu_user_exit(void) { }
+ 
+ #ifdef CONFIG_RCU_NOCB_CPU
+ void rcu_init_nohz(void);
++void rcu_nocb_flush_deferred_wakeup(void);
+ #else /* #ifdef CONFIG_RCU_NOCB_CPU */
+ static inline void rcu_init_nohz(void) { }
++static inline void rcu_nocb_flush_deferred_wakeup(void) { }
+ #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
+ 
+ /**
+diff --git a/include/linux/rmap.h b/include/linux/rmap.h
+index 70085ca1a3fc9..def5c62c93b3b 100644
+--- a/include/linux/rmap.h
++++ b/include/linux/rmap.h
+@@ -213,7 +213,8 @@ struct page_vma_mapped_walk {
+ 
+ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
+ {
+-	if (pvmw->pte)
++	/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
++	if (pvmw->pte && !PageHuge(pvmw->page))
+ 		pte_unmap(pvmw->pte);
+ 	if (pvmw->ptl)
+ 		spin_unlock(pvmw->ptl);
+diff --git a/include/linux/soundwire/sdw.h b/include/linux/soundwire/sdw.h
+index 41cc1192f9aab..57cda3a3a9d95 100644
+--- a/include/linux/soundwire/sdw.h
++++ b/include/linux/soundwire/sdw.h
+@@ -1001,6 +1001,8 @@ int sdw_bus_exit_clk_stop(struct sdw_bus *bus);
+ 
+ int sdw_read(struct sdw_slave *slave, u32 addr);
+ int sdw_write(struct sdw_slave *slave, u32 addr, u8 value);
++int sdw_write_no_pm(struct sdw_slave *slave, u32 addr, u8 value);
++int sdw_read_no_pm(struct sdw_slave *slave, u32 addr);
+ int sdw_nread(struct sdw_slave *slave, u32 addr, size_t count, u8 *val);
+ int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, u8 *val);
+ 
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index 8f4ff39f51e7d..804a3f69bbd93 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -397,6 +397,10 @@ static inline u32 tpm2_rc_value(u32 rc)
+ #if defined(CONFIG_TCG_TPM) || defined(CONFIG_TCG_TPM_MODULE)
+ 
+ extern int tpm_is_tpm2(struct tpm_chip *chip);
++extern __must_check int tpm_try_get_ops(struct tpm_chip *chip);
++extern void tpm_put_ops(struct tpm_chip *chip);
++extern ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_buf *buf,
++				size_t min_rsp_body_length, const char *desc);
+ extern int tpm_pcr_read(struct tpm_chip *chip, u32 pcr_idx,
+ 			struct tpm_digest *digest);
+ extern int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
+@@ -410,7 +414,6 @@ static inline int tpm_is_tpm2(struct tpm_chip *chip)
+ {
+ 	return -ENODEV;
+ }
+-
+ static inline int tpm_pcr_read(struct tpm_chip *chip, int pcr_idx,
+ 			       struct tpm_digest *digest)
+ {
+diff --git a/include/linux/tty_ldisc.h b/include/linux/tty_ldisc.h
+index b1e6043e99175..572a079761165 100644
+--- a/include/linux/tty_ldisc.h
++++ b/include/linux/tty_ldisc.h
+@@ -185,7 +185,8 @@ struct tty_ldisc_ops {
+ 	void	(*close)(struct tty_struct *);
+ 	void	(*flush_buffer)(struct tty_struct *tty);
+ 	ssize_t	(*read)(struct tty_struct *tty, struct file *file,
+-			unsigned char __user *buf, size_t nr);
++			unsigned char *buf, size_t nr,
++			void **cookie, unsigned long offset);
+ 	ssize_t	(*write)(struct tty_struct *tty, struct file *file,
+ 			 const unsigned char *buf, size_t nr);
+ 	int	(*ioctl)(struct tty_struct *tty, struct file *file,
+diff --git a/include/net/act_api.h b/include/net/act_api.h
+index 87214927314a1..89b42a1e4f88e 100644
+--- a/include/net/act_api.h
++++ b/include/net/act_api.h
+@@ -166,6 +166,7 @@ int tcf_idr_create_from_flags(struct tc_action_net *tn, u32 index,
+ 			      struct nlattr *est, struct tc_action **a,
+ 			      const struct tc_action_ops *ops, int bind,
+ 			      u32 flags);
++void tcf_idr_insert_many(struct tc_action *actions[]);
+ void tcf_idr_cleanup(struct tc_action_net *tn, u32 index);
+ int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index,
+ 			struct tc_action **a, int bind);
+@@ -186,10 +187,13 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
+ 		    struct nlattr *est, char *name, int ovr, int bind,
+ 		    struct tc_action *actions[], size_t *attr_size,
+ 		    bool rtnl_held, struct netlink_ext_ack *extack);
++struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla,
++					 bool rtnl_held,
++					 struct netlink_ext_ack *extack);
+ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ 				    struct nlattr *nla, struct nlattr *est,
+ 				    char *name, int ovr, int bind,
+-				    bool rtnl_held,
++				    struct tc_action_ops *ops, bool rtnl_held,
+ 				    struct netlink_ext_ack *extack);
+ int tcf_action_dump(struct sk_buff *skb, struct tc_action *actions[], int bind,
+ 		    int ref, bool terse);
+diff --git a/include/net/icmp.h b/include/net/icmp.h
+index 9ac2d2672a938..fd84adc479633 100644
+--- a/include/net/icmp.h
++++ b/include/net/icmp.h
+@@ -46,7 +46,11 @@ static inline void icmp_send(struct sk_buff *skb_in, int type, int code, __be32
+ #if IS_ENABLED(CONFIG_NF_NAT)
+ void icmp_ndo_send(struct sk_buff *skb_in, int type, int code, __be32 info);
+ #else
+-#define icmp_ndo_send icmp_send
++static inline void icmp_ndo_send(struct sk_buff *skb_in, int type, int code, __be32 info)
++{
++	struct ip_options opts = { 0 };
++	__icmp_send(skb_in, type, code, info, &opts);
++}
+ #endif
+ 
+ int icmp_rcv(struct sk_buff *skb);
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index fe9747ee70a6f..7d66c61d22c7d 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1424,8 +1424,13 @@ void tcp_cleanup_rbuf(struct sock *sk, int copied);
+  */
+ static inline bool tcp_rmem_pressure(const struct sock *sk)
+ {
+-	int rcvbuf = READ_ONCE(sk->sk_rcvbuf);
+-	int threshold = rcvbuf - (rcvbuf >> 3);
++	int rcvbuf, threshold;
++
++	if (tcp_under_memory_pressure(sk))
++		return true;
++
++	rcvbuf = READ_ONCE(sk->sk_rcvbuf);
++	threshold = rcvbuf - (rcvbuf >> 3);
+ 
+ 	return atomic_read(&sk->sk_rmem_alloc) > threshold;
+ }
+diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
+index 82f3278012677..5498d7a6556a7 100644
+--- a/include/uapi/drm/drm_fourcc.h
++++ b/include/uapi/drm/drm_fourcc.h
+@@ -997,9 +997,9 @@ drm_fourcc_canonicalize_nvidia_format_mod(__u64 modifier)
+  * Not all combinations are valid, and different SoCs may support different
+  * combinations of layout and options.
+  */
+-#define __fourcc_mod_amlogic_layout_mask 0xf
++#define __fourcc_mod_amlogic_layout_mask 0xff
+ #define __fourcc_mod_amlogic_options_shift 8
+-#define __fourcc_mod_amlogic_options_mask 0xf
++#define __fourcc_mod_amlogic_options_mask 0xff
+ 
+ #define DRM_FORMAT_MOD_AMLOGIC_FBC(__layout, __options) \
+ 	fourcc_mod_code(AMLOGIC, \
+diff --git a/init/Kconfig b/init/Kconfig
+index 0872a5a2e7590..d559abf38c905 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -1194,6 +1194,7 @@ endif # NAMESPACES
+ config CHECKPOINT_RESTORE
+ 	bool "Checkpoint/restore support"
+ 	select PROC_CHILDREN
++	select KCMP
+ 	default n
+ 	help
+ 	  Enables additional kernel features in a sake of checkpoint/restore.
+@@ -1737,6 +1738,16 @@ config ARCH_HAS_MEMBARRIER_CALLBACKS
+ config ARCH_HAS_MEMBARRIER_SYNC_CORE
+ 	bool
+ 
++config KCMP
++	bool "Enable kcmp() system call" if EXPERT
++	help
++	  Enable the kernel resource comparison system call. It provides
++	  user-space with the ability to compare two processes to see if they
++	  share a common resource, such as a file descriptor or even virtual
++	  memory space.
++
++	  If unsure, say N.
++
+ config RSEQ
+ 	bool "Enable rseq() system call" if EXPERT
+ 	default y
+diff --git a/init/main.c b/init/main.c
+index 9d964511fe0c2..d9d9141112511 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -1417,6 +1417,7 @@ static int __ref kernel_init(void *unused)
+ 	async_synchronize_full();
+ 	kprobe_free_init_mem();
+ 	ftrace_free_init_mem();
++	kgdb_free_init_mem();
+ 	free_initmem();
+ 	mark_readonly();
+ 
+diff --git a/kernel/Makefile b/kernel/Makefile
+index 6c9f19911be03..88b60a6e5dd0a 100644
+--- a/kernel/Makefile
++++ b/kernel/Makefile
+@@ -48,7 +48,7 @@ obj-y += livepatch/
+ obj-y += dma/
+ obj-y += entry/
+ 
+-obj-$(CONFIG_CHECKPOINT_RESTORE) += kcmp.o
++obj-$(CONFIG_KCMP) += kcmp.o
+ obj-$(CONFIG_FREEZER) += freezer.o
+ obj-$(CONFIG_PROFILING) += profile.o
+ obj-$(CONFIG_STACKTRACE) += stacktrace.o
+diff --git a/kernel/bpf/bpf_iter.c b/kernel/bpf/bpf_iter.c
+index 8f10e30ea0b08..e8957e911de31 100644
+--- a/kernel/bpf/bpf_iter.c
++++ b/kernel/bpf/bpf_iter.c
+@@ -273,7 +273,7 @@ int bpf_iter_reg_target(const struct bpf_iter_reg *reg_info)
+ {
+ 	struct bpf_iter_target_info *tinfo;
+ 
+-	tinfo = kmalloc(sizeof(*tinfo), GFP_KERNEL);
++	tinfo = kzalloc(sizeof(*tinfo), GFP_KERNEL);
+ 	if (!tinfo)
+ 		return -ENOMEM;
+ 
+diff --git a/kernel/bpf/bpf_lru_list.c b/kernel/bpf/bpf_lru_list.c
+index 1b6b9349cb857..d99e89f113c43 100644
+--- a/kernel/bpf/bpf_lru_list.c
++++ b/kernel/bpf/bpf_lru_list.c
+@@ -502,13 +502,14 @@ struct bpf_lru_node *bpf_lru_pop_free(struct bpf_lru *lru, u32 hash)
+ static void bpf_common_lru_push_free(struct bpf_lru *lru,
+ 				     struct bpf_lru_node *node)
+ {
++	u8 node_type = READ_ONCE(node->type);
+ 	unsigned long flags;
+ 
+-	if (WARN_ON_ONCE(node->type == BPF_LRU_LIST_T_FREE) ||
+-	    WARN_ON_ONCE(node->type == BPF_LRU_LOCAL_LIST_T_FREE))
++	if (WARN_ON_ONCE(node_type == BPF_LRU_LIST_T_FREE) ||
++	    WARN_ON_ONCE(node_type == BPF_LRU_LOCAL_LIST_T_FREE))
+ 		return;
+ 
+-	if (node->type == BPF_LRU_LOCAL_LIST_T_PENDING) {
++	if (node_type == BPF_LRU_LOCAL_LIST_T_PENDING) {
+ 		struct bpf_lru_locallist *loc_l;
+ 
+ 		loc_l = per_cpu_ptr(lru->common_lru.local_list, node->cpu);
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index 2b5ca93c17dec..b5be9659ab590 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -815,9 +815,7 @@ static int dev_map_notification(struct notifier_block *notifier,
+ 			break;
+ 
+ 		/* will be freed in free_netdev() */
+-		netdev->xdp_bulkq =
+-			__alloc_percpu_gfp(sizeof(struct xdp_dev_bulk_queue),
+-					   sizeof(void *), GFP_ATOMIC);
++		netdev->xdp_bulkq = alloc_percpu(struct xdp_dev_bulk_queue);
+ 		if (!netdev->xdp_bulkq)
+ 			return NOTIFY_BAD;
+ 
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index c09594e70f90a..6c2e4947beaeb 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -4786,8 +4786,9 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ 					subprog);
+ 			clear_caller_saved_regs(env, caller->regs);
+ 
+-			/* All global functions return SCALAR_VALUE */
++			/* All global functions return a 64-bit SCALAR_VALUE */
+ 			mark_reg_unknown(env, caller->regs, BPF_REG_0);
++			caller->regs[BPF_REG_0].subreg_def = DEF_NOT_SUBREG;
+ 
+ 			/* continue with next insn after call */
+ 			return 0;
+diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
+index 1e75a8923a8d1..8661eb2b17711 100644
+--- a/kernel/debug/debug_core.c
++++ b/kernel/debug/debug_core.c
+@@ -456,6 +456,17 @@ setundefined:
+ 	return 0;
+ }
+ 
++void kgdb_free_init_mem(void)
++{
++	int i;
++
++	/* Clear init memory breakpoints. */
++	for (i = 0; i < KGDB_MAX_BREAKPOINTS; i++) {
++		if (init_section_contains((void *)kgdb_break[i].bpt_addr, 0))
++			kgdb_break[i].state = BP_UNDEFINED;
++	}
++}
++
+ #ifdef CONFIG_KGDB_KDB
+ void kdb_dump_stack_on_cpu(int cpu)
+ {
+diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h
+index a4281fb99299e..81874213b0fe9 100644
+--- a/kernel/debug/kdb/kdb_private.h
++++ b/kernel/debug/kdb/kdb_private.h
+@@ -230,7 +230,7 @@ extern struct task_struct *kdb_curr_task(int);
+ 
+ #define kdb_task_has_cpu(p) (task_curr(p))
+ 
+-#define GFP_KDB (in_interrupt() ? GFP_ATOMIC : GFP_KERNEL)
++#define GFP_KDB (in_dbg_master() ? GFP_ATOMIC : GFP_KERNEL)
+ 
+ extern void *debug_kmalloc(size_t size, gfp_t flags);
+ extern void debug_kfree(void *);
+diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
+index 3994a217bde76..3bf98db9c702d 100644
+--- a/kernel/kcsan/core.c
++++ b/kernel/kcsan/core.c
+@@ -12,7 +12,6 @@
+ #include <linux/moduleparam.h>
+ #include <linux/percpu.h>
+ #include <linux/preempt.h>
+-#include <linux/random.h>
+ #include <linux/sched.h>
+ #include <linux/uaccess.h>
+ 
+@@ -101,7 +100,7 @@ static atomic_long_t watchpoints[CONFIG_KCSAN_NUM_WATCHPOINTS + NUM_SLOTS-1];
+ static DEFINE_PER_CPU(long, kcsan_skip);
+ 
+ /* For kcsan_prandom_u32_max(). */
+-static DEFINE_PER_CPU(struct rnd_state, kcsan_rand_state);
++static DEFINE_PER_CPU(u32, kcsan_rand_state);
+ 
+ static __always_inline atomic_long_t *find_watchpoint(unsigned long addr,
+ 						      size_t size,
+@@ -275,20 +274,17 @@ should_watch(const volatile void *ptr, size_t size, int type, struct kcsan_ctx *
+ }
+ 
+ /*
+- * Returns a pseudo-random number in interval [0, ep_ro). See prandom_u32_max()
+- * for more details.
+- *
+- * The open-coded version here is using only safe primitives for all contexts
+- * where we can have KCSAN instrumentation. In particular, we cannot use
+- * prandom_u32() directly, as its tracepoint could cause recursion.
++ * Returns a pseudo-random number in interval [0, ep_ro). Simple linear
++ * congruential generator, using constants from "Numerical Recipes".
+  */
+ static u32 kcsan_prandom_u32_max(u32 ep_ro)
+ {
+-	struct rnd_state *state = &get_cpu_var(kcsan_rand_state);
+-	const u32 res = prandom_u32_state(state);
++	u32 state = this_cpu_read(kcsan_rand_state);
++
++	state = 1664525 * state + 1013904223;
++	this_cpu_write(kcsan_rand_state, state);
+ 
+-	put_cpu_var(kcsan_rand_state);
+-	return (u32)(((u64) res * ep_ro) >> 32);
++	return state % ep_ro;
+ }
+ 
+ static inline void reset_kcsan_skip(void)
+@@ -639,10 +635,14 @@ static __always_inline void check_access(const volatile void *ptr, size_t size,
+ 
+ void __init kcsan_init(void)
+ {
++	int cpu;
++
+ 	BUG_ON(!in_task());
+ 
+ 	kcsan_debugfs_init();
+-	prandom_seed_full_state(&kcsan_rand_state);
++
++	for_each_possible_cpu(cpu)
++		per_cpu(kcsan_rand_state, cpu) = (u32)get_cycles();
+ 
+ 	/*
+ 	 * We are in the init task, and no other tasks should be running;
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index e21f6b9234f7a..7825adcc5efc3 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -166,6 +166,11 @@ void kimage_file_post_load_cleanup(struct kimage *image)
+ 	vfree(pi->sechdrs);
+ 	pi->sechdrs = NULL;
+ 
++#ifdef CONFIG_IMA_KEXEC
++	vfree(image->ima_buffer);
++	image->ima_buffer = NULL;
++#endif /* CONFIG_IMA_KEXEC */
++
+ 	/* See if architecture has anything to cleanup post load */
+ 	arch_kimage_file_post_load_cleanup(image);
+ 
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 911c77ef5bbcd..f590e9ff37062 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -871,7 +871,6 @@ out:
+ 	cpus_read_unlock();
+ }
+ 
+-#ifdef CONFIG_SYSCTL
+ static void optimize_all_kprobes(void)
+ {
+ 	struct hlist_head *head;
+@@ -897,6 +896,7 @@ out:
+ 	mutex_unlock(&kprobe_mutex);
+ }
+ 
++#ifdef CONFIG_SYSCTL
+ static void unoptimize_all_kprobes(void)
+ {
+ 	struct hlist_head *head;
+@@ -2627,18 +2627,14 @@ static int __init init_kprobes(void)
+ 		}
+ 	}
+ 
+-#if defined(CONFIG_OPTPROBES)
+-#if defined(__ARCH_WANT_KPROBES_INSN_SLOT)
+-	/* Init kprobe_optinsn_slots */
+-	kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
+-#endif
+-	/* By default, kprobes can be optimized */
+-	kprobes_allow_optimization = true;
+-#endif
+-
+ 	/* By default, kprobes are armed */
+ 	kprobes_all_disarmed = false;
+ 
++#if defined(CONFIG_OPTPROBES) && defined(__ARCH_WANT_KPROBES_INSN_SLOT)
++	/* Init kprobe_optinsn_slots for allocation */
++	kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
++#endif
++
+ 	err = arch_init_kprobes();
+ 	if (!err)
+ 		err = register_die_notifier(&kprobe_exceptions_nb);
+@@ -2653,6 +2649,21 @@ static int __init init_kprobes(void)
+ }
+ early_initcall(init_kprobes);
+ 
++#if defined(CONFIG_OPTPROBES)
++static int __init init_optprobes(void)
++{
++	/*
++	 * Enable kprobe optimization - this kicks the optimizer which
++	 * depends on synchronize_rcu_tasks() and ksoftirqd, that is
++	 * not spawned in early initcall. So delay the optimization.
++	 */
++	optimize_all_kprobes();
++
++	return 0;
++}
++subsys_initcall(init_optprobes);
++#endif
++
+ #ifdef CONFIG_DEBUG_FS
+ static void report_probe(struct seq_file *pi, struct kprobe *p,
+ 		const char *sym, int offset, char *modname, struct kprobe *pp)
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index bdaf4829098c0..780012eb2f3fe 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -3707,7 +3707,7 @@ static void
+ print_usage_bug(struct task_struct *curr, struct held_lock *this,
+ 		enum lock_usage_bit prev_bit, enum lock_usage_bit new_bit)
+ {
+-	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
++	if (!debug_locks_off() || debug_locks_silent)
+ 		return;
+ 
+ 	pr_warn("\n");
+@@ -3748,6 +3748,7 @@ valid_state(struct task_struct *curr, struct held_lock *this,
+ 	    enum lock_usage_bit new_bit, enum lock_usage_bit bad_bit)
+ {
+ 	if (unlikely(hlock_class(this)->usage_mask & (1 << bad_bit))) {
++		graph_unlock();
+ 		print_usage_bug(curr, this, bad_bit, new_bit);
+ 		return 0;
+ 	}
+diff --git a/kernel/module.c b/kernel/module.c
+index e20499309b2af..94f926473e350 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -2315,6 +2315,21 @@ static int verify_exported_symbols(struct module *mod)
+ 	return 0;
+ }
+ 
++static bool ignore_undef_symbol(Elf_Half emachine, const char *name)
++{
++	/*
++	 * On x86, PIC code and Clang non-PIC code may have call foo@PLT. GNU as
++	 * before 2.37 produces an unreferenced _GLOBAL_OFFSET_TABLE_ on x86-64.
++	 * i386 has a similar problem but may not deserve a fix.
++	 *
++	 * If we ever have to ignore many symbols, consider refactoring the code to
++	 * only warn if referenced by a relocation.
++	 */
++	if (emachine == EM_386 || emachine == EM_X86_64)
++		return !strcmp(name, "_GLOBAL_OFFSET_TABLE_");
++	return false;
++}
++
+ /* Change all symbols so that st_value encodes the pointer directly. */
+ static int simplify_symbols(struct module *mod, const struct load_info *info)
+ {
+@@ -2360,8 +2375,10 @@ static int simplify_symbols(struct module *mod, const struct load_info *info)
+ 				break;
+ 			}
+ 
+-			/* Ok if weak.  */
+-			if (!ksym && ELF_ST_BIND(sym[i].st_info) == STB_WEAK)
++			/* Ok if weak or ignored.  */
++			if (!ksym &&
++			    (ELF_ST_BIND(sym[i].st_info) == STB_WEAK ||
++			     ignore_undef_symbol(info->hdr->e_machine, name)))
+ 				break;
+ 
+ 			ret = PTR_ERR(ksym) ?: -ENOENT;
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index aafec8cb8637d..d0df95346ab3f 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -782,9 +782,9 @@ static ssize_t devkmsg_read(struct file *file, char __user *buf,
+ 		logbuf_lock_irq();
+ 	}
+ 
+-	if (user->seq < prb_first_valid_seq(prb)) {
++	if (r->info->seq != user->seq) {
+ 		/* our last seen message is gone, return error and reset */
+-		user->seq = prb_first_valid_seq(prb);
++		user->seq = r->info->seq;
+ 		ret = -EPIPE;
+ 		logbuf_unlock_irq();
+ 		goto out;
+@@ -859,6 +859,7 @@ static loff_t devkmsg_llseek(struct file *file, loff_t offset, int whence)
+ static __poll_t devkmsg_poll(struct file *file, poll_table *wait)
+ {
+ 	struct devkmsg_user *user = file->private_data;
++	struct printk_info info;
+ 	__poll_t ret = 0;
+ 
+ 	if (!user)
+@@ -867,9 +868,9 @@ static __poll_t devkmsg_poll(struct file *file, poll_table *wait)
+ 	poll_wait(file, &log_wait, wait);
+ 
+ 	logbuf_lock_irq();
+-	if (prb_read_valid(prb, user->seq, NULL)) {
++	if (prb_read_valid_info(prb, user->seq, &info, NULL)) {
+ 		/* return error when data has vanished underneath us */
+-		if (user->seq < prb_first_valid_seq(prb))
++		if (info.seq != user->seq)
+ 			ret = EPOLLIN|EPOLLRDNORM|EPOLLERR|EPOLLPRI;
+ 		else
+ 			ret = EPOLLIN|EPOLLRDNORM;
+@@ -1606,6 +1607,7 @@ static void syslog_clear(void)
+ 
+ int do_syslog(int type, char __user *buf, int len, int source)
+ {
++	struct printk_info info;
+ 	bool clear = false;
+ 	static int saved_console_loglevel = LOGLEVEL_DEFAULT;
+ 	int error;
+@@ -1676,9 +1678,14 @@ int do_syslog(int type, char __user *buf, int len, int source)
+ 	/* Number of chars in the log buffer */
+ 	case SYSLOG_ACTION_SIZE_UNREAD:
+ 		logbuf_lock_irq();
+-		if (syslog_seq < prb_first_valid_seq(prb)) {
++		if (!prb_read_valid_info(prb, syslog_seq, &info, NULL)) {
++			/* No unread messages. */
++			logbuf_unlock_irq();
++			return 0;
++		}
++		if (info.seq != syslog_seq) {
+ 			/* messages are gone, move to first one */
+-			syslog_seq = prb_first_valid_seq(prb);
++			syslog_seq = info.seq;
+ 			syslog_partial = 0;
+ 		}
+ 		if (source == SYSLOG_FROM_PROC) {
+@@ -1690,7 +1697,6 @@ int do_syslog(int type, char __user *buf, int len, int source)
+ 			error = prb_next_seq(prb) - syslog_seq;
+ 		} else {
+ 			bool time = syslog_partial ? syslog_time : printk_time;
+-			struct printk_info info;
+ 			unsigned int line_count;
+ 			u64 seq;
+ 
+@@ -3378,9 +3384,11 @@ bool kmsg_dump_get_buffer(struct kmsg_dumper *dumper, bool syslog,
+ 		goto out;
+ 
+ 	logbuf_lock_irqsave(flags);
+-	if (dumper->cur_seq < prb_first_valid_seq(prb)) {
+-		/* messages are gone, move to first available one */
+-		dumper->cur_seq = prb_first_valid_seq(prb);
++	if (prb_read_valid_info(prb, dumper->cur_seq, &info, NULL)) {
++		if (info.seq != dumper->cur_seq) {
++			/* messages are gone, move to first available one */
++			dumper->cur_seq = info.seq;
++		}
+ 	}
+ 
+ 	/* last entry */
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index a0e6f746de6c4..2e9e3ed7d63ef 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -45,6 +45,8 @@ struct printk_safe_seq_buf {
+ static DEFINE_PER_CPU(struct printk_safe_seq_buf, safe_print_seq);
+ static DEFINE_PER_CPU(int, printk_context);
+ 
++static DEFINE_RAW_SPINLOCK(safe_read_lock);
++
+ #ifdef CONFIG_PRINTK_NMI
+ static DEFINE_PER_CPU(struct printk_safe_seq_buf, nmi_print_seq);
+ #endif
+@@ -180,8 +182,6 @@ static void report_message_lost(struct printk_safe_seq_buf *s)
+  */
+ static void __printk_safe_flush(struct irq_work *work)
+ {
+-	static raw_spinlock_t read_lock =
+-		__RAW_SPIN_LOCK_INITIALIZER(read_lock);
+ 	struct printk_safe_seq_buf *s =
+ 		container_of(work, struct printk_safe_seq_buf, work);
+ 	unsigned long flags;
+@@ -195,7 +195,7 @@ static void __printk_safe_flush(struct irq_work *work)
+ 	 * different CPUs. This is especially important when printing
+ 	 * a backtrace.
+ 	 */
+-	raw_spin_lock_irqsave(&read_lock, flags);
++	raw_spin_lock_irqsave(&safe_read_lock, flags);
+ 
+ 	i = 0;
+ more:
+@@ -232,7 +232,7 @@ more:
+ 
+ out:
+ 	report_message_lost(s);
+-	raw_spin_unlock_irqrestore(&read_lock, flags);
++	raw_spin_unlock_irqrestore(&safe_read_lock, flags);
+ }
+ 
+ /**
+@@ -278,6 +278,14 @@ void printk_safe_flush_on_panic(void)
+ 		raw_spin_lock_init(&logbuf_lock);
+ 	}
+ 
++	if (raw_spin_is_locked(&safe_read_lock)) {
++		if (num_online_cpus() > 1)
++			return;
++
++		debug_locks_off();
++		raw_spin_lock_init(&safe_read_lock);
++	}
++
+ 	printk_safe_flush();
+ }
+ 
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 593df7edfe97f..5dc36c6e80fdc 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -636,7 +636,6 @@ static noinstr void rcu_eqs_enter(bool user)
+ 	trace_rcu_dyntick(TPS("Start"), rdp->dynticks_nesting, 0, atomic_read(&rdp->dynticks));
+ 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current));
+ 	rdp = this_cpu_ptr(&rcu_data);
+-	do_nocb_deferred_wakeup(rdp);
+ 	rcu_prepare_for_idle();
+ 	rcu_preempt_deferred_qs(current);
+ 
+@@ -683,7 +682,14 @@ EXPORT_SYMBOL_GPL(rcu_idle_enter);
+  */
+ noinstr void rcu_user_enter(void)
+ {
++	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
++
+ 	lockdep_assert_irqs_disabled();
++
++	instrumentation_begin();
++	do_nocb_deferred_wakeup(rdp);
++	instrumentation_end();
++
+ 	rcu_eqs_enter(true);
+ }
+ #endif /* CONFIG_NO_HZ_FULL */
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index fd8a52e9a8874..7d4f78bf40577 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -2187,6 +2187,11 @@ static void do_nocb_deferred_wakeup(struct rcu_data *rdp)
+ 		do_nocb_deferred_wakeup_common(rdp);
+ }
+ 
++void rcu_nocb_flush_deferred_wakeup(void)
++{
++	do_nocb_deferred_wakeup(this_cpu_ptr(&rcu_data));
++}
++
+ void __init rcu_init_nohz(void)
+ {
+ 	int cpu;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index ae7ceba8fd4f2..3486053060276 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3932,6 +3932,22 @@ static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
+ 	trace_sched_util_est_cfs_tp(cfs_rq);
+ }
+ 
++static inline void util_est_dequeue(struct cfs_rq *cfs_rq,
++				    struct task_struct *p)
++{
++	unsigned int enqueued;
++
++	if (!sched_feat(UTIL_EST))
++		return;
++
++	/* Update root cfs_rq's estimated utilization */
++	enqueued  = cfs_rq->avg.util_est.enqueued;
++	enqueued -= min_t(unsigned int, enqueued, _task_util_est(p));
++	WRITE_ONCE(cfs_rq->avg.util_est.enqueued, enqueued);
++
++	trace_sched_util_est_cfs_tp(cfs_rq);
++}
++
+ /*
+  * Check if a (signed) value is within a specified (unsigned) margin,
+  * based on the observation that:
+@@ -3945,23 +3961,16 @@ static inline bool within_margin(int value, int margin)
+ 	return ((unsigned int)(value + margin - 1) < (2 * margin - 1));
+ }
+ 
+-static void
+-util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep)
++static inline void util_est_update(struct cfs_rq *cfs_rq,
++				   struct task_struct *p,
++				   bool task_sleep)
+ {
+ 	long last_ewma_diff;
+ 	struct util_est ue;
+-	int cpu;
+ 
+ 	if (!sched_feat(UTIL_EST))
+ 		return;
+ 
+-	/* Update root cfs_rq's estimated utilization */
+-	ue.enqueued  = cfs_rq->avg.util_est.enqueued;
+-	ue.enqueued -= min_t(unsigned int, ue.enqueued, _task_util_est(p));
+-	WRITE_ONCE(cfs_rq->avg.util_est.enqueued, ue.enqueued);
+-
+-	trace_sched_util_est_cfs_tp(cfs_rq);
+-
+ 	/*
+ 	 * Skip update of task's estimated utilization when the task has not
+ 	 * yet completed an activation, e.g. being migrated.
+@@ -4001,8 +4010,7 @@ util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep)
+ 	 * To avoid overestimation of actual task utilization, skip updates if
+ 	 * we cannot grant there is idle time in this CPU.
+ 	 */
+-	cpu = cpu_of(rq_of(cfs_rq));
+-	if (task_util(p) > capacity_orig_of(cpu))
++	if (task_util(p) > capacity_orig_of(cpu_of(rq_of(cfs_rq))))
+ 		return;
+ 
+ 	/*
+@@ -4041,7 +4049,7 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
+ 	if (!static_branch_unlikely(&sched_asym_cpucapacity))
+ 		return;
+ 
+-	if (!p) {
++	if (!p || p->nr_cpus_allowed == 1) {
+ 		rq->misfit_task_load = 0;
+ 		return;
+ 	}
+@@ -4085,8 +4093,11 @@ static inline void
+ util_est_enqueue(struct cfs_rq *cfs_rq, struct task_struct *p) {}
+ 
+ static inline void
+-util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p,
+-		 bool task_sleep) {}
++util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p) {}
++
++static inline void
++util_est_update(struct cfs_rq *cfs_rq, struct task_struct *p,
++		bool task_sleep) {}
+ static inline void update_misfit_status(struct task_struct *p, struct rq *rq) {}
+ 
+ #endif /* CONFIG_SMP */
+@@ -5589,6 +5600,8 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ 	int idle_h_nr_running = task_has_idle_policy(p);
+ 	bool was_sched_idle = sched_idle_rq(rq);
+ 
++	util_est_dequeue(&rq->cfs, p);
++
+ 	for_each_sched_entity(se) {
+ 		cfs_rq = cfs_rq_of(se);
+ 		dequeue_entity(cfs_rq, se, flags);
+@@ -5639,7 +5652,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ 		rq->next_balance = jiffies;
+ 
+ dequeue_throttle:
+-	util_est_dequeue(&rq->cfs, p, task_sleep);
++	util_est_update(&rq->cfs, p, task_sleep);
+ 	hrtick_update(rq);
+ }
+ 
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index c6932b8f4467a..36b545f17206f 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -285,6 +285,7 @@ static void do_idle(void)
+ 		}
+ 
+ 		arch_cpu_idle_enter();
++		rcu_nocb_flush_deferred_wakeup();
+ 
+ 		/*
+ 		 * In poll mode we reenable interrupts and spin. Also if we
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index 53a7d1512dd73..0ceaaba36c2e1 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -1050,6 +1050,8 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
+ 			    const bool recheck_after_trace)
+ {
+ 	BUG();
++
++	return -1;
+ }
+ #endif
+ 
+diff --git a/kernel/smp.c b/kernel/smp.c
+index 4d17501433be7..25240fb2df949 100644
+--- a/kernel/smp.c
++++ b/kernel/smp.c
+@@ -14,6 +14,7 @@
+ #include <linux/export.h>
+ #include <linux/percpu.h>
+ #include <linux/init.h>
++#include <linux/interrupt.h>
+ #include <linux/gfp.h>
+ #include <linux/smp.h>
+ #include <linux/cpu.h>
+@@ -449,6 +450,9 @@ void flush_smp_call_function_from_idle(void)
+ 
+ 	local_irq_save(flags);
+ 	flush_smp_call_function_queue(true);
++	if (local_softirq_pending())
++		do_softirq();
++
+ 	local_irq_restore(flags);
+ }
+ 
+diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
+index 3f659f8550741..3e261482296cf 100644
+--- a/kernel/tracepoint.c
++++ b/kernel/tracepoint.c
+@@ -53,6 +53,12 @@ struct tp_probes {
+ 	struct tracepoint_func probes[];
+ };
+ 
++/* Called in removal of a func but failed to allocate a new tp_funcs */
++static void tp_stub_func(void)
++{
++	return;
++}
++
+ static inline void *allocate_probes(int count)
+ {
+ 	struct tp_probes *p  = kmalloc(struct_size(p, probes, count),
+@@ -131,6 +137,7 @@ func_add(struct tracepoint_func **funcs, struct tracepoint_func *tp_func,
+ {
+ 	struct tracepoint_func *old, *new;
+ 	int nr_probes = 0;
++	int stub_funcs = 0;
+ 	int pos = -1;
+ 
+ 	if (WARN_ON(!tp_func->func))
+@@ -147,14 +154,34 @@ func_add(struct tracepoint_func **funcs, struct tracepoint_func *tp_func,
+ 			if (old[nr_probes].func == tp_func->func &&
+ 			    old[nr_probes].data == tp_func->data)
+ 				return ERR_PTR(-EEXIST);
++			if (old[nr_probes].func == tp_stub_func)
++				stub_funcs++;
+ 		}
+ 	}
+-	/* + 2 : one for new probe, one for NULL func */
+-	new = allocate_probes(nr_probes + 2);
++	/* + 2 : one for new probe, one for NULL func - stub functions */
++	new = allocate_probes(nr_probes + 2 - stub_funcs);
+ 	if (new == NULL)
+ 		return ERR_PTR(-ENOMEM);
+ 	if (old) {
+-		if (pos < 0) {
++		if (stub_funcs) {
++			/* Need to copy one at a time to remove stubs */
++			int probes = 0;
++
++			pos = -1;
++			for (nr_probes = 0; old[nr_probes].func; nr_probes++) {
++				if (old[nr_probes].func == tp_stub_func)
++					continue;
++				if (pos < 0 && old[nr_probes].prio < prio)
++					pos = probes++;
++				new[probes++] = old[nr_probes];
++			}
++			nr_probes = probes;
++			if (pos < 0)
++				pos = probes;
++			else
++				nr_probes--; /* Account for insertion */
++
++		} else if (pos < 0) {
+ 			pos = nr_probes;
+ 			memcpy(new, old, nr_probes * sizeof(struct tracepoint_func));
+ 		} else {
+@@ -188,8 +215,9 @@ static void *func_remove(struct tracepoint_func **funcs,
+ 	/* (N -> M), (N > 1, M >= 0) probes */
+ 	if (tp_func->func) {
+ 		for (nr_probes = 0; old[nr_probes].func; nr_probes++) {
+-			if (old[nr_probes].func == tp_func->func &&
+-			     old[nr_probes].data == tp_func->data)
++			if ((old[nr_probes].func == tp_func->func &&
++			     old[nr_probes].data == tp_func->data) ||
++			    old[nr_probes].func == tp_stub_func)
+ 				nr_del++;
+ 		}
+ 	}
+@@ -208,14 +236,32 @@ static void *func_remove(struct tracepoint_func **funcs,
+ 		/* N -> M, (N > 1, M > 0) */
+ 		/* + 1 for NULL */
+ 		new = allocate_probes(nr_probes - nr_del + 1);
+-		if (new == NULL)
+-			return ERR_PTR(-ENOMEM);
+-		for (i = 0; old[i].func; i++)
+-			if (old[i].func != tp_func->func
+-					|| old[i].data != tp_func->data)
+-				new[j++] = old[i];
+-		new[nr_probes - nr_del].func = NULL;
+-		*funcs = new;
++		if (new) {
++			for (i = 0; old[i].func; i++)
++				if ((old[i].func != tp_func->func
++				     || old[i].data != tp_func->data)
++				    && old[i].func != tp_stub_func)
++					new[j++] = old[i];
++			new[nr_probes - nr_del].func = NULL;
++			*funcs = new;
++		} else {
++			/*
++			 * Failed to allocate, replace the old function
++			 * with calls to tp_stub_func.
++			 */
++			for (i = 0; old[i].func; i++)
++				if (old[i].func == tp_func->func &&
++				    old[i].data == tp_func->data) {
++					old[i].func = tp_stub_func;
++					/* Set the prio to the next event. */
++					if (old[i + 1].func)
++						old[i].prio =
++							old[i + 1].prio;
++					else
++						old[i].prio = -1;
++				}
++			*funcs = old;
++		}
+ 	}
+ 	debug_print_probes(*funcs);
+ 	return old;
+@@ -295,10 +341,12 @@ static int tracepoint_remove_func(struct tracepoint *tp,
+ 	tp_funcs = rcu_dereference_protected(tp->funcs,
+ 			lockdep_is_held(&tracepoints_mutex));
+ 	old = func_remove(&tp_funcs, func);
+-	if (IS_ERR(old)) {
+-		WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM);
++	if (WARN_ON_ONCE(IS_ERR(old)))
+ 		return PTR_ERR(old);
+-	}
++
++	if (tp_funcs == old)
++		/* Failed allocating new tp_funcs, replaced func with stub */
++		return 0;
+ 
+ 	if (!tp_funcs) {
+ 		/* Removed last function */
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 0846d4ffa3387..dba424447473d 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -1248,7 +1248,7 @@ static void
+ fast_isolate_around(struct compact_control *cc, unsigned long pfn, unsigned long nr_isolated)
+ {
+ 	unsigned long start_pfn, end_pfn;
+-	struct page *page = pfn_to_page(pfn);
++	struct page *page;
+ 
+ 	/* Do not search around if there are enough pages already */
+ 	if (cc->nr_freepages >= cc->nr_migratepages)
+@@ -1259,8 +1259,12 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn, unsigned long
+ 		return;
+ 
+ 	/* Pageblock boundaries */
+-	start_pfn = pageblock_start_pfn(pfn);
+-	end_pfn = min(pageblock_end_pfn(pfn), zone_end_pfn(cc->zone)) - 1;
++	start_pfn = max(pageblock_start_pfn(pfn), cc->zone->zone_start_pfn);
++	end_pfn = min(pageblock_end_pfn(pfn), zone_end_pfn(cc->zone));
++
++	page = pageblock_pfn_to_page(start_pfn, end_pfn, cc->zone);
++	if (!page)
++		return;
+ 
+ 	/* Scan before */
+ 	if (start_pfn != pfn) {
+@@ -1362,7 +1366,8 @@ fast_isolate_freepages(struct compact_control *cc)
+ 			pfn = page_to_pfn(freepage);
+ 
+ 			if (pfn >= highest)
+-				highest = pageblock_start_pfn(pfn);
++				highest = max(pageblock_start_pfn(pfn),
++					      cc->zone->zone_start_pfn);
+ 
+ 			if (pfn >= low_pfn) {
+ 				cc->fast_search_fail = 0;
+@@ -1432,7 +1437,8 @@ fast_isolate_freepages(struct compact_control *cc)
+ 			} else {
+ 				if (cc->direct_compaction && pfn_valid(min_pfn)) {
+ 					page = pageblock_pfn_to_page(min_pfn,
+-						pageblock_end_pfn(min_pfn),
++						min(pageblock_end_pfn(min_pfn),
++						    zone_end_pfn(cc->zone)),
+ 						cc->zone);
+ 					cc->free_pfn = min_pfn;
+ 				}
+@@ -1662,6 +1668,7 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
+ 	unsigned long pfn = cc->migrate_pfn;
+ 	unsigned long high_pfn;
+ 	int order;
++	bool found_block = false;
+ 
+ 	/* Skip hints are relied on to avoid repeats on the fast search */
+ 	if (cc->ignore_skip_hint)
+@@ -1704,7 +1711,7 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
+ 	high_pfn = pageblock_start_pfn(cc->migrate_pfn + distance);
+ 
+ 	for (order = cc->order - 1;
+-	     order >= PAGE_ALLOC_COSTLY_ORDER && pfn == cc->migrate_pfn && nr_scanned < limit;
++	     order >= PAGE_ALLOC_COSTLY_ORDER && !found_block && nr_scanned < limit;
+ 	     order--) {
+ 		struct free_area *area = &cc->zone->free_area[order];
+ 		struct list_head *freelist;
+@@ -1719,7 +1726,11 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
+ 		list_for_each_entry(freepage, freelist, lru) {
+ 			unsigned long free_pfn;
+ 
+-			nr_scanned++;
++			if (nr_scanned++ >= limit) {
++				move_freelist_tail(freelist, freepage);
++				break;
++			}
++
+ 			free_pfn = page_to_pfn(freepage);
+ 			if (free_pfn < high_pfn) {
+ 				/*
+@@ -1728,12 +1739,8 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
+ 				 * the list assumes an entry is deleted, not
+ 				 * reordered.
+ 				 */
+-				if (get_pageblock_skip(freepage)) {
+-					if (list_is_last(freelist, &freepage->lru))
+-						break;
+-
++				if (get_pageblock_skip(freepage))
+ 					continue;
+-				}
+ 
+ 				/* Reorder to so a future search skips recent pages */
+ 				move_freelist_tail(freelist, freepage);
+@@ -1741,15 +1748,10 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
+ 				update_fast_start_pfn(cc, free_pfn);
+ 				pfn = pageblock_start_pfn(free_pfn);
+ 				cc->fast_search_fail = 0;
++				found_block = true;
+ 				set_pageblock_skip(freepage);
+ 				break;
+ 			}
+-
+-			if (nr_scanned >= limit) {
+-				cc->fast_search_fail++;
+-				move_freelist_tail(freelist, freepage);
+-				break;
+-			}
+ 		}
+ 		spin_unlock_irqrestore(&cc->zone->lock, flags);
+ 	}
+@@ -1760,9 +1762,10 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
+ 	 * If fast scanning failed then use a cached entry for a page block
+ 	 * that had free pages as the basis for starting a linear scan.
+ 	 */
+-	if (pfn == cc->migrate_pfn)
++	if (!found_block) {
++		cc->fast_search_fail++;
+ 		pfn = reinit_migrate_pfn(cc);
+-
++	}
+ 	return pfn;
+ }
+ 
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 26909396898b6..2e3b7075e4329 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1312,14 +1312,16 @@ static inline void destroy_compound_gigantic_page(struct page *page,
+ static void update_and_free_page(struct hstate *h, struct page *page)
+ {
+ 	int i;
++	struct page *subpage = page;
+ 
+ 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
+ 		return;
+ 
+ 	h->nr_huge_pages--;
+ 	h->nr_huge_pages_node[page_to_nid(page)]--;
+-	for (i = 0; i < pages_per_huge_page(h); i++) {
+-		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
++	for (i = 0; i < pages_per_huge_page(h);
++	     i++, subpage = mem_map_next(subpage, page, i)) {
++		subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
+ 				1 << PG_referenced | 1 << PG_dirty |
+ 				1 << PG_active | 1 << PG_private |
+ 				1 << PG_writeback);
+@@ -2517,7 +2519,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
+ 		if (hstate_is_gigantic(h)) {
+ 			if (hugetlb_cma_size) {
+ 				pr_warn_once("HugeTLB: hugetlb_cma is enabled, skip boot time allocation\n");
+-				break;
++				goto free;
+ 			}
+ 			if (!alloc_bootmem_huge_page(h))
+ 				break;
+@@ -2535,7 +2537,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
+ 			h->max_huge_pages, buf, i);
+ 		h->max_huge_pages = i;
+ 	}
+-
++free:
+ 	kfree(node_alloc_noretry);
+ }
+ 
+@@ -2984,8 +2986,10 @@ static int hugetlb_sysfs_add_hstate(struct hstate *h, struct kobject *parent,
+ 		return -ENOMEM;
+ 
+ 	retval = sysfs_create_group(hstate_kobjs[hi], hstate_attr_group);
+-	if (retval)
++	if (retval) {
+ 		kobject_put(hstate_kobjs[hi]);
++		hstate_kobjs[hi] = NULL;
++	}
+ 
+ 	return retval;
+ }
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 4e3dff13eb70c..abab394c42062 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -440,18 +440,28 @@ static inline int khugepaged_test_exit(struct mm_struct *mm)
+ static bool hugepage_vma_check(struct vm_area_struct *vma,
+ 			       unsigned long vm_flags)
+ {
+-	if ((!(vm_flags & VM_HUGEPAGE) && !khugepaged_always()) ||
+-	    (vm_flags & VM_NOHUGEPAGE) ||
++	/* Explicitly disabled through madvise. */
++	if ((vm_flags & VM_NOHUGEPAGE) ||
+ 	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+ 		return false;
+ 
+-	if (shmem_file(vma->vm_file) ||
+-	    (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+-	     vma->vm_file &&
+-	     (vm_flags & VM_DENYWRITE))) {
++	/* Enabled via shmem mount options or sysfs settings. */
++	if (shmem_file(vma->vm_file) && shmem_huge_enabled(vma)) {
+ 		return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
+ 				HPAGE_PMD_NR);
+ 	}
++
++	/* THP settings require madvise. */
++	if (!(vm_flags & VM_HUGEPAGE) && !khugepaged_always())
++		return false;
++
++	/* Read-only file mappings need to be aligned for THP to work. */
++	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && vma->vm_file &&
++	    (vm_flags & VM_DENYWRITE)) {
++		return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
++				HPAGE_PMD_NR);
++	}
++
+ 	if (!vma->anon_vma || vma->vm_ops)
+ 		return false;
+ 	if (vma_is_temporary_stack(vma))
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index a604e69ecfa57..d6966f1ebc7af 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -1083,13 +1083,9 @@ static __always_inline struct mem_cgroup *get_active_memcg(void)
+ 
+ 	rcu_read_lock();
+ 	memcg = active_memcg();
+-	if (memcg) {
+-		/* current->active_memcg must hold a ref. */
+-		if (WARN_ON_ONCE(!css_tryget(&memcg->css)))
+-			memcg = root_mem_cgroup;
+-		else
+-			memcg = current->active_memcg;
+-	}
++	/* remote memcg must hold a ref. */
++	if (memcg && WARN_ON_ONCE(!css_tryget(&memcg->css)))
++		memcg = root_mem_cgroup;
+ 	rcu_read_unlock();
+ 
+ 	return memcg;
+@@ -5668,10 +5664,8 @@ static int mem_cgroup_move_account(struct page *page,
+ 			__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
+ 			__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
+ 			if (PageTransHuge(page)) {
+-				__mod_lruvec_state(from_vec, NR_ANON_THPS,
+-						   -nr_pages);
+-				__mod_lruvec_state(to_vec, NR_ANON_THPS,
+-						   nr_pages);
++				__dec_lruvec_state(from_vec, NR_ANON_THPS);
++				__inc_lruvec_state(to_vec, NR_ANON_THPS);
+ 			}
+ 
+ 		}
+@@ -6810,7 +6804,19 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
+ 	memcg_check_events(memcg, page);
+ 	local_irq_enable();
+ 
+-	if (PageSwapCache(page)) {
++	/*
++	 * Cgroup1's unified memory+swap counter has been charged with the
++	 * new swapcache page, finish the transfer by uncharging the swap
++	 * slot. The swap slot would also get uncharged when it dies, but
++	 * it can stick around indefinitely and we'd count the page twice
++	 * the entire time.
++	 *
++	 * Cgroup2 has separate resource counters for memory and swap,
++	 * so this is a non-issue here. Memory and swap charge lifetimes
++	 * correspond 1:1 to page and swap slot lifetimes: we charge the
++	 * page to memory here, and uncharge swap when the slot is freed.
++	 */
++	if (do_memsw_account() && PageSwapCache(page)) {
+ 		swp_entry_t entry = { .val = page_private(page) };
+ 		/*
+ 		 * The swap entry might not get freed for a long time,
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index fd653c9953cfd..570a20b425613 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1237,6 +1237,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
+ 		 */
+ 		put_page(page);
+ 
++	/* device metadata space is not recoverable */
++	if (!pgmap_pfn_valid(pgmap, pfn)) {
++		rc = -ENXIO;
++		goto out;
++	}
++
+ 	/*
+ 	 * Prevent the inode from being freed while we are interrogating
+ 	 * the address_space, typically this would be handled by
+diff --git a/mm/memory.c b/mm/memory.c
+index eb5722027160a..827d42f9ebf7c 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -2165,11 +2165,11 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ 			unsigned long addr, unsigned long end,
+ 			unsigned long pfn, pgprot_t prot)
+ {
+-	pte_t *pte;
++	pte_t *pte, *mapped_pte;
+ 	spinlock_t *ptl;
+ 	int err = 0;
+ 
+-	pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
++	mapped_pte = pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
+ 	if (!pte)
+ 		return -ENOMEM;
+ 	arch_enter_lazy_mmu_mode();
+@@ -2183,7 +2183,7 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ 		pfn++;
+ 	} while (pte++, addr += PAGE_SIZE, addr != end);
+ 	arch_leave_lazy_mmu_mode();
+-	pte_unmap_unlock(pte - 1, ptl);
++	pte_unmap_unlock(mapped_pte, ptl);
+ 	return err;
+ }
+ 
+@@ -5203,17 +5203,19 @@ long copy_huge_page_from_user(struct page *dst_page,
+ 	void *page_kaddr;
+ 	unsigned long i, rc = 0;
+ 	unsigned long ret_val = pages_per_huge_page * PAGE_SIZE;
++	struct page *subpage = dst_page;
+ 
+-	for (i = 0; i < pages_per_huge_page; i++) {
++	for (i = 0; i < pages_per_huge_page;
++	     i++, subpage = mem_map_next(subpage, dst_page, i)) {
+ 		if (allow_pagefault)
+-			page_kaddr = kmap(dst_page + i);
++			page_kaddr = kmap(subpage);
+ 		else
+-			page_kaddr = kmap_atomic(dst_page + i);
++			page_kaddr = kmap_atomic(subpage);
+ 		rc = copy_from_user(page_kaddr,
+ 				(const void __user *)(src + i * PAGE_SIZE),
+ 				PAGE_SIZE);
+ 		if (allow_pagefault)
+-			kunmap(dst_page + i);
++			kunmap(subpage);
+ 		else
+ 			kunmap_atomic(page_kaddr);
+ 
+diff --git a/mm/memremap.c b/mm/memremap.c
+index 16b2fb482da11..2455bac895066 100644
+--- a/mm/memremap.c
++++ b/mm/memremap.c
+@@ -80,6 +80,21 @@ static unsigned long pfn_first(struct dev_pagemap *pgmap, int range_id)
+ 	return pfn + vmem_altmap_offset(pgmap_altmap(pgmap));
+ }
+ 
++bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn)
++{
++	int i;
++
++	for (i = 0; i < pgmap->nr_range; i++) {
++		struct range *range = &pgmap->ranges[i];
++
++		if (pfn >= PHYS_PFN(range->start) &&
++		    pfn <= PHYS_PFN(range->end))
++			return pfn >= pfn_first(pgmap, i);
++	}
++
++	return false;
++}
++
+ static unsigned long pfn_end(struct dev_pagemap *pgmap, int range_id)
+ {
+ 	const struct range *range = &pgmap->ranges[range_id];
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index f9ccd5dc13f32..8d96679668b4e 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -836,8 +836,8 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
+ 	page = alloc_pages(flags, order);
+ 	if (likely(page)) {
+ 		ret = page_address(page);
+-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
+-				    PAGE_SIZE << order);
++		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
++				      PAGE_SIZE << order);
+ 	}
+ 	ret = kasan_kmalloc_large(ret, size, flags);
+ 	/* As ret might get tagged, call kmemleak hook after KASAN. */
+diff --git a/mm/slub.c b/mm/slub.c
+index 071e41067ea67..7b378e2ce270d 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -3984,8 +3984,8 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+ 	page = alloc_pages_node(node, flags, order);
+ 	if (page) {
+ 		ptr = page_address(page);
+-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
+-				    PAGE_SIZE << order);
++		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
++				      PAGE_SIZE << order);
+ 	}
+ 
+ 	return kmalloc_large_node_hook(ptr, size, flags);
+@@ -4116,8 +4116,8 @@ void kfree(const void *x)
+ 
+ 		BUG_ON(!PageCompound(page));
+ 		kfree_hook(object);
+-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
+-				    -(PAGE_SIZE << order));
++		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
++				      -(PAGE_SIZE << order));
+ 		__free_pages(page, order);
+ 		return;
+ 	}
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 4c5a9b2286bf5..67d38334052ef 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -4084,8 +4084,13 @@ module_init(kswapd_init)
+  */
+ int node_reclaim_mode __read_mostly;
+ 
+-#define RECLAIM_WRITE (1<<0)	/* Writeout pages during reclaim */
+-#define RECLAIM_UNMAP (1<<1)	/* Unmap pages during reclaim */
++/*
++ * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
++ * ABI.  New bits are OK, but existing bits can never change.
++ */
++#define RECLAIM_ZONE  (1<<0)   /* Run shrink_inactive_list on the zone */
++#define RECLAIM_WRITE (1<<1)   /* Writeout pages during reclaim */
++#define RECLAIM_UNMAP (1<<2)   /* Unmap pages during reclaim */
+ 
+ /*
+  * Priority for NODE_RECLAIM. This determines the fraction of pages
+diff --git a/net/bluetooth/a2mp.c b/net/bluetooth/a2mp.c
+index da7fd7c8c2dc0..463bad58478b2 100644
+--- a/net/bluetooth/a2mp.c
++++ b/net/bluetooth/a2mp.c
+@@ -381,9 +381,9 @@ static int a2mp_getampassoc_req(struct amp_mgr *mgr, struct sk_buff *skb,
+ 	hdev = hci_dev_get(req->id);
+ 	if (!hdev || hdev->amp_type == AMP_TYPE_BREDR || tmp) {
+ 		struct a2mp_amp_assoc_rsp rsp;
+-		rsp.id = req->id;
+ 
+ 		memset(&rsp, 0, sizeof(rsp));
++		rsp.id = req->id;
+ 
+ 		if (tmp) {
+ 			rsp.status = A2MP_STATUS_COLLISION_OCCURED;
+@@ -512,6 +512,7 @@ static int a2mp_createphyslink_req(struct amp_mgr *mgr, struct sk_buff *skb,
+ 		assoc = kmemdup(req->amp_assoc, assoc_len, GFP_KERNEL);
+ 		if (!assoc) {
+ 			amp_ctrl_put(ctrl);
++			hci_dev_put(hdev);
+ 			return -ENOMEM;
+ 		}
+ 
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index c4aa2cbb92697..555058270f112 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1356,8 +1356,10 @@ int hci_inquiry(void __user *arg)
+ 		 * cleared). If it is interrupted by a signal, return -EINTR.
+ 		 */
+ 		if (wait_on_bit(&hdev->flags, HCI_INQUIRY,
+-				TASK_INTERRUPTIBLE))
+-			return -EINTR;
++				TASK_INTERRUPTIBLE)) {
++			err = -EINTR;
++			goto done;
++		}
+ 	}
+ 
+ 	/* for unlimited number of responses we will use buffer with
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 2ca5eecebacfa..f0a19a48c0481 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -5549,6 +5549,7 @@ BPF_CALL_4(bpf_skb_fib_lookup, struct sk_buff *, skb,
+ {
+ 	struct net *net = dev_net(skb->dev);
+ 	int rc = -EAFNOSUPPORT;
++	bool check_mtu = false;
+ 
+ 	if (plen < sizeof(*params))
+ 		return -EINVAL;
+@@ -5556,22 +5557,28 @@ BPF_CALL_4(bpf_skb_fib_lookup, struct sk_buff *, skb,
+ 	if (flags & ~(BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_OUTPUT))
+ 		return -EINVAL;
+ 
++	if (params->tot_len)
++		check_mtu = true;
++
+ 	switch (params->family) {
+ #if IS_ENABLED(CONFIG_INET)
+ 	case AF_INET:
+-		rc = bpf_ipv4_fib_lookup(net, params, flags, false);
++		rc = bpf_ipv4_fib_lookup(net, params, flags, check_mtu);
+ 		break;
+ #endif
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	case AF_INET6:
+-		rc = bpf_ipv6_fib_lookup(net, params, flags, false);
++		rc = bpf_ipv6_fib_lookup(net, params, flags, check_mtu);
+ 		break;
+ #endif
+ 	}
+ 
+-	if (!rc) {
++	if (rc == BPF_FIB_LKUP_RET_SUCCESS && !check_mtu) {
+ 		struct net_device *dev;
+ 
++		/* When tot_len isn't provided by user, check skb
++		 * against MTU of FIB lookup resulting net_device
++		 */
+ 		dev = dev_get_by_index_rcu(net, params->ifindex);
+ 		if (!is_skb_forwardable(dev, skb))
+ 			rc = BPF_FIB_LKUP_RET_FRAG_NEEDED;
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index 005faea415a48..ff3818333fcfb 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -775,13 +775,14 @@ EXPORT_SYMBOL(__icmp_send);
+ void icmp_ndo_send(struct sk_buff *skb_in, int type, int code, __be32 info)
+ {
+ 	struct sk_buff *cloned_skb = NULL;
++	struct ip_options opts = { 0 };
+ 	enum ip_conntrack_info ctinfo;
+ 	struct nf_conn *ct;
+ 	__be32 orig_ip;
+ 
+ 	ct = nf_ct_get(skb_in, &ctinfo);
+ 	if (!ct || !(ct->status & IPS_SRC_NAT)) {
+-		icmp_send(skb_in, type, code, info);
++		__icmp_send(skb_in, type, code, info, &opts);
+ 		return;
+ 	}
+ 
+@@ -796,7 +797,7 @@ void icmp_ndo_send(struct sk_buff *skb_in, int type, int code, __be32 info)
+ 
+ 	orig_ip = ip_hdr(skb_in)->saddr;
+ 	ip_hdr(skb_in)->saddr = ct->tuplehash[0].tuple.src.u3.ip;
+-	icmp_send(skb_in, type, code, info);
++	__icmp_send(skb_in, type, code, info, &opts);
+ 	ip_hdr(skb_in)->saddr = orig_ip;
+ out:
+ 	consume_skb(cloned_skb);
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index 8956144ea65e8..cbab41d557b20 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -331,10 +331,9 @@ static int icmpv6_getfrag(void *from, char *to, int offset, int len, int odd, st
+ }
+ 
+ #if IS_ENABLED(CONFIG_IPV6_MIP6)
+-static void mip6_addr_swap(struct sk_buff *skb)
++static void mip6_addr_swap(struct sk_buff *skb, const struct inet6_skb_parm *opt)
+ {
+ 	struct ipv6hdr *iph = ipv6_hdr(skb);
+-	struct inet6_skb_parm *opt = IP6CB(skb);
+ 	struct ipv6_destopt_hao *hao;
+ 	struct in6_addr tmp;
+ 	int off;
+@@ -351,7 +350,7 @@ static void mip6_addr_swap(struct sk_buff *skb)
+ 	}
+ }
+ #else
+-static inline void mip6_addr_swap(struct sk_buff *skb) {}
++static inline void mip6_addr_swap(struct sk_buff *skb, const struct inet6_skb_parm *opt) {}
+ #endif
+ 
+ static struct dst_entry *icmpv6_route_lookup(struct net *net,
+@@ -446,7 +445,8 @@ static int icmp6_iif(const struct sk_buff *skb)
+  *	Send an ICMP message in response to a packet in error
+  */
+ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+-		const struct in6_addr *force_saddr)
++		const struct in6_addr *force_saddr,
++		const struct inet6_skb_parm *parm)
+ {
+ 	struct inet6_dev *idev = NULL;
+ 	struct ipv6hdr *hdr = ipv6_hdr(skb);
+@@ -542,7 +542,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ 	if (!(skb->dev->flags & IFF_LOOPBACK) && !icmpv6_global_allow(net, type))
+ 		goto out_bh_enable;
+ 
+-	mip6_addr_swap(skb);
++	mip6_addr_swap(skb, parm);
+ 
+ 	sk = icmpv6_xmit_lock(net);
+ 	if (!sk)
+@@ -559,7 +559,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ 		/* select a more meaningful saddr from input if */
+ 		struct net_device *in_netdev;
+ 
+-		in_netdev = dev_get_by_index(net, IP6CB(skb)->iif);
++		in_netdev = dev_get_by_index(net, parm->iif);
+ 		if (in_netdev) {
+ 			ipv6_dev_get_saddr(net, in_netdev, &fl6.daddr,
+ 					   inet6_sk(sk)->srcprefs,
+@@ -640,7 +640,7 @@ EXPORT_SYMBOL(icmp6_send);
+  */
+ void icmpv6_param_prob(struct sk_buff *skb, u8 code, int pos)
+ {
+-	icmp6_send(skb, ICMPV6_PARAMPROB, code, pos, NULL);
++	icmp6_send(skb, ICMPV6_PARAMPROB, code, pos, NULL, IP6CB(skb));
+ 	kfree_skb(skb);
+ }
+ 
+@@ -697,10 +697,10 @@ int ip6_err_gen_icmpv6_unreach(struct sk_buff *skb, int nhs, int type,
+ 	}
+ 	if (type == ICMP_TIME_EXCEEDED)
+ 		icmp6_send(skb2, ICMPV6_TIME_EXCEED, ICMPV6_EXC_HOPLIMIT,
+-			   info, &temp_saddr);
++			   info, &temp_saddr, IP6CB(skb2));
+ 	else
+ 		icmp6_send(skb2, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH,
+-			   info, &temp_saddr);
++			   info, &temp_saddr, IP6CB(skb2));
+ 	if (rt)
+ 		ip6_rt_put(rt);
+ 
+diff --git a/net/ipv6/ip6_icmp.c b/net/ipv6/ip6_icmp.c
+index 70c8c2f36c980..9e3574880cb03 100644
+--- a/net/ipv6/ip6_icmp.c
++++ b/net/ipv6/ip6_icmp.c
+@@ -33,23 +33,25 @@ int inet6_unregister_icmp_sender(ip6_icmp_send_t *fn)
+ }
+ EXPORT_SYMBOL(inet6_unregister_icmp_sender);
+ 
+-void icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info)
++void __icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
++		   const struct inet6_skb_parm *parm)
+ {
+ 	ip6_icmp_send_t *send;
+ 
+ 	rcu_read_lock();
+ 	send = rcu_dereference(ip6_icmp_send);
+ 	if (send)
+-		send(skb, type, code, info, NULL);
++		send(skb, type, code, info, NULL, parm);
+ 	rcu_read_unlock();
+ }
+-EXPORT_SYMBOL(icmpv6_send);
++EXPORT_SYMBOL(__icmpv6_send);
+ #endif
+ 
+ #if IS_ENABLED(CONFIG_NF_NAT)
+ #include <net/netfilter/nf_conntrack.h>
+ void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info)
+ {
++	struct inet6_skb_parm parm = { 0 };
+ 	struct sk_buff *cloned_skb = NULL;
+ 	enum ip_conntrack_info ctinfo;
+ 	struct in6_addr orig_ip;
+@@ -57,7 +59,7 @@ void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info)
+ 
+ 	ct = nf_ct_get(skb_in, &ctinfo);
+ 	if (!ct || !(ct->status & IPS_SRC_NAT)) {
+-		icmpv6_send(skb_in, type, code, info);
++		__icmpv6_send(skb_in, type, code, info, &parm);
+ 		return;
+ 	}
+ 
+@@ -72,7 +74,7 @@ void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info)
+ 
+ 	orig_ip = ipv6_hdr(skb_in)->saddr;
+ 	ipv6_hdr(skb_in)->saddr = ct->tuplehash[0].tuple.src.u3.in6;
+-	icmpv6_send(skb_in, type, code, info);
++	__icmpv6_send(skb_in, type, code, info, &parm);
+ 	ipv6_hdr(skb_in)->saddr = orig_ip;
+ out:
+ 	consume_skb(cloned_skb);
+diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
+index 313eee12410ec..3db514c4c63ab 100644
+--- a/net/mac80211/mesh_hwmp.c
++++ b/net/mac80211/mesh_hwmp.c
+@@ -356,7 +356,7 @@ u32 airtime_link_metric_get(struct ieee80211_local *local,
+ 	 */
+ 	tx_time = (device_constant + 10 * test_frame_len / rate);
+ 	estimated_retx = ((1 << (2 * ARITH_SHIFT)) / (s_unit - err));
+-	result = (tx_time * estimated_retx) >> (2 * ARITH_SHIFT);
++	result = ((u64)tx_time * estimated_retx) >> (2 * ARITH_SHIFT);
+ 	return (u32)result;
+ }
+ 
+diff --git a/net/nfc/nci/uart.c b/net/nfc/nci/uart.c
+index 11b554ce07ffc..1204c438e87dc 100644
+--- a/net/nfc/nci/uart.c
++++ b/net/nfc/nci/uart.c
+@@ -292,7 +292,8 @@ static int nci_uart_tty_ioctl(struct tty_struct *tty, struct file *file,
+ 
+ /* We don't provide read/write/poll interface for user space. */
+ static ssize_t nci_uart_tty_read(struct tty_struct *tty, struct file *file,
+-				 unsigned char __user *buf, size_t nr)
++				 unsigned char *buf, size_t nr,
++				 void **cookie, unsigned long offset)
+ {
+ 	return 0;
+ }
+diff --git a/net/qrtr/tun.c b/net/qrtr/tun.c
+index b238c40a99842..304b41fea5ab0 100644
+--- a/net/qrtr/tun.c
++++ b/net/qrtr/tun.c
+@@ -31,6 +31,7 @@ static int qrtr_tun_send(struct qrtr_endpoint *ep, struct sk_buff *skb)
+ static int qrtr_tun_open(struct inode *inode, struct file *filp)
+ {
+ 	struct qrtr_tun *tun;
++	int ret;
+ 
+ 	tun = kzalloc(sizeof(*tun), GFP_KERNEL);
+ 	if (!tun)
+@@ -43,7 +44,16 @@ static int qrtr_tun_open(struct inode *inode, struct file *filp)
+ 
+ 	filp->private_data = tun;
+ 
+-	return qrtr_endpoint_register(&tun->ep, QRTR_EP_NID_AUTO);
++	ret = qrtr_endpoint_register(&tun->ep, QRTR_EP_NID_AUTO);
++	if (ret)
++		goto out;
++
++	return 0;
++
++out:
++	filp->private_data = NULL;
++	kfree(tun);
++	return ret;
+ }
+ 
+ static ssize_t qrtr_tun_read_iter(struct kiocb *iocb, struct iov_iter *to)
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index f66417d5d2c31..181c4b501225f 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -888,7 +888,7 @@ static const struct nla_policy tcf_action_policy[TCA_ACT_MAX + 1] = {
+ 	[TCA_ACT_HW_STATS]	= NLA_POLICY_BITFIELD32(TCA_ACT_HW_STATS_ANY),
+ };
+ 
+-static void tcf_idr_insert_many(struct tc_action *actions[])
++void tcf_idr_insert_many(struct tc_action *actions[])
+ {
+ 	int i;
+ 
+@@ -908,19 +908,13 @@ static void tcf_idr_insert_many(struct tc_action *actions[])
+ 	}
+ }
+ 
+-struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+-				    struct nlattr *nla, struct nlattr *est,
+-				    char *name, int ovr, int bind,
+-				    bool rtnl_held,
+-				    struct netlink_ext_ack *extack)
++struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla,
++					 bool rtnl_held,
++					 struct netlink_ext_ack *extack)
+ {
+-	struct nla_bitfield32 flags = { 0, 0 };
+-	u8 hw_stats = TCA_ACT_HW_STATS_ANY;
+-	struct tc_action *a;
++	struct nlattr *tb[TCA_ACT_MAX + 1];
+ 	struct tc_action_ops *a_o;
+-	struct tc_cookie *cookie = NULL;
+ 	char act_name[IFNAMSIZ];
+-	struct nlattr *tb[TCA_ACT_MAX + 1];
+ 	struct nlattr *kind;
+ 	int err;
+ 
+@@ -928,33 +922,21 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ 		err = nla_parse_nested_deprecated(tb, TCA_ACT_MAX, nla,
+ 						  tcf_action_policy, extack);
+ 		if (err < 0)
+-			goto err_out;
++			return ERR_PTR(err);
+ 		err = -EINVAL;
+ 		kind = tb[TCA_ACT_KIND];
+ 		if (!kind) {
+ 			NL_SET_ERR_MSG(extack, "TC action kind must be specified");
+-			goto err_out;
++			return ERR_PTR(err);
+ 		}
+ 		if (nla_strlcpy(act_name, kind, IFNAMSIZ) >= IFNAMSIZ) {
+ 			NL_SET_ERR_MSG(extack, "TC action name too long");
+-			goto err_out;
+-		}
+-		if (tb[TCA_ACT_COOKIE]) {
+-			cookie = nla_memdup_cookie(tb);
+-			if (!cookie) {
+-				NL_SET_ERR_MSG(extack, "No memory to generate TC cookie");
+-				err = -ENOMEM;
+-				goto err_out;
+-			}
++			return ERR_PTR(err);
+ 		}
+-		hw_stats = tcf_action_hw_stats_get(tb[TCA_ACT_HW_STATS]);
+-		if (tb[TCA_ACT_FLAGS])
+-			flags = nla_get_bitfield32(tb[TCA_ACT_FLAGS]);
+ 	} else {
+ 		if (strlcpy(act_name, name, IFNAMSIZ) >= IFNAMSIZ) {
+ 			NL_SET_ERR_MSG(extack, "TC action name too long");
+-			err = -EINVAL;
+-			goto err_out;
++			return ERR_PTR(-EINVAL);
+ 		}
+ 	}
+ 
+@@ -976,24 +958,56 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ 		 * indicate this using -EAGAIN.
+ 		 */
+ 		if (a_o != NULL) {
+-			err = -EAGAIN;
+-			goto err_mod;
++			module_put(a_o->owner);
++			return ERR_PTR(-EAGAIN);
+ 		}
+ #endif
+ 		NL_SET_ERR_MSG(extack, "Failed to load TC action module");
+-		err = -ENOENT;
+-		goto err_free;
++		return ERR_PTR(-ENOENT);
+ 	}
+ 
++	return a_o;
++}
++
++struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
++				    struct nlattr *nla, struct nlattr *est,
++				    char *name, int ovr, int bind,
++				    struct tc_action_ops *a_o, bool rtnl_held,
++				    struct netlink_ext_ack *extack)
++{
++	struct nla_bitfield32 flags = { 0, 0 };
++	u8 hw_stats = TCA_ACT_HW_STATS_ANY;
++	struct nlattr *tb[TCA_ACT_MAX + 1];
++	struct tc_cookie *cookie = NULL;
++	struct tc_action *a;
++	int err;
++
+ 	/* backward compatibility for policer */
+-	if (name == NULL)
++	if (name == NULL) {
++		err = nla_parse_nested_deprecated(tb, TCA_ACT_MAX, nla,
++						  tcf_action_policy, extack);
++		if (err < 0)
++			return ERR_PTR(err);
++		if (tb[TCA_ACT_COOKIE]) {
++			cookie = nla_memdup_cookie(tb);
++			if (!cookie) {
++				NL_SET_ERR_MSG(extack, "No memory to generate TC cookie");
++				err = -ENOMEM;
++				goto err_out;
++			}
++		}
++		hw_stats = tcf_action_hw_stats_get(tb[TCA_ACT_HW_STATS]);
++		if (tb[TCA_ACT_FLAGS])
++			flags = nla_get_bitfield32(tb[TCA_ACT_FLAGS]);
++
+ 		err = a_o->init(net, tb[TCA_ACT_OPTIONS], est, &a, ovr, bind,
+ 				rtnl_held, tp, flags.value, extack);
+-	else
++	} else {
+ 		err = a_o->init(net, nla, est, &a, ovr, bind, rtnl_held,
+ 				tp, flags.value, extack);
++	}
+ 	if (err < 0)
+-		goto err_mod;
++		goto err_out;
+ 
+ 	if (!name && tb[TCA_ACT_COOKIE])
+ 		tcf_set_action_cookie(&a->act_cookie, cookie);
+@@ -1010,14 +1024,11 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ 
+ 	return a;
+ 
+-err_mod:
+-	module_put(a_o->owner);
+-err_free:
++err_out:
+ 	if (cookie) {
+ 		kfree(cookie->data);
+ 		kfree(cookie);
+ 	}
+-err_out:
+ 	return ERR_PTR(err);
+ }
+ 
+@@ -1028,6 +1039,7 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
+ 		    struct tc_action *actions[], size_t *attr_size,
+ 		    bool rtnl_held, struct netlink_ext_ack *extack)
+ {
++	struct tc_action_ops *ops[TCA_ACT_MAX_PRIO] = {};
+ 	struct nlattr *tb[TCA_ACT_MAX_PRIO + 1];
+ 	struct tc_action *act;
+ 	size_t sz = 0;
+@@ -1039,9 +1051,20 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
+ 	if (err < 0)
+ 		return err;
+ 
++	for (i = 1; i <= TCA_ACT_MAX_PRIO && tb[i]; i++) {
++		struct tc_action_ops *a_o;
++
++		a_o = tc_action_load_ops(name, tb[i], rtnl_held, extack);
++		if (IS_ERR(a_o)) {
++			err = PTR_ERR(a_o);
++			goto err_mod;
++		}
++		ops[i - 1] = a_o;
++	}
++
+ 	for (i = 1; i <= TCA_ACT_MAX_PRIO && tb[i]; i++) {
+ 		act = tcf_action_init_1(net, tp, tb[i], est, name, ovr, bind,
+-					rtnl_held, extack);
++					ops[i - 1], rtnl_held, extack);
+ 		if (IS_ERR(act)) {
+ 			err = PTR_ERR(act);
+ 			goto err;
+@@ -1061,6 +1084,11 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
+ 
+ err:
+ 	tcf_action_destroy(actions, bind);
++err_mod:
++	for (i = 0; i < TCA_ACT_MAX_PRIO; i++) {
++		if (ops[i])
++			module_put(ops[i]->owner);
++	}
+ 	return err;
+ }
+ 
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 838b3fd94d776..b2b7834c6cf8a 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -3055,16 +3055,24 @@ int tcf_exts_validate(struct net *net, struct tcf_proto *tp, struct nlattr **tb,
+ 		size_t attr_size = 0;
+ 
+ 		if (exts->police && tb[exts->police]) {
++			struct tc_action_ops *a_o;
++
++			a_o = tc_action_load_ops("police", tb[exts->police], rtnl_held, extack);
++			if (IS_ERR(a_o))
++				return PTR_ERR(a_o);
+ 			act = tcf_action_init_1(net, tp, tb[exts->police],
+ 						rate_tlv, "police", ovr,
+-						TCA_ACT_BIND, rtnl_held,
++						TCA_ACT_BIND, a_o, rtnl_held,
+ 						extack);
+-			if (IS_ERR(act))
++			if (IS_ERR(act)) {
++				module_put(a_o->owner);
+ 				return PTR_ERR(act);
++			}
+ 
+ 			act->type = exts->type = TCA_OLD_COMPAT;
+ 			exts->actions[0] = act;
+ 			exts->nr_actions = 1;
++			tcf_idr_insert_many(exts->actions);
+ 		} else if (exts->action && tb[exts->action]) {
+ 			int err;
+ 
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+index fb044792b571c..5f7e3d12523fe 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+@@ -475,9 +475,6 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
+ 	if (!svc_rdma_post_recvs(newxprt))
+ 		goto errout;
+ 
+-	/* Swap out the handler */
+-	newxprt->sc_cm_id->event_handler = svc_rdma_cma_handler;
+-
+ 	/* Construct RDMA-CM private message */
+ 	pmsg.cp_magic = rpcrdma_cmp_magic;
+ 	pmsg.cp_version = RPCRDMA_CMP_VERSION;
+@@ -498,7 +495,10 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
+ 	}
+ 	conn_param.private_data = &pmsg;
+ 	conn_param.private_data_len = sizeof(pmsg);
++	rdma_lock_handler(newxprt->sc_cm_id);
++	newxprt->sc_cm_id->event_handler = svc_rdma_cma_handler;
+ 	ret = rdma_accept(newxprt->sc_cm_id, &conn_param);
++	rdma_unlock_handler(newxprt->sc_cm_id);
+ 	if (ret) {
+ 		trace_svcrdma_accept_err(newxprt, ret);
+ 		goto errout;
+diff --git a/samples/Kconfig b/samples/Kconfig
+index 0ed6e4d71d87b..e76cdfc50e257 100644
+--- a/samples/Kconfig
++++ b/samples/Kconfig
+@@ -210,7 +210,7 @@ config SAMPLE_WATCHDOG
+ 	depends on CC_CAN_LINK
+ 
+ config SAMPLE_WATCH_QUEUE
+-	bool "Build example /dev/watch_queue notification consumer"
++	bool "Build example watch_queue notification API consumer"
+ 	depends on CC_CAN_LINK && HEADERS_INSTALL
+ 	help
+ 	  Build example userspace program to use the new mount_notify(),
+diff --git a/samples/watch_queue/watch_test.c b/samples/watch_queue/watch_test.c
+index 46e618a897fef..8c6cb57d5cfc5 100644
+--- a/samples/watch_queue/watch_test.c
++++ b/samples/watch_queue/watch_test.c
+@@ -1,5 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0
+-/* Use /dev/watch_queue to watch for notifications.
++/* Use watch_queue API to watch for notifications.
+  *
+  * Copyright (C) 2020 Red Hat, Inc. All Rights Reserved.
+  * Written by David Howells (dhowells@redhat.com)
+diff --git a/security/commoncap.c b/security/commoncap.c
+index a6c9bb4441d54..b2a656947504d 100644
+--- a/security/commoncap.c
++++ b/security/commoncap.c
+@@ -500,7 +500,8 @@ int cap_convert_nscap(struct dentry *dentry, void **ivalue, size_t size)
+ 	__u32 magic, nsmagic;
+ 	struct inode *inode = d_backing_inode(dentry);
+ 	struct user_namespace *task_ns = current_user_ns(),
+-		*fs_ns = inode->i_sb->s_user_ns;
++		*fs_ns = inode->i_sb->s_user_ns,
++		*ancestor;
+ 	kuid_t rootid;
+ 	size_t newsize;
+ 
+@@ -523,6 +524,15 @@ int cap_convert_nscap(struct dentry *dentry, void **ivalue, size_t size)
+ 	if (nsrootid == -1)
+ 		return -EINVAL;
+ 
++	/*
++	 * Do not allow allow adding a v3 filesystem capability xattr
++	 * if the rootid field is ambiguous.
++	 */
++	for (ancestor = task_ns->parent; ancestor; ancestor = ancestor->parent) {
++		if (from_kuid(ancestor, rootid) == 0)
++			return -EINVAL;
++	}
++
+ 	newsize = sizeof(struct vfs_ns_cap_data);
+ 	nscap = kmalloc(newsize, GFP_ATOMIC);
+ 	if (!nscap)
+diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
+index 168c3b78ac47b..a6dd47eb086da 100644
+--- a/security/integrity/evm/evm_crypto.c
++++ b/security/integrity/evm/evm_crypto.c
+@@ -73,7 +73,7 @@ static struct shash_desc *init_desc(char type, uint8_t hash_algo)
+ {
+ 	long rc;
+ 	const char *algo;
+-	struct crypto_shash **tfm, *tmp_tfm;
++	struct crypto_shash **tfm, *tmp_tfm = NULL;
+ 	struct shash_desc *desc;
+ 
+ 	if (type == EVM_XATTR_HMAC) {
+@@ -118,13 +118,16 @@ unlock:
+ alloc:
+ 	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(*tfm),
+ 			GFP_KERNEL);
+-	if (!desc)
++	if (!desc) {
++		crypto_free_shash(tmp_tfm);
+ 		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	desc->tfm = *tfm;
+ 
+ 	rc = crypto_shash_init(desc);
+ 	if (rc) {
++		crypto_free_shash(tmp_tfm);
+ 		kfree(desc);
+ 		return ERR_PTR(rc);
+ 	}
+diff --git a/security/integrity/ima/ima_kexec.c b/security/integrity/ima/ima_kexec.c
+index 121de3e04af23..e29bea3dd4ccd 100644
+--- a/security/integrity/ima/ima_kexec.c
++++ b/security/integrity/ima/ima_kexec.c
+@@ -119,6 +119,7 @@ void ima_add_kexec_buffer(struct kimage *image)
+ 	ret = kexec_add_buffer(&kbuf);
+ 	if (ret) {
+ 		pr_err("Error passing over kexec measurement buffer.\n");
++		vfree(kexec_buffer);
+ 		return;
+ 	}
+ 
+@@ -128,6 +129,8 @@ void ima_add_kexec_buffer(struct kimage *image)
+ 		return;
+ 	}
+ 
++	image->ima_buffer = kexec_buffer;
++
+ 	pr_debug("kexec measurement buffer for the loaded kernel at 0x%lx.\n",
+ 		 kbuf.mem);
+ }
+diff --git a/security/integrity/ima/ima_mok.c b/security/integrity/ima/ima_mok.c
+index 36cadadbfba47..1e5c019161738 100644
+--- a/security/integrity/ima/ima_mok.c
++++ b/security/integrity/ima/ima_mok.c
+@@ -38,13 +38,12 @@ __init int ima_mok_init(void)
+ 				(KEY_POS_ALL & ~KEY_POS_SETATTR) |
+ 				KEY_USR_VIEW | KEY_USR_READ |
+ 				KEY_USR_WRITE | KEY_USR_SEARCH,
+-				KEY_ALLOC_NOT_IN_QUOTA,
++				KEY_ALLOC_NOT_IN_QUOTA |
++				KEY_ALLOC_SET_KEEP,
+ 				restriction, NULL);
+ 
+ 	if (IS_ERR(ima_blacklist_keyring))
+ 		panic("Can't allocate IMA blacklist keyring.");
+-
+-	set_bit(KEY_FLAG_KEEP, &ima_blacklist_keyring->flags);
+ 	return 0;
+ }
+ device_initcall(ima_mok_init);
+diff --git a/security/keys/Kconfig b/security/keys/Kconfig
+index 83bc23409164a..c161642a84841 100644
+--- a/security/keys/Kconfig
++++ b/security/keys/Kconfig
+@@ -119,7 +119,7 @@ config KEY_NOTIFICATIONS
+ 	bool "Provide key/keyring change notifications"
+ 	depends on KEYS && WATCH_QUEUE
+ 	help
+-	  This option provides support for getting change notifications on keys
+-	  and keyrings on which the caller has View permission.  This makes use
+-	  of the /dev/watch_queue misc device to handle the notification
+-	  buffer and provides KEYCTL_WATCH_KEY to enable/disable watches.
++	  This option provides support for getting change notifications
++	  on keys and keyrings on which the caller has View permission.
++	  This makes use of pipes to handle the notification buffer and
++	  provides KEYCTL_WATCH_KEY to enable/disable watches.
+diff --git a/security/keys/key.c b/security/keys/key.c
+index e282c6179b21d..151ff39b68030 100644
+--- a/security/keys/key.c
++++ b/security/keys/key.c
+@@ -303,6 +303,8 @@ struct key *key_alloc(struct key_type *type, const char *desc,
+ 		key->flags |= 1 << KEY_FLAG_BUILTIN;
+ 	if (flags & KEY_ALLOC_UID_KEYRING)
+ 		key->flags |= 1 << KEY_FLAG_UID_KEYRING;
++	if (flags & KEY_ALLOC_SET_KEEP)
++		key->flags |= 1 << KEY_FLAG_KEEP;
+ 
+ #ifdef KEY_DEBUGGING
+ 	key->magic = KEY_DEBUG_MAGIC;
+diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c
+index b9fe02e5f84f0..7a937c3c52834 100644
+--- a/security/keys/trusted-keys/trusted_tpm1.c
++++ b/security/keys/trusted-keys/trusted_tpm1.c
+@@ -403,9 +403,12 @@ static int osap(struct tpm_buf *tb, struct osapsess *s,
+ 	int ret;
+ 
+ 	ret = tpm_get_random(chip, ononce, TPM_NONCE_SIZE);
+-	if (ret != TPM_NONCE_SIZE)
++	if (ret < 0)
+ 		return ret;
+ 
++	if (ret != TPM_NONCE_SIZE)
++		return -EIO;
++
+ 	tpm_buf_reset(tb, TPM_TAG_RQU_COMMAND, TPM_ORD_OSAP);
+ 	tpm_buf_append_u16(tb, type);
+ 	tpm_buf_append_u32(tb, handle);
+@@ -496,8 +499,12 @@ static int tpm_seal(struct tpm_buf *tb, uint16_t keytype,
+ 		goto out;
+ 
+ 	ret = tpm_get_random(chip, td->nonceodd, TPM_NONCE_SIZE);
++	if (ret < 0)
++		return ret;
++
+ 	if (ret != TPM_NONCE_SIZE)
+-		goto out;
++		return -EIO;
++
+ 	ordinal = htonl(TPM_ORD_SEAL);
+ 	datsize = htonl(datalen);
+ 	pcrsize = htonl(pcrinfosize);
+@@ -601,9 +608,12 @@ static int tpm_unseal(struct tpm_buf *tb,
+ 
+ 	ordinal = htonl(TPM_ORD_UNSEAL);
+ 	ret = tpm_get_random(chip, nonceodd, TPM_NONCE_SIZE);
++	if (ret < 0)
++		return ret;
++
+ 	if (ret != TPM_NONCE_SIZE) {
+ 		pr_info("trusted_key: tpm_get_random failed (%d)\n", ret);
+-		return ret;
++		return -EIO;
+ 	}
+ 	ret = TSS_authhmac(authdata1, keyauth, TPM_NONCE_SIZE,
+ 			   enonce1, nonceodd, cont, sizeof(uint32_t),
+@@ -791,7 +801,7 @@ static int getoptions(char *c, struct trusted_key_payload *pay,
+ 		case Opt_migratable:
+ 			if (*args[0].from == '0')
+ 				pay->migratable = 0;
+-			else
++			else if (*args[0].from != '1')
+ 				return -EINVAL;
+ 			break;
+ 		case Opt_pcrlock:
+@@ -1013,8 +1023,12 @@ static int trusted_instantiate(struct key *key,
+ 	case Opt_new:
+ 		key_len = payload->key_len;
+ 		ret = tpm_get_random(chip, payload->key, key_len);
++		if (ret < 0)
++			goto out;
++
+ 		if (ret != key_len) {
+ 			pr_info("trusted_key: key_create failed (%d)\n", ret);
++			ret = -EIO;
+ 			goto out;
+ 		}
+ 		if (tpm2)
+diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
+index 08ec7f48f01d0..e2a0ed5d02f01 100644
+--- a/security/keys/trusted-keys/trusted_tpm2.c
++++ b/security/keys/trusted-keys/trusted_tpm2.c
+@@ -83,6 +83,12 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
+ 	if (rc)
+ 		return rc;
+ 
++	rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_CREATE);
++	if (rc) {
++		tpm_put_ops(chip);
++		return rc;
++	}
++
+ 	tpm_buf_append_u32(&buf, options->keyhandle);
+ 	tpm2_buf_append_auth(&buf, TPM2_RS_PW,
+ 			     NULL /* nonce */, 0,
+@@ -130,7 +136,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
+ 		goto out;
+ 	}
+ 
+-	rc = tpm_send(chip, buf.data, tpm_buf_length(&buf));
++	rc = tpm_transmit_cmd(chip, &buf, 4, "sealing data");
+ 	if (rc)
+ 		goto out;
+ 
+@@ -157,6 +163,7 @@ out:
+ 			rc = -EPERM;
+ 	}
+ 
++	tpm_put_ops(chip);
+ 	return rc;
+ }
+ 
+@@ -211,7 +218,7 @@ static int tpm2_load_cmd(struct tpm_chip *chip,
+ 		goto out;
+ 	}
+ 
+-	rc = tpm_send(chip, buf.data, tpm_buf_length(&buf));
++	rc = tpm_transmit_cmd(chip, &buf, 4, "loading blob");
+ 	if (!rc)
+ 		*blob_handle = be32_to_cpup(
+ 			(__be32 *) &buf.data[TPM_HEADER_SIZE]);
+@@ -260,7 +267,7 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip,
+ 			     options->blobauth /* hmac */,
+ 			     TPM_DIGEST_SIZE);
+ 
+-	rc = tpm_send(chip, buf.data, tpm_buf_length(&buf));
++	rc = tpm_transmit_cmd(chip, &buf, 6, "unsealing");
+ 	if (rc > 0)
+ 		rc = -EPERM;
+ 
+@@ -304,12 +311,19 @@ int tpm2_unseal_trusted(struct tpm_chip *chip,
+ 	u32 blob_handle;
+ 	int rc;
+ 
+-	rc = tpm2_load_cmd(chip, payload, options, &blob_handle);
++	rc = tpm_try_get_ops(chip);
+ 	if (rc)
+ 		return rc;
+ 
++	rc = tpm2_load_cmd(chip, payload, options, &blob_handle);
++	if (rc)
++		goto out;
++
+ 	rc = tpm2_unseal_cmd(chip, payload, options, blob_handle);
+ 	tpm2_flush_context(chip, blob_handle);
+ 
++out:
++	tpm_put_ops(chip);
++
+ 	return rc;
+ }
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index c46312710e73e..227eb89679637 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -3414,6 +3414,10 @@ static int selinux_inode_setsecurity(struct inode *inode, const char *name,
+ static int selinux_inode_listsecurity(struct inode *inode, char *buffer, size_t buffer_size)
+ {
+ 	const int len = sizeof(XATTR_NAME_SELINUX);
++
++	if (!selinux_initialized(&selinux_state))
++		return 0;
++
+ 	if (buffer && len <= buffer_size)
+ 		memcpy(buffer, XATTR_NAME_SELINUX, len);
+ 	return len;
+diff --git a/sound/core/init.c b/sound/core/init.c
+index 764dbe673d488..018ce4ef12ec8 100644
+--- a/sound/core/init.c
++++ b/sound/core/init.c
+@@ -14,6 +14,7 @@
+ #include <linux/ctype.h>
+ #include <linux/pm.h>
+ #include <linux/completion.h>
++#include <linux/interrupt.h>
+ 
+ #include <sound/core.h>
+ #include <sound/control.h>
+@@ -418,6 +419,9 @@ int snd_card_disconnect(struct snd_card *card)
+ 	/* notify all devices that we are disconnected */
+ 	snd_device_disconnect_all(card);
+ 
++	if (card->sync_irq > 0)
++		synchronize_irq(card->sync_irq);
++
+ 	snd_info_card_disconnect(card);
+ 	if (card->registered) {
+ 		device_del(&card->card_dev);
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index be5714f1bb58c..41cbdac5b1cfa 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -1111,6 +1111,10 @@ static int snd_pcm_dev_disconnect(struct snd_device *device)
+ 		}
+ 	}
+ 
++	for (cidx = 0; cidx < 2; cidx++)
++		for (substream = pcm->streams[cidx].substream; substream; substream = substream->next)
++			snd_pcm_sync_stop(substream, false);
++
+ 	pcm_call_notify(pcm, n_disconnect);
+ 	for (cidx = 0; cidx < 2; cidx++) {
+ 		snd_unregister_device(&pcm->streams[cidx].dev);
+diff --git a/sound/core/pcm_local.h b/sound/core/pcm_local.h
+index 17a1a5d870980..b3e8be5aeafb3 100644
+--- a/sound/core/pcm_local.h
++++ b/sound/core/pcm_local.h
+@@ -63,6 +63,7 @@ static inline void snd_pcm_timer_done(struct snd_pcm_substream *substream) {}
+ 
+ void __snd_pcm_xrun(struct snd_pcm_substream *substream);
+ void snd_pcm_group_init(struct snd_pcm_group *group);
++void snd_pcm_sync_stop(struct snd_pcm_substream *substream, bool sync_irq);
+ 
+ #ifdef CONFIG_SND_DMA_SGBUF
+ struct page *snd_pcm_sgbuf_ops_page(struct snd_pcm_substream *substream,
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 9f3f8e953ff04..50c14cc861e69 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -583,13 +583,13 @@ static inline void snd_pcm_timer_notify(struct snd_pcm_substream *substream,
+ #endif
+ }
+ 
+-static void snd_pcm_sync_stop(struct snd_pcm_substream *substream)
++void snd_pcm_sync_stop(struct snd_pcm_substream *substream, bool sync_irq)
+ {
+-	if (substream->runtime->stop_operating) {
++	if (substream->runtime && substream->runtime->stop_operating) {
+ 		substream->runtime->stop_operating = false;
+-		if (substream->ops->sync_stop)
++		if (substream->ops && substream->ops->sync_stop)
+ 			substream->ops->sync_stop(substream);
+-		else if (substream->pcm->card->sync_irq > 0)
++		else if (sync_irq && substream->pcm->card->sync_irq > 0)
+ 			synchronize_irq(substream->pcm->card->sync_irq);
+ 	}
+ }
+@@ -686,7 +686,7 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ 		if (atomic_read(&substream->mmap_count))
+ 			return -EBADFD;
+ 
+-	snd_pcm_sync_stop(substream);
++	snd_pcm_sync_stop(substream, true);
+ 
+ 	params->rmask = ~0U;
+ 	err = snd_pcm_hw_refine(substream, params);
+@@ -809,7 +809,7 @@ static int do_hw_free(struct snd_pcm_substream *substream)
+ {
+ 	int result = 0;
+ 
+-	snd_pcm_sync_stop(substream);
++	snd_pcm_sync_stop(substream, true);
+ 	if (substream->ops->hw_free)
+ 		result = substream->ops->hw_free(substream);
+ 	if (substream->managed_buffer_alloc)
+@@ -1421,8 +1421,10 @@ static int snd_pcm_do_stop(struct snd_pcm_substream *substream,
+ 			   snd_pcm_state_t state)
+ {
+ 	if (substream->runtime->trigger_master == substream &&
+-	    snd_pcm_running(substream))
++	    snd_pcm_running(substream)) {
+ 		substream->ops->trigger(substream, SNDRV_PCM_TRIGGER_STOP);
++		substream->runtime->stop_operating = true;
++	}
+ 	return 0; /* unconditonally stop all substreams */
+ }
+ 
+@@ -1435,7 +1437,6 @@ static void snd_pcm_post_stop(struct snd_pcm_substream *substream,
+ 		runtime->status->state = state;
+ 		snd_pcm_timer_notify(substream, SNDRV_TIMER_EVENT_MSTOP);
+ 	}
+-	runtime->stop_operating = true;
+ 	wake_up(&runtime->sleep);
+ 	wake_up(&runtime->tsleep);
+ }
+@@ -1615,6 +1616,7 @@ static int snd_pcm_do_suspend(struct snd_pcm_substream *substream,
+ 	if (! snd_pcm_running(substream))
+ 		return 0;
+ 	substream->ops->trigger(substream, SNDRV_PCM_TRIGGER_SUSPEND);
++	runtime->stop_operating = true;
+ 	return 0; /* suspend unconditionally */
+ }
+ 
+@@ -1691,6 +1693,12 @@ int snd_pcm_suspend_all(struct snd_pcm *pcm)
+ 				return err;
+ 		}
+ 	}
++
++	for (stream = 0; stream < 2; stream++)
++		for (substream = pcm->streams[stream].substream;
++		     substream; substream = substream->next)
++			snd_pcm_sync_stop(substream, false);
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(snd_pcm_suspend_all);
+@@ -1736,7 +1744,6 @@ static void snd_pcm_post_resume(struct snd_pcm_substream *substream,
+ 	snd_pcm_trigger_tstamp(substream);
+ 	runtime->status->state = runtime->status->suspended_state;
+ 	snd_pcm_timer_notify(substream, SNDRV_TIMER_EVENT_MRESUME);
+-	snd_pcm_sync_stop(substream);
+ }
+ 
+ static const struct action_ops snd_pcm_action_resume = {
+@@ -1866,7 +1873,7 @@ static int snd_pcm_do_prepare(struct snd_pcm_substream *substream,
+ 			      snd_pcm_state_t state)
+ {
+ 	int err;
+-	snd_pcm_sync_stop(substream);
++	snd_pcm_sync_stop(substream, true);
+ 	err = substream->ops->prepare(substream);
+ 	if (err < 0)
+ 		return err;
+diff --git a/sound/firewire/fireface/ff-protocol-latter.c b/sound/firewire/fireface/ff-protocol-latter.c
+index 8d3b23778eb26..7ddb7b97f02db 100644
+--- a/sound/firewire/fireface/ff-protocol-latter.c
++++ b/sound/firewire/fireface/ff-protocol-latter.c
+@@ -15,6 +15,61 @@
+ #define LATTER_FETCH_MODE	0xffff00000010ULL
+ #define LATTER_SYNC_STATUS	0x0000801c0000ULL
+ 
++// The content of sync status register differs between models.
++//
++// Fireface UCX:
++//  0xf0000000: (unidentified)
++//  0x0f000000: effective rate of sampling clock
++//  0x00f00000: detected rate of word clock on BNC interface
++//  0x000f0000: detected rate of ADAT or S/PDIF on optical interface
++//  0x0000f000: detected rate of S/PDIF on coaxial interface
++//  0x00000e00: effective source of sampling clock
++//    0x00000e00: Internal
++//    0x00000800: (unidentified)
++//    0x00000600: Word clock on BNC interface
++//    0x00000400: ADAT on optical interface
++//    0x00000200: S/PDIF on coaxial or optical interface
++//  0x00000100: Optical interface is used for ADAT signal
++//  0x00000080: (unidentified)
++//  0x00000040: Synchronized to word clock on BNC interface
++//  0x00000020: Synchronized to ADAT or S/PDIF on optical interface
++//  0x00000010: Synchronized to S/PDIF on coaxial interface
++//  0x00000008: (unidentified)
++//  0x00000004: Lock word clock on BNC interface
++//  0x00000002: Lock ADAT or S/PDIF on optical interface
++//  0x00000001: Lock S/PDIF on coaxial interface
++//
++// Fireface 802 (and perhaps UFX):
++//   0xf0000000: effective rate of sampling clock
++//   0x0f000000: detected rate of ADAT-B on 2nd optical interface
++//   0x00f00000: detected rate of ADAT-A on 1st optical interface
++//   0x000f0000: detected rate of AES/EBU on XLR or coaxial interface
++//   0x0000f000: detected rate of word clock on BNC interface
++//   0x00000e00: effective source of sampling clock
++//     0x00000e00: internal
++//     0x00000800: ADAT-B
++//     0x00000600: ADAT-A
++//     0x00000400: AES/EBU
++//     0x00000200: Word clock
++//   0x00000080: Synchronized to ADAT-B on 2nd optical interface
++//   0x00000040: Synchronized to ADAT-A on 1st optical interface
++//   0x00000020: Synchronized to AES/EBU on XLR or 2nd optical interface
++//   0x00000010: Synchronized to word clock on BNC interface
++//   0x00000008: Lock ADAT-B on 2nd optical interface
++//   0x00000004: Lock ADAT-A on 1st optical interface
++//   0x00000002: Lock AES/EBU on XLR or 2nd optical interface
++//   0x00000001: Lock word clock on BNC interface
++//
++// The pattern for rate bits:
++//   0x00: 32.0 kHz
++//   0x01: 44.1 kHz
++//   0x02: 48.0 kHz
++//   0x04: 64.0 kHz
++//   0x05: 88.2 kHz
++//   0x06: 96.0 kHz
++//   0x08: 128.0 kHz
++//   0x09: 176.4 kHz
++//   0x0a: 192.0 kHz
+ static int parse_clock_bits(u32 data, unsigned int *rate,
+ 			    enum snd_ff_clock_src *src,
+ 			    enum snd_ff_unit_version unit_version)
+@@ -23,35 +78,48 @@ static int parse_clock_bits(u32 data, unsigned int *rate,
+ 		unsigned int rate;
+ 		u32 flag;
+ 	} *rate_entry, rate_entries[] = {
+-		{ 32000,	0x00000000, },
+-		{ 44100,	0x01000000, },
+-		{ 48000,	0x02000000, },
+-		{ 64000,	0x04000000, },
+-		{ 88200,	0x05000000, },
+-		{ 96000,	0x06000000, },
+-		{ 128000,	0x08000000, },
+-		{ 176400,	0x09000000, },
+-		{ 192000,	0x0a000000, },
++		{ 32000,	0x00, },
++		{ 44100,	0x01, },
++		{ 48000,	0x02, },
++		{ 64000,	0x04, },
++		{ 88200,	0x05, },
++		{ 96000,	0x06, },
++		{ 128000,	0x08, },
++		{ 176400,	0x09, },
++		{ 192000,	0x0a, },
+ 	};
+ 	static const struct {
+ 		enum snd_ff_clock_src src;
+ 		u32 flag;
+-	} *clk_entry, clk_entries[] = {
++	} *clk_entry, *clk_entries, ucx_clk_entries[] = {
+ 		{ SND_FF_CLOCK_SRC_SPDIF,	0x00000200, },
+ 		{ SND_FF_CLOCK_SRC_ADAT1,	0x00000400, },
+ 		{ SND_FF_CLOCK_SRC_WORD,	0x00000600, },
+ 		{ SND_FF_CLOCK_SRC_INTERNAL,	0x00000e00, },
++	}, ufx_ff802_clk_entries[] = {
++		{ SND_FF_CLOCK_SRC_WORD,	0x00000200, },
++		{ SND_FF_CLOCK_SRC_SPDIF,	0x00000400, },
++		{ SND_FF_CLOCK_SRC_ADAT1,	0x00000600, },
++		{ SND_FF_CLOCK_SRC_ADAT2,	0x00000800, },
++		{ SND_FF_CLOCK_SRC_INTERNAL,	0x00000e00, },
+ 	};
++	u32 rate_bits;
++	unsigned int clk_entry_count;
+ 	int i;
+ 
+-	if (unit_version != SND_FF_UNIT_VERSION_UCX) {
+-		// e.g. 0x00fe0f20 but expected 0x00eff002.
+-		data = ((data & 0xf0f0f0f0) >> 4) | ((data & 0x0f0f0f0f) << 4);
++	if (unit_version == SND_FF_UNIT_VERSION_UCX) {
++		rate_bits = (data & 0x0f000000) >> 24;
++		clk_entries = ucx_clk_entries;
++		clk_entry_count = ARRAY_SIZE(ucx_clk_entries);
++	} else {
++		rate_bits = (data & 0xf0000000) >> 28;
++		clk_entries = ufx_ff802_clk_entries;
++		clk_entry_count = ARRAY_SIZE(ufx_ff802_clk_entries);
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(rate_entries); ++i) {
+ 		rate_entry = rate_entries + i;
+-		if ((data & 0x0f000000) == rate_entry->flag) {
++		if (rate_bits == rate_entry->flag) {
+ 			*rate = rate_entry->rate;
+ 			break;
+ 		}
+@@ -59,14 +127,14 @@ static int parse_clock_bits(u32 data, unsigned int *rate,
+ 	if (i == ARRAY_SIZE(rate_entries))
+ 		return -EIO;
+ 
+-	for (i = 0; i < ARRAY_SIZE(clk_entries); ++i) {
++	for (i = 0; i < clk_entry_count; ++i) {
+ 		clk_entry = clk_entries + i;
+ 		if ((data & 0x000e00) == clk_entry->flag) {
+ 			*src = clk_entry->src;
+ 			break;
+ 		}
+ 	}
+-	if (i == ARRAY_SIZE(clk_entries))
++	if (i == clk_entry_count)
+ 		return -EIO;
+ 
+ 	return 0;
+@@ -249,16 +317,22 @@ static void latter_dump_status(struct snd_ff *ff, struct snd_info_buffer *buffer
+ 		char *const label;
+ 		u32 locked_mask;
+ 		u32 synced_mask;
+-	} *clk_entry, clk_entries[] = {
++	} *clk_entry, *clk_entries, ucx_clk_entries[] = {
+ 		{ "S/PDIF",	0x00000001, 0x00000010, },
+ 		{ "ADAT",	0x00000002, 0x00000020, },
+ 		{ "WDClk",	0x00000004, 0x00000040, },
++	}, ufx_ff802_clk_entries[] = {
++		{ "WDClk",	0x00000001, 0x00000010, },
++		{ "AES/EBU",	0x00000002, 0x00000020, },
++		{ "ADAT-A",	0x00000004, 0x00000040, },
++		{ "ADAT-B",	0x00000008, 0x00000080, },
+ 	};
+ 	__le32 reg;
+ 	u32 data;
+ 	unsigned int rate;
+ 	enum snd_ff_clock_src src;
+ 	const char *label;
++	unsigned int clk_entry_count;
+ 	int i;
+ 	int err;
+ 
+@@ -270,7 +344,15 @@ static void latter_dump_status(struct snd_ff *ff, struct snd_info_buffer *buffer
+ 
+ 	snd_iprintf(buffer, "External source detection:\n");
+ 
+-	for (i = 0; i < ARRAY_SIZE(clk_entries); ++i) {
++	if (ff->unit_version == SND_FF_UNIT_VERSION_UCX) {
++		clk_entries = ucx_clk_entries;
++		clk_entry_count = ARRAY_SIZE(ucx_clk_entries);
++	} else {
++		clk_entries = ufx_ff802_clk_entries;
++		clk_entry_count = ARRAY_SIZE(ufx_ff802_clk_entries);
++	}
++
++	for (i = 0; i < clk_entry_count; ++i) {
+ 		clk_entry = clk_entries + i;
+ 		snd_iprintf(buffer, "%s: ", clk_entry->label);
+ 		if (data & clk_entry->locked_mask) {
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index d393401db1ec5..145f4ff47d54f 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2481,6 +2481,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	/* CometLake-H */
+ 	{ PCI_DEVICE(0x8086, 0x06C8),
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++	{ PCI_DEVICE(0x8086, 0xf1c8),
++	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+ 	/* CometLake-S */
+ 	{ PCI_DEVICE(0x8086, 0xa3f0),
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index dc1ab4fc93a5b..c67d5915ce243 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2133,7 +2133,6 @@ static int hdmi_pcm_close(struct hda_pcm_stream *hinfo,
+ 			goto unlock;
+ 		}
+ 		per_cvt = get_cvt(spec, cvt_idx);
+-		snd_BUG_ON(!per_cvt->assigned);
+ 		per_cvt->assigned = 0;
+ 		hinfo->nid = 0;
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 290645516313c..1927605f0f7ed 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -1905,6 +1905,7 @@ enum {
+ 	ALC889_FIXUP_FRONT_HP_NO_PRESENCE,
+ 	ALC889_FIXUP_VAIO_TT,
+ 	ALC888_FIXUP_EEE1601,
++	ALC886_FIXUP_EAPD,
+ 	ALC882_FIXUP_EAPD,
+ 	ALC883_FIXUP_EAPD,
+ 	ALC883_FIXUP_ACER_EAPD,
+@@ -2238,6 +2239,15 @@ static const struct hda_fixup alc882_fixups[] = {
+ 			{ }
+ 		}
+ 	},
++	[ALC886_FIXUP_EAPD] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			/* change to EAPD mode */
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x07 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0068 },
++			{ }
++		}
++	},
+ 	[ALC882_FIXUP_EAPD] = {
+ 		.type = HDA_FIXUP_VERBS,
+ 		.v.verbs = (const struct hda_verb[]) {
+@@ -2510,6 +2520,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x106b, 0x4a00, "Macbook 5,2", ALC889_FIXUP_MBA11_VREF),
+ 
+ 	SND_PCI_QUIRK(0x1071, 0x8258, "Evesham Voyaeger", ALC882_FIXUP_EAPD),
++	SND_PCI_QUIRK(0x13fe, 0x1009, "Advantech MIT-W101", ALC886_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_CLEVO_P950),
+@@ -4280,6 +4291,28 @@ static void alc280_fixup_hp_gpio4(struct hda_codec *codec,
+ 	}
+ }
+ 
++/* HP Spectre x360 14 model needs a unique workaround for enabling the amp;
++ * it needs to toggle the GPIO0 once on and off at each time (bko#210633)
++ */
++static void alc245_fixup_hp_x360_amp(struct hda_codec *codec,
++				     const struct hda_fixup *fix, int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		spec->gpio_mask |= 0x01;
++		spec->gpio_dir |= 0x01;
++		break;
++	case HDA_FIXUP_ACT_INIT:
++		/* need to toggle GPIO to enable the amp */
++		alc_update_gpio_data(codec, 0x01, true);
++		msleep(100);
++		alc_update_gpio_data(codec, 0x01, false);
++		break;
++	}
++}
++
+ static void alc_update_coef_led(struct hda_codec *codec,
+ 				struct alc_coef_led *led,
+ 				bool polarity, bool on)
+@@ -6266,6 +6299,7 @@ enum {
+ 	ALC280_FIXUP_HP_DOCK_PINS,
+ 	ALC269_FIXUP_HP_DOCK_GPIO_MIC1_LED,
+ 	ALC280_FIXUP_HP_9480M,
++	ALC245_FIXUP_HP_X360_AMP,
+ 	ALC288_FIXUP_DELL_HEADSET_MODE,
+ 	ALC288_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 	ALC288_FIXUP_DELL_XPS_13,
+@@ -6971,6 +7005,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc280_fixup_hp_9480m,
+ 	},
++	[ALC245_FIXUP_HP_X360_AMP] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc245_fixup_hp_x360_amp,
++	},
+ 	[ALC288_FIXUP_DELL_HEADSET_MODE] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc_fixup_headset_mode_dell_alc288,
+@@ -7985,6 +8023,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -8357,6 +8396,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc298-samsung-headphone"},
+ 	{.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
+ 	{.id = ALC274_FIXUP_HP_MIC, .name = "alc274-hp-mic-detect"},
++	{.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"},
+ 	{}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/soc/codecs/cpcap.c b/sound/soc/codecs/cpcap.c
+index f046987ee4cdb..c0425e3707d9c 100644
+--- a/sound/soc/codecs/cpcap.c
++++ b/sound/soc/codecs/cpcap.c
+@@ -1264,12 +1264,12 @@ static int cpcap_voice_hw_params(struct snd_pcm_substream *substream,
+ 
+ 	if (direction == SNDRV_PCM_STREAM_CAPTURE) {
+ 		mask = 0x0000;
+-		mask |= CPCAP_BIT_MIC1_RX_TIMESLOT0;
+-		mask |= CPCAP_BIT_MIC1_RX_TIMESLOT1;
+-		mask |= CPCAP_BIT_MIC1_RX_TIMESLOT2;
+-		mask |= CPCAP_BIT_MIC2_TIMESLOT0;
+-		mask |= CPCAP_BIT_MIC2_TIMESLOT1;
+-		mask |= CPCAP_BIT_MIC2_TIMESLOT2;
++		mask |= BIT(CPCAP_BIT_MIC1_RX_TIMESLOT0);
++		mask |= BIT(CPCAP_BIT_MIC1_RX_TIMESLOT1);
++		mask |= BIT(CPCAP_BIT_MIC1_RX_TIMESLOT2);
++		mask |= BIT(CPCAP_BIT_MIC2_TIMESLOT0);
++		mask |= BIT(CPCAP_BIT_MIC2_TIMESLOT1);
++		mask |= BIT(CPCAP_BIT_MIC2_TIMESLOT2);
+ 		val = 0x0000;
+ 		if (channels >= 2)
+ 			val = BIT(CPCAP_BIT_MIC1_RX_TIMESLOT0);
+diff --git a/sound/soc/codecs/cs42l56.c b/sound/soc/codecs/cs42l56.c
+index 97024a6ac96d7..06dcfae9dfe71 100644
+--- a/sound/soc/codecs/cs42l56.c
++++ b/sound/soc/codecs/cs42l56.c
+@@ -1249,6 +1249,7 @@ static int cs42l56_i2c_probe(struct i2c_client *i2c_client,
+ 		dev_err(&i2c_client->dev,
+ 			"CS42L56 Device ID (%X). Expected %X\n",
+ 			devid, CS42L56_DEVID);
++		ret = -EINVAL;
+ 		goto err_enable;
+ 	}
+ 	alpha_rev = reg & CS42L56_AREV_MASK;
+@@ -1306,7 +1307,7 @@ static int cs42l56_i2c_probe(struct i2c_client *i2c_client,
+ 	ret =  devm_snd_soc_register_component(&i2c_client->dev,
+ 			&soc_component_dev_cs42l56, &cs42l56_dai, 1);
+ 	if (ret < 0)
+-		return ret;
++		goto err_enable;
+ 
+ 	return 0;
+ 
+diff --git a/sound/soc/codecs/rt5682-i2c.c b/sound/soc/codecs/rt5682-i2c.c
+index 6b4e0eb30c89a..7e652843c57d9 100644
+--- a/sound/soc/codecs/rt5682-i2c.c
++++ b/sound/soc/codecs/rt5682-i2c.c
+@@ -268,6 +268,9 @@ static void rt5682_i2c_shutdown(struct i2c_client *client)
+ {
+ 	struct rt5682_priv *rt5682 = i2c_get_clientdata(client);
+ 
++	cancel_delayed_work_sync(&rt5682->jack_detect_work);
++	cancel_delayed_work_sync(&rt5682->jd_check_work);
++
+ 	rt5682_reset(rt5682);
+ }
+ 
+diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c
+index 4530b74f5921b..db87e07b11c94 100644
+--- a/sound/soc/codecs/wsa881x.c
++++ b/sound/soc/codecs/wsa881x.c
+@@ -640,6 +640,7 @@ static struct regmap_config wsa881x_regmap_config = {
+ 	.val_bits = 8,
+ 	.cache_type = REGCACHE_RBTREE,
+ 	.reg_defaults = wsa881x_defaults,
++	.max_register = WSA881X_SPKR_STATUS3,
+ 	.num_reg_defaults = ARRAY_SIZE(wsa881x_defaults),
+ 	.volatile_reg = wsa881x_volatile_register,
+ 	.readable_reg = wsa881x_readable_register,
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index 6cada4c1e283b..ab31045cfc952 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -172,16 +172,15 @@ int asoc_simple_parse_clk(struct device *dev,
+ 	 *  or device's module clock.
+ 	 */
+ 	clk = devm_get_clk_from_child(dev, node, NULL);
+-	if (!IS_ERR(clk)) {
+-		simple_dai->sysclk = clk_get_rate(clk);
++	if (IS_ERR(clk))
++		clk = devm_get_clk_from_child(dev, dlc->of_node, NULL);
+ 
++	if (!IS_ERR(clk)) {
+ 		simple_dai->clk = clk;
+-	} else if (!of_property_read_u32(node, "system-clock-frequency", &val)) {
++		simple_dai->sysclk = clk_get_rate(clk);
++	} else if (!of_property_read_u32(node, "system-clock-frequency",
++					 &val)) {
+ 		simple_dai->sysclk = val;
+-	} else {
+-		clk = devm_get_clk_from_child(dev, dlc->of_node, NULL);
+-		if (!IS_ERR(clk))
+-			simple_dai->sysclk = clk_get_rate(clk);
+ 	}
+ 
+ 	if (of_property_read_bool(node, "system-clock-direction-out"))
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index a8d43c87cb5a2..07e72ca1dfbc9 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -54,7 +54,8 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A3E")
+ 		},
+-		.driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
++		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
++					SOF_RT711_JD_SRC_JD2 |
+ 					SOF_RT715_DAI_ID_FIX),
+ 	},
+ 	{
+@@ -63,7 +64,8 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A5E")
+ 		},
+-		.driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
++		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
++					SOF_RT711_JD_SRC_JD2 |
+ 					SOF_RT715_DAI_ID_FIX |
+ 					SOF_SDW_FOUR_SPK),
+ 	},
+diff --git a/sound/soc/qcom/lpass-apq8016.c b/sound/soc/qcom/lpass-apq8016.c
+index 0aedb3a0a798a..7c0e774ad0625 100644
+--- a/sound/soc/qcom/lpass-apq8016.c
++++ b/sound/soc/qcom/lpass-apq8016.c
+@@ -250,7 +250,7 @@ static struct lpass_variant apq8016_data = {
+ 	.micmode		= REG_FIELD_ID(0x1000, 4, 7, 4, 0x1000),
+ 	.micmono		= REG_FIELD_ID(0x1000, 3, 3, 4, 0x1000),
+ 	.wssrc			= REG_FIELD_ID(0x1000, 2, 2, 4, 0x1000),
+-	.bitwidth		= REG_FIELD_ID(0x1000, 0, 0, 4, 0x1000),
++	.bitwidth		= REG_FIELD_ID(0x1000, 0, 1, 4, 0x1000),
+ 
+ 	.rdma_dyncclk		= REG_FIELD_ID(0x8400, 12, 12, 2, 0x1000),
+ 	.rdma_bursten		= REG_FIELD_ID(0x8400, 11, 11, 2, 0x1000),
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index 46bb24afeacf0..a33dbd6de8a06 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -286,16 +286,12 @@ static int lpass_cpu_daiops_trigger(struct snd_pcm_substream *substream,
+ 			dev_err(dai->dev, "error writing to i2sctl reg: %d\n",
+ 				ret);
+ 
+-		if (drvdata->bit_clk_state[id] == LPAIF_BIT_CLK_DISABLE) {
+-			ret = clk_enable(drvdata->mi2s_bit_clk[id]);
+-			if (ret) {
+-				dev_err(dai->dev, "error in enabling mi2s bit clk: %d\n", ret);
+-				clk_disable(drvdata->mi2s_osr_clk[id]);
+-				return ret;
+-			}
+-			drvdata->bit_clk_state[id] = LPAIF_BIT_CLK_ENABLE;
++		ret = clk_enable(drvdata->mi2s_bit_clk[id]);
++		if (ret) {
++			dev_err(dai->dev, "error in enabling mi2s bit clk: %d\n", ret);
++			clk_disable(drvdata->mi2s_osr_clk[id]);
++			return ret;
+ 		}
+-
+ 		break;
+ 	case SNDRV_PCM_TRIGGER_STOP:
+ 	case SNDRV_PCM_TRIGGER_SUSPEND:
+@@ -310,10 +306,9 @@ static int lpass_cpu_daiops_trigger(struct snd_pcm_substream *substream,
+ 		if (ret)
+ 			dev_err(dai->dev, "error writing to i2sctl reg: %d\n",
+ 				ret);
+-		if (drvdata->bit_clk_state[id] == LPAIF_BIT_CLK_ENABLE) {
+-			clk_disable(drvdata->mi2s_bit_clk[dai->driver->id]);
+-			drvdata->bit_clk_state[id] = LPAIF_BIT_CLK_DISABLE;
+-		}
++
++		clk_disable(drvdata->mi2s_bit_clk[dai->driver->id]);
++
+ 		break;
+ 	}
+ 
+@@ -599,7 +594,7 @@ static bool lpass_hdmi_regmap_writeable(struct device *dev, unsigned int reg)
+ 			return true;
+ 	}
+ 
+-	for (i = 0; i < v->rdma_channels; ++i) {
++	for (i = 0; i < v->hdmi_rdma_channels; ++i) {
+ 		if (reg == LPAIF_HDMI_RDMACTL_REG(v, i))
+ 			return true;
+ 		if (reg == LPAIF_HDMI_RDMABASE_REG(v, i))
+@@ -645,7 +640,7 @@ static bool lpass_hdmi_regmap_readable(struct device *dev, unsigned int reg)
+ 	if (reg == LPASS_HDMITX_APP_IRQSTAT_REG(v))
+ 		return true;
+ 
+-	for (i = 0; i < v->rdma_channels; ++i) {
++	for (i = 0; i < v->hdmi_rdma_channels; ++i) {
+ 		if (reg == LPAIF_HDMI_RDMACTL_REG(v, i))
+ 			return true;
+ 		if (reg == LPAIF_HDMI_RDMABASE_REG(v, i))
+@@ -672,7 +667,7 @@ static bool lpass_hdmi_regmap_volatile(struct device *dev, unsigned int reg)
+ 	if (reg == LPASS_HDMI_TX_LEGACY_ADDR(v))
+ 		return true;
+ 
+-	for (i = 0; i < v->rdma_channels; ++i) {
++	for (i = 0; i < v->hdmi_rdma_channels; ++i) {
+ 		if (reg == LPAIF_HDMI_RDMACURR_REG(v, i))
+ 			return true;
+ 	}
+@@ -822,7 +817,7 @@ int asoc_qcom_lpass_cpu_platform_probe(struct platform_device *pdev)
+ 		}
+ 
+ 		lpass_hdmi_regmap_config.max_register = LPAIF_HDMI_RDMAPER_REG(variant,
+-					variant->hdmi_rdma_channels);
++					variant->hdmi_rdma_channels - 1);
+ 		drvdata->hdmiif_map = devm_regmap_init_mmio(dev, drvdata->hdmiif,
+ 					&lpass_hdmi_regmap_config);
+ 		if (IS_ERR(drvdata->hdmiif_map)) {
+@@ -866,7 +861,6 @@ int asoc_qcom_lpass_cpu_platform_probe(struct platform_device *pdev)
+ 				PTR_ERR(drvdata->mi2s_bit_clk[dai_id]));
+ 			return PTR_ERR(drvdata->mi2s_bit_clk[dai_id]);
+ 		}
+-		drvdata->bit_clk_state[dai_id] = LPAIF_BIT_CLK_DISABLE;
+ 	}
+ 
+ 	/* Allocation for i2sctl regmap fields */
+diff --git a/sound/soc/qcom/lpass-lpaif-reg.h b/sound/soc/qcom/lpass-lpaif-reg.h
+index baf72f124ea9b..2eb03ad9b7c74 100644
+--- a/sound/soc/qcom/lpass-lpaif-reg.h
++++ b/sound/soc/qcom/lpass-lpaif-reg.h
+@@ -60,9 +60,6 @@
+ #define LPAIF_I2SCTL_BITWIDTH_24	1
+ #define LPAIF_I2SCTL_BITWIDTH_32	2
+ 
+-#define LPAIF_BIT_CLK_DISABLE		0
+-#define LPAIF_BIT_CLK_ENABLE		1
+-
+ #define LPAIF_I2SCTL_RESET_STATE	0x003C0004
+ #define LPAIF_DMACTL_RESET_STATE	0x00200000
+ 
+diff --git a/sound/soc/qcom/lpass.h b/sound/soc/qcom/lpass.h
+index 868c1c8dbd455..1d926dd5f5900 100644
+--- a/sound/soc/qcom/lpass.h
++++ b/sound/soc/qcom/lpass.h
+@@ -68,7 +68,6 @@ struct lpass_data {
+ 	unsigned int mi2s_playback_sd_mode[LPASS_MAX_MI2S_PORTS];
+ 	unsigned int mi2s_capture_sd_mode[LPASS_MAX_MI2S_PORTS];
+ 	int hdmi_port_enable;
+-	int bit_clk_state[LPASS_MAX_MI2S_PORTS];
+ 
+ 	/* low-power audio interface (LPAIF) registers */
+ 	void __iomem *lpaif;
+diff --git a/sound/soc/qcom/qdsp6/q6asm-dai.c b/sound/soc/qcom/qdsp6/q6asm-dai.c
+index c9ac9c1d26c47..9766725c29166 100644
+--- a/sound/soc/qcom/qdsp6/q6asm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6asm-dai.c
+@@ -1233,6 +1233,25 @@ static void q6asm_dai_pcm_free(struct snd_soc_component *component,
+ 	}
+ }
+ 
++static const struct snd_soc_dapm_widget q6asm_dapm_widgets[] = {
++	SND_SOC_DAPM_AIF_IN("MM_DL1", "MultiMedia1 Playback", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_IN("MM_DL2", "MultiMedia2 Playback", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_IN("MM_DL3", "MultiMedia3 Playback", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_IN("MM_DL4", "MultiMedia4 Playback", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_IN("MM_DL5", "MultiMedia5 Playback", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_IN("MM_DL6", "MultiMedia6 Playback", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_IN("MM_DL7", "MultiMedia7 Playback", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_IN("MM_DL8", "MultiMedia8 Playback", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_OUT("MM_UL1", "MultiMedia1 Capture", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_OUT("MM_UL2", "MultiMedia2 Capture", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_OUT("MM_UL3", "MultiMedia3 Capture", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_OUT("MM_UL4", "MultiMedia4 Capture", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_OUT("MM_UL5", "MultiMedia5 Capture", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_OUT("MM_UL6", "MultiMedia6 Capture", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_OUT("MM_UL7", "MultiMedia7 Capture", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_OUT("MM_UL8", "MultiMedia8 Capture", 0, SND_SOC_NOPM, 0, 0),
++};
++
+ static const struct snd_soc_component_driver q6asm_fe_dai_component = {
+ 	.name		= DRV_NAME,
+ 	.open		= q6asm_dai_open,
+@@ -1245,6 +1264,8 @@ static const struct snd_soc_component_driver q6asm_fe_dai_component = {
+ 	.pcm_construct	= q6asm_dai_pcm_new,
+ 	.pcm_destruct	= q6asm_dai_pcm_free,
+ 	.compress_ops	= &q6asm_dai_compress_ops,
++	.dapm_widgets	= q6asm_dapm_widgets,
++	.num_dapm_widgets = ARRAY_SIZE(q6asm_dapm_widgets),
+ };
+ 
+ static struct snd_soc_dai_driver q6asm_fe_dais_template[] = {
+diff --git a/sound/soc/qcom/qdsp6/q6routing.c b/sound/soc/qcom/qdsp6/q6routing.c
+index 53185e26fea17..0a6b9433f6acf 100644
+--- a/sound/soc/qcom/qdsp6/q6routing.c
++++ b/sound/soc/qcom/qdsp6/q6routing.c
+@@ -713,24 +713,6 @@ static const struct snd_kcontrol_new mmul8_mixer_controls[] = {
+ 	Q6ROUTING_TX_MIXERS(MSM_FRONTEND_DAI_MULTIMEDIA8) };
+ 
+ static const struct snd_soc_dapm_widget msm_qdsp6_widgets[] = {
+-	/* Frontend AIF */
+-	SND_SOC_DAPM_AIF_IN("MM_DL1", "MultiMedia1 Playback", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_IN("MM_DL2", "MultiMedia2 Playback", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_IN("MM_DL3", "MultiMedia3 Playback", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_IN("MM_DL4", "MultiMedia4 Playback", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_IN("MM_DL5", "MultiMedia5 Playback", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_IN("MM_DL6", "MultiMedia6 Playback", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_IN("MM_DL7", "MultiMedia7 Playback", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_IN("MM_DL8", "MultiMedia8 Playback", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_OUT("MM_UL1", "MultiMedia1 Capture", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_OUT("MM_UL2", "MultiMedia2 Capture", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_OUT("MM_UL3", "MultiMedia3 Capture", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_OUT("MM_UL4", "MultiMedia4 Capture", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_OUT("MM_UL5", "MultiMedia5 Capture", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_OUT("MM_UL6", "MultiMedia6 Capture", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_OUT("MM_UL7", "MultiMedia7 Capture", 0, 0, 0, 0),
+-	SND_SOC_DAPM_AIF_OUT("MM_UL8", "MultiMedia8 Capture", 0, 0, 0, 0),
+-
+ 	/* Mixer definitions */
+ 	SND_SOC_DAPM_MIXER("HDMI Mixer", SND_SOC_NOPM, 0, 0,
+ 			   hdmi_mixer_controls,
+diff --git a/sound/soc/sh/siu.h b/sound/soc/sh/siu.h
+index 6201840f1bc05..a675c36fc9d95 100644
+--- a/sound/soc/sh/siu.h
++++ b/sound/soc/sh/siu.h
+@@ -169,7 +169,7 @@ static inline u32 siu_read32(u32 __iomem *addr)
+ #define SIU_BRGBSEL	(0x108 / sizeof(u32))
+ #define SIU_BRRB	(0x10c / sizeof(u32))
+ 
+-extern struct snd_soc_component_driver siu_component;
++extern const struct snd_soc_component_driver siu_component;
+ extern struct siu_info *siu_i2s_data;
+ 
+ int siu_init_port(int port, struct siu_port **port_info, struct snd_card *card);
+diff --git a/sound/soc/sh/siu_pcm.c b/sound/soc/sh/siu_pcm.c
+index 45c4320976ab9..4785886df4f03 100644
+--- a/sound/soc/sh/siu_pcm.c
++++ b/sound/soc/sh/siu_pcm.c
+@@ -543,7 +543,7 @@ static void siu_pcm_free(struct snd_soc_component *component,
+ 	dev_dbg(pcm->card->dev, "%s\n", __func__);
+ }
+ 
+-struct const snd_soc_component_driver siu_component = {
++const struct snd_soc_component_driver siu_component = {
+ 	.name		= DRV_NAME,
+ 	.open		= siu_pcm_open,
+ 	.close		= siu_pcm_close,
+diff --git a/sound/soc/sof/debug.c b/sound/soc/sof/debug.c
+index 9419a99bab536..3ef51b2210237 100644
+--- a/sound/soc/sof/debug.c
++++ b/sound/soc/sof/debug.c
+@@ -350,7 +350,7 @@ static ssize_t sof_dfsentry_write(struct file *file, const char __user *buffer,
+ 	char *string;
+ 	int ret;
+ 
+-	string = kzalloc(count, GFP_KERNEL);
++	string = kzalloc(count+1, GFP_KERNEL);
+ 	if (!string)
+ 		return -ENOMEM;
+ 
+diff --git a/sound/soc/sof/intel/hda-dsp.c b/sound/soc/sof/intel/hda-dsp.c
+index 2dbc1273e56bd..cd324f3d11d17 100644
+--- a/sound/soc/sof/intel/hda-dsp.c
++++ b/sound/soc/sof/intel/hda-dsp.c
+@@ -801,11 +801,15 @@ int hda_dsp_runtime_idle(struct snd_sof_dev *sdev)
+ 
+ int hda_dsp_runtime_suspend(struct snd_sof_dev *sdev)
+ {
++	struct sof_intel_hda_dev *hda = sdev->pdata->hw_pdata;
+ 	const struct sof_dsp_power_state target_state = {
+ 		.state = SOF_DSP_PM_D3,
+ 	};
+ 	int ret;
+ 
++	/* cancel any attempt for DSP D0I3 */
++	cancel_delayed_work_sync(&hda->d0i3_work);
++
+ 	/* stop hda controller and power dsp off */
+ 	ret = hda_suspend(sdev, true);
+ 	if (ret < 0)
+diff --git a/sound/soc/sof/sof-pci-dev.c b/sound/soc/sof/sof-pci-dev.c
+index 8f62e3487dc18..75657a25dbc05 100644
+--- a/sound/soc/sof/sof-pci-dev.c
++++ b/sound/soc/sof/sof-pci-dev.c
+@@ -65,6 +65,13 @@ static const struct dmi_system_id community_key_platforms[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "UP-APL01"),
+ 		}
+ 	},
++	{
++		.ident = "Up Extreme",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "AAEON"),
++			DMI_MATCH(DMI_BOARD_NAME, "UP-WHL01"),
++		}
++	},
+ 	{
+ 		.ident = "Google Chromebooks",
+ 		.matches = {
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index a860303cc5222..1b08f52ef86f6 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -1861,7 +1861,7 @@ void snd_usb_preallocate_buffer(struct snd_usb_substream *subs)
+ {
+ 	struct snd_pcm *pcm = subs->stream->pcm;
+ 	struct snd_pcm_substream *s = pcm->streams[subs->direction].substream;
+-	struct device *dev = subs->dev->bus->controller;
++	struct device *dev = subs->dev->bus->sysdev;
+ 
+ 	if (snd_usb_use_vmalloc)
+ 		snd_pcm_set_managed_buffer(s, SNDRV_DMA_TYPE_VMALLOC,
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index ad165e6e74bc0..b954db52bb807 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -865,24 +865,24 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
+ 		if (btf_is_ptr(mtype)) {
+ 			struct bpf_program *prog;
+ 
+-			mtype = skip_mods_and_typedefs(btf, mtype->type, &mtype_id);
++			prog = st_ops->progs[i];
++			if (!prog)
++				continue;
++
+ 			kern_mtype = skip_mods_and_typedefs(kern_btf,
+ 							    kern_mtype->type,
+ 							    &kern_mtype_id);
+-			if (!btf_is_func_proto(mtype) ||
+-			    !btf_is_func_proto(kern_mtype)) {
+-				pr_warn("struct_ops init_kern %s: non func ptr %s is not supported\n",
++
++			/* mtype->type must be a func_proto which was
++			 * guaranteed in bpf_object__collect_st_ops_relos(),
++			 * so only check kern_mtype for func_proto here.
++			 */
++			if (!btf_is_func_proto(kern_mtype)) {
++				pr_warn("struct_ops init_kern %s: kernel member %s is not a func ptr\n",
+ 					map->name, mname);
+ 				return -ENOTSUP;
+ 			}
+ 
+-			prog = st_ops->progs[i];
+-			if (!prog) {
+-				pr_debug("struct_ops init_kern %s: func ptr %s is not set\n",
+-					 map->name, mname);
+-				continue;
+-			}
+-
+ 			prog->attach_btf_id = kern_type_id;
+ 			prog->expected_attach_type = kern_member_idx;
+ 
+diff --git a/tools/objtool/arch/x86/special.c b/tools/objtool/arch/x86/special.c
+index fd4af88c0ea52..151b13d0a2676 100644
+--- a/tools/objtool/arch/x86/special.c
++++ b/tools/objtool/arch/x86/special.c
+@@ -48,7 +48,7 @@ bool arch_support_alt_relocation(struct special_alt *special_alt,
+ 	 * replacement group.
+ 	 */
+ 	return insn->offset == special_alt->new_off &&
+-	       (insn->type == INSN_CALL || is_static_jump(insn));
++	       (insn->type == INSN_CALL || is_jump(insn));
+ }
+ 
+ /*
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 4bd30315eb62b..dc24aac08edd6 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -789,7 +789,8 @@ static int add_jump_destinations(struct objtool_file *file)
+ 			dest_sec = reloc->sym->sec;
+ 			dest_off = reloc->sym->sym.st_value +
+ 				   arch_dest_reloc_offset(reloc->addend);
+-		} else if (strstr(reloc->sym->name, "_indirect_thunk_")) {
++		} else if (!strncmp(reloc->sym->name, "__x86_indirect_thunk_", 21) ||
++			   !strncmp(reloc->sym->name, "__x86_retpoline_", 16)) {
+ 			/*
+ 			 * Retpoline jumps are really dynamic jumps in
+ 			 * disguise, so convert them accordingly.
+@@ -849,8 +850,8 @@ static int add_jump_destinations(struct objtool_file *file)
+ 			 * case where the parent function's only reference to a
+ 			 * subfunction is through a jump table.
+ 			 */
+-			if (!strstr(insn->func->name, ".cold.") &&
+-			    strstr(insn->jump_dest->func->name, ".cold.")) {
++			if (!strstr(insn->func->name, ".cold") &&
++			    strstr(insn->jump_dest->func->name, ".cold")) {
+ 				insn->func->cfunc = insn->jump_dest->func;
+ 				insn->jump_dest->func->pfunc = insn->func;
+ 
+@@ -2592,15 +2593,19 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ 			break;
+ 
+ 		case INSN_STD:
+-			if (state.df)
++			if (state.df) {
+ 				WARN_FUNC("recursive STD", sec, insn->offset);
++				return 1;
++			}
+ 
+ 			state.df = true;
+ 			break;
+ 
+ 		case INSN_CLD:
+-			if (!state.df && func)
++			if (!state.df && func) {
+ 				WARN_FUNC("redundant CLD", sec, insn->offset);
++				return 1;
++			}
+ 
+ 			state.df = false;
+ 			break;
+diff --git a/tools/objtool/check.h b/tools/objtool/check.h
+index 5ec00a4b891b6..2804848e628e3 100644
+--- a/tools/objtool/check.h
++++ b/tools/objtool/check.h
+@@ -54,6 +54,17 @@ static inline bool is_static_jump(struct instruction *insn)
+ 	       insn->type == INSN_JUMP_UNCONDITIONAL;
+ }
+ 
++static inline bool is_dynamic_jump(struct instruction *insn)
++{
++	return insn->type == INSN_JUMP_DYNAMIC ||
++	       insn->type == INSN_JUMP_DYNAMIC_CONDITIONAL;
++}
++
++static inline bool is_jump(struct instruction *insn)
++{
++	return is_static_jump(insn) || is_dynamic_jump(insn);
++}
++
+ struct instruction *find_insn(struct objtool_file *file,
+ 			      struct section *sec, unsigned long offset);
+ 
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index adf311d15d3d2..e5c938d538ee5 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -1666,7 +1666,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
+ 		status = -1;
+ 		goto out_delete_session;
+ 	}
+-	err = evlist__add_pollfd(rec->evlist, done_fd);
++	err = evlist__add_wakeup_eventfd(rec->evlist, done_fd);
+ 	if (err < 0) {
+ 		pr_err("Failed to add wakeup eventfd to poll list\n");
+ 		status = err;
+diff --git a/tools/perf/pmu-events/arch/arm64/ampere/emag/cache.json b/tools/perf/pmu-events/arch/arm64/ampere/emag/cache.json
+index 40010a8724b3a..ce6e7e7960579 100644
+--- a/tools/perf/pmu-events/arch/arm64/ampere/emag/cache.json
++++ b/tools/perf/pmu-events/arch/arm64/ampere/emag/cache.json
+@@ -114,7 +114,7 @@
+         "PublicDescription": "Level 2 access to instruciton TLB that caused a page table walk. This event counts on any instruciton access which causes L2I_TLB_REFILL to count",
+         "EventCode": "0x35",
+         "EventName": "L2I_TLB_ACCESS",
+-        "BriefDescription": "L2D TLB access"
++        "BriefDescription": "L2I TLB access"
+     },
+     {
+         "PublicDescription": "Branch target buffer misprediction",
+diff --git a/tools/perf/tests/sample-parsing.c b/tools/perf/tests/sample-parsing.c
+index a0bdaf390ac8e..33a58976222d3 100644
+--- a/tools/perf/tests/sample-parsing.c
++++ b/tools/perf/tests/sample-parsing.c
+@@ -193,7 +193,7 @@ static int do_test(u64 sample_type, u64 sample_regs, u64 read_format)
+ 		.data = {1, -1ULL, 211, 212, 213},
+ 	};
+ 	u64 regs[64];
+-	const u64 raw_data[] = {0x123456780a0b0c0dULL, 0x1102030405060708ULL};
++	const u32 raw_data[] = {0x12345678, 0x0a0b0c0d, 0x11020304, 0x05060708, 0 };
+ 	const u64 data[] = {0x2211443366558877ULL, 0, 0xaabbccddeeff4321ULL};
+ 	const u64 aux_data[] = {0xa55a, 0, 0xeeddee, 0x0282028202820282};
+ 	struct perf_sample sample = {
+diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
+index 05616d4138a96..7e440fa90c938 100644
+--- a/tools/perf/util/event.c
++++ b/tools/perf/util/event.c
+@@ -673,6 +673,8 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
+ 		}
+ 
+ 		al->sym = map__find_symbol(al->map, al->addr);
++	} else if (symbol_conf.dso_list) {
++		al->filtered |= (1 << HIST_FILTER__DSO);
+ 	}
+ 
+ 	if (symbol_conf.sym_list) {
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index 8bdf3d2c907cb..98ae432470cdd 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -508,6 +508,14 @@ int evlist__filter_pollfd(struct evlist *evlist, short revents_and_mask)
+ 	return perf_evlist__filter_pollfd(&evlist->core, revents_and_mask);
+ }
+ 
++#ifdef HAVE_EVENTFD_SUPPORT
++int evlist__add_wakeup_eventfd(struct evlist *evlist, int fd)
++{
++	return perf_evlist__add_pollfd(&evlist->core, fd, NULL, POLLIN,
++				       fdarray_flag__nonfilterable);
++}
++#endif
++
+ int evlist__poll(struct evlist *evlist, int timeout)
+ {
+ 	return perf_evlist__poll(&evlist->core, timeout);
+diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
+index e1a450322bc5b..9298fce53ea31 100644
+--- a/tools/perf/util/evlist.h
++++ b/tools/perf/util/evlist.h
+@@ -160,6 +160,10 @@ perf_evlist__find_tracepoint_by_name(struct evlist *evlist,
+ int evlist__add_pollfd(struct evlist *evlist, int fd);
+ int evlist__filter_pollfd(struct evlist *evlist, short revents_and_mask);
+ 
++#ifdef HAVE_EVENTFD_SUPPORT
++int evlist__add_wakeup_eventfd(struct evlist *evlist, int fd);
++#endif
++
+ int evlist__poll(struct evlist *evlist, int timeout);
+ 
+ struct evsel *perf_evlist__id2evsel(struct evlist *evlist, u64 id);
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index 697513f351549..197eb58a39cb7 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -24,6 +24,13 @@
+ #include "intel-pt-decoder.h"
+ #include "intel-pt-log.h"
+ 
++#define BITULL(x) (1ULL << (x))
++
++/* IA32_RTIT_CTL MSR bits */
++#define INTEL_PT_CYC_ENABLE		BITULL(1)
++#define INTEL_PT_CYC_THRESHOLD		(BITULL(22) | BITULL(21) | BITULL(20) | BITULL(19))
++#define INTEL_PT_CYC_THRESHOLD_SHIFT	19
++
+ #define INTEL_PT_BLK_SIZE 1024
+ 
+ #define BIT63 (((uint64_t)1 << 63))
+@@ -167,6 +174,8 @@ struct intel_pt_decoder {
+ 	uint64_t sample_tot_cyc_cnt;
+ 	uint64_t base_cyc_cnt;
+ 	uint64_t cyc_cnt_timestamp;
++	uint64_t ctl;
++	uint64_t cyc_threshold;
+ 	double tsc_to_cyc;
+ 	bool continuous_period;
+ 	bool overflow;
+@@ -204,6 +213,14 @@ static uint64_t intel_pt_lower_power_of_2(uint64_t x)
+ 	return x << i;
+ }
+ 
++static uint64_t intel_pt_cyc_threshold(uint64_t ctl)
++{
++	if (!(ctl & INTEL_PT_CYC_ENABLE))
++		return 0;
++
++	return (ctl & INTEL_PT_CYC_THRESHOLD) >> INTEL_PT_CYC_THRESHOLD_SHIFT;
++}
++
+ static void intel_pt_setup_period(struct intel_pt_decoder *decoder)
+ {
+ 	if (decoder->period_type == INTEL_PT_PERIOD_TICKS) {
+@@ -245,12 +262,15 @@ struct intel_pt_decoder *intel_pt_decoder_new(struct intel_pt_params *params)
+ 
+ 	decoder->flags              = params->flags;
+ 
++	decoder->ctl                = params->ctl;
+ 	decoder->period             = params->period;
+ 	decoder->period_type        = params->period_type;
+ 
+ 	decoder->max_non_turbo_ratio    = params->max_non_turbo_ratio;
+ 	decoder->max_non_turbo_ratio_fp = params->max_non_turbo_ratio;
+ 
++	decoder->cyc_threshold = intel_pt_cyc_threshold(decoder->ctl);
++
+ 	intel_pt_setup_period(decoder);
+ 
+ 	decoder->mtc_shift = params->mtc_period;
+@@ -1761,6 +1781,9 @@ static int intel_pt_walk_psbend(struct intel_pt_decoder *decoder)
+ 			break;
+ 
+ 		case INTEL_PT_CYC:
++			intel_pt_calc_cyc_timestamp(decoder);
++			break;
++
+ 		case INTEL_PT_VMCS:
+ 		case INTEL_PT_MNT:
+ 		case INTEL_PT_PAD:
+@@ -2014,6 +2037,7 @@ static int intel_pt_hop_trace(struct intel_pt_decoder *decoder, bool *no_tip, in
+ 
+ static int intel_pt_walk_trace(struct intel_pt_decoder *decoder)
+ {
++	int last_packet_type = INTEL_PT_PAD;
+ 	bool no_tip = false;
+ 	int err;
+ 
+@@ -2022,6 +2046,12 @@ static int intel_pt_walk_trace(struct intel_pt_decoder *decoder)
+ 		if (err)
+ 			return err;
+ next:
++		if (decoder->cyc_threshold) {
++			if (decoder->sample_cyc && last_packet_type != INTEL_PT_CYC)
++				decoder->sample_cyc = false;
++			last_packet_type = decoder->packet.type;
++		}
++
+ 		if (decoder->hop) {
+ 			switch (intel_pt_hop_trace(decoder, &no_tip, &err)) {
+ 			case HOP_IGNORE:
+@@ -2811,9 +2841,18 @@ const struct intel_pt_state *intel_pt_decode(struct intel_pt_decoder *decoder)
+ 		}
+ 		if (intel_pt_sample_time(decoder->pkt_state)) {
+ 			intel_pt_update_sample_time(decoder);
+-			if (decoder->sample_cyc)
++			if (decoder->sample_cyc) {
+ 				decoder->sample_tot_cyc_cnt = decoder->tot_cyc_cnt;
++				decoder->state.flags |= INTEL_PT_SAMPLE_IPC;
++				decoder->sample_cyc = false;
++			}
+ 		}
++		/*
++		 * When using only TSC/MTC to compute cycles, IPC can be
++		 * sampled as soon as the cycle count changes.
++		 */
++		if (!decoder->have_cyc)
++			decoder->state.flags |= INTEL_PT_SAMPLE_IPC;
+ 	}
+ 
+ 	decoder->state.timestamp = decoder->sample_timestamp;
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
+index 8645fc2654811..48adaa78acfc2 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
+@@ -17,6 +17,7 @@
+ #define INTEL_PT_ABORT_TX	(1 << 1)
+ #define INTEL_PT_ASYNC		(1 << 2)
+ #define INTEL_PT_FUP_IP		(1 << 3)
++#define INTEL_PT_SAMPLE_IPC	(1 << 4)
+ 
+ enum intel_pt_sample_type {
+ 	INTEL_PT_BRANCH		= 1 << 0,
+@@ -243,6 +244,7 @@ struct intel_pt_params {
+ 	void *data;
+ 	bool return_compression;
+ 	bool branch_enable;
++	uint64_t ctl;
+ 	uint64_t period;
+ 	enum intel_pt_period_type period_type;
+ 	unsigned max_non_turbo_ratio;
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 3a0348caec7d6..dc023b8c6003a 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -893,6 +893,18 @@ static bool intel_pt_sampling_mode(struct intel_pt *pt)
+ 	return false;
+ }
+ 
++static u64 intel_pt_ctl(struct intel_pt *pt)
++{
++	struct evsel *evsel;
++	u64 config;
++
++	evlist__for_each_entry(pt->session->evlist, evsel) {
++		if (intel_pt_get_config(pt, &evsel->core.attr, &config))
++			return config;
++	}
++	return 0;
++}
++
+ static u64 intel_pt_ns_to_ticks(const struct intel_pt *pt, u64 ns)
+ {
+ 	u64 quot, rem;
+@@ -1026,6 +1038,7 @@ static struct intel_pt_queue *intel_pt_alloc_queue(struct intel_pt *pt,
+ 	params.data = ptq;
+ 	params.return_compression = intel_pt_return_compression(pt);
+ 	params.branch_enable = intel_pt_branch_enable(pt);
++	params.ctl = intel_pt_ctl(pt);
+ 	params.max_non_turbo_ratio = pt->max_non_turbo_ratio;
+ 	params.mtc_period = intel_pt_mtc_period(pt);
+ 	params.tsc_ctc_ratio_n = pt->tsc_ctc_ratio_n;
+@@ -1381,7 +1394,8 @@ static int intel_pt_synth_branch_sample(struct intel_pt_queue *ptq)
+ 		sample.branch_stack = (struct branch_stack *)&dummy_bs;
+ 	}
+ 
+-	sample.cyc_cnt = ptq->ipc_cyc_cnt - ptq->last_br_cyc_cnt;
++	if (ptq->state->flags & INTEL_PT_SAMPLE_IPC)
++		sample.cyc_cnt = ptq->ipc_cyc_cnt - ptq->last_br_cyc_cnt;
+ 	if (sample.cyc_cnt) {
+ 		sample.insn_cnt = ptq->ipc_insn_cnt - ptq->last_br_insn_cnt;
+ 		ptq->last_br_insn_cnt = ptq->ipc_insn_cnt;
+@@ -1431,7 +1445,8 @@ static int intel_pt_synth_instruction_sample(struct intel_pt_queue *ptq)
+ 	else
+ 		sample.period = ptq->state->tot_insn_cnt - ptq->last_insn_cnt;
+ 
+-	sample.cyc_cnt = ptq->ipc_cyc_cnt - ptq->last_in_cyc_cnt;
++	if (ptq->state->flags & INTEL_PT_SAMPLE_IPC)
++		sample.cyc_cnt = ptq->ipc_cyc_cnt - ptq->last_in_cyc_cnt;
+ 	if (sample.cyc_cnt) {
+ 		sample.insn_cnt = ptq->ipc_insn_cnt - ptq->last_in_insn_cnt;
+ 		ptq->last_in_insn_cnt = ptq->ipc_insn_cnt;
+@@ -1966,14 +1981,8 @@ static int intel_pt_sample(struct intel_pt_queue *ptq)
+ 
+ 	ptq->have_sample = false;
+ 
+-	if (ptq->state->tot_cyc_cnt > ptq->ipc_cyc_cnt) {
+-		/*
+-		 * Cycle count and instruction count only go together to create
+-		 * a valid IPC ratio when the cycle count changes.
+-		 */
+-		ptq->ipc_insn_cnt = ptq->state->tot_insn_cnt;
+-		ptq->ipc_cyc_cnt = ptq->state->tot_cyc_cnt;
+-	}
++	ptq->ipc_insn_cnt = ptq->state->tot_insn_cnt;
++	ptq->ipc_cyc_cnt = ptq->state->tot_cyc_cnt;
+ 
+ 	/*
+ 	 * Do PEBS first to allow for the possibility that the PEBS timestamp
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 0d14abdf3d722..4d569ad7db02d 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -1561,12 +1561,11 @@ static int bfd2elf_binding(asymbol *symbol)
+ int dso__load_bfd_symbols(struct dso *dso, const char *debugfile)
+ {
+ 	int err = -1;
+-	long symbols_size, symbols_count;
++	long symbols_size, symbols_count, i;
+ 	asection *section;
+ 	asymbol **symbols, *sym;
+ 	struct symbol *symbol;
+ 	bfd *abfd;
+-	u_int i;
+ 	u64 start, len;
+ 
+ 	abfd = bfd_openr(dso->long_name, NULL);
+@@ -1867,8 +1866,10 @@ int dso__load(struct dso *dso, struct map *map)
+ 		if (nsexit)
+ 			nsinfo__mountns_enter(dso->nsinfo, &nsc);
+ 
+-		if (bfdrc == 0)
++		if (bfdrc == 0) {
++			ret = 0;
+ 			break;
++		}
+ 
+ 		if (!is_reg || sirc < 0)
+ 			continue;
+diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
+index 497ab51bc1702..3fbe1acd531ae 100755
+--- a/tools/testing/kunit/kunit_tool_test.py
++++ b/tools/testing/kunit/kunit_tool_test.py
+@@ -288,19 +288,17 @@ class StrContains(str):
+ class KUnitMainTest(unittest.TestCase):
+ 	def setUp(self):
+ 		path = get_absolute_path('test_data/test_is_test_passed-all_passed.log')
+-		file = open(path)
+-		all_passed_log = file.readlines()
+-		self.print_patch = mock.patch('builtins.print')
+-		self.print_mock = self.print_patch.start()
++		with open(path) as file:
++			all_passed_log = file.readlines()
++
++		self.print_mock = mock.patch('builtins.print').start()
++		self.addCleanup(mock.patch.stopall)
++
+ 		self.linux_source_mock = mock.Mock()
+ 		self.linux_source_mock.build_reconfig = mock.Mock(return_value=True)
+ 		self.linux_source_mock.build_um_kernel = mock.Mock(return_value=True)
+ 		self.linux_source_mock.run_kernel = mock.Mock(return_value=all_passed_log)
+ 
+-	def tearDown(self):
+-		self.print_patch.stop()
+-		pass
+-
+ 	def test_config_passes_args_pass(self):
+ 		kunit.main(['config', '--build_dir=.kunit'], self.linux_source_mock)
+ 		assert self.linux_source_mock.build_reconfig.call_count == 1
+diff --git a/tools/testing/selftests/bpf/test_xdp_redirect.sh b/tools/testing/selftests/bpf/test_xdp_redirect.sh
+index dd80f0c84afb4..c033850886f44 100755
+--- a/tools/testing/selftests/bpf/test_xdp_redirect.sh
++++ b/tools/testing/selftests/bpf/test_xdp_redirect.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # Create 2 namespaces with two veth peers, and
+ # forward packets in-between using generic XDP
+ #
+@@ -57,12 +57,8 @@ test_xdp_redirect()
+ 	ip link set dev veth1 $xdpmode obj test_xdp_redirect.o sec redirect_to_222 &> /dev/null
+ 	ip link set dev veth2 $xdpmode obj test_xdp_redirect.o sec redirect_to_111 &> /dev/null
+ 
+-	ip netns exec ns1 ping -c 1 10.1.1.22 &> /dev/null
+-	local ret1=$?
+-	ip netns exec ns2 ping -c 1 10.1.1.11 &> /dev/null
+-	local ret2=$?
+-
+-	if [ $ret1 -eq 0 -a $ret2 -eq 0 ]; then
++	if ip netns exec ns1 ping -c 1 10.1.1.22 &> /dev/null &&
++	   ip netns exec ns2 ping -c 1 10.1.1.11 &> /dev/null; then
+ 		echo "selftests: test_xdp_redirect $xdpmode [PASS]";
+ 	else
+ 		ret=1
+diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile
+index 607c2acd20829..604b43ece15f5 100644
+--- a/tools/testing/selftests/dmabuf-heaps/Makefile
++++ b/tools/testing/selftests/dmabuf-heaps/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-CFLAGS += -static -O3 -Wl,-no-as-needed -Wall -I../../../../usr/include
++CFLAGS += -static -O3 -Wl,-no-as-needed -Wall
+ 
+ TEST_GEN_PROGS = dmabuf-heap
+ 
+diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic_event_syntax_errors.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic_event_syntax_errors.tc
+index ada594fe16cb3..955e3ceea44b5 100644
+--- a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic_event_syntax_errors.tc
++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic_event_syntax_errors.tc
+@@ -1,19 +1,38 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+ # description: event trigger - test synthetic_events syntax parser errors
+-# requires: synthetic_events error_log
++# requires: synthetic_events error_log "char name[]' >> synthetic_events":README
+ 
+ check_error() { # command-with-error-pos-by-^
+     ftrace_errlog_check 'synthetic_events' "$1" 'synthetic_events'
+ }
+ 
++check_dyn_error() { # command-with-error-pos-by-^
++    ftrace_errlog_check 'synthetic_events' "$1" 'dynamic_events'
++}
++
+ check_error 'myevent ^chr arg'			# INVALID_TYPE
+-check_error 'myevent ^char str[];; int v'	# INVALID_TYPE
+-check_error 'myevent char ^str]; int v'		# INVALID_NAME
+-check_error 'myevent char ^str;[]'		# INVALID_NAME
+-check_error 'myevent ^char str[; int v'		# INVALID_TYPE
+-check_error '^mye;vent char str[]'		# BAD_NAME
+-check_error 'myevent char str[]; ^int'		# INVALID_FIELD
+-check_error '^myevent'				# INCOMPLETE_CMD
++check_error 'myevent ^unsigned arg'		# INCOMPLETE_TYPE
++
++check_error 'myevent char ^str]; int v'		# BAD_NAME
++check_error '^mye-vent char str[]'		# BAD_NAME
++check_error 'myevent char ^st-r[]'		# BAD_NAME
++
++check_error 'myevent char str;^[]'		# INVALID_FIELD
++check_error 'myevent char str; ^int'		# INVALID_FIELD
++
++check_error 'myevent char ^str[; int v'		# INVALID_ARRAY_SPEC
++check_error 'myevent char ^str[kdjdk]'		# INVALID_ARRAY_SPEC
++check_error 'myevent char ^str[257]'		# INVALID_ARRAY_SPEC
++
++check_error '^mye;vent char str[]'		# INVALID_CMD
++check_error '^myevent ; char str[]'		# INVALID_CMD
++check_error '^myevent; char str[]'		# INVALID_CMD
++check_error '^myevent ;char str[]'		# INVALID_CMD
++check_error '^; char str[]'			# INVALID_CMD
++check_error '^;myevent char str[]'		# INVALID_CMD
++check_error '^myevent'				# INVALID_CMD
++
++check_dyn_error '^s:junk/myevent char str['	# INVALID_DYN_CMD
+ 
+ exit 0
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+index 2cfd87d94db89..e927df83efb91 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+@@ -493,7 +493,7 @@ do_transfer()
+ 		echo "${listener_ns} SYNRX: ${cl_proto} -> ${srv_proto}: expect ${expect_synrx}, got ${stat_synrx_now_l}"
+ 	fi
+ 	if [ $expect_ackrx -ne $stat_ackrx_now_l ] ;then
+-		echo "${listener_ns} ACKRX: ${cl_proto} -> ${srv_proto}: expect ${expect_synrx}, got ${stat_synrx_now_l}"
++		echo "${listener_ns} ACKRX: ${cl_proto} -> ${srv_proto}: expect ${expect_ackrx}, got ${stat_ackrx_now_l} "
+ 	fi
+ 
+ 	if [ $retc -eq 0 ] && [ $rets -eq 0 ];then
+diff --git a/tools/testing/selftests/powerpc/eeh/eeh-basic.sh b/tools/testing/selftests/powerpc/eeh/eeh-basic.sh
+index 0d783e1065c86..64779f073e177 100755
+--- a/tools/testing/selftests/powerpc/eeh/eeh-basic.sh
++++ b/tools/testing/selftests/powerpc/eeh/eeh-basic.sh
+@@ -86,5 +86,5 @@ echo "$failed devices failed to recover ($dev_count tested)"
+ lspci | diff -u $pre_lspci -
+ rm -f $pre_lspci
+ 
+-test "$failed" == 0
++test "$failed" -eq 0
+ exit $?
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 26c72f2b61b1b..1b6c7d33c4ff2 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -315,7 +315,7 @@ TEST(kcmp)
+ 	ret = __filecmp(getpid(), getpid(), 1, 1);
+ 	EXPECT_EQ(ret, 0);
+ 	if (ret != 0 && errno == ENOSYS)
+-		SKIP(return, "Kernel does not support kcmp() (missing CONFIG_CHECKPOINT_RESTORE?)");
++		SKIP(return, "Kernel does not support kcmp() (missing CONFIG_KCMP?)");
+ }
+ 
+ TEST(mode_strict_support)
+diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh
+index 74c69b75f6f5a..7ed7cd95e58fe 100755
+--- a/tools/testing/selftests/wireguard/netns.sh
++++ b/tools/testing/selftests/wireguard/netns.sh
+@@ -39,7 +39,7 @@ ip0() { pretty 0 "ip $*"; ip -n $netns0 "$@"; }
+ ip1() { pretty 1 "ip $*"; ip -n $netns1 "$@"; }
+ ip2() { pretty 2 "ip $*"; ip -n $netns2 "$@"; }
+ sleep() { read -t "$1" -N 1 || true; }
+-waitiperf() { pretty "${1//*-}" "wait for iperf:5201 pid $2"; while [[ $(ss -N "$1" -tlpH 'sport = 5201') != *\"iperf3\",pid=$2,fd=* ]]; do sleep 0.1; done; }
++waitiperf() { pretty "${1//*-}" "wait for iperf:${3:-5201} pid $2"; while [[ $(ss -N "$1" -tlpH "sport = ${3:-5201}") != *\"iperf3\",pid=$2,fd=* ]]; do sleep 0.1; done; }
+ waitncatudp() { pretty "${1//*-}" "wait for udp:1111 pid $2"; while [[ $(ss -N "$1" -ulpH 'sport = 1111') != *\"ncat\",pid=$2,fd=* ]]; do sleep 0.1; done; }
+ waitiface() { pretty "${1//*-}" "wait for $2 to come up"; ip netns exec "$1" bash -c "while [[ \$(< \"/sys/class/net/$2/operstate\") != up ]]; do read -t .1 -N 0 || true; done;"; }
+ 
+@@ -141,6 +141,19 @@ tests() {
+ 	n2 iperf3 -s -1 -B fd00::2 &
+ 	waitiperf $netns2 $!
+ 	n1 iperf3 -Z -t 3 -b 0 -u -c fd00::2
++
++	# TCP over IPv4, in parallel
++	for max in 4 5 50; do
++		local pids=( )
++		for ((i=0; i < max; ++i)) do
++			n2 iperf3 -p $(( 5200 + i )) -s -1 -B 192.168.241.2 &
++			pids+=( $! ); waitiperf $netns2 $! $(( 5200 + i ))
++		done
++		for ((i=0; i < max; ++i)) do
++			n1 iperf3 -Z -t 3 -p $(( 5200 + i )) -c 192.168.241.2 &
++		done
++		wait "${pids[@]}"
++	done
+ }
+ 
+ [[ $(ip1 link show dev wg0) =~ mtu\ ([0-9]+) ]] && orig_mtu="${BASH_REMATCH[1]}"


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-03-07 15:17 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-03-07 15:17 UTC (permalink / raw
  To: gentoo-commits

commit:     a3d608ce89b0ac82b9cc22dd12dd299c53d4d4a6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Mar  7 15:16:50 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Mar  7 15:16:50 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a3d608ce

Linux patch 5.10.21

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1020_linux-5.10.21.patch | 5007 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5011 insertions(+)

diff --git a/0000_README b/0000_README
index c338847..fd876b7 100644
--- a/0000_README
+++ b/0000_README
@@ -123,6 +123,10 @@ Patch:  1019_linux-5.10.20.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.20
 
+Patch:  1020_linux-5.10.21.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.21
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1020_linux-5.10.21.patch b/1020_linux-5.10.21.patch
new file mode 100644
index 0000000..c3becb3
--- /dev/null
+++ b/1020_linux-5.10.21.patch
@@ -0,0 +1,5007 @@
+diff --git a/Documentation/devicetree/bindings/net/btusb.txt b/Documentation/devicetree/bindings/net/btusb.txt
+index b1ad6ee68e909..c51dd99dc0d3c 100644
+--- a/Documentation/devicetree/bindings/net/btusb.txt
++++ b/Documentation/devicetree/bindings/net/btusb.txt
+@@ -38,7 +38,7 @@ Following example uses irq pin number 3 of gpio0 for out of band wake-on-bt:
+ 	compatible = "usb1286,204e";
+ 	reg = <1>;
+ 	interrupt-parent = <&gpio0>;
+-	interrupt-name = "wakeup";
++	interrupt-names = "wakeup";
+ 	interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+     };
+ };
+diff --git a/Documentation/devicetree/bindings/net/ethernet-controller.yaml b/Documentation/devicetree/bindings/net/ethernet-controller.yaml
+index fdf7098172183..39147d33e8c7c 100644
+--- a/Documentation/devicetree/bindings/net/ethernet-controller.yaml
++++ b/Documentation/devicetree/bindings/net/ethernet-controller.yaml
+@@ -206,6 +206,11 @@ properties:
+                 Indicates that full-duplex is used. When absent, half
+                 duplex is assumed.
+ 
++            pause:
++              $ref: /schemas/types.yaml#definitions/flag
++              description:
++                Indicates that pause should be enabled.
++
+             asym-pause:
+               $ref: /schemas/types.yaml#definitions/flag
+               description:
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index 25e6673a085a0..4abcfff15e384 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -630,16 +630,15 @@ tcp_rmem - vector of 3 INTEGERs: min, default, max
+ 
+ 	default: initial size of receive buffer used by TCP sockets.
+ 	This value overrides net.core.rmem_default used by other protocols.
+-	Default: 87380 bytes. This value results in window of 65535 with
+-	default setting of tcp_adv_win_scale and tcp_app_win:0 and a bit
+-	less for default tcp_app_win. See below about these variables.
++	Default: 131072 bytes.
++	This value results in initial window of 65535.
+ 
+ 	max: maximal size of receive buffer allowed for automatically
+ 	selected receiver buffers for TCP socket. This value does not override
+ 	net.core.rmem_max.  Calling setsockopt() with SO_RCVBUF disables
+ 	automatic tuning of that socket's receive buffer size, in which
+ 	case this value is ignored.
+-	Default: between 87380B and 6MB, depending on RAM size.
++	Default: between 131072 and 6MB, depending on RAM size.
+ 
+ tcp_sack - BOOLEAN
+ 	Enable select acknowledgments (SACKS).
+diff --git a/Makefile b/Makefile
+index 1ebc8a6bf9b06..98ae9007e8a52 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 20
++SUBLEVEL = 21
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/xen/p2m.c b/arch/arm/xen/p2m.c
+index fd6e3aafe2724..acb464547a54f 100644
+--- a/arch/arm/xen/p2m.c
++++ b/arch/arm/xen/p2m.c
+@@ -93,12 +93,39 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+ 	int i;
+ 
+ 	for (i = 0; i < count; i++) {
++		struct gnttab_unmap_grant_ref unmap;
++		int rc;
++
+ 		if (map_ops[i].status)
+ 			continue;
+-		if (unlikely(!set_phys_to_machine(map_ops[i].host_addr >> XEN_PAGE_SHIFT,
+-				    map_ops[i].dev_bus_addr >> XEN_PAGE_SHIFT))) {
+-			return -ENOMEM;
+-		}
++		if (likely(set_phys_to_machine(map_ops[i].host_addr >> XEN_PAGE_SHIFT,
++				    map_ops[i].dev_bus_addr >> XEN_PAGE_SHIFT)))
++			continue;
++
++		/*
++		 * Signal an error for this slot. This in turn requires
++		 * immediate unmapping.
++		 */
++		map_ops[i].status = GNTST_general_error;
++		unmap.host_addr = map_ops[i].host_addr,
++		unmap.handle = map_ops[i].handle;
++		map_ops[i].handle = ~0;
++		if (map_ops[i].flags & GNTMAP_device_map)
++			unmap.dev_bus_addr = map_ops[i].dev_bus_addr;
++		else
++			unmap.dev_bus_addr = 0;
++
++		/*
++		 * Pre-populate the status field, to be recognizable in
++		 * the log message below.
++		 */
++		unmap.status = 1;
++
++		rc = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
++					       &unmap, 1);
++		if (rc || unmap.status != GNTST_okay)
++			pr_err_once("gnttab unmap failed: rc=%d st=%d\n",
++				    rc, unmap.status);
+ 	}
+ 
+ 	return 0;
+diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c
+index e76c866199493..60f5829d476f5 100644
+--- a/arch/parisc/kernel/irq.c
++++ b/arch/parisc/kernel/irq.c
+@@ -376,7 +376,11 @@ static inline int eirr_to_irq(unsigned long eirr)
+ /*
+  * IRQ STACK - used for irq handler
+  */
++#ifdef CONFIG_64BIT
++#define IRQ_STACK_SIZE      (4096 << 4) /* 64k irq stack size */
++#else
+ #define IRQ_STACK_SIZE      (4096 << 3) /* 32k irq stack size */
++#endif
+ 
+ union irq_stack_union {
+ 	unsigned long stack[IRQ_STACK_SIZE/sizeof(unsigned long)];
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index b18bce1a209fa..242bdd8281e0f 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -1241,9 +1241,11 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 		if ((word & 0xfe2) == 2)
+ 			op->type = SYSCALL;
+ 		else if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) &&
+-				(word & 0xfe3) == 1)
++				(word & 0xfe3) == 1) {	/* scv */
+ 			op->type = SYSCALL_VECTORED_0;
+-		else
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
++		} else
+ 			op->type = UNKNOWN;
+ 		return 0;
+ #endif
+@@ -1347,7 +1349,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ #ifdef __powerpc64__
+ 	case 1:
+ 		if (!cpu_has_feature(CPU_FTR_ARCH_31))
+-			return -1;
++			goto unknown_opcode;
+ 
+ 		prefix_r = GET_PREFIX_R(word);
+ 		ra = GET_PREFIX_RA(suffix);
+@@ -1380,8 +1382,13 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 
+ #ifdef __powerpc64__
+ 	case 4:
++		/*
++		 * There are very many instructions with this primary opcode
++		 * introduced in the ISA as early as v2.03. However, the ones
++		 * we currently emulate were all introduced with ISA 3.0
++		 */
+ 		if (!cpu_has_feature(CPU_FTR_ARCH_300))
+-			return -1;
++			goto unknown_opcode;
+ 
+ 		switch (word & 0x3f) {
+ 		case 48:	/* maddhd */
+@@ -1407,7 +1414,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 		 * There are other instructions from ISA 3.0 with the same
+ 		 * primary opcode which do not have emulation support yet.
+ 		 */
+-		return -1;
++		goto unknown_opcode;
+ #endif
+ 
+ 	case 7:		/* mulli */
+@@ -1467,6 +1474,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 	case 19:
+ 		if (((word >> 1) & 0x1f) == 2) {
+ 			/* addpcis */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			imm = (short) (word & 0xffc1);	/* d0 + d2 fields */
+ 			imm |= (word >> 15) & 0x3e;	/* d1 field */
+ 			op->val = regs->nip + (imm << 16) + 4;
+@@ -1779,7 +1788,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ #ifdef __powerpc64__
+ 		case 265:	/* modud */
+ 			if (!cpu_has_feature(CPU_FTR_ARCH_300))
+-				return -1;
++				goto unknown_opcode;
+ 			op->val = regs->gpr[ra] % regs->gpr[rb];
+ 			goto compute_done;
+ #endif
+@@ -1789,7 +1798,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 
+ 		case 267:	/* moduw */
+ 			if (!cpu_has_feature(CPU_FTR_ARCH_300))
+-				return -1;
++				goto unknown_opcode;
+ 			op->val = (unsigned int) regs->gpr[ra] %
+ 				(unsigned int) regs->gpr[rb];
+ 			goto compute_done;
+@@ -1826,7 +1835,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ #endif
+ 		case 755:	/* darn */
+ 			if (!cpu_has_feature(CPU_FTR_ARCH_300))
+-				return -1;
++				goto unknown_opcode;
+ 			switch (ra & 0x3) {
+ 			case 0:
+ 				/* 32-bit conditioned */
+@@ -1848,14 +1857,14 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ #ifdef __powerpc64__
+ 		case 777:	/* modsd */
+ 			if (!cpu_has_feature(CPU_FTR_ARCH_300))
+-				return -1;
++				goto unknown_opcode;
+ 			op->val = (long int) regs->gpr[ra] %
+ 				(long int) regs->gpr[rb];
+ 			goto compute_done;
+ #endif
+ 		case 779:	/* modsw */
+ 			if (!cpu_has_feature(CPU_FTR_ARCH_300))
+-				return -1;
++				goto unknown_opcode;
+ 			op->val = (int) regs->gpr[ra] %
+ 				(int) regs->gpr[rb];
+ 			goto compute_done;
+@@ -1932,14 +1941,14 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ #endif
+ 		case 538:	/* cnttzw */
+ 			if (!cpu_has_feature(CPU_FTR_ARCH_300))
+-				return -1;
++				goto unknown_opcode;
+ 			val = (unsigned int) regs->gpr[rd];
+ 			op->val = (val ? __builtin_ctz(val) : 32);
+ 			goto logical_done;
+ #ifdef __powerpc64__
+ 		case 570:	/* cnttzd */
+ 			if (!cpu_has_feature(CPU_FTR_ARCH_300))
+-				return -1;
++				goto unknown_opcode;
+ 			val = regs->gpr[rd];
+ 			op->val = (val ? __builtin_ctzl(val) : 64);
+ 			goto logical_done;
+@@ -2049,7 +2058,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 		case 890:	/* extswsli with sh_5 = 0 */
+ 		case 891:	/* extswsli with sh_5 = 1 */
+ 			if (!cpu_has_feature(CPU_FTR_ARCH_300))
+-				return -1;
++				goto unknown_opcode;
+ 			op->type = COMPUTE + SETREG;
+ 			sh = rb | ((word & 2) << 4);
+ 			val = (signed int) regs->gpr[rd];
+@@ -2376,6 +2385,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 268:	/* lxvx */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->type = MKOP(LOAD_VSX, 0, 16);
+ 			op->element_size = 16;
+@@ -2385,6 +2396,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 		case 269:	/* lxvl */
+ 		case 301: {	/* lxvll */
+ 			int nb;
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->ea = ra ? regs->gpr[ra] : 0;
+ 			nb = regs->gpr[rb] & 0xff;
+@@ -2404,6 +2417,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 364:	/* lxvwsx */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->type = MKOP(LOAD_VSX, 0, 4);
+ 			op->element_size = 4;
+@@ -2411,6 +2426,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 396:	/* stxvx */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->type = MKOP(STORE_VSX, 0, 16);
+ 			op->element_size = 16;
+@@ -2420,6 +2437,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 		case 397:	/* stxvl */
+ 		case 429: {	/* stxvll */
+ 			int nb;
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->ea = ra ? regs->gpr[ra] : 0;
+ 			nb = regs->gpr[rb] & 0xff;
+@@ -2464,6 +2483,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 781:	/* lxsibzx */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->type = MKOP(LOAD_VSX, 0, 1);
+ 			op->element_size = 8;
+@@ -2471,6 +2492,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 812:	/* lxvh8x */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->type = MKOP(LOAD_VSX, 0, 16);
+ 			op->element_size = 2;
+@@ -2478,6 +2501,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 813:	/* lxsihzx */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->type = MKOP(LOAD_VSX, 0, 2);
+ 			op->element_size = 8;
+@@ -2491,6 +2516,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 876:	/* lxvb16x */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->type = MKOP(LOAD_VSX, 0, 16);
+ 			op->element_size = 1;
+@@ -2504,6 +2531,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 909:	/* stxsibx */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->type = MKOP(STORE_VSX, 0, 1);
+ 			op->element_size = 8;
+@@ -2511,6 +2540,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 940:	/* stxvh8x */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->type = MKOP(STORE_VSX, 0, 16);
+ 			op->element_size = 2;
+@@ -2518,6 +2549,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 941:	/* stxsihx */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->type = MKOP(STORE_VSX, 0, 2);
+ 			op->element_size = 8;
+@@ -2531,6 +2564,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 1004:	/* stxvb16x */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd | ((word & 1) << 5);
+ 			op->type = MKOP(STORE_VSX, 0, 16);
+ 			op->element_size = 1;
+@@ -2639,12 +2674,16 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			op->type = MKOP(LOAD_FP, 0, 16);
+ 			break;
+ 		case 2:		/* lxsd */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd + 32;
+ 			op->type = MKOP(LOAD_VSX, 0, 8);
+ 			op->element_size = 8;
+ 			op->vsx_flags = VSX_CHECK_VEC;
+ 			break;
+ 		case 3:		/* lxssp */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->reg = rd + 32;
+ 			op->type = MKOP(LOAD_VSX, 0, 4);
+ 			op->element_size = 8;
+@@ -2681,6 +2720,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 1:		/* lxv */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->ea = dqform_ea(word, regs);
+ 			if (word & 8)
+ 				op->reg = rd + 32;
+@@ -2691,6 +2732,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 
+ 		case 2:		/* stxsd with LSB of DS field = 0 */
+ 		case 6:		/* stxsd with LSB of DS field = 1 */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->ea = dsform_ea(word, regs);
+ 			op->reg = rd + 32;
+ 			op->type = MKOP(STORE_VSX, 0, 8);
+@@ -2700,6 +2743,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 
+ 		case 3:		/* stxssp with LSB of DS field = 0 */
+ 		case 7:		/* stxssp with LSB of DS field = 1 */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->ea = dsform_ea(word, regs);
+ 			op->reg = rd + 32;
+ 			op->type = MKOP(STORE_VSX, 0, 4);
+@@ -2708,6 +2753,8 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 			break;
+ 
+ 		case 5:		/* stxv */
++			if (!cpu_has_feature(CPU_FTR_ARCH_300))
++				goto unknown_opcode;
+ 			op->ea = dqform_ea(word, regs);
+ 			if (word & 8)
+ 				op->reg = rd + 32;
+@@ -2737,7 +2784,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 		break;
+ 	case 1: /* Prefixed instructions */
+ 		if (!cpu_has_feature(CPU_FTR_ARCH_31))
+-			return -1;
++			goto unknown_opcode;
+ 
+ 		prefix_r = GET_PREFIX_R(word);
+ 		ra = GET_PREFIX_RA(suffix);
+@@ -2872,6 +2919,10 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 
+ 	return 0;
+ 
++ unknown_opcode:
++	op->type = UNKNOWN;
++	return 0;
++
+  logical_done:
+ 	if (word & 1)
+ 		set_cr0(regs, op);
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 608082fb9a6c6..e8921e78a2926 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -222,8 +222,6 @@ pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+ pgd_t trampoline_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+ pte_t fixmap_pte[PTRS_PER_PTE] __page_aligned_bss;
+ 
+-#define MAX_EARLY_MAPPING_SIZE	SZ_128M
+-
+ pgd_t early_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+ 
+ void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
+@@ -298,13 +296,7 @@ static void __init create_pte_mapping(pte_t *ptep,
+ 
+ pmd_t trampoline_pmd[PTRS_PER_PMD] __page_aligned_bss;
+ pmd_t fixmap_pmd[PTRS_PER_PMD] __page_aligned_bss;
+-
+-#if MAX_EARLY_MAPPING_SIZE < PGDIR_SIZE
+-#define NUM_EARLY_PMDS		1UL
+-#else
+-#define NUM_EARLY_PMDS		(1UL + MAX_EARLY_MAPPING_SIZE / PGDIR_SIZE)
+-#endif
+-pmd_t early_pmd[PTRS_PER_PMD * NUM_EARLY_PMDS] __initdata __aligned(PAGE_SIZE);
++pmd_t early_pmd[PTRS_PER_PMD] __initdata __aligned(PAGE_SIZE);
+ pmd_t early_dtb_pmd[PTRS_PER_PMD] __initdata __aligned(PAGE_SIZE);
+ 
+ static pmd_t *__init get_pmd_virt_early(phys_addr_t pa)
+@@ -326,11 +318,9 @@ static pmd_t *get_pmd_virt_late(phys_addr_t pa)
+ 
+ static phys_addr_t __init alloc_pmd_early(uintptr_t va)
+ {
+-	uintptr_t pmd_num;
++	BUG_ON((va - PAGE_OFFSET) >> PGDIR_SHIFT);
+ 
+-	pmd_num = (va - PAGE_OFFSET) >> PGDIR_SHIFT;
+-	BUG_ON(pmd_num >= NUM_EARLY_PMDS);
+-	return (uintptr_t)&early_pmd[pmd_num * PTRS_PER_PMD];
++	return (uintptr_t)early_pmd;
+ }
+ 
+ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va)
+@@ -448,7 +438,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+ 	uintptr_t va, pa, end_va;
+ 	uintptr_t load_pa = (uintptr_t)(&_start);
+ 	uintptr_t load_sz = (uintptr_t)(&_end) - load_pa;
+-	uintptr_t map_size = best_map_size(load_pa, MAX_EARLY_MAPPING_SIZE);
++	uintptr_t map_size;
+ #ifndef __PAGETABLE_PMD_FOLDED
+ 	pmd_t fix_bmap_spmd, fix_bmap_epmd;
+ #endif
+@@ -460,12 +450,11 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+ 	 * Enforce boot alignment requirements of RV32 and
+ 	 * RV64 by only allowing PMD or PGD mappings.
+ 	 */
+-	BUG_ON(map_size == PAGE_SIZE);
++	map_size = PMD_SIZE;
+ 
+ 	/* Sanity check alignment and size */
+ 	BUG_ON((PAGE_OFFSET % PGDIR_SIZE) != 0);
+ 	BUG_ON((load_pa % map_size) != 0);
+-	BUG_ON(load_sz > MAX_EARLY_MAPPING_SIZE);
+ 
+ 	pt_ops.alloc_pte = alloc_pte_early;
+ 	pt_ops.get_pte_virt = get_pte_virt_early;
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 7d4d89fa8647a..aaa7bffdb20f5 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4384,6 +4384,9 @@ static const struct x86_cpu_desc isolation_ucodes[] = {
+ 	INTEL_CPU_DESC(INTEL_FAM6_BROADWELL_X,		 2, 0x0b000014),
+ 	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X,		 3, 0x00000021),
+ 	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X,		 4, 0x00000000),
++	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X,		 5, 0x00000000),
++	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X,		 6, 0x00000000),
++	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X,		 7, 0x00000000),
+ 	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_L,		 3, 0x0000007c),
+ 	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE,		 3, 0x0000007c),
+ 	INTEL_CPU_DESC(INTEL_FAM6_KABYLAKE,		 9, 0x0000004e),
+diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
+index 5941e18edd5a9..ab09af613788b 100644
+--- a/arch/x86/include/asm/xen/page.h
++++ b/arch/x86/include/asm/xen/page.h
+@@ -86,6 +86,18 @@ clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+ }
+ #endif
+ 
++/*
++ * The maximum amount of extra memory compared to the base size.  The
++ * main scaling factor is the size of struct page.  At extreme ratios
++ * of base:extra, all the base memory can be filled with page
++ * structures for the extra memory, leaving no space for anything
++ * else.
++ *
++ * 10x seems like a reasonable balance between scaling flexibility and
++ * leaving a practically usable system.
++ */
++#define XEN_EXTRA_MEM_RATIO	(10)
++
+ /*
+  * Helper functions to write or read unsigned long values to/from
+  * memory, when the access may fault.
+diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
+index 34b153cbd4acb..5e9a34b5bd741 100644
+--- a/arch/x86/kernel/module.c
++++ b/arch/x86/kernel/module.c
+@@ -114,6 +114,7 @@ int apply_relocate(Elf32_Shdr *sechdrs,
+ 			*location += sym->st_value;
+ 			break;
+ 		case R_386_PC32:
++		case R_386_PLT32:
+ 			/* Add the value, subtract its position */
+ 			*location += sym->st_value - (uint32_t)location;
+ 			break;
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index efbaef8b4de98..b29657b76e3fa 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -477,6 +477,15 @@ static const struct dmi_system_id reboot_dmi_table[] __initconst = {
+ 		},
+ 	},
+ 
++	{	/* PCIe Wifi card isn't detected after reboot otherwise */
++		.callback = set_pci_reboot,
++		.ident = "Zotac ZBOX CI327 nano",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "NA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ZBOX-CI327NANO-GS-01"),
++		},
++	},
++
+ 	/* Sony */
+ 	{	/* Handle problems with rebooting on Sony VGN-Z540N */
+ 		.callback = set_bios_reboot,
+diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
+index ce7188cbdae58..1c3a1962cade6 100644
+--- a/arch/x86/tools/relocs.c
++++ b/arch/x86/tools/relocs.c
+@@ -867,9 +867,11 @@ static int do_reloc32(struct section *sec, Elf_Rel *rel, Elf_Sym *sym,
+ 	case R_386_PC32:
+ 	case R_386_PC16:
+ 	case R_386_PC8:
++	case R_386_PLT32:
+ 		/*
+-		 * NONE can be ignored and PC relative relocations don't
+-		 * need to be adjusted.
++		 * NONE can be ignored and PC relative relocations don't need
++		 * to be adjusted. Because sym must be defined, R_386_PLT32 can
++		 * be treated the same way as R_386_PC32.
+ 		 */
+ 		break;
+ 
+@@ -910,9 +912,11 @@ static int do_reloc_real(struct section *sec, Elf_Rel *rel, Elf_Sym *sym,
+ 	case R_386_PC32:
+ 	case R_386_PC16:
+ 	case R_386_PC8:
++	case R_386_PLT32:
+ 		/*
+-		 * NONE can be ignored and PC relative relocations don't
+-		 * need to be adjusted.
++		 * NONE can be ignored and PC relative relocations don't need
++		 * to be adjusted. Because sym must be defined, R_386_PLT32 can
++		 * be treated the same way as R_386_PC32.
+ 		 */
+ 		break;
+ 
+diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
+index d71942a3c65af..60da7e793385e 100644
+--- a/arch/x86/xen/p2m.c
++++ b/arch/x86/xen/p2m.c
+@@ -416,6 +416,9 @@ void __init xen_vmalloc_p2m_tree(void)
+ 	xen_p2m_last_pfn = xen_max_p2m_pfn;
+ 
+ 	p2m_limit = (phys_addr_t)P2M_LIMIT * 1024 * 1024 * 1024 / PAGE_SIZE;
++	if (!p2m_limit && IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC))
++		p2m_limit = xen_start_info->nr_pages * XEN_EXTRA_MEM_RATIO;
++
+ 	vm.flags = VM_ALLOC;
+ 	vm.size = ALIGN(sizeof(unsigned long) * max(xen_max_p2m_pfn, p2m_limit),
+ 			PMD_SIZE * PMDS_PER_MID_PAGE);
+@@ -652,10 +655,9 @@ bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn)
+ 	pte_t *ptep;
+ 	unsigned int level;
+ 
+-	if (unlikely(pfn >= xen_p2m_size)) {
+-		BUG_ON(mfn != INVALID_P2M_ENTRY);
+-		return true;
+-	}
++	/* Only invalid entries allowed above the highest p2m covered frame. */
++	if (unlikely(pfn >= xen_p2m_size))
++		return mfn == INVALID_P2M_ENTRY;
+ 
+ 	/*
+ 	 * The interface requires atomic updates on p2m elements.
+@@ -710,6 +712,8 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+ 
+ 	for (i = 0; i < count; i++) {
+ 		unsigned long mfn, pfn;
++		struct gnttab_unmap_grant_ref unmap[2];
++		int rc;
+ 
+ 		/* Do not add to override if the map failed. */
+ 		if (map_ops[i].status != GNTST_okay ||
+@@ -727,10 +731,46 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+ 
+ 		WARN(pfn_to_mfn(pfn) != INVALID_P2M_ENTRY, "page must be ballooned");
+ 
+-		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
+-			ret = -ENOMEM;
+-			goto out;
++		if (likely(set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
++			continue;
++
++		/*
++		 * Signal an error for this slot. This in turn requires
++		 * immediate unmapping.
++		 */
++		map_ops[i].status = GNTST_general_error;
++		unmap[0].host_addr = map_ops[i].host_addr,
++		unmap[0].handle = map_ops[i].handle;
++		map_ops[i].handle = ~0;
++		if (map_ops[i].flags & GNTMAP_device_map)
++			unmap[0].dev_bus_addr = map_ops[i].dev_bus_addr;
++		else
++			unmap[0].dev_bus_addr = 0;
++
++		if (kmap_ops) {
++			kmap_ops[i].status = GNTST_general_error;
++			unmap[1].host_addr = kmap_ops[i].host_addr,
++			unmap[1].handle = kmap_ops[i].handle;
++			kmap_ops[i].handle = ~0;
++			if (kmap_ops[i].flags & GNTMAP_device_map)
++				unmap[1].dev_bus_addr = kmap_ops[i].dev_bus_addr;
++			else
++				unmap[1].dev_bus_addr = 0;
+ 		}
++
++		/*
++		 * Pre-populate both status fields, to be recognizable in
++		 * the log message below.
++		 */
++		unmap[0].status = 1;
++		unmap[1].status = 1;
++
++		rc = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
++					       unmap, 1 + !!kmap_ops);
++		if (rc || unmap[0].status != GNTST_okay ||
++		    unmap[1].status != GNTST_okay)
++			pr_err_once("gnttab unmap failed: rc=%d st0=%d st1=%d\n",
++				    rc, unmap[0].status, unmap[1].status);
+ 	}
+ 
+ out:
+diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
+index 7eab14d56369d..1a3b75652fa4f 100644
+--- a/arch/x86/xen/setup.c
++++ b/arch/x86/xen/setup.c
+@@ -59,18 +59,6 @@ static struct {
+ } xen_remap_buf __initdata __aligned(PAGE_SIZE);
+ static unsigned long xen_remap_mfn __initdata = INVALID_P2M_ENTRY;
+ 
+-/* 
+- * The maximum amount of extra memory compared to the base size.  The
+- * main scaling factor is the size of struct page.  At extreme ratios
+- * of base:extra, all the base memory can be filled with page
+- * structures for the extra memory, leaving no space for anything
+- * else.
+- * 
+- * 10x seems like a reasonable balance between scaling flexibility and
+- * leaving a practically usable system.
+- */
+-#define EXTRA_MEM_RATIO		(10)
+-
+ static bool xen_512gb_limit __initdata = IS_ENABLED(CONFIG_XEN_512GB);
+ 
+ static void __init xen_parse_512gb(void)
+@@ -790,20 +778,13 @@ char * __init xen_memory_setup(void)
+ 		extra_pages += max_pages - max_pfn;
+ 
+ 	/*
+-	 * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
+-	 * factor the base size.  On non-highmem systems, the base
+-	 * size is the full initial memory allocation; on highmem it
+-	 * is limited to the max size of lowmem, so that it doesn't
+-	 * get completely filled.
++	 * Clamp the amount of extra memory to a XEN_EXTRA_MEM_RATIO
++	 * factor the base size.
+ 	 *
+ 	 * Make sure we have no memory above max_pages, as this area
+ 	 * isn't handled by the p2m management.
+-	 *
+-	 * In principle there could be a problem in lowmem systems if
+-	 * the initial memory is also very large with respect to
+-	 * lowmem, but we won't try to deal with that here.
+ 	 */
+-	extra_pages = min3(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
++	extra_pages = min3(XEN_EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
+ 			   extra_pages, max_pages - max_pfn);
+ 	i = 0;
+ 	addr = xen_e820_table.entries[0].addr;
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index eea0f453cfb6e..8609174e036e8 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -199,8 +199,8 @@ static int test_mb_aead_jiffies(struct test_mb_aead_data *data, int enc,
+ 			goto out;
+ 	}
+ 
+-	pr_cont("%d operations in %d seconds (%ld bytes)\n",
+-		bcount * num_mb, secs, (long)bcount * blen * num_mb);
++	pr_cont("%d operations in %d seconds (%llu bytes)\n",
++		bcount * num_mb, secs, (u64)bcount * blen * num_mb);
+ 
+ out:
+ 	kfree(rc);
+@@ -469,8 +469,8 @@ static int test_aead_jiffies(struct aead_request *req, int enc,
+ 			return ret;
+ 	}
+ 
+-	printk("%d operations in %d seconds (%ld bytes)\n",
+-	       bcount, secs, (long)bcount * blen);
++	pr_cont("%d operations in %d seconds (%llu bytes)\n",
++	        bcount, secs, (u64)bcount * blen);
+ 	return 0;
+ }
+ 
+@@ -760,8 +760,8 @@ static int test_mb_ahash_jiffies(struct test_mb_ahash_data *data, int blen,
+ 			goto out;
+ 	}
+ 
+-	pr_cont("%d operations in %d seconds (%ld bytes)\n",
+-		bcount * num_mb, secs, (long)bcount * blen * num_mb);
++	pr_cont("%d operations in %d seconds (%llu bytes)\n",
++		bcount * num_mb, secs, (u64)bcount * blen * num_mb);
+ 
+ out:
+ 	kfree(rc);
+@@ -1197,8 +1197,8 @@ static int test_mb_acipher_jiffies(struct test_mb_skcipher_data *data, int enc,
+ 			goto out;
+ 	}
+ 
+-	pr_cont("%d operations in %d seconds (%ld bytes)\n",
+-		bcount * num_mb, secs, (long)bcount * blen * num_mb);
++	pr_cont("%d operations in %d seconds (%llu bytes)\n",
++		bcount * num_mb, secs, (u64)bcount * blen * num_mb);
+ 
+ out:
+ 	kfree(rc);
+@@ -1435,8 +1435,8 @@ static int test_acipher_jiffies(struct skcipher_request *req, int enc,
+ 			return ret;
+ 	}
+ 
+-	pr_cont("%d operations in %d seconds (%ld bytes)\n",
+-		bcount, secs, (long)bcount * blen);
++	pr_cont("%d operations in %d seconds (%llu bytes)\n",
++		bcount, secs, (u64)bcount * blen);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index bd5c04fabdab6..5e45eddbe2abc 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -78,8 +78,7 @@ struct link_dead_args {
+ #define NBD_RT_HAS_PID_FILE		3
+ #define NBD_RT_HAS_CONFIG_REF		4
+ #define NBD_RT_BOUND			5
+-#define NBD_RT_DESTROY_ON_DISCONNECT	6
+-#define NBD_RT_DISCONNECT_ON_CLOSE	7
++#define NBD_RT_DISCONNECT_ON_CLOSE	6
+ 
+ #define NBD_DESTROY_ON_DISCONNECT	0
+ #define NBD_DISCONNECT_REQUESTED	1
+@@ -1955,12 +1954,21 @@ again:
+ 	if (info->attrs[NBD_ATTR_CLIENT_FLAGS]) {
+ 		u64 flags = nla_get_u64(info->attrs[NBD_ATTR_CLIENT_FLAGS]);
+ 		if (flags & NBD_CFLAG_DESTROY_ON_DISCONNECT) {
+-			set_bit(NBD_RT_DESTROY_ON_DISCONNECT,
+-				&config->runtime_flags);
+-			set_bit(NBD_DESTROY_ON_DISCONNECT, &nbd->flags);
+-			put_dev = true;
++			/*
++			 * We have 1 ref to keep the device around, and then 1
++			 * ref for our current operation here, which will be
++			 * inherited by the config.  If we already have
++			 * DESTROY_ON_DISCONNECT set then we know we don't have
++			 * that extra ref already held so we don't need the
++			 * put_dev.
++			 */
++			if (!test_and_set_bit(NBD_DESTROY_ON_DISCONNECT,
++					      &nbd->flags))
++				put_dev = true;
+ 		} else {
+-			clear_bit(NBD_DESTROY_ON_DISCONNECT, &nbd->flags);
++			if (test_and_clear_bit(NBD_DESTROY_ON_DISCONNECT,
++					       &nbd->flags))
++				refcount_inc(&nbd->refs);
+ 		}
+ 		if (flags & NBD_CFLAG_DISCONNECT_ON_CLOSE) {
+ 			set_bit(NBD_RT_DISCONNECT_ON_CLOSE,
+@@ -2131,15 +2139,13 @@ static int nbd_genl_reconfigure(struct sk_buff *skb, struct genl_info *info)
+ 	if (info->attrs[NBD_ATTR_CLIENT_FLAGS]) {
+ 		u64 flags = nla_get_u64(info->attrs[NBD_ATTR_CLIENT_FLAGS]);
+ 		if (flags & NBD_CFLAG_DESTROY_ON_DISCONNECT) {
+-			if (!test_and_set_bit(NBD_RT_DESTROY_ON_DISCONNECT,
+-					      &config->runtime_flags))
++			if (!test_and_set_bit(NBD_DESTROY_ON_DISCONNECT,
++					      &nbd->flags))
+ 				put_dev = true;
+-			set_bit(NBD_DESTROY_ON_DISCONNECT, &nbd->flags);
+ 		} else {
+-			if (test_and_clear_bit(NBD_RT_DESTROY_ON_DISCONNECT,
+-					       &config->runtime_flags))
++			if (test_and_clear_bit(NBD_DESTROY_ON_DISCONNECT,
++					       &nbd->flags))
+ 				refcount_inc(&nbd->refs);
+-			clear_bit(NBD_DESTROY_ON_DISCONNECT, &nbd->flags);
+ 		}
+ 
+ 		if (flags & NBD_CFLAG_DISCONNECT_ON_CLOSE) {
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index 1b697208d6615..711168451e9e5 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1078,7 +1078,7 @@ static ssize_t mm_stat_show(struct device *dev,
+ 			zram->limit_pages << PAGE_SHIFT,
+ 			max_used << PAGE_SHIFT,
+ 			(u64)atomic64_read(&zram->stats.same_pages),
+-			pool_stats.pages_compacted,
++			atomic_long_read(&pool_stats.pages_compacted),
+ 			(u64)atomic64_read(&zram->stats.huge_pages));
+ 	up_read(&zram->init_lock);
+ 
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index 78d635f1d1567..996729e78105a 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -906,6 +906,11 @@ static int h5_btrtl_setup(struct h5 *h5)
+ 	/* Give the device some time before the hci-core sends it a reset */
+ 	usleep_range(10000, 20000);
+ 
++	/* Enable controller to do both LE scan and BR/EDR inquiry
++	 * simultaneously.
++	 */
++	set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &h5->hu->hdev->quirks);
++
+ out_free:
+ 	btrtl_free(btrtl_dev);
+ 
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index 620f7041db6b5..b36d5879b91e0 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -3350,10 +3350,13 @@ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt)
+ 			fam_type = &family_types[F15_M60H_CPUS];
+ 			pvt->ops = &family_types[F15_M60H_CPUS].ops;
+ 			break;
++		/* Richland is only client */
++		} else if (pvt->model == 0x13) {
++			return NULL;
++		} else {
++			fam_type	= &family_types[F15_CPUS];
++			pvt->ops	= &family_types[F15_CPUS].ops;
+ 		}
+-
+-		fam_type	= &family_types[F15_CPUS];
+-		pvt->ops	= &family_types[F15_CPUS].ops;
+ 		break;
+ 
+ 	case 0x16:
+@@ -3547,6 +3550,7 @@ static int probe_one_instance(unsigned int nid)
+ 	pvt->mc_node_id	= nid;
+ 	pvt->F3 = F3;
+ 
++	ret = -ENODEV;
+ 	fam_type = per_family_init(pvt);
+ 	if (!fam_type)
+ 		goto err_enable;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+index d0aea5e395315..e7678ba8fdcf8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+@@ -558,10 +558,14 @@ static int amdgpu_virt_write_vf2pf_data(struct amdgpu_device *adev)
+ void amdgpu_virt_update_vf2pf_work_item(struct work_struct *work)
+ {
+ 	struct amdgpu_device *adev = container_of(work, struct amdgpu_device, virt.vf2pf_work.work);
++	int ret;
+ 
+-	amdgpu_virt_read_pf2vf_data(adev);
++	ret = amdgpu_virt_read_pf2vf_data(adev);
++	if (ret)
++		goto out;
+ 	amdgpu_virt_write_vf2pf_data(adev);
+ 
++out:
+ 	schedule_delayed_work(&(adev->virt.vf2pf_work), adev->virt.vf2pf_update_interval_ms);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/cz_ih.c b/drivers/gpu/drm/amd/amdgpu/cz_ih.c
+index 1dca0cabc326a..13520d173296f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/cz_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/cz_ih.c
+@@ -193,19 +193,30 @@ static u32 cz_ih_get_wptr(struct amdgpu_device *adev,
+ 
+ 	wptr = le32_to_cpu(*ih->wptr_cpu);
+ 
+-	if (REG_GET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW)) {
+-		wptr = REG_SET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW, 0);
+-		/* When a ring buffer overflow happen start parsing interrupt
+-		 * from the last not overwritten vector (wptr + 16). Hopefully
+-		 * this should allow us to catchup.
+-		 */
+-		dev_warn(adev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n",
+-			wptr, ih->rptr, (wptr + 16) & ih->ptr_mask);
+-		ih->rptr = (wptr + 16) & ih->ptr_mask;
+-		tmp = RREG32(mmIH_RB_CNTL);
+-		tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+-		WREG32(mmIH_RB_CNTL, tmp);
+-	}
++	if (!REG_GET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW))
++		goto out;
++
++	/* Double check that the overflow wasn't already cleared. */
++	wptr = RREG32(mmIH_RB_WPTR);
++
++	if (!REG_GET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW))
++		goto out;
++
++	wptr = REG_SET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW, 0);
++
++	/* When a ring buffer overflow happen start parsing interrupt
++	 * from the last not overwritten vector (wptr + 16). Hopefully
++	 * this should allow us to catchup.
++	 */
++	dev_warn(adev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n",
++		wptr, ih->rptr, (wptr + 16) & ih->ptr_mask);
++	ih->rptr = (wptr + 16) & ih->ptr_mask;
++	tmp = RREG32(mmIH_RB_CNTL);
++	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
++	WREG32(mmIH_RB_CNTL, tmp);
++
++
++out:
+ 	return (wptr & ih->ptr_mask);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/iceland_ih.c b/drivers/gpu/drm/amd/amdgpu/iceland_ih.c
+index a13dd9a51149a..7d165f024f072 100644
+--- a/drivers/gpu/drm/amd/amdgpu/iceland_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/iceland_ih.c
+@@ -193,19 +193,29 @@ static u32 iceland_ih_get_wptr(struct amdgpu_device *adev,
+ 
+ 	wptr = le32_to_cpu(*ih->wptr_cpu);
+ 
+-	if (REG_GET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW)) {
+-		wptr = REG_SET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW, 0);
+-		/* When a ring buffer overflow happen start parsing interrupt
+-		 * from the last not overwritten vector (wptr + 16). Hopefully
+-		 * this should allow us to catchup.
+-		 */
+-		dev_warn(adev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n",
+-			 wptr, ih->rptr, (wptr + 16) & ih->ptr_mask);
+-		ih->rptr = (wptr + 16) & ih->ptr_mask;
+-		tmp = RREG32(mmIH_RB_CNTL);
+-		tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+-		WREG32(mmIH_RB_CNTL, tmp);
+-	}
++	if (!REG_GET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW))
++		goto out;
++
++	/* Double check that the overflow wasn't already cleared. */
++	wptr = RREG32(mmIH_RB_WPTR);
++
++	if (!REG_GET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW))
++		goto out;
++
++	wptr = REG_SET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW, 0);
++	/* When a ring buffer overflow happen start parsing interrupt
++	 * from the last not overwritten vector (wptr + 16). Hopefully
++	 * this should allow us to catchup.
++	 */
++	dev_warn(adev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n",
++		wptr, ih->rptr, (wptr + 16) & ih->ptr_mask);
++	ih->rptr = (wptr + 16) & ih->ptr_mask;
++	tmp = RREG32(mmIH_RB_CNTL);
++	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
++	WREG32(mmIH_RB_CNTL, tmp);
++
++
++out:
+ 	return (wptr & ih->ptr_mask);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/tonga_ih.c b/drivers/gpu/drm/amd/amdgpu/tonga_ih.c
+index e40140bf6699c..db0a3bda13fbe 100644
+--- a/drivers/gpu/drm/amd/amdgpu/tonga_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/tonga_ih.c
+@@ -195,19 +195,30 @@ static u32 tonga_ih_get_wptr(struct amdgpu_device *adev,
+ 
+ 	wptr = le32_to_cpu(*ih->wptr_cpu);
+ 
+-	if (REG_GET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW)) {
+-		wptr = REG_SET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW, 0);
+-		/* When a ring buffer overflow happen start parsing interrupt
+-		 * from the last not overwritten vector (wptr + 16). Hopefully
+-		 * this should allow us to catchup.
+-		 */
+-		dev_warn(adev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n",
+-			 wptr, ih->rptr, (wptr + 16) & ih->ptr_mask);
+-		ih->rptr = (wptr + 16) & ih->ptr_mask;
+-		tmp = RREG32(mmIH_RB_CNTL);
+-		tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+-		WREG32(mmIH_RB_CNTL, tmp);
+-	}
++	if (!REG_GET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW))
++		goto out;
++
++	/* Double check that the overflow wasn't already cleared. */
++	wptr = RREG32(mmIH_RB_WPTR);
++
++	if (!REG_GET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW))
++		goto out;
++
++	wptr = REG_SET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW, 0);
++
++	/* When a ring buffer overflow happen start parsing interrupt
++	 * from the last not overwritten vector (wptr + 16). Hopefully
++	 * this should allow us to catchup.
++	 */
++
++	dev_warn(adev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n",
++		wptr, ih->rptr, (wptr + 16) & ih->ptr_mask);
++	ih->rptr = (wptr + 16) & ih->ptr_mask;
++	tmp = RREG32(mmIH_RB_CNTL);
++	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
++	WREG32(mmIH_RB_CNTL, tmp);
++
++out:
+ 	return (wptr & ih->ptr_mask);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index e1e5d81a5e438..21c7b642a8b4e 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -1454,6 +1454,11 @@ static bool dc_link_construct(struct dc_link *link,
+ 		goto ddc_create_fail;
+ 	}
+ 
++	if (!link->ddc->ddc_pin) {
++		DC_ERROR("Failed to get I2C info for connector!\n");
++		goto ddc_create_fail;
++	}
++
+ 	link->ddc_hw_inst =
+ 		dal_ddc_get_line(dal_ddc_service_get_ddc_pin(link->ddc));
+ 
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
+index 085d1b2fa8c0a..d3485f742acc2 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
+@@ -368,7 +368,6 @@ static void hibmc_pci_remove(struct pci_dev *pdev)
+ 
+ 	drm_dev_unregister(dev);
+ 	hibmc_unload(dev);
+-	drm_dev_put(dev);
+ }
+ 
+ static struct pci_device_id hibmc_pci_table[] = {
+diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
+index 00d6b95e259d6..0c98978e2e55c 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_object.c
++++ b/drivers/gpu/drm/virtio/virtgpu_object.c
+@@ -172,8 +172,9 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
+ 		*nents = shmem->pages->orig_nents;
+ 	}
+ 
+-	*ents = kmalloc_array(*nents, sizeof(struct virtio_gpu_mem_entry),
+-			      GFP_KERNEL);
++	*ents = kvmalloc_array(*nents,
++			       sizeof(struct virtio_gpu_mem_entry),
++			       GFP_KERNEL);
+ 	if (!(*ents)) {
+ 		DRM_ERROR("failed to allocate ent list\n");
+ 		return -ENOMEM;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index fc0e90915678a..5c2107ce7f6e1 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -502,7 +502,7 @@ static void rtrs_clt_recv_done(struct rtrs_clt_con *con, struct ib_wc *wc)
+ 	int err;
+ 	struct rtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+ 
+-	WARN_ON(sess->flags != RTRS_MSG_NEW_RKEY_F);
++	WARN_ON((sess->flags & RTRS_MSG_NEW_RKEY_F) == 0);
+ 	iu = container_of(wc->wr_cqe, struct rtrs_iu,
+ 			  cqe);
+ 	err = rtrs_iu_post_recv(&con->c, iu);
+@@ -522,7 +522,7 @@ static void rtrs_clt_rkey_rsp_done(struct rtrs_clt_con *con, struct ib_wc *wc)
+ 	u32 buf_id;
+ 	int err;
+ 
+-	WARN_ON(sess->flags != RTRS_MSG_NEW_RKEY_F);
++	WARN_ON((sess->flags & RTRS_MSG_NEW_RKEY_F) == 0);
+ 
+ 	iu = container_of(wc->wr_cqe, struct rtrs_iu, cqe);
+ 
+@@ -629,12 +629,12 @@ static void rtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		} else if (imm_type == RTRS_HB_MSG_IMM) {
+ 			WARN_ON(con->c.cid);
+ 			rtrs_send_hb_ack(&sess->s);
+-			if (sess->flags == RTRS_MSG_NEW_RKEY_F)
++			if (sess->flags & RTRS_MSG_NEW_RKEY_F)
+ 				return  rtrs_clt_recv_done(con, wc);
+ 		} else if (imm_type == RTRS_HB_ACK_IMM) {
+ 			WARN_ON(con->c.cid);
+ 			sess->s.hb_missed_cnt = 0;
+-			if (sess->flags == RTRS_MSG_NEW_RKEY_F)
++			if (sess->flags & RTRS_MSG_NEW_RKEY_F)
+ 				return  rtrs_clt_recv_done(con, wc);
+ 		} else {
+ 			rtrs_wrn(con->c.sess, "Unknown IMM type %u\n",
+@@ -662,7 +662,7 @@ static void rtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		WARN_ON(!(wc->wc_flags & IB_WC_WITH_INVALIDATE ||
+ 			  wc->wc_flags & IB_WC_WITH_IMM));
+ 		WARN_ON(wc->wr_cqe->done != rtrs_clt_rdma_done);
+-		if (sess->flags == RTRS_MSG_NEW_RKEY_F) {
++		if (sess->flags & RTRS_MSG_NEW_RKEY_F) {
+ 			if (wc->wc_flags & IB_WC_WITH_INVALIDATE)
+ 				return  rtrs_clt_recv_done(con, wc);
+ 
+@@ -672,7 +672,6 @@ static void rtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	case IB_WC_RDMA_WRITE:
+ 		/*
+ 		 * post_send() RDMA write completions of IO reqs (read/write)
+-		 * and hb
+ 		 */
+ 		break;
+ 
+@@ -688,7 +687,7 @@ static int post_recv_io(struct rtrs_clt_con *con, size_t q_size)
+ 	struct rtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+ 
+ 	for (i = 0; i < q_size; i++) {
+-		if (sess->flags == RTRS_MSG_NEW_RKEY_F) {
++		if (sess->flags & RTRS_MSG_NEW_RKEY_F) {
+ 			struct rtrs_iu *iu = &con->rsp_ius[i];
+ 
+ 			err = rtrs_iu_post_recv(&con->c, iu);
+@@ -1580,7 +1579,7 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
+ 			      sess->queue_depth * 3 + 1);
+ 	}
+ 	/* alloc iu to recv new rkey reply when server reports flags set */
+-	if (sess->flags == RTRS_MSG_NEW_RKEY_F || con->c.cid == 0) {
++	if (sess->flags & RTRS_MSG_NEW_RKEY_F || con->c.cid == 0) {
+ 		con->rsp_ius = rtrs_iu_alloc(max_recv_wr, sizeof(*rsp),
+ 					      GFP_KERNEL, sess->s.dev->ib_dev,
+ 					      DMA_FROM_DEVICE,
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index f009a6907169c..b690a3b8f94d9 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -836,7 +836,7 @@ static int process_info_req(struct rtrs_srv_con *con,
+ 		rwr[mri].wr.opcode = IB_WR_REG_MR;
+ 		rwr[mri].wr.wr_cqe = &local_reg_cqe;
+ 		rwr[mri].wr.num_sge = 0;
+-		rwr[mri].wr.send_flags = mri ? 0 : IB_SEND_SIGNALED;
++		rwr[mri].wr.send_flags = 0;
+ 		rwr[mri].mr = mr;
+ 		rwr[mri].key = mr->rkey;
+ 		rwr[mri].access = (IB_ACCESS_LOCAL_WRITE |
+@@ -1260,7 +1260,6 @@ static void rtrs_srv_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	case IB_WC_SEND:
+ 		/*
+ 		 * post_send() RDMA write completions of IO reqs (read/write)
+-		 * and hb
+ 		 */
+ 		atomic_add(srv->queue_depth, &con->sq_wr_avail);
+ 
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
+index 23e5452e10c46..a3e1a027f8081 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs.c
+@@ -325,7 +325,7 @@ void rtrs_send_hb_ack(struct rtrs_sess *sess)
+ 
+ 	imm = rtrs_to_imm(RTRS_HB_ACK_IMM, 0);
+ 	err = rtrs_post_rdma_write_imm_empty(usr_con, sess->hb_cqe, imm,
+-					      IB_SEND_SIGNALED, NULL);
++					     0, NULL);
+ 	if (err) {
+ 		sess->hb_err_handler(usr_con);
+ 		return;
+@@ -354,7 +354,7 @@ static void hb_work(struct work_struct *work)
+ 	}
+ 	imm = rtrs_to_imm(RTRS_HB_MSG_IMM, 0);
+ 	err = rtrs_post_rdma_write_imm_empty(usr_con, sess->hb_cqe, imm,
+-					      IB_SEND_SIGNALED, NULL);
++					     0, NULL);
+ 	if (err) {
+ 		sess->hb_err_handler(usr_con);
+ 		return;
+diff --git a/drivers/input/mouse/elan_i2c.h b/drivers/input/mouse/elan_i2c.h
+index 36e3cd9086716..e12da5b024b05 100644
+--- a/drivers/input/mouse/elan_i2c.h
++++ b/drivers/input/mouse/elan_i2c.h
+@@ -28,6 +28,22 @@
+ 
+ #define ETP_FEATURE_REPORT_MK	BIT(0)
+ 
++#define ETP_REPORT_ID		0x5D
++#define ETP_TP_REPORT_ID	0x5E
++#define ETP_TP_REPORT_ID2	0x5F
++#define ETP_REPORT_ID2		0x60	/* High precision report */
++
++#define ETP_REPORT_ID_OFFSET	2
++#define ETP_TOUCH_INFO_OFFSET	3
++#define ETP_FINGER_DATA_OFFSET	4
++#define ETP_HOVER_INFO_OFFSET	30
++#define ETP_MK_DATA_OFFSET	33	/* For high precision reports */
++
++#define ETP_MAX_REPORT_LEN	39
++
++#define ETP_MAX_FINGERS		5
++#define ETP_FINGER_DATA_LEN	5
++
+ /* IAP Firmware handling */
+ #define ETP_PRODUCT_ID_FORMAT_STRING	"%d.0"
+ #define ETP_FW_NAME		"elan_i2c_" ETP_PRODUCT_ID_FORMAT_STRING ".bin"
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 61ed3f5ca2199..11a9ee32c98cc 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -47,18 +47,6 @@
+ #define ETP_FINGER_WIDTH	15
+ #define ETP_RETRY_COUNT		3
+ 
+-#define ETP_MAX_FINGERS		5
+-#define ETP_FINGER_DATA_LEN	5
+-#define ETP_REPORT_ID		0x5D
+-#define ETP_REPORT_ID2		0x60	/* High precision report */
+-#define ETP_TP_REPORT_ID	0x5E
+-#define ETP_REPORT_ID_OFFSET	2
+-#define ETP_TOUCH_INFO_OFFSET	3
+-#define ETP_FINGER_DATA_OFFSET	4
+-#define ETP_HOVER_INFO_OFFSET	30
+-#define ETP_MK_DATA_OFFSET	33	/* For high precision reports */
+-#define ETP_MAX_REPORT_LEN	39
+-
+ /* The main device structure */
+ struct elan_tp_data {
+ 	struct i2c_client	*client;
+@@ -1076,6 +1064,7 @@ static irqreturn_t elan_isr(int irq, void *dev_id)
+ 		elan_report_absolute(data, report, true);
+ 		break;
+ 	case ETP_TP_REPORT_ID:
++	case ETP_TP_REPORT_ID2:
+ 		elan_report_trackpoint(data, report);
+ 		break;
+ 	default:
+diff --git a/drivers/input/mouse/elan_i2c_smbus.c b/drivers/input/mouse/elan_i2c_smbus.c
+index 1820f1cfc1dc4..6dc148b9d959e 100644
+--- a/drivers/input/mouse/elan_i2c_smbus.c
++++ b/drivers/input/mouse/elan_i2c_smbus.c
+@@ -45,6 +45,7 @@
+ #define ETP_SMBUS_CALIBRATE_QUERY	0xC5
+ 
+ #define ETP_SMBUS_REPORT_LEN		32
++#define ETP_SMBUS_REPORT_LEN2		7
+ #define ETP_SMBUS_REPORT_OFFSET		2
+ #define ETP_SMBUS_HELLOPACKET_LEN	5
+ #define ETP_SMBUS_IAP_PASSWORD		0x1234
+@@ -497,10 +498,13 @@ static int elan_smbus_get_report(struct i2c_client *client,
+ 		return len;
+ 	}
+ 
+-	if (len != ETP_SMBUS_REPORT_LEN) {
++	if (report[ETP_REPORT_ID_OFFSET] == ETP_TP_REPORT_ID2)
++		report_len = ETP_SMBUS_REPORT_LEN2;
++
++	if (len != report_len) {
+ 		dev_err(&client->dev,
+ 			"wrong report length (%d vs %d expected)\n",
+-			len, ETP_SMBUS_REPORT_LEN);
++			len, report_len);
+ 		return -EIO;
+ 	}
+ 
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 90f8765f9efc8..e0e53a9a816fc 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -89,6 +89,47 @@ static int elantech_ps2_command(struct psmouse *psmouse,
+ 	return rc;
+ }
+ 
++/*
++ * Send an Elantech style special command to read 3 bytes from a register
++ */
++static int elantech_read_reg_params(struct psmouse *psmouse, u8 reg, u8 *param)
++{
++	if (elantech_ps2_command(psmouse, NULL, ETP_PS2_CUSTOM_COMMAND) ||
++	    elantech_ps2_command(psmouse, NULL, ETP_REGISTER_READWRITE) ||
++	    elantech_ps2_command(psmouse, NULL, ETP_PS2_CUSTOM_COMMAND) ||
++	    elantech_ps2_command(psmouse, NULL, reg) ||
++	    elantech_ps2_command(psmouse, param, PSMOUSE_CMD_GETINFO)) {
++		psmouse_err(psmouse,
++			    "failed to read register %#02x\n", reg);
++		return -EIO;
++	}
++
++	return 0;
++}
++
++/*
++ * Send an Elantech style special command to write a register with a parameter
++ */
++static int elantech_write_reg_params(struct psmouse *psmouse, u8 reg, u8 *param)
++{
++	if (elantech_ps2_command(psmouse, NULL, ETP_PS2_CUSTOM_COMMAND) ||
++	    elantech_ps2_command(psmouse, NULL, ETP_REGISTER_READWRITE) ||
++	    elantech_ps2_command(psmouse, NULL, ETP_PS2_CUSTOM_COMMAND) ||
++	    elantech_ps2_command(psmouse, NULL, reg) ||
++	    elantech_ps2_command(psmouse, NULL, ETP_PS2_CUSTOM_COMMAND) ||
++	    elantech_ps2_command(psmouse, NULL, param[0]) ||
++	    elantech_ps2_command(psmouse, NULL, ETP_PS2_CUSTOM_COMMAND) ||
++	    elantech_ps2_command(psmouse, NULL, param[1]) ||
++	    elantech_ps2_command(psmouse, NULL, PSMOUSE_CMD_SETSCALE11)) {
++		psmouse_err(psmouse,
++			    "failed to write register %#02x with value %#02x%#02x\n",
++			    reg, param[0], param[1]);
++		return -EIO;
++	}
++
++	return 0;
++}
++
+ /*
+  * Send an Elantech style special command to read a value from a register
+  */
+@@ -1529,19 +1570,35 @@ static const struct dmi_system_id no_hw_res_dmi_table[] = {
+ 	{ }
+ };
+ 
++/*
++ * Change Report id 0x5E to 0x5F.
++ */
++static int elantech_change_report_id(struct psmouse *psmouse)
++{
++	unsigned char param[2] = { 0x10, 0x03 };
++
++	if (elantech_write_reg_params(psmouse, 0x7, param) ||
++	    elantech_read_reg_params(psmouse, 0x7, param) ||
++	    param[0] != 0x10 || param[1] != 0x03) {
++		psmouse_err(psmouse, "Unable to change report ID to 0x5f.\n");
++		return -EIO;
++	}
++
++	return 0;
++}
+ /*
+  * determine hardware version and set some properties according to it.
+  */
+ static int elantech_set_properties(struct elantech_device_info *info)
+ {
+ 	/* This represents the version of IC body. */
+-	int ver = (info->fw_version & 0x0f0000) >> 16;
++	info->ic_version = (info->fw_version & 0x0f0000) >> 16;
+ 
+ 	/* Early version of Elan touchpads doesn't obey the rule. */
+ 	if (info->fw_version < 0x020030 || info->fw_version == 0x020600)
+ 		info->hw_version = 1;
+ 	else {
+-		switch (ver) {
++		switch (info->ic_version) {
+ 		case 2:
+ 		case 4:
+ 			info->hw_version = 2;
+@@ -1557,6 +1614,11 @@ static int elantech_set_properties(struct elantech_device_info *info)
+ 		}
+ 	}
+ 
++	/* Get information pattern for hw_version 4 */
++	info->pattern = 0x00;
++	if (info->ic_version == 0x0f && (info->fw_version & 0xff) <= 0x02)
++		info->pattern = info->fw_version & 0xff;
++
+ 	/* decide which send_cmd we're gonna use early */
+ 	info->send_cmd = info->hw_version >= 3 ? elantech_send_cmd :
+ 						 synaptics_send_cmd;
+@@ -1598,6 +1660,7 @@ static int elantech_query_info(struct psmouse *psmouse,
+ {
+ 	unsigned char param[3];
+ 	unsigned char traces;
++	unsigned char ic_body[3];
+ 
+ 	memset(info, 0, sizeof(*info));
+ 
+@@ -1640,6 +1703,21 @@ static int elantech_query_info(struct psmouse *psmouse,
+ 			     info->samples[2]);
+ 	}
+ 
++	if (info->pattern > 0x00 && info->ic_version == 0xf) {
++		if (info->send_cmd(psmouse, ETP_ICBODY_QUERY, ic_body)) {
++			psmouse_err(psmouse, "failed to query ic body\n");
++			return -EINVAL;
++		}
++		info->ic_version = be16_to_cpup((__be16 *)ic_body);
++		psmouse_info(psmouse,
++			     "Elan ic body: %#04x, current fw version: %#02x\n",
++			     info->ic_version, ic_body[2]);
++	}
++
++	info->product_id = be16_to_cpup((__be16 *)info->samples);
++	if (info->pattern == 0x00)
++		info->product_id &= 0xff;
++
+ 	if (info->samples[1] == 0x74 && info->hw_version == 0x03) {
+ 		/*
+ 		 * This module has a bug which makes absolute mode
+@@ -1654,6 +1732,23 @@ static int elantech_query_info(struct psmouse *psmouse,
+ 	/* The MSB indicates the presence of the trackpoint */
+ 	info->has_trackpoint = (info->capabilities[0] & 0x80) == 0x80;
+ 
++	if (info->has_trackpoint && info->ic_version == 0x0011 &&
++	    (info->product_id == 0x08 || info->product_id == 0x09 ||
++	     info->product_id == 0x0d || info->product_id == 0x0e)) {
++		/*
++		 * This module has a bug which makes trackpoint in SMBus
++		 * mode return invalid data unless trackpoint is switched
++		 * from using 0x5e reports to 0x5f. If we are not able to
++		 * make the switch, let's abort initialization so we'll be
++		 * using standard PS/2 protocol.
++		 */
++		if (elantech_change_report_id(psmouse)) {
++			psmouse_info(psmouse,
++				     "Trackpoint report is broken, forcing standard PS/2 protocol\n");
++			return -ENODEV;
++		}
++	}
++
+ 	info->x_res = 31;
+ 	info->y_res = 31;
+ 	if (info->hw_version == 4) {
+diff --git a/drivers/input/mouse/elantech.h b/drivers/input/mouse/elantech.h
+index e0a3e59d4f1bb..571e6ca11d33b 100644
+--- a/drivers/input/mouse/elantech.h
++++ b/drivers/input/mouse/elantech.h
+@@ -18,6 +18,7 @@
+ #define ETP_CAPABILITIES_QUERY		0x02
+ #define ETP_SAMPLE_QUERY		0x03
+ #define ETP_RESOLUTION_QUERY		0x04
++#define ETP_ICBODY_QUERY		0x05
+ 
+ /*
+  * Command values for register reading or writing
+@@ -140,7 +141,10 @@ struct elantech_device_info {
+ 	unsigned char samples[3];
+ 	unsigned char debug;
+ 	unsigned char hw_version;
++	unsigned char pattern;
+ 	unsigned int fw_version;
++	unsigned int ic_version;
++	unsigned int product_id;
+ 	unsigned int x_min;
+ 	unsigned int y_min;
+ 	unsigned int x_max;
+diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
+index c8d63673e131d..5642595a057ec 100644
+--- a/drivers/media/rc/mceusb.c
++++ b/drivers/media/rc/mceusb.c
+@@ -701,11 +701,18 @@ static void mceusb_dev_printdata(struct mceusb_dev *ir, u8 *buf, int buf_len,
+ 				data[0], data[1]);
+ 			break;
+ 		case MCE_RSP_EQIRCFS:
++			if (!data[0] && !data[1]) {
++				dev_dbg(dev, "%s: no carrier", inout);
++				break;
++			}
++			// prescaler should make sense
++			if (data[0] > 8)
++				break;
+ 			period = DIV_ROUND_CLOSEST((1U << data[0] * 2) *
+ 						   (data[1] + 1), 10);
+ 			if (!period)
+ 				break;
+-			carrier = (1000 * 1000) / period;
++			carrier = USEC_PER_SEC / period;
+ 			dev_dbg(dev, "%s carrier of %u Hz (period %uus)",
+ 				 inout, carrier, period);
+ 			break;
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index ddb9eaa11be71..5ad5282641350 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -1028,7 +1028,10 @@ static struct uvc_entity *uvc_alloc_entity(u16 type, u8 id,
+ 	unsigned int i;
+ 
+ 	extra_size = roundup(extra_size, sizeof(*entity->pads));
+-	num_inputs = (type & UVC_TERM_OUTPUT) ? num_pads : num_pads - 1;
++	if (num_pads)
++		num_inputs = type & UVC_TERM_OUTPUT ? num_pads : num_pads - 1;
++	else
++		num_inputs = 0;
+ 	size = sizeof(*entity) + extra_size + sizeof(*entity->pads) * num_pads
+ 	     + num_inputs;
+ 	entity = kzalloc(size, GFP_KERNEL);
+@@ -1044,7 +1047,7 @@ static struct uvc_entity *uvc_alloc_entity(u16 type, u8 id,
+ 
+ 	for (i = 0; i < num_inputs; ++i)
+ 		entity->pads[i].flags = MEDIA_PAD_FL_SINK;
+-	if (!UVC_ENTITY_IS_OTERM(entity))
++	if (!UVC_ENTITY_IS_OTERM(entity) && num_pads)
+ 		entity->pads[num_pads-1].flags = MEDIA_PAD_FL_SOURCE;
+ 
+ 	entity->bNrInPins = num_inputs;
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index bd7f330c941c4..cfe422d9f439b 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -1987,7 +1987,8 @@ static int std_validate(const struct v4l2_ctrl *ctrl, u32 idx,
+ 	case V4L2_CTRL_TYPE_INTEGER_MENU:
+ 		if (ptr.p_s32[idx] < ctrl->minimum || ptr.p_s32[idx] > ctrl->maximum)
+ 			return -ERANGE;
+-		if (ctrl->menu_skip_mask & (1ULL << ptr.p_s32[idx]))
++		if (ptr.p_s32[idx] < BITS_PER_LONG_LONG &&
++		    (ctrl->menu_skip_mask & BIT_ULL(ptr.p_s32[idx])))
+ 			return -EINVAL;
+ 		if (ctrl->type == V4L2_CTRL_TYPE_MENU &&
+ 		    ctrl->qmenu[ptr.p_s32[idx]][0] == '\0')
+diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
+index eeff398fbdcc1..9eda8b91d17af 100644
+--- a/drivers/media/v4l2-core/v4l2-ioctl.c
++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
+@@ -3251,7 +3251,7 @@ video_usercopy(struct file *file, unsigned int orig_cmd, unsigned long arg,
+ 	       v4l2_kioctl func)
+ {
+ 	char	sbuf[128];
+-	void    *mbuf = NULL;
++	void    *mbuf = NULL, *array_buf = NULL;
+ 	void	*parg = (void *)arg;
+ 	long	err  = -EINVAL;
+ 	bool	has_array_args;
+@@ -3286,20 +3286,14 @@ video_usercopy(struct file *file, unsigned int orig_cmd, unsigned long arg,
+ 	has_array_args = err;
+ 
+ 	if (has_array_args) {
+-		/*
+-		 * When adding new types of array args, make sure that the
+-		 * parent argument to ioctl (which contains the pointer to the
+-		 * array) fits into sbuf (so that mbuf will still remain
+-		 * unused up to here).
+-		 */
+-		mbuf = kvmalloc(array_size, GFP_KERNEL);
++		array_buf = kvmalloc(array_size, GFP_KERNEL);
+ 		err = -ENOMEM;
+-		if (NULL == mbuf)
++		if (array_buf == NULL)
+ 			goto out_array_args;
+ 		err = -EFAULT;
+-		if (copy_from_user(mbuf, user_ptr, array_size))
++		if (copy_from_user(array_buf, user_ptr, array_size))
+ 			goto out_array_args;
+-		*kernel_ptr = mbuf;
++		*kernel_ptr = array_buf;
+ 	}
+ 
+ 	/* Handles IOCTL */
+@@ -3318,7 +3312,7 @@ video_usercopy(struct file *file, unsigned int orig_cmd, unsigned long arg,
+ 
+ 	if (has_array_args) {
+ 		*kernel_ptr = (void __force *)user_ptr;
+-		if (copy_to_user(user_ptr, mbuf, array_size))
++		if (copy_to_user(user_ptr, array_buf, array_size))
+ 			err = -EFAULT;
+ 		goto out_array_args;
+ 	}
+@@ -3333,6 +3327,7 @@ out_array_args:
+ 	if (video_put_user((void __user *)arg, parg, orig_cmd))
+ 		err = -EFAULT;
+ out:
++	kvfree(array_buf);
+ 	kvfree(mbuf);
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c
+index dd5c8a9038bb7..a60ce90305819 100644
+--- a/drivers/net/ethernet/atheros/ag71xx.c
++++ b/drivers/net/ethernet/atheros/ag71xx.c
+@@ -223,8 +223,6 @@
+ #define AG71XX_REG_RX_SM	0x01b0
+ #define AG71XX_REG_TX_SM	0x01b4
+ 
+-#define ETH_SWITCH_HEADER_LEN	2
+-
+ #define AG71XX_DEFAULT_MSG_ENABLE	\
+ 	(NETIF_MSG_DRV			\
+ 	| NETIF_MSG_PROBE		\
+@@ -933,7 +931,7 @@ static void ag71xx_hw_setup(struct ag71xx *ag)
+ 
+ static unsigned int ag71xx_max_frame_len(unsigned int mtu)
+ {
+-	return ETH_SWITCH_HEADER_LEN + ETH_HLEN + VLAN_HLEN + mtu + ETH_FCS_LEN;
++	return ETH_HLEN + VLAN_HLEN + mtu + ETH_FCS_LEN;
+ }
+ 
+ static void ag71xx_hw_set_macaddr(struct ag71xx *ag, unsigned char *mac)
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index 58014feedf6c8..fb954e8141802 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -44,6 +44,17 @@ static void sfp_quirk_2500basex(const struct sfp_eeprom_id *id,
+ 	phylink_set(modes, 2500baseX_Full);
+ }
+ 
++static void sfp_quirk_ubnt_uf_instant(const struct sfp_eeprom_id *id,
++				      unsigned long *modes)
++{
++	/* Ubiquiti U-Fiber Instant module claims that support all transceiver
++	 * types including 10G Ethernet which is not truth. So clear all claimed
++	 * modes and set only one mode which module supports: 1000baseX_Full.
++	 */
++	phylink_zero(modes);
++	phylink_set(modes, 1000baseX_Full);
++}
++
+ static const struct sfp_quirk sfp_quirks[] = {
+ 	{
+ 		// Alcatel Lucent G-010S-P can operate at 2500base-X, but
+@@ -63,6 +74,10 @@ static const struct sfp_quirk sfp_quirks[] = {
+ 		.vendor = "HUAWEI",
+ 		.part = "MA5671A",
+ 		.modes = sfp_quirk_2500basex,
++	}, {
++		.vendor = "UBNT",
++		.part = "UF-INSTANT",
++		.modes = sfp_quirk_ubnt_uf_instant,
+ 	},
+ };
+ 
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 34aa196b7465c..7a680b5177f5e 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -219,6 +219,7 @@ struct sfp {
+ 	struct sfp_bus *sfp_bus;
+ 	struct phy_device *mod_phy;
+ 	const struct sff_data *type;
++	size_t i2c_block_size;
+ 	u32 max_power_mW;
+ 
+ 	unsigned int (*get_state)(struct sfp *);
+@@ -272,8 +273,21 @@ static const struct sff_data sff_data = {
+ 
+ static bool sfp_module_supported(const struct sfp_eeprom_id *id)
+ {
+-	return id->base.phys_id == SFF8024_ID_SFP &&
+-	       id->base.phys_ext_id == SFP_PHYS_EXT_ID_SFP;
++	if (id->base.phys_id == SFF8024_ID_SFP &&
++	    id->base.phys_ext_id == SFP_PHYS_EXT_ID_SFP)
++		return true;
++
++	/* SFP GPON module Ubiquiti U-Fiber Instant has in its EEPROM stored
++	 * phys id SFF instead of SFP. Therefore mark this module explicitly
++	 * as supported based on vendor name and pn match.
++	 */
++	if (id->base.phys_id == SFF8024_ID_SFF_8472 &&
++	    id->base.phys_ext_id == SFP_PHYS_EXT_ID_SFP &&
++	    !memcmp(id->base.vendor_name, "UBNT            ", 16) &&
++	    !memcmp(id->base.vendor_pn, "UF-INSTANT      ", 16))
++		return true;
++
++	return false;
+ }
+ 
+ static const struct sff_data sfp_data = {
+@@ -336,6 +350,7 @@ static int sfp_i2c_read(struct sfp *sfp, bool a2, u8 dev_addr, void *buf,
+ {
+ 	struct i2c_msg msgs[2];
+ 	u8 bus_addr = a2 ? 0x51 : 0x50;
++	size_t block_size = sfp->i2c_block_size;
+ 	size_t this_len;
+ 	int ret;
+ 
+@@ -350,8 +365,8 @@ static int sfp_i2c_read(struct sfp *sfp, bool a2, u8 dev_addr, void *buf,
+ 
+ 	while (len) {
+ 		this_len = len;
+-		if (this_len > 16)
+-			this_len = 16;
++		if (this_len > block_size)
++			this_len = block_size;
+ 
+ 		msgs[1].len = this_len;
+ 
+@@ -1272,6 +1287,20 @@ static void sfp_hwmon_probe(struct work_struct *work)
+ 	struct sfp *sfp = container_of(work, struct sfp, hwmon_probe.work);
+ 	int err, i;
+ 
++	/* hwmon interface needs to access 16bit registers in atomic way to
++	 * guarantee coherency of the diagnostic monitoring data. If it is not
++	 * possible to guarantee coherency because EEPROM is broken in such way
++	 * that does not support atomic 16bit read operation then we have to
++	 * skip registration of hwmon device.
++	 */
++	if (sfp->i2c_block_size < 2) {
++		dev_info(sfp->dev,
++			 "skipping hwmon device registration due to broken EEPROM\n");
++		dev_info(sfp->dev,
++			 "diagnostic EEPROM area cannot be read atomically to guarantee data coherency\n");
++		return;
++	}
++
+ 	err = sfp_read(sfp, true, 0, &sfp->diag, sizeof(sfp->diag));
+ 	if (err < 0) {
+ 		if (sfp->hwmon_tries--) {
+@@ -1632,6 +1661,32 @@ static int sfp_sm_mod_hpower(struct sfp *sfp, bool enable)
+ 	return 0;
+ }
+ 
++/* GPON modules based on Realtek RTL8672 and RTL9601C chips (e.g. V-SOL
++ * V2801F, CarlitoxxPro CPGOS03-0490, Ubiquiti U-Fiber Instant, ...) do
++ * not support multibyte reads from the EEPROM. Each multi-byte read
++ * operation returns just one byte of EEPROM followed by zeros. There is
++ * no way to identify which modules are using Realtek RTL8672 and RTL9601C
++ * chips. Moreover every OEM of V-SOL V2801F module puts its own vendor
++ * name and vendor id into EEPROM, so there is even no way to detect if
++ * module is V-SOL V2801F. Therefore check for those zeros in the read
++ * data and then based on check switch to reading EEPROM to one byte
++ * at a time.
++ */
++static bool sfp_id_needs_byte_io(struct sfp *sfp, void *buf, size_t len)
++{
++	size_t i, block_size = sfp->i2c_block_size;
++
++	/* Already using byte IO */
++	if (block_size == 1)
++		return false;
++
++	for (i = 1; i < len; i += block_size) {
++		if (memchr_inv(buf + i, '\0', min(block_size - 1, len - i)))
++			return false;
++	}
++	return true;
++}
++
+ static int sfp_cotsworks_fixup_check(struct sfp *sfp, struct sfp_eeprom_id *id)
+ {
+ 	u8 check;
+@@ -1673,18 +1728,51 @@ static int sfp_sm_mod_probe(struct sfp *sfp, bool report)
+ 	u8 check;
+ 	int ret;
+ 
+-	ret = sfp_read(sfp, false, 0, &id, sizeof(id));
++	/* Some SFP modules and also some Linux I2C drivers do not like reads
++	 * longer than 16 bytes, so read the EEPROM in chunks of 16 bytes at
++	 * a time.
++	 */
++	sfp->i2c_block_size = 16;
++
++	ret = sfp_read(sfp, false, 0, &id.base, sizeof(id.base));
+ 	if (ret < 0) {
+ 		if (report)
+ 			dev_err(sfp->dev, "failed to read EEPROM: %d\n", ret);
+ 		return -EAGAIN;
+ 	}
+ 
+-	if (ret != sizeof(id)) {
++	if (ret != sizeof(id.base)) {
+ 		dev_err(sfp->dev, "EEPROM short read: %d\n", ret);
+ 		return -EAGAIN;
+ 	}
+ 
++	/* Some SFP modules (e.g. Nokia 3FE46541AA) lock up if read from
++	 * address 0x51 is just one byte at a time. Also SFF-8472 requires
++	 * that EEPROM supports atomic 16bit read operation for diagnostic
++	 * fields, so do not switch to one byte reading at a time unless it
++	 * is really required and we have no other option.
++	 */
++	if (sfp_id_needs_byte_io(sfp, &id.base, sizeof(id.base))) {
++		dev_info(sfp->dev,
++			 "Detected broken RTL8672/RTL9601C emulated EEPROM\n");
++		dev_info(sfp->dev,
++			 "Switching to reading EEPROM to one byte at a time\n");
++		sfp->i2c_block_size = 1;
++
++		ret = sfp_read(sfp, false, 0, &id.base, sizeof(id.base));
++		if (ret < 0) {
++			if (report)
++				dev_err(sfp->dev, "failed to read EEPROM: %d\n",
++					ret);
++			return -EAGAIN;
++		}
++
++		if (ret != sizeof(id.base)) {
++			dev_err(sfp->dev, "EEPROM short read: %d\n", ret);
++			return -EAGAIN;
++		}
++	}
++
+ 	/* Cotsworks do not seem to update the checksums when they
+ 	 * do the final programming with the final module part number,
+ 	 * serial number and date code.
+@@ -1719,6 +1807,18 @@ static int sfp_sm_mod_probe(struct sfp *sfp, bool report)
+ 		}
+ 	}
+ 
++	ret = sfp_read(sfp, false, SFP_CC_BASE + 1, &id.ext, sizeof(id.ext));
++	if (ret < 0) {
++		if (report)
++			dev_err(sfp->dev, "failed to read EEPROM: %d\n", ret);
++		return -EAGAIN;
++	}
++
++	if (ret != sizeof(id.ext)) {
++		dev_err(sfp->dev, "EEPROM short read: %d\n", ret);
++		return -EAGAIN;
++	}
++
+ 	check = sfp_check(&id.ext, sizeof(id.ext) - 1);
+ 	if (check != id.ext.cc_ext) {
+ 		if (cotsworks) {
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index 1f4bdd94407a9..f549d3a8e59c0 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -1093,10 +1093,9 @@ static long tap_ioctl(struct file *file, unsigned int cmd,
+ 			return -ENOLINK;
+ 		}
+ 		ret = 0;
+-		u = tap->dev->type;
++		dev_get_mac_address(&sa, dev_net(tap->dev), tap->dev->name);
+ 		if (copy_to_user(&ifr->ifr_name, tap->dev->name, IFNAMSIZ) ||
+-		    copy_to_user(&ifr->ifr_hwaddr.sa_data, tap->dev->dev_addr, ETH_ALEN) ||
+-		    put_user(u, &ifr->ifr_hwaddr.sa_family))
++		    copy_to_user(&ifr->ifr_hwaddr, &sa, sizeof(sa)))
+ 			ret = -EFAULT;
+ 		tap_put_tap_dev(tap);
+ 		rtnl_unlock();
+@@ -1111,7 +1110,7 @@ static long tap_ioctl(struct file *file, unsigned int cmd,
+ 			rtnl_unlock();
+ 			return -ENOLINK;
+ 		}
+-		ret = dev_set_mac_address(tap->dev, &sa, NULL);
++		ret = dev_set_mac_address_user(tap->dev, &sa, NULL);
+ 		tap_put_tap_dev(tap);
+ 		rtnl_unlock();
+ 		return ret;
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 1ac80756e5afa..accde25a66a01 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -3157,15 +3157,14 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
+ 
+ 	case SIOCGIFHWADDR:
+ 		/* Get hw address */
+-		memcpy(ifr.ifr_hwaddr.sa_data, tun->dev->dev_addr, ETH_ALEN);
+-		ifr.ifr_hwaddr.sa_family = tun->dev->type;
++		dev_get_mac_address(&ifr.ifr_hwaddr, net, tun->dev->name);
+ 		if (copy_to_user(argp, &ifr, ifreq_len))
+ 			ret = -EFAULT;
+ 		break;
+ 
+ 	case SIOCSIFHWADDR:
+ 		/* Set hw address */
+-		ret = dev_set_mac_address(tun->dev, &ifr.ifr_hwaddr, NULL);
++		ret = dev_set_mac_address_user(tun->dev, &ifr.ifr_hwaddr, NULL);
+ 		break;
+ 
+ 	case TUNGETSNDBUF:
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index b223536e07bed..c7320861943b4 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1258,6 +1258,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x19d2, 0x1255, 4)},
+ 	{QMI_FIXED_INTF(0x19d2, 0x1256, 4)},
+ 	{QMI_FIXED_INTF(0x19d2, 0x1270, 5)},	/* ZTE MF667 */
++	{QMI_FIXED_INTF(0x19d2, 0x1275, 3)},	/* ZTE P685M */
+ 	{QMI_FIXED_INTF(0x19d2, 0x1401, 2)},
+ 	{QMI_FIXED_INTF(0x19d2, 0x1402, 2)},	/* ZTE MF60 */
+ 	{QMI_FIXED_INTF(0x19d2, 0x1424, 2)},
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 4bc84cc5e824b..f5c0f9bac8404 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -3763,23 +3763,16 @@ bool ath10k_mac_tx_frm_has_freq(struct ath10k *ar)
+ static int ath10k_mac_tx_wmi_mgmt(struct ath10k *ar, struct sk_buff *skb)
+ {
+ 	struct sk_buff_head *q = &ar->wmi_mgmt_tx_queue;
+-	int ret = 0;
+-
+-	spin_lock_bh(&ar->data_lock);
+ 
+-	if (skb_queue_len(q) == ATH10K_MAX_NUM_MGMT_PENDING) {
++	if (skb_queue_len_lockless(q) >= ATH10K_MAX_NUM_MGMT_PENDING) {
+ 		ath10k_warn(ar, "wmi mgmt tx queue is full\n");
+-		ret = -ENOSPC;
+-		goto unlock;
++		return -ENOSPC;
+ 	}
+ 
+-	__skb_queue_tail(q, skb);
++	skb_queue_tail(q, skb);
+ 	ieee80211_queue_work(ar->hw, &ar->wmi_mgmt_tx_work);
+ 
+-unlock:
+-	spin_unlock_bh(&ar->data_lock);
+-
+-	return ret;
++	return 0;
+ }
+ 
+ static enum ath10k_mac_tx_path
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
+index 4aa2561934d77..6d5188b78f2de 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
+@@ -40,6 +40,18 @@ static const struct brcmf_dmi_data pov_tab_p1006w_data = {
+ 	BRCM_CC_43340_CHIP_ID, 2, "pov-tab-p1006w-data"
+ };
+ 
++static const struct brcmf_dmi_data predia_basic_data = {
++	BRCM_CC_43341_CHIP_ID, 2, "predia-basic"
++};
++
++/* Note the Voyo winpad A15 tablet uses the same Ampak AP6330 module, with the
++ * exact same nvram file as the Prowise-PT301 tablet. Since the nvram for the
++ * Prowise-PT301 is already in linux-firmware we just point to that here.
++ */
++static const struct brcmf_dmi_data voyo_winpad_a15_data = {
++	BRCM_CC_4330_CHIP_ID, 4, "Prowise-PT301"
++};
++
+ static const struct dmi_system_id dmi_platform_data[] = {
+ 	{
+ 		/* ACEPC T8 Cherry Trail Z8350 mini PC */
+@@ -111,6 +123,26 @@ static const struct dmi_system_id dmi_platform_data[] = {
+ 		},
+ 		.driver_data = (void *)&pov_tab_p1006w_data,
+ 	},
++	{
++		/* Predia Basic tablet (+ with keyboard dock) */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "CherryTrail"),
++			/* Mx.WT107.KUBNGEA02 with the version-nr dropped */
++			DMI_MATCH(DMI_BIOS_VERSION, "Mx.WT107.KUBNGEA"),
++		},
++		.driver_data = (void *)&predia_basic_data,
++	},
++	{
++		/* Voyo winpad A15 tablet */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
++			DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
++			/* Above strings are too generic, also match on BIOS date */
++			DMI_MATCH(DMI_BIOS_DATE, "11/20/2014"),
++		},
++		.driver_data = (void *)&voyo_winpad_a15_data,
++	},
+ 	{}
+ };
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index 3d62fda067e44..f1f954ff46856 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -2098,6 +2098,23 @@ void mt7615_dma_reset(struct mt7615_dev *dev)
+ }
+ EXPORT_SYMBOL_GPL(mt7615_dma_reset);
+ 
++void mt7615_tx_token_put(struct mt7615_dev *dev)
++{
++	struct mt76_txwi_cache *txwi;
++	int id;
++
++	spin_lock_bh(&dev->token_lock);
++	idr_for_each_entry(&dev->token, txwi, id) {
++		mt7615_txp_skb_unmap(&dev->mt76, txwi);
++		if (txwi->skb)
++			dev_kfree_skb_any(txwi->skb);
++		mt76_put_txwi(&dev->mt76, txwi);
++	}
++	spin_unlock_bh(&dev->token_lock);
++	idr_destroy(&dev->token);
++}
++EXPORT_SYMBOL_GPL(mt7615_tx_token_put);
++
+ void mt7615_mac_reset_work(struct work_struct *work)
+ {
+ 	struct mt7615_phy *phy2;
+@@ -2141,6 +2158,9 @@ void mt7615_mac_reset_work(struct work_struct *work)
+ 
+ 	mt76_wr(dev, MT_MCU_INT_EVENT, MT_MCU_INT_EVENT_PDMA_STOPPED);
+ 
++	mt7615_tx_token_put(dev);
++	idr_init(&dev->token);
++
+ 	if (mt7615_wait_reset_state(dev, MT_MCU_CMD_RESET_DONE)) {
+ 		mt7615_dma_reset(dev);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+index 6a9f9187f76ac..5b06294d654aa 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+@@ -619,7 +619,7 @@ int mt7615_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ 			  struct mt76_tx_info *tx_info);
+ 
+ void mt7615_tx_complete_skb(struct mt76_dev *mdev, struct mt76_queue_entry *e);
+-
++void mt7615_tx_token_put(struct mt7615_dev *dev);
+ void mt7615_queue_rx_skb(struct mt76_dev *mdev, enum mt76_rxq_id q,
+ 			 struct sk_buff *skb);
+ void mt7615_sta_ps(struct mt76_dev *mdev, struct ieee80211_sta *sta, bool ps);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c b/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
+index 06a0f8f7bc893..7b81aef3684ed 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
+@@ -153,9 +153,7 @@ int mt7615_register_device(struct mt7615_dev *dev)
+ 
+ void mt7615_unregister_device(struct mt7615_dev *dev)
+ {
+-	struct mt76_txwi_cache *txwi;
+ 	bool mcu_running;
+-	int id;
+ 
+ 	mcu_running = mt7615_wait_for_mcu_init(dev);
+ 
+@@ -165,15 +163,7 @@ void mt7615_unregister_device(struct mt7615_dev *dev)
+ 		mt7615_mcu_exit(dev);
+ 	mt7615_dma_cleanup(dev);
+ 
+-	spin_lock_bh(&dev->token_lock);
+-	idr_for_each_entry(&dev->token, txwi, id) {
+-		mt7615_txp_skb_unmap(&dev->mt76, txwi);
+-		if (txwi->skb)
+-			dev_kfree_skb_any(txwi->skb);
+-		mt76_put_txwi(&dev->mt76, txwi);
+-	}
+-	spin_unlock_bh(&dev->token_lock);
+-	idr_destroy(&dev->token);
++	mt7615_tx_token_put(dev);
+ 
+ 	tasklet_disable(&dev->irq_tasklet);
+ 
+diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
+index 3f7e3cfb6f00d..ce9892152f4d4 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
+@@ -248,7 +248,8 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
+ 			rsi_set_len_qno(&data_desc->len_qno,
+ 					(skb->len - FRAME_DESC_SZ),
+ 					RSI_WIFI_MGMT_Q);
+-		if ((skb->len - header_size) == EAPOL4_PACKET_LEN) {
++		if (((skb->len - header_size) == EAPOL4_PACKET_LEN) ||
++		    ((skb->len - header_size) == EAPOL4_PACKET_LEN - 2)) {
+ 			data_desc->misc_flags |=
+ 				RSI_DESC_REQUIRE_CFM_TO_HOST;
+ 			xtend_desc->confirm_frame_type = EAPOL4_CONFIRM;
+diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio.c b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+index a7b8684143f46..592e9dadcb556 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_sdio.c
++++ b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+@@ -153,9 +153,7 @@ static void rsi_handle_interrupt(struct sdio_func *function)
+ 	if (adapter->priv->fsm_state == FSM_FW_NOT_LOADED)
+ 		return;
+ 
+-	dev->sdio_irq_task = current;
+-	rsi_interrupt_handler(adapter);
+-	dev->sdio_irq_task = NULL;
++	rsi_set_event(&dev->rx_thread.event);
+ }
+ 
+ /**
+@@ -1058,8 +1056,6 @@ static int rsi_probe(struct sdio_func *pfunction,
+ 		rsi_dbg(ERR_ZONE, "%s: Unable to init rx thrd\n", __func__);
+ 		goto fail_kill_thread;
+ 	}
+-	skb_queue_head_init(&sdev->rx_q.head);
+-	sdev->rx_q.num_rx_pkts = 0;
+ 
+ 	sdio_claim_host(pfunction);
+ 	if (sdio_claim_irq(pfunction, rsi_handle_interrupt)) {
+diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c b/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c
+index 7825c9a889d36..23e709aabd1f7 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c
++++ b/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c
+@@ -60,39 +60,20 @@ int rsi_sdio_master_access_msword(struct rsi_hw *adapter, u16 ms_word)
+ 	return status;
+ }
+ 
++static void rsi_rx_handler(struct rsi_hw *adapter);
++
+ void rsi_sdio_rx_thread(struct rsi_common *common)
+ {
+ 	struct rsi_hw *adapter = common->priv;
+ 	struct rsi_91x_sdiodev *sdev = adapter->rsi_dev;
+-	struct sk_buff *skb;
+-	int status;
+ 
+ 	do {
+ 		rsi_wait_event(&sdev->rx_thread.event, EVENT_WAIT_FOREVER);
+ 		rsi_reset_event(&sdev->rx_thread.event);
++		rsi_rx_handler(adapter);
++	} while (!atomic_read(&sdev->rx_thread.thread_done));
+ 
+-		while (true) {
+-			if (atomic_read(&sdev->rx_thread.thread_done))
+-				goto out;
+-
+-			skb = skb_dequeue(&sdev->rx_q.head);
+-			if (!skb)
+-				break;
+-			if (sdev->rx_q.num_rx_pkts > 0)
+-				sdev->rx_q.num_rx_pkts--;
+-			status = rsi_read_pkt(common, skb->data, skb->len);
+-			if (status) {
+-				rsi_dbg(ERR_ZONE, "Failed to read the packet\n");
+-				dev_kfree_skb(skb);
+-				break;
+-			}
+-			dev_kfree_skb(skb);
+-		}
+-	} while (1);
+-
+-out:
+ 	rsi_dbg(INFO_ZONE, "%s: Terminated SDIO RX thread\n", __func__);
+-	skb_queue_purge(&sdev->rx_q.head);
+ 	atomic_inc(&sdev->rx_thread.thread_done);
+ 	complete_and_exit(&sdev->rx_thread.completion, 0);
+ }
+@@ -113,10 +94,6 @@ static int rsi_process_pkt(struct rsi_common *common)
+ 	u32 rcv_pkt_len = 0;
+ 	int status = 0;
+ 	u8 value = 0;
+-	struct sk_buff *skb;
+-
+-	if (dev->rx_q.num_rx_pkts >= RSI_MAX_RX_PKTS)
+-		return 0;
+ 
+ 	num_blks = ((adapter->interrupt_status & 1) |
+ 			((adapter->interrupt_status >> RECV_NUM_BLOCKS) << 1));
+@@ -144,22 +121,19 @@ static int rsi_process_pkt(struct rsi_common *common)
+ 
+ 	rcv_pkt_len = (num_blks * 256);
+ 
+-	skb = dev_alloc_skb(rcv_pkt_len);
+-	if (!skb)
+-		return -ENOMEM;
+-
+-	status = rsi_sdio_host_intf_read_pkt(adapter, skb->data, rcv_pkt_len);
++	status = rsi_sdio_host_intf_read_pkt(adapter, dev->pktbuffer,
++					     rcv_pkt_len);
+ 	if (status) {
+ 		rsi_dbg(ERR_ZONE, "%s: Failed to read packet from card\n",
+ 			__func__);
+-		dev_kfree_skb(skb);
+ 		return status;
+ 	}
+-	skb_put(skb, rcv_pkt_len);
+-	skb_queue_tail(&dev->rx_q.head, skb);
+-	dev->rx_q.num_rx_pkts++;
+ 
+-	rsi_set_event(&dev->rx_thread.event);
++	status = rsi_read_pkt(common, dev->pktbuffer, rcv_pkt_len);
++	if (status) {
++		rsi_dbg(ERR_ZONE, "Failed to read the packet\n");
++		return status;
++	}
+ 
+ 	return 0;
+ }
+@@ -251,12 +225,12 @@ int rsi_init_sdio_slave_regs(struct rsi_hw *adapter)
+ }
+ 
+ /**
+- * rsi_interrupt_handler() - This function read and process SDIO interrupts.
++ * rsi_rx_handler() - Read and process SDIO interrupts.
+  * @adapter: Pointer to the adapter structure.
+  *
+  * Return: None.
+  */
+-void rsi_interrupt_handler(struct rsi_hw *adapter)
++static void rsi_rx_handler(struct rsi_hw *adapter)
+ {
+ 	struct rsi_common *common = adapter->priv;
+ 	struct rsi_91x_sdiodev *dev =
+diff --git a/drivers/net/wireless/rsi/rsi_sdio.h b/drivers/net/wireless/rsi/rsi_sdio.h
+index 9afc1d0d2684a..1c756263cf15e 100644
+--- a/drivers/net/wireless/rsi/rsi_sdio.h
++++ b/drivers/net/wireless/rsi/rsi_sdio.h
+@@ -107,11 +107,6 @@ struct receive_info {
+ 	u32 buf_available_counter;
+ };
+ 
+-struct rsi_sdio_rx_q {
+-	u8 num_rx_pkts;
+-	struct sk_buff_head head;
+-};
+-
+ struct rsi_91x_sdiodev {
+ 	struct sdio_func *pfunction;
+ 	struct task_struct *sdio_irq_task;
+@@ -124,11 +119,10 @@ struct rsi_91x_sdiodev {
+ 	u16 tx_blk_size;
+ 	u8 write_fail;
+ 	bool buff_status_updated;
+-	struct rsi_sdio_rx_q rx_q;
+ 	struct rsi_thread rx_thread;
++	u8 pktbuffer[8192] __aligned(4);
+ };
+ 
+-void rsi_interrupt_handler(struct rsi_hw *adapter);
+ int rsi_init_sdio_slave_regs(struct rsi_hw *adapter);
+ int rsi_sdio_read_register(struct rsi_hw *adapter, u32 addr, u8 *data);
+ int rsi_sdio_host_intf_read_pkt(struct rsi_hw *adapter, u8 *pkt, u32 length);
+diff --git a/drivers/net/wireless/ti/wl12xx/main.c b/drivers/net/wireless/ti/wl12xx/main.c
+index 3c9c623bb4283..9d7dbfe7fe0c3 100644
+--- a/drivers/net/wireless/ti/wl12xx/main.c
++++ b/drivers/net/wireless/ti/wl12xx/main.c
+@@ -635,7 +635,6 @@ static int wl12xx_identify_chip(struct wl1271 *wl)
+ 		wl->quirks |= WLCORE_QUIRK_LEGACY_NVS |
+ 			      WLCORE_QUIRK_DUAL_PROBE_TMPL |
+ 			      WLCORE_QUIRK_TKIP_HEADER_SPACE |
+-			      WLCORE_QUIRK_START_STA_FAILS |
+ 			      WLCORE_QUIRK_AP_ZERO_SESSION_ID;
+ 		wl->sr_fw_name = WL127X_FW_NAME_SINGLE;
+ 		wl->mr_fw_name = WL127X_FW_NAME_MULTI;
+@@ -659,7 +658,6 @@ static int wl12xx_identify_chip(struct wl1271 *wl)
+ 		wl->quirks |= WLCORE_QUIRK_LEGACY_NVS |
+ 			      WLCORE_QUIRK_DUAL_PROBE_TMPL |
+ 			      WLCORE_QUIRK_TKIP_HEADER_SPACE |
+-			      WLCORE_QUIRK_START_STA_FAILS |
+ 			      WLCORE_QUIRK_AP_ZERO_SESSION_ID;
+ 		wl->plt_fw_name = WL127X_PLT_FW_NAME;
+ 		wl->sr_fw_name = WL127X_FW_NAME_SINGLE;
+@@ -688,7 +686,6 @@ static int wl12xx_identify_chip(struct wl1271 *wl)
+ 		wl->quirks |= WLCORE_QUIRK_TX_BLOCKSIZE_ALIGN |
+ 			      WLCORE_QUIRK_DUAL_PROBE_TMPL |
+ 			      WLCORE_QUIRK_TKIP_HEADER_SPACE |
+-			      WLCORE_QUIRK_START_STA_FAILS |
+ 			      WLCORE_QUIRK_AP_ZERO_SESSION_ID;
+ 
+ 		wlcore_set_min_fw_ver(wl, WL128X_CHIP_VER,
+diff --git a/drivers/net/wireless/ti/wlcore/main.c b/drivers/net/wireless/ti/wlcore/main.c
+index 6863fd552d5e7..6e402d62dbe4a 100644
+--- a/drivers/net/wireless/ti/wlcore/main.c
++++ b/drivers/net/wireless/ti/wlcore/main.c
+@@ -2872,21 +2872,8 @@ static int wlcore_join(struct wl1271 *wl, struct wl12xx_vif *wlvif)
+ 
+ 	if (is_ibss)
+ 		ret = wl12xx_cmd_role_start_ibss(wl, wlvif);
+-	else {
+-		if (wl->quirks & WLCORE_QUIRK_START_STA_FAILS) {
+-			/*
+-			 * TODO: this is an ugly workaround for wl12xx fw
+-			 * bug - we are not able to tx/rx after the first
+-			 * start_sta, so make dummy start+stop calls,
+-			 * and then call start_sta again.
+-			 * this should be fixed in the fw.
+-			 */
+-			wl12xx_cmd_role_start_sta(wl, wlvif);
+-			wl12xx_cmd_role_stop_sta(wl, wlvif);
+-		}
+-
++	else
+ 		ret = wl12xx_cmd_role_start_sta(wl, wlvif);
+-	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/wireless/ti/wlcore/wlcore.h b/drivers/net/wireless/ti/wlcore/wlcore.h
+index b7821311ac75b..81c94d390623b 100644
+--- a/drivers/net/wireless/ti/wlcore/wlcore.h
++++ b/drivers/net/wireless/ti/wlcore/wlcore.h
+@@ -547,9 +547,6 @@ wlcore_set_min_fw_ver(struct wl1271 *wl, unsigned int chip,
+ /* Each RX/TX transaction requires an end-of-transaction transfer */
+ #define WLCORE_QUIRK_END_OF_TRANSACTION		BIT(0)
+ 
+-/* the first start_role(sta) sometimes doesn't work on wl12xx */
+-#define WLCORE_QUIRK_START_STA_FAILS		BIT(1)
+-
+ /* wl127x and SPI don't support SDIO block size alignment */
+ #define WLCORE_QUIRK_TX_BLOCKSIZE_ALIGN		BIT(2)
+ 
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index 423667b837510..986b569709616 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -1342,11 +1342,21 @@ int xenvif_tx_action(struct xenvif_queue *queue, int budget)
+ 		return 0;
+ 
+ 	gnttab_batch_copy(queue->tx_copy_ops, nr_cops);
+-	if (nr_mops != 0)
++	if (nr_mops != 0) {
+ 		ret = gnttab_map_refs(queue->tx_map_ops,
+ 				      NULL,
+ 				      queue->pages_to_map,
+ 				      nr_mops);
++		if (ret) {
++			unsigned int i;
++
++			netdev_err(queue->vif->dev, "Map fail: nr %u ret %d\n",
++				   nr_mops, ret);
++			for (i = 0; i < nr_mops; ++i)
++				WARN_ON_ONCE(queue->tx_map_ops[i].status ==
++				             GNTST_okay);
++		}
++	}
+ 
+ 	work_done = xenvif_tx_submit(queue);
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 4ec5f05dabe1d..e1e574ecf031b 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -351,6 +351,26 @@ bool nvme_cancel_request(struct request *req, void *data, bool reserved)
+ }
+ EXPORT_SYMBOL_GPL(nvme_cancel_request);
+ 
++void nvme_cancel_tagset(struct nvme_ctrl *ctrl)
++{
++	if (ctrl->tagset) {
++		blk_mq_tagset_busy_iter(ctrl->tagset,
++				nvme_cancel_request, ctrl);
++		blk_mq_tagset_wait_completed_request(ctrl->tagset);
++	}
++}
++EXPORT_SYMBOL_GPL(nvme_cancel_tagset);
++
++void nvme_cancel_admin_tagset(struct nvme_ctrl *ctrl)
++{
++	if (ctrl->admin_tagset) {
++		blk_mq_tagset_busy_iter(ctrl->admin_tagset,
++				nvme_cancel_request, ctrl);
++		blk_mq_tagset_wait_completed_request(ctrl->admin_tagset);
++	}
++}
++EXPORT_SYMBOL_GPL(nvme_cancel_admin_tagset);
++
+ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
+ 		enum nvme_ctrl_state new_state)
+ {
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 567f7ad18a91c..f843540cc238e 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -571,6 +571,8 @@ static inline bool nvme_is_aen_req(u16 qid, __u16 command_id)
+ 
+ void nvme_complete_rq(struct request *req);
+ bool nvme_cancel_request(struct request *req, void *data, bool reserved);
++void nvme_cancel_tagset(struct nvme_ctrl *ctrl);
++void nvme_cancel_admin_tagset(struct nvme_ctrl *ctrl);
+ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
+ 		enum nvme_ctrl_state new_state);
+ bool nvme_wait_reset(struct nvme_ctrl *ctrl);
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 4eb867804b6ab..1957030132722 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -919,12 +919,16 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ 
+ 	error = nvme_init_identify(&ctrl->ctrl);
+ 	if (error)
+-		goto out_stop_queue;
++		goto out_quiesce_queue;
+ 
+ 	return 0;
+ 
++out_quiesce_queue:
++	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
++	blk_sync_queue(ctrl->ctrl.admin_q);
+ out_stop_queue:
+ 	nvme_rdma_stop_queue(&ctrl->queues[0]);
++	nvme_cancel_admin_tagset(&ctrl->ctrl);
+ out_cleanup_queue:
+ 	if (new)
+ 		blk_cleanup_queue(ctrl->ctrl.admin_q);
+@@ -1001,8 +1005,10 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
+ 
+ out_wait_freeze_timed_out:
+ 	nvme_stop_queues(&ctrl->ctrl);
++	nvme_sync_io_queues(&ctrl->ctrl);
+ 	nvme_rdma_stop_io_queues(ctrl);
+ out_cleanup_connect_q:
++	nvme_cancel_tagset(&ctrl->ctrl);
+ 	if (new)
+ 		blk_cleanup_queue(ctrl->ctrl.connect_q);
+ out_free_tag_set:
+@@ -1144,10 +1150,18 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
+ 	return 0;
+ 
+ destroy_io:
+-	if (ctrl->ctrl.queue_count > 1)
++	if (ctrl->ctrl.queue_count > 1) {
++		nvme_stop_queues(&ctrl->ctrl);
++		nvme_sync_io_queues(&ctrl->ctrl);
++		nvme_rdma_stop_io_queues(ctrl);
++		nvme_cancel_tagset(&ctrl->ctrl);
+ 		nvme_rdma_destroy_io_queues(ctrl, new);
++	}
+ destroy_admin:
++	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
++	blk_sync_queue(ctrl->ctrl.admin_q);
+ 	nvme_rdma_stop_queue(&ctrl->queues[0]);
++	nvme_cancel_admin_tagset(&ctrl->ctrl);
+ 	nvme_rdma_destroy_admin_queue(ctrl, new);
+ 	return ret;
+ }
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 6487b7897d1fb..739ac7deccd96 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1815,8 +1815,10 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
+ 
+ out_wait_freeze_timed_out:
+ 	nvme_stop_queues(ctrl);
++	nvme_sync_io_queues(ctrl);
+ 	nvme_tcp_stop_io_queues(ctrl);
+ out_cleanup_connect_q:
++	nvme_cancel_tagset(ctrl);
+ 	if (new)
+ 		blk_cleanup_queue(ctrl->connect_q);
+ out_free_tag_set:
+@@ -1878,12 +1880,16 @@ static int nvme_tcp_configure_admin_queue(struct nvme_ctrl *ctrl, bool new)
+ 
+ 	error = nvme_init_identify(ctrl);
+ 	if (error)
+-		goto out_stop_queue;
++		goto out_quiesce_queue;
+ 
+ 	return 0;
+ 
++out_quiesce_queue:
++	blk_mq_quiesce_queue(ctrl->admin_q);
++	blk_sync_queue(ctrl->admin_q);
+ out_stop_queue:
+ 	nvme_tcp_stop_queue(ctrl, 0);
++	nvme_cancel_admin_tagset(ctrl);
+ out_cleanup_queue:
+ 	if (new)
+ 		blk_cleanup_queue(ctrl->admin_q);
+@@ -2003,10 +2009,18 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
+ 	return 0;
+ 
+ destroy_io:
+-	if (ctrl->queue_count > 1)
++	if (ctrl->queue_count > 1) {
++		nvme_stop_queues(ctrl);
++		nvme_sync_io_queues(ctrl);
++		nvme_tcp_stop_io_queues(ctrl);
++		nvme_cancel_tagset(ctrl);
+ 		nvme_tcp_destroy_io_queues(ctrl, new);
++	}
+ destroy_admin:
++	blk_mq_quiesce_queue(ctrl->admin_q);
++	blk_sync_queue(ctrl->admin_q);
+ 	nvme_tcp_stop_queue(ctrl, 0);
++	nvme_cancel_admin_tagset(ctrl);
+ 	nvme_tcp_destroy_admin_queue(ctrl, new);
+ 	return ret;
+ }
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 6427cbd0a5be2..5c93450725108 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -3577,7 +3577,14 @@ u32 pci_rebar_get_possible_sizes(struct pci_dev *pdev, int bar)
+ 		return 0;
+ 
+ 	pci_read_config_dword(pdev, pos + PCI_REBAR_CAP, &cap);
+-	return (cap & PCI_REBAR_CAP_SIZES) >> 4;
++	cap &= PCI_REBAR_CAP_SIZES;
++
++	/* Sapphire RX 5600 XT Pulse has an invalid cap dword for BAR 0 */
++	if (pdev->vendor == PCI_VENDOR_ID_ATI && pdev->device == 0x731f &&
++	    bar == 0 && cap == 0x7000)
++		cap = 0x3f000;
++
++	return cap >> 4;
+ }
+ 
+ /**
+diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c
+index 00a6e57dfa16b..63c501a42c44b 100644
+--- a/drivers/remoteproc/mtk_scp.c
++++ b/drivers/remoteproc/mtk_scp.c
+@@ -775,21 +775,19 @@ static const struct mtk_scp_of_data mt8192_of_data = {
+ 	.host_to_scp_int_bit = MT8192_HOST_IPC_INT_BIT,
+ };
+ 
+-#if defined(CONFIG_OF)
+ static const struct of_device_id mtk_scp_of_match[] = {
+ 	{ .compatible = "mediatek,mt8183-scp", .data = &mt8183_of_data },
+ 	{ .compatible = "mediatek,mt8192-scp", .data = &mt8192_of_data },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, mtk_scp_of_match);
+-#endif
+ 
+ static struct platform_driver mtk_scp_driver = {
+ 	.probe = scp_probe,
+ 	.remove = scp_remove,
+ 	.driver = {
+ 		.name = "mtk-scp",
+-		.of_match_table = of_match_ptr(mtk_scp_of_match),
++		.of_match_table = mtk_scp_of_match,
+ 	},
+ };
+ 
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index f9314f1393fbd..5125a6c7f70e9 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -3338,125 +3338,125 @@ int iscsi_session_get_param(struct iscsi_cls_session *cls_session,
+ 
+ 	switch(param) {
+ 	case ISCSI_PARAM_FAST_ABORT:
+-		len = sprintf(buf, "%d\n", session->fast_abort);
++		len = sysfs_emit(buf, "%d\n", session->fast_abort);
+ 		break;
+ 	case ISCSI_PARAM_ABORT_TMO:
+-		len = sprintf(buf, "%d\n", session->abort_timeout);
++		len = sysfs_emit(buf, "%d\n", session->abort_timeout);
+ 		break;
+ 	case ISCSI_PARAM_LU_RESET_TMO:
+-		len = sprintf(buf, "%d\n", session->lu_reset_timeout);
++		len = sysfs_emit(buf, "%d\n", session->lu_reset_timeout);
+ 		break;
+ 	case ISCSI_PARAM_TGT_RESET_TMO:
+-		len = sprintf(buf, "%d\n", session->tgt_reset_timeout);
++		len = sysfs_emit(buf, "%d\n", session->tgt_reset_timeout);
+ 		break;
+ 	case ISCSI_PARAM_INITIAL_R2T_EN:
+-		len = sprintf(buf, "%d\n", session->initial_r2t_en);
++		len = sysfs_emit(buf, "%d\n", session->initial_r2t_en);
+ 		break;
+ 	case ISCSI_PARAM_MAX_R2T:
+-		len = sprintf(buf, "%hu\n", session->max_r2t);
++		len = sysfs_emit(buf, "%hu\n", session->max_r2t);
+ 		break;
+ 	case ISCSI_PARAM_IMM_DATA_EN:
+-		len = sprintf(buf, "%d\n", session->imm_data_en);
++		len = sysfs_emit(buf, "%d\n", session->imm_data_en);
+ 		break;
+ 	case ISCSI_PARAM_FIRST_BURST:
+-		len = sprintf(buf, "%u\n", session->first_burst);
++		len = sysfs_emit(buf, "%u\n", session->first_burst);
+ 		break;
+ 	case ISCSI_PARAM_MAX_BURST:
+-		len = sprintf(buf, "%u\n", session->max_burst);
++		len = sysfs_emit(buf, "%u\n", session->max_burst);
+ 		break;
+ 	case ISCSI_PARAM_PDU_INORDER_EN:
+-		len = sprintf(buf, "%d\n", session->pdu_inorder_en);
++		len = sysfs_emit(buf, "%d\n", session->pdu_inorder_en);
+ 		break;
+ 	case ISCSI_PARAM_DATASEQ_INORDER_EN:
+-		len = sprintf(buf, "%d\n", session->dataseq_inorder_en);
++		len = sysfs_emit(buf, "%d\n", session->dataseq_inorder_en);
+ 		break;
+ 	case ISCSI_PARAM_DEF_TASKMGMT_TMO:
+-		len = sprintf(buf, "%d\n", session->def_taskmgmt_tmo);
++		len = sysfs_emit(buf, "%d\n", session->def_taskmgmt_tmo);
+ 		break;
+ 	case ISCSI_PARAM_ERL:
+-		len = sprintf(buf, "%d\n", session->erl);
++		len = sysfs_emit(buf, "%d\n", session->erl);
+ 		break;
+ 	case ISCSI_PARAM_TARGET_NAME:
+-		len = sprintf(buf, "%s\n", session->targetname);
++		len = sysfs_emit(buf, "%s\n", session->targetname);
+ 		break;
+ 	case ISCSI_PARAM_TARGET_ALIAS:
+-		len = sprintf(buf, "%s\n", session->targetalias);
++		len = sysfs_emit(buf, "%s\n", session->targetalias);
+ 		break;
+ 	case ISCSI_PARAM_TPGT:
+-		len = sprintf(buf, "%d\n", session->tpgt);
++		len = sysfs_emit(buf, "%d\n", session->tpgt);
+ 		break;
+ 	case ISCSI_PARAM_USERNAME:
+-		len = sprintf(buf, "%s\n", session->username);
++		len = sysfs_emit(buf, "%s\n", session->username);
+ 		break;
+ 	case ISCSI_PARAM_USERNAME_IN:
+-		len = sprintf(buf, "%s\n", session->username_in);
++		len = sysfs_emit(buf, "%s\n", session->username_in);
+ 		break;
+ 	case ISCSI_PARAM_PASSWORD:
+-		len = sprintf(buf, "%s\n", session->password);
++		len = sysfs_emit(buf, "%s\n", session->password);
+ 		break;
+ 	case ISCSI_PARAM_PASSWORD_IN:
+-		len = sprintf(buf, "%s\n", session->password_in);
++		len = sysfs_emit(buf, "%s\n", session->password_in);
+ 		break;
+ 	case ISCSI_PARAM_IFACE_NAME:
+-		len = sprintf(buf, "%s\n", session->ifacename);
++		len = sysfs_emit(buf, "%s\n", session->ifacename);
+ 		break;
+ 	case ISCSI_PARAM_INITIATOR_NAME:
+-		len = sprintf(buf, "%s\n", session->initiatorname);
++		len = sysfs_emit(buf, "%s\n", session->initiatorname);
+ 		break;
+ 	case ISCSI_PARAM_BOOT_ROOT:
+-		len = sprintf(buf, "%s\n", session->boot_root);
++		len = sysfs_emit(buf, "%s\n", session->boot_root);
+ 		break;
+ 	case ISCSI_PARAM_BOOT_NIC:
+-		len = sprintf(buf, "%s\n", session->boot_nic);
++		len = sysfs_emit(buf, "%s\n", session->boot_nic);
+ 		break;
+ 	case ISCSI_PARAM_BOOT_TARGET:
+-		len = sprintf(buf, "%s\n", session->boot_target);
++		len = sysfs_emit(buf, "%s\n", session->boot_target);
+ 		break;
+ 	case ISCSI_PARAM_AUTO_SND_TGT_DISABLE:
+-		len = sprintf(buf, "%u\n", session->auto_snd_tgt_disable);
++		len = sysfs_emit(buf, "%u\n", session->auto_snd_tgt_disable);
+ 		break;
+ 	case ISCSI_PARAM_DISCOVERY_SESS:
+-		len = sprintf(buf, "%u\n", session->discovery_sess);
++		len = sysfs_emit(buf, "%u\n", session->discovery_sess);
+ 		break;
+ 	case ISCSI_PARAM_PORTAL_TYPE:
+-		len = sprintf(buf, "%s\n", session->portal_type);
++		len = sysfs_emit(buf, "%s\n", session->portal_type);
+ 		break;
+ 	case ISCSI_PARAM_CHAP_AUTH_EN:
+-		len = sprintf(buf, "%u\n", session->chap_auth_en);
++		len = sysfs_emit(buf, "%u\n", session->chap_auth_en);
+ 		break;
+ 	case ISCSI_PARAM_DISCOVERY_LOGOUT_EN:
+-		len = sprintf(buf, "%u\n", session->discovery_logout_en);
++		len = sysfs_emit(buf, "%u\n", session->discovery_logout_en);
+ 		break;
+ 	case ISCSI_PARAM_BIDI_CHAP_EN:
+-		len = sprintf(buf, "%u\n", session->bidi_chap_en);
++		len = sysfs_emit(buf, "%u\n", session->bidi_chap_en);
+ 		break;
+ 	case ISCSI_PARAM_DISCOVERY_AUTH_OPTIONAL:
+-		len = sprintf(buf, "%u\n", session->discovery_auth_optional);
++		len = sysfs_emit(buf, "%u\n", session->discovery_auth_optional);
+ 		break;
+ 	case ISCSI_PARAM_DEF_TIME2WAIT:
+-		len = sprintf(buf, "%d\n", session->time2wait);
++		len = sysfs_emit(buf, "%d\n", session->time2wait);
+ 		break;
+ 	case ISCSI_PARAM_DEF_TIME2RETAIN:
+-		len = sprintf(buf, "%d\n", session->time2retain);
++		len = sysfs_emit(buf, "%d\n", session->time2retain);
+ 		break;
+ 	case ISCSI_PARAM_TSID:
+-		len = sprintf(buf, "%u\n", session->tsid);
++		len = sysfs_emit(buf, "%u\n", session->tsid);
+ 		break;
+ 	case ISCSI_PARAM_ISID:
+-		len = sprintf(buf, "%02x%02x%02x%02x%02x%02x\n",
++		len = sysfs_emit(buf, "%02x%02x%02x%02x%02x%02x\n",
+ 			      session->isid[0], session->isid[1],
+ 			      session->isid[2], session->isid[3],
+ 			      session->isid[4], session->isid[5]);
+ 		break;
+ 	case ISCSI_PARAM_DISCOVERY_PARENT_IDX:
+-		len = sprintf(buf, "%u\n", session->discovery_parent_idx);
++		len = sysfs_emit(buf, "%u\n", session->discovery_parent_idx);
+ 		break;
+ 	case ISCSI_PARAM_DISCOVERY_PARENT_TYPE:
+ 		if (session->discovery_parent_type)
+-			len = sprintf(buf, "%s\n",
++			len = sysfs_emit(buf, "%s\n",
+ 				      session->discovery_parent_type);
+ 		else
+-			len = sprintf(buf, "\n");
++			len = sysfs_emit(buf, "\n");
+ 		break;
+ 	default:
+ 		return -ENOSYS;
+@@ -3488,16 +3488,16 @@ int iscsi_conn_get_addr_param(struct sockaddr_storage *addr,
+ 	case ISCSI_PARAM_CONN_ADDRESS:
+ 	case ISCSI_HOST_PARAM_IPADDRESS:
+ 		if (sin)
+-			len = sprintf(buf, "%pI4\n", &sin->sin_addr.s_addr);
++			len = sysfs_emit(buf, "%pI4\n", &sin->sin_addr.s_addr);
+ 		else
+-			len = sprintf(buf, "%pI6\n", &sin6->sin6_addr);
++			len = sysfs_emit(buf, "%pI6\n", &sin6->sin6_addr);
+ 		break;
+ 	case ISCSI_PARAM_CONN_PORT:
+ 	case ISCSI_PARAM_LOCAL_PORT:
+ 		if (sin)
+-			len = sprintf(buf, "%hu\n", be16_to_cpu(sin->sin_port));
++			len = sysfs_emit(buf, "%hu\n", be16_to_cpu(sin->sin_port));
+ 		else
+-			len = sprintf(buf, "%hu\n",
++			len = sysfs_emit(buf, "%hu\n",
+ 				      be16_to_cpu(sin6->sin6_port));
+ 		break;
+ 	default:
+@@ -3516,88 +3516,88 @@ int iscsi_conn_get_param(struct iscsi_cls_conn *cls_conn,
+ 
+ 	switch(param) {
+ 	case ISCSI_PARAM_PING_TMO:
+-		len = sprintf(buf, "%u\n", conn->ping_timeout);
++		len = sysfs_emit(buf, "%u\n", conn->ping_timeout);
+ 		break;
+ 	case ISCSI_PARAM_RECV_TMO:
+-		len = sprintf(buf, "%u\n", conn->recv_timeout);
++		len = sysfs_emit(buf, "%u\n", conn->recv_timeout);
+ 		break;
+ 	case ISCSI_PARAM_MAX_RECV_DLENGTH:
+-		len = sprintf(buf, "%u\n", conn->max_recv_dlength);
++		len = sysfs_emit(buf, "%u\n", conn->max_recv_dlength);
+ 		break;
+ 	case ISCSI_PARAM_MAX_XMIT_DLENGTH:
+-		len = sprintf(buf, "%u\n", conn->max_xmit_dlength);
++		len = sysfs_emit(buf, "%u\n", conn->max_xmit_dlength);
+ 		break;
+ 	case ISCSI_PARAM_HDRDGST_EN:
+-		len = sprintf(buf, "%d\n", conn->hdrdgst_en);
++		len = sysfs_emit(buf, "%d\n", conn->hdrdgst_en);
+ 		break;
+ 	case ISCSI_PARAM_DATADGST_EN:
+-		len = sprintf(buf, "%d\n", conn->datadgst_en);
++		len = sysfs_emit(buf, "%d\n", conn->datadgst_en);
+ 		break;
+ 	case ISCSI_PARAM_IFMARKER_EN:
+-		len = sprintf(buf, "%d\n", conn->ifmarker_en);
++		len = sysfs_emit(buf, "%d\n", conn->ifmarker_en);
+ 		break;
+ 	case ISCSI_PARAM_OFMARKER_EN:
+-		len = sprintf(buf, "%d\n", conn->ofmarker_en);
++		len = sysfs_emit(buf, "%d\n", conn->ofmarker_en);
+ 		break;
+ 	case ISCSI_PARAM_EXP_STATSN:
+-		len = sprintf(buf, "%u\n", conn->exp_statsn);
++		len = sysfs_emit(buf, "%u\n", conn->exp_statsn);
+ 		break;
+ 	case ISCSI_PARAM_PERSISTENT_PORT:
+-		len = sprintf(buf, "%d\n", conn->persistent_port);
++		len = sysfs_emit(buf, "%d\n", conn->persistent_port);
+ 		break;
+ 	case ISCSI_PARAM_PERSISTENT_ADDRESS:
+-		len = sprintf(buf, "%s\n", conn->persistent_address);
++		len = sysfs_emit(buf, "%s\n", conn->persistent_address);
+ 		break;
+ 	case ISCSI_PARAM_STATSN:
+-		len = sprintf(buf, "%u\n", conn->statsn);
++		len = sysfs_emit(buf, "%u\n", conn->statsn);
+ 		break;
+ 	case ISCSI_PARAM_MAX_SEGMENT_SIZE:
+-		len = sprintf(buf, "%u\n", conn->max_segment_size);
++		len = sysfs_emit(buf, "%u\n", conn->max_segment_size);
+ 		break;
+ 	case ISCSI_PARAM_KEEPALIVE_TMO:
+-		len = sprintf(buf, "%u\n", conn->keepalive_tmo);
++		len = sysfs_emit(buf, "%u\n", conn->keepalive_tmo);
+ 		break;
+ 	case ISCSI_PARAM_LOCAL_PORT:
+-		len = sprintf(buf, "%u\n", conn->local_port);
++		len = sysfs_emit(buf, "%u\n", conn->local_port);
+ 		break;
+ 	case ISCSI_PARAM_TCP_TIMESTAMP_STAT:
+-		len = sprintf(buf, "%u\n", conn->tcp_timestamp_stat);
++		len = sysfs_emit(buf, "%u\n", conn->tcp_timestamp_stat);
+ 		break;
+ 	case ISCSI_PARAM_TCP_NAGLE_DISABLE:
+-		len = sprintf(buf, "%u\n", conn->tcp_nagle_disable);
++		len = sysfs_emit(buf, "%u\n", conn->tcp_nagle_disable);
+ 		break;
+ 	case ISCSI_PARAM_TCP_WSF_DISABLE:
+-		len = sprintf(buf, "%u\n", conn->tcp_wsf_disable);
++		len = sysfs_emit(buf, "%u\n", conn->tcp_wsf_disable);
+ 		break;
+ 	case ISCSI_PARAM_TCP_TIMER_SCALE:
+-		len = sprintf(buf, "%u\n", conn->tcp_timer_scale);
++		len = sysfs_emit(buf, "%u\n", conn->tcp_timer_scale);
+ 		break;
+ 	case ISCSI_PARAM_TCP_TIMESTAMP_EN:
+-		len = sprintf(buf, "%u\n", conn->tcp_timestamp_en);
++		len = sysfs_emit(buf, "%u\n", conn->tcp_timestamp_en);
+ 		break;
+ 	case ISCSI_PARAM_IP_FRAGMENT_DISABLE:
+-		len = sprintf(buf, "%u\n", conn->fragment_disable);
++		len = sysfs_emit(buf, "%u\n", conn->fragment_disable);
+ 		break;
+ 	case ISCSI_PARAM_IPV4_TOS:
+-		len = sprintf(buf, "%u\n", conn->ipv4_tos);
++		len = sysfs_emit(buf, "%u\n", conn->ipv4_tos);
+ 		break;
+ 	case ISCSI_PARAM_IPV6_TC:
+-		len = sprintf(buf, "%u\n", conn->ipv6_traffic_class);
++		len = sysfs_emit(buf, "%u\n", conn->ipv6_traffic_class);
+ 		break;
+ 	case ISCSI_PARAM_IPV6_FLOW_LABEL:
+-		len = sprintf(buf, "%u\n", conn->ipv6_flow_label);
++		len = sysfs_emit(buf, "%u\n", conn->ipv6_flow_label);
+ 		break;
+ 	case ISCSI_PARAM_IS_FW_ASSIGNED_IPV6:
+-		len = sprintf(buf, "%u\n", conn->is_fw_assigned_ipv6);
++		len = sysfs_emit(buf, "%u\n", conn->is_fw_assigned_ipv6);
+ 		break;
+ 	case ISCSI_PARAM_TCP_XMIT_WSF:
+-		len = sprintf(buf, "%u\n", conn->tcp_xmit_wsf);
++		len = sysfs_emit(buf, "%u\n", conn->tcp_xmit_wsf);
+ 		break;
+ 	case ISCSI_PARAM_TCP_RECV_WSF:
+-		len = sprintf(buf, "%u\n", conn->tcp_recv_wsf);
++		len = sysfs_emit(buf, "%u\n", conn->tcp_recv_wsf);
+ 		break;
+ 	case ISCSI_PARAM_LOCAL_IPADDR:
+-		len = sprintf(buf, "%s\n", conn->local_ipaddr);
++		len = sysfs_emit(buf, "%s\n", conn->local_ipaddr);
+ 		break;
+ 	default:
+ 		return -ENOSYS;
+@@ -3615,13 +3615,13 @@ int iscsi_host_get_param(struct Scsi_Host *shost, enum iscsi_host_param param,
+ 
+ 	switch (param) {
+ 	case ISCSI_HOST_PARAM_NETDEV_NAME:
+-		len = sprintf(buf, "%s\n", ihost->netdev);
++		len = sysfs_emit(buf, "%s\n", ihost->netdev);
+ 		break;
+ 	case ISCSI_HOST_PARAM_HWADDRESS:
+-		len = sprintf(buf, "%s\n", ihost->hwaddress);
++		len = sysfs_emit(buf, "%s\n", ihost->hwaddress);
+ 		break;
+ 	case ISCSI_HOST_PARAM_INITIATOR_NAME:
+-		len = sprintf(buf, "%s\n", ihost->initiatorname);
++		len = sysfs_emit(buf, "%s\n", ihost->initiatorname);
+ 		break;
+ 	default:
+ 		return -ENOSYS;
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 2e68c0a876986..c53c3f9fa526a 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -132,7 +132,11 @@ show_transport_handle(struct device *dev, struct device_attribute *attr,
+ 		      char *buf)
+ {
+ 	struct iscsi_internal *priv = dev_to_iscsi_internal(dev);
+-	return sprintf(buf, "%llu\n", (unsigned long long)iscsi_handle(priv->iscsi_transport));
++
++	if (!capable(CAP_SYS_ADMIN))
++		return -EACCES;
++	return sysfs_emit(buf, "%llu\n",
++		  (unsigned long long)iscsi_handle(priv->iscsi_transport));
+ }
+ static DEVICE_ATTR(handle, S_IRUGO, show_transport_handle, NULL);
+ 
+@@ -142,7 +146,7 @@ show_transport_##name(struct device *dev, 				\
+ 		      struct device_attribute *attr,char *buf)		\
+ {									\
+ 	struct iscsi_internal *priv = dev_to_iscsi_internal(dev);	\
+-	return sprintf(buf, format"\n", priv->iscsi_transport->name);	\
++	return sysfs_emit(buf, format"\n", priv->iscsi_transport->name);\
+ }									\
+ static DEVICE_ATTR(name, S_IRUGO, show_transport_##name, NULL);
+ 
+@@ -183,7 +187,7 @@ static ssize_t
+ show_ep_handle(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+ 	struct iscsi_endpoint *ep = iscsi_dev_to_endpoint(dev);
+-	return sprintf(buf, "%llu\n", (unsigned long long) ep->id);
++	return sysfs_emit(buf, "%llu\n", (unsigned long long) ep->id);
+ }
+ static ISCSI_ATTR(ep, handle, S_IRUGO, show_ep_handle, NULL);
+ 
+@@ -2883,6 +2887,9 @@ iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ 	struct iscsi_cls_session *session;
+ 	int err = 0, value = 0;
+ 
++	if (ev->u.set_param.len > PAGE_SIZE)
++		return -EINVAL;
++
+ 	session = iscsi_session_lookup(ev->u.set_param.sid);
+ 	conn = iscsi_conn_lookup(ev->u.set_param.sid, ev->u.set_param.cid);
+ 	if (!conn || !session)
+@@ -3030,6 +3037,9 @@ iscsi_set_host_param(struct iscsi_transport *transport,
+ 	if (!transport->set_host_param)
+ 		return -ENOSYS;
+ 
++	if (ev->u.set_host_param.len > PAGE_SIZE)
++		return -EINVAL;
++
+ 	shost = scsi_host_lookup(ev->u.set_host_param.host_no);
+ 	if (!shost) {
+ 		printk(KERN_ERR "set_host_param could not find host no %u\n",
+@@ -3617,6 +3627,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ {
+ 	int err = 0;
+ 	u32 portid;
++	u32 pdu_len;
+ 	struct iscsi_uevent *ev = nlmsg_data(nlh);
+ 	struct iscsi_transport *transport = NULL;
+ 	struct iscsi_internal *priv;
+@@ -3624,6 +3635,9 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 	struct iscsi_cls_conn *conn;
+ 	struct iscsi_endpoint *ep = NULL;
+ 
++	if (!netlink_capable(skb, CAP_SYS_ADMIN))
++		return -EPERM;
++
+ 	if (nlh->nlmsg_type == ISCSI_UEVENT_PATH_UPDATE)
+ 		*group = ISCSI_NL_GRP_UIP;
+ 	else
+@@ -3756,6 +3770,14 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 			err = -EINVAL;
+ 		break;
+ 	case ISCSI_UEVENT_SEND_PDU:
++		pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev);
++
++		if ((ev->u.send_pdu.hdr_size > pdu_len) ||
++		    (ev->u.send_pdu.data_size > (pdu_len - ev->u.send_pdu.hdr_size))) {
++			err = -EINVAL;
++			break;
++		}
++
+ 		conn = iscsi_conn_lookup(ev->u.send_pdu.sid, ev->u.send_pdu.cid);
+ 		if (conn) {
+ 			mutex_lock(&conn_mutex);
+@@ -3960,7 +3982,7 @@ static ssize_t show_conn_state(struct device *dev,
+ 	    conn->state < ARRAY_SIZE(connection_state_names))
+ 		state = connection_state_names[conn->state];
+ 
+-	return sprintf(buf, "%s\n", state);
++	return sysfs_emit(buf, "%s\n", state);
+ }
+ static ISCSI_CLASS_ATTR(conn, state, S_IRUGO, show_conn_state,
+ 			NULL);
+@@ -4188,7 +4210,7 @@ show_priv_session_state(struct device *dev, struct device_attribute *attr,
+ 			char *buf)
+ {
+ 	struct iscsi_cls_session *session = iscsi_dev_to_session(dev->parent);
+-	return sprintf(buf, "%s\n", iscsi_session_state_name(session->state));
++	return sysfs_emit(buf, "%s\n", iscsi_session_state_name(session->state));
+ }
+ static ISCSI_CLASS_ATTR(priv_sess, state, S_IRUGO, show_priv_session_state,
+ 			NULL);
+@@ -4197,7 +4219,7 @@ show_priv_session_creator(struct device *dev, struct device_attribute *attr,
+ 			char *buf)
+ {
+ 	struct iscsi_cls_session *session = iscsi_dev_to_session(dev->parent);
+-	return sprintf(buf, "%d\n", session->creator);
++	return sysfs_emit(buf, "%d\n", session->creator);
+ }
+ static ISCSI_CLASS_ATTR(priv_sess, creator, S_IRUGO, show_priv_session_creator,
+ 			NULL);
+@@ -4206,7 +4228,7 @@ show_priv_session_target_id(struct device *dev, struct device_attribute *attr,
+ 			    char *buf)
+ {
+ 	struct iscsi_cls_session *session = iscsi_dev_to_session(dev->parent);
+-	return sprintf(buf, "%d\n", session->target_id);
++	return sysfs_emit(buf, "%d\n", session->target_id);
+ }
+ static ISCSI_CLASS_ATTR(priv_sess, target_id, S_IRUGO,
+ 			show_priv_session_target_id, NULL);
+@@ -4219,8 +4241,8 @@ show_priv_session_##field(struct device *dev, 				\
+ 	struct iscsi_cls_session *session = 				\
+ 			iscsi_dev_to_session(dev->parent);		\
+ 	if (session->field == -1)					\
+-		return sprintf(buf, "off\n");				\
+-	return sprintf(buf, format"\n", session->field);		\
++		return sysfs_emit(buf, "off\n");			\
++	return sysfs_emit(buf, format"\n", session->field);		\
+ }
+ 
+ #define iscsi_priv_session_attr_store(field)				\
+diff --git a/drivers/staging/fwserial/fwserial.c b/drivers/staging/fwserial/fwserial.c
+index db83d34cd6779..c368082aae1aa 100644
+--- a/drivers/staging/fwserial/fwserial.c
++++ b/drivers/staging/fwserial/fwserial.c
+@@ -2189,6 +2189,7 @@ static int fwserial_create(struct fw_unit *unit)
+ 		err = fw_core_add_address_handler(&port->rx_handler,
+ 						  &fw_high_memory_region);
+ 		if (err) {
++			tty_port_destroy(&port->port);
+ 			kfree(port);
+ 			goto free_ports;
+ 		}
+@@ -2271,6 +2272,7 @@ unregister_ttys:
+ 
+ free_ports:
+ 	for (--i; i >= 0; --i) {
++		fw_core_remove_address_handler(&serial->ports[i]->rx_handler);
+ 		tty_port_destroy(&serial->ports[i]->port);
+ 		kfree(serial->ports[i]);
+ 	}
+diff --git a/drivers/staging/most/sound/sound.c b/drivers/staging/most/sound/sound.c
+index 8a449ab9bdce4..b7666a7b1760a 100644
+--- a/drivers/staging/most/sound/sound.c
++++ b/drivers/staging/most/sound/sound.c
+@@ -96,6 +96,8 @@ static void swap_copy24(u8 *dest, const u8 *source, unsigned int bytes)
+ {
+ 	unsigned int i = 0;
+ 
++	if (bytes < 2)
++		return;
+ 	while (i < bytes - 2) {
+ 		dest[i] = source[i + 2];
+ 		dest[i + 1] = source[i + 1];
+diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-ctl.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-ctl.c
+index 4c2cae99776b9..3703409715dab 100644
+--- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-ctl.c
++++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-ctl.c
+@@ -224,7 +224,7 @@ int snd_bcm2835_new_ctl(struct bcm2835_chip *chip)
+ {
+ 	int err;
+ 
+-	strcpy(chip->card->mixername, "Broadcom Mixer");
++	strscpy(chip->card->mixername, "Broadcom Mixer", sizeof(chip->card->mixername));
+ 	err = create_ctls(chip, ARRAY_SIZE(snd_bcm2835_ctl), snd_bcm2835_ctl);
+ 	if (err < 0)
+ 		return err;
+@@ -261,7 +261,7 @@ static const struct snd_kcontrol_new snd_bcm2835_headphones_ctl[] = {
+ 
+ int snd_bcm2835_new_headphones_ctl(struct bcm2835_chip *chip)
+ {
+-	strcpy(chip->card->mixername, "Broadcom Mixer");
++	strscpy(chip->card->mixername, "Broadcom Mixer", sizeof(chip->card->mixername));
+ 	return create_ctls(chip, ARRAY_SIZE(snd_bcm2835_headphones_ctl),
+ 			   snd_bcm2835_headphones_ctl);
+ }
+@@ -295,7 +295,7 @@ static const struct snd_kcontrol_new snd_bcm2835_hdmi[] = {
+ 
+ int snd_bcm2835_new_hdmi_ctl(struct bcm2835_chip *chip)
+ {
+-	strcpy(chip->card->mixername, "Broadcom Mixer");
++	strscpy(chip->card->mixername, "Broadcom Mixer", sizeof(chip->card->mixername));
+ 	return create_ctls(chip, ARRAY_SIZE(snd_bcm2835_hdmi),
+ 			   snd_bcm2835_hdmi);
+ }
+diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-pcm.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-pcm.c
+index f783b632141b5..096f2c54258aa 100644
+--- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-pcm.c
++++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-pcm.c
+@@ -334,7 +334,7 @@ int snd_bcm2835_new_pcm(struct bcm2835_chip *chip, const char *name,
+ 
+ 	pcm->private_data = chip;
+ 	pcm->nonatomic = true;
+-	strcpy(pcm->name, name);
++	strscpy(pcm->name, name, sizeof(pcm->name));
+ 	if (!spdif) {
+ 		chip->dest = route;
+ 		chip->volume = 0;
+diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c
+index cf5f80f5ca6b0..c250fbef2fa3d 100644
+--- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c
++++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c
+@@ -185,9 +185,9 @@ static int snd_add_child_device(struct device *dev,
+ 		goto error;
+ 	}
+ 
+-	strcpy(card->driver, audio_driver->driver.name);
+-	strcpy(card->shortname, audio_driver->shortname);
+-	strcpy(card->longname, audio_driver->longname);
++	strscpy(card->driver, audio_driver->driver.name, sizeof(card->driver));
++	strscpy(card->shortname, audio_driver->shortname, sizeof(card->shortname));
++	strscpy(card->longname, audio_driver->longname, sizeof(card->longname));
+ 
+ 	err = audio_driver->newpcm(chip, audio_driver->shortname,
+ 		audio_driver->route,
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index e8963165082ee..e4f4b2186bcec 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -1943,31 +1943,27 @@ static inline int input_available_p(struct tty_struct *tty, int poll)
+  *	Helper function to speed up n_tty_read.  It is only called when
+  *	ICANON is off; it copies characters straight from the tty queue.
+  *
+- *	It can be profitably called twice; once to drain the space from
+- *	the tail pointer to the (physical) end of the buffer, and once
+- *	to drain the space from the (physical) beginning of the buffer
+- *	to head pointer.
+- *
+  *	Called under the ldata->atomic_read_lock sem
+  *
++ *	Returns true if it successfully copied data, but there is still
++ *	more data to be had.
++ *
+  *	n_tty_read()/consumer path:
+  *		caller holds non-exclusive termios_rwsem
+  *		read_tail published
+  */
+ 
+-static int copy_from_read_buf(struct tty_struct *tty,
++static bool copy_from_read_buf(struct tty_struct *tty,
+ 				      unsigned char **kbp,
+ 				      size_t *nr)
+ 
+ {
+ 	struct n_tty_data *ldata = tty->disc_data;
+-	int retval;
+ 	size_t n;
+ 	bool is_eof;
+ 	size_t head = smp_load_acquire(&ldata->commit_head);
+ 	size_t tail = ldata->read_tail & (N_TTY_BUF_SIZE - 1);
+ 
+-	retval = 0;
+ 	n = min(head - ldata->read_tail, N_TTY_BUF_SIZE - tail);
+ 	n = min(*nr, n);
+ 	if (n) {
+@@ -1980,11 +1976,14 @@ static int copy_from_read_buf(struct tty_struct *tty,
+ 		/* Turn single EOF into zero-length read */
+ 		if (L_EXTPROC(tty) && ldata->icanon && is_eof &&
+ 		    (head == ldata->read_tail))
+-			n = 0;
++			return false;
+ 		*kbp += n;
+ 		*nr -= n;
++
++		/* If we have more to copy, let the caller know */
++		return head != ldata->read_tail;
+ 	}
+-	return retval;
++	return false;
+ }
+ 
+ /**
+@@ -2010,21 +2009,22 @@ static int copy_from_read_buf(struct tty_struct *tty,
+  *		read_tail published
+  */
+ 
+-static int canon_copy_from_read_buf(struct tty_struct *tty,
+-				    unsigned char **kbp,
+-				    size_t *nr)
++static bool canon_copy_from_read_buf(struct tty_struct *tty,
++				     unsigned char **kbp,
++				     size_t *nr)
+ {
+ 	struct n_tty_data *ldata = tty->disc_data;
+ 	size_t n, size, more, c;
+ 	size_t eol;
+-	size_t tail;
++	size_t tail, canon_head;
+ 	int found = 0;
+ 
+ 	/* N.B. avoid overrun if nr == 0 */
+ 	if (!*nr)
+-		return 0;
++		return false;
+ 
+-	n = min(*nr + 1, smp_load_acquire(&ldata->canon_head) - ldata->read_tail);
++	canon_head = smp_load_acquire(&ldata->canon_head);
++	n = min(*nr + 1, canon_head - ldata->read_tail);
+ 
+ 	tail = ldata->read_tail & (N_TTY_BUF_SIZE - 1);
+ 	size = min_t(size_t, tail + n, N_TTY_BUF_SIZE);
+@@ -2068,8 +2068,11 @@ static int canon_copy_from_read_buf(struct tty_struct *tty,
+ 		else
+ 			ldata->push = 0;
+ 		tty_audit_push();
++		return false;
+ 	}
+-	return 0;
++
++	/* No EOL found - do a continuation retry if there is more data */
++	return ldata->read_tail != canon_head;
+ }
+ 
+ /**
+@@ -2133,6 +2136,30 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+ 	int packet;
+ 	size_t tail;
+ 
++	/*
++	 * Is this a continuation of a read started earler?
++	 *
++	 * If so, we still hold the atomic_read_lock and the
++	 * termios_rwsem, and can just continue to copy data.
++	 */
++	if (*cookie) {
++		if (ldata->icanon && !L_EXTPROC(tty)) {
++			if (canon_copy_from_read_buf(tty, &kb, &nr))
++				return kb - kbuf;
++		} else {
++			if (copy_from_read_buf(tty, &kb, &nr))
++				return kb - kbuf;
++		}
++
++		/* No more data - release locks and stop retries */
++		n_tty_kick_worker(tty);
++		n_tty_check_unthrottle(tty);
++		up_read(&tty->termios_rwsem);
++		mutex_unlock(&ldata->atomic_read_lock);
++		*cookie = NULL;
++		return kb - kbuf;
++	}
++
+ 	c = job_control(tty, file);
+ 	if (c < 0)
+ 		return c;
+@@ -2219,23 +2246,29 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+ 		}
+ 
+ 		if (ldata->icanon && !L_EXTPROC(tty)) {
+-			retval = canon_copy_from_read_buf(tty, &kb, &nr);
+-			if (retval)
+-				break;
++			if (canon_copy_from_read_buf(tty, &kb, &nr))
++				goto more_to_be_read;
+ 		} else {
+-			int uncopied;
+-
+ 			/* Deal with packet mode. */
+ 			if (packet && kb == kbuf) {
+ 				*kb++ = TIOCPKT_DATA;
+ 				nr--;
+ 			}
+ 
+-			uncopied = copy_from_read_buf(tty, &kb, &nr);
+-			uncopied += copy_from_read_buf(tty, &kb, &nr);
+-			if (uncopied) {
+-				retval = -EFAULT;
+-				break;
++			/*
++			 * Copy data, and if there is more to be had
++			 * and we have nothing more to wait for, then
++			 * let's mark us for retries.
++			 *
++			 * NOTE! We return here with both the termios_sem
++			 * and atomic_read_lock still held, the retries
++			 * will release them when done.
++			 */
++			if (copy_from_read_buf(tty, &kb, &nr) && kb - kbuf >= minimum) {
++more_to_be_read:
++				remove_wait_queue(&tty->read_wait, &wait);
++				*cookie = cookie;
++				return kb - kbuf;
+ 			}
+ 		}
+ 
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 3f55fe7293f31..146bd67115623 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -429,8 +429,7 @@ struct tty_driver *tty_find_polling_driver(char *name, int *line)
+ EXPORT_SYMBOL_GPL(tty_find_polling_driver);
+ #endif
+ 
+-static ssize_t hung_up_tty_read(struct file *file, char __user *buf,
+-				size_t count, loff_t *ppos)
++static ssize_t hung_up_tty_read(struct kiocb *iocb, struct iov_iter *to)
+ {
+ 	return 0;
+ }
+@@ -502,7 +501,7 @@ static const struct file_operations console_fops = {
+ 
+ static const struct file_operations hung_up_tty_fops = {
+ 	.llseek		= no_llseek,
+-	.read		= hung_up_tty_read,
++	.read_iter	= hung_up_tty_read,
+ 	.write_iter	= hung_up_tty_write,
+ 	.poll		= hung_up_tty_poll,
+ 	.unlocked_ioctl	= hung_up_tty_ioctl,
+@@ -860,13 +859,20 @@ static int iterate_tty_read(struct tty_ldisc *ld, struct tty_struct *tty,
+ 		if (!size)
+ 			break;
+ 
+-		/*
+-		 * A ldisc read error return will override any previously copied
+-		 * data (eg -EOVERFLOW from HDLC)
+-		 */
+ 		if (size < 0) {
+-			memzero_explicit(kernel_buf, sizeof(kernel_buf));
+-			return size;
++			/* Did we have an earlier error (ie -EFAULT)? */
++			if (retval)
++				break;
++			retval = size;
++
++			/*
++			 * -EOVERFLOW means we didn't have enough space
++			 * for a whole packet, and we shouldn't return
++			 * a partial result.
++			 */
++			if (retval == -EOVERFLOW)
++				offset = 0;
++			break;
+ 		}
+ 
+ 		copied = copy_to_iter(kernel_buf, size, to);
+@@ -922,8 +928,10 @@ static ssize_t tty_read(struct kiocb *iocb, struct iov_iter *to)
+ 	/* We want to wait for the line discipline to sort out in this
+ 	   situation */
+ 	ld = tty_ldisc_ref_wait(tty);
++	if (!ld)
++		return hung_up_tty_read(iocb, to);
+ 	i = -EIO;
+-	if (ld && ld->ops->read)
++	if (ld->ops->read)
+ 		i = iterate_tty_read(ld, tty, file, to);
+ 	tty_ldisc_deref(ld);
+ 
+diff --git a/drivers/tty/vt/consolemap.c b/drivers/tty/vt/consolemap.c
+index 5d778c0aa0091..8ba0dc51a1f17 100644
+--- a/drivers/tty/vt/consolemap.c
++++ b/drivers/tty/vt/consolemap.c
+@@ -495,7 +495,7 @@ con_insert_unipair(struct uni_pagedir *p, u_short unicode, u_short fontpos)
+ 
+ 	p2[unicode & 0x3f] = fontpos;
+ 	
+-	p->sum += (fontpos << 20) + unicode;
++	p->sum += (fontpos << 20U) + unicode;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index f9b3c1cb9530f..b9cdd02c10009 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -1017,6 +1017,7 @@ static void dlfb_ops_destroy(struct fb_info *info)
+ 	}
+ 	vfree(dlfb->backing_buffer);
+ 	kfree(dlfb->edid);
++	dlfb_free_urb_list(dlfb);
+ 	usb_put_dev(dlfb->udev);
+ 	kfree(dlfb);
+ 
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 96dbfc011f45d..261a50708cb89 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1320,7 +1320,6 @@ static noinline int commit_fs_roots(struct btrfs_trans_handle *trans)
+ 	struct btrfs_root *gang[8];
+ 	int i;
+ 	int ret;
+-	int err = 0;
+ 
+ 	spin_lock(&fs_info->fs_roots_radix_lock);
+ 	while (1) {
+@@ -1332,6 +1331,8 @@ static noinline int commit_fs_roots(struct btrfs_trans_handle *trans)
+ 			break;
+ 		for (i = 0; i < ret; i++) {
+ 			struct btrfs_root *root = gang[i];
++			int ret2;
++
+ 			radix_tree_tag_clear(&fs_info->fs_roots_radix,
+ 					(unsigned long)root->root_key.objectid,
+ 					BTRFS_ROOT_TRANS_TAG);
+@@ -1353,17 +1354,17 @@ static noinline int commit_fs_roots(struct btrfs_trans_handle *trans)
+ 						    root->node);
+ 			}
+ 
+-			err = btrfs_update_root(trans, fs_info->tree_root,
++			ret2 = btrfs_update_root(trans, fs_info->tree_root,
+ 						&root->root_key,
+ 						&root->root_item);
++			if (ret2)
++				return ret2;
+ 			spin_lock(&fs_info->fs_roots_radix_lock);
+-			if (err)
+-				break;
+ 			btrfs_qgroup_free_meta_all_pertrans(root);
+ 		}
+ 	}
+ 	spin_unlock(&fs_info->fs_roots_radix_lock);
+-	return err;
++	return 0;
+ }
+ 
+ /*
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index be10b16ea66ee..d5a6b9b888a56 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -158,8 +158,8 @@ static int erofs_read_superblock(struct super_block *sb)
+ 	blkszbits = dsb->blkszbits;
+ 	/* 9(512 bytes) + LOG_SECTORS_PER_BLOCK == LOG_BLOCK_SIZE */
+ 	if (blkszbits != LOG_BLOCK_SIZE) {
+-		erofs_err(sb, "blksize %u isn't supported on this platform",
+-			  1 << blkszbits);
++		erofs_err(sb, "blkszbits %u isn't supported on this platform",
++			  blkszbits);
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 8fa37d1434de1..5f7ab4f113224 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -854,7 +854,11 @@ static int __f2fs_tmpfile(struct inode *dir, struct dentry *dentry,
+ 
+ 	if (whiteout) {
+ 		f2fs_i_links_write(inode, false);
++
++		spin_lock(&inode->i_lock);
+ 		inode->i_state |= I_LINKABLE;
++		spin_unlock(&inode->i_lock);
++
+ 		*whiteout = inode;
+ 	} else {
+ 		d_tmpfile(dentry, inode);
+@@ -1040,7 +1044,11 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		err = f2fs_add_link(old_dentry, whiteout);
+ 		if (err)
+ 			goto put_out_dir;
++
++		spin_lock(&whiteout->i_lock);
+ 		whiteout->i_state &= ~I_LINKABLE;
++		spin_unlock(&whiteout->i_lock);
++
+ 		iput(whiteout);
+ 	}
+ 
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index e81eb0748e2a9..229814b4f4a6c 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -101,11 +101,11 @@ static inline void sanity_check_seg_type(struct f2fs_sb_info *sbi,
+ #define BLKS_PER_SEC(sbi)					\
+ 	((sbi)->segs_per_sec * (sbi)->blocks_per_seg)
+ #define GET_SEC_FROM_SEG(sbi, segno)				\
+-	((segno) / (sbi)->segs_per_sec)
++	(((segno) == -1) ? -1: (segno) / (sbi)->segs_per_sec)
+ #define GET_SEG_FROM_SEC(sbi, secno)				\
+ 	((secno) * (sbi)->segs_per_sec)
+ #define GET_ZONE_FROM_SEC(sbi, secno)				\
+-	((secno) / (sbi)->secs_per_zone)
++	(((secno) == -1) ? -1: (secno) / (sbi)->secs_per_zone)
+ #define GET_ZONE_FROM_SEG(sbi, segno)				\
+ 	GET_ZONE_FROM_SEC(sbi, GET_SEC_FROM_SEG(sbi, segno))
+ 
+diff --git a/fs/jfs/jfs_filsys.h b/fs/jfs/jfs_filsys.h
+index 1e899298f7f00..b5d702df7111a 100644
+--- a/fs/jfs/jfs_filsys.h
++++ b/fs/jfs/jfs_filsys.h
+@@ -268,5 +268,6 @@
+ 				 * fsck() must be run to repair
+ 				 */
+ #define	FM_EXTENDFS 0x00000008	/* file system extendfs() in progress */
++#define	FM_STATE_MAX 0x0000000f	/* max value of s_state */
+ 
+ #endif				/* _H_JFS_FILSYS */
+diff --git a/fs/jfs/jfs_mount.c b/fs/jfs/jfs_mount.c
+index 2935d4c776ec7..5d7d7170c03c0 100644
+--- a/fs/jfs/jfs_mount.c
++++ b/fs/jfs/jfs_mount.c
+@@ -37,6 +37,7 @@
+ #include <linux/fs.h>
+ #include <linux/buffer_head.h>
+ #include <linux/blkdev.h>
++#include <linux/log2.h>
+ 
+ #include "jfs_incore.h"
+ #include "jfs_filsys.h"
+@@ -366,6 +367,15 @@ static int chkSuper(struct super_block *sb)
+ 	sbi->bsize = bsize;
+ 	sbi->l2bsize = le16_to_cpu(j_sb->s_l2bsize);
+ 
++	/* check some fields for possible corruption */
++	if (sbi->l2bsize != ilog2((u32)bsize) ||
++	    j_sb->pad != 0 ||
++	    le32_to_cpu(j_sb->s_state) > FM_STATE_MAX) {
++		rc = -EINVAL;
++		jfs_err("jfs_mount: Mount Failure: superblock is corrupt!");
++		goto out;
++	}
++
+ 	/*
+ 	 * For now, ignore s_pbsize, l2bfactor.  All I/O going through buffer
+ 	 * cache.
+diff --git a/fs/namei.c b/fs/namei.c
+index d4a6dd7723038..7af66d5a0c1bf 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -669,17 +669,17 @@ static bool legitimize_root(struct nameidata *nd)
+  */
+ 
+ /**
+- * unlazy_walk - try to switch to ref-walk mode.
++ * try_to_unlazy - try to switch to ref-walk mode.
+  * @nd: nameidata pathwalk data
+- * Returns: 0 on success, -ECHILD on failure
++ * Returns: true on success, false on failure
+  *
+- * unlazy_walk attempts to legitimize the current nd->path and nd->root
++ * try_to_unlazy attempts to legitimize the current nd->path and nd->root
+  * for ref-walk mode.
+  * Must be called from rcu-walk context.
+- * Nothing should touch nameidata between unlazy_walk() failure and
++ * Nothing should touch nameidata between try_to_unlazy() failure and
+  * terminate_walk().
+  */
+-static int unlazy_walk(struct nameidata *nd)
++static bool try_to_unlazy(struct nameidata *nd)
+ {
+ 	struct dentry *parent = nd->path.dentry;
+ 
+@@ -694,14 +694,14 @@ static int unlazy_walk(struct nameidata *nd)
+ 		goto out;
+ 	rcu_read_unlock();
+ 	BUG_ON(nd->inode != parent->d_inode);
+-	return 0;
++	return true;
+ 
+ out1:
+ 	nd->path.mnt = NULL;
+ 	nd->path.dentry = NULL;
+ out:
+ 	rcu_read_unlock();
+-	return -ECHILD;
++	return false;
+ }
+ 
+ /**
+@@ -792,7 +792,7 @@ static int complete_walk(struct nameidata *nd)
+ 		 */
+ 		if (!(nd->flags & (LOOKUP_ROOT | LOOKUP_IS_SCOPED)))
+ 			nd->root.mnt = NULL;
+-		if (unlikely(unlazy_walk(nd)))
++		if (!try_to_unlazy(nd))
+ 			return -ECHILD;
+ 	}
+ 
+@@ -1466,7 +1466,7 @@ static struct dentry *lookup_fast(struct nameidata *nd,
+ 		unsigned seq;
+ 		dentry = __d_lookup_rcu(parent, &nd->last, &seq);
+ 		if (unlikely(!dentry)) {
+-			if (unlazy_walk(nd))
++			if (!try_to_unlazy(nd))
+ 				return ERR_PTR(-ECHILD);
+ 			return NULL;
+ 		}
+@@ -1567,10 +1567,8 @@ static inline int may_lookup(struct nameidata *nd)
+ {
+ 	if (nd->flags & LOOKUP_RCU) {
+ 		int err = inode_permission(nd->inode, MAY_EXEC|MAY_NOT_BLOCK);
+-		if (err != -ECHILD)
++		if (err != -ECHILD || !try_to_unlazy(nd))
+ 			return err;
+-		if (unlazy_walk(nd))
+-			return -ECHILD;
+ 	}
+ 	return inode_permission(nd->inode, MAY_EXEC);
+ }
+@@ -1592,7 +1590,7 @@ static int reserve_stack(struct nameidata *nd, struct path *link, unsigned seq)
+ 		// unlazy even if we fail to grab the link - cleanup needs it
+ 		bool grabbed_link = legitimize_path(nd, link, seq);
+ 
+-		if (unlazy_walk(nd) != 0 || !grabbed_link)
++		if (!try_to_unlazy(nd) != 0 || !grabbed_link)
+ 			return -ECHILD;
+ 
+ 		if (nd_alloc_stack(nd))
+@@ -1634,7 +1632,7 @@ static const char *pick_link(struct nameidata *nd, struct path *link,
+ 		touch_atime(&last->link);
+ 		cond_resched();
+ 	} else if (atime_needs_update(&last->link, inode)) {
+-		if (unlikely(unlazy_walk(nd)))
++		if (!try_to_unlazy(nd))
+ 			return ERR_PTR(-ECHILD);
+ 		touch_atime(&last->link);
+ 	}
+@@ -1651,11 +1649,8 @@ static const char *pick_link(struct nameidata *nd, struct path *link,
+ 		get = inode->i_op->get_link;
+ 		if (nd->flags & LOOKUP_RCU) {
+ 			res = get(NULL, inode, &last->done);
+-			if (res == ERR_PTR(-ECHILD)) {
+-				if (unlikely(unlazy_walk(nd)))
+-					return ERR_PTR(-ECHILD);
++			if (res == ERR_PTR(-ECHILD) && try_to_unlazy(nd))
+ 				res = get(link->dentry, inode, &last->done);
+-			}
+ 		} else {
+ 			res = get(link->dentry, inode, &last->done);
+ 		}
+@@ -2193,7 +2188,7 @@ OK:
+ 		}
+ 		if (unlikely(!d_can_lookup(nd->path.dentry))) {
+ 			if (nd->flags & LOOKUP_RCU) {
+-				if (unlazy_walk(nd))
++				if (!try_to_unlazy(nd))
+ 					return -ECHILD;
+ 			}
+ 			return -ENOTDIR;
+@@ -3127,7 +3122,6 @@ static const char *open_last_lookups(struct nameidata *nd,
+ 	struct inode *inode;
+ 	struct dentry *dentry;
+ 	const char *res;
+-	int error;
+ 
+ 	nd->flags |= op->intent;
+ 
+@@ -3151,9 +3145,8 @@ static const char *open_last_lookups(struct nameidata *nd,
+ 	} else {
+ 		/* create side of things */
+ 		if (nd->flags & LOOKUP_RCU) {
+-			error = unlazy_walk(nd);
+-			if (unlikely(error))
+-				return ERR_PTR(error);
++			if (!try_to_unlazy(nd))
++				return ERR_PTR(-ECHILD);
+ 		}
+ 		audit_inode(nd->name, dir, AUDIT_INODE_PARENT);
+ 		/* trailing slashes? */
+@@ -3162,9 +3155,7 @@ static const char *open_last_lookups(struct nameidata *nd,
+ 	}
+ 
+ 	if (open_flag & (O_CREAT | O_TRUNC | O_WRONLY | O_RDWR)) {
+-		error = mnt_want_write(nd->path.mnt);
+-		if (!error)
+-			got_write = true;
++		got_write = !mnt_want_write(nd->path.mnt);
+ 		/*
+ 		 * do _not_ fail yet - we might not need that or fail with
+ 		 * a different error; let lookup_open() decide; we'll be
+diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
+index 1414ab79eacfc..b7f7b31a77d59 100644
+--- a/fs/xfs/xfs_iops.c
++++ b/fs/xfs/xfs_iops.c
+@@ -865,7 +865,7 @@ xfs_setattr_size(
+ 	ASSERT(xfs_isilocked(ip, XFS_MMAPLOCK_EXCL));
+ 	ASSERT(S_ISREG(inode->i_mode));
+ 	ASSERT((iattr->ia_valid & (ATTR_UID|ATTR_GID|ATTR_ATIME|ATTR_ATIME_SET|
+-		ATTR_MTIME_SET|ATTR_KILL_PRIV|ATTR_TIMES_SET)) == 0);
++		ATTR_MTIME_SET|ATTR_TIMES_SET)) == 0);
+ 
+ 	oldsize = inode->i_size;
+ 	newsize = iattr->ia_size;
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 9de5312edeb86..8753e98a8d58a 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -3879,6 +3879,9 @@ int dev_pre_changeaddr_notify(struct net_device *dev, const char *addr,
+ 			      struct netlink_ext_ack *extack);
+ int dev_set_mac_address(struct net_device *dev, struct sockaddr *sa,
+ 			struct netlink_ext_ack *extack);
++int dev_set_mac_address_user(struct net_device *dev, struct sockaddr *sa,
++			     struct netlink_ext_ack *extack);
++int dev_get_mac_address(struct sockaddr *sa, struct net *net, char *dev_name);
+ int dev_change_carrier(struct net_device *, bool new_carrier);
+ int dev_get_phys_port_id(struct net_device *dev,
+ 			 struct netdev_phys_item_id *ppid);
+diff --git a/include/linux/swap.h b/include/linux/swap.h
+index 667935c0dbd4c..fbc6805358da0 100644
+--- a/include/linux/swap.h
++++ b/include/linux/swap.h
+@@ -484,6 +484,7 @@ struct backing_dev_info;
+ extern int init_swap_address_space(unsigned int type, unsigned long nr_pages);
+ extern void exit_swap_address_space(unsigned int type);
+ extern struct swap_info_struct *get_swap_device(swp_entry_t entry);
++sector_t swap_page_sector(struct page *page);
+ 
+ static inline void put_swap_device(struct swap_info_struct *si)
+ {
+diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
+index 4807ca4d52e03..2a430e713ce51 100644
+--- a/include/linux/zsmalloc.h
++++ b/include/linux/zsmalloc.h
+@@ -35,7 +35,7 @@ enum zs_mapmode {
+ 
+ struct zs_pool_stats {
+ 	/* How many pages were migrated (freed) */
+-	unsigned long pages_compacted;
++	atomic_long_t pages_compacted;
+ };
+ 
+ struct zs_pool;
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index c8e67042a3b14..6da4b3c5dd55d 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -238,6 +238,14 @@ enum {
+ 	 * during the hdev->setup vendor callback.
+ 	 */
+ 	HCI_QUIRK_BROKEN_ERR_DATA_REPORTING,
++
++	/*
++	 * When this quirk is set, then the hci_suspend_notifier is not
++	 * registered. This is intended for devices which drop completely
++	 * from the bus on system-suspend and which will show up as a new
++	 * HCI after resume.
++	 */
++	HCI_QUIRK_NO_SUSPEND_NOTIFIER,
+ };
+ 
+ /* HCI device flags */
+diff --git a/include/uapi/linux/pkt_cls.h b/include/uapi/linux/pkt_cls.h
+index ee95f42fb0ecf..88f4bf0047e7a 100644
+--- a/include/uapi/linux/pkt_cls.h
++++ b/include/uapi/linux/pkt_cls.h
+@@ -591,6 +591,8 @@ enum {
+ 	TCA_FLOWER_KEY_CT_FLAGS_ESTABLISHED = 1 << 1, /* Part of an existing connection. */
+ 	TCA_FLOWER_KEY_CT_FLAGS_RELATED = 1 << 2, /* Related to an established connection. */
+ 	TCA_FLOWER_KEY_CT_FLAGS_TRACKED = 1 << 3, /* Conntrack has occurred. */
++
++	__TCA_FLOWER_KEY_CT_FLAGS_MAX,
+ };
+ 
+ enum {
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 77aa0e788b9b7..3a150445e0cba 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -363,8 +363,9 @@ static enum hrtimer_restart hrtick(struct hrtimer *timer)
+ static void __hrtick_restart(struct rq *rq)
+ {
+ 	struct hrtimer *timer = &rq->hrtick_timer;
++	ktime_t time = rq->hrtick_time;
+ 
+-	hrtimer_start_expires(timer, HRTIMER_MODE_ABS_PINNED_HARD);
++	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
+ }
+ 
+ /*
+@@ -388,7 +389,6 @@ static void __hrtick_start(void *arg)
+ void hrtick_start(struct rq *rq, u64 delay)
+ {
+ 	struct hrtimer *timer = &rq->hrtick_timer;
+-	ktime_t time;
+ 	s64 delta;
+ 
+ 	/*
+@@ -396,9 +396,7 @@ void hrtick_start(struct rq *rq, u64 delay)
+ 	 * doesn't make sense and can cause timer DoS.
+ 	 */
+ 	delta = max_t(s64, delay, 10000LL);
+-	time = ktime_add_ns(timer->base->get_time(), delta);
+-
+-	hrtimer_set_expires(timer, time);
++	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
+ 
+ 	if (rq == this_rq())
+ 		__hrtick_restart(rq);
+@@ -2989,7 +2987,7 @@ out:
+ 
+ /**
+  * try_invoke_on_locked_down_task - Invoke a function on task in fixed state
+- * @p: Process for which the function is to be invoked.
++ * @p: Process for which the function is to be invoked, can be @current.
+  * @func: Function to invoke.
+  * @arg: Argument to function.
+  *
+@@ -3007,12 +3005,11 @@ out:
+  */
+ bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct task_struct *t, void *arg), void *arg)
+ {
+-	bool ret = false;
+ 	struct rq_flags rf;
++	bool ret = false;
+ 	struct rq *rq;
+ 
+-	lockdep_assert_irqs_enabled();
+-	raw_spin_lock_irq(&p->pi_lock);
++	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
+ 	if (p->on_rq) {
+ 		rq = __task_rq_lock(p, &rf);
+ 		if (task_rq(p) == rq)
+@@ -3029,7 +3026,7 @@ bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct t
+ 				ret = func(p, arg);
+ 		}
+ 	}
+-	raw_spin_unlock_irq(&p->pi_lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
+ 	return ret;
+ }
+ 
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index c122176c627ec..fac1b121d1130 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1018,6 +1018,7 @@ struct rq {
+ 	call_single_data_t	hrtick_csd;
+ #endif
+ 	struct hrtimer		hrtick_timer;
++	ktime_t 		hrtick_time;
+ #endif
+ 
+ #ifdef CONFIG_SCHEDSTATS
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 2e3b7075e4329..94c6b3d4df96a 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5302,21 +5302,23 @@ static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
+ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+ 				unsigned long *start, unsigned long *end)
+ {
+-	unsigned long a_start, a_end;
++	unsigned long v_start = ALIGN(vma->vm_start, PUD_SIZE),
++		v_end = ALIGN_DOWN(vma->vm_end, PUD_SIZE);
+ 
+-	if (!(vma->vm_flags & VM_MAYSHARE))
++	/*
++	 * vma need span at least one aligned PUD size and the start,end range
++	 * must at least partialy within it.
++	 */
++	if (!(vma->vm_flags & VM_MAYSHARE) || !(v_end > v_start) ||
++		(*end <= v_start) || (*start >= v_end))
+ 		return;
+ 
+ 	/* Extend the range to be PUD aligned for a worst case scenario */
+-	a_start = ALIGN_DOWN(*start, PUD_SIZE);
+-	a_end = ALIGN(*end, PUD_SIZE);
++	if (*start > v_start)
++		*start = ALIGN_DOWN(*start, PUD_SIZE);
+ 
+-	/*
+-	 * Intersect the range with the vma range, since pmd sharing won't be
+-	 * across vma after all
+-	 */
+-	*start = max(vma->vm_start, a_start);
+-	*end = min(vma->vm_end, a_end);
++	if (*end < v_end)
++		*end = ALIGN(*end, PUD_SIZE);
+ }
+ 
+ /*
+diff --git a/mm/page_io.c b/mm/page_io.c
+index 433df12633495..96479817ffae3 100644
+--- a/mm/page_io.c
++++ b/mm/page_io.c
+@@ -273,11 +273,6 @@ out:
+ 	return ret;
+ }
+ 
+-static sector_t swap_page_sector(struct page *page)
+-{
+-	return (sector_t)__page_file_index(page) << (PAGE_SHIFT - 9);
+-}
+-
+ static inline void count_swpout_vm_event(struct page *page)
+ {
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 16db9d1ebcbf3..5256c10049b0f 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -220,6 +220,19 @@ offset_to_swap_extent(struct swap_info_struct *sis, unsigned long offset)
+ 	BUG();
+ }
+ 
++sector_t swap_page_sector(struct page *page)
++{
++	struct swap_info_struct *sis = page_swap_info(page);
++	struct swap_extent *se;
++	sector_t sector;
++	pgoff_t offset;
++
++	offset = __page_file_index(page);
++	se = offset_to_swap_extent(sis, offset);
++	sector = se->start_block + (offset - se->start_page);
++	return sector << (PAGE_SHIFT - 9);
++}
++
+ /*
+  * swap allocation tell device that a cluster of swap can now be discarded,
+  * to allow the swap device to optimize its wear-levelling.
+diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
+index cdfaaadea8ff7..7a0b79b0a6899 100644
+--- a/mm/zsmalloc.c
++++ b/mm/zsmalloc.c
+@@ -2216,11 +2216,13 @@ static unsigned long zs_can_compact(struct size_class *class)
+ 	return obj_wasted * class->pages_per_zspage;
+ }
+ 
+-static void __zs_compact(struct zs_pool *pool, struct size_class *class)
++static unsigned long __zs_compact(struct zs_pool *pool,
++				  struct size_class *class)
+ {
+ 	struct zs_compact_control cc;
+ 	struct zspage *src_zspage;
+ 	struct zspage *dst_zspage = NULL;
++	unsigned long pages_freed = 0;
+ 
+ 	spin_lock(&class->lock);
+ 	while ((src_zspage = isolate_zspage(class, true))) {
+@@ -2250,7 +2252,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
+ 		putback_zspage(class, dst_zspage);
+ 		if (putback_zspage(class, src_zspage) == ZS_EMPTY) {
+ 			free_zspage(pool, class, src_zspage);
+-			pool->stats.pages_compacted += class->pages_per_zspage;
++			pages_freed += class->pages_per_zspage;
+ 		}
+ 		spin_unlock(&class->lock);
+ 		cond_resched();
+@@ -2261,12 +2263,15 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
+ 		putback_zspage(class, src_zspage);
+ 
+ 	spin_unlock(&class->lock);
++
++	return pages_freed;
+ }
+ 
+ unsigned long zs_compact(struct zs_pool *pool)
+ {
+ 	int i;
+ 	struct size_class *class;
++	unsigned long pages_freed = 0;
+ 
+ 	for (i = ZS_SIZE_CLASSES - 1; i >= 0; i--) {
+ 		class = pool->size_class[i];
+@@ -2274,10 +2279,11 @@ unsigned long zs_compact(struct zs_pool *pool)
+ 			continue;
+ 		if (class->index != i)
+ 			continue;
+-		__zs_compact(pool, class);
++		pages_freed += __zs_compact(pool, class);
+ 	}
++	atomic_long_add(pages_freed, &pool->stats.pages_compacted);
+ 
+-	return pool->stats.pages_compacted;
++	return pages_freed;
+ }
+ EXPORT_SYMBOL_GPL(zs_compact);
+ 
+@@ -2294,13 +2300,12 @@ static unsigned long zs_shrinker_scan(struct shrinker *shrinker,
+ 	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
+ 			shrinker);
+ 
+-	pages_freed = pool->stats.pages_compacted;
+ 	/*
+ 	 * Compact classes and calculate compaction delta.
+ 	 * Can run concurrently with a manually triggered
+ 	 * (by user) compaction.
+ 	 */
+-	pages_freed = zs_compact(pool) - pages_freed;
++	pages_freed = zs_compact(pool);
+ 
+ 	return pages_freed ? pages_freed : SHRINK_STOP;
+ }
+diff --git a/net/bluetooth/amp.c b/net/bluetooth/amp.c
+index 9c711f0dfae35..be2d469d6369d 100644
+--- a/net/bluetooth/amp.c
++++ b/net/bluetooth/amp.c
+@@ -297,6 +297,9 @@ void amp_read_loc_assoc_final_data(struct hci_dev *hdev,
+ 	struct hci_request req;
+ 	int err;
+ 
++	if (!mgr)
++		return;
++
+ 	cp.phy_handle = hcon->handle;
+ 	cp.len_so_far = cpu_to_le16(0);
+ 	cp.max_len = cpu_to_le16(hdev->amp_assoc_size);
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 555058270f112..0152bc6b67967 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3529,7 +3529,8 @@ static int hci_suspend_notifier(struct notifier_block *nb, unsigned long action,
+ 	}
+ 
+ 	/* Suspend notifier should only act on events when powered. */
+-	if (!hdev_is_powered(hdev))
++	if (!hdev_is_powered(hdev) ||
++	    hci_dev_test_flag(hdev, HCI_UNREGISTER))
+ 		goto done;
+ 
+ 	if (action == PM_SUSPEND_PREPARE) {
+@@ -3784,10 +3785,12 @@ int hci_register_dev(struct hci_dev *hdev)
+ 	hci_sock_dev_event(hdev, HCI_DEV_REG);
+ 	hci_dev_hold(hdev);
+ 
+-	hdev->suspend_notifier.notifier_call = hci_suspend_notifier;
+-	error = register_pm_notifier(&hdev->suspend_notifier);
+-	if (error)
+-		goto err_wqueue;
++	if (!test_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks)) {
++		hdev->suspend_notifier.notifier_call = hci_suspend_notifier;
++		error = register_pm_notifier(&hdev->suspend_notifier);
++		if (error)
++			goto err_wqueue;
++	}
+ 
+ 	queue_work(hdev->req_workqueue, &hdev->power_on);
+ 
+@@ -3822,9 +3825,11 @@ void hci_unregister_dev(struct hci_dev *hdev)
+ 
+ 	cancel_work_sync(&hdev->power_on);
+ 
+-	hci_suspend_clear_tasks(hdev);
+-	unregister_pm_notifier(&hdev->suspend_notifier);
+-	cancel_work_sync(&hdev->suspend_prepare);
++	if (!test_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks)) {
++		hci_suspend_clear_tasks(hdev);
++		unregister_pm_notifier(&hdev->suspend_notifier);
++		cancel_work_sync(&hdev->suspend_prepare);
++	}
+ 
+ 	hci_dev_do_close(hdev);
+ 
+diff --git a/net/bridge/br_sysfs_if.c b/net/bridge/br_sysfs_if.c
+index 7a59cdddd3ce3..5047e9c2333a2 100644
+--- a/net/bridge/br_sysfs_if.c
++++ b/net/bridge/br_sysfs_if.c
+@@ -55,9 +55,8 @@ static BRPORT_ATTR(_name, 0644,					\
+ static int store_flag(struct net_bridge_port *p, unsigned long v,
+ 		      unsigned long mask)
+ {
+-	unsigned long flags;
+-
+-	flags = p->flags;
++	unsigned long flags = p->flags;
++	int err;
+ 
+ 	if (v)
+ 		flags |= mask;
+@@ -65,6 +64,10 @@ static int store_flag(struct net_bridge_port *p, unsigned long v,
+ 		flags &= ~mask;
+ 
+ 	if (flags != p->flags) {
++		err = br_switchdev_set_port_flag(p, flags, mask);
++		if (err)
++			return err;
++
+ 		p->flags = flags;
+ 		br_port_flags_change(p, mask);
+ 	}
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 210d0fce58e17..75ca6c6d01d6e 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -8686,6 +8686,48 @@ int dev_set_mac_address(struct net_device *dev, struct sockaddr *sa,
+ }
+ EXPORT_SYMBOL(dev_set_mac_address);
+ 
++static DECLARE_RWSEM(dev_addr_sem);
++
++int dev_set_mac_address_user(struct net_device *dev, struct sockaddr *sa,
++			     struct netlink_ext_ack *extack)
++{
++	int ret;
++
++	down_write(&dev_addr_sem);
++	ret = dev_set_mac_address(dev, sa, extack);
++	up_write(&dev_addr_sem);
++	return ret;
++}
++EXPORT_SYMBOL(dev_set_mac_address_user);
++
++int dev_get_mac_address(struct sockaddr *sa, struct net *net, char *dev_name)
++{
++	size_t size = sizeof(sa->sa_data);
++	struct net_device *dev;
++	int ret = 0;
++
++	down_read(&dev_addr_sem);
++	rcu_read_lock();
++
++	dev = dev_get_by_name_rcu(net, dev_name);
++	if (!dev) {
++		ret = -ENODEV;
++		goto unlock;
++	}
++	if (!dev->addr_len)
++		memset(sa->sa_data, 0, size);
++	else
++		memcpy(sa->sa_data, dev->dev_addr,
++		       min_t(size_t, size, dev->addr_len));
++	sa->sa_family = dev->type;
++
++unlock:
++	rcu_read_unlock();
++	up_read(&dev_addr_sem);
++	return ret;
++}
++EXPORT_SYMBOL(dev_get_mac_address);
++
+ /**
+  *	dev_change_carrier - Change device carrier
+  *	@dev: device
+diff --git a/net/core/dev_ioctl.c b/net/core/dev_ioctl.c
+index 205e92e604ef7..54fb18b4f55e4 100644
+--- a/net/core/dev_ioctl.c
++++ b/net/core/dev_ioctl.c
+@@ -123,17 +123,6 @@ static int dev_ifsioc_locked(struct net *net, struct ifreq *ifr, unsigned int cm
+ 		ifr->ifr_mtu = dev->mtu;
+ 		return 0;
+ 
+-	case SIOCGIFHWADDR:
+-		if (!dev->addr_len)
+-			memset(ifr->ifr_hwaddr.sa_data, 0,
+-			       sizeof(ifr->ifr_hwaddr.sa_data));
+-		else
+-			memcpy(ifr->ifr_hwaddr.sa_data, dev->dev_addr,
+-			       min(sizeof(ifr->ifr_hwaddr.sa_data),
+-				   (size_t)dev->addr_len));
+-		ifr->ifr_hwaddr.sa_family = dev->type;
+-		return 0;
+-
+ 	case SIOCGIFSLAVE:
+ 		err = -EINVAL;
+ 		break;
+@@ -274,7 +263,7 @@ static int dev_ifsioc(struct net *net, struct ifreq *ifr, unsigned int cmd)
+ 	case SIOCSIFHWADDR:
+ 		if (dev->addr_len > sizeof(struct sockaddr))
+ 			return -EINVAL;
+-		return dev_set_mac_address(dev, &ifr->ifr_hwaddr, NULL);
++		return dev_set_mac_address_user(dev, &ifr->ifr_hwaddr, NULL);
+ 
+ 	case SIOCSIFHWBROADCAST:
+ 		if (ifr->ifr_hwaddr.sa_family != dev->type)
+@@ -418,6 +407,12 @@ int dev_ioctl(struct net *net, unsigned int cmd, struct ifreq *ifr, bool *need_c
+ 	 */
+ 
+ 	switch (cmd) {
++	case SIOCGIFHWADDR:
++		dev_load(net, ifr->ifr_name);
++		ret = dev_get_mac_address(&ifr->ifr_hwaddr, net, ifr->ifr_name);
++		if (colon)
++			*colon = ':';
++		return ret;
+ 	/*
+ 	 *	These ioctl calls:
+ 	 *	- can be done by all.
+@@ -427,7 +422,6 @@ int dev_ioctl(struct net *net, unsigned int cmd, struct ifreq *ifr, bool *need_c
+ 	case SIOCGIFFLAGS:
+ 	case SIOCGIFMETRIC:
+ 	case SIOCGIFMTU:
+-	case SIOCGIFHWADDR:
+ 	case SIOCGIFSLAVE:
+ 	case SIOCGIFMAP:
+ 	case SIOCGIFINDEX:
+diff --git a/net/core/pktgen.c b/net/core/pktgen.c
+index 105978604ffdb..3fba429f1f57b 100644
+--- a/net/core/pktgen.c
++++ b/net/core/pktgen.c
+@@ -3464,7 +3464,7 @@ static int pktgen_thread_worker(void *arg)
+ 	struct pktgen_dev *pkt_dev = NULL;
+ 	int cpu = t->cpu;
+ 
+-	BUG_ON(smp_processor_id() != cpu);
++	WARN_ON(smp_processor_id() != cpu);
+ 
+ 	init_waitqueue_head(&t->queue);
+ 	complete(&t->start_done);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 7d72236917839..eae8e87930cd7 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2658,7 +2658,7 @@ static int do_setlink(const struct sk_buff *skb,
+ 		sa->sa_family = dev->type;
+ 		memcpy(sa->sa_data, nla_data(tb[IFLA_ADDRESS]),
+ 		       dev->addr_len);
+-		err = dev_set_mac_address(dev, sa, extack);
++		err = dev_set_mac_address_user(dev, sa, extack);
+ 		kfree(sa);
+ 		if (err)
+ 			goto errout;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 7ab56796bd3a9..1301ea694b940 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3285,7 +3285,19 @@ EXPORT_SYMBOL(skb_split);
+  */
+ static int skb_prepare_for_shift(struct sk_buff *skb)
+ {
+-	return skb_cloned(skb) && pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
++	int ret = 0;
++
++	if (skb_cloned(skb)) {
++		/* Save and restore truesize: pskb_expand_head() may reallocate
++		 * memory where ksize(kmalloc(S)) != ksize(kmalloc(S)), but we
++		 * cannot change truesize at this point.
++		 */
++		unsigned int save_truesize = skb->truesize;
++
++		ret = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
++		skb->truesize = save_truesize;
++	}
++	return ret;
+ }
+ 
+ /**
+diff --git a/net/dsa/tag_rtl4_a.c b/net/dsa/tag_rtl4_a.c
+index 2646abe5a69e8..c17d39b4a1a04 100644
+--- a/net/dsa/tag_rtl4_a.c
++++ b/net/dsa/tag_rtl4_a.c
+@@ -12,9 +12,7 @@
+  *
+  * The 2 bytes tag form a 16 bit big endian word. The exact
+  * meaning has been guessed from packet dumps from ingress
+- * frames, as no working egress traffic has been available
+- * we do not know the format of the egress tags or if they
+- * are even supported.
++ * frames.
+  */
+ 
+ #include <linux/etherdevice.h>
+@@ -36,17 +34,34 @@
+ static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb,
+ 				      struct net_device *dev)
+ {
+-	/*
+-	 * Just let it pass thru, we don't know if it is possible
+-	 * to tag a frame with the 0x8899 ethertype and direct it
+-	 * to a specific port, all attempts at reverse-engineering have
+-	 * ended up with the frames getting dropped.
+-	 *
+-	 * The VLAN set-up needs to restrict the frames to the right port.
+-	 *
+-	 * If you have documentation on the tagging format for RTL8366RB
+-	 * (tag type A) then please contribute.
+-	 */
++	struct dsa_port *dp = dsa_slave_to_port(dev);
++	u8 *tag;
++	u16 *p;
++	u16 out;
++
++	/* Pad out to at least 60 bytes */
++	if (unlikely(eth_skb_pad(skb)))
++		return NULL;
++	if (skb_cow_head(skb, RTL4_A_HDR_LEN) < 0)
++		return NULL;
++
++	netdev_dbg(dev, "add realtek tag to package to port %d\n",
++		   dp->index);
++	skb_push(skb, RTL4_A_HDR_LEN);
++
++	memmove(skb->data, skb->data + RTL4_A_HDR_LEN, 2 * ETH_ALEN);
++	tag = skb->data + 2 * ETH_ALEN;
++
++	/* Set Ethertype */
++	p = (u16 *)tag;
++	*p = htons(RTL4_A_ETHERTYPE);
++
++	out = (RTL4_A_PROTOCOL_RTL8366RB << 12) | (2 << 8);
++	/* The lower bits is the port numer */
++	out |= (u8)dp->index;
++	p = (u16 *)(tag + 2);
++	*p = htons(out);
++
+ 	return skb;
+ }
+ 
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index 5c97de4599057..805f974923b92 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -164,8 +164,10 @@ static struct hsr_node *hsr_add_node(struct hsr_priv *hsr,
+ 	 * as initialization. (0 could trigger an spurious ring error warning).
+ 	 */
+ 	now = jiffies;
+-	for (i = 0; i < HSR_PT_PORTS; i++)
++	for (i = 0; i < HSR_PT_PORTS; i++) {
+ 		new_node->time_in[i] = now;
++		new_node->time_out[i] = now;
++	}
+ 	for (i = 0; i < HSR_PT_PORTS; i++)
+ 		new_node->seq_out[i] = seq_out;
+ 
+@@ -411,9 +413,12 @@ void hsr_register_frame_in(struct hsr_node *node, struct hsr_port *port,
+ int hsr_register_frame_out(struct hsr_port *port, struct hsr_node *node,
+ 			   u16 sequence_nr)
+ {
+-	if (seq_nr_before_or_eq(sequence_nr, node->seq_out[port->type]))
++	if (seq_nr_before_or_eq(sequence_nr, node->seq_out[port->type]) &&
++	    time_is_after_jiffies(node->time_out[port->type] +
++	    msecs_to_jiffies(HSR_ENTRY_FORGET_TIME)))
+ 		return 1;
+ 
++	node->time_out[port->type] = jiffies;
+ 	node->seq_out[port->type] = sequence_nr;
+ 	return 0;
+ }
+diff --git a/net/hsr/hsr_framereg.h b/net/hsr/hsr_framereg.h
+index 86b43f539f2cc..d9628e7a5f051 100644
+--- a/net/hsr/hsr_framereg.h
++++ b/net/hsr/hsr_framereg.h
+@@ -75,6 +75,7 @@ struct hsr_node {
+ 	enum hsr_port_type	addr_B_port;
+ 	unsigned long		time_in[HSR_PT_PORTS];
+ 	bool			time_in_stale[HSR_PT_PORTS];
++	unsigned long		time_out[HSR_PT_PORTS];
+ 	/* if the node is a SAN */
+ 	bool			san_a;
+ 	bool			san_b;
+diff --git a/net/hsr/hsr_main.h b/net/hsr/hsr_main.h
+index 7dc92ce5a1340..f79ca55d69868 100644
+--- a/net/hsr/hsr_main.h
++++ b/net/hsr/hsr_main.h
+@@ -21,6 +21,7 @@
+ #define HSR_LIFE_CHECK_INTERVAL		 2000 /* ms */
+ #define HSR_NODE_FORGET_TIME		60000 /* ms */
+ #define HSR_ANNOUNCE_INTERVAL		  100 /* ms */
++#define HSR_ENTRY_FORGET_TIME		  400 /* ms */
+ 
+ /* By how much may slave1 and slave2 timestamps of latest received frame from
+  * each node differ before we notify of communication problem?
+diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
+index db7d888914fad..e14368ced21f8 100644
+--- a/net/iucv/af_iucv.c
++++ b/net/iucv/af_iucv.c
+@@ -2036,7 +2036,6 @@ static int afiucv_hs_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	char nullstring[8];
+ 
+ 	if (!pskb_may_pull(skb, sizeof(*trans_hdr))) {
+-		WARN_ONCE(1, "AF_IUCV failed to receive skb, len=%u", skb->len);
+ 		kfree_skb(skb);
+ 		return NET_RX_SUCCESS;
+ 	}
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 953906e407428..16adba172fb94 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -981,6 +981,12 @@ static void subflow_data_ready(struct sock *sk)
+ 
+ 	msk = mptcp_sk(parent);
+ 	if (state & TCPF_LISTEN) {
++		/* MPJ subflow are removed from accept queue before reaching here,
++		 * avoid stray wakeups
++		 */
++		if (reqsk_queue_empty(&inet_csk(sk)->icsk_accept_queue))
++			return;
++
+ 		set_bit(MPTCP_DATA_READY, &msk->flags);
+ 		parent->sk_data_ready(parent);
+ 		return;
+diff --git a/net/psample/psample.c b/net/psample/psample.c
+index 33e238c965bd8..482c07f2766b1 100644
+--- a/net/psample/psample.c
++++ b/net/psample/psample.c
+@@ -309,10 +309,10 @@ static int psample_tunnel_meta_len(struct ip_tunnel_info *tun_info)
+ 	unsigned short tun_proto = ip_tunnel_info_af(tun_info);
+ 	const struct ip_tunnel_key *tun_key = &tun_info->key;
+ 	int tun_opts_len = tun_info->options_len;
+-	int sum = 0;
++	int sum = nla_total_size(0);	/* PSAMPLE_ATTR_TUNNEL */
+ 
+ 	if (tun_key->tun_flags & TUNNEL_KEY)
+-		sum += nla_total_size(sizeof(u64));
++		sum += nla_total_size_64bit(sizeof(u64));
+ 
+ 	if (tun_info->mode & IP_TUNNEL_INFO_BRIDGE)
+ 		sum += nla_total_size(0);
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 84f932532db7d..46c1b3e9f66a5 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -30,6 +30,11 @@
+ 
+ #include <uapi/linux/netfilter/nf_conntrack_common.h>
+ 
++#define TCA_FLOWER_KEY_CT_FLAGS_MAX \
++		((__TCA_FLOWER_KEY_CT_FLAGS_MAX - 1) << 1)
++#define TCA_FLOWER_KEY_CT_FLAGS_MASK \
++		(TCA_FLOWER_KEY_CT_FLAGS_MAX - 1)
++
+ struct fl_flow_key {
+ 	struct flow_dissector_key_meta meta;
+ 	struct flow_dissector_key_control control;
+@@ -686,8 +691,10 @@ static const struct nla_policy fl_policy[TCA_FLOWER_MAX + 1] = {
+ 	[TCA_FLOWER_KEY_ENC_IP_TTL_MASK] = { .type = NLA_U8 },
+ 	[TCA_FLOWER_KEY_ENC_OPTS]	= { .type = NLA_NESTED },
+ 	[TCA_FLOWER_KEY_ENC_OPTS_MASK]	= { .type = NLA_NESTED },
+-	[TCA_FLOWER_KEY_CT_STATE]	= { .type = NLA_U16 },
+-	[TCA_FLOWER_KEY_CT_STATE_MASK]	= { .type = NLA_U16 },
++	[TCA_FLOWER_KEY_CT_STATE]	=
++		NLA_POLICY_MASK(NLA_U16, TCA_FLOWER_KEY_CT_FLAGS_MASK),
++	[TCA_FLOWER_KEY_CT_STATE_MASK]	=
++		NLA_POLICY_MASK(NLA_U16, TCA_FLOWER_KEY_CT_FLAGS_MASK),
+ 	[TCA_FLOWER_KEY_CT_ZONE]	= { .type = NLA_U16 },
+ 	[TCA_FLOWER_KEY_CT_ZONE_MASK]	= { .type = NLA_U16 },
+ 	[TCA_FLOWER_KEY_CT_MARK]	= { .type = NLA_U32 },
+@@ -1390,12 +1397,33 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key,
+ 	return 0;
+ }
+ 
++static int fl_validate_ct_state(u16 state, struct nlattr *tb,
++				struct netlink_ext_ack *extack)
++{
++	if (state && !(state & TCA_FLOWER_KEY_CT_FLAGS_TRACKED)) {
++		NL_SET_ERR_MSG_ATTR(extack, tb,
++				    "no trk, so no other flag can be set");
++		return -EINVAL;
++	}
++
++	if (state & TCA_FLOWER_KEY_CT_FLAGS_NEW &&
++	    state & TCA_FLOWER_KEY_CT_FLAGS_ESTABLISHED) {
++		NL_SET_ERR_MSG_ATTR(extack, tb,
++				    "new and est are mutually exclusive");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static int fl_set_key_ct(struct nlattr **tb,
+ 			 struct flow_dissector_key_ct *key,
+ 			 struct flow_dissector_key_ct *mask,
+ 			 struct netlink_ext_ack *extack)
+ {
+ 	if (tb[TCA_FLOWER_KEY_CT_STATE]) {
++		int err;
++
+ 		if (!IS_ENABLED(CONFIG_NF_CONNTRACK)) {
+ 			NL_SET_ERR_MSG(extack, "Conntrack isn't enabled");
+ 			return -EOPNOTSUPP;
+@@ -1403,6 +1431,13 @@ static int fl_set_key_ct(struct nlattr **tb,
+ 		fl_set_key_val(tb, &key->ct_state, TCA_FLOWER_KEY_CT_STATE,
+ 			       &mask->ct_state, TCA_FLOWER_KEY_CT_STATE_MASK,
+ 			       sizeof(key->ct_state));
++
++		err = fl_validate_ct_state(mask->ct_state,
++					   tb[TCA_FLOWER_KEY_CT_STATE_MASK],
++					   extack);
++		if (err)
++			return err;
++
+ 	}
+ 	if (tb[TCA_FLOWER_KEY_CT_ZONE]) {
+ 		if (!IS_ENABLED(CONFIG_NF_CONNTRACK_ZONES)) {
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index e567b4baf3a08..334299357e715 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -1167,7 +1167,7 @@ static ssize_t smk_write_net4addr(struct file *file, const char __user *buf,
+ 		return -EPERM;
+ 	if (*ppos != 0)
+ 		return -EINVAL;
+-	if (count < SMK_NETLBLADDRMIN)
++	if (count < SMK_NETLBLADDRMIN || count > PAGE_SIZE - 1)
+ 		return -EINVAL;
+ 
+ 	data = memdup_user_nul(buf, count);
+@@ -1427,7 +1427,7 @@ static ssize_t smk_write_net6addr(struct file *file, const char __user *buf,
+ 		return -EPERM;
+ 	if (*ppos != 0)
+ 		return -EINVAL;
+-	if (count < SMK_NETLBLADDRMIN)
++	if (count < SMK_NETLBLADDRMIN || count > PAGE_SIZE - 1)
+ 		return -EINVAL;
+ 
+ 	data = memdup_user_nul(buf, count);
+@@ -1834,6 +1834,10 @@ static ssize_t smk_write_ambient(struct file *file, const char __user *buf,
+ 	if (!smack_privileged(CAP_MAC_ADMIN))
+ 		return -EPERM;
+ 
++	/* Enough data must be present */
++	if (count == 0 || count > PAGE_SIZE)
++		return -EINVAL;
++
+ 	data = memdup_user_nul(buf, count);
+ 	if (IS_ERR(data))
+ 		return PTR_ERR(data);
+@@ -2005,6 +2009,9 @@ static ssize_t smk_write_onlycap(struct file *file, const char __user *buf,
+ 	if (!smack_privileged(CAP_MAC_ADMIN))
+ 		return -EPERM;
+ 
++	if (count > PAGE_SIZE)
++		return -EINVAL;
++
+ 	data = memdup_user_nul(buf, count);
+ 	if (IS_ERR(data))
+ 		return PTR_ERR(data);
+@@ -2092,6 +2099,9 @@ static ssize_t smk_write_unconfined(struct file *file, const char __user *buf,
+ 	if (!smack_privileged(CAP_MAC_ADMIN))
+ 		return -EPERM;
+ 
++	if (count > PAGE_SIZE)
++		return -EINVAL;
++
+ 	data = memdup_user_nul(buf, count);
+ 	if (IS_ERR(data))
+ 		return PTR_ERR(data);
+@@ -2647,6 +2657,10 @@ static ssize_t smk_write_syslog(struct file *file, const char __user *buf,
+ 	if (!smack_privileged(CAP_MAC_ADMIN))
+ 		return -EPERM;
+ 
++	/* Enough data must be present */
++	if (count == 0 || count > PAGE_SIZE)
++		return -EINVAL;
++
+ 	data = memdup_user_nul(buf, count);
+ 	if (IS_ERR(data))
+ 		return PTR_ERR(data);
+@@ -2739,10 +2753,13 @@ static ssize_t smk_write_relabel_self(struct file *file, const char __user *buf,
+ 		return -EPERM;
+ 
+ 	/*
++	 * No partial write.
+ 	 * Enough data must be present.
+ 	 */
+ 	if (*ppos != 0)
+ 		return -EINVAL;
++	if (count == 0 || count > PAGE_SIZE)
++		return -EINVAL;
+ 
+ 	data = memdup_user_nul(buf, count);
+ 	if (IS_ERR(data))
+diff --git a/security/tomoyo/file.c b/security/tomoyo/file.c
+index 051f7297877cb..1e6077568fdec 100644
+--- a/security/tomoyo/file.c
++++ b/security/tomoyo/file.c
+@@ -362,14 +362,14 @@ static bool tomoyo_merge_path_acl(struct tomoyo_acl_info *a,
+ {
+ 	u16 * const a_perm = &container_of(a, struct tomoyo_path_acl, head)
+ 		->perm;
+-	u16 perm = *a_perm;
++	u16 perm = READ_ONCE(*a_perm);
+ 	const u16 b_perm = container_of(b, struct tomoyo_path_acl, head)->perm;
+ 
+ 	if (is_delete)
+ 		perm &= ~b_perm;
+ 	else
+ 		perm |= b_perm;
+-	*a_perm = perm;
++	WRITE_ONCE(*a_perm, perm);
+ 	return !perm;
+ }
+ 
+@@ -437,7 +437,7 @@ static bool tomoyo_merge_mkdev_acl(struct tomoyo_acl_info *a,
+ {
+ 	u8 *const a_perm = &container_of(a, struct tomoyo_mkdev_acl,
+ 					 head)->perm;
+-	u8 perm = *a_perm;
++	u8 perm = READ_ONCE(*a_perm);
+ 	const u8 b_perm = container_of(b, struct tomoyo_mkdev_acl, head)
+ 		->perm;
+ 
+@@ -445,7 +445,7 @@ static bool tomoyo_merge_mkdev_acl(struct tomoyo_acl_info *a,
+ 		perm &= ~b_perm;
+ 	else
+ 		perm |= b_perm;
+-	*a_perm = perm;
++	WRITE_ONCE(*a_perm, perm);
+ 	return !perm;
+ }
+ 
+@@ -517,14 +517,14 @@ static bool tomoyo_merge_path2_acl(struct tomoyo_acl_info *a,
+ {
+ 	u8 * const a_perm = &container_of(a, struct tomoyo_path2_acl, head)
+ 		->perm;
+-	u8 perm = *a_perm;
++	u8 perm = READ_ONCE(*a_perm);
+ 	const u8 b_perm = container_of(b, struct tomoyo_path2_acl, head)->perm;
+ 
+ 	if (is_delete)
+ 		perm &= ~b_perm;
+ 	else
+ 		perm |= b_perm;
+-	*a_perm = perm;
++	WRITE_ONCE(*a_perm, perm);
+ 	return !perm;
+ }
+ 
+@@ -655,7 +655,7 @@ static bool tomoyo_merge_path_number_acl(struct tomoyo_acl_info *a,
+ {
+ 	u8 * const a_perm = &container_of(a, struct tomoyo_path_number_acl,
+ 					  head)->perm;
+-	u8 perm = *a_perm;
++	u8 perm = READ_ONCE(*a_perm);
+ 	const u8 b_perm = container_of(b, struct tomoyo_path_number_acl, head)
+ 		->perm;
+ 
+@@ -663,7 +663,7 @@ static bool tomoyo_merge_path_number_acl(struct tomoyo_acl_info *a,
+ 		perm &= ~b_perm;
+ 	else
+ 		perm |= b_perm;
+-	*a_perm = perm;
++	WRITE_ONCE(*a_perm, perm);
+ 	return !perm;
+ }
+ 
+diff --git a/security/tomoyo/network.c b/security/tomoyo/network.c
+index f9ff121d7e1eb..a89ed55d85d41 100644
+--- a/security/tomoyo/network.c
++++ b/security/tomoyo/network.c
+@@ -233,14 +233,14 @@ static bool tomoyo_merge_inet_acl(struct tomoyo_acl_info *a,
+ {
+ 	u8 * const a_perm =
+ 		&container_of(a, struct tomoyo_inet_acl, head)->perm;
+-	u8 perm = *a_perm;
++	u8 perm = READ_ONCE(*a_perm);
+ 	const u8 b_perm = container_of(b, struct tomoyo_inet_acl, head)->perm;
+ 
+ 	if (is_delete)
+ 		perm &= ~b_perm;
+ 	else
+ 		perm |= b_perm;
+-	*a_perm = perm;
++	WRITE_ONCE(*a_perm, perm);
+ 	return !perm;
+ }
+ 
+@@ -259,14 +259,14 @@ static bool tomoyo_merge_unix_acl(struct tomoyo_acl_info *a,
+ {
+ 	u8 * const a_perm =
+ 		&container_of(a, struct tomoyo_unix_acl, head)->perm;
+-	u8 perm = *a_perm;
++	u8 perm = READ_ONCE(*a_perm);
+ 	const u8 b_perm = container_of(b, struct tomoyo_unix_acl, head)->perm;
+ 
+ 	if (is_delete)
+ 		perm &= ~b_perm;
+ 	else
+ 		perm |= b_perm;
+-	*a_perm = perm;
++	WRITE_ONCE(*a_perm, perm);
+ 	return !perm;
+ }
+ 
+diff --git a/security/tomoyo/util.c b/security/tomoyo/util.c
+index a40abb0b91eee..cd458e10cf2af 100644
+--- a/security/tomoyo/util.c
++++ b/security/tomoyo/util.c
+@@ -1053,30 +1053,30 @@ bool tomoyo_domain_quota_is_ok(struct tomoyo_request_info *r)
+ 
+ 		if (ptr->is_deleted)
+ 			continue;
++		/*
++		 * Reading perm bitmap might race with tomoyo_merge_*() because
++		 * caller does not hold tomoyo_policy_lock mutex. But exceeding
++		 * max_learning_entry parameter by a few entries does not harm.
++		 */
+ 		switch (ptr->type) {
+ 		case TOMOYO_TYPE_PATH_ACL:
+-			perm = container_of(ptr, struct tomoyo_path_acl, head)
+-				->perm;
++			data_race(perm = container_of(ptr, struct tomoyo_path_acl, head)->perm);
+ 			break;
+ 		case TOMOYO_TYPE_PATH2_ACL:
+-			perm = container_of(ptr, struct tomoyo_path2_acl, head)
+-				->perm;
++			data_race(perm = container_of(ptr, struct tomoyo_path2_acl, head)->perm);
+ 			break;
+ 		case TOMOYO_TYPE_PATH_NUMBER_ACL:
+-			perm = container_of(ptr, struct tomoyo_path_number_acl,
+-					    head)->perm;
++			data_race(perm = container_of(ptr, struct tomoyo_path_number_acl, head)
++				  ->perm);
+ 			break;
+ 		case TOMOYO_TYPE_MKDEV_ACL:
+-			perm = container_of(ptr, struct tomoyo_mkdev_acl,
+-					    head)->perm;
++			data_race(perm = container_of(ptr, struct tomoyo_mkdev_acl, head)->perm);
+ 			break;
+ 		case TOMOYO_TYPE_INET_ACL:
+-			perm = container_of(ptr, struct tomoyo_inet_acl,
+-					    head)->perm;
++			data_race(perm = container_of(ptr, struct tomoyo_inet_acl, head)->perm);
+ 			break;
+ 		case TOMOYO_TYPE_UNIX_ACL:
+-			perm = container_of(ptr, struct tomoyo_unix_acl,
+-					    head)->perm;
++			data_race(perm = container_of(ptr, struct tomoyo_unix_acl, head)->perm);
+ 			break;
+ 		case TOMOYO_TYPE_MANUAL_TASK_ACL:
+ 			perm = 0;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 1927605f0f7ed..5f4f8c2d760f0 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2532,6 +2532,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1462, 0x1276, "MSI-GL73", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1293, "MSI-GP65", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD),
++	SND_PCI_QUIRK(0x1462, 0xcc34, "MSI Godlike X570", ALC1220_FIXUP_GB_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1462, 0xda57, "MSI Z270-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+ 	SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
+ 	SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
+@@ -6396,6 +6397,7 @@ enum {
+ 	ALC269_FIXUP_LEMOTE_A1802,
+ 	ALC269_FIXUP_LEMOTE_A190X,
+ 	ALC256_FIXUP_INTEL_NUC8_RUGGED,
++	ALC256_FIXUP_INTEL_NUC10,
+ 	ALC255_FIXUP_XIAOMI_HEADSET_MIC,
+ 	ALC274_FIXUP_HP_MIC,
+ 	ALC274_FIXUP_HP_HEADSET_MIC,
+@@ -7782,6 +7784,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_HEADSET_MODE
+ 	},
++	[ALC256_FIXUP_INTEL_NUC10] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE
++	},
+ 	[ALC255_FIXUP_XIAOMI_HEADSET_MIC] = {
+ 		.type = HDA_FIXUP_VERBS,
+ 		.v.verbs = (const struct hda_verb[]) {
+@@ -8128,6 +8139,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x8551, "System76 Gazelle (gaze14)", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8560, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1558, 0x8561, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1558, 0x8562, "Clevo NH[5|7][0-9]RZ[Q]", ALC269_FIXUP_DMIC),
+ 	SND_PCI_QUIRK(0x1558, 0x8668, "Clevo NP50B[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8680, "Clevo NJ50LU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -8222,6 +8234,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+ 	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+ 	SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
++	SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10),
+ 
+ #if 0
+ 	/* Below is a quirk table taken from the old code.
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index f790514a147dd..3af4cb87032ce 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -71,6 +71,7 @@ enum {
+ #define BYT_RT5640_SSP0_AIF2		BIT(21)
+ #define BYT_RT5640_MCLK_EN		BIT(22)
+ #define BYT_RT5640_MCLK_25MHZ		BIT(23)
++#define BYT_RT5640_NO_SPEAKERS		BIT(24)
+ 
+ #define BYTCR_INPUT_DEFAULTS				\
+ 	(BYT_RT5640_IN3_MAP |				\
+@@ -132,6 +133,8 @@ static void log_quirks(struct device *dev)
+ 		dev_info(dev, "quirk JD_NOT_INV enabled\n");
+ 	if (byt_rt5640_quirk & BYT_RT5640_MONO_SPEAKER)
+ 		dev_info(dev, "quirk MONO_SPEAKER enabled\n");
++	if (byt_rt5640_quirk & BYT_RT5640_NO_SPEAKERS)
++		dev_info(dev, "quirk NO_SPEAKERS enabled\n");
+ 	if (byt_rt5640_quirk & BYT_RT5640_DIFF_MIC)
+ 		dev_info(dev, "quirk DIFF_MIC enabled\n");
+ 	if (byt_rt5640_quirk & BYT_RT5640_SSP0_AIF1) {
+@@ -399,6 +402,19 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{	/* Acer One 10 S1002 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "One S1002"),
++		},
++		.driver_data = (void *)(BYT_RT5640_IN1_MAP |
++					BYT_RT5640_JD_SRC_JD2_IN4N |
++					BYT_RT5640_OVCD_TH_2000UA |
++					BYT_RT5640_OVCD_SF_0P75 |
++					BYT_RT5640_DIFF_MIC |
++					BYT_RT5640_SSP0_AIF2 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+@@ -512,6 +528,16 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_MONO_SPEAKER |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{	/* Estar Beauty HD MID 7316R */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Estar"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "eSTAR BEAUTY HD Intel Quad core"),
++		},
++		.driver_data = (void *)(BYTCR_INPUT_DEFAULTS |
++					BYT_RT5640_MONO_SPEAKER |
++					BYT_RT5640_SSP0_AIF1 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+@@ -786,6 +812,20 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_SSP0_AIF2 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{	/* Voyo Winpad A15 */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
++			DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
++			/* Above strings are too generic, also match on BIOS date */
++			DMI_MATCH(DMI_BIOS_DATE, "11/20/2014"),
++		},
++		.driver_data = (void *)(BYT_RT5640_IN1_MAP |
++					BYT_RT5640_JD_SRC_JD2_IN4N |
++					BYT_RT5640_OVCD_TH_2000UA |
++					BYT_RT5640_OVCD_SF_0P75 |
++					BYT_RT5640_DIFF_MIC |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{	/* Catch-all for generic Insyde tablets, must be last */
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
+@@ -934,7 +974,7 @@ static int byt_rt5640_init(struct snd_soc_pcm_runtime *runtime)
+ 		ret = snd_soc_dapm_add_routes(&card->dapm,
+ 					byt_rt5640_mono_spk_map,
+ 					ARRAY_SIZE(byt_rt5640_mono_spk_map));
+-	} else {
++	} else if (!(byt_rt5640_quirk & BYT_RT5640_NO_SPEAKERS)) {
+ 		ret = snd_soc_dapm_add_routes(&card->dapm,
+ 					byt_rt5640_stereo_spk_map,
+ 					ARRAY_SIZE(byt_rt5640_stereo_spk_map));
+@@ -1179,6 +1219,7 @@ struct acpi_chan_package {   /* ACPICA seems to require 64 bit integers */
+ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
+ {
+ 	static const char * const map_name[] = { "dmic1", "dmic2", "in1", "in3" };
++	__maybe_unused const char *spk_type;
+ 	const struct dmi_system_id *dmi_id;
+ 	struct byt_rt5640_private *priv;
+ 	struct snd_soc_acpi_mach *mach;
+@@ -1186,7 +1227,7 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
+ 	struct acpi_device *adev;
+ 	int ret_val = 0;
+ 	int dai_index = 0;
+-	int i;
++	int i, cfg_spk;
+ 
+ 	is_bytcr = false;
+ 	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+@@ -1325,16 +1366,24 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	if (byt_rt5640_quirk & BYT_RT5640_NO_SPEAKERS) {
++		cfg_spk = 0;
++		spk_type = "none";
++	} else if (byt_rt5640_quirk & BYT_RT5640_MONO_SPEAKER) {
++		cfg_spk = 1;
++		spk_type = "mono";
++	} else {
++		cfg_spk = 2;
++		spk_type = "stereo";
++	}
++
+ 	snprintf(byt_rt5640_components, sizeof(byt_rt5640_components),
+-		 "cfg-spk:%s cfg-mic:%s",
+-		 (byt_rt5640_quirk & BYT_RT5640_MONO_SPEAKER) ? "1" : "2",
++		 "cfg-spk:%d cfg-mic:%s", cfg_spk,
+ 		 map_name[BYT_RT5640_MAP(byt_rt5640_quirk)]);
+ 	byt_rt5640_card.components = byt_rt5640_components;
+ #if !IS_ENABLED(CONFIG_SND_SOC_INTEL_USER_FRIENDLY_LONG_NAMES)
+ 	snprintf(byt_rt5640_long_name, sizeof(byt_rt5640_long_name),
+-		 "bytcr-rt5640-%s-spk-%s-mic",
+-		 (byt_rt5640_quirk & BYT_RT5640_MONO_SPEAKER) ?
+-			"mono" : "stereo",
++		 "bytcr-rt5640-%s-spk-%s-mic", spk_type,
+ 		 map_name[BYT_RT5640_MAP(byt_rt5640_quirk)]);
+ 	byt_rt5640_card.long_name = byt_rt5640_long_name;
+ #endif
+diff --git a/sound/soc/intel/boards/bytcr_rt5651.c b/sound/soc/intel/boards/bytcr_rt5651.c
+index 688b5e0a49e32..bf8b87d45cb0a 100644
+--- a/sound/soc/intel/boards/bytcr_rt5651.c
++++ b/sound/soc/intel/boards/bytcr_rt5651.c
+@@ -435,6 +435,19 @@ static const struct dmi_system_id byt_rt5651_quirk_table[] = {
+ 					BYT_RT5651_SSP0_AIF1 |
+ 					BYT_RT5651_MONO_SPEAKER),
+ 	},
++	{
++		/* Jumper EZpad 7 */
++		.callback = byt_rt5651_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Jumper"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "EZpad"),
++			/* Jumper12x.WJ2012.bsBKRCP05 with the version dropped */
++			DMI_MATCH(DMI_BIOS_VERSION, "Jumper12x.WJ2012.bsBKRCP"),
++		},
++		.driver_data = (void *)(BYT_RT5651_DEFAULT_QUIRKS |
++					BYT_RT5651_IN2_MAP |
++					BYT_RT5651_JD_NOT_INV),
++	},
+ 	{
+ 		/* KIANO SlimNote 14.2 */
+ 		.callback = byt_rt5651_quirk_cb,
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 07e72ca1dfbc9..0f1d845a0ccad 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -115,9 +115,10 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME,
+ 				  "Tiger Lake Client Platform"),
+ 		},
+-		.driver_data = (void *)(SOF_RT711_JD_SRC_JD1 |
+-				SOF_SDW_TGL_HDMI | SOF_SDW_PCH_DMIC |
+-				SOF_SSP_PORT(SOF_I2S_SSP2)),
++		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
++					SOF_RT711_JD_SRC_JD1 |
++					SOF_SDW_PCH_DMIC |
++					SOF_SSP_PORT(SOF_I2S_SSP2)),
+ 	},
+ 	{
+ 		.callback = sof_sdw_quirk_cb,
+@@ -141,7 +142,8 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Google"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Volteer"),
+ 		},
+-		.driver_data = (void *)(SOF_SDW_TGL_HDMI | SOF_SDW_PCH_DMIC |
++		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
++					SOF_SDW_PCH_DMIC |
+ 					SOF_SDW_FOUR_SPK),
+ 	},
+ 	{
+@@ -150,7 +152,8 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Google"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Ripto"),
+ 		},
+-		.driver_data = (void *)(SOF_SDW_TGL_HDMI | SOF_SDW_PCH_DMIC |
++		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
++					SOF_SDW_PCH_DMIC |
+ 					SOF_SDW_FOUR_SPK),
+ 	},
+ 
+@@ -922,7 +925,7 @@ static int sof_card_dai_links_create(struct device *dev,
+ 		ctx->idisp_codec = true;
+ 
+ 	/* enable dmic01 & dmic16k */
+-	dmic_num = (sof_sdw_quirk & SOF_SDW_PCH_DMIC) ? 2 : 0;
++	dmic_num = (sof_sdw_quirk & SOF_SDW_PCH_DMIC || mach_params->dmic_num) ? 2 : 0;
+ 	comp_num += dmic_num;
+ 
+ 	dev_dbg(dev, "sdw %d, ssp %d, dmic %d, hdmi %d", sdw_be_num, ssp_num,
+diff --git a/sound/soc/intel/common/soc-intel-quirks.h b/sound/soc/intel/common/soc-intel-quirks.h
+index b07df3059926d..a93987ab7f4d7 100644
+--- a/sound/soc/intel/common/soc-intel-quirks.h
++++ b/sound/soc/intel/common/soc-intel-quirks.h
+@@ -11,6 +11,7 @@
+ 
+ #if IS_ENABLED(CONFIG_X86)
+ 
++#include <linux/dmi.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
+ #include <asm/iosf_mbi.h>
+@@ -38,12 +39,36 @@ SOC_INTEL_IS_CPU(cml, KABYLAKE_L);
+ 
+ static inline bool soc_intel_is_byt_cr(struct platform_device *pdev)
+ {
++	/*
++	 * List of systems which:
++	 * 1. Use a non CR version of the Bay Trail SoC
++	 * 2. Contain at least 6 interrupt resources so that the
++	 *    platform_get_resource(pdev, IORESOURCE_IRQ, 5) check below
++	 *    succeeds
++	 * 3. Despite 1. and 2. still have their IPC IRQ at index 0 rather then 5
++	 *
++	 * This needs to be here so that it can be shared between the SST and
++	 * SOF drivers. We rely on the compiler to optimize this out in files
++	 * where soc_intel_is_byt_cr is not used.
++	 */
++	static const struct dmi_system_id force_bytcr_table[] = {
++		{	/* Lenovo Yoga Tablet 2 series */
++			.matches = {
++				DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++				DMI_MATCH(DMI_PRODUCT_FAMILY, "YOGATablet2"),
++			},
++		},
++		{}
++	};
+ 	struct device *dev = &pdev->dev;
+ 	int status = 0;
+ 
+ 	if (!soc_intel_is_byt())
+ 		return false;
+ 
++	if (dmi_check_system(force_bytcr_table))
++		return true;
++
+ 	if (iosf_mbi_available()) {
+ 		u32 bios_status;
+ 
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index a33dbd6de8a06..3ddd32fd3a44b 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -743,7 +743,6 @@ static void of_lpass_cpu_parse_dai_data(struct device *dev,
+ 		}
+ 		if (id == LPASS_DP_RX) {
+ 			data->hdmi_port_enable = 1;
+-			dev_err(dev, "HDMI Port is enabled: %d\n", id);
+ 		} else {
+ 			data->mi2s_playback_sd_mode[id] =
+ 				of_lpass_cpu_parse_sd_lines(dev, node,


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-03-09 12:18 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-03-09 12:18 UTC (permalink / raw
  To: gentoo-commits

commit:     36921383285e31038402e82e831105271a22a7ad
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Mar  9 12:18:35 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Mar  9 12:18:35 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=36921383

Linux patch 5.10.22

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1021_linux-5.10.22.patch | 1550 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1554 insertions(+)

diff --git a/0000_README b/0000_README
index fd876b7..36a5539 100644
--- a/0000_README
+++ b/0000_README
@@ -127,6 +127,10 @@ Patch:  1020_linux-5.10.21.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.21
 
+Patch:  1021_linux-5.10.22.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.22
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1021_linux-5.10.22.patch b/1021_linux-5.10.22.patch
new file mode 100644
index 0000000..cb507f7
--- /dev/null
+++ b/1021_linux-5.10.22.patch
@@ -0,0 +1,1550 @@
+diff --git a/Makefile b/Makefile
+index 98ae9007e8a52..12a2a7271fcb0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 21
++SUBLEVEL = 22
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+index 724ee179b316e..fae48efae83e9 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+@@ -227,8 +227,6 @@
+ 				      "timing-adjustment";
+ 			rx-fifo-depth = <4096>;
+ 			tx-fifo-depth = <2048>;
+-			resets = <&reset RESET_ETHERNET>;
+-			reset-names = "stmmaceth";
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index a6127002573bd..959b299344e54 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -224,8 +224,6 @@
+ 				      "timing-adjustment";
+ 			rx-fifo-depth = <4096>;
+ 			tx-fifo-depth = <2048>;
+-			resets = <&reset RESET_ETHERNET>;
+-			reset-names = "stmmaceth";
+ 			status = "disabled";
+ 
+ 			mdio0: mdio {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+index 726b91d3a905a..0edd137151f89 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+@@ -13,7 +13,6 @@
+ #include <dt-bindings/interrupt-controller/irq.h>
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+ #include <dt-bindings/power/meson-gxbb-power.h>
+-#include <dt-bindings/reset/amlogic,meson-gxbb-reset.h>
+ #include <dt-bindings/thermal/thermal.h>
+ 
+ / {
+@@ -576,8 +575,6 @@
+ 			interrupt-names = "macirq";
+ 			rx-fifo-depth = <4096>;
+ 			tx-fifo-depth = <2048>;
+-			resets = <&reset RESET_ETHERNET>;
+-			reset-names = "stmmaceth";
+ 			power-domains = <&pwrc PWRC_GXBB_ETHERNET_MEM_ID>;
+ 			status = "disabled";
+ 		};
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 00576a960f11f..b913844ab7404 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -29,6 +29,7 @@
+ #include <linux/kexec.h>
+ #include <linux/crash_dump.h>
+ #include <linux/hugetlb.h>
++#include <linux/acpi_iort.h>
+ 
+ #include <asm/boot.h>
+ #include <asm/fixmap.h>
+@@ -42,8 +43,6 @@
+ #include <asm/tlb.h>
+ #include <asm/alternative.h>
+ 
+-#define ARM64_ZONE_DMA_BITS	30
+-
+ /*
+  * We need to be able to catch inadvertent references to memstart_addr
+  * that occur (potentially in generic code) before arm64_memblock_init()
+@@ -188,8 +187,14 @@ static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
+ static void __init zone_sizes_init(unsigned long min, unsigned long max)
+ {
+ 	unsigned long max_zone_pfns[MAX_NR_ZONES]  = {0};
++	unsigned int __maybe_unused acpi_zone_dma_bits;
++	unsigned int __maybe_unused dt_zone_dma_bits;
+ 
+ #ifdef CONFIG_ZONE_DMA
++	acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
++	dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
++	zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
++	arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
+ 	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
+ #endif
+ #ifdef CONFIG_ZONE_DMA32
+@@ -376,18 +381,11 @@ void __init arm64_memblock_init(void)
+ 
+ 	early_init_fdt_scan_reserved_mem();
+ 
+-	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
+-		zone_dma_bits = ARM64_ZONE_DMA_BITS;
+-		arm64_dma_phys_limit = max_zone_phys(ARM64_ZONE_DMA_BITS);
+-	}
+-
+ 	if (IS_ENABLED(CONFIG_ZONE_DMA32))
+ 		arm64_dma32_phys_limit = max_zone_phys(32);
+ 	else
+ 		arm64_dma32_phys_limit = PHYS_MASK + 1;
+ 
+-	reserve_crashkernel();
+-
+ 	reserve_elfcorehdr();
+ 
+ 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
+@@ -427,6 +425,12 @@ void __init bootmem_init(void)
+ 	sparse_init();
+ 	zone_sizes_init(min, max);
+ 
++	/*
++	 * request_standard_resources() depends on crashkernel's memory being
++	 * reserved, so do it here.
++	 */
++	reserve_crashkernel();
++
+ 	memblock_dump_all();
+ }
+ 
+diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
+index 94f34109695c9..2494138a6905e 100644
+--- a/drivers/acpi/arm64/iort.c
++++ b/drivers/acpi/arm64/iort.c
+@@ -1730,3 +1730,58 @@ void __init acpi_iort_init(void)
+ 
+ 	iort_init_platform_devices();
+ }
++
++#ifdef CONFIG_ZONE_DMA
++/*
++ * Extract the highest CPU physical address accessible to all DMA masters in
++ * the system. PHYS_ADDR_MAX is returned when no constrained device is found.
++ */
++phys_addr_t __init acpi_iort_dma_get_max_cpu_address(void)
++{
++	phys_addr_t limit = PHYS_ADDR_MAX;
++	struct acpi_iort_node *node, *end;
++	struct acpi_table_iort *iort;
++	acpi_status status;
++	int i;
++
++	if (acpi_disabled)
++		return limit;
++
++	status = acpi_get_table(ACPI_SIG_IORT, 0,
++				(struct acpi_table_header **)&iort);
++	if (ACPI_FAILURE(status))
++		return limit;
++
++	node = ACPI_ADD_PTR(struct acpi_iort_node, iort, iort->node_offset);
++	end = ACPI_ADD_PTR(struct acpi_iort_node, iort, iort->header.length);
++
++	for (i = 0; i < iort->node_count; i++) {
++		if (node >= end)
++			break;
++
++		switch (node->type) {
++			struct acpi_iort_named_component *ncomp;
++			struct acpi_iort_root_complex *rc;
++			phys_addr_t local_limit;
++
++		case ACPI_IORT_NODE_NAMED_COMPONENT:
++			ncomp = (struct acpi_iort_named_component *)node->node_data;
++			local_limit = DMA_BIT_MASK(ncomp->memory_address_limit);
++			limit = min_not_zero(limit, local_limit);
++			break;
++
++		case ACPI_IORT_NODE_PCI_ROOT_COMPLEX:
++			if (node->revision < 1)
++				break;
++
++			rc = (struct acpi_iort_root_complex *)node->node_data;
++			local_limit = DMA_BIT_MASK(rc->memory_address_limit);
++			limit = min_not_zero(limit, local_limit);
++			break;
++		}
++		node = ACPI_ADD_PTR(struct acpi_iort_node, node, node->length);
++	}
++	acpi_put_table(&iort->header);
++	return limit;
++}
++#endif
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index bfda153b1a41d..87682dcb64ec3 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -325,22 +325,22 @@ static void rpm_put_suppliers(struct device *dev)
+ static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
+ 	__releases(&dev->power.lock) __acquires(&dev->power.lock)
+ {
+-	int retval, idx;
+ 	bool use_links = dev->power.links_count > 0;
++	bool get = false;
++	int retval, idx;
++	bool put;
+ 
+ 	if (dev->power.irq_safe) {
+ 		spin_unlock(&dev->power.lock);
++	} else if (!use_links) {
++		spin_unlock_irq(&dev->power.lock);
+ 	} else {
++		get = dev->power.runtime_status == RPM_RESUMING;
++
+ 		spin_unlock_irq(&dev->power.lock);
+ 
+-		/*
+-		 * Resume suppliers if necessary.
+-		 *
+-		 * The device's runtime PM status cannot change until this
+-		 * routine returns, so it is safe to read the status outside of
+-		 * the lock.
+-		 */
+-		if (use_links && dev->power.runtime_status == RPM_RESUMING) {
++		/* Resume suppliers if necessary. */
++		if (get) {
+ 			idx = device_links_read_lock();
+ 
+ 			retval = rpm_get_suppliers(dev);
+@@ -355,24 +355,36 @@ static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
+ 
+ 	if (dev->power.irq_safe) {
+ 		spin_lock(&dev->power.lock);
+-	} else {
+-		/*
+-		 * If the device is suspending and the callback has returned
+-		 * success, drop the usage counters of the suppliers that have
+-		 * been reference counted on its resume.
+-		 *
+-		 * Do that if resume fails too.
+-		 */
+-		if (use_links
+-		    && ((dev->power.runtime_status == RPM_SUSPENDING && !retval)
+-		    || (dev->power.runtime_status == RPM_RESUMING && retval))) {
+-			idx = device_links_read_lock();
++		return retval;
++	}
+ 
+- fail:
+-			rpm_put_suppliers(dev);
++	spin_lock_irq(&dev->power.lock);
+ 
+-			device_links_read_unlock(idx);
+-		}
++	if (!use_links)
++		return retval;
++
++	/*
++	 * If the device is suspending and the callback has returned success,
++	 * drop the usage counters of the suppliers that have been reference
++	 * counted on its resume.
++	 *
++	 * Do that if the resume fails too.
++	 */
++	put = dev->power.runtime_status == RPM_SUSPENDING && !retval;
++	if (put)
++		__update_runtime_status(dev, RPM_SUSPENDED);
++	else
++		put = get && retval;
++
++	if (put) {
++		spin_unlock_irq(&dev->power.lock);
++
++		idx = device_links_read_lock();
++
++fail:
++		rpm_put_suppliers(dev);
++
++		device_links_read_unlock(idx);
+ 
+ 		spin_lock_irq(&dev->power.lock);
+ 	}
+diff --git a/drivers/block/rsxx/core.c b/drivers/block/rsxx/core.c
+index 63f549889f875..5ac1881396afb 100644
+--- a/drivers/block/rsxx/core.c
++++ b/drivers/block/rsxx/core.c
+@@ -165,15 +165,17 @@ static ssize_t rsxx_cram_read(struct file *fp, char __user *ubuf,
+ {
+ 	struct rsxx_cardinfo *card = file_inode(fp)->i_private;
+ 	char *buf;
+-	ssize_t st;
++	int st;
+ 
+ 	buf = kzalloc(cnt, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+ 	st = rsxx_creg_read(card, CREG_ADD_CRAM + (u32)*ppos, cnt, buf, 1);
+-	if (!st)
+-		st = copy_to_user(ubuf, buf, cnt);
++	if (!st) {
++		if (copy_to_user(ubuf, buf, cnt))
++			st = -EFAULT;
++	}
+ 	kfree(buf);
+ 	if (st)
+ 		return st;
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 431919d5f48af..a2e0395cbe618 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -707,12 +707,22 @@ static int tpm_tis_gen_interrupt(struct tpm_chip *chip)
+ 	const char *desc = "attempting to generate an interrupt";
+ 	u32 cap2;
+ 	cap_t cap;
++	int ret;
+ 
++	/* TPM 2.0 */
+ 	if (chip->flags & TPM_CHIP_FLAG_TPM2)
+ 		return tpm2_get_tpm_pt(chip, 0x100, &cap2, desc);
+-	else
+-		return tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc,
+-				  0);
++
++	/* TPM 1.2 */
++	ret = request_locality(chip, 0);
++	if (ret < 0)
++		return ret;
++
++	ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0);
++
++	release_locality(chip, 0);
++
++	return ret;
+ }
+ 
+ /* Register the IRQ and issue a command that will cause an interrupt. If an
+@@ -1019,11 +1029,21 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	init_waitqueue_head(&priv->read_queue);
+ 	init_waitqueue_head(&priv->int_queue);
+ 	if (irq != -1) {
+-		/* Before doing irq testing issue a command to the TPM in polling mode
++		/*
++		 * Before doing irq testing issue a command to the TPM in polling mode
+ 		 * to make sure it works. May as well use that command to set the
+ 		 * proper timeouts for the driver.
+ 		 */
+-		if (tpm_get_timeouts(chip)) {
++
++		rc = request_locality(chip, 0);
++		if (rc < 0)
++			goto out_err;
++
++		rc = tpm_get_timeouts(chip);
++
++		release_locality(chip, 0);
++
++		if (rc) {
+ 			dev_err(dev, "Could not get TPM timeouts and durations\n");
+ 			rc = -ENODEV;
+ 			goto out_err;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+index 2d125b8b15ee1..00a190929b55c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+@@ -355,7 +355,7 @@ static ssize_t amdgpu_debugfs_regs_pcie_read(struct file *f, char __user *buf,
+ 	while (size) {
+ 		uint32_t value;
+ 
+-		value = RREG32_PCIE(*pos >> 2);
++		value = RREG32_PCIE(*pos);
+ 		r = put_user(value, (uint32_t *)buf);
+ 		if (r) {
+ 			pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
+@@ -422,7 +422,7 @@ static ssize_t amdgpu_debugfs_regs_pcie_write(struct file *f, const char __user
+ 			return r;
+ 		}
+ 
+-		WREG32_PCIE(*pos >> 2, value);
++		WREG32_PCIE(*pos, value);
+ 
+ 		result += 4;
+ 		buf += 4;
+diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
+index 8eeba8096493b..0af4774ddd613 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nv.c
++++ b/drivers/gpu/drm/amd/amdgpu/nv.c
+@@ -459,7 +459,8 @@ static bool nv_is_headless_sku(struct pci_dev *pdev)
+ {
+ 	if ((pdev->device == 0x731E &&
+ 	    (pdev->revision == 0xC6 || pdev->revision == 0xC7)) ||
+-	    (pdev->device == 0x7340 && pdev->revision == 0xC9))
++	    (pdev->device == 0x7340 && pdev->revision == 0xC9)  ||
++	    (pdev->device == 0x7360 && pdev->revision == 0xC7))
+ 		return true;
+ 	return false;
+ }
+@@ -524,7 +525,8 @@ int nv_set_ip_blocks(struct amdgpu_device *adev)
+ 		if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT &&
+ 		    !amdgpu_sriov_vf(adev))
+ 			amdgpu_device_ip_block_add(adev, &smu_v11_0_ip_block);
+-		amdgpu_device_ip_block_add(adev, &vcn_v2_0_ip_block);
++		if (!nv_is_headless_sku(adev->pdev))
++		        amdgpu_device_ip_block_add(adev, &vcn_v2_0_ip_block);
+ 		if (!amdgpu_sriov_vf(adev))
+ 			amdgpu_device_ip_block_add(adev, &jpeg_v2_0_ip_block);
+ 		break;
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 8e578f73a074c..bbba0cd42c89b 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -3650,6 +3650,7 @@ static int cm_send_sidr_rep_locked(struct cm_id_private *cm_id_priv,
+ 				   struct ib_cm_sidr_rep_param *param)
+ {
+ 	struct ib_mad_send_buf *msg;
++	unsigned long flags;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&cm_id_priv->lock);
+@@ -3675,12 +3676,12 @@ static int cm_send_sidr_rep_locked(struct cm_id_private *cm_id_priv,
+ 		return ret;
+ 	}
+ 	cm_id_priv->id.state = IB_CM_IDLE;
+-	spin_lock_irq(&cm.lock);
++	spin_lock_irqsave(&cm.lock, flags);
+ 	if (!RB_EMPTY_NODE(&cm_id_priv->sidr_id_node)) {
+ 		rb_erase(&cm_id_priv->sidr_id_node, &cm.remote_sidr_table);
+ 		RB_CLEAR_NODE(&cm_id_priv->sidr_id_node);
+ 	}
+-	spin_unlock_irq(&cm.lock);
++	spin_unlock_irqrestore(&cm.lock, flags);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 26564e7d34572..efb9ec99b68bd 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -1973,8 +1973,10 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_SUBSCRIBE_EVENT)(
+ 
+ 		num_alloc_xa_entries++;
+ 		event_sub = kzalloc(sizeof(*event_sub), GFP_KERNEL);
+-		if (!event_sub)
++		if (!event_sub) {
++			err = -ENOMEM;
+ 			goto err;
++		}
+ 
+ 		list_add_tail(&event_sub->event_list, &sub_list);
+ 		uverbs_uobject_get(&ev_file->uobj);
+diff --git a/drivers/infiniband/sw/rxe/Kconfig b/drivers/infiniband/sw/rxe/Kconfig
+index 4521490667925..06b8dc5093f77 100644
+--- a/drivers/infiniband/sw/rxe/Kconfig
++++ b/drivers/infiniband/sw/rxe/Kconfig
+@@ -4,6 +4,7 @@ config RDMA_RXE
+ 	depends on INET && PCI && INFINIBAND
+ 	depends on INFINIBAND_VIRT_DMA
+ 	select NET_UDP_TUNNEL
++	select CRYPTO
+ 	select CRYPTO_CRC32
+ 	help
+ 	This driver implements the InfiniBand RDMA transport over
+diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
+index 97dfcffbf495a..444c0bec221a4 100644
+--- a/drivers/iommu/intel/pasid.h
++++ b/drivers/iommu/intel/pasid.h
+@@ -30,8 +30,8 @@
+ #define VCMD_VRSP_IP			0x1
+ #define VCMD_VRSP_SC(e)			(((e) >> 1) & 0x3)
+ #define VCMD_VRSP_SC_SUCCESS		0
+-#define VCMD_VRSP_SC_NO_PASID_AVAIL	1
+-#define VCMD_VRSP_SC_INVALID_PASID	1
++#define VCMD_VRSP_SC_NO_PASID_AVAIL	2
++#define VCMD_VRSP_SC_INVALID_PASID	2
+ #define VCMD_VRSP_RESULT_PASID(e)	(((e) >> 8) & 0xfffff)
+ #define VCMD_CMD_OPERAND(e)		((e) << 8)
+ /*
+diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
+index fce4cbf9529d6..50f3e673729c3 100644
+--- a/drivers/md/dm-bufio.c
++++ b/drivers/md/dm-bufio.c
+@@ -1526,6 +1526,10 @@ EXPORT_SYMBOL_GPL(dm_bufio_get_block_size);
+ sector_t dm_bufio_get_device_size(struct dm_bufio_client *c)
+ {
+ 	sector_t s = i_size_read(c->bdev->bd_inode) >> SECTOR_SHIFT;
++	if (s >= c->start)
++		s -= c->start;
++	else
++		s = 0;
+ 	if (likely(c->sectors_per_block_bits >= 0))
+ 		s >>= c->sectors_per_block_bits;
+ 	else
+diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
+index fb41b4f23c489..66f4c6398f670 100644
+--- a/drivers/md/dm-verity-fec.c
++++ b/drivers/md/dm-verity-fec.c
+@@ -61,19 +61,18 @@ static int fec_decode_rs8(struct dm_verity *v, struct dm_verity_fec_io *fio,
+ static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index,
+ 			   unsigned *offset, struct dm_buffer **buf)
+ {
+-	u64 position, block;
++	u64 position, block, rem;
+ 	u8 *res;
+ 
+ 	position = (index + rsb) * v->fec->roots;
+-	block = position >> v->data_dev_block_bits;
+-	*offset = (unsigned)(position - (block << v->data_dev_block_bits));
++	block = div64_u64_rem(position, v->fec->roots << SECTOR_SHIFT, &rem);
++	*offset = (unsigned)rem;
+ 
+-	res = dm_bufio_read(v->fec->bufio, v->fec->start + block, buf);
++	res = dm_bufio_read(v->fec->bufio, block, buf);
+ 	if (IS_ERR(res)) {
+ 		DMERR("%s: FEC %llu: parity read failed (block %llu): %ld",
+ 		      v->data_dev->name, (unsigned long long)rsb,
+-		      (unsigned long long)(v->fec->start + block),
+-		      PTR_ERR(res));
++		      (unsigned long long)block, PTR_ERR(res));
+ 		*buf = NULL;
+ 	}
+ 
+@@ -155,7 +154,7 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio,
+ 
+ 		/* read the next block when we run out of parity bytes */
+ 		offset += v->fec->roots;
+-		if (offset >= 1 << v->data_dev_block_bits) {
++		if (offset >= v->fec->roots << SECTOR_SHIFT) {
+ 			dm_bufio_release(buf);
+ 
+ 			par = fec_read_parity(v, rsb, block_offset, &offset, &buf);
+@@ -674,7 +673,7 @@ int verity_fec_ctr(struct dm_verity *v)
+ {
+ 	struct dm_verity_fec *f = v->fec;
+ 	struct dm_target *ti = v->ti;
+-	u64 hash_blocks;
++	u64 hash_blocks, fec_blocks;
+ 	int ret;
+ 
+ 	if (!verity_fec_is_enabled(v)) {
+@@ -744,15 +743,17 @@ int verity_fec_ctr(struct dm_verity *v)
+ 	}
+ 
+ 	f->bufio = dm_bufio_client_create(f->dev->bdev,
+-					  1 << v->data_dev_block_bits,
++					  f->roots << SECTOR_SHIFT,
+ 					  1, 0, NULL, NULL);
+ 	if (IS_ERR(f->bufio)) {
+ 		ti->error = "Cannot initialize FEC bufio client";
+ 		return PTR_ERR(f->bufio);
+ 	}
+ 
+-	if (dm_bufio_get_device_size(f->bufio) <
+-	    ((f->start + f->rounds * f->roots) >> v->data_dev_block_bits)) {
++	dm_bufio_set_sector_offset(f->bufio, f->start << (v->data_dev_block_bits - SECTOR_SHIFT));
++
++	fec_blocks = div64_u64(f->rounds * f->roots, v->fec->roots << SECTOR_SHIFT);
++	if (dm_bufio_get_device_size(f->bufio) < fec_blocks) {
+ 		ti->error = "FEC device is too small";
+ 		return -E2BIG;
+ 	}
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index cfcc3ac613189..b7d5eaa70a67b 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -2244,6 +2244,7 @@ static void rtl_pll_power_down(struct rtl8169_private *tp)
+ 
+ 	switch (tp->mac_version) {
+ 	case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_26:
++	case RTL_GIGA_MAC_VER_29 ... RTL_GIGA_MAC_VER_30:
+ 	case RTL_GIGA_MAC_VER_32 ... RTL_GIGA_MAC_VER_33:
+ 	case RTL_GIGA_MAC_VER_37:
+ 	case RTL_GIGA_MAC_VER_39:
+@@ -2271,6 +2272,7 @@ static void rtl_pll_power_up(struct rtl8169_private *tp)
+ {
+ 	switch (tp->mac_version) {
+ 	case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_26:
++	case RTL_GIGA_MAC_VER_29 ... RTL_GIGA_MAC_VER_30:
+ 	case RTL_GIGA_MAC_VER_32 ... RTL_GIGA_MAC_VER_33:
+ 	case RTL_GIGA_MAC_VER_37:
+ 	case RTL_GIGA_MAC_VER_39:
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index 1c3257a2d4e37..73ddf2540f3fe 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -1024,6 +1024,48 @@ out:
+ }
+ #endif /* CONFIG_HAS_DMA */
+ 
++/**
++ * of_dma_get_max_cpu_address - Gets highest CPU address suitable for DMA
++ * @np: The node to start searching from or NULL to start from the root
++ *
++ * Gets the highest CPU physical address that is addressable by all DMA masters
++ * in the sub-tree pointed by np, or the whole tree if NULL is passed. If no
++ * DMA constrained device is found, it returns PHYS_ADDR_MAX.
++ */
++phys_addr_t __init of_dma_get_max_cpu_address(struct device_node *np)
++{
++	phys_addr_t max_cpu_addr = PHYS_ADDR_MAX;
++	struct of_range_parser parser;
++	phys_addr_t subtree_max_addr;
++	struct device_node *child;
++	struct of_range range;
++	const __be32 *ranges;
++	u64 cpu_end = 0;
++	int len;
++
++	if (!np)
++		np = of_root;
++
++	ranges = of_get_property(np, "dma-ranges", &len);
++	if (ranges && len) {
++		of_dma_range_parser_init(&parser, np);
++		for_each_of_range(&parser, &range)
++			if (range.cpu_addr + range.size > cpu_end)
++				cpu_end = range.cpu_addr + range.size - 1;
++
++		if (max_cpu_addr > cpu_end)
++			max_cpu_addr = cpu_end;
++	}
++
++	for_each_available_child_of_node(np, child) {
++		subtree_max_addr = of_dma_get_max_cpu_address(child);
++		if (max_cpu_addr > subtree_max_addr)
++			max_cpu_addr = subtree_max_addr;
++	}
++
++	return max_cpu_addr;
++}
++
+ /**
+  * of_dma_is_coherent - Check if device is coherent
+  * @np:	device node
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 06cc988faf78b..eb51bc1474401 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -869,6 +869,26 @@ static void __init of_unittest_changeset(void)
+ #endif
+ }
+ 
++static void __init of_unittest_dma_get_max_cpu_address(void)
++{
++	struct device_node *np;
++	phys_addr_t cpu_addr;
++
++	if (!IS_ENABLED(CONFIG_OF_ADDRESS))
++		return;
++
++	np = of_find_node_by_path("/testcase-data/address-tests");
++	if (!np) {
++		pr_err("missing testcase data\n");
++		return;
++	}
++
++	cpu_addr = of_dma_get_max_cpu_address(np);
++	unittest(cpu_addr == 0x4fffffff,
++		 "of_dma_get_max_cpu_address: wrong CPU addr %pad (expecting %x)\n",
++		 &cpu_addr, 0x4fffffff);
++}
++
+ static void __init of_unittest_dma_ranges_one(const char *path,
+ 		u64 expect_dma_addr, u64 expect_paddr)
+ {
+@@ -3266,6 +3286,7 @@ static int __init of_unittest(void)
+ 	of_unittest_changeset();
+ 	of_unittest_parse_interrupts();
+ 	of_unittest_parse_interrupts_extended();
++	of_unittest_dma_get_max_cpu_address();
+ 	of_unittest_parse_dma_ranges();
+ 	of_unittest_pci_dma_ranges();
+ 	of_unittest_match_node();
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 9a5d652c1672e..c99e293b50f54 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1229,6 +1229,11 @@ static int inc_block_group_ro(struct btrfs_block_group *cache, int force)
+ 	spin_lock(&sinfo->lock);
+ 	spin_lock(&cache->lock);
+ 
++	if (cache->swap_extents) {
++		ret = -ETXTBSY;
++		goto out;
++	}
++
+ 	if (cache->ro) {
+ 		cache->ro++;
+ 		ret = 0;
+@@ -2274,7 +2279,7 @@ again:
+ 	}
+ 
+ 	ret = inc_block_group_ro(cache, 0);
+-	if (!do_chunk_alloc)
++	if (!do_chunk_alloc || ret == -ETXTBSY)
+ 		goto unlock_out;
+ 	if (!ret)
+ 		goto out;
+@@ -2283,6 +2288,8 @@ again:
+ 	if (ret < 0)
+ 		goto out;
+ 	ret = inc_block_group_ro(cache, 0);
++	if (ret == -ETXTBSY)
++		goto unlock_out;
+ out:
+ 	if (cache->flags & BTRFS_BLOCK_GROUP_SYSTEM) {
+ 		alloc_flags = btrfs_get_alloc_profile(fs_info, cache->flags);
+@@ -3363,6 +3370,7 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)
+ 		ASSERT(list_empty(&block_group->io_list));
+ 		ASSERT(list_empty(&block_group->bg_list));
+ 		ASSERT(refcount_read(&block_group->refs) == 1);
++		ASSERT(block_group->swap_extents == 0);
+ 		btrfs_put_block_group(block_group);
+ 
+ 		spin_lock(&info->block_group_cache_lock);
+@@ -3429,3 +3437,26 @@ void btrfs_unfreeze_block_group(struct btrfs_block_group *block_group)
+ 		__btrfs_remove_free_space_cache(block_group->free_space_ctl);
+ 	}
+ }
++
++bool btrfs_inc_block_group_swap_extents(struct btrfs_block_group *bg)
++{
++	bool ret = true;
++
++	spin_lock(&bg->lock);
++	if (bg->ro)
++		ret = false;
++	else
++		bg->swap_extents++;
++	spin_unlock(&bg->lock);
++
++	return ret;
++}
++
++void btrfs_dec_block_group_swap_extents(struct btrfs_block_group *bg, int amount)
++{
++	spin_lock(&bg->lock);
++	ASSERT(!bg->ro);
++	ASSERT(bg->swap_extents >= amount);
++	bg->swap_extents -= amount;
++	spin_unlock(&bg->lock);
++}
+diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
+index adfd7583a17b8..4c7614346f724 100644
+--- a/fs/btrfs/block-group.h
++++ b/fs/btrfs/block-group.h
+@@ -181,6 +181,12 @@ struct btrfs_block_group {
+ 	 */
+ 	int needs_free_space;
+ 
++	/*
++	 * Number of extents in this block group used for swap files.
++	 * All accesses protected by the spinlock 'lock'.
++	 */
++	int swap_extents;
++
+ 	/* Record locked full stripes for RAID5/6 block group */
+ 	struct btrfs_full_stripe_locks_tree full_stripe_locks_root;
+ };
+@@ -299,4 +305,7 @@ int btrfs_rmap_block(struct btrfs_fs_info *fs_info, u64 chunk_start,
+ 		     u64 physical, u64 **logical, int *naddrs, int *stripe_len);
+ #endif
+ 
++bool btrfs_inc_block_group_swap_extents(struct btrfs_block_group *bg);
++void btrfs_dec_block_group_swap_extents(struct btrfs_block_group *bg, int amount);
++
+ #endif /* BTRFS_BLOCK_GROUP_H */
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index b6884eda9ff67..bcc6848bb6d6a 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -522,6 +522,11 @@ struct btrfs_swapfile_pin {
+ 	 * points to a struct btrfs_device.
+ 	 */
+ 	bool is_block_group;
++	/*
++	 * Only used when 'is_block_group' is true and it is the number of
++	 * extents used by a swapfile for this block group ('ptr' field).
++	 */
++	int bg_extent_count;
+ };
+ 
+ bool btrfs_pinned_by_swapfile(struct btrfs_fs_info *fs_info, void *ptr);
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index 5aba81e161132..36e0de34ec68b 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -649,7 +649,7 @@ static int btrfs_delayed_inode_reserve_metadata(
+ 						      btrfs_ino(inode),
+ 						      num_bytes, 1);
+ 		} else {
+-			btrfs_qgroup_free_meta_prealloc(root, fs_info->nodesize);
++			btrfs_qgroup_free_meta_prealloc(root, num_bytes);
+ 		}
+ 		return ret;
+ 	}
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 4373da7bcc0d5..c81a20cc10dc8 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -3236,8 +3236,11 @@ reserve_space:
+ 			goto out;
+ 		ret = btrfs_qgroup_reserve_data(BTRFS_I(inode), &data_reserved,
+ 						alloc_start, bytes_to_reserve);
+-		if (ret)
++		if (ret) {
++			unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart,
++					     lockend, &cached_state);
+ 			goto out;
++		}
+ 		ret = btrfs_prealloc_file_range(inode, mode, alloc_start,
+ 						alloc_end - alloc_start,
+ 						i_blocksize(inode),
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index ae4059ce2f84c..ba280707d5ec2 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -2714,8 +2714,10 @@ static void __btrfs_return_cluster_to_free_space(
+ 	struct rb_node *node;
+ 
+ 	spin_lock(&cluster->lock);
+-	if (cluster->block_group != block_group)
+-		goto out;
++	if (cluster->block_group != block_group) {
++		spin_unlock(&cluster->lock);
++		return;
++	}
+ 
+ 	cluster->block_group = NULL;
+ 	cluster->window_start = 0;
+@@ -2753,8 +2755,6 @@ static void __btrfs_return_cluster_to_free_space(
+ 				   entry->offset, &entry->offset_index, bitmap);
+ 	}
+ 	cluster->root = RB_ROOT;
+-
+-out:
+ 	spin_unlock(&cluster->lock);
+ 	btrfs_put_block_group(block_group);
+ }
+@@ -3034,8 +3034,6 @@ u64 btrfs_alloc_from_cluster(struct btrfs_block_group *block_group,
+ 			entry->bytes -= bytes;
+ 		}
+ 
+-		if (entry->bytes == 0)
+-			rb_erase(&entry->offset_index, &cluster->root);
+ 		break;
+ 	}
+ out:
+@@ -3052,7 +3050,10 @@ out:
+ 	ctl->free_space -= bytes;
+ 	if (!entry->bitmap && !btrfs_free_space_trimmed(entry))
+ 		ctl->discardable_bytes[BTRFS_STAT_CURR] -= bytes;
++
++	spin_lock(&cluster->lock);
+ 	if (entry->bytes == 0) {
++		rb_erase(&entry->offset_index, &cluster->root);
+ 		ctl->free_extents--;
+ 		if (entry->bitmap) {
+ 			kmem_cache_free(btrfs_free_space_bitmap_cachep,
+@@ -3065,6 +3066,7 @@ out:
+ 		kmem_cache_free(btrfs_free_space_cachep, entry);
+ 	}
+ 
++	spin_unlock(&cluster->lock);
+ 	spin_unlock(&ctl->tree_lock);
+ 
+ 	return ret;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 4d85f3a6695d1..9b3df72ceffbb 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -9993,6 +9993,7 @@ static int btrfs_add_swapfile_pin(struct inode *inode, void *ptr,
+ 	sp->ptr = ptr;
+ 	sp->inode = inode;
+ 	sp->is_block_group = is_block_group;
++	sp->bg_extent_count = 1;
+ 
+ 	spin_lock(&fs_info->swapfile_pins_lock);
+ 	p = &fs_info->swapfile_pins.rb_node;
+@@ -10006,6 +10007,8 @@ static int btrfs_add_swapfile_pin(struct inode *inode, void *ptr,
+ 			   (sp->ptr == entry->ptr && sp->inode > entry->inode)) {
+ 			p = &(*p)->rb_right;
+ 		} else {
++			if (is_block_group)
++				entry->bg_extent_count++;
+ 			spin_unlock(&fs_info->swapfile_pins_lock);
+ 			kfree(sp);
+ 			return 1;
+@@ -10031,8 +10034,11 @@ static void btrfs_free_swapfile_pins(struct inode *inode)
+ 		sp = rb_entry(node, struct btrfs_swapfile_pin, node);
+ 		if (sp->inode == inode) {
+ 			rb_erase(&sp->node, &fs_info->swapfile_pins);
+-			if (sp->is_block_group)
++			if (sp->is_block_group) {
++				btrfs_dec_block_group_swap_extents(sp->ptr,
++							   sp->bg_extent_count);
+ 				btrfs_put_block_group(sp->ptr);
++			}
+ 			kfree(sp);
+ 		}
+ 		node = next;
+@@ -10093,7 +10099,8 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ 			       sector_t *span)
+ {
+ 	struct inode *inode = file_inode(file);
+-	struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info;
++	struct btrfs_root *root = BTRFS_I(inode)->root;
++	struct btrfs_fs_info *fs_info = root->fs_info;
+ 	struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
+ 	struct extent_state *cached_state = NULL;
+ 	struct extent_map *em = NULL;
+@@ -10144,13 +10151,27 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ 	   "cannot activate swapfile while exclusive operation is running");
+ 		return -EBUSY;
+ 	}
++
++	/*
++	 * Prevent snapshot creation while we are activating the swap file.
++	 * We do not want to race with snapshot creation. If snapshot creation
++	 * already started before we bumped nr_swapfiles from 0 to 1 and
++	 * completes before the first write into the swap file after it is
++	 * activated, than that write would fallback to COW.
++	 */
++	if (!btrfs_drew_try_write_lock(&root->snapshot_lock)) {
++		btrfs_exclop_finish(fs_info);
++		btrfs_warn(fs_info,
++	   "cannot activate swapfile because snapshot creation is in progress");
++		return -EINVAL;
++	}
+ 	/*
+ 	 * Snapshots can create extents which require COW even if NODATACOW is
+ 	 * set. We use this counter to prevent snapshots. We must increment it
+ 	 * before walking the extents because we don't want a concurrent
+ 	 * snapshot to run after we've already checked the extents.
+ 	 */
+-	atomic_inc(&BTRFS_I(inode)->root->nr_swapfiles);
++	atomic_inc(&root->nr_swapfiles);
+ 
+ 	isize = ALIGN_DOWN(inode->i_size, fs_info->sectorsize);
+ 
+@@ -10247,6 +10268,17 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ 			goto out;
+ 		}
+ 
++		if (!btrfs_inc_block_group_swap_extents(bg)) {
++			btrfs_warn(fs_info,
++			   "block group for swapfile at %llu is read-only%s",
++			   bg->start,
++			   atomic_read(&fs_info->scrubs_running) ?
++				       " (scrub running)" : "");
++			btrfs_put_block_group(bg);
++			ret = -EINVAL;
++			goto out;
++		}
++
+ 		ret = btrfs_add_swapfile_pin(inode, bg, true);
+ 		if (ret) {
+ 			btrfs_put_block_group(bg);
+@@ -10285,6 +10317,8 @@ out:
+ 	if (ret)
+ 		btrfs_swap_deactivate(file);
+ 
++	btrfs_drew_write_unlock(&root->snapshot_lock);
++
+ 	btrfs_exclop_finish(fs_info);
+ 
+ 	if (ret)
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index bd46e107f955e..f5135314e4b39 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -1914,7 +1914,10 @@ static noinline int btrfs_ioctl_snap_create_v2(struct file *file,
+ 	if (vol_args->flags & BTRFS_SUBVOL_RDONLY)
+ 		readonly = true;
+ 	if (vol_args->flags & BTRFS_SUBVOL_QGROUP_INHERIT) {
+-		if (vol_args->size > PAGE_SIZE) {
++		u64 nums;
++
++		if (vol_args->size < sizeof(*inherit) ||
++		    vol_args->size > PAGE_SIZE) {
+ 			ret = -EINVAL;
+ 			goto free_args;
+ 		}
+@@ -1923,6 +1926,20 @@ static noinline int btrfs_ioctl_snap_create_v2(struct file *file,
+ 			ret = PTR_ERR(inherit);
+ 			goto free_args;
+ 		}
++
++		if (inherit->num_qgroups > PAGE_SIZE ||
++		    inherit->num_ref_copies > PAGE_SIZE ||
++		    inherit->num_excl_copies > PAGE_SIZE) {
++			ret = -EINVAL;
++			goto free_inherit;
++		}
++
++		nums = inherit->num_qgroups + 2 * inherit->num_ref_copies +
++		       2 * inherit->num_excl_copies;
++		if (vol_args->size != struct_size(inherit, qgroups, nums)) {
++			ret = -EINVAL;
++			goto free_inherit;
++		}
+ 	}
+ 
+ 	ret = __btrfs_ioctl_snap_create(file, vol_args->name, vol_args->fd,
+diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
+index 255490f42b5d5..9d33bf0154abf 100644
+--- a/fs/btrfs/raid56.c
++++ b/fs/btrfs/raid56.c
+@@ -2363,16 +2363,21 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
+ 	SetPageUptodate(p_page);
+ 
+ 	if (has_qstripe) {
++		/* RAID6, allocate and map temp space for the Q stripe */
+ 		q_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
+ 		if (!q_page) {
+ 			__free_page(p_page);
+ 			goto cleanup;
+ 		}
+ 		SetPageUptodate(q_page);
++		pointers[rbio->real_stripes - 1] = kmap(q_page);
+ 	}
+ 
+ 	atomic_set(&rbio->error, 0);
+ 
++	/* Map the parity stripe just once */
++	pointers[nr_data] = kmap(p_page);
++
+ 	for_each_set_bit(pagenr, rbio->dbitmap, rbio->stripe_npages) {
+ 		struct page *p;
+ 		void *parity;
+@@ -2382,16 +2387,8 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
+ 			pointers[stripe] = kmap(p);
+ 		}
+ 
+-		/* then add the parity stripe */
+-		pointers[stripe++] = kmap(p_page);
+-
+ 		if (has_qstripe) {
+-			/*
+-			 * raid6, add the qstripe and call the
+-			 * library function to fill in our p/q
+-			 */
+-			pointers[stripe++] = kmap(q_page);
+-
++			/* RAID6, call the library function to fill in our P/Q */
+ 			raid6_call.gen_syndrome(rbio->real_stripes, PAGE_SIZE,
+ 						pointers);
+ 		} else {
+@@ -2412,12 +2409,14 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
+ 
+ 		for (stripe = 0; stripe < nr_data; stripe++)
+ 			kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
+-		kunmap(p_page);
+ 	}
+ 
++	kunmap(p_page);
+ 	__free_page(p_page);
+-	if (q_page)
++	if (q_page) {
++		kunmap(q_page);
+ 		__free_page(q_page);
++	}
+ 
+ writeback:
+ 	/*
+diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
+index a646af95dd100..c4f87df532833 100644
+--- a/fs/btrfs/reflink.c
++++ b/fs/btrfs/reflink.c
+@@ -548,6 +548,24 @@ process_slot:
+ 		btrfs_release_path(path);
+ 		path->leave_spinning = 0;
+ 
++		/*
++		 * When using NO_HOLES and we are cloning a range that covers
++		 * only a hole (no extents) into a range beyond the current
++		 * i_size, punching a hole in the target range will not create
++		 * an extent map defining a hole, because the range starts at or
++		 * beyond current i_size. If the file previously had an i_size
++		 * greater than the new i_size set by this clone operation, we
++		 * need to make sure the next fsync is a full fsync, so that it
++		 * detects and logs a hole covering a range from the current
++		 * i_size to the new i_size. If the clone range covers extents,
++		 * besides a hole, then we know the full sync flag was already
++		 * set by previous calls to btrfs_replace_file_extents() that
++		 * replaced file extent items.
++		 */
++		if (last_dest_end >= i_size_read(inode))
++			set_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
++				&BTRFS_I(inode)->runtime_flags);
++
+ 		ret = btrfs_replace_file_extents(inode, path, last_dest_end,
+ 				destoff + len - 1, NULL, &trans);
+ 		if (ret)
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index e71e7586e9eb0..0392c556af601 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -3568,6 +3568,13 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+ 			 * commit_transactions.
+ 			 */
+ 			ro_set = 0;
++		} else if (ret == -ETXTBSY) {
++			btrfs_warn(fs_info,
++		   "skipping scrub of block group %llu due to active swapfile",
++				   cache->start);
++			scrub_pause_off(fs_info);
++			ret = 0;
++			goto skip_unfreeze;
+ 		} else {
+ 			btrfs_warn(fs_info,
+ 				   "failed setting block group ro: %d", ret);
+@@ -3657,7 +3664,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+ 		} else {
+ 			spin_unlock(&cache->lock);
+ 		}
+-
++skip_unfreeze:
+ 		btrfs_unfreeze_block_group(cache);
+ 		btrfs_put_block_group(cache);
+ 		if (ret)
+diff --git a/fs/btrfs/xattr.c b/fs/btrfs/xattr.c
+index e51774201d53b..f1a60bcdb3db8 100644
+--- a/fs/btrfs/xattr.c
++++ b/fs/btrfs/xattr.c
+@@ -229,11 +229,33 @@ int btrfs_setxattr_trans(struct inode *inode, const char *name,
+ {
+ 	struct btrfs_root *root = BTRFS_I(inode)->root;
+ 	struct btrfs_trans_handle *trans;
++	const bool start_trans = (current->journal_info == NULL);
+ 	int ret;
+ 
+-	trans = btrfs_start_transaction(root, 2);
+-	if (IS_ERR(trans))
+-		return PTR_ERR(trans);
++	if (start_trans) {
++		/*
++		 * 1 unit for inserting/updating/deleting the xattr
++		 * 1 unit for the inode item update
++		 */
++		trans = btrfs_start_transaction(root, 2);
++		if (IS_ERR(trans))
++			return PTR_ERR(trans);
++	} else {
++		/*
++		 * This can happen when smack is enabled and a directory is being
++		 * created. It happens through d_instantiate_new(), which calls
++		 * smack_d_instantiate(), which in turn calls __vfs_setxattr() to
++		 * set the transmute xattr (XATTR_NAME_SMACKTRANSMUTE) on the
++		 * inode. We have already reserved space for the xattr and inode
++		 * update at btrfs_mkdir(), so just use the transaction handle.
++		 * We don't join or start a transaction, as that will reset the
++		 * block_rsv of the handle and trigger a warning for the start
++		 * case.
++		 */
++		ASSERT(strncmp(name, XATTR_SECURITY_PREFIX,
++			       XATTR_SECURITY_PREFIX_LEN) == 0);
++		trans = current->journal_info;
++	}
+ 
+ 	ret = btrfs_setxattr(trans, inode, name, value, size, flags);
+ 	if (ret)
+@@ -244,7 +266,8 @@ int btrfs_setxattr_trans(struct inode *inode, const char *name,
+ 	ret = btrfs_update_inode(trans, root, inode);
+ 	BUG_ON(ret);
+ out:
+-	btrfs_end_transaction(trans);
++	if (start_trans)
++		btrfs_end_transaction(trans);
+ 	return ret;
+ }
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index d0172cc4f6427..691c998691439 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -5083,6 +5083,9 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+ 			pt->error = -EINVAL;
+ 			return;
+ 		}
++		/* double add on the same waitqueue head, ignore */
++		if (poll->head == head)
++			return;
+ 		poll = kmalloc(sizeof(*poll), GFP_ATOMIC);
+ 		if (!poll) {
+ 			pt->error = -ENOMEM;
+diff --git a/include/crypto/hash.h b/include/crypto/hash.h
+index af2ff31ff619f..13f8a6a54ca87 100644
+--- a/include/crypto/hash.h
++++ b/include/crypto/hash.h
+@@ -149,7 +149,7 @@ struct ahash_alg {
+ 
+ struct shash_desc {
+ 	struct crypto_shash *tfm;
+-	void *__ctx[] CRYPTO_MINALIGN_ATTR;
++	void *__ctx[] __aligned(ARCH_SLAB_MINALIGN);
+ };
+ 
+ #define HASH_MAX_DIGESTSIZE	 64
+@@ -162,9 +162,9 @@ struct shash_desc {
+ 
+ #define HASH_MAX_STATESIZE	512
+ 
+-#define SHASH_DESC_ON_STACK(shash, ctx)				  \
+-	char __##shash##_desc[sizeof(struct shash_desc) +	  \
+-		HASH_MAX_DESCSIZE] CRYPTO_MINALIGN_ATTR; \
++#define SHASH_DESC_ON_STACK(shash, ctx)					     \
++	char __##shash##_desc[sizeof(struct shash_desc) + HASH_MAX_DESCSIZE] \
++		__aligned(__alignof__(struct shash_desc));		     \
+ 	struct shash_desc *shash = (struct shash_desc *)__##shash##_desc
+ 
+ /**
+diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h
+index 20a32120bb880..1a12baa58e409 100644
+--- a/include/linux/acpi_iort.h
++++ b/include/linux/acpi_iort.h
+@@ -38,6 +38,7 @@ void iort_dma_setup(struct device *dev, u64 *dma_addr, u64 *size);
+ const struct iommu_ops *iort_iommu_configure_id(struct device *dev,
+ 						const u32 *id_in);
+ int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head);
++phys_addr_t acpi_iort_dma_get_max_cpu_address(void);
+ #else
+ static inline void acpi_iort_init(void) { }
+ static inline u32 iort_msi_map_id(struct device *dev, u32 id)
+@@ -55,6 +56,9 @@ static inline const struct iommu_ops *iort_iommu_configure_id(
+ static inline
+ int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head)
+ { return 0; }
++
++static inline phys_addr_t acpi_iort_dma_get_max_cpu_address(void)
++{ return PHYS_ADDR_MAX; }
+ #endif
+ 
+ #endif /* __ACPI_IORT_H__ */
+diff --git a/include/linux/crypto.h b/include/linux/crypto.h
+index ef90e07c9635c..e3abd1f8646a1 100644
+--- a/include/linux/crypto.h
++++ b/include/linux/crypto.h
+@@ -151,9 +151,12 @@
+  * The macro CRYPTO_MINALIGN_ATTR (along with the void * type in the actual
+  * declaration) is used to ensure that the crypto_tfm context structure is
+  * aligned correctly for the given architecture so that there are no alignment
+- * faults for C data types.  In particular, this is required on platforms such
+- * as arm where pointers are 32-bit aligned but there are data types such as
+- * u64 which require 64-bit alignment.
++ * faults for C data types.  On architectures that support non-cache coherent
++ * DMA, such as ARM or arm64, it also takes into account the minimal alignment
++ * that is required to ensure that the context struct member does not share any
++ * cachelines with the rest of the struct. This is needed to ensure that cache
++ * maintenance for non-coherent DMA (cache invalidation in particular) does not
++ * affect data that may be accessed by the CPU concurrently.
+  */
+ #define CRYPTO_MINALIGN ARCH_KMALLOC_MINALIGN
+ 
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index fb3bf696c05e8..9d0c454d23cd6 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -354,26 +354,6 @@ enum zone_type {
+ 	 * DMA mask is assumed when ZONE_DMA32 is defined. Some 64-bit
+ 	 * platforms may need both zones as they support peripherals with
+ 	 * different DMA addressing limitations.
+-	 *
+-	 * Some examples:
+-	 *
+-	 *  - i386 and x86_64 have a fixed 16M ZONE_DMA and ZONE_DMA32 for the
+-	 *    rest of the lower 4G.
+-	 *
+-	 *  - arm only uses ZONE_DMA, the size, up to 4G, may vary depending on
+-	 *    the specific device.
+-	 *
+-	 *  - arm64 has a fixed 1G ZONE_DMA and ZONE_DMA32 for the rest of the
+-	 *    lower 4G.
+-	 *
+-	 *  - powerpc only uses ZONE_DMA, the size, up to 2G, may vary
+-	 *    depending on the specific device.
+-	 *
+-	 *  - s390 uses ZONE_DMA fixed to the lower 2G.
+-	 *
+-	 *  - ia64 and riscv only use ZONE_DMA32.
+-	 *
+-	 *  - parisc uses neither.
+ 	 */
+ #ifdef CONFIG_ZONE_DMA
+ 	ZONE_DMA,
+diff --git a/include/linux/of.h b/include/linux/of.h
+index af655d264f10f..0f4e81e6fb232 100644
+--- a/include/linux/of.h
++++ b/include/linux/of.h
+@@ -558,6 +558,8 @@ int of_map_id(struct device_node *np, u32 id,
+ 	       const char *map_name, const char *map_mask_name,
+ 	       struct device_node **target, u32 *id_out);
+ 
++phys_addr_t of_dma_get_max_cpu_address(struct device_node *np);
++
+ #else /* CONFIG_OF */
+ 
+ static inline void of_core_init(void)
+@@ -995,6 +997,11 @@ static inline int of_map_id(struct device_node *np, u32 id,
+ 	return -EINVAL;
+ }
+ 
++static inline phys_addr_t of_dma_get_max_cpu_address(struct device_node *np)
++{
++	return PHYS_ADDR_MAX;
++}
++
+ #define of_match_ptr(_ptr)	NULL
+ #define of_match_node(_matches, _node)	NULL
+ #endif /* CONFIG_OF */
+diff --git a/include/sound/intel-nhlt.h b/include/sound/intel-nhlt.h
+index 743c2f4422806..d0574805865f9 100644
+--- a/include/sound/intel-nhlt.h
++++ b/include/sound/intel-nhlt.h
+@@ -112,6 +112,11 @@ struct nhlt_vendor_dmic_array_config {
+ 	/* TODO add vendor mic config */
+ } __packed;
+ 
++enum {
++	NHLT_CONFIG_TYPE_GENERIC = 0,
++	NHLT_CONFIG_TYPE_MIC_ARRAY = 1
++};
++
+ enum {
+ 	NHLT_MIC_ARRAY_2CH_SMALL = 0xa,
+ 	NHLT_MIC_ARRAY_2CH_BIG = 0xb,
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index ddeb865706ba4..b12e5f4721ca7 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -2836,6 +2836,17 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
+ 				       write_stamp, write_stamp - delta))
+ 			return 0;
+ 
++		/*
++		 * It's possible that the event time delta is zero
++		 * (has the same time stamp as the previous event)
++		 * in which case write_stamp and before_stamp could
++		 * be the same. In such a case, force before_stamp
++		 * to be different than write_stamp. It doesn't
++		 * matter what it is, as long as its different.
++		 */
++		if (!delta)
++			rb_time_set(&cpu_buffer->before_stamp, 0);
++
+ 		/*
+ 		 * If an event were to come in now, it would see that the
+ 		 * write_stamp and the before_stamp are different, and assume
+diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
+index b9c2ee7ab43fa..cce12e1971d85 100644
+--- a/scripts/recordmcount.c
++++ b/scripts/recordmcount.c
+@@ -438,7 +438,7 @@ static int arm_is_fake_mcount(Elf32_Rel const *rp)
+ 
+ static int arm64_is_fake_mcount(Elf64_Rel const *rp)
+ {
+-	return ELF64_R_TYPE(w(rp->r_info)) != R_AARCH64_CALL26;
++	return ELF64_R_TYPE(w8(rp->r_info)) != R_AARCH64_CALL26;
+ }
+ 
+ /* 64-bit EM_MIPS has weird ELF64_Rela.r_info.
+diff --git a/security/tomoyo/network.c b/security/tomoyo/network.c
+index a89ed55d85d41..478f757ff8435 100644
+--- a/security/tomoyo/network.c
++++ b/security/tomoyo/network.c
+@@ -613,7 +613,7 @@ static int tomoyo_check_unix_address(struct sockaddr *addr,
+ static bool tomoyo_kernel_service(void)
+ {
+ 	/* Nothing to do if I am a kernel service. */
+-	return uaccess_kernel();
++	return (current->flags & (PF_KTHREAD | PF_IO_WORKER)) == PF_KTHREAD;
+ }
+ 
+ /**
+diff --git a/sound/hda/intel-nhlt.c b/sound/hda/intel-nhlt.c
+index 059aaf04f536a..d053beccfaec3 100644
+--- a/sound/hda/intel-nhlt.c
++++ b/sound/hda/intel-nhlt.c
+@@ -31,18 +31,44 @@ int intel_nhlt_get_dmic_geo(struct device *dev, struct nhlt_acpi_table *nhlt)
+ 	struct nhlt_endpoint *epnt;
+ 	struct nhlt_dmic_array_config *cfg;
+ 	struct nhlt_vendor_dmic_array_config *cfg_vendor;
++	struct nhlt_fmt *fmt_configs;
+ 	unsigned int dmic_geo = 0;
+-	u8 j;
++	u16 max_ch = 0;
++	u8 i, j;
+ 
+ 	if (!nhlt)
+ 		return 0;
+ 
+-	epnt = (struct nhlt_endpoint *)nhlt->desc;
++	for (j = 0, epnt = nhlt->desc; j < nhlt->endpoint_count; j++,
++	     epnt = (struct nhlt_endpoint *)((u8 *)epnt + epnt->length)) {
+ 
+-	for (j = 0; j < nhlt->endpoint_count; j++) {
+-		if (epnt->linktype == NHLT_LINK_DMIC) {
+-			cfg = (struct nhlt_dmic_array_config  *)
+-					(epnt->config.caps);
++		if (epnt->linktype != NHLT_LINK_DMIC)
++			continue;
++
++		cfg = (struct nhlt_dmic_array_config  *)(epnt->config.caps);
++		fmt_configs = (struct nhlt_fmt *)(epnt->config.caps + epnt->config.size);
++
++		/* find max number of channels based on format_configuration */
++		if (fmt_configs->fmt_count) {
++			dev_dbg(dev, "%s: found %d format definitions\n",
++				__func__, fmt_configs->fmt_count);
++
++			for (i = 0; i < fmt_configs->fmt_count; i++) {
++				struct wav_fmt_ext *fmt_ext;
++
++				fmt_ext = &fmt_configs->fmt_config[i].fmt_ext;
++
++				if (fmt_ext->fmt.channels > max_ch)
++					max_ch = fmt_ext->fmt.channels;
++			}
++			dev_dbg(dev, "%s: max channels found %d\n", __func__, max_ch);
++		} else {
++			dev_dbg(dev, "%s: No format information found\n", __func__);
++		}
++
++		if (cfg->device_config.config_type != NHLT_CONFIG_TYPE_MIC_ARRAY) {
++			dmic_geo = max_ch;
++		} else {
+ 			switch (cfg->array_type) {
+ 			case NHLT_MIC_ARRAY_2CH_SMALL:
+ 			case NHLT_MIC_ARRAY_2CH_BIG:
+@@ -59,13 +85,23 @@ int intel_nhlt_get_dmic_geo(struct device *dev, struct nhlt_acpi_table *nhlt)
+ 				dmic_geo = cfg_vendor->nb_mics;
+ 				break;
+ 			default:
+-				dev_warn(dev, "undefined DMIC array_type 0x%0x\n",
+-					 cfg->array_type);
++				dev_warn(dev, "%s: undefined DMIC array_type 0x%0x\n",
++					 __func__, cfg->array_type);
++			}
++
++			if (dmic_geo > 0) {
++				dev_dbg(dev, "%s: Array with %d dmics\n", __func__, dmic_geo);
++			}
++			if (max_ch > dmic_geo) {
++				dev_dbg(dev, "%s: max channels %d exceed dmic number %d\n",
++					__func__, max_ch, dmic_geo);
+ 			}
+ 		}
+-		epnt = (struct nhlt_endpoint *)((u8 *)epnt + epnt->length);
+ 	}
+ 
++	dev_dbg(dev, "%s: dmic number %d max_ch %d\n",
++		__func__, dmic_geo, max_ch);
++
+ 	return dmic_geo;
+ }
+ EXPORT_SYMBOL_GPL(intel_nhlt_get_dmic_geo);
+diff --git a/sound/pci/ctxfi/cthw20k2.c b/sound/pci/ctxfi/cthw20k2.c
+index fc1bc18caee98..85d1fc76f59e1 100644
+--- a/sound/pci/ctxfi/cthw20k2.c
++++ b/sound/pci/ctxfi/cthw20k2.c
+@@ -991,7 +991,7 @@ static int daio_mgr_dao_init(void *blk, unsigned int idx, unsigned int conf)
+ 
+ 	if (idx < 4) {
+ 		/* S/PDIF output */
+-		switch ((conf & 0x7)) {
++		switch ((conf & 0xf)) {
+ 		case 1:
+ 			set_field(&ctl->txctl[idx], ATXCTL_NUC, 0);
+ 			break;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 5f4f8c2d760f0..b47504fa8dfd0 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6408,6 +6408,7 @@ enum {
+ 	ALC236_FIXUP_DELL_AIO_HEADSET_MIC,
+ 	ALC282_FIXUP_ACER_DISABLE_LINEOUT,
+ 	ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST,
++	ALC256_FIXUP_ACER_HEADSET_MIC,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7864,6 +7865,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
+ 	},
++	[ALC256_FIXUP_ACER_HEADSET_MIC] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x02a1113c }, /* use as headset mic, without its own jack detect */
++			{ 0x1a, 0x90a1092f }, /* use as internal mic */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7890,9 +7901,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x1246, "Acer Predator Helios 500", ALC299_FIXUP_PREDATOR_SPK),
+ 	SND_PCI_QUIRK(0x1025, 0x1247, "Acer vCopperbox", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS),
+ 	SND_PCI_QUIRK(0x1025, 0x1248, "Acer Veriton N4660G", ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1025, 0x1269, "Acer SWIFT SF314-54", ALC256_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x129c, "Acer SWIFT SF314-55", ALC256_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 81e987eaf0637..375cfb9c9ab7e 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1301,6 +1301,17 @@ no_res_check:
+ 			/* totally crap, return an error */
+ 			return -EINVAL;
+ 		}
++	} else {
++		/* if the max volume is too low, it's likely a bogus range;
++		 * here we use -96dB as the threshold
++		 */
++		if (cval->dBmax <= -9600) {
++			usb_audio_info(cval->head.mixer->chip,
++				       "%d:%d: bogus dB values (%d/%d), disabling dB reporting\n",
++				       cval->head.id, mixer_ctrl_intf(cval->head.mixer),
++				       cval->dBmin, cval->dBmax);
++			cval->dBmin = cval->dBmax = 0;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index a7212f16660ec..646deb6244b15 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -536,6 +536,16 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.id = USB_ID(0x05a7, 0x1020),
+ 		.map = bose_companion5_map,
+ 	},
++	{
++		/* Corsair Virtuoso SE (wired mode) */
++		.id = USB_ID(0x1b1c, 0x0a3d),
++		.map = corsair_virtuoso_map,
++	},
++	{
++		/* Corsair Virtuoso SE (wireless mode) */
++		.id = USB_ID(0x1b1c, 0x0a3e),
++		.map = corsair_virtuoso_map,
++	},
+ 	{
+ 		/* Corsair Virtuoso (wired mode) */
+ 		.id = USB_ID(0x1b1c, 0x0a41),


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-03-11 15:08 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-03-11 15:08 UTC (permalink / raw
  To: gentoo-commits

commit:     bcc300be8c872d577fbf92a1d82df611433ae496
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 11 15:08:31 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Mar 11 15:08:31 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bcc300be

Linux patch 5.10.23

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1022_linux-5.10.23.patch | 2146 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2150 insertions(+)

diff --git a/0000_README b/0000_README
index 36a5539..5a10fbd 100644
--- a/0000_README
+++ b/0000_README
@@ -131,6 +131,10 @@ Patch:  1021_linux-5.10.22.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.22
 
+Patch:  1022_linux-5.10.23.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.23
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1022_linux-5.10.23.patch b/1022_linux-5.10.23.patch
new file mode 100644
index 0000000..975a160
--- /dev/null
+++ b/1022_linux-5.10.23.patch
@@ -0,0 +1,2146 @@
+diff --git a/Makefile b/Makefile
+index 12a2a7271fcb0..7fdb78b48f556 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 22
++SUBLEVEL = 23
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index afe4bc55d4eba..c1be64228327c 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -950,8 +950,9 @@ choice
+ 	  that is selected here.
+ 
+ config CPU_BIG_ENDIAN
+-       bool "Build big-endian kernel"
+-       help
++	bool "Build big-endian kernel"
++	depends on !LD_IS_LLD || LLD_VERSION >= 130000
++	help
+ 	  Say Y if you plan on running a kernel with a big-endian userspace.
+ 
+ config CPU_LITTLE_ENDIAN
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index 04dc17d52ac2d..14f3252f2da03 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -201,9 +201,12 @@ config PREFETCH
+ 	def_bool y
+ 	depends on PA8X00 || PA7200
+ 
++config PARISC_HUGE_KERNEL
++	def_bool y if !MODULES || UBSAN || FTRACE || COMPILE_TEST
++
+ config MLONGCALLS
+-	def_bool y if !MODULES || UBSAN || FTRACE
+-	bool "Enable the -mlong-calls compiler option for big kernels" if MODULES && !UBSAN && !FTRACE
++	def_bool y if PARISC_HUGE_KERNEL
++	bool "Enable the -mlong-calls compiler option for big kernels" if !PARISC_HUGE_KERNEL
+ 	depends on PA8X00
+ 	help
+ 	  If you configure the kernel to include many drivers built-in instead
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 76ab1ee0784ae..642f0da31ac4f 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1192,6 +1192,7 @@ static void init_vmcb(struct vcpu_svm *svm)
+ 	init_sys_seg(&save->ldtr, SEG_TYPE_LDT);
+ 	init_sys_seg(&save->tr, SEG_TYPE_BUSY_TSS16);
+ 
++	svm_set_cr4(&svm->vcpu, 0);
+ 	svm_set_efer(&svm->vcpu, 0);
+ 	save->dr6 = 0xffff0ff0;
+ 	kvm_set_rflags(&svm->vcpu, 2);
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index e7ca622a468f5..2249a7d7ca27f 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -404,6 +404,8 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
+ 		__reserved_bits |= X86_CR4_UMIP;        \
+ 	if (!__cpu_has(__c, X86_FEATURE_VMX))           \
+ 		__reserved_bits |= X86_CR4_VMXE;        \
++	if (!__cpu_has(__c, X86_FEATURE_PCID))          \
++		__reserved_bits |= X86_CR4_PCIDE;       \
+ 	__reserved_bits;                                \
+ })
+ 
+diff --git a/drivers/acpi/acpica/acobject.h b/drivers/acpi/acpica/acobject.h
+index 9f0219a8cb985..dd7efafcb1034 100644
+--- a/drivers/acpi/acpica/acobject.h
++++ b/drivers/acpi/acpica/acobject.h
+@@ -284,6 +284,7 @@ struct acpi_object_addr_handler {
+ 	acpi_adr_space_handler handler;
+ 	struct acpi_namespace_node *node;	/* Parent device */
+ 	void *context;
++	acpi_mutex context_mutex;
+ 	acpi_adr_space_setup setup;
+ 	union acpi_operand_object *region_list;	/* Regions using this handler */
+ 	union acpi_operand_object *next;
+diff --git a/drivers/acpi/acpica/evhandler.c b/drivers/acpi/acpica/evhandler.c
+index 5884eba047f73..3438dc187efb6 100644
+--- a/drivers/acpi/acpica/evhandler.c
++++ b/drivers/acpi/acpica/evhandler.c
+@@ -489,6 +489,13 @@ acpi_ev_install_space_handler(struct acpi_namespace_node *node,
+ 
+ 	/* Init handler obj */
+ 
++	status =
++	    acpi_os_create_mutex(&handler_obj->address_space.context_mutex);
++	if (ACPI_FAILURE(status)) {
++		acpi_ut_remove_reference(handler_obj);
++		goto unlock_and_exit;
++	}
++
+ 	handler_obj->address_space.space_id = (u8)space_id;
+ 	handler_obj->address_space.handler_flags = flags;
+ 	handler_obj->address_space.region_list = NULL;
+diff --git a/drivers/acpi/acpica/evregion.c b/drivers/acpi/acpica/evregion.c
+index 738d4b231f34a..980efc9bd5ee7 100644
+--- a/drivers/acpi/acpica/evregion.c
++++ b/drivers/acpi/acpica/evregion.c
+@@ -111,6 +111,8 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
+ 	union acpi_operand_object *region_obj2;
+ 	void *region_context = NULL;
+ 	struct acpi_connection_info *context;
++	acpi_mutex context_mutex;
++	u8 context_locked;
+ 	acpi_physical_address address;
+ 
+ 	ACPI_FUNCTION_TRACE(ev_address_space_dispatch);
+@@ -135,6 +137,8 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
+ 	}
+ 
+ 	context = handler_desc->address_space.context;
++	context_mutex = handler_desc->address_space.context_mutex;
++	context_locked = FALSE;
+ 
+ 	/*
+ 	 * It may be the case that the region has never been initialized.
+@@ -203,6 +207,23 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
+ 	handler = handler_desc->address_space.handler;
+ 	address = (region_obj->region.address + region_offset);
+ 
++	ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
++			  "Handler %p (@%p) Address %8.8X%8.8X [%s]\n",
++			  &region_obj->region.handler->address_space, handler,
++			  ACPI_FORMAT_UINT64(address),
++			  acpi_ut_get_region_name(region_obj->region.
++						  space_id)));
++
++	if (!(handler_desc->address_space.handler_flags &
++	      ACPI_ADDR_HANDLER_DEFAULT_INSTALLED)) {
++		/*
++		 * For handlers other than the default (supplied) handlers, we must
++		 * exit the interpreter because the handler *might* block -- we don't
++		 * know what it will do, so we can't hold the lock on the interpreter.
++		 */
++		acpi_ex_exit_interpreter();
++	}
++
+ 	/*
+ 	 * Special handling for generic_serial_bus and general_purpose_io:
+ 	 * There are three extra parameters that must be passed to the
+@@ -211,6 +232,11 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
+ 	 *   2) Length of the above buffer
+ 	 *   3) Actual access length from the access_as() op
+ 	 *
++	 * Since we pass these extra parameters via the context, which is
++	 * shared between threads, we must lock the context to avoid these
++	 * parameters being changed from another thread before the handler
++	 * has completed running.
++	 *
+ 	 * In addition, for general_purpose_io, the Address and bit_width fields
+ 	 * are defined as follows:
+ 	 *   1) Address is the pin number index of the field (bit offset from
+@@ -220,6 +246,14 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
+ 	if ((region_obj->region.space_id == ACPI_ADR_SPACE_GSBUS) &&
+ 	    context && field_obj) {
+ 
++		status =
++		    acpi_os_acquire_mutex(context_mutex, ACPI_WAIT_FOREVER);
++		if (ACPI_FAILURE(status)) {
++			goto re_enter_interpreter;
++		}
++
++		context_locked = TRUE;
++
+ 		/* Get the Connection (resource_template) buffer */
+ 
+ 		context->connection = field_obj->field.resource_buffer;
+@@ -229,6 +263,14 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
+ 	if ((region_obj->region.space_id == ACPI_ADR_SPACE_GPIO) &&
+ 	    context && field_obj) {
+ 
++		status =
++		    acpi_os_acquire_mutex(context_mutex, ACPI_WAIT_FOREVER);
++		if (ACPI_FAILURE(status)) {
++			goto re_enter_interpreter;
++		}
++
++		context_locked = TRUE;
++
+ 		/* Get the Connection (resource_template) buffer */
+ 
+ 		context->connection = field_obj->field.resource_buffer;
+@@ -238,28 +280,15 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
+ 		bit_width = field_obj->field.bit_length;
+ 	}
+ 
+-	ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
+-			  "Handler %p (@%p) Address %8.8X%8.8X [%s]\n",
+-			  &region_obj->region.handler->address_space, handler,
+-			  ACPI_FORMAT_UINT64(address),
+-			  acpi_ut_get_region_name(region_obj->region.
+-						  space_id)));
+-
+-	if (!(handler_desc->address_space.handler_flags &
+-	      ACPI_ADDR_HANDLER_DEFAULT_INSTALLED)) {
+-		/*
+-		 * For handlers other than the default (supplied) handlers, we must
+-		 * exit the interpreter because the handler *might* block -- we don't
+-		 * know what it will do, so we can't hold the lock on the interpreter.
+-		 */
+-		acpi_ex_exit_interpreter();
+-	}
+-
+ 	/* Call the handler */
+ 
+ 	status = handler(function, address, bit_width, value, context,
+ 			 region_obj2->extra.region_context);
+ 
++	if (context_locked) {
++		acpi_os_release_mutex(context_mutex);
++	}
++
+ 	if (ACPI_FAILURE(status)) {
+ 		ACPI_EXCEPTION((AE_INFO, status, "Returned by Handler for [%s]",
+ 				acpi_ut_get_region_name(region_obj->region.
+@@ -276,6 +305,7 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
+ 		}
+ 	}
+ 
++re_enter_interpreter:
+ 	if (!(handler_desc->address_space.handler_flags &
+ 	      ACPI_ADDR_HANDLER_DEFAULT_INSTALLED)) {
+ 		/*
+diff --git a/drivers/acpi/acpica/evxfregn.c b/drivers/acpi/acpica/evxfregn.c
+index da97fd0c6b51e..3bb06f17a18b6 100644
+--- a/drivers/acpi/acpica/evxfregn.c
++++ b/drivers/acpi/acpica/evxfregn.c
+@@ -201,6 +201,8 @@ acpi_remove_address_space_handler(acpi_handle device,
+ 
+ 			/* Now we can delete the handler object */
+ 
++			acpi_os_release_mutex(handler_obj->address_space.
++					      context_mutex);
+ 			acpi_ut_remove_reference(handler_obj);
+ 			goto unlock_and_exit;
+ 		}
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 4f5463b2a2178..811d298637cb2 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -140,6 +140,13 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 	},
+ 	{
+ 	.callback = video_detect_force_vendor,
++	.ident = "GIGABYTE GB-BXBT-2807",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "GB-BXBT-2807"),
++		},
++	},
++	{
+ 	.ident = "Sony VPCEH3U1E",
+ 	.matches = {
+ 		DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 5c26c7d941731..ad47ff0d55c2e 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -78,6 +78,7 @@ enum qca_flags {
+ 
+ enum qca_capabilities {
+ 	QCA_CAP_WIDEBAND_SPEECH = BIT(0),
++	QCA_CAP_VALID_LE_STATES = BIT(1),
+ };
+ 
+ /* HCI_IBS transmit side sleep protocol states */
+@@ -1782,7 +1783,7 @@ static const struct qca_device_data qca_soc_data_wcn3991 = {
+ 		{ "vddch0", 450000 },
+ 	},
+ 	.num_vregs = 4,
+-	.capabilities = QCA_CAP_WIDEBAND_SPEECH,
++	.capabilities = QCA_CAP_WIDEBAND_SPEECH | QCA_CAP_VALID_LE_STATES,
+ };
+ 
+ static const struct qca_device_data qca_soc_data_wcn3998 = {
+@@ -2019,11 +2020,17 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ 		hdev->shutdown = qca_power_off;
+ 	}
+ 
+-	/* Wideband speech support must be set per driver since it can't be
+-	 * queried via hci.
+-	 */
+-	if (data && (data->capabilities & QCA_CAP_WIDEBAND_SPEECH))
+-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
++	if (data) {
++		/* Wideband speech support must be set per driver since it can't
++		 * be queried via hci. Same with the valid le states quirk.
++		 */
++		if (data->capabilities & QCA_CAP_WIDEBAND_SPEECH)
++			set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
++				&hdev->quirks);
++
++		if (data->capabilities & QCA_CAP_VALID_LE_STATES)
++			set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 92ecf1a78ec73..45f5530666d3f 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -1379,6 +1379,8 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ 		   SYSC_QUIRK_CLKDM_NOAUTO),
+ 	SYSC_QUIRK("dwc3", 0x488c0000, 0, 0x10, -ENODEV, 0x500a0200, 0xffffffff,
+ 		   SYSC_QUIRK_CLKDM_NOAUTO),
++	SYSC_QUIRK("gpmc", 0, 0, 0x10, 0x14, 0x00000060, 0xffffffff,
++		   SYSC_QUIRK_GPMC_DEBUG),
+ 	SYSC_QUIRK("hdmi", 0, 0, 0x10, -ENODEV, 0x50030200, 0xffffffff,
+ 		   SYSC_QUIRK_OPT_CLKS_NEEDED),
+ 	SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x00000006, 0xffffffff,
+@@ -1814,6 +1816,14 @@ static void sysc_init_module_quirks(struct sysc *ddata)
+ 		return;
+ 	}
+ 
++#ifdef CONFIG_OMAP_GPMC_DEBUG
++	if (ddata->cfg.quirks & SYSC_QUIRK_GPMC_DEBUG) {
++		ddata->cfg.quirks |= SYSC_QUIRK_NO_RESET_ON_INIT;
++
++		return;
++	}
++#endif
++
+ 	if (ddata->cfg.quirks & SYSC_MODULE_QUIRK_I2C) {
+ 		ddata->pre_reset_quirk = sysc_pre_reset_quirk_i2c;
+ 		ddata->post_reset_quirk = sysc_post_reset_quirk_i2c;
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index 69ed2c6094665..5e11cdb207d83 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -626,8 +626,6 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
+ 	if (adreno_gpu->info->quirks & ADRENO_QUIRK_TWO_PASS_USE_WFI)
+ 		gpu_rmw(gpu, REG_A5XX_PC_DBG_ECO_CNTL, 0, (1 << 8));
+ 
+-	gpu_write(gpu, REG_A5XX_PC_DBG_ECO_CNTL, 0xc0200100);
+-
+ 	/* Enable USE_RETENTION_FLOPS */
+ 	gpu_write(gpu, REG_A5XX_CP_CHICKEN_DBG, 0x02000000);
+ 
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 94180c63571ed..06813f297dcca 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -359,6 +359,7 @@
+ #define USB_DEVICE_ID_DRAGONRISE_DOLPHINBAR	0x1803
+ #define USB_DEVICE_ID_DRAGONRISE_GAMECUBE1	0x1843
+ #define USB_DEVICE_ID_DRAGONRISE_GAMECUBE2	0x1844
++#define USB_DEVICE_ID_DRAGONRISE_GAMECUBE3	0x1846
+ 
+ #define USB_VENDOR_ID_DWAV		0x0eef
+ #define USB_DEVICE_ID_EGALAX_TOUCHCONTROLLER	0x0001
+@@ -638,6 +639,8 @@
+ #define USB_DEVICE_ID_INNEX_GENESIS_ATARI	0x4745
+ 
+ #define USB_VENDOR_ID_ITE               0x048d
++#define I2C_VENDOR_ID_ITE		0x103c
++#define I2C_DEVICE_ID_ITE_VOYO_WINPAD_A15	0x184f
+ #define USB_DEVICE_ID_ITE_LENOVO_YOGA   0x8386
+ #define USB_DEVICE_ID_ITE_LENOVO_YOGA2  0x8350
+ #define I2C_DEVICE_ID_ITE_LENOVO_LEGION_Y720	0x837a
+diff --git a/drivers/hid/hid-mf.c b/drivers/hid/hid-mf.c
+index fc75f30f537c9..92d7ecd41a78f 100644
+--- a/drivers/hid/hid-mf.c
++++ b/drivers/hid/hid-mf.c
+@@ -153,6 +153,8 @@ static const struct hid_device_id mf_devices[] = {
+ 		.driver_data = HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_GAMECUBE2),
+ 		.driver_data = 0 }, /* No quirk required */
++	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_GAMECUBE3),
++		.driver_data = HID_QUIRK_MULTI_INPUT },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(hid, mf_devices);
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index bf7ecab5d9e5e..2e38340e19dfb 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -72,6 +72,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_REDRAGON_SEYMUR2), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_DOLPHINBAR), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_GAMECUBE1), HID_QUIRK_MULTI_INPUT },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_GAMECUBE3), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_PS3), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_WIIU), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DWAV, USB_DEVICE_ID_EGALAX_TOUCHCONTROLLER), HID_QUIRK_MULTI_INPUT | HID_QUIRK_NOGET },
+@@ -484,6 +485,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_DOLPHINBAR) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_GAMECUBE1) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_GAMECUBE2) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_GAMECUBE3) },
+ #endif
+ #if IS_ENABLED(CONFIG_HID_MICROSOFT)
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_COMFORT_MOUSE_4500) },
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index aeff1ffb0c8b3..cb7758d59014e 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -171,6 +171,8 @@ static const struct i2c_hid_quirks {
+ 		I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV },
+ 	{ I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288,
+ 		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
++	{ I2C_VENDOR_ID_ITE, I2C_DEVICE_ID_ITE_VOYO_WINPAD_A15,
++		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
+ 	{ I2C_VENDOR_ID_RAYDIUM, I2C_PRODUCT_ID_RAYDIUM_3118,
+ 		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
+ 	{ USB_VENDOR_ID_ELAN, HID_ANY_ID,
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index b9cf59443843b..5f1195791cb18 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -1503,6 +1503,10 @@ static bool increase_address_space(struct protection_domain *domain,
+ 	bool ret = true;
+ 	u64 *pte;
+ 
++	pte = (void *)get_zeroed_page(gfp);
++	if (!pte)
++		return false;
++
+ 	spin_lock_irqsave(&domain->lock, flags);
+ 
+ 	amd_iommu_domain_get_pgtable(domain, &pgtable);
+@@ -1514,10 +1518,6 @@ static bool increase_address_space(struct protection_domain *domain,
+ 	if (WARN_ON_ONCE(pgtable.mode == PAGE_MODE_6_LEVEL))
+ 		goto out;
+ 
+-	pte = (void *)get_zeroed_page(gfp);
+-	if (!pte)
+-		goto out;
+-
+ 	*pte = PM_LEVEL_PDE(pgtable.mode, iommu_virt_to_phys(pgtable.root));
+ 
+ 	pgtable.root  = pte;
+@@ -1531,10 +1531,12 @@ static bool increase_address_space(struct protection_domain *domain,
+ 	 */
+ 	amd_iommu_domain_set_pgtable(domain, pte, pgtable.mode);
+ 
++	pte = NULL;
+ 	ret = true;
+ 
+ out:
+ 	spin_unlock_irqrestore(&domain->lock, flags);
++	free_page((unsigned long)pte);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/pci/cx23885/cx23885-core.c b/drivers/media/pci/cx23885/cx23885-core.c
+index 4b0c53f61fb7c..4e8132d4b2dfa 100644
+--- a/drivers/media/pci/cx23885/cx23885-core.c
++++ b/drivers/media/pci/cx23885/cx23885-core.c
+@@ -2074,6 +2074,10 @@ static struct {
+ 	 * 0x1451 is PCI ID for the IOMMU found on Ryzen
+ 	 */
+ 	{ PCI_VENDOR_ID_AMD, 0x1451 },
++	/* According to sudo lspci -nn,
++	 * 0x1423 is the PCI ID for the IOMMU found on Kaveri
++	 */
++	{ PCI_VENDOR_ID_AMD, 0x1423 },
+ };
+ 
+ static bool cx23885_does_need_dma_reset(void)
+diff --git a/drivers/misc/eeprom/eeprom_93xx46.c b/drivers/misc/eeprom/eeprom_93xx46.c
+index d92c4d2c521a3..6e5f544c9c737 100644
+--- a/drivers/misc/eeprom/eeprom_93xx46.c
++++ b/drivers/misc/eeprom/eeprom_93xx46.c
+@@ -35,6 +35,10 @@ static const struct eeprom_93xx46_devtype_data atmel_at93c46d_data = {
+ 		  EEPROM_93XX46_QUIRK_INSTRUCTION_LENGTH,
+ };
+ 
++static const struct eeprom_93xx46_devtype_data microchip_93lc46b_data = {
++	.quirks = EEPROM_93XX46_QUIRK_EXTRA_READ_CYCLE,
++};
++
+ struct eeprom_93xx46_dev {
+ 	struct spi_device *spi;
+ 	struct eeprom_93xx46_platform_data *pdata;
+@@ -55,6 +59,11 @@ static inline bool has_quirk_instruction_length(struct eeprom_93xx46_dev *edev)
+ 	return edev->pdata->quirks & EEPROM_93XX46_QUIRK_INSTRUCTION_LENGTH;
+ }
+ 
++static inline bool has_quirk_extra_read_cycle(struct eeprom_93xx46_dev *edev)
++{
++	return edev->pdata->quirks & EEPROM_93XX46_QUIRK_EXTRA_READ_CYCLE;
++}
++
+ static int eeprom_93xx46_read(void *priv, unsigned int off,
+ 			      void *val, size_t count)
+ {
+@@ -96,6 +105,11 @@ static int eeprom_93xx46_read(void *priv, unsigned int off,
+ 		dev_dbg(&edev->spi->dev, "read cmd 0x%x, %d Hz\n",
+ 			cmd_addr, edev->spi->max_speed_hz);
+ 
++		if (has_quirk_extra_read_cycle(edev)) {
++			cmd_addr <<= 1;
++			bits += 1;
++		}
++
+ 		spi_message_init(&m);
+ 
+ 		t[0].tx_buf = (char *)&cmd_addr;
+@@ -363,6 +377,7 @@ static void select_deassert(void *context)
+ static const struct of_device_id eeprom_93xx46_of_table[] = {
+ 	{ .compatible = "eeprom-93xx46", },
+ 	{ .compatible = "atmel,at93c46d", .data = &atmel_at93c46d_data, },
++	{ .compatible = "microchip,93lc46b", .data = &microchip_93lc46b_data, },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(of, eeprom_93xx46_of_table);
+diff --git a/drivers/mmc/host/sdhci-of-dwcmshc.c b/drivers/mmc/host/sdhci-of-dwcmshc.c
+index d90020ed36227..59d8d96ce206b 100644
+--- a/drivers/mmc/host/sdhci-of-dwcmshc.c
++++ b/drivers/mmc/host/sdhci-of-dwcmshc.c
+@@ -112,6 +112,7 @@ static const struct sdhci_ops sdhci_dwcmshc_ops = {
+ static const struct sdhci_pltfm_data sdhci_dwcmshc_pdata = {
+ 	.ops = &sdhci_dwcmshc_ops,
+ 	.quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
++	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+ };
+ 
+ static int dwcmshc_probe(struct platform_device *pdev)
+diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
+index 6a10ff0377a24..33cf952cc01d3 100644
+--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
+@@ -526,6 +526,8 @@ static void mwifiex_pcie_reset_prepare(struct pci_dev *pdev)
+ 	clear_bit(MWIFIEX_IFACE_WORK_DEVICE_DUMP, &card->work_flags);
+ 	clear_bit(MWIFIEX_IFACE_WORK_CARD_RESET, &card->work_flags);
+ 	mwifiex_dbg(adapter, INFO, "%s, successful\n", __func__);
++
++	card->pci_reset_ongoing = true;
+ }
+ 
+ /*
+@@ -554,6 +556,8 @@ static void mwifiex_pcie_reset_done(struct pci_dev *pdev)
+ 		dev_err(&pdev->dev, "reinit failed: %d\n", ret);
+ 	else
+ 		mwifiex_dbg(adapter, INFO, "%s, successful\n", __func__);
++
++	card->pci_reset_ongoing = false;
+ }
+ 
+ static const struct pci_error_handlers mwifiex_pcie_err_handler = {
+@@ -3142,7 +3146,19 @@ static void mwifiex_cleanup_pcie(struct mwifiex_adapter *adapter)
+ 	int ret;
+ 	u32 fw_status;
+ 
+-	cancel_work_sync(&card->work);
++	/* Perform the cancel_work_sync() only when we're not resetting
++	 * the card. It's because that function never returns if we're
++	 * in reset path. If we're here when resetting the card, it means
++	 * that we failed to reset the card (reset failure path).
++	 */
++	if (!card->pci_reset_ongoing) {
++		mwifiex_dbg(adapter, MSG, "performing cancel_work_sync()...\n");
++		cancel_work_sync(&card->work);
++		mwifiex_dbg(adapter, MSG, "cancel_work_sync() done\n");
++	} else {
++		mwifiex_dbg(adapter, MSG,
++			    "skipped cancel_work_sync() because we're in card reset failure path\n");
++	}
+ 
+ 	ret = mwifiex_read_reg(adapter, reg->fw_status, &fw_status);
+ 	if (fw_status == FIRMWARE_READY_PCIE) {
+diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.h b/drivers/net/wireless/marvell/mwifiex/pcie.h
+index 843d57eda8201..5ed613d657094 100644
+--- a/drivers/net/wireless/marvell/mwifiex/pcie.h
++++ b/drivers/net/wireless/marvell/mwifiex/pcie.h
+@@ -242,6 +242,8 @@ struct pcie_service_card {
+ 	struct mwifiex_msix_context share_irq_ctx;
+ 	struct work_struct work;
+ 	unsigned long work_flags;
++
++	bool pci_reset_ongoing;
+ };
+ 
+ static inline int
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 4a33287371bda..99c59f93a0641 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3235,7 +3235,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 	{ PCI_DEVICE(0x126f, 0x2263),	/* Silicon Motion unidentified */
+ 		.driver_data = NVME_QUIRK_NO_NS_DESC_LIST, },
+ 	{ PCI_DEVICE(0x1bb1, 0x0100),   /* Seagate Nytro Flash Storage */
+-		.driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY, },
++		.driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY |
++				NVME_QUIRK_NO_NS_DESC_LIST, },
+ 	{ PCI_DEVICE(0x1c58, 0x0003),	/* HGST adapter */
+ 		.driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY, },
+ 	{ PCI_DEVICE(0x1c58, 0x0023),	/* WDC SN200 adapter */
+@@ -3249,6 +3250,9 @@ static const struct pci_device_id nvme_id_table[] = {
+ 				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ 	{ PCI_DEVICE(0x1987, 0x5016),	/* Phison E16 */
+ 		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
++	{ PCI_DEVICE(0x1b4b, 0x1092),	/* Lexar 256 GB SSD */
++		.driver_data = NVME_QUIRK_NO_NS_DESC_LIST |
++				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ 	{ PCI_DEVICE(0x1d1d, 0x1f1f),	/* LighNVM qemu device */
+ 		.driver_data = NVME_QUIRK_LIGHTNVM, },
+ 	{ PCI_DEVICE(0x1d1d, 0x2807),	/* CNEX WL */
+@@ -3264,6 +3268,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ 	{ PCI_DEVICE(0x15b7, 0x2001),   /*  Sandisk Skyhawk */
+ 		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
++	{ PCI_DEVICE(0x2646, 0x2262),   /* KINGSTON SKC2000 NVMe SSD */
++		.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+ 	{ PCI_DEVICE(0x2646, 0x2263),   /* KINGSTON A2000 NVMe SSD  */
+ 		.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2001),
+diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
+index 586b9d69fa5e2..d34ca0fda0f66 100644
+--- a/drivers/pci/controller/cadence/pci-j721e.c
++++ b/drivers/pci/controller/cadence/pci-j721e.c
+@@ -63,6 +63,7 @@ enum j721e_pcie_mode {
+ 
+ struct j721e_pcie_data {
+ 	enum j721e_pcie_mode	mode;
++	bool quirk_retrain_flag;
+ };
+ 
+ static inline u32 j721e_pcie_user_readl(struct j721e_pcie *pcie, u32 offset)
+@@ -270,6 +271,7 @@ static struct pci_ops cdns_ti_pcie_host_ops = {
+ 
+ static const struct j721e_pcie_data j721e_pcie_rc_data = {
+ 	.mode = PCI_MODE_RC,
++	.quirk_retrain_flag = true,
+ };
+ 
+ static const struct j721e_pcie_data j721e_pcie_ep_data = {
+@@ -378,6 +380,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ 
+ 		bridge->ops = &cdns_ti_pcie_host_ops;
+ 		rc = pci_host_bridge_priv(bridge);
++		rc->quirk_retrain_flag = data->quirk_retrain_flag;
+ 
+ 		cdns_pcie = &rc->pcie;
+ 		cdns_pcie->dev = dev;
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index 1cb7cfc75d6e4..73dcf8cf98fbf 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -77,6 +77,68 @@ static struct pci_ops cdns_pcie_host_ops = {
+ 	.write		= pci_generic_config_write,
+ };
+ 
++static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
++{
++	struct device *dev = pcie->dev;
++	int retries;
++
++	/* Check if the link is up or not */
++	for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
++		if (cdns_pcie_link_up(pcie)) {
++			dev_info(dev, "Link up\n");
++			return 0;
++		}
++		usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
++	}
++
++	return -ETIMEDOUT;
++}
++
++static int cdns_pcie_retrain(struct cdns_pcie *pcie)
++{
++	u32 lnk_cap_sls, pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
++	u16 lnk_stat, lnk_ctl;
++	int ret = 0;
++
++	/*
++	 * Set retrain bit if current speed is 2.5 GB/s,
++	 * but the PCIe root port support is > 2.5 GB/s.
++	 */
++
++	lnk_cap_sls = cdns_pcie_readl(pcie, (CDNS_PCIE_RP_BASE + pcie_cap_off +
++					     PCI_EXP_LNKCAP));
++	if ((lnk_cap_sls & PCI_EXP_LNKCAP_SLS) <= PCI_EXP_LNKCAP_SLS_2_5GB)
++		return ret;
++
++	lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
++	if ((lnk_stat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB) {
++		lnk_ctl = cdns_pcie_rp_readw(pcie,
++					     pcie_cap_off + PCI_EXP_LNKCTL);
++		lnk_ctl |= PCI_EXP_LNKCTL_RL;
++		cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
++				    lnk_ctl);
++
++		ret = cdns_pcie_host_wait_for_link(pcie);
++	}
++	return ret;
++}
++
++static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc)
++{
++	struct cdns_pcie *pcie = &rc->pcie;
++	int ret;
++
++	ret = cdns_pcie_host_wait_for_link(pcie);
++
++	/*
++	 * Retrain link for Gen2 training defect
++	 * if quirk flag is set.
++	 */
++	if (!ret && rc->quirk_retrain_flag)
++		ret = cdns_pcie_retrain(pcie);
++
++	return ret;
++}
+ 
+ static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
+ {
+@@ -399,23 +461,6 @@ static int cdns_pcie_host_init(struct device *dev,
+ 	return cdns_pcie_host_init_address_translation(rc);
+ }
+ 
+-static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
+-{
+-	struct device *dev = pcie->dev;
+-	int retries;
+-
+-	/* Check if the link is up or not */
+-	for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
+-		if (cdns_pcie_link_up(pcie)) {
+-			dev_info(dev, "Link up\n");
+-			return 0;
+-		}
+-		usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
+-	}
+-
+-	return -ETIMEDOUT;
+-}
+-
+ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+ {
+ 	struct device *dev = rc->pcie.dev;
+@@ -458,7 +503,7 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+ 		return ret;
+ 	}
+ 
+-	ret = cdns_pcie_host_wait_for_link(pcie);
++	ret = cdns_pcie_host_start_link(rc);
+ 	if (ret)
+ 		dev_dbg(dev, "PCIe link never came up\n");
+ 
+diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
+index feed1e3038f45..6705a5fedfbb0 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence.h
++++ b/drivers/pci/controller/cadence/pcie-cadence.h
+@@ -119,7 +119,7 @@
+  * Root Port Registers (PCI configuration space for the root port function)
+  */
+ #define CDNS_PCIE_RP_BASE	0x00200000
+-
++#define CDNS_PCIE_RP_CAP_OFFSET 0xc0
+ 
+ /*
+  * Address Translation Registers
+@@ -290,6 +290,7 @@ struct cdns_pcie {
+  * @device_id: PCI device ID
+  * @avail_ib_bar: Satus of RP_BAR0, RP_BAR1 and	RP_NO_BAR if it's free or
+  *                available
++ * @quirk_retrain_flag: Retrain link as quirk for PCIe Gen2
+  */
+ struct cdns_pcie_rc {
+ 	struct cdns_pcie	pcie;
+@@ -298,6 +299,7 @@ struct cdns_pcie_rc {
+ 	u32			vendor_id;
+ 	u32			device_id;
+ 	bool			avail_ib_bar[CDNS_PCIE_RP_MAX_IB];
++	bool                    quirk_retrain_flag;
+ };
+ 
+ /**
+@@ -413,6 +415,13 @@ static inline void cdns_pcie_rp_writew(struct cdns_pcie *pcie,
+ 	cdns_pcie_write_sz(addr, 0x2, value);
+ }
+ 
++static inline u16 cdns_pcie_rp_readw(struct cdns_pcie *pcie, u32 reg)
++{
++	void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg;
++
++	return cdns_pcie_read_sz(addr, 0x2);
++}
++
+ /* Endpoint Function register access */
+ static inline void cdns_pcie_ep_fn_writeb(struct cdns_pcie *pcie, u8 fn,
+ 					  u32 reg, u8 value)
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index fb1dc11e7cc52..b570f297e3ec1 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3998,6 +3998,9 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9183,
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c46 */
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x91a0,
+ 			 quirk_dma_func1_alias);
++/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c135 */
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9215,
++			 quirk_dma_func1_alias);
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c127 */
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9220,
+ 			 quirk_dma_func1_alias);
+diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c
+index 5592a929b5935..80983f9dfcd55 100644
+--- a/drivers/platform/x86/acer-wmi.c
++++ b/drivers/platform/x86/acer-wmi.c
+@@ -30,6 +30,7 @@
+ #include <linux/input/sparse-keymap.h>
+ #include <acpi/video.h>
+ 
++ACPI_MODULE_NAME(KBUILD_MODNAME);
+ MODULE_AUTHOR("Carlos Corbacho");
+ MODULE_DESCRIPTION("Acer Laptop WMI Extras Driver");
+ MODULE_LICENSE("GPL");
+@@ -80,7 +81,7 @@ MODULE_ALIAS("wmi:676AA15E-6A47-4D9F-A2CC-1E6D18D14026");
+ 
+ enum acer_wmi_event_ids {
+ 	WMID_HOTKEY_EVENT = 0x1,
+-	WMID_ACCEL_EVENT = 0x5,
++	WMID_ACCEL_OR_KBD_DOCK_EVENT = 0x5,
+ };
+ 
+ static const struct key_entry acer_wmi_keymap[] __initconst = {
+@@ -128,7 +129,9 @@ struct event_return_value {
+ 	u8 function;
+ 	u8 key_num;
+ 	u16 device_state;
+-	u32 reserved;
++	u16 reserved1;
++	u8 kbd_dock_state;
++	u8 reserved2;
+ } __attribute__((packed));
+ 
+ /*
+@@ -206,14 +209,13 @@ struct hotkey_function_type_aa {
+ /*
+  * Interface capability flags
+  */
+-#define ACER_CAP_MAILLED		(1<<0)
+-#define ACER_CAP_WIRELESS		(1<<1)
+-#define ACER_CAP_BLUETOOTH		(1<<2)
+-#define ACER_CAP_BRIGHTNESS		(1<<3)
+-#define ACER_CAP_THREEG			(1<<4)
+-#define ACER_CAP_ACCEL			(1<<5)
+-#define ACER_CAP_RFBTN			(1<<6)
+-#define ACER_CAP_ANY			(0xFFFFFFFF)
++#define ACER_CAP_MAILLED		BIT(0)
++#define ACER_CAP_WIRELESS		BIT(1)
++#define ACER_CAP_BLUETOOTH		BIT(2)
++#define ACER_CAP_BRIGHTNESS		BIT(3)
++#define ACER_CAP_THREEG			BIT(4)
++#define ACER_CAP_SET_FUNCTION_MODE	BIT(5)
++#define ACER_CAP_KBD_DOCK		BIT(6)
+ 
+ /*
+  * Interface type flags
+@@ -236,6 +238,7 @@ static int mailled = -1;
+ static int brightness = -1;
+ static int threeg = -1;
+ static int force_series;
++static int force_caps = -1;
+ static bool ec_raw_mode;
+ static bool has_type_aa;
+ static u16 commun_func_bitmap;
+@@ -245,11 +248,13 @@ module_param(mailled, int, 0444);
+ module_param(brightness, int, 0444);
+ module_param(threeg, int, 0444);
+ module_param(force_series, int, 0444);
++module_param(force_caps, int, 0444);
+ module_param(ec_raw_mode, bool, 0444);
+ MODULE_PARM_DESC(mailled, "Set initial state of Mail LED");
+ MODULE_PARM_DESC(brightness, "Set initial LCD backlight brightness");
+ MODULE_PARM_DESC(threeg, "Set initial state of 3G hardware");
+ MODULE_PARM_DESC(force_series, "Force a different laptop series");
++MODULE_PARM_DESC(force_caps, "Force the capability bitmask to this value");
+ MODULE_PARM_DESC(ec_raw_mode, "Enable EC raw mode");
+ 
+ struct acer_data {
+@@ -319,6 +324,15 @@ static int __init dmi_matched(const struct dmi_system_id *dmi)
+ 	return 1;
+ }
+ 
++static int __init set_force_caps(const struct dmi_system_id *dmi)
++{
++	if (force_caps == -1) {
++		force_caps = (uintptr_t)dmi->driver_data;
++		pr_info("Found %s, set force_caps to 0x%x\n", dmi->ident, force_caps);
++	}
++	return 1;
++}
++
+ static struct quirk_entry quirk_unknown = {
+ };
+ 
+@@ -497,6 +511,33 @@ static const struct dmi_system_id acer_quirks[] __initconst = {
+ 		},
+ 		.driver_data = &quirk_acer_travelmate_2490,
+ 	},
++	{
++		.callback = set_force_caps,
++		.ident = "Acer Aspire Switch 10E SW3-016",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire SW3-016"),
++		},
++		.driver_data = (void *)ACER_CAP_KBD_DOCK,
++	},
++	{
++		.callback = set_force_caps,
++		.ident = "Acer Aspire Switch 10 SW5-012",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire SW5-012"),
++		},
++		.driver_data = (void *)ACER_CAP_KBD_DOCK,
++	},
++	{
++		.callback = set_force_caps,
++		.ident = "Acer One 10 (S1003)",
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "One S1003"),
++		},
++		.driver_data = (void *)ACER_CAP_KBD_DOCK,
++	},
+ 	{}
+ };
+ 
+@@ -1253,10 +1294,8 @@ static void __init type_aa_dmi_decode(const struct dmi_header *header, void *d)
+ 		interface->capability |= ACER_CAP_THREEG;
+ 	if (type_aa->commun_func_bitmap & ACER_WMID3_GDS_BLUETOOTH)
+ 		interface->capability |= ACER_CAP_BLUETOOTH;
+-	if (type_aa->commun_func_bitmap & ACER_WMID3_GDS_RFBTN) {
+-		interface->capability |= ACER_CAP_RFBTN;
++	if (type_aa->commun_func_bitmap & ACER_WMID3_GDS_RFBTN)
+ 		commun_func_bitmap &= ~ACER_WMID3_GDS_RFBTN;
+-	}
+ 
+ 	commun_fn_key_number = type_aa->commun_fn_key_number;
+ }
+@@ -1520,7 +1559,7 @@ static int acer_gsensor_event(void)
+ 	struct acpi_buffer output;
+ 	union acpi_object out_obj[5];
+ 
+-	if (!has_cap(ACER_CAP_ACCEL))
++	if (!acer_wmi_accel_dev)
+ 		return -1;
+ 
+ 	output.length = sizeof(out_obj);
+@@ -1543,6 +1582,71 @@ static int acer_gsensor_event(void)
+ 	return 0;
+ }
+ 
++/*
++ * Switch series keyboard dock status
++ */
++static int acer_kbd_dock_state_to_sw_tablet_mode(u8 kbd_dock_state)
++{
++	switch (kbd_dock_state) {
++	case 0x01: /* Docked, traditional clamshell laptop mode */
++		return 0;
++	case 0x04: /* Stand-alone tablet */
++	case 0x40: /* Docked, tent mode, keyboard not usable */
++		return 1;
++	default:
++		pr_warn("Unknown kbd_dock_state 0x%02x\n", kbd_dock_state);
++	}
++
++	return 0;
++}
++
++static void acer_kbd_dock_get_initial_state(void)
++{
++	u8 *output, input[8] = { 0x05, 0x00, };
++	struct acpi_buffer input_buf = { sizeof(input), input };
++	struct acpi_buffer output_buf = { ACPI_ALLOCATE_BUFFER, NULL };
++	union acpi_object *obj;
++	acpi_status status;
++	int sw_tablet_mode;
++
++	status = wmi_evaluate_method(WMID_GUID3, 0, 0x2, &input_buf, &output_buf);
++	if (ACPI_FAILURE(status)) {
++		ACPI_EXCEPTION((AE_INFO, status, "Error getting keyboard-dock initial status"));
++		return;
++	}
++
++	obj = output_buf.pointer;
++	if (!obj || obj->type != ACPI_TYPE_BUFFER || obj->buffer.length != 8) {
++		pr_err("Unexpected output format getting keyboard-dock initial status\n");
++		goto out_free_obj;
++	}
++
++	output = obj->buffer.pointer;
++	if (output[0] != 0x00 || (output[3] != 0x05 && output[3] != 0x45)) {
++		pr_err("Unexpected output [0]=0x%02x [3]=0x%02x getting keyboard-dock initial status\n",
++		       output[0], output[3]);
++		goto out_free_obj;
++	}
++
++	sw_tablet_mode = acer_kbd_dock_state_to_sw_tablet_mode(output[4]);
++	input_report_switch(acer_wmi_input_dev, SW_TABLET_MODE, sw_tablet_mode);
++
++out_free_obj:
++	kfree(obj);
++}
++
++static void acer_kbd_dock_event(const struct event_return_value *event)
++{
++	int sw_tablet_mode;
++
++	if (!has_cap(ACER_CAP_KBD_DOCK))
++		return;
++
++	sw_tablet_mode = acer_kbd_dock_state_to_sw_tablet_mode(event->kbd_dock_state);
++	input_report_switch(acer_wmi_input_dev, SW_TABLET_MODE, sw_tablet_mode);
++	input_sync(acer_wmi_input_dev);
++}
++
+ /*
+  * Rfkill devices
+  */
+@@ -1770,8 +1874,9 @@ static void acer_wmi_notify(u32 value, void *context)
+ 			sparse_keymap_report_event(acer_wmi_input_dev, scancode, 1, true);
+ 		}
+ 		break;
+-	case WMID_ACCEL_EVENT:
++	case WMID_ACCEL_OR_KBD_DOCK_EVENT:
+ 		acer_gsensor_event();
++		acer_kbd_dock_event(&return_value);
+ 		break;
+ 	default:
+ 		pr_warn("Unknown function number - %d - %d\n",
+@@ -1894,8 +1999,6 @@ static int __init acer_wmi_accel_setup(void)
+ 	gsensor_handle = acpi_device_handle(adev);
+ 	acpi_dev_put(adev);
+ 
+-	interface->capability |= ACER_CAP_ACCEL;
+-
+ 	acer_wmi_accel_dev = input_allocate_device();
+ 	if (!acer_wmi_accel_dev)
+ 		return -ENOMEM;
+@@ -1921,11 +2024,6 @@ err_free_dev:
+ 	return err;
+ }
+ 
+-static void acer_wmi_accel_destroy(void)
+-{
+-	input_unregister_device(acer_wmi_accel_dev);
+-}
+-
+ static int __init acer_wmi_input_setup(void)
+ {
+ 	acpi_status status;
+@@ -1943,6 +2041,9 @@ static int __init acer_wmi_input_setup(void)
+ 	if (err)
+ 		goto err_free_dev;
+ 
++	if (has_cap(ACER_CAP_KBD_DOCK))
++		input_set_capability(acer_wmi_input_dev, EV_SW, SW_TABLET_MODE);
++
+ 	status = wmi_install_notify_handler(ACERWMID_EVENT_GUID,
+ 						acer_wmi_notify, NULL);
+ 	if (ACPI_FAILURE(status)) {
+@@ -1950,6 +2051,9 @@ static int __init acer_wmi_input_setup(void)
+ 		goto err_free_dev;
+ 	}
+ 
++	if (has_cap(ACER_CAP_KBD_DOCK))
++		acer_kbd_dock_get_initial_state();
++
+ 	err = input_register_device(acer_wmi_input_dev);
+ 	if (err)
+ 		goto err_uninstall_notifier;
+@@ -2080,7 +2184,7 @@ static int acer_resume(struct device *dev)
+ 	if (has_cap(ACER_CAP_BRIGHTNESS))
+ 		set_u32(data->brightness, ACER_CAP_BRIGHTNESS);
+ 
+-	if (has_cap(ACER_CAP_ACCEL))
++	if (acer_wmi_accel_dev)
+ 		acer_gsensor_init();
+ 
+ 	return 0;
+@@ -2181,7 +2285,7 @@ static int __init acer_wmi_init(void)
+ 		}
+ 		/* WMID always provides brightness methods */
+ 		interface->capability |= ACER_CAP_BRIGHTNESS;
+-	} else if (!wmi_has_guid(WMID_GUID2) && interface && !has_type_aa) {
++	} else if (!wmi_has_guid(WMID_GUID2) && interface && !has_type_aa && force_caps == -1) {
+ 		pr_err("No WMID device detection method found\n");
+ 		return -ENODEV;
+ 	}
+@@ -2211,7 +2315,14 @@ static int __init acer_wmi_init(void)
+ 	if (acpi_video_get_backlight_type() != acpi_backlight_vendor)
+ 		interface->capability &= ~ACER_CAP_BRIGHTNESS;
+ 
+-	if (wmi_has_guid(WMID_GUID3)) {
++	if (wmi_has_guid(WMID_GUID3))
++		interface->capability |= ACER_CAP_SET_FUNCTION_MODE;
++
++	if (force_caps != -1)
++		interface->capability = force_caps;
++
++	if (wmi_has_guid(WMID_GUID3) &&
++	    (interface->capability & ACER_CAP_SET_FUNCTION_MODE)) {
+ 		if (ACPI_FAILURE(acer_wmi_enable_rf_button()))
+ 			pr_warn("Cannot enable RF Button Driver\n");
+ 
+@@ -2270,8 +2381,8 @@ error_device_alloc:
+ error_platform_register:
+ 	if (wmi_has_guid(ACERWMID_EVENT_GUID))
+ 		acer_wmi_input_destroy();
+-	if (has_cap(ACER_CAP_ACCEL))
+-		acer_wmi_accel_destroy();
++	if (acer_wmi_accel_dev)
++		input_unregister_device(acer_wmi_accel_dev);
+ 
+ 	return err;
+ }
+@@ -2281,8 +2392,8 @@ static void __exit acer_wmi_exit(void)
+ 	if (wmi_has_guid(ACERWMID_EVENT_GUID))
+ 		acer_wmi_input_destroy();
+ 
+-	if (has_cap(ACER_CAP_ACCEL))
+-		acer_wmi_accel_destroy();
++	if (acer_wmi_accel_dev)
++		input_unregister_device(acer_wmi_accel_dev);
+ 
+ 	remove_debugfs();
+ 	platform_device_unregister(acer_platform_device);
+diff --git a/drivers/scsi/ufs/ufs-exynos.c b/drivers/scsi/ufs/ufs-exynos.c
+index 5e6b95dbb578f..f54b494ca4486 100644
+--- a/drivers/scsi/ufs/ufs-exynos.c
++++ b/drivers/scsi/ufs/ufs-exynos.c
+@@ -653,6 +653,11 @@ static int exynos_ufs_pre_pwr_mode(struct ufs_hba *hba,
+ 		}
+ 	}
+ 
++	/* setting for three timeout values for traffic class #0 */
++	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA0), 8064);
++	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA1), 28224);
++	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA2), 20160);
++
+ 	return 0;
+ out:
+ 	return ret;
+@@ -1249,7 +1254,9 @@ struct exynos_ufs_drv_data exynos_ufs_drvs = {
+ 				  UFSHCI_QUIRK_BROKEN_HCE |
+ 				  UFSHCI_QUIRK_SKIP_RESET_INTR_AGGR |
+ 				  UFSHCD_QUIRK_BROKEN_OCS_FATAL_ERROR |
+-				  UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL,
++				  UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL |
++				  UFSHCD_QUIRK_SKIP_DEF_UNIPRO_TIMEOUT_SETTING |
++				  UFSHCD_QUIRK_ALIGN_SG_WITH_PAGE_SIZE,
+ 	.opts			= EXYNOS_UFS_OPT_HAS_APB_CLK_CTRL |
+ 				  EXYNOS_UFS_OPT_BROKEN_AUTO_CLK_CTRL |
+ 				  EXYNOS_UFS_OPT_BROKEN_RX_SEL_IDX |
+diff --git a/drivers/scsi/ufs/ufs-mediatek.c b/drivers/scsi/ufs/ufs-mediatek.c
+index 914a827a93ee8..934713472ebce 100644
+--- a/drivers/scsi/ufs/ufs-mediatek.c
++++ b/drivers/scsi/ufs/ufs-mediatek.c
+@@ -566,6 +566,7 @@ static int ufs_mtk_init(struct ufs_hba *hba)
+ 
+ 	/* Enable WriteBooster */
+ 	hba->caps |= UFSHCD_CAP_WB_EN;
++	hba->quirks |= UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL;
+ 	hba->vps->wb_flush_threshold = UFS_WB_BUF_REMAIN_PERCENT(80);
+ 
+ 	/*
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 8132893284670..5a7cc2e42ffdf 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -4153,25 +4153,27 @@ static int ufshcd_change_power_mode(struct ufs_hba *hba,
+ 		ufshcd_dme_set(hba, UIC_ARG_MIB(PA_HSSERIES),
+ 						pwr_mode->hs_rate);
+ 
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA0),
+-			DL_FC0ProtectionTimeOutVal_Default);
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA1),
+-			DL_TC0ReplayTimeOutVal_Default);
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA2),
+-			DL_AFC0ReqTimeOutVal_Default);
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA3),
+-			DL_FC1ProtectionTimeOutVal_Default);
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA4),
+-			DL_TC1ReplayTimeOutVal_Default);
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA5),
+-			DL_AFC1ReqTimeOutVal_Default);
+-
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(DME_LocalFC0ProtectionTimeOutVal),
+-			DL_FC0ProtectionTimeOutVal_Default);
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(DME_LocalTC0ReplayTimeOutVal),
+-			DL_TC0ReplayTimeOutVal_Default);
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(DME_LocalAFC0ReqTimeOutVal),
+-			DL_AFC0ReqTimeOutVal_Default);
++	if (!(hba->quirks & UFSHCD_QUIRK_SKIP_DEF_UNIPRO_TIMEOUT_SETTING)) {
++		ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA0),
++				DL_FC0ProtectionTimeOutVal_Default);
++		ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA1),
++				DL_TC0ReplayTimeOutVal_Default);
++		ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA2),
++				DL_AFC0ReqTimeOutVal_Default);
++		ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA3),
++				DL_FC1ProtectionTimeOutVal_Default);
++		ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA4),
++				DL_TC1ReplayTimeOutVal_Default);
++		ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA5),
++				DL_AFC1ReqTimeOutVal_Default);
++
++		ufshcd_dme_set(hba, UIC_ARG_MIB(DME_LocalFC0ProtectionTimeOutVal),
++				DL_FC0ProtectionTimeOutVal_Default);
++		ufshcd_dme_set(hba, UIC_ARG_MIB(DME_LocalTC0ReplayTimeOutVal),
++				DL_TC0ReplayTimeOutVal_Default);
++		ufshcd_dme_set(hba, UIC_ARG_MIB(DME_LocalAFC0ReqTimeOutVal),
++				DL_AFC0ReqTimeOutVal_Default);
++	}
+ 
+ 	ret = ufshcd_uic_change_pwr_mode(hba, pwr_mode->pwr_rx << 4
+ 			| pwr_mode->pwr_tx);
+@@ -4746,6 +4748,8 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
+ 	struct request_queue *q = sdev->request_queue;
+ 
+ 	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
++	if (hba->quirks & UFSHCD_QUIRK_ALIGN_SG_WITH_PAGE_SIZE)
++		blk_queue_update_dma_alignment(q, PAGE_SIZE - 1);
+ 
+ 	if (ufshcd_is_rpm_autosuspend_allowed(hba))
+ 		sdev->rpm_autosuspend = 1;
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index 6c62a281c8631..812aa348751eb 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -544,6 +544,16 @@ enum ufshcd_quirks {
+ 	 */
+ 	UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL		= 1 << 12,
+ 
++	/*
++	 * This quirk needs to disable unipro timeout values
++	 * before power mode change
++	 */
++	UFSHCD_QUIRK_SKIP_DEF_UNIPRO_TIMEOUT_SETTING = 1 << 13,
++
++	/*
++	 * This quirk allows only sg entries aligned with page size.
++	 */
++	UFSHCD_QUIRK_ALIGN_SG_WITH_PAGE_SIZE		= 1 << 14,
+ };
+ 
+ enum ufshcd_caps {
+diff --git a/drivers/usb/cdns3/core.c b/drivers/usb/cdns3/core.c
+index 039ab5d2435eb..6eeb7ed8e91f3 100644
+--- a/drivers/usb/cdns3/core.c
++++ b/drivers/usb/cdns3/core.c
+@@ -569,7 +569,8 @@ static int cdns3_probe(struct platform_device *pdev)
+ 	device_set_wakeup_capable(dev, true);
+ 	pm_runtime_set_active(dev);
+ 	pm_runtime_enable(dev);
+-	pm_runtime_forbid(dev);
++	if (!(cdns->pdata && (cdns->pdata->quirks & CDNS3_DEFAULT_PM_RUNTIME_ALLOW)))
++		pm_runtime_forbid(dev);
+ 
+ 	/*
+ 	 * The controller needs less time between bus and controller suspend,
+diff --git a/drivers/usb/cdns3/core.h b/drivers/usb/cdns3/core.h
+index 8a40d53d5edeb..3176f924293a1 100644
+--- a/drivers/usb/cdns3/core.h
++++ b/drivers/usb/cdns3/core.h
+@@ -42,6 +42,8 @@ struct cdns3_role_driver {
+ struct cdns3_platform_data {
+ 	int (*platform_suspend)(struct device *dev,
+ 			bool suspend, bool wakeup);
++	unsigned long quirks;
++#define CDNS3_DEFAULT_PM_RUNTIME_ALLOW	BIT(0)
+ };
+ 
+ /**
+@@ -73,6 +75,7 @@ struct cdns3_platform_data {
+  * @wakeup_pending: wakeup interrupt pending
+  * @pdata: platform data from glue layer
+  * @lock: spinlock structure
++ * @xhci_plat_data: xhci private data structure pointer
+  */
+ struct cdns3 {
+ 	struct device			*dev;
+@@ -106,6 +109,7 @@ struct cdns3 {
+ 	bool				wakeup_pending;
+ 	struct cdns3_platform_data	*pdata;
+ 	spinlock_t			lock;
++	struct xhci_plat_priv		*xhci_plat_data;
+ };
+ 
+ int cdns3_hw_role_switch(struct cdns3 *cdns);
+diff --git a/drivers/usb/cdns3/host-export.h b/drivers/usb/cdns3/host-export.h
+index ae11810f88261..26041718a086c 100644
+--- a/drivers/usb/cdns3/host-export.h
++++ b/drivers/usb/cdns3/host-export.h
+@@ -9,9 +9,11 @@
+ #ifndef __LINUX_CDNS3_HOST_EXPORT
+ #define __LINUX_CDNS3_HOST_EXPORT
+ 
++struct usb_hcd;
+ #ifdef CONFIG_USB_CDNS3_HOST
+ 
+ int cdns3_host_init(struct cdns3 *cdns);
++int xhci_cdns3_suspend_quirk(struct usb_hcd *hcd);
+ 
+ #else
+ 
+@@ -21,6 +23,10 @@ static inline int cdns3_host_init(struct cdns3 *cdns)
+ }
+ 
+ static inline void cdns3_host_exit(struct cdns3 *cdns) { }
++static inline int xhci_cdns3_suspend_quirk(struct usb_hcd *hcd)
++{
++	return 0;
++}
+ 
+ #endif /* CONFIG_USB_CDNS3_HOST */
+ 
+diff --git a/drivers/usb/cdns3/host.c b/drivers/usb/cdns3/host.c
+index b3e2cb69762cc..102977790d606 100644
+--- a/drivers/usb/cdns3/host.c
++++ b/drivers/usb/cdns3/host.c
+@@ -14,6 +14,19 @@
+ #include "drd.h"
+ #include "host-export.h"
+ #include <linux/usb/hcd.h>
++#include "../host/xhci.h"
++#include "../host/xhci-plat.h"
++
++#define XECP_PORT_CAP_REG	0x8000
++#define XECP_AUX_CTRL_REG1	0x8120
++
++#define CFG_RXDET_P3_EN		BIT(15)
++#define LPM_2_STB_SWITCH_EN	BIT(25)
++
++static const struct xhci_plat_priv xhci_plat_cdns3_xhci = {
++	.quirks = XHCI_SKIP_PHY_INIT,
++	.suspend_quirk = xhci_cdns3_suspend_quirk,
++};
+ 
+ static int __cdns3_host_init(struct cdns3 *cdns)
+ {
+@@ -39,10 +52,25 @@ static int __cdns3_host_init(struct cdns3 *cdns)
+ 		goto err1;
+ 	}
+ 
++	cdns->xhci_plat_data = kmemdup(&xhci_plat_cdns3_xhci,
++			sizeof(struct xhci_plat_priv), GFP_KERNEL);
++	if (!cdns->xhci_plat_data) {
++		ret = -ENOMEM;
++		goto err1;
++	}
++
++	if (cdns->pdata && (cdns->pdata->quirks & CDNS3_DEFAULT_PM_RUNTIME_ALLOW))
++		cdns->xhci_plat_data->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
++
++	ret = platform_device_add_data(xhci, cdns->xhci_plat_data,
++			sizeof(struct xhci_plat_priv));
++	if (ret)
++		goto free_memory;
++
+ 	ret = platform_device_add(xhci);
+ 	if (ret) {
+ 		dev_err(cdns->dev, "failed to register xHCI device\n");
+-		goto err1;
++		goto free_memory;
+ 	}
+ 
+ 	/* Glue needs to access xHCI region register for Power management */
+@@ -51,13 +79,43 @@ static int __cdns3_host_init(struct cdns3 *cdns)
+ 		cdns->xhci_regs = hcd->regs;
+ 
+ 	return 0;
++
++free_memory:
++	kfree(cdns->xhci_plat_data);
+ err1:
+ 	platform_device_put(xhci);
+ 	return ret;
+ }
+ 
++int xhci_cdns3_suspend_quirk(struct usb_hcd *hcd)
++{
++	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
++	u32 value;
++
++	if (pm_runtime_status_suspended(hcd->self.controller))
++		return 0;
++
++	/* set usbcmd.EU3S */
++	value = readl(&xhci->op_regs->command);
++	value |= CMD_PM_INDEX;
++	writel(value, &xhci->op_regs->command);
++
++	if (hcd->regs) {
++		value = readl(hcd->regs + XECP_AUX_CTRL_REG1);
++		value |= CFG_RXDET_P3_EN;
++		writel(value, hcd->regs + XECP_AUX_CTRL_REG1);
++
++		value = readl(hcd->regs + XECP_PORT_CAP_REG);
++		value |= LPM_2_STB_SWITCH_EN;
++		writel(value, hcd->regs + XECP_PORT_CAP_REG);
++	}
++
++	return 0;
++}
++
+ static void cdns3_host_exit(struct cdns3 *cdns)
+ {
++	kfree(cdns->xhci_plat_data);
+ 	platform_device_unregister(cdns->host_dev);
+ 	cdns->host_dev = NULL;
+ 	cdns3_drd_host_off(cdns);
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index 36e0de34ec68b..4e2cce5ca7f6a 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -627,7 +627,8 @@ static int btrfs_delayed_inode_reserve_metadata(
+ 	 */
+ 	if (!src_rsv || (!trans->bytes_reserved &&
+ 			 src_rsv->type != BTRFS_BLOCK_RSV_DELALLOC)) {
+-		ret = btrfs_qgroup_reserve_meta_prealloc(root, num_bytes, true);
++		ret = btrfs_qgroup_reserve_meta(root, num_bytes,
++					  BTRFS_QGROUP_RSV_META_PREALLOC, true);
+ 		if (ret < 0)
+ 			return ret;
+ 		ret = btrfs_block_rsv_add(root, dst_rsv, num_bytes,
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 9b3df72ceffbb..cbeb0cdaca7af 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -5798,7 +5798,7 @@ static int btrfs_dirty_inode(struct inode *inode)
+ 		return PTR_ERR(trans);
+ 
+ 	ret = btrfs_update_inode(trans, root, inode);
+-	if (ret && ret == -ENOSPC) {
++	if (ret && (ret == -ENOSPC || ret == -EDQUOT)) {
+ 		/* whoops, lets try again with the full transaction */
+ 		btrfs_end_transaction(trans);
+ 		trans = btrfs_start_transaction(root, 1);
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index d504a9a207515..cd9b1a16489b4 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -3875,8 +3875,8 @@ static int sub_root_meta_rsv(struct btrfs_root *root, int num_bytes,
+ 	return num_bytes;
+ }
+ 
+-static int qgroup_reserve_meta(struct btrfs_root *root, int num_bytes,
+-				enum btrfs_qgroup_rsv_type type, bool enforce)
++int btrfs_qgroup_reserve_meta(struct btrfs_root *root, int num_bytes,
++			      enum btrfs_qgroup_rsv_type type, bool enforce)
+ {
+ 	struct btrfs_fs_info *fs_info = root->fs_info;
+ 	int ret;
+@@ -3907,14 +3907,14 @@ int __btrfs_qgroup_reserve_meta(struct btrfs_root *root, int num_bytes,
+ {
+ 	int ret;
+ 
+-	ret = qgroup_reserve_meta(root, num_bytes, type, enforce);
++	ret = btrfs_qgroup_reserve_meta(root, num_bytes, type, enforce);
+ 	if (ret <= 0 && ret != -EDQUOT)
+ 		return ret;
+ 
+ 	ret = try_flush_qgroup(root);
+ 	if (ret < 0)
+ 		return ret;
+-	return qgroup_reserve_meta(root, num_bytes, type, enforce);
++	return btrfs_qgroup_reserve_meta(root, num_bytes, type, enforce);
+ }
+ 
+ void btrfs_qgroup_free_meta_all_pertrans(struct btrfs_root *root)
+diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
+index 50dea9a2d8fbd..7283e4f549af7 100644
+--- a/fs/btrfs/qgroup.h
++++ b/fs/btrfs/qgroup.h
+@@ -361,6 +361,8 @@ int btrfs_qgroup_release_data(struct btrfs_inode *inode, u64 start, u64 len);
+ int btrfs_qgroup_free_data(struct btrfs_inode *inode,
+ 			   struct extent_changeset *reserved, u64 start,
+ 			   u64 len);
++int btrfs_qgroup_reserve_meta(struct btrfs_root *root, int num_bytes,
++			      enum btrfs_qgroup_rsv_type type, bool enforce);
+ int __btrfs_qgroup_reserve_meta(struct btrfs_root *root, int num_bytes,
+ 				enum btrfs_qgroup_rsv_type type, bool enforce);
+ /* Reserve metadata space for pertrans and prealloc type */
+diff --git a/include/linux/eeprom_93xx46.h b/include/linux/eeprom_93xx46.h
+index eec7928ff8fe0..99580c22f91a4 100644
+--- a/include/linux/eeprom_93xx46.h
++++ b/include/linux/eeprom_93xx46.h
+@@ -16,6 +16,8 @@ struct eeprom_93xx46_platform_data {
+ #define EEPROM_93XX46_QUIRK_SINGLE_WORD_READ		(1 << 0)
+ /* Instructions such as EWEN are (addrlen + 2) in length. */
+ #define EEPROM_93XX46_QUIRK_INSTRUCTION_LENGTH		(1 << 1)
++/* Add extra cycle after address during a read */
++#define EEPROM_93XX46_QUIRK_EXTRA_READ_CYCLE		BIT(2)
+ 
+ 	/*
+ 	 * optional hooks to control additional logic
+diff --git a/include/linux/platform_data/ti-sysc.h b/include/linux/platform_data/ti-sysc.h
+index 240dce553a0bd..fafc1beea504a 100644
+--- a/include/linux/platform_data/ti-sysc.h
++++ b/include/linux/platform_data/ti-sysc.h
+@@ -50,6 +50,7 @@ struct sysc_regbits {
+ 	s8 emufree_shift;
+ };
+ 
++#define SYSC_QUIRK_GPMC_DEBUG		BIT(26)
+ #define SYSC_MODULE_QUIRK_ENA_RESETDONE	BIT(25)
+ #define SYSC_MODULE_QUIRK_PRUSS		BIT(24)
+ #define SYSC_MODULE_QUIRK_DSS_RESET	BIT(23)
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 3af4cb87032ce..d56db9f34373e 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -437,6 +437,18 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ARCHOS"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ARCHOS 140 CESIUM"),
++		},
++		.driver_data = (void *)(BYT_RT5640_IN1_MAP |
++					BYT_RT5640_JD_SRC_JD2_IN4N |
++					BYT_RT5640_OVCD_TH_2000UA |
++					BYT_RT5640_OVCD_SF_0P75 |
++					BYT_RT5640_SSP0_AIF1 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 0f1d845a0ccad..1d7677376e742 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -48,26 +48,14 @@ static int sof_sdw_quirk_cb(const struct dmi_system_id *id)
+ }
+ 
+ static const struct dmi_system_id sof_sdw_quirk_table[] = {
++	/* CometLake devices */
+ 	{
+ 		.callback = sof_sdw_quirk_cb,
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
+-			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A3E")
+-		},
+-		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
+-					SOF_RT711_JD_SRC_JD2 |
+-					SOF_RT715_DAI_ID_FIX),
+-	},
+-	{
+-		.callback = sof_sdw_quirk_cb,
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
+-			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A5E")
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "CometLake Client"),
+ 		},
+-		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
+-					SOF_RT711_JD_SRC_JD2 |
+-					SOF_RT715_DAI_ID_FIX |
+-					SOF_SDW_FOUR_SPK),
++		.driver_data = (void *)SOF_SDW_PCH_DMIC,
+ 	},
+ 	{
+ 		.callback = sof_sdw_quirk_cb,
+@@ -98,7 +86,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 					SOF_RT715_DAI_ID_FIX |
+ 					SOF_SDW_FOUR_SPK),
+ 	},
+-		{
++	{
+ 		.callback = sof_sdw_quirk_cb,
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
+@@ -108,6 +96,16 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 					SOF_RT715_DAI_ID_FIX |
+ 					SOF_SDW_FOUR_SPK),
+ 	},
++	/* IceLake devices */
++	{
++		.callback = sof_sdw_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Ice Lake Client"),
++		},
++		.driver_data = (void *)SOF_SDW_PCH_DMIC,
++	},
++	/* TigerLake devices */
+ 	{
+ 		.callback = sof_sdw_quirk_cb,
+ 		.matches = {
+@@ -123,18 +121,23 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 	{
+ 		.callback = sof_sdw_quirk_cb,
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Ice Lake Client"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A3E")
+ 		},
+-		.driver_data = (void *)SOF_SDW_PCH_DMIC,
++		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
++					SOF_RT711_JD_SRC_JD2 |
++					SOF_RT715_DAI_ID_FIX),
+ 	},
+ 	{
+ 		.callback = sof_sdw_quirk_cb,
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "CometLake Client"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A5E")
+ 		},
+-		.driver_data = (void *)SOF_SDW_PCH_DMIC,
++		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
++					SOF_RT711_JD_SRC_JD2 |
++					SOF_RT715_DAI_ID_FIX |
++					SOF_SDW_FOUR_SPK),
+ 	},
+ 	{
+ 		.callback = sof_sdw_quirk_cb,
+@@ -156,7 +159,34 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 					SOF_SDW_PCH_DMIC |
+ 					SOF_SDW_FOUR_SPK),
+ 	},
+-
++	{
++		/*
++		 * this entry covers multiple HP SKUs. The family name
++		 * does not seem robust enough, so we use a partial
++		 * match that ignores the product name suffix
++		 * (e.g. 15-eb1xxx, 14t-ea000 or 13-aw2xxx)
++		 */
++		.callback = sof_sdw_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x360 Convertible"),
++		},
++		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
++					SOF_SDW_PCH_DMIC |
++					SOF_RT711_JD_SRC_JD2),
++	},
++	/* TigerLake-SDCA devices */
++	{
++		.callback = sof_sdw_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A32")
++		},
++		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
++					SOF_RT711_JD_SRC_JD2 |
++					SOF_RT715_DAI_ID_FIX |
++					SOF_SDW_FOUR_SPK),
++	},
+ 	{}
+ };
+ 
+diff --git a/sound/soc/sof/intel/Kconfig b/sound/soc/sof/intel/Kconfig
+index de7ff2d097ab9..6708a2c5a8381 100644
+--- a/sound/soc/sof/intel/Kconfig
++++ b/sound/soc/sof/intel/Kconfig
+@@ -84,7 +84,7 @@ config SND_SOC_SOF_BAYTRAIL
+ 
+ config SND_SOC_SOF_BROADWELL_SUPPORT
+ 	bool "SOF support for Broadwell"
+-	depends on SND_SOC_INTEL_HASWELL=n
++	depends on SND_SOC_INTEL_CATPT=n
+ 	help
+ 	  This adds support for Sound Open Firmware for Intel(R) platforms
+ 	  using the Broadwell processors.
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index df036a359f2fc..448de77f43fd8 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -2603,141 +2603,251 @@ static int snd_bbfpro_controls_create(struct usb_mixer_interface *mixer)
+ }
+ 
+ /*
+- * Pioneer DJ DJM-250MK2 and maybe other DJM models
++ * Pioneer DJ DJM Mixers
+  *
+- * For playback, no duplicate mapping should be set.
+- * There are three mixer stereo channels (CH1, CH2, AUX)
+- * and three stereo sources (Playback 1-2, Playback 3-4, Playback 5-6).
+- * Each channel should be mapped just once to one source.
+- * If mapped multiple times, only one source will play on given channel
+- * (sources are not mixed together).
++ * These devices generally have options for soft-switching the playback and
++ * capture sources in addition to the recording level. Although different
++ * devices have different configurations, there seems to be canonical values
++ * for specific capture/playback types:  See the definitions of these below.
+  *
+- * For recording, duplicate mapping is OK. We will get the same signal multiple times.
+- *
+- * Channels 7-8 are in both directions fixed to FX SEND / FX RETURN.
+- *
+- * See also notes in the quirks-table.h file.
++ * The wValue is masked with the stereo channel number. e.g. Setting Ch2 to
++ * capture phono would be 0x0203. Capture, playback and capture level have
++ * different wIndexes.
+  */
+ 
+-struct snd_pioneer_djm_option {
+-	const u16 wIndex;
+-	const u16 wValue;
++// Capture types
++#define SND_DJM_CAP_LINE	0x00
++#define SND_DJM_CAP_CDLINE	0x01
++#define SND_DJM_CAP_DIGITAL	0x02
++#define SND_DJM_CAP_PHONO	0x03
++#define SND_DJM_CAP_PFADER	0x06
++#define SND_DJM_CAP_XFADERA	0x07
++#define SND_DJM_CAP_XFADERB	0x08
++#define SND_DJM_CAP_MIC		0x09
++#define SND_DJM_CAP_AUX		0x0d
++#define SND_DJM_CAP_RECOUT	0x0a
++#define SND_DJM_CAP_NONE	0x0f
++#define SND_DJM_CAP_CH1PFADER	0x11
++#define SND_DJM_CAP_CH2PFADER	0x12
++#define SND_DJM_CAP_CH3PFADER	0x13
++#define SND_DJM_CAP_CH4PFADER	0x14
++
++// Playback types
++#define SND_DJM_PB_CH1		0x00
++#define SND_DJM_PB_CH2		0x01
++#define SND_DJM_PB_AUX		0x04
++
++#define SND_DJM_WINDEX_CAP	0x8002
++#define SND_DJM_WINDEX_CAPLVL	0x8003
++#define SND_DJM_WINDEX_PB	0x8016
++
++// kcontrol->private_value layout
++#define SND_DJM_VALUE_MASK	0x0000ffff
++#define SND_DJM_GROUP_MASK	0x00ff0000
++#define SND_DJM_DEVICE_MASK	0xff000000
++#define SND_DJM_GROUP_SHIFT	16
++#define SND_DJM_DEVICE_SHIFT	24
++
++// device table index
++#define SND_DJM_250MK2_IDX	0x0
++#define SND_DJM_750_IDX		0x1
++#define SND_DJM_900NXS2_IDX	0x2
++
++
++#define SND_DJM_CTL(_name, suffix, _default_value, _windex) { \
++	.name = _name, \
++	.options = snd_djm_opts_##suffix, \
++	.noptions = ARRAY_SIZE(snd_djm_opts_##suffix), \
++	.default_value = _default_value, \
++	.wIndex = _windex }
++
++#define SND_DJM_DEVICE(suffix) { \
++	.controls = snd_djm_ctls_##suffix, \
++	.ncontrols = ARRAY_SIZE(snd_djm_ctls_##suffix) }
++
++
++struct snd_djm_device {
+ 	const char *name;
++	const struct snd_djm_ctl *controls;
++	size_t ncontrols;
+ };
+ 
+-static const struct snd_pioneer_djm_option snd_pioneer_djm_options_capture_level[] = {
+-	{ .name =  "-5 dB",                  .wValue = 0x0300, .wIndex = 0x8003 },
+-	{ .name = "-10 dB",                  .wValue = 0x0200, .wIndex = 0x8003 },
+-	{ .name = "-15 dB",                  .wValue = 0x0100, .wIndex = 0x8003 },
+-	{ .name = "-19 dB",                  .wValue = 0x0000, .wIndex = 0x8003 }
++struct snd_djm_ctl {
++	const char *name;
++	const u16 *options;
++	size_t noptions;
++	u16 default_value;
++	u16 wIndex;
+ };
+ 
+-static const struct snd_pioneer_djm_option snd_pioneer_djm_options_capture_ch12[] = {
+-	{ .name =  "CH1 Control Tone PHONO", .wValue = 0x0103, .wIndex = 0x8002 },
+-	{ .name =  "CH1 Control Tone LINE",  .wValue = 0x0100, .wIndex = 0x8002 },
+-	{ .name =  "Post CH1 Fader",         .wValue = 0x0106, .wIndex = 0x8002 },
+-	{ .name =  "Cross Fader A",          .wValue = 0x0107, .wIndex = 0x8002 },
+-	{ .name =  "Cross Fader B",          .wValue = 0x0108, .wIndex = 0x8002 },
+-	{ .name =  "MIC",                    .wValue = 0x0109, .wIndex = 0x8002 },
+-	{ .name =  "AUX",                    .wValue = 0x010d, .wIndex = 0x8002 },
+-	{ .name =  "REC OUT",                .wValue = 0x010a, .wIndex = 0x8002 }
++static const char *snd_djm_get_label_caplevel(u16 wvalue)
++{
++	switch (wvalue) {
++	case 0x0000:	return "-19dB";
++	case 0x0100:	return "-15dB";
++	case 0x0200:	return "-10dB";
++	case 0x0300:	return "-5dB";
++	default:	return NULL;
++	}
+ };
+ 
+-static const struct snd_pioneer_djm_option snd_pioneer_djm_options_capture_ch34[] = {
+-	{ .name =  "CH2 Control Tone PHONO", .wValue = 0x0203, .wIndex = 0x8002 },
+-	{ .name =  "CH2 Control Tone LINE",  .wValue = 0x0200, .wIndex = 0x8002 },
+-	{ .name =  "Post CH2 Fader",         .wValue = 0x0206, .wIndex = 0x8002 },
+-	{ .name =  "Cross Fader A",          .wValue = 0x0207, .wIndex = 0x8002 },
+-	{ .name =  "Cross Fader B",          .wValue = 0x0208, .wIndex = 0x8002 },
+-	{ .name =  "MIC",                    .wValue = 0x0209, .wIndex = 0x8002 },
+-	{ .name =  "AUX",                    .wValue = 0x020d, .wIndex = 0x8002 },
+-	{ .name =  "REC OUT",                .wValue = 0x020a, .wIndex = 0x8002 }
++static const char *snd_djm_get_label_cap(u16 wvalue)
++{
++	switch (wvalue & 0x00ff) {
++	case SND_DJM_CAP_LINE:		return "Control Tone LINE";
++	case SND_DJM_CAP_CDLINE:	return "Control Tone CD/LINE";
++	case SND_DJM_CAP_DIGITAL:	return "Control Tone DIGITAL";
++	case SND_DJM_CAP_PHONO:		return "Control Tone PHONO";
++	case SND_DJM_CAP_PFADER:	return "Post Fader";
++	case SND_DJM_CAP_XFADERA:	return "Cross Fader A";
++	case SND_DJM_CAP_XFADERB:	return "Cross Fader B";
++	case SND_DJM_CAP_MIC:		return "Mic";
++	case SND_DJM_CAP_RECOUT:	return "Rec Out";
++	case SND_DJM_CAP_AUX:		return "Aux";
++	case SND_DJM_CAP_NONE:		return "None";
++	case SND_DJM_CAP_CH1PFADER:	return "Post Fader Ch1";
++	case SND_DJM_CAP_CH2PFADER:	return "Post Fader Ch2";
++	case SND_DJM_CAP_CH3PFADER:	return "Post Fader Ch3";
++	case SND_DJM_CAP_CH4PFADER:	return "Post Fader Ch4";
++	default:			return NULL;
++	}
+ };
+ 
+-static const struct snd_pioneer_djm_option snd_pioneer_djm_options_capture_ch56[] = {
+-	{ .name =  "REC OUT",                .wValue = 0x030a, .wIndex = 0x8002 },
+-	{ .name =  "Post CH1 Fader",         .wValue = 0x0311, .wIndex = 0x8002 },
+-	{ .name =  "Post CH2 Fader",         .wValue = 0x0312, .wIndex = 0x8002 },
+-	{ .name =  "Cross Fader A",          .wValue = 0x0307, .wIndex = 0x8002 },
+-	{ .name =  "Cross Fader B",          .wValue = 0x0308, .wIndex = 0x8002 },
+-	{ .name =  "MIC",                    .wValue = 0x0309, .wIndex = 0x8002 },
+-	{ .name =  "AUX",                    .wValue = 0x030d, .wIndex = 0x8002 }
++static const char *snd_djm_get_label_pb(u16 wvalue)
++{
++	switch (wvalue & 0x00ff) {
++	case SND_DJM_PB_CH1:	return "Ch1";
++	case SND_DJM_PB_CH2:	return "Ch2";
++	case SND_DJM_PB_AUX:	return "Aux";
++	default:		return NULL;
++	}
+ };
+ 
+-static const struct snd_pioneer_djm_option snd_pioneer_djm_options_playback_12[] = {
+-	{ .name =  "CH1",                    .wValue = 0x0100, .wIndex = 0x8016 },
+-	{ .name =  "CH2",                    .wValue = 0x0101, .wIndex = 0x8016 },
+-	{ .name =  "AUX",                    .wValue = 0x0104, .wIndex = 0x8016 }
++static const char *snd_djm_get_label(u16 wvalue, u16 windex)
++{
++	switch (windex) {
++	case SND_DJM_WINDEX_CAPLVL:	return snd_djm_get_label_caplevel(wvalue);
++	case SND_DJM_WINDEX_CAP:	return snd_djm_get_label_cap(wvalue);
++	case SND_DJM_WINDEX_PB:		return snd_djm_get_label_pb(wvalue);
++	default:			return NULL;
++	}
+ };
+ 
+-static const struct snd_pioneer_djm_option snd_pioneer_djm_options_playback_34[] = {
+-	{ .name =  "CH1",                    .wValue = 0x0200, .wIndex = 0x8016 },
+-	{ .name =  "CH2",                    .wValue = 0x0201, .wIndex = 0x8016 },
+-	{ .name =  "AUX",                    .wValue = 0x0204, .wIndex = 0x8016 }
++
++// DJM-250MK2
++static const u16 snd_djm_opts_cap_level[] = {
++	0x0000, 0x0100, 0x0200, 0x0300 };
++
++static const u16 snd_djm_opts_250mk2_cap1[] = {
++	0x0103, 0x0100, 0x0106, 0x0107, 0x0108, 0x0109, 0x010d, 0x010a };
++
++static const u16 snd_djm_opts_250mk2_cap2[] = {
++	0x0203, 0x0200, 0x0206, 0x0207, 0x0208, 0x0209, 0x020d, 0x020a };
++
++static const u16 snd_djm_opts_250mk2_cap3[] = {
++	0x030a, 0x0311, 0x0312, 0x0307, 0x0308, 0x0309, 0x030d };
++
++static const u16 snd_djm_opts_250mk2_pb1[] = { 0x0100, 0x0101, 0x0104 };
++static const u16 snd_djm_opts_250mk2_pb2[] = { 0x0200, 0x0201, 0x0204 };
++static const u16 snd_djm_opts_250mk2_pb3[] = { 0x0300, 0x0301, 0x0304 };
++
++static const struct snd_djm_ctl snd_djm_ctls_250mk2[] = {
++	SND_DJM_CTL("Capture Level", cap_level, 0, SND_DJM_WINDEX_CAPLVL),
++	SND_DJM_CTL("Ch1 Input",   250mk2_cap1, 2, SND_DJM_WINDEX_CAP),
++	SND_DJM_CTL("Ch2 Input",   250mk2_cap2, 2, SND_DJM_WINDEX_CAP),
++	SND_DJM_CTL("Ch3 Input",   250mk2_cap3, 0, SND_DJM_WINDEX_CAP),
++	SND_DJM_CTL("Ch1 Output",   250mk2_pb1, 0, SND_DJM_WINDEX_PB),
++	SND_DJM_CTL("Ch2 Output",   250mk2_pb2, 1, SND_DJM_WINDEX_PB),
++	SND_DJM_CTL("Ch3 Output",   250mk2_pb3, 2, SND_DJM_WINDEX_PB)
+ };
+ 
+-static const struct snd_pioneer_djm_option snd_pioneer_djm_options_playback_56[] = {
+-	{ .name =  "CH1",                    .wValue = 0x0300, .wIndex = 0x8016 },
+-	{ .name =  "CH2",                    .wValue = 0x0301, .wIndex = 0x8016 },
+-	{ .name =  "AUX",                    .wValue = 0x0304, .wIndex = 0x8016 }
++
++// DJM-750
++static const u16 snd_djm_opts_750_cap1[] = {
++	0x0101, 0x0103, 0x0106, 0x0107, 0x0108, 0x0109, 0x010a, 0x010f };
++static const u16 snd_djm_opts_750_cap2[] = {
++	0x0200, 0x0201, 0x0206, 0x0207, 0x0208, 0x0209, 0x020a, 0x020f };
++static const u16 snd_djm_opts_750_cap3[] = {
++	0x0300, 0x0301, 0x0306, 0x0307, 0x0308, 0x0309, 0x030a, 0x030f };
++static const u16 snd_djm_opts_750_cap4[] = {
++	0x0401, 0x0403, 0x0406, 0x0407, 0x0408, 0x0409, 0x040a, 0x040f };
++
++static const struct snd_djm_ctl snd_djm_ctls_750[] = {
++	SND_DJM_CTL("Capture Level", cap_level, 0, SND_DJM_WINDEX_CAPLVL),
++	SND_DJM_CTL("Ch1 Input",   750_cap1, 2, SND_DJM_WINDEX_CAP),
++	SND_DJM_CTL("Ch2 Input",   750_cap2, 2, SND_DJM_WINDEX_CAP),
++	SND_DJM_CTL("Ch3 Input",   750_cap3, 0, SND_DJM_WINDEX_CAP),
++	SND_DJM_CTL("Ch4 Input",   750_cap4, 0, SND_DJM_WINDEX_CAP)
+ };
+ 
+-struct snd_pioneer_djm_option_group {
+-	const char *name;
+-	const struct snd_pioneer_djm_option *options;
+-	const size_t count;
+-	const u16 default_value;
++
++// DJM-900NXS2
++static const u16 snd_djm_opts_900nxs2_cap1[] = {
++	0x0100, 0x0102, 0x0103, 0x0106, 0x0107, 0x0108, 0x0109, 0x010a };
++static const u16 snd_djm_opts_900nxs2_cap2[] = {
++	0x0200, 0x0202, 0x0203, 0x0206, 0x0207, 0x0208, 0x0209, 0x020a };
++static const u16 snd_djm_opts_900nxs2_cap3[] = {
++	0x0300, 0x0302, 0x0303, 0x0306, 0x0307, 0x0308, 0x0309, 0x030a };
++static const u16 snd_djm_opts_900nxs2_cap4[] = {
++	0x0400, 0x0402, 0x0403, 0x0406, 0x0407, 0x0408, 0x0409, 0x040a };
++static const u16 snd_djm_opts_900nxs2_cap5[] = {
++	0x0507, 0x0508, 0x0509, 0x050a, 0x0511, 0x0512, 0x0513, 0x0514 };
++
++static const struct snd_djm_ctl snd_djm_ctls_900nxs2[] = {
++	SND_DJM_CTL("Capture Level", cap_level, 0, SND_DJM_WINDEX_CAPLVL),
++	SND_DJM_CTL("Ch1 Input",   900nxs2_cap1, 2, SND_DJM_WINDEX_CAP),
++	SND_DJM_CTL("Ch2 Input",   900nxs2_cap2, 2, SND_DJM_WINDEX_CAP),
++	SND_DJM_CTL("Ch3 Input",   900nxs2_cap3, 2, SND_DJM_WINDEX_CAP),
++	SND_DJM_CTL("Ch4 Input",   900nxs2_cap4, 2, SND_DJM_WINDEX_CAP),
++	SND_DJM_CTL("Ch5 Input",   900nxs2_cap5, 3, SND_DJM_WINDEX_CAP)
+ };
+ 
+-#define snd_pioneer_djm_option_group_item(_name, suffix, _default_value) { \
+-	.name = _name, \
+-	.options = snd_pioneer_djm_options_##suffix, \
+-	.count = ARRAY_SIZE(snd_pioneer_djm_options_##suffix), \
+-	.default_value = _default_value }
+-
+-static const struct snd_pioneer_djm_option_group snd_pioneer_djm_option_groups[] = {
+-	snd_pioneer_djm_option_group_item("Master Capture Level Capture Switch", capture_level, 0),
+-	snd_pioneer_djm_option_group_item("Capture 1-2 Capture Switch",          capture_ch12,  2),
+-	snd_pioneer_djm_option_group_item("Capture 3-4 Capture Switch",          capture_ch34,  2),
+-	snd_pioneer_djm_option_group_item("Capture 5-6 Capture Switch",          capture_ch56,  0),
+-	snd_pioneer_djm_option_group_item("Playback 1-2 Playback Switch",        playback_12,   0),
+-	snd_pioneer_djm_option_group_item("Playback 3-4 Playback Switch",        playback_34,   1),
+-	snd_pioneer_djm_option_group_item("Playback 5-6 Playback Switch",        playback_56,   2)
++
++static const struct snd_djm_device snd_djm_devices[] = {
++	SND_DJM_DEVICE(250mk2),
++	SND_DJM_DEVICE(750),
++	SND_DJM_DEVICE(900nxs2)
+ };
+ 
+-// layout of the kcontrol->private_value:
+-#define SND_PIONEER_DJM_VALUE_MASK 0x0000ffff
+-#define SND_PIONEER_DJM_GROUP_MASK 0xffff0000
+-#define SND_PIONEER_DJM_GROUP_SHIFT 16
+ 
+-static int snd_pioneer_djm_controls_info(struct snd_kcontrol *kctl, struct snd_ctl_elem_info *info)
++static int snd_djm_controls_info(struct snd_kcontrol *kctl,
++				struct snd_ctl_elem_info *info)
+ {
+-	u16 group_index = kctl->private_value >> SND_PIONEER_DJM_GROUP_SHIFT;
+-	size_t count;
++	unsigned long private_value = kctl->private_value;
++	u8 device_idx = (private_value & SND_DJM_DEVICE_MASK) >> SND_DJM_DEVICE_SHIFT;
++	u8 ctl_idx = (private_value & SND_DJM_GROUP_MASK) >> SND_DJM_GROUP_SHIFT;
++	const struct snd_djm_device *device = &snd_djm_devices[device_idx];
+ 	const char *name;
+-	const struct snd_pioneer_djm_option_group *group;
++	const struct snd_djm_ctl *ctl;
++	size_t noptions;
+ 
+-	if (group_index >= ARRAY_SIZE(snd_pioneer_djm_option_groups))
++	if (ctl_idx >= device->ncontrols)
++		return -EINVAL;
++
++	ctl = &device->controls[ctl_idx];
++	noptions = ctl->noptions;
++	if (info->value.enumerated.item >= noptions)
++		info->value.enumerated.item = noptions - 1;
++
++	name = snd_djm_get_label(ctl->options[info->value.enumerated.item],
++				ctl->wIndex);
++	if (!name)
+ 		return -EINVAL;
+ 
+-	group = &snd_pioneer_djm_option_groups[group_index];
+-	count = group->count;
+-	if (info->value.enumerated.item >= count)
+-		info->value.enumerated.item = count - 1;
+-	name = group->options[info->value.enumerated.item].name;
+ 	strlcpy(info->value.enumerated.name, name, sizeof(info->value.enumerated.name));
+ 	info->type = SNDRV_CTL_ELEM_TYPE_ENUMERATED;
+ 	info->count = 1;
+-	info->value.enumerated.items = count;
++	info->value.enumerated.items = noptions;
+ 	return 0;
+ }
+ 
+-static int snd_pioneer_djm_controls_update(struct usb_mixer_interface *mixer, u16 group, u16 value)
++static int snd_djm_controls_update(struct usb_mixer_interface *mixer,
++				u8 device_idx, u8 group, u16 value)
+ {
+ 	int err;
++	const struct snd_djm_device *device = &snd_djm_devices[device_idx];
+ 
+-	if (group >= ARRAY_SIZE(snd_pioneer_djm_option_groups)
+-			|| value >= snd_pioneer_djm_option_groups[group].count)
++	if ((group >= device->ncontrols) || value >= device->controls[group].noptions)
+ 		return -EINVAL;
+ 
+ 	err = snd_usb_lock_shutdown(mixer->chip);
+@@ -2748,63 +2858,76 @@ static int snd_pioneer_djm_controls_update(struct usb_mixer_interface *mixer, u1
+ 		mixer->chip->dev, usb_sndctrlpipe(mixer->chip->dev, 0),
+ 		USB_REQ_SET_FEATURE,
+ 		USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-		snd_pioneer_djm_option_groups[group].options[value].wValue,
+-		snd_pioneer_djm_option_groups[group].options[value].wIndex,
++		device->controls[group].options[value],
++		device->controls[group].wIndex,
+ 		NULL, 0);
+ 
+ 	snd_usb_unlock_shutdown(mixer->chip);
+ 	return err;
+ }
+ 
+-static int snd_pioneer_djm_controls_get(struct snd_kcontrol *kctl, struct snd_ctl_elem_value *elem)
++static int snd_djm_controls_get(struct snd_kcontrol *kctl,
++				struct snd_ctl_elem_value *elem)
+ {
+-	elem->value.enumerated.item[0] = kctl->private_value & SND_PIONEER_DJM_VALUE_MASK;
++	elem->value.enumerated.item[0] = kctl->private_value & SND_DJM_VALUE_MASK;
+ 	return 0;
+ }
+ 
+-static int snd_pioneer_djm_controls_put(struct snd_kcontrol *kctl, struct snd_ctl_elem_value *elem)
++static int snd_djm_controls_put(struct snd_kcontrol *kctl, struct snd_ctl_elem_value *elem)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kctl);
+ 	struct usb_mixer_interface *mixer = list->mixer;
+ 	unsigned long private_value = kctl->private_value;
+-	u16 group = (private_value & SND_PIONEER_DJM_GROUP_MASK) >> SND_PIONEER_DJM_GROUP_SHIFT;
++
++	u8 device = (private_value & SND_DJM_DEVICE_MASK) >> SND_DJM_DEVICE_SHIFT;
++	u8 group = (private_value & SND_DJM_GROUP_MASK) >> SND_DJM_GROUP_SHIFT;
+ 	u16 value = elem->value.enumerated.item[0];
+ 
+-	kctl->private_value = (group << SND_PIONEER_DJM_GROUP_SHIFT) | value;
++	kctl->private_value = ((device << SND_DJM_DEVICE_SHIFT) |
++			      (group << SND_DJM_GROUP_SHIFT) |
++			      value);
+ 
+-	return snd_pioneer_djm_controls_update(mixer, group, value);
++	return snd_djm_controls_update(mixer, device, group, value);
+ }
+ 
+-static int snd_pioneer_djm_controls_resume(struct usb_mixer_elem_list *list)
++static int snd_djm_controls_resume(struct usb_mixer_elem_list *list)
+ {
+ 	unsigned long private_value = list->kctl->private_value;
+-	u16 group = (private_value & SND_PIONEER_DJM_GROUP_MASK) >> SND_PIONEER_DJM_GROUP_SHIFT;
+-	u16 value = (private_value & SND_PIONEER_DJM_VALUE_MASK);
++	u8 device = (private_value & SND_DJM_DEVICE_MASK) >> SND_DJM_DEVICE_SHIFT;
++	u8 group = (private_value & SND_DJM_GROUP_MASK) >> SND_DJM_GROUP_SHIFT;
++	u16 value = (private_value & SND_DJM_VALUE_MASK);
+ 
+-	return snd_pioneer_djm_controls_update(list->mixer, group, value);
++	return snd_djm_controls_update(list->mixer, device, group, value);
+ }
+ 
+-static int snd_pioneer_djm_controls_create(struct usb_mixer_interface *mixer)
++static int snd_djm_controls_create(struct usb_mixer_interface *mixer,
++		const u8 device_idx)
+ {
+ 	int err, i;
+-	const struct snd_pioneer_djm_option_group *group;
++	u16 value;
++
++	const struct snd_djm_device *device = &snd_djm_devices[device_idx];
++
+ 	struct snd_kcontrol_new knew = {
+ 		.iface  = SNDRV_CTL_ELEM_IFACE_MIXER,
+ 		.access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
+ 		.index = 0,
+-		.info = snd_pioneer_djm_controls_info,
+-		.get  = snd_pioneer_djm_controls_get,
+-		.put  = snd_pioneer_djm_controls_put
++		.info = snd_djm_controls_info,
++		.get  = snd_djm_controls_get,
++		.put  = snd_djm_controls_put
+ 	};
+ 
+-	for (i = 0; i < ARRAY_SIZE(snd_pioneer_djm_option_groups); i++) {
+-		group = &snd_pioneer_djm_option_groups[i];
+-		knew.name = group->name;
+-		knew.private_value = (i << SND_PIONEER_DJM_GROUP_SHIFT) | group->default_value;
+-		err = snd_pioneer_djm_controls_update(mixer, i, group->default_value);
++	for (i = 0; i < device->ncontrols; i++) {
++		value = device->controls[i].default_value;
++		knew.name = device->controls[i].name;
++		knew.private_value = (
++			(device_idx << SND_DJM_DEVICE_SHIFT) |
++			(i << SND_DJM_GROUP_SHIFT) |
++			value);
++		err = snd_djm_controls_update(mixer, device_idx, i, value);
+ 		if (err)
+ 			return err;
+-		err = add_single_ctl_with_resume(mixer, 0, snd_pioneer_djm_controls_resume,
++		err = add_single_ctl_with_resume(mixer, 0, snd_djm_controls_resume,
+ 						 &knew, NULL);
+ 		if (err)
+ 			return err;
+@@ -2917,7 +3040,13 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ 		err = snd_bbfpro_controls_create(mixer);
+ 		break;
+ 	case USB_ID(0x2b73, 0x0017): /* Pioneer DJ DJM-250MK2 */
+-		err = snd_pioneer_djm_controls_create(mixer);
++		err = snd_djm_controls_create(mixer, SND_DJM_250MK2_IDX);
++		break;
++	case USB_ID(0x08e4, 0x017f): /* Pioneer DJ DJM-750 */
++		err = snd_djm_controls_create(mixer, SND_DJM_750_IDX);
++		break;
++	case USB_ID(0x2b73, 0x000a): /* Pioneer DJ DJM-900NXS2 */
++		err = snd_djm_controls_create(mixer, SND_DJM_900NXS2_IDX);
+ 		break;
+ 	}
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-03-17 17:00 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-03-17 17:00 UTC (permalink / raw
  To: gentoo-commits

commit:     a2f5d6f6c1bc3626b3390e310df05e1155f83646
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 17 17:00:41 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 17 17:00:41 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a2f5d6f6

Linux patch 5.10.24

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1023_linux-5.10.24.patch | 11590 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11594 insertions(+)

diff --git a/0000_README b/0000_README
index 5a10fbd..14f1018 100644
--- a/0000_README
+++ b/0000_README
@@ -135,6 +135,10 @@ Patch:  1022_linux-5.10.23.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.23
 
+Patch:  1023_linux-5.10.24.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.24
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1023_linux-5.10.24.patch b/1023_linux-5.10.24.patch
new file mode 100644
index 0000000..19f4b04
--- /dev/null
+++ b/1023_linux-5.10.24.patch
@@ -0,0 +1,11590 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-memory b/Documentation/ABI/testing/sysfs-devices-memory
+index 2da2b1fba2c1c..16a727a611b1e 100644
+--- a/Documentation/ABI/testing/sysfs-devices-memory
++++ b/Documentation/ABI/testing/sysfs-devices-memory
+@@ -26,8 +26,9 @@ Date:		September 2008
+ Contact:	Badari Pulavarty <pbadari@us.ibm.com>
+ Description:
+ 		The file /sys/devices/system/memory/memoryX/phys_device
+-		is read-only and is designed to show the name of physical
+-		memory device.  Implementation is currently incomplete.
++		is read-only;  it is a legacy interface only ever used on s390x
++		to expose the covered storage increment.
++Users:		Legacy s390-tools lsmem/chmem
+ 
+ What:		/sys/devices/system/memory/memoryX/phys_index
+ Date:		September 2008
+diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst b/Documentation/admin-guide/mm/memory-hotplug.rst
+index 5c4432c96c4b6..245739f55ac7d 100644
+--- a/Documentation/admin-guide/mm/memory-hotplug.rst
++++ b/Documentation/admin-guide/mm/memory-hotplug.rst
+@@ -160,8 +160,8 @@ Under each memory block, you can see 5 files:
+ 
+                     "online_movable", "online", "offline" command
+                     which will be performed on all sections in the block.
+-``phys_device``     read-only: designed to show the name of physical memory
+-                    device.  This is not well implemented now.
++``phys_device``	    read-only: legacy interface only ever used on s390x to
++		    expose the covered storage increment.
+ ``removable``       read-only: contains an integer value indicating
+                     whether the memory block is removable or not
+                     removable.  A value of 1 indicates that the memory
+diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
+index 654649556306f..7272a4bd74dd0 100644
+--- a/Documentation/gpu/todo.rst
++++ b/Documentation/gpu/todo.rst
+@@ -560,6 +560,27 @@ Some of these date from the very introduction of KMS in 2008 ...
+ 
+ Level: Intermediate
+ 
++Remove automatic page mapping from dma-buf importing
++----------------------------------------------------
++
++When importing dma-bufs, the dma-buf and PRIME frameworks automatically map
++imported pages into the importer's DMA area. drm_gem_prime_fd_to_handle() and
++drm_gem_prime_handle_to_fd() require that importers call dma_buf_attach()
++even if they never do actual device DMA, but only CPU access through
++dma_buf_vmap(). This is a problem for USB devices, which do not support DMA
++operations.
++
++To fix the issue, automatic page mappings should be removed from the
++buffer-sharing code. Fixing this is a bit more involved, since the import/export
++cache is also tied to &drm_gem_object.import_attach. Meanwhile we paper over
++this problem for USB devices by fishing out the USB host controller device, as
++long as that supports DMA. Otherwise importing can still needlessly fail.
++
++Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
++
++Level: Advanced
++
++
+ Better Testing
+ ==============
+ 
+diff --git a/Documentation/networking/netdev-FAQ.rst b/Documentation/networking/netdev-FAQ.rst
+index 4b9ed5874d5ad..be88ab15e53ce 100644
+--- a/Documentation/networking/netdev-FAQ.rst
++++ b/Documentation/networking/netdev-FAQ.rst
+@@ -144,77 +144,13 @@ Please send incremental versions on top of what has been merged in order to fix
+ the patches the way they would look like if your latest patch series was to be
+ merged.
+ 
+-Q: How can I tell what patches are queued up for backporting to the various stable releases?
+---------------------------------------------------------------------------------------------
+-A: Normally Greg Kroah-Hartman collects stable commits himself, but for
+-networking, Dave collects up patches he deems critical for the
+-networking subsystem, and then hands them off to Greg.
+-
+-There is a patchworks queue that you can see here:
+-
+-  https://patchwork.kernel.org/bundle/netdev/stable/?state=*
+-
+-It contains the patches which Dave has selected, but not yet handed off
+-to Greg.  If Greg already has the patch, then it will be here:
+-
+-  https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
+-
+-A quick way to find whether the patch is in this stable-queue is to
+-simply clone the repo, and then git grep the mainline commit ID, e.g.
+-::
+-
+-  stable-queue$ git grep -l 284041ef21fdf2e
+-  releases/3.0.84/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
+-  releases/3.4.51/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
+-  releases/3.9.8/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
+-  stable/stable-queue$
+-
+-Q: I see a network patch and I think it should be backported to stable.
+------------------------------------------------------------------------
+-Q: Should I request it via stable@vger.kernel.org like the references in
+-the kernel's Documentation/process/stable-kernel-rules.rst file say?
+-A: No, not for networking.  Check the stable queues as per above first
+-to see if it is already queued.  If not, then send a mail to netdev,
+-listing the upstream commit ID and why you think it should be a stable
+-candidate.
+-
+-Before you jump to go do the above, do note that the normal stable rules
+-in :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
+-still apply.  So you need to explicitly indicate why it is a critical
+-fix and exactly what users are impacted.  In addition, you need to
+-convince yourself that you *really* think it has been overlooked,
+-vs. having been considered and rejected.
+-
+-Generally speaking, the longer it has had a chance to "soak" in
+-mainline, the better the odds that it is an OK candidate for stable.  So
+-scrambling to request a commit be added the day after it appears should
+-be avoided.
+-
+-Q: I have created a network patch and I think it should be backported to stable.
+---------------------------------------------------------------------------------
+-Q: Should I add a Cc: stable@vger.kernel.org like the references in the
+-kernel's Documentation/ directory say?
+-A: No.  See above answer.  In short, if you think it really belongs in
+-stable, then ensure you write a decent commit log that describes who
+-gets impacted by the bug fix and how it manifests itself, and when the
+-bug was introduced.  If you do that properly, then the commit will get
+-handled appropriately and most likely get put in the patchworks stable
+-queue if it really warrants it.
+-
+-If you think there is some valid information relating to it being in
+-stable that does *not* belong in the commit log, then use the three dash
+-marker line as described in
+-:ref:`Documentation/process/submitting-patches.rst <the_canonical_patch_format>`
+-to temporarily embed that information into the patch that you send.
+-
+-Q: Are all networking bug fixes backported to all stable releases?
+-------------------------------------------------------------------
+-A: Due to capacity, Dave could only take care of the backports for the
+-last two stable releases. For earlier stable releases, each stable
+-branch maintainer is supposed to take care of them. If you find any
+-patch is missing from an earlier stable branch, please notify
+-stable@vger.kernel.org with either a commit ID or a formal patch
+-backported, and CC Dave and other relevant networking developers.
++Q: Are there special rules regarding stable submissions on netdev?
++---------------------------------------------------------------
++While it used to be the case that netdev submissions were not supposed
++to carry explicit ``CC: stable@vger.kernel.org`` tags that is no longer
++the case today. Please follow the standard stable rules in
++:ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`,
++and make sure you include appropriate Fixes tags!
+ 
+ Q: Is the comment style convention different for the networking content?
+ ------------------------------------------------------------------------
+diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst
+index 3973556250e17..003c865e9c212 100644
+--- a/Documentation/process/stable-kernel-rules.rst
++++ b/Documentation/process/stable-kernel-rules.rst
+@@ -35,12 +35,6 @@ Rules on what kind of patches are accepted, and which ones are not, into the
+ Procedure for submitting patches to the -stable tree
+ ----------------------------------------------------
+ 
+- - If the patch covers files in net/ or drivers/net please follow netdev stable
+-   submission guidelines as described in
+-   :ref:`Documentation/networking/netdev-FAQ.rst <netdev-FAQ>`
+-   after first checking the stable networking queue at
+-   https://patchwork.kernel.org/bundle/netdev/stable/?state=*
+-   to ensure the requested patch is not already queued up.
+  - Security patches should not be handled (solely) by the -stable review
+    process but should follow the procedures in
+    :ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`.
+diff --git a/Documentation/process/submitting-patches.rst b/Documentation/process/submitting-patches.rst
+index 83d9a82055a78..5a267f5d1a501 100644
+--- a/Documentation/process/submitting-patches.rst
++++ b/Documentation/process/submitting-patches.rst
+@@ -250,11 +250,6 @@ should also read
+ :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
+ in addition to this file.
+ 
+-Note, however, that some subsystem maintainers want to come to their own
+-conclusions on which patches should go to the stable trees.  The networking
+-maintainer, in particular, would rather not see individual developers
+-adding lines like the above to their patches.
+-
+ If changes affect userland-kernel interfaces, please send the MAN-PAGES
+ maintainer (as listed in the MAINTAINERS file) a man-pages patch, or at
+ least a notification of the change, so that some information makes its way
+diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
+index 4ba0df574eb25..a5d27553d59c9 100644
+--- a/Documentation/virt/kvm/api.rst
++++ b/Documentation/virt/kvm/api.rst
+@@ -182,6 +182,9 @@ is dependent on the CPU capability and the kernel configuration. The limit can
+ be retrieved using KVM_CAP_ARM_VM_IPA_SIZE of the KVM_CHECK_EXTENSION
+ ioctl() at run-time.
+ 
++Creation of the VM will fail if the requested IPA size (whether it is
++implicit or explicit) is unsupported on the host.
++
+ Please note that configuring the IPA size does not affect the capability
+ exposed by the guest CPUs in ID_AA64MMFR0_EL1[PARange]. It only affects
+ size of the address translated by the stage2 level (guest physical to
+diff --git a/Makefile b/Makefile
+index 7fdb78b48f556..3a435c928e750 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 23
++SUBLEVEL = 24
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -1247,9 +1247,15 @@ define filechk_utsrelease.h
+ endef
+ 
+ define filechk_version.h
+-	echo \#define LINUX_VERSION_CODE $(shell                         \
+-	expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 0$(SUBLEVEL)); \
+-	echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))'
++	if [ $(SUBLEVEL) -gt 255 ]; then                                 \
++		echo \#define LINUX_VERSION_CODE $(shell                 \
++		expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 255); \
++	else                                                             \
++		echo \#define LINUX_VERSION_CODE $(shell                 \
++		expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + $(SUBLEVEL)); \
++	fi;                                                              \
++	echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) +  \
++	((c) > 255 ? 255 : (c)))'
+ endef
+ 
+ $(version_h): FORCE
+diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
+index a0de09f994d88..247ce90559901 100644
+--- a/arch/arm/boot/compressed/head.S
++++ b/arch/arm/boot/compressed/head.S
+@@ -1440,8 +1440,7 @@ ENTRY(efi_enter_kernel)
+ 		mov	r4, r0			@ preserve image base
+ 		mov	r8, r1			@ preserve DT pointer
+ 
+- ARM(		adrl	r0, call_cache_fn	)
+- THUMB(		adr	r0, call_cache_fn	)
++		adr_l	r0, call_cache_fn
+ 		adr	r1, 0f			@ clean the region of code we
+ 		bl	cache_clean_flush	@ may run with the MMU off
+ 
+diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
+index feac2c8b86f29..72627c5fb3b2c 100644
+--- a/arch/arm/include/asm/assembler.h
++++ b/arch/arm/include/asm/assembler.h
+@@ -494,4 +494,88 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
+ #define _ASM_NOKPROBE(entry)
+ #endif
+ 
++	.macro		__adldst_l, op, reg, sym, tmp, c
++	.if		__LINUX_ARM_ARCH__ < 7
++	ldr\c		\tmp, .La\@
++	.subsection	1
++	.align		2
++.La\@:	.long		\sym - .Lpc\@
++	.previous
++	.else
++	.ifnb		\c
++ THUMB(	ittt		\c			)
++	.endif
++	movw\c		\tmp, #:lower16:\sym - .Lpc\@
++	movt\c		\tmp, #:upper16:\sym - .Lpc\@
++	.endif
++
++#ifndef CONFIG_THUMB2_KERNEL
++	.set		.Lpc\@, . + 8			// PC bias
++	.ifc		\op, add
++	add\c		\reg, \tmp, pc
++	.else
++	\op\c		\reg, [pc, \tmp]
++	.endif
++#else
++.Lb\@:	add\c		\tmp, \tmp, pc
++	/*
++	 * In Thumb-2 builds, the PC bias depends on whether we are currently
++	 * emitting into a .arm or a .thumb section. The size of the add opcode
++	 * above will be 2 bytes when emitting in Thumb mode and 4 bytes when
++	 * emitting in ARM mode, so let's use this to account for the bias.
++	 */
++	.set		.Lpc\@, . + (. - .Lb\@)
++
++	.ifnc		\op, add
++	\op\c		\reg, [\tmp]
++	.endif
++#endif
++	.endm
++
++	/*
++	 * mov_l - move a constant value or [relocated] address into a register
++	 */
++	.macro		mov_l, dst:req, imm:req
++	.if		__LINUX_ARM_ARCH__ < 7
++	ldr		\dst, =\imm
++	.else
++	movw		\dst, #:lower16:\imm
++	movt		\dst, #:upper16:\imm
++	.endif
++	.endm
++
++	/*
++	 * adr_l - adr pseudo-op with unlimited range
++	 *
++	 * @dst: destination register
++	 * @sym: name of the symbol
++	 * @cond: conditional opcode suffix
++	 */
++	.macro		adr_l, dst:req, sym:req, cond
++	__adldst_l	add, \dst, \sym, \dst, \cond
++	.endm
++
++	/*
++	 * ldr_l - ldr <literal> pseudo-op with unlimited range
++	 *
++	 * @dst: destination register
++	 * @sym: name of the symbol
++	 * @cond: conditional opcode suffix
++	 */
++	.macro		ldr_l, dst:req, sym:req, cond
++	__adldst_l	ldr, \dst, \sym, \dst, \cond
++	.endm
++
++	/*
++	 * str_l - str <literal> pseudo-op with unlimited range
++	 *
++	 * @src: source register
++	 * @sym: name of the symbol
++	 * @tmp: mandatory scratch register
++	 * @cond: conditional opcode suffix
++	 */
++	.macro		str_l, src:req, sym:req, tmp:req, cond
++	__adldst_l	str, \src, \sym, \tmp, \cond
++	.endm
++
+ #endif /* __ASM_ASSEMBLER_H__ */
+diff --git a/arch/arm/kernel/iwmmxt.S b/arch/arm/kernel/iwmmxt.S
+index 0dcae787b004d..d2b4ac06e4ed8 100644
+--- a/arch/arm/kernel/iwmmxt.S
++++ b/arch/arm/kernel/iwmmxt.S
+@@ -16,6 +16,7 @@
+ #include <asm/thread_info.h>
+ #include <asm/asm-offsets.h>
+ #include <asm/assembler.h>
++#include "iwmmxt.h"
+ 
+ #if defined(CONFIG_CPU_PJ4) || defined(CONFIG_CPU_PJ4B)
+ #define PJ4(code...)		code
+@@ -113,33 +114,33 @@ concan_save:
+ 
+ concan_dump:
+ 
+-	wstrw	wCSSF, [r1, #MMX_WCSSF]
+-	wstrw	wCASF, [r1, #MMX_WCASF]
+-	wstrw	wCGR0, [r1, #MMX_WCGR0]
+-	wstrw	wCGR1, [r1, #MMX_WCGR1]
+-	wstrw	wCGR2, [r1, #MMX_WCGR2]
+-	wstrw	wCGR3, [r1, #MMX_WCGR3]
++	wstrw	wCSSF, r1, MMX_WCSSF
++	wstrw	wCASF, r1, MMX_WCASF
++	wstrw	wCGR0, r1, MMX_WCGR0
++	wstrw	wCGR1, r1, MMX_WCGR1
++	wstrw	wCGR2, r1, MMX_WCGR2
++	wstrw	wCGR3, r1, MMX_WCGR3
+ 
+ 1:	@ MUP? wRn
+ 	tst	r2, #0x2
+ 	beq	2f
+ 
+-	wstrd	wR0,  [r1, #MMX_WR0]
+-	wstrd	wR1,  [r1, #MMX_WR1]
+-	wstrd	wR2,  [r1, #MMX_WR2]
+-	wstrd	wR3,  [r1, #MMX_WR3]
+-	wstrd	wR4,  [r1, #MMX_WR4]
+-	wstrd	wR5,  [r1, #MMX_WR5]
+-	wstrd	wR6,  [r1, #MMX_WR6]
+-	wstrd	wR7,  [r1, #MMX_WR7]
+-	wstrd	wR8,  [r1, #MMX_WR8]
+-	wstrd	wR9,  [r1, #MMX_WR9]
+-	wstrd	wR10, [r1, #MMX_WR10]
+-	wstrd	wR11, [r1, #MMX_WR11]
+-	wstrd	wR12, [r1, #MMX_WR12]
+-	wstrd	wR13, [r1, #MMX_WR13]
+-	wstrd	wR14, [r1, #MMX_WR14]
+-	wstrd	wR15, [r1, #MMX_WR15]
++	wstrd	wR0,  r1, MMX_WR0
++	wstrd	wR1,  r1, MMX_WR1
++	wstrd	wR2,  r1, MMX_WR2
++	wstrd	wR3,  r1, MMX_WR3
++	wstrd	wR4,  r1, MMX_WR4
++	wstrd	wR5,  r1, MMX_WR5
++	wstrd	wR6,  r1, MMX_WR6
++	wstrd	wR7,  r1, MMX_WR7
++	wstrd	wR8,  r1, MMX_WR8
++	wstrd	wR9,  r1, MMX_WR9
++	wstrd	wR10, r1, MMX_WR10
++	wstrd	wR11, r1, MMX_WR11
++	wstrd	wR12, r1, MMX_WR12
++	wstrd	wR13, r1, MMX_WR13
++	wstrd	wR14, r1, MMX_WR14
++	wstrd	wR15, r1, MMX_WR15
+ 
+ 2:	teq	r0, #0				@ anything to load?
+ 	reteq	lr				@ if not, return
+@@ -147,30 +148,30 @@ concan_dump:
+ concan_load:
+ 
+ 	@ Load wRn
+-	wldrd	wR0,  [r0, #MMX_WR0]
+-	wldrd	wR1,  [r0, #MMX_WR1]
+-	wldrd	wR2,  [r0, #MMX_WR2]
+-	wldrd	wR3,  [r0, #MMX_WR3]
+-	wldrd	wR4,  [r0, #MMX_WR4]
+-	wldrd	wR5,  [r0, #MMX_WR5]
+-	wldrd	wR6,  [r0, #MMX_WR6]
+-	wldrd	wR7,  [r0, #MMX_WR7]
+-	wldrd	wR8,  [r0, #MMX_WR8]
+-	wldrd	wR9,  [r0, #MMX_WR9]
+-	wldrd	wR10, [r0, #MMX_WR10]
+-	wldrd	wR11, [r0, #MMX_WR11]
+-	wldrd	wR12, [r0, #MMX_WR12]
+-	wldrd	wR13, [r0, #MMX_WR13]
+-	wldrd	wR14, [r0, #MMX_WR14]
+-	wldrd	wR15, [r0, #MMX_WR15]
++	wldrd	wR0,  r0, MMX_WR0
++	wldrd	wR1,  r0, MMX_WR1
++	wldrd	wR2,  r0, MMX_WR2
++	wldrd	wR3,  r0, MMX_WR3
++	wldrd	wR4,  r0, MMX_WR4
++	wldrd	wR5,  r0, MMX_WR5
++	wldrd	wR6,  r0, MMX_WR6
++	wldrd	wR7,  r0, MMX_WR7
++	wldrd	wR8,  r0, MMX_WR8
++	wldrd	wR9,  r0, MMX_WR9
++	wldrd	wR10, r0, MMX_WR10
++	wldrd	wR11, r0, MMX_WR11
++	wldrd	wR12, r0, MMX_WR12
++	wldrd	wR13, r0, MMX_WR13
++	wldrd	wR14, r0, MMX_WR14
++	wldrd	wR15, r0, MMX_WR15
+ 
+ 	@ Load wCx
+-	wldrw	wCSSF, [r0, #MMX_WCSSF]
+-	wldrw	wCASF, [r0, #MMX_WCASF]
+-	wldrw	wCGR0, [r0, #MMX_WCGR0]
+-	wldrw	wCGR1, [r0, #MMX_WCGR1]
+-	wldrw	wCGR2, [r0, #MMX_WCGR2]
+-	wldrw	wCGR3, [r0, #MMX_WCGR3]
++	wldrw	wCSSF, r0, MMX_WCSSF
++	wldrw	wCASF, r0, MMX_WCASF
++	wldrw	wCGR0, r0, MMX_WCGR0
++	wldrw	wCGR1, r0, MMX_WCGR1
++	wldrw	wCGR2, r0, MMX_WCGR2
++	wldrw	wCGR3, r0, MMX_WCGR3
+ 
+ 	@ clear CUP/MUP (only if r1 != 0)
+ 	teq	r1, #0
+diff --git a/arch/arm/kernel/iwmmxt.h b/arch/arm/kernel/iwmmxt.h
+new file mode 100644
+index 0000000000000..fb627286f5bb9
+--- /dev/null
++++ b/arch/arm/kernel/iwmmxt.h
+@@ -0,0 +1,47 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++
++#ifndef __IWMMXT_H__
++#define __IWMMXT_H__
++
++.irp b, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
++.set .LwR\b, \b
++.set .Lr\b, \b
++.endr
++
++.set .LwCSSF, 0x2
++.set .LwCASF, 0x3
++.set .LwCGR0, 0x8
++.set .LwCGR1, 0x9
++.set .LwCGR2, 0xa
++.set .LwCGR3, 0xb
++
++.macro wldrd, reg:req, base:req, offset:req
++.inst 0xedd00100 | (.L\reg << 12) | (.L\base << 16) | (\offset >> 2)
++.endm
++
++.macro wldrw, reg:req, base:req, offset:req
++.inst 0xfd900100 | (.L\reg << 12) | (.L\base << 16) | (\offset >> 2)
++.endm
++
++.macro wstrd, reg:req, base:req, offset:req
++.inst 0xedc00100 | (.L\reg << 12) | (.L\base << 16) | (\offset >> 2)
++.endm
++
++.macro wstrw, reg:req, base:req, offset:req
++.inst 0xfd800100 | (.L\reg << 12) | (.L\base << 16) | (\offset >> 2)
++.endm
++
++#ifdef __clang__
++
++#define wCon c1
++
++.macro tmrc, dest:req, control:req
++mrc p1, 0, \dest, \control, c0, 0
++.endm
++
++.macro tmcr, control:req, src:req
++mcr p1, 0, \src, \control, c0, 0
++.endm
++#endif
++
++#endif
+diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
+index 54387ccd1ab26..044bb9e2cd74f 100644
+--- a/arch/arm64/include/asm/kvm_asm.h
++++ b/arch/arm64/include/asm/kvm_asm.h
+@@ -49,7 +49,7 @@
+ #define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context		2
+ #define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa		3
+ #define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid		4
+-#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_local_vmid	5
++#define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context		5
+ #define __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff		6
+ #define __KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs			7
+ #define __KVM_HOST_SMCCC_FUNC___vgic_v3_get_ich_vtr_el2		8
+@@ -180,10 +180,10 @@ DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
+ #define __bp_harden_hyp_vecs	CHOOSE_HYP_SYM(__bp_harden_hyp_vecs)
+ 
+ extern void __kvm_flush_vm_context(void);
++extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
+ extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
+ 				     int level);
+ extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
+-extern void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu);
+ 
+ extern void __kvm_timer_set_cntvoff(u64 cntvoff);
+ 
+diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
+index 6b664de5ec1f4..123e67cb85050 100644
+--- a/arch/arm64/include/asm/kvm_hyp.h
++++ b/arch/arm64/include/asm/kvm_hyp.h
+@@ -82,6 +82,11 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt);
+ void __debug_switch_to_guest(struct kvm_vcpu *vcpu);
+ void __debug_switch_to_host(struct kvm_vcpu *vcpu);
+ 
++#ifdef __KVM_NVHE_HYPERVISOR__
++void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu);
++void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu);
++#endif
++
+ void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
+ void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
+ 
+@@ -94,7 +99,8 @@ u64 __guest_enter(struct kvm_vcpu *vcpu);
+ 
+ void __noreturn hyp_panic(void);
+ #ifdef __KVM_NVHE_HYPERVISOR__
+-void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par);
++void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
++			       u64 elr, u64 par);
+ #endif
+ 
+ #endif /* __ARM64_KVM_HYP_H__ */
+diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
+index 75c8e9a350cc7..505bdd75b5411 100644
+--- a/arch/arm64/include/asm/memory.h
++++ b/arch/arm64/include/asm/memory.h
+@@ -306,6 +306,11 @@ static inline void *phys_to_virt(phys_addr_t x)
+ #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)
+ 
+ #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
++#define page_to_virt(x)	({						\
++	__typeof__(x) __page = x;					\
++	void *__addr = __va(page_to_phys(__page));			\
++	(void *)__tag_set((const void *)__addr, page_kasan_tag(__page));\
++})
+ #define virt_to_page(x)		pfn_to_page(virt_to_pfn(x))
+ #else
+ #define page_to_virt(x)	({						\
+diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
+index 0672236e1aeab..4e2ba94778450 100644
+--- a/arch/arm64/include/asm/mmu_context.h
++++ b/arch/arm64/include/asm/mmu_context.h
+@@ -65,10 +65,7 @@ extern u64 idmap_ptrs_per_pgd;
+ 
+ static inline bool __cpu_uses_extended_idmap(void)
+ {
+-	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52))
+-		return false;
+-
+-	return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
++	return unlikely(idmap_t0sz != TCR_T0SZ(vabits_actual));
+ }
+ 
+ /*
+diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
+index 046be789fbb47..9a65fb5281100 100644
+--- a/arch/arm64/include/asm/pgtable-prot.h
++++ b/arch/arm64/include/asm/pgtable-prot.h
+@@ -66,7 +66,6 @@ extern bool arm64_use_ng_mappings;
+ #define _PAGE_DEFAULT		(_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
+ 
+ #define PAGE_KERNEL		__pgprot(PROT_NORMAL)
+-#define PAGE_KERNEL_TAGGED	__pgprot(PROT_NORMAL_TAGGED)
+ #define PAGE_KERNEL_RO		__pgprot((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
+ #define PAGE_KERNEL_ROX		__pgprot((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
+ #define PAGE_KERNEL_EXEC	__pgprot(PROT_NORMAL & ~PTE_PXN)
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index 5628289b9d5e6..717f13d52ecc5 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -484,6 +484,9 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
+ 	__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN)
+ #define pgprot_device(prot) \
+ 	__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_nGnRE) | PTE_PXN | PTE_UXN)
++#define pgprot_tagged(prot) \
++	__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_TAGGED))
++#define pgprot_mhp	pgprot_tagged
+ /*
+  * DMA allocations for non-coherent devices use what the Arm architecture calls
+  * "Normal non-cacheable" memory, which permits speculation, unaligned accesses
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index e7550a5289fef..78cdd6b24172c 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -334,7 +334,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
+ 	 */
+ 	adrp	x5, __idmap_text_end
+ 	clz	x5, x5
+-	cmp	x5, TCR_T0SZ(VA_BITS)	// default T0SZ small enough?
++	cmp	x5, TCR_T0SZ(VA_BITS_MIN) // default T0SZ small enough?
+ 	b.ge	1f			// .. then skip VA range extension
+ 
+ 	adr_l	x6, idmap_t0sz
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index 3605f77ad4df1..11852e05ee32a 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -460,7 +460,7 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
+ 	return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
+ }
+ 
+-static inline u32 armv8pmu_read_evcntr(int idx)
++static inline u64 armv8pmu_read_evcntr(int idx)
+ {
+ 	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
+ 
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index c0ffb019ca8be..a1c2c955474e9 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -352,11 +352,16 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ 	last_ran = this_cpu_ptr(mmu->last_vcpu_ran);
+ 
+ 	/*
++	 * We guarantee that both TLBs and I-cache are private to each
++	 * vcpu. If detecting that a vcpu from the same VM has
++	 * previously run on the same physical CPU, call into the
++	 * hypervisor code to nuke the relevant contexts.
++	 *
+ 	 * We might get preempted before the vCPU actually runs, but
+ 	 * over-invalidation doesn't affect correctness.
+ 	 */
+ 	if (*last_ran != vcpu->vcpu_id) {
+-		kvm_call_hyp(__kvm_tlb_flush_local_vmid, mmu);
++		kvm_call_hyp(__kvm_flush_cpu_context, mmu);
+ 		*last_ran = vcpu->vcpu_id;
+ 	}
+ 
+diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
+index b0afad7a99c6e..0c66a1d408fd7 100644
+--- a/arch/arm64/kvm/hyp/entry.S
++++ b/arch/arm64/kvm/hyp/entry.S
+@@ -146,7 +146,7 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
+ 	// Now restore the hyp regs
+ 	restore_callee_saved_regs x2
+ 
+-	set_loaded_vcpu xzr, x1, x2
++	set_loaded_vcpu xzr, x2, x3
+ 
+ alternative_if ARM64_HAS_RAS_EXTN
+ 	// If we have the RAS extensions we can consume a pending error
+diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+index 91a711aa8382e..f401724f12ef7 100644
+--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
++++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+@@ -58,16 +58,24 @@ static void __debug_restore_spe(u64 pmscr_el1)
+ 	write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1);
+ }
+ 
+-void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
++void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
+ {
+ 	/* Disable and flush SPE data generation */
+ 	__debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
++}
++
++void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
++{
+ 	__debug_switch_to_guest_common(vcpu);
+ }
+ 
+-void __debug_switch_to_host(struct kvm_vcpu *vcpu)
++void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
+ {
+ 	__debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
++}
++
++void __debug_switch_to_host(struct kvm_vcpu *vcpu)
++{
+ 	__debug_switch_to_host_common(vcpu);
+ }
+ 
+diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
+index ed27f06a31ba2..4ce934fc1f72a 100644
+--- a/arch/arm64/kvm/hyp/nvhe/host.S
++++ b/arch/arm64/kvm/hyp/nvhe/host.S
+@@ -64,10 +64,15 @@ __host_enter_without_restoring:
+ SYM_FUNC_END(__host_exit)
+ 
+ /*
+- * void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par);
++ * void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
++ * 				  u64 elr, u64 par);
+  */
+ SYM_FUNC_START(__hyp_do_panic)
+-	/* Load the format arguments into x1-7 */
++	mov	x29, x0
++
++	/* Load the format string into x0 and arguments into x1-7 */
++	ldr	x0, =__hyp_panic_string
++
+ 	mov	x6, x3
+ 	get_vcpu_ptr x7, x3
+ 
+@@ -82,13 +87,8 @@ SYM_FUNC_START(__hyp_do_panic)
+ 	ldr	lr, =panic
+ 	msr	elr_el2, lr
+ 
+-	/*
+-	 * Set the panic format string and enter the host, conditionally
+-	 * restoring the host context.
+-	 */
+-	cmp	x0, xzr
+-	ldr	x0, =__hyp_panic_string
+-	b.eq	__host_enter_without_restoring
++	/* Enter the host, conditionally restoring the host context. */
++	cbz	x29, __host_enter_without_restoring
+ 	b	__host_enter_for_panic
+ SYM_FUNC_END(__hyp_do_panic)
+ 
+@@ -144,7 +144,7 @@ SYM_FUNC_END(__hyp_do_panic)
+ 
+ .macro invalid_host_el1_vect
+ 	.align 7
+-	mov	x0, xzr		/* restore_host = false */
++	mov	x0, xzr		/* host_ctxt = NULL */
+ 	mrs	x1, spsr_el2
+ 	mrs	x2, elr_el2
+ 	mrs	x3, par_el1
+diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+index e2eafe2c93aff..3df30b459215b 100644
+--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
++++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+@@ -46,11 +46,11 @@ static void handle_host_hcall(unsigned long func_id,
+ 		__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
+ 		break;
+ 	}
+-	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_local_vmid): {
++	case KVM_HOST_SMCCC_FUNC(__kvm_flush_cpu_context): {
+ 		unsigned long r1 = host_ctxt->regs.regs[1];
+ 		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
+ 
+-		__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
++		__kvm_flush_cpu_context(kern_hyp_va(mmu));
+ 		break;
+ 	}
+ 	case KVM_HOST_SMCCC_FUNC(__kvm_timer_set_cntvoff): {
+diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
+index 8ae8160bc93ab..6624596846d3d 100644
+--- a/arch/arm64/kvm/hyp/nvhe/switch.c
++++ b/arch/arm64/kvm/hyp/nvhe/switch.c
+@@ -188,6 +188,14 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+ 	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
+ 
+ 	__sysreg_save_state_nvhe(host_ctxt);
++	/*
++	 * We must flush and disable the SPE buffer for nVHE, as
++	 * the translation regime(EL1&0) is going to be loaded with
++	 * that of the guest. And we must do this before we change the
++	 * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and
++	 * before we load guest Stage1.
++	 */
++	__debug_save_host_buffers_nvhe(vcpu);
+ 
+ 	/*
+ 	 * We must restore the 32-bit state before the sysregs, thanks
+@@ -228,11 +236,12 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+ 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
+ 		__fpsimd_save_fpexc32(vcpu);
+ 
++	__debug_switch_to_host(vcpu);
+ 	/*
+ 	 * This must come after restoring the host sysregs, since a non-VHE
+ 	 * system may enable SPE here and make use of the TTBRs.
+ 	 */
+-	__debug_switch_to_host(vcpu);
++	__debug_restore_host_buffers_nvhe(vcpu);
+ 
+ 	if (pmu_switch_needed)
+ 		__pmu_switch_to_host(host_ctxt);
+@@ -251,7 +260,6 @@ void __noreturn hyp_panic(void)
+ 	u64 spsr = read_sysreg_el2(SYS_SPSR);
+ 	u64 elr = read_sysreg_el2(SYS_ELR);
+ 	u64 par = read_sysreg_par();
+-	bool restore_host = true;
+ 	struct kvm_cpu_context *host_ctxt;
+ 	struct kvm_vcpu *vcpu;
+ 
+@@ -265,7 +273,7 @@ void __noreturn hyp_panic(void)
+ 		__sysreg_restore_state_nvhe(host_ctxt);
+ 	}
+ 
+-	__hyp_do_panic(restore_host, spsr, elr, par);
++	__hyp_do_panic(host_ctxt, spsr, elr, par);
+ 	unreachable();
+ }
+ 
+diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
+index fbde89a2c6e83..229b06748c208 100644
+--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
++++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
+@@ -123,7 +123,7 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
+ 	__tlb_switch_to_host(&cxt);
+ }
+ 
+-void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
++void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
+ {
+ 	struct tlb_inv_context cxt;
+ 
+@@ -131,6 +131,7 @@ void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
+ 	__tlb_switch_to_guest(mmu, &cxt);
+ 
+ 	__tlbi(vmalle1);
++	asm volatile("ic iallu");
+ 	dsb(nsh);
+ 	isb();
+ 
+diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
+index bdf8e55ed308e..4d99d07c610c8 100644
+--- a/arch/arm64/kvm/hyp/pgtable.c
++++ b/arch/arm64/kvm/hyp/pgtable.c
+@@ -225,6 +225,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
+ 		goto out;
+ 
+ 	if (!table) {
++		data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
+ 		data->addr += kvm_granule_size(level);
+ 		goto out;
+ 	}
+diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
+index fd7895945bbc6..66f17349f0c36 100644
+--- a/arch/arm64/kvm/hyp/vhe/tlb.c
++++ b/arch/arm64/kvm/hyp/vhe/tlb.c
+@@ -127,7 +127,7 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
+ 	__tlb_switch_to_host(&cxt);
+ }
+ 
+-void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
++void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
+ {
+ 	struct tlb_inv_context cxt;
+ 
+@@ -135,6 +135,7 @@ void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
+ 	__tlb_switch_to_guest(mmu, &cxt);
+ 
+ 	__tlbi(vmalle1);
++	asm volatile("ic iallu");
+ 	dsb(nsh);
+ 	isb();
+ 
+diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
+index 75814a02d1894..26068456ec0f3 100644
+--- a/arch/arm64/kvm/mmu.c
++++ b/arch/arm64/kvm/mmu.c
+@@ -1309,8 +1309,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ 	 * Prevent userspace from creating a memory region outside of the IPA
+ 	 * space addressable by the KVM guest IPA space.
+ 	 */
+-	if (memslot->base_gfn + memslot->npages >=
+-	    (kvm_phys_size(kvm) >> PAGE_SHIFT))
++	if ((memslot->base_gfn + memslot->npages) > (kvm_phys_size(kvm) >> PAGE_SHIFT))
+ 		return -EFAULT;
+ 
+ 	mmap_read_lock(current->mm);
+diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
+index f32490229a4c7..e911eea36eb0e 100644
+--- a/arch/arm64/kvm/reset.c
++++ b/arch/arm64/kvm/reset.c
+@@ -373,10 +373,9 @@ int kvm_set_ipa_limit(void)
+ 	}
+ 
+ 	kvm_ipa_limit = id_aa64mmfr0_parange_to_phys_shift(parange);
+-	WARN(kvm_ipa_limit < KVM_PHYS_SHIFT,
+-	     "KVM IPA Size Limit (%d bits) is smaller than default size\n",
+-	     kvm_ipa_limit);
+-	kvm_info("IPA Size Limit: %d bits\n", kvm_ipa_limit);
++	kvm_info("IPA Size Limit: %d bits%s\n", kvm_ipa_limit,
++		 ((kvm_ipa_limit < KVM_PHYS_SHIFT) ?
++		  " (Reduced IPA size, limited VM/VMM compatibility)" : ""));
+ 
+ 	return 0;
+ }
+@@ -405,6 +404,11 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
+ 			return -EINVAL;
+ 	} else {
+ 		phys_shift = KVM_PHYS_SHIFT;
++		if (phys_shift > kvm_ipa_limit) {
++			pr_warn_once("%s using unsupported default IPA limit, upgrade your VMM\n",
++				     current->comm);
++			return -EINVAL;
++		}
+ 	}
+ 
+ 	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index b913844ab7404..916e0547fdccf 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -218,6 +218,18 @@ int pfn_valid(unsigned long pfn)
+ 
+ 	if (!valid_section(__pfn_to_section(pfn)))
+ 		return 0;
++
++	/*
++	 * ZONE_DEVICE memory does not have the memblock entries.
++	 * memblock_is_map_memory() check for ZONE_DEVICE based
++	 * addresses will always fail. Even the normal hotplugged
++	 * memory will never have MEMBLOCK_NOMAP flag set in their
++	 * memblock entries. Skip memblock search for all non early
++	 * memory sections covering all of hotplug memory including
++	 * both normal and ZONE_DEVICE based.
++	 */
++	if (!early_section(__pfn_to_section(pfn)))
++		return pfn_section_valid(__pfn_to_section(pfn), pfn);
+ #endif
+ 	return memblock_is_map_memory(addr);
+ }
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index ca692a8157315..6aabf1eced31e 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -40,7 +40,7 @@
+ #define NO_BLOCK_MAPPINGS	BIT(0)
+ #define NO_CONT_MAPPINGS	BIT(1)
+ 
+-u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
++u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
+ u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
+ 
+ u64 __section(".mmuoff.data.write") vabits_actual;
+@@ -502,7 +502,8 @@ static void __init map_mem(pgd_t *pgdp)
+ 		 * if MTE is present. Otherwise, it has the same attributes as
+ 		 * PAGE_KERNEL.
+ 		 */
+-		__map_memblock(pgdp, start, end, PAGE_KERNEL_TAGGED, flags);
++		__map_memblock(pgdp, start, end, pgprot_tagged(PAGE_KERNEL),
++			       flags);
+ 	}
+ 
+ 	/*
+diff --git a/arch/mips/crypto/Makefile b/arch/mips/crypto/Makefile
+index 8e1deaf00e0c0..5e4105cccf9fa 100644
+--- a/arch/mips/crypto/Makefile
++++ b/arch/mips/crypto/Makefile
+@@ -12,8 +12,8 @@ AFLAGS_chacha-core.o += -O2 # needed to fill branch delay slots
+ obj-$(CONFIG_CRYPTO_POLY1305_MIPS) += poly1305-mips.o
+ poly1305-mips-y := poly1305-core.o poly1305-glue.o
+ 
+-perlasm-flavour-$(CONFIG_CPU_MIPS32) := o32
+-perlasm-flavour-$(CONFIG_CPU_MIPS64) := 64
++perlasm-flavour-$(CONFIG_32BIT) := o32
++perlasm-flavour-$(CONFIG_64BIT) := 64
+ 
+ quiet_cmd_perlasm = PERLASM $@
+       cmd_perlasm = $(PERL) $(<) $(perlasm-flavour-y) $(@)
+diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
+index eacc9102c2515..d5b3c3bb95b40 100644
+--- a/arch/powerpc/include/asm/code-patching.h
++++ b/arch/powerpc/include/asm/code-patching.h
+@@ -73,7 +73,7 @@ void __patch_exception(int exc, unsigned long addr);
+ #endif
+ 
+ #define OP_RT_RA_MASK	0xffff0000UL
+-#define LIS_R2		0x3c020000UL
++#define LIS_R2		0x3c400000UL
+ #define ADDIS_R2_R12	0x3c4c0000UL
+ #define ADDI_R2_R2	0x38420000UL
+ 
+diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
+index 475687f24f4ad..d319160d790c0 100644
+--- a/arch/powerpc/include/asm/machdep.h
++++ b/arch/powerpc/include/asm/machdep.h
+@@ -59,6 +59,9 @@ struct machdep_calls {
+ 	int		(*pcibios_root_bridge_prepare)(struct pci_host_bridge
+ 				*bridge);
+ 
++	/* finds all the pci_controllers present at boot */
++	void 		(*discover_phbs)(void);
++
+ 	/* To setup PHBs when using automatic OF platform driver for PCI */
+ 	int		(*pci_setup_phb)(struct pci_controller *host);
+ 
+diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
+index e2c778c176a3a..d6f262df4f346 100644
+--- a/arch/powerpc/include/asm/ptrace.h
++++ b/arch/powerpc/include/asm/ptrace.h
+@@ -62,6 +62,9 @@ struct pt_regs
+ };
+ #endif
+ 
++
++#define STACK_FRAME_WITH_PT_REGS (STACK_FRAME_OVERHEAD + sizeof(struct pt_regs))
++
+ #ifdef __powerpc64__
+ 
+ /*
+@@ -190,7 +193,7 @@ extern int ptrace_put_reg(struct task_struct *task, int regno,
+ #define TRAP_FLAGS_MASK		0x11
+ #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
+ #define FULL_REGS(regs)		(((regs)->trap & 1) == 0)
+-#define SET_FULL_REGS(regs)	((regs)->trap |= 1)
++#define SET_FULL_REGS(regs)	((regs)->trap &= ~1)
+ #endif
+ #define CHECK_FULL_REGS(regs)	BUG_ON(!FULL_REGS(regs))
+ #define NV_REG_POISON		0xdeadbeefdeadbeefUL
+@@ -205,7 +208,7 @@ extern int ptrace_put_reg(struct task_struct *task, int regno,
+ #define TRAP_FLAGS_MASK		0x1F
+ #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
+ #define FULL_REGS(regs)		(((regs)->trap & 1) == 0)
+-#define SET_FULL_REGS(regs)	((regs)->trap |= 1)
++#define SET_FULL_REGS(regs)	((regs)->trap &= ~1)
+ #define IS_CRITICAL_EXC(regs)	(((regs)->trap & 2) != 0)
+ #define IS_MCHECK_EXC(regs)	(((regs)->trap & 4) != 0)
+ #define IS_DEBUG_EXC(regs)	(((regs)->trap & 8) != 0)
+diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h
+index fdab934283721..9d1fbd8be1c74 100644
+--- a/arch/powerpc/include/asm/switch_to.h
++++ b/arch/powerpc/include/asm/switch_to.h
+@@ -71,6 +71,16 @@ static inline void disable_kernel_vsx(void)
+ {
+ 	msr_check_and_clear(MSR_FP|MSR_VEC|MSR_VSX);
+ }
++#else
++static inline void enable_kernel_vsx(void)
++{
++	BUILD_BUG();
++}
++
++static inline void disable_kernel_vsx(void)
++{
++	BUILD_BUG();
++}
+ #endif
+ 
+ #ifdef CONFIG_SPE
+diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
+index c2722ff36e982..5c125255571cd 100644
+--- a/arch/powerpc/kernel/asm-offsets.c
++++ b/arch/powerpc/kernel/asm-offsets.c
+@@ -307,7 +307,7 @@ int main(void)
+ 
+ 	/* Interrupt register frame */
+ 	DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE);
+-	DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs));
++	DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_WITH_PT_REGS);
+ 	STACK_PT_REGS_OFFSET(GPR0, gpr[0]);
+ 	STACK_PT_REGS_OFFSET(GPR1, gpr[1]);
+ 	STACK_PT_REGS_OFFSET(GPR2, gpr[2]);
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index 3cde2fbd74fce..9d3b468bd2d7a 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -470,7 +470,7 @@ DEFINE_FIXED_SYMBOL(\name\()_common_real)
+ 
+ 	ld	r10,PACAKMSR(r13)	/* get MSR value for kernel */
+ 	/* MSR[RI] is clear iff using SRR regs */
+-	.if IHSRR == EXC_HV_OR_STD
++	.if IHSRR_IF_HVMODE
+ 	BEGIN_FTR_SECTION
+ 	xori	r10,r10,MSR_RI
+ 	END_FTR_SECTION_IFCLR(CPU_FTR_HVMODE)
+diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
+index 2729d8fa6e77c..96b45901da647 100644
+--- a/arch/powerpc/kernel/head_book3s_32.S
++++ b/arch/powerpc/kernel/head_book3s_32.S
+@@ -461,10 +461,11 @@ InstructionTLBMiss:
+ 	cmplw	0,r1,r3
+ #endif
+ 	mfspr	r2, SPRN_SPRG_PGDIR
+-	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
++	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC | _PAGE_USER
+ #if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC)
+ 	bgt-	112f
+ 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
++	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
+ 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
+ #endif
+ 112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
+@@ -523,9 +524,10 @@ DataLoadTLBMiss:
+ 	lis	r1, TASK_SIZE@h		/* check if kernel address */
+ 	cmplw	0,r1,r3
+ 	mfspr	r2, SPRN_SPRG_PGDIR
+-	li	r1, _PAGE_PRESENT | _PAGE_ACCESSED
++	li	r1, _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER
+ 	bgt-	112f
+ 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
++	li	r1, _PAGE_PRESENT | _PAGE_ACCESSED
+ 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
+ 112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
+ 	lwz	r2,0(r2)		/* get pmd entry */
+@@ -599,9 +601,10 @@ DataStoreTLBMiss:
+ 	lis	r1, TASK_SIZE@h		/* check if kernel address */
+ 	cmplw	0,r1,r3
+ 	mfspr	r2, SPRN_SPRG_PGDIR
+-	li	r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
++	li	r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER
+ 	bgt-	112f
+ 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
++	li	r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
+ 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
+ 112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
+ 	lwz	r2,0(r2)		/* get pmd entry */
+diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
+index be108616a721f..7920559a1ca81 100644
+--- a/arch/powerpc/kernel/pci-common.c
++++ b/arch/powerpc/kernel/pci-common.c
+@@ -1625,3 +1625,13 @@ static void fixup_hide_host_resource_fsl(struct pci_dev *dev)
+ }
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MOTOROLA, PCI_ANY_ID, fixup_hide_host_resource_fsl);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_FREESCALE, PCI_ANY_ID, fixup_hide_host_resource_fsl);
++
++
++static int __init discover_phbs(void)
++{
++	if (ppc_md.discover_phbs)
++		ppc_md.discover_phbs();
++
++	return 0;
++}
++core_initcall(discover_phbs);
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index d421a2c7f8224..1a1d2657fe8dd 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -2170,7 +2170,7 @@ void show_stack(struct task_struct *tsk, unsigned long *stack,
+ 		 * See if this is an exception frame.
+ 		 * We look for the "regshere" marker in the current frame.
+ 		 */
+-		if (validate_sp(sp, tsk, STACK_INT_FRAME_SIZE)
++		if (validate_sp(sp, tsk, STACK_FRAME_WITH_PT_REGS)
+ 		    && stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
+ 			struct pt_regs *regs = (struct pt_regs *)
+ 				(sp + STACK_FRAME_OVERHEAD);
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index 5006dcbe1d9fd..77dffea3d5373 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -509,8 +509,11 @@ out:
+ 		die("Unrecoverable nested System Reset", regs, SIGABRT);
+ #endif
+ 	/* Must die if the interrupt is not recoverable */
+-	if (!(regs->msr & MSR_RI))
++	if (!(regs->msr & MSR_RI)) {
++		/* For the reason explained in die_mce, nmi_exit before die */
++		nmi_exit();
+ 		die("Unrecoverable System Reset", regs, SIGABRT);
++	}
+ 
+ 	if (saved_hsrrs) {
+ 		mtspr(SPRN_HSRR0, hsrr0);
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index 43599e671d383..ded4a3efd3f06 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -211,7 +211,7 @@ static inline void perf_get_data_addr(struct perf_event *event, struct pt_regs *
+ 	if (!(mmcra & MMCRA_SAMPLE_ENABLE) || sdar_valid)
+ 		*addrp = mfspr(SPRN_SDAR);
+ 
+-	if (is_kernel_addr(mfspr(SPRN_SDAR)) && perf_allow_kernel(&event->attr) != 0)
++	if (is_kernel_addr(mfspr(SPRN_SDAR)) && event->attr.exclude_kernel)
+ 		*addrp = 0;
+ }
+ 
+@@ -477,7 +477,7 @@ static void power_pmu_bhrb_read(struct perf_event *event, struct cpu_hw_events *
+ 			 * addresses, hence include a check before filtering code
+ 			 */
+ 			if (!(ppmu->flags & PPMU_ARCH_31) &&
+-				is_kernel_addr(addr) && perf_allow_kernel(&event->attr) != 0)
++			    is_kernel_addr(addr) && event->attr.exclude_kernel)
+ 				continue;
+ 
+ 			/* Branches are read most recent first (ie. mfbhrb 0 is
+@@ -2112,7 +2112,17 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
+ 			left += period;
+ 			if (left <= 0)
+ 				left = period;
+-			record = siar_valid(regs);
++
++			/*
++			 * If address is not requested in the sample via
++			 * PERF_SAMPLE_IP, just record that sample irrespective
++			 * of SIAR valid check.
++			 */
++			if (event->attr.sample_type & PERF_SAMPLE_IP)
++				record = siar_valid(regs);
++			else
++				record = 1;
++
+ 			event->hw.last_period = event->hw.sample_period;
+ 		}
+ 		if (left < 0x80000000LL)
+@@ -2130,9 +2140,10 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
+ 	 * MMCR2. Check attr.exclude_kernel and address to drop the sample in
+ 	 * these cases.
+ 	 */
+-	if (event->attr.exclude_kernel && record)
+-		if (is_kernel_addr(mfspr(SPRN_SIAR)))
+-			record = 0;
++	if (event->attr.exclude_kernel &&
++	    (event->attr.sample_type & PERF_SAMPLE_IP) &&
++	    is_kernel_addr(mfspr(SPRN_SIAR)))
++		record = 0;
+ 
+ 	/*
+ 	 * Finally record data if requested.
+diff --git a/arch/powerpc/platforms/pseries/msi.c b/arch/powerpc/platforms/pseries/msi.c
+index b3ac2455faadc..637300330507f 100644
+--- a/arch/powerpc/platforms/pseries/msi.c
++++ b/arch/powerpc/platforms/pseries/msi.c
+@@ -4,6 +4,7 @@
+  * Copyright 2006-2007 Michael Ellerman, IBM Corp.
+  */
+ 
++#include <linux/crash_dump.h>
+ #include <linux/device.h>
+ #include <linux/irq.h>
+ #include <linux/msi.h>
+@@ -458,8 +459,28 @@ again:
+ 			return hwirq;
+ 		}
+ 
+-		virq = irq_create_mapping_affinity(NULL, hwirq,
+-						   entry->affinity);
++		/*
++		 * Depending on the number of online CPUs in the original
++		 * kernel, it is likely for CPU #0 to be offline in a kdump
++		 * kernel. The associated IRQs in the affinity mappings
++		 * provided by irq_create_affinity_masks() are thus not
++		 * started by irq_startup(), as per-design for managed IRQs.
++		 * This can be a problem with multi-queue block devices driven
++		 * by blk-mq : such a non-started IRQ is very likely paired
++		 * with the single queue enforced by blk-mq during kdump (see
++		 * blk_mq_alloc_tag_set()). This causes the device to remain
++		 * silent and likely hangs the guest at some point.
++		 *
++		 * We don't really care for fine-grained affinity when doing
++		 * kdump actually : simply ignore the pre-computed affinity
++		 * masks in this case and let the default mask with all CPUs
++		 * be used when creating the IRQ mappings.
++		 */
++		if (is_kdump_kernel())
++			virq = irq_create_mapping(NULL, hwirq);
++		else
++			virq = irq_create_mapping_affinity(NULL, hwirq,
++							   entry->affinity);
+ 
+ 		if (!virq) {
+ 			pr_debug("rtas_msi: Failed mapping hwirq %d\n", hwirq);
+diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
+index 3a0d545f0ce84..791bc373418bd 100644
+--- a/arch/s390/kernel/smp.c
++++ b/arch/s390/kernel/smp.c
+@@ -775,7 +775,7 @@ static int smp_add_core(struct sclp_core_entry *core, cpumask_t *avail,
+ static int __smp_rescan_cpus(struct sclp_core_info *info, bool early)
+ {
+ 	struct sclp_core_entry *core;
+-	cpumask_t avail;
++	static cpumask_t avail;
+ 	bool configured;
+ 	u16 core_id;
+ 	int nr, i;
+diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
+index f94532f25db14..274217e7ed702 100644
+--- a/arch/sparc/include/asm/mman.h
++++ b/arch/sparc/include/asm/mman.h
+@@ -57,35 +57,39 @@ static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
+ {
+ 	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
+ 		return 0;
+-	if (prot & PROT_ADI) {
+-		if (!adi_capable())
+-			return 0;
++	return 1;
++}
+ 
+-		if (addr) {
+-			struct vm_area_struct *vma;
++#define arch_validate_flags(vm_flags) arch_validate_flags(vm_flags)
++/* arch_validate_flags() - Ensure combination of flags is valid for a
++ *	VMA.
++ */
++static inline bool arch_validate_flags(unsigned long vm_flags)
++{
++	/* If ADI is being enabled on this VMA, check for ADI
++	 * capability on the platform and ensure VMA is suitable
++	 * for ADI
++	 */
++	if (vm_flags & VM_SPARC_ADI) {
++		if (!adi_capable())
++			return false;
+ 
+-			vma = find_vma(current->mm, addr);
+-			if (vma) {
+-				/* ADI can not be enabled on PFN
+-				 * mapped pages
+-				 */
+-				if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+-					return 0;
++		/* ADI can not be enabled on PFN mapped pages */
++		if (vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
++			return false;
+ 
+-				/* Mergeable pages can become unmergeable
+-				 * if ADI is enabled on them even if they
+-				 * have identical data on them. This can be
+-				 * because ADI enabled pages with identical
+-				 * data may still not have identical ADI
+-				 * tags on them. Disallow ADI on mergeable
+-				 * pages.
+-				 */
+-				if (vma->vm_flags & VM_MERGEABLE)
+-					return 0;
+-			}
+-		}
++		/* Mergeable pages can become unmergeable
++		 * if ADI is enabled on them even if they
++		 * have identical data on them. This can be
++		 * because ADI enabled pages with identical
++		 * data may still not have identical ADI
++		 * tags on them. Disallow ADI on mergeable
++		 * pages.
++		 */
++		if (vm_flags & VM_MERGEABLE)
++			return false;
+ 	}
+-	return 1;
++	return true;
+ }
+ #endif /* CONFIG_SPARC64 */
+ 
+diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
+index eb2946b1df8a4..6139c5700ccc9 100644
+--- a/arch/sparc/mm/init_32.c
++++ b/arch/sparc/mm/init_32.c
+@@ -197,6 +197,9 @@ unsigned long __init bootmem_init(unsigned long *pages_avail)
+ 	size = memblock_phys_mem_size() - memblock_reserved_size();
+ 	*pages_avail = (size >> PAGE_SHIFT) - high_pages;
+ 
++	/* Only allow low memory to be allocated via memblock allocation */
++	memblock_set_current_limit(max_low_pfn << PAGE_SHIFT);
++
+ 	return max_pfn;
+ }
+ 
+diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
+index de5358671750d..2e4d91f3feea4 100644
+--- a/arch/x86/entry/common.c
++++ b/arch/x86/entry/common.c
+@@ -128,7 +128,8 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
+ 		regs->ax = -EFAULT;
+ 
+ 		instrumentation_end();
+-		syscall_exit_to_user_mode(regs);
++		local_irq_disable();
++		irqentry_exit_to_user_mode(regs);
+ 		return false;
+ 	}
+ 
+@@ -213,40 +214,6 @@ SYSCALL_DEFINE0(ni_syscall)
+ 	return -ENOSYS;
+ }
+ 
+-noinstr bool idtentry_enter_nmi(struct pt_regs *regs)
+-{
+-	bool irq_state = lockdep_hardirqs_enabled();
+-
+-	__nmi_enter();
+-	lockdep_hardirqs_off(CALLER_ADDR0);
+-	lockdep_hardirq_enter();
+-	rcu_nmi_enter();
+-
+-	instrumentation_begin();
+-	trace_hardirqs_off_finish();
+-	ftrace_nmi_enter();
+-	instrumentation_end();
+-
+-	return irq_state;
+-}
+-
+-noinstr void idtentry_exit_nmi(struct pt_regs *regs, bool restore)
+-{
+-	instrumentation_begin();
+-	ftrace_nmi_exit();
+-	if (restore) {
+-		trace_hardirqs_on_prepare();
+-		lockdep_hardirqs_on_prepare(CALLER_ADDR0);
+-	}
+-	instrumentation_end();
+-
+-	rcu_nmi_exit();
+-	lockdep_hardirq_exit();
+-	if (restore)
+-		lockdep_hardirqs_on(CALLER_ADDR0);
+-	__nmi_exit();
+-}
+-
+ #ifdef CONFIG_XEN_PV
+ #ifndef CONFIG_PREEMPTION
+ /*
+diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
+index 541fdaf640453..0051cf5c792d1 100644
+--- a/arch/x86/entry/entry_64_compat.S
++++ b/arch/x86/entry/entry_64_compat.S
+@@ -210,6 +210,8 @@ SYM_CODE_START(entry_SYSCALL_compat)
+ 	/* Switch to the kernel stack */
+ 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
+ 
++SYM_INNER_LABEL(entry_SYSCALL_compat_safe_stack, SYM_L_GLOBAL)
++
+ 	/* Construct struct pt_regs on stack */
+ 	pushq	$__USER32_DS		/* pt_regs->ss */
+ 	pushq	%r8			/* pt_regs->sp */
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index aaa7bffdb20f5..4b05c876f9f69 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3565,8 +3565,10 @@ static int intel_pmu_hw_config(struct perf_event *event)
+ 		if (!(event->attr.freq || (event->attr.wakeup_events && !event->attr.watermark))) {
+ 			event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
+ 			if (!(event->attr.sample_type &
+-			      ~intel_pmu_large_pebs_flags(event)))
++			      ~intel_pmu_large_pebs_flags(event))) {
+ 				event->hw.flags |= PERF_X86_EVENT_LARGE_PEBS;
++				event->attach_state |= PERF_ATTACH_SCHED_CB;
++			}
+ 		}
+ 		if (x86_pmu.pebs_aliases)
+ 			x86_pmu.pebs_aliases(event);
+@@ -3579,6 +3581,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
+ 		ret = intel_pmu_setup_lbr_filter(event);
+ 		if (ret)
+ 			return ret;
++		event->attach_state |= PERF_ATTACH_SCHED_CB;
+ 
+ 		/*
+ 		 * BTS is set up earlier in this path, so don't account twice
+diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
+index eb01c2618a9df..f656aabd1545c 100644
+--- a/arch/x86/include/asm/idtentry.h
++++ b/arch/x86/include/asm/idtentry.h
+@@ -11,9 +11,6 @@
+ 
+ #include <asm/irq_stack.h>
+ 
+-bool idtentry_enter_nmi(struct pt_regs *regs);
+-void idtentry_exit_nmi(struct pt_regs *regs, bool irq_state);
+-
+ /**
+  * DECLARE_IDTENTRY - Declare functions for simple IDT entry points
+  *		      No error code pushed by hardware
+diff --git a/arch/x86/include/asm/insn-eval.h b/arch/x86/include/asm/insn-eval.h
+index a0f839aa144d9..98b4dae5e8bc8 100644
+--- a/arch/x86/include/asm/insn-eval.h
++++ b/arch/x86/include/asm/insn-eval.h
+@@ -23,6 +23,8 @@ unsigned long insn_get_seg_base(struct pt_regs *regs, int seg_reg_idx);
+ int insn_get_code_seg_params(struct pt_regs *regs);
+ int insn_fetch_from_user(struct pt_regs *regs,
+ 			 unsigned char buf[MAX_INSN_SIZE]);
++int insn_fetch_from_user_inatomic(struct pt_regs *regs,
++				  unsigned char buf[MAX_INSN_SIZE]);
+ bool insn_decode(struct insn *insn, struct pt_regs *regs,
+ 		 unsigned char buf[MAX_INSN_SIZE], int buf_size);
+ 
+diff --git a/arch/x86/include/asm/proto.h b/arch/x86/include/asm/proto.h
+index 2c35f1c01a2df..b6a9d51d1d791 100644
+--- a/arch/x86/include/asm/proto.h
++++ b/arch/x86/include/asm/proto.h
+@@ -25,6 +25,7 @@ void __end_SYSENTER_singlestep_region(void);
+ void entry_SYSENTER_compat(void);
+ void __end_entry_SYSENTER_compat(void);
+ void entry_SYSCALL_compat(void);
++void entry_SYSCALL_compat_safe_stack(void);
+ void entry_INT80_compat(void);
+ #ifdef CONFIG_XEN_PV
+ void xen_entry_INT80_compat(void);
+diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
+index d8324a2366961..409f661481e11 100644
+--- a/arch/x86/include/asm/ptrace.h
++++ b/arch/x86/include/asm/ptrace.h
+@@ -94,6 +94,8 @@ struct pt_regs {
+ #include <asm/paravirt_types.h>
+ #endif
+ 
++#include <asm/proto.h>
++
+ struct cpuinfo_x86;
+ struct task_struct;
+ 
+@@ -175,6 +177,19 @@ static inline bool any_64bit_mode(struct pt_regs *regs)
+ #ifdef CONFIG_X86_64
+ #define current_user_stack_pointer()	current_pt_regs()->sp
+ #define compat_user_stack_pointer()	current_pt_regs()->sp
++
++static inline bool ip_within_syscall_gap(struct pt_regs *regs)
++{
++	bool ret = (regs->ip >= (unsigned long)entry_SYSCALL_64 &&
++		    regs->ip <  (unsigned long)entry_SYSCALL_64_safe_stack);
++
++#ifdef CONFIG_IA32_EMULATION
++	ret = ret || (regs->ip >= (unsigned long)entry_SYSCALL_compat &&
++		      regs->ip <  (unsigned long)entry_SYSCALL_compat_safe_stack);
++#endif
++
++	return ret;
++}
+ #endif
+ 
+ static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 311688202ea51..b7a27589dfa0b 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -1986,7 +1986,7 @@ void (*machine_check_vector)(struct pt_regs *) = unexpected_machine_check;
+ 
+ static __always_inline void exc_machine_check_kernel(struct pt_regs *regs)
+ {
+-	bool irq_state;
++	irqentry_state_t irq_state;
+ 
+ 	WARN_ON_ONCE(user_mode(regs));
+ 
+@@ -1998,7 +1998,7 @@ static __always_inline void exc_machine_check_kernel(struct pt_regs *regs)
+ 	    mce_check_crashing_cpu())
+ 		return;
+ 
+-	irq_state = idtentry_enter_nmi(regs);
++	irq_state = irqentry_nmi_enter(regs);
+ 	/*
+ 	 * The call targets are marked noinstr, but objtool can't figure
+ 	 * that out because it's an indirect call. Annotate it.
+@@ -2009,7 +2009,7 @@ static __always_inline void exc_machine_check_kernel(struct pt_regs *regs)
+ 	if (regs->flags & X86_EFLAGS_IF)
+ 		trace_hardirqs_on_prepare();
+ 	instrumentation_end();
+-	idtentry_exit_nmi(regs, irq_state);
++	irqentry_nmi_exit(regs, irq_state);
+ }
+ 
+ static __always_inline void exc_machine_check_user(struct pt_regs *regs)
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index 34b18f6eeb2ce..5ee705b44560b 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -269,21 +269,20 @@ static void __init kvmclock_init_mem(void)
+ 
+ static int __init kvm_setup_vsyscall_timeinfo(void)
+ {
+-#ifdef CONFIG_X86_64
+-	u8 flags;
++	kvmclock_init_mem();
+ 
+-	if (!per_cpu(hv_clock_per_cpu, 0) || !kvmclock_vsyscall)
+-		return 0;
++#ifdef CONFIG_X86_64
++	if (per_cpu(hv_clock_per_cpu, 0) && kvmclock_vsyscall) {
++		u8 flags;
+ 
+-	flags = pvclock_read_flags(&hv_clock_boot[0].pvti);
+-	if (!(flags & PVCLOCK_TSC_STABLE_BIT))
+-		return 0;
++		flags = pvclock_read_flags(&hv_clock_boot[0].pvti);
++		if (!(flags & PVCLOCK_TSC_STABLE_BIT))
++			return 0;
+ 
+-	kvm_clock.vdso_clock_mode = VDSO_CLOCKMODE_PVCLOCK;
++		kvm_clock.vdso_clock_mode = VDSO_CLOCKMODE_PVCLOCK;
++	}
+ #endif
+ 
+-	kvmclock_init_mem();
+-
+ 	return 0;
+ }
+ early_initcall(kvm_setup_vsyscall_timeinfo);
+diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
+index 4bc77aaf13039..bf250a339655f 100644
+--- a/arch/x86/kernel/nmi.c
++++ b/arch/x86/kernel/nmi.c
+@@ -475,7 +475,7 @@ static DEFINE_PER_CPU(unsigned long, nmi_dr7);
+ 
+ DEFINE_IDTENTRY_RAW(exc_nmi)
+ {
+-	bool irq_state;
++	irqentry_state_t irq_state;
+ 
+ 	/*
+ 	 * Re-enable NMIs right here when running as an SEV-ES guest. This might
+@@ -502,14 +502,14 @@ nmi_restart:
+ 
+ 	this_cpu_write(nmi_dr7, local_db_save());
+ 
+-	irq_state = idtentry_enter_nmi(regs);
++	irq_state = irqentry_nmi_enter(regs);
+ 
+ 	inc_irq_stat(__nmi_count);
+ 
+ 	if (!ignore_nmis)
+ 		default_do_nmi(regs);
+ 
+-	idtentry_exit_nmi(regs, irq_state);
++	irqentry_nmi_exit(regs, irq_state);
+ 
+ 	local_db_restore(this_cpu_read(nmi_dr7));
+ 
+diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
+index 84c1821819afb..04a780abb512d 100644
+--- a/arch/x86/kernel/sev-es.c
++++ b/arch/x86/kernel/sev-es.c
+@@ -121,8 +121,18 @@ static void __init setup_vc_stacks(int cpu)
+ 	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
+ }
+ 
+-static __always_inline bool on_vc_stack(unsigned long sp)
++static __always_inline bool on_vc_stack(struct pt_regs *regs)
+ {
++	unsigned long sp = regs->sp;
++
++	/* User-mode RSP is not trusted */
++	if (user_mode(regs))
++		return false;
++
++	/* SYSCALL gap still has user-mode RSP */
++	if (ip_within_syscall_gap(regs))
++		return false;
++
+ 	return ((sp >= __this_cpu_ist_bottom_va(VC)) && (sp < __this_cpu_ist_top_va(VC)));
+ }
+ 
+@@ -144,7 +154,7 @@ void noinstr __sev_es_ist_enter(struct pt_regs *regs)
+ 	old_ist = __this_cpu_read(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC]);
+ 
+ 	/* Make room on the IST stack */
+-	if (on_vc_stack(regs->sp))
++	if (on_vc_stack(regs))
+ 		new_ist = ALIGN_DOWN(regs->sp, 8) - sizeof(old_ist);
+ 	else
+ 		new_ist = old_ist - sizeof(old_ist);
+@@ -248,7 +258,7 @@ static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt)
+ 	int res;
+ 
+ 	if (user_mode(ctxt->regs)) {
+-		res = insn_fetch_from_user(ctxt->regs, buffer);
++		res = insn_fetch_from_user_inatomic(ctxt->regs, buffer);
+ 		if (!res) {
+ 			ctxt->fi.vector     = X86_TRAP_PF;
+ 			ctxt->fi.error_code = X86_PF_INSTR | X86_PF_USER;
+@@ -1248,13 +1258,12 @@ static __always_inline bool on_vc_fallback_stack(struct pt_regs *regs)
+ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
+ {
+ 	struct sev_es_runtime_data *data = this_cpu_read(runtime_data);
++	irqentry_state_t irq_state;
+ 	struct ghcb_state state;
+ 	struct es_em_ctxt ctxt;
+ 	enum es_result result;
+ 	struct ghcb *ghcb;
+ 
+-	lockdep_assert_irqs_disabled();
+-
+ 	/*
+ 	 * Handle #DB before calling into !noinstr code to avoid recursive #DB.
+ 	 */
+@@ -1263,6 +1272,8 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
+ 		return;
+ 	}
+ 
++	irq_state = irqentry_nmi_enter(regs);
++	lockdep_assert_irqs_disabled();
+ 	instrumentation_begin();
+ 
+ 	/*
+@@ -1325,6 +1336,7 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
+ 
+ out:
+ 	instrumentation_end();
++	irqentry_nmi_exit(regs, irq_state);
+ 
+ 	return;
+ 
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 170c94ec00685..7692bf7908e6c 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -406,7 +406,7 @@ DEFINE_IDTENTRY_DF(exc_double_fault)
+ 	}
+ #endif
+ 
+-	idtentry_enter_nmi(regs);
++	irqentry_nmi_enter(regs);
+ 	instrumentation_begin();
+ 	notify_die(DIE_TRAP, str, regs, error_code, X86_TRAP_DF, SIGSEGV);
+ 
+@@ -652,12 +652,13 @@ DEFINE_IDTENTRY_RAW(exc_int3)
+ 		instrumentation_end();
+ 		irqentry_exit_to_user_mode(regs);
+ 	} else {
+-		bool irq_state = idtentry_enter_nmi(regs);
++		irqentry_state_t irq_state = irqentry_nmi_enter(regs);
++
+ 		instrumentation_begin();
+ 		if (!do_int3(regs))
+ 			die("int3", regs, 0);
+ 		instrumentation_end();
+-		idtentry_exit_nmi(regs, irq_state);
++		irqentry_nmi_exit(regs, irq_state);
+ 	}
+ }
+ 
+@@ -686,8 +687,7 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r
+ 	 * In the SYSCALL entry path the RSP value comes from user-space - don't
+ 	 * trust it and switch to the current kernel stack
+ 	 */
+-	if (regs->ip >= (unsigned long)entry_SYSCALL_64 &&
+-	    regs->ip <  (unsigned long)entry_SYSCALL_64_safe_stack) {
++	if (ip_within_syscall_gap(regs)) {
+ 		sp = this_cpu_read(cpu_current_top_of_stack);
+ 		goto sync;
+ 	}
+@@ -852,7 +852,7 @@ static __always_inline void exc_debug_kernel(struct pt_regs *regs,
+ 	 * includes the entry stack is excluded for everything.
+ 	 */
+ 	unsigned long dr7 = local_db_save();
+-	bool irq_state = idtentry_enter_nmi(regs);
++	irqentry_state_t irq_state = irqentry_nmi_enter(regs);
+ 	instrumentation_begin();
+ 
+ 	/*
+@@ -909,7 +909,7 @@ static __always_inline void exc_debug_kernel(struct pt_regs *regs,
+ 		regs->flags &= ~X86_EFLAGS_TF;
+ out:
+ 	instrumentation_end();
+-	idtentry_exit_nmi(regs, irq_state);
++	irqentry_nmi_exit(regs, irq_state);
+ 
+ 	local_db_restore(dr7);
+ }
+@@ -927,7 +927,7 @@ static __always_inline void exc_debug_user(struct pt_regs *regs,
+ 
+ 	/*
+ 	 * NB: We can't easily clear DR7 here because
+-	 * idtentry_exit_to_usermode() can invoke ptrace, schedule, access
++	 * irqentry_exit_to_usermode() can invoke ptrace, schedule, access
+ 	 * user memory, etc.  This means that a recursive #DB is possible.  If
+ 	 * this happens, that #DB will hit exc_debug_kernel() and clear DR7.
+ 	 * Since we're not on the IST stack right now, everything will be
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index 73f8001000669..c451d5f6422f6 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -367,8 +367,8 @@ static bool deref_stack_regs(struct unwind_state *state, unsigned long addr,
+ 	if (!stack_access_ok(state, addr, sizeof(struct pt_regs)))
+ 		return false;
+ 
+-	*ip = regs->ip;
+-	*sp = regs->sp;
++	*ip = READ_ONCE_NOCHECK(regs->ip);
++	*sp = READ_ONCE_NOCHECK(regs->sp);
+ 	return true;
+ }
+ 
+@@ -380,8 +380,8 @@ static bool deref_stack_iret_regs(struct unwind_state *state, unsigned long addr
+ 	if (!stack_access_ok(state, addr, IRET_FRAME_SIZE))
+ 		return false;
+ 
+-	*ip = regs->ip;
+-	*sp = regs->sp;
++	*ip = READ_ONCE_NOCHECK(regs->ip);
++	*sp = READ_ONCE_NOCHECK(regs->sp);
+ 	return true;
+ }
+ 
+@@ -402,12 +402,12 @@ static bool get_reg(struct unwind_state *state, unsigned int reg_off,
+ 		return false;
+ 
+ 	if (state->full_regs) {
+-		*val = ((unsigned long *)state->regs)[reg];
++		*val = READ_ONCE_NOCHECK(((unsigned long *)state->regs)[reg]);
+ 		return true;
+ 	}
+ 
+ 	if (state->prev_regs) {
+-		*val = ((unsigned long *)state->prev_regs)[reg];
++		*val = READ_ONCE_NOCHECK(((unsigned long *)state->prev_regs)[reg]);
+ 		return true;
+ 	}
+ 
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 86c33d53c90a0..4ca81ae9bc8ad 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1641,7 +1641,16 @@ static void apic_timer_expired(struct kvm_lapic *apic, bool from_timer_fn)
+ 	}
+ 
+ 	if (kvm_use_posted_timer_interrupt(apic->vcpu)) {
+-		kvm_wait_lapic_expire(vcpu);
++		/*
++		 * Ensure the guest's timer has truly expired before posting an
++		 * interrupt.  Open code the relevant checks to avoid querying
++		 * lapic_timer_int_injected(), which will be false since the
++		 * interrupt isn't yet injected.  Waiting until after injecting
++		 * is not an option since that won't help a posted interrupt.
++		 */
++		if (vcpu->arch.apic->lapic_timer.expired_tscdeadline &&
++		    vcpu->arch.apic->lapic_timer.timer_advance_ns)
++			__kvm_wait_lapic_expire(vcpu);
+ 		kvm_apic_inject_pending_timer_irqs(apic);
+ 		return;
+ 	}
+diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
+index 4229950a5d78c..bb0b3fe1e0a02 100644
+--- a/arch/x86/lib/insn-eval.c
++++ b/arch/x86/lib/insn-eval.c
+@@ -1415,6 +1415,25 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
+ 	}
+ }
+ 
++static unsigned long insn_get_effective_ip(struct pt_regs *regs)
++{
++	unsigned long seg_base = 0;
++
++	/*
++	 * If not in user-space long mode, a custom code segment could be in
++	 * use. This is true in protected mode (if the process defined a local
++	 * descriptor table), or virtual-8086 mode. In most of the cases
++	 * seg_base will be zero as in USER_CS.
++	 */
++	if (!user_64bit_mode(regs)) {
++		seg_base = insn_get_seg_base(regs, INAT_SEG_REG_CS);
++		if (seg_base == -1L)
++			return 0;
++	}
++
++	return seg_base + regs->ip;
++}
++
+ /**
+  * insn_fetch_from_user() - Copy instruction bytes from user-space memory
+  * @regs:	Structure with register values as seen when entering kernel mode
+@@ -1431,24 +1450,43 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
+  */
+ int insn_fetch_from_user(struct pt_regs *regs, unsigned char buf[MAX_INSN_SIZE])
+ {
+-	unsigned long seg_base = 0;
++	unsigned long ip;
+ 	int not_copied;
+ 
+-	/*
+-	 * If not in user-space long mode, a custom code segment could be in
+-	 * use. This is true in protected mode (if the process defined a local
+-	 * descriptor table), or virtual-8086 mode. In most of the cases
+-	 * seg_base will be zero as in USER_CS.
+-	 */
+-	if (!user_64bit_mode(regs)) {
+-		seg_base = insn_get_seg_base(regs, INAT_SEG_REG_CS);
+-		if (seg_base == -1L)
+-			return 0;
+-	}
++	ip = insn_get_effective_ip(regs);
++	if (!ip)
++		return 0;
++
++	not_copied = copy_from_user(buf, (void __user *)ip, MAX_INSN_SIZE);
+ 
++	return MAX_INSN_SIZE - not_copied;
++}
++
++/**
++ * insn_fetch_from_user_inatomic() - Copy instruction bytes from user-space memory
++ *                                   while in atomic code
++ * @regs:	Structure with register values as seen when entering kernel mode
++ * @buf:	Array to store the fetched instruction
++ *
++ * Gets the linear address of the instruction and copies the instruction bytes
++ * to the buf. This function must be used in atomic context.
++ *
++ * Returns:
++ *
++ * Number of instruction bytes copied.
++ *
++ * 0 if nothing was copied.
++ */
++int insn_fetch_from_user_inatomic(struct pt_regs *regs, unsigned char buf[MAX_INSN_SIZE])
++{
++	unsigned long ip;
++	int not_copied;
++
++	ip = insn_get_effective_ip(regs);
++	if (!ip)
++		return 0;
+ 
+-	not_copied = copy_from_user(buf, (void __user *)(seg_base + regs->ip),
+-				    MAX_INSN_SIZE);
++	not_copied = __copy_from_user_inatomic(buf, (void __user *)ip, MAX_INSN_SIZE);
+ 
+ 	return MAX_INSN_SIZE - not_copied;
+ }
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 6817a673e5cec..4676c6f00489c 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -318,6 +318,22 @@ int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
+ 	return 0;
+ }
+ 
++static int blkdev_truncate_zone_range(struct block_device *bdev, fmode_t mode,
++				      const struct blk_zone_range *zrange)
++{
++	loff_t start, end;
++
++	if (zrange->sector + zrange->nr_sectors <= zrange->sector ||
++	    zrange->sector + zrange->nr_sectors > get_capacity(bdev->bd_disk))
++		/* Out of range */
++		return -EINVAL;
++
++	start = zrange->sector << SECTOR_SHIFT;
++	end = ((zrange->sector + zrange->nr_sectors) << SECTOR_SHIFT) - 1;
++
++	return truncate_bdev_range(bdev, mode, start, end);
++}
++
+ /*
+  * BLKRESETZONE, BLKOPENZONE, BLKCLOSEZONE and BLKFINISHZONE ioctl processing.
+  * Called from blkdev_ioctl.
+@@ -329,6 +345,7 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
+ 	struct request_queue *q;
+ 	struct blk_zone_range zrange;
+ 	enum req_opf op;
++	int ret;
+ 
+ 	if (!argp)
+ 		return -EINVAL;
+@@ -352,6 +369,11 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
+ 	switch (cmd) {
+ 	case BLKRESETZONE:
+ 		op = REQ_OP_ZONE_RESET;
++
++		/* Invalidate the page cache, including dirty pages. */
++		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
++		if (ret)
++			return ret;
+ 		break;
+ 	case BLKOPENZONE:
+ 		op = REQ_OP_ZONE_OPEN;
+@@ -366,8 +388,20 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
+ 		return -ENOTTY;
+ 	}
+ 
+-	return blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
+-				GFP_KERNEL);
++	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
++			       GFP_KERNEL);
++
++	/*
++	 * Invalidate the page cache again for zone reset: writes can only be
++	 * direct for zoned devices so concurrent writes would not add any page
++	 * to the page cache after/during reset. The page cache may be filled
++	 * again due to concurrent reads though and dropping the pages for
++	 * these is fine.
++	 */
++	if (!ret && cmd == BLKRESETZONE)
++		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
++
++	return ret;
+ }
+ 
+ static inline unsigned long *blk_alloc_zone_bitmap(int node,
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 37de7d006858d..774adc9846fa8 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -772,7 +772,7 @@ config CRYPTO_POLY1305_X86_64
+ 
+ config CRYPTO_POLY1305_MIPS
+ 	tristate "Poly1305 authenticator algorithm (MIPS optimized)"
+-	depends on CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
++	depends on MIPS
+ 	select CRYPTO_ARCH_HAVE_LIB_POLY1305
+ 
+ config CRYPTO_MD4
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index eef4ffb6122c9..de058d15b33ea 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -290,20 +290,20 @@ static ssize_t state_store(struct device *dev, struct device_attribute *attr,
+ }
+ 
+ /*
+- * phys_device is a bad name for this.  What I really want
+- * is a way to differentiate between memory ranges that
+- * are part of physical devices that constitute
+- * a complete removable unit or fru.
+- * i.e. do these ranges belong to the same physical device,
+- * s.t. if I offline all of these sections I can then
+- * remove the physical device?
++ * Legacy interface that we cannot remove: s390x exposes the storage increment
++ * covered by a memory block, allowing for identifying which memory blocks
++ * comprise a storage increment. Since a memory block spans complete
++ * storage increments nowadays, this interface is basically unused. Other
++ * archs never exposed != 0.
+  */
+ static ssize_t phys_device_show(struct device *dev,
+ 				struct device_attribute *attr, char *buf)
+ {
+ 	struct memory_block *mem = to_memory_block(dev);
++	unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
+ 
+-	return sysfs_emit(buf, "%d\n", mem->phys_device);
++	return sysfs_emit(buf, "%d\n",
++			  arch_get_memory_phys_device(start_pfn));
+ }
+ 
+ #ifdef CONFIG_MEMORY_HOTREMOVE
+@@ -488,11 +488,7 @@ static DEVICE_ATTR_WO(soft_offline_page);
+ static DEVICE_ATTR_WO(hard_offline_page);
+ #endif
+ 
+-/*
+- * Note that phys_device is optional.  It is here to allow for
+- * differentiation between which *physical* devices each
+- * section belongs to...
+- */
++/* See phys_device_show(). */
+ int __weak arch_get_memory_phys_device(unsigned long start_pfn)
+ {
+ 	return 0;
+@@ -574,7 +570,6 @@ int register_memory(struct memory_block *memory)
+ static int init_memory_block(unsigned long block_id, unsigned long state)
+ {
+ 	struct memory_block *mem;
+-	unsigned long start_pfn;
+ 	int ret = 0;
+ 
+ 	mem = find_memory_block_by_id(block_id);
+@@ -588,8 +583,6 @@ static int init_memory_block(unsigned long block_id, unsigned long state)
+ 
+ 	mem->start_section_nr = block_id * sections_per_block;
+ 	mem->state = state;
+-	start_pfn = section_nr_to_pfn(mem->start_section_nr);
+-	mem->phys_device = arch_get_memory_phys_device(start_pfn);
+ 	mem->nid = NUMA_NO_NODE;
+ 
+ 	ret = register_memory(mem);
+diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
+index 615a0c93e1166..206bd4d7d7e23 100644
+--- a/drivers/base/swnode.c
++++ b/drivers/base/swnode.c
+@@ -786,6 +786,9 @@ int software_node_register(const struct software_node *node)
+ 	if (software_node_to_swnode(node))
+ 		return -EEXIST;
+ 
++	if (node->parent && !parent)
++		return -EINVAL;
++
+ 	return PTR_ERR_OR_ZERO(swnode_register(node, parent, 0));
+ }
+ EXPORT_SYMBOL_GPL(software_node_register);
+diff --git a/drivers/base/test/Makefile b/drivers/base/test/Makefile
+index 3ca56367c84b7..2f15fae8625f1 100644
+--- a/drivers/base/test/Makefile
++++ b/drivers/base/test/Makefile
+@@ -2,3 +2,4 @@
+ obj-$(CONFIG_TEST_ASYNC_DRIVER_PROBE)	+= test_async_driver_probe.o
+ 
+ obj-$(CONFIG_KUNIT_DRIVER_PE_TEST) += property-entry-test.o
++CFLAGS_REMOVE_property-entry-test.o += -fplugin-arg-structleak_plugin-byref -fplugin-arg-structleak_plugin-byref-all
+diff --git a/drivers/block/rsxx/core.c b/drivers/block/rsxx/core.c
+index 5ac1881396afb..227e1be4c6f99 100644
+--- a/drivers/block/rsxx/core.c
++++ b/drivers/block/rsxx/core.c
+@@ -871,6 +871,7 @@ static int rsxx_pci_probe(struct pci_dev *dev,
+ 	card->event_wq = create_singlethread_workqueue(DRIVER_NAME"_event");
+ 	if (!card->event_wq) {
+ 		dev_err(CARD_TO_DEV(card), "Failed card event setup.\n");
++		st = -ENOMEM;
+ 		goto failed_event_handler;
+ 	}
+ 
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index 711168451e9e5..7dce17fd59baa 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -633,7 +633,7 @@ static ssize_t writeback_store(struct device *dev,
+ 	struct bio_vec bio_vec;
+ 	struct page *page;
+ 	ssize_t ret = len;
+-	int mode;
++	int mode, err;
+ 	unsigned long blk_idx = 0;
+ 
+ 	if (sysfs_streq(buf, "idle"))
+@@ -725,12 +725,17 @@ static ssize_t writeback_store(struct device *dev,
+ 		 * XXX: A single page IO would be inefficient for write
+ 		 * but it would be not bad as starter.
+ 		 */
+-		ret = submit_bio_wait(&bio);
+-		if (ret) {
++		err = submit_bio_wait(&bio);
++		if (err) {
+ 			zram_slot_lock(zram, index);
+ 			zram_clear_flag(zram, index, ZRAM_UNDER_WB);
+ 			zram_clear_flag(zram, index, ZRAM_IDLE);
+ 			zram_slot_unlock(zram, index);
++			/*
++			 * Return last IO error unless every IO were
++			 * not suceeded.
++			 */
++			ret = err;
+ 			continue;
+ 		}
+ 
+diff --git a/drivers/clk/qcom/gdsc.c b/drivers/clk/qcom/gdsc.c
+index af26e0695b866..51ed640e527b4 100644
+--- a/drivers/clk/qcom/gdsc.c
++++ b/drivers/clk/qcom/gdsc.c
+@@ -183,7 +183,10 @@ static inline int gdsc_assert_reset(struct gdsc *sc)
+ static inline void gdsc_force_mem_on(struct gdsc *sc)
+ {
+ 	int i;
+-	u32 mask = RETAIN_MEM | RETAIN_PERIPH;
++	u32 mask = RETAIN_MEM;
++
++	if (!(sc->flags & NO_RET_PERIPH))
++		mask |= RETAIN_PERIPH;
+ 
+ 	for (i = 0; i < sc->cxc_count; i++)
+ 		regmap_update_bits(sc->regmap, sc->cxcs[i], mask, mask);
+@@ -192,7 +195,10 @@ static inline void gdsc_force_mem_on(struct gdsc *sc)
+ static inline void gdsc_clear_mem_on(struct gdsc *sc)
+ {
+ 	int i;
+-	u32 mask = RETAIN_MEM | RETAIN_PERIPH;
++	u32 mask = RETAIN_MEM;
++
++	if (!(sc->flags & NO_RET_PERIPH))
++		mask |= RETAIN_PERIPH;
+ 
+ 	for (i = 0; i < sc->cxc_count; i++)
+ 		regmap_update_bits(sc->regmap, sc->cxcs[i], mask, 0);
+diff --git a/drivers/clk/qcom/gdsc.h b/drivers/clk/qcom/gdsc.h
+index bd537438c7932..5bb396b344d16 100644
+--- a/drivers/clk/qcom/gdsc.h
++++ b/drivers/clk/qcom/gdsc.h
+@@ -42,7 +42,7 @@ struct gdsc {
+ #define PWRSTS_ON		BIT(2)
+ #define PWRSTS_OFF_ON		(PWRSTS_OFF | PWRSTS_ON)
+ #define PWRSTS_RET_ON		(PWRSTS_RET | PWRSTS_ON)
+-	const u8			flags;
++	const u16			flags;
+ #define VOTABLE		BIT(0)
+ #define CLAMP_IO	BIT(1)
+ #define HW_CTRL		BIT(2)
+@@ -51,6 +51,7 @@ struct gdsc {
+ #define POLL_CFG_GDSCR	BIT(5)
+ #define ALWAYS_ON	BIT(6)
+ #define RETAIN_FF_ENABLE	BIT(7)
++#define NO_RET_PERIPH	BIT(8)
+ 	struct reset_controller_dev	*rcdev;
+ 	unsigned int			*resets;
+ 	unsigned int			reset_count;
+diff --git a/drivers/clk/qcom/gpucc-msm8998.c b/drivers/clk/qcom/gpucc-msm8998.c
+index 9b3923af02a14..1a518c4915b4b 100644
+--- a/drivers/clk/qcom/gpucc-msm8998.c
++++ b/drivers/clk/qcom/gpucc-msm8998.c
+@@ -253,12 +253,16 @@ static struct gdsc gpu_cx_gdsc = {
+ static struct gdsc gpu_gx_gdsc = {
+ 	.gdscr = 0x1094,
+ 	.clamp_io_ctrl = 0x130,
++	.resets = (unsigned int []){ GPU_GX_BCR },
++	.reset_count = 1,
++	.cxcs = (unsigned int []){ 0x1098 },
++	.cxc_count = 1,
+ 	.pd = {
+ 		.name = "gpu_gx",
+ 	},
+ 	.parent = &gpu_cx_gdsc.pd,
+-	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = CLAMP_IO | AON_RESET,
++	.pwrsts = PWRSTS_OFF_ON | PWRSTS_RET,
++	.flags = CLAMP_IO | SW_RESET | AON_RESET | NO_RET_PERIPH,
+ };
+ 
+ static struct clk_regmap *gpucc_msm8998_clocks[] = {
+diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
+index 2726e77c9e5a9..6de07556665b1 100644
+--- a/drivers/cpufreq/qcom-cpufreq-hw.c
++++ b/drivers/cpufreq/qcom-cpufreq-hw.c
+@@ -317,9 +317,9 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
+ 	}
+ 
+ 	base = ioremap(res->start, resource_size(res));
+-	if (IS_ERR(base)) {
++	if (!base) {
+ 		dev_err(dev, "failed to map resource %pR\n", res);
+-		ret = PTR_ERR(base);
++		ret = -ENOMEM;
+ 		goto release_region;
+ 	}
+ 
+@@ -368,7 +368,7 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
+ error:
+ 	kfree(data);
+ unmap_base:
+-	iounmap(data->base);
++	iounmap(base);
+ release_region:
+ 	release_mem_region(res->start, resource_size(res));
+ 	return ret;
+diff --git a/drivers/firmware/efi/libstub/efi-stub.c b/drivers/firmware/efi/libstub/efi-stub.c
+index 914a343c7785c..0ab439c53eee3 100644
+--- a/drivers/firmware/efi/libstub/efi-stub.c
++++ b/drivers/firmware/efi/libstub/efi-stub.c
+@@ -96,6 +96,18 @@ static void install_memreserve_table(void)
+ 		efi_err("Failed to install memreserve config table!\n");
+ }
+ 
++static u32 get_supported_rt_services(void)
++{
++	const efi_rt_properties_table_t *rt_prop_table;
++	u32 supported = EFI_RT_SUPPORTED_ALL;
++
++	rt_prop_table = get_efi_config_table(EFI_RT_PROPERTIES_TABLE_GUID);
++	if (rt_prop_table)
++		supported &= rt_prop_table->runtime_services_supported;
++
++	return supported;
++}
++
+ /*
+  * EFI entry point for the arm/arm64 EFI stubs.  This is the entrypoint
+  * that is described in the PE/COFF header.  Most of the code is the same
+@@ -250,6 +262,10 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
+ 			  (prop_tbl->memory_protection_attribute &
+ 			   EFI_PROPERTIES_RUNTIME_MEMORY_PROTECTION_NON_EXECUTABLE_PE_DATA);
+ 
++	/* force efi_novamap if SetVirtualAddressMap() is unsupported */
++	efi_novamap |= !(get_supported_rt_services() &
++			 EFI_RT_SUPPORTED_SET_VIRTUAL_ADDRESS_MAP);
++
+ 	/* hibernation expects the runtime regions to stay in the same place */
+ 	if (!IS_ENABLED(CONFIG_HIBERNATION) && !efi_nokaslr && !flat_va_mapping) {
+ 		/*
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 825b362eb4b7d..6898c27f71f85 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -112,8 +112,29 @@ MODULE_DEVICE_TABLE(i2c, pca953x_id);
+ #ifdef CONFIG_GPIO_PCA953X_IRQ
+ 
+ #include <linux/dmi.h>
+-#include <linux/gpio.h>
+-#include <linux/list.h>
++
++static const struct acpi_gpio_params pca953x_irq_gpios = { 0, 0, true };
++
++static const struct acpi_gpio_mapping pca953x_acpi_irq_gpios[] = {
++	{ "irq-gpios", &pca953x_irq_gpios, 1, ACPI_GPIO_QUIRK_ABSOLUTE_NUMBER },
++	{ }
++};
++
++static int pca953x_acpi_get_irq(struct device *dev)
++{
++	int ret;
++
++	ret = devm_acpi_dev_add_driver_gpios(dev, pca953x_acpi_irq_gpios);
++	if (ret)
++		dev_warn(dev, "can't add GPIO ACPI mapping\n");
++
++	ret = acpi_dev_gpio_irq_get_by(ACPI_COMPANION(dev), "irq-gpios", 0);
++	if (ret < 0)
++		return ret;
++
++	dev_info(dev, "ACPI interrupt quirk (IRQ %d)\n", ret);
++	return ret;
++}
+ 
+ static const struct dmi_system_id pca953x_dmi_acpi_irq_info[] = {
+ 	{
+@@ -132,59 +153,6 @@ static const struct dmi_system_id pca953x_dmi_acpi_irq_info[] = {
+ 	},
+ 	{}
+ };
+-
+-#ifdef CONFIG_ACPI
+-static int pca953x_acpi_get_pin(struct acpi_resource *ares, void *data)
+-{
+-	struct acpi_resource_gpio *agpio;
+-	int *pin = data;
+-
+-	if (acpi_gpio_get_irq_resource(ares, &agpio))
+-		*pin = agpio->pin_table[0];
+-	return 1;
+-}
+-
+-static int pca953x_acpi_find_pin(struct device *dev)
+-{
+-	struct acpi_device *adev = ACPI_COMPANION(dev);
+-	int pin = -ENOENT, ret;
+-	LIST_HEAD(r);
+-
+-	ret = acpi_dev_get_resources(adev, &r, pca953x_acpi_get_pin, &pin);
+-	acpi_dev_free_resource_list(&r);
+-	if (ret < 0)
+-		return ret;
+-
+-	return pin;
+-}
+-#else
+-static inline int pca953x_acpi_find_pin(struct device *dev) { return -ENXIO; }
+-#endif
+-
+-static int pca953x_acpi_get_irq(struct device *dev)
+-{
+-	int pin, ret;
+-
+-	pin = pca953x_acpi_find_pin(dev);
+-	if (pin < 0)
+-		return pin;
+-
+-	dev_info(dev, "Applying ACPI interrupt quirk (GPIO %d)\n", pin);
+-
+-	if (!gpio_is_valid(pin))
+-		return -EINVAL;
+-
+-	ret = gpio_request(pin, "pca953x interrupt");
+-	if (ret)
+-		return ret;
+-
+-	ret = gpio_to_irq(pin);
+-
+-	/* When pin is used as an IRQ, no need to keep it requested */
+-	gpio_free(pin);
+-
+-	return ret;
+-}
+ #endif
+ 
+ static const struct acpi_device_id pca953x_acpi_ids[] = {
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index 834a12f3219e5..49a1f8ce4baa6 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -649,6 +649,7 @@ static int acpi_populate_gpio_lookup(struct acpi_resource *ares, void *data)
+ 	if (!lookup->desc) {
+ 		const struct acpi_resource_gpio *agpio = &ares->data.gpio;
+ 		bool gpioint = agpio->connection_type == ACPI_RESOURCE_GPIO_TYPE_INT;
++		struct gpio_desc *desc;
+ 		int pin_index;
+ 
+ 		if (lookup->info.quirks & ACPI_GPIO_QUIRK_ONLY_GPIOIO && gpioint)
+@@ -661,8 +662,12 @@ static int acpi_populate_gpio_lookup(struct acpi_resource *ares, void *data)
+ 		if (pin_index >= agpio->pin_table_length)
+ 			return 1;
+ 
+-		lookup->desc = acpi_get_gpiod(agpio->resource_source.string_ptr,
++		if (lookup->info.quirks & ACPI_GPIO_QUIRK_ABSOLUTE_NUMBER)
++			desc = gpio_to_desc(agpio->pin_table[pin_index]);
++		else
++			desc = acpi_get_gpiod(agpio->resource_source.string_ptr,
+ 					      agpio->pin_table[pin_index]);
++		lookup->desc = desc;
+ 		lookup->info.pin_config = agpio->pin_config;
+ 		lookup->info.gpioint = gpioint;
+ 
+@@ -911,8 +916,9 @@ struct gpio_desc *acpi_node_get_gpiod(struct fwnode_handle *fwnode,
+ }
+ 
+ /**
+- * acpi_dev_gpio_irq_get() - Find GpioInt and translate it to Linux IRQ number
++ * acpi_dev_gpio_irq_get_by() - Find GpioInt and translate it to Linux IRQ number
+  * @adev: pointer to a ACPI device to get IRQ from
++ * @name: optional name of GpioInt resource
+  * @index: index of GpioInt resource (starting from %0)
+  *
+  * If the device has one or more GpioInt resources, this function can be
+@@ -922,9 +928,12 @@ struct gpio_desc *acpi_node_get_gpiod(struct fwnode_handle *fwnode,
+  * The function is idempotent, though each time it runs it will configure GPIO
+  * pin direction according to the flags in GpioInt resource.
+  *
++ * The function takes optional @name parameter. If the resource has a property
++ * name, then only those will be taken into account.
++ *
+  * Return: Linux IRQ number (> %0) on success, negative errno on failure.
+  */
+-int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
++int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int index)
+ {
+ 	int idx, i;
+ 	unsigned int irq_flags;
+@@ -934,7 +943,7 @@ int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
+ 		struct acpi_gpio_info info;
+ 		struct gpio_desc *desc;
+ 
+-		desc = acpi_get_gpiod_by_index(adev, NULL, i, &info);
++		desc = acpi_get_gpiod_by_index(adev, name, i, &info);
+ 
+ 		/* Ignore -EPROBE_DEFER, it only matters if idx matches */
+ 		if (IS_ERR(desc) && PTR_ERR(desc) != -EPROBE_DEFER)
+@@ -971,7 +980,7 @@ int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
+ 	}
+ 	return -ENOENT;
+ }
+-EXPORT_SYMBOL_GPL(acpi_dev_gpio_irq_get);
++EXPORT_SYMBOL_GPL(acpi_dev_gpio_irq_get_by);
+ 
+ static acpi_status
+ acpi_gpio_adr_space_handler(u32 function, acpi_physical_address address,
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 7e17d4edccb12..7f557ea905424 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -472,8 +472,12 @@ EXPORT_SYMBOL_GPL(gpiochip_line_is_valid);
+ static void gpiodevice_release(struct device *dev)
+ {
+ 	struct gpio_device *gdev = dev_get_drvdata(dev);
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&gpio_lock, flags);
+ 	list_del(&gdev->list);
++	spin_unlock_irqrestore(&gpio_lock, flags);
++
+ 	ida_free(&gpio_ida, gdev->id);
+ 	kfree_const(gdev->label);
+ 	kfree(gdev->descs);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 87f095dc385c7..76c31aa7b84df 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -178,6 +178,7 @@ extern uint amdgpu_smu_memory_pool_size;
+ extern uint amdgpu_dc_feature_mask;
+ extern uint amdgpu_dc_debug_mask;
+ extern uint amdgpu_dm_abm_level;
++extern int amdgpu_backlight;
+ extern struct amdgpu_mgpu_info mgpu_info;
+ extern int amdgpu_ras_enable;
+ extern uint amdgpu_ras_mask;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 0b786d8dd8bc7..1a880cb48d19e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -768,6 +768,10 @@ uint amdgpu_dm_abm_level = 0;
+ MODULE_PARM_DESC(abmlevel, "ABM level (0 = off (default), 1-4 = backlight reduction level) ");
+ module_param_named(abmlevel, amdgpu_dm_abm_level, uint, 0444);
+ 
++int amdgpu_backlight = -1;
++MODULE_PARM_DESC(backlight, "Backlight control (0 = pwm, 1 = aux, -1 auto (default))");
++module_param_named(backlight, amdgpu_backlight, bint, 0444);
++
+ /**
+  * DOC: tmz (int)
+  * Trusted Memory Zone (TMZ) is a method to protect data being written
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index bffaefaf5a292..ea1ea147f6073 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2140,6 +2140,11 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
+ 	    caps->ext_caps->bits.hdr_aux_backlight_control == 1)
+ 		caps->aux_support = true;
+ 
++	if (amdgpu_backlight == 0)
++		caps->aux_support = false;
++	else if (amdgpu_backlight == 1)
++		caps->aux_support = true;
++
+ 	/* From the specification (CTA-861-G), for calculating the maximum
+ 	 * luminance we need to use:
+ 	 *	Luminance = 50*2**(CV/32)
+@@ -3038,19 +3043,6 @@ static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm)
+ #endif
+ }
+ 
+-static int set_backlight_via_aux(struct dc_link *link, uint32_t brightness)
+-{
+-	bool rc;
+-
+-	if (!link)
+-		return 1;
+-
+-	rc = dc_link_set_backlight_level_nits(link, true, brightness,
+-					      AUX_BL_DEFAULT_TRANSITION_TIME_MS);
+-
+-	return rc ? 0 : 1;
+-}
+-
+ static int get_brightness_range(const struct amdgpu_dm_backlight_caps *caps,
+ 				unsigned *min, unsigned *max)
+ {
+@@ -3113,9 +3105,10 @@ static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)
+ 	brightness = convert_brightness_from_user(&caps, bd->props.brightness);
+ 	// Change brightness based on AUX property
+ 	if (caps.aux_support)
+-		return set_backlight_via_aux(link, brightness);
+-
+-	rc = dc_link_set_backlight_level(dm->backlight_link, brightness, 0);
++		rc = dc_link_set_backlight_level_nits(link, true, brightness,
++						      AUX_BL_DEFAULT_TRANSITION_TIME_MS);
++	else
++		rc = dc_link_set_backlight_level(dm->backlight_link, brightness, 0);
+ 
+ 	return rc ? 0 : 1;
+ }
+@@ -3123,11 +3116,27 @@ static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)
+ static int amdgpu_dm_backlight_get_brightness(struct backlight_device *bd)
+ {
+ 	struct amdgpu_display_manager *dm = bl_get_data(bd);
+-	int ret = dc_link_get_backlight_level(dm->backlight_link);
++	struct amdgpu_dm_backlight_caps caps;
++
++	amdgpu_dm_update_backlight_caps(dm);
++	caps = dm->backlight_caps;
++
++	if (caps.aux_support) {
++		struct dc_link *link = (struct dc_link *)dm->backlight_link;
++		u32 avg, peak;
++		bool rc;
+ 
+-	if (ret == DC_ERROR_UNEXPECTED)
+-		return bd->props.brightness;
+-	return convert_brightness_to_user(&dm->backlight_caps, ret);
++		rc = dc_link_get_backlight_level_nits(link, &avg, &peak);
++		if (!rc)
++			return bd->props.brightness;
++		return convert_brightness_to_user(&caps, avg);
++	} else {
++		int ret = dc_link_get_backlight_level(dm->backlight_link);
++
++		if (ret == DC_ERROR_UNEXPECTED)
++			return bd->props.brightness;
++		return convert_brightness_to_user(&caps, ret);
++	}
+ }
+ 
+ static const struct backlight_ops amdgpu_dm_backlight_ops = {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 21c7b642a8b4e..f0039599e02f7 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -2555,7 +2555,6 @@ bool dc_link_set_backlight_level(const struct dc_link *link,
+ 			if (pipe_ctx->plane_state == NULL)
+ 				frame_ramp = 0;
+ 		} else {
+-			ASSERT(false);
+ 			return false;
+ 		}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index 4e2dcf259428f..b5fe2a008bd47 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -1058,8 +1058,6 @@ static void patch_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_s
+ {
+ 	int i;
+ 
+-	DC_FP_START();
+-
+ 	if (dc->bb_overrides.sr_exit_time_ns) {
+ 		for (i = 0; i < WM_SET_COUNT; i++) {
+ 			  dc->clk_mgr->bw_params->wm_table.entries[i].sr_exit_time_us =
+@@ -1084,8 +1082,6 @@ static void patch_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_s
+ 				dc->bb_overrides.dram_clock_change_latency_ns / 1000.0;
+ 		}
+ 	}
+-
+-	DC_FP_END();
+ }
+ 
+ void dcn21_calculate_wm(
+@@ -1183,7 +1179,7 @@ static noinline bool dcn21_validate_bandwidth_fp(struct dc *dc,
+ 	int vlevel = 0;
+ 	int pipe_split_from[MAX_PIPES];
+ 	int pipe_cnt = 0;
+-	display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_KERNEL);
++	display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_ATOMIC);
+ 	DC_LOGGER_INIT(dc->ctx->logger);
+ 
+ 	BW_VAL_TRACE_COUNT();
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+index 7eada3098ffcc..18e4eb8884c26 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+@@ -1506,6 +1506,48 @@ static int vega10_populate_single_lclk_level(struct pp_hwmgr *hwmgr,
+ 	return 0;
+ }
+ 
++static int vega10_override_pcie_parameters(struct pp_hwmgr *hwmgr)
++{
++	struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
++	struct vega10_hwmgr *data =
++			(struct vega10_hwmgr *)(hwmgr->backend);
++	uint32_t pcie_gen = 0, pcie_width = 0;
++	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
++	int i;
++
++	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
++		pcie_gen = 3;
++	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
++		pcie_gen = 2;
++	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
++		pcie_gen = 1;
++	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1)
++		pcie_gen = 0;
++
++	if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
++		pcie_width = 6;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X12)
++		pcie_width = 5;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X8)
++		pcie_width = 4;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X4)
++		pcie_width = 3;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X2)
++		pcie_width = 2;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X1)
++		pcie_width = 1;
++
++	for (i = 0; i < NUM_LINK_LEVELS; i++) {
++		if (pp_table->PcieGenSpeed[i] > pcie_gen)
++			pp_table->PcieGenSpeed[i] = pcie_gen;
++
++		if (pp_table->PcieLaneCount[i] > pcie_width)
++			pp_table->PcieLaneCount[i] = pcie_width;
++	}
++
++	return 0;
++}
++
+ static int vega10_populate_smc_link_levels(struct pp_hwmgr *hwmgr)
+ {
+ 	int result = -1;
+@@ -2557,6 +2599,11 @@ static int vega10_init_smc_table(struct pp_hwmgr *hwmgr)
+ 			"Failed to initialize Link Level!",
+ 			return result);
+ 
++	result = vega10_override_pcie_parameters(hwmgr);
++	PP_ASSERT_WITH_CODE(!result,
++			"Failed to override pcie parameters!",
++			return result);
++
+ 	result = vega10_populate_all_graphic_levels(hwmgr);
+ 	PP_ASSERT_WITH_CODE(!result,
+ 			"Failed to initialize Graphics Level!",
+@@ -2923,6 +2970,7 @@ static int vega10_start_dpm(struct pp_hwmgr *hwmgr, uint32_t bitmap)
+ 	return 0;
+ }
+ 
++
+ static int vega10_enable_disable_PCC_limit_feature(struct pp_hwmgr *hwmgr, bool enable)
+ {
+ 	struct vega10_hwmgr *data = hwmgr->backend;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
+index dc206fa88c5e5..62076035029ac 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
+@@ -481,6 +481,67 @@ static void vega12_init_dpm_state(struct vega12_dpm_state *dpm_state)
+ 	dpm_state->hard_max_level = 0xffff;
+ }
+ 
++static int vega12_override_pcie_parameters(struct pp_hwmgr *hwmgr)
++{
++	struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
++	struct vega12_hwmgr *data =
++			(struct vega12_hwmgr *)(hwmgr->backend);
++	uint32_t pcie_gen = 0, pcie_width = 0, smu_pcie_arg, pcie_gen_arg, pcie_width_arg;
++	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
++	int i;
++	int ret;
++
++	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
++		pcie_gen = 3;
++	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
++		pcie_gen = 2;
++	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
++		pcie_gen = 1;
++	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1)
++		pcie_gen = 0;
++
++	if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
++		pcie_width = 6;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X12)
++		pcie_width = 5;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X8)
++		pcie_width = 4;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X4)
++		pcie_width = 3;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X2)
++		pcie_width = 2;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X1)
++		pcie_width = 1;
++
++	/* Bit 31:16: LCLK DPM level. 0 is DPM0, and 1 is DPM1
++	 * Bit 15:8:  PCIE GEN, 0 to 3 corresponds to GEN1 to GEN4
++	 * Bit 7:0:   PCIE lane width, 1 to 7 corresponds is x1 to x32
++	 */
++	for (i = 0; i < NUM_LINK_LEVELS; i++) {
++		pcie_gen_arg = (pp_table->PcieGenSpeed[i] > pcie_gen) ? pcie_gen :
++			pp_table->PcieGenSpeed[i];
++		pcie_width_arg = (pp_table->PcieLaneCount[i] > pcie_width) ? pcie_width :
++			pp_table->PcieLaneCount[i];
++
++		if (pcie_gen_arg != pp_table->PcieGenSpeed[i] || pcie_width_arg !=
++		    pp_table->PcieLaneCount[i]) {
++			smu_pcie_arg = (i << 16) | (pcie_gen_arg << 8) | pcie_width_arg;
++			ret = smum_send_msg_to_smc_with_parameter(hwmgr,
++				PPSMC_MSG_OverridePcieParameters, smu_pcie_arg,
++				NULL);
++			PP_ASSERT_WITH_CODE(!ret,
++				"[OverridePcieParameters] Attempt to override pcie params failed!",
++				return ret);
++		}
++
++		/* update the pptable */
++		pp_table->PcieGenSpeed[i] = pcie_gen_arg;
++		pp_table->PcieLaneCount[i] = pcie_width_arg;
++	}
++
++	return 0;
++}
++
+ static int vega12_get_number_of_dpm_level(struct pp_hwmgr *hwmgr,
+ 		PPCLK_e clk_id, uint32_t *num_of_levels)
+ {
+@@ -969,6 +1030,11 @@ static int vega12_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
+ 			"Failed to enable all smu features!",
+ 			return result);
+ 
++	result = vega12_override_pcie_parameters(hwmgr);
++	PP_ASSERT_WITH_CODE(!result,
++			"[EnableDPMTasks] Failed to override pcie parameters!",
++			return result);
++
+ 	tmp_result = vega12_power_control_set_level(hwmgr);
+ 	PP_ASSERT_WITH_CODE(!tmp_result,
+ 			"Failed to power control set level!",
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
+index da84012b7fd51..251979c059c8b 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
+@@ -832,7 +832,9 @@ static int vega20_override_pcie_parameters(struct pp_hwmgr *hwmgr)
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
+ 	struct vega20_hwmgr *data =
+ 			(struct vega20_hwmgr *)(hwmgr->backend);
+-	uint32_t pcie_gen = 0, pcie_width = 0, smu_pcie_arg;
++	uint32_t pcie_gen = 0, pcie_width = 0, smu_pcie_arg, pcie_gen_arg, pcie_width_arg;
++	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
++	int i;
+ 	int ret;
+ 
+ 	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
+@@ -861,17 +863,27 @@ static int vega20_override_pcie_parameters(struct pp_hwmgr *hwmgr)
+ 	 * Bit 15:8:  PCIE GEN, 0 to 3 corresponds to GEN1 to GEN4
+ 	 * Bit 7:0:   PCIE lane width, 1 to 7 corresponds is x1 to x32
+ 	 */
+-	smu_pcie_arg = (1 << 16) | (pcie_gen << 8) | pcie_width;
+-	ret = smum_send_msg_to_smc_with_parameter(hwmgr,
+-			PPSMC_MSG_OverridePcieParameters, smu_pcie_arg,
+-			NULL);
+-	PP_ASSERT_WITH_CODE(!ret,
+-		"[OverridePcieParameters] Attempt to override pcie params failed!",
+-		return ret);
++	for (i = 0; i < NUM_LINK_LEVELS; i++) {
++		pcie_gen_arg = (pp_table->PcieGenSpeed[i] > pcie_gen) ? pcie_gen :
++			pp_table->PcieGenSpeed[i];
++		pcie_width_arg = (pp_table->PcieLaneCount[i] > pcie_width) ? pcie_width :
++			pp_table->PcieLaneCount[i];
++
++		if (pcie_gen_arg != pp_table->PcieGenSpeed[i] || pcie_width_arg !=
++		    pp_table->PcieLaneCount[i]) {
++			smu_pcie_arg = (i << 16) | (pcie_gen_arg << 8) | pcie_width_arg;
++			ret = smum_send_msg_to_smc_with_parameter(hwmgr,
++				PPSMC_MSG_OverridePcieParameters, smu_pcie_arg,
++				NULL);
++			PP_ASSERT_WITH_CODE(!ret,
++				"[OverridePcieParameters] Attempt to override pcie params failed!",
++				return ret);
++		}
+ 
+-	data->pcie_parameters_override = true;
+-	data->pcie_gen_level1 = pcie_gen;
+-	data->pcie_width_level1 = pcie_width;
++		/* update the pptable */
++		pp_table->PcieGenSpeed[i] = pcie_gen_arg;
++		pp_table->PcieLaneCount[i] = pcie_width_arg;
++	}
+ 
+ 	return 0;
+ }
+@@ -3320,9 +3332,7 @@ static int vega20_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 			data->od8_settings.od8_settings_array;
+ 	OverDriveTable_t *od_table =
+ 			&(data->smc_state_table.overdrive_table);
+-	struct phm_ppt_v3_information *pptable_information =
+-		(struct phm_ppt_v3_information *)hwmgr->pptable;
+-	PPTable_t *pptable = (PPTable_t *)pptable_information->smc_pptable;
++	PPTable_t *pptable = &(data->smc_state_table.pp_table);
+ 	struct pp_clock_levels_with_latency clocks;
+ 	struct vega20_single_dpm_table *fclk_dpm_table =
+ 			&(data->dpm_table.fclk_table);
+@@ -3421,13 +3431,9 @@ static int vega20_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 		current_lane_width =
+ 			vega20_get_current_pcie_link_width_level(hwmgr);
+ 		for (i = 0; i < NUM_LINK_LEVELS; i++) {
+-			if (i == 1 && data->pcie_parameters_override) {
+-				gen_speed = data->pcie_gen_level1;
+-				lane_width = data->pcie_width_level1;
+-			} else {
+-				gen_speed = pptable->PcieGenSpeed[i];
+-				lane_width = pptable->PcieLaneCount[i];
+-			}
++			gen_speed = pptable->PcieGenSpeed[i];
++			lane_width = pptable->PcieLaneCount[i];
++
+ 			size += sprintf(buf + size, "%d: %s %s %dMhz %s\n", i,
+ 					(gen_speed == 0) ? "2.5GT/s," :
+ 					(gen_speed == 1) ? "5.0GT/s," :
+diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
+index e00616d94f26e..cfacce0418a49 100644
+--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
++++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
+@@ -340,13 +340,14 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
+ 	if (--shmem->vmap_use_count > 0)
+ 		return;
+ 
+-	if (obj->import_attach)
++	if (obj->import_attach) {
+ 		dma_buf_vunmap(obj->import_attach->dmabuf, shmem->vaddr);
+-	else
++	} else {
+ 		vunmap(shmem->vaddr);
++		drm_gem_shmem_put_pages(shmem);
++	}
+ 
+ 	shmem->vaddr = NULL;
+-	drm_gem_shmem_put_pages(shmem);
+ }
+ 
+ /*
+@@ -534,14 +535,28 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
+ 	struct drm_gem_object *obj = vma->vm_private_data;
+ 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+ 	loff_t num_pages = obj->size >> PAGE_SHIFT;
++	vm_fault_t ret;
+ 	struct page *page;
++	pgoff_t page_offset;
+ 
+-	if (vmf->pgoff >= num_pages || WARN_ON_ONCE(!shmem->pages))
+-		return VM_FAULT_SIGBUS;
++	/* We don't use vmf->pgoff since that has the fake offset */
++	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
+ 
+-	page = shmem->pages[vmf->pgoff];
++	mutex_lock(&shmem->pages_lock);
+ 
+-	return vmf_insert_page(vma, vmf->address, page);
++	if (page_offset >= num_pages ||
++	    WARN_ON_ONCE(!shmem->pages) ||
++	    shmem->madv < 0) {
++		ret = VM_FAULT_SIGBUS;
++	} else {
++		page = shmem->pages[page_offset];
++
++		ret = vmf_insert_page(vma, vmf->address, page);
++	}
++
++	mutex_unlock(&shmem->pages_lock);
++
++	return ret;
+ }
+ 
+ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
+@@ -590,9 +605,6 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+ 	struct drm_gem_shmem_object *shmem;
+ 	int ret;
+ 
+-	/* Remove the fake offset */
+-	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
+-
+ 	if (obj->import_attach) {
+ 		/* Drop the reference drm_gem_mmap_obj() acquired.*/
+ 		drm_gem_object_put(obj);
+diff --git a/drivers/gpu/drm/drm_ioc32.c b/drivers/gpu/drm/drm_ioc32.c
+index f86448ab1fe04..dc734d4828a17 100644
+--- a/drivers/gpu/drm/drm_ioc32.c
++++ b/drivers/gpu/drm/drm_ioc32.c
+@@ -99,6 +99,8 @@ static int compat_drm_version(struct file *file, unsigned int cmd,
+ 	if (copy_from_user(&v32, (void __user *)arg, sizeof(v32)))
+ 		return -EFAULT;
+ 
++	memset(&v, 0, sizeof(v));
++
+ 	v = (struct drm_version) {
+ 		.name_len = v32.name_len,
+ 		.name = compat_ptr(v32.name),
+@@ -137,6 +139,9 @@ static int compat_drm_getunique(struct file *file, unsigned int cmd,
+ 
+ 	if (copy_from_user(&uq32, (void __user *)arg, sizeof(uq32)))
+ 		return -EFAULT;
++
++	memset(&uq, 0, sizeof(uq));
++
+ 	uq = (struct drm_unique){
+ 		.unique_len = uq32.unique_len,
+ 		.unique = compat_ptr(uq32.unique),
+@@ -265,6 +270,8 @@ static int compat_drm_getclient(struct file *file, unsigned int cmd,
+ 	if (copy_from_user(&c32, argp, sizeof(c32)))
+ 		return -EFAULT;
+ 
++	memset(&client, 0, sizeof(client));
++
+ 	client.idx = c32.idx;
+ 
+ 	err = drm_ioctl_kernel(file, drm_getclient, &client, 0);
+@@ -852,6 +859,8 @@ static int compat_drm_wait_vblank(struct file *file, unsigned int cmd,
+ 	if (copy_from_user(&req32, argp, sizeof(req32)))
+ 		return -EFAULT;
+ 
++	memset(&req, 0, sizeof(req));
++
+ 	req.request.type = req32.request.type;
+ 	req.request.sequence = req32.request.sequence;
+ 	req.request.signal = req32.request.signal;
+@@ -889,6 +898,8 @@ static int compat_drm_mode_addfb2(struct file *file, unsigned int cmd,
+ 	struct drm_mode_fb_cmd2 req64;
+ 	int err;
+ 
++	memset(&req64, 0, sizeof(req64));
++
+ 	if (copy_from_user(&req64, argp,
+ 			   offsetof(drm_mode_fb_cmd232_t, modifier)))
+ 		return -EFAULT;
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+index efdeb7b7b2a0a..a19537706ed1f 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
++++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+@@ -708,9 +708,12 @@ static int engine_setup_common(struct intel_engine_cs *engine)
+ 		goto err_status;
+ 	}
+ 
++	err = intel_engine_init_cmd_parser(engine);
++	if (err)
++		goto err_cmd_parser;
++
+ 	intel_engine_init_active(engine, ENGINE_PHYSICAL);
+ 	intel_engine_init_execlists(engine);
+-	intel_engine_init_cmd_parser(engine);
+ 	intel_engine_init__pm(engine);
+ 	intel_engine_init_retire(engine);
+ 
+@@ -724,6 +727,8 @@ static int engine_setup_common(struct intel_engine_cs *engine)
+ 
+ 	return 0;
+ 
++err_cmd_parser:
++	intel_breadcrumbs_free(engine->breadcrumbs);
+ err_status:
+ 	cleanup_status_page(engine);
+ 	return err;
+diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
+index e7362ec22aded..9ce174950340b 100644
+--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
++++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
+@@ -939,7 +939,7 @@ static void fini_hash_table(struct intel_engine_cs *engine)
+  * struct intel_engine_cs based on whether the platform requires software
+  * command parsing.
+  */
+-void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
++int intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
+ {
+ 	const struct drm_i915_cmd_table *cmd_tables;
+ 	int cmd_table_count;
+@@ -947,7 +947,7 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
+ 
+ 	if (!IS_GEN(engine->i915, 7) && !(IS_GEN(engine->i915, 9) &&
+ 					  engine->class == COPY_ENGINE_CLASS))
+-		return;
++		return 0;
+ 
+ 	switch (engine->class) {
+ 	case RENDER_CLASS:
+@@ -1012,19 +1012,19 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
+ 		break;
+ 	default:
+ 		MISSING_CASE(engine->class);
+-		return;
++		goto out;
+ 	}
+ 
+ 	if (!validate_cmds_sorted(engine, cmd_tables, cmd_table_count)) {
+ 		drm_err(&engine->i915->drm,
+ 			"%s: command descriptions are not sorted\n",
+ 			engine->name);
+-		return;
++		goto out;
+ 	}
+ 	if (!validate_regs_sorted(engine)) {
+ 		drm_err(&engine->i915->drm,
+ 			"%s: registers are not sorted\n", engine->name);
+-		return;
++		goto out;
+ 	}
+ 
+ 	ret = init_hash_table(engine, cmd_tables, cmd_table_count);
+@@ -1032,10 +1032,17 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
+ 		drm_err(&engine->i915->drm,
+ 			"%s: initialised failed!\n", engine->name);
+ 		fini_hash_table(engine);
+-		return;
++		goto out;
+ 	}
+ 
+ 	engine->flags |= I915_ENGINE_USING_CMD_PARSER;
++
++out:
++	if (intel_engine_requires_cmd_parser(engine) &&
++	    !intel_engine_using_cmd_parser(engine))
++		return -EINVAL;
++
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index fa830e77bb648..6909901b35513 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -1946,7 +1946,7 @@ const char *i915_cache_level_str(struct drm_i915_private *i915, int type);
+ 
+ /* i915_cmd_parser.c */
+ int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv);
+-void intel_engine_init_cmd_parser(struct intel_engine_cs *engine);
++int intel_engine_init_cmd_parser(struct intel_engine_cs *engine);
+ void intel_engine_cleanup_cmd_parser(struct intel_engine_cs *engine);
+ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
+ 			    struct i915_vma *batch,
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index 3d1de9cbb1c8d..db56732bdd260 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -482,6 +482,16 @@ static int meson_probe_remote(struct platform_device *pdev,
+ 	return count;
+ }
+ 
++static void meson_drv_shutdown(struct platform_device *pdev)
++{
++	struct meson_drm *priv = dev_get_drvdata(&pdev->dev);
++	struct drm_device *drm = priv->drm;
++
++	DRM_DEBUG_DRIVER("\n");
++	drm_kms_helper_poll_fini(drm);
++	drm_atomic_helper_shutdown(drm);
++}
++
+ static int meson_drv_probe(struct platform_device *pdev)
+ {
+ 	struct component_match *match = NULL;
+@@ -553,6 +563,7 @@ static const struct dev_pm_ops meson_drv_pm_ops = {
+ 
+ static struct platform_driver meson_drm_platform_driver = {
+ 	.probe      = meson_drv_probe,
++	.shutdown   = meson_drv_shutdown,
+ 	.driver     = {
+ 		.name	= "meson-drm",
+ 		.of_match_table = dt_match,
+diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
+index 6063f3a153290..862ef59d4d033 100644
+--- a/drivers/gpu/drm/qxl/qxl_display.c
++++ b/drivers/gpu/drm/qxl/qxl_display.c
+@@ -327,6 +327,7 @@ static void qxl_crtc_update_monitors_config(struct drm_crtc *crtc,
+ 
+ 	head.id = i;
+ 	head.flags = 0;
++	head.surface_id = 0;
+ 	oldcount = qdev->monitors_config->count;
+ 	if (crtc->state->active) {
+ 		struct drm_display_mode *mode = &crtc->mode;
+diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
+index cc397671f6898..0f5d1e598d75f 100644
+--- a/drivers/gpu/drm/tiny/gm12u320.c
++++ b/drivers/gpu/drm/tiny/gm12u320.c
+@@ -83,6 +83,7 @@ MODULE_PARM_DESC(eco_mode, "Turn on Eco mode (less bright, more silent)");
+ 
+ struct gm12u320_device {
+ 	struct drm_device	         dev;
++	struct device                   *dmadev;
+ 	struct drm_simple_display_pipe   pipe;
+ 	struct drm_connector	         conn;
+ 	struct usb_device               *udev;
+@@ -598,6 +599,22 @@ static const uint64_t gm12u320_pipe_modifiers[] = {
+ 	DRM_FORMAT_MOD_INVALID
+ };
+ 
++/*
++ * FIXME: Dma-buf sharing requires DMA support by the importing device.
++ *        This function is a workaround to make USB devices work as well.
++ *        See todo.rst for how to fix the issue in the dma-buf framework.
++ */
++static struct drm_gem_object *gm12u320_gem_prime_import(struct drm_device *dev,
++							struct dma_buf *dma_buf)
++{
++	struct gm12u320_device *gm12u320 = to_gm12u320(dev);
++
++	if (!gm12u320->dmadev)
++		return ERR_PTR(-ENODEV);
++
++	return drm_gem_prime_import_dev(dev, dma_buf, gm12u320->dmadev);
++}
++
+ DEFINE_DRM_GEM_FOPS(gm12u320_fops);
+ 
+ static struct drm_driver gm12u320_drm_driver = {
+@@ -611,6 +628,7 @@ static struct drm_driver gm12u320_drm_driver = {
+ 
+ 	.fops		 = &gm12u320_fops,
+ 	DRM_GEM_SHMEM_DRIVER_OPS,
++	.gem_prime_import = gm12u320_gem_prime_import,
+ };
+ 
+ static const struct drm_mode_config_funcs gm12u320_mode_config_funcs = {
+@@ -637,16 +655,19 @@ static int gm12u320_usb_probe(struct usb_interface *interface,
+ 				      struct gm12u320_device, dev);
+ 	if (IS_ERR(gm12u320))
+ 		return PTR_ERR(gm12u320);
++	dev = &gm12u320->dev;
++
++	gm12u320->dmadev = usb_intf_get_dma_device(to_usb_interface(dev->dev));
++	if (!gm12u320->dmadev)
++		drm_warn(dev, "buffer sharing not supported"); /* not an error */
+ 
+ 	gm12u320->udev = interface_to_usbdev(interface);
+ 	INIT_DELAYED_WORK(&gm12u320->fb_update.work, gm12u320_fb_update_work);
+ 	mutex_init(&gm12u320->fb_update.lock);
+ 
+-	dev = &gm12u320->dev;
+-
+ 	ret = drmm_mode_config_init(dev);
+ 	if (ret)
+-		return ret;
++		goto err_put_device;
+ 
+ 	dev->mode_config.min_width = GM12U320_USER_WIDTH;
+ 	dev->mode_config.max_width = GM12U320_USER_WIDTH;
+@@ -656,15 +677,15 @@ static int gm12u320_usb_probe(struct usb_interface *interface,
+ 
+ 	ret = gm12u320_usb_alloc(gm12u320);
+ 	if (ret)
+-		return ret;
++		goto err_put_device;
+ 
+ 	ret = gm12u320_set_ecomode(gm12u320);
+ 	if (ret)
+-		return ret;
++		goto err_put_device;
+ 
+ 	ret = gm12u320_conn_init(gm12u320);
+ 	if (ret)
+-		return ret;
++		goto err_put_device;
+ 
+ 	ret = drm_simple_display_pipe_init(&gm12u320->dev,
+ 					   &gm12u320->pipe,
+@@ -674,24 +695,31 @@ static int gm12u320_usb_probe(struct usb_interface *interface,
+ 					   gm12u320_pipe_modifiers,
+ 					   &gm12u320->conn);
+ 	if (ret)
+-		return ret;
++		goto err_put_device;
+ 
+ 	drm_mode_config_reset(dev);
+ 
+ 	usb_set_intfdata(interface, dev);
+ 	ret = drm_dev_register(dev, 0);
+ 	if (ret)
+-		return ret;
++		goto err_put_device;
+ 
+ 	drm_fbdev_generic_setup(dev, 0);
+ 
+ 	return 0;
++
++err_put_device:
++	put_device(gm12u320->dmadev);
++	return ret;
+ }
+ 
+ static void gm12u320_usb_disconnect(struct usb_interface *interface)
+ {
+ 	struct drm_device *dev = usb_get_intfdata(interface);
++	struct gm12u320_device *gm12u320 = to_gm12u320(dev);
+ 
++	put_device(gm12u320->dmadev);
++	gm12u320->dmadev = NULL;
+ 	drm_dev_unplug(dev);
+ 	drm_atomic_helper_shutdown(dev);
+ }
+diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c
+index 96d4317a2c1bd..bcf32d188c1b1 100644
+--- a/drivers/gpu/drm/udl/udl_drv.c
++++ b/drivers/gpu/drm/udl/udl_drv.c
+@@ -32,6 +32,22 @@ static int udl_usb_resume(struct usb_interface *interface)
+ 	return drm_mode_config_helper_resume(dev);
+ }
+ 
++/*
++ * FIXME: Dma-buf sharing requires DMA support by the importing device.
++ *        This function is a workaround to make USB devices work as well.
++ *        See todo.rst for how to fix the issue in the dma-buf framework.
++ */
++static struct drm_gem_object *udl_driver_gem_prime_import(struct drm_device *dev,
++							  struct dma_buf *dma_buf)
++{
++	struct udl_device *udl = to_udl(dev);
++
++	if (!udl->dmadev)
++		return ERR_PTR(-ENODEV);
++
++	return drm_gem_prime_import_dev(dev, dma_buf, udl->dmadev);
++}
++
+ DEFINE_DRM_GEM_FOPS(udl_driver_fops);
+ 
+ static struct drm_driver driver = {
+@@ -42,6 +58,7 @@ static struct drm_driver driver = {
+ 
+ 	.fops = &udl_driver_fops,
+ 	DRM_GEM_SHMEM_DRIVER_OPS,
++	.gem_prime_import = udl_driver_gem_prime_import,
+ 
+ 	.name = DRIVER_NAME,
+ 	.desc = DRIVER_DESC,
+diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
+index b1461f30780bc..8aab14871e1b7 100644
+--- a/drivers/gpu/drm/udl/udl_drv.h
++++ b/drivers/gpu/drm/udl/udl_drv.h
+@@ -50,6 +50,7 @@ struct urb_list {
+ struct udl_device {
+ 	struct drm_device drm;
+ 	struct device *dev;
++	struct device *dmadev;
+ 	struct usb_device *udev;
+ 
+ 	struct drm_simple_display_pipe display_pipe;
+diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
+index f5d27f2a56543..5f1d3891ed549 100644
+--- a/drivers/gpu/drm/udl/udl_main.c
++++ b/drivers/gpu/drm/udl/udl_main.c
+@@ -314,6 +314,10 @@ int udl_init(struct udl_device *udl)
+ 
+ 	DRM_DEBUG("\n");
+ 
++	udl->dmadev = usb_intf_get_dma_device(to_usb_interface(dev->dev));
++	if (!udl->dmadev)
++		drm_warn(dev, "buffer sharing not supported"); /* not an error */
++
+ 	mutex_init(&udl->gem_lock);
+ 
+ 	if (!udl_parse_vendor_descriptor(dev, udl->udev)) {
+@@ -342,12 +346,18 @@ int udl_init(struct udl_device *udl)
+ err:
+ 	if (udl->urbs.count)
+ 		udl_free_urb_list(dev);
++	put_device(udl->dmadev);
+ 	DRM_ERROR("%d\n", ret);
+ 	return ret;
+ }
+ 
+ int udl_drop_usb(struct drm_device *dev)
+ {
++	struct udl_device *udl = to_udl(dev);
++
+ 	udl_free_urb_list(dev);
++	put_device(udl->dmadev);
++	udl->dmadev = NULL;
++
+ 	return 0;
+ }
+diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
+index fcdc922bc9733..271bd8d243395 100644
+--- a/drivers/hid/hid-logitech-dj.c
++++ b/drivers/hid/hid-logitech-dj.c
+@@ -995,7 +995,12 @@ static void logi_hidpp_recv_queue_notif(struct hid_device *hdev,
+ 		workitem.reports_supported |= STD_KEYBOARD;
+ 		break;
+ 	case 0x0d:
+-		device_type = "eQUAD Lightspeed 1_1";
++		device_type = "eQUAD Lightspeed 1.1";
++		logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem);
++		workitem.reports_supported |= STD_KEYBOARD;
++		break;
++	case 0x0f:
++		device_type = "eQUAD Lightspeed 1.2";
+ 		logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem);
+ 		workitem.reports_supported |= STD_KEYBOARD;
+ 		break;
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 217def2d7cb44..ad6630e3cc779 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -91,7 +91,6 @@
+ 
+ #define RCAR_BUS_PHASE_START	(MDBS | MIE | ESG)
+ #define RCAR_BUS_PHASE_DATA	(MDBS | MIE)
+-#define RCAR_BUS_MASK_DATA	(~(ESG | FSB) & 0xFF)
+ #define RCAR_BUS_PHASE_STOP	(MDBS | MIE | FSB)
+ 
+ #define RCAR_IRQ_SEND	(MNR | MAL | MST | MAT | MDE)
+@@ -120,6 +119,7 @@ enum rcar_i2c_type {
+ };
+ 
+ struct rcar_i2c_priv {
++	u32 flags;
+ 	void __iomem *io;
+ 	struct i2c_adapter adap;
+ 	struct i2c_msg *msg;
+@@ -130,7 +130,6 @@ struct rcar_i2c_priv {
+ 
+ 	int pos;
+ 	u32 icccr;
+-	u32 flags;
+ 	u8 recovery_icmcr;	/* protected by adapter lock */
+ 	enum rcar_i2c_type devtype;
+ 	struct i2c_client *slave;
+@@ -621,7 +620,7 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+ /*
+  * This driver has a lock-free design because there are IP cores (at least
+  * R-Car Gen2) which have an inherent race condition in their hardware design.
+- * There, we need to clear RCAR_BUS_MASK_DATA bits as soon as possible after
++ * There, we need to switch to RCAR_BUS_PHASE_DATA as soon as possible after
+  * the interrupt was generated, otherwise an unwanted repeated message gets
+  * generated. It turned out that taking a spinlock at the beginning of the ISR
+  * was already causing repeated messages. Thus, this driver was converted to
+@@ -630,13 +629,11 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+ static irqreturn_t rcar_i2c_irq(int irq, void *ptr)
+ {
+ 	struct rcar_i2c_priv *priv = ptr;
+-	u32 msr, val;
++	u32 msr;
+ 
+ 	/* Clear START or STOP immediately, except for REPSTART after read */
+-	if (likely(!(priv->flags & ID_P_REP_AFTER_RD))) {
+-		val = rcar_i2c_read(priv, ICMCR);
+-		rcar_i2c_write(priv, ICMCR, val & RCAR_BUS_MASK_DATA);
+-	}
++	if (likely(!(priv->flags & ID_P_REP_AFTER_RD)))
++		rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_DATA);
+ 
+ 	msr = rcar_i2c_read(priv, ICMSR);
+ 
+diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
+index 5157ae29a4460..076526710fe30 100644
+--- a/drivers/infiniband/core/umem.c
++++ b/drivers/infiniband/core/umem.c
+@@ -220,10 +220,10 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
+ 
+ 		cur_base += ret * PAGE_SIZE;
+ 		npages -= ret;
+-		sg = __sg_alloc_table_from_pages(
+-			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
+-			dma_get_max_seg_size(device->dma_device), sg, npages,
+-			GFP_KERNEL);
++		sg = __sg_alloc_table_from_pages(&umem->sg_head, page_list, ret,
++				0, ret << PAGE_SHIFT,
++				ib_dma_max_seg_size(device), sg, npages,
++				GFP_KERNEL);
+ 		umem->sg_nents = umem->sg_head.nents;
+ 		if (IS_ERR(sg)) {
+ 			unpin_user_pages_dirty_lock(page_list, ret, 0);
+diff --git a/drivers/input/keyboard/applespi.c b/drivers/input/keyboard/applespi.c
+index 14362ebab9a9d..0b46bc014cde7 100644
+--- a/drivers/input/keyboard/applespi.c
++++ b/drivers/input/keyboard/applespi.c
+@@ -48,6 +48,7 @@
+ #include <linux/efi.h>
+ #include <linux/input.h>
+ #include <linux/input/mt.h>
++#include <linux/ktime.h>
+ #include <linux/leds.h>
+ #include <linux/module.h>
+ #include <linux/spinlock.h>
+@@ -400,7 +401,7 @@ struct applespi_data {
+ 	unsigned int			cmd_msg_cntr;
+ 	/* lock to protect the above parameters and flags below */
+ 	spinlock_t			cmd_msg_lock;
+-	bool				cmd_msg_queued;
++	ktime_t				cmd_msg_queued;
+ 	enum applespi_evt_type		cmd_evt_type;
+ 
+ 	struct led_classdev		backlight_info;
+@@ -716,7 +717,7 @@ static void applespi_msg_complete(struct applespi_data *applespi,
+ 		wake_up_all(&applespi->drain_complete);
+ 
+ 	if (is_write_msg) {
+-		applespi->cmd_msg_queued = false;
++		applespi->cmd_msg_queued = 0;
+ 		applespi_send_cmd_msg(applespi);
+ 	}
+ 
+@@ -758,8 +759,16 @@ static int applespi_send_cmd_msg(struct applespi_data *applespi)
+ 		return 0;
+ 
+ 	/* check whether send is in progress */
+-	if (applespi->cmd_msg_queued)
+-		return 0;
++	if (applespi->cmd_msg_queued) {
++		if (ktime_ms_delta(ktime_get(), applespi->cmd_msg_queued) < 1000)
++			return 0;
++
++		dev_warn(&applespi->spi->dev, "Command %d timed out\n",
++			 applespi->cmd_evt_type);
++
++		applespi->cmd_msg_queued = 0;
++		applespi->write_active = false;
++	}
+ 
+ 	/* set up packet */
+ 	memset(packet, 0, APPLESPI_PACKET_SIZE);
+@@ -856,7 +865,7 @@ static int applespi_send_cmd_msg(struct applespi_data *applespi)
+ 		return sts;
+ 	}
+ 
+-	applespi->cmd_msg_queued = true;
++	applespi->cmd_msg_queued = ktime_get_coarse();
+ 	applespi->write_active = true;
+ 
+ 	return 0;
+@@ -1908,7 +1917,7 @@ static int __maybe_unused applespi_resume(struct device *dev)
+ 	applespi->drain = false;
+ 	applespi->have_cl_led_on = false;
+ 	applespi->have_bl_level = 0;
+-	applespi->cmd_msg_queued = false;
++	applespi->cmd_msg_queued = 0;
+ 	applespi->read_active = false;
+ 	applespi->write_active = false;
+ 
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index c842545368fdd..3c215f0a6052b 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -12,6 +12,7 @@
+ #include <linux/acpi.h>
+ #include <linux/list.h>
+ #include <linux/bitmap.h>
++#include <linux/delay.h>
+ #include <linux/slab.h>
+ #include <linux/syscore_ops.h>
+ #include <linux/interrupt.h>
+@@ -254,6 +255,8 @@ static enum iommu_init_state init_state = IOMMU_START_STATE;
+ static int amd_iommu_enable_interrupts(void);
+ static int __init iommu_go_to_state(enum iommu_init_state state);
+ static void init_device_table_dma(void);
++static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
++				u8 fxn, u64 *value, bool is_write);
+ 
+ static bool amd_iommu_pre_enabled = true;
+ 
+@@ -1717,13 +1720,11 @@ static int __init init_iommu_all(struct acpi_table_header *table)
+ 	return 0;
+ }
+ 
+-static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
+-				u8 fxn, u64 *value, bool is_write);
+-
+-static void init_iommu_perf_ctr(struct amd_iommu *iommu)
++static void __init init_iommu_perf_ctr(struct amd_iommu *iommu)
+ {
++	int retry;
+ 	struct pci_dev *pdev = iommu->dev;
+-	u64 val = 0xabcd, val2 = 0, save_reg = 0;
++	u64 val = 0xabcd, val2 = 0, save_reg, save_src;
+ 
+ 	if (!iommu_feature(iommu, FEATURE_PC))
+ 		return;
+@@ -1731,17 +1732,39 @@ static void init_iommu_perf_ctr(struct amd_iommu *iommu)
+ 	amd_iommu_pc_present = true;
+ 
+ 	/* save the value to restore, if writable */
+-	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false))
++	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false) ||
++	    iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, false))
+ 		goto pc_false;
+ 
+-	/* Check if the performance counters can be written to */
+-	if ((iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true)) ||
+-	    (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false)) ||
+-	    (val != val2))
++	/*
++	 * Disable power gating by programing the performance counter
++	 * source to 20 (i.e. counts the reads and writes from/to IOMMU
++	 * Reserved Register [MMIO Offset 1FF8h] that are ignored.),
++	 * which never get incremented during this init phase.
++	 * (Note: The event is also deprecated.)
++	 */
++	val = 20;
++	if (iommu_pc_get_set_reg(iommu, 0, 0, 8, &val, true))
+ 		goto pc_false;
+ 
++	/* Check if the performance counters can be written to */
++	val = 0xabcd;
++	for (retry = 5; retry; retry--) {
++		if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true) ||
++		    iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false) ||
++		    val2)
++			break;
++
++		/* Wait about 20 msec for power gating to disable and retry. */
++		msleep(20);
++	}
++
+ 	/* restore */
+-	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true))
++	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true) ||
++	    iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, true))
++		goto pc_false;
++
++	if (val != val2)
+ 		goto pc_false;
+ 
+ 	pci_info(pdev, "IOMMU performance counters supported\n");
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index 43f392d27d318..b200a3acc6ed9 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -1079,8 +1079,17 @@ prq_advance:
+ 	 * Clear the page request overflow bit and wake up all threads that
+ 	 * are waiting for the completion of this handling.
+ 	 */
+-	if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO)
+-		writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG);
++	if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) {
++		pr_info_ratelimited("IOMMU: %s: PRQ overflow detected\n",
++				    iommu->name);
++		head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
++		tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
++		if (head == tail) {
++			writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG);
++			pr_info_ratelimited("IOMMU: %s: PRQ overflow cleared",
++					    iommu->name);
++		}
++	}
+ 
+ 	if (!completion_done(&iommu->prq_complete))
+ 		complete(&iommu->prq_complete);
+diff --git a/drivers/media/platform/vsp1/vsp1_drm.c b/drivers/media/platform/vsp1/vsp1_drm.c
+index 86d5e3f4b1ffc..06f74d410973e 100644
+--- a/drivers/media/platform/vsp1/vsp1_drm.c
++++ b/drivers/media/platform/vsp1/vsp1_drm.c
+@@ -245,7 +245,7 @@ static int vsp1_du_pipeline_setup_brx(struct vsp1_device *vsp1,
+ 		brx = &vsp1->bru->entity;
+ 	else if (pipe->brx && !drm_pipe->force_brx_release)
+ 		brx = pipe->brx;
+-	else if (!vsp1->bru->entity.pipe)
++	else if (vsp1_feature(vsp1, VSP1_HAS_BRU) && !vsp1->bru->entity.pipe)
+ 		brx = &vsp1->bru->entity;
+ 	else
+ 		brx = &vsp1->brs->entity;
+@@ -462,9 +462,9 @@ static int vsp1_du_pipeline_setup_inputs(struct vsp1_device *vsp1,
+ 	 * make sure it is present in the pipeline's list of entities if it
+ 	 * wasn't already.
+ 	 */
+-	if (!use_uif) {
++	if (drm_pipe->uif && !use_uif) {
+ 		drm_pipe->uif->pipe = NULL;
+-	} else if (!drm_pipe->uif->pipe) {
++	} else if (drm_pipe->uif && !drm_pipe->uif->pipe) {
+ 		drm_pipe->uif->pipe = pipe;
+ 		list_add_tail(&drm_pipe->uif->list_pipe, &pipe->entities);
+ 	}
+diff --git a/drivers/media/rc/Makefile b/drivers/media/rc/Makefile
+index 5bb2932ab1195..ff6a8fc4c38e5 100644
+--- a/drivers/media/rc/Makefile
++++ b/drivers/media/rc/Makefile
+@@ -5,6 +5,7 @@ obj-y += keymaps/
+ obj-$(CONFIG_RC_CORE) += rc-core.o
+ rc-core-y := rc-main.o rc-ir-raw.o
+ rc-core-$(CONFIG_LIRC) += lirc_dev.o
++rc-core-$(CONFIG_MEDIA_CEC_RC) += keymaps/rc-cec.o
+ rc-core-$(CONFIG_BPF_LIRC_MODE2) += bpf-lirc.o
+ obj-$(CONFIG_IR_NEC_DECODER) += ir-nec-decoder.o
+ obj-$(CONFIG_IR_RC5_DECODER) += ir-rc5-decoder.o
+diff --git a/drivers/media/rc/keymaps/Makefile b/drivers/media/rc/keymaps/Makefile
+index aaa1bf81d00d4..3581761e9797d 100644
+--- a/drivers/media/rc/keymaps/Makefile
++++ b/drivers/media/rc/keymaps/Makefile
+@@ -21,7 +21,6 @@ obj-$(CONFIG_RC_MAP) += rc-adstech-dvb-t-pci.o \
+ 			rc-behold.o \
+ 			rc-behold-columbus.o \
+ 			rc-budget-ci-old.o \
+-			rc-cec.o \
+ 			rc-cinergy-1400.o \
+ 			rc-cinergy.o \
+ 			rc-d680-dmb.o \
+diff --git a/drivers/media/rc/keymaps/rc-cec.c b/drivers/media/rc/keymaps/rc-cec.c
+index 3e3bd11092b45..068e22aeac8c3 100644
+--- a/drivers/media/rc/keymaps/rc-cec.c
++++ b/drivers/media/rc/keymaps/rc-cec.c
+@@ -1,5 +1,15 @@
+ // SPDX-License-Identifier: GPL-2.0-or-later
+ /* Keytable for the CEC remote control
++ *
++ * This keymap is unusual in that it can't be built as a module,
++ * instead it is registered directly in rc-main.c if CONFIG_MEDIA_CEC_RC
++ * is set. This is because it can be called from drm_dp_cec_set_edid() via
++ * cec_register_adapter() in an asynchronous context, and it is not
++ * allowed to use request_module() to load rc-cec.ko in that case.
++ *
++ * Since this keymap is only used if CONFIG_MEDIA_CEC_RC is set, we
++ * just compile this keymap into the rc-core module and never as a
++ * separate module.
+  *
+  * Copyright (c) 2015 by Kamil Debski
+  */
+@@ -152,7 +162,7 @@ static struct rc_map_table cec[] = {
+ 	/* 0x77-0xff: Reserved */
+ };
+ 
+-static struct rc_map_list cec_map = {
++struct rc_map_list cec_map = {
+ 	.map = {
+ 		.scan		= cec,
+ 		.size		= ARRAY_SIZE(cec),
+@@ -160,19 +170,3 @@ static struct rc_map_list cec_map = {
+ 		.name		= RC_MAP_CEC,
+ 	}
+ };
+-
+-static int __init init_rc_map_cec(void)
+-{
+-	return rc_map_register(&cec_map);
+-}
+-
+-static void __exit exit_rc_map_cec(void)
+-{
+-	rc_map_unregister(&cec_map);
+-}
+-
+-module_init(init_rc_map_cec);
+-module_exit(exit_rc_map_cec);
+-
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Kamil Debski");
+diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
+index 1fd62c1dac768..8e88dc8ea6c5e 100644
+--- a/drivers/media/rc/rc-main.c
++++ b/drivers/media/rc/rc-main.c
+@@ -2069,6 +2069,9 @@ static int __init rc_core_init(void)
+ 
+ 	led_trigger_register_simple("rc-feedback", &led_feedback);
+ 	rc_map_register(&empty_map);
++#ifdef CONFIG_MEDIA_CEC_RC
++	rc_map_register(&cec_map);
++#endif
+ 
+ 	return 0;
+ }
+@@ -2078,6 +2081,9 @@ static void __exit rc_core_exit(void)
+ 	lirc_dev_exit();
+ 	class_unregister(&rc_class);
+ 	led_trigger_unregister_simple(led_feedback);
++#ifdef CONFIG_MEDIA_CEC_RC
++	rc_map_unregister(&cec_map);
++#endif
+ 	rc_map_unregister(&empty_map);
+ }
+ 
+diff --git a/drivers/media/usb/usbtv/usbtv-audio.c b/drivers/media/usb/usbtv/usbtv-audio.c
+index b57e94fb19770..333bd305a4f9f 100644
+--- a/drivers/media/usb/usbtv/usbtv-audio.c
++++ b/drivers/media/usb/usbtv/usbtv-audio.c
+@@ -371,7 +371,7 @@ void usbtv_audio_free(struct usbtv *usbtv)
+ 	cancel_work_sync(&usbtv->snd_trigger);
+ 
+ 	if (usbtv->snd && usbtv->udev) {
+-		snd_card_free(usbtv->snd);
++		snd_card_free_when_closed(usbtv->snd);
+ 		usbtv->snd = NULL;
+ 	}
+ }
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 815d01f785dff..273d9c1591793 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -948,6 +948,11 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl,  u32 kernel,
+ 	if (!fl->cctx->rpdev)
+ 		return -EPIPE;
+ 
++	if (handle == FASTRPC_INIT_HANDLE && !kernel) {
++		dev_warn_ratelimited(fl->sctx->dev, "user app trying to send a kernel RPC message (%d)\n",  handle);
++		return -EPERM;
++	}
++
+ 	ctx = fastrpc_context_alloc(fl, kernel, sc, args);
+ 	if (IS_ERR(ctx))
+ 		return PTR_ERR(ctx);
+diff --git a/drivers/misc/pvpanic.c b/drivers/misc/pvpanic.c
+index e16a5e51006e5..d9140e75602d7 100644
+--- a/drivers/misc/pvpanic.c
++++ b/drivers/misc/pvpanic.c
+@@ -166,6 +166,7 @@ static const struct of_device_id pvpanic_mmio_match[] = {
+ 	{ .compatible = "qemu,pvpanic-mmio", },
+ 	{}
+ };
++MODULE_DEVICE_TABLE(of, pvpanic_mmio_match);
+ 
+ static struct platform_driver pvpanic_mmio_driver = {
+ 	.driver = {
+diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
+index c2e70b757dd12..4383c262b3f5a 100644
+--- a/drivers/mmc/core/bus.c
++++ b/drivers/mmc/core/bus.c
+@@ -399,11 +399,6 @@ void mmc_remove_card(struct mmc_card *card)
+ 	mmc_remove_card_debugfs(card);
+ #endif
+ 
+-	if (host->cqe_enabled) {
+-		host->cqe_ops->cqe_disable(host);
+-		host->cqe_enabled = false;
+-	}
+-
+ 	if (mmc_card_present(card)) {
+ 		if (mmc_host_is_spi(card->host)) {
+ 			pr_info("%s: SPI card removed\n",
+@@ -416,6 +411,10 @@ void mmc_remove_card(struct mmc_card *card)
+ 		of_node_put(card->dev.of_node);
+ 	}
+ 
++	if (host->cqe_enabled) {
++		host->cqe_ops->cqe_disable(host);
++		host->cqe_enabled = false;
++	}
++
+ 	put_device(&card->dev);
+ }
+-
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index ff3063ce2acda..9ce34e8800335 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -423,10 +423,6 @@ static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)
+ 
+ 		/* EXT_CSD value is in units of 10ms, but we store in ms */
+ 		card->ext_csd.part_time = 10 * ext_csd[EXT_CSD_PART_SWITCH_TIME];
+-		/* Some eMMC set the value too low so set a minimum */
+-		if (card->ext_csd.part_time &&
+-		    card->ext_csd.part_time < MMC_MIN_PART_SWITCH_TIME)
+-			card->ext_csd.part_time = MMC_MIN_PART_SWITCH_TIME;
+ 
+ 		/* Sleep / awake timeout in 100ns units */
+ 		if (sa_shift > 0 && sa_shift <= 0x17)
+@@ -616,6 +612,17 @@ static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)
+ 		card->ext_csd.data_sector_size = 512;
+ 	}
+ 
++	/*
++	 * GENERIC_CMD6_TIME is to be used "unless a specific timeout is defined
++	 * when accessing a specific field", so use it here if there is no
++	 * PARTITION_SWITCH_TIME.
++	 */
++	if (!card->ext_csd.part_time)
++		card->ext_csd.part_time = card->ext_csd.generic_cmd6_time;
++	/* Some eMMC set the value too low so set a minimum */
++	if (card->ext_csd.part_time < MMC_MIN_PART_SWITCH_TIME)
++		card->ext_csd.part_time = MMC_MIN_PART_SWITCH_TIME;
++
+ 	/* eMMC v5 or later */
+ 	if (card->ext_csd.rev >= 7) {
+ 		memcpy(card->ext_csd.fwrev, &ext_csd[EXT_CSD_FIRMWARE_VERSION],
+diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
+index b5a41a7ce1658..9bde0def114b5 100644
+--- a/drivers/mmc/host/mmci.c
++++ b/drivers/mmc/host/mmci.c
+@@ -1241,7 +1241,11 @@ mmci_start_command(struct mmci_host *host, struct mmc_command *cmd, u32 c)
+ 		if (!cmd->busy_timeout)
+ 			cmd->busy_timeout = 10 * MSEC_PER_SEC;
+ 
+-		clks = (unsigned long long)cmd->busy_timeout * host->cclk;
++		if (cmd->busy_timeout > host->mmc->max_busy_timeout)
++			clks = (unsigned long long)host->mmc->max_busy_timeout * host->cclk;
++		else
++			clks = (unsigned long long)cmd->busy_timeout * host->cclk;
++
+ 		do_div(clks, MSEC_PER_SEC);
+ 		writel_relaxed(clks, host->base + MMCIDATATIMER);
+ 	}
+@@ -2091,6 +2095,10 @@ static int mmci_probe(struct amba_device *dev,
+ 		mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY;
+ 	}
+ 
++	/* Variants with mandatory busy timeout in HW needs R1B responses. */
++	if (variant->busy_timeout)
++		mmc->caps |= MMC_CAP_NEED_RSP_BUSY;
++
+ 	/* Prepare a CMD12 - needed to clear the DPSM on some variants. */
+ 	host->stop_abort.opcode = MMC_STOP_TRANSMISSION;
+ 	host->stop_abort.arg = 0;
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 004fbfc236721..dc84e2dff4085 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -1101,13 +1101,13 @@ static void msdc_track_cmd_data(struct msdc_host *host,
+ static void msdc_request_done(struct msdc_host *host, struct mmc_request *mrq)
+ {
+ 	unsigned long flags;
+-	bool ret;
+ 
+-	ret = cancel_delayed_work(&host->req_timeout);
+-	if (!ret) {
+-		/* delay work already running */
+-		return;
+-	}
++	/*
++	 * No need check the return value of cancel_delayed_work, as only ONE
++	 * path will go here!
++	 */
++	cancel_delayed_work(&host->req_timeout);
++
+ 	spin_lock_irqsave(&host->lock, flags);
+ 	host->mrq = NULL;
+ 	spin_unlock_irqrestore(&host->lock, flags);
+@@ -1129,7 +1129,7 @@ static bool msdc_cmd_done(struct msdc_host *host, int events,
+ 	bool done = false;
+ 	bool sbc_error;
+ 	unsigned long flags;
+-	u32 *rsp = cmd->resp;
++	u32 *rsp;
+ 
+ 	if (mrq->sbc && cmd == mrq->cmd &&
+ 	    (events & (MSDC_INT_ACMDRDY | MSDC_INT_ACMDCRCERR
+@@ -1150,6 +1150,7 @@ static bool msdc_cmd_done(struct msdc_host *host, int events,
+ 
+ 	if (done)
+ 		return true;
++	rsp = cmd->resp;
+ 
+ 	sdr_clr_bits(host->base + MSDC_INTEN, cmd_ints_mask);
+ 
+@@ -1337,7 +1338,7 @@ static void msdc_data_xfer_next(struct msdc_host *host,
+ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
+ 				struct mmc_request *mrq, struct mmc_data *data)
+ {
+-	struct mmc_command *stop = data->stop;
++	struct mmc_command *stop;
+ 	unsigned long flags;
+ 	bool done;
+ 	unsigned int check_data = events &
+@@ -1353,6 +1354,7 @@ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
+ 
+ 	if (done)
+ 		return true;
++	stop = data->stop;
+ 
+ 	if (check_data || (stop && stop->error)) {
+ 		dev_dbg(host->dev, "DMA status: 0x%8X\n",
+diff --git a/drivers/mmc/host/mxs-mmc.c b/drivers/mmc/host/mxs-mmc.c
+index 75007f61df972..4fbbff03137c3 100644
+--- a/drivers/mmc/host/mxs-mmc.c
++++ b/drivers/mmc/host/mxs-mmc.c
+@@ -643,7 +643,7 @@ static int mxs_mmc_probe(struct platform_device *pdev)
+ 
+ 	ret = mmc_of_parse(mmc);
+ 	if (ret)
+-		goto out_clk_disable;
++		goto out_free_dma;
+ 
+ 	mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
+ 
+diff --git a/drivers/mmc/host/sdhci-iproc.c b/drivers/mmc/host/sdhci-iproc.c
+index c9434b461aabc..ddeaf8e1f72f9 100644
+--- a/drivers/mmc/host/sdhci-iproc.c
++++ b/drivers/mmc/host/sdhci-iproc.c
+@@ -296,9 +296,27 @@ static const struct of_device_id sdhci_iproc_of_match[] = {
+ MODULE_DEVICE_TABLE(of, sdhci_iproc_of_match);
+ 
+ #ifdef CONFIG_ACPI
++/*
++ * This is a duplicate of bcm2835_(pltfrm_)data without caps quirks
++ * which are provided by the ACPI table.
++ */
++static const struct sdhci_pltfm_data sdhci_bcm_arasan_data = {
++	.quirks = SDHCI_QUIRK_BROKEN_CARD_DETECTION |
++		  SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
++		  SDHCI_QUIRK_NO_HISPD_BIT,
++	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++	.ops = &sdhci_iproc_32only_ops,
++};
++
++static const struct sdhci_iproc_data bcm_arasan_data = {
++	.pdata = &sdhci_bcm_arasan_data,
++};
++
+ static const struct acpi_device_id sdhci_iproc_acpi_ids[] = {
+ 	{ .id = "BRCM5871", .driver_data = (kernel_ulong_t)&iproc_cygnus_data },
+ 	{ .id = "BRCM5872", .driver_data = (kernel_ulong_t)&iproc_data },
++	{ .id = "BCM2847",  .driver_data = (kernel_ulong_t)&bcm_arasan_data },
++	{ .id = "BRCME88C", .driver_data = (kernel_ulong_t)&bcm2711_data },
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(acpi, sdhci_iproc_acpi_ids);
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 3561ae8a481a0..6edf9fffd934a 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -3994,10 +3994,10 @@ void __sdhci_read_caps(struct sdhci_host *host, const u16 *ver,
+ 	if (host->v4_mode)
+ 		sdhci_do_enable_v4_mode(host);
+ 
+-	of_property_read_u64(mmc_dev(host->mmc)->of_node,
+-			     "sdhci-caps-mask", &dt_caps_mask);
+-	of_property_read_u64(mmc_dev(host->mmc)->of_node,
+-			     "sdhci-caps", &dt_caps);
++	device_property_read_u64_array(mmc_dev(host->mmc),
++				       "sdhci-caps-mask", &dt_caps_mask, 1);
++	device_property_read_u64_array(mmc_dev(host->mmc),
++				       "sdhci-caps", &dt_caps, 1);
+ 
+ 	v = ver ? *ver : sdhci_readw(host, SDHCI_HOST_VERSION);
+ 	host->version = (v & SDHCI_SPEC_VER_MASK) >> SDHCI_SPEC_VER_SHIFT;
+diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
+index 13e0a8caf3b6f..600b9d09ec087 100644
+--- a/drivers/net/Kconfig
++++ b/drivers/net/Kconfig
+@@ -92,7 +92,7 @@ config WIREGUARD
+ 	select CRYPTO_POLY1305_ARM if ARM
+ 	select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
+ 	select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
+-	select CRYPTO_POLY1305_MIPS if CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
++	select CRYPTO_POLY1305_MIPS if MIPS
+ 	help
+ 	  WireGuard is a secure, fast, and easy to use replacement for IPSec
+ 	  that uses modern cryptography and clever networking tricks. It's
+diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
+index 99e5f272205d3..d712c6fdbc87d 100644
+--- a/drivers/net/can/flexcan.c
++++ b/drivers/net/can/flexcan.c
+@@ -662,7 +662,7 @@ static int flexcan_chip_freeze(struct flexcan_priv *priv)
+ 	u32 reg;
+ 
+ 	reg = priv->read(&regs->mcr);
+-	reg |= FLEXCAN_MCR_HALT;
++	reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT;
+ 	priv->write(reg, &regs->mcr);
+ 
+ 	while (timeout-- && !(priv->read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
+@@ -1375,10 +1375,13 @@ static int flexcan_chip_start(struct net_device *dev)
+ 
+ 	flexcan_set_bittiming(dev);
+ 
++	/* set freeze, halt */
++	err = flexcan_chip_freeze(priv);
++	if (err)
++		goto out_chip_disable;
++
+ 	/* MCR
+ 	 *
+-	 * enable freeze
+-	 * halt now
+ 	 * only supervisor access
+ 	 * enable warning int
+ 	 * enable individual RX masking
+@@ -1387,9 +1390,8 @@ static int flexcan_chip_start(struct net_device *dev)
+ 	 */
+ 	reg_mcr = priv->read(&regs->mcr);
+ 	reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff);
+-	reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV |
+-		FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ | FLEXCAN_MCR_IDAM_C |
+-		FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
++	reg_mcr |= FLEXCAN_MCR_SUPV | FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ |
++		FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
+ 
+ 	/* MCR
+ 	 *
+@@ -1800,10 +1802,14 @@ static int register_flexcandev(struct net_device *dev)
+ 	if (err)
+ 		goto out_chip_disable;
+ 
+-	/* set freeze, halt and activate FIFO, restrict register access */
++	/* set freeze, halt */
++	err = flexcan_chip_freeze(priv);
++	if (err)
++		goto out_chip_disable;
++
++	/* activate FIFO, restrict register access */
+ 	reg = priv->read(&regs->mcr);
+-	reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT |
+-		FLEXCAN_MCR_FEN | FLEXCAN_MCR_SUPV;
++	reg |=  FLEXCAN_MCR_FEN | FLEXCAN_MCR_SUPV;
+ 	priv->write(reg, &regs->mcr);
+ 
+ 	/* Currently we only support newer versions of this core
+diff --git a/drivers/net/can/m_can/tcan4x5x.c b/drivers/net/can/m_can/tcan4x5x.c
+index f726c5112294f..01f5b6e03a2dd 100644
+--- a/drivers/net/can/m_can/tcan4x5x.c
++++ b/drivers/net/can/m_can/tcan4x5x.c
+@@ -328,14 +328,14 @@ static int tcan4x5x_init(struct m_can_classdev *cdev)
+ 	if (ret)
+ 		return ret;
+ 
++	/* Zero out the MCAN buffers */
++	m_can_init_ram(cdev);
++
+ 	ret = regmap_update_bits(tcan4x5x->regmap, TCAN4X5X_CONFIG,
+ 				 TCAN4X5X_MODE_SEL_MASK, TCAN4X5X_MODE_NORMAL);
+ 	if (ret)
+ 		return ret;
+ 
+-	/* Zero out the MCAN buffers */
+-	m_can_init_ram(cdev);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index 4ca0296509936..1a855816cbc9d 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -1834,7 +1834,7 @@ out_unlock_ptp:
+ 				speed = SPEED_1000;
+ 			else if (bmcr & BMCR_SPEED100)
+ 				speed = SPEED_100;
+-			else if (bmcr & BMCR_SPEED10)
++			else
+ 				speed = SPEED_10;
+ 
+ 			sja1105_sgmii_pcs_force_speed(priv, speed);
+diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c
+index 9b7f1af5f5747..9e02f88645931 100644
+--- a/drivers/net/ethernet/atheros/alx/main.c
++++ b/drivers/net/ethernet/atheros/alx/main.c
+@@ -1894,13 +1894,16 @@ static int alx_resume(struct device *dev)
+ 
+ 	if (!netif_running(alx->dev))
+ 		return 0;
+-	netif_device_attach(alx->dev);
+ 
+ 	rtnl_lock();
+ 	err = __alx_open(alx, true);
+ 	rtnl_unlock();
++	if (err)
++		return err;
+ 
+-	return err;
++	netif_device_attach(alx->dev);
++
++	return 0;
+ }
+ 
+ static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index c7c5c01a783a0..a59c1f1fb31ed 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -8430,10 +8430,18 @@ static void bnxt_setup_inta(struct bnxt *bp)
+ 	bp->irq_tbl[0].handler = bnxt_inta;
+ }
+ 
++static int bnxt_init_int_mode(struct bnxt *bp);
++
+ static int bnxt_setup_int_mode(struct bnxt *bp)
+ {
+ 	int rc;
+ 
++	if (!bp->irq_tbl) {
++		rc = bnxt_init_int_mode(bp);
++		if (rc || !bp->irq_tbl)
++			return rc ?: -ENODEV;
++	}
++
+ 	if (bp->flags & BNXT_FLAG_USING_MSIX)
+ 		bnxt_setup_msix(bp);
+ 	else
+@@ -8618,7 +8626,7 @@ static int bnxt_init_inta(struct bnxt *bp)
+ 
+ static int bnxt_init_int_mode(struct bnxt *bp)
+ {
+-	int rc = 0;
++	int rc = -ENODEV;
+ 
+ 	if (bp->flags & BNXT_FLAG_MSIX_CAP)
+ 		rc = bnxt_init_msix(bp);
+@@ -9339,7 +9347,8 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
+ {
+ 	struct hwrm_func_drv_if_change_output *resp = bp->hwrm_cmd_resp_addr;
+ 	struct hwrm_func_drv_if_change_input req = {0};
+-	bool resc_reinit = false, fw_reset = false;
++	bool fw_reset = !bp->irq_tbl;
++	bool resc_reinit = false;
+ 	u32 flags = 0;
+ 	int rc;
+ 
+@@ -9367,6 +9376,7 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
+ 
+ 	if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state) && !fw_reset) {
+ 		netdev_err(bp->dev, "RESET_DONE not set during FW reset.\n");
++		set_bit(BNXT_STATE_ABORT_ERR, &bp->state);
+ 		return -ENODEV;
+ 	}
+ 	if (resc_reinit || fw_reset) {
+diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
+index 5c6c8c5ec7471..ba7f857d1710d 100644
+--- a/drivers/net/ethernet/davicom/dm9000.c
++++ b/drivers/net/ethernet/davicom/dm9000.c
+@@ -133,6 +133,8 @@ struct board_info {
+ 	u32		wake_state;
+ 
+ 	int		ip_summed;
++
++	struct regulator *power_supply;
+ };
+ 
+ /* debug code */
+@@ -1452,7 +1454,7 @@ dm9000_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(dev, "failed to request reset gpio %d: %d\n",
+ 				reset_gpios, ret);
+-			return -ENODEV;
++			goto out_regulator_disable;
+ 		}
+ 
+ 		/* According to manual PWRST# Low Period Min 1ms */
+@@ -1464,8 +1466,10 @@ dm9000_probe(struct platform_device *pdev)
+ 
+ 	if (!pdata) {
+ 		pdata = dm9000_parse_dt(&pdev->dev);
+-		if (IS_ERR(pdata))
+-			return PTR_ERR(pdata);
++		if (IS_ERR(pdata)) {
++			ret = PTR_ERR(pdata);
++			goto out_regulator_disable;
++		}
+ 	}
+ 
+ 	/* Init network device */
+@@ -1482,6 +1486,8 @@ dm9000_probe(struct platform_device *pdev)
+ 
+ 	db->dev = &pdev->dev;
+ 	db->ndev = ndev;
++	if (!IS_ERR(power))
++		db->power_supply = power;
+ 
+ 	spin_lock_init(&db->lock);
+ 	mutex_init(&db->addr_lock);
+@@ -1706,6 +1712,10 @@ out:
+ 	dm9000_release_board(pdev, db);
+ 	free_netdev(ndev);
+ 
++out_regulator_disable:
++	if (!IS_ERR(power))
++		regulator_disable(power);
++
+ 	return ret;
+ }
+ 
+@@ -1763,10 +1773,13 @@ static int
+ dm9000_drv_remove(struct platform_device *pdev)
+ {
+ 	struct net_device *ndev = platform_get_drvdata(pdev);
++	struct board_info *dm = to_dm9000_board(ndev);
+ 
+ 	unregister_netdev(ndev);
+-	dm9000_release_board(pdev, netdev_priv(ndev));
++	dm9000_release_board(pdev, dm);
+ 	free_netdev(ndev);		/* free device structure */
++	if (dm->power_supply)
++		regulator_disable(dm->power_supply);
+ 
+ 	dev_dbg(&pdev->dev, "released and freed device\n");
+ 	return 0;
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index fc2075ea57fea..df4a858c80015 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -321,6 +321,8 @@ static int enetc_poll(struct napi_struct *napi, int budget)
+ 	int work_done;
+ 	int i;
+ 
++	enetc_lock_mdio();
++
+ 	for (i = 0; i < v->count_tx_rings; i++)
+ 		if (!enetc_clean_tx_ring(&v->tx_ring[i], budget))
+ 			complete = false;
+@@ -331,8 +333,10 @@ static int enetc_poll(struct napi_struct *napi, int budget)
+ 	if (work_done)
+ 		v->rx_napi_work = true;
+ 
+-	if (!complete)
++	if (!complete) {
++		enetc_unlock_mdio();
+ 		return budget;
++	}
+ 
+ 	napi_complete_done(napi, work_done);
+ 
+@@ -341,8 +345,6 @@ static int enetc_poll(struct napi_struct *napi, int budget)
+ 
+ 	v->rx_napi_work = false;
+ 
+-	enetc_lock_mdio();
+-
+ 	/* enable interrupts */
+ 	enetc_wr_reg_hot(v->rbier, ENETC_RBIER_RXTIE);
+ 
+@@ -367,8 +369,8 @@ static void enetc_get_tx_tstamp(struct enetc_hw *hw, union enetc_tx_bd *txbd,
+ {
+ 	u32 lo, hi, tstamp_lo;
+ 
+-	lo = enetc_rd(hw, ENETC_SICTR0);
+-	hi = enetc_rd(hw, ENETC_SICTR1);
++	lo = enetc_rd_hot(hw, ENETC_SICTR0);
++	hi = enetc_rd_hot(hw, ENETC_SICTR1);
+ 	tstamp_lo = le32_to_cpu(txbd->wb.tstamp);
+ 	if (lo <= tstamp_lo)
+ 		hi -= 1;
+@@ -382,6 +384,12 @@ static void enetc_tstamp_tx(struct sk_buff *skb, u64 tstamp)
+ 	if (skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS) {
+ 		memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+ 		shhwtstamps.hwtstamp = ns_to_ktime(tstamp);
++		/* Ensure skb_mstamp_ns, which might have been populated with
++		 * the txtime, is not mistaken for a software timestamp,
++		 * because this will prevent the dispatch of our hardware
++		 * timestamp to the socket.
++		 */
++		skb->tstamp = ktime_set(0, 0);
+ 		skb_tstamp_tx(skb, &shhwtstamps);
+ 	}
+ }
+@@ -398,9 +406,7 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
+ 	i = tx_ring->next_to_clean;
+ 	tx_swbd = &tx_ring->tx_swbd[i];
+ 
+-	enetc_lock_mdio();
+ 	bds_to_clean = enetc_bd_ready_count(tx_ring, i);
+-	enetc_unlock_mdio();
+ 
+ 	do_tstamp = false;
+ 
+@@ -443,8 +449,6 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
+ 			tx_swbd = tx_ring->tx_swbd;
+ 		}
+ 
+-		enetc_lock_mdio();
+-
+ 		/* BD iteration loop end */
+ 		if (is_eof) {
+ 			tx_frm_cnt++;
+@@ -455,8 +459,6 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
+ 
+ 		if (unlikely(!bds_to_clean))
+ 			bds_to_clean = enetc_bd_ready_count(tx_ring, i);
+-
+-		enetc_unlock_mdio();
+ 	}
+ 
+ 	tx_ring->next_to_clean = i;
+@@ -567,9 +569,8 @@ static void enetc_get_rx_tstamp(struct net_device *ndev,
+ static void enetc_get_offloads(struct enetc_bdr *rx_ring,
+ 			       union enetc_rx_bd *rxbd, struct sk_buff *skb)
+ {
+-#ifdef CONFIG_FSL_ENETC_PTP_CLOCK
+ 	struct enetc_ndev_priv *priv = netdev_priv(rx_ring->ndev);
+-#endif
++
+ 	/* TODO: hashing */
+ 	if (rx_ring->ndev->features & NETIF_F_RXCSUM) {
+ 		u16 inet_csum = le16_to_cpu(rxbd->r.inet_csum);
+@@ -578,12 +579,31 @@ static void enetc_get_offloads(struct enetc_bdr *rx_ring,
+ 		skb->ip_summed = CHECKSUM_COMPLETE;
+ 	}
+ 
+-	/* copy VLAN to skb, if one is extracted, for now we assume it's a
+-	 * standard TPID, but HW also supports custom values
+-	 */
+-	if (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_VLAN)
+-		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+-				       le16_to_cpu(rxbd->r.vlan_opt));
++	if (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_VLAN) {
++		__be16 tpid = 0;
++
++		switch (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_TPID) {
++		case 0:
++			tpid = htons(ETH_P_8021Q);
++			break;
++		case 1:
++			tpid = htons(ETH_P_8021AD);
++			break;
++		case 2:
++			tpid = htons(enetc_port_rd(&priv->si->hw,
++						   ENETC_PCVLANR1));
++			break;
++		case 3:
++			tpid = htons(enetc_port_rd(&priv->si->hw,
++						   ENETC_PCVLANR2));
++			break;
++		default:
++			break;
++		}
++
++		__vlan_hwaccel_put_tag(skb, tpid, le16_to_cpu(rxbd->r.vlan_opt));
++	}
++
+ #ifdef CONFIG_FSL_ENETC_PTP_CLOCK
+ 	if (priv->active_offloads & ENETC_F_RX_TSTAMP)
+ 		enetc_get_rx_tstamp(rx_ring->ndev, rxbd, skb);
+@@ -700,8 +720,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
+ 		u32 bd_status;
+ 		u16 size;
+ 
+-		enetc_lock_mdio();
+-
+ 		if (cleaned_cnt >= ENETC_RXBD_BUNDLE) {
+ 			int count = enetc_refill_rx_ring(rx_ring, cleaned_cnt);
+ 
+@@ -712,19 +730,15 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
+ 
+ 		rxbd = enetc_rxbd(rx_ring, i);
+ 		bd_status = le32_to_cpu(rxbd->r.lstatus);
+-		if (!bd_status) {
+-			enetc_unlock_mdio();
++		if (!bd_status)
+ 			break;
+-		}
+ 
+ 		enetc_wr_reg_hot(rx_ring->idr, BIT(rx_ring->index));
+ 		dma_rmb(); /* for reading other rxbd fields */
+ 		size = le16_to_cpu(rxbd->r.buf_len);
+ 		skb = enetc_map_rx_buff_to_skb(rx_ring, i, size);
+-		if (!skb) {
+-			enetc_unlock_mdio();
++		if (!skb)
+ 			break;
+-		}
+ 
+ 		enetc_get_offloads(rx_ring, rxbd, skb);
+ 
+@@ -736,7 +750,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
+ 
+ 		if (unlikely(bd_status &
+ 			     ENETC_RXBD_LSTATUS(ENETC_RXBD_ERR_MASK))) {
+-			enetc_unlock_mdio();
+ 			dev_kfree_skb(skb);
+ 			while (!(bd_status & ENETC_RXBD_LSTATUS_F)) {
+ 				dma_rmb();
+@@ -776,8 +789,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
+ 
+ 		enetc_process_skb(rx_ring, skb);
+ 
+-		enetc_unlock_mdio();
+-
+ 		napi_gro_receive(napi, skb);
+ 
+ 		rx_frm_cnt++;
+@@ -1024,7 +1035,7 @@ static void enetc_free_rxtx_rings(struct enetc_ndev_priv *priv)
+ 		enetc_free_tx_ring(priv->tx_ring[i]);
+ }
+ 
+-static int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
++int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
+ {
+ 	int size = cbdr->bd_count * sizeof(struct enetc_cbd);
+ 
+@@ -1045,7 +1056,7 @@ static int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
+ 	return 0;
+ }
+ 
+-static void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
++void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
+ {
+ 	int size = cbdr->bd_count * sizeof(struct enetc_cbd);
+ 
+@@ -1053,7 +1064,7 @@ static void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
+ 	cbdr->bd_base = NULL;
+ }
+ 
+-static void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
++void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
+ {
+ 	/* set CBDR cache attributes */
+ 	enetc_wr(hw, ENETC_SICAR2,
+@@ -1073,7 +1084,7 @@ static void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
+ 	cbdr->cir = hw->reg + ENETC_SICBDRCIR;
+ }
+ 
+-static void enetc_clear_cbdr(struct enetc_hw *hw)
++void enetc_clear_cbdr(struct enetc_hw *hw)
+ {
+ 	enetc_wr(hw, ENETC_SICBDRMR, 0);
+ }
+@@ -1098,13 +1109,12 @@ static int enetc_setup_default_rss_table(struct enetc_si *si, int num_groups)
+ 	return 0;
+ }
+ 
+-static int enetc_configure_si(struct enetc_ndev_priv *priv)
++int enetc_configure_si(struct enetc_ndev_priv *priv)
+ {
+ 	struct enetc_si *si = priv->si;
+ 	struct enetc_hw *hw = &si->hw;
+ 	int err;
+ 
+-	enetc_setup_cbdr(hw, &si->cbd_ring);
+ 	/* set SI cache attributes */
+ 	enetc_wr(hw, ENETC_SICAR0,
+ 		 ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT);
+@@ -1152,6 +1162,8 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
+ 	if (err)
+ 		return err;
+ 
++	enetc_setup_cbdr(&si->hw, &si->cbd_ring);
++
+ 	priv->cls_rules = kcalloc(si->num_fs_entries, sizeof(*priv->cls_rules),
+ 				  GFP_KERNEL);
+ 	if (!priv->cls_rules) {
+@@ -1159,14 +1171,8 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
+ 		goto err_alloc_cls;
+ 	}
+ 
+-	err = enetc_configure_si(priv);
+-	if (err)
+-		goto err_config_si;
+-
+ 	return 0;
+ 
+-err_config_si:
+-	kfree(priv->cls_rules);
+ err_alloc_cls:
+ 	enetc_clear_cbdr(&si->hw);
+ 	enetc_free_cbdr(priv->dev, &si->cbd_ring);
+@@ -1252,7 +1258,8 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
+ 	rx_ring->idr = hw->reg + ENETC_SIRXIDR;
+ 
+ 	enetc_refill_rx_ring(rx_ring, enetc_bd_unused(rx_ring));
+-	enetc_wr(hw, ENETC_SIRXIDR, rx_ring->next_to_use);
++	/* update ENETC's consumer index */
++	enetc_rxbdr_wr(hw, idx, ENETC_RBCIR, rx_ring->next_to_use);
+ 
+ 	/* enable ring */
+ 	enetc_rxbdr_wr(hw, idx, ENETC_RBMR, rbmr);
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
+index dd0fb0c066d75..15d19cbd5a954 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc.h
+@@ -293,6 +293,7 @@ void enetc_get_si_caps(struct enetc_si *si);
+ void enetc_init_si_rings_params(struct enetc_ndev_priv *priv);
+ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv);
+ void enetc_free_si_resources(struct enetc_ndev_priv *priv);
++int enetc_configure_si(struct enetc_ndev_priv *priv);
+ 
+ int enetc_open(struct net_device *ndev);
+ int enetc_close(struct net_device *ndev);
+@@ -310,6 +311,10 @@ int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
+ void enetc_set_ethtool_ops(struct net_device *ndev);
+ 
+ /* control buffer descriptor ring (CBDR) */
++int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr);
++void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr);
++void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr);
++void enetc_clear_cbdr(struct enetc_hw *hw);
+ int enetc_set_mac_flt_entry(struct enetc_si *si, int index,
+ 			    char *mac_addr, int si_map);
+ int enetc_clear_mac_flt_entry(struct enetc_si *si, int index);
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+index 014ca6ae121f8..21a6ce415cb22 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+@@ -172,6 +172,8 @@ enum enetc_bdr_type {TX, RX};
+ #define ENETC_PSIPMAR0(n)	(0x0100 + (n) * 0x8) /* n = SI index */
+ #define ENETC_PSIPMAR1(n)	(0x0104 + (n) * 0x8)
+ #define ENETC_PVCLCTR		0x0208
++#define ENETC_PCVLANR1		0x0210
++#define ENETC_PCVLANR2		0x0214
+ #define ENETC_VLAN_TYPE_C	BIT(0)
+ #define ENETC_VLAN_TYPE_S	BIT(1)
+ #define ENETC_PVCLCTR_OVTPIDL(bmp)	((bmp) & 0xff) /* VLAN_TYPE */
+@@ -236,10 +238,17 @@ enum enetc_bdr_type {TX, RX};
+ #define ENETC_PM_IMDIO_BASE	0x8030
+ 
+ #define ENETC_PM0_IF_MODE	0x8300
+-#define ENETC_PMO_IFM_RG	BIT(2)
++#define ENETC_PM0_IFM_RG	BIT(2)
+ #define ENETC_PM0_IFM_RLP	(BIT(5) | BIT(11))
+-#define ENETC_PM0_IFM_RGAUTO	(BIT(15) | ENETC_PMO_IFM_RG | BIT(1))
+-#define ENETC_PM0_IFM_XGMII	BIT(12)
++#define ENETC_PM0_IFM_EN_AUTO	BIT(15)
++#define ENETC_PM0_IFM_SSP_MASK	GENMASK(14, 13)
++#define ENETC_PM0_IFM_SSP_1000	(2 << 13)
++#define ENETC_PM0_IFM_SSP_100	(0 << 13)
++#define ENETC_PM0_IFM_SSP_10	(1 << 13)
++#define ENETC_PM0_IFM_FULL_DPX	BIT(12)
++#define ENETC_PM0_IFM_IFMODE_MASK GENMASK(1, 0)
++#define ENETC_PM0_IFM_IFMODE_XGMII 0
++#define ENETC_PM0_IFM_IFMODE_GMII 2
+ #define ENETC_PSIDCAPR		0x1b08
+ #define ENETC_PSIDCAPR_MSK	GENMASK(15, 0)
+ #define ENETC_PSFCAPR		0x1b18
+@@ -453,6 +462,8 @@ static inline u64 _enetc_rd_reg64_wa(void __iomem *reg)
+ #define enetc_wr_reg(reg, val)		_enetc_wr_reg_wa((reg), (val))
+ #define enetc_rd(hw, off)		enetc_rd_reg((hw)->reg + (off))
+ #define enetc_wr(hw, off, val)		enetc_wr_reg((hw)->reg + (off), val)
++#define enetc_rd_hot(hw, off)		enetc_rd_reg_hot((hw)->reg + (off))
++#define enetc_wr_hot(hw, off, val)	enetc_wr_reg_hot((hw)->reg + (off), val)
+ #define enetc_rd64(hw, off)		_enetc_rd_reg64_wa((hw)->reg + (off))
+ /* port register accessors - PF only */
+ #define enetc_port_rd(hw, off)		enetc_rd_reg((hw)->port + (off))
+@@ -573,6 +584,7 @@ union enetc_rx_bd {
+ #define ENETC_RXBD_LSTATUS(flags)	((flags) << 16)
+ #define ENETC_RXBD_FLAG_VLAN	BIT(9)
+ #define ENETC_RXBD_FLAG_TSTMP	BIT(10)
++#define ENETC_RXBD_FLAG_TPID	GENMASK(1, 0)
+ 
+ #define ENETC_MAC_ADDR_FILT_CNT	8 /* # of supported entries per port */
+ #define EMETC_MAC_ADDR_FILT_RES	3 /* # of reserved entries at the beginning */
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index 796e3d6f23f09..83187cd59fddd 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -190,7 +190,6 @@ static void enetc_pf_set_rx_mode(struct net_device *ndev)
+ {
+ 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ 	struct enetc_pf *pf = enetc_si_priv(priv->si);
+-	char vlan_promisc_simap = pf->vlan_promisc_simap;
+ 	struct enetc_hw *hw = &priv->si->hw;
+ 	bool uprom = false, mprom = false;
+ 	struct enetc_mac_filter *filter;
+@@ -203,16 +202,12 @@ static void enetc_pf_set_rx_mode(struct net_device *ndev)
+ 		psipmr = ENETC_PSIPMR_SET_UP(0) | ENETC_PSIPMR_SET_MP(0);
+ 		uprom = true;
+ 		mprom = true;
+-		/* Enable VLAN promiscuous mode for SI0 (PF) */
+-		vlan_promisc_simap |= BIT(0);
+ 	} else if (ndev->flags & IFF_ALLMULTI) {
+ 		/* enable multi cast promisc mode for SI0 (PF) */
+ 		psipmr = ENETC_PSIPMR_SET_MP(0);
+ 		mprom = true;
+ 	}
+ 
+-	enetc_set_vlan_promisc(&pf->si->hw, vlan_promisc_simap);
+-
+ 	/* first 2 filter entries belong to PF */
+ 	if (!uprom) {
+ 		/* Update unicast filters */
+@@ -320,7 +315,7 @@ static void enetc_set_loopback(struct net_device *ndev, bool en)
+ 	u32 reg;
+ 
+ 	reg = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
+-	if (reg & ENETC_PMO_IFM_RG) {
++	if (reg & ENETC_PM0_IFM_RG) {
+ 		/* RGMII mode */
+ 		reg = (reg & ~ENETC_PM0_IFM_RLP) |
+ 		      (en ? ENETC_PM0_IFM_RLP : 0);
+@@ -499,13 +494,20 @@ static void enetc_configure_port_mac(struct enetc_hw *hw)
+ 
+ static void enetc_mac_config(struct enetc_hw *hw, phy_interface_t phy_mode)
+ {
+-	/* set auto-speed for RGMII */
+-	if (enetc_port_rd(hw, ENETC_PM0_IF_MODE) & ENETC_PMO_IFM_RG ||
+-	    phy_interface_mode_is_rgmii(phy_mode))
+-		enetc_port_wr(hw, ENETC_PM0_IF_MODE, ENETC_PM0_IFM_RGAUTO);
++	u32 val;
+ 
+-	if (phy_mode == PHY_INTERFACE_MODE_USXGMII)
+-		enetc_port_wr(hw, ENETC_PM0_IF_MODE, ENETC_PM0_IFM_XGMII);
++	if (phy_interface_mode_is_rgmii(phy_mode)) {
++		val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
++		val &= ~ENETC_PM0_IFM_EN_AUTO;
++		val &= ENETC_PM0_IFM_IFMODE_MASK;
++		val |= ENETC_PM0_IFM_IFMODE_GMII | ENETC_PM0_IFM_RG;
++		enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
++	}
++
++	if (phy_mode == PHY_INTERFACE_MODE_USXGMII) {
++		val = ENETC_PM0_IFM_FULL_DPX | ENETC_PM0_IFM_IFMODE_XGMII;
++		enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
++	}
+ }
+ 
+ static void enetc_mac_enable(struct enetc_hw *hw, bool en)
+@@ -857,13 +859,12 @@ static bool enetc_port_has_pcs(struct enetc_pf *pf)
+ 		pf->if_mode == PHY_INTERFACE_MODE_USXGMII);
+ }
+ 
+-static int enetc_mdiobus_create(struct enetc_pf *pf)
++static int enetc_mdiobus_create(struct enetc_pf *pf, struct device_node *node)
+ {
+-	struct device *dev = &pf->si->pdev->dev;
+ 	struct device_node *mdio_np;
+ 	int err;
+ 
+-	mdio_np = of_get_child_by_name(dev->of_node, "mdio");
++	mdio_np = of_get_child_by_name(node, "mdio");
+ 	if (mdio_np) {
+ 		err = enetc_mdio_probe(pf, mdio_np);
+ 
+@@ -944,6 +945,34 @@ static void enetc_pl_mac_config(struct phylink_config *config,
+ 		phylink_set_pcs(priv->phylink, &pf->pcs->pcs);
+ }
+ 
++static void enetc_force_rgmii_mac(struct enetc_hw *hw, int speed, int duplex)
++{
++	u32 old_val, val;
++
++	old_val = val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
++
++	if (speed == SPEED_1000) {
++		val &= ~ENETC_PM0_IFM_SSP_MASK;
++		val |= ENETC_PM0_IFM_SSP_1000;
++	} else if (speed == SPEED_100) {
++		val &= ~ENETC_PM0_IFM_SSP_MASK;
++		val |= ENETC_PM0_IFM_SSP_100;
++	} else if (speed == SPEED_10) {
++		val &= ~ENETC_PM0_IFM_SSP_MASK;
++		val |= ENETC_PM0_IFM_SSP_10;
++	}
++
++	if (duplex == DUPLEX_FULL)
++		val |= ENETC_PM0_IFM_FULL_DPX;
++	else
++		val &= ~ENETC_PM0_IFM_FULL_DPX;
++
++	if (val == old_val)
++		return;
++
++	enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
++}
++
+ static void enetc_pl_mac_link_up(struct phylink_config *config,
+ 				 struct phy_device *phy, unsigned int mode,
+ 				 phy_interface_t interface, int speed,
+@@ -956,6 +985,10 @@ static void enetc_pl_mac_link_up(struct phylink_config *config,
+ 	if (priv->active_offloads & ENETC_F_QBV)
+ 		enetc_sched_speed_set(priv, speed);
+ 
++	if (!phylink_autoneg_inband(mode) &&
++	    phy_interface_mode_is_rgmii(interface))
++		enetc_force_rgmii_mac(&pf->si->hw, speed, duplex);
++
+ 	enetc_mac_enable(&pf->si->hw, true);
+ }
+ 
+@@ -975,18 +1008,17 @@ static const struct phylink_mac_ops enetc_mac_phylink_ops = {
+ 	.mac_link_down = enetc_pl_mac_link_down,
+ };
+ 
+-static int enetc_phylink_create(struct enetc_ndev_priv *priv)
++static int enetc_phylink_create(struct enetc_ndev_priv *priv,
++				struct device_node *node)
+ {
+ 	struct enetc_pf *pf = enetc_si_priv(priv->si);
+-	struct device *dev = &pf->si->pdev->dev;
+ 	struct phylink *phylink;
+ 	int err;
+ 
+ 	pf->phylink_config.dev = &priv->ndev->dev;
+ 	pf->phylink_config.type = PHYLINK_NETDEV;
+ 
+-	phylink = phylink_create(&pf->phylink_config,
+-				 of_fwnode_handle(dev->of_node),
++	phylink = phylink_create(&pf->phylink_config, of_fwnode_handle(node),
+ 				 pf->if_mode, &enetc_mac_phylink_ops);
+ 	if (IS_ERR(phylink)) {
+ 		err = PTR_ERR(phylink);
+@@ -1049,20 +1081,36 @@ static int enetc_init_port_rss_memory(struct enetc_si *si)
+ 	return err;
+ }
+ 
++static void enetc_init_unused_port(struct enetc_si *si)
++{
++	struct device *dev = &si->pdev->dev;
++	struct enetc_hw *hw = &si->hw;
++	int err;
++
++	si->cbd_ring.bd_count = ENETC_CBDR_DEFAULT_SIZE;
++	err = enetc_alloc_cbdr(dev, &si->cbd_ring);
++	if (err)
++		return;
++
++	enetc_setup_cbdr(hw, &si->cbd_ring);
++
++	enetc_init_port_rfs_memory(si);
++	enetc_init_port_rss_memory(si);
++
++	enetc_clear_cbdr(hw);
++	enetc_free_cbdr(dev, &si->cbd_ring);
++}
++
+ static int enetc_pf_probe(struct pci_dev *pdev,
+ 			  const struct pci_device_id *ent)
+ {
++	struct device_node *node = pdev->dev.of_node;
+ 	struct enetc_ndev_priv *priv;
+ 	struct net_device *ndev;
+ 	struct enetc_si *si;
+ 	struct enetc_pf *pf;
+ 	int err;
+ 
+-	if (pdev->dev.of_node && !of_device_is_available(pdev->dev.of_node)) {
+-		dev_info(&pdev->dev, "device is disabled, skipping\n");
+-		return -ENODEV;
+-	}
+-
+ 	err = enetc_pci_probe(pdev, KBUILD_MODNAME, sizeof(*pf));
+ 	if (err) {
+ 		dev_err(&pdev->dev, "PCI probing failed\n");
+@@ -1076,6 +1124,13 @@ static int enetc_pf_probe(struct pci_dev *pdev,
+ 		goto err_map_pf_space;
+ 	}
+ 
++	if (node && !of_device_is_available(node)) {
++		enetc_init_unused_port(si);
++		dev_info(&pdev->dev, "device is disabled, skipping\n");
++		err = -ENODEV;
++		goto err_device_disabled;
++	}
++
+ 	pf = enetc_si_priv(si);
+ 	pf->si = si;
+ 	pf->total_vfs = pci_sriov_get_totalvfs(pdev);
+@@ -1115,18 +1170,24 @@ static int enetc_pf_probe(struct pci_dev *pdev,
+ 		goto err_init_port_rss;
+ 	}
+ 
++	err = enetc_configure_si(priv);
++	if (err) {
++		dev_err(&pdev->dev, "Failed to configure SI\n");
++		goto err_config_si;
++	}
++
+ 	err = enetc_alloc_msix(priv);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "MSIX alloc failed\n");
+ 		goto err_alloc_msix;
+ 	}
+ 
+-	if (!of_get_phy_mode(pdev->dev.of_node, &pf->if_mode)) {
+-		err = enetc_mdiobus_create(pf);
++	if (!of_get_phy_mode(node, &pf->if_mode)) {
++		err = enetc_mdiobus_create(pf, node);
+ 		if (err)
+ 			goto err_mdiobus_create;
+ 
+-		err = enetc_phylink_create(priv);
++		err = enetc_phylink_create(priv, node);
+ 		if (err)
+ 			goto err_phylink_create;
+ 	}
+@@ -1143,6 +1204,7 @@ err_phylink_create:
+ 	enetc_mdiobus_destroy(pf);
+ err_mdiobus_create:
+ 	enetc_free_msix(priv);
++err_config_si:
+ err_init_port_rss:
+ err_init_port_rfs:
+ err_alloc_msix:
+@@ -1151,6 +1213,7 @@ err_alloc_si_res:
+ 	si->ndev = NULL;
+ 	free_netdev(ndev);
+ err_alloc_netdev:
++err_device_disabled:
+ err_map_pf_space:
+ 	enetc_pci_remove(pdev);
+ 
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_vf.c b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
+index 7b5c82c7e4e5a..33c125735db7e 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_vf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
+@@ -177,6 +177,12 @@ static int enetc_vf_probe(struct pci_dev *pdev,
+ 		goto err_alloc_si_res;
+ 	}
+ 
++	err = enetc_configure_si(priv);
++	if (err) {
++		dev_err(&pdev->dev, "Failed to configure SI\n");
++		goto err_config_si;
++	}
++
+ 	err = enetc_alloc_msix(priv);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "MSIX alloc failed\n");
+@@ -193,6 +199,7 @@ static int enetc_vf_probe(struct pci_dev *pdev,
+ 
+ err_reg_netdev:
+ 	enetc_free_msix(priv);
++err_config_si:
+ err_alloc_msix:
+ 	enetc_free_si_resources(priv);
+ err_alloc_si_res:
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+index 096e26a2e16b4..36690fc5c1aff 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+@@ -1031,16 +1031,16 @@ struct hclge_fd_tcam_config_3_cmd {
+ #define HCLGE_FD_AD_DROP_B		0
+ #define HCLGE_FD_AD_DIRECT_QID_B	1
+ #define HCLGE_FD_AD_QID_S		2
+-#define HCLGE_FD_AD_QID_M		GENMASK(12, 2)
++#define HCLGE_FD_AD_QID_M		GENMASK(11, 2)
+ #define HCLGE_FD_AD_USE_COUNTER_B	12
+ #define HCLGE_FD_AD_COUNTER_NUM_S	13
+ #define HCLGE_FD_AD_COUNTER_NUM_M	GENMASK(20, 13)
+ #define HCLGE_FD_AD_NXT_STEP_B		20
+ #define HCLGE_FD_AD_NXT_KEY_S		21
+-#define HCLGE_FD_AD_NXT_KEY_M		GENMASK(26, 21)
++#define HCLGE_FD_AD_NXT_KEY_M		GENMASK(25, 21)
+ #define HCLGE_FD_AD_WR_RULE_ID_B	0
+ #define HCLGE_FD_AD_RULE_ID_S		1
+-#define HCLGE_FD_AD_RULE_ID_M		GENMASK(13, 1)
++#define HCLGE_FD_AD_RULE_ID_M		GENMASK(12, 1)
+ 
+ struct hclge_fd_ad_config_cmd {
+ 	u8 stage;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index c40820baf48a6..b856dbe4db73b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -5115,9 +5115,9 @@ static bool hclge_fd_convert_tuple(u32 tuple_bit, u8 *key_x, u8 *key_y,
+ 	case BIT(INNER_SRC_MAC):
+ 		for (i = 0; i < ETH_ALEN; i++) {
+ 			calc_x(key_x[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
+-			       rule->tuples.src_mac[i]);
++			       rule->tuples_mask.src_mac[i]);
+ 			calc_y(key_y[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
+-			       rule->tuples.src_mac[i]);
++			       rule->tuples_mask.src_mac[i]);
+ 		}
+ 
+ 		return true;
+@@ -6183,8 +6183,7 @@ static void hclge_fd_get_ext_info(struct ethtool_rx_flow_spec *fs,
+ 		fs->h_ext.vlan_tci = cpu_to_be16(rule->tuples.vlan_tag1);
+ 		fs->m_ext.vlan_tci =
+ 				rule->unused_tuple & BIT(INNER_VLAN_TAG_FST) ?
+-				cpu_to_be16(VLAN_VID_MASK) :
+-				cpu_to_be16(rule->tuples_mask.vlan_tag1);
++				0 : cpu_to_be16(rule->tuples_mask.vlan_tag1);
+ 	}
+ 
+ 	if (fs->flow_type & FLOW_MAC_EXT) {
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 5e1f4e71af7bc..f184f4a79cc39 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1832,10 +1832,9 @@ static int ibmvnic_set_mac(struct net_device *netdev, void *p)
+ 	if (!is_valid_ether_addr(addr->sa_data))
+ 		return -EADDRNOTAVAIL;
+ 
+-	if (adapter->state != VNIC_PROBED) {
+-		ether_addr_copy(adapter->mac_addr, addr->sa_data);
++	ether_addr_copy(adapter->mac_addr, addr->sa_data);
++	if (adapter->state != VNIC_PROBED)
+ 		rc = __ibmvnic_set_mac(netdev, addr->sa_data);
+-	}
+ 
+ 	return rc;
+ }
+@@ -5176,16 +5175,14 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset)
+ {
+ 	struct device *dev = &adapter->vdev->dev;
+ 	unsigned long timeout = msecs_to_jiffies(20000);
+-	u64 old_num_rx_queues, old_num_tx_queues;
++	u64 old_num_rx_queues = adapter->req_rx_queues;
++	u64 old_num_tx_queues = adapter->req_tx_queues;
+ 	int rc;
+ 
+ 	adapter->from_passive_init = false;
+ 
+-	if (reset) {
+-		old_num_rx_queues = adapter->req_rx_queues;
+-		old_num_tx_queues = adapter->req_tx_queues;
++	if (reset)
+ 		reinit_completion(&adapter->init_done);
+-	}
+ 
+ 	adapter->init_done_rc = 0;
+ 	rc = ibmvnic_send_crq_init(adapter);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 59971f62e6268..3e4a4d6f0419c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -15100,6 +15100,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		if (err) {
+ 			dev_info(&pdev->dev,
+ 				 "setup of misc vector failed: %d\n", err);
++			i40e_cloud_filter_exit(pf);
++			i40e_fdir_teardown(pf);
+ 			goto err_vsis;
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+index eca73526ac86b..54d47265a7ac1 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+@@ -575,6 +575,11 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
+ 		return -EINVAL;
+ 	}
+ 
++	if (xs->props.mode != XFRM_MODE_TRANSPORT) {
++		netdev_err(dev, "Unsupported mode for ipsec offload\n");
++		return -EINVAL;
++	}
++
+ 	if (ixgbe_ipsec_check_mgmt_ip(xs)) {
+ 		netdev_err(dev, "IPsec IP addr clash with mgmt filters\n");
+ 		return -EINVAL;
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+index 5170dd9d8705b..caaea2c920a6e 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+@@ -272,6 +272,11 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
+ 		return -EINVAL;
+ 	}
+ 
++	if (xs->props.mode != XFRM_MODE_TRANSPORT) {
++		netdev_err(dev, "Unsupported mode for ipsec offload\n");
++		return -EINVAL;
++	}
++
+ 	if (xs->xso.flags & XFRM_OFFLOAD_INBOUND) {
+ 		struct rx_sa rsa;
+ 
+diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c
+index a8641a407c06a..96d2891f1675a 100644
+--- a/drivers/net/ethernet/mediatek/mtk_star_emac.c
++++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c
+@@ -1225,8 +1225,6 @@ static int mtk_star_receive_packet(struct mtk_star_priv *priv)
+ 		goto push_new_skb;
+ 	}
+ 
+-	desc_data.dma_addr = new_dma_addr;
+-
+ 	/* We can't fail anymore at this point: it's safe to unmap the skb. */
+ 	mtk_star_dma_unmap_rx(priv, &desc_data);
+ 
+@@ -1236,6 +1234,9 @@ static int mtk_star_receive_packet(struct mtk_star_priv *priv)
+ 	desc_data.skb->dev = ndev;
+ 	netif_receive_skb(desc_data.skb);
+ 
++	/* update dma_addr for new skb */
++	desc_data.dma_addr = new_dma_addr;
++
+ push_new_skb:
+ 	desc_data.len = skb_tailroom(new_skb);
+ 	desc_data.skb = new_skb;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+index 23849f2b9c252..1434df66fcf2e 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+@@ -47,7 +47,7 @@
+ #define EN_ETHTOOL_SHORT_MASK cpu_to_be16(0xffff)
+ #define EN_ETHTOOL_WORD_MASK  cpu_to_be32(0xffffffff)
+ 
+-static int mlx4_en_moderation_update(struct mlx4_en_priv *priv)
++int mlx4_en_moderation_update(struct mlx4_en_priv *priv)
+ {
+ 	int i, t;
+ 	int err = 0;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+index 6f290319b6178..d8a20e83d9040 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+@@ -3559,6 +3559,8 @@ int mlx4_en_reset_config(struct net_device *dev,
+ 			en_err(priv, "Failed starting port\n");
+ 	}
+ 
++	if (!err)
++		err = mlx4_en_moderation_update(priv);
+ out:
+ 	mutex_unlock(&mdev->state_lock);
+ 	kfree(tmp);
+diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+index 30378e4c90b5b..0aa4a23ad3def 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
++++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+@@ -795,6 +795,7 @@ void mlx4_en_ptp_overflow_check(struct mlx4_en_dev *mdev);
+ #define DEV_FEATURE_CHANGED(dev, new_features, feature) \
+ 	((dev->features & feature) ^ (new_features & feature))
+ 
++int mlx4_en_moderation_update(struct mlx4_en_priv *priv);
+ int mlx4_en_reset_config(struct net_device *dev,
+ 			 struct hwtstamp_config ts_config,
+ 			 netdev_features_t new_features);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
+index 39eff6a57ba22..3c3069afc0a31 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
+@@ -4208,6 +4208,7 @@ MLXSW_ITEM32(reg, ptys, ext_eth_proto_cap, 0x08, 0, 32);
+ #define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_CR4		BIT(20)
+ #define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_SR4		BIT(21)
+ #define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4		BIT(22)
++#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4	BIT(23)
+ #define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_CR		BIT(27)
+ #define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_KR		BIT(28)
+ #define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_SR		BIT(29)
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
+index 540616469e284..68333ecf6151e 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
+@@ -1171,6 +1171,11 @@ static const struct mlxsw_sp1_port_link_mode mlxsw_sp1_port_link_mode[] = {
+ 		.mask_ethtool	= ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
+ 		.speed		= SPEED_100000,
+ 	},
++	{
++		.mask		= MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4,
++		.mask_ethtool	= ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT,
++		.speed		= SPEED_100000,
++	},
+ };
+ 
+ #define MLXSW_SP1_PORT_LINK_MODE_LEN ARRAY_SIZE(mlxsw_sp1_port_link_mode)
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
+index 5023d91269f45..28bfe1ea9d947 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
+@@ -612,7 +612,8 @@ static const struct mlxsw_sx_port_link_mode mlxsw_sx_port_link_mode[] = {
+ 	{
+ 		.mask		= MLXSW_REG_PTYS_ETH_SPEED_100GBASE_CR4 |
+ 				  MLXSW_REG_PTYS_ETH_SPEED_100GBASE_SR4 |
+-				  MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4,
++				  MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4 |
++				  MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4,
+ 		.speed		= 100000,
+ 	},
+ };
+diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
+index 729495a1a77ee..3655503352928 100644
+--- a/drivers/net/ethernet/mscc/ocelot_flower.c
++++ b/drivers/net/ethernet/mscc/ocelot_flower.c
+@@ -540,13 +540,14 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
+ 			return -EOPNOTSUPP;
+ 		}
+ 
++		flow_rule_match_ipv4_addrs(rule, &match);
++
+ 		if (filter->block_id == VCAP_IS1 && *(u32 *)&match.mask->dst) {
+ 			NL_SET_ERR_MSG_MOD(extack,
+ 					   "Key type S1_NORMAL cannot match on destination IP");
+ 			return -EOPNOTSUPP;
+ 		}
+ 
+-		flow_rule_match_ipv4_addrs(rule, &match);
+ 		tmp = &filter->key.ipv4.sip.value.addr[0];
+ 		memcpy(tmp, &match.key->src, 4);
+ 
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index b7d5eaa70a67b..1591715c97177 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -1042,7 +1042,7 @@ static void r8168fp_adjust_ocp_cmd(struct rtl8169_private *tp, u32 *cmd, int typ
+ {
+ 	/* based on RTL8168FP_OOBMAC_BASE in vendor driver */
+ 	if (tp->mac_version == RTL_GIGA_MAC_VER_52 && type == ERIAR_OOB)
+-		*cmd |= 0x7f0 << 18;
++		*cmd |= 0xf70 << 18;
+ }
+ 
+ DECLARE_RTL_COND(rtl_eriar_cond)
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index d5d236d687e9e..6d84266c03caf 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -560,6 +560,8 @@ static struct sh_eth_cpu_data r7s72100_data = {
+ 			  EESR_TDE,
+ 	.fdr_value	= 0x0000070f,
+ 
++	.trscer_err_mask = DESC_I_RINT8 | DESC_I_RINT5,
++
+ 	.no_psr		= 1,
+ 	.apr		= 1,
+ 	.mpr		= 1,
+@@ -780,6 +782,8 @@ static struct sh_eth_cpu_data r7s9210_data = {
+ 
+ 	.fdr_value	= 0x0000070f,
+ 
++	.trscer_err_mask = DESC_I_RINT8 | DESC_I_RINT5,
++
+ 	.apr		= 1,
+ 	.mpr		= 1,
+ 	.tpauser	= 1,
+@@ -1089,6 +1093,9 @@ static struct sh_eth_cpu_data sh771x_data = {
+ 			  EESIPR_CEEFIP | EESIPR_CELFIP |
+ 			  EESIPR_RRFIP | EESIPR_RTLFIP | EESIPR_RTSFIP |
+ 			  EESIPR_PREIP | EESIPR_CERFIP,
++
++	.trscer_err_mask = DESC_I_RINT8,
++
+ 	.tsu		= 1,
+ 	.dual_port	= 1,
+ };
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index 103d2448e9e0d..a9087dae767de 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -233,6 +233,7 @@ static void common_default_data(struct plat_stmmacenet_data *plat)
+ static int intel_mgbe_common_data(struct pci_dev *pdev,
+ 				  struct plat_stmmacenet_data *plat)
+ {
++	char clk_name[20];
+ 	int ret;
+ 	int i;
+ 
+@@ -300,8 +301,10 @@ static int intel_mgbe_common_data(struct pci_dev *pdev,
+ 	plat->eee_usecs_rate = plat->clk_ptp_rate;
+ 
+ 	/* Set system clock */
++	sprintf(clk_name, "%s-%s", "stmmac", pci_name(pdev));
++
+ 	plat->stmmac_clk = clk_register_fixed_rate(&pdev->dev,
+-						   "stmmac-clk", NULL, 0,
++						   clk_name, NULL, 0,
+ 						   plat->clk_ptp_rate);
+ 
+ 	if (IS_ERR(plat->stmmac_clk)) {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
+index c6540b003b430..2ecd3a8a690c2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
+@@ -499,10 +499,15 @@ static void dwmac4_get_rx_header_len(struct dma_desc *p, unsigned int *len)
+ 	*len = le32_to_cpu(p->des2) & RDES2_HL;
+ }
+ 
+-static void dwmac4_set_sec_addr(struct dma_desc *p, dma_addr_t addr)
++static void dwmac4_set_sec_addr(struct dma_desc *p, dma_addr_t addr, bool buf2_valid)
+ {
+ 	p->des2 = cpu_to_le32(lower_32_bits(addr));
+-	p->des3 = cpu_to_le32(upper_32_bits(addr) | RDES3_BUFFER2_VALID_ADDR);
++	p->des3 = cpu_to_le32(upper_32_bits(addr));
++
++	if (buf2_valid)
++		p->des3 |= cpu_to_le32(RDES3_BUFFER2_VALID_ADDR);
++	else
++		p->des3 &= cpu_to_le32(~RDES3_BUFFER2_VALID_ADDR);
+ }
+ 
+ static void dwmac4_set_tbs(struct dma_edesc *p, u32 sec, u32 nsec)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+index bb29bfcd62c34..62aa0e95beb70 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+@@ -124,6 +124,23 @@ static void dwmac4_dma_init_channel(void __iomem *ioaddr,
+ 	       ioaddr + DMA_CHAN_INTR_ENA(chan));
+ }
+ 
++static void dwmac410_dma_init_channel(void __iomem *ioaddr,
++				      struct stmmac_dma_cfg *dma_cfg, u32 chan)
++{
++	u32 value;
++
++	/* common channel control register config */
++	value = readl(ioaddr + DMA_CHAN_CONTROL(chan));
++	if (dma_cfg->pblx8)
++		value = value | DMA_BUS_MODE_PBL;
++
++	writel(value, ioaddr + DMA_CHAN_CONTROL(chan));
++
++	/* Mask interrupts by writing to CSR7 */
++	writel(DMA_CHAN_INTR_DEFAULT_MASK_4_10,
++	       ioaddr + DMA_CHAN_INTR_ENA(chan));
++}
++
+ static void dwmac4_dma_init(void __iomem *ioaddr,
+ 			    struct stmmac_dma_cfg *dma_cfg, int atds)
+ {
+@@ -523,7 +540,7 @@ const struct stmmac_dma_ops dwmac4_dma_ops = {
+ const struct stmmac_dma_ops dwmac410_dma_ops = {
+ 	.reset = dwmac4_dma_reset,
+ 	.init = dwmac4_dma_init,
+-	.init_chan = dwmac4_dma_init_channel,
++	.init_chan = dwmac410_dma_init_channel,
+ 	.init_rx_chan = dwmac4_dma_init_rx_chan,
+ 	.init_tx_chan = dwmac4_dma_init_tx_chan,
+ 	.axi = dwmac4_dma_axi,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
+index 0b4ee2dbb691d..71e50751ef2dc 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
+@@ -53,10 +53,6 @@ void dwmac4_dma_stop_tx(void __iomem *ioaddr, u32 chan)
+ 
+ 	value &= ~DMA_CONTROL_ST;
+ 	writel(value, ioaddr + DMA_CHAN_TX_CONTROL(chan));
+-
+-	value = readl(ioaddr + GMAC_CONFIG);
+-	value &= ~GMAC_CONFIG_TE;
+-	writel(value, ioaddr + GMAC_CONFIG);
+ }
+ 
+ void dwmac4_dma_start_rx(void __iomem *ioaddr, u32 chan)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
+index 0aaf19ab56729..ccfb0102dde49 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
+@@ -292,7 +292,7 @@ static void dwxgmac2_get_rx_header_len(struct dma_desc *p, unsigned int *len)
+ 		*len = le32_to_cpu(p->des2) & XGMAC_RDES2_HL;
+ }
+ 
+-static void dwxgmac2_set_sec_addr(struct dma_desc *p, dma_addr_t addr)
++static void dwxgmac2_set_sec_addr(struct dma_desc *p, dma_addr_t addr, bool is_valid)
+ {
+ 	p->des2 = cpu_to_le32(lower_32_bits(addr));
+ 	p->des3 = cpu_to_le32(upper_32_bits(addr));
+diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+index e2dca9b6e9926..afe7ec496545a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
++++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+@@ -91,7 +91,7 @@ struct stmmac_desc_ops {
+ 	int (*get_rx_hash)(struct dma_desc *p, u32 *hash,
+ 			   enum pkt_hash_types *type);
+ 	void (*get_rx_header_len)(struct dma_desc *p, unsigned int *len);
+-	void (*set_sec_addr)(struct dma_desc *p, dma_addr_t addr);
++	void (*set_sec_addr)(struct dma_desc *p, dma_addr_t addr, bool buf2_valid);
+ 	void (*set_sarc)(struct dma_desc *p, u32 sarc_type);
+ 	void (*set_vlan_tag)(struct dma_desc *p, u16 tag, u16 inner_tag,
+ 			     u32 inner_type);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index b3d6d8e3f4de9..7d01c5cf60c96 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -1279,9 +1279,10 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
+ 			return -ENOMEM;
+ 
+ 		buf->sec_addr = page_pool_get_dma_addr(buf->sec_page);
+-		stmmac_set_desc_sec_addr(priv, p, buf->sec_addr);
++		stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
+ 	} else {
+ 		buf->sec_page = NULL;
++		stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
+ 	}
+ 
+ 	buf->addr = page_pool_get_dma_addr(buf->page);
+@@ -3618,7 +3619,10 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
+ 					   DMA_FROM_DEVICE);
+ 
+ 		stmmac_set_desc_addr(priv, p, buf->addr);
+-		stmmac_set_desc_sec_addr(priv, p, buf->sec_addr);
++		if (priv->sph)
++			stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
++		else
++			stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
+ 		stmmac_refill_desc3(priv, rx_q, p);
+ 
+ 		rx_q->rx_count_frames++;
+@@ -5114,13 +5118,16 @@ int stmmac_dvr_remove(struct device *dev)
+ 	netdev_info(priv->dev, "%s: removing driver", __func__);
+ 
+ 	stmmac_stop_all_dma(priv);
++	stmmac_mac_set(priv, priv->ioaddr, false);
++	netif_carrier_off(ndev);
++	unregister_netdev(ndev);
+ 
++	/* Serdes power down needs to happen after VLAN filter
++	 * is deleted that is triggered by unregister_netdev().
++	 */
+ 	if (priv->plat->serdes_powerdown)
+ 		priv->plat->serdes_powerdown(ndev, priv->plat->bsp_priv);
+ 
+-	stmmac_mac_set(priv, priv->ioaddr, false);
+-	netif_carrier_off(ndev);
+-	unregister_netdev(ndev);
+ #ifdef CONFIG_DEBUG_FS
+ 	stmmac_exit_fs(ndev);
+ #endif
+@@ -5227,6 +5234,8 @@ static void stmmac_reset_queues_param(struct stmmac_priv *priv)
+ 		tx_q->cur_tx = 0;
+ 		tx_q->dirty_tx = 0;
+ 		tx_q->mss = 0;
++
++		netdev_tx_reset_queue(netdev_get_tx_queue(priv->dev, queue));
+ 	}
+ }
+ 
+diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
+index 7178468302c8f..ad6dbf0110526 100644
+--- a/drivers/net/netdevsim/netdev.c
++++ b/drivers/net/netdevsim/netdev.c
+@@ -296,6 +296,7 @@ nsim_create(struct nsim_dev *nsim_dev, struct nsim_dev_port *nsim_dev_port)
+ 	dev_net_set(dev, nsim_dev_net(nsim_dev));
+ 	ns = netdev_priv(dev);
+ 	ns->netdev = dev;
++	u64_stats_init(&ns->syncp);
+ 	ns->nsim_dev = nsim_dev;
+ 	ns->nsim_dev_port = nsim_dev_port;
+ 	ns->nsim_bus_dev = nsim_dev->nsim_bus_dev;
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 35525a671400d..49e96ca585fff 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -293,14 +293,16 @@ int phy_ethtool_ksettings_set(struct phy_device *phydev,
+ 
+ 	phydev->autoneg = autoneg;
+ 
+-	phydev->speed = speed;
++	if (autoneg == AUTONEG_DISABLE) {
++		phydev->speed = speed;
++		phydev->duplex = duplex;
++	}
+ 
+ 	linkmode_copy(phydev->advertising, advertising);
+ 
+ 	linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ 			 phydev->advertising, autoneg == AUTONEG_ENABLE);
+ 
+-	phydev->duplex = duplex;
+ 	phydev->master_slave_set = cmd->base.master_slave_cfg;
+ 	phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;
+ 
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index dd1f711140c3d..2d4eed2d61ce9 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -230,7 +230,6 @@ static struct phy_driver genphy_driver;
+ static LIST_HEAD(phy_fixup_list);
+ static DEFINE_MUTEX(phy_fixup_lock);
+ 
+-#ifdef CONFIG_PM
+ static bool mdio_bus_phy_may_suspend(struct phy_device *phydev)
+ {
+ 	struct device_driver *drv = phydev->mdio.dev.driver;
+@@ -270,7 +269,7 @@ out:
+ 	return !phydev->suspended;
+ }
+ 
+-static int mdio_bus_phy_suspend(struct device *dev)
++static __maybe_unused int mdio_bus_phy_suspend(struct device *dev)
+ {
+ 	struct phy_device *phydev = to_phy_device(dev);
+ 
+@@ -290,7 +289,7 @@ static int mdio_bus_phy_suspend(struct device *dev)
+ 	return phy_suspend(phydev);
+ }
+ 
+-static int mdio_bus_phy_resume(struct device *dev)
++static __maybe_unused int mdio_bus_phy_resume(struct device *dev)
+ {
+ 	struct phy_device *phydev = to_phy_device(dev);
+ 	int ret;
+@@ -316,7 +315,6 @@ no_resume:
+ 
+ static SIMPLE_DEV_PM_OPS(mdio_bus_phy_pm_ops, mdio_bus_phy_suspend,
+ 			 mdio_bus_phy_resume);
+-#endif /* CONFIG_PM */
+ 
+ /**
+  * phy_register_fixup - creates a new phy_fixup and adds it to the list
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index c7320861943b4..6e033ba717030 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -419,13 +419,6 @@ static ssize_t add_mux_store(struct device *d,  struct device_attribute *attr, c
+ 		goto err;
+ 	}
+ 
+-	/* we don't want to modify a running netdev */
+-	if (netif_running(dev->net)) {
+-		netdev_err(dev->net, "Cannot change a running device\n");
+-		ret = -EBUSY;
+-		goto err;
+-	}
+-
+ 	ret = qmimux_register_device(dev->net, mux_id);
+ 	if (!ret) {
+ 		info->flags |= QMI_WWAN_FLAG_MUX;
+@@ -455,13 +448,6 @@ static ssize_t del_mux_store(struct device *d,  struct device_attribute *attr, c
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+-	/* we don't want to modify a running netdev */
+-	if (netif_running(dev->net)) {
+-		netdev_err(dev->net, "Cannot change a running device\n");
+-		ret = -EBUSY;
+-		goto err;
+-	}
+-
+ 	del_dev = qmimux_find_dev(dev, mux_id);
+ 	if (!del_dev) {
+ 		netdev_err(dev->net, "mux_id not present\n");
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index b6be2454b8bdd..605c01fb73f15 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -283,7 +283,6 @@ static int lapbeth_open(struct net_device *dev)
+ 		return -ENODEV;
+ 	}
+ 
+-	netif_start_queue(dev);
+ 	return 0;
+ }
+ 
+@@ -291,8 +290,6 @@ static int lapbeth_close(struct net_device *dev)
+ {
+ 	int err;
+ 
+-	netif_stop_queue(dev);
+-
+ 	if ((err = lapb_unregister(dev)) != LAPB_OK)
+ 		pr_err("lapb_unregister error: %d\n", err);
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index ebd6886a8c184..a68fe3a45a744 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -774,6 +774,7 @@ static void ath11k_core_restart(struct work_struct *work)
+ 		complete(&ar->scan.started);
+ 		complete(&ar->scan.completed);
+ 		complete(&ar->peer_assoc_done);
++		complete(&ar->peer_delete_done);
+ 		complete(&ar->install_key_done);
+ 		complete(&ar->vdev_setup_done);
+ 		complete(&ar->bss_survey_done);
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index 5a7915f75e1e2..c8e36251068c9 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -502,6 +502,7 @@ struct ath11k {
+ 	u8 lmac_id;
+ 
+ 	struct completion peer_assoc_done;
++	struct completion peer_delete_done;
+ 
+ 	int install_key_status;
+ 	struct completion install_key_done;
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index b5bd9b06da89e..ee0edd9185604 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -2986,6 +2986,7 @@ static int ath11k_mac_station_add(struct ath11k *ar,
+ 	}
+ 
+ 	if (ab->hw_params.vdev_start_delay &&
++	    !arvif->is_started &&
+ 	    arvif->vdev_type != WMI_VDEV_TYPE_AP) {
+ 		ret = ath11k_start_vdev_delay(ar->hw, vif);
+ 		if (ret) {
+@@ -4589,8 +4590,22 @@ static int ath11k_mac_op_add_interface(struct ieee80211_hw *hw,
+ 
+ err_peer_del:
+ 	if (arvif->vdev_type == WMI_VDEV_TYPE_AP) {
++		reinit_completion(&ar->peer_delete_done);
++
++		ret = ath11k_wmi_send_peer_delete_cmd(ar, vif->addr,
++						      arvif->vdev_id);
++		if (ret) {
++			ath11k_warn(ar->ab, "failed to delete peer vdev_id %d addr %pM\n",
++				    arvif->vdev_id, vif->addr);
++			return ret;
++		}
++
++		ret = ath11k_wait_for_peer_delete_done(ar, arvif->vdev_id,
++						       vif->addr);
++		if (ret)
++			return ret;
++
+ 		ar->num_peers--;
+-		ath11k_wmi_send_peer_delete_cmd(ar, vif->addr, arvif->vdev_id);
+ 	}
+ 
+ err_vdev_del:
+@@ -5234,7 +5249,8 @@ ath11k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
+ 	/* for QCA6390 bss peer must be created before vdev_start */
+ 	if (ab->hw_params.vdev_start_delay &&
+ 	    arvif->vdev_type != WMI_VDEV_TYPE_AP &&
+-	    arvif->vdev_type != WMI_VDEV_TYPE_MONITOR) {
++	    arvif->vdev_type != WMI_VDEV_TYPE_MONITOR &&
++	    !ath11k_peer_find_by_vdev_id(ab, arvif->vdev_id)) {
+ 		memcpy(&arvif->chanctx, ctx, sizeof(*ctx));
+ 		ret = 0;
+ 		goto out;
+@@ -5245,7 +5261,9 @@ ath11k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
+ 		goto out;
+ 	}
+ 
+-	if (ab->hw_params.vdev_start_delay) {
++	if (ab->hw_params.vdev_start_delay &&
++	    arvif->vdev_type != WMI_VDEV_TYPE_AP &&
++	    arvif->vdev_type != WMI_VDEV_TYPE_MONITOR) {
+ 		param.vdev_id = arvif->vdev_id;
+ 		param.peer_type = WMI_PEER_TYPE_DEFAULT;
+ 		param.peer_addr = ar->mac_addr;
+@@ -6413,6 +6431,7 @@ int ath11k_mac_allocate(struct ath11k_base *ab)
+ 		mutex_init(&ar->conf_mutex);
+ 		init_completion(&ar->vdev_setup_done);
+ 		init_completion(&ar->peer_assoc_done);
++		init_completion(&ar->peer_delete_done);
+ 		init_completion(&ar->install_key_done);
+ 		init_completion(&ar->bss_survey_done);
+ 		init_completion(&ar->scan.started);
+diff --git a/drivers/net/wireless/ath/ath11k/peer.c b/drivers/net/wireless/ath/ath11k/peer.c
+index 61ad9300eafb1..b69e7ebfa9303 100644
+--- a/drivers/net/wireless/ath/ath11k/peer.c
++++ b/drivers/net/wireless/ath/ath11k/peer.c
+@@ -76,6 +76,23 @@ struct ath11k_peer *ath11k_peer_find_by_id(struct ath11k_base *ab,
+ 	return NULL;
+ }
+ 
++struct ath11k_peer *ath11k_peer_find_by_vdev_id(struct ath11k_base *ab,
++						int vdev_id)
++{
++	struct ath11k_peer *peer;
++
++	spin_lock_bh(&ab->base_lock);
++
++	list_for_each_entry(peer, &ab->peers, list) {
++		if (vdev_id == peer->vdev_id) {
++			spin_unlock_bh(&ab->base_lock);
++			return peer;
++		}
++	}
++	spin_unlock_bh(&ab->base_lock);
++	return NULL;
++}
++
+ void ath11k_peer_unmap_event(struct ath11k_base *ab, u16 peer_id)
+ {
+ 	struct ath11k_peer *peer;
+@@ -177,12 +194,36 @@ static int ath11k_wait_for_peer_deleted(struct ath11k *ar, int vdev_id, const u8
+ 	return ath11k_wait_for_peer_common(ar->ab, vdev_id, addr, false);
+ }
+ 
++int ath11k_wait_for_peer_delete_done(struct ath11k *ar, u32 vdev_id,
++				     const u8 *addr)
++{
++	int ret;
++	unsigned long time_left;
++
++	ret = ath11k_wait_for_peer_deleted(ar, vdev_id, addr);
++	if (ret) {
++		ath11k_warn(ar->ab, "failed wait for peer deleted");
++		return ret;
++	}
++
++	time_left = wait_for_completion_timeout(&ar->peer_delete_done,
++						3 * HZ);
++	if (time_left == 0) {
++		ath11k_warn(ar->ab, "Timeout in receiving peer delete response\n");
++		return -ETIMEDOUT;
++	}
++
++	return 0;
++}
++
+ int ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, u8 *addr)
+ {
+ 	int ret;
+ 
+ 	lockdep_assert_held(&ar->conf_mutex);
+ 
++	reinit_completion(&ar->peer_delete_done);
++
+ 	ret = ath11k_wmi_send_peer_delete_cmd(ar, addr, vdev_id);
+ 	if (ret) {
+ 		ath11k_warn(ar->ab,
+@@ -191,7 +232,7 @@ int ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, u8 *addr)
+ 		return ret;
+ 	}
+ 
+-	ret = ath11k_wait_for_peer_deleted(ar, vdev_id, addr);
++	ret = ath11k_wait_for_peer_delete_done(ar, vdev_id, addr);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -247,8 +288,22 @@ int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif,
+ 		spin_unlock_bh(&ar->ab->base_lock);
+ 		ath11k_warn(ar->ab, "failed to find peer %pM on vdev %i after creation\n",
+ 			    param->peer_addr, param->vdev_id);
+-		ath11k_wmi_send_peer_delete_cmd(ar, param->peer_addr,
+-						param->vdev_id);
++
++		reinit_completion(&ar->peer_delete_done);
++
++		ret = ath11k_wmi_send_peer_delete_cmd(ar, param->peer_addr,
++						      param->vdev_id);
++		if (ret) {
++			ath11k_warn(ar->ab, "failed to delete peer vdev_id %d addr %pM\n",
++				    param->vdev_id, param->peer_addr);
++			return ret;
++		}
++
++		ret = ath11k_wait_for_peer_delete_done(ar, param->vdev_id,
++						       param->peer_addr);
++		if (ret)
++			return ret;
++
+ 		return -ENOENT;
+ 	}
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/peer.h b/drivers/net/wireless/ath/ath11k/peer.h
+index 5d125ce8984e3..8553ed061aeaa 100644
+--- a/drivers/net/wireless/ath/ath11k/peer.h
++++ b/drivers/net/wireless/ath/ath11k/peer.h
+@@ -41,5 +41,9 @@ void ath11k_peer_cleanup(struct ath11k *ar, u32 vdev_id);
+ int ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, u8 *addr);
+ int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif,
+ 		       struct ieee80211_sta *sta, struct peer_create_params *param);
++int ath11k_wait_for_peer_delete_done(struct ath11k *ar, u32 vdev_id,
++				     const u8 *addr);
++struct ath11k_peer *ath11k_peer_find_by_vdev_id(struct ath11k_base *ab,
++						int vdev_id);
+ 
+ #endif /* _PEER_H_ */
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 04b8b002edfe0..173ab6ceed1f6 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -5532,15 +5532,26 @@ static int ath11k_ready_event(struct ath11k_base *ab, struct sk_buff *skb)
+ static void ath11k_peer_delete_resp_event(struct ath11k_base *ab, struct sk_buff *skb)
+ {
+ 	struct wmi_peer_delete_resp_event peer_del_resp;
++	struct ath11k *ar;
+ 
+ 	if (ath11k_pull_peer_del_resp_ev(ab, skb, &peer_del_resp) != 0) {
+ 		ath11k_warn(ab, "failed to extract peer delete resp");
+ 		return;
+ 	}
+ 
+-	/* TODO: Do we need to validate whether ath11k_peer_find() return NULL
+-	 *	 Why this is needed when there is HTT event for peer delete
+-	 */
++	rcu_read_lock();
++	ar = ath11k_mac_get_ar_by_vdev_id(ab, peer_del_resp.vdev_id);
++	if (!ar) {
++		ath11k_warn(ab, "invalid vdev id in peer delete resp ev %d",
++			    peer_del_resp.vdev_id);
++		rcu_read_unlock();
++		return;
++	}
++
++	complete(&ar->peer_delete_done);
++	rcu_read_unlock();
++	ath11k_dbg(ab, ATH11K_DBG_WMI, "peer delete resp for vdev id %d addr %pM\n",
++		   peer_del_resp.vdev_id, peer_del_resp.peer_macaddr.addr);
+ }
+ 
+ static inline const char *ath11k_wmi_vdev_resp_print(u32 vdev_resp_status)
+diff --git a/drivers/net/wireless/ath/ath9k/ath9k.h b/drivers/net/wireless/ath/ath9k/ath9k.h
+index e06b74a54a697..01d85f2509362 100644
+--- a/drivers/net/wireless/ath/ath9k/ath9k.h
++++ b/drivers/net/wireless/ath/ath9k/ath9k.h
+@@ -177,7 +177,8 @@ struct ath_frame_info {
+ 	s8 txq;
+ 	u8 keyix;
+ 	u8 rtscts_rate;
+-	u8 retries : 7;
++	u8 retries : 6;
++	u8 dyn_smps : 1;
+ 	u8 baw_tracked : 1;
+ 	u8 tx_power;
+ 	enum ath9k_key_type keytype:2;
+diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
+index e60d4737fc6e4..5691bd6eb82c2 100644
+--- a/drivers/net/wireless/ath/ath9k/xmit.c
++++ b/drivers/net/wireless/ath/ath9k/xmit.c
+@@ -1271,6 +1271,11 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
+ 				 is_40, is_sgi, is_sp);
+ 			if (rix < 8 && (tx_info->flags & IEEE80211_TX_CTL_STBC))
+ 				info->rates[i].RateFlags |= ATH9K_RATESERIES_STBC;
++			if (rix >= 8 && fi->dyn_smps) {
++				info->rates[i].RateFlags |=
++					ATH9K_RATESERIES_RTS_CTS;
++				info->flags |= ATH9K_TXDESC_CTSENA;
++			}
+ 
+ 			info->txpower[i] = ath_get_rate_txpower(sc, bf, rix,
+ 								is_40, false);
+@@ -2114,6 +2119,7 @@ static void setup_frame_info(struct ieee80211_hw *hw,
+ 		fi->keyix = an->ps_key;
+ 	else
+ 		fi->keyix = ATH9K_TXKEYIX_INVALID;
++	fi->dyn_smps = sta && sta->smps_mode == IEEE80211_SMPS_DYNAMIC;
+ 	fi->keytype = keytype;
+ 	fi->framelen = framelen;
+ 	fi->tx_power = txpower;
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 917617aad8d3c..262c40dc14a63 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -521,13 +521,13 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
+ {
+ 	struct sk_buff *skb = q->rx_head;
+ 	struct skb_shared_info *shinfo = skb_shinfo(skb);
++	int nr_frags = shinfo->nr_frags;
+ 
+-	if (shinfo->nr_frags < ARRAY_SIZE(shinfo->frags)) {
++	if (nr_frags < ARRAY_SIZE(shinfo->frags)) {
+ 		struct page *page = virt_to_head_page(data);
+ 		int offset = data - page_address(page) + q->buf_offset;
+ 
+-		skb_add_rx_frag(skb, shinfo->nr_frags, page, offset, len,
+-				q->buf_size);
++		skb_add_rx_frag(skb, nr_frags, page, offset, len, q->buf_size);
+ 	} else {
+ 		skb_free_frag(data);
+ 	}
+@@ -536,7 +536,10 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
+ 		return;
+ 
+ 	q->rx_head = NULL;
+-	dev->drv->rx_skb(dev, q - dev->q_rx, skb);
++	if (nr_frags < ARRAY_SIZE(shinfo->frags))
++		dev->drv->rx_skb(dev, q - dev->q_rx, skb);
++	else
++		dev_kfree_skb(skb);
+ }
+ 
+ static int
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 5ead217ac2bc8..fab068c8ba026 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -2055,7 +2055,7 @@ done:
+ 		nvme_fc_complete_rq(rq);
+ 
+ check_error:
+-	if (terminate_assoc)
++	if (terminate_assoc && ctrl->ctrl.state != NVME_CTRL_RESETTING)
+ 		queue_work(nvme_reset_wq, &ctrl->ioerr_work);
+ }
+ 
+diff --git a/drivers/pci/controller/pci-xgene-msi.c b/drivers/pci/controller/pci-xgene-msi.c
+index 2470782cb01af..1c34c897a7e2a 100644
+--- a/drivers/pci/controller/pci-xgene-msi.c
++++ b/drivers/pci/controller/pci-xgene-msi.c
+@@ -384,13 +384,9 @@ static int xgene_msi_hwirq_alloc(unsigned int cpu)
+ 		if (!msi_group->gic_irq)
+ 			continue;
+ 
+-		irq_set_chained_handler(msi_group->gic_irq,
+-					xgene_msi_isr);
+-		err = irq_set_handler_data(msi_group->gic_irq, msi_group);
+-		if (err) {
+-			pr_err("failed to register GIC IRQ handler\n");
+-			return -EINVAL;
+-		}
++		irq_set_chained_handler_and_data(msi_group->gic_irq,
++			xgene_msi_isr, msi_group);
++
+ 		/*
+ 		 * Statically allocate MSI GIC IRQs to each CPU core.
+ 		 * With 8-core X-Gene v1, 2 MSI GIC IRQs are allocated
+diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
+index cf4c18f0c25ab..23548b517e4b6 100644
+--- a/drivers/pci/controller/pcie-mediatek.c
++++ b/drivers/pci/controller/pcie-mediatek.c
+@@ -1035,14 +1035,14 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
+ 		err = of_pci_get_devfn(child);
+ 		if (err < 0) {
+ 			dev_err(dev, "failed to parse devfn: %d\n", err);
+-			return err;
++			goto error_put_node;
+ 		}
+ 
+ 		slot = PCI_SLOT(err);
+ 
+ 		err = mtk_pcie_parse_port(pcie, child, slot);
+ 		if (err)
+-			return err;
++			goto error_put_node;
+ 	}
+ 
+ 	err = mtk_pcie_subsys_powerup(pcie);
+@@ -1058,6 +1058,9 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
+ 		mtk_pcie_subsys_powerdown(pcie);
+ 
+ 	return 0;
++error_put_node:
++	of_node_put(child);
++	return err;
+ }
+ 
+ static int mtk_pcie_probe(struct platform_device *pdev)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 5c93450725108..9e971fffeb6a3 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4010,6 +4010,10 @@ int pci_register_io_range(struct fwnode_handle *fwnode, phys_addr_t addr,
+ 	ret = logic_pio_register_range(range);
+ 	if (ret)
+ 		kfree(range);
++
++	/* Ignore duplicates due to deferred probing */
++	if (ret == -EEXIST)
++		ret = 0;
+ #endif
+ 
+ 	return ret;
+diff --git a/drivers/pci/pcie/Kconfig b/drivers/pci/pcie/Kconfig
+index 3946555a60422..45a2ef702b45b 100644
+--- a/drivers/pci/pcie/Kconfig
++++ b/drivers/pci/pcie/Kconfig
+@@ -133,14 +133,6 @@ config PCIE_PTM
+ 	  This is only useful if you have devices that support PTM, but it
+ 	  is safe to enable even if you don't.
+ 
+-config PCIE_BW
+-	bool "PCI Express Bandwidth Change Notification"
+-	depends on PCIEPORTBUS
+-	help
+-	  This enables PCI Express Bandwidth Change Notification.  If
+-	  you know link width or rate changes occur only to correct
+-	  unreliable links, you may answer Y.
+-
+ config PCIE_EDR
+ 	bool "PCI Express Error Disconnect Recover support"
+ 	depends on PCIE_DPC && ACPI
+diff --git a/drivers/pci/pcie/Makefile b/drivers/pci/pcie/Makefile
+index 68da9280ff117..9a7085668466f 100644
+--- a/drivers/pci/pcie/Makefile
++++ b/drivers/pci/pcie/Makefile
+@@ -12,5 +12,4 @@ obj-$(CONFIG_PCIEAER_INJECT)	+= aer_inject.o
+ obj-$(CONFIG_PCIE_PME)		+= pme.o
+ obj-$(CONFIG_PCIE_DPC)		+= dpc.o
+ obj-$(CONFIG_PCIE_PTM)		+= ptm.o
+-obj-$(CONFIG_PCIE_BW)		+= bw_notification.o
+ obj-$(CONFIG_PCIE_EDR)		+= edr.o
+diff --git a/drivers/pci/pcie/bw_notification.c b/drivers/pci/pcie/bw_notification.c
+deleted file mode 100644
+index 565d23cccb8b5..0000000000000
+--- a/drivers/pci/pcie/bw_notification.c
++++ /dev/null
+@@ -1,138 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0+
+-/*
+- * PCI Express Link Bandwidth Notification services driver
+- * Author: Alexandru Gagniuc <mr.nuke.me@gmail.com>
+- *
+- * Copyright (C) 2019, Dell Inc
+- *
+- * The PCIe Link Bandwidth Notification provides a way to notify the
+- * operating system when the link width or data rate changes.  This
+- * capability is required for all root ports and downstream ports
+- * supporting links wider than x1 and/or multiple link speeds.
+- *
+- * This service port driver hooks into the bandwidth notification interrupt
+- * and warns when links become degraded in operation.
+- */
+-
+-#define dev_fmt(fmt) "bw_notification: " fmt
+-
+-#include "../pci.h"
+-#include "portdrv.h"
+-
+-static bool pcie_link_bandwidth_notification_supported(struct pci_dev *dev)
+-{
+-	int ret;
+-	u32 lnk_cap;
+-
+-	ret = pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnk_cap);
+-	return (ret == PCIBIOS_SUCCESSFUL) && (lnk_cap & PCI_EXP_LNKCAP_LBNC);
+-}
+-
+-static void pcie_enable_link_bandwidth_notification(struct pci_dev *dev)
+-{
+-	u16 lnk_ctl;
+-
+-	pcie_capability_write_word(dev, PCI_EXP_LNKSTA, PCI_EXP_LNKSTA_LBMS);
+-
+-	pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl);
+-	lnk_ctl |= PCI_EXP_LNKCTL_LBMIE;
+-	pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl);
+-}
+-
+-static void pcie_disable_link_bandwidth_notification(struct pci_dev *dev)
+-{
+-	u16 lnk_ctl;
+-
+-	pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl);
+-	lnk_ctl &= ~PCI_EXP_LNKCTL_LBMIE;
+-	pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl);
+-}
+-
+-static irqreturn_t pcie_bw_notification_irq(int irq, void *context)
+-{
+-	struct pcie_device *srv = context;
+-	struct pci_dev *port = srv->port;
+-	u16 link_status, events;
+-	int ret;
+-
+-	ret = pcie_capability_read_word(port, PCI_EXP_LNKSTA, &link_status);
+-	events = link_status & PCI_EXP_LNKSTA_LBMS;
+-
+-	if (ret != PCIBIOS_SUCCESSFUL || !events)
+-		return IRQ_NONE;
+-
+-	pcie_capability_write_word(port, PCI_EXP_LNKSTA, events);
+-	pcie_update_link_speed(port->subordinate, link_status);
+-	return IRQ_WAKE_THREAD;
+-}
+-
+-static irqreturn_t pcie_bw_notification_handler(int irq, void *context)
+-{
+-	struct pcie_device *srv = context;
+-	struct pci_dev *port = srv->port;
+-	struct pci_dev *dev;
+-
+-	/*
+-	 * Print status from downstream devices, not this root port or
+-	 * downstream switch port.
+-	 */
+-	down_read(&pci_bus_sem);
+-	list_for_each_entry(dev, &port->subordinate->devices, bus_list)
+-		pcie_report_downtraining(dev);
+-	up_read(&pci_bus_sem);
+-
+-	return IRQ_HANDLED;
+-}
+-
+-static int pcie_bandwidth_notification_probe(struct pcie_device *srv)
+-{
+-	int ret;
+-
+-	/* Single-width or single-speed ports do not have to support this. */
+-	if (!pcie_link_bandwidth_notification_supported(srv->port))
+-		return -ENODEV;
+-
+-	ret = request_threaded_irq(srv->irq, pcie_bw_notification_irq,
+-				   pcie_bw_notification_handler,
+-				   IRQF_SHARED, "PCIe BW notif", srv);
+-	if (ret)
+-		return ret;
+-
+-	pcie_enable_link_bandwidth_notification(srv->port);
+-	pci_info(srv->port, "enabled with IRQ %d\n", srv->irq);
+-
+-	return 0;
+-}
+-
+-static void pcie_bandwidth_notification_remove(struct pcie_device *srv)
+-{
+-	pcie_disable_link_bandwidth_notification(srv->port);
+-	free_irq(srv->irq, srv);
+-}
+-
+-static int pcie_bandwidth_notification_suspend(struct pcie_device *srv)
+-{
+-	pcie_disable_link_bandwidth_notification(srv->port);
+-	return 0;
+-}
+-
+-static int pcie_bandwidth_notification_resume(struct pcie_device *srv)
+-{
+-	pcie_enable_link_bandwidth_notification(srv->port);
+-	return 0;
+-}
+-
+-static struct pcie_port_service_driver pcie_bandwidth_notification_driver = {
+-	.name		= "pcie_bw_notification",
+-	.port_type	= PCIE_ANY_PORT,
+-	.service	= PCIE_PORT_SERVICE_BWNOTIF,
+-	.probe		= pcie_bandwidth_notification_probe,
+-	.suspend	= pcie_bandwidth_notification_suspend,
+-	.resume		= pcie_bandwidth_notification_resume,
+-	.remove		= pcie_bandwidth_notification_remove,
+-};
+-
+-int __init pcie_bandwidth_notification_init(void)
+-{
+-	return pcie_port_service_register(&pcie_bandwidth_notification_driver);
+-}
+diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h
+index af7cf237432ac..2ff5724b8f13f 100644
+--- a/drivers/pci/pcie/portdrv.h
++++ b/drivers/pci/pcie/portdrv.h
+@@ -53,12 +53,6 @@ int pcie_dpc_init(void);
+ static inline int pcie_dpc_init(void) { return 0; }
+ #endif
+ 
+-#ifdef CONFIG_PCIE_BW
+-int pcie_bandwidth_notification_init(void);
+-#else
+-static inline int pcie_bandwidth_notification_init(void) { return 0; }
+-#endif
+-
+ /* Port Type */
+ #define PCIE_ANY_PORT			(~0)
+ 
+diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c
+index 3a3ce40ae1abd..d4559cf88f79d 100644
+--- a/drivers/pci/pcie/portdrv_pci.c
++++ b/drivers/pci/pcie/portdrv_pci.c
+@@ -248,7 +248,6 @@ static void __init pcie_init_services(void)
+ 	pcie_pme_init();
+ 	pcie_dpc_init();
+ 	pcie_hp_init();
+-	pcie_bandwidth_notification_init();
+ }
+ 
+ static int __init pcie_portdrv_init(void)
+diff --git a/drivers/platform/olpc/olpc-ec.c b/drivers/platform/olpc/olpc-ec.c
+index f64b82824db28..2db7113383fdc 100644
+--- a/drivers/platform/olpc/olpc-ec.c
++++ b/drivers/platform/olpc/olpc-ec.c
+@@ -426,11 +426,8 @@ static int olpc_ec_probe(struct platform_device *pdev)
+ 
+ 	/* get the EC revision */
+ 	err = olpc_ec_cmd(EC_FIRMWARE_REV, NULL, 0, &ec->version, 1);
+-	if (err) {
+-		ec_priv = NULL;
+-		kfree(ec);
+-		return err;
+-	}
++	if (err)
++		goto error;
+ 
+ 	config.dev = pdev->dev.parent;
+ 	config.driver_data = ec;
+@@ -440,12 +437,16 @@ static int olpc_ec_probe(struct platform_device *pdev)
+ 	if (IS_ERR(ec->dcon_rdev)) {
+ 		dev_err(&pdev->dev, "failed to register DCON regulator\n");
+ 		err = PTR_ERR(ec->dcon_rdev);
+-		kfree(ec);
+-		return err;
++		goto error;
+ 	}
+ 
+ 	ec->dbgfs_dir = olpc_ec_setup_debugfs();
+ 
++	return 0;
++
++error:
++	ec_priv = NULL;
++	kfree(ec);
+ 	return err;
+ }
+ 
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index 217a7b84abdfa..2adfab552e22a 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -3087,7 +3087,8 @@ static blk_status_t do_dasd_request(struct blk_mq_hw_ctx *hctx,
+ 
+ 	basedev = block->base;
+ 	spin_lock_irq(&dq->lock);
+-	if (basedev->state < DASD_STATE_READY) {
++	if (basedev->state < DASD_STATE_READY ||
++	    test_bit(DASD_FLAG_OFFLINE, &basedev->flags)) {
+ 		DBF_DEV_EVENT(DBF_ERR, basedev,
+ 			      "device not ready for request %p", req);
+ 		rc = BLK_STS_IOERR;
+@@ -3522,8 +3523,6 @@ void dasd_generic_remove(struct ccw_device *cdev)
+ 	struct dasd_device *device;
+ 	struct dasd_block *block;
+ 
+-	cdev->handler = NULL;
+-
+ 	device = dasd_device_from_cdev(cdev);
+ 	if (IS_ERR(device)) {
+ 		dasd_remove_sysfs_files(cdev);
+@@ -3542,6 +3541,7 @@ void dasd_generic_remove(struct ccw_device *cdev)
+ 	 * no quite down yet.
+ 	 */
+ 	dasd_set_target_state(device, DASD_STATE_NEW);
++	cdev->handler = NULL;
+ 	/* dasd_delete_device destroys the device reference. */
+ 	block = device->block;
+ 	dasd_delete_device(device);
+diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
+index 8b3ed5b45277a..1ad5f7018ec2d 100644
+--- a/drivers/s390/cio/vfio_ccw_ops.c
++++ b/drivers/s390/cio/vfio_ccw_ops.c
+@@ -539,7 +539,7 @@ static ssize_t vfio_ccw_mdev_ioctl(struct mdev_device *mdev,
+ 		if (ret)
+ 			return ret;
+ 
+-		return copy_to_user((void __user *)arg, &info, minsz);
++		return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
+ 	}
+ 	case VFIO_DEVICE_GET_REGION_INFO:
+ 	{
+@@ -557,7 +557,7 @@ static ssize_t vfio_ccw_mdev_ioctl(struct mdev_device *mdev,
+ 		if (ret)
+ 			return ret;
+ 
+-		return copy_to_user((void __user *)arg, &info, minsz);
++		return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
+ 	}
+ 	case VFIO_DEVICE_GET_IRQ_INFO:
+ 	{
+@@ -578,7 +578,7 @@ static ssize_t vfio_ccw_mdev_ioctl(struct mdev_device *mdev,
+ 		if (info.count == -1)
+ 			return -EINVAL;
+ 
+-		return copy_to_user((void __user *)arg, &info, minsz);
++		return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
+ 	}
+ 	case VFIO_DEVICE_SET_IRQS:
+ 	{
+diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
+index 7ceb6c433b3ba..72eb8f984534f 100644
+--- a/drivers/s390/crypto/vfio_ap_ops.c
++++ b/drivers/s390/crypto/vfio_ap_ops.c
+@@ -1279,7 +1279,7 @@ static int vfio_ap_mdev_get_device_info(unsigned long arg)
+ 	info.num_regions = 0;
+ 	info.num_irqs = 0;
+ 
+-	return copy_to_user((void __user *)arg, &info, minsz);
++	return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
+ }
+ 
+ static ssize_t vfio_ap_mdev_ioctl(struct mdev_device *mdev,
+diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
+index 2f7e06ec9a30e..bf8404b0e74ff 100644
+--- a/drivers/s390/net/qeth_core.h
++++ b/drivers/s390/net/qeth_core.h
+@@ -424,8 +424,6 @@ enum qeth_qdio_out_buffer_state {
+ 	/* Received QAOB notification on CQ: */
+ 	QETH_QDIO_BUF_QAOB_OK,
+ 	QETH_QDIO_BUF_QAOB_ERROR,
+-	/* Handled via transfer pending / completion queue. */
+-	QETH_QDIO_BUF_HANDLED_DELAYED,
+ };
+ 
+ struct qeth_qdio_out_buffer {
+@@ -438,7 +436,7 @@ struct qeth_qdio_out_buffer {
+ 	int is_header[QDIO_MAX_ELEMENTS_PER_BUFFER];
+ 
+ 	struct qeth_qdio_out_q *q;
+-	struct qeth_qdio_out_buffer *next_pending;
++	struct list_head list_entry;
+ };
+ 
+ struct qeth_card;
+@@ -502,6 +500,7 @@ struct qeth_qdio_out_q {
+ 	struct qdio_buffer *qdio_bufs[QDIO_MAX_BUFFERS_PER_Q];
+ 	struct qeth_qdio_out_buffer *bufs[QDIO_MAX_BUFFERS_PER_Q];
+ 	struct qdio_outbuf_state *bufstates; /* convenience pointer */
++	struct list_head pending_bufs;
+ 	struct qeth_out_q_stats stats;
+ 	spinlock_t lock;
+ 	unsigned int priority;
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index f108232498baf..03f96177e58ee 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -73,9 +73,6 @@ static void qeth_free_qdio_queues(struct qeth_card *card);
+ static void qeth_notify_skbs(struct qeth_qdio_out_q *queue,
+ 		struct qeth_qdio_out_buffer *buf,
+ 		enum iucv_tx_notify notification);
+-static void qeth_tx_complete_buf(struct qeth_qdio_out_buffer *buf, bool error,
+-				 int budget);
+-static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *, int);
+ 
+ static void qeth_close_dev_handler(struct work_struct *work)
+ {
+@@ -466,42 +463,6 @@ static enum iucv_tx_notify qeth_compute_cq_notification(int sbalf15,
+ 	return n;
+ }
+ 
+-static void qeth_cleanup_handled_pending(struct qeth_qdio_out_q *q, int bidx,
+-					 int forced_cleanup)
+-{
+-	if (q->card->options.cq != QETH_CQ_ENABLED)
+-		return;
+-
+-	if (q->bufs[bidx]->next_pending != NULL) {
+-		struct qeth_qdio_out_buffer *head = q->bufs[bidx];
+-		struct qeth_qdio_out_buffer *c = q->bufs[bidx]->next_pending;
+-
+-		while (c) {
+-			if (forced_cleanup ||
+-			    atomic_read(&c->state) ==
+-			      QETH_QDIO_BUF_HANDLED_DELAYED) {
+-				struct qeth_qdio_out_buffer *f = c;
+-
+-				QETH_CARD_TEXT(f->q->card, 5, "fp");
+-				QETH_CARD_TEXT_(f->q->card, 5, "%lx", (long) f);
+-				/* release here to avoid interleaving between
+-				   outbound tasklet and inbound tasklet
+-				   regarding notifications and lifecycle */
+-				qeth_tx_complete_buf(c, forced_cleanup, 0);
+-
+-				c = f->next_pending;
+-				WARN_ON_ONCE(head->next_pending != f);
+-				head->next_pending = c;
+-				kmem_cache_free(qeth_qdio_outbuf_cache, f);
+-			} else {
+-				head = c;
+-				c = c->next_pending;
+-			}
+-
+-		}
+-	}
+-}
+-
+ static void qeth_qdio_handle_aob(struct qeth_card *card,
+ 				 unsigned long phys_aob_addr)
+ {
+@@ -517,18 +478,6 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
+ 	buffer = (struct qeth_qdio_out_buffer *) aob->user1;
+ 	QETH_CARD_TEXT_(card, 5, "%lx", aob->user1);
+ 
+-	/* Free dangling allocations. The attached skbs are handled by
+-	 * qeth_cleanup_handled_pending().
+-	 */
+-	for (i = 0;
+-	     i < aob->sb_count && i < QETH_MAX_BUFFER_ELEMENTS(card);
+-	     i++) {
+-		void *data = phys_to_virt(aob->sba[i]);
+-
+-		if (data && buffer->is_header[i])
+-			kmem_cache_free(qeth_core_header_cache, data);
+-	}
+-
+ 	if (aob->aorc) {
+ 		QETH_CARD_TEXT_(card, 2, "aorc%02X", aob->aorc);
+ 		new_state = QETH_QDIO_BUF_QAOB_ERROR;
+@@ -536,10 +485,9 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
+ 
+ 	switch (atomic_xchg(&buffer->state, new_state)) {
+ 	case QETH_QDIO_BUF_PRIMED:
+-		/* Faster than TX completion code. */
+-		notification = qeth_compute_cq_notification(aob->aorc, 0);
+-		qeth_notify_skbs(buffer->q, buffer, notification);
+-		atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);
++		/* Faster than TX completion code, let it handle the async
++		 * completion for us.
++		 */
+ 		break;
+ 	case QETH_QDIO_BUF_PENDING:
+ 		/* TX completion code is active and will handle the async
+@@ -550,7 +498,20 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
+ 		/* TX completion code is already finished. */
+ 		notification = qeth_compute_cq_notification(aob->aorc, 1);
+ 		qeth_notify_skbs(buffer->q, buffer, notification);
+-		atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);
++
++		/* Free dangling allocations. The attached skbs are handled by
++		 * qeth_tx_complete_pending_bufs().
++		 */
++		for (i = 0;
++		     i < aob->sb_count && i < QETH_MAX_BUFFER_ELEMENTS(card);
++		     i++) {
++			void *data = phys_to_virt(aob->sba[i]);
++
++			if (data && buffer->is_header[i])
++				kmem_cache_free(qeth_core_header_cache, data);
++		}
++
++		atomic_set(&buffer->state, QETH_QDIO_BUF_EMPTY);
+ 		break;
+ 	default:
+ 		WARN_ON_ONCE(1);
+@@ -1422,9 +1383,6 @@ static void qeth_tx_complete_buf(struct qeth_qdio_out_buffer *buf, bool error,
+ 	struct qeth_qdio_out_q *queue = buf->q;
+ 	struct sk_buff *skb;
+ 
+-	if (atomic_read(&buf->state) == QETH_QDIO_BUF_PENDING)
+-		qeth_notify_skbs(queue, buf, TX_NOTIFY_GENERALERROR);
+-
+ 	/* Empty buffer? */
+ 	if (buf->next_element_to_fill == 0)
+ 		return;
+@@ -1486,14 +1444,38 @@ static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue,
+ 	atomic_set(&buf->state, QETH_QDIO_BUF_EMPTY);
+ }
+ 
++static void qeth_tx_complete_pending_bufs(struct qeth_card *card,
++					  struct qeth_qdio_out_q *queue,
++					  bool drain)
++{
++	struct qeth_qdio_out_buffer *buf, *tmp;
++
++	list_for_each_entry_safe(buf, tmp, &queue->pending_bufs, list_entry) {
++		if (drain || atomic_read(&buf->state) == QETH_QDIO_BUF_EMPTY) {
++			QETH_CARD_TEXT(card, 5, "fp");
++			QETH_CARD_TEXT_(card, 5, "%lx", (long) buf);
++
++			if (drain)
++				qeth_notify_skbs(queue, buf,
++						 TX_NOTIFY_GENERALERROR);
++			qeth_tx_complete_buf(buf, drain, 0);
++
++			list_del(&buf->list_entry);
++			kmem_cache_free(qeth_qdio_outbuf_cache, buf);
++		}
++	}
++}
++
+ static void qeth_drain_output_queue(struct qeth_qdio_out_q *q, bool free)
+ {
+ 	int j;
+ 
++	qeth_tx_complete_pending_bufs(q->card, q, true);
++
+ 	for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) {
+ 		if (!q->bufs[j])
+ 			continue;
+-		qeth_cleanup_handled_pending(q, j, 1);
++
+ 		qeth_clear_output_buffer(q, q->bufs[j], true, 0);
+ 		if (free) {
+ 			kmem_cache_free(qeth_qdio_outbuf_cache, q->bufs[j]);
+@@ -2613,7 +2595,6 @@ static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *q, int bidx)
+ 	skb_queue_head_init(&newbuf->skb_list);
+ 	lockdep_set_class(&newbuf->skb_list.lock, &qdio_out_skb_queue_key);
+ 	newbuf->q = q;
+-	newbuf->next_pending = q->bufs[bidx];
+ 	atomic_set(&newbuf->state, QETH_QDIO_BUF_EMPTY);
+ 	q->bufs[bidx] = newbuf;
+ 	return 0;
+@@ -2632,15 +2613,28 @@ static void qeth_free_output_queue(struct qeth_qdio_out_q *q)
+ static struct qeth_qdio_out_q *qeth_alloc_output_queue(void)
+ {
+ 	struct qeth_qdio_out_q *q = kzalloc(sizeof(*q), GFP_KERNEL);
++	unsigned int i;
+ 
+ 	if (!q)
+ 		return NULL;
+ 
+-	if (qdio_alloc_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q)) {
+-		kfree(q);
+-		return NULL;
++	if (qdio_alloc_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q))
++		goto err_qdio_bufs;
++
++	for (i = 0; i < QDIO_MAX_BUFFERS_PER_Q; i++) {
++		if (qeth_init_qdio_out_buf(q, i))
++			goto err_out_bufs;
+ 	}
++
+ 	return q;
++
++err_out_bufs:
++	while (i > 0)
++		kmem_cache_free(qeth_qdio_outbuf_cache, q->bufs[--i]);
++	qdio_free_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q);
++err_qdio_bufs:
++	kfree(q);
++	return NULL;
+ }
+ 
+ static void qeth_tx_completion_timer(struct timer_list *timer)
+@@ -2653,7 +2647,7 @@ static void qeth_tx_completion_timer(struct timer_list *timer)
+ 
+ static int qeth_alloc_qdio_queues(struct qeth_card *card)
+ {
+-	int i, j;
++	unsigned int i;
+ 
+ 	QETH_CARD_TEXT(card, 2, "allcqdbf");
+ 
+@@ -2682,18 +2676,12 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card)
+ 		card->qdio.out_qs[i] = queue;
+ 		queue->card = card;
+ 		queue->queue_no = i;
++		INIT_LIST_HEAD(&queue->pending_bufs);
+ 		spin_lock_init(&queue->lock);
+ 		timer_setup(&queue->timer, qeth_tx_completion_timer, 0);
+ 		queue->coalesce_usecs = QETH_TX_COALESCE_USECS;
+ 		queue->max_coalesced_frames = QETH_TX_MAX_COALESCED_FRAMES;
+ 		queue->priority = QETH_QIB_PQUE_PRIO_DEFAULT;
+-
+-		/* give outbound qeth_qdio_buffers their qdio_buffers */
+-		for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) {
+-			WARN_ON(queue->bufs[j]);
+-			if (qeth_init_qdio_out_buf(queue, j))
+-				goto out_freeoutqbufs;
+-		}
+ 	}
+ 
+ 	/* completion */
+@@ -2702,13 +2690,6 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card)
+ 
+ 	return 0;
+ 
+-out_freeoutqbufs:
+-	while (j > 0) {
+-		--j;
+-		kmem_cache_free(qeth_qdio_outbuf_cache,
+-				card->qdio.out_qs[i]->bufs[j]);
+-		card->qdio.out_qs[i]->bufs[j] = NULL;
+-	}
+ out_freeoutq:
+ 	while (i > 0) {
+ 		qeth_free_output_queue(card->qdio.out_qs[--i]);
+@@ -5871,9 +5852,13 @@ static void qeth_iqd_tx_complete(struct qeth_qdio_out_q *queue,
+ 				 QDIO_OUTBUF_STATE_FLAG_PENDING)) {
+ 		WARN_ON_ONCE(card->options.cq != QETH_CQ_ENABLED);
+ 
+-		if (atomic_cmpxchg(&buffer->state, QETH_QDIO_BUF_PRIMED,
+-						   QETH_QDIO_BUF_PENDING) ==
+-		    QETH_QDIO_BUF_PRIMED) {
++		QETH_CARD_TEXT_(card, 5, "pel%u", bidx);
++
++		switch (atomic_cmpxchg(&buffer->state,
++				       QETH_QDIO_BUF_PRIMED,
++				       QETH_QDIO_BUF_PENDING)) {
++		case QETH_QDIO_BUF_PRIMED:
++			/* We have initial ownership, no QAOB (yet): */
+ 			qeth_notify_skbs(queue, buffer, TX_NOTIFY_PENDING);
+ 
+ 			/* Handle race with qeth_qdio_handle_aob(): */
+@@ -5881,39 +5866,51 @@ static void qeth_iqd_tx_complete(struct qeth_qdio_out_q *queue,
+ 					    QETH_QDIO_BUF_NEED_QAOB)) {
+ 			case QETH_QDIO_BUF_PENDING:
+ 				/* No concurrent QAOB notification. */
+-				break;
++
++				/* Prepare the queue slot for immediate re-use: */
++				qeth_scrub_qdio_buffer(buffer->buffer, queue->max_elements);
++				if (qeth_init_qdio_out_buf(queue, bidx)) {
++					QETH_CARD_TEXT(card, 2, "outofbuf");
++					qeth_schedule_recovery(card);
++				}
++
++				list_add(&buffer->list_entry,
++					 &queue->pending_bufs);
++				/* Skip clearing the buffer: */
++				return;
+ 			case QETH_QDIO_BUF_QAOB_OK:
+ 				qeth_notify_skbs(queue, buffer,
+ 						 TX_NOTIFY_DELAYED_OK);
+-				atomic_set(&buffer->state,
+-					   QETH_QDIO_BUF_HANDLED_DELAYED);
++				error = false;
+ 				break;
+ 			case QETH_QDIO_BUF_QAOB_ERROR:
+ 				qeth_notify_skbs(queue, buffer,
+ 						 TX_NOTIFY_DELAYED_GENERALERROR);
+-				atomic_set(&buffer->state,
+-					   QETH_QDIO_BUF_HANDLED_DELAYED);
++				error = true;
+ 				break;
+ 			default:
+ 				WARN_ON_ONCE(1);
+ 			}
+-		}
+-
+-		QETH_CARD_TEXT_(card, 5, "pel%u", bidx);
+ 
+-		/* prepare the queue slot for re-use: */
+-		qeth_scrub_qdio_buffer(buffer->buffer, queue->max_elements);
+-		if (qeth_init_qdio_out_buf(queue, bidx)) {
+-			QETH_CARD_TEXT(card, 2, "outofbuf");
+-			qeth_schedule_recovery(card);
++			break;
++		case QETH_QDIO_BUF_QAOB_OK:
++			/* qeth_qdio_handle_aob() already received a QAOB: */
++			qeth_notify_skbs(queue, buffer, TX_NOTIFY_OK);
++			error = false;
++			break;
++		case QETH_QDIO_BUF_QAOB_ERROR:
++			/* qeth_qdio_handle_aob() already received a QAOB: */
++			qeth_notify_skbs(queue, buffer, TX_NOTIFY_GENERALERROR);
++			error = true;
++			break;
++		default:
++			WARN_ON_ONCE(1);
+ 		}
+-
+-		return;
+-	}
+-
+-	if (card->options.cq == QETH_CQ_ENABLED)
++	} else if (card->options.cq == QETH_CQ_ENABLED) {
+ 		qeth_notify_skbs(queue, buffer,
+ 				 qeth_compute_cq_notification(sflags, 0));
++	}
++
+ 	qeth_clear_output_buffer(queue, buffer, error, budget);
+ }
+ 
+@@ -5934,6 +5931,8 @@ static int qeth_tx_poll(struct napi_struct *napi, int budget)
+ 		unsigned int bytes = 0;
+ 		int completed;
+ 
++		qeth_tx_complete_pending_bufs(card, queue, false);
++
+ 		if (qeth_out_queue_is_empty(queue)) {
+ 			napi_complete(napi);
+ 			return 0;
+@@ -5966,7 +5965,6 @@ static int qeth_tx_poll(struct napi_struct *napi, int budget)
+ 
+ 			qeth_handle_send_error(card, buffer, error);
+ 			qeth_iqd_tx_complete(queue, bidx, error, budget);
+-			qeth_cleanup_handled_pending(queue, bidx, false);
+ 		}
+ 
+ 		netdev_tx_completed_queue(txq, packets, bytes);
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 5125a6c7f70e9..41b8192d207d0 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -1532,14 +1532,9 @@ check_mgmt:
+ 		}
+ 		rc = iscsi_prep_scsi_cmd_pdu(conn->task);
+ 		if (rc) {
+-			if (rc == -ENOMEM || rc == -EACCES) {
+-				spin_lock_bh(&conn->taskqueuelock);
+-				list_add_tail(&conn->task->running,
+-					      &conn->cmdqueue);
+-				conn->task = NULL;
+-				spin_unlock_bh(&conn->taskqueuelock);
+-				goto done;
+-			} else
++			if (rc == -ENOMEM || rc == -EACCES)
++				fail_scsi_task(conn->task, DID_IMM_RETRY);
++			else
+ 				fail_scsi_task(conn->task, DID_ABORT);
+ 			spin_lock_bh(&conn->taskqueuelock);
+ 			continue;
+diff --git a/drivers/scsi/ufs/ufs-sysfs.c b/drivers/scsi/ufs/ufs-sysfs.c
+index bdcd27faa0547..34b424ad96a20 100644
+--- a/drivers/scsi/ufs/ufs-sysfs.c
++++ b/drivers/scsi/ufs/ufs-sysfs.c
+@@ -785,7 +785,8 @@ static ssize_t _pname##_show(struct device *dev,			\
+ 	struct scsi_device *sdev = to_scsi_device(dev);			\
+ 	struct ufs_hba *hba = shost_priv(sdev->host);			\
+ 	u8 lun = ufshcd_scsi_to_upiu_lun(sdev->lun);			\
+-	if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun))		\
++	if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun,		\
++				_duname##_DESC_PARAM##_puname))		\
+ 		return -EINVAL;						\
+ 	return ufs_sysfs_read_desc_param(hba, QUERY_DESC_IDN_##_duname,	\
+ 		lun, _duname##_DESC_PARAM##_puname, buf, _size);	\
+diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h
+index f8ab16f30fdca..07ca39008b84b 100644
+--- a/drivers/scsi/ufs/ufs.h
++++ b/drivers/scsi/ufs/ufs.h
+@@ -551,13 +551,15 @@ struct ufs_dev_info {
+  * @return: true if the lun has a matching unit descriptor, false otherwise
+  */
+ static inline bool ufs_is_valid_unit_desc_lun(struct ufs_dev_info *dev_info,
+-		u8 lun)
++		u8 lun, u8 param_offset)
+ {
+ 	if (!dev_info || !dev_info->max_lu_supported) {
+ 		pr_err("Max General LU supported by UFS isn't initialized\n");
+ 		return false;
+ 	}
+-
++	/* WB is available only for the logical unit from 0 to 7 */
++	if (param_offset == UNIT_DESC_PARAM_WB_BUF_ALLOC_UNITS)
++		return lun < UFS_UPIU_MAX_WB_LUN_ID;
+ 	return lun == UFS_UPIU_RPMB_WLUN || (lun < dev_info->max_lu_supported);
+ }
+ 
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 5a7cc2e42ffdf..97d9d5d99adcc 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -3378,7 +3378,7 @@ static inline int ufshcd_read_unit_desc_param(struct ufs_hba *hba,
+ 	 * Unit descriptors are only available for general purpose LUs (LUN id
+ 	 * from 0 to 7) and RPMB Well known LU.
+ 	 */
+-	if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun))
++	if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun, param_offset))
+ 		return -EOPNOTSUPP;
+ 
+ 	return ufshcd_read_desc_param(hba, QUERY_DESC_IDN_UNIT, lun,
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 6eeb39669a866..53c4311cc6ab5 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -928,8 +928,8 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
+ 		mask |= STM32H7_SPI_SR_RXP;
+ 
+ 	if (!(sr & mask)) {
+-		dev_dbg(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
+-			sr, ier);
++		dev_warn(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
++			 sr, ier);
+ 		spin_unlock_irqrestore(&spi->lock, flags);
+ 		return IRQ_NONE;
+ 	}
+@@ -956,15 +956,8 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
+ 	}
+ 
+ 	if (sr & STM32H7_SPI_SR_OVR) {
+-		dev_warn(spi->dev, "Overrun: received value discarded\n");
+-		if (!spi->cur_usedma && (spi->rx_buf && (spi->rx_len > 0)))
+-			stm32h7_spi_read_rxfifo(spi, false);
+-		/*
+-		 * If overrun is detected while using DMA, it means that
+-		 * something went wrong, so stop the current transfer
+-		 */
+-		if (spi->cur_usedma)
+-			end = true;
++		dev_err(spi->dev, "Overrun: RX data lost\n");
++		end = true;
+ 	}
+ 
+ 	if (sr & STM32H7_SPI_SR_EOT) {
+diff --git a/drivers/staging/comedi/drivers/addi_apci_1032.c b/drivers/staging/comedi/drivers/addi_apci_1032.c
+index 35b75f0c9200b..81a246fbcc01f 100644
+--- a/drivers/staging/comedi/drivers/addi_apci_1032.c
++++ b/drivers/staging/comedi/drivers/addi_apci_1032.c
+@@ -260,6 +260,7 @@ static irqreturn_t apci1032_interrupt(int irq, void *d)
+ 	struct apci1032_private *devpriv = dev->private;
+ 	struct comedi_subdevice *s = dev->read_subdev;
+ 	unsigned int ctrl;
++	unsigned short val;
+ 
+ 	/* check interrupt is from this device */
+ 	if ((inl(devpriv->amcc_iobase + AMCC_OP_REG_INTCSR) &
+@@ -275,7 +276,8 @@ static irqreturn_t apci1032_interrupt(int irq, void *d)
+ 	outl(ctrl & ~APCI1032_CTRL_INT_ENA, dev->iobase + APCI1032_CTRL_REG);
+ 
+ 	s->state = inl(dev->iobase + APCI1032_STATUS_REG) & 0xffff;
+-	comedi_buf_write_samples(s, &s->state, 1);
++	val = s->state;
++	comedi_buf_write_samples(s, &val, 1);
+ 	comedi_handle_events(dev, s);
+ 
+ 	/* enable the interrupt */
+diff --git a/drivers/staging/comedi/drivers/addi_apci_1500.c b/drivers/staging/comedi/drivers/addi_apci_1500.c
+index 11efb21555e39..b04c15dcfb575 100644
+--- a/drivers/staging/comedi/drivers/addi_apci_1500.c
++++ b/drivers/staging/comedi/drivers/addi_apci_1500.c
+@@ -208,7 +208,7 @@ static irqreturn_t apci1500_interrupt(int irq, void *d)
+ 	struct comedi_device *dev = d;
+ 	struct apci1500_private *devpriv = dev->private;
+ 	struct comedi_subdevice *s = dev->read_subdev;
+-	unsigned int status = 0;
++	unsigned short status = 0;
+ 	unsigned int val;
+ 
+ 	val = inl(devpriv->amcc + AMCC_OP_REG_INTCSR);
+@@ -238,14 +238,14 @@ static irqreturn_t apci1500_interrupt(int irq, void *d)
+ 	 *
+ 	 *    Mask     Meaning
+ 	 * ----------  ------------------------------------------
+-	 * 0x00000001  Event 1 has occurred
+-	 * 0x00000010  Event 2 has occurred
+-	 * 0x00000100  Counter/timer 1 has run down (not implemented)
+-	 * 0x00001000  Counter/timer 2 has run down (not implemented)
+-	 * 0x00010000  Counter 3 has run down (not implemented)
+-	 * 0x00100000  Watchdog has run down (not implemented)
+-	 * 0x01000000  Voltage error
+-	 * 0x10000000  Short-circuit error
++	 * 0b00000001  Event 1 has occurred
++	 * 0b00000010  Event 2 has occurred
++	 * 0b00000100  Counter/timer 1 has run down (not implemented)
++	 * 0b00001000  Counter/timer 2 has run down (not implemented)
++	 * 0b00010000  Counter 3 has run down (not implemented)
++	 * 0b00100000  Watchdog has run down (not implemented)
++	 * 0b01000000  Voltage error
++	 * 0b10000000  Short-circuit error
+ 	 */
+ 	comedi_buf_write_samples(s, &status, 1);
+ 	comedi_handle_events(dev, s);
+diff --git a/drivers/staging/comedi/drivers/adv_pci1710.c b/drivers/staging/comedi/drivers/adv_pci1710.c
+index 692893c7e5c3d..090607760be6b 100644
+--- a/drivers/staging/comedi/drivers/adv_pci1710.c
++++ b/drivers/staging/comedi/drivers/adv_pci1710.c
+@@ -300,11 +300,11 @@ static int pci1710_ai_eoc(struct comedi_device *dev,
+ static int pci1710_ai_read_sample(struct comedi_device *dev,
+ 				  struct comedi_subdevice *s,
+ 				  unsigned int cur_chan,
+-				  unsigned int *val)
++				  unsigned short *val)
+ {
+ 	const struct boardtype *board = dev->board_ptr;
+ 	struct pci1710_private *devpriv = dev->private;
+-	unsigned int sample;
++	unsigned short sample;
+ 	unsigned int chan;
+ 
+ 	sample = inw(dev->iobase + PCI171X_AD_DATA_REG);
+@@ -345,7 +345,7 @@ static int pci1710_ai_insn_read(struct comedi_device *dev,
+ 	pci1710_ai_setup_chanlist(dev, s, &insn->chanspec, 1, 1);
+ 
+ 	for (i = 0; i < insn->n; i++) {
+-		unsigned int val;
++		unsigned short val;
+ 
+ 		/* start conversion */
+ 		outw(0, dev->iobase + PCI171X_SOFTTRG_REG);
+@@ -395,7 +395,7 @@ static void pci1710_handle_every_sample(struct comedi_device *dev,
+ {
+ 	struct comedi_cmd *cmd = &s->async->cmd;
+ 	unsigned int status;
+-	unsigned int val;
++	unsigned short val;
+ 	int ret;
+ 
+ 	status = inw(dev->iobase + PCI171X_STATUS_REG);
+@@ -455,7 +455,7 @@ static void pci1710_handle_fifo(struct comedi_device *dev,
+ 	}
+ 
+ 	for (i = 0; i < devpriv->max_samples; i++) {
+-		unsigned int val;
++		unsigned short val;
+ 		int ret;
+ 
+ 		ret = pci1710_ai_read_sample(dev, s, s->async->cur_chan, &val);
+diff --git a/drivers/staging/comedi/drivers/das6402.c b/drivers/staging/comedi/drivers/das6402.c
+index 04e224f8b7793..96f4107b8054d 100644
+--- a/drivers/staging/comedi/drivers/das6402.c
++++ b/drivers/staging/comedi/drivers/das6402.c
+@@ -186,7 +186,7 @@ static irqreturn_t das6402_interrupt(int irq, void *d)
+ 	if (status & DAS6402_STATUS_FFULL) {
+ 		async->events |= COMEDI_CB_OVERFLOW;
+ 	} else if (status & DAS6402_STATUS_FFNE) {
+-		unsigned int val;
++		unsigned short val;
+ 
+ 		val = das6402_ai_read_sample(dev, s);
+ 		comedi_buf_write_samples(s, &val, 1);
+diff --git a/drivers/staging/comedi/drivers/das800.c b/drivers/staging/comedi/drivers/das800.c
+index 4ea100ff6930f..2881808d6606c 100644
+--- a/drivers/staging/comedi/drivers/das800.c
++++ b/drivers/staging/comedi/drivers/das800.c
+@@ -427,7 +427,7 @@ static irqreturn_t das800_interrupt(int irq, void *d)
+ 	struct comedi_cmd *cmd;
+ 	unsigned long irq_flags;
+ 	unsigned int status;
+-	unsigned int val;
++	unsigned short val;
+ 	bool fifo_empty;
+ 	bool fifo_overflow;
+ 	int i;
+diff --git a/drivers/staging/comedi/drivers/dmm32at.c b/drivers/staging/comedi/drivers/dmm32at.c
+index 17e6018918bbf..56682f01242fd 100644
+--- a/drivers/staging/comedi/drivers/dmm32at.c
++++ b/drivers/staging/comedi/drivers/dmm32at.c
+@@ -404,7 +404,7 @@ static irqreturn_t dmm32at_isr(int irq, void *d)
+ {
+ 	struct comedi_device *dev = d;
+ 	unsigned char intstat;
+-	unsigned int val;
++	unsigned short val;
+ 	int i;
+ 
+ 	if (!dev->attached) {
+diff --git a/drivers/staging/comedi/drivers/me4000.c b/drivers/staging/comedi/drivers/me4000.c
+index 726e40dc17b62..0d3d4cafce2e8 100644
+--- a/drivers/staging/comedi/drivers/me4000.c
++++ b/drivers/staging/comedi/drivers/me4000.c
+@@ -924,7 +924,7 @@ static irqreturn_t me4000_ai_isr(int irq, void *dev_id)
+ 	struct comedi_subdevice *s = dev->read_subdev;
+ 	int i;
+ 	int c = 0;
+-	unsigned int lval;
++	unsigned short lval;
+ 
+ 	if (!dev->attached)
+ 		return IRQ_NONE;
+diff --git a/drivers/staging/comedi/drivers/pcl711.c b/drivers/staging/comedi/drivers/pcl711.c
+index 2dbf69e309650..bd6f42fe9e3ca 100644
+--- a/drivers/staging/comedi/drivers/pcl711.c
++++ b/drivers/staging/comedi/drivers/pcl711.c
+@@ -184,7 +184,7 @@ static irqreturn_t pcl711_interrupt(int irq, void *d)
+ 	struct comedi_device *dev = d;
+ 	struct comedi_subdevice *s = dev->read_subdev;
+ 	struct comedi_cmd *cmd = &s->async->cmd;
+-	unsigned int data;
++	unsigned short data;
+ 
+ 	if (!dev->attached) {
+ 		dev_err(dev->class_dev, "spurious interrupt\n");
+diff --git a/drivers/staging/comedi/drivers/pcl818.c b/drivers/staging/comedi/drivers/pcl818.c
+index 63e3011158f23..f4b4a686c710f 100644
+--- a/drivers/staging/comedi/drivers/pcl818.c
++++ b/drivers/staging/comedi/drivers/pcl818.c
+@@ -423,7 +423,7 @@ static int pcl818_ai_eoc(struct comedi_device *dev,
+ 
+ static bool pcl818_ai_write_sample(struct comedi_device *dev,
+ 				   struct comedi_subdevice *s,
+-				   unsigned int chan, unsigned int val)
++				   unsigned int chan, unsigned short val)
+ {
+ 	struct pcl818_private *devpriv = dev->private;
+ 	struct comedi_cmd *cmd = &s->async->cmd;
+diff --git a/drivers/staging/ks7010/ks_wlan_net.c b/drivers/staging/ks7010/ks_wlan_net.c
+index dc09cc6e1c478..09e7b4cd0138c 100644
+--- a/drivers/staging/ks7010/ks_wlan_net.c
++++ b/drivers/staging/ks7010/ks_wlan_net.c
+@@ -1120,6 +1120,7 @@ static int ks_wlan_set_scan(struct net_device *dev,
+ {
+ 	struct ks_wlan_private *priv = netdev_priv(dev);
+ 	struct iw_scan_req *req = NULL;
++	int len;
+ 
+ 	if (priv->sleep_mode == SLP_SLEEP)
+ 		return -EPERM;
+@@ -1129,8 +1130,9 @@ static int ks_wlan_set_scan(struct net_device *dev,
+ 	if (wrqu->data.length == sizeof(struct iw_scan_req) &&
+ 	    wrqu->data.flags & IW_SCAN_THIS_ESSID) {
+ 		req = (struct iw_scan_req *)extra;
+-		priv->scan_ssid_len = req->essid_len;
+-		memcpy(priv->scan_ssid, req->essid, priv->scan_ssid_len);
++		len = min_t(int, req->essid_len, IW_ESSID_MAX_SIZE);
++		priv->scan_ssid_len = len;
++		memcpy(priv->scan_ssid, req->essid, len);
+ 	} else {
+ 		priv->scan_ssid_len = 0;
+ 	}
+diff --git a/drivers/staging/media/rkisp1/rkisp1-params.c b/drivers/staging/media/rkisp1/rkisp1-params.c
+index 986d293201e63..3eb3fb2d64bc1 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-params.c
++++ b/drivers/staging/media/rkisp1/rkisp1-params.c
+@@ -1291,7 +1291,6 @@ static void rkisp1_params_config_parameter(struct rkisp1_params *params)
+ 	memset(hst.hist_weight, 0x01, sizeof(hst.hist_weight));
+ 	rkisp1_hst_config(params, &hst);
+ 	rkisp1_param_set_bits(params, RKISP1_CIF_ISP_HIST_PROP,
+-			      ~RKISP1_CIF_ISP_HIST_PROP_MODE_MASK |
+ 			      rkisp1_hst_params_default_config.mode);
+ 
+ 	/* set the  range */
+diff --git a/drivers/staging/rtl8188eu/core/rtw_ap.c b/drivers/staging/rtl8188eu/core/rtw_ap.c
+index 2078d87055bf6..d25a5734249f0 100644
+--- a/drivers/staging/rtl8188eu/core/rtw_ap.c
++++ b/drivers/staging/rtl8188eu/core/rtw_ap.c
+@@ -791,6 +791,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf,  int len)
+ 	p = rtw_get_ie(ie + _BEACON_IE_OFFSET_, _SSID_IE_, &ie_len,
+ 		       pbss_network->ie_length - _BEACON_IE_OFFSET_);
+ 	if (p && ie_len > 0) {
++		ie_len = min_t(int, ie_len, sizeof(pbss_network->ssid.ssid));
+ 		memset(&pbss_network->ssid, 0, sizeof(struct ndis_802_11_ssid));
+ 		memcpy(pbss_network->ssid.ssid, p + 2, ie_len);
+ 		pbss_network->ssid.ssid_length = ie_len;
+@@ -811,6 +812,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf,  int len)
+ 	p = rtw_get_ie(ie + _BEACON_IE_OFFSET_, _SUPPORTEDRATES_IE_, &ie_len,
+ 		       pbss_network->ie_length - _BEACON_IE_OFFSET_);
+ 	if (p) {
++		ie_len = min_t(int, ie_len, NDIS_802_11_LENGTH_RATES_EX);
+ 		memcpy(supportRate, p + 2, ie_len);
+ 		supportRateNum = ie_len;
+ 	}
+@@ -819,6 +821,8 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf,  int len)
+ 	p = rtw_get_ie(ie + _BEACON_IE_OFFSET_, _EXT_SUPPORTEDRATES_IE_,
+ 		       &ie_len, pbss_network->ie_length - _BEACON_IE_OFFSET_);
+ 	if (p) {
++		ie_len = min_t(int, ie_len,
++			       NDIS_802_11_LENGTH_RATES_EX - supportRateNum);
+ 		memcpy(supportRate + supportRateNum, p + 2, ie_len);
+ 		supportRateNum += ie_len;
+ 	}
+@@ -934,6 +938,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf,  int len)
+ 
+ 		pht_cap->mcs.rx_mask[0] = 0xff;
+ 		pht_cap->mcs.rx_mask[1] = 0x0;
++		ie_len = min_t(int, ie_len, sizeof(pmlmepriv->htpriv.ht_cap));
+ 		memcpy(&pmlmepriv->htpriv.ht_cap, p + 2, ie_len);
+ 	}
+ 
+diff --git a/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c b/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
+index 8e10462f1fbe5..d487528b5fc9a 100644
+--- a/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
++++ b/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
+@@ -1144,9 +1144,11 @@ static int rtw_wx_set_scan(struct net_device *dev, struct iw_request_info *a,
+ 						break;
+ 					}
+ 					sec_len = *(pos++); len -= 1;
+-					if (sec_len > 0 && sec_len <= len) {
++					if (sec_len > 0 &&
++					    sec_len <= len &&
++					    sec_len <= 32) {
+ 						ssid[ssid_index].ssid_length = sec_len;
+-						memcpy(ssid[ssid_index].ssid, pos, ssid[ssid_index].ssid_length);
++						memcpy(ssid[ssid_index].ssid, pos, sec_len);
+ 						ssid_index++;
+ 					}
+ 					pos += sec_len;
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_wx.c b/drivers/staging/rtl8192e/rtl8192e/rtl_wx.c
+index 16bcee13f64b5..407effde5e71a 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_wx.c
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_wx.c
+@@ -406,9 +406,10 @@ static int _rtl92e_wx_set_scan(struct net_device *dev,
+ 		struct iw_scan_req *req = (struct iw_scan_req *)b;
+ 
+ 		if (req->essid_len) {
+-			ieee->current_network.ssid_len = req->essid_len;
+-			memcpy(ieee->current_network.ssid, req->essid,
+-			       req->essid_len);
++			int len = min_t(int, req->essid_len, IW_ESSID_MAX_SIZE);
++
++			ieee->current_network.ssid_len = len;
++			memcpy(ieee->current_network.ssid, req->essid, len);
+ 		}
+ 	}
+ 
+diff --git a/drivers/staging/rtl8192u/r8192U_wx.c b/drivers/staging/rtl8192u/r8192U_wx.c
+index d853586705fc9..77bf88696a844 100644
+--- a/drivers/staging/rtl8192u/r8192U_wx.c
++++ b/drivers/staging/rtl8192u/r8192U_wx.c
+@@ -331,8 +331,10 @@ static int r8192_wx_set_scan(struct net_device *dev, struct iw_request_info *a,
+ 		struct iw_scan_req *req = (struct iw_scan_req *)b;
+ 
+ 		if (req->essid_len) {
+-			ieee->current_network.ssid_len = req->essid_len;
+-			memcpy(ieee->current_network.ssid, req->essid, req->essid_len);
++			int len = min_t(int, req->essid_len, IW_ESSID_MAX_SIZE);
++
++			ieee->current_network.ssid_len = len;
++			memcpy(ieee->current_network.ssid, req->essid, len);
+ 		}
+ 	}
+ 
+diff --git a/drivers/staging/rtl8712/rtl871x_cmd.c b/drivers/staging/rtl8712/rtl871x_cmd.c
+index 18116469bd316..75716f59044d9 100644
+--- a/drivers/staging/rtl8712/rtl871x_cmd.c
++++ b/drivers/staging/rtl8712/rtl871x_cmd.c
+@@ -192,8 +192,10 @@ u8 r8712_sitesurvey_cmd(struct _adapter *padapter,
+ 	psurveyPara->ss_ssidlen = 0;
+ 	memset(psurveyPara->ss_ssid, 0, IW_ESSID_MAX_SIZE + 1);
+ 	if (pssid && pssid->SsidLength) {
+-		memcpy(psurveyPara->ss_ssid, pssid->Ssid, pssid->SsidLength);
+-		psurveyPara->ss_ssidlen = cpu_to_le32(pssid->SsidLength);
++		int len = min_t(int, pssid->SsidLength, IW_ESSID_MAX_SIZE);
++
++		memcpy(psurveyPara->ss_ssid, pssid->Ssid, len);
++		psurveyPara->ss_ssidlen = cpu_to_le32(len);
+ 	}
+ 	set_fwstate(pmlmepriv, _FW_UNDER_SURVEY);
+ 	r8712_enqueue_cmd(pcmdpriv, ph2c);
+diff --git a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
+index cbaa7a4897483..2a661b04cd255 100644
+--- a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
++++ b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
+@@ -924,7 +924,7 @@ static int r871x_wx_set_priv(struct net_device *dev,
+ 	struct iw_point *dwrq = (struct iw_point *)awrq;
+ 
+ 	len = dwrq->length;
+-	ext = memdup_user(dwrq->pointer, len);
++	ext = strndup_user(dwrq->pointer, len);
+ 	if (IS_ERR(ext))
+ 		return PTR_ERR(ext);
+ 
+diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
+index 5f79ea05f9b81..b42193c554fb2 100644
+--- a/drivers/target/target_core_pr.c
++++ b/drivers/target/target_core_pr.c
+@@ -3738,6 +3738,7 @@ core_scsi3_pri_read_keys(struct se_cmd *cmd)
+ 	spin_unlock(&dev->t10_pr.registration_lock);
+ 
+ 	put_unaligned_be32(add_len, &buf[4]);
++	target_set_cmd_data_length(cmd, 8 + add_len);
+ 
+ 	transport_kunmap_data_sg(cmd);
+ 
+@@ -3756,7 +3757,7 @@ core_scsi3_pri_read_reservation(struct se_cmd *cmd)
+ 	struct t10_pr_registration *pr_reg;
+ 	unsigned char *buf;
+ 	u64 pr_res_key;
+-	u32 add_len = 16; /* Hardcoded to 16 when a reservation is held. */
++	u32 add_len = 0;
+ 
+ 	if (cmd->data_length < 8) {
+ 		pr_err("PRIN SA READ_RESERVATIONS SCSI Data Length: %u"
+@@ -3774,8 +3775,9 @@ core_scsi3_pri_read_reservation(struct se_cmd *cmd)
+ 	pr_reg = dev->dev_pr_res_holder;
+ 	if (pr_reg) {
+ 		/*
+-		 * Set the hardcoded Additional Length
++		 * Set the Additional Length to 16 when a reservation is held
+ 		 */
++		add_len = 16;
+ 		put_unaligned_be32(add_len, &buf[4]);
+ 
+ 		if (cmd->data_length < 22)
+@@ -3811,6 +3813,8 @@ core_scsi3_pri_read_reservation(struct se_cmd *cmd)
+ 			  (pr_reg->pr_res_type & 0x0f);
+ 	}
+ 
++	target_set_cmd_data_length(cmd, 8 + add_len);
++
+ err:
+ 	spin_unlock(&dev->dev_reservation_lock);
+ 	transport_kunmap_data_sg(cmd);
+@@ -3829,7 +3833,7 @@ core_scsi3_pri_report_capabilities(struct se_cmd *cmd)
+ 	struct se_device *dev = cmd->se_dev;
+ 	struct t10_reservation *pr_tmpl = &dev->t10_pr;
+ 	unsigned char *buf;
+-	u16 add_len = 8; /* Hardcoded to 8. */
++	u16 len = 8; /* Hardcoded to 8. */
+ 
+ 	if (cmd->data_length < 6) {
+ 		pr_err("PRIN SA REPORT_CAPABILITIES SCSI Data Length:"
+@@ -3841,7 +3845,7 @@ core_scsi3_pri_report_capabilities(struct se_cmd *cmd)
+ 	if (!buf)
+ 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ 
+-	put_unaligned_be16(add_len, &buf[0]);
++	put_unaligned_be16(len, &buf[0]);
+ 	buf[2] |= 0x10; /* CRH: Compatible Reservation Hanlding bit. */
+ 	buf[2] |= 0x08; /* SIP_C: Specify Initiator Ports Capable bit */
+ 	buf[2] |= 0x04; /* ATP_C: All Target Ports Capable bit */
+@@ -3870,6 +3874,8 @@ core_scsi3_pri_report_capabilities(struct se_cmd *cmd)
+ 	buf[4] |= 0x02; /* PR_TYPE_WRITE_EXCLUSIVE */
+ 	buf[5] |= 0x01; /* PR_TYPE_EXCLUSIVE_ACCESS_ALLREG */
+ 
++	target_set_cmd_data_length(cmd, len);
++
+ 	transport_kunmap_data_sg(cmd);
+ 
+ 	return 0;
+@@ -4030,6 +4036,7 @@ core_scsi3_pri_read_full_status(struct se_cmd *cmd)
+ 	 * Set ADDITIONAL_LENGTH
+ 	 */
+ 	put_unaligned_be32(add_len, &buf[4]);
++	target_set_cmd_data_length(cmd, 8 + add_len);
+ 
+ 	transport_kunmap_data_sg(cmd);
+ 
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index ff26ab0a5f600..484f0ba0a65bb 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -873,11 +873,9 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status)
+ }
+ EXPORT_SYMBOL(target_complete_cmd);
+ 
+-void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int length)
++void target_set_cmd_data_length(struct se_cmd *cmd, int length)
+ {
+-	if ((scsi_status == SAM_STAT_GOOD ||
+-	     cmd->se_cmd_flags & SCF_TREAT_READ_AS_NORMAL) &&
+-	    length < cmd->data_length) {
++	if (length < cmd->data_length) {
+ 		if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT) {
+ 			cmd->residual_count += cmd->data_length - length;
+ 		} else {
+@@ -887,6 +885,15 @@ void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int len
+ 
+ 		cmd->data_length = length;
+ 	}
++}
++EXPORT_SYMBOL(target_set_cmd_data_length);
++
++void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int length)
++{
++	if (scsi_status == SAM_STAT_GOOD ||
++	    cmd->se_cmd_flags & SCF_TREAT_READ_AS_NORMAL) {
++		target_set_cmd_data_length(cmd, length);
++	}
+ 
+ 	target_complete_cmd(cmd, scsi_status);
+ }
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 21130af106bb6..8434bd5a8ec78 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -1056,9 +1056,9 @@ static int max310x_startup(struct uart_port *port)
+ 	max310x_port_update(port, MAX310X_MODE1_REG,
+ 			    MAX310X_MODE1_TRNSCVCTRL_BIT, 0);
+ 
+-	/* Reset FIFOs */
+-	max310x_port_write(port, MAX310X_MODE2_REG,
+-			   MAX310X_MODE2_FIFORST_BIT);
++	/* Configure MODE2 register & Reset FIFOs*/
++	val = MAX310X_MODE2_RXEMPTINV_BIT | MAX310X_MODE2_FIFORST_BIT;
++	max310x_port_write(port, MAX310X_MODE2_REG, val);
+ 	max310x_port_update(port, MAX310X_MODE2_REG,
+ 			    MAX310X_MODE2_FIFORST_BIT, 0);
+ 
+@@ -1086,27 +1086,8 @@ static int max310x_startup(struct uart_port *port)
+ 	/* Clear IRQ status register */
+ 	max310x_port_read(port, MAX310X_IRQSTS_REG);
+ 
+-	/*
+-	 * Let's ask for an interrupt after a timeout equivalent to
+-	 * the receiving time of 4 characters after the last character
+-	 * has been received.
+-	 */
+-	max310x_port_write(port, MAX310X_RXTO_REG, 4);
+-
+-	/*
+-	 * Make sure we also get RX interrupts when the RX FIFO is
+-	 * filling up quickly, so get an interrupt when half of the RX
+-	 * FIFO has been filled in.
+-	 */
+-	max310x_port_write(port, MAX310X_FIFOTRIGLVL_REG,
+-			   MAX310X_FIFOTRIGLVL_RX(MAX310X_FIFO_SIZE / 2));
+-
+-	/* Enable RX timeout interrupt in LSR */
+-	max310x_port_write(port, MAX310X_LSR_IRQEN_REG,
+-			   MAX310X_LSR_RXTO_BIT);
+-
+-	/* Enable LSR, RX FIFO trigger, CTS change interrupts */
+-	val = MAX310X_IRQ_LSR_BIT  | MAX310X_IRQ_RXFIFO_BIT | MAX310X_IRQ_TXEMPTY_BIT;
++	/* Enable RX, TX, CTS change interrupts */
++	val = MAX310X_IRQ_RXEMPTY_BIT | MAX310X_IRQ_TXEMPTY_BIT;
+ 	max310x_port_write(port, MAX310X_IRQEN_REG, val | MAX310X_IRQ_CTS_BIT);
+ 
+ 	return 0;
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 781905745812e..2f4e5174e78c8 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1929,6 +1929,11 @@ static const struct usb_device_id acm_ids[] = {
+ 	.driver_info = SEND_ZERO_PACKET,
+ 	},
+ 
++	/* Exclude Goodix Fingerprint Reader */
++	{ USB_DEVICE(0x27c6, 0x5395),
++	.driver_info = IGNORE_DEVICE,
++	},
++
+ 	/* control interfaces without any protocol set */
+ 	{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
+ 		USB_CDC_PROTO_NONE) },
+diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
+index c9f6e97582885..f27b4aecff3d4 100644
+--- a/drivers/usb/class/usblp.c
++++ b/drivers/usb/class/usblp.c
+@@ -494,16 +494,24 @@ static int usblp_release(struct inode *inode, struct file *file)
+ /* No kernel lock - fine */
+ static __poll_t usblp_poll(struct file *file, struct poll_table_struct *wait)
+ {
+-	__poll_t ret;
++	struct usblp *usblp = file->private_data;
++	__poll_t ret = 0;
+ 	unsigned long flags;
+ 
+-	struct usblp *usblp = file->private_data;
+ 	/* Should we check file->f_mode & FMODE_WRITE before poll_wait()? */
+ 	poll_wait(file, &usblp->rwait, wait);
+ 	poll_wait(file, &usblp->wwait, wait);
++
++	mutex_lock(&usblp->mut);
++	if (!usblp->present)
++		ret |= EPOLLHUP;
++	mutex_unlock(&usblp->mut);
++
+ 	spin_lock_irqsave(&usblp->lock, flags);
+-	ret = ((usblp->bidir && usblp->rcomplete) ? EPOLLIN  | EPOLLRDNORM : 0) |
+-	   ((usblp->no_paper || usblp->wcomplete) ? EPOLLOUT | EPOLLWRNORM : 0);
++	if (usblp->bidir && usblp->rcomplete)
++		ret |= EPOLLIN  | EPOLLRDNORM;
++	if (usblp->no_paper || usblp->wcomplete)
++		ret |= EPOLLOUT | EPOLLWRNORM;
+ 	spin_unlock_irqrestore(&usblp->lock, flags);
+ 	return ret;
+ }
+diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
+index 9b4ac4415f1a4..db4de5367737a 100644
+--- a/drivers/usb/core/usb.c
++++ b/drivers/usb/core/usb.c
+@@ -748,6 +748,38 @@ void usb_put_intf(struct usb_interface *intf)
+ }
+ EXPORT_SYMBOL_GPL(usb_put_intf);
+ 
++/**
++ * usb_intf_get_dma_device - acquire a reference on the usb interface's DMA endpoint
++ * @intf: the usb interface
++ *
++ * While a USB device cannot perform DMA operations by itself, many USB
++ * controllers can. A call to usb_intf_get_dma_device() returns the DMA endpoint
++ * for the given USB interface, if any. The returned device structure must be
++ * released with put_device().
++ *
++ * See also usb_get_dma_device().
++ *
++ * Returns: A reference to the usb interface's DMA endpoint; or NULL if none
++ *          exists.
++ */
++struct device *usb_intf_get_dma_device(struct usb_interface *intf)
++{
++	struct usb_device *udev = interface_to_usbdev(intf);
++	struct device *dmadev;
++
++	if (!udev->bus)
++		return NULL;
++
++	dmadev = get_device(udev->bus->sysdev);
++	if (!dmadev || !dmadev->dma_mask) {
++		put_device(dmadev);
++		return NULL;
++	}
++
++	return dmadev;
++}
++EXPORT_SYMBOL_GPL(usb_intf_get_dma_device);
++
+ /*			USB device locking
+  *
+  * USB devices and interfaces are locked using the semaphore in their
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index c703d552bbcfc..c00c4fa139b88 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -60,12 +60,14 @@ struct dwc3_acpi_pdata {
+ 	int			dp_hs_phy_irq_index;
+ 	int			dm_hs_phy_irq_index;
+ 	int			ss_phy_irq_index;
++	bool			is_urs;
+ };
+ 
+ struct dwc3_qcom {
+ 	struct device		*dev;
+ 	void __iomem		*qscratch_base;
+ 	struct platform_device	*dwc3;
++	struct platform_device	*urs_usb;
+ 	struct clk		**clks;
+ 	int			num_clocks;
+ 	struct reset_control	*resets;
+@@ -356,8 +358,10 @@ static int dwc3_qcom_suspend(struct dwc3_qcom *qcom)
+ 	if (ret)
+ 		dev_warn(qcom->dev, "failed to disable interconnect: %d\n", ret);
+ 
++	if (device_may_wakeup(qcom->dev))
++		dwc3_qcom_enable_interrupts(qcom);
++
+ 	qcom->is_suspended = true;
+-	dwc3_qcom_enable_interrupts(qcom);
+ 
+ 	return 0;
+ }
+@@ -370,7 +374,8 @@ static int dwc3_qcom_resume(struct dwc3_qcom *qcom)
+ 	if (!qcom->is_suspended)
+ 		return 0;
+ 
+-	dwc3_qcom_disable_interrupts(qcom);
++	if (device_may_wakeup(qcom->dev))
++		dwc3_qcom_disable_interrupts(qcom);
+ 
+ 	for (i = 0; i < qcom->num_clocks; i++) {
+ 		ret = clk_prepare_enable(qcom->clks[i]);
+@@ -429,13 +434,15 @@ static void dwc3_qcom_select_utmi_clk(struct dwc3_qcom *qcom)
+ static int dwc3_qcom_get_irq(struct platform_device *pdev,
+ 			     const char *name, int num)
+ {
++	struct dwc3_qcom *qcom = platform_get_drvdata(pdev);
++	struct platform_device *pdev_irq = qcom->urs_usb ? qcom->urs_usb : pdev;
+ 	struct device_node *np = pdev->dev.of_node;
+ 	int ret;
+ 
+ 	if (np)
+-		ret = platform_get_irq_byname(pdev, name);
++		ret = platform_get_irq_byname(pdev_irq, name);
+ 	else
+-		ret = platform_get_irq(pdev, num);
++		ret = platform_get_irq(pdev_irq, num);
+ 
+ 	return ret;
+ }
+@@ -568,6 +575,8 @@ static int dwc3_qcom_acpi_register_core(struct platform_device *pdev)
+ 	struct dwc3_qcom	*qcom = platform_get_drvdata(pdev);
+ 	struct device		*dev = &pdev->dev;
+ 	struct resource		*res, *child_res = NULL;
++	struct platform_device	*pdev_irq = qcom->urs_usb ? qcom->urs_usb :
++							    pdev;
+ 	int			irq;
+ 	int			ret;
+ 
+@@ -597,7 +606,7 @@ static int dwc3_qcom_acpi_register_core(struct platform_device *pdev)
+ 	child_res[0].end = child_res[0].start +
+ 		qcom->acpi_pdata->dwc3_core_base_size;
+ 
+-	irq = platform_get_irq(pdev, 0);
++	irq = platform_get_irq(pdev_irq, 0);
+ 	child_res[1].flags = IORESOURCE_IRQ;
+ 	child_res[1].start = child_res[1].end = irq;
+ 
+@@ -639,16 +648,46 @@ static int dwc3_qcom_of_register_core(struct platform_device *pdev)
+ 	ret = of_platform_populate(np, NULL, NULL, dev);
+ 	if (ret) {
+ 		dev_err(dev, "failed to register dwc3 core - %d\n", ret);
+-		return ret;
++		goto node_put;
+ 	}
+ 
+ 	qcom->dwc3 = of_find_device_by_node(dwc3_np);
+ 	if (!qcom->dwc3) {
++		ret = -ENODEV;
+ 		dev_err(dev, "failed to get dwc3 platform device\n");
+-		return -ENODEV;
+ 	}
+ 
+-	return 0;
++node_put:
++	of_node_put(dwc3_np);
++
++	return ret;
++}
++
++static struct platform_device *
++dwc3_qcom_create_urs_usb_platdev(struct device *dev)
++{
++	struct fwnode_handle *fwh;
++	struct acpi_device *adev;
++	char name[8];
++	int ret;
++	int id;
++
++	/* Figure out device id */
++	ret = sscanf(fwnode_get_name(dev->fwnode), "URS%d", &id);
++	if (!ret)
++		return NULL;
++
++	/* Find the child using name */
++	snprintf(name, sizeof(name), "USB%d", id);
++	fwh = fwnode_get_named_child_node(dev->fwnode, name);
++	if (!fwh)
++		return NULL;
++
++	adev = to_acpi_device_node(fwh);
++	if (!adev)
++		return NULL;
++
++	return acpi_create_platform_device(adev, NULL);
+ }
+ 
+ static int dwc3_qcom_probe(struct platform_device *pdev)
+@@ -715,6 +754,14 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 			qcom->acpi_pdata->qscratch_base_offset;
+ 		parent_res->end = parent_res->start +
+ 			qcom->acpi_pdata->qscratch_base_size;
++
++		if (qcom->acpi_pdata->is_urs) {
++			qcom->urs_usb = dwc3_qcom_create_urs_usb_platdev(dev);
++			if (!qcom->urs_usb) {
++				dev_err(dev, "failed to create URS USB platdev\n");
++				return -ENODEV;
++			}
++		}
+ 	}
+ 
+ 	qcom->qscratch_base = devm_ioremap_resource(dev, parent_res);
+@@ -877,8 +924,22 @@ static const struct dwc3_acpi_pdata sdm845_acpi_pdata = {
+ 	.ss_phy_irq_index = 2
+ };
+ 
++static const struct dwc3_acpi_pdata sdm845_acpi_urs_pdata = {
++	.qscratch_base_offset = SDM845_QSCRATCH_BASE_OFFSET,
++	.qscratch_base_size = SDM845_QSCRATCH_SIZE,
++	.dwc3_core_base_size = SDM845_DWC3_CORE_SIZE,
++	.hs_phy_irq_index = 1,
++	.dp_hs_phy_irq_index = 4,
++	.dm_hs_phy_irq_index = 3,
++	.ss_phy_irq_index = 2,
++	.is_urs = true,
++};
++
+ static const struct acpi_device_id dwc3_qcom_acpi_match[] = {
+ 	{ "QCOM2430", (unsigned long)&sdm845_acpi_pdata },
++	{ "QCOM0304", (unsigned long)&sdm845_acpi_urs_pdata },
++	{ "QCOM0497", (unsigned long)&sdm845_acpi_urs_pdata },
++	{ "QCOM04A6", (unsigned long)&sdm845_acpi_pdata },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(acpi, dwc3_qcom_acpi_match);
+diff --git a/drivers/usb/gadget/function/f_uac1.c b/drivers/usb/gadget/function/f_uac1.c
+index 00d346965f7a5..560382e0a8f38 100644
+--- a/drivers/usb/gadget/function/f_uac1.c
++++ b/drivers/usb/gadget/function/f_uac1.c
+@@ -499,6 +499,7 @@ static void f_audio_disable(struct usb_function *f)
+ 	uac1->as_out_alt = 0;
+ 	uac1->as_in_alt = 0;
+ 
++	u_audio_stop_playback(&uac1->g_audio);
+ 	u_audio_stop_capture(&uac1->g_audio);
+ }
+ 
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index 5d960b6603b6f..6f03e944e0e31 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -478,7 +478,7 @@ static int set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts,
+ 	}
+ 
+ 	max_size_bw = num_channels(chmask) * ssize *
+-		DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1)));
++		((srate / (factor / (1 << (ep_desc->bInterval - 1)))) + 1);
+ 	ep_desc->wMaxPacketSize = cpu_to_le16(min_t(u16, max_size_bw,
+ 						    max_size_ep));
+ 
+diff --git a/drivers/usb/gadget/function/u_ether_configfs.h b/drivers/usb/gadget/function/u_ether_configfs.h
+index bd92b57030131..f982e18a5a789 100644
+--- a/drivers/usb/gadget/function/u_ether_configfs.h
++++ b/drivers/usb/gadget/function/u_ether_configfs.h
+@@ -169,12 +169,11 @@ out:									\
+ 						size_t len)		\
+ 	{								\
+ 		struct f_##_f_##_opts *opts = to_f_##_f_##_opts(item);	\
+-		int ret;						\
++		int ret = -EINVAL;					\
+ 		u8 val;							\
+ 									\
+ 		mutex_lock(&opts->lock);				\
+-		ret = sscanf(page, "%02hhx", &val);			\
+-		if (ret > 0) {						\
++		if (sscanf(page, "%02hhx", &val) > 0) {			\
+ 			opts->_n_ = val;				\
+ 			ret = len;					\
+ 		}							\
+diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c
+index f1ea51476add0..1d3ebb07ccd4d 100644
+--- a/drivers/usb/gadget/udc/s3c2410_udc.c
++++ b/drivers/usb/gadget/udc/s3c2410_udc.c
+@@ -1773,8 +1773,8 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
+ 	udc_info = dev_get_platdata(&pdev->dev);
+ 
+ 	base_addr = devm_platform_ioremap_resource(pdev, 0);
+-	if (!base_addr) {
+-		retval = -ENOMEM;
++	if (IS_ERR(base_addr)) {
++		retval = PTR_ERR(base_addr);
+ 		goto err_mem;
+ 	}
+ 
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 84da8406d5b42..5bbccc9a0179f 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -66,6 +66,7 @@
+ #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI		0x1142
+ #define PCI_DEVICE_ID_ASMEDIA_1142_XHCI			0x1242
+ #define PCI_DEVICE_ID_ASMEDIA_2142_XHCI			0x2142
++#define PCI_DEVICE_ID_ASMEDIA_3242_XHCI			0x3242
+ 
+ static const char hcd_name[] = "xhci_hcd";
+ 
+@@ -276,11 +277,14 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 		pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
+ 		xhci->quirks |= XHCI_BROKEN_STREAMS;
+ 	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+-		pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI)
++		pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI) {
+ 		xhci->quirks |= XHCI_TRUST_TX_LENGTH;
++		xhci->quirks |= XHCI_NO_64BIT_SUPPORT;
++	}
+ 	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+ 	    (pdev->device == PCI_DEVICE_ID_ASMEDIA_1142_XHCI ||
+-	     pdev->device == PCI_DEVICE_ID_ASMEDIA_2142_XHCI))
++	     pdev->device == PCI_DEVICE_ID_ASMEDIA_2142_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_ASMEDIA_3242_XHCI))
+ 		xhci->quirks |= XHCI_NO_64BIT_SUPPORT;
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+@@ -295,6 +299,11 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	     pdev->device == 0x9026)
+ 		xhci->quirks |= XHCI_RESET_PLL_ON_DISCONNECT;
+ 
++	if (pdev->vendor == PCI_VENDOR_ID_AMD &&
++	    (pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_2 ||
++	     pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_4))
++		xhci->quirks |= XHCI_NO_SOFT_RETRY;
++
+ 	if (xhci->quirks & XHCI_RESET_ON_RESUME)
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
+ 				"QUIRK: Resetting on resume");
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 061d5c51405fb..054840a69eb4a 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -2307,7 +2307,8 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 		remaining	= 0;
+ 		break;
+ 	case COMP_USB_TRANSACTION_ERROR:
+-		if ((ep_ring->err_count++ > MAX_SOFT_RETRY) ||
++		if (xhci->quirks & XHCI_NO_SOFT_RETRY ||
++		    (ep_ring->err_count++ > MAX_SOFT_RETRY) ||
+ 		    le32_to_cpu(slot_ctx->tt_info) & TT_SLOT)
+ 			break;
+ 		*status = 0;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index d17bbb162810a..c449de6164b18 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -883,44 +883,42 @@ static void xhci_clear_command_ring(struct xhci_hcd *xhci)
+ 	xhci_set_cmd_ring_deq(xhci);
+ }
+ 
+-static void xhci_disable_port_wake_on_bits(struct xhci_hcd *xhci)
++/*
++ * Disable port wake bits if do_wakeup is not set.
++ *
++ * Also clear a possible internal port wake state left hanging for ports that
++ * detected termination but never successfully enumerated (trained to 0U).
++ * Internal wake causes immediate xHCI wake after suspend. PORT_CSC write done
++ * at enumeration clears this wake, force one here as well for unconnected ports
++ */
++
++static void xhci_disable_hub_port_wake(struct xhci_hcd *xhci,
++				       struct xhci_hub *rhub,
++				       bool do_wakeup)
+ {
+-	struct xhci_port **ports;
+-	int port_index;
+ 	unsigned long flags;
+ 	u32 t1, t2, portsc;
++	int i;
+ 
+ 	spin_lock_irqsave(&xhci->lock, flags);
+ 
+-	/* disable usb3 ports Wake bits */
+-	port_index = xhci->usb3_rhub.num_ports;
+-	ports = xhci->usb3_rhub.ports;
+-	while (port_index--) {
+-		t1 = readl(ports[port_index]->addr);
+-		portsc = t1;
+-		t1 = xhci_port_state_to_neutral(t1);
+-		t2 = t1 & ~PORT_WAKE_BITS;
+-		if (t1 != t2) {
+-			writel(t2, ports[port_index]->addr);
+-			xhci_dbg(xhci, "disable wake bits port %d-%d, portsc: 0x%x, write: 0x%x\n",
+-				 xhci->usb3_rhub.hcd->self.busnum,
+-				 port_index + 1, portsc, t2);
+-		}
+-	}
++	for (i = 0; i < rhub->num_ports; i++) {
++		portsc = readl(rhub->ports[i]->addr);
++		t1 = xhci_port_state_to_neutral(portsc);
++		t2 = t1;
++
++		/* clear wake bits if do_wake is not set */
++		if (!do_wakeup)
++			t2 &= ~PORT_WAKE_BITS;
++
++		/* Don't touch csc bit if connected or connect change is set */
++		if (!(portsc & (PORT_CSC | PORT_CONNECT)))
++			t2 |= PORT_CSC;
+ 
+-	/* disable usb2 ports Wake bits */
+-	port_index = xhci->usb2_rhub.num_ports;
+-	ports = xhci->usb2_rhub.ports;
+-	while (port_index--) {
+-		t1 = readl(ports[port_index]->addr);
+-		portsc = t1;
+-		t1 = xhci_port_state_to_neutral(t1);
+-		t2 = t1 & ~PORT_WAKE_BITS;
+ 		if (t1 != t2) {
+-			writel(t2, ports[port_index]->addr);
+-			xhci_dbg(xhci, "disable wake bits port %d-%d, portsc: 0x%x, write: 0x%x\n",
+-				 xhci->usb2_rhub.hcd->self.busnum,
+-				 port_index + 1, portsc, t2);
++			writel(t2, rhub->ports[i]->addr);
++			xhci_dbg(xhci, "config port %d-%d wake bits, portsc: 0x%x, write: 0x%x\n",
++				 rhub->hcd->self.busnum, i + 1, portsc, t2);
+ 		}
+ 	}
+ 	spin_unlock_irqrestore(&xhci->lock, flags);
+@@ -983,8 +981,8 @@ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup)
+ 		return -EINVAL;
+ 
+ 	/* Clear root port wake on bits if wakeup not allowed. */
+-	if (!do_wakeup)
+-		xhci_disable_port_wake_on_bits(xhci);
++	xhci_disable_hub_port_wake(xhci, &xhci->usb3_rhub, do_wakeup);
++	xhci_disable_hub_port_wake(xhci, &xhci->usb2_rhub, do_wakeup);
+ 
+ 	if (!HCD_HW_ACCESSIBLE(hcd))
+ 		return 0;
+@@ -1088,6 +1086,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 	struct usb_hcd		*secondary_hcd;
+ 	int			retval = 0;
+ 	bool			comp_timer_running = false;
++	bool			pending_portevent = false;
+ 
+ 	if (!hcd->state)
+ 		return 0;
+@@ -1226,13 +1225,22 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 
+  done:
+ 	if (retval == 0) {
+-		/* Resume root hubs only when have pending events. */
+-		if (xhci_pending_portevent(xhci)) {
++		/*
++		 * Resume roothubs only if there are pending events.
++		 * USB 3 devices resend U3 LFPS wake after a 100ms delay if
++		 * the first wake signalling failed, give it that chance.
++		 */
++		pending_portevent = xhci_pending_portevent(xhci);
++		if (!pending_portevent) {
++			msleep(120);
++			pending_portevent = xhci_pending_portevent(xhci);
++		}
++
++		if (pending_portevent) {
+ 			usb_hcd_resume_root_hub(xhci->shared_hcd);
+ 			usb_hcd_resume_root_hub(hcd);
+ 		}
+ 	}
+-
+ 	/*
+ 	 * If system is subject to the Quirk, Compliance Mode Timer needs to
+ 	 * be re-initialized Always after a system resume. Ports are subject
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 045740ad9c1ec..d01241f1daf3b 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1879,6 +1879,7 @@ struct xhci_hcd {
+ #define XHCI_SKIP_PHY_INIT	BIT_ULL(37)
+ #define XHCI_DISABLE_SPARSE	BIT_ULL(38)
+ #define XHCI_SG_TRB_CACHE_SIZE_QUIRK	BIT_ULL(39)
++#define XHCI_NO_SOFT_RETRY	BIT_ULL(40)
+ 
+ 	unsigned int		num_active_eps;
+ 	unsigned int		limit_active_eps;
+diff --git a/drivers/usb/renesas_usbhs/pipe.c b/drivers/usb/renesas_usbhs/pipe.c
+index e7334b7fb3a62..75fff2e4cbc65 100644
+--- a/drivers/usb/renesas_usbhs/pipe.c
++++ b/drivers/usb/renesas_usbhs/pipe.c
+@@ -746,6 +746,8 @@ struct usbhs_pipe *usbhs_pipe_malloc(struct usbhs_priv *priv,
+ 
+ void usbhs_pipe_free(struct usbhs_pipe *pipe)
+ {
++	usbhsp_pipe_select(pipe);
++	usbhsp_pipe_cfg_set(pipe, 0xFFFF, 0);
+ 	usbhsp_put_pipe(pipe);
+ }
+ 
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index 28deaaec581f6..f26861246f653 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -86,6 +86,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x1a86, 0x7522) },
+ 	{ USB_DEVICE(0x1a86, 0x7523) },
+ 	{ USB_DEVICE(0x4348, 0x5523) },
++	{ USB_DEVICE(0x9986, 0x7523) },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(usb, id_table);
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index bf11f86896837..b5f4e584f3c9e 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -149,6 +149,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x8857) },	/* CEL EM357 ZigBee USB Stick */
+ 	{ USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */
+ 	{ USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */
++	{ USB_DEVICE(0x10C4, 0x88D8) }, /* Acuity Brands nLight Air Adapter */
+ 	{ USB_DEVICE(0x10C4, 0x88FB) }, /* CESINEL MEDCAL STII Network Analyzer */
+ 	{ USB_DEVICE(0x10C4, 0x8938) }, /* CESINEL MEDCAL S II Network Analyzer */
+ 	{ USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */
+@@ -205,6 +206,8 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x1901, 0x0194) },	/* GE Healthcare Remote Alarm Box */
+ 	{ USB_DEVICE(0x1901, 0x0195) },	/* GE B850/B650/B450 CP2104 DP UART interface */
+ 	{ USB_DEVICE(0x1901, 0x0196) },	/* GE B850 CP2105 DP UART interface */
++	{ USB_DEVICE(0x1901, 0x0197) }, /* GE CS1000 Display serial interface */
++	{ USB_DEVICE(0x1901, 0x0198) }, /* GE CS1000 M.2 Key E serial interface */
+ 	{ USB_DEVICE(0x199B, 0xBA30) }, /* LORD WSDA-200-USB */
+ 	{ USB_DEVICE(0x19CF, 0x3000) }, /* Parrot NMEA GPS Flight Recorder */
+ 	{ USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
+diff --git a/drivers/usb/serial/io_edgeport.c b/drivers/usb/serial/io_edgeport.c
+index ba5d8df695189..4b48ef4adbeb6 100644
+--- a/drivers/usb/serial/io_edgeport.c
++++ b/drivers/usb/serial/io_edgeport.c
+@@ -3003,26 +3003,32 @@ static int edge_startup(struct usb_serial *serial)
+ 				response = -ENODEV;
+ 			}
+ 
+-			usb_free_urb(edge_serial->interrupt_read_urb);
+-			kfree(edge_serial->interrupt_in_buffer);
+-
+-			usb_free_urb(edge_serial->read_urb);
+-			kfree(edge_serial->bulk_in_buffer);
+-
+-			kfree(edge_serial);
+-
+-			return response;
++			goto error;
+ 		}
+ 
+ 		/* start interrupt read for this edgeport this interrupt will
+ 		 * continue as long as the edgeport is connected */
+ 		response = usb_submit_urb(edge_serial->interrupt_read_urb,
+ 								GFP_KERNEL);
+-		if (response)
++		if (response) {
+ 			dev_err(ddev, "%s - Error %d submitting control urb\n",
+ 				__func__, response);
++
++			goto error;
++		}
+ 	}
+ 	return response;
++
++error:
++	usb_free_urb(edge_serial->interrupt_read_urb);
++	kfree(edge_serial->interrupt_in_buffer);
++
++	usb_free_urb(edge_serial->read_urb);
++	kfree(edge_serial->bulk_in_buffer);
++
++	kfree(edge_serial);
++
++	return response;
+ }
+ 
+ 
+diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c
+index 2305d425e6c9a..8f1de1fbbeedf 100644
+--- a/drivers/usb/usbip/stub_dev.c
++++ b/drivers/usb/usbip/stub_dev.c
+@@ -46,6 +46,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
+ 	int sockfd = 0;
+ 	struct socket *socket;
+ 	int rv;
++	struct task_struct *tcp_rx = NULL;
++	struct task_struct *tcp_tx = NULL;
+ 
+ 	if (!sdev) {
+ 		dev_err(dev, "sdev is null\n");
+@@ -69,23 +71,47 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
+ 		}
+ 
+ 		socket = sockfd_lookup(sockfd, &err);
+-		if (!socket)
++		if (!socket) {
++			dev_err(dev, "failed to lookup sock");
+ 			goto err;
++		}
+ 
+-		sdev->ud.tcp_socket = socket;
+-		sdev->ud.sockfd = sockfd;
++		if (socket->type != SOCK_STREAM) {
++			dev_err(dev, "Expecting SOCK_STREAM - found %d",
++				socket->type);
++			goto sock_err;
++		}
+ 
++		/* unlock and create threads and get tasks */
+ 		spin_unlock_irq(&sdev->ud.lock);
++		tcp_rx = kthread_create(stub_rx_loop, &sdev->ud, "stub_rx");
++		if (IS_ERR(tcp_rx)) {
++			sockfd_put(socket);
++			return -EINVAL;
++		}
++		tcp_tx = kthread_create(stub_tx_loop, &sdev->ud, "stub_tx");
++		if (IS_ERR(tcp_tx)) {
++			kthread_stop(tcp_rx);
++			sockfd_put(socket);
++			return -EINVAL;
++		}
+ 
+-		sdev->ud.tcp_rx = kthread_get_run(stub_rx_loop, &sdev->ud,
+-						  "stub_rx");
+-		sdev->ud.tcp_tx = kthread_get_run(stub_tx_loop, &sdev->ud,
+-						  "stub_tx");
++		/* get task structs now */
++		get_task_struct(tcp_rx);
++		get_task_struct(tcp_tx);
+ 
++		/* lock and update sdev->ud state */
+ 		spin_lock_irq(&sdev->ud.lock);
++		sdev->ud.tcp_socket = socket;
++		sdev->ud.sockfd = sockfd;
++		sdev->ud.tcp_rx = tcp_rx;
++		sdev->ud.tcp_tx = tcp_tx;
+ 		sdev->ud.status = SDEV_ST_USED;
+ 		spin_unlock_irq(&sdev->ud.lock);
+ 
++		wake_up_process(sdev->ud.tcp_rx);
++		wake_up_process(sdev->ud.tcp_tx);
++
+ 	} else {
+ 		dev_info(dev, "stub down\n");
+ 
+@@ -100,6 +126,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
+ 
+ 	return count;
+ 
++sock_err:
++	sockfd_put(socket);
+ err:
+ 	spin_unlock_irq(&sdev->ud.lock);
+ 	return -EINVAL;
+diff --git a/drivers/usb/usbip/vhci_sysfs.c b/drivers/usb/usbip/vhci_sysfs.c
+index be37aec250c2b..e64ea314930be 100644
+--- a/drivers/usb/usbip/vhci_sysfs.c
++++ b/drivers/usb/usbip/vhci_sysfs.c
+@@ -312,6 +312,8 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
+ 	struct vhci *vhci;
+ 	int err;
+ 	unsigned long flags;
++	struct task_struct *tcp_rx = NULL;
++	struct task_struct *tcp_tx = NULL;
+ 
+ 	/*
+ 	 * @rhport: port number of vhci_hcd
+@@ -349,12 +351,35 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
+ 
+ 	/* Extract socket from fd. */
+ 	socket = sockfd_lookup(sockfd, &err);
+-	if (!socket)
++	if (!socket) {
++		dev_err(dev, "failed to lookup sock");
+ 		return -EINVAL;
++	}
++	if (socket->type != SOCK_STREAM) {
++		dev_err(dev, "Expecting SOCK_STREAM - found %d",
++			socket->type);
++		sockfd_put(socket);
++		return -EINVAL;
++	}
++
++	/* create threads before locking */
++	tcp_rx = kthread_create(vhci_rx_loop, &vdev->ud, "vhci_rx");
++	if (IS_ERR(tcp_rx)) {
++		sockfd_put(socket);
++		return -EINVAL;
++	}
++	tcp_tx = kthread_create(vhci_tx_loop, &vdev->ud, "vhci_tx");
++	if (IS_ERR(tcp_tx)) {
++		kthread_stop(tcp_rx);
++		sockfd_put(socket);
++		return -EINVAL;
++	}
+ 
+-	/* now need lock until setting vdev status as used */
++	/* get task structs now */
++	get_task_struct(tcp_rx);
++	get_task_struct(tcp_tx);
+ 
+-	/* begin a lock */
++	/* now begin lock until setting vdev status set */
+ 	spin_lock_irqsave(&vhci->lock, flags);
+ 	spin_lock(&vdev->ud.lock);
+ 
+@@ -364,6 +389,8 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
+ 		spin_unlock_irqrestore(&vhci->lock, flags);
+ 
+ 		sockfd_put(socket);
++		kthread_stop_put(tcp_rx);
++		kthread_stop_put(tcp_tx);
+ 
+ 		dev_err(dev, "port %d already used\n", rhport);
+ 		/*
+@@ -382,14 +409,16 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
+ 	vdev->speed         = speed;
+ 	vdev->ud.sockfd     = sockfd;
+ 	vdev->ud.tcp_socket = socket;
++	vdev->ud.tcp_rx     = tcp_rx;
++	vdev->ud.tcp_tx     = tcp_tx;
+ 	vdev->ud.status     = VDEV_ST_NOTASSIGNED;
+ 
+ 	spin_unlock(&vdev->ud.lock);
+ 	spin_unlock_irqrestore(&vhci->lock, flags);
+ 	/* end the lock */
+ 
+-	vdev->ud.tcp_rx = kthread_get_run(vhci_rx_loop, &vdev->ud, "vhci_rx");
+-	vdev->ud.tcp_tx = kthread_get_run(vhci_tx_loop, &vdev->ud, "vhci_tx");
++	wake_up_process(vdev->ud.tcp_rx);
++	wake_up_process(vdev->ud.tcp_tx);
+ 
+ 	rh_port_connect(vdev, speed);
+ 
+diff --git a/drivers/usb/usbip/vudc_sysfs.c b/drivers/usb/usbip/vudc_sysfs.c
+index 100f680c572ae..a3ec39fc61778 100644
+--- a/drivers/usb/usbip/vudc_sysfs.c
++++ b/drivers/usb/usbip/vudc_sysfs.c
+@@ -90,8 +90,9 @@ unlock:
+ }
+ static BIN_ATTR_RO(dev_desc, sizeof(struct usb_device_descriptor));
+ 
+-static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *attr,
+-		     const char *in, size_t count)
++static ssize_t usbip_sockfd_store(struct device *dev,
++				  struct device_attribute *attr,
++				  const char *in, size_t count)
+ {
+ 	struct vudc *udc = (struct vudc *) dev_get_drvdata(dev);
+ 	int rv;
+@@ -100,6 +101,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
+ 	struct socket *socket;
+ 	unsigned long flags;
+ 	int ret;
++	struct task_struct *tcp_rx = NULL;
++	struct task_struct *tcp_tx = NULL;
+ 
+ 	rv = kstrtoint(in, 0, &sockfd);
+ 	if (rv != 0)
+@@ -138,24 +141,54 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
+ 			goto unlock_ud;
+ 		}
+ 
+-		udc->ud.tcp_socket = socket;
++		if (socket->type != SOCK_STREAM) {
++			dev_err(dev, "Expecting SOCK_STREAM - found %d",
++				socket->type);
++			ret = -EINVAL;
++			goto sock_err;
++		}
+ 
++		/* unlock and create threads and get tasks */
+ 		spin_unlock_irq(&udc->ud.lock);
+ 		spin_unlock_irqrestore(&udc->lock, flags);
+ 
+-		udc->ud.tcp_rx = kthread_get_run(&v_rx_loop,
+-						    &udc->ud, "vudc_rx");
+-		udc->ud.tcp_tx = kthread_get_run(&v_tx_loop,
+-						    &udc->ud, "vudc_tx");
++		tcp_rx = kthread_create(&v_rx_loop, &udc->ud, "vudc_rx");
++		if (IS_ERR(tcp_rx)) {
++			sockfd_put(socket);
++			return -EINVAL;
++		}
++		tcp_tx = kthread_create(&v_tx_loop, &udc->ud, "vudc_tx");
++		if (IS_ERR(tcp_tx)) {
++			kthread_stop(tcp_rx);
++			sockfd_put(socket);
++			return -EINVAL;
++		}
++
++		/* get task structs now */
++		get_task_struct(tcp_rx);
++		get_task_struct(tcp_tx);
+ 
++		/* lock and update udc->ud state */
+ 		spin_lock_irqsave(&udc->lock, flags);
+ 		spin_lock_irq(&udc->ud.lock);
++
++		udc->ud.tcp_socket = socket;
++		udc->ud.tcp_rx = tcp_rx;
++		udc->ud.tcp_rx = tcp_tx;
+ 		udc->ud.status = SDEV_ST_USED;
++
+ 		spin_unlock_irq(&udc->ud.lock);
+ 
+ 		ktime_get_ts64(&udc->start_time);
+ 		v_start_timer(udc);
+ 		udc->connected = 1;
++
++		spin_unlock_irqrestore(&udc->lock, flags);
++
++		wake_up_process(udc->ud.tcp_rx);
++		wake_up_process(udc->ud.tcp_tx);
++		return count;
++
+ 	} else {
+ 		if (!udc->connected) {
+ 			dev_err(dev, "Device not connected");
+@@ -177,6 +210,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
+ 
+ 	return count;
+ 
++sock_err:
++	sockfd_put(socket);
+ unlock_ud:
+ 	spin_unlock_irq(&udc->ud.lock);
+ unlock:
+diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
+index da87f3a1e351b..b8f2f971c2f0f 100644
+--- a/drivers/xen/events/events_2l.c
++++ b/drivers/xen/events/events_2l.c
+@@ -47,6 +47,11 @@ static unsigned evtchn_2l_max_channels(void)
+ 	return EVTCHN_2L_NR_CHANNELS;
+ }
+ 
++static void evtchn_2l_remove(evtchn_port_t evtchn, unsigned int cpu)
++{
++	clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
++}
++
+ static void evtchn_2l_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
+ 				  unsigned int old_cpu)
+ {
+@@ -72,12 +77,6 @@ static bool evtchn_2l_is_pending(evtchn_port_t port)
+ 	return sync_test_bit(port, BM(&s->evtchn_pending[0]));
+ }
+ 
+-static bool evtchn_2l_test_and_set_mask(evtchn_port_t port)
+-{
+-	struct shared_info *s = HYPERVISOR_shared_info;
+-	return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0]));
+-}
+-
+ static void evtchn_2l_mask(evtchn_port_t port)
+ {
+ 	struct shared_info *s = HYPERVISOR_shared_info;
+@@ -355,18 +354,27 @@ static void evtchn_2l_resume(void)
+ 				EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
+ }
+ 
++static int evtchn_2l_percpu_deinit(unsigned int cpu)
++{
++	memset(per_cpu(cpu_evtchn_mask, cpu), 0, sizeof(xen_ulong_t) *
++			EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
++
++	return 0;
++}
++
+ static const struct evtchn_ops evtchn_ops_2l = {
+ 	.max_channels      = evtchn_2l_max_channels,
+ 	.nr_channels       = evtchn_2l_max_channels,
++	.remove            = evtchn_2l_remove,
+ 	.bind_to_cpu       = evtchn_2l_bind_to_cpu,
+ 	.clear_pending     = evtchn_2l_clear_pending,
+ 	.set_pending       = evtchn_2l_set_pending,
+ 	.is_pending        = evtchn_2l_is_pending,
+-	.test_and_set_mask = evtchn_2l_test_and_set_mask,
+ 	.mask              = evtchn_2l_mask,
+ 	.unmask            = evtchn_2l_unmask,
+ 	.handle_events     = evtchn_2l_handle_events,
+ 	.resume	           = evtchn_2l_resume,
++	.percpu_deinit     = evtchn_2l_percpu_deinit,
+ };
+ 
+ void __init xen_evtchn_2l_init(void)
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index bbebe248b7264..7bd03f6e0422f 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -96,13 +96,19 @@ struct irq_info {
+ 	struct list_head eoi_list;
+ 	short refcnt;
+ 	short spurious_cnt;
+-	enum xen_irq_type type; /* type */
++	short type;             /* type */
++	u8 mask_reason;         /* Why is event channel masked */
++#define EVT_MASK_REASON_EXPLICIT	0x01
++#define EVT_MASK_REASON_TEMPORARY	0x02
++#define EVT_MASK_REASON_EOI_PENDING	0x04
++	u8 is_active;		/* Is event just being handled? */
+ 	unsigned irq;
+ 	evtchn_port_t evtchn;   /* event channel */
+ 	unsigned short cpu;     /* cpu bound */
+ 	unsigned short eoi_cpu; /* EOI must happen on this cpu-1 */
+ 	unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
+ 	u64 eoi_time;           /* Time in jiffies when to EOI. */
++	spinlock_t lock;
+ 
+ 	union {
+ 		unsigned short virq;
+@@ -151,6 +157,7 @@ static DEFINE_RWLOCK(evtchn_rwlock);
+  *   evtchn_rwlock
+  *     IRQ-desc lock
+  *       percpu eoi_list_lock
++ *         irq_info->lock
+  */
+ 
+ static LIST_HEAD(xen_irq_list_head);
+@@ -272,6 +279,8 @@ static int xen_irq_info_common_setup(struct irq_info *info,
+ 	info->irq = irq;
+ 	info->evtchn = evtchn;
+ 	info->cpu = cpu;
++	info->mask_reason = EVT_MASK_REASON_EXPLICIT;
++	spin_lock_init(&info->lock);
+ 
+ 	ret = set_evtchn_to_irq(evtchn, irq);
+ 	if (ret < 0)
+@@ -338,6 +347,7 @@ static int xen_irq_info_pirq_setup(unsigned irq,
+ static void xen_irq_info_cleanup(struct irq_info *info)
+ {
+ 	set_evtchn_to_irq(info->evtchn, -1);
++	xen_evtchn_port_remove(info->evtchn, info->cpu);
+ 	info->evtchn = 0;
+ }
+ 
+@@ -418,6 +428,34 @@ unsigned int cpu_from_evtchn(evtchn_port_t evtchn)
+ 	return ret;
+ }
+ 
++static void do_mask(struct irq_info *info, u8 reason)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&info->lock, flags);
++
++	if (!info->mask_reason)
++		mask_evtchn(info->evtchn);
++
++	info->mask_reason |= reason;
++
++	spin_unlock_irqrestore(&info->lock, flags);
++}
++
++static void do_unmask(struct irq_info *info, u8 reason)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&info->lock, flags);
++
++	info->mask_reason &= ~reason;
++
++	if (!info->mask_reason)
++		unmask_evtchn(info->evtchn);
++
++	spin_unlock_irqrestore(&info->lock, flags);
++}
++
+ #ifdef CONFIG_X86
+ static bool pirq_check_eoi_map(unsigned irq)
+ {
+@@ -545,7 +583,7 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
+ 	}
+ 
+ 	info->eoi_time = 0;
+-	unmask_evtchn(evtchn);
++	do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
+ }
+ 
+ static void xen_irq_lateeoi_worker(struct work_struct *work)
+@@ -714,6 +752,12 @@ static void xen_evtchn_close(evtchn_port_t port)
+ 		BUG();
+ }
+ 
++static void event_handler_exit(struct irq_info *info)
++{
++	smp_store_release(&info->is_active, 0);
++	clear_evtchn(info->evtchn);
++}
++
+ static void pirq_query_unmask(int irq)
+ {
+ 	struct physdev_irq_status_query irq_status;
+@@ -732,7 +776,8 @@ static void pirq_query_unmask(int irq)
+ 
+ static void eoi_pirq(struct irq_data *data)
+ {
+-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
++	struct irq_info *info = info_for_irq(data->irq);
++	evtchn_port_t evtchn = info ? info->evtchn : 0;
+ 	struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
+ 	int rc = 0;
+ 
+@@ -741,16 +786,15 @@ static void eoi_pirq(struct irq_data *data)
+ 
+ 	if (unlikely(irqd_is_setaffinity_pending(data)) &&
+ 	    likely(!irqd_irq_disabled(data))) {
+-		int masked = test_and_set_mask(evtchn);
++		do_mask(info, EVT_MASK_REASON_TEMPORARY);
+ 
+-		clear_evtchn(evtchn);
++		event_handler_exit(info);
+ 
+ 		irq_move_masked_irq(data);
+ 
+-		if (!masked)
+-			unmask_evtchn(evtchn);
++		do_unmask(info, EVT_MASK_REASON_TEMPORARY);
+ 	} else
+-		clear_evtchn(evtchn);
++		event_handler_exit(info);
+ 
+ 	if (pirq_needs_eoi(data->irq)) {
+ 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
+@@ -801,7 +845,8 @@ static unsigned int __startup_pirq(unsigned int irq)
+ 		goto err;
+ 
+ out:
+-	unmask_evtchn(evtchn);
++	do_unmask(info, EVT_MASK_REASON_EXPLICIT);
++
+ 	eoi_pirq(irq_get_irq_data(irq));
+ 
+ 	return 0;
+@@ -828,7 +873,7 @@ static void shutdown_pirq(struct irq_data *data)
+ 	if (!VALID_EVTCHN(evtchn))
+ 		return;
+ 
+-	mask_evtchn(evtchn);
++	do_mask(info, EVT_MASK_REASON_EXPLICIT);
+ 	xen_evtchn_close(evtchn);
+ 	xen_irq_info_cleanup(info);
+ }
+@@ -1565,6 +1610,8 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
+ 	}
+ 
+ 	info = info_for_irq(irq);
++	if (xchg_acquire(&info->is_active, 1))
++		return;
+ 
+ 	if (ctrl->defer_eoi) {
+ 		info->eoi_cpu = smp_processor_id();
+@@ -1655,10 +1702,10 @@ void rebind_evtchn_irq(evtchn_port_t evtchn, int irq)
+ }
+ 
+ /* Rebind an evtchn so that it gets delivered to a specific cpu */
+-static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
++static int xen_rebind_evtchn_to_cpu(struct irq_info *info, unsigned int tcpu)
+ {
+ 	struct evtchn_bind_vcpu bind_vcpu;
+-	int masked;
++	evtchn_port_t evtchn = info ? info->evtchn : 0;
+ 
+ 	if (!VALID_EVTCHN(evtchn))
+ 		return -1;
+@@ -1674,7 +1721,7 @@ static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
+ 	 * Mask the event while changing the VCPU binding to prevent
+ 	 * it being delivered on an unexpected VCPU.
+ 	 */
+-	masked = test_and_set_mask(evtchn);
++	do_mask(info, EVT_MASK_REASON_TEMPORARY);
+ 
+ 	/*
+ 	 * If this fails, it usually just indicates that we're dealing with a
+@@ -1684,8 +1731,7 @@ static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
+ 	if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu) >= 0)
+ 		bind_evtchn_to_cpu(evtchn, tcpu);
+ 
+-	if (!masked)
+-		unmask_evtchn(evtchn);
++	do_unmask(info, EVT_MASK_REASON_TEMPORARY);
+ 
+ 	return 0;
+ }
+@@ -1694,7 +1740,7 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
+ 			    bool force)
+ {
+ 	unsigned tcpu = cpumask_first_and(dest, cpu_online_mask);
+-	int ret = xen_rebind_evtchn_to_cpu(evtchn_from_irq(data->irq), tcpu);
++	int ret = xen_rebind_evtchn_to_cpu(info_for_irq(data->irq), tcpu);
+ 
+ 	if (!ret)
+ 		irq_data_update_effective_affinity(data, cpumask_of(tcpu));
+@@ -1713,39 +1759,41 @@ EXPORT_SYMBOL_GPL(xen_set_affinity_evtchn);
+ 
+ static void enable_dynirq(struct irq_data *data)
+ {
+-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
++	struct irq_info *info = info_for_irq(data->irq);
++	evtchn_port_t evtchn = info ? info->evtchn : 0;
+ 
+ 	if (VALID_EVTCHN(evtchn))
+-		unmask_evtchn(evtchn);
++		do_unmask(info, EVT_MASK_REASON_EXPLICIT);
+ }
+ 
+ static void disable_dynirq(struct irq_data *data)
+ {
+-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
++	struct irq_info *info = info_for_irq(data->irq);
++	evtchn_port_t evtchn = info ? info->evtchn : 0;
+ 
+ 	if (VALID_EVTCHN(evtchn))
+-		mask_evtchn(evtchn);
++		do_mask(info, EVT_MASK_REASON_EXPLICIT);
+ }
+ 
+ static void ack_dynirq(struct irq_data *data)
+ {
+-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
++	struct irq_info *info = info_for_irq(data->irq);
++	evtchn_port_t evtchn = info ? info->evtchn : 0;
+ 
+ 	if (!VALID_EVTCHN(evtchn))
+ 		return;
+ 
+ 	if (unlikely(irqd_is_setaffinity_pending(data)) &&
+ 	    likely(!irqd_irq_disabled(data))) {
+-		int masked = test_and_set_mask(evtchn);
++		do_mask(info, EVT_MASK_REASON_TEMPORARY);
+ 
+-		clear_evtchn(evtchn);
++		event_handler_exit(info);
+ 
+ 		irq_move_masked_irq(data);
+ 
+-		if (!masked)
+-			unmask_evtchn(evtchn);
++		do_unmask(info, EVT_MASK_REASON_TEMPORARY);
+ 	} else
+-		clear_evtchn(evtchn);
++		event_handler_exit(info);
+ }
+ 
+ static void mask_ack_dynirq(struct irq_data *data)
+@@ -1754,18 +1802,39 @@ static void mask_ack_dynirq(struct irq_data *data)
+ 	ack_dynirq(data);
+ }
+ 
++static void lateeoi_ack_dynirq(struct irq_data *data)
++{
++	struct irq_info *info = info_for_irq(data->irq);
++	evtchn_port_t evtchn = info ? info->evtchn : 0;
++
++	if (VALID_EVTCHN(evtchn)) {
++		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
++		event_handler_exit(info);
++	}
++}
++
++static void lateeoi_mask_ack_dynirq(struct irq_data *data)
++{
++	struct irq_info *info = info_for_irq(data->irq);
++	evtchn_port_t evtchn = info ? info->evtchn : 0;
++
++	if (VALID_EVTCHN(evtchn)) {
++		do_mask(info, EVT_MASK_REASON_EXPLICIT);
++		event_handler_exit(info);
++	}
++}
++
+ static int retrigger_dynirq(struct irq_data *data)
+ {
+-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+-	int masked;
++	struct irq_info *info = info_for_irq(data->irq);
++	evtchn_port_t evtchn = info ? info->evtchn : 0;
+ 
+ 	if (!VALID_EVTCHN(evtchn))
+ 		return 0;
+ 
+-	masked = test_and_set_mask(evtchn);
++	do_mask(info, EVT_MASK_REASON_TEMPORARY);
+ 	set_evtchn(evtchn);
+-	if (!masked)
+-		unmask_evtchn(evtchn);
++	do_unmask(info, EVT_MASK_REASON_TEMPORARY);
+ 
+ 	return 1;
+ }
+@@ -1862,10 +1931,11 @@ static void restore_cpu_ipis(unsigned int cpu)
+ /* Clear an irq's pending state, in preparation for polling on it */
+ void xen_clear_irq_pending(int irq)
+ {
+-	evtchn_port_t evtchn = evtchn_from_irq(irq);
++	struct irq_info *info = info_for_irq(irq);
++	evtchn_port_t evtchn = info ? info->evtchn : 0;
+ 
+ 	if (VALID_EVTCHN(evtchn))
+-		clear_evtchn(evtchn);
++		event_handler_exit(info);
+ }
+ EXPORT_SYMBOL(xen_clear_irq_pending);
+ void xen_set_irq_pending(int irq)
+@@ -1973,8 +2043,8 @@ static struct irq_chip xen_lateeoi_chip __read_mostly = {
+ 	.irq_mask		= disable_dynirq,
+ 	.irq_unmask		= enable_dynirq,
+ 
+-	.irq_ack		= mask_ack_dynirq,
+-	.irq_mask_ack		= mask_ack_dynirq,
++	.irq_ack		= lateeoi_ack_dynirq,
++	.irq_mask_ack		= lateeoi_mask_ack_dynirq,
+ 
+ 	.irq_set_affinity	= set_affinity_irq,
+ 	.irq_retrigger		= retrigger_dynirq,
+diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
+index b234f1766810c..ad9fe51d3fb33 100644
+--- a/drivers/xen/events/events_fifo.c
++++ b/drivers/xen/events/events_fifo.c
+@@ -209,12 +209,6 @@ static bool evtchn_fifo_is_pending(evtchn_port_t port)
+ 	return sync_test_bit(EVTCHN_FIFO_BIT(PENDING, word), BM(word));
+ }
+ 
+-static bool evtchn_fifo_test_and_set_mask(evtchn_port_t port)
+-{
+-	event_word_t *word = event_word_from_port(port);
+-	return sync_test_and_set_bit(EVTCHN_FIFO_BIT(MASKED, word), BM(word));
+-}
+-
+ static void evtchn_fifo_mask(evtchn_port_t port)
+ {
+ 	event_word_t *word = event_word_from_port(port);
+@@ -423,7 +417,6 @@ static const struct evtchn_ops evtchn_ops_fifo = {
+ 	.clear_pending     = evtchn_fifo_clear_pending,
+ 	.set_pending       = evtchn_fifo_set_pending,
+ 	.is_pending        = evtchn_fifo_is_pending,
+-	.test_and_set_mask = evtchn_fifo_test_and_set_mask,
+ 	.mask              = evtchn_fifo_mask,
+ 	.unmask            = evtchn_fifo_unmask,
+ 	.handle_events     = evtchn_fifo_handle_events,
+diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
+index 0a97c0549db76..4d3398eff9cdf 100644
+--- a/drivers/xen/events/events_internal.h
++++ b/drivers/xen/events/events_internal.h
+@@ -14,13 +14,13 @@ struct evtchn_ops {
+ 	unsigned (*nr_channels)(void);
+ 
+ 	int (*setup)(evtchn_port_t port);
++	void (*remove)(evtchn_port_t port, unsigned int cpu);
+ 	void (*bind_to_cpu)(evtchn_port_t evtchn, unsigned int cpu,
+ 			    unsigned int old_cpu);
+ 
+ 	void (*clear_pending)(evtchn_port_t port);
+ 	void (*set_pending)(evtchn_port_t port);
+ 	bool (*is_pending)(evtchn_port_t port);
+-	bool (*test_and_set_mask)(evtchn_port_t port);
+ 	void (*mask)(evtchn_port_t port);
+ 	void (*unmask)(evtchn_port_t port);
+ 
+@@ -54,6 +54,13 @@ static inline int xen_evtchn_port_setup(evtchn_port_t evtchn)
+ 	return 0;
+ }
+ 
++static inline void xen_evtchn_port_remove(evtchn_port_t evtchn,
++					  unsigned int cpu)
++{
++	if (evtchn_ops->remove)
++		evtchn_ops->remove(evtchn, cpu);
++}
++
+ static inline void xen_evtchn_port_bind_to_cpu(evtchn_port_t evtchn,
+ 					       unsigned int cpu,
+ 					       unsigned int old_cpu)
+@@ -76,11 +83,6 @@ static inline bool test_evtchn(evtchn_port_t port)
+ 	return evtchn_ops->is_pending(port);
+ }
+ 
+-static inline bool test_and_set_mask(evtchn_port_t port)
+-{
+-	return evtchn_ops->test_and_set_mask(port);
+-}
+-
+ static inline void mask_evtchn(evtchn_port_t port)
+ {
+ 	return evtchn_ops->mask(port);
+diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
+index 3880a82da1dc5..11b5bf2419555 100644
+--- a/fs/binfmt_misc.c
++++ b/fs/binfmt_misc.c
+@@ -647,12 +647,24 @@ static ssize_t bm_register_write(struct file *file, const char __user *buffer,
+ 	struct super_block *sb = file_inode(file)->i_sb;
+ 	struct dentry *root = sb->s_root, *dentry;
+ 	int err = 0;
++	struct file *f = NULL;
+ 
+ 	e = create_entry(buffer, count);
+ 
+ 	if (IS_ERR(e))
+ 		return PTR_ERR(e);
+ 
++	if (e->flags & MISC_FMT_OPEN_FILE) {
++		f = open_exec(e->interpreter);
++		if (IS_ERR(f)) {
++			pr_notice("register: failed to install interpreter file %s\n",
++				 e->interpreter);
++			kfree(e);
++			return PTR_ERR(f);
++		}
++		e->interp_file = f;
++	}
++
+ 	inode_lock(d_inode(root));
+ 	dentry = lookup_one_len(e->name, root, strlen(e->name));
+ 	err = PTR_ERR(dentry);
+@@ -676,21 +688,6 @@ static ssize_t bm_register_write(struct file *file, const char __user *buffer,
+ 		goto out2;
+ 	}
+ 
+-	if (e->flags & MISC_FMT_OPEN_FILE) {
+-		struct file *f;
+-
+-		f = open_exec(e->interpreter);
+-		if (IS_ERR(f)) {
+-			err = PTR_ERR(f);
+-			pr_notice("register: failed to install interpreter file %s\n", e->interpreter);
+-			simple_release_fs(&bm_mnt, &entry_count);
+-			iput(inode);
+-			inode = NULL;
+-			goto out2;
+-		}
+-		e->interp_file = f;
+-	}
+-
+ 	e->dentry = dget(dentry);
+ 	inode->i_private = e;
+ 	inode->i_fop = &bm_entry_operations;
+@@ -707,6 +704,8 @@ out:
+ 	inode_unlock(d_inode(root));
+ 
+ 	if (err) {
++		if (f)
++			filp_close(f, NULL);
+ 		kfree(e);
+ 		return err;
+ 	}
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 2ea189c1b4ffe..fe201b757baa4 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -123,12 +123,21 @@ int truncate_bdev_range(struct block_device *bdev, fmode_t mode,
+ 		err = bd_prepare_to_claim(bdev, claimed_bdev,
+ 					  truncate_bdev_range);
+ 		if (err)
+-			return err;
++			goto invalidate;
+ 	}
+ 	truncate_inode_pages_range(bdev->bd_inode->i_mapping, lstart, lend);
+ 	if (claimed_bdev)
+ 		bd_abort_claiming(bdev, claimed_bdev, truncate_bdev_range);
+ 	return 0;
++
++invalidate:
++	/*
++	 * Someone else has handle exclusively open. Try invalidating instead.
++	 * The 'end' argument is inclusive so the rounding is safe.
++	 */
++	return invalidate_inode_pages2_range(bdev->bd_inode->i_mapping,
++					     lstart >> PAGE_SHIFT,
++					     lend >> PAGE_SHIFT);
+ }
+ EXPORT_SYMBOL(truncate_bdev_range);
+ 
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 472cb7777e3e9..f0ed29a9a6f11 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -286,7 +286,7 @@ cifs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 		rc = server->ops->queryfs(xid, tcon, cifs_sb, buf);
+ 
+ 	free_xid(xid);
+-	return 0;
++	return rc;
+ }
+ 
+ static long cifs_fallocate(struct file *file, int mode, loff_t off, loff_t len)
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 484ec2d8c5c95..3295516af2aec 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -256,7 +256,7 @@ struct smb_version_operations {
+ 	/* verify the message */
+ 	int (*check_message)(char *, unsigned int, struct TCP_Server_Info *);
+ 	bool (*is_oplock_break)(char *, struct TCP_Server_Info *);
+-	int (*handle_cancelled_mid)(char *, struct TCP_Server_Info *);
++	int (*handle_cancelled_mid)(struct mid_q_entry *, struct TCP_Server_Info *);
+ 	void (*downgrade_oplock)(struct TCP_Server_Info *server,
+ 				 struct cifsInodeInfo *cinode, __u32 oplock,
+ 				 unsigned int epoch, bool *purge_cache);
+@@ -1785,10 +1785,11 @@ static inline bool is_retryable_error(int error)
+ #define   CIFS_NO_RSP_BUF   0x040    /* no response buffer required */
+ 
+ /* Type of request operation */
+-#define   CIFS_ECHO_OP      0x080    /* echo request */
+-#define   CIFS_OBREAK_OP   0x0100    /* oplock break request */
+-#define   CIFS_NEG_OP      0x0200    /* negotiate request */
+-#define   CIFS_OP_MASK     0x0380    /* mask request type */
++#define   CIFS_ECHO_OP            0x080  /* echo request */
++#define   CIFS_OBREAK_OP          0x0100 /* oplock break request */
++#define   CIFS_NEG_OP             0x0200 /* negotiate request */
++#define   CIFS_CP_CREATE_CLOSE_OP 0x0400 /* compound create+close request */
++#define   CIFS_OP_MASK            0x0780 /* mask request type */
+ 
+ #define   CIFS_HAS_CREDITS 0x0400    /* already has credits */
+ #define   CIFS_TRANSFORM_REQ 0x0800    /* transform request before sending */
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index ad3ecda1314d9..fa359f473e3db 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -2629,6 +2629,11 @@ smbd_connected:
+ 	tcp_ses->min_offload = volume_info->min_offload;
+ 	tcp_ses->tcpStatus = CifsNeedNegotiate;
+ 
++	if ((volume_info->max_credits < 20) || (volume_info->max_credits > 60000))
++		tcp_ses->max_credits = SMB2_MAX_CREDITS_AVAILABLE;
++	else
++		tcp_ses->max_credits = volume_info->max_credits;
++
+ 	tcp_ses->nr_targets = 1;
+ 	tcp_ses->ignore_signature = volume_info->ignore_signature;
+ 	/* thread spawned, put it on the list */
+@@ -4077,11 +4082,6 @@ static int mount_get_conns(struct smb_vol *vol, struct cifs_sb_info *cifs_sb,
+ 
+ 	*nserver = server;
+ 
+-	if ((vol->max_credits < 20) || (vol->max_credits > 60000))
+-		server->max_credits = SMB2_MAX_CREDITS_AVAILABLE;
+-	else
+-		server->max_credits = vol->max_credits;
+-
+ 	/* get a reference to a SMB session */
+ 	ses = cifs_get_smb_ses(server, vol);
+ 	if (IS_ERR(ses)) {
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index de564368a887c..c2fe85ca2ded3 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -224,6 +224,7 @@ cifs_ses_add_channel(struct cifs_ses *ses, struct cifs_server_iface *iface)
+ 	vol.noautotune = ses->server->noautotune;
+ 	vol.sockopt_tcp_nodelay = ses->server->tcp_nodelay;
+ 	vol.echo_interval = ses->server->echo_interval / HZ;
++	vol.max_credits = ses->server->max_credits;
+ 
+ 	/*
+ 	 * This will be used for encoding/decoding user/domain/pw
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index 1f900b81c34ae..a718dc77e604e 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -358,6 +358,7 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 	if (cfile)
+ 		goto after_close;
+ 	/* Close */
++	flags |= CIFS_CP_CREATE_CLOSE_OP;
+ 	rqst[num_rqst].rq_iov = &vars->close_iov[0];
+ 	rqst[num_rqst].rq_nvec = 1;
+ 	rc = SMB2_close_init(tcon, server,
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 2da6b41cb5526..db22d686c61ff 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -835,14 +835,14 @@ smb2_handle_cancelled_close(struct cifs_tcon *tcon, __u64 persistent_fid,
+ }
+ 
+ int
+-smb2_handle_cancelled_mid(char *buffer, struct TCP_Server_Info *server)
++smb2_handle_cancelled_mid(struct mid_q_entry *mid, struct TCP_Server_Info *server)
+ {
+-	struct smb2_sync_hdr *sync_hdr = (struct smb2_sync_hdr *)buffer;
+-	struct smb2_create_rsp *rsp = (struct smb2_create_rsp *)buffer;
++	struct smb2_sync_hdr *sync_hdr = mid->resp_buf;
++	struct smb2_create_rsp *rsp = mid->resp_buf;
+ 	struct cifs_tcon *tcon;
+ 	int rc;
+ 
+-	if (sync_hdr->Command != SMB2_CREATE ||
++	if ((mid->optype & CIFS_CP_CREATE_CLOSE_OP) || sync_hdr->Command != SMB2_CREATE ||
+ 	    sync_hdr->Status != STATUS_SUCCESS)
+ 		return 0;
+ 
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 22f1d8dc12b00..02998c79bb907 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1137,7 +1137,7 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+ 	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+ 	__le16 *utf16_path = NULL;
+ 	int ea_name_len = strlen(ea_name);
+-	int flags = 0;
++	int flags = CIFS_CP_CREATE_CLOSE_OP;
+ 	int len;
+ 	struct smb_rqst rqst[3];
+ 	int resp_buftype[3];
+@@ -1515,7 +1515,7 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 	struct smb_query_info qi;
+ 	struct smb_query_info __user *pqi;
+ 	int rc = 0;
+-	int flags = 0;
++	int flags = CIFS_CP_CREATE_CLOSE_OP;
+ 	struct smb2_query_info_rsp *qi_rsp = NULL;
+ 	struct smb2_ioctl_rsp *io_rsp = NULL;
+ 	void *buffer = NULL;
+@@ -2482,7 +2482,7 @@ smb2_query_info_compound(const unsigned int xid, struct cifs_tcon *tcon,
+ {
+ 	struct cifs_ses *ses = tcon->ses;
+ 	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+-	int flags = 0;
++	int flags = CIFS_CP_CREATE_CLOSE_OP;
+ 	struct smb_rqst rqst[3];
+ 	int resp_buftype[3];
+ 	struct kvec rsp_iov[3];
+@@ -2880,7 +2880,7 @@ smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
+ 	unsigned int sub_offset;
+ 	unsigned int print_len;
+ 	unsigned int print_offset;
+-	int flags = 0;
++	int flags = CIFS_CP_CREATE_CLOSE_OP;
+ 	struct smb_rqst rqst[3];
+ 	int resp_buftype[3];
+ 	struct kvec rsp_iov[3];
+@@ -3062,7 +3062,7 @@ smb2_query_reparse_tag(const unsigned int xid, struct cifs_tcon *tcon,
+ 	struct cifs_open_parms oparms;
+ 	struct cifs_fid fid;
+ 	struct TCP_Server_Info *server = cifs_pick_channel(tcon->ses);
+-	int flags = 0;
++	int flags = CIFS_CP_CREATE_CLOSE_OP;
+ 	struct smb_rqst rqst[3];
+ 	int resp_buftype[3];
+ 	struct kvec rsp_iov[3];
+diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h
+index d4110447ee3a8..4eb0ca84355a6 100644
+--- a/fs/cifs/smb2proto.h
++++ b/fs/cifs/smb2proto.h
+@@ -246,8 +246,7 @@ extern int SMB2_oplock_break(const unsigned int xid, struct cifs_tcon *tcon,
+ extern int smb2_handle_cancelled_close(struct cifs_tcon *tcon,
+ 				       __u64 persistent_fid,
+ 				       __u64 volatile_fid);
+-extern int smb2_handle_cancelled_mid(char *buffer,
+-					struct TCP_Server_Info *server);
++extern int smb2_handle_cancelled_mid(struct mid_q_entry *mid, struct TCP_Server_Info *server);
+ void smb2_cancelled_close_fid(struct work_struct *work);
+ extern int SMB2_QFS_info(const unsigned int xid, struct cifs_tcon *tcon,
+ 			 u64 persistent_file_id, u64 volatile_file_id,
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 9391cd17a2b55..0b9f1a0cba1a3 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -101,7 +101,7 @@ static void _cifs_mid_q_entry_release(struct kref *refcount)
+ 	if (midEntry->resp_buf && (midEntry->mid_flags & MID_WAIT_CANCELLED) &&
+ 	    midEntry->mid_state == MID_RESPONSE_RECEIVED &&
+ 	    server->ops->handle_cancelled_mid)
+-		server->ops->handle_cancelled_mid(midEntry->resp_buf, server);
++		server->ops->handle_cancelled_mid(midEntry, server);
+ 
+ 	midEntry->mid_state = MID_FREE;
+ 	atomic_dec(&midCount);
+diff --git a/fs/configfs/file.c b/fs/configfs/file.c
+index 1f0270229d7b7..da8351d1e4552 100644
+--- a/fs/configfs/file.c
++++ b/fs/configfs/file.c
+@@ -378,7 +378,7 @@ static int __configfs_open_file(struct inode *inode, struct file *file, int type
+ 
+ 	attr = to_attr(dentry);
+ 	if (!attr)
+-		goto out_put_item;
++		goto out_free_buffer;
+ 
+ 	if (type & CONFIGFS_ITEM_BIN_ATTR) {
+ 		buffer->bin_attr = to_bin_attr(dentry);
+@@ -391,7 +391,7 @@ static int __configfs_open_file(struct inode *inode, struct file *file, int type
+ 	/* Grab the module reference for this attribute if we have one */
+ 	error = -ENODEV;
+ 	if (!try_module_get(buffer->owner))
+-		goto out_put_item;
++		goto out_free_buffer;
+ 
+ 	error = -EACCES;
+ 	if (!buffer->item->ci_type)
+@@ -435,8 +435,6 @@ static int __configfs_open_file(struct inode *inode, struct file *file, int type
+ 
+ out_put_module:
+ 	module_put(buffer->owner);
+-out_put_item:
+-	config_item_put(buffer->item);
+ out_free_buffer:
+ 	up_read(&frag->frag_sem);
+ 	kfree(buffer);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index ea5aefa23a20a..e30bf8f342c2a 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -4876,7 +4876,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	set_task_ioprio(sbi->s_journal->j_task, journal_ioprio);
+ 
+-	sbi->s_journal->j_commit_callback = ext4_journal_commit_callback;
+ 	sbi->s_journal->j_submit_inode_data_buffers =
+ 		ext4_journal_submit_inode_data_buffers;
+ 	sbi->s_journal->j_finish_inode_data_buffers =
+@@ -4993,6 +4992,14 @@ no_journal:
+ 		goto failed_mount5;
+ 	}
+ 
++	/*
++	 * We can only set up the journal commit callback once
++	 * mballoc is initialized
++	 */
++	if (sbi->s_journal)
++		sbi->s_journal->j_commit_callback =
++			ext4_journal_commit_callback;
++
+ 	block = ext4_count_free_clusters(sb);
+ 	ext4_free_blocks_count_set(sbi->s_es, 
+ 				   EXT4_C2B(sbi, block));
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 4e011adaf9670..c837675cd395a 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1202,6 +1202,15 @@ out_force:
+ 	goto out;
+ }
+ 
++static void nfs_mark_dir_for_revalidate(struct inode *inode)
++{
++	struct nfs_inode *nfsi = NFS_I(inode);
++
++	spin_lock(&inode->i_lock);
++	nfsi->cache_validity |= NFS_INO_REVAL_PAGECACHE;
++	spin_unlock(&inode->i_lock);
++}
++
+ /*
+  * We judge how long we want to trust negative
+  * dentries by looking at the parent inode mtime.
+@@ -1236,19 +1245,14 @@ nfs_lookup_revalidate_done(struct inode *dir, struct dentry *dentry,
+ 			__func__, dentry);
+ 		return 1;
+ 	case 0:
+-		nfs_mark_for_revalidate(dir);
+-		if (inode && S_ISDIR(inode->i_mode)) {
+-			/* Purge readdir caches. */
+-			nfs_zap_caches(inode);
+-			/*
+-			 * We can't d_drop the root of a disconnected tree:
+-			 * its d_hash is on the s_anon list and d_drop() would hide
+-			 * it from shrink_dcache_for_unmount(), leading to busy
+-			 * inodes on unmount and further oopses.
+-			 */
+-			if (IS_ROOT(dentry))
+-				return 1;
+-		}
++		/*
++		 * We can't d_drop the root of a disconnected tree:
++		 * its d_hash is on the s_anon list and d_drop() would hide
++		 * it from shrink_dcache_for_unmount(), leading to busy
++		 * inodes on unmount and further oopses.
++		 */
++		if (inode && IS_ROOT(dentry))
++			return 1;
+ 		dfprintk(LOOKUPCACHE, "NFS: %s(%pd2) is invalid\n",
+ 				__func__, dentry);
+ 		return 0;
+@@ -1326,6 +1330,13 @@ out:
+ 	nfs_free_fattr(fattr);
+ 	nfs_free_fhandle(fhandle);
+ 	nfs4_label_free(label);
++
++	/*
++	 * If the lookup failed despite the dentry change attribute being
++	 * a match, then we should revalidate the directory cache.
++	 */
++	if (!ret && nfs_verify_change_attribute(dir, dentry->d_time))
++		nfs_mark_dir_for_revalidate(dir);
+ 	return nfs_lookup_revalidate_done(dir, dentry, inode, ret);
+ }
+ 
+@@ -1368,7 +1379,7 @@ nfs_do_lookup_revalidate(struct inode *dir, struct dentry *dentry,
+ 		error = nfs_lookup_verify_inode(inode, flags);
+ 		if (error) {
+ 			if (error == -ESTALE)
+-				nfs_zap_caches(dir);
++				nfs_mark_dir_for_revalidate(dir);
+ 			goto out_bad;
+ 		}
+ 		nfs_advise_use_readdirplus(dir);
+@@ -1865,7 +1876,6 @@ out:
+ 	dput(parent);
+ 	return d;
+ out_error:
+-	nfs_mark_for_revalidate(dir);
+ 	d = ERR_PTR(error);
+ 	goto out;
+ }
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index a811d42ffbd11..ba2dfba4854bf 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -5967,7 +5967,7 @@ static int _nfs4_get_security_label(struct inode *inode, void *buf,
+ 		return ret;
+ 	if (!(fattr.valid & NFS_ATTR_FATTR_V4_SECURITY_LABEL))
+ 		return -ENOENT;
+-	return 0;
++	return label.len;
+ }
+ 
+ static int nfs4_get_security_label(struct inode *inode, void *buf,
+diff --git a/fs/pnode.h b/fs/pnode.h
+index 26f74e092bd98..988f1aa9b02ae 100644
+--- a/fs/pnode.h
++++ b/fs/pnode.h
+@@ -12,7 +12,7 @@
+ 
+ #define IS_MNT_SHARED(m) ((m)->mnt.mnt_flags & MNT_SHARED)
+ #define IS_MNT_SLAVE(m) ((m)->mnt_master)
+-#define IS_MNT_NEW(m)  (!(m)->mnt_ns)
++#define IS_MNT_NEW(m)  (!(m)->mnt_ns || is_anon_ns((m)->mnt_ns))
+ #define CLEAR_MNT_SHARED(m) ((m)->mnt.mnt_flags &= ~MNT_SHARED)
+ #define IS_MNT_UNBINDABLE(m) ((m)->mnt.mnt_flags & MNT_UNBINDABLE)
+ #define IS_MNT_MARKED(m) ((m)->mnt.mnt_flags & MNT_MARKED)
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index bb89c3e43212b..0dd2f93ac0480 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -544,11 +544,14 @@ static int udf_do_extend_file(struct inode *inode,
+ 
+ 		udf_write_aext(inode, last_pos, &last_ext->extLocation,
+ 				last_ext->extLength, 1);
++
+ 		/*
+-		 * We've rewritten the last extent but there may be empty
+-		 * indirect extent after it - enter it.
++		 * We've rewritten the last extent. If we are going to add
++		 * more extents, we may need to enter possible following
++		 * empty indirect extent.
+ 		 */
+-		udf_next_aext(inode, last_pos, &tmploc, &tmplen, 0);
++		if (new_block_bytes || prealloc_len)
++			udf_next_aext(inode, last_pos, &tmploc, &tmplen, 0);
+ 	}
+ 
+ 	/* Managed to do everything necessary? */
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index 5b1dc1ad4fb32..9e173c6f312dc 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -1072,19 +1072,25 @@ void __acpi_handle_debug(struct _ddebug *descriptor, acpi_handle handle, const c
+ #if defined(CONFIG_ACPI) && defined(CONFIG_GPIOLIB)
+ bool acpi_gpio_get_irq_resource(struct acpi_resource *ares,
+ 				struct acpi_resource_gpio **agpio);
+-int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index);
++int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int index);
+ #else
+ static inline bool acpi_gpio_get_irq_resource(struct acpi_resource *ares,
+ 					      struct acpi_resource_gpio **agpio)
+ {
+ 	return false;
+ }
+-static inline int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
++static inline int acpi_dev_gpio_irq_get_by(struct acpi_device *adev,
++					   const char *name, int index)
+ {
+ 	return -ENXIO;
+ }
+ #endif
+ 
++static inline int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
++{
++	return acpi_dev_gpio_irq_get_by(adev, NULL, index);
++}
++
+ /* Device properties */
+ 
+ #ifdef CONFIG_ACPI
+diff --git a/include/linux/can/skb.h b/include/linux/can/skb.h
+index fc61cf4eff1c9..ce7393d397e18 100644
+--- a/include/linux/can/skb.h
++++ b/include/linux/can/skb.h
+@@ -49,8 +49,12 @@ static inline void can_skb_reserve(struct sk_buff *skb)
+ 
+ static inline void can_skb_set_owner(struct sk_buff *skb, struct sock *sk)
+ {
+-	if (sk) {
+-		sock_hold(sk);
++	/* If the socket has already been closed by user space, the
++	 * refcount may already be 0 (and the socket will be freed
++	 * after the last TX skb has been freed). So only increase
++	 * socket refcount if the refcount is > 0.
++	 */
++	if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) {
+ 		skb->destructor = sock_efree;
+ 		skb->sk = sk;
+ 	}
+diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
+index 98cff1b4b088c..189149de77a9d 100644
+--- a/include/linux/compiler-clang.h
++++ b/include/linux/compiler-clang.h
+@@ -41,6 +41,12 @@
+ #define __no_sanitize_thread
+ #endif
+ 
++#if defined(CONFIG_ARCH_USE_BUILTIN_BSWAP)
++#define __HAVE_BUILTIN_BSWAP32__
++#define __HAVE_BUILTIN_BSWAP64__
++#define __HAVE_BUILTIN_BSWAP16__
++#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
++
+ #if __has_feature(undefined_behavior_sanitizer)
+ /* GCC does not have __SANITIZE_UNDEFINED__ */
+ #define __no_sanitize_undefined \
+diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
+index 474f29638d2c9..7dff07713a073 100644
+--- a/include/linux/entry-common.h
++++ b/include/linux/entry-common.h
+@@ -341,8 +341,26 @@ void irqentry_enter_from_user_mode(struct pt_regs *regs);
+ void irqentry_exit_to_user_mode(struct pt_regs *regs);
+ 
+ #ifndef irqentry_state
++/**
++ * struct irqentry_state - Opaque object for exception state storage
++ * @exit_rcu: Used exclusively in the irqentry_*() calls; signals whether the
++ *            exit path has to invoke rcu_irq_exit().
++ * @lockdep: Used exclusively in the irqentry_nmi_*() calls; ensures that
++ *           lockdep state is restored correctly on exit from nmi.
++ *
++ * This opaque object is filled in by the irqentry_*_enter() functions and
++ * must be passed back into the corresponding irqentry_*_exit() functions
++ * when the exception is complete.
++ *
++ * Callers of irqentry_*_[enter|exit]() must consider this structure opaque
++ * and all members private.  Descriptions of the members are provided to aid in
++ * the maintenance of the irqentry_*() functions.
++ */
+ typedef struct irqentry_state {
+-	bool	exit_rcu;
++	union {
++		bool	exit_rcu;
++		bool	lockdep;
++	};
+ } irqentry_state_t;
+ #endif
+ 
+@@ -402,4 +420,23 @@ void irqentry_exit_cond_resched(void);
+  */
+ void noinstr irqentry_exit(struct pt_regs *regs, irqentry_state_t state);
+ 
++/**
++ * irqentry_nmi_enter - Handle NMI entry
++ * @regs:	Pointer to currents pt_regs
++ *
++ * Similar to irqentry_enter() but taking care of the NMI constraints.
++ */
++irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs);
++
++/**
++ * irqentry_nmi_exit - Handle return from NMI handling
++ * @regs:	Pointer to pt_regs (NMI entry regs)
++ * @irq_state:	Return value from matching call to irqentry_nmi_enter()
++ *
++ * Last action before returning to the low level assmenbly code.
++ *
++ * Counterpart to irqentry_nmi_enter().
++ */
++void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_state);
++
+ #endif
+diff --git a/include/linux/gpio/consumer.h b/include/linux/gpio/consumer.h
+index 901aab89d025f..79f450e93abfd 100644
+--- a/include/linux/gpio/consumer.h
++++ b/include/linux/gpio/consumer.h
+@@ -674,6 +674,8 @@ struct acpi_gpio_mapping {
+  * get GpioIo type explicitly, this quirk may be used.
+  */
+ #define ACPI_GPIO_QUIRK_ONLY_GPIOIO		BIT(1)
++/* Use given pin as an absolute GPIO number in the system */
++#define ACPI_GPIO_QUIRK_ABSOLUTE_NUMBER		BIT(2)
+ 
+ 	unsigned int quirks;
+ };
+diff --git a/include/linux/memory.h b/include/linux/memory.h
+index 439a89e758d87..4da95e684e20f 100644
+--- a/include/linux/memory.h
++++ b/include/linux/memory.h
+@@ -27,9 +27,8 @@ struct memory_block {
+ 	unsigned long start_section_nr;
+ 	unsigned long state;		/* serialized by the dev->lock */
+ 	int online_type;		/* for passing data to online routine */
+-	int phys_device;		/* to which fru does this belong? */
+-	struct device dev;
+ 	int nid;			/* NID for this memory block */
++	struct device dev;
+ };
+ 
+ int arch_get_memory_phys_device(unsigned long start_pfn);
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 96450f6fb1de8..22ce0604b4480 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -606,6 +606,7 @@ struct swevent_hlist {
+ #define PERF_ATTACH_TASK	0x04
+ #define PERF_ATTACH_TASK_DATA	0x08
+ #define PERF_ATTACH_ITRACE	0x10
++#define PERF_ATTACH_SCHED_CB	0x20
+ 
+ struct perf_cgroup;
+ struct perf_buffer;
+@@ -872,6 +873,7 @@ struct perf_cpu_context {
+ 	struct list_head		cgrp_cpuctx_entry;
+ #endif
+ 
++	struct list_head		sched_cb_entry;
+ 	int				sched_cb_usage;
+ 
+ 	int				online;
+diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
+index e237004d498d6..7c869ea8dffc8 100644
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -857,6 +857,10 @@ static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
+ #define pgprot_device pgprot_noncached
+ #endif
+ 
++#ifndef pgprot_mhp
++#define pgprot_mhp(prot)	(prot)
++#endif
++
+ #ifdef CONFIG_MMU
+ #ifndef pgprot_modify
+ #define pgprot_modify pgprot_modify
+diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
+index d5ece7a9a403f..dc1f4dcd9a825 100644
+--- a/include/linux/sched/mm.h
++++ b/include/linux/sched/mm.h
+@@ -140,7 +140,8 @@ static inline bool in_vfork(struct task_struct *tsk)
+ 	 * another oom-unkillable task does this it should blame itself.
+ 	 */
+ 	rcu_read_lock();
+-	ret = tsk->vfork_done && tsk->real_parent->mm == tsk->mm;
++	ret = tsk->vfork_done &&
++			rcu_dereference(tsk->real_parent)->mm == tsk->mm;
+ 	rcu_read_unlock();
+ 
+ 	return ret;
+diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
+index cbfc78b92b654..1ac20d75b0618 100644
+--- a/include/linux/seqlock.h
++++ b/include/linux/seqlock.h
+@@ -659,10 +659,7 @@ typedef struct {
+  * seqcount_latch_init() - runtime initializer for seqcount_latch_t
+  * @s: Pointer to the seqcount_latch_t instance
+  */
+-static inline void seqcount_latch_init(seqcount_latch_t *s)
+-{
+-	seqcount_init(&s->seqcount);
+-}
++#define seqcount_latch_init(s) seqcount_init(&(s)->seqcount)
+ 
+ /**
+  * raw_read_seqcount_latch() - pick even/odd latch data copy
+diff --git a/include/linux/stop_machine.h b/include/linux/stop_machine.h
+index 76d8b09384a7a..63ea9aff368f0 100644
+--- a/include/linux/stop_machine.h
++++ b/include/linux/stop_machine.h
+@@ -123,7 +123,7 @@ int stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data,
+ 				   const struct cpumask *cpus);
+ #else	/* CONFIG_SMP || CONFIG_HOTPLUG_CPU */
+ 
+-static inline int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data,
++static __always_inline int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data,
+ 					  const struct cpumask *cpus)
+ {
+ 	unsigned long flags;
+@@ -134,14 +134,15 @@ static inline int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data,
+ 	return ret;
+ }
+ 
+-static inline int stop_machine(cpu_stop_fn_t fn, void *data,
+-			       const struct cpumask *cpus)
++static __always_inline int
++stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus)
+ {
+ 	return stop_machine_cpuslocked(fn, data, cpus);
+ }
+ 
+-static inline int stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data,
+-						 const struct cpumask *cpus)
++static __always_inline int
++stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data,
++			       const struct cpumask *cpus)
+ {
+ 	return stop_machine(fn, data, cpus);
+ }
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index 7d72c4e0713c1..d6a41841b93e4 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -746,6 +746,8 @@ extern int usb_lock_device_for_reset(struct usb_device *udev,
+ extern int usb_reset_device(struct usb_device *dev);
+ extern void usb_queue_reset_device(struct usb_interface *dev);
+ 
++extern struct device *usb_intf_get_dma_device(struct usb_interface *intf);
++
+ #ifdef CONFIG_ACPI
+ extern int usb_acpi_set_power_state(struct usb_device *hdev, int index,
+ 	bool enable);
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index e8a924eeea3d0..6b5fcfa1e5553 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -79,8 +79,13 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 		if (gso_type && skb->network_header) {
+ 			struct flow_keys_basic keys;
+ 
+-			if (!skb->protocol)
++			if (!skb->protocol) {
++				__be16 protocol = dev_parse_header_protocol(skb);
++
+ 				virtio_net_hdr_set_proto(skb, hdr);
++				if (protocol && protocol != skb->protocol)
++					return -EINVAL;
++			}
+ retry:
+ 			if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,
+ 							      NULL, 0, 0, 0,
+diff --git a/include/media/rc-map.h b/include/media/rc-map.h
+index 7dbb91c601a77..c3effcdf2a641 100644
+--- a/include/media/rc-map.h
++++ b/include/media/rc-map.h
+@@ -175,6 +175,13 @@ struct rc_map_list {
+ 	struct rc_map map;
+ };
+ 
++#ifdef CONFIG_MEDIA_CEC_RC
++/*
++ * rc_map_list from rc-cec.c
++ */
++extern struct rc_map_list cec_map;
++#endif
++
+ /* Routines from rc-map.c */
+ 
+ /**
+diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
+index 6336780d83a75..ce2fba49c95da 100644
+--- a/include/target/target_core_backend.h
++++ b/include/target/target_core_backend.h
+@@ -72,6 +72,7 @@ int	transport_backend_register(const struct target_backend_ops *);
+ void	target_backend_unregister(const struct target_backend_ops *);
+ 
+ void	target_complete_cmd(struct se_cmd *, u8);
++void	target_set_cmd_data_length(struct se_cmd *, int);
+ void	target_complete_cmd_with_length(struct se_cmd *, u8, int);
+ 
+ void	transport_copy_sense_to_cmd(struct se_cmd *, unsigned char *);
+diff --git a/include/uapi/linux/l2tp.h b/include/uapi/linux/l2tp.h
+index 30c80d5ba4bfc..bab8c97086111 100644
+--- a/include/uapi/linux/l2tp.h
++++ b/include/uapi/linux/l2tp.h
+@@ -145,6 +145,7 @@ enum {
+ 	L2TP_ATTR_RX_ERRORS,		/* u64 */
+ 	L2TP_ATTR_STATS_PAD,
+ 	L2TP_ATTR_RX_COOKIE_DISCARDS,	/* u64 */
++	L2TP_ATTR_RX_INVALID,		/* u64 */
+ 	__L2TP_ATTR_STATS_MAX,
+ };
+ 
+diff --git a/include/uapi/linux/netfilter/nfnetlink_cthelper.h b/include/uapi/linux/netfilter/nfnetlink_cthelper.h
+index a13137afc4299..70af02092d16e 100644
+--- a/include/uapi/linux/netfilter/nfnetlink_cthelper.h
++++ b/include/uapi/linux/netfilter/nfnetlink_cthelper.h
+@@ -5,7 +5,7 @@
+ #define NFCT_HELPER_STATUS_DISABLED	0
+ #define NFCT_HELPER_STATUS_ENABLED	1
+ 
+-enum nfnl_acct_msg_types {
++enum nfnl_cthelper_msg_types {
+ 	NFNL_MSG_CTHELPER_NEW,
+ 	NFNL_MSG_CTHELPER_GET,
+ 	NFNL_MSG_CTHELPER_DEL,
+diff --git a/kernel/entry/common.c b/kernel/entry/common.c
+index e9e2df3f3f9ee..e289e67732926 100644
+--- a/kernel/entry/common.c
++++ b/kernel/entry/common.c
+@@ -397,3 +397,39 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
+ 			rcu_irq_exit();
+ 	}
+ }
++
++irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs)
++{
++	irqentry_state_t irq_state;
++
++	irq_state.lockdep = lockdep_hardirqs_enabled();
++
++	__nmi_enter();
++	lockdep_hardirqs_off(CALLER_ADDR0);
++	lockdep_hardirq_enter();
++	rcu_nmi_enter();
++
++	instrumentation_begin();
++	trace_hardirqs_off_finish();
++	ftrace_nmi_enter();
++	instrumentation_end();
++
++	return irq_state;
++}
++
++void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_state)
++{
++	instrumentation_begin();
++	ftrace_nmi_exit();
++	if (irq_state.lockdep) {
++		trace_hardirqs_on_prepare();
++		lockdep_hardirqs_on_prepare(CALLER_ADDR0);
++	}
++	instrumentation_end();
++
++	rcu_nmi_exit();
++	lockdep_hardirq_exit();
++	if (irq_state.lockdep)
++		lockdep_hardirqs_on(CALLER_ADDR0);
++	__nmi_exit();
++}
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index c3ba29d058b73..4af161b3f322f 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -383,6 +383,7 @@ static DEFINE_MUTEX(perf_sched_mutex);
+ static atomic_t perf_sched_count;
+ 
+ static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
++static DEFINE_PER_CPU(int, perf_sched_cb_usages);
+ static DEFINE_PER_CPU(struct pmu_event_list, pmu_sb_events);
+ 
+ static atomic_t nr_mmap_events __read_mostly;
+@@ -3466,11 +3467,16 @@ unlock:
+ 	}
+ }
+ 
++static DEFINE_PER_CPU(struct list_head, sched_cb_list);
++
+ void perf_sched_cb_dec(struct pmu *pmu)
+ {
+ 	struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
+ 
+-	--cpuctx->sched_cb_usage;
++	this_cpu_dec(perf_sched_cb_usages);
++
++	if (!--cpuctx->sched_cb_usage)
++		list_del(&cpuctx->sched_cb_entry);
+ }
+ 
+ 
+@@ -3478,7 +3484,10 @@ void perf_sched_cb_inc(struct pmu *pmu)
+ {
+ 	struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
+ 
+-	cpuctx->sched_cb_usage++;
++	if (!cpuctx->sched_cb_usage++)
++		list_add(&cpuctx->sched_cb_entry, this_cpu_ptr(&sched_cb_list));
++
++	this_cpu_inc(perf_sched_cb_usages);
+ }
+ 
+ /*
+@@ -3507,6 +3516,24 @@ static void __perf_pmu_sched_task(struct perf_cpu_context *cpuctx, bool sched_in
+ 	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
+ }
+ 
++static void perf_pmu_sched_task(struct task_struct *prev,
++				struct task_struct *next,
++				bool sched_in)
++{
++	struct perf_cpu_context *cpuctx;
++
++	if (prev == next)
++		return;
++
++	list_for_each_entry(cpuctx, this_cpu_ptr(&sched_cb_list), sched_cb_entry) {
++		/* will be handled in perf_event_context_sched_in/out */
++		if (cpuctx->task_ctx)
++			continue;
++
++		__perf_pmu_sched_task(cpuctx, sched_in);
++	}
++}
++
+ static void perf_event_switch(struct task_struct *task,
+ 			      struct task_struct *next_prev, bool sched_in);
+ 
+@@ -3529,6 +3556,9 @@ void __perf_event_task_sched_out(struct task_struct *task,
+ {
+ 	int ctxn;
+ 
++	if (__this_cpu_read(perf_sched_cb_usages))
++		perf_pmu_sched_task(task, next, false);
++
+ 	if (atomic_read(&nr_switch_events))
+ 		perf_event_switch(task, next, false);
+ 
+@@ -3837,6 +3867,9 @@ void __perf_event_task_sched_in(struct task_struct *prev,
+ 
+ 	if (atomic_read(&nr_switch_events))
+ 		perf_event_switch(task, prev, true);
++
++	if (__this_cpu_read(perf_sched_cb_usages))
++		perf_pmu_sched_task(prev, task, true);
+ }
+ 
+ static u64 perf_calculate_period(struct perf_event *event, u64 nsec, u64 count)
+@@ -4661,7 +4694,7 @@ static void unaccount_event(struct perf_event *event)
+ 	if (event->parent)
+ 		return;
+ 
+-	if (event->attach_state & PERF_ATTACH_TASK)
++	if (event->attach_state & (PERF_ATTACH_TASK | PERF_ATTACH_SCHED_CB))
+ 		dec = true;
+ 	if (event->attr.mmap || event->attr.mmap_data)
+ 		atomic_dec(&nr_mmap_events);
+@@ -11056,7 +11089,7 @@ static void account_event(struct perf_event *event)
+ 	if (event->parent)
+ 		return;
+ 
+-	if (event->attach_state & PERF_ATTACH_TASK)
++	if (event->attach_state & (PERF_ATTACH_TASK | PERF_ATTACH_SCHED_CB))
+ 		inc = true;
+ 	if (event->attr.mmap || event->attr.mmap_data)
+ 		atomic_inc(&nr_mmap_events);
+@@ -12848,6 +12881,7 @@ static void __init perf_event_init_all_cpus(void)
+ #ifdef CONFIG_CGROUP_PERF
+ 		INIT_LIST_HEAD(&per_cpu(cgrp_cpuctx_list, cpu));
+ #endif
++		INIT_LIST_HEAD(&per_cpu(sched_cb_list, cpu));
+ 	}
+ }
+ 
+diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
+index 9d8df34bea75b..16f57e71f9c44 100644
+--- a/kernel/sched/membarrier.c
++++ b/kernel/sched/membarrier.c
+@@ -332,9 +332,7 @@ static int sync_runqueues_membarrier_state(struct mm_struct *mm)
+ 	}
+ 	rcu_read_unlock();
+ 
+-	preempt_disable();
+-	smp_call_function_many(tmpmask, ipi_sync_rq_state, mm, 1);
+-	preempt_enable();
++	on_each_cpu_mask(tmpmask, ipi_sync_rq_state, mm, true);
+ 
+ 	free_cpumask_var(tmpmask);
+ 	cpus_read_unlock();
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index afad085960b81..b9306d2bb4269 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -2951,7 +2951,7 @@ static struct ctl_table vm_table[] = {
+ 		.data		= &block_dump,
+ 		.maxlen		= sizeof(block_dump),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+ 	},
+ 	{
+@@ -2959,7 +2959,7 @@ static struct ctl_table vm_table[] = {
+ 		.data		= &sysctl_vfs_cache_pressure,
+ 		.maxlen		= sizeof(sysctl_vfs_cache_pressure),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+ 	},
+ #if defined(HAVE_ARCH_PICK_MMAP_LAYOUT) || \
+@@ -2969,7 +2969,7 @@ static struct ctl_table vm_table[] = {
+ 		.data		= &sysctl_legacy_va_layout,
+ 		.maxlen		= sizeof(sysctl_legacy_va_layout),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+ 	},
+ #endif
+@@ -2979,7 +2979,7 @@ static struct ctl_table vm_table[] = {
+ 		.data		= &node_reclaim_mode,
+ 		.maxlen		= sizeof(node_reclaim_mode),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+ 	},
+ 	{
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 387b4bef7dd14..4416f5d72c11e 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -546,8 +546,11 @@ static ktime_t __hrtimer_next_event_base(struct hrtimer_cpu_base *cpu_base,
+ }
+ 
+ /*
+- * Recomputes cpu_base::*next_timer and returns the earliest expires_next but
+- * does not set cpu_base::*expires_next, that is done by hrtimer_reprogram.
++ * Recomputes cpu_base::*next_timer and returns the earliest expires_next
++ * but does not set cpu_base::*expires_next, that is done by
++ * hrtimer[_force]_reprogram and hrtimer_interrupt only. When updating
++ * cpu_base::*expires_next right away, reprogramming logic would no longer
++ * work.
+  *
+  * When a softirq is pending, we can ignore the HRTIMER_ACTIVE_SOFT bases,
+  * those timers will get run whenever the softirq gets handled, at the end of
+@@ -588,6 +591,37 @@ __hrtimer_get_next_event(struct hrtimer_cpu_base *cpu_base, unsigned int active_
+ 	return expires_next;
+ }
+ 
++static ktime_t hrtimer_update_next_event(struct hrtimer_cpu_base *cpu_base)
++{
++	ktime_t expires_next, soft = KTIME_MAX;
++
++	/*
++	 * If the soft interrupt has already been activated, ignore the
++	 * soft bases. They will be handled in the already raised soft
++	 * interrupt.
++	 */
++	if (!cpu_base->softirq_activated) {
++		soft = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_SOFT);
++		/*
++		 * Update the soft expiry time. clock_settime() might have
++		 * affected it.
++		 */
++		cpu_base->softirq_expires_next = soft;
++	}
++
++	expires_next = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_HARD);
++	/*
++	 * If a softirq timer is expiring first, update cpu_base->next_timer
++	 * and program the hardware with the soft expiry time.
++	 */
++	if (expires_next > soft) {
++		cpu_base->next_timer = cpu_base->softirq_next_timer;
++		expires_next = soft;
++	}
++
++	return expires_next;
++}
++
+ static inline ktime_t hrtimer_update_base(struct hrtimer_cpu_base *base)
+ {
+ 	ktime_t *offs_real = &base->clock_base[HRTIMER_BASE_REALTIME].offset;
+@@ -628,23 +662,7 @@ hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal)
+ {
+ 	ktime_t expires_next;
+ 
+-	/*
+-	 * Find the current next expiration time.
+-	 */
+-	expires_next = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_ALL);
+-
+-	if (cpu_base->next_timer && cpu_base->next_timer->is_soft) {
+-		/*
+-		 * When the softirq is activated, hrtimer has to be
+-		 * programmed with the first hard hrtimer because soft
+-		 * timer interrupt could occur too late.
+-		 */
+-		if (cpu_base->softirq_activated)
+-			expires_next = __hrtimer_get_next_event(cpu_base,
+-								HRTIMER_ACTIVE_HARD);
+-		else
+-			cpu_base->softirq_expires_next = expires_next;
+-	}
++	expires_next = hrtimer_update_next_event(cpu_base);
+ 
+ 	if (skip_equal && expires_next == cpu_base->expires_next)
+ 		return;
+@@ -1644,8 +1662,8 @@ retry:
+ 
+ 	__hrtimer_run_queues(cpu_base, now, flags, HRTIMER_ACTIVE_HARD);
+ 
+-	/* Reevaluate the clock bases for the next expiry */
+-	expires_next = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_ALL);
++	/* Reevaluate the clock bases for the [soft] next expiry */
++	expires_next = hrtimer_update_next_event(cpu_base);
+ 	/*
+ 	 * Store the new expiry value so the migration code can verify
+ 	 * against it.
+diff --git a/lib/logic_pio.c b/lib/logic_pio.c
+index f32fe481b4922..07b4b9a1f54b6 100644
+--- a/lib/logic_pio.c
++++ b/lib/logic_pio.c
+@@ -28,6 +28,8 @@ static DEFINE_MUTEX(io_range_mutex);
+  * @new_range: pointer to the IO range to be registered.
+  *
+  * Returns 0 on success, the error code in case of failure.
++ * If the range already exists, -EEXIST will be returned, which should be
++ * considered a success.
+  *
+  * Register a new IO range node in the IO range list.
+  */
+@@ -51,6 +53,7 @@ int logic_pio_register_range(struct logic_pio_hwaddr *new_range)
+ 	list_for_each_entry(range, &io_range_list, list) {
+ 		if (range->fwnode == new_range->fwnode) {
+ 			/* range already there */
++			ret = -EEXIST;
+ 			goto end_register;
+ 		}
+ 		if (range->flags == LOGIC_PIO_CPU_MMIO &&
+diff --git a/lib/test_kasan.c b/lib/test_kasan.c
+index 662f862702fc8..400507f1e5db0 100644
+--- a/lib/test_kasan.c
++++ b/lib/test_kasan.c
+@@ -737,13 +737,13 @@ static void kasan_bitops_tags(struct kunit *test)
+ 		return;
+ 	}
+ 
+-	/* Allocation size will be rounded to up granule size, which is 16. */
+-	bits = kzalloc(sizeof(*bits), GFP_KERNEL);
++	/* kmalloc-64 cache will be used and the last 16 bytes will be the redzone. */
++	bits = kzalloc(48, GFP_KERNEL);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits);
+ 
+-	/* Do the accesses past the 16 allocated bytes. */
+-	kasan_bitops_modify(test, BITS_PER_LONG, &bits[1]);
+-	kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, &bits[1]);
++	/* Do the accesses past the 48 allocated bytes, but within the redone. */
++	kasan_bitops_modify(test, BITS_PER_LONG, (void *)bits + 48);
++	kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, (void *)bits + 48);
+ 
+ 	kfree(bits);
+ }
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 9abf4c5f2bce2..24abc79f8914e 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -1202,12 +1202,22 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
+ 		goto release_task;
+ 	}
+ 
+-	mm = mm_access(task, PTRACE_MODE_ATTACH_FSCREDS);
++	/* Require PTRACE_MODE_READ to avoid leaking ASLR metadata. */
++	mm = mm_access(task, PTRACE_MODE_READ_FSCREDS);
+ 	if (IS_ERR_OR_NULL(mm)) {
+ 		ret = IS_ERR(mm) ? PTR_ERR(mm) : -ESRCH;
+ 		goto release_task;
+ 	}
+ 
++	/*
++	 * Require CAP_SYS_NICE for influencing process performance. Note that
++	 * only non-destructive hints are currently supported.
++	 */
++	if (!capable(CAP_SYS_NICE)) {
++		ret = -EPERM;
++		goto release_mm;
++	}
++
+ 	total_len = iov_iter_count(&iter);
+ 
+ 	while (iov_iter_count(&iter)) {
+@@ -1222,6 +1232,7 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
+ 	if (ret == 0)
+ 		ret = total_len - iov_iter_count(&iter);
+ 
++release_mm:
+ 	mmput(mm);
+ release_task:
+ 	put_task_struct(task);
+diff --git a/mm/memory.c b/mm/memory.c
+index 827d42f9ebf7c..4d565d7c80169 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3090,6 +3090,14 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
+ 		return handle_userfault(vmf, VM_UFFD_WP);
+ 	}
+ 
++	/*
++	 * Userfaultfd write-protect can defer flushes. Ensure the TLB
++	 * is flushed in this case before copying.
++	 */
++	if (unlikely(userfaultfd_wp(vmf->vma) &&
++		     mm_tlb_flush_pending(vmf->vma->vm_mm)))
++		flush_tlb_page(vmf->vma, vmf->address);
++
+ 	vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
+ 	if (!vmf->page) {
+ 		/*
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index aa453a4331437..b9de2df5b8358 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1020,7 +1020,7 @@ static int online_memory_block(struct memory_block *mem, void *arg)
+  */
+ int __ref add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
+ {
+-	struct mhp_params params = { .pgprot = PAGE_KERNEL };
++	struct mhp_params params = { .pgprot = pgprot_mhp(PAGE_KERNEL) };
+ 	u64 start, size;
+ 	bool new_node = false;
+ 	int ret;
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 88639706ae177..690f79c781cf7 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6189,13 +6189,66 @@ static void __meminit zone_init_free_lists(struct zone *zone)
+ 	}
+ }
+ 
++#if !defined(CONFIG_FLAT_NODE_MEM_MAP)
++/*
++ * Only struct pages that correspond to ranges defined by memblock.memory
++ * are zeroed and initialized by going through __init_single_page() during
++ * memmap_init_zone().
++ *
++ * But, there could be struct pages that correspond to holes in
++ * memblock.memory. This can happen because of the following reasons:
++ * - physical memory bank size is not necessarily the exact multiple of the
++ *   arbitrary section size
++ * - early reserved memory may not be listed in memblock.memory
++ * - memory layouts defined with memmap= kernel parameter may not align
++ *   nicely with memmap sections
++ *
++ * Explicitly initialize those struct pages so that:
++ * - PG_Reserved is set
++ * - zone and node links point to zone and node that span the page if the
++ *   hole is in the middle of a zone
++ * - zone and node links point to adjacent zone/node if the hole falls on
++ *   the zone boundary; the pages in such holes will be prepended to the
++ *   zone/node above the hole except for the trailing pages in the last
++ *   section that will be appended to the zone/node below.
++ */
++static u64 __meminit init_unavailable_range(unsigned long spfn,
++					    unsigned long epfn,
++					    int zone, int node)
++{
++	unsigned long pfn;
++	u64 pgcnt = 0;
++
++	for (pfn = spfn; pfn < epfn; pfn++) {
++		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
++			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
++				+ pageblock_nr_pages - 1;
++			continue;
++		}
++		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
++		__SetPageReserved(pfn_to_page(pfn));
++		pgcnt++;
++	}
++
++	return pgcnt;
++}
++#else
++static inline u64 init_unavailable_range(unsigned long spfn, unsigned long epfn,
++					 int zone, int node)
++{
++	return 0;
++}
++#endif
++
+ void __meminit __weak memmap_init(unsigned long size, int nid,
+ 				  unsigned long zone,
+ 				  unsigned long range_start_pfn)
+ {
++	static unsigned long hole_pfn;
+ 	unsigned long start_pfn, end_pfn;
+ 	unsigned long range_end_pfn = range_start_pfn + size;
+ 	int i;
++	u64 pgcnt = 0;
+ 
+ 	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
+ 		start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn);
+@@ -6206,7 +6259,29 @@ void __meminit __weak memmap_init(unsigned long size, int nid,
+ 			memmap_init_zone(size, nid, zone, start_pfn, range_end_pfn,
+ 					 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
+ 		}
++
++		if (hole_pfn < start_pfn)
++			pgcnt += init_unavailable_range(hole_pfn, start_pfn,
++							zone, nid);
++		hole_pfn = end_pfn;
+ 	}
++
++#ifdef CONFIG_SPARSEMEM
++	/*
++	 * Initialize the hole in the range [zone_end_pfn, section_end].
++	 * If zone boundary falls in the middle of a section, this hole
++	 * will be re-initialized during the call to this function for the
++	 * higher zone.
++	 */
++	end_pfn = round_up(range_end_pfn, PAGES_PER_SECTION);
++	if (hole_pfn < end_pfn)
++		pgcnt += init_unavailable_range(hole_pfn, end_pfn,
++						zone, nid);
++#endif
++
++	if (pgcnt)
++		pr_info("  %s zone: %llu pages in unavailable ranges\n",
++			zone_names[zone], pgcnt);
+ }
+ 
+ static int zone_batchsize(struct zone *zone)
+@@ -6999,88 +7074,6 @@ void __init free_area_init_memoryless_node(int nid)
+ 	free_area_init_node(nid);
+ }
+ 
+-#if !defined(CONFIG_FLAT_NODE_MEM_MAP)
+-/*
+- * Initialize all valid struct pages in the range [spfn, epfn) and mark them
+- * PageReserved(). Return the number of struct pages that were initialized.
+- */
+-static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn)
+-{
+-	unsigned long pfn;
+-	u64 pgcnt = 0;
+-
+-	for (pfn = spfn; pfn < epfn; pfn++) {
+-		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
+-			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
+-				+ pageblock_nr_pages - 1;
+-			continue;
+-		}
+-		/*
+-		 * Use a fake node/zone (0) for now. Some of these pages
+-		 * (in memblock.reserved but not in memblock.memory) will
+-		 * get re-initialized via reserve_bootmem_region() later.
+-		 */
+-		__init_single_page(pfn_to_page(pfn), pfn, 0, 0);
+-		__SetPageReserved(pfn_to_page(pfn));
+-		pgcnt++;
+-	}
+-
+-	return pgcnt;
+-}
+-
+-/*
+- * Only struct pages that are backed by physical memory are zeroed and
+- * initialized by going through __init_single_page(). But, there are some
+- * struct pages which are reserved in memblock allocator and their fields
+- * may be accessed (for example page_to_pfn() on some configuration accesses
+- * flags). We must explicitly initialize those struct pages.
+- *
+- * This function also addresses a similar issue where struct pages are left
+- * uninitialized because the physical address range is not covered by
+- * memblock.memory or memblock.reserved. That could happen when memblock
+- * layout is manually configured via memmap=, or when the highest physical
+- * address (max_pfn) does not end on a section boundary.
+- */
+-static void __init init_unavailable_mem(void)
+-{
+-	phys_addr_t start, end;
+-	u64 i, pgcnt;
+-	phys_addr_t next = 0;
+-
+-	/*
+-	 * Loop through unavailable ranges not covered by memblock.memory.
+-	 */
+-	pgcnt = 0;
+-	for_each_mem_range(i, &start, &end) {
+-		if (next < start)
+-			pgcnt += init_unavailable_range(PFN_DOWN(next),
+-							PFN_UP(start));
+-		next = end;
+-	}
+-
+-	/*
+-	 * Early sections always have a fully populated memmap for the whole
+-	 * section - see pfn_valid(). If the last section has holes at the
+-	 * end and that section is marked "online", the memmap will be
+-	 * considered initialized. Make sure that memmap has a well defined
+-	 * state.
+-	 */
+-	pgcnt += init_unavailable_range(PFN_DOWN(next),
+-					round_up(max_pfn, PAGES_PER_SECTION));
+-
+-	/*
+-	 * Struct pages that do not have backing memory. This could be because
+-	 * firmware is using some of this memory, or for some other reasons.
+-	 */
+-	if (pgcnt)
+-		pr_info("Zeroed struct page in unavailable ranges: %lld pages", pgcnt);
+-}
+-#else
+-static inline void __init init_unavailable_mem(void)
+-{
+-}
+-#endif /* !CONFIG_FLAT_NODE_MEM_MAP */
+-
+ #if MAX_NUMNODES > 1
+ /*
+  * Figure out the number of possible node ids.
+@@ -7504,7 +7497,6 @@ void __init free_area_init(unsigned long *max_zone_pfn)
+ 	/* Initialise every node */
+ 	mminit_verify_pageflags_layout();
+ 	setup_nr_node_ids();
+-	init_unavailable_mem();
+ 	for_each_online_node(nid) {
+ 		pg_data_t *pgdat = NODE_DATA(nid);
+ 		free_area_init_node(nid);
+diff --git a/mm/slub.c b/mm/slub.c
+index 7b378e2ce270d..fbc415c340095 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -1971,7 +1971,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
+ 
+ 		t = acquire_slab(s, n, page, object == NULL, &objects);
+ 		if (!t)
+-			continue; /* cmpxchg raced */
++			break;
+ 
+ 		available += objects;
+ 		if (!object) {
+diff --git a/net/dsa/slave.c b/net/dsa/slave.c
+index 3bc5ca40c9fbb..c6806eef906f9 100644
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -548,6 +548,30 @@ netdev_tx_t dsa_enqueue_skb(struct sk_buff *skb, struct net_device *dev)
+ }
+ EXPORT_SYMBOL_GPL(dsa_enqueue_skb);
+ 
++static int dsa_realloc_skb(struct sk_buff *skb, struct net_device *dev)
++{
++	int needed_headroom = dev->needed_headroom;
++	int needed_tailroom = dev->needed_tailroom;
++
++	/* For tail taggers, we need to pad short frames ourselves, to ensure
++	 * that the tail tag does not fail at its role of being at the end of
++	 * the packet, once the master interface pads the frame. Account for
++	 * that pad length here, and pad later.
++	 */
++	if (unlikely(needed_tailroom && skb->len < ETH_ZLEN))
++		needed_tailroom += ETH_ZLEN - skb->len;
++	/* skb_headroom() returns unsigned int... */
++	needed_headroom = max_t(int, needed_headroom - skb_headroom(skb), 0);
++	needed_tailroom = max_t(int, needed_tailroom - skb_tailroom(skb), 0);
++
++	if (likely(!needed_headroom && !needed_tailroom && !skb_cloned(skb)))
++		/* No reallocation needed, yay! */
++		return 0;
++
++	return pskb_expand_head(skb, needed_headroom, needed_tailroom,
++				GFP_ATOMIC);
++}
++
+ static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct dsa_slave_priv *p = netdev_priv(dev);
+@@ -567,6 +591,17 @@ static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	 */
+ 	dsa_skb_tx_timestamp(p, skb);
+ 
++	if (dsa_realloc_skb(skb, dev)) {
++		dev_kfree_skb_any(skb);
++		return NETDEV_TX_OK;
++	}
++
++	/* needed_tailroom should still be 'warm' in the cache line from
++	 * dsa_realloc_skb(), which has also ensured that padding is safe.
++	 */
++	if (dev->needed_tailroom)
++		eth_skb_pad(skb);
++
+ 	/* Transmit function may have to reallocate the original SKB,
+ 	 * in which case it must have freed it. Only free it here on error.
+ 	 */
+@@ -1791,6 +1826,16 @@ int dsa_slave_create(struct dsa_port *port)
+ 	slave_dev->netdev_ops = &dsa_slave_netdev_ops;
+ 	if (ds->ops->port_max_mtu)
+ 		slave_dev->max_mtu = ds->ops->port_max_mtu(ds, port->index);
++	if (cpu_dp->tag_ops->tail_tag)
++		slave_dev->needed_tailroom = cpu_dp->tag_ops->overhead;
++	else
++		slave_dev->needed_headroom = cpu_dp->tag_ops->overhead;
++	/* Try to save one extra realloc later in the TX path (in the master)
++	 * by also inheriting the master's needed headroom and tailroom.
++	 * The 8021q driver also does this.
++	 */
++	slave_dev->needed_headroom += master->needed_headroom;
++	slave_dev->needed_tailroom += master->needed_tailroom;
+ 	SET_NETDEV_DEVTYPE(slave_dev, &dsa_type);
+ 
+ 	netdev_for_each_tx_queue(slave_dev, dsa_slave_set_lockdep_class_one,
+diff --git a/net/dsa/tag_ar9331.c b/net/dsa/tag_ar9331.c
+index 55b00694cdba1..002cf7f952e2d 100644
+--- a/net/dsa/tag_ar9331.c
++++ b/net/dsa/tag_ar9331.c
+@@ -31,9 +31,6 @@ static struct sk_buff *ar9331_tag_xmit(struct sk_buff *skb,
+ 	__le16 *phdr;
+ 	u16 hdr;
+ 
+-	if (skb_cow_head(skb, AR9331_HDR_LEN) < 0)
+-		return NULL;
+-
+ 	phdr = skb_push(skb, AR9331_HDR_LEN);
+ 
+ 	hdr = FIELD_PREP(AR9331_HDR_VERSION_MASK, AR9331_HDR_VERSION);
+diff --git a/net/dsa/tag_brcm.c b/net/dsa/tag_brcm.c
+index ad72dff8d5242..e934dace39227 100644
+--- a/net/dsa/tag_brcm.c
++++ b/net/dsa/tag_brcm.c
+@@ -66,9 +66,6 @@ static struct sk_buff *brcm_tag_xmit_ll(struct sk_buff *skb,
+ 	u16 queue = skb_get_queue_mapping(skb);
+ 	u8 *brcm_tag;
+ 
+-	if (skb_cow_head(skb, BRCM_TAG_LEN) < 0)
+-		return NULL;
+-
+ 	/* The Ethernet switch we are interfaced with needs packets to be at
+ 	 * least 64 bytes (including FCS) otherwise they will be discarded when
+ 	 * they enter the switch port logic. When Broadcom tags are enabled, we
+diff --git a/net/dsa/tag_dsa.c b/net/dsa/tag_dsa.c
+index 0b756fae68a5f..63d690a0fca6f 100644
+--- a/net/dsa/tag_dsa.c
++++ b/net/dsa/tag_dsa.c
+@@ -23,9 +23,6 @@ static struct sk_buff *dsa_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	 * the ethertype field for untagged packets.
+ 	 */
+ 	if (skb->protocol == htons(ETH_P_8021Q)) {
+-		if (skb_cow_head(skb, 0) < 0)
+-			return NULL;
+-
+ 		/*
+ 		 * Construct tagged FROM_CPU DSA tag from 802.1q tag.
+ 		 */
+@@ -41,8 +38,6 @@ static struct sk_buff *dsa_xmit(struct sk_buff *skb, struct net_device *dev)
+ 			dsa_header[2] &= ~0x10;
+ 		}
+ 	} else {
+-		if (skb_cow_head(skb, DSA_HLEN) < 0)
+-			return NULL;
+ 		skb_push(skb, DSA_HLEN);
+ 
+ 		memmove(skb->data, skb->data + DSA_HLEN, 2 * ETH_ALEN);
+diff --git a/net/dsa/tag_edsa.c b/net/dsa/tag_edsa.c
+index 1206142403197..abf70a29deb43 100644
+--- a/net/dsa/tag_edsa.c
++++ b/net/dsa/tag_edsa.c
+@@ -35,8 +35,6 @@ static struct sk_buff *edsa_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	 * current ethertype field if the packet is untagged.
+ 	 */
+ 	if (skb->protocol == htons(ETH_P_8021Q)) {
+-		if (skb_cow_head(skb, DSA_HLEN) < 0)
+-			return NULL;
+ 		skb_push(skb, DSA_HLEN);
+ 
+ 		memmove(skb->data, skb->data + DSA_HLEN, 2 * ETH_ALEN);
+@@ -60,8 +58,6 @@ static struct sk_buff *edsa_xmit(struct sk_buff *skb, struct net_device *dev)
+ 			edsa_header[6] &= ~0x10;
+ 		}
+ 	} else {
+-		if (skb_cow_head(skb, EDSA_HLEN) < 0)
+-			return NULL;
+ 		skb_push(skb, EDSA_HLEN);
+ 
+ 		memmove(skb->data, skb->data + EDSA_HLEN, 2 * ETH_ALEN);
+diff --git a/net/dsa/tag_gswip.c b/net/dsa/tag_gswip.c
+index 408d4af390a0e..2f5bd5e338ab5 100644
+--- a/net/dsa/tag_gswip.c
++++ b/net/dsa/tag_gswip.c
+@@ -60,13 +60,8 @@ static struct sk_buff *gswip_tag_xmit(struct sk_buff *skb,
+ 				      struct net_device *dev)
+ {
+ 	struct dsa_port *dp = dsa_slave_to_port(dev);
+-	int err;
+ 	u8 *gswip_tag;
+ 
+-	err = skb_cow_head(skb, GSWIP_TX_HEADER_LEN);
+-	if (err)
+-		return NULL;
+-
+ 	skb_push(skb, GSWIP_TX_HEADER_LEN);
+ 
+ 	gswip_tag = skb->data;
+diff --git a/net/dsa/tag_ksz.c b/net/dsa/tag_ksz.c
+index 0a5aa982c60d9..4820dbcedfa2d 100644
+--- a/net/dsa/tag_ksz.c
++++ b/net/dsa/tag_ksz.c
+@@ -14,46 +14,6 @@
+ #define KSZ_EGRESS_TAG_LEN		1
+ #define KSZ_INGRESS_TAG_LEN		1
+ 
+-static struct sk_buff *ksz_common_xmit(struct sk_buff *skb,
+-				       struct net_device *dev, int len)
+-{
+-	struct sk_buff *nskb;
+-	int padlen;
+-
+-	padlen = (skb->len >= ETH_ZLEN) ? 0 : ETH_ZLEN - skb->len;
+-
+-	if (skb_tailroom(skb) >= padlen + len) {
+-		/* Let dsa_slave_xmit() free skb */
+-		if (__skb_put_padto(skb, skb->len + padlen, false))
+-			return NULL;
+-
+-		nskb = skb;
+-	} else {
+-		nskb = alloc_skb(NET_IP_ALIGN + skb->len +
+-				 padlen + len, GFP_ATOMIC);
+-		if (!nskb)
+-			return NULL;
+-		skb_reserve(nskb, NET_IP_ALIGN);
+-
+-		skb_reset_mac_header(nskb);
+-		skb_set_network_header(nskb,
+-				       skb_network_header(skb) - skb->head);
+-		skb_set_transport_header(nskb,
+-					 skb_transport_header(skb) - skb->head);
+-		skb_copy_and_csum_dev(skb, skb_put(nskb, skb->len));
+-
+-		/* Let skb_put_padto() free nskb, and let dsa_slave_xmit() free
+-		 * skb
+-		 */
+-		if (skb_put_padto(nskb, nskb->len + padlen))
+-			return NULL;
+-
+-		consume_skb(skb);
+-	}
+-
+-	return nskb;
+-}
+-
+ static struct sk_buff *ksz_common_rcv(struct sk_buff *skb,
+ 				      struct net_device *dev,
+ 				      unsigned int port, unsigned int len)
+@@ -90,23 +50,18 @@ static struct sk_buff *ksz_common_rcv(struct sk_buff *skb,
+ static struct sk_buff *ksz8795_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct dsa_port *dp = dsa_slave_to_port(dev);
+-	struct sk_buff *nskb;
+ 	u8 *tag;
+ 	u8 *addr;
+ 
+-	nskb = ksz_common_xmit(skb, dev, KSZ_INGRESS_TAG_LEN);
+-	if (!nskb)
+-		return NULL;
+-
+ 	/* Tag encoding */
+-	tag = skb_put(nskb, KSZ_INGRESS_TAG_LEN);
+-	addr = skb_mac_header(nskb);
++	tag = skb_put(skb, KSZ_INGRESS_TAG_LEN);
++	addr = skb_mac_header(skb);
+ 
+ 	*tag = 1 << dp->index;
+ 	if (is_link_local_ether_addr(addr))
+ 		*tag |= KSZ8795_TAIL_TAG_OVERRIDE;
+ 
+-	return nskb;
++	return skb;
+ }
+ 
+ static struct sk_buff *ksz8795_rcv(struct sk_buff *skb, struct net_device *dev,
+@@ -156,18 +111,13 @@ static struct sk_buff *ksz9477_xmit(struct sk_buff *skb,
+ 				    struct net_device *dev)
+ {
+ 	struct dsa_port *dp = dsa_slave_to_port(dev);
+-	struct sk_buff *nskb;
+ 	__be16 *tag;
+ 	u8 *addr;
+ 	u16 val;
+ 
+-	nskb = ksz_common_xmit(skb, dev, KSZ9477_INGRESS_TAG_LEN);
+-	if (!nskb)
+-		return NULL;
+-
+ 	/* Tag encoding */
+-	tag = skb_put(nskb, KSZ9477_INGRESS_TAG_LEN);
+-	addr = skb_mac_header(nskb);
++	tag = skb_put(skb, KSZ9477_INGRESS_TAG_LEN);
++	addr = skb_mac_header(skb);
+ 
+ 	val = BIT(dp->index);
+ 
+@@ -176,7 +126,7 @@ static struct sk_buff *ksz9477_xmit(struct sk_buff *skb,
+ 
+ 	*tag = cpu_to_be16(val);
+ 
+-	return nskb;
++	return skb;
+ }
+ 
+ static struct sk_buff *ksz9477_rcv(struct sk_buff *skb, struct net_device *dev,
+@@ -213,24 +163,19 @@ static struct sk_buff *ksz9893_xmit(struct sk_buff *skb,
+ 				    struct net_device *dev)
+ {
+ 	struct dsa_port *dp = dsa_slave_to_port(dev);
+-	struct sk_buff *nskb;
+ 	u8 *addr;
+ 	u8 *tag;
+ 
+-	nskb = ksz_common_xmit(skb, dev, KSZ_INGRESS_TAG_LEN);
+-	if (!nskb)
+-		return NULL;
+-
+ 	/* Tag encoding */
+-	tag = skb_put(nskb, KSZ_INGRESS_TAG_LEN);
+-	addr = skb_mac_header(nskb);
++	tag = skb_put(skb, KSZ_INGRESS_TAG_LEN);
++	addr = skb_mac_header(skb);
+ 
+ 	*tag = BIT(dp->index);
+ 
+ 	if (is_link_local_ether_addr(addr))
+ 		*tag |= KSZ9893_TAIL_TAG_OVERRIDE;
+ 
+-	return nskb;
++	return skb;
+ }
+ 
+ static const struct dsa_device_ops ksz9893_netdev_ops = {
+diff --git a/net/dsa/tag_lan9303.c b/net/dsa/tag_lan9303.c
+index ccfb6f641bbfb..aa1318dccaf0a 100644
+--- a/net/dsa/tag_lan9303.c
++++ b/net/dsa/tag_lan9303.c
+@@ -58,15 +58,6 @@ static struct sk_buff *lan9303_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	__be16 *lan9303_tag;
+ 	u16 tag;
+ 
+-	/* insert a special VLAN tag between the MAC addresses
+-	 * and the current ethertype field.
+-	 */
+-	if (skb_cow_head(skb, LAN9303_TAG_LEN) < 0) {
+-		dev_dbg(&dev->dev,
+-			"Cannot make room for the special tag. Dropping packet\n");
+-		return NULL;
+-	}
+-
+ 	/* provide 'LAN9303_TAG_LEN' bytes additional space */
+ 	skb_push(skb, LAN9303_TAG_LEN);
+ 
+diff --git a/net/dsa/tag_mtk.c b/net/dsa/tag_mtk.c
+index 4cdd9cf428fbf..59748487664fe 100644
+--- a/net/dsa/tag_mtk.c
++++ b/net/dsa/tag_mtk.c
+@@ -13,6 +13,7 @@
+ #define MTK_HDR_LEN		4
+ #define MTK_HDR_XMIT_UNTAGGED		0
+ #define MTK_HDR_XMIT_TAGGED_TPID_8100	1
++#define MTK_HDR_XMIT_TAGGED_TPID_88A8	2
+ #define MTK_HDR_RECV_SOURCE_PORT_MASK	GENMASK(2, 0)
+ #define MTK_HDR_XMIT_DP_BIT_MASK	GENMASK(5, 0)
+ #define MTK_HDR_XMIT_SA_DIS		BIT(6)
+@@ -21,8 +22,8 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
+ 				    struct net_device *dev)
+ {
+ 	struct dsa_port *dp = dsa_slave_to_port(dev);
++	u8 xmit_tpid;
+ 	u8 *mtk_tag;
+-	bool is_vlan_skb = true;
+ 	unsigned char *dest = eth_hdr(skb)->h_dest;
+ 	bool is_multicast_skb = is_multicast_ether_addr(dest) &&
+ 				!is_broadcast_ether_addr(dest);
+@@ -33,13 +34,17 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
+ 	 * the both special and VLAN tag at the same time and then look up VLAN
+ 	 * table with VID.
+ 	 */
+-	if (!skb_vlan_tagged(skb)) {
+-		if (skb_cow_head(skb, MTK_HDR_LEN) < 0)
+-			return NULL;
+-
++	switch (skb->protocol) {
++	case htons(ETH_P_8021Q):
++		xmit_tpid = MTK_HDR_XMIT_TAGGED_TPID_8100;
++		break;
++	case htons(ETH_P_8021AD):
++		xmit_tpid = MTK_HDR_XMIT_TAGGED_TPID_88A8;
++		break;
++	default:
++		xmit_tpid = MTK_HDR_XMIT_UNTAGGED;
+ 		skb_push(skb, MTK_HDR_LEN);
+ 		memmove(skb->data, skb->data + MTK_HDR_LEN, 2 * ETH_ALEN);
+-		is_vlan_skb = false;
+ 	}
+ 
+ 	mtk_tag = skb->data + 2 * ETH_ALEN;
+@@ -47,8 +52,7 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
+ 	/* Mark tag attribute on special tag insertion to notify hardware
+ 	 * whether that's a combined special tag with 802.1Q header.
+ 	 */
+-	mtk_tag[0] = is_vlan_skb ? MTK_HDR_XMIT_TAGGED_TPID_8100 :
+-		     MTK_HDR_XMIT_UNTAGGED;
++	mtk_tag[0] = xmit_tpid;
+ 	mtk_tag[1] = (1 << dp->index) & MTK_HDR_XMIT_DP_BIT_MASK;
+ 
+ 	/* Disable SA learning for multicast frames */
+@@ -56,7 +60,7 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
+ 		mtk_tag[1] |= MTK_HDR_XMIT_SA_DIS;
+ 
+ 	/* Tag control information is kept for 802.1Q */
+-	if (!is_vlan_skb) {
++	if (xmit_tpid == MTK_HDR_XMIT_UNTAGGED) {
+ 		mtk_tag[2] = 0;
+ 		mtk_tag[3] = 0;
+ 	}
+diff --git a/net/dsa/tag_ocelot.c b/net/dsa/tag_ocelot.c
+index 3b468aca5c53f..16a1afd5b8e14 100644
+--- a/net/dsa/tag_ocelot.c
++++ b/net/dsa/tag_ocelot.c
+@@ -143,13 +143,6 @@ static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
+ 	struct ocelot_port *ocelot_port;
+ 	u8 *prefix, *injection;
+ 	u64 qos_class, rew_op;
+-	int err;
+-
+-	err = skb_cow_head(skb, OCELOT_TOTAL_TAG_LEN);
+-	if (unlikely(err < 0)) {
+-		netdev_err(netdev, "Cannot make room for tag.\n");
+-		return NULL;
+-	}
+ 
+ 	ocelot_port = ocelot->ports[dp->index];
+ 
+diff --git a/net/dsa/tag_qca.c b/net/dsa/tag_qca.c
+index 1b9e8507112b5..88181b52f480b 100644
+--- a/net/dsa/tag_qca.c
++++ b/net/dsa/tag_qca.c
+@@ -34,9 +34,6 @@ static struct sk_buff *qca_tag_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	__be16 *phdr;
+ 	u16 hdr;
+ 
+-	if (skb_cow_head(skb, QCA_HDR_LEN) < 0)
+-		return NULL;
+-
+ 	skb_push(skb, QCA_HDR_LEN);
+ 
+ 	memmove(skb->data, skb->data + QCA_HDR_LEN, 2 * ETH_ALEN);
+diff --git a/net/dsa/tag_rtl4_a.c b/net/dsa/tag_rtl4_a.c
+index c17d39b4a1a04..e9176475bac89 100644
+--- a/net/dsa/tag_rtl4_a.c
++++ b/net/dsa/tag_rtl4_a.c
+@@ -35,14 +35,12 @@ static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb,
+ 				      struct net_device *dev)
+ {
+ 	struct dsa_port *dp = dsa_slave_to_port(dev);
++	__be16 *p;
+ 	u8 *tag;
+-	u16 *p;
+ 	u16 out;
+ 
+ 	/* Pad out to at least 60 bytes */
+-	if (unlikely(eth_skb_pad(skb)))
+-		return NULL;
+-	if (skb_cow_head(skb, RTL4_A_HDR_LEN) < 0)
++	if (unlikely(__skb_put_padto(skb, ETH_ZLEN, false)))
+ 		return NULL;
+ 
+ 	netdev_dbg(dev, "add realtek tag to package to port %d\n",
+@@ -53,13 +51,13 @@ static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb,
+ 	tag = skb->data + 2 * ETH_ALEN;
+ 
+ 	/* Set Ethertype */
+-	p = (u16 *)tag;
++	p = (__be16 *)tag;
+ 	*p = htons(RTL4_A_ETHERTYPE);
+ 
+ 	out = (RTL4_A_PROTOCOL_RTL8366RB << 12) | (2 << 8);
+-	/* The lower bits is the port numer */
++	/* The lower bits is the port number */
+ 	out |= (u8)dp->index;
+-	p = (u16 *)(tag + 2);
++	p = (__be16 *)(tag + 2);
+ 	*p = htons(out);
+ 
+ 	return skb;
+diff --git a/net/dsa/tag_trailer.c b/net/dsa/tag_trailer.c
+index 3a1cc24a4f0a5..5b97ede56a0fd 100644
+--- a/net/dsa/tag_trailer.c
++++ b/net/dsa/tag_trailer.c
+@@ -13,42 +13,15 @@
+ static struct sk_buff *trailer_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct dsa_port *dp = dsa_slave_to_port(dev);
+-	struct sk_buff *nskb;
+-	int padlen;
+ 	u8 *trailer;
+ 
+-	/*
+-	 * We have to make sure that the trailer ends up as the very
+-	 * last 4 bytes of the packet.  This means that we have to pad
+-	 * the packet to the minimum ethernet frame size, if necessary,
+-	 * before adding the trailer.
+-	 */
+-	padlen = 0;
+-	if (skb->len < 60)
+-		padlen = 60 - skb->len;
+-
+-	nskb = alloc_skb(NET_IP_ALIGN + skb->len + padlen + 4, GFP_ATOMIC);
+-	if (!nskb)
+-		return NULL;
+-	skb_reserve(nskb, NET_IP_ALIGN);
+-
+-	skb_reset_mac_header(nskb);
+-	skb_set_network_header(nskb, skb_network_header(skb) - skb->head);
+-	skb_set_transport_header(nskb, skb_transport_header(skb) - skb->head);
+-	skb_copy_and_csum_dev(skb, skb_put(nskb, skb->len));
+-	consume_skb(skb);
+-
+-	if (padlen) {
+-		skb_put_zero(nskb, padlen);
+-	}
+-
+-	trailer = skb_put(nskb, 4);
++	trailer = skb_put(skb, 4);
+ 	trailer[0] = 0x80;
+ 	trailer[1] = 1 << dp->index;
+ 	trailer[2] = 0x10;
+ 	trailer[3] = 0x00;
+ 
+-	return nskb;
++	return skb;
+ }
+ 
+ static struct sk_buff *trailer_rcv(struct sk_buff *skb, struct net_device *dev,
+diff --git a/net/ethtool/channels.c b/net/ethtool/channels.c
+index 25a9e566ef5cd..6a070dc8e4b0d 100644
+--- a/net/ethtool/channels.c
++++ b/net/ethtool/channels.c
+@@ -116,10 +116,9 @@ int ethnl_set_channels(struct sk_buff *skb, struct genl_info *info)
+ 	struct ethtool_channels channels = {};
+ 	struct ethnl_req_info req_info = {};
+ 	struct nlattr **tb = info->attrs;
+-	const struct nlattr *err_attr;
++	u32 err_attr, max_rx_in_use = 0;
+ 	const struct ethtool_ops *ops;
+ 	struct net_device *dev;
+-	u32 max_rx_in_use = 0;
+ 	int ret;
+ 
+ 	ret = ethnl_parse_header_dev_get(&req_info,
+@@ -157,34 +156,35 @@ int ethnl_set_channels(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	/* ensure new channel counts are within limits */
+ 	if (channels.rx_count > channels.max_rx)
+-		err_attr = tb[ETHTOOL_A_CHANNELS_RX_COUNT];
++		err_attr = ETHTOOL_A_CHANNELS_RX_COUNT;
+ 	else if (channels.tx_count > channels.max_tx)
+-		err_attr = tb[ETHTOOL_A_CHANNELS_TX_COUNT];
++		err_attr = ETHTOOL_A_CHANNELS_TX_COUNT;
+ 	else if (channels.other_count > channels.max_other)
+-		err_attr = tb[ETHTOOL_A_CHANNELS_OTHER_COUNT];
++		err_attr = ETHTOOL_A_CHANNELS_OTHER_COUNT;
+ 	else if (channels.combined_count > channels.max_combined)
+-		err_attr = tb[ETHTOOL_A_CHANNELS_COMBINED_COUNT];
++		err_attr = ETHTOOL_A_CHANNELS_COMBINED_COUNT;
+ 	else
+-		err_attr = NULL;
++		err_attr = 0;
+ 	if (err_attr) {
+ 		ret = -EINVAL;
+-		NL_SET_ERR_MSG_ATTR(info->extack, err_attr,
++		NL_SET_ERR_MSG_ATTR(info->extack, tb[err_attr],
+ 				    "requested channel count exceeds maximum");
+ 		goto out_ops;
+ 	}
+ 
+ 	/* ensure there is at least one RX and one TX channel */
+ 	if (!channels.combined_count && !channels.rx_count)
+-		err_attr = tb[ETHTOOL_A_CHANNELS_RX_COUNT];
++		err_attr = ETHTOOL_A_CHANNELS_RX_COUNT;
+ 	else if (!channels.combined_count && !channels.tx_count)
+-		err_attr = tb[ETHTOOL_A_CHANNELS_TX_COUNT];
++		err_attr = ETHTOOL_A_CHANNELS_TX_COUNT;
+ 	else
+-		err_attr = NULL;
++		err_attr = 0;
+ 	if (err_attr) {
+ 		if (mod_combined)
+-			err_attr = tb[ETHTOOL_A_CHANNELS_COMBINED_COUNT];
++			err_attr = ETHTOOL_A_CHANNELS_COMBINED_COUNT;
+ 		ret = -EINVAL;
+-		NL_SET_ERR_MSG_ATTR(info->extack, err_attr, "requested channel counts would result in no RX or TX channel being configured");
++		NL_SET_ERR_MSG_ATTR(info->extack, tb[err_attr],
++				    "requested channel counts would result in no RX or TX channel being configured");
+ 		goto out_ops;
+ 	}
+ 
+diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
+index 471d33a0d095f..be09c7669a799 100644
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -519,16 +519,10 @@ int cipso_v4_doi_remove(u32 doi, struct netlbl_audit *audit_info)
+ 		ret_val = -ENOENT;
+ 		goto doi_remove_return;
+ 	}
+-	if (!refcount_dec_and_test(&doi_def->refcount)) {
+-		spin_unlock(&cipso_v4_doi_list_lock);
+-		ret_val = -EBUSY;
+-		goto doi_remove_return;
+-	}
+ 	list_del_rcu(&doi_def->list);
+ 	spin_unlock(&cipso_v4_doi_list_lock);
+ 
+-	cipso_v4_cache_invalidate();
+-	call_rcu(&doi_def->rcu, cipso_v4_doi_free_rcu);
++	cipso_v4_doi_putdef(doi_def);
+ 	ret_val = 0;
+ 
+ doi_remove_return:
+@@ -585,9 +579,6 @@ void cipso_v4_doi_putdef(struct cipso_v4_doi *doi_def)
+ 
+ 	if (!refcount_dec_and_test(&doi_def->refcount))
+ 		return;
+-	spin_lock(&cipso_v4_doi_list_lock);
+-	list_del_rcu(&doi_def->list);
+-	spin_unlock(&cipso_v4_doi_list_lock);
+ 
+ 	cipso_v4_cache_invalidate();
+ 	call_rcu(&doi_def->rcu, cipso_v4_doi_free_rcu);
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 76a420c76f16e..f6cc26de5ed30 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -502,8 +502,7 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
+ 		if (!skb_is_gso(skb) &&
+ 		    (inner_iph->frag_off & htons(IP_DF)) &&
+ 		    mtu < pkt_size) {
+-			memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+-			icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
++			icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
+ 			return -E2BIG;
+ 		}
+ 	}
+@@ -527,7 +526,7 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
+ 
+ 		if (!skb_is_gso(skb) && mtu >= IPV6_MIN_MTU &&
+ 					mtu < pkt_size) {
+-			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+ 			return -E2BIG;
+ 		}
+ 	}
+diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
+index b957cbee2cf7b..84a818b09beeb 100644
+--- a/net/ipv4/ip_vti.c
++++ b/net/ipv4/ip_vti.c
+@@ -238,13 +238,13 @@ static netdev_tx_t vti_xmit(struct sk_buff *skb, struct net_device *dev,
+ 	if (skb->len > mtu) {
+ 		skb_dst_update_pmtu_no_confirm(skb, mtu);
+ 		if (skb->protocol == htons(ETH_P_IP)) {
+-			icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+-				  htonl(mtu));
++			icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
++				      htonl(mtu));
+ 		} else {
+ 			if (mtu < IPV6_MIN_MTU)
+ 				mtu = IPV6_MIN_MTU;
+ 
+-			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+ 		}
+ 
+ 		dst_release(dst);
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index f63f7ada51b36..f2d313c5900df 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -1182,7 +1182,7 @@ out:
+ 
+ /* rtnl */
+ /* remove all nexthops tied to a device being deleted */
+-static void nexthop_flush_dev(struct net_device *dev)
++static void nexthop_flush_dev(struct net_device *dev, unsigned long event)
+ {
+ 	unsigned int hash = nh_dev_hashfn(dev->ifindex);
+ 	struct net *net = dev_net(dev);
+@@ -1194,6 +1194,10 @@ static void nexthop_flush_dev(struct net_device *dev)
+ 		if (nhi->fib_nhc.nhc_dev != dev)
+ 			continue;
+ 
++		if (nhi->reject_nh &&
++		    (event == NETDEV_DOWN || event == NETDEV_CHANGE))
++			continue;
++
+ 		remove_nexthop(net, nhi->nh_parent, NULL);
+ 	}
+ }
+@@ -1940,11 +1944,11 @@ static int nh_netdev_event(struct notifier_block *this,
+ 	switch (event) {
+ 	case NETDEV_DOWN:
+ 	case NETDEV_UNREGISTER:
+-		nexthop_flush_dev(dev);
++		nexthop_flush_dev(dev, event);
+ 		break;
+ 	case NETDEV_CHANGE:
+ 		if (!(dev_get_flags(dev) & (IFF_RUNNING | IFF_LOWER_UP)))
+-			nexthop_flush_dev(dev);
++			nexthop_flush_dev(dev, event);
+ 		break;
+ 	case NETDEV_CHANGEMTU:
+ 		info_ext = ptr;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 41d03683b13d6..2384ac048bead 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3164,16 +3164,23 @@ static int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 		break;
+ 
+ 	case TCP_QUEUE_SEQ:
+-		if (sk->sk_state != TCP_CLOSE)
++		if (sk->sk_state != TCP_CLOSE) {
+ 			err = -EPERM;
+-		else if (tp->repair_queue == TCP_SEND_QUEUE)
+-			WRITE_ONCE(tp->write_seq, val);
+-		else if (tp->repair_queue == TCP_RECV_QUEUE) {
+-			WRITE_ONCE(tp->rcv_nxt, val);
+-			WRITE_ONCE(tp->copied_seq, val);
+-		}
+-		else
++		} else if (tp->repair_queue == TCP_SEND_QUEUE) {
++			if (!tcp_rtx_queue_empty(sk))
++				err = -EPERM;
++			else
++				WRITE_ONCE(tp->write_seq, val);
++		} else if (tp->repair_queue == TCP_RECV_QUEUE) {
++			if (tp->rcv_nxt != tp->copied_seq) {
++				err = -EPERM;
++			} else {
++				WRITE_ONCE(tp->rcv_nxt, val);
++				WRITE_ONCE(tp->copied_seq, val);
++			}
++		} else {
+ 			err = -EINVAL;
++		}
+ 		break;
+ 
+ 	case TCP_REPAIR_OPTIONS:
+@@ -3829,7 +3836,8 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
+ 
+ 		if (get_user(len, optlen))
+ 			return -EFAULT;
+-		if (len < offsetofend(struct tcp_zerocopy_receive, length))
++		if (len < 0 ||
++		    len < offsetofend(struct tcp_zerocopy_receive, length))
+ 			return -EINVAL;
+ 		if (len > sizeof(zc)) {
+ 			len = sizeof(zc);
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index cfdaac4a57e41..6e2b02cf78418 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -522,7 +522,7 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
+ 	}
+ 
+ 	if (!sk || NAPI_GRO_CB(skb)->encap_mark ||
+-	    (skb->ip_summed != CHECKSUM_PARTIAL &&
++	    (uh->check && skb->ip_summed != CHECKSUM_PARTIAL &&
+ 	     NAPI_GRO_CB(skb)->csum_cnt == 0 &&
+ 	     !NAPI_GRO_CB(skb)->csum_valid) ||
+ 	    !udp_sk(sk)->gro_receive)
+diff --git a/net/ipv6/calipso.c b/net/ipv6/calipso.c
+index 78f766019b7e0..0ea66e9db2495 100644
+--- a/net/ipv6/calipso.c
++++ b/net/ipv6/calipso.c
+@@ -83,6 +83,9 @@ struct calipso_map_cache_entry {
+ 
+ static struct calipso_map_cache_bkt *calipso_cache;
+ 
++static void calipso_cache_invalidate(void);
++static void calipso_doi_putdef(struct calipso_doi *doi_def);
++
+ /* Label Mapping Cache Functions
+  */
+ 
+@@ -444,15 +447,10 @@ static int calipso_doi_remove(u32 doi, struct netlbl_audit *audit_info)
+ 		ret_val = -ENOENT;
+ 		goto doi_remove_return;
+ 	}
+-	if (!refcount_dec_and_test(&doi_def->refcount)) {
+-		spin_unlock(&calipso_doi_list_lock);
+-		ret_val = -EBUSY;
+-		goto doi_remove_return;
+-	}
+ 	list_del_rcu(&doi_def->list);
+ 	spin_unlock(&calipso_doi_list_lock);
+ 
+-	call_rcu(&doi_def->rcu, calipso_doi_free_rcu);
++	calipso_doi_putdef(doi_def);
+ 	ret_val = 0;
+ 
+ doi_remove_return:
+@@ -508,10 +506,8 @@ static void calipso_doi_putdef(struct calipso_doi *doi_def)
+ 
+ 	if (!refcount_dec_and_test(&doi_def->refcount))
+ 		return;
+-	spin_lock(&calipso_doi_list_lock);
+-	list_del_rcu(&doi_def->list);
+-	spin_unlock(&calipso_doi_list_lock);
+ 
++	calipso_cache_invalidate();
+ 	call_rcu(&doi_def->rcu, calipso_doi_free_rcu);
+ }
+ 
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index cf6e1380b527c..640f71a7b29d9 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -678,8 +678,8 @@ static int prepare_ip6gre_xmit_ipv6(struct sk_buff *skb,
+ 
+ 		tel = (struct ipv6_tlv_tnl_enc_lim *)&skb_network_header(skb)[offset];
+ 		if (tel->encap_limit == 0) {
+-			icmpv6_send(skb, ICMPV6_PARAMPROB,
+-				    ICMPV6_HDR_FIELD, offset + 2);
++			icmpv6_ndo_send(skb, ICMPV6_PARAMPROB,
++					ICMPV6_HDR_FIELD, offset + 2);
+ 			return -1;
+ 		}
+ 		*encap_limit = tel->encap_limit - 1;
+@@ -805,8 +805,8 @@ static inline int ip6gre_xmit_ipv4(struct sk_buff *skb, struct net_device *dev)
+ 	if (err != 0) {
+ 		/* XXX: send ICMP error even if DF is not set. */
+ 		if (err == -EMSGSIZE)
+-			icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+-				  htonl(mtu));
++			icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
++				      htonl(mtu));
+ 		return -1;
+ 	}
+ 
+@@ -837,7 +837,7 @@ static inline int ip6gre_xmit_ipv6(struct sk_buff *skb, struct net_device *dev)
+ 			  &mtu, skb->protocol);
+ 	if (err != 0) {
+ 		if (err == -EMSGSIZE)
+-			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+ 		return -1;
+ 	}
+ 
+@@ -1063,10 +1063,10 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 		/* XXX: send ICMP error even if DF is not set. */
+ 		if (err == -EMSGSIZE) {
+ 			if (skb->protocol == htons(ETH_P_IP))
+-				icmp_send(skb, ICMP_DEST_UNREACH,
+-					  ICMP_FRAG_NEEDED, htonl(mtu));
++				icmp_ndo_send(skb, ICMP_DEST_UNREACH,
++					      ICMP_FRAG_NEEDED, htonl(mtu));
+ 			else
+-				icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++				icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+ 		}
+ 
+ 		goto tx_err;
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 648db3fe508f0..5d27b5c631217 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1363,8 +1363,8 @@ ipxip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev,
+ 
+ 				tel = (void *)&skb_network_header(skb)[offset];
+ 				if (tel->encap_limit == 0) {
+-					icmpv6_send(skb, ICMPV6_PARAMPROB,
+-						ICMPV6_HDR_FIELD, offset + 2);
++					icmpv6_ndo_send(skb, ICMPV6_PARAMPROB,
++							ICMPV6_HDR_FIELD, offset + 2);
+ 					return -1;
+ 				}
+ 				encap_limit = tel->encap_limit - 1;
+@@ -1416,11 +1416,11 @@ ipxip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev,
+ 		if (err == -EMSGSIZE)
+ 			switch (protocol) {
+ 			case IPPROTO_IPIP:
+-				icmp_send(skb, ICMP_DEST_UNREACH,
+-					  ICMP_FRAG_NEEDED, htonl(mtu));
++				icmp_ndo_send(skb, ICMP_DEST_UNREACH,
++					      ICMP_FRAG_NEEDED, htonl(mtu));
+ 				break;
+ 			case IPPROTO_IPV6:
+-				icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++				icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+ 				break;
+ 			default:
+ 				break;
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index 5f9c4fdc120d6..ecfeffc06c55c 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -520,10 +520,10 @@ vti6_xmit(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
+ 			if (mtu < IPV6_MIN_MTU)
+ 				mtu = IPV6_MIN_MTU;
+ 
+-			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+ 		} else {
+-			icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+-				  htonl(mtu));
++			icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
++				      htonl(mtu));
+ 		}
+ 
+ 		err = -EMSGSIZE;
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index ff048cb8d8074..b26f469a3fb8c 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -987,7 +987,7 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb,
+ 			skb_dst_update_pmtu_no_confirm(skb, mtu);
+ 
+ 		if (skb->len > mtu && !skb_is_gso(skb)) {
+-			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+ 			ip_rt_put(rt);
+ 			goto tx_error;
+ 		}
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 7be5103ff2a84..203890e378cb0 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -649,9 +649,9 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
+ 	/* Parse and check optional cookie */
+ 	if (session->peer_cookie_len > 0) {
+ 		if (memcmp(ptr, &session->peer_cookie[0], session->peer_cookie_len)) {
+-			pr_warn_ratelimited("%s: cookie mismatch (%u/%u). Discarding.\n",
+-					    tunnel->name, tunnel->tunnel_id,
+-					    session->session_id);
++			pr_debug_ratelimited("%s: cookie mismatch (%u/%u). Discarding.\n",
++					     tunnel->name, tunnel->tunnel_id,
++					     session->session_id);
+ 			atomic_long_inc(&session->stats.rx_cookie_discards);
+ 			goto discard;
+ 		}
+@@ -702,8 +702,8 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
+ 		 * If user has configured mandatory sequence numbers, discard.
+ 		 */
+ 		if (session->recv_seq) {
+-			pr_warn_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
+-					    session->name);
++			pr_debug_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
++					     session->name);
+ 			atomic_long_inc(&session->stats.rx_seq_discards);
+ 			goto discard;
+ 		}
+@@ -718,8 +718,8 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
+ 			session->send_seq = 0;
+ 			l2tp_session_set_header_len(session, tunnel->version);
+ 		} else if (session->send_seq) {
+-			pr_warn_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
+-					    session->name);
++			pr_debug_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
++					     session->name);
+ 			atomic_long_inc(&session->stats.rx_seq_discards);
+ 			goto discard;
+ 		}
+@@ -809,9 +809,9 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
+ 
+ 	/* Short packet? */
+ 	if (!pskb_may_pull(skb, L2TP_HDR_SIZE_MAX)) {
+-		pr_warn_ratelimited("%s: recv short packet (len=%d)\n",
+-				    tunnel->name, skb->len);
+-		goto error;
++		pr_debug_ratelimited("%s: recv short packet (len=%d)\n",
++				     tunnel->name, skb->len);
++		goto invalid;
+ 	}
+ 
+ 	/* Point to L2TP header */
+@@ -824,9 +824,9 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
+ 	/* Check protocol version */
+ 	version = hdrflags & L2TP_HDR_VER_MASK;
+ 	if (version != tunnel->version) {
+-		pr_warn_ratelimited("%s: recv protocol version mismatch: got %d expected %d\n",
+-				    tunnel->name, version, tunnel->version);
+-		goto error;
++		pr_debug_ratelimited("%s: recv protocol version mismatch: got %d expected %d\n",
++				     tunnel->name, version, tunnel->version);
++		goto invalid;
+ 	}
+ 
+ 	/* Get length of L2TP packet */
+@@ -834,7 +834,7 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
+ 
+ 	/* If type is control packet, it is handled by userspace. */
+ 	if (hdrflags & L2TP_HDRFLAG_T)
+-		goto error;
++		goto pass;
+ 
+ 	/* Skip flags */
+ 	ptr += 2;
+@@ -863,21 +863,24 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
+ 			l2tp_session_dec_refcount(session);
+ 
+ 		/* Not found? Pass to userspace to deal with */
+-		pr_warn_ratelimited("%s: no session found (%u/%u). Passing up.\n",
+-				    tunnel->name, tunnel_id, session_id);
+-		goto error;
++		pr_debug_ratelimited("%s: no session found (%u/%u). Passing up.\n",
++				     tunnel->name, tunnel_id, session_id);
++		goto pass;
+ 	}
+ 
+ 	if (tunnel->version == L2TP_HDR_VER_3 &&
+ 	    l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr))
+-		goto error;
++		goto invalid;
+ 
+ 	l2tp_recv_common(session, skb, ptr, optr, hdrflags, length);
+ 	l2tp_session_dec_refcount(session);
+ 
+ 	return 0;
+ 
+-error:
++invalid:
++	atomic_long_inc(&tunnel->stats.rx_invalid);
++
++pass:
+ 	/* Put UDP header back */
+ 	__skb_push(skb, sizeof(struct udphdr));
+ 
+diff --git a/net/l2tp/l2tp_core.h b/net/l2tp/l2tp_core.h
+index cb21d906343e8..98ea98eb9567b 100644
+--- a/net/l2tp/l2tp_core.h
++++ b/net/l2tp/l2tp_core.h
+@@ -39,6 +39,7 @@ struct l2tp_stats {
+ 	atomic_long_t		rx_oos_packets;
+ 	atomic_long_t		rx_errors;
+ 	atomic_long_t		rx_cookie_discards;
++	atomic_long_t		rx_invalid;
+ };
+ 
+ struct l2tp_tunnel;
+diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c
+index 83956c9ee1fcc..96eb91be9238b 100644
+--- a/net/l2tp/l2tp_netlink.c
++++ b/net/l2tp/l2tp_netlink.c
+@@ -428,6 +428,9 @@ static int l2tp_nl_tunnel_send(struct sk_buff *skb, u32 portid, u32 seq, int fla
+ 			      L2TP_ATTR_STATS_PAD) ||
+ 	    nla_put_u64_64bit(skb, L2TP_ATTR_RX_ERRORS,
+ 			      atomic_long_read(&tunnel->stats.rx_errors),
++			      L2TP_ATTR_STATS_PAD) ||
++	    nla_put_u64_64bit(skb, L2TP_ATTR_RX_INVALID,
++			      atomic_long_read(&tunnel->stats.rx_invalid),
+ 			      L2TP_ATTR_STATS_PAD))
+ 		goto nla_put_failure;
+ 	nla_nest_end(skb, nest);
+@@ -771,6 +774,9 @@ static int l2tp_nl_session_send(struct sk_buff *skb, u32 portid, u32 seq, int fl
+ 			      L2TP_ATTR_STATS_PAD) ||
+ 	    nla_put_u64_64bit(skb, L2TP_ATTR_RX_ERRORS,
+ 			      atomic_long_read(&session->stats.rx_errors),
++			      L2TP_ATTR_STATS_PAD) ||
++	    nla_put_u64_64bit(skb, L2TP_ATTR_RX_INVALID,
++			      atomic_long_read(&session->stats.rx_invalid),
+ 			      L2TP_ATTR_STATS_PAD))
+ 		goto nla_put_failure;
+ 	nla_nest_end(skb, nest);
+diff --git a/net/mpls/mpls_gso.c b/net/mpls/mpls_gso.c
+index b1690149b6fa0..1482259de9b5d 100644
+--- a/net/mpls/mpls_gso.c
++++ b/net/mpls/mpls_gso.c
+@@ -14,6 +14,7 @@
+ #include <linux/netdev_features.h>
+ #include <linux/netdevice.h>
+ #include <linux/skbuff.h>
++#include <net/mpls.h>
+ 
+ static struct sk_buff *mpls_gso_segment(struct sk_buff *skb,
+ 				       netdev_features_t features)
+@@ -27,6 +28,8 @@ static struct sk_buff *mpls_gso_segment(struct sk_buff *skb,
+ 
+ 	skb_reset_network_header(skb);
+ 	mpls_hlen = skb_inner_network_header(skb) - skb_network_header(skb);
++	if (unlikely(!mpls_hlen || mpls_hlen % MPLS_HLEN))
++		goto out;
+ 	if (unlikely(!pskb_may_pull(skb, mpls_hlen)))
+ 		goto out;
+ 
+diff --git a/net/netfilter/nf_nat_proto.c b/net/netfilter/nf_nat_proto.c
+index e87b6bd6b3cdb..4731d21fc3ad8 100644
+--- a/net/netfilter/nf_nat_proto.c
++++ b/net/netfilter/nf_nat_proto.c
+@@ -646,8 +646,8 @@ nf_nat_ipv4_fn(void *priv, struct sk_buff *skb,
+ }
+ 
+ static unsigned int
+-nf_nat_ipv4_in(void *priv, struct sk_buff *skb,
+-	       const struct nf_hook_state *state)
++nf_nat_ipv4_pre_routing(void *priv, struct sk_buff *skb,
++			const struct nf_hook_state *state)
+ {
+ 	unsigned int ret;
+ 	__be32 daddr = ip_hdr(skb)->daddr;
+@@ -659,6 +659,23 @@ nf_nat_ipv4_in(void *priv, struct sk_buff *skb,
+ 	return ret;
+ }
+ 
++static unsigned int
++nf_nat_ipv4_local_in(void *priv, struct sk_buff *skb,
++		     const struct nf_hook_state *state)
++{
++	__be32 saddr = ip_hdr(skb)->saddr;
++	struct sock *sk = skb->sk;
++	unsigned int ret;
++
++	ret = nf_nat_ipv4_fn(priv, skb, state);
++
++	if (ret == NF_ACCEPT && sk && saddr != ip_hdr(skb)->saddr &&
++	    !inet_sk_transparent(sk))
++		skb_orphan(skb); /* TCP edemux obtained wrong socket */
++
++	return ret;
++}
++
+ static unsigned int
+ nf_nat_ipv4_out(void *priv, struct sk_buff *skb,
+ 		const struct nf_hook_state *state)
+@@ -736,7 +753,7 @@ nf_nat_ipv4_local_fn(void *priv, struct sk_buff *skb,
+ static const struct nf_hook_ops nf_nat_ipv4_ops[] = {
+ 	/* Before packet filtering, change destination */
+ 	{
+-		.hook		= nf_nat_ipv4_in,
++		.hook		= nf_nat_ipv4_pre_routing,
+ 		.pf		= NFPROTO_IPV4,
+ 		.hooknum	= NF_INET_PRE_ROUTING,
+ 		.priority	= NF_IP_PRI_NAT_DST,
+@@ -757,7 +774,7 @@ static const struct nf_hook_ops nf_nat_ipv4_ops[] = {
+ 	},
+ 	/* After packet filtering, change source */
+ 	{
+-		.hook		= nf_nat_ipv4_fn,
++		.hook		= nf_nat_ipv4_local_in,
+ 		.pf		= NFPROTO_IPV4,
+ 		.hooknum	= NF_INET_LOCAL_IN,
+ 		.priority	= NF_IP_PRI_NAT_SRC,
+diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
+index acce622582e3d..bce6ca203d462 100644
+--- a/net/netfilter/x_tables.c
++++ b/net/netfilter/x_tables.c
+@@ -330,6 +330,7 @@ static int match_revfn(u8 af, const char *name, u8 revision, int *bestp)
+ 	const struct xt_match *m;
+ 	int have_rev = 0;
+ 
++	mutex_lock(&xt[af].mutex);
+ 	list_for_each_entry(m, &xt[af].match, list) {
+ 		if (strcmp(m->name, name) == 0) {
+ 			if (m->revision > *bestp)
+@@ -338,6 +339,7 @@ static int match_revfn(u8 af, const char *name, u8 revision, int *bestp)
+ 				have_rev = 1;
+ 		}
+ 	}
++	mutex_unlock(&xt[af].mutex);
+ 
+ 	if (af != NFPROTO_UNSPEC && !have_rev)
+ 		return match_revfn(NFPROTO_UNSPEC, name, revision, bestp);
+@@ -350,6 +352,7 @@ static int target_revfn(u8 af, const char *name, u8 revision, int *bestp)
+ 	const struct xt_target *t;
+ 	int have_rev = 0;
+ 
++	mutex_lock(&xt[af].mutex);
+ 	list_for_each_entry(t, &xt[af].target, list) {
+ 		if (strcmp(t->name, name) == 0) {
+ 			if (t->revision > *bestp)
+@@ -358,6 +361,7 @@ static int target_revfn(u8 af, const char *name, u8 revision, int *bestp)
+ 				have_rev = 1;
+ 		}
+ 	}
++	mutex_unlock(&xt[af].mutex);
+ 
+ 	if (af != NFPROTO_UNSPEC && !have_rev)
+ 		return target_revfn(NFPROTO_UNSPEC, name, revision, bestp);
+@@ -371,12 +375,10 @@ int xt_find_revision(u8 af, const char *name, u8 revision, int target,
+ {
+ 	int have_rev, best = -1;
+ 
+-	mutex_lock(&xt[af].mutex);
+ 	if (target == 1)
+ 		have_rev = target_revfn(af, name, revision, &best);
+ 	else
+ 		have_rev = match_revfn(af, name, revision, &best);
+-	mutex_unlock(&xt[af].mutex);
+ 
+ 	/* Nothing at all?  Return 0 to try loading module. */
+ 	if (best == -1) {
+diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c
+index 726dda95934c6..4f50a64315cf0 100644
+--- a/net/netlabel/netlabel_cipso_v4.c
++++ b/net/netlabel/netlabel_cipso_v4.c
+@@ -575,6 +575,7 @@ list_start:
+ 
+ 		break;
+ 	}
++	cipso_v4_doi_putdef(doi_def);
+ 	rcu_read_unlock();
+ 
+ 	genlmsg_end(ans_skb, data);
+@@ -583,12 +584,14 @@ list_start:
+ list_retry:
+ 	/* XXX - this limit is a guesstimate */
+ 	if (nlsze_mult < 4) {
++		cipso_v4_doi_putdef(doi_def);
+ 		rcu_read_unlock();
+ 		kfree_skb(ans_skb);
+ 		nlsze_mult *= 2;
+ 		goto list_start;
+ 	}
+ list_failure_lock:
++	cipso_v4_doi_putdef(doi_def);
+ 	rcu_read_unlock();
+ list_failure:
+ 	kfree_skb(ans_skb);
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index d7134c558993c..38de24af24c44 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -935,8 +935,10 @@ static int qrtr_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 	plen = (len + 3) & ~3;
+ 	skb = sock_alloc_send_skb(sk, plen + QRTR_HDR_MAX_SIZE,
+ 				  msg->msg_flags & MSG_DONTWAIT, &rc);
+-	if (!skb)
++	if (!skb) {
++		rc = -ENOMEM;
+ 		goto out_node;
++	}
+ 
+ 	skb_reserve(skb, QRTR_HDR_MAX_SIZE);
+ 
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 5e8e49c4ab5ca..54a8c363bcdda 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -2167,7 +2167,7 @@ static int tc_dump_tclass_qdisc(struct Qdisc *q, struct sk_buff *skb,
+ 
+ static int tc_dump_tclass_root(struct Qdisc *root, struct sk_buff *skb,
+ 			       struct tcmsg *tcm, struct netlink_callback *cb,
+-			       int *t_p, int s_t)
++			       int *t_p, int s_t, bool recur)
+ {
+ 	struct Qdisc *q;
+ 	int b;
+@@ -2178,7 +2178,7 @@ static int tc_dump_tclass_root(struct Qdisc *root, struct sk_buff *skb,
+ 	if (tc_dump_tclass_qdisc(root, skb, tcm, cb, t_p, s_t) < 0)
+ 		return -1;
+ 
+-	if (!qdisc_dev(root))
++	if (!qdisc_dev(root) || !recur)
+ 		return 0;
+ 
+ 	if (tcm->tcm_parent) {
+@@ -2213,13 +2213,13 @@ static int tc_dump_tclass(struct sk_buff *skb, struct netlink_callback *cb)
+ 	s_t = cb->args[0];
+ 	t = 0;
+ 
+-	if (tc_dump_tclass_root(dev->qdisc, skb, tcm, cb, &t, s_t) < 0)
++	if (tc_dump_tclass_root(dev->qdisc, skb, tcm, cb, &t, s_t, true) < 0)
+ 		goto done;
+ 
+ 	dev_queue = dev_ingress_queue(dev);
+ 	if (dev_queue &&
+ 	    tc_dump_tclass_root(dev_queue->qdisc_sleeping, skb, tcm, cb,
+-				&t, s_t) < 0)
++				&t, s_t, false) < 0)
+ 		goto done;
+ 
+ done:
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index cf702a5f7fe5d..39ed0e0afe6d9 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -963,8 +963,11 @@ void rpc_execute(struct rpc_task *task)
+ 
+ 	rpc_set_active(task);
+ 	rpc_make_runnable(rpciod_workqueue, task);
+-	if (!is_async)
++	if (!is_async) {
++		unsigned int pflags = memalloc_nofs_save();
+ 		__rpc_execute(task);
++		memalloc_nofs_restore(pflags);
++	}
+ }
+ 
+ static void rpc_async_schedule(struct work_struct *work)
+diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
+index 33c58de58626c..3edae90188936 100644
+--- a/samples/bpf/xdpsock_user.c
++++ b/samples/bpf/xdpsock_user.c
+@@ -1543,5 +1543,7 @@ int main(int argc, char **argv)
+ 
+ 	xdpsock_cleanup();
+ 
++	munmap(bufs, NUM_FRAMES * opt_xsk_frame_size);
++
+ 	return 0;
+ }
+diff --git a/security/commoncap.c b/security/commoncap.c
+index b2a656947504d..a6c9bb4441d54 100644
+--- a/security/commoncap.c
++++ b/security/commoncap.c
+@@ -500,8 +500,7 @@ int cap_convert_nscap(struct dentry *dentry, void **ivalue, size_t size)
+ 	__u32 magic, nsmagic;
+ 	struct inode *inode = d_backing_inode(dentry);
+ 	struct user_namespace *task_ns = current_user_ns(),
+-		*fs_ns = inode->i_sb->s_user_ns,
+-		*ancestor;
++		*fs_ns = inode->i_sb->s_user_ns;
+ 	kuid_t rootid;
+ 	size_t newsize;
+ 
+@@ -524,15 +523,6 @@ int cap_convert_nscap(struct dentry *dentry, void **ivalue, size_t size)
+ 	if (nsrootid == -1)
+ 		return -EINVAL;
+ 
+-	/*
+-	 * Do not allow allow adding a v3 filesystem capability xattr
+-	 * if the rootid field is ambiguous.
+-	 */
+-	for (ancestor = task_ns->parent; ancestor; ancestor = ancestor->parent) {
+-		if (from_kuid(ancestor, rootid) == 0)
+-			return -EINVAL;
+-	}
+-
+ 	newsize = sizeof(struct vfs_ns_cap_data);
+ 	nscap = kmalloc(newsize, GFP_ATOMIC);
+ 	if (!nscap)
+diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
+index 6a85645663759..17a25e453f60c 100644
+--- a/sound/pci/hda/hda_bind.c
++++ b/sound/pci/hda/hda_bind.c
+@@ -47,6 +47,10 @@ static void hda_codec_unsol_event(struct hdac_device *dev, unsigned int ev)
+ 	if (codec->bus->shutdown)
+ 		return;
+ 
++	/* ignore unsol events during system suspend/resume */
++	if (codec->core.dev.power.power_state.event != PM_EVENT_ON)
++		return;
++
+ 	if (codec->patch_ops.unsol_event)
+ 		codec->patch_ops.unsol_event(codec, ev);
+ }
+diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
+index 80016b7b6849e..b972d59eb1ec2 100644
+--- a/sound/pci/hda/hda_controller.c
++++ b/sound/pci/hda/hda_controller.c
+@@ -609,13 +609,6 @@ static int azx_pcm_open(struct snd_pcm_substream *substream)
+ 				     20,
+ 				     178000000);
+ 
+-	/* by some reason, the playback stream stalls on PulseAudio with
+-	 * tsched=1 when a capture stream triggers.  Until we figure out the
+-	 * real cause, disable tsched mode by telling the PCM info flag.
+-	 */
+-	if (chip->driver_caps & AZX_DCAPS_AMD_WORKAROUND)
+-		runtime->hw.info |= SNDRV_PCM_INFO_BATCH;
+-
+ 	if (chip->align_buffer_size)
+ 		/* constrain buffer sizes to be multiple of 128
+ 		   bytes. This is more efficient in terms of memory
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 145f4ff47d54f..d244616d28d88 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1026,6 +1026,8 @@ static int azx_prepare(struct device *dev)
+ 	chip = card->private_data;
+ 	chip->pm_prepared = 1;
+ 
++	flush_work(&azx_bus(chip)->unsol_work);
++
+ 	/* HDA controller always requires different WAKEEN for runtime suspend
+ 	 * and system suspend, so don't use direct-complete here.
+ 	 */
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index ee500e46dd4f6..f774b2ac9720c 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -1275,6 +1275,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
+ 	SND_PCI_QUIRK(0x1102, 0x0013, "Recon3D", QUIRK_R3D),
+ 	SND_PCI_QUIRK(0x1102, 0x0018, "Recon3D", QUIRK_R3D),
+ 	SND_PCI_QUIRK(0x1102, 0x0051, "Sound Blaster AE-5", QUIRK_AE5),
++	SND_PCI_QUIRK(0x1102, 0x0191, "Sound Blaster AE-5 Plus", QUIRK_AE5),
+ 	SND_PCI_QUIRK(0x1102, 0x0081, "Sound Blaster AE-7", QUIRK_AE7),
+ 	{}
+ };
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index d49cc4409d59c..a980a4eda51c9 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -149,6 +149,21 @@ static int cx_auto_vmaster_mute_led(struct led_classdev *led_cdev,
+ 	return 0;
+ }
+ 
++static void cxt_init_gpio_led(struct hda_codec *codec)
++{
++	struct conexant_spec *spec = codec->spec;
++	unsigned int mask = spec->gpio_mute_led_mask | spec->gpio_mic_led_mask;
++
++	if (mask) {
++		snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_MASK,
++				    mask);
++		snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DIRECTION,
++				    mask);
++		snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DATA,
++				    spec->gpio_led);
++	}
++}
++
+ static int cx_auto_init(struct hda_codec *codec)
+ {
+ 	struct conexant_spec *spec = codec->spec;
+@@ -156,6 +171,7 @@ static int cx_auto_init(struct hda_codec *codec)
+ 	if (!spec->dynamic_eapd)
+ 		cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, true);
+ 
++	cxt_init_gpio_led(codec);
+ 	snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_INIT);
+ 
+ 	return 0;
+@@ -215,6 +231,7 @@ enum {
+ 	CXT_FIXUP_HP_SPECTRE,
+ 	CXT_FIXUP_HP_GATE_MIC,
+ 	CXT_FIXUP_MUTE_LED_GPIO,
++	CXT_FIXUP_HP_ZBOOK_MUTE_LED,
+ 	CXT_FIXUP_HEADSET_MIC,
+ 	CXT_FIXUP_HP_MIC_NO_PRESENCE,
+ };
+@@ -654,31 +671,36 @@ static int cxt_gpio_micmute_update(struct led_classdev *led_cdev,
+ 	return 0;
+ }
+ 
+-
+-static void cxt_fixup_mute_led_gpio(struct hda_codec *codec,
+-				const struct hda_fixup *fix, int action)
++static void cxt_setup_mute_led(struct hda_codec *codec,
++			       unsigned int mute, unsigned int mic_mute)
+ {
+ 	struct conexant_spec *spec = codec->spec;
+-	static const struct hda_verb gpio_init[] = {
+-		{ 0x01, AC_VERB_SET_GPIO_MASK, 0x03 },
+-		{ 0x01, AC_VERB_SET_GPIO_DIRECTION, 0x03 },
+-		{}
+-	};
+ 
+-	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++	spec->gpio_led = 0;
++	spec->mute_led_polarity = 0;
++	if (mute) {
+ 		snd_hda_gen_add_mute_led_cdev(codec, cxt_gpio_mute_update);
+-		spec->gpio_led = 0;
+-		spec->mute_led_polarity = 0;
+-		spec->gpio_mute_led_mask = 0x01;
+-		spec->gpio_mic_led_mask = 0x02;
++		spec->gpio_mute_led_mask = mute;
++	}
++	if (mic_mute) {
+ 		snd_hda_gen_add_micmute_led_cdev(codec, cxt_gpio_micmute_update);
++		spec->gpio_mic_led_mask = mic_mute;
+ 	}
+-	snd_hda_add_verbs(codec, gpio_init);
+-	if (spec->gpio_led)
+-		snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DATA,
+-				    spec->gpio_led);
+ }
+ 
++static void cxt_fixup_mute_led_gpio(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	if (action == HDA_FIXUP_ACT_PRE_PROBE)
++		cxt_setup_mute_led(codec, 0x01, 0x02);
++}
++
++static void cxt_fixup_hp_zbook_mute_led(struct hda_codec *codec,
++					const struct hda_fixup *fix, int action)
++{
++	if (action == HDA_FIXUP_ACT_PRE_PROBE)
++		cxt_setup_mute_led(codec, 0x10, 0x20);
++}
+ 
+ /* ThinkPad X200 & co with cxt5051 */
+ static const struct hda_pintbl cxt_pincfg_lenovo_x200[] = {
+@@ -839,6 +861,10 @@ static const struct hda_fixup cxt_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cxt_fixup_mute_led_gpio,
+ 	},
++	[CXT_FIXUP_HP_ZBOOK_MUTE_LED] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = cxt_fixup_hp_zbook_mute_led,
++	},
+ 	[CXT_FIXUP_HEADSET_MIC] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cxt_fixup_headset_mic,
+@@ -917,6 +943,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8402, "HP ProBook 645 G4", CXT_FIXUP_MUTE_LED_GPIO),
++	SND_PCI_QUIRK(0x103c, 0x8427, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8456, "HP Z2 G4 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+@@ -956,6 +983,7 @@ static const struct hda_model_fixup cxt5066_fixup_models[] = {
+ 	{ .id = CXT_FIXUP_MUTE_LED_EAPD, .name = "mute-led-eapd" },
+ 	{ .id = CXT_FIXUP_HP_DOCK, .name = "hp-dock" },
+ 	{ .id = CXT_FIXUP_MUTE_LED_GPIO, .name = "mute-led-gpio" },
++	{ .id = CXT_FIXUP_HP_ZBOOK_MUTE_LED, .name = "hp-zbook-mute-led" },
+ 	{ .id = CXT_FIXUP_HP_MIC_NO_PRESENCE, .name = "hp-mic-fix" },
+ 	{}
+ };
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index c67d5915ce243..8c6f10cbced32 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2475,6 +2475,18 @@ static void generic_hdmi_free(struct hda_codec *codec)
+ }
+ 
+ #ifdef CONFIG_PM
++static int generic_hdmi_suspend(struct hda_codec *codec)
++{
++	struct hdmi_spec *spec = codec->spec;
++	int pin_idx;
++
++	for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) {
++		struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx);
++		cancel_delayed_work_sync(&per_pin->work);
++	}
++	return 0;
++}
++
+ static int generic_hdmi_resume(struct hda_codec *codec)
+ {
+ 	struct hdmi_spec *spec = codec->spec;
+@@ -2498,6 +2510,7 @@ static const struct hda_codec_ops generic_hdmi_patch_ops = {
+ 	.build_controls		= generic_hdmi_build_controls,
+ 	.unsol_event		= hdmi_unsol_event,
+ #ifdef CONFIG_PM
++	.suspend		= generic_hdmi_suspend,
+ 	.resume			= generic_hdmi_resume,
+ #endif
+ };
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 57d6d4ff01e08..fc7c359ae215a 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -830,6 +830,9 @@ static int usb_audio_probe(struct usb_interface *intf,
+ 		snd_media_device_create(chip, intf);
+ 	}
+ 
++	if (quirk)
++		chip->quirk_type = quirk->type;
++
+ 	usb_chip[chip->index] = chip;
+ 	chip->intf[chip->num_interfaces] = intf;
+ 	chip->num_interfaces++;
+@@ -904,6 +907,9 @@ static void usb_audio_disconnect(struct usb_interface *intf)
+ 		}
+ 	}
+ 
++	if (chip->quirk_type & QUIRK_SETUP_DISABLE_AUTOSUSPEND)
++		usb_enable_autosuspend(interface_to_usbdev(intf));
++
+ 	chip->num_interfaces--;
+ 	if (chip->num_interfaces <= 0) {
+ 		usb_chip[chip->index] = NULL;
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index f82c2ab809c1d..10b3a8006bdb3 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -523,7 +523,7 @@ static int setup_disable_autosuspend(struct snd_usb_audio *chip,
+ 				       struct usb_driver *driver,
+ 				       const struct snd_usb_audio_quirk *quirk)
+ {
+-	driver->supports_autosuspend = 0;
++	usb_disable_autosuspend(interface_to_usbdev(iface));
+ 	return 1;	/* Continue with creating streams and mixer */
+ }
+ 
+@@ -1520,6 +1520,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+ 	case USB_ID(0x1901, 0x0191): /* GE B850V3 CP2114 audio interface */
+ 	case USB_ID(0x21b4, 0x0081): /* AudioQuest DragonFly */
+ 	case USB_ID(0x2912, 0x30c8): /* Audioengine D1 */
++	case USB_ID(0x413c, 0xa506): /* Dell AE515 sound bar */
+ 		return true;
+ 	}
+ 
+@@ -1672,6 +1673,14 @@ void snd_usb_ctl_msg_quirk(struct usb_device *dev, unsigned int pipe,
+ 	    && (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
+ 		msleep(20);
+ 
++	/*
++	 * Plantronics headsets (C320, C320-M, etc) need a delay to avoid
++	 * random microhpone failures.
++	 */
++	if (USB_ID_VENDOR(chip->usb_id) == 0x047f &&
++	    (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
++		msleep(20);
++
+ 	/* Zoom R16/24, many Logitech(at least H650e/H570e/BCC950),
+ 	 * Jabra 550a, Kingston HyperX needs a tiny delay here,
+ 	 * otherwise requests like get/set frequency return
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index 0805b7f21272f..9667060ff92be 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -27,6 +27,7 @@ struct snd_usb_audio {
+ 	struct snd_card *card;
+ 	struct usb_interface *intf[MAX_CARD_INTERFACES];
+ 	u32 usb_id;
++	uint16_t quirk_type;
+ 	struct mutex mutex;
+ 	unsigned int system_suspend;
+ 	atomic_t active;
+diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
+index dfa540d8a02d6..d636643ddd358 100644
+--- a/tools/bpf/resolve_btfids/main.c
++++ b/tools/bpf/resolve_btfids/main.c
+@@ -258,6 +258,11 @@ static struct btf_id *add_symbol(struct rb_root *root, char *name, size_t size)
+ 	return btf_id__add(root, id, false);
+ }
+ 
++/* Older libelf.h and glibc elf.h might not yet define the ELF compression types. */
++#ifndef SHF_COMPRESSED
++#define SHF_COMPRESSED (1 << 11) /* Section with compressed data. */
++#endif
++
+ /*
+  * The data of compressed section should be aligned to 4
+  * (for 32bit) or 8 (for 64 bit) bytes. The binutils ld
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index 9bc537d0b92da..f6e8831673f9b 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -535,15 +535,16 @@ static int xsk_lookup_bpf_maps(struct xsk_socket *xsk)
+ 		if (fd < 0)
+ 			continue;
+ 
++		memset(&map_info, 0, map_len);
+ 		err = bpf_obj_get_info_by_fd(fd, &map_info, &map_len);
+ 		if (err) {
+ 			close(fd);
+ 			continue;
+ 		}
+ 
+-		if (!strcmp(map_info.name, "xsks_map")) {
++		if (!strncmp(map_info.name, "xsks_map", sizeof(map_info.name))) {
+ 			ctx->xsks_map_fd = fd;
+-			continue;
++			break;
+ 		}
+ 
+ 		close(fd);
+diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
+index 62f3deb1d3a8b..e41a8f9b99d2d 100644
+--- a/tools/perf/Makefile.perf
++++ b/tools/perf/Makefile.perf
+@@ -600,7 +600,7 @@ arch_errno_hdr_dir := $(srctree)/tools
+ arch_errno_tbl := $(srctree)/tools/perf/trace/beauty/arch_errno_names.sh
+ 
+ $(arch_errno_name_array): $(arch_errno_tbl)
+-	$(Q)$(SHELL) '$(arch_errno_tbl)' $(firstword $(CC)) $(arch_errno_hdr_dir) > $@
++	$(Q)$(SHELL) '$(arch_errno_tbl)' '$(patsubst -%,,$(CC))' $(arch_errno_hdr_dir) > $@
+ 
+ sync_file_range_arrays := $(beauty_outdir)/sync_file_range_arrays.c
+ sync_file_range_tbls := $(srctree)/tools/perf/trace/beauty/sync_file_range.sh
+diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
+index d42339df20f8d..8a3b7d5a47376 100644
+--- a/tools/perf/util/sort.c
++++ b/tools/perf/util/sort.c
+@@ -3003,7 +3003,7 @@ int output_field_add(struct perf_hpp_list *list, char *tok)
+ 		if (strncasecmp(tok, sd->name, strlen(tok)))
+ 			continue;
+ 
+-		if (sort__mode != SORT_MODE__MEMORY)
++		if (sort__mode != SORT_MODE__BRANCH)
+ 			return -EINVAL;
+ 
+ 		return __sort_dimension__add_output(list, sd);
+@@ -3015,7 +3015,7 @@ int output_field_add(struct perf_hpp_list *list, char *tok)
+ 		if (strncasecmp(tok, sd->name, strlen(tok)))
+ 			continue;
+ 
+-		if (sort__mode != SORT_MODE__BRANCH)
++		if (sort__mode != SORT_MODE__MEMORY)
+ 			return -EINVAL;
+ 
+ 		return __sort_dimension__add_output(list, sd);
+diff --git a/tools/perf/util/trace-event-read.c b/tools/perf/util/trace-event-read.c
+index f507dff713c9f..8a01af783310a 100644
+--- a/tools/perf/util/trace-event-read.c
++++ b/tools/perf/util/trace-event-read.c
+@@ -361,6 +361,7 @@ static int read_saved_cmdline(struct tep_handle *pevent)
+ 		pr_debug("error reading saved cmdlines\n");
+ 		goto out;
+ 	}
++	buf[ret] = '\0';
+ 
+ 	parse_saved_cmdline(pevent, buf, size);
+ 	ret = 0;
+diff --git a/tools/testing/selftests/bpf/progs/netif_receive_skb.c b/tools/testing/selftests/bpf/progs/netif_receive_skb.c
+index 6b670039ea679..1d8918dfbd3ff 100644
+--- a/tools/testing/selftests/bpf/progs/netif_receive_skb.c
++++ b/tools/testing/selftests/bpf/progs/netif_receive_skb.c
+@@ -16,6 +16,13 @@ bool skip = false;
+ #define STRSIZE			2048
+ #define EXPECTED_STRSIZE	256
+ 
++#if defined(bpf_target_s390)
++/* NULL points to a readable struct lowcore on s390, so take the last page */
++#define BADPTR			((void *)0xFFFFFFFFFFFFF000ULL)
++#else
++#define BADPTR			0
++#endif
++
+ #ifndef ARRAY_SIZE
+ #define ARRAY_SIZE(x)	(sizeof(x) / sizeof((x)[0]))
+ #endif
+@@ -113,11 +120,11 @@ int BPF_PROG(trace_netif_receive_skb, struct sk_buff *skb)
+ 	}
+ 
+ 	/* Check invalid ptr value */
+-	p.ptr = 0;
++	p.ptr = BADPTR;
+ 	__ret = bpf_snprintf_btf(str, STRSIZE, &p, sizeof(p), 0);
+ 	if (__ret >= 0) {
+-		bpf_printk("printing NULL should generate error, got (%d)",
+-			   __ret);
++		bpf_printk("printing %llx should generate error, got (%d)",
++			   (unsigned long long)BADPTR, __ret);
+ 		ret = -ERANGE;
+ 	}
+ 
+diff --git a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
+index a621b58ab079d..9afe947cfae95 100644
+--- a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
++++ b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
+@@ -446,10 +446,8 @@ int _geneve_get_tunnel(struct __sk_buff *skb)
+ 	}
+ 
+ 	ret = bpf_skb_get_tunnel_opt(skb, &gopt, sizeof(gopt));
+-	if (ret < 0) {
+-		ERROR(ret);
+-		return TC_ACT_SHOT;
+-	}
++	if (ret < 0)
++		gopt.opt_class = 0;
+ 
+ 	bpf_trace_printk(fmt, sizeof(fmt),
+ 			key.tunnel_id, key.remote_ipv4, gopt.opt_class);
+diff --git a/tools/testing/selftests/bpf/verifier/array_access.c b/tools/testing/selftests/bpf/verifier/array_access.c
+index bed53b561e044..1b138cd2b187d 100644
+--- a/tools/testing/selftests/bpf/verifier/array_access.c
++++ b/tools/testing/selftests/bpf/verifier/array_access.c
+@@ -250,12 +250,13 @@
+ 	BPF_MOV64_IMM(BPF_REG_5, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ 		     BPF_FUNC_csum_diff),
++	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xffff),
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ 	.fixup_map_array_ro = { 3 },
+ 	.result = ACCEPT,
+-	.retval = -29,
++	.retval = 65507,
+ },
+ {
+ 	"invalid write map access into a read-only array 1",
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
+index 197e769c2ed16..f8cda822c1cec 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
+@@ -86,11 +86,20 @@ test_ip6gretap()
+ 
+ test_gretap_stp()
+ {
++	# Sometimes after mirror installation, the neighbor's state is not valid.
++	# The reason is that there is no SW datapath activity related to the
++	# neighbor for the remote GRE address. Therefore whether the corresponding
++	# neighbor will be valid is a matter of luck, and the test is thus racy.
++	# Set the neighbor's state to permanent, so it would be always valid.
++	ip neigh replace 192.0.2.130 lladdr $(mac_get $h3) \
++		nud permanent dev br2
+ 	full_test_span_gre_stp gt4 $swp3.555 "mirror to gretap"
+ }
+ 
+ test_ip6gretap_stp()
+ {
++	ip neigh replace 2001:db8:2::2 lladdr $(mac_get $h3) \
++		nud permanent dev br2
+ 	full_test_span_gre_stp gt6 $swp3.555 "mirror to ip6gretap"
+ }
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-03-20 14:35 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-03-20 14:35 UTC (permalink / raw
  To: gentoo-commits

commit:     650cfc052a32b654a2650ca0624dd800dd696668
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 20 14:34:50 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Mar 20 14:34:50 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=650cfc05

Linux patch 5.10.25

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1024_linux-5.10.25.patch | 1108 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1112 insertions(+)

diff --git a/0000_README b/0000_README
index 14f1018..918ab77 100644
--- a/0000_README
+++ b/0000_README
@@ -139,6 +139,10 @@ Patch:  1023_linux-5.10.24.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.24
 
+Patch:  1024_linux-5.10.25.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.25
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1024_linux-5.10.25.patch b/1024_linux-5.10.25.patch
new file mode 100644
index 0000000..c8ad213
--- /dev/null
+++ b/1024_linux-5.10.25.patch
@@ -0,0 +1,1108 @@
+diff --git a/Makefile b/Makefile
+index 3a435c928e750..6858425cbe6c1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 24
++SUBLEVEL = 25
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
+index 1852b19a73a0a..57aef3f5a81e2 100644
+--- a/arch/x86/crypto/aesni-intel_asm.S
++++ b/arch/x86/crypto/aesni-intel_asm.S
+@@ -318,7 +318,7 @@ _initial_blocks_\@:
+ 
+ 	# Main loop - Encrypt/Decrypt remaining blocks
+ 
+-	cmp	$0, %r13
++	test	%r13, %r13
+ 	je	_zero_cipher_left_\@
+ 	sub	$64, %r13
+ 	je	_four_cipher_left_\@
+@@ -437,7 +437,7 @@ _multiple_of_16_bytes_\@:
+ 
+ 	mov PBlockLen(%arg2), %r12
+ 
+-	cmp $0, %r12
++	test %r12, %r12
+ 	je _partial_done\@
+ 
+ 	GHASH_MUL %xmm8, %xmm13, %xmm9, %xmm10, %xmm11, %xmm5, %xmm6
+@@ -474,7 +474,7 @@ _T_8_\@:
+ 	add	$8, %r10
+ 	sub	$8, %r11
+ 	psrldq	$8, %xmm0
+-	cmp	$0, %r11
++	test	%r11, %r11
+ 	je	_return_T_done_\@
+ _T_4_\@:
+ 	movd	%xmm0, %eax
+@@ -482,7 +482,7 @@ _T_4_\@:
+ 	add	$4, %r10
+ 	sub	$4, %r11
+ 	psrldq	$4, %xmm0
+-	cmp	$0, %r11
++	test	%r11, %r11
+ 	je	_return_T_done_\@
+ _T_123_\@:
+ 	movd	%xmm0, %eax
+@@ -619,7 +619,7 @@ _get_AAD_blocks\@:
+ 
+ 	/* read the last <16B of AAD */
+ _get_AAD_rest\@:
+-	cmp	   $0, %r11
++	test	   %r11, %r11
+ 	je	   _get_AAD_done\@
+ 
+ 	READ_PARTIAL_BLOCK %r10, %r11, \TMP1, \TMP7
+@@ -640,7 +640,7 @@ _get_AAD_done\@:
+ .macro PARTIAL_BLOCK CYPH_PLAIN_OUT PLAIN_CYPH_IN PLAIN_CYPH_LEN DATA_OFFSET \
+ 	AAD_HASH operation
+ 	mov 	PBlockLen(%arg2), %r13
+-	cmp	$0, %r13
++	test	%r13, %r13
+ 	je	_partial_block_done_\@	# Leave Macro if no partial blocks
+ 	# Read in input data without over reading
+ 	cmp	$16, \PLAIN_CYPH_LEN
+@@ -692,7 +692,7 @@ _no_extra_mask_1_\@:
+ 	pshufb	%xmm2, %xmm3
+ 	pxor	%xmm3, \AAD_HASH
+ 
+-	cmp	$0, %r10
++	test	%r10, %r10
+ 	jl	_partial_incomplete_1_\@
+ 
+ 	# GHASH computation for the last <16 Byte block
+@@ -727,7 +727,7 @@ _no_extra_mask_2_\@:
+ 	pshufb	%xmm2, %xmm9
+ 	pxor	%xmm9, \AAD_HASH
+ 
+-	cmp	$0, %r10
++	test	%r10, %r10
+ 	jl	_partial_incomplete_2_\@
+ 
+ 	# GHASH computation for the last <16 Byte block
+@@ -747,7 +747,7 @@ _encode_done_\@:
+ 	pshufb	%xmm2, %xmm9
+ .endif
+ 	# output encrypted Bytes
+-	cmp	$0, %r10
++	test	%r10, %r10
+ 	jl	_partial_fill_\@
+ 	mov	%r13, %r12
+ 	mov	$16, %r13
+@@ -2715,25 +2715,18 @@ SYM_FUNC_END(aesni_ctr_enc)
+ 	pxor CTR, IV;
+ 
+ /*
+- * void aesni_xts_crypt8(const struct crypto_aes_ctx *ctx, u8 *dst,
+- *			 const u8 *src, bool enc, le128 *iv)
++ * void aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *dst,
++ *			  const u8 *src, unsigned int len, le128 *iv)
+  */
+-SYM_FUNC_START(aesni_xts_crypt8)
++SYM_FUNC_START(aesni_xts_encrypt)
+ 	FRAME_BEGIN
+-	cmpb $0, %cl
+-	movl $0, %ecx
+-	movl $240, %r10d
+-	leaq _aesni_enc4, %r11
+-	leaq _aesni_dec4, %rax
+-	cmovel %r10d, %ecx
+-	cmoveq %rax, %r11
+ 
+ 	movdqa .Lgf128mul_x_ble_mask, GF128MUL_MASK
+ 	movups (IVP), IV
+ 
+ 	mov 480(KEYP), KLEN
+-	addq %rcx, KEYP
+ 
++.Lxts_enc_loop4:
+ 	movdqa IV, STATE1
+ 	movdqu 0x00(INP), INC
+ 	pxor INC, STATE1
+@@ -2757,71 +2750,103 @@ SYM_FUNC_START(aesni_xts_crypt8)
+ 	pxor INC, STATE4
+ 	movdqu IV, 0x30(OUTP)
+ 
+-	CALL_NOSPEC r11
++	call _aesni_enc4
+ 
+ 	movdqu 0x00(OUTP), INC
+ 	pxor INC, STATE1
+ 	movdqu STATE1, 0x00(OUTP)
+ 
+-	_aesni_gf128mul_x_ble()
+-	movdqa IV, STATE1
+-	movdqu 0x40(INP), INC
+-	pxor INC, STATE1
+-	movdqu IV, 0x40(OUTP)
+-
+ 	movdqu 0x10(OUTP), INC
+ 	pxor INC, STATE2
+ 	movdqu STATE2, 0x10(OUTP)
+ 
+-	_aesni_gf128mul_x_ble()
+-	movdqa IV, STATE2
+-	movdqu 0x50(INP), INC
+-	pxor INC, STATE2
+-	movdqu IV, 0x50(OUTP)
+-
+ 	movdqu 0x20(OUTP), INC
+ 	pxor INC, STATE3
+ 	movdqu STATE3, 0x20(OUTP)
+ 
+-	_aesni_gf128mul_x_ble()
+-	movdqa IV, STATE3
+-	movdqu 0x60(INP), INC
+-	pxor INC, STATE3
+-	movdqu IV, 0x60(OUTP)
+-
+ 	movdqu 0x30(OUTP), INC
+ 	pxor INC, STATE4
+ 	movdqu STATE4, 0x30(OUTP)
+ 
+ 	_aesni_gf128mul_x_ble()
+-	movdqa IV, STATE4
+-	movdqu 0x70(INP), INC
+-	pxor INC, STATE4
+-	movdqu IV, 0x70(OUTP)
+ 
+-	_aesni_gf128mul_x_ble()
++	add $64, INP
++	add $64, OUTP
++	sub $64, LEN
++	ja .Lxts_enc_loop4
++
+ 	movups IV, (IVP)
+ 
+-	CALL_NOSPEC r11
++	FRAME_END
++	ret
++SYM_FUNC_END(aesni_xts_encrypt)
++
++/*
++ * void aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *dst,
++ *			  const u8 *src, unsigned int len, le128 *iv)
++ */
++SYM_FUNC_START(aesni_xts_decrypt)
++	FRAME_BEGIN
++
++	movdqa .Lgf128mul_x_ble_mask, GF128MUL_MASK
++	movups (IVP), IV
++
++	mov 480(KEYP), KLEN
++	add $240, KEYP
+ 
+-	movdqu 0x40(OUTP), INC
++.Lxts_dec_loop4:
++	movdqa IV, STATE1
++	movdqu 0x00(INP), INC
+ 	pxor INC, STATE1
+-	movdqu STATE1, 0x40(OUTP)
++	movdqu IV, 0x00(OUTP)
+ 
+-	movdqu 0x50(OUTP), INC
++	_aesni_gf128mul_x_ble()
++	movdqa IV, STATE2
++	movdqu 0x10(INP), INC
++	pxor INC, STATE2
++	movdqu IV, 0x10(OUTP)
++
++	_aesni_gf128mul_x_ble()
++	movdqa IV, STATE3
++	movdqu 0x20(INP), INC
++	pxor INC, STATE3
++	movdqu IV, 0x20(OUTP)
++
++	_aesni_gf128mul_x_ble()
++	movdqa IV, STATE4
++	movdqu 0x30(INP), INC
++	pxor INC, STATE4
++	movdqu IV, 0x30(OUTP)
++
++	call _aesni_dec4
++
++	movdqu 0x00(OUTP), INC
++	pxor INC, STATE1
++	movdqu STATE1, 0x00(OUTP)
++
++	movdqu 0x10(OUTP), INC
+ 	pxor INC, STATE2
+-	movdqu STATE2, 0x50(OUTP)
++	movdqu STATE2, 0x10(OUTP)
+ 
+-	movdqu 0x60(OUTP), INC
++	movdqu 0x20(OUTP), INC
+ 	pxor INC, STATE3
+-	movdqu STATE3, 0x60(OUTP)
++	movdqu STATE3, 0x20(OUTP)
+ 
+-	movdqu 0x70(OUTP), INC
++	movdqu 0x30(OUTP), INC
+ 	pxor INC, STATE4
+-	movdqu STATE4, 0x70(OUTP)
++	movdqu STATE4, 0x30(OUTP)
++
++	_aesni_gf128mul_x_ble()
++
++	add $64, INP
++	add $64, OUTP
++	sub $64, LEN
++	ja .Lxts_dec_loop4
++
++	movups IV, (IVP)
+ 
+ 	FRAME_END
+ 	ret
+-SYM_FUNC_END(aesni_xts_crypt8)
++SYM_FUNC_END(aesni_xts_decrypt)
+ 
+ #endif
+diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S
+index 5fee47956f3bb..2cf8e94d986a5 100644
+--- a/arch/x86/crypto/aesni-intel_avx-x86_64.S
++++ b/arch/x86/crypto/aesni-intel_avx-x86_64.S
+@@ -369,7 +369,7 @@ _initial_num_blocks_is_0\@:
+ 
+ 
+ _initial_blocks_encrypted\@:
+-        cmp     $0, %r13
++        test    %r13, %r13
+         je      _zero_cipher_left\@
+ 
+         sub     $128, %r13
+@@ -528,7 +528,7 @@ _multiple_of_16_bytes\@:
+         vmovdqu HashKey(arg2), %xmm13
+ 
+         mov PBlockLen(arg2), %r12
+-        cmp $0, %r12
++        test %r12, %r12
+         je _partial_done\@
+ 
+ 	#GHASH computation for the last <16 Byte block
+@@ -573,7 +573,7 @@ _T_8\@:
+         add     $8, %r10
+         sub     $8, %r11
+         vpsrldq $8, %xmm9, %xmm9
+-        cmp     $0, %r11
++        test    %r11, %r11
+         je     _return_T_done\@
+ _T_4\@:
+         vmovd   %xmm9, %eax
+@@ -581,7 +581,7 @@ _T_4\@:
+         add     $4, %r10
+         sub     $4, %r11
+         vpsrldq     $4, %xmm9, %xmm9
+-        cmp     $0, %r11
++        test    %r11, %r11
+         je     _return_T_done\@
+ _T_123\@:
+         vmovd     %xmm9, %eax
+@@ -625,7 +625,7 @@ _get_AAD_blocks\@:
+ 	cmp     $16, %r11
+ 	jge     _get_AAD_blocks\@
+ 	vmovdqu \T8, \T7
+-	cmp     $0, %r11
++	test    %r11, %r11
+ 	je      _get_AAD_done\@
+ 
+ 	vpxor   \T7, \T7, \T7
+@@ -644,7 +644,7 @@ _get_AAD_rest8\@:
+ 	vpxor   \T1, \T7, \T7
+ 	jmp     _get_AAD_rest8\@
+ _get_AAD_rest4\@:
+-	cmp     $0, %r11
++	test    %r11, %r11
+ 	jle      _get_AAD_rest0\@
+ 	mov     (%r10), %eax
+ 	movq    %rax, \T1
+@@ -749,7 +749,7 @@ _done_read_partial_block_\@:
+ .macro PARTIAL_BLOCK GHASH_MUL CYPH_PLAIN_OUT PLAIN_CYPH_IN PLAIN_CYPH_LEN DATA_OFFSET \
+         AAD_HASH ENC_DEC
+         mov 	PBlockLen(arg2), %r13
+-        cmp	$0, %r13
++        test	%r13, %r13
+         je	_partial_block_done_\@	# Leave Macro if no partial blocks
+         # Read in input data without over reading
+         cmp	$16, \PLAIN_CYPH_LEN
+@@ -801,7 +801,7 @@ _no_extra_mask_1_\@:
+         vpshufb	%xmm2, %xmm3, %xmm3
+         vpxor	%xmm3, \AAD_HASH, \AAD_HASH
+ 
+-        cmp	$0, %r10
++        test	%r10, %r10
+         jl	_partial_incomplete_1_\@
+ 
+         # GHASH computation for the last <16 Byte block
+@@ -836,7 +836,7 @@ _no_extra_mask_2_\@:
+         vpshufb %xmm2, %xmm9, %xmm9
+         vpxor	%xmm9, \AAD_HASH, \AAD_HASH
+ 
+-        cmp	$0, %r10
++        test	%r10, %r10
+         jl	_partial_incomplete_2_\@
+ 
+         # GHASH computation for the last <16 Byte block
+@@ -856,7 +856,7 @@ _encode_done_\@:
+         vpshufb	%xmm2, %xmm9, %xmm9
+ .endif
+         # output encrypted Bytes
+-        cmp	$0, %r10
++        test	%r10, %r10
+         jl	_partial_fill_\@
+         mov	%r13, %r12
+         mov	$16, %r13
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index f9a1d98e75349..be891fdf8d174 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -97,6 +97,12 @@ asmlinkage void aesni_cbc_dec(struct crypto_aes_ctx *ctx, u8 *out,
+ #define AVX_GEN2_OPTSIZE 640
+ #define AVX_GEN4_OPTSIZE 4096
+ 
++asmlinkage void aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out,
++				  const u8 *in, unsigned int len, u8 *iv);
++
++asmlinkage void aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out,
++				  const u8 *in, unsigned int len, u8 *iv);
++
+ #ifdef CONFIG_X86_64
+ 
+ static void (*aesni_ctr_enc_tfm)(struct crypto_aes_ctx *ctx, u8 *out,
+@@ -104,9 +110,6 @@ static void (*aesni_ctr_enc_tfm)(struct crypto_aes_ctx *ctx, u8 *out,
+ asmlinkage void aesni_ctr_enc(struct crypto_aes_ctx *ctx, u8 *out,
+ 			      const u8 *in, unsigned int len, u8 *iv);
+ 
+-asmlinkage void aesni_xts_crypt8(const struct crypto_aes_ctx *ctx, u8 *out,
+-				 const u8 *in, bool enc, le128 *iv);
+-
+ /* asmlinkage void aesni_gcm_enc()
+  * void *ctx,  AES Key schedule. Starts on a 16 byte boundary.
+  * struct gcm_context_data.  May be uninitialized.
+@@ -547,14 +550,14 @@ static void aesni_xts_dec(const void *ctx, u8 *dst, const u8 *src, le128 *iv)
+ 	glue_xts_crypt_128bit_one(ctx, dst, src, iv, aesni_dec);
+ }
+ 
+-static void aesni_xts_enc8(const void *ctx, u8 *dst, const u8 *src, le128 *iv)
++static void aesni_xts_enc32(const void *ctx, u8 *dst, const u8 *src, le128 *iv)
+ {
+-	aesni_xts_crypt8(ctx, dst, src, true, iv);
++	aesni_xts_encrypt(ctx, dst, src, 32 * AES_BLOCK_SIZE, (u8 *)iv);
+ }
+ 
+-static void aesni_xts_dec8(const void *ctx, u8 *dst, const u8 *src, le128 *iv)
++static void aesni_xts_dec32(const void *ctx, u8 *dst, const u8 *src, le128 *iv)
+ {
+-	aesni_xts_crypt8(ctx, dst, src, false, iv);
++	aesni_xts_decrypt(ctx, dst, src, 32 * AES_BLOCK_SIZE, (u8 *)iv);
+ }
+ 
+ static const struct common_glue_ctx aesni_enc_xts = {
+@@ -562,8 +565,8 @@ static const struct common_glue_ctx aesni_enc_xts = {
+ 	.fpu_blocks_limit = 1,
+ 
+ 	.funcs = { {
+-		.num_blocks = 8,
+-		.fn_u = { .xts = aesni_xts_enc8 }
++		.num_blocks = 32,
++		.fn_u = { .xts = aesni_xts_enc32 }
+ 	}, {
+ 		.num_blocks = 1,
+ 		.fn_u = { .xts = aesni_xts_enc }
+@@ -575,8 +578,8 @@ static const struct common_glue_ctx aesni_dec_xts = {
+ 	.fpu_blocks_limit = 1,
+ 
+ 	.funcs = { {
+-		.num_blocks = 8,
+-		.fn_u = { .xts = aesni_xts_dec8 }
++		.num_blocks = 32,
++		.fn_u = { .xts = aesni_xts_dec32 }
+ 	}, {
+ 		.num_blocks = 1,
+ 		.fn_u = { .xts = aesni_xts_dec }
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index d8fcd21ab472f..a8f85993dab30 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -3624,7 +3624,7 @@ static ssize_t srp_create_target(struct device *dev,
+ 	struct srp_rdma_ch *ch;
+ 	struct srp_device *srp_dev = host->srp_dev;
+ 	struct ib_device *ibdev = srp_dev->dev;
+-	int ret, node_idx, node, cpu, i;
++	int ret, i, ch_idx;
+ 	unsigned int max_sectors_per_mr, mr_per_cmd = 0;
+ 	bool multich = false;
+ 	uint32_t max_iu_len;
+@@ -3749,81 +3749,61 @@ static ssize_t srp_create_target(struct device *dev,
+ 		goto out;
+ 
+ 	ret = -ENOMEM;
+-	if (target->ch_count == 0)
++	if (target->ch_count == 0) {
+ 		target->ch_count =
+-			max_t(unsigned int, num_online_nodes(),
+-			      min(ch_count ?:
+-					  min(4 * num_online_nodes(),
+-					      ibdev->num_comp_vectors),
+-				  num_online_cpus()));
++			min(ch_count ?:
++				max(4 * num_online_nodes(),
++				    ibdev->num_comp_vectors),
++				num_online_cpus());
++	}
++
+ 	target->ch = kcalloc(target->ch_count, sizeof(*target->ch),
+ 			     GFP_KERNEL);
+ 	if (!target->ch)
+ 		goto out;
+ 
+-	node_idx = 0;
+-	for_each_online_node(node) {
+-		const int ch_start = (node_idx * target->ch_count /
+-				      num_online_nodes());
+-		const int ch_end = ((node_idx + 1) * target->ch_count /
+-				    num_online_nodes());
+-		const int cv_start = node_idx * ibdev->num_comp_vectors /
+-				     num_online_nodes();
+-		const int cv_end = (node_idx + 1) * ibdev->num_comp_vectors /
+-				   num_online_nodes();
+-		int cpu_idx = 0;
+-
+-		for_each_online_cpu(cpu) {
+-			if (cpu_to_node(cpu) != node)
+-				continue;
+-			if (ch_start + cpu_idx >= ch_end)
+-				continue;
+-			ch = &target->ch[ch_start + cpu_idx];
+-			ch->target = target;
+-			ch->comp_vector = cv_start == cv_end ? cv_start :
+-				cv_start + cpu_idx % (cv_end - cv_start);
+-			spin_lock_init(&ch->lock);
+-			INIT_LIST_HEAD(&ch->free_tx);
+-			ret = srp_new_cm_id(ch);
+-			if (ret)
+-				goto err_disconnect;
++	for (ch_idx = 0; ch_idx < target->ch_count; ++ch_idx) {
++		ch = &target->ch[ch_idx];
++		ch->target = target;
++		ch->comp_vector = ch_idx % ibdev->num_comp_vectors;
++		spin_lock_init(&ch->lock);
++		INIT_LIST_HEAD(&ch->free_tx);
++		ret = srp_new_cm_id(ch);
++		if (ret)
++			goto err_disconnect;
+ 
+-			ret = srp_create_ch_ib(ch);
+-			if (ret)
+-				goto err_disconnect;
++		ret = srp_create_ch_ib(ch);
++		if (ret)
++			goto err_disconnect;
+ 
+-			ret = srp_alloc_req_data(ch);
+-			if (ret)
+-				goto err_disconnect;
++		ret = srp_alloc_req_data(ch);
++		if (ret)
++			goto err_disconnect;
+ 
+-			ret = srp_connect_ch(ch, max_iu_len, multich);
+-			if (ret) {
+-				char dst[64];
+-
+-				if (target->using_rdma_cm)
+-					snprintf(dst, sizeof(dst), "%pIS",
+-						 &target->rdma_cm.dst);
+-				else
+-					snprintf(dst, sizeof(dst), "%pI6",
+-						 target->ib_cm.orig_dgid.raw);
+-				shost_printk(KERN_ERR, target->scsi_host,
+-					     PFX "Connection %d/%d to %s failed\n",
+-					     ch_start + cpu_idx,
+-					     target->ch_count, dst);
+-				if (node_idx == 0 && cpu_idx == 0) {
+-					goto free_ch;
+-				} else {
+-					srp_free_ch_ib(target, ch);
+-					srp_free_req_data(target, ch);
+-					target->ch_count = ch - target->ch;
+-					goto connected;
+-				}
+-			}
++		ret = srp_connect_ch(ch, max_iu_len, multich);
++		if (ret) {
++			char dst[64];
+ 
+-			multich = true;
+-			cpu_idx++;
++			if (target->using_rdma_cm)
++				snprintf(dst, sizeof(dst), "%pIS",
++					&target->rdma_cm.dst);
++			else
++				snprintf(dst, sizeof(dst), "%pI6",
++					target->ib_cm.orig_dgid.raw);
++			shost_printk(KERN_ERR, target->scsi_host,
++				PFX "Connection %d/%d to %s failed\n",
++				ch_idx,
++				target->ch_count, dst);
++			if (ch_idx == 0) {
++				goto free_ch;
++			} else {
++				srp_free_ch_ib(target, ch);
++				srp_free_req_data(target, ch);
++				target->ch_count = ch - target->ch;
++				goto connected;
++			}
+ 		}
+-		node_idx++;
++		multich = true;
+ 	}
+ 
+ connected:
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 95c7fa171e35a..f504b6858ed29 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -510,6 +510,19 @@ void b53_imp_vlan_setup(struct dsa_switch *ds, int cpu_port)
+ }
+ EXPORT_SYMBOL(b53_imp_vlan_setup);
+ 
++static void b53_port_set_learning(struct b53_device *dev, int port,
++				  bool learning)
++{
++	u16 reg;
++
++	b53_read16(dev, B53_CTRL_PAGE, B53_DIS_LEARNING, &reg);
++	if (learning)
++		reg &= ~BIT(port);
++	else
++		reg |= BIT(port);
++	b53_write16(dev, B53_CTRL_PAGE, B53_DIS_LEARNING, reg);
++}
++
+ int b53_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy)
+ {
+ 	struct b53_device *dev = ds->priv;
+@@ -523,6 +536,7 @@ int b53_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy)
+ 	cpu_port = dsa_to_port(ds, port)->cpu_dp->index;
+ 
+ 	b53_br_egress_floods(ds, port, true, true);
++	b53_port_set_learning(dev, port, false);
+ 
+ 	if (dev->ops->irq_enable)
+ 		ret = dev->ops->irq_enable(dev, port);
+@@ -656,6 +670,7 @@ static void b53_enable_cpu_port(struct b53_device *dev, int port)
+ 	b53_brcm_hdr_setup(dev->ds, port);
+ 
+ 	b53_br_egress_floods(dev->ds, port, true, true);
++	b53_port_set_learning(dev, port, false);
+ }
+ 
+ static void b53_enable_mib(struct b53_device *dev)
+@@ -1839,6 +1854,8 @@ int b53_br_join(struct dsa_switch *ds, int port, struct net_device *br)
+ 	b53_write16(dev, B53_PVLAN_PAGE, B53_PVLAN_PORT_MASK(port), pvlan);
+ 	dev->ports[port].vlan_ctl_mask = pvlan;
+ 
++	b53_port_set_learning(dev, port, true);
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(b53_br_join);
+@@ -1886,6 +1903,7 @@ void b53_br_leave(struct dsa_switch *ds, int port, struct net_device *br)
+ 		vl->untag |= BIT(port) | BIT(cpu_port);
+ 		b53_set_vlan_entry(dev, pvid, vl);
+ 	}
++	b53_port_set_learning(dev, port, false);
+ }
+ EXPORT_SYMBOL(b53_br_leave);
+ 
+diff --git a/drivers/net/dsa/b53/b53_regs.h b/drivers/net/dsa/b53/b53_regs.h
+index c90985c294a2e..b2c539a421545 100644
+--- a/drivers/net/dsa/b53/b53_regs.h
++++ b/drivers/net/dsa/b53/b53_regs.h
+@@ -115,6 +115,7 @@
+ #define B53_UC_FLOOD_MASK		0x32
+ #define B53_MC_FLOOD_MASK		0x34
+ #define B53_IPMC_FLOOD_MASK		0x36
++#define B53_DIS_LEARNING		0x3c
+ 
+ /*
+  * Override Ports 0-7 State on devices with xMII interfaces (8 bit)
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 445226720ff29..edb0a1027b38f 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -222,23 +222,10 @@ static int bcm_sf2_port_setup(struct dsa_switch *ds, int port,
+ 	reg &= ~P_TXQ_PSM_VDD(port);
+ 	core_writel(priv, reg, CORE_MEM_PSM_VDD_CTRL);
+ 
+-	/* Enable learning */
+-	reg = core_readl(priv, CORE_DIS_LEARN);
+-	reg &= ~BIT(port);
+-	core_writel(priv, reg, CORE_DIS_LEARN);
+-
+ 	/* Enable Broadcom tags for that port if requested */
+-	if (priv->brcm_tag_mask & BIT(port)) {
++	if (priv->brcm_tag_mask & BIT(port))
+ 		b53_brcm_hdr_setup(ds, port);
+ 
+-		/* Disable learning on ASP port */
+-		if (port == 7) {
+-			reg = core_readl(priv, CORE_DIS_LEARN);
+-			reg |= BIT(port);
+-			core_writel(priv, reg, CORE_DIS_LEARN);
+-		}
+-	}
+-
+ 	/* Configure Traffic Class to QoS mapping, allow each priority to map
+ 	 * to a different queue number
+ 	 */
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 404d66f01e8d7..d64aee04e59a7 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -862,6 +862,7 @@ static inline u64 fuse_get_attr_version(struct fuse_conn *fc)
+ 
+ static inline void fuse_make_bad(struct inode *inode)
+ {
++	remove_inode_hash(inode);
+ 	set_bit(FUSE_I_BAD, &get_fuse_inode(inode)->state);
+ }
+ 
+diff --git a/fs/locks.c b/fs/locks.c
+index 1f84a03601fec..32c948fe29448 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -1808,9 +1808,6 @@ check_conflicting_open(struct file *filp, const long arg, int flags)
+ 
+ 	if (flags & FL_LAYOUT)
+ 		return 0;
+-	if (flags & FL_DELEG)
+-		/* We leave these checks to the caller. */
+-		return 0;
+ 
+ 	if (arg == F_RDLCK)
+ 		return inode_is_open_for_write(inode) ? -EAGAIN : 0;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 47006eec724e6..ee4e6e3b995d4 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4945,31 +4945,6 @@ static struct file_lock *nfs4_alloc_init_lease(struct nfs4_delegation *dp,
+ 	return fl;
+ }
+ 
+-static int nfsd4_check_conflicting_opens(struct nfs4_client *clp,
+-						struct nfs4_file *fp)
+-{
+-	struct nfs4_clnt_odstate *co;
+-	struct file *f = fp->fi_deleg_file->nf_file;
+-	struct inode *ino = locks_inode(f);
+-	int writes = atomic_read(&ino->i_writecount);
+-
+-	if (fp->fi_fds[O_WRONLY])
+-		writes--;
+-	if (fp->fi_fds[O_RDWR])
+-		writes--;
+-	if (writes > 0)
+-		return -EAGAIN;
+-	spin_lock(&fp->fi_lock);
+-	list_for_each_entry(co, &fp->fi_clnt_odstate, co_perfile) {
+-		if (co->co_client != clp) {
+-			spin_unlock(&fp->fi_lock);
+-			return -EAGAIN;
+-		}
+-	}
+-	spin_unlock(&fp->fi_lock);
+-	return 0;
+-}
+-
+ static struct nfs4_delegation *
+ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ 		    struct nfs4_file *fp, struct nfs4_clnt_odstate *odstate)
+@@ -4989,12 +4964,9 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ 
+ 	nf = find_readable_file(fp);
+ 	if (!nf) {
+-		/*
+-		 * We probably could attempt another open and get a read
+-		 * delegation, but for now, don't bother until the
+-		 * client actually sends us one.
+-		 */
+-		return ERR_PTR(-EAGAIN);
++		/* We should always have a readable file here */
++		WARN_ON_ONCE(1);
++		return ERR_PTR(-EBADF);
+ 	}
+ 	spin_lock(&state_lock);
+ 	spin_lock(&fp->fi_lock);
+@@ -5024,19 +4996,11 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ 	if (!fl)
+ 		goto out_clnt_odstate;
+ 
+-	status = nfsd4_check_conflicting_opens(clp, fp);
+-	if (status) {
+-		locks_free_lock(fl);
+-		goto out_clnt_odstate;
+-	}
+ 	status = vfs_setlease(fp->fi_deleg_file->nf_file, fl->fl_type, &fl, NULL);
+ 	if (fl)
+ 		locks_free_lock(fl);
+ 	if (status)
+ 		goto out_clnt_odstate;
+-	status = nfsd4_check_conflicting_opens(clp, fp);
+-	if (status)
+-		goto out_clnt_odstate;
+ 
+ 	spin_lock(&state_lock);
+ 	spin_lock(&fp->fi_lock);
+@@ -5118,6 +5082,17 @@ nfs4_open_delegation(struct svc_fh *fh, struct nfsd4_open *open,
+ 				goto out_no_deleg;
+ 			if (!cb_up || !(oo->oo_flags & NFS4_OO_CONFIRMED))
+ 				goto out_no_deleg;
++			/*
++			 * Also, if the file was opened for write or
++			 * create, there's a good chance the client's
++			 * about to write to it, resulting in an
++			 * immediate recall (since we don't support
++			 * write delegations):
++			 */
++			if (open->op_share_access & NFS4_SHARE_ACCESS_WRITE)
++				goto out_no_deleg;
++			if (open->op_create == NFS4_OPEN_CREATE)
++				goto out_no_deleg;
+ 			break;
+ 		default:
+ 			goto out_no_deleg;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 6c2e4947beaeb..9a1aba2d00733 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5333,10 +5333,14 @@ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
+ {
+ 	bool mask_to_left = (opcode == BPF_ADD &&  off_is_neg) ||
+ 			    (opcode == BPF_SUB && !off_is_neg);
+-	u32 off;
++	u32 off, max;
+ 
+ 	switch (ptr_reg->type) {
+ 	case PTR_TO_STACK:
++		/* Offset 0 is out-of-bounds, but acceptable start for the
++		 * left direction, see BPF_REG_FP.
++		 */
++		max = MAX_BPF_STACK + mask_to_left;
+ 		/* Indirect variable offset stack access is prohibited in
+ 		 * unprivileged mode so it's not handled here.
+ 		 */
+@@ -5344,16 +5348,17 @@ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
+ 		if (mask_to_left)
+ 			*ptr_limit = MAX_BPF_STACK + off;
+ 		else
+-			*ptr_limit = -off;
+-		return 0;
++			*ptr_limit = -off - 1;
++		return *ptr_limit >= max ? -ERANGE : 0;
+ 	case PTR_TO_MAP_VALUE:
++		max = ptr_reg->map_ptr->value_size;
+ 		if (mask_to_left) {
+ 			*ptr_limit = ptr_reg->umax_value + ptr_reg->off;
+ 		} else {
+ 			off = ptr_reg->smin_value + ptr_reg->off;
+-			*ptr_limit = ptr_reg->map_ptr->value_size - off;
++			*ptr_limit = ptr_reg->map_ptr->value_size - off - 1;
+ 		}
+-		return 0;
++		return *ptr_limit >= max ? -ERANGE : 0;
+ 	default:
+ 		return -EINVAL;
+ 	}
+@@ -5406,6 +5411,7 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 	u32 alu_state, alu_limit;
+ 	struct bpf_reg_state tmp;
+ 	bool ret;
++	int err;
+ 
+ 	if (can_skip_alu_sanitation(env, insn))
+ 		return 0;
+@@ -5421,10 +5427,13 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 	alu_state |= ptr_is_dst_reg ?
+ 		     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
+ 
+-	if (retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg))
+-		return 0;
+-	if (update_alu_sanitation_state(aux, alu_state, alu_limit))
+-		return -EACCES;
++	err = retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg);
++	if (err < 0)
++		return err;
++
++	err = update_alu_sanitation_state(aux, alu_state, alu_limit);
++	if (err < 0)
++		return err;
+ do_sim:
+ 	/* Simulate and find potential out-of-bounds access under
+ 	 * speculative execution from truncation as a result of
+@@ -5540,7 +5549,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 	case BPF_ADD:
+ 		ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
+ 		if (ret < 0) {
+-			verbose(env, "R%d tried to add from different maps or paths\n", dst);
++			verbose(env, "R%d tried to add from different maps, paths, or prohibited types\n", dst);
+ 			return ret;
+ 		}
+ 		/* We can take a fixed offset as long as it doesn't overflow
+@@ -5595,7 +5604,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 	case BPF_SUB:
+ 		ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
+ 		if (ret < 0) {
+-			verbose(env, "R%d tried to sub from different maps or paths\n", dst);
++			verbose(env, "R%d tried to sub from different maps, paths, or prohibited types\n", dst);
+ 			return ret;
+ 		}
+ 		if (dst_reg == off_reg) {
+@@ -10942,7 +10951,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
+ 			off_reg = issrc ? insn->src_reg : insn->dst_reg;
+ 			if (isneg)
+ 				*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
+-			*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit - 1);
++			*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
+ 			*patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
+ 			*patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
+ 			*patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index e2f9ce2f5b8b6..8527267725bb7 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -576,9 +576,6 @@ static int deactivate_urbs(struct snd_usb_endpoint *ep, bool force)
+ {
+ 	unsigned int i;
+ 
+-	if (!force && atomic_read(&ep->chip->shutdown)) /* to be sure... */
+-		return -EBADFD;
+-
+ 	clear_bit(EP_FLAG_RUNNING, &ep->flags);
+ 
+ 	INIT_LIST_HEAD(&ep->ready_playback_urbs);
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index 1b08f52ef86f6..f4494d0549172 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -280,10 +280,7 @@ static int snd_usb_pcm_sync_stop(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_usb_substream *subs = substream->runtime->private_data;
+ 
+-	if (!snd_usb_lock_shutdown(subs->stream->chip)) {
+-		sync_pending_stops(subs);
+-		snd_usb_unlock_shutdown(subs->stream->chip);
+-	}
++	sync_pending_stops(subs);
+ 	return 0;
+ }
+ 
+diff --git a/tools/testing/selftests/bpf/verifier/bounds_deduction.c b/tools/testing/selftests/bpf/verifier/bounds_deduction.c
+index 1fd07a4f27ac2..c162498a64fc6 100644
+--- a/tools/testing/selftests/bpf/verifier/bounds_deduction.c
++++ b/tools/testing/selftests/bpf/verifier/bounds_deduction.c
+@@ -6,8 +6,9 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.result = REJECT,
++	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
+ 	.errstr = "R0 tried to subtract pointer from scalar",
++	.result = REJECT,
+ },
+ {
+ 	"check deducing bounds from const, 2",
+@@ -20,6 +21,8 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
+ 		BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 1,
+ },
+@@ -31,8 +34,9 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.result = REJECT,
++	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
+ 	.errstr = "R0 tried to subtract pointer from scalar",
++	.result = REJECT,
+ },
+ {
+ 	"check deducing bounds from const, 4",
+@@ -45,6 +49,8 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
+ 		BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ },
+ {
+@@ -55,8 +61,9 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.result = REJECT,
++	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
+ 	.errstr = "R0 tried to subtract pointer from scalar",
++	.result = REJECT,
+ },
+ {
+ 	"check deducing bounds from const, 6",
+@@ -67,8 +74,9 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.result = REJECT,
++	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
+ 	.errstr = "R0 tried to subtract pointer from scalar",
++	.result = REJECT,
+ },
+ {
+ 	"check deducing bounds from const, 7",
+@@ -80,8 +88,9 @@
+ 			    offsetof(struct __sk_buff, mark)),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.result = REJECT,
++	.errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
+ 	.errstr = "dereference of modified ctx ptr",
++	.result = REJECT,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+@@ -94,8 +103,9 @@
+ 			    offsetof(struct __sk_buff, mark)),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.result = REJECT,
++	.errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
+ 	.errstr = "dereference of modified ctx ptr",
++	.result = REJECT,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+@@ -106,8 +116,9 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.result = REJECT,
++	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
+ 	.errstr = "R0 tried to subtract pointer from scalar",
++	.result = REJECT,
+ },
+ {
+ 	"check deducing bounds from const, 10",
+@@ -119,6 +130,6 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.result = REJECT,
+ 	.errstr = "math between ctx pointer and register with unbounded min value is not allowed",
++	.result = REJECT,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/map_ptr.c b/tools/testing/selftests/bpf/verifier/map_ptr.c
+index 637f9293bda84..92a1dc8e17462 100644
+--- a/tools/testing/selftests/bpf/verifier/map_ptr.c
++++ b/tools/testing/selftests/bpf/verifier/map_ptr.c
+@@ -74,6 +74,8 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_16b = { 4 },
++	.result_unpriv = REJECT,
++	.errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
+ 	.result = ACCEPT,
+ },
+ {
+@@ -90,5 +92,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_16b = { 4 },
++	.result_unpriv = REJECT,
++	.errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
+ 	.result = ACCEPT,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/unpriv.c b/tools/testing/selftests/bpf/verifier/unpriv.c
+index 91bb77c24a2ef..0d621c841db14 100644
+--- a/tools/testing/selftests/bpf/verifier/unpriv.c
++++ b/tools/testing/selftests/bpf/verifier/unpriv.c
+@@ -495,7 +495,7 @@
+ 	.result = ACCEPT,
+ },
+ {
+-	"unpriv: adding of fp",
++	"unpriv: adding of fp, reg",
+ 	.insns = {
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_MOV64_IMM(BPF_REG_1, 0),
+@@ -503,6 +503,19 @@
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
++	.result_unpriv = REJECT,
++	.result = ACCEPT,
++},
++{
++	"unpriv: adding of fp, imm",
++	.insns = {
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0),
++	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
++	BPF_EXIT_INSN(),
++	},
+ 	.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
+ 	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+index ed4e76b246499..feb91266db39a 100644
+--- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
++++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+@@ -169,7 +169,7 @@
+ 	.fixup_map_array_48b = { 1 },
+ 	.result = ACCEPT,
+ 	.result_unpriv = REJECT,
+-	.errstr_unpriv = "R2 tried to add from different maps or paths",
++	.errstr_unpriv = "R2 tried to add from different maps, paths, or prohibited types",
+ 	.retval = 0,
+ },
+ {
+@@ -516,6 +516,27 @@
+ 	.result = ACCEPT,
+ 	.retval = 0xabcdef12,
+ },
++{
++	"map access: value_ptr += N, value_ptr -= N known scalar",
++	.insns = {
++	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
++	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
++	BPF_LD_MAP_FD(BPF_REG_1, 0),
++	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
++	BPF_MOV32_IMM(BPF_REG_1, 0x12345678),
++	BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
++	BPF_MOV64_IMM(BPF_REG_1, 2),
++	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
++	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.fixup_map_array_48b = { 3 },
++	.result = ACCEPT,
++	.retval = 0x12345678,
++},
+ {
+ 	"map access: unknown scalar += value_ptr, 1",
+ 	.insns = {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-03-22 15:57 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-03-22 15:57 UTC (permalink / raw
  To: gentoo-commits

commit:     7263eb9aa362df4357ca26ab62f11b0d23a03cb8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 22 15:57:01 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar 22 15:57:01 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7263eb9a

Updates for CPU Optimization patch for 5.10.X gcc v9.1 and v10

Bug: https://bugs.gentoo.org/777666

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5012_enable-cpu-optimizations-for-gcc91.patch | 414 ++++++++++----------------
 5013_enable-cpu-optimizations-for-gcc10.patch | 283 ++++++------------
 2 files changed, 244 insertions(+), 453 deletions(-)

diff --git a/5012_enable-cpu-optimizations-for-gcc91.patch b/5012_enable-cpu-optimizations-for-gcc91.patch
index 564eede..56aff7e 100644
--- a/5012_enable-cpu-optimizations-for-gcc91.patch
+++ b/5012_enable-cpu-optimizations-for-gcc91.patch
@@ -1,3 +1,8 @@
+From 56af79dc8be395c6adf25a05de3566822dbb2947 Mon Sep 17 00:00:00 2001
+From: graysky <graysky@archlinux.us>
+Date: Tue, 9 Mar 2021 01:57:33 -0500
+Subject: [PATCH] more-uarches-for-gcc-v9-and-kernel-5.8+
+
 WARNING
 This patch works with gcc versions 9.1+ and with kernel version 5.8+ and should
 NOT be applied when compiling on older versions of gcc due to key name changes
@@ -78,90 +83,19 @@ REFERENCES
 4.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
 5.  https://github.com/graysky2/kernel_gcc_patch/issues/15
 6.  http://www.linuxforge.net/docs/linux/linux-gcc.php
+---
+ arch/x86/Kconfig.cpu            | 240 ++++++++++++++++++++++++++++++--
+ arch/x86/Makefile               |  37 ++++-
+ arch/x86/include/asm/vermagic.h |  52 +++++++
+ 3 files changed, 312 insertions(+), 17 deletions(-)
 
---- a/arch/x86/include/asm/vermagic.h	2020-06-10 14:21:45.000000000 -0400
-+++ b/arch/x86/include/asm/vermagic.h	2020-06-15 10:44:10.437477053 -0400
-@@ -17,6 +17,36 @@
- #define MODULE_PROC_FAMILY "586MMX "
- #elif defined CONFIG_MCORE2
- #define MODULE_PROC_FAMILY "CORE2 "
-+#elif defined CONFIG_MNATIVE
-+#define MODULE_PROC_FAMILY "NATIVE "
-+#elif defined CONFIG_MNEHALEM
-+#define MODULE_PROC_FAMILY "NEHALEM "
-+#elif defined CONFIG_MWESTMERE
-+#define MODULE_PROC_FAMILY "WESTMERE "
-+#elif defined CONFIG_MSILVERMONT
-+#define MODULE_PROC_FAMILY "SILVERMONT "
-+#elif defined CONFIG_MGOLDMONT
-+#define MODULE_PROC_FAMILY "GOLDMONT "
-+#elif defined CONFIG_MGOLDMONTPLUS
-+#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
-+#elif defined CONFIG_MSANDYBRIDGE
-+#define MODULE_PROC_FAMILY "SANDYBRIDGE "
-+#elif defined CONFIG_MIVYBRIDGE
-+#define MODULE_PROC_FAMILY "IVYBRIDGE "
-+#elif defined CONFIG_MHASWELL
-+#define MODULE_PROC_FAMILY "HASWELL "
-+#elif defined CONFIG_MBROADWELL
-+#define MODULE_PROC_FAMILY "BROADWELL "
-+#elif defined CONFIG_MSKYLAKE
-+#define MODULE_PROC_FAMILY "SKYLAKE "
-+#elif defined CONFIG_MSKYLAKEX
-+#define MODULE_PROC_FAMILY "SKYLAKEX "
-+#elif defined CONFIG_MCANNONLAKE
-+#define MODULE_PROC_FAMILY "CANNONLAKE "
-+#elif defined CONFIG_MICELAKE
-+#define MODULE_PROC_FAMILY "ICELAKE "
-+#elif defined CONFIG_MCASCADELAKE
-+#define MODULE_PROC_FAMILY "CASCADELAKE "
- #elif defined CONFIG_MATOM
- #define MODULE_PROC_FAMILY "ATOM "
- #elif defined CONFIG_M686
-@@ -35,6 +65,28 @@
- #define MODULE_PROC_FAMILY "K7 "
- #elif defined CONFIG_MK8
- #define MODULE_PROC_FAMILY "K8 "
-+#elif defined CONFIG_MK8SSE3
-+#define MODULE_PROC_FAMILY "K8SSE3 "
-+#elif defined CONFIG_MK10
-+#define MODULE_PROC_FAMILY "K10 "
-+#elif defined CONFIG_MBARCELONA
-+#define MODULE_PROC_FAMILY "BARCELONA "
-+#elif defined CONFIG_MBOBCAT
-+#define MODULE_PROC_FAMILY "BOBCAT "
-+#elif defined CONFIG_MBULLDOZER
-+#define MODULE_PROC_FAMILY "BULLDOZER "
-+#elif defined CONFIG_MPILEDRIVER
-+#define MODULE_PROC_FAMILY "PILEDRIVER "
-+#elif defined CONFIG_MSTEAMROLLER
-+#define MODULE_PROC_FAMILY "STEAMROLLER "
-+#elif defined CONFIG_MJAGUAR
-+#define MODULE_PROC_FAMILY "JAGUAR "
-+#elif defined CONFIG_MEXCAVATOR
-+#define MODULE_PROC_FAMILY "EXCAVATOR "
-+#elif defined CONFIG_MZEN
-+#define MODULE_PROC_FAMILY "ZEN "
-+#elif defined CONFIG_MZEN2
-+#define MODULE_PROC_FAMILY "ZEN2 "
- #elif defined CONFIG_MELAN
- #define MODULE_PROC_FAMILY "ELAN "
- #elif defined CONFIG_MCRUSOE
---- a/arch/x86/Kconfig.cpu	2020-06-10 14:21:45.000000000 -0400
-+++ b/arch/x86/Kconfig.cpu	2020-06-15 10:44:10.437477053 -0400
-@@ -123,6 +123,7 @@ config MPENTIUMM
- config MPENTIUM4
- 	bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
- 	depends on X86_32
-+	select X86_P6_NOP
- 	help
- 	  Select this for Intel Pentium 4 chips.  This includes the
- 	  Pentium 4, Pentium D, P4-based Celeron and Xeon, and
-@@ -155,9 +156,8 @@ config MPENTIUM4
- 		-Paxville
- 		-Dempsey
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index 814fe0d349b0..aa7dd036e8a3 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -157,7 +157,7 @@ config MPENTIUM4
+ 
  
--
  config MK6
 -	bool "K6/K6-II/K6-III"
 +	bool "AMD K6/K6-II/K6-III"
@@ -199,7 +133,7 @@ REFERENCES
 +	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
 +	help
 +	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
-+		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
 +	  Enables use of some extended instructions, and passes appropriate
 +	  optimization flags to GCC.
 +
@@ -269,52 +203,33 @@ REFERENCES
  config MCRUSOE
  	bool "Crusoe"
  	depends on X86_32
-@@ -260,6 +338,7 @@ config MVIAC7
- 
- config MPSC
- 	bool "Intel P4 / older Netburst based Xeon"
-+	select X86_P6_NOP
- 	depends on X86_64
- 	help
- 	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
-@@ -269,8 +348,19 @@ config MPSC
- 	  using the cpu family field
+@@ -270,7 +348,7 @@ config MPSC
  	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
  
-+config MATOM
-+	bool "Intel Atom"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for the Intel Atom platform. Intel Atom CPUs have an
-+	  in-order pipelining architecture and thus can benefit from
-+	  accordingly optimized code. Use a recent GCC with specific Atom
-+	  support in order to fully benefit from selecting this option.
-+
  config MCORE2
 -	bool "Core 2/newer Xeon"
 +	bool "Intel Core 2"
-+	select X86_P6_NOP
  	help
  
  	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -278,14 +368,133 @@ config MCORE2
+@@ -278,6 +356,8 @@ config MCORE2
  	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
  	  (not a typo)
  
--config MATOM
--	bool "Intel Atom"
 +	  Enables -march=core2
 +
+ config MATOM
+ 	bool "Intel Atom"
+ 	help
+@@ -287,6 +367,132 @@ config MATOM
+ 	  accordingly optimized code. Use a recent GCC with specific Atom
+ 	  support in order to fully benefit from selecting this option.
+ 
 +config MNEHALEM
 +	bool "Intel Nehalem"
 +	select X86_P6_NOP
- 	help
- 
--	  Select this for the Intel Atom platform. Intel Atom CPUs have an
--	  in-order pipelining architecture and thus can benefit from
--	  accordingly optimized code. Use a recent GCC with specific Atom
--	  support in order to fully benefit from selecting this option.
++	help
++
 +	  Select this for 1st Gen Core processors in the Nehalem family.
 +
 +	  Enables -march=nehalem
@@ -435,110 +350,96 @@ REFERENCES
 +	  Select this for Xeon processors in the Cascade Lake family.
 +
 +	  Enables -march=cascadelake
- 
++
  config GENERIC_CPU
  	bool "Generic-x86-64"
-@@ -294,6 +503,19 @@ config GENERIC_CPU
+ 	depends on X86_64
+@@ -294,6 +500,16 @@ config GENERIC_CPU
  	  Generic x86-64 CPU.
  	  Run equally well on all x86-64 CPUs.
  
 +config MNATIVE
-+ bool "Native optimizations autodetected by GCC"
-+ help
++	bool "Native optimizations autodetected by GCC"
++	help
 +
-+   GCC 4.2 and above support -march=native, which automatically detects
-+   the optimum settings to use based on your processor. -march=native
-+   also detects and applies additional settings beyond -march specific
-+   to your CPU, (eg. -msse4). Unless you have a specific reason not to
-+   (e.g. distcc cross-compiling), you should probably be using
-+   -march=native rather than anything listed below.
++	  GCC 4.2 and above support -march=native, which automatically detects
++	  the optimum settings to use based on your processor. Do NOT use this
++	  for AMD CPUs.  Intel Only!
 +
-+   Enables -march=native
++	  Enables -march=native
 +
  endchoice
  
  config X86_GENERIC
-@@ -318,7 +540,7 @@ config X86_INTERNODE_CACHE_SHIFT
+@@ -318,7 +534,7 @@ config X86_INTERNODE_CACHE_SHIFT
  config X86_L1_CACHE_SHIFT
  	int
  	default "7" if MPENTIUM4 || MPSC
 -	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || X86_GENERIC || GENERIC_CPU
  	default "4" if MELAN || M486SX || M486 || MGEODEGX1
  	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
  
-@@ -336,35 +558,36 @@ config X86_ALIGNMENT_16
+@@ -336,11 +552,11 @@ config X86_ALIGNMENT_16
  
  config X86_INTEL_USERCOPY
  	def_bool y
 -	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
  
  config X86_USE_PPRO_CHECKSUM
  	def_bool y
 -	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MATOM || MNATIVE
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
  
  config X86_USE_3DNOW
  	def_bool y
- 	depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
- 
--#
--# P6_NOPs are a relatively minor optimization that require a family >=
--# 6 processor, except that it is broken on certain VIA chips.
--# Furthermore, AMD chips prefer a totally different sequence of NOPs
--# (which work on all CPUs).  In addition, it looks like Virtual PC
--# does not understand them.
--#
--# As a result, disallow these if we're not compiling for X86_64 (these
--# NOPs do work on all x86-64 capable chips); the list of processors in
--# the right-hand clause are the cores that benefit from this optimization.
--#
+@@ -360,26 +576,26 @@ config X86_USE_3DNOW
  config X86_P6_NOP
--	def_bool y
--	depends on X86_64
+ 	def_bool y
+ 	depends on X86_64
 -	depends on (MCORE2 || MPENTIUM4 || MPSC)
-+	default n
-+	bool "Support for P6_NOPs on Intel chips"
-+	depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS  || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
-+	help
-+	P6_NOPs are a relatively minor optimization that require a family >=
-+	6 processor, except that it is broken on certain VIA chips.
-+	Furthermore, AMD chips prefer a totally different sequence of NOPs
-+	(which work on all CPUs).  In addition, it looks like Virtual PC
-+	does not understand them.
-+
-+	As a result, disallow these if we're not compiling for X86_64 (these
-+	NOPs do work on all x86-64 capable chips); the list of processors in
-+	the right-hand clause are the cores that benefit from this optimization.
-+
-+	Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
++	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
  
  config X86_TSC
  	def_bool y
 -	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
-+	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE) || X86_64
  
  config X86_CMPXCHG64
  	def_bool y
-@@ -374,7 +597,7 @@ config X86_CMPXCHG64
+-	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
++	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
+ 
+ # this should be set for all -march=.. options where the compiler
  # generates cmov.
  config X86_CMOV
  	def_bool y
 -	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
-+	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
++	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
  
  config X86_MINIMUM_CPU_FAMILY
  	int
---- a/arch/x86/Makefile	2020-06-10 14:21:45.000000000 -0400
-+++ b/arch/x86/Makefile	2020-06-15 10:44:35.608035680 -0400
-@@ -119,13 +119,56 @@ else
- 	KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+ 	default "64" if X86_64
+-	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8)
++	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
+ 	default "5" if X86_32 && X86_CMPXCHG64
+ 	default "4"
  
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 00e378de8bc0..7602ef4a2dd4 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -121,11 +121,38 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-+        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
          cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
-+        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+-
+-        cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3)
 +        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
 +        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
 +        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
@@ -551,91 +452,98 @@ REFERENCES
 +        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
 +        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
 +        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
-+        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
-         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
- 
-         cflags-$(CONFIG_MCORE2) += \
--                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
--	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
--		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
-+        cflags-$(CONFIG_MNEHALEM) += \
-+                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
-+        cflags-$(CONFIG_MWESTMERE) += \
-+                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
-+        cflags-$(CONFIG_MSILVERMONT) += \
-+                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
-+        cflags-$(CONFIG_MGOLDMONT) += \
-+                $(call cc-option,-march=goldmont,$(call cc-option,-mtune=goldmont))
-+        cflags-$(CONFIG_MGOLDMONTPLUS) += \
-+                $(call cc-option,-march=goldmont-plus,$(call cc-option,-mtune=goldmont-plus))
-+        cflags-$(CONFIG_MSANDYBRIDGE) += \
-+                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
-+        cflags-$(CONFIG_MIVYBRIDGE) += \
-+                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
-+        cflags-$(CONFIG_MHASWELL) += \
-+                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
-+        cflags-$(CONFIG_MBROADWELL) += \
-+                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
-+        cflags-$(CONFIG_MSKYLAKE) += \
-+                $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
-+        cflags-$(CONFIG_MSKYLAKEX) += \
-+                $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
-+        cflags-$(CONFIG_MCANNONLAKE) += \
-+                $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
-+        cflags-$(CONFIG_MICELAKE) += \
-+                $(call cc-option,-march=icelake-client,$(call cc-option,-mtune=icelake-client))
-+        cflags-$(CONFIG_MCASCADELAKE) += \
-+                $(call cc-option,-march=cascadelake,$(call cc-option,-mtune=cascadelake))
-+        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
-+                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
++        cflags-$(CONFIG_MZEN2) +=  $(call cc-option,-march=znver2)
++
++        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell)
++        cflags-$(CONFIG_MCORE2) += $(call cc-option,-march=core2)
++        cflags-$(CONFIG_MNEHALEM) += $(call cc-option,-march=nehalem)
++        cflags-$(CONFIG_MWESTMERE) += $(call cc-option,-march=westmere)
++        cflags-$(CONFIG_MSILVERMONT) += $(call cc-option,-march=silvermont)
++        cflags-$(CONFIG_MGOLDMONT) += $(call cc-option,-march=goldmont)
++        cflags-$(CONFIG_MGOLDMONTPLUS) += $(call cc-option,-march=goldmont-plus)
++        cflags-$(CONFIG_MSANDYBRIDGE) += $(call cc-option,-march=sandybridge)
++        cflags-$(CONFIG_MIVYBRIDGE) += $(call cc-option,-march=ivybridge)
++        cflags-$(CONFIG_MHASWELL) += $(call cc-option,-march=haswell)
++        cflags-$(CONFIG_MBROADWELL) += $(call cc-option,-march=broadwell)
++        cflags-$(CONFIG_MSKYLAKE) += $(call cc-option,-march=skylake)
++        cflags-$(CONFIG_MSKYLAKEX) += $(call cc-option,-march=skylake-avx512)
++        cflags-$(CONFIG_MCANNONLAKE) += $(call cc-option,-march=cannonlake)
++        cflags-$(CONFIG_MICELAKE) += $(call cc-option,-march=icelake-client)
++        cflags-$(CONFIG_MCASCADELAKE) += $(call cc-option,-march=cascadelake)
          cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
          KBUILD_CFLAGS += $(cflags-y)
  
---- a/arch/x86/Makefile_32.cpu	2020-06-10 14:21:45.000000000 -0400
-+++ b/arch/x86/Makefile_32.cpu	2020-06-15 10:44:10.437477053 -0400
-@@ -24,7 +24,19 @@ cflags-$(CONFIG_MK6)		+= -march=k6
- # Please note, that patches that add -march=athlon-xp and friends are pointless.
- # They make zero difference whatsosever to performance at this time.
- cflags-$(CONFIG_MK7)		+= -march=athlon
-+cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
- cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
-+cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
-+cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
-+cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
-+cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
-+cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
-+cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
-+cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
-+cflags-$(CONFIG_MSTEAMROLLER)	+= $(call cc-option,-march=bdver3,-march=athlon)
-+cflags-$(CONFIG_MEXCAVATOR)	+= $(call cc-option,-march=bdver4,-march=athlon)
-+cflags-$(CONFIG_MZEN)	+= $(call cc-option,-march=znver1,-march=athlon)
-+cflags-$(CONFIG_MZEN2)	+= $(call cc-option,-march=znver2,-march=athlon)
- cflags-$(CONFIG_MCRUSOE)	+= -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
- cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
- cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
-@@ -33,8 +45,22 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
- cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
- cflags-$(CONFIG_MVIAC7)		+= -march=i686
- cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
--cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
--	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
-+cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
-+cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
-+cflags-$(CONFIG_MGOLDMONT)	+= -march=i686 $(call tune,goldmont)
-+cflags-$(CONFIG_MGOLDMONTPLUS)	+= -march=i686 $(call tune,goldmont-plus)
-+cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
-+cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
-+cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
-+cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
-+cflags-$(CONFIG_MSKYLAKE)	+= -march=i686 $(call tune,skylake)
-+cflags-$(CONFIG_MSKYLAKEX)	+= -march=i686 $(call tune,skylake-avx512)
-+cflags-$(CONFIG_MCANNONLAKE)	+= -march=i686 $(call tune,cannonlake)
-+cflags-$(CONFIG_MICELAKE)	+= -march=i686 $(call tune,icelake-client)
-+cflags-$(CONFIG_MCASCADELAKE)	+= -march=i686 $(call tune,cascadelake)
-+cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
-+	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
- 
- # AMD Elan support
- cflags-$(CONFIG_MELAN)		+= -march=i486
+diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
+index 75884d2cdec3..0cf864d2d110 100644
+--- a/arch/x86/include/asm/vermagic.h
++++ b/arch/x86/include/asm/vermagic.h
+@@ -17,6 +17,36 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -35,6 +65,28 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+-- 
+2.30.1
+

diff --git a/5013_enable-cpu-optimizations-for-gcc10.patch b/5013_enable-cpu-optimizations-for-gcc10.patch
index 0fc0a64..c90b586 100644
--- a/5013_enable-cpu-optimizations-for-gcc10.patch
+++ b/5013_enable-cpu-optimizations-for-gcc10.patch
@@ -1,8 +1,7 @@
 From 4666424a864159b4de572c90adb2c3e1fcdd5890 Mon Sep 17 00:00:00 2001
 From: graysky <graysky@archlinux.us>
 Date: Fri, 13 Nov 2020 15:45:08 -0500
-Subject: [PATCH] 
- enable_additional_cpu_optimizations_for_gcc_v10.1+_kernel_v5.8+.patch
+Subject: [PATCH]more-uarches-for-gcc-v10-and-kernel-5.8+
 
 WARNING
 This patch works with gcc versions 10.1+ and with kernel version 5.8+ and should
@@ -86,30 +85,20 @@ REFERENCES
 4.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
 5.  https://github.com/graysky2/kernel_gcc_patch/issues/15
 6.  http://www.linuxforge.net/docs/linux/linux-gcc.php
+
 ---
- arch/x86/Kconfig.cpu            | 301 ++++++++++++++++++++++++++++----
- arch/x86/Makefile               |  53 +++++-
- arch/x86/Makefile_32.cpu        |  32 +++-
- arch/x86/include/asm/vermagic.h |  56 ++++++
- 4 files changed, 407 insertions(+), 35 deletions(-)
+ arch/x86/Kconfig.cpu            | 258 ++++++++++++++++++++++++++++++--
+ arch/x86/Makefile               |  39 ++++-
+ arch/x86/include/asm/vermagic.h |  56 +++++++
+ 3 files changed, 336 insertions(+), 17 deletions(-)
 
 diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 814fe0d349b0..7b08e87fe797 100644
+index 814fe0d349b0..134390e619bb 100644
 --- a/arch/x86/Kconfig.cpu
 +++ b/arch/x86/Kconfig.cpu
-@@ -123,6 +123,7 @@ config MPENTIUMM
- config MPENTIUM4
- 	bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
- 	depends on X86_32
-+	select X86_P6_NOP
- 	help
- 	  Select this for Intel Pentium 4 chips.  This includes the
- 	  Pentium 4, Pentium D, P4-based Celeron and Xeon, and
-@@ -155,9 +156,8 @@ config MPENTIUM4
- 		-Paxville
- 		-Dempsey
+@@ -157,7 +157,7 @@ config MPENTIUM4
+ 
  
--
  config MK6
 -	bool "K6/K6-II/K6-III"
 +	bool "AMD K6/K6-II/K6-III"
@@ -147,7 +136,7 @@ index 814fe0d349b0..7b08e87fe797 100644
 +	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
 +	help
 +	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
-+		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
 +	  Enables use of some extended instructions, and passes appropriate
 +	  optimization flags to GCC.
 +
@@ -217,52 +206,33 @@ index 814fe0d349b0..7b08e87fe797 100644
  config MCRUSOE
  	bool "Crusoe"
  	depends on X86_32
-@@ -260,6 +338,7 @@ config MVIAC7
- 
- config MPSC
- 	bool "Intel P4 / older Netburst based Xeon"
-+	select X86_P6_NOP
- 	depends on X86_64
- 	help
- 	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
-@@ -269,8 +348,19 @@ config MPSC
- 	  using the cpu family field
+@@ -270,7 +348,7 @@ config MPSC
  	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
  
-+config MATOM
-+	bool "Intel Atom"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for the Intel Atom platform. Intel Atom CPUs have an
-+	  in-order pipelining architecture and thus can benefit from
-+	  accordingly optimized code. Use a recent GCC with specific Atom
-+	  support in order to fully benefit from selecting this option.
-+
  config MCORE2
 -	bool "Core 2/newer Xeon"
 +	bool "Intel Core 2"
-+	select X86_P6_NOP
  	help
  
  	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -278,14 +368,151 @@ config MCORE2
+@@ -278,6 +356,8 @@ config MCORE2
  	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
  	  (not a typo)
  
--config MATOM
--	bool "Intel Atom"
 +	  Enables -march=core2
 +
+ config MATOM
+ 	bool "Intel Atom"
+ 	help
+@@ -287,6 +367,150 @@ config MATOM
+ 	  accordingly optimized code. Use a recent GCC with specific Atom
+ 	  support in order to fully benefit from selecting this option.
+ 
 +config MNEHALEM
 +	bool "Intel Nehalem"
 +	select X86_P6_NOP
- 	help
- 
--	  Select this for the Intel Atom platform. Intel Atom CPUs have an
--	  in-order pipelining architecture and thus can benefit from
--	  accordingly optimized code. Use a recent GCC with specific Atom
--	  support in order to fully benefit from selecting this option.
++	help
++
 +	  Select this for 1st Gen Core processors in the Nehalem family.
 +
 +	  Enables -march=nehalem
@@ -401,112 +371,96 @@ index 814fe0d349b0..7b08e87fe797 100644
 +	  Select this for third-generation 10 nm process processors in the Tiger Lake family.
 +
 +	  Enables -march=tigerlake
- 
++
  config GENERIC_CPU
  	bool "Generic-x86-64"
-@@ -294,6 +521,19 @@ config GENERIC_CPU
+ 	depends on X86_64
+@@ -294,6 +518,16 @@ config GENERIC_CPU
  	  Generic x86-64 CPU.
  	  Run equally well on all x86-64 CPUs.
  
 +config MNATIVE
-+ bool "Native optimizations autodetected by GCC"
-+ help
++	bool "Native optimizations autodetected by GCC"
++	help
 +
-+   GCC 4.2 and above support -march=native, which automatically detects
-+   the optimum settings to use based on your processor. -march=native
-+   also detects and applies additional settings beyond -march specific
-+   to your CPU, (eg. -msse4). Unless you have a specific reason not to
-+   (e.g. distcc cross-compiling), you should probably be using
-+   -march=native rather than anything listed below.
++	  GCC 4.2 and above support -march=native, which automatically detects
++	  the optimum settings to use based on your processor. Do NOT use this
++	  for AMD CPUs.  Intel Only!
 +
-+   Enables -march=native
++	  Enables -march=native
 +
  endchoice
  
  config X86_GENERIC
-@@ -318,7 +558,7 @@ config X86_INTERNODE_CACHE_SHIFT
+@@ -318,7 +552,7 @@ config X86_INTERNODE_CACHE_SHIFT
  config X86_L1_CACHE_SHIFT
  	int
  	default "7" if MPENTIUM4 || MPSC
 -	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE || X86_GENERIC || GENERIC_CPU
  	default "4" if MELAN || M486SX || M486 || MGEODEGX1
  	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
  
-@@ -336,35 +576,36 @@ config X86_ALIGNMENT_16
+@@ -336,11 +570,11 @@ config X86_ALIGNMENT_16
  
  config X86_INTEL_USERCOPY
  	def_bool y
 -	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE
  
  config X86_USE_PPRO_CHECKSUM
  	def_bool y
 -	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MATOM || MNATIVE
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE
  
  config X86_USE_3DNOW
  	def_bool y
- 	depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
- 
--#
--# P6_NOPs are a relatively minor optimization that require a family >=
--# 6 processor, except that it is broken on certain VIA chips.
--# Furthermore, AMD chips prefer a totally different sequence of NOPs
--# (which work on all CPUs).  In addition, it looks like Virtual PC
--# does not understand them.
--#
--# As a result, disallow these if we're not compiling for X86_64 (these
--# NOPs do work on all x86-64 capable chips); the list of processors in
--# the right-hand clause are the cores that benefit from this optimization.
--#
+@@ -360,26 +594,26 @@ config X86_USE_3DNOW
  config X86_P6_NOP
--	def_bool y
--	depends on X86_64
+ 	def_bool y
+ 	depends on X86_64
 -	depends on (MCORE2 || MPENTIUM4 || MPSC)
-+	default n
-+	bool "Support for P6_NOPs on Intel chips"
-+	depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS  || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE)
-+	help
-+	P6_NOPs are a relatively minor optimization that require a family >=
-+	6 processor, except that it is broken on certain VIA chips.
-+	Furthermore, AMD chips prefer a totally different sequence of NOPs
-+	(which work on all CPUs).  In addition, it looks like Virtual PC
-+	does not understand them.
-+
-+	As a result, disallow these if we're not compiling for X86_64 (these
-+	NOPs do work on all x86-64 capable chips); the list of processors in
-+	the right-hand clause are the cores that benefit from this optimization.
-+
-+	Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
++	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE)
  
  config X86_TSC
  	def_bool y
 -	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
-+	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE) || X86_64
  
  config X86_CMPXCHG64
  	def_bool y
-@@ -374,7 +615,7 @@ config X86_CMPXCHG64
+-	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
++	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE
+ 
+ # this should be set for all -march=.. options where the compiler
  # generates cmov.
  config X86_CMOV
  	def_bool y
 -	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
-+	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
++	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE)
  
  config X86_MINIMUM_CPU_FAMILY
  	int
+ 	default "64" if X86_64
+-	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8)
++	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE)
+ 	default "5" if X86_32 && X86_CMPXCHG64
+ 	default "4"
+ 
 diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 154259f18b8b..405b1f2b3c65 100644
+index 7116da3980be..50c8af35092b 100644
 --- a/arch/x86/Makefile
 +++ b/arch/x86/Makefile
-@@ -115,13 +115,60 @@ else
- 	KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
- 
+@@ -110,11 +110,40 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-+        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
          cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
-+        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+-
+-        cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3)
 +        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
 +        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
 +        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
@@ -519,102 +473,30 @@ index 154259f18b8b..405b1f2b3c65 100644
 +        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
 +        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
 +        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
-+        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
-         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
- 
-         cflags-$(CONFIG_MCORE2) += \
--                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
--	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
--		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
-+        cflags-$(CONFIG_MNEHALEM) += \
-+                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
-+        cflags-$(CONFIG_MWESTMERE) += \
-+                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
-+        cflags-$(CONFIG_MSILVERMONT) += \
-+                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
-+        cflags-$(CONFIG_MGOLDMONT) += \
-+                $(call cc-option,-march=goldmont,$(call cc-option,-mtune=goldmont))
-+        cflags-$(CONFIG_MGOLDMONTPLUS) += \
-+                $(call cc-option,-march=goldmont-plus,$(call cc-option,-mtune=goldmont-plus))
-+        cflags-$(CONFIG_MSANDYBRIDGE) += \
-+                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
-+        cflags-$(CONFIG_MIVYBRIDGE) += \
-+                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
-+        cflags-$(CONFIG_MHASWELL) += \
-+                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
-+        cflags-$(CONFIG_MBROADWELL) += \
-+                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
-+        cflags-$(CONFIG_MSKYLAKE) += \
-+                $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
-+        cflags-$(CONFIG_MSKYLAKEX) += \
-+                $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
-+        cflags-$(CONFIG_MCANNONLAKE) += \
-+                $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
-+        cflags-$(CONFIG_MICELAKE) += \
-+                $(call cc-option,-march=icelake-client,$(call cc-option,-mtune=icelake-client))
-+        cflags-$(CONFIG_MCASCADELAKE) += \
-+                $(call cc-option,-march=cascadelake,$(call cc-option,-mtune=cascadelake))
-+        cflags-$(CONFIG_MCOOPERLAKE) += \
-+                $(call cc-option,-march=cooperlake,$(call cc-option,-mtune=cooperlake))
-+        cflags-$(CONFIG_MTIGERLAKE) += \
-+                $(call cc-option,-march=tigerlake,$(call cc-option,-mtune=tigerlake))
-+        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
-+                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
++        cflags-$(CONFIG_MZEN2) +=  $(call cc-option,-march=znver2)
++
++        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell)
++        cflags-$(CONFIG_MCORE2) += $(call cc-option,-march=core2)
++        cflags-$(CONFIG_MNEHALEM) += $(call cc-option,-march=nehalem)
++        cflags-$(CONFIG_MWESTMERE) += $(call cc-option,-march=westmere)
++        cflags-$(CONFIG_MSILVERMONT) += $(call cc-option,-march=silvermont)
++        cflags-$(CONFIG_MGOLDMONT) += $(call cc-option,-march=goldmont)
++        cflags-$(CONFIG_MGOLDMONTPLUS) += $(call cc-option,-march=goldmont-plus)
++        cflags-$(CONFIG_MSANDYBRIDGE) += $(call cc-option,-march=sandybridge)
++        cflags-$(CONFIG_MIVYBRIDGE) += $(call cc-option,-march=ivybridge)
++        cflags-$(CONFIG_MHASWELL) += $(call cc-option,-march=haswell)
++        cflags-$(CONFIG_MBROADWELL) += $(call cc-option,-march=broadwell)
++        cflags-$(CONFIG_MSKYLAKE) += $(call cc-option,-march=skylake)
++        cflags-$(CONFIG_MSKYLAKEX) += $(call cc-option,-march=skylake-avx512)
++        cflags-$(CONFIG_MCANNONLAKE) += $(call cc-option,-march=cannonlake)
++        cflags-$(CONFIG_MICELAKE) += $(call cc-option,-march=icelake-client)
++        cflags-$(CONFIG_MCASCADELAKE) += $(call cc-option,-march=cascadelake)
++        cflags-$(CONFIG_MCOOPERLAKE) += $(call cc-option,-march=cooperlake)
++        cflags-$(CONFIG_MTIGERLAKE) += $(call cc-option,-march=tigerlake)
          cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
          KBUILD_CFLAGS += $(cflags-y)
  
-diff --git a/arch/x86/Makefile_32.cpu b/arch/x86/Makefile_32.cpu
-index cd3056759880..cb0a4c6bd987 100644
---- a/arch/x86/Makefile_32.cpu
-+++ b/arch/x86/Makefile_32.cpu
-@@ -24,7 +24,19 @@ cflags-$(CONFIG_MK6)		+= -march=k6
- # Please note, that patches that add -march=athlon-xp and friends are pointless.
- # They make zero difference whatsosever to performance at this time.
- cflags-$(CONFIG_MK7)		+= -march=athlon
-+cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
- cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
-+cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
-+cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
-+cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
-+cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
-+cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
-+cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
-+cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
-+cflags-$(CONFIG_MSTEAMROLLER)	+= $(call cc-option,-march=bdver3,-march=athlon)
-+cflags-$(CONFIG_MEXCAVATOR)	+= $(call cc-option,-march=bdver4,-march=athlon)
-+cflags-$(CONFIG_MZEN)	+= $(call cc-option,-march=znver1,-march=athlon)
-+cflags-$(CONFIG_MZEN2)	+= $(call cc-option,-march=znver2,-march=athlon)
- cflags-$(CONFIG_MCRUSOE)	+= -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
- cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
- cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
-@@ -33,8 +45,24 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-option,-march=c3,-march=i486) -falign-fu
- cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
- cflags-$(CONFIG_MVIAC7)		+= -march=i686
- cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
--cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
--	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
-+cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
-+cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
-+cflags-$(CONFIG_MGOLDMONT)	+= -march=i686 $(call tune,goldmont)
-+cflags-$(CONFIG_MGOLDMONTPLUS)	+= -march=i686 $(call tune,goldmont-plus)
-+cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
-+cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
-+cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
-+cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
-+cflags-$(CONFIG_MSKYLAKE)	+= -march=i686 $(call tune,skylake)
-+cflags-$(CONFIG_MSKYLAKEX)	+= -march=i686 $(call tune,skylake-avx512)
-+cflags-$(CONFIG_MCANNONLAKE)	+= -march=i686 $(call tune,cannonlake)
-+cflags-$(CONFIG_MICELAKE)	+= -march=i686 $(call tune,icelake-client)
-+cflags-$(CONFIG_MCASCADELAKE)	+= -march=i686 $(call tune,cascadelake)
-+cflags-$(CONFIG_MCOOPERLAKE)	+= -march=i686 $(call tune,cooperlake)
-+cflags-$(CONFIG_MTIGERLAKE)	+= -march=i686 $(call tune,tigerlake)
-+cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
-+	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
- 
- # AMD Elan support
- cflags-$(CONFIG_MELAN)		+= -march=i486
 diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
 index 75884d2cdec3..14c222e78213 100644
 --- a/arch/x86/include/asm/vermagic.h
@@ -690,5 +572,6 @@ index 75884d2cdec3..14c222e78213 100644
  #define MODULE_PROC_FAMILY "ELAN "
  #elif defined CONFIG_MCRUSOE
 -- 
-2.29.2
+2.30.1
+
 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-03-25  9:04 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-03-25  9:04 UTC (permalink / raw
  To: gentoo-commits

commit:     cc0e5ae3b0469c91a02eb4e50582aff4a7554e5b
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 25 08:57:38 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Mar 25 08:58:00 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cc0e5ae3

Linux patch 5.10.26

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |     4 +
 1025_linux-5.10.26.patch | 12913 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 12917 insertions(+)

diff --git a/0000_README b/0000_README
index 918ab77..da6b8c2 100644
--- a/0000_README
+++ b/0000_README
@@ -143,6 +143,10 @@ Patch:  1024_linux-5.10.25.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.25
 
+Patch:  1025_linux-5.10.26.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.26
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1025_linux-5.10.26.patch b/1025_linux-5.10.26.patch
new file mode 100644
index 0000000..b5ea906
--- /dev/null
+++ b/1025_linux-5.10.26.patch
@@ -0,0 +1,12913 @@
+diff --git a/Documentation/scsi/libsas.rst b/Documentation/scsi/libsas.rst
+index f9b77c7879dbb..ea63ab3a92160 100644
+--- a/Documentation/scsi/libsas.rst
++++ b/Documentation/scsi/libsas.rst
+@@ -189,12 +189,10 @@ num_phys
+ The event interface::
+ 
+ 	/* LLDD calls these to notify the class of an event. */
+-	void (*notify_port_event)(struct sas_phy *, enum port_event);
+-	void (*notify_phy_event)(struct sas_phy *, enum phy_event);
+-
+-When sas_register_ha() returns, those are set and can be
+-called by the LLDD to notify the SAS layer of such events
+-the SAS layer.
++	void sas_notify_port_event(struct sas_phy *, enum port_event);
++	void sas_notify_phy_event(struct sas_phy *, enum phy_event);
++	void sas_notify_port_event_gfp(struct sas_phy *, enum port_event, gfp_t);
++	void sas_notify_phy_event_gfp(struct sas_phy *, enum phy_event, gfp_t);
+ 
+ The port notification::
+ 
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 281de213ef478..24cdfcf334ea1 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -1155,7 +1155,7 @@ M:	Joel Fernandes <joel@joelfernandes.org>
+ M:	Christian Brauner <christian@brauner.io>
+ M:	Hridya Valsaraju <hridya@google.com>
+ M:	Suren Baghdasaryan <surenb@google.com>
+-L:	devel@driverdev.osuosl.org
++L:	linux-kernel@vger.kernel.org
+ S:	Supported
+ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
+ F:	drivers/android/
+@@ -8001,7 +8001,6 @@ F:	drivers/crypto/hisilicon/sec2/sec_main.c
+ 
+ HISILICON STAGING DRIVERS FOR HIKEY 960/970
+ M:	Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
+-L:	devel@driverdev.osuosl.org
+ S:	Maintained
+ F:	drivers/staging/hikey9xx/
+ 
+@@ -16665,7 +16664,7 @@ F:	drivers/staging/vt665?/
+ 
+ STAGING SUBSYSTEM
+ M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+-L:	devel@driverdev.osuosl.org
++L:	linux-staging@lists.linux.dev
+ S:	Supported
+ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
+ F:	drivers/staging/
+@@ -18705,7 +18704,7 @@ VME SUBSYSTEM
+ M:	Martyn Welch <martyn@welchs.me.uk>
+ M:	Manohar Vanga <manohar.vanga@gmail.com>
+ M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+-L:	devel@driverdev.osuosl.org
++L:	linux-kernel@vger.kernel.org
+ S:	Maintained
+ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
+ F:	Documentation/driver-api/vme.rst
+diff --git a/Makefile b/Makefile
+index 6858425cbe6c1..d4b87e604762a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 25
++SUBLEVEL = 26
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -1249,15 +1249,17 @@ endef
+ define filechk_version.h
+ 	if [ $(SUBLEVEL) -gt 255 ]; then                                 \
+ 		echo \#define LINUX_VERSION_CODE $(shell                 \
+-		expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 255); \
++		expr $(VERSION) \* 65536 + $(PATCHLEVEL) \* 256 + 255); \
+ 	else                                                             \
+ 		echo \#define LINUX_VERSION_CODE $(shell                 \
+-		expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + $(SUBLEVEL)); \
++		expr $(VERSION) \* 65536 + $(PATCHLEVEL) \* 256 + $(SUBLEVEL)); \
+ 	fi;                                                              \
+ 	echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) +  \
+ 	((c) > 255 ? 255 : (c)))'
+ endef
+ 
++$(version_h): PATCHLEVEL := $(if $(PATCHLEVEL), $(PATCHLEVEL), 0)
++$(version_h): SUBLEVEL := $(if $(SUBLEVEL), $(SUBLEVEL), 0)
+ $(version_h): FORCE
+ 	$(call filechk,version.h)
+ 	$(Q)rm -f $(old_version_h)
+diff --git a/arch/mips/boot/compressed/Makefile b/arch/mips/boot/compressed/Makefile
+index d66511825fe1e..337ab1d18cc1f 100644
+--- a/arch/mips/boot/compressed/Makefile
++++ b/arch/mips/boot/compressed/Makefile
+@@ -36,6 +36,7 @@ KBUILD_AFLAGS := $(KBUILD_AFLAGS) -D__ASSEMBLY__ \
+ 
+ # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
+ KCOV_INSTRUMENT		:= n
++UBSAN_SANITIZE := n
+ 
+ # decompressor objects (linked with vmlinuz)
+ vmlinuzobjs-y := $(obj)/head.o $(obj)/decompress.o $(obj)/string.o
+diff --git a/arch/powerpc/include/asm/cpu_has_feature.h b/arch/powerpc/include/asm/cpu_has_feature.h
+index 7897d16e09904..727d4b3219379 100644
+--- a/arch/powerpc/include/asm/cpu_has_feature.h
++++ b/arch/powerpc/include/asm/cpu_has_feature.h
+@@ -7,7 +7,7 @@
+ #include <linux/bug.h>
+ #include <asm/cputable.h>
+ 
+-static inline bool early_cpu_has_feature(unsigned long feature)
++static __always_inline bool early_cpu_has_feature(unsigned long feature)
+ {
+ 	return !!((CPU_FTRS_ALWAYS & feature) ||
+ 		  (CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature));
+@@ -46,7 +46,7 @@ static __always_inline bool cpu_has_feature(unsigned long feature)
+ 	return static_branch_likely(&cpu_feature_keys[i]);
+ }
+ #else
+-static inline bool cpu_has_feature(unsigned long feature)
++static __always_inline bool cpu_has_feature(unsigned long feature)
+ {
+ 	return early_cpu_has_feature(feature);
+ }
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index 242bdd8281e0f..a2e067f68dee8 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -1853,7 +1853,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 				goto compute_done;
+ 			}
+ 
+-			return -1;
++			goto unknown_opcode;
+ #ifdef __powerpc64__
+ 		case 777:	/* modsd */
+ 			if (!cpu_has_feature(CPU_FTR_ARCH_300))
+@@ -2909,6 +2909,20 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+ 
+ 	}
+ 
++	if (OP_IS_LOAD_STORE(op->type) && (op->type & UPDATE)) {
++		switch (GETTYPE(op->type)) {
++		case LOAD:
++			if (ra == rd)
++				goto unknown_opcode;
++			fallthrough;
++		case STORE:
++		case LOAD_FP:
++		case STORE_FP:
++			if (ra == 0)
++				goto unknown_opcode;
++		}
++	}
++
+ #ifdef CONFIG_VSX
+ 	if ((GETTYPE(op->type) == LOAD_VSX ||
+ 	     GETTYPE(op->type) == STORE_VSX) &&
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 3474286e59db7..df7fccf76df69 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -84,7 +84,6 @@ config RISCV
+ 	select PCI_MSI if PCI
+ 	select RISCV_INTC
+ 	select RISCV_TIMER if RISCV_SBI
+-	select SPARSEMEM_STATIC if 32BIT
+ 	select SPARSE_IRQ
+ 	select SYSCTL_EXCEPTION_TRACE
+ 	select THREAD_INFO_IN_TASK
+@@ -145,7 +144,8 @@ config ARCH_FLATMEM_ENABLE
+ config ARCH_SPARSEMEM_ENABLE
+ 	def_bool y
+ 	depends on MMU
+-	select SPARSEMEM_VMEMMAP_ENABLE
++	select SPARSEMEM_STATIC if 32BIT && SPARSMEM
++	select SPARSEMEM_VMEMMAP_ENABLE if 64BIT
+ 
+ config ARCH_SELECT_MEMORY_MODEL
+ 	def_bool ARCH_SPARSEMEM_ENABLE
+diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
+index 653edb25d4957..c0fdb05ffa0b2 100644
+--- a/arch/riscv/include/asm/sbi.h
++++ b/arch/riscv/include/asm/sbi.h
+@@ -51,10 +51,10 @@ enum sbi_ext_rfence_fid {
+ 	SBI_EXT_RFENCE_REMOTE_FENCE_I = 0,
+ 	SBI_EXT_RFENCE_REMOTE_SFENCE_VMA,
+ 	SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID,
+-	SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA,
+ 	SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID,
+-	SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA,
++	SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA,
+ 	SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID,
++	SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA,
+ };
+ 
+ enum sbi_ext_hsm_fid {
+diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
+index 212628932ddc1..a75d94a9bcb2f 100644
+--- a/arch/s390/include/asm/pci.h
++++ b/arch/s390/include/asm/pci.h
+@@ -201,8 +201,8 @@ extern unsigned int s390_pci_no_rid;
+   Prototypes
+ ----------------------------------------------------------------------------- */
+ /* Base stuff */
+-int zpci_create_device(struct zpci_dev *);
+-void zpci_remove_device(struct zpci_dev *zdev);
++int zpci_create_device(u32 fid, u32 fh, enum zpci_state state);
++void zpci_remove_device(struct zpci_dev *zdev, bool set_error);
+ int zpci_enable_device(struct zpci_dev *);
+ int zpci_disable_device(struct zpci_dev *);
+ int zpci_register_ioat(struct zpci_dev *, u8, u64, u64, u64);
+@@ -212,7 +212,7 @@ void zpci_remove_reserved_devices(void);
+ /* CLP */
+ int clp_setup_writeback_mio(void);
+ int clp_scan_pci_devices(void);
+-int clp_add_pci_device(u32, u32, int);
++int clp_query_pci_fn(struct zpci_dev *zdev);
+ int clp_enable_fh(struct zpci_dev *, u8);
+ int clp_disable_fh(struct zpci_dev *);
+ int clp_get_state(u32 fid, enum zpci_state *state);
+diff --git a/arch/s390/kernel/vtime.c b/arch/s390/kernel/vtime.c
+index 7b3af2d6b9baa..579ec3a8c816f 100644
+--- a/arch/s390/kernel/vtime.c
++++ b/arch/s390/kernel/vtime.c
+@@ -217,7 +217,7 @@ void vtime_flush(struct task_struct *tsk)
+ 	avg_steal = S390_lowcore.avg_steal_timer / 2;
+ 	if ((s64) steal > 0) {
+ 		S390_lowcore.steal_timer = 0;
+-		account_steal_time(steal);
++		account_steal_time(cputime_to_nsecs(steal));
+ 		avg_steal += steal;
+ 	}
+ 	S390_lowcore.avg_steal_timer = avg_steal;
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 570016ae8bcd1..1ae7a76ae97b7 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -682,56 +682,101 @@ int zpci_disable_device(struct zpci_dev *zdev)
+ }
+ EXPORT_SYMBOL_GPL(zpci_disable_device);
+ 
+-void zpci_remove_device(struct zpci_dev *zdev)
++/* zpci_remove_device - Removes the given zdev from the PCI core
++ * @zdev: the zdev to be removed from the PCI core
++ * @set_error: if true the device's error state is set to permanent failure
++ *
++ * Sets a zPCI device to a configured but offline state; the zPCI
++ * device is still accessible through its hotplug slot and the zPCI
++ * API but is removed from the common code PCI bus, making it
++ * no longer available to drivers.
++ */
++void zpci_remove_device(struct zpci_dev *zdev, bool set_error)
+ {
+ 	struct zpci_bus *zbus = zdev->zbus;
+ 	struct pci_dev *pdev;
+ 
++	if (!zdev->zbus->bus)
++		return;
++
+ 	pdev = pci_get_slot(zbus->bus, zdev->devfn);
+ 	if (pdev) {
+-		if (pdev->is_virtfn)
+-			return zpci_iov_remove_virtfn(pdev, zdev->vfn);
++		if (set_error)
++			pdev->error_state = pci_channel_io_perm_failure;
++		if (pdev->is_virtfn) {
++			zpci_iov_remove_virtfn(pdev, zdev->vfn);
++			/* balance pci_get_slot */
++			pci_dev_put(pdev);
++			return;
++		}
+ 		pci_stop_and_remove_bus_device_locked(pdev);
++		/* balance pci_get_slot */
++		pci_dev_put(pdev);
+ 	}
+ }
+ 
+-int zpci_create_device(struct zpci_dev *zdev)
++/**
++ * zpci_create_device() - Create a new zpci_dev and add it to the zbus
++ * @fid: Function ID of the device to be created
++ * @fh: Current Function Handle of the device to be created
++ * @state: Initial state after creation either Standby or Configured
++ *
++ * Creates a new zpci device and adds it to its, possibly newly created, zbus
++ * as well as zpci_list.
++ *
++ * Returns: 0 on success, an error value otherwise
++ */
++int zpci_create_device(u32 fid, u32 fh, enum zpci_state state)
+ {
++	struct zpci_dev *zdev;
+ 	int rc;
+ 
+-	kref_init(&zdev->kref);
++	zpci_dbg(3, "add fid:%x, fh:%x, c:%d\n", fid, fh, state);
++	zdev = kzalloc(sizeof(*zdev), GFP_KERNEL);
++	if (!zdev)
++		return -ENOMEM;
+ 
+-	spin_lock(&zpci_list_lock);
+-	list_add_tail(&zdev->entry, &zpci_list);
+-	spin_unlock(&zpci_list_lock);
++	/* FID and Function Handle are the static/dynamic identifiers */
++	zdev->fid = fid;
++	zdev->fh = fh;
+ 
+-	rc = zpci_init_iommu(zdev);
++	/* Query function properties and update zdev */
++	rc = clp_query_pci_fn(zdev);
+ 	if (rc)
+-		goto out;
++		goto error;
++	zdev->state =  state;
+ 
++	kref_init(&zdev->kref);
+ 	mutex_init(&zdev->lock);
++
++	rc = zpci_init_iommu(zdev);
++	if (rc)
++		goto error;
++
+ 	if (zdev->state == ZPCI_FN_STATE_CONFIGURED) {
+ 		rc = zpci_enable_device(zdev);
+ 		if (rc)
+-			goto out_destroy_iommu;
++			goto error_destroy_iommu;
+ 	}
+ 
+ 	rc = zpci_bus_device_register(zdev, &pci_root_ops);
+ 	if (rc)
+-		goto out_disable;
++		goto error_disable;
++
++	spin_lock(&zpci_list_lock);
++	list_add_tail(&zdev->entry, &zpci_list);
++	spin_unlock(&zpci_list_lock);
+ 
+ 	return 0;
+ 
+-out_disable:
++error_disable:
+ 	if (zdev->state == ZPCI_FN_STATE_ONLINE)
+ 		zpci_disable_device(zdev);
+-
+-out_destroy_iommu:
++error_destroy_iommu:
+ 	zpci_destroy_iommu(zdev);
+-out:
+-	spin_lock(&zpci_list_lock);
+-	list_del(&zdev->entry);
+-	spin_unlock(&zpci_list_lock);
++error:
++	zpci_dbg(0, "add fid:%x, rc:%d\n", fid, rc);
++	kfree(zdev);
+ 	return rc;
+ }
+ 
+@@ -740,7 +785,7 @@ void zpci_release_device(struct kref *kref)
+ 	struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);
+ 
+ 	if (zdev->zbus->bus)
+-		zpci_remove_device(zdev);
++		zpci_remove_device(zdev, false);
+ 
+ 	switch (zdev->state) {
+ 	case ZPCI_FN_STATE_ONLINE:
+diff --git a/arch/s390/pci/pci_clp.c b/arch/s390/pci/pci_clp.c
+index 153720d21ae7f..d3331596ddbe1 100644
+--- a/arch/s390/pci/pci_clp.c
++++ b/arch/s390/pci/pci_clp.c
+@@ -181,7 +181,7 @@ static int clp_store_query_pci_fn(struct zpci_dev *zdev,
+ 	return 0;
+ }
+ 
+-static int clp_query_pci_fn(struct zpci_dev *zdev, u32 fh)
++int clp_query_pci_fn(struct zpci_dev *zdev)
+ {
+ 	struct clp_req_rsp_query_pci *rrb;
+ 	int rc;
+@@ -194,7 +194,7 @@ static int clp_query_pci_fn(struct zpci_dev *zdev, u32 fh)
+ 	rrb->request.hdr.len = sizeof(rrb->request);
+ 	rrb->request.hdr.cmd = CLP_QUERY_PCI_FN;
+ 	rrb->response.hdr.len = sizeof(rrb->response);
+-	rrb->request.fh = fh;
++	rrb->request.fh = zdev->fh;
+ 
+ 	rc = clp_req(rrb, CLP_LPS_PCI);
+ 	if (!rc && rrb->response.hdr.rsp == CLP_RC_OK) {
+@@ -212,40 +212,6 @@ out:
+ 	return rc;
+ }
+ 
+-int clp_add_pci_device(u32 fid, u32 fh, int configured)
+-{
+-	struct zpci_dev *zdev;
+-	int rc = -ENOMEM;
+-
+-	zpci_dbg(3, "add fid:%x, fh:%x, c:%d\n", fid, fh, configured);
+-	zdev = kzalloc(sizeof(*zdev), GFP_KERNEL);
+-	if (!zdev)
+-		goto error;
+-
+-	zdev->fh = fh;
+-	zdev->fid = fid;
+-
+-	/* Query function properties and update zdev */
+-	rc = clp_query_pci_fn(zdev, fh);
+-	if (rc)
+-		goto error;
+-
+-	if (configured)
+-		zdev->state = ZPCI_FN_STATE_CONFIGURED;
+-	else
+-		zdev->state = ZPCI_FN_STATE_STANDBY;
+-
+-	rc = zpci_create_device(zdev);
+-	if (rc)
+-		goto error;
+-	return 0;
+-
+-error:
+-	zpci_dbg(0, "add fid:%x, rc:%d\n", fid, rc);
+-	kfree(zdev);
+-	return rc;
+-}
+-
+ static int clp_refresh_fh(u32 fid);
+ /*
+  * Enable/Disable a given PCI function and update its function handle if
+@@ -408,7 +374,7 @@ static void __clp_add(struct clp_fh_list_entry *entry, void *data)
+ 
+ 	zdev = get_zdev_by_fid(entry->fid);
+ 	if (!zdev)
+-		clp_add_pci_device(entry->fid, entry->fh, entry->config_state);
++		zpci_create_device(entry->fid, entry->fh, entry->config_state);
+ }
+ 
+ int clp_scan_pci_devices(void)
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index 9a6bae503fe61..ac0c65cdd69d9 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -76,20 +76,17 @@ void zpci_event_error(void *data)
+ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ {
+ 	struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid);
+-	struct pci_dev *pdev = NULL;
+ 	enum zpci_state state;
++	struct pci_dev *pdev;
+ 	int ret;
+ 
+-	if (zdev && zdev->zbus && zdev->zbus->bus)
+-		pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
+-
+ 	zpci_err("avail CCDF:\n");
+ 	zpci_err_hex(ccdf, sizeof(*ccdf));
+ 
+ 	switch (ccdf->pec) {
+ 	case 0x0301: /* Reserved|Standby -> Configured */
+ 		if (!zdev) {
+-			ret = clp_add_pci_device(ccdf->fid, ccdf->fh, 1);
++			zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_CONFIGURED);
+ 			break;
+ 		}
+ 		/* the configuration request may be stale */
+@@ -116,7 +113,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ 		break;
+ 	case 0x0302: /* Reserved -> Standby */
+ 		if (!zdev) {
+-			clp_add_pci_device(ccdf->fid, ccdf->fh, 0);
++			zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_STANDBY);
+ 			break;
+ 		}
+ 		zdev->fh = ccdf->fh;
+@@ -124,8 +121,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ 	case 0x0303: /* Deconfiguration requested */
+ 		if (!zdev)
+ 			break;
+-		if (pdev)
+-			zpci_remove_device(zdev);
++		zpci_remove_device(zdev, false);
+ 
+ 		ret = zpci_disable_device(zdev);
+ 		if (ret)
+@@ -140,12 +136,10 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ 	case 0x0304: /* Configured -> Standby|Reserved */
+ 		if (!zdev)
+ 			break;
+-		if (pdev) {
+-			/* Give the driver a hint that the function is
+-			 * already unusable. */
+-			pdev->error_state = pci_channel_io_perm_failure;
+-			zpci_remove_device(zdev);
+-		}
++		/* Give the driver a hint that the function is
++		 * already unusable.
++		 */
++		zpci_remove_device(zdev, true);
+ 
+ 		zdev->fh = ccdf->fh;
+ 		zpci_disable_device(zdev);
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 4b05c876f9f69..e7dc13fe5e29f 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3562,6 +3562,9 @@ static int intel_pmu_hw_config(struct perf_event *event)
+ 		return ret;
+ 
+ 	if (event->attr.precise_ip) {
++		if ((event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_FIXED_VLBR_EVENT)
++			return -EINVAL;
++
+ 		if (!(event->attr.freq || (event->attr.wakeup_events && !event->attr.watermark))) {
+ 			event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
+ 			if (!(event->attr.sample_type &
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 485c5066f8b8c..31a7a6566d077 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -1894,7 +1894,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
+ 		 */
+ 		if (!pebs_status && cpuc->pebs_enabled &&
+ 			!(cpuc->pebs_enabled & (cpuc->pebs_enabled-1)))
+-			pebs_status = cpuc->pebs_enabled;
++			pebs_status = p->status = cpuc->pebs_enabled;
+ 
+ 		bit = find_first_bit((unsigned long *)&pebs_status,
+ 					x86_pmu.max_pebs_events);
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 82a08b5858182..50d02db723177 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -552,15 +552,6 @@ static inline void arch_thread_struct_whitelist(unsigned long *offset,
+ 	*size = fpu_kernel_xstate_size;
+ }
+ 
+-/*
+- * Thread-synchronous status.
+- *
+- * This is different from the flags in that nobody else
+- * ever touches our thread-synchronous status, so we don't
+- * have to worry about atomic accesses.
+- */
+-#define TS_COMPAT		0x0002	/* 32bit syscall active (64BIT)*/
+-
+ static inline void
+ native_load_sp0(unsigned long sp0)
+ {
+diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
+index 44733a4bfc429..e701f29b48817 100644
+--- a/arch/x86/include/asm/thread_info.h
++++ b/arch/x86/include/asm/thread_info.h
+@@ -216,10 +216,31 @@ static inline int arch_within_stack_frames(const void * const stack,
+ 
+ #endif
+ 
++/*
++ * Thread-synchronous status.
++ *
++ * This is different from the flags in that nobody else
++ * ever touches our thread-synchronous status, so we don't
++ * have to worry about atomic accesses.
++ */
++#define TS_COMPAT		0x0002	/* 32bit syscall active (64BIT)*/
++
++#ifndef __ASSEMBLY__
+ #ifdef CONFIG_COMPAT
+ #define TS_I386_REGS_POKED	0x0004	/* regs poked by 32-bit ptracer */
++#define TS_COMPAT_RESTART	0x0008
++
++#define arch_set_restart_data	arch_set_restart_data
++
++static inline void arch_set_restart_data(struct restart_block *restart)
++{
++	struct thread_info *ti = current_thread_info();
++	if (ti->status & TS_COMPAT)
++		ti->status |= TS_COMPAT_RESTART;
++	else
++		ti->status &= ~TS_COMPAT_RESTART;
++}
+ #endif
+-#ifndef __ASSEMBLY__
+ 
+ #ifdef CONFIG_X86_32
+ #define in_ia32_syscall() true
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index f4c0514fc5108..539f3e88ca7cd 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -2317,6 +2317,11 @@ static int cpuid_to_apicid[] = {
+ 	[0 ... NR_CPUS - 1] = -1,
+ };
+ 
++bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
++{
++	return phys_id == cpuid_to_apicid[cpu];
++}
++
+ #ifdef CONFIG_SMP
+ /**
+  * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 7b3c7e0d4a094..0d4818eab0da8 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -1033,6 +1033,16 @@ static int mp_map_pin_to_irq(u32 gsi, int idx, int ioapic, int pin,
+ 	if (idx >= 0 && test_bit(mp_irqs[idx].srcbus, mp_bus_not_pci)) {
+ 		irq = mp_irqs[idx].srcbusirq;
+ 		legacy = mp_is_legacy_irq(irq);
++		/*
++		 * IRQ2 is unusable for historical reasons on systems which
++		 * have a legacy PIC. See the comment vs. IRQ2 further down.
++		 *
++		 * If this gets removed at some point then the related code
++		 * in lapic_assign_system_vectors() needs to be adjusted as
++		 * well.
++		 */
++		if (legacy && irq == PIC_CASCADE_IR)
++			return -EINVAL;
+ 	}
+ 
+ 	mutex_lock(&ioapic_mutex);
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index be0d7d4152eca..f51cab3e983d8 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -766,30 +766,8 @@ handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+ 
+ static inline unsigned long get_nr_restart_syscall(const struct pt_regs *regs)
+ {
+-	/*
+-	 * This function is fundamentally broken as currently
+-	 * implemented.
+-	 *
+-	 * The idea is that we want to trigger a call to the
+-	 * restart_block() syscall and that we want in_ia32_syscall(),
+-	 * in_x32_syscall(), etc. to match whatever they were in the
+-	 * syscall being restarted.  We assume that the syscall
+-	 * instruction at (regs->ip - 2) matches whatever syscall
+-	 * instruction we used to enter in the first place.
+-	 *
+-	 * The problem is that we can get here when ptrace pokes
+-	 * syscall-like values into regs even if we're not in a syscall
+-	 * at all.
+-	 *
+-	 * For now, we maintain historical behavior and guess based on
+-	 * stored state.  We could do better by saving the actual
+-	 * syscall arch in restart_block or (with caveats on x32) by
+-	 * checking if regs->ip points to 'int $0x80'.  The current
+-	 * behavior is incorrect if a tracer has a different bitness
+-	 * than the tracee.
+-	 */
+ #ifdef CONFIG_IA32_EMULATION
+-	if (current_thread_info()->status & (TS_COMPAT|TS_I386_REGS_POKED))
++	if (current_thread_info()->status & TS_COMPAT_RESTART)
+ 		return __NR_ia32_restart_syscall;
+ #endif
+ #ifdef CONFIG_X86_X32_ABI
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 87682dcb64ec3..bfda153b1a41d 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -325,22 +325,22 @@ static void rpm_put_suppliers(struct device *dev)
+ static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
+ 	__releases(&dev->power.lock) __acquires(&dev->power.lock)
+ {
+-	bool use_links = dev->power.links_count > 0;
+-	bool get = false;
+ 	int retval, idx;
+-	bool put;
++	bool use_links = dev->power.links_count > 0;
+ 
+ 	if (dev->power.irq_safe) {
+ 		spin_unlock(&dev->power.lock);
+-	} else if (!use_links) {
+-		spin_unlock_irq(&dev->power.lock);
+ 	} else {
+-		get = dev->power.runtime_status == RPM_RESUMING;
+-
+ 		spin_unlock_irq(&dev->power.lock);
+ 
+-		/* Resume suppliers if necessary. */
+-		if (get) {
++		/*
++		 * Resume suppliers if necessary.
++		 *
++		 * The device's runtime PM status cannot change until this
++		 * routine returns, so it is safe to read the status outside of
++		 * the lock.
++		 */
++		if (use_links && dev->power.runtime_status == RPM_RESUMING) {
+ 			idx = device_links_read_lock();
+ 
+ 			retval = rpm_get_suppliers(dev);
+@@ -355,36 +355,24 @@ static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
+ 
+ 	if (dev->power.irq_safe) {
+ 		spin_lock(&dev->power.lock);
+-		return retval;
+-	}
+-
+-	spin_lock_irq(&dev->power.lock);
+-
+-	if (!use_links)
+-		return retval;
+-
+-	/*
+-	 * If the device is suspending and the callback has returned success,
+-	 * drop the usage counters of the suppliers that have been reference
+-	 * counted on its resume.
+-	 *
+-	 * Do that if the resume fails too.
+-	 */
+-	put = dev->power.runtime_status == RPM_SUSPENDING && !retval;
+-	if (put)
+-		__update_runtime_status(dev, RPM_SUSPENDED);
+-	else
+-		put = get && retval;
+-
+-	if (put) {
+-		spin_unlock_irq(&dev->power.lock);
+-
+-		idx = device_links_read_lock();
++	} else {
++		/*
++		 * If the device is suspending and the callback has returned
++		 * success, drop the usage counters of the suppliers that have
++		 * been reference counted on its resume.
++		 *
++		 * Do that if resume fails too.
++		 */
++		if (use_links
++		    && ((dev->power.runtime_status == RPM_SUSPENDING && !retval)
++		    || (dev->power.runtime_status == RPM_RESUMING && retval))) {
++			idx = device_links_read_lock();
+ 
+-fail:
+-		rpm_put_suppliers(dev);
++ fail:
++			rpm_put_suppliers(dev);
+ 
+-		device_links_read_unlock(idx);
++			device_links_read_unlock(idx);
++		}
+ 
+ 		spin_lock_irq(&dev->power.lock);
+ 	}
+diff --git a/drivers/counter/stm32-timer-cnt.c b/drivers/counter/stm32-timer-cnt.c
+index ef2a974a2f105..75bc401fdd189 100644
+--- a/drivers/counter/stm32-timer-cnt.c
++++ b/drivers/counter/stm32-timer-cnt.c
+@@ -31,7 +31,7 @@ struct stm32_timer_cnt {
+ 	struct counter_device counter;
+ 	struct regmap *regmap;
+ 	struct clk *clk;
+-	u32 ceiling;
++	u32 max_arr;
+ 	bool enabled;
+ 	struct stm32_timer_regs bak;
+ };
+@@ -44,13 +44,14 @@ struct stm32_timer_cnt {
+  * @STM32_COUNT_ENCODER_MODE_3: counts on both TI1FP1 and TI2FP2 edges
+  */
+ enum stm32_count_function {
+-	STM32_COUNT_SLAVE_MODE_DISABLED = -1,
++	STM32_COUNT_SLAVE_MODE_DISABLED,
+ 	STM32_COUNT_ENCODER_MODE_1,
+ 	STM32_COUNT_ENCODER_MODE_2,
+ 	STM32_COUNT_ENCODER_MODE_3,
+ };
+ 
+ static enum counter_count_function stm32_count_functions[] = {
++	[STM32_COUNT_SLAVE_MODE_DISABLED] = COUNTER_COUNT_FUNCTION_INCREASE,
+ 	[STM32_COUNT_ENCODER_MODE_1] = COUNTER_COUNT_FUNCTION_QUADRATURE_X2_A,
+ 	[STM32_COUNT_ENCODER_MODE_2] = COUNTER_COUNT_FUNCTION_QUADRATURE_X2_B,
+ 	[STM32_COUNT_ENCODER_MODE_3] = COUNTER_COUNT_FUNCTION_QUADRATURE_X4,
+@@ -73,8 +74,10 @@ static int stm32_count_write(struct counter_device *counter,
+ 			     const unsigned long val)
+ {
+ 	struct stm32_timer_cnt *const priv = counter->priv;
++	u32 ceiling;
+ 
+-	if (val > priv->ceiling)
++	regmap_read(priv->regmap, TIM_ARR, &ceiling);
++	if (val > ceiling)
+ 		return -EINVAL;
+ 
+ 	return regmap_write(priv->regmap, TIM_CNT, val);
+@@ -90,6 +93,9 @@ static int stm32_count_function_get(struct counter_device *counter,
+ 	regmap_read(priv->regmap, TIM_SMCR, &smcr);
+ 
+ 	switch (smcr & TIM_SMCR_SMS) {
++	case 0:
++		*function = STM32_COUNT_SLAVE_MODE_DISABLED;
++		return 0;
+ 	case 1:
+ 		*function = STM32_COUNT_ENCODER_MODE_1;
+ 		return 0;
+@@ -99,9 +105,9 @@ static int stm32_count_function_get(struct counter_device *counter,
+ 	case 3:
+ 		*function = STM32_COUNT_ENCODER_MODE_3;
+ 		return 0;
++	default:
++		return -EINVAL;
+ 	}
+-
+-	return -EINVAL;
+ }
+ 
+ static int stm32_count_function_set(struct counter_device *counter,
+@@ -112,6 +118,9 @@ static int stm32_count_function_set(struct counter_device *counter,
+ 	u32 cr1, sms;
+ 
+ 	switch (function) {
++	case STM32_COUNT_SLAVE_MODE_DISABLED:
++		sms = 0;
++		break;
+ 	case STM32_COUNT_ENCODER_MODE_1:
+ 		sms = 1;
+ 		break;
+@@ -122,8 +131,7 @@ static int stm32_count_function_set(struct counter_device *counter,
+ 		sms = 3;
+ 		break;
+ 	default:
+-		sms = 0;
+-		break;
++		return -EINVAL;
+ 	}
+ 
+ 	/* Store enable status */
+@@ -131,10 +139,6 @@ static int stm32_count_function_set(struct counter_device *counter,
+ 
+ 	regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, 0);
+ 
+-	/* TIMx_ARR register shouldn't be buffered (ARPE=0) */
+-	regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_ARPE, 0);
+-	regmap_write(priv->regmap, TIM_ARR, priv->ceiling);
+-
+ 	regmap_update_bits(priv->regmap, TIM_SMCR, TIM_SMCR_SMS, sms);
+ 
+ 	/* Make sure that registers are updated */
+@@ -185,11 +189,13 @@ static ssize_t stm32_count_ceiling_write(struct counter_device *counter,
+ 	if (ret)
+ 		return ret;
+ 
++	if (ceiling > priv->max_arr)
++		return -ERANGE;
++
+ 	/* TIMx_ARR register shouldn't be buffered (ARPE=0) */
+ 	regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_ARPE, 0);
+ 	regmap_write(priv->regmap, TIM_ARR, ceiling);
+ 
+-	priv->ceiling = ceiling;
+ 	return len;
+ }
+ 
+@@ -274,31 +280,36 @@ static int stm32_action_get(struct counter_device *counter,
+ 	size_t function;
+ 	int err;
+ 
+-	/* Default action mode (e.g. STM32_COUNT_SLAVE_MODE_DISABLED) */
+-	*action = STM32_SYNAPSE_ACTION_NONE;
+-
+ 	err = stm32_count_function_get(counter, count, &function);
+ 	if (err)
+-		return 0;
++		return err;
+ 
+ 	switch (function) {
++	case STM32_COUNT_SLAVE_MODE_DISABLED:
++		/* counts on internal clock when CEN=1 */
++		*action = STM32_SYNAPSE_ACTION_NONE;
++		return 0;
+ 	case STM32_COUNT_ENCODER_MODE_1:
+ 		/* counts up/down on TI1FP1 edge depending on TI2FP2 level */
+ 		if (synapse->signal->id == count->synapses[0].signal->id)
+ 			*action = STM32_SYNAPSE_ACTION_BOTH_EDGES;
+-		break;
++		else
++			*action = STM32_SYNAPSE_ACTION_NONE;
++		return 0;
+ 	case STM32_COUNT_ENCODER_MODE_2:
+ 		/* counts up/down on TI2FP2 edge depending on TI1FP1 level */
+ 		if (synapse->signal->id == count->synapses[1].signal->id)
+ 			*action = STM32_SYNAPSE_ACTION_BOTH_EDGES;
+-		break;
++		else
++			*action = STM32_SYNAPSE_ACTION_NONE;
++		return 0;
+ 	case STM32_COUNT_ENCODER_MODE_3:
+ 		/* counts up/down on both TI1FP1 and TI2FP2 edges */
+ 		*action = STM32_SYNAPSE_ACTION_BOTH_EDGES;
+-		break;
++		return 0;
++	default:
++		return -EINVAL;
+ 	}
+-
+-	return 0;
+ }
+ 
+ static const struct counter_ops stm32_timer_cnt_ops = {
+@@ -359,7 +370,7 @@ static int stm32_timer_cnt_probe(struct platform_device *pdev)
+ 
+ 	priv->regmap = ddata->regmap;
+ 	priv->clk = ddata->clk;
+-	priv->ceiling = ddata->max_arr;
++	priv->max_arr = ddata->max_arr;
+ 
+ 	priv->counter.name = dev_name(dev);
+ 	priv->counter.parent = dev;
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index df3f9bcab581c..4b7ee3fa9224f 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -927,7 +927,7 @@ int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
+ 	}
+ 
+ 	/* first try to find a slot in an existing linked list entry */
+-	for (prsv = efi_memreserve_root->next; prsv; prsv = rsv->next) {
++	for (prsv = efi_memreserve_root->next; prsv; ) {
+ 		rsv = memremap(prsv, sizeof(*rsv), MEMREMAP_WB);
+ 		index = atomic_fetch_add_unless(&rsv->count, 1, rsv->size);
+ 		if (index < rsv->size) {
+@@ -937,6 +937,7 @@ int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
+ 			memunmap(rsv);
+ 			return efi_mem_reserve_iomem(addr, size);
+ 		}
++		prsv = rsv->next;
+ 		memunmap(rsv);
+ 	}
+ 
+diff --git a/drivers/firmware/efi/vars.c b/drivers/firmware/efi/vars.c
+index 41c1d00bf933c..abdc8a6a39631 100644
+--- a/drivers/firmware/efi/vars.c
++++ b/drivers/firmware/efi/vars.c
+@@ -484,6 +484,10 @@ int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *),
+ 				}
+ 			}
+ 
++			break;
++		case EFI_UNSUPPORTED:
++			err = -EOPNOTSUPP;
++			status = EFI_NOT_FOUND;
+ 			break;
+ 		case EFI_NOT_FOUND:
+ 			break;
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 7f557ea905424..0a2c4adcd833c 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -572,6 +572,7 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
+ 			       struct lock_class_key *lock_key,
+ 			       struct lock_class_key *request_key)
+ {
++	struct fwnode_handle *fwnode = gc->parent ? dev_fwnode(gc->parent) : NULL;
+ 	unsigned long	flags;
+ 	int		ret = 0;
+ 	unsigned	i;
+@@ -601,6 +602,12 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
+ 		gc->of_node = gdev->dev.of_node;
+ #endif
+ 
++	/*
++	 * Assign fwnode depending on the result of the previous calls,
++	 * if none of them succeed, assign it to the parent's one.
++	 */
++	gdev->dev.fwnode = dev_fwnode(&gdev->dev) ?: fwnode;
++
+ 	gdev->id = ida_alloc(&gpio_ida, GFP_KERNEL);
+ 	if (gdev->id < 0) {
+ 		ret = gdev->id;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index ea1ea147f6073..c07737c456776 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1902,6 +1902,33 @@ cleanup:
+ 	return;
+ }
+ 
++static void dm_set_dpms_off(struct dc_link *link)
++{
++	struct dc_stream_state *stream_state;
++	struct amdgpu_dm_connector *aconnector = link->priv;
++	struct amdgpu_device *adev = drm_to_adev(aconnector->base.dev);
++	struct dc_stream_update stream_update;
++	bool dpms_off = true;
++
++	memset(&stream_update, 0, sizeof(stream_update));
++	stream_update.dpms_off = &dpms_off;
++
++	mutex_lock(&adev->dm.dc_lock);
++	stream_state = dc_stream_find_from_link(link);
++
++	if (stream_state == NULL) {
++		DRM_DEBUG_DRIVER("Error finding stream state associated with link!\n");
++		mutex_unlock(&adev->dm.dc_lock);
++		return;
++	}
++
++	stream_update.stream = stream_state;
++	dc_commit_updates_for_stream(stream_state->ctx->dc, NULL, 0,
++				     stream_state, &stream_update,
++				     stream_state->ctx->dc->current_state);
++	mutex_unlock(&adev->dm.dc_lock);
++}
++
+ static int dm_resume(void *handle)
+ {
+ 	struct amdgpu_device *adev = handle;
+@@ -2353,8 +2380,11 @@ static void handle_hpd_irq(void *param)
+ 			drm_kms_helper_hotplug_event(dev);
+ 
+ 	} else if (dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD)) {
+-		amdgpu_dm_update_connector_after_detect(aconnector);
++		if (new_connection_type == dc_connection_none &&
++		    aconnector->dc_link->type == dc_connection_none)
++			dm_set_dpms_off(aconnector->dc_link);
+ 
++		amdgpu_dm_update_connector_after_detect(aconnector);
+ 
+ 		drm_modeset_lock_all(dev);
+ 		dm_restore_drm_connector_state(dev, connector);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 45ad05f6e03b9..ffb21196bf599 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2767,6 +2767,19 @@ struct dc_stream_state *dc_get_stream_at_index(struct dc *dc, uint8_t i)
+ 	return NULL;
+ }
+ 
++struct dc_stream_state *dc_stream_find_from_link(const struct dc_link *link)
++{
++	uint8_t i;
++	struct dc_context *ctx = link->ctx;
++
++	for (i = 0; i < ctx->dc->current_state->stream_count; i++) {
++		if (ctx->dc->current_state->streams[i]->link == link)
++			return ctx->dc->current_state->streams[i];
++	}
++
++	return NULL;
++}
++
+ enum dc_irq_source dc_interrupt_to_irq_source(
+ 		struct dc *dc,
+ 		uint32_t src_id,
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+index c246af7c584b0..205bedd1b1966 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+@@ -297,6 +297,7 @@ void dc_stream_log(const struct dc *dc, const struct dc_stream_state *stream);
+ 
+ uint8_t dc_get_current_stream_count(struct dc *dc);
+ struct dc_stream_state *dc_get_stream_at_index(struct dc *dc, uint8_t i);
++struct dc_stream_state *dc_stream_find_from_link(const struct dc_link *link);
+ 
+ /*
+  * Return the current frame counter.
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c
+index 41a1d0e9b7e20..e0df9b0065f9c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c
+@@ -113,6 +113,7 @@ bool cm3_helper_translate_curve_to_hw_format(
+ 	struct pwl_result_data *rgb_resulted;
+ 	struct pwl_result_data *rgb;
+ 	struct pwl_result_data *rgb_plus_1;
++	struct pwl_result_data *rgb_minus_1;
+ 	struct fixed31_32 end_value;
+ 
+ 	int32_t region_start, region_end;
+@@ -140,7 +141,7 @@ bool cm3_helper_translate_curve_to_hw_format(
+ 		region_start = -MAX_LOW_POINT;
+ 		region_end   = NUMBER_REGIONS - MAX_LOW_POINT;
+ 	} else {
+-		/* 10 segments
++		/* 11 segments
+ 		 * segment is from 2^-10 to 2^0
+ 		 * There are less than 256 points, for optimization
+ 		 */
+@@ -154,9 +155,10 @@ bool cm3_helper_translate_curve_to_hw_format(
+ 		seg_distr[7] = 4;
+ 		seg_distr[8] = 4;
+ 		seg_distr[9] = 4;
++		seg_distr[10] = 1;
+ 
+ 		region_start = -10;
+-		region_end = 0;
++		region_end = 1;
+ 	}
+ 
+ 	for (i = region_end - region_start; i < MAX_REGIONS_NUMBER ; i++)
+@@ -189,6 +191,10 @@ bool cm3_helper_translate_curve_to_hw_format(
+ 	rgb_resulted[hw_points - 1].green = output_tf->tf_pts.green[start_index];
+ 	rgb_resulted[hw_points - 1].blue = output_tf->tf_pts.blue[start_index];
+ 
++	rgb_resulted[hw_points].red = rgb_resulted[hw_points - 1].red;
++	rgb_resulted[hw_points].green = rgb_resulted[hw_points - 1].green;
++	rgb_resulted[hw_points].blue = rgb_resulted[hw_points - 1].blue;
++
+ 	// All 3 color channels have same x
+ 	corner_points[0].red.x = dc_fixpt_pow(dc_fixpt_from_int(2),
+ 					     dc_fixpt_from_int(region_start));
+@@ -259,15 +265,18 @@ bool cm3_helper_translate_curve_to_hw_format(
+ 
+ 	rgb = rgb_resulted;
+ 	rgb_plus_1 = rgb_resulted + 1;
++	rgb_minus_1 = rgb;
+ 
+ 	i = 1;
+ 	while (i != hw_points + 1) {
+-		if (dc_fixpt_lt(rgb_plus_1->red, rgb->red))
+-			rgb_plus_1->red = rgb->red;
+-		if (dc_fixpt_lt(rgb_plus_1->green, rgb->green))
+-			rgb_plus_1->green = rgb->green;
+-		if (dc_fixpt_lt(rgb_plus_1->blue, rgb->blue))
+-			rgb_plus_1->blue = rgb->blue;
++		if (i >= hw_points - 1) {
++			if (dc_fixpt_lt(rgb_plus_1->red, rgb->red))
++				rgb_plus_1->red = dc_fixpt_add(rgb->red, rgb_minus_1->delta_red);
++			if (dc_fixpt_lt(rgb_plus_1->green, rgb->green))
++				rgb_plus_1->green = dc_fixpt_add(rgb->green, rgb_minus_1->delta_green);
++			if (dc_fixpt_lt(rgb_plus_1->blue, rgb->blue))
++				rgb_plus_1->blue = dc_fixpt_add(rgb->blue, rgb_minus_1->delta_blue);
++		}
+ 
+ 		rgb->delta_red   = dc_fixpt_sub(rgb_plus_1->red,   rgb->red);
+ 		rgb->delta_green = dc_fixpt_sub(rgb_plus_1->green, rgb->green);
+@@ -283,6 +292,7 @@ bool cm3_helper_translate_curve_to_hw_format(
+ 		}
+ 
+ 		++rgb_plus_1;
++		rgb_minus_1 = rgb;
+ 		++rgb;
+ 		++i;
+ 	}
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+index 35629140fc7aa..c5223a9e0d891 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+@@ -4771,6 +4771,72 @@ static int smu7_get_clock_by_type(struct pp_hwmgr *hwmgr, enum amd_pp_clock_type
+ 	return 0;
+ }
+ 
++static int smu7_get_sclks_with_latency(struct pp_hwmgr *hwmgr,
++				       struct pp_clock_levels_with_latency *clocks)
++{
++	struct phm_ppt_v1_information *table_info =
++			(struct phm_ppt_v1_information *)hwmgr->pptable;
++	struct phm_ppt_v1_clock_voltage_dependency_table *dep_sclk_table =
++			table_info->vdd_dep_on_sclk;
++	int i;
++
++	clocks->num_levels = 0;
++	for (i = 0; i < dep_sclk_table->count; i++) {
++		if (dep_sclk_table->entries[i].clk) {
++			clocks->data[clocks->num_levels].clocks_in_khz =
++				dep_sclk_table->entries[i].clk * 10;
++			clocks->num_levels++;
++		}
++	}
++
++	return 0;
++}
++
++static int smu7_get_mclks_with_latency(struct pp_hwmgr *hwmgr,
++				       struct pp_clock_levels_with_latency *clocks)
++{
++	struct phm_ppt_v1_information *table_info =
++			(struct phm_ppt_v1_information *)hwmgr->pptable;
++	struct phm_ppt_v1_clock_voltage_dependency_table *dep_mclk_table =
++			table_info->vdd_dep_on_mclk;
++	int i;
++
++	clocks->num_levels = 0;
++	for (i = 0; i < dep_mclk_table->count; i++) {
++		if (dep_mclk_table->entries[i].clk) {
++			clocks->data[clocks->num_levels].clocks_in_khz =
++					dep_mclk_table->entries[i].clk * 10;
++			clocks->data[clocks->num_levels].latency_in_us =
++					smu7_get_mem_latency(hwmgr, dep_mclk_table->entries[i].clk);
++			clocks->num_levels++;
++		}
++	}
++
++	return 0;
++}
++
++static int smu7_get_clock_by_type_with_latency(struct pp_hwmgr *hwmgr,
++					       enum amd_pp_clock_type type,
++					       struct pp_clock_levels_with_latency *clocks)
++{
++	if (!(hwmgr->chip_id >= CHIP_POLARIS10 &&
++	      hwmgr->chip_id <= CHIP_VEGAM))
++		return -EINVAL;
++
++	switch (type) {
++	case amd_pp_sys_clock:
++		smu7_get_sclks_with_latency(hwmgr, clocks);
++		break;
++	case amd_pp_mem_clock:
++		smu7_get_mclks_with_latency(hwmgr, clocks);
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static int smu7_notify_cac_buffer_info(struct pp_hwmgr *hwmgr,
+ 					uint32_t virtual_addr_low,
+ 					uint32_t virtual_addr_hi,
+@@ -5188,6 +5254,7 @@ static const struct pp_hwmgr_func smu7_hwmgr_funcs = {
+ 	.get_mclk_od = smu7_get_mclk_od,
+ 	.set_mclk_od = smu7_set_mclk_od,
+ 	.get_clock_by_type = smu7_get_clock_by_type,
++	.get_clock_by_type_with_latency = smu7_get_clock_by_type_with_latency,
+ 	.read_sensor = smu7_read_sensor,
+ 	.dynamic_state_management_disable = smu7_disable_dpm_tasks,
+ 	.avfs_control = smu7_avfs_control,
+diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
+index 3640d0e229d22..74e66dea57086 100644
+--- a/drivers/gpu/drm/i915/i915_perf.c
++++ b/drivers/gpu/drm/i915/i915_perf.c
+@@ -600,7 +600,6 @@ static int append_oa_sample(struct i915_perf_stream *stream,
+ {
+ 	int report_size = stream->oa_buffer.format_size;
+ 	struct drm_i915_perf_record_header header;
+-	u32 sample_flags = stream->sample_flags;
+ 
+ 	header.type = DRM_I915_PERF_RECORD_SAMPLE;
+ 	header.pad = 0;
+@@ -614,10 +613,8 @@ static int append_oa_sample(struct i915_perf_stream *stream,
+ 		return -EFAULT;
+ 	buf += sizeof(header);
+ 
+-	if (sample_flags & SAMPLE_OA_REPORT) {
+-		if (copy_to_user(buf, report, report_size))
+-			return -EFAULT;
+-	}
++	if (copy_to_user(buf, report, report_size))
++		return -EFAULT;
+ 
+ 	(*offset) += header.size;
+ 
+@@ -2676,7 +2673,7 @@ static void i915_oa_stream_enable(struct i915_perf_stream *stream)
+ 
+ 	stream->perf->ops.oa_enable(stream);
+ 
+-	if (stream->periodic)
++	if (stream->sample_flags & SAMPLE_OA_REPORT)
+ 		hrtimer_start(&stream->poll_check_timer,
+ 			      ns_to_ktime(stream->poll_oa_period),
+ 			      HRTIMER_MODE_REL_PINNED);
+@@ -2739,7 +2736,7 @@ static void i915_oa_stream_disable(struct i915_perf_stream *stream)
+ {
+ 	stream->perf->ops.oa_disable(stream);
+ 
+-	if (stream->periodic)
++	if (stream->sample_flags & SAMPLE_OA_REPORT)
+ 		hrtimer_cancel(&stream->poll_check_timer);
+ }
+ 
+@@ -3022,7 +3019,7 @@ static ssize_t i915_perf_read(struct file *file,
+ 	 * disabled stream as an error. In particular it might otherwise lead
+ 	 * to a deadlock for blocking file descriptors...
+ 	 */
+-	if (!stream->enabled)
++	if (!stream->enabled || !(stream->sample_flags & SAMPLE_OA_REPORT))
+ 		return -EIO;
+ 
+ 	if (!(file->f_flags & O_NONBLOCK)) {
+diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
+index 17e9ceb9c6c48..86fda6182543b 100644
+--- a/drivers/iio/adc/Kconfig
++++ b/drivers/iio/adc/Kconfig
+@@ -266,6 +266,8 @@ config ADI_AXI_ADC
+ 	select IIO_BUFFER
+ 	select IIO_BUFFER_HW_CONSUMER
+ 	select IIO_BUFFER_DMAENGINE
++	depends on HAS_IOMEM
++	depends on OF
+ 	help
+ 	  Say yes here to build support for Analog Devices Generic
+ 	  AXI ADC IP core. The IP core is used for interfacing with
+@@ -912,6 +914,7 @@ config STM32_ADC_CORE
+ 	depends on ARCH_STM32 || COMPILE_TEST
+ 	depends on OF
+ 	depends on REGULATOR
++	depends on HAS_IOMEM
+ 	select IIO_BUFFER
+ 	select MFD_STM32_TIMERS
+ 	select IIO_STM32_TIMER_TRIGGER
+diff --git a/drivers/iio/adc/ab8500-gpadc.c b/drivers/iio/adc/ab8500-gpadc.c
+index 1bb987a4acbab..8d81505282dd3 100644
+--- a/drivers/iio/adc/ab8500-gpadc.c
++++ b/drivers/iio/adc/ab8500-gpadc.c
+@@ -918,7 +918,7 @@ static int ab8500_gpadc_read_raw(struct iio_dev *indio_dev,
+ 			return processed;
+ 
+ 		/* Return millivolt or milliamps or millicentigrades */
+-		*val = processed * 1000;
++		*val = processed;
+ 		return IIO_VAL_INT;
+ 	}
+ 
+diff --git a/drivers/iio/adc/ad7949.c b/drivers/iio/adc/ad7949.c
+index 5d597e5050f68..1b4b3203e4285 100644
+--- a/drivers/iio/adc/ad7949.c
++++ b/drivers/iio/adc/ad7949.c
+@@ -91,7 +91,7 @@ static int ad7949_spi_read_channel(struct ad7949_adc_chip *ad7949_adc, int *val,
+ 	int ret;
+ 	int i;
+ 	int bits_per_word = ad7949_adc->resolution;
+-	int mask = GENMASK(ad7949_adc->resolution, 0);
++	int mask = GENMASK(ad7949_adc->resolution - 1, 0);
+ 	struct spi_message msg;
+ 	struct spi_transfer tx[] = {
+ 		{
+diff --git a/drivers/iio/adc/qcom-spmi-vadc.c b/drivers/iio/adc/qcom-spmi-vadc.c
+index b0388f8a69f42..7e7d408452eca 100644
+--- a/drivers/iio/adc/qcom-spmi-vadc.c
++++ b/drivers/iio/adc/qcom-spmi-vadc.c
+@@ -598,7 +598,7 @@ static const struct vadc_channels vadc_chans[] = {
+ 	VADC_CHAN_NO_SCALE(P_MUX16_1_3, 1)
+ 
+ 	VADC_CHAN_NO_SCALE(LR_MUX1_BAT_THERM, 0)
+-	VADC_CHAN_NO_SCALE(LR_MUX2_BAT_ID, 0)
++	VADC_CHAN_VOLT(LR_MUX2_BAT_ID, 0, SCALE_DEFAULT)
+ 	VADC_CHAN_NO_SCALE(LR_MUX3_XO_THERM, 0)
+ 	VADC_CHAN_NO_SCALE(LR_MUX4_AMUX_THM1, 0)
+ 	VADC_CHAN_NO_SCALE(LR_MUX5_AMUX_THM2, 0)
+diff --git a/drivers/iio/gyro/mpu3050-core.c b/drivers/iio/gyro/mpu3050-core.c
+index 00e58060968c9..8ea6c2aa6263d 100644
+--- a/drivers/iio/gyro/mpu3050-core.c
++++ b/drivers/iio/gyro/mpu3050-core.c
+@@ -550,6 +550,8 @@ static irqreturn_t mpu3050_trigger_handler(int irq, void *p)
+ 					       MPU3050_FIFO_R,
+ 					       &fifo_values[offset],
+ 					       toread);
++			if (ret)
++				goto out_trigger_unlock;
+ 
+ 			dev_dbg(mpu3050->dev,
+ 				"%04x %04x %04x %04x %04x\n",
+diff --git a/drivers/iio/humidity/hid-sensor-humidity.c b/drivers/iio/humidity/hid-sensor-humidity.c
+index 52f605114ef77..d62705448ae25 100644
+--- a/drivers/iio/humidity/hid-sensor-humidity.c
++++ b/drivers/iio/humidity/hid-sensor-humidity.c
+@@ -15,7 +15,10 @@
+ struct hid_humidity_state {
+ 	struct hid_sensor_common common_attributes;
+ 	struct hid_sensor_hub_attribute_info humidity_attr;
+-	s32 humidity_data;
++	struct {
++		s32 humidity_data;
++		u64 timestamp __aligned(8);
++	} scan;
+ 	int scale_pre_decml;
+ 	int scale_post_decml;
+ 	int scale_precision;
+@@ -125,9 +128,8 @@ static int humidity_proc_event(struct hid_sensor_hub_device *hsdev,
+ 	struct hid_humidity_state *humid_st = iio_priv(indio_dev);
+ 
+ 	if (atomic_read(&humid_st->common_attributes.data_ready))
+-		iio_push_to_buffers_with_timestamp(indio_dev,
+-					&humid_st->humidity_data,
+-					iio_get_time_ns(indio_dev));
++		iio_push_to_buffers_with_timestamp(indio_dev, &humid_st->scan,
++						   iio_get_time_ns(indio_dev));
+ 
+ 	return 0;
+ }
+@@ -142,7 +144,7 @@ static int humidity_capture_sample(struct hid_sensor_hub_device *hsdev,
+ 
+ 	switch (usage_id) {
+ 	case HID_USAGE_SENSOR_ATMOSPHERIC_HUMIDITY:
+-		humid_st->humidity_data = *(s32 *)raw_data;
++		humid_st->scan.humidity_data = *(s32 *)raw_data;
+ 
+ 		return 0;
+ 	default:
+diff --git a/drivers/iio/imu/adis16400.c b/drivers/iio/imu/adis16400.c
+index 54af2ed664f6f..785a4ce606d89 100644
+--- a/drivers/iio/imu/adis16400.c
++++ b/drivers/iio/imu/adis16400.c
+@@ -462,8 +462,7 @@ static int adis16400_initial_setup(struct iio_dev *indio_dev)
+ 		if (ret)
+ 			goto err_ret;
+ 
+-		ret = sscanf(indio_dev->name, "adis%u\n", &device_id);
+-		if (ret != 1) {
++		if (sscanf(indio_dev->name, "adis%u\n", &device_id) != 1) {
+ 			ret = -EINVAL;
+ 			goto err_ret;
+ 		}
+diff --git a/drivers/iio/light/hid-sensor-prox.c b/drivers/iio/light/hid-sensor-prox.c
+index 330cf359e0b81..e9e00ce0c6d4d 100644
+--- a/drivers/iio/light/hid-sensor-prox.c
++++ b/drivers/iio/light/hid-sensor-prox.c
+@@ -23,6 +23,9 @@ struct prox_state {
+ 	struct hid_sensor_common common_attributes;
+ 	struct hid_sensor_hub_attribute_info prox_attr;
+ 	u32 human_presence;
++	int scale_pre_decml;
++	int scale_post_decml;
++	int scale_precision;
+ };
+ 
+ /* Channel definitions */
+@@ -93,8 +96,9 @@ static int prox_read_raw(struct iio_dev *indio_dev,
+ 		ret_type = IIO_VAL_INT;
+ 		break;
+ 	case IIO_CHAN_INFO_SCALE:
+-		*val = prox_state->prox_attr.units;
+-		ret_type = IIO_VAL_INT;
++		*val = prox_state->scale_pre_decml;
++		*val2 = prox_state->scale_post_decml;
++		ret_type = prox_state->scale_precision;
+ 		break;
+ 	case IIO_CHAN_INFO_OFFSET:
+ 		*val = hid_sensor_convert_exponent(
+@@ -234,6 +238,11 @@ static int prox_parse_report(struct platform_device *pdev,
+ 			HID_USAGE_SENSOR_HUMAN_PRESENCE,
+ 			&st->common_attributes.sensitivity);
+ 
++	st->scale_precision = hid_sensor_format_scale(
++				hsdev->usage,
++				&st->prox_attr,
++				&st->scale_pre_decml, &st->scale_post_decml);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/iio/temperature/hid-sensor-temperature.c b/drivers/iio/temperature/hid-sensor-temperature.c
+index 81688f1b932f1..da9a247097fa2 100644
+--- a/drivers/iio/temperature/hid-sensor-temperature.c
++++ b/drivers/iio/temperature/hid-sensor-temperature.c
+@@ -15,7 +15,10 @@
+ struct temperature_state {
+ 	struct hid_sensor_common common_attributes;
+ 	struct hid_sensor_hub_attribute_info temperature_attr;
+-	s32 temperature_data;
++	struct {
++		s32 temperature_data;
++		u64 timestamp __aligned(8);
++	} scan;
+ 	int scale_pre_decml;
+ 	int scale_post_decml;
+ 	int scale_precision;
+@@ -32,7 +35,7 @@ static const struct iio_chan_spec temperature_channels[] = {
+ 			BIT(IIO_CHAN_INFO_SAMP_FREQ) |
+ 			BIT(IIO_CHAN_INFO_HYSTERESIS),
+ 	},
+-	IIO_CHAN_SOFT_TIMESTAMP(3),
++	IIO_CHAN_SOFT_TIMESTAMP(1),
+ };
+ 
+ /* Adjust channel real bits based on report descriptor */
+@@ -123,9 +126,8 @@ static int temperature_proc_event(struct hid_sensor_hub_device *hsdev,
+ 	struct temperature_state *temp_st = iio_priv(indio_dev);
+ 
+ 	if (atomic_read(&temp_st->common_attributes.data_ready))
+-		iio_push_to_buffers_with_timestamp(indio_dev,
+-				&temp_st->temperature_data,
+-				iio_get_time_ns(indio_dev));
++		iio_push_to_buffers_with_timestamp(indio_dev, &temp_st->scan,
++						   iio_get_time_ns(indio_dev));
+ 
+ 	return 0;
+ }
+@@ -140,7 +142,7 @@ static int temperature_capture_sample(struct hid_sensor_hub_device *hsdev,
+ 
+ 	switch (usage_id) {
+ 	case HID_USAGE_SENSOR_DATA_ENVIRONMENTAL_TEMPERATURE:
+-		temp_st->temperature_data = *(s32 *)raw_data;
++		temp_st->scan.temperature_data = *(s32 *)raw_data;
+ 		return 0;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 600e056798c0a..75caeec378bda 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -2458,8 +2458,6 @@ static int check_qp_type(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr,
+ 	case MLX5_IB_QPT_HW_GSI:
+ 	case IB_QPT_DRIVER:
+ 	case IB_QPT_GSI:
+-		if (dev->profile == &raw_eth_profile)
+-			goto out;
+ 	case IB_QPT_RAW_PACKET:
+ 	case IB_QPT_UD:
+ 	case MLX5_IB_QPT_REG_UMR:
+@@ -2654,10 +2652,6 @@ static int process_create_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+ 	int create_flags = attr->create_flags;
+ 	bool cond;
+ 
+-	if (qp->type == IB_QPT_UD && dev->profile == &raw_eth_profile)
+-		if (create_flags & ~MLX5_IB_QP_CREATE_WC_TEST)
+-			return -EINVAL;
+-
+ 	if (qp_type == MLX5_IB_QPT_DCT)
+ 		return (create_flags) ? -EINVAL : 0;
+ 
+@@ -4235,6 +4229,23 @@ static int mlx5_ib_modify_dct(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 	return 0;
+ }
+ 
++static bool mlx5_ib_modify_qp_allowed(struct mlx5_ib_dev *dev,
++				      struct mlx5_ib_qp *qp,
++				      enum ib_qp_type qp_type)
++{
++	if (dev->profile != &raw_eth_profile)
++		return true;
++
++	if (qp_type == IB_QPT_RAW_PACKET || qp_type == MLX5_IB_QPT_REG_UMR)
++		return true;
++
++	/* Internal QP used for wc testing, with NOPs in wq */
++	if (qp->flags & MLX5_IB_QP_CREATE_WC_TEST)
++		return true;
++
++	return false;
++}
++
+ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 		      int attr_mask, struct ib_udata *udata)
+ {
+@@ -4247,6 +4258,9 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 	int err = -EINVAL;
+ 	int port;
+ 
++	if (!mlx5_ib_modify_qp_allowed(dev, qp, ibqp->qp_type))
++		return -EOPNOTSUPP;
++
+ 	if (ibqp->rwq_ind_tbl)
+ 		return -ENOSYS;
+ 
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 5c2107ce7f6e1..6eb95e3c4c8a4 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -1237,8 +1237,7 @@ static void free_sess_reqs(struct rtrs_clt_sess *sess)
+ 		if (req->mr)
+ 			ib_dereg_mr(req->mr);
+ 		kfree(req->sge);
+-		rtrs_iu_free(req->iu, DMA_TO_DEVICE,
+-			      sess->s.dev->ib_dev, 1);
++		rtrs_iu_free(req->iu, sess->s.dev->ib_dev, 1);
+ 	}
+ 	kfree(sess->reqs);
+ 	sess->reqs = NULL;
+@@ -1611,8 +1610,7 @@ static void destroy_con_cq_qp(struct rtrs_clt_con *con)
+ 
+ 	rtrs_cq_qp_destroy(&con->c);
+ 	if (con->rsp_ius) {
+-		rtrs_iu_free(con->rsp_ius, DMA_FROM_DEVICE,
+-			      sess->s.dev->ib_dev, con->queue_size);
++		rtrs_iu_free(con->rsp_ius, sess->s.dev->ib_dev, con->queue_size);
+ 		con->rsp_ius = NULL;
+ 		con->queue_size = 0;
+ 	}
+@@ -2252,7 +2250,7 @@ static void rtrs_clt_info_req_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	struct rtrs_iu *iu;
+ 
+ 	iu = container_of(wc->wr_cqe, struct rtrs_iu, cqe);
+-	rtrs_iu_free(iu, DMA_TO_DEVICE, sess->s.dev->ib_dev, 1);
++	rtrs_iu_free(iu, sess->s.dev->ib_dev, 1);
+ 
+ 	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+ 		rtrs_err(sess->clt, "Sess info request send failed: %s\n",
+@@ -2381,7 +2379,7 @@ static void rtrs_clt_info_rsp_done(struct ib_cq *cq, struct ib_wc *wc)
+ 
+ out:
+ 	rtrs_clt_update_wc_stats(con);
+-	rtrs_iu_free(iu, DMA_FROM_DEVICE, sess->s.dev->ib_dev, 1);
++	rtrs_iu_free(iu, sess->s.dev->ib_dev, 1);
+ 	rtrs_clt_change_state(sess, state);
+ }
+ 
+@@ -2443,9 +2441,9 @@ static int rtrs_send_sess_info(struct rtrs_clt_sess *sess)
+ 
+ out:
+ 	if (tx_iu)
+-		rtrs_iu_free(tx_iu, DMA_TO_DEVICE, sess->s.dev->ib_dev, 1);
++		rtrs_iu_free(tx_iu, sess->s.dev->ib_dev, 1);
+ 	if (rx_iu)
+-		rtrs_iu_free(rx_iu, DMA_FROM_DEVICE, sess->s.dev->ib_dev, 1);
++		rtrs_iu_free(rx_iu, sess->s.dev->ib_dev, 1);
+ 	if (unlikely(err))
+ 		/* If we've never taken async path because of malloc problems */
+ 		rtrs_clt_change_state(sess, RTRS_CLT_CONNECTING_ERR);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-pri.h b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+index 2e1d2f7e372ac..8caad0a2322bf 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-pri.h
++++ b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+@@ -289,8 +289,7 @@ struct rtrs_msg_rdma_hdr {
+ struct rtrs_iu *rtrs_iu_alloc(u32 queue_size, size_t size, gfp_t t,
+ 			      struct ib_device *dev, enum dma_data_direction,
+ 			      void (*done)(struct ib_cq *cq, struct ib_wc *wc));
+-void rtrs_iu_free(struct rtrs_iu *iu, enum dma_data_direction dir,
+-		  struct ib_device *dev, u32 queue_size);
++void rtrs_iu_free(struct rtrs_iu *iu, struct ib_device *dev, u32 queue_size);
+ int rtrs_iu_post_recv(struct rtrs_con *con, struct rtrs_iu *iu);
+ int rtrs_iu_post_send(struct rtrs_con *con, struct rtrs_iu *iu, size_t size,
+ 		      struct ib_send_wr *head);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index b690a3b8f94d9..43806180f85ec 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -584,8 +584,7 @@ static void unmap_cont_bufs(struct rtrs_srv_sess *sess)
+ 		struct rtrs_srv_mr *srv_mr;
+ 
+ 		srv_mr = &sess->mrs[i];
+-		rtrs_iu_free(srv_mr->iu, DMA_TO_DEVICE,
+-			      sess->s.dev->ib_dev, 1);
++		rtrs_iu_free(srv_mr->iu, sess->s.dev->ib_dev, 1);
+ 		ib_dereg_mr(srv_mr->mr);
+ 		ib_dma_unmap_sg(sess->s.dev->ib_dev, srv_mr->sgt.sgl,
+ 				srv_mr->sgt.nents, DMA_BIDIRECTIONAL);
+@@ -672,7 +671,7 @@ static int map_cont_bufs(struct rtrs_srv_sess *sess)
+ 			if (!srv_mr->iu) {
+ 				err = -ENOMEM;
+ 				rtrs_err(ss, "rtrs_iu_alloc(), err: %d\n", err);
+-				goto free_iu;
++				goto dereg_mr;
+ 			}
+ 		}
+ 		/* Eventually dma addr for each chunk can be cached */
+@@ -688,9 +687,7 @@ err:
+ 			srv_mr = &sess->mrs[mri];
+ 			sgt = &srv_mr->sgt;
+ 			mr = srv_mr->mr;
+-free_iu:
+-			rtrs_iu_free(srv_mr->iu, DMA_TO_DEVICE,
+-				      sess->s.dev->ib_dev, 1);
++			rtrs_iu_free(srv_mr->iu, sess->s.dev->ib_dev, 1);
+ dereg_mr:
+ 			ib_dereg_mr(mr);
+ unmap_sg:
+@@ -742,7 +739,7 @@ static void rtrs_srv_info_rsp_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	struct rtrs_iu *iu;
+ 
+ 	iu = container_of(wc->wr_cqe, struct rtrs_iu, cqe);
+-	rtrs_iu_free(iu, DMA_TO_DEVICE, sess->s.dev->ib_dev, 1);
++	rtrs_iu_free(iu, sess->s.dev->ib_dev, 1);
+ 
+ 	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+ 		rtrs_err(s, "Sess info response send failed: %s\n",
+@@ -868,7 +865,7 @@ static int process_info_req(struct rtrs_srv_con *con,
+ 	if (unlikely(err)) {
+ 		rtrs_err(s, "rtrs_iu_post_send(), err: %d\n", err);
+ iu_free:
+-		rtrs_iu_free(tx_iu, DMA_TO_DEVICE, sess->s.dev->ib_dev, 1);
++		rtrs_iu_free(tx_iu, sess->s.dev->ib_dev, 1);
+ 	}
+ rwr_free:
+ 	kfree(rwr);
+@@ -913,7 +910,7 @@ static void rtrs_srv_info_req_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		goto close;
+ 
+ out:
+-	rtrs_iu_free(iu, DMA_FROM_DEVICE, sess->s.dev->ib_dev, 1);
++	rtrs_iu_free(iu, sess->s.dev->ib_dev, 1);
+ 	return;
+ close:
+ 	close_sess(sess);
+@@ -936,7 +933,7 @@ static int post_recv_info_req(struct rtrs_srv_con *con)
+ 	err = rtrs_iu_post_recv(&con->c, rx_iu);
+ 	if (unlikely(err)) {
+ 		rtrs_err(s, "rtrs_iu_post_recv(), err: %d\n", err);
+-		rtrs_iu_free(rx_iu, DMA_FROM_DEVICE, sess->s.dev->ib_dev, 1);
++		rtrs_iu_free(rx_iu, sess->s.dev->ib_dev, 1);
+ 		return err;
+ 	}
+ 
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
+index a3e1a027f8081..d13aff0aa8165 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs.c
+@@ -31,6 +31,7 @@ struct rtrs_iu *rtrs_iu_alloc(u32 queue_size, size_t size, gfp_t gfp_mask,
+ 		return NULL;
+ 	for (i = 0; i < queue_size; i++) {
+ 		iu = &ius[i];
++		iu->direction = dir;
+ 		iu->buf = kzalloc(size, gfp_mask);
+ 		if (!iu->buf)
+ 			goto err;
+@@ -41,17 +42,15 @@ struct rtrs_iu *rtrs_iu_alloc(u32 queue_size, size_t size, gfp_t gfp_mask,
+ 
+ 		iu->cqe.done  = done;
+ 		iu->size      = size;
+-		iu->direction = dir;
+ 	}
+ 	return ius;
+ err:
+-	rtrs_iu_free(ius, dir, dma_dev, i);
++	rtrs_iu_free(ius, dma_dev, i);
+ 	return NULL;
+ }
+ EXPORT_SYMBOL_GPL(rtrs_iu_alloc);
+ 
+-void rtrs_iu_free(struct rtrs_iu *ius, enum dma_data_direction dir,
+-		   struct ib_device *ibdev, u32 queue_size)
++void rtrs_iu_free(struct rtrs_iu *ius, struct ib_device *ibdev, u32 queue_size)
+ {
+ 	struct rtrs_iu *iu;
+ 	int i;
+@@ -61,7 +60,7 @@ void rtrs_iu_free(struct rtrs_iu *ius, enum dma_data_direction dir,
+ 
+ 	for (i = 0; i < queue_size; i++) {
+ 		iu = &ius[i];
+-		ib_dma_unmap_single(ibdev, iu->dma_addr, iu->size, dir);
++		ib_dma_unmap_single(ibdev, iu->dma_addr, iu->size, iu->direction);
+ 		kfree(iu->buf);
+ 	}
+ 	kfree(ius);
+@@ -105,6 +104,22 @@ int rtrs_post_recv_empty(struct rtrs_con *con, struct ib_cqe *cqe)
+ }
+ EXPORT_SYMBOL_GPL(rtrs_post_recv_empty);
+ 
++static int rtrs_post_send(struct ib_qp *qp, struct ib_send_wr *head,
++			     struct ib_send_wr *wr)
++{
++	if (head) {
++		struct ib_send_wr *tail = head;
++
++		while (tail->next)
++			tail = tail->next;
++		tail->next = wr;
++	} else {
++		head = wr;
++	}
++
++	return ib_post_send(qp, head, NULL);
++}
++
+ int rtrs_iu_post_send(struct rtrs_con *con, struct rtrs_iu *iu, size_t size,
+ 		       struct ib_send_wr *head)
+ {
+@@ -127,17 +142,7 @@ int rtrs_iu_post_send(struct rtrs_con *con, struct rtrs_iu *iu, size_t size,
+ 		.send_flags = IB_SEND_SIGNALED,
+ 	};
+ 
+-	if (head) {
+-		struct ib_send_wr *tail = head;
+-
+-		while (tail->next)
+-			tail = tail->next;
+-		tail->next = &wr;
+-	} else {
+-		head = &wr;
+-	}
+-
+-	return ib_post_send(con->qp, head, NULL);
++	return rtrs_post_send(con->qp, head, &wr);
+ }
+ EXPORT_SYMBOL_GPL(rtrs_iu_post_send);
+ 
+@@ -169,17 +174,7 @@ int rtrs_iu_post_rdma_write_imm(struct rtrs_con *con, struct rtrs_iu *iu,
+ 		if (WARN_ON(sge[i].length == 0))
+ 			return -EINVAL;
+ 
+-	if (head) {
+-		struct ib_send_wr *tail = head;
+-
+-		while (tail->next)
+-			tail = tail->next;
+-		tail->next = &wr.wr;
+-	} else {
+-		head = &wr.wr;
+-	}
+-
+-	return ib_post_send(con->qp, head, NULL);
++	return rtrs_post_send(con->qp, head, &wr.wr);
+ }
+ EXPORT_SYMBOL_GPL(rtrs_iu_post_rdma_write_imm);
+ 
+@@ -187,26 +182,16 @@ int rtrs_post_rdma_write_imm_empty(struct rtrs_con *con, struct ib_cqe *cqe,
+ 				    u32 imm_data, enum ib_send_flags flags,
+ 				    struct ib_send_wr *head)
+ {
+-	struct ib_send_wr wr;
++	struct ib_rdma_wr wr;
+ 
+-	wr = (struct ib_send_wr) {
+-		.wr_cqe	= cqe,
+-		.send_flags	= flags,
+-		.opcode	= IB_WR_RDMA_WRITE_WITH_IMM,
+-		.ex.imm_data	= cpu_to_be32(imm_data),
++	wr = (struct ib_rdma_wr) {
++		.wr.wr_cqe	= cqe,
++		.wr.send_flags	= flags,
++		.wr.opcode	= IB_WR_RDMA_WRITE_WITH_IMM,
++		.wr.ex.imm_data	= cpu_to_be32(imm_data),
+ 	};
+ 
+-	if (head) {
+-		struct ib_send_wr *tail = head;
+-
+-		while (tail->next)
+-			tail = tail->next;
+-		tail->next = &wr;
+-	} else {
+-		head = &wr;
+-	}
+-
+-	return ib_post_send(con->qp, head, NULL);
++	return rtrs_post_send(con->qp, head, &wr.wr);
+ }
+ EXPORT_SYMBOL_GPL(rtrs_post_rdma_write_imm_empty);
+ 
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 47afc5938c26b..6d5a39af10978 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -3918,11 +3918,15 @@ static int bond_neigh_init(struct neighbour *n)
+ 
+ 	rcu_read_lock();
+ 	slave = bond_first_slave_rcu(bond);
+-	if (!slave)
++	if (!slave) {
++		ret = -EINVAL;
+ 		goto out;
++	}
+ 	slave_ops = slave->dev->netdev_ops;
+-	if (!slave_ops->ndo_neigh_setup)
++	if (!slave_ops->ndo_neigh_setup) {
++		ret = -EINVAL;
+ 		goto out;
++	}
+ 
+ 	/* TODO: find another way [1] to implement this.
+ 	 * Passing a zeroed structure is fragile,
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index f184f4a79cc39..4a4cb62b73320 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -409,6 +409,8 @@ static void replenish_pools(struct ibmvnic_adapter *adapter)
+ 		if (adapter->rx_pool[i].active)
+ 			replenish_rx_pool(adapter, &adapter->rx_pool[i]);
+ 	}
++
++	netdev_dbg(adapter->netdev, "Replenished %d pools\n", i);
+ }
+ 
+ static void release_stats_buffers(struct ibmvnic_adapter *adapter)
+@@ -914,6 +916,7 @@ static int ibmvnic_login(struct net_device *netdev)
+ 
+ 	__ibmvnic_set_mac(netdev, adapter->mac_addr);
+ 
++	netdev_dbg(netdev, "[S:%d] Login succeeded\n", adapter->state);
+ 	return 0;
+ }
+ 
+@@ -1343,6 +1346,10 @@ static int ibmvnic_close(struct net_device *netdev)
+ 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+ 	int rc;
+ 
++	netdev_dbg(netdev, "[S:%d FOP:%d FRR:%d] Closing\n",
++		   adapter->state, adapter->failover_pending,
++		   adapter->force_reset_recovery);
++
+ 	/* If device failover is pending, just set device state and return.
+ 	 * Device operation will be handled by reset routine.
+ 	 */
+@@ -1937,8 +1944,10 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 	struct net_device *netdev = adapter->netdev;
+ 	int i, rc;
+ 
+-	netdev_dbg(adapter->netdev, "Re-setting driver (%d)\n",
+-		   rwi->reset_reason);
++	netdev_dbg(adapter->netdev,
++		   "[S:%d FOP:%d] Reset reason %d, reset_state %d\n",
++		   adapter->state, adapter->failover_pending,
++		   rwi->reset_reason, reset_state);
+ 
+ 	rtnl_lock();
+ 	/*
+@@ -2097,6 +2106,8 @@ out:
+ 		adapter->state = reset_state;
+ 	rtnl_unlock();
+ 
++	netdev_dbg(adapter->netdev, "[S:%d FOP:%d] Reset done, rc %d\n",
++		   adapter->state, adapter->failover_pending, rc);
+ 	return rc;
+ }
+ 
+@@ -2166,6 +2177,8 @@ out:
+ 	/* restore adapter state if reset failed */
+ 	if (rc)
+ 		adapter->state = reset_state;
++	netdev_dbg(adapter->netdev, "[S:%d FOP:%d] Hard reset done, rc %d\n",
++		   adapter->state, adapter->failover_pending, rc);
+ 	return rc;
+ }
+ 
+@@ -2275,6 +2288,11 @@ static void __ibmvnic_reset(struct work_struct *work)
+ 	}
+ 
+ 	clear_bit_unlock(0, &adapter->resetting);
++
++	netdev_dbg(adapter->netdev,
++		   "[S:%d FRR:%d WFR:%d] Done processing resets\n",
++		   adapter->state, adapter->force_reset_recovery,
++		   adapter->wait_for_reset);
+ }
+ 
+ static void __ibmvnic_delayed_reset(struct work_struct *work)
+@@ -2295,6 +2313,8 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
+ 	unsigned long flags;
+ 	int ret;
+ 
++	spin_lock_irqsave(&adapter->rwi_lock, flags);
++
+ 	/*
+ 	 * If failover is pending don't schedule any other reset.
+ 	 * Instead let the failover complete. If there is already a
+@@ -2315,13 +2335,11 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
+ 		goto err;
+ 	}
+ 
+-	spin_lock_irqsave(&adapter->rwi_lock, flags);
+-
+ 	list_for_each(entry, &adapter->rwi_list) {
+ 		tmp = list_entry(entry, struct ibmvnic_rwi, list);
+ 		if (tmp->reset_reason == reason) {
+-			netdev_dbg(netdev, "Skipping matching reset\n");
+-			spin_unlock_irqrestore(&adapter->rwi_lock, flags);
++			netdev_dbg(netdev, "Skipping matching reset, reason=%d\n",
++				   reason);
+ 			ret = EBUSY;
+ 			goto err;
+ 		}
+@@ -2329,8 +2347,6 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
+ 
+ 	rwi = kzalloc(sizeof(*rwi), GFP_ATOMIC);
+ 	if (!rwi) {
+-		spin_unlock_irqrestore(&adapter->rwi_lock, flags);
+-		ibmvnic_close(netdev);
+ 		ret = ENOMEM;
+ 		goto err;
+ 	}
+@@ -2343,12 +2359,17 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
+ 	}
+ 	rwi->reset_reason = reason;
+ 	list_add_tail(&rwi->list, &adapter->rwi_list);
+-	spin_unlock_irqrestore(&adapter->rwi_lock, flags);
+ 	netdev_dbg(adapter->netdev, "Scheduling reset (reason %d)\n", reason);
+ 	schedule_work(&adapter->ibmvnic_reset);
+ 
+-	return 0;
++	ret = 0;
+ err:
++	/* ibmvnic_close() below can block, so drop the lock first */
++	spin_unlock_irqrestore(&adapter->rwi_lock, flags);
++
++	if (ret == ENOMEM)
++		ibmvnic_close(netdev);
++
+ 	return -ret;
+ }
+ 
+@@ -5359,7 +5380,18 @@ static int ibmvnic_remove(struct vio_dev *dev)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&adapter->state_lock, flags);
++
++	/* If ibmvnic_reset() is scheduling a reset, wait for it to
++	 * finish. Then, set the state to REMOVING to prevent it from
++	 * scheduling any more work and to have reset functions ignore
++	 * any resets that have already been scheduled. Drop the lock
++	 * after setting state, so __ibmvnic_reset() which is called
++	 * from the flush_work() below, can make progress.
++	 */
++	spin_lock(&adapter->rwi_lock);
+ 	adapter->state = VNIC_REMOVING;
++	spin_unlock(&adapter->rwi_lock);
++
+ 	spin_unlock_irqrestore(&adapter->state_lock, flags);
+ 
+ 	flush_work(&adapter->ibmvnic_reset);
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
+index 21e7ea858cda3..b27211063c643 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.h
++++ b/drivers/net/ethernet/ibm/ibmvnic.h
+@@ -1080,6 +1080,7 @@ struct ibmvnic_adapter {
+ 	struct tasklet_struct tasklet;
+ 	enum vnic_state state;
+ 	enum ibmvnic_reset_reason reset_reason;
++	/* when taking both state and rwi locks, take state lock first */
+ 	spinlock_t rwi_lock;
+ 	struct list_head rwi_list;
+ 	struct work_struct ibmvnic_reset;
+@@ -1096,6 +1097,8 @@ struct ibmvnic_adapter {
+ 	struct ibmvnic_tunables desired;
+ 	struct ibmvnic_tunables fallback;
+ 
+-	/* Used for serializatin of state field */
++	/* Used for serialization of state field. When taking both state
++	 * and rwi locks, take state lock first.
++	 */
+ 	spinlock_t state_lock;
+ };
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 3e4a4d6f0419c..4a2d03cada01e 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -5920,7 +5920,7 @@ static int i40e_add_channel(struct i40e_pf *pf, u16 uplink_seid,
+ 	ch->enabled_tc = !i40e_is_channel_macvlan(ch) && enabled_tc;
+ 	ch->seid = ctxt.seid;
+ 	ch->vsi_number = ctxt.vsi_number;
+-	ch->stat_counter_idx = cpu_to_le16(ctxt.info.stat_counter_idx);
++	ch->stat_counter_idx = le16_to_cpu(ctxt.info.stat_counter_idx);
+ 
+ 	/* copy just the sections touched not the entire info
+ 	 * since not all sections are valid as returned by
+@@ -7599,8 +7599,8 @@ static inline void
+ i40e_set_cld_element(struct i40e_cloud_filter *filter,
+ 		     struct i40e_aqc_cloud_filters_element_data *cld)
+ {
+-	int i, j;
+ 	u32 ipa;
++	int i;
+ 
+ 	memset(cld, 0, sizeof(*cld));
+ 	ether_addr_copy(cld->outer_mac, filter->dst_mac);
+@@ -7611,14 +7611,14 @@ i40e_set_cld_element(struct i40e_cloud_filter *filter,
+ 
+ 	if (filter->n_proto == ETH_P_IPV6) {
+ #define IPV6_MAX_INDEX	(ARRAY_SIZE(filter->dst_ipv6) - 1)
+-		for (i = 0, j = 0; i < ARRAY_SIZE(filter->dst_ipv6);
+-		     i++, j += 2) {
++		for (i = 0; i < ARRAY_SIZE(filter->dst_ipv6); i++) {
+ 			ipa = be32_to_cpu(filter->dst_ipv6[IPV6_MAX_INDEX - i]);
+-			ipa = cpu_to_le32(ipa);
+-			memcpy(&cld->ipaddr.raw_v6.data[j], &ipa, sizeof(ipa));
++
++			*(__le32 *)&cld->ipaddr.raw_v6.data[i * 2] = cpu_to_le32(ipa);
+ 		}
+ 	} else {
+ 		ipa = be32_to_cpu(filter->dst_ipv4);
++
+ 		memcpy(&cld->ipaddr.v4.data, &ipa, sizeof(ipa));
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 38dec49ac64d2..899714243af7a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -1782,7 +1782,7 @@ void i40e_process_skb_fields(struct i40e_ring *rx_ring,
+ 	skb_record_rx_queue(skb, rx_ring->queue_index);
+ 
+ 	if (qword & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
+-		u16 vlan_tag = rx_desc->wb.qword0.lo_dword.l2tag1;
++		__le16 vlan_tag = rx_desc->wb.qword0.lo_dword.l2tag1;
+ 
+ 		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+ 				       le16_to_cpu(vlan_tag));
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index a7f74b3b97af5..47ae1d1723c54 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -1263,6 +1263,7 @@ static struct phy_driver ksphy_driver[] = {
+ 	.probe		= kszphy_probe,
+ 	.config_init	= ksz8081_config_init,
+ 	.ack_interrupt	= kszphy_ack_interrupt,
++	.soft_reset	= genphy_soft_reset,
+ 	.config_intr	= kszphy_config_intr,
+ 	.get_sset_count = kszphy_get_sset_count,
+ 	.get_strings	= kszphy_get_strings,
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
+index 92c50efd48fc3..39842bdef4b46 100644
+--- a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
+@@ -92,6 +92,7 @@
+ #define IWL_SNJ_A_HR_B_FW_PRE		"iwlwifi-SoSnj-a0-hr-b0-"
+ #define IWL_MA_A_GF_A_FW_PRE		"iwlwifi-ma-a0-gf-a0-"
+ #define IWL_MA_A_MR_A_FW_PRE		"iwlwifi-ma-a0-mr-a0-"
++#define IWL_SNJ_A_MR_A_FW_PRE		"iwlwifi-SoSnj-a0-mr-a0-"
+ 
+ #define IWL_QU_B_HR_B_MODULE_FIRMWARE(api) \
+ 	IWL_QU_B_HR_B_FW_PRE __stringify(api) ".ucode"
+@@ -127,6 +128,8 @@
+ 	IWL_MA_A_GF_A_FW_PRE __stringify(api) ".ucode"
+ #define IWL_MA_A_MR_A_FW_MODULE_FIRMWARE(api) \
+ 	IWL_MA_A_MR_A_FW_PRE __stringify(api) ".ucode"
++#define IWL_SNJ_A_MR_A_MODULE_FIRMWARE(api) \
++	IWL_SNJ_A_MR_A_FW_PRE __stringify(api) ".ucode"
+ 
+ static const struct iwl_base_params iwl_22000_base_params = {
+ 	.eeprom_size = OTP_LOW_IMAGE_SIZE_32K,
+@@ -672,6 +675,13 @@ const struct iwl_cfg iwl_cfg_ma_a0_mr_a0 = {
+ 	.num_rbds = IWL_NUM_RBDS_AX210_HE,
+ };
+ 
++const struct iwl_cfg iwl_cfg_snj_a0_mr_a0 = {
++	.fw_name_pre = IWL_SNJ_A_MR_A_FW_PRE,
++	.uhb_supported = true,
++	IWL_DEVICE_AX210,
++	.num_rbds = IWL_NUM_RBDS_AX210_HE,
++};
++
+ MODULE_FIRMWARE(IWL_QU_B_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_QNJ_B_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_QU_C_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+@@ -689,3 +699,4 @@ MODULE_FIRMWARE(IWL_SNJ_A_GF_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_SNJ_A_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_MA_A_GF_A_FW_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL_MA_A_MR_A_FW_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
++MODULE_FIRMWARE(IWL_SNJ_A_MR_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index 9b91aa9b2e7f1..bd04e4fbbb8ab 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -472,6 +472,7 @@ struct iwl_cfg {
+ #define IWL_CFG_MAC_TYPE_QU		0x33
+ #define IWL_CFG_MAC_TYPE_QUZ		0x35
+ #define IWL_CFG_MAC_TYPE_QNJ		0x36
++#define IWL_CFG_MAC_TYPE_SNJ		0x42
+ #define IWL_CFG_MAC_TYPE_MA		0x44
+ 
+ #define IWL_CFG_RF_TYPE_TH		0x105
+@@ -656,6 +657,7 @@ extern const struct iwl_cfg iwlax211_cfg_snj_gf_a0;
+ extern const struct iwl_cfg iwlax201_cfg_snj_hr_b0;
+ extern const struct iwl_cfg iwl_cfg_ma_a0_gf_a0;
+ extern const struct iwl_cfg iwl_cfg_ma_a0_mr_a0;
++extern const struct iwl_cfg iwl_cfg_snj_a0_mr_a0;
+ #endif /* CONFIG_IWLMVM */
+ 
+ #endif /* __IWL_CONFIG_H__ */
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 2823a1e81656d..fa32f9045c0cb 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -1002,6 +1002,12 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 		      IWL_CFG_RF_TYPE_MR, IWL_CFG_ANY,
+ 		      IWL_CFG_ANY, IWL_CFG_ANY,
+ 		      iwl_cfg_ma_a0_mr_a0, iwl_ma_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SNJ, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_MR, IWL_CFG_ANY,
++		      IWL_CFG_ANY, IWL_CFG_ANY,
++		      iwl_cfg_snj_a0_mr_a0, iwl_ma_name),
++
+ 
+ #endif /* CONFIG_IWLMVM */
+ };
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index e1e574ecf031b..de846aaa8728b 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1894,30 +1894,18 @@ static void nvme_config_discard(struct gendisk *disk, struct nvme_ns *ns)
+ 		blk_queue_max_write_zeroes_sectors(queue, UINT_MAX);
+ }
+ 
+-static void nvme_config_write_zeroes(struct gendisk *disk, struct nvme_ns *ns)
++/*
++ * Even though NVMe spec explicitly states that MDTS is not applicable to the
++ * write-zeroes, we are cautious and limit the size to the controllers
++ * max_hw_sectors value, which is based on the MDTS field and possibly other
++ * limiting factors.
++ */
++static void nvme_config_write_zeroes(struct request_queue *q,
++		struct nvme_ctrl *ctrl)
+ {
+-	u64 max_blocks;
+-
+-	if (!(ns->ctrl->oncs & NVME_CTRL_ONCS_WRITE_ZEROES) ||
+-	    (ns->ctrl->quirks & NVME_QUIRK_DISABLE_WRITE_ZEROES))
+-		return;
+-	/*
+-	 * Even though NVMe spec explicitly states that MDTS is not
+-	 * applicable to the write-zeroes:- "The restriction does not apply to
+-	 * commands that do not transfer data between the host and the
+-	 * controller (e.g., Write Uncorrectable ro Write Zeroes command).".
+-	 * In order to be more cautious use controller's max_hw_sectors value
+-	 * to configure the maximum sectors for the write-zeroes which is
+-	 * configured based on the controller's MDTS field in the
+-	 * nvme_init_identify() if available.
+-	 */
+-	if (ns->ctrl->max_hw_sectors == UINT_MAX)
+-		max_blocks = (u64)USHRT_MAX + 1;
+-	else
+-		max_blocks = ns->ctrl->max_hw_sectors + 1;
+-
+-	blk_queue_max_write_zeroes_sectors(disk->queue,
+-					   nvme_lba_to_sect(ns, max_blocks));
++	if ((ctrl->oncs & NVME_CTRL_ONCS_WRITE_ZEROES) &&
++	    !(ctrl->quirks & NVME_QUIRK_DISABLE_WRITE_ZEROES))
++		blk_queue_max_write_zeroes_sectors(q, ctrl->max_hw_sectors);
+ }
+ 
+ static bool nvme_ns_ids_valid(struct nvme_ns_ids *ids)
+@@ -2089,7 +2077,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
+ 	set_capacity_revalidate_and_notify(disk, capacity, false);
+ 
+ 	nvme_config_discard(disk, ns);
+-	nvme_config_write_zeroes(disk, ns);
++	nvme_config_write_zeroes(disk->queue, ns->ctrl);
+ 
+ 	if (id->nsattr & NVME_NS_ATTR_RO)
+ 		set_disk_ro(disk, true);
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 1957030132722..8b326508a480e 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -736,8 +736,11 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
+ 		return ret;
+ 
+ 	ctrl->ctrl.queue_count = nr_io_queues + 1;
+-	if (ctrl->ctrl.queue_count < 2)
+-		return 0;
++	if (ctrl->ctrl.queue_count < 2) {
++		dev_err(ctrl->ctrl.device,
++			"unable to set any I/O queues\n");
++		return -ENOMEM;
++	}
+ 
+ 	dev_info(ctrl->ctrl.device,
+ 		"creating %d I/O queues.\n", nr_io_queues);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 739ac7deccd96..9444e5e2a95ba 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -287,7 +287,7 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
+ 	 * directly, otherwise queue io_work. Also, only do that if we
+ 	 * are on the same cpu, so we don't introduce contention.
+ 	 */
+-	if (queue->io_cpu == __smp_processor_id() &&
++	if (queue->io_cpu == raw_smp_processor_id() &&
+ 	    sync && empty && mutex_trylock(&queue->send_mutex)) {
+ 		queue->more_requests = !last;
+ 		nvme_tcp_send_all(queue);
+@@ -568,6 +568,13 @@ static int nvme_tcp_setup_h2c_data_pdu(struct nvme_tcp_request *req,
+ 	req->pdu_len = le32_to_cpu(pdu->r2t_length);
+ 	req->pdu_sent = 0;
+ 
++	if (unlikely(!req->pdu_len)) {
++		dev_err(queue->ctrl->ctrl.device,
++			"req %d r2t len is %u, probably a bug...\n",
++			rq->tag, req->pdu_len);
++		return -EPROTO;
++	}
++
+ 	if (unlikely(req->data_sent + req->pdu_len > req->data_len)) {
+ 		dev_err(queue->ctrl->ctrl.device,
+ 			"req %d r2t len %u exceeded data len %u (%zu sent)\n",
+@@ -1748,8 +1755,11 @@ static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
+ 		return ret;
+ 
+ 	ctrl->queue_count = nr_io_queues + 1;
+-	if (ctrl->queue_count < 2)
+-		return 0;
++	if (ctrl->queue_count < 2) {
++		dev_err(ctrl->device,
++			"unable to set any I/O queues\n");
++		return -ENOMEM;
++	}
+ 
+ 	dev_info(ctrl->device,
+ 		"creating %d I/O queues.\n", nr_io_queues);
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 957b39a82431b..1e79d33c1df7e 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -1109,9 +1109,20 @@ static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
+ {
+ 	lockdep_assert_held(&ctrl->lock);
+ 
+-	if (nvmet_cc_iosqes(ctrl->cc) != NVME_NVM_IOSQES ||
+-	    nvmet_cc_iocqes(ctrl->cc) != NVME_NVM_IOCQES ||
+-	    nvmet_cc_mps(ctrl->cc) != 0 ||
++	/*
++	 * Only I/O controllers should verify iosqes,iocqes.
++	 * Strictly speaking, the spec says a discovery controller
++	 * should verify iosqes,iocqes are zeroed, however that
++	 * would break backwards compatibility, so don't enforce it.
++	 */
++	if (ctrl->subsys->type != NVME_NQN_DISC &&
++	    (nvmet_cc_iosqes(ctrl->cc) != NVME_NVM_IOSQES ||
++	     nvmet_cc_iocqes(ctrl->cc) != NVME_NVM_IOCQES)) {
++		ctrl->csts = NVME_CSTS_CFS;
++		return;
++	}
++
++	if (nvmet_cc_mps(ctrl->cc) != 0 ||
+ 	    nvmet_cc_ams(ctrl->cc) != 0 ||
+ 	    nvmet_cc_css(ctrl->cc) != 0) {
+ 		ctrl->csts = NVME_CSTS_CFS;
+diff --git a/drivers/pci/hotplug/rpadlpar_sysfs.c b/drivers/pci/hotplug/rpadlpar_sysfs.c
+index cdbfa5df3a51f..dbfa0b55d31a5 100644
+--- a/drivers/pci/hotplug/rpadlpar_sysfs.c
++++ b/drivers/pci/hotplug/rpadlpar_sysfs.c
+@@ -34,12 +34,11 @@ static ssize_t add_slot_store(struct kobject *kobj, struct kobj_attribute *attr,
+ 	if (nbytes >= MAX_DRC_NAME_LEN)
+ 		return 0;
+ 
+-	memcpy(drc_name, buf, nbytes);
++	strscpy(drc_name, buf, nbytes + 1);
+ 
+ 	end = strchr(drc_name, '\n');
+-	if (!end)
+-		end = &drc_name[nbytes];
+-	*end = '\0';
++	if (end)
++		*end = '\0';
+ 
+ 	rc = dlpar_add_slot(drc_name);
+ 	if (rc)
+@@ -65,12 +64,11 @@ static ssize_t remove_slot_store(struct kobject *kobj,
+ 	if (nbytes >= MAX_DRC_NAME_LEN)
+ 		return 0;
+ 
+-	memcpy(drc_name, buf, nbytes);
++	strscpy(drc_name, buf, nbytes + 1);
+ 
+ 	end = strchr(drc_name, '\n');
+-	if (!end)
+-		end = &drc_name[nbytes];
+-	*end = '\0';
++	if (end)
++		*end = '\0';
+ 
+ 	rc = dlpar_remove_slot(drc_name);
+ 	if (rc)
+diff --git a/drivers/pci/hotplug/s390_pci_hpc.c b/drivers/pci/hotplug/s390_pci_hpc.c
+index c9e790c74051f..a047c421debe2 100644
+--- a/drivers/pci/hotplug/s390_pci_hpc.c
++++ b/drivers/pci/hotplug/s390_pci_hpc.c
+@@ -93,8 +93,9 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
+ 		pci_dev_put(pdev);
+ 		return -EBUSY;
+ 	}
++	pci_dev_put(pdev);
+ 
+-	zpci_remove_device(zdev);
++	zpci_remove_device(zdev, false);
+ 
+ 	rc = zpci_disable_device(zdev);
+ 	if (rc)
+diff --git a/drivers/regulator/pca9450-regulator.c b/drivers/regulator/pca9450-regulator.c
+index cb29421d745aa..d38109cc3a011 100644
+--- a/drivers/regulator/pca9450-regulator.c
++++ b/drivers/regulator/pca9450-regulator.c
+@@ -5,6 +5,7 @@
+  */
+ 
+ #include <linux/err.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/i2c.h>
+ #include <linux/interrupt.h>
+ #include <linux/kernel.h>
+@@ -32,6 +33,7 @@ struct pca9450_regulator_desc {
+ struct pca9450 {
+ 	struct device *dev;
+ 	struct regmap *regmap;
++	struct gpio_desc *sd_vsel_gpio;
+ 	enum pca9450_chip_type type;
+ 	unsigned int rcnt;
+ 	int irq;
+@@ -795,6 +797,34 @@ static int pca9450_i2c_probe(struct i2c_client *i2c,
+ 		return ret;
+ 	}
+ 
++	/* Clear PRESET_EN bit in BUCK123_DVS to use DVS registers */
++	ret = regmap_clear_bits(pca9450->regmap, PCA9450_REG_BUCK123_DVS,
++				BUCK123_PRESET_EN);
++	if (ret) {
++		dev_err(&i2c->dev, "Failed to clear PRESET_EN bit: %d\n", ret);
++		return ret;
++	}
++
++	/* Set reset behavior on assertion of WDOG_B signal */
++	ret = regmap_update_bits(pca9450->regmap, PCA9450_REG_RESET_CTRL,
++				WDOG_B_CFG_MASK, WDOG_B_CFG_COLD_LDO12);
++	if (ret) {
++		dev_err(&i2c->dev, "Failed to set WDOG_B reset behavior\n");
++		return ret;
++	}
++
++	/*
++	 * The driver uses the LDO5CTRL_H register to control the LDO5 regulator.
++	 * This is only valid if the SD_VSEL input of the PMIC is high. Let's
++	 * check if the pin is available as GPIO and set it to high.
++	 */
++	pca9450->sd_vsel_gpio = gpiod_get_optional(pca9450->dev, "sd-vsel", GPIOD_OUT_HIGH);
++
++	if (IS_ERR(pca9450->sd_vsel_gpio)) {
++		dev_err(&i2c->dev, "Failed to get SD_VSEL GPIO\n");
++		return ret;
++	}
++
+ 	dev_info(&i2c->dev, "%s probed.\n",
+ 		type == PCA9450_TYPE_PCA9450A ? "pca9450a" : "pca9450bc");
+ 
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index 03f96177e58ee..4d51c4ace8ea3 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -470,6 +470,7 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
+ 	struct qaob *aob;
+ 	struct qeth_qdio_out_buffer *buffer;
+ 	enum iucv_tx_notify notification;
++	struct qeth_qdio_out_q *queue;
+ 	unsigned int i;
+ 
+ 	aob = (struct qaob *) phys_to_virt(phys_aob_addr);
+@@ -511,7 +512,9 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
+ 				kmem_cache_free(qeth_core_header_cache, data);
+ 		}
+ 
++		queue = buffer->q;
+ 		atomic_set(&buffer->state, QETH_QDIO_BUF_EMPTY);
++		napi_schedule(&queue->napi);
+ 		break;
+ 	default:
+ 		WARN_ON_ONCE(1);
+@@ -7013,9 +7016,7 @@ int qeth_open(struct net_device *dev)
+ 	card->data.state = CH_STATE_UP;
+ 	netif_tx_start_all_queues(dev);
+ 
+-	napi_enable(&card->napi);
+ 	local_bh_disable();
+-	napi_schedule(&card->napi);
+ 	if (IS_IQD(card)) {
+ 		struct qeth_qdio_out_q *queue;
+ 		unsigned int i;
+@@ -7027,8 +7028,12 @@ int qeth_open(struct net_device *dev)
+ 			napi_schedule(&queue->napi);
+ 		}
+ 	}
++
++	napi_enable(&card->napi);
++	napi_schedule(&card->napi);
+ 	/* kick-start the NAPI softirq: */
+ 	local_bh_enable();
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(qeth_open);
+@@ -7038,6 +7043,11 @@ int qeth_stop(struct net_device *dev)
+ 	struct qeth_card *card = dev->ml_priv;
+ 
+ 	QETH_CARD_TEXT(card, 4, "qethstop");
++
++	napi_disable(&card->napi);
++	cancel_delayed_work_sync(&card->buffer_reclaim_work);
++	qdio_stop_irq(CARD_DDEV(card));
++
+ 	if (IS_IQD(card)) {
+ 		struct qeth_qdio_out_q *queue;
+ 		unsigned int i;
+@@ -7058,10 +7068,6 @@ int qeth_stop(struct net_device *dev)
+ 		netif_tx_disable(dev);
+ 	}
+ 
+-	napi_disable(&card->napi);
+-	cancel_delayed_work_sync(&card->buffer_reclaim_work);
+-	qdio_stop_irq(CARD_DDEV(card));
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(qeth_stop);
+diff --git a/drivers/scsi/aic94xx/aic94xx_scb.c b/drivers/scsi/aic94xx/aic94xx_scb.c
+index e2d880a5f3915..7b0566f6a97f2 100644
+--- a/drivers/scsi/aic94xx/aic94xx_scb.c
++++ b/drivers/scsi/aic94xx/aic94xx_scb.c
+@@ -68,7 +68,6 @@ static void asd_phy_event_tasklet(struct asd_ascb *ascb,
+ 					 struct done_list_struct *dl)
+ {
+ 	struct asd_ha_struct *asd_ha = ascb->ha;
+-	struct sas_ha_struct *sas_ha = &asd_ha->sas_ha;
+ 	int phy_id = dl->status_block[0] & DL_PHY_MASK;
+ 	struct asd_phy *phy = &asd_ha->phys[phy_id];
+ 
+@@ -81,7 +80,7 @@ static void asd_phy_event_tasklet(struct asd_ascb *ascb,
+ 		ASD_DPRINTK("phy%d: device unplugged\n", phy_id);
+ 		asd_turn_led(asd_ha, phy_id, 0);
+ 		sas_phy_disconnected(&phy->sas_phy);
+-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_LOSS_OF_SIGNAL);
++		sas_notify_phy_event(&phy->sas_phy, PHYE_LOSS_OF_SIGNAL);
+ 		break;
+ 	case CURRENT_OOB_DONE:
+ 		/* hot plugged device */
+@@ -89,12 +88,12 @@ static void asd_phy_event_tasklet(struct asd_ascb *ascb,
+ 		get_lrate_mode(phy, oob_mode);
+ 		ASD_DPRINTK("phy%d device plugged: lrate:0x%x, proto:0x%x\n",
+ 			    phy_id, phy->sas_phy.linkrate, phy->sas_phy.iproto);
+-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
++		sas_notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
+ 		break;
+ 	case CURRENT_SPINUP_HOLD:
+ 		/* hot plug SATA, no COMWAKE sent */
+ 		asd_turn_led(asd_ha, phy_id, 1);
+-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_SPINUP_HOLD);
++		sas_notify_phy_event(&phy->sas_phy, PHYE_SPINUP_HOLD);
+ 		break;
+ 	case CURRENT_GTO_TIMEOUT:
+ 	case CURRENT_OOB_ERROR:
+@@ -102,7 +101,7 @@ static void asd_phy_event_tasklet(struct asd_ascb *ascb,
+ 			    dl->status_block[1]);
+ 		asd_turn_led(asd_ha, phy_id, 0);
+ 		sas_phy_disconnected(&phy->sas_phy);
+-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_ERROR);
++		sas_notify_phy_event(&phy->sas_phy, PHYE_OOB_ERROR);
+ 		break;
+ 	}
+ }
+@@ -222,7 +221,6 @@ static void asd_bytes_dmaed_tasklet(struct asd_ascb *ascb,
+ 	int edb_el = edb_id + ascb->edb_index;
+ 	struct asd_dma_tok *edb = ascb->ha->seq.edb_arr[edb_el];
+ 	struct asd_phy *phy = &ascb->ha->phys[phy_id];
+-	struct sas_ha_struct *sas_ha = phy->sas_phy.ha;
+ 	u16 size = ((dl->status_block[3] & 7) << 8) | dl->status_block[2];
+ 
+ 	size = min(size, (u16) sizeof(phy->frame_rcvd));
+@@ -234,7 +232,7 @@ static void asd_bytes_dmaed_tasklet(struct asd_ascb *ascb,
+ 	spin_unlock_irqrestore(&phy->sas_phy.frame_rcvd_lock, flags);
+ 	asd_dump_frame_rcvd(phy, dl);
+ 	asd_form_port(ascb->ha, phy);
+-	sas_ha->notify_port_event(&phy->sas_phy, PORTE_BYTES_DMAED);
++	sas_notify_port_event(&phy->sas_phy, PORTE_BYTES_DMAED);
+ }
+ 
+ static void asd_link_reset_err_tasklet(struct asd_ascb *ascb,
+@@ -270,7 +268,7 @@ static void asd_link_reset_err_tasklet(struct asd_ascb *ascb,
+ 	asd_turn_led(asd_ha, phy_id, 0);
+ 	sas_phy_disconnected(sas_phy);
+ 	asd_deform_port(asd_ha, phy);
+-	sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++	sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 
+ 	if (retries_left == 0) {
+ 		int num = 1;
+@@ -315,7 +313,7 @@ static void asd_primitive_rcvd_tasklet(struct asd_ascb *ascb,
+ 			spin_lock_irqsave(&sas_phy->sas_prim_lock, flags);
+ 			sas_phy->sas_prim = ffs(cont);
+ 			spin_unlock_irqrestore(&sas_phy->sas_prim_lock, flags);
+-			sas_ha->notify_port_event(sas_phy,PORTE_BROADCAST_RCVD);
++			sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+ 			break;
+ 
+ 		case LmUNKNOWNP:
+@@ -336,7 +334,7 @@ static void asd_primitive_rcvd_tasklet(struct asd_ascb *ascb,
+ 			/* The sequencer disables all phys on that port.
+ 			 * We have to re-enable the phys ourselves. */
+ 			asd_deform_port(asd_ha, phy);
+-			sas_ha->notify_port_event(sas_phy, PORTE_HARD_RESET);
++			sas_notify_port_event(sas_phy, PORTE_HARD_RESET);
+ 			break;
+ 
+ 		default:
+@@ -567,7 +565,7 @@ static void escb_tasklet_complete(struct asd_ascb *ascb,
+ 		/* the device is gone */
+ 		sas_phy_disconnected(sas_phy);
+ 		asd_deform_port(asd_ha, phy);
+-		sas_ha->notify_port_event(sas_phy, PORTE_TIMER_EVENT);
++		sas_notify_port_event(sas_phy, PORTE_TIMER_EVENT);
+ 		break;
+ 	default:
+ 		ASD_DPRINTK("%s: phy%d: unknown event:0x%x\n", __func__,
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 274ccf18ce2db..1feca45384c7a 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -622,7 +622,6 @@ static void hisi_sas_bytes_dmaed(struct hisi_hba *hisi_hba, int phy_no)
+ {
+ 	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
+ 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
+-	struct sas_ha_struct *sas_ha;
+ 
+ 	if (!phy->phy_attached)
+ 		return;
+@@ -633,8 +632,7 @@ static void hisi_sas_bytes_dmaed(struct hisi_hba *hisi_hba, int phy_no)
+ 		return;
+ 	}
+ 
+-	sas_ha = &hisi_hba->sha;
+-	sas_ha->notify_phy_event(sas_phy, PHYE_OOB_DONE);
++	sas_notify_phy_event(sas_phy, PHYE_OOB_DONE);
+ 
+ 	if (sas_phy->phy) {
+ 		struct sas_phy *sphy = sas_phy->phy;
+@@ -662,7 +660,7 @@ static void hisi_sas_bytes_dmaed(struct hisi_hba *hisi_hba, int phy_no)
+ 	}
+ 
+ 	sas_phy->frame_rcvd_size = phy->frame_rcvd_size;
+-	sas_ha->notify_port_event(sas_phy, PORTE_BYTES_DMAED);
++	sas_notify_port_event(sas_phy, PORTE_BYTES_DMAED);
+ }
+ 
+ static struct hisi_sas_device *hisi_sas_alloc_dev(struct domain_device *device)
+@@ -1417,7 +1415,6 @@ static void hisi_sas_refresh_port_id(struct hisi_hba *hisi_hba)
+ 
+ static void hisi_sas_rescan_topology(struct hisi_hba *hisi_hba, u32 state)
+ {
+-	struct sas_ha_struct *sas_ha = &hisi_hba->sha;
+ 	struct asd_sas_port *_sas_port = NULL;
+ 	int phy_no;
+ 
+@@ -1438,7 +1435,7 @@ static void hisi_sas_rescan_topology(struct hisi_hba *hisi_hba, u32 state)
+ 				_sas_port = sas_port;
+ 
+ 				if (dev_is_expander(dev->dev_type))
+-					sas_ha->notify_port_event(sas_phy,
++					sas_notify_port_event(sas_phy,
+ 							PORTE_BROADCAST_RCVD);
+ 			}
+ 		} else {
+@@ -2200,7 +2197,6 @@ void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy)
+ {
+ 	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
+ 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
+-	struct sas_ha_struct *sas_ha = &hisi_hba->sha;
+ 	struct device *dev = hisi_hba->dev;
+ 
+ 	if (rdy) {
+@@ -2216,7 +2212,7 @@ void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy)
+ 			return;
+ 		}
+ 		/* Phy down and not ready */
+-		sas_ha->notify_phy_event(sas_phy, PHYE_LOSS_OF_SIGNAL);
++		sas_notify_phy_event(sas_phy, PHYE_LOSS_OF_SIGNAL);
+ 		sas_phy_disconnected(sas_phy);
+ 
+ 		if (port) {
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+index 45e866cb9164d..22eecc89d41bd 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+@@ -1408,7 +1408,6 @@ static irqreturn_t int_bcast_v1_hw(int irq, void *p)
+ 	struct hisi_sas_phy *phy = p;
+ 	struct hisi_hba *hisi_hba = phy->hisi_hba;
+ 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
+-	struct sas_ha_struct *sha = &hisi_hba->sha;
+ 	struct device *dev = hisi_hba->dev;
+ 	int phy_no = sas_phy->id;
+ 	u32 irq_value;
+@@ -1424,7 +1423,7 @@ static irqreturn_t int_bcast_v1_hw(int irq, void *p)
+ 	}
+ 
+ 	if (!test_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
+-		sha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
++		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+ 
+ end:
+ 	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT2,
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+index b57177b52facc..6ef8730c61a6e 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+@@ -2818,14 +2818,13 @@ static void phy_bcast_v2_hw(int phy_no, struct hisi_hba *hisi_hba)
+ {
+ 	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
+ 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
+-	struct sas_ha_struct *sas_ha = &hisi_hba->sha;
+ 	u32 bcast_status;
+ 
+ 	hisi_sas_phy_write32(hisi_hba, phy_no, SL_RX_BCAST_CHK_MSK, 1);
+ 	bcast_status = hisi_sas_phy_read32(hisi_hba, phy_no, RX_PRIMS_STATUS);
+ 	if ((bcast_status & RX_BCAST_CHG_MSK) &&
+ 	    !test_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
+-		sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
++		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+ 	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT0,
+ 			     CHL_INT0_SL_RX_BCST_ACK_MSK);
+ 	hisi_sas_phy_write32(hisi_hba, phy_no, SL_RX_BCAST_CHK_MSK, 0);
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 2cbd8a524edab..19170c7ac336f 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -1598,14 +1598,13 @@ static irqreturn_t phy_bcast_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
+ {
+ 	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
+ 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
+-	struct sas_ha_struct *sas_ha = &hisi_hba->sha;
+ 	u32 bcast_status;
+ 
+ 	hisi_sas_phy_write32(hisi_hba, phy_no, SL_RX_BCAST_CHK_MSK, 1);
+ 	bcast_status = hisi_sas_phy_read32(hisi_hba, phy_no, RX_PRIMS_STATUS);
+ 	if ((bcast_status & RX_BCAST_CHG_MSK) &&
+ 	    !test_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
+-		sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
++		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+ 	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT0,
+ 			     CHL_INT0_SL_RX_BCST_ACK_MSK);
+ 	hisi_sas_phy_write32(hisi_hba, phy_no, SL_RX_BCAST_CHK_MSK, 0);
+diff --git a/drivers/scsi/isci/port.c b/drivers/scsi/isci/port.c
+index 1df45f028ea75..e50c3b0deeb30 100644
+--- a/drivers/scsi/isci/port.c
++++ b/drivers/scsi/isci/port.c
+@@ -164,7 +164,8 @@ static void isci_port_bc_change_received(struct isci_host *ihost,
+ 		"%s: isci_phy = %p, sas_phy = %p\n",
+ 		__func__, iphy, &iphy->sas_phy);
+ 
+-	ihost->sas_ha.notify_port_event(&iphy->sas_phy, PORTE_BROADCAST_RCVD);
++	sas_notify_port_event_gfp(&iphy->sas_phy,
++				  PORTE_BROADCAST_RCVD, GFP_ATOMIC);
+ 	sci_port_bcn_enable(iport);
+ }
+ 
+@@ -223,8 +224,8 @@ static void isci_port_link_up(struct isci_host *isci_host,
+ 	/* Notify libsas that we have an address frame, if indeed
+ 	 * we've found an SSP, SMP, or STP target */
+ 	if (success)
+-		isci_host->sas_ha.notify_port_event(&iphy->sas_phy,
+-						    PORTE_BYTES_DMAED);
++		sas_notify_port_event_gfp(&iphy->sas_phy,
++					  PORTE_BYTES_DMAED, GFP_ATOMIC);
+ }
+ 
+ 
+@@ -270,8 +271,8 @@ static void isci_port_link_down(struct isci_host *isci_host,
+ 	 * isci_port_deformed and isci_dev_gone functions.
+ 	 */
+ 	sas_phy_disconnected(&isci_phy->sas_phy);
+-	isci_host->sas_ha.notify_phy_event(&isci_phy->sas_phy,
+-					   PHYE_LOSS_OF_SIGNAL);
++	sas_notify_phy_event_gfp(&isci_phy->sas_phy,
++				 PHYE_LOSS_OF_SIGNAL, GFP_ATOMIC);
+ 
+ 	dev_dbg(&isci_host->pdev->dev,
+ 		"%s: isci_port = %p - Done\n", __func__, isci_port);
+diff --git a/drivers/scsi/libsas/sas_event.c b/drivers/scsi/libsas/sas_event.c
+index a1852f6c042b9..ba266a17250ae 100644
+--- a/drivers/scsi/libsas/sas_event.c
++++ b/drivers/scsi/libsas/sas_event.c
+@@ -109,7 +109,7 @@ void sas_enable_revalidation(struct sas_ha_struct *ha)
+ 
+ 		sas_phy = container_of(port->phy_list.next, struct asd_sas_phy,
+ 				port_phy_el);
+-		ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
++		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+ 	}
+ 	mutex_unlock(&ha->disco_mutex);
+ }
+@@ -131,18 +131,15 @@ static void sas_phy_event_worker(struct work_struct *work)
+ 	sas_free_event(ev);
+ }
+ 
+-static int sas_notify_port_event(struct asd_sas_phy *phy, enum port_event event)
++static int __sas_notify_port_event(struct asd_sas_phy *phy,
++				   enum port_event event,
++				   struct asd_sas_event *ev)
+ {
+-	struct asd_sas_event *ev;
+ 	struct sas_ha_struct *ha = phy->ha;
+ 	int ret;
+ 
+ 	BUG_ON(event >= PORT_NUM_EVENTS);
+ 
+-	ev = sas_alloc_event(phy);
+-	if (!ev)
+-		return -ENOMEM;
+-
+ 	INIT_SAS_EVENT(ev, sas_port_event_worker, phy, event);
+ 
+ 	ret = sas_queue_event(event, &ev->work, ha);
+@@ -152,18 +149,40 @@ static int sas_notify_port_event(struct asd_sas_phy *phy, enum port_event event)
+ 	return ret;
+ }
+ 
+-int sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event)
++int sas_notify_port_event_gfp(struct asd_sas_phy *phy, enum port_event event,
++			      gfp_t gfp_flags)
+ {
+ 	struct asd_sas_event *ev;
+-	struct sas_ha_struct *ha = phy->ha;
+-	int ret;
+ 
+-	BUG_ON(event >= PHY_NUM_EVENTS);
++	ev = sas_alloc_event_gfp(phy, gfp_flags);
++	if (!ev)
++		return -ENOMEM;
++
++	return __sas_notify_port_event(phy, event, ev);
++}
++EXPORT_SYMBOL_GPL(sas_notify_port_event_gfp);
++
++int sas_notify_port_event(struct asd_sas_phy *phy, enum port_event event)
++{
++	struct asd_sas_event *ev;
+ 
+ 	ev = sas_alloc_event(phy);
+ 	if (!ev)
+ 		return -ENOMEM;
+ 
++	return __sas_notify_port_event(phy, event, ev);
++}
++EXPORT_SYMBOL_GPL(sas_notify_port_event);
++
++static inline int __sas_notify_phy_event(struct asd_sas_phy *phy,
++					 enum phy_event event,
++					 struct asd_sas_event *ev)
++{
++	struct sas_ha_struct *ha = phy->ha;
++	int ret;
++
++	BUG_ON(event >= PHY_NUM_EVENTS);
++
+ 	INIT_SAS_EVENT(ev, sas_phy_event_worker, phy, event);
+ 
+ 	ret = sas_queue_event(event, &ev->work, ha);
+@@ -173,10 +192,27 @@ int sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event)
+ 	return ret;
+ }
+ 
+-int sas_init_events(struct sas_ha_struct *sas_ha)
++int sas_notify_phy_event_gfp(struct asd_sas_phy *phy, enum phy_event event,
++			     gfp_t gfp_flags)
+ {
+-	sas_ha->notify_port_event = sas_notify_port_event;
+-	sas_ha->notify_phy_event = sas_notify_phy_event;
++	struct asd_sas_event *ev;
+ 
+-	return 0;
++	ev = sas_alloc_event_gfp(phy, gfp_flags);
++	if (!ev)
++		return -ENOMEM;
++
++	return __sas_notify_phy_event(phy, event, ev);
++}
++EXPORT_SYMBOL_GPL(sas_notify_phy_event_gfp);
++
++int sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event)
++{
++	struct asd_sas_event *ev;
++
++	ev = sas_alloc_event(phy);
++	if (!ev)
++		return -ENOMEM;
++
++	return __sas_notify_phy_event(phy, event, ev);
+ }
++EXPORT_SYMBOL_GPL(sas_notify_phy_event);
+diff --git a/drivers/scsi/libsas/sas_init.c b/drivers/scsi/libsas/sas_init.c
+index 21c43b18d5d5b..f8ae1f0f17d36 100644
+--- a/drivers/scsi/libsas/sas_init.c
++++ b/drivers/scsi/libsas/sas_init.c
+@@ -123,12 +123,6 @@ int sas_register_ha(struct sas_ha_struct *sas_ha)
+ 		goto Undo_phys;
+ 	}
+ 
+-	error = sas_init_events(sas_ha);
+-	if (error) {
+-		pr_notice("couldn't start event thread:%d\n", error);
+-		goto Undo_ports;
+-	}
+-
+ 	error = -ENOMEM;
+ 	snprintf(name, sizeof(name), "%s_event_q", dev_name(sas_ha->dev));
+ 	sas_ha->event_q = create_singlethread_workqueue(name);
+@@ -590,16 +584,15 @@ sas_domain_attach_transport(struct sas_domain_function_template *dft)
+ }
+ EXPORT_SYMBOL_GPL(sas_domain_attach_transport);
+ 
+-
+-struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy)
++static struct asd_sas_event *__sas_alloc_event(struct asd_sas_phy *phy,
++					       gfp_t gfp_flags)
+ {
+ 	struct asd_sas_event *event;
+-	gfp_t flags = in_interrupt() ? GFP_ATOMIC : GFP_KERNEL;
+ 	struct sas_ha_struct *sas_ha = phy->ha;
+ 	struct sas_internal *i =
+ 		to_sas_internal(sas_ha->core.shost->transportt);
+ 
+-	event = kmem_cache_zalloc(sas_event_cache, flags);
++	event = kmem_cache_zalloc(sas_event_cache, gfp_flags);
+ 	if (!event)
+ 		return NULL;
+ 
+@@ -610,7 +603,8 @@ struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy)
+ 			if (cmpxchg(&phy->in_shutdown, 0, 1) == 0) {
+ 				pr_notice("The phy%d bursting events, shut it down.\n",
+ 					  phy->id);
+-				sas_notify_phy_event(phy, PHYE_SHUTDOWN);
++				sas_notify_phy_event_gfp(phy, PHYE_SHUTDOWN,
++							 gfp_flags);
+ 			}
+ 		} else {
+ 			/* Do not support PHY control, stop allocating events */
+@@ -624,6 +618,17 @@ struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy)
+ 	return event;
+ }
+ 
++struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy)
++{
++	return __sas_alloc_event(phy, in_interrupt() ? GFP_ATOMIC : GFP_KERNEL);
++}
++
++struct asd_sas_event *sas_alloc_event_gfp(struct asd_sas_phy *phy,
++					  gfp_t gfp_flags)
++{
++	return __sas_alloc_event(phy, gfp_flags);
++}
++
+ void sas_free_event(struct asd_sas_event *event)
+ {
+ 	struct asd_sas_phy *phy = event->phy;
+diff --git a/drivers/scsi/libsas/sas_internal.h b/drivers/scsi/libsas/sas_internal.h
+index 1f1d01901978c..52e09c3e2b50d 100644
+--- a/drivers/scsi/libsas/sas_internal.h
++++ b/drivers/scsi/libsas/sas_internal.h
+@@ -49,12 +49,13 @@ int  sas_register_phys(struct sas_ha_struct *sas_ha);
+ void sas_unregister_phys(struct sas_ha_struct *sas_ha);
+ 
+ struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy);
++struct asd_sas_event *sas_alloc_event_gfp(struct asd_sas_phy *phy,
++					  gfp_t gfp_flags);
+ void sas_free_event(struct asd_sas_event *event);
+ 
+ int  sas_register_ports(struct sas_ha_struct *sas_ha);
+ void sas_unregister_ports(struct sas_ha_struct *sas_ha);
+ 
+-int  sas_init_events(struct sas_ha_struct *sas_ha);
+ void sas_disable_revalidation(struct sas_ha_struct *ha);
+ void sas_enable_revalidation(struct sas_ha_struct *ha);
+ void __sas_drain_work(struct sas_ha_struct *ha);
+@@ -78,6 +79,8 @@ int sas_smp_phy_control(struct domain_device *dev, int phy_id,
+ int sas_smp_get_phy_events(struct sas_phy *phy);
+ 
+ int sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event);
++int sas_notify_phy_event_gfp(struct asd_sas_phy *phy, enum phy_event event,
++			     gfp_t flags);
+ void sas_device_set_phy(struct domain_device *dev, struct sas_port *port);
+ struct domain_device *sas_find_dev_by_rphy(struct sas_rphy *rphy);
+ struct domain_device *sas_ex_to_ata(struct domain_device *ex_dev, int phy_id);
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index c9a327b13e5cf..b89c5513243e8 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -2423,7 +2423,7 @@ lpfc_debugfs_dif_err_write(struct file *file, const char __user *buf,
+ 	memset(dstbuf, 0, 33);
+ 	size = (nbytes < 32) ? nbytes : 32;
+ 	if (copy_from_user(dstbuf, buf, size))
+-		return 0;
++		return -EFAULT;
+ 
+ 	if (dent == phba->debug_InjErrLBA) {
+ 		if ((dstbuf[0] == 'o') && (dstbuf[1] == 'f') &&
+@@ -2432,7 +2432,7 @@ lpfc_debugfs_dif_err_write(struct file *file, const char __user *buf,
+ 	}
+ 
+ 	if ((tmp == 0) && (kstrtoull(dstbuf, 0, &tmp)))
+-		return 0;
++		return -EINVAL;
+ 
+ 	if (dent == phba->debug_writeGuard)
+ 		phba->lpfc_injerr_wgrd_cnt = (uint32_t)tmp;
+diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
+index a920eced92ecc..484e01428da28 100644
+--- a/drivers/scsi/mvsas/mv_sas.c
++++ b/drivers/scsi/mvsas/mv_sas.c
+@@ -216,11 +216,11 @@ void mvs_set_sas_addr(struct mvs_info *mvi, int port_id, u32 off_lo,
+ 	MVS_CHIP_DISP->write_port_cfg_data(mvi, port_id, hi);
+ }
+ 
+-static void mvs_bytes_dmaed(struct mvs_info *mvi, int i)
++static void mvs_bytes_dmaed(struct mvs_info *mvi, int i, gfp_t gfp_flags)
+ {
+ 	struct mvs_phy *phy = &mvi->phy[i];
+ 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
+-	struct sas_ha_struct *sas_ha;
++
+ 	if (!phy->phy_attached)
+ 		return;
+ 
+@@ -229,8 +229,7 @@ static void mvs_bytes_dmaed(struct mvs_info *mvi, int i)
+ 		return;
+ 	}
+ 
+-	sas_ha = mvi->sas;
+-	sas_ha->notify_phy_event(sas_phy, PHYE_OOB_DONE);
++	sas_notify_phy_event_gfp(sas_phy, PHYE_OOB_DONE, gfp_flags);
+ 
+ 	if (sas_phy->phy) {
+ 		struct sas_phy *sphy = sas_phy->phy;
+@@ -262,8 +261,7 @@ static void mvs_bytes_dmaed(struct mvs_info *mvi, int i)
+ 
+ 	sas_phy->frame_rcvd_size = phy->frame_rcvd_size;
+ 
+-	mvi->sas->notify_port_event(sas_phy,
+-				   PORTE_BYTES_DMAED);
++	sas_notify_port_event_gfp(sas_phy, PORTE_BYTES_DMAED, gfp_flags);
+ }
+ 
+ void mvs_scan_start(struct Scsi_Host *shost)
+@@ -279,7 +277,7 @@ void mvs_scan_start(struct Scsi_Host *shost)
+ 	for (j = 0; j < core_nr; j++) {
+ 		mvi = ((struct mvs_prv_info *)sha->lldd_ha)->mvi[j];
+ 		for (i = 0; i < mvi->chip->n_phy; ++i)
+-			mvs_bytes_dmaed(mvi, i);
++			mvs_bytes_dmaed(mvi, i, GFP_KERNEL);
+ 	}
+ 	mvs_prv->scan_finished = 1;
+ }
+@@ -1880,7 +1878,6 @@ static void mvs_work_queue(struct work_struct *work)
+ 	struct mvs_info *mvi = mwq->mvi;
+ 	unsigned long flags;
+ 	u32 phy_no = (unsigned long) mwq->data;
+-	struct sas_ha_struct *sas_ha = mvi->sas;
+ 	struct mvs_phy *phy = &mvi->phy[phy_no];
+ 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
+ 
+@@ -1895,21 +1892,21 @@ static void mvs_work_queue(struct work_struct *work)
+ 			if (!(tmp & PHY_READY_MASK)) {
+ 				sas_phy_disconnected(sas_phy);
+ 				mvs_phy_disconnected(phy);
+-				sas_ha->notify_phy_event(sas_phy,
+-					PHYE_LOSS_OF_SIGNAL);
++				sas_notify_phy_event_gfp(sas_phy,
++					PHYE_LOSS_OF_SIGNAL, GFP_ATOMIC);
+ 				mv_dprintk("phy%d Removed Device\n", phy_no);
+ 			} else {
+ 				MVS_CHIP_DISP->detect_porttype(mvi, phy_no);
+ 				mvs_update_phyinfo(mvi, phy_no, 1);
+-				mvs_bytes_dmaed(mvi, phy_no);
++				mvs_bytes_dmaed(mvi, phy_no, GFP_ATOMIC);
+ 				mvs_port_notify_formed(sas_phy, 0);
+ 				mv_dprintk("phy%d Attached Device\n", phy_no);
+ 			}
+ 		}
+ 	} else if (mwq->handler & EXP_BRCT_CHG) {
+ 		phy->phy_event &= ~EXP_BRCT_CHG;
+-		sas_ha->notify_port_event(sas_phy,
+-				PORTE_BROADCAST_RCVD);
++		sas_notify_port_event_gfp(sas_phy,
++				PORTE_BROADCAST_RCVD, GFP_ATOMIC);
+ 		mv_dprintk("phy%d Got Broadcast Change\n", phy_no);
+ 	}
+ 	list_del(&mwq->entry);
+@@ -2026,7 +2023,7 @@ void mvs_int_port(struct mvs_info *mvi, int phy_no, u32 events)
+ 				mdelay(10);
+ 			}
+ 
+-			mvs_bytes_dmaed(mvi, phy_no);
++			mvs_bytes_dmaed(mvi, phy_no, GFP_ATOMIC);
+ 			/* whether driver is going to handle hot plug */
+ 			if (phy->phy_event & PHY_PLUG_OUT) {
+ 				mvs_port_notify_formed(&phy->sas_phy, 0);
+diff --git a/drivers/scsi/myrs.c b/drivers/scsi/myrs.c
+index 7a3ade765ce3b..78c41bbf67562 100644
+--- a/drivers/scsi/myrs.c
++++ b/drivers/scsi/myrs.c
+@@ -2274,12 +2274,12 @@ static void myrs_cleanup(struct myrs_hba *cs)
+ 	if (cs->mmio_base) {
+ 		cs->disable_intr(cs);
+ 		iounmap(cs->mmio_base);
++		cs->mmio_base = NULL;
+ 	}
+ 	if (cs->irq)
+ 		free_irq(cs->irq, cs);
+ 	if (cs->io_addr)
+ 		release_region(cs->io_addr, 0x80);
+-	iounmap(cs->mmio_base);
+ 	pci_set_drvdata(pdev, NULL);
+ 	pci_disable_device(pdev);
+ 	scsi_host_put(cs->host);
+diff --git a/drivers/scsi/pm8001/pm8001_ctl.c b/drivers/scsi/pm8001/pm8001_ctl.c
+index 3587f7c8a4289..12035baf0997b 100644
+--- a/drivers/scsi/pm8001/pm8001_ctl.c
++++ b/drivers/scsi/pm8001/pm8001_ctl.c
+@@ -841,10 +841,9 @@ static ssize_t pm8001_store_update_fw(struct device *cdev,
+ 			       pm8001_ha->dev);
+ 
+ 	if (ret) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk(
+-			"Failed to load firmware image file %s,	error %d\n",
+-			filename_ptr, ret));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "Failed to load firmware image file %s, error %d\n",
++			   filename_ptr, ret);
+ 		pm8001_ha->fw_status = FAIL_OPEN_BIOS_FILE;
+ 		goto out;
+ 	}
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 2b7b2954ec31a..95ba1bd16db93 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -400,9 +400,9 @@ int pm8001_bar4_shift(struct pm8001_hba_info *pm8001_ha, u32 shiftValue)
+ 	} while ((regVal != shiftValue) && time_before(jiffies, start));
+ 
+ 	if (regVal != shiftValue) {
+-		PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("TIMEOUT:SPC_IBW_AXI_TRANSLATION_LOW"
+-			" = 0x%x\n", regVal));
++		pm8001_dbg(pm8001_ha, INIT,
++			   "TIMEOUT:SPC_IBW_AXI_TRANSLATION_LOW = 0x%x\n",
++			   regVal);
+ 		return -1;
+ 	}
+ 	return 0;
+@@ -623,12 +623,10 @@ static void init_pci_device_addresses(struct pm8001_hba_info *pm8001_ha)
+ 
+ 	value = pm8001_cr32(pm8001_ha, 0, 0x44);
+ 	offset = value & 0x03FFFFFF;
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("Scratchpad 0 Offset: %x\n", offset));
++	pm8001_dbg(pm8001_ha, INIT, "Scratchpad 0 Offset: %x\n", offset);
+ 	pcilogic = (value & 0xFC000000) >> 26;
+ 	pcibar = get_pci_bar_index(pcilogic);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("Scratchpad 0 PCI BAR: %d\n", pcibar));
++	pm8001_dbg(pm8001_ha, INIT, "Scratchpad 0 PCI BAR: %d\n", pcibar);
+ 	pm8001_ha->main_cfg_tbl_addr = base_addr =
+ 		pm8001_ha->io_mem[pcibar].memvirtaddr + offset;
+ 	pm8001_ha->general_stat_tbl_addr =
+@@ -652,16 +650,15 @@ static int pm8001_chip_init(struct pm8001_hba_info *pm8001_ha)
+ 	* as this is shared with BIOS data */
+ 	if (deviceid == 0x8081 || deviceid == 0x0042) {
+ 		if (-1 == pm8001_bar4_shift(pm8001_ha, GSM_SM_BASE)) {
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("Shift Bar4 to 0x%x failed\n",
+-					GSM_SM_BASE));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "Shift Bar4 to 0x%x failed\n",
++				   GSM_SM_BASE);
+ 			return -1;
+ 		}
+ 	}
+ 	/* check the firmware status */
+ 	if (-1 == check_fw_ready(pm8001_ha)) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("Firmware is not ready!\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "Firmware is not ready!\n");
+ 		return -EBUSY;
+ 	}
+ 
+@@ -686,8 +683,7 @@ static int pm8001_chip_init(struct pm8001_hba_info *pm8001_ha)
+ 	}
+ 	/* notify firmware update finished and check initialization status */
+ 	if (0 == mpi_init_check(pm8001_ha)) {
+-		PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("MPI initialize successful!\n"));
++		pm8001_dbg(pm8001_ha, INIT, "MPI initialize successful!\n");
+ 	} else
+ 		return -EBUSY;
+ 	/*This register is a 16-bit timer with a resolution of 1us. This is the
+@@ -709,9 +705,9 @@ static int mpi_uninit_check(struct pm8001_hba_info *pm8001_ha)
+ 	pci_read_config_word(pm8001_ha->pdev, PCI_DEVICE_ID, &deviceid);
+ 	if (deviceid == 0x8081 || deviceid == 0x0042) {
+ 		if (-1 == pm8001_bar4_shift(pm8001_ha, GSM_SM_BASE)) {
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("Shift Bar4 to 0x%x failed\n",
+-					GSM_SM_BASE));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "Shift Bar4 to 0x%x failed\n",
++				   GSM_SM_BASE);
+ 			return -1;
+ 		}
+ 	}
+@@ -729,8 +725,8 @@ static int mpi_uninit_check(struct pm8001_hba_info *pm8001_ha)
+ 	} while ((value != 0) && (--max_wait_count));
+ 
+ 	if (!max_wait_count) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("TIMEOUT:IBDB value/=0x%x\n", value));
++		pm8001_dbg(pm8001_ha, FAIL, "TIMEOUT:IBDB value/=0x%x\n",
++			   value);
+ 		return -1;
+ 	}
+ 
+@@ -747,9 +743,8 @@ static int mpi_uninit_check(struct pm8001_hba_info *pm8001_ha)
+ 			break;
+ 	} while (--max_wait_count);
+ 	if (!max_wait_count) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk(" TIME OUT MPI State = 0x%x\n",
+-				gst_len_mpistate & GST_MPI_STATE_MASK));
++		pm8001_dbg(pm8001_ha, FAIL, " TIME OUT MPI State = 0x%x\n",
++			   gst_len_mpistate & GST_MPI_STATE_MASK);
+ 		return -1;
+ 	}
+ 	return 0;
+@@ -763,25 +758,23 @@ static u32 soft_reset_ready_check(struct pm8001_hba_info *pm8001_ha)
+ {
+ 	u32 regVal, regVal1, regVal2;
+ 	if (mpi_uninit_check(pm8001_ha) != 0) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("MPI state is not ready\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "MPI state is not ready\n");
+ 		return -1;
+ 	}
+ 	/* read the scratch pad 2 register bit 2 */
+ 	regVal = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_2)
+ 		& SCRATCH_PAD2_FWRDY_RST;
+ 	if (regVal == SCRATCH_PAD2_FWRDY_RST) {
+-		PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("Firmware is ready for reset .\n"));
++		pm8001_dbg(pm8001_ha, INIT, "Firmware is ready for reset.\n");
+ 	} else {
+ 		unsigned long flags;
+ 		/* Trigger NMI twice via RB6 */
+ 		spin_lock_irqsave(&pm8001_ha->lock, flags);
+ 		if (-1 == pm8001_bar4_shift(pm8001_ha, RB6_ACCESS_REG)) {
+ 			spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("Shift Bar4 to 0x%x failed\n",
+-					RB6_ACCESS_REG));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "Shift Bar4 to 0x%x failed\n",
++				   RB6_ACCESS_REG);
+ 			return -1;
+ 		}
+ 		pm8001_cw32(pm8001_ha, 2, SPC_RB6_OFFSET,
+@@ -794,16 +787,14 @@ static u32 soft_reset_ready_check(struct pm8001_hba_info *pm8001_ha)
+ 		if (regVal != SCRATCH_PAD2_FWRDY_RST) {
+ 			regVal1 = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1);
+ 			regVal2 = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_2);
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("TIMEOUT:MSGU_SCRATCH_PAD1"
+-				"=0x%x, MSGU_SCRATCH_PAD2=0x%x\n",
+-				regVal1, regVal2));
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("SCRATCH_PAD0 value = 0x%x\n",
+-				pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_0)));
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("SCRATCH_PAD3 value = 0x%x\n",
+-				pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_3)));
++			pm8001_dbg(pm8001_ha, FAIL, "TIMEOUT:MSGU_SCRATCH_PAD1=0x%x, MSGU_SCRATCH_PAD2=0x%x\n",
++				   regVal1, regVal2);
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "SCRATCH_PAD0 value = 0x%x\n",
++				   pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_0));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "SCRATCH_PAD3 value = 0x%x\n",
++				   pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_3));
+ 			spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+ 			return -1;
+ 		}
+@@ -828,7 +819,7 @@ pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 
+ 	/* step1: Check FW is ready for soft reset */
+ 	if (soft_reset_ready_check(pm8001_ha) != 0) {
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("FW is not ready\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "FW is not ready\n");
+ 		return -1;
+ 	}
+ 
+@@ -838,46 +829,43 @@ pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 	spin_lock_irqsave(&pm8001_ha->lock, flags);
+ 	if (-1 == pm8001_bar4_shift(pm8001_ha, MBIC_AAP1_ADDR_BASE)) {
+ 		spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("Shift Bar4 to 0x%x failed\n",
+-			MBIC_AAP1_ADDR_BASE));
++		pm8001_dbg(pm8001_ha, FAIL, "Shift Bar4 to 0x%x failed\n",
++			   MBIC_AAP1_ADDR_BASE);
+ 		return -1;
+ 	}
+ 	regVal = pm8001_cr32(pm8001_ha, 2, MBIC_NMI_ENABLE_VPE0_IOP);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("MBIC - NMI Enable VPE0 (IOP)= 0x%x\n", regVal));
++	pm8001_dbg(pm8001_ha, INIT, "MBIC - NMI Enable VPE0 (IOP)= 0x%x\n",
++		   regVal);
+ 	pm8001_cw32(pm8001_ha, 2, MBIC_NMI_ENABLE_VPE0_IOP, 0x0);
+ 	/* map 0x70000 to BAR4(0x20), BAR2(win) */
+ 	if (-1 == pm8001_bar4_shift(pm8001_ha, MBIC_IOP_ADDR_BASE)) {
+ 		spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("Shift Bar4 to 0x%x failed\n",
+-			MBIC_IOP_ADDR_BASE));
++		pm8001_dbg(pm8001_ha, FAIL, "Shift Bar4 to 0x%x failed\n",
++			   MBIC_IOP_ADDR_BASE);
+ 		return -1;
+ 	}
+ 	regVal = pm8001_cr32(pm8001_ha, 2, MBIC_NMI_ENABLE_VPE0_AAP1);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("MBIC - NMI Enable VPE0 (AAP1)= 0x%x\n", regVal));
++	pm8001_dbg(pm8001_ha, INIT, "MBIC - NMI Enable VPE0 (AAP1)= 0x%x\n",
++		   regVal);
+ 	pm8001_cw32(pm8001_ha, 2, MBIC_NMI_ENABLE_VPE0_AAP1, 0x0);
+ 
+ 	regVal = pm8001_cr32(pm8001_ha, 1, PCIE_EVENT_INTERRUPT_ENABLE);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("PCIE -Event Interrupt Enable = 0x%x\n", regVal));
++	pm8001_dbg(pm8001_ha, INIT, "PCIE -Event Interrupt Enable = 0x%x\n",
++		   regVal);
+ 	pm8001_cw32(pm8001_ha, 1, PCIE_EVENT_INTERRUPT_ENABLE, 0x0);
+ 
+ 	regVal = pm8001_cr32(pm8001_ha, 1, PCIE_EVENT_INTERRUPT);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("PCIE - Event Interrupt  = 0x%x\n", regVal));
++	pm8001_dbg(pm8001_ha, INIT, "PCIE - Event Interrupt  = 0x%x\n",
++		   regVal);
+ 	pm8001_cw32(pm8001_ha, 1, PCIE_EVENT_INTERRUPT, regVal);
+ 
+ 	regVal = pm8001_cr32(pm8001_ha, 1, PCIE_ERROR_INTERRUPT_ENABLE);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("PCIE -Error Interrupt Enable = 0x%x\n", regVal));
++	pm8001_dbg(pm8001_ha, INIT, "PCIE -Error Interrupt Enable = 0x%x\n",
++		   regVal);
+ 	pm8001_cw32(pm8001_ha, 1, PCIE_ERROR_INTERRUPT_ENABLE, 0x0);
+ 
+ 	regVal = pm8001_cr32(pm8001_ha, 1, PCIE_ERROR_INTERRUPT);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("PCIE - Error Interrupt = 0x%x\n", regVal));
++	pm8001_dbg(pm8001_ha, INIT, "PCIE - Error Interrupt = 0x%x\n", regVal);
+ 	pm8001_cw32(pm8001_ha, 1, PCIE_ERROR_INTERRUPT, regVal);
+ 
+ 	/* read the scratch pad 1 register bit 2 */
+@@ -893,15 +881,13 @@ pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 	/* map 0x0700000 to BAR4(0x20), BAR2(win) */
+ 	if (-1 == pm8001_bar4_shift(pm8001_ha, GSM_ADDR_BASE)) {
+ 		spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("Shift Bar4 to 0x%x failed\n",
+-			GSM_ADDR_BASE));
++		pm8001_dbg(pm8001_ha, FAIL, "Shift Bar4 to 0x%x failed\n",
++			   GSM_ADDR_BASE);
+ 		return -1;
+ 	}
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x0(0x00007b88)-GSM Configuration and"
+-		" Reset = 0x%x\n",
+-		pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET)));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "GSM 0x0(0x00007b88)-GSM Configuration and Reset = 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET));
+ 
+ 	/* step 3: host read GSM Configuration and Reset register */
+ 	regVal = pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET);
+@@ -916,59 +902,52 @@ pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 	regVal &= ~(0x00003b00);
+ 	/* host write GSM Configuration and Reset register */
+ 	pm8001_cw32(pm8001_ha, 2, GSM_CONFIG_RESET, regVal);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x0 (0x00007b88 ==> 0x00004088) - GSM "
+-		"Configuration and Reset is set to = 0x%x\n",
+-		pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET)));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "GSM 0x0 (0x00007b88 ==> 0x00004088) - GSM Configuration and Reset is set to = 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET));
+ 
+ 	/* step 4: */
+ 	/* disable GSM - Read Address Parity Check */
+ 	regVal1 = pm8001_cr32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x700038 - Read Address Parity Check "
+-		"Enable = 0x%x\n", regVal1));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "GSM 0x700038 - Read Address Parity Check Enable = 0x%x\n",
++		   regVal1);
+ 	pm8001_cw32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK, 0x0);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x700038 - Read Address Parity Check Enable"
+-		"is set to = 0x%x\n",
+-		pm8001_cr32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK)));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "GSM 0x700038 - Read Address Parity Check Enable is set to = 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK));
+ 
+ 	/* disable GSM - Write Address Parity Check */
+ 	regVal2 = pm8001_cr32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x700040 - Write Address Parity Check"
+-		" Enable = 0x%x\n", regVal2));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "GSM 0x700040 - Write Address Parity Check Enable = 0x%x\n",
++		   regVal2);
+ 	pm8001_cw32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK, 0x0);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x700040 - Write Address Parity Check "
+-		"Enable is set to = 0x%x\n",
+-		pm8001_cr32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK)));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "GSM 0x700040 - Write Address Parity Check Enable is set to = 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK));
+ 
+ 	/* disable GSM - Write Data Parity Check */
+ 	regVal3 = pm8001_cr32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x300048 - Write Data Parity Check"
+-		" Enable = 0x%x\n", regVal3));
++	pm8001_dbg(pm8001_ha, INIT, "GSM 0x300048 - Write Data Parity Check Enable = 0x%x\n",
++		   regVal3);
+ 	pm8001_cw32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK, 0x0);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x300048 - Write Data Parity Check Enable"
+-		"is set to = 0x%x\n",
+-	pm8001_cr32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK)));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "GSM 0x300048 - Write Data Parity Check Enable is set to = 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK));
+ 
+ 	/* step 5: delay 10 usec */
+ 	udelay(10);
+ 	/* step 5-b: set GPIO-0 output control to tristate anyway */
+ 	if (-1 == pm8001_bar4_shift(pm8001_ha, GPIO_ADDR_BASE)) {
+ 		spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+-		PM8001_INIT_DBG(pm8001_ha,
+-				pm8001_printk("Shift Bar4 to 0x%x failed\n",
+-				GPIO_ADDR_BASE));
++		pm8001_dbg(pm8001_ha, INIT, "Shift Bar4 to 0x%x failed\n",
++			   GPIO_ADDR_BASE);
+ 		return -1;
+ 	}
+ 	regVal = pm8001_cr32(pm8001_ha, 2, GPIO_GPIO_0_0UTPUT_CTL_OFFSET);
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("GPIO Output Control Register:"
+-			" = 0x%x\n", regVal));
++	pm8001_dbg(pm8001_ha, INIT, "GPIO Output Control Register: = 0x%x\n",
++		   regVal);
+ 	/* set GPIO-0 output control to tri-state */
+ 	regVal &= 0xFFFFFFFC;
+ 	pm8001_cw32(pm8001_ha, 2, GPIO_GPIO_0_0UTPUT_CTL_OFFSET, regVal);
+@@ -977,23 +956,20 @@ pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 	/* map 0x00000 to BAR4(0x20), BAR2(win) */
+ 	if (-1 == pm8001_bar4_shift(pm8001_ha, SPC_TOP_LEVEL_ADDR_BASE)) {
+ 		spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("SPC Shift Bar4 to 0x%x failed\n",
+-			SPC_TOP_LEVEL_ADDR_BASE));
++		pm8001_dbg(pm8001_ha, FAIL, "SPC Shift Bar4 to 0x%x failed\n",
++			   SPC_TOP_LEVEL_ADDR_BASE);
+ 		return -1;
+ 	}
+ 	regVal = pm8001_cr32(pm8001_ha, 2, SPC_REG_RESET);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("Top Register before resetting IOP/AAP1"
+-		":= 0x%x\n", regVal));
++	pm8001_dbg(pm8001_ha, INIT, "Top Register before resetting IOP/AAP1:= 0x%x\n",
++		   regVal);
+ 	regVal &= ~(SPC_REG_RESET_PCS_IOP_SS | SPC_REG_RESET_PCS_AAP1_SS);
+ 	pm8001_cw32(pm8001_ha, 2, SPC_REG_RESET, regVal);
+ 
+ 	/* step 7: Reset the BDMA/OSSP */
+ 	regVal = pm8001_cr32(pm8001_ha, 2, SPC_REG_RESET);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("Top Register before resetting BDMA/OSSP"
+-		": = 0x%x\n", regVal));
++	pm8001_dbg(pm8001_ha, INIT, "Top Register before resetting BDMA/OSSP: = 0x%x\n",
++		   regVal);
+ 	regVal &= ~(SPC_REG_RESET_BDMA_CORE | SPC_REG_RESET_OSSP);
+ 	pm8001_cw32(pm8001_ha, 2, SPC_REG_RESET, regVal);
+ 
+@@ -1002,9 +978,9 @@ pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 
+ 	/* step 9: bring the BDMA and OSSP out of reset */
+ 	regVal = pm8001_cr32(pm8001_ha, 2, SPC_REG_RESET);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("Top Register before bringing up BDMA/OSSP"
+-		":= 0x%x\n", regVal));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "Top Register before bringing up BDMA/OSSP:= 0x%x\n",
++		   regVal);
+ 	regVal |= (SPC_REG_RESET_BDMA_CORE | SPC_REG_RESET_OSSP);
+ 	pm8001_cw32(pm8001_ha, 2, SPC_REG_RESET, regVal);
+ 
+@@ -1015,14 +991,13 @@ pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 	/* map 0x0700000 to BAR4(0x20), BAR2(win) */
+ 	if (-1 == pm8001_bar4_shift(pm8001_ha, GSM_ADDR_BASE)) {
+ 		spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("SPC Shift Bar4 to 0x%x failed\n",
+-			GSM_ADDR_BASE));
++		pm8001_dbg(pm8001_ha, FAIL, "SPC Shift Bar4 to 0x%x failed\n",
++			   GSM_ADDR_BASE);
+ 		return -1;
+ 	}
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x0 (0x00007b88)-GSM Configuration and "
+-		"Reset = 0x%x\n", pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET)));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "GSM 0x0 (0x00007b88)-GSM Configuration and Reset = 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET));
+ 	regVal = pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET);
+ 	/* Put those bits to high */
+ 	/* GSM XCBI offset = 0x70 0000
+@@ -1034,44 +1009,37 @@ pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 	*/
+ 	regVal |= (GSM_CONFIG_RESET_VALUE);
+ 	pm8001_cw32(pm8001_ha, 2, GSM_CONFIG_RESET, regVal);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM (0x00004088 ==> 0x00007b88) - GSM"
+-		" Configuration and Reset is set to = 0x%x\n",
+-		pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET)));
++	pm8001_dbg(pm8001_ha, INIT, "GSM (0x00004088 ==> 0x00007b88) - GSM Configuration and Reset is set to = 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET));
+ 
+ 	/* step 12: Restore GSM - Read Address Parity Check */
+ 	regVal = pm8001_cr32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK);
+ 	/* just for debugging */
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x700038 - Read Address Parity Check Enable"
+-		" = 0x%x\n", regVal));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "GSM 0x700038 - Read Address Parity Check Enable = 0x%x\n",
++		   regVal);
+ 	pm8001_cw32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK, regVal1);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x700038 - Read Address Parity"
+-		" Check Enable is set to = 0x%x\n",
+-		pm8001_cr32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK)));
++	pm8001_dbg(pm8001_ha, INIT, "GSM 0x700038 - Read Address Parity Check Enable is set to = 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK));
+ 	/* Restore GSM - Write Address Parity Check */
+ 	regVal = pm8001_cr32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK);
+ 	pm8001_cw32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK, regVal2);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x700040 - Write Address Parity Check"
+-		" Enable is set to = 0x%x\n",
+-		pm8001_cr32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK)));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "GSM 0x700040 - Write Address Parity Check Enable is set to = 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK));
+ 	/* Restore GSM - Write Data Parity Check */
+ 	regVal = pm8001_cr32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK);
+ 	pm8001_cw32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK, regVal3);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("GSM 0x700048 - Write Data Parity Check Enable"
+-		"is set to = 0x%x\n",
+-		pm8001_cr32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK)));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "GSM 0x700048 - Write Data Parity Check Enableis set to = 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK));
+ 
+ 	/* step 13: bring the IOP and AAP1 out of reset */
+ 	/* map 0x00000 to BAR4(0x20), BAR2(win) */
+ 	if (-1 == pm8001_bar4_shift(pm8001_ha, SPC_TOP_LEVEL_ADDR_BASE)) {
+ 		spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("Shift Bar4 to 0x%x failed\n",
+-			SPC_TOP_LEVEL_ADDR_BASE));
++		pm8001_dbg(pm8001_ha, FAIL, "Shift Bar4 to 0x%x failed\n",
++			   SPC_TOP_LEVEL_ADDR_BASE);
+ 		return -1;
+ 	}
+ 	regVal = pm8001_cr32(pm8001_ha, 2, SPC_REG_RESET);
+@@ -1094,22 +1062,20 @@ pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 		if (!max_wait_count) {
+ 			regVal = pm8001_cr32(pm8001_ha, 0,
+ 				MSGU_SCRATCH_PAD_1);
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("TIMEOUT : ToggleVal 0x%x,"
+-				"MSGU_SCRATCH_PAD1 = 0x%x\n",
+-				toggleVal, regVal));
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("SCRATCH_PAD0 value = 0x%x\n",
+-				pm8001_cr32(pm8001_ha, 0,
+-				MSGU_SCRATCH_PAD_0)));
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("SCRATCH_PAD2 value = 0x%x\n",
+-				pm8001_cr32(pm8001_ha, 0,
+-				MSGU_SCRATCH_PAD_2)));
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("SCRATCH_PAD3 value = 0x%x\n",
+-				pm8001_cr32(pm8001_ha, 0,
+-				MSGU_SCRATCH_PAD_3)));
++			pm8001_dbg(pm8001_ha, FAIL, "TIMEOUT : ToggleVal 0x%x,MSGU_SCRATCH_PAD1 = 0x%x\n",
++				   toggleVal, regVal);
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "SCRATCH_PAD0 value = 0x%x\n",
++				   pm8001_cr32(pm8001_ha, 0,
++					       MSGU_SCRATCH_PAD_0));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "SCRATCH_PAD2 value = 0x%x\n",
++				   pm8001_cr32(pm8001_ha, 0,
++					       MSGU_SCRATCH_PAD_2));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "SCRATCH_PAD3 value = 0x%x\n",
++				   pm8001_cr32(pm8001_ha, 0,
++					       MSGU_SCRATCH_PAD_3));
+ 			spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+ 			return -1;
+ 		}
+@@ -1124,22 +1090,22 @@ pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 		if (check_fw_ready(pm8001_ha) == -1) {
+ 			regVal = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1);
+ 			/* return error if MPI Configuration Table not ready */
+-			PM8001_INIT_DBG(pm8001_ha,
+-				pm8001_printk("FW not ready SCRATCH_PAD1"
+-				" = 0x%x\n", regVal));
++			pm8001_dbg(pm8001_ha, INIT,
++				   "FW not ready SCRATCH_PAD1 = 0x%x\n",
++				   regVal);
+ 			regVal = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_2);
+ 			/* return error if MPI Configuration Table not ready */
+-			PM8001_INIT_DBG(pm8001_ha,
+-				pm8001_printk("FW not ready SCRATCH_PAD2"
+-				" = 0x%x\n", regVal));
+-			PM8001_INIT_DBG(pm8001_ha,
+-				pm8001_printk("SCRATCH_PAD0 value = 0x%x\n",
+-				pm8001_cr32(pm8001_ha, 0,
+-				MSGU_SCRATCH_PAD_0)));
+-			PM8001_INIT_DBG(pm8001_ha,
+-				pm8001_printk("SCRATCH_PAD3 value = 0x%x\n",
+-				pm8001_cr32(pm8001_ha, 0,
+-				MSGU_SCRATCH_PAD_3)));
++			pm8001_dbg(pm8001_ha, INIT,
++				   "FW not ready SCRATCH_PAD2 = 0x%x\n",
++				   regVal);
++			pm8001_dbg(pm8001_ha, INIT,
++				   "SCRATCH_PAD0 value = 0x%x\n",
++				   pm8001_cr32(pm8001_ha, 0,
++					       MSGU_SCRATCH_PAD_0));
++			pm8001_dbg(pm8001_ha, INIT,
++				   "SCRATCH_PAD3 value = 0x%x\n",
++				   pm8001_cr32(pm8001_ha, 0,
++					       MSGU_SCRATCH_PAD_3));
+ 			spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+ 			return -1;
+ 		}
+@@ -1147,8 +1113,7 @@ pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 	pm8001_bar4_shift(pm8001_ha, 0);
+ 	spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+ 
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("SPC soft reset Complete\n"));
++	pm8001_dbg(pm8001_ha, INIT, "SPC soft reset Complete\n");
+ 	return 0;
+ }
+ 
+@@ -1156,8 +1121,7 @@ static void pm8001_hw_chip_rst(struct pm8001_hba_info *pm8001_ha)
+ {
+ 	u32 i;
+ 	u32 regVal;
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("chip reset start\n"));
++	pm8001_dbg(pm8001_ha, INIT, "chip reset start\n");
+ 
+ 	/* do SPC chip reset. */
+ 	regVal = pm8001_cr32(pm8001_ha, 1, SPC_REG_RESET);
+@@ -1181,8 +1145,7 @@ static void pm8001_hw_chip_rst(struct pm8001_hba_info *pm8001_ha)
+ 		mdelay(1);
+ 	} while ((--i) != 0);
+ 
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("chip reset finished\n"));
++	pm8001_dbg(pm8001_ha, INIT, "chip reset finished\n");
+ }
+ 
+ /**
+@@ -1356,12 +1319,18 @@ int pm8001_mpi_build_cmd(struct pm8001_hba_info *pm8001_ha,
+ {
+ 	u32 Header = 0, hpriority = 0, bc = 1, category = 0x02;
+ 	void *pMessage;
+-
+-	if (pm8001_mpi_msg_free_get(circularQ, pm8001_ha->iomb_size,
+-		&pMessage) < 0) {
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("No free mpi buffer\n"));
+-		return -ENOMEM;
++	unsigned long flags;
++	int q_index = circularQ - pm8001_ha->inbnd_q_tbl;
++	int rv = -1;
++
++	WARN_ON(q_index >= PM8001_MAX_INB_NUM);
++	spin_lock_irqsave(&circularQ->iq_lock, flags);
++	rv = pm8001_mpi_msg_free_get(circularQ, pm8001_ha->iomb_size,
++			&pMessage);
++	if (rv < 0) {
++		pm8001_dbg(pm8001_ha, IO, "No free mpi buffer\n");
++		rv = -ENOMEM;
++		goto done;
+ 	}
+ 
+ 	if (nb > (pm8001_ha->iomb_size - sizeof(struct mpi_msg_hdr)))
+@@ -1380,11 +1349,13 @@ int pm8001_mpi_build_cmd(struct pm8001_hba_info *pm8001_ha,
+ 	/*Update the PI to the firmware*/
+ 	pm8001_cw32(pm8001_ha, circularQ->pi_pci_bar,
+ 		circularQ->pi_offset, circularQ->producer_idx);
+-	PM8001_DEVIO_DBG(pm8001_ha,
+-		pm8001_printk("INB Q %x OPCODE:%x , UPDATED PI=%d CI=%d\n",
+-			responseQueue, opCode, circularQ->producer_idx,
+-			circularQ->consumer_index));
+-	return 0;
++	pm8001_dbg(pm8001_ha, DEVIO,
++		   "INB Q %x OPCODE:%x , UPDATED PI=%d CI=%d\n",
++		   responseQueue, opCode, circularQ->producer_idx,
++		   circularQ->consumer_index);
++done:
++	spin_unlock_irqrestore(&circularQ->iq_lock, flags);
++	return rv;
+ }
+ 
+ u32 pm8001_mpi_msg_free_set(struct pm8001_hba_info *pm8001_ha, void *pMsg,
+@@ -1398,17 +1369,17 @@ u32 pm8001_mpi_msg_free_set(struct pm8001_hba_info *pm8001_ha, void *pMsg,
+ 	pOutBoundMsgHeader = (struct mpi_msg_hdr *)(circularQ->base_virt +
+ 				circularQ->consumer_idx * pm8001_ha->iomb_size);
+ 	if (pOutBoundMsgHeader != msgHeader) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("consumer_idx = %d msgHeader = %p\n",
+-			circularQ->consumer_idx, msgHeader));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "consumer_idx = %d msgHeader = %p\n",
++			   circularQ->consumer_idx, msgHeader);
+ 
+ 		/* Update the producer index from SPC */
+ 		producer_index = pm8001_read_32(circularQ->pi_virt);
+ 		circularQ->producer_index = cpu_to_le32(producer_index);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("consumer_idx = %d producer_index = %d"
+-			"msgHeader = %p\n", circularQ->consumer_idx,
+-			circularQ->producer_index, msgHeader));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "consumer_idx = %d producer_index = %dmsgHeader = %p\n",
++			   circularQ->consumer_idx,
++			   circularQ->producer_index, msgHeader);
+ 		return 0;
+ 	}
+ 	/* free the circular queue buffer elements associated with the message*/
+@@ -1420,9 +1391,8 @@ u32 pm8001_mpi_msg_free_set(struct pm8001_hba_info *pm8001_ha, void *pMsg,
+ 	/* Update the producer index from SPC*/
+ 	producer_index = pm8001_read_32(circularQ->pi_virt);
+ 	circularQ->producer_index = cpu_to_le32(producer_index);
+-	PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk(" CI=%d PI=%d\n", circularQ->consumer_idx,
+-		circularQ->producer_index));
++	pm8001_dbg(pm8001_ha, IO, " CI=%d PI=%d\n",
++		   circularQ->consumer_idx, circularQ->producer_index);
+ 	return 0;
+ }
+ 
+@@ -1452,10 +1422,10 @@ u32 pm8001_mpi_msg_consume(struct pm8001_hba_info *pm8001_ha,
+ 			/* read header */
+ 			header_tmp = pm8001_read_32(msgHeader);
+ 			msgHeader_tmp = cpu_to_le32(header_tmp);
+-			PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk(
+-				"outbound opcode msgheader:%x ci=%d pi=%d\n",
+-				msgHeader_tmp, circularQ->consumer_idx,
+-				circularQ->producer_index));
++			pm8001_dbg(pm8001_ha, DEVIO,
++				   "outbound opcode msgheader:%x ci=%d pi=%d\n",
++				   msgHeader_tmp, circularQ->consumer_idx,
++				   circularQ->producer_index);
+ 			if (0 != (le32_to_cpu(msgHeader_tmp) & 0x80000000)) {
+ 				if (OPC_OUB_SKIP_ENTRY !=
+ 					(le32_to_cpu(msgHeader_tmp) & 0xfff)) {
+@@ -1464,12 +1434,11 @@ u32 pm8001_mpi_msg_consume(struct pm8001_hba_info *pm8001_ha,
+ 						sizeof(struct mpi_msg_hdr);
+ 					*pBC = (u8)((le32_to_cpu(msgHeader_tmp)
+ 						>> 24) & 0x1f);
+-					PM8001_IO_DBG(pm8001_ha,
+-						pm8001_printk(": CI=%d PI=%d "
+-						"msgHeader=%x\n",
+-						circularQ->consumer_idx,
+-						circularQ->producer_index,
+-						msgHeader_tmp));
++					pm8001_dbg(pm8001_ha, IO,
++						   ": CI=%d PI=%d msgHeader=%x\n",
++						   circularQ->consumer_idx,
++						   circularQ->producer_index,
++						   msgHeader_tmp);
+ 					return MPI_IO_STATUS_SUCCESS;
+ 				} else {
+ 					circularQ->consumer_idx =
+@@ -1578,17 +1547,15 @@ void pm8001_work_fn(struct work_struct *work)
+ 		ts->stat = SAS_QUEUE_FULL;
+ 		pm8001_dev = ccb->device;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		spin_lock_irqsave(&t->task_state_lock, flags1);
+ 		t->task_state_flags &= ~SAS_TASK_STATE_PENDING;
+ 		t->task_state_flags &= ~SAS_TASK_AT_INITIATOR;
+ 		t->task_state_flags |= SAS_TASK_STATE_DONE;
+ 		if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+ 			spin_unlock_irqrestore(&t->task_state_lock, flags1);
+-			PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("task 0x%p"
+-				" done with event 0x%x resp 0x%x stat 0x%x but"
+-				" aborted by upper layer!\n",
+-				t, pw->handler, ts->resp, ts->stat));
++			pm8001_dbg(pm8001_ha, FAIL, "task 0x%p done with event 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
++				   t, pw->handler, ts->resp, ts->stat);
+ 			pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 			spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+ 		} else {
+@@ -1608,26 +1575,16 @@ void pm8001_work_fn(struct work_struct *work)
+ 		unsigned long flags, flags1;
+ 		int i, ret = 0;
+ 
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_OPEN_RETRY_TIMEOUT\n");
+ 
+ 		ret = pm8001_query_task(t);
+ 
+-		PM8001_IO_DBG(pm8001_ha,
+-			switch (ret) {
+-			case TMF_RESP_FUNC_SUCC:
+-				pm8001_printk("...Task on lu\n");
+-				break;
+-
+-			case TMF_RESP_FUNC_COMPLETE:
+-				pm8001_printk("...Task NOT on lu\n");
+-				break;
+-
+-			default:
+-				PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk(
+-					"...query task failed!!!\n"));
+-				break;
+-			});
++		if (ret == TMF_RESP_FUNC_SUCC)
++			pm8001_dbg(pm8001_ha, IO, "...Task on lu\n");
++		else if (ret == TMF_RESP_FUNC_COMPLETE)
++			pm8001_dbg(pm8001_ha, IO, "...Task NOT on lu\n");
++		else
++			pm8001_dbg(pm8001_ha, DEVIO, "...query task failed!!!\n");
+ 
+ 		spin_lock_irqsave(&pm8001_ha->lock, flags);
+ 
+@@ -1672,8 +1629,7 @@ void pm8001_work_fn(struct work_struct *work)
+ 				break;
+ 			default: /* device misbehavior */
+ 				ret = TMF_RESP_FUNC_FAILED;
+-				PM8001_IO_DBG(pm8001_ha,
+-					pm8001_printk("...Reset phy\n"));
++				pm8001_dbg(pm8001_ha, IO, "...Reset phy\n");
+ 				pm8001_I_T_nexus_reset(dev);
+ 				break;
+ 			}
+@@ -1687,15 +1643,14 @@ void pm8001_work_fn(struct work_struct *work)
+ 		default: /* device misbehavior */
+ 			spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+ 			ret = TMF_RESP_FUNC_FAILED;
+-			PM8001_IO_DBG(pm8001_ha,
+-				pm8001_printk("...Reset phy\n"));
++			pm8001_dbg(pm8001_ha, IO, "...Reset phy\n");
+ 			pm8001_I_T_nexus_reset(dev);
+ 		}
+ 
+ 		if (ret == TMF_RESP_FUNC_FAILED)
+ 			t = NULL;
+ 		pm8001_open_reject_retry(pm8001_ha, t, pm8001_dev);
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("...Complete\n"));
++		pm8001_dbg(pm8001_ha, IO, "...Complete\n");
+ 	}	break;
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS:
+ 		dev = pm8001_dev->sas_device;
+@@ -1749,15 +1704,14 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 	int ret;
+ 
+ 	if (!pm8001_ha_dev) {
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("dev is null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "dev is null\n");
+ 		return;
+ 	}
+ 
+ 	task = sas_alloc_slow_task(GFP_ATOMIC);
+ 
+ 	if (!task) {
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("cannot "
+-						"allocate task\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "cannot allocate task\n");
+ 		return;
+ 	}
+ 
+@@ -1802,8 +1756,7 @@ static void pm8001_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 	task = sas_alloc_slow_task(GFP_ATOMIC);
+ 
+ 	if (!task) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("cannot allocate task !!!\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "cannot allocate task !!!\n");
+ 		return;
+ 	}
+ 	task->task_done = pm8001_task_done;
+@@ -1811,8 +1764,7 @@ static void pm8001_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 	res = pm8001_tag_alloc(pm8001_ha, &ccb_tag);
+ 	if (res) {
+ 		sas_free_task(task);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("cannot allocate tag !!!\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "cannot allocate tag !!!\n");
+ 		return;
+ 	}
+ 
+@@ -1823,8 +1775,8 @@ static void pm8001_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 	if (!dev) {
+ 		sas_free_task(task);
+ 		pm8001_tag_free(pm8001_ha, ccb_tag);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("Domain device cannot be allocated\n"));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "Domain device cannot be allocated\n");
+ 		return;
+ 	}
+ 	task->dev = dev;
+@@ -1901,27 +1853,25 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	t = ccb->task;
+ 
+ 	if (status && status != IO_UNDERFLOW)
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("sas IO status 0x%x\n", status));
++		pm8001_dbg(pm8001_ha, FAIL, "sas IO status 0x%x\n", status);
+ 	if (unlikely(!t || !t->lldd_task || !t->dev))
+ 		return;
+ 	ts = &t->task_status;
+ 	/* Print sas address of IO failed device */
+ 	if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) &&
+ 		(status != IO_UNDERFLOW))
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("SAS Address of IO Failure Drive:"
+-			"%016llx", SAS_ADDR(t->dev->sas_addr)));
++		pm8001_dbg(pm8001_ha, FAIL, "SAS Address of IO Failure Drive:%016llx\n",
++			   SAS_ADDR(t->dev->sas_addr));
+ 
+ 	if (status)
+-		PM8001_IOERR_DBG(pm8001_ha, pm8001_printk(
+-			"status:0x%x, tag:0x%x, task:0x%p\n",
+-			status, tag, t));
++		pm8001_dbg(pm8001_ha, IOERR,
++			   "status:0x%x, tag:0x%x, task:0x%p\n",
++			   status, tag, t);
+ 
+ 	switch (status) {
+ 	case IO_SUCCESS:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_SUCCESS"
+-			",param = %d\n", param));
++		pm8001_dbg(pm8001_ha, IO, "IO_SUCCESS,param = %d\n",
++			   param);
+ 		if (param == 0) {
+ 			ts->resp = SAS_TASK_COMPLETE;
+ 			ts->stat = SAM_STAT_GOOD;
+@@ -1933,69 +1883,63 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 			sas_ssp_task_response(pm8001_ha->dev, t, iu);
+ 		}
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_ABORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_ABORTED IOMB Tag\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_ABORTED IOMB Tag\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_ABORTED_TASK;
+ 		break;
+ 	case IO_UNDERFLOW:
+ 		/* SSP Completion with error */
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_UNDERFLOW"
+-			",param = %d\n", param));
++		pm8001_dbg(pm8001_ha, IO, "IO_UNDERFLOW,param = %d\n",
++			   param);
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_UNDERRUN;
+ 		ts->residual = param;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_NO_DEVICE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_NO_DEVICE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_NO_DEVICE\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_PHY_DOWN;
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		/* Force the midlayer to retry */
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_XFER_ERROR_PHY_NOT_READY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PHY_NOT_READY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_EPROTO;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_ZONE_VIOLATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+@@ -2005,68 +1949,59 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 				IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BAD_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_BAD_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_BAD_DEST;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_CONNECTION_RATE_"
+-			"NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_CONN_RATE;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_WRONG_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_WRONG_DEST;
+ 		break;
+ 	case IO_XFER_ERROR_NAK_RECEIVED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_NAK_RECEIVED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_NAK_RECEIVED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_XFER_ERROR_ACK_NAK_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_ACK_NAK_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_ACK_NAK_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_NAK_R_ERR;
+ 		break;
+ 	case IO_XFER_ERROR_DMA:
+-		PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("IO_XFER_ERROR_DMA\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_DMA\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		break;
+ 	case IO_XFER_OPEN_RETRY_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_OPEN_RETRY_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_XFER_ERROR_OFFSET_MISMATCH:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_OFFSET_MISMATCH\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_OFFSET_MISMATCH\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		break;
+ 	case IO_PORT_IN_RESET:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_PORT_IN_RESET\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_PORT_IN_RESET\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		break;
+ 	case IO_DS_NON_OPERATIONAL:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_NON_OPERATIONAL\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_NON_OPERATIONAL\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		if (!t->uldd_task)
+@@ -2075,51 +2010,44 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 				IO_DS_NON_OPERATIONAL);
+ 		break;
+ 	case IO_DS_IN_RECOVERY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_IN_RECOVERY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_IN_RECOVERY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		break;
+ 	case IO_TM_TAG_NOT_FOUND:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_TM_TAG_NOT_FOUND\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_TM_TAG_NOT_FOUND\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		break;
+ 	case IO_SSP_EXT_IU_ZERO_LEN_ERROR:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_SSP_EXT_IU_ZERO_LEN_ERROR\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_SSP_EXT_IU_ZERO_LEN_ERROR\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown status 0x%x\n", status));
++		pm8001_dbg(pm8001_ha, DEVIO, "Unknown status 0x%x\n", status);
+ 		/* not allowed case. Therefore, return failed status */
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		break;
+ 	}
+-	PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("scsi_status = %x\n ",
+-		psspPayload->ssp_resp_iu.status));
++	pm8001_dbg(pm8001_ha, IO, "scsi_status = %x\n",
++		   psspPayload->ssp_resp_iu.status);
+ 	spin_lock_irqsave(&t->task_state_lock, flags);
+ 	t->task_state_flags &= ~SAS_TASK_STATE_PENDING;
+ 	t->task_state_flags &= ~SAS_TASK_AT_INITIATOR;
+ 	t->task_state_flags |= SAS_TASK_STATE_DONE;
+ 	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("task 0x%p done with"
+-			" io_status 0x%x resp 0x%x "
+-			"stat 0x%x but aborted by upper layer!\n",
+-			t, status, ts->resp, ts->stat));
++		pm8001_dbg(pm8001_ha, FAIL, "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
++			   t, status, ts->resp, ts->stat);
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+@@ -2148,60 +2076,52 @@ static void mpi_ssp_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	t = ccb->task;
+ 	pm8001_dev = ccb->device;
+ 	if (event)
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("sas IO status 0x%x\n", event));
++		pm8001_dbg(pm8001_ha, FAIL, "sas IO status 0x%x\n", event);
+ 	if (unlikely(!t || !t->lldd_task || !t->dev))
+ 		return;
+ 	ts = &t->task_status;
+-	PM8001_DEVIO_DBG(pm8001_ha,
+-		pm8001_printk("port_id = %x,device_id = %x\n",
+-		port_id, dev_id));
++	pm8001_dbg(pm8001_ha, DEVIO, "port_id = %x,device_id = %x\n",
++		   port_id, dev_id);
+ 	switch (event) {
+ 	case IO_OVERFLOW:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_UNDERFLOW\n");)
++		pm8001_dbg(pm8001_ha, IO, "IO_UNDERFLOW\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		ts->residual = 0;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+ 		pm8001_handle_event(pm8001_ha, t, IO_XFER_ERROR_BREAK);
+ 		return;
+ 	case IO_XFER_ERROR_PHY_NOT_READY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PHY_NOT_READY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_PROTOCOL_NOT"
+-			"_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_EPROTO;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_ZONE_VIOLATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+@@ -2211,88 +2131,78 @@ static void mpi_ssp_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 				IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BAD_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_BAD_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_BAD_DEST;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_CONNECTION_RATE_"
+-			"NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_CONN_RATE;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_WRONG_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-		       pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_WRONG_DEST;
+ 		break;
+ 	case IO_XFER_ERROR_NAK_RECEIVED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_NAK_RECEIVED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_NAK_RECEIVED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_XFER_ERROR_ACK_NAK_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_ACK_NAK_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_ACK_NAK_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_NAK_R_ERR;
+ 		break;
+ 	case IO_XFER_OPEN_RETRY_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_OPEN_RETRY_TIMEOUT\n");
+ 		pm8001_handle_event(pm8001_ha, t, IO_XFER_OPEN_RETRY_TIMEOUT);
+ 		return;
+ 	case IO_XFER_ERROR_UNEXPECTED_PHASE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_UNEXPECTED_PHASE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_UNEXPECTED_PHASE\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_ERROR_XFER_RDY_OVERRUN:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_XFER_RDY_OVERRUN\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_XFER_RDY_OVERRUN\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-		       pm8001_printk("IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_ERROR_CMD_ISSUE_ACK_NAK_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("IO_XFER_ERROR_CMD_ISSUE_ACK_NAK_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_XFER_ERROR_CMD_ISSUE_ACK_NAK_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_ERROR_OFFSET_MISMATCH:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_OFFSET_MISMATCH\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_OFFSET_MISMATCH\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_ERROR_XFER_ZERO_DATA_LEN:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_XFER_ZERO_DATA_LEN\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_XFER_ERROR_XFER_ZERO_DATA_LEN\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_CMD_FRAME_ISSUED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("  IO_XFER_CMD_FRAME_ISSUED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_CMD_FRAME_ISSUED\n");
+ 		return;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown status 0x%x\n", event));
++		pm8001_dbg(pm8001_ha, DEVIO, "Unknown status 0x%x\n", event);
+ 		/* not allowed case. Therefore, return failed status */
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+@@ -2304,10 +2214,8 @@ static void mpi_ssp_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	t->task_state_flags |= SAS_TASK_STATE_DONE;
+ 	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("task 0x%p done with"
+-			" event 0x%x resp 0x%x "
+-			"stat 0x%x but aborted by upper layer!\n",
+-			t, event, ts->resp, ts->stat));
++		pm8001_dbg(pm8001_ha, FAIL, "task 0x%p done with event 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
++			   t, event, ts->resp, ts->stat);
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+@@ -2343,8 +2251,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	tag = le32_to_cpu(psataPayload->tag);
+ 
+ 	if (!tag) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("tag null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "tag null\n");
+ 		return;
+ 	}
+ 	ccb = &pm8001_ha->ccb_info[tag];
+@@ -2353,8 +2260,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		t = ccb->task;
+ 		pm8001_dev = ccb->device;
+ 	} else {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("ccb null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "ccb null\n");
+ 		return;
+ 	}
+ 
+@@ -2362,29 +2268,26 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		if (t->dev && (t->dev->lldd_dev))
+ 			pm8001_dev = t->dev->lldd_dev;
+ 	} else {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("task null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "task null\n");
+ 		return;
+ 	}
+ 
+ 	if ((pm8001_dev && !(pm8001_dev->id & NCQ_READ_LOG_FLAG))
+ 		&& unlikely(!t || !t->lldd_task || !t->dev)) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("task or dev null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "task or dev null\n");
+ 		return;
+ 	}
+ 
+ 	ts = &t->task_status;
+ 	if (!ts) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("ts null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "ts null\n");
+ 		return;
+ 	}
+ 
+ 	if (status)
+-		PM8001_IOERR_DBG(pm8001_ha, pm8001_printk(
+-			"status:0x%x, tag:0x%x, task::0x%p\n",
+-			status, tag, t));
++		pm8001_dbg(pm8001_ha, IOERR,
++			   "status:0x%x, tag:0x%x, task::0x%p\n",
++			   status, tag, t);
+ 
+ 	/* Print sas address of IO failed device */
+ 	if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) &&
+@@ -2416,19 +2319,19 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 						& 0xff000000)) +
+ 						pm8001_dev->attached_phy +
+ 						0x10);
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("SAS Address of IO Failure Drive:"
+-				"%08x%08x", temp_sata_addr_hi,
+-					temp_sata_addr_low));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "SAS Address of IO Failure Drive:%08x%08x\n",
++				   temp_sata_addr_hi,
++				   temp_sata_addr_low);
+ 		} else {
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("SAS Address of IO Failure Drive:"
+-				"%016llx", SAS_ADDR(t->dev->sas_addr)));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "SAS Address of IO Failure Drive:%016llx\n",
++				   SAS_ADDR(t->dev->sas_addr));
+ 		}
+ 	}
+ 	switch (status) {
+ 	case IO_SUCCESS:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_SUCCESS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_SUCCESS\n");
+ 		if (param == 0) {
+ 			ts->resp = SAS_TASK_COMPLETE;
+ 			ts->stat = SAM_STAT_GOOD;
+@@ -2450,99 +2353,102 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 			ts->resp = SAS_TASK_COMPLETE;
+ 			ts->stat = SAS_PROTO_RESPONSE;
+ 			ts->residual = param;
+-			PM8001_IO_DBG(pm8001_ha,
+-				pm8001_printk("SAS_PROTO_RESPONSE len = %d\n",
+-				param));
++			pm8001_dbg(pm8001_ha, IO,
++				   "SAS_PROTO_RESPONSE len = %d\n",
++				   param);
+ 			sata_resp = &psataPayload->sata_resp[0];
+ 			resp = (struct ata_task_resp *)ts->buf;
+ 			if (t->ata_task.dma_xfer == 0 &&
+ 			    t->data_dir == DMA_FROM_DEVICE) {
+ 				len = sizeof(struct pio_setup_fis);
+-				PM8001_IO_DBG(pm8001_ha,
+-				pm8001_printk("PIO read len = %d\n", len));
++				pm8001_dbg(pm8001_ha, IO,
++					   "PIO read len = %d\n", len);
+ 			} else if (t->ata_task.use_ncq) {
+ 				len = sizeof(struct set_dev_bits_fis);
+-				PM8001_IO_DBG(pm8001_ha,
+-					pm8001_printk("FPDMA len = %d\n", len));
++				pm8001_dbg(pm8001_ha, IO, "FPDMA len = %d\n",
++					   len);
+ 			} else {
+ 				len = sizeof(struct dev_to_host_fis);
+-				PM8001_IO_DBG(pm8001_ha,
+-				pm8001_printk("other len = %d\n", len));
++				pm8001_dbg(pm8001_ha, IO, "other len = %d\n",
++					   len);
+ 			}
+ 			if (SAS_STATUS_BUF_SIZE >= sizeof(*resp)) {
+ 				resp->frame_len = len;
+ 				memcpy(&resp->ending_fis[0], sata_resp, len);
+ 				ts->buf_valid_size = sizeof(*resp);
+ 			} else
+-				PM8001_IO_DBG(pm8001_ha,
+-					pm8001_printk("response to large\n"));
++				pm8001_dbg(pm8001_ha, IO,
++					   "response too large\n");
+ 		}
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_ABORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_ABORTED IOMB Tag\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_ABORTED IOMB Tag\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_ABORTED_TASK;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 		/* following cases are to do cases */
+ 	case IO_UNDERFLOW:
+ 		/* SATA Completion with error */
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_UNDERFLOW param = %d\n", param));
++		pm8001_dbg(pm8001_ha, IO, "IO_UNDERFLOW param = %d\n", param);
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_UNDERRUN;
+ 		ts->residual =  param;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_NO_DEVICE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_NO_DEVICE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_NO_DEVICE\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_PHY_DOWN;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_INTERRUPTED;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_PHY_NOT_READY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PHY_NOT_READY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_PROTOCOL_NOT"
+-			"_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_EPROTO;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_ZONE_VIOLATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_CONT0;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		if (!t->uldd_task) {
+@@ -2556,8 +2462,8 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		}
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BAD_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_BAD_DESTINATION\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_BAD_DEST;
+@@ -2572,17 +2478,15 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		}
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_CONNECTION_RATE_"
+-			"NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_CONN_RATE;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_STP_RESOURCES"
+-			"_BUSY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		if (!t->uldd_task) {
+@@ -2596,57 +2500,65 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		}
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_WRONG_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-		       pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_WRONG_DEST;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_NAK_RECEIVED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_NAK_RECEIVED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_NAK_RECEIVED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_NAK_R_ERR;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_ACK_NAK_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_ACK_NAK_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_ACK_NAK_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_NAK_R_ERR;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_DMA:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_DMA\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_DMA\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_ABORTED_TASK;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_SATA_LINK_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_SATA_LINK_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_SATA_LINK_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_REJECTED_NCQ_MODE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_REJECTED_NCQ_MODE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_REJECTED_NCQ_MODE\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_UNDERRUN;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_OPEN_RETRY_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_OPEN_RETRY_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_PORT_IN_RESET:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_PORT_IN_RESET\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_PORT_IN_RESET\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_DS_NON_OPERATIONAL:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_NON_OPERATIONAL\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_NON_OPERATIONAL\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		if (!t->uldd_task) {
+@@ -2659,14 +2571,14 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		}
+ 		break;
+ 	case IO_DS_IN_RECOVERY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("  IO_DS_IN_RECOVERY\n"));
++		pm8001_dbg(pm8001_ha, IO, "  IO_DS_IN_RECOVERY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_DS_IN_ERROR:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_IN_ERROR\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_IN_ERROR\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		if (!t->uldd_task) {
+@@ -2679,18 +2591,21 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		}
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown status 0x%x\n", status));
++		pm8001_dbg(pm8001_ha, DEVIO, "Unknown status 0x%x\n", status);
+ 		/* not allowed case. Therefore, return failed status */
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	}
+ 	spin_lock_irqsave(&t->task_state_lock, flags);
+@@ -2699,10 +2614,9 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	t->task_state_flags |= SAS_TASK_STATE_DONE;
+ 	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("task 0x%p done with io_status 0x%x"
+-			" resp 0x%x stat 0x%x but aborted by upper layer!\n",
+-			t, status, ts->resp, ts->stat));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
++			   t, status, ts->resp, ts->stat);
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+@@ -2731,12 +2645,10 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 		t = ccb->task;
+ 		pm8001_dev = ccb->device;
+ 	} else {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("No CCB !!!. returning\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "No CCB !!!. returning\n");
+ 	}
+ 	if (event)
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("SATA EVENT 0x%x\n", event));
++		pm8001_dbg(pm8001_ha, FAIL, "SATA EVENT 0x%x\n", event);
+ 
+ 	/* Check if this is NCQ error */
+ 	if (event == IO_XFER_ERROR_ABORTED_NCQ_MODE) {
+@@ -2752,61 +2664,54 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	t = ccb->task;
+ 	pm8001_dev = ccb->device;
+ 	if (event)
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("sata IO status 0x%x\n", event));
++		pm8001_dbg(pm8001_ha, FAIL, "sata IO status 0x%x\n", event);
+ 	if (unlikely(!t || !t->lldd_task || !t->dev))
+ 		return;
+ 	ts = &t->task_status;
+-	PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk(
+-		"port_id:0x%x, device_id:0x%x, tag:0x%x, event:0x%x\n",
+-		port_id, dev_id, tag, event));
++	pm8001_dbg(pm8001_ha, DEVIO,
++		   "port_id:0x%x, device_id:0x%x, tag:0x%x, event:0x%x\n",
++		   port_id, dev_id, tag, event);
+ 	switch (event) {
+ 	case IO_OVERFLOW:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_UNDERFLOW\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_UNDERFLOW\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		ts->residual = 0;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_INTERRUPTED;
+ 		break;
+ 	case IO_XFER_ERROR_PHY_NOT_READY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PHY_NOT_READY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_PROTOCOL_NOT"
+-			"_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_EPROTO;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_ZONE_VIOLATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_CONT0;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		if (!t->uldd_task) {
+@@ -2820,94 +2725,82 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 		}
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BAD_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_BAD_DESTINATION\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_BAD_DEST;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_CONNECTION_RATE_"
+-			"NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_CONN_RATE;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_WRONG_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-		       pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_WRONG_DEST;
+ 		break;
+ 	case IO_XFER_ERROR_NAK_RECEIVED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_NAK_RECEIVED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_NAK_RECEIVED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_NAK_R_ERR;
+ 		break;
+ 	case IO_XFER_ERROR_PEER_ABORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_PEER_ABORTED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PEER_ABORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_NAK_R_ERR;
+ 		break;
+ 	case IO_XFER_ERROR_REJECTED_NCQ_MODE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_REJECTED_NCQ_MODE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_REJECTED_NCQ_MODE\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_UNDERRUN;
+ 		break;
+ 	case IO_XFER_OPEN_RETRY_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_OPEN_RETRY_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_ERROR_UNEXPECTED_PHASE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_UNEXPECTED_PHASE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_UNEXPECTED_PHASE\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_ERROR_XFER_RDY_OVERRUN:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_XFER_RDY_OVERRUN\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_XFER_RDY_OVERRUN\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-		       pm8001_printk("IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_ERROR_OFFSET_MISMATCH:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_OFFSET_MISMATCH\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_OFFSET_MISMATCH\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_ERROR_XFER_ZERO_DATA_LEN:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_XFER_ZERO_DATA_LEN\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_XFER_ERROR_XFER_ZERO_DATA_LEN\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_CMD_FRAME_ISSUED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_CMD_FRAME_ISSUED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_CMD_FRAME_ISSUED\n");
+ 		break;
+ 	case IO_XFER_PIO_SETUP_ERROR:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_PIO_SETUP_ERROR\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_PIO_SETUP_ERROR\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown status 0x%x\n", event));
++		pm8001_dbg(pm8001_ha, DEVIO, "Unknown status 0x%x\n", event);
+ 		/* not allowed case. Therefore, return failed status */
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+@@ -2919,10 +2812,9 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	t->task_state_flags |= SAS_TASK_STATE_DONE;
+ 	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("task 0x%p done with io_status 0x%x"
+-			" resp 0x%x stat 0x%x but aborted by upper layer!\n",
+-			t, event, ts->resp, ts->stat));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
++			   t, event, ts->resp, ts->stat);
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+@@ -2952,86 +2844,79 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	ts = &t->task_status;
+ 	pm8001_dev = ccb->device;
+ 	if (status) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("smp IO status 0x%x\n", status));
+-		PM8001_IOERR_DBG(pm8001_ha,
+-			pm8001_printk("status:0x%x, tag:0x%x, task:0x%p\n",
+-			status, tag, t));
++		pm8001_dbg(pm8001_ha, FAIL, "smp IO status 0x%x\n", status);
++		pm8001_dbg(pm8001_ha, IOERR,
++			   "status:0x%x, tag:0x%x, task:0x%p\n",
++			   status, tag, t);
+ 	}
+ 	if (unlikely(!t || !t->lldd_task || !t->dev))
+ 		return;
+ 
+ 	switch (status) {
+ 	case IO_SUCCESS:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_SUCCESS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_SUCCESS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAM_STAT_GOOD;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_ABORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_ABORTED IOMB\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_ABORTED IOMB\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_ABORTED_TASK;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OVERFLOW:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_UNDERFLOW\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_UNDERFLOW\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		ts->residual = 0;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_NO_DEVICE:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_NO_DEVICE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_NO_DEVICE\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_PHY_DOWN;
+ 		break;
+ 	case IO_ERROR_HW_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_ERROR_HW_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_ERROR_HW_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAM_STAT_BUSY;
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAM_STAT_BUSY;
+ 		break;
+ 	case IO_XFER_ERROR_PHY_NOT_READY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PHY_NOT_READY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAM_STAT_BUSY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_ZONE_VIOLATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_CONT0;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+@@ -3040,76 +2925,67 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 				IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BAD_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_BAD_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_BAD_DEST;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_CONNECTION_RATE_"
+-			"NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_CONN_RATE;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_WRONG_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-		       pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_WRONG_DEST;
+ 		break;
+ 	case IO_XFER_ERROR_RX_FRAME:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_RX_FRAME\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_RX_FRAME\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		break;
+ 	case IO_XFER_OPEN_RETRY_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_OPEN_RETRY_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_ERROR_INTERNAL_SMP_RESOURCE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_ERROR_INTERNAL_SMP_RESOURCE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_ERROR_INTERNAL_SMP_RESOURCE\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_QUEUE_FULL;
+ 		break;
+ 	case IO_PORT_IN_RESET:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_PORT_IN_RESET\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_PORT_IN_RESET\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_DS_NON_OPERATIONAL:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_NON_OPERATIONAL\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_NON_OPERATIONAL\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		break;
+ 	case IO_DS_IN_RECOVERY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_IN_RECOVERY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_IN_RECOVERY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown status 0x%x\n", status));
++		pm8001_dbg(pm8001_ha, DEVIO, "Unknown status 0x%x\n", status);
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		/* not allowed case. Therefore, return failed status */
+@@ -3121,10 +2997,8 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	t->task_state_flags |= SAS_TASK_STATE_DONE;
+ 	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("task 0x%p done with"
+-			" io_status 0x%x resp 0x%x "
+-			"stat 0x%x but aborted by upper layer!\n",
+-			t, status, ts->resp, ts->stat));
++		pm8001_dbg(pm8001_ha, FAIL, "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
++			   t, status, ts->resp, ts->stat);
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+@@ -3146,9 +3020,8 @@ void pm8001_mpi_set_dev_state_resp(struct pm8001_hba_info *pm8001_ha,
+ 	u32 device_id = le32_to_cpu(pPayload->device_id);
+ 	u8 pds = le32_to_cpu(pPayload->pds_nds) & PDS_BITS;
+ 	u8 nds = le32_to_cpu(pPayload->pds_nds) & NDS_BITS;
+-	PM8001_MSG_DBG(pm8001_ha, pm8001_printk("Set device id = 0x%x state "
+-		"from 0x%x to 0x%x status = 0x%x!\n",
+-		device_id, pds, nds, status));
++	pm8001_dbg(pm8001_ha, MSG, "Set device id = 0x%x state from 0x%x to 0x%x status = 0x%x!\n",
++		   device_id, pds, nds, status);
+ 	complete(pm8001_dev->setds_completion);
+ 	ccb->task = NULL;
+ 	ccb->ccb_tag = 0xFFFFFFFF;
+@@ -3163,10 +3036,9 @@ void pm8001_mpi_set_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	struct pm8001_ccb_info *ccb = &pm8001_ha->ccb_info[tag];
+ 	u32 dlen_status = le32_to_cpu(pPayload->dlen_status);
+ 	complete(pm8001_ha->nvmd_completion);
+-	PM8001_MSG_DBG(pm8001_ha, pm8001_printk("Set nvm data complete!\n"));
++	pm8001_dbg(pm8001_ha, MSG, "Set nvm data complete!\n");
+ 	if ((dlen_status & NVMD_STAT) != 0) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("Set nvm data error!\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "Set nvm data error!\n");
+ 		return;
+ 	}
+ 	ccb->task = NULL;
+@@ -3188,26 +3060,22 @@ pm8001_mpi_get_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	void *virt_addr = pm8001_ha->memoryMap.region[NVMD].virt_ptr;
+ 	fw_control_context = ccb->fw_control_context;
+ 
+-	PM8001_MSG_DBG(pm8001_ha, pm8001_printk("Get nvm data complete!\n"));
++	pm8001_dbg(pm8001_ha, MSG, "Get nvm data complete!\n");
+ 	if ((dlen_status & NVMD_STAT) != 0) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("Get nvm data error!\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "Get nvm data error!\n");
+ 		complete(pm8001_ha->nvmd_completion);
+ 		return;
+ 	}
+ 
+ 	if (ir_tds_bn_dps_das_nvm & IPMode) {
+ 		/* indirect mode - IR bit set */
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("Get NVMD success, IR=1\n"));
++		pm8001_dbg(pm8001_ha, MSG, "Get NVMD success, IR=1\n");
+ 		if ((ir_tds_bn_dps_das_nvm & NVMD_TYPE) == TWI_DEVICE) {
+ 			if (ir_tds_bn_dps_das_nvm == 0x80a80200) {
+ 				memcpy(pm8001_ha->sas_addr,
+ 				      ((u8 *)virt_addr + 4),
+ 				       SAS_ADDR_SIZE);
+-				PM8001_MSG_DBG(pm8001_ha,
+-					pm8001_printk("Get SAS address"
+-					" from VPD successfully!\n"));
++				pm8001_dbg(pm8001_ha, MSG, "Get SAS address from VPD successfully!\n");
+ 			}
+ 		} else if (((ir_tds_bn_dps_das_nvm & NVMD_TYPE) == C_SEEPROM)
+ 			|| ((ir_tds_bn_dps_das_nvm & NVMD_TYPE) == VPD_FLASH) ||
+@@ -3218,14 +3086,14 @@ pm8001_mpi_get_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 			;
+ 		} else {
+ 			/* Should not be happened*/
+-			PM8001_MSG_DBG(pm8001_ha,
+-				pm8001_printk("(IR=1)Wrong Device type 0x%x\n",
+-				ir_tds_bn_dps_das_nvm));
++			pm8001_dbg(pm8001_ha, MSG,
++				   "(IR=1)Wrong Device type 0x%x\n",
++				   ir_tds_bn_dps_das_nvm);
+ 		}
+ 	} else /* direct mode */{
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("Get NVMD success, IR=0, dataLen=%d\n",
+-			(dlen_status & NVMD_LEN) >> 24));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "Get NVMD success, IR=0, dataLen=%d\n",
++			   (dlen_status & NVMD_LEN) >> 24);
+ 	}
+ 	/* Though fw_control_context is freed below, usrAddr still needs
+ 	 * to be updated as this holds the response to the request function
+@@ -3234,10 +3102,15 @@ pm8001_mpi_get_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		pm8001_ha->memoryMap.region[NVMD].virt_ptr,
+ 		fw_control_context->len);
+ 	kfree(ccb->fw_control_context);
++	/* To avoid race condition, complete should be
++	 * called after the message is copied to
++	 * fw_control_context->usrAddr
++	 */
++	complete(pm8001_ha->nvmd_completion);
++	pm8001_dbg(pm8001_ha, MSG, "Set nvm data complete!\n");
+ 	ccb->task = NULL;
+ 	ccb->ccb_tag = 0xFFFFFFFF;
+ 	pm8001_tag_free(pm8001_ha, tag);
+-	complete(pm8001_ha->nvmd_completion);
+ }
+ 
+ int pm8001_mpi_local_phy_ctl(struct pm8001_hba_info *pm8001_ha, void *piomb)
+@@ -3250,13 +3123,13 @@ int pm8001_mpi_local_phy_ctl(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	u32 phy_op = le32_to_cpu(pPayload->phyop_phyid) & OP_BITS;
+ 	tag = le32_to_cpu(pPayload->tag);
+ 	if (status != 0) {
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("%x phy execute %x phy op failed!\n",
+-			phy_id, phy_op));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "%x phy execute %x phy op failed!\n",
++			   phy_id, phy_op);
+ 	} else {
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("%x phy execute %x phy op success!\n",
+-			phy_id, phy_op));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "%x phy execute %x phy op success!\n",
++			   phy_id, phy_op);
+ 		pm8001_ha->phy[phy_id].reset_success = true;
+ 	}
+ 	if (pm8001_ha->phy[phy_id].enable_completion) {
+@@ -3303,10 +3176,10 @@ void pm8001_bytes_dmaed(struct pm8001_hba_info *pm8001_ha, int i)
+ 	} else if (phy->phy_type & PORT_TYPE_SATA) {
+ 		/*Nothing*/
+ 	}
+-	PM8001_MSG_DBG(pm8001_ha, pm8001_printk("phy %d byte dmaded.\n", i));
++	pm8001_dbg(pm8001_ha, MSG, "phy %d byte dmaded.\n", i);
+ 
+ 	sas_phy->frame_rcvd_size = phy->frame_rcvd_size;
+-	pm8001_ha->sas->notify_port_event(sas_phy, PORTE_BYTES_DMAED);
++	sas_notify_port_event(sas_phy, PORTE_BYTES_DMAED);
+ }
+ 
+ /* Get the link rate speed  */
+@@ -3420,43 +3293,39 @@ hw_event_sas_phy_up(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	u32 npip_portstate = le32_to_cpu(pPayload->npip_portstate);
+ 	u8 portstate = (u8)(npip_portstate & 0x0000000F);
+ 	struct pm8001_port *port = &pm8001_ha->port[port_id];
+-	struct sas_ha_struct *sas_ha = pm8001_ha->sas;
+ 	struct pm8001_phy *phy = &pm8001_ha->phy[phy_id];
+ 	unsigned long flags;
+ 	u8 deviceType = pPayload->sas_identify.dev_type;
+ 	port->port_state =  portstate;
+ 	phy->phy_state = PHY_STATE_LINK_UP_SPC;
+-	PM8001_MSG_DBG(pm8001_ha,
+-		pm8001_printk("HW_EVENT_SAS_PHY_UP port id = %d, phy id = %d\n",
+-		port_id, phy_id));
++	pm8001_dbg(pm8001_ha, MSG,
++		   "HW_EVENT_SAS_PHY_UP port id = %d, phy id = %d\n",
++		   port_id, phy_id);
+ 
+ 	switch (deviceType) {
+ 	case SAS_PHY_UNUSED:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("device type no device.\n"));
++		pm8001_dbg(pm8001_ha, MSG, "device type no device.\n");
+ 		break;
+ 	case SAS_END_DEVICE:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk("end device.\n"));
++		pm8001_dbg(pm8001_ha, MSG, "end device.\n");
+ 		pm8001_chip_phy_ctl_req(pm8001_ha, phy_id,
+ 			PHY_NOTIFY_ENABLE_SPINUP);
+ 		port->port_attached = 1;
+ 		pm8001_get_lrate_mode(phy, link_rate);
+ 		break;
+ 	case SAS_EDGE_EXPANDER_DEVICE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("expander device.\n"));
++		pm8001_dbg(pm8001_ha, MSG, "expander device.\n");
+ 		port->port_attached = 1;
+ 		pm8001_get_lrate_mode(phy, link_rate);
+ 		break;
+ 	case SAS_FANOUT_EXPANDER_DEVICE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("fanout expander device.\n"));
++		pm8001_dbg(pm8001_ha, MSG, "fanout expander device.\n");
+ 		port->port_attached = 1;
+ 		pm8001_get_lrate_mode(phy, link_rate);
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("unknown device type(%x)\n", deviceType));
++		pm8001_dbg(pm8001_ha, DEVIO, "unknown device type(%x)\n",
++			   deviceType);
+ 		break;
+ 	}
+ 	phy->phy_type |= PORT_TYPE_SAS;
+@@ -3467,7 +3336,7 @@ hw_event_sas_phy_up(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	else if (phy->identify.device_type != SAS_PHY_UNUSED)
+ 		phy->identify.target_port_protocols = SAS_PROTOCOL_SMP;
+ 	phy->sas_phy.oob_mode = SAS_OOB_MODE;
+-	sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
++	sas_notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
+ 	spin_lock_irqsave(&phy->sas_phy.frame_rcvd_lock, flags);
+ 	memcpy(phy->frame_rcvd, &pPayload->sas_identify,
+ 		sizeof(struct sas_identify_frame)-4);
+@@ -3499,12 +3368,10 @@ hw_event_sata_phy_up(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	u32 npip_portstate = le32_to_cpu(pPayload->npip_portstate);
+ 	u8 portstate = (u8)(npip_portstate & 0x0000000F);
+ 	struct pm8001_port *port = &pm8001_ha->port[port_id];
+-	struct sas_ha_struct *sas_ha = pm8001_ha->sas;
+ 	struct pm8001_phy *phy = &pm8001_ha->phy[phy_id];
+ 	unsigned long flags;
+-	PM8001_DEVIO_DBG(pm8001_ha,
+-		pm8001_printk("HW_EVENT_SATA_PHY_UP port id = %d,"
+-		" phy id = %d\n", port_id, phy_id));
++	pm8001_dbg(pm8001_ha, DEVIO, "HW_EVENT_SATA_PHY_UP port id = %d, phy id = %d\n",
++		   port_id, phy_id);
+ 	port->port_state =  portstate;
+ 	phy->phy_state = PHY_STATE_LINK_UP_SPC;
+ 	port->port_attached = 1;
+@@ -3512,7 +3379,7 @@ hw_event_sata_phy_up(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	phy->phy_type |= PORT_TYPE_SATA;
+ 	phy->phy_attached = 1;
+ 	phy->sas_phy.oob_mode = SATA_OOB_MODE;
+-	sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
++	sas_notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
+ 	spin_lock_irqsave(&phy->sas_phy.frame_rcvd_lock, flags);
+ 	memcpy(phy->frame_rcvd, ((u8 *)&pPayload->sata_fis - 4),
+ 		sizeof(struct dev_to_host_fis));
+@@ -3552,37 +3419,35 @@ hw_event_phy_down(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	case PORT_VALID:
+ 		break;
+ 	case PORT_INVALID:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" PortInvalid portID %d\n", port_id));
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" Last phy Down and port invalid\n"));
++		pm8001_dbg(pm8001_ha, MSG, " PortInvalid portID %d\n",
++			   port_id);
++		pm8001_dbg(pm8001_ha, MSG,
++			   " Last phy Down and port invalid\n");
+ 		port->port_attached = 0;
+ 		pm8001_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_PHY_DOWN,
+ 			port_id, phy_id, 0, 0);
+ 		break;
+ 	case PORT_IN_RESET:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" Port In Reset portID %d\n", port_id));
++		pm8001_dbg(pm8001_ha, MSG, " Port In Reset portID %d\n",
++			   port_id);
+ 		break;
+ 	case PORT_NOT_ESTABLISHED:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" phy Down and PORT_NOT_ESTABLISHED\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   " phy Down and PORT_NOT_ESTABLISHED\n");
+ 		port->port_attached = 0;
+ 		break;
+ 	case PORT_LOSTCOMM:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" phy Down and PORT_LOSTCOMM\n"));
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" Last phy Down and port invalid\n"));
++		pm8001_dbg(pm8001_ha, MSG, " phy Down and PORT_LOSTCOMM\n");
++		pm8001_dbg(pm8001_ha, MSG,
++			   " Last phy Down and port invalid\n");
+ 		port->port_attached = 0;
+ 		pm8001_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_PHY_DOWN,
+ 			port_id, phy_id, 0, 0);
+ 		break;
+ 	default:
+ 		port->port_attached = 0;
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk(" phy Down and(default) = %x\n",
+-			portstate));
++		pm8001_dbg(pm8001_ha, DEVIO, " phy Down and(default) = %x\n",
++			   portstate);
+ 		break;
+ 
+ 	}
+@@ -3613,44 +3478,42 @@ int pm8001_mpi_reg_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	pm8001_dev = ccb->device;
+ 	status = le32_to_cpu(registerRespPayload->status);
+ 	device_id = le32_to_cpu(registerRespPayload->device_id);
+-	PM8001_MSG_DBG(pm8001_ha,
+-		pm8001_printk(" register device is status = %d\n", status));
++	pm8001_dbg(pm8001_ha, MSG, " register device is status = %d\n",
++		   status);
+ 	switch (status) {
+ 	case DEVREG_SUCCESS:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk("DEVREG_SUCCESS\n"));
++		pm8001_dbg(pm8001_ha, MSG, "DEVREG_SUCCESS\n");
+ 		pm8001_dev->device_id = device_id;
+ 		break;
+ 	case DEVREG_FAILURE_OUT_OF_RESOURCE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("DEVREG_FAILURE_OUT_OF_RESOURCE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "DEVREG_FAILURE_OUT_OF_RESOURCE\n");
+ 		break;
+ 	case DEVREG_FAILURE_DEVICE_ALREADY_REGISTERED:
+-		PM8001_MSG_DBG(pm8001_ha,
+-		   pm8001_printk("DEVREG_FAILURE_DEVICE_ALREADY_REGISTERED\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "DEVREG_FAILURE_DEVICE_ALREADY_REGISTERED\n");
+ 		break;
+ 	case DEVREG_FAILURE_INVALID_PHY_ID:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("DEVREG_FAILURE_INVALID_PHY_ID\n"));
++		pm8001_dbg(pm8001_ha, MSG, "DEVREG_FAILURE_INVALID_PHY_ID\n");
+ 		break;
+ 	case DEVREG_FAILURE_PHY_ID_ALREADY_REGISTERED:
+-		PM8001_MSG_DBG(pm8001_ha,
+-		   pm8001_printk("DEVREG_FAILURE_PHY_ID_ALREADY_REGISTERED\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "DEVREG_FAILURE_PHY_ID_ALREADY_REGISTERED\n");
+ 		break;
+ 	case DEVREG_FAILURE_PORT_ID_OUT_OF_RANGE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("DEVREG_FAILURE_PORT_ID_OUT_OF_RANGE\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "DEVREG_FAILURE_PORT_ID_OUT_OF_RANGE\n");
+ 		break;
+ 	case DEVREG_FAILURE_PORT_NOT_VALID_STATE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("DEVREG_FAILURE_PORT_NOT_VALID_STATE\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "DEVREG_FAILURE_PORT_NOT_VALID_STATE\n");
+ 		break;
+ 	case DEVREG_FAILURE_DEVICE_TYPE_NOT_VALID:
+-		PM8001_MSG_DBG(pm8001_ha,
+-		       pm8001_printk("DEVREG_FAILURE_DEVICE_TYPE_NOT_VALID\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "DEVREG_FAILURE_DEVICE_TYPE_NOT_VALID\n");
+ 		break;
+ 	default:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("DEVREG_FAILURE_DEVICE_TYPE_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "DEVREG_FAILURE_DEVICE_TYPE_NOT_SUPPORTED\n");
+ 		break;
+ 	}
+ 	complete(pm8001_dev->dcompletion);
+@@ -3670,9 +3533,9 @@ int pm8001_mpi_dereg_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	status = le32_to_cpu(registerRespPayload->status);
+ 	device_id = le32_to_cpu(registerRespPayload->device_id);
+ 	if (status != 0)
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" deregister device failed ,status = %x"
+-			", device_id = %x\n", status, device_id));
++		pm8001_dbg(pm8001_ha, MSG,
++			   " deregister device failed ,status = %x, device_id = %x\n",
++			   status, device_id);
+ 	return 0;
+ }
+ 
+@@ -3692,44 +3555,37 @@ int pm8001_mpi_fw_flash_update_resp(struct pm8001_hba_info *pm8001_ha,
+ 	status = le32_to_cpu(ppayload->status);
+ 	switch (status) {
+ 	case FLASH_UPDATE_COMPLETE_PENDING_REBOOT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-		pm8001_printk(": FLASH_UPDATE_COMPLETE_PENDING_REBOOT\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   ": FLASH_UPDATE_COMPLETE_PENDING_REBOOT\n");
+ 		break;
+ 	case FLASH_UPDATE_IN_PROGRESS:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(": FLASH_UPDATE_IN_PROGRESS\n"));
++		pm8001_dbg(pm8001_ha, MSG, ": FLASH_UPDATE_IN_PROGRESS\n");
+ 		break;
+ 	case FLASH_UPDATE_HDR_ERR:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(": FLASH_UPDATE_HDR_ERR\n"));
++		pm8001_dbg(pm8001_ha, MSG, ": FLASH_UPDATE_HDR_ERR\n");
+ 		break;
+ 	case FLASH_UPDATE_OFFSET_ERR:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(": FLASH_UPDATE_OFFSET_ERR\n"));
++		pm8001_dbg(pm8001_ha, MSG, ": FLASH_UPDATE_OFFSET_ERR\n");
+ 		break;
+ 	case FLASH_UPDATE_CRC_ERR:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(": FLASH_UPDATE_CRC_ERR\n"));
++		pm8001_dbg(pm8001_ha, MSG, ": FLASH_UPDATE_CRC_ERR\n");
+ 		break;
+ 	case FLASH_UPDATE_LENGTH_ERR:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(": FLASH_UPDATE_LENGTH_ERR\n"));
++		pm8001_dbg(pm8001_ha, MSG, ": FLASH_UPDATE_LENGTH_ERR\n");
+ 		break;
+ 	case FLASH_UPDATE_HW_ERR:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(": FLASH_UPDATE_HW_ERR\n"));
++		pm8001_dbg(pm8001_ha, MSG, ": FLASH_UPDATE_HW_ERR\n");
+ 		break;
+ 	case FLASH_UPDATE_DNLD_NOT_SUPPORTED:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(": FLASH_UPDATE_DNLD_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   ": FLASH_UPDATE_DNLD_NOT_SUPPORTED\n");
+ 		break;
+ 	case FLASH_UPDATE_DISABLED:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(": FLASH_UPDATE_DISABLED\n"));
++		pm8001_dbg(pm8001_ha, MSG, ": FLASH_UPDATE_DISABLED\n");
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("No matched status = %d\n", status));
++		pm8001_dbg(pm8001_ha, DEVIO, "No matched status = %d\n",
++			   status);
+ 		break;
+ 	}
+ 	kfree(ccb->fw_control_context);
+@@ -3747,12 +3603,11 @@ int pm8001_mpi_general_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	struct general_event_resp *pPayload =
+ 		(struct general_event_resp *)(piomb + 4);
+ 	status = le32_to_cpu(pPayload->status);
+-	PM8001_MSG_DBG(pm8001_ha,
+-		pm8001_printk(" status = 0x%x\n", status));
++	pm8001_dbg(pm8001_ha, MSG, " status = 0x%x\n", status);
+ 	for (i = 0; i < GENERAL_EVENT_PAYLOAD; i++)
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("inb_IOMB_payload[0x%x] 0x%x,\n", i,
+-			pPayload->inb_IOMB_payload[i]));
++		pm8001_dbg(pm8001_ha, MSG, "inb_IOMB_payload[0x%x] 0x%x,\n",
++			   i,
++			   pPayload->inb_IOMB_payload[i]);
+ 	return 0;
+ }
+ 
+@@ -3772,8 +3627,7 @@ int pm8001_mpi_task_abort_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	status = le32_to_cpu(pPayload->status);
+ 	tag = le32_to_cpu(pPayload->tag);
+ 	if (!tag) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk(" TAG NULL. RETURNING !!!"));
++		pm8001_dbg(pm8001_ha, FAIL, " TAG NULL. RETURNING !!!\n");
+ 		return -1;
+ 	}
+ 
+@@ -3783,23 +3637,21 @@ int pm8001_mpi_task_abort_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	pm8001_dev = ccb->device; /* retrieve device */
+ 
+ 	if (!t)	{
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk(" TASK NULL. RETURNING !!!"));
++		pm8001_dbg(pm8001_ha, FAIL, " TASK NULL. RETURNING !!!\n");
+ 		return -1;
+ 	}
+ 	ts = &t->task_status;
+ 	if (status != 0)
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("task abort failed status 0x%x ,"
+-			"tag = 0x%x, scp= 0x%x\n", status, tag, scp));
++		pm8001_dbg(pm8001_ha, FAIL, "task abort failed status 0x%x ,tag = 0x%x, scp= 0x%x\n",
++			   status, tag, scp);
+ 	switch (status) {
+ 	case IO_SUCCESS:
+-		PM8001_EH_DBG(pm8001_ha, pm8001_printk("IO_SUCCESS\n"));
++		pm8001_dbg(pm8001_ha, EH, "IO_SUCCESS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAM_STAT_GOOD;
+ 		break;
+ 	case IO_NOT_VALID:
+-		PM8001_EH_DBG(pm8001_ha, pm8001_printk("IO_NOT_VALID\n"));
++		pm8001_dbg(pm8001_ha, EH, "IO_NOT_VALID\n");
+ 		ts->resp = TMF_RESP_FUNC_FAILED;
+ 		break;
+ 	}
+@@ -3844,14 +3696,13 @@ static int mpi_hw_event(struct pm8001_hba_info *pm8001_ha, void* piomb)
+ 	struct sas_ha_struct *sas_ha = pm8001_ha->sas;
+ 	struct pm8001_phy *phy = &pm8001_ha->phy[phy_id];
+ 	struct asd_sas_phy *sas_phy = sas_ha->sas_phy[phy_id];
+-	PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk(
+-		"SPC HW event for portid:%d, phyid:%d, event:%x, status:%x\n",
+-		port_id, phy_id, eventType, status));
++	pm8001_dbg(pm8001_ha, DEVIO,
++		   "SPC HW event for portid:%d, phyid:%d, event:%x, status:%x\n",
++		   port_id, phy_id, eventType, status);
+ 	switch (eventType) {
+ 	case HW_EVENT_PHY_START_STATUS:
+-		PM8001_MSG_DBG(pm8001_ha,
+-		pm8001_printk("HW_EVENT_PHY_START_STATUS"
+-			" status = %x\n", status));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_START_STATUS status = %x\n",
++			   status);
+ 		if (status == 0) {
+ 			phy->phy_state = 1;
+ 			if (pm8001_ha->flags == PM8001F_RUN_TIME &&
+@@ -3860,178 +3711,160 @@ static int mpi_hw_event(struct pm8001_hba_info *pm8001_ha, void* piomb)
+ 		}
+ 		break;
+ 	case HW_EVENT_SAS_PHY_UP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PHY_START_STATUS\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_START_STATUS\n");
+ 		hw_event_sas_phy_up(pm8001_ha, piomb);
+ 		break;
+ 	case HW_EVENT_SATA_PHY_UP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_SATA_PHY_UP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_SATA_PHY_UP\n");
+ 		hw_event_sata_phy_up(pm8001_ha, piomb);
+ 		break;
+ 	case HW_EVENT_PHY_STOP_STATUS:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PHY_STOP_STATUS "
+-			"status = %x\n", status));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_STOP_STATUS status = %x\n",
++			   status);
+ 		if (status == 0)
+ 			phy->phy_state = 0;
+ 		break;
+ 	case HW_EVENT_SATA_SPINUP_HOLD:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_SATA_SPINUP_HOLD\n"));
+-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_SPINUP_HOLD);
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_SATA_SPINUP_HOLD\n");
++		sas_notify_phy_event(&phy->sas_phy, PHYE_SPINUP_HOLD);
+ 		break;
+ 	case HW_EVENT_PHY_DOWN:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PHY_DOWN\n"));
+-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_LOSS_OF_SIGNAL);
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_DOWN\n");
++		sas_notify_phy_event(&phy->sas_phy, PHYE_LOSS_OF_SIGNAL);
+ 		phy->phy_attached = 0;
+ 		phy->phy_state = 0;
+ 		hw_event_phy_down(pm8001_ha, piomb);
+ 		break;
+ 	case HW_EVENT_PORT_INVALID:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PORT_INVALID\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PORT_INVALID\n");
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		break;
+ 	/* the broadcast change primitive received, tell the LIBSAS this event
+ 	to revalidate the sas domain*/
+ 	case HW_EVENT_BROADCAST_CHANGE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_BROADCAST_CHANGE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_BROADCAST_CHANGE\n");
+ 		pm8001_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_BROADCAST_CHANGE,
+ 			port_id, phy_id, 1, 0);
+ 		spin_lock_irqsave(&sas_phy->sas_prim_lock, flags);
+ 		sas_phy->sas_prim = HW_EVENT_BROADCAST_CHANGE;
+ 		spin_unlock_irqrestore(&sas_phy->sas_prim_lock, flags);
+-		sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
++		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+ 		break;
+ 	case HW_EVENT_PHY_ERROR:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PHY_ERROR\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_ERROR\n");
+ 		sas_phy_disconnected(&phy->sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_ERROR);
++		sas_notify_phy_event(&phy->sas_phy, PHYE_OOB_ERROR);
+ 		break;
+ 	case HW_EVENT_BROADCAST_EXP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_BROADCAST_EXP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_BROADCAST_EXP\n");
+ 		spin_lock_irqsave(&sas_phy->sas_prim_lock, flags);
+ 		sas_phy->sas_prim = HW_EVENT_BROADCAST_EXP;
+ 		spin_unlock_irqrestore(&sas_phy->sas_prim_lock, flags);
+-		sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
++		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+ 		break;
+ 	case HW_EVENT_LINK_ERR_INVALID_DWORD:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_LINK_ERR_INVALID_DWORD\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "HW_EVENT_LINK_ERR_INVALID_DWORD\n");
+ 		pm8001_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_LINK_ERR_INVALID_DWORD, port_id, phy_id, 0, 0);
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		break;
+ 	case HW_EVENT_LINK_ERR_DISPARITY_ERROR:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_LINK_ERR_DISPARITY_ERROR\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "HW_EVENT_LINK_ERR_DISPARITY_ERROR\n");
+ 		pm8001_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_LINK_ERR_DISPARITY_ERROR,
+ 			port_id, phy_id, 0, 0);
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		break;
+ 	case HW_EVENT_LINK_ERR_CODE_VIOLATION:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_LINK_ERR_CODE_VIOLATION\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "HW_EVENT_LINK_ERR_CODE_VIOLATION\n");
+ 		pm8001_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_LINK_ERR_CODE_VIOLATION,
+ 			port_id, phy_id, 0, 0);
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		break;
+ 	case HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH:
+-		PM8001_MSG_DBG(pm8001_ha,
+-		      pm8001_printk("HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH\n");
+ 		pm8001_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH,
+ 			port_id, phy_id, 0, 0);
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		break;
+ 	case HW_EVENT_MALFUNCTION:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_MALFUNCTION\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_MALFUNCTION\n");
+ 		break;
+ 	case HW_EVENT_BROADCAST_SES:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_BROADCAST_SES\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_BROADCAST_SES\n");
+ 		spin_lock_irqsave(&sas_phy->sas_prim_lock, flags);
+ 		sas_phy->sas_prim = HW_EVENT_BROADCAST_SES;
+ 		spin_unlock_irqrestore(&sas_phy->sas_prim_lock, flags);
+-		sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
++		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+ 		break;
+ 	case HW_EVENT_INBOUND_CRC_ERROR:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_INBOUND_CRC_ERROR\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_INBOUND_CRC_ERROR\n");
+ 		pm8001_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_INBOUND_CRC_ERROR,
+ 			port_id, phy_id, 0, 0);
+ 		break;
+ 	case HW_EVENT_HARD_RESET_RECEIVED:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_HARD_RESET_RECEIVED\n"));
+-		sas_ha->notify_port_event(sas_phy, PORTE_HARD_RESET);
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_HARD_RESET_RECEIVED\n");
++		sas_notify_port_event(sas_phy, PORTE_HARD_RESET);
+ 		break;
+ 	case HW_EVENT_ID_FRAME_TIMEOUT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_ID_FRAME_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_ID_FRAME_TIMEOUT\n");
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		break;
+ 	case HW_EVENT_LINK_ERR_PHY_RESET_FAILED:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_LINK_ERR_PHY_RESET_FAILED\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "HW_EVENT_LINK_ERR_PHY_RESET_FAILED\n");
+ 		pm8001_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_LINK_ERR_PHY_RESET_FAILED,
+ 			port_id, phy_id, 0, 0);
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		break;
+ 	case HW_EVENT_PORT_RESET_TIMER_TMO:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PORT_RESET_TIMER_TMO\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PORT_RESET_TIMER_TMO\n");
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		break;
+ 	case HW_EVENT_PORT_RECOVERY_TIMER_TMO:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PORT_RECOVERY_TIMER_TMO\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "HW_EVENT_PORT_RECOVERY_TIMER_TMO\n");
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		break;
+ 	case HW_EVENT_PORT_RECOVER:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PORT_RECOVER\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PORT_RECOVER\n");
+ 		break;
+ 	case HW_EVENT_PORT_RESET_COMPLETE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PORT_RESET_COMPLETE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PORT_RESET_COMPLETE\n");
+ 		break;
+ 	case EVENT_BROADCAST_ASYNCH_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("EVENT_BROADCAST_ASYNCH_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "EVENT_BROADCAST_ASYNCH_EVENT\n");
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown event type = %x\n", eventType));
++		pm8001_dbg(pm8001_ha, DEVIO, "Unknown event type = %x\n",
++			   eventType);
+ 		break;
+ 	}
+ 	return 0;
+@@ -4047,163 +3880,132 @@ static void process_one_iomb(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	__le32 pHeader = *(__le32 *)piomb;
+ 	u8 opc = (u8)((le32_to_cpu(pHeader)) & 0xFFF);
+ 
+-	PM8001_MSG_DBG(pm8001_ha, pm8001_printk("process_one_iomb:"));
++	pm8001_dbg(pm8001_ha, MSG, "process_one_iomb:\n");
+ 
+ 	switch (opc) {
+ 	case OPC_OUB_ECHO:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk("OPC_OUB_ECHO\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_ECHO\n");
+ 		break;
+ 	case OPC_OUB_HW_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_HW_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_HW_EVENT\n");
+ 		mpi_hw_event(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SSP_COMP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SSP_COMP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SSP_COMP\n");
+ 		mpi_ssp_completion(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SMP_COMP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SMP_COMP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SMP_COMP\n");
+ 		mpi_smp_completion(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_LOCAL_PHY_CNTRL:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_LOCAL_PHY_CNTRL\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_LOCAL_PHY_CNTRL\n");
+ 		pm8001_mpi_local_phy_ctl(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_DEV_REGIST:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_DEV_REGIST\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_DEV_REGIST\n");
+ 		pm8001_mpi_reg_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_DEREG_DEV:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("unregister the device\n"));
++		pm8001_dbg(pm8001_ha, MSG, "unregister the device\n");
+ 		pm8001_mpi_dereg_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_GET_DEV_HANDLE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GET_DEV_HANDLE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GET_DEV_HANDLE\n");
+ 		break;
+ 	case OPC_OUB_SATA_COMP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SATA_COMP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SATA_COMP\n");
+ 		mpi_sata_completion(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SATA_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SATA_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SATA_EVENT\n");
+ 		mpi_sata_event(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SSP_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SSP_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SSP_EVENT\n");
+ 		mpi_ssp_event(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_DEV_HANDLE_ARRIV:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_DEV_HANDLE_ARRIV\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_DEV_HANDLE_ARRIV\n");
+ 		/*This is for target*/
+ 		break;
+ 	case OPC_OUB_SSP_RECV_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SSP_RECV_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SSP_RECV_EVENT\n");
+ 		/*This is for target*/
+ 		break;
+ 	case OPC_OUB_DEV_INFO:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_DEV_INFO\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_DEV_INFO\n");
+ 		break;
+ 	case OPC_OUB_FW_FLASH_UPDATE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_FW_FLASH_UPDATE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_FW_FLASH_UPDATE\n");
+ 		pm8001_mpi_fw_flash_update_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_GPIO_RESPONSE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GPIO_RESPONSE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GPIO_RESPONSE\n");
+ 		break;
+ 	case OPC_OUB_GPIO_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GPIO_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GPIO_EVENT\n");
+ 		break;
+ 	case OPC_OUB_GENERAL_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GENERAL_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GENERAL_EVENT\n");
+ 		pm8001_mpi_general_event(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SSP_ABORT_RSP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SSP_ABORT_RSP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SSP_ABORT_RSP\n");
+ 		pm8001_mpi_task_abort_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SATA_ABORT_RSP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SATA_ABORT_RSP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SATA_ABORT_RSP\n");
+ 		pm8001_mpi_task_abort_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SAS_DIAG_MODE_START_END:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SAS_DIAG_MODE_START_END\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "OPC_OUB_SAS_DIAG_MODE_START_END\n");
+ 		break;
+ 	case OPC_OUB_SAS_DIAG_EXECUTE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SAS_DIAG_EXECUTE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SAS_DIAG_EXECUTE\n");
+ 		break;
+ 	case OPC_OUB_GET_TIME_STAMP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GET_TIME_STAMP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GET_TIME_STAMP\n");
+ 		break;
+ 	case OPC_OUB_SAS_HW_EVENT_ACK:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SAS_HW_EVENT_ACK\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SAS_HW_EVENT_ACK\n");
+ 		break;
+ 	case OPC_OUB_PORT_CONTROL:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_PORT_CONTROL\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_PORT_CONTROL\n");
+ 		break;
+ 	case OPC_OUB_SMP_ABORT_RSP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SMP_ABORT_RSP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SMP_ABORT_RSP\n");
+ 		pm8001_mpi_task_abort_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_GET_NVMD_DATA:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GET_NVMD_DATA\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GET_NVMD_DATA\n");
+ 		pm8001_mpi_get_nvmd_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SET_NVMD_DATA:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SET_NVMD_DATA\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SET_NVMD_DATA\n");
+ 		pm8001_mpi_set_nvmd_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_DEVICE_HANDLE_REMOVAL:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_DEVICE_HANDLE_REMOVAL\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_DEVICE_HANDLE_REMOVAL\n");
+ 		break;
+ 	case OPC_OUB_SET_DEVICE_STATE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SET_DEVICE_STATE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SET_DEVICE_STATE\n");
+ 		pm8001_mpi_set_dev_state_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_GET_DEVICE_STATE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GET_DEVICE_STATE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GET_DEVICE_STATE\n");
+ 		break;
+ 	case OPC_OUB_SET_DEV_INFO:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SET_DEV_INFO\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SET_DEV_INFO\n");
+ 		break;
+ 	case OPC_OUB_SAS_RE_INITIALIZE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SAS_RE_INITIALIZE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SAS_RE_INITIALIZE\n");
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown outbound Queue IOMB OPC = %x\n",
+-			opc));
++		pm8001_dbg(pm8001_ha, DEVIO,
++			   "Unknown outbound Queue IOMB OPC = %x\n",
++			   opc);
+ 		break;
+ 	}
+ }
+@@ -4416,19 +4218,19 @@ static int pm8001_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ 	if (task->data_dir == DMA_NONE) {
+ 		ATAP = 0x04;  /* no data*/
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("no data\n"));
++		pm8001_dbg(pm8001_ha, IO, "no data\n");
+ 	} else if (likely(!task->ata_task.device_control_reg_update)) {
+ 		if (task->ata_task.dma_xfer) {
+ 			ATAP = 0x06; /* DMA */
+-			PM8001_IO_DBG(pm8001_ha, pm8001_printk("DMA\n"));
++			pm8001_dbg(pm8001_ha, IO, "DMA\n");
+ 		} else {
+ 			ATAP = 0x05; /* PIO*/
+-			PM8001_IO_DBG(pm8001_ha, pm8001_printk("PIO\n"));
++			pm8001_dbg(pm8001_ha, IO, "PIO\n");
+ 		}
+ 		if (task->ata_task.use_ncq &&
+ 			dev->sata_dev.class != ATA_DEV_ATAPI) {
+ 			ATAP = 0x07; /* FPDMA */
+-			PM8001_IO_DBG(pm8001_ha, pm8001_printk("FPDMA\n"));
++			pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
+ 		}
+ 	}
+ 	if (task->ata_task.use_ncq && pm8001_get_ncq_tag(task, &hdr_tag)) {
+@@ -4485,10 +4287,10 @@ static int pm8001_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 					SAS_TASK_STATE_ABORTED))) {
+ 				spin_unlock_irqrestore(&task->task_state_lock,
+ 							flags);
+-				PM8001_FAIL_DBG(pm8001_ha,
+-					pm8001_printk("task 0x%p resp 0x%x "
+-					" stat 0x%x but aborted by upper layer "
+-					"\n", task, ts->resp, ts->stat));
++				pm8001_dbg(pm8001_ha, FAIL,
++					   "task 0x%p resp 0x%x  stat 0x%x but aborted by upper layer\n",
++					   task, ts->resp,
++					   ts->stat);
+ 				pm8001_ccb_task_free(pm8001_ha, task, ccb, tag);
+ 			} else {
+ 				spin_unlock_irqrestore(&task->task_state_lock,
+@@ -4637,8 +4439,8 @@ int pm8001_chip_dereg_dev_req(struct pm8001_hba_info *pm8001_ha,
+ 	memset(&payload, 0, sizeof(payload));
+ 	payload.tag = cpu_to_le32(1);
+ 	payload.device_id = cpu_to_le32(device_id);
+-	PM8001_MSG_DBG(pm8001_ha,
+-		pm8001_printk("unregister device device_id = %d\n", device_id));
++	pm8001_dbg(pm8001_ha, MSG, "unregister device device_id = %d\n",
++		   device_id);
+ 	ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+ 			sizeof(payload), 0);
+ 	return ret;
+@@ -4690,9 +4492,9 @@ static irqreturn_t
+ pm8001_chip_isr(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ {
+ 	pm8001_chip_interrupt_disable(pm8001_ha, vec);
+-	PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk(
+-		"irq vec %d, ODMR:0x%x\n",
+-		vec, pm8001_cr32(pm8001_ha, 0, 0x30)));
++	pm8001_dbg(pm8001_ha, DEVIO,
++		   "irq vec %d, ODMR:0x%x\n",
++		   vec, pm8001_cr32(pm8001_ha, 0, 0x30));
+ 	process_oq(pm8001_ha, vec);
+ 	pm8001_chip_interrupt_enable(pm8001_ha, vec);
+ 	return IRQ_HANDLED;
+@@ -4729,9 +4531,8 @@ int pm8001_chip_abort_task(struct pm8001_hba_info *pm8001_ha,
+ {
+ 	u32 opc, device_id;
+ 	int rc = TMF_RESP_FUNC_FAILED;
+-	PM8001_EH_DBG(pm8001_ha,
+-		pm8001_printk("cmd_tag = %x, abort task tag = 0x%x",
+-			cmd_tag, task_tag));
++	pm8001_dbg(pm8001_ha, EH, "cmd_tag = %x, abort task tag = 0x%x\n",
++		   cmd_tag, task_tag);
+ 	if (pm8001_dev->dev_type == SAS_END_DEVICE)
+ 		opc = OPC_INB_SSP_ABORT;
+ 	else if (pm8001_dev->dev_type == SAS_SATA_DEV)
+@@ -4742,7 +4543,7 @@ int pm8001_chip_abort_task(struct pm8001_hba_info *pm8001_ha,
+ 	rc = send_task_abort(pm8001_ha, opc, device_id, flag,
+ 		task_tag, cmd_tag);
+ 	if (rc != TMF_RESP_FUNC_COMPLETE)
+-		PM8001_EH_DBG(pm8001_ha, pm8001_printk("rc= %d\n", rc));
++		pm8001_dbg(pm8001_ha, EH, "rc= %d\n", rc);
+ 	return rc;
+ }
+ 
+@@ -5008,8 +4809,9 @@ pm8001_chip_fw_flash_update_req(struct pm8001_hba_info *pm8001_ha,
+ 	if (!fw_control_context)
+ 		return -ENOMEM;
+ 	fw_control = (struct fw_control_info *)&ioctl_payload->func_specific;
+-	PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk(
+-		"dma fw_control context input length :%x\n", fw_control->len));
++	pm8001_dbg(pm8001_ha, DEVIO,
++		   "dma fw_control context input length :%x\n",
++		   fw_control->len);
+ 	memcpy(buffer, fw_control->buffer, fw_control->len);
+ 	flash_update_info.sgl.addr = cpu_to_le64(phys_addr);
+ 	flash_update_info.sgl.im_len.len = cpu_to_le32(fw_control->len);
+diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
+index 2025361b36e96..7657d68e12d5f 100644
+--- a/drivers/scsi/pm8001/pm8001_init.c
++++ b/drivers/scsi/pm8001/pm8001_init.c
+@@ -271,15 +271,14 @@ static int pm8001_alloc(struct pm8001_hba_info *pm8001_ha,
+ 
+ 	spin_lock_init(&pm8001_ha->lock);
+ 	spin_lock_init(&pm8001_ha->bitmap_lock);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("pm8001_alloc: PHY:%x\n",
+-				pm8001_ha->chip->n_phy));
++	pm8001_dbg(pm8001_ha, INIT, "pm8001_alloc: PHY:%x\n",
++		   pm8001_ha->chip->n_phy);
+ 
+ 	/* Setup Interrupt */
+ 	rc = pm8001_setup_irq(pm8001_ha);
+ 	if (rc) {
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
+-				"pm8001_setup_irq failed [ret: %d]\n", rc));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "pm8001_setup_irq failed [ret: %d]\n", rc);
+ 		goto err_out_shost;
+ 	}
+ 	/* Request Interrupt */
+@@ -394,9 +393,9 @@ static int pm8001_alloc(struct pm8001_hba_info *pm8001_ha,
+ 			&pm8001_ha->memoryMap.region[i].phys_addr_lo,
+ 			pm8001_ha->memoryMap.region[i].total_len,
+ 			pm8001_ha->memoryMap.region[i].alignment) != 0) {
+-				PM8001_FAIL_DBG(pm8001_ha,
+-					pm8001_printk("Mem%d alloc failed\n",
+-					i));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "Mem%d alloc failed\n",
++				   i);
+ 				goto err_out;
+ 		}
+ 	}
+@@ -412,7 +411,7 @@ static int pm8001_alloc(struct pm8001_hba_info *pm8001_ha,
+ 		pm8001_ha->devices[i].dev_type = SAS_PHY_UNUSED;
+ 		pm8001_ha->devices[i].id = i;
+ 		pm8001_ha->devices[i].device_id = PM8001_MAX_DEVICES;
+-		pm8001_ha->devices[i].running_req = 0;
++		atomic_set(&pm8001_ha->devices[i].running_req, 0);
+ 	}
+ 	pm8001_ha->flags = PM8001F_INIT_TIME;
+ 	/* Initialize tags */
+@@ -467,15 +466,15 @@ static int pm8001_ioremap(struct pm8001_hba_info *pm8001_ha)
+ 			pm8001_ha->io_mem[logicalBar].memvirtaddr =
+ 				ioremap(pm8001_ha->io_mem[logicalBar].membase,
+ 				pm8001_ha->io_mem[logicalBar].memsize);
+-			PM8001_INIT_DBG(pm8001_ha,
+-				pm8001_printk("PCI: bar %d, logicalBar %d ",
+-				bar, logicalBar));
+-			PM8001_INIT_DBG(pm8001_ha, pm8001_printk(
+-				"base addr %llx virt_addr=%llx len=%d\n",
+-				(u64)pm8001_ha->io_mem[logicalBar].membase,
+-				(u64)(unsigned long)
+-				pm8001_ha->io_mem[logicalBar].memvirtaddr,
+-				pm8001_ha->io_mem[logicalBar].memsize));
++			pm8001_dbg(pm8001_ha, INIT,
++				   "PCI: bar %d, logicalBar %d\n",
++				   bar, logicalBar);
++			pm8001_dbg(pm8001_ha, INIT,
++				   "base addr %llx virt_addr=%llx len=%d\n",
++				   (u64)pm8001_ha->io_mem[logicalBar].membase,
++				   (u64)(unsigned long)
++				   pm8001_ha->io_mem[logicalBar].memvirtaddr,
++				   pm8001_ha->io_mem[logicalBar].memsize);
+ 		} else {
+ 			pm8001_ha->io_mem[logicalBar].membase	= 0;
+ 			pm8001_ha->io_mem[logicalBar].memsize	= 0;
+@@ -520,8 +519,8 @@ static struct pm8001_hba_info *pm8001_pci_alloc(struct pci_dev *pdev,
+ 	else {
+ 		pm8001_ha->link_rate = LINKRATE_15 | LINKRATE_30 |
+ 			LINKRATE_60 | LINKRATE_120;
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
+-			"Setting link rate to default value\n"));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "Setting link rate to default value\n");
+ 	}
+ 	sprintf(pm8001_ha->name, "%s%d", DRV_NAME, pm8001_ha->id);
+ 	/* IOMB size is 128 for 8088/89 controllers */
+@@ -684,13 +683,13 @@ static void pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha)
+ 	payload.offset = 0;
+ 	payload.func_specific = kzalloc(payload.rd_length, GFP_KERNEL);
+ 	if (!payload.func_specific) {
+-		PM8001_INIT_DBG(pm8001_ha, pm8001_printk("mem alloc fail\n"));
++		pm8001_dbg(pm8001_ha, INIT, "mem alloc fail\n");
+ 		return;
+ 	}
+ 	rc = PM8001_CHIP_DISP->get_nvmd_req(pm8001_ha, &payload);
+ 	if (rc) {
+ 		kfree(payload.func_specific);
+-		PM8001_INIT_DBG(pm8001_ha, pm8001_printk("nvmd failed\n"));
++		pm8001_dbg(pm8001_ha, INIT, "nvmd failed\n");
+ 		return;
+ 	}
+ 	wait_for_completion(&completion);
+@@ -718,9 +717,8 @@ static void pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha)
+ 			sas_add[7] = sas_add[7] + 4;
+ 		memcpy(&pm8001_ha->phy[i].dev_sas_addr,
+ 			sas_add, SAS_ADDR_SIZE);
+-		PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("phy %d sas_addr = %016llx\n", i,
+-			pm8001_ha->phy[i].dev_sas_addr));
++		pm8001_dbg(pm8001_ha, INIT, "phy %d sas_addr = %016llx\n", i,
++			   pm8001_ha->phy[i].dev_sas_addr);
+ 	}
+ 	kfree(payload.func_specific);
+ #else
+@@ -760,7 +758,7 @@ static int pm8001_get_phy_settings_info(struct pm8001_hba_info *pm8001_ha)
+ 	rc = PM8001_CHIP_DISP->get_nvmd_req(pm8001_ha, &payload);
+ 	if (rc) {
+ 		kfree(payload.func_specific);
+-		PM8001_INIT_DBG(pm8001_ha, pm8001_printk("nvmd failed\n"));
++		pm8001_dbg(pm8001_ha, INIT, "nvmd failed\n");
+ 		return -ENOMEM;
+ 	}
+ 	wait_for_completion(&completion);
+@@ -854,9 +852,9 @@ void pm8001_get_phy_mask(struct pm8001_hba_info *pm8001_ha, int *phymask)
+ 		break;
+ 
+ 	default:
+-		PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("Unknown subsystem device=0x%.04x",
+-				pm8001_ha->pdev->subsystem_device));
++		pm8001_dbg(pm8001_ha, INIT,
++			   "Unknown subsystem device=0x%.04x\n",
++			   pm8001_ha->pdev->subsystem_device);
+ 	}
+ }
+ 
+@@ -950,9 +948,9 @@ static u32 pm8001_setup_msix(struct pm8001_hba_info *pm8001_ha)
+ 	/* Maximum queue number updating in HBA structure */
+ 	pm8001_ha->max_q_num = number_of_intr;
+ 
+-	PM8001_INIT_DBG(pm8001_ha, pm8001_printk(
+-		"pci_alloc_irq_vectors request ret:%d no of intr %d\n",
+-				rc, pm8001_ha->number_of_intr));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "pci_alloc_irq_vectors request ret:%d no of intr %d\n",
++		   rc, pm8001_ha->number_of_intr);
+ 	return 0;
+ }
+ 
+@@ -964,9 +962,9 @@ static u32 pm8001_request_msix(struct pm8001_hba_info *pm8001_ha)
+ 	if (pm8001_ha->chip_id != chip_8001)
+ 		flag &= ~IRQF_SHARED;
+ 
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("pci_enable_msix request number of intr %d\n",
+-		pm8001_ha->number_of_intr));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "pci_enable_msix request number of intr %d\n",
++		   pm8001_ha->number_of_intr);
+ 
+ 	for (i = 0; i < pm8001_ha->number_of_intr; i++) {
+ 		snprintf(pm8001_ha->intr_drvname[i],
+@@ -1002,8 +1000,7 @@ static u32 pm8001_setup_irq(struct pm8001_hba_info *pm8001_ha)
+ #ifdef PM8001_USE_MSIX
+ 	if (pci_find_capability(pdev, PCI_CAP_ID_MSIX))
+ 		return pm8001_setup_msix(pm8001_ha);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("MSIX not supported!!!\n"));
++	pm8001_dbg(pm8001_ha, INIT, "MSIX not supported!!!\n");
+ #endif
+ 	return 0;
+ }
+@@ -1023,8 +1020,7 @@ static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha)
+ 	if (pdev->msix_cap && pci_msi_enabled())
+ 		return pm8001_request_msix(pm8001_ha);
+ 	else {
+-		PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("MSIX not supported!!!\n"));
++		pm8001_dbg(pm8001_ha, INIT, "MSIX not supported!!!\n");
+ 		goto intx;
+ 	}
+ #endif
+@@ -1108,8 +1104,8 @@ static int pm8001_pci_probe(struct pci_dev *pdev,
+ 	PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha);
+ 	rc = PM8001_CHIP_DISP->chip_init(pm8001_ha);
+ 	if (rc) {
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
+-			"chip_init failed [ret: %d]\n", rc));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "chip_init failed [ret: %d]\n", rc);
+ 		goto err_out_ha_free;
+ 	}
+ 
+@@ -1138,8 +1134,8 @@ static int pm8001_pci_probe(struct pci_dev *pdev,
+ 	pm8001_post_sas_ha_init(shost, chip);
+ 	rc = sas_register_ha(SHOST_TO_SAS_HA(shost));
+ 	if (rc) {
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
+-			"sas_register_ha failed [ret: %d]\n", rc));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "sas_register_ha failed [ret: %d]\n", rc);
+ 		goto err_out_shost;
+ 	}
+ 	list_add_tail(&pm8001_ha->list, &hba_list);
+@@ -1191,8 +1187,8 @@ pm8001_init_ccb_tag(struct pm8001_hba_info *pm8001_ha, struct Scsi_Host *shost,
+ 	pm8001_ha->ccb_info = (struct pm8001_ccb_info *)
+ 		kcalloc(ccb_count, sizeof(struct pm8001_ccb_info), GFP_KERNEL);
+ 	if (!pm8001_ha->ccb_info) {
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk
+-			("Unable to allocate memory for ccb\n"));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "Unable to allocate memory for ccb\n");
+ 		goto err_out_noccb;
+ 	}
+ 	for (i = 0; i < ccb_count; i++) {
+@@ -1200,8 +1196,8 @@ pm8001_init_ccb_tag(struct pm8001_hba_info *pm8001_ha, struct Scsi_Host *shost,
+ 				sizeof(struct pm8001_prd) * PM8001_MAX_DMA_SG,
+ 				&pm8001_ha->ccb_info[i].ccb_dma_handle);
+ 		if (!pm8001_ha->ccb_info[i].buf_prd) {
+-			PM8001_FAIL_DBG(pm8001_ha, pm8001_printk
+-					("pm80xx: ccb prd memory allocation error\n"));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "pm80xx: ccb prd memory allocation error\n");
+ 			goto err_out;
+ 		}
+ 		pm8001_ha->ccb_info[i].task = NULL;
+@@ -1345,8 +1341,7 @@ static int pm8001_pci_resume(struct pci_dev *pdev)
+ 	/* chip soft rst only for spc */
+ 	if (pm8001_ha->chip_id == chip_8001) {
+ 		PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha);
+-		PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("chip soft reset successful\n"));
++		pm8001_dbg(pm8001_ha, INIT, "chip soft reset successful\n");
+ 	}
+ 	rc = PM8001_CHIP_DISP->chip_init(pm8001_ha);
+ 	if (rc)
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index 9889bab7d31c1..474468df2a78d 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -158,7 +158,6 @@ int pm8001_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func,
+ 	int rc = 0, phy_id = sas_phy->id;
+ 	struct pm8001_hba_info *pm8001_ha = NULL;
+ 	struct sas_phy_linkrates *rates;
+-	struct sas_ha_struct *sas_ha;
+ 	struct pm8001_phy *phy;
+ 	DECLARE_COMPLETION_ONSTACK(completion);
+ 	unsigned long flags;
+@@ -207,18 +206,16 @@ int pm8001_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func,
+ 		if (pm8001_ha->chip_id != chip_8001) {
+ 			if (pm8001_ha->phy[phy_id].phy_state ==
+ 				PHY_STATE_LINK_UP_SPCV) {
+-				sas_ha = pm8001_ha->sas;
+ 				sas_phy_disconnected(&phy->sas_phy);
+-				sas_ha->notify_phy_event(&phy->sas_phy,
++				sas_notify_phy_event(&phy->sas_phy,
+ 					PHYE_LOSS_OF_SIGNAL);
+ 				phy->phy_attached = 0;
+ 			}
+ 		} else {
+ 			if (pm8001_ha->phy[phy_id].phy_state ==
+ 				PHY_STATE_LINK_UP_SPC) {
+-				sas_ha = pm8001_ha->sas;
+ 				sas_phy_disconnected(&phy->sas_phy);
+-				sas_ha->notify_phy_event(&phy->sas_phy,
++				sas_notify_phy_event(&phy->sas_phy,
+ 					PHYE_LOSS_OF_SIGNAL);
+ 				phy->phy_attached = 0;
+ 			}
+@@ -250,8 +247,7 @@ int pm8001_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func,
+ 		spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+ 		return 0;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("func 0x%x\n", func));
++		pm8001_dbg(pm8001_ha, DEVIO, "func 0x%x\n", func);
+ 		rc = -EOPNOTSUPP;
+ 	}
+ 	msleep(300);
+@@ -405,7 +401,7 @@ static int pm8001_task_exec(struct sas_task *task,
+ 		t->task_done(t);
+ 		return 0;
+ 	}
+-	PM8001_IO_DBG(pm8001_ha, pm8001_printk("pm8001_task_exec device \n "));
++	pm8001_dbg(pm8001_ha, IO, "pm8001_task_exec device\n");
+ 	spin_lock_irqsave(&pm8001_ha->lock, flags);
+ 	do {
+ 		dev = t->dev;
+@@ -456,9 +452,11 @@ static int pm8001_task_exec(struct sas_task *task,
+ 		ccb->device = pm8001_dev;
+ 		switch (task_proto) {
+ 		case SAS_PROTOCOL_SMP:
++			atomic_inc(&pm8001_dev->running_req);
+ 			rc = pm8001_task_prep_smp(pm8001_ha, ccb);
+ 			break;
+ 		case SAS_PROTOCOL_SSP:
++			atomic_inc(&pm8001_dev->running_req);
+ 			if (is_tmf)
+ 				rc = pm8001_task_prep_ssp_tm(pm8001_ha,
+ 					ccb, tmf);
+@@ -467,6 +465,7 @@ static int pm8001_task_exec(struct sas_task *task,
+ 			break;
+ 		case SAS_PROTOCOL_SATA:
+ 		case SAS_PROTOCOL_STP:
++			atomic_inc(&pm8001_dev->running_req);
+ 			rc = pm8001_task_prep_ata(pm8001_ha, ccb);
+ 			break;
+ 		default:
+@@ -477,15 +476,14 @@ static int pm8001_task_exec(struct sas_task *task,
+ 		}
+ 
+ 		if (rc) {
+-			PM8001_IO_DBG(pm8001_ha,
+-				pm8001_printk("rc is %x\n", rc));
++			pm8001_dbg(pm8001_ha, IO, "rc is %x\n", rc);
++			atomic_dec(&pm8001_dev->running_req);
+ 			goto err_out_tag;
+ 		}
+ 		/* TODO: select normal or high priority */
+ 		spin_lock(&t->task_state_lock);
+ 		t->task_state_flags |= SAS_TASK_AT_INITIATOR;
+ 		spin_unlock(&t->task_state_lock);
+-		pm8001_dev->running_req++;
+ 	} while (0);
+ 	rc = 0;
+ 	goto out_done;
+@@ -567,9 +565,9 @@ static struct pm8001_device *pm8001_alloc_dev(struct pm8001_hba_info *pm8001_ha)
+ 		}
+ 	}
+ 	if (dev == PM8001_MAX_DEVICES) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("max support %d devices, ignore ..\n",
+-			PM8001_MAX_DEVICES));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "max support %d devices, ignore ..\n",
++			   PM8001_MAX_DEVICES);
+ 	}
+ 	return NULL;
+ }
+@@ -587,8 +585,7 @@ struct pm8001_device *pm8001_find_dev(struct pm8001_hba_info *pm8001_ha,
+ 			return &pm8001_ha->devices[dev];
+ 	}
+ 	if (dev == PM8001_MAX_DEVICES) {
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("NO MATCHING "
+-				"DEVICE FOUND !!!\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "NO MATCHING DEVICE FOUND !!!\n");
+ 	}
+ 	return NULL;
+ }
+@@ -649,10 +646,10 @@ static int pm8001_dev_found_notify(struct domain_device *dev)
+ 			}
+ 		}
+ 		if (phy_id == parent_dev->ex_dev.num_phys) {
+-			PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("Error: no attached dev:%016llx"
+-			" at ex:%016llx.\n", SAS_ADDR(dev->sas_addr),
+-				SAS_ADDR(parent_dev->sas_addr)));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "Error: no attached dev:%016llx at ex:%016llx.\n",
++				   SAS_ADDR(dev->sas_addr),
++				   SAS_ADDR(parent_dev->sas_addr));
+ 			res = -1;
+ 		}
+ 	} else {
+@@ -662,7 +659,7 @@ static int pm8001_dev_found_notify(struct domain_device *dev)
+ 			flag = 1; /* directly sata */
+ 		}
+ 	} /*register this device to HBA*/
+-	PM8001_DISC_DBG(pm8001_ha, pm8001_printk("Found device\n"));
++	pm8001_dbg(pm8001_ha, DISC, "Found device\n");
+ 	PM8001_CHIP_DISP->reg_dev_req(pm8001_ha, pm8001_device, flag);
+ 	spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+ 	wait_for_completion(&completion);
+@@ -734,9 +731,7 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
+ 
+ 		if (res) {
+ 			del_timer(&task->slow_task->timer);
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("Executing internal task "
+-				"failed\n"));
++			pm8001_dbg(pm8001_ha, FAIL, "Executing internal task failed\n");
+ 			goto ex_err;
+ 		}
+ 		wait_for_completion(&task->slow_task->completion);
+@@ -750,9 +745,9 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
+ 		/* Even TMF timed out, return direct. */
+ 		if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) {
+ 			if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) {
+-				PM8001_FAIL_DBG(pm8001_ha,
+-					pm8001_printk("TMF task[%x]timeout.\n",
+-					tmf->tmf));
++				pm8001_dbg(pm8001_ha, FAIL,
++					   "TMF task[%x]timeout.\n",
++					   tmf->tmf);
+ 				goto ex_err;
+ 			}
+ 		}
+@@ -773,17 +768,15 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
+ 
+ 		if (task->task_status.resp == SAS_TASK_COMPLETE &&
+ 			task->task_status.stat == SAS_DATA_OVERRUN) {
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("Blocked task error.\n"));
++			pm8001_dbg(pm8001_ha, FAIL, "Blocked task error.\n");
+ 			res = -EMSGSIZE;
+ 			break;
+ 		} else {
+-			PM8001_EH_DBG(pm8001_ha,
+-				pm8001_printk(" Task to dev %016llx response:"
+-				"0x%x status 0x%x\n",
+-				SAS_ADDR(dev->sas_addr),
+-				task->task_status.resp,
+-				task->task_status.stat));
++			pm8001_dbg(pm8001_ha, EH,
++				   " Task to dev %016llx response:0x%x status 0x%x\n",
++				   SAS_ADDR(dev->sas_addr),
++				   task->task_status.resp,
++				   task->task_status.stat);
+ 			sas_free_task(task);
+ 			task = NULL;
+ 		}
+@@ -830,9 +823,7 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
+ 
+ 		if (res) {
+ 			del_timer(&task->slow_task->timer);
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("Executing internal task "
+-				"failed\n"));
++			pm8001_dbg(pm8001_ha, FAIL, "Executing internal task failed\n");
+ 			goto ex_err;
+ 		}
+ 		wait_for_completion(&task->slow_task->completion);
+@@ -840,8 +831,8 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
+ 		/* Even TMF timed out, return direct. */
+ 		if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) {
+ 			if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) {
+-				PM8001_FAIL_DBG(pm8001_ha,
+-					pm8001_printk("TMF task timeout.\n"));
++				pm8001_dbg(pm8001_ha, FAIL,
++					   "TMF task timeout.\n");
+ 				goto ex_err;
+ 			}
+ 		}
+@@ -852,12 +843,11 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
+ 			break;
+ 
+ 		} else {
+-			PM8001_EH_DBG(pm8001_ha,
+-				pm8001_printk(" Task to dev %016llx response: "
+-					"0x%x status 0x%x\n",
+-				SAS_ADDR(dev->sas_addr),
+-				task->task_status.resp,
+-				task->task_status.stat));
++			pm8001_dbg(pm8001_ha, EH,
++				   " Task to dev %016llx response: 0x%x status 0x%x\n",
++				   SAS_ADDR(dev->sas_addr),
++				   task->task_status.resp,
++				   task->task_status.stat);
+ 			sas_free_task(task);
+ 			task = NULL;
+ 		}
+@@ -883,22 +873,20 @@ static void pm8001_dev_gone_notify(struct domain_device *dev)
+ 	if (pm8001_dev) {
+ 		u32 device_id = pm8001_dev->device_id;
+ 
+-		PM8001_DISC_DBG(pm8001_ha,
+-			pm8001_printk("found dev[%d:%x] is gone.\n",
+-			pm8001_dev->device_id, pm8001_dev->dev_type));
+-		if (pm8001_dev->running_req) {
++		pm8001_dbg(pm8001_ha, DISC, "found dev[%d:%x] is gone.\n",
++			   pm8001_dev->device_id, pm8001_dev->dev_type);
++		if (atomic_read(&pm8001_dev->running_req)) {
+ 			spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+ 			pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev ,
+ 				dev, 1, 0);
+-			while (pm8001_dev->running_req)
++			while (atomic_read(&pm8001_dev->running_req))
+ 				msleep(20);
+ 			spin_lock_irqsave(&pm8001_ha->lock, flags);
+ 		}
+ 		PM8001_CHIP_DISP->dereg_dev_req(pm8001_ha, device_id);
+ 		pm8001_free_dev(pm8001_dev);
+ 	} else {
+-		PM8001_DISC_DBG(pm8001_ha,
+-			pm8001_printk("Found dev has gone.\n"));
++		pm8001_dbg(pm8001_ha, DISC, "Found dev has gone.\n");
+ 	}
+ 	dev->lldd_dev = NULL;
+ 	spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+@@ -968,7 +956,7 @@ void pm8001_open_reject_retry(
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		spin_lock_irqsave(&task->task_state_lock, flags1);
+ 		task->task_state_flags &= ~SAS_TASK_STATE_PENDING;
+ 		task->task_state_flags &= ~SAS_TASK_AT_INITIATOR;
+@@ -1018,9 +1006,9 @@ int pm8001_I_T_nexus_reset(struct domain_device *dev)
+ 		}
+ 		rc = sas_phy_reset(phy, 1);
+ 		if (rc) {
+-			PM8001_EH_DBG(pm8001_ha,
+-			pm8001_printk("phy reset failed for device %x\n"
+-			"with rc %d\n", pm8001_dev->device_id, rc));
++			pm8001_dbg(pm8001_ha, EH,
++				   "phy reset failed for device %x\n"
++				   "with rc %d\n", pm8001_dev->device_id, rc);
+ 			rc = TMF_RESP_FUNC_FAILED;
+ 			goto out;
+ 		}
+@@ -1028,17 +1016,16 @@ int pm8001_I_T_nexus_reset(struct domain_device *dev)
+ 		rc = pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev ,
+ 			dev, 1, 0);
+ 		if (rc) {
+-			PM8001_EH_DBG(pm8001_ha,
+-			pm8001_printk("task abort failed %x\n"
+-			"with rc %d\n", pm8001_dev->device_id, rc));
++			pm8001_dbg(pm8001_ha, EH, "task abort failed %x\n"
++				   "with rc %d\n", pm8001_dev->device_id, rc);
+ 			rc = TMF_RESP_FUNC_FAILED;
+ 		}
+ 	} else {
+ 		rc = sas_phy_reset(phy, 1);
+ 		msleep(2000);
+ 	}
+-	PM8001_EH_DBG(pm8001_ha, pm8001_printk(" for device[%x]:rc=%d\n",
+-		pm8001_dev->device_id, rc));
++	pm8001_dbg(pm8001_ha, EH, " for device[%x]:rc=%d\n",
++		   pm8001_dev->device_id, rc);
+  out:
+ 	sas_put_local_phy(phy);
+ 	return rc;
+@@ -1061,8 +1048,7 @@ int pm8001_I_T_nexus_event_handler(struct domain_device *dev)
+ 	pm8001_dev = dev->lldd_dev;
+ 	pm8001_ha = pm8001_find_ha_by_dev(dev);
+ 
+-	PM8001_EH_DBG(pm8001_ha,
+-			pm8001_printk("I_T_Nexus handler invoked !!"));
++	pm8001_dbg(pm8001_ha, EH, "I_T_Nexus handler invoked !!\n");
+ 
+ 	phy = sas_get_local_phy(dev);
+ 
+@@ -1101,8 +1087,8 @@ int pm8001_I_T_nexus_event_handler(struct domain_device *dev)
+ 		rc = sas_phy_reset(phy, 1);
+ 		msleep(2000);
+ 	}
+-	PM8001_EH_DBG(pm8001_ha, pm8001_printk(" for device[%x]:rc=%d\n",
+-		pm8001_dev->device_id, rc));
++	pm8001_dbg(pm8001_ha, EH, " for device[%x]:rc=%d\n",
++		   pm8001_dev->device_id, rc);
+ out:
+ 	sas_put_local_phy(phy);
+ 
+@@ -1131,8 +1117,8 @@ int pm8001_lu_reset(struct domain_device *dev, u8 *lun)
+ 		rc = pm8001_issue_ssp_tmf(dev, lun, &tmf_task);
+ 	}
+ 	/* If failed, fall-through I_T_Nexus reset */
+-	PM8001_EH_DBG(pm8001_ha, pm8001_printk("for device[%x]:rc=%d\n",
+-		pm8001_dev->device_id, rc));
++	pm8001_dbg(pm8001_ha, EH, "for device[%x]:rc=%d\n",
++		   pm8001_dev->device_id, rc);
+ 	return rc;
+ }
+ 
+@@ -1140,7 +1126,6 @@ int pm8001_lu_reset(struct domain_device *dev, u8 *lun)
+ int pm8001_query_task(struct sas_task *task)
+ {
+ 	u32 tag = 0xdeadbeef;
+-	int i = 0;
+ 	struct scsi_lun lun;
+ 	struct pm8001_tmf_task tmf_task;
+ 	int rc = TMF_RESP_FUNC_FAILED;
+@@ -1159,10 +1144,7 @@ int pm8001_query_task(struct sas_task *task)
+ 			rc = TMF_RESP_FUNC_FAILED;
+ 			return rc;
+ 		}
+-		PM8001_EH_DBG(pm8001_ha, pm8001_printk("Query:["));
+-		for (i = 0; i < 16; i++)
+-			printk(KERN_INFO "%02x ", cmnd->cmnd[i]);
+-		printk(KERN_INFO "]\n");
++		pm8001_dbg(pm8001_ha, EH, "Query:[%16ph]\n", cmnd->cmnd);
+ 		tmf_task.tmf = 	TMF_QUERY_TASK;
+ 		tmf_task.tag_of_task_to_be_managed = tag;
+ 
+@@ -1170,15 +1152,14 @@ int pm8001_query_task(struct sas_task *task)
+ 		switch (rc) {
+ 		/* The task is still in Lun, release it then */
+ 		case TMF_RESP_FUNC_SUCC:
+-			PM8001_EH_DBG(pm8001_ha,
+-				pm8001_printk("The task is still in Lun\n"));
++			pm8001_dbg(pm8001_ha, EH,
++				   "The task is still in Lun\n");
+ 			break;
+ 		/* The task is not in Lun or failed, reset the phy */
+ 		case TMF_RESP_FUNC_FAILED:
+ 		case TMF_RESP_FUNC_COMPLETE:
+-			PM8001_EH_DBG(pm8001_ha,
+-			pm8001_printk("The task is not in Lun or failed,"
+-			" reset the phy\n"));
++			pm8001_dbg(pm8001_ha, EH,
++				   "The task is not in Lun or failed, reset the phy\n");
+ 			break;
+ 		}
+ 	}
+@@ -1264,8 +1245,8 @@ int pm8001_abort_task(struct sas_task *task)
+ 			 * leaking the task in libsas or losing the race and
+ 			 * getting a double free.
+ 			 */
+-			PM8001_MSG_DBG(pm8001_ha,
+-				pm8001_printk("Waiting for local phy ctl\n"));
++			pm8001_dbg(pm8001_ha, MSG,
++				   "Waiting for local phy ctl\n");
+ 			ret = wait_for_completion_timeout(&completion,
+ 					PM8001_TASK_TIMEOUT * HZ);
+ 			if (!ret || !phy->reset_success) {
+@@ -1275,8 +1256,8 @@ int pm8001_abort_task(struct sas_task *task)
+ 				/* 3. Wait for Port Reset complete or
+ 				 * Port reset TMO
+ 				 */
+-				PM8001_MSG_DBG(pm8001_ha,
+-				pm8001_printk("Waiting for Port reset\n"));
++				pm8001_dbg(pm8001_ha, MSG,
++					   "Waiting for Port reset\n");
+ 				ret = wait_for_completion_timeout(
+ 					&completion_reset,
+ 					PM8001_TASK_TIMEOUT * HZ);
+@@ -1355,9 +1336,8 @@ int pm8001_clear_task_set(struct domain_device *dev, u8 *lun)
+ 	struct pm8001_device *pm8001_dev = dev->lldd_dev;
+ 	struct pm8001_hba_info *pm8001_ha = pm8001_find_ha_by_dev(dev);
+ 
+-	PM8001_EH_DBG(pm8001_ha,
+-		pm8001_printk("I_T_L_Q clear task set[%x]\n",
+-		pm8001_dev->device_id));
++	pm8001_dbg(pm8001_ha, EH, "I_T_L_Q clear task set[%x]\n",
++		   pm8001_dev->device_id);
+ 	tmf_task.tmf = TMF_CLEAR_TASK_SET;
+ 	return pm8001_issue_ssp_tmf(dev, lun, &tmf_task);
+ }
+diff --git a/drivers/scsi/pm8001/pm8001_sas.h b/drivers/scsi/pm8001/pm8001_sas.h
+index 95663e1380833..5cd6fe6a7d2d9 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.h
++++ b/drivers/scsi/pm8001/pm8001_sas.h
+@@ -69,45 +69,16 @@
+ #define PM8001_DEV_LOGGING	0x80 /* development message logging */
+ #define PM8001_DEVIO_LOGGING	0x100 /* development io message logging */
+ #define PM8001_IOERR_LOGGING	0x200 /* development io err message logging */
+-#define pm8001_printk(format, arg...)	pr_info("%s:: %s  %d:" \
+-			format, pm8001_ha->name, __func__, __LINE__, ## arg)
+-#define PM8001_CHECK_LOGGING(HBA, LEVEL, CMD)	\
+-do {						\
+-	if (unlikely(HBA->logging_level & LEVEL))	\
+-		do {					\
+-			CMD;				\
+-		} while (0);				\
+-} while (0);
+ 
+-#define PM8001_EH_DBG(HBA, CMD)			\
+-	PM8001_CHECK_LOGGING(HBA, PM8001_EH_LOGGING, CMD)
++#define pm8001_printk(fmt, ...)						\
++	pr_info("%s:: %s  %d:" fmt,					\
++		pm8001_ha->name, __func__, __LINE__, ##__VA_ARGS__)
+ 
+-#define PM8001_INIT_DBG(HBA, CMD)		\
+-	PM8001_CHECK_LOGGING(HBA, PM8001_INIT_LOGGING, CMD)
+-
+-#define PM8001_DISC_DBG(HBA, CMD)		\
+-	PM8001_CHECK_LOGGING(HBA, PM8001_DISC_LOGGING, CMD)
+-
+-#define PM8001_IO_DBG(HBA, CMD)		\
+-	PM8001_CHECK_LOGGING(HBA, PM8001_IO_LOGGING, CMD)
+-
+-#define PM8001_FAIL_DBG(HBA, CMD)		\
+-	PM8001_CHECK_LOGGING(HBA, PM8001_FAIL_LOGGING, CMD)
+-
+-#define PM8001_IOCTL_DBG(HBA, CMD)		\
+-	PM8001_CHECK_LOGGING(HBA, PM8001_IOCTL_LOGGING, CMD)
+-
+-#define PM8001_MSG_DBG(HBA, CMD)		\
+-	PM8001_CHECK_LOGGING(HBA, PM8001_MSG_LOGGING, CMD)
+-
+-#define PM8001_DEV_DBG(HBA, CMD)		\
+-	PM8001_CHECK_LOGGING(HBA, PM8001_DEV_LOGGING, CMD)
+-
+-#define PM8001_DEVIO_DBG(HBA, CMD)		\
+-	PM8001_CHECK_LOGGING(HBA, PM8001_DEVIO_LOGGING, CMD)
+-
+-#define PM8001_IOERR_DBG(HBA, CMD)		\
+-	PM8001_CHECK_LOGGING(HBA, PM8001_IOERR_LOGGING, CMD)
++#define pm8001_dbg(HBA, level, fmt, ...)				\
++do {									\
++	if (unlikely((HBA)->logging_level & PM8001_##level##_LOGGING))	\
++		pm8001_printk(fmt, ##__VA_ARGS__);			\
++} while (0)
+ 
+ #define PM8001_USE_TASKLET
+ #define PM8001_USE_MSIX
+@@ -293,7 +264,7 @@ struct pm8001_device {
+ 	struct completion	*dcompletion;
+ 	struct completion	*setds_completion;
+ 	u32			device_id;
+-	u32			running_req;
++	atomic_t		running_req;
+ };
+ 
+ struct pm8001_prd_imt {
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 155382ce84698..055f7649676ec 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -58,9 +58,8 @@ int pm80xx_bar4_shift(struct pm8001_hba_info *pm8001_ha, u32 shift_value)
+ 		reg_val = pm8001_cr32(pm8001_ha, 0, MEMBASE_II_SHIFT_REGISTER);
+ 	} while ((reg_val != shift_value) && time_before(jiffies, start));
+ 	if (reg_val != shift_value) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("TIMEOUT:MEMBASE_II_SHIFT_REGISTER"
+-			" = 0x%x\n", reg_val));
++		pm8001_dbg(pm8001_ha, FAIL, "TIMEOUT:MEMBASE_II_SHIFT_REGISTER = 0x%x\n",
++			   reg_val);
+ 		return -1;
+ 	}
+ 	return 0;
+@@ -109,8 +108,8 @@ ssize_t pm80xx_get_fatal_dump(struct device *cdev,
+ 	}
+ 	/* initialize variables for very first call from host application */
+ 	if (pm8001_ha->forensic_info.data_buf.direct_offset == 0) {
+-		PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("forensic_info TYPE_NON_FATAL..............\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "forensic_info TYPE_NON_FATAL..............\n");
+ 		direct_data = (u8 *)fatal_error_data;
+ 		pm8001_ha->forensic_info.data_type = TYPE_NON_FATAL;
+ 		pm8001_ha->forensic_info.data_buf.direct_len = SYSFS_OFFSET;
+@@ -123,17 +122,13 @@ ssize_t pm80xx_get_fatal_dump(struct device *cdev,
+ 				MPI_FATAL_EDUMP_TABLE_SIGNATURE, 0x1234abcd);
+ 
+ 		pm8001_ha->forensic_info.data_buf.direct_data = direct_data;
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("ossaHwCB: status1 %d\n", status));
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("ossaHwCB: read_len 0x%x\n",
+-			pm8001_ha->forensic_info.data_buf.read_len));
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("ossaHwCB: direct_len 0x%x\n",
+-			pm8001_ha->forensic_info.data_buf.direct_len));
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("ossaHwCB: direct_offset 0x%x\n",
+-			pm8001_ha->forensic_info.data_buf.direct_offset));
++		pm8001_dbg(pm8001_ha, IO, "ossaHwCB: status1 %d\n", status);
++		pm8001_dbg(pm8001_ha, IO, "ossaHwCB: read_len 0x%x\n",
++			   pm8001_ha->forensic_info.data_buf.read_len);
++		pm8001_dbg(pm8001_ha, IO, "ossaHwCB: direct_len 0x%x\n",
++			   pm8001_ha->forensic_info.data_buf.direct_len);
++		pm8001_dbg(pm8001_ha, IO, "ossaHwCB: direct_offset 0x%x\n",
++			   pm8001_ha->forensic_info.data_buf.direct_offset);
+ 	}
+ 	if (pm8001_ha->forensic_info.data_buf.direct_offset == 0) {
+ 		/* start to get data */
+@@ -153,29 +148,24 @@ ssize_t pm80xx_get_fatal_dump(struct device *cdev,
+ 	 */
+ 	length_to_read =
+ 		accum_len - pm8001_ha->forensic_preserved_accumulated_transfer;
+-	PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("get_fatal_spcv: accum_len 0x%x\n", accum_len));
+-	PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("get_fatal_spcv: length_to_read 0x%x\n",
+-		length_to_read));
+-	PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("get_fatal_spcv: last_offset 0x%x\n",
+-		pm8001_ha->forensic_last_offset));
+-	PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("get_fatal_spcv: read_len 0x%x\n",
+-		pm8001_ha->forensic_info.data_buf.read_len));
+-	PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("get_fatal_spcv:: direct_len 0x%x\n",
+-		pm8001_ha->forensic_info.data_buf.direct_len));
+-	PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("get_fatal_spcv:: direct_offset 0x%x\n",
+-		pm8001_ha->forensic_info.data_buf.direct_offset));
++	pm8001_dbg(pm8001_ha, IO, "get_fatal_spcv: accum_len 0x%x\n",
++		   accum_len);
++	pm8001_dbg(pm8001_ha, IO, "get_fatal_spcv: length_to_read 0x%x\n",
++		   length_to_read);
++	pm8001_dbg(pm8001_ha, IO, "get_fatal_spcv: last_offset 0x%x\n",
++		   pm8001_ha->forensic_last_offset);
++	pm8001_dbg(pm8001_ha, IO, "get_fatal_spcv: read_len 0x%x\n",
++		   pm8001_ha->forensic_info.data_buf.read_len);
++	pm8001_dbg(pm8001_ha, IO, "get_fatal_spcv:: direct_len 0x%x\n",
++		   pm8001_ha->forensic_info.data_buf.direct_len);
++	pm8001_dbg(pm8001_ha, IO, "get_fatal_spcv:: direct_offset 0x%x\n",
++		   pm8001_ha->forensic_info.data_buf.direct_offset);
+ 
+ 	/* If accumulated length failed to read correctly fail the attempt.*/
+ 	if (accum_len == 0xFFFFFFFF) {
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("Possible PCI issue 0x%x not expected\n",
+-			accum_len));
++		pm8001_dbg(pm8001_ha, IO,
++			   "Possible PCI issue 0x%x not expected\n",
++			   accum_len);
+ 		return status;
+ 	}
+ 	/* If accumulated length is zero fail the attempt */
+@@ -239,8 +229,8 @@ moreData:
+ 			offset = (int)
+ 			((char *)pm8001_ha->forensic_info.data_buf.direct_data
+ 			- (char *)buf);
+-			PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("get_fatal_spcv:return1 0x%x\n", offset));
++			pm8001_dbg(pm8001_ha, IO,
++				   "get_fatal_spcv:return1 0x%x\n", offset);
+ 			return (char *)pm8001_ha->
+ 				forensic_info.data_buf.direct_data -
+ 				(char *)buf;
+@@ -262,8 +252,8 @@ moreData:
+ 			offset = (int)
+ 			((char *)pm8001_ha->forensic_info.data_buf.direct_data
+ 			- (char *)buf);
+-			PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("get_fatal_spcv:return2 0x%x\n", offset));
++			pm8001_dbg(pm8001_ha, IO,
++				   "get_fatal_spcv:return2 0x%x\n", offset);
+ 			return (char *)pm8001_ha->
+ 				forensic_info.data_buf.direct_data -
+ 				(char *)buf;
+@@ -289,8 +279,8 @@ moreData:
+ 		offset = (int)
+ 			((char *)pm8001_ha->forensic_info.data_buf.direct_data
+ 			- (char *)buf);
+-		PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("get_fatal_spcv: return3 0x%x\n", offset));
++		pm8001_dbg(pm8001_ha, IO, "get_fatal_spcv: return3 0x%x\n",
++			   offset);
+ 		return (char *)pm8001_ha->forensic_info.data_buf.direct_data -
+ 			(char *)buf;
+ 	}
+@@ -327,9 +317,9 @@ moreData:
+ 			} while ((reg_val) && time_before(jiffies, start));
+ 
+ 			if (reg_val != 0) {
+-				PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
+-				"TIMEOUT:MPI_FATAL_EDUMP_TABLE_HDSHAKE 0x%x\n",
+-				reg_val));
++				pm8001_dbg(pm8001_ha, FAIL,
++					   "TIMEOUT:MPI_FATAL_EDUMP_TABLE_HDSHAKE 0x%x\n",
++					   reg_val);
+ 			       /* Fail the dump if a timeout occurs */
+ 				pm8001_ha->forensic_info.data_buf.direct_data +=
+ 				sprintf(
+@@ -351,9 +341,9 @@ moreData:
+ 					time_before(jiffies, start));
+ 
+ 			if (reg_val < 2) {
+-				PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
+-				"TIMEOUT:MPI_FATAL_EDUMP_TABLE_STATUS = 0x%x\n",
+-				reg_val));
++				pm8001_dbg(pm8001_ha, FAIL,
++					   "TIMEOUT:MPI_FATAL_EDUMP_TABLE_STATUS = 0x%x\n",
++					   reg_val);
+ 				/* Fail the dump if a timeout occurs */
+ 				pm8001_ha->forensic_info.data_buf.direct_data +=
+ 				sprintf(
+@@ -387,8 +377,7 @@ moreData:
+ 	}
+ 	offset = (int)((char *)pm8001_ha->forensic_info.data_buf.direct_data
+ 			- (char *)buf);
+-	PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("get_fatal_spcv: return4 0x%x\n", offset));
++	pm8001_dbg(pm8001_ha, IO, "get_fatal_spcv: return4 0x%x\n", offset);
+ 	return (char *)pm8001_ha->forensic_info.data_buf.direct_data -
+ 		(char *)buf;
+ }
+@@ -419,8 +408,7 @@ ssize_t pm80xx_get_non_fatal_dump(struct device *cdev,
+ 				PAGE_SIZE, "Not supported for SPC controller");
+ 			return 0;
+ 		}
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("forensic_info TYPE_NON_FATAL...\n"));
++		pm8001_dbg(pm8001_ha, IO, "forensic_info TYPE_NON_FATAL...\n");
+ 		/*
+ 		 * Step 1: Write the host buffer parameters in the MPI Fatal and
+ 		 * Non-Fatal Error Dump Capture Table.This is the buffer
+@@ -581,24 +569,24 @@ static void read_main_config_table(struct pm8001_hba_info *pm8001_ha)
+ 	pm8001_ha->main_cfg_tbl.pm80xx_tbl.inc_fw_version =
+ 		pm8001_mr32(address, MAIN_MPI_INACTIVE_FW_VERSION);
+ 
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"Main cfg table: sign:%x interface rev:%x fw_rev:%x\n",
+-		pm8001_ha->main_cfg_tbl.pm80xx_tbl.signature,
+-		pm8001_ha->main_cfg_tbl.pm80xx_tbl.interface_rev,
+-		pm8001_ha->main_cfg_tbl.pm80xx_tbl.firmware_rev));
+-
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"table offset: gst:%x iq:%x oq:%x int vec:%x phy attr:%x\n",
+-		pm8001_ha->main_cfg_tbl.pm80xx_tbl.gst_offset,
+-		pm8001_ha->main_cfg_tbl.pm80xx_tbl.inbound_queue_offset,
+-		pm8001_ha->main_cfg_tbl.pm80xx_tbl.outbound_queue_offset,
+-		pm8001_ha->main_cfg_tbl.pm80xx_tbl.int_vec_table_offset,
+-		pm8001_ha->main_cfg_tbl.pm80xx_tbl.phy_attr_table_offset));
+-
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"Main cfg table; ila rev:%x Inactive fw rev:%x\n",
+-		pm8001_ha->main_cfg_tbl.pm80xx_tbl.ila_version,
+-		pm8001_ha->main_cfg_tbl.pm80xx_tbl.inc_fw_version));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "Main cfg table: sign:%x interface rev:%x fw_rev:%x\n",
++		   pm8001_ha->main_cfg_tbl.pm80xx_tbl.signature,
++		   pm8001_ha->main_cfg_tbl.pm80xx_tbl.interface_rev,
++		   pm8001_ha->main_cfg_tbl.pm80xx_tbl.firmware_rev);
++
++	pm8001_dbg(pm8001_ha, DEV,
++		   "table offset: gst:%x iq:%x oq:%x int vec:%x phy attr:%x\n",
++		   pm8001_ha->main_cfg_tbl.pm80xx_tbl.gst_offset,
++		   pm8001_ha->main_cfg_tbl.pm80xx_tbl.inbound_queue_offset,
++		   pm8001_ha->main_cfg_tbl.pm80xx_tbl.outbound_queue_offset,
++		   pm8001_ha->main_cfg_tbl.pm80xx_tbl.int_vec_table_offset,
++		   pm8001_ha->main_cfg_tbl.pm80xx_tbl.phy_attr_table_offset);
++
++	pm8001_dbg(pm8001_ha, DEV,
++		   "Main cfg table; ila rev:%x Inactive fw rev:%x\n",
++		   pm8001_ha->main_cfg_tbl.pm80xx_tbl.ila_version,
++		   pm8001_ha->main_cfg_tbl.pm80xx_tbl.inc_fw_version);
+ }
+ 
+ /**
+@@ -808,10 +796,10 @@ static void init_default_table_values(struct pm8001_hba_info *pm8001_ha)
+ 		pm8001_ha->inbnd_q_tbl[i].producer_idx		= 0;
+ 		pm8001_ha->inbnd_q_tbl[i].consumer_index	= 0;
+ 
+-		PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-			"IQ %d pi_bar 0x%x pi_offset 0x%x\n", i,
+-			pm8001_ha->inbnd_q_tbl[i].pi_pci_bar,
+-			pm8001_ha->inbnd_q_tbl[i].pi_offset));
++		pm8001_dbg(pm8001_ha, DEV,
++			   "IQ %d pi_bar 0x%x pi_offset 0x%x\n", i,
++			   pm8001_ha->inbnd_q_tbl[i].pi_pci_bar,
++			   pm8001_ha->inbnd_q_tbl[i].pi_offset);
+ 	}
+ 	for (i = 0; i < pm8001_ha->max_q_num; i++) {
+ 		pm8001_ha->outbnd_q_tbl[i].element_size_cnt	=
+@@ -841,10 +829,10 @@ static void init_default_table_values(struct pm8001_hba_info *pm8001_ha)
+ 		pm8001_ha->outbnd_q_tbl[i].consumer_idx		= 0;
+ 		pm8001_ha->outbnd_q_tbl[i].producer_index	= 0;
+ 
+-		PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-			"OQ %d ci_bar 0x%x ci_offset 0x%x\n", i,
+-			pm8001_ha->outbnd_q_tbl[i].ci_pci_bar,
+-			pm8001_ha->outbnd_q_tbl[i].ci_offset));
++		pm8001_dbg(pm8001_ha, DEV,
++			   "OQ %d ci_bar 0x%x ci_offset 0x%x\n", i,
++			   pm8001_ha->outbnd_q_tbl[i].ci_pci_bar,
++			   pm8001_ha->outbnd_q_tbl[i].ci_offset);
+ 	}
+ }
+ 
+@@ -878,9 +866,9 @@ static void update_main_config_table(struct pm8001_hba_info *pm8001_ha)
+ 					((pm8001_ha->max_q_num - 1) << 8);
+ 	pm8001_mw32(address, MAIN_FATAL_ERROR_INTERRUPT,
+ 		pm8001_ha->main_cfg_tbl.pm80xx_tbl.fatal_err_interrupt);
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"Updated Fatal error interrupt vector 0x%x\n",
+-		pm8001_mr32(address, MAIN_FATAL_ERROR_INTERRUPT)));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "Updated Fatal error interrupt vector 0x%x\n",
++		   pm8001_mr32(address, MAIN_FATAL_ERROR_INTERRUPT));
+ 
+ 	pm8001_mw32(address, MAIN_EVENT_CRC_CHECK,
+ 		pm8001_ha->main_cfg_tbl.pm80xx_tbl.crc_core_dump);
+@@ -891,9 +879,9 @@ static void update_main_config_table(struct pm8001_hba_info *pm8001_ha)
+ 	pm8001_ha->main_cfg_tbl.pm80xx_tbl.gpio_led_mapping |= 0x20000000;
+ 	pm8001_mw32(address, MAIN_GPIO_LED_FLAGS_OFFSET,
+ 		pm8001_ha->main_cfg_tbl.pm80xx_tbl.gpio_led_mapping);
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"Programming DW 0x21 in main cfg table with 0x%x\n",
+-		pm8001_mr32(address, MAIN_GPIO_LED_FLAGS_OFFSET)));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "Programming DW 0x21 in main cfg table with 0x%x\n",
++		   pm8001_mr32(address, MAIN_GPIO_LED_FLAGS_OFFSET));
+ 
+ 	pm8001_mw32(address, MAIN_PORT_RECOVERY_TIMER,
+ 		pm8001_ha->main_cfg_tbl.pm80xx_tbl.port_recovery_timer);
+@@ -934,20 +922,20 @@ static void update_inbnd_queue_table(struct pm8001_hba_info *pm8001_ha,
+ 	pm8001_mw32(address, offset + IB_CI_BASE_ADDR_LO_OFFSET,
+ 		pm8001_ha->inbnd_q_tbl[number].ci_lower_base_addr);
+ 
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"IQ %d: Element pri size 0x%x\n",
+-		number,
+-		pm8001_ha->inbnd_q_tbl[number].element_pri_size_cnt));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "IQ %d: Element pri size 0x%x\n",
++		   number,
++		   pm8001_ha->inbnd_q_tbl[number].element_pri_size_cnt);
+ 
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"IQ upr base addr 0x%x IQ lwr base addr 0x%x\n",
+-		pm8001_ha->inbnd_q_tbl[number].upper_base_addr,
+-		pm8001_ha->inbnd_q_tbl[number].lower_base_addr));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "IQ upr base addr 0x%x IQ lwr base addr 0x%x\n",
++		   pm8001_ha->inbnd_q_tbl[number].upper_base_addr,
++		   pm8001_ha->inbnd_q_tbl[number].lower_base_addr);
+ 
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"CI upper base addr 0x%x CI lower base addr 0x%x\n",
+-		pm8001_ha->inbnd_q_tbl[number].ci_upper_base_addr,
+-		pm8001_ha->inbnd_q_tbl[number].ci_lower_base_addr));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "CI upper base addr 0x%x CI lower base addr 0x%x\n",
++		   pm8001_ha->inbnd_q_tbl[number].ci_upper_base_addr,
++		   pm8001_ha->inbnd_q_tbl[number].ci_lower_base_addr);
+ }
+ 
+ /**
+@@ -973,20 +961,20 @@ static void update_outbnd_queue_table(struct pm8001_hba_info *pm8001_ha,
+ 	pm8001_mw32(address, offset + OB_INTERRUPT_COALES_OFFSET,
+ 		pm8001_ha->outbnd_q_tbl[number].interrup_vec_cnt_delay);
+ 
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"OQ %d: Element pri size 0x%x\n",
+-		number,
+-		pm8001_ha->outbnd_q_tbl[number].element_size_cnt));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "OQ %d: Element pri size 0x%x\n",
++		   number,
++		   pm8001_ha->outbnd_q_tbl[number].element_size_cnt);
+ 
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"OQ upr base addr 0x%x OQ lwr base addr 0x%x\n",
+-		pm8001_ha->outbnd_q_tbl[number].upper_base_addr,
+-		pm8001_ha->outbnd_q_tbl[number].lower_base_addr));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "OQ upr base addr 0x%x OQ lwr base addr 0x%x\n",
++		   pm8001_ha->outbnd_q_tbl[number].upper_base_addr,
++		   pm8001_ha->outbnd_q_tbl[number].lower_base_addr);
+ 
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"PI upper base addr 0x%x PI lower base addr 0x%x\n",
+-		pm8001_ha->outbnd_q_tbl[number].pi_upper_base_addr,
+-		pm8001_ha->outbnd_q_tbl[number].pi_lower_base_addr));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "PI upper base addr 0x%x PI lower base addr 0x%x\n",
++		   pm8001_ha->outbnd_q_tbl[number].pi_upper_base_addr,
++		   pm8001_ha->outbnd_q_tbl[number].pi_lower_base_addr);
+ }
+ 
+ /**
+@@ -1016,8 +1004,9 @@ static int mpi_init_check(struct pm8001_hba_info *pm8001_ha)
+ 
+ 	if (!max_wait_count) {
+ 		/* additional check */
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
+-			"Inb doorbell clear not toggled[value:%x]\n", value));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "Inb doorbell clear not toggled[value:%x]\n",
++			   value);
+ 		return -EBUSY;
+ 	}
+ 	/* check the MPI-State for initialization upto 100ms*/
+@@ -1068,9 +1057,9 @@ static int check_fw_ready(struct pm8001_hba_info *pm8001_ha)
+ 	if (!max_wait_count)
+ 		ret = -1;
+ 	else {
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" ila ready status in %d millisec\n",
+-				(max_wait_time - max_wait_count)));
++		pm8001_dbg(pm8001_ha, MSG,
++			   " ila ready status in %d millisec\n",
++			   (max_wait_time - max_wait_count));
+ 	}
+ 
+ 	/* check RAAE status */
+@@ -1083,9 +1072,9 @@ static int check_fw_ready(struct pm8001_hba_info *pm8001_ha)
+ 	if (!max_wait_count)
+ 		ret = -1;
+ 	else {
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" raae ready status in %d millisec\n",
+-					(max_wait_time - max_wait_count)));
++		pm8001_dbg(pm8001_ha, MSG,
++			   " raae ready status in %d millisec\n",
++			   (max_wait_time - max_wait_count));
+ 	}
+ 
+ 	/* check iop0 status */
+@@ -1098,9 +1087,9 @@ static int check_fw_ready(struct pm8001_hba_info *pm8001_ha)
+ 	if (!max_wait_count)
+ 		ret = -1;
+ 	else {
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" iop0 ready status in %d millisec\n",
+-				(max_wait_time - max_wait_count)));
++		pm8001_dbg(pm8001_ha, MSG,
++			   " iop0 ready status in %d millisec\n",
++			   (max_wait_time - max_wait_count));
+ 	}
+ 
+ 	/* check iop1 status only for 16 port controllers */
+@@ -1116,9 +1105,9 @@ static int check_fw_ready(struct pm8001_hba_info *pm8001_ha)
+ 		if (!max_wait_count)
+ 			ret = -1;
+ 		else {
+-			PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-				"iop1 ready status in %d millisec\n",
+-				(max_wait_time - max_wait_count)));
++			pm8001_dbg(pm8001_ha, MSG,
++				   "iop1 ready status in %d millisec\n",
++				   (max_wait_time - max_wait_count));
+ 		}
+ 	}
+ 
+@@ -1136,13 +1125,11 @@ static void init_pci_device_addresses(struct pm8001_hba_info *pm8001_ha)
+ 	value = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_0);
+ 	offset = value & 0x03FFFFFF; /* scratch pad 0 TBL address */
+ 
+-	PM8001_DEV_DBG(pm8001_ha,
+-		pm8001_printk("Scratchpad 0 Offset: 0x%x value 0x%x\n",
+-				offset, value));
++	pm8001_dbg(pm8001_ha, DEV, "Scratchpad 0 Offset: 0x%x value 0x%x\n",
++		   offset, value);
+ 	pcilogic = (value & 0xFC000000) >> 26;
+ 	pcibar = get_pci_bar_index(pcilogic);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("Scratchpad 0 PCI BAR: %d\n", pcibar));
++	pm8001_dbg(pm8001_ha, INIT, "Scratchpad 0 PCI BAR: %d\n", pcibar);
+ 	pm8001_ha->main_cfg_tbl_addr = base_addr =
+ 		pm8001_ha->io_mem[pcibar].memvirtaddr + offset;
+ 	pm8001_ha->general_stat_tbl_addr =
+@@ -1164,33 +1151,25 @@ static void init_pci_device_addresses(struct pm8001_hba_info *pm8001_ha)
+ 		base_addr + (pm8001_cr32(pm8001_ha, pcibar, offset + 0xA0) &
+ 					0xFFFFFF);
+ 
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("GST OFFSET 0x%x\n",
+-			pm8001_cr32(pm8001_ha, pcibar, offset + 0x18)));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("INBND OFFSET 0x%x\n",
+-			pm8001_cr32(pm8001_ha, pcibar, offset + 0x1C)));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("OBND OFFSET 0x%x\n",
+-			pm8001_cr32(pm8001_ha, pcibar, offset + 0x20)));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("IVT OFFSET 0x%x\n",
+-			pm8001_cr32(pm8001_ha, pcibar, offset + 0x8C)));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("PSPA OFFSET 0x%x\n",
+-			pm8001_cr32(pm8001_ha, pcibar, offset + 0x90)));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("addr - main cfg %p general status %p\n",
+-			pm8001_ha->main_cfg_tbl_addr,
+-			pm8001_ha->general_stat_tbl_addr));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("addr - inbnd %p obnd %p\n",
+-			pm8001_ha->inbnd_q_tbl_addr,
+-			pm8001_ha->outbnd_q_tbl_addr));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("addr - pspa %p ivt %p\n",
+-			pm8001_ha->pspa_q_tbl_addr,
+-			pm8001_ha->ivt_tbl_addr));
++	pm8001_dbg(pm8001_ha, INIT, "GST OFFSET 0x%x\n",
++		   pm8001_cr32(pm8001_ha, pcibar, offset + 0x18));
++	pm8001_dbg(pm8001_ha, INIT, "INBND OFFSET 0x%x\n",
++		   pm8001_cr32(pm8001_ha, pcibar, offset + 0x1C));
++	pm8001_dbg(pm8001_ha, INIT, "OBND OFFSET 0x%x\n",
++		   pm8001_cr32(pm8001_ha, pcibar, offset + 0x20));
++	pm8001_dbg(pm8001_ha, INIT, "IVT OFFSET 0x%x\n",
++		   pm8001_cr32(pm8001_ha, pcibar, offset + 0x8C));
++	pm8001_dbg(pm8001_ha, INIT, "PSPA OFFSET 0x%x\n",
++		   pm8001_cr32(pm8001_ha, pcibar, offset + 0x90));
++	pm8001_dbg(pm8001_ha, INIT, "addr - main cfg %p general status %p\n",
++		   pm8001_ha->main_cfg_tbl_addr,
++		   pm8001_ha->general_stat_tbl_addr);
++	pm8001_dbg(pm8001_ha, INIT, "addr - inbnd %p obnd %p\n",
++		   pm8001_ha->inbnd_q_tbl_addr,
++		   pm8001_ha->outbnd_q_tbl_addr);
++	pm8001_dbg(pm8001_ha, INIT, "addr - pspa %p ivt %p\n",
++		   pm8001_ha->pspa_q_tbl_addr,
++		   pm8001_ha->ivt_tbl_addr);
+ }
+ 
+ /**
+@@ -1224,9 +1203,9 @@ pm80xx_set_thermal_config(struct pm8001_hba_info *pm8001_ha)
+ 				(THERMAL_ENABLE << 8) | page_code;
+ 	payload.cfg_pg[1] = (LTEMPHIL << 24) | (RTEMPHIL << 8);
+ 
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"Setting up thermal config. cfg_pg 0 0x%x cfg_pg 1 0x%x\n",
+-		payload.cfg_pg[0], payload.cfg_pg[1]));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "Setting up thermal config. cfg_pg 0 0x%x cfg_pg 1 0x%x\n",
++		   payload.cfg_pg[0], payload.cfg_pg[1]);
+ 
+ 	rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+ 			sizeof(payload), 0);
+@@ -1281,32 +1260,24 @@ pm80xx_set_sas_protocol_timer_config(struct pm8001_hba_info *pm8001_ha)
+ 						| SAS_COPNRJT_RTRY_THR;
+ 	SASConfigPage.MAX_AIP =  SAS_MAX_AIP;
+ 
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("SASConfigPage.pageCode "
+-			"0x%08x\n", SASConfigPage.pageCode));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("SASConfigPage.MST_MSI "
+-			" 0x%08x\n", SASConfigPage.MST_MSI));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("SASConfigPage.STP_SSP_MCT_TMO "
+-			" 0x%08x\n", SASConfigPage.STP_SSP_MCT_TMO));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("SASConfigPage.STP_FRM_TMO "
+-			" 0x%08x\n", SASConfigPage.STP_FRM_TMO));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("SASConfigPage.STP_IDLE_TMO "
+-			" 0x%08x\n", SASConfigPage.STP_IDLE_TMO));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("SASConfigPage.OPNRJT_RTRY_INTVL "
+-			" 0x%08x\n", SASConfigPage.OPNRJT_RTRY_INTVL));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO "
+-			" 0x%08x\n", SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO));
+-	PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR "
+-			" 0x%08x\n", SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR));
+-	PM8001_INIT_DBG(pm8001_ha, pm8001_printk("SASConfigPage.MAX_AIP "
+-			" 0x%08x\n", SASConfigPage.MAX_AIP));
++	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.pageCode 0x%08x\n",
++		   SASConfigPage.pageCode);
++	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.MST_MSI  0x%08x\n",
++		   SASConfigPage.MST_MSI);
++	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.STP_SSP_MCT_TMO  0x%08x\n",
++		   SASConfigPage.STP_SSP_MCT_TMO);
++	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.STP_FRM_TMO  0x%08x\n",
++		   SASConfigPage.STP_FRM_TMO);
++	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.STP_IDLE_TMO  0x%08x\n",
++		   SASConfigPage.STP_IDLE_TMO);
++	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.OPNRJT_RTRY_INTVL  0x%08x\n",
++		   SASConfigPage.OPNRJT_RTRY_INTVL);
++	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO  0x%08x\n",
++		   SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO);
++	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR  0x%08x\n",
++		   SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR);
++	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.MAX_AIP  0x%08x\n",
++		   SASConfigPage.MAX_AIP);
+ 
+ 	memcpy(&payload.cfg_pg, &SASConfigPage,
+ 			 sizeof(SASProtocolTimerConfig_t));
+@@ -1346,18 +1317,18 @@ pm80xx_get_encrypt_info(struct pm8001_hba_info *pm8001_ha)
+ 						SCRATCH_PAD3_SMB_ENABLED)
+ 			pm8001_ha->encrypt_info.sec_mode = SEC_MODE_SMB;
+ 		pm8001_ha->encrypt_info.status = 0;
+-		PM8001_INIT_DBG(pm8001_ha, pm8001_printk(
+-			"Encryption: SCRATCH_PAD3_ENC_READY 0x%08X."
+-			"Cipher mode 0x%x Sec mode 0x%x status 0x%x\n",
+-			scratch3_value, pm8001_ha->encrypt_info.cipher_mode,
+-			pm8001_ha->encrypt_info.sec_mode,
+-			pm8001_ha->encrypt_info.status));
++		pm8001_dbg(pm8001_ha, INIT,
++			   "Encryption: SCRATCH_PAD3_ENC_READY 0x%08X.Cipher mode 0x%x Sec mode 0x%x status 0x%x\n",
++			   scratch3_value,
++			   pm8001_ha->encrypt_info.cipher_mode,
++			   pm8001_ha->encrypt_info.sec_mode,
++			   pm8001_ha->encrypt_info.status);
+ 		ret = 0;
+ 	} else if ((scratch3_value & SCRATCH_PAD3_ENC_READY) ==
+ 					SCRATCH_PAD3_ENC_DISABLED) {
+-		PM8001_INIT_DBG(pm8001_ha, pm8001_printk(
+-			"Encryption: SCRATCH_PAD3_ENC_DISABLED 0x%08X\n",
+-			scratch3_value));
++		pm8001_dbg(pm8001_ha, INIT,
++			   "Encryption: SCRATCH_PAD3_ENC_DISABLED 0x%08X\n",
++			   scratch3_value);
+ 		pm8001_ha->encrypt_info.status = 0xFFFFFFFF;
+ 		pm8001_ha->encrypt_info.cipher_mode = 0;
+ 		pm8001_ha->encrypt_info.sec_mode = 0;
+@@ -1377,12 +1348,12 @@ pm80xx_get_encrypt_info(struct pm8001_hba_info *pm8001_ha)
+ 		if ((scratch3_value & SCRATCH_PAD3_SM_MASK) ==
+ 					SCRATCH_PAD3_SMB_ENABLED)
+ 			pm8001_ha->encrypt_info.sec_mode = SEC_MODE_SMB;
+-		PM8001_INIT_DBG(pm8001_ha, pm8001_printk(
+-			"Encryption: SCRATCH_PAD3_DIS_ERR 0x%08X."
+-			"Cipher mode 0x%x sec mode 0x%x status 0x%x\n",
+-			scratch3_value, pm8001_ha->encrypt_info.cipher_mode,
+-			pm8001_ha->encrypt_info.sec_mode,
+-			pm8001_ha->encrypt_info.status));
++		pm8001_dbg(pm8001_ha, INIT,
++			   "Encryption: SCRATCH_PAD3_DIS_ERR 0x%08X.Cipher mode 0x%x sec mode 0x%x status 0x%x\n",
++			   scratch3_value,
++			   pm8001_ha->encrypt_info.cipher_mode,
++			   pm8001_ha->encrypt_info.sec_mode,
++			   pm8001_ha->encrypt_info.status);
+ 	} else if ((scratch3_value & SCRATCH_PAD3_ENC_MASK) ==
+ 				 SCRATCH_PAD3_ENC_ENA_ERR) {
+ 
+@@ -1400,12 +1371,12 @@ pm80xx_get_encrypt_info(struct pm8001_hba_info *pm8001_ha)
+ 					SCRATCH_PAD3_SMB_ENABLED)
+ 			pm8001_ha->encrypt_info.sec_mode = SEC_MODE_SMB;
+ 
+-		PM8001_INIT_DBG(pm8001_ha, pm8001_printk(
+-			"Encryption: SCRATCH_PAD3_ENA_ERR 0x%08X."
+-			"Cipher mode 0x%x sec mode 0x%x status 0x%x\n",
+-			scratch3_value, pm8001_ha->encrypt_info.cipher_mode,
+-			pm8001_ha->encrypt_info.sec_mode,
+-			pm8001_ha->encrypt_info.status));
++		pm8001_dbg(pm8001_ha, INIT,
++			   "Encryption: SCRATCH_PAD3_ENA_ERR 0x%08X.Cipher mode 0x%x sec mode 0x%x status 0x%x\n",
++			   scratch3_value,
++			   pm8001_ha->encrypt_info.cipher_mode,
++			   pm8001_ha->encrypt_info.sec_mode,
++			   pm8001_ha->encrypt_info.status);
+ 	}
+ 	return ret;
+ }
+@@ -1435,9 +1406,9 @@ static int pm80xx_encrypt_update(struct pm8001_hba_info *pm8001_ha)
+ 	payload.new_curidx_ksop = ((1 << 24) | (1 << 16) | (1 << 8) |
+ 					KEK_MGMT_SUBOP_KEYCARDUPDATE);
+ 
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"Saving Encryption info to flash. payload 0x%x\n",
+-		payload.new_curidx_ksop));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "Saving Encryption info to flash. payload 0x%x\n",
++		   payload.new_curidx_ksop);
+ 
+ 	rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+ 			sizeof(payload), 0);
+@@ -1458,8 +1429,7 @@ static int pm80xx_chip_init(struct pm8001_hba_info *pm8001_ha)
+ 
+ 	/* check the firmware status */
+ 	if (-1 == check_fw_ready(pm8001_ha)) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("Firmware is not ready!\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "Firmware is not ready!\n");
+ 		return -EBUSY;
+ 	}
+ 
+@@ -1483,8 +1453,7 @@ static int pm80xx_chip_init(struct pm8001_hba_info *pm8001_ha)
+ 	}
+ 	/* notify firmware update finished and check initialization status */
+ 	if (0 == mpi_init_check(pm8001_ha)) {
+-		PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("MPI initialize successful!\n"));
++		pm8001_dbg(pm8001_ha, INIT, "MPI initialize successful!\n");
+ 	} else
+ 		return -EBUSY;
+ 
+@@ -1493,16 +1462,13 @@ static int pm80xx_chip_init(struct pm8001_hba_info *pm8001_ha)
+ 
+ 	/* Check for encryption */
+ 	if (pm8001_ha->chip->encrypt) {
+-		PM8001_INIT_DBG(pm8001_ha,
+-			pm8001_printk("Checking for encryption\n"));
++		pm8001_dbg(pm8001_ha, INIT, "Checking for encryption\n");
+ 		ret = pm80xx_get_encrypt_info(pm8001_ha);
+ 		if (ret == -1) {
+-			PM8001_INIT_DBG(pm8001_ha,
+-				pm8001_printk("Encryption error !!\n"));
++			pm8001_dbg(pm8001_ha, INIT, "Encryption error !!\n");
+ 			if (pm8001_ha->encrypt_info.status == 0x81) {
+-				PM8001_INIT_DBG(pm8001_ha, pm8001_printk(
+-					"Encryption enabled with error."
+-					"Saving encryption key to flash\n"));
++				pm8001_dbg(pm8001_ha, INIT,
++					   "Encryption enabled with error.Saving encryption key to flash\n");
+ 				pm80xx_encrypt_update(pm8001_ha);
+ 			}
+ 		}
+@@ -1533,8 +1499,7 @@ static int mpi_uninit_check(struct pm8001_hba_info *pm8001_ha)
+ 	} while ((value != 0) && (--max_wait_count));
+ 
+ 	if (!max_wait_count) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("TIMEOUT:IBDB value/=%x\n", value));
++		pm8001_dbg(pm8001_ha, FAIL, "TIMEOUT:IBDB value/=%x\n", value);
+ 		return -1;
+ 	}
+ 
+@@ -1551,9 +1516,8 @@ static int mpi_uninit_check(struct pm8001_hba_info *pm8001_ha)
+ 			break;
+ 	} while (--max_wait_count);
+ 	if (!max_wait_count) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk(" TIME OUT MPI State = 0x%x\n",
+-				gst_len_mpistate & GST_MPI_STATE_MASK));
++		pm8001_dbg(pm8001_ha, FAIL, " TIME OUT MPI State = 0x%x\n",
++			   gst_len_mpistate & GST_MPI_STATE_MASK);
+ 		return -1;
+ 	}
+ 
+@@ -1581,9 +1545,9 @@ pm80xx_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 			u32 r1 = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1);
+ 			u32 r2 = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_2);
+ 			u32 r3 = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_3);
+-			PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
+-				"MPI state is not ready scratch: %x:%x:%x:%x\n",
+-				r0, r1, r2, r3));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "MPI state is not ready scratch: %x:%x:%x:%x\n",
++				   r0, r1, r2, r3);
+ 			/* if things aren't ready but the bootloader is ok then
+ 			 * try the reset anyway.
+ 			 */
+@@ -1593,25 +1557,25 @@ pm80xx_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 	}
+ 	/* checked for reset register normal state; 0x0 */
+ 	regval = pm8001_cr32(pm8001_ha, 0, SPC_REG_SOFT_RESET);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("reset register before write : 0x%x\n", regval));
++	pm8001_dbg(pm8001_ha, INIT, "reset register before write : 0x%x\n",
++		   regval);
+ 
+ 	pm8001_cw32(pm8001_ha, 0, SPC_REG_SOFT_RESET, SPCv_NORMAL_RESET_VALUE);
+ 	msleep(500);
+ 
+ 	regval = pm8001_cr32(pm8001_ha, 0, SPC_REG_SOFT_RESET);
+-	PM8001_INIT_DBG(pm8001_ha,
+-	pm8001_printk("reset register after write 0x%x\n", regval));
++	pm8001_dbg(pm8001_ha, INIT, "reset register after write 0x%x\n",
++		   regval);
+ 
+ 	if ((regval & SPCv_SOFT_RESET_READ_MASK) ==
+ 			SPCv_SOFT_RESET_NORMAL_RESET_OCCURED) {
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" soft reset successful [regval: 0x%x]\n",
+-					regval));
++		pm8001_dbg(pm8001_ha, MSG,
++			   " soft reset successful [regval: 0x%x]\n",
++			   regval);
+ 	} else {
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" soft reset failed [regval: 0x%x]\n",
+-					regval));
++		pm8001_dbg(pm8001_ha, MSG,
++			   " soft reset failed [regval: 0x%x]\n",
++			   regval);
+ 
+ 		/* check bootloader is successfully executed or in HDA mode */
+ 		bootloader_state =
+@@ -1619,28 +1583,27 @@ pm80xx_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 			SCRATCH_PAD1_BOOTSTATE_MASK;
+ 
+ 		if (bootloader_state == SCRATCH_PAD1_BOOTSTATE_HDA_SEEPROM) {
+-			PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-				"Bootloader state - HDA mode SEEPROM\n"));
++			pm8001_dbg(pm8001_ha, MSG,
++				   "Bootloader state - HDA mode SEEPROM\n");
+ 		} else if (bootloader_state ==
+ 				SCRATCH_PAD1_BOOTSTATE_HDA_BOOTSTRAP) {
+-			PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-				"Bootloader state - HDA mode Bootstrap Pin\n"));
++			pm8001_dbg(pm8001_ha, MSG,
++				   "Bootloader state - HDA mode Bootstrap Pin\n");
+ 		} else if (bootloader_state ==
+ 				SCRATCH_PAD1_BOOTSTATE_HDA_SOFTRESET) {
+-			PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-				"Bootloader state - HDA mode soft reset\n"));
++			pm8001_dbg(pm8001_ha, MSG,
++				   "Bootloader state - HDA mode soft reset\n");
+ 		} else if (bootloader_state ==
+ 					SCRATCH_PAD1_BOOTSTATE_CRIT_ERROR) {
+-			PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-				"Bootloader state-HDA mode critical error\n"));
++			pm8001_dbg(pm8001_ha, MSG,
++				   "Bootloader state-HDA mode critical error\n");
+ 		}
+ 		return -EBUSY;
+ 	}
+ 
+ 	/* check the firmware status after reset */
+ 	if (-1 == check_fw_ready(pm8001_ha)) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("Firmware is not ready!\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "Firmware is not ready!\n");
+ 		/* check iButton feature support for motherboard controller */
+ 		if (pm8001_ha->pdev->subsystem_vendor !=
+ 			PCI_VENDOR_ID_ADAPTEC2 &&
+@@ -1652,21 +1615,18 @@ pm80xx_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
+ 			ibutton1 = pm8001_cr32(pm8001_ha, 0,
+ 					MSGU_HOST_SCRATCH_PAD_7);
+ 			if (!ibutton0 && !ibutton1) {
+-				PM8001_FAIL_DBG(pm8001_ha,
+-					pm8001_printk("iButton Feature is"
+-					" not Available!!!\n"));
++				pm8001_dbg(pm8001_ha, FAIL,
++					   "iButton Feature is not Available!!!\n");
+ 				return -EBUSY;
+ 			}
+ 			if (ibutton0 == 0xdeadbeef && ibutton1 == 0xdeadbeef) {
+-				PM8001_FAIL_DBG(pm8001_ha,
+-					pm8001_printk("CRC Check for iButton"
+-					" Feature Failed!!!\n"));
++				pm8001_dbg(pm8001_ha, FAIL,
++					   "CRC Check for iButton Feature Failed!!!\n");
+ 				return -EBUSY;
+ 			}
+ 		}
+ 	}
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("SPCv soft reset Complete\n"));
++	pm8001_dbg(pm8001_ha, INIT, "SPCv soft reset Complete\n");
+ 	return 0;
+ }
+ 
+@@ -1674,13 +1634,11 @@ static void pm80xx_hw_chip_rst(struct pm8001_hba_info *pm8001_ha)
+ {
+ 	u32 i;
+ 
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("chip reset start\n"));
++	pm8001_dbg(pm8001_ha, INIT, "chip reset start\n");
+ 
+ 	/* do SPCv chip reset. */
+ 	pm8001_cw32(pm8001_ha, 0, SPC_REG_SOFT_RESET, 0x11);
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("SPC soft reset Complete\n"));
++	pm8001_dbg(pm8001_ha, INIT, "SPC soft reset Complete\n");
+ 
+ 	/* Check this ..whether delay is required or no */
+ 	/* delay 10 usec */
+@@ -1692,8 +1650,7 @@ static void pm80xx_hw_chip_rst(struct pm8001_hba_info *pm8001_ha)
+ 		mdelay(1);
+ 	} while ((--i) != 0);
+ 
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("chip reset finished\n"));
++	pm8001_dbg(pm8001_ha, INIT, "chip reset finished\n");
+ }
+ 
+ /**
+@@ -1769,15 +1726,14 @@ static void pm80xx_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 	int ret;
+ 
+ 	if (!pm8001_ha_dev) {
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("dev is null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "dev is null\n");
+ 		return;
+ 	}
+ 
+ 	task = sas_alloc_slow_task(GFP_ATOMIC);
+ 
+ 	if (!task) {
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("cannot "
+-						"allocate task\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "cannot allocate task\n");
+ 		return;
+ 	}
+ 
+@@ -1803,8 +1759,7 @@ static void pm80xx_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 
+ 	ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &task_abort,
+ 			sizeof(task_abort), 0);
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("Executing abort task end\n"));
++	pm8001_dbg(pm8001_ha, FAIL, "Executing abort task end\n");
+ 	if (ret) {
+ 		sas_free_task(task);
+ 		pm8001_tag_free(pm8001_ha, ccb_tag);
+@@ -1827,8 +1782,7 @@ static void pm80xx_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 	task = sas_alloc_slow_task(GFP_ATOMIC);
+ 
+ 	if (!task) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("cannot allocate task !!!\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "cannot allocate task !!!\n");
+ 		return;
+ 	}
+ 	task->task_done = pm8001_task_done;
+@@ -1836,8 +1790,7 @@ static void pm80xx_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 	res = pm8001_tag_alloc(pm8001_ha, &ccb_tag);
+ 	if (res) {
+ 		sas_free_task(task);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("cannot allocate tag !!!\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "cannot allocate tag !!!\n");
+ 		return;
+ 	}
+ 
+@@ -1848,8 +1801,8 @@ static void pm80xx_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 	if (!dev) {
+ 		sas_free_task(task);
+ 		pm8001_tag_free(pm8001_ha, ccb_tag);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("Domain device cannot be allocated\n"));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "Domain device cannot be allocated\n");
+ 		return;
+ 	}
+ 
+@@ -1882,7 +1835,7 @@ static void pm80xx_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 
+ 	res = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd,
+ 			sizeof(sata_cmd), 0);
+-	PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("Executing read log end\n"));
++	pm8001_dbg(pm8001_ha, FAIL, "Executing read log end\n");
+ 	if (res) {
+ 		sas_free_task(task);
+ 		pm8001_tag_free(pm8001_ha, ccb_tag);
+@@ -1928,27 +1881,24 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	t = ccb->task;
+ 
+ 	if (status && status != IO_UNDERFLOW)
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("sas IO status 0x%x\n", status));
++		pm8001_dbg(pm8001_ha, FAIL, "sas IO status 0x%x\n", status);
+ 	if (unlikely(!t || !t->lldd_task || !t->dev))
+ 		return;
+ 	ts = &t->task_status;
+ 
+-	PM8001_DEV_DBG(pm8001_ha, pm8001_printk(
+-		"tag::0x%x, status::0x%x task::0x%p\n", tag, status, t));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "tag::0x%x, status::0x%x task::0x%p\n", tag, status, t);
+ 
+ 	/* Print sas address of IO failed device */
+ 	if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) &&
+ 		(status != IO_UNDERFLOW))
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("SAS Address of IO Failure Drive"
+-			":%016llx", SAS_ADDR(t->dev->sas_addr)));
++		pm8001_dbg(pm8001_ha, FAIL, "SAS Address of IO Failure Drive:%016llx\n",
++			   SAS_ADDR(t->dev->sas_addr));
+ 
+ 	switch (status) {
+ 	case IO_SUCCESS:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_SUCCESS ,param = 0x%x\n",
+-				param));
++		pm8001_dbg(pm8001_ha, IO, "IO_SUCCESS ,param = 0x%x\n",
++			   param);
+ 		if (param == 0) {
+ 			ts->resp = SAS_TASK_COMPLETE;
+ 			ts->stat = SAM_STAT_GOOD;
+@@ -1960,73 +1910,83 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 			sas_ssp_task_response(pm8001_ha->dev, t, iu);
+ 		}
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_ABORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_ABORTED IOMB Tag\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_ABORTED IOMB Tag\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_ABORTED_TASK;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_UNDERFLOW:
+ 		/* SSP Completion with error */
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_UNDERFLOW ,param = 0x%x\n",
+-				param));
++		pm8001_dbg(pm8001_ha, IO, "IO_UNDERFLOW ,param = 0x%x\n",
++			   param);
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_UNDERRUN;
+ 		ts->residual = param;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_NO_DEVICE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_NO_DEVICE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_NO_DEVICE\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_PHY_DOWN;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		/* Force the midlayer to retry */
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_PHY_NOT_READY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PHY_NOT_READY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_INVALID_SSP_RSP_FRAME:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_INVALID_SSP_RSP_FRAME\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_XFER_ERROR_INVALID_SSP_RSP_FRAME\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_EPROTO;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_ZONE_VIOLATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS:
+ 	case IO_XFER_OPEN_RETRY_BACKOFF_THRESHOLD_REACHED:
+@@ -2034,8 +1994,7 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_NO_DEST:
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_OPEN_COLLIDE:
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_PATHWAY_BLOCKED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+@@ -2045,67 +2004,78 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 				IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BAD_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_BAD_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_BAD_DEST;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_CONN_RATE;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_WRONG_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_WRONG_DEST;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_NAK_RECEIVED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_NAK_RECEIVED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_NAK_RECEIVED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_ACK_NAK_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_ACK_NAK_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_ACK_NAK_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_NAK_R_ERR;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_DMA:
+-		PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("IO_XFER_ERROR_DMA\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_DMA\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_OPEN_RETRY_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_OPEN_RETRY_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_OFFSET_MISMATCH:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_OFFSET_MISMATCH\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_OFFSET_MISMATCH\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_PORT_IN_RESET:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_PORT_IN_RESET\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_PORT_IN_RESET\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_DS_NON_OPERATIONAL:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_NON_OPERATIONAL\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_NON_OPERATIONAL\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		if (!t->uldd_task)
+@@ -2114,51 +2084,55 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 				IO_DS_NON_OPERATIONAL);
+ 		break;
+ 	case IO_DS_IN_RECOVERY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_IN_RECOVERY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_IN_RECOVERY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_TM_TAG_NOT_FOUND:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_TM_TAG_NOT_FOUND\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_TM_TAG_NOT_FOUND\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_SSP_EXT_IU_ZERO_LEN_ERROR:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_SSP_EXT_IU_ZERO_LEN_ERROR\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_SSP_EXT_IU_ZERO_LEN_ERROR\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown status 0x%x\n", status));
++		pm8001_dbg(pm8001_ha, DEVIO, "Unknown status 0x%x\n", status);
+ 		/* not allowed case. Therefore, return failed status */
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	}
+-	PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("scsi_status = 0x%x\n ",
+-		psspPayload->ssp_resp_iu.status));
++	pm8001_dbg(pm8001_ha, IO, "scsi_status = 0x%x\n ",
++		   psspPayload->ssp_resp_iu.status);
+ 	spin_lock_irqsave(&t->task_state_lock, flags);
+ 	t->task_state_flags &= ~SAS_TASK_STATE_PENDING;
+ 	t->task_state_flags &= ~SAS_TASK_AT_INITIATOR;
+ 	t->task_state_flags |= SAS_TASK_STATE_DONE;
+ 	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
+-			"task 0x%p done with io_status 0x%x resp 0x%x "
+-			"stat 0x%x but aborted by upper layer!\n",
+-			t, status, ts->resp, ts->stat));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
++			   t, status, ts->resp, ts->stat);
+ 		if (t->slow_task)
+ 			complete(&t->slow_task->completion);
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+@@ -2188,52 +2162,47 @@ static void mpi_ssp_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	t = ccb->task;
+ 	pm8001_dev = ccb->device;
+ 	if (event)
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("sas IO status 0x%x\n", event));
++		pm8001_dbg(pm8001_ha, FAIL, "sas IO status 0x%x\n", event);
+ 	if (unlikely(!t || !t->lldd_task || !t->dev))
+ 		return;
+ 	ts = &t->task_status;
+-	PM8001_IOERR_DBG(pm8001_ha,
+-		pm8001_printk("port_id:0x%x, tag:0x%x, event:0x%x\n",
+-				port_id, tag, event));
++	pm8001_dbg(pm8001_ha, IOERR, "port_id:0x%x, tag:0x%x, event:0x%x\n",
++		   port_id, tag, event);
+ 	switch (event) {
+ 	case IO_OVERFLOW:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_UNDERFLOW\n");)
++		pm8001_dbg(pm8001_ha, IO, "IO_UNDERFLOW\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		ts->residual = 0;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+ 		pm8001_handle_event(pm8001_ha, t, IO_XFER_ERROR_BREAK);
+ 		return;
+ 	case IO_XFER_ERROR_PHY_NOT_READY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PHY_NOT_READY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_EPROTO;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_ZONE_VIOLATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+@@ -2244,8 +2213,7 @@ static void mpi_ssp_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_NO_DEST:
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_OPEN_COLLIDE:
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_PATHWAY_BLOCKED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+@@ -2255,94 +2223,86 @@ static void mpi_ssp_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 				IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BAD_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_BAD_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_BAD_DEST;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_CONN_RATE;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_WRONG_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_WRONG_DEST;
+ 		break;
+ 	case IO_XFER_ERROR_NAK_RECEIVED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_NAK_RECEIVED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_NAK_RECEIVED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_XFER_ERROR_ACK_NAK_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_ACK_NAK_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_ACK_NAK_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_NAK_R_ERR;
+ 		break;
+ 	case IO_XFER_OPEN_RETRY_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_OPEN_RETRY_TIMEOUT\n");
+ 		pm8001_handle_event(pm8001_ha, t, IO_XFER_OPEN_RETRY_TIMEOUT);
+ 		return;
+ 	case IO_XFER_ERROR_UNEXPECTED_PHASE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_UNEXPECTED_PHASE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_UNEXPECTED_PHASE\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_ERROR_XFER_RDY_OVERRUN:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_XFER_RDY_OVERRUN\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_XFER_RDY_OVERRUN\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_ERROR_CMD_ISSUE_ACK_NAK_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("IO_XFER_ERROR_CMD_ISSUE_ACK_NAK_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_XFER_ERROR_CMD_ISSUE_ACK_NAK_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_ERROR_OFFSET_MISMATCH:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_OFFSET_MISMATCH\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_OFFSET_MISMATCH\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_ERROR_XFER_ZERO_DATA_LEN:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_XFER_ZERO_DATA_LEN\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_XFER_ERROR_XFER_ZERO_DATA_LEN\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_ERROR_INTERNAL_CRC_ERROR:
+-		PM8001_IOERR_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFR_ERROR_INTERNAL_CRC_ERROR\n"));
++		pm8001_dbg(pm8001_ha, IOERR,
++			   "IO_XFR_ERROR_INTERNAL_CRC_ERROR\n");
+ 		/* TBC: used default set values */
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		break;
+ 	case IO_XFER_CMD_FRAME_ISSUED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_CMD_FRAME_ISSUED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_CMD_FRAME_ISSUED\n");
+ 		return;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown status 0x%x\n", event));
++		pm8001_dbg(pm8001_ha, DEVIO, "Unknown status 0x%x\n", event);
+ 		/* not allowed case. Therefore, return failed status */
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+@@ -2354,10 +2314,9 @@ static void mpi_ssp_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	t->task_state_flags |= SAS_TASK_STATE_DONE;
+ 	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
+-			"task 0x%p done with event 0x%x resp 0x%x "
+-			"stat 0x%x but aborted by upper layer!\n",
+-			t, event, ts->resp, ts->stat));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "task 0x%p done with event 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
++			   t, event, ts->resp, ts->stat);
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+@@ -2392,8 +2351,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	tag = le32_to_cpu(psataPayload->tag);
+ 
+ 	if (!tag) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("tag null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "tag null\n");
+ 		return;
+ 	}
+ 	ccb = &pm8001_ha->ccb_info[tag];
+@@ -2402,8 +2360,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		t = ccb->task;
+ 		pm8001_dev = ccb->device;
+ 	} else {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("ccb null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "ccb null\n");
+ 		return;
+ 	}
+ 
+@@ -2411,29 +2368,26 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		if (t->dev && (t->dev->lldd_dev))
+ 			pm8001_dev = t->dev->lldd_dev;
+ 	} else {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("task null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "task null\n");
+ 		return;
+ 	}
+ 
+ 	if ((pm8001_dev && !(pm8001_dev->id & NCQ_READ_LOG_FLAG))
+ 		&& unlikely(!t || !t->lldd_task || !t->dev)) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("task or dev null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "task or dev null\n");
+ 		return;
+ 	}
+ 
+ 	ts = &t->task_status;
+ 	if (!ts) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("ts null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "ts null\n");
+ 		return;
+ 	}
+ 
+ 	if (unlikely(status))
+-		PM8001_IOERR_DBG(pm8001_ha, pm8001_printk(
+-			"status:0x%x, tag:0x%x, task::0x%p\n",
+-			status, tag, t));
++		pm8001_dbg(pm8001_ha, IOERR,
++			   "status:0x%x, tag:0x%x, task::0x%p\n",
++			   status, tag, t);
+ 
+ 	/* Print sas address of IO failed device */
+ 	if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) &&
+@@ -2465,20 +2419,20 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 						& 0xff000000)) +
+ 						pm8001_dev->attached_phy +
+ 						0x10);
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("SAS Address of IO Failure Drive:"
+-				"%08x%08x", temp_sata_addr_hi,
+-					temp_sata_addr_low));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "SAS Address of IO Failure Drive:%08x%08x\n",
++				   temp_sata_addr_hi,
++				   temp_sata_addr_low);
+ 
+ 		} else {
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("SAS Address of IO Failure Drive:"
+-				"%016llx", SAS_ADDR(t->dev->sas_addr)));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "SAS Address of IO Failure Drive:%016llx\n",
++				   SAS_ADDR(t->dev->sas_addr));
+ 		}
+ 	}
+ 	switch (status) {
+ 	case IO_SUCCESS:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_SUCCESS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_SUCCESS\n");
+ 		if (param == 0) {
+ 			ts->resp = SAS_TASK_COMPLETE;
+ 			ts->stat = SAM_STAT_GOOD;
+@@ -2500,94 +2454,100 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 			ts->resp = SAS_TASK_COMPLETE;
+ 			ts->stat = SAS_PROTO_RESPONSE;
+ 			ts->residual = param;
+-			PM8001_IO_DBG(pm8001_ha,
+-				pm8001_printk("SAS_PROTO_RESPONSE len = %d\n",
+-				param));
++			pm8001_dbg(pm8001_ha, IO,
++				   "SAS_PROTO_RESPONSE len = %d\n",
++				   param);
+ 			sata_resp = &psataPayload->sata_resp[0];
+ 			resp = (struct ata_task_resp *)ts->buf;
+ 			if (t->ata_task.dma_xfer == 0 &&
+ 			    t->data_dir == DMA_FROM_DEVICE) {
+ 				len = sizeof(struct pio_setup_fis);
+-				PM8001_IO_DBG(pm8001_ha,
+-				pm8001_printk("PIO read len = %d\n", len));
++				pm8001_dbg(pm8001_ha, IO,
++					   "PIO read len = %d\n", len);
+ 			} else if (t->ata_task.use_ncq) {
+ 				len = sizeof(struct set_dev_bits_fis);
+-				PM8001_IO_DBG(pm8001_ha,
+-					pm8001_printk("FPDMA len = %d\n", len));
++				pm8001_dbg(pm8001_ha, IO, "FPDMA len = %d\n",
++					   len);
+ 			} else {
+ 				len = sizeof(struct dev_to_host_fis);
+-				PM8001_IO_DBG(pm8001_ha,
+-				pm8001_printk("other len = %d\n", len));
++				pm8001_dbg(pm8001_ha, IO, "other len = %d\n",
++					   len);
+ 			}
+ 			if (SAS_STATUS_BUF_SIZE >= sizeof(*resp)) {
+ 				resp->frame_len = len;
+ 				memcpy(&resp->ending_fis[0], sata_resp, len);
+ 				ts->buf_valid_size = sizeof(*resp);
+ 			} else
+-				PM8001_IO_DBG(pm8001_ha,
+-					pm8001_printk("response too large\n"));
++				pm8001_dbg(pm8001_ha, IO,
++					   "response too large\n");
+ 		}
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_ABORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_ABORTED IOMB Tag\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_ABORTED IOMB Tag\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_ABORTED_TASK;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 		/* following cases are to do cases */
+ 	case IO_UNDERFLOW:
+ 		/* SATA Completion with error */
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_UNDERFLOW param = %d\n", param));
++		pm8001_dbg(pm8001_ha, IO, "IO_UNDERFLOW param = %d\n", param);
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_UNDERRUN;
+ 		ts->residual = param;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_NO_DEVICE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_NO_DEVICE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_NO_DEVICE\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_PHY_DOWN;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_INTERRUPTED;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_PHY_NOT_READY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PHY_NOT_READY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_EPROTO;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_ZONE_VIOLATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_CONT0;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS:
+ 	case IO_XFER_OPEN_RETRY_BACKOFF_THRESHOLD_REACHED:
+@@ -2595,8 +2555,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_NO_DEST:
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_OPEN_COLLIDE:
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_PATHWAY_BLOCKED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		if (!t->uldd_task) {
+@@ -2610,8 +2569,8 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		}
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BAD_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_BAD_DESTINATION\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_BAD_DEST;
+@@ -2626,15 +2585,17 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		}
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_CONN_RATE;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		if (!t->uldd_task) {
+@@ -2648,57 +2609,65 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		}
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_WRONG_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_WRONG_DEST;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_NAK_RECEIVED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_NAK_RECEIVED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_NAK_RECEIVED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_NAK_R_ERR;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_ACK_NAK_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_ACK_NAK_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_ACK_NAK_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_NAK_R_ERR;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_DMA:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_DMA\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_DMA\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_ABORTED_TASK;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_SATA_LINK_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_SATA_LINK_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_SATA_LINK_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_REJECTED_NCQ_MODE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_REJECTED_NCQ_MODE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_REJECTED_NCQ_MODE\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_UNDERRUN;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_OPEN_RETRY_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_OPEN_RETRY_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_PORT_IN_RESET:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_PORT_IN_RESET\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_PORT_IN_RESET\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_DS_NON_OPERATIONAL:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_NON_OPERATIONAL\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_NON_OPERATIONAL\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		if (!t->uldd_task) {
+@@ -2711,14 +2680,14 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		}
+ 		break;
+ 	case IO_DS_IN_RECOVERY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_IN_RECOVERY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_IN_RECOVERY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_DS_IN_ERROR:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_IN_ERROR\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_IN_ERROR\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		if (!t->uldd_task) {
+@@ -2731,18 +2700,21 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		}
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown status 0x%x\n", status));
++		pm8001_dbg(pm8001_ha, DEVIO, "Unknown status 0x%x\n", status);
+ 		/* not allowed case. Therefore, return failed status */
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
++		if (pm8001_dev)
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	}
+ 	spin_lock_irqsave(&t->task_state_lock, flags);
+@@ -2751,10 +2723,9 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	t->task_state_flags |= SAS_TASK_STATE_DONE;
+ 	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("task 0x%p done with io_status 0x%x"
+-			" resp 0x%x stat 0x%x but aborted by upper layer!\n",
+-			t, status, ts->resp, ts->stat));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
++			   t, status, ts->resp, ts->stat);
+ 		if (t->slow_task)
+ 			complete(&t->slow_task->completion);
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+@@ -2785,13 +2756,11 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 		t = ccb->task;
+ 		pm8001_dev = ccb->device;
+ 	} else {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("No CCB !!!. returning\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "No CCB !!!. returning\n");
+ 		return;
+ 	}
+ 	if (event)
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("SATA EVENT 0x%x\n", event));
++		pm8001_dbg(pm8001_ha, FAIL, "SATA EVENT 0x%x\n", event);
+ 
+ 	/* Check if this is NCQ error */
+ 	if (event == IO_XFER_ERROR_ABORTED_NCQ_MODE) {
+@@ -2804,54 +2773,49 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	}
+ 
+ 	if (unlikely(!t || !t->lldd_task || !t->dev)) {
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("task or dev null\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "task or dev null\n");
+ 		return;
+ 	}
+ 
+ 	ts = &t->task_status;
+-	PM8001_IOERR_DBG(pm8001_ha,
+-		pm8001_printk("port_id:0x%x, tag:0x%x, event:0x%x\n",
+-				port_id, tag, event));
++	pm8001_dbg(pm8001_ha, IOERR, "port_id:0x%x, tag:0x%x, event:0x%x\n",
++		   port_id, tag, event);
+ 	switch (event) {
+ 	case IO_OVERFLOW:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_UNDERFLOW\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_UNDERFLOW\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		ts->residual = 0;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_INTERRUPTED;
+ 		break;
+ 	case IO_XFER_ERROR_PHY_NOT_READY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PHY_NOT_READY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_EPROTO;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_ZONE_VIOLATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_CONT0;
+@@ -2862,8 +2826,8 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_NO_DEST:
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_OPEN_COLLIDE:
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_PATHWAY_BLOCKED:
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n"));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		if (!t->uldd_task) {
+@@ -2877,107 +2841,96 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 		}
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BAD_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_BAD_DESTINATION\n");
+ 		ts->resp = SAS_TASK_UNDELIVERED;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_BAD_DEST;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_CONN_RATE;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_WRONG_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_WRONG_DEST;
+ 		break;
+ 	case IO_XFER_ERROR_NAK_RECEIVED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_NAK_RECEIVED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_NAK_RECEIVED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_NAK_R_ERR;
+ 		break;
+ 	case IO_XFER_ERROR_PEER_ABORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_PEER_ABORTED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PEER_ABORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_NAK_R_ERR;
+ 		break;
+ 	case IO_XFER_ERROR_REJECTED_NCQ_MODE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_REJECTED_NCQ_MODE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_REJECTED_NCQ_MODE\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_UNDERRUN;
+ 		break;
+ 	case IO_XFER_OPEN_RETRY_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_OPEN_RETRY_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_ERROR_UNEXPECTED_PHASE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_UNEXPECTED_PHASE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_UNEXPECTED_PHASE\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_ERROR_XFER_RDY_OVERRUN:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_XFER_RDY_OVERRUN\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_XFER_RDY_OVERRUN\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_ERROR_OFFSET_MISMATCH:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_OFFSET_MISMATCH\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_OFFSET_MISMATCH\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_ERROR_XFER_ZERO_DATA_LEN:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_XFER_ZERO_DATA_LEN\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_XFER_ERROR_XFER_ZERO_DATA_LEN\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_CMD_FRAME_ISSUED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_CMD_FRAME_ISSUED\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_CMD_FRAME_ISSUED\n");
+ 		break;
+ 	case IO_XFER_PIO_SETUP_ERROR:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_PIO_SETUP_ERROR\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_PIO_SETUP_ERROR\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_ERROR_INTERNAL_CRC_ERROR:
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFR_ERROR_INTERNAL_CRC_ERROR\n"));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "IO_XFR_ERROR_INTERNAL_CRC_ERROR\n");
+ 		/* TBC: used default set values */
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	case IO_XFER_DMA_ACTIVATE_TIMEOUT:
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFR_DMA_ACTIVATE_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "IO_XFR_DMA_ACTIVATE_TIMEOUT\n");
+ 		/* TBC: used default set values */
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	default:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown status 0x%x\n", event));
++		pm8001_dbg(pm8001_ha, IO, "Unknown status 0x%x\n", event);
+ 		/* not allowed case. Therefore, return failed status */
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_TO;
+@@ -2989,10 +2942,9 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 	t->task_state_flags |= SAS_TASK_STATE_DONE;
+ 	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("task 0x%p done with io_status 0x%x"
+-			" resp 0x%x stat 0x%x but aborted by upper layer!\n",
+-			t, event, ts->resp, ts->stat));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
++			   t, event, ts->resp, ts->stat);
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+@@ -3025,94 +2977,87 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	ts = &t->task_status;
+ 	pm8001_dev = ccb->device;
+ 	if (status)
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("smp IO status 0x%x\n", status));
++		pm8001_dbg(pm8001_ha, FAIL, "smp IO status 0x%x\n", status);
+ 	if (unlikely(!t || !t->lldd_task || !t->dev))
+ 		return;
+ 
+-	PM8001_DEV_DBG(pm8001_ha,
+-		pm8001_printk("tag::0x%x status::0x%x\n", tag, status));
++	pm8001_dbg(pm8001_ha, DEV, "tag::0x%x status::0x%x\n", tag, status);
+ 
+ 	switch (status) {
+ 
+ 	case IO_SUCCESS:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_SUCCESS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_SUCCESS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAM_STAT_GOOD;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		if (pm8001_ha->smp_exp_mode == SMP_DIRECT) {
+-			PM8001_IO_DBG(pm8001_ha,
+-				pm8001_printk("DIRECT RESPONSE Length:%d\n",
+-						param));
++			pm8001_dbg(pm8001_ha, IO,
++				   "DIRECT RESPONSE Length:%d\n",
++				   param);
+ 			pdma_respaddr = (char *)(phys_to_virt(cpu_to_le64
+ 						((u64)sg_dma_address
+ 						(&t->smp_task.smp_resp))));
+ 			for (i = 0; i < param; i++) {
+ 				*(pdma_respaddr+i) = psmpPayload->_r_a[i];
+-				PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-					"SMP Byte%d DMA data 0x%x psmp 0x%x\n",
+-					i, *(pdma_respaddr+i),
+-					psmpPayload->_r_a[i]));
++				pm8001_dbg(pm8001_ha, IO,
++					   "SMP Byte%d DMA data 0x%x psmp 0x%x\n",
++					   i, *(pdma_respaddr + i),
++					   psmpPayload->_r_a[i]);
+ 			}
+ 		}
+ 		break;
+ 	case IO_ABORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_ABORTED IOMB\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_ABORTED IOMB\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_ABORTED_TASK;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_OVERFLOW:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_UNDERFLOW\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_UNDERFLOW\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		ts->residual = 0;
+ 		if (pm8001_dev)
+-			pm8001_dev->running_req--;
++			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_NO_DEVICE:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_NO_DEVICE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_NO_DEVICE\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_PHY_DOWN;
+ 		break;
+ 	case IO_ERROR_HW_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_ERROR_HW_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_ERROR_HW_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAM_STAT_BUSY;
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAM_STAT_BUSY;
+ 		break;
+ 	case IO_XFER_ERROR_PHY_NOT_READY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PHY_NOT_READY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAM_STAT_BUSY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_ZONE_VIOLATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BREAK:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_CONT0;
+@@ -3123,8 +3068,7 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_NO_DEST:
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_OPEN_COLLIDE:
+ 	case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS_PATHWAY_BLOCKED:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_UNKNOWN;
+@@ -3133,75 +3077,68 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 				IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS);
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_BAD_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_BAD_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_BAD_DEST;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED:
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(\
+-			"IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_CONN_RATE;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_WRONG_DESTINATION:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_WRONG_DEST;
+ 		break;
+ 	case IO_XFER_ERROR_RX_FRAME:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_ERROR_RX_FRAME\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_RX_FRAME\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		break;
+ 	case IO_XFER_OPEN_RETRY_TIMEOUT:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_XFER_OPEN_RETRY_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_ERROR_INTERNAL_SMP_RESOURCE:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_ERROR_INTERNAL_SMP_RESOURCE\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_ERROR_INTERNAL_SMP_RESOURCE\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_QUEUE_FULL;
+ 		break;
+ 	case IO_PORT_IN_RESET:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_PORT_IN_RESET\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_PORT_IN_RESET\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_DS_NON_OPERATIONAL:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_NON_OPERATIONAL\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_NON_OPERATIONAL\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		break;
+ 	case IO_DS_IN_RECOVERY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_DS_IN_RECOVERY\n"));
++		pm8001_dbg(pm8001_ha, IO, "IO_DS_IN_RECOVERY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY:
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n"));
++		pm8001_dbg(pm8001_ha, IO,
++			   "IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_OPEN_REJECT;
+ 		ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown status 0x%x\n", status));
++		pm8001_dbg(pm8001_ha, DEVIO, "Unknown status 0x%x\n", status);
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DEV_NO_RESPONSE;
+ 		/* not allowed case. Therefore, return failed status */
+@@ -3213,10 +3150,9 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	t->task_state_flags |= SAS_TASK_STATE_DONE;
+ 	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
+-			"task 0x%p done with io_status 0x%x resp 0x%x"
+-			"stat 0x%x but aborted by upper layer!\n",
+-			t, status, ts->resp, ts->stat));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "task 0x%p done with io_status 0x%x resp 0x%xstat 0x%x but aborted by upper layer!\n",
++			   t, status, ts->resp, ts->stat);
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+@@ -3306,45 +3242,40 @@ hw_event_sas_phy_up(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	u8 portstate = (u8)(phyid_npip_portstate & 0x0000000F);
+ 
+ 	struct pm8001_port *port = &pm8001_ha->port[port_id];
+-	struct sas_ha_struct *sas_ha = pm8001_ha->sas;
+ 	struct pm8001_phy *phy = &pm8001_ha->phy[phy_id];
+ 	unsigned long flags;
+ 	u8 deviceType = pPayload->sas_identify.dev_type;
+ 	port->port_state = portstate;
+ 	port->wide_port_phymap |= (1U << phy_id);
+ 	phy->phy_state = PHY_STATE_LINK_UP_SPCV;
+-	PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-		"portid:%d; phyid:%d; linkrate:%d; "
+-		"portstate:%x; devicetype:%x\n",
+-		port_id, phy_id, link_rate, portstate, deviceType));
++	pm8001_dbg(pm8001_ha, MSG,
++		   "portid:%d; phyid:%d; linkrate:%d; portstate:%x; devicetype:%x\n",
++		   port_id, phy_id, link_rate, portstate, deviceType);
+ 
+ 	switch (deviceType) {
+ 	case SAS_PHY_UNUSED:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("device type no device.\n"));
++		pm8001_dbg(pm8001_ha, MSG, "device type no device.\n");
+ 		break;
+ 	case SAS_END_DEVICE:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk("end device.\n"));
++		pm8001_dbg(pm8001_ha, MSG, "end device.\n");
+ 		pm80xx_chip_phy_ctl_req(pm8001_ha, phy_id,
+ 			PHY_NOTIFY_ENABLE_SPINUP);
+ 		port->port_attached = 1;
+ 		pm8001_get_lrate_mode(phy, link_rate);
+ 		break;
+ 	case SAS_EDGE_EXPANDER_DEVICE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("expander device.\n"));
++		pm8001_dbg(pm8001_ha, MSG, "expander device.\n");
+ 		port->port_attached = 1;
+ 		pm8001_get_lrate_mode(phy, link_rate);
+ 		break;
+ 	case SAS_FANOUT_EXPANDER_DEVICE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("fanout expander device.\n"));
++		pm8001_dbg(pm8001_ha, MSG, "fanout expander device.\n");
+ 		port->port_attached = 1;
+ 		pm8001_get_lrate_mode(phy, link_rate);
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("unknown device type(%x)\n", deviceType));
++		pm8001_dbg(pm8001_ha, DEVIO, "unknown device type(%x)\n",
++			   deviceType);
+ 		break;
+ 	}
+ 	phy->phy_type |= PORT_TYPE_SAS;
+@@ -3355,7 +3286,7 @@ hw_event_sas_phy_up(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	else if (phy->identify.device_type != SAS_PHY_UNUSED)
+ 		phy->identify.target_port_protocols = SAS_PROTOCOL_SMP;
+ 	phy->sas_phy.oob_mode = SAS_OOB_MODE;
+-	sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
++	sas_notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
+ 	spin_lock_irqsave(&phy->sas_phy.frame_rcvd_lock, flags);
+ 	memcpy(phy->frame_rcvd, &pPayload->sas_identify,
+ 		sizeof(struct sas_identify_frame)-4);
+@@ -3389,12 +3320,11 @@ hw_event_sata_phy_up(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	u8 portstate = (u8)(phyid_npip_portstate & 0x0000000F);
+ 
+ 	struct pm8001_port *port = &pm8001_ha->port[port_id];
+-	struct sas_ha_struct *sas_ha = pm8001_ha->sas;
+ 	struct pm8001_phy *phy = &pm8001_ha->phy[phy_id];
+ 	unsigned long flags;
+-	PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk(
+-		"port id %d, phy id %d link_rate %d portstate 0x%x\n",
+-				port_id, phy_id, link_rate, portstate));
++	pm8001_dbg(pm8001_ha, DEVIO,
++		   "port id %d, phy id %d link_rate %d portstate 0x%x\n",
++		   port_id, phy_id, link_rate, portstate);
+ 
+ 	port->port_state = portstate;
+ 	phy->phy_state = PHY_STATE_LINK_UP_SPCV;
+@@ -3403,7 +3333,7 @@ hw_event_sata_phy_up(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	phy->phy_type |= PORT_TYPE_SATA;
+ 	phy->phy_attached = 1;
+ 	phy->sas_phy.oob_mode = SATA_OOB_MODE;
+-	sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
++	sas_notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
+ 	spin_lock_irqsave(&phy->sas_phy.frame_rcvd_lock, flags);
+ 	memcpy(phy->frame_rcvd, ((u8 *)&pPayload->sata_fis - 4),
+ 		sizeof(struct dev_to_host_fis));
+@@ -3444,10 +3374,10 @@ hw_event_phy_down(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	case PORT_VALID:
+ 		break;
+ 	case PORT_INVALID:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" PortInvalid portID %d\n", port_id));
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" Last phy Down and port invalid\n"));
++		pm8001_dbg(pm8001_ha, MSG, " PortInvalid portID %d\n",
++			   port_id);
++		pm8001_dbg(pm8001_ha, MSG,
++			   " Last phy Down and port invalid\n");
+ 		if (port_sata) {
+ 			phy->phy_type = 0;
+ 			port->port_attached = 0;
+@@ -3457,19 +3387,18 @@ hw_event_phy_down(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		sas_phy_disconnected(&phy->sas_phy);
+ 		break;
+ 	case PORT_IN_RESET:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" Port In Reset portID %d\n", port_id));
++		pm8001_dbg(pm8001_ha, MSG, " Port In Reset portID %d\n",
++			   port_id);
+ 		break;
+ 	case PORT_NOT_ESTABLISHED:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" Phy Down and PORT_NOT_ESTABLISHED\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   " Phy Down and PORT_NOT_ESTABLISHED\n");
+ 		port->port_attached = 0;
+ 		break;
+ 	case PORT_LOSTCOMM:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" Phy Down and PORT_LOSTCOMM\n"));
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" Last phy Down and port invalid\n"));
++		pm8001_dbg(pm8001_ha, MSG, " Phy Down and PORT_LOSTCOMM\n");
++		pm8001_dbg(pm8001_ha, MSG,
++			   " Last phy Down and port invalid\n");
+ 		if (port_sata) {
+ 			port->port_attached = 0;
+ 			phy->phy_type = 0;
+@@ -3480,17 +3409,14 @@ hw_event_phy_down(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		break;
+ 	default:
+ 		port->port_attached = 0;
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk(" Phy Down and(default) = 0x%x\n",
+-			portstate));
++		pm8001_dbg(pm8001_ha, DEVIO,
++			   " Phy Down and(default) = 0x%x\n",
++			   portstate);
+ 		break;
+ 
+ 	}
+-	if (port_sata && (portstate != PORT_IN_RESET)) {
+-		struct sas_ha_struct *sas_ha = pm8001_ha->sas;
+-
+-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_LOSS_OF_SIGNAL);
+-	}
++	if (port_sata && (portstate != PORT_IN_RESET))
++		sas_notify_phy_event(&phy->sas_phy, PHYE_LOSS_OF_SIGNAL);
+ }
+ 
+ static int mpi_phy_start_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+@@ -3503,9 +3429,9 @@ static int mpi_phy_start_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		le32_to_cpu(pPayload->phyid);
+ 	struct pm8001_phy *phy = &pm8001_ha->phy[phy_id];
+ 
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("phy start resp status:0x%x, phyid:0x%x\n",
+-				status, phy_id));
++	pm8001_dbg(pm8001_ha, INIT,
++		   "phy start resp status:0x%x, phyid:0x%x\n",
++		   status, phy_id);
+ 	if (status == 0) {
+ 		phy->phy_state = PHY_LINK_DOWN;
+ 		if (pm8001_ha->flags == PM8001F_RUN_TIME &&
+@@ -3532,18 +3458,18 @@ static int mpi_thermal_hw_event(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	u32 rht_lht = le32_to_cpu(pPayload->rht_lht);
+ 
+ 	if (thermal_event & 0x40) {
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"Thermal Event: Local high temperature violated!\n"));
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"Thermal Event: Measured local high temperature %d\n",
+-				((rht_lht & 0xFF00) >> 8)));
++		pm8001_dbg(pm8001_ha, IO,
++			   "Thermal Event: Local high temperature violated!\n");
++		pm8001_dbg(pm8001_ha, IO,
++			   "Thermal Event: Measured local high temperature %d\n",
++			   ((rht_lht & 0xFF00) >> 8));
+ 	}
+ 	if (thermal_event & 0x10) {
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"Thermal Event: Remote high temperature violated!\n"));
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"Thermal Event: Measured remote high temperature %d\n",
+-				((rht_lht & 0xFF000000) >> 24)));
++		pm8001_dbg(pm8001_ha, IO,
++			   "Thermal Event: Remote high temperature violated!\n");
++		pm8001_dbg(pm8001_ha, IO,
++			   "Thermal Event: Measured remote high temperature %d\n",
++			   ((rht_lht & 0xFF000000) >> 24));
+ 	}
+ 	return 0;
+ }
+@@ -3572,149 +3498,134 @@ static int mpi_hw_event(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	struct pm8001_phy *phy = &pm8001_ha->phy[phy_id];
+ 	struct pm8001_port *port = &pm8001_ha->port[port_id];
+ 	struct asd_sas_phy *sas_phy = sas_ha->sas_phy[phy_id];
+-	PM8001_DEV_DBG(pm8001_ha,
+-		pm8001_printk("portid:%d phyid:%d event:0x%x status:0x%x\n",
+-				port_id, phy_id, eventType, status));
++	pm8001_dbg(pm8001_ha, DEV,
++		   "portid:%d phyid:%d event:0x%x status:0x%x\n",
++		   port_id, phy_id, eventType, status);
+ 
+ 	switch (eventType) {
+ 
+ 	case HW_EVENT_SAS_PHY_UP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PHY_START_STATUS\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_START_STATUS\n");
+ 		hw_event_sas_phy_up(pm8001_ha, piomb);
+ 		break;
+ 	case HW_EVENT_SATA_PHY_UP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_SATA_PHY_UP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_SATA_PHY_UP\n");
+ 		hw_event_sata_phy_up(pm8001_ha, piomb);
+ 		break;
+ 	case HW_EVENT_SATA_SPINUP_HOLD:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_SATA_SPINUP_HOLD\n"));
+-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_SPINUP_HOLD);
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_SATA_SPINUP_HOLD\n");
++		sas_notify_phy_event(&phy->sas_phy, PHYE_SPINUP_HOLD);
+ 		break;
+ 	case HW_EVENT_PHY_DOWN:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PHY_DOWN\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_DOWN\n");
+ 		hw_event_phy_down(pm8001_ha, piomb);
+ 		if (pm8001_ha->reset_in_progress) {
+-			PM8001_MSG_DBG(pm8001_ha,
+-				pm8001_printk("Reset in progress\n"));
++			pm8001_dbg(pm8001_ha, MSG, "Reset in progress\n");
+ 			return 0;
+ 		}
+ 		phy->phy_attached = 0;
+ 		phy->phy_state = PHY_LINK_DISABLE;
+ 		break;
+ 	case HW_EVENT_PORT_INVALID:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PORT_INVALID\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PORT_INVALID\n");
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		break;
+ 	/* the broadcast change primitive received, tell the LIBSAS this event
+ 	to revalidate the sas domain*/
+ 	case HW_EVENT_BROADCAST_CHANGE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_BROADCAST_CHANGE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_BROADCAST_CHANGE\n");
+ 		pm80xx_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_BROADCAST_CHANGE,
+ 			port_id, phy_id, 1, 0);
+ 		spin_lock_irqsave(&sas_phy->sas_prim_lock, flags);
+ 		sas_phy->sas_prim = HW_EVENT_BROADCAST_CHANGE;
+ 		spin_unlock_irqrestore(&sas_phy->sas_prim_lock, flags);
+-		sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
++		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+ 		break;
+ 	case HW_EVENT_PHY_ERROR:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PHY_ERROR\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_ERROR\n");
+ 		sas_phy_disconnected(&phy->sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_ERROR);
++		sas_notify_phy_event(&phy->sas_phy, PHYE_OOB_ERROR);
+ 		break;
+ 	case HW_EVENT_BROADCAST_EXP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_BROADCAST_EXP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_BROADCAST_EXP\n");
+ 		spin_lock_irqsave(&sas_phy->sas_prim_lock, flags);
+ 		sas_phy->sas_prim = HW_EVENT_BROADCAST_EXP;
+ 		spin_unlock_irqrestore(&sas_phy->sas_prim_lock, flags);
+-		sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
++		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+ 		break;
+ 	case HW_EVENT_LINK_ERR_INVALID_DWORD:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_LINK_ERR_INVALID_DWORD\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "HW_EVENT_LINK_ERR_INVALID_DWORD\n");
+ 		pm80xx_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_LINK_ERR_INVALID_DWORD, port_id, phy_id, 0, 0);
+ 		break;
+ 	case HW_EVENT_LINK_ERR_DISPARITY_ERROR:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_LINK_ERR_DISPARITY_ERROR\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "HW_EVENT_LINK_ERR_DISPARITY_ERROR\n");
+ 		pm80xx_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_LINK_ERR_DISPARITY_ERROR,
+ 			port_id, phy_id, 0, 0);
+ 		break;
+ 	case HW_EVENT_LINK_ERR_CODE_VIOLATION:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_LINK_ERR_CODE_VIOLATION\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "HW_EVENT_LINK_ERR_CODE_VIOLATION\n");
+ 		pm80xx_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_LINK_ERR_CODE_VIOLATION,
+ 			port_id, phy_id, 0, 0);
+ 		break;
+ 	case HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-				"HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH\n");
+ 		pm80xx_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH,
+ 			port_id, phy_id, 0, 0);
+ 		break;
+ 	case HW_EVENT_MALFUNCTION:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_MALFUNCTION\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_MALFUNCTION\n");
+ 		break;
+ 	case HW_EVENT_BROADCAST_SES:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_BROADCAST_SES\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_BROADCAST_SES\n");
+ 		spin_lock_irqsave(&sas_phy->sas_prim_lock, flags);
+ 		sas_phy->sas_prim = HW_EVENT_BROADCAST_SES;
+ 		spin_unlock_irqrestore(&sas_phy->sas_prim_lock, flags);
+-		sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
++		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+ 		break;
+ 	case HW_EVENT_INBOUND_CRC_ERROR:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_INBOUND_CRC_ERROR\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_INBOUND_CRC_ERROR\n");
+ 		pm80xx_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_INBOUND_CRC_ERROR,
+ 			port_id, phy_id, 0, 0);
+ 		break;
+ 	case HW_EVENT_HARD_RESET_RECEIVED:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_HARD_RESET_RECEIVED\n"));
+-		sas_ha->notify_port_event(sas_phy, PORTE_HARD_RESET);
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_HARD_RESET_RECEIVED\n");
++		sas_notify_port_event(sas_phy, PORTE_HARD_RESET);
+ 		break;
+ 	case HW_EVENT_ID_FRAME_TIMEOUT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_ID_FRAME_TIMEOUT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_ID_FRAME_TIMEOUT\n");
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		break;
+ 	case HW_EVENT_LINK_ERR_PHY_RESET_FAILED:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_LINK_ERR_PHY_RESET_FAILED\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "HW_EVENT_LINK_ERR_PHY_RESET_FAILED\n");
+ 		pm80xx_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_LINK_ERR_PHY_RESET_FAILED,
+ 			port_id, phy_id, 0, 0);
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		break;
+ 	case HW_EVENT_PORT_RESET_TIMER_TMO:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PORT_RESET_TIMER_TMO\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PORT_RESET_TIMER_TMO\n");
+ 		pm80xx_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_PHY_DOWN,
+ 			port_id, phy_id, 0, 0);
+ 		sas_phy_disconnected(sas_phy);
+ 		phy->phy_attached = 0;
+-		sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
++		sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+ 		if (pm8001_ha->phy[phy_id].reset_completion) {
+ 			pm8001_ha->phy[phy_id].port_reset_status =
+ 					PORT_RESET_TMO;
+@@ -3723,28 +3634,26 @@ static int mpi_hw_event(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		}
+ 		break;
+ 	case HW_EVENT_PORT_RECOVERY_TIMER_TMO:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PORT_RECOVERY_TIMER_TMO\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "HW_EVENT_PORT_RECOVERY_TIMER_TMO\n");
+ 		pm80xx_hw_event_ack_req(pm8001_ha, 0,
+ 			HW_EVENT_PORT_RECOVERY_TIMER_TMO,
+ 			port_id, phy_id, 0, 0);
+ 		for (i = 0; i < pm8001_ha->chip->n_phy; i++) {
+ 			if (port->wide_port_phymap & (1 << i)) {
+ 				phy = &pm8001_ha->phy[i];
+-				sas_ha->notify_phy_event(&phy->sas_phy,
++				sas_notify_phy_event(&phy->sas_phy,
+ 						PHYE_LOSS_OF_SIGNAL);
+ 				port->wide_port_phymap &= ~(1 << i);
+ 			}
+ 		}
+ 		break;
+ 	case HW_EVENT_PORT_RECOVER:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PORT_RECOVER\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PORT_RECOVER\n");
+ 		hw_event_port_recover(pm8001_ha, piomb);
+ 		break;
+ 	case HW_EVENT_PORT_RESET_COMPLETE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("HW_EVENT_PORT_RESET_COMPLETE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PORT_RESET_COMPLETE\n");
+ 		if (pm8001_ha->phy[phy_id].reset_completion) {
+ 			pm8001_ha->phy[phy_id].port_reset_status =
+ 					PORT_RESET_SUCCESS;
+@@ -3753,12 +3662,11 @@ static int mpi_hw_event(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		}
+ 		break;
+ 	case EVENT_BROADCAST_ASYNCH_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("EVENT_BROADCAST_ASYNCH_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "EVENT_BROADCAST_ASYNCH_EVENT\n");
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha,
+-			pm8001_printk("Unknown event type 0x%x\n", eventType));
++		pm8001_dbg(pm8001_ha, DEVIO, "Unknown event type 0x%x\n",
++			   eventType);
+ 		break;
+ 	}
+ 	return 0;
+@@ -3778,9 +3686,8 @@ static int mpi_phy_stop_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	u32 phyid =
+ 		le32_to_cpu(pPayload->phyid) & 0xFF;
+ 	struct pm8001_phy *phy = &pm8001_ha->phy[phyid];
+-	PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("phy:0x%x status:0x%x\n",
+-					phyid, status));
++	pm8001_dbg(pm8001_ha, MSG, "phy:0x%x status:0x%x\n",
++		   phyid, status);
+ 	if (status == PHY_STOP_SUCCESS ||
+ 		status == PHY_STOP_ERR_DEVICE_ATTACHED)
+ 		phy->phy_state = PHY_LINK_DISABLE;
+@@ -3800,9 +3707,9 @@ static int mpi_set_controller_config_resp(struct pm8001_hba_info *pm8001_ha,
+ 	u32 status = le32_to_cpu(pPayload->status);
+ 	u32 err_qlfr_pgcd = le32_to_cpu(pPayload->err_qlfr_pgcd);
+ 
+-	PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-			"SET CONTROLLER RESP: status 0x%x qlfr_pgcd 0x%x\n",
+-			status, err_qlfr_pgcd));
++	pm8001_dbg(pm8001_ha, MSG,
++		   "SET CONTROLLER RESP: status 0x%x qlfr_pgcd 0x%x\n",
++		   status, err_qlfr_pgcd);
+ 
+ 	return 0;
+ }
+@@ -3815,8 +3722,7 @@ static int mpi_set_controller_config_resp(struct pm8001_hba_info *pm8001_ha,
+ static int mpi_get_controller_config_resp(struct pm8001_hba_info *pm8001_ha,
+ 			void *piomb)
+ {
+-	PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" pm80xx_addition_functionality\n"));
++	pm8001_dbg(pm8001_ha, MSG, " pm80xx_addition_functionality\n");
+ 
+ 	return 0;
+ }
+@@ -3829,8 +3735,7 @@ static int mpi_get_controller_config_resp(struct pm8001_hba_info *pm8001_ha,
+ static int mpi_get_phy_profile_resp(struct pm8001_hba_info *pm8001_ha,
+ 			void *piomb)
+ {
+-	PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" pm80xx_addition_functionality\n"));
++	pm8001_dbg(pm8001_ha, MSG, " pm80xx_addition_functionality\n");
+ 
+ 	return 0;
+ }
+@@ -3842,8 +3747,7 @@ static int mpi_get_phy_profile_resp(struct pm8001_hba_info *pm8001_ha,
+  */
+ static int mpi_flash_op_ext_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ {
+-	PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" pm80xx_addition_functionality\n"));
++	pm8001_dbg(pm8001_ha, MSG, " pm80xx_addition_functionality\n");
+ 
+ 	return 0;
+ }
+@@ -3868,15 +3772,14 @@ static int mpi_set_phy_profile_resp(struct pm8001_hba_info *pm8001_ha,
+ 	page_code = (u8)((ppc_phyid & 0xFF00) >> 8);
+ 	if (status) {
+ 		/* status is FAILED */
+-		PM8001_FAIL_DBG(pm8001_ha,
+-			pm8001_printk("PhyProfile command failed  with status "
+-			"0x%08X \n", status));
++		pm8001_dbg(pm8001_ha, FAIL,
++			   "PhyProfile command failed  with status 0x%08X\n",
++			   status);
+ 		rc = -1;
+ 	} else {
+ 		if (page_code != SAS_PHY_ANALOG_SETTINGS_PAGE) {
+-			PM8001_FAIL_DBG(pm8001_ha,
+-				pm8001_printk("Invalid page code 0x%X\n",
+-					page_code));
++			pm8001_dbg(pm8001_ha, FAIL, "Invalid page code 0x%X\n",
++				   page_code);
+ 			rc = -1;
+ 		}
+ 	}
+@@ -3898,9 +3801,9 @@ static int mpi_kek_management_resp(struct pm8001_hba_info *pm8001_ha,
+ 	u32 kidx_new_curr_ksop = le32_to_cpu(pPayload->kidx_new_curr_ksop);
+ 	u32 err_qlfr = le32_to_cpu(pPayload->err_qlfr);
+ 
+-	PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-		"KEK MGMT RESP. Status 0x%x idx_ksop 0x%x err_qlfr 0x%x\n",
+-		status, kidx_new_curr_ksop, err_qlfr));
++	pm8001_dbg(pm8001_ha, MSG,
++		   "KEK MGMT RESP. Status 0x%x idx_ksop 0x%x err_qlfr 0x%x\n",
++		   status, kidx_new_curr_ksop, err_qlfr);
+ 
+ 	return 0;
+ }
+@@ -3913,8 +3816,7 @@ static int mpi_kek_management_resp(struct pm8001_hba_info *pm8001_ha,
+ static int mpi_dek_management_resp(struct pm8001_hba_info *pm8001_ha,
+ 			void *piomb)
+ {
+-	PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" pm80xx_addition_functionality\n"));
++	pm8001_dbg(pm8001_ha, MSG, " pm80xx_addition_functionality\n");
+ 
+ 	return 0;
+ }
+@@ -3927,8 +3829,7 @@ static int mpi_dek_management_resp(struct pm8001_hba_info *pm8001_ha,
+ static int ssp_coalesced_comp_resp(struct pm8001_hba_info *pm8001_ha,
+ 			void *piomb)
+ {
+-	PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk(" pm80xx_addition_functionality\n"));
++	pm8001_dbg(pm8001_ha, MSG, " pm80xx_addition_functionality\n");
+ 
+ 	return 0;
+ }
+@@ -3945,248 +3846,206 @@ static void process_one_iomb(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 
+ 	switch (opc) {
+ 	case OPC_OUB_ECHO:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk("OPC_OUB_ECHO\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_ECHO\n");
+ 		break;
+ 	case OPC_OUB_HW_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_HW_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_HW_EVENT\n");
+ 		mpi_hw_event(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_THERM_HW_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_THERMAL_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_THERMAL_EVENT\n");
+ 		mpi_thermal_hw_event(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SSP_COMP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SSP_COMP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SSP_COMP\n");
+ 		mpi_ssp_completion(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SMP_COMP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SMP_COMP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SMP_COMP\n");
+ 		mpi_smp_completion(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_LOCAL_PHY_CNTRL:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_LOCAL_PHY_CNTRL\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_LOCAL_PHY_CNTRL\n");
+ 		pm8001_mpi_local_phy_ctl(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_DEV_REGIST:
+-		PM8001_MSG_DBG(pm8001_ha,
+-		pm8001_printk("OPC_OUB_DEV_REGIST\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_DEV_REGIST\n");
+ 		pm8001_mpi_reg_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_DEREG_DEV:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("unregister the device\n"));
++		pm8001_dbg(pm8001_ha, MSG, "unregister the device\n");
+ 		pm8001_mpi_dereg_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_GET_DEV_HANDLE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GET_DEV_HANDLE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GET_DEV_HANDLE\n");
+ 		break;
+ 	case OPC_OUB_SATA_COMP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SATA_COMP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SATA_COMP\n");
+ 		mpi_sata_completion(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SATA_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SATA_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SATA_EVENT\n");
+ 		mpi_sata_event(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SSP_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SSP_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SSP_EVENT\n");
+ 		mpi_ssp_event(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_DEV_HANDLE_ARRIV:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_DEV_HANDLE_ARRIV\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_DEV_HANDLE_ARRIV\n");
+ 		/*This is for target*/
+ 		break;
+ 	case OPC_OUB_SSP_RECV_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SSP_RECV_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SSP_RECV_EVENT\n");
+ 		/*This is for target*/
+ 		break;
+ 	case OPC_OUB_FW_FLASH_UPDATE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_FW_FLASH_UPDATE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_FW_FLASH_UPDATE\n");
+ 		pm8001_mpi_fw_flash_update_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_GPIO_RESPONSE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GPIO_RESPONSE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GPIO_RESPONSE\n");
+ 		break;
+ 	case OPC_OUB_GPIO_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GPIO_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GPIO_EVENT\n");
+ 		break;
+ 	case OPC_OUB_GENERAL_EVENT:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GENERAL_EVENT\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GENERAL_EVENT\n");
+ 		pm8001_mpi_general_event(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SSP_ABORT_RSP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SSP_ABORT_RSP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SSP_ABORT_RSP\n");
+ 		pm8001_mpi_task_abort_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SATA_ABORT_RSP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SATA_ABORT_RSP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SATA_ABORT_RSP\n");
+ 		pm8001_mpi_task_abort_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SAS_DIAG_MODE_START_END:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SAS_DIAG_MODE_START_END\n"));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "OPC_OUB_SAS_DIAG_MODE_START_END\n");
+ 		break;
+ 	case OPC_OUB_SAS_DIAG_EXECUTE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SAS_DIAG_EXECUTE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SAS_DIAG_EXECUTE\n");
+ 		break;
+ 	case OPC_OUB_GET_TIME_STAMP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GET_TIME_STAMP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GET_TIME_STAMP\n");
+ 		break;
+ 	case OPC_OUB_SAS_HW_EVENT_ACK:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SAS_HW_EVENT_ACK\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SAS_HW_EVENT_ACK\n");
+ 		break;
+ 	case OPC_OUB_PORT_CONTROL:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_PORT_CONTROL\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_PORT_CONTROL\n");
+ 		break;
+ 	case OPC_OUB_SMP_ABORT_RSP:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SMP_ABORT_RSP\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SMP_ABORT_RSP\n");
+ 		pm8001_mpi_task_abort_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_GET_NVMD_DATA:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GET_NVMD_DATA\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GET_NVMD_DATA\n");
+ 		pm8001_mpi_get_nvmd_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SET_NVMD_DATA:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SET_NVMD_DATA\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SET_NVMD_DATA\n");
+ 		pm8001_mpi_set_nvmd_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_DEVICE_HANDLE_REMOVAL:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_DEVICE_HANDLE_REMOVAL\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_DEVICE_HANDLE_REMOVAL\n");
+ 		break;
+ 	case OPC_OUB_SET_DEVICE_STATE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SET_DEVICE_STATE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SET_DEVICE_STATE\n");
+ 		pm8001_mpi_set_dev_state_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_GET_DEVICE_STATE:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_GET_DEVICE_STATE\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_GET_DEVICE_STATE\n");
+ 		break;
+ 	case OPC_OUB_SET_DEV_INFO:
+-		PM8001_MSG_DBG(pm8001_ha,
+-			pm8001_printk("OPC_OUB_SET_DEV_INFO\n"));
++		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SET_DEV_INFO\n");
+ 		break;
+ 	/* spcv specifc commands */
+ 	case OPC_OUB_PHY_START_RESP:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-			"OPC_OUB_PHY_START_RESP opcode:%x\n", opc));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "OPC_OUB_PHY_START_RESP opcode:%x\n", opc);
+ 		mpi_phy_start_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_PHY_STOP_RESP:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-			"OPC_OUB_PHY_STOP_RESP opcode:%x\n", opc));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "OPC_OUB_PHY_STOP_RESP opcode:%x\n", opc);
+ 		mpi_phy_stop_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SET_CONTROLLER_CONFIG:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-			"OPC_OUB_SET_CONTROLLER_CONFIG opcode:%x\n", opc));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "OPC_OUB_SET_CONTROLLER_CONFIG opcode:%x\n", opc);
+ 		mpi_set_controller_config_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_GET_CONTROLLER_CONFIG:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-			"OPC_OUB_GET_CONTROLLER_CONFIG opcode:%x\n", opc));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "OPC_OUB_GET_CONTROLLER_CONFIG opcode:%x\n", opc);
+ 		mpi_get_controller_config_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_GET_PHY_PROFILE:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-			"OPC_OUB_GET_PHY_PROFILE opcode:%x\n", opc));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "OPC_OUB_GET_PHY_PROFILE opcode:%x\n", opc);
+ 		mpi_get_phy_profile_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_FLASH_OP_EXT:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-			"OPC_OUB_FLASH_OP_EXT opcode:%x\n", opc));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "OPC_OUB_FLASH_OP_EXT opcode:%x\n", opc);
+ 		mpi_flash_op_ext_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SET_PHY_PROFILE:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-			"OPC_OUB_SET_PHY_PROFILE opcode:%x\n", opc));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "OPC_OUB_SET_PHY_PROFILE opcode:%x\n", opc);
+ 		mpi_set_phy_profile_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_KEK_MANAGEMENT_RESP:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-			"OPC_OUB_KEK_MANAGEMENT_RESP opcode:%x\n", opc));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "OPC_OUB_KEK_MANAGEMENT_RESP opcode:%x\n", opc);
+ 		mpi_kek_management_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_DEK_MANAGEMENT_RESP:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-			"OPC_OUB_DEK_MANAGEMENT_RESP opcode:%x\n", opc));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "OPC_OUB_DEK_MANAGEMENT_RESP opcode:%x\n", opc);
+ 		mpi_dek_management_resp(pm8001_ha, piomb);
+ 		break;
+ 	case OPC_OUB_SSP_COALESCED_COMP_RESP:
+-		PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
+-			"OPC_OUB_SSP_COALESCED_COMP_RESP opcode:%x\n", opc));
++		pm8001_dbg(pm8001_ha, MSG,
++			   "OPC_OUB_SSP_COALESCED_COMP_RESP opcode:%x\n", opc);
+ 		ssp_coalesced_comp_resp(pm8001_ha, piomb);
+ 		break;
+ 	default:
+-		PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk(
+-			"Unknown outbound Queue IOMB OPC = 0x%x\n", opc));
++		pm8001_dbg(pm8001_ha, DEVIO,
++			   "Unknown outbound Queue IOMB OPC = 0x%x\n", opc);
+ 		break;
+ 	}
+ }
+ 
+ static void print_scratchpad_registers(struct pm8001_hba_info *pm8001_ha)
+ {
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("MSGU_SCRATCH_PAD_0: 0x%x\n",
+-			pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_0)));
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("MSGU_SCRATCH_PAD_1:0x%x\n",
+-			pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1)));
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("MSGU_SCRATCH_PAD_2: 0x%x\n",
+-			pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_2)));
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("MSGU_SCRATCH_PAD_3: 0x%x\n",
+-			pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_3)));
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("MSGU_HOST_SCRATCH_PAD_0: 0x%x\n",
+-			pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_0)));
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("MSGU_HOST_SCRATCH_PAD_1: 0x%x\n",
+-			pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_1)));
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("MSGU_HOST_SCRATCH_PAD_2: 0x%x\n",
+-			pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_2)));
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("MSGU_HOST_SCRATCH_PAD_3: 0x%x\n",
+-			pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_3)));
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("MSGU_HOST_SCRATCH_PAD_4: 0x%x\n",
+-			pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_4)));
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("MSGU_HOST_SCRATCH_PAD_5: 0x%x\n",
+-			pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_5)));
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("MSGU_RSVD_SCRATCH_PAD_0: 0x%x\n",
+-			pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_6)));
+-	PM8001_FAIL_DBG(pm8001_ha,
+-		pm8001_printk("MSGU_RSVD_SCRATCH_PAD_1: 0x%x\n",
+-			pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_7)));
++	pm8001_dbg(pm8001_ha, FAIL, "MSGU_SCRATCH_PAD_0: 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_0));
++	pm8001_dbg(pm8001_ha, FAIL, "MSGU_SCRATCH_PAD_1:0x%x\n",
++		   pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1));
++	pm8001_dbg(pm8001_ha, FAIL, "MSGU_SCRATCH_PAD_2: 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_2));
++	pm8001_dbg(pm8001_ha, FAIL, "MSGU_SCRATCH_PAD_3: 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_3));
++	pm8001_dbg(pm8001_ha, FAIL, "MSGU_HOST_SCRATCH_PAD_0: 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_0));
++	pm8001_dbg(pm8001_ha, FAIL, "MSGU_HOST_SCRATCH_PAD_1: 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_1));
++	pm8001_dbg(pm8001_ha, FAIL, "MSGU_HOST_SCRATCH_PAD_2: 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_2));
++	pm8001_dbg(pm8001_ha, FAIL, "MSGU_HOST_SCRATCH_PAD_3: 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_3));
++	pm8001_dbg(pm8001_ha, FAIL, "MSGU_HOST_SCRATCH_PAD_4: 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_4));
++	pm8001_dbg(pm8001_ha, FAIL, "MSGU_HOST_SCRATCH_PAD_5: 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_5));
++	pm8001_dbg(pm8001_ha, FAIL, "MSGU_RSVD_SCRATCH_PAD_0: 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_6));
++	pm8001_dbg(pm8001_ha, FAIL, "MSGU_RSVD_SCRATCH_PAD_1: 0x%x\n",
++		   pm8001_cr32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_7));
+ }
+ 
+ static int process_oq(struct pm8001_hba_info *pm8001_ha, u8 vec)
+@@ -4203,8 +4062,9 @@ static int process_oq(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ 		if ((regval & SCRATCH_PAD_MIPSALL_READY) !=
+ 					SCRATCH_PAD_MIPSALL_READY) {
+ 			pm8001_ha->controller_fatal_error = true;
+-			PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
+-				"Firmware Fatal error! Regval:0x%x\n", regval));
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "Firmware Fatal error! Regval:0x%x\n",
++				   regval);
+ 			print_scratchpad_registers(pm8001_ha);
+ 			return ret;
+ 		}
+@@ -4281,7 +4141,6 @@ static int pm80xx_chip_smp_req(struct pm8001_hba_info *pm8001_ha,
+ 	char *preq_dma_addr = NULL;
+ 	__le64 tmp_addr;
+ 	u32 i, length;
+-	unsigned long flags;
+ 
+ 	memset(&smp_cmd, 0, sizeof(smp_cmd));
+ 	/*
+@@ -4311,8 +4170,7 @@ static int pm80xx_chip_smp_req(struct pm8001_hba_info *pm8001_ha,
+ 	smp_cmd.tag = cpu_to_le32(ccb->ccb_tag);
+ 
+ 	length = sg_req->length;
+-	PM8001_IO_DBG(pm8001_ha,
+-		pm8001_printk("SMP Frame Length %d\n", sg_req->length));
++	pm8001_dbg(pm8001_ha, IO, "SMP Frame Length %d\n", sg_req->length);
+ 	if (!(length - 8))
+ 		pm8001_ha->smp_exp_mode = SMP_DIRECT;
+ 	else
+@@ -4324,8 +4182,7 @@ static int pm80xx_chip_smp_req(struct pm8001_hba_info *pm8001_ha,
+ 
+ 	/* INDIRECT MODE command settings. Use DMA */
+ 	if (pm8001_ha->smp_exp_mode == SMP_INDIRECT) {
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("SMP REQUEST INDIRECT MODE\n"));
++		pm8001_dbg(pm8001_ha, IO, "SMP REQUEST INDIRECT MODE\n");
+ 		/* for SPCv indirect mode. Place the top 4 bytes of
+ 		 * SMP Request header here. */
+ 		for (i = 0; i < 4; i++)
+@@ -4357,30 +4214,27 @@ static int pm80xx_chip_smp_req(struct pm8001_hba_info *pm8001_ha,
+ 			((u32)sg_dma_len(&task->smp_task.smp_resp)-4);
+ 	}
+ 	if (pm8001_ha->smp_exp_mode == SMP_DIRECT) {
+-		PM8001_IO_DBG(pm8001_ha,
+-			pm8001_printk("SMP REQUEST DIRECT MODE\n"));
++		pm8001_dbg(pm8001_ha, IO, "SMP REQUEST DIRECT MODE\n");
+ 		for (i = 0; i < length; i++)
+ 			if (i < 16) {
+ 				smp_cmd.smp_req16[i] = *(preq_dma_addr+i);
+-				PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-					"Byte[%d]:%x (DMA data:%x)\n",
+-					i, smp_cmd.smp_req16[i],
+-					*(preq_dma_addr)));
++				pm8001_dbg(pm8001_ha, IO,
++					   "Byte[%d]:%x (DMA data:%x)\n",
++					   i, smp_cmd.smp_req16[i],
++					   *(preq_dma_addr));
+ 			} else {
+ 				smp_cmd.smp_req[i] = *(preq_dma_addr+i);
+-				PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-					"Byte[%d]:%x (DMA data:%x)\n",
+-					i, smp_cmd.smp_req[i],
+-					*(preq_dma_addr)));
++				pm8001_dbg(pm8001_ha, IO,
++					   "Byte[%d]:%x (DMA data:%x)\n",
++					   i, smp_cmd.smp_req[i],
++					   *(preq_dma_addr));
+ 			}
+ 	}
+ 
+ 	build_smp_cmd(pm8001_dev->device_id, smp_cmd.tag,
+ 				&smp_cmd, pm8001_ha->smp_exp_mode, length);
+-	spin_lock_irqsave(&circularQ->iq_lock, flags);
+ 	rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &smp_cmd,
+ 			sizeof(smp_cmd), 0);
+-	spin_unlock_irqrestore(&circularQ->iq_lock, flags);
+ 	if (rc)
+ 		goto err_out_2;
+ 	return 0;
+@@ -4444,7 +4298,6 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 	u64 phys_addr, start_addr, end_addr;
+ 	u32 end_addr_high, end_addr_low;
+ 	struct inbound_queue_table *circularQ;
+-	unsigned long flags;
+ 	u32 q_index, cpu_id;
+ 	u32 opc = OPC_INB_SSPINIIOSTART;
+ 	memset(&ssp_cmd, 0, sizeof(ssp_cmd));
+@@ -4471,9 +4324,9 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 	/* Check if encryption is set */
+ 	if (pm8001_ha->chip->encrypt &&
+ 		!(pm8001_ha->encrypt_info.status) && check_enc_sas_cmd(task)) {
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"Encryption enabled.Sending Encrypt SAS command 0x%x\n",
+-			task->ssp_task.cmd->cmnd[0]));
++		pm8001_dbg(pm8001_ha, IO,
++			   "Encryption enabled.Sending Encrypt SAS command 0x%x\n",
++			   task->ssp_task.cmd->cmnd[0]);
+ 		opc = OPC_INB_SSP_INI_DIF_ENC_IO;
+ 		/* enable encryption. 0 for SAS 1.1 and SAS 2.0 compatible TLR*/
+ 		ssp_cmd.dad_dir_m_tlr =	cpu_to_le32
+@@ -4503,13 +4356,10 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 			end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+ 			end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+ 			if (end_addr_high != ssp_cmd.enc_addr_high) {
+-				PM8001_FAIL_DBG(pm8001_ha,
+-					pm8001_printk("The sg list address "
+-					"start_addr=0x%016llx data_len=0x%x "
+-					"end_addr_high=0x%08x end_addr_low="
+-					"0x%08x has crossed 4G boundary\n",
+-						start_addr, ssp_cmd.enc_len,
+-						end_addr_high, end_addr_low));
++				pm8001_dbg(pm8001_ha, FAIL,
++					   "The sg list address start_addr=0x%016llx data_len=0x%x end_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
++					   start_addr, ssp_cmd.enc_len,
++					   end_addr_high, end_addr_low);
+ 				pm8001_chip_make_sg(task->scatter, 1,
+ 					ccb->buf_prd);
+ 				phys_addr = ccb->ccb_dma_handle;
+@@ -4533,9 +4383,9 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 						(task->ssp_task.cmd->cmnd[4] << 8) |
+ 						(task->ssp_task.cmd->cmnd[5]));
+ 	} else {
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"Sending Normal SAS command 0x%x inb q %x\n",
+-			task->ssp_task.cmd->cmnd[0], q_index));
++		pm8001_dbg(pm8001_ha, IO,
++			   "Sending Normal SAS command 0x%x inb q %x\n",
++			   task->ssp_task.cmd->cmnd[0], q_index);
+ 		/* fill in PRD (scatter/gather) table, if any */
+ 		if (task->num_scatter > 1) {
+ 			pm8001_chip_make_sg(task->scatter, ccb->n_elem,
+@@ -4559,13 +4409,10 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 			end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+ 			end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+ 			if (end_addr_high != ssp_cmd.addr_high) {
+-				PM8001_FAIL_DBG(pm8001_ha,
+-					pm8001_printk("The sg list address "
+-					"start_addr=0x%016llx data_len=0x%x "
+-					"end_addr_high=0x%08x end_addr_low="
+-					"0x%08x has crossed 4G boundary\n",
+-						 start_addr, ssp_cmd.len,
+-						 end_addr_high, end_addr_low));
++				pm8001_dbg(pm8001_ha, FAIL,
++					   "The sg list address start_addr=0x%016llx data_len=0x%x end_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
++					   start_addr, ssp_cmd.len,
++					   end_addr_high, end_addr_low);
+ 				pm8001_chip_make_sg(task->scatter, 1,
+ 					ccb->buf_prd);
+ 				phys_addr = ccb->ccb_dma_handle;
+@@ -4582,10 +4429,8 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 			ssp_cmd.esgl = 0;
+ 		}
+ 	}
+-	spin_lock_irqsave(&circularQ->iq_lock, flags);
+ 	ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc,
+ 			&ssp_cmd, sizeof(ssp_cmd), q_index);
+-	spin_unlock_irqrestore(&circularQ->iq_lock, flags);
+ 	return ret;
+ }
+ 
+@@ -4614,19 +4459,19 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 
+ 	if (task->data_dir == DMA_NONE) {
+ 		ATAP = 0x04; /* no data*/
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk("no data\n"));
++		pm8001_dbg(pm8001_ha, IO, "no data\n");
+ 	} else if (likely(!task->ata_task.device_control_reg_update)) {
+ 		if (task->ata_task.dma_xfer) {
+ 			ATAP = 0x06; /* DMA */
+-			PM8001_IO_DBG(pm8001_ha, pm8001_printk("DMA\n"));
++			pm8001_dbg(pm8001_ha, IO, "DMA\n");
+ 		} else {
+ 			ATAP = 0x05; /* PIO*/
+-			PM8001_IO_DBG(pm8001_ha, pm8001_printk("PIO\n"));
++			pm8001_dbg(pm8001_ha, IO, "PIO\n");
+ 		}
+ 		if (task->ata_task.use_ncq &&
+ 		    dev->sata_dev.class != ATA_DEV_ATAPI) {
+ 			ATAP = 0x07; /* FPDMA */
+-			PM8001_IO_DBG(pm8001_ha, pm8001_printk("FPDMA\n"));
++			pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
+ 		}
+ 	}
+ 	if (task->ata_task.use_ncq && pm8001_get_ncq_tag(task, &hdr_tag)) {
+@@ -4646,9 +4491,9 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 	/* Check if encryption is set */
+ 	if (pm8001_ha->chip->encrypt &&
+ 		!(pm8001_ha->encrypt_info.status) && check_enc_sat_cmd(task)) {
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"Encryption enabled.Sending Encrypt SATA cmd 0x%x\n",
+-			sata_cmd.sata_fis.command));
++		pm8001_dbg(pm8001_ha, IO,
++			   "Encryption enabled.Sending Encrypt SATA cmd 0x%x\n",
++			   sata_cmd.sata_fis.command);
+ 		opc = OPC_INB_SATA_DIF_ENC_IO;
+ 
+ 		/* set encryption bit */
+@@ -4676,13 +4521,10 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 			end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+ 			end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+ 			if (end_addr_high != sata_cmd.enc_addr_high) {
+-				PM8001_FAIL_DBG(pm8001_ha,
+-					pm8001_printk("The sg list address "
+-					"start_addr=0x%016llx data_len=0x%x "
+-					"end_addr_high=0x%08x end_addr_low"
+-					"=0x%08x has crossed 4G boundary\n",
+-						start_addr, sata_cmd.enc_len,
+-						end_addr_high, end_addr_low));
++				pm8001_dbg(pm8001_ha, FAIL,
++					   "The sg list address start_addr=0x%016llx data_len=0x%x end_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
++					   start_addr, sata_cmd.enc_len,
++					   end_addr_high, end_addr_low);
+ 				pm8001_chip_make_sg(task->scatter, 1,
+ 					ccb->buf_prd);
+ 				phys_addr = ccb->ccb_dma_handle;
+@@ -4711,9 +4553,9 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 			cpu_to_le32((sata_cmd.sata_fis.lbah_exp << 8) |
+ 					 (sata_cmd.sata_fis.lbam_exp));
+ 	} else {
+-		PM8001_IO_DBG(pm8001_ha, pm8001_printk(
+-			"Sending Normal SATA command 0x%x inb %x\n",
+-			sata_cmd.sata_fis.command, q_index));
++		pm8001_dbg(pm8001_ha, IO,
++			   "Sending Normal SATA command 0x%x inb %x\n",
++			   sata_cmd.sata_fis.command, q_index);
+ 		/* dad (bit 0-1) is 0 */
+ 		sata_cmd.ncqtag_atap_dir_m_dad =
+ 			cpu_to_le32(((ncg_tag & 0xff)<<16) |
+@@ -4739,13 +4581,10 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 			end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+ 			end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+ 			if (end_addr_high != sata_cmd.addr_high) {
+-				PM8001_FAIL_DBG(pm8001_ha,
+-					pm8001_printk("The sg list address "
+-					"start_addr=0x%016llx data_len=0x%x"
+-					"end_addr_high=0x%08x end_addr_low="
+-					"0x%08x has crossed 4G boundary\n",
+-						start_addr, sata_cmd.len,
+-						end_addr_high, end_addr_low));
++				pm8001_dbg(pm8001_ha, FAIL,
++					   "The sg list address start_addr=0x%016llx data_len=0x%xend_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
++					   start_addr, sata_cmd.len,
++					   end_addr_high, end_addr_low);
+ 				pm8001_chip_make_sg(task->scatter, 1,
+ 					ccb->buf_prd);
+ 				phys_addr = ccb->ccb_dma_handle;
+@@ -4804,10 +4643,10 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 					SAS_TASK_STATE_ABORTED))) {
+ 				spin_unlock_irqrestore(&task->task_state_lock,
+ 							flags);
+-				PM8001_FAIL_DBG(pm8001_ha,
+-					pm8001_printk("task 0x%p resp 0x%x "
+-					" stat 0x%x but aborted by upper layer "
+-					"\n", task, ts->resp, ts->stat));
++				pm8001_dbg(pm8001_ha, FAIL,
++					   "task 0x%p resp 0x%x  stat 0x%x but aborted by upper layer\n",
++					   task, ts->resp,
++					   ts->stat);
+ 				pm8001_ccb_task_free(pm8001_ha, task, ccb, tag);
+ 				return 0;
+ 			} else {
+@@ -4815,14 +4654,13 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 							flags);
+ 				pm8001_ccb_task_free_done(pm8001_ha, task,
+ 								ccb, tag);
++				atomic_dec(&pm8001_ha_dev->running_req);
+ 				return 0;
+ 			}
+ 		}
+ 	}
+-	spin_lock_irqsave(&circularQ->iq_lock, flags);
+ 	ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc,
+ 			&sata_cmd, sizeof(sata_cmd), q_index);
+-	spin_unlock_irqrestore(&circularQ->iq_lock, flags);
+ 	return ret;
+ }
+ 
+@@ -4843,8 +4681,7 @@ pm80xx_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id)
+ 	memset(&payload, 0, sizeof(payload));
+ 	payload.tag = cpu_to_le32(tag);
+ 
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("PHY START REQ for phy_id %d\n", phy_id));
++	pm8001_dbg(pm8001_ha, INIT, "PHY START REQ for phy_id %d\n", phy_id);
+ 
+ 	payload.ase_sh_lm_slr_phyid = cpu_to_le32(SPINHOLD_DISABLE |
+ 			LINKMODE_AUTO | pm8001_ha->link_rate | phy_id);
+@@ -5008,9 +4845,9 @@ static irqreturn_t
+ pm80xx_chip_isr(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ {
+ 	pm80xx_chip_interrupt_disable(pm8001_ha, vec);
+-	PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk(
+-		"irq vec %d, ODMR:0x%x\n",
+-		vec, pm8001_cr32(pm8001_ha, 0, 0x30)));
++	pm8001_dbg(pm8001_ha, DEVIO,
++		   "irq vec %d, ODMR:0x%x\n",
++		   vec, pm8001_cr32(pm8001_ha, 0, 0x30));
+ 	process_oq(pm8001_ha, vec);
+ 	pm80xx_chip_interrupt_enable(pm8001_ha, vec);
+ 	return IRQ_HANDLED;
+@@ -5029,13 +4866,13 @@ static void mpi_set_phy_profile_req(struct pm8001_hba_info *pm8001_ha,
+ 	memset(&payload, 0, sizeof(payload));
+ 	rc = pm8001_tag_alloc(pm8001_ha, &tag);
+ 	if (rc)
+-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("Invalid tag\n"));
++		pm8001_dbg(pm8001_ha, FAIL, "Invalid tag\n");
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ 	payload.tag = cpu_to_le32(tag);
+ 	payload.ppc_phyid = (((operation & 0xF) << 8) | (phyid  & 0xFF));
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk(" phy profile command for phy %x ,length is %d\n",
+-			payload.ppc_phyid, length));
++	pm8001_dbg(pm8001_ha, INIT,
++		   " phy profile command for phy %x ,length is %d\n",
++		   payload.ppc_phyid, length);
+ 	for (i = length; i < (length + PHY_DWORD_LENGTH - 1); i++) {
+ 		payload.reserved[j] =  cpu_to_le32(*((u32 *)buf + i));
+ 		j++;
+@@ -5056,7 +4893,7 @@ void pm8001_set_phy_profile(struct pm8001_hba_info *pm8001_ha,
+ 			SAS_PHY_ANALOG_SETTINGS_PAGE, i, length, (u32 *)buf);
+ 		length = length + PHY_DWORD_LENGTH;
+ 	}
+-	PM8001_INIT_DBG(pm8001_ha, pm8001_printk("phy settings completed\n"));
++	pm8001_dbg(pm8001_ha, INIT, "phy settings completed\n");
+ }
+ 
+ void pm8001_set_phy_profile_single(struct pm8001_hba_info *pm8001_ha,
+@@ -5071,7 +4908,7 @@ void pm8001_set_phy_profile_single(struct pm8001_hba_info *pm8001_ha,
+ 
+ 	rc = pm8001_tag_alloc(pm8001_ha, &tag);
+ 	if (rc)
+-		PM8001_INIT_DBG(pm8001_ha, pm8001_printk("Invalid tag"));
++		pm8001_dbg(pm8001_ha, INIT, "Invalid tag\n");
+ 
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ 	opc = OPC_INB_SET_PHY_PROFILE;
+@@ -5088,8 +4925,7 @@ void pm8001_set_phy_profile_single(struct pm8001_hba_info *pm8001_ha,
+ 	if (rc)
+ 		pm8001_tag_free(pm8001_ha, tag);
+ 
+-	PM8001_INIT_DBG(pm8001_ha,
+-		pm8001_printk("PHY %d settings applied", phy));
++	pm8001_dbg(pm8001_ha, INIT, "PHY %d settings applied\n", phy);
+ }
+ const struct pm8001_dispatch pm8001_80xx_dispatch = {
+ 	.name			= "pmc80xx",
+diff --git a/drivers/scsi/ufs/ufs-mediatek.c b/drivers/scsi/ufs/ufs-mediatek.c
+index 934713472ebce..09d2ac20508b5 100644
+--- a/drivers/scsi/ufs/ufs-mediatek.c
++++ b/drivers/scsi/ufs/ufs-mediatek.c
+@@ -813,7 +813,7 @@ static void ufs_mtk_vreg_set_lpm(struct ufs_hba *hba, bool lpm)
+ 	if (!hba->vreg_info.vccq2 || !hba->vreg_info.vcc)
+ 		return;
+ 
+-	if (lpm & !hba->vreg_info.vcc->enabled)
++	if (lpm && !hba->vreg_info.vcc->enabled)
+ 		regulator_set_mode(hba->vreg_info.vccq2->reg,
+ 				   REGULATOR_MODE_IDLE);
+ 	else if (!lpm)
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 826b01f346246..2e1255bf1b429 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1198,6 +1198,7 @@ static int cqspi_probe(struct platform_device *pdev)
+ 	cqspi = spi_master_get_devdata(master);
+ 
+ 	cqspi->pdev = pdev;
++	platform_set_drvdata(pdev, cqspi);
+ 
+ 	/* Obtain configuration from OF. */
+ 	ret = cqspi_of_get_pdata(cqspi);
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus.c b/drivers/staging/media/sunxi/cedrus/cedrus.c
+index e0e35502e34a3..1dd833757c4ee 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus.c
+@@ -103,6 +103,25 @@ static const struct cedrus_control cedrus_controls[] = {
+ 		.codec		= CEDRUS_CODEC_H264,
+ 		.required	= false,
+ 	},
++	/*
++	 * We only expose supported profiles information,
++	 * and not levels as it's not clear what is supported
++	 * for each hardware/core version.
++	 * In any case, TRY/S_FMT will clamp the format resolution
++	 * to the maximum supported.
++	 */
++	{
++		.cfg = {
++			.id	= V4L2_CID_MPEG_VIDEO_H264_PROFILE,
++			.min	= V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE,
++			.def	= V4L2_MPEG_VIDEO_H264_PROFILE_MAIN,
++			.max	= V4L2_MPEG_VIDEO_H264_PROFILE_HIGH,
++			.menu_skip_mask =
++				BIT(V4L2_MPEG_VIDEO_H264_PROFILE_EXTENDED),
++		},
++		.codec		= CEDRUS_CODEC_H264,
++		.required	= false,
++	},
+ 	{
+ 		.cfg = {
+ 			.id	= V4L2_CID_MPEG_VIDEO_HEVC_SPS,
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index c73bbfe69ba16..9a272a516b2d7 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -761,12 +761,6 @@ static int tb_init_port(struct tb_port *port)
+ 
+ 	tb_dump_port(port->sw->tb, &port->config);
+ 
+-	/* Control port does not need HopID allocation */
+-	if (port->port) {
+-		ida_init(&port->in_hopids);
+-		ida_init(&port->out_hopids);
+-	}
+-
+ 	INIT_LIST_HEAD(&port->list);
+ 	return 0;
+ 
+@@ -1764,10 +1758,8 @@ static void tb_switch_release(struct device *dev)
+ 	dma_port_free(sw->dma_port);
+ 
+ 	tb_switch_for_each_port(sw, port) {
+-		if (!port->disabled) {
+-			ida_destroy(&port->in_hopids);
+-			ida_destroy(&port->out_hopids);
+-		}
++		ida_destroy(&port->in_hopids);
++		ida_destroy(&port->out_hopids);
+ 	}
+ 
+ 	kfree(sw->uuid);
+@@ -1947,6 +1939,12 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
+ 		/* minimum setup for tb_find_cap and tb_drom_read to work */
+ 		sw->ports[i].sw = sw;
+ 		sw->ports[i].port = i;
++
++		/* Control port does not need HopID allocation */
++		if (i) {
++			ida_init(&sw->ports[i].in_hopids);
++			ida_init(&sw->ports[i].out_hopids);
++		}
+ 	}
+ 
+ 	ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS);
+diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
+index 214fbc92c1b7d..a56ea540af00b 100644
+--- a/drivers/thunderbolt/tb.c
++++ b/drivers/thunderbolt/tb.c
+@@ -138,6 +138,10 @@ static void tb_discover_tunnels(struct tb_switch *sw)
+ 				parent->boot = true;
+ 				parent = tb_switch_parent(parent);
+ 			}
++		} else if (tb_tunnel_is_dp(tunnel)) {
++			/* Keep the domain from powering down */
++			pm_runtime_get_sync(&tunnel->src_port->sw->dev);
++			pm_runtime_get_sync(&tunnel->dst_port->sw->dev);
+ 		}
+ 
+ 		list_add_tail(&tunnel->list, &tcm->tunnel_list);
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index ee6c7762d3559..6248304a001f4 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -350,7 +350,6 @@ static void stm32_transmit_chars_dma(struct uart_port *port)
+ 	struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
+ 	struct circ_buf *xmit = &port->state->xmit;
+ 	struct dma_async_tx_descriptor *desc = NULL;
+-	dma_cookie_t cookie;
+ 	unsigned int count, i;
+ 
+ 	if (stm32port->tx_dma_busy)
+@@ -384,17 +383,18 @@ static void stm32_transmit_chars_dma(struct uart_port *port)
+ 					   DMA_MEM_TO_DEV,
+ 					   DMA_PREP_INTERRUPT);
+ 
+-	if (!desc) {
+-		for (i = count; i > 0; i--)
+-			stm32_transmit_chars_pio(port);
+-		return;
+-	}
++	if (!desc)
++		goto fallback_err;
+ 
+ 	desc->callback = stm32_tx_dma_complete;
+ 	desc->callback_param = port;
+ 
+ 	/* Push current DMA TX transaction in the pending queue */
+-	cookie = dmaengine_submit(desc);
++	if (dma_submit_error(dmaengine_submit(desc))) {
++		/* dma no yet started, safe to free resources */
++		dmaengine_terminate_async(stm32port->tx_ch);
++		goto fallback_err;
++	}
+ 
+ 	/* Issue pending DMA TX requests */
+ 	dma_async_issue_pending(stm32port->tx_ch);
+@@ -403,6 +403,11 @@ static void stm32_transmit_chars_dma(struct uart_port *port)
+ 
+ 	xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
+ 	port->icount.tx += count;
++	return;
++
++fallback_err:
++	for (i = count; i > 0; i--)
++		stm32_transmit_chars_pio(port);
+ }
+ 
+ static void stm32_transmit_chars(struct uart_port *port)
+@@ -1087,7 +1092,6 @@ static int stm32_of_dma_rx_probe(struct stm32_port *stm32port,
+ 	struct device *dev = &pdev->dev;
+ 	struct dma_slave_config config;
+ 	struct dma_async_tx_descriptor *desc = NULL;
+-	dma_cookie_t cookie;
+ 	int ret;
+ 
+ 	/* Request DMA RX channel */
+@@ -1132,7 +1136,11 @@ static int stm32_of_dma_rx_probe(struct stm32_port *stm32port,
+ 	desc->callback_param = NULL;
+ 
+ 	/* Push current DMA transaction in the pending queue */
+-	cookie = dmaengine_submit(desc);
++	ret = dma_submit_error(dmaengine_submit(desc));
++	if (ret) {
++		dmaengine_terminate_sync(stm32port->rx_ch);
++		goto config_err;
++	}
+ 
+ 	/* Issue pending DMA requests */
+ 	dma_async_issue_pending(stm32port->rx_ch);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 56f7235bc068c..2a86ad4b12b34 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -783,8 +783,6 @@ static int __dwc3_gadget_ep_disable(struct dwc3_ep *dep)
+ 
+ 	trace_dwc3_gadget_ep_disable(dep);
+ 
+-	dwc3_remove_requests(dwc, dep);
+-
+ 	/* make sure HW endpoint isn't stalled */
+ 	if (dep->flags & DWC3_EP_STALL)
+ 		__dwc3_gadget_ep_set_halt(dep, 0, false);
+@@ -803,6 +801,8 @@ static int __dwc3_gadget_ep_disable(struct dwc3_ep *dep)
+ 		dep->endpoint.desc = NULL;
+ 	}
+ 
++	dwc3_remove_requests(dwc, dep);
++
+ 	return 0;
+ }
+ 
+@@ -1617,7 +1617,7 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req)
+ {
+ 	struct dwc3		*dwc = dep->dwc;
+ 
+-	if (!dep->endpoint.desc || !dwc->pullups_connected) {
++	if (!dep->endpoint.desc || !dwc->pullups_connected || !dwc->connected) {
+ 		dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
+ 				dep->name);
+ 		return -ESHUTDOWN;
+@@ -2125,6 +2125,17 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 		}
+ 	}
+ 
++	/*
++	 * Check the return value for successful resume, or error.  For a
++	 * successful resume, the DWC3 runtime PM resume routine will handle
++	 * the run stop sequence, so avoid duplicate operations here.
++	 */
++	ret = pm_runtime_get_sync(dwc->dev);
++	if (!ret || ret < 0) {
++		pm_runtime_put(dwc->dev);
++		return 0;
++	}
++
+ 	/*
+ 	 * Synchronize any pending event handling before executing the controller
+ 	 * halt routine.
+@@ -2139,6 +2150,7 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 	if (!is_on) {
+ 		u32 count;
+ 
++		dwc->connected = false;
+ 		/*
+ 		 * In the Synopsis DesignWare Cores USB3 Databook Rev. 3.30a
+ 		 * Section 4.1.8 Table 4-7, it states that for a device-initiated
+@@ -2169,6 +2181,7 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 
+ 	ret = dwc3_gadget_run_stop(dwc, is_on, false);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
++	pm_runtime_put(dwc->dev);
+ 
+ 	return ret;
+ }
+@@ -3254,8 +3267,6 @@ static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
+ {
+ 	u32			reg;
+ 
+-	dwc->connected = true;
+-
+ 	/*
+ 	 * WORKAROUND: DWC3 revisions <1.88a have an issue which
+ 	 * would cause a missing Disconnect Event if there's a
+@@ -3295,6 +3306,7 @@ static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
+ 	 * transfers."
+ 	 */
+ 	dwc3_stop_active_transfers(dwc);
++	dwc->connected = true;
+ 
+ 	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+ 	reg &= ~DWC3_DCTL_TSTCTRL_MASK;
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index 36ffb43f9c1a0..9b7fa53d6642b 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -97,6 +97,8 @@ struct gadget_config_name {
+ 	struct list_head list;
+ };
+ 
++#define USB_MAX_STRING_WITH_NULL_LEN	(USB_MAX_STRING_LEN+1)
++
+ static int usb_string_copy(const char *s, char **s_copy)
+ {
+ 	int ret;
+@@ -106,12 +108,16 @@ static int usb_string_copy(const char *s, char **s_copy)
+ 	if (ret > USB_MAX_STRING_LEN)
+ 		return -EOVERFLOW;
+ 
+-	str = kstrdup(s, GFP_KERNEL);
+-	if (!str)
+-		return -ENOMEM;
++	if (copy) {
++		str = copy;
++	} else {
++		str = kmalloc(USB_MAX_STRING_WITH_NULL_LEN, GFP_KERNEL);
++		if (!str)
++			return -ENOMEM;
++	}
++	strcpy(str, s);
+ 	if (str[ret - 1] == '\n')
+ 		str[ret - 1] = '\0';
+-	kfree(copy);
+ 	*s_copy = str;
+ 	return 0;
+ }
+diff --git a/drivers/usb/storage/transport.c b/drivers/usb/storage/transport.c
+index 238a8088e17f6..7cc8813f5d8cb 100644
+--- a/drivers/usb/storage/transport.c
++++ b/drivers/usb/storage/transport.c
+@@ -651,6 +651,13 @@ void usb_stor_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
+ 		need_auto_sense = 1;
+ 	}
+ 
++	/* Some devices (Kindle) require another command after SYNC CACHE */
++	if ((us->fflags & US_FL_SENSE_AFTER_SYNC) &&
++			srb->cmnd[0] == SYNCHRONIZE_CACHE) {
++		usb_stor_dbg(us, "-- sense after SYNC CACHE\n");
++		need_auto_sense = 1;
++	}
++
+ 	/*
+ 	 * If we have a failure, we're going to do a REQUEST_SENSE 
+ 	 * automatically.  Note that we differentiate between a command
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 5732e9691f08f..efa972be2ee34 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2211,6 +2211,18 @@ UNUSUAL_DEV( 0x1908, 0x3335, 0x0200, 0x0200,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_NO_READ_DISC_INFO ),
+ 
++/*
++ * Reported by Matthias Schwarzott <zzam@gentoo.org>
++ * The Amazon Kindle treats SYNCHRONIZE CACHE as an indication that
++ * the host may be finished with it, and automatically ejects its
++ * emulated media unless it receives another command within one second.
++ */
++UNUSUAL_DEV( 0x1949, 0x0004, 0x0000, 0x9999,
++		"Amazon",
++		"Kindle",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_SENSE_AFTER_SYNC ),
++
+ /*
+  * Reported by Oliver Neukum <oneukum@suse.com>
+  * This device morphes spontaneously into another device if the access
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index a6fae1f865051..563658096b675 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -785,6 +785,7 @@ static int tcpm_set_current_limit(struct tcpm_port *port, u32 max_ma, u32 mv)
+ 
+ 	port->supply_voltage = mv;
+ 	port->current_limit = max_ma;
++	power_supply_changed(port->psy);
+ 
+ 	if (port->tcpc->set_current_limit)
+ 		ret = port->tcpc->set_current_limit(port->tcpc, max_ma, mv);
+@@ -2300,6 +2301,7 @@ static int tcpm_pd_select_pdo(struct tcpm_port *port, int *sink_pdo,
+ 
+ 	port->pps_data.supported = false;
+ 	port->usb_type = POWER_SUPPLY_USB_TYPE_PD;
++	power_supply_changed(port->psy);
+ 
+ 	/*
+ 	 * Select the source PDO providing the most power which has a
+@@ -2324,6 +2326,7 @@ static int tcpm_pd_select_pdo(struct tcpm_port *port, int *sink_pdo,
+ 				port->pps_data.supported = true;
+ 				port->usb_type =
+ 					POWER_SUPPLY_USB_TYPE_PD_PPS;
++				power_supply_changed(port->psy);
+ 			}
+ 			continue;
+ 		default:
+@@ -2481,6 +2484,7 @@ static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port)
+ 						  port->pps_data.out_volt));
+ 		port->pps_data.op_curr = min(port->pps_data.max_curr,
+ 					     port->pps_data.op_curr);
++		power_supply_changed(port->psy);
+ 	}
+ 
+ 	return src_pdo;
+@@ -2716,6 +2720,7 @@ static int tcpm_set_charge(struct tcpm_port *port, bool charge)
+ 			return ret;
+ 	}
+ 	port->vbus_charge = charge;
++	power_supply_changed(port->psy);
+ 	return 0;
+ }
+ 
+@@ -2880,6 +2885,7 @@ static void tcpm_reset_port(struct tcpm_port *port)
+ 	port->try_src_count = 0;
+ 	port->try_snk_count = 0;
+ 	port->usb_type = POWER_SUPPLY_USB_TYPE_C;
++	power_supply_changed(port->psy);
+ 	port->nr_sink_caps = 0;
+ 	port->sink_cap_done = false;
+ 	if (port->tcpc->enable_frs)
+@@ -4982,7 +4988,7 @@ static int tcpm_psy_set_prop(struct power_supply *psy,
+ 		ret = -EINVAL;
+ 		break;
+ 	}
+-
++	power_supply_changed(port->psy);
+ 	return ret;
+ }
+ 
+@@ -5134,6 +5140,7 @@ struct tcpm_port *tcpm_register_port(struct device *dev, struct tcpc_dev *tcpc)
+ 	err = devm_tcpm_psy_register(port);
+ 	if (err)
+ 		goto out_role_sw_put;
++	power_supply_changed(port->psy);
+ 
+ 	port->typec_port = typec_register_port(port->dev, &port->typec_caps);
+ 	if (IS_ERR(port->typec_port)) {
+diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
+index 3db33bb622c38..d8e4594fe0090 100644
+--- a/drivers/usb/typec/tps6598x.c
++++ b/drivers/usb/typec/tps6598x.c
+@@ -62,7 +62,6 @@ enum {
+ struct tps6598x_rx_identity_reg {
+ 	u8 status;
+ 	struct usb_pd_identity identity;
+-	u32 vdo[3];
+ } __packed;
+ 
+ /* Standard Task return codes */
+diff --git a/drivers/usb/usbip/vudc_sysfs.c b/drivers/usb/usbip/vudc_sysfs.c
+index a3ec39fc61778..7383a543c6d12 100644
+--- a/drivers/usb/usbip/vudc_sysfs.c
++++ b/drivers/usb/usbip/vudc_sysfs.c
+@@ -174,7 +174,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
+ 
+ 		udc->ud.tcp_socket = socket;
+ 		udc->ud.tcp_rx = tcp_rx;
+-		udc->ud.tcp_rx = tcp_tx;
++		udc->ud.tcp_tx = tcp_tx;
+ 		udc->ud.status = SDEV_ST_USED;
+ 
+ 		spin_unlock_irq(&udc->ud.lock);
+diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
+index 5533df91b257d..90c0525b1e0cf 100644
+--- a/drivers/vfio/Kconfig
++++ b/drivers/vfio/Kconfig
+@@ -21,7 +21,7 @@ config VFIO_VIRQFD
+ 
+ menuconfig VFIO
+ 	tristate "VFIO Non-Privileged userspace driver framework"
+-	depends on IOMMU_API
++	select IOMMU_API
+ 	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM || ARM64)
+ 	help
+ 	  VFIO provides a framework for secure userspace device drivers.
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index 29ed4173f04e6..fc5707ada024e 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -312,8 +312,10 @@ static long vhost_vdpa_get_vring_num(struct vhost_vdpa *v, u16 __user *argp)
+ 
+ static void vhost_vdpa_config_put(struct vhost_vdpa *v)
+ {
+-	if (v->config_ctx)
++	if (v->config_ctx) {
+ 		eventfd_ctx_put(v->config_ctx);
++		v->config_ctx = NULL;
++	}
+ }
+ 
+ static long vhost_vdpa_set_config_call(struct vhost_vdpa *v, u32 __user *argp)
+@@ -333,8 +335,12 @@ static long vhost_vdpa_set_config_call(struct vhost_vdpa *v, u32 __user *argp)
+ 	if (!IS_ERR_OR_NULL(ctx))
+ 		eventfd_ctx_put(ctx);
+ 
+-	if (IS_ERR(v->config_ctx))
+-		return PTR_ERR(v->config_ctx);
++	if (IS_ERR(v->config_ctx)) {
++		long ret = PTR_ERR(v->config_ctx);
++
++		v->config_ctx = NULL;
++		return ret;
++	}
+ 
+ 	v->vdpa->config->set_config_cb(v->vdpa, &cb);
+ 
+@@ -904,14 +910,10 @@ err:
+ 
+ static void vhost_vdpa_clean_irq(struct vhost_vdpa *v)
+ {
+-	struct vhost_virtqueue *vq;
+ 	int i;
+ 
+-	for (i = 0; i < v->nvqs; i++) {
+-		vq = &v->vqs[i];
+-		if (vq->call_ctx.producer.irq)
+-			irq_bypass_unregister_producer(&vq->call_ctx.producer);
+-	}
++	for (i = 0; i < v->nvqs; i++)
++		vhost_vdpa_unsetup_vq_irq(v, i);
+ }
+ 
+ static int vhost_vdpa_release(struct inode *inode, struct file *filep)
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 9068d5578a26f..9dc6f4b1c4177 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -69,7 +69,6 @@ const struct inode_operations afs_dir_inode_operations = {
+ 	.permission	= afs_permission,
+ 	.getattr	= afs_getattr,
+ 	.setattr	= afs_setattr,
+-	.listxattr	= afs_listxattr,
+ };
+ 
+ const struct address_space_operations afs_dir_aops = {
+diff --git a/fs/afs/file.c b/fs/afs/file.c
+index 85f5adf21aa08..960b64268623e 100644
+--- a/fs/afs/file.c
++++ b/fs/afs/file.c
+@@ -43,7 +43,6 @@ const struct inode_operations afs_file_inode_operations = {
+ 	.getattr	= afs_getattr,
+ 	.setattr	= afs_setattr,
+ 	.permission	= afs_permission,
+-	.listxattr	= afs_listxattr,
+ };
+ 
+ const struct address_space_operations afs_fs_aops = {
+diff --git a/fs/afs/fs_operation.c b/fs/afs/fs_operation.c
+index 97cab12b0a6c2..71c58723763d2 100644
+--- a/fs/afs/fs_operation.c
++++ b/fs/afs/fs_operation.c
+@@ -181,10 +181,13 @@ void afs_wait_for_operation(struct afs_operation *op)
+ 		if (test_bit(AFS_SERVER_FL_IS_YFS, &op->server->flags) &&
+ 		    op->ops->issue_yfs_rpc)
+ 			op->ops->issue_yfs_rpc(op);
+-		else
++		else if (op->ops->issue_afs_rpc)
+ 			op->ops->issue_afs_rpc(op);
++		else
++			op->ac.error = -ENOTSUPP;
+ 
+-		op->error = afs_wait_for_call_to_complete(op->call, &op->ac);
++		if (op->call)
++			op->error = afs_wait_for_call_to_complete(op->call, &op->ac);
+ 	}
+ 
+ 	switch (op->error) {
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index b0d7b892090da..1d03eb1920ec0 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -27,7 +27,6 @@
+ 
+ static const struct inode_operations afs_symlink_inode_operations = {
+ 	.get_link	= page_get_link,
+-	.listxattr	= afs_listxattr,
+ };
+ 
+ static noinline void dump_vnode(struct afs_vnode *vnode, struct afs_vnode *parent_vnode)
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 0d150a29e39ec..525ef075fcd90 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -1508,7 +1508,6 @@ extern int afs_launder_page(struct page *);
+  * xattr.c
+  */
+ extern const struct xattr_handler *afs_xattr_handlers[];
+-extern ssize_t afs_listxattr(struct dentry *, char *, size_t);
+ 
+ /*
+  * yfsclient.c
+diff --git a/fs/afs/mntpt.c b/fs/afs/mntpt.c
+index 052dab2f5c03a..bbb2c210d139d 100644
+--- a/fs/afs/mntpt.c
++++ b/fs/afs/mntpt.c
+@@ -32,7 +32,6 @@ const struct inode_operations afs_mntpt_inode_operations = {
+ 	.lookup		= afs_mntpt_lookup,
+ 	.readlink	= page_readlink,
+ 	.getattr	= afs_getattr,
+-	.listxattr	= afs_listxattr,
+ };
+ 
+ const struct inode_operations afs_autocell_inode_operations = {
+diff --git a/fs/afs/xattr.c b/fs/afs/xattr.c
+index 95c573dcda116..6a29337bd562f 100644
+--- a/fs/afs/xattr.c
++++ b/fs/afs/xattr.c
+@@ -11,29 +11,6 @@
+ #include <linux/xattr.h>
+ #include "internal.h"
+ 
+-static const char afs_xattr_list[] =
+-	"afs.acl\0"
+-	"afs.cell\0"
+-	"afs.fid\0"
+-	"afs.volume\0"
+-	"afs.yfs.acl\0"
+-	"afs.yfs.acl_inherited\0"
+-	"afs.yfs.acl_num_cleaned\0"
+-	"afs.yfs.vol_acl";
+-
+-/*
+- * Retrieve a list of the supported xattrs.
+- */
+-ssize_t afs_listxattr(struct dentry *dentry, char *buffer, size_t size)
+-{
+-	if (size == 0)
+-		return sizeof(afs_xattr_list);
+-	if (size < sizeof(afs_xattr_list))
+-		return -ERANGE;
+-	memcpy(buffer, afs_xattr_list, sizeof(afs_xattr_list));
+-	return sizeof(afs_xattr_list);
+-}
+-
+ /*
+  * Deal with the result of a successful fetch ACL operation.
+  */
+@@ -230,6 +207,8 @@ static int afs_xattr_get_yfs(const struct xattr_handler *handler,
+ 			else
+ 				ret = -ERANGE;
+ 		}
++	} else if (ret == -ENOTSUPP) {
++		ret = -ENODATA;
+ 	}
+ 
+ error_yacl:
+@@ -254,6 +233,7 @@ static int afs_xattr_set_yfs(const struct xattr_handler *handler,
+ {
+ 	struct afs_operation *op;
+ 	struct afs_vnode *vnode = AFS_FS_I(inode);
++	int ret;
+ 
+ 	if (flags == XATTR_CREATE ||
+ 	    strcmp(name, "acl") != 0)
+@@ -268,7 +248,10 @@ static int afs_xattr_set_yfs(const struct xattr_handler *handler,
+ 		return afs_put_operation(op);
+ 
+ 	op->ops = &yfs_store_opaque_acl2_operation;
+-	return afs_do_sync_operation(op);
++	ret = afs_do_sync_operation(op);
++	if (ret == -ENOTSUPP)
++		ret = -ENODATA;
++	return ret;
+ }
+ 
+ static const struct xattr_handler afs_xattr_yfs_handler = {
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index f2f6f65038923..9faf15bd5a548 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -1367,7 +1367,9 @@ get_old_root(struct btrfs_root *root, u64 time_seq)
+ 				   "failed to read tree block %llu from get_old_root",
+ 				   logical);
+ 		} else {
++			btrfs_tree_read_lock(old);
+ 			eb = btrfs_clone_extent_buffer(old);
++			btrfs_tree_read_unlock(old);
+ 			free_extent_buffer(old);
+ 		}
+ 	} else if (old_root) {
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index cbeb0cdaca7af..4162ef602a024 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -8811,7 +8811,7 @@ int __init btrfs_init_cachep(void)
+ 
+ 	btrfs_free_space_bitmap_cachep = kmem_cache_create("btrfs_free_space_bitmap",
+ 							PAGE_SIZE, PAGE_SIZE,
+-							SLAB_RED_ZONE, NULL);
++							SLAB_MEM_SPREAD, NULL);
+ 	if (!btrfs_free_space_bitmap_cachep)
+ 		goto fail;
+ 
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 9ee5f304592f1..b1f0c05d6eaf8 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -2375,7 +2375,7 @@ int cifs_getattr(const struct path *path, struct kstat *stat,
+ 	 * We need to be sure that all dirty pages are written and the server
+ 	 * has actual ctime, mtime and file length.
+ 	 */
+-	if ((request_mask & (STATX_CTIME | STATX_MTIME | STATX_SIZE)) &&
++	if ((request_mask & (STATX_CTIME | STATX_MTIME | STATX_SIZE | STATX_BLOCKS)) &&
+ 	    !CIFS_CACHE_READ(CIFS_I(inode)) &&
+ 	    inode->i_mapping && inode->i_mapping->nrpages != 0) {
+ 		rc = filemap_fdatawait(inode->i_mapping);
+@@ -2565,6 +2565,14 @@ set_size_out:
+ 	if (rc == 0) {
+ 		cifsInode->server_eof = attrs->ia_size;
+ 		cifs_setsize(inode, attrs->ia_size);
++		/*
++		 * i_blocks is not related to (i_size / i_blksize), but instead
++		 * 512 byte (2**9) size is required for calculating num blocks.
++		 * Until we can query the server for actual allocation size,
++		 * this is best estimate we have for blocks allocated for a file
++		 * Number of blocks must be rounded up so size 1 is not 0 blocks
++		 */
++		inode->i_blocks = (512 - 1 + attrs->ia_size) >> 9;
+ 
+ 		/*
+ 		 * The man page of truncate says if the size changed,
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 0b9f1a0cba1a3..7b45b3b79df56 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -1156,9 +1156,12 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 	/*
+ 	 * Compounding is never used during session establish.
+ 	 */
+-	if ((ses->status == CifsNew) || (optype & CIFS_NEG_OP))
++	if ((ses->status == CifsNew) || (optype & CIFS_NEG_OP)) {
++		mutex_lock(&server->srv_mutex);
+ 		smb311_update_preauth_hash(ses, rqst[0].rq_iov,
+ 					   rqst[0].rq_nvec);
++		mutex_unlock(&server->srv_mutex);
++	}
+ 
+ 	for (i = 0; i < num_rqst; i++) {
+ 		rc = wait_for_response(server, midQ[i]);
+@@ -1226,7 +1229,9 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 			.iov_base = resp_iov[0].iov_base,
+ 			.iov_len = resp_iov[0].iov_len
+ 		};
++		mutex_lock(&server->srv_mutex);
+ 		smb311_update_preauth_hash(ses, &iov, 1);
++		mutex_unlock(&server->srv_mutex);
+ 	}
+ 
+ out:
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 65ecaf96d0a4d..b92acb6603139 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -2762,6 +2762,8 @@ void __ext4_fc_track_link(handle_t *handle, struct inode *inode,
+ 	struct dentry *dentry);
+ void ext4_fc_track_unlink(handle_t *handle, struct dentry *dentry);
+ void ext4_fc_track_link(handle_t *handle, struct dentry *dentry);
++void __ext4_fc_track_create(handle_t *handle, struct inode *inode,
++			    struct dentry *dentry);
+ void ext4_fc_track_create(handle_t *handle, struct dentry *dentry);
+ void ext4_fc_track_inode(handle_t *handle, struct inode *inode);
+ void ext4_fc_mark_ineligible(struct super_block *sb, int reason);
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index a1dd7ca962c3f..4008a674250cf 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -452,10 +452,10 @@ void ext4_fc_track_link(handle_t *handle, struct dentry *dentry)
+ 	__ext4_fc_track_link(handle, d_inode(dentry), dentry);
+ }
+ 
+-void ext4_fc_track_create(handle_t *handle, struct dentry *dentry)
++void __ext4_fc_track_create(handle_t *handle, struct inode *inode,
++			  struct dentry *dentry)
+ {
+ 	struct __track_dentry_update_args args;
+-	struct inode *inode = d_inode(dentry);
+ 	int ret;
+ 
+ 	args.dentry = dentry;
+@@ -466,6 +466,11 @@ void ext4_fc_track_create(handle_t *handle, struct dentry *dentry)
+ 	trace_ext4_fc_track_create(inode, dentry, ret);
+ }
+ 
++void ext4_fc_track_create(handle_t *handle, struct dentry *dentry)
++{
++	__ext4_fc_track_create(handle, d_inode(dentry), dentry);
++}
++
+ /* __track_fn for inode tracking */
+ static int __track_inode(struct inode *inode, void *arg, bool update)
+ {
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 0afab6d5c65bd..c2b8ba343bb4b 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -5029,7 +5029,7 @@ static int ext4_do_update_inode(handle_t *handle,
+ 	struct ext4_inode_info *ei = EXT4_I(inode);
+ 	struct buffer_head *bh = iloc->bh;
+ 	struct super_block *sb = inode->i_sb;
+-	int err = 0, rc, block;
++	int err = 0, block;
+ 	int need_datasync = 0, set_large_file = 0;
+ 	uid_t i_uid;
+ 	gid_t i_gid;
+@@ -5141,9 +5141,9 @@ static int ext4_do_update_inode(handle_t *handle,
+ 					      bh->b_data);
+ 
+ 	BUFFER_TRACE(bh, "call ext4_handle_dirty_metadata");
+-	rc = ext4_handle_dirty_metadata(handle, NULL, bh);
+-	if (!err)
+-		err = rc;
++	err = ext4_handle_dirty_metadata(handle, NULL, bh);
++	if (err)
++		goto out_brelse;
+ 	ext4_clear_inode_state(inode, EXT4_STATE_NEW);
+ 	if (set_large_file) {
+ 		BUFFER_TRACE(EXT4_SB(sb)->s_sbh, "get write access");
+@@ -5385,8 +5385,10 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ 			inode->i_gid = attr->ia_gid;
+ 		error = ext4_mark_inode_dirty(handle, inode);
+ 		ext4_journal_stop(handle);
+-		if (unlikely(error))
++		if (unlikely(error)) {
++			ext4_fc_stop_update(inode);
+ 			return error;
++		}
+ 	}
+ 
+ 	if (attr->ia_valid & ATTR_SIZE) {
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 14783f7dcbe98..6c7eba426a678 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3604,6 +3604,31 @@ static int ext4_setent(handle_t *handle, struct ext4_renament *ent,
+ 	return retval;
+ }
+ 
++static void ext4_resetent(handle_t *handle, struct ext4_renament *ent,
++			  unsigned ino, unsigned file_type)
++{
++	struct ext4_renament old = *ent;
++	int retval = 0;
++
++	/*
++	 * old->de could have moved from under us during make indexed dir,
++	 * so the old->de may no longer valid and need to find it again
++	 * before reset old inode info.
++	 */
++	old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de, NULL);
++	if (IS_ERR(old.bh))
++		retval = PTR_ERR(old.bh);
++	if (!old.bh)
++		retval = -ENOENT;
++	if (retval) {
++		ext4_std_error(old.dir->i_sb, retval);
++		return;
++	}
++
++	ext4_setent(handle, &old, ino, file_type);
++	brelse(old.bh);
++}
++
+ static int ext4_find_delete_entry(handle_t *handle, struct inode *dir,
+ 				  const struct qstr *d_name)
+ {
+@@ -3839,6 +3864,7 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		retval = ext4_mark_inode_dirty(handle, whiteout);
+ 		if (unlikely(retval))
+ 			goto end_rename;
++
+ 	}
+ 	if (!new.bh) {
+ 		retval = ext4_add_entry(handle, new.dentry, old.inode);
+@@ -3912,6 +3938,8 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 			ext4_fc_track_unlink(handle, new.dentry);
+ 		__ext4_fc_track_link(handle, old.inode, new.dentry);
+ 		__ext4_fc_track_unlink(handle, old.inode, old.dentry);
++		if (whiteout)
++			__ext4_fc_track_create(handle, whiteout, old.dentry);
+ 	}
+ 
+ 	if (new.inode) {
+@@ -3926,8 +3954,8 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ end_rename:
+ 	if (whiteout) {
+ 		if (retval) {
+-			ext4_setent(handle, &old,
+-				old.inode->i_ino, old_file_type);
++			ext4_resetent(handle, &old,
++				      old.inode->i_ino, old_file_type);
+ 			drop_nlink(whiteout);
+ 		}
+ 		unlock_new_inode(whiteout);
+diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c
+index 5b7ba8f711538..00e3cbde472e4 100644
+--- a/fs/ext4/verity.c
++++ b/fs/ext4/verity.c
+@@ -201,55 +201,76 @@ static int ext4_end_enable_verity(struct file *filp, const void *desc,
+ 	struct inode *inode = file_inode(filp);
+ 	const int credits = 2; /* superblock and inode for ext4_orphan_del() */
+ 	handle_t *handle;
++	struct ext4_iloc iloc;
+ 	int err = 0;
+-	int err2;
+ 
+-	if (desc != NULL) {
+-		/* Succeeded; write the verity descriptor. */
+-		err = ext4_write_verity_descriptor(inode, desc, desc_size,
+-						   merkle_tree_size);
+-
+-		/* Write all pages before clearing VERITY_IN_PROGRESS. */
+-		if (!err)
+-			err = filemap_write_and_wait(inode->i_mapping);
+-	}
++	/*
++	 * If an error already occurred (which fs/verity/ signals by passing
++	 * desc == NULL), then only clean-up is needed.
++	 */
++	if (desc == NULL)
++		goto cleanup;
+ 
+-	/* If we failed, truncate anything we wrote past i_size. */
+-	if (desc == NULL || err)
+-		ext4_truncate(inode);
++	/* Append the verity descriptor. */
++	err = ext4_write_verity_descriptor(inode, desc, desc_size,
++					   merkle_tree_size);
++	if (err)
++		goto cleanup;
+ 
+ 	/*
+-	 * We must always clean up by clearing EXT4_STATE_VERITY_IN_PROGRESS and
+-	 * deleting the inode from the orphan list, even if something failed.
+-	 * If everything succeeded, we'll also set the verity bit in the same
+-	 * transaction.
++	 * Write all pages (both data and verity metadata).  Note that this must
++	 * happen before clearing EXT4_STATE_VERITY_IN_PROGRESS; otherwise pages
++	 * beyond i_size won't be written properly.  For crash consistency, this
++	 * also must happen before the verity inode flag gets persisted.
+ 	 */
++	err = filemap_write_and_wait(inode->i_mapping);
++	if (err)
++		goto cleanup;
+ 
+-	ext4_clear_inode_state(inode, EXT4_STATE_VERITY_IN_PROGRESS);
++	/*
++	 * Finally, set the verity inode flag and remove the inode from the
++	 * orphan list (in a single transaction).
++	 */
+ 
+ 	handle = ext4_journal_start(inode, EXT4_HT_INODE, credits);
+ 	if (IS_ERR(handle)) {
+-		ext4_orphan_del(NULL, inode);
+-		return PTR_ERR(handle);
++		err = PTR_ERR(handle);
++		goto cleanup;
+ 	}
+ 
+-	err2 = ext4_orphan_del(handle, inode);
+-	if (err2)
+-		goto out_stop;
++	err = ext4_orphan_del(handle, inode);
++	if (err)
++		goto stop_and_cleanup;
+ 
+-	if (desc != NULL && !err) {
+-		struct ext4_iloc iloc;
++	err = ext4_reserve_inode_write(handle, inode, &iloc);
++	if (err)
++		goto stop_and_cleanup;
+ 
+-		err = ext4_reserve_inode_write(handle, inode, &iloc);
+-		if (err)
+-			goto out_stop;
+-		ext4_set_inode_flag(inode, EXT4_INODE_VERITY);
+-		ext4_set_inode_flags(inode, false);
+-		err = ext4_mark_iloc_dirty(handle, inode, &iloc);
+-	}
+-out_stop:
++	ext4_set_inode_flag(inode, EXT4_INODE_VERITY);
++	ext4_set_inode_flags(inode, false);
++	err = ext4_mark_iloc_dirty(handle, inode, &iloc);
++	if (err)
++		goto stop_and_cleanup;
++
++	ext4_journal_stop(handle);
++
++	ext4_clear_inode_state(inode, EXT4_STATE_VERITY_IN_PROGRESS);
++	return 0;
++
++stop_and_cleanup:
+ 	ext4_journal_stop(handle);
+-	return err ?: err2;
++cleanup:
++	/*
++	 * Verity failed to be enabled, so clean up by truncating any verity
++	 * metadata that was written beyond i_size (both from cache and from
++	 * disk), removing the inode from the orphan list (if it wasn't done
++	 * already), and clearing EXT4_STATE_VERITY_IN_PROGRESS.
++	 */
++	truncate_inode_pages(inode->i_mapping, inode->i_size);
++	ext4_truncate(inode);
++	ext4_orphan_del(NULL, inode);
++	ext4_clear_inode_state(inode, EXT4_STATE_VERITY_IN_PROGRESS);
++	return err;
+ }
+ 
+ static int ext4_get_verity_descriptor_location(struct inode *inode,
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 6127e94ea4f5d..4698471795732 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -2398,7 +2398,7 @@ retry_inode:
+ 				 * external inode if possible.
+ 				 */
+ 				if (ext4_has_feature_ea_inode(inode->i_sb) &&
+-				    !i.in_inode) {
++				    i.value_len && !i.in_inode) {
+ 					i.in_inode = 1;
+ 					goto retry_inode;
+ 				}
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index 61fce59cb4d38..f2c6bbe5cdb81 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -1084,6 +1084,7 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	int silent = fc->sb_flags & SB_SILENT;
+ 	struct gfs2_sbd *sdp;
+ 	struct gfs2_holder mount_gh;
++	struct gfs2_holder freeze_gh;
+ 	int error;
+ 
+ 	sdp = init_sbd(sb);
+@@ -1195,25 +1196,18 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 		goto fail_per_node;
+ 	}
+ 
+-	if (sb_rdonly(sb)) {
+-		struct gfs2_holder freeze_gh;
++	error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
++	if (error)
++		goto fail_per_node;
+ 
+-		error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
+-					   LM_FLAG_NOEXP | GL_EXACT,
+-					   &freeze_gh);
+-		if (error) {
+-			fs_err(sdp, "can't make FS RO: %d\n", error);
+-			goto fail_per_node;
+-		}
+-		gfs2_glock_dq_uninit(&freeze_gh);
+-	} else {
++	if (!sb_rdonly(sb))
+ 		error = gfs2_make_fs_rw(sdp);
+-		if (error) {
+-			fs_err(sdp, "can't make FS RW: %d\n", error);
+-			goto fail_per_node;
+-		}
+-	}
+ 
++	gfs2_freeze_unlock(&freeze_gh);
++	if (error) {
++		fs_err(sdp, "can't make FS RW: %d\n", error);
++		goto fail_per_node;
++	}
+ 	gfs2_glock_dq_uninit(&mount_gh);
+ 	gfs2_online_uevent(sdp);
+ 	return 0;
+@@ -1514,6 +1508,12 @@ static int gfs2_reconfigure(struct fs_context *fc)
+ 		fc->sb_flags |= SB_RDONLY;
+ 
+ 	if ((sb->s_flags ^ fc->sb_flags) & SB_RDONLY) {
++		struct gfs2_holder freeze_gh;
++
++		error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
++		if (error)
++			return -EINVAL;
++
+ 		if (fc->sb_flags & SB_RDONLY) {
+ 			error = gfs2_make_fs_ro(sdp);
+ 			if (error)
+@@ -1523,6 +1523,7 @@ static int gfs2_reconfigure(struct fs_context *fc)
+ 			if (error)
+ 				errorfc(fc, "unable to remount read-write");
+ 		}
++		gfs2_freeze_unlock(&freeze_gh);
+ 	}
+ 	sdp->sd_args = *newargs;
+ 
+diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
+index a3c1911862f01..8f9c6480a5df4 100644
+--- a/fs/gfs2/recovery.c
++++ b/fs/gfs2/recovery.c
+@@ -470,9 +470,7 @@ void gfs2_recover_func(struct work_struct *work)
+ 
+ 		/* Acquire a shared hold on the freeze lock */
+ 
+-		error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
+-					   LM_FLAG_NOEXP | LM_FLAG_PRIORITY |
+-					   GL_EXACT, &thaw_gh);
++		error = gfs2_freeze_lock(sdp, &thaw_gh, LM_FLAG_PRIORITY);
+ 		if (error)
+ 			goto fail_gunlock_ji;
+ 
+@@ -524,7 +522,7 @@ void gfs2_recover_func(struct work_struct *work)
+ 		clean_journal(jd, &head);
+ 		up_read(&sdp->sd_log_flush_lock);
+ 
+-		gfs2_glock_dq_uninit(&thaw_gh);
++		gfs2_freeze_unlock(&thaw_gh);
+ 		t_rep = ktime_get();
+ 		fs_info(sdp, "jid=%u: Journal replayed in %lldms [jlck:%lldms, "
+ 			"jhead:%lldms, tlck:%lldms, replay:%lldms]\n",
+@@ -546,7 +544,7 @@ void gfs2_recover_func(struct work_struct *work)
+ 	goto done;
+ 
+ fail_gunlock_thaw:
+-	gfs2_glock_dq_uninit(&thaw_gh);
++	gfs2_freeze_unlock(&thaw_gh);
+ fail_gunlock_ji:
+ 	if (jlocked) {
+ 		gfs2_glock_dq_uninit(&ji_gh);
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index b3d951ab80680..ddd40c96f7a23 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -165,7 +165,6 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ {
+ 	struct gfs2_inode *ip = GFS2_I(sdp->sd_jdesc->jd_inode);
+ 	struct gfs2_glock *j_gl = ip->i_gl;
+-	struct gfs2_holder freeze_gh;
+ 	struct gfs2_log_header_host head;
+ 	int error;
+ 
+@@ -173,12 +172,6 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ 	if (error)
+ 		return error;
+ 
+-	error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
+-				   LM_FLAG_NOEXP | GL_EXACT,
+-				   &freeze_gh);
+-	if (error)
+-		goto fail_threads;
+-
+ 	j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
+ 	if (gfs2_withdrawn(sdp)) {
+ 		error = -EIO;
+@@ -205,13 +198,9 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ 
+ 	set_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
+ 
+-	gfs2_glock_dq_uninit(&freeze_gh);
+-
+ 	return 0;
+ 
+ fail:
+-	gfs2_glock_dq_uninit(&freeze_gh);
+-fail_threads:
+ 	if (sdp->sd_quotad_process)
+ 		kthread_stop(sdp->sd_quotad_process);
+ 	sdp->sd_quotad_process = NULL;
+@@ -454,7 +443,7 @@ static int gfs2_lock_fs_check_clean(struct gfs2_sbd *sdp)
+ 	}
+ 
+ 	if (error)
+-		gfs2_glock_dq_uninit(&sdp->sd_freeze_gh);
++		gfs2_freeze_unlock(&sdp->sd_freeze_gh);
+ 
+ out:
+ 	while (!list_empty(&list)) {
+@@ -611,30 +600,9 @@ out:
+ 
+ int gfs2_make_fs_ro(struct gfs2_sbd *sdp)
+ {
+-	struct gfs2_holder freeze_gh;
+ 	int error = 0;
+ 	int log_write_allowed = test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
+ 
+-	gfs2_holder_mark_uninitialized(&freeze_gh);
+-	if (sdp->sd_freeze_gl &&
+-	    !gfs2_glock_is_locked_by_me(sdp->sd_freeze_gl)) {
+-		if (!log_write_allowed) {
+-			error = gfs2_glock_nq_init(sdp->sd_freeze_gl,
+-						   LM_ST_SHARED, LM_FLAG_TRY |
+-						   LM_FLAG_NOEXP | GL_EXACT,
+-						   &freeze_gh);
+-			if (error == GLR_TRYFAILED)
+-				error = 0;
+-		} else {
+-			error = gfs2_glock_nq_init(sdp->sd_freeze_gl,
+-						   LM_ST_SHARED,
+-						   LM_FLAG_NOEXP | GL_EXACT,
+-						   &freeze_gh);
+-			if (error && !gfs2_withdrawn(sdp))
+-				return error;
+-		}
+-	}
+-
+ 	gfs2_flush_delete_work(sdp);
+ 	if (!log_write_allowed && current == sdp->sd_quotad_process)
+ 		fs_warn(sdp, "The quotad daemon is withdrawing.\n");
+@@ -663,9 +631,6 @@ int gfs2_make_fs_ro(struct gfs2_sbd *sdp)
+ 				   atomic_read(&sdp->sd_reserving_log) == 0,
+ 				   HZ * 5);
+ 	}
+-	if (gfs2_holder_initialized(&freeze_gh))
+-		gfs2_glock_dq_uninit(&freeze_gh);
+-
+ 	gfs2_quota_cleanup(sdp);
+ 
+ 	if (!log_write_allowed)
+@@ -774,10 +739,8 @@ void gfs2_freeze_func(struct work_struct *work)
+ 	struct super_block *sb = sdp->sd_vfs;
+ 
+ 	atomic_inc(&sb->s_active);
+-	error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
+-				   LM_FLAG_NOEXP | GL_EXACT, &freeze_gh);
++	error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
+ 	if (error) {
+-		fs_info(sdp, "GFS2: couldn't get freeze lock : %d\n", error);
+ 		gfs2_assert_withdraw(sdp, 0);
+ 	} else {
+ 		atomic_set(&sdp->sd_freeze_state, SFS_UNFROZEN);
+@@ -787,7 +750,7 @@ void gfs2_freeze_func(struct work_struct *work)
+ 				error);
+ 			gfs2_assert_withdraw(sdp, 0);
+ 		}
+-		gfs2_glock_dq_uninit(&freeze_gh);
++		gfs2_freeze_unlock(&freeze_gh);
+ 	}
+ 	deactivate_super(sb);
+ 	clear_bit_unlock(SDF_FS_FROZEN, &sdp->sd_flags);
+@@ -855,7 +818,7 @@ static int gfs2_unfreeze(struct super_block *sb)
+                 return 0;
+ 	}
+ 
+-	gfs2_glock_dq_uninit(&sdp->sd_freeze_gh);
++	gfs2_freeze_unlock(&sdp->sd_freeze_gh);
+ 	mutex_unlock(&sdp->sd_freeze_mutex);
+ 	return wait_on_bit(&sdp->sd_flags, SDF_FS_FROZEN, TASK_INTERRUPTIBLE);
+ }
+diff --git a/fs/gfs2/util.c b/fs/gfs2/util.c
+index b7d4e4550880d..3ece99e6490c2 100644
+--- a/fs/gfs2/util.c
++++ b/fs/gfs2/util.c
+@@ -91,19 +91,50 @@ out_unlock:
+ 	return error;
+ }
+ 
++/**
++ * gfs2_freeze_lock - hold the freeze glock
++ * @sdp: the superblock
++ * @freeze_gh: pointer to the requested holder
++ * @caller_flags: any additional flags needed by the caller
++ */
++int gfs2_freeze_lock(struct gfs2_sbd *sdp, struct gfs2_holder *freeze_gh,
++		     int caller_flags)
++{
++	int flags = LM_FLAG_NOEXP | GL_EXACT | caller_flags;
++	int error;
++
++	error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED, flags,
++				   freeze_gh);
++	if (error && error != GLR_TRYFAILED)
++		fs_err(sdp, "can't lock the freeze lock: %d\n", error);
++	return error;
++}
++
++void gfs2_freeze_unlock(struct gfs2_holder *freeze_gh)
++{
++	if (gfs2_holder_initialized(freeze_gh))
++		gfs2_glock_dq_uninit(freeze_gh);
++}
++
+ static void signal_our_withdraw(struct gfs2_sbd *sdp)
+ {
+ 	struct gfs2_glock *live_gl = sdp->sd_live_gh.gh_gl;
+-	struct inode *inode = sdp->sd_jdesc->jd_inode;
+-	struct gfs2_inode *ip = GFS2_I(inode);
+-	struct gfs2_glock *i_gl = ip->i_gl;
+-	u64 no_formal_ino = ip->i_no_formal_ino;
++	struct inode *inode;
++	struct gfs2_inode *ip;
++	struct gfs2_glock *i_gl;
++	u64 no_formal_ino;
++	int log_write_allowed = test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
+ 	int ret = 0;
+ 	int tries;
+ 
+-	if (test_bit(SDF_NORECOVERY, &sdp->sd_flags))
++	if (test_bit(SDF_NORECOVERY, &sdp->sd_flags) || !sdp->sd_jdesc)
+ 		return;
+ 
++	inode = sdp->sd_jdesc->jd_inode;
++	ip = GFS2_I(inode);
++	i_gl = ip->i_gl;
++	no_formal_ino = ip->i_no_formal_ino;
++
+ 	/* Prevent any glock dq until withdraw recovery is complete */
+ 	set_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags);
+ 	/*
+@@ -118,8 +149,21 @@ static void signal_our_withdraw(struct gfs2_sbd *sdp)
+ 	 * therefore we need to clear SDF_JOURNAL_LIVE manually.
+ 	 */
+ 	clear_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
+-	if (!sb_rdonly(sdp->sd_vfs))
+-		ret = gfs2_make_fs_ro(sdp);
++	if (!sb_rdonly(sdp->sd_vfs)) {
++		struct gfs2_holder freeze_gh;
++
++		gfs2_holder_mark_uninitialized(&freeze_gh);
++		if (sdp->sd_freeze_gl &&
++		    !gfs2_glock_is_locked_by_me(sdp->sd_freeze_gl)) {
++			ret = gfs2_freeze_lock(sdp, &freeze_gh,
++				       log_write_allowed ? 0 : LM_FLAG_TRY);
++			if (ret == GLR_TRYFAILED)
++				ret = 0;
++		}
++		if (!ret)
++			ret = gfs2_make_fs_ro(sdp);
++		gfs2_freeze_unlock(&freeze_gh);
++	}
+ 
+ 	if (sdp->sd_lockstruct.ls_ops->lm_lock == NULL) { /* lock_nolock */
+ 		if (!ret)
+diff --git a/fs/gfs2/util.h b/fs/gfs2/util.h
+index d7562981b3a09..aa3771281acac 100644
+--- a/fs/gfs2/util.h
++++ b/fs/gfs2/util.h
+@@ -149,6 +149,9 @@ int gfs2_io_error_i(struct gfs2_sbd *sdp, const char *function,
+ 
+ extern int check_journal_clean(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
+ 			       bool verbose);
++extern int gfs2_freeze_lock(struct gfs2_sbd *sdp,
++			    struct gfs2_holder *freeze_gh, int caller_flags);
++extern void gfs2_freeze_unlock(struct gfs2_holder *freeze_gh);
+ 
+ #define gfs2_io_error(sdp) \
+ gfs2_io_error_i((sdp), __func__, __FILE__, __LINE__);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 691c998691439..06e9c21819957 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2085,6 +2085,7 @@ static void __io_req_task_submit(struct io_kiocb *req)
+ 		__io_req_task_cancel(req, -EFAULT);
+ 	mutex_unlock(&ctx->uring_lock);
+ 
++	ctx->flags &= ~IORING_SETUP_R_DISABLED;
+ 	if (ctx->flags & IORING_SETUP_SQPOLL)
+ 		io_sq_thread_drop_mm();
+ }
+@@ -2616,6 +2617,13 @@ static bool io_rw_reissue(struct io_kiocb *req, long res)
+ 		return false;
+ 	if ((res != -EAGAIN && res != -EOPNOTSUPP) || io_wq_current_is_worker())
+ 		return false;
++	/*
++	 * If ref is dying, we might be running poll reap from the exit work.
++	 * Don't attempt to reissue from that path, just let it fail with
++	 * -EAGAIN.
++	 */
++	if (percpu_ref_is_dying(&req->ctx->refs))
++		return false;
+ 
+ 	ret = io_sq_thread_acquire_mm(req->ctx, req);
+ 
+@@ -3493,6 +3501,7 @@ retry:
+ 		goto out_free;
+ 	} else if (ret > 0 && ret < io_size) {
+ 		/* we got some bytes, but not all. retry. */
++		kiocb->ki_flags &= ~IOCB_WAITQ;
+ 		goto retry;
+ 	}
+ done:
+@@ -6233,9 +6242,10 @@ static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
+ 	if (prev) {
+ 		req_set_fail_links(prev);
+ 		io_async_find_and_cancel(ctx, req, prev->user_data, -ETIME);
+-		io_put_req(prev);
++		io_put_req_deferred(prev, 1);
+ 	} else {
+-		io_req_complete(req, -ETIME);
++		io_cqring_add_event(req, -ETIME, 0);
++		io_put_req_deferred(req, 1);
+ 	}
+ 	return HRTIMER_NORESTART;
+ }
+@@ -8684,6 +8694,8 @@ static void io_disable_sqo_submit(struct io_ring_ctx *ctx)
+ {
+ 	mutex_lock(&ctx->uring_lock);
+ 	ctx->sqo_dead = 1;
++	if (ctx->flags & IORING_SETUP_R_DISABLED)
++		io_sq_offload_start(ctx);
+ 	mutex_unlock(&ctx->uring_lock);
+ 
+ 	/* make sure callers enter the ring to get error */
+@@ -9662,10 +9674,7 @@ static int io_register_enable_rings(struct io_ring_ctx *ctx)
+ 	if (ctx->restrictions.registered)
+ 		ctx->restricted = 1;
+ 
+-	ctx->flags &= ~IORING_SETUP_R_DISABLED;
+-
+ 	io_sq_offload_start(ctx);
+-
+ 	return 0;
+ }
+ 
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index 5849c1bd88f17..e5aad1c10ea32 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -897,6 +897,8 @@ nfsd_file_find_locked(struct inode *inode, unsigned int may_flags,
+ 			continue;
+ 		if (!nfsd_match_cred(nf->nf_cred, current_cred()))
+ 			continue;
++		if (!test_bit(NFSD_FILE_HASHED, &nf->nf_flags))
++			continue;
+ 		if (nfsd_file_get(nf) != NULL)
+ 			return nf;
+ 	}
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index e83b21778816d..2e68cea148e0d 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1299,7 +1299,7 @@ nfsd4_cleanup_inter_ssc(struct vfsmount *ss_mnt, struct nfsd_file *src,
+ 			struct nfsd_file *dst)
+ {
+ 	nfs42_ssc_close(src->nf_file);
+-	/* 'src' is freed by nfsd4_do_async_copy */
++	fput(src->nf_file);
+ 	nfsd_file_put(dst);
+ 	mntput(ss_mnt);
+ }
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index ee4e6e3b995d4..55cf60b71cde0 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -5372,7 +5372,7 @@ nfs4_laundromat(struct nfsd_net *nn)
+ 	idr_for_each_entry(&nn->s2s_cp_stateids, cps_t, i) {
+ 		cps = container_of(cps_t, struct nfs4_cpntf_state, cp_stateid);
+ 		if (cps->cp_stateid.sc_type == NFS4_COPYNOTIFY_STID &&
+-				cps->cpntf_time > cutoff)
++				cps->cpntf_time < cutoff)
+ 			_free_cpntf_state_locked(nn, cps);
+ 	}
+ 	spin_unlock(&nn->s2s_cp_lock);
+diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c
+index c331efe8de953..bbf241a431f27 100644
+--- a/fs/pstore/inode.c
++++ b/fs/pstore/inode.c
+@@ -467,7 +467,7 @@ static struct dentry *pstore_mount(struct file_system_type *fs_type,
+ static void pstore_kill_sb(struct super_block *sb)
+ {
+ 	mutex_lock(&pstore_sb_lock);
+-	WARN_ON(pstore_sb != sb);
++	WARN_ON(pstore_sb && pstore_sb != sb);
+ 
+ 	kill_litter_super(sb);
+ 	pstore_sb = NULL;
+diff --git a/fs/select.c b/fs/select.c
+index 37aaa8317f3ae..945896d0ac9e7 100644
+--- a/fs/select.c
++++ b/fs/select.c
+@@ -1055,10 +1055,9 @@ static long do_restart_poll(struct restart_block *restart_block)
+ 
+ 	ret = do_sys_poll(ufds, nfds, to);
+ 
+-	if (ret == -ERESTARTNOHAND) {
+-		restart_block->fn = do_restart_poll;
+-		ret = -ERESTART_RESTARTBLOCK;
+-	}
++	if (ret == -ERESTARTNOHAND)
++		ret = set_restart_fn(restart_block, do_restart_poll);
++
+ 	return ret;
+ }
+ 
+@@ -1080,7 +1079,6 @@ SYSCALL_DEFINE3(poll, struct pollfd __user *, ufds, unsigned int, nfds,
+ 		struct restart_block *restart_block;
+ 
+ 		restart_block = &current->restart_block;
+-		restart_block->fn = do_restart_poll;
+ 		restart_block->poll.ufds = ufds;
+ 		restart_block->poll.nfds = nfds;
+ 
+@@ -1091,7 +1089,7 @@ SYSCALL_DEFINE3(poll, struct pollfd __user *, ufds, unsigned int, nfds,
+ 		} else
+ 			restart_block->poll.has_timeout = 0;
+ 
+-		ret = -ERESTART_RESTARTBLOCK;
++		ret = set_restart_fn(restart_block, do_restart_poll);
+ 	}
+ 	return ret;
+ }
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index 3fe933b1010c3..2243dc1fb48fe 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -159,6 +159,21 @@ static int zonefs_writepages(struct address_space *mapping,
+ 	return iomap_writepages(mapping, wbc, &wpc, &zonefs_writeback_ops);
+ }
+ 
++static int zonefs_swap_activate(struct swap_info_struct *sis,
++				struct file *swap_file, sector_t *span)
++{
++	struct inode *inode = file_inode(swap_file);
++	struct zonefs_inode_info *zi = ZONEFS_I(inode);
++
++	if (zi->i_ztype != ZONEFS_ZTYPE_CNV) {
++		zonefs_err(inode->i_sb,
++			   "swap file: not a conventional zone file\n");
++		return -EINVAL;
++	}
++
++	return iomap_swapfile_activate(sis, swap_file, span, &zonefs_iomap_ops);
++}
++
+ static const struct address_space_operations zonefs_file_aops = {
+ 	.readpage		= zonefs_readpage,
+ 	.readahead		= zonefs_readahead,
+@@ -171,6 +186,7 @@ static const struct address_space_operations zonefs_file_aops = {
+ 	.is_partially_uptodate	= iomap_is_partially_uptodate,
+ 	.error_remove_page	= generic_error_remove_page,
+ 	.direct_IO		= noop_direct_IO,
++	.swap_activate		= zonefs_swap_activate,
+ };
+ 
+ static void zonefs_update_stats(struct inode *inode, loff_t new_isize)
+@@ -719,6 +735,68 @@ out_release:
+ 	return ret;
+ }
+ 
++/*
++ * Do not exceed the LFS limits nor the file zone size. If pos is under the
++ * limit it becomes a short access. If it exceeds the limit, return -EFBIG.
++ */
++static loff_t zonefs_write_check_limits(struct file *file, loff_t pos,
++					loff_t count)
++{
++	struct inode *inode = file_inode(file);
++	struct zonefs_inode_info *zi = ZONEFS_I(inode);
++	loff_t limit = rlimit(RLIMIT_FSIZE);
++	loff_t max_size = zi->i_max_size;
++
++	if (limit != RLIM_INFINITY) {
++		if (pos >= limit) {
++			send_sig(SIGXFSZ, current, 0);
++			return -EFBIG;
++		}
++		count = min(count, limit - pos);
++	}
++
++	if (!(file->f_flags & O_LARGEFILE))
++		max_size = min_t(loff_t, MAX_NON_LFS, max_size);
++
++	if (unlikely(pos >= max_size))
++		return -EFBIG;
++
++	return min(count, max_size - pos);
++}
++
++static ssize_t zonefs_write_checks(struct kiocb *iocb, struct iov_iter *from)
++{
++	struct file *file = iocb->ki_filp;
++	struct inode *inode = file_inode(file);
++	struct zonefs_inode_info *zi = ZONEFS_I(inode);
++	loff_t count;
++
++	if (IS_SWAPFILE(inode))
++		return -ETXTBSY;
++
++	if (!iov_iter_count(from))
++		return 0;
++
++	if ((iocb->ki_flags & IOCB_NOWAIT) && !(iocb->ki_flags & IOCB_DIRECT))
++		return -EINVAL;
++
++	if (iocb->ki_flags & IOCB_APPEND) {
++		if (zi->i_ztype != ZONEFS_ZTYPE_SEQ)
++			return -EINVAL;
++		mutex_lock(&zi->i_truncate_mutex);
++		iocb->ki_pos = zi->i_wpoffset;
++		mutex_unlock(&zi->i_truncate_mutex);
++	}
++
++	count = zonefs_write_check_limits(file, iocb->ki_pos,
++					  iov_iter_count(from));
++	if (count < 0)
++		return count;
++
++	iov_iter_truncate(from, count);
++	return iov_iter_count(from);
++}
++
+ /*
+  * Handle direct writes. For sequential zone files, this is the only possible
+  * write path. For these files, check that the user is issuing writes
+@@ -736,8 +814,7 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from)
+ 	struct super_block *sb = inode->i_sb;
+ 	bool sync = is_sync_kiocb(iocb);
+ 	bool append = false;
+-	size_t count;
+-	ssize_t ret;
++	ssize_t ret, count;
+ 
+ 	/*
+ 	 * For async direct IOs to sequential zone files, refuse IOCB_NOWAIT
+@@ -755,12 +832,11 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from)
+ 		inode_lock(inode);
+ 	}
+ 
+-	ret = generic_write_checks(iocb, from);
+-	if (ret <= 0)
++	count = zonefs_write_checks(iocb, from);
++	if (count <= 0) {
++		ret = count;
+ 		goto inode_unlock;
+-
+-	iov_iter_truncate(from, zi->i_max_size - iocb->ki_pos);
+-	count = iov_iter_count(from);
++	}
+ 
+ 	if ((iocb->ki_pos | count) & (sb->s_blocksize - 1)) {
+ 		ret = -EINVAL;
+@@ -820,12 +896,10 @@ static ssize_t zonefs_file_buffered_write(struct kiocb *iocb,
+ 		inode_lock(inode);
+ 	}
+ 
+-	ret = generic_write_checks(iocb, from);
++	ret = zonefs_write_checks(iocb, from);
+ 	if (ret <= 0)
+ 		goto inode_unlock;
+ 
+-	iov_iter_truncate(from, zi->i_max_size - iocb->ki_pos);
+-
+ 	ret = iomap_file_buffered_write(iocb, from, &zonefs_iomap_ops);
+ 	if (ret > 0)
+ 		iocb->ki_pos += ret;
+@@ -958,9 +1032,7 @@ static int zonefs_open_zone(struct inode *inode)
+ 
+ 	mutex_lock(&zi->i_truncate_mutex);
+ 
+-	zi->i_wr_refcnt++;
+-	if (zi->i_wr_refcnt == 1) {
+-
++	if (!zi->i_wr_refcnt) {
+ 		if (atomic_inc_return(&sbi->s_open_zones) > sbi->s_max_open_zones) {
+ 			atomic_dec(&sbi->s_open_zones);
+ 			ret = -EBUSY;
+@@ -970,7 +1042,6 @@ static int zonefs_open_zone(struct inode *inode)
+ 		if (i_size_read(inode) < zi->i_max_size) {
+ 			ret = zonefs_zone_mgmt(inode, REQ_OP_ZONE_OPEN);
+ 			if (ret) {
+-				zi->i_wr_refcnt--;
+ 				atomic_dec(&sbi->s_open_zones);
+ 				goto unlock;
+ 			}
+@@ -978,6 +1049,8 @@ static int zonefs_open_zone(struct inode *inode)
+ 		}
+ 	}
+ 
++	zi->i_wr_refcnt++;
++
+ unlock:
+ 	mutex_unlock(&zi->i_truncate_mutex);
+ 
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 642ce03f19c4c..76322b6452c80 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1201,8 +1201,6 @@ struct bpf_prog * __must_check bpf_prog_inc_not_zero(struct bpf_prog *prog);
+ void bpf_prog_put(struct bpf_prog *prog);
+ int __bpf_prog_charge(struct user_struct *user, u32 pages);
+ void __bpf_prog_uncharge(struct user_struct *user, u32 pages);
+-void __bpf_free_used_maps(struct bpf_prog_aux *aux,
+-			  struct bpf_map **used_maps, u32 len);
+ 
+ void bpf_prog_free_id(struct bpf_prog *prog, bool do_idr_lock);
+ void bpf_map_free_id(struct bpf_map *map, bool do_idr_lock);
+@@ -1652,6 +1650,9 @@ static inline struct bpf_prog *bpf_prog_get_type(u32 ufd,
+ 	return bpf_prog_get_type_dev(ufd, type, false);
+ }
+ 
++void __bpf_free_used_maps(struct bpf_prog_aux *aux,
++			  struct bpf_map **used_maps, u32 len);
++
+ bool bpf_prog_get_ok(struct bpf_prog *, enum bpf_prog_type *, bool);
+ 
+ int bpf_prog_offload_compile(struct bpf_prog *prog);
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index d7c0e73af2b97..e17cd4c44f93a 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -72,8 +72,10 @@ typedef void *efi_handle_t;
+  */
+ typedef guid_t efi_guid_t __aligned(__alignof__(u32));
+ 
+-#define EFI_GUID(a,b,c,d0,d1,d2,d3,d4,d5,d6,d7) \
+-	GUID_INIT(a, b, c, d0, d1, d2, d3, d4, d5, d6, d7)
++#define EFI_GUID(a, b, c, d...) (efi_guid_t){ {					\
++	(a) & 0xff, ((a) >> 8) & 0xff, ((a) >> 16) & 0xff, ((a) >> 24) & 0xff,	\
++	(b) & 0xff, ((b) >> 8) & 0xff,						\
++	(c) & 0xff, ((c) >> 8) & 0xff, d } }
+ 
+ /*
+  * Generic EFI table header
+diff --git a/include/linux/regulator/pca9450.h b/include/linux/regulator/pca9450.h
+index 1bbd3014f9067..71902f41c9199 100644
+--- a/include/linux/regulator/pca9450.h
++++ b/include/linux/regulator/pca9450.h
+@@ -147,6 +147,9 @@ enum {
+ #define BUCK6_FPWM			0x04
+ #define BUCK6_ENMODE_MASK		0x03
+ 
++/* PCA9450_REG_BUCK123_PRESET_EN bit */
++#define BUCK123_PRESET_EN		0x80
++
+ /* PCA9450_BUCK1OUT_DVS0 bits */
+ #define BUCK1OUT_DVS0_MASK		0x7F
+ #define BUCK1OUT_DVS0_DEFAULT		0x14
+@@ -216,4 +219,11 @@ enum {
+ #define IRQ_THERM_105			0x02
+ #define IRQ_THERM_125			0x01
+ 
++/* PCA9450_REG_RESET_CTRL bits */
++#define WDOG_B_CFG_MASK			0xC0
++#define WDOG_B_CFG_NONE			0x00
++#define WDOG_B_CFG_WARM			0x40
++#define WDOG_B_CFG_COLD_LDO12		0x80
++#define WDOG_B_CFG_COLD			0xC0
++
+ #endif /* __LINUX_REG_PCA9450_H__ */
+diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
+index e93e249a4e9bf..f3040b0b4b235 100644
+--- a/include/linux/thread_info.h
++++ b/include/linux/thread_info.h
+@@ -11,6 +11,7 @@
+ #include <linux/types.h>
+ #include <linux/bug.h>
+ #include <linux/restart_block.h>
++#include <linux/errno.h>
+ 
+ #ifdef CONFIG_THREAD_INFO_IN_TASK
+ /*
+@@ -39,6 +40,18 @@ enum {
+ 
+ #ifdef __KERNEL__
+ 
++#ifndef arch_set_restart_data
++#define arch_set_restart_data(restart) do { } while (0)
++#endif
++
++static inline long set_restart_fn(struct restart_block *restart,
++					long (*fn)(struct restart_block *))
++{
++	restart->fn = fn;
++	arch_set_restart_data(restart);
++	return -ERESTART_RESTARTBLOCK;
++}
++
+ #ifndef THREAD_ALIGN
+ #define THREAD_ALIGN	THREAD_SIZE
+ #endif
+diff --git a/include/linux/usb_usual.h b/include/linux/usb_usual.h
+index 6b03fdd69d274..712363c7a2e8e 100644
+--- a/include/linux/usb_usual.h
++++ b/include/linux/usb_usual.h
+@@ -86,6 +86,8 @@
+ 		/* lies about caching, so always sync */	\
+ 	US_FLAG(NO_SAME, 0x40000000)				\
+ 		/* Cannot handle WRITE_SAME */			\
++	US_FLAG(SENSE_AFTER_SYNC, 0x80000000)			\
++		/* Do REQUEST_SENSE after SYNCHRONIZE_CACHE */	\
+ 
+ #define US_FLAG(name, value)	US_FL_##name = value ,
+ enum { US_DO_ALL_FLAGS };
+diff --git a/include/scsi/libsas.h b/include/scsi/libsas.h
+index 4e2d61e8fb1ed..e6a43163ab5b7 100644
+--- a/include/scsi/libsas.h
++++ b/include/scsi/libsas.h
+@@ -391,10 +391,6 @@ struct sas_ha_struct {
+ 	int strict_wide_ports; /* both sas_addr and attached_sas_addr must match
+ 				* their siblings when forming wide ports */
+ 
+-	/* LLDD calls these to notify the class of an event. */
+-	int (*notify_port_event)(struct asd_sas_phy *, enum port_event);
+-	int (*notify_phy_event)(struct asd_sas_phy *, enum phy_event);
+-
+ 	void *lldd_ha;		  /* not touched by sas class code */
+ 
+ 	struct list_head eh_done_q;  /* complete via scsi_eh_flush_done_q */
+@@ -706,4 +702,11 @@ struct sas_phy *sas_get_local_phy(struct domain_device *dev);
+ 
+ int sas_request_addr(struct Scsi_Host *shost, u8 *addr);
+ 
++int sas_notify_port_event(struct asd_sas_phy *phy, enum port_event event);
++int sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event);
++int sas_notify_port_event_gfp(struct asd_sas_phy *phy, enum port_event event,
++			      gfp_t gfp_flags);
++int sas_notify_phy_event_gfp(struct asd_sas_phy *phy, enum phy_event event,
++			     gfp_t gfp_flags);
++
+ #endif /* _SASLIB_H_ */
+diff --git a/kernel/futex.c b/kernel/futex.c
+index 0693b3ea0f9a4..7cf1987cfdb4f 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -2730,14 +2730,13 @@ retry:
+ 		goto out;
+ 
+ 	restart = &current->restart_block;
+-	restart->fn = futex_wait_restart;
+ 	restart->futex.uaddr = uaddr;
+ 	restart->futex.val = val;
+ 	restart->futex.time = *abs_time;
+ 	restart->futex.bitset = bitset;
+ 	restart->futex.flags = flags | FLAGS_HAS_TIMEOUT;
+ 
+-	ret = -ERESTART_RESTARTBLOCK;
++	ret = set_restart_fn(restart, futex_wait_restart);
+ 
+ out:
+ 	if (to) {
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index c460e0496006e..79dc02b956dc3 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -1072,11 +1072,15 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
+ 	irqreturn_t ret;
+ 
+ 	local_bh_disable();
++	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
++		local_irq_disable();
+ 	ret = action->thread_fn(action->irq, action->dev_id);
+ 	if (ret == IRQ_HANDLED)
+ 		atomic_inc(&desc->threads_handled);
+ 
+ 	irq_finalize_oneshot(desc, action);
++	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
++		local_irq_enable();
+ 	local_bh_enable();
+ 	return ret;
+ }
+diff --git a/kernel/jump_label.c b/kernel/jump_label.c
+index 015ef903ce8cc..a0c325664190b 100644
+--- a/kernel/jump_label.c
++++ b/kernel/jump_label.c
+@@ -407,6 +407,14 @@ static bool jump_label_can_update(struct jump_entry *entry, bool init)
+ 		return false;
+ 
+ 	if (!kernel_text_address(jump_entry_code(entry))) {
++		/*
++		 * This skips patching built-in __exit, which
++		 * is part of init_section_contains() but is
++		 * not part of kernel_text_address().
++		 *
++		 * Skipping built-in __exit is fine since it
++		 * will never be executed.
++		 */
+ 		WARN_ONCE(!jump_entry_is_init(entry),
+ 			  "can't patch jump_label at %pS",
+ 			  (void *)jump_entry_code(entry));
+diff --git a/kernel/module.c b/kernel/module.c
+index 94f926473e350..908d46abe1656 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -2922,20 +2922,14 @@ static int module_sig_check(struct load_info *info, int flags)
+ 		 * enforcing, certain errors are non-fatal.
+ 		 */
+ 	case -ENODATA:
+-		reason = "Loading of unsigned module";
+-		goto decide;
++		reason = "unsigned module";
++		break;
+ 	case -ENOPKG:
+-		reason = "Loading of module with unsupported crypto";
+-		goto decide;
++		reason = "module with unsupported crypto";
++		break;
+ 	case -ENOKEY:
+-		reason = "Loading of module with unavailable key";
+-	decide:
+-		if (is_module_sig_enforced()) {
+-			pr_notice("%s: %s is rejected\n", info->name, reason);
+-			return -EKEYREJECTED;
+-		}
+-
+-		return security_locked_down(LOCKDOWN_MODULE_SIGNATURE);
++		reason = "module with unavailable key";
++		break;
+ 
+ 		/* All other errors are fatal, including nomem, unparseable
+ 		 * signatures and signature check failures - even if signatures
+@@ -2944,6 +2938,13 @@ static int module_sig_check(struct load_info *info, int flags)
+ 	default:
+ 		return err;
+ 	}
++
++	if (is_module_sig_enforced()) {
++		pr_notice("Loading of %s is rejected\n", reason);
++		return -EKEYREJECTED;
++	}
++
++	return security_locked_down(LOCKDOWN_MODULE_SIGNATURE);
+ }
+ #else /* !CONFIG_MODULE_SIG */
+ static int module_sig_check(struct load_info *info, int flags)
+@@ -2952,9 +2953,33 @@ static int module_sig_check(struct load_info *info, int flags)
+ }
+ #endif /* !CONFIG_MODULE_SIG */
+ 
+-/* Sanity checks against invalid binaries, wrong arch, weird elf version. */
+-static int elf_header_check(struct load_info *info)
++static int validate_section_offset(struct load_info *info, Elf_Shdr *shdr)
++{
++	unsigned long secend;
++
++	/*
++	 * Check for both overflow and offset/size being
++	 * too large.
++	 */
++	secend = shdr->sh_offset + shdr->sh_size;
++	if (secend < shdr->sh_offset || secend > info->len)
++		return -ENOEXEC;
++
++	return 0;
++}
++
++/*
++ * Sanity checks against invalid binaries, wrong arch, weird elf version.
++ *
++ * Also do basic validity checks against section offsets and sizes, the
++ * section name string table, and the indices used for it (sh_name).
++ */
++static int elf_validity_check(struct load_info *info)
+ {
++	unsigned int i;
++	Elf_Shdr *shdr, *strhdr;
++	int err;
++
+ 	if (info->len < sizeof(*(info->hdr)))
+ 		return -ENOEXEC;
+ 
+@@ -2964,11 +2989,78 @@ static int elf_header_check(struct load_info *info)
+ 	    || info->hdr->e_shentsize != sizeof(Elf_Shdr))
+ 		return -ENOEXEC;
+ 
++	/*
++	 * e_shnum is 16 bits, and sizeof(Elf_Shdr) is
++	 * known and small. So e_shnum * sizeof(Elf_Shdr)
++	 * will not overflow unsigned long on any platform.
++	 */
+ 	if (info->hdr->e_shoff >= info->len
+ 	    || (info->hdr->e_shnum * sizeof(Elf_Shdr) >
+ 		info->len - info->hdr->e_shoff))
+ 		return -ENOEXEC;
+ 
++	info->sechdrs = (void *)info->hdr + info->hdr->e_shoff;
++
++	/*
++	 * Verify if the section name table index is valid.
++	 */
++	if (info->hdr->e_shstrndx == SHN_UNDEF
++	    || info->hdr->e_shstrndx >= info->hdr->e_shnum)
++		return -ENOEXEC;
++
++	strhdr = &info->sechdrs[info->hdr->e_shstrndx];
++	err = validate_section_offset(info, strhdr);
++	if (err < 0)
++		return err;
++
++	/*
++	 * The section name table must be NUL-terminated, as required
++	 * by the spec. This makes strcmp and pr_* calls that access
++	 * strings in the section safe.
++	 */
++	info->secstrings = (void *)info->hdr + strhdr->sh_offset;
++	if (info->secstrings[strhdr->sh_size - 1] != '\0')
++		return -ENOEXEC;
++
++	/*
++	 * The code assumes that section 0 has a length of zero and
++	 * an addr of zero, so check for it.
++	 */
++	if (info->sechdrs[0].sh_type != SHT_NULL
++	    || info->sechdrs[0].sh_size != 0
++	    || info->sechdrs[0].sh_addr != 0)
++		return -ENOEXEC;
++
++	for (i = 1; i < info->hdr->e_shnum; i++) {
++		shdr = &info->sechdrs[i];
++		switch (shdr->sh_type) {
++		case SHT_NULL:
++		case SHT_NOBITS:
++			continue;
++		case SHT_SYMTAB:
++			if (shdr->sh_link == SHN_UNDEF
++			    || shdr->sh_link >= info->hdr->e_shnum)
++				return -ENOEXEC;
++			fallthrough;
++		default:
++			err = validate_section_offset(info, shdr);
++			if (err < 0) {
++				pr_err("Invalid ELF section in module (section %u type %u)\n",
++					i, shdr->sh_type);
++				return err;
++			}
++
++			if (shdr->sh_flags & SHF_ALLOC) {
++				if (shdr->sh_name >= strhdr->sh_size) {
++					pr_err("Invalid ELF section name in module (section %u type %u)\n",
++					       i, shdr->sh_type);
++					return -ENOEXEC;
++				}
++			}
++			break;
++		}
++	}
++
+ 	return 0;
+ }
+ 
+@@ -3070,11 +3162,6 @@ static int rewrite_section_headers(struct load_info *info, int flags)
+ 
+ 	for (i = 1; i < info->hdr->e_shnum; i++) {
+ 		Elf_Shdr *shdr = &info->sechdrs[i];
+-		if (shdr->sh_type != SHT_NOBITS
+-		    && info->len < shdr->sh_offset + shdr->sh_size) {
+-			pr_err("Module len %lu truncated\n", info->len);
+-			return -ENOEXEC;
+-		}
+ 
+ 		/* Mark all sections sh_addr with their address in the
+ 		   temporary image. */
+@@ -3106,11 +3193,6 @@ static int setup_load_info(struct load_info *info, int flags)
+ {
+ 	unsigned int i;
+ 
+-	/* Set up the convenience variables */
+-	info->sechdrs = (void *)info->hdr + info->hdr->e_shoff;
+-	info->secstrings = (void *)info->hdr
+-		+ info->sechdrs[info->hdr->e_shstrndx].sh_offset;
+-
+ 	/* Try to find a name early so we can log errors with a module name */
+ 	info->index.info = find_sec(info, ".modinfo");
+ 	if (info->index.info)
+@@ -3854,26 +3936,50 @@ static int load_module(struct load_info *info, const char __user *uargs,
+ 	long err = 0;
+ 	char *after_dashes;
+ 
+-	err = elf_header_check(info);
++	/*
++	 * Do the signature check (if any) first. All that
++	 * the signature check needs is info->len, it does
++	 * not need any of the section info. That can be
++	 * set up later. This will minimize the chances
++	 * of a corrupt module causing problems before
++	 * we even get to the signature check.
++	 *
++	 * The check will also adjust info->len by stripping
++	 * off the sig length at the end of the module, making
++	 * checks against info->len more correct.
++	 */
++	err = module_sig_check(info, flags);
++	if (err)
++		goto free_copy;
++
++	/*
++	 * Do basic sanity checks against the ELF header and
++	 * sections.
++	 */
++	err = elf_validity_check(info);
+ 	if (err) {
+-		pr_err("Module has invalid ELF header\n");
++		pr_err("Module has invalid ELF structures\n");
+ 		goto free_copy;
+ 	}
+ 
++	/*
++	 * Everything checks out, so set up the section info
++	 * in the info structure.
++	 */
+ 	err = setup_load_info(info, flags);
+ 	if (err)
+ 		goto free_copy;
+ 
++	/*
++	 * Now that we know we have the correct module name, check
++	 * if it's blacklisted.
++	 */
+ 	if (blacklisted(info->name)) {
+ 		err = -EPERM;
+ 		pr_err("Module %s is blacklisted\n", info->name);
+ 		goto free_copy;
+ 	}
+ 
+-	err = module_sig_check(info, flags);
+-	if (err)
+-		goto free_copy;
+-
+ 	err = rewrite_section_headers(info, flags);
+ 	if (err)
+ 		goto free_copy;
+diff --git a/kernel/module_signature.c b/kernel/module_signature.c
+index 4224a1086b7d8..00132d12487cd 100644
+--- a/kernel/module_signature.c
++++ b/kernel/module_signature.c
+@@ -25,7 +25,7 @@ int mod_check_sig(const struct module_signature *ms, size_t file_len,
+ 		return -EBADMSG;
+ 
+ 	if (ms->id_type != PKEY_ID_PKCS7) {
+-		pr_err("%s: Module is not signed with expected PKCS#7 message\n",
++		pr_err("%s: not signed with expected PKCS#7 message\n",
+ 		       name);
+ 		return -ENOPKG;
+ 	}
+diff --git a/kernel/module_signing.c b/kernel/module_signing.c
+index 9d9fc678c91d6..8723ae70ea1fe 100644
+--- a/kernel/module_signing.c
++++ b/kernel/module_signing.c
+@@ -30,7 +30,7 @@ int mod_verify_sig(const void *mod, struct load_info *info)
+ 
+ 	memcpy(&ms, mod + (modlen - sizeof(ms)), sizeof(ms));
+ 
+-	ret = mod_check_sig(&ms, modlen, info->name);
++	ret = mod_check_sig(&ms, modlen, "module");
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/kernel/static_call.c b/kernel/static_call.c
+index 84565c2a41b8f..db914da6e7854 100644
+--- a/kernel/static_call.c
++++ b/kernel/static_call.c
+@@ -182,7 +182,16 @@ void __static_call_update(struct static_call_key *key, void *tramp, void *func)
+ 			}
+ 
+ 			if (!kernel_text_address((unsigned long)site_addr)) {
+-				WARN_ONCE(1, "can't patch static call site at %pS",
++				/*
++				 * This skips patching built-in __exit, which
++				 * is part of init_section_contains() but is
++				 * not part of kernel_text_address().
++				 *
++				 * Skipping built-in __exit is fine since it
++				 * will never be executed.
++				 */
++				WARN_ONCE(!static_call_is_init(site),
++					  "can't patch static call site at %pS",
+ 					  site_addr);
+ 				continue;
+ 			}
+diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
+index f4ace1bf83828..daeaa7140d0aa 100644
+--- a/kernel/time/alarmtimer.c
++++ b/kernel/time/alarmtimer.c
+@@ -848,9 +848,9 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags,
+ 	if (flags == TIMER_ABSTIME)
+ 		return -ERESTARTNOHAND;
+ 
+-	restart->fn = alarm_timer_nsleep_restart;
+ 	restart->nanosleep.clockid = type;
+ 	restart->nanosleep.expires = exp;
++	set_restart_fn(restart, alarm_timer_nsleep_restart);
+ 	return ret;
+ }
+ 
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 4416f5d72c11e..9505b1f21cdf8 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -1957,9 +1957,9 @@ long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
+ 	}
+ 
+ 	restart = &current->restart_block;
+-	restart->fn = hrtimer_nanosleep_restart;
+ 	restart->nanosleep.clockid = t.timer.base->clockid;
+ 	restart->nanosleep.expires = hrtimer_get_expires_tv64(&t.timer);
++	set_restart_fn(restart, hrtimer_nanosleep_restart);
+ out:
+ 	destroy_hrtimer_on_stack(&t.timer);
+ 	return ret;
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index a71758e34e456..9abe15255bc4e 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -1480,8 +1480,8 @@ static int posix_cpu_nsleep(const clockid_t which_clock, int flags,
+ 		if (flags & TIMER_ABSTIME)
+ 			return -ERESTARTNOHAND;
+ 
+-		restart_block->fn = posix_cpu_nsleep_restart;
+ 		restart_block->nanosleep.clockid = which_clock;
++		set_restart_fn(restart_block, posix_cpu_nsleep_restart);
+ 	}
+ 	return error;
+ }
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 38de24af24c44..54031ee079a2c 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -433,7 +433,7 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ 	if (len == 0 || len & 3)
+ 		return -EINVAL;
+ 
+-	skb = netdev_alloc_skb(NULL, len);
++	skb = __netdev_alloc_skb(NULL, len, GFP_ATOMIC | __GFP_NOWARN);
+ 	if (!skb)
+ 		return -ENOMEM;
+ 
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index c211b607239ed..d38788cd9433a 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -1408,7 +1408,7 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ 
+  sendit:
+ 	if (svc_authorise(rqstp))
+-		goto close;
++		goto close_xprt;
+ 	return 1;		/* Caller can now send it */
+ 
+ release_dropit:
+@@ -1420,6 +1420,8 @@ release_dropit:
+ 	return 0;
+ 
+  close:
++	svc_authorise(rqstp);
++close_xprt:
+ 	if (rqstp->rq_xprt && test_bit(XPT_TEMP, &rqstp->rq_xprt->xpt_flags))
+ 		svc_close_xprt(rqstp->rq_xprt);
+ 	dprintk("svc: svc_process close\n");
+@@ -1428,7 +1430,7 @@ release_dropit:
+ err_short_len:
+ 	svc_printk(rqstp, "short len %zd, dropping request\n",
+ 			argv->iov_len);
+-	goto close;
++	goto close_xprt;
+ 
+ err_bad_rpc:
+ 	serv->sv_stats->rpcbadfmt++;
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index 43cf8dbde898b..06e503466c32c 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -1062,7 +1062,7 @@ static int svc_close_list(struct svc_serv *serv, struct list_head *xprt_list, st
+ 	struct svc_xprt *xprt;
+ 	int ret = 0;
+ 
+-	spin_lock(&serv->sv_lock);
++	spin_lock_bh(&serv->sv_lock);
+ 	list_for_each_entry(xprt, xprt_list, xpt_list) {
+ 		if (xprt->xpt_net != net)
+ 			continue;
+@@ -1070,7 +1070,7 @@ static int svc_close_list(struct svc_serv *serv, struct list_head *xprt_list, st
+ 		set_bit(XPT_CLOSE, &xprt->xpt_flags);
+ 		svc_xprt_enqueue(xprt);
+ 	}
+-	spin_unlock(&serv->sv_lock);
++	spin_unlock_bh(&serv->sv_lock);
+ 	return ret;
+ }
+ 
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+index 5e7c4ba9e1476..c5154bc38e129 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+@@ -246,9 +246,9 @@ xprt_setup_rdma_bc(struct xprt_create *args)
+ 	xprt->timeout = &xprt_rdma_bc_timeout;
+ 	xprt_set_bound(xprt);
+ 	xprt_set_connected(xprt);
+-	xprt->bind_timeout = RPCRDMA_BIND_TO;
+-	xprt->reestablish_timeout = RPCRDMA_INIT_REEST_TO;
+-	xprt->idle_timeout = RPCRDMA_IDLE_DISC_TO;
++	xprt->bind_timeout = 0;
++	xprt->reestablish_timeout = 0;
++	xprt->idle_timeout = 0;
+ 
+ 	xprt->prot = XPRT_TRANSPORT_BC_RDMA;
+ 	xprt->ops = &xprt_rdma_bc_procs;
+diff --git a/sound/firewire/dice/dice-stream.c b/sound/firewire/dice/dice-stream.c
+index 8e0c0380b4c4b..1a14c083e8cea 100644
+--- a/sound/firewire/dice/dice-stream.c
++++ b/sound/firewire/dice/dice-stream.c
+@@ -493,11 +493,10 @@ void snd_dice_stream_stop_duplex(struct snd_dice *dice)
+ 	struct reg_params tx_params, rx_params;
+ 
+ 	if (dice->substreams_counter == 0) {
+-		if (get_register_params(dice, &tx_params, &rx_params) >= 0) {
+-			amdtp_domain_stop(&dice->domain);
++		if (get_register_params(dice, &tx_params, &rx_params) >= 0)
+ 			finish_session(dice, &tx_params, &rx_params);
+-		}
+ 
++		amdtp_domain_stop(&dice->domain);
+ 		release_resources(dice);
+ 	}
+ }
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index 8060cc86dfea3..96903295a9677 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -4065,7 +4065,7 @@ static int add_micmute_led_hook(struct hda_codec *codec)
+ 
+ 	spec->micmute_led.led_mode = MICMUTE_LED_FOLLOW_MUTE;
+ 	spec->micmute_led.capture = 0;
+-	spec->micmute_led.led_value = 0;
++	spec->micmute_led.led_value = -1;
+ 	spec->micmute_led.old_hook = spec->cap_sync_hook;
+ 	spec->cap_sync_hook = update_micmute_led;
+ 	if (!snd_hda_gen_add_kctl(spec, NULL, &micmute_led_mode_ctl))
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index b47504fa8dfd0..316b9b4ccb32d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4225,6 +4225,12 @@ static void alc_fixup_hp_gpio_led(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc236_fixup_hp_gpio_led(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	alc_fixup_hp_gpio_led(codec, action, 0x02, 0x01);
++}
++
+ static void alc269_fixup_hp_gpio_led(struct hda_codec *codec,
+ 				const struct hda_fixup *fix, int action)
+ {
+@@ -6381,6 +6387,7 @@ enum {
+ 	ALC294_FIXUP_ASUS_GX502_VERBS,
+ 	ALC285_FIXUP_HP_GPIO_LED,
+ 	ALC285_FIXUP_HP_MUTE_LED,
++	ALC236_FIXUP_HP_GPIO_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED,
+ 	ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ 	ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
+@@ -7616,6 +7623,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_hp_mute_led,
+ 	},
++	[ALC236_FIXUP_HP_GPIO_LED] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc236_fixup_hp_gpio_led,
++	},
+ 	[ALC236_FIXUP_HP_MUTE_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc236_fixup_hp_mute_led,
+@@ -8045,9 +8056,12 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8783, "HP ZBook Fury 15 G7 Mobile Workstation",
+ 		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
++	SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x884c, "HP EliteBook 840 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -8242,7 +8256,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1b35, 0x1237, "CZC L101", ALC269_FIXUP_CZC_L101),
+ 	SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
+ 	SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+ 	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+diff --git a/sound/soc/codecs/ak4458.c b/sound/soc/codecs/ak4458.c
+index 472caad17012e..85a1d00894a9c 100644
+--- a/sound/soc/codecs/ak4458.c
++++ b/sound/soc/codecs/ak4458.c
+@@ -812,6 +812,7 @@ static const struct of_device_id ak4458_of_match[] = {
+ 	{ .compatible = "asahi-kasei,ak4497", .data = &ak4497_drvdata},
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, ak4458_of_match);
+ 
+ static struct i2c_driver ak4458_i2c_driver = {
+ 	.driver = {
+diff --git a/sound/soc/codecs/ak5558.c b/sound/soc/codecs/ak5558.c
+index 2f076d5ee284d..65a248c92f669 100644
+--- a/sound/soc/codecs/ak5558.c
++++ b/sound/soc/codecs/ak5558.c
+@@ -419,6 +419,7 @@ static const struct of_device_id ak5558_i2c_dt_ids[] = {
+ 	{ .compatible = "asahi-kasei,ak5558"},
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, ak5558_i2c_dt_ids);
+ 
+ static struct i2c_driver ak5558_i2c_driver = {
+ 	.driver = {
+diff --git a/sound/soc/codecs/wcd934x.c b/sound/soc/codecs/wcd934x.c
+index 40f682f5dab8b..d18ae5e3ee809 100644
+--- a/sound/soc/codecs/wcd934x.c
++++ b/sound/soc/codecs/wcd934x.c
+@@ -1873,6 +1873,12 @@ static int wcd934x_set_channel_map(struct snd_soc_dai *dai,
+ 
+ 	wcd = snd_soc_component_get_drvdata(dai->component);
+ 
++	if (tx_num > WCD934X_TX_MAX || rx_num > WCD934X_RX_MAX) {
++		dev_err(wcd->dev, "Invalid tx %d or rx %d channel count\n",
++			tx_num, rx_num);
++		return -EINVAL;
++	}
++
+ 	if (!tx_slot || !rx_slot) {
+ 		dev_err(wcd->dev, "Invalid tx_slot=%p, rx_slot=%p\n",
+ 			tx_slot, rx_slot);
+diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
+index 404be27c15fed..1d774c876c52e 100644
+--- a/sound/soc/fsl/fsl_ssi.c
++++ b/sound/soc/fsl/fsl_ssi.c
+@@ -878,6 +878,7 @@ static int fsl_ssi_hw_free(struct snd_pcm_substream *substream,
+ static int _fsl_ssi_set_dai_fmt(struct fsl_ssi *ssi, unsigned int fmt)
+ {
+ 	u32 strcr = 0, scr = 0, stcr, srcr, mask;
++	unsigned int slots;
+ 
+ 	ssi->dai_fmt = fmt;
+ 
+@@ -909,10 +910,11 @@ static int _fsl_ssi_set_dai_fmt(struct fsl_ssi *ssi, unsigned int fmt)
+ 			return -EINVAL;
+ 		}
+ 
++		slots = ssi->slots ? : 2;
+ 		regmap_update_bits(ssi->regs, REG_SSI_STCCR,
+-				   SSI_SxCCR_DC_MASK, SSI_SxCCR_DC(2));
++				   SSI_SxCCR_DC_MASK, SSI_SxCCR_DC(slots));
+ 		regmap_update_bits(ssi->regs, REG_SSI_SRCCR,
+-				   SSI_SxCCR_DC_MASK, SSI_SxCCR_DC(2));
++				   SSI_SxCCR_DC_MASK, SSI_SxCCR_DC(slots));
+ 
+ 		/* Data on rising edge of bclk, frame low, 1clk before data */
+ 		strcr |= SSI_STCR_TFSI | SSI_STCR_TSCKP | SSI_STCR_TEFS;
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index ab31045cfc952..6cada4c1e283b 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -172,15 +172,16 @@ int asoc_simple_parse_clk(struct device *dev,
+ 	 *  or device's module clock.
+ 	 */
+ 	clk = devm_get_clk_from_child(dev, node, NULL);
+-	if (IS_ERR(clk))
+-		clk = devm_get_clk_from_child(dev, dlc->of_node, NULL);
+-
+ 	if (!IS_ERR(clk)) {
+-		simple_dai->clk = clk;
+ 		simple_dai->sysclk = clk_get_rate(clk);
+-	} else if (!of_property_read_u32(node, "system-clock-frequency",
+-					 &val)) {
++
++		simple_dai->clk = clk;
++	} else if (!of_property_read_u32(node, "system-clock-frequency", &val)) {
+ 		simple_dai->sysclk = val;
++	} else {
++		clk = devm_get_clk_from_child(dev, dlc->of_node, NULL);
++		if (!IS_ERR(clk))
++			simple_dai->sysclk = clk_get_rate(clk);
+ 	}
+ 
+ 	if (of_property_read_bool(node, "system-clock-direction-out"))
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index d56db9f34373e..d5812e73eb63f 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -577,7 +577,7 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
+ 					BYT_RT5640_JD_SRC_JD1_IN4P |
+-					BYT_RT5640_OVCD_TH_1500UA |
++					BYT_RT5640_OVCD_TH_2000UA |
+ 					BYT_RT5640_OVCD_SF_0P75 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index 3ddd32fd3a44b..4fb2ec7c8867b 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -737,7 +737,7 @@ static void of_lpass_cpu_parse_dai_data(struct device *dev,
+ 
+ 	for_each_child_of_node(dev->of_node, node) {
+ 		ret = of_property_read_u32(node, "reg", &id);
+-		if (ret || id < 0 || id >= data->variant->num_dai) {
++		if (ret || id < 0) {
+ 			dev_err(dev, "valid dai id not found: %d\n", ret);
+ 			continue;
+ 		}
+diff --git a/sound/soc/qcom/sdm845.c b/sound/soc/qcom/sdm845.c
+index 6c2760e27ea6f..153e9b2de0b53 100644
+--- a/sound/soc/qcom/sdm845.c
++++ b/sound/soc/qcom/sdm845.c
+@@ -27,18 +27,18 @@
+ #define SPK_TDM_RX_MASK         0x03
+ #define NUM_TDM_SLOTS           8
+ #define SLIM_MAX_TX_PORTS 16
+-#define SLIM_MAX_RX_PORTS 16
++#define SLIM_MAX_RX_PORTS 13
+ #define WCD934X_DEFAULT_MCLK_RATE	9600000
+ 
+ struct sdm845_snd_data {
+ 	struct snd_soc_jack jack;
+ 	bool jack_setup;
+-	bool stream_prepared[SLIM_MAX_RX_PORTS];
++	bool stream_prepared[AFE_PORT_MAX];
+ 	struct snd_soc_card *card;
+ 	uint32_t pri_mi2s_clk_count;
+ 	uint32_t sec_mi2s_clk_count;
+ 	uint32_t quat_tdm_clk_count;
+-	struct sdw_stream_runtime *sruntime[SLIM_MAX_RX_PORTS];
++	struct sdw_stream_runtime *sruntime[AFE_PORT_MAX];
+ };
+ 
+ static unsigned int tdm_slot_offset[8] = {0, 4, 8, 12, 16, 20, 24, 28};
+diff --git a/sound/soc/sof/intel/hda-dsp.c b/sound/soc/sof/intel/hda-dsp.c
+index cd324f3d11d17..c731b9bd60b4c 100644
+--- a/sound/soc/sof/intel/hda-dsp.c
++++ b/sound/soc/sof/intel/hda-dsp.c
+@@ -207,7 +207,7 @@ int hda_dsp_core_power_down(struct snd_sof_dev *sdev, unsigned int core_mask)
+ 
+ 	ret = snd_sof_dsp_read_poll_timeout(sdev, HDA_DSP_BAR,
+ 				HDA_DSP_REG_ADSPCS, adspcs,
+-				!(adspcs & HDA_DSP_ADSPCS_SPA_MASK(core_mask)),
++				!(adspcs & HDA_DSP_ADSPCS_CPA_MASK(core_mask)),
+ 				HDA_DSP_REG_POLL_INTERVAL_US,
+ 				HDA_DSP_PD_TIMEOUT * USEC_PER_MSEC);
+ 	if (ret < 0)
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index bb4128a72a42f..b0faf050132d8 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -898,6 +898,7 @@ free_streams:
+ /* dsp_unmap: not currently used */
+ 	iounmap(sdev->bar[HDA_DSP_BAR]);
+ hdac_bus_unmap:
++	platform_device_unregister(hdev->dmic_dev);
+ 	iounmap(bus->remap_addr);
+ 	hda_codec_i915_exit(sdev);
+ err:
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 448de77f43fd8..5171b3dc1eb9e 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -2883,7 +2883,7 @@ static int snd_djm_controls_put(struct snd_kcontrol *kctl, struct snd_ctl_elem_v
+ 	u8 group = (private_value & SND_DJM_GROUP_MASK) >> SND_DJM_GROUP_SHIFT;
+ 	u16 value = elem->value.enumerated.item[0];
+ 
+-	kctl->private_value = ((device << SND_DJM_DEVICE_SHIFT) |
++	kctl->private_value = (((unsigned long)device << SND_DJM_DEVICE_SHIFT) |
+ 			      (group << SND_DJM_GROUP_SHIFT) |
+ 			      value);
+ 
+@@ -2921,7 +2921,7 @@ static int snd_djm_controls_create(struct usb_mixer_interface *mixer,
+ 		value = device->controls[i].default_value;
+ 		knew.name = device->controls[i].name;
+ 		knew.private_value = (
+-			(device_idx << SND_DJM_DEVICE_SHIFT) |
++			((unsigned long)device_idx << SND_DJM_DEVICE_SHIFT) |
+ 			(i << SND_DJM_GROUP_SHIFT) |
+ 			value);
+ 		err = snd_djm_controls_update(mixer, device_idx, i, value);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-03-30 12:57 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-03-30 12:57 UTC (permalink / raw
  To: gentoo-commits

commit:     df9eae456fca608e8af053e24c165a57bc410ce0
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Tue Mar 30 12:57:03 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Tue Mar 30 12:57:14 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=df9eae45

Linux patch 5.10.27

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1026_linux-5.10.27.patch | 9018 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 9022 insertions(+)

diff --git a/0000_README b/0000_README
index da6b8c2..936c976 100644
--- a/0000_README
+++ b/0000_README
@@ -147,6 +147,10 @@ Patch:  1025_linux-5.10.26.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.26
 
+Patch:  1026_linux-5.10.27.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.27
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1026_linux-5.10.27.patch b/1026_linux-5.10.27.patch
new file mode 100644
index 0000000..9a18c2c
--- /dev/null
+++ b/1026_linux-5.10.27.patch
@@ -0,0 +1,9018 @@
+diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
+index a5d27553d59c9..cd8a585568045 100644
+--- a/Documentation/virt/kvm/api.rst
++++ b/Documentation/virt/kvm/api.rst
+@@ -4810,8 +4810,10 @@ If an MSR access is not permitted through the filtering, it generates a
+ allows user space to deflect and potentially handle various MSR accesses
+ into user space.
+ 
+-If a vCPU is in running state while this ioctl is invoked, the vCPU may
+-experience inconsistent filtering behavior on MSR accesses.
++Note, invoking this ioctl with a vCPU is running is inherently racy.  However,
++KVM does guarantee that vCPUs will see either the previous filter or the new
++filter, e.g. MSRs with identical settings in both the old and new filter will
++have deterministic behavior.
+ 
+ 
+ 5. The kvm_run structure
+diff --git a/Makefile b/Makefile
+index d4b87e604762a..4801cc25e3472 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 26
++SUBLEVEL = 27
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -265,7 +265,8 @@ no-dot-config-targets := $(clean-targets) \
+ 			 $(version_h) headers headers_% archheaders archscripts \
+ 			 %asm-generic kernelversion %src-pkg dt_binding_check \
+ 			 outputmakefile
+-no-sync-config-targets := $(no-dot-config-targets) %install kernelrelease
++no-sync-config-targets := $(no-dot-config-targets) %install kernelrelease \
++			  image_name
+ single-targets := %.a %.i %.ko %.lds %.ll %.lst %.mod %.o %.s %.symtypes %/
+ 
+ config-build	:=
+diff --git a/arch/arm/boot/dts/at91-sam9x60ek.dts b/arch/arm/boot/dts/at91-sam9x60ek.dts
+index 73b6b1f89de99..775ceb3acb6c0 100644
+--- a/arch/arm/boot/dts/at91-sam9x60ek.dts
++++ b/arch/arm/boot/dts/at91-sam9x60ek.dts
+@@ -334,14 +334,6 @@
+ };
+ 
+ &pinctrl {
+-	atmel,mux-mask = <
+-			 /*	A	B	C	*/
+-			 0xFFFFFE7F 0xC0E0397F 0xEF00019D	/* pioA */
+-			 0x03FFFFFF 0x02FC7E68 0x00780000	/* pioB */
+-			 0xffffffff 0xF83FFFFF 0xB800F3FC	/* pioC */
+-			 0x003FFFFF 0x003F8000 0x00000000	/* pioD */
+-			 >;
+-
+ 	adc {
+ 		pinctrl_adc_default: adc_default {
+ 			atmel,pins = <AT91_PIOB 15 AT91_PERIPH_A AT91_PINCTRL_NONE>;
+diff --git a/arch/arm/boot/dts/at91-sama5d27_som1.dtsi b/arch/arm/boot/dts/at91-sama5d27_som1.dtsi
+index b1f994c0ae793..e4824abb385ac 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_som1.dtsi
++++ b/arch/arm/boot/dts/at91-sama5d27_som1.dtsi
+@@ -84,8 +84,8 @@
+ 				pinctrl-0 = <&pinctrl_macb0_default>;
+ 				phy-mode = "rmii";
+ 
+-				ethernet-phy@0 {
+-					reg = <0x0>;
++				ethernet-phy@7 {
++					reg = <0x7>;
+ 					interrupt-parent = <&pioA>;
+ 					interrupts = <PIN_PD31 IRQ_TYPE_LEVEL_LOW>;
+ 					pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/imx6ull-myir-mys-6ulx-eval.dts b/arch/arm/boot/dts/imx6ull-myir-mys-6ulx-eval.dts
+index ecbb2cc5b9ab4..79cc45728cd2d 100644
+--- a/arch/arm/boot/dts/imx6ull-myir-mys-6ulx-eval.dts
++++ b/arch/arm/boot/dts/imx6ull-myir-mys-6ulx-eval.dts
+@@ -14,5 +14,6 @@
+ };
+ 
+ &gpmi {
++	fsl,use-minimum-ecc;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/sam9x60.dtsi b/arch/arm/boot/dts/sam9x60.dtsi
+index 84066c1298df9..ec45ced3cde68 100644
+--- a/arch/arm/boot/dts/sam9x60.dtsi
++++ b/arch/arm/boot/dts/sam9x60.dtsi
+@@ -606,6 +606,15 @@
+ 				compatible = "microchip,sam9x60-pinctrl", "atmel,at91sam9x5-pinctrl", "atmel,at91rm9200-pinctrl", "simple-bus";
+ 				ranges = <0xfffff400 0xfffff400 0x800>;
+ 
++				/* mux-mask corresponding to sam9x60 SoC in TFBGA228L package */
++				atmel,mux-mask = <
++						 /*	A	B	C	*/
++						 0xffffffff 0xffe03fff 0xef00019d	/* pioA */
++						 0x03ffffff 0x02fc7e7f 0x00780000	/* pioB */
++						 0xffffffff 0xffffffff 0xf83fffff	/* pioC */
++						 0x003fffff 0x003f8000 0x00000000	/* pioD */
++						 >;
++
+ 				pioA: gpio@fffff400 {
+ 					compatible = "microchip,sam9x60-gpio", "atmel,at91sam9x5-gpio", "atmel,at91rm9200-gpio";
+ 					reg = <0xfffff400 0x200>;
+diff --git a/arch/arm/mach-omap2/sr_device.c b/arch/arm/mach-omap2/sr_device.c
+index 62df666c2bd0b..17b66f0d0deef 100644
+--- a/arch/arm/mach-omap2/sr_device.c
++++ b/arch/arm/mach-omap2/sr_device.c
+@@ -88,34 +88,26 @@ static void __init sr_set_nvalues(struct omap_volt_data *volt_data,
+ 
+ extern struct omap_sr_data omap_sr_pdata[];
+ 
+-static int __init sr_dev_init(struct omap_hwmod *oh, void *user)
++static int __init sr_init_by_name(const char *name, const char *voltdm)
+ {
+ 	struct omap_sr_data *sr_data = NULL;
+ 	struct omap_volt_data *volt_data;
+-	struct omap_smartreflex_dev_attr *sr_dev_attr;
+ 	static int i;
+ 
+-	if (!strncmp(oh->name, "smartreflex_mpu_iva", 20) ||
+-	    !strncmp(oh->name, "smartreflex_mpu", 16))
++	if (!strncmp(name, "smartreflex_mpu_iva", 20) ||
++	    !strncmp(name, "smartreflex_mpu", 16))
+ 		sr_data = &omap_sr_pdata[OMAP_SR_MPU];
+-	else if (!strncmp(oh->name, "smartreflex_core", 17))
++	else if (!strncmp(name, "smartreflex_core", 17))
+ 		sr_data = &omap_sr_pdata[OMAP_SR_CORE];
+-	else if (!strncmp(oh->name, "smartreflex_iva", 16))
++	else if (!strncmp(name, "smartreflex_iva", 16))
+ 		sr_data = &omap_sr_pdata[OMAP_SR_IVA];
+ 
+ 	if (!sr_data) {
+-		pr_err("%s: Unknown instance %s\n", __func__, oh->name);
++		pr_err("%s: Unknown instance %s\n", __func__, name);
+ 		return -EINVAL;
+ 	}
+ 
+-	sr_dev_attr = (struct omap_smartreflex_dev_attr *)oh->dev_attr;
+-	if (!sr_dev_attr || !sr_dev_attr->sensor_voltdm_name) {
+-		pr_err("%s: No voltage domain specified for %s. Cannot initialize\n",
+-		       __func__, oh->name);
+-		goto exit;
+-	}
+-
+-	sr_data->name = oh->name;
++	sr_data->name = name;
+ 	if (cpu_is_omap343x())
+ 		sr_data->ip_type = 1;
+ 	else
+@@ -136,10 +128,10 @@ static int __init sr_dev_init(struct omap_hwmod *oh, void *user)
+ 		}
+ 	}
+ 
+-	sr_data->voltdm = voltdm_lookup(sr_dev_attr->sensor_voltdm_name);
++	sr_data->voltdm = voltdm_lookup(voltdm);
+ 	if (!sr_data->voltdm) {
+ 		pr_err("%s: Unable to get voltage domain pointer for VDD %s\n",
+-			__func__, sr_dev_attr->sensor_voltdm_name);
++			__func__, voltdm);
+ 		goto exit;
+ 	}
+ 
+@@ -160,6 +152,20 @@ exit:
+ 	return 0;
+ }
+ 
++static int __init sr_dev_init(struct omap_hwmod *oh, void *user)
++{
++	struct omap_smartreflex_dev_attr *sr_dev_attr;
++
++	sr_dev_attr = (struct omap_smartreflex_dev_attr *)oh->dev_attr;
++	if (!sr_dev_attr || !sr_dev_attr->sensor_voltdm_name) {
++		pr_err("%s: No voltage domain specified for %s. Cannot initialize\n",
++		       __func__, oh->name);
++		return 0;
++	}
++
++	return sr_init_by_name(oh->name, sr_dev_attr->sensor_voltdm_name);
++}
++
+ /*
+  * API to be called from board files to enable smartreflex
+  * autocompensation at init.
+@@ -169,7 +175,42 @@ void __init omap_enable_smartreflex_on_init(void)
+ 	sr_enable_on_init = true;
+ }
+ 
++static const char * const omap4_sr_instances[] = {
++	"mpu",
++	"iva",
++	"core",
++};
++
++static const char * const dra7_sr_instances[] = {
++	"mpu",
++	"core",
++};
++
+ int __init omap_devinit_smartreflex(void)
+ {
++	const char * const *sr_inst;
++	int i, nr_sr = 0;
++
++	if (soc_is_omap44xx()) {
++		sr_inst = omap4_sr_instances;
++		nr_sr = ARRAY_SIZE(omap4_sr_instances);
++
++	} else if (soc_is_dra7xx()) {
++		sr_inst = dra7_sr_instances;
++		nr_sr = ARRAY_SIZE(dra7_sr_instances);
++	}
++
++	if (nr_sr) {
++		const char *name, *voltdm;
++
++		for (i = 0; i < nr_sr; i++) {
++			name = kasprintf(GFP_KERNEL, "smartreflex_%s", sr_inst[i]);
++			voltdm = sr_inst[i];
++			sr_init_by_name(name, voltdm);
++		}
++
++		return 0;
++	}
++
+ 	return omap_hwmod_for_each_by_class("smartreflex", sr_dev_init, NULL);
+ }
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1012a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1012a.dtsi
+index 6a2c091990479..5fd3ea2028c64 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1012a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1012a.dtsi
+@@ -192,6 +192,7 @@
+ 			ranges = <0x0 0x00 0x1700000 0x100000>;
+ 			reg = <0x00 0x1700000 0x0 0x100000>;
+ 			interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
++			dma-coherent;
+ 
+ 			sec_jr0: jr@10000 {
+ 				compatible = "fsl,sec-v5.4-job-ring",
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
+index 0464b8aa4bc4d..b1b9544d8e7fd 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
+@@ -322,6 +322,7 @@
+ 			ranges = <0x0 0x00 0x1700000 0x100000>;
+ 			reg = <0x00 0x1700000 0x0 0x100000>;
+ 			interrupts = <0 75 0x4>;
++			dma-coherent;
+ 
+ 			sec_jr0: jr@10000 {
+ 				compatible = "fsl,sec-v5.4-job-ring",
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
+index 0b4545012d43e..acf2ae2e151a0 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
+@@ -325,6 +325,7 @@
+ 			ranges = <0x0 0x00 0x1700000 0x100000>;
+ 			reg = <0x00 0x1700000 0x0 0x100000>;
+ 			interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
++			dma-coherent;
+ 
+ 			sec_jr0: jr@10000 {
+ 				compatible = "fsl,sec-v5.4-job-ring",
+diff --git a/arch/arm64/kernel/crash_dump.c b/arch/arm64/kernel/crash_dump.c
+index e6e284265f19d..58303a9ec32c4 100644
+--- a/arch/arm64/kernel/crash_dump.c
++++ b/arch/arm64/kernel/crash_dump.c
+@@ -64,5 +64,7 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
+ ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
+ {
+ 	memcpy(buf, phys_to_virt((phys_addr_t)*ppos), count);
++	*ppos += count;
++
+ 	return count;
+ }
+diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
+index fa56af1a59c39..dbce0dcf4cc06 100644
+--- a/arch/arm64/kernel/stacktrace.c
++++ b/arch/arm64/kernel/stacktrace.c
+@@ -199,8 +199,9 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
+ 
+ #ifdef CONFIG_STACKTRACE
+ 
+-void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+-		     struct task_struct *task, struct pt_regs *regs)
++noinline void arch_stack_walk(stack_trace_consume_fn consume_entry,
++			      void *cookie, struct task_struct *task,
++			      struct pt_regs *regs)
+ {
+ 	struct stackframe frame;
+ 
+@@ -208,8 +209,8 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+ 		start_backtrace(&frame, regs->regs[29], regs->pc);
+ 	else if (task == current)
+ 		start_backtrace(&frame,
+-				(unsigned long)__builtin_frame_address(0),
+-				(unsigned long)arch_stack_walk);
++				(unsigned long)__builtin_frame_address(1),
++				(unsigned long)__builtin_return_address(0));
+ 	else
+ 		start_backtrace(&frame, thread_saved_fp(task),
+ 				thread_saved_pc(task));
+diff --git a/arch/ia64/include/asm/syscall.h b/arch/ia64/include/asm/syscall.h
+index 6c6f16e409a87..0d23c00493018 100644
+--- a/arch/ia64/include/asm/syscall.h
++++ b/arch/ia64/include/asm/syscall.h
+@@ -32,7 +32,7 @@ static inline void syscall_rollback(struct task_struct *task,
+ static inline long syscall_get_error(struct task_struct *task,
+ 				     struct pt_regs *regs)
+ {
+-	return regs->r10 == -1 ? regs->r8:0;
++	return regs->r10 == -1 ? -regs->r8:0;
+ }
+ 
+ static inline long syscall_get_return_value(struct task_struct *task,
+diff --git a/arch/ia64/kernel/ptrace.c b/arch/ia64/kernel/ptrace.c
+index 75c070aed81e0..dad3a605cb7e5 100644
+--- a/arch/ia64/kernel/ptrace.c
++++ b/arch/ia64/kernel/ptrace.c
+@@ -2010,27 +2010,39 @@ static void syscall_get_set_args_cb(struct unw_frame_info *info, void *data)
+ {
+ 	struct syscall_get_set_args *args = data;
+ 	struct pt_regs *pt = args->regs;
+-	unsigned long *krbs, cfm, ndirty;
++	unsigned long *krbs, cfm, ndirty, nlocals, nouts;
+ 	int i, count;
+ 
+ 	if (unw_unwind_to_user(info) < 0)
+ 		return;
+ 
++	/*
++	 * We get here via a few paths:
++	 * - break instruction: cfm is shared with caller.
++	 *   syscall args are in out= regs, locals are non-empty.
++	 * - epsinstruction: cfm is set by br.call
++	 *   locals don't exist.
++	 *
++	 * For both cases argguments are reachable in cfm.sof - cfm.sol.
++	 * CFM: [ ... | sor: 17..14 | sol : 13..7 | sof : 6..0 ]
++	 */
+ 	cfm = pt->cr_ifs;
++	nlocals = (cfm >> 7) & 0x7f; /* aka sol */
++	nouts = (cfm & 0x7f) - nlocals; /* aka sof - sol */
+ 	krbs = (unsigned long *)info->task + IA64_RBS_OFFSET/8;
+ 	ndirty = ia64_rse_num_regs(krbs, krbs + (pt->loadrs >> 19));
+ 
+ 	count = 0;
+ 	if (in_syscall(pt))
+-		count = min_t(int, args->n, cfm & 0x7f);
++		count = min_t(int, args->n, nouts);
+ 
++	/* Iterate over outs. */
+ 	for (i = 0; i < count; i++) {
++		int j = ndirty + nlocals + i + args->i;
+ 		if (args->rw)
+-			*ia64_rse_skip_regs(krbs, ndirty + i + args->i) =
+-				args->args[i];
++			*ia64_rse_skip_regs(krbs, j) = args->args[i];
+ 		else
+-			args->args[i] = *ia64_rse_skip_regs(krbs,
+-				ndirty + i + args->i);
++			args->args[i] = *ia64_rse_skip_regs(krbs, j);
+ 	}
+ 
+ 	if (!args->rw) {
+diff --git a/arch/powerpc/include/asm/dcr-native.h b/arch/powerpc/include/asm/dcr-native.h
+index 7141ccea8c94e..a92059964579b 100644
+--- a/arch/powerpc/include/asm/dcr-native.h
++++ b/arch/powerpc/include/asm/dcr-native.h
+@@ -53,8 +53,8 @@ static inline void mtdcrx(unsigned int reg, unsigned int val)
+ #define mfdcr(rn)						\
+ 	({unsigned int rval;					\
+ 	if (__builtin_constant_p(rn) && rn < 1024)		\
+-		asm volatile("mfdcr %0," __stringify(rn)	\
+-		              : "=r" (rval));			\
++		asm volatile("mfdcr %0, %1" : "=r" (rval)	\
++			      : "n" (rn));			\
+ 	else if (likely(cpu_has_feature(CPU_FTR_INDEXED_DCR)))	\
+ 		rval = mfdcrx(rn);				\
+ 	else							\
+@@ -64,8 +64,8 @@ static inline void mtdcrx(unsigned int reg, unsigned int val)
+ #define mtdcr(rn, v)						\
+ do {								\
+ 	if (__builtin_constant_p(rn) && rn < 1024)		\
+-		asm volatile("mtdcr " __stringify(rn) ",%0"	\
+-			      : : "r" (v)); 			\
++		asm volatile("mtdcr %0, %1"			\
++			      : : "n" (rn), "r" (v));		\
+ 	else if (likely(cpu_has_feature(CPU_FTR_INDEXED_DCR)))	\
+ 		mtdcrx(rn, v);					\
+ 	else							\
+diff --git a/arch/sparc/kernel/traps_64.c b/arch/sparc/kernel/traps_64.c
+index d92e5eaa4c1d7..a850dccd78ea1 100644
+--- a/arch/sparc/kernel/traps_64.c
++++ b/arch/sparc/kernel/traps_64.c
+@@ -275,14 +275,13 @@ bool is_no_fault_exception(struct pt_regs *regs)
+ 			asi = (regs->tstate >> 24); /* saved %asi       */
+ 		else
+ 			asi = (insn >> 5);	    /* immediate asi    */
+-		if ((asi & 0xf2) == ASI_PNF) {
+-			if (insn & 0x1000000) {     /* op3[5:4]=3       */
+-				handle_ldf_stq(insn, regs);
+-				return true;
+-			} else if (insn & 0x200000) { /* op3[2], stores */
++		if ((asi & 0xf6) == ASI_PNF) {
++			if (insn & 0x200000)        /* op3[2], stores   */
+ 				return false;
+-			}
+-			handle_ld_nf(insn, regs);
++			if (insn & 0x1000000)       /* op3[5:4]=3 (fp)  */
++				handle_ldf_stq(insn, regs);
++			else
++				handle_ld_nf(insn, regs);
+ 			return true;
+ 		}
+ 	}
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 7e5f33a0d0e2f..02d4c74d30e2b 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -890,6 +890,12 @@ enum kvm_irqchip_mode {
+ 	KVM_IRQCHIP_SPLIT,        /* created with KVM_CAP_SPLIT_IRQCHIP */
+ };
+ 
++struct kvm_x86_msr_filter {
++	u8 count;
++	bool default_allow:1;
++	struct msr_bitmap_range ranges[16];
++};
++
+ #define APICV_INHIBIT_REASON_DISABLE    0
+ #define APICV_INHIBIT_REASON_HYPERV     1
+ #define APICV_INHIBIT_REASON_NESTED     2
+@@ -985,14 +991,12 @@ struct kvm_arch {
+ 	bool guest_can_read_msr_platform_info;
+ 	bool exception_payload_enabled;
+ 
++	bool bus_lock_detection_enabled;
++
+ 	/* Deflect RDMSR and WRMSR to user space when they trigger a #GP */
+ 	u32 user_space_msr_mask;
+ 
+-	struct {
+-		u8 count;
+-		bool default_allow:1;
+-		struct msr_bitmap_range ranges[16];
+-	} msr_filter;
++	struct kvm_x86_msr_filter __rcu *msr_filter;
+ 
+ 	struct kvm_pmu_event_filter *pmu_event_filter;
+ 	struct task_struct *nx_lpage_recovery_thread;
+diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
+index c37f11999d0c0..cbb67b6030f97 100644
+--- a/arch/x86/include/asm/static_call.h
++++ b/arch/x86/include/asm/static_call.h
+@@ -37,4 +37,11 @@
+ #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)			\
+ 	__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; nop; nop; nop; nop")
+ 
++
++#define ARCH_ADD_TRAMP_KEY(name)					\
++	asm(".pushsection .static_call_tramp_key, \"a\"		\n"	\
++	    ".long " STATIC_CALL_TRAMP_STR(name) " - .		\n"	\
++	    ".long " STATIC_CALL_KEY_STR(name) " - .		\n"	\
++	    ".popsection					\n")
++
+ #endif /* _ASM_STATIC_CALL_H */
+diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
+index ab09af613788b..5941e18edd5a9 100644
+--- a/arch/x86/include/asm/xen/page.h
++++ b/arch/x86/include/asm/xen/page.h
+@@ -86,18 +86,6 @@ clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+ }
+ #endif
+ 
+-/*
+- * The maximum amount of extra memory compared to the base size.  The
+- * main scaling factor is the size of struct page.  At extreme ratios
+- * of base:extra, all the base memory can be filled with page
+- * structures for the extra memory, leaving no space for anything
+- * else.
+- *
+- * 10x seems like a reasonable balance between scaling flexibility and
+- * leaving a practically usable system.
+- */
+-#define XEN_EXTRA_MEM_RATIO	(10)
+-
+ /*
+  * Helper functions to write or read unsigned long values to/from
+  * memory, when the access may fault.
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index fa5f059c2b940..0d8383b82bca4 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1505,35 +1505,44 @@ EXPORT_SYMBOL_GPL(kvm_enable_efer_bits);
+ 
+ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type)
+ {
++	struct kvm_x86_msr_filter *msr_filter;
++	struct msr_bitmap_range *ranges;
+ 	struct kvm *kvm = vcpu->kvm;
+-	struct msr_bitmap_range *ranges = kvm->arch.msr_filter.ranges;
+-	u32 count = kvm->arch.msr_filter.count;
+-	u32 i;
+-	bool r = kvm->arch.msr_filter.default_allow;
++	bool allowed;
+ 	int idx;
++	u32 i;
+ 
+-	/* MSR filtering not set up or x2APIC enabled, allow everything */
+-	if (!count || (index >= 0x800 && index <= 0x8ff))
++	/* x2APIC MSRs do not support filtering. */
++	if (index >= 0x800 && index <= 0x8ff)
+ 		return true;
+ 
+-	/* Prevent collision with set_msr_filter */
+ 	idx = srcu_read_lock(&kvm->srcu);
+ 
+-	for (i = 0; i < count; i++) {
++	msr_filter = srcu_dereference(kvm->arch.msr_filter, &kvm->srcu);
++	if (!msr_filter) {
++		allowed = true;
++		goto out;
++	}
++
++	allowed = msr_filter->default_allow;
++	ranges = msr_filter->ranges;
++
++	for (i = 0; i < msr_filter->count; i++) {
+ 		u32 start = ranges[i].base;
+ 		u32 end = start + ranges[i].nmsrs;
+ 		u32 flags = ranges[i].flags;
+ 		unsigned long *bitmap = ranges[i].bitmap;
+ 
+ 		if ((index >= start) && (index < end) && (flags & type)) {
+-			r = !!test_bit(index - start, bitmap);
++			allowed = !!test_bit(index - start, bitmap);
+ 			break;
+ 		}
+ 	}
+ 
++out:
+ 	srcu_read_unlock(&kvm->srcu, idx);
+ 
+-	return r;
++	return allowed;
+ }
+ EXPORT_SYMBOL_GPL(kvm_msr_allowed);
+ 
+@@ -5291,25 +5300,34 @@ split_irqchip_unlock:
+ 	return r;
+ }
+ 
+-static void kvm_clear_msr_filter(struct kvm *kvm)
++static struct kvm_x86_msr_filter *kvm_alloc_msr_filter(bool default_allow)
++{
++	struct kvm_x86_msr_filter *msr_filter;
++
++	msr_filter = kzalloc(sizeof(*msr_filter), GFP_KERNEL_ACCOUNT);
++	if (!msr_filter)
++		return NULL;
++
++	msr_filter->default_allow = default_allow;
++	return msr_filter;
++}
++
++static void kvm_free_msr_filter(struct kvm_x86_msr_filter *msr_filter)
+ {
+ 	u32 i;
+-	u32 count = kvm->arch.msr_filter.count;
+-	struct msr_bitmap_range ranges[16];
+ 
+-	mutex_lock(&kvm->lock);
+-	kvm->arch.msr_filter.count = 0;
+-	memcpy(ranges, kvm->arch.msr_filter.ranges, count * sizeof(ranges[0]));
+-	mutex_unlock(&kvm->lock);
+-	synchronize_srcu(&kvm->srcu);
++	if (!msr_filter)
++		return;
++
++	for (i = 0; i < msr_filter->count; i++)
++		kfree(msr_filter->ranges[i].bitmap);
+ 
+-	for (i = 0; i < count; i++)
+-		kfree(ranges[i].bitmap);
++	kfree(msr_filter);
+ }
+ 
+-static int kvm_add_msr_filter(struct kvm *kvm, struct kvm_msr_filter_range *user_range)
++static int kvm_add_msr_filter(struct kvm_x86_msr_filter *msr_filter,
++			      struct kvm_msr_filter_range *user_range)
+ {
+-	struct msr_bitmap_range *ranges = kvm->arch.msr_filter.ranges;
+ 	struct msr_bitmap_range range;
+ 	unsigned long *bitmap = NULL;
+ 	size_t bitmap_size;
+@@ -5343,11 +5361,9 @@ static int kvm_add_msr_filter(struct kvm *kvm, struct kvm_msr_filter_range *user
+ 		goto err;
+ 	}
+ 
+-	/* Everything ok, add this range identifier to our global pool */
+-	ranges[kvm->arch.msr_filter.count] = range;
+-	/* Make sure we filled the array before we tell anyone to walk it */
+-	smp_wmb();
+-	kvm->arch.msr_filter.count++;
++	/* Everything ok, add this range identifier. */
++	msr_filter->ranges[msr_filter->count] = range;
++	msr_filter->count++;
+ 
+ 	return 0;
+ err:
+@@ -5358,10 +5374,11 @@ err:
+ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
+ {
+ 	struct kvm_msr_filter __user *user_msr_filter = argp;
++	struct kvm_x86_msr_filter *new_filter, *old_filter;
+ 	struct kvm_msr_filter filter;
+ 	bool default_allow;
+-	int r = 0;
+ 	bool empty = true;
++	int r = 0;
+ 	u32 i;
+ 
+ 	if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
+@@ -5374,25 +5391,32 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
+ 	if (empty && !default_allow)
+ 		return -EINVAL;
+ 
+-	kvm_clear_msr_filter(kvm);
+-
+-	kvm->arch.msr_filter.default_allow = default_allow;
++	new_filter = kvm_alloc_msr_filter(default_allow);
++	if (!new_filter)
++		return -ENOMEM;
+ 
+-	/*
+-	 * Protect from concurrent calls to this function that could trigger
+-	 * a TOCTOU violation on kvm->arch.msr_filter.count.
+-	 */
+-	mutex_lock(&kvm->lock);
+ 	for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) {
+-		r = kvm_add_msr_filter(kvm, &filter.ranges[i]);
+-		if (r)
+-			break;
++		r = kvm_add_msr_filter(new_filter, &filter.ranges[i]);
++		if (r) {
++			kvm_free_msr_filter(new_filter);
++			return r;
++		}
+ 	}
+ 
++	mutex_lock(&kvm->lock);
++
++	/* The per-VM filter is protected by kvm->lock... */
++	old_filter = srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1);
++
++	rcu_assign_pointer(kvm->arch.msr_filter, new_filter);
++	synchronize_srcu(&kvm->srcu);
++
++	kvm_free_msr_filter(old_filter);
++
+ 	kvm_make_all_cpus_request(kvm, KVM_REQ_MSR_FILTER_CHANGED);
+ 	mutex_unlock(&kvm->lock);
+ 
+-	return r;
++	return 0;
+ }
+ 
+ long kvm_arch_vm_ioctl(struct file *filp,
+@@ -10423,8 +10447,6 @@ void kvm_arch_pre_destroy_vm(struct kvm *kvm)
+ 
+ void kvm_arch_destroy_vm(struct kvm *kvm)
+ {
+-	u32 i;
+-
+ 	if (current->mm == kvm->mm) {
+ 		/*
+ 		 * Free memory regions allocated on behalf of userspace,
+@@ -10441,8 +10463,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
+ 	}
+ 	if (kvm_x86_ops.vm_destroy)
+ 		kvm_x86_ops.vm_destroy(kvm);
+-	for (i = 0; i < kvm->arch.msr_filter.count; i++)
+-		kfree(kvm->arch.msr_filter.ranges[i].bitmap);
++	kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));
+ 	kvm_pic_destroy(kvm);
+ 	kvm_ioapic_destroy(kvm);
+ 	kvm_free_vcpus(kvm);
+diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
+index f80d10d39cf6d..cc85e199108eb 100644
+--- a/arch/x86/mm/mem_encrypt.c
++++ b/arch/x86/mm/mem_encrypt.c
+@@ -231,7 +231,7 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
+ 	if (pgprot_val(old_prot) == pgprot_val(new_prot))
+ 		return;
+ 
+-	pa = pfn << page_level_shift(level);
++	pa = pfn << PAGE_SHIFT;
+ 	size = page_level_size(level);
+ 
+ 	/*
+diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
+index 60da7e793385e..56e0f290fef65 100644
+--- a/arch/x86/xen/p2m.c
++++ b/arch/x86/xen/p2m.c
+@@ -98,8 +98,8 @@ EXPORT_SYMBOL_GPL(xen_p2m_size);
+ unsigned long xen_max_p2m_pfn __read_mostly;
+ EXPORT_SYMBOL_GPL(xen_max_p2m_pfn);
+ 
+-#ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
+-#define P2M_LIMIT CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
++#ifdef CONFIG_XEN_MEMORY_HOTPLUG_LIMIT
++#define P2M_LIMIT CONFIG_XEN_MEMORY_HOTPLUG_LIMIT
+ #else
+ #define P2M_LIMIT 0
+ #endif
+@@ -416,9 +416,6 @@ void __init xen_vmalloc_p2m_tree(void)
+ 	xen_p2m_last_pfn = xen_max_p2m_pfn;
+ 
+ 	p2m_limit = (phys_addr_t)P2M_LIMIT * 1024 * 1024 * 1024 / PAGE_SIZE;
+-	if (!p2m_limit && IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC))
+-		p2m_limit = xen_start_info->nr_pages * XEN_EXTRA_MEM_RATIO;
+-
+ 	vm.flags = VM_ALLOC;
+ 	vm.size = ALIGN(sizeof(unsigned long) * max(xen_max_p2m_pfn, p2m_limit),
+ 			PMD_SIZE * PMDS_PER_MID_PAGE);
+diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
+index 1a3b75652fa4f..8bfc103301077 100644
+--- a/arch/x86/xen/setup.c
++++ b/arch/x86/xen/setup.c
+@@ -59,6 +59,18 @@ static struct {
+ } xen_remap_buf __initdata __aligned(PAGE_SIZE);
+ static unsigned long xen_remap_mfn __initdata = INVALID_P2M_ENTRY;
+ 
++/*
++ * The maximum amount of extra memory compared to the base size.  The
++ * main scaling factor is the size of struct page.  At extreme ratios
++ * of base:extra, all the base memory can be filled with page
++ * structures for the extra memory, leaving no space for anything
++ * else.
++ *
++ * 10x seems like a reasonable balance between scaling flexibility and
++ * leaving a practically usable system.
++ */
++#define EXTRA_MEM_RATIO		(10)
++
+ static bool xen_512gb_limit __initdata = IS_ENABLED(CONFIG_XEN_512GB);
+ 
+ static void __init xen_parse_512gb(void)
+@@ -778,13 +790,13 @@ char * __init xen_memory_setup(void)
+ 		extra_pages += max_pages - max_pfn;
+ 
+ 	/*
+-	 * Clamp the amount of extra memory to a XEN_EXTRA_MEM_RATIO
++	 * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
+ 	 * factor the base size.
+ 	 *
+ 	 * Make sure we have no memory above max_pages, as this area
+ 	 * isn't handled by the p2m management.
+ 	 */
+-	extra_pages = min3(XEN_EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
++	extra_pages = min3(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
+ 			   extra_pages, max_pages - max_pfn);
+ 	i = 0;
+ 	addr = xen_e820_table.entries[0].addr;
+diff --git a/block/blk-cgroup-rwstat.c b/block/blk-cgroup-rwstat.c
+index 85d5790ac49b0..3304e841df7ce 100644
+--- a/block/blk-cgroup-rwstat.c
++++ b/block/blk-cgroup-rwstat.c
+@@ -109,6 +109,7 @@ void blkg_rwstat_recursive_sum(struct blkcg_gq *blkg, struct blkcg_policy *pol,
+ 
+ 	lockdep_assert_held(&blkg->q->queue_lock);
+ 
++	memset(sum, 0, sizeof(*sum));
+ 	rcu_read_lock();
+ 	blkg_for_each_descendant_pre(pos_blkg, pos_css, blkg) {
+ 		struct blkg_rwstat *rwstat;
+@@ -122,7 +123,7 @@ void blkg_rwstat_recursive_sum(struct blkcg_gq *blkg, struct blkcg_policy *pol,
+ 			rwstat = (void *)pos_blkg + off;
+ 
+ 		for (i = 0; i < BLKG_RWSTAT_NR; i++)
+-			sum->cnt[i] = blkg_rwstat_read_counter(rwstat, i);
++			sum->cnt[i] += blkg_rwstat_read_counter(rwstat, i);
+ 	}
+ 	rcu_read_unlock();
+ }
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index 97b7c28215652..7cdd566966473 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -375,6 +375,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
+ 	switch (bio_op(rq->bio)) {
+ 	case REQ_OP_DISCARD:
+ 	case REQ_OP_SECURE_ERASE:
++		if (queue_max_discard_segments(rq->q) > 1) {
++			struct bio *bio = rq->bio;
++
++			for_each_bio(bio)
++				nr_phys_segs++;
++			return nr_phys_segs;
++		}
++		return 1;
+ 	case REQ_OP_WRITE_ZEROES:
+ 		return 0;
+ 	case REQ_OP_WRITE_SAME:
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 4676c6f00489c..ab7d7ebcf6ddc 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -240,7 +240,7 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
+ 		 */
+ 		if (op == REQ_OP_ZONE_RESET &&
+ 		    blkdev_allow_reset_all_zones(bdev, sector, nr_sectors)) {
+-			bio->bi_opf = REQ_OP_ZONE_RESET_ALL;
++			bio->bi_opf = REQ_OP_ZONE_RESET_ALL | REQ_SYNC;
+ 			break;
+ 		}
+ 
+diff --git a/block/genhd.c b/block/genhd.c
+index ec6264e2ed671..796baf7612024 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -732,10 +732,8 @@ static void register_disk(struct device *parent, struct gendisk *disk,
+ 	disk->part0.holder_dir = kobject_create_and_add("holders", &ddev->kobj);
+ 	disk->slave_dir = kobject_create_and_add("slaves", &ddev->kobj);
+ 
+-	if (disk->flags & GENHD_FL_HIDDEN) {
+-		dev_set_uevent_suppress(ddev, 0);
++	if (disk->flags & GENHD_FL_HIDDEN)
+ 		return;
+-	}
+ 
+ 	disk_scan_partitions(disk);
+ 
+diff --git a/drivers/acpi/acpica/nsaccess.c b/drivers/acpi/acpica/nsaccess.c
+index 3f045b5953b2e..a0c1a665dfc12 100644
+--- a/drivers/acpi/acpica/nsaccess.c
++++ b/drivers/acpi/acpica/nsaccess.c
+@@ -99,13 +99,12 @@ acpi_status acpi_ns_root_initialize(void)
+ 		 * just create and link the new node(s) here.
+ 		 */
+ 		new_node =
+-		    ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_namespace_node));
++		    acpi_ns_create_node(*ACPI_CAST_PTR(u32, init_val->name));
+ 		if (!new_node) {
+ 			status = AE_NO_MEMORY;
+ 			goto unlock_and_exit;
+ 		}
+ 
+-		ACPI_COPY_NAMESEG(new_node->name.ascii, init_val->name);
+ 		new_node->descriptor_type = ACPI_DESC_TYPE_NAMED;
+ 		new_node->type = init_val->type;
+ 
+diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
+index aee023ad02375..a958ad60a3394 100644
+--- a/drivers/acpi/internal.h
++++ b/drivers/acpi/internal.h
+@@ -9,6 +9,8 @@
+ #ifndef _ACPI_INTERNAL_H_
+ #define _ACPI_INTERNAL_H_
+ 
++#include <linux/idr.h>
++
+ #define PREFIX "ACPI: "
+ 
+ int early_acpi_osi_init(void);
+@@ -96,9 +98,11 @@ void acpi_scan_table_handler(u32 event, void *table, void *context);
+ 
+ extern struct list_head acpi_bus_id_list;
+ 
++#define ACPI_MAX_DEVICE_INSTANCES	4096
++
+ struct acpi_device_bus_id {
+ 	const char *bus_id;
+-	unsigned int instance_no;
++	struct ida instance_ida;
+ 	struct list_head node;
+ };
+ 
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index dca5cc423cd41..b47f14ac75ae0 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -482,9 +482,8 @@ static void acpi_device_del(struct acpi_device *device)
+ 	list_for_each_entry(acpi_device_bus_id, &acpi_bus_id_list, node)
+ 		if (!strcmp(acpi_device_bus_id->bus_id,
+ 			    acpi_device_hid(device))) {
+-			if (acpi_device_bus_id->instance_no > 0)
+-				acpi_device_bus_id->instance_no--;
+-			else {
++			ida_simple_remove(&acpi_device_bus_id->instance_ida, device->pnp.instance_no);
++			if (ida_is_empty(&acpi_device_bus_id->instance_ida)) {
+ 				list_del(&acpi_device_bus_id->node);
+ 				kfree_const(acpi_device_bus_id->bus_id);
+ 				kfree(acpi_device_bus_id);
+@@ -623,12 +622,38 @@ void acpi_bus_put_acpi_device(struct acpi_device *adev)
+ 	put_device(&adev->dev);
+ }
+ 
++static struct acpi_device_bus_id *acpi_device_bus_id_match(const char *dev_id)
++{
++	struct acpi_device_bus_id *acpi_device_bus_id;
++
++	/* Find suitable bus_id and instance number in acpi_bus_id_list. */
++	list_for_each_entry(acpi_device_bus_id, &acpi_bus_id_list, node) {
++		if (!strcmp(acpi_device_bus_id->bus_id, dev_id))
++			return acpi_device_bus_id;
++	}
++	return NULL;
++}
++
++static int acpi_device_set_name(struct acpi_device *device,
++				struct acpi_device_bus_id *acpi_device_bus_id)
++{
++	struct ida *instance_ida = &acpi_device_bus_id->instance_ida;
++	int result;
++
++	result = ida_simple_get(instance_ida, 0, ACPI_MAX_DEVICE_INSTANCES, GFP_KERNEL);
++	if (result < 0)
++		return result;
++
++	device->pnp.instance_no = result;
++	dev_set_name(&device->dev, "%s:%02x", acpi_device_bus_id->bus_id, result);
++	return 0;
++}
++
+ int acpi_device_add(struct acpi_device *device,
+ 		    void (*release)(struct device *))
+ {
++	struct acpi_device_bus_id *acpi_device_bus_id;
+ 	int result;
+-	struct acpi_device_bus_id *acpi_device_bus_id, *new_bus_id;
+-	int found = 0;
+ 
+ 	if (device->handle) {
+ 		acpi_status status;
+@@ -654,41 +679,38 @@ int acpi_device_add(struct acpi_device *device,
+ 	INIT_LIST_HEAD(&device->del_list);
+ 	mutex_init(&device->physical_node_lock);
+ 
+-	new_bus_id = kzalloc(sizeof(struct acpi_device_bus_id), GFP_KERNEL);
+-	if (!new_bus_id) {
+-		pr_err(PREFIX "Memory allocation error\n");
+-		result = -ENOMEM;
+-		goto err_detach;
+-	}
+-
+ 	mutex_lock(&acpi_device_lock);
+-	/*
+-	 * Find suitable bus_id and instance number in acpi_bus_id_list
+-	 * If failed, create one and link it into acpi_bus_id_list
+-	 */
+-	list_for_each_entry(acpi_device_bus_id, &acpi_bus_id_list, node) {
+-		if (!strcmp(acpi_device_bus_id->bus_id,
+-			    acpi_device_hid(device))) {
+-			acpi_device_bus_id->instance_no++;
+-			found = 1;
+-			kfree(new_bus_id);
+-			break;
++
++	acpi_device_bus_id = acpi_device_bus_id_match(acpi_device_hid(device));
++	if (acpi_device_bus_id) {
++		result = acpi_device_set_name(device, acpi_device_bus_id);
++		if (result)
++			goto err_unlock;
++	} else {
++		acpi_device_bus_id = kzalloc(sizeof(*acpi_device_bus_id),
++					     GFP_KERNEL);
++		if (!acpi_device_bus_id) {
++			result = -ENOMEM;
++			goto err_unlock;
+ 		}
+-	}
+-	if (!found) {
+-		acpi_device_bus_id = new_bus_id;
+ 		acpi_device_bus_id->bus_id =
+ 			kstrdup_const(acpi_device_hid(device), GFP_KERNEL);
+ 		if (!acpi_device_bus_id->bus_id) {
+-			pr_err(PREFIX "Memory allocation error for bus id\n");
++			kfree(acpi_device_bus_id);
+ 			result = -ENOMEM;
+-			goto err_free_new_bus_id;
++			goto err_unlock;
++		}
++
++		ida_init(&acpi_device_bus_id->instance_ida);
++
++		result = acpi_device_set_name(device, acpi_device_bus_id);
++		if (result) {
++			kfree(acpi_device_bus_id);
++			goto err_unlock;
+ 		}
+ 
+-		acpi_device_bus_id->instance_no = 0;
+ 		list_add_tail(&acpi_device_bus_id->node, &acpi_bus_id_list);
+ 	}
+-	dev_set_name(&device->dev, "%s:%02x", acpi_device_bus_id->bus_id, acpi_device_bus_id->instance_no);
+ 
+ 	if (device->parent)
+ 		list_add_tail(&device->node, &device->parent->children);
+@@ -720,13 +742,9 @@ int acpi_device_add(struct acpi_device *device,
+ 		list_del(&device->node);
+ 	list_del(&device->wakeup_list);
+ 
+- err_free_new_bus_id:
+-	if (!found)
+-		kfree(new_bus_id);
+-
++ err_unlock:
+ 	mutex_unlock(&acpi_device_lock);
+ 
+- err_detach:
+ 	acpi_detach_data(device->handle, acpi_scan_drop_device);
+ 	return result;
+ }
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 811d298637cb2..83cd4c95faf0d 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -147,6 +147,7 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		},
+ 	},
+ 	{
++	.callback = video_detect_force_vendor,
+ 	.ident = "Sony VPCEH3U1E",
+ 	.matches = {
+ 		DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+diff --git a/drivers/atm/eni.c b/drivers/atm/eni.c
+index 316a9947541fe..b574cce98dc36 100644
+--- a/drivers/atm/eni.c
++++ b/drivers/atm/eni.c
+@@ -2260,7 +2260,8 @@ out:
+ 	return rc;
+ 
+ err_eni_release:
+-	eni_do_release(dev);
++	dev->phy = NULL;
++	iounmap(ENI_DEV(dev)->ioaddr);
+ err_unregister:
+ 	atm_dev_deregister(dev);
+ err_free_consistent:
+diff --git a/drivers/atm/idt77105.c b/drivers/atm/idt77105.c
+index 3c081b6171a8f..bfca7b8a6f31e 100644
+--- a/drivers/atm/idt77105.c
++++ b/drivers/atm/idt77105.c
+@@ -262,7 +262,7 @@ static int idt77105_start(struct atm_dev *dev)
+ {
+ 	unsigned long flags;
+ 
+-	if (!(dev->dev_data = kmalloc(sizeof(struct idt77105_priv),GFP_KERNEL)))
++	if (!(dev->phy_data = kmalloc(sizeof(struct idt77105_priv),GFP_KERNEL)))
+ 		return -ENOMEM;
+ 	PRIV(dev)->dev = dev;
+ 	spin_lock_irqsave(&idt77105_priv_lock, flags);
+@@ -337,7 +337,7 @@ static int idt77105_stop(struct atm_dev *dev)
+                 else
+                     idt77105_all = walk->next;
+ 	        dev->phy = NULL;
+-                dev->dev_data = NULL;
++                dev->phy_data = NULL;
+                 kfree(walk);
+                 break;
+             }
+diff --git a/drivers/atm/lanai.c b/drivers/atm/lanai.c
+index ac811cfa68431..92edd100a394f 100644
+--- a/drivers/atm/lanai.c
++++ b/drivers/atm/lanai.c
+@@ -2234,6 +2234,7 @@ static int lanai_dev_open(struct atm_dev *atmdev)
+ 	conf1_write(lanai);
+ #endif
+ 	iounmap(lanai->base);
++	lanai->base = NULL;
+     error_pci:
+ 	pci_disable_device(lanai->pci);
+     error:
+@@ -2246,6 +2247,8 @@ static int lanai_dev_open(struct atm_dev *atmdev)
+ static void lanai_dev_close(struct atm_dev *atmdev)
+ {
+ 	struct lanai_dev *lanai = (struct lanai_dev *) atmdev->dev_data;
++	if (lanai->base==NULL)
++		return;
+ 	printk(KERN_INFO DEV_LABEL "(itf %d): shutting down interface\n",
+ 	    lanai->number);
+ 	lanai_timed_poll_stop(lanai);
+@@ -2553,7 +2556,7 @@ static int lanai_init_one(struct pci_dev *pci,
+ 	struct atm_dev *atmdev;
+ 	int result;
+ 
+-	lanai = kmalloc(sizeof(*lanai), GFP_KERNEL);
++	lanai = kzalloc(sizeof(*lanai), GFP_KERNEL);
+ 	if (lanai == NULL) {
+ 		printk(KERN_ERR DEV_LABEL
+ 		       ": couldn't allocate dev_data structure!\n");
+diff --git a/drivers/atm/uPD98402.c b/drivers/atm/uPD98402.c
+index 7850758b5bb82..239852d855589 100644
+--- a/drivers/atm/uPD98402.c
++++ b/drivers/atm/uPD98402.c
+@@ -211,7 +211,7 @@ static void uPD98402_int(struct atm_dev *dev)
+ static int uPD98402_start(struct atm_dev *dev)
+ {
+ 	DPRINTK("phy_start\n");
+-	if (!(dev->dev_data = kmalloc(sizeof(struct uPD98402_priv),GFP_KERNEL)))
++	if (!(dev->phy_data = kmalloc(sizeof(struct uPD98402_priv),GFP_KERNEL)))
+ 		return -ENOMEM;
+ 	spin_lock_init(&PRIV(dev)->lock);
+ 	memset(&PRIV(dev)->sonet_stats,0,sizeof(struct k_sonet_stats));
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index bfda153b1a41d..5ef67bacb585e 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -305,7 +305,7 @@ static int rpm_get_suppliers(struct device *dev)
+ 	return 0;
+ }
+ 
+-static void rpm_put_suppliers(struct device *dev)
++static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)
+ {
+ 	struct device_link *link;
+ 
+@@ -313,10 +313,30 @@ static void rpm_put_suppliers(struct device *dev)
+ 				device_links_read_lock_held()) {
+ 
+ 		while (refcount_dec_not_one(&link->rpm_active))
+-			pm_runtime_put(link->supplier);
++			pm_runtime_put_noidle(link->supplier);
++
++		if (try_to_suspend)
++			pm_request_idle(link->supplier);
+ 	}
+ }
+ 
++static void rpm_put_suppliers(struct device *dev)
++{
++	__rpm_put_suppliers(dev, true);
++}
++
++static void rpm_suspend_suppliers(struct device *dev)
++{
++	struct device_link *link;
++	int idx = device_links_read_lock();
++
++	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
++				device_links_read_lock_held())
++		pm_request_idle(link->supplier);
++
++	device_links_read_unlock(idx);
++}
++
+ /**
+  * __rpm_callback - Run a given runtime PM callback for a given device.
+  * @cb: Runtime PM callback to run.
+@@ -344,8 +364,10 @@ static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
+ 			idx = device_links_read_lock();
+ 
+ 			retval = rpm_get_suppliers(dev);
+-			if (retval)
++			if (retval) {
++				rpm_put_suppliers(dev);
+ 				goto fail;
++			}
+ 
+ 			device_links_read_unlock(idx);
+ 		}
+@@ -368,9 +390,9 @@ static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
+ 		    || (dev->power.runtime_status == RPM_RESUMING && retval))) {
+ 			idx = device_links_read_lock();
+ 
+- fail:
+-			rpm_put_suppliers(dev);
++			__rpm_put_suppliers(dev, false);
+ 
++fail:
+ 			device_links_read_unlock(idx);
+ 		}
+ 
+@@ -642,8 +664,11 @@ static int rpm_suspend(struct device *dev, int rpmflags)
+ 		goto out;
+ 	}
+ 
++	if (dev->power.irq_safe)
++		goto out;
++
+ 	/* Maybe the parent is now able to suspend. */
+-	if (parent && !parent->power.ignore_children && !dev->power.irq_safe) {
++	if (parent && !parent->power.ignore_children) {
+ 		spin_unlock(&dev->power.lock);
+ 
+ 		spin_lock(&parent->power.lock);
+@@ -652,6 +677,14 @@ static int rpm_suspend(struct device *dev, int rpmflags)
+ 
+ 		spin_lock(&dev->power.lock);
+ 	}
++	/* Maybe the suppliers are now able to suspend. */
++	if (dev->power.links_count > 0) {
++		spin_unlock_irq(&dev->power.lock);
++
++		rpm_suspend_suppliers(dev);
++
++		spin_lock_irq(&dev->power.lock);
++	}
+ 
+  out:
+ 	trace_rpm_return_int_rcuidle(dev, _THIS_IP_, retval);
+diff --git a/drivers/block/umem.c b/drivers/block/umem.c
+index 2b95d7b33b918..5eb44e4a91eeb 100644
+--- a/drivers/block/umem.c
++++ b/drivers/block/umem.c
+@@ -877,6 +877,7 @@ static int mm_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 	if (card->mm_pages[0].desc == NULL ||
+ 	    card->mm_pages[1].desc == NULL) {
+ 		dev_printk(KERN_ERR, &card->dev->dev, "alloc failed\n");
++		ret = -ENOMEM;
+ 		goto failed_alloc;
+ 	}
+ 	reset_page(&card->mm_pages[0]);
+@@ -888,8 +889,10 @@ static int mm_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 	spin_lock_init(&card->lock);
+ 
+ 	card->queue = blk_alloc_queue(NUMA_NO_NODE);
+-	if (!card->queue)
++	if (!card->queue) {
++		ret = -ENOMEM;
+ 		goto failed_alloc;
++	}
+ 
+ 	tasklet_init(&card->tasklet, process_page, (unsigned long)card);
+ 
+diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
+index da16121140cab..3874233f7194d 100644
+--- a/drivers/block/xen-blkback/blkback.c
++++ b/drivers/block/xen-blkback/blkback.c
+@@ -891,7 +891,7 @@ next:
+ out:
+ 	for (i = last_map; i < num; i++) {
+ 		/* Don't zap current batch's valid persistent grants. */
+-		if(i >= last_map + segs_to_map)
++		if(i >= map_until)
+ 			pages[i]->persistent_gnt = NULL;
+ 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
+ 	}
+diff --git a/drivers/bus/omap_l3_noc.c b/drivers/bus/omap_l3_noc.c
+index b040447575adc..dcfb32ee5cb60 100644
+--- a/drivers/bus/omap_l3_noc.c
++++ b/drivers/bus/omap_l3_noc.c
+@@ -285,7 +285,7 @@ static int omap_l3_probe(struct platform_device *pdev)
+ 	 */
+ 	l3->debug_irq = platform_get_irq(pdev, 0);
+ 	ret = devm_request_irq(l3->dev, l3->debug_irq, l3_interrupt_handler,
+-			       0x0, "l3-dbg-irq", l3);
++			       IRQF_NO_THREAD, "l3-dbg-irq", l3);
+ 	if (ret) {
+ 		dev_err(l3->dev, "request_irq failed for %d\n",
+ 			l3->debug_irq);
+@@ -294,7 +294,7 @@ static int omap_l3_probe(struct platform_device *pdev)
+ 
+ 	l3->app_irq = platform_get_irq(pdev, 1);
+ 	ret = devm_request_irq(l3->dev, l3->app_irq, l3_interrupt_handler,
+-			       0x0, "l3-app-irq", l3);
++			       IRQF_NO_THREAD, "l3-app-irq", l3);
+ 	if (ret)
+ 		dev_err(l3->dev, "request_irq failed for %d\n", l3->app_irq);
+ 
+diff --git a/drivers/clk/qcom/gcc-sc7180.c b/drivers/clk/qcom/gcc-sc7180.c
+index b080739ab0c33..7e80dbd4a3f9f 100644
+--- a/drivers/clk/qcom/gcc-sc7180.c
++++ b/drivers/clk/qcom/gcc-sc7180.c
+@@ -620,7 +620,7 @@ static struct clk_rcg2 gcc_sdcc1_apps_clk_src = {
+ 		.name = "gcc_sdcc1_apps_clk_src",
+ 		.parent_data = gcc_parent_data_1,
+ 		.num_parents = 5,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+ 
+@@ -642,7 +642,7 @@ static struct clk_rcg2 gcc_sdcc1_ice_core_clk_src = {
+ 		.name = "gcc_sdcc1_ice_core_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = 4,
+-		.ops = &clk_rcg2_floor_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
+index 3776d960f405e..1c192a42f11e0 100644
+--- a/drivers/cpufreq/cpufreq-dt-platdev.c
++++ b/drivers/cpufreq/cpufreq-dt-platdev.c
+@@ -103,6 +103,8 @@ static const struct of_device_id whitelist[] __initconst = {
+ static const struct of_device_id blacklist[] __initconst = {
+ 	{ .compatible = "allwinner,sun50i-h6", },
+ 
++	{ .compatible = "arm,vexpress", },
++
+ 	{ .compatible = "calxeda,highbank", },
+ 	{ .compatible = "calxeda,ecx-2000", },
+ 
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index 49a1f8ce4baa6..863f059bc498a 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -174,7 +174,7 @@ static void acpi_gpiochip_request_irq(struct acpi_gpio_chip *acpi_gpio,
+ 	int ret, value;
+ 
+ 	ret = request_threaded_irq(event->irq, NULL, event->handler,
+-				   event->irqflags, "ACPI:Event", event);
++				   event->irqflags | IRQF_ONESHOT, "ACPI:Event", event);
+ 	if (ret) {
+ 		dev_err(acpi_gpio->chip->parent,
+ 			"Failed to setup interrupt handler for %d\n",
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index 16f73c1023943..ca868271f4c43 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -239,6 +239,7 @@ source "drivers/gpu/drm/arm/Kconfig"
+ config DRM_RADEON
+ 	tristate "ATI Radeon"
+ 	depends on DRM && PCI && MMU
++	depends on AGP || !AGP
+ 	select FW_LOADER
+         select DRM_KMS_HELPER
+         select DRM_TTM
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 1a880cb48d19e..a2425f7ca7597 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1093,6 +1093,7 @@ static const struct pci_device_id pciidlist[] = {
+ 	{0x1002, 0x73A3, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID},
+ 	{0x1002, 0x73AB, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID},
+ 	{0x1002, 0x73AE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID},
++	{0x1002, 0x73AF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID},
+ 	{0x1002, 0x73BF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID},
+ 
+ 	{0, 0, 0}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+index e2c2eb45a7934..1ea8af48ae2f5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+@@ -146,7 +146,7 @@ static int amdgpufb_create_pinned_object(struct amdgpu_fbdev *rfbdev,
+ 	size = mode_cmd->pitches[0] * height;
+ 	aligned_size = ALIGN(size, PAGE_SIZE);
+ 	ret = amdgpu_gem_object_create(adev, aligned_size, 0, domain, flags,
+-				       ttm_bo_type_kernel, NULL, &gobj);
++				       ttm_bo_type_device, NULL, &gobj);
+ 	if (ret) {
+ 		pr_err("failed to allocate framebuffer (%d)\n", aligned_size);
+ 		return -ENOMEM;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_link_encoder.c
+index 15c2ff264ff60..1a347484cf2a1 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_link_encoder.c
+@@ -341,8 +341,7 @@ void enc2_hw_init(struct link_encoder *enc)
+ 	} else {
+ 		AUX_REG_WRITE(AUX_DPHY_RX_CONTROL0, 0x103d1110);
+ 
+-		AUX_REG_WRITE(AUX_DPHY_TX_CONTROL, 0x21c4d);
+-
++		AUX_REG_WRITE(AUX_DPHY_TX_CONTROL, 0x21c7a);
+ 	}
+ 
+ 	//AUX_DPHY_TX_REF_CONTROL'AUX_TX_REF_DIV HW default is 0x32;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index b5fe2a008bd47..7ed4d7c8734f0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -295,7 +295,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn2_1_soc = {
+ 	.num_banks = 8,
+ 	.num_chans = 4,
+ 	.vmm_page_size_bytes = 4096,
+-	.dram_clock_change_latency_us = 11.72,
++	.dram_clock_change_latency_us = 23.84,
+ 	.return_bus_width_bytes = 64,
+ 	.dispclk_dppclk_vco_speed_mhz = 3600,
+ 	.xfc_bus_transport_time_us = 4,
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+index c5223a9e0d891..b76425164e297 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+@@ -524,6 +524,48 @@ static int smu7_force_switch_to_arbf0(struct pp_hwmgr *hwmgr)
+ 			tmp, MC_CG_ARB_FREQ_F0);
+ }
+ 
++static uint16_t smu7_override_pcie_speed(struct pp_hwmgr *hwmgr)
++{
++	struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
++	uint16_t pcie_gen = 0;
++
++	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4 &&
++	    adev->pm.pcie_gen_mask & CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN4)
++		pcie_gen = 3;
++	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3 &&
++		adev->pm.pcie_gen_mask & CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN3)
++		pcie_gen = 2;
++	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2 &&
++		adev->pm.pcie_gen_mask & CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN2)
++		pcie_gen = 1;
++	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1 &&
++		adev->pm.pcie_gen_mask & CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN1)
++		pcie_gen = 0;
++
++	return pcie_gen;
++}
++
++static uint16_t smu7_override_pcie_width(struct pp_hwmgr *hwmgr)
++{
++	struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
++	uint16_t pcie_width = 0;
++
++	if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
++		pcie_width = 16;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X12)
++		pcie_width = 12;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X8)
++		pcie_width = 8;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X4)
++		pcie_width = 4;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X2)
++		pcie_width = 2;
++	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X1)
++		pcie_width = 1;
++
++	return pcie_width;
++}
++
+ static int smu7_setup_default_pcie_table(struct pp_hwmgr *hwmgr)
+ {
+ 	struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
+@@ -620,6 +662,11 @@ static int smu7_setup_default_pcie_table(struct pp_hwmgr *hwmgr)
+ 					PP_Min_PCIEGen),
+ 			get_pcie_lane_support(data->pcie_lane_cap,
+ 					PP_Max_PCIELane));
++
++		if (data->pcie_dpm_key_disabled)
++			phm_setup_pcie_table_entry(&data->dpm_table.pcie_speed_table,
++				data->dpm_table.pcie_speed_table.count,
++				smu7_override_pcie_speed(hwmgr), smu7_override_pcie_width(hwmgr));
+ 	}
+ 	return 0;
+ }
+@@ -1180,6 +1227,13 @@ static int smu7_start_dpm(struct pp_hwmgr *hwmgr)
+ 						NULL)),
+ 				"Failed to enable pcie DPM during DPM Start Function!",
+ 				return -EINVAL);
++	} else {
++		PP_ASSERT_WITH_CODE(
++				(0 == smum_send_msg_to_smc(hwmgr,
++						PPSMC_MSG_PCIeDPM_Disable,
++						NULL)),
++				"Failed to disble pcie DPM during DPM Start Function!",
++				return -EINVAL);
+ 	}
+ 
+ 	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+index 18e4eb8884c26..ed4eafc744d3d 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+@@ -54,6 +54,9 @@
+ #include "smuio/smuio_9_0_offset.h"
+ #include "smuio/smuio_9_0_sh_mask.h"
+ 
++#define smnPCIE_LC_SPEED_CNTL			0x11140290
++#define smnPCIE_LC_LINK_WIDTH_CNTL		0x11140288
++
+ #define HBM_MEMORY_CHANNEL_WIDTH    128
+ 
+ static const uint32_t channel_number[] = {1, 2, 0, 4, 0, 8, 0, 16, 2};
+@@ -443,8 +446,7 @@ static void vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ 	if (PP_CAP(PHM_PlatformCaps_VCEDPM))
+ 		data->smu_features[GNLD_DPM_VCE].supported = true;
+ 
+-	if (!data->registry_data.pcie_dpm_key_disabled)
+-		data->smu_features[GNLD_DPM_LINK].supported = true;
++	data->smu_features[GNLD_DPM_LINK].supported = true;
+ 
+ 	if (!data->registry_data.dcefclk_dpm_key_disabled)
+ 		data->smu_features[GNLD_DPM_DCEFCLK].supported = true;
+@@ -1545,6 +1547,13 @@ static int vega10_override_pcie_parameters(struct pp_hwmgr *hwmgr)
+ 			pp_table->PcieLaneCount[i] = pcie_width;
+ 	}
+ 
++	if (data->registry_data.pcie_dpm_key_disabled) {
++		for (i = 0; i < NUM_LINK_LEVELS; i++) {
++			pp_table->PcieGenSpeed[i] = pcie_gen;
++			pp_table->PcieLaneCount[i] = pcie_width;
++		}
++	}
++
+ 	return 0;
+ }
+ 
+@@ -2967,6 +2976,14 @@ static int vega10_start_dpm(struct pp_hwmgr *hwmgr, uint32_t bitmap)
+ 		}
+ 	}
+ 
++	if (data->registry_data.pcie_dpm_key_disabled) {
++		PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr,
++				false, data->smu_features[GNLD_DPM_LINK].smu_feature_bitmap),
++		"Attempt to Disable Link DPM feature Failed!", return -EINVAL);
++		data->smu_features[GNLD_DPM_LINK].enabled = false;
++		data->smu_features[GNLD_DPM_LINK].supported = false;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -4583,6 +4600,24 @@ static int vega10_set_ppfeature_status(struct pp_hwmgr *hwmgr, uint64_t new_ppfe
+ 	return 0;
+ }
+ 
++static int vega10_get_current_pcie_link_width_level(struct pp_hwmgr *hwmgr)
++{
++	struct amdgpu_device *adev = hwmgr->adev;
++
++	return (RREG32_PCIE(smnPCIE_LC_LINK_WIDTH_CNTL) &
++		PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD_MASK)
++		>> PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD__SHIFT;
++}
++
++static int vega10_get_current_pcie_link_speed_level(struct pp_hwmgr *hwmgr)
++{
++	struct amdgpu_device *adev = hwmgr->adev;
++
++	return (RREG32_PCIE(smnPCIE_LC_SPEED_CNTL) &
++		PSWUSP0_PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE_MASK)
++		>> PSWUSP0_PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE__SHIFT;
++}
++
+ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 		enum pp_clock_type type, char *buf)
+ {
+@@ -4591,8 +4626,9 @@ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 	struct vega10_single_dpm_table *mclk_table = &(data->dpm_table.mem_table);
+ 	struct vega10_single_dpm_table *soc_table = &(data->dpm_table.soc_table);
+ 	struct vega10_single_dpm_table *dcef_table = &(data->dpm_table.dcef_table);
+-	struct vega10_pcie_table *pcie_table = &(data->dpm_table.pcie_table);
+ 	struct vega10_odn_clock_voltage_dependency_table *podn_vdd_dep = NULL;
++	uint32_t gen_speed, lane_width, current_gen_speed, current_lane_width;
++	PPTable_t *pptable = &(data->smc_state_table.pp_table);
+ 
+ 	int i, now, size = 0, count = 0;
+ 
+@@ -4649,15 +4685,31 @@ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 					"*" : "");
+ 		break;
+ 	case PP_PCIE:
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentLinkIndex, &now);
+-
+-		for (i = 0; i < pcie_table->count; i++)
+-			size += sprintf(buf + size, "%d: %s %s\n", i,
+-					(pcie_table->pcie_gen[i] == 0) ? "2.5GT/s, x1" :
+-					(pcie_table->pcie_gen[i] == 1) ? "5.0GT/s, x16" :
+-					(pcie_table->pcie_gen[i] == 2) ? "8.0GT/s, x16" : "",
+-					(i == now) ? "*" : "");
++		current_gen_speed =
++			vega10_get_current_pcie_link_speed_level(hwmgr);
++		current_lane_width =
++			vega10_get_current_pcie_link_width_level(hwmgr);
++		for (i = 0; i < NUM_LINK_LEVELS; i++) {
++			gen_speed = pptable->PcieGenSpeed[i];
++			lane_width = pptable->PcieLaneCount[i];
++
++			size += sprintf(buf + size, "%d: %s %s %s\n", i,
++					(gen_speed == 0) ? "2.5GT/s," :
++					(gen_speed == 1) ? "5.0GT/s," :
++					(gen_speed == 2) ? "8.0GT/s," :
++					(gen_speed == 3) ? "16.0GT/s," : "",
++					(lane_width == 1) ? "x1" :
++					(lane_width == 2) ? "x2" :
++					(lane_width == 3) ? "x4" :
++					(lane_width == 4) ? "x8" :
++					(lane_width == 5) ? "x12" :
++					(lane_width == 6) ? "x16" : "",
++					(current_gen_speed == gen_speed) &&
++					(current_lane_width == lane_width) ?
++					"*" : "");
++		}
+ 		break;
++
+ 	case OD_SCLK:
+ 		if (hwmgr->od_enabled) {
+ 			size = sprintf(buf, "%s:\n", "OD_SCLK");
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
+index 62076035029ac..e68651fb7ca4c 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
+@@ -133,6 +133,7 @@ static void vega12_set_default_registry_data(struct pp_hwmgr *hwmgr)
+ 	data->registry_data.auto_wattman_debug = 0;
+ 	data->registry_data.auto_wattman_sample_period = 100;
+ 	data->registry_data.auto_wattman_threshold = 50;
++	data->registry_data.pcie_dpm_key_disabled = !(hwmgr->feature_mask & PP_PCIE_DPM_MASK);
+ }
+ 
+ static int vega12_set_features_platform_caps(struct pp_hwmgr *hwmgr)
+@@ -539,6 +540,29 @@ static int vega12_override_pcie_parameters(struct pp_hwmgr *hwmgr)
+ 		pp_table->PcieLaneCount[i] = pcie_width_arg;
+ 	}
+ 
++	/* override to the highest if it's disabled from ppfeaturmask */
++	if (data->registry_data.pcie_dpm_key_disabled) {
++		for (i = 0; i < NUM_LINK_LEVELS; i++) {
++			smu_pcie_arg = (i << 16) | (pcie_gen << 8) | pcie_width;
++			ret = smum_send_msg_to_smc_with_parameter(hwmgr,
++				PPSMC_MSG_OverridePcieParameters, smu_pcie_arg,
++				NULL);
++			PP_ASSERT_WITH_CODE(!ret,
++				"[OverridePcieParameters] Attempt to override pcie params failed!",
++				return ret);
++
++			pp_table->PcieGenSpeed[i] = pcie_gen;
++			pp_table->PcieLaneCount[i] = pcie_width;
++		}
++		ret = vega12_enable_smc_features(hwmgr,
++				false,
++				data->smu_features[GNLD_DPM_LINK].smu_feature_bitmap);
++		PP_ASSERT_WITH_CODE(!ret,
++				"Attempt to Disable DPM LINK Failed!",
++				return ret);
++		data->smu_features[GNLD_DPM_LINK].enabled = false;
++		data->smu_features[GNLD_DPM_LINK].supported = false;
++	}
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
+index 251979c059c8b..60cde0c528257 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
+@@ -171,6 +171,7 @@ static void vega20_set_default_registry_data(struct pp_hwmgr *hwmgr)
+ 	data->registry_data.gfxoff_controlled_by_driver = 1;
+ 	data->gfxoff_allowed = false;
+ 	data->counter_gfxoff = 0;
++	data->registry_data.pcie_dpm_key_disabled = !(hwmgr->feature_mask & PP_PCIE_DPM_MASK);
+ }
+ 
+ static int vega20_set_features_platform_caps(struct pp_hwmgr *hwmgr)
+@@ -885,6 +886,30 @@ static int vega20_override_pcie_parameters(struct pp_hwmgr *hwmgr)
+ 		pp_table->PcieLaneCount[i] = pcie_width_arg;
+ 	}
+ 
++	/* override to the highest if it's disabled from ppfeaturmask */
++	if (data->registry_data.pcie_dpm_key_disabled) {
++		for (i = 0; i < NUM_LINK_LEVELS; i++) {
++			smu_pcie_arg = (i << 16) | (pcie_gen << 8) | pcie_width;
++			ret = smum_send_msg_to_smc_with_parameter(hwmgr,
++				PPSMC_MSG_OverridePcieParameters, smu_pcie_arg,
++				NULL);
++			PP_ASSERT_WITH_CODE(!ret,
++				"[OverridePcieParameters] Attempt to override pcie params failed!",
++				return ret);
++
++			pp_table->PcieGenSpeed[i] = pcie_gen;
++			pp_table->PcieLaneCount[i] = pcie_width;
++		}
++		ret = vega20_enable_smc_features(hwmgr,
++				false,
++				data->smu_features[GNLD_DPM_LINK].smu_feature_bitmap);
++		PP_ASSERT_WITH_CODE(!ret,
++				"Attempt to Disable DPM LINK Failed!",
++				return ret);
++		data->smu_features[GNLD_DPM_LINK].enabled = false;
++		data->smu_features[GNLD_DPM_LINK].supported = false;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+index d1533bdc1335e..2b7e85318a76a 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+@@ -675,7 +675,7 @@ static int etnaviv_gem_userptr_get_pages(struct etnaviv_gem_object *etnaviv_obj)
+ 		struct page **pages = pvec + pinned;
+ 
+ 		ret = pin_user_pages_fast(ptr, num_pages,
+-					  !userptr->ro ? FOLL_WRITE : 0, pages);
++					  FOLL_WRITE | FOLL_FORCE, pages);
+ 		if (ret < 0) {
+ 			unpin_user_pages(pvec, pinned);
+ 			kvfree(pvec);
+diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+index 7fb36b12fe7a2..6614f67364862 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
++++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+@@ -316,7 +316,18 @@ void i915_vma_revoke_fence(struct i915_vma *vma)
+ 	WRITE_ONCE(fence->vma, NULL);
+ 	vma->fence = NULL;
+ 
+-	with_intel_runtime_pm_if_in_use(fence_to_uncore(fence)->rpm, wakeref)
++	/*
++	 * Skip the write to HW if and only if the device is currently
++	 * suspended.
++	 *
++	 * If the driver does not currently hold a wakeref (if_in_use == 0),
++	 * the device may currently be runtime suspended, or it may be woken
++	 * up before the suspend takes place. If the device is not suspended
++	 * (powered down) and we skip clearing the fence register, the HW is
++	 * left in an undefined state where we may end up with multiple
++	 * registers overlapping.
++	 */
++	with_intel_runtime_pm_if_active(fence_to_uncore(fence)->rpm, wakeref)
+ 		fence_write(fence);
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
+index 153ca9e65382e..8b725efb2254c 100644
+--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
++++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
+@@ -412,12 +412,20 @@ intel_wakeref_t intel_runtime_pm_get(struct intel_runtime_pm *rpm)
+ }
+ 
+ /**
+- * intel_runtime_pm_get_if_in_use - grab a runtime pm reference if device in use
++ * __intel_runtime_pm_get_if_active - grab a runtime pm reference if device is active
+  * @rpm: the intel_runtime_pm structure
++ * @ignore_usecount: get a ref even if dev->power.usage_count is 0
+  *
+  * This function grabs a device-level runtime pm reference if the device is
+- * already in use and ensures that it is powered up. It is illegal to try
+- * and access the HW should intel_runtime_pm_get_if_in_use() report failure.
++ * already active and ensures that it is powered up. It is illegal to try
++ * and access the HW should intel_runtime_pm_get_if_active() report failure.
++ *
++ * If @ignore_usecount=true, a reference will be acquired even if there is no
++ * user requiring the device to be powered up (dev->power.usage_count == 0).
++ * If the function returns false in this case then it's guaranteed that the
++ * device's runtime suspend hook has been called already or that it will be
++ * called (and hence it's also guaranteed that the device's runtime resume
++ * hook will be called eventually).
+  *
+  * Any runtime pm reference obtained by this function must have a symmetric
+  * call to intel_runtime_pm_put() to release the reference again.
+@@ -425,7 +433,8 @@ intel_wakeref_t intel_runtime_pm_get(struct intel_runtime_pm *rpm)
+  * Returns: the wakeref cookie to pass to intel_runtime_pm_put(), evaluates
+  * as True if the wakeref was acquired, or False otherwise.
+  */
+-intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm)
++static intel_wakeref_t __intel_runtime_pm_get_if_active(struct intel_runtime_pm *rpm,
++							bool ignore_usecount)
+ {
+ 	if (IS_ENABLED(CONFIG_PM)) {
+ 		/*
+@@ -434,7 +443,7 @@ intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm)
+ 		 * function, since the power state is undefined. This applies
+ 		 * atm to the late/early system suspend/resume handlers.
+ 		 */
+-		if (pm_runtime_get_if_in_use(rpm->kdev) <= 0)
++		if (pm_runtime_get_if_active(rpm->kdev, ignore_usecount) <= 0)
+ 			return 0;
+ 	}
+ 
+@@ -443,6 +452,16 @@ intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm)
+ 	return track_intel_runtime_pm_wakeref(rpm);
+ }
+ 
++intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm)
++{
++	return __intel_runtime_pm_get_if_active(rpm, false);
++}
++
++intel_wakeref_t intel_runtime_pm_get_if_active(struct intel_runtime_pm *rpm)
++{
++	return __intel_runtime_pm_get_if_active(rpm, true);
++}
++
+ /**
+  * intel_runtime_pm_get_noresume - grab a runtime pm reference
+  * @rpm: the intel_runtime_pm structure
+diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.h b/drivers/gpu/drm/i915/intel_runtime_pm.h
+index ae64ff14c6425..1e4ddd11c12bb 100644
+--- a/drivers/gpu/drm/i915/intel_runtime_pm.h
++++ b/drivers/gpu/drm/i915/intel_runtime_pm.h
+@@ -177,6 +177,7 @@ void intel_runtime_pm_driver_release(struct intel_runtime_pm *rpm);
+ 
+ intel_wakeref_t intel_runtime_pm_get(struct intel_runtime_pm *rpm);
+ intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm);
++intel_wakeref_t intel_runtime_pm_get_if_active(struct intel_runtime_pm *rpm);
+ intel_wakeref_t intel_runtime_pm_get_noresume(struct intel_runtime_pm *rpm);
+ intel_wakeref_t intel_runtime_pm_get_raw(struct intel_runtime_pm *rpm);
+ 
+@@ -188,6 +189,10 @@ intel_wakeref_t intel_runtime_pm_get_raw(struct intel_runtime_pm *rpm);
+ 	for ((wf) = intel_runtime_pm_get_if_in_use(rpm); (wf); \
+ 	     intel_runtime_pm_put((rpm), (wf)), (wf) = 0)
+ 
++#define with_intel_runtime_pm_if_active(rpm, wf) \
++	for ((wf) = intel_runtime_pm_get_if_active(rpm); (wf); \
++	     intel_runtime_pm_put((rpm), (wf)), (wf) = 0)
++
+ void intel_runtime_pm_put_unchecked(struct intel_runtime_pm *rpm);
+ #if IS_ENABLED(CONFIG_DRM_I915_DEBUG_RUNTIME_PM)
+ void intel_runtime_pm_put(struct intel_runtime_pm *rpm, intel_wakeref_t wref);
+diff --git a/drivers/gpu/drm/msm/dsi/pll/dsi_pll.c b/drivers/gpu/drm/msm/dsi/pll/dsi_pll.c
+index a45fe95aff494..3dc65877fa10d 100644
+--- a/drivers/gpu/drm/msm/dsi/pll/dsi_pll.c
++++ b/drivers/gpu/drm/msm/dsi/pll/dsi_pll.c
+@@ -163,7 +163,7 @@ struct msm_dsi_pll *msm_dsi_pll_init(struct platform_device *pdev,
+ 		break;
+ 	case MSM_DSI_PHY_7NM:
+ 	case MSM_DSI_PHY_7NM_V4_1:
+-		pll = msm_dsi_pll_7nm_init(pdev, id);
++		pll = msm_dsi_pll_7nm_init(pdev, type, id);
+ 		break;
+ 	default:
+ 		pll = ERR_PTR(-ENXIO);
+diff --git a/drivers/gpu/drm/msm/dsi/pll/dsi_pll.h b/drivers/gpu/drm/msm/dsi/pll/dsi_pll.h
+index 3405982a092c4..bbecb1de5678e 100644
+--- a/drivers/gpu/drm/msm/dsi/pll/dsi_pll.h
++++ b/drivers/gpu/drm/msm/dsi/pll/dsi_pll.h
+@@ -117,10 +117,12 @@ msm_dsi_pll_10nm_init(struct platform_device *pdev, int id)
+ }
+ #endif
+ #ifdef CONFIG_DRM_MSM_DSI_7NM_PHY
+-struct msm_dsi_pll *msm_dsi_pll_7nm_init(struct platform_device *pdev, int id);
++struct msm_dsi_pll *msm_dsi_pll_7nm_init(struct platform_device *pdev,
++					enum msm_dsi_phy_type type, int id);
+ #else
+ static inline struct msm_dsi_pll *
+-msm_dsi_pll_7nm_init(struct platform_device *pdev, int id)
++msm_dsi_pll_7nm_init(struct platform_device *pdev,
++					enum msm_dsi_phy_type type, int id)
+ {
+ 	return ERR_PTR(-ENODEV);
+ }
+diff --git a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
+index 93bf142e4a4e6..c1f6708367ae9 100644
+--- a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
++++ b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
+@@ -852,7 +852,8 @@ err_base_clk_hw:
+ 	return ret;
+ }
+ 
+-struct msm_dsi_pll *msm_dsi_pll_7nm_init(struct platform_device *pdev, int id)
++struct msm_dsi_pll *msm_dsi_pll_7nm_init(struct platform_device *pdev,
++					enum msm_dsi_phy_type type, int id)
+ {
+ 	struct dsi_pll_7nm *pll_7nm;
+ 	struct msm_dsi_pll *pll;
+@@ -885,7 +886,7 @@ struct msm_dsi_pll *msm_dsi_pll_7nm_init(struct platform_device *pdev, int id)
+ 	pll = &pll_7nm->base;
+ 	pll->min_rate = 1000000000UL;
+ 	pll->max_rate = 3500000000UL;
+-	if (pll->type == MSM_DSI_PHY_7NM_V4_1) {
++	if (type == MSM_DSI_PHY_7NM_V4_1) {
+ 		pll->min_rate = 600000000UL;
+ 		pll->max_rate = (unsigned long)5000000000ULL;
+ 		/* workaround for max rate overflowing on 32-bit builds: */
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 3d0adfa6736a5..b38ebccad42ff 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -1079,6 +1079,10 @@ static int __maybe_unused msm_pm_resume(struct device *dev)
+ static int __maybe_unused msm_pm_prepare(struct device *dev)
+ {
+ 	struct drm_device *ddev = dev_get_drvdata(dev);
++	struct msm_drm_private *priv = ddev ? ddev->dev_private : NULL;
++
++	if (!priv || !priv->kms)
++		return 0;
+ 
+ 	return drm_mode_config_helper_suspend(ddev);
+ }
+@@ -1086,6 +1090,10 @@ static int __maybe_unused msm_pm_prepare(struct device *dev)
+ static void __maybe_unused msm_pm_complete(struct device *dev)
+ {
+ 	struct drm_device *ddev = dev_get_drvdata(dev);
++	struct msm_drm_private *priv = ddev ? ddev->dev_private : NULL;
++
++	if (!priv || !priv->kms)
++		return;
+ 
+ 	drm_mode_config_helper_resume(ddev);
+ }
+@@ -1318,6 +1326,10 @@ static int msm_pdev_remove(struct platform_device *pdev)
+ static void msm_pdev_shutdown(struct platform_device *pdev)
+ {
+ 	struct drm_device *drm = platform_get_drvdata(pdev);
++	struct msm_drm_private *priv = drm ? drm->dev_private : NULL;
++
++	if (!priv || !priv->kms)
++		return;
+ 
+ 	drm_atomic_helper_shutdown(drm);
+ }
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index 8769e7aa097f4..81903749d2415 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -3610,13 +3610,13 @@ int c4iw_destroy_listen(struct iw_cm_id *cm_id)
+ 	    ep->com.local_addr.ss_family == AF_INET) {
+ 		err = cxgb4_remove_server_filter(
+ 			ep->com.dev->rdev.lldi.ports[0], ep->stid,
+-			ep->com.dev->rdev.lldi.rxq_ids[0], 0);
++			ep->com.dev->rdev.lldi.rxq_ids[0], false);
+ 	} else {
+ 		struct sockaddr_in6 *sin6;
+ 		c4iw_init_wr_wait(ep->com.wr_waitp);
+ 		err = cxgb4_remove_server(
+ 				ep->com.dev->rdev.lldi.ports[0], ep->stid,
+-				ep->com.dev->rdev.lldi.rxq_ids[0], 0);
++				ep->com.dev->rdev.lldi.rxq_ids[0], true);
+ 		if (err)
+ 			goto done;
+ 		err = c4iw_wait_for_reply(&ep->com.dev->rdev, ep->com.wr_waitp,
+diff --git a/drivers/irqchip/irq-ingenic-tcu.c b/drivers/irqchip/irq-ingenic-tcu.c
+index 7a7222d4c19c0..b938d1d04d96e 100644
+--- a/drivers/irqchip/irq-ingenic-tcu.c
++++ b/drivers/irqchip/irq-ingenic-tcu.c
+@@ -179,5 +179,6 @@ err_free_tcu:
+ }
+ IRQCHIP_DECLARE(jz4740_tcu_irq, "ingenic,jz4740-tcu", ingenic_tcu_irq_init);
+ IRQCHIP_DECLARE(jz4725b_tcu_irq, "ingenic,jz4725b-tcu", ingenic_tcu_irq_init);
++IRQCHIP_DECLARE(jz4760_tcu_irq, "ingenic,jz4760-tcu", ingenic_tcu_irq_init);
+ IRQCHIP_DECLARE(jz4770_tcu_irq, "ingenic,jz4770-tcu", ingenic_tcu_irq_init);
+ IRQCHIP_DECLARE(x1000_tcu_irq, "ingenic,x1000-tcu", ingenic_tcu_irq_init);
+diff --git a/drivers/irqchip/irq-ingenic.c b/drivers/irqchip/irq-ingenic.c
+index b61a8901ef722..ea36bb00be80b 100644
+--- a/drivers/irqchip/irq-ingenic.c
++++ b/drivers/irqchip/irq-ingenic.c
+@@ -155,6 +155,7 @@ static int __init intc_2chip_of_init(struct device_node *node,
+ {
+ 	return ingenic_intc_of_init(node, 2);
+ }
++IRQCHIP_DECLARE(jz4760_intc, "ingenic,jz4760-intc", intc_2chip_of_init);
+ IRQCHIP_DECLARE(jz4770_intc, "ingenic,jz4770-intc", intc_2chip_of_init);
+ IRQCHIP_DECLARE(jz4775_intc, "ingenic,jz4775-intc", intc_2chip_of_init);
+ IRQCHIP_DECLARE(jz4780_intc, "ingenic,jz4780-intc", intc_2chip_of_init);
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index 5e306bba43751..1ca65b434f1fa 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -529,7 +529,7 @@ static int list_devices(struct file *filp, struct dm_ioctl *param, size_t param_
+ 	 * Grab our output buffer.
+ 	 */
+ 	nl = orig_nl = get_result_buffer(param, param_size, &len);
+-	if (len < needed) {
++	if (len < needed || len < sizeof(nl->dev)) {
+ 		param->flags |= DM_BUFFER_FULL_FLAG;
+ 		goto out;
+ 	}
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 9b824c21580a4..5c590895c14c3 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -1387,6 +1387,13 @@ static int device_not_zoned_model(struct dm_target *ti, struct dm_dev *dev,
+ 	return !q || blk_queue_zoned_model(q) != *zoned_model;
+ }
+ 
++/*
++ * Check the device zoned model based on the target feature flag. If the target
++ * has the DM_TARGET_ZONED_HM feature flag set, host-managed zoned devices are
++ * also accepted but all devices must have the same zoned model. If the target
++ * has the DM_TARGET_MIXED_ZONED_MODEL feature set, the devices can have any
++ * zoned model with all zoned devices having the same zone size.
++ */
+ static bool dm_table_supports_zoned_model(struct dm_table *t,
+ 					  enum blk_zoned_model zoned_model)
+ {
+@@ -1396,13 +1403,15 @@ static bool dm_table_supports_zoned_model(struct dm_table *t,
+ 	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+ 		ti = dm_table_get_target(t, i);
+ 
+-		if (zoned_model == BLK_ZONED_HM &&
+-		    !dm_target_supports_zoned_hm(ti->type))
+-			return false;
+-
+-		if (!ti->type->iterate_devices ||
+-		    ti->type->iterate_devices(ti, device_not_zoned_model, &zoned_model))
+-			return false;
++		if (dm_target_supports_zoned_hm(ti->type)) {
++			if (!ti->type->iterate_devices ||
++			    ti->type->iterate_devices(ti, device_not_zoned_model,
++						      &zoned_model))
++				return false;
++		} else if (!dm_target_supports_mixed_zoned_model(ti->type)) {
++			if (zoned_model == BLK_ZONED_HM)
++				return false;
++		}
+ 	}
+ 
+ 	return true;
+@@ -1414,9 +1423,17 @@ static int device_not_matches_zone_sectors(struct dm_target *ti, struct dm_dev *
+ 	struct request_queue *q = bdev_get_queue(dev->bdev);
+ 	unsigned int *zone_sectors = data;
+ 
++	if (!blk_queue_is_zoned(q))
++		return 0;
++
+ 	return !q || blk_queue_zone_sectors(q) != *zone_sectors;
+ }
+ 
++/*
++ * Check consistency of zoned model and zone sectors across all targets. For
++ * zone sectors, if the destination device is a zoned block device, it shall
++ * have the specified zone_sectors.
++ */
+ static int validate_hardware_zoned_model(struct dm_table *table,
+ 					 enum blk_zoned_model zoned_model,
+ 					 unsigned int zone_sectors)
+@@ -1435,7 +1452,7 @@ static int validate_hardware_zoned_model(struct dm_table *table,
+ 		return -EINVAL;
+ 
+ 	if (dm_table_any_dev_attr(table, device_not_matches_zone_sectors, &zone_sectors)) {
+-		DMERR("%s: zone sectors is not consistent across all devices",
++		DMERR("%s: zone sectors is not consistent across all zoned devices",
+ 		      dm_device_name(table->md));
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index 6b8e5bdd8526d..808a98ef624c3 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -34,7 +34,7 @@
+ #define DM_VERITY_OPT_IGN_ZEROES	"ignore_zero_blocks"
+ #define DM_VERITY_OPT_AT_MOST_ONCE	"check_at_most_once"
+ 
+-#define DM_VERITY_OPTS_MAX		(2 + DM_VERITY_OPTS_FEC + \
++#define DM_VERITY_OPTS_MAX		(3 + DM_VERITY_OPTS_FEC + \
+ 					 DM_VERITY_ROOT_HASH_VERIFICATION_OPTS)
+ 
+ static unsigned dm_verity_prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
+diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c
+index 697f9de37355e..7e88df64d197b 100644
+--- a/drivers/md/dm-zoned-target.c
++++ b/drivers/md/dm-zoned-target.c
+@@ -1143,7 +1143,7 @@ static int dmz_message(struct dm_target *ti, unsigned int argc, char **argv,
+ static struct target_type dmz_type = {
+ 	.name		 = "zoned",
+ 	.version	 = {2, 0, 0},
+-	.features	 = DM_TARGET_SINGLETON | DM_TARGET_ZONED_HM,
++	.features	 = DM_TARGET_SINGLETON | DM_TARGET_MIXED_ZONED_MODEL,
+ 	.module		 = THIS_MODULE,
+ 	.ctr		 = dmz_ctr,
+ 	.dtr		 = dmz_dtr,
+diff --git a/drivers/misc/habanalabs/common/device.c b/drivers/misc/habanalabs/common/device.c
+index 71b3a4d5adc65..7b0bf470795c4 100644
+--- a/drivers/misc/habanalabs/common/device.c
++++ b/drivers/misc/habanalabs/common/device.c
+@@ -106,6 +106,8 @@ static int hl_device_release_ctrl(struct inode *inode, struct file *filp)
+ 	list_del(&hpriv->dev_node);
+ 	mutex_unlock(&hdev->fpriv_list_lock);
+ 
++	put_pid(hpriv->taskpid);
++
+ 	kfree(hpriv);
+ 
+ 	return 0;
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 6d5a39af10978..47afc5938c26b 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -3918,15 +3918,11 @@ static int bond_neigh_init(struct neighbour *n)
+ 
+ 	rcu_read_lock();
+ 	slave = bond_first_slave_rcu(bond);
+-	if (!slave) {
+-		ret = -EINVAL;
++	if (!slave)
+ 		goto out;
+-	}
+ 	slave_ops = slave->dev->netdev_ops;
+-	if (!slave_ops->ndo_neigh_setup) {
+-		ret = -EINVAL;
++	if (!slave_ops->ndo_neigh_setup)
+ 		goto out;
+-	}
+ 
+ 	/* TODO: find another way [1] to implement this.
+ 	 * Passing a zeroed structure is fragile,
+diff --git a/drivers/net/can/c_can/c_can.c b/drivers/net/can/c_can/c_can.c
+index 1a9e9b9a4bf6c..6c75e5897620d 100644
+--- a/drivers/net/can/c_can/c_can.c
++++ b/drivers/net/can/c_can/c_can.c
+@@ -212,18 +212,6 @@ static const struct can_bittiming_const c_can_bittiming_const = {
+ 	.brp_inc = 1,
+ };
+ 
+-static inline void c_can_pm_runtime_enable(const struct c_can_priv *priv)
+-{
+-	if (priv->device)
+-		pm_runtime_enable(priv->device);
+-}
+-
+-static inline void c_can_pm_runtime_disable(const struct c_can_priv *priv)
+-{
+-	if (priv->device)
+-		pm_runtime_disable(priv->device);
+-}
+-
+ static inline void c_can_pm_runtime_get_sync(const struct c_can_priv *priv)
+ {
+ 	if (priv->device)
+@@ -1335,7 +1323,6 @@ static const struct net_device_ops c_can_netdev_ops = {
+ 
+ int register_c_can_dev(struct net_device *dev)
+ {
+-	struct c_can_priv *priv = netdev_priv(dev);
+ 	int err;
+ 
+ 	/* Deactivate pins to prevent DRA7 DCAN IP from being
+@@ -1345,28 +1332,19 @@ int register_c_can_dev(struct net_device *dev)
+ 	 */
+ 	pinctrl_pm_select_sleep_state(dev->dev.parent);
+ 
+-	c_can_pm_runtime_enable(priv);
+-
+ 	dev->flags |= IFF_ECHO;	/* we support local echo */
+ 	dev->netdev_ops = &c_can_netdev_ops;
+ 
+ 	err = register_candev(dev);
+-	if (err)
+-		c_can_pm_runtime_disable(priv);
+-	else
++	if (!err)
+ 		devm_can_led_init(dev);
+-
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(register_c_can_dev);
+ 
+ void unregister_c_can_dev(struct net_device *dev)
+ {
+-	struct c_can_priv *priv = netdev_priv(dev);
+-
+ 	unregister_candev(dev);
+-
+-	c_can_pm_runtime_disable(priv);
+ }
+ EXPORT_SYMBOL_GPL(unregister_c_can_dev);
+ 
+diff --git a/drivers/net/can/c_can/c_can_pci.c b/drivers/net/can/c_can/c_can_pci.c
+index 406b4847e5dc3..7efb60b508762 100644
+--- a/drivers/net/can/c_can/c_can_pci.c
++++ b/drivers/net/can/c_can/c_can_pci.c
+@@ -239,12 +239,13 @@ static void c_can_pci_remove(struct pci_dev *pdev)
+ {
+ 	struct net_device *dev = pci_get_drvdata(pdev);
+ 	struct c_can_priv *priv = netdev_priv(dev);
++	void __iomem *addr = priv->base;
+ 
+ 	unregister_c_can_dev(dev);
+ 
+ 	free_c_can_dev(dev);
+ 
+-	pci_iounmap(pdev, priv->base);
++	pci_iounmap(pdev, addr);
+ 	pci_disable_msi(pdev);
+ 	pci_clear_master(pdev);
+ 	pci_release_regions(pdev);
+diff --git a/drivers/net/can/c_can/c_can_platform.c b/drivers/net/can/c_can/c_can_platform.c
+index 05f425ceb53a2..47b251b1607ce 100644
+--- a/drivers/net/can/c_can/c_can_platform.c
++++ b/drivers/net/can/c_can/c_can_platform.c
+@@ -29,6 +29,7 @@
+ #include <linux/list.h>
+ #include <linux/io.h>
+ #include <linux/platform_device.h>
++#include <linux/pm_runtime.h>
+ #include <linux/clk.h>
+ #include <linux/of.h>
+ #include <linux/of_device.h>
+@@ -386,6 +387,7 @@ static int c_can_plat_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, dev);
+ 	SET_NETDEV_DEV(dev, &pdev->dev);
+ 
++	pm_runtime_enable(priv->device);
+ 	ret = register_c_can_dev(dev);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "registering %s failed (err=%d)\n",
+@@ -398,6 +400,7 @@ static int c_can_plat_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ exit_free_device:
++	pm_runtime_disable(priv->device);
+ 	free_c_can_dev(dev);
+ exit:
+ 	dev_err(&pdev->dev, "probe failed\n");
+@@ -408,9 +411,10 @@ exit:
+ static int c_can_plat_remove(struct platform_device *pdev)
+ {
+ 	struct net_device *dev = platform_get_drvdata(pdev);
++	struct c_can_priv *priv = netdev_priv(dev);
+ 
+ 	unregister_c_can_dev(dev);
+-
++	pm_runtime_disable(priv->device);
+ 	free_c_can_dev(dev);
+ 
+ 	return 0;
+diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
+index 24cd3c1027ecc..dc9b4aae3abb6 100644
+--- a/drivers/net/can/dev.c
++++ b/drivers/net/can/dev.c
+@@ -1255,6 +1255,7 @@ static void can_dellink(struct net_device *dev, struct list_head *head)
+ 
+ static struct rtnl_link_ops can_link_ops __read_mostly = {
+ 	.kind		= "can",
++	.netns_refund	= true,
+ 	.maxtype	= IFLA_CAN_MAX,
+ 	.policy		= can_policy,
+ 	.setup		= can_setup,
+diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
+index d712c6fdbc87d..7cbaac238ff62 100644
+--- a/drivers/net/can/flexcan.c
++++ b/drivers/net/can/flexcan.c
+@@ -658,9 +658,15 @@ static int flexcan_chip_disable(struct flexcan_priv *priv)
+ static int flexcan_chip_freeze(struct flexcan_priv *priv)
+ {
+ 	struct flexcan_regs __iomem *regs = priv->regs;
+-	unsigned int timeout = 1000 * 1000 * 10 / priv->can.bittiming.bitrate;
++	unsigned int timeout;
++	u32 bitrate = priv->can.bittiming.bitrate;
+ 	u32 reg;
+ 
++	if (bitrate)
++		timeout = 1000 * 1000 * 10 / bitrate;
++	else
++		timeout = FLEXCAN_TIMEOUT_US / 10;
++
+ 	reg = priv->read(&regs->mcr);
+ 	reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT;
+ 	priv->write(reg, &regs->mcr);
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index 43151dd6cb1c3..99323c273aa56 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -57,6 +57,7 @@ MODULE_DESCRIPTION("CAN driver for Kvaser CAN/PCIe devices");
+ #define KVASER_PCIEFD_KCAN_STAT_REG 0x418
+ #define KVASER_PCIEFD_KCAN_MODE_REG 0x41c
+ #define KVASER_PCIEFD_KCAN_BTRN_REG 0x420
++#define KVASER_PCIEFD_KCAN_BUS_LOAD_REG 0x424
+ #define KVASER_PCIEFD_KCAN_BTRD_REG 0x428
+ #define KVASER_PCIEFD_KCAN_PWM_REG 0x430
+ /* Loopback control register */
+@@ -949,6 +950,9 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
+ 		timer_setup(&can->bec_poll_timer, kvaser_pciefd_bec_poll_timer,
+ 			    0);
+ 
++		/* Disable Bus load reporting */
++		iowrite32(0, can->reg_base + KVASER_PCIEFD_KCAN_BUS_LOAD_REG);
++
+ 		tx_npackets = ioread32(can->reg_base +
+ 				       KVASER_PCIEFD_KCAN_TX_NPACKETS_REG);
+ 		if (((tx_npackets >> KVASER_PCIEFD_KCAN_TX_NPACKETS_MAX_SHIFT) &
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 3c1e379751683..6f0bf5db885cd 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -502,9 +502,6 @@ static int m_can_do_rx_poll(struct net_device *dev, int quota)
+ 	}
+ 
+ 	while ((rxfs & RXFS_FFL_MASK) && (quota > 0)) {
+-		if (rxfs & RXFS_RFL)
+-			netdev_warn(dev, "Rx FIFO 0 Message Lost\n");
+-
+ 		m_can_read_fifo(dev, rxfs);
+ 
+ 		quota--;
+@@ -885,7 +882,7 @@ static int m_can_rx_peripheral(struct net_device *dev)
+ {
+ 	struct m_can_classdev *cdev = netdev_priv(dev);
+ 
+-	m_can_rx_handler(dev, 1);
++	m_can_rx_handler(dev, M_CAN_NAPI_WEIGHT);
+ 
+ 	m_can_enable_all_interrupts(cdev);
+ 
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index f504b6858ed29..52100d4fe5a25 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1070,13 +1070,6 @@ static int b53_setup(struct dsa_switch *ds)
+ 			b53_disable_port(ds, port);
+ 	}
+ 
+-	/* Let DSA handle the case were multiple bridges span the same switch
+-	 * device and different VLAN awareness settings are requested, which
+-	 * would be breaking filtering semantics for any of the other bridge
+-	 * devices. (not hardware supported)
+-	 */
+-	ds->vlan_filtering_is_global = true;
+-
+ 	return b53_setup_devlink_resources(ds);
+ }
+ 
+@@ -2627,6 +2620,13 @@ struct b53_device *b53_switch_alloc(struct device *base,
+ 	ds->configure_vlan_while_not_filtering = true;
+ 	ds->untag_bridge_pvid = true;
+ 	dev->vlan_enabled = ds->configure_vlan_while_not_filtering;
++	/* Let DSA handle the case were multiple bridges span the same switch
++	 * device and different VLAN awareness settings are requested, which
++	 * would be breaking filtering semantics for any of the other bridge
++	 * devices. (not hardware supported)
++	 */
++	ds->vlan_filtering_is_global = true;
++
+ 	mutex_init(&dev->reg_mutex);
+ 	mutex_init(&dev->stats_mutex);
+ 
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index edb0a1027b38f..510324916e916 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -584,8 +584,10 @@ static u32 bcm_sf2_sw_get_phy_flags(struct dsa_switch *ds, int port)
+ 	 * in bits 15:8 and the patch level in bits 7:0 which is exactly what
+ 	 * the REG_PHY_REVISION register layout is.
+ 	 */
+-
+-	return priv->hw_params.gphy_rev;
++	if (priv->int_phy_mask & BIT(port))
++		return priv->hw_params.gphy_rev;
++	else
++		return 0;
+ }
+ 
+ static void bcm_sf2_sw_validate(struct dsa_switch *ds, int port,
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
+index 1b7e8c91b5417..423d6d78d15c7 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
+@@ -727,7 +727,7 @@ static int chcr_ktls_cpl_set_tcb_rpl(struct adapter *adap, unsigned char *input)
+ 		kvfree(tx_info);
+ 		return 0;
+ 	}
+-	tx_info->open_state = false;
++	tx_info->open_state = CH_KTLS_OPEN_SUCCESS;
+ 	spin_unlock(&tx_info->lock);
+ 
+ 	complete(&tx_info->completion);
+diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
+index ba7f857d1710d..ae09cac876028 100644
+--- a/drivers/net/ethernet/davicom/dm9000.c
++++ b/drivers/net/ethernet/davicom/dm9000.c
+@@ -1510,7 +1510,7 @@ dm9000_probe(struct platform_device *pdev)
+ 		goto out;
+ 	}
+ 
+-	db->irq_wake = platform_get_irq(pdev, 1);
++	db->irq_wake = platform_get_irq_optional(pdev, 1);
+ 	if (db->irq_wake >= 0) {
+ 		dev_dbg(db->dev, "wakeup irq %d\n", db->irq_wake);
+ 
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index 80fb1f537bb33..c9c380c508791 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -1308,6 +1308,7 @@ static int ftgmac100_poll(struct napi_struct *napi, int budget)
+ 	 */
+ 	if (unlikely(priv->need_mac_restart)) {
+ 		ftgmac100_start_hw(priv);
++		priv->need_mac_restart = false;
+ 
+ 		/* Re-enable "bad" interrupts */
+ 		iowrite32(FTGMAC100_INT_BAD,
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+index 21a6ce415cb22..2b90a345507b8 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+@@ -234,6 +234,8 @@ enum enetc_bdr_type {TX, RX};
+ #define ENETC_PM0_MAXFRM	0x8014
+ #define ENETC_SET_TX_MTU(val)	((val) << 16)
+ #define ENETC_SET_MAXFRM(val)	((val) & 0xffff)
++#define ENETC_PM0_RX_FIFO	0x801c
++#define ENETC_PM0_RX_FIFO_VAL	1
+ 
+ #define ENETC_PM_IMDIO_BASE	0x8030
+ 
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index 83187cd59fddd..68133563a40c1 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -490,6 +490,12 @@ static void enetc_configure_port_mac(struct enetc_hw *hw)
+ 
+ 	enetc_port_wr(hw, ENETC_PM1_CMD_CFG, ENETC_PM0_CMD_PHY_TX_EN |
+ 		      ENETC_PM0_CMD_TXP	| ENETC_PM0_PROMISC);
++
++	/* On LS1028A, the MAC RX FIFO defaults to 2, which is too high
++	 * and may lead to RX lock-up under traffic. Set it to 1 instead,
++	 * as recommended by the hardware team.
++	 */
++	enetc_port_wr(hw, ENETC_PM0_RX_FIFO, ENETC_PM0_RX_FIFO_VAL);
+ }
+ 
+ static void enetc_mac_config(struct enetc_hw *hw, phy_interface_t phy_mode)
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index 2e344aada4c60..1753807cbf97e 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -377,9 +377,16 @@ static int fec_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
+ 	u64 ns;
+ 	unsigned long flags;
+ 
++	mutex_lock(&adapter->ptp_clk_mutex);
++	/* Check the ptp clock */
++	if (!adapter->ptp_clk_on) {
++		mutex_unlock(&adapter->ptp_clk_mutex);
++		return -EINVAL;
++	}
+ 	spin_lock_irqsave(&adapter->tmreg_lock, flags);
+ 	ns = timecounter_read(&adapter->tc);
+ 	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
++	mutex_unlock(&adapter->ptp_clk_mutex);
+ 
+ 	*ts = ns_to_timespec64(ns);
+ 
+diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c
+index d391a45cebb66..4fab2ee5bbf58 100644
+--- a/drivers/net/ethernet/freescale/gianfar.c
++++ b/drivers/net/ethernet/freescale/gianfar.c
+@@ -2391,6 +2391,10 @@ static bool gfar_add_rx_frag(struct gfar_rx_buff *rxb, u32 lstatus,
+ 		if (lstatus & BD_LFLAG(RXBD_LAST))
+ 			size -= skb->len;
+ 
++		WARN(size < 0, "gianfar: rx fragment size underflow");
++		if (size < 0)
++			return false;
++
+ 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ 				rxb->page_offset + RXBUF_ALIGNMENT,
+ 				size, GFAR_RXB_TRUESIZE);
+@@ -2553,6 +2557,17 @@ static int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue,
+ 		if (lstatus & BD_LFLAG(RXBD_EMPTY))
+ 			break;
+ 
++		/* lost RXBD_LAST descriptor due to overrun */
++		if (skb &&
++		    (lstatus & BD_LFLAG(RXBD_FIRST))) {
++			/* discard faulty buffer */
++			dev_kfree_skb(skb);
++			skb = NULL;
++			rx_queue->stats.rx_dropped++;
++
++			/* can continue normally */
++		}
++
+ 		/* order rx buffer descriptor reads */
+ 		rmb();
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index 858cb293152a9..8bce5f1510bec 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -1663,8 +1663,10 @@ static int hns_nic_clear_all_rx_fetch(struct net_device *ndev)
+ 			for (j = 0; j < fetch_num; j++) {
+ 				/* alloc one skb and init */
+ 				skb = hns_assemble_skb(ndev);
+-				if (!skb)
++				if (!skb) {
++					ret = -ENOMEM;
+ 					goto out;
++				}
+ 				rd = &tx_ring_data(priv, skb->queue_mapping);
+ 				hns_nic_net_xmit_hw(ndev, skb, rd);
+ 
+diff --git a/drivers/net/ethernet/intel/e1000e/82571.c b/drivers/net/ethernet/intel/e1000e/82571.c
+index 88faf05e23baf..0b1e890dd583b 100644
+--- a/drivers/net/ethernet/intel/e1000e/82571.c
++++ b/drivers/net/ethernet/intel/e1000e/82571.c
+@@ -899,6 +899,8 @@ static s32 e1000_set_d0_lplu_state_82571(struct e1000_hw *hw, bool active)
+ 	} else {
+ 		data &= ~IGP02E1000_PM_D0_LPLU;
+ 		ret_val = e1e_wphy(hw, IGP02E1000_PHY_POWER_MGMT, data);
++		if (ret_val)
++			return ret_val;
+ 		/* LPLU and SmartSpeed are mutually exclusive.  LPLU is used
+ 		 * during Dx states where the power conservation is most
+ 		 * important.  During driver activity we should enable
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index e9b82c209c2df..a0948002ddf85 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -5974,15 +5974,19 @@ static void e1000_reset_task(struct work_struct *work)
+ 	struct e1000_adapter *adapter;
+ 	adapter = container_of(work, struct e1000_adapter, reset_task);
+ 
++	rtnl_lock();
+ 	/* don't run the task if already down */
+-	if (test_bit(__E1000_DOWN, &adapter->state))
++	if (test_bit(__E1000_DOWN, &adapter->state)) {
++		rtnl_unlock();
+ 		return;
++	}
+ 
+ 	if (!(adapter->flags & FLAG_RESTART_NOW)) {
+ 		e1000e_dump(adapter);
+ 		e_err("Reset adapter unexpectedly\n");
+ 	}
+ 	e1000e_reinit_locked(adapter);
++	rtnl_unlock();
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 0a867d64d4675..dc5b3c06d1e01 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1776,7 +1776,8 @@ static int iavf_init_get_resources(struct iavf_adapter *adapter)
+ 		goto err_alloc;
+ 	}
+ 
+-	if (iavf_process_config(adapter))
++	err = iavf_process_config(adapter);
++	if (err)
+ 		goto err_alloc;
+ 	adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+ 
+diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
+index aaa954aae5744..7bda8c5edea5d 100644
+--- a/drivers/net/ethernet/intel/igb/igb.h
++++ b/drivers/net/ethernet/intel/igb/igb.h
+@@ -748,8 +748,8 @@ void igb_ptp_suspend(struct igb_adapter *adapter);
+ void igb_ptp_rx_hang(struct igb_adapter *adapter);
+ void igb_ptp_tx_hang(struct igb_adapter *adapter);
+ void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector, struct sk_buff *skb);
+-void igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va,
+-			 struct sk_buff *skb);
++int igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va,
++			struct sk_buff *skb);
+ int igb_ptp_set_ts_config(struct net_device *netdev, struct ifreq *ifr);
+ int igb_ptp_get_ts_config(struct net_device *netdev, struct ifreq *ifr);
+ void igb_set_flag_queue_pairs(struct igb_adapter *, const u32);
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 0d343d0509739..fecfcfcf161ca 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -8232,7 +8232,8 @@ static inline bool igb_page_is_reserved(struct page *page)
+ 	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
+ }
+ 
+-static bool igb_can_reuse_rx_page(struct igb_rx_buffer *rx_buffer)
++static bool igb_can_reuse_rx_page(struct igb_rx_buffer *rx_buffer,
++				  int rx_buf_pgcnt)
+ {
+ 	unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
+ 	struct page *page = rx_buffer->page;
+@@ -8243,7 +8244,7 @@ static bool igb_can_reuse_rx_page(struct igb_rx_buffer *rx_buffer)
+ 
+ #if (PAGE_SIZE < 8192)
+ 	/* if we are only owner of page we can reuse it */
+-	if (unlikely((page_ref_count(page) - pagecnt_bias) > 1))
++	if (unlikely((rx_buf_pgcnt - pagecnt_bias) > 1))
+ 		return false;
+ #else
+ #define IGB_LAST_OFFSET \
+@@ -8319,9 +8320,10 @@ static struct sk_buff *igb_construct_skb(struct igb_ring *rx_ring,
+ 		return NULL;
+ 
+ 	if (unlikely(igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP))) {
+-		igb_ptp_rx_pktstamp(rx_ring->q_vector, xdp->data, skb);
+-		xdp->data += IGB_TS_HDR_LEN;
+-		size -= IGB_TS_HDR_LEN;
++		if (!igb_ptp_rx_pktstamp(rx_ring->q_vector, xdp->data, skb)) {
++			xdp->data += IGB_TS_HDR_LEN;
++			size -= IGB_TS_HDR_LEN;
++		}
+ 	}
+ 
+ 	/* Determine available headroom for copy */
+@@ -8382,8 +8384,8 @@ static struct sk_buff *igb_build_skb(struct igb_ring *rx_ring,
+ 
+ 	/* pull timestamp out of packet data */
+ 	if (igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP)) {
+-		igb_ptp_rx_pktstamp(rx_ring->q_vector, skb->data, skb);
+-		__skb_pull(skb, IGB_TS_HDR_LEN);
++		if (!igb_ptp_rx_pktstamp(rx_ring->q_vector, skb->data, skb))
++			__skb_pull(skb, IGB_TS_HDR_LEN);
+ 	}
+ 
+ 	/* update buffer offset */
+@@ -8632,11 +8634,17 @@ static unsigned int igb_rx_offset(struct igb_ring *rx_ring)
+ }
+ 
+ static struct igb_rx_buffer *igb_get_rx_buffer(struct igb_ring *rx_ring,
+-					       const unsigned int size)
++					       const unsigned int size, int *rx_buf_pgcnt)
+ {
+ 	struct igb_rx_buffer *rx_buffer;
+ 
+ 	rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
++	*rx_buf_pgcnt =
++#if (PAGE_SIZE < 8192)
++		page_count(rx_buffer->page);
++#else
++		0;
++#endif
+ 	prefetchw(rx_buffer->page);
+ 
+ 	/* we are reusing so sync this buffer for CPU use */
+@@ -8652,9 +8660,9 @@ static struct igb_rx_buffer *igb_get_rx_buffer(struct igb_ring *rx_ring,
+ }
+ 
+ static void igb_put_rx_buffer(struct igb_ring *rx_ring,
+-			      struct igb_rx_buffer *rx_buffer)
++			      struct igb_rx_buffer *rx_buffer, int rx_buf_pgcnt)
+ {
+-	if (igb_can_reuse_rx_page(rx_buffer)) {
++	if (igb_can_reuse_rx_page(rx_buffer, rx_buf_pgcnt)) {
+ 		/* hand second half of page back to the ring */
+ 		igb_reuse_rx_page(rx_ring, rx_buffer);
+ 	} else {
+@@ -8681,6 +8689,7 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget)
+ 	u16 cleaned_count = igb_desc_unused(rx_ring);
+ 	unsigned int xdp_xmit = 0;
+ 	struct xdp_buff xdp;
++	int rx_buf_pgcnt;
+ 
+ 	xdp.rxq = &rx_ring->xdp_rxq;
+ 
+@@ -8711,7 +8720,7 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget)
+ 		 */
+ 		dma_rmb();
+ 
+-		rx_buffer = igb_get_rx_buffer(rx_ring, size);
++		rx_buffer = igb_get_rx_buffer(rx_ring, size, &rx_buf_pgcnt);
+ 
+ 		/* retrieve a buffer from the ring */
+ 		if (!skb) {
+@@ -8754,7 +8763,7 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget)
+ 			break;
+ 		}
+ 
+-		igb_put_rx_buffer(rx_ring, rx_buffer);
++		igb_put_rx_buffer(rx_ring, rx_buffer, rx_buf_pgcnt);
+ 		cleaned_count++;
+ 
+ 		/* fetch next buffer in frame if non-eop */
+diff --git a/drivers/net/ethernet/intel/igb/igb_ptp.c b/drivers/net/ethernet/intel/igb/igb_ptp.c
+index 7cc5428c3b3d2..86a576201f5ff 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ptp.c
++++ b/drivers/net/ethernet/intel/igb/igb_ptp.c
+@@ -856,6 +856,9 @@ static void igb_ptp_tx_hwtstamp(struct igb_adapter *adapter)
+ 	dev_kfree_skb_any(skb);
+ }
+ 
++#define IGB_RET_PTP_DISABLED 1
++#define IGB_RET_PTP_INVALID 2
++
+ /**
+  * igb_ptp_rx_pktstamp - retrieve Rx per packet timestamp
+  * @q_vector: Pointer to interrupt specific structure
+@@ -864,19 +867,29 @@ static void igb_ptp_tx_hwtstamp(struct igb_adapter *adapter)
+  *
+  * This function is meant to retrieve a timestamp from the first buffer of an
+  * incoming frame.  The value is stored in little endian format starting on
+- * byte 8.
++ * byte 8
++ *
++ * Returns: 0 if success, nonzero if failure
+  **/
+-void igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va,
+-			 struct sk_buff *skb)
++int igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va,
++			struct sk_buff *skb)
+ {
+-	__le64 *regval = (__le64 *)va;
+ 	struct igb_adapter *adapter = q_vector->adapter;
++	__le64 *regval = (__le64 *)va;
+ 	int adjust = 0;
+ 
++	if (!(adapter->ptp_flags & IGB_PTP_ENABLED))
++		return IGB_RET_PTP_DISABLED;
++
+ 	/* The timestamp is recorded in little endian format.
+ 	 * DWORD: 0        1        2        3
+ 	 * Field: Reserved Reserved SYSTIML  SYSTIMH
+ 	 */
++
++	/* check reserved dwords are zero, be/le doesn't matter for zero */
++	if (regval[0])
++		return IGB_RET_PTP_INVALID;
++
+ 	igb_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb),
+ 				   le64_to_cpu(regval[1]));
+ 
+@@ -896,6 +909,8 @@ void igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va,
+ 	}
+ 	skb_hwtstamps(skb)->hwtstamp =
+ 		ktime_sub_ns(skb_hwtstamps(skb)->hwtstamp, adjust);
++
++	return 0;
+ }
+ 
+ /**
+@@ -906,13 +921,15 @@ void igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va,
+  * This function is meant to retrieve a timestamp from the internal registers
+  * of the adapter and store it in the skb.
+  **/
+-void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector,
+-			 struct sk_buff *skb)
++void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector, struct sk_buff *skb)
+ {
+ 	struct igb_adapter *adapter = q_vector->adapter;
+ 	struct e1000_hw *hw = &adapter->hw;
+-	u64 regval;
+ 	int adjust = 0;
++	u64 regval;
++
++	if (!(adapter->ptp_flags & IGB_PTP_ENABLED))
++		return;
+ 
+ 	/* If this bit is set, then the RX registers contain the time stamp. No
+ 	 * other packet will be time stamped until we read these registers, so
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index 35baae900c1fd..6dca67d9c25d8 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -545,7 +545,7 @@ void igc_ptp_init(struct igc_adapter *adapter);
+ void igc_ptp_reset(struct igc_adapter *adapter);
+ void igc_ptp_suspend(struct igc_adapter *adapter);
+ void igc_ptp_stop(struct igc_adapter *adapter);
+-void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, void *va,
++void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, __le32 *va,
+ 			 struct sk_buff *skb);
+ int igc_ptp_set_ts_config(struct net_device *netdev, struct ifreq *ifr);
+ int igc_ptp_get_ts_config(struct net_device *netdev, struct ifreq *ifr);
+diff --git a/drivers/net/ethernet/intel/igc/igc_ethtool.c b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+index ec8cd69d49928..da259cd59adda 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ethtool.c
++++ b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+@@ -1695,6 +1695,9 @@ static int igc_ethtool_get_link_ksettings(struct net_device *netdev,
+ 						     Autoneg);
+ 	}
+ 
++	/* Set pause flow control settings */
++	ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
++
+ 	switch (hw->fc.requested_mode) {
+ 	case igc_fc_full:
+ 		ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
+@@ -1709,9 +1712,7 @@ static int igc_ethtool_get_link_ksettings(struct net_device *netdev,
+ 						     Asym_Pause);
+ 		break;
+ 	default:
+-		ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
+-		ethtool_link_ksettings_add_link_mode(cmd, advertising,
+-						     Asym_Pause);
++		break;
+ 	}
+ 
+ 	status = pm_runtime_suspended(&adapter->pdev->dev) ?
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index b673ac1199bbc..7b822cdcc6c58 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -3846,10 +3846,19 @@ static void igc_reset_task(struct work_struct *work)
+ 
+ 	adapter = container_of(work, struct igc_adapter, reset_task);
+ 
++	rtnl_lock();
++	/* If we're already down or resetting, just bail */
++	if (test_bit(__IGC_DOWN, &adapter->state) ||
++	    test_bit(__IGC_RESETTING, &adapter->state)) {
++		rtnl_unlock();
++		return;
++	}
++
+ 	igc_rings_dump(adapter);
+ 	igc_regs_dump(adapter);
+ 	netdev_err(adapter->netdev, "Reset adapter\n");
+ 	igc_reinit_locked(adapter);
++	rtnl_unlock();
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index ac0b9c85da7ca..545f4d0e67cf4 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -152,46 +152,54 @@ static void igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,
+ }
+ 
+ /**
+- * igc_ptp_rx_pktstamp - retrieve Rx per packet timestamp
++ * igc_ptp_rx_pktstamp - Retrieve timestamp from Rx packet buffer
+  * @q_vector: Pointer to interrupt specific structure
+  * @va: Pointer to address containing Rx buffer
+  * @skb: Buffer containing timestamp and packet
+  *
+- * This function is meant to retrieve the first timestamp from the
+- * first buffer of an incoming frame. The value is stored in little
+- * endian format starting on byte 0. There's a second timestamp
+- * starting on byte 8.
+- **/
+-void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, void *va,
++ * This function retrieves the timestamp saved in the beginning of packet
++ * buffer. While two timestamps are available, one in timer0 reference and the
++ * other in timer1 reference, this function considers only the timestamp in
++ * timer0 reference.
++ */
++void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, __le32 *va,
+ 			 struct sk_buff *skb)
+ {
+ 	struct igc_adapter *adapter = q_vector->adapter;
+-	__le64 *regval = (__le64 *)va;
+-	int adjust = 0;
+-
+-	/* The timestamp is recorded in little endian format.
+-	 * DWORD: | 0          | 1           | 2          | 3
+-	 * Field: | Timer0 Low | Timer0 High | Timer1 Low | Timer1 High
++	u64 regval;
++	int adjust;
++
++	/* Timestamps are saved in little endian at the beginning of the packet
++	 * buffer following the layout:
++	 *
++	 * DWORD: | 0              | 1              | 2              | 3              |
++	 * Field: | Timer1 SYSTIML | Timer1 SYSTIMH | Timer0 SYSTIML | Timer0 SYSTIMH |
++	 *
++	 * SYSTIML holds the nanoseconds part while SYSTIMH holds the seconds
++	 * part of the timestamp.
+ 	 */
+-	igc_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb),
+-				   le64_to_cpu(regval[0]));
+-
+-	/* adjust timestamp for the RX latency based on link speed */
+-	if (adapter->hw.mac.type == igc_i225) {
+-		switch (adapter->link_speed) {
+-		case SPEED_10:
+-			adjust = IGC_I225_RX_LATENCY_10;
+-			break;
+-		case SPEED_100:
+-			adjust = IGC_I225_RX_LATENCY_100;
+-			break;
+-		case SPEED_1000:
+-			adjust = IGC_I225_RX_LATENCY_1000;
+-			break;
+-		case SPEED_2500:
+-			adjust = IGC_I225_RX_LATENCY_2500;
+-			break;
+-		}
++	regval = le32_to_cpu(va[2]);
++	regval |= (u64)le32_to_cpu(va[3]) << 32;
++	igc_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb), regval);
++
++	/* Adjust timestamp for the RX latency based on link speed */
++	switch (adapter->link_speed) {
++	case SPEED_10:
++		adjust = IGC_I225_RX_LATENCY_10;
++		break;
++	case SPEED_100:
++		adjust = IGC_I225_RX_LATENCY_100;
++		break;
++	case SPEED_1000:
++		adjust = IGC_I225_RX_LATENCY_1000;
++		break;
++	case SPEED_2500:
++		adjust = IGC_I225_RX_LATENCY_2500;
++		break;
++	default:
++		adjust = 0;
++		netdev_warn_once(adapter->netdev, "Imprecise timestamp\n");
++		break;
+ 	}
+ 	skb_hwtstamps(skb)->hwtstamp =
+ 		ktime_sub_ns(skb_hwtstamps(skb)->hwtstamp, adjust);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index f3f449f53920f..278fc866fad49 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -9582,8 +9582,10 @@ static int ixgbe_configure_clsu32(struct ixgbe_adapter *adapter,
+ 	ixgbe_atr_compute_perfect_hash_82599(&input->filter, mask);
+ 	err = ixgbe_fdir_write_perfect_filter_82599(hw, &input->filter,
+ 						    input->sw_idx, queue);
+-	if (!err)
+-		ixgbe_update_ethtool_fdir_entry(adapter, input, input->sw_idx);
++	if (err)
++		goto err_out_w_lock;
++
++	ixgbe_update_ethtool_fdir_entry(adapter, input, input->sw_idx);
+ 	spin_unlock(&adapter->fdir_perfect_lock);
+ 
+ 	if ((uhtid != 0x800) && (adapter->jump_tables[uhtid]))
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+index 91a9d00e4fb51..407b9477da248 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+@@ -140,6 +140,15 @@ enum npc_kpu_lh_ltype {
+ 	NPC_LT_LH_CUSTOM1 = 0xF,
+ };
+ 
++/* NPC port kind defines how the incoming or outgoing packets
++ * are processed. NPC accepts packets from up to 64 pkinds.
++ * Software assigns pkind for each incoming port such as CGX
++ * Ethernet interfaces, LBK interfaces, etc.
++ */
++enum npc_pkind_type {
++	NPC_TX_DEF_PKIND = 63ULL,	/* NIX-TX PKIND */
++};
++
+ struct npc_kpu_profile_cam {
+ 	u8 state;
+ 	u8 state_mask;
+@@ -300,6 +309,28 @@ struct nix_rx_action {
+ /* NPC_AF_INTFX_KEX_CFG field masks */
+ #define NPC_PARSE_NIBBLE		GENMASK_ULL(30, 0)
+ 
++/* NPC_PARSE_KEX_S nibble definitions for each field */
++#define NPC_PARSE_NIBBLE_CHAN		GENMASK_ULL(2, 0)
++#define NPC_PARSE_NIBBLE_ERRLEV		BIT_ULL(3)
++#define NPC_PARSE_NIBBLE_ERRCODE	GENMASK_ULL(5, 4)
++#define NPC_PARSE_NIBBLE_L2L3_BCAST	BIT_ULL(6)
++#define NPC_PARSE_NIBBLE_LA_FLAGS	GENMASK_ULL(8, 7)
++#define NPC_PARSE_NIBBLE_LA_LTYPE	BIT_ULL(9)
++#define NPC_PARSE_NIBBLE_LB_FLAGS	GENMASK_ULL(11, 10)
++#define NPC_PARSE_NIBBLE_LB_LTYPE	BIT_ULL(12)
++#define NPC_PARSE_NIBBLE_LC_FLAGS	GENMASK_ULL(14, 13)
++#define NPC_PARSE_NIBBLE_LC_LTYPE	BIT_ULL(15)
++#define NPC_PARSE_NIBBLE_LD_FLAGS	GENMASK_ULL(17, 16)
++#define NPC_PARSE_NIBBLE_LD_LTYPE	BIT_ULL(18)
++#define NPC_PARSE_NIBBLE_LE_FLAGS	GENMASK_ULL(20, 19)
++#define NPC_PARSE_NIBBLE_LE_LTYPE	BIT_ULL(21)
++#define NPC_PARSE_NIBBLE_LF_FLAGS	GENMASK_ULL(23, 22)
++#define NPC_PARSE_NIBBLE_LF_LTYPE	BIT_ULL(24)
++#define NPC_PARSE_NIBBLE_LG_FLAGS	GENMASK_ULL(26, 25)
++#define NPC_PARSE_NIBBLE_LG_LTYPE	BIT_ULL(27)
++#define NPC_PARSE_NIBBLE_LH_FLAGS	GENMASK_ULL(29, 28)
++#define NPC_PARSE_NIBBLE_LH_LTYPE	BIT_ULL(30)
++
+ /* NIX Receive Vtag Action Structure */
+ #define VTAG0_VALID_BIT		BIT_ULL(15)
+ #define VTAG0_TYPE_MASK		GENMASK_ULL(14, 12)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc_profile.h b/drivers/net/ethernet/marvell/octeontx2/af/npc_profile.h
+index 77bb4ed326005..0e4af93be0fb4 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/npc_profile.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/npc_profile.h
+@@ -148,6 +148,20 @@
+ 			(((bytesm1) << 16) | ((hdr_ofs) << 8) | ((ena) << 7) | \
+ 			 ((flags_ena) << 6) | ((key_ofs) & 0x3F))
+ 
++/* Rx parse key extract nibble enable */
++#define NPC_PARSE_NIBBLE_INTF_RX	(NPC_PARSE_NIBBLE_CHAN | \
++					 NPC_PARSE_NIBBLE_LA_LTYPE | \
++					 NPC_PARSE_NIBBLE_LB_LTYPE | \
++					 NPC_PARSE_NIBBLE_LC_LTYPE | \
++					 NPC_PARSE_NIBBLE_LD_LTYPE | \
++					 NPC_PARSE_NIBBLE_LE_LTYPE)
++/* Tx parse key extract nibble enable */
++#define NPC_PARSE_NIBBLE_INTF_TX	(NPC_PARSE_NIBBLE_LA_LTYPE | \
++					 NPC_PARSE_NIBBLE_LB_LTYPE | \
++					 NPC_PARSE_NIBBLE_LC_LTYPE | \
++					 NPC_PARSE_NIBBLE_LD_LTYPE | \
++					 NPC_PARSE_NIBBLE_LE_LTYPE)
++
+ enum npc_kpu_parser_state {
+ 	NPC_S_NA = 0,
+ 	NPC_S_KPU1_ETHER,
+@@ -13385,9 +13399,10 @@ static const struct npc_mcam_kex npc_mkex_default = {
+ 	.name = "default",
+ 	.kpu_version = NPC_KPU_PROFILE_VER,
+ 	.keyx_cfg = {
+-		/* nibble: LA..LE (ltype only) + Channel */
+-		[NIX_INTF_RX] = ((u64)NPC_MCAM_KEY_X2 << 32) | 0x49247,
+-		[NIX_INTF_TX] = ((u64)NPC_MCAM_KEY_X2 << 32) | ((1ULL << 19) - 1),
++		/* nibble: LA..LE (ltype only) + channel */
++		[NIX_INTF_RX] = ((u64)NPC_MCAM_KEY_X2 << 32) | NPC_PARSE_NIBBLE_INTF_RX,
++		/* nibble: LA..LE (ltype only) */
++		[NIX_INTF_TX] = ((u64)NPC_MCAM_KEY_X2 << 32) | NPC_PARSE_NIBBLE_INTF_TX,
+ 	},
+ 	.intf_lid_lt_ld = {
+ 	/* Default RX MCAM KEX profile */
+@@ -13405,12 +13420,14 @@ static const struct npc_mcam_kex npc_mkex_default = {
+ 			/* Layer B: Single VLAN (CTAG) */
+ 			/* CTAG VLAN[2..3] + Ethertype, 4 bytes, KW0[63:32] */
+ 			[NPC_LT_LB_CTAG] = {
+-				KEX_LD_CFG(0x03, 0x0, 0x1, 0x0, 0x4),
++				KEX_LD_CFG(0x03, 0x2, 0x1, 0x0, 0x4),
+ 			},
+ 			/* Layer B: Stacked VLAN (STAG|QinQ) */
+ 			[NPC_LT_LB_STAG_QINQ] = {
+-				/* CTAG VLAN[2..3] + Ethertype, 4 bytes, KW0[63:32] */
+-				KEX_LD_CFG(0x03, 0x4, 0x1, 0x0, 0x4),
++				/* Outer VLAN: 2 bytes, KW0[63:48] */
++				KEX_LD_CFG(0x01, 0x2, 0x1, 0x0, 0x6),
++				/* Ethertype: 2 bytes, KW0[47:32] */
++				KEX_LD_CFG(0x01, 0x8, 0x1, 0x0, 0x4),
+ 			},
+ 			[NPC_LT_LB_FDSA] = {
+ 				/* SWITCH PORT: 1 byte, KW0[63:48] */
+@@ -13436,17 +13453,69 @@ static const struct npc_mcam_kex npc_mkex_default = {
+ 		[NPC_LID_LD] = {
+ 			/* Layer D:UDP */
+ 			[NPC_LT_LD_UDP] = {
+-				/* SPORT: 2 bytes, KW3[15:0] */
+-				KEX_LD_CFG(0x1, 0x0, 0x1, 0x0, 0x18),
+-				/* DPORT: 2 bytes, KW3[31:16] */
+-				KEX_LD_CFG(0x1, 0x2, 0x1, 0x0, 0x1a),
++				/* SPORT+DPORT: 4 bytes, KW3[31:0] */
++				KEX_LD_CFG(0x3, 0x0, 0x1, 0x0, 0x18),
++			},
++			/* Layer D:TCP */
++			[NPC_LT_LD_TCP] = {
++				/* SPORT+DPORT: 4 bytes, KW3[31:0] */
++				KEX_LD_CFG(0x3, 0x0, 0x1, 0x0, 0x18),
++			},
++		},
++	},
++
++	/* Default TX MCAM KEX profile */
++	[NIX_INTF_TX] = {
++		[NPC_LID_LA] = {
++			/* Layer A: NIX_INST_HDR_S + Ethernet */
++			/* NIX appends 8 bytes of NIX_INST_HDR_S at the
++			 * start of each TX packet supplied to NPC.
++			 */
++			[NPC_LT_LA_IH_NIX_ETHER] = {
++				/* PF_FUNC: 2B , KW0 [47:32] */
++				KEX_LD_CFG(0x01, 0x0, 0x1, 0x0, 0x4),
++				/* DMAC: 6 bytes, KW1[63:16] */
++				KEX_LD_CFG(0x05, 0x8, 0x1, 0x0, 0xa),
++			},
++		},
++		[NPC_LID_LB] = {
++			/* Layer B: Single VLAN (CTAG) */
++			[NPC_LT_LB_CTAG] = {
++				/* CTAG VLAN[2..3] KW0[63:48] */
++				KEX_LD_CFG(0x01, 0x2, 0x1, 0x0, 0x6),
++				/* CTAG VLAN[2..3] KW1[15:0] */
++				KEX_LD_CFG(0x01, 0x4, 0x1, 0x0, 0x8),
++			},
++			/* Layer B: Stacked VLAN (STAG|QinQ) */
++			[NPC_LT_LB_STAG_QINQ] = {
++				/* Outer VLAN: 2 bytes, KW0[63:48] */
++				KEX_LD_CFG(0x01, 0x2, 0x1, 0x0, 0x6),
++				/* Outer VLAN: 2 Bytes, KW1[15:0] */
++				KEX_LD_CFG(0x01, 0x8, 0x1, 0x0, 0x8),
++			},
++		},
++		[NPC_LID_LC] = {
++			/* Layer C: IPv4 */
++			[NPC_LT_LC_IP] = {
++				/* SIP+DIP: 8 bytes, KW2[63:0] */
++				KEX_LD_CFG(0x07, 0xc, 0x1, 0x0, 0x10),
++			},
++			/* Layer C: IPv6 */
++			[NPC_LT_LC_IP6] = {
++				/* Everything up to SADDR: 8 bytes, KW2[63:0] */
++				KEX_LD_CFG(0x07, 0x0, 0x1, 0x0, 0x10),
++			},
++		},
++		[NPC_LID_LD] = {
++			/* Layer D:UDP */
++			[NPC_LT_LD_UDP] = {
++				/* SPORT+DPORT: 4 bytes, KW3[31:0] */
++				KEX_LD_CFG(0x3, 0x0, 0x1, 0x0, 0x18),
+ 			},
+ 			/* Layer D:TCP */
+ 			[NPC_LT_LD_TCP] = {
+-				/* SPORT: 2 bytes, KW3[15:0] */
+-				KEX_LD_CFG(0x1, 0x0, 0x1, 0x0, 0x18),
+-				/* DPORT: 2 bytes, KW3[31:16] */
+-				KEX_LD_CFG(0x1, 0x2, 0x1, 0x0, 0x1a),
++				/* SPORT+DPORT: 4 bytes, KW3[31:0] */
++				KEX_LD_CFG(0x3, 0x0, 0x1, 0x0, 0x18),
+ 			},
+ 		},
+ 	},
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index e1f9189607303..644d28b0692b3 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -2151,8 +2151,10 @@ static void rvu_unregister_interrupts(struct rvu *rvu)
+ 		    INTR_MASK(rvu->hw->total_pfs) & ~1ULL);
+ 
+ 	for (irq = 0; irq < rvu->num_vec; irq++) {
+-		if (rvu->irq_allocated[irq])
++		if (rvu->irq_allocated[irq]) {
+ 			free_irq(pci_irq_vector(rvu->pdev, irq), rvu);
++			rvu->irq_allocated[irq] = false;
++		}
+ 	}
+ 
+ 	pci_free_irq_vectors(rvu->pdev);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+index 809f50ab0432e..bc870bff14df7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+@@ -144,12 +144,14 @@ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
+ 					  char __user *buffer,
+ 					  size_t count, loff_t *ppos)
+ {
+-	int index, off = 0, flag = 0, go_back = 0, off_prev;
++	int index, off = 0, flag = 0, go_back = 0, len = 0;
+ 	struct rvu *rvu = filp->private_data;
+ 	int lf, pf, vf, pcifunc;
+ 	struct rvu_block block;
+ 	int bytes_not_copied;
++	int lf_str_size = 12;
+ 	int buf_size = 2048;
++	char *lfs;
+ 	char *buf;
+ 
+ 	/* don't allow partial reads */
+@@ -159,12 +161,20 @@ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
+ 	buf = kzalloc(buf_size, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOSPC;
+-	off +=	scnprintf(&buf[off], buf_size - 1 - off, "\npcifunc\t\t");
++
++	lfs = kzalloc(lf_str_size, GFP_KERNEL);
++	if (!lfs) {
++		kfree(buf);
++		return -ENOMEM;
++	}
++	off +=	scnprintf(&buf[off], buf_size - 1 - off, "%-*s", lf_str_size,
++			  "pcifunc");
+ 	for (index = 0; index < BLK_COUNT; index++)
+-		if (strlen(rvu->hw->block[index].name))
+-			off +=	scnprintf(&buf[off], buf_size - 1 - off,
+-					  "%*s\t", (index - 1) * 2,
+-					  rvu->hw->block[index].name);
++		if (strlen(rvu->hw->block[index].name)) {
++			off += scnprintf(&buf[off], buf_size - 1 - off,
++					 "%-*s", lf_str_size,
++					 rvu->hw->block[index].name);
++		}
+ 	off += scnprintf(&buf[off], buf_size - 1 - off, "\n");
+ 	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
+ 		for (vf = 0; vf <= rvu->hw->total_vfs; vf++) {
+@@ -173,14 +183,15 @@ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
+ 				continue;
+ 
+ 			if (vf) {
++				sprintf(lfs, "PF%d:VF%d", pf, vf - 1);
+ 				go_back = scnprintf(&buf[off],
+ 						    buf_size - 1 - off,
+-						    "PF%d:VF%d\t\t", pf,
+-						    vf - 1);
++						    "%-*s", lf_str_size, lfs);
+ 			} else {
++				sprintf(lfs, "PF%d", pf);
+ 				go_back = scnprintf(&buf[off],
+ 						    buf_size - 1 - off,
+-						    "PF%d\t\t", pf);
++						    "%-*s", lf_str_size, lfs);
+ 			}
+ 
+ 			off += go_back;
+@@ -188,20 +199,22 @@ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
+ 				block = rvu->hw->block[index];
+ 				if (!strlen(block.name))
+ 					continue;
+-				off_prev = off;
++				len = 0;
++				lfs[len] = '\0';
+ 				for (lf = 0; lf < block.lf.max; lf++) {
+ 					if (block.fn_map[lf] != pcifunc)
+ 						continue;
+ 					flag = 1;
+-					off += scnprintf(&buf[off], buf_size - 1
+-							- off, "%3d,", lf);
++					len += sprintf(&lfs[len], "%d,", lf);
+ 				}
+-				if (flag && off_prev != off)
+-					off--;
+-				else
+-					go_back++;
++
++				if (flag)
++					len--;
++				lfs[len] = '\0';
+ 				off += scnprintf(&buf[off], buf_size - 1 - off,
+-						"\t");
++						 "%-*s", lf_str_size, lfs);
++				if (!strlen(lfs))
++					go_back += lf_str_size;
+ 			}
+ 			if (!flag)
+ 				off -= go_back;
+@@ -213,6 +226,7 @@ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
+ 	}
+ 
+ 	bytes_not_copied = copy_to_user(buffer, buf, off);
++	kfree(lfs);
+ 	kfree(buf);
+ 
+ 	if (bytes_not_copied)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index 21a89dd76d3c1..f6a3cf3e6f236 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -1129,6 +1129,10 @@ int rvu_mbox_handler_nix_lf_alloc(struct rvu *rvu,
+ 	/* Config Rx pkt length, csum checks and apad  enable / disable */
+ 	rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_CFG(nixlf), req->rx_cfg);
+ 
++	/* Configure pkind for TX parse config */
++	cfg = NPC_TX_DEF_PKIND;
++	rvu_write64(rvu, blkaddr, NIX_AF_LFX_TX_PARSE_CFG(nixlf), cfg);
++
+ 	intf = is_afvf(pcifunc) ? NIX_INTF_TYPE_LBK : NIX_INTF_TYPE_CGX;
+ 	err = nix_interface_init(rvu, pcifunc, intf, nixlf);
+ 	if (err)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index 511b01dd03edc..169ae491f9786 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -2035,10 +2035,10 @@ int rvu_mbox_handler_npc_mcam_free_counter(struct rvu *rvu,
+ 		index = find_next_bit(mcam->bmap, mcam->bmap_entries, entry);
+ 		if (index >= mcam->bmap_entries)
+ 			break;
++		entry = index + 1;
+ 		if (mcam->entry2cntr_map[index] != req->cntr)
+ 			continue;
+ 
+-		entry = index + 1;
+ 		npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr,
+ 					      index, req->cntr);
+ 	}
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 66f1a212f1f4e..9fef9be015e5e 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -1616,6 +1616,7 @@ int otx2_stop(struct net_device *netdev)
+ 	struct otx2_nic *pf = netdev_priv(netdev);
+ 	struct otx2_cq_poll *cq_poll = NULL;
+ 	struct otx2_qset *qset = &pf->qset;
++	struct otx2_rss_info *rss;
+ 	int qidx, vec, wrk;
+ 
+ 	netif_carrier_off(netdev);
+@@ -1628,6 +1629,10 @@ int otx2_stop(struct net_device *netdev)
+ 	/* First stop packet Rx/Tx */
+ 	otx2_rxtx_enable(pf, false);
+ 
++	/* Clear RSS enable flag */
++	rss = &pf->hw.rss_info;
++	rss->enable = false;
++
+ 	/* Cleanup Queue IRQ */
+ 	vec = pci_irq_vector(pf->pdev,
+ 			     pf->hw.nix_msixoff + NIX_LF_QINT_VEC_START);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 2f05b0f9de019..9da34f82d4668 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -90,14 +90,15 @@ struct page_pool;
+ 				    MLX5_MPWRQ_LOG_WQE_SZ - PAGE_SHIFT : 0)
+ #define MLX5_MPWRQ_PAGES_PER_WQE		BIT(MLX5_MPWRQ_WQE_PAGE_ORDER)
+ 
+-#define MLX5_MTT_OCTW(npages) (ALIGN(npages, 8) / 2)
++#define MLX5_ALIGN_MTTS(mtts)		(ALIGN(mtts, 8))
++#define MLX5_ALIGNED_MTTS_OCTW(mtts)	((mtts) / 2)
++#define MLX5_MTT_OCTW(mtts)		(MLX5_ALIGNED_MTTS_OCTW(MLX5_ALIGN_MTTS(mtts)))
+ /* Add another page to MLX5E_REQUIRED_WQE_MTTS as a buffer between
+  * WQEs, This page will absorb write overflow by the hardware, when
+  * receiving packets larger than MTU. These oversize packets are
+  * dropped by the driver at a later stage.
+  */
+-#define MLX5E_REQUIRED_WQE_MTTS		(ALIGN(MLX5_MPWRQ_PAGES_PER_WQE + 1, 8))
+-#define MLX5E_LOG_ALIGNED_MPWQE_PPW	(ilog2(MLX5E_REQUIRED_WQE_MTTS))
++#define MLX5E_REQUIRED_WQE_MTTS		(MLX5_ALIGN_MTTS(MLX5_MPWRQ_PAGES_PER_WQE + 1))
+ #define MLX5E_REQUIRED_MTTS(wqes)	(wqes * MLX5E_REQUIRED_WQE_MTTS)
+ #define MLX5E_MAX_RQ_NUM_MTTS	\
+ 	((1 << 16) * 2) /* So that MLX5_MTT_OCTW(num_mtts) fits into u16 */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index 24e2c0d955b99..b42396df3111d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -1182,7 +1182,8 @@ int mlx5_tc_ct_add_no_trk_match(struct mlx5_flow_spec *spec)
+ 
+ 	mlx5e_tc_match_to_reg_get_match(spec, CTSTATE_TO_REG,
+ 					&ctstate, &ctstate_mask);
+-	if (ctstate_mask)
++
++	if ((ctstate & ctstate_mask) == MLX5_CT_STATE_TRK_BIT)
+ 		return -EOPNOTSUPP;
+ 
+ 	ctstate_mask |= MLX5_CT_STATE_TRK_BIT;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c
+index e472ed0eacfbc..7ed3f9f79f11a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c
+@@ -227,6 +227,10 @@ static int mlx5e_tc_tun_parse_geneve_options(struct mlx5e_priv *priv,
+ 	option_key = (struct geneve_opt *)&enc_opts.key->data[0];
+ 	option_mask = (struct geneve_opt *)&enc_opts.mask->data[0];
+ 
++	if (option_mask->opt_class == 0 && option_mask->type == 0 &&
++	    !memchr_inv(option_mask->opt_data, 0, option_mask->length * 4))
++		return 0;
++
+ 	if (option_key->length > max_tlv_option_data_len) {
+ 		NL_SET_ERR_MSG_MOD(extack,
+ 				   "Matching on GENEVE options: unsupported option len");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index b8622440243b4..bcd05457647e2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1873,6 +1873,7 @@ static int set_pflag_rx_cqe_compress(struct net_device *netdev,
+ {
+ 	struct mlx5e_priv *priv = netdev_priv(netdev);
+ 	struct mlx5_core_dev *mdev = priv->mdev;
++	int err;
+ 
+ 	if (!MLX5_CAP_GEN(mdev, cqe_compression))
+ 		return -EOPNOTSUPP;
+@@ -1882,7 +1883,10 @@ static int set_pflag_rx_cqe_compress(struct net_device *netdev,
+ 		return -EINVAL;
+ 	}
+ 
+-	mlx5e_modify_rx_cqe_compression_locked(priv, enable);
++	err = mlx5e_modify_rx_cqe_compression_locked(priv, enable);
++	if (err)
++		return err;
++
+ 	priv->channels.params.rx_cqe_compress_def = enable;
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 6394f9d8c6851..e2006c6053c9c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -303,9 +303,9 @@ static int mlx5e_create_rq_umr_mkey(struct mlx5_core_dev *mdev, struct mlx5e_rq
+ 				     rq->wqe_overflow.addr);
+ }
+ 
+-static inline u64 mlx5e_get_mpwqe_offset(struct mlx5e_rq *rq, u16 wqe_ix)
++static u64 mlx5e_get_mpwqe_offset(u16 wqe_ix)
+ {
+-	return (wqe_ix << MLX5E_LOG_ALIGNED_MPWQE_PPW) << PAGE_SHIFT;
++	return MLX5E_REQUIRED_MTTS(wqe_ix) << PAGE_SHIFT;
+ }
+ 
+ static void mlx5e_init_frags_partition(struct mlx5e_rq *rq)
+@@ -544,7 +544,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
+ 				mlx5_wq_ll_get_wqe(&rq->mpwqe.wq, i);
+ 			u32 byte_count =
+ 				rq->mpwqe.num_strides << rq->mpwqe.log_stride_sz;
+-			u64 dma_offset = mlx5e_get_mpwqe_offset(rq, i);
++			u64 dma_offset = mlx5e_get_mpwqe_offset(i);
+ 
+ 			wqe->data[0].addr = cpu_to_be64(dma_offset + rq->buff.headroom);
+ 			wqe->data[0].byte_count = cpu_to_be32(byte_count);
+@@ -3666,10 +3666,17 @@ mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
+ 	}
+ 
+ 	if (mlx5e_is_uplink_rep(priv)) {
++		struct mlx5e_vport_stats *vstats = &priv->stats.vport;
++
+ 		stats->rx_packets = PPORT_802_3_GET(pstats, a_frames_received_ok);
+ 		stats->rx_bytes   = PPORT_802_3_GET(pstats, a_octets_received_ok);
+ 		stats->tx_packets = PPORT_802_3_GET(pstats, a_frames_transmitted_ok);
+ 		stats->tx_bytes   = PPORT_802_3_GET(pstats, a_octets_transmitted_ok);
++
++		/* vport multicast also counts packets that are dropped due to steering
++		 * or rx out of buffer
++		 */
++		stats->multicast = VPORT_COUNTER_GET(vstats, received_eth_multicast.packets);
+ 	} else {
+ 		mlx5e_fold_sw_stats64(priv, stats);
+ 	}
+@@ -4494,8 +4501,10 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
+ 		struct mlx5e_channel *c = priv->channels.c[i];
+ 
+ 		mlx5e_rq_replace_xdp_prog(&c->rq, prog);
+-		if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
++		if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state)) {
++			bpf_prog_inc(prog);
+ 			mlx5e_rq_replace_xdp_prog(&c->xskrq, prog);
++		}
+ 	}
+ 
+ unlock:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 6d2ba8b84187c..7e1f8660dfec0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -506,7 +506,6 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
+ 	struct mlx5e_icosq *sq = &rq->channel->icosq;
+ 	struct mlx5_wq_cyc *wq = &sq->wq;
+ 	struct mlx5e_umr_wqe *umr_wqe;
+-	u16 xlt_offset = ix << (MLX5E_LOG_ALIGNED_MPWQE_PPW - 1);
+ 	u16 pi;
+ 	int err;
+ 	int i;
+@@ -537,7 +536,8 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
+ 	umr_wqe->ctrl.opmod_idx_opcode =
+ 		cpu_to_be32((sq->pc << MLX5_WQE_CTRL_WQE_INDEX_SHIFT) |
+ 			    MLX5_OPCODE_UMR);
+-	umr_wqe->uctrl.xlt_offset = cpu_to_be16(xlt_offset);
++	umr_wqe->uctrl.xlt_offset =
++		cpu_to_be16(MLX5_ALIGNED_MTTS_OCTW(MLX5E_REQUIRED_MTTS(ix)));
+ 
+ 	sq->db.wqe_info[pi] = (struct mlx5e_icosq_wqe_info) {
+ 		.wqe_type   = MLX5E_ICOSQ_WQE_UMR_RX,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 4b8a442f09cd6..930f19c598bb6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -2597,6 +2597,16 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ 			*match_level = MLX5_MATCH_L4;
+ 	}
+ 
++	/* Currenlty supported only for MPLS over UDP */
++	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_MPLS) &&
++	    !netif_is_bareudp(filter_dev)) {
++		NL_SET_ERR_MSG_MOD(extack,
++				   "Matching on MPLS is supported only for MPLS over UDP");
++		netdev_err(priv->netdev,
++			   "Matching on MPLS is supported only for MPLS over UDP\n");
++		return -EOPNOTSUPP;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -3200,6 +3210,37 @@ static int is_action_keys_supported(const struct flow_action_entry *act,
+ 	return 0;
+ }
+ 
++static bool modify_tuple_supported(bool modify_tuple, bool ct_clear,
++				   bool ct_flow, struct netlink_ext_ack *extack,
++				   struct mlx5e_priv *priv,
++				   struct mlx5_flow_spec *spec)
++{
++	if (!modify_tuple || ct_clear)
++		return true;
++
++	if (ct_flow) {
++		NL_SET_ERR_MSG_MOD(extack,
++				   "can't offload tuple modification with non-clear ct()");
++		netdev_info(priv->netdev,
++			    "can't offload tuple modification with non-clear ct()");
++		return false;
++	}
++
++	/* Add ct_state=-trk match so it will be offloaded for non ct flows
++	 * (or after clear action), as otherwise, since the tuple is changed,
++	 * we can't restore ct state
++	 */
++	if (mlx5_tc_ct_add_no_trk_match(spec)) {
++		NL_SET_ERR_MSG_MOD(extack,
++				   "can't offload tuple modification with ct matches and no ct(clear) action");
++		netdev_info(priv->netdev,
++			    "can't offload tuple modification with ct matches and no ct(clear) action");
++		return false;
++	}
++
++	return true;
++}
++
+ static bool modify_header_match_supported(struct mlx5e_priv *priv,
+ 					  struct mlx5_flow_spec *spec,
+ 					  struct flow_action *flow_action,
+@@ -3238,18 +3279,9 @@ static bool modify_header_match_supported(struct mlx5e_priv *priv,
+ 			return err;
+ 	}
+ 
+-	/* Add ct_state=-trk match so it will be offloaded for non ct flows
+-	 * (or after clear action), as otherwise, since the tuple is changed,
+-	 *  we can't restore ct state
+-	 */
+-	if (!ct_clear && modify_tuple &&
+-	    mlx5_tc_ct_add_no_trk_match(spec)) {
+-		NL_SET_ERR_MSG_MOD(extack,
+-				   "can't offload tuple modify header with ct matches");
+-		netdev_info(priv->netdev,
+-			    "can't offload tuple modify header with ct matches");
++	if (!modify_tuple_supported(modify_tuple, ct_clear, ct_flow, extack,
++				    priv, spec))
+ 		return false;
+-	}
+ 
+ 	ip_proto = MLX5_GET(fte_match_set_lyr_2_4, headers_v, ip_protocol);
+ 	if (modify_ip_header && ip_proto != IPPROTO_TCP &&
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/metadata.c b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
+index 5defd31d481c2..aa06fcb38f8b9 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/metadata.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
+@@ -327,8 +327,14 @@ int nfp_compile_flow_metadata(struct nfp_app *app,
+ 		goto err_free_ctx_entry;
+ 	}
+ 
++	/* Do net allocate a mask-id for pre_tun_rules. These flows are used to
++	 * configure the pre_tun table and are never actually send to the
++	 * firmware as an add-flow message. This causes the mask-id allocation
++	 * on the firmware to get out of sync if allocated here.
++	 */
+ 	new_mask_id = 0;
+-	if (!nfp_check_mask_add(app, nfp_flow->mask_data,
++	if (!nfp_flow->pre_tun_rule.dev &&
++	    !nfp_check_mask_add(app, nfp_flow->mask_data,
+ 				nfp_flow->meta.mask_len,
+ 				&nfp_flow->meta.flags, &new_mask_id)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot allocate a new mask id");
+@@ -359,7 +365,8 @@ int nfp_compile_flow_metadata(struct nfp_app *app,
+ 			goto err_remove_mask;
+ 		}
+ 
+-		if (!nfp_check_mask_remove(app, nfp_flow->mask_data,
++		if (!nfp_flow->pre_tun_rule.dev &&
++		    !nfp_check_mask_remove(app, nfp_flow->mask_data,
+ 					   nfp_flow->meta.mask_len,
+ 					   NULL, &new_mask_id)) {
+ 			NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot release mask id");
+@@ -374,8 +381,10 @@ int nfp_compile_flow_metadata(struct nfp_app *app,
+ 	return 0;
+ 
+ err_remove_mask:
+-	nfp_check_mask_remove(app, nfp_flow->mask_data, nfp_flow->meta.mask_len,
+-			      NULL, &new_mask_id);
++	if (!nfp_flow->pre_tun_rule.dev)
++		nfp_check_mask_remove(app, nfp_flow->mask_data,
++				      nfp_flow->meta.mask_len,
++				      NULL, &new_mask_id);
+ err_remove_rhash:
+ 	WARN_ON_ONCE(rhashtable_remove_fast(&priv->stats_ctx_table,
+ 					    &ctx_entry->ht_node,
+@@ -406,9 +415,10 @@ int nfp_modify_flow_metadata(struct nfp_app *app,
+ 
+ 	__nfp_modify_flow_metadata(priv, nfp_flow);
+ 
+-	nfp_check_mask_remove(app, nfp_flow->mask_data,
+-			      nfp_flow->meta.mask_len, &nfp_flow->meta.flags,
+-			      &new_mask_id);
++	if (!nfp_flow->pre_tun_rule.dev)
++		nfp_check_mask_remove(app, nfp_flow->mask_data,
++				      nfp_flow->meta.mask_len, &nfp_flow->meta.flags,
++				      &new_mask_id);
+ 
+ 	/* Update flow payload with mask ids. */
+ 	nfp_flow->unmasked_data[NFP_FL_MASK_ID_LOCATION] = new_mask_id;
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+index 1c59aff2163c7..d72225d64a75d 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+@@ -1142,6 +1142,12 @@ nfp_flower_validate_pre_tun_rule(struct nfp_app *app,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	if (!(key_layer & NFP_FLOWER_LAYER_IPV4) &&
++	    !(key_layer & NFP_FLOWER_LAYER_IPV6)) {
++		NL_SET_ERR_MSG_MOD(extack, "unsupported pre-tunnel rule: match on ipv4/ipv6 eth_type must be present");
++		return -EOPNOTSUPP;
++	}
++
+ 	/* Skip fields known to exist. */
+ 	mask += sizeof(struct nfp_flower_meta_tci);
+ 	ext += sizeof(struct nfp_flower_meta_tci);
+@@ -1152,6 +1158,13 @@ nfp_flower_validate_pre_tun_rule(struct nfp_app *app,
+ 	mask += sizeof(struct nfp_flower_in_port);
+ 	ext += sizeof(struct nfp_flower_in_port);
+ 
++	/* Ensure destination MAC address matches pre_tun_dev. */
++	mac = (struct nfp_flower_mac_mpls *)ext;
++	if (memcmp(&mac->mac_dst[0], flow->pre_tun_rule.dev->dev_addr, 6)) {
++		NL_SET_ERR_MSG_MOD(extack, "unsupported pre-tunnel rule: dest MAC must match output dev MAC");
++		return -EOPNOTSUPP;
++	}
++
+ 	/* Ensure destination MAC address is fully matched. */
+ 	mac = (struct nfp_flower_mac_mpls *)mask;
+ 	if (!is_broadcast_ether_addr(&mac->mac_dst[0])) {
+@@ -1159,6 +1172,11 @@ nfp_flower_validate_pre_tun_rule(struct nfp_app *app,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	if (mac->mpls_lse) {
++		NL_SET_ERR_MSG_MOD(extack, "unsupported pre-tunnel rule: MPLS not supported");
++		return -EOPNOTSUPP;
++	}
++
+ 	mask += sizeof(struct nfp_flower_mac_mpls);
+ 	ext += sizeof(struct nfp_flower_mac_mpls);
+ 	if (key_layer & NFP_FLOWER_LAYER_IPV4 ||
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+index 7248d248f6041..d19c02e991145 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+@@ -16,8 +16,9 @@
+ #define NFP_FL_MAX_ROUTES               32
+ 
+ #define NFP_TUN_PRE_TUN_RULE_LIMIT	32
+-#define NFP_TUN_PRE_TUN_RULE_DEL	0x1
+-#define NFP_TUN_PRE_TUN_IDX_BIT		0x8
++#define NFP_TUN_PRE_TUN_RULE_DEL	BIT(0)
++#define NFP_TUN_PRE_TUN_IDX_BIT		BIT(3)
++#define NFP_TUN_PRE_TUN_IPV6_BIT	BIT(7)
+ 
+ /**
+  * struct nfp_tun_pre_run_rule - rule matched before decap
+@@ -1268,6 +1269,7 @@ int nfp_flower_xmit_pre_tun_flow(struct nfp_app *app,
+ {
+ 	struct nfp_flower_priv *app_priv = app->priv;
+ 	struct nfp_tun_offloaded_mac *mac_entry;
++	struct nfp_flower_meta_tci *key_meta;
+ 	struct nfp_tun_pre_tun_rule payload;
+ 	struct net_device *internal_dev;
+ 	int err;
+@@ -1290,6 +1292,15 @@ int nfp_flower_xmit_pre_tun_flow(struct nfp_app *app,
+ 	if (!mac_entry)
+ 		return -ENOENT;
+ 
++	/* Set/clear IPV6 bit. cpu_to_be16() swap will lead to MSB being
++	 * set/clear for port_idx.
++	 */
++	key_meta = (struct nfp_flower_meta_tci *)flow->unmasked_data;
++	if (key_meta->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV6)
++		mac_entry->index |= NFP_TUN_PRE_TUN_IPV6_BIT;
++	else
++		mac_entry->index &= ~NFP_TUN_PRE_TUN_IPV6_BIT;
++
+ 	payload.port_idx = cpu_to_be16(mac_entry->index);
+ 
+ 	/* Copy mac id and vlan to flow - dev may not exist at delete time. */
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+index a81feffb09b8b..909eca14f647f 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+@@ -1077,15 +1077,17 @@ static int ionic_tx_descs_needed(struct ionic_queue *q, struct sk_buff *skb)
+ {
+ 	int sg_elems = q->lif->qtype_info[IONIC_QTYPE_TXQ].max_sg_elems;
+ 	struct ionic_tx_stats *stats = q_to_tx_stats(q);
++	int ndescs;
+ 	int err;
+ 
+-	/* If TSO, need roundup(skb->len/mss) descs */
++	/* Each desc is mss long max, so a descriptor for each gso_seg */
+ 	if (skb_is_gso(skb))
+-		return (skb->len / skb_shinfo(skb)->gso_size) + 1;
++		ndescs = skb_shinfo(skb)->gso_segs;
++	else
++		ndescs = 1;
+ 
+-	/* If non-TSO, just need 1 desc and nr_frags sg elems */
+ 	if (skb_shinfo(skb)->nr_frags <= sg_elems)
+-		return 1;
++		return ndescs;
+ 
+ 	/* Too many frags, so linearize */
+ 	err = skb_linearize(skb);
+@@ -1094,8 +1096,7 @@ static int ionic_tx_descs_needed(struct ionic_queue *q, struct sk_buff *skb)
+ 
+ 	stats->linearize++;
+ 
+-	/* Need 1 desc and zero sg elems */
+-	return 1;
++	return ndescs;
+ }
+ 
+ static int ionic_maybe_stop_tx(struct ionic_queue *q, int ndescs)
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c
+index 7760a3394e93c..7ecb3dfe30bd2 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c
+@@ -1425,6 +1425,7 @@ void qlcnic_83xx_get_minidump_template(struct qlcnic_adapter *adapter)
+ 
+ 	if (fw_dump->tmpl_hdr == NULL || current_version > prev_version) {
+ 		vfree(fw_dump->tmpl_hdr);
++		fw_dump->tmpl_hdr = NULL;
+ 
+ 		if (qlcnic_83xx_md_check_extended_dump_capability(adapter))
+ 			extended = !qlcnic_83xx_extend_md_capab(adapter);
+@@ -1443,6 +1444,8 @@ void qlcnic_83xx_get_minidump_template(struct qlcnic_adapter *adapter)
+ 			struct qlcnic_83xx_dump_template_hdr *hdr;
+ 
+ 			hdr = fw_dump->tmpl_hdr;
++			if (!hdr)
++				return;
+ 			hdr->drv_cap_mask = 0x1f;
+ 			fw_dump->cap_mask = 0x1f;
+ 			dev_info(&pdev->dev,
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 1591715c97177..d634da20b4f94 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4726,6 +4726,9 @@ static void rtl8169_down(struct rtl8169_private *tp)
+ 
+ 	rtl8169_update_counters(tp);
+ 
++	pci_clear_master(tp->pci_dev);
++	rtl_pci_commit(tp);
++
+ 	rtl8169_cleanup(tp, true);
+ 
+ 	rtl_pll_power_down(tp);
+@@ -4733,6 +4736,7 @@ static void rtl8169_down(struct rtl8169_private *tp)
+ 
+ static void rtl8169_up(struct rtl8169_private *tp)
+ {
++	pci_set_master(tp->pci_dev);
+ 	rtl_pll_power_up(tp);
+ 	rtl8169_init_phy(tp);
+ 	napi_enable(&tp->napi);
+@@ -5394,8 +5398,6 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	rtl_hw_reset(tp);
+ 
+-	pci_set_master(pdev);
+-
+ 	rc = rtl_alloc_irq(tp);
+ 	if (rc < 0) {
+ 		dev_err(&pdev->dev, "Can't allocate interrupt\n");
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index 1503cc9ec6e2d..ef3634d1b9f7f 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -1708,14 +1708,17 @@ static int netsec_netdev_init(struct net_device *ndev)
+ 		goto err1;
+ 
+ 	/* set phy power down */
+-	data = netsec_phy_read(priv->mii_bus, priv->phy_addr, MII_BMCR) |
+-		BMCR_PDOWN;
+-	netsec_phy_write(priv->mii_bus, priv->phy_addr, MII_BMCR, data);
++	data = netsec_phy_read(priv->mii_bus, priv->phy_addr, MII_BMCR);
++	netsec_phy_write(priv->mii_bus, priv->phy_addr, MII_BMCR,
++			 data | BMCR_PDOWN);
+ 
+ 	ret = netsec_reset_hardware(priv, true);
+ 	if (ret)
+ 		goto err2;
+ 
++	/* Restore phy power state */
++	netsec_phy_write(priv->mii_bus, priv->phy_addr, MII_BMCR, data);
++
+ 	spin_lock_init(&priv->desc_ring[NETSEC_RING_TX].lock);
+ 	spin_lock_init(&priv->desc_ring[NETSEC_RING_RX].lock);
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index a5e0eff4a3874..9f5ccf1a0a540 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -1217,6 +1217,8 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
+ 	plat_dat->init = sun8i_dwmac_init;
+ 	plat_dat->exit = sun8i_dwmac_exit;
+ 	plat_dat->setup = sun8i_dwmac_setup;
++	plat_dat->tx_fifo_size = 4096;
++	plat_dat->rx_fifo_size = 16384;
+ 
+ 	ret = sun8i_dwmac_set_syscon(&pdev->dev, plat_dat);
+ 	if (ret)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
+index 2ecd3a8a690c2..cbf4429fb1d23 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
+@@ -402,19 +402,53 @@ static void dwmac4_rd_set_tx_ic(struct dma_desc *p)
+ 	p->des2 |= cpu_to_le32(TDES2_INTERRUPT_ON_COMPLETION);
+ }
+ 
+-static void dwmac4_display_ring(void *head, unsigned int size, bool rx)
++static void dwmac4_display_ring(void *head, unsigned int size, bool rx,
++				dma_addr_t dma_rx_phy, unsigned int desc_size)
+ {
+-	struct dma_desc *p = (struct dma_desc *)head;
++	dma_addr_t dma_addr;
+ 	int i;
+ 
+ 	pr_info("%s descriptor ring:\n", rx ? "RX" : "TX");
+ 
+-	for (i = 0; i < size; i++) {
+-		pr_info("%03d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
+-			i, (unsigned int)virt_to_phys(p),
+-			le32_to_cpu(p->des0), le32_to_cpu(p->des1),
+-			le32_to_cpu(p->des2), le32_to_cpu(p->des3));
+-		p++;
++	if (desc_size == sizeof(struct dma_desc)) {
++		struct dma_desc *p = (struct dma_desc *)head;
++
++		for (i = 0; i < size; i++) {
++			dma_addr = dma_rx_phy + i * sizeof(*p);
++			pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x\n",
++				i, &dma_addr,
++				le32_to_cpu(p->des0), le32_to_cpu(p->des1),
++				le32_to_cpu(p->des2), le32_to_cpu(p->des3));
++			p++;
++		}
++	} else if (desc_size == sizeof(struct dma_extended_desc)) {
++		struct dma_extended_desc *extp = (struct dma_extended_desc *)head;
++
++		for (i = 0; i < size; i++) {
++			dma_addr = dma_rx_phy + i * sizeof(*extp);
++			pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n",
++				i, &dma_addr,
++				le32_to_cpu(extp->basic.des0), le32_to_cpu(extp->basic.des1),
++				le32_to_cpu(extp->basic.des2), le32_to_cpu(extp->basic.des3),
++				le32_to_cpu(extp->des4), le32_to_cpu(extp->des5),
++				le32_to_cpu(extp->des6), le32_to_cpu(extp->des7));
++			extp++;
++		}
++	} else if (desc_size == sizeof(struct dma_edesc)) {
++		struct dma_edesc *ep = (struct dma_edesc *)head;
++
++		for (i = 0; i < size; i++) {
++			dma_addr = dma_rx_phy + i * sizeof(*ep);
++			pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n",
++				i, &dma_addr,
++				le32_to_cpu(ep->des4), le32_to_cpu(ep->des5),
++				le32_to_cpu(ep->des6), le32_to_cpu(ep->des7),
++				le32_to_cpu(ep->basic.des0), le32_to_cpu(ep->basic.des1),
++				le32_to_cpu(ep->basic.des2), le32_to_cpu(ep->basic.des3));
++			ep++;
++		}
++	} else {
++		pr_err("unsupported descriptor!");
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/enh_desc.c b/drivers/net/ethernet/stmicro/stmmac/enh_desc.c
+index d02cec296f51e..6650edfab5bc4 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/enh_desc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/enh_desc.c
+@@ -417,19 +417,22 @@ static int enh_desc_get_rx_timestamp_status(void *desc, void *next_desc,
+ 	}
+ }
+ 
+-static void enh_desc_display_ring(void *head, unsigned int size, bool rx)
++static void enh_desc_display_ring(void *head, unsigned int size, bool rx,
++				  dma_addr_t dma_rx_phy, unsigned int desc_size)
+ {
+ 	struct dma_extended_desc *ep = (struct dma_extended_desc *)head;
++	dma_addr_t dma_addr;
+ 	int i;
+ 
+ 	pr_info("Extended %s descriptor ring:\n", rx ? "RX" : "TX");
+ 
+ 	for (i = 0; i < size; i++) {
+ 		u64 x;
++		dma_addr = dma_rx_phy + i * sizeof(*ep);
+ 
+ 		x = *(u64 *)ep;
+-		pr_info("%03d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
+-			i, (unsigned int)virt_to_phys(ep),
++		pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x\n",
++			i, &dma_addr,
+ 			(unsigned int)x, (unsigned int)(x >> 32),
+ 			ep->basic.des2, ep->basic.des3);
+ 		ep++;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+index afe7ec496545a..b0b84244ef107 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
++++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+@@ -78,7 +78,8 @@ struct stmmac_desc_ops {
+ 	/* get rx timestamp status */
+ 	int (*get_rx_timestamp_status)(void *desc, void *next_desc, u32 ats);
+ 	/* Display ring */
+-	void (*display_ring)(void *head, unsigned int size, bool rx);
++	void (*display_ring)(void *head, unsigned int size, bool rx,
++			     dma_addr_t dma_rx_phy, unsigned int desc_size);
+ 	/* set MSS via context descriptor */
+ 	void (*set_mss)(struct dma_desc *p, unsigned int mss);
+ 	/* get descriptor skbuff address */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/norm_desc.c b/drivers/net/ethernet/stmicro/stmmac/norm_desc.c
+index f083360e4ba67..98ef43f35802a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/norm_desc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/norm_desc.c
+@@ -269,19 +269,22 @@ static int ndesc_get_rx_timestamp_status(void *desc, void *next_desc, u32 ats)
+ 		return 1;
+ }
+ 
+-static void ndesc_display_ring(void *head, unsigned int size, bool rx)
++static void ndesc_display_ring(void *head, unsigned int size, bool rx,
++			       dma_addr_t dma_rx_phy, unsigned int desc_size)
+ {
+ 	struct dma_desc *p = (struct dma_desc *)head;
++	dma_addr_t dma_addr;
+ 	int i;
+ 
+ 	pr_info("%s descriptor ring:\n", rx ? "RX" : "TX");
+ 
+ 	for (i = 0; i < size; i++) {
+ 		u64 x;
++		dma_addr = dma_rx_phy + i * sizeof(*p);
+ 
+ 		x = *(u64 *)p;
+-		pr_info("%03d [0x%x]: 0x%x 0x%x 0x%x 0x%x",
+-			i, (unsigned int)virt_to_phys(p),
++		pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x",
++			i, &dma_addr,
+ 			(unsigned int)x, (unsigned int)(x >> 32),
+ 			p->des2, p->des3);
+ 		p++;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 7d01c5cf60c96..6012eadae4604 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -1109,6 +1109,7 @@ static int stmmac_phy_setup(struct stmmac_priv *priv)
+ static void stmmac_display_rx_rings(struct stmmac_priv *priv)
+ {
+ 	u32 rx_cnt = priv->plat->rx_queues_to_use;
++	unsigned int desc_size;
+ 	void *head_rx;
+ 	u32 queue;
+ 
+@@ -1118,19 +1119,24 @@ static void stmmac_display_rx_rings(struct stmmac_priv *priv)
+ 
+ 		pr_info("\tRX Queue %u rings\n", queue);
+ 
+-		if (priv->extend_desc)
++		if (priv->extend_desc) {
+ 			head_rx = (void *)rx_q->dma_erx;
+-		else
++			desc_size = sizeof(struct dma_extended_desc);
++		} else {
+ 			head_rx = (void *)rx_q->dma_rx;
++			desc_size = sizeof(struct dma_desc);
++		}
+ 
+ 		/* Display RX ring */
+-		stmmac_display_ring(priv, head_rx, priv->dma_rx_size, true);
++		stmmac_display_ring(priv, head_rx, priv->dma_rx_size, true,
++				    rx_q->dma_rx_phy, desc_size);
+ 	}
+ }
+ 
+ static void stmmac_display_tx_rings(struct stmmac_priv *priv)
+ {
+ 	u32 tx_cnt = priv->plat->tx_queues_to_use;
++	unsigned int desc_size;
+ 	void *head_tx;
+ 	u32 queue;
+ 
+@@ -1140,14 +1146,19 @@ static void stmmac_display_tx_rings(struct stmmac_priv *priv)
+ 
+ 		pr_info("\tTX Queue %d rings\n", queue);
+ 
+-		if (priv->extend_desc)
++		if (priv->extend_desc) {
+ 			head_tx = (void *)tx_q->dma_etx;
+-		else if (tx_q->tbs & STMMAC_TBS_AVAIL)
++			desc_size = sizeof(struct dma_extended_desc);
++		} else if (tx_q->tbs & STMMAC_TBS_AVAIL) {
+ 			head_tx = (void *)tx_q->dma_entx;
+-		else
++			desc_size = sizeof(struct dma_edesc);
++		} else {
+ 			head_tx = (void *)tx_q->dma_tx;
++			desc_size = sizeof(struct dma_desc);
++		}
+ 
+-		stmmac_display_ring(priv, head_tx, priv->dma_tx_size, false);
++		stmmac_display_ring(priv, head_tx, priv->dma_tx_size, false,
++				    tx_q->dma_tx_phy, desc_size);
+ 	}
+ }
+ 
+@@ -3710,18 +3721,23 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ 	unsigned int count = 0, error = 0, len = 0;
+ 	int status = 0, coe = priv->hw->rx_csum;
+ 	unsigned int next_entry = rx_q->cur_rx;
++	unsigned int desc_size;
+ 	struct sk_buff *skb = NULL;
+ 
+ 	if (netif_msg_rx_status(priv)) {
+ 		void *rx_head;
+ 
+ 		netdev_dbg(priv->dev, "%s: descriptor ring:\n", __func__);
+-		if (priv->extend_desc)
++		if (priv->extend_desc) {
+ 			rx_head = (void *)rx_q->dma_erx;
+-		else
++			desc_size = sizeof(struct dma_extended_desc);
++		} else {
+ 			rx_head = (void *)rx_q->dma_rx;
++			desc_size = sizeof(struct dma_desc);
++		}
+ 
+-		stmmac_display_ring(priv, rx_head, priv->dma_rx_size, true);
++		stmmac_display_ring(priv, rx_head, priv->dma_rx_size, true,
++				    rx_q->dma_rx_phy, desc_size);
+ 	}
+ 	while (count < limit) {
+ 		unsigned int buf1_len = 0, buf2_len = 0;
+@@ -4289,24 +4305,27 @@ static int stmmac_set_mac_address(struct net_device *ndev, void *addr)
+ static struct dentry *stmmac_fs_dir;
+ 
+ static void sysfs_display_ring(void *head, int size, int extend_desc,
+-			       struct seq_file *seq)
++			       struct seq_file *seq, dma_addr_t dma_phy_addr)
+ {
+ 	int i;
+ 	struct dma_extended_desc *ep = (struct dma_extended_desc *)head;
+ 	struct dma_desc *p = (struct dma_desc *)head;
++	dma_addr_t dma_addr;
+ 
+ 	for (i = 0; i < size; i++) {
+ 		if (extend_desc) {
+-			seq_printf(seq, "%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
+-				   i, (unsigned int)virt_to_phys(ep),
++			dma_addr = dma_phy_addr + i * sizeof(*ep);
++			seq_printf(seq, "%d [%pad]: 0x%x 0x%x 0x%x 0x%x\n",
++				   i, &dma_addr,
+ 				   le32_to_cpu(ep->basic.des0),
+ 				   le32_to_cpu(ep->basic.des1),
+ 				   le32_to_cpu(ep->basic.des2),
+ 				   le32_to_cpu(ep->basic.des3));
+ 			ep++;
+ 		} else {
+-			seq_printf(seq, "%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
+-				   i, (unsigned int)virt_to_phys(p),
++			dma_addr = dma_phy_addr + i * sizeof(*p);
++			seq_printf(seq, "%d [%pad]: 0x%x 0x%x 0x%x 0x%x\n",
++				   i, &dma_addr,
+ 				   le32_to_cpu(p->des0), le32_to_cpu(p->des1),
+ 				   le32_to_cpu(p->des2), le32_to_cpu(p->des3));
+ 			p++;
+@@ -4334,11 +4353,11 @@ static int stmmac_rings_status_show(struct seq_file *seq, void *v)
+ 		if (priv->extend_desc) {
+ 			seq_printf(seq, "Extended descriptor ring:\n");
+ 			sysfs_display_ring((void *)rx_q->dma_erx,
+-					   priv->dma_rx_size, 1, seq);
++					   priv->dma_rx_size, 1, seq, rx_q->dma_rx_phy);
+ 		} else {
+ 			seq_printf(seq, "Descriptor ring:\n");
+ 			sysfs_display_ring((void *)rx_q->dma_rx,
+-					   priv->dma_rx_size, 0, seq);
++					   priv->dma_rx_size, 0, seq, rx_q->dma_rx_phy);
+ 		}
+ 	}
+ 
+@@ -4350,11 +4369,11 @@ static int stmmac_rings_status_show(struct seq_file *seq, void *v)
+ 		if (priv->extend_desc) {
+ 			seq_printf(seq, "Extended descriptor ring:\n");
+ 			sysfs_display_ring((void *)tx_q->dma_etx,
+-					   priv->dma_tx_size, 1, seq);
++					   priv->dma_tx_size, 1, seq, tx_q->dma_tx_phy);
+ 		} else if (!(tx_q->tbs & STMMAC_TBS_AVAIL)) {
+ 			seq_printf(seq, "Descriptor ring:\n");
+ 			sysfs_display_ring((void *)tx_q->dma_tx,
+-					   priv->dma_tx_size, 0, seq);
++					   priv->dma_tx_size, 0, seq, tx_q->dma_tx_phy);
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
+index 68695d4afacd5..707ccdd03b19e 100644
+--- a/drivers/net/ethernet/sun/niu.c
++++ b/drivers/net/ethernet/sun/niu.c
+@@ -3931,8 +3931,6 @@ static void niu_xmac_interrupt(struct niu *np)
+ 		mp->rx_mcasts += RXMAC_MC_FRM_CNT_COUNT;
+ 	if (val & XRXMAC_STATUS_RXBCAST_CNT_EXP)
+ 		mp->rx_bcasts += RXMAC_BC_FRM_CNT_COUNT;
+-	if (val & XRXMAC_STATUS_RXBCAST_CNT_EXP)
+-		mp->rx_bcasts += RXMAC_BC_FRM_CNT_COUNT;
+ 	if (val & XRXMAC_STATUS_RXHIST1_CNT_EXP)
+ 		mp->rx_hist_cnt1 += RXMAC_HIST_CNT1_COUNT;
+ 	if (val & XRXMAC_STATUS_RXHIST2_CNT_EXP)
+diff --git a/drivers/net/ethernet/tehuti/tehuti.c b/drivers/net/ethernet/tehuti/tehuti.c
+index b8f4f419173f9..d054c6e83b1c9 100644
+--- a/drivers/net/ethernet/tehuti/tehuti.c
++++ b/drivers/net/ethernet/tehuti/tehuti.c
+@@ -2044,6 +2044,7 @@ bdx_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		/*bdx_hw_reset(priv); */
+ 		if (bdx_read_mac(priv)) {
+ 			pr_err("load MAC address failed\n");
++			err = -EFAULT;
+ 			goto err_out_iomap;
+ 		}
+ 		SET_NETDEV_DEV(ndev, &pdev->dev);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h b/drivers/net/ethernet/xilinx/xilinx_axienet.h
+index f34c7903ff524..7326ad4d5e1c7 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
+@@ -419,6 +419,9 @@ struct axienet_local {
+ 	struct phylink *phylink;
+ 	struct phylink_config phylink_config;
+ 
++	/* Reference to PCS/PMA PHY if used */
++	struct mdio_device *pcs_phy;
++
+ 	/* Clock for AXI bus */
+ 	struct clk *clk;
+ 
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index eea0bb7c23ede..69c79cc24e6e4 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -1517,10 +1517,27 @@ static void axienet_validate(struct phylink_config *config,
+ 
+ 	phylink_set(mask, Asym_Pause);
+ 	phylink_set(mask, Pause);
+-	phylink_set(mask, 1000baseX_Full);
+-	phylink_set(mask, 10baseT_Full);
+-	phylink_set(mask, 100baseT_Full);
+-	phylink_set(mask, 1000baseT_Full);
++
++	switch (state->interface) {
++	case PHY_INTERFACE_MODE_NA:
++	case PHY_INTERFACE_MODE_1000BASEX:
++	case PHY_INTERFACE_MODE_SGMII:
++	case PHY_INTERFACE_MODE_GMII:
++	case PHY_INTERFACE_MODE_RGMII:
++	case PHY_INTERFACE_MODE_RGMII_ID:
++	case PHY_INTERFACE_MODE_RGMII_RXID:
++	case PHY_INTERFACE_MODE_RGMII_TXID:
++		phylink_set(mask, 1000baseX_Full);
++		phylink_set(mask, 1000baseT_Full);
++		if (state->interface == PHY_INTERFACE_MODE_1000BASEX)
++			break;
++		fallthrough;
++	case PHY_INTERFACE_MODE_MII:
++		phylink_set(mask, 100baseT_Full);
++		phylink_set(mask, 10baseT_Full);
++	default:
++		break;
++	}
+ 
+ 	bitmap_and(supported, supported, mask,
+ 		   __ETHTOOL_LINK_MODE_MASK_NBITS);
+@@ -1533,38 +1550,46 @@ static void axienet_mac_pcs_get_state(struct phylink_config *config,
+ {
+ 	struct net_device *ndev = to_net_dev(config->dev);
+ 	struct axienet_local *lp = netdev_priv(ndev);
+-	u32 emmc_reg, fcc_reg;
+-
+-	state->interface = lp->phy_mode;
+-
+-	emmc_reg = axienet_ior(lp, XAE_EMMC_OFFSET);
+-	if (emmc_reg & XAE_EMMC_LINKSPD_1000)
+-		state->speed = SPEED_1000;
+-	else if (emmc_reg & XAE_EMMC_LINKSPD_100)
+-		state->speed = SPEED_100;
+-	else
+-		state->speed = SPEED_10;
+-
+-	state->pause = 0;
+-	fcc_reg = axienet_ior(lp, XAE_FCC_OFFSET);
+-	if (fcc_reg & XAE_FCC_FCTX_MASK)
+-		state->pause |= MLO_PAUSE_TX;
+-	if (fcc_reg & XAE_FCC_FCRX_MASK)
+-		state->pause |= MLO_PAUSE_RX;
+ 
+-	state->an_complete = 0;
+-	state->duplex = 1;
++	switch (state->interface) {
++	case PHY_INTERFACE_MODE_SGMII:
++	case PHY_INTERFACE_MODE_1000BASEX:
++		phylink_mii_c22_pcs_get_state(lp->pcs_phy, state);
++		break;
++	default:
++		break;
++	}
+ }
+ 
+ static void axienet_mac_an_restart(struct phylink_config *config)
+ {
+-	/* Unsupported, do nothing */
++	struct net_device *ndev = to_net_dev(config->dev);
++	struct axienet_local *lp = netdev_priv(ndev);
++
++	phylink_mii_c22_pcs_an_restart(lp->pcs_phy);
+ }
+ 
+ static void axienet_mac_config(struct phylink_config *config, unsigned int mode,
+ 			       const struct phylink_link_state *state)
+ {
+-	/* nothing meaningful to do */
++	struct net_device *ndev = to_net_dev(config->dev);
++	struct axienet_local *lp = netdev_priv(ndev);
++	int ret;
++
++	switch (state->interface) {
++	case PHY_INTERFACE_MODE_SGMII:
++	case PHY_INTERFACE_MODE_1000BASEX:
++		ret = phylink_mii_c22_pcs_config(lp->pcs_phy, mode,
++						 state->interface,
++						 state->advertising);
++		if (ret < 0)
++			netdev_warn(ndev, "Failed to configure PCS: %d\n",
++				    ret);
++		break;
++
++	default:
++		break;
++	}
+ }
+ 
+ static void axienet_mac_link_down(struct phylink_config *config,
+@@ -1823,7 +1848,7 @@ static int axienet_probe(struct platform_device *pdev)
+ 	if (IS_ERR(lp->regs)) {
+ 		dev_err(&pdev->dev, "could not map Axi Ethernet regs.\n");
+ 		ret = PTR_ERR(lp->regs);
+-		goto free_netdev;
++		goto cleanup_clk;
+ 	}
+ 	lp->regs_start = ethres->start;
+ 
+@@ -1898,12 +1923,12 @@ static int axienet_probe(struct platform_device *pdev)
+ 			break;
+ 		default:
+ 			ret = -EINVAL;
+-			goto free_netdev;
++			goto cleanup_clk;
+ 		}
+ 	} else {
+ 		ret = of_get_phy_mode(pdev->dev.of_node, &lp->phy_mode);
+ 		if (ret)
+-			goto free_netdev;
++			goto cleanup_clk;
+ 	}
+ 
+ 	/* Find the DMA node, map the DMA registers, and decode the DMA IRQs */
+@@ -1916,7 +1941,7 @@ static int axienet_probe(struct platform_device *pdev)
+ 			dev_err(&pdev->dev,
+ 				"unable to get DMA resource\n");
+ 			of_node_put(np);
+-			goto free_netdev;
++			goto cleanup_clk;
+ 		}
+ 		lp->dma_regs = devm_ioremap_resource(&pdev->dev,
+ 						     &dmares);
+@@ -1936,12 +1961,12 @@ static int axienet_probe(struct platform_device *pdev)
+ 	if (IS_ERR(lp->dma_regs)) {
+ 		dev_err(&pdev->dev, "could not map DMA regs\n");
+ 		ret = PTR_ERR(lp->dma_regs);
+-		goto free_netdev;
++		goto cleanup_clk;
+ 	}
+ 	if ((lp->rx_irq <= 0) || (lp->tx_irq <= 0)) {
+ 		dev_err(&pdev->dev, "could not determine irqs\n");
+ 		ret = -ENOMEM;
+-		goto free_netdev;
++		goto cleanup_clk;
+ 	}
+ 
+ 	/* Autodetect the need for 64-bit DMA pointers.
+@@ -1971,7 +1996,7 @@ static int axienet_probe(struct platform_device *pdev)
+ 	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(addr_width));
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "No suitable DMA available\n");
+-		goto free_netdev;
++		goto cleanup_clk;
+ 	}
+ 
+ 	/* Check for Ethernet core IRQ (optional) */
+@@ -1997,6 +2022,20 @@ static int axienet_probe(struct platform_device *pdev)
+ 			dev_warn(&pdev->dev,
+ 				 "error registering MDIO bus: %d\n", ret);
+ 	}
++	if (lp->phy_mode == PHY_INTERFACE_MODE_SGMII ||
++	    lp->phy_mode == PHY_INTERFACE_MODE_1000BASEX) {
++		if (!lp->phy_node) {
++			dev_err(&pdev->dev, "phy-handle required for 1000BaseX/SGMII\n");
++			ret = -EINVAL;
++			goto cleanup_mdio;
++		}
++		lp->pcs_phy = of_mdio_find_device(lp->phy_node);
++		if (!lp->pcs_phy) {
++			ret = -EPROBE_DEFER;
++			goto cleanup_mdio;
++		}
++		lp->phylink_config.pcs_poll = true;
++	}
+ 
+ 	lp->phylink_config.dev = &ndev->dev;
+ 	lp->phylink_config.type = PHYLINK_NETDEV;
+@@ -2007,17 +2046,30 @@ static int axienet_probe(struct platform_device *pdev)
+ 	if (IS_ERR(lp->phylink)) {
+ 		ret = PTR_ERR(lp->phylink);
+ 		dev_err(&pdev->dev, "phylink_create error (%i)\n", ret);
+-		goto free_netdev;
++		goto cleanup_mdio;
+ 	}
+ 
+ 	ret = register_netdev(lp->ndev);
+ 	if (ret) {
+ 		dev_err(lp->dev, "register_netdev() error (%i)\n", ret);
+-		goto free_netdev;
++		goto cleanup_phylink;
+ 	}
+ 
+ 	return 0;
+ 
++cleanup_phylink:
++	phylink_destroy(lp->phylink);
++
++cleanup_mdio:
++	if (lp->pcs_phy)
++		put_device(&lp->pcs_phy->dev);
++	if (lp->mii_bus)
++		axienet_mdio_teardown(lp);
++	of_node_put(lp->phy_node);
++
++cleanup_clk:
++	clk_disable_unprepare(lp->clk);
++
+ free_netdev:
+ 	free_netdev(ndev);
+ 
+@@ -2034,6 +2086,9 @@ static int axienet_remove(struct platform_device *pdev)
+ 	if (lp->phylink)
+ 		phylink_destroy(lp->phylink);
+ 
++	if (lp->pcs_phy)
++		put_device(&lp->pcs_phy->dev);
++
+ 	axienet_mdio_teardown(lp);
+ 
+ 	clk_disable_unprepare(lp->clk);
+diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
+index 5090f0f923ad5..1a87a49538c50 100644
+--- a/drivers/net/ipa/ipa_qmi.c
++++ b/drivers/net/ipa/ipa_qmi.c
+@@ -249,6 +249,7 @@ static struct qmi_msg_handler ipa_server_msg_handlers[] = {
+ 		.decoded_size	= IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ,
+ 		.fn		= ipa_server_driver_init_complete,
+ 	},
++	{ },
+ };
+ 
+ /* Handle an INIT_DRIVER response message from the modem. */
+@@ -269,6 +270,7 @@ static struct qmi_msg_handler ipa_client_msg_handlers[] = {
+ 		.decoded_size	= IPA_QMI_INIT_DRIVER_RSP_SZ,
+ 		.fn		= ipa_client_init_driver,
+ 	},
++	{ },
+ };
+ 
+ /* Return a pointer to an init modem driver request structure, which contains
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index cd271de9609be..dbed15dc0fe77 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -26,7 +26,46 @@ MODULE_DESCRIPTION("Broadcom PHY driver");
+ MODULE_AUTHOR("Maciej W. Rozycki");
+ MODULE_LICENSE("GPL");
+ 
+-static int bcm54xx_config_clock_delay(struct phy_device *phydev);
++static int bcm54xx_config_clock_delay(struct phy_device *phydev)
++{
++	int rc, val;
++
++	/* handling PHY's internal RX clock delay */
++	val = bcm54xx_auxctl_read(phydev, MII_BCM54XX_AUXCTL_SHDWSEL_MISC);
++	val |= MII_BCM54XX_AUXCTL_MISC_WREN;
++	if (phydev->interface == PHY_INTERFACE_MODE_RGMII ||
++	    phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID) {
++		/* Disable RGMII RXC-RXD skew */
++		val &= ~MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_SKEW_EN;
++	}
++	if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
++	    phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID) {
++		/* Enable RGMII RXC-RXD skew */
++		val |= MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_SKEW_EN;
++	}
++	rc = bcm54xx_auxctl_write(phydev, MII_BCM54XX_AUXCTL_SHDWSEL_MISC,
++				  val);
++	if (rc < 0)
++		return rc;
++
++	/* handling PHY's internal TX clock delay */
++	val = bcm_phy_read_shadow(phydev, BCM54810_SHD_CLK_CTL);
++	if (phydev->interface == PHY_INTERFACE_MODE_RGMII ||
++	    phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID) {
++		/* Disable internal TX clock delay */
++		val &= ~BCM54810_SHD_CLK_CTL_GTXCLK_EN;
++	}
++	if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
++	    phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID) {
++		/* Enable internal TX clock delay */
++		val |= BCM54810_SHD_CLK_CTL_GTXCLK_EN;
++	}
++	rc = bcm_phy_write_shadow(phydev, BCM54810_SHD_CLK_CTL, val);
++	if (rc < 0)
++		return rc;
++
++	return 0;
++}
+ 
+ static int bcm54210e_config_init(struct phy_device *phydev)
+ {
+@@ -64,45 +103,62 @@ static int bcm54612e_config_init(struct phy_device *phydev)
+ 	return 0;
+ }
+ 
+-static int bcm54xx_config_clock_delay(struct phy_device *phydev)
++static int bcm54616s_config_init(struct phy_device *phydev)
+ {
+ 	int rc, val;
+ 
+-	/* handling PHY's internal RX clock delay */
++	if (phydev->interface != PHY_INTERFACE_MODE_SGMII &&
++	    phydev->interface != PHY_INTERFACE_MODE_1000BASEX)
++		return 0;
++
++	/* Ensure proper interface mode is selected. */
++	/* Disable RGMII mode */
+ 	val = bcm54xx_auxctl_read(phydev, MII_BCM54XX_AUXCTL_SHDWSEL_MISC);
++	if (val < 0)
++		return val;
++	val &= ~MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_EN;
+ 	val |= MII_BCM54XX_AUXCTL_MISC_WREN;
+-	if (phydev->interface == PHY_INTERFACE_MODE_RGMII ||
+-	    phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID) {
+-		/* Disable RGMII RXC-RXD skew */
+-		val &= ~MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_SKEW_EN;
+-	}
+-	if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+-	    phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID) {
+-		/* Enable RGMII RXC-RXD skew */
+-		val |= MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_SKEW_EN;
+-	}
+ 	rc = bcm54xx_auxctl_write(phydev, MII_BCM54XX_AUXCTL_SHDWSEL_MISC,
+ 				  val);
+ 	if (rc < 0)
+ 		return rc;
+ 
+-	/* handling PHY's internal TX clock delay */
+-	val = bcm_phy_read_shadow(phydev, BCM54810_SHD_CLK_CTL);
+-	if (phydev->interface == PHY_INTERFACE_MODE_RGMII ||
+-	    phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID) {
+-		/* Disable internal TX clock delay */
+-		val &= ~BCM54810_SHD_CLK_CTL_GTXCLK_EN;
+-	}
+-	if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+-	    phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID) {
+-		/* Enable internal TX clock delay */
+-		val |= BCM54810_SHD_CLK_CTL_GTXCLK_EN;
+-	}
+-	rc = bcm_phy_write_shadow(phydev, BCM54810_SHD_CLK_CTL, val);
++	/* Select 1000BASE-X register set (primary SerDes) */
++	val = bcm_phy_read_shadow(phydev, BCM54XX_SHD_MODE);
++	if (val < 0)
++		return val;
++	val |= BCM54XX_SHD_MODE_1000BX;
++	rc = bcm_phy_write_shadow(phydev, BCM54XX_SHD_MODE, val);
+ 	if (rc < 0)
+ 		return rc;
+ 
+-	return 0;
++	/* Power down SerDes interface */
++	rc = phy_set_bits(phydev, MII_BMCR, BMCR_PDOWN);
++	if (rc < 0)
++		return rc;
++
++	/* Select proper interface mode */
++	val &= ~BCM54XX_SHD_INTF_SEL_MASK;
++	val |= phydev->interface == PHY_INTERFACE_MODE_SGMII ?
++		BCM54XX_SHD_INTF_SEL_SGMII :
++		BCM54XX_SHD_INTF_SEL_GBIC;
++	rc = bcm_phy_write_shadow(phydev, BCM54XX_SHD_MODE, val);
++	if (rc < 0)
++		return rc;
++
++	/* Power up SerDes interface */
++	rc = phy_clear_bits(phydev, MII_BMCR, BMCR_PDOWN);
++	if (rc < 0)
++		return rc;
++
++	/* Select copper register set */
++	val &= ~BCM54XX_SHD_MODE_1000BX;
++	rc = bcm_phy_write_shadow(phydev, BCM54XX_SHD_MODE, val);
++	if (rc < 0)
++		return rc;
++
++	/* Power up copper interface */
++	return phy_clear_bits(phydev, MII_BMCR, BMCR_PDOWN);
+ }
+ 
+ /* Needs SMDSP clock enabled via bcm54xx_phydsp_config() */
+@@ -283,15 +339,21 @@ static int bcm54xx_config_init(struct phy_device *phydev)
+ 
+ 	bcm54xx_adjust_rxrefclk(phydev);
+ 
+-	if (BRCM_PHY_MODEL(phydev) == PHY_ID_BCM54210E) {
++	switch (BRCM_PHY_MODEL(phydev)) {
++	case PHY_ID_BCM50610:
++	case PHY_ID_BCM50610M:
++		err = bcm54xx_config_clock_delay(phydev);
++		break;
++	case PHY_ID_BCM54210E:
+ 		err = bcm54210e_config_init(phydev);
+-		if (err)
+-			return err;
+-	} else if (BRCM_PHY_MODEL(phydev) == PHY_ID_BCM54612E) {
++		break;
++	case PHY_ID_BCM54612E:
+ 		err = bcm54612e_config_init(phydev);
+-		if (err)
+-			return err;
+-	} else if (BRCM_PHY_MODEL(phydev) == PHY_ID_BCM54810) {
++		break;
++	case PHY_ID_BCM54616S:
++		err = bcm54616s_config_init(phydev);
++		break;
++	case PHY_ID_BCM54810:
+ 		/* For BCM54810, we need to disable BroadR-Reach function */
+ 		val = bcm_phy_read_exp(phydev,
+ 				       BCM54810_EXP_BROADREACH_LRE_MISC_CTL);
+@@ -299,9 +361,10 @@ static int bcm54xx_config_init(struct phy_device *phydev)
+ 		err = bcm_phy_write_exp(phydev,
+ 					BCM54810_EXP_BROADREACH_LRE_MISC_CTL,
+ 					val);
+-		if (err < 0)
+-			return err;
++		break;
+ 	}
++	if (err)
++		return err;
+ 
+ 	bcm54xx_phydsp_config(phydev);
+ 
+@@ -332,6 +395,11 @@ static int bcm54xx_resume(struct phy_device *phydev)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	/* Upon exiting power down, the PHY remains in an internal reset state
++	 * for 40us
++	 */
++	fsleep(40);
++
+ 	return bcm54xx_config_init(phydev);
+ }
+ 
+@@ -475,7 +543,7 @@ static int bcm5481_config_aneg(struct phy_device *phydev)
+ 
+ static int bcm54616s_probe(struct phy_device *phydev)
+ {
+-	int val, intf_sel;
++	int val;
+ 
+ 	val = bcm_phy_read_shadow(phydev, BCM54XX_SHD_MODE);
+ 	if (val < 0)
+@@ -487,8 +555,7 @@ static int bcm54616s_probe(struct phy_device *phydev)
+ 	 * RGMII-1000Base-X is properly supported, but RGMII-100Base-FX
+ 	 * support is still missing as of now.
+ 	 */
+-	intf_sel = (val & BCM54XX_SHD_INTF_SEL_MASK) >> 1;
+-	if (intf_sel == 1) {
++	if ((val & BCM54XX_SHD_INTF_SEL_MASK) == BCM54XX_SHD_INTF_SEL_RGMII) {
+ 		val = bcm_phy_read_shadow(phydev, BCM54616S_SHD_100FX_CTRL);
+ 		if (val < 0)
+ 			return val;
+@@ -500,6 +567,8 @@ static int bcm54616s_probe(struct phy_device *phydev)
+ 		 */
+ 		if (!(val & BCM54616S_100FX_MODE))
+ 			phydev->dev_flags |= PHY_BCM_FLAGS_MODE_1000BX;
++
++		phydev->port = PORT_FIBRE;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index c162c9551bd11..a9b058bb1be87 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -534,6 +534,9 @@ static int dp83822_probe(struct phy_device *phydev)
+ 
+ 	dp83822_of_init(phydev);
+ 
++	if (dp83822->fx_enabled)
++		phydev->port = PORT_FIBRE;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/phy/dp83869.c b/drivers/net/phy/dp83869.c
+index cf6dec7b7d8e7..a9daff88006b3 100644
+--- a/drivers/net/phy/dp83869.c
++++ b/drivers/net/phy/dp83869.c
+@@ -821,6 +821,10 @@ static int dp83869_probe(struct phy_device *phydev)
+ 	if (ret)
+ 		return ret;
+ 
++	if (dp83869->mode == DP83869_RGMII_100_BASE ||
++	    dp83869->mode == DP83869_RGMII_1000_BASE)
++		phydev->port = PORT_FIBRE;
++
+ 	return dp83869_config_init(phydev);
+ }
+ 
+diff --git a/drivers/net/phy/lxt.c b/drivers/net/phy/lxt.c
+index fec58ad69e02c..cb8e4f0215fe8 100644
+--- a/drivers/net/phy/lxt.c
++++ b/drivers/net/phy/lxt.c
+@@ -218,6 +218,7 @@ static int lxt973_probe(struct phy_device *phydev)
+ 		phy_write(phydev, MII_BMCR, val);
+ 		/* Remember that the port is in fiber mode. */
+ 		phydev->priv = lxt973_probe;
++		phydev->port = PORT_FIBRE;
+ 	} else {
+ 		phydev->priv = NULL;
+ 	}
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index 5aec673a0120b..5dbdaf0f5f09c 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -1449,6 +1449,7 @@ static int marvell_read_status_page(struct phy_device *phydev, int page)
+ 	phydev->asym_pause = 0;
+ 	phydev->speed = SPEED_UNKNOWN;
+ 	phydev->duplex = DUPLEX_UNKNOWN;
++	phydev->port = fiber ? PORT_FIBRE : PORT_TP;
+ 
+ 	if (phydev->autoneg == AUTONEG_ENABLE)
+ 		err = marvell_read_status_page_an(phydev, fiber, status);
+diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c
+index 1901ba277413d..b1bb9b8e1e4ed 100644
+--- a/drivers/net/phy/marvell10g.c
++++ b/drivers/net/phy/marvell10g.c
+@@ -631,6 +631,7 @@ static int mv3310_read_status_10gbaser(struct phy_device *phydev)
+ 	phydev->link = 1;
+ 	phydev->speed = SPEED_10000;
+ 	phydev->duplex = DUPLEX_FULL;
++	phydev->port = PORT_FIBRE;
+ 
+ 	return 0;
+ }
+@@ -690,6 +691,7 @@ static int mv3310_read_status_copper(struct phy_device *phydev)
+ 
+ 	phydev->duplex = cssr1 & MV_PCS_CSSR1_DUPLEX_FULL ?
+ 			 DUPLEX_FULL : DUPLEX_HALF;
++	phydev->port = PORT_TP;
+ 	phydev->mdix = cssr1 & MV_PCS_CSSR1_MDIX ?
+ 		       ETH_TP_MDI_X : ETH_TP_MDI;
+ 
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 47ae1d1723c54..9b0bc8b74bc01 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -308,14 +308,19 @@ static int kszphy_config_init(struct phy_device *phydev)
+ 	return kszphy_config_reset(phydev);
+ }
+ 
++static int ksz8041_fiber_mode(struct phy_device *phydev)
++{
++	struct device_node *of_node = phydev->mdio.dev.of_node;
++
++	return of_property_read_bool(of_node, "micrel,fiber-mode");
++}
++
+ static int ksz8041_config_init(struct phy_device *phydev)
+ {
+ 	__ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };
+ 
+-	struct device_node *of_node = phydev->mdio.dev.of_node;
+-
+ 	/* Limit supported and advertised modes in fiber mode */
+-	if (of_property_read_bool(of_node, "micrel,fiber-mode")) {
++	if (ksz8041_fiber_mode(phydev)) {
+ 		phydev->dev_flags |= MICREL_PHY_FXEN;
+ 		linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, mask);
+ 		linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, mask);
+@@ -1143,6 +1148,9 @@ static int kszphy_probe(struct phy_device *phydev)
+ 		}
+ 	}
+ 
++	if (ksz8041_fiber_mode(phydev))
++		phydev->port = PORT_FIBRE;
++
+ 	/* Support legacy board-file configuration */
+ 	if (phydev->dev_flags & MICREL_PHY_50MHZ_CLK) {
+ 		priv->rmii_ref_clk_sel = true;
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 49e96ca585fff..28ddaad721ed1 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -327,7 +327,7 @@ void phy_ethtool_ksettings_get(struct phy_device *phydev,
+ 	if (phydev->interface == PHY_INTERFACE_MODE_MOCA)
+ 		cmd->base.port = PORT_BNC;
+ 	else
+-		cmd->base.port = PORT_MII;
++		cmd->base.port = phydev->port;
+ 	cmd->base.transceiver = phy_is_internal(phydev) ?
+ 				XCVR_INTERNAL : XCVR_EXTERNAL;
+ 	cmd->base.phy_address = phydev->mdio.addr;
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 2d4eed2d61ce9..85f3cde5ffd09 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -576,6 +576,7 @@ struct phy_device *phy_device_create(struct mii_bus *bus, int addr, u32 phy_id,
+ 	dev->pause = 0;
+ 	dev->asym_pause = 0;
+ 	dev->link = 0;
++	dev->port = PORT_TP;
+ 	dev->interface = PHY_INTERFACE_MODE_GMII;
+ 
+ 	dev->autoneg = AUTONEG_ENABLE;
+@@ -1384,6 +1385,14 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
+ 
+ 	phydev->state = PHY_READY;
+ 
++	/* Port is set to PORT_TP by default and the actual PHY driver will set
++	 * it to different value depending on the PHY configuration. If we have
++	 * the generic PHY driver we can't figure it out, thus set the old
++	 * legacy PORT_MII value.
++	 */
++	if (using_genphy)
++		phydev->port = PORT_MII;
++
+ 	/* Initial carrier state is off as the phy is about to be
+ 	 * (re)initialized.
+ 	 */
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index fe2296fdda19d..6072e87ed6c3c 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -472,7 +472,7 @@ static void phylink_major_config(struct phylink *pl, bool restart,
+ 		err = pl->mac_ops->mac_finish(pl->config, pl->cur_link_an_mode,
+ 					      state->interface);
+ 		if (err < 0)
+-			phylink_err(pl, "mac_prepare failed: %pe\n",
++			phylink_err(pl, "mac_finish failed: %pe\n",
+ 				    ERR_PTR(err));
+ 	}
+ }
+diff --git a/drivers/net/usb/cdc-phonet.c b/drivers/net/usb/cdc-phonet.c
+index dba847f280962..2520421946a6a 100644
+--- a/drivers/net/usb/cdc-phonet.c
++++ b/drivers/net/usb/cdc-phonet.c
+@@ -387,6 +387,8 @@ static int usbpn_probe(struct usb_interface *intf, const struct usb_device_id *i
+ 
+ 	err = register_netdev(dev);
+ 	if (err) {
++		/* Set disconnected flag so that disconnect() returns early. */
++		pnd->disconnected = 1;
+ 		usb_driver_release_interface(&usbpn_driver, data_intf);
+ 		goto out;
+ 	}
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 88f177aca342e..f5010f8ac1ec7 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -3033,29 +3033,6 @@ static void __rtl_set_wol(struct r8152 *tp, u32 wolopts)
+ 		device_set_wakeup_enable(&tp->udev->dev, false);
+ }
+ 
+-static void r8153_mac_clk_spd(struct r8152 *tp, bool enable)
+-{
+-	/* MAC clock speed down */
+-	if (enable) {
+-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL,
+-			       ALDPS_SPDWN_RATIO);
+-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2,
+-			       EEE_SPDWN_RATIO);
+-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3,
+-			       PKT_AVAIL_SPDWN_EN | SUSPEND_SPDWN_EN |
+-			       U1U2_SPDWN_EN | L1_SPDWN_EN);
+-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL4,
+-			       PWRSAVE_SPDWN_EN | RXDV_SPDWN_EN | TX10MIDLE_EN |
+-			       TP100_SPDWN_EN | TP500_SPDWN_EN | EEE_SPDWN_EN |
+-			       TP1000_SPDWN_EN);
+-	} else {
+-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL, 0);
+-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2, 0);
+-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, 0);
+-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL4, 0);
+-	}
+-}
+-
+ static void r8153_u1u2en(struct r8152 *tp, bool enable)
+ {
+ 	u8 u1u2[8];
+@@ -3355,11 +3332,9 @@ static void rtl8153_runtime_enable(struct r8152 *tp, bool enable)
+ 	if (enable) {
+ 		r8153_u1u2en(tp, false);
+ 		r8153_u2p3en(tp, false);
+-		r8153_mac_clk_spd(tp, true);
+ 		rtl_runtime_suspend_enable(tp, true);
+ 	} else {
+ 		rtl_runtime_suspend_enable(tp, false);
+-		r8153_mac_clk_spd(tp, false);
+ 
+ 		switch (tp->version) {
+ 		case RTL_VER_03:
+@@ -4695,7 +4670,6 @@ static void r8153_first_init(struct r8152 *tp)
+ {
+ 	u32 ocp_data;
+ 
+-	r8153_mac_clk_spd(tp, false);
+ 	rxdy_gated_en(tp, true);
+ 	r8153_teredo_off(tp);
+ 
+@@ -4746,8 +4720,6 @@ static void r8153_enter_oob(struct r8152 *tp)
+ {
+ 	u32 ocp_data;
+ 
+-	r8153_mac_clk_spd(tp, true);
+-
+ 	ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL);
+ 	ocp_data &= ~NOW_IS_OOB;
+ 	ocp_write_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL, ocp_data);
+@@ -5473,10 +5445,15 @@ static void r8153_init(struct r8152 *tp)
+ 
+ 	ocp_write_word(tp, MCU_TYPE_USB, USB_CONNECT_TIMER, 0x0001);
+ 
++	/* MAC clock speed down */
++	ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL, 0);
++	ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2, 0);
++	ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, 0);
++	ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL4, 0);
++
+ 	r8153_power_cut_en(tp, false);
+ 	rtl_runtime_suspend_enable(tp, false);
+ 	r8153_u1u2en(tp, true);
+-	r8153_mac_clk_spd(tp, false);
+ 	usb_enable_lpm(tp->udev);
+ 
+ 	ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6);
+@@ -6542,7 +6519,10 @@ static int rtl_ops_init(struct r8152 *tp)
+ 		ops->in_nway		= rtl8153_in_nway;
+ 		ops->hw_phy_cfg		= r8153_hw_phy_cfg;
+ 		ops->autosuspend_en	= rtl8153_runtime_enable;
+-		tp->rx_buf_sz		= 32 * 1024;
++		if (tp->udev->speed < USB_SPEED_SUPER)
++			tp->rx_buf_sz	= 16 * 1024;
++		else
++			tp->rx_buf_sz	= 32 * 1024;
+ 		tp->eee_en		= true;
+ 		tp->eee_adv		= MDIO_EEE_1000T | MDIO_EEE_100TX;
+ 		break;
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index 8c737668008a0..be18b243642f0 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -301,8 +301,7 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	if (rxq < rcv->real_num_rx_queues) {
+ 		rq = &rcv_priv->rq[rxq];
+ 		rcv_xdp = rcu_access_pointer(rq->xdp_prog);
+-		if (rcv_xdp)
+-			skb_record_rx_queue(skb, rxq);
++		skb_record_rx_queue(skb, rxq);
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+diff --git a/drivers/net/wan/fsl_ucc_hdlc.c b/drivers/net/wan/fsl_ucc_hdlc.c
+index dca97cd7c4e75..7eac6a3e1cdee 100644
+--- a/drivers/net/wan/fsl_ucc_hdlc.c
++++ b/drivers/net/wan/fsl_ucc_hdlc.c
+@@ -204,14 +204,18 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
+ 	priv->rx_skbuff = kcalloc(priv->rx_ring_size,
+ 				  sizeof(*priv->rx_skbuff),
+ 				  GFP_KERNEL);
+-	if (!priv->rx_skbuff)
++	if (!priv->rx_skbuff) {
++		ret = -ENOMEM;
+ 		goto free_ucc_pram;
++	}
+ 
+ 	priv->tx_skbuff = kcalloc(priv->tx_ring_size,
+ 				  sizeof(*priv->tx_skbuff),
+ 				  GFP_KERNEL);
+-	if (!priv->tx_skbuff)
++	if (!priv->tx_skbuff) {
++		ret = -ENOMEM;
+ 		goto free_rx_skbuff;
++	}
+ 
+ 	priv->skb_curtx = 0;
+ 	priv->skb_dirtytx = 0;
+diff --git a/drivers/net/wan/hdlc_x25.c b/drivers/net/wan/hdlc_x25.c
+index 34bc53facd11c..6938cb3bdf4e9 100644
+--- a/drivers/net/wan/hdlc_x25.c
++++ b/drivers/net/wan/hdlc_x25.c
+@@ -23,6 +23,8 @@
+ 
+ struct x25_state {
+ 	x25_hdlc_proto settings;
++	bool up;
++	spinlock_t up_lock; /* Protects "up" */
+ };
+ 
+ static int x25_ioctl(struct net_device *dev, struct ifreq *ifr);
+@@ -105,6 +107,8 @@ static void x25_data_transmit(struct net_device *dev, struct sk_buff *skb)
+ 
+ static netdev_tx_t x25_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
++	hdlc_device *hdlc = dev_to_hdlc(dev);
++	struct x25_state *x25st = state(hdlc);
+ 	int result;
+ 
+ 	/* There should be a pseudo header of 1 byte added by upper layers.
+@@ -115,12 +119,20 @@ static netdev_tx_t x25_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		return NETDEV_TX_OK;
+ 	}
+ 
++	spin_lock_bh(&x25st->up_lock);
++	if (!x25st->up) {
++		spin_unlock_bh(&x25st->up_lock);
++		kfree_skb(skb);
++		return NETDEV_TX_OK;
++	}
++
+ 	switch (skb->data[0]) {
+ 	case X25_IFACE_DATA:	/* Data to be transmitted */
+ 		skb_pull(skb, 1);
+ 		skb_reset_network_header(skb);
+ 		if ((result = lapb_data_request(dev, skb)) != LAPB_OK)
+ 			dev_kfree_skb(skb);
++		spin_unlock_bh(&x25st->up_lock);
+ 		return NETDEV_TX_OK;
+ 
+ 	case X25_IFACE_CONNECT:
+@@ -149,6 +161,7 @@ static netdev_tx_t x25_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		break;
+ 	}
+ 
++	spin_unlock_bh(&x25st->up_lock);
+ 	dev_kfree_skb(skb);
+ 	return NETDEV_TX_OK;
+ }
+@@ -166,6 +179,7 @@ static int x25_open(struct net_device *dev)
+ 		.data_transmit = x25_data_transmit,
+ 	};
+ 	hdlc_device *hdlc = dev_to_hdlc(dev);
++	struct x25_state *x25st = state(hdlc);
+ 	struct lapb_parms_struct params;
+ 	int result;
+ 
+@@ -192,6 +206,10 @@ static int x25_open(struct net_device *dev)
+ 	if (result != LAPB_OK)
+ 		return -EINVAL;
+ 
++	spin_lock_bh(&x25st->up_lock);
++	x25st->up = true;
++	spin_unlock_bh(&x25st->up_lock);
++
+ 	return 0;
+ }
+ 
+@@ -199,6 +217,13 @@ static int x25_open(struct net_device *dev)
+ 
+ static void x25_close(struct net_device *dev)
+ {
++	hdlc_device *hdlc = dev_to_hdlc(dev);
++	struct x25_state *x25st = state(hdlc);
++
++	spin_lock_bh(&x25st->up_lock);
++	x25st->up = false;
++	spin_unlock_bh(&x25st->up_lock);
++
+ 	lapb_unregister(dev);
+ }
+ 
+@@ -207,15 +232,28 @@ static void x25_close(struct net_device *dev)
+ static int x25_rx(struct sk_buff *skb)
+ {
+ 	struct net_device *dev = skb->dev;
++	hdlc_device *hdlc = dev_to_hdlc(dev);
++	struct x25_state *x25st = state(hdlc);
+ 
+ 	if ((skb = skb_share_check(skb, GFP_ATOMIC)) == NULL) {
+ 		dev->stats.rx_dropped++;
+ 		return NET_RX_DROP;
+ 	}
+ 
+-	if (lapb_data_received(dev, skb) == LAPB_OK)
++	spin_lock_bh(&x25st->up_lock);
++	if (!x25st->up) {
++		spin_unlock_bh(&x25st->up_lock);
++		kfree_skb(skb);
++		dev->stats.rx_dropped++;
++		return NET_RX_DROP;
++	}
++
++	if (lapb_data_received(dev, skb) == LAPB_OK) {
++		spin_unlock_bh(&x25st->up_lock);
+ 		return NET_RX_SUCCESS;
++	}
+ 
++	spin_unlock_bh(&x25st->up_lock);
+ 	dev->stats.rx_errors++;
+ 	dev_kfree_skb_any(skb);
+ 	return NET_RX_DROP;
+@@ -300,6 +338,8 @@ static int x25_ioctl(struct net_device *dev, struct ifreq *ifr)
+ 			return result;
+ 
+ 		memcpy(&state(hdlc)->settings, &new_settings, size);
++		state(hdlc)->up = false;
++		spin_lock_init(&state(hdlc)->up_lock);
+ 
+ 		/* There's no header_ops so hard_header_len should be 0. */
+ 		dev->hard_header_len = 0;
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 262c40dc14a63..665a03ebf9efd 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -355,7 +355,6 @@ mt76_dma_tx_queue_skb(struct mt76_dev *dev, enum mt76_txq_id qid,
+ 	};
+ 	struct ieee80211_hw *hw;
+ 	int len, n = 0, ret = -ENOMEM;
+-	struct mt76_queue_entry e;
+ 	struct mt76_txwi_cache *t;
+ 	struct sk_buff *iter;
+ 	dma_addr_t addr;
+@@ -397,6 +396,11 @@ mt76_dma_tx_queue_skb(struct mt76_dev *dev, enum mt76_txq_id qid,
+ 	}
+ 	tx_info.nbuf = n;
+ 
++	if (q->queued + (tx_info.nbuf + 1) / 2 >= q->ndesc - 1) {
++		ret = -ENOMEM;
++		goto unmap;
++	}
++
+ 	dma_sync_single_for_cpu(dev->dev, t->dma_addr, dev->drv->txwi_size,
+ 				DMA_TO_DEVICE);
+ 	ret = dev->drv->tx_prepare_skb(dev, txwi, qid, wcid, sta, &tx_info);
+@@ -405,11 +409,6 @@ mt76_dma_tx_queue_skb(struct mt76_dev *dev, enum mt76_txq_id qid,
+ 	if (ret < 0)
+ 		goto unmap;
+ 
+-	if (q->queued + (tx_info.nbuf + 1) / 2 >= q->ndesc - 1) {
+-		ret = -ENOMEM;
+-		goto unmap;
+-	}
+-
+ 	return mt76_dma_add_buf(dev, q, tx_info.buf, tx_info.nbuf,
+ 				tx_info.info, tx_info.skb, t);
+ 
+@@ -425,9 +424,7 @@ free:
+ 		dev->test.tx_done--;
+ #endif
+ 
+-	e.skb = tx_info.skb;
+-	e.txwi = t;
+-	dev->drv->tx_complete_skb(dev, &e);
++	dev_kfree_skb(tx_info.skb);
+ 	mt76_put_txwi(dev, t);
+ 	return ret;
+ }
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index de846aaa8728b..610d2bc43ea2d 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -346,6 +346,7 @@ bool nvme_cancel_request(struct request *req, void *data, bool reserved)
+ 		return true;
+ 
+ 	nvme_req(req)->status = NVME_SC_HOST_ABORTED_CMD;
++	nvme_req(req)->flags |= NVME_REQ_CANCELLED;
+ 	blk_mq_complete_request(req);
+ 	return true;
+ }
+@@ -1371,7 +1372,7 @@ static int nvme_identify_ns(struct nvme_ctrl *ctrl, unsigned nsid,
+ 		goto out_free_id;
+ 	}
+ 
+-	error = -ENODEV;
++	error = NVME_SC_INVALID_NS | NVME_SC_DNR;
+ 	if ((*id)->ncap == 0) /* namespace not allocated or attached */
+ 		goto out_free_id;
+ 
+@@ -3959,7 +3960,7 @@ static void nvme_ns_remove_by_nsid(struct nvme_ctrl *ctrl, u32 nsid)
+ static void nvme_validate_ns(struct nvme_ns *ns, struct nvme_ns_ids *ids)
+ {
+ 	struct nvme_id_ns *id;
+-	int ret = -ENODEV;
++	int ret = NVME_SC_INVALID_NS | NVME_SC_DNR;
+ 
+ 	if (test_bit(NVME_NS_DEAD, &ns->flags))
+ 		goto out;
+@@ -3968,7 +3969,7 @@ static void nvme_validate_ns(struct nvme_ns *ns, struct nvme_ns_ids *ids)
+ 	if (ret)
+ 		goto out;
+ 
+-	ret = -ENODEV;
++	ret = NVME_SC_INVALID_NS | NVME_SC_DNR;
+ 	if (!nvme_ns_ids_equal(&ns->head->ids, ids)) {
+ 		dev_err(ns->ctrl->device,
+ 			"identifiers changed for nsid %d\n", ns->head->ns_id);
+@@ -3986,7 +3987,7 @@ out:
+ 	 *
+ 	 * TODO: we should probably schedule a delayed retry here.
+ 	 */
+-	if (ret && ret != -ENOMEM && !(ret > 0 && !(ret & NVME_SC_DNR)))
++	if (ret > 0 && (ret & NVME_SC_DNR))
+ 		nvme_ns_remove(ns);
+ 	else
+ 		revalidate_disk_size(ns->disk, true);
+@@ -4018,6 +4019,12 @@ static void nvme_validate_or_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+ 				nsid);
+ 			break;
+ 		}
++		if (!nvme_multi_css(ctrl)) {
++			dev_warn(ctrl->device,
++				"command set not reported for nsid: %d\n",
++				nsid);
++			break;
++		}
+ 		nvme_alloc_ns(ctrl, nsid, &ids);
+ 		break;
+ 	default:
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index fab068c8ba026..41257daf7464d 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -1956,7 +1956,7 @@ nvme_fc_fcpio_done(struct nvmefc_fcp_req *req)
+ 				sizeof(op->rsp_iu), DMA_FROM_DEVICE);
+ 
+ 	if (opstate == FCPOP_STATE_ABORTED)
+-		status = cpu_to_le16(NVME_SC_HOST_PATH_ERROR << 1);
++		status = cpu_to_le16(NVME_SC_HOST_ABORTED_CMD << 1);
+ 	else if (freq->status) {
+ 		status = cpu_to_le16(NVME_SC_HOST_PATH_ERROR << 1);
+ 		dev_info(ctrl->ctrl.device,
+@@ -2443,6 +2443,7 @@ nvme_fc_terminate_exchange(struct request *req, void *data, bool reserved)
+ 	struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl);
+ 	struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(req);
+ 
++	op->nreq.flags |= NVME_REQ_CANCELLED;
+ 	__nvme_fc_abort_op(ctrl, op);
+ 	return true;
+ }
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 99c59f93a0641..4dca58f4afdf7 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3247,6 +3247,7 @@ static const struct pci_device_id nvme_id_table[] = {
+ 		.driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY, },
+ 	{ PCI_DEVICE(0x144d, 0xa822),   /* Samsung PM1725a */
+ 		.driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY |
++				NVME_QUIRK_DISABLE_WRITE_ZEROES|
+ 				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ 	{ PCI_DEVICE(0x1987, 0x5016),	/* Phison E16 */
+ 		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 06b6b742bb213..6c1f3ab7649c7 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -802,9 +802,8 @@ static void nvmet_rdma_write_data_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		nvmet_req_uninit(&rsp->req);
+ 		nvmet_rdma_release_rsp(rsp);
+ 		if (wc->status != IB_WC_WR_FLUSH_ERR) {
+-			pr_info("RDMA WRITE for CQE 0x%p failed with status %s (%d).\n",
+-				wc->wr_cqe, ib_wc_status_msg(wc->status),
+-				wc->status);
++			pr_info("RDMA WRITE for CQE failed with status %s (%d).\n",
++				ib_wc_status_msg(wc->status), wc->status);
+ 			nvmet_rdma_error_comp(queue);
+ 		}
+ 		return;
+diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
+index 30a9062d2b4b8..a90c32d072da3 100644
+--- a/drivers/platform/x86/intel-vbtn.c
++++ b/drivers/platform/x86/intel-vbtn.c
+@@ -47,8 +47,16 @@ static const struct key_entry intel_vbtn_keymap[] = {
+ };
+ 
+ static const struct key_entry intel_vbtn_switchmap[] = {
+-	{ KE_SW,     0xCA, { .sw = { SW_DOCK, 1 } } },		/* Docked */
+-	{ KE_SW,     0xCB, { .sw = { SW_DOCK, 0 } } },		/* Undocked */
++	/*
++	 * SW_DOCK should only be reported for docking stations, but DSDTs using the
++	 * intel-vbtn code, always seem to use this for 2-in-1s / convertibles and set
++	 * SW_DOCK=1 when in laptop-mode (in tandem with setting SW_TABLET_MODE=0).
++	 * This causes userspace to think the laptop is docked to a port-replicator
++	 * and to disable suspend-on-lid-close, which is undesirable.
++	 * Map the dock events to KEY_IGNORE to avoid this broken SW_DOCK reporting.
++	 */
++	{ KE_IGNORE, 0xCA, { .sw = { SW_DOCK, 1 } } },		/* Docked */
++	{ KE_IGNORE, 0xCB, { .sw = { SW_DOCK, 0 } } },		/* Undocked */
+ 	{ KE_SW,     0xCC, { .sw = { SW_TABLET_MODE, 1 } } },	/* Tablet */
+ 	{ KE_SW,     0xCD, { .sw = { SW_TABLET_MODE, 0 } } },	/* Laptop */
+ };
+diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
+index 52e4396d40717..c3036591b259a 100644
+--- a/drivers/regulator/qcom-rpmh-regulator.c
++++ b/drivers/regulator/qcom-rpmh-regulator.c
+@@ -726,8 +726,8 @@ static const struct rpmh_vreg_hw_data pmic5_ftsmps510 = {
+ static const struct rpmh_vreg_hw_data pmic5_hfsmps515 = {
+ 	.regulator_type = VRM,
+ 	.ops = &rpmh_regulator_vrm_ops,
+-	.voltage_range = REGULATOR_LINEAR_RANGE(2800000, 0, 4, 16000),
+-	.n_voltages = 5,
++	.voltage_range = REGULATOR_LINEAR_RANGE(320000, 0, 235, 16000),
++	.n_voltages = 236,
+ 	.pmic_mode_map = pmic_mode_map_pmic5_smps,
+ 	.of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode,
+ };
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index bb940cbcbb5dd..ac25ec5f97388 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -7358,14 +7358,18 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
+ 		ioc->pend_os_device_add_sz++;
+ 	ioc->pend_os_device_add = kzalloc(ioc->pend_os_device_add_sz,
+ 	    GFP_KERNEL);
+-	if (!ioc->pend_os_device_add)
++	if (!ioc->pend_os_device_add) {
++		r = -ENOMEM;
+ 		goto out_free_resources;
++	}
+ 
+ 	ioc->device_remove_in_progress_sz = ioc->pend_os_device_add_sz;
+ 	ioc->device_remove_in_progress =
+ 		kzalloc(ioc->device_remove_in_progress_sz, GFP_KERNEL);
+-	if (!ioc->device_remove_in_progress)
++	if (!ioc->device_remove_in_progress) {
++		r = -ENOMEM;
+ 		goto out_free_resources;
++	}
+ 
+ 	ioc->fwfault_debug = mpt3sas_fwfault_debug;
+ 
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 47ad64b066236..69c5b5ee2169b 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -1675,6 +1675,7 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
+ 		if (!qedi->global_queues[i]) {
+ 			QEDI_ERR(&qedi->dbg_ctx,
+ 				 "Unable to allocation global queue %d.\n", i);
++			status = -ENOMEM;
+ 			goto mem_alloc_failure;
+ 		}
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index a27a625839e68..dcae8f071c355 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -3222,8 +3222,7 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+ 	if (!qpair->fw_started || (cmd->reset_count != qpair->chip_reset) ||
+ 	    (cmd->sess && cmd->sess->deleted)) {
+ 		cmd->state = QLA_TGT_STATE_PROCESSED;
+-		res = 0;
+-		goto free;
++		return 0;
+ 	}
+ 
+ 	ql_dbg_qp(ql_dbg_tgt, qpair, 0xe018,
+@@ -3234,8 +3233,9 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+ 
+ 	res = qlt_pre_xmit_response(cmd, &prm, xmit_type, scsi_status,
+ 	    &full_req_cnt);
+-	if (unlikely(res != 0))
+-		goto free;
++	if (unlikely(res != 0)) {
++		return res;
++	}
+ 
+ 	spin_lock_irqsave(qpair->qp_lock_ptr, flags);
+ 
+@@ -3255,8 +3255,7 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+ 			vha->flags.online, qla2x00_reset_active(vha),
+ 			cmd->reset_count, qpair->chip_reset);
+ 		spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+-		res = 0;
+-		goto free;
++		return 0;
+ 	}
+ 
+ 	/* Does F/W have an IOCBs for this request */
+@@ -3359,8 +3358,6 @@ out_unmap_unlock:
+ 	qlt_unmap_sg(vha, cmd);
+ 	spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+ 
+-free:
+-	vha->hw->tgt.tgt_ops->free_cmd(cmd);
+ 	return res;
+ }
+ EXPORT_SYMBOL(qlt_xmit_response);
+diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+index 61017acd3458b..7405fab324c82 100644
+--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
++++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+@@ -646,7 +646,6 @@ static int tcm_qla2xxx_queue_data_in(struct se_cmd *se_cmd)
+ {
+ 	struct qla_tgt_cmd *cmd = container_of(se_cmd,
+ 				struct qla_tgt_cmd, se_cmd);
+-	struct scsi_qla_host *vha = cmd->vha;
+ 
+ 	if (cmd->aborted) {
+ 		/* Cmd can loop during Q-full.  tcm_qla2xxx_aborted_task
+@@ -659,7 +658,6 @@ static int tcm_qla2xxx_queue_data_in(struct se_cmd *se_cmd)
+ 			cmd->se_cmd.transport_state,
+ 			cmd->se_cmd.t_state,
+ 			cmd->se_cmd.se_cmd_flags);
+-		vha->hw->tgt.tgt_ops->free_cmd(cmd);
+ 		return 0;
+ 	}
+ 
+@@ -687,7 +685,6 @@ static int tcm_qla2xxx_queue_status(struct se_cmd *se_cmd)
+ {
+ 	struct qla_tgt_cmd *cmd = container_of(se_cmd,
+ 				struct qla_tgt_cmd, se_cmd);
+-	struct scsi_qla_host *vha = cmd->vha;
+ 	int xmit_type = QLA_TGT_XMIT_STATUS;
+ 
+ 	if (cmd->aborted) {
+@@ -701,7 +698,6 @@ static int tcm_qla2xxx_queue_status(struct se_cmd *se_cmd)
+ 		    cmd, kref_read(&cmd->se_cmd.cmd_kref),
+ 		    cmd->se_cmd.transport_state, cmd->se_cmd.t_state,
+ 		    cmd->se_cmd.se_cmd_flags);
+-		vha->hw->tgt.tgt_ops->free_cmd(cmd);
+ 		return 0;
+ 	}
+ 	cmd->bufflen = se_cmd->data_length;
+diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
+index a244c8ae1b4eb..20182e39cb282 100644
+--- a/drivers/scsi/ufs/ufs-qcom.c
++++ b/drivers/scsi/ufs/ufs-qcom.c
+@@ -253,12 +253,17 @@ static int ufs_qcom_host_reset(struct ufs_hba *hba)
+ {
+ 	int ret = 0;
+ 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
++	bool reenable_intr = false;
+ 
+ 	if (!host->core_reset) {
+ 		dev_warn(hba->dev, "%s: reset control not set\n", __func__);
+ 		goto out;
+ 	}
+ 
++	reenable_intr = hba->is_irq_enabled;
++	disable_irq(hba->irq);
++	hba->is_irq_enabled = false;
++
+ 	ret = reset_control_assert(host->core_reset);
+ 	if (ret) {
+ 		dev_err(hba->dev, "%s: core_reset assert failed, err = %d\n",
+@@ -280,6 +285,11 @@ static int ufs_qcom_host_reset(struct ufs_hba *hba)
+ 
+ 	usleep_range(1000, 1100);
+ 
++	if (reenable_intr) {
++		enable_irq(hba->irq);
++		hba->is_irq_enabled = true;
++	}
++
+ out:
+ 	return ret;
+ }
+diff --git a/drivers/soc/ti/omap_prm.c b/drivers/soc/ti/omap_prm.c
+index c8b14b3a171f7..fb067b5e4a977 100644
+--- a/drivers/soc/ti/omap_prm.c
++++ b/drivers/soc/ti/omap_prm.c
+@@ -522,8 +522,12 @@ static int omap_reset_deassert(struct reset_controller_dev *rcdev,
+ 		       reset->prm->data->name, id);
+ 
+ exit:
+-	if (reset->clkdm)
++	if (reset->clkdm) {
++		/* At least dra7 iva needs a delay before clkdm idle */
++		if (has_rstst)
++			udelay(1);
+ 		pdata->clkdm_allow_idle(reset->clkdm);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/staging/rtl8192e/Kconfig b/drivers/staging/rtl8192e/Kconfig
+index 03fcc23516fd3..6e7d84ac06f50 100644
+--- a/drivers/staging/rtl8192e/Kconfig
++++ b/drivers/staging/rtl8192e/Kconfig
+@@ -26,6 +26,7 @@ config RTLLIB_CRYPTO_CCMP
+ config RTLLIB_CRYPTO_TKIP
+ 	tristate "Support for rtllib TKIP crypto"
+ 	depends on RTLLIB
++	select CRYPTO
+ 	select CRYPTO_LIB_ARC4
+ 	select CRYPTO_MICHAEL_MIC
+ 	default y
+diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
+index 41645fe6ad48a..ea0efd290c372 100644
+--- a/drivers/xen/Kconfig
++++ b/drivers/xen/Kconfig
+@@ -50,11 +50,11 @@ config XEN_BALLOON_MEMORY_HOTPLUG
+ 
+ 	  SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"
+ 
+-config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
++config XEN_MEMORY_HOTPLUG_LIMIT
+ 	int "Hotplugged memory limit (in GiB) for a PV guest"
+ 	default 512
+ 	depends on XEN_HAVE_PVMMU
+-	depends on XEN_BALLOON_MEMORY_HOTPLUG
++	depends on MEMORY_HOTPLUG
+ 	help
+ 	  Maxmium amount of memory (in GiB) that a PV guest can be
+ 	  expanded to when using memory hotplug.
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index cd9b1a16489b4..4bac32a274ceb 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -226,7 +226,6 @@ static void __del_qgroup_rb(struct btrfs_fs_info *fs_info,
+ {
+ 	struct btrfs_qgroup_list *list;
+ 
+-	btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
+ 	list_del(&qgroup->dirty);
+ 	while (!list_empty(&qgroup->groups)) {
+ 		list = list_first_entry(&qgroup->groups,
+@@ -243,7 +242,6 @@ static void __del_qgroup_rb(struct btrfs_fs_info *fs_info,
+ 		list_del(&list->next_member);
+ 		kfree(list);
+ 	}
+-	kfree(qgroup);
+ }
+ 
+ /* must be called with qgroup_lock held */
+@@ -569,6 +567,8 @@ void btrfs_free_qgroup_config(struct btrfs_fs_info *fs_info)
+ 		qgroup = rb_entry(n, struct btrfs_qgroup, node);
+ 		rb_erase(n, &fs_info->qgroup_tree);
+ 		__del_qgroup_rb(fs_info, qgroup);
++		btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
++		kfree(qgroup);
+ 	}
+ 	/*
+ 	 * We call btrfs_free_qgroup_config() when unmounting
+@@ -1580,6 +1580,14 @@ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ 	spin_lock(&fs_info->qgroup_lock);
+ 	del_qgroup_rb(fs_info, qgroupid);
+ 	spin_unlock(&fs_info->qgroup_lock);
++
++	/*
++	 * Remove the qgroup from sysfs now without holding the qgroup_lock
++	 * spinlock, since the sysfs_remove_group() function needs to take
++	 * the mutex kernfs_mutex through kernfs_remove_by_name_ns().
++	 */
++	btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
++	kfree(qgroup);
+ out:
+ 	mutex_unlock(&fs_info->qgroup_ioctl_lock);
+ 	return ret;
+diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
+index e027c718ca01a..8ffc40e84a594 100644
+--- a/fs/cachefiles/rdwr.c
++++ b/fs/cachefiles/rdwr.c
+@@ -24,17 +24,16 @@ static int cachefiles_read_waiter(wait_queue_entry_t *wait, unsigned mode,
+ 		container_of(wait, struct cachefiles_one_read, monitor);
+ 	struct cachefiles_object *object;
+ 	struct fscache_retrieval *op = monitor->op;
+-	struct wait_bit_key *key = _key;
++	struct wait_page_key *key = _key;
+ 	struct page *page = wait->private;
+ 
+ 	ASSERT(key);
+ 
+ 	_enter("{%lu},%u,%d,{%p,%u}",
+ 	       monitor->netfs_page->index, mode, sync,
+-	       key->flags, key->bit_nr);
++	       key->page, key->bit_nr);
+ 
+-	if (key->flags != &page->flags ||
+-	    key->bit_nr != PG_locked)
++	if (key->page != page || key->bit_nr != PG_locked)
+ 		return 0;
+ 
+ 	_debug("--- monitor %p %lx ---", page, page->flags);
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 3295516af2aec..248ee81e01516 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1002,8 +1002,8 @@ struct cifs_ses {
+ 	bool binding:1; /* are we binding the session? */
+ 	__u16 session_flags;
+ 	__u8 smb3signingkey[SMB3_SIGN_KEY_SIZE];
+-	__u8 smb3encryptionkey[SMB3_SIGN_KEY_SIZE];
+-	__u8 smb3decryptionkey[SMB3_SIGN_KEY_SIZE];
++	__u8 smb3encryptionkey[SMB3_ENC_DEC_KEY_SIZE];
++	__u8 smb3decryptionkey[SMB3_ENC_DEC_KEY_SIZE];
+ 	__u8 preauth_sha_hash[SMB2_PREAUTH_HASH_SIZE];
+ 
+ 	__u8 binding_preauth_sha_hash[SMB2_PREAUTH_HASH_SIZE];
+diff --git a/fs/cifs/cifspdu.h b/fs/cifs/cifspdu.h
+index 593d826820c34..a843422942b1d 100644
+--- a/fs/cifs/cifspdu.h
++++ b/fs/cifs/cifspdu.h
+@@ -147,6 +147,11 @@
+  */
+ #define SMB3_SIGN_KEY_SIZE (16)
+ 
++/*
++ * Size of the smb3 encryption/decryption keys
++ */
++#define SMB3_ENC_DEC_KEY_SIZE (32)
++
+ #define CIFS_CLIENT_CHALLENGE_SIZE (8)
+ #define CIFS_SERVER_CHALLENGE_SIZE (8)
+ #define CIFS_HMAC_MD5_HASH_SIZE (16)
+diff --git a/fs/cifs/smb2glob.h b/fs/cifs/smb2glob.h
+index 99a1951a01ec2..d9a990c991213 100644
+--- a/fs/cifs/smb2glob.h
++++ b/fs/cifs/smb2glob.h
+@@ -58,6 +58,7 @@
+ #define SMB2_HMACSHA256_SIZE (32)
+ #define SMB2_CMACAES_SIZE (16)
+ #define SMB3_SIGNKEY_SIZE (16)
++#define SMB3_GCM128_CRYPTKEY_SIZE (16)
+ #define SMB3_GCM256_CRYPTKEY_SIZE (32)
+ 
+ /* Maximum buffer size value we can send with 1 credit */
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 02998c79bb907..46d247c722eec 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1980,6 +1980,7 @@ smb2_duplicate_extents(const unsigned int xid,
+ {
+ 	int rc;
+ 	unsigned int ret_data_len;
++	struct inode *inode;
+ 	struct duplicate_extents_to_file dup_ext_buf;
+ 	struct cifs_tcon *tcon = tlink_tcon(trgtfile->tlink);
+ 
+@@ -1996,10 +1997,21 @@ smb2_duplicate_extents(const unsigned int xid,
+ 	cifs_dbg(FYI, "Duplicate extents: src off %lld dst off %lld len %lld\n",
+ 		src_off, dest_off, len);
+ 
+-	rc = smb2_set_file_size(xid, tcon, trgtfile, dest_off + len, false);
+-	if (rc)
+-		goto duplicate_extents_out;
++	inode = d_inode(trgtfile->dentry);
++	if (inode->i_size < dest_off + len) {
++		rc = smb2_set_file_size(xid, tcon, trgtfile, dest_off + len, false);
++		if (rc)
++			goto duplicate_extents_out;
+ 
++		/*
++		 * Although also could set plausible allocation size (i_blocks)
++		 * here in addition to setting the file size, in reflink
++		 * it is likely that the target file is sparse. Its allocation
++		 * size will be queried on next revalidate, but it is important
++		 * to make sure that file's cached size is updated immediately
++		 */
++		cifs_setsize(inode, dest_off + len);
++	}
+ 	rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid,
+ 			trgtfile->fid.volatile_fid,
+ 			FSCTL_DUPLICATE_EXTENTS_TO_FILE,
+@@ -4056,7 +4068,7 @@ smb2_get_enc_key(struct TCP_Server_Info *server, __u64 ses_id, int enc, u8 *key)
+ 			if (ses->Suid == ses_id) {
+ 				ses_enc_key = enc ? ses->smb3encryptionkey :
+ 					ses->smb3decryptionkey;
+-				memcpy(key, ses_enc_key, SMB3_SIGN_KEY_SIZE);
++				memcpy(key, ses_enc_key, SMB3_ENC_DEC_KEY_SIZE);
+ 				spin_unlock(&cifs_tcp_ses_lock);
+ 				return 0;
+ 			}
+@@ -4083,7 +4095,7 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ 	int rc = 0;
+ 	struct scatterlist *sg;
+ 	u8 sign[SMB2_SIGNATURE_SIZE] = {};
+-	u8 key[SMB3_SIGN_KEY_SIZE];
++	u8 key[SMB3_ENC_DEC_KEY_SIZE];
+ 	struct aead_request *req;
+ 	char *iv;
+ 	unsigned int iv_len;
+@@ -4107,10 +4119,11 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ 	tfm = enc ? server->secmech.ccmaesencrypt :
+ 						server->secmech.ccmaesdecrypt;
+ 
+-	if (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM)
++	if ((server->cipher_type == SMB2_ENCRYPTION_AES256_CCM) ||
++		(server->cipher_type == SMB2_ENCRYPTION_AES256_GCM))
+ 		rc = crypto_aead_setkey(tfm, key, SMB3_GCM256_CRYPTKEY_SIZE);
+ 	else
+-		rc = crypto_aead_setkey(tfm, key, SMB3_SIGN_KEY_SIZE);
++		rc = crypto_aead_setkey(tfm, key, SMB3_GCM128_CRYPTKEY_SIZE);
+ 
+ 	if (rc) {
+ 		cifs_server_dbg(VFS, "%s: Failed to set aead key %d\n", __func__, rc);
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index c6f8bc6729aa1..d1d550647cd64 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -4032,8 +4032,7 @@ smb2_async_readv(struct cifs_readdata *rdata)
+ 	if (rdata->credits.value > 0) {
+ 		shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->bytes,
+ 						SMB2_MAX_BUFFER_SIZE));
+-		shdr->CreditRequest =
+-			cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 1);
++		shdr->CreditRequest = cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 8);
+ 
+ 		rc = adjust_credits(server, &rdata->credits, rdata->bytes);
+ 		if (rc)
+@@ -4339,8 +4338,7 @@ smb2_async_writev(struct cifs_writedata *wdata,
+ 	if (wdata->credits.value > 0) {
+ 		shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->bytes,
+ 						    SMB2_MAX_BUFFER_SIZE));
+-		shdr->CreditRequest =
+-			cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 1);
++		shdr->CreditRequest = cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 8);
+ 
+ 		rc = adjust_credits(server, &wdata->credits, wdata->bytes);
+ 		if (rc)
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index ebccd71cc60a3..e6fa76ab70be7 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -298,7 +298,8 @@ static int generate_key(struct cifs_ses *ses, struct kvec label,
+ {
+ 	unsigned char zero = 0x0;
+ 	__u8 i[4] = {0, 0, 0, 1};
+-	__u8 L[4] = {0, 0, 0, 128};
++	__u8 L128[4] = {0, 0, 0, 128};
++	__u8 L256[4] = {0, 0, 1, 0};
+ 	int rc = 0;
+ 	unsigned char prfhash[SMB2_HMACSHA256_SIZE];
+ 	unsigned char *hashptr = prfhash;
+@@ -354,8 +355,14 @@ static int generate_key(struct cifs_ses *ses, struct kvec label,
+ 		goto smb3signkey_ret;
+ 	}
+ 
+-	rc = crypto_shash_update(&server->secmech.sdeschmacsha256->shash,
+-				L, 4);
++	if ((server->cipher_type == SMB2_ENCRYPTION_AES256_CCM) ||
++		(server->cipher_type == SMB2_ENCRYPTION_AES256_GCM)) {
++		rc = crypto_shash_update(&server->secmech.sdeschmacsha256->shash,
++				L256, 4);
++	} else {
++		rc = crypto_shash_update(&server->secmech.sdeschmacsha256->shash,
++				L128, 4);
++	}
+ 	if (rc) {
+ 		cifs_server_dbg(VFS, "%s: Could not update with L\n", __func__);
+ 		goto smb3signkey_ret;
+@@ -390,6 +397,9 @@ generate_smb3signingkey(struct cifs_ses *ses,
+ 			const struct derivation_triplet *ptriplet)
+ {
+ 	int rc;
++#ifdef CONFIG_CIFS_DEBUG_DUMP_KEYS
++	struct TCP_Server_Info *server = ses->server;
++#endif
+ 
+ 	/*
+ 	 * All channels use the same encryption/decryption keys but
+@@ -422,11 +432,11 @@ generate_smb3signingkey(struct cifs_ses *ses,
+ 		rc = generate_key(ses, ptriplet->encryption.label,
+ 				  ptriplet->encryption.context,
+ 				  ses->smb3encryptionkey,
+-				  SMB3_SIGN_KEY_SIZE);
++				  SMB3_ENC_DEC_KEY_SIZE);
+ 		rc = generate_key(ses, ptriplet->decryption.label,
+ 				  ptriplet->decryption.context,
+ 				  ses->smb3decryptionkey,
+-				  SMB3_SIGN_KEY_SIZE);
++				  SMB3_ENC_DEC_KEY_SIZE);
+ 		if (rc)
+ 			return rc;
+ 	}
+@@ -442,14 +452,23 @@ generate_smb3signingkey(struct cifs_ses *ses,
+ 	 */
+ 	cifs_dbg(VFS, "Session Id    %*ph\n", (int)sizeof(ses->Suid),
+ 			&ses->Suid);
++	cifs_dbg(VFS, "Cipher type   %d\n", server->cipher_type);
+ 	cifs_dbg(VFS, "Session Key   %*ph\n",
+ 		 SMB2_NTLMV2_SESSKEY_SIZE, ses->auth_key.response);
+ 	cifs_dbg(VFS, "Signing Key   %*ph\n",
+ 		 SMB3_SIGN_KEY_SIZE, ses->smb3signingkey);
+-	cifs_dbg(VFS, "ServerIn Key  %*ph\n",
+-		 SMB3_SIGN_KEY_SIZE, ses->smb3encryptionkey);
+-	cifs_dbg(VFS, "ServerOut Key %*ph\n",
+-		 SMB3_SIGN_KEY_SIZE, ses->smb3decryptionkey);
++	if ((server->cipher_type == SMB2_ENCRYPTION_AES256_CCM) ||
++		(server->cipher_type == SMB2_ENCRYPTION_AES256_GCM)) {
++		cifs_dbg(VFS, "ServerIn Key  %*ph\n",
++				SMB3_GCM256_CRYPTKEY_SIZE, ses->smb3encryptionkey);
++		cifs_dbg(VFS, "ServerOut Key %*ph\n",
++				SMB3_GCM256_CRYPTKEY_SIZE, ses->smb3decryptionkey);
++	} else {
++		cifs_dbg(VFS, "ServerIn Key  %*ph\n",
++				SMB3_GCM128_CRYPTKEY_SIZE, ses->smb3encryptionkey);
++		cifs_dbg(VFS, "ServerOut Key %*ph\n",
++				SMB3_GCM128_CRYPTKEY_SIZE, ses->smb3decryptionkey);
++	}
+ #endif
+ 	return rc;
+ }
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 7b45b3b79df56..503a0056b60f2 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -1170,7 +1170,7 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 	}
+ 	if (rc != 0) {
+ 		for (; i < num_rqst; i++) {
+-			cifs_server_dbg(VFS, "Cancelling wait for mid %llu cmd: %d\n",
++			cifs_server_dbg(FYI, "Cancelling wait for mid %llu cmd: %d\n",
+ 				 midQ[i]->mid, le16_to_cpu(midQ[i]->command));
+ 			send_cancel(server, &rqst[i], midQ[i]);
+ 			spin_lock(&GlobalMid_Lock);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index e67d5de6f28ca..b6229fe1aa233 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -2732,8 +2732,15 @@ static int ext4_mb_init_backend(struct super_block *sb)
+ 	}
+ 
+ 	if (ext4_has_feature_flex_bg(sb)) {
+-		/* a single flex group is supposed to be read by a single IO */
+-		sbi->s_mb_prefetch = min(1 << sbi->s_es->s_log_groups_per_flex,
++		/* a single flex group is supposed to be read by a single IO.
++		 * 2 ^ s_log_groups_per_flex != UINT_MAX as s_mb_prefetch is
++		 * unsigned integer, so the maximum shift is 32.
++		 */
++		if (sbi->s_es->s_log_groups_per_flex >= 32) {
++			ext4_msg(sb, KERN_ERR, "too many log groups per flexible block group");
++			goto err_freesgi;
++		}
++		sbi->s_mb_prefetch = min_t(uint, 1 << sbi->s_es->s_log_groups_per_flex,
+ 			BLK_MAX_SEGMENT_SIZE >> (sb->s_blocksize_bits - 9));
+ 		sbi->s_mb_prefetch *= 8; /* 8 prefetch IOs in flight at most */
+ 	} else {
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 4698471795732..5462f26907c19 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1459,6 +1459,9 @@ ext4_xattr_inode_cache_find(struct inode *inode, const void *value,
+ 	if (!ce)
+ 		return NULL;
+ 
++	WARN_ON_ONCE(ext4_handle_valid(journal_current_handle()) &&
++		     !(current->flags & PF_MEMALLOC_NOFS));
++
+ 	ea_data = kvmalloc(value_len, GFP_KERNEL);
+ 	if (!ea_data) {
+ 		mb_cache_entry_put(ea_inode_cache, ce);
+@@ -2325,6 +2328,7 @@ ext4_xattr_set_handle(handle_t *handle, struct inode *inode, int name_index,
+ 			error = -ENOSPC;
+ 			goto cleanup;
+ 		}
++		WARN_ON_ONCE(!(current->flags & PF_MEMALLOC_NOFS));
+ 	}
+ 
+ 	error = ext4_reserve_inode_write(handle, inode, &is.iloc);
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index 2e9314091c81d..1955dea999f79 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -935,12 +935,16 @@ static void trans_drain(struct gfs2_trans *tr)
+ 	while (!list_empty(head)) {
+ 		bd = list_first_entry(head, struct gfs2_bufdata, bd_list);
+ 		list_del_init(&bd->bd_list);
++		if (!list_empty(&bd->bd_ail_st_list))
++			gfs2_remove_from_ail(bd);
+ 		kmem_cache_free(gfs2_bufdata_cachep, bd);
+ 	}
+ 	head = &tr->tr_databuf;
+ 	while (!list_empty(head)) {
+ 		bd = list_first_entry(head, struct gfs2_bufdata, bd_list);
+ 		list_del_init(&bd->bd_list);
++		if (!list_empty(&bd->bd_ail_st_list))
++			gfs2_remove_from_ail(bd);
+ 		kmem_cache_free(gfs2_bufdata_cachep, bd);
+ 	}
+ }
+diff --git a/fs/gfs2/trans.c b/fs/gfs2/trans.c
+index 6d4bf7ea7b3be..7f850ff6a05de 100644
+--- a/fs/gfs2/trans.c
++++ b/fs/gfs2/trans.c
+@@ -134,6 +134,8 @@ static struct gfs2_bufdata *gfs2_alloc_bufdata(struct gfs2_glock *gl,
+ 	bd->bd_bh = bh;
+ 	bd->bd_gl = gl;
+ 	INIT_LIST_HEAD(&bd->bd_list);
++	INIT_LIST_HEAD(&bd->bd_ail_st_list);
++	INIT_LIST_HEAD(&bd->bd_ail_gl_list);
+ 	bh->b_private = bd;
+ 	return bd;
+ }
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 06e9c21819957..dde290eb7dd0c 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -3996,6 +3996,7 @@ static int io_remove_buffers(struct io_kiocb *req, bool force_nonblock,
+ static int io_provide_buffers_prep(struct io_kiocb *req,
+ 				   const struct io_uring_sqe *sqe)
+ {
++	unsigned long size;
+ 	struct io_provide_buf *p = &req->pbuf;
+ 	u64 tmp;
+ 
+@@ -4009,7 +4010,8 @@ static int io_provide_buffers_prep(struct io_kiocb *req,
+ 	p->addr = READ_ONCE(sqe->addr);
+ 	p->len = READ_ONCE(sqe->len);
+ 
+-	if (!access_ok(u64_to_user_ptr(p->addr), (p->len * p->nbufs)))
++	size = (unsigned long)p->len * p->nbufs;
++	if (!access_ok(u64_to_user_ptr(p->addr), size))
+ 		return -EFAULT;
+ 
+ 	p->bgid = READ_ONCE(sqe->buf_group);
+diff --git a/fs/nfs/Kconfig b/fs/nfs/Kconfig
+index e2a488d403a61..14a72224b6571 100644
+--- a/fs/nfs/Kconfig
++++ b/fs/nfs/Kconfig
+@@ -127,7 +127,7 @@ config PNFS_BLOCK
+ config PNFS_FLEXFILE_LAYOUT
+ 	tristate
+ 	depends on NFS_V4_1 && NFS_V3
+-	default m
++	default NFS_V4
+ 
+ config NFS_V4_1_IMPLEMENTATION_ID_DOMAIN
+ 	string "NFSv4.1 Implementation ID Domain"
+diff --git a/fs/nfs/nfs3xdr.c b/fs/nfs/nfs3xdr.c
+index 69971f6c840d2..dff6b52d26a85 100644
+--- a/fs/nfs/nfs3xdr.c
++++ b/fs/nfs/nfs3xdr.c
+@@ -35,6 +35,7 @@
+  */
+ #define NFS3_fhandle_sz		(1+16)
+ #define NFS3_fh_sz		(NFS3_fhandle_sz)	/* shorthand */
++#define NFS3_post_op_fh_sz	(1+NFS3_fh_sz)
+ #define NFS3_sattr_sz		(15)
+ #define NFS3_filename_sz	(1+(NFS3_MAXNAMLEN>>2))
+ #define NFS3_path_sz		(1+(NFS3_MAXPATHLEN>>2))
+@@ -72,7 +73,7 @@
+ #define NFS3_readlinkres_sz	(1+NFS3_post_op_attr_sz+1+1)
+ #define NFS3_readres_sz		(1+NFS3_post_op_attr_sz+3+1)
+ #define NFS3_writeres_sz	(1+NFS3_wcc_data_sz+4)
+-#define NFS3_createres_sz	(1+NFS3_fh_sz+NFS3_post_op_attr_sz+NFS3_wcc_data_sz)
++#define NFS3_createres_sz	(1+NFS3_post_op_fh_sz+NFS3_post_op_attr_sz+NFS3_wcc_data_sz)
+ #define NFS3_renameres_sz	(1+(2 * NFS3_wcc_data_sz))
+ #define NFS3_linkres_sz		(1+NFS3_post_op_attr_sz+NFS3_wcc_data_sz)
+ #define NFS3_readdirres_sz	(1+NFS3_post_op_attr_sz+2+1)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index ba2dfba4854bf..15ac6b6893e76 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -5891,6 +5891,9 @@ static int __nfs4_proc_set_acl(struct inode *inode, const void *buf, size_t bufl
+ 	unsigned int npages = DIV_ROUND_UP(buflen, PAGE_SIZE);
+ 	int ret, i;
+ 
++	/* You can't remove system.nfs4_acl: */
++	if (buflen == 0)
++		return -EINVAL;
+ 	if (!nfs4_server_supports_acls(server))
+ 		return -EOPNOTSUPP;
+ 	if (npages > ARRAY_SIZE(pages))
+diff --git a/fs/squashfs/export.c b/fs/squashfs/export.c
+index eb02072d28dd6..723763746238d 100644
+--- a/fs/squashfs/export.c
++++ b/fs/squashfs/export.c
+@@ -152,14 +152,18 @@ __le64 *squashfs_read_inode_lookup_table(struct super_block *sb,
+ 		start = le64_to_cpu(table[n]);
+ 		end = le64_to_cpu(table[n + 1]);
+ 
+-		if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
++		if (start >= end
++		    || (end - start) >
++		    (SQUASHFS_METADATA_SIZE + SQUASHFS_BLOCK_OFFSET)) {
+ 			kfree(table);
+ 			return ERR_PTR(-EINVAL);
+ 		}
+ 	}
+ 
+ 	start = le64_to_cpu(table[indexes - 1]);
+-	if (start >= lookup_table_start || (lookup_table_start - start) > SQUASHFS_METADATA_SIZE) {
++	if (start >= lookup_table_start ||
++	    (lookup_table_start - start) >
++	    (SQUASHFS_METADATA_SIZE + SQUASHFS_BLOCK_OFFSET)) {
+ 		kfree(table);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+diff --git a/fs/squashfs/id.c b/fs/squashfs/id.c
+index 11581bf31af41..ea5387679723f 100644
+--- a/fs/squashfs/id.c
++++ b/fs/squashfs/id.c
+@@ -97,14 +97,16 @@ __le64 *squashfs_read_id_index_table(struct super_block *sb,
+ 		start = le64_to_cpu(table[n]);
+ 		end = le64_to_cpu(table[n + 1]);
+ 
+-		if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
++		if (start >= end || (end - start) >
++				(SQUASHFS_METADATA_SIZE + SQUASHFS_BLOCK_OFFSET)) {
+ 			kfree(table);
+ 			return ERR_PTR(-EINVAL);
+ 		}
+ 	}
+ 
+ 	start = le64_to_cpu(table[indexes - 1]);
+-	if (start >= id_table_start || (id_table_start - start) > SQUASHFS_METADATA_SIZE) {
++	if (start >= id_table_start || (id_table_start - start) >
++				(SQUASHFS_METADATA_SIZE + SQUASHFS_BLOCK_OFFSET)) {
+ 		kfree(table);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+diff --git a/fs/squashfs/squashfs_fs.h b/fs/squashfs/squashfs_fs.h
+index 8d64edb80ebf0..b3fdc8212c5f5 100644
+--- a/fs/squashfs/squashfs_fs.h
++++ b/fs/squashfs/squashfs_fs.h
+@@ -17,6 +17,7 @@
+ 
+ /* size of metadata (inode and directory) blocks */
+ #define SQUASHFS_METADATA_SIZE		8192
++#define SQUASHFS_BLOCK_OFFSET		2
+ 
+ /* default size of block device I/O */
+ #ifdef CONFIG_SQUASHFS_4K_DEVBLK_SIZE
+diff --git a/fs/squashfs/xattr_id.c b/fs/squashfs/xattr_id.c
+index ead66670b41a5..087cab8c78f4e 100644
+--- a/fs/squashfs/xattr_id.c
++++ b/fs/squashfs/xattr_id.c
+@@ -109,14 +109,16 @@ __le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start,
+ 		start = le64_to_cpu(table[n]);
+ 		end = le64_to_cpu(table[n + 1]);
+ 
+-		if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
++		if (start >= end || (end - start) >
++				(SQUASHFS_METADATA_SIZE + SQUASHFS_BLOCK_OFFSET)) {
+ 			kfree(table);
+ 			return ERR_PTR(-EINVAL);
+ 		}
+ 	}
+ 
+ 	start = le64_to_cpu(table[indexes - 1]);
+-	if (start >= table_start || (table_start - start) > SQUASHFS_METADATA_SIZE) {
++	if (start >= table_start || (table_start - start) >
++				(SQUASHFS_METADATA_SIZE + SQUASHFS_BLOCK_OFFSET)) {
+ 		kfree(table);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
+index 6d1879bf94403..37dac195adbb4 100644
+--- a/include/acpi/acpi_bus.h
++++ b/include/acpi/acpi_bus.h
+@@ -233,6 +233,7 @@ struct acpi_pnp_type {
+ 
+ struct acpi_device_pnp {
+ 	acpi_bus_id bus_id;		/* Object name */
++	int instance_no;		/* Instance number of this object */
+ 	struct acpi_pnp_type type;	/* ID type */
+ 	acpi_bus_address bus_address;	/* _ADR */
+ 	char *unique_id;		/* _UID */
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 34d8287cd7749..d7efbc5490e8c 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -393,7 +393,10 @@
+ 	. = ALIGN(8);							\
+ 	__start_static_call_sites = .;					\
+ 	KEEP(*(.static_call_sites))					\
+-	__stop_static_call_sites = .;
++	__stop_static_call_sites = .;					\
++	__start_static_call_tramp_key = .;				\
++	KEEP(*(.static_call_tramp_key))					\
++	__stop_static_call_tramp_key = .;
+ 
+ /*
+  * Allow architectures to handle ro_after_init data on their
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 76322b6452c80..dd236ef59db3e 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1059,7 +1059,7 @@ int bpf_prog_array_copy(struct bpf_prog_array *old_array,
+ 			struct bpf_prog *include_prog,
+ 			struct bpf_prog_array **new_array);
+ 
+-#define __BPF_PROG_RUN_ARRAY(array, ctx, func, check_non_null)	\
++#define __BPF_PROG_RUN_ARRAY(array, ctx, func, check_non_null, set_cg_storage) \
+ 	({						\
+ 		struct bpf_prog_array_item *_item;	\
+ 		struct bpf_prog *_prog;			\
+@@ -1072,7 +1072,8 @@ int bpf_prog_array_copy(struct bpf_prog_array *old_array,
+ 			goto _out;			\
+ 		_item = &_array->items[0];		\
+ 		while ((_prog = READ_ONCE(_item->prog))) {		\
+-			bpf_cgroup_storage_set(_item->cgroup_storage);	\
++			if (set_cg_storage)		\
++				bpf_cgroup_storage_set(_item->cgroup_storage);	\
+ 			_ret &= func(_prog, ctx);	\
+ 			_item++;			\
+ 		}					\
+@@ -1133,10 +1134,10 @@ _out:							\
+ 	})
+ 
+ #define BPF_PROG_RUN_ARRAY(array, ctx, func)		\
+-	__BPF_PROG_RUN_ARRAY(array, ctx, func, false)
++	__BPF_PROG_RUN_ARRAY(array, ctx, func, false, true)
+ 
+ #define BPF_PROG_RUN_ARRAY_CHECK(array, ctx, func)	\
+-	__BPF_PROG_RUN_ARRAY(array, ctx, func, true)
++	__BPF_PROG_RUN_ARRAY(array, ctx, func, true, false)
+ 
+ #ifdef CONFIG_BPF_SYSCALL
+ DECLARE_PER_CPU(int, bpf_prog_active);
+diff --git a/include/linux/brcmphy.h b/include/linux/brcmphy.h
+index d0bd226d6bd96..54665952d6ade 100644
+--- a/include/linux/brcmphy.h
++++ b/include/linux/brcmphy.h
+@@ -136,6 +136,7 @@
+ 
+ #define MII_BCM54XX_AUXCTL_SHDWSEL_MISC			0x07
+ #define MII_BCM54XX_AUXCTL_SHDWSEL_MISC_WIRESPEED_EN	0x0010
++#define MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_EN	0x0080
+ #define MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_SKEW_EN	0x0100
+ #define MII_BCM54XX_AUXCTL_MISC_FORCE_AMDIX		0x0200
+ #define MII_BCM54XX_AUXCTL_MISC_WREN			0x8000
+@@ -222,6 +223,9 @@
+ /* 11111: Mode Control Register */
+ #define BCM54XX_SHD_MODE		0x1f
+ #define BCM54XX_SHD_INTF_SEL_MASK	GENMASK(2, 1)	/* INTERF_SEL[1:0] */
++#define BCM54XX_SHD_INTF_SEL_RGMII	0x02
++#define BCM54XX_SHD_INTF_SEL_SGMII	0x04
++#define BCM54XX_SHD_INTF_SEL_GBIC	0x06
+ #define BCM54XX_SHD_MODE_1000BX		BIT(0)	/* Enable 1000-X registers */
+ 
+ /*
+diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
+index d2d7f9b6a2761..50cc070cb1f7c 100644
+--- a/include/linux/device-mapper.h
++++ b/include/linux/device-mapper.h
+@@ -246,7 +246,11 @@ struct target_type {
+ #define dm_target_passes_integrity(type) ((type)->features & DM_TARGET_PASSES_INTEGRITY)
+ 
+ /*
+- * Indicates that a target supports host-managed zoned block devices.
++ * Indicates support for zoned block devices:
++ * - DM_TARGET_ZONED_HM: the target also supports host-managed zoned
++ *   block devices but does not support combining different zoned models.
++ * - DM_TARGET_MIXED_ZONED_MODEL: the target supports combining multiple
++ *   devices with different zoned models.
+  */
+ #define DM_TARGET_ZONED_HM		0x00000040
+ #define dm_target_supports_zoned_hm(type) ((type)->features & DM_TARGET_ZONED_HM)
+@@ -257,6 +261,15 @@ struct target_type {
+ #define DM_TARGET_NOWAIT		0x00000080
+ #define dm_target_supports_nowait(type) ((type)->features & DM_TARGET_NOWAIT)
+ 
++#ifdef CONFIG_BLK_DEV_ZONED
++#define DM_TARGET_MIXED_ZONED_MODEL	0x00000200
++#define dm_target_supports_mixed_zoned_model(type) \
++	((type)->features & DM_TARGET_MIXED_ZONED_MODEL)
++#else
++#define DM_TARGET_MIXED_ZONED_MODEL	0x00000000
++#define dm_target_supports_mixed_zoned_model(type) (false)
++#endif
++
+ struct dm_target {
+ 	struct dm_table *table;
+ 	struct target_type *type;
+diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
+index 2ad6e92f124ad..0bff345c4bc68 100644
+--- a/include/linux/hugetlb_cgroup.h
++++ b/include/linux/hugetlb_cgroup.h
+@@ -113,6 +113,11 @@ static inline bool hugetlb_cgroup_disabled(void)
+ 	return !cgroup_subsys_enabled(hugetlb_cgrp_subsys);
+ }
+ 
++static inline void hugetlb_cgroup_put_rsvd_cgroup(struct hugetlb_cgroup *h_cg)
++{
++	css_put(&h_cg->css);
++}
++
+ extern int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
+ 					struct hugetlb_cgroup **ptr);
+ extern int hugetlb_cgroup_charge_cgroup_rsvd(int idx, unsigned long nr_pages,
+@@ -138,7 +143,8 @@ extern void hugetlb_cgroup_uncharge_counter(struct resv_map *resv,
+ 
+ extern void hugetlb_cgroup_uncharge_file_region(struct resv_map *resv,
+ 						struct file_region *rg,
+-						unsigned long nr_pages);
++						unsigned long nr_pages,
++						bool region_del);
+ 
+ extern void hugetlb_cgroup_file_init(void) __init;
+ extern void hugetlb_cgroup_migrate(struct page *oldhpage,
+@@ -147,7 +153,8 @@ extern void hugetlb_cgroup_migrate(struct page *oldhpage,
+ #else
+ static inline void hugetlb_cgroup_uncharge_file_region(struct resv_map *resv,
+ 						       struct file_region *rg,
+-						       unsigned long nr_pages)
++						       unsigned long nr_pages,
++						       bool region_del)
+ {
+ }
+ 
+@@ -185,6 +192,10 @@ static inline bool hugetlb_cgroup_disabled(void)
+ 	return true;
+ }
+ 
++static inline void hugetlb_cgroup_put_rsvd_cgroup(struct hugetlb_cgroup *h_cg)
++{
++}
++
+ static inline int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
+ 					       struct hugetlb_cgroup **ptr)
+ {
+diff --git a/include/linux/if_macvlan.h b/include/linux/if_macvlan.h
+index a367ead4bf4bb..e11555989090c 100644
+--- a/include/linux/if_macvlan.h
++++ b/include/linux/if_macvlan.h
+@@ -42,13 +42,14 @@ static inline void macvlan_count_rx(const struct macvlan_dev *vlan,
+ 	if (likely(success)) {
+ 		struct vlan_pcpu_stats *pcpu_stats;
+ 
+-		pcpu_stats = this_cpu_ptr(vlan->pcpu_stats);
++		pcpu_stats = get_cpu_ptr(vlan->pcpu_stats);
+ 		u64_stats_update_begin(&pcpu_stats->syncp);
+ 		pcpu_stats->rx_packets++;
+ 		pcpu_stats->rx_bytes += len;
+ 		if (multicast)
+ 			pcpu_stats->rx_multicast++;
+ 		u64_stats_update_end(&pcpu_stats->syncp);
++		put_cpu_ptr(vlan->pcpu_stats);
+ 	} else {
+ 		this_cpu_inc(vlan->pcpu_stats->rx_errors);
+ 	}
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index 922a7f6004658..c691b1ac95f88 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -937,9 +937,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
+ 	rcu_read_unlock();
+ }
+ 
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-void mem_cgroup_split_huge_fixup(struct page *head);
+-#endif
++void split_page_memcg(struct page *head, unsigned int nr);
+ 
+ #else /* CONFIG_MEMCG */
+ 
+@@ -1267,7 +1265,7 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
+ 	return 0;
+ }
+ 
+-static inline void mem_cgroup_split_huge_fixup(struct page *head)
++static inline void split_page_memcg(struct page *head, unsigned int nr)
+ {
+ }
+ 
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index b8eadd9f96802..08a48d3eaeaae 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -1414,13 +1414,26 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
+ #endif /* CONFIG_NUMA_BALANCING */
+ 
+ #ifdef CONFIG_KASAN_SW_TAGS
++
++/*
++ * KASAN per-page tags are stored xor'ed with 0xff. This allows to avoid
++ * setting tags for all pages to native kernel tag value 0xff, as the default
++ * value 0x00 maps to 0xff.
++ */
++
+ static inline u8 page_kasan_tag(const struct page *page)
+ {
+-	return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
++	u8 tag;
++
++	tag = (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
++	tag ^= 0xff;
++
++	return tag;
+ }
+ 
+ static inline void page_kasan_tag_set(struct page *page, u8 tag)
+ {
++	tag ^= 0xff;
+ 	page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
+ 	page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+ }
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 915f4f100383b..3433ecc9c1f74 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -23,6 +23,7 @@
+ #endif
+ #define AT_VECTOR_SIZE (2*(AT_VECTOR_SIZE_ARCH + AT_VECTOR_SIZE_BASE + 1))
+ 
++#define INIT_PASID	0
+ 
+ struct address_space;
+ struct mem_cgroup;
+diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
+index b8200782dedeb..1a6a9eb6d3fac 100644
+--- a/include/linux/mmu_notifier.h
++++ b/include/linux/mmu_notifier.h
+@@ -169,11 +169,11 @@ struct mmu_notifier_ops {
+ 	 * the last refcount is dropped.
+ 	 *
+ 	 * If blockable argument is set to false then the callback cannot
+-	 * sleep and has to return with -EAGAIN. 0 should be returned
+-	 * otherwise. Please note that if invalidate_range_start approves
+-	 * a non-blocking behavior then the same applies to
+-	 * invalidate_range_end.
+-	 *
++	 * sleep and has to return with -EAGAIN if sleeping would be required.
++	 * 0 should be returned otherwise. Please note that notifiers that can
++	 * fail invalidate_range_start are not allowed to implement
++	 * invalidate_range_end, as there is no mechanism for informing the
++	 * notifier that its start failed.
+ 	 */
+ 	int (*invalidate_range_start)(struct mmu_notifier *subscription,
+ 				      const struct mmu_notifier_range *range);
+diff --git a/include/linux/mutex.h b/include/linux/mutex.h
+index dcd185cbfe793..4d671fba3cab4 100644
+--- a/include/linux/mutex.h
++++ b/include/linux/mutex.h
+@@ -185,7 +185,7 @@ extern void mutex_lock_io(struct mutex *lock);
+ # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interruptible(lock)
+ # define mutex_lock_killable_nested(lock, subclass) mutex_lock_killable(lock)
+ # define mutex_lock_nest_lock(lock, nest_lock) mutex_lock(lock)
+-# define mutex_lock_io_nested(lock, subclass) mutex_lock(lock)
++# define mutex_lock_io_nested(lock, subclass) mutex_lock_io(lock)
+ #endif
+ 
+ /*
+diff --git a/include/linux/netfilter/x_tables.h b/include/linux/netfilter/x_tables.h
+index 8ebb641937571..8ec48466410a6 100644
+--- a/include/linux/netfilter/x_tables.h
++++ b/include/linux/netfilter/x_tables.h
+@@ -227,7 +227,7 @@ struct xt_table {
+ 	unsigned int valid_hooks;
+ 
+ 	/* Man behind the curtain... */
+-	struct xt_table_info __rcu *private;
++	struct xt_table_info *private;
+ 
+ 	/* Set this to THIS_MODULE if you are a module, otherwise NULL */
+ 	struct module *me;
+@@ -376,7 +376,7 @@ static inline unsigned int xt_write_recseq_begin(void)
+ 	 * since addend is most likely 1
+ 	 */
+ 	__this_cpu_add(xt_recseq.sequence, addend);
+-	smp_wmb();
++	smp_mb();
+ 
+ 	return addend;
+ }
+@@ -448,9 +448,6 @@ xt_get_per_cpu_counter(struct xt_counters *cnt, unsigned int cpu)
+ 
+ struct nf_hook_ops *xt_hook_ops_alloc(const struct xt_table *, nf_hookfn *);
+ 
+-struct xt_table_info
+-*xt_table_get_private_protected(const struct xt_table *table);
+-
+ #ifdef CONFIG_COMPAT
+ #include <net/compat.h>
+ 
+diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
+index d5570deff4003..b032f094a7827 100644
+--- a/include/linux/pagemap.h
++++ b/include/linux/pagemap.h
+@@ -559,7 +559,6 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
+ 	return pgoff;
+ }
+ 
+-/* This has the same layout as wait_bit_key - see fs/cachefiles/rdwr.c */
+ struct wait_page_key {
+ 	struct page *page;
+ 	int bit_nr;
+diff --git a/include/linux/phy.h b/include/linux/phy.h
+index 56563e5e0dc75..08725a262f320 100644
+--- a/include/linux/phy.h
++++ b/include/linux/phy.h
+@@ -499,6 +499,7 @@ struct macsec_ops;
+  *
+  * @speed: Current link speed
+  * @duplex: Current duplex
++ * @port: Current port
+  * @pause: Current pause
+  * @asym_pause: Current asymmetric pause
+  * @supported: Combined MAC/PHY supported linkmodes
+@@ -577,6 +578,7 @@ struct phy_device {
+ 	 */
+ 	int speed;
+ 	int duplex;
++	int port;
+ 	int pause;
+ 	int asym_pause;
+ 	u8 master_slave_get;
+diff --git a/include/linux/static_call.h b/include/linux/static_call.h
+index 695da4c9b3381..04e6042d252d3 100644
+--- a/include/linux/static_call.h
++++ b/include/linux/static_call.h
+@@ -107,26 +107,10 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool
+ 
+ #define STATIC_CALL_TRAMP_ADDR(name) &STATIC_CALL_TRAMP(name)
+ 
+-/*
+- * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
+- * the symbol table so that objtool can reference it when it generates the
+- * .static_call_sites section.
+- */
+-#define __static_call(name)						\
+-({									\
+-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
+-	&STATIC_CALL_TRAMP(name);					\
+-})
+-
+ #else
+ #define STATIC_CALL_TRAMP_ADDR(name) NULL
+ #endif
+ 
+-
+-#define DECLARE_STATIC_CALL(name, func)					\
+-	extern struct static_call_key STATIC_CALL_KEY(name);		\
+-	extern typeof(func) STATIC_CALL_TRAMP(name);
+-
+ #define static_call_update(name, func)					\
+ ({									\
+ 	BUILD_BUG_ON(!__same_type(*(func), STATIC_CALL_TRAMP(name)));	\
+@@ -154,6 +138,12 @@ struct static_call_key {
+ 	};
+ };
+ 
++/* For finding the key associated with a trampoline */
++struct static_call_tramp_key {
++	s32 tramp;
++	s32 key;
++};
++
+ extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);
+ extern int static_call_mod_init(struct module *mod);
+ extern int static_call_text_reserved(void *start, void *end);
+@@ -174,17 +164,23 @@ extern int static_call_text_reserved(void *start, void *end);
+ 	};								\
+ 	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
+ 
+-#define static_call(name)	__static_call(name)
+ #define static_call_cond(name)	(void)__static_call(name)
+ 
+ #define EXPORT_STATIC_CALL(name)					\
+ 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
+ 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+-
+ #define EXPORT_STATIC_CALL_GPL(name)					\
+ 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
+ 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+ 
++/* Leave the key unexported, so modules can't change static call targets: */
++#define EXPORT_STATIC_CALL_TRAMP(name)					\
++	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name));				\
++	ARCH_ADD_TRAMP_KEY(name)
++#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
++	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name));			\
++	ARCH_ADD_TRAMP_KEY(name)
++
+ #elif defined(CONFIG_HAVE_STATIC_CALL)
+ 
+ static inline int static_call_init(void) { return 0; }
+@@ -207,7 +203,6 @@ struct static_call_key {
+ 	};								\
+ 	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
+ 
+-#define static_call(name)	__static_call(name)
+ #define static_call_cond(name)	(void)__static_call(name)
+ 
+ static inline
+@@ -227,11 +222,16 @@ static inline int static_call_text_reserved(void *start, void *end)
+ #define EXPORT_STATIC_CALL(name)					\
+ 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
+ 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+-
+ #define EXPORT_STATIC_CALL_GPL(name)					\
+ 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
+ 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+ 
++/* Leave the key unexported, so modules can't change static call targets: */
++#define EXPORT_STATIC_CALL_TRAMP(name)					\
++	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
++#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
++	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
++
+ #else /* Generic implementation */
+ 
+ static inline int static_call_init(void) { return 0; }
+@@ -252,9 +252,6 @@ struct static_call_key {
+ 		.func = NULL,						\
+ 	}
+ 
+-#define static_call(name)						\
+-	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
+-
+ static inline void __static_call_nop(void) { }
+ 
+ /*
+diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
+index 89135bb35bf76..ae5662d368b98 100644
+--- a/include/linux/static_call_types.h
++++ b/include/linux/static_call_types.h
+@@ -4,11 +4,13 @@
+ 
+ #include <linux/types.h>
+ #include <linux/stringify.h>
++#include <linux/compiler.h>
+ 
+ #define STATIC_CALL_KEY_PREFIX		__SCK__
+ #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
+ #define STATIC_CALL_KEY_PREFIX_LEN	(sizeof(STATIC_CALL_KEY_PREFIX_STR) - 1)
+ #define STATIC_CALL_KEY(name)		__PASTE(STATIC_CALL_KEY_PREFIX, name)
++#define STATIC_CALL_KEY_STR(name)	__stringify(STATIC_CALL_KEY(name))
+ 
+ #define STATIC_CALL_TRAMP_PREFIX	__SCT__
+ #define STATIC_CALL_TRAMP_PREFIX_STR	__stringify(STATIC_CALL_TRAMP_PREFIX)
+@@ -32,4 +34,52 @@ struct static_call_site {
+ 	s32 key;
+ };
+ 
++#define DECLARE_STATIC_CALL(name, func)					\
++	extern struct static_call_key STATIC_CALL_KEY(name);		\
++	extern typeof(func) STATIC_CALL_TRAMP(name);
++
++#ifdef CONFIG_HAVE_STATIC_CALL
++
++#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
++
++#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
++
++/*
++ * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
++ * the symbol table so that objtool can reference it when it generates the
++ * .static_call_sites section.
++ */
++#define __STATIC_CALL_ADDRESSABLE(name) \
++	__ADDRESSABLE(STATIC_CALL_KEY(name))
++
++#define __static_call(name)						\
++({									\
++	__STATIC_CALL_ADDRESSABLE(name);				\
++	__raw_static_call(name);					\
++})
++
++#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
++
++#define __STATIC_CALL_ADDRESSABLE(name)
++#define __static_call(name)	__raw_static_call(name)
++
++#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
++
++#ifdef MODULE
++#define __STATIC_CALL_MOD_ADDRESSABLE(name)
++#define static_call_mod(name)	__raw_static_call(name)
++#else
++#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
++#define static_call_mod(name)	__static_call(name)
++#endif
++
++#define static_call(name)	__static_call(name)
++
++#else
++
++#define static_call(name)						\
++	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
++
++#endif /* CONFIG_HAVE_STATIC_CALL */
++
+ #endif /* _STATIC_CALL_TYPES_H */
+diff --git a/include/linux/u64_stats_sync.h b/include/linux/u64_stats_sync.h
+index c6abb79501b33..e81856c0ba134 100644
+--- a/include/linux/u64_stats_sync.h
++++ b/include/linux/u64_stats_sync.h
+@@ -115,12 +115,13 @@ static inline void u64_stats_inc(u64_stats_t *p)
+ }
+ #endif
+ 
++#if BITS_PER_LONG == 32 && defined(CONFIG_SMP)
++#define u64_stats_init(syncp)	seqcount_init(&(syncp)->seq)
++#else
+ static inline void u64_stats_init(struct u64_stats_sync *syncp)
+ {
+-#if BITS_PER_LONG == 32 && defined(CONFIG_SMP)
+-	seqcount_init(&syncp->seq);
+-#endif
+ }
++#endif
+ 
+ static inline void u64_stats_update_begin(struct u64_stats_sync *syncp)
+ {
+diff --git a/include/linux/usermode_driver.h b/include/linux/usermode_driver.h
+index 073a9e0ec07d0..ad970416260dd 100644
+--- a/include/linux/usermode_driver.h
++++ b/include/linux/usermode_driver.h
+@@ -14,5 +14,6 @@ struct umd_info {
+ int umd_load_blob(struct umd_info *info, const void *data, size_t len);
+ int umd_unload_blob(struct umd_info *info);
+ int fork_usermode_driver(struct umd_info *info);
++void umd_cleanup_helper(struct umd_info *info);
+ 
+ #endif /* __LINUX_USERMODE_DRIVER_H__ */
+diff --git a/include/net/dst.h b/include/net/dst.h
+index 8ea8812b0b418..acd15c544cf37 100644
+--- a/include/net/dst.h
++++ b/include/net/dst.h
+@@ -535,4 +535,15 @@ static inline void skb_dst_update_pmtu_no_confirm(struct sk_buff *skb, u32 mtu)
+ 		dst->ops->update_pmtu(dst, NULL, skb, mtu, false);
+ }
+ 
++struct dst_entry *dst_blackhole_check(struct dst_entry *dst, u32 cookie);
++void dst_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
++			       struct sk_buff *skb, u32 mtu, bool confirm_neigh);
++void dst_blackhole_redirect(struct dst_entry *dst, struct sock *sk,
++			    struct sk_buff *skb);
++u32 *dst_blackhole_cow_metrics(struct dst_entry *dst, unsigned long old);
++struct neighbour *dst_blackhole_neigh_lookup(const struct dst_entry *dst,
++					     struct sk_buff *skb,
++					     const void *daddr);
++unsigned int dst_blackhole_mtu(const struct dst_entry *dst);
++
+ #endif /* _NET_DST_H */
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index 111d7771b2081..aa92af3dd444d 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -284,7 +284,7 @@ static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk)
+ 	return inet_csk_reqsk_queue_len(sk) >= sk->sk_max_ack_backlog;
+ }
+ 
+-void inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req);
++bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req);
+ void inet_csk_reqsk_queue_drop_and_put(struct sock *sk, struct request_sock *req);
+ 
+ static inline void inet_csk_prepare_for_destroy_sock(struct sock *sk)
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index c1c0a4ff92ae9..ed4a9d098164f 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1508,6 +1508,7 @@ struct nft_trans_flowtable {
+ 	struct nft_flowtable		*flowtable;
+ 	bool				update;
+ 	struct list_head		hook_list;
++	u32				flags;
+ };
+ 
+ #define nft_trans_flowtable(trans)	\
+@@ -1516,6 +1517,8 @@ struct nft_trans_flowtable {
+ 	(((struct nft_trans_flowtable *)trans->data)->update)
+ #define nft_trans_flowtable_hooks(trans)	\
+ 	(((struct nft_trans_flowtable *)trans->data)->hook_list)
++#define nft_trans_flowtable_flags(trans)	\
++	(((struct nft_trans_flowtable *)trans->data)->flags)
+ 
+ int __init nft_chain_filter_init(void);
+ void nft_chain_filter_fini(void);
+diff --git a/include/net/nexthop.h b/include/net/nexthop.h
+index 2fd76a9b6dc8b..4c8c9fe9a3f0e 100644
+--- a/include/net/nexthop.h
++++ b/include/net/nexthop.h
+@@ -362,6 +362,7 @@ static inline struct fib_nh *fib_info_nh(struct fib_info *fi, int nhsel)
+ int fib6_check_nexthop(struct nexthop *nh, struct fib6_config *cfg,
+ 		       struct netlink_ext_ack *extack);
+ 
++/* Caller should either hold rcu_read_lock(), or RTNL. */
+ static inline struct fib6_nh *nexthop_fib6_nh(struct nexthop *nh)
+ {
+ 	struct nh_info *nhi;
+@@ -382,6 +383,29 @@ static inline struct fib6_nh *nexthop_fib6_nh(struct nexthop *nh)
+ 	return NULL;
+ }
+ 
++/* Variant of nexthop_fib6_nh().
++ * Caller should either hold rcu_read_lock_bh(), or RTNL.
++ */
++static inline struct fib6_nh *nexthop_fib6_nh_bh(struct nexthop *nh)
++{
++	struct nh_info *nhi;
++
++	if (nh->is_group) {
++		struct nh_group *nh_grp;
++
++		nh_grp = rcu_dereference_bh_rtnl(nh->nh_grp);
++		nh = nexthop_mpath_select(nh_grp, 0);
++		if (!nh)
++			return NULL;
++	}
++
++	nhi = rcu_dereference_bh_rtnl(nh->nh_info);
++	if (nhi->family == AF_INET6)
++		return &nhi->fib6_nh;
++
++	return NULL;
++}
++
+ static inline struct net_device *fib6_info_nh_dev(struct fib6_info *f6i)
+ {
+ 	struct fib6_nh *fib6_nh;
+diff --git a/include/net/red.h b/include/net/red.h
+index 932f0d79d60cb..9e6647c4ccd1f 100644
+--- a/include/net/red.h
++++ b/include/net/red.h
+@@ -168,7 +168,8 @@ static inline void red_set_vars(struct red_vars *v)
+ 	v->qcount	= -1;
+ }
+ 
+-static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog, u8 Scell_log)
++static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog,
++				    u8 Scell_log, u8 *stab)
+ {
+ 	if (fls(qth_min) + Wlog > 32)
+ 		return false;
+@@ -178,6 +179,13 @@ static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog, u8 Scell_
+ 		return false;
+ 	if (qth_max < qth_min)
+ 		return false;
++	if (stab) {
++		int i;
++
++		for (i = 0; i < RED_STAB_SIZE; i++)
++			if (stab[i] >= 32)
++				return false;
++	}
+ 	return true;
+ }
+ 
+diff --git a/include/net/rtnetlink.h b/include/net/rtnetlink.h
+index e2091bb2b3a8e..4da61c950e931 100644
+--- a/include/net/rtnetlink.h
++++ b/include/net/rtnetlink.h
+@@ -33,6 +33,7 @@ static inline int rtnl_msg_family(const struct nlmsghdr *nlh)
+  *
+  *	@list: Used internally
+  *	@kind: Identifier
++ *	@netns_refund: Physical device, move to init_net on netns exit
+  *	@maxtype: Highest device specific netlink attribute number
+  *	@policy: Netlink policy for device specific attribute validation
+  *	@validate: Optional validation function for netlink/changelink parameters
+@@ -64,6 +65,7 @@ struct rtnl_link_ops {
+ 	size_t			priv_size;
+ 	void			(*setup)(struct net_device *dev);
+ 
++	bool			netns_refund;
+ 	unsigned int		maxtype;
+ 	const struct nla_policy	*policy;
+ 	int			(*validate)(struct nlattr *tb[],
+diff --git a/include/uapi/linux/psample.h b/include/uapi/linux/psample.h
+index aea26ab1431c1..bff5032c98df4 100644
+--- a/include/uapi/linux/psample.h
++++ b/include/uapi/linux/psample.h
+@@ -3,7 +3,6 @@
+ #define __UAPI_PSAMPLE_H
+ 
+ enum {
+-	/* sampled packet metadata */
+ 	PSAMPLE_ATTR_IIFINDEX,
+ 	PSAMPLE_ATTR_OIFINDEX,
+ 	PSAMPLE_ATTR_ORIGSIZE,
+@@ -11,10 +10,8 @@ enum {
+ 	PSAMPLE_ATTR_GROUP_SEQ,
+ 	PSAMPLE_ATTR_SAMPLE_RATE,
+ 	PSAMPLE_ATTR_DATA,
+-	PSAMPLE_ATTR_TUNNEL,
+-
+-	/* commands attributes */
+ 	PSAMPLE_ATTR_GROUP_REFCOUNT,
++	PSAMPLE_ATTR_TUNNEL,
+ 
+ 	__PSAMPLE_ATTR_MAX
+ };
+diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c
+index c2a501cd90eba..a4ac48c7dada1 100644
+--- a/kernel/bpf/bpf_inode_storage.c
++++ b/kernel/bpf/bpf_inode_storage.c
+@@ -109,7 +109,7 @@ static void *bpf_fd_inode_storage_lookup_elem(struct bpf_map *map, void *key)
+ 	fd = *(int *)key;
+ 	f = fget_raw(fd);
+ 	if (!f)
+-		return NULL;
++		return ERR_PTR(-EBADF);
+ 
+ 	sdata = inode_storage_lookup(f->f_inode, map, true);
+ 	fput(f);
+diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c
+index 79c5772465f14..53736e52c1dfa 100644
+--- a/kernel/bpf/preload/bpf_preload_kern.c
++++ b/kernel/bpf/preload/bpf_preload_kern.c
+@@ -60,9 +60,12 @@ static int finish(void)
+ 			 &magic, sizeof(magic), &pos);
+ 	if (n != sizeof(magic))
+ 		return -EPIPE;
++
+ 	tgid = umd_ops.info.tgid;
+-	wait_event(tgid->wait_pidfd, thread_group_exited(tgid));
+-	umd_ops.info.tgid = NULL;
++	if (tgid) {
++		wait_event(tgid->wait_pidfd, thread_group_exited(tgid));
++		umd_cleanup_helper(&umd_ops.info);
++	}
+ 	return 0;
+ }
+ 
+@@ -80,10 +83,18 @@ static int __init load_umd(void)
+ 
+ static void __exit fini_umd(void)
+ {
++	struct pid *tgid;
++
+ 	bpf_preload_ops = NULL;
++
+ 	/* kill UMD in case it's still there due to earlier error */
+-	kill_pid(umd_ops.info.tgid, SIGKILL, 1);
+-	umd_ops.info.tgid = NULL;
++	tgid = umd_ops.info.tgid;
++	if (tgid) {
++		kill_pid(tgid, SIGKILL, 1);
++
++		wait_event(tgid->wait_pidfd, thread_group_exited(tgid));
++		umd_cleanup_helper(&umd_ops.info);
++	}
+ 	umd_unload_blob(&umd_ops.info);
+ }
+ late_initcall(load_umd);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index c675fdbd3dce1..7c044d377926c 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -992,6 +992,13 @@ static void mm_init_owner(struct mm_struct *mm, struct task_struct *p)
+ #endif
+ }
+ 
++static void mm_init_pasid(struct mm_struct *mm)
++{
++#ifdef CONFIG_IOMMU_SUPPORT
++	mm->pasid = INIT_PASID;
++#endif
++}
++
+ static void mm_init_uprobes_state(struct mm_struct *mm)
+ {
+ #ifdef CONFIG_UPROBES
+@@ -1022,6 +1029,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
+ 	mm_init_cpumask(mm);
+ 	mm_init_aio(mm);
+ 	mm_init_owner(mm, p);
++	mm_init_pasid(mm);
+ 	RCU_INIT_POINTER(mm->exe_file, NULL);
+ 	mmu_notifier_subscriptions_init(mm);
+ 	init_tlb_flush_pending(mm);
+diff --git a/kernel/gcov/clang.c b/kernel/gcov/clang.c
+index c94b820a1b62c..8743150db2acc 100644
+--- a/kernel/gcov/clang.c
++++ b/kernel/gcov/clang.c
+@@ -75,7 +75,9 @@ struct gcov_fn_info {
+ 
+ 	u32 num_counters;
+ 	u64 *counters;
++#if CONFIG_CLANG_VERSION < 110000
+ 	const char *function_name;
++#endif
+ };
+ 
+ static struct gcov_info *current_info;
+@@ -105,6 +107,7 @@ void llvm_gcov_init(llvm_gcov_callback writeout, llvm_gcov_callback flush)
+ }
+ EXPORT_SYMBOL(llvm_gcov_init);
+ 
++#if CONFIG_CLANG_VERSION < 110000
+ void llvm_gcda_start_file(const char *orig_filename, const char version[4],
+ 		u32 checksum)
+ {
+@@ -113,7 +116,17 @@ void llvm_gcda_start_file(const char *orig_filename, const char version[4],
+ 	current_info->checksum = checksum;
+ }
+ EXPORT_SYMBOL(llvm_gcda_start_file);
++#else
++void llvm_gcda_start_file(const char *orig_filename, u32 version, u32 checksum)
++{
++	current_info->filename = orig_filename;
++	current_info->version = version;
++	current_info->checksum = checksum;
++}
++EXPORT_SYMBOL(llvm_gcda_start_file);
++#endif
+ 
++#if CONFIG_CLANG_VERSION < 110000
+ void llvm_gcda_emit_function(u32 ident, const char *function_name,
+ 		u32 func_checksum, u8 use_extra_checksum, u32 cfg_checksum)
+ {
+@@ -133,6 +146,24 @@ void llvm_gcda_emit_function(u32 ident, const char *function_name,
+ 	list_add_tail(&info->head, &current_info->functions);
+ }
+ EXPORT_SYMBOL(llvm_gcda_emit_function);
++#else
++void llvm_gcda_emit_function(u32 ident, u32 func_checksum,
++		u8 use_extra_checksum, u32 cfg_checksum)
++{
++	struct gcov_fn_info *info = kzalloc(sizeof(*info), GFP_KERNEL);
++
++	if (!info)
++		return;
++
++	INIT_LIST_HEAD(&info->head);
++	info->ident = ident;
++	info->checksum = func_checksum;
++	info->use_extra_checksum = use_extra_checksum;
++	info->cfg_checksum = cfg_checksum;
++	list_add_tail(&info->head, &current_info->functions);
++}
++EXPORT_SYMBOL(llvm_gcda_emit_function);
++#endif
+ 
+ void llvm_gcda_emit_arcs(u32 num_counters, u64 *counters)
+ {
+@@ -295,6 +326,7 @@ void gcov_info_add(struct gcov_info *dst, struct gcov_info *src)
+ 	}
+ }
+ 
++#if CONFIG_CLANG_VERSION < 110000
+ static struct gcov_fn_info *gcov_fn_info_dup(struct gcov_fn_info *fn)
+ {
+ 	size_t cv_size; /* counter values size */
+@@ -322,6 +354,28 @@ err_name:
+ 	kfree(fn_dup);
+ 	return NULL;
+ }
++#else
++static struct gcov_fn_info *gcov_fn_info_dup(struct gcov_fn_info *fn)
++{
++	size_t cv_size; /* counter values size */
++	struct gcov_fn_info *fn_dup = kmemdup(fn, sizeof(*fn),
++			GFP_KERNEL);
++	if (!fn_dup)
++		return NULL;
++	INIT_LIST_HEAD(&fn_dup->head);
++
++	cv_size = fn->num_counters * sizeof(fn->counters[0]);
++	fn_dup->counters = vmalloc(cv_size);
++	if (!fn_dup->counters) {
++		kfree(fn_dup);
++		return NULL;
++	}
++
++	memcpy(fn_dup->counters, fn->counters, cv_size);
++
++	return fn_dup;
++}
++#endif
+ 
+ /**
+  * gcov_info_dup - duplicate profiling data set
+@@ -362,6 +416,7 @@ err:
+  * gcov_info_free - release memory for profiling data set duplicate
+  * @info: profiling data set duplicate to free
+  */
++#if CONFIG_CLANG_VERSION < 110000
+ void gcov_info_free(struct gcov_info *info)
+ {
+ 	struct gcov_fn_info *fn, *tmp;
+@@ -375,6 +430,20 @@ void gcov_info_free(struct gcov_info *info)
+ 	kfree(info->filename);
+ 	kfree(info);
+ }
++#else
++void gcov_info_free(struct gcov_info *info)
++{
++	struct gcov_fn_info *fn, *tmp;
++
++	list_for_each_entry_safe(fn, tmp, &info->functions, head) {
++		vfree(fn->counters);
++		list_del(&fn->head);
++		kfree(fn);
++	}
++	kfree(info->filename);
++	kfree(info);
++}
++#endif
+ 
+ #define ITER_STRIDE	PAGE_SIZE
+ 
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index c1ff7fa030abe..994ca8353543a 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -85,7 +85,7 @@ static int __init em_debug_init(void)
+ 
+ 	return 0;
+ }
+-core_initcall(em_debug_init);
++fs_initcall(em_debug_init);
+ #else /* CONFIG_DEBUG_FS */
+ static void em_debug_create_pd(struct device *dev) {}
+ static void em_debug_remove_pd(struct device *dev) {}
+diff --git a/kernel/static_call.c b/kernel/static_call.c
+index db914da6e7854..49efbdc5b4800 100644
+--- a/kernel/static_call.c
++++ b/kernel/static_call.c
+@@ -12,6 +12,8 @@
+ 
+ extern struct static_call_site __start_static_call_sites[],
+ 			       __stop_static_call_sites[];
++extern struct static_call_tramp_key __start_static_call_tramp_key[],
++				    __stop_static_call_tramp_key[];
+ 
+ static bool static_call_initialized;
+ 
+@@ -33,27 +35,30 @@ static inline void *static_call_addr(struct static_call_site *site)
+ 	return (void *)((long)site->addr + (long)&site->addr);
+ }
+ 
++static inline unsigned long __static_call_key(const struct static_call_site *site)
++{
++	return (long)site->key + (long)&site->key;
++}
+ 
+ static inline struct static_call_key *static_call_key(const struct static_call_site *site)
+ {
+-	return (struct static_call_key *)
+-		(((long)site->key + (long)&site->key) & ~STATIC_CALL_SITE_FLAGS);
++	return (void *)(__static_call_key(site) & ~STATIC_CALL_SITE_FLAGS);
+ }
+ 
+ /* These assume the key is word-aligned. */
+ static inline bool static_call_is_init(struct static_call_site *site)
+ {
+-	return ((long)site->key + (long)&site->key) & STATIC_CALL_SITE_INIT;
++	return __static_call_key(site) & STATIC_CALL_SITE_INIT;
+ }
+ 
+ static inline bool static_call_is_tail(struct static_call_site *site)
+ {
+-	return ((long)site->key + (long)&site->key) & STATIC_CALL_SITE_TAIL;
++	return __static_call_key(site) & STATIC_CALL_SITE_TAIL;
+ }
+ 
+ static inline void static_call_set_init(struct static_call_site *site)
+ {
+-	site->key = ((long)static_call_key(site) | STATIC_CALL_SITE_INIT) -
++	site->key = (__static_call_key(site) | STATIC_CALL_SITE_INIT) -
+ 		    (long)&site->key;
+ }
+ 
+@@ -197,7 +202,7 @@ void __static_call_update(struct static_call_key *key, void *tramp, void *func)
+ 			}
+ 
+ 			arch_static_call_transform(site_addr, NULL, func,
+-				static_call_is_tail(site));
++						   static_call_is_tail(site));
+ 		}
+ 	}
+ 
+@@ -332,10 +337,60 @@ static int __static_call_mod_text_reserved(void *start, void *end)
+ 	return ret;
+ }
+ 
++static unsigned long tramp_key_lookup(unsigned long addr)
++{
++	struct static_call_tramp_key *start = __start_static_call_tramp_key;
++	struct static_call_tramp_key *stop = __stop_static_call_tramp_key;
++	struct static_call_tramp_key *tramp_key;
++
++	for (tramp_key = start; tramp_key != stop; tramp_key++) {
++		unsigned long tramp;
++
++		tramp = (long)tramp_key->tramp + (long)&tramp_key->tramp;
++		if (tramp == addr)
++			return (long)tramp_key->key + (long)&tramp_key->key;
++	}
++
++	return 0;
++}
++
+ static int static_call_add_module(struct module *mod)
+ {
+-	return __static_call_init(mod, mod->static_call_sites,
+-				  mod->static_call_sites + mod->num_static_call_sites);
++	struct static_call_site *start = mod->static_call_sites;
++	struct static_call_site *stop = start + mod->num_static_call_sites;
++	struct static_call_site *site;
++
++	for (site = start; site != stop; site++) {
++		unsigned long s_key = __static_call_key(site);
++		unsigned long addr = s_key & ~STATIC_CALL_SITE_FLAGS;
++		unsigned long key;
++
++		/*
++		 * Is the key is exported, 'addr' points to the key, which
++		 * means modules are allowed to call static_call_update() on
++		 * it.
++		 *
++		 * Otherwise, the key isn't exported, and 'addr' points to the
++		 * trampoline so we need to lookup the key.
++		 *
++		 * We go through this dance to prevent crazy modules from
++		 * abusing sensitive static calls.
++		 */
++		if (!kernel_text_address(addr))
++			continue;
++
++		key = tramp_key_lookup(addr);
++		if (!key) {
++			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
++				static_call_addr(site));
++			return -EINVAL;
++		}
++
++		key |= s_key & STATIC_CALL_SITE_FLAGS;
++		site->key = key - (long)&site->key;
++	}
++
++	return __static_call_init(mod, start, stop);
+ }
+ 
+ static void static_call_del_module(struct module *mod)
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 9c1bba8cc51b0..82041bbf8fc2d 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -5045,6 +5045,20 @@ struct ftrace_direct_func *ftrace_find_direct_func(unsigned long addr)
+ 	return NULL;
+ }
+ 
++static struct ftrace_direct_func *ftrace_alloc_direct_func(unsigned long addr)
++{
++	struct ftrace_direct_func *direct;
++
++	direct = kmalloc(sizeof(*direct), GFP_KERNEL);
++	if (!direct)
++		return NULL;
++	direct->addr = addr;
++	direct->count = 0;
++	list_add_rcu(&direct->next, &ftrace_direct_funcs);
++	ftrace_direct_func_count++;
++	return direct;
++}
++
+ /**
+  * register_ftrace_direct - Call a custom trampoline directly
+  * @ip: The address of the nop at the beginning of a function
+@@ -5120,15 +5134,11 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
+ 
+ 	direct = ftrace_find_direct_func(addr);
+ 	if (!direct) {
+-		direct = kmalloc(sizeof(*direct), GFP_KERNEL);
++		direct = ftrace_alloc_direct_func(addr);
+ 		if (!direct) {
+ 			kfree(entry);
+ 			goto out_unlock;
+ 		}
+-		direct->addr = addr;
+-		direct->count = 0;
+-		list_add_rcu(&direct->next, &ftrace_direct_funcs);
+-		ftrace_direct_func_count++;
+ 	}
+ 
+ 	entry->ip = ip;
+@@ -5329,6 +5339,7 @@ int __weak ftrace_modify_direct_caller(struct ftrace_func_entry *entry,
+ int modify_ftrace_direct(unsigned long ip,
+ 			 unsigned long old_addr, unsigned long new_addr)
+ {
++	struct ftrace_direct_func *direct, *new_direct = NULL;
+ 	struct ftrace_func_entry *entry;
+ 	struct dyn_ftrace *rec;
+ 	int ret = -ENODEV;
+@@ -5344,6 +5355,20 @@ int modify_ftrace_direct(unsigned long ip,
+ 	if (entry->direct != old_addr)
+ 		goto out_unlock;
+ 
++	direct = ftrace_find_direct_func(old_addr);
++	if (WARN_ON(!direct))
++		goto out_unlock;
++	if (direct->count > 1) {
++		ret = -ENOMEM;
++		new_direct = ftrace_alloc_direct_func(new_addr);
++		if (!new_direct)
++			goto out_unlock;
++		direct->count--;
++		new_direct->count++;
++	} else {
++		direct->addr = new_addr;
++	}
++
+ 	/*
+ 	 * If there's no other ftrace callback on the rec->ip location,
+ 	 * then it can be changed directly by the architecture.
+@@ -5357,6 +5382,14 @@ int modify_ftrace_direct(unsigned long ip,
+ 		ret = 0;
+ 	}
+ 
++	if (unlikely(ret && new_direct)) {
++		direct->count++;
++		list_del_rcu(&new_direct->next);
++		synchronize_rcu_tasks();
++		kfree(new_direct);
++		ftrace_direct_func_count--;
++	}
++
+  out_unlock:
+ 	mutex_unlock(&ftrace_lock);
+ 	mutex_unlock(&direct_mutex);
+diff --git a/kernel/usermode_driver.c b/kernel/usermode_driver.c
+index 0b35212ffc3d0..bb7bb3b478abf 100644
+--- a/kernel/usermode_driver.c
++++ b/kernel/usermode_driver.c
+@@ -139,13 +139,22 @@ static void umd_cleanup(struct subprocess_info *info)
+ 	struct umd_info *umd_info = info->data;
+ 
+ 	/* cleanup if umh_setup() was successful but exec failed */
+-	if (info->retval) {
+-		fput(umd_info->pipe_to_umh);
+-		fput(umd_info->pipe_from_umh);
+-		put_pid(umd_info->tgid);
+-		umd_info->tgid = NULL;
+-	}
++	if (info->retval)
++		umd_cleanup_helper(umd_info);
++}
++
++/**
++ * umd_cleanup_helper - release the resources which were allocated in umd_setup
++ * @info: information about usermode driver
++ */
++void umd_cleanup_helper(struct umd_info *info)
++{
++	fput(info->pipe_to_umh);
++	fput(info->pipe_from_umh);
++	put_pid(info->tgid);
++	info->tgid = NULL;
+ }
++EXPORT_SYMBOL_GPL(umd_cleanup_helper);
+ 
+ /**
+  * fork_usermode_driver - fork a usermode driver
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 4a78514830d5a..d9ade23ac2b22 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2433,7 +2433,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
+ 	lruvec = mem_cgroup_page_lruvec(head, pgdat);
+ 
+ 	/* complete memcg works before add pages to LRU */
+-	mem_cgroup_split_huge_fixup(head);
++	split_page_memcg(head, nr);
+ 
+ 	if (PageAnon(head) && PageSwapCache(head)) {
+ 		swp_entry_t entry = { .val = page_private(head) };
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 94c6b3d4df96a..573f1a0183be6 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -285,6 +285,17 @@ static void record_hugetlb_cgroup_uncharge_info(struct hugetlb_cgroup *h_cg,
+ 		nrg->reservation_counter =
+ 			&h_cg->rsvd_hugepage[hstate_index(h)];
+ 		nrg->css = &h_cg->css;
++		/*
++		 * The caller will hold exactly one h_cg->css reference for the
++		 * whole contiguous reservation region. But this area might be
++		 * scattered when there are already some file_regions reside in
++		 * it. As a result, many file_regions may share only one css
++		 * reference. In order to ensure that one file_region must hold
++		 * exactly one h_cg->css reference, we should do css_get for
++		 * each file_region and leave the reference held by caller
++		 * untouched.
++		 */
++		css_get(&h_cg->css);
+ 		if (!resv->pages_per_hpage)
+ 			resv->pages_per_hpage = pages_per_huge_page(h);
+ 		/* pages_per_hpage should be the same for all entries in
+@@ -298,6 +309,14 @@ static void record_hugetlb_cgroup_uncharge_info(struct hugetlb_cgroup *h_cg,
+ #endif
+ }
+ 
++static void put_uncharge_info(struct file_region *rg)
++{
++#ifdef CONFIG_CGROUP_HUGETLB
++	if (rg->css)
++		css_put(rg->css);
++#endif
++}
++
+ static bool has_same_uncharge_info(struct file_region *rg,
+ 				   struct file_region *org)
+ {
+@@ -321,6 +340,7 @@ static void coalesce_file_region(struct resv_map *resv, struct file_region *rg)
+ 		prg->to = rg->to;
+ 
+ 		list_del(&rg->link);
++		put_uncharge_info(rg);
+ 		kfree(rg);
+ 
+ 		rg = prg;
+@@ -332,6 +352,7 @@ static void coalesce_file_region(struct resv_map *resv, struct file_region *rg)
+ 		nrg->from = rg->from;
+ 
+ 		list_del(&rg->link);
++		put_uncharge_info(rg);
+ 		kfree(rg);
+ 	}
+ }
+@@ -664,7 +685,7 @@ retry:
+ 
+ 			del += t - f;
+ 			hugetlb_cgroup_uncharge_file_region(
+-				resv, rg, t - f);
++				resv, rg, t - f, false);
+ 
+ 			/* New entry for end of split region */
+ 			nrg->from = t;
+@@ -685,7 +706,7 @@ retry:
+ 		if (f <= rg->from && t >= rg->to) { /* Remove entire region */
+ 			del += rg->to - rg->from;
+ 			hugetlb_cgroup_uncharge_file_region(resv, rg,
+-							    rg->to - rg->from);
++							    rg->to - rg->from, true);
+ 			list_del(&rg->link);
+ 			kfree(rg);
+ 			continue;
+@@ -693,13 +714,13 @@ retry:
+ 
+ 		if (f <= rg->from) {	/* Trim beginning of region */
+ 			hugetlb_cgroup_uncharge_file_region(resv, rg,
+-							    t - rg->from);
++							    t - rg->from, false);
+ 
+ 			del += t - rg->from;
+ 			rg->from = t;
+ 		} else {		/* Trim end of region */
+ 			hugetlb_cgroup_uncharge_file_region(resv, rg,
+-							    rg->to - f);
++							    rg->to - f, false);
+ 
+ 			del += rg->to - f;
+ 			rg->to = f;
+@@ -5189,6 +5210,10 @@ int hugetlb_reserve_pages(struct inode *inode,
+ 			 */
+ 			long rsv_adjust;
+ 
++			/*
++			 * hugetlb_cgroup_uncharge_cgroup_rsvd() will put the
++			 * reference to h_cg->css. See comment below for detail.
++			 */
+ 			hugetlb_cgroup_uncharge_cgroup_rsvd(
+ 				hstate_index(h),
+ 				(chg - add) * pages_per_huge_page(h), h_cg);
+@@ -5196,6 +5221,14 @@ int hugetlb_reserve_pages(struct inode *inode,
+ 			rsv_adjust = hugepage_subpool_put_pages(spool,
+ 								chg - add);
+ 			hugetlb_acct_memory(h, -rsv_adjust);
++		} else if (h_cg) {
++			/*
++			 * The file_regions will hold their own reference to
++			 * h_cg->css. So we should release the reference held
++			 * via hugetlb_cgroup_charge_cgroup_rsvd() when we are
++			 * done.
++			 */
++			hugetlb_cgroup_put_rsvd_cgroup(h_cg);
+ 		}
+ 	}
+ 	return 0;
+diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
+index 9182848dda3e0..1348819f546cb 100644
+--- a/mm/hugetlb_cgroup.c
++++ b/mm/hugetlb_cgroup.c
+@@ -391,7 +391,8 @@ void hugetlb_cgroup_uncharge_counter(struct resv_map *resv, unsigned long start,
+ 
+ void hugetlb_cgroup_uncharge_file_region(struct resv_map *resv,
+ 					 struct file_region *rg,
+-					 unsigned long nr_pages)
++					 unsigned long nr_pages,
++					 bool region_del)
+ {
+ 	if (hugetlb_cgroup_disabled() || !resv || !rg || !nr_pages)
+ 		return;
+@@ -400,7 +401,12 @@ void hugetlb_cgroup_uncharge_file_region(struct resv_map *resv,
+ 	    !resv->reservation_counter) {
+ 		page_counter_uncharge(rg->reservation_counter,
+ 				      nr_pages * resv->pages_per_hpage);
+-		css_put(rg->css);
++		/*
++		 * Only do css_put(rg->css) when we delete the entire region
++		 * because one file_region must hold exactly one css reference.
++		 */
++		if (region_del)
++			css_put(rg->css);
+ 	}
+ }
+ 
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index d6966f1ebc7af..d72d2b90474a4 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -3268,26 +3268,25 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
+ 
+ #endif /* CONFIG_MEMCG_KMEM */
+ 
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-
+ /*
+- * Because tail pages are not marked as "used", set it. We're under
+- * pgdat->lru_lock and migration entries setup in all page mappings.
++ * Because head->mem_cgroup is not set on tails, set it now.
+  */
+-void mem_cgroup_split_huge_fixup(struct page *head)
++void split_page_memcg(struct page *head, unsigned int nr)
+ {
+ 	struct mem_cgroup *memcg = head->mem_cgroup;
++	int kmemcg = PageKmemcg(head);
+ 	int i;
+ 
+-	if (mem_cgroup_disabled())
++	if (mem_cgroup_disabled() || !memcg)
+ 		return;
+ 
+-	for (i = 1; i < HPAGE_PMD_NR; i++) {
+-		css_get(&memcg->css);
++	for (i = 1; i < nr; i++) {
+ 		head[i].mem_cgroup = memcg;
++		if (kmemcg)
++			__SetPageKmemcg(head + i);
+ 	}
++	css_get_many(&memcg->css, nr - 1);
+ }
+-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+ 
+ #ifdef CONFIG_MEMCG_SWAP
+ /**
+diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
+index 5654dd19addc0..07f42a7a60657 100644
+--- a/mm/mmu_notifier.c
++++ b/mm/mmu_notifier.c
+@@ -501,10 +501,33 @@ static int mn_hlist_invalidate_range_start(
+ 						"");
+ 				WARN_ON(mmu_notifier_range_blockable(range) ||
+ 					_ret != -EAGAIN);
++				/*
++				 * We call all the notifiers on any EAGAIN,
++				 * there is no way for a notifier to know if
++				 * its start method failed, thus a start that
++				 * does EAGAIN can't also do end.
++				 */
++				WARN_ON(ops->invalidate_range_end);
+ 				ret = _ret;
+ 			}
+ 		}
+ 	}
++
++	if (ret) {
++		/*
++		 * Must be non-blocking to get here.  If there are multiple
++		 * notifiers and one or more failed start, any that succeeded
++		 * start are expecting their end to be called.  Do so now.
++		 */
++		hlist_for_each_entry_rcu(subscription, &subscriptions->list,
++					 hlist, srcu_read_lock_held(&srcu)) {
++			if (!subscription->ops->invalidate_range_end)
++				continue;
++
++			subscription->ops->invalidate_range_end(subscription,
++								range);
++		}
++	}
+ 	srcu_read_unlock(&srcu, id);
+ 
+ 	return ret;
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 690f79c781cf7..7ffa706e5c305 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -3272,6 +3272,7 @@ void split_page(struct page *page, unsigned int order)
+ 	for (i = 1; i < (1 << order); i++)
+ 		set_page_refcounted(page + i);
+ 	split_page_owner(page, 1 << order);
++	split_page_memcg(page, 1 << order);
+ }
+ EXPORT_SYMBOL_GPL(split_page);
+ 
+diff --git a/mm/z3fold.c b/mm/z3fold.c
+index 0152ad9931a87..8ae944eeb8e20 100644
+--- a/mm/z3fold.c
++++ b/mm/z3fold.c
+@@ -1350,8 +1350,22 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
+ 			page = list_entry(pos, struct page, lru);
+ 
+ 			zhdr = page_address(page);
+-			if (test_bit(PAGE_HEADLESS, &page->private))
++			if (test_bit(PAGE_HEADLESS, &page->private)) {
++				/*
++				 * For non-headless pages, we wait to do this
++				 * until we have the page lock to avoid racing
++				 * with __z3fold_alloc(). Headless pages don't
++				 * have a lock (and __z3fold_alloc() will never
++				 * see them), but we still need to test and set
++				 * PAGE_CLAIMED to avoid racing with
++				 * z3fold_free(), so just do it now before
++				 * leaving the loop.
++				 */
++				if (test_and_set_bit(PAGE_CLAIMED, &page->private))
++					continue;
++
+ 				break;
++			}
+ 
+ 			if (kref_get_unless_zero(&zhdr->refcount) == 0) {
+ 				zhdr = NULL;
+diff --git a/net/bridge/br_switchdev.c b/net/bridge/br_switchdev.c
+index 015209bf44aa4..3c42095fa75fd 100644
+--- a/net/bridge/br_switchdev.c
++++ b/net/bridge/br_switchdev.c
+@@ -123,6 +123,8 @@ br_switchdev_fdb_notify(const struct net_bridge_fdb_entry *fdb, int type)
+ {
+ 	if (!fdb->dst)
+ 		return;
++	if (test_bit(BR_FDB_LOCAL, &fdb->flags))
++		return;
+ 
+ 	switch (type) {
+ 	case RTM_DELNEIGH:
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 8bd565f2073e7..ea1e227b8e544 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -196,7 +196,7 @@ static int isotp_send_fc(struct sock *sk, int ae, u8 flowstatus)
+ 	nskb->dev = dev;
+ 	can_skb_set_owner(nskb, sk);
+ 	ncf = (struct canfd_frame *)nskb->data;
+-	skb_put(nskb, so->ll.mtu);
++	skb_put_zero(nskb, so->ll.mtu);
+ 
+ 	/* create & send flow control reply */
+ 	ncf->can_id = so->txid;
+@@ -215,8 +215,7 @@ static int isotp_send_fc(struct sock *sk, int ae, u8 flowstatus)
+ 	if (ae)
+ 		ncf->data[0] = so->opt.ext_address;
+ 
+-	if (so->ll.mtu == CANFD_MTU)
+-		ncf->flags = so->ll.tx_flags;
++	ncf->flags = so->ll.tx_flags;
+ 
+ 	can_send_ret = can_send(nskb, 1);
+ 	if (can_send_ret)
+@@ -780,7 +779,7 @@ isotp_tx_burst:
+ 		can_skb_prv(skb)->skbcnt = 0;
+ 
+ 		cf = (struct canfd_frame *)skb->data;
+-		skb_put(skb, so->ll.mtu);
++		skb_put_zero(skb, so->ll.mtu);
+ 
+ 		/* create consecutive frame */
+ 		isotp_fill_dataframe(cf, so, ae, 0);
+@@ -790,8 +789,7 @@ isotp_tx_burst:
+ 		so->tx.sn %= 16;
+ 		so->tx.bs++;
+ 
+-		if (so->ll.mtu == CANFD_MTU)
+-			cf->flags = so->ll.tx_flags;
++		cf->flags = so->ll.tx_flags;
+ 
+ 		skb->dev = dev;
+ 		can_skb_set_owner(skb, sk);
+@@ -889,7 +887,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	so->tx.idx = 0;
+ 
+ 	cf = (struct canfd_frame *)skb->data;
+-	skb_put(skb, so->ll.mtu);
++	skb_put_zero(skb, so->ll.mtu);
+ 
+ 	/* take care of a potential SF_DL ESC offset for TX_DL > 8 */
+ 	off = (so->tx.ll_dl > CAN_MAX_DLEN) ? 1 : 0;
+@@ -934,8 +932,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	}
+ 
+ 	/* send the first or only CAN frame */
+-	if (so->ll.mtu == CANFD_MTU)
+-		cf->flags = so->ll.tx_flags;
++	cf->flags = so->ll.tx_flags;
+ 
+ 	skb->dev = dev;
+ 	skb->sk = sk;
+@@ -1212,7 +1209,8 @@ static int isotp_setsockopt(struct socket *sock, int level, int optname,
+ 			if (ll.mtu != CAN_MTU && ll.mtu != CANFD_MTU)
+ 				return -EINVAL;
+ 
+-			if (ll.mtu == CAN_MTU && ll.tx_dl > CAN_MAX_DLEN)
++			if (ll.mtu == CAN_MTU &&
++			    (ll.tx_dl > CAN_MAX_DLEN || ll.tx_flags != 0))
+ 				return -EINVAL;
+ 
+ 			memcpy(&so->ll, &ll, sizeof(ll));
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 75ca6c6d01d6e..62ff7121b22d3 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -1195,6 +1195,18 @@ static int __dev_alloc_name(struct net *net, const char *name, char *buf)
+ 			return -ENOMEM;
+ 
+ 		for_each_netdev(net, d) {
++			struct netdev_name_node *name_node;
++			list_for_each_entry(name_node, &d->name_node->list, list) {
++				if (!sscanf(name_node->name, name, &i))
++					continue;
++				if (i < 0 || i >= max_netdevices)
++					continue;
++
++				/*  avoid cases where sscanf is not exact inverse of printf */
++				snprintf(buf, IFNAMSIZ, name, i);
++				if (!strncmp(buf, name_node->name, IFNAMSIZ))
++					set_bit(i, inuse);
++			}
+ 			if (!sscanf(d->name, name, &i))
+ 				continue;
+ 			if (i < 0 || i >= max_netdevices)
+@@ -11094,7 +11106,7 @@ static void __net_exit default_device_exit(struct net *net)
+ 			continue;
+ 
+ 		/* Leave virtual devices for the generic cleanup */
+-		if (dev->rtnl_link_ops)
++		if (dev->rtnl_link_ops && !dev->rtnl_link_ops->netns_refund)
+ 			continue;
+ 
+ 		/* Push remaining network devices to init_net */
+diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
+index 571f191c06d94..db65ce62b625a 100644
+--- a/net/core/drop_monitor.c
++++ b/net/core/drop_monitor.c
+@@ -1053,6 +1053,20 @@ static int net_dm_hw_monitor_start(struct netlink_ext_ack *extack)
+ 	return 0;
+ 
+ err_module_put:
++	for_each_possible_cpu(cpu) {
++		struct per_cpu_dm_data *hw_data = &per_cpu(dm_hw_cpu_data, cpu);
++		struct sk_buff *skb;
++
++		del_timer_sync(&hw_data->send_timer);
++		cancel_work_sync(&hw_data->dm_alert_work);
++		while ((skb = __skb_dequeue(&hw_data->drop_queue))) {
++			struct devlink_trap_metadata *hw_metadata;
++
++			hw_metadata = NET_DM_SKB_CB(skb)->hw_metadata;
++			net_dm_hw_metadata_free(hw_metadata);
++			consume_skb(skb);
++		}
++	}
+ 	module_put(THIS_MODULE);
+ 	return rc;
+ }
+@@ -1134,6 +1148,15 @@ static int net_dm_trace_on_set(struct netlink_ext_ack *extack)
+ err_unregister_trace:
+ 	unregister_trace_kfree_skb(ops->kfree_skb_probe, NULL);
+ err_module_put:
++	for_each_possible_cpu(cpu) {
++		struct per_cpu_dm_data *data = &per_cpu(dm_cpu_data, cpu);
++		struct sk_buff *skb;
++
++		del_timer_sync(&data->send_timer);
++		cancel_work_sync(&data->dm_alert_work);
++		while ((skb = __skb_dequeue(&data->drop_queue)))
++			consume_skb(skb);
++	}
+ 	module_put(THIS_MODULE);
+ 	return rc;
+ }
+diff --git a/net/core/dst.c b/net/core/dst.c
+index 0c01bd8d9d81e..fb3bcba87744d 100644
+--- a/net/core/dst.c
++++ b/net/core/dst.c
+@@ -237,37 +237,62 @@ void __dst_destroy_metrics_generic(struct dst_entry *dst, unsigned long old)
+ }
+ EXPORT_SYMBOL(__dst_destroy_metrics_generic);
+ 
+-static struct dst_ops md_dst_ops = {
+-	.family =		AF_UNSPEC,
+-};
++struct dst_entry *dst_blackhole_check(struct dst_entry *dst, u32 cookie)
++{
++	return NULL;
++}
+ 
+-static int dst_md_discard_out(struct net *net, struct sock *sk, struct sk_buff *skb)
++u32 *dst_blackhole_cow_metrics(struct dst_entry *dst, unsigned long old)
+ {
+-	WARN_ONCE(1, "Attempting to call output on metadata dst\n");
+-	kfree_skb(skb);
+-	return 0;
++	return NULL;
+ }
+ 
+-static int dst_md_discard(struct sk_buff *skb)
++struct neighbour *dst_blackhole_neigh_lookup(const struct dst_entry *dst,
++					     struct sk_buff *skb,
++					     const void *daddr)
+ {
+-	WARN_ONCE(1, "Attempting to call input on metadata dst\n");
+-	kfree_skb(skb);
+-	return 0;
++	return NULL;
++}
++
++void dst_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
++			       struct sk_buff *skb, u32 mtu,
++			       bool confirm_neigh)
++{
++}
++EXPORT_SYMBOL_GPL(dst_blackhole_update_pmtu);
++
++void dst_blackhole_redirect(struct dst_entry *dst, struct sock *sk,
++			    struct sk_buff *skb)
++{
++}
++EXPORT_SYMBOL_GPL(dst_blackhole_redirect);
++
++unsigned int dst_blackhole_mtu(const struct dst_entry *dst)
++{
++	unsigned int mtu = dst_metric_raw(dst, RTAX_MTU);
++
++	return mtu ? : dst->dev->mtu;
+ }
++EXPORT_SYMBOL_GPL(dst_blackhole_mtu);
++
++static struct dst_ops dst_blackhole_ops = {
++	.family		= AF_UNSPEC,
++	.neigh_lookup	= dst_blackhole_neigh_lookup,
++	.check		= dst_blackhole_check,
++	.cow_metrics	= dst_blackhole_cow_metrics,
++	.update_pmtu	= dst_blackhole_update_pmtu,
++	.redirect	= dst_blackhole_redirect,
++	.mtu		= dst_blackhole_mtu,
++};
+ 
+ static void __metadata_dst_init(struct metadata_dst *md_dst,
+ 				enum metadata_type type, u8 optslen)
+-
+ {
+ 	struct dst_entry *dst;
+ 
+ 	dst = &md_dst->dst;
+-	dst_init(dst, &md_dst_ops, NULL, 1, DST_OBSOLETE_NONE,
++	dst_init(dst, &dst_blackhole_ops, NULL, 1, DST_OBSOLETE_NONE,
+ 		 DST_METADATA | DST_NOCOUNT);
+-
+-	dst->input = dst_md_discard;
+-	dst->output = dst_md_discard_out;
+-
+ 	memset(dst + 1, 0, sizeof(*md_dst) + optslen - sizeof(*dst));
+ 	md_dst->type = type;
+ }
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index e21950a2c8970..c79be25b2e0c2 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -175,7 +175,7 @@ void skb_flow_get_icmp_tci(const struct sk_buff *skb,
+ 	 * avoid confusion with packets without such field
+ 	 */
+ 	if (icmp_has_id(ih->type))
+-		key_icmp->id = ih->un.echo.id ? : 1;
++		key_icmp->id = ih->un.echo.id ? ntohs(ih->un.echo.id) : 1;
+ 	else
+ 		key_icmp->id = 0;
+ }
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 78ee1b5acf1f1..49f4034bf1263 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -319,6 +319,11 @@ static int dccp_v6_conn_request(struct sock *sk, struct sk_buff *skb)
+ 	if (!ipv6_unicast_destination(skb))
+ 		return 0;	/* discard, don't send a reset here */
+ 
++	if (ipv6_addr_v4mapped(&ipv6_hdr(skb)->saddr)) {
++		__IP6_INC_STATS(sock_net(sk), NULL, IPSTATS_MIB_INHDRERRORS);
++		return 0;
++	}
++
+ 	if (dccp_bad_service_code(sk, service)) {
+ 		dcb->dccpd_reset_code = DCCP_RESET_CODE_BAD_SERVICE_CODE;
+ 		goto drop;
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 48d2b615edc26..1dfa561e8f981 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -705,12 +705,15 @@ static bool reqsk_queue_unlink(struct request_sock *req)
+ 	return found;
+ }
+ 
+-void inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
++bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
+ {
+-	if (reqsk_queue_unlink(req)) {
++	bool unlinked = reqsk_queue_unlink(req);
++
++	if (unlinked) {
+ 		reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
+ 		reqsk_put(req);
+ 	}
++	return unlinked;
+ }
+ EXPORT_SYMBOL(inet_csk_reqsk_queue_drop);
+ 
+diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c
+index c576a63d09db1..d1e04d2b5170e 100644
+--- a/net/ipv4/netfilter/arp_tables.c
++++ b/net/ipv4/netfilter/arp_tables.c
+@@ -203,7 +203,7 @@ unsigned int arpt_do_table(struct sk_buff *skb,
+ 
+ 	local_bh_disable();
+ 	addend = xt_write_recseq_begin();
+-	private = rcu_access_pointer(table->private);
++	private = READ_ONCE(table->private); /* Address dependency. */
+ 	cpu     = smp_processor_id();
+ 	table_base = private->entries;
+ 	jumpstack  = (struct arpt_entry **)private->jumpstack[cpu];
+@@ -649,7 +649,7 @@ static struct xt_counters *alloc_counters(const struct xt_table *table)
+ {
+ 	unsigned int countersize;
+ 	struct xt_counters *counters;
+-	const struct xt_table_info *private = xt_table_get_private_protected(table);
++	const struct xt_table_info *private = table->private;
+ 
+ 	/* We need atomic snapshot of counters: rest doesn't change
+ 	 * (other than comefrom, which userspace doesn't care
+@@ -673,7 +673,7 @@ static int copy_entries_to_user(unsigned int total_size,
+ 	unsigned int off, num;
+ 	const struct arpt_entry *e;
+ 	struct xt_counters *counters;
+-	struct xt_table_info *private = xt_table_get_private_protected(table);
++	struct xt_table_info *private = table->private;
+ 	int ret = 0;
+ 	void *loc_cpu_entry;
+ 
+@@ -807,7 +807,7 @@ static int get_info(struct net *net, void __user *user, const int *len)
+ 	t = xt_request_find_table_lock(net, NFPROTO_ARP, name);
+ 	if (!IS_ERR(t)) {
+ 		struct arpt_getinfo info;
+-		const struct xt_table_info *private = xt_table_get_private_protected(t);
++		const struct xt_table_info *private = t->private;
+ #ifdef CONFIG_COMPAT
+ 		struct xt_table_info tmp;
+ 
+@@ -860,7 +860,7 @@ static int get_entries(struct net *net, struct arpt_get_entries __user *uptr,
+ 
+ 	t = xt_find_table_lock(net, NFPROTO_ARP, get.name);
+ 	if (!IS_ERR(t)) {
+-		const struct xt_table_info *private = xt_table_get_private_protected(t);
++		const struct xt_table_info *private = t->private;
+ 
+ 		if (get.size == private->size)
+ 			ret = copy_entries_to_user(private->size,
+@@ -1017,7 +1017,7 @@ static int do_add_counters(struct net *net, sockptr_t arg, unsigned int len)
+ 	}
+ 
+ 	local_bh_disable();
+-	private = xt_table_get_private_protected(t);
++	private = t->private;
+ 	if (private->number != tmp.num_counters) {
+ 		ret = -EINVAL;
+ 		goto unlock_up_free;
+@@ -1330,7 +1330,7 @@ static int compat_copy_entries_to_user(unsigned int total_size,
+ 				       void __user *userptr)
+ {
+ 	struct xt_counters *counters;
+-	const struct xt_table_info *private = xt_table_get_private_protected(table);
++	const struct xt_table_info *private = table->private;
+ 	void __user *pos;
+ 	unsigned int size;
+ 	int ret = 0;
+@@ -1379,7 +1379,7 @@ static int compat_get_entries(struct net *net,
+ 	xt_compat_lock(NFPROTO_ARP);
+ 	t = xt_find_table_lock(net, NFPROTO_ARP, get.name);
+ 	if (!IS_ERR(t)) {
+-		const struct xt_table_info *private = xt_table_get_private_protected(t);
++		const struct xt_table_info *private = t->private;
+ 		struct xt_table_info info;
+ 
+ 		ret = compat_table_info(private, &info);
+diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
+index e8f6f9d862376..f15bc21d73016 100644
+--- a/net/ipv4/netfilter/ip_tables.c
++++ b/net/ipv4/netfilter/ip_tables.c
+@@ -258,7 +258,7 @@ ipt_do_table(struct sk_buff *skb,
+ 	WARN_ON(!(table->valid_hooks & (1 << hook)));
+ 	local_bh_disable();
+ 	addend = xt_write_recseq_begin();
+-	private = rcu_access_pointer(table->private);
++	private = READ_ONCE(table->private); /* Address dependency. */
+ 	cpu        = smp_processor_id();
+ 	table_base = private->entries;
+ 	jumpstack  = (struct ipt_entry **)private->jumpstack[cpu];
+@@ -791,7 +791,7 @@ static struct xt_counters *alloc_counters(const struct xt_table *table)
+ {
+ 	unsigned int countersize;
+ 	struct xt_counters *counters;
+-	const struct xt_table_info *private = xt_table_get_private_protected(table);
++	const struct xt_table_info *private = table->private;
+ 
+ 	/* We need atomic snapshot of counters: rest doesn't change
+ 	   (other than comefrom, which userspace doesn't care
+@@ -815,7 +815,7 @@ copy_entries_to_user(unsigned int total_size,
+ 	unsigned int off, num;
+ 	const struct ipt_entry *e;
+ 	struct xt_counters *counters;
+-	const struct xt_table_info *private = xt_table_get_private_protected(table);
++	const struct xt_table_info *private = table->private;
+ 	int ret = 0;
+ 	const void *loc_cpu_entry;
+ 
+@@ -964,7 +964,7 @@ static int get_info(struct net *net, void __user *user, const int *len)
+ 	t = xt_request_find_table_lock(net, AF_INET, name);
+ 	if (!IS_ERR(t)) {
+ 		struct ipt_getinfo info;
+-		const struct xt_table_info *private = xt_table_get_private_protected(t);
++		const struct xt_table_info *private = t->private;
+ #ifdef CONFIG_COMPAT
+ 		struct xt_table_info tmp;
+ 
+@@ -1018,7 +1018,7 @@ get_entries(struct net *net, struct ipt_get_entries __user *uptr,
+ 
+ 	t = xt_find_table_lock(net, AF_INET, get.name);
+ 	if (!IS_ERR(t)) {
+-		const struct xt_table_info *private = xt_table_get_private_protected(t);
++		const struct xt_table_info *private = t->private;
+ 		if (get.size == private->size)
+ 			ret = copy_entries_to_user(private->size,
+ 						   t, uptr->entrytable);
+@@ -1173,7 +1173,7 @@ do_add_counters(struct net *net, sockptr_t arg, unsigned int len)
+ 	}
+ 
+ 	local_bh_disable();
+-	private = xt_table_get_private_protected(t);
++	private = t->private;
+ 	if (private->number != tmp.num_counters) {
+ 		ret = -EINVAL;
+ 		goto unlock_up_free;
+@@ -1543,7 +1543,7 @@ compat_copy_entries_to_user(unsigned int total_size, struct xt_table *table,
+ 			    void __user *userptr)
+ {
+ 	struct xt_counters *counters;
+-	const struct xt_table_info *private = xt_table_get_private_protected(table);
++	const struct xt_table_info *private = table->private;
+ 	void __user *pos;
+ 	unsigned int size;
+ 	int ret = 0;
+@@ -1589,7 +1589,7 @@ compat_get_entries(struct net *net, struct compat_ipt_get_entries __user *uptr,
+ 	xt_compat_lock(AF_INET);
+ 	t = xt_find_table_lock(net, AF_INET, get.name);
+ 	if (!IS_ERR(t)) {
+-		const struct xt_table_info *private = xt_table_get_private_protected(t);
++		const struct xt_table_info *private = t->private;
+ 		struct xt_table_info info;
+ 		ret = compat_table_info(private, &info);
+ 		if (!ret && get.size == info.size)
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 9f43abeac3a8e..50a6d935376f5 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -2682,44 +2682,15 @@ out:
+ 	return rth;
+ }
+ 
+-static struct dst_entry *ipv4_blackhole_dst_check(struct dst_entry *dst, u32 cookie)
+-{
+-	return NULL;
+-}
+-
+-static unsigned int ipv4_blackhole_mtu(const struct dst_entry *dst)
+-{
+-	unsigned int mtu = dst_metric_raw(dst, RTAX_MTU);
+-
+-	return mtu ? : dst->dev->mtu;
+-}
+-
+-static void ipv4_rt_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
+-					  struct sk_buff *skb, u32 mtu,
+-					  bool confirm_neigh)
+-{
+-}
+-
+-static void ipv4_rt_blackhole_redirect(struct dst_entry *dst, struct sock *sk,
+-				       struct sk_buff *skb)
+-{
+-}
+-
+-static u32 *ipv4_rt_blackhole_cow_metrics(struct dst_entry *dst,
+-					  unsigned long old)
+-{
+-	return NULL;
+-}
+-
+ static struct dst_ops ipv4_dst_blackhole_ops = {
+-	.family			=	AF_INET,
+-	.check			=	ipv4_blackhole_dst_check,
+-	.mtu			=	ipv4_blackhole_mtu,
+-	.default_advmss		=	ipv4_default_advmss,
+-	.update_pmtu		=	ipv4_rt_blackhole_update_pmtu,
+-	.redirect		=	ipv4_rt_blackhole_redirect,
+-	.cow_metrics		=	ipv4_rt_blackhole_cow_metrics,
+-	.neigh_lookup		=	ipv4_neigh_lookup,
++	.family			= AF_INET,
++	.default_advmss		= ipv4_default_advmss,
++	.neigh_lookup		= ipv4_neigh_lookup,
++	.check			= dst_blackhole_check,
++	.cow_metrics		= dst_blackhole_cow_metrics,
++	.update_pmtu		= dst_blackhole_update_pmtu,
++	.redirect		= dst_blackhole_redirect,
++	.mtu			= dst_blackhole_mtu,
+ };
+ 
+ struct dst_entry *ipv4_blackhole_route(struct net *net, struct dst_entry *dst_orig)
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 495dda2449fe5..f0f67b25c97ab 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -804,8 +804,11 @@ embryonic_reset:
+ 		tcp_reset(sk);
+ 	}
+ 	if (!fastopen) {
+-		inet_csk_reqsk_queue_drop(sk, req);
+-		__NET_INC_STATS(sock_net(sk), LINUX_MIB_EMBRYONICRSTS);
++		bool unlinked = inet_csk_reqsk_queue_drop(sk, req);
++
++		if (unlinked)
++			__NET_INC_STATS(sock_net(sk), LINUX_MIB_EMBRYONICRSTS);
++		*req_stolen = !unlinked;
+ 	}
+ 	return NULL;
+ }
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index f43e275557251..1fb79dbde0cb3 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -2485,7 +2485,7 @@ static int ipv6_route_native_seq_show(struct seq_file *seq, void *v)
+ 	const struct net_device *dev;
+ 
+ 	if (rt->nh)
+-		fib6_nh = nexthop_fib6_nh(rt->nh);
++		fib6_nh = nexthop_fib6_nh_bh(rt->nh);
+ 
+ 	seq_printf(seq, "%pi6 %02x ", &rt->fib6_dst.addr, rt->fib6_dst.plen);
+ 
+diff --git a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c
+index e96304d8a4a7f..06d60662717d1 100644
+--- a/net/ipv6/ip6_input.c
++++ b/net/ipv6/ip6_input.c
+@@ -245,16 +245,6 @@ static struct sk_buff *ip6_rcv_core(struct sk_buff *skb, struct net_device *dev,
+ 	if (ipv6_addr_is_multicast(&hdr->saddr))
+ 		goto err;
+ 
+-	/* While RFC4291 is not explicit about v4mapped addresses
+-	 * in IPv6 headers, it seems clear linux dual-stack
+-	 * model can not deal properly with these.
+-	 * Security models could be fooled by ::ffff:127.0.0.1 for example.
+-	 *
+-	 * https://tools.ietf.org/html/draft-itojun-v6ops-v4mapped-harmful-02
+-	 */
+-	if (ipv6_addr_v4mapped(&hdr->saddr))
+-		goto err;
+-
+ 	skb->transport_header = skb->network_header + sizeof(*hdr);
+ 	IP6CB(skb)->nhoff = offsetof(struct ipv6hdr, nexthdr);
+ 
+diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
+index 0d453fa9e327b..2e2119bfcf137 100644
+--- a/net/ipv6/netfilter/ip6_tables.c
++++ b/net/ipv6/netfilter/ip6_tables.c
+@@ -280,7 +280,7 @@ ip6t_do_table(struct sk_buff *skb,
+ 
+ 	local_bh_disable();
+ 	addend = xt_write_recseq_begin();
+-	private = rcu_access_pointer(table->private);
++	private = READ_ONCE(table->private); /* Address dependency. */
+ 	cpu        = smp_processor_id();
+ 	table_base = private->entries;
+ 	jumpstack  = (struct ip6t_entry **)private->jumpstack[cpu];
+@@ -807,7 +807,7 @@ static struct xt_counters *alloc_counters(const struct xt_table *table)
+ {
+ 	unsigned int countersize;
+ 	struct xt_counters *counters;
+-	const struct xt_table_info *private = xt_table_get_private_protected(table);
++	const struct xt_table_info *private = table->private;
+ 
+ 	/* We need atomic snapshot of counters: rest doesn't change
+ 	   (other than comefrom, which userspace doesn't care
+@@ -831,7 +831,7 @@ copy_entries_to_user(unsigned int total_size,
+ 	unsigned int off, num;
+ 	const struct ip6t_entry *e;
+ 	struct xt_counters *counters;
+-	const struct xt_table_info *private = xt_table_get_private_protected(table);
++	const struct xt_table_info *private = table->private;
+ 	int ret = 0;
+ 	const void *loc_cpu_entry;
+ 
+@@ -980,7 +980,7 @@ static int get_info(struct net *net, void __user *user, const int *len)
+ 	t = xt_request_find_table_lock(net, AF_INET6, name);
+ 	if (!IS_ERR(t)) {
+ 		struct ip6t_getinfo info;
+-		const struct xt_table_info *private = xt_table_get_private_protected(t);
++		const struct xt_table_info *private = t->private;
+ #ifdef CONFIG_COMPAT
+ 		struct xt_table_info tmp;
+ 
+@@ -1035,7 +1035,7 @@ get_entries(struct net *net, struct ip6t_get_entries __user *uptr,
+ 
+ 	t = xt_find_table_lock(net, AF_INET6, get.name);
+ 	if (!IS_ERR(t)) {
+-		struct xt_table_info *private = xt_table_get_private_protected(t);
++		struct xt_table_info *private = t->private;
+ 		if (get.size == private->size)
+ 			ret = copy_entries_to_user(private->size,
+ 						   t, uptr->entrytable);
+@@ -1189,7 +1189,7 @@ do_add_counters(struct net *net, sockptr_t arg, unsigned int len)
+ 	}
+ 
+ 	local_bh_disable();
+-	private = xt_table_get_private_protected(t);
++	private = t->private;
+ 	if (private->number != tmp.num_counters) {
+ 		ret = -EINVAL;
+ 		goto unlock_up_free;
+@@ -1552,7 +1552,7 @@ compat_copy_entries_to_user(unsigned int total_size, struct xt_table *table,
+ 			    void __user *userptr)
+ {
+ 	struct xt_counters *counters;
+-	const struct xt_table_info *private = xt_table_get_private_protected(table);
++	const struct xt_table_info *private = table->private;
+ 	void __user *pos;
+ 	unsigned int size;
+ 	int ret = 0;
+@@ -1598,7 +1598,7 @@ compat_get_entries(struct net *net, struct compat_ip6t_get_entries __user *uptr,
+ 	xt_compat_lock(AF_INET6);
+ 	t = xt_find_table_lock(net, AF_INET6, get.name);
+ 	if (!IS_ERR(t)) {
+-		const struct xt_table_info *private = xt_table_get_private_protected(t);
++		const struct xt_table_info *private = t->private;
+ 		struct xt_table_info info;
+ 		ret = compat_table_info(private, &info);
+ 		if (!ret && get.size == info.size)
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 7e0ce7af82346..fa276448d5a2a 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -258,34 +258,16 @@ static struct dst_ops ip6_dst_ops_template = {
+ 	.confirm_neigh		=	ip6_confirm_neigh,
+ };
+ 
+-static unsigned int ip6_blackhole_mtu(const struct dst_entry *dst)
+-{
+-	unsigned int mtu = dst_metric_raw(dst, RTAX_MTU);
+-
+-	return mtu ? : dst->dev->mtu;
+-}
+-
+-static void ip6_rt_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
+-					 struct sk_buff *skb, u32 mtu,
+-					 bool confirm_neigh)
+-{
+-}
+-
+-static void ip6_rt_blackhole_redirect(struct dst_entry *dst, struct sock *sk,
+-				      struct sk_buff *skb)
+-{
+-}
+-
+ static struct dst_ops ip6_dst_blackhole_ops = {
+-	.family			=	AF_INET6,
+-	.destroy		=	ip6_dst_destroy,
+-	.check			=	ip6_dst_check,
+-	.mtu			=	ip6_blackhole_mtu,
+-	.default_advmss		=	ip6_default_advmss,
+-	.update_pmtu		=	ip6_rt_blackhole_update_pmtu,
+-	.redirect		=	ip6_rt_blackhole_redirect,
+-	.cow_metrics		=	dst_cow_metrics_generic,
+-	.neigh_lookup		=	ip6_dst_neigh_lookup,
++	.family			= AF_INET6,
++	.default_advmss		= ip6_default_advmss,
++	.neigh_lookup		= ip6_dst_neigh_lookup,
++	.check			= ip6_dst_check,
++	.destroy		= ip6_dst_destroy,
++	.cow_metrics		= dst_cow_metrics_generic,
++	.update_pmtu		= dst_blackhole_update_pmtu,
++	.redirect		= dst_blackhole_redirect,
++	.mtu			= dst_blackhole_mtu,
+ };
+ 
+ static const u32 ip6_template_metrics[RTAX_MAX] = {
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 991dc36f95ff0..3f9bb6dd1f986 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1170,6 +1170,11 @@ static int tcp_v6_conn_request(struct sock *sk, struct sk_buff *skb)
+ 	if (!ipv6_unicast_destination(skb))
+ 		goto drop;
+ 
++	if (ipv6_addr_v4mapped(&ipv6_hdr(skb)->saddr)) {
++		__IP6_INC_STATS(sock_net(sk), NULL, IPSTATS_MIB_INHDRERRORS);
++		return 0;
++	}
++
+ 	return tcp_conn_request(&tcp6_request_sock_ops,
+ 				&tcp_request_sock_ipv6_ops, sk, skb);
+ 
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 7276e66ae435e..2bf6271d9e3f6 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -2961,14 +2961,14 @@ static int ieee80211_set_bitrate_mask(struct wiphy *wiphy,
+ 			continue;
+ 
+ 		for (j = 0; j < IEEE80211_HT_MCS_MASK_LEN; j++) {
+-			if (~sdata->rc_rateidx_mcs_mask[i][j]) {
++			if (sdata->rc_rateidx_mcs_mask[i][j] != 0xff) {
+ 				sdata->rc_has_mcs_mask[i] = true;
+ 				break;
+ 			}
+ 		}
+ 
+ 		for (j = 0; j < NL80211_VHT_NSS_MAX; j++) {
+-			if (~sdata->rc_rateidx_vht_mcs_mask[i][j]) {
++			if (sdata->rc_rateidx_vht_mcs_mask[i][j] != 0xffff) {
+ 				sdata->rc_has_vht_mcs_mask[i] = true;
+ 				break;
+ 			}
+diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c
+index 1f552f374e97d..a7ac53a2f00d8 100644
+--- a/net/mac80211/ibss.c
++++ b/net/mac80211/ibss.c
+@@ -1874,6 +1874,8 @@ int ieee80211_ibss_leave(struct ieee80211_sub_if_data *sdata)
+ 
+ 	/* remove beacon */
+ 	kfree(sdata->u.ibss.ie);
++	sdata->u.ibss.ie = NULL;
++	sdata->u.ibss.ie_len = 0;
+ 
+ 	/* on the next join, re-program HT parameters */
+ 	memset(&ifibss->ht_capa, 0, sizeof(ifibss->ht_capa));
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 6adfcb9c06dcc..3f483e84d5df3 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -5023,7 +5023,7 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata,
+ 		he_oper_ie = cfg80211_find_ext_ie(WLAN_EID_EXT_HE_OPERATION,
+ 						  ies->data, ies->len);
+ 		if (he_oper_ie &&
+-		    he_oper_ie[1] == ieee80211_he_oper_size(&he_oper_ie[3]))
++		    he_oper_ie[1] >= ieee80211_he_oper_size(&he_oper_ie[3]))
+ 			he_oper = (void *)(he_oper_ie + 3);
+ 		else
+ 			he_oper = NULL;
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 94e624e9439b7..d8f9fb0646a4d 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -967,7 +967,7 @@ static void ieee80211_parse_extension_element(u32 *crc,
+ 		break;
+ 	case WLAN_EID_EXT_HE_OPERATION:
+ 		if (len >= sizeof(*elems->he_operation) &&
+-		    len == ieee80211_he_oper_size(data) - 1) {
++		    len >= ieee80211_he_oper_size(data) - 1) {
+ 			if (crc)
+ 				*crc = crc32_be(*crc, (void *)elem,
+ 						elem->datalen + 2);
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 16adba172fb94..6317b9bc86815 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -398,6 +398,11 @@ static int subflow_v6_conn_request(struct sock *sk, struct sk_buff *skb)
+ 	if (!ipv6_unicast_destination(skb))
+ 		goto drop;
+ 
++	if (ipv6_addr_v4mapped(&ipv6_hdr(skb)->saddr)) {
++		__IP6_INC_STATS(sock_net(sk), NULL, IPSTATS_MIB_INHDRERRORS);
++		return 0;
++	}
++
+ 	return tcp_conn_request(&mptcp_subflow_request_sock_ops,
+ 				&subflow_request_sock_ipv6_ops, sk, skb);
+ 
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 3d0fd33be0186..c1bfd8181341a 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -2960,6 +2960,7 @@ static int ctnetlink_exp_dump_mask(struct sk_buff *skb,
+ 	memset(&m, 0xFF, sizeof(m));
+ 	memcpy(&m.src.u3, &mask->src.u3, sizeof(m.src.u3));
+ 	m.src.u.all = mask->src.u.all;
++	m.src.l3num = tuple->src.l3num;
+ 	m.dst.protonum = tuple->dst.protonum;
+ 
+ 	nest_parms = nla_nest_start(skb, CTA_EXPECT_MASK);
+diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
+index 4a4acbba78ff7..b03feb6e1226a 100644
+--- a/net/netfilter/nf_flow_table_core.c
++++ b/net/netfilter/nf_flow_table_core.c
+@@ -506,7 +506,7 @@ int nf_flow_table_init(struct nf_flowtable *flowtable)
+ {
+ 	int err;
+ 
+-	INIT_DEFERRABLE_WORK(&flowtable->gc_work, nf_flow_offload_work_gc);
++	INIT_DELAYED_WORK(&flowtable->gc_work, nf_flow_offload_work_gc);
+ 	flow_block_init(&flowtable->flow_block);
+ 	init_rwsem(&flowtable->flow_block_lock);
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 8739ef135156b..978a968d7aeda 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -6632,6 +6632,7 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh,
+ 	struct nft_hook *hook, *next;
+ 	struct nft_trans *trans;
+ 	bool unregister = false;
++	u32 flags;
+ 	int err;
+ 
+ 	err = nft_flowtable_parse_hook(ctx, nla[NFTA_FLOWTABLE_HOOK],
+@@ -6646,6 +6647,17 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh,
+ 		}
+ 	}
+ 
++	if (nla[NFTA_FLOWTABLE_FLAGS]) {
++		flags = ntohl(nla_get_be32(nla[NFTA_FLOWTABLE_FLAGS]));
++		if (flags & ~NFT_FLOWTABLE_MASK)
++			return -EOPNOTSUPP;
++		if ((flowtable->data.flags & NFT_FLOWTABLE_HW_OFFLOAD) ^
++		    (flags & NFT_FLOWTABLE_HW_OFFLOAD))
++			return -EOPNOTSUPP;
++	} else {
++		flags = flowtable->data.flags;
++	}
++
+ 	err = nft_register_flowtable_net_hooks(ctx->net, ctx->table,
+ 					       &flowtable_hook.list, flowtable);
+ 	if (err < 0)
+@@ -6659,6 +6671,7 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh,
+ 		goto err_flowtable_update_hook;
+ 	}
+ 
++	nft_trans_flowtable_flags(trans) = flags;
+ 	nft_trans_flowtable(trans) = flowtable;
+ 	nft_trans_flowtable_update(trans) = true;
+ 	INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans));
+@@ -6753,8 +6766,10 @@ static int nf_tables_newflowtable(struct net *net, struct sock *nlsk,
+ 	if (nla[NFTA_FLOWTABLE_FLAGS]) {
+ 		flowtable->data.flags =
+ 			ntohl(nla_get_be32(nla[NFTA_FLOWTABLE_FLAGS]));
+-		if (flowtable->data.flags & ~NFT_FLOWTABLE_MASK)
++		if (flowtable->data.flags & ~NFT_FLOWTABLE_MASK) {
++			err = -EOPNOTSUPP;
+ 			goto err3;
++		}
+ 	}
+ 
+ 	write_pnet(&flowtable->data.net, net);
+@@ -7966,6 +7981,8 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 			break;
+ 		case NFT_MSG_NEWFLOWTABLE:
+ 			if (nft_trans_flowtable_update(trans)) {
++				nft_trans_flowtable(trans)->data.flags =
++					nft_trans_flowtable_flags(trans);
+ 				nf_tables_flowtable_notify(&trans->ctx,
+ 							   nft_trans_flowtable(trans),
+ 							   &nft_trans_flowtable_hooks(trans),
+diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
+index bce6ca203d462..6bd31a7a27fc5 100644
+--- a/net/netfilter/x_tables.c
++++ b/net/netfilter/x_tables.c
+@@ -1351,14 +1351,6 @@ struct xt_counters *xt_counters_alloc(unsigned int counters)
+ }
+ EXPORT_SYMBOL(xt_counters_alloc);
+ 
+-struct xt_table_info
+-*xt_table_get_private_protected(const struct xt_table *table)
+-{
+-	return rcu_dereference_protected(table->private,
+-					 mutex_is_locked(&xt[table->af].mutex));
+-}
+-EXPORT_SYMBOL(xt_table_get_private_protected);
+-
+ struct xt_table_info *
+ xt_replace_table(struct xt_table *table,
+ 	      unsigned int num_counters,
+@@ -1366,6 +1358,7 @@ xt_replace_table(struct xt_table *table,
+ 	      int *error)
+ {
+ 	struct xt_table_info *private;
++	unsigned int cpu;
+ 	int ret;
+ 
+ 	ret = xt_jumpstack_alloc(newinfo);
+@@ -1375,20 +1368,47 @@ xt_replace_table(struct xt_table *table,
+ 	}
+ 
+ 	/* Do the substitution. */
+-	private = xt_table_get_private_protected(table);
++	local_bh_disable();
++	private = table->private;
+ 
+ 	/* Check inside lock: is the old number correct? */
+ 	if (num_counters != private->number) {
+ 		pr_debug("num_counters != table->private->number (%u/%u)\n",
+ 			 num_counters, private->number);
++		local_bh_enable();
+ 		*error = -EAGAIN;
+ 		return NULL;
+ 	}
+ 
+ 	newinfo->initial_entries = private->initial_entries;
++	/*
++	 * Ensure contents of newinfo are visible before assigning to
++	 * private.
++	 */
++	smp_wmb();
++	table->private = newinfo;
++
++	/* make sure all cpus see new ->private value */
++	smp_mb();
+ 
+-	rcu_assign_pointer(table->private, newinfo);
+-	synchronize_rcu();
++	/*
++	 * Even though table entries have now been swapped, other CPU's
++	 * may still be using the old entries...
++	 */
++	local_bh_enable();
++
++	/* ... so wait for even xt_recseq on all cpus */
++	for_each_possible_cpu(cpu) {
++		seqcount_t *s = &per_cpu(xt_recseq, cpu);
++		u32 seq = raw_read_seqcount(s);
++
++		if (seq & 1) {
++			do {
++				cond_resched();
++				cpu_relax();
++			} while (seq == raw_read_seqcount(s));
++		}
++	}
+ 
+ 	audit_log_nfcfg(table->name, table->af, private->number,
+ 			!private->number ? AUDIT_XT_OP_REGISTER :
+@@ -1424,12 +1444,12 @@ struct xt_table *xt_register_table(struct net *net,
+ 	}
+ 
+ 	/* Simplifies replace_table code. */
+-	rcu_assign_pointer(table->private, bootstrap);
++	table->private = bootstrap;
+ 
+ 	if (!xt_replace_table(table, 0, newinfo, &ret))
+ 		goto unlock;
+ 
+-	private = xt_table_get_private_protected(table);
++	private = table->private;
+ 	pr_debug("table->private->number = %u\n", private->number);
+ 
+ 	/* save number of initial entries */
+@@ -1452,8 +1472,7 @@ void *xt_unregister_table(struct xt_table *table)
+ 	struct xt_table_info *private;
+ 
+ 	mutex_lock(&xt[table->af].mutex);
+-	private = xt_table_get_private_protected(table);
+-	RCU_INIT_POINTER(table->private, NULL);
++	private = table->private;
+ 	list_del(&table->list);
+ 	mutex_unlock(&xt[table->af].mutex);
+ 	audit_log_nfcfg(table->name, table->af, private->number,
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 54031ee079a2c..45fbf5f4dcd25 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -1035,6 +1035,11 @@ static int qrtr_recvmsg(struct socket *sock, struct msghdr *msg,
+ 	rc = copied;
+ 
+ 	if (addr) {
++		/* There is an anonymous 2-byte hole after sq_family,
++		 * make sure to clear it.
++		 */
++		memset(addr, 0, sizeof(*addr));
++
+ 		addr->sq_family = AF_QIPCRTR;
+ 		addr->sq_node = cb->src_node;
+ 		addr->sq_port = cb->src_port;
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 46c1b3e9f66a5..14316ba9b3b32 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -1432,7 +1432,7 @@ static int fl_set_key_ct(struct nlattr **tb,
+ 			       &mask->ct_state, TCA_FLOWER_KEY_CT_STATE_MASK,
+ 			       sizeof(key->ct_state));
+ 
+-		err = fl_validate_ct_state(mask->ct_state,
++		err = fl_validate_ct_state(key->ct_state & mask->ct_state,
+ 					   tb[TCA_FLOWER_KEY_CT_STATE_MASK],
+ 					   extack);
+ 		if (err)
+diff --git a/net/sched/sch_choke.c b/net/sched/sch_choke.c
+index 50f680f03a547..2adbd945bf15a 100644
+--- a/net/sched/sch_choke.c
++++ b/net/sched/sch_choke.c
+@@ -345,6 +345,7 @@ static int choke_change(struct Qdisc *sch, struct nlattr *opt,
+ 	struct sk_buff **old = NULL;
+ 	unsigned int mask;
+ 	u32 max_P;
++	u8 *stab;
+ 
+ 	if (opt == NULL)
+ 		return -EINVAL;
+@@ -361,8 +362,8 @@ static int choke_change(struct Qdisc *sch, struct nlattr *opt,
+ 	max_P = tb[TCA_CHOKE_MAX_P] ? nla_get_u32(tb[TCA_CHOKE_MAX_P]) : 0;
+ 
+ 	ctl = nla_data(tb[TCA_CHOKE_PARMS]);
+-
+-	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log))
++	stab = nla_data(tb[TCA_CHOKE_STAB]);
++	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log, stab))
+ 		return -EINVAL;
+ 
+ 	if (ctl->limit > CHOKE_MAX_QUEUE)
+@@ -412,7 +413,7 @@ static int choke_change(struct Qdisc *sch, struct nlattr *opt,
+ 
+ 	red_set_parms(&q->parms, ctl->qth_min, ctl->qth_max, ctl->Wlog,
+ 		      ctl->Plog, ctl->Scell_log,
+-		      nla_data(tb[TCA_CHOKE_STAB]),
++		      stab,
+ 		      max_P);
+ 	red_set_vars(&q->vars);
+ 
+diff --git a/net/sched/sch_gred.c b/net/sched/sch_gred.c
+index e0bc77533acc3..f4132dc25ac05 100644
+--- a/net/sched/sch_gred.c
++++ b/net/sched/sch_gred.c
+@@ -480,7 +480,7 @@ static inline int gred_change_vq(struct Qdisc *sch, int dp,
+ 	struct gred_sched *table = qdisc_priv(sch);
+ 	struct gred_sched_data *q = table->tab[dp];
+ 
+-	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log)) {
++	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log, stab)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "invalid RED parameters");
+ 		return -EINVAL;
+ 	}
+diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c
+index b4ae34d7aa965..40adf1f07a82d 100644
+--- a/net/sched/sch_red.c
++++ b/net/sched/sch_red.c
+@@ -242,6 +242,7 @@ static int __red_change(struct Qdisc *sch, struct nlattr **tb,
+ 	unsigned char flags;
+ 	int err;
+ 	u32 max_P;
++	u8 *stab;
+ 
+ 	if (tb[TCA_RED_PARMS] == NULL ||
+ 	    tb[TCA_RED_STAB] == NULL)
+@@ -250,7 +251,9 @@ static int __red_change(struct Qdisc *sch, struct nlattr **tb,
+ 	max_P = tb[TCA_RED_MAX_P] ? nla_get_u32(tb[TCA_RED_MAX_P]) : 0;
+ 
+ 	ctl = nla_data(tb[TCA_RED_PARMS]);
+-	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log))
++	stab = nla_data(tb[TCA_RED_STAB]);
++	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog,
++			      ctl->Scell_log, stab))
+ 		return -EINVAL;
+ 
+ 	err = red_get_flags(ctl->flags, TC_RED_HISTORIC_FLAGS,
+@@ -288,7 +291,7 @@ static int __red_change(struct Qdisc *sch, struct nlattr **tb,
+ 	red_set_parms(&q->parms,
+ 		      ctl->qth_min, ctl->qth_max, ctl->Wlog,
+ 		      ctl->Plog, ctl->Scell_log,
+-		      nla_data(tb[TCA_RED_STAB]),
++		      stab,
+ 		      max_P);
+ 	red_set_vars(&q->vars);
+ 
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index b25e51440623b..066754a18569b 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -647,7 +647,7 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
+ 	}
+ 
+ 	if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max,
+-					ctl_v1->Wlog, ctl_v1->Scell_log))
++					ctl_v1->Wlog, ctl_v1->Scell_log, NULL))
+ 		return -EINVAL;
+ 	if (ctl_v1 && ctl_v1->qth_min) {
+ 		p = kmalloc(sizeof(*p), GFP_KERNEL);
+diff --git a/net/tipc/node.c b/net/tipc/node.c
+index 83978d5dae594..e4452d55851f9 100644
+--- a/net/tipc/node.c
++++ b/net/tipc/node.c
+@@ -2855,17 +2855,22 @@ int tipc_nl_node_dump_monitor_peer(struct sk_buff *skb,
+ 
+ #ifdef CONFIG_TIPC_CRYPTO
+ static int tipc_nl_retrieve_key(struct nlattr **attrs,
+-				struct tipc_aead_key **key)
++				struct tipc_aead_key **pkey)
+ {
+ 	struct nlattr *attr = attrs[TIPC_NLA_NODE_KEY];
++	struct tipc_aead_key *key;
+ 
+ 	if (!attr)
+ 		return -ENODATA;
+ 
+-	*key = (struct tipc_aead_key *)nla_data(attr);
+-	if (nla_len(attr) < tipc_aead_key_size(*key))
++	if (nla_len(attr) < sizeof(*key))
++		return -EINVAL;
++	key = (struct tipc_aead_key *)nla_data(attr);
++	if (key->keylen > TIPC_AEAD_KEYLEN_MAX ||
++	    nla_len(attr) < tipc_aead_key_size(key))
+ 		return -EINVAL;
+ 
++	*pkey = key;
+ 	return 0;
+ }
+ 
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 791955f5e7ec0..cf86c1376b1a4 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -738,6 +738,7 @@ static struct sock *__vsock_create(struct net *net,
+ 		vsk->buffer_size = psk->buffer_size;
+ 		vsk->buffer_min_size = psk->buffer_min_size;
+ 		vsk->buffer_max_size = psk->buffer_max_size;
++		security_sk_clone(parent, sk);
+ 	} else {
+ 		vsk->trusted = ns_capable_noaudit(&init_user_ns, CAP_NET_ADMIN);
+ 		vsk->owner = get_current_cred();
+diff --git a/scripts/dummy-tools/gcc b/scripts/dummy-tools/gcc
+index 33487e99d83e5..11c9f045ee4b9 100755
+--- a/scripts/dummy-tools/gcc
++++ b/scripts/dummy-tools/gcc
+@@ -89,3 +89,8 @@ if arg_contain -print-file-name=plugin "$@"; then
+ 	echo $plugin_dir
+ 	exit 0
+ fi
++
++# inverted return value
++if arg_contain -D__SIZEOF_INT128__=0 "$@"; then
++	exit 1
++fi
+diff --git a/security/integrity/iint.c b/security/integrity/iint.c
+index 1d20003243c3f..0ba01847e836c 100644
+--- a/security/integrity/iint.c
++++ b/security/integrity/iint.c
+@@ -98,6 +98,14 @@ struct integrity_iint_cache *integrity_inode_get(struct inode *inode)
+ 	struct rb_node *node, *parent = NULL;
+ 	struct integrity_iint_cache *iint, *test_iint;
+ 
++	/*
++	 * The integrity's "iint_cache" is initialized at security_init(),
++	 * unless it is not included in the ordered list of LSMs enabled
++	 * on the boot command line.
++	 */
++	if (!iint_cache)
++		panic("%s: lsm=integrity required.\n", __func__);
++
+ 	iint = integrity_iint_find(inode);
+ 	if (iint)
+ 		return iint;
+diff --git a/security/selinux/include/security.h b/security/selinux/include/security.h
+index 3cc8bab31ea85..63ca6e79daeb9 100644
+--- a/security/selinux/include/security.h
++++ b/security/selinux/include/security.h
+@@ -219,14 +219,21 @@ static inline bool selinux_policycap_genfs_seclabel_symlinks(void)
+ 	return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_GENFS_SECLABEL_SYMLINKS]);
+ }
+ 
++struct selinux_policy_convert_data;
++
++struct selinux_load_state {
++	struct selinux_policy *policy;
++	struct selinux_policy_convert_data *convert_data;
++};
++
+ int security_mls_enabled(struct selinux_state *state);
+ int security_load_policy(struct selinux_state *state,
+-			void *data, size_t len,
+-			struct selinux_policy **newpolicyp);
++			 void *data, size_t len,
++			 struct selinux_load_state *load_state);
+ void selinux_policy_commit(struct selinux_state *state,
+-			struct selinux_policy *newpolicy);
++			   struct selinux_load_state *load_state);
+ void selinux_policy_cancel(struct selinux_state *state,
+-			struct selinux_policy *policy);
++			   struct selinux_load_state *load_state);
+ int security_read_policy(struct selinux_state *state,
+ 			 void **data, size_t *len);
+ 
+diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
+index 4bde570d56a2c..2b745ae8cb981 100644
+--- a/security/selinux/selinuxfs.c
++++ b/security/selinux/selinuxfs.c
+@@ -616,7 +616,7 @@ static ssize_t sel_write_load(struct file *file, const char __user *buf,
+ 
+ {
+ 	struct selinux_fs_info *fsi = file_inode(file)->i_sb->s_fs_info;
+-	struct selinux_policy *newpolicy;
++	struct selinux_load_state load_state;
+ 	ssize_t length;
+ 	void *data = NULL;
+ 
+@@ -642,23 +642,22 @@ static ssize_t sel_write_load(struct file *file, const char __user *buf,
+ 	if (copy_from_user(data, buf, count) != 0)
+ 		goto out;
+ 
+-	length = security_load_policy(fsi->state, data, count, &newpolicy);
++	length = security_load_policy(fsi->state, data, count, &load_state);
+ 	if (length) {
+ 		pr_warn_ratelimited("SELinux: failed to load policy\n");
+ 		goto out;
+ 	}
+ 
+-	length = sel_make_policy_nodes(fsi, newpolicy);
++	length = sel_make_policy_nodes(fsi, load_state.policy);
+ 	if (length) {
+-		selinux_policy_cancel(fsi->state, newpolicy);
+-		goto out1;
++		selinux_policy_cancel(fsi->state, &load_state);
++		goto out;
+ 	}
+ 
+-	selinux_policy_commit(fsi->state, newpolicy);
++	selinux_policy_commit(fsi->state, &load_state);
+ 
+ 	length = count;
+ 
+-out1:
+ 	audit_log(audit_context(), GFP_KERNEL, AUDIT_MAC_POLICY_LOAD,
+ 		"auid=%u ses=%u lsm=selinux res=1",
+ 		from_kuid(&init_user_ns, audit_get_loginuid(current)),
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index 9704c8a32303f..495fc865faf55 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -66,6 +66,17 @@
+ #include "audit.h"
+ #include "policycap_names.h"
+ 
++struct convert_context_args {
++	struct selinux_state *state;
++	struct policydb *oldp;
++	struct policydb *newp;
++};
++
++struct selinux_policy_convert_data {
++	struct convert_context_args args;
++	struct sidtab_convert_params sidtab_params;
++};
++
+ /* Forward declaration. */
+ static int context_struct_to_string(struct policydb *policydb,
+ 				    struct context *context,
+@@ -1975,12 +1986,6 @@ static inline int convert_context_handle_invalid_context(
+ 	return 0;
+ }
+ 
+-struct convert_context_args {
+-	struct selinux_state *state;
+-	struct policydb *oldp;
+-	struct policydb *newp;
+-};
+-
+ /*
+  * Convert the values in the security context
+  * structure `oldc' from the values specified
+@@ -2160,7 +2165,7 @@ static void selinux_policy_cond_free(struct selinux_policy *policy)
+ }
+ 
+ void selinux_policy_cancel(struct selinux_state *state,
+-			struct selinux_policy *policy)
++			   struct selinux_load_state *load_state)
+ {
+ 	struct selinux_policy *oldpolicy;
+ 
+@@ -2168,7 +2173,8 @@ void selinux_policy_cancel(struct selinux_state *state,
+ 					lockdep_is_held(&state->policy_mutex));
+ 
+ 	sidtab_cancel_convert(oldpolicy->sidtab);
+-	selinux_policy_free(policy);
++	selinux_policy_free(load_state->policy);
++	kfree(load_state->convert_data);
+ }
+ 
+ static void selinux_notify_policy_change(struct selinux_state *state,
+@@ -2183,9 +2189,9 @@ static void selinux_notify_policy_change(struct selinux_state *state,
+ }
+ 
+ void selinux_policy_commit(struct selinux_state *state,
+-			struct selinux_policy *newpolicy)
++			   struct selinux_load_state *load_state)
+ {
+-	struct selinux_policy *oldpolicy;
++	struct selinux_policy *oldpolicy, *newpolicy = load_state->policy;
+ 	u32 seqno;
+ 
+ 	oldpolicy = rcu_dereference_protected(state->policy,
+@@ -2225,6 +2231,7 @@ void selinux_policy_commit(struct selinux_state *state,
+ 	/* Free the old policy */
+ 	synchronize_rcu();
+ 	selinux_policy_free(oldpolicy);
++	kfree(load_state->convert_data);
+ 
+ 	/* Notify others of the policy change */
+ 	selinux_notify_policy_change(state, seqno);
+@@ -2241,11 +2248,10 @@ void selinux_policy_commit(struct selinux_state *state,
+  * loading the new policy.
+  */
+ int security_load_policy(struct selinux_state *state, void *data, size_t len,
+-			struct selinux_policy **newpolicyp)
++			 struct selinux_load_state *load_state)
+ {
+ 	struct selinux_policy *newpolicy, *oldpolicy;
+-	struct sidtab_convert_params convert_params;
+-	struct convert_context_args args;
++	struct selinux_policy_convert_data *convert_data;
+ 	int rc = 0;
+ 	struct policy_file file = { data, len }, *fp = &file;
+ 
+@@ -2275,10 +2281,10 @@ int security_load_policy(struct selinux_state *state, void *data, size_t len,
+ 		goto err_mapping;
+ 	}
+ 
+-
+ 	if (!selinux_initialized(state)) {
+ 		/* First policy load, so no need to preserve state from old policy */
+-		*newpolicyp = newpolicy;
++		load_state->policy = newpolicy;
++		load_state->convert_data = NULL;
+ 		return 0;
+ 	}
+ 
+@@ -2292,29 +2298,38 @@ int security_load_policy(struct selinux_state *state, void *data, size_t len,
+ 		goto err_free_isids;
+ 	}
+ 
++	convert_data = kmalloc(sizeof(*convert_data), GFP_KERNEL);
++	if (!convert_data) {
++		rc = -ENOMEM;
++		goto err_free_isids;
++	}
++
+ 	/*
+ 	 * Convert the internal representations of contexts
+ 	 * in the new SID table.
+ 	 */
+-	args.state = state;
+-	args.oldp = &oldpolicy->policydb;
+-	args.newp = &newpolicy->policydb;
++	convert_data->args.state = state;
++	convert_data->args.oldp = &oldpolicy->policydb;
++	convert_data->args.newp = &newpolicy->policydb;
+ 
+-	convert_params.func = convert_context;
+-	convert_params.args = &args;
+-	convert_params.target = newpolicy->sidtab;
++	convert_data->sidtab_params.func = convert_context;
++	convert_data->sidtab_params.args = &convert_data->args;
++	convert_data->sidtab_params.target = newpolicy->sidtab;
+ 
+-	rc = sidtab_convert(oldpolicy->sidtab, &convert_params);
++	rc = sidtab_convert(oldpolicy->sidtab, &convert_data->sidtab_params);
+ 	if (rc) {
+ 		pr_err("SELinux:  unable to convert the internal"
+ 			" representation of contexts in the new SID"
+ 			" table\n");
+-		goto err_free_isids;
++		goto err_free_convert_data;
+ 	}
+ 
+-	*newpolicyp = newpolicy;
++	load_state->policy = newpolicy;
++	load_state->convert_data = convert_data;
+ 	return 0;
+ 
++err_free_convert_data:
++	kfree(convert_data);
+ err_free_isids:
+ 	sidtab_destroy(newpolicy->sidtab);
+ err_mapping:
+diff --git a/sound/hda/intel-nhlt.c b/sound/hda/intel-nhlt.c
+index d053beccfaec3..e2237239d922a 100644
+--- a/sound/hda/intel-nhlt.c
++++ b/sound/hda/intel-nhlt.c
+@@ -39,6 +39,11 @@ int intel_nhlt_get_dmic_geo(struct device *dev, struct nhlt_acpi_table *nhlt)
+ 	if (!nhlt)
+ 		return 0;
+ 
++	if (nhlt->header.length <= sizeof(struct acpi_table_header)) {
++		dev_warn(dev, "Invalid DMIC description table\n");
++		return 0;
++	}
++
+ 	for (j = 0, epnt = nhlt->desc; j < nhlt->endpoint_count; j++,
+ 	     epnt = (struct nhlt_endpoint *)((u8 *)epnt + epnt->length)) {
+ 
+diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h
+index 89135bb35bf76..ae5662d368b98 100644
+--- a/tools/include/linux/static_call_types.h
++++ b/tools/include/linux/static_call_types.h
+@@ -4,11 +4,13 @@
+ 
+ #include <linux/types.h>
+ #include <linux/stringify.h>
++#include <linux/compiler.h>
+ 
+ #define STATIC_CALL_KEY_PREFIX		__SCK__
+ #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
+ #define STATIC_CALL_KEY_PREFIX_LEN	(sizeof(STATIC_CALL_KEY_PREFIX_STR) - 1)
+ #define STATIC_CALL_KEY(name)		__PASTE(STATIC_CALL_KEY_PREFIX, name)
++#define STATIC_CALL_KEY_STR(name)	__stringify(STATIC_CALL_KEY(name))
+ 
+ #define STATIC_CALL_TRAMP_PREFIX	__SCT__
+ #define STATIC_CALL_TRAMP_PREFIX_STR	__stringify(STATIC_CALL_TRAMP_PREFIX)
+@@ -32,4 +34,52 @@ struct static_call_site {
+ 	s32 key;
+ };
+ 
++#define DECLARE_STATIC_CALL(name, func)					\
++	extern struct static_call_key STATIC_CALL_KEY(name);		\
++	extern typeof(func) STATIC_CALL_TRAMP(name);
++
++#ifdef CONFIG_HAVE_STATIC_CALL
++
++#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
++
++#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
++
++/*
++ * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
++ * the symbol table so that objtool can reference it when it generates the
++ * .static_call_sites section.
++ */
++#define __STATIC_CALL_ADDRESSABLE(name) \
++	__ADDRESSABLE(STATIC_CALL_KEY(name))
++
++#define __static_call(name)						\
++({									\
++	__STATIC_CALL_ADDRESSABLE(name);				\
++	__raw_static_call(name);					\
++})
++
++#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
++
++#define __STATIC_CALL_ADDRESSABLE(name)
++#define __static_call(name)	__raw_static_call(name)
++
++#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
++
++#ifdef MODULE
++#define __STATIC_CALL_MOD_ADDRESSABLE(name)
++#define static_call_mod(name)	__raw_static_call(name)
++#else
++#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
++#define static_call_mod(name)	__static_call(name)
++#endif
++
++#define static_call(name)	__static_call(name)
++
++#else
++
++#define static_call(name)						\
++	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
++
++#endif /* CONFIG_HAVE_STATIC_CALL */
++
+ #endif /* _STATIC_CALL_TYPES_H */
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index 55bd78b3496fb..310f647c2d5b6 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -236,7 +236,7 @@ define do_install
+ 	if [ ! -d '$(DESTDIR_SQ)$2' ]; then		\
+ 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$2';	\
+ 	fi;						\
+-	$(INSTALL) $1 $(if $3,-m $3,) '$(DESTDIR_SQ)$2'
++	$(INSTALL) $(if $3,-m $3,) $1 '$(DESTDIR_SQ)$2'
+ endef
+ 
+ install_lib: all_cmd
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 2f9d685bd522c..0911aea4cdbe5 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -462,7 +462,7 @@ static int btf_dump_order_type(struct btf_dump *d, __u32 id, bool through_ptr)
+ 		return err;
+ 
+ 	case BTF_KIND_ARRAY:
+-		return btf_dump_order_type(d, btf_array(t)->type, through_ptr);
++		return btf_dump_order_type(d, btf_array(t)->type, false);
+ 
+ 	case BTF_KIND_STRUCT:
+ 	case BTF_KIND_UNION: {
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index b954db52bb807..95eef7ebdac5c 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -1162,7 +1162,8 @@ static int bpf_object__elf_init(struct bpf_object *obj)
+ 	if (!elf_rawdata(elf_getscn(obj->efile.elf, obj->efile.shstrndx), NULL)) {
+ 		pr_warn("elf: failed to get section names strings from %s: %s\n",
+ 			obj->path, elf_errmsg(-1));
+-		return -LIBBPF_ERRNO__FORMAT;
++		err = -LIBBPF_ERRNO__FORMAT;
++		goto errout;
+ 	}
+ 
+ 	/* Old LLVM set e_machine to EM_NONE */
+diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
+index 4dd73de00b6f1..d2cb28e9ef52e 100644
+--- a/tools/lib/bpf/netlink.c
++++ b/tools/lib/bpf/netlink.c
+@@ -40,7 +40,7 @@ static int libbpf_netlink_open(__u32 *nl_pid)
+ 	memset(&sa, 0, sizeof(sa));
+ 	sa.nl_family = AF_NETLINK;
+ 
+-	sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
++	sock = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, NETLINK_ROUTE);
+ 	if (sock < 0)
+ 		return -errno;
+ 
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index dc24aac08edd6..5c83f73ad6687 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -502,8 +502,21 @@ static int create_static_call_sections(struct objtool_file *file)
+ 
+ 		key_sym = find_symbol_by_name(file->elf, tmp);
+ 		if (!key_sym) {
+-			WARN("static_call: can't find static_call_key symbol: %s", tmp);
+-			return -1;
++			if (!module) {
++				WARN("static_call: can't find static_call_key symbol: %s", tmp);
++				return -1;
++			}
++
++			/*
++			 * For modules(), the key might not be exported, which
++			 * means the module can make static calls but isn't
++			 * allowed to change them.
++			 *
++			 * In that case we temporarily set the key to be the
++			 * trampoline address.  This is fixed up in
++			 * static_call_add_module().
++			 */
++			key_sym = insn->call_dest;
+ 		}
+ 		free(key_name);
+ 
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index 42a85c86421d7..d8ada6a3c555a 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -300,10 +300,6 @@ static int auxtrace_queues__queue_buffer(struct auxtrace_queues *queues,
+ 		queue->set = true;
+ 		queue->tid = buffer->tid;
+ 		queue->cpu = buffer->cpu;
+-	} else if (buffer->cpu != queue->cpu || buffer->tid != queue->tid) {
+-		pr_err("auxtrace queue conflict: cpu %d, tid %d vs cpu %d, tid %d\n",
+-		       queue->cpu, queue->tid, buffer->cpu, buffer->tid);
+-		return -EINVAL;
+ 	}
+ 
+ 	buffer->buffer_nr = queues->next_buffer_nr++;
+diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
+index d9c624377da73..b4cf6dd57dd6e 100644
+--- a/tools/perf/util/synthetic-events.c
++++ b/tools/perf/util/synthetic-events.c
+@@ -384,7 +384,7 @@ int perf_event__synthesize_mmap_events(struct perf_tool *tool,
+ 
+ 	while (!io.eof) {
+ 		static const char anonstr[] = "//anon";
+-		size_t size;
++		size_t size, aligned_size;
+ 
+ 		/* ensure null termination since stack will be reused. */
+ 		event->mmap2.filename[0] = '\0';
+@@ -444,11 +444,12 @@ out:
+ 		}
+ 
+ 		size = strlen(event->mmap2.filename) + 1;
+-		size = PERF_ALIGN(size, sizeof(u64));
++		aligned_size = PERF_ALIGN(size, sizeof(u64));
+ 		event->mmap2.len -= event->mmap.start;
+ 		event->mmap2.header.size = (sizeof(event->mmap2) -
+-					(sizeof(event->mmap2.filename) - size));
+-		memset(event->mmap2.filename + size, 0, machine->id_hdr_size);
++					(sizeof(event->mmap2.filename) - aligned_size));
++		memset(event->mmap2.filename + size, 0, machine->id_hdr_size +
++			(aligned_size - size));
+ 		event->mmap2.header.size += machine->id_hdr_size;
+ 		event->mmap2.pid = tgid;
+ 		event->mmap2.tid = pid;
+diff --git a/tools/testing/selftests/arm64/fp/sve-ptrace.c b/tools/testing/selftests/arm64/fp/sve-ptrace.c
+index b2282be6f9384..612d3899614ac 100644
+--- a/tools/testing/selftests/arm64/fp/sve-ptrace.c
++++ b/tools/testing/selftests/arm64/fp/sve-ptrace.c
+@@ -332,5 +332,5 @@ int main(void)
+ 
+ 	ksft_print_cnts();
+ 
+-	return 0;
++	return ret;
+ }
+diff --git a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
+index 9afe947cfae95..ba6eadfec5653 100644
+--- a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
++++ b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
+@@ -508,10 +508,8 @@ int _ip6geneve_get_tunnel(struct __sk_buff *skb)
+ 	}
+ 
+ 	ret = bpf_skb_get_tunnel_opt(skb, &gopt, sizeof(gopt));
+-	if (ret < 0) {
+-		ERROR(ret);
+-		return TC_ACT_SHOT;
+-	}
++	if (ret < 0)
++		gopt.opt_class = 0;
+ 
+ 	bpf_trace_printk(fmt, sizeof(fmt),
+ 			key.tunnel_id, key.remote_ipv4, gopt.opt_class);
+diff --git a/tools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh b/tools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh
+index ce6bea9675c07..0ccb1dda099ae 100755
+--- a/tools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh
++++ b/tools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh
+@@ -658,7 +658,7 @@ test_ecn_decap()
+ 	# In accordance with INET_ECN_decapsulate()
+ 	__test_ecn_decap 00 00 0x00
+ 	__test_ecn_decap 01 01 0x01
+-	__test_ecn_decap 02 01 0x02
++	__test_ecn_decap 02 01 0x01
+ 	__test_ecn_decap 01 03 0x03
+ 	__test_ecn_decap 02 03 0x03
+ 	test_ecn_decap_error
+diff --git a/tools/testing/selftests/net/reuseaddr_ports_exhausted.c b/tools/testing/selftests/net/reuseaddr_ports_exhausted.c
+index 7b01b7c2ec104..066efd30e2946 100644
+--- a/tools/testing/selftests/net/reuseaddr_ports_exhausted.c
++++ b/tools/testing/selftests/net/reuseaddr_ports_exhausted.c
+@@ -30,25 +30,25 @@ struct reuse_opts {
+ };
+ 
+ struct reuse_opts unreusable_opts[12] = {
+-	{0, 0, 0, 0},
+-	{0, 0, 0, 1},
+-	{0, 0, 1, 0},
+-	{0, 0, 1, 1},
+-	{0, 1, 0, 0},
+-	{0, 1, 0, 1},
+-	{0, 1, 1, 0},
+-	{0, 1, 1, 1},
+-	{1, 0, 0, 0},
+-	{1, 0, 0, 1},
+-	{1, 0, 1, 0},
+-	{1, 0, 1, 1},
++	{{0, 0}, {0, 0}},
++	{{0, 0}, {0, 1}},
++	{{0, 0}, {1, 0}},
++	{{0, 0}, {1, 1}},
++	{{0, 1}, {0, 0}},
++	{{0, 1}, {0, 1}},
++	{{0, 1}, {1, 0}},
++	{{0, 1}, {1, 1}},
++	{{1, 0}, {0, 0}},
++	{{1, 0}, {0, 1}},
++	{{1, 0}, {1, 0}},
++	{{1, 0}, {1, 1}},
+ };
+ 
+ struct reuse_opts reusable_opts[4] = {
+-	{1, 1, 0, 0},
+-	{1, 1, 0, 1},
+-	{1, 1, 1, 0},
+-	{1, 1, 1, 1},
++	{{1, 1}, {0, 0}},
++	{{1, 1}, {0, 1}},
++	{{1, 1}, {1, 0}},
++	{{1, 1}, {1, 1}},
+ };
+ 
+ int bind_port(struct __test_metadata *_metadata, int reuseaddr, int reuseport)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-04-07 13:27 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-04-07 13:27 UTC (permalink / raw
  To: gentoo-commits

commit:     8837803fb00c2684b1474e4545c52717ada44b80
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr  7 13:27:48 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr  7 13:27:48 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8837803f

Linux patch 5.10.28

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1027_linux-5.10.28.patch | 7753 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7757 insertions(+)

diff --git a/0000_README b/0000_README
index 936c976..a34afba 100644
--- a/0000_README
+++ b/0000_README
@@ -151,6 +151,10 @@ Patch:  1026_linux-5.10.27.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.27
 
+Patch:  1027_linux-5.10.28.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.28
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1027_linux-5.10.28.patch b/1027_linux-5.10.28.patch
new file mode 100644
index 0000000..795eb7f
--- /dev/null
+++ b/1027_linux-5.10.28.patch
@@ -0,0 +1,7753 @@
+diff --git a/Makefile b/Makefile
+index 4801cc25e3472..cb76f64abb6da 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 27
++SUBLEVEL = 28
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 6aabf1eced31e..afdad76078506 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -1447,14 +1447,30 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
+ 
+ static bool inside_linear_region(u64 start, u64 size)
+ {
++	u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
++	u64 end_linear_pa = __pa(PAGE_END - 1);
++
++	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
++		/*
++		 * Check for a wrap, it is possible because of randomized linear
++		 * mapping the start physical address is actually bigger than
++		 * the end physical address. In this case set start to zero
++		 * because [0, end_linear_pa] range must still be able to cover
++		 * all addressable physical addresses.
++		 */
++		if (start_linear_pa > end_linear_pa)
++			start_linear_pa = 0;
++	}
++
++	WARN_ON(start_linear_pa > end_linear_pa);
++
+ 	/*
+ 	 * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
+ 	 * accommodating both its ends but excluding PAGE_END. Max physical
+ 	 * range which can be mapped inside this linear mapping range, must
+ 	 * also be derived from its end points.
+ 	 */
+-	return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
+-	       (start + size - 1) <= __pa(PAGE_END - 1);
++	return start >= start_linear_pa && (start + size - 1) <= end_linear_pa;
+ }
+ 
+ int arch_add_memory(int nid, u64 start, u64 size,
+diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
+index 824b2c9da75bd..f944062c9d990 100644
+--- a/arch/riscv/include/asm/uaccess.h
++++ b/arch/riscv/include/asm/uaccess.h
+@@ -306,7 +306,9 @@ do {								\
+  * data types like structures or arrays.
+  *
+  * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+- * to the result of dereferencing @ptr.
++ * to the result of dereferencing @ptr. The value of @x is copied to avoid
++ * re-ordering where @x is evaluated inside the block that enables user-space
++ * access (thus bypassing user space protection if @x is a function).
+  *
+  * Caller must check the pointer with access_ok() before calling this
+  * function.
+@@ -316,12 +318,13 @@ do {								\
+ #define __put_user(x, ptr)					\
+ ({								\
+ 	__typeof__(*(ptr)) __user *__gu_ptr = (ptr);		\
++	__typeof__(*__gu_ptr) __val = (x);			\
+ 	long __pu_err = 0;					\
+ 								\
+ 	__chk_user_ptr(__gu_ptr);				\
+ 								\
+ 	__enable_user_access();					\
+-	__put_user_nocheck(x, __gu_ptr, __pu_err);		\
++	__put_user_nocheck(__val, __gu_ptr, __pu_err);		\
+ 	__disable_user_access();				\
+ 								\
+ 	__pu_err;						\
+diff --git a/arch/s390/include/asm/vdso/data.h b/arch/s390/include/asm/vdso/data.h
+index 7b3cdb4a5f481..73ee891426662 100644
+--- a/arch/s390/include/asm/vdso/data.h
++++ b/arch/s390/include/asm/vdso/data.h
+@@ -6,7 +6,7 @@
+ #include <vdso/datapage.h>
+ 
+ struct arch_vdso_data {
+-	__u64 tod_steering_delta;
++	__s64 tod_steering_delta;
+ 	__u64 tod_steering_end;
+ };
+ 
+diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
+index 0ac30ee2c6330..b6517453fa234 100644
+--- a/arch/s390/kernel/time.c
++++ b/arch/s390/kernel/time.c
+@@ -398,6 +398,7 @@ static void clock_sync_global(unsigned long long delta)
+ 		      tod_steering_delta);
+ 	tod_steering_end = now + (abs(tod_steering_delta) << 15);
+ 	vdso_data->arch_data.tod_steering_end = tod_steering_end;
++	vdso_data->arch_data.tod_steering_delta = tod_steering_delta;
+ 
+ 	/* Update LPAR offset. */
+ 	if (ptff_query(PTFF_QTO) && ptff(&qto, sizeof(qto), PTFF_QTO) == 0)
+diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
+index c0538f82c9a22..57ef2094af93e 100644
+--- a/arch/x86/include/asm/smp.h
++++ b/arch/x86/include/asm/smp.h
+@@ -132,6 +132,7 @@ void native_play_dead(void);
+ void play_dead_common(void);
+ void wbinvd_on_cpu(int cpu);
+ int wbinvd_on_all_cpus(void);
++bool wakeup_cpu0(void);
+ 
+ void native_smp_send_reschedule(int cpu);
+ void native_send_call_func_ipi(const struct cpumask *mask);
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 7bdc0239a9435..14cd3186dc77d 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -1554,10 +1554,18 @@ void __init acpi_boot_table_init(void)
+ 	/*
+ 	 * Initialize the ACPI boot-time table parser.
+ 	 */
+-	if (acpi_table_init()) {
++	if (acpi_locate_initial_tables())
+ 		disable_acpi();
+-		return;
+-	}
++	else
++		acpi_reserve_initial_tables();
++}
++
++int __init early_acpi_boot_init(void)
++{
++	if (acpi_disabled)
++		return 1;
++
++	acpi_table_init_complete();
+ 
+ 	acpi_table_parse(ACPI_SIG_BOOT, acpi_parse_sbf);
+ 
+@@ -1570,18 +1578,9 @@ void __init acpi_boot_table_init(void)
+ 		} else {
+ 			printk(KERN_WARNING PREFIX "Disabling ACPI support\n");
+ 			disable_acpi();
+-			return;
++			return 1;
+ 		}
+ 	}
+-}
+-
+-int __init early_acpi_boot_init(void)
+-{
+-	/*
+-	 * If acpi_disabled, bail out
+-	 */
+-	if (acpi_disabled)
+-		return 1;
+ 
+ 	/*
+ 	 * Process the Multiple APIC Description Table (MADT), if present
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 84f581c91db45..d23795057c4f1 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -1051,6 +1051,9 @@ void __init setup_arch(char **cmdline_p)
+ 
+ 	cleanup_highmap();
+ 
++	/* Look for ACPI tables and reserve memory occupied by them. */
++	acpi_boot_table_init();
++
+ 	memblock_set_current_limit(ISA_END_ADDRESS);
+ 	e820__memblock_setup();
+ 
+@@ -1136,11 +1139,6 @@ void __init setup_arch(char **cmdline_p)
+ 
+ 	early_platform_quirks();
+ 
+-	/*
+-	 * Parse the ACPI tables for possible boot-time SMP configuration.
+-	 */
+-	acpi_boot_table_init();
+-
+ 	early_acpi_boot_init();
+ 
+ 	initmem_init();
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index c65642c10aaea..5ea5f964f0a97 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -1655,7 +1655,7 @@ void play_dead_common(void)
+ 	local_irq_disable();
+ }
+ 
+-static bool wakeup_cpu0(void)
++bool wakeup_cpu0(void)
+ {
+ 	if (smp_processor_id() == 0 && enable_start_cpu0)
+ 		return true;
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index 1008cc6cb66c5..da6ce73c10bb7 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -246,11 +246,18 @@ static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
+ 	return true;
+ }
+ 
+-static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
++static bool nested_vmcb_check_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
+ {
+ 	struct kvm_vcpu *vcpu = &svm->vcpu;
+ 	bool vmcb12_lma;
+ 
++	/*
++	 * FIXME: these should be done after copying the fields,
++	 * to avoid TOC/TOU races.  For these save area checks
++	 * the possible damage is limited since kvm_set_cr0 and
++	 * kvm_set_cr4 handle failure; EFER_SVME is an exception
++	 * so it is force-set later in nested_prepare_vmcb_save.
++	 */
+ 	if ((vmcb12->save.efer & EFER_SVME) == 0)
+ 		return false;
+ 
+@@ -271,7 +278,7 @@ static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
+ 	if (kvm_valid_cr4(&svm->vcpu, vmcb12->save.cr4))
+ 		return false;
+ 
+-	return nested_vmcb_check_controls(&vmcb12->control);
++	return true;
+ }
+ 
+ static void load_nested_vmcb_control(struct vcpu_svm *svm,
+@@ -396,7 +403,14 @@ static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
+ 	svm->vmcb->save.gdtr = vmcb12->save.gdtr;
+ 	svm->vmcb->save.idtr = vmcb12->save.idtr;
+ 	kvm_set_rflags(&svm->vcpu, vmcb12->save.rflags);
+-	svm_set_efer(&svm->vcpu, vmcb12->save.efer);
++
++	/*
++	 * Force-set EFER_SVME even though it is checked earlier on the
++	 * VMCB12, because the guest can flip the bit between the check
++	 * and now.  Clearing EFER_SVME would call svm_free_nested.
++	 */
++	svm_set_efer(&svm->vcpu, vmcb12->save.efer | EFER_SVME);
++
+ 	svm_set_cr0(&svm->vcpu, vmcb12->save.cr0);
+ 	svm_set_cr4(&svm->vcpu, vmcb12->save.cr4);
+ 	svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = vmcb12->save.cr2;
+@@ -454,7 +468,6 @@ int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb12_gpa,
+ 	int ret;
+ 
+ 	svm->nested.vmcb12_gpa = vmcb12_gpa;
+-	load_nested_vmcb_control(svm, &vmcb12->control);
+ 	nested_prepare_vmcb_save(svm, vmcb12);
+ 	nested_prepare_vmcb_control(svm);
+ 
+@@ -501,7 +514,10 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
+ 	if (WARN_ON_ONCE(!svm->nested.initialized))
+ 		return -EINVAL;
+ 
+-	if (!nested_vmcb_checks(svm, vmcb12)) {
++	load_nested_vmcb_control(svm, &vmcb12->control);
++
++	if (!nested_vmcb_check_save(svm, vmcb12) ||
++	    !nested_vmcb_check_controls(&svm->nested.ctl)) {
+ 		vmcb12->control.exit_code    = SVM_EXIT_ERR;
+ 		vmcb12->control.exit_code_hi = 0;
+ 		vmcb12->control.exit_info_1  = 0;
+@@ -1205,6 +1221,8 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
+ 	 */
+ 	if (!(save->cr0 & X86_CR0_PG))
+ 		goto out_free;
++	if (!(save->efer & EFER_SVME))
++		goto out_free;
+ 
+ 	/*
+ 	 * All checks done, we can enter guest mode.  L1 control fields
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 796506dcfc42e..023ac12f54a29 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -1735,7 +1735,7 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
+  * add rsp, 8                      // skip eth_type_trans's frame
+  * ret                             // return to its caller
+  */
+-int arch_prepare_bpf_trampoline(void *image, void *image_end,
++int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
+ 				const struct btf_func_model *m, u32 flags,
+ 				struct bpf_tramp_progs *tprogs,
+ 				void *orig_call)
+@@ -1774,6 +1774,15 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
+ 
+ 	save_regs(m, &prog, nr_args, stack_size);
+ 
++	if (flags & BPF_TRAMP_F_CALL_ORIG) {
++		/* arg1: mov rdi, im */
++		emit_mov_imm64(&prog, BPF_REG_1, (long) im >> 32, (u32) (long) im);
++		if (emit_call(&prog, __bpf_tramp_enter, prog)) {
++			ret = -EINVAL;
++			goto cleanup;
++		}
++	}
++
+ 	if (fentry->nr_progs)
+ 		if (invoke_bpf(m, &prog, fentry, stack_size))
+ 			return -EINVAL;
+@@ -1792,8 +1801,7 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
+ 	}
+ 
+ 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+-		if (fentry->nr_progs || fmod_ret->nr_progs)
+-			restore_regs(m, &prog, nr_args, stack_size);
++		restore_regs(m, &prog, nr_args, stack_size);
+ 
+ 		/* call original function */
+ 		if (emit_call(&prog, orig_call, prog)) {
+@@ -1802,6 +1810,9 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
+ 		}
+ 		/* remember return value in a stack for bpf prog to access */
+ 		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
++		im->ip_after_call = prog;
++		memcpy(prog, ideal_nops[NOP_ATOMIC5], X86_PATCH_SIZE);
++		prog += X86_PATCH_SIZE;
+ 	}
+ 
+ 	if (fmod_ret->nr_progs) {
+@@ -1832,9 +1843,17 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
+ 	 * the return value is only updated on the stack and still needs to be
+ 	 * restored to R0.
+ 	 */
+-	if (flags & BPF_TRAMP_F_CALL_ORIG)
++	if (flags & BPF_TRAMP_F_CALL_ORIG) {
++		im->ip_epilogue = prog;
++		/* arg1: mov rdi, im */
++		emit_mov_imm64(&prog, BPF_REG_1, (long) im >> 32, (u32) (long) im);
++		if (emit_call(&prog, __bpf_tramp_exit, prog)) {
++			ret = -EINVAL;
++			goto cleanup;
++		}
+ 		/* restore original return value back into RAX */
+ 		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
++	}
+ 
+ 	EMIT1(0x5B); /* pop rbx */
+ 	EMIT1(0xC9); /* leave */
+diff --git a/arch/xtensa/kernel/coprocessor.S b/arch/xtensa/kernel/coprocessor.S
+index c426b846beefb..45cc0ae0af6f9 100644
+--- a/arch/xtensa/kernel/coprocessor.S
++++ b/arch/xtensa/kernel/coprocessor.S
+@@ -99,37 +99,6 @@
+ 	LOAD_CP_REGS_TAB(6)
+ 	LOAD_CP_REGS_TAB(7)
+ 
+-/*
+- * coprocessor_flush(struct thread_info*, index)
+- *                             a2        a3
+- *
+- * Save coprocessor registers for coprocessor 'index'.
+- * The register values are saved to or loaded from the coprocessor area 
+- * inside the task_info structure.
+- *
+- * Note that this function doesn't update the coprocessor_owner information!
+- *
+- */
+-
+-ENTRY(coprocessor_flush)
+-
+-	/* reserve 4 bytes on stack to save a0 */
+-	abi_entry(4)
+-
+-	s32i	a0, a1, 0
+-	movi	a0, .Lsave_cp_regs_jump_table
+-	addx8	a3, a3, a0
+-	l32i	a4, a3, 4
+-	l32i	a3, a3, 0
+-	add	a2, a2, a4
+-	beqz	a3, 1f
+-	callx0	a3
+-1:	l32i	a0, a1, 0
+-
+-	abi_ret(4)
+-
+-ENDPROC(coprocessor_flush)
+-
+ /*
+  * Entry condition:
+  *
+@@ -245,6 +214,39 @@ ENTRY(fast_coprocessor)
+ 
+ ENDPROC(fast_coprocessor)
+ 
++	.text
++
++/*
++ * coprocessor_flush(struct thread_info*, index)
++ *                             a2        a3
++ *
++ * Save coprocessor registers for coprocessor 'index'.
++ * The register values are saved to or loaded from the coprocessor area
++ * inside the task_info structure.
++ *
++ * Note that this function doesn't update the coprocessor_owner information!
++ *
++ */
++
++ENTRY(coprocessor_flush)
++
++	/* reserve 4 bytes on stack to save a0 */
++	abi_entry(4)
++
++	s32i	a0, a1, 0
++	movi	a0, .Lsave_cp_regs_jump_table
++	addx8	a3, a3, a0
++	l32i	a4, a3, 4
++	l32i	a3, a3, 0
++	add	a2, a2, a4
++	beqz	a3, 1f
++	callx0	a3
++1:	l32i	a0, a1, 0
++
++	abi_ret(4)
++
++ENDPROC(coprocessor_flush)
++
+ 	.data
+ 
+ ENTRY(coprocessor_owner)
+diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
+index 7666408ce12a4..95a74890c7e99 100644
+--- a/arch/xtensa/mm/fault.c
++++ b/arch/xtensa/mm/fault.c
+@@ -112,8 +112,11 @@ good_area:
+ 	 */
+ 	fault = handle_mm_fault(vma, address, flags, regs);
+ 
+-	if (fault_signal_pending(fault, regs))
++	if (fault_signal_pending(fault, regs)) {
++		if (!user_mode(regs))
++			goto bad_page_fault;
+ 		return;
++	}
+ 
+ 	if (unlikely(fault & VM_FAULT_ERROR)) {
+ 		if (fault & VM_FAULT_OOM)
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index f66236cff69b0..4e303964f7e7f 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -29,6 +29,7 @@
+  */
+ #ifdef CONFIG_X86
+ #include <asm/apic.h>
++#include <asm/cpu.h>
+ #endif
+ 
+ #define ACPI_PROCESSOR_CLASS            "processor"
+@@ -542,6 +543,12 @@ static int acpi_idle_play_dead(struct cpuidle_device *dev, int index)
+ 			wait_for_freeze();
+ 		} else
+ 			return -ENODEV;
++
++#if defined(CONFIG_X86) && defined(CONFIG_HOTPLUG_CPU)
++		/* If NMI wants to wake up CPU0, start CPU0. */
++		if (wakeup_cpu0())
++			start_cpu0();
++#endif
+ 	}
+ 
+ 	/* Never reached */
+diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c
+index e48690a006a4e..9d581045acff0 100644
+--- a/drivers/acpi/tables.c
++++ b/drivers/acpi/tables.c
+@@ -780,7 +780,7 @@ acpi_status acpi_os_table_override(struct acpi_table_header *existing_table,
+ }
+ 
+ /*
+- * acpi_table_init()
++ * acpi_locate_initial_tables()
+  *
+  * find RSDP, find and checksum SDT/XSDT.
+  * checksum all tables, print SDT/XSDT
+@@ -788,7 +788,7 @@ acpi_status acpi_os_table_override(struct acpi_table_header *existing_table,
+  * result: sdt_entry[] is initialized
+  */
+ 
+-int __init acpi_table_init(void)
++int __init acpi_locate_initial_tables(void)
+ {
+ 	acpi_status status;
+ 
+@@ -803,9 +803,45 @@ int __init acpi_table_init(void)
+ 	status = acpi_initialize_tables(initial_tables, ACPI_MAX_TABLES, 0);
+ 	if (ACPI_FAILURE(status))
+ 		return -EINVAL;
+-	acpi_table_initrd_scan();
+ 
++	return 0;
++}
++
++void __init acpi_reserve_initial_tables(void)
++{
++	int i;
++
++	for (i = 0; i < ACPI_MAX_TABLES; i++) {
++		struct acpi_table_desc *table_desc = &initial_tables[i];
++		u64 start = table_desc->address;
++		u64 size = table_desc->length;
++
++		if (!start || !size)
++			break;
++
++		pr_info("Reserving %4s table memory at [mem 0x%llx-0x%llx]\n",
++			table_desc->signature.ascii, start, start + size - 1);
++
++		memblock_reserve(start, size);
++	}
++}
++
++void __init acpi_table_init_complete(void)
++{
++	acpi_table_initrd_scan();
+ 	check_multiple_madt();
++}
++
++int __init acpi_table_init(void)
++{
++	int ret;
++
++	ret = acpi_locate_initial_tables();
++	if (ret)
++		return ret;
++
++	acpi_table_init_complete();
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 3c94ebc8d4bb0..43130d64e213d 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -97,6 +97,9 @@ static void deferred_probe_work_func(struct work_struct *work)
+ 
+ 		get_device(dev);
+ 
++		kfree(dev->p->deferred_probe_reason);
++		dev->p->deferred_probe_reason = NULL;
++
+ 		/*
+ 		 * Drop the mutex while probing each device; the probe path may
+ 		 * manipulate the deferred list
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 5ef67bacb585e..d6d73ff94e88f 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1690,8 +1690,8 @@ void pm_runtime_get_suppliers(struct device *dev)
+ 				device_links_read_lock_held())
+ 		if (link->flags & DL_FLAG_PM_RUNTIME) {
+ 			link->supplier_preactivated = true;
+-			refcount_inc(&link->rpm_active);
+ 			pm_runtime_get_sync(link->supplier);
++			refcount_inc(&link->rpm_active);
+ 		}
+ 
+ 	device_links_read_unlock(idx);
+@@ -1704,6 +1704,8 @@ void pm_runtime_get_suppliers(struct device *dev)
+ void pm_runtime_put_suppliers(struct device *dev)
+ {
+ 	struct device_link *link;
++	unsigned long flags;
++	bool put;
+ 	int idx;
+ 
+ 	idx = device_links_read_lock();
+@@ -1712,7 +1714,11 @@ void pm_runtime_put_suppliers(struct device *dev)
+ 				device_links_read_lock_held())
+ 		if (link->supplier_preactivated) {
+ 			link->supplier_preactivated = false;
+-			if (refcount_dec_not_one(&link->rpm_active))
++			spin_lock_irqsave(&dev->power.lock, flags);
++			put = pm_runtime_status_suspended(dev) &&
++			      refcount_dec_not_one(&link->rpm_active);
++			spin_unlock_irqrestore(&dev->power.lock, flags);
++			if (put)
+ 				pm_runtime_put(link->supplier);
+ 		}
+ 
+diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c
+index 0a6438cbb3f30..e7a9561a826d3 100644
+--- a/drivers/extcon/extcon.c
++++ b/drivers/extcon/extcon.c
+@@ -1241,6 +1241,7 @@ int extcon_dev_register(struct extcon_dev *edev)
+ 				sizeof(*edev->nh), GFP_KERNEL);
+ 	if (!edev->nh) {
+ 		ret = -ENOMEM;
++		device_unregister(&edev->dev);
+ 		goto err_dev;
+ 	}
+ 
+diff --git a/drivers/firewire/nosy.c b/drivers/firewire/nosy.c
+index 5fd6a60b67410..88ed971e32c0d 100644
+--- a/drivers/firewire/nosy.c
++++ b/drivers/firewire/nosy.c
+@@ -346,6 +346,7 @@ nosy_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	struct client *client = file->private_data;
+ 	spinlock_t *client_list_lock = &client->lynx->client_list_lock;
+ 	struct nosy_stats stats;
++	int ret;
+ 
+ 	switch (cmd) {
+ 	case NOSY_IOC_GET_STATS:
+@@ -360,11 +361,15 @@ nosy_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 			return 0;
+ 
+ 	case NOSY_IOC_START:
++		ret = -EBUSY;
+ 		spin_lock_irq(client_list_lock);
+-		list_add_tail(&client->link, &client->lynx->client_list);
++		if (list_empty(&client->link)) {
++			list_add_tail(&client->link, &client->lynx->client_list);
++			ret = 0;
++		}
+ 		spin_unlock_irq(client_list_lock);
+ 
+-		return 0;
++		return ret;
+ 
+ 	case NOSY_IOC_STOP:
+ 		spin_lock_irq(client_list_lock);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index df110afa97bf4..605d1545274c2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -2223,8 +2223,8 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
+ 	uint64_t eaddr;
+ 
+ 	/* validate the parameters */
+-	if (saddr & AMDGPU_GPU_PAGE_MASK || offset & AMDGPU_GPU_PAGE_MASK ||
+-	    size == 0 || size & AMDGPU_GPU_PAGE_MASK)
++	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||
++	    size == 0 || size & ~PAGE_MASK)
+ 		return -EINVAL;
+ 
+ 	/* make sure object fit at this offset */
+@@ -2289,8 +2289,8 @@ int amdgpu_vm_bo_replace_map(struct amdgpu_device *adev,
+ 	int r;
+ 
+ 	/* validate the parameters */
+-	if (saddr & AMDGPU_GPU_PAGE_MASK || offset & AMDGPU_GPU_PAGE_MASK ||
+-	    size == 0 || size & AMDGPU_GPU_PAGE_MASK)
++	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||
++	    size == 0 || size & ~PAGE_MASK)
+ 		return -EINVAL;
+ 
+ 	/* make sure object fit at this offset */
+@@ -2435,7 +2435,7 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
+ 			after->start = eaddr + 1;
+ 			after->last = tmp->last;
+ 			after->offset = tmp->offset;
+-			after->offset += after->start - tmp->start;
++			after->offset += (after->start - tmp->start) << PAGE_SHIFT;
+ 			after->flags = tmp->flags;
+ 			after->bo_va = tmp->bo_va;
+ 			list_add(&after->list, &tmp->bo_va->invalids);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c b/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
+index b258a3dae767f..159add0f5aaae 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
+@@ -155,7 +155,7 @@ static int dbgdev_diq_submit_ib(struct kfd_dbgdev *dbgdev,
+ 
+ 	/* Wait till CP writes sync code: */
+ 	status = amdkfd_fence_wait_timeout(
+-			(unsigned int *) rm_state,
++			rm_state,
+ 			QUEUESTATE__ACTIVE, 1500);
+ 
+ 	kfd_gtt_sa_free(dbgdev->dev, mem_obj);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index c0ae04a08625c..8e5cfb1f8a512 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -1167,7 +1167,7 @@ static int start_cpsch(struct device_queue_manager *dqm)
+ 	if (retval)
+ 		goto fail_allocate_vidmem;
+ 
+-	dqm->fence_addr = dqm->fence_mem->cpu_ptr;
++	dqm->fence_addr = (uint64_t *)dqm->fence_mem->cpu_ptr;
+ 	dqm->fence_gpu_addr = dqm->fence_mem->gpu_addr;
+ 
+ 	init_interrupts(dqm);
+@@ -1340,8 +1340,8 @@ out:
+ 	return retval;
+ }
+ 
+-int amdkfd_fence_wait_timeout(unsigned int *fence_addr,
+-				unsigned int fence_value,
++int amdkfd_fence_wait_timeout(uint64_t *fence_addr,
++				uint64_t fence_value,
+ 				unsigned int timeout_ms)
+ {
+ 	unsigned long end_jiffies = msecs_to_jiffies(timeout_ms) + jiffies;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
+index 7351dd195274e..45f8159465544 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
+@@ -192,7 +192,7 @@ struct device_queue_manager {
+ 	uint16_t		vmid_pasid[VMID_NUM];
+ 	uint64_t		pipelines_addr;
+ 	uint64_t		fence_gpu_addr;
+-	unsigned int		*fence_addr;
++	uint64_t		*fence_addr;
+ 	struct kfd_mem_obj	*fence_mem;
+ 	bool			active_runlist;
+ 	int			sched_policy;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c
+index 47ee40fbbd86c..c85e4f9d92cf2 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c
+@@ -345,7 +345,7 @@ fail_create_runlist_ib:
+ }
+ 
+ int pm_send_query_status(struct packet_manager *pm, uint64_t fence_address,
+-			uint32_t fence_value)
++			uint64_t fence_value)
+ {
+ 	uint32_t *buffer, size;
+ 	int retval = 0;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
+index dfaf771a42e66..e3ba0cd3b6fa7 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
+@@ -283,7 +283,7 @@ static int pm_unmap_queues_v9(struct packet_manager *pm, uint32_t *buffer,
+ }
+ 
+ static int pm_query_status_v9(struct packet_manager *pm, uint32_t *buffer,
+-			uint64_t fence_address,	uint32_t fence_value)
++			uint64_t fence_address,	uint64_t fence_value)
+ {
+ 	struct pm4_mes_query_status *packet;
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_vi.c b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_vi.c
+index a852e0d7d804f..08442e7d99440 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_vi.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_vi.c
+@@ -263,7 +263,7 @@ static int pm_unmap_queues_vi(struct packet_manager *pm, uint32_t *buffer,
+ }
+ 
+ static int pm_query_status_vi(struct packet_manager *pm, uint32_t *buffer,
+-			uint64_t fence_address,	uint32_t fence_value)
++			uint64_t fence_address,	uint64_t fence_value)
+ {
+ 	struct pm4_mes_query_status *packet;
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index c77cf23032ac5..057c48a9b53a7 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -1006,8 +1006,8 @@ int pqm_get_wave_state(struct process_queue_manager *pqm,
+ 		       u32 *ctl_stack_used_size,
+ 		       u32 *save_area_used_size);
+ 
+-int amdkfd_fence_wait_timeout(unsigned int *fence_addr,
+-			      unsigned int fence_value,
++int amdkfd_fence_wait_timeout(uint64_t *fence_addr,
++			      uint64_t fence_value,
+ 			      unsigned int timeout_ms);
+ 
+ /* Packet Manager */
+@@ -1043,7 +1043,7 @@ struct packet_manager_funcs {
+ 			uint32_t filter_param, bool reset,
+ 			unsigned int sdma_engine);
+ 	int (*query_status)(struct packet_manager *pm, uint32_t *buffer,
+-			uint64_t fence_address,	uint32_t fence_value);
++			uint64_t fence_address,	uint64_t fence_value);
+ 	int (*release_mem)(uint64_t gpu_addr, uint32_t *buffer);
+ 
+ 	/* Packet sizes */
+@@ -1065,7 +1065,7 @@ int pm_send_set_resources(struct packet_manager *pm,
+ 				struct scheduling_resources *res);
+ int pm_send_runlist(struct packet_manager *pm, struct list_head *dqm_queues);
+ int pm_send_query_status(struct packet_manager *pm, uint64_t fence_address,
+-				uint32_t fence_value);
++				uint64_t fence_value);
+ 
+ int pm_send_unmap_queue(struct packet_manager *pm, enum kfd_queue_type type,
+ 			enum kfd_unmap_queues_filter mode,
+diff --git a/drivers/gpu/drm/imx/imx-drm-core.c b/drivers/gpu/drm/imx/imx-drm-core.c
+index 9bf5ad6d18a22..a1423be70721a 100644
+--- a/drivers/gpu/drm/imx/imx-drm-core.c
++++ b/drivers/gpu/drm/imx/imx-drm-core.c
+@@ -215,7 +215,7 @@ static int imx_drm_bind(struct device *dev)
+ 
+ 	ret = drmm_mode_config_init(drm);
+ 	if (ret)
+-		return ret;
++		goto err_kms;
+ 
+ 	ret = drm_vblank_init(drm, MAX_CRTC);
+ 	if (ret)
+diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
+index b2c8c68b7e261..3a244ef7f30f3 100644
+--- a/drivers/gpu/drm/tegra/dc.c
++++ b/drivers/gpu/drm/tegra/dc.c
+@@ -2499,22 +2499,18 @@ static int tegra_dc_couple(struct tegra_dc *dc)
+ 	 * POWER_CONTROL registers during CRTC enabling.
+ 	 */
+ 	if (dc->soc->coupled_pm && dc->pipe == 1) {
+-		u32 flags = DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_CONSUMER;
+-		struct device_link *link;
+-		struct device *partner;
++		struct device *companion;
++		struct tegra_dc *parent;
+ 
+-		partner = driver_find_device(dc->dev->driver, NULL, NULL,
+-					     tegra_dc_match_by_pipe);
+-		if (!partner)
++		companion = driver_find_device(dc->dev->driver, NULL, (const void *)0,
++					       tegra_dc_match_by_pipe);
++		if (!companion)
+ 			return -EPROBE_DEFER;
+ 
+-		link = device_link_add(dc->dev, partner, flags);
+-		if (!link) {
+-			dev_err(dc->dev, "failed to link controllers\n");
+-			return -EINVAL;
+-		}
++		parent = dev_get_drvdata(companion);
++		dc->client.parent = &parent->client;
+ 
+-		dev_dbg(dc->dev, "coupled to %s\n", dev_name(partner));
++		dev_dbg(dc->dev, "coupled to %s\n", dev_name(companion));
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/tegra/sor.c b/drivers/gpu/drm/tegra/sor.c
+index f02a035dda453..7b88261f57bb6 100644
+--- a/drivers/gpu/drm/tegra/sor.c
++++ b/drivers/gpu/drm/tegra/sor.c
+@@ -3115,6 +3115,12 @@ static int tegra_sor_init(struct host1x_client *client)
+ 	 * kernel is possible.
+ 	 */
+ 	if (sor->rst) {
++		err = pm_runtime_resume_and_get(sor->dev);
++		if (err < 0) {
++			dev_err(sor->dev, "failed to get runtime PM: %d\n", err);
++			return err;
++		}
++
+ 		err = reset_control_acquire(sor->rst);
+ 		if (err < 0) {
+ 			dev_err(sor->dev, "failed to acquire SOR reset: %d\n",
+@@ -3148,6 +3154,7 @@ static int tegra_sor_init(struct host1x_client *client)
+ 		}
+ 
+ 		reset_control_release(sor->rst);
++		pm_runtime_put(sor->dev);
+ 	}
+ 
+ 	err = clk_prepare_enable(sor->clk_safe);
+diff --git a/drivers/net/can/Makefile b/drivers/net/can/Makefile
+index 22164300122d5..a2b4463d84802 100644
+--- a/drivers/net/can/Makefile
++++ b/drivers/net/can/Makefile
+@@ -7,12 +7,7 @@ obj-$(CONFIG_CAN_VCAN)		+= vcan.o
+ obj-$(CONFIG_CAN_VXCAN)		+= vxcan.o
+ obj-$(CONFIG_CAN_SLCAN)		+= slcan.o
+ 
+-obj-$(CONFIG_CAN_DEV)		+= can-dev.o
+-can-dev-y			+= dev.o
+-can-dev-y			+= rx-offload.o
+-
+-can-dev-$(CONFIG_CAN_LEDS)	+= led.o
+-
++obj-y				+= dev/
+ obj-y				+= rcar/
+ obj-y				+= spi/
+ obj-y				+= usb/
+diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
+deleted file mode 100644
+index dc9b4aae3abb6..0000000000000
+--- a/drivers/net/can/dev.c
++++ /dev/null
+@@ -1,1339 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/* Copyright (C) 2005 Marc Kleine-Budde, Pengutronix
+- * Copyright (C) 2006 Andrey Volkov, Varma Electronics
+- * Copyright (C) 2008-2009 Wolfgang Grandegger <wg@grandegger.com>
+- */
+-
+-#include <linux/module.h>
+-#include <linux/kernel.h>
+-#include <linux/slab.h>
+-#include <linux/netdevice.h>
+-#include <linux/if_arp.h>
+-#include <linux/workqueue.h>
+-#include <linux/can.h>
+-#include <linux/can/can-ml.h>
+-#include <linux/can/dev.h>
+-#include <linux/can/skb.h>
+-#include <linux/can/netlink.h>
+-#include <linux/can/led.h>
+-#include <linux/of.h>
+-#include <net/rtnetlink.h>
+-
+-#define MOD_DESC "CAN device driver interface"
+-
+-MODULE_DESCRIPTION(MOD_DESC);
+-MODULE_LICENSE("GPL v2");
+-MODULE_AUTHOR("Wolfgang Grandegger <wg@grandegger.com>");
+-
+-/* CAN DLC to real data length conversion helpers */
+-
+-static const u8 dlc2len[] = {0, 1, 2, 3, 4, 5, 6, 7,
+-			     8, 12, 16, 20, 24, 32, 48, 64};
+-
+-/* get data length from can_dlc with sanitized can_dlc */
+-u8 can_dlc2len(u8 can_dlc)
+-{
+-	return dlc2len[can_dlc & 0x0F];
+-}
+-EXPORT_SYMBOL_GPL(can_dlc2len);
+-
+-static const u8 len2dlc[] = {0, 1, 2, 3, 4, 5, 6, 7, 8,		/* 0 - 8 */
+-			     9, 9, 9, 9,			/* 9 - 12 */
+-			     10, 10, 10, 10,			/* 13 - 16 */
+-			     11, 11, 11, 11,			/* 17 - 20 */
+-			     12, 12, 12, 12,			/* 21 - 24 */
+-			     13, 13, 13, 13, 13, 13, 13, 13,	/* 25 - 32 */
+-			     14, 14, 14, 14, 14, 14, 14, 14,	/* 33 - 40 */
+-			     14, 14, 14, 14, 14, 14, 14, 14,	/* 41 - 48 */
+-			     15, 15, 15, 15, 15, 15, 15, 15,	/* 49 - 56 */
+-			     15, 15, 15, 15, 15, 15, 15, 15};	/* 57 - 64 */
+-
+-/* map the sanitized data length to an appropriate data length code */
+-u8 can_len2dlc(u8 len)
+-{
+-	if (unlikely(len > 64))
+-		return 0xF;
+-
+-	return len2dlc[len];
+-}
+-EXPORT_SYMBOL_GPL(can_len2dlc);
+-
+-#ifdef CONFIG_CAN_CALC_BITTIMING
+-#define CAN_CALC_MAX_ERROR 50 /* in one-tenth of a percent */
+-
+-/* Bit-timing calculation derived from:
+- *
+- * Code based on LinCAN sources and H8S2638 project
+- * Copyright 2004-2006 Pavel Pisa - DCE FELK CVUT cz
+- * Copyright 2005      Stanislav Marek
+- * email: pisa@cmp.felk.cvut.cz
+- *
+- * Calculates proper bit-timing parameters for a specified bit-rate
+- * and sample-point, which can then be used to set the bit-timing
+- * registers of the CAN controller. You can find more information
+- * in the header file linux/can/netlink.h.
+- */
+-static int
+-can_update_sample_point(const struct can_bittiming_const *btc,
+-			unsigned int sample_point_nominal, unsigned int tseg,
+-			unsigned int *tseg1_ptr, unsigned int *tseg2_ptr,
+-			unsigned int *sample_point_error_ptr)
+-{
+-	unsigned int sample_point_error, best_sample_point_error = UINT_MAX;
+-	unsigned int sample_point, best_sample_point = 0;
+-	unsigned int tseg1, tseg2;
+-	int i;
+-
+-	for (i = 0; i <= 1; i++) {
+-		tseg2 = tseg + CAN_SYNC_SEG -
+-			(sample_point_nominal * (tseg + CAN_SYNC_SEG)) /
+-			1000 - i;
+-		tseg2 = clamp(tseg2, btc->tseg2_min, btc->tseg2_max);
+-		tseg1 = tseg - tseg2;
+-		if (tseg1 > btc->tseg1_max) {
+-			tseg1 = btc->tseg1_max;
+-			tseg2 = tseg - tseg1;
+-		}
+-
+-		sample_point = 1000 * (tseg + CAN_SYNC_SEG - tseg2) /
+-			(tseg + CAN_SYNC_SEG);
+-		sample_point_error = abs(sample_point_nominal - sample_point);
+-
+-		if (sample_point <= sample_point_nominal &&
+-		    sample_point_error < best_sample_point_error) {
+-			best_sample_point = sample_point;
+-			best_sample_point_error = sample_point_error;
+-			*tseg1_ptr = tseg1;
+-			*tseg2_ptr = tseg2;
+-		}
+-	}
+-
+-	if (sample_point_error_ptr)
+-		*sample_point_error_ptr = best_sample_point_error;
+-
+-	return best_sample_point;
+-}
+-
+-static int can_calc_bittiming(struct net_device *dev, struct can_bittiming *bt,
+-			      const struct can_bittiming_const *btc)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-	unsigned int bitrate;			/* current bitrate */
+-	unsigned int bitrate_error;		/* difference between current and nominal value */
+-	unsigned int best_bitrate_error = UINT_MAX;
+-	unsigned int sample_point_error;	/* difference between current and nominal value */
+-	unsigned int best_sample_point_error = UINT_MAX;
+-	unsigned int sample_point_nominal;	/* nominal sample point */
+-	unsigned int best_tseg = 0;		/* current best value for tseg */
+-	unsigned int best_brp = 0;		/* current best value for brp */
+-	unsigned int brp, tsegall, tseg, tseg1 = 0, tseg2 = 0;
+-	u64 v64;
+-
+-	/* Use CiA recommended sample points */
+-	if (bt->sample_point) {
+-		sample_point_nominal = bt->sample_point;
+-	} else {
+-		if (bt->bitrate > 800000)
+-			sample_point_nominal = 750;
+-		else if (bt->bitrate > 500000)
+-			sample_point_nominal = 800;
+-		else
+-			sample_point_nominal = 875;
+-	}
+-
+-	/* tseg even = round down, odd = round up */
+-	for (tseg = (btc->tseg1_max + btc->tseg2_max) * 2 + 1;
+-	     tseg >= (btc->tseg1_min + btc->tseg2_min) * 2; tseg--) {
+-		tsegall = CAN_SYNC_SEG + tseg / 2;
+-
+-		/* Compute all possible tseg choices (tseg=tseg1+tseg2) */
+-		brp = priv->clock.freq / (tsegall * bt->bitrate) + tseg % 2;
+-
+-		/* choose brp step which is possible in system */
+-		brp = (brp / btc->brp_inc) * btc->brp_inc;
+-		if (brp < btc->brp_min || brp > btc->brp_max)
+-			continue;
+-
+-		bitrate = priv->clock.freq / (brp * tsegall);
+-		bitrate_error = abs(bt->bitrate - bitrate);
+-
+-		/* tseg brp biterror */
+-		if (bitrate_error > best_bitrate_error)
+-			continue;
+-
+-		/* reset sample point error if we have a better bitrate */
+-		if (bitrate_error < best_bitrate_error)
+-			best_sample_point_error = UINT_MAX;
+-
+-		can_update_sample_point(btc, sample_point_nominal, tseg / 2,
+-					&tseg1, &tseg2, &sample_point_error);
+-		if (sample_point_error > best_sample_point_error)
+-			continue;
+-
+-		best_sample_point_error = sample_point_error;
+-		best_bitrate_error = bitrate_error;
+-		best_tseg = tseg / 2;
+-		best_brp = brp;
+-
+-		if (bitrate_error == 0 && sample_point_error == 0)
+-			break;
+-	}
+-
+-	if (best_bitrate_error) {
+-		/* Error in one-tenth of a percent */
+-		v64 = (u64)best_bitrate_error * 1000;
+-		do_div(v64, bt->bitrate);
+-		bitrate_error = (u32)v64;
+-		if (bitrate_error > CAN_CALC_MAX_ERROR) {
+-			netdev_err(dev,
+-				   "bitrate error %d.%d%% too high\n",
+-				   bitrate_error / 10, bitrate_error % 10);
+-			return -EDOM;
+-		}
+-		netdev_warn(dev, "bitrate error %d.%d%%\n",
+-			    bitrate_error / 10, bitrate_error % 10);
+-	}
+-
+-	/* real sample point */
+-	bt->sample_point = can_update_sample_point(btc, sample_point_nominal,
+-						   best_tseg, &tseg1, &tseg2,
+-						   NULL);
+-
+-	v64 = (u64)best_brp * 1000 * 1000 * 1000;
+-	do_div(v64, priv->clock.freq);
+-	bt->tq = (u32)v64;
+-	bt->prop_seg = tseg1 / 2;
+-	bt->phase_seg1 = tseg1 - bt->prop_seg;
+-	bt->phase_seg2 = tseg2;
+-
+-	/* check for sjw user settings */
+-	if (!bt->sjw || !btc->sjw_max) {
+-		bt->sjw = 1;
+-	} else {
+-		/* bt->sjw is at least 1 -> sanitize upper bound to sjw_max */
+-		if (bt->sjw > btc->sjw_max)
+-			bt->sjw = btc->sjw_max;
+-		/* bt->sjw must not be higher than tseg2 */
+-		if (tseg2 < bt->sjw)
+-			bt->sjw = tseg2;
+-	}
+-
+-	bt->brp = best_brp;
+-
+-	/* real bitrate */
+-	bt->bitrate = priv->clock.freq /
+-		(bt->brp * (CAN_SYNC_SEG + tseg1 + tseg2));
+-
+-	return 0;
+-}
+-#else /* !CONFIG_CAN_CALC_BITTIMING */
+-static int can_calc_bittiming(struct net_device *dev, struct can_bittiming *bt,
+-			      const struct can_bittiming_const *btc)
+-{
+-	netdev_err(dev, "bit-timing calculation not available\n");
+-	return -EINVAL;
+-}
+-#endif /* CONFIG_CAN_CALC_BITTIMING */
+-
+-/* Checks the validity of the specified bit-timing parameters prop_seg,
+- * phase_seg1, phase_seg2 and sjw and tries to determine the bitrate
+- * prescaler value brp. You can find more information in the header
+- * file linux/can/netlink.h.
+- */
+-static int can_fixup_bittiming(struct net_device *dev, struct can_bittiming *bt,
+-			       const struct can_bittiming_const *btc)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-	int tseg1, alltseg;
+-	u64 brp64;
+-
+-	tseg1 = bt->prop_seg + bt->phase_seg1;
+-	if (!bt->sjw)
+-		bt->sjw = 1;
+-	if (bt->sjw > btc->sjw_max ||
+-	    tseg1 < btc->tseg1_min || tseg1 > btc->tseg1_max ||
+-	    bt->phase_seg2 < btc->tseg2_min || bt->phase_seg2 > btc->tseg2_max)
+-		return -ERANGE;
+-
+-	brp64 = (u64)priv->clock.freq * (u64)bt->tq;
+-	if (btc->brp_inc > 1)
+-		do_div(brp64, btc->brp_inc);
+-	brp64 += 500000000UL - 1;
+-	do_div(brp64, 1000000000UL); /* the practicable BRP */
+-	if (btc->brp_inc > 1)
+-		brp64 *= btc->brp_inc;
+-	bt->brp = (u32)brp64;
+-
+-	if (bt->brp < btc->brp_min || bt->brp > btc->brp_max)
+-		return -EINVAL;
+-
+-	alltseg = bt->prop_seg + bt->phase_seg1 + bt->phase_seg2 + 1;
+-	bt->bitrate = priv->clock.freq / (bt->brp * alltseg);
+-	bt->sample_point = ((tseg1 + 1) * 1000) / alltseg;
+-
+-	return 0;
+-}
+-
+-/* Checks the validity of predefined bitrate settings */
+-static int
+-can_validate_bitrate(struct net_device *dev, struct can_bittiming *bt,
+-		     const u32 *bitrate_const,
+-		     const unsigned int bitrate_const_cnt)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-	unsigned int i;
+-
+-	for (i = 0; i < bitrate_const_cnt; i++) {
+-		if (bt->bitrate == bitrate_const[i])
+-			break;
+-	}
+-
+-	if (i >= priv->bitrate_const_cnt)
+-		return -EINVAL;
+-
+-	return 0;
+-}
+-
+-static int can_get_bittiming(struct net_device *dev, struct can_bittiming *bt,
+-			     const struct can_bittiming_const *btc,
+-			     const u32 *bitrate_const,
+-			     const unsigned int bitrate_const_cnt)
+-{
+-	int err;
+-
+-	/* Depending on the given can_bittiming parameter structure the CAN
+-	 * timing parameters are calculated based on the provided bitrate OR
+-	 * alternatively the CAN timing parameters (tq, prop_seg, etc.) are
+-	 * provided directly which are then checked and fixed up.
+-	 */
+-	if (!bt->tq && bt->bitrate && btc)
+-		err = can_calc_bittiming(dev, bt, btc);
+-	else if (bt->tq && !bt->bitrate && btc)
+-		err = can_fixup_bittiming(dev, bt, btc);
+-	else if (!bt->tq && bt->bitrate && bitrate_const)
+-		err = can_validate_bitrate(dev, bt, bitrate_const,
+-					   bitrate_const_cnt);
+-	else
+-		err = -EINVAL;
+-
+-	return err;
+-}
+-
+-static void can_update_state_error_stats(struct net_device *dev,
+-					 enum can_state new_state)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-
+-	if (new_state <= priv->state)
+-		return;
+-
+-	switch (new_state) {
+-	case CAN_STATE_ERROR_WARNING:
+-		priv->can_stats.error_warning++;
+-		break;
+-	case CAN_STATE_ERROR_PASSIVE:
+-		priv->can_stats.error_passive++;
+-		break;
+-	case CAN_STATE_BUS_OFF:
+-		priv->can_stats.bus_off++;
+-		break;
+-	default:
+-		break;
+-	}
+-}
+-
+-static int can_tx_state_to_frame(struct net_device *dev, enum can_state state)
+-{
+-	switch (state) {
+-	case CAN_STATE_ERROR_ACTIVE:
+-		return CAN_ERR_CRTL_ACTIVE;
+-	case CAN_STATE_ERROR_WARNING:
+-		return CAN_ERR_CRTL_TX_WARNING;
+-	case CAN_STATE_ERROR_PASSIVE:
+-		return CAN_ERR_CRTL_TX_PASSIVE;
+-	default:
+-		return 0;
+-	}
+-}
+-
+-static int can_rx_state_to_frame(struct net_device *dev, enum can_state state)
+-{
+-	switch (state) {
+-	case CAN_STATE_ERROR_ACTIVE:
+-		return CAN_ERR_CRTL_ACTIVE;
+-	case CAN_STATE_ERROR_WARNING:
+-		return CAN_ERR_CRTL_RX_WARNING;
+-	case CAN_STATE_ERROR_PASSIVE:
+-		return CAN_ERR_CRTL_RX_PASSIVE;
+-	default:
+-		return 0;
+-	}
+-}
+-
+-static const char *can_get_state_str(const enum can_state state)
+-{
+-	switch (state) {
+-	case CAN_STATE_ERROR_ACTIVE:
+-		return "Error Active";
+-	case CAN_STATE_ERROR_WARNING:
+-		return "Error Warning";
+-	case CAN_STATE_ERROR_PASSIVE:
+-		return "Error Passive";
+-	case CAN_STATE_BUS_OFF:
+-		return "Bus Off";
+-	case CAN_STATE_STOPPED:
+-		return "Stopped";
+-	case CAN_STATE_SLEEPING:
+-		return "Sleeping";
+-	default:
+-		return "<unknown>";
+-	}
+-
+-	return "<unknown>";
+-}
+-
+-void can_change_state(struct net_device *dev, struct can_frame *cf,
+-		      enum can_state tx_state, enum can_state rx_state)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-	enum can_state new_state = max(tx_state, rx_state);
+-
+-	if (unlikely(new_state == priv->state)) {
+-		netdev_warn(dev, "%s: oops, state did not change", __func__);
+-		return;
+-	}
+-
+-	netdev_dbg(dev, "Controller changed from %s State (%d) into %s State (%d).\n",
+-		   can_get_state_str(priv->state), priv->state,
+-		   can_get_state_str(new_state), new_state);
+-
+-	can_update_state_error_stats(dev, new_state);
+-	priv->state = new_state;
+-
+-	if (!cf)
+-		return;
+-
+-	if (unlikely(new_state == CAN_STATE_BUS_OFF)) {
+-		cf->can_id |= CAN_ERR_BUSOFF;
+-		return;
+-	}
+-
+-	cf->can_id |= CAN_ERR_CRTL;
+-	cf->data[1] |= tx_state >= rx_state ?
+-		       can_tx_state_to_frame(dev, tx_state) : 0;
+-	cf->data[1] |= tx_state <= rx_state ?
+-		       can_rx_state_to_frame(dev, rx_state) : 0;
+-}
+-EXPORT_SYMBOL_GPL(can_change_state);
+-
+-/* Local echo of CAN messages
+- *
+- * CAN network devices *should* support a local echo functionality
+- * (see Documentation/networking/can.rst). To test the handling of CAN
+- * interfaces that do not support the local echo both driver types are
+- * implemented. In the case that the driver does not support the echo
+- * the IFF_ECHO remains clear in dev->flags. This causes the PF_CAN core
+- * to perform the echo as a fallback solution.
+- */
+-static void can_flush_echo_skb(struct net_device *dev)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-	struct net_device_stats *stats = &dev->stats;
+-	int i;
+-
+-	for (i = 0; i < priv->echo_skb_max; i++) {
+-		if (priv->echo_skb[i]) {
+-			kfree_skb(priv->echo_skb[i]);
+-			priv->echo_skb[i] = NULL;
+-			stats->tx_dropped++;
+-			stats->tx_aborted_errors++;
+-		}
+-	}
+-}
+-
+-/* Put the skb on the stack to be looped backed locally lateron
+- *
+- * The function is typically called in the start_xmit function
+- * of the device driver. The driver must protect access to
+- * priv->echo_skb, if necessary.
+- */
+-int can_put_echo_skb(struct sk_buff *skb, struct net_device *dev,
+-		     unsigned int idx)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-
+-	BUG_ON(idx >= priv->echo_skb_max);
+-
+-	/* check flag whether this packet has to be looped back */
+-	if (!(dev->flags & IFF_ECHO) || skb->pkt_type != PACKET_LOOPBACK ||
+-	    (skb->protocol != htons(ETH_P_CAN) &&
+-	     skb->protocol != htons(ETH_P_CANFD))) {
+-		kfree_skb(skb);
+-		return 0;
+-	}
+-
+-	if (!priv->echo_skb[idx]) {
+-		skb = can_create_echo_skb(skb);
+-		if (!skb)
+-			return -ENOMEM;
+-
+-		/* make settings for echo to reduce code in irq context */
+-		skb->pkt_type = PACKET_BROADCAST;
+-		skb->ip_summed = CHECKSUM_UNNECESSARY;
+-		skb->dev = dev;
+-
+-		/* save this skb for tx interrupt echo handling */
+-		priv->echo_skb[idx] = skb;
+-	} else {
+-		/* locking problem with netif_stop_queue() ?? */
+-		netdev_err(dev, "%s: BUG! echo_skb %d is occupied!\n", __func__, idx);
+-		kfree_skb(skb);
+-		return -EBUSY;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(can_put_echo_skb);
+-
+-struct sk_buff *
+-__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-
+-	if (idx >= priv->echo_skb_max) {
+-		netdev_err(dev, "%s: BUG! Trying to access can_priv::echo_skb out of bounds (%u/max %u)\n",
+-			   __func__, idx, priv->echo_skb_max);
+-		return NULL;
+-	}
+-
+-	if (priv->echo_skb[idx]) {
+-		/* Using "struct canfd_frame::len" for the frame
+-		 * length is supported on both CAN and CANFD frames.
+-		 */
+-		struct sk_buff *skb = priv->echo_skb[idx];
+-		struct canfd_frame *cf = (struct canfd_frame *)skb->data;
+-
+-		/* get the real payload length for netdev statistics */
+-		if (cf->can_id & CAN_RTR_FLAG)
+-			*len_ptr = 0;
+-		else
+-			*len_ptr = cf->len;
+-
+-		priv->echo_skb[idx] = NULL;
+-
+-		return skb;
+-	}
+-
+-	return NULL;
+-}
+-
+-/* Get the skb from the stack and loop it back locally
+- *
+- * The function is typically called when the TX done interrupt
+- * is handled in the device driver. The driver must protect
+- * access to priv->echo_skb, if necessary.
+- */
+-unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx)
+-{
+-	struct sk_buff *skb;
+-	u8 len;
+-
+-	skb = __can_get_echo_skb(dev, idx, &len);
+-	if (!skb)
+-		return 0;
+-
+-	skb_get(skb);
+-	if (netif_rx(skb) == NET_RX_SUCCESS)
+-		dev_consume_skb_any(skb);
+-	else
+-		dev_kfree_skb_any(skb);
+-
+-	return len;
+-}
+-EXPORT_SYMBOL_GPL(can_get_echo_skb);
+-
+-/* Remove the skb from the stack and free it.
+- *
+- * The function is typically called when TX failed.
+- */
+-void can_free_echo_skb(struct net_device *dev, unsigned int idx)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-
+-	BUG_ON(idx >= priv->echo_skb_max);
+-
+-	if (priv->echo_skb[idx]) {
+-		dev_kfree_skb_any(priv->echo_skb[idx]);
+-		priv->echo_skb[idx] = NULL;
+-	}
+-}
+-EXPORT_SYMBOL_GPL(can_free_echo_skb);
+-
+-/* CAN device restart for bus-off recovery */
+-static void can_restart(struct net_device *dev)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-	struct net_device_stats *stats = &dev->stats;
+-	struct sk_buff *skb;
+-	struct can_frame *cf;
+-	int err;
+-
+-	BUG_ON(netif_carrier_ok(dev));
+-
+-	/* No synchronization needed because the device is bus-off and
+-	 * no messages can come in or go out.
+-	 */
+-	can_flush_echo_skb(dev);
+-
+-	/* send restart message upstream */
+-	skb = alloc_can_err_skb(dev, &cf);
+-	if (!skb)
+-		goto restart;
+-
+-	cf->can_id |= CAN_ERR_RESTARTED;
+-
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->can_dlc;
+-
+-	netif_rx_ni(skb);
+-
+-restart:
+-	netdev_dbg(dev, "restarted\n");
+-	priv->can_stats.restarts++;
+-
+-	/* Now restart the device */
+-	err = priv->do_set_mode(dev, CAN_MODE_START);
+-
+-	netif_carrier_on(dev);
+-	if (err)
+-		netdev_err(dev, "Error %d during restart", err);
+-}
+-
+-static void can_restart_work(struct work_struct *work)
+-{
+-	struct delayed_work *dwork = to_delayed_work(work);
+-	struct can_priv *priv = container_of(dwork, struct can_priv,
+-					     restart_work);
+-
+-	can_restart(priv->dev);
+-}
+-
+-int can_restart_now(struct net_device *dev)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-
+-	/* A manual restart is only permitted if automatic restart is
+-	 * disabled and the device is in the bus-off state
+-	 */
+-	if (priv->restart_ms)
+-		return -EINVAL;
+-	if (priv->state != CAN_STATE_BUS_OFF)
+-		return -EBUSY;
+-
+-	cancel_delayed_work_sync(&priv->restart_work);
+-	can_restart(dev);
+-
+-	return 0;
+-}
+-
+-/* CAN bus-off
+- *
+- * This functions should be called when the device goes bus-off to
+- * tell the netif layer that no more packets can be sent or received.
+- * If enabled, a timer is started to trigger bus-off recovery.
+- */
+-void can_bus_off(struct net_device *dev)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-
+-	if (priv->restart_ms)
+-		netdev_info(dev, "bus-off, scheduling restart in %d ms\n",
+-			    priv->restart_ms);
+-	else
+-		netdev_info(dev, "bus-off\n");
+-
+-	netif_carrier_off(dev);
+-
+-	if (priv->restart_ms)
+-		schedule_delayed_work(&priv->restart_work,
+-				      msecs_to_jiffies(priv->restart_ms));
+-}
+-EXPORT_SYMBOL_GPL(can_bus_off);
+-
+-static void can_setup(struct net_device *dev)
+-{
+-	dev->type = ARPHRD_CAN;
+-	dev->mtu = CAN_MTU;
+-	dev->hard_header_len = 0;
+-	dev->addr_len = 0;
+-	dev->tx_queue_len = 10;
+-
+-	/* New-style flags. */
+-	dev->flags = IFF_NOARP;
+-	dev->features = NETIF_F_HW_CSUM;
+-}
+-
+-struct sk_buff *alloc_can_skb(struct net_device *dev, struct can_frame **cf)
+-{
+-	struct sk_buff *skb;
+-
+-	skb = netdev_alloc_skb(dev, sizeof(struct can_skb_priv) +
+-			       sizeof(struct can_frame));
+-	if (unlikely(!skb))
+-		return NULL;
+-
+-	skb->protocol = htons(ETH_P_CAN);
+-	skb->pkt_type = PACKET_BROADCAST;
+-	skb->ip_summed = CHECKSUM_UNNECESSARY;
+-
+-	skb_reset_mac_header(skb);
+-	skb_reset_network_header(skb);
+-	skb_reset_transport_header(skb);
+-
+-	can_skb_reserve(skb);
+-	can_skb_prv(skb)->ifindex = dev->ifindex;
+-	can_skb_prv(skb)->skbcnt = 0;
+-
+-	*cf = skb_put_zero(skb, sizeof(struct can_frame));
+-
+-	return skb;
+-}
+-EXPORT_SYMBOL_GPL(alloc_can_skb);
+-
+-struct sk_buff *alloc_canfd_skb(struct net_device *dev,
+-				struct canfd_frame **cfd)
+-{
+-	struct sk_buff *skb;
+-
+-	skb = netdev_alloc_skb(dev, sizeof(struct can_skb_priv) +
+-			       sizeof(struct canfd_frame));
+-	if (unlikely(!skb))
+-		return NULL;
+-
+-	skb->protocol = htons(ETH_P_CANFD);
+-	skb->pkt_type = PACKET_BROADCAST;
+-	skb->ip_summed = CHECKSUM_UNNECESSARY;
+-
+-	skb_reset_mac_header(skb);
+-	skb_reset_network_header(skb);
+-	skb_reset_transport_header(skb);
+-
+-	can_skb_reserve(skb);
+-	can_skb_prv(skb)->ifindex = dev->ifindex;
+-	can_skb_prv(skb)->skbcnt = 0;
+-
+-	*cfd = skb_put_zero(skb, sizeof(struct canfd_frame));
+-
+-	return skb;
+-}
+-EXPORT_SYMBOL_GPL(alloc_canfd_skb);
+-
+-struct sk_buff *alloc_can_err_skb(struct net_device *dev, struct can_frame **cf)
+-{
+-	struct sk_buff *skb;
+-
+-	skb = alloc_can_skb(dev, cf);
+-	if (unlikely(!skb))
+-		return NULL;
+-
+-	(*cf)->can_id = CAN_ERR_FLAG;
+-	(*cf)->can_dlc = CAN_ERR_DLC;
+-
+-	return skb;
+-}
+-EXPORT_SYMBOL_GPL(alloc_can_err_skb);
+-
+-/* Allocate and setup space for the CAN network device */
+-struct net_device *alloc_candev_mqs(int sizeof_priv, unsigned int echo_skb_max,
+-				    unsigned int txqs, unsigned int rxqs)
+-{
+-	struct net_device *dev;
+-	struct can_priv *priv;
+-	int size;
+-
+-	/* We put the driver's priv, the CAN mid layer priv and the
+-	 * echo skb into the netdevice's priv. The memory layout for
+-	 * the netdev_priv is like this:
+-	 *
+-	 * +-------------------------+
+-	 * | driver's priv           |
+-	 * +-------------------------+
+-	 * | struct can_ml_priv      |
+-	 * +-------------------------+
+-	 * | array of struct sk_buff |
+-	 * +-------------------------+
+-	 */
+-
+-	size = ALIGN(sizeof_priv, NETDEV_ALIGN) + sizeof(struct can_ml_priv);
+-
+-	if (echo_skb_max)
+-		size = ALIGN(size, sizeof(struct sk_buff *)) +
+-			echo_skb_max * sizeof(struct sk_buff *);
+-
+-	dev = alloc_netdev_mqs(size, "can%d", NET_NAME_UNKNOWN, can_setup,
+-			       txqs, rxqs);
+-	if (!dev)
+-		return NULL;
+-
+-	priv = netdev_priv(dev);
+-	priv->dev = dev;
+-
+-	dev->ml_priv = (void *)priv + ALIGN(sizeof_priv, NETDEV_ALIGN);
+-
+-	if (echo_skb_max) {
+-		priv->echo_skb_max = echo_skb_max;
+-		priv->echo_skb = (void *)priv +
+-			(size - echo_skb_max * sizeof(struct sk_buff *));
+-	}
+-
+-	priv->state = CAN_STATE_STOPPED;
+-
+-	INIT_DELAYED_WORK(&priv->restart_work, can_restart_work);
+-
+-	return dev;
+-}
+-EXPORT_SYMBOL_GPL(alloc_candev_mqs);
+-
+-/* Free space of the CAN network device */
+-void free_candev(struct net_device *dev)
+-{
+-	free_netdev(dev);
+-}
+-EXPORT_SYMBOL_GPL(free_candev);
+-
+-/* changing MTU and control mode for CAN/CANFD devices */
+-int can_change_mtu(struct net_device *dev, int new_mtu)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-
+-	/* Do not allow changing the MTU while running */
+-	if (dev->flags & IFF_UP)
+-		return -EBUSY;
+-
+-	/* allow change of MTU according to the CANFD ability of the device */
+-	switch (new_mtu) {
+-	case CAN_MTU:
+-		/* 'CANFD-only' controllers can not switch to CAN_MTU */
+-		if (priv->ctrlmode_static & CAN_CTRLMODE_FD)
+-			return -EINVAL;
+-
+-		priv->ctrlmode &= ~CAN_CTRLMODE_FD;
+-		break;
+-
+-	case CANFD_MTU:
+-		/* check for potential CANFD ability */
+-		if (!(priv->ctrlmode_supported & CAN_CTRLMODE_FD) &&
+-		    !(priv->ctrlmode_static & CAN_CTRLMODE_FD))
+-			return -EINVAL;
+-
+-		priv->ctrlmode |= CAN_CTRLMODE_FD;
+-		break;
+-
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	dev->mtu = new_mtu;
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(can_change_mtu);
+-
+-/* Common open function when the device gets opened.
+- *
+- * This function should be called in the open function of the device
+- * driver.
+- */
+-int open_candev(struct net_device *dev)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-
+-	if (!priv->bittiming.bitrate) {
+-		netdev_err(dev, "bit-timing not yet defined\n");
+-		return -EINVAL;
+-	}
+-
+-	/* For CAN FD the data bitrate has to be >= the arbitration bitrate */
+-	if ((priv->ctrlmode & CAN_CTRLMODE_FD) &&
+-	    (!priv->data_bittiming.bitrate ||
+-	     priv->data_bittiming.bitrate < priv->bittiming.bitrate)) {
+-		netdev_err(dev, "incorrect/missing data bit-timing\n");
+-		return -EINVAL;
+-	}
+-
+-	/* Switch carrier on if device was stopped while in bus-off state */
+-	if (!netif_carrier_ok(dev))
+-		netif_carrier_on(dev);
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(open_candev);
+-
+-#ifdef CONFIG_OF
+-/* Common function that can be used to understand the limitation of
+- * a transceiver when it provides no means to determine these limitations
+- * at runtime.
+- */
+-void of_can_transceiver(struct net_device *dev)
+-{
+-	struct device_node *dn;
+-	struct can_priv *priv = netdev_priv(dev);
+-	struct device_node *np = dev->dev.parent->of_node;
+-	int ret;
+-
+-	dn = of_get_child_by_name(np, "can-transceiver");
+-	if (!dn)
+-		return;
+-
+-	ret = of_property_read_u32(dn, "max-bitrate", &priv->bitrate_max);
+-	of_node_put(dn);
+-	if ((ret && ret != -EINVAL) || (!ret && !priv->bitrate_max))
+-		netdev_warn(dev, "Invalid value for transceiver max bitrate. Ignoring bitrate limit.\n");
+-}
+-EXPORT_SYMBOL_GPL(of_can_transceiver);
+-#endif
+-
+-/* Common close function for cleanup before the device gets closed.
+- *
+- * This function should be called in the close function of the device
+- * driver.
+- */
+-void close_candev(struct net_device *dev)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-
+-	cancel_delayed_work_sync(&priv->restart_work);
+-	can_flush_echo_skb(dev);
+-}
+-EXPORT_SYMBOL_GPL(close_candev);
+-
+-/* CAN netlink interface */
+-static const struct nla_policy can_policy[IFLA_CAN_MAX + 1] = {
+-	[IFLA_CAN_STATE]	= { .type = NLA_U32 },
+-	[IFLA_CAN_CTRLMODE]	= { .len = sizeof(struct can_ctrlmode) },
+-	[IFLA_CAN_RESTART_MS]	= { .type = NLA_U32 },
+-	[IFLA_CAN_RESTART]	= { .type = NLA_U32 },
+-	[IFLA_CAN_BITTIMING]	= { .len = sizeof(struct can_bittiming) },
+-	[IFLA_CAN_BITTIMING_CONST]
+-				= { .len = sizeof(struct can_bittiming_const) },
+-	[IFLA_CAN_CLOCK]	= { .len = sizeof(struct can_clock) },
+-	[IFLA_CAN_BERR_COUNTER]	= { .len = sizeof(struct can_berr_counter) },
+-	[IFLA_CAN_DATA_BITTIMING]
+-				= { .len = sizeof(struct can_bittiming) },
+-	[IFLA_CAN_DATA_BITTIMING_CONST]
+-				= { .len = sizeof(struct can_bittiming_const) },
+-	[IFLA_CAN_TERMINATION]	= { .type = NLA_U16 },
+-};
+-
+-static int can_validate(struct nlattr *tb[], struct nlattr *data[],
+-			struct netlink_ext_ack *extack)
+-{
+-	bool is_can_fd = false;
+-
+-	/* Make sure that valid CAN FD configurations always consist of
+-	 * - nominal/arbitration bittiming
+-	 * - data bittiming
+-	 * - control mode with CAN_CTRLMODE_FD set
+-	 */
+-
+-	if (!data)
+-		return 0;
+-
+-	if (data[IFLA_CAN_CTRLMODE]) {
+-		struct can_ctrlmode *cm = nla_data(data[IFLA_CAN_CTRLMODE]);
+-
+-		is_can_fd = cm->flags & cm->mask & CAN_CTRLMODE_FD;
+-	}
+-
+-	if (is_can_fd) {
+-		if (!data[IFLA_CAN_BITTIMING] || !data[IFLA_CAN_DATA_BITTIMING])
+-			return -EOPNOTSUPP;
+-	}
+-
+-	if (data[IFLA_CAN_DATA_BITTIMING]) {
+-		if (!is_can_fd || !data[IFLA_CAN_BITTIMING])
+-			return -EOPNOTSUPP;
+-	}
+-
+-	return 0;
+-}
+-
+-static int can_changelink(struct net_device *dev, struct nlattr *tb[],
+-			  struct nlattr *data[],
+-			  struct netlink_ext_ack *extack)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-	int err;
+-
+-	/* We need synchronization with dev->stop() */
+-	ASSERT_RTNL();
+-
+-	if (data[IFLA_CAN_BITTIMING]) {
+-		struct can_bittiming bt;
+-
+-		/* Do not allow changing bittiming while running */
+-		if (dev->flags & IFF_UP)
+-			return -EBUSY;
+-
+-		/* Calculate bittiming parameters based on
+-		 * bittiming_const if set, otherwise pass bitrate
+-		 * directly via do_set_bitrate(). Bail out if neither
+-		 * is given.
+-		 */
+-		if (!priv->bittiming_const && !priv->do_set_bittiming)
+-			return -EOPNOTSUPP;
+-
+-		memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
+-		err = can_get_bittiming(dev, &bt,
+-					priv->bittiming_const,
+-					priv->bitrate_const,
+-					priv->bitrate_const_cnt);
+-		if (err)
+-			return err;
+-
+-		if (priv->bitrate_max && bt.bitrate > priv->bitrate_max) {
+-			netdev_err(dev, "arbitration bitrate surpasses transceiver capabilities of %d bps\n",
+-				   priv->bitrate_max);
+-			return -EINVAL;
+-		}
+-
+-		memcpy(&priv->bittiming, &bt, sizeof(bt));
+-
+-		if (priv->do_set_bittiming) {
+-			/* Finally, set the bit-timing registers */
+-			err = priv->do_set_bittiming(dev);
+-			if (err)
+-				return err;
+-		}
+-	}
+-
+-	if (data[IFLA_CAN_CTRLMODE]) {
+-		struct can_ctrlmode *cm;
+-		u32 ctrlstatic;
+-		u32 maskedflags;
+-
+-		/* Do not allow changing controller mode while running */
+-		if (dev->flags & IFF_UP)
+-			return -EBUSY;
+-		cm = nla_data(data[IFLA_CAN_CTRLMODE]);
+-		ctrlstatic = priv->ctrlmode_static;
+-		maskedflags = cm->flags & cm->mask;
+-
+-		/* check whether provided bits are allowed to be passed */
+-		if (cm->mask & ~(priv->ctrlmode_supported | ctrlstatic))
+-			return -EOPNOTSUPP;
+-
+-		/* do not check for static fd-non-iso if 'fd' is disabled */
+-		if (!(maskedflags & CAN_CTRLMODE_FD))
+-			ctrlstatic &= ~CAN_CTRLMODE_FD_NON_ISO;
+-
+-		/* make sure static options are provided by configuration */
+-		if ((maskedflags & ctrlstatic) != ctrlstatic)
+-			return -EOPNOTSUPP;
+-
+-		/* clear bits to be modified and copy the flag values */
+-		priv->ctrlmode &= ~cm->mask;
+-		priv->ctrlmode |= maskedflags;
+-
+-		/* CAN_CTRLMODE_FD can only be set when driver supports FD */
+-		if (priv->ctrlmode & CAN_CTRLMODE_FD)
+-			dev->mtu = CANFD_MTU;
+-		else
+-			dev->mtu = CAN_MTU;
+-	}
+-
+-	if (data[IFLA_CAN_RESTART_MS]) {
+-		/* Do not allow changing restart delay while running */
+-		if (dev->flags & IFF_UP)
+-			return -EBUSY;
+-		priv->restart_ms = nla_get_u32(data[IFLA_CAN_RESTART_MS]);
+-	}
+-
+-	if (data[IFLA_CAN_RESTART]) {
+-		/* Do not allow a restart while not running */
+-		if (!(dev->flags & IFF_UP))
+-			return -EINVAL;
+-		err = can_restart_now(dev);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (data[IFLA_CAN_DATA_BITTIMING]) {
+-		struct can_bittiming dbt;
+-
+-		/* Do not allow changing bittiming while running */
+-		if (dev->flags & IFF_UP)
+-			return -EBUSY;
+-
+-		/* Calculate bittiming parameters based on
+-		 * data_bittiming_const if set, otherwise pass bitrate
+-		 * directly via do_set_bitrate(). Bail out if neither
+-		 * is given.
+-		 */
+-		if (!priv->data_bittiming_const && !priv->do_set_data_bittiming)
+-			return -EOPNOTSUPP;
+-
+-		memcpy(&dbt, nla_data(data[IFLA_CAN_DATA_BITTIMING]),
+-		       sizeof(dbt));
+-		err = can_get_bittiming(dev, &dbt,
+-					priv->data_bittiming_const,
+-					priv->data_bitrate_const,
+-					priv->data_bitrate_const_cnt);
+-		if (err)
+-			return err;
+-
+-		if (priv->bitrate_max && dbt.bitrate > priv->bitrate_max) {
+-			netdev_err(dev, "canfd data bitrate surpasses transceiver capabilities of %d bps\n",
+-				   priv->bitrate_max);
+-			return -EINVAL;
+-		}
+-
+-		memcpy(&priv->data_bittiming, &dbt, sizeof(dbt));
+-
+-		if (priv->do_set_data_bittiming) {
+-			/* Finally, set the bit-timing registers */
+-			err = priv->do_set_data_bittiming(dev);
+-			if (err)
+-				return err;
+-		}
+-	}
+-
+-	if (data[IFLA_CAN_TERMINATION]) {
+-		const u16 termval = nla_get_u16(data[IFLA_CAN_TERMINATION]);
+-		const unsigned int num_term = priv->termination_const_cnt;
+-		unsigned int i;
+-
+-		if (!priv->do_set_termination)
+-			return -EOPNOTSUPP;
+-
+-		/* check whether given value is supported by the interface */
+-		for (i = 0; i < num_term; i++) {
+-			if (termval == priv->termination_const[i])
+-				break;
+-		}
+-		if (i >= num_term)
+-			return -EINVAL;
+-
+-		/* Finally, set the termination value */
+-		err = priv->do_set_termination(dev, termval);
+-		if (err)
+-			return err;
+-
+-		priv->termination = termval;
+-	}
+-
+-	return 0;
+-}
+-
+-static size_t can_get_size(const struct net_device *dev)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-	size_t size = 0;
+-
+-	if (priv->bittiming.bitrate)				/* IFLA_CAN_BITTIMING */
+-		size += nla_total_size(sizeof(struct can_bittiming));
+-	if (priv->bittiming_const)				/* IFLA_CAN_BITTIMING_CONST */
+-		size += nla_total_size(sizeof(struct can_bittiming_const));
+-	size += nla_total_size(sizeof(struct can_clock));	/* IFLA_CAN_CLOCK */
+-	size += nla_total_size(sizeof(u32));			/* IFLA_CAN_STATE */
+-	size += nla_total_size(sizeof(struct can_ctrlmode));	/* IFLA_CAN_CTRLMODE */
+-	size += nla_total_size(sizeof(u32));			/* IFLA_CAN_RESTART_MS */
+-	if (priv->do_get_berr_counter)				/* IFLA_CAN_BERR_COUNTER */
+-		size += nla_total_size(sizeof(struct can_berr_counter));
+-	if (priv->data_bittiming.bitrate)			/* IFLA_CAN_DATA_BITTIMING */
+-		size += nla_total_size(sizeof(struct can_bittiming));
+-	if (priv->data_bittiming_const)				/* IFLA_CAN_DATA_BITTIMING_CONST */
+-		size += nla_total_size(sizeof(struct can_bittiming_const));
+-	if (priv->termination_const) {
+-		size += nla_total_size(sizeof(priv->termination));		/* IFLA_CAN_TERMINATION */
+-		size += nla_total_size(sizeof(*priv->termination_const) *	/* IFLA_CAN_TERMINATION_CONST */
+-				       priv->termination_const_cnt);
+-	}
+-	if (priv->bitrate_const)				/* IFLA_CAN_BITRATE_CONST */
+-		size += nla_total_size(sizeof(*priv->bitrate_const) *
+-				       priv->bitrate_const_cnt);
+-	if (priv->data_bitrate_const)				/* IFLA_CAN_DATA_BITRATE_CONST */
+-		size += nla_total_size(sizeof(*priv->data_bitrate_const) *
+-				       priv->data_bitrate_const_cnt);
+-	size += sizeof(priv->bitrate_max);			/* IFLA_CAN_BITRATE_MAX */
+-
+-	return size;
+-}
+-
+-static int can_fill_info(struct sk_buff *skb, const struct net_device *dev)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-	struct can_ctrlmode cm = {.flags = priv->ctrlmode};
+-	struct can_berr_counter bec = { };
+-	enum can_state state = priv->state;
+-
+-	if (priv->do_get_state)
+-		priv->do_get_state(dev, &state);
+-
+-	if ((priv->bittiming.bitrate &&
+-	     nla_put(skb, IFLA_CAN_BITTIMING,
+-		     sizeof(priv->bittiming), &priv->bittiming)) ||
+-
+-	    (priv->bittiming_const &&
+-	     nla_put(skb, IFLA_CAN_BITTIMING_CONST,
+-		     sizeof(*priv->bittiming_const), priv->bittiming_const)) ||
+-
+-	    nla_put(skb, IFLA_CAN_CLOCK, sizeof(priv->clock), &priv->clock) ||
+-	    nla_put_u32(skb, IFLA_CAN_STATE, state) ||
+-	    nla_put(skb, IFLA_CAN_CTRLMODE, sizeof(cm), &cm) ||
+-	    nla_put_u32(skb, IFLA_CAN_RESTART_MS, priv->restart_ms) ||
+-
+-	    (priv->do_get_berr_counter &&
+-	     !priv->do_get_berr_counter(dev, &bec) &&
+-	     nla_put(skb, IFLA_CAN_BERR_COUNTER, sizeof(bec), &bec)) ||
+-
+-	    (priv->data_bittiming.bitrate &&
+-	     nla_put(skb, IFLA_CAN_DATA_BITTIMING,
+-		     sizeof(priv->data_bittiming), &priv->data_bittiming)) ||
+-
+-	    (priv->data_bittiming_const &&
+-	     nla_put(skb, IFLA_CAN_DATA_BITTIMING_CONST,
+-		     sizeof(*priv->data_bittiming_const),
+-		     priv->data_bittiming_const)) ||
+-
+-	    (priv->termination_const &&
+-	     (nla_put_u16(skb, IFLA_CAN_TERMINATION, priv->termination) ||
+-	      nla_put(skb, IFLA_CAN_TERMINATION_CONST,
+-		      sizeof(*priv->termination_const) *
+-		      priv->termination_const_cnt,
+-		      priv->termination_const))) ||
+-
+-	    (priv->bitrate_const &&
+-	     nla_put(skb, IFLA_CAN_BITRATE_CONST,
+-		     sizeof(*priv->bitrate_const) *
+-		     priv->bitrate_const_cnt,
+-		     priv->bitrate_const)) ||
+-
+-	    (priv->data_bitrate_const &&
+-	     nla_put(skb, IFLA_CAN_DATA_BITRATE_CONST,
+-		     sizeof(*priv->data_bitrate_const) *
+-		     priv->data_bitrate_const_cnt,
+-		     priv->data_bitrate_const)) ||
+-
+-	    (nla_put(skb, IFLA_CAN_BITRATE_MAX,
+-		     sizeof(priv->bitrate_max),
+-		     &priv->bitrate_max))
+-	    )
+-
+-		return -EMSGSIZE;
+-
+-	return 0;
+-}
+-
+-static size_t can_get_xstats_size(const struct net_device *dev)
+-{
+-	return sizeof(struct can_device_stats);
+-}
+-
+-static int can_fill_xstats(struct sk_buff *skb, const struct net_device *dev)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-
+-	if (nla_put(skb, IFLA_INFO_XSTATS,
+-		    sizeof(priv->can_stats), &priv->can_stats))
+-		goto nla_put_failure;
+-	return 0;
+-
+-nla_put_failure:
+-	return -EMSGSIZE;
+-}
+-
+-static int can_newlink(struct net *src_net, struct net_device *dev,
+-		       struct nlattr *tb[], struct nlattr *data[],
+-		       struct netlink_ext_ack *extack)
+-{
+-	return -EOPNOTSUPP;
+-}
+-
+-static void can_dellink(struct net_device *dev, struct list_head *head)
+-{
+-}
+-
+-static struct rtnl_link_ops can_link_ops __read_mostly = {
+-	.kind		= "can",
+-	.netns_refund	= true,
+-	.maxtype	= IFLA_CAN_MAX,
+-	.policy		= can_policy,
+-	.setup		= can_setup,
+-	.validate	= can_validate,
+-	.newlink	= can_newlink,
+-	.changelink	= can_changelink,
+-	.dellink	= can_dellink,
+-	.get_size	= can_get_size,
+-	.fill_info	= can_fill_info,
+-	.get_xstats_size = can_get_xstats_size,
+-	.fill_xstats	= can_fill_xstats,
+-};
+-
+-/* Register the CAN network device */
+-int register_candev(struct net_device *dev)
+-{
+-	struct can_priv *priv = netdev_priv(dev);
+-
+-	/* Ensure termination_const, termination_const_cnt and
+-	 * do_set_termination consistency. All must be either set or
+-	 * unset.
+-	 */
+-	if ((!priv->termination_const != !priv->termination_const_cnt) ||
+-	    (!priv->termination_const != !priv->do_set_termination))
+-		return -EINVAL;
+-
+-	if (!priv->bitrate_const != !priv->bitrate_const_cnt)
+-		return -EINVAL;
+-
+-	if (!priv->data_bitrate_const != !priv->data_bitrate_const_cnt)
+-		return -EINVAL;
+-
+-	dev->rtnl_link_ops = &can_link_ops;
+-	netif_carrier_off(dev);
+-
+-	return register_netdev(dev);
+-}
+-EXPORT_SYMBOL_GPL(register_candev);
+-
+-/* Unregister the CAN network device */
+-void unregister_candev(struct net_device *dev)
+-{
+-	unregister_netdev(dev);
+-}
+-EXPORT_SYMBOL_GPL(unregister_candev);
+-
+-/* Test if a network device is a candev based device
+- * and return the can_priv* if so.
+- */
+-struct can_priv *safe_candev_priv(struct net_device *dev)
+-{
+-	if (dev->type != ARPHRD_CAN || dev->rtnl_link_ops != &can_link_ops)
+-		return NULL;
+-
+-	return netdev_priv(dev);
+-}
+-EXPORT_SYMBOL_GPL(safe_candev_priv);
+-
+-static __init int can_dev_init(void)
+-{
+-	int err;
+-
+-	can_led_notifier_init();
+-
+-	err = rtnl_link_register(&can_link_ops);
+-	if (!err)
+-		pr_info(MOD_DESC "\n");
+-
+-	return err;
+-}
+-module_init(can_dev_init);
+-
+-static __exit void can_dev_exit(void)
+-{
+-	rtnl_link_unregister(&can_link_ops);
+-
+-	can_led_notifier_exit();
+-}
+-module_exit(can_dev_exit);
+-
+-MODULE_ALIAS_RTNL_LINK("can");
+diff --git a/drivers/net/can/dev/Makefile b/drivers/net/can/dev/Makefile
+new file mode 100644
+index 0000000000000..cba92e6bcf6f5
+--- /dev/null
++++ b/drivers/net/can/dev/Makefile
+@@ -0,0 +1,7 @@
++# SPDX-License-Identifier: GPL-2.0
++
++obj-$(CONFIG_CAN_DEV)		+= can-dev.o
++can-dev-y			+= dev.o
++can-dev-y			+= rx-offload.o
++
++can-dev-$(CONFIG_CAN_LEDS)	+= led.o
+diff --git a/drivers/net/can/dev/dev.c b/drivers/net/can/dev/dev.c
+new file mode 100644
+index 0000000000000..2b38a99884f2f
+--- /dev/null
++++ b/drivers/net/can/dev/dev.c
+@@ -0,0 +1,1341 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/* Copyright (C) 2005 Marc Kleine-Budde, Pengutronix
++ * Copyright (C) 2006 Andrey Volkov, Varma Electronics
++ * Copyright (C) 2008-2009 Wolfgang Grandegger <wg@grandegger.com>
++ */
++
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/slab.h>
++#include <linux/netdevice.h>
++#include <linux/if_arp.h>
++#include <linux/workqueue.h>
++#include <linux/can.h>
++#include <linux/can/can-ml.h>
++#include <linux/can/dev.h>
++#include <linux/can/skb.h>
++#include <linux/can/netlink.h>
++#include <linux/can/led.h>
++#include <linux/of.h>
++#include <net/rtnetlink.h>
++
++#define MOD_DESC "CAN device driver interface"
++
++MODULE_DESCRIPTION(MOD_DESC);
++MODULE_LICENSE("GPL v2");
++MODULE_AUTHOR("Wolfgang Grandegger <wg@grandegger.com>");
++
++/* CAN DLC to real data length conversion helpers */
++
++static const u8 dlc2len[] = {0, 1, 2, 3, 4, 5, 6, 7,
++			     8, 12, 16, 20, 24, 32, 48, 64};
++
++/* get data length from can_dlc with sanitized can_dlc */
++u8 can_dlc2len(u8 can_dlc)
++{
++	return dlc2len[can_dlc & 0x0F];
++}
++EXPORT_SYMBOL_GPL(can_dlc2len);
++
++static const u8 len2dlc[] = {0, 1, 2, 3, 4, 5, 6, 7, 8,		/* 0 - 8 */
++			     9, 9, 9, 9,			/* 9 - 12 */
++			     10, 10, 10, 10,			/* 13 - 16 */
++			     11, 11, 11, 11,			/* 17 - 20 */
++			     12, 12, 12, 12,			/* 21 - 24 */
++			     13, 13, 13, 13, 13, 13, 13, 13,	/* 25 - 32 */
++			     14, 14, 14, 14, 14, 14, 14, 14,	/* 33 - 40 */
++			     14, 14, 14, 14, 14, 14, 14, 14,	/* 41 - 48 */
++			     15, 15, 15, 15, 15, 15, 15, 15,	/* 49 - 56 */
++			     15, 15, 15, 15, 15, 15, 15, 15};	/* 57 - 64 */
++
++/* map the sanitized data length to an appropriate data length code */
++u8 can_len2dlc(u8 len)
++{
++	if (unlikely(len > 64))
++		return 0xF;
++
++	return len2dlc[len];
++}
++EXPORT_SYMBOL_GPL(can_len2dlc);
++
++#ifdef CONFIG_CAN_CALC_BITTIMING
++#define CAN_CALC_MAX_ERROR 50 /* in one-tenth of a percent */
++
++/* Bit-timing calculation derived from:
++ *
++ * Code based on LinCAN sources and H8S2638 project
++ * Copyright 2004-2006 Pavel Pisa - DCE FELK CVUT cz
++ * Copyright 2005      Stanislav Marek
++ * email: pisa@cmp.felk.cvut.cz
++ *
++ * Calculates proper bit-timing parameters for a specified bit-rate
++ * and sample-point, which can then be used to set the bit-timing
++ * registers of the CAN controller. You can find more information
++ * in the header file linux/can/netlink.h.
++ */
++static int
++can_update_sample_point(const struct can_bittiming_const *btc,
++			unsigned int sample_point_nominal, unsigned int tseg,
++			unsigned int *tseg1_ptr, unsigned int *tseg2_ptr,
++			unsigned int *sample_point_error_ptr)
++{
++	unsigned int sample_point_error, best_sample_point_error = UINT_MAX;
++	unsigned int sample_point, best_sample_point = 0;
++	unsigned int tseg1, tseg2;
++	int i;
++
++	for (i = 0; i <= 1; i++) {
++		tseg2 = tseg + CAN_SYNC_SEG -
++			(sample_point_nominal * (tseg + CAN_SYNC_SEG)) /
++			1000 - i;
++		tseg2 = clamp(tseg2, btc->tseg2_min, btc->tseg2_max);
++		tseg1 = tseg - tseg2;
++		if (tseg1 > btc->tseg1_max) {
++			tseg1 = btc->tseg1_max;
++			tseg2 = tseg - tseg1;
++		}
++
++		sample_point = 1000 * (tseg + CAN_SYNC_SEG - tseg2) /
++			(tseg + CAN_SYNC_SEG);
++		sample_point_error = abs(sample_point_nominal - sample_point);
++
++		if (sample_point <= sample_point_nominal &&
++		    sample_point_error < best_sample_point_error) {
++			best_sample_point = sample_point;
++			best_sample_point_error = sample_point_error;
++			*tseg1_ptr = tseg1;
++			*tseg2_ptr = tseg2;
++		}
++	}
++
++	if (sample_point_error_ptr)
++		*sample_point_error_ptr = best_sample_point_error;
++
++	return best_sample_point;
++}
++
++static int can_calc_bittiming(struct net_device *dev, struct can_bittiming *bt,
++			      const struct can_bittiming_const *btc)
++{
++	struct can_priv *priv = netdev_priv(dev);
++	unsigned int bitrate;			/* current bitrate */
++	unsigned int bitrate_error;		/* difference between current and nominal value */
++	unsigned int best_bitrate_error = UINT_MAX;
++	unsigned int sample_point_error;	/* difference between current and nominal value */
++	unsigned int best_sample_point_error = UINT_MAX;
++	unsigned int sample_point_nominal;	/* nominal sample point */
++	unsigned int best_tseg = 0;		/* current best value for tseg */
++	unsigned int best_brp = 0;		/* current best value for brp */
++	unsigned int brp, tsegall, tseg, tseg1 = 0, tseg2 = 0;
++	u64 v64;
++
++	/* Use CiA recommended sample points */
++	if (bt->sample_point) {
++		sample_point_nominal = bt->sample_point;
++	} else {
++		if (bt->bitrate > 800000)
++			sample_point_nominal = 750;
++		else if (bt->bitrate > 500000)
++			sample_point_nominal = 800;
++		else
++			sample_point_nominal = 875;
++	}
++
++	/* tseg even = round down, odd = round up */
++	for (tseg = (btc->tseg1_max + btc->tseg2_max) * 2 + 1;
++	     tseg >= (btc->tseg1_min + btc->tseg2_min) * 2; tseg--) {
++		tsegall = CAN_SYNC_SEG + tseg / 2;
++
++		/* Compute all possible tseg choices (tseg=tseg1+tseg2) */
++		brp = priv->clock.freq / (tsegall * bt->bitrate) + tseg % 2;
++
++		/* choose brp step which is possible in system */
++		brp = (brp / btc->brp_inc) * btc->brp_inc;
++		if (brp < btc->brp_min || brp > btc->brp_max)
++			continue;
++
++		bitrate = priv->clock.freq / (brp * tsegall);
++		bitrate_error = abs(bt->bitrate - bitrate);
++
++		/* tseg brp biterror */
++		if (bitrate_error > best_bitrate_error)
++			continue;
++
++		/* reset sample point error if we have a better bitrate */
++		if (bitrate_error < best_bitrate_error)
++			best_sample_point_error = UINT_MAX;
++
++		can_update_sample_point(btc, sample_point_nominal, tseg / 2,
++					&tseg1, &tseg2, &sample_point_error);
++		if (sample_point_error > best_sample_point_error)
++			continue;
++
++		best_sample_point_error = sample_point_error;
++		best_bitrate_error = bitrate_error;
++		best_tseg = tseg / 2;
++		best_brp = brp;
++
++		if (bitrate_error == 0 && sample_point_error == 0)
++			break;
++	}
++
++	if (best_bitrate_error) {
++		/* Error in one-tenth of a percent */
++		v64 = (u64)best_bitrate_error * 1000;
++		do_div(v64, bt->bitrate);
++		bitrate_error = (u32)v64;
++		if (bitrate_error > CAN_CALC_MAX_ERROR) {
++			netdev_err(dev,
++				   "bitrate error %d.%d%% too high\n",
++				   bitrate_error / 10, bitrate_error % 10);
++			return -EDOM;
++		}
++		netdev_warn(dev, "bitrate error %d.%d%%\n",
++			    bitrate_error / 10, bitrate_error % 10);
++	}
++
++	/* real sample point */
++	bt->sample_point = can_update_sample_point(btc, sample_point_nominal,
++						   best_tseg, &tseg1, &tseg2,
++						   NULL);
++
++	v64 = (u64)best_brp * 1000 * 1000 * 1000;
++	do_div(v64, priv->clock.freq);
++	bt->tq = (u32)v64;
++	bt->prop_seg = tseg1 / 2;
++	bt->phase_seg1 = tseg1 - bt->prop_seg;
++	bt->phase_seg2 = tseg2;
++
++	/* check for sjw user settings */
++	if (!bt->sjw || !btc->sjw_max) {
++		bt->sjw = 1;
++	} else {
++		/* bt->sjw is at least 1 -> sanitize upper bound to sjw_max */
++		if (bt->sjw > btc->sjw_max)
++			bt->sjw = btc->sjw_max;
++		/* bt->sjw must not be higher than tseg2 */
++		if (tseg2 < bt->sjw)
++			bt->sjw = tseg2;
++	}
++
++	bt->brp = best_brp;
++
++	/* real bitrate */
++	bt->bitrate = priv->clock.freq /
++		(bt->brp * (CAN_SYNC_SEG + tseg1 + tseg2));
++
++	return 0;
++}
++#else /* !CONFIG_CAN_CALC_BITTIMING */
++static int can_calc_bittiming(struct net_device *dev, struct can_bittiming *bt,
++			      const struct can_bittiming_const *btc)
++{
++	netdev_err(dev, "bit-timing calculation not available\n");
++	return -EINVAL;
++}
++#endif /* CONFIG_CAN_CALC_BITTIMING */
++
++/* Checks the validity of the specified bit-timing parameters prop_seg,
++ * phase_seg1, phase_seg2 and sjw and tries to determine the bitrate
++ * prescaler value brp. You can find more information in the header
++ * file linux/can/netlink.h.
++ */
++static int can_fixup_bittiming(struct net_device *dev, struct can_bittiming *bt,
++			       const struct can_bittiming_const *btc)
++{
++	struct can_priv *priv = netdev_priv(dev);
++	int tseg1, alltseg;
++	u64 brp64;
++
++	tseg1 = bt->prop_seg + bt->phase_seg1;
++	if (!bt->sjw)
++		bt->sjw = 1;
++	if (bt->sjw > btc->sjw_max ||
++	    tseg1 < btc->tseg1_min || tseg1 > btc->tseg1_max ||
++	    bt->phase_seg2 < btc->tseg2_min || bt->phase_seg2 > btc->tseg2_max)
++		return -ERANGE;
++
++	brp64 = (u64)priv->clock.freq * (u64)bt->tq;
++	if (btc->brp_inc > 1)
++		do_div(brp64, btc->brp_inc);
++	brp64 += 500000000UL - 1;
++	do_div(brp64, 1000000000UL); /* the practicable BRP */
++	if (btc->brp_inc > 1)
++		brp64 *= btc->brp_inc;
++	bt->brp = (u32)brp64;
++
++	if (bt->brp < btc->brp_min || bt->brp > btc->brp_max)
++		return -EINVAL;
++
++	alltseg = bt->prop_seg + bt->phase_seg1 + bt->phase_seg2 + 1;
++	bt->bitrate = priv->clock.freq / (bt->brp * alltseg);
++	bt->sample_point = ((tseg1 + 1) * 1000) / alltseg;
++
++	return 0;
++}
++
++/* Checks the validity of predefined bitrate settings */
++static int
++can_validate_bitrate(struct net_device *dev, struct can_bittiming *bt,
++		     const u32 *bitrate_const,
++		     const unsigned int bitrate_const_cnt)
++{
++	struct can_priv *priv = netdev_priv(dev);
++	unsigned int i;
++
++	for (i = 0; i < bitrate_const_cnt; i++) {
++		if (bt->bitrate == bitrate_const[i])
++			break;
++	}
++
++	if (i >= priv->bitrate_const_cnt)
++		return -EINVAL;
++
++	return 0;
++}
++
++static int can_get_bittiming(struct net_device *dev, struct can_bittiming *bt,
++			     const struct can_bittiming_const *btc,
++			     const u32 *bitrate_const,
++			     const unsigned int bitrate_const_cnt)
++{
++	int err;
++
++	/* Depending on the given can_bittiming parameter structure the CAN
++	 * timing parameters are calculated based on the provided bitrate OR
++	 * alternatively the CAN timing parameters (tq, prop_seg, etc.) are
++	 * provided directly which are then checked and fixed up.
++	 */
++	if (!bt->tq && bt->bitrate && btc)
++		err = can_calc_bittiming(dev, bt, btc);
++	else if (bt->tq && !bt->bitrate && btc)
++		err = can_fixup_bittiming(dev, bt, btc);
++	else if (!bt->tq && bt->bitrate && bitrate_const)
++		err = can_validate_bitrate(dev, bt, bitrate_const,
++					   bitrate_const_cnt);
++	else
++		err = -EINVAL;
++
++	return err;
++}
++
++static void can_update_state_error_stats(struct net_device *dev,
++					 enum can_state new_state)
++{
++	struct can_priv *priv = netdev_priv(dev);
++
++	if (new_state <= priv->state)
++		return;
++
++	switch (new_state) {
++	case CAN_STATE_ERROR_WARNING:
++		priv->can_stats.error_warning++;
++		break;
++	case CAN_STATE_ERROR_PASSIVE:
++		priv->can_stats.error_passive++;
++		break;
++	case CAN_STATE_BUS_OFF:
++		priv->can_stats.bus_off++;
++		break;
++	default:
++		break;
++	}
++}
++
++static int can_tx_state_to_frame(struct net_device *dev, enum can_state state)
++{
++	switch (state) {
++	case CAN_STATE_ERROR_ACTIVE:
++		return CAN_ERR_CRTL_ACTIVE;
++	case CAN_STATE_ERROR_WARNING:
++		return CAN_ERR_CRTL_TX_WARNING;
++	case CAN_STATE_ERROR_PASSIVE:
++		return CAN_ERR_CRTL_TX_PASSIVE;
++	default:
++		return 0;
++	}
++}
++
++static int can_rx_state_to_frame(struct net_device *dev, enum can_state state)
++{
++	switch (state) {
++	case CAN_STATE_ERROR_ACTIVE:
++		return CAN_ERR_CRTL_ACTIVE;
++	case CAN_STATE_ERROR_WARNING:
++		return CAN_ERR_CRTL_RX_WARNING;
++	case CAN_STATE_ERROR_PASSIVE:
++		return CAN_ERR_CRTL_RX_PASSIVE;
++	default:
++		return 0;
++	}
++}
++
++static const char *can_get_state_str(const enum can_state state)
++{
++	switch (state) {
++	case CAN_STATE_ERROR_ACTIVE:
++		return "Error Active";
++	case CAN_STATE_ERROR_WARNING:
++		return "Error Warning";
++	case CAN_STATE_ERROR_PASSIVE:
++		return "Error Passive";
++	case CAN_STATE_BUS_OFF:
++		return "Bus Off";
++	case CAN_STATE_STOPPED:
++		return "Stopped";
++	case CAN_STATE_SLEEPING:
++		return "Sleeping";
++	default:
++		return "<unknown>";
++	}
++
++	return "<unknown>";
++}
++
++void can_change_state(struct net_device *dev, struct can_frame *cf,
++		      enum can_state tx_state, enum can_state rx_state)
++{
++	struct can_priv *priv = netdev_priv(dev);
++	enum can_state new_state = max(tx_state, rx_state);
++
++	if (unlikely(new_state == priv->state)) {
++		netdev_warn(dev, "%s: oops, state did not change", __func__);
++		return;
++	}
++
++	netdev_dbg(dev, "Controller changed from %s State (%d) into %s State (%d).\n",
++		   can_get_state_str(priv->state), priv->state,
++		   can_get_state_str(new_state), new_state);
++
++	can_update_state_error_stats(dev, new_state);
++	priv->state = new_state;
++
++	if (!cf)
++		return;
++
++	if (unlikely(new_state == CAN_STATE_BUS_OFF)) {
++		cf->can_id |= CAN_ERR_BUSOFF;
++		return;
++	}
++
++	cf->can_id |= CAN_ERR_CRTL;
++	cf->data[1] |= tx_state >= rx_state ?
++		       can_tx_state_to_frame(dev, tx_state) : 0;
++	cf->data[1] |= tx_state <= rx_state ?
++		       can_rx_state_to_frame(dev, rx_state) : 0;
++}
++EXPORT_SYMBOL_GPL(can_change_state);
++
++/* Local echo of CAN messages
++ *
++ * CAN network devices *should* support a local echo functionality
++ * (see Documentation/networking/can.rst). To test the handling of CAN
++ * interfaces that do not support the local echo both driver types are
++ * implemented. In the case that the driver does not support the echo
++ * the IFF_ECHO remains clear in dev->flags. This causes the PF_CAN core
++ * to perform the echo as a fallback solution.
++ */
++static void can_flush_echo_skb(struct net_device *dev)
++{
++	struct can_priv *priv = netdev_priv(dev);
++	struct net_device_stats *stats = &dev->stats;
++	int i;
++
++	for (i = 0; i < priv->echo_skb_max; i++) {
++		if (priv->echo_skb[i]) {
++			kfree_skb(priv->echo_skb[i]);
++			priv->echo_skb[i] = NULL;
++			stats->tx_dropped++;
++			stats->tx_aborted_errors++;
++		}
++	}
++}
++
++/* Put the skb on the stack to be looped backed locally lateron
++ *
++ * The function is typically called in the start_xmit function
++ * of the device driver. The driver must protect access to
++ * priv->echo_skb, if necessary.
++ */
++int can_put_echo_skb(struct sk_buff *skb, struct net_device *dev,
++		     unsigned int idx)
++{
++	struct can_priv *priv = netdev_priv(dev);
++
++	BUG_ON(idx >= priv->echo_skb_max);
++
++	/* check flag whether this packet has to be looped back */
++	if (!(dev->flags & IFF_ECHO) || skb->pkt_type != PACKET_LOOPBACK ||
++	    (skb->protocol != htons(ETH_P_CAN) &&
++	     skb->protocol != htons(ETH_P_CANFD))) {
++		kfree_skb(skb);
++		return 0;
++	}
++
++	if (!priv->echo_skb[idx]) {
++		skb = can_create_echo_skb(skb);
++		if (!skb)
++			return -ENOMEM;
++
++		/* make settings for echo to reduce code in irq context */
++		skb->pkt_type = PACKET_BROADCAST;
++		skb->ip_summed = CHECKSUM_UNNECESSARY;
++		skb->dev = dev;
++
++		/* save this skb for tx interrupt echo handling */
++		priv->echo_skb[idx] = skb;
++	} else {
++		/* locking problem with netif_stop_queue() ?? */
++		netdev_err(dev, "%s: BUG! echo_skb %d is occupied!\n", __func__, idx);
++		kfree_skb(skb);
++		return -EBUSY;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(can_put_echo_skb);
++
++struct sk_buff *
++__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr)
++{
++	struct can_priv *priv = netdev_priv(dev);
++
++	if (idx >= priv->echo_skb_max) {
++		netdev_err(dev, "%s: BUG! Trying to access can_priv::echo_skb out of bounds (%u/max %u)\n",
++			   __func__, idx, priv->echo_skb_max);
++		return NULL;
++	}
++
++	if (priv->echo_skb[idx]) {
++		/* Using "struct canfd_frame::len" for the frame
++		 * length is supported on both CAN and CANFD frames.
++		 */
++		struct sk_buff *skb = priv->echo_skb[idx];
++		struct canfd_frame *cf = (struct canfd_frame *)skb->data;
++
++		/* get the real payload length for netdev statistics */
++		if (cf->can_id & CAN_RTR_FLAG)
++			*len_ptr = 0;
++		else
++			*len_ptr = cf->len;
++
++		priv->echo_skb[idx] = NULL;
++
++		return skb;
++	}
++
++	return NULL;
++}
++
++/* Get the skb from the stack and loop it back locally
++ *
++ * The function is typically called when the TX done interrupt
++ * is handled in the device driver. The driver must protect
++ * access to priv->echo_skb, if necessary.
++ */
++unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx)
++{
++	struct sk_buff *skb;
++	u8 len;
++
++	skb = __can_get_echo_skb(dev, idx, &len);
++	if (!skb)
++		return 0;
++
++	skb_get(skb);
++	if (netif_rx(skb) == NET_RX_SUCCESS)
++		dev_consume_skb_any(skb);
++	else
++		dev_kfree_skb_any(skb);
++
++	return len;
++}
++EXPORT_SYMBOL_GPL(can_get_echo_skb);
++
++/* Remove the skb from the stack and free it.
++ *
++ * The function is typically called when TX failed.
++ */
++void can_free_echo_skb(struct net_device *dev, unsigned int idx)
++{
++	struct can_priv *priv = netdev_priv(dev);
++
++	BUG_ON(idx >= priv->echo_skb_max);
++
++	if (priv->echo_skb[idx]) {
++		dev_kfree_skb_any(priv->echo_skb[idx]);
++		priv->echo_skb[idx] = NULL;
++	}
++}
++EXPORT_SYMBOL_GPL(can_free_echo_skb);
++
++/* CAN device restart for bus-off recovery */
++static void can_restart(struct net_device *dev)
++{
++	struct can_priv *priv = netdev_priv(dev);
++	struct net_device_stats *stats = &dev->stats;
++	struct sk_buff *skb;
++	struct can_frame *cf;
++	int err;
++
++	BUG_ON(netif_carrier_ok(dev));
++
++	/* No synchronization needed because the device is bus-off and
++	 * no messages can come in or go out.
++	 */
++	can_flush_echo_skb(dev);
++
++	/* send restart message upstream */
++	skb = alloc_can_err_skb(dev, &cf);
++	if (!skb)
++		goto restart;
++
++	cf->can_id |= CAN_ERR_RESTARTED;
++
++	stats->rx_packets++;
++	stats->rx_bytes += cf->can_dlc;
++
++	netif_rx_ni(skb);
++
++restart:
++	netdev_dbg(dev, "restarted\n");
++	priv->can_stats.restarts++;
++
++	/* Now restart the device */
++	err = priv->do_set_mode(dev, CAN_MODE_START);
++
++	netif_carrier_on(dev);
++	if (err)
++		netdev_err(dev, "Error %d during restart", err);
++}
++
++static void can_restart_work(struct work_struct *work)
++{
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct can_priv *priv = container_of(dwork, struct can_priv,
++					     restart_work);
++
++	can_restart(priv->dev);
++}
++
++int can_restart_now(struct net_device *dev)
++{
++	struct can_priv *priv = netdev_priv(dev);
++
++	/* A manual restart is only permitted if automatic restart is
++	 * disabled and the device is in the bus-off state
++	 */
++	if (priv->restart_ms)
++		return -EINVAL;
++	if (priv->state != CAN_STATE_BUS_OFF)
++		return -EBUSY;
++
++	cancel_delayed_work_sync(&priv->restart_work);
++	can_restart(dev);
++
++	return 0;
++}
++
++/* CAN bus-off
++ *
++ * This functions should be called when the device goes bus-off to
++ * tell the netif layer that no more packets can be sent or received.
++ * If enabled, a timer is started to trigger bus-off recovery.
++ */
++void can_bus_off(struct net_device *dev)
++{
++	struct can_priv *priv = netdev_priv(dev);
++
++	if (priv->restart_ms)
++		netdev_info(dev, "bus-off, scheduling restart in %d ms\n",
++			    priv->restart_ms);
++	else
++		netdev_info(dev, "bus-off\n");
++
++	netif_carrier_off(dev);
++
++	if (priv->restart_ms)
++		schedule_delayed_work(&priv->restart_work,
++				      msecs_to_jiffies(priv->restart_ms));
++}
++EXPORT_SYMBOL_GPL(can_bus_off);
++
++static void can_setup(struct net_device *dev)
++{
++	dev->type = ARPHRD_CAN;
++	dev->mtu = CAN_MTU;
++	dev->hard_header_len = 0;
++	dev->addr_len = 0;
++	dev->tx_queue_len = 10;
++
++	/* New-style flags. */
++	dev->flags = IFF_NOARP;
++	dev->features = NETIF_F_HW_CSUM;
++}
++
++struct sk_buff *alloc_can_skb(struct net_device *dev, struct can_frame **cf)
++{
++	struct sk_buff *skb;
++
++	skb = netdev_alloc_skb(dev, sizeof(struct can_skb_priv) +
++			       sizeof(struct can_frame));
++	if (unlikely(!skb))
++		return NULL;
++
++	skb->protocol = htons(ETH_P_CAN);
++	skb->pkt_type = PACKET_BROADCAST;
++	skb->ip_summed = CHECKSUM_UNNECESSARY;
++
++	skb_reset_mac_header(skb);
++	skb_reset_network_header(skb);
++	skb_reset_transport_header(skb);
++
++	can_skb_reserve(skb);
++	can_skb_prv(skb)->ifindex = dev->ifindex;
++	can_skb_prv(skb)->skbcnt = 0;
++
++	*cf = skb_put_zero(skb, sizeof(struct can_frame));
++
++	return skb;
++}
++EXPORT_SYMBOL_GPL(alloc_can_skb);
++
++struct sk_buff *alloc_canfd_skb(struct net_device *dev,
++				struct canfd_frame **cfd)
++{
++	struct sk_buff *skb;
++
++	skb = netdev_alloc_skb(dev, sizeof(struct can_skb_priv) +
++			       sizeof(struct canfd_frame));
++	if (unlikely(!skb))
++		return NULL;
++
++	skb->protocol = htons(ETH_P_CANFD);
++	skb->pkt_type = PACKET_BROADCAST;
++	skb->ip_summed = CHECKSUM_UNNECESSARY;
++
++	skb_reset_mac_header(skb);
++	skb_reset_network_header(skb);
++	skb_reset_transport_header(skb);
++
++	can_skb_reserve(skb);
++	can_skb_prv(skb)->ifindex = dev->ifindex;
++	can_skb_prv(skb)->skbcnt = 0;
++
++	*cfd = skb_put_zero(skb, sizeof(struct canfd_frame));
++
++	return skb;
++}
++EXPORT_SYMBOL_GPL(alloc_canfd_skb);
++
++struct sk_buff *alloc_can_err_skb(struct net_device *dev, struct can_frame **cf)
++{
++	struct sk_buff *skb;
++
++	skb = alloc_can_skb(dev, cf);
++	if (unlikely(!skb))
++		return NULL;
++
++	(*cf)->can_id = CAN_ERR_FLAG;
++	(*cf)->can_dlc = CAN_ERR_DLC;
++
++	return skb;
++}
++EXPORT_SYMBOL_GPL(alloc_can_err_skb);
++
++/* Allocate and setup space for the CAN network device */
++struct net_device *alloc_candev_mqs(int sizeof_priv, unsigned int echo_skb_max,
++				    unsigned int txqs, unsigned int rxqs)
++{
++	struct can_ml_priv *can_ml;
++	struct net_device *dev;
++	struct can_priv *priv;
++	int size;
++
++	/* We put the driver's priv, the CAN mid layer priv and the
++	 * echo skb into the netdevice's priv. The memory layout for
++	 * the netdev_priv is like this:
++	 *
++	 * +-------------------------+
++	 * | driver's priv           |
++	 * +-------------------------+
++	 * | struct can_ml_priv      |
++	 * +-------------------------+
++	 * | array of struct sk_buff |
++	 * +-------------------------+
++	 */
++
++	size = ALIGN(sizeof_priv, NETDEV_ALIGN) + sizeof(struct can_ml_priv);
++
++	if (echo_skb_max)
++		size = ALIGN(size, sizeof(struct sk_buff *)) +
++			echo_skb_max * sizeof(struct sk_buff *);
++
++	dev = alloc_netdev_mqs(size, "can%d", NET_NAME_UNKNOWN, can_setup,
++			       txqs, rxqs);
++	if (!dev)
++		return NULL;
++
++	priv = netdev_priv(dev);
++	priv->dev = dev;
++
++	can_ml = (void *)priv + ALIGN(sizeof_priv, NETDEV_ALIGN);
++	can_set_ml_priv(dev, can_ml);
++
++	if (echo_skb_max) {
++		priv->echo_skb_max = echo_skb_max;
++		priv->echo_skb = (void *)priv +
++			(size - echo_skb_max * sizeof(struct sk_buff *));
++	}
++
++	priv->state = CAN_STATE_STOPPED;
++
++	INIT_DELAYED_WORK(&priv->restart_work, can_restart_work);
++
++	return dev;
++}
++EXPORT_SYMBOL_GPL(alloc_candev_mqs);
++
++/* Free space of the CAN network device */
++void free_candev(struct net_device *dev)
++{
++	free_netdev(dev);
++}
++EXPORT_SYMBOL_GPL(free_candev);
++
++/* changing MTU and control mode for CAN/CANFD devices */
++int can_change_mtu(struct net_device *dev, int new_mtu)
++{
++	struct can_priv *priv = netdev_priv(dev);
++
++	/* Do not allow changing the MTU while running */
++	if (dev->flags & IFF_UP)
++		return -EBUSY;
++
++	/* allow change of MTU according to the CANFD ability of the device */
++	switch (new_mtu) {
++	case CAN_MTU:
++		/* 'CANFD-only' controllers can not switch to CAN_MTU */
++		if (priv->ctrlmode_static & CAN_CTRLMODE_FD)
++			return -EINVAL;
++
++		priv->ctrlmode &= ~CAN_CTRLMODE_FD;
++		break;
++
++	case CANFD_MTU:
++		/* check for potential CANFD ability */
++		if (!(priv->ctrlmode_supported & CAN_CTRLMODE_FD) &&
++		    !(priv->ctrlmode_static & CAN_CTRLMODE_FD))
++			return -EINVAL;
++
++		priv->ctrlmode |= CAN_CTRLMODE_FD;
++		break;
++
++	default:
++		return -EINVAL;
++	}
++
++	dev->mtu = new_mtu;
++	return 0;
++}
++EXPORT_SYMBOL_GPL(can_change_mtu);
++
++/* Common open function when the device gets opened.
++ *
++ * This function should be called in the open function of the device
++ * driver.
++ */
++int open_candev(struct net_device *dev)
++{
++	struct can_priv *priv = netdev_priv(dev);
++
++	if (!priv->bittiming.bitrate) {
++		netdev_err(dev, "bit-timing not yet defined\n");
++		return -EINVAL;
++	}
++
++	/* For CAN FD the data bitrate has to be >= the arbitration bitrate */
++	if ((priv->ctrlmode & CAN_CTRLMODE_FD) &&
++	    (!priv->data_bittiming.bitrate ||
++	     priv->data_bittiming.bitrate < priv->bittiming.bitrate)) {
++		netdev_err(dev, "incorrect/missing data bit-timing\n");
++		return -EINVAL;
++	}
++
++	/* Switch carrier on if device was stopped while in bus-off state */
++	if (!netif_carrier_ok(dev))
++		netif_carrier_on(dev);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(open_candev);
++
++#ifdef CONFIG_OF
++/* Common function that can be used to understand the limitation of
++ * a transceiver when it provides no means to determine these limitations
++ * at runtime.
++ */
++void of_can_transceiver(struct net_device *dev)
++{
++	struct device_node *dn;
++	struct can_priv *priv = netdev_priv(dev);
++	struct device_node *np = dev->dev.parent->of_node;
++	int ret;
++
++	dn = of_get_child_by_name(np, "can-transceiver");
++	if (!dn)
++		return;
++
++	ret = of_property_read_u32(dn, "max-bitrate", &priv->bitrate_max);
++	of_node_put(dn);
++	if ((ret && ret != -EINVAL) || (!ret && !priv->bitrate_max))
++		netdev_warn(dev, "Invalid value for transceiver max bitrate. Ignoring bitrate limit.\n");
++}
++EXPORT_SYMBOL_GPL(of_can_transceiver);
++#endif
++
++/* Common close function for cleanup before the device gets closed.
++ *
++ * This function should be called in the close function of the device
++ * driver.
++ */
++void close_candev(struct net_device *dev)
++{
++	struct can_priv *priv = netdev_priv(dev);
++
++	cancel_delayed_work_sync(&priv->restart_work);
++	can_flush_echo_skb(dev);
++}
++EXPORT_SYMBOL_GPL(close_candev);
++
++/* CAN netlink interface */
++static const struct nla_policy can_policy[IFLA_CAN_MAX + 1] = {
++	[IFLA_CAN_STATE]	= { .type = NLA_U32 },
++	[IFLA_CAN_CTRLMODE]	= { .len = sizeof(struct can_ctrlmode) },
++	[IFLA_CAN_RESTART_MS]	= { .type = NLA_U32 },
++	[IFLA_CAN_RESTART]	= { .type = NLA_U32 },
++	[IFLA_CAN_BITTIMING]	= { .len = sizeof(struct can_bittiming) },
++	[IFLA_CAN_BITTIMING_CONST]
++				= { .len = sizeof(struct can_bittiming_const) },
++	[IFLA_CAN_CLOCK]	= { .len = sizeof(struct can_clock) },
++	[IFLA_CAN_BERR_COUNTER]	= { .len = sizeof(struct can_berr_counter) },
++	[IFLA_CAN_DATA_BITTIMING]
++				= { .len = sizeof(struct can_bittiming) },
++	[IFLA_CAN_DATA_BITTIMING_CONST]
++				= { .len = sizeof(struct can_bittiming_const) },
++	[IFLA_CAN_TERMINATION]	= { .type = NLA_U16 },
++};
++
++static int can_validate(struct nlattr *tb[], struct nlattr *data[],
++			struct netlink_ext_ack *extack)
++{
++	bool is_can_fd = false;
++
++	/* Make sure that valid CAN FD configurations always consist of
++	 * - nominal/arbitration bittiming
++	 * - data bittiming
++	 * - control mode with CAN_CTRLMODE_FD set
++	 */
++
++	if (!data)
++		return 0;
++
++	if (data[IFLA_CAN_CTRLMODE]) {
++		struct can_ctrlmode *cm = nla_data(data[IFLA_CAN_CTRLMODE]);
++
++		is_can_fd = cm->flags & cm->mask & CAN_CTRLMODE_FD;
++	}
++
++	if (is_can_fd) {
++		if (!data[IFLA_CAN_BITTIMING] || !data[IFLA_CAN_DATA_BITTIMING])
++			return -EOPNOTSUPP;
++	}
++
++	if (data[IFLA_CAN_DATA_BITTIMING]) {
++		if (!is_can_fd || !data[IFLA_CAN_BITTIMING])
++			return -EOPNOTSUPP;
++	}
++
++	return 0;
++}
++
++static int can_changelink(struct net_device *dev, struct nlattr *tb[],
++			  struct nlattr *data[],
++			  struct netlink_ext_ack *extack)
++{
++	struct can_priv *priv = netdev_priv(dev);
++	int err;
++
++	/* We need synchronization with dev->stop() */
++	ASSERT_RTNL();
++
++	if (data[IFLA_CAN_BITTIMING]) {
++		struct can_bittiming bt;
++
++		/* Do not allow changing bittiming while running */
++		if (dev->flags & IFF_UP)
++			return -EBUSY;
++
++		/* Calculate bittiming parameters based on
++		 * bittiming_const if set, otherwise pass bitrate
++		 * directly via do_set_bitrate(). Bail out if neither
++		 * is given.
++		 */
++		if (!priv->bittiming_const && !priv->do_set_bittiming)
++			return -EOPNOTSUPP;
++
++		memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
++		err = can_get_bittiming(dev, &bt,
++					priv->bittiming_const,
++					priv->bitrate_const,
++					priv->bitrate_const_cnt);
++		if (err)
++			return err;
++
++		if (priv->bitrate_max && bt.bitrate > priv->bitrate_max) {
++			netdev_err(dev, "arbitration bitrate surpasses transceiver capabilities of %d bps\n",
++				   priv->bitrate_max);
++			return -EINVAL;
++		}
++
++		memcpy(&priv->bittiming, &bt, sizeof(bt));
++
++		if (priv->do_set_bittiming) {
++			/* Finally, set the bit-timing registers */
++			err = priv->do_set_bittiming(dev);
++			if (err)
++				return err;
++		}
++	}
++
++	if (data[IFLA_CAN_CTRLMODE]) {
++		struct can_ctrlmode *cm;
++		u32 ctrlstatic;
++		u32 maskedflags;
++
++		/* Do not allow changing controller mode while running */
++		if (dev->flags & IFF_UP)
++			return -EBUSY;
++		cm = nla_data(data[IFLA_CAN_CTRLMODE]);
++		ctrlstatic = priv->ctrlmode_static;
++		maskedflags = cm->flags & cm->mask;
++
++		/* check whether provided bits are allowed to be passed */
++		if (cm->mask & ~(priv->ctrlmode_supported | ctrlstatic))
++			return -EOPNOTSUPP;
++
++		/* do not check for static fd-non-iso if 'fd' is disabled */
++		if (!(maskedflags & CAN_CTRLMODE_FD))
++			ctrlstatic &= ~CAN_CTRLMODE_FD_NON_ISO;
++
++		/* make sure static options are provided by configuration */
++		if ((maskedflags & ctrlstatic) != ctrlstatic)
++			return -EOPNOTSUPP;
++
++		/* clear bits to be modified and copy the flag values */
++		priv->ctrlmode &= ~cm->mask;
++		priv->ctrlmode |= maskedflags;
++
++		/* CAN_CTRLMODE_FD can only be set when driver supports FD */
++		if (priv->ctrlmode & CAN_CTRLMODE_FD)
++			dev->mtu = CANFD_MTU;
++		else
++			dev->mtu = CAN_MTU;
++	}
++
++	if (data[IFLA_CAN_RESTART_MS]) {
++		/* Do not allow changing restart delay while running */
++		if (dev->flags & IFF_UP)
++			return -EBUSY;
++		priv->restart_ms = nla_get_u32(data[IFLA_CAN_RESTART_MS]);
++	}
++
++	if (data[IFLA_CAN_RESTART]) {
++		/* Do not allow a restart while not running */
++		if (!(dev->flags & IFF_UP))
++			return -EINVAL;
++		err = can_restart_now(dev);
++		if (err)
++			return err;
++	}
++
++	if (data[IFLA_CAN_DATA_BITTIMING]) {
++		struct can_bittiming dbt;
++
++		/* Do not allow changing bittiming while running */
++		if (dev->flags & IFF_UP)
++			return -EBUSY;
++
++		/* Calculate bittiming parameters based on
++		 * data_bittiming_const if set, otherwise pass bitrate
++		 * directly via do_set_bitrate(). Bail out if neither
++		 * is given.
++		 */
++		if (!priv->data_bittiming_const && !priv->do_set_data_bittiming)
++			return -EOPNOTSUPP;
++
++		memcpy(&dbt, nla_data(data[IFLA_CAN_DATA_BITTIMING]),
++		       sizeof(dbt));
++		err = can_get_bittiming(dev, &dbt,
++					priv->data_bittiming_const,
++					priv->data_bitrate_const,
++					priv->data_bitrate_const_cnt);
++		if (err)
++			return err;
++
++		if (priv->bitrate_max && dbt.bitrate > priv->bitrate_max) {
++			netdev_err(dev, "canfd data bitrate surpasses transceiver capabilities of %d bps\n",
++				   priv->bitrate_max);
++			return -EINVAL;
++		}
++
++		memcpy(&priv->data_bittiming, &dbt, sizeof(dbt));
++
++		if (priv->do_set_data_bittiming) {
++			/* Finally, set the bit-timing registers */
++			err = priv->do_set_data_bittiming(dev);
++			if (err)
++				return err;
++		}
++	}
++
++	if (data[IFLA_CAN_TERMINATION]) {
++		const u16 termval = nla_get_u16(data[IFLA_CAN_TERMINATION]);
++		const unsigned int num_term = priv->termination_const_cnt;
++		unsigned int i;
++
++		if (!priv->do_set_termination)
++			return -EOPNOTSUPP;
++
++		/* check whether given value is supported by the interface */
++		for (i = 0; i < num_term; i++) {
++			if (termval == priv->termination_const[i])
++				break;
++		}
++		if (i >= num_term)
++			return -EINVAL;
++
++		/* Finally, set the termination value */
++		err = priv->do_set_termination(dev, termval);
++		if (err)
++			return err;
++
++		priv->termination = termval;
++	}
++
++	return 0;
++}
++
++static size_t can_get_size(const struct net_device *dev)
++{
++	struct can_priv *priv = netdev_priv(dev);
++	size_t size = 0;
++
++	if (priv->bittiming.bitrate)				/* IFLA_CAN_BITTIMING */
++		size += nla_total_size(sizeof(struct can_bittiming));
++	if (priv->bittiming_const)				/* IFLA_CAN_BITTIMING_CONST */
++		size += nla_total_size(sizeof(struct can_bittiming_const));
++	size += nla_total_size(sizeof(struct can_clock));	/* IFLA_CAN_CLOCK */
++	size += nla_total_size(sizeof(u32));			/* IFLA_CAN_STATE */
++	size += nla_total_size(sizeof(struct can_ctrlmode));	/* IFLA_CAN_CTRLMODE */
++	size += nla_total_size(sizeof(u32));			/* IFLA_CAN_RESTART_MS */
++	if (priv->do_get_berr_counter)				/* IFLA_CAN_BERR_COUNTER */
++		size += nla_total_size(sizeof(struct can_berr_counter));
++	if (priv->data_bittiming.bitrate)			/* IFLA_CAN_DATA_BITTIMING */
++		size += nla_total_size(sizeof(struct can_bittiming));
++	if (priv->data_bittiming_const)				/* IFLA_CAN_DATA_BITTIMING_CONST */
++		size += nla_total_size(sizeof(struct can_bittiming_const));
++	if (priv->termination_const) {
++		size += nla_total_size(sizeof(priv->termination));		/* IFLA_CAN_TERMINATION */
++		size += nla_total_size(sizeof(*priv->termination_const) *	/* IFLA_CAN_TERMINATION_CONST */
++				       priv->termination_const_cnt);
++	}
++	if (priv->bitrate_const)				/* IFLA_CAN_BITRATE_CONST */
++		size += nla_total_size(sizeof(*priv->bitrate_const) *
++				       priv->bitrate_const_cnt);
++	if (priv->data_bitrate_const)				/* IFLA_CAN_DATA_BITRATE_CONST */
++		size += nla_total_size(sizeof(*priv->data_bitrate_const) *
++				       priv->data_bitrate_const_cnt);
++	size += sizeof(priv->bitrate_max);			/* IFLA_CAN_BITRATE_MAX */
++
++	return size;
++}
++
++static int can_fill_info(struct sk_buff *skb, const struct net_device *dev)
++{
++	struct can_priv *priv = netdev_priv(dev);
++	struct can_ctrlmode cm = {.flags = priv->ctrlmode};
++	struct can_berr_counter bec = { };
++	enum can_state state = priv->state;
++
++	if (priv->do_get_state)
++		priv->do_get_state(dev, &state);
++
++	if ((priv->bittiming.bitrate &&
++	     nla_put(skb, IFLA_CAN_BITTIMING,
++		     sizeof(priv->bittiming), &priv->bittiming)) ||
++
++	    (priv->bittiming_const &&
++	     nla_put(skb, IFLA_CAN_BITTIMING_CONST,
++		     sizeof(*priv->bittiming_const), priv->bittiming_const)) ||
++
++	    nla_put(skb, IFLA_CAN_CLOCK, sizeof(priv->clock), &priv->clock) ||
++	    nla_put_u32(skb, IFLA_CAN_STATE, state) ||
++	    nla_put(skb, IFLA_CAN_CTRLMODE, sizeof(cm), &cm) ||
++	    nla_put_u32(skb, IFLA_CAN_RESTART_MS, priv->restart_ms) ||
++
++	    (priv->do_get_berr_counter &&
++	     !priv->do_get_berr_counter(dev, &bec) &&
++	     nla_put(skb, IFLA_CAN_BERR_COUNTER, sizeof(bec), &bec)) ||
++
++	    (priv->data_bittiming.bitrate &&
++	     nla_put(skb, IFLA_CAN_DATA_BITTIMING,
++		     sizeof(priv->data_bittiming), &priv->data_bittiming)) ||
++
++	    (priv->data_bittiming_const &&
++	     nla_put(skb, IFLA_CAN_DATA_BITTIMING_CONST,
++		     sizeof(*priv->data_bittiming_const),
++		     priv->data_bittiming_const)) ||
++
++	    (priv->termination_const &&
++	     (nla_put_u16(skb, IFLA_CAN_TERMINATION, priv->termination) ||
++	      nla_put(skb, IFLA_CAN_TERMINATION_CONST,
++		      sizeof(*priv->termination_const) *
++		      priv->termination_const_cnt,
++		      priv->termination_const))) ||
++
++	    (priv->bitrate_const &&
++	     nla_put(skb, IFLA_CAN_BITRATE_CONST,
++		     sizeof(*priv->bitrate_const) *
++		     priv->bitrate_const_cnt,
++		     priv->bitrate_const)) ||
++
++	    (priv->data_bitrate_const &&
++	     nla_put(skb, IFLA_CAN_DATA_BITRATE_CONST,
++		     sizeof(*priv->data_bitrate_const) *
++		     priv->data_bitrate_const_cnt,
++		     priv->data_bitrate_const)) ||
++
++	    (nla_put(skb, IFLA_CAN_BITRATE_MAX,
++		     sizeof(priv->bitrate_max),
++		     &priv->bitrate_max))
++	    )
++
++		return -EMSGSIZE;
++
++	return 0;
++}
++
++static size_t can_get_xstats_size(const struct net_device *dev)
++{
++	return sizeof(struct can_device_stats);
++}
++
++static int can_fill_xstats(struct sk_buff *skb, const struct net_device *dev)
++{
++	struct can_priv *priv = netdev_priv(dev);
++
++	if (nla_put(skb, IFLA_INFO_XSTATS,
++		    sizeof(priv->can_stats), &priv->can_stats))
++		goto nla_put_failure;
++	return 0;
++
++nla_put_failure:
++	return -EMSGSIZE;
++}
++
++static int can_newlink(struct net *src_net, struct net_device *dev,
++		       struct nlattr *tb[], struct nlattr *data[],
++		       struct netlink_ext_ack *extack)
++{
++	return -EOPNOTSUPP;
++}
++
++static void can_dellink(struct net_device *dev, struct list_head *head)
++{
++}
++
++static struct rtnl_link_ops can_link_ops __read_mostly = {
++	.kind		= "can",
++	.netns_refund	= true,
++	.maxtype	= IFLA_CAN_MAX,
++	.policy		= can_policy,
++	.setup		= can_setup,
++	.validate	= can_validate,
++	.newlink	= can_newlink,
++	.changelink	= can_changelink,
++	.dellink	= can_dellink,
++	.get_size	= can_get_size,
++	.fill_info	= can_fill_info,
++	.get_xstats_size = can_get_xstats_size,
++	.fill_xstats	= can_fill_xstats,
++};
++
++/* Register the CAN network device */
++int register_candev(struct net_device *dev)
++{
++	struct can_priv *priv = netdev_priv(dev);
++
++	/* Ensure termination_const, termination_const_cnt and
++	 * do_set_termination consistency. All must be either set or
++	 * unset.
++	 */
++	if ((!priv->termination_const != !priv->termination_const_cnt) ||
++	    (!priv->termination_const != !priv->do_set_termination))
++		return -EINVAL;
++
++	if (!priv->bitrate_const != !priv->bitrate_const_cnt)
++		return -EINVAL;
++
++	if (!priv->data_bitrate_const != !priv->data_bitrate_const_cnt)
++		return -EINVAL;
++
++	dev->rtnl_link_ops = &can_link_ops;
++	netif_carrier_off(dev);
++
++	return register_netdev(dev);
++}
++EXPORT_SYMBOL_GPL(register_candev);
++
++/* Unregister the CAN network device */
++void unregister_candev(struct net_device *dev)
++{
++	unregister_netdev(dev);
++}
++EXPORT_SYMBOL_GPL(unregister_candev);
++
++/* Test if a network device is a candev based device
++ * and return the can_priv* if so.
++ */
++struct can_priv *safe_candev_priv(struct net_device *dev)
++{
++	if (dev->type != ARPHRD_CAN || dev->rtnl_link_ops != &can_link_ops)
++		return NULL;
++
++	return netdev_priv(dev);
++}
++EXPORT_SYMBOL_GPL(safe_candev_priv);
++
++static __init int can_dev_init(void)
++{
++	int err;
++
++	can_led_notifier_init();
++
++	err = rtnl_link_register(&can_link_ops);
++	if (!err)
++		pr_info(MOD_DESC "\n");
++
++	return err;
++}
++module_init(can_dev_init);
++
++static __exit void can_dev_exit(void)
++{
++	rtnl_link_unregister(&can_link_ops);
++
++	can_led_notifier_exit();
++}
++module_exit(can_dev_exit);
++
++MODULE_ALIAS_RTNL_LINK("can");
+diff --git a/drivers/net/can/dev/rx-offload.c b/drivers/net/can/dev/rx-offload.c
+new file mode 100644
+index 0000000000000..6e95193b215ba
+--- /dev/null
++++ b/drivers/net/can/dev/rx-offload.c
+@@ -0,0 +1,376 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/* Copyright (c) 2014      Protonic Holland,
++ *                         David Jander
++ * Copyright (C) 2014-2017 Pengutronix,
++ *                         Marc Kleine-Budde <kernel@pengutronix.de>
++ */
++
++#include <linux/can/dev.h>
++#include <linux/can/rx-offload.h>
++
++struct can_rx_offload_cb {
++	u32 timestamp;
++};
++
++static inline struct can_rx_offload_cb *
++can_rx_offload_get_cb(struct sk_buff *skb)
++{
++	BUILD_BUG_ON(sizeof(struct can_rx_offload_cb) > sizeof(skb->cb));
++
++	return (struct can_rx_offload_cb *)skb->cb;
++}
++
++static inline bool
++can_rx_offload_le(struct can_rx_offload *offload,
++		  unsigned int a, unsigned int b)
++{
++	if (offload->inc)
++		return a <= b;
++	else
++		return a >= b;
++}
++
++static inline unsigned int
++can_rx_offload_inc(struct can_rx_offload *offload, unsigned int *val)
++{
++	if (offload->inc)
++		return (*val)++;
++	else
++		return (*val)--;
++}
++
++static int can_rx_offload_napi_poll(struct napi_struct *napi, int quota)
++{
++	struct can_rx_offload *offload = container_of(napi,
++						      struct can_rx_offload,
++						      napi);
++	struct net_device *dev = offload->dev;
++	struct net_device_stats *stats = &dev->stats;
++	struct sk_buff *skb;
++	int work_done = 0;
++
++	while ((work_done < quota) &&
++	       (skb = skb_dequeue(&offload->skb_queue))) {
++		struct can_frame *cf = (struct can_frame *)skb->data;
++
++		work_done++;
++		stats->rx_packets++;
++		stats->rx_bytes += cf->can_dlc;
++		netif_receive_skb(skb);
++	}
++
++	if (work_done < quota) {
++		napi_complete_done(napi, work_done);
++
++		/* Check if there was another interrupt */
++		if (!skb_queue_empty(&offload->skb_queue))
++			napi_reschedule(&offload->napi);
++	}
++
++	can_led_event(offload->dev, CAN_LED_EVENT_RX);
++
++	return work_done;
++}
++
++static inline void
++__skb_queue_add_sort(struct sk_buff_head *head, struct sk_buff *new,
++		     int (*compare)(struct sk_buff *a, struct sk_buff *b))
++{
++	struct sk_buff *pos, *insert = NULL;
++
++	skb_queue_reverse_walk(head, pos) {
++		const struct can_rx_offload_cb *cb_pos, *cb_new;
++
++		cb_pos = can_rx_offload_get_cb(pos);
++		cb_new = can_rx_offload_get_cb(new);
++
++		netdev_dbg(new->dev,
++			   "%s: pos=0x%08x, new=0x%08x, diff=%10d, queue_len=%d\n",
++			   __func__,
++			   cb_pos->timestamp, cb_new->timestamp,
++			   cb_new->timestamp - cb_pos->timestamp,
++			   skb_queue_len(head));
++
++		if (compare(pos, new) < 0)
++			continue;
++		insert = pos;
++		break;
++	}
++	if (!insert)
++		__skb_queue_head(head, new);
++	else
++		__skb_queue_after(head, insert, new);
++}
++
++static int can_rx_offload_compare(struct sk_buff *a, struct sk_buff *b)
++{
++	const struct can_rx_offload_cb *cb_a, *cb_b;
++
++	cb_a = can_rx_offload_get_cb(a);
++	cb_b = can_rx_offload_get_cb(b);
++
++	/* Subtract two u32 and return result as int, to keep
++	 * difference steady around the u32 overflow.
++	 */
++	return cb_b->timestamp - cb_a->timestamp;
++}
++
++/**
++ * can_rx_offload_offload_one() - Read one CAN frame from HW
++ * @offload: pointer to rx_offload context
++ * @n: number of mailbox to read
++ *
++ * The task of this function is to read a CAN frame from mailbox @n
++ * from the device and return the mailbox's content as a struct
++ * sk_buff.
++ *
++ * If the struct can_rx_offload::skb_queue exceeds the maximal queue
++ * length (struct can_rx_offload::skb_queue_len_max) or no skb can be
++ * allocated, the mailbox contents is discarded by reading it into an
++ * overflow buffer. This way the mailbox is marked as free by the
++ * driver.
++ *
++ * Return: A pointer to skb containing the CAN frame on success.
++ *
++ *         NULL if the mailbox @n is empty.
++ *
++ *         ERR_PTR() in case of an error
++ */
++static struct sk_buff *
++can_rx_offload_offload_one(struct can_rx_offload *offload, unsigned int n)
++{
++	struct sk_buff *skb;
++	struct can_rx_offload_cb *cb;
++	bool drop = false;
++	u32 timestamp;
++
++	/* If queue is full drop frame */
++	if (unlikely(skb_queue_len(&offload->skb_queue) >
++		     offload->skb_queue_len_max))
++		drop = true;
++
++	skb = offload->mailbox_read(offload, n, &timestamp, drop);
++	/* Mailbox was empty. */
++	if (unlikely(!skb))
++		return NULL;
++
++	/* There was a problem reading the mailbox, propagate
++	 * error value.
++	 */
++	if (unlikely(IS_ERR(skb))) {
++		offload->dev->stats.rx_dropped++;
++		offload->dev->stats.rx_fifo_errors++;
++
++		return skb;
++	}
++
++	/* Mailbox was read. */
++	cb = can_rx_offload_get_cb(skb);
++	cb->timestamp = timestamp;
++
++	return skb;
++}
++
++int can_rx_offload_irq_offload_timestamp(struct can_rx_offload *offload,
++					 u64 pending)
++{
++	struct sk_buff_head skb_queue;
++	unsigned int i;
++
++	__skb_queue_head_init(&skb_queue);
++
++	for (i = offload->mb_first;
++	     can_rx_offload_le(offload, i, offload->mb_last);
++	     can_rx_offload_inc(offload, &i)) {
++		struct sk_buff *skb;
++
++		if (!(pending & BIT_ULL(i)))
++			continue;
++
++		skb = can_rx_offload_offload_one(offload, i);
++		if (IS_ERR_OR_NULL(skb))
++			continue;
++
++		__skb_queue_add_sort(&skb_queue, skb, can_rx_offload_compare);
++	}
++
++	if (!skb_queue_empty(&skb_queue)) {
++		unsigned long flags;
++		u32 queue_len;
++
++		spin_lock_irqsave(&offload->skb_queue.lock, flags);
++		skb_queue_splice_tail(&skb_queue, &offload->skb_queue);
++		spin_unlock_irqrestore(&offload->skb_queue.lock, flags);
++
++		queue_len = skb_queue_len(&offload->skb_queue);
++		if (queue_len > offload->skb_queue_len_max / 8)
++			netdev_dbg(offload->dev, "%s: queue_len=%d\n",
++				   __func__, queue_len);
++
++		can_rx_offload_schedule(offload);
++	}
++
++	return skb_queue_len(&skb_queue);
++}
++EXPORT_SYMBOL_GPL(can_rx_offload_irq_offload_timestamp);
++
++int can_rx_offload_irq_offload_fifo(struct can_rx_offload *offload)
++{
++	struct sk_buff *skb;
++	int received = 0;
++
++	while (1) {
++		skb = can_rx_offload_offload_one(offload, 0);
++		if (IS_ERR(skb))
++			continue;
++		if (!skb)
++			break;
++
++		skb_queue_tail(&offload->skb_queue, skb);
++		received++;
++	}
++
++	if (received)
++		can_rx_offload_schedule(offload);
++
++	return received;
++}
++EXPORT_SYMBOL_GPL(can_rx_offload_irq_offload_fifo);
++
++int can_rx_offload_queue_sorted(struct can_rx_offload *offload,
++				struct sk_buff *skb, u32 timestamp)
++{
++	struct can_rx_offload_cb *cb;
++	unsigned long flags;
++
++	if (skb_queue_len(&offload->skb_queue) >
++	    offload->skb_queue_len_max) {
++		dev_kfree_skb_any(skb);
++		return -ENOBUFS;
++	}
++
++	cb = can_rx_offload_get_cb(skb);
++	cb->timestamp = timestamp;
++
++	spin_lock_irqsave(&offload->skb_queue.lock, flags);
++	__skb_queue_add_sort(&offload->skb_queue, skb, can_rx_offload_compare);
++	spin_unlock_irqrestore(&offload->skb_queue.lock, flags);
++
++	can_rx_offload_schedule(offload);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(can_rx_offload_queue_sorted);
++
++unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload,
++					 unsigned int idx, u32 timestamp)
++{
++	struct net_device *dev = offload->dev;
++	struct net_device_stats *stats = &dev->stats;
++	struct sk_buff *skb;
++	u8 len;
++	int err;
++
++	skb = __can_get_echo_skb(dev, idx, &len);
++	if (!skb)
++		return 0;
++
++	err = can_rx_offload_queue_sorted(offload, skb, timestamp);
++	if (err) {
++		stats->rx_errors++;
++		stats->tx_fifo_errors++;
++	}
++
++	return len;
++}
++EXPORT_SYMBOL_GPL(can_rx_offload_get_echo_skb);
++
++int can_rx_offload_queue_tail(struct can_rx_offload *offload,
++			      struct sk_buff *skb)
++{
++	if (skb_queue_len(&offload->skb_queue) >
++	    offload->skb_queue_len_max) {
++		dev_kfree_skb_any(skb);
++		return -ENOBUFS;
++	}
++
++	skb_queue_tail(&offload->skb_queue, skb);
++	can_rx_offload_schedule(offload);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(can_rx_offload_queue_tail);
++
++static int can_rx_offload_init_queue(struct net_device *dev,
++				     struct can_rx_offload *offload,
++				     unsigned int weight)
++{
++	offload->dev = dev;
++
++	/* Limit queue len to 4x the weight (rounted to next power of two) */
++	offload->skb_queue_len_max = 2 << fls(weight);
++	offload->skb_queue_len_max *= 4;
++	skb_queue_head_init(&offload->skb_queue);
++
++	netif_napi_add(dev, &offload->napi, can_rx_offload_napi_poll, weight);
++
++	dev_dbg(dev->dev.parent, "%s: skb_queue_len_max=%d\n",
++		__func__, offload->skb_queue_len_max);
++
++	return 0;
++}
++
++int can_rx_offload_add_timestamp(struct net_device *dev,
++				 struct can_rx_offload *offload)
++{
++	unsigned int weight;
++
++	if (offload->mb_first > BITS_PER_LONG_LONG ||
++	    offload->mb_last > BITS_PER_LONG_LONG || !offload->mailbox_read)
++		return -EINVAL;
++
++	if (offload->mb_first < offload->mb_last) {
++		offload->inc = true;
++		weight = offload->mb_last - offload->mb_first;
++	} else {
++		offload->inc = false;
++		weight = offload->mb_first - offload->mb_last;
++	}
++
++	return can_rx_offload_init_queue(dev, offload, weight);
++}
++EXPORT_SYMBOL_GPL(can_rx_offload_add_timestamp);
++
++int can_rx_offload_add_fifo(struct net_device *dev,
++			    struct can_rx_offload *offload, unsigned int weight)
++{
++	if (!offload->mailbox_read)
++		return -EINVAL;
++
++	return can_rx_offload_init_queue(dev, offload, weight);
++}
++EXPORT_SYMBOL_GPL(can_rx_offload_add_fifo);
++
++int can_rx_offload_add_manual(struct net_device *dev,
++			      struct can_rx_offload *offload,
++			      unsigned int weight)
++{
++	if (offload->mailbox_read)
++		return -EINVAL;
++
++	return can_rx_offload_init_queue(dev, offload, weight);
++}
++EXPORT_SYMBOL_GPL(can_rx_offload_add_manual);
++
++void can_rx_offload_enable(struct can_rx_offload *offload)
++{
++	napi_enable(&offload->napi);
++}
++EXPORT_SYMBOL_GPL(can_rx_offload_enable);
++
++void can_rx_offload_del(struct can_rx_offload *offload)
++{
++	netif_napi_del(&offload->napi);
++	skb_queue_purge(&offload->skb_queue);
++}
++EXPORT_SYMBOL_GPL(can_rx_offload_del);
+diff --git a/drivers/net/can/m_can/tcan4x5x.c b/drivers/net/can/m_can/tcan4x5x.c
+index 01f5b6e03a2dd..f169d9090e52f 100644
+--- a/drivers/net/can/m_can/tcan4x5x.c
++++ b/drivers/net/can/m_can/tcan4x5x.c
+@@ -88,7 +88,7 @@
+ 
+ #define TCAN4X5X_MRAM_START 0x8000
+ #define TCAN4X5X_MCAN_OFFSET 0x1000
+-#define TCAN4X5X_MAX_REGISTER 0x8fff
++#define TCAN4X5X_MAX_REGISTER 0x8ffc
+ 
+ #define TCAN4X5X_CLEAR_ALL_INT 0xffffffff
+ #define TCAN4X5X_SET_ALL_INT 0xffffffff
+diff --git a/drivers/net/can/rx-offload.c b/drivers/net/can/rx-offload.c
+deleted file mode 100644
+index 6e95193b215ba..0000000000000
+--- a/drivers/net/can/rx-offload.c
++++ /dev/null
+@@ -1,376 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/* Copyright (c) 2014      Protonic Holland,
+- *                         David Jander
+- * Copyright (C) 2014-2017 Pengutronix,
+- *                         Marc Kleine-Budde <kernel@pengutronix.de>
+- */
+-
+-#include <linux/can/dev.h>
+-#include <linux/can/rx-offload.h>
+-
+-struct can_rx_offload_cb {
+-	u32 timestamp;
+-};
+-
+-static inline struct can_rx_offload_cb *
+-can_rx_offload_get_cb(struct sk_buff *skb)
+-{
+-	BUILD_BUG_ON(sizeof(struct can_rx_offload_cb) > sizeof(skb->cb));
+-
+-	return (struct can_rx_offload_cb *)skb->cb;
+-}
+-
+-static inline bool
+-can_rx_offload_le(struct can_rx_offload *offload,
+-		  unsigned int a, unsigned int b)
+-{
+-	if (offload->inc)
+-		return a <= b;
+-	else
+-		return a >= b;
+-}
+-
+-static inline unsigned int
+-can_rx_offload_inc(struct can_rx_offload *offload, unsigned int *val)
+-{
+-	if (offload->inc)
+-		return (*val)++;
+-	else
+-		return (*val)--;
+-}
+-
+-static int can_rx_offload_napi_poll(struct napi_struct *napi, int quota)
+-{
+-	struct can_rx_offload *offload = container_of(napi,
+-						      struct can_rx_offload,
+-						      napi);
+-	struct net_device *dev = offload->dev;
+-	struct net_device_stats *stats = &dev->stats;
+-	struct sk_buff *skb;
+-	int work_done = 0;
+-
+-	while ((work_done < quota) &&
+-	       (skb = skb_dequeue(&offload->skb_queue))) {
+-		struct can_frame *cf = (struct can_frame *)skb->data;
+-
+-		work_done++;
+-		stats->rx_packets++;
+-		stats->rx_bytes += cf->can_dlc;
+-		netif_receive_skb(skb);
+-	}
+-
+-	if (work_done < quota) {
+-		napi_complete_done(napi, work_done);
+-
+-		/* Check if there was another interrupt */
+-		if (!skb_queue_empty(&offload->skb_queue))
+-			napi_reschedule(&offload->napi);
+-	}
+-
+-	can_led_event(offload->dev, CAN_LED_EVENT_RX);
+-
+-	return work_done;
+-}
+-
+-static inline void
+-__skb_queue_add_sort(struct sk_buff_head *head, struct sk_buff *new,
+-		     int (*compare)(struct sk_buff *a, struct sk_buff *b))
+-{
+-	struct sk_buff *pos, *insert = NULL;
+-
+-	skb_queue_reverse_walk(head, pos) {
+-		const struct can_rx_offload_cb *cb_pos, *cb_new;
+-
+-		cb_pos = can_rx_offload_get_cb(pos);
+-		cb_new = can_rx_offload_get_cb(new);
+-
+-		netdev_dbg(new->dev,
+-			   "%s: pos=0x%08x, new=0x%08x, diff=%10d, queue_len=%d\n",
+-			   __func__,
+-			   cb_pos->timestamp, cb_new->timestamp,
+-			   cb_new->timestamp - cb_pos->timestamp,
+-			   skb_queue_len(head));
+-
+-		if (compare(pos, new) < 0)
+-			continue;
+-		insert = pos;
+-		break;
+-	}
+-	if (!insert)
+-		__skb_queue_head(head, new);
+-	else
+-		__skb_queue_after(head, insert, new);
+-}
+-
+-static int can_rx_offload_compare(struct sk_buff *a, struct sk_buff *b)
+-{
+-	const struct can_rx_offload_cb *cb_a, *cb_b;
+-
+-	cb_a = can_rx_offload_get_cb(a);
+-	cb_b = can_rx_offload_get_cb(b);
+-
+-	/* Subtract two u32 and return result as int, to keep
+-	 * difference steady around the u32 overflow.
+-	 */
+-	return cb_b->timestamp - cb_a->timestamp;
+-}
+-
+-/**
+- * can_rx_offload_offload_one() - Read one CAN frame from HW
+- * @offload: pointer to rx_offload context
+- * @n: number of mailbox to read
+- *
+- * The task of this function is to read a CAN frame from mailbox @n
+- * from the device and return the mailbox's content as a struct
+- * sk_buff.
+- *
+- * If the struct can_rx_offload::skb_queue exceeds the maximal queue
+- * length (struct can_rx_offload::skb_queue_len_max) or no skb can be
+- * allocated, the mailbox contents is discarded by reading it into an
+- * overflow buffer. This way the mailbox is marked as free by the
+- * driver.
+- *
+- * Return: A pointer to skb containing the CAN frame on success.
+- *
+- *         NULL if the mailbox @n is empty.
+- *
+- *         ERR_PTR() in case of an error
+- */
+-static struct sk_buff *
+-can_rx_offload_offload_one(struct can_rx_offload *offload, unsigned int n)
+-{
+-	struct sk_buff *skb;
+-	struct can_rx_offload_cb *cb;
+-	bool drop = false;
+-	u32 timestamp;
+-
+-	/* If queue is full drop frame */
+-	if (unlikely(skb_queue_len(&offload->skb_queue) >
+-		     offload->skb_queue_len_max))
+-		drop = true;
+-
+-	skb = offload->mailbox_read(offload, n, &timestamp, drop);
+-	/* Mailbox was empty. */
+-	if (unlikely(!skb))
+-		return NULL;
+-
+-	/* There was a problem reading the mailbox, propagate
+-	 * error value.
+-	 */
+-	if (unlikely(IS_ERR(skb))) {
+-		offload->dev->stats.rx_dropped++;
+-		offload->dev->stats.rx_fifo_errors++;
+-
+-		return skb;
+-	}
+-
+-	/* Mailbox was read. */
+-	cb = can_rx_offload_get_cb(skb);
+-	cb->timestamp = timestamp;
+-
+-	return skb;
+-}
+-
+-int can_rx_offload_irq_offload_timestamp(struct can_rx_offload *offload,
+-					 u64 pending)
+-{
+-	struct sk_buff_head skb_queue;
+-	unsigned int i;
+-
+-	__skb_queue_head_init(&skb_queue);
+-
+-	for (i = offload->mb_first;
+-	     can_rx_offload_le(offload, i, offload->mb_last);
+-	     can_rx_offload_inc(offload, &i)) {
+-		struct sk_buff *skb;
+-
+-		if (!(pending & BIT_ULL(i)))
+-			continue;
+-
+-		skb = can_rx_offload_offload_one(offload, i);
+-		if (IS_ERR_OR_NULL(skb))
+-			continue;
+-
+-		__skb_queue_add_sort(&skb_queue, skb, can_rx_offload_compare);
+-	}
+-
+-	if (!skb_queue_empty(&skb_queue)) {
+-		unsigned long flags;
+-		u32 queue_len;
+-
+-		spin_lock_irqsave(&offload->skb_queue.lock, flags);
+-		skb_queue_splice_tail(&skb_queue, &offload->skb_queue);
+-		spin_unlock_irqrestore(&offload->skb_queue.lock, flags);
+-
+-		queue_len = skb_queue_len(&offload->skb_queue);
+-		if (queue_len > offload->skb_queue_len_max / 8)
+-			netdev_dbg(offload->dev, "%s: queue_len=%d\n",
+-				   __func__, queue_len);
+-
+-		can_rx_offload_schedule(offload);
+-	}
+-
+-	return skb_queue_len(&skb_queue);
+-}
+-EXPORT_SYMBOL_GPL(can_rx_offload_irq_offload_timestamp);
+-
+-int can_rx_offload_irq_offload_fifo(struct can_rx_offload *offload)
+-{
+-	struct sk_buff *skb;
+-	int received = 0;
+-
+-	while (1) {
+-		skb = can_rx_offload_offload_one(offload, 0);
+-		if (IS_ERR(skb))
+-			continue;
+-		if (!skb)
+-			break;
+-
+-		skb_queue_tail(&offload->skb_queue, skb);
+-		received++;
+-	}
+-
+-	if (received)
+-		can_rx_offload_schedule(offload);
+-
+-	return received;
+-}
+-EXPORT_SYMBOL_GPL(can_rx_offload_irq_offload_fifo);
+-
+-int can_rx_offload_queue_sorted(struct can_rx_offload *offload,
+-				struct sk_buff *skb, u32 timestamp)
+-{
+-	struct can_rx_offload_cb *cb;
+-	unsigned long flags;
+-
+-	if (skb_queue_len(&offload->skb_queue) >
+-	    offload->skb_queue_len_max) {
+-		dev_kfree_skb_any(skb);
+-		return -ENOBUFS;
+-	}
+-
+-	cb = can_rx_offload_get_cb(skb);
+-	cb->timestamp = timestamp;
+-
+-	spin_lock_irqsave(&offload->skb_queue.lock, flags);
+-	__skb_queue_add_sort(&offload->skb_queue, skb, can_rx_offload_compare);
+-	spin_unlock_irqrestore(&offload->skb_queue.lock, flags);
+-
+-	can_rx_offload_schedule(offload);
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(can_rx_offload_queue_sorted);
+-
+-unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload,
+-					 unsigned int idx, u32 timestamp)
+-{
+-	struct net_device *dev = offload->dev;
+-	struct net_device_stats *stats = &dev->stats;
+-	struct sk_buff *skb;
+-	u8 len;
+-	int err;
+-
+-	skb = __can_get_echo_skb(dev, idx, &len);
+-	if (!skb)
+-		return 0;
+-
+-	err = can_rx_offload_queue_sorted(offload, skb, timestamp);
+-	if (err) {
+-		stats->rx_errors++;
+-		stats->tx_fifo_errors++;
+-	}
+-
+-	return len;
+-}
+-EXPORT_SYMBOL_GPL(can_rx_offload_get_echo_skb);
+-
+-int can_rx_offload_queue_tail(struct can_rx_offload *offload,
+-			      struct sk_buff *skb)
+-{
+-	if (skb_queue_len(&offload->skb_queue) >
+-	    offload->skb_queue_len_max) {
+-		dev_kfree_skb_any(skb);
+-		return -ENOBUFS;
+-	}
+-
+-	skb_queue_tail(&offload->skb_queue, skb);
+-	can_rx_offload_schedule(offload);
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(can_rx_offload_queue_tail);
+-
+-static int can_rx_offload_init_queue(struct net_device *dev,
+-				     struct can_rx_offload *offload,
+-				     unsigned int weight)
+-{
+-	offload->dev = dev;
+-
+-	/* Limit queue len to 4x the weight (rounted to next power of two) */
+-	offload->skb_queue_len_max = 2 << fls(weight);
+-	offload->skb_queue_len_max *= 4;
+-	skb_queue_head_init(&offload->skb_queue);
+-
+-	netif_napi_add(dev, &offload->napi, can_rx_offload_napi_poll, weight);
+-
+-	dev_dbg(dev->dev.parent, "%s: skb_queue_len_max=%d\n",
+-		__func__, offload->skb_queue_len_max);
+-
+-	return 0;
+-}
+-
+-int can_rx_offload_add_timestamp(struct net_device *dev,
+-				 struct can_rx_offload *offload)
+-{
+-	unsigned int weight;
+-
+-	if (offload->mb_first > BITS_PER_LONG_LONG ||
+-	    offload->mb_last > BITS_PER_LONG_LONG || !offload->mailbox_read)
+-		return -EINVAL;
+-
+-	if (offload->mb_first < offload->mb_last) {
+-		offload->inc = true;
+-		weight = offload->mb_last - offload->mb_first;
+-	} else {
+-		offload->inc = false;
+-		weight = offload->mb_first - offload->mb_last;
+-	}
+-
+-	return can_rx_offload_init_queue(dev, offload, weight);
+-}
+-EXPORT_SYMBOL_GPL(can_rx_offload_add_timestamp);
+-
+-int can_rx_offload_add_fifo(struct net_device *dev,
+-			    struct can_rx_offload *offload, unsigned int weight)
+-{
+-	if (!offload->mailbox_read)
+-		return -EINVAL;
+-
+-	return can_rx_offload_init_queue(dev, offload, weight);
+-}
+-EXPORT_SYMBOL_GPL(can_rx_offload_add_fifo);
+-
+-int can_rx_offload_add_manual(struct net_device *dev,
+-			      struct can_rx_offload *offload,
+-			      unsigned int weight)
+-{
+-	if (offload->mailbox_read)
+-		return -EINVAL;
+-
+-	return can_rx_offload_init_queue(dev, offload, weight);
+-}
+-EXPORT_SYMBOL_GPL(can_rx_offload_add_manual);
+-
+-void can_rx_offload_enable(struct can_rx_offload *offload)
+-{
+-	napi_enable(&offload->napi);
+-}
+-EXPORT_SYMBOL_GPL(can_rx_offload_enable);
+-
+-void can_rx_offload_del(struct can_rx_offload *offload)
+-{
+-	netif_napi_del(&offload->napi);
+-	skb_queue_purge(&offload->skb_queue);
+-}
+-EXPORT_SYMBOL_GPL(can_rx_offload_del);
+diff --git a/drivers/net/can/slcan.c b/drivers/net/can/slcan.c
+index b4a39f0449ba4..6471a71c2ee6d 100644
+--- a/drivers/net/can/slcan.c
++++ b/drivers/net/can/slcan.c
+@@ -516,6 +516,7 @@ static struct slcan *slc_alloc(void)
+ 	int i;
+ 	char name[IFNAMSIZ];
+ 	struct net_device *dev = NULL;
++	struct can_ml_priv *can_ml;
+ 	struct slcan       *sl;
+ 	int size;
+ 
+@@ -538,7 +539,8 @@ static struct slcan *slc_alloc(void)
+ 
+ 	dev->base_addr  = i;
+ 	sl = netdev_priv(dev);
+-	dev->ml_priv = (void *)sl + ALIGN(sizeof(*sl), NETDEV_ALIGN);
++	can_ml = (void *)sl + ALIGN(sizeof(*sl), NETDEV_ALIGN);
++	can_set_ml_priv(dev, can_ml);
+ 
+ 	/* Initialize channel control data */
+ 	sl->magic = SLCAN_MAGIC;
+diff --git a/drivers/net/can/vcan.c b/drivers/net/can/vcan.c
+index 39ca14b0585dc..067705e2850b3 100644
+--- a/drivers/net/can/vcan.c
++++ b/drivers/net/can/vcan.c
+@@ -153,7 +153,7 @@ static void vcan_setup(struct net_device *dev)
+ 	dev->addr_len		= 0;
+ 	dev->tx_queue_len	= 0;
+ 	dev->flags		= IFF_NOARP;
+-	dev->ml_priv		= netdev_priv(dev);
++	can_set_ml_priv(dev, netdev_priv(dev));
+ 
+ 	/* set flags according to driver capabilities */
+ 	if (echo)
+diff --git a/drivers/net/can/vxcan.c b/drivers/net/can/vxcan.c
+index b1baa4ac1d537..7000c6cd1e48b 100644
+--- a/drivers/net/can/vxcan.c
++++ b/drivers/net/can/vxcan.c
+@@ -141,6 +141,8 @@ static const struct net_device_ops vxcan_netdev_ops = {
+ 
+ static void vxcan_setup(struct net_device *dev)
+ {
++	struct can_ml_priv *can_ml;
++
+ 	dev->type		= ARPHRD_CAN;
+ 	dev->mtu		= CANFD_MTU;
+ 	dev->hard_header_len	= 0;
+@@ -149,7 +151,9 @@ static void vxcan_setup(struct net_device *dev)
+ 	dev->flags		= (IFF_NOARP|IFF_ECHO);
+ 	dev->netdev_ops		= &vxcan_netdev_ops;
+ 	dev->needs_free_netdev	= true;
+-	dev->ml_priv		= netdev_priv(dev) + ALIGN(sizeof(struct vxcan_priv), NETDEV_ALIGN);
++
++	can_ml = netdev_priv(dev) + ALIGN(sizeof(struct vxcan_priv), NETDEV_ALIGN);
++	can_set_ml_priv(dev, can_ml);
+ }
+ 
+ /* forward declaration for rtnl_create_link() */
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_main.c b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+index 8f70a3909929a..4af0cd9530de6 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_main.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+@@ -71,8 +71,10 @@ static int aq_ndev_open(struct net_device *ndev)
+ 		goto err_exit;
+ 
+ 	err = aq_nic_start(aq_nic);
+-	if (err < 0)
++	if (err < 0) {
++		aq_nic_stop(aq_nic);
+ 		goto err_exit;
++	}
+ 
+ err_exit:
+ 	if (err < 0)
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index d1f7b51cab620..f5333fc27e14f 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1153,7 +1153,7 @@ static void mvpp2_interrupts_unmask(void *arg)
+ 	u32 val;
+ 
+ 	/* If the thread isn't used, don't do anything */
+-	if (smp_processor_id() > port->priv->nthreads)
++	if (smp_processor_id() >= port->priv->nthreads)
+ 		return;
+ 
+ 	val = MVPP2_CAUSE_MISC_SUM_MASK |
+@@ -2287,7 +2287,7 @@ static void mvpp2_txq_sent_counter_clear(void *arg)
+ 	int queue;
+ 
+ 	/* If the thread isn't used, don't do anything */
+-	if (smp_processor_id() > port->priv->nthreads)
++	if (smp_processor_id() >= port->priv->nthreads)
+ 		return;
+ 
+ 	for (queue = 0; queue < port->ntxqs; queue++) {
+diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h
+index 8e0e9350c3831..42e5a8b8d3240 100644
+--- a/drivers/net/ipa/gsi_reg.h
++++ b/drivers/net/ipa/gsi_reg.h
+@@ -48,16 +48,6 @@
+ #define GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(ee) \
+ 			(0x0000c01c + 0x1000 * (ee))
+ 
+-#define GSI_INTER_EE_SRC_CH_IRQ_CLR_OFFSET \
+-			GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(GSI_EE_AP)
+-#define GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(ee) \
+-			(0x0000c028 + 0x1000 * (ee))
+-
+-#define GSI_INTER_EE_SRC_EV_CH_IRQ_CLR_OFFSET \
+-			GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(GSI_EE_AP)
+-#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(ee) \
+-			(0x0000c02c + 0x1000 * (ee))
+-
+ #define GSI_CH_C_CNTXT_0_OFFSET(ch) \
+ 		GSI_EE_N_CH_C_CNTXT_0_OFFSET((ch), GSI_EE_AP)
+ #define GSI_EE_N_CH_C_CNTXT_0_OFFSET(ch, ee) \
+diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
+index d92dd3f09b735..46d8b7336d8f2 100644
+--- a/drivers/net/ipa/ipa_cmd.c
++++ b/drivers/net/ipa/ipa_cmd.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ 
+ /* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+- * Copyright (C) 2019-2020 Linaro Ltd.
++ * Copyright (C) 2019-2021 Linaro Ltd.
+  */
+ 
+ #include <linux/types.h>
+@@ -244,11 +244,15 @@ static bool ipa_cmd_register_write_offset_valid(struct ipa *ipa,
+ 	if (ipa->version != IPA_VERSION_3_5_1)
+ 		bit_count += hweight32(REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK);
+ 	BUILD_BUG_ON(bit_count > 32);
+-	offset_max = ~0 >> (32 - bit_count);
++	offset_max = ~0U >> (32 - bit_count);
+ 
++	/* Make sure the offset can be represented by the field(s)
++	 * that holds it.  Also make sure the offset is not outside
++	 * the overall IPA memory range.
++	 */
+ 	if (offset > offset_max || ipa->mem_offset > offset_max - offset) {
+ 		dev_err(dev, "%s offset too large 0x%04x + 0x%04x > 0x%04x)\n",
+-				ipa->mem_offset + offset, offset_max);
++			name, ipa->mem_offset, offset, offset_max);
+ 		return false;
+ 	}
+ 
+@@ -261,12 +265,24 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa)
+ 	const char *name;
+ 	u32 offset;
+ 
+-	offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version);
+-	name = "filter/route hash flush";
+-	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
+-		return false;
++	/* If hashed tables are supported, ensure the hash flush register
++	 * offset will fit in a register write IPA immediate command.
++	 */
++	if (ipa->version != IPA_VERSION_4_2) {
++		offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version);
++		name = "filter/route hash flush";
++		if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
++			return false;
++	}
+ 
+-	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT);
++	/* Each endpoint can have a status endpoint associated with it,
++	 * and this is recorded in an endpoint register.  If the modem
++	 * crashes, we reset the status endpoint for all modem endpoints
++	 * using a register write IPA immediate command.  Make sure the
++	 * worst case (highest endpoint number) offset of that endpoint
++	 * fits in the register write command field(s) that must hold it.
++	 */
++	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT - 1);
+ 	name = "maximal endpoint status";
+ 	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
+ 		return false;
+diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c
+index e7972e88ffe0b..9bbecf4d159b4 100644
+--- a/drivers/net/netdevsim/dev.c
++++ b/drivers/net/netdevsim/dev.c
+@@ -1008,23 +1008,25 @@ static int nsim_dev_reload_create(struct nsim_dev *nsim_dev,
+ 	nsim_dev->fw_update_status = true;
+ 	nsim_dev->fw_update_overwrite_mask = 0;
+ 
+-	nsim_dev->fib_data = nsim_fib_create(devlink, extack);
+-	if (IS_ERR(nsim_dev->fib_data))
+-		return PTR_ERR(nsim_dev->fib_data);
+-
+ 	nsim_devlink_param_load_driverinit_values(devlink);
+ 
+ 	err = nsim_dev_dummy_region_init(nsim_dev, devlink);
+ 	if (err)
+-		goto err_fib_destroy;
++		return err;
+ 
+ 	err = nsim_dev_traps_init(devlink);
+ 	if (err)
+ 		goto err_dummy_region_exit;
+ 
++	nsim_dev->fib_data = nsim_fib_create(devlink, extack);
++	if (IS_ERR(nsim_dev->fib_data)) {
++		err = PTR_ERR(nsim_dev->fib_data);
++		goto err_traps_exit;
++	}
++
+ 	err = nsim_dev_health_init(nsim_dev, devlink);
+ 	if (err)
+-		goto err_traps_exit;
++		goto err_fib_destroy;
+ 
+ 	err = nsim_dev_port_add_all(nsim_dev, nsim_bus_dev->port_count);
+ 	if (err)
+@@ -1039,12 +1041,12 @@ static int nsim_dev_reload_create(struct nsim_dev *nsim_dev,
+ 
+ err_health_exit:
+ 	nsim_dev_health_exit(nsim_dev);
++err_fib_destroy:
++	nsim_fib_destroy(devlink, nsim_dev->fib_data);
+ err_traps_exit:
+ 	nsim_dev_traps_exit(devlink);
+ err_dummy_region_exit:
+ 	nsim_dev_dummy_region_exit(nsim_dev);
+-err_fib_destroy:
+-	nsim_fib_destroy(devlink, nsim_dev->fib_data);
+ 	return err;
+ }
+ 
+@@ -1076,15 +1078,9 @@ int nsim_dev_probe(struct nsim_bus_dev *nsim_bus_dev)
+ 	if (err)
+ 		goto err_devlink_free;
+ 
+-	nsim_dev->fib_data = nsim_fib_create(devlink, NULL);
+-	if (IS_ERR(nsim_dev->fib_data)) {
+-		err = PTR_ERR(nsim_dev->fib_data);
+-		goto err_resources_unregister;
+-	}
+-
+ 	err = devlink_register(devlink, &nsim_bus_dev->dev);
+ 	if (err)
+-		goto err_fib_destroy;
++		goto err_resources_unregister;
+ 
+ 	err = devlink_params_register(devlink, nsim_devlink_params,
+ 				      ARRAY_SIZE(nsim_devlink_params));
+@@ -1104,9 +1100,15 @@ int nsim_dev_probe(struct nsim_bus_dev *nsim_bus_dev)
+ 	if (err)
+ 		goto err_traps_exit;
+ 
++	nsim_dev->fib_data = nsim_fib_create(devlink, NULL);
++	if (IS_ERR(nsim_dev->fib_data)) {
++		err = PTR_ERR(nsim_dev->fib_data);
++		goto err_debugfs_exit;
++	}
++
+ 	err = nsim_dev_health_init(nsim_dev, devlink);
+ 	if (err)
+-		goto err_debugfs_exit;
++		goto err_fib_destroy;
+ 
+ 	err = nsim_bpf_dev_init(nsim_dev);
+ 	if (err)
+@@ -1124,6 +1126,8 @@ err_bpf_dev_exit:
+ 	nsim_bpf_dev_exit(nsim_dev);
+ err_health_exit:
+ 	nsim_dev_health_exit(nsim_dev);
++err_fib_destroy:
++	nsim_fib_destroy(devlink, nsim_dev->fib_data);
+ err_debugfs_exit:
+ 	nsim_dev_debugfs_exit(nsim_dev);
+ err_traps_exit:
+@@ -1135,8 +1139,6 @@ err_params_unregister:
+ 				  ARRAY_SIZE(nsim_devlink_params));
+ err_dl_unregister:
+ 	devlink_unregister(devlink);
+-err_fib_destroy:
+-	nsim_fib_destroy(devlink, nsim_dev->fib_data);
+ err_resources_unregister:
+ 	devlink_resources_unregister(devlink, NULL);
+ err_devlink_free:
+@@ -1153,10 +1155,10 @@ static void nsim_dev_reload_destroy(struct nsim_dev *nsim_dev)
+ 	debugfs_remove(nsim_dev->take_snapshot);
+ 	nsim_dev_port_del_all(nsim_dev);
+ 	nsim_dev_health_exit(nsim_dev);
++	nsim_fib_destroy(devlink, nsim_dev->fib_data);
+ 	nsim_dev_traps_exit(devlink);
+ 	nsim_dev_dummy_region_exit(nsim_dev);
+ 	mutex_destroy(&nsim_dev->port_list_lock);
+-	nsim_fib_destroy(devlink, nsim_dev->fib_data);
+ }
+ 
+ void nsim_dev_remove(struct nsim_bus_dev *nsim_bus_dev)
+diff --git a/drivers/net/wan/lmc/lmc_main.c b/drivers/net/wan/lmc/lmc_main.c
+index 36600b0a0ab06..1ee4c8a906320 100644
+--- a/drivers/net/wan/lmc/lmc_main.c
++++ b/drivers/net/wan/lmc/lmc_main.c
+@@ -901,6 +901,8 @@ static int lmc_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+         break;
+     default:
+ 	printk(KERN_WARNING "%s: LMC UNKNOWN CARD!\n", dev->name);
++	unregister_hdlc_device(dev);
++	return -EIO;
+         break;
+     }
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index e6135795719a1..e7072fc4f487a 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -576,13 +576,13 @@ static void ath10k_wmi_event_tdls_peer(struct ath10k *ar, struct sk_buff *skb)
+ 	case WMI_TDLS_TEARDOWN_REASON_TX:
+ 	case WMI_TDLS_TEARDOWN_REASON_RSSI:
+ 	case WMI_TDLS_TEARDOWN_REASON_PTR_TIMEOUT:
++		rcu_read_lock();
+ 		station = ieee80211_find_sta_by_ifaddr(ar->hw,
+ 						       ev->peer_macaddr.addr,
+ 						       NULL);
+ 		if (!station) {
+ 			ath10k_warn(ar, "did not find station from tdls peer event");
+-			kfree(tb);
+-			return;
++			goto exit;
+ 		}
+ 		arvif = ath10k_get_arvif(ar, __le32_to_cpu(ev->vdev_id));
+ 		ieee80211_tdls_oper_request(
+@@ -593,6 +593,9 @@ static void ath10k_wmi_event_tdls_peer(struct ath10k *ar, struct sk_buff *skb)
+ 					);
+ 		break;
+ 	}
++
++exit:
++	rcu_read_unlock();
+ 	kfree(tb);
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index ee0edd9185604..e9e6b0c4de220 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -6317,17 +6317,20 @@ static int __ath11k_mac_register(struct ath11k *ar)
+ 	ret = ath11k_regd_update(ar, true);
+ 	if (ret) {
+ 		ath11k_err(ar->ab, "ath11k regd update failed: %d\n", ret);
+-		goto err_free_if_combs;
++		goto err_unregister_hw;
+ 	}
+ 
+ 	ret = ath11k_debugfs_register(ar);
+ 	if (ret) {
+ 		ath11k_err(ar->ab, "debugfs registration failed: %d\n", ret);
+-		goto err_free_if_combs;
++		goto err_unregister_hw;
+ 	}
+ 
+ 	return 0;
+ 
++err_unregister_hw:
++	ieee80211_unregister_hw(ar->hw);
++
+ err_free_if_combs:
+ 	kfree(ar->hw->wiphy->iface_combinations[0].limits);
+ 	kfree(ar->hw->wiphy->iface_combinations);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 0ee421f30aa24..23e6422c2251b 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -5611,7 +5611,8 @@ static bool brcmf_is_linkup(struct brcmf_cfg80211_vif *vif,
+ 	return false;
+ }
+ 
+-static bool brcmf_is_linkdown(const struct brcmf_event_msg *e)
++static bool brcmf_is_linkdown(struct brcmf_cfg80211_vif *vif,
++			    const struct brcmf_event_msg *e)
+ {
+ 	u32 event = e->event_code;
+ 	u16 flags = e->flags;
+@@ -5620,6 +5621,8 @@ static bool brcmf_is_linkdown(const struct brcmf_event_msg *e)
+ 	    (event == BRCMF_E_DISASSOC_IND) ||
+ 	    ((event == BRCMF_E_LINK) && (!(flags & BRCMF_EVENT_MSG_LINK)))) {
+ 		brcmf_dbg(CONN, "Processing link down\n");
++		clear_bit(BRCMF_VIF_STATUS_EAP_SUCCESS, &vif->sme_state);
++		clear_bit(BRCMF_VIF_STATUS_ASSOC_SUCCESS, &vif->sme_state);
+ 		return true;
+ 	}
+ 	return false;
+@@ -6067,7 +6070,7 @@ brcmf_notify_connect_status(struct brcmf_if *ifp,
+ 		} else
+ 			brcmf_bss_connect_done(cfg, ndev, e, true);
+ 		brcmf_net_setcarrier(ifp, true);
+-	} else if (brcmf_is_linkdown(e)) {
++	} else if (brcmf_is_linkdown(ifp->vif, e)) {
+ 		brcmf_dbg(CONN, "Linkdown\n");
+ 		if (!brcmf_is_ibssmode(ifp->vif) &&
+ 		    test_bit(BRCMF_VIF_STATUS_CONNECTED,
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 1a222469b5b4e..bb990be7c870b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -2026,7 +2026,7 @@ static bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans,
+ 	int ret;
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 
+-	spin_lock_irqsave(&trans_pcie->reg_lock, *flags);
++	spin_lock_bh(&trans_pcie->reg_lock);
+ 
+ 	if (trans_pcie->cmd_hold_nic_awake)
+ 		goto out;
+@@ -2111,7 +2111,7 @@ static bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans,
+ 		}
+ 
+ err:
+-		spin_unlock_irqrestore(&trans_pcie->reg_lock, *flags);
++		spin_unlock_bh(&trans_pcie->reg_lock);
+ 		return false;
+ 	}
+ 
+@@ -2149,7 +2149,7 @@ static void iwl_trans_pcie_release_nic_access(struct iwl_trans *trans,
+ 	 * scheduled on different CPUs (after we drop reg_lock).
+ 	 */
+ out:
+-	spin_unlock_irqrestore(&trans_pcie->reg_lock, *flags);
++	spin_unlock_bh(&trans_pcie->reg_lock);
+ }
+ 
+ static int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr,
+@@ -2403,11 +2403,10 @@ static void iwl_trans_pcie_set_bits_mask(struct iwl_trans *trans, u32 reg,
+ 					 u32 mask, u32 value)
+ {
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&trans_pcie->reg_lock, flags);
++	spin_lock_bh(&trans_pcie->reg_lock);
+ 	__iwl_trans_pcie_set_bits_mask(trans, reg, mask, value);
+-	spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
++	spin_unlock_bh(&trans_pcie->reg_lock);
+ }
+ 
+ static const char *get_csr_string(int cmd)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+index baa83a0b85934..8c7138247869a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+@@ -78,7 +78,6 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
+ 	struct iwl_txq *txq = trans->txqs.txq[trans->txqs.cmd.q_id];
+ 	struct iwl_device_cmd *out_cmd;
+ 	struct iwl_cmd_meta *out_meta;
+-	unsigned long flags;
+ 	void *dup_buf = NULL;
+ 	dma_addr_t phys_addr;
+ 	int i, cmd_pos, idx;
+@@ -291,11 +290,11 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
+ 	if (txq->read_ptr == txq->write_ptr && txq->wd_timeout)
+ 		mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout);
+ 
+-	spin_lock_irqsave(&trans_pcie->reg_lock, flags);
++	spin_lock(&trans_pcie->reg_lock);
+ 	/* Increment and update queue's write index */
+ 	txq->write_ptr = iwl_txq_inc_wrap(trans, txq->write_ptr);
+ 	iwl_txq_inc_wr_ptr(trans, txq);
+-	spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
++	spin_unlock(&trans_pcie->reg_lock);
+ 
+ out:
+ 	spin_unlock_bh(&txq->lock);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index ed54d04e43964..50133c09a7805 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -321,12 +321,10 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
+ 		txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);
+ 
+ 		if (txq->read_ptr == txq->write_ptr) {
+-			unsigned long flags;
+-
+-			spin_lock_irqsave(&trans_pcie->reg_lock, flags);
++			spin_lock(&trans_pcie->reg_lock);
+ 			if (txq_id == trans->txqs.cmd.q_id)
+ 				iwl_pcie_clear_cmd_in_flight(trans);
+-			spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
++			spin_unlock(&trans_pcie->reg_lock);
+ 		}
+ 	}
+ 
+@@ -931,7 +929,6 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
+ {
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 	struct iwl_txq *txq = trans->txqs.txq[txq_id];
+-	unsigned long flags;
+ 	int nfreed = 0;
+ 	u16 r;
+ 
+@@ -962,9 +959,10 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
+ 	}
+ 
+ 	if (txq->read_ptr == txq->write_ptr) {
+-		spin_lock_irqsave(&trans_pcie->reg_lock, flags);
++		/* BHs are also disabled due to txq->lock */
++		spin_lock(&trans_pcie->reg_lock);
+ 		iwl_pcie_clear_cmd_in_flight(trans);
+-		spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
++		spin_unlock(&trans_pcie->reg_lock);
+ 	}
+ 
+ 	iwl_pcie_txq_progress(txq);
+@@ -1173,7 +1171,6 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
+ 	struct iwl_txq *txq = trans->txqs.txq[trans->txqs.cmd.q_id];
+ 	struct iwl_device_cmd *out_cmd;
+ 	struct iwl_cmd_meta *out_meta;
+-	unsigned long flags;
+ 	void *dup_buf = NULL;
+ 	dma_addr_t phys_addr;
+ 	int idx;
+@@ -1416,20 +1413,19 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
+ 	if (txq->read_ptr == txq->write_ptr && txq->wd_timeout)
+ 		mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout);
+ 
+-	spin_lock_irqsave(&trans_pcie->reg_lock, flags);
++	spin_lock(&trans_pcie->reg_lock);
+ 	ret = iwl_pcie_set_cmd_in_flight(trans, cmd);
+ 	if (ret < 0) {
+ 		idx = ret;
+-		spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
+-		goto out;
++		goto unlock_reg;
+ 	}
+ 
+ 	/* Increment and update queue's write index */
+ 	txq->write_ptr = iwl_txq_inc_wrap(trans, txq->write_ptr);
+ 	iwl_pcie_txq_inc_wr_ptr(trans, txq);
+ 
+-	spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
+-
++ unlock_reg:
++	spin_unlock(&trans_pcie->reg_lock);
+  out:
+ 	spin_unlock_bh(&txq->lock);
+  free_dup_buf:
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.c b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+index da2e7415be8fe..f9615f76f1734 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+@@ -720,8 +720,8 @@ static void rtw8821c_coex_cfg_ant_switch(struct rtw_dev *rtwdev, u8 ctrl_type,
+ 			regval = (!polarity_inverse ? 0x1 : 0x2);
+ 		}
+ 
+-		rtw_write8_mask(rtwdev, REG_RFE_CTRL8, BIT_MASK_R_RFE_SEL_15,
+-				regval);
++		rtw_write32_mask(rtwdev, REG_RFE_CTRL8, BIT_MASK_R_RFE_SEL_15,
++				 regval);
+ 		break;
+ 	case COEX_SWITCH_CTRL_BY_PTA:
+ 		rtw_write32_clr(rtwdev, REG_LED_CFG, BIT_DPDT_SEL_EN);
+@@ -731,8 +731,8 @@ static void rtw8821c_coex_cfg_ant_switch(struct rtw_dev *rtwdev, u8 ctrl_type,
+ 				PTA_CTRL_PIN);
+ 
+ 		regval = (!polarity_inverse ? 0x2 : 0x1);
+-		rtw_write8_mask(rtwdev, REG_RFE_CTRL8, BIT_MASK_R_RFE_SEL_15,
+-				regval);
++		rtw_write32_mask(rtwdev, REG_RFE_CTRL8, BIT_MASK_R_RFE_SEL_15,
++				 regval);
+ 		break;
+ 	case COEX_SWITCH_CTRL_BY_ANTDIV:
+ 		rtw_write32_clr(rtwdev, REG_LED_CFG, BIT_DPDT_SEL_EN);
+@@ -758,11 +758,11 @@ static void rtw8821c_coex_cfg_ant_switch(struct rtw_dev *rtwdev, u8 ctrl_type,
+ 	}
+ 
+ 	if (ctrl_type == COEX_SWITCH_CTRL_BY_BT) {
+-		rtw_write32_clr(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE1);
+-		rtw_write32_clr(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE2);
++		rtw_write8_clr(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE1);
++		rtw_write8_clr(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE2);
+ 	} else {
+-		rtw_write32_set(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE1);
+-		rtw_write32_set(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE2);
++		rtw_write8_set(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE1);
++		rtw_write8_set(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE2);
+ 	}
+ }
+ 
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 8b0485ada315b..d658c6e8263af 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1098,11 +1098,11 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
+ 		cmd->rbytes_done += ret;
+ 	}
+ 
++	nvmet_tcp_unmap_pdu_iovec(cmd);
+ 	if (queue->data_digest) {
+ 		nvmet_tcp_prep_recv_ddgst(cmd);
+ 		return 0;
+ 	}
+-	nvmet_tcp_unmap_pdu_iovec(cmd);
+ 
+ 	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+ 	    cmd->rbytes_done == cmd->req.transfer_len) {
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index aa1a1c850d057..53a0badc6b035 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -3727,12 +3727,15 @@ static int __maybe_unused rockchip_pinctrl_suspend(struct device *dev)
+ static int __maybe_unused rockchip_pinctrl_resume(struct device *dev)
+ {
+ 	struct rockchip_pinctrl *info = dev_get_drvdata(dev);
+-	int ret = regmap_write(info->regmap_base, RK3288_GRF_GPIO6C_IOMUX,
+-			       rk3288_grf_gpio6c_iomux |
+-			       GPIO6C6_SEL_WRITE_ENABLE);
++	int ret;
+ 
+-	if (ret)
+-		return ret;
++	if (info->ctrl->type == RK3288) {
++		ret = regmap_write(info->regmap_base, RK3288_GRF_GPIO6C_IOMUX,
++				   rk3288_grf_gpio6c_iomux |
++				   GPIO6C6_SEL_WRITE_ENABLE);
++		if (ret)
++			return ret;
++	}
+ 
+ 	return pinctrl_force_default(info->pctl_dev);
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_target.h b/drivers/scsi/qla2xxx/qla_target.h
+index 1cff7c69d4483..1e94586c7eb21 100644
+--- a/drivers/scsi/qla2xxx/qla_target.h
++++ b/drivers/scsi/qla2xxx/qla_target.h
+@@ -116,7 +116,6 @@
+ 	(min(1270, ((ql) > 0) ? (QLA_TGT_DATASEGS_PER_CMD_24XX + \
+ 		QLA_TGT_DATASEGS_PER_CONT_24XX*((ql) - 1)) : 0))
+ #endif
+-#endif
+ 
+ #define GET_TARGET_ID(ha, iocb) ((HAS_EXTENDED_IDS(ha))			\
+ 			 ? le16_to_cpu((iocb)->u.isp2x.target.extended)	\
+@@ -244,6 +243,7 @@ struct ctio_to_2xxx {
+ #ifndef CTIO_RET_TYPE
+ #define CTIO_RET_TYPE	0x17		/* CTIO return entry */
+ #define ATIO_TYPE7 0x06 /* Accept target I/O entry for 24xx */
++#endif
+ 
+ struct fcp_hdr {
+ 	uint8_t  r_ctl;
+diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
+index e2e5356a997de..19bc8c923fce5 100644
+--- a/drivers/scsi/st.c
++++ b/drivers/scsi/st.c
+@@ -1269,8 +1269,8 @@ static int st_open(struct inode *inode, struct file *filp)
+ 	spin_lock(&st_use_lock);
+ 	if (STp->in_use) {
+ 		spin_unlock(&st_use_lock);
+-		scsi_tape_put(STp);
+ 		DEBC_printk(STp, "Device already in use.\n");
++		scsi_tape_put(STp);
+ 		return (-EBUSY);
+ 	}
+ 
+diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
+index 751a49f6534f4..be76fddbf524b 100644
+--- a/drivers/soc/qcom/qcom-geni-se.c
++++ b/drivers/soc/qcom/qcom-geni-se.c
+@@ -3,7 +3,6 @@
+ 
+ #include <linux/acpi.h>
+ #include <linux/clk.h>
+-#include <linux/console.h>
+ #include <linux/slab.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/io.h>
+@@ -91,14 +90,11 @@ struct geni_wrapper {
+ 	struct device *dev;
+ 	void __iomem *base;
+ 	struct clk_bulk_data ahb_clks[NUM_AHB_CLKS];
+-	struct geni_icc_path to_core;
+ };
+ 
+ static const char * const icc_path_names[] = {"qup-core", "qup-config",
+ 						"qup-memory"};
+ 
+-static struct geni_wrapper *earlycon_wrapper;
+-
+ #define QUP_HW_VER_REG			0x4
+ 
+ /* Common SE registers */
+@@ -828,44 +824,11 @@ int geni_icc_disable(struct geni_se *se)
+ }
+ EXPORT_SYMBOL(geni_icc_disable);
+ 
+-void geni_remove_earlycon_icc_vote(void)
+-{
+-	struct platform_device *pdev;
+-	struct geni_wrapper *wrapper;
+-	struct device_node *parent;
+-	struct device_node *child;
+-
+-	if (!earlycon_wrapper)
+-		return;
+-
+-	wrapper = earlycon_wrapper;
+-	parent = of_get_next_parent(wrapper->dev->of_node);
+-	for_each_child_of_node(parent, child) {
+-		if (!of_device_is_compatible(child, "qcom,geni-se-qup"))
+-			continue;
+-
+-		pdev = of_find_device_by_node(child);
+-		if (!pdev)
+-			continue;
+-
+-		wrapper = platform_get_drvdata(pdev);
+-		icc_put(wrapper->to_core.path);
+-		wrapper->to_core.path = NULL;
+-
+-	}
+-	of_node_put(parent);
+-
+-	earlycon_wrapper = NULL;
+-}
+-EXPORT_SYMBOL(geni_remove_earlycon_icc_vote);
+-
+ static int geni_se_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct resource *res;
+ 	struct geni_wrapper *wrapper;
+-	struct console __maybe_unused *bcon;
+-	bool __maybe_unused has_earlycon = false;
+ 	int ret;
+ 
+ 	wrapper = devm_kzalloc(dev, sizeof(*wrapper), GFP_KERNEL);
+@@ -888,43 +851,6 @@ static int geni_se_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-#ifdef CONFIG_SERIAL_EARLYCON
+-	for_each_console(bcon) {
+-		if (!strcmp(bcon->name, "qcom_geni")) {
+-			has_earlycon = true;
+-			break;
+-		}
+-	}
+-	if (!has_earlycon)
+-		goto exit;
+-
+-	wrapper->to_core.path = devm_of_icc_get(dev, "qup-core");
+-	if (IS_ERR(wrapper->to_core.path))
+-		return PTR_ERR(wrapper->to_core.path);
+-	/*
+-	 * Put minmal BW request on core clocks on behalf of early console.
+-	 * The vote will be removed earlycon exit function.
+-	 *
+-	 * Note: We are putting vote on each QUP wrapper instead only to which
+-	 * earlycon is connected because QUP core clock of different wrapper
+-	 * share same voltage domain. If core1 is put to 0, then core2 will
+-	 * also run at 0, if not voted. Default ICC vote will be removed ASA
+-	 * we touch any of the core clock.
+-	 * core1 = core2 = max(core1, core2)
+-	 */
+-	ret = icc_set_bw(wrapper->to_core.path, GENI_DEFAULT_BW,
+-				GENI_DEFAULT_BW);
+-	if (ret) {
+-		dev_err(&pdev->dev, "%s: ICC BW voting failed for core: %d\n",
+-			__func__, ret);
+-		return ret;
+-	}
+-
+-	if (of_get_compatible_child(pdev->dev.of_node, "qcom,geni-debug-uart"))
+-		earlycon_wrapper = wrapper;
+-	of_node_put(pdev->dev.of_node);
+-exit:
+-#endif
+ 	dev_set_drvdata(dev, wrapper);
+ 	dev_dbg(dev, "GENI SE Driver probed\n");
+ 	return devm_of_platform_populate(dev);
+diff --git a/drivers/staging/comedi/drivers/cb_pcidas.c b/drivers/staging/comedi/drivers/cb_pcidas.c
+index d740c47827751..2f20bd56ec6ca 100644
+--- a/drivers/staging/comedi/drivers/cb_pcidas.c
++++ b/drivers/staging/comedi/drivers/cb_pcidas.c
+@@ -1281,7 +1281,7 @@ static int cb_pcidas_auto_attach(struct comedi_device *dev,
+ 	     devpriv->amcc + AMCC_OP_REG_INTCSR);
+ 
+ 	ret = request_irq(pcidev->irq, cb_pcidas_interrupt, IRQF_SHARED,
+-			  dev->board_name, dev);
++			  "cb_pcidas", dev);
+ 	if (ret) {
+ 		dev_dbg(dev->class_dev, "unable to allocate irq %d\n",
+ 			pcidev->irq);
+diff --git a/drivers/staging/comedi/drivers/cb_pcidas64.c b/drivers/staging/comedi/drivers/cb_pcidas64.c
+index fa987bb0e7cd4..6d3ba399a7f0b 100644
+--- a/drivers/staging/comedi/drivers/cb_pcidas64.c
++++ b/drivers/staging/comedi/drivers/cb_pcidas64.c
+@@ -4035,7 +4035,7 @@ static int auto_attach(struct comedi_device *dev,
+ 	init_stc_registers(dev);
+ 
+ 	retval = request_irq(pcidev->irq, handle_interrupt, IRQF_SHARED,
+-			     dev->board_name, dev);
++			     "cb_pcidas64", dev);
+ 	if (retval) {
+ 		dev_dbg(dev->class_dev, "unable to allocate irq %u\n",
+ 			pcidev->irq);
+diff --git a/drivers/staging/rtl8192e/rtllib.h b/drivers/staging/rtl8192e/rtllib.h
+index b84f00b8d18bc..4cabaf21c1ca0 100644
+--- a/drivers/staging/rtl8192e/rtllib.h
++++ b/drivers/staging/rtl8192e/rtllib.h
+@@ -1105,7 +1105,7 @@ struct rtllib_network {
+ 	bool	bWithAironetIE;
+ 	bool	bCkipSupported;
+ 	bool	bCcxRmEnable;
+-	u16	CcxRmState[2];
++	u8	CcxRmState[2];
+ 	bool	bMBssidValid;
+ 	u8	MBssidMask;
+ 	u8	MBssid[ETH_ALEN];
+diff --git a/drivers/staging/rtl8192e/rtllib_rx.c b/drivers/staging/rtl8192e/rtllib_rx.c
+index d31b5e1c8df47..63752233e551f 100644
+--- a/drivers/staging/rtl8192e/rtllib_rx.c
++++ b/drivers/staging/rtl8192e/rtllib_rx.c
+@@ -1968,7 +1968,7 @@ static void rtllib_parse_mife_generic(struct rtllib_device *ieee,
+ 	    info_element->data[2] == 0x96 &&
+ 	    info_element->data[3] == 0x01) {
+ 		if (info_element->len == 6) {
+-			memcpy(network->CcxRmState, &info_element[4], 2);
++			memcpy(network->CcxRmState, &info_element->data[4], 2);
+ 			if (network->CcxRmState[0] != 0)
+ 				network->bCcxRmEnable = true;
+ 			else
+diff --git a/drivers/thermal/thermal_sysfs.c b/drivers/thermal/thermal_sysfs.c
+index a6f371fc9af27..f52708f310e03 100644
+--- a/drivers/thermal/thermal_sysfs.c
++++ b/drivers/thermal/thermal_sysfs.c
+@@ -754,6 +754,9 @@ void thermal_cooling_device_stats_update(struct thermal_cooling_device *cdev,
+ {
+ 	struct cooling_dev_stats *stats = cdev->stats;
+ 
++	if (!stats)
++		return;
++
+ 	spin_lock(&stats->lock);
+ 
+ 	if (stats->state == new_state)
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 291649f028213..0d85b55ea8233 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -1177,12 +1177,6 @@ static inline void qcom_geni_serial_enable_early_read(struct geni_se *se,
+ 						      struct console *con) { }
+ #endif
+ 
+-static int qcom_geni_serial_earlycon_exit(struct console *con)
+-{
+-	geni_remove_earlycon_icc_vote();
+-	return 0;
+-}
+-
+ static struct qcom_geni_private_data earlycon_private_data;
+ 
+ static int __init qcom_geni_serial_earlycon_setup(struct earlycon_device *dev,
+@@ -1233,7 +1227,6 @@ static int __init qcom_geni_serial_earlycon_setup(struct earlycon_device *dev,
+ 	writel(stop_bit_len, uport->membase + SE_UART_TX_STOP_BIT_LEN);
+ 
+ 	dev->con->write = qcom_geni_serial_earlycon_write;
+-	dev->con->exit = qcom_geni_serial_earlycon_exit;
+ 	dev->con->setup = NULL;
+ 	qcom_geni_serial_enable_early_read(&se, dev->con);
+ 
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 2f4e5174e78c8..e79359326411a 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -147,17 +147,29 @@ static inline int acm_set_control(struct acm *acm, int control)
+ #define acm_send_break(acm, ms) \
+ 	acm_ctrl_msg(acm, USB_CDC_REQ_SEND_BREAK, ms, NULL, 0)
+ 
+-static void acm_kill_urbs(struct acm *acm)
++static void acm_poison_urbs(struct acm *acm)
+ {
+ 	int i;
+ 
+-	usb_kill_urb(acm->ctrlurb);
++	usb_poison_urb(acm->ctrlurb);
+ 	for (i = 0; i < ACM_NW; i++)
+-		usb_kill_urb(acm->wb[i].urb);
++		usb_poison_urb(acm->wb[i].urb);
+ 	for (i = 0; i < acm->rx_buflimit; i++)
+-		usb_kill_urb(acm->read_urbs[i]);
++		usb_poison_urb(acm->read_urbs[i]);
+ }
+ 
++static void acm_unpoison_urbs(struct acm *acm)
++{
++	int i;
++
++	for (i = 0; i < acm->rx_buflimit; i++)
++		usb_unpoison_urb(acm->read_urbs[i]);
++	for (i = 0; i < ACM_NW; i++)
++		usb_unpoison_urb(acm->wb[i].urb);
++	usb_unpoison_urb(acm->ctrlurb);
++}
++
++
+ /*
+  * Write buffer management.
+  * All of these assume proper locks taken by the caller.
+@@ -226,9 +238,10 @@ static int acm_start_wb(struct acm *acm, struct acm_wb *wb)
+ 
+ 	rc = usb_submit_urb(wb->urb, GFP_ATOMIC);
+ 	if (rc < 0) {
+-		dev_err(&acm->data->dev,
+-			"%s - usb_submit_urb(write bulk) failed: %d\n",
+-			__func__, rc);
++		if (rc != -EPERM)
++			dev_err(&acm->data->dev,
++				"%s - usb_submit_urb(write bulk) failed: %d\n",
++				__func__, rc);
+ 		acm_write_done(acm, wb);
+ 	}
+ 	return rc;
+@@ -313,8 +326,10 @@ static void acm_process_notification(struct acm *acm, unsigned char *buf)
+ 			acm->iocount.dsr++;
+ 		if (difference & ACM_CTRL_DCD)
+ 			acm->iocount.dcd++;
+-		if (newctrl & ACM_CTRL_BRK)
++		if (newctrl & ACM_CTRL_BRK) {
+ 			acm->iocount.brk++;
++			tty_insert_flip_char(&acm->port, 0, TTY_BREAK);
++		}
+ 		if (newctrl & ACM_CTRL_RI)
+ 			acm->iocount.rng++;
+ 		if (newctrl & ACM_CTRL_FRAMING)
+@@ -480,11 +495,6 @@ static void acm_read_bulk_callback(struct urb *urb)
+ 	dev_vdbg(&acm->data->dev, "got urb %d, len %d, status %d\n",
+ 		rb->index, urb->actual_length, status);
+ 
+-	if (!acm->dev) {
+-		dev_dbg(&acm->data->dev, "%s - disconnected\n", __func__);
+-		return;
+-	}
+-
+ 	switch (status) {
+ 	case 0:
+ 		usb_mark_last_busy(acm->dev);
+@@ -649,7 +659,8 @@ static void acm_port_dtr_rts(struct tty_port *port, int raise)
+ 
+ 	res = acm_set_control(acm, val);
+ 	if (res && (acm->ctrl_caps & USB_CDC_CAP_LINE))
+-		dev_err(&acm->control->dev, "failed to set dtr/rts\n");
++		/* This is broken in too many devices to spam the logs */
++		dev_dbg(&acm->control->dev, "failed to set dtr/rts\n");
+ }
+ 
+ static int acm_port_activate(struct tty_port *port, struct tty_struct *tty)
+@@ -731,6 +742,7 @@ static void acm_port_shutdown(struct tty_port *port)
+ 	 * Need to grab write_lock to prevent race with resume, but no need to
+ 	 * hold it due to the tty-port initialised flag.
+ 	 */
++	acm_poison_urbs(acm);
+ 	spin_lock_irq(&acm->write_lock);
+ 	spin_unlock_irq(&acm->write_lock);
+ 
+@@ -747,7 +759,8 @@ static void acm_port_shutdown(struct tty_port *port)
+ 		usb_autopm_put_interface_async(acm->control);
+ 	}
+ 
+-	acm_kill_urbs(acm);
++	acm_unpoison_urbs(acm);
++
+ }
+ 
+ static void acm_tty_cleanup(struct tty_struct *tty)
+@@ -1503,12 +1516,16 @@ skip_countries:
+ 
+ 	return 0;
+ alloc_fail6:
++	if (!acm->combined_interfaces) {
++		/* Clear driver data so that disconnect() returns early. */
++		usb_set_intfdata(data_interface, NULL);
++		usb_driver_release_interface(&acm_driver, data_interface);
++	}
+ 	if (acm->country_codes) {
+ 		device_remove_file(&acm->control->dev,
+ 				&dev_attr_wCountryCodes);
+ 		device_remove_file(&acm->control->dev,
+ 				&dev_attr_iCountryCodeRelDate);
+-		kfree(acm->country_codes);
+ 	}
+ 	device_remove_file(&acm->control->dev, &dev_attr_bmCapabilities);
+ alloc_fail5:
+@@ -1540,8 +1557,14 @@ static void acm_disconnect(struct usb_interface *intf)
+ 	if (!acm)
+ 		return;
+ 
+-	mutex_lock(&acm->mutex);
+ 	acm->disconnected = true;
++	/*
++	 * there is a circular dependency. acm_softint() can resubmit
++	 * the URBs in error handling so we need to block any
++	 * submission right away
++	 */
++	acm_poison_urbs(acm);
++	mutex_lock(&acm->mutex);
+ 	if (acm->country_codes) {
+ 		device_remove_file(&acm->control->dev,
+ 				&dev_attr_wCountryCodes);
+@@ -1560,7 +1583,6 @@ static void acm_disconnect(struct usb_interface *intf)
+ 		tty_kref_put(tty);
+ 	}
+ 
+-	acm_kill_urbs(acm);
+ 	cancel_delayed_work_sync(&acm->dwork);
+ 
+ 	tty_unregister_device(acm_tty_driver, acm->minor);
+@@ -1602,7 +1624,7 @@ static int acm_suspend(struct usb_interface *intf, pm_message_t message)
+ 	if (cnt)
+ 		return 0;
+ 
+-	acm_kill_urbs(acm);
++	acm_poison_urbs(acm);
+ 	cancel_delayed_work_sync(&acm->dwork);
+ 	acm->urbs_in_error_delay = 0;
+ 
+@@ -1615,6 +1637,7 @@ static int acm_resume(struct usb_interface *intf)
+ 	struct urb *urb;
+ 	int rv = 0;
+ 
++	acm_unpoison_urbs(acm);
+ 	spin_lock_irq(&acm->write_lock);
+ 
+ 	if (--acm->susp_count)
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 6ade3daf78584..76ac5d6555ae4 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -498,6 +498,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* DJI CineSSD */
+ 	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+ 
++	/* Fibocom L850-GL LTE Modem */
++	{ USB_DEVICE(0x2cb7, 0x0007), .driver_info =
++			USB_QUIRK_IGNORE_REMOTE_WAKEUP },
++
+ 	/* INTEL VALUE SSD */
+ 	{ USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index fc3269f5faf19..1a9789ec5847f 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -4322,7 +4322,8 @@ static int _dwc2_hcd_suspend(struct usb_hcd *hcd)
+ 	if (hsotg->op_state == OTG_STATE_B_PERIPHERAL)
+ 		goto unlock;
+ 
+-	if (hsotg->params.power_down > DWC2_POWER_DOWN_PARAM_PARTIAL)
++	if (hsotg->params.power_down != DWC2_POWER_DOWN_PARAM_PARTIAL ||
++	    hsotg->flags.b.port_connect_status == 0)
+ 		goto skip_power_saving;
+ 
+ 	/*
+@@ -5398,7 +5399,7 @@ int dwc2_host_enter_hibernation(struct dwc2_hsotg *hsotg)
+ 	dwc2_writel(hsotg, hprt0, HPRT0);
+ 
+ 	/* Wait for the HPRT0.PrtSusp register field to be set */
+-	if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 3000))
++	if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 5000))
+ 		dev_warn(hsotg->dev, "Suspend wasn't generated\n");
+ 
+ 	/*
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index bae6a70664c80..598daed8086f6 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -118,6 +118,8 @@ static const struct property_entry dwc3_pci_intel_properties[] = {
+ static const struct property_entry dwc3_pci_mrfld_properties[] = {
+ 	PROPERTY_ENTRY_STRING("dr_mode", "otg"),
+ 	PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"),
++	PROPERTY_ENTRY_BOOL("snps,dis_u3_susphy_quirk"),
++	PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"),
+ 	PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"),
+ 	{}
+ };
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index c00c4fa139b88..8bd077fb1190f 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -244,6 +244,9 @@ static int dwc3_qcom_interconnect_init(struct dwc3_qcom *qcom)
+ 	struct device *dev = qcom->dev;
+ 	int ret;
+ 
++	if (has_acpi_companion(dev))
++		return 0;
++
+ 	qcom->icc_path_ddr = of_icc_get(dev, "usb-ddr");
+ 	if (IS_ERR(qcom->icc_path_ddr)) {
+ 		dev_err(dev, "failed to get usb-ddr path: %ld\n",
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 2a86ad4b12b34..65ff41e3a18eb 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -791,10 +791,6 @@ static int __dwc3_gadget_ep_disable(struct dwc3_ep *dep)
+ 	reg &= ~DWC3_DALEPENA_EP(dep->number);
+ 	dwc3_writel(dwc->regs, DWC3_DALEPENA, reg);
+ 
+-	dep->stream_capable = false;
+-	dep->type = 0;
+-	dep->flags = 0;
+-
+ 	/* Clear out the ep descriptors for non-ep0 */
+ 	if (dep->number > 1) {
+ 		dep->endpoint.comp_desc = NULL;
+@@ -803,6 +799,10 @@ static int __dwc3_gadget_ep_disable(struct dwc3_ep *dep)
+ 
+ 	dwc3_remove_requests(dwc, dep);
+ 
++	dep->stream_capable = false;
++	dep->type = 0;
++	dep->flags = 0;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/gadget/udc/amd5536udc_pci.c b/drivers/usb/gadget/udc/amd5536udc_pci.c
+index 8d387e0e4d91f..c80f9bd51b750 100644
+--- a/drivers/usb/gadget/udc/amd5536udc_pci.c
++++ b/drivers/usb/gadget/udc/amd5536udc_pci.c
+@@ -153,6 +153,11 @@ static int udc_pci_probe(
+ 	pci_set_master(pdev);
+ 	pci_try_set_mwi(pdev);
+ 
++	dev->phys_addr = resource;
++	dev->irq = pdev->irq;
++	dev->pdev = pdev;
++	dev->dev = &pdev->dev;
++
+ 	/* init dma pools */
+ 	if (use_dma) {
+ 		retval = init_dma_pools(dev);
+@@ -160,11 +165,6 @@ static int udc_pci_probe(
+ 			goto err_dma;
+ 	}
+ 
+-	dev->phys_addr = resource;
+-	dev->irq = pdev->irq;
+-	dev->pdev = pdev;
+-	dev->dev = &pdev->dev;
+-
+ 	/* general probing */
+ 	if (udc_probe(dev)) {
+ 		retval = -ENODEV;
+diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
+index fe010cc61f19b..2f27dc0d9c6bd 100644
+--- a/drivers/usb/host/xhci-mtk.c
++++ b/drivers/usb/host/xhci-mtk.c
+@@ -397,6 +397,13 @@ static void xhci_mtk_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	xhci->quirks |= XHCI_SPURIOUS_SUCCESS;
+ 	if (mtk->lpm_support)
+ 		xhci->quirks |= XHCI_LPM_SUPPORT;
++
++	/*
++	 * MTK xHCI 0.96: PSA is 1 by default even if doesn't support stream,
++	 * and it's 3 when support it.
++	 */
++	if (xhci->hci_version < 0x100 && HCC_MAX_PSA(xhci->hcc_params) == 4)
++		xhci->quirks |= XHCI_BROKEN_STREAMS;
+ }
+ 
+ /* called during probe() after chip reset completes */
+@@ -548,7 +555,8 @@ static int xhci_mtk_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto put_usb3_hcd;
+ 
+-	if (HCC_MAX_PSA(xhci->hcc_params) >= 4)
++	if (HCC_MAX_PSA(xhci->hcc_params) >= 4 &&
++	    !(xhci->quirks & XHCI_BROKEN_STREAMS))
+ 		xhci->shared_hcd->can_do_streams = 1;
+ 
+ 	ret = usb_add_hcd(xhci->shared_hcd, irq, IRQF_SHARED);
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index 1cd87729ba604..fc0457db62e1a 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -2004,10 +2004,14 @@ static void musb_pm_runtime_check_session(struct musb *musb)
+ 		MUSB_DEVCTL_HR;
+ 	switch (devctl & ~s) {
+ 	case MUSB_QUIRK_B_DISCONNECT_99:
+-		musb_dbg(musb, "Poll devctl in case of suspend after disconnect\n");
+-		schedule_delayed_work(&musb->irq_work,
+-				      msecs_to_jiffies(1000));
+-		break;
++		if (musb->quirk_retries && !musb->flush_irq_work) {
++			musb_dbg(musb, "Poll devctl in case of suspend after disconnect\n");
++			schedule_delayed_work(&musb->irq_work,
++					      msecs_to_jiffies(1000));
++			musb->quirk_retries--;
++			break;
++		}
++		fallthrough;
+ 	case MUSB_QUIRK_B_INVALID_VBUS_91:
+ 		if (musb->quirk_retries && !musb->flush_irq_work) {
+ 			musb_dbg(musb,
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index 3209b5ddd30c9..a20a8380ca0c9 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -594,6 +594,8 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
+ 			}
++			if (wValue >= 32)
++				goto error;
+ 			if (hcd->speed == HCD_USB3) {
+ 				if ((vhci_hcd->port_status[rhport] &
+ 				     USB_SS_PORT_STAT_POWER) != 0) {
+diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
+index 40a223381ab61..0f28bf99efebc 100644
+--- a/drivers/vfio/pci/Kconfig
++++ b/drivers/vfio/pci/Kconfig
+@@ -42,7 +42,7 @@ config VFIO_PCI_IGD
+ 
+ config VFIO_PCI_NVLINK2
+ 	def_bool y
+-	depends on VFIO_PCI && PPC_POWERNV
++	depends on VFIO_PCI && PPC_POWERNV && SPAPR_TCE_IOMMU
+ 	help
+ 	  VFIO PCI support for P9 Witherspoon machine with NVIDIA V100 GPUs
+ 
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index a262e12c6dc26..5ccb0705beae1 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -332,8 +332,8 @@ static void vhost_vq_reset(struct vhost_dev *dev,
+ 	vq->error_ctx = NULL;
+ 	vq->kick = NULL;
+ 	vq->log_ctx = NULL;
+-	vhost_reset_is_le(vq);
+ 	vhost_disable_cross_endian(vq);
++	vhost_reset_is_le(vq);
+ 	vq->busyloop_timeout = 0;
+ 	vq->umem = NULL;
+ 	vq->iotlb = NULL;
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 8d1ae973041ae..26581194fdf81 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -1344,6 +1344,9 @@ static void fbcon_cursor(struct vc_data *vc, int mode)
+ 
+ 	ops->cursor_flash = (mode == CM_ERASE) ? 0 : 1;
+ 
++	if (!ops->cursor)
++		return;
++
+ 	ops->cursor(vc, info, mode, get_color(vc, info, c, 1),
+ 		    get_color(vc, info, c, 0));
+ }
+diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
+index c8b0ae676809b..4dc9077dd2ac0 100644
+--- a/drivers/video/fbdev/hyperv_fb.c
++++ b/drivers/video/fbdev/hyperv_fb.c
+@@ -1031,7 +1031,6 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info)
+ 			PCI_DEVICE_ID_HYPERV_VIDEO, NULL);
+ 		if (!pdev) {
+ 			pr_err("Unable to find PCI Hyper-V video\n");
+-			kfree(info->apertures);
+ 			return -ENODEV;
+ 		}
+ 
+@@ -1129,7 +1128,6 @@ getmem_done:
+ 	} else {
+ 		pci_dev_put(pdev);
+ 	}
+-	kfree(info->apertures);
+ 
+ 	return 0;
+ 
+@@ -1141,7 +1139,6 @@ err2:
+ err1:
+ 	if (!gen2vm)
+ 		pci_dev_put(pdev);
+-	kfree(info->apertures);
+ 
+ 	return -ENOMEM;
+ }
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index 1d640b1456375..1afd60fcd7723 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -626,27 +626,41 @@ int ext4_claim_free_clusters(struct ext4_sb_info *sbi,
+ 
+ /**
+  * ext4_should_retry_alloc() - check if a block allocation should be retried
+- * @sb:			super block
+- * @retries:		number of attemps has been made
++ * @sb:			superblock
++ * @retries:		number of retry attempts made so far
+  *
+- * ext4_should_retry_alloc() is called when ENOSPC is returned, and if
+- * it is profitable to retry the operation, this function will wait
+- * for the current or committing transaction to complete, and then
+- * return TRUE.  We will only retry once.
++ * ext4_should_retry_alloc() is called when ENOSPC is returned while
++ * attempting to allocate blocks.  If there's an indication that a pending
++ * journal transaction might free some space and allow another attempt to
++ * succeed, this function will wait for the current or committing transaction
++ * to complete and then return TRUE.
+  */
+ int ext4_should_retry_alloc(struct super_block *sb, int *retries)
+ {
+-	if (!ext4_has_free_clusters(EXT4_SB(sb), 1, 0) ||
+-	    (*retries)++ > 1 ||
+-	    !EXT4_SB(sb)->s_journal)
++	struct ext4_sb_info *sbi = EXT4_SB(sb);
++
++	if (!sbi->s_journal)
+ 		return 0;
+ 
+-	smp_mb();
+-	if (EXT4_SB(sb)->s_mb_free_pending == 0)
++	if (++(*retries) > 3) {
++		percpu_counter_inc(&sbi->s_sra_exceeded_retry_limit);
+ 		return 0;
++	}
+ 
++	/*
++	 * if there's no indication that blocks are about to be freed it's
++	 * possible we just missed a transaction commit that did so
++	 */
++	smp_mb();
++	if (sbi->s_mb_free_pending == 0)
++		return ext4_has_free_clusters(sbi, 1, 0);
++
++	/*
++	 * it's possible we've just missed a transaction commit here,
++	 * so ignore the returned status
++	 */
+ 	jbd_debug(1, "%s: retrying operation after ENOSPC\n", sb->s_id);
+-	jbd2_journal_force_commit_nested(EXT4_SB(sb)->s_journal);
++	(void) jbd2_journal_force_commit_nested(sbi->s_journal);
+ 	return 1;
+ }
+ 
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index b92acb6603139..7cae226b000f7 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1474,6 +1474,7 @@ struct ext4_sb_info {
+ 	struct percpu_counter s_freeinodes_counter;
+ 	struct percpu_counter s_dirs_counter;
+ 	struct percpu_counter s_dirtyclusters_counter;
++	struct percpu_counter s_sra_exceeded_retry_limit;
+ 	struct blockgroup_lock *s_blockgroup_lock;
+ 	struct proc_dir_entry *s_proc;
+ 	struct kobject s_kobj;
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index c2b8ba343bb4b..3f11c948feb02 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1937,13 +1937,13 @@ static int __ext4_journalled_writepage(struct page *page,
+ 	if (!ret)
+ 		ret = err;
+ 
+-	if (!ext4_has_inline_data(inode))
+-		ext4_walk_page_buffers(NULL, page_bufs, 0, len,
+-				       NULL, bput_one);
+ 	ext4_set_inode_state(inode, EXT4_STATE_JDATA);
+ out:
+ 	unlock_page(page);
+ out_no_pagelock:
++	if (!inline_data && page_bufs)
++		ext4_walk_page_buffers(NULL, page_bufs, 0, len,
++				       NULL, bput_one);
+ 	brelse(inode_bh);
+ 	return ret;
+ }
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 6c7eba426a678..ab7baf5299176 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3788,14 +3788,14 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	 */
+ 	retval = -ENOENT;
+ 	if (!old.bh || le32_to_cpu(old.de->inode) != old.inode->i_ino)
+-		goto end_rename;
++		goto release_bh;
+ 
+ 	new.bh = ext4_find_entry(new.dir, &new.dentry->d_name,
+ 				 &new.de, &new.inlined);
+ 	if (IS_ERR(new.bh)) {
+ 		retval = PTR_ERR(new.bh);
+ 		new.bh = NULL;
+-		goto end_rename;
++		goto release_bh;
+ 	}
+ 	if (new.bh) {
+ 		if (!new.inode) {
+@@ -3812,15 +3812,13 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		handle = ext4_journal_start(old.dir, EXT4_HT_DIR, credits);
+ 		if (IS_ERR(handle)) {
+ 			retval = PTR_ERR(handle);
+-			handle = NULL;
+-			goto end_rename;
++			goto release_bh;
+ 		}
+ 	} else {
+ 		whiteout = ext4_whiteout_for_rename(&old, credits, &handle);
+ 		if (IS_ERR(whiteout)) {
+ 			retval = PTR_ERR(whiteout);
+-			whiteout = NULL;
+-			goto end_rename;
++			goto release_bh;
+ 		}
+ 	}
+ 
+@@ -3957,16 +3955,18 @@ end_rename:
+ 			ext4_resetent(handle, &old,
+ 				      old.inode->i_ino, old_file_type);
+ 			drop_nlink(whiteout);
++			ext4_orphan_add(handle, whiteout);
+ 		}
+ 		unlock_new_inode(whiteout);
++		ext4_journal_stop(handle);
+ 		iput(whiteout);
+-
++	} else {
++		ext4_journal_stop(handle);
+ 	}
++release_bh:
+ 	brelse(old.dir_bh);
+ 	brelse(old.bh);
+ 	brelse(new.bh);
+-	if (handle)
+-		ext4_journal_stop(handle);
+ 	return retval;
+ }
+ 
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index e30bf8f342c2a..594300d315ef2 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1226,6 +1226,7 @@ static void ext4_put_super(struct super_block *sb)
+ 	percpu_counter_destroy(&sbi->s_freeinodes_counter);
+ 	percpu_counter_destroy(&sbi->s_dirs_counter);
+ 	percpu_counter_destroy(&sbi->s_dirtyclusters_counter);
++	percpu_counter_destroy(&sbi->s_sra_exceeded_retry_limit);
+ 	percpu_free_rwsem(&sbi->s_writepages_rwsem);
+ #ifdef CONFIG_QUOTA
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+@@ -5019,6 +5020,9 @@ no_journal:
+ 	if (!err)
+ 		err = percpu_counter_init(&sbi->s_dirtyclusters_counter, 0,
+ 					  GFP_KERNEL);
++	if (!err)
++		err = percpu_counter_init(&sbi->s_sra_exceeded_retry_limit, 0,
++					  GFP_KERNEL);
+ 	if (!err)
+ 		err = percpu_init_rwsem(&sbi->s_writepages_rwsem);
+ 
+@@ -5131,6 +5135,7 @@ failed_mount6:
+ 	percpu_counter_destroy(&sbi->s_freeinodes_counter);
+ 	percpu_counter_destroy(&sbi->s_dirs_counter);
+ 	percpu_counter_destroy(&sbi->s_dirtyclusters_counter);
++	percpu_counter_destroy(&sbi->s_sra_exceeded_retry_limit);
+ 	percpu_free_rwsem(&sbi->s_writepages_rwsem);
+ failed_mount5:
+ 	ext4_ext_release(sb);
+diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
+index 4e27fe6ed3ae6..f24bef3be48a3 100644
+--- a/fs/ext4/sysfs.c
++++ b/fs/ext4/sysfs.c
+@@ -24,6 +24,7 @@ typedef enum {
+ 	attr_session_write_kbytes,
+ 	attr_lifetime_write_kbytes,
+ 	attr_reserved_clusters,
++	attr_sra_exceeded_retry_limit,
+ 	attr_inode_readahead,
+ 	attr_trigger_test_error,
+ 	attr_first_error_time,
+@@ -208,6 +209,7 @@ EXT4_ATTR_FUNC(delayed_allocation_blocks, 0444);
+ EXT4_ATTR_FUNC(session_write_kbytes, 0444);
+ EXT4_ATTR_FUNC(lifetime_write_kbytes, 0444);
+ EXT4_ATTR_FUNC(reserved_clusters, 0644);
++EXT4_ATTR_FUNC(sra_exceeded_retry_limit, 0444);
+ 
+ EXT4_ATTR_OFFSET(inode_readahead_blks, 0644, inode_readahead,
+ 		 ext4_sb_info, s_inode_readahead_blks);
+@@ -257,6 +259,7 @@ static struct attribute *ext4_attrs[] = {
+ 	ATTR_LIST(session_write_kbytes),
+ 	ATTR_LIST(lifetime_write_kbytes),
+ 	ATTR_LIST(reserved_clusters),
++	ATTR_LIST(sra_exceeded_retry_limit),
+ 	ATTR_LIST(inode_readahead_blks),
+ 	ATTR_LIST(inode_goal),
+ 	ATTR_LIST(mb_stats),
+@@ -380,6 +383,10 @@ static ssize_t ext4_attr_show(struct kobject *kobj,
+ 		return snprintf(buf, PAGE_SIZE, "%llu\n",
+ 				(unsigned long long)
+ 				atomic64_read(&sbi->s_resv_clusters));
++	case attr_sra_exceeded_retry_limit:
++		return snprintf(buf, PAGE_SIZE, "%llu\n",
++				(unsigned long long)
++			percpu_counter_sum(&sbi->s_sra_exceeded_retry_limit));
+ 	case attr_inode_readahead:
+ 	case attr_pointer_ui:
+ 		if (!ptr)
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index d2c0e58c6416f..3d83c9e128487 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -1324,8 +1324,15 @@ static int virtio_fs_fill_super(struct super_block *sb, struct fs_context *fsc)
+ 
+ 	/* virtiofs allocates and installs its own fuse devices */
+ 	ctx->fudptr = NULL;
+-	if (ctx->dax)
++	if (ctx->dax) {
++		if (!fs->dax_dev) {
++			err = -EINVAL;
++			pr_err("virtio-fs: dax can't be enabled as filesystem"
++			       " device does not support it.\n");
++			goto err_free_fuse_devs;
++		}
+ 		ctx->dax_dev = fs->dax_dev;
++	}
+ 	err = fuse_fill_super_common(sb, ctx);
+ 	if (err < 0)
+ 		goto err_free_fuse_devs;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index dde290eb7dd0c..4ccf99cb8cdc0 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4401,6 +4401,7 @@ static int io_sendmsg(struct io_kiocb *req, bool force_nonblock,
+ 	struct io_async_msghdr iomsg, *kmsg;
+ 	struct socket *sock;
+ 	unsigned flags;
++	int min_ret = 0;
+ 	int ret;
+ 
+ 	sock = sock_from_file(req->file, &ret);
+@@ -4421,12 +4422,15 @@ static int io_sendmsg(struct io_kiocb *req, bool force_nonblock,
+ 		kmsg = &iomsg;
+ 	}
+ 
+-	flags = req->sr_msg.msg_flags;
++	flags = req->sr_msg.msg_flags | MSG_NOSIGNAL;
+ 	if (flags & MSG_DONTWAIT)
+ 		req->flags |= REQ_F_NOWAIT;
+ 	else if (force_nonblock)
+ 		flags |= MSG_DONTWAIT;
+ 
++	if (flags & MSG_WAITALL)
++		min_ret = iov_iter_count(&kmsg->msg.msg_iter);
++
+ 	ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags);
+ 	if (force_nonblock && ret == -EAGAIN)
+ 		return io_setup_async_msg(req, kmsg);
+@@ -4436,7 +4440,7 @@ static int io_sendmsg(struct io_kiocb *req, bool force_nonblock,
+ 	if (kmsg->iov != kmsg->fast_iov)
+ 		kfree(kmsg->iov);
+ 	req->flags &= ~REQ_F_NEED_CLEANUP;
+-	if (ret < 0)
++	if (ret < min_ret)
+ 		req_set_fail_links(req);
+ 	__io_req_complete(req, ret, 0, cs);
+ 	return 0;
+@@ -4450,6 +4454,7 @@ static int io_send(struct io_kiocb *req, bool force_nonblock,
+ 	struct iovec iov;
+ 	struct socket *sock;
+ 	unsigned flags;
++	int min_ret = 0;
+ 	int ret;
+ 
+ 	sock = sock_from_file(req->file, &ret);
+@@ -4465,12 +4470,15 @@ static int io_send(struct io_kiocb *req, bool force_nonblock,
+ 	msg.msg_controllen = 0;
+ 	msg.msg_namelen = 0;
+ 
+-	flags = req->sr_msg.msg_flags;
++	flags = req->sr_msg.msg_flags | MSG_NOSIGNAL;
+ 	if (flags & MSG_DONTWAIT)
+ 		req->flags |= REQ_F_NOWAIT;
+ 	else if (force_nonblock)
+ 		flags |= MSG_DONTWAIT;
+ 
++	if (flags & MSG_WAITALL)
++		min_ret = iov_iter_count(&msg.msg_iter);
++
+ 	msg.msg_flags = flags;
+ 	ret = sock_sendmsg(sock, &msg);
+ 	if (force_nonblock && ret == -EAGAIN)
+@@ -4478,7 +4486,7 @@ static int io_send(struct io_kiocb *req, bool force_nonblock,
+ 	if (ret == -ERESTARTSYS)
+ 		ret = -EINTR;
+ 
+-	if (ret < 0)
++	if (ret < min_ret)
+ 		req_set_fail_links(req);
+ 	__io_req_complete(req, ret, 0, cs);
+ 	return 0;
+@@ -4630,6 +4638,7 @@ static int io_recvmsg(struct io_kiocb *req, bool force_nonblock,
+ 	struct socket *sock;
+ 	struct io_buffer *kbuf;
+ 	unsigned flags;
++	int min_ret = 0;
+ 	int ret, cflags = 0;
+ 
+ 	sock = sock_from_file(req->file, &ret);
+@@ -4659,12 +4668,15 @@ static int io_recvmsg(struct io_kiocb *req, bool force_nonblock,
+ 				1, req->sr_msg.len);
+ 	}
+ 
+-	flags = req->sr_msg.msg_flags;
++	flags = req->sr_msg.msg_flags | MSG_NOSIGNAL;
+ 	if (flags & MSG_DONTWAIT)
+ 		req->flags |= REQ_F_NOWAIT;
+ 	else if (force_nonblock)
+ 		flags |= MSG_DONTWAIT;
+ 
++	if (flags & MSG_WAITALL)
++		min_ret = iov_iter_count(&kmsg->msg.msg_iter);
++
+ 	ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.umsg,
+ 					kmsg->uaddr, flags);
+ 	if (force_nonblock && ret == -EAGAIN)
+@@ -4677,7 +4689,7 @@ static int io_recvmsg(struct io_kiocb *req, bool force_nonblock,
+ 	if (kmsg->iov != kmsg->fast_iov)
+ 		kfree(kmsg->iov);
+ 	req->flags &= ~REQ_F_NEED_CLEANUP;
+-	if (ret < 0)
++	if (ret < min_ret || ((flags & MSG_WAITALL) && (kmsg->msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
+ 		req_set_fail_links(req);
+ 	__io_req_complete(req, ret, cflags, cs);
+ 	return 0;
+@@ -4693,6 +4705,7 @@ static int io_recv(struct io_kiocb *req, bool force_nonblock,
+ 	struct socket *sock;
+ 	struct iovec iov;
+ 	unsigned flags;
++	int min_ret = 0;
+ 	int ret, cflags = 0;
+ 
+ 	sock = sock_from_file(req->file, &ret);
+@@ -4717,12 +4730,15 @@ static int io_recv(struct io_kiocb *req, bool force_nonblock,
+ 	msg.msg_iocb = NULL;
+ 	msg.msg_flags = 0;
+ 
+-	flags = req->sr_msg.msg_flags;
++	flags = req->sr_msg.msg_flags | MSG_NOSIGNAL;
+ 	if (flags & MSG_DONTWAIT)
+ 		req->flags |= REQ_F_NOWAIT;
+ 	else if (force_nonblock)
+ 		flags |= MSG_DONTWAIT;
+ 
++	if (flags & MSG_WAITALL)
++		min_ret = iov_iter_count(&msg.msg_iter);
++
+ 	ret = sock_recvmsg(sock, &msg, flags);
+ 	if (force_nonblock && ret == -EAGAIN)
+ 		return -EAGAIN;
+@@ -4731,7 +4747,7 @@ static int io_recv(struct io_kiocb *req, bool force_nonblock,
+ out_free:
+ 	if (req->flags & REQ_F_BUFFER_SELECTED)
+ 		cflags = io_put_recv_kbuf(req);
+-	if (ret < 0)
++	if (ret < min_ret || ((flags & MSG_WAITALL) && (msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
+ 		req_set_fail_links(req);
+ 	__io_req_complete(req, ret, cflags, cs);
+ 	return 0;
+@@ -6242,7 +6258,6 @@ static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
+ 	spin_unlock_irqrestore(&ctx->completion_lock, flags);
+ 
+ 	if (prev) {
+-		req_set_fail_links(prev);
+ 		io_async_find_and_cancel(ctx, req, prev->user_data, -ETIME);
+ 		io_put_req_deferred(prev, 1);
+ 	} else {
+diff --git a/fs/iomap/swapfile.c b/fs/iomap/swapfile.c
+index a648dbf6991e4..a5e478de14174 100644
+--- a/fs/iomap/swapfile.c
++++ b/fs/iomap/swapfile.c
+@@ -170,6 +170,16 @@ int iomap_swapfile_activate(struct swap_info_struct *sis,
+ 			return ret;
+ 	}
+ 
++	/*
++	 * If this swapfile doesn't contain even a single page-aligned
++	 * contiguous range of blocks, reject this useless swapfile to
++	 * prevent confusion later on.
++	 */
++	if (isi.nr_pages == 0) {
++		pr_warn("swapon: Cannot find a single usable page in file.\n");
++		return -EINVAL;
++	}
++
+ 	*pagespan = 1 + isi.highest_ppage - isi.lowest_ppage;
+ 	sis->max = isi.nr_pages;
+ 	sis->pages = isi.nr_pages - 1;
+diff --git a/fs/nfsd/Kconfig b/fs/nfsd/Kconfig
+index dbbc583d62730..248f1459c0399 100644
+--- a/fs/nfsd/Kconfig
++++ b/fs/nfsd/Kconfig
+@@ -73,6 +73,7 @@ config NFSD_V4
+ 	select NFSD_V3
+ 	select FS_POSIX_ACL
+ 	select SUNRPC_GSS
++	select CRYPTO
+ 	select CRYPTO_MD5
+ 	select CRYPTO_SHA256
+ 	select GRACE_PERIOD
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index 052be5bf9ef50..7325592b456e5 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -1189,6 +1189,7 @@ static void nfsd4_cb_done(struct rpc_task *task, void *calldata)
+ 		switch (task->tk_status) {
+ 		case -EIO:
+ 		case -ETIMEDOUT:
++		case -EACCES:
+ 			nfsd4_mark_cb_down(clp, task->tk_status);
+ 		}
+ 		break;
+diff --git a/fs/reiserfs/xattr.h b/fs/reiserfs/xattr.h
+index c764352447ba1..81bec2c80b25c 100644
+--- a/fs/reiserfs/xattr.h
++++ b/fs/reiserfs/xattr.h
+@@ -43,7 +43,7 @@ void reiserfs_security_free(struct reiserfs_security_handle *sec);
+ 
+ static inline int reiserfs_xattrs_initialized(struct super_block *sb)
+ {
+-	return REISERFS_SB(sb)->priv_root != NULL;
++	return REISERFS_SB(sb)->priv_root && REISERFS_SB(sb)->xattr_root;
+ }
+ 
+ #define xattr_size(size) ((size) + sizeof(struct reiserfs_xattr_header))
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index 9e173c6f312dc..fdb1d5262ce84 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -222,10 +222,14 @@ void __iomem *__acpi_map_table(unsigned long phys, unsigned long size);
+ void __acpi_unmap_table(void __iomem *map, unsigned long size);
+ int early_acpi_boot_init(void);
+ int acpi_boot_init (void);
++void acpi_boot_table_prepare (void);
+ void acpi_boot_table_init (void);
+ int acpi_mps_check (void);
+ int acpi_numa_init (void);
+ 
++int acpi_locate_initial_tables (void);
++void acpi_reserve_initial_tables (void);
++void acpi_table_init_complete (void);
+ int acpi_table_init (void);
+ int acpi_table_parse(char *id, acpi_tbl_table_handler handler);
+ int __init acpi_table_parse_entries(char *id, unsigned long table_size,
+@@ -807,9 +811,12 @@ static inline int acpi_boot_init(void)
+ 	return 0;
+ }
+ 
++static inline void acpi_boot_table_prepare(void)
++{
++}
++
+ static inline void acpi_boot_table_init(void)
+ {
+-	return;
+ }
+ 
+ static inline int acpi_mps_check(void)
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index dd236ef59db3e..b416bba3a62b5 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -20,6 +20,7 @@
+ #include <linux/module.h>
+ #include <linux/kallsyms.h>
+ #include <linux/capability.h>
++#include <linux/percpu-refcount.h>
+ 
+ struct bpf_verifier_env;
+ struct bpf_verifier_log;
+@@ -556,7 +557,8 @@ struct bpf_tramp_progs {
+  *      fentry = a set of program to run before calling original function
+  *      fexit = a set of program to run after original function
+  */
+-int arch_prepare_bpf_trampoline(void *image, void *image_end,
++struct bpf_tramp_image;
++int arch_prepare_bpf_trampoline(struct bpf_tramp_image *tr, void *image, void *image_end,
+ 				const struct btf_func_model *m, u32 flags,
+ 				struct bpf_tramp_progs *tprogs,
+ 				void *orig_call);
+@@ -565,6 +567,8 @@ u64 notrace __bpf_prog_enter(void);
+ void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start);
+ void notrace __bpf_prog_enter_sleepable(void);
+ void notrace __bpf_prog_exit_sleepable(void);
++void notrace __bpf_tramp_enter(struct bpf_tramp_image *tr);
++void notrace __bpf_tramp_exit(struct bpf_tramp_image *tr);
+ 
+ struct bpf_ksym {
+ 	unsigned long		 start;
+@@ -583,6 +587,18 @@ enum bpf_tramp_prog_type {
+ 	BPF_TRAMP_REPLACE, /* more than MAX */
+ };
+ 
++struct bpf_tramp_image {
++	void *image;
++	struct bpf_ksym ksym;
++	struct percpu_ref pcref;
++	void *ip_after_call;
++	void *ip_epilogue;
++	union {
++		struct rcu_head rcu;
++		struct work_struct work;
++	};
++};
++
+ struct bpf_trampoline {
+ 	/* hlist for trampoline_table */
+ 	struct hlist_node hlist;
+@@ -605,9 +621,8 @@ struct bpf_trampoline {
+ 	/* Number of attached programs. A counter per kind. */
+ 	int progs_cnt[BPF_TRAMP_MAX];
+ 	/* Executable image of trampoline */
+-	void *image;
++	struct bpf_tramp_image *cur_image;
+ 	u64 selector;
+-	struct bpf_ksym ksym;
+ };
+ 
+ struct bpf_attach_target_info {
+@@ -691,6 +706,8 @@ void bpf_image_ksym_add(void *data, struct bpf_ksym *ksym);
+ void bpf_image_ksym_del(struct bpf_ksym *ksym);
+ void bpf_ksym_add(struct bpf_ksym *ksym);
+ void bpf_ksym_del(struct bpf_ksym *ksym);
++int bpf_jit_charge_modmem(u32 pages);
++void bpf_jit_uncharge_modmem(u32 pages);
+ #else
+ static inline int bpf_trampoline_link_prog(struct bpf_prog *prog,
+ 					   struct bpf_trampoline *tr)
+@@ -780,7 +797,6 @@ struct bpf_prog_aux {
+ 	bool func_proto_unreliable;
+ 	bool sleepable;
+ 	bool tail_call_reachable;
+-	enum bpf_tramp_prog_type trampoline_prog_type;
+ 	struct hlist_node tramp_hlist;
+ 	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
+ 	const struct btf_type *attach_func_proto;
+diff --git a/include/linux/can/can-ml.h b/include/linux/can/can-ml.h
+index 2f5d731ae251d..8afa92d15a664 100644
+--- a/include/linux/can/can-ml.h
++++ b/include/linux/can/can-ml.h
+@@ -44,6 +44,7 @@
+ 
+ #include <linux/can.h>
+ #include <linux/list.h>
++#include <linux/netdevice.h>
+ 
+ #define CAN_SFF_RCV_ARRAY_SZ (1 << CAN_SFF_ID_BITS)
+ #define CAN_EFF_RCV_HASH_BITS 10
+@@ -65,4 +66,15 @@ struct can_ml_priv {
+ #endif
+ };
+ 
++static inline struct can_ml_priv *can_get_ml_priv(struct net_device *dev)
++{
++	return netdev_get_ml_priv(dev, ML_PRIV_CAN);
++}
++
++static inline void can_set_ml_priv(struct net_device *dev,
++				   struct can_ml_priv *ml_priv)
++{
++	netdev_set_ml_priv(dev, ml_priv, ML_PRIV_CAN);
++}
++
+ #endif /* CAN_ML_H */
+diff --git a/include/linux/extcon.h b/include/linux/extcon.h
+index fd183fb9c20f7..0c19010da77fa 100644
+--- a/include/linux/extcon.h
++++ b/include/linux/extcon.h
+@@ -271,6 +271,29 @@ static inline  void devm_extcon_unregister_notifier(struct device *dev,
+ 				struct extcon_dev *edev, unsigned int id,
+ 				struct notifier_block *nb) { }
+ 
++static inline int extcon_register_notifier_all(struct extcon_dev *edev,
++					       struct notifier_block *nb)
++{
++	return 0;
++}
++
++static inline int extcon_unregister_notifier_all(struct extcon_dev *edev,
++						 struct notifier_block *nb)
++{
++	return 0;
++}
++
++static inline int devm_extcon_register_notifier_all(struct device *dev,
++						    struct extcon_dev *edev,
++						    struct notifier_block *nb)
++{
++	return 0;
++}
++
++static inline void devm_extcon_unregister_notifier_all(struct device *dev,
++						       struct extcon_dev *edev,
++						       struct notifier_block *nb) { }
++
+ static inline struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name)
+ {
+ 	return ERR_PTR(-ENODEV);
+diff --git a/include/linux/firmware/intel/stratix10-svc-client.h b/include/linux/firmware/intel/stratix10-svc-client.h
+index a93d85932eb92..f843c6a10cf36 100644
+--- a/include/linux/firmware/intel/stratix10-svc-client.h
++++ b/include/linux/firmware/intel/stratix10-svc-client.h
+@@ -56,7 +56,7 @@
+  * COMMAND_RECONFIG_FLAG_PARTIAL:
+  * Set to FPGA configuration type (full or partial).
+  */
+-#define COMMAND_RECONFIG_FLAG_PARTIAL	1
++#define COMMAND_RECONFIG_FLAG_PARTIAL	0
+ 
+ /**
+  * Timeout settings for service clients:
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 8753e98a8d58a..e37480b5f4c0e 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -1600,6 +1600,12 @@ enum netdev_priv_flags {
+ #define IFF_L3MDEV_RX_HANDLER		IFF_L3MDEV_RX_HANDLER
+ #define IFF_LIVE_RENAME_OK		IFF_LIVE_RENAME_OK
+ 
++/* Specifies the type of the struct net_device::ml_priv pointer */
++enum netdev_ml_priv_type {
++	ML_PRIV_NONE,
++	ML_PRIV_CAN,
++};
++
+ /**
+  *	struct net_device - The DEVICE structure.
+  *
+@@ -1795,6 +1801,7 @@ enum netdev_priv_flags {
+  * 	@nd_net:		Network namespace this network device is inside
+  *
+  * 	@ml_priv:	Mid-layer private
++ *	@ml_priv_type:  Mid-layer private type
+  * 	@lstats:	Loopback statistics
+  * 	@tstats:	Tunnel statistics
+  * 	@dstats:	Dummy statistics
+@@ -2107,8 +2114,10 @@ struct net_device {
+ 	possible_net_t			nd_net;
+ 
+ 	/* mid-layer private */
++	void				*ml_priv;
++	enum netdev_ml_priv_type	ml_priv_type;
++
+ 	union {
+-		void					*ml_priv;
+ 		struct pcpu_lstats __percpu		*lstats;
+ 		struct pcpu_sw_netstats __percpu	*tstats;
+ 		struct pcpu_dstats __percpu		*dstats;
+@@ -2298,6 +2307,29 @@ static inline void netdev_reset_rx_headroom(struct net_device *dev)
+ 	netdev_set_rx_headroom(dev, -1);
+ }
+ 
++static inline void *netdev_get_ml_priv(struct net_device *dev,
++				       enum netdev_ml_priv_type type)
++{
++	if (dev->ml_priv_type != type)
++		return NULL;
++
++	return dev->ml_priv;
++}
++
++static inline void netdev_set_ml_priv(struct net_device *dev,
++				      void *ml_priv,
++				      enum netdev_ml_priv_type type)
++{
++	WARN(dev->ml_priv_type && dev->ml_priv_type != type,
++	     "Overwriting already set ml_priv_type (%u) with different ml_priv_type (%u)!\n",
++	     dev->ml_priv_type, type);
++	WARN(!dev->ml_priv_type && dev->ml_priv,
++	     "Overwriting already set ml_priv and ml_priv_type is ML_PRIV_NONE!\n");
++
++	dev->ml_priv = ml_priv;
++	dev->ml_priv_type = type;
++}
++
+ /*
+  * Net namespace inlines
+  */
+diff --git a/include/linux/qcom-geni-se.h b/include/linux/qcom-geni-se.h
+index f7bbea3f09ca9..964cfe68f72cc 100644
+--- a/include/linux/qcom-geni-se.h
++++ b/include/linux/qcom-geni-se.h
+@@ -462,7 +462,5 @@ void geni_icc_set_tag(struct geni_se *se, u32 tag);
+ int geni_icc_enable(struct geni_se *se);
+ 
+ int geni_icc_disable(struct geni_se *se);
+-
+-void geni_remove_earlycon_icc_vote(void);
+ #endif
+ #endif
+diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
+index 850424e5d0306..6ecf2a0220dbe 100644
+--- a/include/linux/ww_mutex.h
++++ b/include/linux/ww_mutex.h
+@@ -173,9 +173,10 @@ static inline void ww_acquire_done(struct ww_acquire_ctx *ctx)
+  */
+ static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
+ {
+-#ifdef CONFIG_DEBUG_MUTEXES
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ 	mutex_release(&ctx->dep_map, _THIS_IP_);
+-
++#endif
++#ifdef CONFIG_DEBUG_MUTEXES
+ 	DEBUG_LOCKS_WARN_ON(ctx->acquired);
+ 	if (!IS_ENABLED(CONFIG_PROVE_LOCKING))
+ 		/*
+diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
+index 4c3b543bb33b7..f527063864b55 100644
+--- a/kernel/bpf/bpf_struct_ops.c
++++ b/kernel/bpf/bpf_struct_ops.c
+@@ -430,7 +430,7 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ 
+ 		tprogs[BPF_TRAMP_FENTRY].progs[0] = prog;
+ 		tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
+-		err = arch_prepare_bpf_trampoline(image,
++		err = arch_prepare_bpf_trampoline(NULL, image,
+ 						  st_map->image + PAGE_SIZE,
+ 						  &st_ops->func_models[i], 0,
+ 						  tprogs, NULL);
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 55454d2278b17..182e162f8fd0b 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -825,7 +825,7 @@ static int __init bpf_jit_charge_init(void)
+ }
+ pure_initcall(bpf_jit_charge_init);
+ 
+-static int bpf_jit_charge_modmem(u32 pages)
++int bpf_jit_charge_modmem(u32 pages)
+ {
+ 	if (atomic_long_add_return(pages, &bpf_jit_current) >
+ 	    (bpf_jit_limit >> PAGE_SHIFT)) {
+@@ -838,7 +838,7 @@ static int bpf_jit_charge_modmem(u32 pages)
+ 	return 0;
+ }
+ 
+-static void bpf_jit_uncharge_modmem(u32 pages)
++void bpf_jit_uncharge_modmem(u32 pages)
+ {
+ 	atomic_long_sub(pages, &bpf_jit_current);
+ }
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index 35c5887d82ffe..986dabc3d11f0 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -57,19 +57,10 @@ void bpf_image_ksym_del(struct bpf_ksym *ksym)
+ 			   PAGE_SIZE, true, ksym->name);
+ }
+ 
+-static void bpf_trampoline_ksym_add(struct bpf_trampoline *tr)
+-{
+-	struct bpf_ksym *ksym = &tr->ksym;
+-
+-	snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu", tr->key);
+-	bpf_image_ksym_add(tr->image, ksym);
+-}
+-
+ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
+ {
+ 	struct bpf_trampoline *tr;
+ 	struct hlist_head *head;
+-	void *image;
+ 	int i;
+ 
+ 	mutex_lock(&trampoline_mutex);
+@@ -84,14 +75,6 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
+ 	if (!tr)
+ 		goto out;
+ 
+-	/* is_root was checked earlier. No need for bpf_jit_charge_modmem() */
+-	image = bpf_jit_alloc_exec_page();
+-	if (!image) {
+-		kfree(tr);
+-		tr = NULL;
+-		goto out;
+-	}
+-
+ 	tr->key = key;
+ 	INIT_HLIST_NODE(&tr->hlist);
+ 	hlist_add_head(&tr->hlist, head);
+@@ -99,9 +82,6 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
+ 	mutex_init(&tr->mutex);
+ 	for (i = 0; i < BPF_TRAMP_MAX; i++)
+ 		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
+-	tr->image = image;
+-	INIT_LIST_HEAD_RCU(&tr->ksym.lnode);
+-	bpf_trampoline_ksym_add(tr);
+ out:
+ 	mutex_unlock(&trampoline_mutex);
+ 	return tr;
+@@ -185,10 +165,142 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total)
+ 	return tprogs;
+ }
+ 
++static void __bpf_tramp_image_put_deferred(struct work_struct *work)
++{
++	struct bpf_tramp_image *im;
++
++	im = container_of(work, struct bpf_tramp_image, work);
++	bpf_image_ksym_del(&im->ksym);
++	bpf_jit_free_exec(im->image);
++	bpf_jit_uncharge_modmem(1);
++	percpu_ref_exit(&im->pcref);
++	kfree_rcu(im, rcu);
++}
++
++/* callback, fexit step 3 or fentry step 2 */
++static void __bpf_tramp_image_put_rcu(struct rcu_head *rcu)
++{
++	struct bpf_tramp_image *im;
++
++	im = container_of(rcu, struct bpf_tramp_image, rcu);
++	INIT_WORK(&im->work, __bpf_tramp_image_put_deferred);
++	schedule_work(&im->work);
++}
++
++/* callback, fexit step 2. Called after percpu_ref_kill confirms. */
++static void __bpf_tramp_image_release(struct percpu_ref *pcref)
++{
++	struct bpf_tramp_image *im;
++
++	im = container_of(pcref, struct bpf_tramp_image, pcref);
++	call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu);
++}
++
++/* callback, fexit or fentry step 1 */
++static void __bpf_tramp_image_put_rcu_tasks(struct rcu_head *rcu)
++{
++	struct bpf_tramp_image *im;
++
++	im = container_of(rcu, struct bpf_tramp_image, rcu);
++	if (im->ip_after_call)
++		/* the case of fmod_ret/fexit trampoline and CONFIG_PREEMPTION=y */
++		percpu_ref_kill(&im->pcref);
++	else
++		/* the case of fentry trampoline */
++		call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu);
++}
++
++static void bpf_tramp_image_put(struct bpf_tramp_image *im)
++{
++	/* The trampoline image that calls original function is using:
++	 * rcu_read_lock_trace to protect sleepable bpf progs
++	 * rcu_read_lock to protect normal bpf progs
++	 * percpu_ref to protect trampoline itself
++	 * rcu tasks to protect trampoline asm not covered by percpu_ref
++	 * (which are few asm insns before __bpf_tramp_enter and
++	 *  after __bpf_tramp_exit)
++	 *
++	 * The trampoline is unreachable before bpf_tramp_image_put().
++	 *
++	 * First, patch the trampoline to avoid calling into fexit progs.
++	 * The progs will be freed even if the original function is still
++	 * executing or sleeping.
++	 * In case of CONFIG_PREEMPT=y use call_rcu_tasks() to wait on
++	 * first few asm instructions to execute and call into
++	 * __bpf_tramp_enter->percpu_ref_get.
++	 * Then use percpu_ref_kill to wait for the trampoline and the original
++	 * function to finish.
++	 * Then use call_rcu_tasks() to make sure few asm insns in
++	 * the trampoline epilogue are done as well.
++	 *
++	 * In !PREEMPT case the task that got interrupted in the first asm
++	 * insns won't go through an RCU quiescent state which the
++	 * percpu_ref_kill will be waiting for. Hence the first
++	 * call_rcu_tasks() is not necessary.
++	 */
++	if (im->ip_after_call) {
++		int err = bpf_arch_text_poke(im->ip_after_call, BPF_MOD_JUMP,
++					     NULL, im->ip_epilogue);
++		WARN_ON(err);
++		if (IS_ENABLED(CONFIG_PREEMPTION))
++			call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu_tasks);
++		else
++			percpu_ref_kill(&im->pcref);
++		return;
++	}
++
++	/* The trampoline without fexit and fmod_ret progs doesn't call original
++	 * function and doesn't use percpu_ref.
++	 * Use call_rcu_tasks_trace() to wait for sleepable progs to finish.
++	 * Then use call_rcu_tasks() to wait for the rest of trampoline asm
++	 * and normal progs.
++	 */
++	call_rcu_tasks_trace(&im->rcu, __bpf_tramp_image_put_rcu_tasks);
++}
++
++static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx)
++{
++	struct bpf_tramp_image *im;
++	struct bpf_ksym *ksym;
++	void *image;
++	int err = -ENOMEM;
++
++	im = kzalloc(sizeof(*im), GFP_KERNEL);
++	if (!im)
++		goto out;
++
++	err = bpf_jit_charge_modmem(1);
++	if (err)
++		goto out_free_im;
++
++	err = -ENOMEM;
++	im->image = image = bpf_jit_alloc_exec_page();
++	if (!image)
++		goto out_uncharge;
++
++	err = percpu_ref_init(&im->pcref, __bpf_tramp_image_release, 0, GFP_KERNEL);
++	if (err)
++		goto out_free_image;
++
++	ksym = &im->ksym;
++	INIT_LIST_HEAD_RCU(&ksym->lnode);
++	snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu_%u", key, idx);
++	bpf_image_ksym_add(image, ksym);
++	return im;
++
++out_free_image:
++	bpf_jit_free_exec(im->image);
++out_uncharge:
++	bpf_jit_uncharge_modmem(1);
++out_free_im:
++	kfree(im);
++out:
++	return ERR_PTR(err);
++}
++
+ static int bpf_trampoline_update(struct bpf_trampoline *tr)
+ {
+-	void *old_image = tr->image + ((tr->selector + 1) & 1) * PAGE_SIZE/2;
+-	void *new_image = tr->image + (tr->selector & 1) * PAGE_SIZE/2;
++	struct bpf_tramp_image *im;
+ 	struct bpf_tramp_progs *tprogs;
+ 	u32 flags = BPF_TRAMP_F_RESTORE_REGS;
+ 	int err, total;
+@@ -198,41 +310,42 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
+ 		return PTR_ERR(tprogs);
+ 
+ 	if (total == 0) {
+-		err = unregister_fentry(tr, old_image);
++		err = unregister_fentry(tr, tr->cur_image->image);
++		bpf_tramp_image_put(tr->cur_image);
++		tr->cur_image = NULL;
+ 		tr->selector = 0;
+ 		goto out;
+ 	}
+ 
++	im = bpf_tramp_image_alloc(tr->key, tr->selector);
++	if (IS_ERR(im)) {
++		err = PTR_ERR(im);
++		goto out;
++	}
++
+ 	if (tprogs[BPF_TRAMP_FEXIT].nr_progs ||
+ 	    tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs)
+ 		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
+ 
+-	/* Though the second half of trampoline page is unused a task could be
+-	 * preempted in the middle of the first half of trampoline and two
+-	 * updates to trampoline would change the code from underneath the
+-	 * preempted task. Hence wait for tasks to voluntarily schedule or go
+-	 * to userspace.
+-	 * The same trampoline can hold both sleepable and non-sleepable progs.
+-	 * synchronize_rcu_tasks_trace() is needed to make sure all sleepable
+-	 * programs finish executing.
+-	 * Wait for these two grace periods together.
+-	 */
+-	synchronize_rcu_mult(call_rcu_tasks, call_rcu_tasks_trace);
+-
+-	err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2,
++	err = arch_prepare_bpf_trampoline(im, im->image, im->image + PAGE_SIZE,
+ 					  &tr->func.model, flags, tprogs,
+ 					  tr->func.addr);
+ 	if (err < 0)
+ 		goto out;
+ 
+-	if (tr->selector)
++	WARN_ON(tr->cur_image && tr->selector == 0);
++	WARN_ON(!tr->cur_image && tr->selector);
++	if (tr->cur_image)
+ 		/* progs already running at this address */
+-		err = modify_fentry(tr, old_image, new_image);
++		err = modify_fentry(tr, tr->cur_image->image, im->image);
+ 	else
+ 		/* first time registering */
+-		err = register_fentry(tr, new_image);
++		err = register_fentry(tr, im->image);
+ 	if (err)
+ 		goto out;
++	if (tr->cur_image)
++		bpf_tramp_image_put(tr->cur_image);
++	tr->cur_image = im;
+ 	tr->selector++;
+ out:
+ 	kfree(tprogs);
+@@ -364,17 +477,12 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
+ 		goto out;
+ 	if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))
+ 		goto out;
+-	bpf_image_ksym_del(&tr->ksym);
+-	/* This code will be executed when all bpf progs (both sleepable and
+-	 * non-sleepable) went through
+-	 * bpf_prog_put()->call_rcu[_tasks_trace]()->bpf_prog_free_deferred().
+-	 * Hence no need for another synchronize_rcu_tasks_trace() here,
+-	 * but synchronize_rcu_tasks() is still needed, since trampoline
+-	 * may not have had any sleepable programs and we need to wait
+-	 * for tasks to get out of trampoline code before freeing it.
++	/* This code will be executed even when the last bpf_tramp_image
++	 * is alive. All progs are detached from the trampoline and the
++	 * trampoline image is patched with jmp into epilogue to skip
++	 * fexit progs. The fentry-only trampoline will be freed via
++	 * multiple rcu callbacks.
+ 	 */
+-	synchronize_rcu_tasks();
+-	bpf_jit_free_exec(tr->image);
+ 	hlist_del(&tr->hlist);
+ 	kfree(tr);
+ out:
+@@ -433,8 +541,18 @@ void notrace __bpf_prog_exit_sleepable(void)
+ 	rcu_read_unlock_trace();
+ }
+ 
++void notrace __bpf_tramp_enter(struct bpf_tramp_image *tr)
++{
++	percpu_ref_get(&tr->pcref);
++}
++
++void notrace __bpf_tramp_exit(struct bpf_tramp_image *tr)
++{
++	percpu_ref_put(&tr->pcref);
++}
++
+ int __weak
+-arch_prepare_bpf_trampoline(void *image, void *image_end,
++arch_prepare_bpf_trampoline(struct bpf_tramp_image *tr, void *image, void *image_end,
+ 			    const struct btf_func_model *m, u32 flags,
+ 			    struct bpf_tramp_progs *tprogs,
+ 			    void *orig_call)
+diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
+index 5352ce50a97e3..2c25b830203cd 100644
+--- a/kernel/locking/mutex.c
++++ b/kernel/locking/mutex.c
+@@ -636,7 +636,7 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
+  */
+ static __always_inline bool
+ mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
+-		      const bool use_ww_ctx, struct mutex_waiter *waiter)
++		      struct mutex_waiter *waiter)
+ {
+ 	if (!waiter) {
+ 		/*
+@@ -712,7 +712,7 @@ fail:
+ #else
+ static __always_inline bool
+ mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
+-		      const bool use_ww_ctx, struct mutex_waiter *waiter)
++		      struct mutex_waiter *waiter)
+ {
+ 	return false;
+ }
+@@ -932,6 +932,9 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 	struct ww_mutex *ww;
+ 	int ret;
+ 
++	if (!use_ww_ctx)
++		ww_ctx = NULL;
++
+ 	might_sleep();
+ 
+ #ifdef CONFIG_DEBUG_MUTEXES
+@@ -939,7 +942,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ #endif
+ 
+ 	ww = container_of(lock, struct ww_mutex, base);
+-	if (use_ww_ctx && ww_ctx) {
++	if (ww_ctx) {
+ 		if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
+ 			return -EALREADY;
+ 
+@@ -956,10 +959,10 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
+ 
+ 	if (__mutex_trylock(lock) ||
+-	    mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, NULL)) {
++	    mutex_optimistic_spin(lock, ww_ctx, NULL)) {
+ 		/* got the lock, yay! */
+ 		lock_acquired(&lock->dep_map, ip);
+-		if (use_ww_ctx && ww_ctx)
++		if (ww_ctx)
+ 			ww_mutex_set_context_fastpath(ww, ww_ctx);
+ 		preempt_enable();
+ 		return 0;
+@@ -970,7 +973,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 	 * After waiting to acquire the wait_lock, try again.
+ 	 */
+ 	if (__mutex_trylock(lock)) {
+-		if (use_ww_ctx && ww_ctx)
++		if (ww_ctx)
+ 			__ww_mutex_check_waiters(lock, ww_ctx);
+ 
+ 		goto skip_wait;
+@@ -1023,7 +1026,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 			goto err;
+ 		}
+ 
+-		if (use_ww_ctx && ww_ctx) {
++		if (ww_ctx) {
+ 			ret = __ww_mutex_check_kill(lock, &waiter, ww_ctx);
+ 			if (ret)
+ 				goto err;
+@@ -1036,7 +1039,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 		 * ww_mutex needs to always recheck its position since its waiter
+ 		 * list is not FIFO ordered.
+ 		 */
+-		if ((use_ww_ctx && ww_ctx) || !first) {
++		if (ww_ctx || !first) {
+ 			first = __mutex_waiter_is_first(lock, &waiter);
+ 			if (first)
+ 				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+@@ -1049,7 +1052,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 		 * or we must see its unlock and acquire.
+ 		 */
+ 		if (__mutex_trylock(lock) ||
+-		    (first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, &waiter)))
++		    (first && mutex_optimistic_spin(lock, ww_ctx, &waiter)))
+ 			break;
+ 
+ 		spin_lock(&lock->wait_lock);
+@@ -1058,7 +1061,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ acquired:
+ 	__set_current_state(TASK_RUNNING);
+ 
+-	if (use_ww_ctx && ww_ctx) {
++	if (ww_ctx) {
+ 		/*
+ 		 * Wound-Wait; we stole the lock (!first_waiter), check the
+ 		 * waiters as anyone might want to wound us.
+@@ -1078,7 +1081,7 @@ skip_wait:
+ 	/* got the lock - cleanup and rejoice! */
+ 	lock_acquired(&lock->dep_map, ip);
+ 
+-	if (use_ww_ctx && ww_ctx)
++	if (ww_ctx)
+ 		ww_mutex_lock_acquired(ww, ww_ctx);
+ 
+ 	spin_unlock(&lock->wait_lock);
+diff --git a/kernel/static_call.c b/kernel/static_call.c
+index 49efbdc5b4800..f59089a122319 100644
+--- a/kernel/static_call.c
++++ b/kernel/static_call.c
+@@ -149,6 +149,7 @@ void __static_call_update(struct static_call_key *key, void *tramp, void *func)
+ 	};
+ 
+ 	for (site_mod = &first; site_mod; site_mod = site_mod->next) {
++		bool init = system_state < SYSTEM_RUNNING;
+ 		struct module *mod = site_mod->mod;
+ 
+ 		if (!site_mod->sites) {
+@@ -168,6 +169,7 @@ void __static_call_update(struct static_call_key *key, void *tramp, void *func)
+ 		if (mod) {
+ 			stop = mod->static_call_sites +
+ 			       mod->num_static_call_sites;
++			init = mod->state == MODULE_STATE_COMING;
+ 		}
+ #endif
+ 
+@@ -175,16 +177,8 @@ void __static_call_update(struct static_call_key *key, void *tramp, void *func)
+ 		     site < stop && static_call_key(site) == key; site++) {
+ 			void *site_addr = static_call_addr(site);
+ 
+-			if (static_call_is_init(site)) {
+-				/*
+-				 * Don't write to call sites which were in
+-				 * initmem and have since been freed.
+-				 */
+-				if (!mod && system_state >= SYSTEM_RUNNING)
+-					continue;
+-				if (mod && !within_module_init((unsigned long)site_addr, mod))
+-					continue;
+-			}
++			if (!init && static_call_is_init(site))
++				continue;
+ 
+ 			if (!kernel_text_address((unsigned long)site_addr)) {
+ 				/*
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index ee4be813ba85b..8bfa4e78d8951 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2984,7 +2984,8 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
+ 
+ 	size = nr_entries * sizeof(unsigned long);
+ 	event = __trace_buffer_lock_reserve(buffer, TRACE_STACK,
+-					    sizeof(*entry) + size, flags, pc);
++				    (sizeof(*entry) - sizeof(entry->caller)) + size,
++				    flags, pc);
+ 	if (!event)
+ 		goto out;
+ 	entry = ring_buffer_event_data(event);
+diff --git a/mm/memory.c b/mm/memory.c
+index 4d565d7c80169..b70bd3ba33888 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -154,7 +154,7 @@ static int __init init_zero_pfn(void)
+ 	zero_pfn = page_to_pfn(ZERO_PAGE(0));
+ 	return 0;
+ }
+-core_initcall(init_zero_pfn);
++early_initcall(init_zero_pfn);
+ 
+ void mm_trace_rss_stat(struct mm_struct *mm, int member, long count)
+ {
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 09f1ec589b80b..eb42bbb72f523 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -1617,10 +1617,6 @@ p9_client_read_once(struct p9_fid *fid, u64 offset, struct iov_iter *to,
+ 	}
+ 
+ 	p9_debug(P9_DEBUG_9P, "<<< RREAD count %d\n", count);
+-	if (!count) {
+-		p9_tag_remove(clnt, req);
+-		return 0;
+-	}
+ 
+ 	if (non_zc) {
+ 		int n = copy_to_iter(dataptr, count, to);
+diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c
+index 1d48708c5a2eb..c94b212d8e7ca 100644
+--- a/net/appletalk/ddp.c
++++ b/net/appletalk/ddp.c
+@@ -1576,8 +1576,8 @@ static int atalk_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 	struct sk_buff *skb;
+ 	struct net_device *dev;
+ 	struct ddpehdr *ddp;
+-	int size;
+-	struct atalk_route *rt;
++	int size, hard_header_len;
++	struct atalk_route *rt, *rt_lo = NULL;
+ 	int err;
+ 
+ 	if (flags & ~(MSG_DONTWAIT|MSG_CMSG_COMPAT))
+@@ -1640,7 +1640,22 @@ static int atalk_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 	SOCK_DEBUG(sk, "SK %p: Size needed %d, device %s\n",
+ 			sk, size, dev->name);
+ 
+-	size += dev->hard_header_len;
++	hard_header_len = dev->hard_header_len;
++	/* Leave room for loopback hardware header if necessary */
++	if (usat->sat_addr.s_node == ATADDR_BCAST &&
++	    (dev->flags & IFF_LOOPBACK || !(rt->flags & RTF_GATEWAY))) {
++		struct atalk_addr at_lo;
++
++		at_lo.s_node = 0;
++		at_lo.s_net  = 0;
++
++		rt_lo = atrtr_find(&at_lo);
++
++		if (rt_lo && rt_lo->dev->hard_header_len > hard_header_len)
++			hard_header_len = rt_lo->dev->hard_header_len;
++	}
++
++	size += hard_header_len;
+ 	release_sock(sk);
+ 	skb = sock_alloc_send_skb(sk, size, (flags & MSG_DONTWAIT), &err);
+ 	lock_sock(sk);
+@@ -1648,7 +1663,7 @@ static int atalk_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 		goto out;
+ 
+ 	skb_reserve(skb, ddp_dl->header_length);
+-	skb_reserve(skb, dev->hard_header_len);
++	skb_reserve(skb, hard_header_len);
+ 	skb->dev = dev;
+ 
+ 	SOCK_DEBUG(sk, "SK %p: Begin build.\n", sk);
+@@ -1699,18 +1714,12 @@ static int atalk_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 		/* loop back */
+ 		skb_orphan(skb);
+ 		if (ddp->deh_dnode == ATADDR_BCAST) {
+-			struct atalk_addr at_lo;
+-
+-			at_lo.s_node = 0;
+-			at_lo.s_net  = 0;
+-
+-			rt = atrtr_find(&at_lo);
+-			if (!rt) {
++			if (!rt_lo) {
+ 				kfree_skb(skb);
+ 				err = -ENETUNREACH;
+ 				goto out;
+ 			}
+-			dev = rt->dev;
++			dev = rt_lo->dev;
+ 			skb->dev = dev;
+ 		}
+ 		ddp_dl->request(ddp_dl, skb, dev->dev_addr);
+diff --git a/net/can/af_can.c b/net/can/af_can.c
+index 4c343b43067f6..1c95ede2c9a6e 100644
+--- a/net/can/af_can.c
++++ b/net/can/af_can.c
+@@ -304,8 +304,8 @@ static struct can_dev_rcv_lists *can_dev_rcv_lists_find(struct net *net,
+ 							struct net_device *dev)
+ {
+ 	if (dev) {
+-		struct can_ml_priv *ml_priv = dev->ml_priv;
+-		return &ml_priv->dev_rcv_lists;
++		struct can_ml_priv *can_ml = can_get_ml_priv(dev);
++		return &can_ml->dev_rcv_lists;
+ 	} else {
+ 		return net->can.rx_alldev_list;
+ 	}
+@@ -790,25 +790,6 @@ void can_proto_unregister(const struct can_proto *cp)
+ }
+ EXPORT_SYMBOL(can_proto_unregister);
+ 
+-/* af_can notifier to create/remove CAN netdevice specific structs */
+-static int can_notifier(struct notifier_block *nb, unsigned long msg,
+-			void *ptr)
+-{
+-	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+-
+-	if (dev->type != ARPHRD_CAN)
+-		return NOTIFY_DONE;
+-
+-	switch (msg) {
+-	case NETDEV_REGISTER:
+-		WARN(!dev->ml_priv,
+-		     "No CAN mid layer private allocated, please fix your driver and use alloc_candev()!\n");
+-		break;
+-	}
+-
+-	return NOTIFY_DONE;
+-}
+-
+ static int can_pernet_init(struct net *net)
+ {
+ 	spin_lock_init(&net->can.rcvlists_lock);
+@@ -876,11 +857,6 @@ static const struct net_proto_family can_family_ops = {
+ 	.owner  = THIS_MODULE,
+ };
+ 
+-/* notifier block for netdevice event */
+-static struct notifier_block can_netdev_notifier __read_mostly = {
+-	.notifier_call = can_notifier,
+-};
+-
+ static struct pernet_operations can_pernet_ops __read_mostly = {
+ 	.init = can_pernet_init,
+ 	.exit = can_pernet_exit,
+@@ -911,17 +887,12 @@ static __init int can_init(void)
+ 	err = sock_register(&can_family_ops);
+ 	if (err)
+ 		goto out_sock;
+-	err = register_netdevice_notifier(&can_netdev_notifier);
+-	if (err)
+-		goto out_notifier;
+ 
+ 	dev_add_pack(&can_packet);
+ 	dev_add_pack(&canfd_packet);
+ 
+ 	return 0;
+ 
+-out_notifier:
+-	sock_unregister(PF_CAN);
+ out_sock:
+ 	unregister_pernet_subsys(&can_pernet_ops);
+ out_pernet:
+@@ -935,7 +906,6 @@ static __exit void can_exit(void)
+ 	/* protocol unregister */
+ 	dev_remove_pack(&canfd_packet);
+ 	dev_remove_pack(&can_packet);
+-	unregister_netdevice_notifier(&can_netdev_notifier);
+ 	sock_unregister(PF_CAN);
+ 
+ 	unregister_pernet_subsys(&can_pernet_ops);
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index 137054bff9ec7..e52330f628c9f 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -140,9 +140,9 @@ static struct j1939_priv *j1939_priv_create(struct net_device *ndev)
+ static inline void j1939_priv_set(struct net_device *ndev,
+ 				  struct j1939_priv *priv)
+ {
+-	struct can_ml_priv *can_ml_priv = ndev->ml_priv;
++	struct can_ml_priv *can_ml = can_get_ml_priv(ndev);
+ 
+-	can_ml_priv->j1939_priv = priv;
++	can_ml->j1939_priv = priv;
+ }
+ 
+ static void __j1939_priv_release(struct kref *kref)
+@@ -211,12 +211,9 @@ static void __j1939_rx_release(struct kref *kref)
+ /* get pointer to priv without increasing ref counter */
+ static inline struct j1939_priv *j1939_ndev_to_priv(struct net_device *ndev)
+ {
+-	struct can_ml_priv *can_ml_priv = ndev->ml_priv;
++	struct can_ml_priv *can_ml = can_get_ml_priv(ndev);
+ 
+-	if (!can_ml_priv)
+-		return NULL;
+-
+-	return can_ml_priv->j1939_priv;
++	return can_ml->j1939_priv;
+ }
+ 
+ static struct j1939_priv *j1939_priv_get_by_ndev_locked(struct net_device *ndev)
+@@ -225,9 +222,6 @@ static struct j1939_priv *j1939_priv_get_by_ndev_locked(struct net_device *ndev)
+ 
+ 	lockdep_assert_held(&j1939_netdev_lock);
+ 
+-	if (ndev->type != ARPHRD_CAN)
+-		return NULL;
+-
+ 	priv = j1939_ndev_to_priv(ndev);
+ 	if (priv)
+ 		j1939_priv_get(priv);
+@@ -348,15 +342,16 @@ static int j1939_netdev_notify(struct notifier_block *nb,
+ 			       unsigned long msg, void *data)
+ {
+ 	struct net_device *ndev = netdev_notifier_info_to_dev(data);
++	struct can_ml_priv *can_ml = can_get_ml_priv(ndev);
+ 	struct j1939_priv *priv;
+ 
++	if (!can_ml)
++		goto notify_done;
++
+ 	priv = j1939_priv_get_by_ndev(ndev);
+ 	if (!priv)
+ 		goto notify_done;
+ 
+-	if (ndev->type != ARPHRD_CAN)
+-		goto notify_put;
+-
+ 	switch (msg) {
+ 	case NETDEV_DOWN:
+ 		j1939_cancel_active_session(priv, NULL);
+@@ -365,7 +360,6 @@ static int j1939_netdev_notify(struct notifier_block *nb,
+ 		break;
+ 	}
+ 
+-notify_put:
+ 	j1939_priv_put(priv);
+ 
+ notify_done:
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index f23966526a885..56aa66147d5ac 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -12,6 +12,7 @@
+ 
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+ 
++#include <linux/can/can-ml.h>
+ #include <linux/can/core.h>
+ #include <linux/can/skb.h>
+ #include <linux/errqueue.h>
+@@ -453,6 +454,7 @@ static int j1939_sk_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 		j1939_jsk_del(priv, jsk);
+ 		j1939_local_ecu_put(priv, jsk->addr.src_name, jsk->addr.sa);
+ 	} else {
++		struct can_ml_priv *can_ml;
+ 		struct net_device *ndev;
+ 
+ 		ndev = dev_get_by_index(net, addr->can_ifindex);
+@@ -461,15 +463,8 @@ static int j1939_sk_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 			goto out_release_sock;
+ 		}
+ 
+-		if (ndev->type != ARPHRD_CAN) {
+-			dev_put(ndev);
+-			ret = -ENODEV;
+-			goto out_release_sock;
+-		}
+-
+-		if (!ndev->ml_priv) {
+-			netdev_warn_once(ndev,
+-					 "No CAN mid layer private allocated, please fix your driver and use alloc_candev()!\n");
++		can_ml = can_get_ml_priv(ndev);
++		if (!can_ml) {
+ 			dev_put(ndev);
+ 			ret = -ENODEV;
+ 			goto out_release_sock;
+diff --git a/net/can/proc.c b/net/can/proc.c
+index 5ea8695f507eb..b15760b5c1cce 100644
+--- a/net/can/proc.c
++++ b/net/can/proc.c
+@@ -322,8 +322,11 @@ static int can_rcvlist_proc_show(struct seq_file *m, void *v)
+ 
+ 	/* receive list for registered CAN devices */
+ 	for_each_netdev_rcu(net, dev) {
+-		if (dev->type == ARPHRD_CAN && dev->ml_priv)
+-			can_rcvlist_proc_show_one(m, idx, dev, dev->ml_priv);
++		struct can_ml_priv *can_ml = can_get_ml_priv(dev);
++
++		if (can_ml)
++			can_rcvlist_proc_show_one(m, idx, dev,
++						  &can_ml->dev_rcv_lists);
+ 	}
+ 
+ 	rcu_read_unlock();
+@@ -375,8 +378,10 @@ static int can_rcvlist_sff_proc_show(struct seq_file *m, void *v)
+ 
+ 	/* sff receive list for registered CAN devices */
+ 	for_each_netdev_rcu(net, dev) {
+-		if (dev->type == ARPHRD_CAN && dev->ml_priv) {
+-			dev_rcv_lists = dev->ml_priv;
++		struct can_ml_priv *can_ml = can_get_ml_priv(dev);
++
++		if (can_ml) {
++			dev_rcv_lists = &can_ml->dev_rcv_lists;
+ 			can_rcvlist_proc_show_array(m, dev, dev_rcv_lists->rx_sff,
+ 						    ARRAY_SIZE(dev_rcv_lists->rx_sff));
+ 		}
+@@ -406,8 +411,10 @@ static int can_rcvlist_eff_proc_show(struct seq_file *m, void *v)
+ 
+ 	/* eff receive list for registered CAN devices */
+ 	for_each_netdev_rcu(net, dev) {
+-		if (dev->type == ARPHRD_CAN && dev->ml_priv) {
+-			dev_rcv_lists = dev->ml_priv;
++		struct can_ml_priv *can_ml = can_get_ml_priv(dev);
++
++		if (can_ml) {
++			dev_rcv_lists = &can_ml->dev_rcv_lists;
+ 			can_rcvlist_proc_show_array(m, dev, dev_rcv_lists->rx_eff,
+ 						    ARRAY_SIZE(dev_rcv_lists->rx_eff));
+ 		}
+diff --git a/net/core/filter.c b/net/core/filter.c
+index f0a19a48c0481..9358bc4a3711f 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -3552,11 +3552,7 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 off, u32 len_diff,
+ 	return 0;
+ }
+ 
+-static u32 __bpf_skb_max_len(const struct sk_buff *skb)
+-{
+-	return skb->dev ? skb->dev->mtu + skb->dev->hard_header_len :
+-			  SKB_MAX_ALLOC;
+-}
++#define BPF_SKB_MAX_LEN SKB_MAX_ALLOC
+ 
+ BPF_CALL_4(sk_skb_adjust_room, struct sk_buff *, skb, s32, len_diff,
+ 	   u32, mode, u64, flags)
+@@ -3605,7 +3601,7 @@ BPF_CALL_4(bpf_skb_adjust_room, struct sk_buff *, skb, s32, len_diff,
+ {
+ 	u32 len_cur, len_diff_abs = abs(len_diff);
+ 	u32 len_min = bpf_skb_net_base_len(skb);
+-	u32 len_max = __bpf_skb_max_len(skb);
++	u32 len_max = BPF_SKB_MAX_LEN;
+ 	__be16 proto = skb->protocol;
+ 	bool shrink = len_diff < 0;
+ 	u32 off;
+@@ -3688,7 +3684,7 @@ static int bpf_skb_trim_rcsum(struct sk_buff *skb, unsigned int new_len)
+ static inline int __bpf_skb_change_tail(struct sk_buff *skb, u32 new_len,
+ 					u64 flags)
+ {
+-	u32 max_len = __bpf_skb_max_len(skb);
++	u32 max_len = BPF_SKB_MAX_LEN;
+ 	u32 min_len = __bpf_skb_min_len(skb);
+ 	int ret;
+ 
+@@ -3764,7 +3760,7 @@ static const struct bpf_func_proto sk_skb_change_tail_proto = {
+ static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room,
+ 					u64 flags)
+ {
+-	u32 max_len = __bpf_skb_max_len(skb);
++	u32 max_len = BPF_SKB_MAX_LEN;
+ 	u32 new_len = skb->len + head_room;
+ 	int ret;
+ 
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index c79be25b2e0c2..d48b37b15b276 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1050,6 +1050,9 @@ proto_again:
+ 			key_control->addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
+ 		}
+ 
++		__skb_flow_dissect_ipv4(skb, flow_dissector,
++					target_container, data, iph);
++
+ 		if (ip_is_fragment(iph)) {
+ 			key_control->flags |= FLOW_DIS_IS_FRAGMENT;
+ 
+@@ -1066,9 +1069,6 @@ proto_again:
+ 			}
+ 		}
+ 
+-		__skb_flow_dissect_ipv4(skb, flow_dissector,
+-					target_container, data, iph);
+-
+ 		break;
+ 	}
+ 	case htons(ETH_P_IPV6): {
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index bd4678db9d76b..6dff64374bfe1 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -1825,11 +1825,14 @@ static int
+ svcauth_gss_release(struct svc_rqst *rqstp)
+ {
+ 	struct gss_svc_data *gsd = (struct gss_svc_data *)rqstp->rq_auth_data;
+-	struct rpc_gss_wire_cred *gc = &gsd->clcred;
++	struct rpc_gss_wire_cred *gc;
+ 	struct xdr_buf *resbuf = &rqstp->rq_res;
+ 	int stat = -EINVAL;
+ 	struct sunrpc_net *sn = net_generic(SVC_NET(rqstp), sunrpc_net_id);
+ 
++	if (!gsd)
++		goto out;
++	gc = &gsd->clcred;
+ 	if (gc->gc_proc != RPC_GSS_PROC_DATA)
+ 		goto out;
+ 	/* Release can be called twice, but we only wrap once. */
+@@ -1870,10 +1873,10 @@ out_err:
+ 	if (rqstp->rq_cred.cr_group_info)
+ 		put_group_info(rqstp->rq_cred.cr_group_info);
+ 	rqstp->rq_cred.cr_group_info = NULL;
+-	if (gsd->rsci)
++	if (gsd && gsd->rsci) {
+ 		cache_put(&gsd->rsci->h, sn->rsc_cache);
+-	gsd->rsci = NULL;
+-
++		gsd->rsci = NULL;
++	}
+ 	return stat;
+ }
+ 
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index d244616d28d88..4c8b281c39921 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1023,8 +1023,12 @@ static int azx_prepare(struct device *dev)
+ 	struct snd_card *card = dev_get_drvdata(dev);
+ 	struct azx *chip;
+ 
++	if (!azx_is_pm_ready(card))
++		return 0;
++
+ 	chip = card->private_data;
+ 	chip->pm_prepared = 1;
++	snd_power_change_state(card, SNDRV_CTL_POWER_D3hot);
+ 
+ 	flush_work(&azx_bus(chip)->unsol_work);
+ 
+@@ -1039,7 +1043,11 @@ static void azx_complete(struct device *dev)
+ 	struct snd_card *card = dev_get_drvdata(dev);
+ 	struct azx *chip;
+ 
++	if (!azx_is_pm_ready(card))
++		return;
++
+ 	chip = card->private_data;
++	snd_power_change_state(card, SNDRV_CTL_POWER_D0);
+ 	chip->pm_prepared = 0;
+ }
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 316b9b4ccb32d..58946d069ee59 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5256,7 +5256,7 @@ static void alc_determine_headset_type(struct hda_codec *codec)
+ 	case 0x10ec0274:
+ 	case 0x10ec0294:
+ 		alc_process_coef_fw(codec, coef0274);
+-		msleep(80);
++		msleep(850);
+ 		val = alc_read_coef_idx(codec, 0x46);
+ 		is_ctia = (val & 0x00f0) == 0x00f0;
+ 		break;
+@@ -5440,6 +5440,7 @@ static void alc_update_headset_jack_cb(struct hda_codec *codec,
+ 				       struct hda_jack_callback *jack)
+ {
+ 	snd_hda_gen_hp_automute(codec, jack);
++	alc_update_headset_mode(codec);
+ }
+ 
+ static void alc_probe_headset_mode(struct hda_codec *codec)
+@@ -8057,6 +8058,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x87f2, "HP ProBook 640 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index 210fcbedf2413..4d82d24c7828d 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -401,7 +401,7 @@ static const struct regmap_config cs42l42_regmap = {
+ };
+ 
+ static DECLARE_TLV_DB_SCALE(adc_tlv, -9600, 100, false);
+-static DECLARE_TLV_DB_SCALE(mixer_tlv, -6200, 100, false);
++static DECLARE_TLV_DB_SCALE(mixer_tlv, -6300, 100, true);
+ 
+ static const char * const cs42l42_hpf_freq_text[] = {
+ 	"1.86Hz", "120Hz", "235Hz", "466Hz"
+@@ -458,7 +458,7 @@ static const struct snd_kcontrol_new cs42l42_snd_controls[] = {
+ 				CS42L42_DAC_HPF_EN_SHIFT, true, false),
+ 	SOC_DOUBLE_R_TLV("Mixer Volume", CS42L42_MIXER_CHA_VOL,
+ 			 CS42L42_MIXER_CHB_VOL, CS42L42_MIXER_CH_VOL_SHIFT,
+-				0x3e, 1, mixer_tlv)
++				0x3f, 1, mixer_tlv)
+ };
+ 
+ static int cs42l42_hpdrv_evt(struct snd_soc_dapm_widget *w,
+@@ -691,24 +691,6 @@ static int cs42l42_pll_config(struct snd_soc_component *component)
+ 					CS42L42_CLK_OASRC_SEL_MASK,
+ 					CS42L42_CLK_OASRC_SEL_12 <<
+ 					CS42L42_CLK_OASRC_SEL_SHIFT);
+-			/* channel 1 on low LRCLK, 32 bit */
+-			snd_soc_component_update_bits(component,
+-					CS42L42_ASP_RX_DAI0_CH1_AP_RES,
+-					CS42L42_ASP_RX_CH_AP_MASK |
+-					CS42L42_ASP_RX_CH_RES_MASK,
+-					(CS42L42_ASP_RX_CH_AP_LOW <<
+-					CS42L42_ASP_RX_CH_AP_SHIFT) |
+-					(CS42L42_ASP_RX_CH_RES_32 <<
+-					CS42L42_ASP_RX_CH_RES_SHIFT));
+-			/* Channel 2 on high LRCLK, 32 bit */
+-			snd_soc_component_update_bits(component,
+-					CS42L42_ASP_RX_DAI0_CH2_AP_RES,
+-					CS42L42_ASP_RX_CH_AP_MASK |
+-					CS42L42_ASP_RX_CH_RES_MASK,
+-					(CS42L42_ASP_RX_CH_AP_HI <<
+-					CS42L42_ASP_RX_CH_AP_SHIFT) |
+-					(CS42L42_ASP_RX_CH_RES_32 <<
+-					CS42L42_ASP_RX_CH_RES_SHIFT));
+ 			if (pll_ratio_table[i].mclk_src_sel == 0) {
+ 				/* Pass the clock straight through */
+ 				snd_soc_component_update_bits(component,
+@@ -797,27 +779,23 @@ static int cs42l42_set_dai_fmt(struct snd_soc_dai *codec_dai, unsigned int fmt)
+ 	/* Bitclock/frame inversion */
+ 	switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
+ 	case SND_SOC_DAIFMT_NB_NF:
++		asp_cfg_val |= CS42L42_ASP_SCPOL_NOR << CS42L42_ASP_SCPOL_SHIFT;
+ 		break;
+ 	case SND_SOC_DAIFMT_NB_IF:
+-		asp_cfg_val |= CS42L42_ASP_POL_INV <<
+-				CS42L42_ASP_LCPOL_IN_SHIFT;
++		asp_cfg_val |= CS42L42_ASP_SCPOL_NOR << CS42L42_ASP_SCPOL_SHIFT;
++		asp_cfg_val |= CS42L42_ASP_LCPOL_INV << CS42L42_ASP_LCPOL_SHIFT;
+ 		break;
+ 	case SND_SOC_DAIFMT_IB_NF:
+-		asp_cfg_val |= CS42L42_ASP_POL_INV <<
+-				CS42L42_ASP_SCPOL_IN_DAC_SHIFT;
+ 		break;
+ 	case SND_SOC_DAIFMT_IB_IF:
+-		asp_cfg_val |= CS42L42_ASP_POL_INV <<
+-				CS42L42_ASP_LCPOL_IN_SHIFT;
+-		asp_cfg_val |= CS42L42_ASP_POL_INV <<
+-				CS42L42_ASP_SCPOL_IN_DAC_SHIFT;
++		asp_cfg_val |= CS42L42_ASP_LCPOL_INV << CS42L42_ASP_LCPOL_SHIFT;
+ 		break;
+ 	}
+ 
+-	snd_soc_component_update_bits(component, CS42L42_ASP_CLK_CFG,
+-				CS42L42_ASP_MODE_MASK |
+-				CS42L42_ASP_SCPOL_IN_DAC_MASK |
+-				CS42L42_ASP_LCPOL_IN_MASK, asp_cfg_val);
++	snd_soc_component_update_bits(component, CS42L42_ASP_CLK_CFG, CS42L42_ASP_MODE_MASK |
++								      CS42L42_ASP_SCPOL_MASK |
++								      CS42L42_ASP_LCPOL_MASK,
++								      asp_cfg_val);
+ 
+ 	return 0;
+ }
+@@ -828,14 +806,29 @@ static int cs42l42_pcm_hw_params(struct snd_pcm_substream *substream,
+ {
+ 	struct snd_soc_component *component = dai->component;
+ 	struct cs42l42_private *cs42l42 = snd_soc_component_get_drvdata(component);
+-	int retval;
++	unsigned int width = (params_width(params) / 8) - 1;
++	unsigned int val = 0;
+ 
+ 	cs42l42->srate = params_rate(params);
+-	cs42l42->swidth = params_width(params);
+ 
+-	retval = cs42l42_pll_config(component);
++	switch(substream->stream) {
++	case SNDRV_PCM_STREAM_PLAYBACK:
++		val |= width << CS42L42_ASP_RX_CH_RES_SHIFT;
++		/* channel 1 on low LRCLK */
++		snd_soc_component_update_bits(component, CS42L42_ASP_RX_DAI0_CH1_AP_RES,
++							 CS42L42_ASP_RX_CH_AP_MASK |
++							 CS42L42_ASP_RX_CH_RES_MASK, val);
++		/* Channel 2 on high LRCLK */
++		val |= CS42L42_ASP_RX_CH_AP_HI << CS42L42_ASP_RX_CH_AP_SHIFT;
++		snd_soc_component_update_bits(component, CS42L42_ASP_RX_DAI0_CH2_AP_RES,
++							 CS42L42_ASP_RX_CH_AP_MASK |
++							 CS42L42_ASP_RX_CH_RES_MASK, val);
++		break;
++	default:
++		break;
++	}
+ 
+-	return retval;
++	return cs42l42_pll_config(component);
+ }
+ 
+ static int cs42l42_set_sysclk(struct snd_soc_dai *dai,
+@@ -900,9 +893,9 @@ static int cs42l42_mute(struct snd_soc_dai *dai, int mute, int direction)
+ 	return 0;
+ }
+ 
+-#define CS42L42_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S18_3LE | \
+-			SNDRV_PCM_FMTBIT_S20_3LE | SNDRV_PCM_FMTBIT_S24_LE | \
+-			SNDRV_PCM_FMTBIT_S32_LE)
++#define CS42L42_FORMATS (SNDRV_PCM_FMTBIT_S16_LE |\
++			 SNDRV_PCM_FMTBIT_S24_LE |\
++			 SNDRV_PCM_FMTBIT_S32_LE )
+ 
+ 
+ static const struct snd_soc_dai_ops cs42l42_ops = {
+@@ -1801,7 +1794,7 @@ static int cs42l42_i2c_probe(struct i2c_client *i2c_client,
+ 		dev_dbg(&i2c_client->dev, "Found reset GPIO\n");
+ 		gpiod_set_value_cansleep(cs42l42->reset_gpio, 1);
+ 	}
+-	mdelay(3);
++	usleep_range(CS42L42_BOOT_TIME_US, CS42L42_BOOT_TIME_US * 2);
+ 
+ 	/* Request IRQ */
+ 	ret = devm_request_threaded_irq(&i2c_client->dev,
+@@ -1926,6 +1919,7 @@ static int cs42l42_runtime_resume(struct device *dev)
+ 	}
+ 
+ 	gpiod_set_value_cansleep(cs42l42->reset_gpio, 1);
++	usleep_range(CS42L42_BOOT_TIME_US, CS42L42_BOOT_TIME_US * 2);
+ 
+ 	regcache_cache_only(cs42l42->regmap, false);
+ 	regcache_sync(cs42l42->regmap);
+diff --git a/sound/soc/codecs/cs42l42.h b/sound/soc/codecs/cs42l42.h
+index 9e3cc528dcff0..866d7c873e3c9 100644
+--- a/sound/soc/codecs/cs42l42.h
++++ b/sound/soc/codecs/cs42l42.h
+@@ -258,11 +258,12 @@
+ #define CS42L42_ASP_SLAVE_MODE		0x00
+ #define CS42L42_ASP_MODE_SHIFT		4
+ #define CS42L42_ASP_MODE_MASK		(1 << CS42L42_ASP_MODE_SHIFT)
+-#define CS42L42_ASP_SCPOL_IN_DAC_SHIFT	2
+-#define CS42L42_ASP_SCPOL_IN_DAC_MASK	(1 << CS42L42_ASP_SCPOL_IN_DAC_SHIFT)
+-#define CS42L42_ASP_LCPOL_IN_SHIFT	0
+-#define CS42L42_ASP_LCPOL_IN_MASK	(1 << CS42L42_ASP_LCPOL_IN_SHIFT)
+-#define CS42L42_ASP_POL_INV		1
++#define CS42L42_ASP_SCPOL_SHIFT		2
++#define CS42L42_ASP_SCPOL_MASK		(3 << CS42L42_ASP_SCPOL_SHIFT)
++#define CS42L42_ASP_SCPOL_NOR		3
++#define CS42L42_ASP_LCPOL_SHIFT		0
++#define CS42L42_ASP_LCPOL_MASK		(3 << CS42L42_ASP_LCPOL_SHIFT)
++#define CS42L42_ASP_LCPOL_INV		3
+ 
+ #define CS42L42_ASP_FRM_CFG		(CS42L42_PAGE_12 + 0x08)
+ #define CS42L42_ASP_STP_SHIFT		4
+@@ -739,6 +740,7 @@
+ #define CS42L42_FRAC2_VAL(val)	(((val) & 0xff0000) >> 16)
+ 
+ #define CS42L42_NUM_SUPPLIES	5
++#define CS42L42_BOOT_TIME_US	3000
+ 
+ static const char *const cs42l42_supply_names[CS42L42_NUM_SUPPLIES] = {
+ 	"VA",
+@@ -756,7 +758,6 @@ struct  cs42l42_private {
+ 	struct completion pdn_done;
+ 	u32 sclk;
+ 	u32 srate;
+-	u32 swidth;
+ 	u8 plug_state;
+ 	u8 hs_type;
+ 	u8 ts_inv;
+diff --git a/sound/soc/codecs/es8316.c b/sound/soc/codecs/es8316.c
+index bd5d230c5df2f..609459077f9d9 100644
+--- a/sound/soc/codecs/es8316.c
++++ b/sound/soc/codecs/es8316.c
+@@ -63,13 +63,8 @@ static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(adc_pga_gain_tlv,
+ 	1, 1, TLV_DB_SCALE_ITEM(0, 0, 0),
+ 	2, 2, TLV_DB_SCALE_ITEM(250, 0, 0),
+ 	3, 3, TLV_DB_SCALE_ITEM(450, 0, 0),
+-	4, 4, TLV_DB_SCALE_ITEM(700, 0, 0),
+-	5, 5, TLV_DB_SCALE_ITEM(1000, 0, 0),
+-	6, 6, TLV_DB_SCALE_ITEM(1300, 0, 0),
+-	7, 7, TLV_DB_SCALE_ITEM(1600, 0, 0),
+-	8, 8, TLV_DB_SCALE_ITEM(1800, 0, 0),
+-	9, 9, TLV_DB_SCALE_ITEM(2100, 0, 0),
+-	10, 10, TLV_DB_SCALE_ITEM(2400, 0, 0),
++	4, 7, TLV_DB_SCALE_ITEM(700, 300, 0),
++	8, 10, TLV_DB_SCALE_ITEM(1800, 300, 0),
+ );
+ 
+ static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(hpout_vol_tlv,
+diff --git a/sound/soc/codecs/rt1015.c b/sound/soc/codecs/rt1015.c
+index 3db07293c70b6..2627910060dc1 100644
+--- a/sound/soc/codecs/rt1015.c
++++ b/sound/soc/codecs/rt1015.c
+@@ -209,6 +209,7 @@ static bool rt1015_volatile_register(struct device *dev, unsigned int reg)
+ 	case RT1015_VENDOR_ID:
+ 	case RT1015_DEVICE_ID:
+ 	case RT1015_PRO_ALT:
++	case RT1015_MAN_I2C:
+ 	case RT1015_DAC3:
+ 	case RT1015_VBAT_TEST_OUT1:
+ 	case RT1015_VBAT_TEST_OUT2:
+diff --git a/sound/soc/codecs/rt5640.c b/sound/soc/codecs/rt5640.c
+index 1414ad15d01cf..a5674c227b3a6 100644
+--- a/sound/soc/codecs/rt5640.c
++++ b/sound/soc/codecs/rt5640.c
+@@ -339,9 +339,9 @@ static bool rt5640_readable_register(struct device *dev, unsigned int reg)
+ }
+ 
+ static const DECLARE_TLV_DB_SCALE(out_vol_tlv, -4650, 150, 0);
+-static const DECLARE_TLV_DB_SCALE(dac_vol_tlv, -65625, 375, 0);
++static const DECLARE_TLV_DB_MINMAX(dac_vol_tlv, -6562, 0);
+ static const DECLARE_TLV_DB_SCALE(in_vol_tlv, -3450, 150, 0);
+-static const DECLARE_TLV_DB_SCALE(adc_vol_tlv, -17625, 375, 0);
++static const DECLARE_TLV_DB_MINMAX(adc_vol_tlv, -1762, 3000);
+ static const DECLARE_TLV_DB_SCALE(adc_bst_tlv, 0, 1200, 0);
+ 
+ /* {0, +20, +24, +30, +35, +40, +44, +50, +52} dB */
+diff --git a/sound/soc/codecs/rt5651.c b/sound/soc/codecs/rt5651.c
+index d198e191fb0c9..e59fdc81dbd45 100644
+--- a/sound/soc/codecs/rt5651.c
++++ b/sound/soc/codecs/rt5651.c
+@@ -285,9 +285,9 @@ static bool rt5651_readable_register(struct device *dev, unsigned int reg)
+ }
+ 
+ static const DECLARE_TLV_DB_SCALE(out_vol_tlv, -4650, 150, 0);
+-static const DECLARE_TLV_DB_SCALE(dac_vol_tlv, -65625, 375, 0);
++static const DECLARE_TLV_DB_MINMAX(dac_vol_tlv, -6562, 0);
+ static const DECLARE_TLV_DB_SCALE(in_vol_tlv, -3450, 150, 0);
+-static const DECLARE_TLV_DB_SCALE(adc_vol_tlv, -17625, 375, 0);
++static const DECLARE_TLV_DB_MINMAX(adc_vol_tlv, -1762, 3000);
+ static const DECLARE_TLV_DB_SCALE(adc_bst_tlv, 0, 1200, 0);
+ 
+ /* {0, +20, +24, +30, +35, +40, +44, +50, +52} dB */
+diff --git a/sound/soc/codecs/rt5659.c b/sound/soc/codecs/rt5659.c
+index 41e5917b16a5e..91a4ef7f620ca 100644
+--- a/sound/soc/codecs/rt5659.c
++++ b/sound/soc/codecs/rt5659.c
+@@ -3426,12 +3426,17 @@ static int rt5659_set_component_sysclk(struct snd_soc_component *component, int
+ {
+ 	struct rt5659_priv *rt5659 = snd_soc_component_get_drvdata(component);
+ 	unsigned int reg_val = 0;
++	int ret;
+ 
+ 	if (freq == rt5659->sysclk && clk_id == rt5659->sysclk_src)
+ 		return 0;
+ 
+ 	switch (clk_id) {
+ 	case RT5659_SCLK_S_MCLK:
++		ret = clk_set_rate(rt5659->mclk, freq);
++		if (ret)
++			return ret;
++
+ 		reg_val |= RT5659_SCLK_SRC_MCLK;
+ 		break;
+ 	case RT5659_SCLK_S_PLL1:
+diff --git a/sound/soc/codecs/rt711.c b/sound/soc/codecs/rt711.c
+index a9b1b4180c471..93d86f7558e0a 100644
+--- a/sound/soc/codecs/rt711.c
++++ b/sound/soc/codecs/rt711.c
+@@ -895,6 +895,13 @@ static int rt711_probe(struct snd_soc_component *component)
+ 	return 0;
+ }
+ 
++static void rt711_remove(struct snd_soc_component *component)
++{
++	struct rt711_priv *rt711 = snd_soc_component_get_drvdata(component);
++
++	regcache_cache_only(rt711->regmap, true);
++}
++
+ static const struct snd_soc_component_driver soc_codec_dev_rt711 = {
+ 	.probe = rt711_probe,
+ 	.set_bias_level = rt711_set_bias_level,
+@@ -905,6 +912,7 @@ static const struct snd_soc_component_driver soc_codec_dev_rt711 = {
+ 	.dapm_routes = rt711_audio_map,
+ 	.num_dapm_routes = ARRAY_SIZE(rt711_audio_map),
+ 	.set_jack = rt711_set_jack_detect,
++	.remove = rt711_remove,
+ };
+ 
+ static int rt711_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
+diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c
+index 4d6ff81146228..4c0e87e22b97b 100644
+--- a/sound/soc/codecs/sgtl5000.c
++++ b/sound/soc/codecs/sgtl5000.c
+@@ -71,7 +71,7 @@ static const struct reg_default sgtl5000_reg_defaults[] = {
+ 	{ SGTL5000_DAP_EQ_BASS_BAND4,		0x002f },
+ 	{ SGTL5000_DAP_MAIN_CHAN,		0x8000 },
+ 	{ SGTL5000_DAP_MIX_CHAN,		0x0000 },
+-	{ SGTL5000_DAP_AVC_CTRL,		0x0510 },
++	{ SGTL5000_DAP_AVC_CTRL,		0x5100 },
+ 	{ SGTL5000_DAP_AVC_THRESHOLD,		0x1473 },
+ 	{ SGTL5000_DAP_AVC_ATTACK,		0x0028 },
+ 	{ SGTL5000_DAP_AVC_DECAY,		0x0050 },
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 05a085f6dc7ce..bf65cba232e67 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -31,6 +31,7 @@
+ #include <linux/of.h>
+ #include <linux/of_graph.h>
+ #include <linux/dmi.h>
++#include <linux/acpi.h>
+ #include <sound/core.h>
+ #include <sound/jack.h>
+ #include <sound/pcm.h>
+@@ -1581,6 +1582,9 @@ int snd_soc_set_dmi_name(struct snd_soc_card *card, const char *flavour)
+ 	if (card->long_name)
+ 		return 0; /* long name already set by driver or from DMI */
+ 
++	if (!is_acpi_device_node(card->dev->fwnode))
++		return 0;
++
+ 	/* make up dmi long name as: vendor-product-version-board */
+ 	vendor = dmi_get_system_info(DMI_BOARD_VENDOR);
+ 	if (!vendor || !is_dmi_valid(vendor)) {
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 10b3a8006bdb3..5ab2a4580bfb2 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1521,6 +1521,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+ 	case USB_ID(0x21b4, 0x0081): /* AudioQuest DragonFly */
+ 	case USB_ID(0x2912, 0x30c8): /* Audioengine D1 */
+ 	case USB_ID(0x413c, 0xa506): /* Dell AE515 sound bar */
++	case USB_ID(0x046d, 0x084c): /* Logitech ConferenceCam Connect */
+ 		return true;
+ 	}
+ 
+diff --git a/tools/testing/selftests/net/forwarding/tc_flower.sh b/tools/testing/selftests/net/forwarding/tc_flower.sh
+index 058c746ee3006..b11d8e6b5bc14 100755
+--- a/tools/testing/selftests/net/forwarding/tc_flower.sh
++++ b/tools/testing/selftests/net/forwarding/tc_flower.sh
+@@ -3,7 +3,7 @@
+ 
+ ALL_TESTS="match_dst_mac_test match_src_mac_test match_dst_ip_test \
+ 	match_src_ip_test match_ip_flags_test match_pcp_test match_vlan_test \
+-	match_ip_tos_test match_indev_test"
++	match_ip_tos_test match_indev_test match_ip_ttl_test"
+ NUM_NETIFS=2
+ source tc_common.sh
+ source lib.sh
+@@ -310,6 +310,42 @@ match_ip_tos_test()
+ 	log_test "ip_tos match ($tcflags)"
+ }
+ 
++match_ip_ttl_test()
++{
++	RET=0
++
++	tc filter add dev $h2 ingress protocol ip pref 1 handle 101 flower \
++		$tcflags dst_ip 192.0.2.2 ip_ttl 63 action drop
++	tc filter add dev $h2 ingress protocol ip pref 2 handle 102 flower \
++		$tcflags dst_ip 192.0.2.2 action drop
++
++	$MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \
++		-t ip "ttl=63" -q
++
++	$MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \
++		-t ip "ttl=63,mf,frag=256" -q
++
++	tc_check_packets "dev $h2 ingress" 102 1
++	check_fail $? "Matched on the wrong filter (no check on ttl)"
++
++	tc_check_packets "dev $h2 ingress" 101 2
++	check_err $? "Did not match on correct filter (ttl=63)"
++
++	$MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \
++		-t ip "ttl=255" -q
++
++	tc_check_packets "dev $h2 ingress" 101 3
++	check_fail $? "Matched on a wrong filter (ttl=63)"
++
++	tc_check_packets "dev $h2 ingress" 102 1
++	check_err $? "Did not match on correct filter (no check on ttl)"
++
++	tc filter del dev $h2 ingress protocol ip pref 2 handle 102 flower
++	tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower
++
++	log_test "ip_ttl match ($tcflags)"
++}
++
+ match_indev_test()
+ {
+ 	RET=0


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-04-10 13:26 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-04-10 13:26 UTC (permalink / raw
  To: gentoo-commits

commit:     63ff4296933fd303f2598039ad5f1e85c9951919
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 10 13:26:28 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Apr 10 13:26:28 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=63ff4296

Linux patch 5.10.29

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1028_linux-5.10.29.patch | 1006 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1010 insertions(+)

diff --git a/0000_README b/0000_README
index a34afba..ac316db 100644
--- a/0000_README
+++ b/0000_README
@@ -155,6 +155,10 @@ Patch:  1027_linux-5.10.28.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.28
 
+Patch:  1028_linux-5.10.29.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.29
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1028_linux-5.10.29.patch b/1028_linux-5.10.29.patch
new file mode 100644
index 0000000..629ca7d
--- /dev/null
+++ b/1028_linux-5.10.29.patch
@@ -0,0 +1,1006 @@
+diff --git a/Makefile b/Makefile
+index cb76f64abb6da..1d4a50ebe3b77 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 28
++SUBLEVEL = 29
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -1083,6 +1083,17 @@ ifdef CONFIG_STACK_VALIDATION
+   endif
+ endif
+ 
++PHONY += resolve_btfids_clean
++
++resolve_btfids_O = $(abspath $(objtree))/tools/bpf/resolve_btfids
++
++# tools/bpf/resolve_btfids directory might not exist
++# in output directory, skip its clean in that case
++resolve_btfids_clean:
++ifneq ($(wildcard $(resolve_btfids_O)),)
++	$(Q)$(MAKE) -sC $(srctree)/tools/bpf/resolve_btfids O=$(resolve_btfids_O) clean
++endif
++
+ ifdef CONFIG_BPF
+ ifdef CONFIG_DEBUG_INFO_BTF
+   ifeq ($(has_libelf),1)
+@@ -1500,7 +1511,7 @@ vmlinuxclean:
+ 	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/link-vmlinux.sh clean
+ 	$(Q)$(if $(ARCH_POSTLINK), $(MAKE) -f $(ARCH_POSTLINK) clean)
+ 
+-clean: archclean vmlinuxclean
++clean: archclean vmlinuxclean resolve_btfids_clean
+ 
+ # mrproper - Delete all generated files, including .config
+ #
+diff --git a/arch/arm/boot/dts/am33xx.dtsi b/arch/arm/boot/dts/am33xx.dtsi
+index 4c22980241377..f09a61cac2dc9 100644
+--- a/arch/arm/boot/dts/am33xx.dtsi
++++ b/arch/arm/boot/dts/am33xx.dtsi
+@@ -40,6 +40,9 @@
+ 		ethernet1 = &cpsw_emac1;
+ 		spi0 = &spi0;
+ 		spi1 = &spi1;
++		mmc0 = &mmc1;
++		mmc1 = &mmc2;
++		mmc2 = &mmc3;
+ 	};
+ 
+ 	cpus {
+diff --git a/arch/ia64/kernel/err_inject.c b/arch/ia64/kernel/err_inject.c
+index 8b5b8e6bc9d9a..dd5bfed52031d 100644
+--- a/arch/ia64/kernel/err_inject.c
++++ b/arch/ia64/kernel/err_inject.c
+@@ -59,7 +59,7 @@ show_##name(struct device *dev, struct device_attribute *attr,	\
+ 		char *buf)						\
+ {									\
+ 	u32 cpu=dev->id;						\
+-	return sprintf(buf, "%lx\n", name[cpu]);			\
++	return sprintf(buf, "%llx\n", name[cpu]);			\
+ }
+ 
+ #define store(name)							\
+@@ -86,9 +86,9 @@ store_call_start(struct device *dev, struct device_attribute *attr,
+ 
+ #ifdef ERR_INJ_DEBUG
+ 	printk(KERN_DEBUG "pal_mc_err_inject for cpu%d:\n", cpu);
+-	printk(KERN_DEBUG "err_type_info=%lx,\n", err_type_info[cpu]);
+-	printk(KERN_DEBUG "err_struct_info=%lx,\n", err_struct_info[cpu]);
+-	printk(KERN_DEBUG "err_data_buffer=%lx, %lx, %lx.\n",
++	printk(KERN_DEBUG "err_type_info=%llx,\n", err_type_info[cpu]);
++	printk(KERN_DEBUG "err_struct_info=%llx,\n", err_struct_info[cpu]);
++	printk(KERN_DEBUG "err_data_buffer=%llx, %llx, %llx.\n",
+ 			  err_data_buffer[cpu].data1,
+ 			  err_data_buffer[cpu].data2,
+ 			  err_data_buffer[cpu].data3);
+@@ -117,8 +117,8 @@ store_call_start(struct device *dev, struct device_attribute *attr,
+ 
+ #ifdef ERR_INJ_DEBUG
+ 	printk(KERN_DEBUG "Returns: status=%d,\n", (int)status[cpu]);
+-	printk(KERN_DEBUG "capabilities=%lx,\n", capabilities[cpu]);
+-	printk(KERN_DEBUG "resources=%lx\n", resources[cpu]);
++	printk(KERN_DEBUG "capabilities=%llx,\n", capabilities[cpu]);
++	printk(KERN_DEBUG "resources=%llx\n", resources[cpu]);
+ #endif
+ 	return size;
+ }
+@@ -131,7 +131,7 @@ show_virtual_to_phys(struct device *dev, struct device_attribute *attr,
+ 			char *buf)
+ {
+ 	unsigned int cpu=dev->id;
+-	return sprintf(buf, "%lx\n", phys_addr[cpu]);
++	return sprintf(buf, "%llx\n", phys_addr[cpu]);
+ }
+ 
+ static ssize_t
+@@ -145,7 +145,7 @@ store_virtual_to_phys(struct device *dev, struct device_attribute *attr,
+ 	ret = get_user_pages_fast(virt_addr, 1, FOLL_WRITE, NULL);
+ 	if (ret<=0) {
+ #ifdef ERR_INJ_DEBUG
+-		printk("Virtual address %lx is not existing.\n",virt_addr);
++		printk("Virtual address %llx is not existing.\n", virt_addr);
+ #endif
+ 		return -EINVAL;
+ 	}
+@@ -163,7 +163,7 @@ show_err_data_buffer(struct device *dev,
+ {
+ 	unsigned int cpu=dev->id;
+ 
+-	return sprintf(buf, "%lx, %lx, %lx\n",
++	return sprintf(buf, "%llx, %llx, %llx\n",
+ 			err_data_buffer[cpu].data1,
+ 			err_data_buffer[cpu].data2,
+ 			err_data_buffer[cpu].data3);
+@@ -178,13 +178,13 @@ store_err_data_buffer(struct device *dev,
+ 	int ret;
+ 
+ #ifdef ERR_INJ_DEBUG
+-	printk("write err_data_buffer=[%lx,%lx,%lx] on cpu%d\n",
++	printk("write err_data_buffer=[%llx,%llx,%llx] on cpu%d\n",
+ 		 err_data_buffer[cpu].data1,
+ 		 err_data_buffer[cpu].data2,
+ 		 err_data_buffer[cpu].data3,
+ 		 cpu);
+ #endif
+-	ret=sscanf(buf, "%lx, %lx, %lx",
++	ret = sscanf(buf, "%llx, %llx, %llx",
+ 			&err_data_buffer[cpu].data1,
+ 			&err_data_buffer[cpu].data2,
+ 			&err_data_buffer[cpu].data3);
+diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c
+index 2703f7795672d..bd0a51dc345af 100644
+--- a/arch/ia64/kernel/mca.c
++++ b/arch/ia64/kernel/mca.c
+@@ -1822,7 +1822,7 @@ ia64_mca_cpu_init(void *cpu_data)
+ 			data = mca_bootmem();
+ 			first_time = 0;
+ 		} else
+-			data = (void *)__get_free_pages(GFP_KERNEL,
++			data = (void *)__get_free_pages(GFP_ATOMIC,
+ 							get_order(sz));
+ 		if (!data)
+ 			panic("Could not allocate MCA memory for cpu %d\n",
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 0a6d497221e49..9c86f2dc16b1d 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -34,7 +34,7 @@ M16_CFLAGS	 := $(call cc-option, -m16, $(CODE16GCC_CFLAGS))
+ REALMODE_CFLAGS	:= $(M16_CFLAGS) -g -Os -DDISABLE_BRANCH_PROFILING \
+ 		   -Wall -Wstrict-prototypes -march=i386 -mregparm=3 \
+ 		   -fno-strict-aliasing -fomit-frame-pointer -fno-pic \
+-		   -mno-mmx -mno-sse
++		   -mno-mmx -mno-sse $(call cc-option,-fcf-protection=none)
+ 
+ REALMODE_CFLAGS += -ffreestanding
+ REALMODE_CFLAGS += -fno-stack-protector
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 023ac12f54a29..a11796bbb9cee 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -1476,7 +1476,16 @@ emit_jmp:
+ 		}
+ 
+ 		if (image) {
+-			if (unlikely(proglen + ilen > oldproglen)) {
++			/*
++			 * When populating the image, assert that:
++			 *
++			 *  i) We do not write beyond the allocated space, and
++			 * ii) addrs[i] did not change from the prior run, in order
++			 *     to validate assumptions made for computing branch
++			 *     displacements.
++			 */
++			if (unlikely(proglen + ilen > oldproglen ||
++				     proglen + ilen != addrs[i])) {
+ 				pr_err("bpf_jit: fatal error\n");
+ 				return -EFAULT;
+ 			}
+@@ -2038,7 +2047,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 		extra_pass = true;
+ 		goto skip_init_addrs;
+ 	}
+-	addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
++	addrs = kvmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
+ 	if (!addrs) {
+ 		prog = orig_prog;
+ 		goto out_addrs;
+@@ -2128,7 +2137,7 @@ out_image:
+ 		if (image)
+ 			bpf_prog_fill_jited_linfo(prog, addrs + 1);
+ out_addrs:
+-		kfree(addrs);
++		kvfree(addrs);
+ 		kfree(jit_data);
+ 		prog->aux->jit_data = NULL;
+ 	}
+diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
+index 96fde03aa9877..2cf4d217840d8 100644
+--- a/arch/x86/net/bpf_jit_comp32.c
++++ b/arch/x86/net/bpf_jit_comp32.c
+@@ -2278,7 +2278,16 @@ notyet:
+ 		}
+ 
+ 		if (image) {
+-			if (unlikely(proglen + ilen > oldproglen)) {
++			/*
++			 * When populating the image, assert that:
++			 *
++			 *  i) We do not write beyond the allocated space, and
++			 * ii) addrs[i] did not change from the prior run, in order
++			 *     to validate assumptions made for computing branch
++			 *     displacements.
++			 */
++			if (unlikely(proglen + ilen > oldproglen ||
++				     proglen + ilen != addrs[i])) {
+ 				pr_err("bpf_jit: fatal error\n");
+ 				return -EFAULT;
+ 			}
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 45f5530666d3f..16e389dce1118 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -3044,7 +3044,9 @@ static int sysc_remove(struct platform_device *pdev)
+ 
+ 	pm_runtime_put_sync(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+-	reset_control_assert(ddata->rsts);
++
++	if (!reset_control_status(ddata->rsts))
++		reset_control_assert(ddata->rsts);
+ 
+ unprepare:
+ 	sysc_unprepare(ddata);
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c
+index f176a6f3eff66..e58670a61df4b 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
+@@ -304,7 +304,7 @@ int a5xx_power_init(struct msm_gpu *gpu)
+ 	/* Set up the limits management */
+ 	if (adreno_is_a530(adreno_gpu))
+ 		a530_lm_setup(gpu);
+-	else
++	else if (adreno_is_a540(adreno_gpu))
+ 		a540_lm_setup(gpu);
+ 
+ 	/* Set up SP/TP power collpase */
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index d93c44f6996db..e69ea810e18d9 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -43,6 +43,8 @@
+ #define DPU_DEBUGFS_DIR "msm_dpu"
+ #define DPU_DEBUGFS_HWMASKNAME "hw_log_mask"
+ 
++#define MIN_IB_BW	400000000ULL /* Min ib vote 400MB */
++
+ static int dpu_kms_hw_init(struct msm_kms *kms);
+ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms);
+ 
+@@ -929,6 +931,9 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
+ 		DPU_DEBUG("REG_DMA is not defined");
+ 	}
+ 
++	if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))
++		dpu_kms_parse_data_bus_icc_path(dpu_kms);
++
+ 	pm_runtime_get_sync(&dpu_kms->pdev->dev);
+ 
+ 	dpu_kms->core_rev = readl_relaxed(dpu_kms->mmio + 0x0);
+@@ -1030,9 +1035,6 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
+ 
+ 	dpu_vbif_init_memtypes(dpu_kms);
+ 
+-	if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))
+-		dpu_kms_parse_data_bus_icc_path(dpu_kms);
+-
+ 	pm_runtime_put_sync(&dpu_kms->pdev->dev);
+ 
+ 	return 0;
+@@ -1189,10 +1191,10 @@ static int __maybe_unused dpu_runtime_resume(struct device *dev)
+ 
+ 	ddev = dpu_kms->dev;
+ 
++	WARN_ON(!(dpu_kms->num_paths));
+ 	/* Min vote of BW is required before turning on AXI clk */
+ 	for (i = 0; i < dpu_kms->num_paths; i++)
+-		icc_set_bw(dpu_kms->path[i], 0,
+-			dpu_kms->catalog->perf.min_dram_ib);
++		icc_set_bw(dpu_kms->path[i], 0, Bps_to_icc(MIN_IB_BW));
+ 
+ 	rc = msm_dss_enable_clk(mp->clk_config, mp->num_clk, true);
+ 	if (rc) {
+diff --git a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
+index c1f6708367ae9..c1c41846b6b2b 100644
+--- a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
++++ b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
+@@ -325,7 +325,7 @@ static void dsi_pll_commit(struct dsi_pll_7nm *pll)
+ 	pll_write(base + REG_DSI_7nm_PHY_PLL_FRAC_DIV_START_LOW_1, reg->frac_div_start_low);
+ 	pll_write(base + REG_DSI_7nm_PHY_PLL_FRAC_DIV_START_MID_1, reg->frac_div_start_mid);
+ 	pll_write(base + REG_DSI_7nm_PHY_PLL_FRAC_DIV_START_HIGH_1, reg->frac_div_start_high);
+-	pll_write(base + REG_DSI_7nm_PHY_PLL_PLL_LOCKDET_RATE_1, 0x40);
++	pll_write(base + REG_DSI_7nm_PHY_PLL_PLL_LOCKDET_RATE_1, reg->pll_lockdet_rate);
+ 	pll_write(base + REG_DSI_7nm_PHY_PLL_PLL_LOCK_DELAY, 0x06);
+ 	pll_write(base + REG_DSI_7nm_PHY_PLL_CMODE_1, 0x10); /* TODO: 0x00 for CPHY */
+ 	pll_write(base + REG_DSI_7nm_PHY_PLL_CLOCK_INVERTERS, reg->pll_clock_inverters);
+diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c
+index ad2703698b052..cd59a59180385 100644
+--- a/drivers/gpu/drm/msm/msm_fence.c
++++ b/drivers/gpu/drm/msm/msm_fence.c
+@@ -45,7 +45,7 @@ int msm_wait_fence(struct msm_fence_context *fctx, uint32_t fence,
+ 	int ret;
+ 
+ 	if (fence > fctx->last_fence) {
+-		DRM_ERROR("%s: waiting on invalid fence: %u (of %u)\n",
++		DRM_ERROR_RATELIMITED("%s: waiting on invalid fence: %u (of %u)\n",
+ 				fctx->name, fence, fctx->last_fence);
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/isdn/hardware/mISDN/mISDNipac.c b/drivers/isdn/hardware/mISDN/mISDNipac.c
+index ec475087fbf93..39f841b424883 100644
+--- a/drivers/isdn/hardware/mISDN/mISDNipac.c
++++ b/drivers/isdn/hardware/mISDN/mISDNipac.c
+@@ -694,7 +694,7 @@ isac_release(struct isac_hw *isac)
+ {
+ 	if (isac->type & IPAC_TYPE_ISACX)
+ 		WriteISAC(isac, ISACX_MASK, 0xff);
+-	else
++	else if (isac->type != 0)
+ 		WriteISAC(isac, ISAC_MASK, 0xff);
+ 	if (isac->dch.timer.function != NULL) {
+ 		del_timer(&isac->dch.timer);
+diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
+index d1e4d42e497d8..3712e1786091f 100644
+--- a/drivers/net/ethernet/marvell/pxa168_eth.c
++++ b/drivers/net/ethernet/marvell/pxa168_eth.c
+@@ -1544,8 +1544,8 @@ static int pxa168_eth_remove(struct platform_device *pdev)
+ 	clk_disable_unprepare(pep->clk);
+ 	mdiobus_unregister(pep->smi_bus);
+ 	mdiobus_free(pep->smi_bus);
+-	unregister_netdev(dev);
+ 	cancel_work_sync(&pep->tx_timeout_task);
++	unregister_netdev(dev);
+ 	free_netdev(dev);
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index e2006c6053c9c..9a12df43becc4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -2326,8 +2326,9 @@ static u8 mlx5e_build_icosq_log_wq_sz(struct mlx5e_params *params,
+ {
+ 	switch (params->rq_wq_type) {
+ 	case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+-		return order_base_2(MLX5E_UMR_WQEBBS) +
+-			mlx5e_get_rq_log_wq_sz(rqp->rqc);
++		return max_t(u8, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE,
++			     order_base_2(MLX5E_UMR_WQEBBS) +
++			     mlx5e_get_rq_log_wq_sz(rqp->rqc));
+ 	default: /* MLX5_WQ_TYPE_CYCLIC */
+ 		return MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE;
+ 	}
+diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
+index 46d8b7336d8f2..a47378b7d9b2f 100644
+--- a/drivers/net/ipa/ipa_cmd.c
++++ b/drivers/net/ipa/ipa_cmd.c
+@@ -175,21 +175,23 @@ bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+ 			    : field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
+ 	if (mem->offset > offset_max ||
+ 	    ipa->mem_offset > offset_max - mem->offset) {
+-		dev_err(dev, "IPv%c %s%s table region offset too large "
+-			      "(0x%04x + 0x%04x > 0x%04x)\n",
+-			      ipv6 ? '6' : '4', hashed ? "hashed " : "",
+-			      route ? "route" : "filter",
+-			      ipa->mem_offset, mem->offset, offset_max);
++		dev_err(dev, "IPv%c %s%s table region offset too large\n",
++			ipv6 ? '6' : '4', hashed ? "hashed " : "",
++			route ? "route" : "filter");
++		dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
++			ipa->mem_offset, mem->offset, offset_max);
++
+ 		return false;
+ 	}
+ 
+ 	if (mem->offset > ipa->mem_size ||
+ 	    mem->size > ipa->mem_size - mem->offset) {
+-		dev_err(dev, "IPv%c %s%s table region out of range "
+-			      "(0x%04x + 0x%04x > 0x%04x)\n",
+-			      ipv6 ? '6' : '4', hashed ? "hashed " : "",
+-			      route ? "route" : "filter",
+-			      mem->offset, mem->size, ipa->mem_size);
++		dev_err(dev, "IPv%c %s%s table region out of range\n",
++			ipv6 ? '6' : '4', hashed ? "hashed " : "",
++			route ? "route" : "filter");
++		dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
++			mem->offset, mem->size, ipa->mem_size);
++
+ 		return false;
+ 	}
+ 
+@@ -205,22 +207,36 @@ static bool ipa_cmd_header_valid(struct ipa *ipa)
+ 	u32 size_max;
+ 	u32 size;
+ 
++	/* In ipa_cmd_hdr_init_local_add() we record the offset and size
++	 * of the header table memory area.  Make sure the offset and size
++	 * fit in the fields that need to hold them, and that the entire
++	 * range is within the overall IPA memory range.
++	 */
+ 	offset_max = field_max(HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
+ 	if (mem->offset > offset_max ||
+ 	    ipa->mem_offset > offset_max - mem->offset) {
+-		dev_err(dev, "header table region offset too large "
+-			      "(0x%04x + 0x%04x > 0x%04x)\n",
+-			      ipa->mem_offset + mem->offset, offset_max);
++		dev_err(dev, "header table region offset too large\n");
++		dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
++			ipa->mem_offset, mem->offset, offset_max);
++
+ 		return false;
+ 	}
+ 
+ 	size_max = field_max(HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
+ 	size = ipa->mem[IPA_MEM_MODEM_HEADER].size;
+ 	size += ipa->mem[IPA_MEM_AP_HEADER].size;
+-	if (mem->offset > ipa->mem_size || size > ipa->mem_size - mem->offset) {
+-		dev_err(dev, "header table region out of range "
+-			      "(0x%04x + 0x%04x > 0x%04x)\n",
+-			      mem->offset, size, ipa->mem_size);
++
++	if (size > size_max) {
++		dev_err(dev, "header table region size too large\n");
++		dev_err(dev, "    (0x%04x > 0x%08x)\n", size, size_max);
++
++		return false;
++	}
++	if (size > ipa->mem_size || mem->offset > ipa->mem_size - size) {
++		dev_err(dev, "header table region out of range\n");
++		dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
++			mem->offset, size, ipa->mem_size);
++
+ 		return false;
+ 	}
+ 
+diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
+index 86261970bd8f3..8a0cd5bf00657 100644
+--- a/drivers/platform/x86/intel-hid.c
++++ b/drivers/platform/x86/intel-hid.c
+@@ -86,6 +86,13 @@ static const struct dmi_system_id button_array_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x2 Detachable"),
+ 		},
+ 	},
++	{
++		.ident = "Lenovo ThinkPad X1 Tablet Gen 2",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_FAMILY, "ThinkPad X1 Tablet Gen 2"),
++		},
++	},
+ 	{ }
+ };
+ 
+diff --git a/drivers/platform/x86/intel_pmc_core.c b/drivers/platform/x86/intel_pmc_core.c
+index 3e5fe66333f13..e06b36e87a33f 100644
+--- a/drivers/platform/x86/intel_pmc_core.c
++++ b/drivers/platform/x86/intel_pmc_core.c
+@@ -863,34 +863,45 @@ out_unlock:
+ }
+ DEFINE_SHOW_ATTRIBUTE(pmc_core_pll);
+ 
+-static ssize_t pmc_core_ltr_ignore_write(struct file *file,
+-					 const char __user *userbuf,
+-					 size_t count, loff_t *ppos)
++static int pmc_core_send_ltr_ignore(u32 value)
+ {
+ 	struct pmc_dev *pmcdev = &pmc;
+ 	const struct pmc_reg_map *map = pmcdev->map;
+-	u32 val, buf_size, fd;
+-	int err;
+-
+-	buf_size = count < 64 ? count : 64;
+-
+-	err = kstrtou32_from_user(userbuf, buf_size, 10, &val);
+-	if (err)
+-		return err;
++	u32 reg;
++	int err = 0;
+ 
+ 	mutex_lock(&pmcdev->lock);
+ 
+-	if (val > map->ltr_ignore_max) {
++	if (value > map->ltr_ignore_max) {
+ 		err = -EINVAL;
+ 		goto out_unlock;
+ 	}
+ 
+-	fd = pmc_core_reg_read(pmcdev, map->ltr_ignore_offset);
+-	fd |= (1U << val);
+-	pmc_core_reg_write(pmcdev, map->ltr_ignore_offset, fd);
++	reg = pmc_core_reg_read(pmcdev, map->ltr_ignore_offset);
++	reg |= BIT(value);
++	pmc_core_reg_write(pmcdev, map->ltr_ignore_offset, reg);
+ 
+ out_unlock:
+ 	mutex_unlock(&pmcdev->lock);
++
++	return err;
++}
++
++static ssize_t pmc_core_ltr_ignore_write(struct file *file,
++					 const char __user *userbuf,
++					 size_t count, loff_t *ppos)
++{
++	u32 buf_size, value;
++	int err;
++
++	buf_size = min_t(u32, count, 64);
++
++	err = kstrtou32_from_user(userbuf, buf_size, 10, &value);
++	if (err)
++		return err;
++
++	err = pmc_core_send_ltr_ignore(value);
++
+ 	return err == 0 ? count : err;
+ }
+ 
+@@ -1244,6 +1255,15 @@ static int pmc_core_probe(struct platform_device *pdev)
+ 	pmcdev->pmc_xram_read_bit = pmc_core_check_read_lock_bit();
+ 	dmi_check_system(pmc_core_dmi_table);
+ 
++	/*
++	 * On TGL, due to a hardware limitation, the GBE LTR blocks PC10 when
++	 * a cable is attached. Tell the PMC to ignore it.
++	 */
++	if (pmcdev->map == &tgl_reg_map) {
++		dev_dbg(&pdev->dev, "ignoring GBE LTR\n");
++		pmc_core_send_ltr_ignore(3);
++	}
++
+ 	pmc_core_dbgfs_register(pmcdev);
+ 
+ 	device_initialized = true;
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 69402758b99c3..3b0acaeb20cf7 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -4079,13 +4079,19 @@ static bool hotkey_notify_6xxx(const u32 hkey,
+ 
+ 	case TP_HKEY_EV_KEY_NUMLOCK:
+ 	case TP_HKEY_EV_KEY_FN:
+-	case TP_HKEY_EV_KEY_FN_ESC:
+ 		/* key press events, we just ignore them as long as the EC
+ 		 * is still reporting them in the normal keyboard stream */
+ 		*send_acpi_ev = false;
+ 		*ignore_acpi_ev = true;
+ 		return true;
+ 
++	case TP_HKEY_EV_KEY_FN_ESC:
++		/* Get the media key status to foce the status LED to update */
++		acpi_evalf(hkey_handle, NULL, "GMKS", "v");
++		*send_acpi_ev = false;
++		*ignore_acpi_ev = true;
++		return true;
++
+ 	case TP_HKEY_EV_TABLET_CHANGED:
+ 		tpacpi_input_send_tabletsw();
+ 		hotkey_tablet_mode_notify_change();
+diff --git a/drivers/ptp/ptp_qoriq.c b/drivers/ptp/ptp_qoriq.c
+index beb5f74944cdf..08f4cf0ad9e3c 100644
+--- a/drivers/ptp/ptp_qoriq.c
++++ b/drivers/ptp/ptp_qoriq.c
+@@ -189,15 +189,16 @@ int ptp_qoriq_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
+ 	tmr_add = ptp_qoriq->tmr_add;
+ 	adj = tmr_add;
+ 
+-	/* calculate diff as adj*(scaled_ppm/65536)/1000000
+-	 * and round() to the nearest integer
++	/*
++	 * Calculate diff and round() to the nearest integer
++	 *
++	 * diff = adj * (ppb / 1000000000)
++	 *      = adj * scaled_ppm / 65536000000
+ 	 */
+-	adj *= scaled_ppm;
+-	diff = div_u64(adj, 8000000);
+-	diff = (diff >> 13) + ((diff >> 12) & 1);
++	diff = mul_u64_u64_div_u64(adj, scaled_ppm, 32768000000);
++	diff = DIV64_U64_ROUND_UP(diff, 2);
+ 
+ 	tmr_add = neg_adj ? tmr_add - diff : tmr_add + diff;
+-
+ 	ptp_qoriq->write(&regs->ctrl_regs->tmr_add, tmr_add);
+ 
+ 	return 0;
+diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
+index 4e37fa9b409d5..723a51a3f4316 100644
+--- a/drivers/target/target_core_pscsi.c
++++ b/drivers/target/target_core_pscsi.c
+@@ -939,6 +939,14 @@ new_bio:
+ 
+ 	return 0;
+ fail:
++	if (bio)
++		bio_put(bio);
++	while (req->bio) {
++		bio = req->bio;
++		req->bio = bio->bi_next;
++		bio_put(bio);
++	}
++	req->biotail = NULL;
+ 	return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ }
+ 
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index fe201b757baa4..6516051807b89 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -1404,13 +1404,13 @@ int bdev_disk_changed(struct block_device *bdev, bool invalidate)
+ 
+ 	lockdep_assert_held(&bdev->bd_mutex);
+ 
+-	clear_bit(GD_NEED_PART_SCAN, &bdev->bd_disk->state);
+-
+ rescan:
+ 	ret = blk_drop_partitions(bdev);
+ 	if (ret)
+ 		return ret;
+ 
++	clear_bit(GD_NEED_PART_SCAN, &disk->state);
++
+ 	/*
+ 	 * Historically we only set the capacity to zero for devices that
+ 	 * support partitions (independ of actually having partitions created).
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index be46fab4c96d8..da057570bb93d 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -164,6 +164,7 @@ int cifs_posix_open(char *full_path, struct inode **pinode,
+ 			goto posix_open_ret;
+ 		}
+ 	} else {
++		cifs_revalidate_mapping(*pinode);
+ 		cifs_fattr_to_inode(*pinode, &fattr);
+ 	}
+ 
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index db22d686c61ff..be3df90bb2bcc 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -745,8 +745,8 @@ smb2_is_valid_oplock_break(char *buffer, struct TCP_Server_Info *server)
+ 		}
+ 	}
+ 	spin_unlock(&cifs_tcp_ses_lock);
+-	cifs_dbg(FYI, "Can not process oplock break for non-existent connection\n");
+-	return false;
++	cifs_dbg(FYI, "No file id matched, oplock break ignored\n");
++	return true;
+ }
+ 
+ void
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 4ccf99cb8cdc0..0de27e75460d1 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1489,7 +1489,7 @@ static void io_queue_async_work(struct io_kiocb *req)
+ 		io_queue_linked_timeout(link);
+ }
+ 
+-static void io_kill_timeout(struct io_kiocb *req)
++static void io_kill_timeout(struct io_kiocb *req, int status)
+ {
+ 	struct io_timeout_data *io = req->async_data;
+ 	int ret;
+@@ -1499,7 +1499,7 @@ static void io_kill_timeout(struct io_kiocb *req)
+ 		atomic_set(&req->ctx->cq_timeouts,
+ 			atomic_read(&req->ctx->cq_timeouts) + 1);
+ 		list_del_init(&req->timeout.list);
+-		io_cqring_fill_event(req, 0);
++		io_cqring_fill_event(req, status);
+ 		io_put_req_deferred(req, 1);
+ 	}
+ }
+@@ -1516,7 +1516,7 @@ static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
+ 	spin_lock_irq(&ctx->completion_lock);
+ 	list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
+ 		if (io_match_task(req, tsk, files)) {
+-			io_kill_timeout(req);
++			io_kill_timeout(req, -ECANCELED);
+ 			canceled++;
+ 		}
+ 	}
+@@ -1568,7 +1568,7 @@ static void io_flush_timeouts(struct io_ring_ctx *ctx)
+ 			break;
+ 
+ 		list_del_init(&req->timeout.list);
+-		io_kill_timeout(req);
++		io_kill_timeout(req, 0);
+ 	} while (!list_empty(&ctx->timeout_list));
+ 
+ 	ctx->cq_last_tm_flush = seq;
+diff --git a/init/Kconfig b/init/Kconfig
+index d559abf38c905..fc4c9f416fadb 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -114,8 +114,7 @@ config INIT_ENV_ARG_LIMIT
+ 
+ config COMPILE_TEST
+ 	bool "Compile also drivers which will not load"
+-	depends on !UML
+-	default n
++	depends on HAS_IOMEM
+ 	help
+ 	  Some drivers can be compiled on a different platform than they are
+ 	  intended to be run on. Despite they cannot be loaded there (or even
+diff --git a/lib/math/div64.c b/lib/math/div64.c
+index 3952a07130d88..edd1090c9edb1 100644
+--- a/lib/math/div64.c
++++ b/lib/math/div64.c
+@@ -230,4 +230,5 @@ u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 c)
+ 
+ 	return res + div64_u64(a * b, c);
+ }
++EXPORT_SYMBOL(mul_u64_u64_div_u64);
+ #endif
+diff --git a/net/mac80211/aead_api.c b/net/mac80211/aead_api.c
+index d7b3d905d5353..b00d6f5b33f40 100644
+--- a/net/mac80211/aead_api.c
++++ b/net/mac80211/aead_api.c
+@@ -23,6 +23,7 @@ int aead_encrypt(struct crypto_aead *tfm, u8 *b_0, u8 *aad, size_t aad_len,
+ 	struct aead_request *aead_req;
+ 	int reqsize = sizeof(*aead_req) + crypto_aead_reqsize(tfm);
+ 	u8 *__aad;
++	int ret;
+ 
+ 	aead_req = kzalloc(reqsize + aad_len, GFP_ATOMIC);
+ 	if (!aead_req)
+@@ -40,10 +41,10 @@ int aead_encrypt(struct crypto_aead *tfm, u8 *b_0, u8 *aad, size_t aad_len,
+ 	aead_request_set_crypt(aead_req, sg, sg, data_len, b_0);
+ 	aead_request_set_ad(aead_req, sg[0].length);
+ 
+-	crypto_aead_encrypt(aead_req);
++	ret = crypto_aead_encrypt(aead_req);
+ 	kfree_sensitive(aead_req);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ int aead_decrypt(struct crypto_aead *tfm, u8 *b_0, u8 *aad, size_t aad_len,
+diff --git a/net/mac80211/aes_gmac.c b/net/mac80211/aes_gmac.c
+index 6f3b3a0cc10a4..512cab073f2e8 100644
+--- a/net/mac80211/aes_gmac.c
++++ b/net/mac80211/aes_gmac.c
+@@ -22,6 +22,7 @@ int ieee80211_aes_gmac(struct crypto_aead *tfm, const u8 *aad, u8 *nonce,
+ 	struct aead_request *aead_req;
+ 	int reqsize = sizeof(*aead_req) + crypto_aead_reqsize(tfm);
+ 	const __le16 *fc;
++	int ret;
+ 
+ 	if (data_len < GMAC_MIC_LEN)
+ 		return -EINVAL;
+@@ -59,10 +60,10 @@ int ieee80211_aes_gmac(struct crypto_aead *tfm, const u8 *aad, u8 *nonce,
+ 	aead_request_set_crypt(aead_req, sg, sg, 0, iv);
+ 	aead_request_set_ad(aead_req, GMAC_AAD_LEN + data_len);
+ 
+-	crypto_aead_encrypt(aead_req);
++	ret = crypto_aead_encrypt(aead_req);
+ 	kfree_sensitive(aead_req);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ struct crypto_aead *ieee80211_aes_gmac_key_setup(const u8 key[],
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 523380aed92eb..19c093bb3876e 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -982,8 +982,19 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 			continue;
+ 
+ 		if (!dflt_chandef.chan) {
++			/*
++			 * Assign the first enabled channel to dflt_chandef
++			 * from the list of channels
++			 */
++			for (i = 0; i < sband->n_channels; i++)
++				if (!(sband->channels[i].flags &
++						IEEE80211_CHAN_DISABLED))
++					break;
++			/* if none found then use the first anyway */
++			if (i == sband->n_channels)
++				i = 0;
+ 			cfg80211_chandef_create(&dflt_chandef,
+-						&sband->channels[0],
++						&sband->channels[i],
+ 						NL80211_CHAN_NO_HT);
+ 			/* init channel we're on */
+ 			if (!local->use_chanctx && !local->_oper_chandef.chan) {
+diff --git a/net/netfilter/nf_conntrack_proto_gre.c b/net/netfilter/nf_conntrack_proto_gre.c
+index 5b05487a60d21..db11e403d8187 100644
+--- a/net/netfilter/nf_conntrack_proto_gre.c
++++ b/net/netfilter/nf_conntrack_proto_gre.c
+@@ -218,9 +218,6 @@ int nf_conntrack_gre_packet(struct nf_conn *ct,
+ 			    enum ip_conntrack_info ctinfo,
+ 			    const struct nf_hook_state *state)
+ {
+-	if (state->pf != NFPROTO_IPV4)
+-		return -NF_ACCEPT;
+-
+ 	if (!nf_ct_is_confirmed(ct)) {
+ 		unsigned int *timeouts = nf_ct_timeout_lookup(ct);
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 978a968d7aeda..2e76935db2c88 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -6573,6 +6573,9 @@ static int nft_register_flowtable_net_hooks(struct net *net,
+ 
+ 	list_for_each_entry(hook, hook_list, list) {
+ 		list_for_each_entry(ft, &table->flowtables, list) {
++			if (!nft_is_active_next(net, ft))
++				continue;
++
+ 			list_for_each_entry(hook2, &ft->hook_list, list) {
+ 				if (hook->ops.dev == hook2->ops.dev &&
+ 				    hook->ops.pf == hook2->ops.pf) {
+diff --git a/tools/bpf/resolve_btfids/.gitignore b/tools/bpf/resolve_btfids/.gitignore
+index a026df7dc2809..16913fffc9859 100644
+--- a/tools/bpf/resolve_btfids/.gitignore
++++ b/tools/bpf/resolve_btfids/.gitignore
+@@ -1,4 +1,3 @@
+-/FEATURE-DUMP.libbpf
+-/bpf_helper_defs.h
+ /fixdep
+ /resolve_btfids
++/libbpf/
+diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile
+index bf656432ad736..bb9fa8de7e625 100644
+--- a/tools/bpf/resolve_btfids/Makefile
++++ b/tools/bpf/resolve_btfids/Makefile
+@@ -2,11 +2,7 @@
+ include ../../scripts/Makefile.include
+ include ../../scripts/Makefile.arch
+ 
+-ifeq ($(srctree),)
+-srctree := $(patsubst %/,%,$(dir $(CURDIR)))
+-srctree := $(patsubst %/,%,$(dir $(srctree)))
+-srctree := $(patsubst %/,%,$(dir $(srctree)))
+-endif
++srctree := $(abspath $(CURDIR)/../../../)
+ 
+ ifeq ($(V),1)
+   Q =
+@@ -22,28 +18,29 @@ AR       = $(HOSTAR)
+ CC       = $(HOSTCC)
+ LD       = $(HOSTLD)
+ ARCH     = $(HOSTARCH)
++RM      ?= rm
+ 
+ OUTPUT ?= $(srctree)/tools/bpf/resolve_btfids/
+ 
+ LIBBPF_SRC := $(srctree)/tools/lib/bpf/
+ SUBCMD_SRC := $(srctree)/tools/lib/subcmd/
+ 
+-BPFOBJ     := $(OUTPUT)/libbpf.a
+-SUBCMDOBJ  := $(OUTPUT)/libsubcmd.a
++BPFOBJ     := $(OUTPUT)/libbpf/libbpf.a
++SUBCMDOBJ  := $(OUTPUT)/libsubcmd/libsubcmd.a
+ 
+ BINARY     := $(OUTPUT)/resolve_btfids
+ BINARY_IN  := $(BINARY)-in.o
+ 
+ all: $(BINARY)
+ 
+-$(OUTPUT):
++$(OUTPUT) $(OUTPUT)/libbpf $(OUTPUT)/libsubcmd:
+ 	$(call msg,MKDIR,,$@)
+-	$(Q)mkdir -p $(OUTPUT)
++	$(Q)mkdir -p $(@)
+ 
+-$(SUBCMDOBJ): fixdep FORCE
+-	$(Q)$(MAKE) -C $(SUBCMD_SRC) OUTPUT=$(OUTPUT)
++$(SUBCMDOBJ): fixdep FORCE | $(OUTPUT)/libsubcmd
++	$(Q)$(MAKE) -C $(SUBCMD_SRC) OUTPUT=$(abspath $(dir $@))/ $(abspath $@)
+ 
+-$(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(OUTPUT)
++$(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(OUTPUT)/libbpf
+ 	$(Q)$(MAKE) $(submake_extras) -C $(LIBBPF_SRC)  OUTPUT=$(abspath $(dir $@))/ $(abspath $@)
+ 
+ CFLAGS := -g \
+@@ -57,24 +54,27 @@ LIBS = -lelf -lz
+ export srctree OUTPUT CFLAGS Q
+ include $(srctree)/tools/build/Makefile.include
+ 
+-$(BINARY_IN): fixdep FORCE
++$(BINARY_IN): fixdep FORCE | $(OUTPUT)
+ 	$(Q)$(MAKE) $(build)=resolve_btfids
+ 
+ $(BINARY): $(BPFOBJ) $(SUBCMDOBJ) $(BINARY_IN)
+ 	$(call msg,LINK,$@)
+ 	$(Q)$(CC) $(BINARY_IN) $(LDFLAGS) -o $@ $(BPFOBJ) $(SUBCMDOBJ) $(LIBS)
+ 
+-libsubcmd-clean:
+-	$(Q)$(MAKE) -C $(SUBCMD_SRC) OUTPUT=$(OUTPUT) clean
+-
+-libbpf-clean:
+-	$(Q)$(MAKE) -C $(LIBBPF_SRC) OUTPUT=$(OUTPUT) clean
++clean_objects := $(wildcard $(OUTPUT)/*.o                \
++                            $(OUTPUT)/.*.o.cmd           \
++                            $(OUTPUT)/.*.o.d             \
++                            $(OUTPUT)/libbpf             \
++                            $(OUTPUT)/libsubcmd          \
++                            $(OUTPUT)/resolve_btfids)
+ 
+-clean: libsubcmd-clean libbpf-clean fixdep-clean
++ifneq ($(clean_objects),)
++clean: fixdep-clean
+ 	$(call msg,CLEAN,$(BINARY))
+-	$(Q)$(RM) -f $(BINARY); \
+-	$(RM) -rf $(if $(OUTPUT),$(OUTPUT),.)/feature; \
+-	find $(if $(OUTPUT),$(OUTPUT),.) -name \*.o -or -name \*.o.cmd -or -name \*.o.d | xargs $(RM)
++	$(Q)$(RM) -rf $(clean_objects)
++else
++clean:
++endif
+ 
+ tags:
+ 	$(call msg,GEN,,tags)
+diff --git a/tools/testing/kunit/kunit_config.py b/tools/testing/kunit/kunit_config.py
+index 02ffc3a3e5dc7..b30e9d6db6b40 100644
+--- a/tools/testing/kunit/kunit_config.py
++++ b/tools/testing/kunit/kunit_config.py
+@@ -12,7 +12,7 @@ import re
+ CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_(\w+) is not set$'
+ CONFIG_PATTERN = r'^CONFIG_(\w+)=(\S+|".*")$'
+ 
+-KconfigEntryBase = collections.namedtuple('KconfigEntry', ['name', 'value'])
++KconfigEntryBase = collections.namedtuple('KconfigEntryBase', ['name', 'value'])
+ 
+ class KconfigEntry(KconfigEntryBase):
+ 
+diff --git a/tools/testing/selftests/arm64/fp/sve-test.S b/tools/testing/selftests/arm64/fp/sve-test.S
+index f95074c9b48b7..07f14e279a904 100644
+--- a/tools/testing/selftests/arm64/fp/sve-test.S
++++ b/tools/testing/selftests/arm64/fp/sve-test.S
+@@ -284,16 +284,28 @@ endfunction
+ // Set up test pattern in the FFR
+ // x0: pid
+ // x2: generation
++//
++// We need to generate a canonical FFR value, which consists of a number of
++// low "1" bits, followed by a number of zeros. This gives us 17 unique values
++// per 16 bits of FFR, so we create a 4 bit signature out of the PID and
++// generation, and use that as the initial number of ones in the pattern.
++// We fill the upper lanes of FFR with zeros.
+ // Beware: corrupts P0.
+ function setup_ffr
+ 	mov	x4, x30
+ 
+-	bl	pattern
++	and	w0, w0, #0x3
++	bfi	w0, w2, #2, #2
++	mov	w1, #1
++	lsl	w1, w1, w0
++	sub	w1, w1, #1
++
+ 	ldr	x0, =ffrref
+-	ldr	x1, =scratch
+-	rdvl	x2, #1
+-	lsr	x2, x2, #3
+-	bl	memcpy
++	strh	w1, [x0], 2
++	rdvl	x1, #1
++	lsr	x1, x1, #3
++	sub	x1, x1, #2
++	bl	memclr
+ 
+ 	mov	x0, #0
+ 	ldr	x1, =ffrref
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index e63f316327080..2cf32e6b376e1 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -99,7 +99,7 @@ endef
+ ifeq ($(CAN_BUILD_I386),1)
+ $(BINARIES_32): CFLAGS += -m32
+ $(BINARIES_32): LDLIBS += -lrt -ldl -lm
+-$(BINARIES_32): %_32: %.c
++$(BINARIES_32): $(OUTPUT)/%_32: %.c
+ 	$(CC) $(CFLAGS) $(EXTRA_CFLAGS) $(notdir $^) $(LDLIBS) -o $@
+ $(foreach t,$(TARGETS),$(eval $(call gen-target-rule-32,$(t))))
+ endif
+@@ -107,7 +107,7 @@ endif
+ ifeq ($(CAN_BUILD_X86_64),1)
+ $(BINARIES_64): CFLAGS += -m64
+ $(BINARIES_64): LDLIBS += -lrt -ldl
+-$(BINARIES_64): %_64: %.c
++$(BINARIES_64): $(OUTPUT)/%_64: %.c
+ 	$(CC) $(CFLAGS) $(EXTRA_CFLAGS) $(notdir $^) $(LDLIBS) -o $@
+ $(foreach t,$(TARGETS),$(eval $(call gen-target-rule-64,$(t))))
+ endif


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-04-14 11:07 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-04-14 11:07 UTC (permalink / raw
  To: gentoo-commits

commit:     b36c49b2edea89c887f039a660805e275b16c3f1
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 14 11:06:22 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Apr 14 11:06:33 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b36c49b2

Linux patch 5.10.30

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1029_linux-5.10.30.patch | 7702 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7706 insertions(+)

diff --git a/0000_README b/0000_README
index ac316db..a5a8bc1 100644
--- a/0000_README
+++ b/0000_README
@@ -159,6 +159,10 @@ Patch:  1028_linux-5.10.29.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.29
 
+Patch:  1029_linux-5.10.30.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.30
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1029_linux-5.10.30.patch b/1029_linux-5.10.30.patch
new file mode 100644
index 0000000..3789577
--- /dev/null
+++ b/1029_linux-5.10.30.patch
@@ -0,0 +1,7702 @@
+diff --git a/Documentation/devicetree/bindings/net/ethernet-controller.yaml b/Documentation/devicetree/bindings/net/ethernet-controller.yaml
+index 39147d33e8c7c..9e41dad9ef4da 100644
+--- a/Documentation/devicetree/bindings/net/ethernet-controller.yaml
++++ b/Documentation/devicetree/bindings/net/ethernet-controller.yaml
+@@ -49,7 +49,7 @@ properties:
+     description:
+       Reference to an nvmem node for the MAC address
+ 
+-  nvmem-cells-names:
++  nvmem-cell-names:
+     const: mac-address
+ 
+   phy-connection-type:
+diff --git a/Makefile b/Makefile
+index 1d4a50ebe3b77..872af26e085a1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 29
++SUBLEVEL = 30
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/armada-385-turris-omnia.dts b/arch/arm/boot/dts/armada-385-turris-omnia.dts
+index 768b6c5d2129a..fde4c302f08ec 100644
+--- a/arch/arm/boot/dts/armada-385-turris-omnia.dts
++++ b/arch/arm/boot/dts/armada-385-turris-omnia.dts
+@@ -236,6 +236,7 @@
+ 		status = "okay";
+ 		compatible = "ethernet-phy-id0141.0DD1", "ethernet-phy-ieee802.3-c22";
+ 		reg = <1>;
++		marvell,reg-init = <3 18 0 0x4985>;
+ 
+ 		/* irq is connected to &pcawan pin 7 */
+ 	};
+diff --git a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+index e361df26a168d..5f84e9f2b5767 100644
+--- a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+@@ -432,6 +432,7 @@
+ 	pinctrl-0 = <&pinctrl_usdhc2>;
+ 	cd-gpios = <&gpio1 4 GPIO_ACTIVE_LOW>;
+ 	wp-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>;
++	vmmc-supply = <&vdd_sd1_reg>;
+ 	status = "disabled";
+ };
+ 
+@@ -441,5 +442,6 @@
+ 		     &pinctrl_usdhc3_cdwp>;
+ 	cd-gpios = <&gpio1 27 GPIO_ACTIVE_LOW>;
+ 	wp-gpios = <&gpio1 29 GPIO_ACTIVE_HIGH>;
++	vmmc-supply = <&vdd_sd0_reg>;
+ 	status = "disabled";
+ };
+diff --git a/arch/arm/mach-omap2/omap-secure.c b/arch/arm/mach-omap2/omap-secure.c
+index f70d561f37f71..0659ab4cb0af3 100644
+--- a/arch/arm/mach-omap2/omap-secure.c
++++ b/arch/arm/mach-omap2/omap-secure.c
+@@ -9,6 +9,7 @@
+  */
+ 
+ #include <linux/arm-smccc.h>
++#include <linux/cpu_pm.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+ #include <linux/io.h>
+@@ -20,6 +21,7 @@
+ 
+ #include "common.h"
+ #include "omap-secure.h"
++#include "soc.h"
+ 
+ static phys_addr_t omap_secure_memblock_base;
+ 
+@@ -213,3 +215,40 @@ void __init omap_secure_init(void)
+ {
+ 	omap_optee_init_check();
+ }
++
++/*
++ * Dummy dispatcher call after core OSWR and MPU off. Updates the ROM return
++ * address after MMU has been re-enabled after CPU1 has been woken up again.
++ * Otherwise the ROM code will attempt to use the earlier physical return
++ * address that got set with MMU off when waking up CPU1. Only used on secure
++ * devices.
++ */
++static int cpu_notifier(struct notifier_block *nb, unsigned long cmd, void *v)
++{
++	switch (cmd) {
++	case CPU_CLUSTER_PM_EXIT:
++		omap_secure_dispatcher(OMAP4_PPA_SERVICE_0,
++				       FLAG_START_CRITICAL,
++				       0, 0, 0, 0, 0);
++		break;
++	default:
++		break;
++	}
++
++	return NOTIFY_OK;
++}
++
++static struct notifier_block secure_notifier_block = {
++	.notifier_call = cpu_notifier,
++};
++
++static int __init secure_pm_init(void)
++{
++	if (omap_type() == OMAP2_DEVICE_TYPE_GP || !soc_is_omap44xx())
++		return 0;
++
++	cpu_pm_register_notifier(&secure_notifier_block);
++
++	return 0;
++}
++omap_arch_initcall(secure_pm_init);
+diff --git a/arch/arm/mach-omap2/omap-secure.h b/arch/arm/mach-omap2/omap-secure.h
+index 4aaa95706d39f..172069f316164 100644
+--- a/arch/arm/mach-omap2/omap-secure.h
++++ b/arch/arm/mach-omap2/omap-secure.h
+@@ -50,6 +50,7 @@
+ #define OMAP5_DRA7_MON_SET_ACR_INDEX	0x107
+ 
+ /* Secure PPA(Primary Protected Application) APIs */
++#define OMAP4_PPA_SERVICE_0		0x21
+ #define OMAP4_PPA_L2_POR_INDEX		0x23
+ #define OMAP4_PPA_CPU_ACTRL_SMP_INDEX	0x25
+ 
+diff --git a/arch/arm/mach-omap2/pmic-cpcap.c b/arch/arm/mach-omap2/pmic-cpcap.c
+index 09076ad0576d9..668dc84fd31e0 100644
+--- a/arch/arm/mach-omap2/pmic-cpcap.c
++++ b/arch/arm/mach-omap2/pmic-cpcap.c
+@@ -246,10 +246,10 @@ int __init omap4_cpcap_init(void)
+ 	omap_voltage_register_pmic(voltdm, &omap443x_max8952_mpu);
+ 
+ 	if (of_machine_is_compatible("motorola,droid-bionic")) {
+-		voltdm = voltdm_lookup("mpu");
++		voltdm = voltdm_lookup("core");
+ 		omap_voltage_register_pmic(voltdm, &omap_cpcap_core);
+ 
+-		voltdm = voltdm_lookup("mpu");
++		voltdm = voltdm_lookup("iva");
+ 		omap_voltage_register_pmic(voltdm, &omap_cpcap_iva);
+ 	} else {
+ 		voltdm = voltdm_lookup("core");
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h b/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
+index 5ccc4cc91959d..a003e6af33533 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
++++ b/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
+@@ -124,7 +124,7 @@
+ #define MX8MM_IOMUXC_SD1_CMD_USDHC1_CMD                                     0x0A4 0x30C 0x000 0x0 0x0
+ #define MX8MM_IOMUXC_SD1_CMD_GPIO2_IO1                                      0x0A4 0x30C 0x000 0x5 0x0
+ #define MX8MM_IOMUXC_SD1_DATA0_USDHC1_DATA0                                 0x0A8 0x310 0x000 0x0 0x0
+-#define MX8MM_IOMUXC_SD1_DATA0_GPIO2_IO2                                    0x0A8 0x31  0x000 0x5 0x0
++#define MX8MM_IOMUXC_SD1_DATA0_GPIO2_IO2                                    0x0A8 0x310 0x000 0x5 0x0
+ #define MX8MM_IOMUXC_SD1_DATA1_USDHC1_DATA1                                 0x0AC 0x314 0x000 0x0 0x0
+ #define MX8MM_IOMUXC_SD1_DATA1_GPIO2_IO3                                    0x0AC 0x314 0x000 0x5 0x0
+ #define MX8MM_IOMUXC_SD1_DATA2_USDHC1_DATA2                                 0x0B0 0x318 0x000 0x0 0x0
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-pinfunc.h b/arch/arm64/boot/dts/freescale/imx8mq-pinfunc.h
+index b94b02080a344..68e8fa1729741 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-pinfunc.h
++++ b/arch/arm64/boot/dts/freescale/imx8mq-pinfunc.h
+@@ -130,7 +130,7 @@
+ #define MX8MQ_IOMUXC_SD1_CMD_USDHC1_CMD                                     0x0A4 0x30C 0x000 0x0 0x0
+ #define MX8MQ_IOMUXC_SD1_CMD_GPIO2_IO1                                      0x0A4 0x30C 0x000 0x5 0x0
+ #define MX8MQ_IOMUXC_SD1_DATA0_USDHC1_DATA0                                 0x0A8 0x310 0x000 0x0 0x0
+-#define MX8MQ_IOMUXC_SD1_DATA0_GPIO2_IO2                                    0x0A8 0x31  0x000 0x5 0x0
++#define MX8MQ_IOMUXC_SD1_DATA0_GPIO2_IO2                                    0x0A8 0x310 0x000 0x5 0x0
+ #define MX8MQ_IOMUXC_SD1_DATA1_USDHC1_DATA1                                 0x0AC 0x314 0x000 0x0 0x0
+ #define MX8MQ_IOMUXC_SD1_DATA1_GPIO2_IO3                                    0x0AC 0x314 0x000 0x5 0x0
+ #define MX8MQ_IOMUXC_SD1_DATA2_USDHC1_DATA2                                 0x0B0 0x318 0x000 0x0 0x0
+diff --git a/arch/ia64/include/asm/ptrace.h b/arch/ia64/include/asm/ptrace.h
+index b3aa460901012..08179135905cd 100644
+--- a/arch/ia64/include/asm/ptrace.h
++++ b/arch/ia64/include/asm/ptrace.h
+@@ -54,8 +54,7 @@
+ 
+ static inline unsigned long user_stack_pointer(struct pt_regs *regs)
+ {
+-	/* FIXME: should this be bspstore + nr_dirty regs? */
+-	return regs->ar_bspstore;
++	return regs->r12;
+ }
+ 
+ static inline int is_syscall_success(struct pt_regs *regs)
+@@ -79,11 +78,6 @@ static inline long regs_return_value(struct pt_regs *regs)
+ 	unsigned long __ip = instruction_pointer(regs);			\
+ 	(__ip & ~3UL) + ((__ip & 3UL) << 2);				\
+ })
+-/*
+- * Why not default?  Because user_stack_pointer() on ia64 gives register
+- * stack backing store instead...
+- */
+-#define current_user_stack_pointer() (current_pt_regs()->r12)
+ 
+   /* given a pointer to a task_struct, return the user's pt_regs */
+ # define task_pt_regs(t)		(((struct pt_regs *) ((char *) (t) + IA64_STK_OFFSET)) - 1)
+diff --git a/arch/nds32/mm/cacheflush.c b/arch/nds32/mm/cacheflush.c
+index 6eb98a7ad27d2..ad5344ef5d334 100644
+--- a/arch/nds32/mm/cacheflush.c
++++ b/arch/nds32/mm/cacheflush.c
+@@ -238,7 +238,7 @@ void flush_dcache_page(struct page *page)
+ {
+ 	struct address_space *mapping;
+ 
+-	mapping = page_mapping(page);
++	mapping = page_mapping_file(page);
+ 	if (mapping && !mapping_mapped(mapping))
+ 		set_bit(PG_dcache_dirty, &page->flags);
+ 	else {
+diff --git a/arch/parisc/include/asm/cmpxchg.h b/arch/parisc/include/asm/cmpxchg.h
+index cf5ee9b0b393c..84ee232278a6a 100644
+--- a/arch/parisc/include/asm/cmpxchg.h
++++ b/arch/parisc/include/asm/cmpxchg.h
+@@ -72,7 +72,7 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
+ #endif
+ 	case 4: return __cmpxchg_u32((unsigned int *)ptr,
+ 				     (unsigned int)old, (unsigned int)new_);
+-	case 1: return __cmpxchg_u8((u8 *)ptr, (u8)old, (u8)new_);
++	case 1: return __cmpxchg_u8((u8 *)ptr, old & 0xff, new_ & 0xff);
+ 	}
+ 	__cmpxchg_called_with_bad_pointer();
+ 	return old;
+diff --git a/arch/s390/kernel/cpcmd.c b/arch/s390/kernel/cpcmd.c
+index af013b4244d34..2da0273597989 100644
+--- a/arch/s390/kernel/cpcmd.c
++++ b/arch/s390/kernel/cpcmd.c
+@@ -37,10 +37,12 @@ static int diag8_noresponse(int cmdlen)
+ 
+ static int diag8_response(int cmdlen, char *response, int *rlen)
+ {
++	unsigned long _cmdlen = cmdlen | 0x40000000L;
++	unsigned long _rlen = *rlen;
+ 	register unsigned long reg2 asm ("2") = (addr_t) cpcmd_buf;
+ 	register unsigned long reg3 asm ("3") = (addr_t) response;
+-	register unsigned long reg4 asm ("4") = cmdlen | 0x40000000L;
+-	register unsigned long reg5 asm ("5") = *rlen;
++	register unsigned long reg4 asm ("4") = _cmdlen;
++	register unsigned long reg5 asm ("5") = _rlen;
+ 
+ 	asm volatile(
+ 		"	diag	%2,%0,0x8\n"
+diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
+index 57ef2094af93e..630ff08532be8 100644
+--- a/arch/x86/include/asm/smp.h
++++ b/arch/x86/include/asm/smp.h
+@@ -132,7 +132,7 @@ void native_play_dead(void);
+ void play_dead_common(void);
+ void wbinvd_on_cpu(int cpu);
+ int wbinvd_on_all_cpus(void);
+-bool wakeup_cpu0(void);
++void cond_wakeup_cpu0(void);
+ 
+ void native_smp_send_reschedule(int cpu);
+ void native_send_call_func_ipi(const struct cpumask *mask);
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 5ea5f964f0a97..b95d1c533fef5 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -1655,13 +1655,17 @@ void play_dead_common(void)
+ 	local_irq_disable();
+ }
+ 
+-bool wakeup_cpu0(void)
++/**
++ * cond_wakeup_cpu0 - Wake up CPU0 if needed.
++ *
++ * If NMI wants to wake up CPU0, start CPU0.
++ */
++void cond_wakeup_cpu0(void)
+ {
+ 	if (smp_processor_id() == 0 && enable_start_cpu0)
+-		return true;
+-
+-	return false;
++		start_cpu0();
+ }
++EXPORT_SYMBOL_GPL(cond_wakeup_cpu0);
+ 
+ /*
+  * We need to flush the caches before going to sleep, lest we have
+@@ -1730,11 +1734,8 @@ static inline void mwait_play_dead(void)
+ 		__monitor(mwait_ptr, 0, 0);
+ 		mb();
+ 		__mwait(eax, 0);
+-		/*
+-		 * If NMI wants to wake up CPU0, start CPU0.
+-		 */
+-		if (wakeup_cpu0())
+-			start_cpu0();
++
++		cond_wakeup_cpu0();
+ 	}
+ }
+ 
+@@ -1745,11 +1746,8 @@ void hlt_play_dead(void)
+ 
+ 	while (1) {
+ 		native_halt();
+-		/*
+-		 * If NMI wants to wake up CPU0, start CPU0.
+-		 */
+-		if (wakeup_cpu0())
+-			start_cpu0();
++
++		cond_wakeup_cpu0();
+ 	}
+ }
+ 
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index dacbd13d32c69..15717a28b212e 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5972,6 +5972,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
+ 	struct kvm_mmu_page *sp;
+ 	unsigned int ratio;
+ 	LIST_HEAD(invalid_list);
++	bool flush = false;
+ 	ulong to_zap;
+ 
+ 	rcu_idx = srcu_read_lock(&kvm->srcu);
+@@ -5992,20 +5993,20 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
+ 				      struct kvm_mmu_page,
+ 				      lpage_disallowed_link);
+ 		WARN_ON_ONCE(!sp->lpage_disallowed);
+-		if (sp->tdp_mmu_page)
+-			kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn,
+-				sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level));
+-		else {
++		if (sp->tdp_mmu_page) {
++			flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
++		} else {
+ 			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
+ 			WARN_ON_ONCE(sp->lpage_disallowed);
+ 		}
+ 
+ 		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+-			kvm_mmu_commit_zap_page(kvm, &invalid_list);
++			kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
+ 			cond_resched_lock(&kvm->mmu_lock);
++			flush = false;
+ 		}
+ 	}
+-	kvm_mmu_commit_zap_page(kvm, &invalid_list);
++	kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
+ 
+ 	spin_unlock(&kvm->mmu_lock);
+ 	srcu_read_unlock(&kvm->srcu, rcu_idx);
+diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
+index 87b7e16911dbb..1a09d212186b3 100644
+--- a/arch/x86/kvm/mmu/tdp_iter.c
++++ b/arch/x86/kvm/mmu/tdp_iter.c
+@@ -22,21 +22,22 @@ static gfn_t round_gfn_for_level(gfn_t gfn, int level)
+ 
+ /*
+  * Sets a TDP iterator to walk a pre-order traversal of the paging structure
+- * rooted at root_pt, starting with the walk to translate goal_gfn.
++ * rooted at root_pt, starting with the walk to translate next_last_level_gfn.
+  */
+ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
+-		    int min_level, gfn_t goal_gfn)
++		    int min_level, gfn_t next_last_level_gfn)
+ {
+ 	WARN_ON(root_level < 1);
+ 	WARN_ON(root_level > PT64_ROOT_MAX_LEVEL);
+ 
+-	iter->goal_gfn = goal_gfn;
++	iter->next_last_level_gfn = next_last_level_gfn;
++	iter->yielded_gfn = iter->next_last_level_gfn;
+ 	iter->root_level = root_level;
+ 	iter->min_level = min_level;
+ 	iter->level = root_level;
+ 	iter->pt_path[iter->level - 1] = root_pt;
+ 
+-	iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
++	iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
+ 	tdp_iter_refresh_sptep(iter);
+ 
+ 	iter->valid = true;
+@@ -82,7 +83,7 @@ static bool try_step_down(struct tdp_iter *iter)
+ 
+ 	iter->level--;
+ 	iter->pt_path[iter->level - 1] = child_pt;
+-	iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
++	iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
+ 	tdp_iter_refresh_sptep(iter);
+ 
+ 	return true;
+@@ -106,7 +107,7 @@ static bool try_step_side(struct tdp_iter *iter)
+ 		return false;
+ 
+ 	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
+-	iter->goal_gfn = iter->gfn;
++	iter->next_last_level_gfn = iter->gfn;
+ 	iter->sptep++;
+ 	iter->old_spte = READ_ONCE(*iter->sptep);
+ 
+@@ -158,23 +159,6 @@ void tdp_iter_next(struct tdp_iter *iter)
+ 	iter->valid = false;
+ }
+ 
+-/*
+- * Restart the walk over the paging structure from the root, starting from the
+- * highest gfn the iterator had previously reached. Assumes that the entire
+- * paging structure, except the root page, may have been completely torn down
+- * and rebuilt.
+- */
+-void tdp_iter_refresh_walk(struct tdp_iter *iter)
+-{
+-	gfn_t goal_gfn = iter->goal_gfn;
+-
+-	if (iter->gfn > goal_gfn)
+-		goal_gfn = iter->gfn;
+-
+-	tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
+-		       iter->root_level, iter->min_level, goal_gfn);
+-}
+-
+ u64 *tdp_iter_root_pt(struct tdp_iter *iter)
+ {
+ 	return iter->pt_path[iter->root_level - 1];
+diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
+index 47170d0dc98e5..d480c540ee27d 100644
+--- a/arch/x86/kvm/mmu/tdp_iter.h
++++ b/arch/x86/kvm/mmu/tdp_iter.h
+@@ -15,7 +15,13 @@ struct tdp_iter {
+ 	 * The iterator will traverse the paging structure towards the mapping
+ 	 * for this GFN.
+ 	 */
+-	gfn_t goal_gfn;
++	gfn_t next_last_level_gfn;
++	/*
++	 * The next_last_level_gfn at the time when the thread last
++	 * yielded. Only yielding when the next_last_level_gfn !=
++	 * yielded_gfn helps ensure forward progress.
++	 */
++	gfn_t yielded_gfn;
+ 	/* Pointers to the page tables traversed to reach the current SPTE */
+ 	u64 *pt_path[PT64_ROOT_MAX_LEVEL];
+ 	/* A pointer to the current SPTE */
+@@ -52,9 +58,8 @@ struct tdp_iter {
+ u64 *spte_to_child_pt(u64 pte, int level);
+ 
+ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
+-		    int min_level, gfn_t goal_gfn);
++		    int min_level, gfn_t next_last_level_gfn);
+ void tdp_iter_next(struct tdp_iter *iter);
+-void tdp_iter_refresh_walk(struct tdp_iter *iter);
+ u64 *tdp_iter_root_pt(struct tdp_iter *iter);
+ 
+ #endif /* __KVM_X86_MMU_TDP_ITER_H */
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index ffa0bd0e033fb..61c00f8631f1a 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -103,7 +103,7 @@ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
+ }
+ 
+ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+-			  gfn_t start, gfn_t end, bool can_yield);
++			  gfn_t start, gfn_t end, bool can_yield, bool flush);
+ 
+ void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root)
+ {
+@@ -116,7 +116,7 @@ void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root)
+ 
+ 	list_del(&root->link);
+ 
+-	zap_gfn_range(kvm, root, 0, max_gfn, false);
++	zap_gfn_range(kvm, root, 0, max_gfn, false, false);
+ 
+ 	free_page((unsigned long)root->spt);
+ 	kmem_cache_free(mmu_page_header_cache, root);
+@@ -405,27 +405,43 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
+ 			 _mmu->shadow_root_level, _start, _end)
+ 
+ /*
+- * Flush the TLB if the process should drop kvm->mmu_lock.
+- * Return whether the caller still needs to flush the tlb.
++ * Yield if the MMU lock is contended or this thread needs to return control
++ * to the scheduler.
++ *
++ * If this function should yield and flush is set, it will perform a remote
++ * TLB flush before yielding.
++ *
++ * If this function yields, it will also reset the tdp_iter's walk over the
++ * paging structure and the calling function should skip to the next
++ * iteration to allow the iterator to continue its traversal from the
++ * paging structure root.
++ *
++ * Return true if this function yielded and the iterator's traversal was reset.
++ * Return false if a yield was not needed.
+  */
+-static bool tdp_mmu_iter_flush_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
++static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
++					     struct tdp_iter *iter, bool flush)
+ {
+-	if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+-		kvm_flush_remote_tlbs(kvm);
+-		cond_resched_lock(&kvm->mmu_lock);
+-		tdp_iter_refresh_walk(iter);
++	/* Ensure forward progress has been made before yielding. */
++	if (iter->next_last_level_gfn == iter->yielded_gfn)
+ 		return false;
+-	} else {
+-		return true;
+-	}
+-}
+ 
+-static void tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
+-{
+ 	if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
++		if (flush)
++			kvm_flush_remote_tlbs(kvm);
++
+ 		cond_resched_lock(&kvm->mmu_lock);
+-		tdp_iter_refresh_walk(iter);
++
++		WARN_ON(iter->gfn > iter->next_last_level_gfn);
++
++		tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
++			       iter->root_level, iter->min_level,
++			       iter->next_last_level_gfn);
++
++		return true;
+ 	}
++
++	return false;
+ }
+ 
+ /*
+@@ -437,15 +453,22 @@ static void tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
+  * scheduler needs the CPU or there is contention on the MMU lock. If this
+  * function cannot yield, it will not release the MMU lock or reschedule and
+  * the caller must ensure it does not supply too large a GFN range, or the
+- * operation can cause a soft lockup.
++ * operation can cause a soft lockup.  Note, in some use cases a flush may be
++ * required by prior actions.  Ensure the pending flush is performed prior to
++ * yielding.
+  */
+ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+-			  gfn_t start, gfn_t end, bool can_yield)
++			  gfn_t start, gfn_t end, bool can_yield, bool flush)
+ {
+ 	struct tdp_iter iter;
+-	bool flush_needed = false;
+ 
+ 	tdp_root_for_each_pte(iter, root, start, end) {
++		if (can_yield &&
++		    tdp_mmu_iter_cond_resched(kvm, &iter, flush)) {
++			flush = false;
++			continue;
++		}
++
+ 		if (!is_shadow_present_pte(iter.old_spte))
+ 			continue;
+ 
+@@ -460,13 +483,10 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+ 			continue;
+ 
+ 		tdp_mmu_set_spte(kvm, &iter, 0);
+-
+-		if (can_yield)
+-			flush_needed = tdp_mmu_iter_flush_cond_resched(kvm, &iter);
+-		else
+-			flush_needed = true;
++		flush = true;
+ 	}
+-	return flush_needed;
++
++	return flush;
+ }
+ 
+ /*
+@@ -475,13 +495,14 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+  * SPTEs have been cleared and a TLB flush is needed before releasing the
+  * MMU lock.
+  */
+-bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end)
++bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end,
++				 bool can_yield)
+ {
+ 	struct kvm_mmu_page *root;
+ 	bool flush = false;
+ 
+ 	for_each_tdp_mmu_root_yield_safe(kvm, root)
+-		flush |= zap_gfn_range(kvm, root, start, end, true);
++		flush = zap_gfn_range(kvm, root, start, end, can_yield, flush);
+ 
+ 	return flush;
+ }
+@@ -673,7 +694,7 @@ static int zap_gfn_range_hva_wrapper(struct kvm *kvm,
+ 				     struct kvm_mmu_page *root, gfn_t start,
+ 				     gfn_t end, unsigned long unused)
+ {
+-	return zap_gfn_range(kvm, root, start, end, false);
++	return zap_gfn_range(kvm, root, start, end, false, false);
+ }
+ 
+ int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
+@@ -824,6 +845,9 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+ 
+ 	for_each_tdp_pte_min_level(iter, root->spt, root->role.level,
+ 				   min_level, start, end) {
++		if (tdp_mmu_iter_cond_resched(kvm, &iter, false))
++			continue;
++
+ 		if (!is_shadow_present_pte(iter.old_spte) ||
+ 		    !is_last_spte(iter.old_spte, iter.level))
+ 			continue;
+@@ -832,8 +856,6 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+ 
+ 		tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte);
+ 		spte_set = true;
+-
+-		tdp_mmu_iter_cond_resched(kvm, &iter);
+ 	}
+ 	return spte_set;
+ }
+@@ -877,6 +899,9 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+ 	bool spte_set = false;
+ 
+ 	tdp_root_for_each_leaf_pte(iter, root, start, end) {
++		if (tdp_mmu_iter_cond_resched(kvm, &iter, false))
++			continue;
++
+ 		if (spte_ad_need_write_protect(iter.old_spte)) {
+ 			if (is_writable_pte(iter.old_spte))
+ 				new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
+@@ -891,8 +916,6 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+ 
+ 		tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte);
+ 		spte_set = true;
+-
+-		tdp_mmu_iter_cond_resched(kvm, &iter);
+ 	}
+ 	return spte_set;
+ }
+@@ -1000,6 +1023,9 @@ static bool set_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+ 	bool spte_set = false;
+ 
+ 	tdp_root_for_each_pte(iter, root, start, end) {
++		if (tdp_mmu_iter_cond_resched(kvm, &iter, false))
++			continue;
++
+ 		if (!is_shadow_present_pte(iter.old_spte))
+ 			continue;
+ 
+@@ -1007,8 +1033,6 @@ static bool set_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+ 
+ 		tdp_mmu_set_spte(kvm, &iter, new_spte);
+ 		spte_set = true;
+-
+-		tdp_mmu_iter_cond_resched(kvm, &iter);
+ 	}
+ 
+ 	return spte_set;
+@@ -1049,6 +1073,11 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
+ 	bool spte_set = false;
+ 
+ 	tdp_root_for_each_pte(iter, root, start, end) {
++		if (tdp_mmu_iter_cond_resched(kvm, &iter, spte_set)) {
++			spte_set = false;
++			continue;
++		}
++
+ 		if (!is_shadow_present_pte(iter.old_spte) ||
+ 		    !is_last_spte(iter.old_spte, iter.level))
+ 			continue;
+@@ -1061,7 +1090,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
+ 
+ 		tdp_mmu_set_spte(kvm, &iter, 0);
+ 
+-		spte_set = tdp_mmu_iter_flush_cond_resched(kvm, &iter);
++		spte_set = true;
+ 	}
+ 
+ 	if (spte_set)
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
+index cbbdbadd1526f..a7a3f6db263d2 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.h
++++ b/arch/x86/kvm/mmu/tdp_mmu.h
+@@ -12,7 +12,23 @@ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t root);
+ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
+ void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root);
+ 
+-bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end);
++bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end,
++				 bool can_yield);
++static inline bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start,
++					     gfn_t end)
++{
++	return __kvm_tdp_mmu_zap_gfn_range(kvm, start, end, true);
++}
++static inline bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
++{
++	gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);
++
++	/*
++	 * Don't allow yielding, as the caller may have pending pages to zap
++	 * on the shadow MMU.
++	 */
++	return __kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn, end, false);
++}
+ void kvm_tdp_mmu_zap_all(struct kvm *kvm);
+ 
+ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 4e303964f7e7f..fb161a21d0aec 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -545,9 +545,7 @@ static int acpi_idle_play_dead(struct cpuidle_device *dev, int index)
+ 			return -ENODEV;
+ 
+ #if defined(CONFIG_X86) && defined(CONFIG_HOTPLUG_CPU)
+-		/* If NMI wants to wake up CPU0, start CPU0. */
+-		if (wakeup_cpu0())
+-			start_cpu0();
++		cond_wakeup_cpu0();
+ #endif
+ 	}
+ 
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 43130d64e213d..8d7f94ef0cfe0 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -292,14 +292,16 @@ int driver_deferred_probe_check_state(struct device *dev)
+ 
+ static void deferred_probe_timeout_work_func(struct work_struct *work)
+ {
+-	struct device_private *private, *p;
++	struct device_private *p;
+ 
+ 	driver_deferred_probe_timeout = 0;
+ 	driver_deferred_probe_trigger();
+ 	flush_work(&deferred_probe_work);
+ 
+-	list_for_each_entry_safe(private, p, &deferred_probe_pending_list, deferred_probe)
+-		dev_info(private->device, "deferred probe pending\n");
++	mutex_lock(&deferred_probe_mutex);
++	list_for_each_entry(p, &deferred_probe_pending_list, deferred_probe)
++		dev_info(p->device, "deferred probe pending\n");
++	mutex_unlock(&deferred_probe_mutex);
+ 	wake_up_all(&probe_timeout_waitqueue);
+ }
+ static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func);
+diff --git a/drivers/char/agp/Kconfig b/drivers/char/agp/Kconfig
+index a086dd34f932f..4f501e4842ab3 100644
+--- a/drivers/char/agp/Kconfig
++++ b/drivers/char/agp/Kconfig
+@@ -125,7 +125,7 @@ config AGP_HP_ZX1
+ 
+ config AGP_PARISC
+ 	tristate "HP Quicksilver AGP support"
+-	depends on AGP && PARISC && 64BIT
++	depends on AGP && PARISC && 64BIT && IOMMU_SBA
+ 	help
+ 	  This option gives you AGP GART support for the HP Quicksilver
+ 	  AGP bus adapter on HP PA-RISC machines (Ok, just on the C8000
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index f83dac54ed853..61c78714c0957 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -4262,20 +4262,19 @@ int clk_notifier_register(struct clk *clk, struct notifier_block *nb)
+ 	/* search the list of notifiers for this clk */
+ 	list_for_each_entry(cn, &clk_notifier_list, node)
+ 		if (cn->clk == clk)
+-			break;
++			goto found;
+ 
+ 	/* if clk wasn't in the notifier list, allocate new clk_notifier */
+-	if (cn->clk != clk) {
+-		cn = kzalloc(sizeof(*cn), GFP_KERNEL);
+-		if (!cn)
+-			goto out;
++	cn = kzalloc(sizeof(*cn), GFP_KERNEL);
++	if (!cn)
++		goto out;
+ 
+-		cn->clk = clk;
+-		srcu_init_notifier_head(&cn->notifier_head);
++	cn->clk = clk;
++	srcu_init_notifier_head(&cn->notifier_head);
+ 
+-		list_add(&cn->node, &clk_notifier_list);
+-	}
++	list_add(&cn->node, &clk_notifier_list);
+ 
++found:
+ 	ret = srcu_notifier_chain_register(&cn->notifier_head, nb);
+ 
+ 	clk->core->notifier_count++;
+@@ -4300,32 +4299,28 @@ EXPORT_SYMBOL_GPL(clk_notifier_register);
+  */
+ int clk_notifier_unregister(struct clk *clk, struct notifier_block *nb)
+ {
+-	struct clk_notifier *cn = NULL;
+-	int ret = -EINVAL;
++	struct clk_notifier *cn;
++	int ret = -ENOENT;
+ 
+ 	if (!clk || !nb)
+ 		return -EINVAL;
+ 
+ 	clk_prepare_lock();
+ 
+-	list_for_each_entry(cn, &clk_notifier_list, node)
+-		if (cn->clk == clk)
+-			break;
+-
+-	if (cn->clk == clk) {
+-		ret = srcu_notifier_chain_unregister(&cn->notifier_head, nb);
++	list_for_each_entry(cn, &clk_notifier_list, node) {
++		if (cn->clk == clk) {
++			ret = srcu_notifier_chain_unregister(&cn->notifier_head, nb);
+ 
+-		clk->core->notifier_count--;
++			clk->core->notifier_count--;
+ 
+-		/* XXX the notifier code should handle this better */
+-		if (!cn->notifier_head.head) {
+-			srcu_cleanup_notifier_head(&cn->notifier_head);
+-			list_del(&cn->node);
+-			kfree(cn);
++			/* XXX the notifier code should handle this better */
++			if (!cn->notifier_head.head) {
++				srcu_cleanup_notifier_head(&cn->notifier_head);
++				list_del(&cn->node);
++				kfree(cn);
++			}
++			break;
+ 		}
+-
+-	} else {
+-		ret = -ENOENT;
+ 	}
+ 
+ 	clk_prepare_unlock();
+diff --git a/drivers/clk/socfpga/clk-gate.c b/drivers/clk/socfpga/clk-gate.c
+index 43ecd507bf836..cf94a12459ea4 100644
+--- a/drivers/clk/socfpga/clk-gate.c
++++ b/drivers/clk/socfpga/clk-gate.c
+@@ -99,7 +99,7 @@ static unsigned long socfpga_clk_recalc_rate(struct clk_hw *hwclk,
+ 		val = readl(socfpgaclk->div_reg) >> socfpgaclk->shift;
+ 		val &= GENMASK(socfpgaclk->width - 1, 0);
+ 		/* Check for GPIO_DB_CLK by its offset */
+-		if ((int) socfpgaclk->div_reg & SOCFPGA_GPIO_DB_CLK_OFFSET)
++		if ((uintptr_t) socfpgaclk->div_reg & SOCFPGA_GPIO_DB_CLK_OFFSET)
+ 			div = val + 1;
+ 		else
+ 			div = (1 << val);
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 0a2c4adcd833c..af5bb8fedfea7 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -364,22 +364,18 @@ static int gpiochip_set_desc_names(struct gpio_chip *gc)
+  *
+  * Looks for device property "gpio-line-names" and if it exists assigns
+  * GPIO line names for the chip. The memory allocated for the assigned
+- * names belong to the underlying software node and should not be released
++ * names belong to the underlying firmware node and should not be released
+  * by the caller.
+  */
+ static int devprop_gpiochip_set_names(struct gpio_chip *chip)
+ {
+ 	struct gpio_device *gdev = chip->gpiodev;
+-	struct device *dev = chip->parent;
++	struct fwnode_handle *fwnode = dev_fwnode(&gdev->dev);
+ 	const char **names;
+ 	int ret, i;
+ 	int count;
+ 
+-	/* GPIO chip may not have a parent device whose properties we inspect. */
+-	if (!dev)
+-		return 0;
+-
+-	count = device_property_string_array_count(dev, "gpio-line-names");
++	count = fwnode_property_string_array_count(fwnode, "gpio-line-names");
+ 	if (count < 0)
+ 		return 0;
+ 
+@@ -393,7 +389,7 @@ static int devprop_gpiochip_set_names(struct gpio_chip *chip)
+ 	if (!names)
+ 		return -ENOMEM;
+ 
+-	ret = device_property_read_string_array(dev, "gpio-line-names",
++	ret = fwnode_property_read_string_array(fwnode, "gpio-line-names",
+ 						names, count);
+ 	if (ret < 0) {
+ 		dev_warn(&gdev->dev, "failed to read GPIO line names\n");
+diff --git a/drivers/gpu/drm/i915/display/intel_acpi.c b/drivers/gpu/drm/i915/display/intel_acpi.c
+index e21fb14d5e07b..833d0c1be4f1d 100644
+--- a/drivers/gpu/drm/i915/display/intel_acpi.c
++++ b/drivers/gpu/drm/i915/display/intel_acpi.c
+@@ -84,13 +84,31 @@ static void intel_dsm_platform_mux_info(acpi_handle dhandle)
+ 		return;
+ 	}
+ 
++	if (!pkg->package.count) {
++		DRM_DEBUG_DRIVER("no connection in _DSM\n");
++		return;
++	}
++
+ 	connector_count = &pkg->package.elements[0];
+ 	DRM_DEBUG_DRIVER("MUX info connectors: %lld\n",
+ 		  (unsigned long long)connector_count->integer.value);
+ 	for (i = 1; i < pkg->package.count; i++) {
+ 		union acpi_object *obj = &pkg->package.elements[i];
+-		union acpi_object *connector_id = &obj->package.elements[0];
+-		union acpi_object *info = &obj->package.elements[1];
++		union acpi_object *connector_id;
++		union acpi_object *info;
++
++		if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < 2) {
++			DRM_DEBUG_DRIVER("Invalid object for MUX #%d\n", i);
++			continue;
++		}
++
++		connector_id = &obj->package.elements[0];
++		info = &obj->package.elements[1];
++		if (info->type != ACPI_TYPE_BUFFER || info->buffer.length < 4) {
++			DRM_DEBUG_DRIVER("Invalid info for MUX obj #%d\n", i);
++			continue;
++		}
++
+ 		DRM_DEBUG_DRIVER("Connector id: 0x%016llx\n",
+ 			  (unsigned long long)connector_id->integer.value);
+ 		DRM_DEBUG_DRIVER("  port id: %s\n",
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index b38ebccad42ff..0aacc43faefa3 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -557,6 +557,7 @@ err_free_priv:
+ 	kfree(priv);
+ err_put_drm_dev:
+ 	drm_dev_put(ddev);
++	platform_set_drvdata(pdev, NULL);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 482219fb4db21..1d2416d466a36 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -210,6 +210,7 @@ static u32 vc4_get_fifo_full_level(struct vc4_crtc *vc4_crtc, u32 format)
+ {
+ 	const struct vc4_crtc_data *crtc_data = vc4_crtc_to_vc4_crtc_data(vc4_crtc);
+ 	const struct vc4_pv_data *pv_data = vc4_crtc_to_vc4_pv_data(vc4_crtc);
++	struct vc4_dev *vc4 = to_vc4_dev(vc4_crtc->base.dev);
+ 	u32 fifo_len_bytes = pv_data->fifo_depth;
+ 
+ 	/*
+@@ -238,6 +239,22 @@ static u32 vc4_get_fifo_full_level(struct vc4_crtc *vc4_crtc, u32 format)
+ 		if (crtc_data->hvs_output == 5)
+ 			return 32;
+ 
++		/*
++		 * It looks like in some situations, we will overflow
++		 * the PixelValve FIFO (with the bit 10 of PV stat being
++		 * set) and stall the HVS / PV, eventually resulting in
++		 * a page flip timeout.
++		 *
++		 * Displaying the video overlay during a playback with
++		 * Kodi on an RPi3 seems to be a great solution with a
++		 * failure rate around 50%.
++		 *
++		 * Removing 1 from the FIFO full level however
++		 * seems to completely remove that issue.
++		 */
++		if (!vc4->hvs->hvs5)
++			return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1;
++
+ 		return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX;
+ 	}
+ }
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index d6425ad6e6a38..2871cf2ee8b44 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -129,6 +129,7 @@ static int i2c_dw_set_timings_master(struct dw_i2c_dev *dev)
+ 		if ((comp_param1 & DW_IC_COMP_PARAM_1_SPEED_MODE_MASK)
+ 			!= DW_IC_COMP_PARAM_1_SPEED_MODE_HIGH) {
+ 			dev_err(dev->dev, "High Speed not supported!\n");
++			t->bus_freq_hz = I2C_MAX_FAST_MODE_FREQ;
+ 			dev->master_cfg &= ~DW_IC_CON_SPEED_MASK;
+ 			dev->master_cfg |= DW_IC_CON_SPEED_FAST;
+ 			dev->hs_hcnt = 0;
+diff --git a/drivers/i2c/busses/i2c-jz4780.c b/drivers/i2c/busses/i2c-jz4780.c
+index cb4a25ebb8900..2a946c2079284 100644
+--- a/drivers/i2c/busses/i2c-jz4780.c
++++ b/drivers/i2c/busses/i2c-jz4780.c
+@@ -526,8 +526,8 @@ static irqreturn_t jz4780_i2c_irq(int irqno, void *dev_id)
+ 				i2c_sta = jz4780_i2c_readw(i2c, JZ4780_I2C_STA);
+ 				data = *i2c->wbuf;
+ 				data &= ~JZ4780_I2C_DC_READ;
+-				if ((!i2c->stop_hold) && (i2c->cdata->version >=
+-						ID_X1000))
++				if ((i2c->wt_len == 1) && (!i2c->stop_hold) &&
++						(i2c->cdata->version >= ID_X1000))
+ 					data |= X1000_I2C_DC_STOP;
+ 				jz4780_i2c_writew(i2c, JZ4780_I2C_DC, data);
+ 				i2c->wbuf++;
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 573b5da145d1e..c13e7f107dd36 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -378,7 +378,7 @@ static int i2c_gpio_init_recovery(struct i2c_adapter *adap)
+ static int i2c_init_recovery(struct i2c_adapter *adap)
+ {
+ 	struct i2c_bus_recovery_info *bri = adap->bus_recovery_info;
+-	char *err_str;
++	char *err_str, *err_level = KERN_ERR;
+ 
+ 	if (!bri)
+ 		return 0;
+@@ -387,7 +387,8 @@ static int i2c_init_recovery(struct i2c_adapter *adap)
+ 		return -EPROBE_DEFER;
+ 
+ 	if (!bri->recover_bus) {
+-		err_str = "no recover_bus() found";
++		err_str = "no suitable method provided";
++		err_level = KERN_DEBUG;
+ 		goto err;
+ 	}
+ 
+@@ -414,7 +415,7 @@ static int i2c_init_recovery(struct i2c_adapter *adap)
+ 
+ 	return 0;
+  err:
+-	dev_err(&adap->dev, "Not using recovery: %s\n", err_str);
++	dev_printk(err_level, &adap->dev, "Not using recovery: %s\n", err_str);
+ 	adap->bus_recovery_info = NULL;
+ 
+ 	return -EINVAL;
+diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
+index 0abce004a9591..65e3e7df8a4b0 100644
+--- a/drivers/infiniband/core/addr.c
++++ b/drivers/infiniband/core/addr.c
+@@ -76,7 +76,9 @@ static struct workqueue_struct *addr_wq;
+ 
+ static const struct nla_policy ib_nl_addr_policy[LS_NLA_TYPE_MAX] = {
+ 	[LS_NLA_TYPE_DGID] = {.type = NLA_BINARY,
+-		.len = sizeof(struct rdma_nla_ls_gid)},
++		.len = sizeof(struct rdma_nla_ls_gid),
++		.validation_type = NLA_VALIDATE_MIN,
++		.min = sizeof(struct rdma_nla_ls_gid)},
+ };
+ 
+ static inline bool ib_nl_is_good_ip_resp(const struct nlmsghdr *nlh)
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index 81903749d2415..e42c812e74c3c 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -3616,7 +3616,8 @@ int c4iw_destroy_listen(struct iw_cm_id *cm_id)
+ 		c4iw_init_wr_wait(ep->com.wr_waitp);
+ 		err = cxgb4_remove_server(
+ 				ep->com.dev->rdev.lldi.ports[0], ep->stid,
+-				ep->com.dev->rdev.lldi.rxq_ids[0], true);
++				ep->com.dev->rdev.lldi.rxq_ids[0],
++				ep->com.local_addr.ss_family == AF_INET6);
+ 		if (err)
+ 			goto done;
+ 		err = c4iw_wait_for_reply(&ep->com.dev->rdev, ep->com.wr_waitp,
+diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
+index 2a91b8d95e12f..04b1e8f021f64 100644
+--- a/drivers/infiniband/hw/hfi1/affinity.c
++++ b/drivers/infiniband/hw/hfi1/affinity.c
+@@ -632,22 +632,11 @@ static void _dev_comp_vect_cpu_mask_clean_up(struct hfi1_devdata *dd,
+  */
+ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
+ {
+-	int node = pcibus_to_node(dd->pcidev->bus);
+ 	struct hfi1_affinity_node *entry;
+ 	const struct cpumask *local_mask;
+ 	int curr_cpu, possible, i, ret;
+ 	bool new_entry = false;
+ 
+-	/*
+-	 * If the BIOS does not have the NUMA node information set, select
+-	 * NUMA 0 so we get consistent performance.
+-	 */
+-	if (node < 0) {
+-		dd_dev_err(dd, "Invalid PCI NUMA node. Performance may be affected\n");
+-		node = 0;
+-	}
+-	dd->node = node;
+-
+ 	local_mask = cpumask_of_node(dd->node);
+ 	if (cpumask_first(local_mask) >= nr_cpu_ids)
+ 		local_mask = topology_core_cpumask(0);
+@@ -660,7 +649,7 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
+ 	 * create an entry in the global affinity structure and initialize it.
+ 	 */
+ 	if (!entry) {
+-		entry = node_affinity_allocate(node);
++		entry = node_affinity_allocate(dd->node);
+ 		if (!entry) {
+ 			dd_dev_err(dd,
+ 				   "Unable to allocate global affinity node\n");
+@@ -751,6 +740,7 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
+ 	if (new_entry)
+ 		node_affinity_add_tail(entry);
+ 
++	dd->affinity_entry = entry;
+ 	mutex_unlock(&node_affinity.lock);
+ 
+ 	return 0;
+@@ -766,10 +756,9 @@ void hfi1_dev_affinity_clean_up(struct hfi1_devdata *dd)
+ {
+ 	struct hfi1_affinity_node *entry;
+ 
+-	if (dd->node < 0)
+-		return;
+-
+ 	mutex_lock(&node_affinity.lock);
++	if (!dd->affinity_entry)
++		goto unlock;
+ 	entry = node_affinity_lookup(dd->node);
+ 	if (!entry)
+ 		goto unlock;
+@@ -780,8 +769,8 @@ void hfi1_dev_affinity_clean_up(struct hfi1_devdata *dd)
+ 	 */
+ 	_dev_comp_vect_cpu_mask_clean_up(dd, entry);
+ unlock:
++	dd->affinity_entry = NULL;
+ 	mutex_unlock(&node_affinity.lock);
+-	dd->node = NUMA_NO_NODE;
+ }
+ 
+ /*
+diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h
+index e09e8244a94c4..2a9a040569ebb 100644
+--- a/drivers/infiniband/hw/hfi1/hfi.h
++++ b/drivers/infiniband/hw/hfi1/hfi.h
+@@ -1409,6 +1409,7 @@ struct hfi1_devdata {
+ 	spinlock_t irq_src_lock;
+ 	int vnic_num_vports;
+ 	struct net_device *dummy_netdev;
++	struct hfi1_affinity_node *affinity_entry;
+ 
+ 	/* Keeps track of IPoIB RSM rule users */
+ 	atomic_t ipoib_rsm_usr_num;
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index cb7ad12888219..786c6316273f7 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -1277,7 +1277,6 @@ static struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev,
+ 	dd->pport = (struct hfi1_pportdata *)(dd + 1);
+ 	dd->pcidev = pdev;
+ 	pci_set_drvdata(pdev, dd);
+-	dd->node = NUMA_NO_NODE;
+ 
+ 	ret = xa_alloc_irq(&hfi1_dev_table, &dd->unit, dd, xa_limit_32b,
+ 			GFP_KERNEL);
+@@ -1287,6 +1286,15 @@ static struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev,
+ 		goto bail;
+ 	}
+ 	rvt_set_ibdev_name(&dd->verbs_dev.rdi, "%s_%d", class_name(), dd->unit);
++	/*
++	 * If the BIOS does not have the NUMA node information set, select
++	 * NUMA 0 so we get consistent performance.
++	 */
++	dd->node = pcibus_to_node(pdev->bus);
++	if (dd->node == NUMA_NO_NODE) {
++		dd_dev_err(dd, "Invalid PCI NUMA node. Performance may be affected\n");
++		dd->node = 0;
++	}
+ 
+ 	/*
+ 	 * Initialize all locks for the device. This needs to be as early as
+diff --git a/drivers/infiniband/hw/hfi1/netdev_rx.c b/drivers/infiniband/hw/hfi1/netdev_rx.c
+index 6d263c9749b36..ea95baada2b6b 100644
+--- a/drivers/infiniband/hw/hfi1/netdev_rx.c
++++ b/drivers/infiniband/hw/hfi1/netdev_rx.c
+@@ -173,8 +173,7 @@ u32 hfi1_num_netdev_contexts(struct hfi1_devdata *dd, u32 available_contexts,
+ 		return 0;
+ 	}
+ 
+-	cpumask_and(node_cpu_mask, cpu_mask,
+-		    cpumask_of_node(pcibus_to_node(dd->pcidev->bus)));
++	cpumask_and(node_cpu_mask, cpu_mask, cpumask_of_node(dd->node));
+ 
+ 	available_cpus = cpumask_weight(node_cpu_mask);
+ 
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index 511c95bb3d012..cdfb7732dff3e 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -1241,7 +1241,8 @@ static int qedr_check_qp_attrs(struct ib_pd *ibpd, struct qedr_dev *dev,
+ 	 * TGT QP isn't associated with RQ/SQ
+ 	 */
+ 	if ((attrs->qp_type != IB_QPT_GSI) && (dev->gsi_qp_created) &&
+-	    (attrs->qp_type != IB_QPT_XRC_TGT)) {
++	    (attrs->qp_type != IB_QPT_XRC_TGT) &&
++	    (attrs->qp_type != IB_QPT_XRC_INI)) {
+ 		struct qedr_cq *send_cq = get_qedr_cq(attrs->send_cq);
+ 		struct qedr_cq *recv_cq = get_qedr_cq(attrs->recv_cq);
+ 
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 6eb95e3c4c8a4..6ff97fbf87566 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -2739,8 +2739,8 @@ void rtrs_clt_close(struct rtrs_clt *clt)
+ 
+ 	/* Now it is safe to iterate over all paths without locks */
+ 	list_for_each_entry_safe(sess, tmp, &clt->paths_list, s.entry) {
+-		rtrs_clt_destroy_sess_files(sess, NULL);
+ 		rtrs_clt_close_conns(sess, true);
++		rtrs_clt_destroy_sess_files(sess, NULL);
+ 		kobject_put(&sess->kobj);
+ 	}
+ 	free_clt(clt);
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index 22d814ae4edcd..42c3046fa3047 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -314,6 +314,18 @@ static int mcp251x_spi_trans(struct spi_device *spi, int len)
+ 	return ret;
+ }
+ 
++static int mcp251x_spi_write(struct spi_device *spi, int len)
++{
++	struct mcp251x_priv *priv = spi_get_drvdata(spi);
++	int ret;
++
++	ret = spi_write(spi, priv->spi_tx_buf, len);
++	if (ret)
++		dev_err(&spi->dev, "spi write failed: ret = %d\n", ret);
++
++	return ret;
++}
++
+ static u8 mcp251x_read_reg(struct spi_device *spi, u8 reg)
+ {
+ 	struct mcp251x_priv *priv = spi_get_drvdata(spi);
+@@ -361,7 +373,7 @@ static void mcp251x_write_reg(struct spi_device *spi, u8 reg, u8 val)
+ 	priv->spi_tx_buf[1] = reg;
+ 	priv->spi_tx_buf[2] = val;
+ 
+-	mcp251x_spi_trans(spi, 3);
++	mcp251x_spi_write(spi, 3);
+ }
+ 
+ static void mcp251x_write_2regs(struct spi_device *spi, u8 reg, u8 v1, u8 v2)
+@@ -373,7 +385,7 @@ static void mcp251x_write_2regs(struct spi_device *spi, u8 reg, u8 v1, u8 v2)
+ 	priv->spi_tx_buf[2] = v1;
+ 	priv->spi_tx_buf[3] = v2;
+ 
+-	mcp251x_spi_trans(spi, 4);
++	mcp251x_spi_write(spi, 4);
+ }
+ 
+ static void mcp251x_write_bits(struct spi_device *spi, u8 reg,
+@@ -386,7 +398,7 @@ static void mcp251x_write_bits(struct spi_device *spi, u8 reg,
+ 	priv->spi_tx_buf[2] = mask;
+ 	priv->spi_tx_buf[3] = val;
+ 
+-	mcp251x_spi_trans(spi, 4);
++	mcp251x_spi_write(spi, 4);
+ }
+ 
+ static u8 mcp251x_read_stat(struct spi_device *spi)
+@@ -618,7 +630,7 @@ static void mcp251x_hw_tx_frame(struct spi_device *spi, u8 *buf,
+ 					  buf[i]);
+ 	} else {
+ 		memcpy(priv->spi_tx_buf, buf, TXBDAT_OFF + len);
+-		mcp251x_spi_trans(spi, TXBDAT_OFF + len);
++		mcp251x_spi_write(spi, TXBDAT_OFF + len);
+ 	}
+ }
+ 
+@@ -650,7 +662,7 @@ static void mcp251x_hw_tx(struct spi_device *spi, struct can_frame *frame,
+ 
+ 	/* use INSTRUCTION_RTS, to avoid "repeated frame problem" */
+ 	priv->spi_tx_buf[0] = INSTRUCTION_RTS(1 << tx_buf_idx);
+-	mcp251x_spi_trans(priv->spi, 1);
++	mcp251x_spi_write(priv->spi, 1);
+ }
+ 
+ static void mcp251x_hw_rx_frame(struct spi_device *spi, u8 *buf,
+@@ -888,7 +900,7 @@ static int mcp251x_hw_reset(struct spi_device *spi)
+ 	mdelay(MCP251X_OST_DELAY_MS);
+ 
+ 	priv->spi_tx_buf[0] = INSTRUCTION_RESET;
+-	ret = mcp251x_spi_trans(spi, 1);
++	ret = mcp251x_spi_write(spi, 1);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+index 204ccb27d6d9a..73c1bc3cb70d3 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+@@ -856,7 +856,7 @@ static int peak_usb_create_dev(const struct peak_usb_adapter *peak_usb_adapter,
+ 	if (dev->adapter->dev_set_bus) {
+ 		err = dev->adapter->dev_set_bus(dev, 0);
+ 		if (err)
+-			goto lbl_unregister_candev;
++			goto adap_dev_free;
+ 	}
+ 
+ 	/* get device number early */
+@@ -868,6 +868,10 @@ static int peak_usb_create_dev(const struct peak_usb_adapter *peak_usb_adapter,
+ 
+ 	return 0;
+ 
++adap_dev_free:
++	if (dev->adapter->dev_free)
++		dev->adapter->dev_free(dev);
++
+ lbl_unregister_candev:
+ 	unregister_candev(netdev);
+ 
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 662e68a0e7e61..93c7fa1fd4cb6 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -93,8 +93,12 @@
+ 
+ /* GSWIP MII Registers */
+ #define GSWIP_MII_CFGp(p)		(0x2 * (p))
++#define  GSWIP_MII_CFG_RESET		BIT(15)
+ #define  GSWIP_MII_CFG_EN		BIT(14)
++#define  GSWIP_MII_CFG_ISOLATE		BIT(13)
+ #define  GSWIP_MII_CFG_LDCLKDIS		BIT(12)
++#define  GSWIP_MII_CFG_RGMII_IBS	BIT(8)
++#define  GSWIP_MII_CFG_RMII_CLK		BIT(7)
+ #define  GSWIP_MII_CFG_MODE_MIIP	0x0
+ #define  GSWIP_MII_CFG_MODE_MIIM	0x1
+ #define  GSWIP_MII_CFG_MODE_RMIIP	0x2
+@@ -190,6 +194,23 @@
+ #define GSWIP_PCE_DEFPVID(p)		(0x486 + ((p) * 0xA))
+ 
+ #define GSWIP_MAC_FLEN			0x8C5
++#define GSWIP_MAC_CTRL_0p(p)		(0x903 + ((p) * 0xC))
++#define  GSWIP_MAC_CTRL_0_PADEN		BIT(8)
++#define  GSWIP_MAC_CTRL_0_FCS_EN	BIT(7)
++#define  GSWIP_MAC_CTRL_0_FCON_MASK	0x0070
++#define  GSWIP_MAC_CTRL_0_FCON_AUTO	0x0000
++#define  GSWIP_MAC_CTRL_0_FCON_RX	0x0010
++#define  GSWIP_MAC_CTRL_0_FCON_TX	0x0020
++#define  GSWIP_MAC_CTRL_0_FCON_RXTX	0x0030
++#define  GSWIP_MAC_CTRL_0_FCON_NONE	0x0040
++#define  GSWIP_MAC_CTRL_0_FDUP_MASK	0x000C
++#define  GSWIP_MAC_CTRL_0_FDUP_AUTO	0x0000
++#define  GSWIP_MAC_CTRL_0_FDUP_EN	0x0004
++#define  GSWIP_MAC_CTRL_0_FDUP_DIS	0x000C
++#define  GSWIP_MAC_CTRL_0_GMII_MASK	0x0003
++#define  GSWIP_MAC_CTRL_0_GMII_AUTO	0x0000
++#define  GSWIP_MAC_CTRL_0_GMII_MII	0x0001
++#define  GSWIP_MAC_CTRL_0_GMII_RGMII	0x0002
+ #define GSWIP_MAC_CTRL_2p(p)		(0x905 + ((p) * 0xC))
+ #define GSWIP_MAC_CTRL_2_MLEN		BIT(3) /* Maximum Untagged Frame Lnegth */
+ 
+@@ -653,16 +674,13 @@ static int gswip_port_enable(struct dsa_switch *ds, int port,
+ 			  GSWIP_SDMA_PCTRLp(port));
+ 
+ 	if (!dsa_is_cpu_port(ds, port)) {
+-		u32 macconf = GSWIP_MDIO_PHY_LINK_AUTO |
+-			      GSWIP_MDIO_PHY_SPEED_AUTO |
+-			      GSWIP_MDIO_PHY_FDUP_AUTO |
+-			      GSWIP_MDIO_PHY_FCONTX_AUTO |
+-			      GSWIP_MDIO_PHY_FCONRX_AUTO |
+-			      (phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK);
+-
+-		gswip_mdio_w(priv, macconf, GSWIP_MDIO_PHYp(port));
+-		/* Activate MDIO auto polling */
+-		gswip_mdio_mask(priv, 0, BIT(port), GSWIP_MDIO_MDC_CFG0);
++		u32 mdio_phy = 0;
++
++		if (phydev)
++			mdio_phy = phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK;
++
++		gswip_mdio_mask(priv, GSWIP_MDIO_PHY_ADDR_MASK, mdio_phy,
++				GSWIP_MDIO_PHYp(port));
+ 	}
+ 
+ 	return 0;
+@@ -675,14 +693,6 @@ static void gswip_port_disable(struct dsa_switch *ds, int port)
+ 	if (!dsa_is_user_port(ds, port))
+ 		return;
+ 
+-	if (!dsa_is_cpu_port(ds, port)) {
+-		gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_DOWN,
+-				GSWIP_MDIO_PHY_LINK_MASK,
+-				GSWIP_MDIO_PHYp(port));
+-		/* Deactivate MDIO auto polling */
+-		gswip_mdio_mask(priv, BIT(port), 0, GSWIP_MDIO_MDC_CFG0);
+-	}
+-
+ 	gswip_switch_mask(priv, GSWIP_FDMA_PCTRL_EN, 0,
+ 			  GSWIP_FDMA_PCTRLp(port));
+ 	gswip_switch_mask(priv, GSWIP_SDMA_PCTRL_EN, 0,
+@@ -806,14 +816,32 @@ static int gswip_setup(struct dsa_switch *ds)
+ 	gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP2);
+ 	gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP3);
+ 
+-	/* disable PHY auto polling */
++	/* Deactivate MDIO PHY auto polling. Some PHYs as the AR8030 have an
++	 * interoperability problem with this auto polling mechanism because
++	 * their status registers think that the link is in a different state
++	 * than it actually is. For the AR8030 it has the BMSR_ESTATEN bit set
++	 * as well as ESTATUS_1000_TFULL and ESTATUS_1000_XFULL. This makes the
++	 * auto polling state machine consider the link being negotiated with
++	 * 1Gbit/s. Since the PHY itself is a Fast Ethernet RMII PHY this leads
++	 * to the switch port being completely dead (RX and TX are both not
++	 * working).
++	 * Also with various other PHY / port combinations (PHY11G GPHY, PHY22F
++	 * GPHY, external RGMII PEF7071/7072) any traffic would stop. Sometimes
++	 * it would work fine for a few minutes to hours and then stop, on
++	 * other device it would no traffic could be sent or received at all.
++	 * Testing shows that when PHY auto polling is disabled these problems
++	 * go away.
++	 */
+ 	gswip_mdio_w(priv, 0x0, GSWIP_MDIO_MDC_CFG0);
++
+ 	/* Configure the MDIO Clock 2.5 MHz */
+ 	gswip_mdio_mask(priv, 0xff, 0x09, GSWIP_MDIO_MDC_CFG1);
+ 
+-	/* Disable the xMII link */
++	/* Disable the xMII interface and clear it's isolation bit */
+ 	for (i = 0; i < priv->hw_info->max_ports; i++)
+-		gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, i);
++		gswip_mii_mask_cfg(priv,
++				   GSWIP_MII_CFG_EN | GSWIP_MII_CFG_ISOLATE,
++				   0, i);
+ 
+ 	/* enable special tag insertion on cpu port */
+ 	gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_STEN,
+@@ -1464,6 +1492,112 @@ unsupported:
+ 	return;
+ }
+ 
++static void gswip_port_set_link(struct gswip_priv *priv, int port, bool link)
++{
++	u32 mdio_phy;
++
++	if (link)
++		mdio_phy = GSWIP_MDIO_PHY_LINK_UP;
++	else
++		mdio_phy = GSWIP_MDIO_PHY_LINK_DOWN;
++
++	gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_MASK, mdio_phy,
++			GSWIP_MDIO_PHYp(port));
++}
++
++static void gswip_port_set_speed(struct gswip_priv *priv, int port, int speed,
++				 phy_interface_t interface)
++{
++	u32 mdio_phy = 0, mii_cfg = 0, mac_ctrl_0 = 0;
++
++	switch (speed) {
++	case SPEED_10:
++		mdio_phy = GSWIP_MDIO_PHY_SPEED_M10;
++
++		if (interface == PHY_INTERFACE_MODE_RMII)
++			mii_cfg = GSWIP_MII_CFG_RATE_M50;
++		else
++			mii_cfg = GSWIP_MII_CFG_RATE_M2P5;
++
++		mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII;
++		break;
++
++	case SPEED_100:
++		mdio_phy = GSWIP_MDIO_PHY_SPEED_M100;
++
++		if (interface == PHY_INTERFACE_MODE_RMII)
++			mii_cfg = GSWIP_MII_CFG_RATE_M50;
++		else
++			mii_cfg = GSWIP_MII_CFG_RATE_M25;
++
++		mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII;
++		break;
++
++	case SPEED_1000:
++		mdio_phy = GSWIP_MDIO_PHY_SPEED_G1;
++
++		mii_cfg = GSWIP_MII_CFG_RATE_M125;
++
++		mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_RGMII;
++		break;
++	}
++
++	gswip_mdio_mask(priv, GSWIP_MDIO_PHY_SPEED_MASK, mdio_phy,
++			GSWIP_MDIO_PHYp(port));
++	gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_RATE_MASK, mii_cfg, port);
++	gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_GMII_MASK, mac_ctrl_0,
++			  GSWIP_MAC_CTRL_0p(port));
++}
++
++static void gswip_port_set_duplex(struct gswip_priv *priv, int port, int duplex)
++{
++	u32 mac_ctrl_0, mdio_phy;
++
++	if (duplex == DUPLEX_FULL) {
++		mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_EN;
++		mdio_phy = GSWIP_MDIO_PHY_FDUP_EN;
++	} else {
++		mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_DIS;
++		mdio_phy = GSWIP_MDIO_PHY_FDUP_DIS;
++	}
++
++	gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FDUP_MASK, mac_ctrl_0,
++			  GSWIP_MAC_CTRL_0p(port));
++	gswip_mdio_mask(priv, GSWIP_MDIO_PHY_FDUP_MASK, mdio_phy,
++			GSWIP_MDIO_PHYp(port));
++}
++
++static void gswip_port_set_pause(struct gswip_priv *priv, int port,
++				 bool tx_pause, bool rx_pause)
++{
++	u32 mac_ctrl_0, mdio_phy;
++
++	if (tx_pause && rx_pause) {
++		mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RXTX;
++		mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN |
++			   GSWIP_MDIO_PHY_FCONRX_EN;
++	} else if (tx_pause) {
++		mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_TX;
++		mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN |
++			   GSWIP_MDIO_PHY_FCONRX_DIS;
++	} else if (rx_pause) {
++		mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RX;
++		mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS |
++			   GSWIP_MDIO_PHY_FCONRX_EN;
++	} else {
++		mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_NONE;
++		mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS |
++			   GSWIP_MDIO_PHY_FCONRX_DIS;
++	}
++
++	gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FCON_MASK,
++			  mac_ctrl_0, GSWIP_MAC_CTRL_0p(port));
++	gswip_mdio_mask(priv,
++			GSWIP_MDIO_PHY_FCONTX_MASK |
++			GSWIP_MDIO_PHY_FCONRX_MASK,
++			mdio_phy, GSWIP_MDIO_PHYp(port));
++}
++
+ static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
+ 				     unsigned int mode,
+ 				     const struct phylink_link_state *state)
+@@ -1483,6 +1617,9 @@ static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
+ 		break;
+ 	case PHY_INTERFACE_MODE_RMII:
+ 		miicfg |= GSWIP_MII_CFG_MODE_RMIIM;
++
++		/* Configure the RMII clock as output: */
++		miicfg |= GSWIP_MII_CFG_RMII_CLK;
+ 		break;
+ 	case PHY_INTERFACE_MODE_RGMII:
+ 	case PHY_INTERFACE_MODE_RGMII_ID:
+@@ -1495,7 +1632,11 @@ static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
+ 			"Unsupported interface: %d\n", state->interface);
+ 		return;
+ 	}
+-	gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_MODE_MASK, miicfg, port);
++
++	gswip_mii_mask_cfg(priv,
++			   GSWIP_MII_CFG_MODE_MASK | GSWIP_MII_CFG_RMII_CLK |
++			   GSWIP_MII_CFG_RGMII_IBS | GSWIP_MII_CFG_LDCLKDIS,
++			   miicfg, port);
+ 
+ 	switch (state->interface) {
+ 	case PHY_INTERFACE_MODE_RGMII_ID:
+@@ -1520,6 +1661,9 @@ static void gswip_phylink_mac_link_down(struct dsa_switch *ds, int port,
+ 	struct gswip_priv *priv = ds->priv;
+ 
+ 	gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, port);
++
++	if (!dsa_is_cpu_port(ds, port))
++		gswip_port_set_link(priv, port, false);
+ }
+ 
+ static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port,
+@@ -1531,6 +1675,13 @@ static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port,
+ {
+ 	struct gswip_priv *priv = ds->priv;
+ 
++	if (!dsa_is_cpu_port(ds, port)) {
++		gswip_port_set_link(priv, port, true);
++		gswip_port_set_speed(priv, port, speed, interface);
++		gswip_port_set_duplex(priv, port, duplex);
++		gswip_port_set_pause(priv, port, tx_pause, rx_pause);
++	}
++
+ 	gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port);
+ }
+ 
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
+index ba8321ec1ee73..3305979a9f7c1 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
+@@ -180,9 +180,9 @@
+ #define XGBE_DMA_SYS_AWCR	0x30303030
+ 
+ /* DMA cache settings - PCI device */
+-#define XGBE_DMA_PCI_ARCR	0x00000003
+-#define XGBE_DMA_PCI_AWCR	0x13131313
+-#define XGBE_DMA_PCI_AWARCR	0x00000313
++#define XGBE_DMA_PCI_ARCR	0x000f0f0f
++#define XGBE_DMA_PCI_AWCR	0x0f0f0f0f
++#define XGBE_DMA_PCI_AWARCR	0x00000f0f
+ 
+ /* DMA channel interrupt modes */
+ #define XGBE_IRQ_MODE_EDGE	0
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 286f0341bdf83..48a6bda2a8cc7 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -3111,6 +3111,9 @@ static void gem_prog_cmp_regs(struct macb *bp, struct ethtool_rx_flow_spec *fs)
+ 	bool cmp_b = false;
+ 	bool cmp_c = false;
+ 
++	if (!macb_is_gem(bp))
++		return;
++
+ 	tp4sp_v = &(fs->h_u.tcp_ip4_spec);
+ 	tp4sp_m = &(fs->m_u.tcp_ip4_spec);
+ 
+@@ -3479,6 +3482,7 @@ static void macb_restore_features(struct macb *bp)
+ {
+ 	struct net_device *netdev = bp->dev;
+ 	netdev_features_t features = netdev->features;
++	struct ethtool_rx_fs_item *item;
+ 
+ 	/* TX checksum offload */
+ 	macb_set_txcsum_feature(bp, features);
+@@ -3487,6 +3491,9 @@ static void macb_restore_features(struct macb *bp)
+ 	macb_set_rxcsum_feature(bp, features);
+ 
+ 	/* RX Flow Filters */
++	list_for_each_entry(item, &bp->rx_fs_list.list, list)
++		gem_prog_cmp_regs(bp, &item->fs);
++
+ 	macb_set_rxflow_feature(bp, features);
+ }
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+index 75474f8102490..c5b0e725b2382 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+@@ -1794,11 +1794,25 @@ int cudbg_collect_sge_indirect(struct cudbg_init *pdbg_init,
+ 	struct cudbg_buffer temp_buff = { 0 };
+ 	struct sge_qbase_reg_field *sge_qbase;
+ 	struct ireg_buf *ch_sge_dbg;
++	u8 padap_running = 0;
+ 	int i, rc;
++	u32 size;
+ 
+-	rc = cudbg_get_buff(pdbg_init, dbg_buff,
+-			    sizeof(*ch_sge_dbg) * 2 + sizeof(*sge_qbase),
+-			    &temp_buff);
++	/* Accessing SGE_QBASE_MAP[0-3] and SGE_QBASE_INDEX regs can
++	 * lead to SGE missing doorbells under heavy traffic. So, only
++	 * collect them when adapter is idle.
++	 */
++	for_each_port(padap, i) {
++		padap_running = netif_running(padap->port[i]);
++		if (padap_running)
++			break;
++	}
++
++	size = sizeof(*ch_sge_dbg) * 2;
++	if (!padap_running)
++		size += sizeof(*sge_qbase);
++
++	rc = cudbg_get_buff(pdbg_init, dbg_buff, size, &temp_buff);
+ 	if (rc)
+ 		return rc;
+ 
+@@ -1820,7 +1834,8 @@ int cudbg_collect_sge_indirect(struct cudbg_init *pdbg_init,
+ 		ch_sge_dbg++;
+ 	}
+ 
+-	if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5) {
++	if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5 &&
++	    !padap_running) {
+ 		sge_qbase = (struct sge_qbase_reg_field *)ch_sge_dbg;
+ 		/* 1 addr reg SGE_QBASE_INDEX and 4 data reg
+ 		 * SGE_QBASE_MAP[0-3]
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index 98d01a7497ecd..581670dced6ec 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -2090,7 +2090,8 @@ void t4_get_regs(struct adapter *adap, void *buf, size_t buf_size)
+ 		0x1190, 0x1194,
+ 		0x11a0, 0x11a4,
+ 		0x11b0, 0x11b4,
+-		0x11fc, 0x1274,
++		0x11fc, 0x123c,
++		0x1254, 0x1274,
+ 		0x1280, 0x133c,
+ 		0x1800, 0x18fc,
+ 		0x3000, 0x302c,
+diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c
+index 4fab2ee5bbf58..e4d9c4c640e55 100644
+--- a/drivers/net/ethernet/freescale/gianfar.c
++++ b/drivers/net/ethernet/freescale/gianfar.c
+@@ -364,7 +364,11 @@ static void gfar_set_mac_for_addr(struct net_device *dev, int num,
+ 
+ static int gfar_set_mac_addr(struct net_device *dev, void *p)
+ {
+-	eth_mac_addr(dev, p);
++	int ret;
++
++	ret = eth_mac_addr(dev, p);
++	if (ret)
++		return ret;
+ 
+ 	gfar_set_mac_for_addr(dev, 0, dev->dev_addr);
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index dc5d150a9c546..ac6980acb6f02 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2554,14 +2554,14 @@ static int hclgevf_ae_start(struct hnae3_handle *handle)
+ {
+ 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+ 
++	clear_bit(HCLGEVF_STATE_DOWN, &hdev->state);
++
+ 	hclgevf_reset_tqp_stats(handle);
+ 
+ 	hclgevf_request_link_info(hdev);
+ 
+ 	hclgevf_update_link_mode(hdev);
+ 
+-	clear_bit(HCLGEVF_STATE_DOWN, &hdev->state);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index 118473dfdcbd2..fe1258778cbc4 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -142,6 +142,7 @@ enum i40e_state_t {
+ 	__I40E_VIRTCHNL_OP_PENDING,
+ 	__I40E_RECOVERY_MODE,
+ 	__I40E_VF_RESETS_DISABLED,	/* disable resets during i40e_remove */
++	__I40E_VFS_RELEASING,
+ 	/* This must be last as it determines the size of the BITMAP */
+ 	__I40E_STATE_SIZE__,
+ };
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+index d7c13ca9be7dd..d627b59ad4465 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+@@ -578,6 +578,9 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
+ 	case RING_TYPE_XDP:
+ 		ring = kmemdup(vsi->xdp_rings[ring_id], sizeof(*ring), GFP_KERNEL);
+ 		break;
++	default:
++		ring = NULL;
++		break;
+ 	}
+ 	if (!ring)
+ 		return;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 9e81f85ee2d8d..31d48a85cfaf0 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -232,6 +232,8 @@ static void __i40e_add_stat_strings(u8 **p, const struct i40e_stats stats[],
+ 	I40E_STAT(struct i40e_vsi, _name, _stat)
+ #define I40E_VEB_STAT(_name, _stat) \
+ 	I40E_STAT(struct i40e_veb, _name, _stat)
++#define I40E_VEB_TC_STAT(_name, _stat) \
++	I40E_STAT(struct i40e_cp_veb_tc_stats, _name, _stat)
+ #define I40E_PFC_STAT(_name, _stat) \
+ 	I40E_STAT(struct i40e_pfc_stats, _name, _stat)
+ #define I40E_QUEUE_STAT(_name, _stat) \
+@@ -266,11 +268,18 @@ static const struct i40e_stats i40e_gstrings_veb_stats[] = {
+ 	I40E_VEB_STAT("veb.rx_unknown_protocol", stats.rx_unknown_protocol),
+ };
+ 
++struct i40e_cp_veb_tc_stats {
++	u64 tc_rx_packets;
++	u64 tc_rx_bytes;
++	u64 tc_tx_packets;
++	u64 tc_tx_bytes;
++};
++
+ static const struct i40e_stats i40e_gstrings_veb_tc_stats[] = {
+-	I40E_VEB_STAT("veb.tc_%u_tx_packets", tc_stats.tc_tx_packets),
+-	I40E_VEB_STAT("veb.tc_%u_tx_bytes", tc_stats.tc_tx_bytes),
+-	I40E_VEB_STAT("veb.tc_%u_rx_packets", tc_stats.tc_rx_packets),
+-	I40E_VEB_STAT("veb.tc_%u_rx_bytes", tc_stats.tc_rx_bytes),
++	I40E_VEB_TC_STAT("veb.tc_%u_tx_packets", tc_tx_packets),
++	I40E_VEB_TC_STAT("veb.tc_%u_tx_bytes", tc_tx_bytes),
++	I40E_VEB_TC_STAT("veb.tc_%u_rx_packets", tc_rx_packets),
++	I40E_VEB_TC_STAT("veb.tc_%u_rx_bytes", tc_rx_bytes),
+ };
+ 
+ static const struct i40e_stats i40e_gstrings_misc_stats[] = {
+@@ -1101,6 +1110,7 @@ static int i40e_get_link_ksettings(struct net_device *netdev,
+ 
+ 	/* Set flow control settings */
+ 	ethtool_link_ksettings_add_link_mode(ks, supported, Pause);
++	ethtool_link_ksettings_add_link_mode(ks, supported, Asym_Pause);
+ 
+ 	switch (hw->fc.requested_mode) {
+ 	case I40E_FC_FULL:
+@@ -2216,6 +2226,29 @@ static int i40e_get_sset_count(struct net_device *netdev, int sset)
+ 	}
+ }
+ 
++/**
++ * i40e_get_veb_tc_stats - copy VEB TC statistics to formatted structure
++ * @tc: the TC statistics in VEB structure (veb->tc_stats)
++ * @i: the index of traffic class in (veb->tc_stats) structure to copy
++ *
++ * Copy VEB TC statistics from structure of arrays (veb->tc_stats) to
++ * one dimensional structure i40e_cp_veb_tc_stats.
++ * Produce formatted i40e_cp_veb_tc_stats structure of the VEB TC
++ * statistics for the given TC.
++ **/
++static struct i40e_cp_veb_tc_stats
++i40e_get_veb_tc_stats(struct i40e_veb_tc_stats *tc, unsigned int i)
++{
++	struct i40e_cp_veb_tc_stats veb_tc = {
++		.tc_rx_packets = tc->tc_rx_packets[i],
++		.tc_rx_bytes = tc->tc_rx_bytes[i],
++		.tc_tx_packets = tc->tc_tx_packets[i],
++		.tc_tx_bytes = tc->tc_tx_bytes[i],
++	};
++
++	return veb_tc;
++}
++
+ /**
+  * i40e_get_pfc_stats - copy HW PFC statistics to formatted structure
+  * @pf: the PF device structure
+@@ -2300,8 +2333,16 @@ static void i40e_get_ethtool_stats(struct net_device *netdev,
+ 			       i40e_gstrings_veb_stats);
+ 
+ 	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
+-		i40e_add_ethtool_stats(&data, veb_stats ? veb : NULL,
+-				       i40e_gstrings_veb_tc_stats);
++		if (veb_stats) {
++			struct i40e_cp_veb_tc_stats veb_tc =
++				i40e_get_veb_tc_stats(&veb->tc_stats, i);
++
++			i40e_add_ethtool_stats(&data, &veb_tc,
++					       i40e_gstrings_veb_tc_stats);
++		} else {
++			i40e_add_ethtool_stats(&data, NULL,
++					       i40e_gstrings_veb_tc_stats);
++		}
+ 
+ 	i40e_add_ethtool_stats(&data, pf, i40e_gstrings_stats);
+ 
+@@ -5244,7 +5285,7 @@ static int i40e_get_module_eeprom(struct net_device *netdev,
+ 
+ 		status = i40e_aq_get_phy_register(hw,
+ 				I40E_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE,
+-				true, addr, offset, &value, NULL);
++				addr, true, offset, &value, NULL);
+ 		if (status)
+ 			return -EIO;
+ 		data[i] = value;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 4a2d03cada01e..7fab60128c76d 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -2560,8 +2560,7 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
+ 				 i40e_stat_str(hw, aq_ret),
+ 				 i40e_aq_str(hw, hw->aq.asq_last_status));
+ 		} else {
+-			dev_info(&pf->pdev->dev, "%s is %s allmulti mode.\n",
+-				 vsi->netdev->name,
++			dev_info(&pf->pdev->dev, "%s allmulti mode.\n",
+ 				 cur_multipromisc ? "entering" : "leaving");
+ 		}
+ 	}
+@@ -14647,12 +14646,16 @@ static int i40e_init_recovery_mode(struct i40e_pf *pf, struct i40e_hw *hw)
+ 	 * in order to register the netdev
+ 	 */
+ 	v_idx = i40e_vsi_mem_alloc(pf, I40E_VSI_MAIN);
+-	if (v_idx < 0)
++	if (v_idx < 0) {
++		err = v_idx;
+ 		goto err_switch_setup;
++	}
+ 	pf->lan_vsi = v_idx;
+ 	vsi = pf->vsi[v_idx];
+-	if (!vsi)
++	if (!vsi) {
++		err = -EFAULT;
+ 		goto err_switch_setup;
++	}
+ 	vsi->alloc_queue_pairs = 1;
+ 	err = i40e_config_netdev(vsi);
+ 	if (err)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 899714243af7a..62b439232fa50 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -2187,8 +2187,7 @@ int i40e_xmit_xdp_tx_ring(struct xdp_buff *xdp, struct i40e_ring *xdp_ring)
+  * @rx_ring: Rx ring being processed
+  * @xdp: XDP buffer containing the frame
+  **/
+-static struct sk_buff *i40e_run_xdp(struct i40e_ring *rx_ring,
+-				    struct xdp_buff *xdp)
++static int i40e_run_xdp(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
+ {
+ 	int err, result = I40E_XDP_PASS;
+ 	struct i40e_ring *xdp_ring;
+@@ -2227,7 +2226,7 @@ static struct sk_buff *i40e_run_xdp(struct i40e_ring *rx_ring,
+ 	}
+ xdp_out:
+ 	rcu_read_unlock();
+-	return ERR_PTR(-result);
++	return result;
+ }
+ 
+ /**
+@@ -2339,6 +2338,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
+ 	unsigned int xdp_xmit = 0;
+ 	bool failure = false;
+ 	struct xdp_buff xdp;
++	int xdp_res = 0;
+ 
+ #if (PAGE_SIZE < 8192)
+ 	xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, 0);
+@@ -2405,12 +2405,10 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
+ 			/* At larger PAGE_SIZE, frame_sz depend on len size */
+ 			xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, size);
+ #endif
+-			skb = i40e_run_xdp(rx_ring, &xdp);
++			xdp_res = i40e_run_xdp(rx_ring, &xdp);
+ 		}
+ 
+-		if (IS_ERR(skb)) {
+-			unsigned int xdp_res = -PTR_ERR(skb);
+-
++		if (xdp_res) {
+ 			if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) {
+ 				xdp_xmit |= xdp_res;
+ 				i40e_rx_buffer_flip(rx_ring, rx_buffer, size);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 3b269c70dcfe1..e4f13a49c3df8 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -137,6 +137,7 @@ void i40e_vc_notify_vf_reset(struct i40e_vf *vf)
+  **/
+ static inline void i40e_vc_disable_vf(struct i40e_vf *vf)
+ {
++	struct i40e_pf *pf = vf->pf;
+ 	int i;
+ 
+ 	i40e_vc_notify_vf_reset(vf);
+@@ -147,6 +148,11 @@ static inline void i40e_vc_disable_vf(struct i40e_vf *vf)
+ 	 * ensure a reset.
+ 	 */
+ 	for (i = 0; i < 20; i++) {
++		/* If PF is in VFs releasing state reset VF is impossible,
++		 * so leave it.
++		 */
++		if (test_bit(__I40E_VFS_RELEASING, pf->state))
++			return;
+ 		if (i40e_reset_vf(vf, false))
+ 			return;
+ 		usleep_range(10000, 20000);
+@@ -1574,6 +1580,8 @@ void i40e_free_vfs(struct i40e_pf *pf)
+ 
+ 	if (!pf->vf)
+ 		return;
++
++	set_bit(__I40E_VFS_RELEASING, pf->state);
+ 	while (test_and_set_bit(__I40E_VF_DISABLE, pf->state))
+ 		usleep_range(1000, 2000);
+ 
+@@ -1631,6 +1639,7 @@ void i40e_free_vfs(struct i40e_pf *pf)
+ 		}
+ 	}
+ 	clear_bit(__I40E_VF_DISABLE, pf->state);
++	clear_bit(__I40E_VFS_RELEASING, pf->state);
+ }
+ 
+ #ifdef CONFIG_PCI_IOV
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 5b3f2bb22eba7..6a57b41ddb545 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -194,7 +194,6 @@ enum ice_state {
+ 	__ICE_NEEDS_RESTART,
+ 	__ICE_PREPARED_FOR_RESET,	/* set by driver when prepared */
+ 	__ICE_RESET_OICR_RECV,		/* set by driver after rcv reset OICR */
+-	__ICE_DCBNL_DEVRESET,		/* set by dcbnl devreset */
+ 	__ICE_PFR_REQ,			/* set by driver and peers */
+ 	__ICE_CORER_REQ,		/* set by driver and peers */
+ 	__ICE_GLOBR_REQ,		/* set by driver and peers */
+@@ -587,7 +586,7 @@ int ice_schedule_reset(struct ice_pf *pf, enum ice_reset_req reset);
+ void ice_print_link_msg(struct ice_vsi *vsi, bool isup);
+ const char *ice_stat_str(enum ice_status stat_err);
+ const char *ice_aq_str(enum ice_aq_err aq_err);
+-bool ice_is_wol_supported(struct ice_pf *pf);
++bool ice_is_wol_supported(struct ice_hw *hw);
+ int
+ ice_fdir_write_fltr(struct ice_pf *pf, struct ice_fdir_fltr *input, bool add,
+ 		    bool is_tun);
+@@ -605,6 +604,7 @@ int ice_fdir_create_dflt_rules(struct ice_pf *pf);
+ int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
+ 			  struct ice_rq_event_info *event);
+ int ice_open(struct net_device *netdev);
++int ice_open_internal(struct net_device *netdev);
+ int ice_stop(struct net_device *netdev);
+ void ice_service_task_schedule(struct ice_pf *pf);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 7db5fd9773672..2239a5f45e5a7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -717,8 +717,8 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
+ 
+ 			if (!data) {
+ 				data = devm_kcalloc(ice_hw_to_dev(hw),
+-						    sizeof(*data),
+ 						    ICE_AQC_FW_LOG_ID_MAX,
++						    sizeof(*data),
+ 						    GFP_KERNEL);
+ 				if (!data)
+ 					return ICE_ERR_NO_MEMORY;
+diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.h b/drivers/net/ethernet/intel/ice/ice_controlq.h
+index faaa08e8171b5..68866f4f0eb09 100644
+--- a/drivers/net/ethernet/intel/ice/ice_controlq.h
++++ b/drivers/net/ethernet/intel/ice/ice_controlq.h
+@@ -31,8 +31,8 @@ enum ice_ctl_q {
+ 	ICE_CTL_Q_MAILBOX,
+ };
+ 
+-/* Control Queue timeout settings - max delay 250ms */
+-#define ICE_CTL_Q_SQ_CMD_TIMEOUT	2500  /* Count 2500 times */
++/* Control Queue timeout settings - max delay 1s */
++#define ICE_CTL_Q_SQ_CMD_TIMEOUT	10000 /* Count 10000 times */
+ #define ICE_CTL_Q_SQ_CMD_USEC		100   /* Check every 100usec */
+ #define ICE_CTL_Q_ADMIN_INIT_TIMEOUT	10    /* Count 10 times */
+ #define ICE_CTL_Q_ADMIN_INIT_MSEC	100   /* Check every 100msec */
+diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.c b/drivers/net/ethernet/intel/ice/ice_dcb.c
+index 2a3147ee0bbb1..211ac6f907adb 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dcb.c
++++ b/drivers/net/ethernet/intel/ice/ice_dcb.c
+@@ -738,22 +738,27 @@ ice_aq_get_cee_dcb_cfg(struct ice_hw *hw,
+ /**
+  * ice_cee_to_dcb_cfg
+  * @cee_cfg: pointer to CEE configuration struct
+- * @dcbcfg: DCB configuration struct
++ * @pi: port information structure
+  *
+  * Convert CEE configuration from firmware to DCB configuration
+  */
+ static void
+ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
+-		   struct ice_dcbx_cfg *dcbcfg)
++		   struct ice_port_info *pi)
+ {
+ 	u32 status, tlv_status = le32_to_cpu(cee_cfg->tlv_status);
+ 	u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift;
++	u8 i, j, err, sync, oper, app_index, ice_app_sel_type;
+ 	u16 app_prio = le16_to_cpu(cee_cfg->oper_app_prio);
+-	u8 i, err, sync, oper, app_index, ice_app_sel_type;
+ 	u16 ice_aqc_cee_app_mask, ice_aqc_cee_app_shift;
++	struct ice_dcbx_cfg *cmp_dcbcfg, *dcbcfg;
+ 	u16 ice_app_prot_id_type;
+ 
+-	/* CEE PG data to ETS config */
++	dcbcfg = &pi->qos_cfg.local_dcbx_cfg;
++	dcbcfg->dcbx_mode = ICE_DCBX_MODE_CEE;
++	dcbcfg->tlv_status = tlv_status;
++
++	/* CEE PG data */
+ 	dcbcfg->etscfg.maxtcs = cee_cfg->oper_num_tc;
+ 
+ 	/* Note that the FW creates the oper_prio_tc nibbles reversed
+@@ -780,10 +785,16 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
+ 		}
+ 	}
+ 
+-	/* CEE PFC data to ETS config */
++	/* CEE PFC data */
+ 	dcbcfg->pfc.pfcena = cee_cfg->oper_pfc_en;
+ 	dcbcfg->pfc.pfccap = ICE_MAX_TRAFFIC_CLASS;
+ 
++	/* CEE APP TLV data */
++	if (dcbcfg->app_mode == ICE_DCBX_APPS_NON_WILLING)
++		cmp_dcbcfg = &pi->qos_cfg.desired_dcbx_cfg;
++	else
++		cmp_dcbcfg = &pi->qos_cfg.remote_dcbx_cfg;
++
+ 	app_index = 0;
+ 	for (i = 0; i < 3; i++) {
+ 		if (i == 0) {
+@@ -802,6 +813,18 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
+ 			ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_ISCSI_S;
+ 			ice_app_sel_type = ICE_APP_SEL_TCPIP;
+ 			ice_app_prot_id_type = ICE_APP_PROT_ID_ISCSI;
++
++			for (j = 0; j < cmp_dcbcfg->numapps; j++) {
++				u16 prot_id = cmp_dcbcfg->app[j].prot_id;
++				u8 sel = cmp_dcbcfg->app[j].selector;
++
++				if  (sel == ICE_APP_SEL_TCPIP &&
++				     (prot_id == ICE_APP_PROT_ID_ISCSI ||
++				      prot_id == ICE_APP_PROT_ID_ISCSI_860)) {
++					ice_app_prot_id_type = prot_id;
++					break;
++				}
++			}
+ 		} else {
+ 			/* FIP APP */
+ 			ice_aqc_cee_status_mask = ICE_AQC_CEE_FIP_STATUS_M;
+@@ -850,9 +873,9 @@ ice_get_ieee_or_cee_dcb_cfg(struct ice_port_info *pi, u8 dcbx_mode)
+ 		return ICE_ERR_PARAM;
+ 
+ 	if (dcbx_mode == ICE_DCBX_MODE_IEEE)
+-		dcbx_cfg = &pi->local_dcbx_cfg;
++		dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
+ 	else if (dcbx_mode == ICE_DCBX_MODE_CEE)
+-		dcbx_cfg = &pi->desired_dcbx_cfg;
++		dcbx_cfg = &pi->qos_cfg.desired_dcbx_cfg;
+ 
+ 	/* Get Local DCB Config in case of ICE_DCBX_MODE_IEEE
+ 	 * or get CEE DCB Desired Config in case of ICE_DCBX_MODE_CEE
+@@ -863,7 +886,7 @@ ice_get_ieee_or_cee_dcb_cfg(struct ice_port_info *pi, u8 dcbx_mode)
+ 		goto out;
+ 
+ 	/* Get Remote DCB Config */
+-	dcbx_cfg = &pi->remote_dcbx_cfg;
++	dcbx_cfg = &pi->qos_cfg.remote_dcbx_cfg;
+ 	ret = ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_REMOTE,
+ 				 ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID, dcbx_cfg);
+ 	/* Don't treat ENOENT as an error for Remote MIBs */
+@@ -892,14 +915,11 @@ enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi)
+ 	ret = ice_aq_get_cee_dcb_cfg(pi->hw, &cee_cfg, NULL);
+ 	if (!ret) {
+ 		/* CEE mode */
+-		dcbx_cfg = &pi->local_dcbx_cfg;
+-		dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_CEE;
+-		dcbx_cfg->tlv_status = le32_to_cpu(cee_cfg.tlv_status);
+-		ice_cee_to_dcb_cfg(&cee_cfg, dcbx_cfg);
+ 		ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_CEE);
++		ice_cee_to_dcb_cfg(&cee_cfg, pi);
+ 	} else if (pi->hw->adminq.sq_last_status == ICE_AQ_RC_ENOENT) {
+ 		/* CEE mode not enabled try querying IEEE data */
+-		dcbx_cfg = &pi->local_dcbx_cfg;
++		dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
+ 		dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_IEEE;
+ 		ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_IEEE);
+ 	}
+@@ -916,26 +936,26 @@ enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi)
+  */
+ enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
+ {
+-	struct ice_port_info *pi = hw->port_info;
++	struct ice_qos_cfg *qos_cfg = &hw->port_info->qos_cfg;
+ 	enum ice_status ret = 0;
+ 
+ 	if (!hw->func_caps.common_cap.dcb)
+ 		return ICE_ERR_NOT_SUPPORTED;
+ 
+-	pi->is_sw_lldp = true;
++	qos_cfg->is_sw_lldp = true;
+ 
+ 	/* Get DCBX status */
+-	pi->dcbx_status = ice_get_dcbx_status(hw);
++	qos_cfg->dcbx_status = ice_get_dcbx_status(hw);
+ 
+-	if (pi->dcbx_status == ICE_DCBX_STATUS_DONE ||
+-	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS ||
+-	    pi->dcbx_status == ICE_DCBX_STATUS_NOT_STARTED) {
++	if (qos_cfg->dcbx_status == ICE_DCBX_STATUS_DONE ||
++	    qos_cfg->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS ||
++	    qos_cfg->dcbx_status == ICE_DCBX_STATUS_NOT_STARTED) {
+ 		/* Get current DCBX configuration */
+-		ret = ice_get_dcb_cfg(pi);
++		ret = ice_get_dcb_cfg(hw->port_info);
+ 		if (ret)
+ 			return ret;
+-		pi->is_sw_lldp = false;
+-	} else if (pi->dcbx_status == ICE_DCBX_STATUS_DIS) {
++		qos_cfg->is_sw_lldp = false;
++	} else if (qos_cfg->dcbx_status == ICE_DCBX_STATUS_DIS) {
+ 		return ICE_ERR_NOT_READY;
+ 	}
+ 
+@@ -943,7 +963,7 @@ enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
+ 	if (enable_mib_change) {
+ 		ret = ice_aq_cfg_lldp_mib_change(hw, true, NULL);
+ 		if (ret)
+-			pi->is_sw_lldp = true;
++			qos_cfg->is_sw_lldp = true;
+ 	}
+ 
+ 	return ret;
+@@ -958,21 +978,21 @@ enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
+  */
+ enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib)
+ {
+-	struct ice_port_info *pi = hw->port_info;
++	struct ice_qos_cfg *qos_cfg = &hw->port_info->qos_cfg;
+ 	enum ice_status ret;
+ 
+ 	if (!hw->func_caps.common_cap.dcb)
+ 		return ICE_ERR_NOT_SUPPORTED;
+ 
+ 	/* Get DCBX status */
+-	pi->dcbx_status = ice_get_dcbx_status(hw);
++	qos_cfg->dcbx_status = ice_get_dcbx_status(hw);
+ 
+-	if (pi->dcbx_status == ICE_DCBX_STATUS_DIS)
++	if (qos_cfg->dcbx_status == ICE_DCBX_STATUS_DIS)
+ 		return ICE_ERR_NOT_READY;
+ 
+ 	ret = ice_aq_cfg_lldp_mib_change(hw, ena_mib, NULL);
+ 	if (!ret)
+-		pi->is_sw_lldp = !ena_mib;
++		qos_cfg->is_sw_lldp = !ena_mib;
+ 
+ 	return ret;
+ }
+@@ -1270,7 +1290,7 @@ enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi)
+ 	hw = pi->hw;
+ 
+ 	/* update the HW local config */
+-	dcbcfg = &pi->local_dcbx_cfg;
++	dcbcfg = &pi->qos_cfg.local_dcbx_cfg;
+ 	/* Allocate the LLDPDU */
+ 	lldpmib = devm_kzalloc(ice_hw_to_dev(hw), ICE_LLDPDU_SIZE, GFP_KERNEL);
+ 	if (!lldpmib)
+diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
+index 36abd6b7280c8..1e8f71ffc8ce7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
+@@ -28,7 +28,7 @@ void ice_vsi_cfg_netdev_tc(struct ice_vsi *vsi, u8 ena_tc)
+ 	if (netdev_set_num_tc(netdev, vsi->tc_cfg.numtc))
+ 		return;
+ 
+-	dcbcfg = &pf->hw.port_info->local_dcbx_cfg;
++	dcbcfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
+ 
+ 	ice_for_each_traffic_class(i)
+ 		if (vsi->tc_cfg.ena_tc & BIT(i))
+@@ -134,7 +134,7 @@ static u8 ice_dcb_get_mode(struct ice_port_info *port_info, bool host)
+ 	else
+ 		mode = DCB_CAP_DCBX_LLD_MANAGED;
+ 
+-	if (port_info->local_dcbx_cfg.dcbx_mode & ICE_DCBX_MODE_CEE)
++	if (port_info->qos_cfg.local_dcbx_cfg.dcbx_mode & ICE_DCBX_MODE_CEE)
+ 		return mode | DCB_CAP_DCBX_VER_CEE;
+ 	else
+ 		return mode | DCB_CAP_DCBX_VER_IEEE;
+@@ -277,10 +277,10 @@ int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked)
+ 	int ret = ICE_DCB_NO_HW_CHG;
+ 	struct ice_vsi *pf_vsi;
+ 
+-	curr_cfg = &pf->hw.port_info->local_dcbx_cfg;
++	curr_cfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
+ 
+ 	/* FW does not care if change happened */
+-	if (!pf->hw.port_info->is_sw_lldp)
++	if (!pf->hw.port_info->qos_cfg.is_sw_lldp)
+ 		ret = ICE_DCB_HW_CHG_RST;
+ 
+ 	/* Enable DCB tagging only when more than one TC */
+@@ -327,7 +327,7 @@ int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked)
+ 	/* Only send new config to HW if we are in SW LLDP mode. Otherwise,
+ 	 * the new config came from the HW in the first place.
+ 	 */
+-	if (pf->hw.port_info->is_sw_lldp) {
++	if (pf->hw.port_info->qos_cfg.is_sw_lldp) {
+ 		ret = ice_set_dcb_cfg(pf->hw.port_info);
+ 		if (ret) {
+ 			dev_err(dev, "Set DCB Config failed\n");
+@@ -360,7 +360,7 @@ free_cfg:
+  */
+ static void ice_cfg_etsrec_defaults(struct ice_port_info *pi)
+ {
+-	struct ice_dcbx_cfg *dcbcfg = &pi->local_dcbx_cfg;
++	struct ice_dcbx_cfg *dcbcfg = &pi->qos_cfg.local_dcbx_cfg;
+ 	u8 i;
+ 
+ 	/* Ensure ETS recommended DCB configuration is not already set */
+@@ -446,7 +446,7 @@ void ice_dcb_rebuild(struct ice_pf *pf)
+ 
+ 	mutex_lock(&pf->tc_mutex);
+ 
+-	if (!pf->hw.port_info->is_sw_lldp)
++	if (!pf->hw.port_info->qos_cfg.is_sw_lldp)
+ 		ice_cfg_etsrec_defaults(pf->hw.port_info);
+ 
+ 	ret = ice_set_dcb_cfg(pf->hw.port_info);
+@@ -455,9 +455,9 @@ void ice_dcb_rebuild(struct ice_pf *pf)
+ 		goto dcb_error;
+ 	}
+ 
+-	if (!pf->hw.port_info->is_sw_lldp) {
++	if (!pf->hw.port_info->qos_cfg.is_sw_lldp) {
+ 		ret = ice_cfg_lldp_mib_change(&pf->hw, true);
+-		if (ret && !pf->hw.port_info->is_sw_lldp) {
++		if (ret && !pf->hw.port_info->qos_cfg.is_sw_lldp) {
+ 			dev_err(dev, "Failed to register for MIB changes\n");
+ 			goto dcb_error;
+ 		}
+@@ -510,11 +510,12 @@ static int ice_dcb_init_cfg(struct ice_pf *pf, bool locked)
+ 	int ret = 0;
+ 
+ 	pi = pf->hw.port_info;
+-	newcfg = kmemdup(&pi->local_dcbx_cfg, sizeof(*newcfg), GFP_KERNEL);
++	newcfg = kmemdup(&pi->qos_cfg.local_dcbx_cfg, sizeof(*newcfg),
++			 GFP_KERNEL);
+ 	if (!newcfg)
+ 		return -ENOMEM;
+ 
+-	memset(&pi->local_dcbx_cfg, 0, sizeof(*newcfg));
++	memset(&pi->qos_cfg.local_dcbx_cfg, 0, sizeof(*newcfg));
+ 
+ 	dev_info(ice_pf_to_dev(pf), "Configuring initial DCB values\n");
+ 	if (ice_pf_dcb_cfg(pf, newcfg, locked))
+@@ -545,7 +546,7 @@ static int ice_dcb_sw_dflt_cfg(struct ice_pf *pf, bool ets_willing, bool locked)
+ 	if (!dcbcfg)
+ 		return -ENOMEM;
+ 
+-	memset(&pi->local_dcbx_cfg, 0, sizeof(*dcbcfg));
++	memset(&pi->qos_cfg.local_dcbx_cfg, 0, sizeof(*dcbcfg));
+ 
+ 	dcbcfg->etscfg.willing = ets_willing ? 1 : 0;
+ 	dcbcfg->etscfg.maxtcs = hw->func_caps.common_cap.maxtc;
+@@ -608,7 +609,7 @@ static bool ice_dcb_tc_contig(u8 *prio_table)
+  */
+ static int ice_dcb_noncontig_cfg(struct ice_pf *pf)
+ {
+-	struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->local_dcbx_cfg;
++	struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
+ 	struct device *dev = ice_pf_to_dev(pf);
+ 	int ret;
+ 
+@@ -638,7 +639,7 @@ static int ice_dcb_noncontig_cfg(struct ice_pf *pf)
+  */
+ void ice_pf_dcb_recfg(struct ice_pf *pf)
+ {
+-	struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->local_dcbx_cfg;
++	struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
+ 	u8 tc_map = 0;
+ 	int v, ret;
+ 
+@@ -691,7 +692,7 @@ int ice_init_pf_dcb(struct ice_pf *pf, bool locked)
+ 	port_info = hw->port_info;
+ 
+ 	err = ice_init_dcb(hw, false);
+-	if (err && !port_info->is_sw_lldp) {
++	if (err && !port_info->qos_cfg.is_sw_lldp) {
+ 		dev_err(dev, "Error initializing DCB %d\n", err);
+ 		goto dcb_init_err;
+ 	}
+@@ -858,7 +859,7 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
+ 		/* Update the remote cached instance and return */
+ 		ret = ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_REMOTE,
+ 					 ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID,
+-					 &pi->remote_dcbx_cfg);
++					 &pi->qos_cfg.remote_dcbx_cfg);
+ 		if (ret) {
+ 			dev_err(dev, "Failed to get remote DCB config\n");
+ 			return;
+@@ -868,10 +869,11 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
+ 	mutex_lock(&pf->tc_mutex);
+ 
+ 	/* store the old configuration */
+-	tmp_dcbx_cfg = pf->hw.port_info->local_dcbx_cfg;
++	tmp_dcbx_cfg = pf->hw.port_info->qos_cfg.local_dcbx_cfg;
+ 
+ 	/* Reset the old DCBX configuration data */
+-	memset(&pi->local_dcbx_cfg, 0, sizeof(pi->local_dcbx_cfg));
++	memset(&pi->qos_cfg.local_dcbx_cfg, 0,
++	       sizeof(pi->qos_cfg.local_dcbx_cfg));
+ 
+ 	/* Get updated DCBX data from firmware */
+ 	ret = ice_get_dcb_cfg(pf->hw.port_info);
+@@ -881,7 +883,8 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
+ 	}
+ 
+ 	/* No change detected in DCBX configs */
+-	if (!memcmp(&tmp_dcbx_cfg, &pi->local_dcbx_cfg, sizeof(tmp_dcbx_cfg))) {
++	if (!memcmp(&tmp_dcbx_cfg, &pi->qos_cfg.local_dcbx_cfg,
++		    sizeof(tmp_dcbx_cfg))) {
+ 		dev_dbg(dev, "No change detected in DCBX configuration.\n");
+ 		goto out;
+ 	}
+@@ -889,13 +892,13 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
+ 	pf->dcbx_cap = ice_dcb_get_mode(pi, false);
+ 
+ 	need_reconfig = ice_dcb_need_recfg(pf, &tmp_dcbx_cfg,
+-					   &pi->local_dcbx_cfg);
+-	ice_dcbnl_flush_apps(pf, &tmp_dcbx_cfg, &pi->local_dcbx_cfg);
++					   &pi->qos_cfg.local_dcbx_cfg);
++	ice_dcbnl_flush_apps(pf, &tmp_dcbx_cfg, &pi->qos_cfg.local_dcbx_cfg);
+ 	if (!need_reconfig)
+ 		goto out;
+ 
+ 	/* Enable DCB tagging only when more than one TC */
+-	if (ice_dcb_get_num_tc(&pi->local_dcbx_cfg) > 1) {
++	if (ice_dcb_get_num_tc(&pi->qos_cfg.local_dcbx_cfg) > 1) {
+ 		dev_dbg(dev, "DCB tagging enabled (num TC > 1)\n");
+ 		set_bit(ICE_FLAG_DCB_ENA, pf->flags);
+ 	} else {
+diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_nl.c b/drivers/net/ethernet/intel/ice/ice_dcb_nl.c
+index 8c133a8be6add..4180f1f35fb89 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dcb_nl.c
++++ b/drivers/net/ethernet/intel/ice/ice_dcb_nl.c
+@@ -18,12 +18,10 @@ static void ice_dcbnl_devreset(struct net_device *netdev)
+ 	while (ice_is_reset_in_progress(pf->state))
+ 		usleep_range(1000, 2000);
+ 
+-	set_bit(__ICE_DCBNL_DEVRESET, pf->state);
+ 	dev_close(netdev);
+ 	netdev_state_change(netdev);
+ 	dev_open(netdev, NULL);
+ 	netdev_state_change(netdev);
+-	clear_bit(__ICE_DCBNL_DEVRESET, pf->state);
+ }
+ 
+ /**
+@@ -34,12 +32,10 @@ static void ice_dcbnl_devreset(struct net_device *netdev)
+ static int ice_dcbnl_getets(struct net_device *netdev, struct ieee_ets *ets)
+ {
+ 	struct ice_dcbx_cfg *dcbxcfg;
+-	struct ice_port_info *pi;
+ 	struct ice_pf *pf;
+ 
+ 	pf = ice_netdev_to_pf(netdev);
+-	pi = pf->hw.port_info;
+-	dcbxcfg = &pi->local_dcbx_cfg;
++	dcbxcfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
+ 
+ 	ets->willing = dcbxcfg->etscfg.willing;
+ 	ets->ets_cap = dcbxcfg->etscfg.maxtcs;
+@@ -74,7 +70,7 @@ static int ice_dcbnl_setets(struct net_device *netdev, struct ieee_ets *ets)
+ 	    !(pf->dcbx_cap & DCB_CAP_DCBX_VER_IEEE))
+ 		return -EINVAL;
+ 
+-	new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
++	new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
+ 
+ 	mutex_lock(&pf->tc_mutex);
+ 
+@@ -159,6 +155,7 @@ static u8 ice_dcbnl_getdcbx(struct net_device *netdev)
+ static u8 ice_dcbnl_setdcbx(struct net_device *netdev, u8 mode)
+ {
+ 	struct ice_pf *pf = ice_netdev_to_pf(netdev);
++	struct ice_qos_cfg *qos_cfg;
+ 
+ 	/* if FW LLDP agent is running, DCBNL not allowed to change mode */
+ 	if (test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags))
+@@ -175,10 +172,11 @@ static u8 ice_dcbnl_setdcbx(struct net_device *netdev, u8 mode)
+ 		return ICE_DCB_NO_HW_CHG;
+ 
+ 	pf->dcbx_cap = mode;
++	qos_cfg = &pf->hw.port_info->qos_cfg;
+ 	if (mode & DCB_CAP_DCBX_VER_CEE)
+-		pf->hw.port_info->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_CEE;
++		qos_cfg->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_CEE;
+ 	else
+-		pf->hw.port_info->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_IEEE;
++		qos_cfg->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_IEEE;
+ 
+ 	dev_info(ice_pf_to_dev(pf), "DCBx mode = 0x%x\n", mode);
+ 	return ICE_DCB_HW_CHG_RST;
+@@ -229,7 +227,7 @@ static int ice_dcbnl_getpfc(struct net_device *netdev, struct ieee_pfc *pfc)
+ 	struct ice_dcbx_cfg *dcbxcfg;
+ 	int i;
+ 
+-	dcbxcfg = &pi->local_dcbx_cfg;
++	dcbxcfg = &pi->qos_cfg.local_dcbx_cfg;
+ 	pfc->pfc_cap = dcbxcfg->pfc.pfccap;
+ 	pfc->pfc_en = dcbxcfg->pfc.pfcena;
+ 	pfc->mbc = dcbxcfg->pfc.mbc;
+@@ -260,7 +258,7 @@ static int ice_dcbnl_setpfc(struct net_device *netdev, struct ieee_pfc *pfc)
+ 
+ 	mutex_lock(&pf->tc_mutex);
+ 
+-	new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
++	new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
+ 
+ 	if (pfc->pfc_cap)
+ 		new_cfg->pfc.pfccap = pfc->pfc_cap;
+@@ -297,9 +295,9 @@ ice_dcbnl_get_pfc_cfg(struct net_device *netdev, int prio, u8 *setting)
+ 	if (prio >= ICE_MAX_USER_PRIORITY)
+ 		return;
+ 
+-	*setting = (pi->local_dcbx_cfg.pfc.pfcena >> prio) & 0x1;
++	*setting = (pi->qos_cfg.local_dcbx_cfg.pfc.pfcena >> prio) & 0x1;
+ 	dev_dbg(ice_pf_to_dev(pf), "Get PFC Config up=%d, setting=%d, pfcenable=0x%x\n",
+-		prio, *setting, pi->local_dcbx_cfg.pfc.pfcena);
++		prio, *setting, pi->qos_cfg.local_dcbx_cfg.pfc.pfcena);
+ }
+ 
+ /**
+@@ -320,7 +318,7 @@ static void ice_dcbnl_set_pfc_cfg(struct net_device *netdev, int prio, u8 set)
+ 	if (prio >= ICE_MAX_USER_PRIORITY)
+ 		return;
+ 
+-	new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
++	new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
+ 
+ 	new_cfg->pfc.pfccap = pf->hw.func_caps.common_cap.maxtc;
+ 	if (set)
+@@ -342,7 +340,7 @@ static u8 ice_dcbnl_getpfcstate(struct net_device *netdev)
+ 	struct ice_port_info *pi = pf->hw.port_info;
+ 
+ 	/* Return enabled if any UP enabled for PFC */
+-	if (pi->local_dcbx_cfg.pfc.pfcena)
++	if (pi->qos_cfg.local_dcbx_cfg.pfc.pfcena)
+ 		return 1;
+ 
+ 	return 0;
+@@ -382,8 +380,8 @@ static u8 ice_dcbnl_setstate(struct net_device *netdev, u8 state)
+ 
+ 	if (state) {
+ 		set_bit(ICE_FLAG_DCB_ENA, pf->flags);
+-		memcpy(&pf->hw.port_info->desired_dcbx_cfg,
+-		       &pf->hw.port_info->local_dcbx_cfg,
++		memcpy(&pf->hw.port_info->qos_cfg.desired_dcbx_cfg,
++		       &pf->hw.port_info->qos_cfg.local_dcbx_cfg,
+ 		       sizeof(struct ice_dcbx_cfg));
+ 	} else {
+ 		clear_bit(ICE_FLAG_DCB_ENA, pf->flags);
+@@ -417,7 +415,7 @@ ice_dcbnl_get_pg_tc_cfg_tx(struct net_device *netdev, int prio,
+ 	if (prio >= ICE_MAX_USER_PRIORITY)
+ 		return;
+ 
+-	*pgid = pi->local_dcbx_cfg.etscfg.prio_table[prio];
++	*pgid = pi->qos_cfg.local_dcbx_cfg.etscfg.prio_table[prio];
+ 	dev_dbg(ice_pf_to_dev(pf), "Get PG config prio=%d tc=%d\n", prio,
+ 		*pgid);
+ }
+@@ -448,7 +446,7 @@ ice_dcbnl_set_pg_tc_cfg_tx(struct net_device *netdev, int tc,
+ 	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+ 		return;
+ 
+-	new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
++	new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
+ 
+ 	/* prio_type, bwg_id and bw_pct per UP are not supported */
+ 
+@@ -478,7 +476,7 @@ ice_dcbnl_get_pg_bwg_cfg_tx(struct net_device *netdev, int pgid, u8 *bw_pct)
+ 	if (pgid >= ICE_MAX_TRAFFIC_CLASS)
+ 		return;
+ 
+-	*bw_pct = pi->local_dcbx_cfg.etscfg.tcbwtable[pgid];
++	*bw_pct = pi->qos_cfg.local_dcbx_cfg.etscfg.tcbwtable[pgid];
+ 	dev_dbg(ice_pf_to_dev(pf), "Get PG BW config tc=%d bw_pct=%d\n",
+ 		pgid, *bw_pct);
+ }
+@@ -502,7 +500,7 @@ ice_dcbnl_set_pg_bwg_cfg_tx(struct net_device *netdev, int pgid, u8 bw_pct)
+ 	if (pgid >= ICE_MAX_TRAFFIC_CLASS)
+ 		return;
+ 
+-	new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
++	new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
+ 
+ 	new_cfg->etscfg.tcbwtable[pgid] = bw_pct;
+ }
+@@ -532,7 +530,7 @@ ice_dcbnl_get_pg_tc_cfg_rx(struct net_device *netdev, int prio,
+ 	if (prio >= ICE_MAX_USER_PRIORITY)
+ 		return;
+ 
+-	*pgid = pi->local_dcbx_cfg.etscfg.prio_table[prio];
++	*pgid = pi->qos_cfg.local_dcbx_cfg.etscfg.prio_table[prio];
+ }
+ 
+ /**
+@@ -703,9 +701,9 @@ static int ice_dcbnl_setapp(struct net_device *netdev, struct dcb_app *app)
+ 
+ 	mutex_lock(&pf->tc_mutex);
+ 
+-	new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
++	new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
+ 
+-	old_cfg = &pf->hw.port_info->local_dcbx_cfg;
++	old_cfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
+ 
+ 	if (old_cfg->numapps == ICE_DCBX_MAX_APPS) {
+ 		ret = -EINVAL;
+@@ -755,7 +753,7 @@ static int ice_dcbnl_delapp(struct net_device *netdev, struct dcb_app *app)
+ 		return -EINVAL;
+ 
+ 	mutex_lock(&pf->tc_mutex);
+-	old_cfg = &pf->hw.port_info->local_dcbx_cfg;
++	old_cfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
+ 
+ 	if (old_cfg->numapps <= 1)
+ 		goto delapp_out;
+@@ -764,7 +762,7 @@ static int ice_dcbnl_delapp(struct net_device *netdev, struct dcb_app *app)
+ 	if (ret)
+ 		goto delapp_out;
+ 
+-	new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
++	new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
+ 
+ 	for (i = 1; i < new_cfg->numapps; i++) {
+ 		if (app->selector == new_cfg->app[i].selector &&
+@@ -817,7 +815,7 @@ static u8 ice_dcbnl_cee_set_all(struct net_device *netdev)
+ 	    !(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE))
+ 		return ICE_DCB_NO_HW_CHG;
+ 
+-	new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
++	new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
+ 
+ 	mutex_lock(&pf->tc_mutex);
+ 
+@@ -888,7 +886,7 @@ void ice_dcbnl_set_all(struct ice_vsi *vsi)
+ 	if (!test_bit(ICE_FLAG_DCB_ENA, pf->flags))
+ 		return;
+ 
+-	dcbxcfg = &pi->local_dcbx_cfg;
++	dcbxcfg = &pi->qos_cfg.local_dcbx_cfg;
+ 
+ 	for (i = 0; i < dcbxcfg->numapps; i++) {
+ 		u8 prio, tc_map;
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index aebebd2102da0..d70573f5072c6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -2986,7 +2986,7 @@ ice_get_pauseparam(struct net_device *netdev, struct ethtool_pauseparam *pause)
+ 	pause->rx_pause = 0;
+ 	pause->tx_pause = 0;
+ 
+-	dcbx_cfg = &pi->local_dcbx_cfg;
++	dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
+ 
+ 	pcaps = kzalloc(sizeof(*pcaps), GFP_KERNEL);
+ 	if (!pcaps)
+@@ -3038,7 +3038,7 @@ ice_set_pauseparam(struct net_device *netdev, struct ethtool_pauseparam *pause)
+ 
+ 	pi = vsi->port_info;
+ 	hw_link_info = &pi->phy.link_info;
+-	dcbx_cfg = &pi->local_dcbx_cfg;
++	dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
+ 	link_up = hw_link_info->link_info & ICE_AQ_LINK_UP;
+ 
+ 	/* Changing the port's flow control is not supported if this isn't the
+@@ -3472,7 +3472,7 @@ static void ice_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+ 		netdev_warn(netdev, "Wake on LAN is not supported on this interface!\n");
+ 
+ 	/* Get WoL settings based on the HW capability */
+-	if (ice_is_wol_supported(pf)) {
++	if (ice_is_wol_supported(&pf->hw)) {
+ 		wol->supported = WAKE_MAGIC;
+ 		wol->wolopts = pf->wol_ena ? WAKE_MAGIC : 0;
+ 	} else {
+@@ -3492,7 +3492,7 @@ static int ice_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+ 	struct ice_vsi *vsi = np->vsi;
+ 	struct ice_pf *pf = vsi->back;
+ 
+-	if (vsi->type != ICE_VSI_PF || !ice_is_wol_supported(pf))
++	if (vsi->type != ICE_VSI_PF || !ice_is_wol_supported(&pf->hw))
+ 		return -EOPNOTSUPP;
+ 
+ 	/* only magic packet is supported */
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index ad9c22a1b97a0..170367eaa95aa 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -2078,7 +2078,7 @@ err_out:
+ 
+ static void ice_vsi_set_tc_cfg(struct ice_vsi *vsi)
+ {
+-	struct ice_dcbx_cfg *cfg = &vsi->port_info->local_dcbx_cfg;
++	struct ice_dcbx_cfg *cfg = &vsi->port_info->qos_cfg.local_dcbx_cfg;
+ 
+ 	vsi->tc_cfg.ena_tc = ice_dcb_get_ena_tc(cfg);
+ 	vsi->tc_cfg.numtc = ice_dcb_get_num_tc(cfg);
+@@ -2489,7 +2489,7 @@ int ice_ena_vsi(struct ice_vsi *vsi, bool locked)
+ 			if (!locked)
+ 				rtnl_lock();
+ 
+-			err = ice_open(vsi->netdev);
++			err = ice_open_internal(vsi->netdev);
+ 
+ 			if (!locked)
+ 				rtnl_unlock();
+@@ -2518,7 +2518,7 @@ void ice_dis_vsi(struct ice_vsi *vsi, bool locked)
+ 			if (!locked)
+ 				rtnl_lock();
+ 
+-			ice_stop(vsi->netdev);
++			ice_vsi_close(vsi);
+ 
+ 			if (!locked)
+ 				rtnl_unlock();
+@@ -2944,7 +2944,6 @@ err_vsi:
+ bool ice_is_reset_in_progress(unsigned long *state)
+ {
+ 	return test_bit(__ICE_RESET_OICR_RECV, state) ||
+-	       test_bit(__ICE_DCBNL_DEVRESET, state) ||
+ 	       test_bit(__ICE_PFR_REQ, state) ||
+ 	       test_bit(__ICE_CORER_REQ, state) ||
+ 	       test_bit(__ICE_GLOBR_REQ, state);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index bacb368063e34..6f30aad7695fb 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -3515,15 +3515,14 @@ static int ice_init_interrupt_scheme(struct ice_pf *pf)
+ }
+ 
+ /**
+- * ice_is_wol_supported - get NVM state of WoL
+- * @pf: board private structure
++ * ice_is_wol_supported - check if WoL is supported
++ * @hw: pointer to hardware info
+  *
+  * Check if WoL is supported based on the HW configuration.
+  * Returns true if NVM supports and enables WoL for this port, false otherwise
+  */
+-bool ice_is_wol_supported(struct ice_pf *pf)
++bool ice_is_wol_supported(struct ice_hw *hw)
+ {
+-	struct ice_hw *hw = &pf->hw;
+ 	u16 wol_ctrl;
+ 
+ 	/* A bit set to 1 in the NVM Software Reserved Word 2 (WoL control
+@@ -3532,7 +3531,7 @@ bool ice_is_wol_supported(struct ice_pf *pf)
+ 	if (ice_read_sr_word(hw, ICE_SR_NVM_WOL_CFG, &wol_ctrl))
+ 		return false;
+ 
+-	return !(BIT(hw->pf_id) & wol_ctrl);
++	return !(BIT(hw->port_info->lport) & wol_ctrl);
+ }
+ 
+ /**
+@@ -4170,28 +4169,25 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
+ 		goto err_send_version_unroll;
+ 	}
+ 
++	/* not a fatal error if this fails */
+ 	err = ice_init_nvm_phy_type(pf->hw.port_info);
+-	if (err) {
++	if (err)
+ 		dev_err(dev, "ice_init_nvm_phy_type failed: %d\n", err);
+-		goto err_send_version_unroll;
+-	}
+ 
++	/* not a fatal error if this fails */
+ 	err = ice_update_link_info(pf->hw.port_info);
+-	if (err) {
++	if (err)
+ 		dev_err(dev, "ice_update_link_info failed: %d\n", err);
+-		goto err_send_version_unroll;
+-	}
+ 
+ 	ice_init_link_dflt_override(pf->hw.port_info);
+ 
+ 	/* if media available, initialize PHY settings */
+ 	if (pf->hw.port_info->phy.link_info.link_info &
+ 	    ICE_AQ_MEDIA_AVAILABLE) {
++		/* not a fatal error if this fails */
+ 		err = ice_init_phy_user_cfg(pf->hw.port_info);
+-		if (err) {
++		if (err)
+ 			dev_err(dev, "ice_init_phy_user_cfg failed: %d\n", err);
+-			goto err_send_version_unroll;
+-		}
+ 
+ 		if (!test_bit(ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA, pf->flags)) {
+ 			struct ice_vsi *vsi = ice_get_main_vsi(pf);
+@@ -4542,6 +4538,7 @@ static int __maybe_unused ice_suspend(struct device *dev)
+ 			continue;
+ 		ice_vsi_free_q_vectors(pf->vsi[v]);
+ 	}
++	ice_free_cpu_rx_rmap(ice_get_main_vsi(pf));
+ 	ice_clear_interrupt_scheme(pf);
+ 
+ 	pci_save_state(pdev);
+@@ -6616,6 +6613,28 @@ static void ice_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+  * Returns 0 on success, negative value on failure
+  */
+ int ice_open(struct net_device *netdev)
++{
++	struct ice_netdev_priv *np = netdev_priv(netdev);
++	struct ice_pf *pf = np->vsi->back;
++
++	if (ice_is_reset_in_progress(pf->state)) {
++		netdev_err(netdev, "can't open net device while reset is in progress");
++		return -EBUSY;
++	}
++
++	return ice_open_internal(netdev);
++}
++
++/**
++ * ice_open_internal - Called when a network interface becomes active
++ * @netdev: network interface device structure
++ *
++ * Internal ice_open implementation. Should not be used directly except for ice_open and reset
++ * handling routine
++ *
++ * Returns 0 on success, negative value on failure
++ */
++int ice_open_internal(struct net_device *netdev)
+ {
+ 	struct ice_netdev_priv *np = netdev_priv(netdev);
+ 	struct ice_vsi *vsi = np->vsi;
+@@ -6696,6 +6715,12 @@ int ice_stop(struct net_device *netdev)
+ {
+ 	struct ice_netdev_priv *np = netdev_priv(netdev);
+ 	struct ice_vsi *vsi = np->vsi;
++	struct ice_pf *pf = vsi->back;
++
++	if (ice_is_reset_in_progress(pf->state)) {
++		netdev_err(netdev, "can't stop net device while reset is in progress");
++		return -EBUSY;
++	}
+ 
+ 	ice_vsi_close(vsi);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index c3a6c41385ee3..5ce8590cdb374 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -1239,6 +1239,9 @@ ice_add_update_vsi_list(struct ice_hw *hw,
+ 			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+ 						vsi_list_id);
+ 
++		if (!m_entry->vsi_list_info)
++			return ICE_ERR_NO_MEMORY;
++
+ 		/* If this entry was large action then the large action needs
+ 		 * to be updated to point to FWD to VSI list
+ 		 */
+@@ -2224,6 +2227,7 @@ ice_vsi_uses_fltr(struct ice_fltr_mgmt_list_entry *fm_entry, u16 vsi_handle)
+ 	return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI &&
+ 		 fm_entry->fltr_info.vsi_handle == vsi_handle) ||
+ 		(fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST &&
++		 fm_entry->vsi_list_info &&
+ 		 (test_bit(vsi_handle, fm_entry->vsi_list_info->vsi_map))));
+ }
+ 
+@@ -2296,14 +2300,12 @@ ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+ 		return ICE_ERR_PARAM;
+ 
+ 	list_for_each_entry(fm_entry, lkup_list_head, list_entry) {
+-		struct ice_fltr_info *fi;
+-
+-		fi = &fm_entry->fltr_info;
+-		if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
++		if (!ice_vsi_uses_fltr(fm_entry, vsi_handle))
+ 			continue;
+ 
+ 		status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
+-							vsi_list_head, fi);
++							vsi_list_head,
++							&fm_entry->fltr_info);
+ 		if (status)
+ 			return status;
+ 	}
+@@ -2626,7 +2628,7 @@ ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
+ 					  &remove_list_head);
+ 	mutex_unlock(rule_lock);
+ 	if (status)
+-		return;
++		goto free_fltr_list;
+ 
+ 	switch (lkup) {
+ 	case ICE_SW_LKUP_MAC:
+@@ -2649,6 +2651,7 @@ ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
+ 		break;
+ 	}
+ 
++free_fltr_list:
+ 	list_for_each_entry_safe(fm_entry, tmp, &remove_list_head, list_entry) {
+ 		list_del(&fm_entry->list_entry);
+ 		devm_kfree(ice_hw_to_dev(hw), fm_entry);
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index af5b7f33db9af..0f2544c420ac3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -2421,7 +2421,7 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_ring *tx_ring)
+ 	/* allow CONTROL frames egress from main VSI if FW LLDP disabled */
+ 	if (unlikely(skb->priority == TC_PRIO_CONTROL &&
+ 		     vsi->type == ICE_VSI_PF &&
+-		     vsi->port_info->is_sw_lldp))
++		     vsi->port_info->qos_cfg.is_sw_lldp))
+ 		offload.cd_qw1 |= (u64)(ICE_TX_DESC_DTYPE_CTX |
+ 					ICE_TX_CTX_DESC_SWTCH_UPLINK <<
+ 					ICE_TXD_CTX_QW1_CMD_S);
+diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
+index 2226a291a3943..1bed183d96a0d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_type.h
++++ b/drivers/net/ethernet/intel/ice/ice_type.h
+@@ -493,6 +493,7 @@ struct ice_dcb_app_priority_table {
+ #define ICE_TLV_STATUS_ERR	0x4
+ #define ICE_APP_PROT_ID_FCOE	0x8906
+ #define ICE_APP_PROT_ID_ISCSI	0x0cbc
++#define ICE_APP_PROT_ID_ISCSI_860 0x035c
+ #define ICE_APP_PROT_ID_FIP	0x8914
+ #define ICE_APP_SEL_ETHTYPE	0x1
+ #define ICE_APP_SEL_TCPIP	0x2
+@@ -514,6 +515,14 @@ struct ice_dcbx_cfg {
+ #define ICE_DCBX_APPS_NON_WILLING	0x1
+ };
+ 
++struct ice_qos_cfg {
++	struct ice_dcbx_cfg local_dcbx_cfg;	/* Oper/Local Cfg */
++	struct ice_dcbx_cfg desired_dcbx_cfg;	/* CEE Desired Cfg */
++	struct ice_dcbx_cfg remote_dcbx_cfg;	/* Peer Cfg */
++	u8 dcbx_status : 3;			/* see ICE_DCBX_STATUS_DIS */
++	u8 is_sw_lldp : 1;
++};
++
+ struct ice_port_info {
+ 	struct ice_sched_node *root;	/* Root Node per Port */
+ 	struct ice_hw *hw;		/* back pointer to HW instance */
+@@ -537,13 +546,7 @@ struct ice_port_info {
+ 		sib_head[ICE_MAX_TRAFFIC_CLASS][ICE_AQC_TOPO_MAX_LEVEL_NUM];
+ 	/* List contain profile ID(s) and other params per layer */
+ 	struct list_head rl_prof_list[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+-	struct ice_dcbx_cfg local_dcbx_cfg;	/* Oper/Local Cfg */
+-	/* DCBX info */
+-	struct ice_dcbx_cfg remote_dcbx_cfg;	/* Peer Cfg */
+-	struct ice_dcbx_cfg desired_dcbx_cfg;	/* CEE Desired Cfg */
+-	/* LLDP/DCBX Status */
+-	u8 dcbx_status:3;		/* see ICE_DCBX_STATUS_DIS */
+-	u8 is_sw_lldp:1;
++	struct ice_qos_cfg qos_cfg;
+ 	u8 is_vf:1;
+ };
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index b42396df3111d..0469f53dfb99e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -184,6 +184,28 @@ mlx5_tc_ct_entry_has_nat(struct mlx5_ct_entry *entry)
+ 	return !!(entry->tuple_nat_node.next);
+ }
+ 
++static int
++mlx5_get_label_mapping(struct mlx5_tc_ct_priv *ct_priv,
++		       u32 *labels, u32 *id)
++{
++	if (!memchr_inv(labels, 0, sizeof(u32) * 4)) {
++		*id = 0;
++		return 0;
++	}
++
++	if (mapping_add(ct_priv->labels_mapping, labels, id))
++		return -EOPNOTSUPP;
++
++	return 0;
++}
++
++static void
++mlx5_put_label_mapping(struct mlx5_tc_ct_priv *ct_priv, u32 id)
++{
++	if (id)
++		mapping_remove(ct_priv->labels_mapping, id);
++}
++
+ static int
+ mlx5_tc_ct_rule_to_tuple(struct mlx5_ct_tuple *tuple, struct flow_rule *rule)
+ {
+@@ -435,7 +457,7 @@ mlx5_tc_ct_entry_del_rule(struct mlx5_tc_ct_priv *ct_priv,
+ 	mlx5_tc_rule_delete(netdev_priv(ct_priv->netdev), zone_rule->rule, attr);
+ 	mlx5e_mod_hdr_detach(ct_priv->dev,
+ 			     ct_priv->mod_hdr_tbl, zone_rule->mh);
+-	mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
++	mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
+ 	kfree(attr);
+ }
+ 
+@@ -638,8 +660,8 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
+ 	if (!meta)
+ 		return -EOPNOTSUPP;
+ 
+-	err = mapping_add(ct_priv->labels_mapping, meta->ct_metadata.labels,
+-			  &attr->ct_attr.ct_labels_id);
++	err = mlx5_get_label_mapping(ct_priv, meta->ct_metadata.labels,
++				     &attr->ct_attr.ct_labels_id);
+ 	if (err)
+ 		return -EOPNOTSUPP;
+ 	if (nat) {
+@@ -675,7 +697,7 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
+ 
+ err_mapping:
+ 	dealloc_mod_hdr_actions(&mod_acts);
+-	mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
++	mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
+ 	return err;
+ }
+ 
+@@ -743,7 +765,7 @@ mlx5_tc_ct_entry_add_rule(struct mlx5_tc_ct_priv *ct_priv,
+ err_rule:
+ 	mlx5e_mod_hdr_detach(ct_priv->dev,
+ 			     ct_priv->mod_hdr_tbl, zone_rule->mh);
+-	mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
++	mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
+ err_mod_hdr:
+ 	kfree(attr);
+ err_attr:
+@@ -1198,7 +1220,7 @@ void mlx5_tc_ct_match_del(struct mlx5_tc_ct_priv *priv, struct mlx5_ct_attr *ct_
+ 	if (!priv || !ct_attr->ct_labels_id)
+ 		return;
+ 
+-	mapping_remove(priv->labels_mapping, ct_attr->ct_labels_id);
++	mlx5_put_label_mapping(priv, ct_attr->ct_labels_id);
+ }
+ 
+ int
+@@ -1276,7 +1298,7 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv,
+ 		ct_labels[1] = key->ct_labels[1] & mask->ct_labels[1];
+ 		ct_labels[2] = key->ct_labels[2] & mask->ct_labels[2];
+ 		ct_labels[3] = key->ct_labels[3] & mask->ct_labels[3];
+-		if (mapping_add(priv->labels_mapping, ct_labels, &ct_attr->ct_labels_id))
++		if (mlx5_get_label_mapping(priv, ct_labels, &ct_attr->ct_labels_id))
+ 			return -EOPNOTSUPP;
+ 		mlx5e_tc_match_to_reg_match(spec, LABELS_TO_REG, ct_attr->ct_labels_id,
+ 					    MLX5_CT_LABELS_MASK);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index bcd05457647e2..986f0d86e94dc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -744,11 +744,11 @@ static int get_fec_supported_advertised(struct mlx5_core_dev *dev,
+ 	return 0;
+ }
+ 
+-static void ptys2ethtool_supported_advertised_port(struct ethtool_link_ksettings *link_ksettings,
+-						   u32 eth_proto_cap,
+-						   u8 connector_type, bool ext)
++static void ptys2ethtool_supported_advertised_port(struct mlx5_core_dev *mdev,
++						   struct ethtool_link_ksettings *link_ksettings,
++						   u32 eth_proto_cap, u8 connector_type)
+ {
+-	if ((!connector_type && !ext) || connector_type >= MLX5E_CONNECTOR_TYPE_NUMBER) {
++	if (!MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type)) {
+ 		if (eth_proto_cap & (MLX5E_PROT_MASK(MLX5E_10GBASE_CR)
+ 				   | MLX5E_PROT_MASK(MLX5E_10GBASE_SR)
+ 				   | MLX5E_PROT_MASK(MLX5E_40GBASE_CR4)
+@@ -884,9 +884,9 @@ static int ptys2connector_type[MLX5E_CONNECTOR_TYPE_NUMBER] = {
+ 		[MLX5E_PORT_OTHER]              = PORT_OTHER,
+ 	};
+ 
+-static u8 get_connector_port(u32 eth_proto, u8 connector_type, bool ext)
++static u8 get_connector_port(struct mlx5_core_dev *mdev, u32 eth_proto, u8 connector_type)
+ {
+-	if ((connector_type || ext) && connector_type < MLX5E_CONNECTOR_TYPE_NUMBER)
++	if (MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type))
+ 		return ptys2connector_type[connector_type];
+ 
+ 	if (eth_proto &
+@@ -987,11 +987,11 @@ int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv,
+ 			 data_rate_oper, link_ksettings);
+ 
+ 	eth_proto_oper = eth_proto_oper ? eth_proto_oper : eth_proto_cap;
+-
+-	link_ksettings->base.port = get_connector_port(eth_proto_oper,
+-						       connector_type, ext);
+-	ptys2ethtool_supported_advertised_port(link_ksettings, eth_proto_admin,
+-					       connector_type, ext);
++	connector_type = connector_type < MLX5E_CONNECTOR_TYPE_NUMBER ?
++			 connector_type : MLX5E_PORT_UNKNOWN;
++	link_ksettings->base.port = get_connector_port(mdev, eth_proto_oper, connector_type);
++	ptys2ethtool_supported_advertised_port(mdev, link_ksettings, eth_proto_admin,
++					       connector_type);
+ 	get_lp_advertising(mdev, eth_proto_lp, link_ksettings);
+ 
+ 	if (an_status == MLX5_AN_COMPLETE)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+index 8ebfe782f95e5..ccd53a7a2b801 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+@@ -926,13 +926,24 @@ void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev)
+ 	mutex_unlock(&table->lock);
+ }
+ 
++#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
++#define MLX5_MAX_ASYNC_EQS 4
++#else
++#define MLX5_MAX_ASYNC_EQS 3
++#endif
++
+ int mlx5_eq_table_create(struct mlx5_core_dev *dev)
+ {
+ 	struct mlx5_eq_table *eq_table = dev->priv.eq_table;
++	int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
++		      MLX5_CAP_GEN(dev, max_num_eqs) :
++		      1 << MLX5_CAP_GEN(dev, log_max_eq);
+ 	int err;
+ 
+ 	eq_table->num_comp_eqs =
+-		mlx5_irq_get_num_comp(eq_table->irq_table);
++		min_t(int,
++		      mlx5_irq_get_num_comp(eq_table->irq_table),
++		      num_eqs - MLX5_MAX_ASYNC_EQS);
+ 
+ 	err = create_async_eqs(dev);
+ 	if (err) {
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+index 74b3959b36d4d..3e7576e671df8 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+@@ -20,6 +20,7 @@
+ #include <net/red.h>
+ #include <net/vxlan.h>
+ #include <net/flow_offload.h>
++#include <net/inet_ecn.h>
+ 
+ #include "port.h"
+ #include "core.h"
+@@ -345,6 +346,20 @@ struct mlxsw_sp_port_type_speed_ops {
+ 	u32 (*ptys_proto_cap_masked_get)(u32 eth_proto_cap);
+ };
+ 
++static inline u8 mlxsw_sp_tunnel_ecn_decap(u8 outer_ecn, u8 inner_ecn,
++					   bool *trap_en)
++{
++	bool set_ce = false;
++
++	*trap_en = !!__INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce);
++	if (set_ce)
++		return INET_ECN_CE;
++	else if (outer_ecn == INET_ECN_ECT_1 && inner_ecn == INET_ECN_ECT_0)
++		return INET_ECN_ECT_1;
++	else
++		return inner_ecn;
++}
++
+ static inline struct net_device *
+ mlxsw_sp_bridge_vxlan_dev_find(struct net_device *br_dev)
+ {
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
+index a8525992528f5..3262a2c15ea75 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
+@@ -371,12 +371,11 @@ static int mlxsw_sp_ipip_ecn_decap_init_one(struct mlxsw_sp *mlxsw_sp,
+ 					    u8 inner_ecn, u8 outer_ecn)
+ {
+ 	char tidem_pl[MLXSW_REG_TIDEM_LEN];
+-	bool trap_en, set_ce = false;
+ 	u8 new_inner_ecn;
++	bool trap_en;
+ 
+-	trap_en = __INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce);
+-	new_inner_ecn = set_ce ? INET_ECN_CE : inner_ecn;
+-
++	new_inner_ecn = mlxsw_sp_tunnel_ecn_decap(outer_ecn, inner_ecn,
++						  &trap_en);
+ 	mlxsw_reg_tidem_pack(tidem_pl, outer_ecn, inner_ecn, new_inner_ecn,
+ 			     trap_en, trap_en ? MLXSW_TRAP_ID_DECAP_ECN0 : 0);
+ 	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(tidem), tidem_pl);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c
+index 54d3e7dcd303b..a2d1b95d1f58b 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c
+@@ -909,12 +909,11 @@ static int __mlxsw_sp_nve_ecn_decap_init(struct mlxsw_sp *mlxsw_sp,
+ 					 u8 inner_ecn, u8 outer_ecn)
+ {
+ 	char tndem_pl[MLXSW_REG_TNDEM_LEN];
+-	bool trap_en, set_ce = false;
+ 	u8 new_inner_ecn;
++	bool trap_en;
+ 
+-	trap_en = !!__INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce);
+-	new_inner_ecn = set_ce ? INET_ECN_CE : inner_ecn;
+-
++	new_inner_ecn = mlxsw_sp_tunnel_ecn_decap(outer_ecn, inner_ecn,
++						  &trap_en);
+ 	mlxsw_reg_tndem_pack(tndem_pl, outer_ecn, inner_ecn, new_inner_ecn,
+ 			     trap_en, trap_en ? MLXSW_TRAP_ID_DECAP_ECN0 : 0);
+ 	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(tndem), tndem_pl);
+diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+index 1634ca6d4a8f0..c84c8bf2bc20e 100644
+--- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
++++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+@@ -2897,7 +2897,7 @@ static netdev_tx_t myri10ge_sw_tso(struct sk_buff *skb,
+ 			dev_kfree_skb_any(curr);
+ 			if (segs != NULL) {
+ 				curr = segs;
+-				segs = segs->next;
++				segs = next;
+ 				curr->next = NULL;
+ 				dev_kfree_skb_any(segs);
+ 			}
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c b/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
+index 0e2db6ea79e96..2ec62c8d86e1c 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
++++ b/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
+@@ -454,6 +454,7 @@ void nfp_bpf_ctrl_msg_rx(struct nfp_app *app, struct sk_buff *skb)
+ 			dev_consume_skb_any(skb);
+ 		else
+ 			dev_kfree_skb_any(skb);
++		return;
+ 	}
+ 
+ 	nfp_ccm_rx(&bpf->ccm, skb);
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
+index caf12eec99459..56833a41f3d27 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
++++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
+@@ -190,6 +190,7 @@ struct nfp_fl_internal_ports {
+  * @qos_rate_limiters:	Current active qos rate limiters
+  * @qos_stats_lock:	Lock on qos stats updates
+  * @pre_tun_rule_cnt:	Number of pre-tunnel rules offloaded
++ * @merge_table:	Hash table to store merged flows
+  */
+ struct nfp_flower_priv {
+ 	struct nfp_app *app;
+@@ -223,6 +224,7 @@ struct nfp_flower_priv {
+ 	unsigned int qos_rate_limiters;
+ 	spinlock_t qos_stats_lock; /* Protect the qos stats */
+ 	int pre_tun_rule_cnt;
++	struct rhashtable merge_table;
+ };
+ 
+ /**
+@@ -350,6 +352,12 @@ struct nfp_fl_payload_link {
+ };
+ 
+ extern const struct rhashtable_params nfp_flower_table_params;
++extern const struct rhashtable_params merge_table_params;
++
++struct nfp_merge_info {
++	u64 parent_ctx;
++	struct rhash_head ht_node;
++};
+ 
+ struct nfp_fl_stats_frame {
+ 	__be32 stats_con_id;
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/metadata.c b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
+index aa06fcb38f8b9..327bb56b3ef56 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/metadata.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
+@@ -490,6 +490,12 @@ const struct rhashtable_params nfp_flower_table_params = {
+ 	.automatic_shrinking	= true,
+ };
+ 
++const struct rhashtable_params merge_table_params = {
++	.key_offset	= offsetof(struct nfp_merge_info, parent_ctx),
++	.head_offset	= offsetof(struct nfp_merge_info, ht_node),
++	.key_len	= sizeof(u64),
++};
++
+ int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
+ 			     unsigned int host_num_mems)
+ {
+@@ -506,6 +512,10 @@ int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
+ 	if (err)
+ 		goto err_free_flow_table;
+ 
++	err = rhashtable_init(&priv->merge_table, &merge_table_params);
++	if (err)
++		goto err_free_stats_ctx_table;
++
+ 	get_random_bytes(&priv->mask_id_seed, sizeof(priv->mask_id_seed));
+ 
+ 	/* Init ring buffer and unallocated mask_ids. */
+@@ -513,7 +523,7 @@ int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
+ 		kmalloc_array(NFP_FLOWER_MASK_ENTRY_RS,
+ 			      NFP_FLOWER_MASK_ELEMENT_RS, GFP_KERNEL);
+ 	if (!priv->mask_ids.mask_id_free_list.buf)
+-		goto err_free_stats_ctx_table;
++		goto err_free_merge_table;
+ 
+ 	priv->mask_ids.init_unallocated = NFP_FLOWER_MASK_ENTRY_RS - 1;
+ 
+@@ -550,6 +560,8 @@ err_free_last_used:
+ 	kfree(priv->mask_ids.last_used);
+ err_free_mask_id:
+ 	kfree(priv->mask_ids.mask_id_free_list.buf);
++err_free_merge_table:
++	rhashtable_destroy(&priv->merge_table);
+ err_free_stats_ctx_table:
+ 	rhashtable_destroy(&priv->stats_ctx_table);
+ err_free_flow_table:
+@@ -568,6 +580,8 @@ void nfp_flower_metadata_cleanup(struct nfp_app *app)
+ 				    nfp_check_rhashtable_empty, NULL);
+ 	rhashtable_free_and_destroy(&priv->stats_ctx_table,
+ 				    nfp_check_rhashtable_empty, NULL);
++	rhashtable_free_and_destroy(&priv->merge_table,
++				    nfp_check_rhashtable_empty, NULL);
+ 	kvfree(priv->stats);
+ 	kfree(priv->mask_ids.mask_id_free_list.buf);
+ 	kfree(priv->mask_ids.last_used);
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+index d72225d64a75d..e95969c462e46 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+@@ -1009,6 +1009,8 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
+ 	struct netlink_ext_ack *extack = NULL;
+ 	struct nfp_fl_payload *merge_flow;
+ 	struct nfp_fl_key_ls merge_key_ls;
++	struct nfp_merge_info *merge_info;
++	u64 parent_ctx = 0;
+ 	int err;
+ 
+ 	ASSERT_RTNL();
+@@ -1019,6 +1021,15 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
+ 	    nfp_flower_is_merge_flow(sub_flow2))
+ 		return -EINVAL;
+ 
++	/* check if the two flows are already merged */
++	parent_ctx = (u64)(be32_to_cpu(sub_flow1->meta.host_ctx_id)) << 32;
++	parent_ctx |= (u64)(be32_to_cpu(sub_flow2->meta.host_ctx_id));
++	if (rhashtable_lookup_fast(&priv->merge_table,
++				   &parent_ctx, merge_table_params)) {
++		nfp_flower_cmsg_warn(app, "The two flows are already merged.\n");
++		return 0;
++	}
++
+ 	err = nfp_flower_can_merge(sub_flow1, sub_flow2);
+ 	if (err)
+ 		return err;
+@@ -1060,16 +1071,33 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
+ 	if (err)
+ 		goto err_release_metadata;
+ 
++	merge_info = kmalloc(sizeof(*merge_info), GFP_KERNEL);
++	if (!merge_info) {
++		err = -ENOMEM;
++		goto err_remove_rhash;
++	}
++	merge_info->parent_ctx = parent_ctx;
++	err = rhashtable_insert_fast(&priv->merge_table, &merge_info->ht_node,
++				     merge_table_params);
++	if (err)
++		goto err_destroy_merge_info;
++
+ 	err = nfp_flower_xmit_flow(app, merge_flow,
+ 				   NFP_FLOWER_CMSG_TYPE_FLOW_MOD);
+ 	if (err)
+-		goto err_remove_rhash;
++		goto err_remove_merge_info;
+ 
+ 	merge_flow->in_hw = true;
+ 	sub_flow1->in_hw = false;
+ 
+ 	return 0;
+ 
++err_remove_merge_info:
++	WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table,
++					    &merge_info->ht_node,
++					    merge_table_params));
++err_destroy_merge_info:
++	kfree(merge_info);
+ err_remove_rhash:
+ 	WARN_ON_ONCE(rhashtable_remove_fast(&priv->flow_table,
+ 					    &merge_flow->fl_node,
+@@ -1359,7 +1387,9 @@ nfp_flower_remove_merge_flow(struct nfp_app *app,
+ {
+ 	struct nfp_flower_priv *priv = app->priv;
+ 	struct nfp_fl_payload_link *link, *temp;
++	struct nfp_merge_info *merge_info;
+ 	struct nfp_fl_payload *origin;
++	u64 parent_ctx = 0;
+ 	bool mod = false;
+ 	int err;
+ 
+@@ -1396,8 +1426,22 @@ nfp_flower_remove_merge_flow(struct nfp_app *app,
+ err_free_links:
+ 	/* Clean any links connected with the merged flow. */
+ 	list_for_each_entry_safe(link, temp, &merge_flow->linked_flows,
+-				 merge_flow.list)
++				 merge_flow.list) {
++		u32 ctx_id = be32_to_cpu(link->sub_flow.flow->meta.host_ctx_id);
++
++		parent_ctx = (parent_ctx << 32) | (u64)(ctx_id);
+ 		nfp_flower_unlink_flow(link);
++	}
++
++	merge_info = rhashtable_lookup_fast(&priv->merge_table,
++					    &parent_ctx,
++					    merge_table_params);
++	if (merge_info) {
++		WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table,
++						    &merge_info->ht_node,
++						    merge_table_params));
++		kfree(merge_info);
++	}
+ 
+ 	kfree(merge_flow->action_data);
+ 	kfree(merge_flow->mask_data);
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 1426bfc009bc3..abd37f26af682 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -907,8 +907,16 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 
+ 		info = skb_tunnel_info(skb);
+ 		if (info) {
+-			info->key.u.ipv4.dst = fl4.saddr;
+-			info->key.u.ipv4.src = fl4.daddr;
++			struct ip_tunnel_info *unclone;
++
++			unclone = skb_tunnel_info_unclone(skb);
++			if (unlikely(!unclone)) {
++				dst_release(&rt->dst);
++				return -ENOMEM;
++			}
++
++			unclone->key.u.ipv4.dst = fl4.saddr;
++			unclone->key.u.ipv4.src = fl4.daddr;
+ 		}
+ 
+ 		if (!pskb_may_pull(skb, ETH_HLEN)) {
+@@ -992,8 +1000,16 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 		struct ip_tunnel_info *info = skb_tunnel_info(skb);
+ 
+ 		if (info) {
+-			info->key.u.ipv6.dst = fl6.saddr;
+-			info->key.u.ipv6.src = fl6.daddr;
++			struct ip_tunnel_info *unclone;
++
++			unclone = skb_tunnel_info_unclone(skb);
++			if (unlikely(!unclone)) {
++				dst_release(dst);
++				return -ENOMEM;
++			}
++
++			unclone->key.u.ipv6.dst = fl6.saddr;
++			unclone->key.u.ipv6.src = fl6.daddr;
+ 		}
+ 
+ 		if (!pskb_may_pull(skb, ETH_HLEN)) {
+diff --git a/drivers/net/ieee802154/atusb.c b/drivers/net/ieee802154/atusb.c
+index 0dd0ba915ab97..23ee0b14cbfa1 100644
+--- a/drivers/net/ieee802154/atusb.c
++++ b/drivers/net/ieee802154/atusb.c
+@@ -365,6 +365,7 @@ static int atusb_alloc_urbs(struct atusb *atusb, int n)
+ 			return -ENOMEM;
+ 		}
+ 		usb_anchor_urb(urb, &atusb->idle_urbs);
++		usb_free_urb(urb);
+ 		n--;
+ 	}
+ 	return 0;
+diff --git a/drivers/net/phy/bcm-phy-lib.c b/drivers/net/phy/bcm-phy-lib.c
+index ef6825b303233..b72e11ca974f7 100644
+--- a/drivers/net/phy/bcm-phy-lib.c
++++ b/drivers/net/phy/bcm-phy-lib.c
+@@ -328,7 +328,7 @@ EXPORT_SYMBOL_GPL(bcm_phy_enable_apd);
+ 
+ int bcm_phy_set_eee(struct phy_device *phydev, bool enable)
+ {
+-	int val;
++	int val, mask = 0;
+ 
+ 	/* Enable EEE at PHY level */
+ 	val = phy_read_mmd(phydev, MDIO_MMD_AN, BRCM_CL45VEN_EEE_CONTROL);
+@@ -347,10 +347,17 @@ int bcm_phy_set_eee(struct phy_device *phydev, bool enable)
+ 	if (val < 0)
+ 		return val;
+ 
++	if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
++			      phydev->supported))
++		mask |= MDIO_EEE_1000T;
++	if (linkmode_test_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT,
++			      phydev->supported))
++		mask |= MDIO_EEE_100TX;
++
+ 	if (enable)
+-		val |= (MDIO_EEE_100TX | MDIO_EEE_1000T);
++		val |= mask;
+ 	else
+-		val &= ~(MDIO_EEE_100TX | MDIO_EEE_1000T);
++		val &= ~mask;
+ 
+ 	phy_write_mmd(phydev, MDIO_MMD_AN, BCM_CL45VEN_EEE_ADV, (u32)val);
+ 
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index accde25a66a01..c671d8e257741 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -69,6 +69,14 @@
+ #include <linux/bpf.h>
+ #include <linux/bpf_trace.h>
+ #include <linux/mutex.h>
++#include <linux/ieee802154.h>
++#include <linux/if_ltalk.h>
++#include <uapi/linux/if_fddi.h>
++#include <uapi/linux/if_hippi.h>
++#include <uapi/linux/if_fc.h>
++#include <net/ax25.h>
++#include <net/rose.h>
++#include <net/6lowpan.h>
+ 
+ #include <linux/uaccess.h>
+ #include <linux/proc_fs.h>
+@@ -2978,6 +2986,45 @@ static int tun_set_ebpf(struct tun_struct *tun, struct tun_prog __rcu **prog_p,
+ 	return __tun_set_ebpf(tun, prog_p, prog);
+ }
+ 
++/* Return correct value for tun->dev->addr_len based on tun->dev->type. */
++static unsigned char tun_get_addr_len(unsigned short type)
++{
++	switch (type) {
++	case ARPHRD_IP6GRE:
++	case ARPHRD_TUNNEL6:
++		return sizeof(struct in6_addr);
++	case ARPHRD_IPGRE:
++	case ARPHRD_TUNNEL:
++	case ARPHRD_SIT:
++		return 4;
++	case ARPHRD_ETHER:
++		return ETH_ALEN;
++	case ARPHRD_IEEE802154:
++	case ARPHRD_IEEE802154_MONITOR:
++		return IEEE802154_EXTENDED_ADDR_LEN;
++	case ARPHRD_PHONET_PIPE:
++	case ARPHRD_PPP:
++	case ARPHRD_NONE:
++		return 0;
++	case ARPHRD_6LOWPAN:
++		return EUI64_ADDR_LEN;
++	case ARPHRD_FDDI:
++		return FDDI_K_ALEN;
++	case ARPHRD_HIPPI:
++		return HIPPI_ALEN;
++	case ARPHRD_IEEE802:
++		return FC_ALEN;
++	case ARPHRD_ROSE:
++		return ROSE_ADDR_LEN;
++	case ARPHRD_NETROM:
++		return AX25_ADDR_LEN;
++	case ARPHRD_LOCALTLK:
++		return LTALK_ALEN;
++	default:
++		return 0;
++	}
++}
++
+ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
+ 			    unsigned long arg, int ifreq_len)
+ {
+@@ -3133,6 +3180,7 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
+ 			ret = -EBUSY;
+ 		} else {
+ 			tun->dev->type = (int) arg;
++			tun->dev->addr_len = tun_get_addr_len(tun->dev->type);
+ 			netif_info(tun, drv, tun->dev, "linktype set to %d\n",
+ 				   tun->dev->type);
+ 			ret = 0;
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index 2bb28db894320..d18642a8144cf 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -611,7 +611,7 @@ static struct hso_serial *get_serial_by_index(unsigned index)
+ 	return serial;
+ }
+ 
+-static int get_free_serial_index(void)
++static int obtain_minor(struct hso_serial *serial)
+ {
+ 	int index;
+ 	unsigned long flags;
+@@ -619,8 +619,10 @@ static int get_free_serial_index(void)
+ 	spin_lock_irqsave(&serial_table_lock, flags);
+ 	for (index = 0; index < HSO_SERIAL_TTY_MINORS; index++) {
+ 		if (serial_table[index] == NULL) {
++			serial_table[index] = serial->parent;
++			serial->minor = index;
+ 			spin_unlock_irqrestore(&serial_table_lock, flags);
+-			return index;
++			return 0;
+ 		}
+ 	}
+ 	spin_unlock_irqrestore(&serial_table_lock, flags);
+@@ -629,15 +631,12 @@ static int get_free_serial_index(void)
+ 	return -1;
+ }
+ 
+-static void set_serial_by_index(unsigned index, struct hso_serial *serial)
++static void release_minor(struct hso_serial *serial)
+ {
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&serial_table_lock, flags);
+-	if (serial)
+-		serial_table[index] = serial->parent;
+-	else
+-		serial_table[index] = NULL;
++	serial_table[serial->minor] = NULL;
+ 	spin_unlock_irqrestore(&serial_table_lock, flags);
+ }
+ 
+@@ -2230,6 +2229,7 @@ static int hso_stop_serial_device(struct hso_device *hso_dev)
+ static void hso_serial_tty_unregister(struct hso_serial *serial)
+ {
+ 	tty_unregister_device(tty_drv, serial->minor);
++	release_minor(serial);
+ }
+ 
+ static void hso_serial_common_free(struct hso_serial *serial)
+@@ -2253,24 +2253,22 @@ static void hso_serial_common_free(struct hso_serial *serial)
+ static int hso_serial_common_create(struct hso_serial *serial, int num_urbs,
+ 				    int rx_size, int tx_size)
+ {
+-	int minor;
+ 	int i;
+ 
+ 	tty_port_init(&serial->port);
+ 
+-	minor = get_free_serial_index();
+-	if (minor < 0)
++	if (obtain_minor(serial))
+ 		goto exit2;
+ 
+ 	/* register our minor number */
+ 	serial->parent->dev = tty_port_register_device_attr(&serial->port,
+-			tty_drv, minor, &serial->parent->interface->dev,
++			tty_drv, serial->minor, &serial->parent->interface->dev,
+ 			serial->parent, hso_serial_dev_groups);
+-	if (IS_ERR(serial->parent->dev))
++	if (IS_ERR(serial->parent->dev)) {
++		release_minor(serial);
+ 		goto exit2;
++	}
+ 
+-	/* fill in specific data for later use */
+-	serial->minor = minor;
+ 	serial->magic = HSO_SERIAL_MAGIC;
+ 	spin_lock_init(&serial->serial_lock);
+ 	serial->num_rx_urbs = num_urbs;
+@@ -2667,9 +2665,6 @@ static struct hso_device *hso_create_bulk_serial_device(
+ 
+ 	serial->write_data = hso_std_serial_write_data;
+ 
+-	/* and record this serial */
+-	set_serial_by_index(serial->minor, serial);
+-
+ 	/* setup the proc dirs and files if needed */
+ 	hso_log_port(hso_dev);
+ 
+@@ -2726,9 +2721,6 @@ struct hso_device *hso_create_mux_serial_device(struct usb_interface *interface,
+ 	serial->shared_int->ref_count++;
+ 	mutex_unlock(&serial->shared_int->shared_int_lock);
+ 
+-	/* and record this serial */
+-	set_serial_by_index(serial->minor, serial);
+-
+ 	/* setup the proc dirs and files if needed */
+ 	hso_log_port(hso_dev);
+ 
+@@ -3113,7 +3105,6 @@ static void hso_free_interface(struct usb_interface *interface)
+ 			cancel_work_sync(&serial_table[i]->async_get_intf);
+ 			hso_serial_tty_unregister(serial);
+ 			kref_put(&serial_table[i]->ref, hso_serial_ref_free);
+-			set_serial_by_index(i, NULL);
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 50cb8f045a1e5..d3b698d9e2e6a 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -2724,12 +2724,17 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ 			goto tx_error;
+ 		} else if (err) {
+ 			if (info) {
++				struct ip_tunnel_info *unclone;
+ 				struct in_addr src, dst;
+ 
++				unclone = skb_tunnel_info_unclone(skb);
++				if (unlikely(!unclone))
++					goto tx_error;
++
+ 				src = remote_ip.sin.sin_addr;
+ 				dst = local_ip.sin.sin_addr;
+-				info->key.u.ipv4.src = src.s_addr;
+-				info->key.u.ipv4.dst = dst.s_addr;
++				unclone->key.u.ipv4.src = src.s_addr;
++				unclone->key.u.ipv4.dst = dst.s_addr;
+ 			}
+ 			vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
+ 			dst_release(ndst);
+@@ -2780,12 +2785,17 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ 			goto tx_error;
+ 		} else if (err) {
+ 			if (info) {
++				struct ip_tunnel_info *unclone;
+ 				struct in6_addr src, dst;
+ 
++				unclone = skb_tunnel_info_unclone(skb);
++				if (unlikely(!unclone))
++					goto tx_error;
++
+ 				src = remote_ip.sin6.sin6_addr;
+ 				dst = local_ip.sin6.sin6_addr;
+-				info->key.u.ipv6.src = src;
+-				info->key.u.ipv6.dst = dst;
++				unclone->key.u.ipv6.src = src;
++				unclone->key.u.ipv6.dst = dst;
+ 			}
+ 
+ 			vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
+diff --git a/drivers/net/wan/hdlc_fr.c b/drivers/net/wan/hdlc_fr.c
+index 409e5a7ad8e26..857912ae84d71 100644
+--- a/drivers/net/wan/hdlc_fr.c
++++ b/drivers/net/wan/hdlc_fr.c
+@@ -415,7 +415,7 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 		if (pad > 0) { /* Pad the frame with zeros */
+ 			if (__skb_pad(skb, pad, false))
+-				goto drop;
++				goto out;
+ 			skb_put(skb, pad);
+ 		}
+ 	}
+@@ -448,8 +448,9 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	return NETDEV_TX_OK;
+ 
+ drop:
+-	dev->stats.tx_dropped++;
+ 	kfree_skb(skb);
++out:
++	dev->stats.tx_dropped++;
+ 	return NETDEV_TX_OK;
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+index 6d19de3058d21..cbde21e772b17 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -285,7 +285,7 @@ enum iwl_reg_capa_flags_v2 {
+ 	REG_CAPA_V2_MCS_9_ALLOWED	= BIT(6),
+ 	REG_CAPA_V2_WEATHER_DISABLED	= BIT(7),
+ 	REG_CAPA_V2_40MHZ_ALLOWED	= BIT(8),
+-	REG_CAPA_V2_11AX_DISABLED	= BIT(13),
++	REG_CAPA_V2_11AX_DISABLED	= BIT(10),
+ };
+ 
+ /*
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+index 81ef4fc8d7831..ec1d6025081de 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+@@ -5,7 +5,7 @@
+  *
+  * GPL LICENSE SUMMARY
+  *
+- * Copyright(c) 2018 - 2020 Intel Corporation
++ * Copyright(c) 2018 - 2021 Intel Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of version 2 of the GNU General Public License as
+@@ -122,15 +122,6 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 				 const struct fw_img *fw)
+ {
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+-	u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ |
+-		      u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
+-				      CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) |
+-		      u32_encode_bits(250,
+-				      CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) |
+-		      CSR_LTR_LONG_VAL_AD_SNOOP_REQ |
+-		      u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
+-				      CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) |
+-		      u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL);
+ 	struct iwl_context_info_gen3 *ctxt_info_gen3;
+ 	struct iwl_prph_scratch *prph_scratch;
+ 	struct iwl_prph_scratch_ctrl_cfg *prph_sc_ctrl;
+@@ -264,26 +255,6 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 	iwl_set_bit(trans, CSR_CTXT_INFO_BOOT_CTRL,
+ 		    CSR_AUTO_FUNC_BOOT_ENA);
+ 
+-	/*
+-	 * To workaround hardware latency issues during the boot process,
+-	 * initialize the LTR to ~250 usec (see ltr_val above).
+-	 * The firmware initializes this again later (to a smaller value).
+-	 */
+-	if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 ||
+-	     trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) &&
+-	    !trans->trans_cfg->integrated) {
+-		iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val);
+-	} else if (trans->trans_cfg->integrated &&
+-		   trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) {
+-		iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL);
+-		iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val);
+-	}
+-
+-	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
+-		iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1);
+-	else
+-		iwl_set_bit(trans, CSR_GP_CNTRL, CSR_AUTO_FUNC_INIT);
+-
+ 	return 0;
+ 
+ err_free_ctxt_info:
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
+index 13fe9c00d7e8f..f65d363cc8922 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
+@@ -6,7 +6,7 @@
+  * GPL LICENSE SUMMARY
+  *
+  * Copyright(c) 2017 Intel Deutschland GmbH
+- * Copyright(c) 2018 - 2020 Intel Corporation
++ * Copyright(c) 2018 - 2021 Intel Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of version 2 of the GNU General Public License as
+@@ -288,7 +288,6 @@ int iwl_pcie_ctxt_info_init(struct iwl_trans *trans,
+ 
+ 	/* kick FW self load */
+ 	iwl_write64(trans, CSR_CTXT_INFO_BA, trans_pcie->ctxt_info_dma_addr);
+-	iwl_write_prph(trans, UREG_CPU_INIT_RUN, 1);
+ 
+ 	/* Context info will be released upon alive or failure to get one */
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+index 91ec9379c0611..4c3ca2a376964 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+@@ -281,6 +281,34 @@ void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans, u32 scd_addr)
+ 	mutex_unlock(&trans_pcie->mutex);
+ }
+ 
++static void iwl_pcie_set_ltr(struct iwl_trans *trans)
++{
++	u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ |
++		      u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
++				      CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) |
++		      u32_encode_bits(250,
++				      CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) |
++		      CSR_LTR_LONG_VAL_AD_SNOOP_REQ |
++		      u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
++				      CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) |
++		      u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL);
++
++	/*
++	 * To workaround hardware latency issues during the boot process,
++	 * initialize the LTR to ~250 usec (see ltr_val above).
++	 * The firmware initializes this again later (to a smaller value).
++	 */
++	if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 ||
++	     trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) &&
++	    !trans->trans_cfg->integrated) {
++		iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val);
++	} else if (trans->trans_cfg->integrated &&
++		   trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) {
++		iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL);
++		iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val);
++	}
++}
++
+ int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
+ 				 const struct fw_img *fw, bool run_in_rfkill)
+ {
+@@ -347,6 +375,13 @@ int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
+ 	if (ret)
+ 		goto out;
+ 
++	iwl_pcie_set_ltr(trans);
++
++	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
++		iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1);
++	else
++		iwl_write_prph(trans, UREG_CPU_INIT_RUN, 1);
++
+ 	/* re-check RF-Kill state since we may have missed the interrupt */
+ 	hw_rfkill = iwl_pcie_check_hw_rf_kill(trans);
+ 	if (hw_rfkill && !run_in_rfkill)
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index 408a7b5f06a9e..1d7d24e7094b0 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -1308,7 +1308,16 @@ DEFINE_SIMPLE_PROP(pinctrl7, "pinctrl-7", NULL)
+ DEFINE_SIMPLE_PROP(pinctrl8, "pinctrl-8", NULL)
+ DEFINE_SUFFIX_PROP(regulators, "-supply", NULL)
+ DEFINE_SUFFIX_PROP(gpio, "-gpio", "#gpio-cells")
+-DEFINE_SUFFIX_PROP(gpios, "-gpios", "#gpio-cells")
++
++static struct device_node *parse_gpios(struct device_node *np,
++				       const char *prop_name, int index)
++{
++	if (!strcmp_suffix(prop_name, ",nr-gpios"))
++		return NULL;
++
++	return parse_suffix_prop_cells(np, prop_name, index, "-gpios",
++				       "#gpio-cells");
++}
+ 
+ static struct device_node *parse_iommu_maps(struct device_node *np,
+ 					    const char *prop_name, int index)
+diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
+index ddecf25b5dd40..d7894f178bd4f 100644
+--- a/drivers/ras/cec.c
++++ b/drivers/ras/cec.c
+@@ -309,11 +309,20 @@ static bool sanity_check(struct ce_array *ca)
+ 	return ret;
+ }
+ 
++/**
++ * cec_add_elem - Add an element to the CEC array.
++ * @pfn:	page frame number to insert
++ *
++ * Return values:
++ * - <0:	on error
++ * -  0:	on success
++ * - >0:	when the inserted pfn was offlined
++ */
+ static int cec_add_elem(u64 pfn)
+ {
+ 	struct ce_array *ca = &ce_arr;
++	int count, err, ret = 0;
+ 	unsigned int to = 0;
+-	int count, ret = 0;
+ 
+ 	/*
+ 	 * We can be called very early on the identify_cpu() path where we are
+@@ -330,8 +339,8 @@ static int cec_add_elem(u64 pfn)
+ 	if (ca->n == MAX_ELEMS)
+ 		WARN_ON(!del_lru_elem_unlocked(ca));
+ 
+-	ret = find_elem(ca, pfn, &to);
+-	if (ret < 0) {
++	err = find_elem(ca, pfn, &to);
++	if (err < 0) {
+ 		/*
+ 		 * Shift range [to-end] to make room for one more element.
+ 		 */
+diff --git a/drivers/regulator/bd9571mwv-regulator.c b/drivers/regulator/bd9571mwv-regulator.c
+index e690c2ce5b3c5..25e33028871c0 100644
+--- a/drivers/regulator/bd9571mwv-regulator.c
++++ b/drivers/regulator/bd9571mwv-regulator.c
+@@ -124,7 +124,7 @@ static const struct regulator_ops vid_ops = {
+ 
+ static const struct regulator_desc regulators[] = {
+ 	BD9571MWV_REG("VD09", "vd09", VD09, avs_ops, 0, 0x7f,
+-		      0x80, 600000, 10000, 0x3c),
++		      0x6f, 600000, 10000, 0x3c),
+ 	BD9571MWV_REG("VD18", "vd18", VD18, vid_ops, BD9571MWV_VD18_VID, 0xf,
+ 		      16, 1625000, 25000, 0),
+ 	BD9571MWV_REG("VD25", "vd25", VD25, vid_ops, BD9571MWV_VD25_VID, 0xf,
+@@ -133,7 +133,7 @@ static const struct regulator_desc regulators[] = {
+ 		      11, 2800000, 100000, 0),
+ 	BD9571MWV_REG("DVFS", "dvfs", DVFS, reg_ops,
+ 		      BD9571MWV_DVFS_MONIVDAC, 0x7f,
+-		      0x80, 600000, 10000, 0x3c),
++		      0x6f, 600000, 10000, 0x3c),
+ };
+ 
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/remoteproc/qcom_pil_info.c b/drivers/remoteproc/qcom_pil_info.c
+index 5521c4437ffab..7c007dd7b2000 100644
+--- a/drivers/remoteproc/qcom_pil_info.c
++++ b/drivers/remoteproc/qcom_pil_info.c
+@@ -56,7 +56,7 @@ static int qcom_pil_info_init(void)
+ 	memset_io(base, 0, resource_size(&imem));
+ 
+ 	_reloc.base = base;
+-	_reloc.num_entries = resource_size(&imem) / PIL_RELOC_ENTRY_SIZE;
++	_reloc.num_entries = (u32)resource_size(&imem) / PIL_RELOC_ENTRY_SIZE;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 95ba1bd16db93..c4705269e39fa 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -223,7 +223,7 @@ static void init_default_table_values(struct pm8001_hba_info *pm8001_ha)
+ 		PM8001_EVENT_LOG_SIZE;
+ 	pm8001_ha->main_cfg_tbl.pm8001_tbl.iop_event_log_option		= 0x01;
+ 	pm8001_ha->main_cfg_tbl.pm8001_tbl.fatal_err_interrupt		= 0x01;
+-	for (i = 0; i < PM8001_MAX_INB_NUM; i++) {
++	for (i = 0; i < pm8001_ha->max_q_num; i++) {
+ 		pm8001_ha->inbnd_q_tbl[i].element_pri_size_cnt	=
+ 			PM8001_MPI_QUEUE | (pm8001_ha->iomb_size << 16) | (0x00<<30);
+ 		pm8001_ha->inbnd_q_tbl[i].upper_base_addr	=
+@@ -249,7 +249,7 @@ static void init_default_table_values(struct pm8001_hba_info *pm8001_ha)
+ 		pm8001_ha->inbnd_q_tbl[i].producer_idx		= 0;
+ 		pm8001_ha->inbnd_q_tbl[i].consumer_index	= 0;
+ 	}
+-	for (i = 0; i < PM8001_MAX_OUTB_NUM; i++) {
++	for (i = 0; i < pm8001_ha->max_q_num; i++) {
+ 		pm8001_ha->outbnd_q_tbl[i].element_size_cnt	=
+ 			PM8001_MPI_QUEUE | (pm8001_ha->iomb_size << 16) | (0x01<<30);
+ 		pm8001_ha->outbnd_q_tbl[i].upper_base_addr	=
+@@ -671,9 +671,9 @@ static int pm8001_chip_init(struct pm8001_hba_info *pm8001_ha)
+ 	read_outbnd_queue_table(pm8001_ha);
+ 	/* update main config table ,inbound table and outbound table */
+ 	update_main_config_table(pm8001_ha);
+-	for (i = 0; i < PM8001_MAX_INB_NUM; i++)
++	for (i = 0; i < pm8001_ha->max_q_num; i++)
+ 		update_inbnd_queue_table(pm8001_ha, i);
+-	for (i = 0; i < PM8001_MAX_OUTB_NUM; i++)
++	for (i = 0; i < pm8001_ha->max_q_num; i++)
+ 		update_outbnd_queue_table(pm8001_ha, i);
+ 	/* 8081 controller donot require these operations */
+ 	if (deviceid != 0x8081 && deviceid != 0x0042) {
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 97d9d5d99adcc..4215d9a8e5de4 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -6256,37 +6256,34 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
+ 	DECLARE_COMPLETION_ONSTACK(wait);
+ 	struct request *req;
+ 	unsigned long flags;
+-	int free_slot, task_tag, err;
++	int task_tag, err;
+ 
+ 	/*
+-	 * Get free slot, sleep if slots are unavailable.
+-	 * Even though we use wait_event() which sleeps indefinitely,
+-	 * the maximum wait time is bounded by %TM_CMD_TIMEOUT.
++	 * blk_get_request() is used here only to get a free tag.
+ 	 */
+ 	req = blk_get_request(q, REQ_OP_DRV_OUT, 0);
+ 	if (IS_ERR(req))
+ 		return PTR_ERR(req);
+ 
+ 	req->end_io_data = &wait;
+-	free_slot = req->tag;
+-	WARN_ON_ONCE(free_slot < 0 || free_slot >= hba->nutmrs);
+ 	ufshcd_hold(hba, false);
+ 
+ 	spin_lock_irqsave(host->host_lock, flags);
+-	task_tag = hba->nutrs + free_slot;
++	blk_mq_start_request(req);
+ 
++	task_tag = req->tag;
+ 	treq->req_header.dword_0 |= cpu_to_be32(task_tag);
+ 
+-	memcpy(hba->utmrdl_base_addr + free_slot, treq, sizeof(*treq));
+-	ufshcd_vops_setup_task_mgmt(hba, free_slot, tm_function);
++	memcpy(hba->utmrdl_base_addr + task_tag, treq, sizeof(*treq));
++	ufshcd_vops_setup_task_mgmt(hba, task_tag, tm_function);
+ 
+ 	/* send command to the controller */
+-	__set_bit(free_slot, &hba->outstanding_tasks);
++	__set_bit(task_tag, &hba->outstanding_tasks);
+ 
+ 	/* Make sure descriptors are ready before ringing the task doorbell */
+ 	wmb();
+ 
+-	ufshcd_writel(hba, 1 << free_slot, REG_UTP_TASK_REQ_DOOR_BELL);
++	ufshcd_writel(hba, 1 << task_tag, REG_UTP_TASK_REQ_DOOR_BELL);
+ 	/* Make sure that doorbell is committed immediately */
+ 	wmb();
+ 
+@@ -6306,24 +6303,24 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
+ 		ufshcd_add_tm_upiu_trace(hba, task_tag, "tm_complete_err");
+ 		dev_err(hba->dev, "%s: task management cmd 0x%.2x timed-out\n",
+ 				__func__, tm_function);
+-		if (ufshcd_clear_tm_cmd(hba, free_slot))
+-			dev_WARN(hba->dev, "%s: unable clear tm cmd (slot %d) after timeout\n",
+-					__func__, free_slot);
++		if (ufshcd_clear_tm_cmd(hba, task_tag))
++			dev_WARN(hba->dev, "%s: unable to clear tm cmd (slot %d) after timeout\n",
++					__func__, task_tag);
+ 		err = -ETIMEDOUT;
+ 	} else {
+ 		err = 0;
+-		memcpy(treq, hba->utmrdl_base_addr + free_slot, sizeof(*treq));
++		memcpy(treq, hba->utmrdl_base_addr + task_tag, sizeof(*treq));
+ 
+ 		ufshcd_add_tm_upiu_trace(hba, task_tag, "tm_complete");
+ 	}
+ 
+ 	spin_lock_irqsave(hba->host->host_lock, flags);
+-	__clear_bit(free_slot, &hba->outstanding_tasks);
++	__clear_bit(task_tag, &hba->outstanding_tasks);
+ 	spin_unlock_irqrestore(hba->host->host_lock, flags);
+ 
++	ufshcd_release(hba);
+ 	blk_put_request(req);
+ 
+-	ufshcd_release(hba);
+ 	return err;
+ }
+ 
+diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
+index 9888a70618730..feb97470699d9 100644
+--- a/drivers/soc/fsl/qbman/qman.c
++++ b/drivers/soc/fsl/qbman/qman.c
+@@ -186,7 +186,7 @@ struct qm_eqcr_entry {
+ 	__be32 tag;
+ 	struct qm_fd fd;
+ 	u8 __reserved3[32];
+-} __packed;
++} __packed __aligned(8);
+ #define QM_EQCR_VERB_VBIT		0x80
+ #define QM_EQCR_VERB_CMD_MASK		0x61	/* but only one value; */
+ #define QM_EQCR_VERB_CMD_ENQUEUE	0x01
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 518fac4864cfa..a237f1cf9bd60 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -1166,6 +1166,7 @@ int iscsit_setup_scsi_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
+ 
+ 	target_get_sess_cmd(&cmd->se_cmd, true);
+ 
++	cmd->se_cmd.tag = (__force u32)cmd->init_task_tag;
+ 	cmd->sense_reason = target_cmd_init_cdb(&cmd->se_cmd, hdr->cdb);
+ 	if (cmd->sense_reason) {
+ 		if (cmd->sense_reason == TCM_OUT_OF_RESOURCES) {
+@@ -1180,8 +1181,6 @@ int iscsit_setup_scsi_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
+ 	if (cmd->sense_reason)
+ 		goto attach_cmd;
+ 
+-	/* only used for printks or comparing with ->ref_task_tag */
+-	cmd->se_cmd.tag = (__force u32)cmd->init_task_tag;
+ 	cmd->sense_reason = target_cmd_parse_cdb(&cmd->se_cmd);
+ 	if (cmd->sense_reason)
+ 		goto attach_cmd;
+diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c
+index 620bcf586ee24..c44fad2b9fbbf 100644
+--- a/drivers/thunderbolt/retimer.c
++++ b/drivers/thunderbolt/retimer.c
+@@ -347,7 +347,7 @@ static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status)
+ 	ret = tb_retimer_nvm_add(rt);
+ 	if (ret) {
+ 		dev_err(&rt->dev, "failed to add NVM devices: %d\n", ret);
+-		device_del(&rt->dev);
++		device_unregister(&rt->dev);
+ 		return ret;
+ 	}
+ 
+@@ -406,7 +406,7 @@ static struct tb_retimer *tb_port_find_retimer(struct tb_port *port, u8 index)
+  */
+ int tb_retimer_scan(struct tb_port *port)
+ {
+-	u32 status[TB_MAX_RETIMER_INDEX] = {};
++	u32 status[TB_MAX_RETIMER_INDEX + 1] = {};
+ 	int ret, i, last_idx = 0;
+ 
+ 	if (!port->cap_usb4)
+diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c
+index 8f1de1fbbeedf..d8d3892e5a69a 100644
+--- a/drivers/usb/usbip/stub_dev.c
++++ b/drivers/usb/usbip/stub_dev.c
+@@ -63,6 +63,7 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
+ 
+ 		dev_info(dev, "stub up\n");
+ 
++		mutex_lock(&sdev->ud.sysfs_lock);
+ 		spin_lock_irq(&sdev->ud.lock);
+ 
+ 		if (sdev->ud.status != SDEV_ST_AVAILABLE) {
+@@ -87,13 +88,13 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
+ 		tcp_rx = kthread_create(stub_rx_loop, &sdev->ud, "stub_rx");
+ 		if (IS_ERR(tcp_rx)) {
+ 			sockfd_put(socket);
+-			return -EINVAL;
++			goto unlock_mutex;
+ 		}
+ 		tcp_tx = kthread_create(stub_tx_loop, &sdev->ud, "stub_tx");
+ 		if (IS_ERR(tcp_tx)) {
+ 			kthread_stop(tcp_rx);
+ 			sockfd_put(socket);
+-			return -EINVAL;
++			goto unlock_mutex;
+ 		}
+ 
+ 		/* get task structs now */
+@@ -112,6 +113,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
+ 		wake_up_process(sdev->ud.tcp_rx);
+ 		wake_up_process(sdev->ud.tcp_tx);
+ 
++		mutex_unlock(&sdev->ud.sysfs_lock);
++
+ 	} else {
+ 		dev_info(dev, "stub down\n");
+ 
+@@ -122,6 +125,7 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
+ 		spin_unlock_irq(&sdev->ud.lock);
+ 
+ 		usbip_event_add(&sdev->ud, SDEV_EVENT_DOWN);
++		mutex_unlock(&sdev->ud.sysfs_lock);
+ 	}
+ 
+ 	return count;
+@@ -130,6 +134,8 @@ sock_err:
+ 	sockfd_put(socket);
+ err:
+ 	spin_unlock_irq(&sdev->ud.lock);
++unlock_mutex:
++	mutex_unlock(&sdev->ud.sysfs_lock);
+ 	return -EINVAL;
+ }
+ static DEVICE_ATTR_WO(usbip_sockfd);
+@@ -270,6 +276,7 @@ static struct stub_device *stub_device_alloc(struct usb_device *udev)
+ 	sdev->ud.side		= USBIP_STUB;
+ 	sdev->ud.status		= SDEV_ST_AVAILABLE;
+ 	spin_lock_init(&sdev->ud.lock);
++	mutex_init(&sdev->ud.sysfs_lock);
+ 	sdev->ud.tcp_socket	= NULL;
+ 	sdev->ud.sockfd		= -1;
+ 
+diff --git a/drivers/usb/usbip/usbip_common.h b/drivers/usb/usbip/usbip_common.h
+index 8be857a4fa132..a7e6ce96f62c7 100644
+--- a/drivers/usb/usbip/usbip_common.h
++++ b/drivers/usb/usbip/usbip_common.h
+@@ -263,6 +263,9 @@ struct usbip_device {
+ 	/* lock for status */
+ 	spinlock_t lock;
+ 
++	/* mutex for synchronizing sysfs store paths */
++	struct mutex sysfs_lock;
++
+ 	int sockfd;
+ 	struct socket *tcp_socket;
+ 
+diff --git a/drivers/usb/usbip/usbip_event.c b/drivers/usb/usbip/usbip_event.c
+index 5d88917c96314..086ca76dd0531 100644
+--- a/drivers/usb/usbip/usbip_event.c
++++ b/drivers/usb/usbip/usbip_event.c
+@@ -70,6 +70,7 @@ static void event_handler(struct work_struct *work)
+ 	while ((ud = get_event()) != NULL) {
+ 		usbip_dbg_eh("pending event %lx\n", ud->event);
+ 
++		mutex_lock(&ud->sysfs_lock);
+ 		/*
+ 		 * NOTE: shutdown must come first.
+ 		 * Shutdown the device.
+@@ -90,6 +91,7 @@ static void event_handler(struct work_struct *work)
+ 			ud->eh_ops.unusable(ud);
+ 			unset_event(ud, USBIP_EH_UNUSABLE);
+ 		}
++		mutex_unlock(&ud->sysfs_lock);
+ 
+ 		wake_up(&ud->eh_waitq);
+ 	}
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index a20a8380ca0c9..4ba6bcdaa8e9d 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -1101,6 +1101,7 @@ static void vhci_device_init(struct vhci_device *vdev)
+ 	vdev->ud.side   = USBIP_VHCI;
+ 	vdev->ud.status = VDEV_ST_NULL;
+ 	spin_lock_init(&vdev->ud.lock);
++	mutex_init(&vdev->ud.sysfs_lock);
+ 
+ 	INIT_LIST_HEAD(&vdev->priv_rx);
+ 	INIT_LIST_HEAD(&vdev->priv_tx);
+diff --git a/drivers/usb/usbip/vhci_sysfs.c b/drivers/usb/usbip/vhci_sysfs.c
+index e64ea314930be..ebc7be1d98207 100644
+--- a/drivers/usb/usbip/vhci_sysfs.c
++++ b/drivers/usb/usbip/vhci_sysfs.c
+@@ -185,6 +185,8 @@ static int vhci_port_disconnect(struct vhci_hcd *vhci_hcd, __u32 rhport)
+ 
+ 	usbip_dbg_vhci_sysfs("enter\n");
+ 
++	mutex_lock(&vdev->ud.sysfs_lock);
++
+ 	/* lock */
+ 	spin_lock_irqsave(&vhci->lock, flags);
+ 	spin_lock(&vdev->ud.lock);
+@@ -195,6 +197,7 @@ static int vhci_port_disconnect(struct vhci_hcd *vhci_hcd, __u32 rhport)
+ 		/* unlock */
+ 		spin_unlock(&vdev->ud.lock);
+ 		spin_unlock_irqrestore(&vhci->lock, flags);
++		mutex_unlock(&vdev->ud.sysfs_lock);
+ 
+ 		return -EINVAL;
+ 	}
+@@ -205,6 +208,8 @@ static int vhci_port_disconnect(struct vhci_hcd *vhci_hcd, __u32 rhport)
+ 
+ 	usbip_event_add(&vdev->ud, VDEV_EVENT_DOWN);
+ 
++	mutex_unlock(&vdev->ud.sysfs_lock);
++
+ 	return 0;
+ }
+ 
+@@ -349,30 +354,36 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
+ 	else
+ 		vdev = &vhci->vhci_hcd_hs->vdev[rhport];
+ 
++	mutex_lock(&vdev->ud.sysfs_lock);
++
+ 	/* Extract socket from fd. */
+ 	socket = sockfd_lookup(sockfd, &err);
+ 	if (!socket) {
+ 		dev_err(dev, "failed to lookup sock");
+-		return -EINVAL;
++		err = -EINVAL;
++		goto unlock_mutex;
+ 	}
+ 	if (socket->type != SOCK_STREAM) {
+ 		dev_err(dev, "Expecting SOCK_STREAM - found %d",
+ 			socket->type);
+ 		sockfd_put(socket);
+-		return -EINVAL;
++		err = -EINVAL;
++		goto unlock_mutex;
+ 	}
+ 
+ 	/* create threads before locking */
+ 	tcp_rx = kthread_create(vhci_rx_loop, &vdev->ud, "vhci_rx");
+ 	if (IS_ERR(tcp_rx)) {
+ 		sockfd_put(socket);
+-		return -EINVAL;
++		err = -EINVAL;
++		goto unlock_mutex;
+ 	}
+ 	tcp_tx = kthread_create(vhci_tx_loop, &vdev->ud, "vhci_tx");
+ 	if (IS_ERR(tcp_tx)) {
+ 		kthread_stop(tcp_rx);
+ 		sockfd_put(socket);
+-		return -EINVAL;
++		err = -EINVAL;
++		goto unlock_mutex;
+ 	}
+ 
+ 	/* get task structs now */
+@@ -397,7 +408,8 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
+ 		 * Will be retried from userspace
+ 		 * if there's another free port.
+ 		 */
+-		return -EBUSY;
++		err = -EBUSY;
++		goto unlock_mutex;
+ 	}
+ 
+ 	dev_info(dev, "pdev(%u) rhport(%u) sockfd(%d)\n",
+@@ -422,7 +434,15 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
+ 
+ 	rh_port_connect(vdev, speed);
+ 
++	dev_info(dev, "Device attached\n");
++
++	mutex_unlock(&vdev->ud.sysfs_lock);
++
+ 	return count;
++
++unlock_mutex:
++	mutex_unlock(&vdev->ud.sysfs_lock);
++	return err;
+ }
+ static DEVICE_ATTR_WO(attach);
+ 
+diff --git a/drivers/usb/usbip/vudc_dev.c b/drivers/usb/usbip/vudc_dev.c
+index c8eeabdd9b568..2bc428f2e2610 100644
+--- a/drivers/usb/usbip/vudc_dev.c
++++ b/drivers/usb/usbip/vudc_dev.c
+@@ -572,6 +572,7 @@ static int init_vudc_hw(struct vudc *udc)
+ 	init_waitqueue_head(&udc->tx_waitq);
+ 
+ 	spin_lock_init(&ud->lock);
++	mutex_init(&ud->sysfs_lock);
+ 	ud->status = SDEV_ST_AVAILABLE;
+ 	ud->side = USBIP_VUDC;
+ 
+diff --git a/drivers/usb/usbip/vudc_sysfs.c b/drivers/usb/usbip/vudc_sysfs.c
+index 7383a543c6d12..f7633ee655a17 100644
+--- a/drivers/usb/usbip/vudc_sysfs.c
++++ b/drivers/usb/usbip/vudc_sysfs.c
+@@ -112,6 +112,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
+ 		dev_err(dev, "no device");
+ 		return -ENODEV;
+ 	}
++	mutex_lock(&udc->ud.sysfs_lock);
+ 	spin_lock_irqsave(&udc->lock, flags);
+ 	/* Don't export what we don't have */
+ 	if (!udc->driver || !udc->pullup) {
+@@ -187,6 +188,8 @@ static ssize_t usbip_sockfd_store(struct device *dev,
+ 
+ 		wake_up_process(udc->ud.tcp_rx);
+ 		wake_up_process(udc->ud.tcp_tx);
++
++		mutex_unlock(&udc->ud.sysfs_lock);
+ 		return count;
+ 
+ 	} else {
+@@ -207,6 +210,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
+ 	}
+ 
+ 	spin_unlock_irqrestore(&udc->lock, flags);
++	mutex_unlock(&udc->ud.sysfs_lock);
+ 
+ 	return count;
+ 
+@@ -216,6 +220,7 @@ unlock_ud:
+ 	spin_unlock_irq(&udc->ud.lock);
+ unlock:
+ 	spin_unlock_irqrestore(&udc->lock, flags);
++	mutex_unlock(&udc->ud.sysfs_lock);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+index 08f742fd24099..b6cc53ba980cc 100644
+--- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
++++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+@@ -4,9 +4,13 @@
+ #ifndef __MLX5_VDPA_H__
+ #define __MLX5_VDPA_H__
+ 
++#include <linux/etherdevice.h>
++#include <linux/if_vlan.h>
+ #include <linux/vdpa.h>
+ #include <linux/mlx5/driver.h>
+ 
++#define MLX5V_ETH_HARD_MTU (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN)
++
+ struct mlx5_vdpa_direct_mr {
+ 	u64 start;
+ 	u64 end;
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 5a86ede36c1ca..65cfbd3771301 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -805,7 +805,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
+ 	MLX5_SET(virtio_q, vq_ctx, event_qpn_or_msix, mvq->fwqp.mqp.qpn);
+ 	MLX5_SET(virtio_q, vq_ctx, queue_size, mvq->num_ent);
+ 	MLX5_SET(virtio_q, vq_ctx, virtio_version_1_0,
+-		 !!(ndev->mvdev.actual_features & VIRTIO_F_VERSION_1));
++		 !!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_F_VERSION_1)));
+ 	MLX5_SET64(virtio_q, vq_ctx, desc_addr, mvq->desc_addr);
+ 	MLX5_SET64(virtio_q, vq_ctx, used_addr, mvq->device_addr);
+ 	MLX5_SET64(virtio_q, vq_ctx, available_addr, mvq->driver_addr);
+@@ -1154,6 +1154,7 @@ static void suspend_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
+ 		return;
+ 	}
+ 	mvq->avail_idx = attr.available_index;
++	mvq->used_idx = attr.used_index;
+ }
+ 
+ static void suspend_vqs(struct mlx5_vdpa_net *ndev)
+@@ -1411,6 +1412,7 @@ static int mlx5_vdpa_set_vq_state(struct vdpa_device *vdev, u16 idx,
+ 		return -EINVAL;
+ 	}
+ 
++	mvq->used_idx = state->avail_index;
+ 	mvq->avail_idx = state->avail_index;
+ 	return 0;
+ }
+@@ -1428,7 +1430,11 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
+ 	 * that cares about emulating the index after vq is stopped.
+ 	 */
+ 	if (!mvq->initialized) {
+-		state->avail_index = mvq->avail_idx;
++		/* Firmware returns a wrong value for the available index.
++		 * Since both values should be identical, we take the value of
++		 * used_idx which is reported correctly.
++		 */
++		state->avail_index = mvq->used_idx;
+ 		return 0;
+ 	}
+ 
+@@ -1437,7 +1443,7 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
+ 		mlx5_vdpa_warn(mvdev, "failed to query virtqueue\n");
+ 		return err;
+ 	}
+-	state->avail_index = attr.available_index;
++	state->avail_index = attr.used_index;
+ 	return 0;
+ }
+ 
+@@ -1525,21 +1531,11 @@ static void teardown_virtqueues(struct mlx5_vdpa_net *ndev)
+ 	}
+ }
+ 
+-static void clear_virtqueues(struct mlx5_vdpa_net *ndev)
+-{
+-	int i;
+-
+-	for (i = ndev->mvdev.max_vqs - 1; i >= 0; i--) {
+-		ndev->vqs[i].avail_idx = 0;
+-		ndev->vqs[i].used_idx = 0;
+-	}
+-}
+-
+ /* TODO: cross-endian support */
+ static inline bool mlx5_vdpa_is_little_endian(struct mlx5_vdpa_dev *mvdev)
+ {
+ 	return virtio_legacy_is_little_endian() ||
+-		(mvdev->actual_features & (1ULL << VIRTIO_F_VERSION_1));
++		(mvdev->actual_features & BIT_ULL(VIRTIO_F_VERSION_1));
+ }
+ 
+ static __virtio16 cpu_to_mlx5vdpa16(struct mlx5_vdpa_dev *mvdev, u16 val)
+@@ -1770,7 +1766,6 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
+ 	if (!status) {
+ 		mlx5_vdpa_info(mvdev, "performing device reset\n");
+ 		teardown_driver(ndev);
+-		clear_virtqueues(ndev);
+ 		mlx5_vdpa_destroy_mr(&ndev->mvdev);
+ 		ndev->mvdev.status = 0;
+ 		ndev->mvdev.mlx_features = 0;
+@@ -1892,6 +1887,19 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
+ 	.free = mlx5_vdpa_free,
+ };
+ 
++static int query_mtu(struct mlx5_core_dev *mdev, u16 *mtu)
++{
++	u16 hw_mtu;
++	int err;
++
++	err = mlx5_query_nic_vport_mtu(mdev, &hw_mtu);
++	if (err)
++		return err;
++
++	*mtu = hw_mtu - MLX5V_ETH_HARD_MTU;
++	return 0;
++}
++
+ static int alloc_resources(struct mlx5_vdpa_net *ndev)
+ {
+ 	struct mlx5_vdpa_net_resources *res = &ndev->res;
+@@ -1974,7 +1982,7 @@ void *mlx5_vdpa_add_dev(struct mlx5_core_dev *mdev)
+ 	init_mvqs(ndev);
+ 	mutex_init(&ndev->reslock);
+ 	config = &ndev->config;
+-	err = mlx5_query_nic_vport_mtu(mdev, &ndev->mtu);
++	err = query_mtu(mdev, &ndev->mtu);
+ 	if (err)
+ 		goto err_mtu;
+ 
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 7bd03f6e0422f..b91c19b90b8b4 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -108,7 +108,7 @@ struct irq_info {
+ 	unsigned short eoi_cpu; /* EOI must happen on this cpu-1 */
+ 	unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
+ 	u64 eoi_time;           /* Time in jiffies when to EOI. */
+-	spinlock_t lock;
++	raw_spinlock_t lock;
+ 
+ 	union {
+ 		unsigned short virq;
+@@ -280,7 +280,7 @@ static int xen_irq_info_common_setup(struct irq_info *info,
+ 	info->evtchn = evtchn;
+ 	info->cpu = cpu;
+ 	info->mask_reason = EVT_MASK_REASON_EXPLICIT;
+-	spin_lock_init(&info->lock);
++	raw_spin_lock_init(&info->lock);
+ 
+ 	ret = set_evtchn_to_irq(evtchn, irq);
+ 	if (ret < 0)
+@@ -432,28 +432,28 @@ static void do_mask(struct irq_info *info, u8 reason)
+ {
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&info->lock, flags);
++	raw_spin_lock_irqsave(&info->lock, flags);
+ 
+ 	if (!info->mask_reason)
+ 		mask_evtchn(info->evtchn);
+ 
+ 	info->mask_reason |= reason;
+ 
+-	spin_unlock_irqrestore(&info->lock, flags);
++	raw_spin_unlock_irqrestore(&info->lock, flags);
+ }
+ 
+ static void do_unmask(struct irq_info *info, u8 reason)
+ {
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&info->lock, flags);
++	raw_spin_lock_irqsave(&info->lock, flags);
+ 
+ 	info->mask_reason &= ~reason;
+ 
+ 	if (!info->mask_reason)
+ 		unmask_evtchn(info->evtchn);
+ 
+-	spin_unlock_irqrestore(&info->lock, flags);
++	raw_spin_unlock_irqrestore(&info->lock, flags);
+ }
+ 
+ #ifdef CONFIG_X86
+diff --git a/fs/direct-io.c b/fs/direct-io.c
+index d53fa92a1ab65..c64d4eb38995a 100644
+--- a/fs/direct-io.c
++++ b/fs/direct-io.c
+@@ -810,6 +810,7 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
+ 		    struct buffer_head *map_bh)
+ {
+ 	int ret = 0;
++	int boundary = sdio->boundary;	/* dio_send_cur_page may clear it */
+ 
+ 	if (dio->op == REQ_OP_WRITE) {
+ 		/*
+@@ -848,10 +849,10 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
+ 	sdio->cur_page_fs_offset = sdio->block_in_file << sdio->blkbits;
+ out:
+ 	/*
+-	 * If sdio->boundary then we want to schedule the IO now to
++	 * If boundary then we want to schedule the IO now to
+ 	 * avoid metadata seeks.
+ 	 */
+-	if (sdio->boundary) {
++	if (boundary) {
+ 		ret = dio_send_cur_page(dio, sdio, map_bh);
+ 		if (sdio->bio)
+ 			dio_bio_submit(dio, sdio);
+diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c
+index c070c0d8e3e97..d4e360234579b 100644
+--- a/fs/hostfs/hostfs_kern.c
++++ b/fs/hostfs/hostfs_kern.c
+@@ -142,7 +142,7 @@ static char *follow_link(char *link)
+ 	char *name, *resolved, *end;
+ 	int n;
+ 
+-	name = __getname();
++	name = kmalloc(PATH_MAX, GFP_KERNEL);
+ 	if (!name) {
+ 		n = -ENOMEM;
+ 		goto out_free;
+@@ -171,12 +171,11 @@ static char *follow_link(char *link)
+ 		goto out_free;
+ 	}
+ 
+-	__putname(name);
+-	kfree(link);
++	kfree(name);
+ 	return resolved;
+ 
+  out_free:
+-	__putname(name);
++	kfree(name);
+ 	return ERR_PTR(n);
+ }
+ 
+diff --git a/fs/namei.c b/fs/namei.c
+index 7af66d5a0c1bf..4c9d0c36545d3 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -2328,16 +2328,16 @@ static int path_lookupat(struct nameidata *nd, unsigned flags, struct path *path
+ 	while (!(err = link_path_walk(s, nd)) &&
+ 	       (s = lookup_last(nd)) != NULL)
+ 		;
++	if (!err && unlikely(nd->flags & LOOKUP_MOUNTPOINT)) {
++		err = handle_lookup_down(nd);
++		nd->flags &= ~LOOKUP_JUMPED; // no d_weak_revalidate(), please...
++	}
+ 	if (!err)
+ 		err = complete_walk(nd);
+ 
+ 	if (!err && nd->flags & LOOKUP_DIRECTORY)
+ 		if (!d_can_lookup(nd->path.dentry))
+ 			err = -ENOTDIR;
+-	if (!err && unlikely(nd->flags & LOOKUP_MOUNTPOINT)) {
+-		err = handle_lookup_down(nd);
+-		nd->flags &= ~LOOKUP_JUMPED; // no d_weak_revalidate(), please...
+-	}
+ 	if (!err) {
+ 		*path = nd->path;
+ 		nd->path.mnt = NULL;
+diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
+index 3bfb4147895a0..ad20403b383fa 100644
+--- a/fs/ocfs2/aops.c
++++ b/fs/ocfs2/aops.c
+@@ -2295,7 +2295,7 @@ static int ocfs2_dio_end_io_write(struct inode *inode,
+ 	struct ocfs2_alloc_context *meta_ac = NULL;
+ 	handle_t *handle = NULL;
+ 	loff_t end = offset + bytes;
+-	int ret = 0, credits = 0, locked = 0;
++	int ret = 0, credits = 0;
+ 
+ 	ocfs2_init_dealloc_ctxt(&dealloc);
+ 
+@@ -2306,13 +2306,6 @@ static int ocfs2_dio_end_io_write(struct inode *inode,
+ 	    !dwc->dw_orphaned)
+ 		goto out;
+ 
+-	/* ocfs2_file_write_iter will get i_mutex, so we need not lock if we
+-	 * are in that context. */
+-	if (dwc->dw_writer_pid != task_pid_nr(current)) {
+-		inode_lock(inode);
+-		locked = 1;
+-	}
+-
+ 	ret = ocfs2_inode_lock(inode, &di_bh, 1);
+ 	if (ret < 0) {
+ 		mlog_errno(ret);
+@@ -2393,8 +2386,6 @@ out:
+ 	if (meta_ac)
+ 		ocfs2_free_alloc_context(meta_ac);
+ 	ocfs2_run_deallocs(osb, &dealloc);
+-	if (locked)
+-		inode_unlock(inode);
+ 	ocfs2_dio_free_write_ctx(inode, dwc);
+ 
+ 	return ret;
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index 85979e2214b39..8880071ee4ee0 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -1244,22 +1244,24 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
+ 				goto bail_unlock;
+ 			}
+ 		}
++		down_write(&OCFS2_I(inode)->ip_alloc_sem);
+ 		handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS +
+ 					   2 * ocfs2_quota_trans_credits(sb));
+ 		if (IS_ERR(handle)) {
+ 			status = PTR_ERR(handle);
+ 			mlog_errno(status);
+-			goto bail_unlock;
++			goto bail_unlock_alloc;
+ 		}
+ 		status = __dquot_transfer(inode, transfer_to);
+ 		if (status < 0)
+ 			goto bail_commit;
+ 	} else {
++		down_write(&OCFS2_I(inode)->ip_alloc_sem);
+ 		handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS);
+ 		if (IS_ERR(handle)) {
+ 			status = PTR_ERR(handle);
+ 			mlog_errno(status);
+-			goto bail_unlock;
++			goto bail_unlock_alloc;
+ 		}
+ 	}
+ 
+@@ -1272,6 +1274,8 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
+ 
+ bail_commit:
+ 	ocfs2_commit_trans(osb, handle);
++bail_unlock_alloc:
++	up_write(&OCFS2_I(inode)->ip_alloc_sem);
+ bail_unlock:
+ 	if (status && inode_locked) {
+ 		ocfs2_inode_unlock_tracker(inode, 1, &oh, had_lock);
+diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
+index 40bad71865ea7..532bcbfc47161 100644
+--- a/include/linux/avf/virtchnl.h
++++ b/include/linux/avf/virtchnl.h
+@@ -476,7 +476,6 @@ struct virtchnl_rss_key {
+ 	u16 vsi_id;
+ 	u16 key_len;
+ 	u8 key[1];         /* RSS hash key, packed bytes */
+-	u8 pad[1];
+ };
+ 
+ VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+@@ -485,7 +484,6 @@ struct virtchnl_rss_lut {
+ 	u16 vsi_id;
+ 	u16 lut_entries;
+ 	u8 lut[1];        /* RSS lookup table */
+-	u8 pad[1];
+ };
+ 
+ VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 233352447b1a4..cc9ee07769745 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -435,11 +435,11 @@ struct mlx5_ifc_flow_table_prop_layout_bits {
+ 	u8         reserved_at_60[0x18];
+ 	u8         log_max_ft_num[0x8];
+ 
+-	u8         reserved_at_80[0x18];
++	u8         reserved_at_80[0x10];
++	u8         log_max_flow_counter[0x8];
+ 	u8         log_max_destination[0x8];
+ 
+-	u8         log_max_flow_counter[0x8];
+-	u8         reserved_at_a8[0x10];
++	u8         reserved_at_a0[0x18];
+ 	u8         log_max_flow[0x8];
+ 
+ 	u8         reserved_at_c0[0x40];
+@@ -8719,6 +8719,8 @@ struct mlx5_ifc_pplm_reg_bits {
+ 
+ 	u8         fec_override_admin_100g_2x[0x10];
+ 	u8         fec_override_admin_50g_1x[0x10];
++
++	u8         reserved_at_140[0x140];
+ };
+ 
+ struct mlx5_ifc_ppcnt_reg_bits {
+@@ -10056,7 +10058,7 @@ struct mlx5_ifc_pbmc_reg_bits {
+ 
+ 	struct mlx5_ifc_bufferx_reg_bits buffer[10];
+ 
+-	u8         reserved_at_2e0[0x40];
++	u8         reserved_at_2e0[0x80];
+ };
+ 
+ struct mlx5_ifc_qtct_reg_bits {
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index fec0c5ac1c4f9..82126d5297986 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -349,8 +349,13 @@ static inline void sk_psock_update_proto(struct sock *sk,
+ static inline void sk_psock_restore_proto(struct sock *sk,
+ 					  struct sk_psock *psock)
+ {
+-	sk->sk_prot->unhash = psock->saved_unhash;
+ 	if (inet_csk_has_ulp(sk)) {
++		/* TLS does not have an unhash proto in SW cases, but we need
++		 * to ensure we stop using the sock_map unhash routine because
++		 * the associated psock is being removed. So use the original
++		 * unhash handler.
++		 */
++		WRITE_ONCE(sk->sk_prot->unhash, psock->saved_unhash);
+ 		tcp_update_ulp(sk, psock->sk_proto, psock->saved_write_space);
+ 	} else {
+ 		sk->sk_write_space = psock->saved_write_space;
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 6b5fcfa1e5553..98775d7fa6963 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -62,6 +62,8 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 			return -EINVAL;
+ 	}
+ 
++	skb_reset_mac_header(skb);
++
+ 	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
+ 		u16 start = __virtio16_to_cpu(little_endian, hdr->csum_start);
+ 		u16 off = __virtio16_to_cpu(little_endian, hdr->csum_offset);
+diff --git a/include/net/act_api.h b/include/net/act_api.h
+index 89b42a1e4f88e..2c88b8af3cdbe 100644
+--- a/include/net/act_api.h
++++ b/include/net/act_api.h
+@@ -170,12 +170,7 @@ void tcf_idr_insert_many(struct tc_action *actions[]);
+ void tcf_idr_cleanup(struct tc_action_net *tn, u32 index);
+ int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index,
+ 			struct tc_action **a, int bind);
+-int __tcf_idr_release(struct tc_action *a, bool bind, bool strict);
+-
+-static inline int tcf_idr_release(struct tc_action *a, bool bind)
+-{
+-	return __tcf_idr_release(a, bind, false);
+-}
++int tcf_idr_release(struct tc_action *a, bool bind);
+ 
+ int tcf_register_action(struct tc_action_ops *a, struct pernet_operations *ops);
+ int tcf_unregister_action(struct tc_action_ops *a,
+@@ -185,7 +180,7 @@ int tcf_action_exec(struct sk_buff *skb, struct tc_action **actions,
+ 		    int nr_actions, struct tcf_result *res);
+ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
+ 		    struct nlattr *est, char *name, int ovr, int bind,
+-		    struct tc_action *actions[], size_t *attr_size,
++		    struct tc_action *actions[], int init_res[], size_t *attr_size,
+ 		    bool rtnl_held, struct netlink_ext_ack *extack);
+ struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla,
+ 					 bool rtnl_held,
+@@ -193,7 +188,8 @@ struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla,
+ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ 				    struct nlattr *nla, struct nlattr *est,
+ 				    char *name, int ovr, int bind,
+-				    struct tc_action_ops *ops, bool rtnl_held,
++				    struct tc_action_ops *a_o, int *init_res,
++				    bool rtnl_held,
+ 				    struct netlink_ext_ack *extack);
+ int tcf_action_dump(struct sk_buff *skb, struct tc_action *actions[], int bind,
+ 		    int ref, bool terse);
+diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h
+index 59f45b1e9dac0..b59d73d529ba7 100644
+--- a/include/net/netns/xfrm.h
++++ b/include/net/netns/xfrm.h
+@@ -72,7 +72,9 @@ struct netns_xfrm {
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	struct dst_ops		xfrm6_dst_ops;
+ #endif
+-	spinlock_t xfrm_state_lock;
++	spinlock_t		xfrm_state_lock;
++	seqcount_t		xfrm_state_hash_generation;
++
+ 	spinlock_t xfrm_policy_lock;
+ 	struct mutex xfrm_cfg_mutex;
+ };
+diff --git a/include/net/red.h b/include/net/red.h
+index 9e6647c4ccd1f..cc9f6b0d7f1e9 100644
+--- a/include/net/red.h
++++ b/include/net/red.h
+@@ -171,9 +171,9 @@ static inline void red_set_vars(struct red_vars *v)
+ static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog,
+ 				    u8 Scell_log, u8 *stab)
+ {
+-	if (fls(qth_min) + Wlog > 32)
++	if (fls(qth_min) + Wlog >= 32)
+ 		return false;
+-	if (fls(qth_max) + Wlog > 32)
++	if (fls(qth_max) + Wlog >= 32)
+ 		return false;
+ 	if (Scell_log >= 32)
+ 		return false;
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 253202dcc5e61..261195598df39 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2197,6 +2197,15 @@ static inline void skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
+ 	sk_mem_charge(sk, skb->truesize);
+ }
+ 
++static inline void skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk)
++{
++	if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) {
++		skb_orphan(skb);
++		skb->destructor = sock_efree;
++		skb->sk = sk;
++	}
++}
++
+ void sk_reset_timer(struct sock *sk, struct timer_list *timer,
+ 		    unsigned long expires);
+ 
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index b2a06f10b62ce..c58a6d4eb6103 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1097,7 +1097,7 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
+ 		return __xfrm_policy_check(sk, ndir, skb, family);
+ 
+ 	return	(!net->xfrm.policy_count[dir] && !secpath_exists(skb)) ||
+-		(skb_dst(skb)->flags & DST_NOPOLICY) ||
++		(skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) ||
+ 		__xfrm_policy_check(sk, ndir, skb, family);
+ }
+ 
+@@ -1557,7 +1557,7 @@ int xfrm_trans_queue_net(struct net *net, struct sk_buff *skb,
+ int xfrm_trans_queue(struct sk_buff *skb,
+ 		     int (*finish)(struct net *, struct sock *,
+ 				   struct sk_buff *));
+-int xfrm_output_resume(struct sk_buff *skb, int err);
++int xfrm_output_resume(struct sock *sk, struct sk_buff *skb, int err);
+ int xfrm_output(struct sock *sk, struct sk_buff *skb);
+ 
+ #if IS_ENABLED(CONFIG_NET_PKTGEN)
+diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
+index dd4b7fd60ee7d..6b14b4c4068cc 100644
+--- a/kernel/bpf/inode.c
++++ b/kernel/bpf/inode.c
+@@ -546,7 +546,7 @@ int bpf_obj_get_user(const char __user *pathname, int flags)
+ 	else if (type == BPF_TYPE_MAP)
+ 		ret = bpf_map_new_fd(raw, f_flags);
+ 	else if (type == BPF_TYPE_LINK)
+-		ret = bpf_link_new_fd(raw);
++		ret = (f_flags != O_RDWR) ? -EINVAL : bpf_link_new_fd(raw);
+ 	else
+ 		return -ENOENT;
+ 
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index 6e83bf8c080db..ebf60848d5eb7 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -662,9 +662,17 @@ const struct bpf_func_proto bpf_get_stack_proto = {
+ BPF_CALL_4(bpf_get_task_stack, struct task_struct *, task, void *, buf,
+ 	   u32, size, u64, flags)
+ {
+-	struct pt_regs *regs = task_pt_regs(task);
++	struct pt_regs *regs;
++	long res;
+ 
+-	return __bpf_get_stack(regs, task, NULL, buf, size, flags);
++	if (!try_get_task_stack(task))
++		return -EFAULT;
++
++	regs = task_pt_regs(task);
++	res = __bpf_get_stack(regs, task, NULL, buf, size, flags);
++	put_task_stack(task);
++
++	return res;
+ }
+ 
+ BTF_ID_LIST_SINGLE(bpf_get_task_stack_btf_ids, struct, task_struct)
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 9a1aba2d00733..12cd2997f982a 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -11429,6 +11429,11 @@ static int check_struct_ops_btf_id(struct bpf_verifier_env *env)
+ 	u32 btf_id, member_idx;
+ 	const char *mname;
+ 
++	if (!prog->gpl_compatible) {
++		verbose(env, "struct ops programs must have a GPL compatible license\n");
++		return -EINVAL;
++	}
++
+ 	btf_id = prog->aux->attach_btf_id;
+ 	st_ops = bpf_struct_ops_find(btf_id);
+ 	if (!st_ops) {
+diff --git a/kernel/gcov/clang.c b/kernel/gcov/clang.c
+index 8743150db2acc..c466c7fbdece5 100644
+--- a/kernel/gcov/clang.c
++++ b/kernel/gcov/clang.c
+@@ -70,7 +70,9 @@ struct gcov_fn_info {
+ 
+ 	u32 ident;
+ 	u32 checksum;
++#if CONFIG_CLANG_VERSION < 110000
+ 	u8 use_extra_checksum;
++#endif
+ 	u32 cfg_checksum;
+ 
+ 	u32 num_counters;
+@@ -145,10 +147,8 @@ void llvm_gcda_emit_function(u32 ident, const char *function_name,
+ 
+ 	list_add_tail(&info->head, &current_info->functions);
+ }
+-EXPORT_SYMBOL(llvm_gcda_emit_function);
+ #else
+-void llvm_gcda_emit_function(u32 ident, u32 func_checksum,
+-		u8 use_extra_checksum, u32 cfg_checksum)
++void llvm_gcda_emit_function(u32 ident, u32 func_checksum, u32 cfg_checksum)
+ {
+ 	struct gcov_fn_info *info = kzalloc(sizeof(*info), GFP_KERNEL);
+ 
+@@ -158,12 +158,11 @@ void llvm_gcda_emit_function(u32 ident, u32 func_checksum,
+ 	INIT_LIST_HEAD(&info->head);
+ 	info->ident = ident;
+ 	info->checksum = func_checksum;
+-	info->use_extra_checksum = use_extra_checksum;
+ 	info->cfg_checksum = cfg_checksum;
+ 	list_add_tail(&info->head, &current_info->functions);
+ }
+-EXPORT_SYMBOL(llvm_gcda_emit_function);
+ #endif
++EXPORT_SYMBOL(llvm_gcda_emit_function);
+ 
+ void llvm_gcda_emit_arcs(u32 num_counters, u64 *counters)
+ {
+@@ -293,11 +292,16 @@ int gcov_info_is_compatible(struct gcov_info *info1, struct gcov_info *info2)
+ 		!list_is_last(&fn_ptr2->head, &info2->functions)) {
+ 		if (fn_ptr1->checksum != fn_ptr2->checksum)
+ 			return false;
++#if CONFIG_CLANG_VERSION < 110000
+ 		if (fn_ptr1->use_extra_checksum != fn_ptr2->use_extra_checksum)
+ 			return false;
+ 		if (fn_ptr1->use_extra_checksum &&
+ 			fn_ptr1->cfg_checksum != fn_ptr2->cfg_checksum)
+ 			return false;
++#else
++		if (fn_ptr1->cfg_checksum != fn_ptr2->cfg_checksum)
++			return false;
++#endif
+ 		fn_ptr1 = list_next_entry(fn_ptr1, head);
+ 		fn_ptr2 = list_next_entry(fn_ptr2, head);
+ 	}
+@@ -529,17 +533,22 @@ static size_t convert_to_gcda(char *buffer, struct gcov_info *info)
+ 
+ 	list_for_each_entry(fi_ptr, &info->functions, head) {
+ 		u32 i;
+-		u32 len = 2;
+-
+-		if (fi_ptr->use_extra_checksum)
+-			len++;
+ 
+ 		pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION);
+-		pos += store_gcov_u32(buffer, pos, len);
++#if CONFIG_CLANG_VERSION < 110000
++		pos += store_gcov_u32(buffer, pos,
++			fi_ptr->use_extra_checksum ? 3 : 2);
++#else
++		pos += store_gcov_u32(buffer, pos, 3);
++#endif
+ 		pos += store_gcov_u32(buffer, pos, fi_ptr->ident);
+ 		pos += store_gcov_u32(buffer, pos, fi_ptr->checksum);
++#if CONFIG_CLANG_VERSION < 110000
+ 		if (fi_ptr->use_extra_checksum)
+ 			pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum);
++#else
++		pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum);
++#endif
+ 
+ 		pos += store_gcov_u32(buffer, pos, GCOV_TAG_COUNTER_BASE);
+ 		pos += store_gcov_u32(buffer, pos, fi_ptr->num_counters * 2);
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 780012eb2f3fe..eead7efbe7e5d 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -705,7 +705,7 @@ static void print_lock_name(struct lock_class *class)
+ 
+ 	printk(KERN_CONT " (");
+ 	__print_lock_name(class);
+-	printk(KERN_CONT "){%s}-{%hd:%hd}", usage,
++	printk(KERN_CONT "){%s}-{%d:%d}", usage,
+ 			class->wait_type_outer ?: class->wait_type_inner,
+ 			class->wait_type_inner);
+ }
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 1d99c52cc99a6..1e2ca744dadbe 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1409,7 +1409,6 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
+ 	 */
+ 	lockdep_assert_irqs_disabled();
+ 
+-	debug_work_activate(work);
+ 
+ 	/* if draining, only works from the same workqueue are allowed */
+ 	if (unlikely(wq->flags & __WQ_DRAINING) &&
+@@ -1491,6 +1490,7 @@ retry:
+ 		worklist = &pwq->delayed_works;
+ 	}
+ 
++	debug_work_activate(work);
+ 	insert_work(pwq, work, worklist, work_flags);
+ 
+ out:
+diff --git a/mm/percpu-internal.h b/mm/percpu-internal.h
+index 18b768ac7dcae..095d7eaa0db42 100644
+--- a/mm/percpu-internal.h
++++ b/mm/percpu-internal.h
+@@ -87,7 +87,7 @@ extern spinlock_t pcpu_lock;
+ 
+ extern struct list_head *pcpu_chunk_lists;
+ extern int pcpu_nr_slots;
+-extern int pcpu_nr_empty_pop_pages;
++extern int pcpu_nr_empty_pop_pages[];
+ 
+ extern struct pcpu_chunk *pcpu_first_chunk;
+ extern struct pcpu_chunk *pcpu_reserved_chunk;
+diff --git a/mm/percpu-stats.c b/mm/percpu-stats.c
+index c8400a2adbc2b..f6026dbcdf6b3 100644
+--- a/mm/percpu-stats.c
++++ b/mm/percpu-stats.c
+@@ -145,6 +145,7 @@ static int percpu_stats_show(struct seq_file *m, void *v)
+ 	int slot, max_nr_alloc;
+ 	int *buffer;
+ 	enum pcpu_chunk_type type;
++	int nr_empty_pop_pages;
+ 
+ alloc_buffer:
+ 	spin_lock_irq(&pcpu_lock);
+@@ -165,7 +166,11 @@ alloc_buffer:
+ 		goto alloc_buffer;
+ 	}
+ 
+-#define PL(X) \
++	nr_empty_pop_pages = 0;
++	for (type = 0; type < PCPU_NR_CHUNK_TYPES; type++)
++		nr_empty_pop_pages += pcpu_nr_empty_pop_pages[type];
++
++#define PL(X)								\
+ 	seq_printf(m, "  %-20s: %12lld\n", #X, (long long int)pcpu_stats_ai.X)
+ 
+ 	seq_printf(m,
+@@ -196,7 +201,7 @@ alloc_buffer:
+ 	PU(nr_max_chunks);
+ 	PU(min_alloc_size);
+ 	PU(max_alloc_size);
+-	P("empty_pop_pages", pcpu_nr_empty_pop_pages);
++	P("empty_pop_pages", nr_empty_pop_pages);
+ 	seq_putc(m, '\n');
+ 
+ #undef PU
+diff --git a/mm/percpu.c b/mm/percpu.c
+index ad7a37ee74ef5..e12ab708fe15b 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -172,10 +172,10 @@ struct list_head *pcpu_chunk_lists __ro_after_init; /* chunk list slots */
+ static LIST_HEAD(pcpu_map_extend_chunks);
+ 
+ /*
+- * The number of empty populated pages, protected by pcpu_lock.  The
+- * reserved chunk doesn't contribute to the count.
++ * The number of empty populated pages by chunk type, protected by pcpu_lock.
++ * The reserved chunk doesn't contribute to the count.
+  */
+-int pcpu_nr_empty_pop_pages;
++int pcpu_nr_empty_pop_pages[PCPU_NR_CHUNK_TYPES];
+ 
+ /*
+  * The number of populated pages in use by the allocator, protected by
+@@ -555,7 +555,7 @@ static inline void pcpu_update_empty_pages(struct pcpu_chunk *chunk, int nr)
+ {
+ 	chunk->nr_empty_pop_pages += nr;
+ 	if (chunk != pcpu_reserved_chunk)
+-		pcpu_nr_empty_pop_pages += nr;
++		pcpu_nr_empty_pop_pages[pcpu_chunk_type(chunk)] += nr;
+ }
+ 
+ /*
+@@ -1831,7 +1831,7 @@ area_found:
+ 		mutex_unlock(&pcpu_alloc_mutex);
+ 	}
+ 
+-	if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
++	if (pcpu_nr_empty_pop_pages[type] < PCPU_EMPTY_POP_PAGES_LOW)
+ 		pcpu_schedule_balance_work();
+ 
+ 	/* clear the areas and return address relative to base address */
+@@ -1999,7 +1999,7 @@ retry_pop:
+ 		pcpu_atomic_alloc_failed = false;
+ 	} else {
+ 		nr_to_pop = clamp(PCPU_EMPTY_POP_PAGES_HIGH -
+-				  pcpu_nr_empty_pop_pages,
++				  pcpu_nr_empty_pop_pages[type],
+ 				  0, PCPU_EMPTY_POP_PAGES_HIGH);
+ 	}
+ 
+@@ -2579,7 +2579,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
+ 
+ 	/* link the first chunk in */
+ 	pcpu_first_chunk = chunk;
+-	pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
++	pcpu_nr_empty_pop_pages[PCPU_CHUNK_ROOT] = pcpu_first_chunk->nr_empty_pop_pages;
+ 	pcpu_chunk_relocate(pcpu_first_chunk, -1);
+ 
+ 	/* include all regions of the first chunk */
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 98a0aaaf0d502..a510f7f86a7dc 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -891,6 +891,7 @@ batadv_tt_prepare_tvlv_global_data(struct batadv_orig_node *orig_node,
+ 	hlist_for_each_entry(vlan, &orig_node->vlan_list, list) {
+ 		tt_vlan->vid = htons(vlan->vid);
+ 		tt_vlan->crc = htonl(vlan->tt.crc);
++		tt_vlan->reserved = 0;
+ 
+ 		tt_vlan++;
+ 	}
+@@ -974,6 +975,7 @@ batadv_tt_prepare_tvlv_local_data(struct batadv_priv *bat_priv,
+ 
+ 		tt_vlan->vid = htons(vlan->vid);
+ 		tt_vlan->crc = htonl(vlan->tt.crc);
++		tt_vlan->reserved = 0;
+ 
+ 		tt_vlan++;
+ 	}
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index 0e5c37be4a2bd..909b9e684e043 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -86,6 +86,8 @@ MODULE_LICENSE("Dual BSD/GPL");
+ MODULE_AUTHOR("Oliver Hartkopp <oliver.hartkopp@volkswagen.de>");
+ MODULE_ALIAS("can-proto-2");
+ 
++#define BCM_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex)
++
+ /*
+  * easy access to the first 64 bit of can(fd)_frame payload. cp->data is
+  * 64 bit aligned so the offset has to be multiples of 8 which is ensured
+@@ -1292,7 +1294,7 @@ static int bcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 		/* no bound device as default => check msg_name */
+ 		DECLARE_SOCKADDR(struct sockaddr_can *, addr, msg->msg_name);
+ 
+-		if (msg->msg_namelen < CAN_REQUIRED_SIZE(*addr, can_ifindex))
++		if (msg->msg_namelen < BCM_MIN_NAMELEN)
+ 			return -EINVAL;
+ 
+ 		if (addr->can_family != AF_CAN)
+@@ -1534,7 +1536,7 @@ static int bcm_connect(struct socket *sock, struct sockaddr *uaddr, int len,
+ 	struct net *net = sock_net(sk);
+ 	int ret = 0;
+ 
+-	if (len < CAN_REQUIRED_SIZE(*addr, can_ifindex))
++	if (len < BCM_MIN_NAMELEN)
+ 		return -EINVAL;
+ 
+ 	lock_sock(sk);
+@@ -1616,8 +1618,8 @@ static int bcm_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 	sock_recv_ts_and_drops(msg, sk, skb);
+ 
+ 	if (msg->msg_name) {
+-		__sockaddr_check_size(sizeof(struct sockaddr_can));
+-		msg->msg_namelen = sizeof(struct sockaddr_can);
++		__sockaddr_check_size(BCM_MIN_NAMELEN);
++		msg->msg_namelen = BCM_MIN_NAMELEN;
+ 		memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
+ 	}
+ 
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index ea1e227b8e544..d5780ab29e098 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -77,6 +77,8 @@ MODULE_LICENSE("Dual BSD/GPL");
+ MODULE_AUTHOR("Oliver Hartkopp <socketcan@hartkopp.net>");
+ MODULE_ALIAS("can-proto-6");
+ 
++#define ISOTP_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_addr.tp)
++
+ #define SINGLE_MASK(id) (((id) & CAN_EFF_FLAG) ? \
+ 			 (CAN_EFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG) : \
+ 			 (CAN_SFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG))
+@@ -981,7 +983,8 @@ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 	sock_recv_timestamp(msg, sk, skb);
+ 
+ 	if (msg->msg_name) {
+-		msg->msg_namelen = sizeof(struct sockaddr_can);
++		__sockaddr_check_size(ISOTP_MIN_NAMELEN);
++		msg->msg_namelen = ISOTP_MIN_NAMELEN;
+ 		memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
+ 	}
+ 
+@@ -1050,7 +1053,7 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 	int err = 0;
+ 	int notify_enetdown = 0;
+ 
+-	if (len < CAN_REQUIRED_SIZE(struct sockaddr_can, can_addr.tp))
++	if (len < ISOTP_MIN_NAMELEN)
+ 		return -EINVAL;
+ 
+ 	if (addr->can_addr.tp.rx_id == addr->can_addr.tp.tx_id)
+@@ -1136,13 +1139,13 @@ static int isotp_getname(struct socket *sock, struct sockaddr *uaddr, int peer)
+ 	if (peer)
+ 		return -EOPNOTSUPP;
+ 
+-	memset(addr, 0, sizeof(*addr));
++	memset(addr, 0, ISOTP_MIN_NAMELEN);
+ 	addr->can_family = AF_CAN;
+ 	addr->can_ifindex = so->ifindex;
+ 	addr->can_addr.tp.rx_id = so->rxid;
+ 	addr->can_addr.tp.tx_id = so->txid;
+ 
+-	return sizeof(*addr);
++	return ISOTP_MIN_NAMELEN;
+ }
+ 
+ static int isotp_setsockopt(struct socket *sock, int level, int optname,
+diff --git a/net/can/raw.c b/net/can/raw.c
+index 6ec8aa1d0da46..95113b0898b24 100644
+--- a/net/can/raw.c
++++ b/net/can/raw.c
+@@ -60,6 +60,8 @@ MODULE_LICENSE("Dual BSD/GPL");
+ MODULE_AUTHOR("Urs Thuermann <urs.thuermann@volkswagen.de>");
+ MODULE_ALIAS("can-proto-1");
+ 
++#define RAW_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex)
++
+ #define MASK_ALL 0
+ 
+ /* A raw socket has a list of can_filters attached to it, each receiving
+@@ -394,7 +396,7 @@ static int raw_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 	int err = 0;
+ 	int notify_enetdown = 0;
+ 
+-	if (len < CAN_REQUIRED_SIZE(*addr, can_ifindex))
++	if (len < RAW_MIN_NAMELEN)
+ 		return -EINVAL;
+ 	if (addr->can_family != AF_CAN)
+ 		return -EINVAL;
+@@ -475,11 +477,11 @@ static int raw_getname(struct socket *sock, struct sockaddr *uaddr,
+ 	if (peer)
+ 		return -EOPNOTSUPP;
+ 
+-	memset(addr, 0, sizeof(*addr));
++	memset(addr, 0, RAW_MIN_NAMELEN);
+ 	addr->can_family  = AF_CAN;
+ 	addr->can_ifindex = ro->ifindex;
+ 
+-	return sizeof(*addr);
++	return RAW_MIN_NAMELEN;
+ }
+ 
+ static int raw_setsockopt(struct socket *sock, int level, int optname,
+@@ -731,7 +733,7 @@ static int raw_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	if (msg->msg_name) {
+ 		DECLARE_SOCKADDR(struct sockaddr_can *, addr, msg->msg_name);
+ 
+-		if (msg->msg_namelen < CAN_REQUIRED_SIZE(*addr, can_ifindex))
++		if (msg->msg_namelen < RAW_MIN_NAMELEN)
+ 			return -EINVAL;
+ 
+ 		if (addr->can_family != AF_CAN)
+@@ -824,8 +826,8 @@ static int raw_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 	sock_recv_ts_and_drops(msg, sk, skb);
+ 
+ 	if (msg->msg_name) {
+-		__sockaddr_check_size(sizeof(struct sockaddr_can));
+-		msg->msg_namelen = sizeof(struct sockaddr_can);
++		__sockaddr_check_size(RAW_MIN_NAMELEN);
++		msg->msg_namelen = RAW_MIN_NAMELEN;
+ 		memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
+ 	}
+ 
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 25cdbb20f3a03..923a1d0f84ca3 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -488,6 +488,7 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
+ 	if (unlikely(!msg))
+ 		return -EAGAIN;
+ 	sk_msg_init(msg);
++	skb_set_owner_r(skb, sk);
+ 	return sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
+ }
+ 
+@@ -791,7 +792,6 @@ static void sk_psock_tls_verdict_apply(struct sk_buff *skb, struct sock *sk, int
+ {
+ 	switch (verdict) {
+ 	case __SK_REDIRECT:
+-		skb_set_owner_r(skb, sk);
+ 		sk_psock_skb_redirect(skb);
+ 		break;
+ 	case __SK_PASS:
+@@ -809,10 +809,6 @@ int sk_psock_tls_strp_read(struct sk_psock *psock, struct sk_buff *skb)
+ 	rcu_read_lock();
+ 	prog = READ_ONCE(psock->progs.skb_verdict);
+ 	if (likely(prog)) {
+-		/* We skip full set_owner_r here because if we do a SK_PASS
+-		 * or SK_DROP we can skip skb memory accounting and use the
+-		 * TLS context.
+-		 */
+ 		skb->sk = psock->sk;
+ 		tcp_skb_bpf_redirect_clear(skb);
+ 		ret = sk_psock_bpf_run(psock, prog, skb);
+@@ -881,12 +877,13 @@ static void sk_psock_strp_read(struct strparser *strp, struct sk_buff *skb)
+ 		kfree_skb(skb);
+ 		goto out;
+ 	}
+-	skb_set_owner_r(skb, sk);
+ 	prog = READ_ONCE(psock->progs.skb_verdict);
+ 	if (likely(prog)) {
++		skb->sk = sk;
+ 		tcp_skb_bpf_redirect_clear(skb);
+ 		ret = sk_psock_bpf_run(psock, prog, skb);
+ 		ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
++		skb->sk = NULL;
+ 	}
+ 	sk_psock_verdict_apply(psock, skb, ret);
+ out:
+@@ -957,12 +954,13 @@ static int sk_psock_verdict_recv(read_descriptor_t *desc, struct sk_buff *skb,
+ 		kfree_skb(skb);
+ 		goto out;
+ 	}
+-	skb_set_owner_r(skb, sk);
+ 	prog = READ_ONCE(psock->progs.skb_verdict);
+ 	if (likely(prog)) {
++		skb->sk = sk;
+ 		tcp_skb_bpf_redirect_clear(skb);
+ 		ret = sk_psock_bpf_run(psock, prog, skb);
+ 		ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
++		skb->sk = NULL;
+ 	}
+ 	sk_psock_verdict_apply(psock, skb, ret);
+ out:
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 727ea1cc633ca..c75c1e723a840 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2099,16 +2099,10 @@ void skb_orphan_partial(struct sk_buff *skb)
+ 	if (skb_is_tcp_pure_ack(skb))
+ 		return;
+ 
+-	if (can_skb_orphan_partial(skb)) {
+-		struct sock *sk = skb->sk;
+-
+-		if (refcount_inc_not_zero(&sk->sk_refcnt)) {
+-			WARN_ON(refcount_sub_and_test(skb->truesize, &sk->sk_wmem_alloc));
+-			skb->destructor = sock_efree;
+-		}
+-	} else {
++	if (can_skb_orphan_partial(skb))
++		skb_set_owner_sk_safe(skb, skb->sk);
++	else
+ 		skb_orphan(skb);
+-	}
+ }
+ EXPORT_SYMBOL(skb_orphan_partial);
+ 
+diff --git a/net/core/xdp.c b/net/core/xdp.c
+index d900cebc0acd7..b8d7fa47d293c 100644
+--- a/net/core/xdp.c
++++ b/net/core/xdp.c
+@@ -349,7 +349,8 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
+ 		/* mem->id is valid, checked in xdp_rxq_info_reg_mem_model() */
+ 		xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
+ 		page = virt_to_head_page(data);
+-		napi_direct &= !xdp_return_frame_no_direct();
++		if (napi_direct && xdp_return_frame_no_direct())
++			napi_direct = false;
+ 		page_pool_put_full_page(xa->page_pool, page, napi_direct);
+ 		rcu_read_unlock();
+ 		break;
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index a04fd637b4cdc..3ada338d7e08b 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -533,8 +533,14 @@ static int dsa_tree_setup_switches(struct dsa_switch_tree *dst)
+ 
+ 	list_for_each_entry(dp, &dst->ports, list) {
+ 		err = dsa_port_setup(dp);
+-		if (err)
++		if (err) {
++			dsa_port_devlink_teardown(dp);
++			dp->type = DSA_PORT_TYPE_UNUSED;
++			err = dsa_port_devlink_setup(dp);
++			if (err)
++				goto teardown;
+ 			continue;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/net/ethtool/eee.c b/net/ethtool/eee.c
+index 901b7de941abd..e10bfcc078531 100644
+--- a/net/ethtool/eee.c
++++ b/net/ethtool/eee.c
+@@ -169,8 +169,8 @@ int ethnl_set_eee(struct sk_buff *skb, struct genl_info *info)
+ 	ethnl_update_bool32(&eee.eee_enabled, tb[ETHTOOL_A_EEE_ENABLED], &mod);
+ 	ethnl_update_bool32(&eee.tx_lpi_enabled,
+ 			    tb[ETHTOOL_A_EEE_TX_LPI_ENABLED], &mod);
+-	ethnl_update_bool32(&eee.tx_lpi_timer, tb[ETHTOOL_A_EEE_TX_LPI_TIMER],
+-			    &mod);
++	ethnl_update_u32(&eee.tx_lpi_timer, tb[ETHTOOL_A_EEE_TX_LPI_TIMER],
++			 &mod);
+ 	ret = 0;
+ 	if (!mod)
+ 		goto out_ops;
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index ab953a1a0d6cc..6f4c34b6a5d69 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -217,6 +217,7 @@ static netdev_tx_t hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	master = hsr_port_get_hsr(hsr, HSR_PT_MASTER);
+ 	if (master) {
+ 		skb->dev = master->dev;
++		skb_reset_mac_header(skb);
+ 		hsr_forward_skb(skb, master);
+ 	} else {
+ 		atomic_long_inc(&dev->tx_dropped);
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index cadfccd7876e4..b4e06ae088348 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -528,12 +528,6 @@ void hsr_forward_skb(struct sk_buff *skb, struct hsr_port *port)
+ {
+ 	struct hsr_frame_info frame;
+ 
+-	if (skb_mac_header(skb) != skb->data) {
+-		WARN_ONCE(1, "%s:%d: Malformed frame (port_src %s)\n",
+-			  __FILE__, __LINE__, port->dev->name);
+-		goto out_drop;
+-	}
+-
+ 	if (fill_frame_info(&frame, skb, port) < 0)
+ 		goto out_drop;
+ 
+diff --git a/net/ieee802154/nl-mac.c b/net/ieee802154/nl-mac.c
+index 6d091e419d3ee..d19c40c684e80 100644
+--- a/net/ieee802154/nl-mac.c
++++ b/net/ieee802154/nl-mac.c
+@@ -551,9 +551,7 @@ ieee802154_llsec_parse_key_id(struct genl_info *info,
+ 	desc->mode = nla_get_u8(info->attrs[IEEE802154_ATTR_LLSEC_KEY_MODE]);
+ 
+ 	if (desc->mode == IEEE802154_SCF_KEY_IMPLICIT) {
+-		if (!info->attrs[IEEE802154_ATTR_PAN_ID] &&
+-		    !(info->attrs[IEEE802154_ATTR_SHORT_ADDR] ||
+-		      info->attrs[IEEE802154_ATTR_HW_ADDR]))
++		if (!info->attrs[IEEE802154_ATTR_PAN_ID])
+ 			return -EINVAL;
+ 
+ 		desc->device_addr.pan_id = nla_get_shortaddr(info->attrs[IEEE802154_ATTR_PAN_ID]);
+@@ -562,6 +560,9 @@ ieee802154_llsec_parse_key_id(struct genl_info *info,
+ 			desc->device_addr.mode = IEEE802154_ADDR_SHORT;
+ 			desc->device_addr.short_addr = nla_get_shortaddr(info->attrs[IEEE802154_ATTR_SHORT_ADDR]);
+ 		} else {
++			if (!info->attrs[IEEE802154_ATTR_HW_ADDR])
++				return -EINVAL;
++
+ 			desc->device_addr.mode = IEEE802154_ADDR_LONG;
+ 			desc->device_addr.extended_addr = nla_get_hwaddr(info->attrs[IEEE802154_ATTR_HW_ADDR]);
+ 		}
+diff --git a/net/ieee802154/nl802154.c b/net/ieee802154/nl802154.c
+index 7c5a1aa5adb42..d1b6a9665b170 100644
+--- a/net/ieee802154/nl802154.c
++++ b/net/ieee802154/nl802154.c
+@@ -820,8 +820,13 @@ nl802154_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flags,
+ 		goto nla_put_failure;
+ 
+ #ifdef CONFIG_IEEE802154_NL802154_EXPERIMENTAL
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
++		goto out;
++
+ 	if (nl802154_get_llsec_params(msg, rdev, wpan_dev) < 0)
+ 		goto nla_put_failure;
++
++out:
+ #endif /* CONFIG_IEEE802154_NL802154_EXPERIMENTAL */
+ 
+ 	genlmsg_end(msg, hdr);
+@@ -1384,6 +1389,9 @@ static int nl802154_set_llsec_params(struct sk_buff *skb,
+ 	u32 changed = 0;
+ 	int ret;
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
++		return -EOPNOTSUPP;
++
+ 	if (info->attrs[NL802154_ATTR_SEC_ENABLED]) {
+ 		u8 enabled;
+ 
+@@ -1544,7 +1552,8 @@ static int nl802154_add_llsec_key(struct sk_buff *skb, struct genl_info *info)
+ 	struct ieee802154_llsec_key_id id = { };
+ 	u32 commands[NL802154_CMD_FRAME_NR_IDS / 32] = { };
+ 
+-	if (nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
++	if (!info->attrs[NL802154_ATTR_SEC_KEY] ||
++	    nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
+ 		return -EINVAL;
+ 
+ 	if (!attrs[NL802154_KEY_ATTR_USAGE_FRAMES] ||
+@@ -1592,7 +1601,8 @@ static int nl802154_del_llsec_key(struct sk_buff *skb, struct genl_info *info)
+ 	struct nlattr *attrs[NL802154_KEY_ATTR_MAX + 1];
+ 	struct ieee802154_llsec_key_id id;
+ 
+-	if (nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
++	if (!info->attrs[NL802154_ATTR_SEC_KEY] ||
++	    nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
+ 		return -EINVAL;
+ 
+ 	if (ieee802154_llsec_parse_key_id(attrs[NL802154_KEY_ATTR_ID], &id) < 0)
+@@ -1757,7 +1767,8 @@ static int nl802154_del_llsec_dev(struct sk_buff *skb, struct genl_info *info)
+ 	struct nlattr *attrs[NL802154_DEV_ATTR_MAX + 1];
+ 	__le64 extended_addr;
+ 
+-	if (nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack))
++	if (!info->attrs[NL802154_ATTR_SEC_DEVICE] ||
++	    nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack))
+ 		return -EINVAL;
+ 
+ 	if (!attrs[NL802154_DEV_ATTR_EXTENDED_ADDR])
+@@ -1913,7 +1924,8 @@ static int nl802154_del_llsec_devkey(struct sk_buff *skb, struct genl_info *info
+ 	struct ieee802154_llsec_device_key key;
+ 	__le64 extended_addr;
+ 
+-	if (nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack))
++	if (!info->attrs[NL802154_ATTR_SEC_DEVKEY] ||
++	    nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack))
+ 		return -EINVAL;
+ 
+ 	if (!attrs[NL802154_DEVKEY_ATTR_EXTENDED_ADDR])
+@@ -2085,6 +2097,9 @@ static int nl802154_del_llsec_seclevel(struct sk_buff *skb,
+ 	struct wpan_dev *wpan_dev = dev->ieee802154_ptr;
+ 	struct ieee802154_llsec_seclevel sl;
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
++		return -EOPNOTSUPP;
++
+ 	if (!info->attrs[NL802154_ATTR_SEC_LEVEL] ||
+ 	    llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL],
+ 				 &sl) < 0)
+diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c
+index d99e1be94019d..36ed85bf2ad51 100644
+--- a/net/ipv4/ah4.c
++++ b/net/ipv4/ah4.c
+@@ -141,7 +141,7 @@ static void ah_output_done(struct crypto_async_request *base, int err)
+ 	}
+ 
+ 	kfree(AH_SKB_CB(skb)->tmp);
+-	xfrm_output_resume(skb, err);
++	xfrm_output_resume(skb->sk, skb, err);
+ }
+ 
+ static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index a3271ec3e1627..4b834bbf95e07 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -279,7 +279,7 @@ static void esp_output_done(struct crypto_async_request *base, int err)
+ 		    x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP)
+ 			esp_output_tail_tcp(x, skb);
+ 		else
+-			xfrm_output_resume(skb, err);
++			xfrm_output_resume(skb->sk, skb, err);
+ 	}
+ }
+ 
+diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
+index 5bda5aeda5791..5aa7344dbec7f 100644
+--- a/net/ipv4/esp4_offload.c
++++ b/net/ipv4/esp4_offload.c
+@@ -217,10 +217,12 @@ static struct sk_buff *esp4_gso_segment(struct sk_buff *skb,
+ 
+ 	if ((!(skb->dev->gso_partial_features & NETIF_F_HW_ESP) &&
+ 	     !(features & NETIF_F_HW_ESP)) || x->xso.dev != skb->dev)
+-		esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK);
++		esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK |
++					    NETIF_F_SCTP_CRC);
+ 	else if (!(features & NETIF_F_HW_ESP_TX_CSUM) &&
+ 		 !(skb->dev->gso_partial_features & NETIF_F_HW_ESP_TX_CSUM))
+-		esp_features = features & ~NETIF_F_CSUM_MASK;
++		esp_features = features & ~(NETIF_F_CSUM_MASK |
++					    NETIF_F_SCTP_CRC);
+ 
+ 	xo->flags |= XFRM_GSO_SEGMENT;
+ 
+@@ -312,8 +314,17 @@ static int esp_xmit(struct xfrm_state *x, struct sk_buff *skb,  netdev_features_
+ 	ip_hdr(skb)->tot_len = htons(skb->len);
+ 	ip_send_check(ip_hdr(skb));
+ 
+-	if (hw_offload)
++	if (hw_offload) {
++		if (!skb_ext_add(skb, SKB_EXT_SEC_PATH))
++			return -ENOMEM;
++
++		xo = xfrm_offload(skb);
++		if (!xo)
++			return -EINVAL;
++
++		xo->flags |= XFRM_XMIT;
+ 		return 0;
++	}
+ 
+ 	err = esp_output_tail(x, skb, &esp);
+ 	if (err)
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index e37a2fa65c294..4a2fd286787c0 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2747,6 +2747,10 @@ int udp_lib_getsockopt(struct sock *sk, int level, int optname,
+ 		val = up->gso_size;
+ 		break;
+ 
++	case UDP_GRO:
++		val = up->gro_enabled;
++		break;
++
+ 	/* The following two cannot be changed on UDP sockets, the return is
+ 	 * always 0 (which corresponds to the full checksum coverage of UDP). */
+ 	case UDPLITE_SEND_CSCOV:
+diff --git a/net/ipv6/ah6.c b/net/ipv6/ah6.c
+index 440080da805b5..080ee7f44c649 100644
+--- a/net/ipv6/ah6.c
++++ b/net/ipv6/ah6.c
+@@ -316,7 +316,7 @@ static void ah6_output_done(struct crypto_async_request *base, int err)
+ 	}
+ 
+ 	kfree(AH_SKB_CB(skb)->tmp);
+-	xfrm_output_resume(skb, err);
++	xfrm_output_resume(skb->sk, skb, err);
+ }
+ 
+ static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 2b804fcebcc65..4071cb7c7a154 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -314,7 +314,7 @@ static void esp_output_done(struct crypto_async_request *base, int err)
+ 		    x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP)
+ 			esp_output_tail_tcp(x, skb);
+ 		else
+-			xfrm_output_resume(skb, err);
++			xfrm_output_resume(skb->sk, skb, err);
+ 	}
+ }
+ 
+diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
+index 1ca516fb30e1c..4af56affaafd4 100644
+--- a/net/ipv6/esp6_offload.c
++++ b/net/ipv6/esp6_offload.c
+@@ -254,9 +254,11 @@ static struct sk_buff *esp6_gso_segment(struct sk_buff *skb,
+ 	skb->encap_hdr_csum = 1;
+ 
+ 	if (!(features & NETIF_F_HW_ESP) || x->xso.dev != skb->dev)
+-		esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK);
++		esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK |
++					    NETIF_F_SCTP_CRC);
+ 	else if (!(features & NETIF_F_HW_ESP_TX_CSUM))
+-		esp_features = features & ~NETIF_F_CSUM_MASK;
++		esp_features = features & ~(NETIF_F_CSUM_MASK |
++					    NETIF_F_SCTP_CRC);
+ 
+ 	xo->flags |= XFRM_GSO_SEGMENT;
+ 
+@@ -346,8 +348,17 @@ static int esp6_xmit(struct xfrm_state *x, struct sk_buff *skb,  netdev_features
+ 
+ 	ipv6_hdr(skb)->payload_len = htons(len);
+ 
+-	if (hw_offload)
++	if (hw_offload) {
++		if (!skb_ext_add(skb, SKB_EXT_SEC_PATH))
++			return -ENOMEM;
++
++		xo = xfrm_offload(skb);
++		if (!xo)
++			return -EINVAL;
++
++		xo->flags |= XFRM_XMIT;
+ 		return 0;
++	}
+ 
+ 	err = esp6_output_tail(x, skb, &esp);
+ 	if (err)
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 6e4ab80a3b944..00f133a55ef7c 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -298,7 +298,7 @@ static int rawv6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 		 */
+ 		v4addr = LOOPBACK4_IPV6;
+ 		if (!(addr_type & IPV6_ADDR_MULTICAST) &&
+-		    !sock_net(sk)->ipv6.sysctl.ip_nonlocal_bind) {
++		    !ipv6_can_nonlocal_bind(sock_net(sk), inet)) {
+ 			err = -EADDRNOTAVAIL;
+ 			if (!ipv6_chk_addr(sock_net(sk), &addr->sin6_addr,
+ 					   dev, 0)) {
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index fa276448d5a2a..71e578ed8699f 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -5203,9 +5203,11 @@ static int ip6_route_multipath_add(struct fib6_config *cfg,
+ 		 * nexthops have been replaced by first new, the rest should
+ 		 * be added to it.
+ 		 */
+-		cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL |
+-						     NLM_F_REPLACE);
+-		cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_CREATE;
++		if (cfg->fc_nlinfo.nlh) {
++			cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL |
++							     NLM_F_REPLACE);
++			cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_CREATE;
++		}
+ 		nhn++;
+ 	}
+ 
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 3f483e84d5df3..ef19c3399b893 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -4660,7 +4660,10 @@ static void ieee80211_sta_conn_mon_timer(struct timer_list *t)
+ 		timeout = sta->rx_stats.last_rx;
+ 	timeout += IEEE80211_CONNECTION_IDLE_TIME;
+ 
+-	if (time_is_before_jiffies(timeout)) {
++	/* If timeout is after now, then update timer to fire at
++	 * the later date, but do not actually probe at this time.
++	 */
++	if (time_is_after_jiffies(timeout)) {
+ 		mod_timer(&ifmgd->conn_mon_timer, round_jiffies_up(timeout));
+ 		return;
+ 	}
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 88868bf300513..1d8526d89505f 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3605,7 +3605,7 @@ begin:
+ 	    test_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txqi->flags))
+ 		goto out;
+ 
+-	if (vif->txqs_stopped[ieee80211_ac_from_tid(txq->tid)]) {
++	if (vif->txqs_stopped[txq->ac]) {
+ 		set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txqi->flags);
+ 		goto out;
+ 	}
+diff --git a/net/mac802154/llsec.c b/net/mac802154/llsec.c
+index 585d33144c33f..55550ead2ced8 100644
+--- a/net/mac802154/llsec.c
++++ b/net/mac802154/llsec.c
+@@ -152,7 +152,7 @@ err_tfm0:
+ 	crypto_free_sync_skcipher(key->tfm0);
+ err_tfm:
+ 	for (i = 0; i < ARRAY_SIZE(key->tfm); i++)
+-		if (key->tfm[i])
++		if (!IS_ERR_OR_NULL(key->tfm[i]))
+ 			crypto_free_aead(key->tfm[i]);
+ 
+ 	kfree_sensitive(key);
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index f56b2e331bb6b..27f6672589ce2 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2245,6 +2245,48 @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname,
+ 	return ret;
+ }
+ 
++static bool mptcp_unsupported(int level, int optname)
++{
++	if (level == SOL_IP) {
++		switch (optname) {
++		case IP_ADD_MEMBERSHIP:
++		case IP_ADD_SOURCE_MEMBERSHIP:
++		case IP_DROP_MEMBERSHIP:
++		case IP_DROP_SOURCE_MEMBERSHIP:
++		case IP_BLOCK_SOURCE:
++		case IP_UNBLOCK_SOURCE:
++		case MCAST_JOIN_GROUP:
++		case MCAST_LEAVE_GROUP:
++		case MCAST_JOIN_SOURCE_GROUP:
++		case MCAST_LEAVE_SOURCE_GROUP:
++		case MCAST_BLOCK_SOURCE:
++		case MCAST_UNBLOCK_SOURCE:
++		case MCAST_MSFILTER:
++			return true;
++		}
++		return false;
++	}
++	if (level == SOL_IPV6) {
++		switch (optname) {
++		case IPV6_ADDRFORM:
++		case IPV6_ADD_MEMBERSHIP:
++		case IPV6_DROP_MEMBERSHIP:
++		case IPV6_JOIN_ANYCAST:
++		case IPV6_LEAVE_ANYCAST:
++		case MCAST_JOIN_GROUP:
++		case MCAST_LEAVE_GROUP:
++		case MCAST_JOIN_SOURCE_GROUP:
++		case MCAST_LEAVE_SOURCE_GROUP:
++		case MCAST_BLOCK_SOURCE:
++		case MCAST_UNBLOCK_SOURCE:
++		case MCAST_MSFILTER:
++			return true;
++		}
++		return false;
++	}
++	return false;
++}
++
+ static int mptcp_setsockopt(struct sock *sk, int level, int optname,
+ 			    sockptr_t optval, unsigned int optlen)
+ {
+@@ -2253,6 +2295,9 @@ static int mptcp_setsockopt(struct sock *sk, int level, int optname,
+ 
+ 	pr_debug("msk=%p", msk);
+ 
++	if (mptcp_unsupported(level, optname))
++		return -ENOPROTOOPT;
++
+ 	if (level == SOL_SOCKET)
+ 		return mptcp_setsockopt_sol_socket(msk, optname, optval, optlen);
+ 
+diff --git a/net/ncsi/ncsi-manage.c b/net/ncsi/ncsi-manage.c
+index a9cb355324d1a..ffff8da707b8c 100644
+--- a/net/ncsi/ncsi-manage.c
++++ b/net/ncsi/ncsi-manage.c
+@@ -105,13 +105,20 @@ static void ncsi_channel_monitor(struct timer_list *t)
+ 	monitor_state = nc->monitor.state;
+ 	spin_unlock_irqrestore(&nc->lock, flags);
+ 
+-	if (!enabled || chained) {
+-		ncsi_stop_channel_monitor(nc);
+-		return;
+-	}
++	if (!enabled)
++		return;		/* expected race disabling timer */
++	if (WARN_ON_ONCE(chained))
++		goto bad_state;
++
+ 	if (state != NCSI_CHANNEL_INACTIVE &&
+ 	    state != NCSI_CHANNEL_ACTIVE) {
+-		ncsi_stop_channel_monitor(nc);
++bad_state:
++		netdev_warn(ndp->ndev.dev,
++			    "Bad NCSI monitor state channel %d 0x%x %s queue\n",
++			    nc->id, state, chained ? "on" : "off");
++		spin_lock_irqsave(&nc->lock, flags);
++		nc->monitor.enabled = false;
++		spin_unlock_irqrestore(&nc->lock, flags);
+ 		return;
+ 	}
+ 
+@@ -136,10 +143,9 @@ static void ncsi_channel_monitor(struct timer_list *t)
+ 		ncsi_report_link(ndp, true);
+ 		ndp->flags |= NCSI_DEV_RESHUFFLE;
+ 
+-		ncsi_stop_channel_monitor(nc);
+-
+ 		ncm = &nc->modes[NCSI_MODE_LINK];
+ 		spin_lock_irqsave(&nc->lock, flags);
++		nc->monitor.enabled = false;
+ 		nc->state = NCSI_CHANNEL_INVISIBLE;
+ 		ncm->data[2] &= ~0x1;
+ 		spin_unlock_irqrestore(&nc->lock, flags);
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index d257ed3b732ae..a3b46f8888033 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -108,11 +108,13 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
+ 					  llcp_sock->service_name_len,
+ 					  GFP_KERNEL);
+ 	if (!llcp_sock->service_name) {
++		nfc_llcp_local_put(llcp_sock->local);
+ 		ret = -ENOMEM;
+ 		goto put_dev;
+ 	}
+ 	llcp_sock->ssap = nfc_llcp_get_sdp_ssap(local, llcp_sock);
+ 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
++		nfc_llcp_local_put(llcp_sock->local);
+ 		kfree(llcp_sock->service_name);
+ 		llcp_sock->service_name = NULL;
+ 		ret = -EADDRINUSE;
+@@ -671,6 +673,10 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
+ 		ret = -EISCONN;
+ 		goto error;
+ 	}
++	if (sk->sk_state == LLCP_CONNECTING) {
++		ret = -EINPROGRESS;
++		goto error;
++	}
+ 
+ 	dev = nfc_get_device(addr->dev_idx);
+ 	if (dev == NULL) {
+@@ -702,6 +708,7 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
+ 	llcp_sock->local = nfc_llcp_local_get(local);
+ 	llcp_sock->ssap = nfc_llcp_get_local_ssap(local);
+ 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
++		nfc_llcp_local_put(llcp_sock->local);
+ 		ret = -ENOMEM;
+ 		goto put_dev;
+ 	}
+@@ -743,9 +750,12 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
+ 
+ sock_unlink:
+ 	nfc_llcp_sock_unlink(&local->connecting_sockets, sk);
++	kfree(llcp_sock->service_name);
++	llcp_sock->service_name = NULL;
+ 
+ sock_llcp_release:
+ 	nfc_llcp_put_ssap(local, llcp_sock->ssap);
++	nfc_llcp_local_put(llcp_sock->local);
+ 
+ put_dev:
+ 	nfc_put_device(dev);
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index 4beb96139d776..a11b558813c10 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -2024,16 +2024,12 @@ static int ovs_ct_limit_del_zone_limit(struct nlattr *nla_zone_limit,
+ static int ovs_ct_limit_get_default_limit(struct ovs_ct_limit_info *info,
+ 					  struct sk_buff *reply)
+ {
+-	struct ovs_zone_limit zone_limit;
+-	int err;
++	struct ovs_zone_limit zone_limit = {
++		.zone_id = OVS_ZONE_LIMIT_DEFAULT_ZONE,
++		.limit   = info->default_limit,
++	};
+ 
+-	zone_limit.zone_id = OVS_ZONE_LIMIT_DEFAULT_ZONE;
+-	zone_limit.limit = info->default_limit;
+-	err = nla_put_nohdr(reply, sizeof(zone_limit), &zone_limit);
+-	if (err)
+-		return err;
+-
+-	return 0;
++	return nla_put_nohdr(reply, sizeof(zone_limit), &zone_limit);
+ }
+ 
+ static int __ovs_ct_limit_get_zone_limit(struct net *net,
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 45fbf5f4dcd25..93a7edcff11e7 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -266,7 +266,10 @@ static int qrtr_tx_wait(struct qrtr_node *node, int dest_node, int dest_port,
+ 		flow = kzalloc(sizeof(*flow), GFP_KERNEL);
+ 		if (flow) {
+ 			init_waitqueue_head(&flow->resume_tx);
+-			radix_tree_insert(&node->qrtr_tx_flow, key, flow);
++			if (radix_tree_insert(&node->qrtr_tx_flow, key, flow)) {
++				kfree(flow);
++				flow = NULL;
++			}
+ 		}
+ 	}
+ 	mutex_unlock(&node->qrtr_tx_lock);
+diff --git a/net/rds/message.c b/net/rds/message.c
+index 071a261fdaabb..799034e0f513d 100644
+--- a/net/rds/message.c
++++ b/net/rds/message.c
+@@ -347,8 +347,9 @@ struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned in
+ 	rm->data.op_nents = DIV_ROUND_UP(total_len, PAGE_SIZE);
+ 	rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs);
+ 	if (IS_ERR(rm->data.op_sg)) {
++		void *err = ERR_CAST(rm->data.op_sg);
+ 		rds_message_put(rm);
+-		return ERR_CAST(rm->data.op_sg);
++		return err;
+ 	}
+ 
+ 	for (i = 0; i < rm->data.op_nents; ++i) {
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index 181c4b501225f..88e14cfeb5d52 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -142,7 +142,7 @@ static int __tcf_action_put(struct tc_action *p, bool bind)
+ 	return 0;
+ }
+ 
+-int __tcf_idr_release(struct tc_action *p, bool bind, bool strict)
++static int __tcf_idr_release(struct tc_action *p, bool bind, bool strict)
+ {
+ 	int ret = 0;
+ 
+@@ -168,7 +168,18 @@ int __tcf_idr_release(struct tc_action *p, bool bind, bool strict)
+ 
+ 	return ret;
+ }
+-EXPORT_SYMBOL(__tcf_idr_release);
++
++int tcf_idr_release(struct tc_action *a, bool bind)
++{
++	const struct tc_action_ops *ops = a->ops;
++	int ret;
++
++	ret = __tcf_idr_release(a, bind, false);
++	if (ret == ACT_P_DELETED)
++		module_put(ops->owner);
++	return ret;
++}
++EXPORT_SYMBOL(tcf_idr_release);
+ 
+ static size_t tcf_action_shared_attrs_size(const struct tc_action *act)
+ {
+@@ -445,6 +456,7 @@ int tcf_idr_create(struct tc_action_net *tn, u32 index, struct nlattr *est,
+ 	}
+ 
+ 	p->idrinfo = idrinfo;
++	__module_get(ops->owner);
+ 	p->ops = ops;
+ 	*a = p;
+ 	return 0;
+@@ -972,7 +984,8 @@ struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla,
+ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ 				    struct nlattr *nla, struct nlattr *est,
+ 				    char *name, int ovr, int bind,
+-				    struct tc_action_ops *a_o, bool rtnl_held,
++				    struct tc_action_ops *a_o, int *init_res,
++				    bool rtnl_held,
+ 				    struct netlink_ext_ack *extack)
+ {
+ 	struct nla_bitfield32 flags = { 0, 0 };
+@@ -1008,6 +1021,7 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ 	}
+ 	if (err < 0)
+ 		goto err_out;
++	*init_res = err;
+ 
+ 	if (!name && tb[TCA_ACT_COOKIE])
+ 		tcf_set_action_cookie(&a->act_cookie, cookie);
+@@ -1015,13 +1029,6 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ 	if (!name)
+ 		a->hw_stats = hw_stats;
+ 
+-	/* module count goes up only when brand new policy is created
+-	 * if it exists and is only bound to in a_o->init() then
+-	 * ACT_P_CREATED is not returned (a zero is).
+-	 */
+-	if (err != ACT_P_CREATED)
+-		module_put(a_o->owner);
+-
+ 	return a;
+ 
+ err_out:
+@@ -1036,7 +1043,7 @@ err_out:
+ 
+ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
+ 		    struct nlattr *est, char *name, int ovr, int bind,
+-		    struct tc_action *actions[], size_t *attr_size,
++		    struct tc_action *actions[], int init_res[], size_t *attr_size,
+ 		    bool rtnl_held, struct netlink_ext_ack *extack)
+ {
+ 	struct tc_action_ops *ops[TCA_ACT_MAX_PRIO] = {};
+@@ -1064,7 +1071,8 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
+ 
+ 	for (i = 1; i <= TCA_ACT_MAX_PRIO && tb[i]; i++) {
+ 		act = tcf_action_init_1(net, tp, tb[i], est, name, ovr, bind,
+-					ops[i - 1], rtnl_held, extack);
++					ops[i - 1], &init_res[i - 1], rtnl_held,
++					extack);
+ 		if (IS_ERR(act)) {
+ 			err = PTR_ERR(act);
+ 			goto err;
+@@ -1080,7 +1088,8 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
+ 	tcf_idr_insert_many(actions);
+ 
+ 	*attr_size = tcf_action_full_attrs_size(sz);
+-	return i - 1;
++	err = i - 1;
++	goto err_mod;
+ 
+ err:
+ 	tcf_action_destroy(actions, bind);
+@@ -1477,12 +1486,13 @@ static int tcf_action_add(struct net *net, struct nlattr *nla,
+ 			  struct netlink_ext_ack *extack)
+ {
+ 	size_t attr_size = 0;
+-	int loop, ret;
++	int loop, ret, i;
+ 	struct tc_action *actions[TCA_ACT_MAX_PRIO] = {};
++	int init_res[TCA_ACT_MAX_PRIO] = {};
+ 
+ 	for (loop = 0; loop < 10; loop++) {
+ 		ret = tcf_action_init(net, NULL, nla, NULL, NULL, ovr, 0,
+-				      actions, &attr_size, true, extack);
++				      actions, init_res, &attr_size, true, extack);
+ 		if (ret != -EAGAIN)
+ 			break;
+ 	}
+@@ -1490,8 +1500,12 @@ static int tcf_action_add(struct net *net, struct nlattr *nla,
+ 	if (ret < 0)
+ 		return ret;
+ 	ret = tcf_add_notify(net, n, actions, portid, attr_size, extack);
+-	if (ovr)
+-		tcf_action_put_many(actions);
++
++	/* only put existing actions */
++	for (i = 0; i < TCA_ACT_MAX_PRIO; i++)
++		if (init_res[i] == ACT_P_CREATED)
++			actions[i] = NULL;
++	tcf_action_put_many(actions);
+ 
+ 	return ret;
+ }
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index b2b7834c6cf8a..9383dc29ead5d 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -646,7 +646,7 @@ static void tc_block_indr_cleanup(struct flow_block_cb *block_cb)
+ 	struct net_device *dev = block_cb->indr.dev;
+ 	struct Qdisc *sch = block_cb->indr.sch;
+ 	struct netlink_ext_ack extack = {};
+-	struct flow_block_offload bo;
++	struct flow_block_offload bo = {};
+ 
+ 	tcf_block_offload_init(&bo, dev, sch, FLOW_BLOCK_UNBIND,
+ 			       block_cb->indr.binder_type,
+@@ -3051,6 +3051,7 @@ int tcf_exts_validate(struct net *net, struct tcf_proto *tp, struct nlattr **tb,
+ {
+ #ifdef CONFIG_NET_CLS_ACT
+ 	{
++		int init_res[TCA_ACT_MAX_PRIO] = {};
+ 		struct tc_action *act;
+ 		size_t attr_size = 0;
+ 
+@@ -3062,12 +3063,11 @@ int tcf_exts_validate(struct net *net, struct tcf_proto *tp, struct nlattr **tb,
+ 				return PTR_ERR(a_o);
+ 			act = tcf_action_init_1(net, tp, tb[exts->police],
+ 						rate_tlv, "police", ovr,
+-						TCA_ACT_BIND, a_o, rtnl_held,
+-						extack);
+-			if (IS_ERR(act)) {
+-				module_put(a_o->owner);
++						TCA_ACT_BIND, a_o, init_res,
++						rtnl_held, extack);
++			module_put(a_o->owner);
++			if (IS_ERR(act))
+ 				return PTR_ERR(act);
+-			}
+ 
+ 			act->type = exts->type = TCA_OLD_COMPAT;
+ 			exts->actions[0] = act;
+@@ -3078,8 +3078,8 @@ int tcf_exts_validate(struct net *net, struct tcf_proto *tp, struct nlattr **tb,
+ 
+ 			err = tcf_action_init(net, tp, tb[exts->action],
+ 					      rate_tlv, NULL, ovr, TCA_ACT_BIND,
+-					      exts->actions, &attr_size,
+-					      rtnl_held, extack);
++					      exts->actions, init_res,
++					      &attr_size, rtnl_held, extack);
+ 			if (err < 0)
+ 				return err;
+ 			exts->nr_actions = err;
+diff --git a/net/sched/sch_teql.c b/net/sched/sch_teql.c
+index 2f1f0a3784083..6af6b95bdb672 100644
+--- a/net/sched/sch_teql.c
++++ b/net/sched/sch_teql.c
+@@ -134,6 +134,9 @@ teql_destroy(struct Qdisc *sch)
+ 	struct teql_sched_data *dat = qdisc_priv(sch);
+ 	struct teql_master *master = dat->m;
+ 
++	if (!master)
++		return;
++
+ 	prev = master->slaves;
+ 	if (prev) {
+ 		do {
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index 8a58f42d6d195..c8074f435d3ef 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -643,8 +643,8 @@ static int sctp_v6_available(union sctp_addr *addr, struct sctp_sock *sp)
+ 	if (!(type & IPV6_ADDR_UNICAST))
+ 		return 0;
+ 
+-	return sp->inet.freebind || net->ipv6.sysctl.ip_nonlocal_bind ||
+-		ipv6_chk_addr(net, in6, NULL, 0);
++	return ipv6_can_nonlocal_bind(net, &sp->inet) ||
++	       ipv6_chk_addr(net, in6, NULL, 0);
+ }
+ 
+ /* This function checks if the address is a valid address to be used for
+@@ -933,8 +933,7 @@ static int sctp_inet6_bind_verify(struct sctp_sock *opt, union sctp_addr *addr)
+ 			net = sock_net(&opt->inet.sk);
+ 			rcu_read_lock();
+ 			dev = dev_get_by_index_rcu(net, addr->v6.sin6_scope_id);
+-			if (!dev || !(opt->inet.freebind ||
+-				      net->ipv6.sysctl.ip_nonlocal_bind ||
++			if (!dev || !(ipv6_can_nonlocal_bind(net, &opt->inet) ||
+ 				      ipv6_chk_addr(net, &addr->v6.sin6_addr,
+ 						    dev, 0))) {
+ 				rcu_read_unlock();
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 740ab9ae41a66..86eb6d679225c 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -1934,12 +1934,13 @@ static void tipc_crypto_rcv_complete(struct net *net, struct tipc_aead *aead,
+ 			goto rcv;
+ 		if (tipc_aead_clone(&tmp, aead) < 0)
+ 			goto rcv;
++		WARN_ON(!refcount_inc_not_zero(&tmp->refcnt));
+ 		if (tipc_crypto_key_attach(rx, tmp, ehdr->tx_key, false) < 0) {
+ 			tipc_aead_free(&tmp->rcu);
+ 			goto rcv;
+ 		}
+ 		tipc_aead_put(aead);
+-		aead = tipc_aead_get(tmp);
++		aead = tmp;
+ 	}
+ 
+ 	if (unlikely(err)) {
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index e795a8a2955b5..5b18c6a46cfb8 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1244,7 +1244,7 @@ void tipc_sk_mcast_rcv(struct net *net, struct sk_buff_head *arrvq,
+ 		spin_lock_bh(&inputq->lock);
+ 		if (skb_peek(arrvq) == skb) {
+ 			skb_queue_splice_tail_init(&tmpq, inputq);
+-			kfree_skb(__skb_dequeue(arrvq));
++			__skb_dequeue(arrvq);
+ 		}
+ 		spin_unlock_bh(&inputq->lock);
+ 		__skb_queue_purge(&tmpq);
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 535e34a84d651..daf3f29c7f0cc 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -5,7 +5,7 @@
+  * Copyright 2006-2010	Johannes Berg <johannes@sipsolutions.net>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright 2015-2017	Intel Deutschland GmbH
+- * Copyright (C) 2018-2020 Intel Corporation
++ * Copyright (C) 2018-2021 Intel Corporation
+  */
+ 
+ #include <linux/if.h>
+@@ -209,9 +209,13 @@ static int validate_beacon_head(const struct nlattr *attr,
+ 	unsigned int len = nla_len(attr);
+ 	const struct element *elem;
+ 	const struct ieee80211_mgmt *mgmt = (void *)data;
+-	bool s1g_bcn = ieee80211_is_s1g_beacon(mgmt->frame_control);
+ 	unsigned int fixedlen, hdrlen;
++	bool s1g_bcn;
+ 
++	if (len < offsetofend(typeof(*mgmt), frame_control))
++		goto err;
++
++	s1g_bcn = ieee80211_is_s1g_beacon(mgmt->frame_control);
+ 	if (s1g_bcn) {
+ 		fixedlen = offsetof(struct ieee80211_ext,
+ 				    u.s1g_beacon.variable);
+@@ -5320,7 +5324,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ 			rdev, info->attrs[NL80211_ATTR_UNSOL_BCAST_PROBE_RESP],
+ 			&params);
+ 		if (err)
+-			return err;
++			goto out;
+ 	}
+ 
+ 	nl80211_calculate_ap_params(&params);
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 3409f37d838b3..345ef1c967685 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -2351,14 +2351,16 @@ cfg80211_inform_single_bss_frame_data(struct wiphy *wiphy,
+ 		return NULL;
+ 
+ 	if (ext) {
+-		struct ieee80211_s1g_bcn_compat_ie *compat;
+-		u8 *ie;
++		const struct ieee80211_s1g_bcn_compat_ie *compat;
++		const struct element *elem;
+ 
+-		ie = (void *)cfg80211_find_ie(WLAN_EID_S1G_BCN_COMPAT,
+-					      variable, ielen);
+-		if (!ie)
++		elem = cfg80211_find_elem(WLAN_EID_S1G_BCN_COMPAT,
++					  variable, ielen);
++		if (!elem)
++			return NULL;
++		if (elem->datalen < sizeof(*compat))
+ 			return NULL;
+-		compat = (void *)(ie + 2);
++		compat = (void *)elem->data;
+ 		bssid = ext->u.s1g_beacon.sa;
+ 		capability = le16_to_cpu(compat->compat_info);
+ 		beacon_int = le16_to_cpu(compat->beacon_int);
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index 38df713f2e2ed..060e365c8259b 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -530,7 +530,7 @@ static int cfg80211_sme_connect(struct wireless_dev *wdev,
+ 		cfg80211_sme_free(wdev);
+ 	}
+ 
+-	if (WARN_ON(wdev->conn))
++	if (wdev->conn)
+ 		return -EINPROGRESS;
+ 
+ 	wdev->conn = kzalloc(sizeof(*wdev->conn), GFP_KERNEL);
+diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c
+index d8e8a11ca845e..a20aec9d73933 100644
+--- a/net/xfrm/xfrm_compat.c
++++ b/net/xfrm/xfrm_compat.c
+@@ -216,7 +216,7 @@ static struct nlmsghdr *xfrm_nlmsg_put_compat(struct sk_buff *skb,
+ 	case XFRM_MSG_GETSADINFO:
+ 	case XFRM_MSG_GETSPDINFO:
+ 	default:
+-		WARN_ONCE(1, "unsupported nlmsg_type %d", nlh_src->nlmsg_type);
++		pr_warn_once("unsupported nlmsg_type %d\n", nlh_src->nlmsg_type);
+ 		return ERR_PTR(-EOPNOTSUPP);
+ 	}
+ 
+@@ -277,7 +277,7 @@ static int xfrm_xlate64_attr(struct sk_buff *dst, const struct nlattr *src)
+ 		return xfrm_nla_cpy(dst, src, nla_len(src));
+ 	default:
+ 		BUILD_BUG_ON(XFRMA_MAX != XFRMA_IF_ID);
+-		WARN_ONCE(1, "unsupported nla_type %d", src->nla_type);
++		pr_warn_once("unsupported nla_type %d\n", src->nla_type);
+ 		return -EOPNOTSUPP;
+ 	}
+ }
+@@ -315,8 +315,10 @@ static int xfrm_alloc_compat(struct sk_buff *skb, const struct nlmsghdr *nlh_src
+ 	struct sk_buff *new = NULL;
+ 	int err;
+ 
+-	if (WARN_ON_ONCE(type >= ARRAY_SIZE(xfrm_msg_min)))
++	if (type >= ARRAY_SIZE(xfrm_msg_min)) {
++		pr_warn_once("unsupported nlmsg_type %d\n", nlh_src->nlmsg_type);
+ 		return -EOPNOTSUPP;
++	}
+ 
+ 	if (skb_shinfo(skb)->frag_list == NULL) {
+ 		new = alloc_skb(skb->len + skb_tailroom(skb), GFP_ATOMIC);
+@@ -378,6 +380,10 @@ static int xfrm_attr_cpy32(void *dst, size_t *pos, const struct nlattr *src,
+ 	struct nlmsghdr *nlmsg = dst;
+ 	struct nlattr *nla;
+ 
++	/* xfrm_user_rcv_msg_compat() relies on fact that 32-bit messages
++	 * have the same len or shorted than 64-bit ones.
++	 * 32-bit translation that is bigger than 64-bit original is unexpected.
++	 */
+ 	if (WARN_ON_ONCE(copy_len > payload))
+ 		copy_len = payload;
+ 
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index edf11893dbe81..6d6917b68856f 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -134,8 +134,6 @@ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t featur
+ 		return skb;
+ 	}
+ 
+-	xo->flags |= XFRM_XMIT;
+-
+ 	if (skb_is_gso(skb) && unlikely(x->xso.dev != dev)) {
+ 		struct sk_buff *segs;
+ 
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+index 9b8e292a7c6a7..e9ce23343f5ca 100644
+--- a/net/xfrm/xfrm_interface.c
++++ b/net/xfrm/xfrm_interface.c
+@@ -305,6 +305,8 @@ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
+ 
+ 			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+ 		} else {
++			if (!(ip_hdr(skb)->frag_off & htons(IP_DF)))
++				goto xmit;
+ 			icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+ 				      htonl(mtu));
+ 		}
+@@ -313,6 +315,7 @@ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
+ 		return -EMSGSIZE;
+ 	}
+ 
++xmit:
+ 	xfrmi_scrub_packet(skb, !net_eq(xi->net, dev_net(dev)));
+ 	skb_dst_set(skb, dst);
+ 	skb->dev = tdev;
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index a7ab19353313c..b81ca117dac7a 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -503,22 +503,22 @@ out:
+ 	return err;
+ }
+ 
+-int xfrm_output_resume(struct sk_buff *skb, int err)
++int xfrm_output_resume(struct sock *sk, struct sk_buff *skb, int err)
+ {
+ 	struct net *net = xs_net(skb_dst(skb)->xfrm);
+ 
+ 	while (likely((err = xfrm_output_one(skb, err)) == 0)) {
+ 		nf_reset_ct(skb);
+ 
+-		err = skb_dst(skb)->ops->local_out(net, skb->sk, skb);
++		err = skb_dst(skb)->ops->local_out(net, sk, skb);
+ 		if (unlikely(err != 1))
+ 			goto out;
+ 
+ 		if (!skb_dst(skb)->xfrm)
+-			return dst_output(net, skb->sk, skb);
++			return dst_output(net, sk, skb);
+ 
+ 		err = nf_hook(skb_dst(skb)->ops->family,
+-			      NF_INET_POST_ROUTING, net, skb->sk, skb,
++			      NF_INET_POST_ROUTING, net, sk, skb,
+ 			      NULL, skb_dst(skb)->dev, xfrm_output2);
+ 		if (unlikely(err != 1))
+ 			goto out;
+@@ -534,7 +534,7 @@ EXPORT_SYMBOL_GPL(xfrm_output_resume);
+ 
+ static int xfrm_output2(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+-	return xfrm_output_resume(skb, 1);
++	return xfrm_output_resume(sk, skb, 1);
+ }
+ 
+ static int xfrm_output_gso(struct net *net, struct sock *sk, struct sk_buff *skb)
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 2f1517827995c..77499abd9f992 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -44,7 +44,6 @@ static void xfrm_state_gc_task(struct work_struct *work);
+  */
+ 
+ static unsigned int xfrm_state_hashmax __read_mostly = 1 * 1024 * 1024;
+-static __read_mostly seqcount_t xfrm_state_hash_generation = SEQCNT_ZERO(xfrm_state_hash_generation);
+ static struct kmem_cache *xfrm_state_cache __ro_after_init;
+ 
+ static DECLARE_WORK(xfrm_state_gc_work, xfrm_state_gc_task);
+@@ -140,7 +139,7 @@ static void xfrm_hash_resize(struct work_struct *work)
+ 	}
+ 
+ 	spin_lock_bh(&net->xfrm.xfrm_state_lock);
+-	write_seqcount_begin(&xfrm_state_hash_generation);
++	write_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
+ 
+ 	nhashmask = (nsize / sizeof(struct hlist_head)) - 1U;
+ 	odst = xfrm_state_deref_prot(net->xfrm.state_bydst, net);
+@@ -156,7 +155,7 @@ static void xfrm_hash_resize(struct work_struct *work)
+ 	rcu_assign_pointer(net->xfrm.state_byspi, nspi);
+ 	net->xfrm.state_hmask = nhashmask;
+ 
+-	write_seqcount_end(&xfrm_state_hash_generation);
++	write_seqcount_end(&net->xfrm.xfrm_state_hash_generation);
+ 	spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+ 
+ 	osize = (ohashmask + 1) * sizeof(struct hlist_head);
+@@ -1061,7 +1060,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ 
+ 	to_put = NULL;
+ 
+-	sequence = read_seqcount_begin(&xfrm_state_hash_generation);
++	sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
+ 
+ 	rcu_read_lock();
+ 	h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family);
+@@ -1174,7 +1173,7 @@ out:
+ 	if (to_put)
+ 		xfrm_state_put(to_put);
+ 
+-	if (read_seqcount_retry(&xfrm_state_hash_generation, sequence)) {
++	if (read_seqcount_retry(&net->xfrm.xfrm_state_hash_generation, sequence)) {
+ 		*err = -EAGAIN;
+ 		if (x) {
+ 			xfrm_state_put(x);
+@@ -2664,6 +2663,7 @@ int __net_init xfrm_state_init(struct net *net)
+ 	net->xfrm.state_num = 0;
+ 	INIT_WORK(&net->xfrm.state_hash_work, xfrm_hash_resize);
+ 	spin_lock_init(&net->xfrm.xfrm_state_lock);
++	seqcount_init(&net->xfrm.xfrm_state_hash_generation);
+ 	return 0;
+ 
+ out_byspi:
+diff --git a/security/selinux/ss/avtab.c b/security/selinux/ss/avtab.c
+index 0172d87e2b9ae..364b2ef9b36f8 100644
+--- a/security/selinux/ss/avtab.c
++++ b/security/selinux/ss/avtab.c
+@@ -109,7 +109,7 @@ static int avtab_insert(struct avtab *h, struct avtab_key *key, struct avtab_dat
+ 	struct avtab_node *prev, *cur, *newnode;
+ 	u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD);
+ 
+-	if (!h)
++	if (!h || !h->nslot)
+ 		return -EINVAL;
+ 
+ 	hvalue = avtab_hash(key, h->mask);
+@@ -154,7 +154,7 @@ avtab_insert_nonunique(struct avtab *h, struct avtab_key *key, struct avtab_datu
+ 	struct avtab_node *prev, *cur;
+ 	u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD);
+ 
+-	if (!h)
++	if (!h || !h->nslot)
+ 		return NULL;
+ 	hvalue = avtab_hash(key, h->mask);
+ 	for (prev = NULL, cur = h->htable[hvalue];
+@@ -184,7 +184,7 @@ struct avtab_datum *avtab_search(struct avtab *h, struct avtab_key *key)
+ 	struct avtab_node *cur;
+ 	u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD);
+ 
+-	if (!h)
++	if (!h || !h->nslot)
+ 		return NULL;
+ 
+ 	hvalue = avtab_hash(key, h->mask);
+@@ -220,7 +220,7 @@ avtab_search_node(struct avtab *h, struct avtab_key *key)
+ 	struct avtab_node *cur;
+ 	u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD);
+ 
+-	if (!h)
++	if (!h || !h->nslot)
+ 		return NULL;
+ 
+ 	hvalue = avtab_hash(key, h->mask);
+@@ -295,6 +295,7 @@ void avtab_destroy(struct avtab *h)
+ 	}
+ 	kvfree(h->htable);
+ 	h->htable = NULL;
++	h->nel = 0;
+ 	h->nslot = 0;
+ 	h->mask = 0;
+ }
+@@ -303,88 +304,52 @@ void avtab_init(struct avtab *h)
+ {
+ 	h->htable = NULL;
+ 	h->nel = 0;
++	h->nslot = 0;
++	h->mask = 0;
+ }
+ 
+-int avtab_alloc(struct avtab *h, u32 nrules)
++static int avtab_alloc_common(struct avtab *h, u32 nslot)
+ {
+-	u32 mask = 0;
+-	u32 shift = 0;
+-	u32 work = nrules;
+-	u32 nslot = 0;
+-
+-	if (nrules == 0)
+-		goto avtab_alloc_out;
+-
+-	while (work) {
+-		work  = work >> 1;
+-		shift++;
+-	}
+-	if (shift > 2)
+-		shift = shift - 2;
+-	nslot = 1 << shift;
+-	if (nslot > MAX_AVTAB_HASH_BUCKETS)
+-		nslot = MAX_AVTAB_HASH_BUCKETS;
+-	mask = nslot - 1;
++	if (!nslot)
++		return 0;
+ 
+ 	h->htable = kvcalloc(nslot, sizeof(void *), GFP_KERNEL);
+ 	if (!h->htable)
+ 		return -ENOMEM;
+ 
+- avtab_alloc_out:
+-	h->nel = 0;
+ 	h->nslot = nslot;
+-	h->mask = mask;
+-	pr_debug("SELinux: %d avtab hash slots, %d rules.\n",
+-	       h->nslot, nrules);
++	h->mask = nslot - 1;
+ 	return 0;
+ }
+ 
+-int avtab_duplicate(struct avtab *new, struct avtab *orig)
++int avtab_alloc(struct avtab *h, u32 nrules)
+ {
+-	int i;
+-	struct avtab_node *node, *tmp, *tail;
+-
+-	memset(new, 0, sizeof(*new));
++	int rc;
++	u32 nslot = 0;
+ 
+-	new->htable = kvcalloc(orig->nslot, sizeof(void *), GFP_KERNEL);
+-	if (!new->htable)
+-		return -ENOMEM;
+-	new->nslot = orig->nslot;
+-	new->mask = orig->mask;
+-
+-	for (i = 0; i < orig->nslot; i++) {
+-		tail = NULL;
+-		for (node = orig->htable[i]; node; node = node->next) {
+-			tmp = kmem_cache_zalloc(avtab_node_cachep, GFP_KERNEL);
+-			if (!tmp)
+-				goto error;
+-			tmp->key = node->key;
+-			if (tmp->key.specified & AVTAB_XPERMS) {
+-				tmp->datum.u.xperms =
+-					kmem_cache_zalloc(avtab_xperms_cachep,
+-							GFP_KERNEL);
+-				if (!tmp->datum.u.xperms) {
+-					kmem_cache_free(avtab_node_cachep, tmp);
+-					goto error;
+-				}
+-				tmp->datum.u.xperms = node->datum.u.xperms;
+-			} else
+-				tmp->datum.u.data = node->datum.u.data;
+-
+-			if (tail)
+-				tail->next = tmp;
+-			else
+-				new->htable[i] = tmp;
+-
+-			tail = tmp;
+-			new->nel++;
++	if (nrules != 0) {
++		u32 shift = 1;
++		u32 work = nrules >> 3;
++		while (work) {
++			work >>= 1;
++			shift++;
+ 		}
++		nslot = 1 << shift;
++		if (nslot > MAX_AVTAB_HASH_BUCKETS)
++			nslot = MAX_AVTAB_HASH_BUCKETS;
++
++		rc = avtab_alloc_common(h, nslot);
++		if (rc)
++			return rc;
+ 	}
+ 
++	pr_debug("SELinux: %d avtab hash slots, %d rules.\n", nslot, nrules);
+ 	return 0;
+-error:
+-	avtab_destroy(new);
+-	return -ENOMEM;
++}
++
++int avtab_alloc_dup(struct avtab *new, const struct avtab *orig)
++{
++	return avtab_alloc_common(new, orig->nslot);
+ }
+ 
+ void avtab_hash_eval(struct avtab *h, char *tag)
+diff --git a/security/selinux/ss/avtab.h b/security/selinux/ss/avtab.h
+index 4c4445ca9118e..f2eeb36265d15 100644
+--- a/security/selinux/ss/avtab.h
++++ b/security/selinux/ss/avtab.h
+@@ -89,7 +89,7 @@ struct avtab {
+ 
+ void avtab_init(struct avtab *h);
+ int avtab_alloc(struct avtab *, u32);
+-int avtab_duplicate(struct avtab *new, struct avtab *orig);
++int avtab_alloc_dup(struct avtab *new, const struct avtab *orig);
+ struct avtab_datum *avtab_search(struct avtab *h, struct avtab_key *k);
+ void avtab_destroy(struct avtab *h);
+ void avtab_hash_eval(struct avtab *h, char *tag);
+diff --git a/security/selinux/ss/conditional.c b/security/selinux/ss/conditional.c
+index 0b32f3ab025e5..1ef74c085f2b0 100644
+--- a/security/selinux/ss/conditional.c
++++ b/security/selinux/ss/conditional.c
+@@ -605,7 +605,6 @@ static int cond_dup_av_list(struct cond_av_list *new,
+ 			struct cond_av_list *orig,
+ 			struct avtab *avtab)
+ {
+-	struct avtab_node *avnode;
+ 	u32 i;
+ 
+ 	memset(new, 0, sizeof(*new));
+@@ -615,10 +614,11 @@ static int cond_dup_av_list(struct cond_av_list *new,
+ 		return -ENOMEM;
+ 
+ 	for (i = 0; i < orig->len; i++) {
+-		avnode = avtab_search_node(avtab, &orig->nodes[i]->key);
+-		if (WARN_ON(!avnode))
+-			return -EINVAL;
+-		new->nodes[i] = avnode;
++		new->nodes[i] = avtab_insert_nonunique(avtab,
++						       &orig->nodes[i]->key,
++						       &orig->nodes[i]->datum);
++		if (!new->nodes[i])
++			return -ENOMEM;
+ 		new->len++;
+ 	}
+ 
+@@ -630,7 +630,7 @@ static int duplicate_policydb_cond_list(struct policydb *newp,
+ {
+ 	int rc, i, j;
+ 
+-	rc = avtab_duplicate(&newp->te_cond_avtab, &origp->te_cond_avtab);
++	rc = avtab_alloc_dup(&newp->te_cond_avtab, &origp->te_cond_avtab);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index 495fc865faf55..fbdbfb5aa3707 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -1553,6 +1553,7 @@ static int security_context_to_sid_core(struct selinux_state *state,
+ 		if (!str)
+ 			goto out;
+ 	}
++retry:
+ 	rcu_read_lock();
+ 	policy = rcu_dereference(state->policy);
+ 	policydb = &policy->policydb;
+@@ -1566,6 +1567,15 @@ static int security_context_to_sid_core(struct selinux_state *state,
+ 	} else if (rc)
+ 		goto out_unlock;
+ 	rc = sidtab_context_to_sid(sidtab, &context, sid);
++	if (rc == -ESTALE) {
++		rcu_read_unlock();
++		if (context.str) {
++			str = context.str;
++			context.str = NULL;
++		}
++		context_destroy(&context);
++		goto retry;
++	}
+ 	context_destroy(&context);
+ out_unlock:
+ 	rcu_read_unlock();
+@@ -1715,7 +1725,7 @@ static int security_compute_sid(struct selinux_state *state,
+ 	struct selinux_policy *policy;
+ 	struct policydb *policydb;
+ 	struct sidtab *sidtab;
+-	struct class_datum *cladatum = NULL;
++	struct class_datum *cladatum;
+ 	struct context *scontext, *tcontext, newcontext;
+ 	struct sidtab_entry *sentry, *tentry;
+ 	struct avtab_key avkey;
+@@ -1737,6 +1747,8 @@ static int security_compute_sid(struct selinux_state *state,
+ 		goto out;
+ 	}
+ 
++retry:
++	cladatum = NULL;
+ 	context_init(&newcontext);
+ 
+ 	rcu_read_lock();
+@@ -1881,6 +1893,11 @@ static int security_compute_sid(struct selinux_state *state,
+ 	}
+ 	/* Obtain the sid for the context. */
+ 	rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid);
++	if (rc == -ESTALE) {
++		rcu_read_unlock();
++		context_destroy(&newcontext);
++		goto retry;
++	}
+ out_unlock:
+ 	rcu_read_unlock();
+ 	context_destroy(&newcontext);
+@@ -2192,6 +2209,7 @@ void selinux_policy_commit(struct selinux_state *state,
+ 			   struct selinux_load_state *load_state)
+ {
+ 	struct selinux_policy *oldpolicy, *newpolicy = load_state->policy;
++	unsigned long flags;
+ 	u32 seqno;
+ 
+ 	oldpolicy = rcu_dereference_protected(state->policy,
+@@ -2213,7 +2231,13 @@ void selinux_policy_commit(struct selinux_state *state,
+ 	seqno = newpolicy->latest_granting;
+ 
+ 	/* Install the new policy. */
+-	rcu_assign_pointer(state->policy, newpolicy);
++	if (oldpolicy) {
++		sidtab_freeze_begin(oldpolicy->sidtab, &flags);
++		rcu_assign_pointer(state->policy, newpolicy);
++		sidtab_freeze_end(oldpolicy->sidtab, &flags);
++	} else {
++		rcu_assign_pointer(state->policy, newpolicy);
++	}
+ 
+ 	/* Load the policycaps from the new policy */
+ 	security_load_policycaps(state, newpolicy);
+@@ -2357,13 +2381,15 @@ int security_port_sid(struct selinux_state *state,
+ 	struct policydb *policydb;
+ 	struct sidtab *sidtab;
+ 	struct ocontext *c;
+-	int rc = 0;
++	int rc;
+ 
+ 	if (!selinux_initialized(state)) {
+ 		*out_sid = SECINITSID_PORT;
+ 		return 0;
+ 	}
+ 
++retry:
++	rc = 0;
+ 	rcu_read_lock();
+ 	policy = rcu_dereference(state->policy);
+ 	policydb = &policy->policydb;
+@@ -2382,6 +2408,10 @@ int security_port_sid(struct selinux_state *state,
+ 		if (!c->sid[0]) {
+ 			rc = sidtab_context_to_sid(sidtab, &c->context[0],
+ 						   &c->sid[0]);
++			if (rc == -ESTALE) {
++				rcu_read_unlock();
++				goto retry;
++			}
+ 			if (rc)
+ 				goto out;
+ 		}
+@@ -2408,13 +2438,15 @@ int security_ib_pkey_sid(struct selinux_state *state,
+ 	struct policydb *policydb;
+ 	struct sidtab *sidtab;
+ 	struct ocontext *c;
+-	int rc = 0;
++	int rc;
+ 
+ 	if (!selinux_initialized(state)) {
+ 		*out_sid = SECINITSID_UNLABELED;
+ 		return 0;
+ 	}
+ 
++retry:
++	rc = 0;
+ 	rcu_read_lock();
+ 	policy = rcu_dereference(state->policy);
+ 	policydb = &policy->policydb;
+@@ -2435,6 +2467,10 @@ int security_ib_pkey_sid(struct selinux_state *state,
+ 			rc = sidtab_context_to_sid(sidtab,
+ 						   &c->context[0],
+ 						   &c->sid[0]);
++			if (rc == -ESTALE) {
++				rcu_read_unlock();
++				goto retry;
++			}
+ 			if (rc)
+ 				goto out;
+ 		}
+@@ -2460,13 +2496,15 @@ int security_ib_endport_sid(struct selinux_state *state,
+ 	struct policydb *policydb;
+ 	struct sidtab *sidtab;
+ 	struct ocontext *c;
+-	int rc = 0;
++	int rc;
+ 
+ 	if (!selinux_initialized(state)) {
+ 		*out_sid = SECINITSID_UNLABELED;
+ 		return 0;
+ 	}
+ 
++retry:
++	rc = 0;
+ 	rcu_read_lock();
+ 	policy = rcu_dereference(state->policy);
+ 	policydb = &policy->policydb;
+@@ -2487,6 +2525,10 @@ int security_ib_endport_sid(struct selinux_state *state,
+ 		if (!c->sid[0]) {
+ 			rc = sidtab_context_to_sid(sidtab, &c->context[0],
+ 						   &c->sid[0]);
++			if (rc == -ESTALE) {
++				rcu_read_unlock();
++				goto retry;
++			}
+ 			if (rc)
+ 				goto out;
+ 		}
+@@ -2510,7 +2552,7 @@ int security_netif_sid(struct selinux_state *state,
+ 	struct selinux_policy *policy;
+ 	struct policydb *policydb;
+ 	struct sidtab *sidtab;
+-	int rc = 0;
++	int rc;
+ 	struct ocontext *c;
+ 
+ 	if (!selinux_initialized(state)) {
+@@ -2518,6 +2560,8 @@ int security_netif_sid(struct selinux_state *state,
+ 		return 0;
+ 	}
+ 
++retry:
++	rc = 0;
+ 	rcu_read_lock();
+ 	policy = rcu_dereference(state->policy);
+ 	policydb = &policy->policydb;
+@@ -2534,10 +2578,18 @@ int security_netif_sid(struct selinux_state *state,
+ 		if (!c->sid[0] || !c->sid[1]) {
+ 			rc = sidtab_context_to_sid(sidtab, &c->context[0],
+ 						   &c->sid[0]);
++			if (rc == -ESTALE) {
++				rcu_read_unlock();
++				goto retry;
++			}
+ 			if (rc)
+ 				goto out;
+ 			rc = sidtab_context_to_sid(sidtab, &c->context[1],
+ 						   &c->sid[1]);
++			if (rc == -ESTALE) {
++				rcu_read_unlock();
++				goto retry;
++			}
+ 			if (rc)
+ 				goto out;
+ 		}
+@@ -2587,6 +2639,7 @@ int security_node_sid(struct selinux_state *state,
+ 		return 0;
+ 	}
+ 
++retry:
+ 	rcu_read_lock();
+ 	policy = rcu_dereference(state->policy);
+ 	policydb = &policy->policydb;
+@@ -2635,6 +2688,10 @@ int security_node_sid(struct selinux_state *state,
+ 			rc = sidtab_context_to_sid(sidtab,
+ 						   &c->context[0],
+ 						   &c->sid[0]);
++			if (rc == -ESTALE) {
++				rcu_read_unlock();
++				goto retry;
++			}
+ 			if (rc)
+ 				goto out;
+ 		}
+@@ -2676,18 +2733,24 @@ int security_get_user_sids(struct selinux_state *state,
+ 	struct sidtab *sidtab;
+ 	struct context *fromcon, usercon;
+ 	u32 *mysids = NULL, *mysids2, sid;
+-	u32 mynel = 0, maxnel = SIDS_NEL;
++	u32 i, j, mynel, maxnel = SIDS_NEL;
+ 	struct user_datum *user;
+ 	struct role_datum *role;
+ 	struct ebitmap_node *rnode, *tnode;
+-	int rc = 0, i, j;
++	int rc;
+ 
+ 	*sids = NULL;
+ 	*nel = 0;
+ 
+ 	if (!selinux_initialized(state))
+-		goto out;
++		return 0;
++
++	mysids = kcalloc(maxnel, sizeof(*mysids), GFP_KERNEL);
++	if (!mysids)
++		return -ENOMEM;
+ 
++retry:
++	mynel = 0;
+ 	rcu_read_lock();
+ 	policy = rcu_dereference(state->policy);
+ 	policydb = &policy->policydb;
+@@ -2707,11 +2770,6 @@ int security_get_user_sids(struct selinux_state *state,
+ 
+ 	usercon.user = user->value;
+ 
+-	rc = -ENOMEM;
+-	mysids = kcalloc(maxnel, sizeof(*mysids), GFP_ATOMIC);
+-	if (!mysids)
+-		goto out_unlock;
+-
+ 	ebitmap_for_each_positive_bit(&user->roles, rnode, i) {
+ 		role = policydb->role_val_to_struct[i];
+ 		usercon.role = i + 1;
+@@ -2723,6 +2781,10 @@ int security_get_user_sids(struct selinux_state *state,
+ 				continue;
+ 
+ 			rc = sidtab_context_to_sid(sidtab, &usercon, &sid);
++			if (rc == -ESTALE) {
++				rcu_read_unlock();
++				goto retry;
++			}
+ 			if (rc)
+ 				goto out_unlock;
+ 			if (mynel < maxnel) {
+@@ -2745,14 +2807,14 @@ out_unlock:
+ 	rcu_read_unlock();
+ 	if (rc || !mynel) {
+ 		kfree(mysids);
+-		goto out;
++		return rc;
+ 	}
+ 
+ 	rc = -ENOMEM;
+ 	mysids2 = kcalloc(mynel, sizeof(*mysids2), GFP_KERNEL);
+ 	if (!mysids2) {
+ 		kfree(mysids);
+-		goto out;
++		return rc;
+ 	}
+ 	for (i = 0, j = 0; i < mynel; i++) {
+ 		struct av_decision dummy_avd;
+@@ -2765,12 +2827,10 @@ out_unlock:
+ 			mysids2[j++] = mysids[i];
+ 		cond_resched();
+ 	}
+-	rc = 0;
+ 	kfree(mysids);
+ 	*sids = mysids2;
+ 	*nel = j;
+-out:
+-	return rc;
++	return 0;
+ }
+ 
+ /**
+@@ -2783,6 +2843,9 @@ out:
+  * Obtain a SID to use for a file in a filesystem that
+  * cannot support xattr or use a fixed labeling behavior like
+  * transition SIDs or task SIDs.
++ *
++ * WARNING: This function may return -ESTALE, indicating that the caller
++ * must retry the operation after re-acquiring the policy pointer!
+  */
+ static inline int __security_genfs_sid(struct selinux_policy *policy,
+ 				       const char *fstype,
+@@ -2861,11 +2924,13 @@ int security_genfs_sid(struct selinux_state *state,
+ 		return 0;
+ 	}
+ 
+-	rcu_read_lock();
+-	policy = rcu_dereference(state->policy);
+-	retval = __security_genfs_sid(policy,
+-				fstype, path, orig_sclass, sid);
+-	rcu_read_unlock();
++	do {
++		rcu_read_lock();
++		policy = rcu_dereference(state->policy);
++		retval = __security_genfs_sid(policy, fstype, path,
++					      orig_sclass, sid);
++		rcu_read_unlock();
++	} while (retval == -ESTALE);
+ 	return retval;
+ }
+ 
+@@ -2888,7 +2953,7 @@ int security_fs_use(struct selinux_state *state, struct super_block *sb)
+ 	struct selinux_policy *policy;
+ 	struct policydb *policydb;
+ 	struct sidtab *sidtab;
+-	int rc = 0;
++	int rc;
+ 	struct ocontext *c;
+ 	struct superblock_security_struct *sbsec = sb->s_security;
+ 	const char *fstype = sb->s_type->name;
+@@ -2899,6 +2964,8 @@ int security_fs_use(struct selinux_state *state, struct super_block *sb)
+ 		return 0;
+ 	}
+ 
++retry:
++	rc = 0;
+ 	rcu_read_lock();
+ 	policy = rcu_dereference(state->policy);
+ 	policydb = &policy->policydb;
+@@ -2916,6 +2983,10 @@ int security_fs_use(struct selinux_state *state, struct super_block *sb)
+ 		if (!c->sid[0]) {
+ 			rc = sidtab_context_to_sid(sidtab, &c->context[0],
+ 						   &c->sid[0]);
++			if (rc == -ESTALE) {
++				rcu_read_unlock();
++				goto retry;
++			}
+ 			if (rc)
+ 				goto out;
+ 		}
+@@ -2923,6 +2994,10 @@ int security_fs_use(struct selinux_state *state, struct super_block *sb)
+ 	} else {
+ 		rc = __security_genfs_sid(policy, fstype, "/",
+ 					SECCLASS_DIR, &sbsec->sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
++		}
+ 		if (rc) {
+ 			sbsec->behavior = SECURITY_FS_USE_NONE;
+ 			rc = 0;
+@@ -3132,12 +3207,13 @@ int security_sid_mls_copy(struct selinux_state *state,
+ 	u32 len;
+ 	int rc;
+ 
+-	rc = 0;
+ 	if (!selinux_initialized(state)) {
+ 		*new_sid = sid;
+-		goto out;
++		return 0;
+ 	}
+ 
++retry:
++	rc = 0;
+ 	context_init(&newcon);
+ 
+ 	rcu_read_lock();
+@@ -3196,10 +3272,14 @@ int security_sid_mls_copy(struct selinux_state *state,
+ 		}
+ 	}
+ 	rc = sidtab_context_to_sid(sidtab, &newcon, new_sid);
++	if (rc == -ESTALE) {
++		rcu_read_unlock();
++		context_destroy(&newcon);
++		goto retry;
++	}
+ out_unlock:
+ 	rcu_read_unlock();
+ 	context_destroy(&newcon);
+-out:
+ 	return rc;
+ }
+ 
+@@ -3796,6 +3876,8 @@ int security_netlbl_secattr_to_sid(struct selinux_state *state,
+ 		return 0;
+ 	}
+ 
++retry:
++	rc = 0;
+ 	rcu_read_lock();
+ 	policy = rcu_dereference(state->policy);
+ 	policydb = &policy->policydb;
+@@ -3822,23 +3904,24 @@ int security_netlbl_secattr_to_sid(struct selinux_state *state,
+ 				goto out;
+ 		}
+ 		rc = -EIDRM;
+-		if (!mls_context_isvalid(policydb, &ctx_new))
+-			goto out_free;
++		if (!mls_context_isvalid(policydb, &ctx_new)) {
++			ebitmap_destroy(&ctx_new.range.level[0].cat);
++			goto out;
++		}
+ 
+ 		rc = sidtab_context_to_sid(sidtab, &ctx_new, sid);
++		ebitmap_destroy(&ctx_new.range.level[0].cat);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
++		}
+ 		if (rc)
+-			goto out_free;
++			goto out;
+ 
+ 		security_netlbl_cache_add(secattr, *sid);
+-
+-		ebitmap_destroy(&ctx_new.range.level[0].cat);
+ 	} else
+ 		*sid = SECSID_NULL;
+ 
+-	rcu_read_unlock();
+-	return 0;
+-out_free:
+-	ebitmap_destroy(&ctx_new.range.level[0].cat);
+ out:
+ 	rcu_read_unlock();
+ 	return rc;
+diff --git a/security/selinux/ss/sidtab.c b/security/selinux/ss/sidtab.c
+index 5ee190bd30f53..656d50b09f762 100644
+--- a/security/selinux/ss/sidtab.c
++++ b/security/selinux/ss/sidtab.c
+@@ -39,6 +39,7 @@ int sidtab_init(struct sidtab *s)
+ 	for (i = 0; i < SECINITSID_NUM; i++)
+ 		s->isids[i].set = 0;
+ 
++	s->frozen = false;
+ 	s->count = 0;
+ 	s->convert = NULL;
+ 	hash_init(s->context_to_sid);
+@@ -281,6 +282,15 @@ int sidtab_context_to_sid(struct sidtab *s, struct context *context,
+ 	if (*sid)
+ 		goto out_unlock;
+ 
++	if (unlikely(s->frozen)) {
++		/*
++		 * This sidtab is now frozen - tell the caller to abort and
++		 * get the new one.
++		 */
++		rc = -ESTALE;
++		goto out_unlock;
++	}
++
+ 	count = s->count;
+ 	convert = s->convert;
+ 
+@@ -474,6 +484,17 @@ void sidtab_cancel_convert(struct sidtab *s)
+ 	spin_unlock_irqrestore(&s->lock, flags);
+ }
+ 
++void sidtab_freeze_begin(struct sidtab *s, unsigned long *flags) __acquires(&s->lock)
++{
++	spin_lock_irqsave(&s->lock, *flags);
++	s->frozen = true;
++	s->convert = NULL;
++}
++void sidtab_freeze_end(struct sidtab *s, unsigned long *flags) __releases(&s->lock)
++{
++	spin_unlock_irqrestore(&s->lock, *flags);
++}
++
+ static void sidtab_destroy_entry(struct sidtab_entry *entry)
+ {
+ 	context_destroy(&entry->context);
+diff --git a/security/selinux/ss/sidtab.h b/security/selinux/ss/sidtab.h
+index 80c744d07ad62..4eff0e49dcb22 100644
+--- a/security/selinux/ss/sidtab.h
++++ b/security/selinux/ss/sidtab.h
+@@ -86,6 +86,7 @@ struct sidtab {
+ 	u32 count;
+ 	/* access only under spinlock */
+ 	struct sidtab_convert_params *convert;
++	bool frozen;
+ 	spinlock_t lock;
+ 
+ #if CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE > 0
+@@ -125,6 +126,9 @@ int sidtab_convert(struct sidtab *s, struct sidtab_convert_params *params);
+ 
+ void sidtab_cancel_convert(struct sidtab *s);
+ 
++void sidtab_freeze_begin(struct sidtab *s, unsigned long *flags) __acquires(&s->lock);
++void sidtab_freeze_end(struct sidtab *s, unsigned long *flags) __releases(&s->lock);
++
+ int sidtab_context_to_sid(struct sidtab *s, struct context *context, u32 *sid);
+ 
+ void sidtab_destroy(struct sidtab *s);
+diff --git a/sound/drivers/aloop.c b/sound/drivers/aloop.c
+index c91356326699e..2c5f7e905ab8f 100644
+--- a/sound/drivers/aloop.c
++++ b/sound/drivers/aloop.c
+@@ -1572,6 +1572,14 @@ static int loopback_mixer_new(struct loopback *loopback, int notify)
+ 					return -ENOMEM;
+ 				kctl->id.device = dev;
+ 				kctl->id.subdevice = substr;
++
++				/* Add the control before copying the id so that
++				 * the numid field of the id is set in the copy.
++				 */
++				err = snd_ctl_add(card, kctl);
++				if (err < 0)
++					return err;
++
+ 				switch (idx) {
+ 				case ACTIVE_IDX:
+ 					setup->active_id = kctl->id;
+@@ -1588,9 +1596,6 @@ static int loopback_mixer_new(struct loopback *loopback, int notify)
+ 				default:
+ 					break;
+ 				}
+-				err = snd_ctl_add(card, kctl);
+-				if (err < 0)
+-					return err;
+ 			}
+ 		}
+ 	}
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index a980a4eda51c9..7aa9062f4f838 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -944,6 +944,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8402, "HP ProBook 645 G4", CXT_FIXUP_MUTE_LED_GPIO),
+ 	SND_PCI_QUIRK(0x103c, 0x8427, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x844f, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8456, "HP Z2 G4 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 58946d069ee59..a7544b77d3f7c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3927,6 +3927,15 @@ static void alc271_fixup_dmic(struct hda_codec *codec,
+ 		snd_hda_sequence_write(codec, verbs);
+ }
+ 
++/* Fix the speaker amp after resume, etc */
++static void alc269vb_fixup_aspire_e1_coef(struct hda_codec *codec,
++					  const struct hda_fixup *fix,
++					  int action)
++{
++	if (action == HDA_FIXUP_ACT_INIT)
++		alc_update_coef_idx(codec, 0x0d, 0x6000, 0x6000);
++}
++
+ static void alc269_fixup_pcm_44k(struct hda_codec *codec,
+ 				 const struct hda_fixup *fix, int action)
+ {
+@@ -6301,6 +6310,7 @@ enum {
+ 	ALC283_FIXUP_HEADSET_MIC,
+ 	ALC255_FIXUP_MIC_MUTE_LED,
+ 	ALC282_FIXUP_ASPIRE_V5_PINS,
++	ALC269VB_FIXUP_ASPIRE_E1_COEF,
+ 	ALC280_FIXUP_HP_GPIO4,
+ 	ALC286_FIXUP_HP_GPIO_LED,
+ 	ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY,
+@@ -6979,6 +6989,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 			{ },
+ 		},
+ 	},
++	[ALC269VB_FIXUP_ASPIRE_E1_COEF] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269vb_fixup_aspire_e1_coef,
++	},
+ 	[ALC280_FIXUP_HP_GPIO4] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc280_fixup_hp_gpio4,
+@@ -7901,6 +7915,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x0762, "Acer Aspire E1-472", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+ 	SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+ 	SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
++	SND_PCI_QUIRK(0x1025, 0x0840, "Acer Aspire E1", ALC269VB_FIXUP_ASPIRE_E1_COEF),
+ 	SND_PCI_QUIRK(0x1025, 0x101c, "Acer Veriton N2510G", ALC269_FIXUP_LIFEBOOK),
+ 	SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC),
+@@ -8395,6 +8410,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC283_FIXUP_HEADSET_MIC, .name = "alc283-headset"},
+ 	{.id = ALC255_FIXUP_MIC_MUTE_LED, .name = "alc255-dell-mute"},
+ 	{.id = ALC282_FIXUP_ASPIRE_V5_PINS, .name = "aspire-v5"},
++	{.id = ALC269VB_FIXUP_ASPIRE_E1_COEF, .name = "aspire-e1-coef"},
+ 	{.id = ALC280_FIXUP_HP_GPIO4, .name = "hp-gpio4"},
+ 	{.id = ALC286_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"},
+ 	{.id = ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY, .name = "hp-gpio2-hotkey"},
+diff --git a/sound/soc/codecs/wm8960.c b/sound/soc/codecs/wm8960.c
+index 660ec46eecf25..ceaf3bbb18e66 100644
+--- a/sound/soc/codecs/wm8960.c
++++ b/sound/soc/codecs/wm8960.c
+@@ -707,7 +707,13 @@ int wm8960_configure_pll(struct snd_soc_component *component, int freq_in,
+ 	best_freq_out = -EINVAL;
+ 	*sysclk_idx = *dac_idx = *bclk_idx = -1;
+ 
+-	for (i = 0; i < ARRAY_SIZE(sysclk_divs); ++i) {
++	/*
++	 * From Datasheet, the PLL performs best when f2 is between
++	 * 90MHz and 100MHz, the desired sysclk output is 11.2896MHz
++	 * or 12.288MHz, then sysclkdiv = 2 is the best choice.
++	 * So search sysclk_divs from 2 to 1 other than from 1 to 2.
++	 */
++	for (i = ARRAY_SIZE(sysclk_divs) - 1; i >= 0; --i) {
+ 		if (sysclk_divs[i] == -1)
+ 			continue;
+ 		for (j = 0; j < ARRAY_SIZE(dac_divs); ++j) {
+diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+index 9e9b05883557c..aa5dd590ddd52 100644
+--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
++++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+@@ -488,14 +488,14 @@ static struct snd_soc_dai_driver sst_platform_dai[] = {
+ 		.channels_min = SST_STEREO,
+ 		.channels_max = SST_STEREO,
+ 		.rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000,
+-		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
++		.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 	},
+ 	.capture = {
+ 		.stream_name = "Headset Capture",
+ 		.channels_min = 1,
+ 		.channels_max = 2,
+ 		.rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000,
+-		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
++		.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 	},
+ },
+ {
+@@ -506,7 +506,7 @@ static struct snd_soc_dai_driver sst_platform_dai[] = {
+ 		.channels_min = SST_STEREO,
+ 		.channels_max = SST_STEREO,
+ 		.rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000,
+-		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
++		.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 	},
+ },
+ {
+diff --git a/sound/soc/sof/intel/hda-dsp.c b/sound/soc/sof/intel/hda-dsp.c
+index c731b9bd60b4c..85ec4361c8c4e 100644
+--- a/sound/soc/sof/intel/hda-dsp.c
++++ b/sound/soc/sof/intel/hda-dsp.c
+@@ -226,10 +226,17 @@ bool hda_dsp_core_is_enabled(struct snd_sof_dev *sdev,
+ 
+ 	val = snd_sof_dsp_read(sdev, HDA_DSP_BAR, HDA_DSP_REG_ADSPCS);
+ 
+-	is_enable = (val & HDA_DSP_ADSPCS_CPA_MASK(core_mask)) &&
+-		    (val & HDA_DSP_ADSPCS_SPA_MASK(core_mask)) &&
+-		    !(val & HDA_DSP_ADSPCS_CRST_MASK(core_mask)) &&
+-		    !(val & HDA_DSP_ADSPCS_CSTALL_MASK(core_mask));
++#define MASK_IS_EQUAL(v, m, field) ({	\
++	u32 _m = field(m);		\
++	((v) & _m) == _m;		\
++})
++
++	is_enable = MASK_IS_EQUAL(val, core_mask, HDA_DSP_ADSPCS_CPA_MASK) &&
++		MASK_IS_EQUAL(val, core_mask, HDA_DSP_ADSPCS_SPA_MASK) &&
++		!(val & HDA_DSP_ADSPCS_CRST_MASK(core_mask)) &&
++		!(val & HDA_DSP_ADSPCS_CSTALL_MASK(core_mask));
++
++#undef MASK_IS_EQUAL
+ 
+ 	dev_dbg(sdev->dev, "DSP core(s) enabled? %d : core_mask %x\n",
+ 		is_enable, core_mask);
+diff --git a/sound/soc/sunxi/sun4i-codec.c b/sound/soc/sunxi/sun4i-codec.c
+index 6c13cc84b3fb5..2173991c13db1 100644
+--- a/sound/soc/sunxi/sun4i-codec.c
++++ b/sound/soc/sunxi/sun4i-codec.c
+@@ -1364,6 +1364,7 @@ static struct snd_soc_card *sun4i_codec_create_card(struct device *dev)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	card->dev		= dev;
++	card->owner		= THIS_MODULE;
+ 	card->name		= "sun4i-codec";
+ 	card->dapm_widgets	= sun4i_codec_card_dapm_widgets;
+ 	card->num_dapm_widgets	= ARRAY_SIZE(sun4i_codec_card_dapm_widgets);
+@@ -1396,6 +1397,7 @@ static struct snd_soc_card *sun6i_codec_create_card(struct device *dev)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	card->dev		= dev;
++	card->owner		= THIS_MODULE;
+ 	card->name		= "A31 Audio Codec";
+ 	card->dapm_widgets	= sun6i_codec_card_dapm_widgets;
+ 	card->num_dapm_widgets	= ARRAY_SIZE(sun6i_codec_card_dapm_widgets);
+@@ -1449,6 +1451,7 @@ static struct snd_soc_card *sun8i_a23_codec_create_card(struct device *dev)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	card->dev		= dev;
++	card->owner		= THIS_MODULE;
+ 	card->name		= "A23 Audio Codec";
+ 	card->dapm_widgets	= sun6i_codec_card_dapm_widgets;
+ 	card->num_dapm_widgets	= ARRAY_SIZE(sun6i_codec_card_dapm_widgets);
+@@ -1487,6 +1490,7 @@ static struct snd_soc_card *sun8i_h3_codec_create_card(struct device *dev)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	card->dev		= dev;
++	card->owner		= THIS_MODULE;
+ 	card->name		= "H3 Audio Codec";
+ 	card->dapm_widgets	= sun6i_codec_card_dapm_widgets;
+ 	card->num_dapm_widgets	= ARRAY_SIZE(sun6i_codec_card_dapm_widgets);
+@@ -1525,6 +1529,7 @@ static struct snd_soc_card *sun8i_v3s_codec_create_card(struct device *dev)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	card->dev		= dev;
++	card->owner		= THIS_MODULE;
+ 	card->name		= "V3s Audio Codec";
+ 	card->dapm_widgets	= sun6i_codec_card_dapm_widgets;
+ 	card->num_dapm_widgets	= ARRAY_SIZE(sun6i_codec_card_dapm_widgets);
+diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
+index 98537ff2679ef..06cd709a3453a 100644
+--- a/tools/lib/bpf/ringbuf.c
++++ b/tools/lib/bpf/ringbuf.c
+@@ -227,7 +227,7 @@ static int ringbuf_process_ring(struct ring* r)
+ 			if ((len & BPF_RINGBUF_DISCARD_BIT) == 0) {
+ 				sample = (void *)len_ptr + BPF_RINGBUF_HDR_SZ;
+ 				err = r->sample_cb(r->ctx, sample, len);
+-				if (err) {
++				if (err < 0) {
+ 					/* update consumer pos and bail out */
+ 					smp_store_release(r->consumer_pos,
+ 							  cons_pos);
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index f6e8831673f9b..5f7b85fba39d0 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -54,6 +54,8 @@ struct xsk_umem {
+ 	int fd;
+ 	int refcount;
+ 	struct list_head ctx_list;
++	bool rx_ring_setup_done;
++	bool tx_ring_setup_done;
+ };
+ 
+ struct xsk_ctx {
+@@ -628,26 +630,30 @@ static struct xsk_ctx *xsk_get_ctx(struct xsk_umem *umem, int ifindex,
+ 	return NULL;
+ }
+ 
+-static void xsk_put_ctx(struct xsk_ctx *ctx)
++static void xsk_put_ctx(struct xsk_ctx *ctx, bool unmap)
+ {
+ 	struct xsk_umem *umem = ctx->umem;
+ 	struct xdp_mmap_offsets off;
+ 	int err;
+ 
+-	if (--ctx->refcount == 0) {
+-		err = xsk_get_mmap_offsets(umem->fd, &off);
+-		if (!err) {
+-			munmap(ctx->fill->ring - off.fr.desc,
+-			       off.fr.desc + umem->config.fill_size *
+-			       sizeof(__u64));
+-			munmap(ctx->comp->ring - off.cr.desc,
+-			       off.cr.desc + umem->config.comp_size *
+-			       sizeof(__u64));
+-		}
++	if (--ctx->refcount)
++		return;
+ 
+-		list_del(&ctx->list);
+-		free(ctx);
+-	}
++	if (!unmap)
++		goto out_free;
++
++	err = xsk_get_mmap_offsets(umem->fd, &off);
++	if (err)
++		goto out_free;
++
++	munmap(ctx->fill->ring - off.fr.desc, off.fr.desc + umem->config.fill_size *
++	       sizeof(__u64));
++	munmap(ctx->comp->ring - off.cr.desc, off.cr.desc + umem->config.comp_size *
++	       sizeof(__u64));
++
++out_free:
++	list_del(&ctx->list);
++	free(ctx);
+ }
+ 
+ static struct xsk_ctx *xsk_create_ctx(struct xsk_socket *xsk,
+@@ -682,8 +688,6 @@ static struct xsk_ctx *xsk_create_ctx(struct xsk_socket *xsk,
+ 	memcpy(ctx->ifname, ifname, IFNAMSIZ - 1);
+ 	ctx->ifname[IFNAMSIZ - 1] = '\0';
+ 
+-	umem->fill_save = NULL;
+-	umem->comp_save = NULL;
+ 	ctx->fill = fill;
+ 	ctx->comp = comp;
+ 	list_add(&ctx->list, &umem->ctx_list);
+@@ -705,6 +709,8 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
+ 	struct xsk_socket *xsk;
+ 	struct xsk_ctx *ctx;
+ 	int err, ifindex;
++	bool unmap = umem->fill_save != fill;
++	bool rx_setup_done = false, tx_setup_done = false;
+ 
+ 	if (!umem || !xsk_ptr || !(rx || tx))
+ 		return -EFAULT;
+@@ -732,6 +738,8 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
+ 		}
+ 	} else {
+ 		xsk->fd = umem->fd;
++		rx_setup_done = umem->rx_ring_setup_done;
++		tx_setup_done = umem->tx_ring_setup_done;
+ 	}
+ 
+ 	ctx = xsk_get_ctx(umem, ifindex, queue_id);
+@@ -750,7 +758,7 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
+ 	}
+ 	xsk->ctx = ctx;
+ 
+-	if (rx) {
++	if (rx && !rx_setup_done) {
+ 		err = setsockopt(xsk->fd, SOL_XDP, XDP_RX_RING,
+ 				 &xsk->config.rx_size,
+ 				 sizeof(xsk->config.rx_size));
+@@ -758,8 +766,10 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
+ 			err = -errno;
+ 			goto out_put_ctx;
+ 		}
++		if (xsk->fd == umem->fd)
++			umem->rx_ring_setup_done = true;
+ 	}
+-	if (tx) {
++	if (tx && !tx_setup_done) {
+ 		err = setsockopt(xsk->fd, SOL_XDP, XDP_TX_RING,
+ 				 &xsk->config.tx_size,
+ 				 sizeof(xsk->config.tx_size));
+@@ -767,6 +777,8 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
+ 			err = -errno;
+ 			goto out_put_ctx;
+ 		}
++		if (xsk->fd == umem->fd)
++			umem->rx_ring_setup_done = true;
+ 	}
+ 
+ 	err = xsk_get_mmap_offsets(xsk->fd, &off);
+@@ -845,6 +857,8 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
+ 	}
+ 
+ 	*xsk_ptr = xsk;
++	umem->fill_save = NULL;
++	umem->comp_save = NULL;
+ 	return 0;
+ 
+ out_mmap_tx:
+@@ -856,7 +870,7 @@ out_mmap_rx:
+ 		munmap(rx_map, off.rx.desc +
+ 		       xsk->config.rx_size * sizeof(struct xdp_desc));
+ out_put_ctx:
+-	xsk_put_ctx(ctx);
++	xsk_put_ctx(ctx, unmap);
+ out_socket:
+ 	if (--umem->refcount)
+ 		close(xsk->fd);
+@@ -870,6 +884,9 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
+ 		       struct xsk_ring_cons *rx, struct xsk_ring_prod *tx,
+ 		       const struct xsk_socket_config *usr_config)
+ {
++	if (!umem)
++		return -EFAULT;
++
+ 	return xsk_socket__create_shared(xsk_ptr, ifname, queue_id, umem,
+ 					 rx, tx, umem->fill_save,
+ 					 umem->comp_save, usr_config);
+@@ -919,7 +936,7 @@ void xsk_socket__delete(struct xsk_socket *xsk)
+ 		}
+ 	}
+ 
+-	xsk_put_ctx(ctx);
++	xsk_put_ctx(ctx, true);
+ 
+ 	umem->refcount--;
+ 	/* Do not close an fd that also has an associated umem connected
+diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
+index 0462dc8db2e38..5320ac1b1285c 100644
+--- a/tools/perf/builtin-inject.c
++++ b/tools/perf/builtin-inject.c
+@@ -904,7 +904,7 @@ int cmd_inject(int argc, const char **argv)
+ 	}
+ 
+ 	data.path = inject.input_name;
+-	inject.session = perf_session__new(&data, true, &inject.tool);
++	inject.session = perf_session__new(&data, inject.output.is_pipe, &inject.tool);
+ 	if (IS_ERR(inject.session))
+ 		return PTR_ERR(inject.session);
+ 
+diff --git a/tools/perf/util/block-info.c b/tools/perf/util/block-info.c
+index 423ec69bda6ca..5ecd4f401f324 100644
+--- a/tools/perf/util/block-info.c
++++ b/tools/perf/util/block-info.c
+@@ -201,7 +201,7 @@ static int block_total_cycles_pct_entry(struct perf_hpp_fmt *fmt,
+ 	double ratio = 0.0;
+ 
+ 	if (block_fmt->total_cycles)
+-		ratio = (double)bi->cycles / (double)block_fmt->total_cycles;
++		ratio = (double)bi->cycles_aggr / (double)block_fmt->total_cycles;
+ 
+ 	return color_pct(hpp, block_fmt->width, 100.0 * ratio);
+ }
+@@ -216,9 +216,9 @@ static int64_t block_total_cycles_pct_sort(struct perf_hpp_fmt *fmt,
+ 	double l, r;
+ 
+ 	if (block_fmt->total_cycles) {
+-		l = ((double)bi_l->cycles /
++		l = ((double)bi_l->cycles_aggr /
+ 			(double)block_fmt->total_cycles) * 100000.0;
+-		r = ((double)bi_r->cycles /
++		r = ((double)bi_r->cycles_aggr /
+ 			(double)block_fmt->total_cycles) * 100000.0;
+ 		return (int64_t)l - (int64_t)r;
+ 	}


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-04-16 11:02 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-04-16 11:02 UTC (permalink / raw
  To: gentoo-commits

commit:     fe119214d6c7f16e54cbfc5e266f9b66acb06cd8
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 16 11:01:48 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Apr 16 11:02:13 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fe119214

Linux patch 5.10.31

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |   4 +
 1030_linux-5.10.31.patch | 777 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 781 insertions(+)

diff --git a/0000_README b/0000_README
index a5a8bc1..69e2945 100644
--- a/0000_README
+++ b/0000_README
@@ -163,6 +163,10 @@ Patch:  1029_linux-5.10.30.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.30
 
+Patch:  1030_linux-5.10.31.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.31
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1030_linux-5.10.31.patch b/1030_linux-5.10.31.patch
new file mode 100644
index 0000000..6475de3
--- /dev/null
+++ b/1030_linux-5.10.31.patch
@@ -0,0 +1,777 @@
+diff --git a/Makefile b/Makefile
+index 872af26e085a1..c4c0b47e6edea 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 30
++SUBLEVEL = 31
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
+index 64ce29378467c..395cb22d9f7d3 100644
+--- a/arch/arm64/include/asm/kvm_arm.h
++++ b/arch/arm64/include/asm/kvm_arm.h
+@@ -277,6 +277,7 @@
+ #define CPTR_EL2_DEFAULT	CPTR_EL2_RES1
+ 
+ /* Hyp Debug Configuration Register bits */
++#define MDCR_EL2_TTRF		(1 << 19)
+ #define MDCR_EL2_TPMS		(1 << 14)
+ #define MDCR_EL2_E2PB_MASK	(UL(0x3))
+ #define MDCR_EL2_E2PB_SHIFT	(UL(12))
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 7da9a7cee4cef..5001c43ea6c33 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -382,7 +382,6 @@ static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
+ 	 * of support.
+ 	 */
+ 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64DFR0_PMUVER_SHIFT, 4, 0),
+-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_TRACEVER_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6),
+ 	ARM64_FTR_END,
+ };
+diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
+index 7a7e425616b54..dbc8905116311 100644
+--- a/arch/arm64/kvm/debug.c
++++ b/arch/arm64/kvm/debug.c
+@@ -89,6 +89,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
+  *  - Debug ROM Address (MDCR_EL2_TDRA)
+  *  - OS related registers (MDCR_EL2_TDOSA)
+  *  - Statistical profiler (MDCR_EL2_TPMS/MDCR_EL2_E2PB)
++ *  - Self-hosted Trace Filter controls (MDCR_EL2_TTRF)
+  *
+  * Additionally, KVM only traps guest accesses to the debug registers if
+  * the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY
+@@ -112,6 +113,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
+ 	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
+ 	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
+ 				MDCR_EL2_TPMS |
++				MDCR_EL2_TTRF |
+ 				MDCR_EL2_TPMCR |
+ 				MDCR_EL2_TDRA |
+ 				MDCR_EL2_TDOSA);
+diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
+index 744f3209c48d0..76274a4a1d8e6 100644
+--- a/arch/riscv/kernel/entry.S
++++ b/arch/riscv/kernel/entry.S
+@@ -447,6 +447,7 @@ ENDPROC(__switch_to)
+ #endif
+ 
+ 	.section ".rodata"
++	.align LGREG
+ 	/* Exception vector table */
+ ENTRY(excp_vect_table)
+ 	RISCV_PTR do_trap_insn_misaligned
+diff --git a/block/bio.c b/block/bio.c
+index fa01bef35bb1f..9c931df2d9864 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -313,7 +313,7 @@ static struct bio *__bio_chain_endio(struct bio *bio)
+ {
+ 	struct bio *parent = bio->bi_private;
+ 
+-	if (!parent->bi_status)
++	if (bio->bi_status && !parent->bi_status)
+ 		parent->bi_status = bio->bi_status;
+ 	bio_put(bio);
+ 	return parent;
+diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h
+index c24d9b5ad81af..7de703f28617b 100644
+--- a/drivers/block/null_blk.h
++++ b/drivers/block/null_blk.h
+@@ -20,6 +20,7 @@ struct nullb_cmd {
+ 	blk_status_t error;
+ 	struct nullb_queue *nq;
+ 	struct hrtimer timer;
++	bool fake_timeout;
+ };
+ 
+ struct nullb_queue {
+diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
+index 4685ea401d5b1..bb3686c3869de 100644
+--- a/drivers/block/null_blk_main.c
++++ b/drivers/block/null_blk_main.c
+@@ -1367,10 +1367,13 @@ static blk_status_t null_handle_cmd(struct nullb_cmd *cmd, sector_t sector,
+ 	}
+ 
+ 	if (dev->zoned)
+-		cmd->error = null_process_zoned_cmd(cmd, op,
+-						    sector, nr_sectors);
++		sts = null_process_zoned_cmd(cmd, op, sector, nr_sectors);
+ 	else
+-		cmd->error = null_process_cmd(cmd, op, sector, nr_sectors);
++		sts = null_process_cmd(cmd, op, sector, nr_sectors);
++
++	/* Do not overwrite errors (e.g. timeout errors) */
++	if (cmd->error == BLK_STS_OK)
++		cmd->error = sts;
+ 
+ out:
+ 	nullb_complete_cmd(cmd);
+@@ -1449,8 +1452,20 @@ static bool should_requeue_request(struct request *rq)
+ 
+ static enum blk_eh_timer_return null_timeout_rq(struct request *rq, bool res)
+ {
++	struct nullb_cmd *cmd = blk_mq_rq_to_pdu(rq);
++
+ 	pr_info("rq %p timed out\n", rq);
+-	blk_mq_complete_request(rq);
++
++	/*
++	 * If the device is marked as blocking (i.e. memory backed or zoned
++	 * device), the submission path may be blocked waiting for resources
++	 * and cause real timeouts. For these real timeouts, the submission
++	 * path will complete the request using blk_mq_complete_request().
++	 * Only fake timeouts need to execute blk_mq_complete_request() here.
++	 */
++	cmd->error = BLK_STS_TIMEOUT;
++	if (cmd->fake_timeout)
++		blk_mq_complete_request(rq);
+ 	return BLK_EH_DONE;
+ }
+ 
+@@ -1471,6 +1486,7 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	cmd->rq = bd->rq;
+ 	cmd->error = BLK_STS_OK;
+ 	cmd->nq = nq;
++	cmd->fake_timeout = should_timeout_request(bd->rq);
+ 
+ 	blk_mq_start_request(bd->rq);
+ 
+@@ -1487,7 +1503,7 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 			return BLK_STS_OK;
+ 		}
+ 	}
+-	if (should_timeout_request(bd->rq))
++	if (cmd->fake_timeout)
+ 		return BLK_STS_OK;
+ 
+ 	return null_handle_cmd(cmd, sector, nr_sectors, req_op(bd->rq));
+diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
+index 41e2978cb1ebf..75036aaa0c639 100644
+--- a/drivers/gpu/drm/imx/imx-ldb.c
++++ b/drivers/gpu/drm/imx/imx-ldb.c
+@@ -190,6 +190,11 @@ static void imx_ldb_encoder_enable(struct drm_encoder *encoder)
+ 	int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN;
+ 	int mux = drm_of_encoder_active_port_id(imx_ldb_ch->child, encoder);
+ 
++	if (mux < 0 || mux >= ARRAY_SIZE(ldb->clk_sel)) {
++		dev_warn(ldb->dev, "%s: invalid mux %d\n", __func__, mux);
++		return;
++	}
++
+ 	drm_panel_prepare(imx_ldb_ch->panel);
+ 
+ 	if (dual) {
+@@ -248,6 +253,11 @@ imx_ldb_encoder_atomic_mode_set(struct drm_encoder *encoder,
+ 	int mux = drm_of_encoder_active_port_id(imx_ldb_ch->child, encoder);
+ 	u32 bus_format = imx_ldb_ch->bus_format;
+ 
++	if (mux < 0 || mux >= ARRAY_SIZE(ldb->clk_sel)) {
++		dev_warn(ldb->dev, "%s: invalid mux %d\n", __func__, mux);
++		return;
++	}
++
+ 	if (mode->clock > 170000) {
+ 		dev_warn(ldb->dev,
+ 			 "%s: mode exceeds 170 MHz pixel clock\n", __func__);
+diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
+index 3a244ef7f30f3..3aa9a74060854 100644
+--- a/drivers/gpu/drm/tegra/dc.c
++++ b/drivers/gpu/drm/tegra/dc.c
+@@ -1688,6 +1688,11 @@ static void tegra_dc_commit_state(struct tegra_dc *dc,
+ 			dev_err(dc->dev,
+ 				"failed to set clock rate to %lu Hz\n",
+ 				state->pclk);
++
++		err = clk_set_rate(dc->clk, state->pclk);
++		if (err < 0)
++			dev_err(dc->dev, "failed to set clock %pC to %lu Hz: %d\n",
++				dc->clk, state->pclk, err);
+ 	}
+ 
+ 	DRM_DEBUG_KMS("rate: %lu, div: %u\n", clk_get_rate(dc->clk),
+@@ -1698,11 +1703,6 @@ static void tegra_dc_commit_state(struct tegra_dc *dc,
+ 		value = SHIFT_CLK_DIVIDER(state->div) | PIXEL_CLK_DIVIDER_PCD1;
+ 		tegra_dc_writel(dc, value, DC_DISP_DISP_CLOCK_CONTROL);
+ 	}
+-
+-	err = clk_set_rate(dc->clk, state->pclk);
+-	if (err < 0)
+-		dev_err(dc->dev, "failed to set clock %pC to %lu Hz: %d\n",
+-			dc->clk, state->pclk, err);
+ }
+ 
+ static void tegra_dc_stop(struct tegra_dc *dc)
+diff --git a/drivers/gpu/host1x/bus.c b/drivers/gpu/host1x/bus.c
+index e201f62d62c0c..9e2cb6968819e 100644
+--- a/drivers/gpu/host1x/bus.c
++++ b/drivers/gpu/host1x/bus.c
+@@ -704,8 +704,9 @@ void host1x_driver_unregister(struct host1x_driver *driver)
+ EXPORT_SYMBOL(host1x_driver_unregister);
+ 
+ /**
+- * host1x_client_register() - register a host1x client
++ * __host1x_client_register() - register a host1x client
+  * @client: host1x client
++ * @key: lock class key for the client-specific mutex
+  *
+  * Registers a host1x client with each host1x controller instance. Note that
+  * each client will only match their parent host1x controller and will only be
+@@ -714,13 +715,14 @@ EXPORT_SYMBOL(host1x_driver_unregister);
+  * device and call host1x_device_init(), which will in turn call each client's
+  * &host1x_client_ops.init implementation.
+  */
+-int host1x_client_register(struct host1x_client *client)
++int __host1x_client_register(struct host1x_client *client,
++			     struct lock_class_key *key)
+ {
+ 	struct host1x *host1x;
+ 	int err;
+ 
+ 	INIT_LIST_HEAD(&client->list);
+-	mutex_init(&client->lock);
++	__mutex_init(&client->lock, "host1x client lock", key);
+ 	client->usecount = 0;
+ 
+ 	mutex_lock(&devices_lock);
+@@ -741,7 +743,7 @@ int host1x_client_register(struct host1x_client *client)
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL(host1x_client_register);
++EXPORT_SYMBOL(__host1x_client_register);
+ 
+ /**
+  * host1x_client_unregister() - unregister a host1x client
+diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
+index 5ad519c9f2396..8a1e70e008764 100644
+--- a/drivers/interconnect/core.c
++++ b/drivers/interconnect/core.c
+@@ -942,6 +942,8 @@ int icc_link_destroy(struct icc_node *src, struct icc_node *dst)
+ 		       GFP_KERNEL);
+ 	if (new)
+ 		src->links = new;
++	else
++		ret = -ENOMEM;
+ 
+ out:
+ 	mutex_unlock(&icc_lock);
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index fb954e8141802..4cf874fb5c5b4 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -349,14 +349,13 @@ void sfp_parse_support(struct sfp_bus *bus, const struct sfp_eeprom_id *id,
+ 	}
+ 
+ 	/* If we haven't discovered any modes that this module supports, try
+-	 * the encoding and bitrate to determine supported modes. Some BiDi
+-	 * modules (eg, 1310nm/1550nm) are not 1000BASE-BX compliant due to
+-	 * the differing wavelengths, so do not set any transceiver bits.
++	 * the bitrate to determine supported modes. Some BiDi modules (eg,
++	 * 1310nm/1550nm) are not 1000BASE-BX compliant due to the differing
++	 * wavelengths, so do not set any transceiver bits.
+ 	 */
+ 	if (bitmap_empty(modes, __ETHTOOL_LINK_MODE_MASK_NBITS)) {
+-		/* If the encoding and bit rate allows 1000baseX */
+-		if (id->base.encoding == SFF8024_ENCODING_8B10B && br_nom &&
+-		    br_min <= 1300 && br_max >= 1200)
++		/* If the bit rate allows 1000baseX */
++		if (br_nom && br_min <= 1300 && br_max >= 1200)
+ 			phylink_set(modes, 1000baseX_Full);
+ 	}
+ 
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 7a680b5177f5e..2fff62695455d 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -1501,15 +1501,19 @@ static void sfp_sm_link_down(struct sfp *sfp)
+ 
+ static void sfp_sm_link_check_los(struct sfp *sfp)
+ {
+-	unsigned int los = sfp->state & SFP_F_LOS;
++	const __be16 los_inverted = cpu_to_be16(SFP_OPTIONS_LOS_INVERTED);
++	const __be16 los_normal = cpu_to_be16(SFP_OPTIONS_LOS_NORMAL);
++	__be16 los_options = sfp->id.ext.options & (los_inverted | los_normal);
++	bool los = false;
+ 
+ 	/* If neither SFP_OPTIONS_LOS_INVERTED nor SFP_OPTIONS_LOS_NORMAL
+-	 * are set, we assume that no LOS signal is available.
++	 * are set, we assume that no LOS signal is available. If both are
++	 * set, we assume LOS is not implemented (and is meaningless.)
+ 	 */
+-	if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_INVERTED))
+-		los ^= SFP_F_LOS;
+-	else if (!(sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_NORMAL)))
+-		los = 0;
++	if (los_options == los_inverted)
++		los = !(sfp->state & SFP_F_LOS);
++	else if (los_options == los_normal)
++		los = !!(sfp->state & SFP_F_LOS);
+ 
+ 	if (los)
+ 		sfp_sm_next(sfp, SFP_S_WAIT_LOS, 0);
+@@ -1519,18 +1523,22 @@ static void sfp_sm_link_check_los(struct sfp *sfp)
+ 
+ static bool sfp_los_event_active(struct sfp *sfp, unsigned int event)
+ {
+-	return (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_INVERTED) &&
+-		event == SFP_E_LOS_LOW) ||
+-	       (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_NORMAL) &&
+-		event == SFP_E_LOS_HIGH);
++	const __be16 los_inverted = cpu_to_be16(SFP_OPTIONS_LOS_INVERTED);
++	const __be16 los_normal = cpu_to_be16(SFP_OPTIONS_LOS_NORMAL);
++	__be16 los_options = sfp->id.ext.options & (los_inverted | los_normal);
++
++	return (los_options == los_inverted && event == SFP_E_LOS_LOW) ||
++	       (los_options == los_normal && event == SFP_E_LOS_HIGH);
+ }
+ 
+ static bool sfp_los_event_inactive(struct sfp *sfp, unsigned int event)
+ {
+-	return (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_INVERTED) &&
+-		event == SFP_E_LOS_HIGH) ||
+-	       (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_NORMAL) &&
+-		event == SFP_E_LOS_LOW);
++	const __be16 los_inverted = cpu_to_be16(SFP_OPTIONS_LOS_INVERTED);
++	const __be16 los_normal = cpu_to_be16(SFP_OPTIONS_LOS_NORMAL);
++	__be16 los_options = sfp->id.ext.options & (los_inverted | los_normal);
++
++	return (los_options == los_inverted && event == SFP_E_LOS_HIGH) ||
++	       (los_options == los_normal && event == SFP_E_LOS_LOW);
+ }
+ 
+ static void sfp_sm_fault(struct sfp *sfp, unsigned int next_state, bool warn)
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index b91c19b90b8b4..29bec07205142 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -1809,7 +1809,7 @@ static void lateeoi_ack_dynirq(struct irq_data *data)
+ 
+ 	if (VALID_EVTCHN(evtchn)) {
+ 		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
+-		event_handler_exit(info);
++		ack_dynirq(data);
+ 	}
+ }
+ 
+@@ -1820,7 +1820,7 @@ static void lateeoi_mask_ack_dynirq(struct irq_data *data)
+ 
+ 	if (VALID_EVTCHN(evtchn)) {
+ 		do_mask(info, EVT_MASK_REASON_EXPLICIT);
+-		event_handler_exit(info);
++		ack_dynirq(data);
+ 	}
+ }
+ 
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 6516051807b89..718533f0fb90b 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -280,6 +280,8 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
+ 		bio.bi_opf = dio_bio_write_op(iocb);
+ 		task_io_account_write(ret);
+ 	}
++	if (iocb->ki_flags & IOCB_NOWAIT)
++		bio.bi_opf |= REQ_NOWAIT;
+ 	if (iocb->ki_flags & IOCB_HIPRI)
+ 		bio_set_polled(&bio, iocb);
+ 
+@@ -433,6 +435,8 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
+ 			bio->bi_opf = dio_bio_write_op(iocb);
+ 			task_io_account_write(bio->bi_iter.bi_size);
+ 		}
++		if (iocb->ki_flags & IOCB_NOWAIT)
++			bio->bi_opf |= REQ_NOWAIT;
+ 
+ 		dio->size += bio->bi_iter.bi_size;
+ 		pos += bio->bi_iter.bi_size;
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index ddd40c96f7a23..077dc8c035a8b 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -169,8 +169,10 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ 	int error;
+ 
+ 	error = init_threads(sdp);
+-	if (error)
++	if (error) {
++		gfs2_withdraw_delayed(sdp);
+ 		return error;
++	}
+ 
+ 	j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
+ 	if (gfs2_withdrawn(sdp)) {
+@@ -767,11 +769,13 @@ void gfs2_freeze_func(struct work_struct *work)
+ static int gfs2_freeze(struct super_block *sb)
+ {
+ 	struct gfs2_sbd *sdp = sb->s_fs_info;
+-	int error = 0;
++	int error;
+ 
+ 	mutex_lock(&sdp->sd_freeze_mutex);
+-	if (atomic_read(&sdp->sd_freeze_state) != SFS_UNFROZEN)
++	if (atomic_read(&sdp->sd_freeze_state) != SFS_UNFROZEN) {
++		error = -EBUSY;
+ 		goto out;
++	}
+ 
+ 	for (;;) {
+ 		if (gfs2_withdrawn(sdp)) {
+@@ -812,10 +816,10 @@ static int gfs2_unfreeze(struct super_block *sb)
+ 	struct gfs2_sbd *sdp = sb->s_fs_info;
+ 
+ 	mutex_lock(&sdp->sd_freeze_mutex);
+-        if (atomic_read(&sdp->sd_freeze_state) != SFS_FROZEN ||
++	if (atomic_read(&sdp->sd_freeze_state) != SFS_FROZEN ||
+ 	    !gfs2_holder_initialized(&sdp->sd_freeze_gh)) {
+ 		mutex_unlock(&sdp->sd_freeze_mutex);
+-                return 0;
++		return -EINVAL;
+ 	}
+ 
+ 	gfs2_freeze_unlock(&sdp->sd_freeze_gh);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 0de27e75460d1..dc1b0f6fd49b5 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1439,7 +1439,7 @@ static void io_prep_async_work(struct io_kiocb *req)
+ 	if (req->flags & REQ_F_ISREG) {
+ 		if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
+ 			io_wq_hash_work(&req->work, file_inode(req->file));
+-	} else {
++	} else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
+ 		if (def->unbound_nonreg_file)
+ 			req->work.flags |= IO_WQ_WORK_UNBOUND;
+ 	}
+diff --git a/include/linux/host1x.h b/include/linux/host1x.h
+index ce59a6a6a0087..9eb77c87a83b0 100644
+--- a/include/linux/host1x.h
++++ b/include/linux/host1x.h
+@@ -320,7 +320,14 @@ static inline struct host1x_device *to_host1x_device(struct device *dev)
+ int host1x_device_init(struct host1x_device *device);
+ int host1x_device_exit(struct host1x_device *device);
+ 
+-int host1x_client_register(struct host1x_client *client);
++int __host1x_client_register(struct host1x_client *client,
++			     struct lock_class_key *key);
++#define host1x_client_register(class) \
++	({ \
++		static struct lock_class_key __key; \
++		__host1x_client_register(class, &__key); \
++	})
++
+ int host1x_client_unregister(struct host1x_client *client);
+ 
+ int host1x_client_suspend(struct host1x_client *client);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 82041bbf8fc2d..b1983c2aeb53c 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -3230,7 +3230,8 @@ ftrace_allocate_pages(unsigned long num_to_init)
+ 	pg = start_pg;
+ 	while (pg) {
+ 		order = get_count_order(pg->size / ENTRIES_PER_PAGE);
+-		free_pages((unsigned long)pg->records, order);
++		if (order >= 0)
++			free_pages((unsigned long)pg->records, order);
+ 		start_pg = pg->next;
+ 		kfree(pg);
+ 		pg = start_pg;
+@@ -6452,7 +6453,8 @@ void ftrace_release_mod(struct module *mod)
+ 		clear_mod_from_hashes(pg);
+ 
+ 		order = get_count_order(pg->size / ENTRIES_PER_PAGE);
+-		free_pages((unsigned long)pg->records, order);
++		if (order >= 0)
++			free_pages((unsigned long)pg->records, order);
+ 		tmp_page = pg->next;
+ 		kfree(pg);
+ 		ftrace_number_of_pages -= 1 << order;
+@@ -6812,7 +6814,8 @@ void ftrace_free_mem(struct module *mod, void *start_ptr, void *end_ptr)
+ 		if (!pg->index) {
+ 			*last_pg = pg->next;
+ 			order = get_count_order(pg->size / ENTRIES_PER_PAGE);
+-			free_pages((unsigned long)pg->records, order);
++			if (order >= 0)
++				free_pages((unsigned long)pg->records, order);
+ 			ftrace_number_of_pages -= 1 << order;
+ 			ftrace_number_of_groups--;
+ 			kfree(pg);
+diff --git a/lib/test_xarray.c b/lib/test_xarray.c
+index 8294f43f49816..8b1c318189ce8 100644
+--- a/lib/test_xarray.c
++++ b/lib/test_xarray.c
+@@ -1530,24 +1530,24 @@ static noinline void check_store_range(struct xarray *xa)
+ 
+ #ifdef CONFIG_XARRAY_MULTI
+ static void check_split_1(struct xarray *xa, unsigned long index,
+-							unsigned int order)
++				unsigned int order, unsigned int new_order)
+ {
+-	XA_STATE(xas, xa, index);
+-	void *entry;
+-	unsigned int i = 0;
++	XA_STATE_ORDER(xas, xa, index, new_order);
++	unsigned int i;
+ 
+ 	xa_store_order(xa, index, order, xa, GFP_KERNEL);
+ 
+ 	xas_split_alloc(&xas, xa, order, GFP_KERNEL);
+ 	xas_lock(&xas);
+ 	xas_split(&xas, xa, order);
++	for (i = 0; i < (1 << order); i += (1 << new_order))
++		__xa_store(xa, index + i, xa_mk_index(index + i), 0);
+ 	xas_unlock(&xas);
+ 
+-	xa_for_each(xa, index, entry) {
+-		XA_BUG_ON(xa, entry != xa);
+-		i++;
++	for (i = 0; i < (1 << order); i++) {
++		unsigned int val = index + (i & ~((1 << new_order) - 1));
++		XA_BUG_ON(xa, xa_load(xa, index + i) != xa_mk_index(val));
+ 	}
+-	XA_BUG_ON(xa, i != 1 << order);
+ 
+ 	xa_set_mark(xa, index, XA_MARK_0);
+ 	XA_BUG_ON(xa, !xa_get_mark(xa, index, XA_MARK_0));
+@@ -1557,14 +1557,16 @@ static void check_split_1(struct xarray *xa, unsigned long index,
+ 
+ static noinline void check_split(struct xarray *xa)
+ {
+-	unsigned int order;
++	unsigned int order, new_order;
+ 
+ 	XA_BUG_ON(xa, !xa_empty(xa));
+ 
+ 	for (order = 1; order < 2 * XA_CHUNK_SHIFT; order++) {
+-		check_split_1(xa, 0, order);
+-		check_split_1(xa, 1UL << order, order);
+-		check_split_1(xa, 3UL << order, order);
++		for (new_order = 0; new_order < order; new_order++) {
++			check_split_1(xa, 0, order, new_order);
++			check_split_1(xa, 1UL << order, order, new_order);
++			check_split_1(xa, 3UL << order, order, new_order);
++		}
+ 	}
+ }
+ #else
+diff --git a/lib/xarray.c b/lib/xarray.c
+index 5fa51614802ad..ed775dee1074c 100644
+--- a/lib/xarray.c
++++ b/lib/xarray.c
+@@ -1011,7 +1011,7 @@ void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order,
+ 
+ 	do {
+ 		unsigned int i;
+-		void *sibling;
++		void *sibling = NULL;
+ 		struct xa_node *node;
+ 
+ 		node = kmem_cache_alloc(radix_tree_node_cachep, gfp);
+@@ -1021,7 +1021,7 @@ void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order,
+ 		for (i = 0; i < XA_CHUNK_SIZE; i++) {
+ 			if ((i & mask) == 0) {
+ 				RCU_INIT_POINTER(node->slots[i], entry);
+-				sibling = xa_mk_sibling(0);
++				sibling = xa_mk_sibling(i);
+ 			} else {
+ 				RCU_INIT_POINTER(node->slots[i], sibling);
+ 			}
+diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c
+index d1e04d2b5170e..e0093411d85d6 100644
+--- a/net/ipv4/netfilter/arp_tables.c
++++ b/net/ipv4/netfilter/arp_tables.c
+@@ -1193,6 +1193,8 @@ static int translate_compat_table(struct net *net,
+ 	if (!newinfo)
+ 		goto out_unlock;
+ 
++	memset(newinfo->entries, 0, size);
++
+ 	newinfo->number = compatr->num_entries;
+ 	for (i = 0; i < NF_ARP_NUMHOOKS; i++) {
+ 		newinfo->hook_entry[i] = compatr->hook_entry[i];
+diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
+index f15bc21d73016..f77ea0dbe6562 100644
+--- a/net/ipv4/netfilter/ip_tables.c
++++ b/net/ipv4/netfilter/ip_tables.c
+@@ -1428,6 +1428,8 @@ translate_compat_table(struct net *net,
+ 	if (!newinfo)
+ 		goto out_unlock;
+ 
++	memset(newinfo->entries, 0, size);
++
+ 	newinfo->number = compatr->num_entries;
+ 	for (i = 0; i < NF_INET_NUMHOOKS; i++) {
+ 		newinfo->hook_entry[i] = compatr->hook_entry[i];
+diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
+index 2e2119bfcf137..eb2b5404806c6 100644
+--- a/net/ipv6/netfilter/ip6_tables.c
++++ b/net/ipv6/netfilter/ip6_tables.c
+@@ -1443,6 +1443,8 @@ translate_compat_table(struct net *net,
+ 	if (!newinfo)
+ 		goto out_unlock;
+ 
++	memset(newinfo->entries, 0, size);
++
+ 	newinfo->number = compatr->num_entries;
+ 	for (i = 0; i < NF_INET_NUMHOOKS; i++) {
+ 		newinfo->hook_entry[i] = compatr->hook_entry[i];
+diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
+index 6bd31a7a27fc5..92e9d4ebc5e8d 100644
+--- a/net/netfilter/x_tables.c
++++ b/net/netfilter/x_tables.c
+@@ -733,7 +733,7 @@ void xt_compat_match_from_user(struct xt_entry_match *m, void **dstptr,
+ {
+ 	const struct xt_match *match = m->u.kernel.match;
+ 	struct compat_xt_entry_match *cm = (struct compat_xt_entry_match *)m;
+-	int pad, off = xt_compat_match_offset(match);
++	int off = xt_compat_match_offset(match);
+ 	u_int16_t msize = cm->u.user.match_size;
+ 	char name[sizeof(m->u.user.name)];
+ 
+@@ -743,9 +743,6 @@ void xt_compat_match_from_user(struct xt_entry_match *m, void **dstptr,
+ 		match->compat_from_user(m->data, cm->data);
+ 	else
+ 		memcpy(m->data, cm->data, msize - sizeof(*cm));
+-	pad = XT_ALIGN(match->matchsize) - match->matchsize;
+-	if (pad > 0)
+-		memset(m->data + match->matchsize, 0, pad);
+ 
+ 	msize += off;
+ 	m->u.user.match_size = msize;
+@@ -1116,7 +1113,7 @@ void xt_compat_target_from_user(struct xt_entry_target *t, void **dstptr,
+ {
+ 	const struct xt_target *target = t->u.kernel.target;
+ 	struct compat_xt_entry_target *ct = (struct compat_xt_entry_target *)t;
+-	int pad, off = xt_compat_target_offset(target);
++	int off = xt_compat_target_offset(target);
+ 	u_int16_t tsize = ct->u.user.target_size;
+ 	char name[sizeof(t->u.user.name)];
+ 
+@@ -1126,9 +1123,6 @@ void xt_compat_target_from_user(struct xt_entry_target *t, void **dstptr,
+ 		target->compat_from_user(t->data, ct->data);
+ 	else
+ 		memcpy(t->data, ct->data, tsize - sizeof(*ct));
+-	pad = XT_ALIGN(target->targetsize) - target->targetsize;
+-	if (pad > 0)
+-		memset(t->data + target->targetsize, 0, pad);
+ 
+ 	tsize += off;
+ 	t->u.user.target_size = tsize;
+diff --git a/tools/kvm/kvm_stat/kvm_stat.service b/tools/kvm/kvm_stat/kvm_stat.service
+index 71aabaffe7791..8f13b843d5b4e 100644
+--- a/tools/kvm/kvm_stat/kvm_stat.service
++++ b/tools/kvm/kvm_stat/kvm_stat.service
+@@ -9,6 +9,7 @@ Type=simple
+ ExecStart=/usr/bin/kvm_stat -dtcz -s 10 -L /var/log/kvm_stat.csv
+ ExecReload=/bin/kill -HUP $MAINPID
+ Restart=always
++RestartSec=60s
+ SyslogIdentifier=kvm_stat
+ SyslogLevel=debug
+ 
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index f44ede437dc7f..e2537d5acab09 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -77,8 +77,7 @@ static inline bool replace_android_lib(const char *filename, char *newfilename)
+ 	if (strstarts(filename, "/system/lib/")) {
+ 		char *ndk, *app;
+ 		const char *arch;
+-		size_t ndk_length;
+-		size_t app_length;
++		int ndk_length, app_length;
+ 
+ 		ndk = getenv("NDK_ROOT");
+ 		app = getenv("APP_PLATFORM");
+@@ -106,8 +105,8 @@ static inline bool replace_android_lib(const char *filename, char *newfilename)
+ 		if (new_length > PATH_MAX)
+ 			return false;
+ 		snprintf(newfilename, new_length,
+-			"%s/platforms/%s/arch-%s/usr/lib/%s",
+-			ndk, app, arch, libname);
++			"%.*s/platforms/%.*s/arch-%s/usr/lib/%s",
++			ndk_length, ndk, app_length, app, arch, libname);
+ 
+ 		return true;
+ 	}
+diff --git a/tools/testing/radix-tree/idr-test.c b/tools/testing/radix-tree/idr-test.c
+index 3b796dd5e5772..6ce7460f3c7a9 100644
+--- a/tools/testing/radix-tree/idr-test.c
++++ b/tools/testing/radix-tree/idr-test.c
+@@ -301,16 +301,20 @@ void idr_find_test_1(int anchor_id, int throbber_id)
+ 	pthread_t throbber;
+ 	time_t start = time(NULL);
+ 
+-	pthread_create(&throbber, NULL, idr_throbber, &throbber_id);
+-
+ 	BUG_ON(idr_alloc(&find_idr, xa_mk_value(anchor_id), anchor_id,
+ 				anchor_id + 1, GFP_KERNEL) != anchor_id);
+ 
++	pthread_create(&throbber, NULL, idr_throbber, &throbber_id);
++
++	rcu_read_lock();
+ 	do {
+ 		int id = 0;
+ 		void *entry = idr_get_next(&find_idr, &id);
++		rcu_read_unlock();
+ 		BUG_ON(entry != xa_mk_value(id));
++		rcu_read_lock();
+ 	} while (time(NULL) < start + 11);
++	rcu_read_unlock();
+ 
+ 	pthread_join(throbber, NULL);
+ 
+@@ -577,6 +581,7 @@ void ida_tests(void)
+ 
+ int __weak main(void)
+ {
++	rcu_register_thread();
+ 	radix_tree_init();
+ 	idr_checks();
+ 	ida_tests();
+@@ -584,5 +589,6 @@ int __weak main(void)
+ 	rcu_barrier();
+ 	if (nr_allocated)
+ 		printf("nr_allocated = %d\n", nr_allocated);
++	rcu_unregister_thread();
+ 	return 0;
+ }
+diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c
+index 9eae0fb5a67d1..e00520cc63498 100644
+--- a/tools/testing/radix-tree/multiorder.c
++++ b/tools/testing/radix-tree/multiorder.c
+@@ -224,7 +224,9 @@ void multiorder_checks(void)
+ 
+ int __weak main(void)
+ {
++	rcu_register_thread();
+ 	radix_tree_init();
+ 	multiorder_checks();
++	rcu_unregister_thread();
+ 	return 0;
+ }
+diff --git a/tools/testing/radix-tree/xarray.c b/tools/testing/radix-tree/xarray.c
+index e61e43efe463c..f20e12cbbfd40 100644
+--- a/tools/testing/radix-tree/xarray.c
++++ b/tools/testing/radix-tree/xarray.c
+@@ -25,11 +25,13 @@ void xarray_tests(void)
+ 
+ int __weak main(void)
+ {
++	rcu_register_thread();
+ 	radix_tree_init();
+ 	xarray_tests();
+ 	radix_tree_cpu_dead(1);
+ 	rcu_barrier();
+ 	if (nr_allocated)
+ 		printf("nr_allocated = %d\n", nr_allocated);
++	rcu_unregister_thread();
+ 	return 0;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-04-21 11:42 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-04-21 11:42 UTC (permalink / raw
  To: gentoo-commits

commit:     669b6ce0423f386158d729e907a47a74fbd2c10f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 21 11:42:37 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 21 11:42:37 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=669b6ce0

Linux patch 5.10.32

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1031_linux-5.10.32.patch | 3223 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3227 insertions(+)

diff --git a/0000_README b/0000_README
index 69e2945..68ff733 100644
--- a/0000_README
+++ b/0000_README
@@ -167,6 +167,10 @@ Patch:  1030_linux-5.10.31.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.31
 
+Patch:  1031_linux-5.10.32.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.32
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1031_linux-5.10.32.patch b/1031_linux-5.10.32.patch
new file mode 100644
index 0000000..354de13
--- /dev/null
+++ b/1031_linux-5.10.32.patch
@@ -0,0 +1,3223 @@
+diff --git a/Makefile b/Makefile
+index c4c0b47e6edea..cad90171b4b9b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 31
++SUBLEVEL = 32
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arc/kernel/signal.c b/arch/arc/kernel/signal.c
+index 2be55fb96d870..98e575dbcce51 100644
+--- a/arch/arc/kernel/signal.c
++++ b/arch/arc/kernel/signal.c
+@@ -96,7 +96,7 @@ stash_usr_regs(struct rt_sigframe __user *sf, struct pt_regs *regs,
+ 			     sizeof(sf->uc.uc_mcontext.regs.scratch));
+ 	err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(sigset_t));
+ 
+-	return err;
++	return err ? -EFAULT : 0;
+ }
+ 
+ static int restore_usr_regs(struct pt_regs *regs, struct rt_sigframe __user *sf)
+@@ -110,7 +110,7 @@ static int restore_usr_regs(struct pt_regs *regs, struct rt_sigframe __user *sf)
+ 				&(sf->uc.uc_mcontext.regs.scratch),
+ 				sizeof(sf->uc.uc_mcontext.regs.scratch));
+ 	if (err)
+-		return err;
++		return -EFAULT;
+ 
+ 	set_current_blocked(&set);
+ 	regs->bta	= uregs.scratch.bta;
+diff --git a/arch/arm/boot/dts/omap4.dtsi b/arch/arm/boot/dts/omap4.dtsi
+index d6475cc6a91a7..049174086756d 100644
+--- a/arch/arm/boot/dts/omap4.dtsi
++++ b/arch/arm/boot/dts/omap4.dtsi
+@@ -22,6 +22,11 @@
+ 		i2c1 = &i2c2;
+ 		i2c2 = &i2c3;
+ 		i2c3 = &i2c4;
++		mmc0 = &mmc1;
++		mmc1 = &mmc2;
++		mmc2 = &mmc3;
++		mmc3 = &mmc4;
++		mmc4 = &mmc5;
+ 		serial0 = &uart1;
+ 		serial1 = &uart2;
+ 		serial2 = &uart3;
+diff --git a/arch/arm/boot/dts/omap44xx-clocks.dtsi b/arch/arm/boot/dts/omap44xx-clocks.dtsi
+index 532868591107b..1f1c04d8f4721 100644
+--- a/arch/arm/boot/dts/omap44xx-clocks.dtsi
++++ b/arch/arm/boot/dts/omap44xx-clocks.dtsi
+@@ -770,14 +770,6 @@
+ 		ti,max-div = <2>;
+ 	};
+ 
+-	sha2md5_fck: sha2md5_fck@15c8 {
+-		#clock-cells = <0>;
+-		compatible = "ti,gate-clock";
+-		clocks = <&l3_div_ck>;
+-		ti,bit-shift = <1>;
+-		reg = <0x15c8>;
+-	};
+-
+ 	usb_phy_cm_clk32k: usb_phy_cm_clk32k@640 {
+ 		#clock-cells = <0>;
+ 		compatible = "ti,gate-clock";
+diff --git a/arch/arm/boot/dts/omap5.dtsi b/arch/arm/boot/dts/omap5.dtsi
+index 2bf2e5839a7f1..530210db27198 100644
+--- a/arch/arm/boot/dts/omap5.dtsi
++++ b/arch/arm/boot/dts/omap5.dtsi
+@@ -25,6 +25,11 @@
+ 		i2c2 = &i2c3;
+ 		i2c3 = &i2c4;
+ 		i2c4 = &i2c5;
++		mmc0 = &mmc1;
++		mmc1 = &mmc2;
++		mmc2 = &mmc3;
++		mmc3 = &mmc4;
++		mmc4 = &mmc5;
+ 		serial0 = &uart1;
+ 		serial1 = &uart2;
+ 		serial2 = &uart3;
+diff --git a/arch/arm/mach-footbridge/cats-pci.c b/arch/arm/mach-footbridge/cats-pci.c
+index 0b2fd7e2e9b42..90b1e9be430e9 100644
+--- a/arch/arm/mach-footbridge/cats-pci.c
++++ b/arch/arm/mach-footbridge/cats-pci.c
+@@ -15,14 +15,14 @@
+ #include <asm/mach-types.h>
+ 
+ /* cats host-specific stuff */
+-static int irqmap_cats[] __initdata = { IRQ_PCI, IRQ_IN0, IRQ_IN1, IRQ_IN3 };
++static int irqmap_cats[] = { IRQ_PCI, IRQ_IN0, IRQ_IN1, IRQ_IN3 };
+ 
+ static u8 cats_no_swizzle(struct pci_dev *dev, u8 *pin)
+ {
+ 	return 0;
+ }
+ 
+-static int __init cats_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
++static int cats_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+ {
+ 	if (dev->irq >= 255)
+ 		return -1;	/* not a valid interrupt. */
+diff --git a/arch/arm/mach-footbridge/ebsa285-pci.c b/arch/arm/mach-footbridge/ebsa285-pci.c
+index 6f28aaa9ca79b..c3f280d08fa7f 100644
+--- a/arch/arm/mach-footbridge/ebsa285-pci.c
++++ b/arch/arm/mach-footbridge/ebsa285-pci.c
+@@ -14,9 +14,9 @@
+ #include <asm/mach/pci.h>
+ #include <asm/mach-types.h>
+ 
+-static int irqmap_ebsa285[] __initdata = { IRQ_IN3, IRQ_IN1, IRQ_IN0, IRQ_PCI };
++static int irqmap_ebsa285[] = { IRQ_IN3, IRQ_IN1, IRQ_IN0, IRQ_PCI };
+ 
+-static int __init ebsa285_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
++static int ebsa285_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+ {
+ 	if (dev->vendor == PCI_VENDOR_ID_CONTAQ &&
+ 	    dev->device == PCI_DEVICE_ID_CONTAQ_82C693)
+diff --git a/arch/arm/mach-footbridge/netwinder-pci.c b/arch/arm/mach-footbridge/netwinder-pci.c
+index 9473aa0305e5f..e8304392074b8 100644
+--- a/arch/arm/mach-footbridge/netwinder-pci.c
++++ b/arch/arm/mach-footbridge/netwinder-pci.c
+@@ -18,7 +18,7 @@
+  * We now use the slot ID instead of the device identifiers to select
+  * which interrupt is routed where.
+  */
+-static int __init netwinder_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
++static int netwinder_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+ {
+ 	switch (slot) {
+ 	case 0:  /* host bridge */
+diff --git a/arch/arm/mach-footbridge/personal-pci.c b/arch/arm/mach-footbridge/personal-pci.c
+index 4391e433a4b2f..9d19aa98a663e 100644
+--- a/arch/arm/mach-footbridge/personal-pci.c
++++ b/arch/arm/mach-footbridge/personal-pci.c
+@@ -14,13 +14,12 @@
+ #include <asm/mach/pci.h>
+ #include <asm/mach-types.h>
+ 
+-static int irqmap_personal_server[] __initdata = {
++static int irqmap_personal_server[] = {
+ 	IRQ_IN0, IRQ_IN1, IRQ_IN2, IRQ_IN3, 0, 0, 0,
+ 	IRQ_DOORBELLHOST, IRQ_DMA1, IRQ_DMA2, IRQ_PCI
+ };
+ 
+-static int __init personal_server_map_irq(const struct pci_dev *dev, u8 slot,
+-	u8 pin)
++static int personal_server_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+ {
+ 	unsigned char line;
+ 
+diff --git a/arch/arm/mach-keystone/keystone.c b/arch/arm/mach-keystone/keystone.c
+index 09a65c2dfd732..b8fa01f9516eb 100644
+--- a/arch/arm/mach-keystone/keystone.c
++++ b/arch/arm/mach-keystone/keystone.c
+@@ -65,7 +65,7 @@ static void __init keystone_init(void)
+ static long long __init keystone_pv_fixup(void)
+ {
+ 	long long offset;
+-	phys_addr_t mem_start, mem_end;
++	u64 mem_start, mem_end;
+ 
+ 	mem_start = memblock_start_of_DRAM();
+ 	mem_end = memblock_end_of_DRAM();
+@@ -78,7 +78,7 @@ static long long __init keystone_pv_fixup(void)
+ 	if (mem_start < KEYSTONE_HIGH_PHYS_START ||
+ 	    mem_end   > KEYSTONE_HIGH_PHYS_END) {
+ 		pr_crit("Invalid address space for memory (%08llx-%08llx)\n",
+-		        (u64)mem_start, (u64)mem_end);
++		        mem_start, mem_end);
+ 		return 0;
+ 	}
+ 
+diff --git a/arch/arm/mach-omap1/ams-delta-fiq-handler.S b/arch/arm/mach-omap1/ams-delta-fiq-handler.S
+index 14a6c3eb32985..f745a65d3bd7a 100644
+--- a/arch/arm/mach-omap1/ams-delta-fiq-handler.S
++++ b/arch/arm/mach-omap1/ams-delta-fiq-handler.S
+@@ -15,6 +15,7 @@
+ #include <linux/platform_data/gpio-omap.h>
+ 
+ #include <asm/assembler.h>
++#include <asm/irq.h>
+ 
+ #include "ams-delta-fiq.h"
+ #include "board-ams-delta.h"
+diff --git a/arch/arm/mach-omap2/board-generic.c b/arch/arm/mach-omap2/board-generic.c
+index 7290f033fd2da..1610c567a6a3a 100644
+--- a/arch/arm/mach-omap2/board-generic.c
++++ b/arch/arm/mach-omap2/board-generic.c
+@@ -33,7 +33,7 @@ static void __init __maybe_unused omap_generic_init(void)
+ }
+ 
+ /* Clocks are needed early, see drivers/clocksource for the rest */
+-void __init __maybe_unused omap_init_time_of(void)
++static void __init __maybe_unused omap_init_time_of(void)
+ {
+ 	omap_clk_init();
+ 	timer_probe();
+diff --git a/arch/arm/mach-omap2/sr_device.c b/arch/arm/mach-omap2/sr_device.c
+index 17b66f0d0deef..605925684b0aa 100644
+--- a/arch/arm/mach-omap2/sr_device.c
++++ b/arch/arm/mach-omap2/sr_device.c
+@@ -188,7 +188,7 @@ static const char * const dra7_sr_instances[] = {
+ 
+ int __init omap_devinit_smartreflex(void)
+ {
+-	const char * const *sr_inst;
++	const char * const *sr_inst = NULL;
+ 	int i, nr_sr = 0;
+ 
+ 	if (soc_is_omap44xx()) {
+diff --git a/arch/arm/mm/pmsa-v7.c b/arch/arm/mm/pmsa-v7.c
+index 88950e41a3a9e..59d916ccdf25f 100644
+--- a/arch/arm/mm/pmsa-v7.c
++++ b/arch/arm/mm/pmsa-v7.c
+@@ -235,6 +235,7 @@ void __init pmsav7_adjust_lowmem_bounds(void)
+ 	phys_addr_t mem_end;
+ 	phys_addr_t reg_start, reg_end;
+ 	unsigned int mem_max_regions;
++	bool first = true;
+ 	int num;
+ 	u64 i;
+ 
+@@ -263,7 +264,7 @@ void __init pmsav7_adjust_lowmem_bounds(void)
+ #endif
+ 
+ 	for_each_mem_range(i, &reg_start, &reg_end) {
+-		if (i == 0) {
++		if (first) {
+ 			phys_addr_t phys_offset = PHYS_OFFSET;
+ 
+ 			/*
+@@ -275,6 +276,7 @@ void __init pmsav7_adjust_lowmem_bounds(void)
+ 			mem_start = reg_start;
+ 			mem_end = reg_end;
+ 			specified_mem_size = mem_end - mem_start;
++			first = false;
+ 		} else {
+ 			/*
+ 			 * memblock auto merges contiguous blocks, remove
+diff --git a/arch/arm/mm/pmsa-v8.c b/arch/arm/mm/pmsa-v8.c
+index 2de019f7503e8..8359748a19a11 100644
+--- a/arch/arm/mm/pmsa-v8.c
++++ b/arch/arm/mm/pmsa-v8.c
+@@ -95,10 +95,11 @@ void __init pmsav8_adjust_lowmem_bounds(void)
+ {
+ 	phys_addr_t mem_end;
+ 	phys_addr_t reg_start, reg_end;
++	bool first = true;
+ 	u64 i;
+ 
+ 	for_each_mem_range(i, &reg_start, &reg_end) {
+-		if (i == 0) {
++		if (first) {
+ 			phys_addr_t phys_offset = PHYS_OFFSET;
+ 
+ 			/*
+@@ -107,6 +108,7 @@ void __init pmsav8_adjust_lowmem_bounds(void)
+ 			if (reg_start != phys_offset)
+ 				panic("First memory bank must be contiguous from PHYS_OFFSET");
+ 			mem_end = reg_end;
++			first = false;
+ 		} else {
+ 			/*
+ 			 * memblock auto merges contiguous blocks, remove
+diff --git a/arch/arm/probes/uprobes/core.c b/arch/arm/probes/uprobes/core.c
+index c4b49b322e8a8..f5f790c6e5f89 100644
+--- a/arch/arm/probes/uprobes/core.c
++++ b/arch/arm/probes/uprobes/core.c
+@@ -204,7 +204,7 @@ unsigned long uprobe_get_swbp_addr(struct pt_regs *regs)
+ static struct undef_hook uprobes_arm_break_hook = {
+ 	.instr_mask	= 0x0fffffff,
+ 	.instr_val	= (UPROBE_SWBP_ARM_INSN & 0x0fffffff),
+-	.cpsr_mask	= MODE_MASK,
++	.cpsr_mask	= (PSR_T_BIT | MODE_MASK),
+ 	.cpsr_val	= USR_MODE,
+ 	.fn		= uprobe_trap_handler,
+ };
+@@ -212,7 +212,7 @@ static struct undef_hook uprobes_arm_break_hook = {
+ static struct undef_hook uprobes_arm_ss_hook = {
+ 	.instr_mask	= 0x0fffffff,
+ 	.instr_val	= (UPROBE_SS_ARM_INSN & 0x0fffffff),
+-	.cpsr_mask	= MODE_MASK,
++	.cpsr_mask	= (PSR_T_BIT | MODE_MASK),
+ 	.cpsr_val	= USR_MODE,
+ 	.fn		= uprobe_trap_handler,
+ };
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index c1be64228327c..5e5cf3af63515 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1390,10 +1390,13 @@ config ARM64_PAN
+ 	 The feature is detected at runtime, and will remain as a 'nop'
+ 	 instruction if the cpu does not implement the feature.
+ 
++config AS_HAS_LSE_ATOMICS
++	def_bool $(as-instr,.arch_extension lse)
++
+ config ARM64_LSE_ATOMICS
+ 	bool
+ 	default ARM64_USE_LSE_ATOMICS
+-	depends on $(as-instr,.arch_extension lse)
++	depends on AS_HAS_LSE_ATOMICS
+ 
+ config ARM64_USE_LSE_ATOMICS
+ 	bool "Atomic instructions"
+@@ -1667,6 +1670,7 @@ config ARM64_MTE
+ 	bool "Memory Tagging Extension support"
+ 	default y
+ 	depends on ARM64_AS_HAS_MTE && ARM64_TAGGED_ADDR_ABI
++	depends on AS_HAS_LSE_ATOMICS
+ 	select ARCH_USES_HIGH_VMA_FLAGS
+ 	help
+ 	  Memory Tagging (part of the ARMv8.5 Extensions) provides
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-lts.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-lts.dts
+index 302e24be0a318..a1f621b388fe7 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-lts.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-lts.dts
+@@ -8,3 +8,7 @@
+ 	compatible = "pine64,pine64-lts", "allwinner,sun50i-r18",
+ 		     "allwinner,sun50i-a64";
+ };
++
++&mmc0 {
++	cd-gpios = <&pio 5 6 GPIO_ACTIVE_LOW>; /* PF6 push-push switch */
++};
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi
+index 3402cec87035b..df62044ff7a7a 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi
+@@ -34,7 +34,7 @@
+ 	vmmc-supply = <&reg_dcdc1>;
+ 	disable-wp;
+ 	bus-width = <4>;
+-	cd-gpios = <&pio 5 6 GPIO_ACTIVE_LOW>; /* PF6 */
++	cd-gpios = <&pio 5 6 GPIO_ACTIVE_HIGH>; /* PF6 push-pull switch */
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts b/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts
+index 7c9dbde645b52..e8163c572daba 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts
+@@ -289,10 +289,6 @@
+ 	vcc-pm-supply = <&reg_aldo1>;
+ };
+ 
+-&rtc {
+-	clocks = <&ext_osc32k>;
+-};
+-
+ &spdif {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
+index 619db9b4c9d5c..3cb3c4ab3ea56 100644
+--- a/arch/arm64/include/asm/alternative.h
++++ b/arch/arm64/include/asm/alternative.h
+@@ -119,9 +119,9 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
+ 	.popsection
+ 	.subsection 1
+ 663:	\insn2
+-664:	.previous
+-	.org	. - (664b-663b) + (662b-661b)
++664:	.org	. - (664b-663b) + (662b-661b)
+ 	.org	. - (662b-661b) + (664b-663b)
++	.previous
+ 	.endif
+ .endm
+ 
+@@ -191,11 +191,11 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
+  */
+ .macro alternative_endif
+ 664:
++	.org	. - (664b-663b) + (662b-661b)
++	.org	. - (662b-661b) + (664b-663b)
+ 	.if .Lasm_alt_mode==0
+ 	.previous
+ 	.endif
+-	.org	. - (664b-663b) + (662b-661b)
+-	.org	. - (662b-661b) + (664b-663b)
+ .endm
+ 
+ /*
+diff --git a/arch/arm64/include/asm/word-at-a-time.h b/arch/arm64/include/asm/word-at-a-time.h
+index 3333950b59093..ea487218db790 100644
+--- a/arch/arm64/include/asm/word-at-a-time.h
++++ b/arch/arm64/include/asm/word-at-a-time.h
+@@ -53,7 +53,7 @@ static inline unsigned long find_zero(unsigned long mask)
+  */
+ static inline unsigned long load_unaligned_zeropad(const void *addr)
+ {
+-	unsigned long ret, offset;
++	unsigned long ret, tmp;
+ 
+ 	/* Load word from unaligned pointer addr */
+ 	asm(
+@@ -61,9 +61,9 @@ static inline unsigned long load_unaligned_zeropad(const void *addr)
+ 	"2:\n"
+ 	"	.pushsection .fixup,\"ax\"\n"
+ 	"	.align 2\n"
+-	"3:	and	%1, %2, #0x7\n"
+-	"	bic	%2, %2, #0x7\n"
+-	"	ldr	%0, [%2]\n"
++	"3:	bic	%1, %2, #0x7\n"
++	"	ldr	%0, [%1]\n"
++	"	and	%1, %2, #0x7\n"
+ 	"	lsl	%1, %1, #0x3\n"
+ #ifndef __AARCH64EB__
+ 	"	lsr	%0, %0, %1\n"
+@@ -73,7 +73,7 @@ static inline unsigned long load_unaligned_zeropad(const void *addr)
+ 	"	b	2b\n"
+ 	"	.popsection\n"
+ 	_ASM_EXTABLE(1b, 3b)
+-	: "=&r" (ret), "=&r" (offset)
++	: "=&r" (ret), "=&r" (tmp)
+ 	: "r" (addr), "Q" (*(unsigned long *)addr));
+ 
+ 	return ret;
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index d72c818b019ca..2da82c139e1cd 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -148,16 +148,18 @@ alternative_cb_end
+ 	.endm
+ 
+ 	/* Check for MTE asynchronous tag check faults */
+-	.macro check_mte_async_tcf, flgs, tmp
++	.macro check_mte_async_tcf, tmp, ti_flags
+ #ifdef CONFIG_ARM64_MTE
++	.arch_extension lse
+ alternative_if_not ARM64_MTE
+ 	b	1f
+ alternative_else_nop_endif
+ 	mrs_s	\tmp, SYS_TFSRE0_EL1
+ 	tbz	\tmp, #SYS_TFSR_EL1_TF0_SHIFT, 1f
+ 	/* Asynchronous TCF occurred for TTBR0 access, set the TI flag */
+-	orr	\flgs, \flgs, #_TIF_MTE_ASYNC_FAULT
+-	str	\flgs, [tsk, #TSK_TI_FLAGS]
++	mov	\tmp, #_TIF_MTE_ASYNC_FAULT
++	add	\ti_flags, tsk, #TSK_TI_FLAGS
++	stset	\tmp, [\ti_flags]
+ 	msr_s	SYS_TFSRE0_EL1, xzr
+ 1:
+ #endif
+@@ -207,7 +209,7 @@ alternative_else_nop_endif
+ 	disable_step_tsk x19, x20
+ 
+ 	/* Check for asynchronous tag check faults in user space */
+-	check_mte_async_tcf x19, x22
++	check_mte_async_tcf x22, x23
+ 	apply_ssbd 1, x22, x23
+ 
+ 	ptrauth_keys_install_kernel tsk, x20, x22, x23
+diff --git a/arch/ia64/configs/generic_defconfig b/arch/ia64/configs/generic_defconfig
+index ca0d596c800d8..8916a2850c48b 100644
+--- a/arch/ia64/configs/generic_defconfig
++++ b/arch/ia64/configs/generic_defconfig
+@@ -55,8 +55,6 @@ CONFIG_CHR_DEV_SG=m
+ CONFIG_SCSI_FC_ATTRS=y
+ CONFIG_SCSI_SYM53C8XX_2=y
+ CONFIG_SCSI_QLOGIC_1280=y
+-CONFIG_ATA=y
+-CONFIG_ATA_PIIX=y
+ CONFIG_SATA_VITESSE=y
+ CONFIG_MD=y
+ CONFIG_BLK_DEV_MD=m
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index df7fccf76df69..f7abd118d23d3 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -144,7 +144,7 @@ config ARCH_FLATMEM_ENABLE
+ config ARCH_SPARSEMEM_ENABLE
+ 	def_bool y
+ 	depends on MMU
+-	select SPARSEMEM_STATIC if 32BIT && SPARSMEM
++	select SPARSEMEM_STATIC if 32BIT && SPARSEMEM
+ 	select SPARSEMEM_VMEMMAP_ENABLE if 64BIT
+ 
+ config ARCH_SELECT_MEMORY_MODEL
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index d23795057c4f1..28c89fce0dab8 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -1051,9 +1051,6 @@ void __init setup_arch(char **cmdline_p)
+ 
+ 	cleanup_highmap();
+ 
+-	/* Look for ACPI tables and reserve memory occupied by them. */
+-	acpi_boot_table_init();
+-
+ 	memblock_set_current_limit(ISA_END_ADDRESS);
+ 	e820__memblock_setup();
+ 
+@@ -1132,6 +1129,8 @@ void __init setup_arch(char **cmdline_p)
+ 	reserve_initrd();
+ 
+ 	acpi_table_upgrade();
++	/* Look for ACPI tables and reserve memory occupied by them. */
++	acpi_boot_table_init();
+ 
+ 	vsmp_init();
+ 
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index f3eca45267781..15532feb19f10 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -3329,7 +3329,11 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
+ 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+ 	enum vm_entry_failure_code entry_failure_code;
+ 	bool evaluate_pending_interrupts;
+-	u32 exit_reason, failed_index;
++	union vmx_exit_reason exit_reason = {
++		.basic = EXIT_REASON_INVALID_STATE,
++		.failed_vmentry = 1,
++	};
++	u32 failed_index;
+ 
+ 	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
+ 		kvm_vcpu_flush_tlb_current(vcpu);
+@@ -3381,7 +3385,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
+ 
+ 		if (nested_vmx_check_guest_state(vcpu, vmcs12,
+ 						 &entry_failure_code)) {
+-			exit_reason = EXIT_REASON_INVALID_STATE;
++			exit_reason.basic = EXIT_REASON_INVALID_STATE;
+ 			vmcs12->exit_qualification = entry_failure_code;
+ 			goto vmentry_fail_vmexit;
+ 		}
+@@ -3392,7 +3396,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
+ 		vcpu->arch.tsc_offset += vmcs12->tsc_offset;
+ 
+ 	if (prepare_vmcs02(vcpu, vmcs12, &entry_failure_code)) {
+-		exit_reason = EXIT_REASON_INVALID_STATE;
++		exit_reason.basic = EXIT_REASON_INVALID_STATE;
+ 		vmcs12->exit_qualification = entry_failure_code;
+ 		goto vmentry_fail_vmexit_guest_mode;
+ 	}
+@@ -3402,7 +3406,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
+ 						   vmcs12->vm_entry_msr_load_addr,
+ 						   vmcs12->vm_entry_msr_load_count);
+ 		if (failed_index) {
+-			exit_reason = EXIT_REASON_MSR_LOAD_FAIL;
++			exit_reason.basic = EXIT_REASON_MSR_LOAD_FAIL;
+ 			vmcs12->exit_qualification = failed_index;
+ 			goto vmentry_fail_vmexit_guest_mode;
+ 		}
+@@ -3470,7 +3474,7 @@ vmentry_fail_vmexit:
+ 		return NVMX_VMENTRY_VMEXIT;
+ 
+ 	load_vmcs12_host_state(vcpu, vmcs12);
+-	vmcs12->vm_exit_reason = exit_reason | VMX_EXIT_REASONS_FAILED_VMENTRY;
++	vmcs12->vm_exit_reason = exit_reason.full;
+ 	if (enable_shadow_vmcs || vmx->nested.hv_evmcs)
+ 		vmx->nested.need_vmcs12_to_shadow_sync = true;
+ 	return NVMX_VMENTRY_VMEXIT;
+@@ -5533,7 +5537,12 @@ static int handle_vmfunc(struct kvm_vcpu *vcpu)
+ 	return kvm_skip_emulated_instruction(vcpu);
+ 
+ fail:
+-	nested_vmx_vmexit(vcpu, vmx->exit_reason,
++	/*
++	 * This is effectively a reflected VM-Exit, as opposed to a synthesized
++	 * nested VM-Exit.  Pass the original exit reason, i.e. don't hardcode
++	 * EXIT_REASON_VMFUNC as the exit reason.
++	 */
++	nested_vmx_vmexit(vcpu, vmx->exit_reason.full,
+ 			  vmx_get_intr_info(vcpu),
+ 			  vmx_get_exit_qual(vcpu));
+ 	return 1;
+@@ -5601,7 +5610,8 @@ static bool nested_vmx_exit_handled_io(struct kvm_vcpu *vcpu,
+  * MSR bitmap. This may be the case even when L0 doesn't use MSR bitmaps.
+  */
+ static bool nested_vmx_exit_handled_msr(struct kvm_vcpu *vcpu,
+-	struct vmcs12 *vmcs12, u32 exit_reason)
++					struct vmcs12 *vmcs12,
++					union vmx_exit_reason exit_reason)
+ {
+ 	u32 msr_index = kvm_rcx_read(vcpu);
+ 	gpa_t bitmap;
+@@ -5615,7 +5625,7 @@ static bool nested_vmx_exit_handled_msr(struct kvm_vcpu *vcpu,
+ 	 * First we need to figure out which of the four to use:
+ 	 */
+ 	bitmap = vmcs12->msr_bitmap;
+-	if (exit_reason == EXIT_REASON_MSR_WRITE)
++	if (exit_reason.basic == EXIT_REASON_MSR_WRITE)
+ 		bitmap += 2048;
+ 	if (msr_index >= 0xc0000000) {
+ 		msr_index -= 0xc0000000;
+@@ -5752,11 +5762,12 @@ static bool nested_vmx_exit_handled_mtf(struct vmcs12 *vmcs12)
+  * Return true if L0 wants to handle an exit from L2 regardless of whether or not
+  * L1 wants the exit.  Only call this when in is_guest_mode (L2).
+  */
+-static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu, u32 exit_reason)
++static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
++				     union vmx_exit_reason exit_reason)
+ {
+ 	u32 intr_info;
+ 
+-	switch ((u16)exit_reason) {
++	switch ((u16)exit_reason.basic) {
+ 	case EXIT_REASON_EXCEPTION_NMI:
+ 		intr_info = vmx_get_intr_info(vcpu);
+ 		if (is_nmi(intr_info))
+@@ -5812,12 +5823,13 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu, u32 exit_reason)
+  * Return 1 if L1 wants to intercept an exit from L2.  Only call this when in
+  * is_guest_mode (L2).
+  */
+-static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu, u32 exit_reason)
++static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu,
++				     union vmx_exit_reason exit_reason)
+ {
+ 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+ 	u32 intr_info;
+ 
+-	switch ((u16)exit_reason) {
++	switch ((u16)exit_reason.basic) {
+ 	case EXIT_REASON_EXCEPTION_NMI:
+ 		intr_info = vmx_get_intr_info(vcpu);
+ 		if (is_nmi(intr_info))
+@@ -5936,7 +5948,7 @@ static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu, u32 exit_reason)
+ bool nested_vmx_reflect_vmexit(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+-	u32 exit_reason = vmx->exit_reason;
++	union vmx_exit_reason exit_reason = vmx->exit_reason;
+ 	unsigned long exit_qual;
+ 	u32 exit_intr_info;
+ 
+@@ -5955,7 +5967,7 @@ bool nested_vmx_reflect_vmexit(struct kvm_vcpu *vcpu)
+ 		goto reflect_vmexit;
+ 	}
+ 
+-	trace_kvm_nested_vmexit(exit_reason, vcpu, KVM_ISA_VMX);
++	trace_kvm_nested_vmexit(exit_reason.full, vcpu, KVM_ISA_VMX);
+ 
+ 	/* If L0 (KVM) wants the exit, it trumps L1's desires. */
+ 	if (nested_vmx_l0_wants_exit(vcpu, exit_reason))
+@@ -5981,7 +5993,7 @@ bool nested_vmx_reflect_vmexit(struct kvm_vcpu *vcpu)
+ 	exit_qual = vmx_get_exit_qual(vcpu);
+ 
+ reflect_vmexit:
+-	nested_vmx_vmexit(vcpu, exit_reason, exit_intr_info, exit_qual);
++	nested_vmx_vmexit(vcpu, exit_reason.full, exit_intr_info, exit_qual);
+ 	return true;
+ }
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 82af43e14b09c..f8835cabf29f3 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1578,7 +1578,7 @@ static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
+ 	 * i.e. we end up advancing IP with some random value.
+ 	 */
+ 	if (!static_cpu_has(X86_FEATURE_HYPERVISOR) ||
+-	    to_vmx(vcpu)->exit_reason != EXIT_REASON_EPT_MISCONFIG) {
++	    to_vmx(vcpu)->exit_reason.basic != EXIT_REASON_EPT_MISCONFIG) {
+ 		orig_rip = kvm_rip_read(vcpu);
+ 		rip = orig_rip + vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
+ #ifdef CONFIG_X86_64
+@@ -5687,7 +5687,7 @@ static void vmx_get_exit_info(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2,
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 
+ 	*info1 = vmx_get_exit_qual(vcpu);
+-	if (!(vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY)) {
++	if (!(vmx->exit_reason.failed_vmentry)) {
+ 		*info2 = vmx->idt_vectoring_info;
+ 		*intr_info = vmx_get_intr_info(vcpu);
+ 		if (is_exception_with_error_code(*intr_info))
+@@ -5931,8 +5931,9 @@ void dump_vmcs(void)
+ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+-	u32 exit_reason = vmx->exit_reason;
++	union vmx_exit_reason exit_reason = vmx->exit_reason;
+ 	u32 vectoring_info = vmx->idt_vectoring_info;
++	u16 exit_handler_index;
+ 
+ 	/*
+ 	 * Flush logged GPAs PML buffer, this will make dirty_bitmap more
+@@ -5974,11 +5975,11 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
+ 			return 1;
+ 	}
+ 
+-	if (exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY) {
++	if (exit_reason.failed_vmentry) {
+ 		dump_vmcs();
+ 		vcpu->run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+ 		vcpu->run->fail_entry.hardware_entry_failure_reason
+-			= exit_reason;
++			= exit_reason.full;
+ 		vcpu->run->fail_entry.cpu = vcpu->arch.last_vmentry_cpu;
+ 		return 0;
+ 	}
+@@ -6000,24 +6001,24 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
+ 	 * will cause infinite loop.
+ 	 */
+ 	if ((vectoring_info & VECTORING_INFO_VALID_MASK) &&
+-			(exit_reason != EXIT_REASON_EXCEPTION_NMI &&
+-			exit_reason != EXIT_REASON_EPT_VIOLATION &&
+-			exit_reason != EXIT_REASON_PML_FULL &&
+-			exit_reason != EXIT_REASON_APIC_ACCESS &&
+-			exit_reason != EXIT_REASON_TASK_SWITCH)) {
++	    (exit_reason.basic != EXIT_REASON_EXCEPTION_NMI &&
++	     exit_reason.basic != EXIT_REASON_EPT_VIOLATION &&
++	     exit_reason.basic != EXIT_REASON_PML_FULL &&
++	     exit_reason.basic != EXIT_REASON_APIC_ACCESS &&
++	     exit_reason.basic != EXIT_REASON_TASK_SWITCH)) {
++		int ndata = 3;
++
+ 		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+ 		vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_DELIVERY_EV;
+-		vcpu->run->internal.ndata = 3;
+ 		vcpu->run->internal.data[0] = vectoring_info;
+-		vcpu->run->internal.data[1] = exit_reason;
++		vcpu->run->internal.data[1] = exit_reason.full;
+ 		vcpu->run->internal.data[2] = vcpu->arch.exit_qualification;
+-		if (exit_reason == EXIT_REASON_EPT_MISCONFIG) {
+-			vcpu->run->internal.ndata++;
+-			vcpu->run->internal.data[3] =
++		if (exit_reason.basic == EXIT_REASON_EPT_MISCONFIG) {
++			vcpu->run->internal.data[ndata++] =
+ 				vmcs_read64(GUEST_PHYSICAL_ADDRESS);
+ 		}
+-		vcpu->run->internal.data[vcpu->run->internal.ndata++] =
+-			vcpu->arch.last_vmentry_cpu;
++		vcpu->run->internal.data[ndata++] = vcpu->arch.last_vmentry_cpu;
++		vcpu->run->internal.ndata = ndata;
+ 		return 0;
+ 	}
+ 
+@@ -6043,38 +6044,39 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
+ 	if (exit_fastpath != EXIT_FASTPATH_NONE)
+ 		return 1;
+ 
+-	if (exit_reason >= kvm_vmx_max_exit_handlers)
++	if (exit_reason.basic >= kvm_vmx_max_exit_handlers)
+ 		goto unexpected_vmexit;
+ #ifdef CONFIG_RETPOLINE
+-	if (exit_reason == EXIT_REASON_MSR_WRITE)
++	if (exit_reason.basic == EXIT_REASON_MSR_WRITE)
+ 		return kvm_emulate_wrmsr(vcpu);
+-	else if (exit_reason == EXIT_REASON_PREEMPTION_TIMER)
++	else if (exit_reason.basic == EXIT_REASON_PREEMPTION_TIMER)
+ 		return handle_preemption_timer(vcpu);
+-	else if (exit_reason == EXIT_REASON_INTERRUPT_WINDOW)
++	else if (exit_reason.basic == EXIT_REASON_INTERRUPT_WINDOW)
+ 		return handle_interrupt_window(vcpu);
+-	else if (exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT)
++	else if (exit_reason.basic == EXIT_REASON_EXTERNAL_INTERRUPT)
+ 		return handle_external_interrupt(vcpu);
+-	else if (exit_reason == EXIT_REASON_HLT)
++	else if (exit_reason.basic == EXIT_REASON_HLT)
+ 		return kvm_emulate_halt(vcpu);
+-	else if (exit_reason == EXIT_REASON_EPT_MISCONFIG)
++	else if (exit_reason.basic == EXIT_REASON_EPT_MISCONFIG)
+ 		return handle_ept_misconfig(vcpu);
+ #endif
+ 
+-	exit_reason = array_index_nospec(exit_reason,
+-					 kvm_vmx_max_exit_handlers);
+-	if (!kvm_vmx_exit_handlers[exit_reason])
++	exit_handler_index = array_index_nospec((u16)exit_reason.basic,
++						kvm_vmx_max_exit_handlers);
++	if (!kvm_vmx_exit_handlers[exit_handler_index])
+ 		goto unexpected_vmexit;
+ 
+-	return kvm_vmx_exit_handlers[exit_reason](vcpu);
++	return kvm_vmx_exit_handlers[exit_handler_index](vcpu);
+ 
+ unexpected_vmexit:
+-	vcpu_unimpl(vcpu, "vmx: unexpected exit reason 0x%x\n", exit_reason);
++	vcpu_unimpl(vcpu, "vmx: unexpected exit reason 0x%x\n",
++		    exit_reason.full);
+ 	dump_vmcs();
+ 	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+ 	vcpu->run->internal.suberror =
+ 			KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON;
+ 	vcpu->run->internal.ndata = 2;
+-	vcpu->run->internal.data[0] = exit_reason;
++	vcpu->run->internal.data[0] = exit_reason.full;
+ 	vcpu->run->internal.data[1] = vcpu->arch.last_vmentry_cpu;
+ 	return 0;
+ }
+@@ -6393,9 +6395,9 @@ static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 
+-	if (vmx->exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT)
++	if (vmx->exit_reason.basic == EXIT_REASON_EXTERNAL_INTERRUPT)
+ 		handle_external_interrupt_irqoff(vcpu);
+-	else if (vmx->exit_reason == EXIT_REASON_EXCEPTION_NMI)
++	else if (vmx->exit_reason.basic == EXIT_REASON_EXCEPTION_NMI)
+ 		handle_exception_nmi_irqoff(vmx);
+ }
+ 
+@@ -6583,7 +6585,7 @@ void noinstr vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
+ 
+ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
+ {
+-	switch (to_vmx(vcpu)->exit_reason) {
++	switch (to_vmx(vcpu)->exit_reason.basic) {
+ 	case EXIT_REASON_MSR_WRITE:
+ 		return handle_fastpath_set_msr_irqoff(vcpu);
+ 	case EXIT_REASON_PREEMPTION_TIMER:
+@@ -6782,17 +6784,17 @@ reenter_guest:
+ 	vmx->idt_vectoring_info = 0;
+ 
+ 	if (unlikely(vmx->fail)) {
+-		vmx->exit_reason = 0xdead;
++		vmx->exit_reason.full = 0xdead;
+ 		return EXIT_FASTPATH_NONE;
+ 	}
+ 
+-	vmx->exit_reason = vmcs_read32(VM_EXIT_REASON);
+-	if (unlikely((u16)vmx->exit_reason == EXIT_REASON_MCE_DURING_VMENTRY))
++	vmx->exit_reason.full = vmcs_read32(VM_EXIT_REASON);
++	if (unlikely((u16)vmx->exit_reason.basic == EXIT_REASON_MCE_DURING_VMENTRY))
+ 		kvm_machine_check();
+ 
+-	trace_kvm_exit(vmx->exit_reason, vcpu, KVM_ISA_VMX);
++	trace_kvm_exit(vmx->exit_reason.full, vcpu, KVM_ISA_VMX);
+ 
+-	if (unlikely(vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
++	if (unlikely(vmx->exit_reason.failed_vmentry))
+ 		return EXIT_FASTPATH_NONE;
+ 
+ 	vmx->loaded_vmcs->launched = 1;
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index f6f66e5c65104..ae3a89ac0600d 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -70,6 +70,29 @@ struct pt_desc {
+ 	struct pt_ctx guest;
+ };
+ 
++union vmx_exit_reason {
++	struct {
++		u32	basic			: 16;
++		u32	reserved16		: 1;
++		u32	reserved17		: 1;
++		u32	reserved18		: 1;
++		u32	reserved19		: 1;
++		u32	reserved20		: 1;
++		u32	reserved21		: 1;
++		u32	reserved22		: 1;
++		u32	reserved23		: 1;
++		u32	reserved24		: 1;
++		u32	reserved25		: 1;
++		u32	reserved26		: 1;
++		u32	enclave_mode		: 1;
++		u32	smi_pending_mtf		: 1;
++		u32	smi_from_vmx_root	: 1;
++		u32	reserved30		: 1;
++		u32	failed_vmentry		: 1;
++	};
++	u32 full;
++};
++
+ /*
+  * The nested_vmx structure is part of vcpu_vmx, and holds information we need
+  * for correct emulation of VMX (i.e., nested VMX) on this vcpu.
+@@ -244,7 +267,7 @@ struct vcpu_vmx {
+ 	int vpid;
+ 	bool emulation_required;
+ 
+-	u32 exit_reason;
++	union vmx_exit_reason exit_reason;
+ 
+ 	/* Posted interrupt descriptor */
+ 	struct pi_desc pi_desc;
+diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
+index fe6a460c43735..af3ee288bc117 100644
+--- a/drivers/dma/dmaengine.c
++++ b/drivers/dma/dmaengine.c
+@@ -1086,6 +1086,7 @@ static int __dma_async_device_channel_register(struct dma_device *device,
+ 	kfree(chan->dev);
+  err_free_local:
+ 	free_percpu(chan->local);
++	chan->local = NULL;
+ 	return rc;
+ }
+ 
+diff --git a/drivers/dma/dw/Kconfig b/drivers/dma/dw/Kconfig
+index e5162690de8f1..db25f9b7778c9 100644
+--- a/drivers/dma/dw/Kconfig
++++ b/drivers/dma/dw/Kconfig
+@@ -10,6 +10,7 @@ config DW_DMAC_CORE
+ 
+ config DW_DMAC
+ 	tristate "Synopsys DesignWare AHB DMA platform driver"
++	depends on HAS_IOMEM
+ 	select DW_DMAC_CORE
+ 	help
+ 	  Support the Synopsys DesignWare AHB DMA controller. This
+@@ -18,6 +19,7 @@ config DW_DMAC
+ config DW_DMAC_PCI
+ 	tristate "Synopsys DesignWare AHB DMA PCI driver"
+ 	depends on PCI
++	depends on HAS_IOMEM
+ 	select DW_DMAC_CORE
+ 	help
+ 	  Support the Synopsys DesignWare AHB DMA controller on the
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index a6704838ffcb7..459e9fbc2253a 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -263,6 +263,22 @@ void idxd_wq_drain(struct idxd_wq *wq)
+ 	idxd_cmd_exec(idxd, IDXD_CMD_DRAIN_WQ, operand, NULL);
+ }
+ 
++void idxd_wq_reset(struct idxd_wq *wq)
++{
++	struct idxd_device *idxd = wq->idxd;
++	struct device *dev = &idxd->pdev->dev;
++	u32 operand;
++
++	if (wq->state != IDXD_WQ_ENABLED) {
++		dev_dbg(dev, "WQ %d in wrong state: %d\n", wq->id, wq->state);
++		return;
++	}
++
++	operand = BIT(wq->id % 16) | ((wq->id / 16) << 16);
++	idxd_cmd_exec(idxd, IDXD_CMD_RESET_WQ, operand, NULL);
++	wq->state = IDXD_WQ_DISABLED;
++}
++
+ int idxd_wq_map_portal(struct idxd_wq *wq)
+ {
+ 	struct idxd_device *idxd = wq->idxd;
+@@ -291,8 +307,6 @@ void idxd_wq_unmap_portal(struct idxd_wq *wq)
+ void idxd_wq_disable_cleanup(struct idxd_wq *wq)
+ {
+ 	struct idxd_device *idxd = wq->idxd;
+-	struct device *dev = &idxd->pdev->dev;
+-	int i, wq_offset;
+ 
+ 	lockdep_assert_held(&idxd->dev_lock);
+ 	memset(wq->wqcfg, 0, idxd->wqcfg_size);
+@@ -303,14 +317,6 @@ void idxd_wq_disable_cleanup(struct idxd_wq *wq)
+ 	wq->priority = 0;
+ 	clear_bit(WQ_FLAG_DEDICATED, &wq->flags);
+ 	memset(wq->name, 0, WQ_NAME_SIZE);
+-
+-	for (i = 0; i < WQCFG_STRIDES(idxd); i++) {
+-		wq_offset = WQCFG_OFFSET(idxd, wq->id, i);
+-		iowrite32(0, idxd->reg_base + wq_offset);
+-		dev_dbg(dev, "WQ[%d][%d][%#x]: %#x\n",
+-			wq->id, i, wq_offset,
+-			ioread32(idxd->reg_base + wq_offset));
+-	}
+ }
+ 
+ /* Device control bits */
+@@ -560,7 +566,14 @@ static int idxd_wq_config_write(struct idxd_wq *wq)
+ 	if (!wq->group)
+ 		return 0;
+ 
+-	memset(wq->wqcfg, 0, idxd->wqcfg_size);
++	/*
++	 * Instead of memset the entire shadow copy of WQCFG, copy from the hardware after
++	 * wq reset. This will copy back the sticky values that are present on some devices.
++	 */
++	for (i = 0; i < WQCFG_STRIDES(idxd); i++) {
++		wq_offset = WQCFG_OFFSET(idxd, wq->id, i);
++		wq->wqcfg->bits[i] = ioread32(idxd->reg_base + wq_offset);
++	}
+ 
+ 	/* byte 0-3 */
+ 	wq->wqcfg->wq_size = wq->size;
+diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
+index 953ef6536aac4..1d7849cb91004 100644
+--- a/drivers/dma/idxd/idxd.h
++++ b/drivers/dma/idxd/idxd.h
+@@ -295,6 +295,7 @@ void idxd_wq_free_resources(struct idxd_wq *wq);
+ int idxd_wq_enable(struct idxd_wq *wq);
+ int idxd_wq_disable(struct idxd_wq *wq);
+ void idxd_wq_drain(struct idxd_wq *wq);
++void idxd_wq_reset(struct idxd_wq *wq);
+ int idxd_wq_map_portal(struct idxd_wq *wq);
+ void idxd_wq_unmap_portal(struct idxd_wq *wq);
+ void idxd_wq_disable_cleanup(struct idxd_wq *wq);
+diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
+index 552e2e2707058..6bb1c1773aae6 100644
+--- a/drivers/dma/idxd/irq.c
++++ b/drivers/dma/idxd/irq.c
+@@ -66,7 +66,9 @@ static int process_misc_interrupts(struct idxd_device *idxd, u32 cause)
+ 		for (i = 0; i < 4; i++)
+ 			idxd->sw_err.bits[i] = ioread64(idxd->reg_base +
+ 					IDXD_SWERR_OFFSET + i * sizeof(u64));
+-		iowrite64(IDXD_SWERR_ACK, idxd->reg_base + IDXD_SWERR_OFFSET);
++
++		iowrite64(idxd->sw_err.bits[0] & IDXD_SWERR_ACK,
++			  idxd->reg_base + IDXD_SWERR_OFFSET);
+ 
+ 		if (idxd->sw_err.valid && idxd->sw_err.wq_idx_valid) {
+ 			int id = idxd->sw_err.wq_idx;
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index fb97c9f319a55..7566b573d546e 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -241,7 +241,6 @@ static void disable_wq(struct idxd_wq *wq)
+ {
+ 	struct idxd_device *idxd = wq->idxd;
+ 	struct device *dev = &idxd->pdev->dev;
+-	int rc;
+ 
+ 	mutex_lock(&wq->wq_lock);
+ 	dev_dbg(dev, "%s removing WQ %s\n", __func__, dev_name(&wq->conf_dev));
+@@ -262,17 +261,13 @@ static void disable_wq(struct idxd_wq *wq)
+ 	idxd_wq_unmap_portal(wq);
+ 
+ 	idxd_wq_drain(wq);
+-	rc = idxd_wq_disable(wq);
++	idxd_wq_reset(wq);
+ 
+ 	idxd_wq_free_resources(wq);
+ 	wq->client_count = 0;
+ 	mutex_unlock(&wq->wq_lock);
+ 
+-	if (rc < 0)
+-		dev_warn(dev, "Failed to disable %s: %d\n",
+-			 dev_name(&wq->conf_dev), rc);
+-	else
+-		dev_info(dev, "wq %s disabled\n", dev_name(&wq->conf_dev));
++	dev_info(dev, "wq %s disabled\n", dev_name(&wq->conf_dev));
+ }
+ 
+ static int idxd_config_bus_remove(struct device *dev)
+@@ -923,7 +918,7 @@ static ssize_t wq_size_store(struct device *dev,
+ 	if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
+ 		return -EPERM;
+ 
+-	if (wq->state != IDXD_WQ_DISABLED)
++	if (idxd->state == IDXD_DEV_ENABLED)
+ 		return -EPERM;
+ 
+ 	if (size + total_claimed_wq_size(idxd) - wq->size > idxd->max_wq_size)
+@@ -1259,8 +1254,14 @@ static ssize_t op_cap_show(struct device *dev,
+ {
+ 	struct idxd_device *idxd =
+ 		container_of(dev, struct idxd_device, conf_dev);
++	int i, rc = 0;
++
++	for (i = 0; i < 4; i++)
++		rc += sysfs_emit_at(buf, rc, "%#llx ", idxd->hw.opcap.bits[i]);
+ 
+-	return sprintf(buf, "%#llx\n", idxd->hw.opcap.bits[0]);
++	rc--;
++	rc += sysfs_emit_at(buf, rc, "\n");
++	return rc;
+ }
+ static DEVICE_ATTR_RO(op_cap);
+ 
+diff --git a/drivers/dma/plx_dma.c b/drivers/dma/plx_dma.c
+index f387c5bbc170c..1669345441619 100644
+--- a/drivers/dma/plx_dma.c
++++ b/drivers/dma/plx_dma.c
+@@ -507,10 +507,8 @@ static int plx_dma_create(struct pci_dev *pdev)
+ 
+ 	rc = request_irq(pci_irq_vector(pdev, 0), plx_dma_isr, 0,
+ 			 KBUILD_MODNAME, plxdev);
+-	if (rc) {
+-		kfree(plxdev);
+-		return rc;
+-	}
++	if (rc)
++		goto free_plx;
+ 
+ 	spin_lock_init(&plxdev->ring_lock);
+ 	tasklet_setup(&plxdev->desc_task, plx_dma_desc_task);
+@@ -540,14 +538,20 @@ static int plx_dma_create(struct pci_dev *pdev)
+ 	rc = dma_async_device_register(dma);
+ 	if (rc) {
+ 		pci_err(pdev, "Failed to register dma device: %d\n", rc);
+-		free_irq(pci_irq_vector(pdev, 0),  plxdev);
+-		kfree(plxdev);
+-		return rc;
++		goto put_device;
+ 	}
+ 
+ 	pci_set_drvdata(pdev, plxdev);
+ 
+ 	return 0;
++
++put_device:
++	put_device(&pdev->dev);
++	free_irq(pci_irq_vector(pdev, 0),  plxdev);
++free_plx:
++	kfree(plxdev);
++
++	return rc;
+ }
+ 
+ static int plx_dma_probe(struct pci_dev *pdev,
+diff --git a/drivers/gpio/gpiolib-sysfs.c b/drivers/gpio/gpiolib-sysfs.c
+index 728f6c6871824..fa5d945b2f286 100644
+--- a/drivers/gpio/gpiolib-sysfs.c
++++ b/drivers/gpio/gpiolib-sysfs.c
+@@ -458,6 +458,8 @@ static ssize_t export_store(struct class *class,
+ 	long			gpio;
+ 	struct gpio_desc	*desc;
+ 	int			status;
++	struct gpio_chip	*gc;
++	int			offset;
+ 
+ 	status = kstrtol(buf, 0, &gpio);
+ 	if (status < 0)
+@@ -469,6 +471,12 @@ static ssize_t export_store(struct class *class,
+ 		pr_warn("%s: invalid GPIO %ld\n", __func__, gpio);
+ 		return -EINVAL;
+ 	}
++	gc = desc->gdev->chip;
++	offset = gpio_chip_hwgpio(desc);
++	if (!gpiochip_line_is_valid(gc, offset)) {
++		pr_warn("%s: GPIO %ld masked\n", __func__, gpio);
++		return -EINVAL;
++	}
+ 
+ 	/* No extra locking here; FLAG_SYSFS just signifies that the
+ 	 * request and export were done by on behalf of userspace, so
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index 5e11cdb207d83..0ca7e53db112a 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -1240,8 +1240,8 @@ static int a5xx_pm_suspend(struct msm_gpu *gpu)
+ 
+ static int a5xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
+ {
+-	*value = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_CP_0_LO,
+-		REG_A5XX_RBBM_PERFCTR_CP_0_HI);
++	*value = gpu_read64(gpu, REG_A5XX_RBBM_ALWAYSON_COUNTER_LO,
++		REG_A5XX_RBBM_ALWAYSON_COUNTER_HI);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 83b50f6d6bb78..722c2fe3bfd56 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1073,8 +1073,8 @@ static int a6xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
+ 	/* Force the GPU power on so we can read this register */
+ 	a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
+ 
+-	*value = gpu_read64(gpu, REG_A6XX_RBBM_PERFCTR_CP_0_LO,
+-		REG_A6XX_RBBM_PERFCTR_CP_0_HI);
++	*value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
++		REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
+ 
+ 	a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
+ 	mutex_unlock(&perfcounter_oob);
+diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
+index cc93a8c9547bc..8ea91542b567a 100644
+--- a/drivers/gpu/drm/xen/xen_drm_front.c
++++ b/drivers/gpu/drm/xen/xen_drm_front.c
+@@ -531,7 +531,7 @@ static int xen_drm_drv_init(struct xen_drm_front_info *front_info)
+ 	drm_dev = drm_dev_alloc(&xen_drm_driver, dev);
+ 	if (IS_ERR(drm_dev)) {
+ 		ret = PTR_ERR(drm_dev);
+-		goto fail;
++		goto fail_dev;
+ 	}
+ 
+ 	drm_info->drm_dev = drm_dev;
+@@ -561,8 +561,10 @@ fail_modeset:
+ 	drm_kms_helper_poll_fini(drm_dev);
+ 	drm_mode_config_cleanup(drm_dev);
+ 	drm_dev_put(drm_dev);
+-fail:
++fail_dev:
+ 	kfree(drm_info);
++	front_info->drm_info = NULL;
++fail:
+ 	return ret;
+ }
+ 
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 44d715c12f6ab..6cda5935fc09c 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -3574,8 +3574,6 @@ int wacom_setup_pen_input_capabilities(struct input_dev *input_dev,
+ {
+ 	struct wacom_features *features = &wacom_wac->features;
+ 
+-	input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
+-
+ 	if (!(features->device_type & WACOM_DEVICETYPE_PEN))
+ 		return -ENODEV;
+ 
+@@ -3590,6 +3588,7 @@ int wacom_setup_pen_input_capabilities(struct input_dev *input_dev,
+ 		return 0;
+ 	}
+ 
++	input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
+ 	__set_bit(BTN_TOUCH, input_dev->keybit);
+ 	__set_bit(ABS_MISC, input_dev->absbit);
+ 
+@@ -3742,8 +3741,6 @@ int wacom_setup_touch_input_capabilities(struct input_dev *input_dev,
+ {
+ 	struct wacom_features *features = &wacom_wac->features;
+ 
+-	input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
+-
+ 	if (!(features->device_type & WACOM_DEVICETYPE_TOUCH))
+ 		return -ENODEV;
+ 
+@@ -3756,6 +3753,7 @@ int wacom_setup_touch_input_capabilities(struct input_dev *input_dev,
+ 		/* setup has already been done */
+ 		return 0;
+ 
++	input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
+ 	__set_bit(BTN_TOUCH, input_dev->keybit);
+ 
+ 	if (features->touch_max == 1) {
+diff --git a/drivers/input/keyboard/nspire-keypad.c b/drivers/input/keyboard/nspire-keypad.c
+index 63d5e488137dc..e9fa1423f1360 100644
+--- a/drivers/input/keyboard/nspire-keypad.c
++++ b/drivers/input/keyboard/nspire-keypad.c
+@@ -93,9 +93,15 @@ static irqreturn_t nspire_keypad_irq(int irq, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
+-static int nspire_keypad_chip_init(struct nspire_keypad *keypad)
++static int nspire_keypad_open(struct input_dev *input)
+ {
++	struct nspire_keypad *keypad = input_get_drvdata(input);
+ 	unsigned long val = 0, cycles_per_us, delay_cycles, row_delay_cycles;
++	int error;
++
++	error = clk_prepare_enable(keypad->clk);
++	if (error)
++		return error;
+ 
+ 	cycles_per_us = (clk_get_rate(keypad->clk) / 1000000);
+ 	if (cycles_per_us == 0)
+@@ -121,30 +127,6 @@ static int nspire_keypad_chip_init(struct nspire_keypad *keypad)
+ 	keypad->int_mask = 1 << 1;
+ 	writel(keypad->int_mask, keypad->reg_base + KEYPAD_INTMSK);
+ 
+-	/* Disable GPIO interrupts to prevent hanging on touchpad */
+-	/* Possibly used to detect touchpad events */
+-	writel(0, keypad->reg_base + KEYPAD_UNKNOWN_INT);
+-	/* Acknowledge existing interrupts */
+-	writel(~0, keypad->reg_base + KEYPAD_UNKNOWN_INT_STS);
+-
+-	return 0;
+-}
+-
+-static int nspire_keypad_open(struct input_dev *input)
+-{
+-	struct nspire_keypad *keypad = input_get_drvdata(input);
+-	int error;
+-
+-	error = clk_prepare_enable(keypad->clk);
+-	if (error)
+-		return error;
+-
+-	error = nspire_keypad_chip_init(keypad);
+-	if (error) {
+-		clk_disable_unprepare(keypad->clk);
+-		return error;
+-	}
+-
+ 	return 0;
+ }
+ 
+@@ -152,6 +134,11 @@ static void nspire_keypad_close(struct input_dev *input)
+ {
+ 	struct nspire_keypad *keypad = input_get_drvdata(input);
+ 
++	/* Disable interrupts */
++	writel(0, keypad->reg_base + KEYPAD_INTMSK);
++	/* Acknowledge existing interrupts */
++	writel(~0, keypad->reg_base + KEYPAD_INT);
++
+ 	clk_disable_unprepare(keypad->clk);
+ }
+ 
+@@ -210,6 +197,25 @@ static int nspire_keypad_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 	}
+ 
++	error = clk_prepare_enable(keypad->clk);
++	if (error) {
++		dev_err(&pdev->dev, "failed to enable clock\n");
++		return error;
++	}
++
++	/* Disable interrupts */
++	writel(0, keypad->reg_base + KEYPAD_INTMSK);
++	/* Acknowledge existing interrupts */
++	writel(~0, keypad->reg_base + KEYPAD_INT);
++
++	/* Disable GPIO interrupts to prevent hanging on touchpad */
++	/* Possibly used to detect touchpad events */
++	writel(0, keypad->reg_base + KEYPAD_UNKNOWN_INT);
++	/* Acknowledge existing GPIO interrupts */
++	writel(~0, keypad->reg_base + KEYPAD_UNKNOWN_INT_STS);
++
++	clk_disable_unprepare(keypad->clk);
++
+ 	input_set_drvdata(input, keypad);
+ 
+ 	input->id.bustype = BUS_HOST;
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index 9119e12a57784..a5a0035536462 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -588,6 +588,7 @@ static const struct dmi_system_id i8042_dmi_noselftest_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 			DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */
+ 		},
++	}, {
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 			DMI_MATCH(DMI_CHASSIS_TYPE, "31"), /* Convertible Notebook */
+diff --git a/drivers/input/touchscreen/s6sy761.c b/drivers/input/touchscreen/s6sy761.c
+index b63d7fdf0cd20..85a1f465c097e 100644
+--- a/drivers/input/touchscreen/s6sy761.c
++++ b/drivers/input/touchscreen/s6sy761.c
+@@ -145,8 +145,8 @@ static void s6sy761_report_coordinates(struct s6sy761_data *sdata,
+ 	u8 major = event[4];
+ 	u8 minor = event[5];
+ 	u8 z = event[6] & S6SY761_MASK_Z;
+-	u16 x = (event[1] << 3) | ((event[3] & S6SY761_MASK_X) >> 4);
+-	u16 y = (event[2] << 3) | (event[3] & S6SY761_MASK_Y);
++	u16 x = (event[1] << 4) | ((event[3] & S6SY761_MASK_X) >> 4);
++	u16 y = (event[2] << 4) | (event[3] & S6SY761_MASK_Y);
+ 
+ 	input_mt_slot(sdata->input, tid);
+ 
+diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
+index 66f4c6398f670..cea2b37897367 100644
+--- a/drivers/md/dm-verity-fec.c
++++ b/drivers/md/dm-verity-fec.c
+@@ -65,7 +65,7 @@ static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index,
+ 	u8 *res;
+ 
+ 	position = (index + rsb) * v->fec->roots;
+-	block = div64_u64_rem(position, v->fec->roots << SECTOR_SHIFT, &rem);
++	block = div64_u64_rem(position, v->fec->io_size, &rem);
+ 	*offset = (unsigned)rem;
+ 
+ 	res = dm_bufio_read(v->fec->bufio, block, buf);
+@@ -154,7 +154,7 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio,
+ 
+ 		/* read the next block when we run out of parity bytes */
+ 		offset += v->fec->roots;
+-		if (offset >= v->fec->roots << SECTOR_SHIFT) {
++		if (offset >= v->fec->io_size) {
+ 			dm_bufio_release(buf);
+ 
+ 			par = fec_read_parity(v, rsb, block_offset, &offset, &buf);
+@@ -742,8 +742,13 @@ int verity_fec_ctr(struct dm_verity *v)
+ 		return -E2BIG;
+ 	}
+ 
++	if ((f->roots << SECTOR_SHIFT) & ((1 << v->data_dev_block_bits) - 1))
++		f->io_size = 1 << v->data_dev_block_bits;
++	else
++		f->io_size = v->fec->roots << SECTOR_SHIFT;
++
+ 	f->bufio = dm_bufio_client_create(f->dev->bdev,
+-					  f->roots << SECTOR_SHIFT,
++					  f->io_size,
+ 					  1, 0, NULL, NULL);
+ 	if (IS_ERR(f->bufio)) {
+ 		ti->error = "Cannot initialize FEC bufio client";
+diff --git a/drivers/md/dm-verity-fec.h b/drivers/md/dm-verity-fec.h
+index 42fbd3a7fc9f1..3c46c8d618833 100644
+--- a/drivers/md/dm-verity-fec.h
++++ b/drivers/md/dm-verity-fec.h
+@@ -36,6 +36,7 @@ struct dm_verity_fec {
+ 	struct dm_dev *dev;	/* parity data device */
+ 	struct dm_bufio_client *data_bufio;	/* for data dev access */
+ 	struct dm_bufio_client *bufio;		/* for parity data access */
++	size_t io_size;		/* IO size for roots */
+ 	sector_t start;		/* parity data start in blocks */
+ 	sector_t blocks;	/* number of blocks covered */
+ 	sector_t rounds;	/* number of interleaving rounds */
+diff --git a/drivers/mtd/nand/raw/mtk_nand.c b/drivers/mtd/nand/raw/mtk_nand.c
+index 57f1f17089946..5c5c92132287d 100644
+--- a/drivers/mtd/nand/raw/mtk_nand.c
++++ b/drivers/mtd/nand/raw/mtk_nand.c
+@@ -488,8 +488,8 @@ static int mtk_nfc_exec_instr(struct nand_chip *chip,
+ 		return 0;
+ 	case NAND_OP_WAITRDY_INSTR:
+ 		return readl_poll_timeout(nfc->regs + NFI_STA, status,
+-					  status & STA_BUSY, 20,
+-					  instr->ctx.waitrdy.timeout_ms);
++					  !(status & STA_BUSY), 20,
++					  instr->ctx.waitrdy.timeout_ms * 1000);
+ 	default:
+ 		break;
+ 	}
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 87160e723dfcf..70ec17f3c3007 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -2994,10 +2994,17 @@ out_resources:
+ 	return err;
+ }
+ 
++/* prod_id for switch families which do not have a PHY model number */
++static const u16 family_prod_id_table[] = {
++	[MV88E6XXX_FAMILY_6341] = MV88E6XXX_PORT_SWITCH_ID_PROD_6341,
++	[MV88E6XXX_FAMILY_6390] = MV88E6XXX_PORT_SWITCH_ID_PROD_6390,
++};
++
+ static int mv88e6xxx_mdio_read(struct mii_bus *bus, int phy, int reg)
+ {
+ 	struct mv88e6xxx_mdio_bus *mdio_bus = bus->priv;
+ 	struct mv88e6xxx_chip *chip = mdio_bus->chip;
++	u16 prod_id;
+ 	u16 val;
+ 	int err;
+ 
+@@ -3008,23 +3015,12 @@ static int mv88e6xxx_mdio_read(struct mii_bus *bus, int phy, int reg)
+ 	err = chip->info->ops->phy_read(chip, bus, phy, reg, &val);
+ 	mv88e6xxx_reg_unlock(chip);
+ 
+-	if (reg == MII_PHYSID2) {
+-		/* Some internal PHYs don't have a model number. */
+-		if (chip->info->family != MV88E6XXX_FAMILY_6165)
+-			/* Then there is the 6165 family. It gets is
+-			 * PHYs correct. But it can also have two
+-			 * SERDES interfaces in the PHY address
+-			 * space. And these don't have a model
+-			 * number. But they are not PHYs, so we don't
+-			 * want to give them something a PHY driver
+-			 * will recognise.
+-			 *
+-			 * Use the mv88e6390 family model number
+-			 * instead, for anything which really could be
+-			 * a PHY,
+-			 */
+-			if (!(val & 0x3f0))
+-				val |= MV88E6XXX_PORT_SWITCH_ID_PROD_6390 >> 4;
++	/* Some internal PHYs don't have a model number. */
++	if (reg == MII_PHYSID2 && !(val & 0x3f0) &&
++	    chip->info->family < ARRAY_SIZE(family_prod_id_table)) {
++		prod_id = family_prod_id_table[chip->info->family];
++		if (prod_id)
++			val |= prod_id >> 4;
+ 	}
+ 
+ 	return err ? err : val;
+diff --git a/drivers/net/ethernet/amd/pcnet32.c b/drivers/net/ethernet/amd/pcnet32.c
+index 187b0b9a6e1df..f78daba60b35c 100644
+--- a/drivers/net/ethernet/amd/pcnet32.c
++++ b/drivers/net/ethernet/amd/pcnet32.c
+@@ -1534,8 +1534,7 @@ pcnet32_probe_pci(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	}
+ 	pci_set_master(pdev);
+ 
+-	ioaddr = pci_resource_start(pdev, 0);
+-	if (!ioaddr) {
++	if (!pci_resource_len(pdev, 0)) {
+ 		if (pcnet32_debug & NETIF_MSG_PROBE)
+ 			pr_err("card has no PCI IO resources, aborting\n");
+ 		err = -ENODEV;
+@@ -1548,6 +1547,8 @@ pcnet32_probe_pci(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 			pr_err("architecture does not support 32bit PCI busmaster DMA\n");
+ 		goto err_disable_dev;
+ 	}
++
++	ioaddr = pci_resource_start(pdev, 0);
+ 	if (!request_region(ioaddr, PCNET32_TOTAL_SIZE, "pcnet32_probe_pci")) {
+ 		if (pcnet32_debug & NETIF_MSG_PROBE)
+ 			pr_err("io address range already allocated\n");
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 48a6bda2a8cc7..390f45e49eaf7 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -3777,6 +3777,7 @@ static int macb_init(struct platform_device *pdev)
+ 	reg = gem_readl(bp, DCFG8);
+ 	bp->max_tuples = min((GEM_BFEXT(SCR2CMP, reg) / 3),
+ 			GEM_BFEXT(T2SCR, reg));
++	INIT_LIST_HEAD(&bp->rx_fs_list.list);
+ 	if (bp->max_tuples > 0) {
+ 		/* also needs one ethtype match to check IPv4 */
+ 		if (GEM_BFEXT(SCR2ETH, reg) > 0) {
+@@ -3787,7 +3788,6 @@ static int macb_init(struct platform_device *pdev)
+ 			/* Filtering is supported in hw but don't enable it in kernel now */
+ 			dev->hw_features |= NETIF_F_NTUPLE;
+ 			/* init Rx flow definitions */
+-			INIT_LIST_HEAD(&bp->rx_fs_list.list);
+ 			bp->rx_fs_list.count = 0;
+ 			spin_lock_init(&bp->rx_fs_lock);
+ 		} else
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
+index 423d6d78d15c7..3a50d5a62aceb 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
+@@ -354,18 +354,6 @@ static int chcr_set_tcb_field(struct chcr_ktls_info *tx_info, u16 word,
+ 	return cxgb4_ofld_send(tx_info->netdev, skb);
+ }
+ 
+-/*
+- * chcr_ktls_mark_tcb_close: mark tcb state to CLOSE
+- * @tx_info - driver specific tls info.
+- * return: NET_TX_OK/NET_XMIT_DROP.
+- */
+-static int chcr_ktls_mark_tcb_close(struct chcr_ktls_info *tx_info)
+-{
+-	return chcr_set_tcb_field(tx_info, TCB_T_STATE_W,
+-				  TCB_T_STATE_V(TCB_T_STATE_M),
+-				  CHCR_TCB_STATE_CLOSED, 1);
+-}
+-
+ /*
+  * chcr_ktls_dev_del:  call back for tls_dev_del.
+  * Remove the tid and l2t entry and close the connection.
+@@ -400,8 +388,6 @@ static void chcr_ktls_dev_del(struct net_device *netdev,
+ 
+ 	/* clear tid */
+ 	if (tx_info->tid != -1) {
+-		/* clear tcb state and then release tid */
+-		chcr_ktls_mark_tcb_close(tx_info);
+ 		cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
+ 				 tx_info->tid, tx_info->ip_family);
+ 	}
+@@ -579,7 +565,6 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
+ 	return 0;
+ 
+ free_tid:
+-	chcr_ktls_mark_tcb_close(tx_info);
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	/* clear clip entry */
+ 	if (tx_info->ip_family == AF_INET6)
+@@ -677,10 +662,6 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
+ 	if (tx_info->pending_close) {
+ 		spin_unlock(&tx_info->lock);
+ 		if (!status) {
+-			/* it's a late success, tcb status is establised,
+-			 * mark it close.
+-			 */
+-			chcr_ktls_mark_tcb_close(tx_info);
+ 			cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
+ 					 tid, tx_info->ip_family);
+ 		}
+@@ -1668,54 +1649,6 @@ static void chcr_ktls_copy_record_in_skb(struct sk_buff *nskb,
+ 	refcount_add(nskb->truesize, &nskb->sk->sk_wmem_alloc);
+ }
+ 
+-/*
+- * chcr_ktls_update_snd_una:  Reset the SEND_UNA. It will be done to avoid
+- * sending the same segment again. It will discard the segment which is before
+- * the current tx max.
+- * @tx_info - driver specific tls info.
+- * @q - TX queue.
+- * return: NET_TX_OK/NET_XMIT_DROP.
+- */
+-static int chcr_ktls_update_snd_una(struct chcr_ktls_info *tx_info,
+-				    struct sge_eth_txq *q)
+-{
+-	struct fw_ulptx_wr *wr;
+-	unsigned int ndesc;
+-	int credits;
+-	void *pos;
+-	u32 len;
+-
+-	len = sizeof(*wr) + roundup(CHCR_SET_TCB_FIELD_LEN, 16);
+-	ndesc = DIV_ROUND_UP(len, 64);
+-
+-	credits = chcr_txq_avail(&q->q) - ndesc;
+-	if (unlikely(credits < 0)) {
+-		chcr_eth_txq_stop(q);
+-		return NETDEV_TX_BUSY;
+-	}
+-
+-	pos = &q->q.desc[q->q.pidx];
+-
+-	wr = pos;
+-	/* ULPTX wr */
+-	wr->op_to_compl = htonl(FW_WR_OP_V(FW_ULPTX_WR));
+-	wr->cookie = 0;
+-	/* fill len in wr field */
+-	wr->flowid_len16 = htonl(FW_WR_LEN16_V(DIV_ROUND_UP(len, 16)));
+-
+-	pos += sizeof(*wr);
+-
+-	pos = chcr_write_cpl_set_tcb_ulp(tx_info, q, tx_info->tid, pos,
+-					 TCB_SND_UNA_RAW_W,
+-					 TCB_SND_UNA_RAW_V(TCB_SND_UNA_RAW_M),
+-					 TCB_SND_UNA_RAW_V(0), 0);
+-
+-	chcr_txq_advance(&q->q, ndesc);
+-	cxgb4_ring_tx_db(tx_info->adap, &q->q, ndesc);
+-
+-	return 0;
+-}
+-
+ /*
+  * chcr_end_part_handler: This handler will handle the record which
+  * is complete or if record's end part is received. T6 adapter has a issue that
+@@ -1740,7 +1673,9 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info,
+ 				 struct sge_eth_txq *q, u32 skb_offset,
+ 				 u32 tls_end_offset, bool last_wr)
+ {
++	bool free_skb_if_tx_fails = false;
+ 	struct sk_buff *nskb = NULL;
++
+ 	/* check if it is a complete record */
+ 	if (tls_end_offset == record->len) {
+ 		nskb = skb;
+@@ -1763,6 +1698,8 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info,
+ 
+ 		if (last_wr)
+ 			dev_kfree_skb_any(skb);
++		else
++			free_skb_if_tx_fails = true;
+ 
+ 		last_wr = true;
+ 
+@@ -1774,6 +1711,8 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info,
+ 				       record->num_frags,
+ 				       (last_wr && tcp_push_no_fin),
+ 				       mss)) {
++		if (free_skb_if_tx_fails)
++			dev_kfree_skb_any(skb);
+ 		goto out;
+ 	}
+ 	tx_info->prev_seq = record->end_seq;
+@@ -1910,11 +1849,6 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info,
+ 			/* reset tcp_seq as per the prior_data_required len */
+ 			tcp_seq -= prior_data_len;
+ 		}
+-		/* reset snd una, so the middle record won't send the already
+-		 * sent part.
+-		 */
+-		if (chcr_ktls_update_snd_una(tx_info, q))
+-			goto out;
+ 		atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_middle_pkts);
+ 	} else {
+ 		atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_start_pkts);
+@@ -2015,12 +1949,11 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	 * we will send the complete record again.
+ 	 */
+ 
++	spin_lock_irqsave(&tx_ctx->base.lock, flags);
++
+ 	do {
+-		int i;
+ 
+ 		cxgb4_reclaim_completed_tx(adap, &q->q, true);
+-		/* lock taken */
+-		spin_lock_irqsave(&tx_ctx->base.lock, flags);
+ 		/* fetch the tls record */
+ 		record = tls_get_record(&tx_ctx->base, tcp_seq,
+ 					&tx_info->record_no);
+@@ -2079,11 +2012,11 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
+ 						    tls_end_offset, skb_offset,
+ 						    0);
+ 
+-			spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
+ 			if (ret) {
+ 				/* free the refcount taken earlier */
+ 				if (tls_end_offset < data_len)
+ 					dev_kfree_skb_any(skb);
++				spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
+ 				goto out;
+ 			}
+ 
+@@ -2093,16 +2026,6 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
+ 			continue;
+ 		}
+ 
+-		/* increase page reference count of the record, so that there
+-		 * won't be any chance of page free in middle if in case stack
+-		 * receives ACK and try to delete the record.
+-		 */
+-		for (i = 0; i < record->num_frags; i++)
+-			__skb_frag_ref(&record->frags[i]);
+-		/* lock cleared */
+-		spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
+-
+-
+ 		/* if a tls record is finishing in this SKB */
+ 		if (tls_end_offset <= data_len) {
+ 			ret = chcr_end_part_handler(tx_info, skb, record,
+@@ -2127,13 +2050,9 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
+ 			data_len = 0;
+ 		}
+ 
+-		/* clear the frag ref count which increased locally before */
+-		for (i = 0; i < record->num_frags; i++) {
+-			/* clear the frag ref count */
+-			__skb_frag_unref(&record->frags[i]);
+-		}
+ 		/* if any failure, come out from the loop. */
+ 		if (ret) {
++			spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
+ 			if (th->fin)
+ 				dev_kfree_skb_any(skb);
+ 
+@@ -2148,6 +2067,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	} while (data_len > 0);
+ 
++	spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
+ 	atomic64_inc(&port_stats->ktls_tx_encrypted_packets);
+ 	atomic64_add(skb_data_len, &port_stats->ktls_tx_encrypted_bytes);
+ 
+diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
+index ae09cac876028..afc4a103c5080 100644
+--- a/drivers/net/ethernet/davicom/dm9000.c
++++ b/drivers/net/ethernet/davicom/dm9000.c
+@@ -1474,8 +1474,10 @@ dm9000_probe(struct platform_device *pdev)
+ 
+ 	/* Init network device */
+ 	ndev = alloc_etherdev(sizeof(struct board_info));
+-	if (!ndev)
+-		return -ENOMEM;
++	if (!ndev) {
++		ret = -ENOMEM;
++		goto out_regulator_disable;
++	}
+ 
+ 	SET_NETDEV_DEV(ndev, &pdev->dev);
+ 
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 4a4cb62b73320..8cc444684491a 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1159,19 +1159,13 @@ static int __ibmvnic_open(struct net_device *netdev)
+ 
+ 	rc = set_link_state(adapter, IBMVNIC_LOGICAL_LNK_UP);
+ 	if (rc) {
+-		for (i = 0; i < adapter->req_rx_queues; i++)
+-			napi_disable(&adapter->napi[i]);
++		ibmvnic_napi_disable(adapter);
+ 		release_resources(adapter);
+ 		return rc;
+ 	}
+ 
+ 	netif_tx_start_all_queues(netdev);
+ 
+-	if (prev_state == VNIC_CLOSED) {
+-		for (i = 0; i < adapter->req_rx_queues; i++)
+-			napi_schedule(&adapter->napi[i]);
+-	}
+-
+ 	adapter->state = VNIC_OPEN;
+ 	return rc;
+ }
+@@ -1942,7 +1936,7 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 	u64 old_num_rx_queues, old_num_tx_queues;
+ 	u64 old_num_rx_slots, old_num_tx_slots;
+ 	struct net_device *netdev = adapter->netdev;
+-	int i, rc;
++	int rc;
+ 
+ 	netdev_dbg(adapter->netdev,
+ 		   "[S:%d FOP:%d] Reset reason %d, reset_state %d\n",
+@@ -2088,10 +2082,6 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 	/* refresh device's multicast list */
+ 	ibmvnic_set_multi(netdev);
+ 
+-	/* kick napi */
+-	for (i = 0; i < adapter->req_rx_queues; i++)
+-		napi_schedule(&adapter->napi[i]);
+-
+ 	if (adapter->reset_reason == VNIC_RESET_FAILOVER ||
+ 	    adapter->reset_reason == VNIC_RESET_MOBILITY) {
+ 		call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, netdev);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 7fab60128c76d..f0edea7cdbccc 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -11863,6 +11863,7 @@ static int i40e_sw_init(struct i40e_pf *pf)
+ {
+ 	int err = 0;
+ 	int size;
++	u16 pow;
+ 
+ 	/* Set default capability flags */
+ 	pf->flags = I40E_FLAG_RX_CSUM_ENABLED |
+@@ -11881,6 +11882,11 @@ static int i40e_sw_init(struct i40e_pf *pf)
+ 	pf->rss_table_size = pf->hw.func_caps.rss_table_size;
+ 	pf->rss_size_max = min_t(int, pf->rss_size_max,
+ 				 pf->hw.func_caps.num_tx_qp);
++
++	/* find the next higher power-of-2 of num cpus */
++	pow = roundup_pow_of_two(num_online_cpus());
++	pf->rss_size_max = min_t(int, pf->rss_size_max, pow);
++
+ 	if (pf->hw.func_caps.rss) {
+ 		pf->flags |= I40E_FLAG_RSS_ENABLED;
+ 		pf->alloc_rss_size = min_t(int, pf->rss_size_max,
+diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.c b/drivers/net/ethernet/intel/ice/ice_dcb.c
+index 211ac6f907adb..28e834a128c07 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dcb.c
++++ b/drivers/net/ethernet/intel/ice/ice_dcb.c
+@@ -747,8 +747,8 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
+ 		   struct ice_port_info *pi)
+ {
+ 	u32 status, tlv_status = le32_to_cpu(cee_cfg->tlv_status);
+-	u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift;
+-	u8 i, j, err, sync, oper, app_index, ice_app_sel_type;
++	u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift, j;
++	u8 i, err, sync, oper, app_index, ice_app_sel_type;
+ 	u16 app_prio = le16_to_cpu(cee_cfg->oper_app_prio);
+ 	u16 ice_aqc_cee_app_mask, ice_aqc_cee_app_shift;
+ 	struct ice_dcbx_cfg *cmp_dcbcfg, *dcbcfg;
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 278fc866fad49..0b9fddbc5db4f 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -6896,6 +6896,11 @@ static int __maybe_unused ixgbe_resume(struct device *dev_d)
+ 
+ 	adapter->hw.hw_addr = adapter->io_addr;
+ 
++	err = pci_enable_device_mem(pdev);
++	if (err) {
++		e_dev_err("Cannot enable PCI device from suspend\n");
++		return err;
++	}
+ 	smp_mb__before_atomic();
+ 	clear_bit(__IXGBE_DISABLED, &adapter->state);
+ 	pci_set_master(pdev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
+index 308fd279669ec..89510cac46c22 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
+@@ -387,21 +387,6 @@ enum mlx5e_fec_supported_link_mode {
+ 			*_policy = MLX5_GET(pplm_reg, _buf, fec_override_admin_##link);	\
+ 	} while (0)
+ 
+-#define MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(buf, policy, write, link)			\
+-	do {										\
+-		unsigned long policy_long;						\
+-		u16 *__policy = &(policy);						\
+-		bool _write = (write);							\
+-											\
+-		policy_long = *__policy;						\
+-		if (_write && *__policy)						\
+-			*__policy = find_first_bit(&policy_long,			\
+-						   sizeof(policy_long) * BITS_PER_BYTE);\
+-		MLX5E_FEC_OVERRIDE_ADMIN_POLICY(buf, *__policy, _write, link);		\
+-		if (!_write && *__policy)						\
+-			*__policy = 1 << *__policy;					\
+-	} while (0)
+-
+ /* get/set FEC admin field for a given speed */
+ static int mlx5e_fec_admin_field(u32 *pplm, u16 *fec_policy, bool write,
+ 				 enum mlx5e_fec_supported_link_mode link_mode)
+@@ -423,16 +408,16 @@ static int mlx5e_fec_admin_field(u32 *pplm, u16 *fec_policy, bool write,
+ 		MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 100g);
+ 		break;
+ 	case MLX5E_FEC_SUPPORTED_LINK_MODE_50G_1X:
+-		MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 50g_1x);
++		MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 50g_1x);
+ 		break;
+ 	case MLX5E_FEC_SUPPORTED_LINK_MODE_100G_2X:
+-		MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 100g_2x);
++		MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 100g_2x);
+ 		break;
+ 	case MLX5E_FEC_SUPPORTED_LINK_MODE_200G_4X:
+-		MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 200g_4x);
++		MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 200g_4x);
+ 		break;
+ 	case MLX5E_FEC_SUPPORTED_LINK_MODE_400G_8X:
+-		MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 400g_8x);
++		MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 400g_8x);
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 930f19c598bb6..3079a82f1f412 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -2196,6 +2196,9 @@ static int mlx5e_flower_parse_meta(struct net_device *filter_dev,
+ 		return 0;
+ 
+ 	flow_rule_match_meta(rule, &match);
++	if (!match.mask->ingress_ifindex)
++		return 0;
++
+ 	if (match.mask->ingress_ifindex != 0xFFFFFFFF) {
+ 		NL_SET_ERR_MSG_MOD(extack, "Unsupported ingress ifindex mask");
+ 		return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index d634da20b4f94..3bb36f4a984e8 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -2378,13 +2378,14 @@ static void r8168b_1_hw_jumbo_disable(struct rtl8169_private *tp)
+ static void rtl_jumbo_config(struct rtl8169_private *tp)
+ {
+ 	bool jumbo = tp->dev->mtu > ETH_DATA_LEN;
++	int readrq = 4096;
+ 
+ 	rtl_unlock_config_regs(tp);
+ 	switch (tp->mac_version) {
+ 	case RTL_GIGA_MAC_VER_12:
+ 	case RTL_GIGA_MAC_VER_17:
+ 		if (jumbo) {
+-			pcie_set_readrq(tp->pci_dev, 512);
++			readrq = 512;
+ 			r8168b_1_hw_jumbo_enable(tp);
+ 		} else {
+ 			r8168b_1_hw_jumbo_disable(tp);
+@@ -2392,7 +2393,7 @@ static void rtl_jumbo_config(struct rtl8169_private *tp)
+ 		break;
+ 	case RTL_GIGA_MAC_VER_18 ... RTL_GIGA_MAC_VER_26:
+ 		if (jumbo) {
+-			pcie_set_readrq(tp->pci_dev, 512);
++			readrq = 512;
+ 			r8168c_hw_jumbo_enable(tp);
+ 		} else {
+ 			r8168c_hw_jumbo_disable(tp);
+@@ -2417,8 +2418,15 @@ static void rtl_jumbo_config(struct rtl8169_private *tp)
+ 	}
+ 	rtl_lock_config_regs(tp);
+ 
+-	if (!jumbo && pci_is_pcie(tp->pci_dev) && tp->supports_gmii)
+-		pcie_set_readrq(tp->pci_dev, 4096);
++	if (pci_is_pcie(tp->pci_dev) && tp->supports_gmii)
++		pcie_set_readrq(tp->pci_dev, readrq);
++
++	/* Chip doesn't support pause in jumbo mode */
++	linkmode_mod_bit(ETHTOOL_LINK_MODE_Pause_BIT,
++			 tp->phydev->advertising, !jumbo);
++	linkmode_mod_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
++			 tp->phydev->advertising, !jumbo);
++	phy_start_aneg(tp->phydev);
+ }
+ 
+ DECLARE_RTL_COND(rtl_chipcmd_cond)
+@@ -4710,8 +4718,6 @@ static int r8169_phy_connect(struct rtl8169_private *tp)
+ 	if (!tp->supports_gmii)
+ 		phy_set_max_speed(phydev, SPEED_100);
+ 
+-	phy_support_asym_pause(phydev);
+-
+ 	phy_attached_info(phydev);
+ 
+ 	return 0;
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index 5dbdaf0f5f09c..823a89354466d 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -2913,9 +2913,35 @@ static struct phy_driver marvell_drivers[] = {
+ 		.get_stats = marvell_get_stats,
+ 	},
+ 	{
+-		.phy_id = MARVELL_PHY_ID_88E6390,
++		.phy_id = MARVELL_PHY_ID_88E6341_FAMILY,
+ 		.phy_id_mask = MARVELL_PHY_ID_MASK,
+-		.name = "Marvell 88E6390",
++		.name = "Marvell 88E6341 Family",
++		/* PHY_GBIT_FEATURES */
++		.flags = PHY_POLL_CABLE_TEST,
++		.probe = m88e1510_probe,
++		.config_init = marvell_config_init,
++		.config_aneg = m88e6390_config_aneg,
++		.read_status = marvell_read_status,
++		.ack_interrupt = marvell_ack_interrupt,
++		.config_intr = marvell_config_intr,
++		.did_interrupt = m88e1121_did_interrupt,
++		.resume = genphy_resume,
++		.suspend = genphy_suspend,
++		.read_page = marvell_read_page,
++		.write_page = marvell_write_page,
++		.get_sset_count = marvell_get_sset_count,
++		.get_strings = marvell_get_strings,
++		.get_stats = marvell_get_stats,
++		.get_tunable = m88e1540_get_tunable,
++		.set_tunable = m88e1540_set_tunable,
++		.cable_test_start = marvell_vct7_cable_test_start,
++		.cable_test_tdr_start = marvell_vct5_cable_test_tdr_start,
++		.cable_test_get_status = marvell_vct7_cable_test_get_status,
++	},
++	{
++		.phy_id = MARVELL_PHY_ID_88E6390_FAMILY,
++		.phy_id_mask = MARVELL_PHY_ID_MASK,
++		.name = "Marvell 88E6390 Family",
+ 		/* PHY_GBIT_FEATURES */
+ 		.flags = PHY_POLL_CABLE_TEST,
+ 		.probe = m88e6390_probe,
+@@ -3001,7 +3027,8 @@ static struct mdio_device_id __maybe_unused marvell_tbl[] = {
+ 	{ MARVELL_PHY_ID_88E1540, MARVELL_PHY_ID_MASK },
+ 	{ MARVELL_PHY_ID_88E1545, MARVELL_PHY_ID_MASK },
+ 	{ MARVELL_PHY_ID_88E3016, MARVELL_PHY_ID_MASK },
+-	{ MARVELL_PHY_ID_88E6390, MARVELL_PHY_ID_MASK },
++	{ MARVELL_PHY_ID_88E6341_FAMILY, MARVELL_PHY_ID_MASK },
++	{ MARVELL_PHY_ID_88E6390_FAMILY, MARVELL_PHY_ID_MASK },
+ 	{ MARVELL_PHY_ID_88E1340S, MARVELL_PHY_ID_MASK },
+ 	{ MARVELL_PHY_ID_88E1548P, MARVELL_PHY_ID_MASK },
+ 	{ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index fa32f9045c0cb..500fdb0b6c42b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -684,6 +684,7 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 	IWL_DEV_INFO(0x4DF0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, NULL),
+ 	IWL_DEV_INFO(0x4DF0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0x4DF0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
++	IWL_DEV_INFO(0x4DF0, 0x6074, iwl_ax201_cfg_qu_hr, NULL),
+ 
+ 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ 		      IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY,
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index 50133c09a7805..133371385056d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -1181,6 +1181,7 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
+ 	u32 cmd_pos;
+ 	const u8 *cmddata[IWL_MAX_CMD_TBS_PER_TFD];
+ 	u16 cmdlen[IWL_MAX_CMD_TBS_PER_TFD];
++	unsigned long flags;
+ 
+ 	if (WARN(!trans->wide_cmd_header &&
+ 		 group_id > IWL_ALWAYS_LONG_GROUP,
+@@ -1264,10 +1265,10 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
+ 		goto free_dup_buf;
+ 	}
+ 
+-	spin_lock_bh(&txq->lock);
++	spin_lock_irqsave(&txq->lock, flags);
+ 
+ 	if (iwl_txq_space(trans, txq) < ((cmd->flags & CMD_ASYNC) ? 2 : 1)) {
+-		spin_unlock_bh(&txq->lock);
++		spin_unlock_irqrestore(&txq->lock, flags);
+ 
+ 		IWL_ERR(trans, "No space in command queue\n");
+ 		iwl_op_mode_cmd_queue_full(trans->op_mode);
+@@ -1427,7 +1428,7 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
+  unlock_reg:
+ 	spin_unlock(&trans_pcie->reg_lock);
+  out:
+-	spin_unlock_bh(&txq->lock);
++	spin_unlock_irqrestore(&txq->lock, flags);
+  free_dup_buf:
+ 	if (idx < 0)
+ 		kfree(dup_buf);
+diff --git a/drivers/net/wireless/virt_wifi.c b/drivers/net/wireless/virt_wifi.c
+index c878097f0ddaf..1df959532c7d3 100644
+--- a/drivers/net/wireless/virt_wifi.c
++++ b/drivers/net/wireless/virt_wifi.c
+@@ -12,6 +12,7 @@
+ #include <net/cfg80211.h>
+ #include <net/rtnetlink.h>
+ #include <linux/etherdevice.h>
++#include <linux/math64.h>
+ #include <linux/module.h>
+ 
+ static struct wiphy *common_wiphy;
+@@ -168,11 +169,11 @@ static void virt_wifi_scan_result(struct work_struct *work)
+ 			     scan_result.work);
+ 	struct wiphy *wiphy = priv_to_wiphy(priv);
+ 	struct cfg80211_scan_info scan_info = { .aborted = false };
++	u64 tsf = div_u64(ktime_get_boottime_ns(), 1000);
+ 
+ 	informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz,
+ 					   CFG80211_BSS_FTYPE_PRESP,
+-					   fake_router_bssid,
+-					   ktime_get_boottime_ns(),
++					   fake_router_bssid, tsf,
+ 					   WLAN_CAPABILITY_ESS, 0,
+ 					   (void *)&ssid, sizeof(ssid),
+ 					   DBM_TO_MBM(-50), GFP_KERNEL);
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index ef23119db5746..e05cc9f8a9fd1 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -1239,6 +1239,11 @@ int nvdimm_has_flush(struct nd_region *nd_region)
+ 			|| !IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API))
+ 		return -ENXIO;
+ 
++	/* Test if an explicit flush function is defined */
++	if (test_bit(ND_REGION_ASYNC, &nd_region->flags) && nd_region->flush)
++		return 1;
++
++	/* Test if any flush hints for the region are available */
+ 	for (i = 0; i < nd_region->ndr_mappings; i++) {
+ 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ 		struct nvdimm *nvdimm = nd_mapping->nvdimm;
+@@ -1249,8 +1254,8 @@ int nvdimm_has_flush(struct nd_region *nd_region)
+ 	}
+ 
+ 	/*
+-	 * The platform defines dimm devices without hints, assume
+-	 * platform persistence mechanism like ADR
++	 * The platform defines dimm devices without hints nor explicit flush,
++	 * assume platform persistence mechanism like ADR
+ 	 */
+ 	return 0;
+ }
+diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
+index 024e5a550759c..8b9a39077dbab 100644
+--- a/drivers/scsi/libsas/sas_ata.c
++++ b/drivers/scsi/libsas/sas_ata.c
+@@ -201,18 +201,17 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
+ 		memcpy(task->ata_task.atapi_packet, qc->cdb, qc->dev->cdb_len);
+ 		task->total_xfer_len = qc->nbytes;
+ 		task->num_scatter = qc->n_elem;
++		task->data_dir = qc->dma_dir;
++	} else if (qc->tf.protocol == ATA_PROT_NODATA) {
++		task->data_dir = DMA_NONE;
+ 	} else {
+ 		for_each_sg(qc->sg, sg, qc->n_elem, si)
+ 			xfer += sg_dma_len(sg);
+ 
+ 		task->total_xfer_len = xfer;
+ 		task->num_scatter = si;
+-	}
+-
+-	if (qc->tf.protocol == ATA_PROT_NODATA)
+-		task->data_dir = DMA_NONE;
+-	else
+ 		task->data_dir = qc->dma_dir;
++	}
+ 	task->scatter = qc->sg;
+ 	task->ata_task.retry_count = 1;
+ 	task->task_state_flags = SAS_TASK_STATE_PENDING;
+diff --git a/drivers/scsi/scsi_transport_srp.c b/drivers/scsi/scsi_transport_srp.c
+index 1e939a2a387f3..98a34ed10f1a0 100644
+--- a/drivers/scsi/scsi_transport_srp.c
++++ b/drivers/scsi/scsi_transport_srp.c
+@@ -541,7 +541,7 @@ int srp_reconnect_rport(struct srp_rport *rport)
+ 	res = mutex_lock_interruptible(&rport->mutex);
+ 	if (res)
+ 		goto out;
+-	if (rport->state != SRP_RPORT_FAIL_FAST)
++	if (rport->state != SRP_RPORT_FAIL_FAST && rport->state != SRP_RPORT_LOST)
+ 		/*
+ 		 * sdev state must be SDEV_TRANSPORT_OFFLINE, transition
+ 		 * to SDEV_BLOCK is illegal. Calling scsi_target_unblock()
+diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
+index 706de3ef94bbf..465f646e33298 100644
+--- a/drivers/vfio/pci/vfio_pci.c
++++ b/drivers/vfio/pci/vfio_pci.c
+@@ -1658,6 +1658,8 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
+ 
+ 	index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
+ 
++	if (index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions)
++		return -EINVAL;
+ 	if (vma->vm_end < vma->vm_start)
+ 		return -EINVAL;
+ 	if ((vma->vm_flags & VM_SHARED) == 0)
+@@ -1666,7 +1668,7 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
+ 		int regnum = index - VFIO_PCI_NUM_REGIONS;
+ 		struct vfio_pci_region *region = vdev->region + regnum;
+ 
+-		if (region && region->ops && region->ops->mmap &&
++		if (region->ops && region->ops->mmap &&
+ 		    (region->flags & VFIO_REGION_INFO_FLAG_MMAP))
+ 			return region->ops->mmap(vdev, region, vma);
+ 		return -EINVAL;
+diff --git a/fs/readdir.c b/fs/readdir.c
+index 19434b3c982cd..09e8ed7d41614 100644
+--- a/fs/readdir.c
++++ b/fs/readdir.c
+@@ -150,6 +150,9 @@ static int fillonedir(struct dir_context *ctx, const char *name, int namlen,
+ 
+ 	if (buf->result)
+ 		return -EINVAL;
++	buf->result = verify_dirent_name(name, namlen);
++	if (buf->result < 0)
++		return buf->result;
+ 	d_ino = ino;
+ 	if (sizeof(d_ino) < sizeof(ino) && d_ino != ino) {
+ 		buf->result = -EOVERFLOW;
+@@ -405,6 +408,9 @@ static int compat_fillonedir(struct dir_context *ctx, const char *name,
+ 
+ 	if (buf->result)
+ 		return -EINVAL;
++	buf->result = verify_dirent_name(name, namlen);
++	if (buf->result < 0)
++		return buf->result;
+ 	d_ino = ino;
+ 	if (sizeof(d_ino) < sizeof(ino) && d_ino != ino) {
+ 		buf->result = -EOVERFLOW;
+diff --git a/include/linux/marvell_phy.h b/include/linux/marvell_phy.h
+index ff7b7607c8cf5..f5cf19d197763 100644
+--- a/include/linux/marvell_phy.h
++++ b/include/linux/marvell_phy.h
+@@ -25,11 +25,12 @@
+ #define MARVELL_PHY_ID_88X3310		0x002b09a0
+ #define MARVELL_PHY_ID_88E2110		0x002b09b0
+ 
+-/* The MV88e6390 Ethernet switch contains embedded PHYs. These PHYs do
++/* These Ethernet switch families contain embedded PHYs, but they do
+  * not have a model ID. So the switch driver traps reads to the ID2
+  * register and returns the switch family ID
+  */
+-#define MARVELL_PHY_ID_88E6390		0x01410f90
++#define MARVELL_PHY_ID_88E6341_FAMILY	0x01410f41
++#define MARVELL_PHY_ID_88E6390_FAMILY	0x01410f90
+ 
+ #define MARVELL_PHY_FAMILY_ID(id)	((id) >> 4)
+ 
+diff --git a/include/linux/netfilter_arp/arp_tables.h b/include/linux/netfilter_arp/arp_tables.h
+index 7d3537c40ec95..26a13294318cf 100644
+--- a/include/linux/netfilter_arp/arp_tables.h
++++ b/include/linux/netfilter_arp/arp_tables.h
+@@ -52,8 +52,9 @@ extern void *arpt_alloc_initial_table(const struct xt_table *);
+ int arpt_register_table(struct net *net, const struct xt_table *table,
+ 			const struct arpt_replace *repl,
+ 			const struct nf_hook_ops *ops, struct xt_table **res);
+-void arpt_unregister_table(struct net *net, struct xt_table *table,
+-			   const struct nf_hook_ops *ops);
++void arpt_unregister_table(struct net *net, struct xt_table *table);
++void arpt_unregister_table_pre_exit(struct net *net, struct xt_table *table,
++				    const struct nf_hook_ops *ops);
+ extern unsigned int arpt_do_table(struct sk_buff *skb,
+ 				  const struct nf_hook_state *state,
+ 				  struct xt_table *table);
+diff --git a/include/linux/netfilter_bridge/ebtables.h b/include/linux/netfilter_bridge/ebtables.h
+index 2f5c4e6ecd8a4..3a956145a25cb 100644
+--- a/include/linux/netfilter_bridge/ebtables.h
++++ b/include/linux/netfilter_bridge/ebtables.h
+@@ -110,8 +110,9 @@ extern int ebt_register_table(struct net *net,
+ 			      const struct ebt_table *table,
+ 			      const struct nf_hook_ops *ops,
+ 			      struct ebt_table **res);
+-extern void ebt_unregister_table(struct net *net, struct ebt_table *table,
+-				 const struct nf_hook_ops *);
++extern void ebt_unregister_table(struct net *net, struct ebt_table *table);
++void ebt_unregister_table_pre_exit(struct net *net, const char *tablename,
++				   const struct nf_hook_ops *ops);
+ extern unsigned int ebt_do_table(struct sk_buff *skb,
+ 				 const struct nf_hook_state *state,
+ 				 struct ebt_table *table);
+diff --git a/include/uapi/linux/idxd.h b/include/uapi/linux/idxd.h
+index fdcdfe414223e..9d9ecc0f4c383 100644
+--- a/include/uapi/linux/idxd.h
++++ b/include/uapi/linux/idxd.h
+@@ -187,8 +187,8 @@ struct dsa_completion_record {
+ 			uint32_t	rsvd2:8;
+ 		};
+ 
+-		uint16_t	delta_rec_size;
+-		uint16_t	crc_val;
++		uint32_t	delta_rec_size;
++		uint32_t	crc_val;
+ 
+ 		/* DIF check & strip */
+ 		struct {
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 12cd2997f982a..3370f0d476e97 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5328,12 +5328,26 @@ static struct bpf_insn_aux_data *cur_aux(struct bpf_verifier_env *env)
+ 	return &env->insn_aux_data[env->insn_idx];
+ }
+ 
++enum {
++	REASON_BOUNDS	= -1,
++	REASON_TYPE	= -2,
++	REASON_PATHS	= -3,
++	REASON_LIMIT	= -4,
++	REASON_STACK	= -5,
++};
++
+ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
+-			      u32 *ptr_limit, u8 opcode, bool off_is_neg)
++			      const struct bpf_reg_state *off_reg,
++			      u32 *alu_limit, u8 opcode)
+ {
++	bool off_is_neg = off_reg->smin_value < 0;
+ 	bool mask_to_left = (opcode == BPF_ADD &&  off_is_neg) ||
+ 			    (opcode == BPF_SUB && !off_is_neg);
+-	u32 off, max;
++	u32 off, max = 0, ptr_limit = 0;
++
++	if (!tnum_is_const(off_reg->var_off) &&
++	    (off_reg->smin_value < 0) != (off_reg->smax_value < 0))
++		return REASON_BOUNDS;
+ 
+ 	switch (ptr_reg->type) {
+ 	case PTR_TO_STACK:
+@@ -5346,22 +5360,27 @@ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
+ 		 */
+ 		off = ptr_reg->off + ptr_reg->var_off.value;
+ 		if (mask_to_left)
+-			*ptr_limit = MAX_BPF_STACK + off;
++			ptr_limit = MAX_BPF_STACK + off;
+ 		else
+-			*ptr_limit = -off - 1;
+-		return *ptr_limit >= max ? -ERANGE : 0;
++			ptr_limit = -off - 1;
++		break;
+ 	case PTR_TO_MAP_VALUE:
+ 		max = ptr_reg->map_ptr->value_size;
+ 		if (mask_to_left) {
+-			*ptr_limit = ptr_reg->umax_value + ptr_reg->off;
++			ptr_limit = ptr_reg->umax_value + ptr_reg->off;
+ 		} else {
+ 			off = ptr_reg->smin_value + ptr_reg->off;
+-			*ptr_limit = ptr_reg->map_ptr->value_size - off - 1;
++			ptr_limit = ptr_reg->map_ptr->value_size - off - 1;
+ 		}
+-		return *ptr_limit >= max ? -ERANGE : 0;
++		break;
+ 	default:
+-		return -EINVAL;
++		return REASON_TYPE;
+ 	}
++
++	if (ptr_limit >= max)
++		return REASON_LIMIT;
++	*alu_limit = ptr_limit;
++	return 0;
+ }
+ 
+ static bool can_skip_alu_sanitation(const struct bpf_verifier_env *env,
+@@ -5379,7 +5398,7 @@ static int update_alu_sanitation_state(struct bpf_insn_aux_data *aux,
+ 	if (aux->alu_state &&
+ 	    (aux->alu_state != alu_state ||
+ 	     aux->alu_limit != alu_limit))
+-		return -EACCES;
++		return REASON_PATHS;
+ 
+ 	/* Corresponding fixup done in fixup_bpf_calls(). */
+ 	aux->alu_state = alu_state;
+@@ -5398,14 +5417,20 @@ static int sanitize_val_alu(struct bpf_verifier_env *env,
+ 	return update_alu_sanitation_state(aux, BPF_ALU_NON_POINTER, 0);
+ }
+ 
++static bool sanitize_needed(u8 opcode)
++{
++	return opcode == BPF_ADD || opcode == BPF_SUB;
++}
++
+ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 			    struct bpf_insn *insn,
+ 			    const struct bpf_reg_state *ptr_reg,
+-			    struct bpf_reg_state *dst_reg,
+-			    bool off_is_neg)
++			    const struct bpf_reg_state *off_reg,
++			    struct bpf_reg_state *dst_reg)
+ {
+ 	struct bpf_verifier_state *vstate = env->cur_state;
+ 	struct bpf_insn_aux_data *aux = cur_aux(env);
++	bool off_is_neg = off_reg->smin_value < 0;
+ 	bool ptr_is_dst_reg = ptr_reg == dst_reg;
+ 	u8 opcode = BPF_OP(insn->code);
+ 	u32 alu_state, alu_limit;
+@@ -5427,7 +5452,7 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 	alu_state |= ptr_is_dst_reg ?
+ 		     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
+ 
+-	err = retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg);
++	err = retrieve_ptr_limit(ptr_reg, off_reg, &alu_limit, opcode);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -5451,7 +5476,46 @@ do_sim:
+ 	ret = push_stack(env, env->insn_idx + 1, env->insn_idx, true);
+ 	if (!ptr_is_dst_reg && ret)
+ 		*dst_reg = tmp;
+-	return !ret ? -EFAULT : 0;
++	return !ret ? REASON_STACK : 0;
++}
++
++static int sanitize_err(struct bpf_verifier_env *env,
++			const struct bpf_insn *insn, int reason,
++			const struct bpf_reg_state *off_reg,
++			const struct bpf_reg_state *dst_reg)
++{
++	static const char *err = "pointer arithmetic with it prohibited for !root";
++	const char *op = BPF_OP(insn->code) == BPF_ADD ? "add" : "sub";
++	u32 dst = insn->dst_reg, src = insn->src_reg;
++
++	switch (reason) {
++	case REASON_BOUNDS:
++		verbose(env, "R%d has unknown scalar with mixed signed bounds, %s\n",
++			off_reg == dst_reg ? dst : src, err);
++		break;
++	case REASON_TYPE:
++		verbose(env, "R%d has pointer with unsupported alu operation, %s\n",
++			off_reg == dst_reg ? src : dst, err);
++		break;
++	case REASON_PATHS:
++		verbose(env, "R%d tried to %s from different maps, paths or scalars, %s\n",
++			dst, op, err);
++		break;
++	case REASON_LIMIT:
++		verbose(env, "R%d tried to %s beyond pointer bounds, %s\n",
++			dst, op, err);
++		break;
++	case REASON_STACK:
++		verbose(env, "R%d could not be pushed for speculative verification, %s\n",
++			dst, err);
++		break;
++	default:
++		verbose(env, "verifier internal error: unknown reason (%d)\n",
++			reason);
++		break;
++	}
++
++	return -EACCES;
+ }
+ 
+ /* Handles arithmetic on a pointer and a scalar: computes new min/max and var_off.
+@@ -5472,8 +5536,8 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 	    smin_ptr = ptr_reg->smin_value, smax_ptr = ptr_reg->smax_value;
+ 	u64 umin_val = off_reg->umin_value, umax_val = off_reg->umax_value,
+ 	    umin_ptr = ptr_reg->umin_value, umax_ptr = ptr_reg->umax_value;
+-	u32 dst = insn->dst_reg, src = insn->src_reg;
+ 	u8 opcode = BPF_OP(insn->code);
++	u32 dst = insn->dst_reg;
+ 	int ret;
+ 
+ 	dst_reg = &regs[dst];
+@@ -5521,13 +5585,6 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		verbose(env, "R%d pointer arithmetic on %s prohibited\n",
+ 			dst, reg_type_str[ptr_reg->type]);
+ 		return -EACCES;
+-	case PTR_TO_MAP_VALUE:
+-		if (!env->allow_ptr_leaks && !known && (smin_val < 0) != (smax_val < 0)) {
+-			verbose(env, "R%d has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root\n",
+-				off_reg == dst_reg ? dst : src);
+-			return -EACCES;
+-		}
+-		fallthrough;
+ 	default:
+ 		break;
+ 	}
+@@ -5547,11 +5604,10 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 
+ 	switch (opcode) {
+ 	case BPF_ADD:
+-		ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
+-		if (ret < 0) {
+-			verbose(env, "R%d tried to add from different maps, paths, or prohibited types\n", dst);
+-			return ret;
+-		}
++		ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg);
++		if (ret < 0)
++			return sanitize_err(env, insn, ret, off_reg, dst_reg);
++
+ 		/* We can take a fixed offset as long as it doesn't overflow
+ 		 * the s32 'off' field
+ 		 */
+@@ -5602,11 +5658,10 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		}
+ 		break;
+ 	case BPF_SUB:
+-		ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
+-		if (ret < 0) {
+-			verbose(env, "R%d tried to sub from different maps, paths, or prohibited types\n", dst);
+-			return ret;
+-		}
++		ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg);
++		if (ret < 0)
++			return sanitize_err(env, insn, ret, off_reg, dst_reg);
++
+ 		if (dst_reg == off_reg) {
+ 			/* scalar -= pointer.  Creates an unknown scalar */
+ 			verbose(env, "R%d tried to subtract pointer from scalar\n",
+@@ -6296,9 +6351,8 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
+ 	s32 s32_min_val, s32_max_val;
+ 	u32 u32_min_val, u32_max_val;
+ 	u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
+-	u32 dst = insn->dst_reg;
+-	int ret;
+ 	bool alu32 = (BPF_CLASS(insn->code) != BPF_ALU64);
++	int ret;
+ 
+ 	smin_val = src_reg.smin_value;
+ 	smax_val = src_reg.smax_value;
+@@ -6340,6 +6394,12 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
+ 		return 0;
+ 	}
+ 
++	if (sanitize_needed(opcode)) {
++		ret = sanitize_val_alu(env, insn);
++		if (ret < 0)
++			return sanitize_err(env, insn, ret, NULL, NULL);
++	}
++
+ 	/* Calculate sign/unsigned bounds and tnum for alu32 and alu64 bit ops.
+ 	 * There are two classes of instructions: The first class we track both
+ 	 * alu32 and alu64 sign/unsigned bounds independently this provides the
+@@ -6356,21 +6416,11 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
+ 	 */
+ 	switch (opcode) {
+ 	case BPF_ADD:
+-		ret = sanitize_val_alu(env, insn);
+-		if (ret < 0) {
+-			verbose(env, "R%d tried to add from different pointers or scalars\n", dst);
+-			return ret;
+-		}
+ 		scalar32_min_max_add(dst_reg, &src_reg);
+ 		scalar_min_max_add(dst_reg, &src_reg);
+ 		dst_reg->var_off = tnum_add(dst_reg->var_off, src_reg.var_off);
+ 		break;
+ 	case BPF_SUB:
+-		ret = sanitize_val_alu(env, insn);
+-		if (ret < 0) {
+-			verbose(env, "R%d tried to sub from different pointers or scalars\n", dst);
+-			return ret;
+-		}
+ 		scalar32_min_max_sub(dst_reg, &src_reg);
+ 		scalar_min_max_sub(dst_reg, &src_reg);
+ 		dst_reg->var_off = tnum_sub(dst_reg->var_off, src_reg.var_off);
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index eead7efbe7e5d..38d7c03e694cd 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -930,7 +930,8 @@ static bool assign_lock_key(struct lockdep_map *lock)
+ 		/* Debug-check: all keys must be persistent! */
+ 		debug_locks_off();
+ 		pr_err("INFO: trying to register non-static key.\n");
+-		pr_err("the code is fine but needs lockdep annotation.\n");
++		pr_err("The code is fine but needs lockdep annotation, or maybe\n");
++		pr_err("you didn't initialize this object before use?\n");
+ 		pr_err("turning off the locking correctness validator.\n");
+ 		dump_stack();
+ 		return false;
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index c789b39ed5271..dcf4a9028e165 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1302,7 +1302,7 @@ config LOCKDEP
+ 	bool
+ 	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
+ 	select STACKTRACE
+-	select FRAME_POINTER if !MIPS && !PPC && !ARM && !S390 && !MICROBLAZE && !ARC && !X86
++	depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
+ 	select KALLSYMS
+ 	select KALLSYMS_ALL
+ 
+@@ -1596,7 +1596,7 @@ config LATENCYTOP
+ 	depends on DEBUG_KERNEL
+ 	depends on STACKTRACE_SUPPORT
+ 	depends on PROC_FS
+-	select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM && !ARC && !X86
++	depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
+ 	select KALLSYMS
+ 	select KALLSYMS_ALL
+ 	select STACKTRACE
+@@ -1849,7 +1849,7 @@ config FAULT_INJECTION_STACKTRACE_FILTER
+ 	depends on FAULT_INJECTION_DEBUG_FS && STACKTRACE_SUPPORT
+ 	depends on !X86_64
+ 	select STACKTRACE
+-	select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM && !ARC && !X86
++	depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
+ 	help
+ 	  Provide stacktrace filter for fault-injection capabilities
+ 
+diff --git a/mm/ptdump.c b/mm/ptdump.c
+index ba88ec43ff218..93f2f63dc52dc 100644
+--- a/mm/ptdump.c
++++ b/mm/ptdump.c
+@@ -108,7 +108,7 @@ static int ptdump_pte_entry(pte_t *pte, unsigned long addr,
+ 			    unsigned long next, struct mm_walk *walk)
+ {
+ 	struct ptdump_state *st = walk->private;
+-	pte_t val = READ_ONCE(*pte);
++	pte_t val = ptep_get(pte);
+ 
+ 	if (st->effective_prot)
+ 		st->effective_prot(st, 4, pte_val(val));
+diff --git a/net/bridge/netfilter/ebtable_broute.c b/net/bridge/netfilter/ebtable_broute.c
+index 66e7af1654943..32bc2821027f3 100644
+--- a/net/bridge/netfilter/ebtable_broute.c
++++ b/net/bridge/netfilter/ebtable_broute.c
+@@ -105,14 +105,20 @@ static int __net_init broute_net_init(struct net *net)
+ 				  &net->xt.broute_table);
+ }
+ 
++static void __net_exit broute_net_pre_exit(struct net *net)
++{
++	ebt_unregister_table_pre_exit(net, "broute", &ebt_ops_broute);
++}
++
+ static void __net_exit broute_net_exit(struct net *net)
+ {
+-	ebt_unregister_table(net, net->xt.broute_table, &ebt_ops_broute);
++	ebt_unregister_table(net, net->xt.broute_table);
+ }
+ 
+ static struct pernet_operations broute_net_ops = {
+ 	.init = broute_net_init,
+ 	.exit = broute_net_exit,
++	.pre_exit = broute_net_pre_exit,
+ };
+ 
+ static int __init ebtable_broute_init(void)
+diff --git a/net/bridge/netfilter/ebtable_filter.c b/net/bridge/netfilter/ebtable_filter.c
+index 78cb9b21022d0..bcf982e12f16b 100644
+--- a/net/bridge/netfilter/ebtable_filter.c
++++ b/net/bridge/netfilter/ebtable_filter.c
+@@ -99,14 +99,20 @@ static int __net_init frame_filter_net_init(struct net *net)
+ 				  &net->xt.frame_filter);
+ }
+ 
++static void __net_exit frame_filter_net_pre_exit(struct net *net)
++{
++	ebt_unregister_table_pre_exit(net, "filter", ebt_ops_filter);
++}
++
+ static void __net_exit frame_filter_net_exit(struct net *net)
+ {
+-	ebt_unregister_table(net, net->xt.frame_filter, ebt_ops_filter);
++	ebt_unregister_table(net, net->xt.frame_filter);
+ }
+ 
+ static struct pernet_operations frame_filter_net_ops = {
+ 	.init = frame_filter_net_init,
+ 	.exit = frame_filter_net_exit,
++	.pre_exit = frame_filter_net_pre_exit,
+ };
+ 
+ static int __init ebtable_filter_init(void)
+diff --git a/net/bridge/netfilter/ebtable_nat.c b/net/bridge/netfilter/ebtable_nat.c
+index 0888936ef8537..0d092773f8161 100644
+--- a/net/bridge/netfilter/ebtable_nat.c
++++ b/net/bridge/netfilter/ebtable_nat.c
+@@ -99,14 +99,20 @@ static int __net_init frame_nat_net_init(struct net *net)
+ 				  &net->xt.frame_nat);
+ }
+ 
++static void __net_exit frame_nat_net_pre_exit(struct net *net)
++{
++	ebt_unregister_table_pre_exit(net, "nat", ebt_ops_nat);
++}
++
+ static void __net_exit frame_nat_net_exit(struct net *net)
+ {
+-	ebt_unregister_table(net, net->xt.frame_nat, ebt_ops_nat);
++	ebt_unregister_table(net, net->xt.frame_nat);
+ }
+ 
+ static struct pernet_operations frame_nat_net_ops = {
+ 	.init = frame_nat_net_init,
+ 	.exit = frame_nat_net_exit,
++	.pre_exit = frame_nat_net_pre_exit,
+ };
+ 
+ static int __init ebtable_nat_init(void)
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index ebe33b60efd6b..d481ff24a1501 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -1232,10 +1232,34 @@ out:
+ 	return ret;
+ }
+ 
+-void ebt_unregister_table(struct net *net, struct ebt_table *table,
+-			  const struct nf_hook_ops *ops)
++static struct ebt_table *__ebt_find_table(struct net *net, const char *name)
++{
++	struct ebt_table *t;
++
++	mutex_lock(&ebt_mutex);
++
++	list_for_each_entry(t, &net->xt.tables[NFPROTO_BRIDGE], list) {
++		if (strcmp(t->name, name) == 0) {
++			mutex_unlock(&ebt_mutex);
++			return t;
++		}
++	}
++
++	mutex_unlock(&ebt_mutex);
++	return NULL;
++}
++
++void ebt_unregister_table_pre_exit(struct net *net, const char *name, const struct nf_hook_ops *ops)
++{
++	struct ebt_table *table = __ebt_find_table(net, name);
++
++	if (table)
++		nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks));
++}
++EXPORT_SYMBOL(ebt_unregister_table_pre_exit);
++
++void ebt_unregister_table(struct net *net, struct ebt_table *table)
+ {
+-	nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks));
+ 	__ebt_unregister_table(net, table);
+ }
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 62ff7121b22d3..64f4c7ec729dc 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5867,7 +5867,8 @@ static void skb_gro_reset_offset(struct sk_buff *skb)
+ 	NAPI_GRO_CB(skb)->frag0_len = 0;
+ 
+ 	if (!skb_headlen(skb) && pinfo->nr_frags &&
+-	    !PageHighMem(skb_frag_page(frag0))) {
++	    !PageHighMem(skb_frag_page(frag0)) &&
++	    (!NET_IP_ALIGN || !(skb_frag_off(frag0) & 3))) {
+ 		NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
+ 		NAPI_GRO_CB(skb)->frag0_len = min_t(unsigned int,
+ 						    skb_frag_size(frag0),
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 2fe4bbb6b80cf..8339978d46ff8 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -1380,7 +1380,7 @@ static int __neigh_update(struct neighbour *neigh, const u8 *lladdr,
+ 			 * we can reinject the packet there.
+ 			 */
+ 			n2 = NULL;
+-			if (dst) {
++			if (dst && dst->obsolete != DST_OBSOLETE_DEAD) {
+ 				n2 = dst_neigh_lookup_skb(dst, skb);
+ 				if (n2)
+ 					n1 = n2;
+diff --git a/net/ethtool/pause.c b/net/ethtool/pause.c
+index 09998dc5c185f..d4ac02718b72a 100644
+--- a/net/ethtool/pause.c
++++ b/net/ethtool/pause.c
+@@ -38,16 +38,16 @@ static int pause_prepare_data(const struct ethnl_req_info *req_base,
+ 	if (!dev->ethtool_ops->get_pauseparam)
+ 		return -EOPNOTSUPP;
+ 
++	ethtool_stats_init((u64 *)&data->pausestat,
++			   sizeof(data->pausestat) / 8);
++
+ 	ret = ethnl_ops_begin(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 	dev->ethtool_ops->get_pauseparam(dev, &data->pauseparam);
+ 	if (req_base->flags & ETHTOOL_FLAG_STATS &&
+-	    dev->ethtool_ops->get_pause_stats) {
+-		ethtool_stats_init((u64 *)&data->pausestat,
+-				   sizeof(data->pausestat) / 8);
++	    dev->ethtool_ops->get_pause_stats)
+ 		dev->ethtool_ops->get_pause_stats(dev, &data->pausestat);
+-	}
+ 	ethnl_ops_complete(dev);
+ 
+ 	return 0;
+diff --git a/net/ieee802154/nl802154.c b/net/ieee802154/nl802154.c
+index d1b6a9665b170..f0b47d43c9f6e 100644
+--- a/net/ieee802154/nl802154.c
++++ b/net/ieee802154/nl802154.c
+@@ -1498,6 +1498,11 @@ nl802154_dump_llsec_key(struct sk_buff *skb, struct netlink_callback *cb)
+ 	if (err)
+ 		return err;
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) {
++		err = skb->len;
++		goto out_err;
++	}
++
+ 	if (!wpan_dev->netdev) {
+ 		err = -EINVAL;
+ 		goto out_err;
+@@ -1552,6 +1557,9 @@ static int nl802154_add_llsec_key(struct sk_buff *skb, struct genl_info *info)
+ 	struct ieee802154_llsec_key_id id = { };
+ 	u32 commands[NL802154_CMD_FRAME_NR_IDS / 32] = { };
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
++		return -EOPNOTSUPP;
++
+ 	if (!info->attrs[NL802154_ATTR_SEC_KEY] ||
+ 	    nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
+ 		return -EINVAL;
+@@ -1601,6 +1609,9 @@ static int nl802154_del_llsec_key(struct sk_buff *skb, struct genl_info *info)
+ 	struct nlattr *attrs[NL802154_KEY_ATTR_MAX + 1];
+ 	struct ieee802154_llsec_key_id id;
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
++		return -EOPNOTSUPP;
++
+ 	if (!info->attrs[NL802154_ATTR_SEC_KEY] ||
+ 	    nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
+ 		return -EINVAL;
+@@ -1666,6 +1677,11 @@ nl802154_dump_llsec_dev(struct sk_buff *skb, struct netlink_callback *cb)
+ 	if (err)
+ 		return err;
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) {
++		err = skb->len;
++		goto out_err;
++	}
++
+ 	if (!wpan_dev->netdev) {
+ 		err = -EINVAL;
+ 		goto out_err;
+@@ -1752,6 +1768,9 @@ static int nl802154_add_llsec_dev(struct sk_buff *skb, struct genl_info *info)
+ 	struct wpan_dev *wpan_dev = dev->ieee802154_ptr;
+ 	struct ieee802154_llsec_device dev_desc;
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
++		return -EOPNOTSUPP;
++
+ 	if (ieee802154_llsec_parse_device(info->attrs[NL802154_ATTR_SEC_DEVICE],
+ 					  &dev_desc) < 0)
+ 		return -EINVAL;
+@@ -1767,6 +1786,9 @@ static int nl802154_del_llsec_dev(struct sk_buff *skb, struct genl_info *info)
+ 	struct nlattr *attrs[NL802154_DEV_ATTR_MAX + 1];
+ 	__le64 extended_addr;
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
++		return -EOPNOTSUPP;
++
+ 	if (!info->attrs[NL802154_ATTR_SEC_DEVICE] ||
+ 	    nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack))
+ 		return -EINVAL;
+@@ -1836,6 +1858,11 @@ nl802154_dump_llsec_devkey(struct sk_buff *skb, struct netlink_callback *cb)
+ 	if (err)
+ 		return err;
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) {
++		err = skb->len;
++		goto out_err;
++	}
++
+ 	if (!wpan_dev->netdev) {
+ 		err = -EINVAL;
+ 		goto out_err;
+@@ -1893,6 +1920,9 @@ static int nl802154_add_llsec_devkey(struct sk_buff *skb, struct genl_info *info
+ 	struct ieee802154_llsec_device_key key;
+ 	__le64 extended_addr;
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
++		return -EOPNOTSUPP;
++
+ 	if (!info->attrs[NL802154_ATTR_SEC_DEVKEY] ||
+ 	    nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack) < 0)
+ 		return -EINVAL;
+@@ -1924,6 +1954,9 @@ static int nl802154_del_llsec_devkey(struct sk_buff *skb, struct genl_info *info
+ 	struct ieee802154_llsec_device_key key;
+ 	__le64 extended_addr;
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
++		return -EOPNOTSUPP;
++
+ 	if (!info->attrs[NL802154_ATTR_SEC_DEVKEY] ||
+ 	    nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack))
+ 		return -EINVAL;
+@@ -1998,6 +2031,11 @@ nl802154_dump_llsec_seclevel(struct sk_buff *skb, struct netlink_callback *cb)
+ 	if (err)
+ 		return err;
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) {
++		err = skb->len;
++		goto out_err;
++	}
++
+ 	if (!wpan_dev->netdev) {
+ 		err = -EINVAL;
+ 		goto out_err;
+@@ -2082,6 +2120,9 @@ static int nl802154_add_llsec_seclevel(struct sk_buff *skb,
+ 	struct wpan_dev *wpan_dev = dev->ieee802154_ptr;
+ 	struct ieee802154_llsec_seclevel sl;
+ 
++	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
++		return -EOPNOTSUPP;
++
+ 	if (llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL],
+ 				 &sl) < 0)
+ 		return -EINVAL;
+diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c
+index e0093411d85d6..d6d45d820d79a 100644
+--- a/net/ipv4/netfilter/arp_tables.c
++++ b/net/ipv4/netfilter/arp_tables.c
+@@ -1541,10 +1541,15 @@ out_free:
+ 	return ret;
+ }
+ 
+-void arpt_unregister_table(struct net *net, struct xt_table *table,
+-			   const struct nf_hook_ops *ops)
++void arpt_unregister_table_pre_exit(struct net *net, struct xt_table *table,
++				    const struct nf_hook_ops *ops)
+ {
+ 	nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks));
++}
++EXPORT_SYMBOL(arpt_unregister_table_pre_exit);
++
++void arpt_unregister_table(struct net *net, struct xt_table *table)
++{
+ 	__arpt_unregister_table(net, table);
+ }
+ 
+diff --git a/net/ipv4/netfilter/arptable_filter.c b/net/ipv4/netfilter/arptable_filter.c
+index c216b9ad3bb24..6c300ba5634e2 100644
+--- a/net/ipv4/netfilter/arptable_filter.c
++++ b/net/ipv4/netfilter/arptable_filter.c
+@@ -56,16 +56,24 @@ static int __net_init arptable_filter_table_init(struct net *net)
+ 	return err;
+ }
+ 
++static void __net_exit arptable_filter_net_pre_exit(struct net *net)
++{
++	if (net->ipv4.arptable_filter)
++		arpt_unregister_table_pre_exit(net, net->ipv4.arptable_filter,
++					       arpfilter_ops);
++}
++
+ static void __net_exit arptable_filter_net_exit(struct net *net)
+ {
+ 	if (!net->ipv4.arptable_filter)
+ 		return;
+-	arpt_unregister_table(net, net->ipv4.arptable_filter, arpfilter_ops);
++	arpt_unregister_table(net, net->ipv4.arptable_filter);
+ 	net->ipv4.arptable_filter = NULL;
+ }
+ 
+ static struct pernet_operations arptable_filter_net_ops = {
+ 	.exit = arptable_filter_net_exit,
++	.pre_exit = arptable_filter_net_pre_exit,
+ };
+ 
+ static int __init arptable_filter_init(void)
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 3e5f4f2e705e8..08829809e88b7 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -1369,9 +1369,19 @@ static __net_init int ipv4_sysctl_init_net(struct net *net)
+ 		if (!table)
+ 			goto err_alloc;
+ 
+-		/* Update the variables to point into the current struct net */
+-		for (i = 0; i < ARRAY_SIZE(ipv4_net_table) - 1; i++)
+-			table[i].data += (void *)net - (void *)&init_net;
++		for (i = 0; i < ARRAY_SIZE(ipv4_net_table) - 1; i++) {
++			if (table[i].data) {
++				/* Update the variables to point into
++				 * the current struct net
++				 */
++				table[i].data += (void *)net - (void *)&init_net;
++			} else {
++				/* Entries without data pointer are global;
++				 * Make them read-only in non-init_net ns
++				 */
++				table[i].mode &= ~0222;
++			}
++		}
+ 	}
+ 
+ 	net->ipv4.ipv4_hdr = register_net_sysctl(net, "net/ipv4", table);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 5d27b5c631217..ecc1abfca0650 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -2275,6 +2275,16 @@ static void __net_exit ip6_tnl_destroy_tunnels(struct net *net, struct list_head
+ 			t = rtnl_dereference(t->next);
+ 		}
+ 	}
++
++	t = rtnl_dereference(ip6n->tnls_wc[0]);
++	while (t) {
++		/* If dev is in the same netns, it has already
++		 * been added to the list by the previous loop.
++		 */
++		if (!net_eq(dev_net(t->dev), net))
++			unregister_netdevice_queue(t->dev, list);
++		t = rtnl_dereference(t->next);
++	}
+ }
+ 
+ static int __net_init ip6_tnl_init_net(struct net *net)
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index b26f469a3fb8c..146ba7fa5bf62 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -1867,9 +1867,9 @@ static void __net_exit sit_destroy_tunnels(struct net *net,
+ 		if (dev->rtnl_link_ops == &sit_link_ops)
+ 			unregister_netdevice_queue(dev, head);
+ 
+-	for (prio = 1; prio < 4; prio++) {
++	for (prio = 0; prio < 4; prio++) {
+ 		int h;
+-		for (h = 0; h < IP6_SIT_HASH_SIZE; h++) {
++		for (h = 0; h < (prio ? IP6_SIT_HASH_SIZE : 1); h++) {
+ 			struct ip_tunnel *t;
+ 
+ 			t = rtnl_dereference(sitn->tunnels[prio][h]);
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 2bf6271d9e3f6..6a96deded7632 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -1789,8 +1789,10 @@ static int ieee80211_change_station(struct wiphy *wiphy,
+ 		}
+ 
+ 		if (sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
+-		    sta->sdata->u.vlan.sta)
++		    sta->sdata->u.vlan.sta) {
++			ieee80211_clear_fast_rx(sta);
+ 			RCU_INIT_POINTER(sta->sdata->u.vlan.sta, NULL);
++		}
+ 
+ 		if (test_sta_flag(sta, WLAN_STA_AUTHORIZED))
+ 			ieee80211_vif_dec_num_mcast(sta->sdata);
+diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c
+index 0ee702d374b02..c6c0cb4656645 100644
+--- a/net/netfilter/nf_conntrack_standalone.c
++++ b/net/netfilter/nf_conntrack_standalone.c
+@@ -266,6 +266,7 @@ static const char* l4proto_name(u16 proto)
+ 	case IPPROTO_GRE: return "gre";
+ 	case IPPROTO_SCTP: return "sctp";
+ 	case IPPROTO_UDPLITE: return "udplite";
++	case IPPROTO_ICMPV6: return "icmpv6";
+ 	}
+ 
+ 	return "unknown";
+diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
+index 2a6993fa40d78..1c5460e7bce87 100644
+--- a/net/netfilter/nf_flow_table_offload.c
++++ b/net/netfilter/nf_flow_table_offload.c
+@@ -305,12 +305,12 @@ static void flow_offload_ipv6_mangle(struct nf_flow_rule *flow_rule,
+ 				     const __be32 *addr, const __be32 *mask)
+ {
+ 	struct flow_action_entry *entry;
+-	int i;
++	int i, j;
+ 
+-	for (i = 0; i < sizeof(struct in6_addr) / sizeof(u32); i += sizeof(u32)) {
++	for (i = 0, j = 0; i < sizeof(struct in6_addr) / sizeof(u32); i += sizeof(u32), j++) {
+ 		entry = flow_action_entry_next(flow_rule);
+ 		flow_offload_mangle(entry, FLOW_ACT_MANGLE_HDR_TYPE_IP6,
+-				    offset + i, &addr[i], mask);
++				    offset + i, &addr[j], mask);
+ 	}
+ }
+ 
+diff --git a/net/netfilter/nft_limit.c b/net/netfilter/nft_limit.c
+index 0e2c315c3b5ed..82ec27bdf9412 100644
+--- a/net/netfilter/nft_limit.c
++++ b/net/netfilter/nft_limit.c
+@@ -76,13 +76,13 @@ static int nft_limit_init(struct nft_limit *limit,
+ 		return -EOVERFLOW;
+ 
+ 	if (pkts) {
+-		tokens = div_u64(limit->nsecs, limit->rate) * limit->burst;
++		tokens = div64_u64(limit->nsecs, limit->rate) * limit->burst;
+ 	} else {
+ 		/* The token bucket size limits the number of tokens can be
+ 		 * accumulated. tokens_max specifies the bucket size.
+ 		 * tokens_max = unit * (rate + burst) / rate.
+ 		 */
+-		tokens = div_u64(limit->nsecs * (limit->rate + limit->burst),
++		tokens = div64_u64(limit->nsecs * (limit->rate + limit->burst),
+ 				 limit->rate);
+ 	}
+ 
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 53d0a4161df3f..9463c54c465af 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -1520,11 +1520,9 @@ static void sctp_close(struct sock *sk, long timeout)
+ 
+ 	/* Supposedly, no process has access to the socket, but
+ 	 * the net layers still may.
+-	 * Also, sctp_destroy_sock() needs to be called with addr_wq_lock
+-	 * held and that should be grabbed before socket lock.
+ 	 */
+-	spin_lock_bh(&net->sctp.addr_wq_lock);
+-	bh_lock_sock_nested(sk);
++	local_bh_disable();
++	bh_lock_sock(sk);
+ 
+ 	/* Hold the sock, since sk_common_release() will put sock_put()
+ 	 * and we have just a little more cleanup.
+@@ -1533,7 +1531,7 @@ static void sctp_close(struct sock *sk, long timeout)
+ 	sk_common_release(sk);
+ 
+ 	bh_unlock_sock(sk);
+-	spin_unlock_bh(&net->sctp.addr_wq_lock);
++	local_bh_enable();
+ 
+ 	sock_put(sk);
+ 
+@@ -4939,9 +4937,6 @@ static int sctp_init_sock(struct sock *sk)
+ 	sk_sockets_allocated_inc(sk);
+ 	sock_prot_inuse_add(net, sk->sk_prot, 1);
+ 
+-	/* Nothing can fail after this block, otherwise
+-	 * sctp_destroy_sock() will be called without addr_wq_lock held
+-	 */
+ 	if (net->sctp.default_auto_asconf) {
+ 		spin_lock(&sock_net(sk)->sctp.addr_wq_lock);
+ 		list_add_tail(&sp->auto_asconf_list,
+@@ -4976,7 +4971,9 @@ static void sctp_destroy_sock(struct sock *sk)
+ 
+ 	if (sp->do_auto_asconf) {
+ 		sp->do_auto_asconf = 0;
++		spin_lock_bh(&sock_net(sk)->sctp.addr_wq_lock);
+ 		list_del(&sp->auto_asconf_list);
++		spin_unlock_bh(&sock_net(sk)->sctp.addr_wq_lock);
+ 	}
+ 	sctp_endpoint_free(sp->ep);
+ 	local_bh_disable();
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index b81ca117dac7a..e4cb0ff4dcf41 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -660,6 +660,12 @@ static int xfrm4_extract_output(struct xfrm_state *x, struct sk_buff *skb)
+ {
+ 	int err;
+ 
++	if (x->outer_mode.encap == XFRM_MODE_BEET &&
++	    ip_is_fragment(ip_hdr(skb))) {
++		net_warn_ratelimited("BEET mode doesn't support inner IPv4 fragments\n");
++		return -EAFNOSUPPORT;
++	}
++
+ 	err = xfrm4_tunnel_check_size(skb);
+ 	if (err)
+ 		return err;
+@@ -705,8 +711,15 @@ out:
+ static int xfrm6_extract_output(struct xfrm_state *x, struct sk_buff *skb)
+ {
+ #if IS_ENABLED(CONFIG_IPV6)
++	unsigned int ptr = 0;
+ 	int err;
+ 
++	if (x->outer_mode.encap == XFRM_MODE_BEET &&
++	    ipv6_find_hdr(skb, &ptr, NEXTHDR_FRAGMENT, NULL, NULL) >= 0) {
++		net_warn_ratelimited("BEET mode doesn't support inner IPv6 fragments\n");
++		return -EAFNOSUPPORT;
++	}
++
+ 	err = xfrm6_tunnel_check_size(skb);
+ 	if (err)
+ 		return err;
+diff --git a/sound/soc/codecs/max98373-i2c.c b/sound/soc/codecs/max98373-i2c.c
+index 92921e34f9486..32b0c1d983650 100644
+--- a/sound/soc/codecs/max98373-i2c.c
++++ b/sound/soc/codecs/max98373-i2c.c
+@@ -440,6 +440,7 @@ static bool max98373_volatile_reg(struct device *dev, unsigned int reg)
+ 	case MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK:
+ 	case MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK:
+ 	case MAX98373_R20B6_BDE_CUR_STATE_READBACK:
++	case MAX98373_R20FF_GLOBAL_SHDN:
+ 	case MAX98373_R21FF_REV_ID:
+ 		return true;
+ 	default:
+diff --git a/sound/soc/codecs/max98373-sdw.c b/sound/soc/codecs/max98373-sdw.c
+index fa589d834f9aa..14fd2f9a0bf3a 100644
+--- a/sound/soc/codecs/max98373-sdw.c
++++ b/sound/soc/codecs/max98373-sdw.c
+@@ -214,6 +214,7 @@ static bool max98373_volatile_reg(struct device *dev, unsigned int reg)
+ 	case MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK:
+ 	case MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK:
+ 	case MAX98373_R20B6_BDE_CUR_STATE_READBACK:
++	case MAX98373_R20FF_GLOBAL_SHDN:
+ 	case MAX98373_R21FF_REV_ID:
+ 	/* SoundWire Control Port Registers */
+ 	case MAX98373_R0040_SCP_INIT_STAT_1 ... MAX98373_R0070_SCP_FRAME_CTLR:
+diff --git a/sound/soc/codecs/max98373.c b/sound/soc/codecs/max98373.c
+index 929bb1798c43f..1fd4dbbb4ecf4 100644
+--- a/sound/soc/codecs/max98373.c
++++ b/sound/soc/codecs/max98373.c
+@@ -28,11 +28,13 @@ static int max98373_dac_event(struct snd_soc_dapm_widget *w,
+ 		regmap_update_bits(max98373->regmap,
+ 			MAX98373_R20FF_GLOBAL_SHDN,
+ 			MAX98373_GLOBAL_EN_MASK, 1);
++		usleep_range(30000, 31000);
+ 		break;
+ 	case SND_SOC_DAPM_POST_PMD:
+ 		regmap_update_bits(max98373->regmap,
+ 			MAX98373_R20FF_GLOBAL_SHDN,
+ 			MAX98373_GLOBAL_EN_MASK, 0);
++		usleep_range(30000, 31000);
+ 		max98373->tdm_mode = false;
+ 		break;
+ 	default:
+diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
+index 39637ca78cdbb..9f5f217a96077 100644
+--- a/sound/soc/fsl/fsl_esai.c
++++ b/sound/soc/fsl/fsl_esai.c
+@@ -524,11 +524,13 @@ static int fsl_esai_startup(struct snd_pcm_substream *substream,
+ 				   ESAI_SAICR_SYNC, esai_priv->synchronous ?
+ 				   ESAI_SAICR_SYNC : 0);
+ 
+-		/* Set a default slot number -- 2 */
++		/* Set slots count */
+ 		regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR,
+-				   ESAI_xCCR_xDC_MASK, ESAI_xCCR_xDC(2));
++				   ESAI_xCCR_xDC_MASK,
++				   ESAI_xCCR_xDC(esai_priv->slots));
+ 		regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR,
+-				   ESAI_xCCR_xDC_MASK, ESAI_xCCR_xDC(2));
++				   ESAI_xCCR_xDC_MASK,
++				   ESAI_xCCR_xDC(esai_priv->slots));
+ 	}
+ 
+ 	return 0;
+diff --git a/tools/include/uapi/asm/errno.h b/tools/include/uapi/asm/errno.h
+index 637189ec1ab99..d30439b4b8ab4 100644
+--- a/tools/include/uapi/asm/errno.h
++++ b/tools/include/uapi/asm/errno.h
+@@ -9,8 +9,6 @@
+ #include "../../../arch/alpha/include/uapi/asm/errno.h"
+ #elif defined(__mips__)
+ #include "../../../arch/mips/include/uapi/asm/errno.h"
+-#elif defined(__ia64__)
+-#include "../../../arch/ia64/include/uapi/asm/errno.h"
+ #elif defined(__xtensa__)
+ #include "../../../arch/xtensa/include/uapi/asm/errno.h"
+ #else
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index 5f7b85fba39d0..7150e34cf2afb 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -703,18 +703,19 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
+ 			      struct xsk_ring_cons *comp,
+ 			      const struct xsk_socket_config *usr_config)
+ {
++	bool unmap, rx_setup_done = false, tx_setup_done = false;
+ 	void *rx_map = NULL, *tx_map = NULL;
+ 	struct sockaddr_xdp sxdp = {};
+ 	struct xdp_mmap_offsets off;
+ 	struct xsk_socket *xsk;
+ 	struct xsk_ctx *ctx;
+ 	int err, ifindex;
+-	bool unmap = umem->fill_save != fill;
+-	bool rx_setup_done = false, tx_setup_done = false;
+ 
+ 	if (!umem || !xsk_ptr || !(rx || tx))
+ 		return -EFAULT;
+ 
++	unmap = umem->fill_save != fill;
++
+ 	xsk = calloc(1, sizeof(*xsk));
+ 	if (!xsk)
+ 		return -ENOMEM;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-04-28 12:03 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-04-28 12:03 UTC (permalink / raw
  To: gentoo-commits

commit:     cd69f153f5510bc0fb2d5ce2698e8fdc5784e61c
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 28 12:03:18 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Apr 28 12:03:29 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cd69f153

Linux patch 5.10.33

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1032_linux-5.10.33.patch | 1964 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1968 insertions(+)

diff --git a/0000_README b/0000_README
index 68ff733..acf61dd 100644
--- a/0000_README
+++ b/0000_README
@@ -171,6 +171,10 @@ Patch:  1031_linux-5.10.32.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.32
 
+Patch:  1032_linux-5.10.33.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.33
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1032_linux-5.10.33.patch b/1032_linux-5.10.33.patch
new file mode 100644
index 0000000..ace1cc0
--- /dev/null
+++ b/1032_linux-5.10.33.patch
@@ -0,0 +1,1964 @@
+diff --git a/Makefile b/Makefile
+index cad90171b4b9b..fd5c8b5c013bf 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 32
++SUBLEVEL = 33
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/omap3.dtsi b/arch/arm/boot/dts/omap3.dtsi
+index 9dcae1f2bc99f..c5b9da0d7e6ce 100644
+--- a/arch/arm/boot/dts/omap3.dtsi
++++ b/arch/arm/boot/dts/omap3.dtsi
+@@ -24,6 +24,9 @@
+ 		i2c0 = &i2c1;
+ 		i2c1 = &i2c2;
+ 		i2c2 = &i2c3;
++		mmc0 = &mmc1;
++		mmc1 = &mmc2;
++		mmc2 = &mmc3;
+ 		serial0 = &uart1;
+ 		serial1 = &uart2;
+ 		serial2 = &uart3;
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-lts.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-lts.dts
+index a1f621b388fe7..358df6d926aff 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-lts.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-lts.dts
+@@ -10,5 +10,5 @@
+ };
+ 
+ &mmc0 {
+-	cd-gpios = <&pio 5 6 GPIO_ACTIVE_LOW>; /* PF6 push-push switch */
++	broken-cd;		/* card detect is broken on *some* boards */
+ };
+diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
+index f11a1a1f70261..798c3e78b84bb 100644
+--- a/arch/arm64/kernel/probes/kprobes.c
++++ b/arch/arm64/kernel/probes/kprobes.c
+@@ -286,10 +286,12 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
+ 		if (!instruction_pointer(regs))
+ 			BUG();
+ 
+-		if (kcb->kprobe_status == KPROBE_REENTER)
++		if (kcb->kprobe_status == KPROBE_REENTER) {
+ 			restore_previous_kprobe(kcb);
+-		else
++		} else {
++			kprobes_restore_local_irqflag(kcb, regs);
+ 			reset_current_kprobe();
++		}
+ 
+ 		break;
+ 	case KPROBE_HIT_ACTIVE:
+diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
+index 268fad5f51cf4..7bf0a617e94c3 100644
+--- a/arch/csky/Kconfig
++++ b/arch/csky/Kconfig
+@@ -292,7 +292,7 @@ config FORCE_MAX_ZONEORDER
+ 	int "Maximum zone order"
+ 	default "11"
+ 
+-config RAM_BASE
++config DRAM_BASE
+ 	hex "DRAM start addr (the same with memory-section in dts)"
+ 	default 0x0
+ 
+diff --git a/arch/csky/include/asm/page.h b/arch/csky/include/asm/page.h
+index 9b98bf31d57ce..16878240ef9ac 100644
+--- a/arch/csky/include/asm/page.h
++++ b/arch/csky/include/asm/page.h
+@@ -28,7 +28,7 @@
+ #define SSEG_SIZE	0x20000000
+ #define LOWMEM_LIMIT	(SSEG_SIZE * 2)
+ 
+-#define PHYS_OFFSET_OFFSET (CONFIG_RAM_BASE & (SSEG_SIZE - 1))
++#define PHYS_OFFSET_OFFSET (CONFIG_DRAM_BASE & (SSEG_SIZE - 1))
+ 
+ #ifndef __ASSEMBLY__
+ 
+diff --git a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c
+index dbe829fc52980..4d08134190134 100644
+--- a/arch/ia64/mm/discontig.c
++++ b/arch/ia64/mm/discontig.c
+@@ -94,7 +94,7 @@ static int __init build_node_maps(unsigned long start, unsigned long len,
+  * acpi_boot_init() (which builds the node_to_cpu_mask array) hasn't been
+  * called yet.  Note that node 0 will also count all non-existent cpus.
+  */
+-static int __meminit early_nr_cpus_node(int node)
++static int early_nr_cpus_node(int node)
+ {
+ 	int cpu, n = 0;
+ 
+@@ -109,7 +109,7 @@ static int __meminit early_nr_cpus_node(int node)
+  * compute_pernodesize - compute size of pernode data
+  * @node: the node id.
+  */
+-static unsigned long __meminit compute_pernodesize(int node)
++static unsigned long compute_pernodesize(int node)
+ {
+ 	unsigned long pernodesize = 0, cpus;
+ 
+@@ -366,7 +366,7 @@ static void __init reserve_pernode_space(void)
+ 	}
+ }
+ 
+-static void __meminit scatter_node_data(void)
++static void scatter_node_data(void)
+ {
+ 	pg_data_t **dst;
+ 	int node;
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 71203324ff42b..81c458e996d9b 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -994,6 +994,7 @@ ENDPROC(ext_int_handler)
+  * Load idle PSW.
+  */
+ ENTRY(psw_idle)
++	stg	%r14,(__SF_GPRS+8*8)(%r15)
+ 	stg	%r3,__SF_EMPTY(%r15)
+ 	larl	%r1,.Lpsw_idle_exit
+ 	stg	%r1,__SF_EMPTY+8(%r15)
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index e7dc13fe5e29f..0b9975200ae35 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4387,7 +4387,7 @@ static const struct x86_cpu_desc isolation_ucodes[] = {
+ 	INTEL_CPU_DESC(INTEL_FAM6_BROADWELL_D,		 3, 0x07000009),
+ 	INTEL_CPU_DESC(INTEL_FAM6_BROADWELL_D,		 4, 0x0f000009),
+ 	INTEL_CPU_DESC(INTEL_FAM6_BROADWELL_D,		 5, 0x0e000002),
+-	INTEL_CPU_DESC(INTEL_FAM6_BROADWELL_X,		 2, 0x0b000014),
++	INTEL_CPU_DESC(INTEL_FAM6_BROADWELL_X,		 1, 0x0b000014),
+ 	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X,		 3, 0x00000021),
+ 	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X,		 4, 0x00000000),
+ 	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X,		 5, 0x00000000),
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 7bdb1821215db..3112186a4f4b2 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -1159,7 +1159,6 @@ enum {
+ 	SNBEP_PCI_QPI_PORT0_FILTER,
+ 	SNBEP_PCI_QPI_PORT1_FILTER,
+ 	BDX_PCI_QPI_PORT2_FILTER,
+-	HSWEP_PCI_PCU_3,
+ };
+ 
+ static int snbep_qpi_hw_config(struct intel_uncore_box *box, struct perf_event *event)
+@@ -2816,22 +2815,33 @@ static struct intel_uncore_type *hswep_msr_uncores[] = {
+ 	NULL,
+ };
+ 
+-void hswep_uncore_cpu_init(void)
++#define HSWEP_PCU_DID			0x2fc0
++#define HSWEP_PCU_CAPID4_OFFET		0x94
++#define hswep_get_chop(_cap)		(((_cap) >> 6) & 0x3)
++
++static bool hswep_has_limit_sbox(unsigned int device)
+ {
+-	int pkg = boot_cpu_data.logical_proc_id;
++	struct pci_dev *dev = pci_get_device(PCI_VENDOR_ID_INTEL, device, NULL);
++	u32 capid4;
++
++	if (!dev)
++		return false;
++
++	pci_read_config_dword(dev, HSWEP_PCU_CAPID4_OFFET, &capid4);
++	if (!hswep_get_chop(capid4))
++		return true;
+ 
++	return false;
++}
++
++void hswep_uncore_cpu_init(void)
++{
+ 	if (hswep_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
+ 		hswep_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
+ 
+ 	/* Detect 6-8 core systems with only two SBOXes */
+-	if (uncore_extra_pci_dev[pkg].dev[HSWEP_PCI_PCU_3]) {
+-		u32 capid4;
+-
+-		pci_read_config_dword(uncore_extra_pci_dev[pkg].dev[HSWEP_PCI_PCU_3],
+-				      0x94, &capid4);
+-		if (((capid4 >> 6) & 0x3) == 0)
+-			hswep_uncore_sbox.num_boxes = 2;
+-	}
++	if (hswep_has_limit_sbox(HSWEP_PCU_DID))
++		hswep_uncore_sbox.num_boxes = 2;
+ 
+ 	uncore_msr_uncores = hswep_msr_uncores;
+ }
+@@ -3094,11 +3104,6 @@ static const struct pci_device_id hswep_uncore_pci_ids[] = {
+ 		.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
+ 						   SNBEP_PCI_QPI_PORT1_FILTER),
+ 	},
+-	{ /* PCU.3 (for Capability registers) */
+-		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x2fc0),
+-		.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
+-						   HSWEP_PCI_PCU_3),
+-	},
+ 	{ /* end: all zeroes */ }
+ };
+ 
+@@ -3190,27 +3195,18 @@ static struct event_constraint bdx_uncore_pcu_constraints[] = {
+ 	EVENT_CONSTRAINT_END
+ };
+ 
++#define BDX_PCU_DID			0x6fc0
++
+ void bdx_uncore_cpu_init(void)
+ {
+-	int pkg = topology_phys_to_logical_pkg(boot_cpu_data.phys_proc_id);
+-
+ 	if (bdx_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
+ 		bdx_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
+ 	uncore_msr_uncores = bdx_msr_uncores;
+ 
+-	/* BDX-DE doesn't have SBOX */
+-	if (boot_cpu_data.x86_model == 86) {
+-		uncore_msr_uncores[BDX_MSR_UNCORE_SBOX] = NULL;
+ 	/* Detect systems with no SBOXes */
+-	} else if (uncore_extra_pci_dev[pkg].dev[HSWEP_PCI_PCU_3]) {
+-		struct pci_dev *pdev;
+-		u32 capid4;
+-
+-		pdev = uncore_extra_pci_dev[pkg].dev[HSWEP_PCI_PCU_3];
+-		pci_read_config_dword(pdev, 0x94, &capid4);
+-		if (((capid4 >> 6) & 0x3) == 0)
+-			bdx_msr_uncores[BDX_MSR_UNCORE_SBOX] = NULL;
+-	}
++	if ((boot_cpu_data.x86_model == 86) || hswep_has_limit_sbox(BDX_PCU_DID))
++		uncore_msr_uncores[BDX_MSR_UNCORE_SBOX] = NULL;
++
+ 	hswep_uncore_pcu.constraints = bdx_uncore_pcu_constraints;
+ }
+ 
+@@ -3431,11 +3427,6 @@ static const struct pci_device_id bdx_uncore_pci_ids[] = {
+ 		.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
+ 						   BDX_PCI_QPI_PORT2_FILTER),
+ 	},
+-	{ /* PCU.3 (for Capability registers) */
+-		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x6fc0),
+-		.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
+-						   HSWEP_PCI_PCU_3),
+-	},
+ 	{ /* end: all zeroes */ }
+ };
+ 
+diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
+index a8f3af257e26c..b1deacbeb2669 100644
+--- a/arch/x86/kernel/crash.c
++++ b/arch/x86/kernel/crash.c
+@@ -337,7 +337,7 @@ int crash_setup_memmap_entries(struct kimage *image, struct boot_params *params)
+ 	struct crash_memmap_data cmd;
+ 	struct crash_mem *cmem;
+ 
+-	cmem = vzalloc(sizeof(struct crash_mem));
++	cmem = vzalloc(struct_size(cmem, ranges, 1));
+ 	if (!cmem)
+ 		return -ENOMEM;
+ 
+diff --git a/block/ioctl.c b/block/ioctl.c
+index 3be4d0e2a96c3..ed240e170e148 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -98,6 +98,8 @@ static int blkdev_reread_part(struct block_device *bdev, fmode_t mode)
+ 		return -EINVAL;
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EACCES;
++	if (bdev->bd_part_count)
++		return -EBUSY;
+ 
+ 	/*
+ 	 * Reopen the device to revalidate the driver state and force a
+diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
+index 71827d9b0aa19..b7260749e8eee 100644
+--- a/drivers/dma/tegra20-apb-dma.c
++++ b/drivers/dma/tegra20-apb-dma.c
+@@ -723,7 +723,7 @@ static void tegra_dma_issue_pending(struct dma_chan *dc)
+ 		goto end;
+ 	}
+ 	if (!tdc->busy) {
+-		err = pm_runtime_get_sync(tdc->tdma->dev);
++		err = pm_runtime_resume_and_get(tdc->tdma->dev);
+ 		if (err < 0) {
+ 			dev_err(tdc2dev(tdc), "Failed to enable DMA\n");
+ 			goto end;
+@@ -818,7 +818,7 @@ static void tegra_dma_synchronize(struct dma_chan *dc)
+ 	struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);
+ 	int err;
+ 
+-	err = pm_runtime_get_sync(tdc->tdma->dev);
++	err = pm_runtime_resume_and_get(tdc->tdma->dev);
+ 	if (err < 0) {
+ 		dev_err(tdc2dev(tdc), "Failed to synchronize DMA: %d\n", err);
+ 		return;
+diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
+index 55df63dead8d3..70b29bd079c9f 100644
+--- a/drivers/dma/xilinx/xilinx_dpdma.c
++++ b/drivers/dma/xilinx/xilinx_dpdma.c
+@@ -839,6 +839,7 @@ static void xilinx_dpdma_chan_queue_transfer(struct xilinx_dpdma_chan *chan)
+ 	struct xilinx_dpdma_tx_desc *desc;
+ 	struct virt_dma_desc *vdesc;
+ 	u32 reg, channels;
++	bool first_frame;
+ 
+ 	lockdep_assert_held(&chan->lock);
+ 
+@@ -852,14 +853,6 @@ static void xilinx_dpdma_chan_queue_transfer(struct xilinx_dpdma_chan *chan)
+ 		chan->running = true;
+ 	}
+ 
+-	if (chan->video_group)
+-		channels = xilinx_dpdma_chan_video_group_ready(chan);
+-	else
+-		channels = BIT(chan->id);
+-
+-	if (!channels)
+-		return;
+-
+ 	vdesc = vchan_next_desc(&chan->vchan);
+ 	if (!vdesc)
+ 		return;
+@@ -884,13 +877,26 @@ static void xilinx_dpdma_chan_queue_transfer(struct xilinx_dpdma_chan *chan)
+ 			    FIELD_PREP(XILINX_DPDMA_CH_DESC_START_ADDRE_MASK,
+ 				       upper_32_bits(sw_desc->dma_addr)));
+ 
+-	if (chan->first_frame)
++	first_frame = chan->first_frame;
++	chan->first_frame = false;
++
++	if (chan->video_group) {
++		channels = xilinx_dpdma_chan_video_group_ready(chan);
++		/*
++		 * Trigger the transfer only when all channels in the group are
++		 * ready.
++		 */
++		if (!channels)
++			return;
++	} else {
++		channels = BIT(chan->id);
++	}
++
++	if (first_frame)
+ 		reg = XILINX_DPDMA_GBL_TRIG_MASK(channels);
+ 	else
+ 		reg = XILINX_DPDMA_GBL_RETRIG_MASK(channels);
+ 
+-	chan->first_frame = false;
+-
+ 	dpdma_write(xdev->reg, XILINX_DPDMA_GBL, reg);
+ }
+ 
+@@ -1042,13 +1048,14 @@ static int xilinx_dpdma_chan_stop(struct xilinx_dpdma_chan *chan)
+  */
+ static void xilinx_dpdma_chan_done_irq(struct xilinx_dpdma_chan *chan)
+ {
+-	struct xilinx_dpdma_tx_desc *active = chan->desc.active;
++	struct xilinx_dpdma_tx_desc *active;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&chan->lock, flags);
+ 
+ 	xilinx_dpdma_debugfs_desc_done_irq(chan);
+ 
++	active = chan->desc.active;
+ 	if (active)
+ 		vchan_cyclic_callback(&active->vdesc);
+ 	else
+diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
+index f7ceb2b11afc5..a7e8ed5191a8e 100644
+--- a/drivers/gpio/gpio-omap.c
++++ b/drivers/gpio/gpio-omap.c
+@@ -29,6 +29,7 @@
+ #define OMAP4_GPIO_DEBOUNCINGTIME_MASK 0xFF
+ 
+ struct gpio_regs {
++	u32 sysconfig;
+ 	u32 irqenable1;
+ 	u32 irqenable2;
+ 	u32 wake_en;
+@@ -1072,6 +1073,7 @@ static void omap_gpio_init_context(struct gpio_bank *p)
+ 	const struct omap_gpio_reg_offs *regs = p->regs;
+ 	void __iomem *base = p->base;
+ 
++	p->context.sysconfig	= readl_relaxed(base + regs->sysconfig);
+ 	p->context.ctrl		= readl_relaxed(base + regs->ctrl);
+ 	p->context.oe		= readl_relaxed(base + regs->direction);
+ 	p->context.wake_en	= readl_relaxed(base + regs->wkup_en);
+@@ -1091,6 +1093,7 @@ static void omap_gpio_restore_context(struct gpio_bank *bank)
+ 	const struct omap_gpio_reg_offs *regs = bank->regs;
+ 	void __iomem *base = bank->base;
+ 
++	writel_relaxed(bank->context.sysconfig, base + regs->sysconfig);
+ 	writel_relaxed(bank->context.wake_en, base + regs->wkup_en);
+ 	writel_relaxed(bank->context.ctrl, base + regs->ctrl);
+ 	writel_relaxed(bank->context.leveldetect0, base + regs->leveldetect0);
+@@ -1118,6 +1121,10 @@ static void omap_gpio_idle(struct gpio_bank *bank, bool may_lose_context)
+ 
+ 	bank->saved_datain = readl_relaxed(base + bank->regs->datain);
+ 
++	/* Save syconfig, it's runtime value can be different from init value */
++	if (bank->loses_context)
++		bank->context.sysconfig = readl_relaxed(base + bank->regs->sysconfig);
++
+ 	if (!bank->enabled_non_wakeup_gpios)
+ 		goto update_gpio_context_count;
+ 
+@@ -1282,6 +1289,7 @@ out_unlock:
+ 
+ static const struct omap_gpio_reg_offs omap2_gpio_regs = {
+ 	.revision =		OMAP24XX_GPIO_REVISION,
++	.sysconfig =		OMAP24XX_GPIO_SYSCONFIG,
+ 	.direction =		OMAP24XX_GPIO_OE,
+ 	.datain =		OMAP24XX_GPIO_DATAIN,
+ 	.dataout =		OMAP24XX_GPIO_DATAOUT,
+@@ -1305,6 +1313,7 @@ static const struct omap_gpio_reg_offs omap2_gpio_regs = {
+ 
+ static const struct omap_gpio_reg_offs omap4_gpio_regs = {
+ 	.revision =		OMAP4_GPIO_REVISION,
++	.sysconfig =		OMAP4_GPIO_SYSCONFIG,
+ 	.direction =		OMAP4_GPIO_OE,
+ 	.datain =		OMAP4_GPIO_DATAIN,
+ 	.dataout =		OMAP4_GPIO_DATAOUT,
+diff --git a/drivers/hid/hid-alps.c b/drivers/hid/hid-alps.c
+index 3feaece13ade0..6b665931147df 100644
+--- a/drivers/hid/hid-alps.c
++++ b/drivers/hid/hid-alps.c
+@@ -761,6 +761,7 @@ static int alps_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ 
+ 		if (input_register_device(data->input2)) {
+ 			input_free_device(input2);
++			ret = -ENOENT;
+ 			goto exit;
+ 		}
+ 	}
+diff --git a/drivers/hid/hid-cp2112.c b/drivers/hid/hid-cp2112.c
+index 21e15627a4614..477baa30889cc 100644
+--- a/drivers/hid/hid-cp2112.c
++++ b/drivers/hid/hid-cp2112.c
+@@ -161,6 +161,7 @@ struct cp2112_device {
+ 	atomic_t read_avail;
+ 	atomic_t xfer_avail;
+ 	struct gpio_chip gc;
++	struct irq_chip irq;
+ 	u8 *in_out_buffer;
+ 	struct mutex lock;
+ 
+@@ -1175,16 +1176,6 @@ static int cp2112_gpio_irq_type(struct irq_data *d, unsigned int type)
+ 	return 0;
+ }
+ 
+-static struct irq_chip cp2112_gpio_irqchip = {
+-	.name = "cp2112-gpio",
+-	.irq_startup = cp2112_gpio_irq_startup,
+-	.irq_shutdown = cp2112_gpio_irq_shutdown,
+-	.irq_ack = cp2112_gpio_irq_ack,
+-	.irq_mask = cp2112_gpio_irq_mask,
+-	.irq_unmask = cp2112_gpio_irq_unmask,
+-	.irq_set_type = cp2112_gpio_irq_type,
+-};
+-
+ static int __maybe_unused cp2112_allocate_irq(struct cp2112_device *dev,
+ 					      int pin)
+ {
+@@ -1339,8 +1330,17 @@ static int cp2112_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	dev->gc.can_sleep		= 1;
+ 	dev->gc.parent			= &hdev->dev;
+ 
++	dev->irq.name = "cp2112-gpio";
++	dev->irq.irq_startup = cp2112_gpio_irq_startup;
++	dev->irq.irq_shutdown = cp2112_gpio_irq_shutdown;
++	dev->irq.irq_ack = cp2112_gpio_irq_ack;
++	dev->irq.irq_mask = cp2112_gpio_irq_mask;
++	dev->irq.irq_unmask = cp2112_gpio_irq_unmask;
++	dev->irq.irq_set_type = cp2112_gpio_irq_type;
++	dev->irq.flags = IRQCHIP_MASK_ON_SUSPEND;
++
+ 	girq = &dev->gc.irq;
+-	girq->chip = &cp2112_gpio_irqchip;
++	girq->chip = &dev->irq;
+ 	/* The event comes from the outside so no parent handler */
+ 	girq->parent_handler = NULL;
+ 	girq->num_parents = 0;
+diff --git a/drivers/hid/hid-google-hammer.c b/drivers/hid/hid-google-hammer.c
+index 85a054f1ce389..2a176f77b32e9 100644
+--- a/drivers/hid/hid-google-hammer.c
++++ b/drivers/hid/hid-google-hammer.c
+@@ -526,6 +526,8 @@ static void hammer_remove(struct hid_device *hdev)
+ }
+ 
+ static const struct hid_device_id hammer_devices[] = {
++	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
++		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_DON) },
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ 		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) },
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 06813f297dcca..b93ce0d475e09 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -486,6 +486,7 @@
+ #define USB_DEVICE_ID_GOOGLE_MASTERBALL	0x503c
+ #define USB_DEVICE_ID_GOOGLE_MAGNEMITE	0x503d
+ #define USB_DEVICE_ID_GOOGLE_MOONBALL	0x5044
++#define USB_DEVICE_ID_GOOGLE_DON	0x5050
+ 
+ #define USB_VENDOR_ID_GOTOP		0x08f2
+ #define USB_DEVICE_ID_SUPER_Q2		0x007f
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 6cda5935fc09c..2d70dc4bea654 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2533,7 +2533,7 @@ static void wacom_wac_finger_slot(struct wacom_wac *wacom_wac,
+ 	    !wacom_wac->shared->is_touch_on) {
+ 		if (!wacom_wac->shared->touch_down)
+ 			return;
+-		prox = 0;
++		prox = false;
+ 	}
+ 
+ 	wacom_wac->hid_data.num_received++;
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h b/drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h
+index b248966837b4c..7aad40b2aa736 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h
++++ b/drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h
+@@ -412,7 +412,7 @@
+ 	   | CN6XXX_INTR_M0UNWI_ERR             \
+ 	   | CN6XXX_INTR_M1UPB0_ERR             \
+ 	   | CN6XXX_INTR_M1UPWI_ERR             \
+-	   | CN6XXX_INTR_M1UPB0_ERR             \
++	   | CN6XXX_INTR_M1UNB0_ERR             \
+ 	   | CN6XXX_INTR_M1UNWI_ERR             \
+ 	   | CN6XXX_INTR_INSTR_DB_OF_ERR        \
+ 	   | CN6XXX_INTR_SLIST_DB_OF_ERR        \
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index abd37f26af682..11864ac101b8d 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -890,6 +890,9 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	__be16 sport;
+ 	int err;
+ 
++	if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
++		return -EINVAL;
++
+ 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ 	rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+ 			      geneve->cfg.info.key.tp_dst, sport);
+@@ -984,6 +987,9 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	__be16 sport;
+ 	int err;
+ 
++	if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr)))
++		return -EINVAL;
++
+ 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ 	dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
+ 				geneve->cfg.info.key.tp_dst, sport);
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index d18642a8144cf..4909405803d57 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -3104,7 +3104,7 @@ static void hso_free_interface(struct usb_interface *interface)
+ 			cancel_work_sync(&serial_table[i]->async_put_intf);
+ 			cancel_work_sync(&serial_table[i]->async_get_intf);
+ 			hso_serial_tty_unregister(serial);
+-			kref_put(&serial_table[i]->ref, hso_serial_ref_free);
++			kref_put(&serial->parent->ref, hso_serial_ref_free);
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
+index 6f10e0998f1ce..94d19158efc18 100644
+--- a/drivers/net/xen-netback/xenbus.c
++++ b/drivers/net/xen-netback/xenbus.c
+@@ -824,11 +824,15 @@ static void connect(struct backend_info *be)
+ 	xenvif_carrier_on(be->vif);
+ 
+ 	unregister_hotplug_status_watch(be);
+-	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, NULL,
+-				   hotplug_status_changed,
+-				   "%s/%s", dev->nodename, "hotplug-status");
+-	if (!err)
++	if (xenbus_exists(XBT_NIL, dev->nodename, "hotplug-status")) {
++		err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
++					   NULL, hotplug_status_changed,
++					   "%s/%s", dev->nodename,
++					   "hotplug-status");
++		if (err)
++			goto err;
+ 		be->have_hotplug_status_watch = 1;
++	}
+ 
+ 	netif_tx_wake_all_queues(be->vif->dev);
+ 
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 9fc4433fece4f..20b477cd5a30a 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -1604,8 +1604,8 @@ static int pinctrl_pins_show(struct seq_file *s, void *what)
+ 	unsigned i, pin;
+ #ifdef CONFIG_GPIOLIB
+ 	struct pinctrl_gpio_range *range;
+-	unsigned int gpio_num;
+ 	struct gpio_chip *chip;
++	int gpio_num;
+ #endif
+ 
+ 	seq_printf(s, "registered pins: %d\n", pctldev->desc->npins);
+@@ -1625,7 +1625,7 @@ static int pinctrl_pins_show(struct seq_file *s, void *what)
+ 		seq_printf(s, "pin %d (%s) ", pin, desc->name);
+ 
+ #ifdef CONFIG_GPIOLIB
+-		gpio_num = 0;
++		gpio_num = -1;
+ 		list_for_each_entry(range, &pctldev->gpio_ranges, node) {
+ 			if ((pin >= range->pin_base) &&
+ 			    (pin < (range->pin_base + range->npins))) {
+@@ -1633,10 +1633,12 @@ static int pinctrl_pins_show(struct seq_file *s, void *what)
+ 				break;
+ 			}
+ 		}
+-		chip = gpio_to_chip(gpio_num);
+-		if (chip && chip->gpiodev && chip->gpiodev->base)
+-			seq_printf(s, "%u:%s ", gpio_num -
+-				chip->gpiodev->base, chip->label);
++		if (gpio_num >= 0)
++			chip = gpio_to_chip(gpio_num);
++		else
++			chip = NULL;
++		if (chip)
++			seq_printf(s, "%u:%s ", gpio_num - chip->gpiodev->base, chip->label);
+ 		else
+ 			seq_puts(s, "0:? ");
+ #endif
+diff --git a/drivers/pinctrl/intel/pinctrl-lewisburg.c b/drivers/pinctrl/intel/pinctrl-lewisburg.c
+index 7fdf4257df1ed..ad4b446d588e6 100644
+--- a/drivers/pinctrl/intel/pinctrl-lewisburg.c
++++ b/drivers/pinctrl/intel/pinctrl-lewisburg.c
+@@ -299,9 +299,9 @@ static const struct pinctrl_pin_desc lbg_pins[] = {
+ static const struct intel_community lbg_communities[] = {
+ 	LBG_COMMUNITY(0, 0, 71),
+ 	LBG_COMMUNITY(1, 72, 132),
+-	LBG_COMMUNITY(3, 133, 144),
+-	LBG_COMMUNITY(4, 145, 180),
+-	LBG_COMMUNITY(5, 181, 246),
++	LBG_COMMUNITY(3, 133, 143),
++	LBG_COMMUNITY(4, 144, 178),
++	LBG_COMMUNITY(5, 179, 246),
+ };
+ 
+ static const struct intel_pinctrl_soc_data lbg_soc_data = {
+diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
+index be76fddbf524b..0dbca679bd32f 100644
+--- a/drivers/soc/qcom/qcom-geni-se.c
++++ b/drivers/soc/qcom/qcom-geni-se.c
+@@ -741,6 +741,9 @@ int geni_icc_get(struct geni_se *se, const char *icc_ddr)
+ 	int i, err;
+ 	const char *icc_names[] = {"qup-core", "qup-config", icc_ddr};
+ 
++	if (has_acpi_companion(se->dev))
++		return 0;
++
+ 	for (i = 0; i < ARRAY_SIZE(se->icc_paths); i++) {
+ 		if (!icc_names[i])
+ 			continue;
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index e79359326411a..bc035ba6e0105 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1637,12 +1637,13 @@ static int acm_resume(struct usb_interface *intf)
+ 	struct urb *urb;
+ 	int rv = 0;
+ 
+-	acm_unpoison_urbs(acm);
+ 	spin_lock_irq(&acm->write_lock);
+ 
+ 	if (--acm->susp_count)
+ 		goto out;
+ 
++	acm_unpoison_urbs(acm);
++
+ 	if (tty_port_initialized(&acm->port)) {
+ 		rv = usb_submit_urb(acm->ctrlurb, GFP_ATOMIC);
+ 
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index d300f799efcd1..aa656f57bf5b7 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -273,8 +273,10 @@ done:
+ 	mr->log_size = log_entity_size;
+ 	mr->nsg = nsg;
+ 	mr->nent = dma_map_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
+-	if (!mr->nent)
++	if (!mr->nent) {
++		err = -ENOMEM;
+ 		goto err_map;
++	}
+ 
+ 	err = create_direct_mr(mvdev, mr);
+ 	if (err)
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index fc5707ada024e..84e5949bc8617 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -749,9 +749,11 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev,
+ 	const struct vdpa_config_ops *ops = vdpa->config;
+ 	int r = 0;
+ 
++	mutex_lock(&dev->mutex);
++
+ 	r = vhost_dev_check_owner(dev);
+ 	if (r)
+-		return r;
++		goto unlock;
+ 
+ 	switch (msg->type) {
+ 	case VHOST_IOTLB_UPDATE:
+@@ -772,6 +774,8 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev,
+ 		r = -EINVAL;
+ 		break;
+ 	}
++unlock:
++	mutex_unlock(&dev->mutex);
+ 
+ 	return r;
+ }
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index b416bba3a62b5..8ad819132dde3 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1259,6 +1259,11 @@ static inline bool bpf_allow_ptr_leaks(void)
+ 	return perfmon_capable();
+ }
+ 
++static inline bool bpf_allow_uninit_stack(void)
++{
++	return perfmon_capable();
++}
++
+ static inline bool bpf_allow_ptr_to_map_access(void)
+ {
+ 	return perfmon_capable();
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index e83ef6f6bf43a..85bac3191e127 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -187,7 +187,7 @@ struct bpf_func_state {
+ 	 * 0 = main function, 1 = first callee.
+ 	 */
+ 	u32 frameno;
+-	/* subprog number == index within subprog_stack_depth
++	/* subprog number == index within subprog_info
+ 	 * zero == main subprog
+ 	 */
+ 	u32 subprogno;
+@@ -390,6 +390,7 @@ struct bpf_verifier_env {
+ 	u32 used_map_cnt;		/* number of used maps */
+ 	u32 id_gen;			/* used to generate unique reg IDs */
+ 	bool allow_ptr_leaks;
++	bool allow_uninit_stack;
+ 	bool allow_ptr_to_map_access;
+ 	bool bpf_capable;
+ 	bool bypass_spec_v1;
+diff --git a/include/linux/platform_data/gpio-omap.h b/include/linux/platform_data/gpio-omap.h
+index 8b30b14b47d3f..f377817ce75c1 100644
+--- a/include/linux/platform_data/gpio-omap.h
++++ b/include/linux/platform_data/gpio-omap.h
+@@ -85,6 +85,7 @@
+  * omap2+ specific GPIO registers
+  */
+ #define OMAP24XX_GPIO_REVISION		0x0000
++#define OMAP24XX_GPIO_SYSCONFIG		0x0010
+ #define OMAP24XX_GPIO_IRQSTATUS1	0x0018
+ #define OMAP24XX_GPIO_IRQSTATUS2	0x0028
+ #define OMAP24XX_GPIO_IRQENABLE2	0x002c
+@@ -108,6 +109,7 @@
+ #define OMAP24XX_GPIO_SETDATAOUT	0x0094
+ 
+ #define OMAP4_GPIO_REVISION		0x0000
++#define OMAP4_GPIO_SYSCONFIG		0x0010
+ #define OMAP4_GPIO_EOI			0x0020
+ #define OMAP4_GPIO_IRQSTATUSRAW0	0x0024
+ #define OMAP4_GPIO_IRQSTATUSRAW1	0x0028
+@@ -148,6 +150,7 @@
+ #ifndef __ASSEMBLER__
+ struct omap_gpio_reg_offs {
+ 	u16 revision;
++	u16 sysconfig;
+ 	u16 direction;
+ 	u16 datain;
+ 	u16 dataout;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 3370f0d476e97..b9180509917e3 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2268,12 +2268,14 @@ static void save_register_state(struct bpf_func_state *state,
+ 		state->stack[spi].slot_type[i] = STACK_SPILL;
+ }
+ 
+-/* check_stack_read/write functions track spill/fill of registers,
++/* check_stack_{read,write}_fixed_off functions track spill/fill of registers,
+  * stack boundary and alignment are checked in check_mem_access()
+  */
+-static int check_stack_write(struct bpf_verifier_env *env,
+-			     struct bpf_func_state *state, /* func where register points to */
+-			     int off, int size, int value_regno, int insn_idx)
++static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
++				       /* stack frame we're writing to */
++				       struct bpf_func_state *state,
++				       int off, int size, int value_regno,
++				       int insn_idx)
+ {
+ 	struct bpf_func_state *cur; /* state of the current function */
+ 	int i, slot = -off - 1, spi = slot / BPF_REG_SIZE, err;
+@@ -2399,9 +2401,175 @@ static int check_stack_write(struct bpf_verifier_env *env,
+ 	return 0;
+ }
+ 
+-static int check_stack_read(struct bpf_verifier_env *env,
+-			    struct bpf_func_state *reg_state /* func where register points to */,
+-			    int off, int size, int value_regno)
++/* Write the stack: 'stack[ptr_regno + off] = value_regno'. 'ptr_regno' is
++ * known to contain a variable offset.
++ * This function checks whether the write is permitted and conservatively
++ * tracks the effects of the write, considering that each stack slot in the
++ * dynamic range is potentially written to.
++ *
++ * 'off' includes 'regno->off'.
++ * 'value_regno' can be -1, meaning that an unknown value is being written to
++ * the stack.
++ *
++ * Spilled pointers in range are not marked as written because we don't know
++ * what's going to be actually written. This means that read propagation for
++ * future reads cannot be terminated by this write.
++ *
++ * For privileged programs, uninitialized stack slots are considered
++ * initialized by this write (even though we don't know exactly what offsets
++ * are going to be written to). The idea is that we don't want the verifier to
++ * reject future reads that access slots written to through variable offsets.
++ */
++static int check_stack_write_var_off(struct bpf_verifier_env *env,
++				     /* func where register points to */
++				     struct bpf_func_state *state,
++				     int ptr_regno, int off, int size,
++				     int value_regno, int insn_idx)
++{
++	struct bpf_func_state *cur; /* state of the current function */
++	int min_off, max_off;
++	int i, err;
++	struct bpf_reg_state *ptr_reg = NULL, *value_reg = NULL;
++	bool writing_zero = false;
++	/* set if the fact that we're writing a zero is used to let any
++	 * stack slots remain STACK_ZERO
++	 */
++	bool zero_used = false;
++
++	cur = env->cur_state->frame[env->cur_state->curframe];
++	ptr_reg = &cur->regs[ptr_regno];
++	min_off = ptr_reg->smin_value + off;
++	max_off = ptr_reg->smax_value + off + size;
++	if (value_regno >= 0)
++		value_reg = &cur->regs[value_regno];
++	if (value_reg && register_is_null(value_reg))
++		writing_zero = true;
++
++	err = realloc_func_state(state, round_up(-min_off, BPF_REG_SIZE),
++				 state->acquired_refs, true);
++	if (err)
++		return err;
++
++
++	/* Variable offset writes destroy any spilled pointers in range. */
++	for (i = min_off; i < max_off; i++) {
++		u8 new_type, *stype;
++		int slot, spi;
++
++		slot = -i - 1;
++		spi = slot / BPF_REG_SIZE;
++		stype = &state->stack[spi].slot_type[slot % BPF_REG_SIZE];
++
++		if (!env->allow_ptr_leaks
++				&& *stype != NOT_INIT
++				&& *stype != SCALAR_VALUE) {
++			/* Reject the write if there's are spilled pointers in
++			 * range. If we didn't reject here, the ptr status
++			 * would be erased below (even though not all slots are
++			 * actually overwritten), possibly opening the door to
++			 * leaks.
++			 */
++			verbose(env, "spilled ptr in range of var-offset stack write; insn %d, ptr off: %d",
++				insn_idx, i);
++			return -EINVAL;
++		}
++
++		/* Erase all spilled pointers. */
++		state->stack[spi].spilled_ptr.type = NOT_INIT;
++
++		/* Update the slot type. */
++		new_type = STACK_MISC;
++		if (writing_zero && *stype == STACK_ZERO) {
++			new_type = STACK_ZERO;
++			zero_used = true;
++		}
++		/* If the slot is STACK_INVALID, we check whether it's OK to
++		 * pretend that it will be initialized by this write. The slot
++		 * might not actually be written to, and so if we mark it as
++		 * initialized future reads might leak uninitialized memory.
++		 * For privileged programs, we will accept such reads to slots
++		 * that may or may not be written because, if we're reject
++		 * them, the error would be too confusing.
++		 */
++		if (*stype == STACK_INVALID && !env->allow_uninit_stack) {
++			verbose(env, "uninit stack in range of var-offset write prohibited for !root; insn %d, off: %d",
++					insn_idx, i);
++			return -EINVAL;
++		}
++		*stype = new_type;
++	}
++	if (zero_used) {
++		/* backtracking doesn't work for STACK_ZERO yet. */
++		err = mark_chain_precision(env, value_regno);
++		if (err)
++			return err;
++	}
++	return 0;
++}
++
++/* When register 'dst_regno' is assigned some values from stack[min_off,
++ * max_off), we set the register's type according to the types of the
++ * respective stack slots. If all the stack values are known to be zeros, then
++ * so is the destination reg. Otherwise, the register is considered to be
++ * SCALAR. This function does not deal with register filling; the caller must
++ * ensure that all spilled registers in the stack range have been marked as
++ * read.
++ */
++static void mark_reg_stack_read(struct bpf_verifier_env *env,
++				/* func where src register points to */
++				struct bpf_func_state *ptr_state,
++				int min_off, int max_off, int dst_regno)
++{
++	struct bpf_verifier_state *vstate = env->cur_state;
++	struct bpf_func_state *state = vstate->frame[vstate->curframe];
++	int i, slot, spi;
++	u8 *stype;
++	int zeros = 0;
++
++	for (i = min_off; i < max_off; i++) {
++		slot = -i - 1;
++		spi = slot / BPF_REG_SIZE;
++		stype = ptr_state->stack[spi].slot_type;
++		if (stype[slot % BPF_REG_SIZE] != STACK_ZERO)
++			break;
++		zeros++;
++	}
++	if (zeros == max_off - min_off) {
++		/* any access_size read into register is zero extended,
++		 * so the whole register == const_zero
++		 */
++		__mark_reg_const_zero(&state->regs[dst_regno]);
++		/* backtracking doesn't support STACK_ZERO yet,
++		 * so mark it precise here, so that later
++		 * backtracking can stop here.
++		 * Backtracking may not need this if this register
++		 * doesn't participate in pointer adjustment.
++		 * Forward propagation of precise flag is not
++		 * necessary either. This mark is only to stop
++		 * backtracking. Any register that contributed
++		 * to const 0 was marked precise before spill.
++		 */
++		state->regs[dst_regno].precise = true;
++	} else {
++		/* have read misc data from the stack */
++		mark_reg_unknown(env, state->regs, dst_regno);
++	}
++	state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
++}
++
++/* Read the stack at 'off' and put the results into the register indicated by
++ * 'dst_regno'. It handles reg filling if the addressed stack slot is a
++ * spilled reg.
++ *
++ * 'dst_regno' can be -1, meaning that the read value is not going to a
++ * register.
++ *
++ * The access is assumed to be within the current stack bounds.
++ */
++static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
++				      /* func where src register points to */
++				      struct bpf_func_state *reg_state,
++				      int off, int size, int dst_regno)
+ {
+ 	struct bpf_verifier_state *vstate = env->cur_state;
+ 	struct bpf_func_state *state = vstate->frame[vstate->curframe];
+@@ -2409,11 +2577,6 @@ static int check_stack_read(struct bpf_verifier_env *env,
+ 	struct bpf_reg_state *reg;
+ 	u8 *stype;
+ 
+-	if (reg_state->allocated_stack <= slot) {
+-		verbose(env, "invalid read from stack off %d+0 size %d\n",
+-			off, size);
+-		return -EACCES;
+-	}
+ 	stype = reg_state->stack[spi].slot_type;
+ 	reg = &reg_state->stack[spi].spilled_ptr;
+ 
+@@ -2424,9 +2587,9 @@ static int check_stack_read(struct bpf_verifier_env *env,
+ 				verbose(env, "invalid size of register fill\n");
+ 				return -EACCES;
+ 			}
+-			if (value_regno >= 0) {
+-				mark_reg_unknown(env, state->regs, value_regno);
+-				state->regs[value_regno].live |= REG_LIVE_WRITTEN;
++			if (dst_regno >= 0) {
++				mark_reg_unknown(env, state->regs, dst_regno);
++				state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
+ 			}
+ 			mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
+ 			return 0;
+@@ -2438,16 +2601,16 @@ static int check_stack_read(struct bpf_verifier_env *env,
+ 			}
+ 		}
+ 
+-		if (value_regno >= 0) {
++		if (dst_regno >= 0) {
+ 			/* restore register state from stack */
+-			state->regs[value_regno] = *reg;
++			state->regs[dst_regno] = *reg;
+ 			/* mark reg as written since spilled pointer state likely
+ 			 * has its liveness marks cleared by is_state_visited()
+ 			 * which resets stack/reg liveness for state transitions
+ 			 */
+-			state->regs[value_regno].live |= REG_LIVE_WRITTEN;
++			state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
+ 		} else if (__is_pointer_value(env->allow_ptr_leaks, reg)) {
+-			/* If value_regno==-1, the caller is asking us whether
++			/* If dst_regno==-1, the caller is asking us whether
+ 			 * it is acceptable to use this value as a SCALAR_VALUE
+ 			 * (e.g. for XADD).
+ 			 * We must not allow unprivileged callers to do that
+@@ -2459,70 +2622,167 @@ static int check_stack_read(struct bpf_verifier_env *env,
+ 		}
+ 		mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
+ 	} else {
+-		int zeros = 0;
++		u8 type;
+ 
+ 		for (i = 0; i < size; i++) {
+-			if (stype[(slot - i) % BPF_REG_SIZE] == STACK_MISC)
++			type = stype[(slot - i) % BPF_REG_SIZE];
++			if (type == STACK_MISC)
+ 				continue;
+-			if (stype[(slot - i) % BPF_REG_SIZE] == STACK_ZERO) {
+-				zeros++;
++			if (type == STACK_ZERO)
+ 				continue;
+-			}
+ 			verbose(env, "invalid read from stack off %d+%d size %d\n",
+ 				off, i, size);
+ 			return -EACCES;
+ 		}
+ 		mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
+-		if (value_regno >= 0) {
+-			if (zeros == size) {
+-				/* any size read into register is zero extended,
+-				 * so the whole register == const_zero
+-				 */
+-				__mark_reg_const_zero(&state->regs[value_regno]);
+-				/* backtracking doesn't support STACK_ZERO yet,
+-				 * so mark it precise here, so that later
+-				 * backtracking can stop here.
+-				 * Backtracking may not need this if this register
+-				 * doesn't participate in pointer adjustment.
+-				 * Forward propagation of precise flag is not
+-				 * necessary either. This mark is only to stop
+-				 * backtracking. Any register that contributed
+-				 * to const 0 was marked precise before spill.
+-				 */
+-				state->regs[value_regno].precise = true;
+-			} else {
+-				/* have read misc data from the stack */
+-				mark_reg_unknown(env, state->regs, value_regno);
+-			}
+-			state->regs[value_regno].live |= REG_LIVE_WRITTEN;
+-		}
++		if (dst_regno >= 0)
++			mark_reg_stack_read(env, reg_state, off, off + size, dst_regno);
+ 	}
+ 	return 0;
+ }
+ 
+-static int check_stack_access(struct bpf_verifier_env *env,
+-			      const struct bpf_reg_state *reg,
+-			      int off, int size)
++enum stack_access_src {
++	ACCESS_DIRECT = 1,  /* the access is performed by an instruction */
++	ACCESS_HELPER = 2,  /* the access is performed by a helper */
++};
++
++static int check_stack_range_initialized(struct bpf_verifier_env *env,
++					 int regno, int off, int access_size,
++					 bool zero_size_allowed,
++					 enum stack_access_src type,
++					 struct bpf_call_arg_meta *meta);
++
++static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
++{
++	return cur_regs(env) + regno;
++}
++
++/* Read the stack at 'ptr_regno + off' and put the result into the register
++ * 'dst_regno'.
++ * 'off' includes the pointer register's fixed offset(i.e. 'ptr_regno.off'),
++ * but not its variable offset.
++ * 'size' is assumed to be <= reg size and the access is assumed to be aligned.
++ *
++ * As opposed to check_stack_read_fixed_off, this function doesn't deal with
++ * filling registers (i.e. reads of spilled register cannot be detected when
++ * the offset is not fixed). We conservatively mark 'dst_regno' as containing
++ * SCALAR_VALUE. That's why we assert that the 'ptr_regno' has a variable
++ * offset; for a fixed offset check_stack_read_fixed_off should be used
++ * instead.
++ */
++static int check_stack_read_var_off(struct bpf_verifier_env *env,
++				    int ptr_regno, int off, int size, int dst_regno)
++{
++	/* The state of the source register. */
++	struct bpf_reg_state *reg = reg_state(env, ptr_regno);
++	struct bpf_func_state *ptr_state = func(env, reg);
++	int err;
++	int min_off, max_off;
++
++	/* Note that we pass a NULL meta, so raw access will not be permitted.
++	 */
++	err = check_stack_range_initialized(env, ptr_regno, off, size,
++					    false, ACCESS_DIRECT, NULL);
++	if (err)
++		return err;
++
++	min_off = reg->smin_value + off;
++	max_off = reg->smax_value + off;
++	mark_reg_stack_read(env, ptr_state, min_off, max_off + size, dst_regno);
++	return 0;
++}
++
++/* check_stack_read dispatches to check_stack_read_fixed_off or
++ * check_stack_read_var_off.
++ *
++ * The caller must ensure that the offset falls within the allocated stack
++ * bounds.
++ *
++ * 'dst_regno' is a register which will receive the value from the stack. It
++ * can be -1, meaning that the read value is not going to a register.
++ */
++static int check_stack_read(struct bpf_verifier_env *env,
++			    int ptr_regno, int off, int size,
++			    int dst_regno)
+ {
+-	/* Stack accesses must be at a fixed offset, so that we
+-	 * can determine what type of data were returned. See
+-	 * check_stack_read().
++	struct bpf_reg_state *reg = reg_state(env, ptr_regno);
++	struct bpf_func_state *state = func(env, reg);
++	int err;
++	/* Some accesses are only permitted with a static offset. */
++	bool var_off = !tnum_is_const(reg->var_off);
++
++	/* The offset is required to be static when reads don't go to a
++	 * register, in order to not leak pointers (see
++	 * check_stack_read_fixed_off).
+ 	 */
+-	if (!tnum_is_const(reg->var_off)) {
++	if (dst_regno < 0 && var_off) {
+ 		char tn_buf[48];
+ 
+ 		tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
+-		verbose(env, "variable stack access var_off=%s off=%d size=%d\n",
++		verbose(env, "variable offset stack pointer cannot be passed into helper function; var_off=%s off=%d size=%d\n",
+ 			tn_buf, off, size);
+ 		return -EACCES;
+ 	}
++	/* Variable offset is prohibited for unprivileged mode for simplicity
++	 * since it requires corresponding support in Spectre masking for stack
++	 * ALU. See also retrieve_ptr_limit().
++	 */
++	if (!env->bypass_spec_v1 && var_off) {
++		char tn_buf[48];
+ 
+-	if (off >= 0 || off < -MAX_BPF_STACK) {
+-		verbose(env, "invalid stack off=%d size=%d\n", off, size);
++		tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
++		verbose(env, "R%d variable offset stack access prohibited for !root, var_off=%s\n",
++				ptr_regno, tn_buf);
+ 		return -EACCES;
+ 	}
+ 
+-	return 0;
++	if (!var_off) {
++		off += reg->var_off.value;
++		err = check_stack_read_fixed_off(env, state, off, size,
++						 dst_regno);
++	} else {
++		/* Variable offset stack reads need more conservative handling
++		 * than fixed offset ones. Note that dst_regno >= 0 on this
++		 * branch.
++		 */
++		err = check_stack_read_var_off(env, ptr_regno, off, size,
++					       dst_regno);
++	}
++	return err;
++}
++
++
++/* check_stack_write dispatches to check_stack_write_fixed_off or
++ * check_stack_write_var_off.
++ *
++ * 'ptr_regno' is the register used as a pointer into the stack.
++ * 'off' includes 'ptr_regno->off', but not its variable offset (if any).
++ * 'value_regno' is the register whose value we're writing to the stack. It can
++ * be -1, meaning that we're not writing from a register.
++ *
++ * The caller must ensure that the offset falls within the maximum stack size.
++ */
++static int check_stack_write(struct bpf_verifier_env *env,
++			     int ptr_regno, int off, int size,
++			     int value_regno, int insn_idx)
++{
++	struct bpf_reg_state *reg = reg_state(env, ptr_regno);
++	struct bpf_func_state *state = func(env, reg);
++	int err;
++
++	if (tnum_is_const(reg->var_off)) {
++		off += reg->var_off.value;
++		err = check_stack_write_fixed_off(env, state, off, size,
++						  value_regno, insn_idx);
++	} else {
++		/* Variable offset stack reads need more conservative handling
++		 * than fixed offset ones.
++		 */
++		err = check_stack_write_var_off(env, state,
++						ptr_regno, off, size,
++						value_regno, insn_idx);
++	}
++	return err;
+ }
+ 
+ static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
+@@ -2851,11 +3111,6 @@ static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
+ 	return -EACCES;
+ }
+ 
+-static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
+-{
+-	return cur_regs(env) + regno;
+-}
+-
+ static bool is_pointer_value(struct bpf_verifier_env *env, int regno)
+ {
+ 	return __is_pointer_value(env->allow_ptr_leaks, reg_state(env, regno));
+@@ -2974,8 +3229,8 @@ static int check_ptr_alignment(struct bpf_verifier_env *env,
+ 		break;
+ 	case PTR_TO_STACK:
+ 		pointer_desc = "stack ";
+-		/* The stack spill tracking logic in check_stack_write()
+-		 * and check_stack_read() relies on stack accesses being
++		/* The stack spill tracking logic in check_stack_write_fixed_off()
++		 * and check_stack_read_fixed_off() relies on stack accesses being
+ 		 * aligned.
+ 		 */
+ 		strict = true;
+@@ -3393,6 +3648,91 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env,
+ 	return 0;
+ }
+ 
++/* Check that the stack access at the given offset is within bounds. The
++ * maximum valid offset is -1.
++ *
++ * The minimum valid offset is -MAX_BPF_STACK for writes, and
++ * -state->allocated_stack for reads.
++ */
++static int check_stack_slot_within_bounds(int off,
++					  struct bpf_func_state *state,
++					  enum bpf_access_type t)
++{
++	int min_valid_off;
++
++	if (t == BPF_WRITE)
++		min_valid_off = -MAX_BPF_STACK;
++	else
++		min_valid_off = -state->allocated_stack;
++
++	if (off < min_valid_off || off > -1)
++		return -EACCES;
++	return 0;
++}
++
++/* Check that the stack access at 'regno + off' falls within the maximum stack
++ * bounds.
++ *
++ * 'off' includes `regno->offset`, but not its dynamic part (if any).
++ */
++static int check_stack_access_within_bounds(
++		struct bpf_verifier_env *env,
++		int regno, int off, int access_size,
++		enum stack_access_src src, enum bpf_access_type type)
++{
++	struct bpf_reg_state *regs = cur_regs(env);
++	struct bpf_reg_state *reg = regs + regno;
++	struct bpf_func_state *state = func(env, reg);
++	int min_off, max_off;
++	int err;
++	char *err_extra;
++
++	if (src == ACCESS_HELPER)
++		/* We don't know if helpers are reading or writing (or both). */
++		err_extra = " indirect access to";
++	else if (type == BPF_READ)
++		err_extra = " read from";
++	else
++		err_extra = " write to";
++
++	if (tnum_is_const(reg->var_off)) {
++		min_off = reg->var_off.value + off;
++		if (access_size > 0)
++			max_off = min_off + access_size - 1;
++		else
++			max_off = min_off;
++	} else {
++		if (reg->smax_value >= BPF_MAX_VAR_OFF ||
++		    reg->smin_value <= -BPF_MAX_VAR_OFF) {
++			verbose(env, "invalid unbounded variable-offset%s stack R%d\n",
++				err_extra, regno);
++			return -EACCES;
++		}
++		min_off = reg->smin_value + off;
++		if (access_size > 0)
++			max_off = reg->smax_value + off + access_size - 1;
++		else
++			max_off = min_off;
++	}
++
++	err = check_stack_slot_within_bounds(min_off, state, type);
++	if (!err)
++		err = check_stack_slot_within_bounds(max_off, state, type);
++
++	if (err) {
++		if (tnum_is_const(reg->var_off)) {
++			verbose(env, "invalid%s stack R%d off=%d size=%d\n",
++				err_extra, regno, off, access_size);
++		} else {
++			char tn_buf[48];
++
++			tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
++			verbose(env, "invalid variable-offset%s stack R%d var_off=%s size=%d\n",
++				err_extra, regno, tn_buf, access_size);
++		}
++	}
++	return err;
++}
+ 
+ /* check whether memory at (regno + off) is accessible for t = (read | write)
+  * if t==write, value_regno is a register which value is stored into memory
+@@ -3505,8 +3845,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 		}
+ 
+ 	} else if (reg->type == PTR_TO_STACK) {
+-		off += reg->var_off.value;
+-		err = check_stack_access(env, reg, off, size);
++		/* Basic bounds checks. */
++		err = check_stack_access_within_bounds(env, regno, off, size, ACCESS_DIRECT, t);
+ 		if (err)
+ 			return err;
+ 
+@@ -3515,12 +3855,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 		if (err)
+ 			return err;
+ 
+-		if (t == BPF_WRITE)
+-			err = check_stack_write(env, state, off, size,
+-						value_regno, insn_idx);
+-		else
+-			err = check_stack_read(env, state, off, size,
++		if (t == BPF_READ)
++			err = check_stack_read(env, regno, off, size,
+ 					       value_regno);
++		else
++			err = check_stack_write(env, regno, off, size,
++						value_regno, insn_idx);
+ 	} else if (reg_is_pkt_pointer(reg)) {
+ 		if (t == BPF_WRITE && !may_access_direct_pkt_data(env, NULL, t)) {
+ 			verbose(env, "cannot write into packet\n");
+@@ -3642,49 +3982,53 @@ static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_ins
+ 				BPF_SIZE(insn->code), BPF_WRITE, -1, true);
+ }
+ 
+-static int __check_stack_boundary(struct bpf_verifier_env *env, u32 regno,
+-				  int off, int access_size,
+-				  bool zero_size_allowed)
++/* When register 'regno' is used to read the stack (either directly or through
++ * a helper function) make sure that it's within stack boundary and, depending
++ * on the access type, that all elements of the stack are initialized.
++ *
++ * 'off' includes 'regno->off', but not its dynamic part (if any).
++ *
++ * All registers that have been spilled on the stack in the slots within the
++ * read offsets are marked as read.
++ */
++static int check_stack_range_initialized(
++		struct bpf_verifier_env *env, int regno, int off,
++		int access_size, bool zero_size_allowed,
++		enum stack_access_src type, struct bpf_call_arg_meta *meta)
+ {
+ 	struct bpf_reg_state *reg = reg_state(env, regno);
++	struct bpf_func_state *state = func(env, reg);
++	int err, min_off, max_off, i, j, slot, spi;
++	char *err_extra = type == ACCESS_HELPER ? " indirect" : "";
++	enum bpf_access_type bounds_check_type;
++	/* Some accesses can write anything into the stack, others are
++	 * read-only.
++	 */
++	bool clobber = false;
+ 
+-	if (off >= 0 || off < -MAX_BPF_STACK || off + access_size > 0 ||
+-	    access_size < 0 || (access_size == 0 && !zero_size_allowed)) {
+-		if (tnum_is_const(reg->var_off)) {
+-			verbose(env, "invalid stack type R%d off=%d access_size=%d\n",
+-				regno, off, access_size);
+-		} else {
+-			char tn_buf[48];
+-
+-			tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
+-			verbose(env, "invalid stack type R%d var_off=%s access_size=%d\n",
+-				regno, tn_buf, access_size);
+-		}
++	if (access_size == 0 && !zero_size_allowed) {
++		verbose(env, "invalid zero-sized read\n");
+ 		return -EACCES;
+ 	}
+-	return 0;
+-}
+ 
+-/* when register 'regno' is passed into function that will read 'access_size'
+- * bytes from that pointer, make sure that it's within stack boundary
+- * and all elements of stack are initialized.
+- * Unlike most pointer bounds-checking functions, this one doesn't take an
+- * 'off' argument, so it has to add in reg->off itself.
+- */
+-static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
+-				int access_size, bool zero_size_allowed,
+-				struct bpf_call_arg_meta *meta)
+-{
+-	struct bpf_reg_state *reg = reg_state(env, regno);
+-	struct bpf_func_state *state = func(env, reg);
+-	int err, min_off, max_off, i, j, slot, spi;
++	if (type == ACCESS_HELPER) {
++		/* The bounds checks for writes are more permissive than for
++		 * reads. However, if raw_mode is not set, we'll do extra
++		 * checks below.
++		 */
++		bounds_check_type = BPF_WRITE;
++		clobber = true;
++	} else {
++		bounds_check_type = BPF_READ;
++	}
++	err = check_stack_access_within_bounds(env, regno, off, access_size,
++					       type, bounds_check_type);
++	if (err)
++		return err;
++
+ 
+ 	if (tnum_is_const(reg->var_off)) {
+-		min_off = max_off = reg->var_off.value + reg->off;
+-		err = __check_stack_boundary(env, regno, min_off, access_size,
+-					     zero_size_allowed);
+-		if (err)
+-			return err;
++		min_off = max_off = reg->var_off.value + off;
+ 	} else {
+ 		/* Variable offset is prohibited for unprivileged mode for
+ 		 * simplicity since it requires corresponding support in
+@@ -3695,8 +4039,8 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
+ 			char tn_buf[48];
+ 
+ 			tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
+-			verbose(env, "R%d indirect variable offset stack access prohibited for !root, var_off=%s\n",
+-				regno, tn_buf);
++			verbose(env, "R%d%s variable offset stack access prohibited for !root, var_off=%s\n",
++				regno, err_extra, tn_buf);
+ 			return -EACCES;
+ 		}
+ 		/* Only initialized buffer on stack is allowed to be accessed
+@@ -3708,28 +4052,8 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
+ 		if (meta && meta->raw_mode)
+ 			meta = NULL;
+ 
+-		if (reg->smax_value >= BPF_MAX_VAR_OFF ||
+-		    reg->smax_value <= -BPF_MAX_VAR_OFF) {
+-			verbose(env, "R%d unbounded indirect variable offset stack access\n",
+-				regno);
+-			return -EACCES;
+-		}
+-		min_off = reg->smin_value + reg->off;
+-		max_off = reg->smax_value + reg->off;
+-		err = __check_stack_boundary(env, regno, min_off, access_size,
+-					     zero_size_allowed);
+-		if (err) {
+-			verbose(env, "R%d min value is outside of stack bound\n",
+-				regno);
+-			return err;
+-		}
+-		err = __check_stack_boundary(env, regno, max_off, access_size,
+-					     zero_size_allowed);
+-		if (err) {
+-			verbose(env, "R%d max value is outside of stack bound\n",
+-				regno);
+-			return err;
+-		}
++		min_off = reg->smin_value + off;
++		max_off = reg->smax_value + off;
+ 	}
+ 
+ 	if (meta && meta->raw_mode) {
+@@ -3749,8 +4073,10 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
+ 		if (*stype == STACK_MISC)
+ 			goto mark;
+ 		if (*stype == STACK_ZERO) {
+-			/* helper can write anything into the stack */
+-			*stype = STACK_MISC;
++			if (clobber) {
++				/* helper can write anything into the stack */
++				*stype = STACK_MISC;
++			}
+ 			goto mark;
+ 		}
+ 
+@@ -3759,23 +4085,26 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
+ 			goto mark;
+ 
+ 		if (state->stack[spi].slot_type[0] == STACK_SPILL &&
+-		    state->stack[spi].spilled_ptr.type == SCALAR_VALUE) {
+-			__mark_reg_unknown(env, &state->stack[spi].spilled_ptr);
+-			for (j = 0; j < BPF_REG_SIZE; j++)
+-				state->stack[spi].slot_type[j] = STACK_MISC;
++		    (state->stack[spi].spilled_ptr.type == SCALAR_VALUE ||
++		     env->allow_ptr_leaks)) {
++			if (clobber) {
++				__mark_reg_unknown(env, &state->stack[spi].spilled_ptr);
++				for (j = 0; j < BPF_REG_SIZE; j++)
++					state->stack[spi].slot_type[j] = STACK_MISC;
++			}
+ 			goto mark;
+ 		}
+ 
+ err:
+ 		if (tnum_is_const(reg->var_off)) {
+-			verbose(env, "invalid indirect read from stack off %d+%d size %d\n",
+-				min_off, i - min_off, access_size);
++			verbose(env, "invalid%s read from stack R%d off %d+%d size %d\n",
++				err_extra, regno, min_off, i - min_off, access_size);
+ 		} else {
+ 			char tn_buf[48];
+ 
+ 			tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
+-			verbose(env, "invalid indirect read from stack var_off %s+%d size %d\n",
+-				tn_buf, i - min_off, access_size);
++			verbose(env, "invalid%s read from stack R%d var_off %s+%d size %d\n",
++				err_extra, regno, tn_buf, i - min_off, access_size);
+ 		}
+ 		return -EACCES;
+ mark:
+@@ -3824,8 +4153,10 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+ 					   "rdwr",
+ 					   &env->prog->aux->max_rdwr_access);
+ 	case PTR_TO_STACK:
+-		return check_stack_boundary(env, regno, access_size,
+-					    zero_size_allowed, meta);
++		return check_stack_range_initialized(
++				env,
++				regno, reg->off, access_size,
++				zero_size_allowed, ACCESS_HELPER, meta);
+ 	default: /* scalar_value or invalid ptr */
+ 		/* Allow zero-byte read from NULL, regardless of pointer type */
+ 		if (zero_size_allowed && access_size == 0 &&
+@@ -5343,7 +5674,7 @@ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
+ 	bool off_is_neg = off_reg->smin_value < 0;
+ 	bool mask_to_left = (opcode == BPF_ADD &&  off_is_neg) ||
+ 			    (opcode == BPF_SUB && !off_is_neg);
+-	u32 off, max = 0, ptr_limit = 0;
++	u32 max = 0, ptr_limit = 0;
+ 
+ 	if (!tnum_is_const(off_reg->var_off) &&
+ 	    (off_reg->smin_value < 0) != (off_reg->smax_value < 0))
+@@ -5352,26 +5683,18 @@ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
+ 	switch (ptr_reg->type) {
+ 	case PTR_TO_STACK:
+ 		/* Offset 0 is out-of-bounds, but acceptable start for the
+-		 * left direction, see BPF_REG_FP.
++		 * left direction, see BPF_REG_FP. Also, unknown scalar
++		 * offset where we would need to deal with min/max bounds is
++		 * currently prohibited for unprivileged.
+ 		 */
+ 		max = MAX_BPF_STACK + mask_to_left;
+-		/* Indirect variable offset stack access is prohibited in
+-		 * unprivileged mode so it's not handled here.
+-		 */
+-		off = ptr_reg->off + ptr_reg->var_off.value;
+-		if (mask_to_left)
+-			ptr_limit = MAX_BPF_STACK + off;
+-		else
+-			ptr_limit = -off - 1;
++		ptr_limit = -(ptr_reg->var_off.value + ptr_reg->off);
+ 		break;
+ 	case PTR_TO_MAP_VALUE:
+ 		max = ptr_reg->map_ptr->value_size;
+-		if (mask_to_left) {
+-			ptr_limit = ptr_reg->umax_value + ptr_reg->off;
+-		} else {
+-			off = ptr_reg->smin_value + ptr_reg->off;
+-			ptr_limit = ptr_reg->map_ptr->value_size - off - 1;
+-		}
++		ptr_limit = (mask_to_left ?
++			     ptr_reg->smin_value :
++			     ptr_reg->umax_value) + ptr_reg->off;
+ 		break;
+ 	default:
+ 		return REASON_TYPE;
+@@ -5426,10 +5749,12 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 			    struct bpf_insn *insn,
+ 			    const struct bpf_reg_state *ptr_reg,
+ 			    const struct bpf_reg_state *off_reg,
+-			    struct bpf_reg_state *dst_reg)
++			    struct bpf_reg_state *dst_reg,
++			    struct bpf_insn_aux_data *tmp_aux,
++			    const bool commit_window)
+ {
++	struct bpf_insn_aux_data *aux = commit_window ? cur_aux(env) : tmp_aux;
+ 	struct bpf_verifier_state *vstate = env->cur_state;
+-	struct bpf_insn_aux_data *aux = cur_aux(env);
+ 	bool off_is_neg = off_reg->smin_value < 0;
+ 	bool ptr_is_dst_reg = ptr_reg == dst_reg;
+ 	u8 opcode = BPF_OP(insn->code);
+@@ -5448,18 +5773,33 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 	if (vstate->speculative)
+ 		goto do_sim;
+ 
+-	alu_state  = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
+-	alu_state |= ptr_is_dst_reg ?
+-		     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
+-
+ 	err = retrieve_ptr_limit(ptr_reg, off_reg, &alu_limit, opcode);
+ 	if (err < 0)
+ 		return err;
+ 
++	if (commit_window) {
++		/* In commit phase we narrow the masking window based on
++		 * the observed pointer move after the simulated operation.
++		 */
++		alu_state = tmp_aux->alu_state;
++		alu_limit = abs(tmp_aux->alu_limit - alu_limit);
++	} else {
++		alu_state  = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
++		alu_state |= ptr_is_dst_reg ?
++			     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
++	}
++
+ 	err = update_alu_sanitation_state(aux, alu_state, alu_limit);
+ 	if (err < 0)
+ 		return err;
+ do_sim:
++	/* If we're in commit phase, we're done here given we already
++	 * pushed the truncated dst_reg into the speculative verification
++	 * stack.
++	 */
++	if (commit_window)
++		return 0;
++
+ 	/* Simulate and find potential out-of-bounds access under
+ 	 * speculative execution from truncation as a result of
+ 	 * masking when off was not within expected range. If off
+@@ -5518,6 +5858,72 @@ static int sanitize_err(struct bpf_verifier_env *env,
+ 	return -EACCES;
+ }
+ 
++/* check that stack access falls within stack limits and that 'reg' doesn't
++ * have a variable offset.
++ *
++ * Variable offset is prohibited for unprivileged mode for simplicity since it
++ * requires corresponding support in Spectre masking for stack ALU.  See also
++ * retrieve_ptr_limit().
++ *
++ *
++ * 'off' includes 'reg->off'.
++ */
++static int check_stack_access_for_ptr_arithmetic(
++				struct bpf_verifier_env *env,
++				int regno,
++				const struct bpf_reg_state *reg,
++				int off)
++{
++	if (!tnum_is_const(reg->var_off)) {
++		char tn_buf[48];
++
++		tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
++		verbose(env, "R%d variable stack access prohibited for !root, var_off=%s off=%d\n",
++			regno, tn_buf, off);
++		return -EACCES;
++	}
++
++	if (off >= 0 || off < -MAX_BPF_STACK) {
++		verbose(env, "R%d stack pointer arithmetic goes out of range, "
++			"prohibited for !root; off=%d\n", regno, off);
++		return -EACCES;
++	}
++
++	return 0;
++}
++
++static int sanitize_check_bounds(struct bpf_verifier_env *env,
++				 const struct bpf_insn *insn,
++				 const struct bpf_reg_state *dst_reg)
++{
++	u32 dst = insn->dst_reg;
++
++	/* For unprivileged we require that resulting offset must be in bounds
++	 * in order to be able to sanitize access later on.
++	 */
++	if (env->bypass_spec_v1)
++		return 0;
++
++	switch (dst_reg->type) {
++	case PTR_TO_STACK:
++		if (check_stack_access_for_ptr_arithmetic(env, dst, dst_reg,
++					dst_reg->off + dst_reg->var_off.value))
++			return -EACCES;
++		break;
++	case PTR_TO_MAP_VALUE:
++		if (check_map_access(env, dst, dst_reg->off, 1, false)) {
++			verbose(env, "R%d pointer arithmetic of map value goes out of range, "
++				"prohibited for !root\n", dst);
++			return -EACCES;
++		}
++		break;
++	default:
++		break;
++	}
++
++	return 0;
++}
++
+ /* Handles arithmetic on a pointer and a scalar: computes new min/max and var_off.
+  * Caller should also handle BPF_MOV case separately.
+  * If we return -EACCES, caller may want to try again treating pointer as a
+@@ -5536,6 +5942,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 	    smin_ptr = ptr_reg->smin_value, smax_ptr = ptr_reg->smax_value;
+ 	u64 umin_val = off_reg->umin_value, umax_val = off_reg->umax_value,
+ 	    umin_ptr = ptr_reg->umin_value, umax_ptr = ptr_reg->umax_value;
++	struct bpf_insn_aux_data tmp_aux = {};
+ 	u8 opcode = BPF_OP(insn->code);
+ 	u32 dst = insn->dst_reg;
+ 	int ret;
+@@ -5602,12 +6009,15 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 	/* pointer types do not carry 32-bit bounds at the moment. */
+ 	__mark_reg32_unbounded(dst_reg);
+ 
+-	switch (opcode) {
+-	case BPF_ADD:
+-		ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg);
++	if (sanitize_needed(opcode)) {
++		ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg,
++				       &tmp_aux, false);
+ 		if (ret < 0)
+ 			return sanitize_err(env, insn, ret, off_reg, dst_reg);
++	}
+ 
++	switch (opcode) {
++	case BPF_ADD:
+ 		/* We can take a fixed offset as long as it doesn't overflow
+ 		 * the s32 'off' field
+ 		 */
+@@ -5658,10 +6068,6 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		}
+ 		break;
+ 	case BPF_SUB:
+-		ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg);
+-		if (ret < 0)
+-			return sanitize_err(env, insn, ret, off_reg, dst_reg);
+-
+ 		if (dst_reg == off_reg) {
+ 			/* scalar -= pointer.  Creates an unknown scalar */
+ 			verbose(env, "R%d tried to subtract pointer from scalar\n",
+@@ -5742,22 +6148,13 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 	__reg_deduce_bounds(dst_reg);
+ 	__reg_bound_offset(dst_reg);
+ 
+-	/* For unprivileged we require that resulting offset must be in bounds
+-	 * in order to be able to sanitize access later on.
+-	 */
+-	if (!env->bypass_spec_v1) {
+-		if (dst_reg->type == PTR_TO_MAP_VALUE &&
+-		    check_map_access(env, dst, dst_reg->off, 1, false)) {
+-			verbose(env, "R%d pointer arithmetic of map value goes out of range, "
+-				"prohibited for !root\n", dst);
+-			return -EACCES;
+-		} else if (dst_reg->type == PTR_TO_STACK &&
+-			   check_stack_access(env, dst_reg, dst_reg->off +
+-					      dst_reg->var_off.value, 1)) {
+-			verbose(env, "R%d stack pointer arithmetic goes out of range, "
+-				"prohibited for !root\n", dst);
+-			return -EACCES;
+-		}
++	if (sanitize_check_bounds(env, insn, dst_reg) < 0)
++		return -EACCES;
++	if (sanitize_needed(opcode)) {
++		ret = sanitize_ptr_alu(env, insn, dst_reg, off_reg, dst_reg,
++				       &tmp_aux, true);
++		if (ret < 0)
++			return sanitize_err(env, insn, ret, off_reg, dst_reg);
+ 	}
+ 
+ 	return 0;
+@@ -11951,6 +12348,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
+ 		env->strict_alignment = false;
+ 
+ 	env->allow_ptr_leaks = bpf_allow_ptr_leaks();
++	env->allow_uninit_stack = bpf_allow_uninit_stack();
+ 	env->allow_ptr_to_map_access = bpf_allow_ptr_to_map_access();
+ 	env->bypass_spec_v1 = bpf_bypass_spec_v1();
+ 	env->bypass_spec_v4 = bpf_bypass_spec_v4();
+diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
+index fe9ca92faa2a7..909b0bf22a1ec 100644
+--- a/kernel/locking/qrwlock.c
++++ b/kernel/locking/qrwlock.c
+@@ -61,6 +61,8 @@ EXPORT_SYMBOL(queued_read_lock_slowpath);
+  */
+ void queued_write_lock_slowpath(struct qrwlock *lock)
+ {
++	int cnts;
++
+ 	/* Put the writer into the wait queue */
+ 	arch_spin_lock(&lock->wait_lock);
+ 
+@@ -74,9 +76,8 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
+ 
+ 	/* When no more readers or writers, set the locked flag */
+ 	do {
+-		atomic_cond_read_acquire(&lock->cnts, VAL == _QW_WAITING);
+-	} while (atomic_cmpxchg_relaxed(&lock->cnts, _QW_WAITING,
+-					_QW_LOCKED) != _QW_WAITING);
++		cnts = atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);
++	} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));
+ unlock:
+ 	arch_spin_unlock(&lock->wait_lock);
+ }
+diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
+index 1e000cc2e7b4b..127012f45166e 100644
+--- a/scripts/Makefile.kasan
++++ b/scripts/Makefile.kasan
+@@ -2,6 +2,8 @@
+ CFLAGS_KASAN_NOSANITIZE := -fno-builtin
+ KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET)
+ 
++cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1)))
++
+ ifdef CONFIG_KASAN_GENERIC
+ 
+ ifdef CONFIG_KASAN_INLINE
+@@ -12,8 +14,6 @@ endif
+ 
+ CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address
+ 
+-cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1)))
+-
+ # -fasan-shadow-offset fails without -fsanitize
+ CFLAGS_KASAN_SHADOW := $(call cc-option, -fsanitize=kernel-address \
+ 			-fasan-shadow-offset=$(KASAN_SHADOW_OFFSET), \
+@@ -36,14 +36,14 @@ endif # CONFIG_KASAN_GENERIC
+ ifdef CONFIG_KASAN_SW_TAGS
+ 
+ ifdef CONFIG_KASAN_INLINE
+-    instrumentation_flags := -mllvm -hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
++    instrumentation_flags := $(call cc-param,hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET))
+ else
+-    instrumentation_flags := -mllvm -hwasan-instrument-with-calls=1
++    instrumentation_flags := $(call cc-param,hwasan-instrument-with-calls=1)
+ endif
+ 
+ CFLAGS_KASAN := -fsanitize=kernel-hwaddress \
+-		-mllvm -hwasan-instrument-stack=$(CONFIG_KASAN_STACK) \
+-		-mllvm -hwasan-use-short-granules=0 \
++		$(call cc-param,hwasan-instrument-stack=$(CONFIG_KASAN_STACK)) \
++		$(call cc-param,hwasan-use-short-granules=0) \
+ 		$(instrumentation_flags)
+ 
+ endif # CONFIG_KASAN_SW_TAGS
+diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
+index e2a0ed5d02f01..c87c4df8703d4 100644
+--- a/security/keys/trusted-keys/trusted_tpm2.c
++++ b/security/keys/trusted-keys/trusted_tpm2.c
+@@ -79,7 +79,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
+ 	if (i == ARRAY_SIZE(tpm2_hash_map))
+ 		return -EINVAL;
+ 
+-	rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_CREATE);
++	rc = tpm_try_get_ops(chip);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/tools/arch/ia64/include/asm/barrier.h b/tools/arch/ia64/include/asm/barrier.h
+index 4d471d9511a54..6fffe56827134 100644
+--- a/tools/arch/ia64/include/asm/barrier.h
++++ b/tools/arch/ia64/include/asm/barrier.h
+@@ -39,9 +39,6 @@
+  * sequential memory pages only.
+  */
+ 
+-/* XXX From arch/ia64/include/uapi/asm/gcc_intrin.h */
+-#define ia64_mf()       asm volatile ("mf" ::: "memory")
+-
+ #define mb()		ia64_mf()
+ #define rmb()		mb()
+ #define wmb()		mb()
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index d8ada6a3c555a..d3c15b53495d6 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -636,7 +636,7 @@ int auxtrace_parse_snapshot_options(struct auxtrace_record *itr,
+ 		break;
+ 	}
+ 
+-	if (itr)
++	if (itr && itr->parse_snapshot_options)
+ 		return itr->parse_snapshot_options(itr, opts, str);
+ 
+ 	pr_err("No AUX area tracing to snapshot\n");
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index e2537d5acab09..f4d44f75ba152 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -836,15 +836,18 @@ out:
+ int maps__clone(struct thread *thread, struct maps *parent)
+ {
+ 	struct maps *maps = thread->maps;
+-	int err = -ENOMEM;
++	int err;
+ 	struct map *map;
+ 
+ 	down_read(&parent->lock);
+ 
+ 	maps__for_each_entry(parent, map) {
+ 		struct map *new = map__clone(map);
+-		if (new == NULL)
++
++		if (new == NULL) {
++			err = -ENOMEM;
+ 			goto out_unlock;
++		}
+ 
+ 		err = unwind__prepare_access(maps, new, NULL);
+ 		if (err)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-04-30 18:58 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-04-30 18:58 UTC (permalink / raw
  To: gentoo-commits

commit:     32295054ff07918abe69ef158a9103ff30932d15
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 30 18:57:54 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr 30 18:57:54 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=32295054

Rename cpu opt patch to standardize on naming format

Remove redundant patches

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   8 +-
 ...> 5010_enable-cpu-optimizations-universal.patch | 271 +++++++---
 5012_enable-cpu-optimizations-for-gcc91.patch      | 549 ---------------------
 3 files changed, 191 insertions(+), 637 deletions(-)

diff --git a/0000_README b/0000_README
index acf61dd..9d8e885 100644
--- a/0000_README
+++ b/0000_README
@@ -203,10 +203,6 @@ Patch:  5000_shifts-ubuntu-20.04.patch
 From:   https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/focal
 Desc:   UID/GID shifting overlay filesystem for containers 
 
-Patch:  5012_enable-cpu-optimizations-for-gcc91.patch
+Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
-Desc:   Kernel patch enables gcc >= v9.1 optimizations for additional CPUs.
-
-Patch:  5013_enable-cpu-optimizations-for-gcc10.patch
-From:   https://github.com/graysky2/kernel_gcc_patch/
-Desc:   Kernel patch enables gcc = v10.1+ optimizations for additional CPUs.
+Desc:   Kernel >= 5.8 patch enables gcc = v9+ optimizations for additional CPUs.

diff --git a/5013_enable-cpu-optimizations-for-gcc10.patch b/5010_enable-cpu-optimizations-universal.patch
similarity index 66%
rename from 5013_enable-cpu-optimizations-for-gcc10.patch
rename to 5010_enable-cpu-optimizations-universal.patch
index c90b586..1868f23 100644
--- a/5013_enable-cpu-optimizations-for-gcc10.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,64 +1,82 @@
-From 4666424a864159b4de572c90adb2c3e1fcdd5890 Mon Sep 17 00:00:00 2001
+From 59db769ad69e080c512b3890e1d27d6120f4a1a4 Mon Sep 17 00:00:00 2001
 From: graysky <graysky@archlinux.us>
-Date: Fri, 13 Nov 2020 15:45:08 -0500
-Subject: [PATCH]more-uarches-for-gcc-v10-and-kernel-5.8+
+Date: Mon, 12 Apr 2021 07:09:27 -0400
+Subject: [PATCH] more uarches for kernel 5.8+
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
 
 WARNING
-This patch works with gcc versions 10.1+ and with kernel version 5.8+ and should
+This patch works with all gcc versions 9.0+ and with kernel version 5.8+ and should
 NOT be applied when compiling on older versions of gcc due to key name changes
 of the march flags introduced with the version 4.9 release of gcc.[1]
 
-Use the older version of this patch hosted on the same github for older
-versions of gcc.
-
 FEATURES
 This patch adds additional CPU options to the Linux kernel accessible under:
  Processor type and features  --->
   Processor family --->
 
-The expanded microarchitectures include:
-* AMD Improved K8-family
-* AMD K10-family
-* AMD Family 10h (Barcelona)
-* AMD Family 14h (Bobcat)
-* AMD Family 16h (Jaguar)
-* AMD Family 15h (Bulldozer)
-* AMD Family 15h (Piledriver)
-* AMD Family 15h (Steamroller)
-* AMD Family 15h (Excavator)
-* AMD Family 17h (Zen)
-* AMD Family 17h (Zen 2)
-* Intel Silvermont low-power processors
-* Intel Goldmont low-power processors (Apollo Lake and Denverton)
-* Intel Goldmont Plus low-power processors (Gemini Lake)
-* Intel 1st Gen Core i3/i5/i7 (Nehalem)
-* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
-* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
-* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
-* Intel 4th Gen Core i3/i5/i7 (Haswell)
-* Intel 5th Gen Core i3/i5/i7 (Broadwell)
-* Intel 6th Gen Core i3/i5/i7 (Skylake)
-* Intel 6th Gen Core i7/i9 (Skylake X)
-* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
-* Intel 10th Gen Core i7/i9 (Ice Lake)
-* Intel Xeon (Cascade Lake)
-* Intel Xeon (Cooper Lake)
-* Intel 3rd Gen 10nm++  i3/i5/i7/i9-family (Tiger Lake)
+With the release of gcc 11.0, several generic 64-bit levels are offered which
+are good for supported Intel or AMD CPUs:
+• x86-64-v2
+• x86-64-v3
+• x86-64-v4
+
+Users of glibc 2.33 and above can see which level is supported by current
+hardware by running:
+  /lib/ld-linux-x86-64.so.2 --help | grep supported
+
+Alternatively, compare the flags from /proc/cpuinfo to this list.[2]
+
+CPU-specific microarchitectures include:
+• AMD Improved K8-family
+• AMD K10-family
+• AMD Family 10h (Barcelona)
+• AMD Family 14h (Bobcat)
+• AMD Family 16h (Jaguar)
+• AMD Family 15h (Bulldozer)
+• AMD Family 15h (Piledriver)
+• AMD Family 15h (Steamroller)
+• AMD Family 15h (Excavator)
+• AMD Family 17h (Zen)
+• AMD Family 17h (Zen 2)
+• AMD Family 19h (Zen 3)†
+• Intel Silvermont low-power processors
+• Intel Goldmont low-power processors (Apollo Lake and Denverton)
+• Intel Goldmont Plus low-power processors (Gemini Lake)
+• Intel 1st Gen Core i3/i5/i7 (Nehalem)
+• Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+• Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+• Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+• Intel 4th Gen Core i3/i5/i7 (Haswell)
+• Intel 5th Gen Core i3/i5/i7 (Broadwell)
+• Intel 6th Gen Core i3/i5/i7 (Skylake)
+• Intel 6th Gen Core i7/i9 (Skylake X)
+• Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+• Intel 10th Gen Core i7/i9 (Ice Lake)
+• Intel Xeon (Cascade Lake)
+• Intel Xeon (Cooper Lake)*
+• Intel 3rd Gen 10nm++ i3/i5/i7/i9-family (Tiger Lake)*
+• Intel 3rd Gen 10nm++ Xeon (Sapphire Rapids)‡
+• Intel 11th Gen i3/i5/i7/i9-family (Rocket Lake)‡
+• Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)‡
+
+Notes: If not otherwise noted, gcc >=9.1 is required for support.
+       *Requires gcc >=10.1  †Required gcc >=10.3  ‡Required gcc >=11.0
 
 It also offers to compile passing the 'native' option which, "selects the CPU
 to generate code for at compilation time by determining the processor type of
 the compiling machine. Using -march=native enables all instruction subsets
 supported by the local machine and will produce code optimized for the local
-machine under the constraints of the selected instruction set."[2]
+machine under the constraints of the selected instruction set."[3]
 
-Do NOT try using the 'native' option on AMD Piledriver, Steamroller, or
-Excavator CPUs (-march=bdver{2,3,4} flag). The build will error out due the
-kernel's objtool issue with these.[3a,b]
+Users of Intel CPUs should select the 'Intel-Native' option and users of AMD
+CPUs should select the 'AMD-Native' option.
 
-MINOR NOTES
-This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
-changes. Note that upstream is using the deprecated 'match=atom' flags when I
-believe it should use the newer 'march=bonnell' flag for atom processors.[4]
+MINOR NOTES RELATING TO INTEL ATOM PROCESSORS
+This patch also changes -march=atom to -march=bonnell in accordance with the
+gcc v4.9 changes. Upstream is using the deprecated -match=atom flags when I
+believe it should use the newer -march=bonnell flag for atom processors.[4]
 
 It is not recommended to compile on Atom-CPUs with the 'native' option.[5] The
 recommendation is to use the 'atom' option instead.
@@ -72,28 +90,26 @@ https://github.com/graysky2/kernel_gcc_patch
 
 REQUIREMENTS
 linux version >=5.8
-gcc version >=10.1
+gcc version >=9.0
 
 ACKNOWLEDGMENTS
 This patch builds on the seminal work by Jeroen.[6]
 
 REFERENCES
 1.  https://gcc.gnu.org/gcc-4.9/changes.html
-2.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
-3a. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95671#c11
-3b. https://github.com/graysky2/kernel_gcc_patch/issues/55
+2.  https://gitlab.com/x86-psABIs/x86-64-ABI/-/commit/77566eb03bc6a326811cb7e9
+3.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
 4.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
 5.  https://github.com/graysky2/kernel_gcc_patch/issues/15
 6.  http://www.linuxforge.net/docs/linux/linux-gcc.php
-
 ---
- arch/x86/Kconfig.cpu            | 258 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile               |  39 ++++-
- arch/x86/include/asm/vermagic.h |  56 +++++++
- 3 files changed, 336 insertions(+), 17 deletions(-)
+ arch/x86/Kconfig.cpu            | 332 ++++++++++++++++++++++++++++++--
+ arch/x86/Makefile               |  47 ++++-
+ arch/x86/include/asm/vermagic.h |  66 +++++++
+ 3 files changed, 428 insertions(+), 17 deletions(-)
 
 diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 814fe0d349b0..134390e619bb 100644
+index 814fe0d349b0..872b9cf598e3 100644
 --- a/arch/x86/Kconfig.cpu
 +++ b/arch/x86/Kconfig.cpu
 @@ -157,7 +157,7 @@ config MPENTIUM4
@@ -114,7 +130,7 @@ index 814fe0d349b0..134390e619bb 100644
  	depends on X86_32
  	help
  	  Select this for an AMD Athlon K7-family processor.  Enables use of
-@@ -173,12 +173,90 @@ config MK7
+@@ -173,12 +173,98 @@ config MK7
  	  flags to GCC.
  
  config MK8
@@ -202,11 +218,19 @@ index 814fe0d349b0..134390e619bb 100644
 +	  Select this for AMD Family 17h Zen 2 processors.
 +
 +	  Enables -march=znver2
++
++config MZEN3
++	bool "AMD Zen 3"
++	depends on GCC_VERSION > 100300
++	help
++	  Select this for AMD Family 19h Zen 3 processors.
++
++	  Enables -march=znver3
 +
  config MCRUSOE
  	bool "Crusoe"
  	depends on X86_32
-@@ -270,7 +348,7 @@ config MPSC
+@@ -270,7 +356,7 @@ config MPSC
  	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
  
  config MCORE2
@@ -215,7 +239,7 @@ index 814fe0d349b0..134390e619bb 100644
  	help
  
  	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -278,6 +356,8 @@ config MCORE2
+@@ -278,6 +364,8 @@ config MCORE2
  	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
  	  (not a typo)
  
@@ -224,7 +248,7 @@ index 814fe0d349b0..134390e619bb 100644
  config MATOM
  	bool "Intel Atom"
  	help
-@@ -287,6 +367,150 @@ config MATOM
+@@ -287,6 +375,182 @@ config MATOM
  	  accordingly optimized code. Use a recent GCC with specific Atom
  	  support in order to fully benefit from selecting this option.
  
@@ -356,6 +380,7 @@ index 814fe0d349b0..134390e619bb 100644
 +
 +config MCOOPERLAKE
 +	bool "Intel Cooper Lake"
++	depends on GCC_VERSION > 100100
 +	select X86_P6_NOP
 +	help
 +
@@ -365,22 +390,77 @@ index 814fe0d349b0..134390e619bb 100644
 +
 +config MTIGERLAKE
 +	bool "Intel Tiger Lake"
++	depends on GCC_VERSION > 100100
 +	select X86_P6_NOP
 +	help
 +
 +	  Select this for third-generation 10 nm process processors in the Tiger Lake family.
 +
 +	  Enables -march=tigerlake
++
++config MSAPPHIRERAPIDS
++	bool "Intel Sapphire Rapids"
++	depends on GCC_VERSION > 110000
++	select X86_P6_NOP
++	help
++
++	  Select this for third-generation 10 nm process processors in the Sapphire Rapids family.
++
++	  Enables -march=sapphirerapids
++
++config MROCKETLAKE
++	bool "Intel Rocket Lake"
++	depends on GCC_VERSION > 110000
++	select X86_P6_NOP
++	help
++
++	  Select this for eleventh-generation processors in the Rocket Lake family.
++
++	  Enables -march=rocketlake
++
++config MALDERLAKE
++	bool "Intel Alder Lake"
++	depends on GCC_VERSION > 110000
++	select X86_P6_NOP
++	help
++
++	  Select this for twelfth-generation processors in the Alder Lake family.
++
++	  Enables -march=alderlake
 +
  config GENERIC_CPU
  	bool "Generic-x86-64"
  	depends on X86_64
-@@ -294,6 +518,16 @@ config GENERIC_CPU
+@@ -294,6 +558,50 @@ config GENERIC_CPU
  	  Generic x86-64 CPU.
  	  Run equally well on all x86-64 CPUs.
  
-+config MNATIVE
-+	bool "Native optimizations autodetected by GCC"
++config GENERIC_CPU2
++	bool "Generic-x86-64-v2"
++	depends on GCC_VERSION > 110000
++	depends on X86_64
++	help
++	  Generic x86-64 CPU.
++	  Run equally well on all x86-64 CPUs with min support of x86-64-v2.
++
++config GENERIC_CPU3
++	bool "Generic-x86-64-v3"
++	depends on GCC_VERSION > 110000
++	depends on X86_64
++	help
++	  Generic x86-64-v3 CPU with v3 instructions.
++	  Run equally well on all x86-64 CPUs with min support of x86-64-v3.
++
++config GENERIC_CPU4
++	bool "Generic-x86-64-v4"
++	depends on GCC_VERSION > 110000
++	depends on X86_64
++	help
++	  Generic x86-64 CPU with v4 instructions.
++	  Run equally well on all x86-64 CPUs with min support of x86-64-v4.
++
++config MNATIVE_INTEL
++	bool "Intel-Native optimizations autodetected by GCC"
 +	help
 +
 +	  GCC 4.2 and above support -march=native, which automatically detects
@@ -388,70 +468,80 @@ index 814fe0d349b0..134390e619bb 100644
 +	  for AMD CPUs.  Intel Only!
 +
 +	  Enables -march=native
++
++config MNATIVE_AMD
++	bool "AMD-Native optimizations autodetected by GCC"
++	help
++
++	  GCC 4.2 and above support -march=native, which automatically detects
++	  the optimum settings to use based on your processor. Do NOT use this
++	  for Intel CPUs.  AMD Only!
++
++	  Enables -march=native
 +
  endchoice
  
  config X86_GENERIC
-@@ -318,7 +552,7 @@ config X86_INTERNODE_CACHE_SHIFT
+@@ -318,7 +626,7 @@ config X86_INTERNODE_CACHE_SHIFT
  config X86_L1_CACHE_SHIFT
  	int
  	default "7" if MPENTIUM4 || MPSC
 -	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 || GENERIC_CPU3 || GENERIC_CPU4
  	default "4" if MELAN || M486SX || M486 || MGEODEGX1
  	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
  
-@@ -336,11 +570,11 @@ config X86_ALIGNMENT_16
+@@ -336,11 +644,11 @@ config X86_ALIGNMENT_16
  
  config X86_INTEL_USERCOPY
  	def_bool y
 -	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL
  
  config X86_USE_PPRO_CHECKSUM
  	def_bool y
 -	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
  
  config X86_USE_3DNOW
  	def_bool y
-@@ -360,26 +594,26 @@ config X86_USE_3DNOW
+@@ -360,26 +668,26 @@ config X86_USE_3DNOW
  config X86_P6_NOP
  	def_bool y
  	depends on X86_64
 -	depends on (MCORE2 || MPENTIUM4 || MPSC)
-+	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE)
++	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL)
  
  config X86_TSC
  	def_bool y
 -	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
-+	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
  
  config X86_CMPXCHG64
  	def_bool y
 -	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
-+	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE
++	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
  
  # this should be set for all -march=.. options where the compiler
  # generates cmov.
  config X86_CMOV
  	def_bool y
 -	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
-+	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE)
++	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
  
  config X86_MINIMUM_CPU_FAMILY
  	int
  	default "64" if X86_64
 -	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8)
-+	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE)
++	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
  	default "5" if X86_32 && X86_CMPXCHG64
  	default "4"
  
 diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 7116da3980be..50c8af35092b 100644
+index 9a85eae37b17..facf9a278fe3 100644
 --- a/arch/x86/Makefile
 +++ b/arch/x86/Makefile
-@@ -110,11 +110,40 @@ else
+@@ -113,11 +113,48 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
          cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
          cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
@@ -473,9 +563,11 @@ index 7116da3980be..50c8af35092b 100644
 +        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
 +        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
 +        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
-+        cflags-$(CONFIG_MZEN2) +=  $(call cc-option,-march=znver2)
++        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
++        cflags-$(CONFIG_MZEN3) += $(call cc-option,-march=znver3)
 +
-+        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
++        cflags-$(CONFIG_MNATIVE_INTEL) += $(call cc-option,-march=native)
++        cflags-$(CONFIG_MNATIVE_AMD) += $(call cc-option,-march=native)
 +        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell)
 +        cflags-$(CONFIG_MCORE2) += $(call cc-option,-march=core2)
 +        cflags-$(CONFIG_MNEHALEM) += $(call cc-option,-march=nehalem)
@@ -494,19 +586,27 @@ index 7116da3980be..50c8af35092b 100644
 +        cflags-$(CONFIG_MCASCADELAKE) += $(call cc-option,-march=cascadelake)
 +        cflags-$(CONFIG_MCOOPERLAKE) += $(call cc-option,-march=cooperlake)
 +        cflags-$(CONFIG_MTIGERLAKE) += $(call cc-option,-march=tigerlake)
++        cflags-$(CONFIG_MSAPPHIRERAPIDS) += $(call cc-option,-march=sapphirerapids)
++        cflags-$(CONFIG_MROCKETLAKE) += $(call cc-option,-march=rocketlake)
++        cflags-$(CONFIG_MALDERLAKE) += $(call cc-option,-march=alderlake)
++        cflags-$(CONFIG_GENERIC_CPU2) += $(call cc-option,-march=x86-64-v2)
++        cflags-$(CONFIG_GENERIC_CPU3) += $(call cc-option,-march=x86-64-v3)
++        cflags-$(CONFIG_GENERIC_CPU4) += $(call cc-option,-march=x86-64-v4)
          cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
          KBUILD_CFLAGS += $(cflags-y)
  
 diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
-index 75884d2cdec3..14c222e78213 100644
+index 75884d2cdec3..4e6a08d4c7e5 100644
 --- a/arch/x86/include/asm/vermagic.h
 +++ b/arch/x86/include/asm/vermagic.h
-@@ -17,6 +17,40 @@
+@@ -17,6 +17,48 @@
  #define MODULE_PROC_FAMILY "586MMX "
  #elif defined CONFIG_MCORE2
  #define MODULE_PROC_FAMILY "CORE2 "
-+#elif defined CONFIG_MNATIVE
-+#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNATIVE_INTEL
++#define MODULE_PROC_FAMILY "NATIVE_INTEL "
++#elif defined CONFIG_MNATIVE_AMD
++#define MODULE_PROC_FAMILY "NATIVE_AMD "
 +#elif defined CONFIG_MNEHALEM
 +#define MODULE_PROC_FAMILY "NEHALEM "
 +#elif defined CONFIG_MWESTMERE
@@ -539,10 +639,16 @@ index 75884d2cdec3..14c222e78213 100644
 +#define MODULE_PROC_FAMILY "COOPERLAKE "
 +#elif defined CONFIG_MTIGERLAKE
 +#define MODULE_PROC_FAMILY "TIGERLAKE "
++#elif defined CONFIG_MSAPPHIRERAPIDS
++#define MODULE_PROC_FAMILY "SAPPHIRERAPIDS "
++#elif defined CONFIG_ROCKETLAKE
++#define MODULE_PROC_FAMILY "ROCKETLAKE "
++#elif defined CONFIG_MALDERLAKE
++#define MODULE_PROC_FAMILY "ALDERLAKE "
  #elif defined CONFIG_MATOM
  #define MODULE_PROC_FAMILY "ATOM "
  #elif defined CONFIG_M686
-@@ -35,6 +69,28 @@
+@@ -35,6 +77,30 @@
  #define MODULE_PROC_FAMILY "K7 "
  #elif defined CONFIG_MK8
  #define MODULE_PROC_FAMILY "K8 "
@@ -568,10 +674,11 @@ index 75884d2cdec3..14c222e78213 100644
 +#define MODULE_PROC_FAMILY "ZEN "
 +#elif defined CONFIG_MZEN2
 +#define MODULE_PROC_FAMILY "ZEN2 "
++#elif defined CONFIG_MZEN3
++#define MODULE_PROC_FAMILY "ZEN3 "
  #elif defined CONFIG_MELAN
  #define MODULE_PROC_FAMILY "ELAN "
  #elif defined CONFIG_MCRUSOE
 -- 
-2.30.1
-
+2.31.1
 

diff --git a/5012_enable-cpu-optimizations-for-gcc91.patch b/5012_enable-cpu-optimizations-for-gcc91.patch
deleted file mode 100644
index 56aff7e..0000000
--- a/5012_enable-cpu-optimizations-for-gcc91.patch
+++ /dev/null
@@ -1,549 +0,0 @@
-From 56af79dc8be395c6adf25a05de3566822dbb2947 Mon Sep 17 00:00:00 2001
-From: graysky <graysky@archlinux.us>
-Date: Tue, 9 Mar 2021 01:57:33 -0500
-Subject: [PATCH] more-uarches-for-gcc-v9-and-kernel-5.8+
-
-WARNING
-This patch works with gcc versions 9.1+ and with kernel version 5.8+ and should
-NOT be applied when compiling on older versions of gcc due to key name changes
-of the march flags introduced with the version 4.9 release of gcc.[1]
-
-Use the older version of this patch hosted on the same github for older
-versions of gcc.
-
-FEATURES
-This patch adds additional CPU options to the Linux kernel accessible under:
- Processor type and features  --->
-  Processor family --->
-
-The expanded microarchitectures include:
-* AMD Improved K8-family
-* AMD K10-family
-* AMD Family 10h (Barcelona)
-* AMD Family 14h (Bobcat)
-* AMD Family 16h (Jaguar)
-* AMD Family 15h (Bulldozer)
-* AMD Family 15h (Piledriver)
-* AMD Family 15h (Steamroller)
-* AMD Family 15h (Excavator)
-* AMD Family 17h (Zen)
-* AMD Family 17h (Zen 2)
-* Intel Silvermont low-power processors
-* Intel Goldmont low-power processors (Apollo Lake and Denverton)
-* Intel Goldmont Plus low-power processors (Gemini Lake)
-* Intel 1st Gen Core i3/i5/i7 (Nehalem)
-* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
-* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
-* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
-* Intel 4th Gen Core i3/i5/i7 (Haswell)
-* Intel 5th Gen Core i3/i5/i7 (Broadwell)
-* Intel 6th Gen Core i3/i5/i7 (Skylake)
-* Intel 6th Gen Core i7/i9 (Skylake X)
-* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
-* Intel 10th Gen Core i7/i9 (Ice Lake)
-* Intel Xeon (Cascade Lake)
-
-It also offers to compile passing the 'native' option which, "selects the CPU
-to generate code for at compilation time by determining the processor type of
-the compiling machine. Using -march=native enables all instruction subsets
-supported by the local machine and will produce code optimized for the local
-machine under the constraints of the selected instruction set."[2]
-
-Do NOT try using the 'native' option on AMD Piledriver, Steamroller, or
-Excavator CPUs (-march=bdver{2,3,4} flag). The build will error out due the
-kernel's objtool issue with these.[3a,b]
-
-MINOR NOTES
-This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
-changes. Note that upstream is using the deprecated 'match=atom' flags when I
-believe it should use the newer 'march=bonnell' flag for atom processors.[4]
-
-It is not recommended to compile on Atom-CPUs with the 'native' option.[5] The
-recommendation is to use the 'atom' option instead.
-
-BENEFITS
-Small but real speed increases are measurable using a make endpoint comparing
-a generic kernel to one built with one of the respective microarchs.
-
-See the following experimental evidence supporting this statement:
-https://github.com/graysky2/kernel_gcc_patch
-
-REQUIREMENTS
-linux version >=5.8
-gcc version >=9.1 and <10
-
-ACKNOWLEDGMENTS
-This patch builds on the seminal work by Jeroen.[6]
-
-REFERENCES
-1.  https://gcc.gnu.org/gcc-4.9/changes.html
-2.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
-3a. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95671#c11
-3b. https://github.com/graysky2/kernel_gcc_patch/issues/55
-4.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
-5.  https://github.com/graysky2/kernel_gcc_patch/issues/15
-6.  http://www.linuxforge.net/docs/linux/linux-gcc.php
----
- arch/x86/Kconfig.cpu            | 240 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile               |  37 ++++-
- arch/x86/include/asm/vermagic.h |  52 +++++++
- 3 files changed, 312 insertions(+), 17 deletions(-)
-
-diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 814fe0d349b0..aa7dd036e8a3 100644
---- a/arch/x86/Kconfig.cpu
-+++ b/arch/x86/Kconfig.cpu
-@@ -157,7 +157,7 @@ config MPENTIUM4
- 
- 
- config MK6
--	bool "K6/K6-II/K6-III"
-+	bool "AMD K6/K6-II/K6-III"
- 	depends on X86_32
- 	help
- 	  Select this for an AMD K6-family processor.  Enables use of
-@@ -165,7 +165,7 @@ config MK6
- 	  flags to GCC.
- 
- config MK7
--	bool "Athlon/Duron/K7"
-+	bool "AMD Athlon/Duron/K7"
- 	depends on X86_32
- 	help
- 	  Select this for an AMD Athlon K7-family processor.  Enables use of
-@@ -173,12 +173,90 @@ config MK7
- 	  flags to GCC.
- 
- config MK8
--	bool "Opteron/Athlon64/Hammer/K8"
-+	bool "AMD Opteron/Athlon64/Hammer/K8"
- 	help
- 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
- 	  Enables use of some extended instructions, and passes appropriate
- 	  optimization flags to GCC.
- 
-+config MK8SSE3
-+	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
-+	help
-+	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
-+	  Enables use of some extended instructions, and passes appropriate
-+	  optimization flags to GCC.
-+
-+config MK10
-+	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
-+	help
-+	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
-+	  Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
-+	  Enables use of some extended instructions, and passes appropriate
-+	  optimization flags to GCC.
-+
-+config MBARCELONA
-+	bool "AMD Barcelona"
-+	help
-+	  Select this for AMD Family 10h Barcelona processors.
-+
-+	  Enables -march=barcelona
-+
-+config MBOBCAT
-+	bool "AMD Bobcat"
-+	help
-+	  Select this for AMD Family 14h Bobcat processors.
-+
-+	  Enables -march=btver1
-+
-+config MJAGUAR
-+	bool "AMD Jaguar"
-+	help
-+	  Select this for AMD Family 16h Jaguar processors.
-+
-+	  Enables -march=btver2
-+
-+config MBULLDOZER
-+	bool "AMD Bulldozer"
-+	help
-+	  Select this for AMD Family 15h Bulldozer processors.
-+
-+	  Enables -march=bdver1
-+
-+config MPILEDRIVER
-+	bool "AMD Piledriver"
-+	help
-+	  Select this for AMD Family 15h Piledriver processors.
-+
-+	  Enables -march=bdver2
-+
-+config MSTEAMROLLER
-+	bool "AMD Steamroller"
-+	help
-+	  Select this for AMD Family 15h Steamroller processors.
-+
-+	  Enables -march=bdver3
-+
-+config MEXCAVATOR
-+	bool "AMD Excavator"
-+	help
-+	  Select this for AMD Family 15h Excavator processors.
-+
-+	  Enables -march=bdver4
-+
-+config MZEN
-+	bool "AMD Zen"
-+	help
-+	  Select this for AMD Family 17h Zen processors.
-+
-+	  Enables -march=znver1
-+
-+config MZEN2
-+	bool "AMD Zen 2"
-+	help
-+	  Select this for AMD Family 17h Zen 2 processors.
-+
-+	  Enables -march=znver2
-+
- config MCRUSOE
- 	bool "Crusoe"
- 	depends on X86_32
-@@ -270,7 +348,7 @@ config MPSC
- 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
- 
- config MCORE2
--	bool "Core 2/newer Xeon"
-+	bool "Intel Core 2"
- 	help
- 
- 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -278,6 +356,8 @@ config MCORE2
- 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
- 	  (not a typo)
- 
-+	  Enables -march=core2
-+
- config MATOM
- 	bool "Intel Atom"
- 	help
-@@ -287,6 +367,132 @@ config MATOM
- 	  accordingly optimized code. Use a recent GCC with specific Atom
- 	  support in order to fully benefit from selecting this option.
- 
-+config MNEHALEM
-+	bool "Intel Nehalem"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 1st Gen Core processors in the Nehalem family.
-+
-+	  Enables -march=nehalem
-+
-+config MWESTMERE
-+	bool "Intel Westmere"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for the Intel Westmere formerly Nehalem-C family.
-+
-+	  Enables -march=westmere
-+
-+config MSILVERMONT
-+	bool "Intel Silvermont"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for the Intel Silvermont platform.
-+
-+	  Enables -march=silvermont
-+
-+config MGOLDMONT
-+	bool "Intel Goldmont"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
-+
-+	  Enables -march=goldmont
-+
-+config MGOLDMONTPLUS
-+	bool "Intel Goldmont Plus"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for the Intel Goldmont Plus platform including Gemini Lake.
-+
-+	  Enables -march=goldmont-plus
-+
-+config MSANDYBRIDGE
-+	bool "Intel Sandy Bridge"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
-+
-+	  Enables -march=sandybridge
-+
-+config MIVYBRIDGE
-+	bool "Intel Ivy Bridge"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
-+
-+	  Enables -march=ivybridge
-+
-+config MHASWELL
-+	bool "Intel Haswell"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 4th Gen Core processors in the Haswell family.
-+
-+	  Enables -march=haswell
-+
-+config MBROADWELL
-+	bool "Intel Broadwell"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 5th Gen Core processors in the Broadwell family.
-+
-+	  Enables -march=broadwell
-+
-+config MSKYLAKE
-+	bool "Intel Skylake"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 6th Gen Core processors in the Skylake family.
-+
-+	  Enables -march=skylake
-+
-+config MSKYLAKEX
-+	bool "Intel Skylake X"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 6th Gen Core processors in the Skylake X family.
-+
-+	  Enables -march=skylake-avx512
-+
-+config MCANNONLAKE
-+	bool "Intel Cannon Lake"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 8th Gen Core processors
-+
-+	  Enables -march=cannonlake
-+
-+config MICELAKE
-+	bool "Intel Ice Lake"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 10th Gen Core processors in the Ice Lake family.
-+
-+	  Enables -march=icelake-client
-+
-+config MCASCADELAKE
-+	bool "Intel Cascade Lake"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for Xeon processors in the Cascade Lake family.
-+
-+	  Enables -march=cascadelake
-+
- config GENERIC_CPU
- 	bool "Generic-x86-64"
- 	depends on X86_64
-@@ -294,6 +500,16 @@ config GENERIC_CPU
- 	  Generic x86-64 CPU.
- 	  Run equally well on all x86-64 CPUs.
- 
-+config MNATIVE
-+	bool "Native optimizations autodetected by GCC"
-+	help
-+
-+	  GCC 4.2 and above support -march=native, which automatically detects
-+	  the optimum settings to use based on your processor. Do NOT use this
-+	  for AMD CPUs.  Intel Only!
-+
-+	  Enables -march=native
-+
- endchoice
- 
- config X86_GENERIC
-@@ -318,7 +534,7 @@ config X86_INTERNODE_CACHE_SHIFT
- config X86_L1_CACHE_SHIFT
- 	int
- 	default "7" if MPENTIUM4 || MPSC
--	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || X86_GENERIC || GENERIC_CPU
- 	default "4" if MELAN || M486SX || M486 || MGEODEGX1
- 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
- 
-@@ -336,11 +552,11 @@ config X86_ALIGNMENT_16
- 
- config X86_INTEL_USERCOPY
- 	def_bool y
--	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
- 
- config X86_USE_PPRO_CHECKSUM
- 	def_bool y
--	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
- 
- config X86_USE_3DNOW
- 	def_bool y
-@@ -360,26 +576,26 @@ config X86_USE_3DNOW
- config X86_P6_NOP
- 	def_bool y
- 	depends on X86_64
--	depends on (MCORE2 || MPENTIUM4 || MPSC)
-+	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
- 
- config X86_TSC
- 	def_bool y
--	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
-+	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE) || X86_64
- 
- config X86_CMPXCHG64
- 	def_bool y
--	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
-+	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
- 
- # this should be set for all -march=.. options where the compiler
- # generates cmov.
- config X86_CMOV
- 	def_bool y
--	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
-+	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
- 
- config X86_MINIMUM_CPU_FAMILY
- 	int
- 	default "64" if X86_64
--	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8)
-+	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
- 	default "5" if X86_32 && X86_CMPXCHG64
- 	default "4"
- 
-diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 00e378de8bc0..7602ef4a2dd4 100644
---- a/arch/x86/Makefile
-+++ b/arch/x86/Makefile
-@@ -121,11 +121,38 @@ else
-         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
-         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
--
--        cflags-$(CONFIG_MCORE2) += \
--                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
--	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
--		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3)
-+        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
-+        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
-+        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
-+        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
-+        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
-+        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
-+        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
-+        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
-+        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
-+        cflags-$(CONFIG_MZEN2) +=  $(call cc-option,-march=znver2)
-+
-+        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
-+        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell)
-+        cflags-$(CONFIG_MCORE2) += $(call cc-option,-march=core2)
-+        cflags-$(CONFIG_MNEHALEM) += $(call cc-option,-march=nehalem)
-+        cflags-$(CONFIG_MWESTMERE) += $(call cc-option,-march=westmere)
-+        cflags-$(CONFIG_MSILVERMONT) += $(call cc-option,-march=silvermont)
-+        cflags-$(CONFIG_MGOLDMONT) += $(call cc-option,-march=goldmont)
-+        cflags-$(CONFIG_MGOLDMONTPLUS) += $(call cc-option,-march=goldmont-plus)
-+        cflags-$(CONFIG_MSANDYBRIDGE) += $(call cc-option,-march=sandybridge)
-+        cflags-$(CONFIG_MIVYBRIDGE) += $(call cc-option,-march=ivybridge)
-+        cflags-$(CONFIG_MHASWELL) += $(call cc-option,-march=haswell)
-+        cflags-$(CONFIG_MBROADWELL) += $(call cc-option,-march=broadwell)
-+        cflags-$(CONFIG_MSKYLAKE) += $(call cc-option,-march=skylake)
-+        cflags-$(CONFIG_MSKYLAKEX) += $(call cc-option,-march=skylake-avx512)
-+        cflags-$(CONFIG_MCANNONLAKE) += $(call cc-option,-march=cannonlake)
-+        cflags-$(CONFIG_MICELAKE) += $(call cc-option,-march=icelake-client)
-+        cflags-$(CONFIG_MCASCADELAKE) += $(call cc-option,-march=cascadelake)
-         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
-         KBUILD_CFLAGS += $(cflags-y)
- 
-diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
-index 75884d2cdec3..0cf864d2d110 100644
---- a/arch/x86/include/asm/vermagic.h
-+++ b/arch/x86/include/asm/vermagic.h
-@@ -17,6 +17,36 @@
- #define MODULE_PROC_FAMILY "586MMX "
- #elif defined CONFIG_MCORE2
- #define MODULE_PROC_FAMILY "CORE2 "
-+#elif defined CONFIG_MNATIVE
-+#define MODULE_PROC_FAMILY "NATIVE "
-+#elif defined CONFIG_MNEHALEM
-+#define MODULE_PROC_FAMILY "NEHALEM "
-+#elif defined CONFIG_MWESTMERE
-+#define MODULE_PROC_FAMILY "WESTMERE "
-+#elif defined CONFIG_MSILVERMONT
-+#define MODULE_PROC_FAMILY "SILVERMONT "
-+#elif defined CONFIG_MGOLDMONT
-+#define MODULE_PROC_FAMILY "GOLDMONT "
-+#elif defined CONFIG_MGOLDMONTPLUS
-+#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
-+#elif defined CONFIG_MSANDYBRIDGE
-+#define MODULE_PROC_FAMILY "SANDYBRIDGE "
-+#elif defined CONFIG_MIVYBRIDGE
-+#define MODULE_PROC_FAMILY "IVYBRIDGE "
-+#elif defined CONFIG_MHASWELL
-+#define MODULE_PROC_FAMILY "HASWELL "
-+#elif defined CONFIG_MBROADWELL
-+#define MODULE_PROC_FAMILY "BROADWELL "
-+#elif defined CONFIG_MSKYLAKE
-+#define MODULE_PROC_FAMILY "SKYLAKE "
-+#elif defined CONFIG_MSKYLAKEX
-+#define MODULE_PROC_FAMILY "SKYLAKEX "
-+#elif defined CONFIG_MCANNONLAKE
-+#define MODULE_PROC_FAMILY "CANNONLAKE "
-+#elif defined CONFIG_MICELAKE
-+#define MODULE_PROC_FAMILY "ICELAKE "
-+#elif defined CONFIG_MCASCADELAKE
-+#define MODULE_PROC_FAMILY "CASCADELAKE "
- #elif defined CONFIG_MATOM
- #define MODULE_PROC_FAMILY "ATOM "
- #elif defined CONFIG_M686
-@@ -35,6 +65,28 @@
- #define MODULE_PROC_FAMILY "K7 "
- #elif defined CONFIG_MK8
- #define MODULE_PROC_FAMILY "K8 "
-+#elif defined CONFIG_MK8SSE3
-+#define MODULE_PROC_FAMILY "K8SSE3 "
-+#elif defined CONFIG_MK10
-+#define MODULE_PROC_FAMILY "K10 "
-+#elif defined CONFIG_MBARCELONA
-+#define MODULE_PROC_FAMILY "BARCELONA "
-+#elif defined CONFIG_MBOBCAT
-+#define MODULE_PROC_FAMILY "BOBCAT "
-+#elif defined CONFIG_MBULLDOZER
-+#define MODULE_PROC_FAMILY "BULLDOZER "
-+#elif defined CONFIG_MPILEDRIVER
-+#define MODULE_PROC_FAMILY "PILEDRIVER "
-+#elif defined CONFIG_MSTEAMROLLER
-+#define MODULE_PROC_FAMILY "STEAMROLLER "
-+#elif defined CONFIG_MJAGUAR
-+#define MODULE_PROC_FAMILY "JAGUAR "
-+#elif defined CONFIG_MEXCAVATOR
-+#define MODULE_PROC_FAMILY "EXCAVATOR "
-+#elif defined CONFIG_MZEN
-+#define MODULE_PROC_FAMILY "ZEN "
-+#elif defined CONFIG_MZEN2
-+#define MODULE_PROC_FAMILY "ZEN2 "
- #elif defined CONFIG_MELAN
- #define MODULE_PROC_FAMILY "ELAN "
- #elif defined CONFIG_MCRUSOE
--- 
-2.30.1
-


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-05-02 16:03 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-05-02 16:03 UTC (permalink / raw
  To: gentoo-commits

commit:     ba11a603689ad309716c66c8c99cfa4d12e57bf1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May  2 16:02:57 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May  2 16:02:57 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ba11a603

Linux patch 5.10.34

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |  4 +++
 1033_linux-5.10.34.patch | 75 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 79 insertions(+)

diff --git a/0000_README b/0000_README
index 9d8e885..51a1c56 100644
--- a/0000_README
+++ b/0000_README
@@ -175,6 +175,10 @@ Patch:  1032_linux-5.10.33.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.33
 
+Patch:  1033_linux-5.10.34.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.34
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1033_linux-5.10.34.patch b/1033_linux-5.10.34.patch
new file mode 100644
index 0000000..93b10c1
--- /dev/null
+++ b/1033_linux-5.10.34.patch
@@ -0,0 +1,75 @@
+diff --git a/Makefile b/Makefile
+index fd5c8b5c013bf..ac2f14a067d33 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 33
++SUBLEVEL = 34
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 14be76d4c2e61..cb34925e10f15 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -105,6 +105,7 @@
+ 
+ #define MEI_DEV_ID_ADP_S      0x7AE8  /* Alder Lake Point S */
+ #define MEI_DEV_ID_ADP_LP     0x7A60  /* Alder Lake Point LP */
++#define MEI_DEV_ID_ADP_P      0x51E0  /* Alder Lake Point P */
+ 
+ /*
+  * MEI HW Section
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index a7e179626b635..c3393b383e598 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -111,6 +111,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_S, MEI_ME_PCH15_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_LP, MEI_ME_PCH15_CFG)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_P, MEI_ME_PCH15_CFG)},
+ 
+ 	/* required last entry */
+ 	{0, }
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+index 8c7138247869a..833fd133d15bd 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+@@ -87,6 +87,7 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
+ 	const u8 *cmddata[IWL_MAX_CMD_TBS_PER_TFD];
+ 	u16 cmdlen[IWL_MAX_CMD_TBS_PER_TFD];
+ 	struct iwl_tfh_tfd *tfd;
++	unsigned long flags;
+ 
+ 	copy_size = sizeof(struct iwl_cmd_header_wide);
+ 	cmd_size = sizeof(struct iwl_cmd_header_wide);
+@@ -155,14 +156,14 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
+ 		goto free_dup_buf;
+ 	}
+ 
+-	spin_lock_bh(&txq->lock);
++	spin_lock_irqsave(&txq->lock, flags);
+ 
+ 	idx = iwl_txq_get_cmd_index(txq, txq->write_ptr);
+ 	tfd = iwl_txq_get_tfd(trans, txq, txq->write_ptr);
+ 	memset(tfd, 0, sizeof(*tfd));
+ 
+ 	if (iwl_txq_space(trans, txq) < ((cmd->flags & CMD_ASYNC) ? 2 : 1)) {
+-		spin_unlock_bh(&txq->lock);
++		spin_unlock_irqrestore(&txq->lock, flags);
+ 
+ 		IWL_ERR(trans, "No space in command queue\n");
+ 		iwl_op_mode_cmd_queue_full(trans->op_mode);
+@@ -297,7 +298,7 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
+ 	spin_unlock(&trans_pcie->reg_lock);
+ 
+ out:
+-	spin_unlock_bh(&txq->lock);
++	spin_unlock_irqrestore(&txq->lock, flags);
+ free_dup_buf:
+ 	if (idx < 0)
+ 		kfree(dup_buf);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-05-07 11:27 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-05-07 11:27 UTC (permalink / raw
  To: gentoo-commits

commit:     abc8f3e6b5e5ca47bde39592a244189dd20ae229
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri May  7 11:25:26 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri May  7 11:26:43 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=abc8f3e6

Linux patch 5.10.35

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1034_linux-5.10.35.patch | 1122 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1126 insertions(+)

diff --git a/0000_README b/0000_README
index 51a1c56..675256e 100644
--- a/0000_README
+++ b/0000_README
@@ -179,6 +179,10 @@ Patch:  1033_linux-5.10.34.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.34
 
+Patch:  1034_linux-5.10.35.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.35
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1034_linux-5.10.35.patch b/1034_linux-5.10.35.patch
new file mode 100644
index 0000000..63a5724
--- /dev/null
+++ b/1034_linux-5.10.35.patch
@@ -0,0 +1,1122 @@
+diff --git a/Makefile b/Makefile
+index ac2f14a067d33..6ca39f3aa4e94 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 34
++SUBLEVEL = 35
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/mips/include/asm/vdso/gettimeofday.h b/arch/mips/include/asm/vdso/gettimeofday.h
+index 2203e2d0ae2ad..44a45f3fa4b01 100644
+--- a/arch/mips/include/asm/vdso/gettimeofday.h
++++ b/arch/mips/include/asm/vdso/gettimeofday.h
+@@ -20,6 +20,12 @@
+ 
+ #define VDSO_HAS_CLOCK_GETRES		1
+ 
++#if MIPS_ISA_REV < 6
++#define VDSO_SYSCALL_CLOBBERS "hi", "lo",
++#else
++#define VDSO_SYSCALL_CLOBBERS
++#endif
++
+ static __always_inline long gettimeofday_fallback(
+ 				struct __kernel_old_timeval *_tv,
+ 				struct timezone *_tz)
+@@ -35,7 +41,9 @@ static __always_inline long gettimeofday_fallback(
+ 	: "=r" (ret), "=r" (error)
+ 	: "r" (tv), "r" (tz), "r" (nr)
+ 	: "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
+-	  "$14", "$15", "$24", "$25", "hi", "lo", "memory");
++	  "$14", "$15", "$24", "$25",
++	  VDSO_SYSCALL_CLOBBERS
++	  "memory");
+ 
+ 	return error ? -ret : ret;
+ }
+@@ -59,7 +67,9 @@ static __always_inline long clock_gettime_fallback(
+ 	: "=r" (ret), "=r" (error)
+ 	: "r" (clkid), "r" (ts), "r" (nr)
+ 	: "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
+-	  "$14", "$15", "$24", "$25", "hi", "lo", "memory");
++	  "$14", "$15", "$24", "$25",
++	  VDSO_SYSCALL_CLOBBERS
++	  "memory");
+ 
+ 	return error ? -ret : ret;
+ }
+@@ -83,7 +93,9 @@ static __always_inline int clock_getres_fallback(
+ 	: "=r" (ret), "=r" (error)
+ 	: "r" (clkid), "r" (ts), "r" (nr)
+ 	: "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
+-	  "$14", "$15", "$24", "$25", "hi", "lo", "memory");
++	  "$14", "$15", "$24", "$25",
++	  VDSO_SYSCALL_CLOBBERS
++	  "memory");
+ 
+ 	return error ? -ret : ret;
+ }
+@@ -105,7 +117,9 @@ static __always_inline long clock_gettime32_fallback(
+ 	: "=r" (ret), "=r" (error)
+ 	: "r" (clkid), "r" (ts), "r" (nr)
+ 	: "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
+-	  "$14", "$15", "$24", "$25", "hi", "lo", "memory");
++	  "$14", "$15", "$24", "$25",
++	  VDSO_SYSCALL_CLOBBERS
++	  "memory");
+ 
+ 	return error ? -ret : ret;
+ }
+@@ -125,7 +139,9 @@ static __always_inline int clock_getres32_fallback(
+ 	: "=r" (ret), "=r" (error)
+ 	: "r" (clkid), "r" (ts), "r" (nr)
+ 	: "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
+-	  "$14", "$15", "$24", "$25", "hi", "lo", "memory");
++	  "$14", "$15", "$24", "$25",
++	  VDSO_SYSCALL_CLOBBERS
++	  "memory");
+ 
+ 	return error ? -ret : ret;
+ }
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index fecfcfcf161ca..368f0aac5e1d4 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -4482,8 +4482,7 @@ static void igb_setup_mrqc(struct igb_adapter *adapter)
+ 		else
+ 			mrqc |= E1000_MRQC_ENABLE_VMDQ;
+ 	} else {
+-		if (hw->mac.type != e1000_i211)
+-			mrqc |= E1000_MRQC_ENABLE_RSS_MQ;
++		mrqc |= E1000_MRQC_ENABLE_RSS_MQ;
+ 	}
+ 	igb_vmm_control(adapter);
+ 
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index 5541f3faedbca..b77b0a33d697d 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -296,12 +296,12 @@ static int ax88179_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
+ 	int ret;
+ 
+ 	if (2 == size) {
+-		u16 buf;
++		u16 buf = 0;
+ 		ret = __ax88179_read_cmd(dev, cmd, value, index, size, &buf, 0);
+ 		le16_to_cpus(&buf);
+ 		*((u16 *)data) = buf;
+ 	} else if (4 == size) {
+-		u32 buf;
++		u32 buf = 0;
+ 		ret = __ax88179_read_cmd(dev, cmd, value, index, size, &buf, 0);
+ 		le32_to_cpus(&buf);
+ 		*((u32 *)data) = buf;
+@@ -1296,6 +1296,8 @@ static void ax88179_get_mac_addr(struct usbnet *dev)
+ {
+ 	u8 mac[ETH_ALEN];
+ 
++	memset(mac, 0, sizeof(mac));
++
+ 	/* Maybe the boot loader passed the MAC address via device tree */
+ 	if (!eth_platform_get_mac_address(&dev->udev->dev, mac)) {
+ 		netif_dbg(dev, ifup, dev->net,
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 4dca58f4afdf7..716039ea4450e 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2634,6 +2634,7 @@ static void nvme_reset_work(struct work_struct *work)
+ 	 * Don't limit the IOMMU merged segment size.
+ 	 */
+ 	dma_set_max_seg_size(dev->dev, 0xffffffff);
++	dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
+ 
+ 	mutex_unlock(&dev->shutdown_lock);
+ 
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 3b0acaeb20cf7..1c25af28a7233 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -6258,6 +6258,7 @@ enum thermal_access_mode {
+ enum { /* TPACPI_THERMAL_TPEC_* */
+ 	TP_EC_THERMAL_TMP0 = 0x78,	/* ACPI EC regs TMP 0..7 */
+ 	TP_EC_THERMAL_TMP8 = 0xC0,	/* ACPI EC regs TMP 8..15 */
++	TP_EC_FUNCREV      = 0xEF,      /* ACPI EC Functional revision */
+ 	TP_EC_THERMAL_TMP_NA = -128,	/* ACPI EC sensor not available */
+ 
+ 	TPACPI_THERMAL_SENSOR_NA = -128000, /* Sensor not available */
+@@ -6456,7 +6457,7 @@ static const struct attribute_group thermal_temp_input8_group = {
+ 
+ static int __init thermal_init(struct ibm_init_struct *iibm)
+ {
+-	u8 t, ta1, ta2;
++	u8 t, ta1, ta2, ver = 0;
+ 	int i;
+ 	int acpi_tmp7;
+ 	int res;
+@@ -6471,7 +6472,14 @@ static int __init thermal_init(struct ibm_init_struct *iibm)
+ 		 * 0x78-0x7F, 0xC0-0xC7.  Registers return 0x00 for
+ 		 * non-implemented, thermal sensors return 0x80 when
+ 		 * not available
++		 * The above rule is unfortunately flawed. This has been seen with
++		 * 0xC2 (power supply ID) causing thermal control problems.
++		 * The EC version can be determined by offset 0xEF and at least for
++		 * version 3 the Lenovo firmware team confirmed that registers 0xC0-0xC7
++		 * are not thermal registers.
+ 		 */
++		if (!acpi_ec_read(TP_EC_FUNCREV, &ver))
++			pr_warn("Thinkpad ACPI EC unable to access EC version\n");
+ 
+ 		ta1 = ta2 = 0;
+ 		for (i = 0; i < 8; i++) {
+@@ -6481,11 +6489,13 @@ static int __init thermal_init(struct ibm_init_struct *iibm)
+ 				ta1 = 0;
+ 				break;
+ 			}
+-			if (acpi_ec_read(TP_EC_THERMAL_TMP8 + i, &t)) {
+-				ta2 |= t;
+-			} else {
+-				ta1 = 0;
+-				break;
++			if (ver < 3) {
++				if (acpi_ec_read(TP_EC_THERMAL_TMP8 + i, &t)) {
++					ta2 |= t;
++				} else {
++					ta1 = 0;
++					break;
++				}
+ 			}
+ 		}
+ 		if (ta1 == 0) {
+@@ -6498,9 +6508,12 @@ static int __init thermal_init(struct ibm_init_struct *iibm)
+ 				thermal_read_mode = TPACPI_THERMAL_NONE;
+ 			}
+ 		} else {
+-			thermal_read_mode =
+-			    (ta2 != 0) ?
+-			    TPACPI_THERMAL_TPEC_16 : TPACPI_THERMAL_TPEC_8;
++			if (ver >= 3)
++				thermal_read_mode = TPACPI_THERMAL_TPEC_8;
++			else
++				thermal_read_mode =
++					(ta2 != 0) ?
++					TPACPI_THERMAL_TPEC_16 : TPACPI_THERMAL_TPEC_8;
+ 		}
+ 	} else if (acpi_tmp7) {
+ 		if (tpacpi_is_ibm() &&
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 76ac5d6555ae4..21e7522655ac9 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -406,6 +406,7 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 
+ 	/* Realtek hub in Dell WD19 (Type-C) */
+ 	{ USB_DEVICE(0x0bda, 0x0487), .driver_info = USB_QUIRK_NO_LPM },
++	{ USB_DEVICE(0x0bda, 0x5487), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+ 	/* Generic RTL8153 based ethernet adapters */
+ 	{ USB_DEVICE(0x0bda, 0x8153), .driver_info = USB_QUIRK_NO_LPM },
+@@ -438,6 +439,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x17ef, 0xa012), .driver_info =
+ 			USB_QUIRK_DISCONNECT_SUSPEND },
+ 
++	/* Lenovo ThinkPad USB-C Dock Gen2 Ethernet (RTL8153 GigE) */
++	{ USB_DEVICE(0x17ef, 0xa387), .driver_info = USB_QUIRK_NO_LPM },
++
+ 	/* BUILDWIN Photo Frame */
+ 	{ USB_DEVICE(0x1908, 0x1315), .driver_info =
+ 			USB_QUIRK_HONOR_BNUMINTERFACES },
+diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
+index 90c0525b1e0cf..67d0bf4efa160 100644
+--- a/drivers/vfio/Kconfig
++++ b/drivers/vfio/Kconfig
+@@ -22,7 +22,7 @@ config VFIO_VIRQFD
+ menuconfig VFIO
+ 	tristate "VFIO Non-Privileged userspace driver framework"
+ 	select IOMMU_API
+-	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM || ARM64)
++	select VFIO_IOMMU_TYPE1 if MMU && (X86 || S390 || ARM || ARM64)
+ 	help
+ 	  VFIO provides a framework for secure userspace device drivers.
+ 	  See Documentation/driver-api/vfio.rst for more details.
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index a6162c4076db7..f3309e044f079 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -913,6 +913,7 @@ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
+ 			continue;
+ 
+ 		if ((uppermetacopy || d.metacopy) && !ofs->config.metacopy) {
++			dput(this);
+ 			err = -EPERM;
+ 			pr_warn_ratelimited("refusing to follow metacopy origin for (%pd2)\n", dentry);
+ 			goto out_put;
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 50529a4e7bf39..77f08ac04d1f3 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -1759,7 +1759,8 @@ out_err:
+  * - upper/work dir of any overlayfs instance
+  */
+ static int ovl_check_layer(struct super_block *sb, struct ovl_fs *ofs,
+-			   struct dentry *dentry, const char *name)
++			   struct dentry *dentry, const char *name,
++			   bool is_lower)
+ {
+ 	struct dentry *next = dentry, *parent;
+ 	int err = 0;
+@@ -1771,7 +1772,7 @@ static int ovl_check_layer(struct super_block *sb, struct ovl_fs *ofs,
+ 
+ 	/* Walk back ancestors to root (inclusive) looking for traps */
+ 	while (!err && parent != next) {
+-		if (ovl_lookup_trap_inode(sb, parent)) {
++		if (is_lower && ovl_lookup_trap_inode(sb, parent)) {
+ 			err = -ELOOP;
+ 			pr_err("overlapping %s path\n", name);
+ 		} else if (ovl_is_inuse(parent)) {
+@@ -1797,7 +1798,7 @@ static int ovl_check_overlapping_layers(struct super_block *sb,
+ 
+ 	if (ovl_upper_mnt(ofs)) {
+ 		err = ovl_check_layer(sb, ofs, ovl_upper_mnt(ofs)->mnt_root,
+-				      "upperdir");
++				      "upperdir", false);
+ 		if (err)
+ 			return err;
+ 
+@@ -1808,7 +1809,8 @@ static int ovl_check_overlapping_layers(struct super_block *sb,
+ 		 * workbasedir.  In that case, we already have their traps in
+ 		 * inode cache and we will catch that case on lookup.
+ 		 */
+-		err = ovl_check_layer(sb, ofs, ofs->workbasedir, "workdir");
++		err = ovl_check_layer(sb, ofs, ofs->workbasedir, "workdir",
++				      false);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -1816,7 +1818,7 @@ static int ovl_check_overlapping_layers(struct super_block *sb,
+ 	for (i = 1; i < ofs->numlayer; i++) {
+ 		err = ovl_check_layer(sb, ofs,
+ 				      ofs->layers[i].mnt->mnt_root,
+-				      "lowerdir");
++				      "lowerdir", true);
+ 		if (err)
+ 			return err;
+ 	}
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 85bac3191e127..2739a6431b9ee 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -291,10 +291,11 @@ struct bpf_verifier_state_list {
+ };
+ 
+ /* Possible states for alu_state member. */
+-#define BPF_ALU_SANITIZE_SRC		1U
+-#define BPF_ALU_SANITIZE_DST		2U
++#define BPF_ALU_SANITIZE_SRC		(1U << 0)
++#define BPF_ALU_SANITIZE_DST		(1U << 1)
+ #define BPF_ALU_NEG_VALUE		(1U << 2)
+ #define BPF_ALU_NON_POINTER		(1U << 3)
++#define BPF_ALU_IMMEDIATE		(1U << 4)
+ #define BPF_ALU_SANITIZE		(BPF_ALU_SANITIZE_SRC | \
+ 					 BPF_ALU_SANITIZE_DST)
+ 
+diff --git a/include/linux/device.h b/include/linux/device.h
+index 2b39de35525a9..75a24b32fee8a 100644
+--- a/include/linux/device.h
++++ b/include/linux/device.h
+@@ -291,6 +291,7 @@ struct device_dma_parameters {
+ 	 * sg limitations.
+ 	 */
+ 	unsigned int max_segment_size;
++	unsigned int min_align_mask;
+ 	unsigned long segment_boundary_mask;
+ };
+ 
+diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
+index 956151052d454..a7d70cdee25e3 100644
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -500,6 +500,22 @@ static inline int dma_set_seg_boundary(struct device *dev, unsigned long mask)
+ 	return -EIO;
+ }
+ 
++static inline unsigned int dma_get_min_align_mask(struct device *dev)
++{
++	if (dev->dma_parms)
++		return dev->dma_parms->min_align_mask;
++	return 0;
++}
++
++static inline int dma_set_min_align_mask(struct device *dev,
++		unsigned int min_align_mask)
++{
++	if (WARN_ON_ONCE(!dev->dma_parms))
++		return -EIO;
++	dev->dma_parms->min_align_mask = min_align_mask;
++	return 0;
++}
++
+ static inline int dma_get_cache_alignment(void)
+ {
+ #ifdef ARCH_DMA_MINALIGN
+diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
+index fbdc657821957..5d2dbe7e04c3c 100644
+--- a/include/linux/swiotlb.h
++++ b/include/linux/swiotlb.h
+@@ -29,6 +29,7 @@ enum swiotlb_force {
+  * controllable.
+  */
+ #define IO_TLB_SHIFT 11
++#define IO_TLB_SIZE (1 << IO_TLB_SHIFT)
+ 
+ extern void swiotlb_init(int verbose);
+ int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
+diff --git a/include/linux/user_namespace.h b/include/linux/user_namespace.h
+index 6ef1c7109fc4d..7616c7bf4b241 100644
+--- a/include/linux/user_namespace.h
++++ b/include/linux/user_namespace.h
+@@ -64,6 +64,9 @@ struct user_namespace {
+ 	kgid_t			group;
+ 	struct ns_common	ns;
+ 	unsigned long		flags;
++	/* parent_could_setfcap: true if the creator if this ns had CAP_SETFCAP
++	 * in its effective capability set at the child ns creation time. */
++	bool			parent_could_setfcap;
+ 
+ #ifdef CONFIG_KEYS
+ 	/* List of joinable keyrings in this namespace.  Modification access of
+diff --git a/include/uapi/linux/capability.h b/include/uapi/linux/capability.h
+index c6ca330341471..2ddb4226cd231 100644
+--- a/include/uapi/linux/capability.h
++++ b/include/uapi/linux/capability.h
+@@ -335,7 +335,8 @@ struct vfs_ns_cap_data {
+ 
+ #define CAP_AUDIT_CONTROL    30
+ 
+-/* Set or remove capabilities on files */
++/* Set or remove capabilities on files.
++   Map uid=0 into a child user namespace. */
+ 
+ #define CAP_SETFCAP	     31
+ 
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index b9180509917e3..b6656d181c9e7 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5755,6 +5755,7 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ {
+ 	struct bpf_insn_aux_data *aux = commit_window ? cur_aux(env) : tmp_aux;
+ 	struct bpf_verifier_state *vstate = env->cur_state;
++	bool off_is_imm = tnum_is_const(off_reg->var_off);
+ 	bool off_is_neg = off_reg->smin_value < 0;
+ 	bool ptr_is_dst_reg = ptr_reg == dst_reg;
+ 	u8 opcode = BPF_OP(insn->code);
+@@ -5785,6 +5786,7 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 		alu_limit = abs(tmp_aux->alu_limit - alu_limit);
+ 	} else {
+ 		alu_state  = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
++		alu_state |= off_is_imm ? BPF_ALU_IMMEDIATE : 0;
+ 		alu_state |= ptr_is_dst_reg ?
+ 			     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
+ 	}
+@@ -11383,7 +11385,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
+ 			const u8 code_sub = BPF_ALU64 | BPF_SUB | BPF_X;
+ 			struct bpf_insn insn_buf[16];
+ 			struct bpf_insn *patch = &insn_buf[0];
+-			bool issrc, isneg;
++			bool issrc, isneg, isimm;
+ 			u32 off_reg;
+ 
+ 			aux = &env->insn_aux_data[i + delta];
+@@ -11394,28 +11396,29 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
+ 			isneg = aux->alu_state & BPF_ALU_NEG_VALUE;
+ 			issrc = (aux->alu_state & BPF_ALU_SANITIZE) ==
+ 				BPF_ALU_SANITIZE_SRC;
++			isimm = aux->alu_state & BPF_ALU_IMMEDIATE;
+ 
+ 			off_reg = issrc ? insn->src_reg : insn->dst_reg;
+-			if (isneg)
+-				*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
+-			*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
+-			*patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
+-			*patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
+-			*patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
+-			*patch++ = BPF_ALU64_IMM(BPF_ARSH, BPF_REG_AX, 63);
+-			if (issrc) {
+-				*patch++ = BPF_ALU64_REG(BPF_AND, BPF_REG_AX,
+-							 off_reg);
+-				insn->src_reg = BPF_REG_AX;
++			if (isimm) {
++				*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
+ 			} else {
+-				*patch++ = BPF_ALU64_REG(BPF_AND, off_reg,
+-							 BPF_REG_AX);
++				if (isneg)
++					*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
++				*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
++				*patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
++				*patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
++				*patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
++				*patch++ = BPF_ALU64_IMM(BPF_ARSH, BPF_REG_AX, 63);
++				*patch++ = BPF_ALU64_REG(BPF_AND, BPF_REG_AX, off_reg);
+ 			}
++			if (!issrc)
++				*patch++ = BPF_MOV64_REG(insn->dst_reg, insn->src_reg);
++			insn->src_reg = BPF_REG_AX;
+ 			if (isneg)
+ 				insn->code = insn->code == code_add ?
+ 					     code_sub : code_add;
+ 			*patch++ = *insn;
+-			if (issrc && isneg)
++			if (issrc && isneg && !isimm)
+ 				*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
+ 			cnt = patch - insn_buf;
+ 
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 781b9dca197cd..ba4055a192e4c 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -50,9 +50,6 @@
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/swiotlb.h>
+ 
+-#define OFFSET(val,align) ((unsigned long)	\
+-	                   ( (val) & ( (align) - 1)))
+-
+ #define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
+ 
+ /*
+@@ -176,6 +173,16 @@ void swiotlb_print_info(void)
+ 	       bytes >> 20);
+ }
+ 
++static inline unsigned long io_tlb_offset(unsigned long val)
++{
++	return val & (IO_TLB_SEGSIZE - 1);
++}
++
++static inline unsigned long nr_slots(u64 val)
++{
++	return DIV_ROUND_UP(val, IO_TLB_SIZE);
++}
++
+ /*
+  * Early SWIOTLB allocation may be too early to allow an architecture to
+  * perform the desired operations.  This function allows the architecture to
+@@ -225,7 +232,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+ 		      __func__, alloc_size, PAGE_SIZE);
+ 
+ 	for (i = 0; i < io_tlb_nslabs; i++) {
+-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
++		io_tlb_list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i);
+ 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+ 	}
+ 	io_tlb_index = 0;
+@@ -359,7 +366,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
+ 		goto cleanup4;
+ 
+ 	for (i = 0; i < io_tlb_nslabs; i++) {
+-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
++		io_tlb_list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i);
+ 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+ 	}
+ 	io_tlb_index = 0;
+@@ -445,79 +452,71 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
+ 	}
+ }
+ 
+-phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
+-		size_t mapping_size, size_t alloc_size,
+-		enum dma_data_direction dir, unsigned long attrs)
+-{
+-	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start);
+-	unsigned long flags;
+-	phys_addr_t tlb_addr;
+-	unsigned int nslots, stride, index, wrap;
+-	int i;
+-	unsigned long mask;
+-	unsigned long offset_slots;
+-	unsigned long max_slots;
+-	unsigned long tmp_io_tlb_used;
++#define slot_addr(start, idx)	((start) + ((idx) << IO_TLB_SHIFT))
+ 
+-	if (no_iotlb_memory)
+-		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
+-
+-	if (mem_encrypt_active())
+-		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
+-
+-	if (mapping_size > alloc_size) {
+-		dev_warn_once(hwdev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
+-			      mapping_size, alloc_size);
+-		return (phys_addr_t)DMA_MAPPING_ERROR;
+-	}
++/*
++ * Return the offset into a iotlb slot required to keep the device happy.
++ */
++static unsigned int swiotlb_align_offset(struct device *dev, u64 addr)
++{
++	return addr & dma_get_min_align_mask(dev) & (IO_TLB_SIZE - 1);
++}
+ 
+-	mask = dma_get_seg_boundary(hwdev);
++/*
++ * Carefully handle integer overflow which can occur when boundary_mask == ~0UL.
++ */
++static inline unsigned long get_max_slots(unsigned long boundary_mask)
++{
++	if (boundary_mask == ~0UL)
++		return 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
++	return nr_slots(boundary_mask + 1);
++}
+ 
+-	tbl_dma_addr &= mask;
++static unsigned int wrap_index(unsigned int index)
++{
++	if (index >= io_tlb_nslabs)
++		return 0;
++	return index;
++}
+ 
+-	offset_slots = ALIGN(tbl_dma_addr, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
++/*
++ * Find a suitable number of IO TLB entries size that will fit this request and
++ * allocate a buffer from that IO TLB pool.
++ */
++static int find_slots(struct device *dev, phys_addr_t orig_addr,
++		size_t alloc_size)
++{
++	unsigned long boundary_mask = dma_get_seg_boundary(dev);
++	dma_addr_t tbl_dma_addr =
++		phys_to_dma_unencrypted(dev, io_tlb_start) & boundary_mask;
++	unsigned long max_slots = get_max_slots(boundary_mask);
++	unsigned int iotlb_align_mask =
++		dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
++	unsigned int nslots = nr_slots(alloc_size), stride;
++	unsigned int index, wrap, count = 0, i;
++	unsigned long flags;
+ 
+-	/*
+-	 * Carefully handle integer overflow which can occur when mask == ~0UL.
+-	 */
+-	max_slots = mask + 1
+-		    ? ALIGN(mask + 1, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT
+-		    : 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
++	BUG_ON(!nslots);
+ 
+ 	/*
+-	 * For mappings greater than or equal to a page, we limit the stride
+-	 * (and hence alignment) to a page size.
++	 * For mappings with an alignment requirement don't bother looping to
++	 * unaligned slots once we found an aligned one.  For allocations of
++	 * PAGE_SIZE or larger only look for page aligned allocations.
+ 	 */
+-	nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
++	stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
+ 	if (alloc_size >= PAGE_SIZE)
+-		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
+-	else
+-		stride = 1;
+-
+-	BUG_ON(!nslots);
++		stride = max(stride, stride << (PAGE_SHIFT - IO_TLB_SHIFT));
+ 
+-	/*
+-	 * Find suitable number of IO TLB entries size that will fit this
+-	 * request and allocate a buffer from that IO TLB pool.
+-	 */
+ 	spin_lock_irqsave(&io_tlb_lock, flags);
+-
+ 	if (unlikely(nslots > io_tlb_nslabs - io_tlb_used))
+ 		goto not_found;
+ 
+-	index = ALIGN(io_tlb_index, stride);
+-	if (index >= io_tlb_nslabs)
+-		index = 0;
+-	wrap = index;
+-
++	index = wrap = wrap_index(ALIGN(io_tlb_index, stride));
+ 	do {
+-		while (iommu_is_span_boundary(index, nslots, offset_slots,
+-					      max_slots)) {
+-			index += stride;
+-			if (index >= io_tlb_nslabs)
+-				index = 0;
+-			if (index == wrap)
+-				goto not_found;
++		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
++		    (orig_addr & iotlb_align_mask)) {
++			index = wrap_index(index + 1);
++			continue;
+ 		}
+ 
+ 		/*
+@@ -525,52 +524,81 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
+ 		 * contiguous buffers, we allocate the buffers from that slot
+ 		 * and mark the entries as '0' indicating unavailable.
+ 		 */
+-		if (io_tlb_list[index] >= nslots) {
+-			int count = 0;
+-
+-			for (i = index; i < (int) (index + nslots); i++)
+-				io_tlb_list[i] = 0;
+-			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && io_tlb_list[i]; i--)
+-				io_tlb_list[i] = ++count;
+-			tlb_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+-
+-			/*
+-			 * Update the indices to avoid searching in the next
+-			 * round.
+-			 */
+-			io_tlb_index = ((index + nslots) < io_tlb_nslabs
+-					? (index + nslots) : 0);
+-
+-			goto found;
++		if (!iommu_is_span_boundary(index, nslots,
++					    nr_slots(tbl_dma_addr),
++					    max_slots)) {
++			if (io_tlb_list[index] >= nslots)
++				goto found;
+ 		}
+-		index += stride;
+-		if (index >= io_tlb_nslabs)
+-			index = 0;
++		index = wrap_index(index + stride);
+ 	} while (index != wrap);
+ 
+ not_found:
+-	tmp_io_tlb_used = io_tlb_used;
+-
+ 	spin_unlock_irqrestore(&io_tlb_lock, flags);
+-	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
+-		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
+-			 alloc_size, io_tlb_nslabs, tmp_io_tlb_used);
+-	return (phys_addr_t)DMA_MAPPING_ERROR;
++	return -1;
++
+ found:
++	for (i = index; i < index + nslots; i++)
++		io_tlb_list[i] = 0;
++	for (i = index - 1;
++	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
++	     io_tlb_list[i]; i--)
++		io_tlb_list[i] = ++count;
++
++	/*
++	 * Update the indices to avoid searching in the next round.
++	 */
++	if (index + nslots < io_tlb_nslabs)
++		io_tlb_index = index + nslots;
++	else
++		io_tlb_index = 0;
+ 	io_tlb_used += nslots;
++
+ 	spin_unlock_irqrestore(&io_tlb_lock, flags);
++	return index;
++}
++
++phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
++		size_t mapping_size, size_t alloc_size,
++		enum dma_data_direction dir, unsigned long attrs)
++{
++	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
++	unsigned int index, i;
++	phys_addr_t tlb_addr;
++
++	if (no_iotlb_memory)
++		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
++
++	if (mem_encrypt_active())
++		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
++
++	if (mapping_size > alloc_size) {
++		dev_warn_once(dev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
++			      mapping_size, alloc_size);
++		return (phys_addr_t)DMA_MAPPING_ERROR;
++	}
++
++	index = find_slots(dev, orig_addr, alloc_size + offset);
++	if (index == -1) {
++		if (!(attrs & DMA_ATTR_NO_WARN))
++			dev_warn_ratelimited(dev,
++	"swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
++				 alloc_size, io_tlb_nslabs, io_tlb_used);
++		return (phys_addr_t)DMA_MAPPING_ERROR;
++	}
+ 
+ 	/*
+ 	 * Save away the mapping from the original address to the DMA address.
+ 	 * This is needed when we sync the memory.  Then we sync the buffer if
+ 	 * needed.
+ 	 */
+-	for (i = 0; i < nslots; i++)
+-		io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
++	for (i = 0; i < nr_slots(alloc_size + offset); i++)
++		io_tlb_orig_addr[index + i] = slot_addr(orig_addr, i);
++
++	tlb_addr = slot_addr(io_tlb_start, index) + offset;
+ 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+ 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+ 		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
+-
+ 	return tlb_addr;
+ }
+ 
+@@ -582,8 +610,9 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
+ 			      enum dma_data_direction dir, unsigned long attrs)
+ {
+ 	unsigned long flags;
+-	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
++	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
++	int i, count, nslots = nr_slots(alloc_size + offset);
++	int index = (tlb_addr - offset - io_tlb_start) >> IO_TLB_SHIFT;
+ 	phys_addr_t orig_addr = io_tlb_orig_addr[index];
+ 
+ 	/*
+@@ -601,26 +630,29 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
+ 	 * with slots below and above the pool being returned.
+ 	 */
+ 	spin_lock_irqsave(&io_tlb_lock, flags);
+-	{
+-		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
+-			 io_tlb_list[index + nslots] : 0);
+-		/*
+-		 * Step 1: return the slots to the free list, merging the
+-		 * slots with superceeding slots
+-		 */
+-		for (i = index + nslots - 1; i >= index; i--) {
+-			io_tlb_list[i] = ++count;
+-			io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+-		}
+-		/*
+-		 * Step 2: merge the returned slots with the preceding slots,
+-		 * if available (non zero)
+-		 */
+-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+-			io_tlb_list[i] = ++count;
++	if (index + nslots < ALIGN(index + 1, IO_TLB_SEGSIZE))
++		count = io_tlb_list[index + nslots];
++	else
++		count = 0;
+ 
+-		io_tlb_used -= nslots;
++	/*
++	 * Step 1: return the slots to the free list, merging the slots with
++	 * superceeding slots
++	 */
++	for (i = index + nslots - 1; i >= index; i--) {
++		io_tlb_list[i] = ++count;
++		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+ 	}
++
++	/*
++	 * Step 2: merge the returned slots with the preceding slots, if
++	 * available (non zero)
++	 */
++	for (i = index - 1;
++	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 && io_tlb_list[i];
++	     i--)
++		io_tlb_list[i] = ++count;
++	io_tlb_used -= nslots;
+ 	spin_unlock_irqrestore(&io_tlb_lock, flags);
+ }
+ 
+@@ -633,7 +665,6 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
+ 
+ 	if (orig_addr == INVALID_PHYS_ADDR)
+ 		return;
+-	orig_addr += (unsigned long)tlb_addr & ((1 << IO_TLB_SHIFT) - 1);
+ 
+ 	switch (target) {
+ 	case SYNC_FOR_CPU:
+@@ -691,7 +722,7 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
+ 
+ size_t swiotlb_max_mapping_size(struct device *dev)
+ {
+-	return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
++	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
+ }
+ 
+ bool is_swiotlb_active(void)
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 4af161b3f322f..8e1b8126c0e49 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -11705,12 +11705,12 @@ SYSCALL_DEFINE5(perf_event_open,
+ 			return err;
+ 	}
+ 
+-	err = security_locked_down(LOCKDOWN_PERF);
+-	if (err && (attr.sample_type & PERF_SAMPLE_REGS_INTR))
+-		/* REGS_INTR can leak data, lockdown must prevent this */
+-		return err;
+-
+-	err = 0;
++	/* REGS_INTR can leak data, lockdown must prevent this */
++	if (attr.sample_type & PERF_SAMPLE_REGS_INTR) {
++		err = security_locked_down(LOCKDOWN_PERF);
++		if (err)
++			return err;
++	}
+ 
+ 	/*
+ 	 * In cgroup mode, the pid argument is used to pass the fd
+diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
+index e703d5d9cbe8e..ce396ea4de608 100644
+--- a/kernel/user_namespace.c
++++ b/kernel/user_namespace.c
+@@ -106,6 +106,7 @@ int create_user_ns(struct cred *new)
+ 	if (!ns)
+ 		goto fail_dec;
+ 
++	ns->parent_could_setfcap = cap_raised(new->cap_effective, CAP_SETFCAP);
+ 	ret = ns_alloc_inum(&ns->ns);
+ 	if (ret)
+ 		goto fail_free;
+@@ -841,6 +842,60 @@ static int sort_idmaps(struct uid_gid_map *map)
+ 	return 0;
+ }
+ 
++/**
++ * verify_root_map() - check the uid 0 mapping
++ * @file: idmapping file
++ * @map_ns: user namespace of the target process
++ * @new_map: requested idmap
++ *
++ * If a process requests mapping parent uid 0 into the new ns, verify that the
++ * process writing the map had the CAP_SETFCAP capability as the target process
++ * will be able to write fscaps that are valid in ancestor user namespaces.
++ *
++ * Return: true if the mapping is allowed, false if not.
++ */
++static bool verify_root_map(const struct file *file,
++			    struct user_namespace *map_ns,
++			    struct uid_gid_map *new_map)
++{
++	int idx;
++	const struct user_namespace *file_ns = file->f_cred->user_ns;
++	struct uid_gid_extent *extent0 = NULL;
++
++	for (idx = 0; idx < new_map->nr_extents; idx++) {
++		if (new_map->nr_extents <= UID_GID_MAP_MAX_BASE_EXTENTS)
++			extent0 = &new_map->extent[idx];
++		else
++			extent0 = &new_map->forward[idx];
++		if (extent0->lower_first == 0)
++			break;
++
++		extent0 = NULL;
++	}
++
++	if (!extent0)
++		return true;
++
++	if (map_ns == file_ns) {
++		/* The process unshared its ns and is writing to its own
++		 * /proc/self/uid_map.  User already has full capabilites in
++		 * the new namespace.  Verify that the parent had CAP_SETFCAP
++		 * when it unshared.
++		 * */
++		if (!file_ns->parent_could_setfcap)
++			return false;
++	} else {
++		/* Process p1 is writing to uid_map of p2, who is in a child
++		 * user namespace to p1's.  Verify that the opener of the map
++		 * file has CAP_SETFCAP against the parent of the new map
++		 * namespace */
++		if (!file_ns_capable(file, map_ns->parent, CAP_SETFCAP))
++			return false;
++	}
++
++	return true;
++}
++
+ static ssize_t map_write(struct file *file, const char __user *buf,
+ 			 size_t count, loff_t *ppos,
+ 			 int cap_setid,
+@@ -848,7 +903,7 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 			 struct uid_gid_map *parent_map)
+ {
+ 	struct seq_file *seq = file->private_data;
+-	struct user_namespace *ns = seq->private;
++	struct user_namespace *map_ns = seq->private;
+ 	struct uid_gid_map new_map;
+ 	unsigned idx;
+ 	struct uid_gid_extent extent;
+@@ -895,7 +950,7 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 	/*
+ 	 * Adjusting namespace settings requires capabilities on the target.
+ 	 */
+-	if (cap_valid(cap_setid) && !file_ns_capable(file, ns, CAP_SYS_ADMIN))
++	if (cap_valid(cap_setid) && !file_ns_capable(file, map_ns, CAP_SYS_ADMIN))
+ 		goto out;
+ 
+ 	/* Parse the user data */
+@@ -965,7 +1020,7 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 
+ 	ret = -EPERM;
+ 	/* Validate the user is allowed to use user id's mapped to. */
+-	if (!new_idmap_permitted(file, ns, cap_setid, &new_map))
++	if (!new_idmap_permitted(file, map_ns, cap_setid, &new_map))
+ 		goto out;
+ 
+ 	ret = -EPERM;
+@@ -1086,6 +1141,10 @@ static bool new_idmap_permitted(const struct file *file,
+ 				struct uid_gid_map *new_map)
+ {
+ 	const struct cred *cred = file->f_cred;
++
++	if (cap_setid == CAP_SETUID && !verify_root_map(file, ns, new_map))
++		return false;
++
+ 	/* Don't allow mappings that would allow anything that wouldn't
+ 	 * be allowed without the establishment of unprivileged mappings.
+ 	 */
+diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c
+index c6c0cb4656645..313d1c8ff066a 100644
+--- a/net/netfilter/nf_conntrack_standalone.c
++++ b/net/netfilter/nf_conntrack_standalone.c
+@@ -1060,16 +1060,10 @@ static int nf_conntrack_standalone_init_sysctl(struct net *net)
+ 	nf_conntrack_standalone_init_dccp_sysctl(net, table);
+ 	nf_conntrack_standalone_init_gre_sysctl(net, table);
+ 
+-	/* Don't allow unprivileged users to alter certain sysctls */
+-	if (net->user_ns != &init_user_ns) {
++	/* Don't allow non-init_net ns to alter global sysctls */
++	if (!net_eq(&init_net, net)) {
+ 		table[NF_SYSCTL_CT_MAX].mode = 0444;
+ 		table[NF_SYSCTL_CT_EXPECT_MAX].mode = 0444;
+-		table[NF_SYSCTL_CT_HELPER].mode = 0444;
+-#ifdef CONFIG_NF_CONNTRACK_EVENTS
+-		table[NF_SYSCTL_CT_EVENTS].mode = 0444;
+-#endif
+-		table[NF_SYSCTL_CT_BUCKETS].mode = 0444;
+-	} else if (!net_eq(&init_net, net)) {
+ 		table[NF_SYSCTL_CT_BUCKETS].mode = 0444;
+ 	}
+ 
+diff --git a/net/qrtr/mhi.c b/net/qrtr/mhi.c
+index ff0c41467fc1d..0fe4bf40919db 100644
+--- a/net/qrtr/mhi.c
++++ b/net/qrtr/mhi.c
+@@ -50,6 +50,9 @@ static int qcom_mhi_qrtr_send(struct qrtr_endpoint *ep, struct sk_buff *skb)
+ 	struct qrtr_mhi_dev *qdev = container_of(ep, struct qrtr_mhi_dev, ep);
+ 	int rc;
+ 
++	if (skb->sk)
++		sock_hold(skb->sk);
++
+ 	rc = skb_linearize(skb);
+ 	if (rc)
+ 		goto free_skb;
+@@ -59,12 +62,11 @@ static int qcom_mhi_qrtr_send(struct qrtr_endpoint *ep, struct sk_buff *skb)
+ 	if (rc)
+ 		goto free_skb;
+ 
+-	if (skb->sk)
+-		sock_hold(skb->sk);
+-
+ 	return rc;
+ 
+ free_skb:
++	if (skb->sk)
++		sock_put(skb->sk);
+ 	kfree_skb(skb);
+ 
+ 	return rc;
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 3c1697f6b60c9..5728bf722c88a 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2376,6 +2376,16 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	}
+ },
+ 
++{
++	USB_DEVICE_VENDOR_SPEC(0x0944, 0x0204),
++	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++		.vendor_name = "KORG, Inc.",
++		/* .product_name = "ToneLab EX", */
++		.ifnum = 3,
++		.type = QUIRK_MIDI_STANDARD_INTERFACE,
++	}
++},
++
+ /* AKAI devices */
+ {
+ 	USB_DEVICE(0x09e8, 0x0062),
+diff --git a/tools/cgroup/memcg_slabinfo.py b/tools/cgroup/memcg_slabinfo.py
+index c4225ed63565a..1600b17dbb8ab 100644
+--- a/tools/cgroup/memcg_slabinfo.py
++++ b/tools/cgroup/memcg_slabinfo.py
+@@ -128,9 +128,9 @@ def detect_kernel_config():
+ 
+     cfg['nr_nodes'] = prog['nr_online_nodes'].value_()
+ 
+-    if prog.type('struct kmem_cache').members[1][1] == 'flags':
++    if prog.type('struct kmem_cache').members[1].name == 'flags':
+         cfg['allocator'] = 'SLUB'
+-    elif prog.type('struct kmem_cache').members[1][1] == 'batchcount':
++    elif prog.type('struct kmem_cache').members[1].name == 'batchcount':
+         cfg['allocator'] = 'SLAB'
+     else:
+         err('Can\'t determine the slab allocator')
+@@ -193,7 +193,7 @@ def main():
+         # look over all slab pages, belonging to non-root memcgs
+         # and look for objects belonging to the given memory cgroup
+         for page in for_each_slab_page(prog):
+-            objcg_vec_raw = page.obj_cgroups.value_()
++            objcg_vec_raw = page.memcg_data.value_()
+             if objcg_vec_raw == 0:
+                 continue
+             cache = page.slab_cache
+@@ -202,7 +202,7 @@ def main():
+             addr = cache.value_()
+             caches[addr] = cache
+             # clear the lowest bit to get the true obj_cgroups
+-            objcg_vec = Object(prog, page.obj_cgroups.type_,
++            objcg_vec = Object(prog, 'struct obj_cgroup **',
+                                value=objcg_vec_raw & ~1)
+ 
+             if addr not in stats:
+diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c
+index 9366fad591dcc..eecc70fc3b199 100644
+--- a/tools/perf/builtin-ftrace.c
++++ b/tools/perf/builtin-ftrace.c
+@@ -289,7 +289,7 @@ static int set_tracing_pid(struct perf_ftrace *ftrace)
+ 
+ 	for (i = 0; i < perf_thread_map__nr(ftrace->evlist->core.threads); i++) {
+ 		scnprintf(buf, sizeof(buf), "%d",
+-			  ftrace->evlist->core.threads->map[i]);
++			  perf_thread_map__pid(ftrace->evlist->core.threads, i));
+ 		if (append_tracing_file("set_ftrace_pid", buf) < 0)
+ 			return -1;
+ 	}
+diff --git a/tools/perf/util/data.c b/tools/perf/util/data.c
+index c47aa34fdc0a7..5d97b3e45fbb1 100644
+--- a/tools/perf/util/data.c
++++ b/tools/perf/util/data.c
+@@ -35,7 +35,7 @@ void perf_data__close_dir(struct perf_data *data)
+ int perf_data__create_dir(struct perf_data *data, int nr)
+ {
+ 	struct perf_data_file *files = NULL;
+-	int i, ret = -1;
++	int i, ret;
+ 
+ 	if (WARN_ON(!data->is_dir))
+ 		return -EINVAL;
+@@ -51,7 +51,8 @@ int perf_data__create_dir(struct perf_data *data, int nr)
+ 	for (i = 0; i < nr; i++) {
+ 		struct perf_data_file *file = &files[i];
+ 
+-		if (asprintf(&file->path, "%s/data.%d", data->path, i) < 0)
++		ret = asprintf(&file->path, "%s/data.%d", data->path, i);
++		if (ret < 0)
+ 			goto out_err;
+ 
+ 		ret = open(file->path, O_RDWR|O_CREAT|O_TRUNC, S_IRUSR|S_IWUSR);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-05-11 14:20 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-05-11 14:20 UTC (permalink / raw
  To: gentoo-commits

commit:     417fb2f6e547a03992ebf4d6838fb70851797243
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue May 11 14:20:42 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue May 11 14:20:42 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=417fb2f6

Linux patch 5.10.36

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1035_linux-5.10.36.patch | 11505 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11509 insertions(+)

diff --git a/0000_README b/0000_README
index 675256e..e9a544e 100644
--- a/0000_README
+++ b/0000_README
@@ -183,6 +183,10 @@ Patch:  1034_linux-5.10.35.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.35
 
+Patch:  1035_linux-5.10.36.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.36
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1035_linux-5.10.36.patch b/1035_linux-5.10.36.patch
new file mode 100644
index 0000000..f0f2da7
--- /dev/null
+++ b/1035_linux-5.10.36.patch
@@ -0,0 +1,11505 @@
+diff --git a/Makefile b/Makefile
+index 6ca39f3aa4e94..ece5b660dcb06 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 35
++SUBLEVEL = 36
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -775,16 +775,16 @@ KBUILD_CFLAGS += -Wno-gnu
+ KBUILD_CFLAGS += -mno-global-merge
+ else
+ 
+-# These warnings generated too much noise in a regular build.
+-# Use make W=1 to enable them (see scripts/Makefile.extrawarn)
+-KBUILD_CFLAGS += -Wno-unused-but-set-variable
+-
+ # Warn about unmarked fall-throughs in switch statement.
+ # Disabled for clang while comment to attribute conversion happens and
+ # https://github.com/ClangBuiltLinux/linux/issues/636 is discussed.
+ KBUILD_CFLAGS += $(call cc-option,-Wimplicit-fallthrough,)
+ endif
+ 
++# These warnings generated too much noise in a regular build.
++# Use make W=1 to enable them (see scripts/Makefile.extrawarn)
++KBUILD_CFLAGS += $(call cc-disable-warning, unused-but-set-variable)
++
+ KBUILD_CFLAGS += $(call cc-disable-warning, unused-const-variable)
+ ifdef CONFIG_FRAME_POINTER
+ KBUILD_CFLAGS	+= -fno-omit-frame-pointer -fno-optimize-sibling-calls
+diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
+index e1567418a2b14..0d6ee56f5831e 100644
+--- a/arch/arm/boot/compressed/Makefile
++++ b/arch/arm/boot/compressed/Makefile
+@@ -114,8 +114,8 @@ asflags-y := -DZIMAGE
+ 
+ # Supply kernel BSS size to the decompressor via a linker symbol.
+ KBSS_SZ = $(shell echo $$(($$($(NM) $(obj)/../../../../vmlinux | \
+-		sed -n -e 's/^\([^ ]*\) [AB] __bss_start$$/-0x\1/p' \
+-		       -e 's/^\([^ ]*\) [AB] __bss_stop$$/+0x\1/p') )) )
++		sed -n -e 's/^\([^ ]*\) [ABD] __bss_start$$/-0x\1/p' \
++		       -e 's/^\([^ ]*\) [ABD] __bss_stop$$/+0x\1/p') )) )
+ LDFLAGS_vmlinux = --defsym _kernel_bss_size=$(KBSS_SZ)
+ # Supply ZRELADDR to the decompressor via a linker symbol.
+ ifneq ($(CONFIG_AUTO_ZRELADDR),y)
+diff --git a/arch/arm/boot/dts/at91-sam9x60ek.dts b/arch/arm/boot/dts/at91-sam9x60ek.dts
+index 775ceb3acb6c0..edca66c232c15 100644
+--- a/arch/arm/boot/dts/at91-sam9x60ek.dts
++++ b/arch/arm/boot/dts/at91-sam9x60ek.dts
+@@ -8,6 +8,7 @@
+  */
+ /dts-v1/;
+ #include "sam9x60.dtsi"
++#include <dt-bindings/input/input.h>
+ 
+ / {
+ 	model = "Microchip SAM9X60-EK";
+@@ -84,7 +85,7 @@
+ 		sw1 {
+ 			label = "SW1";
+ 			gpios = <&pioD 18 GPIO_ACTIVE_LOW>;
+-			linux,code=<0x104>;
++			linux,code=<KEY_PROG1>;
+ 			wakeup-source;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+index 0e159f879c15e..d3cd2443ba252 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+@@ -11,6 +11,7 @@
+ #include "at91-sama5d27_som1.dtsi"
+ #include <dt-bindings/mfd/atmel-flexcom.h>
+ #include <dt-bindings/gpio/gpio.h>
++#include <dt-bindings/input/input.h>
+ 
+ / {
+ 	model = "Atmel SAMA5D27 SOM1 EK";
+@@ -467,7 +468,7 @@
+ 		pb4 {
+ 			label = "USER";
+ 			gpios = <&pioA PIN_PA29 GPIO_ACTIVE_LOW>;
+-			linux,code = <0x104>;
++			linux,code = <KEY_PROG1>;
+ 			wakeup-source;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
+index 6b38fa3f5568f..4883b84b4eded 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
+@@ -8,6 +8,7 @@
+  */
+ /dts-v1/;
+ #include "at91-sama5d27_wlsom1.dtsi"
++#include <dt-bindings/input/input.h>
+ 
+ / {
+ 	model = "Microchip SAMA5D27 WLSOM1 EK";
+@@ -35,7 +36,7 @@
+ 		sw4 {
+ 			label = "USER BUTTON";
+ 			gpios = <&pioA PIN_PB2 GPIO_ACTIVE_LOW>;
+-			linux,code = <0x104>;
++			linux,code = <KEY_PROG1>;
+ 			wakeup-source;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/at91-sama5d2_icp.dts b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+index 6783cf16ff818..19bb50f50c1fc 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_icp.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+@@ -12,6 +12,7 @@
+ #include "sama5d2.dtsi"
+ #include "sama5d2-pinfunc.h"
+ #include <dt-bindings/gpio/gpio.h>
++#include <dt-bindings/input/input.h>
+ #include <dt-bindings/mfd/atmel-flexcom.h>
+ 
+ / {
+@@ -51,7 +52,7 @@
+ 		sw4 {
+ 			label = "USER_PB1";
+ 			gpios = <&pioA PIN_PD0 GPIO_ACTIVE_LOW>;
+-			linux,code = <0x104>;
++			linux,code = <KEY_PROG1>;
+ 			wakeup-source;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+index c894c7c788a93..1c6361ba1aca4 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+@@ -11,6 +11,7 @@
+ #include "sama5d2-pinfunc.h"
+ #include <dt-bindings/mfd/atmel-flexcom.h>
+ #include <dt-bindings/gpio/gpio.h>
++#include <dt-bindings/input/input.h>
+ #include <dt-bindings/pinctrl/at91.h>
+ 
+ / {
+@@ -403,7 +404,7 @@
+ 		bp1 {
+ 			label = "PB_USER";
+ 			gpios = <&pioA PIN_PA10 GPIO_ACTIVE_LOW>;
+-			linux,code = <0x104>;
++			linux,code = <KEY_PROG1>;
+ 			wakeup-source;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/at91-sama5d2_xplained.dts b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
+index 058fae1b4a76e..d767968ae2175 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
+@@ -10,6 +10,7 @@
+ #include "sama5d2-pinfunc.h"
+ #include <dt-bindings/mfd/atmel-flexcom.h>
+ #include <dt-bindings/gpio/gpio.h>
++#include <dt-bindings/input/input.h>
+ #include <dt-bindings/regulator/active-semi,8945a-regulator.h>
+ 
+ / {
+@@ -713,7 +714,7 @@
+ 		bp1 {
+ 			label = "PB_USER";
+ 			gpios = <&pioA PIN_PB9 GPIO_ACTIVE_LOW>;
+-			linux,code = <0x104>;
++			linux,code = <KEY_PROG1>;
+ 			wakeup-source;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+index 5179258f92470..9c55a921263bd 100644
+--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+@@ -7,6 +7,7 @@
+  */
+ /dts-v1/;
+ #include "sama5d36.dtsi"
++#include <dt-bindings/input/input.h>
+ 
+ / {
+ 	model = "SAMA5D3 Xplained";
+@@ -354,7 +355,7 @@
+ 		bp3 {
+ 			label = "PB_USER";
+ 			gpios = <&pioE 29 GPIO_ACTIVE_LOW>;
+-			linux,code = <0x104>;
++			linux,code = <KEY_PROG1>;
+ 			wakeup-source;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/at91sam9260ek.dts b/arch/arm/boot/dts/at91sam9260ek.dts
+index d3446e42b5983..ce96345d28a39 100644
+--- a/arch/arm/boot/dts/at91sam9260ek.dts
++++ b/arch/arm/boot/dts/at91sam9260ek.dts
+@@ -7,6 +7,7 @@
+  */
+ /dts-v1/;
+ #include "at91sam9260.dtsi"
++#include <dt-bindings/input/input.h>
+ 
+ / {
+ 	model = "Atmel at91sam9260ek";
+@@ -156,7 +157,7 @@
+ 		btn4 {
+ 			label = "Button 4";
+ 			gpios = <&pioA 31 GPIO_ACTIVE_LOW>;
+-			linux,code = <0x104>;
++			linux,code = <KEY_PROG1>;
+ 			wakeup-source;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/at91sam9g20ek_common.dtsi b/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
+index 6e6e672c0b86d..87bb39060e8be 100644
+--- a/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
++++ b/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
+@@ -5,6 +5,7 @@
+  * Copyright (C) 2012 Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
+  */
+ #include "at91sam9g20.dtsi"
++#include <dt-bindings/input/input.h>
+ 
+ / {
+ 
+@@ -234,7 +235,7 @@
+ 		btn4 {
+ 			label = "Button 4";
+ 			gpios = <&pioA 31 GPIO_ACTIVE_LOW>;
+-			linux,code = <0x104>;
++			linux,code = <KEY_PROG1>;
+ 			wakeup-source;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm4708-asus-rt-ac56u.dts b/arch/arm/boot/dts/bcm4708-asus-rt-ac56u.dts
+index 6a96655d86260..8ed403767540e 100644
+--- a/arch/arm/boot/dts/bcm4708-asus-rt-ac56u.dts
++++ b/arch/arm/boot/dts/bcm4708-asus-rt-ac56u.dts
+@@ -21,8 +21,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm4708-asus-rt-ac68u.dts b/arch/arm/boot/dts/bcm4708-asus-rt-ac68u.dts
+index 3b0029e61b4c6..667b118ba4ee1 100644
+--- a/arch/arm/boot/dts/bcm4708-asus-rt-ac68u.dts
++++ b/arch/arm/boot/dts/bcm4708-asus-rt-ac68u.dts
+@@ -21,8 +21,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm4708-buffalo-wzr-1750dhp.dts b/arch/arm/boot/dts/bcm4708-buffalo-wzr-1750dhp.dts
+index 90f57bad6b243..ff31ce45831a7 100644
+--- a/arch/arm/boot/dts/bcm4708-buffalo-wzr-1750dhp.dts
++++ b/arch/arm/boot/dts/bcm4708-buffalo-wzr-1750dhp.dts
+@@ -21,8 +21,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x18000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x18000000>;
+ 	};
+ 
+ 	spi {
+diff --git a/arch/arm/boot/dts/bcm4708-netgear-r6250.dts b/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
+index fed75e6ab58ca..61c7b137607e5 100644
+--- a/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
++++ b/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
+@@ -22,8 +22,8 @@
+ 
+ 	memory {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm4708-netgear-r6300-v2.dts b/arch/arm/boot/dts/bcm4708-netgear-r6300-v2.dts
+index 79542e18915c5..4c60eda296d97 100644
+--- a/arch/arm/boot/dts/bcm4708-netgear-r6300-v2.dts
++++ b/arch/arm/boot/dts/bcm4708-netgear-r6300-v2.dts
+@@ -21,8 +21,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm4708-smartrg-sr400ac.dts b/arch/arm/boot/dts/bcm4708-smartrg-sr400ac.dts
+index abd35a518046d..7d46561fca3cd 100644
+--- a/arch/arm/boot/dts/bcm4708-smartrg-sr400ac.dts
++++ b/arch/arm/boot/dts/bcm4708-smartrg-sr400ac.dts
+@@ -21,8 +21,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm47081-asus-rt-n18u.dts b/arch/arm/boot/dts/bcm47081-asus-rt-n18u.dts
+index c29950b43a953..0e273c598732f 100644
+--- a/arch/arm/boot/dts/bcm47081-asus-rt-n18u.dts
++++ b/arch/arm/boot/dts/bcm47081-asus-rt-n18u.dts
+@@ -21,8 +21,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm47081-buffalo-wzr-600dhp2.dts b/arch/arm/boot/dts/bcm47081-buffalo-wzr-600dhp2.dts
+index 4dcec6865469a..083ec4036bd72 100644
+--- a/arch/arm/boot/dts/bcm47081-buffalo-wzr-600dhp2.dts
++++ b/arch/arm/boot/dts/bcm47081-buffalo-wzr-600dhp2.dts
+@@ -21,8 +21,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	spi {
+diff --git a/arch/arm/boot/dts/bcm47081-buffalo-wzr-900dhp.dts b/arch/arm/boot/dts/bcm47081-buffalo-wzr-900dhp.dts
+index 0e349e39f6081..8b1a05a0f1a11 100644
+--- a/arch/arm/boot/dts/bcm47081-buffalo-wzr-900dhp.dts
++++ b/arch/arm/boot/dts/bcm47081-buffalo-wzr-900dhp.dts
+@@ -21,8 +21,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	spi {
+diff --git a/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts b/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts
+index 8f1e565c3db45..6c6bb7b17d27a 100644
+--- a/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts
++++ b/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts
+@@ -21,8 +21,8 @@
+ 
+ 	memory {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts b/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
+index ce888b1835d1f..d29e7f80ea6aa 100644
+--- a/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
++++ b/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
+@@ -21,8 +21,8 @@
+ 
+ 	memory {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x18000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x18000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts b/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts
+index ed8619b54d692..38fbefdf2e4e4 100644
+--- a/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts
++++ b/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts
+@@ -18,8 +18,8 @@
+ 
+ 	memory {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	gpio-keys {
+diff --git a/arch/arm/boot/dts/bcm4709-netgear-r7000.dts b/arch/arm/boot/dts/bcm4709-netgear-r7000.dts
+index 1f87993eae1d1..7989a53597d4f 100644
+--- a/arch/arm/boot/dts/bcm4709-netgear-r7000.dts
++++ b/arch/arm/boot/dts/bcm4709-netgear-r7000.dts
+@@ -21,8 +21,8 @@
+ 
+ 	memory {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm4709-netgear-r8000.dts b/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
+index 6c6199a53d091..87b655be674c5 100644
+--- a/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
++++ b/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
+@@ -32,8 +32,8 @@
+ 
+ 	memory {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts b/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts
+index 911c65fbf2510..e635a15041dd8 100644
+--- a/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts
++++ b/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts
+@@ -21,8 +21,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	nand: nand@18028000 {
+diff --git a/arch/arm/boot/dts/bcm47094-linksys-panamera.dts b/arch/arm/boot/dts/bcm47094-linksys-panamera.dts
+index 0faae89503753..36d63beba8cd8 100644
+--- a/arch/arm/boot/dts/bcm47094-linksys-panamera.dts
++++ b/arch/arm/boot/dts/bcm47094-linksys-panamera.dts
+@@ -18,8 +18,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	gpio-keys {
+diff --git a/arch/arm/boot/dts/bcm47094-luxul-abr-4500.dts b/arch/arm/boot/dts/bcm47094-luxul-abr-4500.dts
+index 50f7cd08cfbbc..a6dc99955e191 100644
+--- a/arch/arm/boot/dts/bcm47094-luxul-abr-4500.dts
++++ b/arch/arm/boot/dts/bcm47094-luxul-abr-4500.dts
+@@ -18,8 +18,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x18000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x18000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm47094-luxul-xbr-4500.dts b/arch/arm/boot/dts/bcm47094-luxul-xbr-4500.dts
+index bcc420f85b566..ff98837bc0db0 100644
+--- a/arch/arm/boot/dts/bcm47094-luxul-xbr-4500.dts
++++ b/arch/arm/boot/dts/bcm47094-luxul-xbr-4500.dts
+@@ -18,8 +18,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x18000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x18000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts b/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
+index 9ae815ddbb4bd..2666195b6ffeb 100644
+--- a/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
++++ b/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
+@@ -18,8 +18,8 @@
+ 
+ 	memory {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x18000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x18000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm47094-luxul-xwr-3100.dts b/arch/arm/boot/dts/bcm47094-luxul-xwr-3100.dts
+index a21b2d1855968..9f798025748bd 100644
+--- a/arch/arm/boot/dts/bcm47094-luxul-xwr-3100.dts
++++ b/arch/arm/boot/dts/bcm47094-luxul-xwr-3100.dts
+@@ -18,8 +18,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm47094-luxul-xwr-3150-v1.dts b/arch/arm/boot/dts/bcm47094-luxul-xwr-3150-v1.dts
+index 4d5c5aa7dc423..c8dfa4c58d2f1 100644
+--- a/arch/arm/boot/dts/bcm47094-luxul-xwr-3150-v1.dts
++++ b/arch/arm/boot/dts/bcm47094-luxul-xwr-3150-v1.dts
+@@ -18,8 +18,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x18000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x18000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm47094-netgear-r8500.dts b/arch/arm/boot/dts/bcm47094-netgear-r8500.dts
+index f42a1703f4ab1..42097a4c2659f 100644
+--- a/arch/arm/boot/dts/bcm47094-netgear-r8500.dts
++++ b/arch/arm/boot/dts/bcm47094-netgear-r8500.dts
+@@ -18,8 +18,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x18000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x18000000>;
+ 	};
+ 
+ 	leds {
+diff --git a/arch/arm/boot/dts/bcm47094-phicomm-k3.dts b/arch/arm/boot/dts/bcm47094-phicomm-k3.dts
+index ac3a4483dcb3f..a2566ad4619c4 100644
+--- a/arch/arm/boot/dts/bcm47094-phicomm-k3.dts
++++ b/arch/arm/boot/dts/bcm47094-phicomm-k3.dts
+@@ -15,8 +15,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000
+-		       0x88000000 0x18000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x18000000>;
+ 	};
+ 
+ 	gpio-keys {
+diff --git a/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi b/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi
+index 9f285c7cf9141..c0de1337bdaad 100644
+--- a/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi
++++ b/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi
+@@ -8,37 +8,43 @@
+ / {
+ 	soc {
+ 		i2c@80128000 {
+-			/* Marked:
+-			 * 129
+-			 * M35
+-			 * L3GD20
+-			 */
+-			l3gd20@6a {
+-				/* Gyroscope */
+-				compatible = "st,l3gd20";
+-				status = "disabled";
++			accelerometer@19 {
++				compatible = "st,lsm303dlhc-accel";
+ 				st,drdy-int-pin = <1>;
+-				drive-open-drain;
+-				reg = <0x6a>; // 0x6a or 0x6b
++				reg = <0x19>;
+ 				vdd-supply = <&ab8500_ldo_aux1_reg>;
+ 				vddio-supply = <&db8500_vsmps2_reg>;
++				interrupt-parent = <&gpio2>;
++				interrupts = <18 IRQ_TYPE_EDGE_RISING>,
++					     <19 IRQ_TYPE_EDGE_RISING>;
++				pinctrl-names = "default";
++				pinctrl-0 = <&accel_tvk_mode>;
+ 			};
+-			/*
+-			 * Marked:
+-			 * 2122
+-			 * C3H
+-			 * DQEEE
+-			 * LIS3DH?
+-			 */
+-			lis3dh@18 {
+-				/* Accelerometer */
+-				compatible = "st,lis3dh-accel";
++			magnetometer@1e {
++				compatible = "st,lsm303dlm-magn";
+ 				st,drdy-int-pin = <1>;
+-				reg = <0x18>;
++				reg = <0x1e>;
+ 				vdd-supply = <&ab8500_ldo_aux1_reg>;
+ 				vddio-supply = <&db8500_vsmps2_reg>;
++				// This interrupt is not properly working with the driver
++				// interrupt-parent = <&gpio1>;
++				// interrupts = <0 IRQ_TYPE_EDGE_RISING>;
+ 				pinctrl-names = "default";
+-				pinctrl-0 = <&accel_tvk_mode>;
++				pinctrl-0 = <&magn_tvk_mode>;
++			};
++			gyroscope@68 {
++				/* Gyroscope */
++				compatible = "st,l3g4200d-gyro";
++				reg = <0x68>;
++				vdd-supply = <&ab8500_ldo_aux1_reg>;
++				vddio-supply = <&db8500_vsmps2_reg>;
++			};
++			pressure@5c {
++				/* Barometer/pressure sensor */
++				compatible = "st,lps001wp-press";
++				reg = <0x5c>;
++				vdd-supply = <&ab8500_ldo_aux1_reg>;
++				vddio-supply = <&db8500_vsmps2_reg>;
+ 			};
+ 		};
+ 
+@@ -54,5 +60,26 @@
+ 				};
+ 			};
+ 		};
++
++		pinctrl {
++			accelerometer {
++				accel_tvk_mode: accel_tvk {
++					/* Accelerometer interrupt lines 1 & 2 */
++					tvk_cfg {
++						pins = "GPIO82_C1", "GPIO83_D3";
++						ste,config = <&gpio_in_pd>;
++					};
++				};
++			};
++			magnetometer {
++				magn_tvk_mode: magn_tvk {
++					/* GPIO 32 used for DRDY, pull this down */
++					tvk_cfg {
++						pins = "GPIO32_V2";
++						ste,config = <&gpio_in_pd>;
++					};
++				};
++			};
++		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+index a0b829738e8f2..068aabcffb13b 100644
+--- a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
++++ b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+@@ -448,7 +448,7 @@
+ 
+ 			reset-gpios = <&gpio TEGRA_GPIO(Q, 7) GPIO_ACTIVE_HIGH>;
+ 
+-			avdd-supply = <&vdd_3v3_sys>;
++			vdda-supply = <&vdd_3v3_sys>;
+ 			vdd-supply  = <&vdd_3v3_sys>;
+ 		};
+ 
+diff --git a/arch/arm/crypto/curve25519-core.S b/arch/arm/crypto/curve25519-core.S
+index be18af52e7dc9..b697fa5d059a2 100644
+--- a/arch/arm/crypto/curve25519-core.S
++++ b/arch/arm/crypto/curve25519-core.S
+@@ -10,8 +10,8 @@
+ #include <linux/linkage.h>
+ 
+ .text
+-.fpu neon
+ .arch armv7-a
++.fpu neon
+ .align 4
+ 
+ ENTRY(curve25519_neon)
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-librem5-r3.dts b/arch/arm64/boot/dts/freescale/imx8mq-librem5-r3.dts
+index 6704ea2c72a35..cc29223ca188c 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-librem5-r3.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mq-librem5-r3.dts
+@@ -22,6 +22,10 @@
+ 	ti,termination-current = <144000>;  /* uA */
+ };
+ 
++&buck3_reg {
++	regulator-always-on;
++};
++
+ &proximity {
+ 	proximity-near-level = <25>;
+ };
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index d5b6c0a1c54a5..a89e47d95eef2 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -156,7 +156,8 @@
+ 			};
+ 
+ 			nb_periph_clk: nb-periph-clk@13000 {
+-				compatible = "marvell,armada-3700-periph-clock-nb";
++				compatible = "marvell,armada-3700-periph-clock-nb",
++					     "syscon";
+ 				reg = <0x13000 0x100>;
+ 				clocks = <&tbg 0>, <&tbg 1>, <&tbg 2>,
+ 				<&tbg 3>, <&xtalclk>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+index 5e046f9d48ce9..592c6bc10dd1d 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+@@ -1169,7 +1169,7 @@
+ 				 <&mmsys CLK_MM_DSI1_DIGITAL>,
+ 				 <&mipi_tx1>;
+ 			clock-names = "engine", "digital", "hs";
+-			phy = <&mipi_tx1>;
++			phys = <&mipi_tx1>;
+ 			phy-names = "dphy";
+ 			status = "disabled";
+ 		};
+diff --git a/arch/arm64/kernel/vdso/vdso.lds.S b/arch/arm64/kernel/vdso/vdso.lds.S
+index d808ad31e01f7..b840ab1b705cf 100644
+--- a/arch/arm64/kernel/vdso/vdso.lds.S
++++ b/arch/arm64/kernel/vdso/vdso.lds.S
+@@ -31,6 +31,13 @@ SECTIONS
+ 	.gnu.version_d	: { *(.gnu.version_d) }
+ 	.gnu.version_r	: { *(.gnu.version_r) }
+ 
++	/*
++	 * Discard .note.gnu.property sections which are unused and have
++	 * different alignment requirement from vDSO note sections.
++	 */
++	/DISCARD/	: {
++		*(.note.GNU-stack .note.gnu.property)
++	}
+ 	.note		: { *(.note.*) }		:text	:note
+ 
+ 	. = ALIGN(16);
+@@ -51,7 +58,6 @@ SECTIONS
+ 	PROVIDE(end = .);
+ 
+ 	/DISCARD/	: {
+-		*(.note.GNU-stack)
+ 		*(.data .data.* .gnu.linkonce.d.* .sdata*)
+ 		*(.bss .sbss .dynbss .dynsbss)
+ 	}
+diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
+index f877a576b338b..f4b98903064f5 100644
+--- a/arch/powerpc/include/asm/reg.h
++++ b/arch/powerpc/include/asm/reg.h
+@@ -444,6 +444,7 @@
+ #define   LPCR_VRMA_LP1		ASM_CONST(0x0000800000000000)
+ #define   LPCR_RMLS		0x1C000000	/* Implementation dependent RMO limit sel */
+ #define   LPCR_RMLS_SH		26
++#define   LPCR_HAIL		ASM_CONST(0x0000000004000000)   /* HV AIL (ISAv3.1) */
+ #define   LPCR_ILE		ASM_CONST(0x0000000002000000)   /* !HV irqs set MSR:LE */
+ #define   LPCR_AIL		ASM_CONST(0x0000000001800000)	/* Alternate interrupt location */
+ #define   LPCR_AIL_0		ASM_CONST(0x0000000000000000)	/* MMU off exception offset 0x0 */
+diff --git a/arch/powerpc/include/uapi/asm/errno.h b/arch/powerpc/include/uapi/asm/errno.h
+index cc79856896a19..4ba87de32be00 100644
+--- a/arch/powerpc/include/uapi/asm/errno.h
++++ b/arch/powerpc/include/uapi/asm/errno.h
+@@ -2,6 +2,7 @@
+ #ifndef _ASM_POWERPC_ERRNO_H
+ #define _ASM_POWERPC_ERRNO_H
+ 
++#undef	EDEADLOCK
+ #include <asm-generic/errno.h>
+ 
+ #undef	EDEADLOCK
+diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c
+index 813713c9120c0..20c417ad9c6de 100644
+--- a/arch/powerpc/kernel/eeh.c
++++ b/arch/powerpc/kernel/eeh.c
+@@ -362,14 +362,11 @@ static inline unsigned long eeh_token_to_phys(unsigned long token)
+ 	pa = pte_pfn(*ptep);
+ 
+ 	/* On radix we can do hugepage mappings for io, so handle that */
+-	if (hugepage_shift) {
+-		pa <<= hugepage_shift;
+-		pa |= token & ((1ul << hugepage_shift) - 1);
+-	} else {
+-		pa <<= PAGE_SHIFT;
+-		pa |= token & (PAGE_SIZE - 1);
+-	}
++	if (!hugepage_shift)
++		hugepage_shift = PAGE_SHIFT;
+ 
++	pa <<= PAGE_SHIFT;
++	pa |= token & ((1ul << hugepage_shift) - 1);
+ 	return pa;
+ }
+ 
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index c28e949cc2229..3b871ecb3a921 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -231,10 +231,23 @@ static void cpu_ready_for_interrupts(void)
+ 	 * If we are not in hypervisor mode the job is done once for
+ 	 * the whole partition in configure_exceptions().
+ 	 */
+-	if (cpu_has_feature(CPU_FTR_HVMODE) &&
+-	    cpu_has_feature(CPU_FTR_ARCH_207S)) {
++	if (cpu_has_feature(CPU_FTR_HVMODE)) {
+ 		unsigned long lpcr = mfspr(SPRN_LPCR);
+-		mtspr(SPRN_LPCR, lpcr | LPCR_AIL_3);
++		unsigned long new_lpcr = lpcr;
++
++		if (cpu_has_feature(CPU_FTR_ARCH_31)) {
++			/* P10 DD1 does not have HAIL */
++			if (pvr_version_is(PVR_POWER10) &&
++					(mfspr(SPRN_PVR) & 0xf00) == 0x100)
++				new_lpcr |= LPCR_AIL_3;
++			else
++				new_lpcr |= LPCR_HAIL;
++		} else if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
++			new_lpcr |= LPCR_AIL_3;
++		}
++
++		if (new_lpcr != lpcr)
++			mtspr(SPRN_LPCR, new_lpcr);
+ 	}
+ 
+ 	/*
+diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
+index 02b9e4d0dc40b..a8a7cb71086b3 100644
+--- a/arch/powerpc/kexec/file_load_64.c
++++ b/arch/powerpc/kexec/file_load_64.c
+@@ -960,6 +960,93 @@ unsigned int kexec_fdt_totalsize_ppc64(struct kimage *image)
+ 	return fdt_size;
+ }
+ 
++/**
++ * add_node_props - Reads node properties from device node structure and add
++ *                  them to fdt.
++ * @fdt:            Flattened device tree of the kernel
++ * @node_offset:    offset of the node to add a property at
++ * @dn:             device node pointer
++ *
++ * Returns 0 on success, negative errno on error.
++ */
++static int add_node_props(void *fdt, int node_offset, const struct device_node *dn)
++{
++	int ret = 0;
++	struct property *pp;
++
++	if (!dn)
++		return -EINVAL;
++
++	for_each_property_of_node(dn, pp) {
++		ret = fdt_setprop(fdt, node_offset, pp->name, pp->value, pp->length);
++		if (ret < 0) {
++			pr_err("Unable to add %s property: %s\n", pp->name, fdt_strerror(ret));
++			return ret;
++		}
++	}
++	return ret;
++}
++
++/**
++ * update_cpus_node - Update cpus node of flattened device tree using of_root
++ *                    device node.
++ * @fdt:              Flattened device tree of the kernel.
++ *
++ * Returns 0 on success, negative errno on error.
++ */
++static int update_cpus_node(void *fdt)
++{
++	struct device_node *cpus_node, *dn;
++	int cpus_offset, cpus_subnode_offset, ret = 0;
++
++	cpus_offset = fdt_path_offset(fdt, "/cpus");
++	if (cpus_offset < 0 && cpus_offset != -FDT_ERR_NOTFOUND) {
++		pr_err("Malformed device tree: error reading /cpus node: %s\n",
++		       fdt_strerror(cpus_offset));
++		return cpus_offset;
++	}
++
++	if (cpus_offset > 0) {
++		ret = fdt_del_node(fdt, cpus_offset);
++		if (ret < 0) {
++			pr_err("Error deleting /cpus node: %s\n", fdt_strerror(ret));
++			return -EINVAL;
++		}
++	}
++
++	/* Add cpus node to fdt */
++	cpus_offset = fdt_add_subnode(fdt, fdt_path_offset(fdt, "/"), "cpus");
++	if (cpus_offset < 0) {
++		pr_err("Error creating /cpus node: %s\n", fdt_strerror(cpus_offset));
++		return -EINVAL;
++	}
++
++	/* Add cpus node properties */
++	cpus_node = of_find_node_by_path("/cpus");
++	ret = add_node_props(fdt, cpus_offset, cpus_node);
++	of_node_put(cpus_node);
++	if (ret < 0)
++		return ret;
++
++	/* Loop through all subnodes of cpus and add them to fdt */
++	for_each_node_by_type(dn, "cpu") {
++		cpus_subnode_offset = fdt_add_subnode(fdt, cpus_offset, dn->full_name);
++		if (cpus_subnode_offset < 0) {
++			pr_err("Unable to add %s subnode: %s\n", dn->full_name,
++			       fdt_strerror(cpus_subnode_offset));
++			ret = cpus_subnode_offset;
++			goto out;
++		}
++
++		ret = add_node_props(fdt, cpus_subnode_offset, dn);
++		if (ret < 0)
++			goto out;
++	}
++out:
++	of_node_put(dn);
++	return ret;
++}
++
+ /**
+  * setup_new_fdt_ppc64 - Update the flattend device-tree of the kernel
+  *                       being loaded.
+@@ -1020,6 +1107,11 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
+ 		}
+ 	}
+ 
++	/* Update cpus nodes information to account hotplug CPUs. */
++	ret =  update_cpus_node(fdt);
++	if (ret < 0)
++		goto out;
++
+ 	/* Update memory reserve map */
+ 	ret = get_reserved_memory_ranges(&rmem);
+ 	if (ret)
+diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
+index 69a91b571845d..58991233381ed 100644
+--- a/arch/powerpc/lib/Makefile
++++ b/arch/powerpc/lib/Makefile
+@@ -5,6 +5,9 @@
+ 
+ ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
+ 
++CFLAGS_code-patching.o += -fno-stack-protector
++CFLAGS_feature-fixups.o += -fno-stack-protector
++
+ CFLAGS_REMOVE_code-patching.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_feature-fixups.o = $(CC_FLAGS_FTRACE)
+ 
+diff --git a/arch/s390/crypto/arch_random.c b/arch/s390/crypto/arch_random.c
+index dd95cdbd22ce8..4cbb4b6d85a83 100644
+--- a/arch/s390/crypto/arch_random.c
++++ b/arch/s390/crypto/arch_random.c
+@@ -53,6 +53,10 @@ static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer);
+ 
+ bool s390_arch_random_generate(u8 *buf, unsigned int nbytes)
+ {
++	/* max hunk is ARCH_RNG_BUF_SIZE */
++	if (nbytes > ARCH_RNG_BUF_SIZE)
++		return false;
++
+ 	/* lock rng buffer */
+ 	if (!spin_trylock(&arch_rng_lock))
+ 		return false;
+diff --git a/arch/s390/kernel/dis.c b/arch/s390/kernel/dis.c
+index a7eab7be4db05..5412efe328f80 100644
+--- a/arch/s390/kernel/dis.c
++++ b/arch/s390/kernel/dis.c
+@@ -563,7 +563,7 @@ void show_code(struct pt_regs *regs)
+ 
+ void print_fn_code(unsigned char *code, unsigned long len)
+ {
+-	char buffer[64], *ptr;
++	char buffer[128], *ptr;
+ 	int opsize, i;
+ 
+ 	while (len) {
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 3a5ecb1039bfb..183ee73d9019f 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1414,7 +1414,7 @@ config HIGHMEM4G
+ 
+ config HIGHMEM64G
+ 	bool "64GB"
+-	depends on !M486 && !M586 && !M586TSC && !M586MMX && !MGEODE_LX && !MGEODEGX1 && !MCYRIXIII && !MELAN && !MWINCHIPC6 && !WINCHIP3D && !MK6
++	depends on !M486SX && !M486 && !M586 && !M586TSC && !M586MMX && !MGEODE_LX && !MGEODEGX1 && !MCYRIXIII && !MELAN && !MWINCHIPC6 && !WINCHIP3D && !MK6
+ 	select X86_PAE
+ 	help
+ 	  Select this if you have a 32-bit processor and more than 4
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 9c86f2dc16b1d..8ed757d06f772 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -40,6 +40,7 @@ REALMODE_CFLAGS += -ffreestanding
+ REALMODE_CFLAGS += -fno-stack-protector
+ REALMODE_CFLAGS += $(call __cc-option, $(CC), $(REALMODE_CFLAGS), -Wno-address-of-packed-member)
+ REALMODE_CFLAGS += $(call __cc-option, $(CC), $(REALMODE_CFLAGS), $(cc_stack_align4))
++REALMODE_CFLAGS += $(CLANG_FLAGS)
+ export REALMODE_CFLAGS
+ 
+ # BITS is used as extension for files which are available in a 32 bit
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index 40b8fd375d522..6004047d25fdd 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -46,6 +46,7 @@ KBUILD_CFLAGS += -D__DISABLE_EXPORTS
+ # Disable relocation relaxation in case the link is not PIE.
+ KBUILD_CFLAGS += $(call as-option,-Wa$(comma)-mrelax-relocations=no)
+ KBUILD_CFLAGS += -include $(srctree)/include/linux/hidden.h
++KBUILD_CFLAGS += $(CLANG_FLAGS)
+ 
+ # sev-es.c indirectly inludes inat-table.h which is generated during
+ # compilation and stored in $(objtree). Add the directory to the includes so
+diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
+index aa561795efd16..a6dea4e8a082f 100644
+--- a/arch/x86/boot/compressed/mem_encrypt.S
++++ b/arch/x86/boot/compressed/mem_encrypt.S
+@@ -23,12 +23,6 @@ SYM_FUNC_START(get_sev_encryption_bit)
+ 	push	%ecx
+ 	push	%edx
+ 
+-	/* Check if running under a hypervisor */
+-	movl	$1, %eax
+-	cpuid
+-	bt	$31, %ecx		/* Check the hypervisor bit */
+-	jnc	.Lno_sev
+-
+ 	movl	$0x80000000, %eax	/* CPUID to check the highest leaf */
+ 	cpuid
+ 	cmpl	$0x8000001f, %eax	/* See if 0x8000001f is available */
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 35ad8480c464e..25148ebd36341 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1847,7 +1847,7 @@ static inline void setup_getcpu(int cpu)
+ 	unsigned long cpudata = vdso_encode_cpunode(cpu, early_cpu_to_node(cpu));
+ 	struct desc_struct d = { };
+ 
+-	if (boot_cpu_has(X86_FEATURE_RDTSCP))
++	if (boot_cpu_has(X86_FEATURE_RDTSCP) || boot_cpu_has(X86_FEATURE_RDPID))
+ 		write_rdtscp_aux(cpudata);
+ 
+ 	/* Store CPU and node number in limit. */
+diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
+index cdc04d0912423..387b716698187 100644
+--- a/arch/x86/kernel/sev-es-shared.c
++++ b/arch/x86/kernel/sev-es-shared.c
+@@ -186,7 +186,6 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
+ 	 * make it accessible to the hypervisor.
+ 	 *
+ 	 * In particular, check for:
+-	 *	- Hypervisor CPUID bit
+ 	 *	- Availability of CPUID leaf 0x8000001f
+ 	 *	- SEV CPUID bit.
+ 	 *
+@@ -194,10 +193,7 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
+ 	 * can't be checked here.
+ 	 */
+ 
+-	if ((fn == 1 && !(regs->cx & BIT(31))))
+-		/* Hypervisor bit */
+-		goto fail;
+-	else if (fn == 0x80000000 && (regs->ax < 0x8000001f))
++	if (fn == 0x80000000 && (regs->ax < 0x8000001f))
+ 		/* SEV leaf check */
+ 		goto fail;
+ 	else if ((fn == 0x8000001f && !(regs->ax & BIT(1))))
+diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
+index 6c5eb6f3f14f4..a19374d261013 100644
+--- a/arch/x86/mm/mem_encrypt_identity.c
++++ b/arch/x86/mm/mem_encrypt_identity.c
+@@ -503,14 +503,10 @@ void __init sme_enable(struct boot_params *bp)
+ 
+ #define AMD_SME_BIT	BIT(0)
+ #define AMD_SEV_BIT	BIT(1)
+-	/*
+-	 * Set the feature mask (SME or SEV) based on whether we are
+-	 * running under a hypervisor.
+-	 */
+-	eax = 1;
+-	ecx = 0;
+-	native_cpuid(&eax, &ebx, &ecx, &edx);
+-	feature_mask = (ecx & BIT(31)) ? AMD_SEV_BIT : AMD_SME_BIT;
++
++	/* Check the SEV MSR whether SEV or SME is enabled */
++	sev_status   = __rdmsr(MSR_AMD64_SEV);
++	feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
+ 
+ 	/*
+ 	 * Check for the SME/SEV feature:
+@@ -530,19 +526,26 @@ void __init sme_enable(struct boot_params *bp)
+ 
+ 	/* Check if memory encryption is enabled */
+ 	if (feature_mask == AMD_SME_BIT) {
++		/*
++		 * No SME if Hypervisor bit is set. This check is here to
++		 * prevent a guest from trying to enable SME. For running as a
++		 * KVM guest the MSR_K8_SYSCFG will be sufficient, but there
++		 * might be other hypervisors which emulate that MSR as non-zero
++		 * or even pass it through to the guest.
++		 * A malicious hypervisor can still trick a guest into this
++		 * path, but there is no way to protect against that.
++		 */
++		eax = 1;
++		ecx = 0;
++		native_cpuid(&eax, &ebx, &ecx, &edx);
++		if (ecx & BIT(31))
++			return;
++
+ 		/* For SME, check the SYSCFG MSR */
+ 		msr = __rdmsr(MSR_K8_SYSCFG);
+ 		if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
+ 			return;
+ 	} else {
+-		/* For SEV, check the SEV MSR */
+-		msr = __rdmsr(MSR_AMD64_SEV);
+-		if (!(msr & MSR_AMD64_SEV_ENABLED))
+-			return;
+-
+-		/* Save SEV_STATUS to avoid reading MSR again */
+-		sev_status = msr;
+-
+ 		/* SEV state cannot be controlled by a command line option */
+ 		sme_me_mask = me_mask;
+ 		sev_enabled = true;
+diff --git a/crypto/api.c b/crypto/api.c
+index ed08cbd5b9d3f..c4eda56cff891 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -562,7 +562,7 @@ void crypto_destroy_tfm(void *mem, struct crypto_tfm *tfm)
+ {
+ 	struct crypto_alg *alg;
+ 
+-	if (unlikely(!mem))
++	if (IS_ERR_OR_NULL(mem))
+ 		return;
+ 
+ 	alg = tfm->__crt_alg;
+diff --git a/crypto/rng.c b/crypto/rng.c
+index a888d84b524a4..fea082b25fe4b 100644
+--- a/crypto/rng.c
++++ b/crypto/rng.c
+@@ -34,22 +34,18 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 *seed, unsigned int slen)
+ 	u8 *buf = NULL;
+ 	int err;
+ 
+-	crypto_stats_get(alg);
+ 	if (!seed && slen) {
+ 		buf = kmalloc(slen, GFP_KERNEL);
+-		if (!buf) {
+-			crypto_alg_put(alg);
++		if (!buf)
+ 			return -ENOMEM;
+-		}
+ 
+ 		err = get_random_bytes_wait(buf, slen);
+-		if (err) {
+-			crypto_alg_put(alg);
++		if (err)
+ 			goto out;
+-		}
+ 		seed = buf;
+ 	}
+ 
++	crypto_stats_get(alg);
+ 	err = crypto_rng_alg(tfm)->seed(tfm, seed, slen);
+ 	crypto_stats_rng_seed(alg, err);
+ out:
+diff --git a/drivers/acpi/arm64/gtdt.c b/drivers/acpi/arm64/gtdt.c
+index f2d0e5915dab5..0a0a982f9c28d 100644
+--- a/drivers/acpi/arm64/gtdt.c
++++ b/drivers/acpi/arm64/gtdt.c
+@@ -329,7 +329,7 @@ static int __init gtdt_import_sbsa_gwdt(struct acpi_gtdt_watchdog *wd,
+ 					int index)
+ {
+ 	struct platform_device *pdev;
+-	int irq = map_gt_gsi(wd->timer_interrupt, wd->timer_flags);
++	int irq;
+ 
+ 	/*
+ 	 * According to SBSA specification the size of refresh and control
+@@ -338,7 +338,7 @@ static int __init gtdt_import_sbsa_gwdt(struct acpi_gtdt_watchdog *wd,
+ 	struct resource res[] = {
+ 		DEFINE_RES_MEM(wd->control_frame_address, SZ_4K),
+ 		DEFINE_RES_MEM(wd->refresh_frame_address, SZ_4K),
+-		DEFINE_RES_IRQ(irq),
++		{},
+ 	};
+ 	int nr_res = ARRAY_SIZE(res);
+ 
+@@ -348,10 +348,11 @@ static int __init gtdt_import_sbsa_gwdt(struct acpi_gtdt_watchdog *wd,
+ 
+ 	if (!(wd->refresh_frame_address && wd->control_frame_address)) {
+ 		pr_err(FW_BUG "failed to get the Watchdog base address.\n");
+-		acpi_unregister_gsi(wd->timer_interrupt);
+ 		return -EINVAL;
+ 	}
+ 
++	irq = map_gt_gsi(wd->timer_interrupt, wd->timer_flags);
++	res[2] = (struct resource)DEFINE_RES_IRQ(irq);
+ 	if (irq <= 0) {
+ 		pr_warn("failed to map the Watchdog interrupt.\n");
+ 		nr_res--;
+@@ -364,7 +365,8 @@ static int __init gtdt_import_sbsa_gwdt(struct acpi_gtdt_watchdog *wd,
+ 	 */
+ 	pdev = platform_device_register_simple("sbsa-gwdt", index, res, nr_res);
+ 	if (IS_ERR(pdev)) {
+-		acpi_unregister_gsi(wd->timer_interrupt);
++		if (irq > 0)
++			acpi_unregister_gsi(wd->timer_interrupt);
+ 		return PTR_ERR(pdev);
+ 	}
+ 
+diff --git a/drivers/acpi/custom_method.c b/drivers/acpi/custom_method.c
+index 7b54dc95d36b3..4058e02410917 100644
+--- a/drivers/acpi/custom_method.c
++++ b/drivers/acpi/custom_method.c
+@@ -42,6 +42,8 @@ static ssize_t cm_write(struct file *file, const char __user * user_buf,
+ 				   sizeof(struct acpi_table_header)))
+ 			return -EFAULT;
+ 		uncopied_bytes = max_size = table.length;
++		/* make sure the buf is not allocated */
++		kfree(buf);
+ 		buf = kzalloc(max_size, GFP_KERNEL);
+ 		if (!buf)
+ 			return -ENOMEM;
+@@ -55,6 +57,7 @@ static ssize_t cm_write(struct file *file, const char __user * user_buf,
+ 	    (*ppos + count < count) ||
+ 	    (count > uncopied_bytes)) {
+ 		kfree(buf);
++		buf = NULL;
+ 		return -EINVAL;
+ 	}
+ 
+@@ -76,7 +79,6 @@ static ssize_t cm_write(struct file *file, const char __user * user_buf,
+ 		add_taint(TAINT_OVERRIDDEN_ACPI_TABLE, LOCKDEP_NOW_UNRELIABLE);
+ 	}
+ 
+-	kfree(buf);
+ 	return count;
+ }
+ 
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 00ba8e5a1ccc0..33192a8f687d6 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -1772,6 +1772,11 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		hpriv->flags |= AHCI_HFLAG_NO_DEVSLP;
+ 
+ #ifdef CONFIG_ARM64
++	if (pdev->vendor == PCI_VENDOR_ID_HUAWEI &&
++	    pdev->device == 0xa235 &&
++	    pdev->revision < 0x30)
++		hpriv->flags |= AHCI_HFLAG_NO_SXS;
++
+ 	if (pdev->vendor == 0x177d && pdev->device == 0xa01c)
+ 		hpriv->irq_handler = ahci_thunderx_irq_handler;
+ #endif
+diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h
+index 98b8baa47dc5e..d1f284f0c83d9 100644
+--- a/drivers/ata/ahci.h
++++ b/drivers/ata/ahci.h
+@@ -242,6 +242,7 @@ enum {
+ 							suspend/resume */
+ 	AHCI_HFLAG_IGN_NOTSUPP_POWER_ON	= (1 << 27), /* ignore -EOPNOTSUPP
+ 							from phy_power_on() */
++	AHCI_HFLAG_NO_SXS		= (1 << 28), /* SXS not supported */
+ 
+ 	/* ap->flags bits */
+ 
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index ea5bf5f4cbed5..fec2e9754aed2 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -493,6 +493,11 @@ void ahci_save_initial_config(struct device *dev, struct ahci_host_priv *hpriv)
+ 		cap |= HOST_CAP_ALPM;
+ 	}
+ 
++	if ((cap & HOST_CAP_SXS) && (hpriv->flags & AHCI_HFLAG_NO_SXS)) {
++		dev_info(dev, "controller does not support SXS, disabling CAP_SXS\n");
++		cap &= ~HOST_CAP_SXS;
++	}
++
+ 	if (hpriv->force_port_map && port_map != hpriv->force_port_map) {
+ 		dev_info(dev, "forcing port_map 0x%x -> 0x%x\n",
+ 			 port_map, hpriv->force_port_map);
+diff --git a/drivers/block/rnbd/rnbd-clt-sysfs.c b/drivers/block/rnbd/rnbd-clt-sysfs.c
+index d9dd138ca9c64..5613cd45866b6 100644
+--- a/drivers/block/rnbd/rnbd-clt-sysfs.c
++++ b/drivers/block/rnbd/rnbd-clt-sysfs.c
+@@ -433,10 +433,14 @@ void rnbd_clt_remove_dev_symlink(struct rnbd_clt_dev *dev)
+ 	 * i.e. rnbd_clt_unmap_dev_store() leading to a sysfs warning because
+ 	 * of sysfs link already was removed already.
+ 	 */
+-	if (dev->blk_symlink_name && try_module_get(THIS_MODULE)) {
+-		sysfs_remove_link(rnbd_devs_kobj, dev->blk_symlink_name);
++	if (dev->blk_symlink_name) {
++		if (try_module_get(THIS_MODULE)) {
++			sysfs_remove_link(rnbd_devs_kobj, dev->blk_symlink_name);
++			module_put(THIS_MODULE);
++		}
++		/* It should be freed always. */
+ 		kfree(dev->blk_symlink_name);
+-		module_put(THIS_MODULE);
++		dev->blk_symlink_name = NULL;
+ 	}
+ }
+ 
+diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
+index 8cefa359fccd8..0d0386f67ffe2 100644
+--- a/drivers/bus/mhi/core/init.c
++++ b/drivers/bus/mhi/core/init.c
+@@ -544,6 +544,7 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ 	struct mhi_ring *buf_ring;
+ 	struct mhi_ring *tre_ring;
+ 	struct mhi_chan_ctxt *chan_ctxt;
++	u32 tmp;
+ 
+ 	buf_ring = &mhi_chan->buf_ring;
+ 	tre_ring = &mhi_chan->tre_ring;
+@@ -554,7 +555,19 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ 	vfree(buf_ring->base);
+ 
+ 	buf_ring->base = tre_ring->base = NULL;
++	tre_ring->ctxt_wp = NULL;
+ 	chan_ctxt->rbase = 0;
++	chan_ctxt->rlen = 0;
++	chan_ctxt->rp = 0;
++	chan_ctxt->wp = 0;
++
++	tmp = chan_ctxt->chcfg;
++	tmp &= ~CHAN_CTX_CHSTATE_MASK;
++	tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
++	chan_ctxt->chcfg = tmp;
++
++	/* Update to all cores */
++	smp_wmb();
+ }
+ 
+ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+@@ -1267,7 +1280,8 @@ static int mhi_driver_remove(struct device *dev)
+ 
+ 		mutex_lock(&mhi_chan->mutex);
+ 
+-		if (ch_state[dir] == MHI_CH_STATE_ENABLED &&
++		if ((ch_state[dir] == MHI_CH_STATE_ENABLED ||
++		     ch_state[dir] == MHI_CH_STATE_STOP) &&
+ 		    !mhi_chan->offload_ch)
+ 			mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+ 
+diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
+index 2cff5ddff2257..d86ce1a06b75c 100644
+--- a/drivers/bus/mhi/core/main.c
++++ b/drivers/bus/mhi/core/main.c
+@@ -220,10 +220,17 @@ static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
+ 	smp_wmb();
+ }
+ 
++static bool is_valid_ring_ptr(struct mhi_ring *ring, dma_addr_t addr)
++{
++	return addr >= ring->iommu_base && addr < ring->iommu_base + ring->len;
++}
++
+ int mhi_destroy_device(struct device *dev, void *data)
+ {
++	struct mhi_chan *ul_chan, *dl_chan;
+ 	struct mhi_device *mhi_dev;
+ 	struct mhi_controller *mhi_cntrl;
++	enum mhi_ee_type ee = MHI_EE_MAX;
+ 
+ 	if (dev->bus != &mhi_bus_type)
+ 		return 0;
+@@ -235,6 +242,17 @@ int mhi_destroy_device(struct device *dev, void *data)
+ 	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+ 		return 0;
+ 
++	ul_chan = mhi_dev->ul_chan;
++	dl_chan = mhi_dev->dl_chan;
++
++	/*
++	 * If execution environment is specified, remove only those devices that
++	 * started in them based on ee_mask for the channels as we move on to a
++	 * different execution environment
++	 */
++	if (data)
++		ee = *(enum mhi_ee_type *)data;
++
+ 	/*
+ 	 * For the suspend and resume case, this function will get called
+ 	 * without mhi_unregister_controller(). Hence, we need to drop the
+@@ -242,11 +260,19 @@ int mhi_destroy_device(struct device *dev, void *data)
+ 	 * be sure that there will be no instances of mhi_dev left after
+ 	 * this.
+ 	 */
+-	if (mhi_dev->ul_chan)
+-		put_device(&mhi_dev->ul_chan->mhi_dev->dev);
++	if (ul_chan) {
++		if (ee != MHI_EE_MAX && !(ul_chan->ee_mask & BIT(ee)))
++			return 0;
++
++		put_device(&ul_chan->mhi_dev->dev);
++	}
+ 
+-	if (mhi_dev->dl_chan)
+-		put_device(&mhi_dev->dl_chan->mhi_dev->dev);
++	if (dl_chan) {
++		if (ee != MHI_EE_MAX && !(dl_chan->ee_mask & BIT(ee)))
++			return 0;
++
++		put_device(&dl_chan->mhi_dev->dev);
++	}
+ 
+ 	dev_dbg(&mhi_cntrl->mhi_dev->dev, "destroy device for chan:%s\n",
+ 		 mhi_dev->name);
+@@ -349,7 +375,16 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
+ 	struct mhi_event_ctxt *er_ctxt =
+ 		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+ 	struct mhi_ring *ev_ring = &mhi_event->ring;
+-	void *dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
++	dma_addr_t ptr = er_ctxt->rp;
++	void *dev_rp;
++
++	if (!is_valid_ring_ptr(ev_ring, ptr)) {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Event ring rp points outside of the event ring\n");
++		return IRQ_HANDLED;
++	}
++
++	dev_rp = mhi_to_virtual(ev_ring, ptr);
+ 
+ 	/* Only proceed if event ring has pending events */
+ 	if (ev_ring->rp == dev_rp)
+@@ -498,6 +533,11 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+ 		struct mhi_buf_info *buf_info;
+ 		u16 xfer_len;
+ 
++		if (!is_valid_ring_ptr(tre_ring, ptr)) {
++			dev_err(&mhi_cntrl->mhi_dev->dev,
++				"Event element points outside of the tre ring\n");
++			break;
++		}
+ 		/* Get the TRB this event points to */
+ 		ev_tre = mhi_to_virtual(tre_ring, ptr);
+ 
+@@ -657,6 +697,12 @@ static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
+ 	struct mhi_chan *mhi_chan;
+ 	u32 chan;
+ 
++	if (!is_valid_ring_ptr(mhi_ring, ptr)) {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Event element points outside of the cmd ring\n");
++		return;
++	}
++
+ 	cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
+ 
+ 	chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
+@@ -681,6 +727,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+ 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ 	u32 chan;
+ 	int count = 0;
++	dma_addr_t ptr = er_ctxt->rp;
+ 
+ 	/*
+ 	 * This is a quick check to avoid unnecessary event processing
+@@ -690,7 +737,13 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+ 	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+ 		return -EIO;
+ 
+-	dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
++	if (!is_valid_ring_ptr(ev_ring, ptr)) {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Event ring rp points outside of the event ring\n");
++		return -EIO;
++	}
++
++	dev_rp = mhi_to_virtual(ev_ring, ptr);
+ 	local_rp = ev_ring->rp;
+ 
+ 	while (dev_rp != local_rp) {
+@@ -801,6 +854,8 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+ 			 */
+ 			if (chan < mhi_cntrl->max_chan) {
+ 				mhi_chan = &mhi_cntrl->mhi_chan[chan];
++				if (!mhi_chan->configured)
++					break;
+ 				parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
+ 				event_quota--;
+ 			}
+@@ -812,7 +867,15 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+ 
+ 		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+ 		local_rp = ev_ring->rp;
+-		dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
++
++		ptr = er_ctxt->rp;
++		if (!is_valid_ring_ptr(ev_ring, ptr)) {
++			dev_err(&mhi_cntrl->mhi_dev->dev,
++				"Event ring rp points outside of the event ring\n");
++			return -EIO;
++		}
++
++		dev_rp = mhi_to_virtual(ev_ring, ptr);
+ 		count++;
+ 	}
+ 
+@@ -835,11 +898,18 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+ 	int count = 0;
+ 	u32 chan;
+ 	struct mhi_chan *mhi_chan;
++	dma_addr_t ptr = er_ctxt->rp;
+ 
+ 	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+ 		return -EIO;
+ 
+-	dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
++	if (!is_valid_ring_ptr(ev_ring, ptr)) {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Event ring rp points outside of the event ring\n");
++		return -EIO;
++	}
++
++	dev_rp = mhi_to_virtual(ev_ring, ptr);
+ 	local_rp = ev_ring->rp;
+ 
+ 	while (dev_rp != local_rp && event_quota > 0) {
+@@ -853,7 +923,8 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+ 		 * Only process the event ring elements whose channel
+ 		 * ID is within the maximum supported range.
+ 		 */
+-		if (chan < mhi_cntrl->max_chan) {
++		if (chan < mhi_cntrl->max_chan &&
++		    mhi_cntrl->mhi_chan[chan].configured) {
+ 			mhi_chan = &mhi_cntrl->mhi_chan[chan];
+ 
+ 			if (likely(type == MHI_PKT_TYPE_TX_EVENT)) {
+@@ -867,7 +938,15 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+ 
+ 		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+ 		local_rp = ev_ring->rp;
+-		dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
++
++		ptr = er_ctxt->rp;
++		if (!is_valid_ring_ptr(ev_ring, ptr)) {
++			dev_err(&mhi_cntrl->mhi_dev->dev,
++				"Event ring rp points outside of the event ring\n");
++			return -EIO;
++		}
++
++		dev_rp = mhi_to_virtual(ev_ring, ptr);
+ 		count++;
+ 	}
+ 	read_lock_bh(&mhi_cntrl->pm_lock);
+@@ -1394,6 +1473,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
+ 	struct mhi_ring *ev_ring;
+ 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ 	unsigned long flags;
++	dma_addr_t ptr;
+ 
+ 	dev_dbg(dev, "Marking all events for chan: %d as stale\n", chan);
+ 
+@@ -1401,7 +1481,15 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
+ 
+ 	/* mark all stale events related to channel as STALE event */
+ 	spin_lock_irqsave(&mhi_event->lock, flags);
+-	dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
++
++	ptr = er_ctxt->rp;
++	if (!is_valid_ring_ptr(ev_ring, ptr)) {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Event ring rp points outside of the event ring\n");
++		dev_rp = ev_ring->rp;
++	} else {
++		dev_rp = mhi_to_virtual(ev_ring, ptr);
++	}
+ 
+ 	local_rp = ev_ring->rp;
+ 	while (dev_rp != local_rp) {
+diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
+index 3de7b1639ec6a..aeb895c084607 100644
+--- a/drivers/bus/mhi/core/pm.c
++++ b/drivers/bus/mhi/core/pm.c
+@@ -376,6 +376,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
+ {
+ 	struct mhi_event *mhi_event;
+ 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	enum mhi_ee_type current_ee = mhi_cntrl->ee;
+ 	int i, ret;
+ 
+ 	dev_dbg(dev, "Processing Mission Mode transition\n");
+@@ -390,6 +391,8 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
+ 
+ 	wake_up_all(&mhi_cntrl->state_event);
+ 
++	device_for_each_child(&mhi_cntrl->mhi_dev->dev, &current_ee,
++			      mhi_destroy_device);
+ 	mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_MISSION_MODE);
+ 
+ 	/* Force MHI to be in M0 state before continuing */
+@@ -992,7 +995,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
+ 							   &val) ||
+ 					!val,
+ 				msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-		if (ret) {
++		if (!ret) {
+ 			ret = -EIO;
+ 			dev_info(dev, "Failed to reset MHI due to syserr state\n");
+ 			goto error_bhi_offset;
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 16e389dce1118..9afbe4992a1dd 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -635,6 +635,51 @@ static int sysc_parse_and_check_child_range(struct sysc *ddata)
+ 	return 0;
+ }
+ 
++/* Interconnect instances to probe before l4_per instances */
++static struct resource early_bus_ranges[] = {
++	/* am3/4 l4_wkup */
++	{ .start = 0x44c00000, .end = 0x44c00000 + 0x300000, },
++	/* omap4/5 and dra7 l4_cfg */
++	{ .start = 0x4a000000, .end = 0x4a000000 + 0x300000, },
++	/* omap4 l4_wkup */
++	{ .start = 0x4a300000, .end = 0x4a300000 + 0x30000,  },
++	/* omap5 and dra7 l4_wkup without dra7 dcan segment */
++	{ .start = 0x4ae00000, .end = 0x4ae00000 + 0x30000,  },
++};
++
++static atomic_t sysc_defer = ATOMIC_INIT(10);
++
++/**
++ * sysc_defer_non_critical - defer non_critical interconnect probing
++ * @ddata: device driver data
++ *
++ * We want to probe l4_cfg and l4_wkup interconnect instances before any
++ * l4_per instances as l4_per instances depend on resources on l4_cfg and
++ * l4_wkup interconnects.
++ */
++static int sysc_defer_non_critical(struct sysc *ddata)
++{
++	struct resource *res;
++	int i;
++
++	if (!atomic_read(&sysc_defer))
++		return 0;
++
++	for (i = 0; i < ARRAY_SIZE(early_bus_ranges); i++) {
++		res = &early_bus_ranges[i];
++		if (ddata->module_pa >= res->start &&
++		    ddata->module_pa <= res->end) {
++			atomic_set(&sysc_defer, 0);
++
++			return 0;
++		}
++	}
++
++	atomic_dec_if_positive(&sysc_defer);
++
++	return -EPROBE_DEFER;
++}
++
+ static struct device_node *stdout_path;
+ 
+ static void sysc_init_stdout_path(struct sysc *ddata)
+@@ -859,6 +904,10 @@ static int sysc_map_and_check_registers(struct sysc *ddata)
+ 	if (error)
+ 		return error;
+ 
++	error = sysc_defer_non_critical(ddata);
++	if (error)
++		return error;
++
+ 	sysc_check_children(ddata);
+ 
+ 	error = sysc_parse_registers(ddata);
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index f462b9d2f5a52..340ad21491e28 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -819,7 +819,7 @@ static bool __init crng_init_try_arch_early(struct crng_state *crng)
+ 
+ static void __maybe_unused crng_initialize_secondary(struct crng_state *crng)
+ {
+-	memcpy(&crng->state[0], "expand 32-byte k", 16);
++	chacha_init_consts(crng->state);
+ 	_get_random_bytes(&crng->state[4], sizeof(__u32) * 12);
+ 	crng_init_try_arch(crng);
+ 	crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
+@@ -827,7 +827,7 @@ static void __maybe_unused crng_initialize_secondary(struct crng_state *crng)
+ 
+ static void __init crng_initialize_primary(struct crng_state *crng)
+ {
+-	memcpy(&crng->state[0], "expand 32-byte k", 16);
++	chacha_init_consts(crng->state);
+ 	_extract_entropy(&input_pool, &crng->state[4], sizeof(__u32) * 12, 0);
+ 	if (crng_init_try_arch_early(crng) && trust_cpu) {
+ 		invalidate_batched_entropy();
+diff --git a/drivers/char/tpm/eventlog/acpi.c b/drivers/char/tpm/eventlog/acpi.c
+index 3633ed70f48fa..1b18ce5ebab1e 100644
+--- a/drivers/char/tpm/eventlog/acpi.c
++++ b/drivers/char/tpm/eventlog/acpi.c
+@@ -41,6 +41,27 @@ struct acpi_tcpa {
+ 	};
+ };
+ 
++/* Check that the given log is indeed a TPM2 log. */
++static bool tpm_is_tpm2_log(void *bios_event_log, u64 len)
++{
++	struct tcg_efi_specid_event_head *efispecid;
++	struct tcg_pcr_event *event_header;
++	int n;
++
++	if (len < sizeof(*event_header))
++		return false;
++	len -= sizeof(*event_header);
++	event_header = bios_event_log;
++
++	if (len < sizeof(*efispecid))
++		return false;
++	efispecid = (struct tcg_efi_specid_event_head *)event_header->event;
++
++	n = memcmp(efispecid->signature, TCG_SPECID_SIG,
++		   sizeof(TCG_SPECID_SIG));
++	return n == 0;
++}
++
+ /* read binary bios log */
+ int tpm_read_log_acpi(struct tpm_chip *chip)
+ {
+@@ -52,6 +73,7 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
+ 	struct acpi_table_tpm2 *tbl;
+ 	struct acpi_tpm2_phy *tpm2_phy;
+ 	int format;
++	int ret;
+ 
+ 	log = &chip->log;
+ 
+@@ -112,6 +134,7 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
+ 
+ 	log->bios_event_log_end = log->bios_event_log + len;
+ 
++	ret = -EIO;
+ 	virt = acpi_os_map_iomem(start, len);
+ 	if (!virt)
+ 		goto err;
+@@ -119,11 +142,19 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
+ 	memcpy_fromio(log->bios_event_log, virt, len);
+ 
+ 	acpi_os_unmap_iomem(virt, len);
++
++	if (chip->flags & TPM_CHIP_FLAG_TPM2 &&
++	    !tpm_is_tpm2_log(log->bios_event_log, len)) {
++		/* try EFI log next */
++		ret = -ENODEV;
++		goto err;
++	}
++
+ 	return format;
+ 
+ err:
+ 	kfree(log->bios_event_log);
+ 	log->bios_event_log = NULL;
+-	return -EIO;
++	return ret;
+ 
+ }
+diff --git a/drivers/char/tpm/eventlog/common.c b/drivers/char/tpm/eventlog/common.c
+index 7460f230bae4c..8512ec76d5260 100644
+--- a/drivers/char/tpm/eventlog/common.c
++++ b/drivers/char/tpm/eventlog/common.c
+@@ -107,6 +107,9 @@ void tpm_bios_log_setup(struct tpm_chip *chip)
+ 	int log_version;
+ 	int rc = 0;
+ 
++	if (chip->flags & TPM_CHIP_FLAG_VIRTUAL)
++		return;
++
+ 	rc = tpm_read_log(chip);
+ 	if (rc < 0)
+ 		return;
+diff --git a/drivers/char/tpm/eventlog/efi.c b/drivers/char/tpm/eventlog/efi.c
+index 35229e5143cac..e6cb9d525e30c 100644
+--- a/drivers/char/tpm/eventlog/efi.c
++++ b/drivers/char/tpm/eventlog/efi.c
+@@ -17,6 +17,7 @@ int tpm_read_log_efi(struct tpm_chip *chip)
+ {
+ 
+ 	struct efi_tcg2_final_events_table *final_tbl = NULL;
++	int final_events_log_size = efi_tpm_final_log_size;
+ 	struct linux_efi_tpm_eventlog *log_tbl;
+ 	struct tpm_bios_log *log;
+ 	u32 log_size;
+@@ -66,12 +67,12 @@ int tpm_read_log_efi(struct tpm_chip *chip)
+ 	ret = tpm_log_version;
+ 
+ 	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR ||
+-	    efi_tpm_final_log_size == 0 ||
++	    final_events_log_size == 0 ||
+ 	    tpm_log_version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2)
+ 		goto out;
+ 
+ 	final_tbl = memremap(efi.tpm_final_log,
+-			     sizeof(*final_tbl) + efi_tpm_final_log_size,
++			     sizeof(*final_tbl) + final_events_log_size,
+ 			     MEMREMAP_WB);
+ 	if (!final_tbl) {
+ 		pr_err("Could not map UEFI TPM final log\n");
+@@ -80,10 +81,18 @@ int tpm_read_log_efi(struct tpm_chip *chip)
+ 		goto out;
+ 	}
+ 
+-	efi_tpm_final_log_size -= log_tbl->final_events_preboot_size;
++	/*
++	 * The 'final events log' size excludes the 'final events preboot log'
++	 * at its beginning.
++	 */
++	final_events_log_size -= log_tbl->final_events_preboot_size;
+ 
++	/*
++	 * Allocate memory for the 'combined log' where we will append the
++	 * 'final events log' to.
++	 */
+ 	tmp = krealloc(log->bios_event_log,
+-		       log_size + efi_tpm_final_log_size,
++		       log_size + final_events_log_size,
+ 		       GFP_KERNEL);
+ 	if (!tmp) {
+ 		kfree(log->bios_event_log);
+@@ -94,15 +103,19 @@ int tpm_read_log_efi(struct tpm_chip *chip)
+ 	log->bios_event_log = tmp;
+ 
+ 	/*
+-	 * Copy any of the final events log that didn't also end up in the
+-	 * main log. Events can be logged in both if events are generated
++	 * Append any of the 'final events log' that didn't also end up in the
++	 * 'main log'. Events can be logged in both if events are generated
+ 	 * between GetEventLog() and ExitBootServices().
+ 	 */
+ 	memcpy((void *)log->bios_event_log + log_size,
+ 	       final_tbl->events + log_tbl->final_events_preboot_size,
+-	       efi_tpm_final_log_size);
++	       final_events_log_size);
++	/*
++	 * The size of the 'combined log' is the size of the 'main log' plus
++	 * the size of the 'final events log'.
++	 */
+ 	log->bios_event_log_end = log->bios_event_log +
+-		log_size + efi_tpm_final_log_size;
++		log_size + final_events_log_size;
+ 
+ out:
+ 	memunmap(final_tbl);
+diff --git a/drivers/clk/socfpga/clk-gate-a10.c b/drivers/clk/socfpga/clk-gate-a10.c
+index cd5df91036142..d62778884208c 100644
+--- a/drivers/clk/socfpga/clk-gate-a10.c
++++ b/drivers/clk/socfpga/clk-gate-a10.c
+@@ -146,6 +146,7 @@ static void __init __socfpga_gate_init(struct device_node *node,
+ 		if (IS_ERR(socfpga_clk->sys_mgr_base_addr)) {
+ 			pr_err("%s: failed to find altr,sys-mgr regmap!\n",
+ 					__func__);
++			kfree(socfpga_clk);
+ 			return;
+ 		}
+ 	}
+diff --git a/drivers/cpuidle/cpuidle-tegra.c b/drivers/cpuidle/cpuidle-tegra.c
+index 191966dc8d023..29c5e83500d33 100644
+--- a/drivers/cpuidle/cpuidle-tegra.c
++++ b/drivers/cpuidle/cpuidle-tegra.c
+@@ -135,13 +135,13 @@ static int tegra_cpuidle_c7_enter(void)
+ {
+ 	int err;
+ 
+-	if (tegra_cpuidle_using_firmware()) {
+-		err = call_firmware_op(prepare_idle, TF_PM_MODE_LP2_NOFLUSH_L2);
+-		if (err)
+-			return err;
++	err = call_firmware_op(prepare_idle, TF_PM_MODE_LP2_NOFLUSH_L2);
++	if (err && err != -ENOSYS)
++		return err;
+ 
+-		return call_firmware_op(do_idle, 0);
+-	}
++	err = call_firmware_op(do_idle, 0);
++	if (err != -ENOSYS)
++		return err;
+ 
+ 	return cpu_suspend(0, tegra30_pm_secondary_cpu_suspend);
+ }
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
+index 158422ff5695c..00194d1d9ae69 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
+@@ -932,7 +932,7 @@ static int sun8i_ce_probe(struct platform_device *pdev)
+ 	if (err)
+ 		goto error_alg;
+ 
+-	err = pm_runtime_get_sync(ce->dev);
++	err = pm_runtime_resume_and_get(ce->dev);
+ 	if (err < 0)
+ 		goto error_alg;
+ 
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index ed2a69f82e1c1..7c355bc2fb066 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -351,7 +351,7 @@ int sun8i_ss_cipher_init(struct crypto_tfm *tfm)
+ 	op->enginectx.op.prepare_request = NULL;
+ 	op->enginectx.op.unprepare_request = NULL;
+ 
+-	err = pm_runtime_get_sync(op->ss->dev);
++	err = pm_runtime_resume_and_get(op->ss->dev);
+ 	if (err < 0) {
+ 		dev_err(op->ss->dev, "pm error %d\n", err);
+ 		goto error_pm;
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+index e0ddc684798dc..80e89066dbd1a 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+@@ -753,7 +753,7 @@ static int sun8i_ss_probe(struct platform_device *pdev)
+ 	if (err)
+ 		goto error_alg;
+ 
+-	err = pm_runtime_get_sync(ss->dev);
++	err = pm_runtime_resume_and_get(ss->dev);
+ 	if (err < 0)
+ 		goto error_alg;
+ 
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index bb493423668cc..41f1fcacb2809 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -544,7 +544,7 @@ static int sec_skcipher_init(struct crypto_skcipher *tfm)
+ 	crypto_skcipher_set_reqsize(tfm, sizeof(struct sec_req));
+ 	ctx->c_ctx.ivsize = crypto_skcipher_ivsize(tfm);
+ 	if (ctx->c_ctx.ivsize > SEC_IV_SIZE) {
+-		dev_err(SEC_CTX_DEV(ctx), "get error skcipher iv size!\n");
++		pr_err("get error skcipher iv size!\n");
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
+index 1b1e0ab0a831a..0dd4c6b157de9 100644
+--- a/drivers/crypto/omap-aes.c
++++ b/drivers/crypto/omap-aes.c
+@@ -103,7 +103,7 @@ static int omap_aes_hw_init(struct omap_aes_dev *dd)
+ 		dd->err = 0;
+ 	}
+ 
+-	err = pm_runtime_get_sync(dd->dev);
++	err = pm_runtime_resume_and_get(dd->dev);
+ 	if (err < 0) {
+ 		dev_err(dd->dev, "failed to get sync: %d\n", err);
+ 		return err;
+@@ -1133,7 +1133,7 @@ static int omap_aes_probe(struct platform_device *pdev)
+ 	pm_runtime_set_autosuspend_delay(dev, DEFAULT_AUTOSUSPEND_DELAY);
+ 
+ 	pm_runtime_enable(dev);
+-	err = pm_runtime_get_sync(dev);
++	err = pm_runtime_resume_and_get(dev);
+ 	if (err < 0) {
+ 		dev_err(dev, "%s: failed to get_sync(%d)\n",
+ 			__func__, err);
+@@ -1302,7 +1302,7 @@ static int omap_aes_suspend(struct device *dev)
+ 
+ static int omap_aes_resume(struct device *dev)
+ {
+-	pm_runtime_get_sync(dev);
++	pm_runtime_resume_and_get(dev);
+ 	return 0;
+ }
+ #endif
+diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
+index d552dbcfe0a07..06abe1e2074e9 100644
+--- a/drivers/crypto/qat/qat_common/qat_algs.c
++++ b/drivers/crypto/qat/qat_common/qat_algs.c
+@@ -670,7 +670,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+ 	struct qat_alg_buf_list *bufl;
+ 	struct qat_alg_buf_list *buflout = NULL;
+ 	dma_addr_t blp;
+-	dma_addr_t bloutp = 0;
++	dma_addr_t bloutp;
+ 	struct scatterlist *sg;
+ 	size_t sz_out, sz = struct_size(bufl, bufers, n + 1);
+ 
+@@ -682,6 +682,9 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+ 	if (unlikely(!bufl))
+ 		return -ENOMEM;
+ 
++	for_each_sg(sgl, sg, n, i)
++		bufl->bufers[i].addr = DMA_MAPPING_ERROR;
++
+ 	blp = dma_map_single(dev, bufl, sz, DMA_TO_DEVICE);
+ 	if (unlikely(dma_mapping_error(dev, blp)))
+ 		goto err_in;
+@@ -715,10 +718,14 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+ 				       dev_to_node(&GET_DEV(inst->accel_dev)));
+ 		if (unlikely(!buflout))
+ 			goto err_in;
++
++		bufers = buflout->bufers;
++		for_each_sg(sglout, sg, n, i)
++			bufers[i].addr = DMA_MAPPING_ERROR;
++
+ 		bloutp = dma_map_single(dev, buflout, sz_out, DMA_TO_DEVICE);
+ 		if (unlikely(dma_mapping_error(dev, bloutp)))
+ 			goto err_out;
+-		bufers = buflout->bufers;
+ 		for_each_sg(sglout, sg, n, i) {
+ 			int y = sg_nctr;
+ 
+diff --git a/drivers/crypto/sa2ul.c b/drivers/crypto/sa2ul.c
+index eda93fab95fe2..39d56ab12f275 100644
+--- a/drivers/crypto/sa2ul.c
++++ b/drivers/crypto/sa2ul.c
+@@ -2345,7 +2345,7 @@ static int sa_ul_probe(struct platform_device *pdev)
+ 	dev_set_drvdata(sa_k3_dev, dev_data);
+ 
+ 	pm_runtime_enable(dev);
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "%s: failed to get sync: %d\n", __func__,
+ 			ret);
+diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
+index 2670c30332fad..7999b26a16ed0 100644
+--- a/drivers/crypto/stm32/stm32-cryp.c
++++ b/drivers/crypto/stm32/stm32-cryp.c
+@@ -542,7 +542,7 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
+ 	int ret;
+ 	u32 cfg, hw_mode;
+ 
+-	pm_runtime_get_sync(cryp->dev);
++	pm_runtime_resume_and_get(cryp->dev);
+ 
+ 	/* Disable interrupt */
+ 	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+@@ -2043,7 +2043,7 @@ static int stm32_cryp_remove(struct platform_device *pdev)
+ 	if (!cryp)
+ 		return -ENODEV;
+ 
+-	ret = pm_runtime_get_sync(cryp->dev);
++	ret = pm_runtime_resume_and_get(cryp->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
+index e3e25278a970c..ff5362da118d8 100644
+--- a/drivers/crypto/stm32/stm32-hash.c
++++ b/drivers/crypto/stm32/stm32-hash.c
+@@ -812,7 +812,7 @@ static void stm32_hash_finish_req(struct ahash_request *req, int err)
+ static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
+ 			      struct stm32_hash_request_ctx *rctx)
+ {
+-	pm_runtime_get_sync(hdev->dev);
++	pm_runtime_resume_and_get(hdev->dev);
+ 
+ 	if (!(HASH_FLAGS_INIT & hdev->flags)) {
+ 		stm32_hash_write(hdev, HASH_CR, HASH_CR_INIT);
+@@ -961,7 +961,7 @@ static int stm32_hash_export(struct ahash_request *req, void *out)
+ 	u32 *preg;
+ 	unsigned int i;
+ 
+-	pm_runtime_get_sync(hdev->dev);
++	pm_runtime_resume_and_get(hdev->dev);
+ 
+ 	while ((stm32_hash_read(hdev, HASH_SR) & HASH_SR_BUSY))
+ 		cpu_relax();
+@@ -999,7 +999,7 @@ static int stm32_hash_import(struct ahash_request *req, const void *in)
+ 
+ 	preg = rctx->hw_context;
+ 
+-	pm_runtime_get_sync(hdev->dev);
++	pm_runtime_resume_and_get(hdev->dev);
+ 
+ 	stm32_hash_write(hdev, HASH_IMR, *preg++);
+ 	stm32_hash_write(hdev, HASH_STR, *preg++);
+@@ -1565,7 +1565,7 @@ static int stm32_hash_remove(struct platform_device *pdev)
+ 	if (!hdev)
+ 		return -ENODEV;
+ 
+-	ret = pm_runtime_get_sync(hdev->dev);
++	ret = pm_runtime_resume_and_get(hdev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/extcon/extcon-arizona.c b/drivers/extcon/extcon-arizona.c
+index aae82db542a5e..76aacbac5869d 100644
+--- a/drivers/extcon/extcon-arizona.c
++++ b/drivers/extcon/extcon-arizona.c
+@@ -601,7 +601,7 @@ static irqreturn_t arizona_hpdet_irq(int irq, void *data)
+ 	struct arizona *arizona = info->arizona;
+ 	int id_gpio = arizona->pdata.hpdet_id_gpio;
+ 	unsigned int report = EXTCON_JACK_HEADPHONE;
+-	int ret, reading;
++	int ret, reading, state;
+ 	bool mic = false;
+ 
+ 	mutex_lock(&info->lock);
+@@ -614,12 +614,11 @@ static irqreturn_t arizona_hpdet_irq(int irq, void *data)
+ 	}
+ 
+ 	/* If the cable was removed while measuring ignore the result */
+-	ret = extcon_get_state(info->edev, EXTCON_MECHANICAL);
+-	if (ret < 0) {
+-		dev_err(arizona->dev, "Failed to check cable state: %d\n",
+-			ret);
++	state = extcon_get_state(info->edev, EXTCON_MECHANICAL);
++	if (state < 0) {
++		dev_err(arizona->dev, "Failed to check cable state: %d\n", state);
+ 		goto out;
+-	} else if (!ret) {
++	} else if (!state) {
+ 		dev_dbg(arizona->dev, "Ignoring HPDET for removed cable\n");
+ 		goto done;
+ 	}
+@@ -667,7 +666,7 @@ done:
+ 		gpio_set_value_cansleep(id_gpio, 0);
+ 
+ 	/* If we have a mic then reenable MICDET */
+-	if (mic || info->mic)
++	if (state && (mic || info->mic))
+ 		arizona_start_mic(info);
+ 
+ 	if (info->hpdet_active) {
+@@ -675,7 +674,9 @@ done:
+ 		info->hpdet_active = false;
+ 	}
+ 
+-	info->hpdet_done = true;
++	/* Do not set hp_det done when the cable has been unplugged */
++	if (state)
++		info->hpdet_done = true;
+ 
+ out:
+ 	mutex_unlock(&info->lock);
+@@ -1759,25 +1760,6 @@ static int arizona_extcon_remove(struct platform_device *pdev)
+ 	bool change;
+ 	int ret;
+ 
+-	ret = regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
+-				       ARIZONA_MICD_ENA, 0,
+-				       &change);
+-	if (ret < 0) {
+-		dev_err(&pdev->dev, "Failed to disable micd on remove: %d\n",
+-			ret);
+-	} else if (change) {
+-		regulator_disable(info->micvdd);
+-		pm_runtime_put(info->dev);
+-	}
+-
+-	gpiod_put(info->micd_pol_gpio);
+-
+-	pm_runtime_disable(&pdev->dev);
+-
+-	regmap_update_bits(arizona->regmap,
+-			   ARIZONA_MICD_CLAMP_CONTROL,
+-			   ARIZONA_MICD_CLAMP_MODE_MASK, 0);
+-
+ 	if (info->micd_clamp) {
+ 		jack_irq_rise = ARIZONA_IRQ_MICD_CLAMP_RISE;
+ 		jack_irq_fall = ARIZONA_IRQ_MICD_CLAMP_FALL;
+@@ -1793,10 +1775,31 @@ static int arizona_extcon_remove(struct platform_device *pdev)
+ 	arizona_free_irq(arizona, jack_irq_rise, info);
+ 	arizona_free_irq(arizona, jack_irq_fall, info);
+ 	cancel_delayed_work_sync(&info->hpdet_work);
++	cancel_delayed_work_sync(&info->micd_detect_work);
++	cancel_delayed_work_sync(&info->micd_timeout_work);
++
++	ret = regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
++				       ARIZONA_MICD_ENA, 0,
++				       &change);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Failed to disable micd on remove: %d\n",
++			ret);
++	} else if (change) {
++		regulator_disable(info->micvdd);
++		pm_runtime_put(info->dev);
++	}
++
++	regmap_update_bits(arizona->regmap,
++			   ARIZONA_MICD_CLAMP_CONTROL,
++			   ARIZONA_MICD_CLAMP_MODE_MASK, 0);
+ 	regmap_update_bits(arizona->regmap, ARIZONA_JACK_DETECT_ANALOGUE,
+ 			   ARIZONA_JD1_ENA, 0);
+ 	arizona_clk32k_disable(arizona);
+ 
++	gpiod_put(info->micd_pol_gpio);
++
++	pm_runtime_disable(&pdev->dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
+index 8a94388e38b33..a2ae9c3b95793 100644
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -13,7 +13,8 @@ cflags-$(CONFIG_X86)		+= -m$(BITS) -D__KERNEL__ \
+ 				   -Wno-pointer-sign \
+ 				   $(call cc-disable-warning, address-of-packed-member) \
+ 				   $(call cc-disable-warning, gnu) \
+-				   -fno-asynchronous-unwind-tables
++				   -fno-asynchronous-unwind-tables \
++				   $(CLANG_FLAGS)
+ 
+ # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
+ # disable the stackleak plugin
+diff --git a/drivers/fpga/dfl-pci.c b/drivers/fpga/dfl-pci.c
+index a2203d03c9e2b..bc108ee8e9eb4 100644
+--- a/drivers/fpga/dfl-pci.c
++++ b/drivers/fpga/dfl-pci.c
+@@ -61,14 +61,16 @@ static void cci_pci_free_irq(struct pci_dev *pcidev)
+ }
+ 
+ /* PCI Device ID */
+-#define PCIE_DEVICE_ID_PF_INT_5_X	0xBCBD
+-#define PCIE_DEVICE_ID_PF_INT_6_X	0xBCC0
+-#define PCIE_DEVICE_ID_PF_DSC_1_X	0x09C4
+-#define PCIE_DEVICE_ID_INTEL_PAC_N3000	0x0B30
++#define PCIE_DEVICE_ID_PF_INT_5_X		0xBCBD
++#define PCIE_DEVICE_ID_PF_INT_6_X		0xBCC0
++#define PCIE_DEVICE_ID_PF_DSC_1_X		0x09C4
++#define PCIE_DEVICE_ID_INTEL_PAC_N3000		0x0B30
++#define PCIE_DEVICE_ID_INTEL_PAC_D5005		0x0B2B
+ /* VF Device */
+-#define PCIE_DEVICE_ID_VF_INT_5_X	0xBCBF
+-#define PCIE_DEVICE_ID_VF_INT_6_X	0xBCC1
+-#define PCIE_DEVICE_ID_VF_DSC_1_X	0x09C5
++#define PCIE_DEVICE_ID_VF_INT_5_X		0xBCBF
++#define PCIE_DEVICE_ID_VF_INT_6_X		0xBCC1
++#define PCIE_DEVICE_ID_VF_DSC_1_X		0x09C5
++#define PCIE_DEVICE_ID_INTEL_PAC_D5005_VF	0x0B2C
+ 
+ static struct pci_device_id cci_pcie_id_tbl[] = {
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_INT_5_X),},
+@@ -78,6 +80,8 @@ static struct pci_device_id cci_pcie_id_tbl[] = {
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_DSC_1_X),},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_DSC_1_X),},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_INTEL_PAC_N3000),},
++	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_INTEL_PAC_D5005),},
++	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_INTEL_PAC_D5005_VF),},
+ 	{0,}
+ };
+ MODULE_DEVICE_TABLE(pci, cci_pcie_id_tbl);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 76d10f1c579ba..7f2689d4b86da 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3551,6 +3551,7 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
+ {
+ 	dev_info(adev->dev, "amdgpu: finishing device.\n");
+ 	flush_delayed_work(&adev->delayed_init_work);
++	ttm_bo_lock_delayed_workqueue(&adev->mman.bdev);
+ 	adev->shutdown = true;
+ 
+ 	kfree(adev->pci_state);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+index fe2d495d08ab0..d07c458c0bedb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+@@ -532,6 +532,8 @@ void amdgpu_fence_driver_fini(struct amdgpu_device *adev)
+ 
+ 		if (!ring || !ring->fence_drv.initialized)
+ 			continue;
++		if (!ring->no_scheduler)
++			drm_sched_fini(&ring->sched);
+ 		r = amdgpu_fence_wait_empty(ring);
+ 		if (r) {
+ 			/* no need to trigger GPU reset as we are unloading */
+@@ -540,8 +542,7 @@ void amdgpu_fence_driver_fini(struct amdgpu_device *adev)
+ 		if (ring->fence_drv.irq_src)
+ 			amdgpu_irq_put(adev, ring->fence_drv.irq_src,
+ 				       ring->fence_drv.irq_type);
+-		if (!ring->no_scheduler)
+-			drm_sched_fini(&ring->sched);
++
+ 		del_timer_sync(&ring->fence_drv.fallback_timer);
+ 		for (j = 0; j <= ring->fence_drv.num_fences_mask; ++j)
+ 			dma_fence_put(ring->fence_drv.fences[j]);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+index 300ac73b47382..2f70fdd6104f2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+@@ -499,7 +499,7 @@ void amdgpu_irq_gpu_reset_resume_helper(struct amdgpu_device *adev)
+ 		for (j = 0; j < AMDGPU_MAX_IRQ_SRC_ID; ++j) {
+ 			struct amdgpu_irq_src *src = adev->irq.client[i].sources[j];
+ 
+-			if (!src)
++			if (!src || !src->funcs || !src->funcs->set)
+ 				continue;
+ 			for (k = 0; k < src->num_types; k++)
+ 				amdgpu_irq_update(adev, src, k);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index a0248d78190f2..ab7755a3885a6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -1034,7 +1034,7 @@ static void amdgpu_ttm_tt_unpin_userptr(struct ttm_bo_device *bdev,
+ 		DMA_BIDIRECTIONAL : DMA_TO_DEVICE;
+ 
+ 	/* double check that we don't free the table twice */
+-	if (!ttm->sg->sgl)
++	if (!ttm->sg || !ttm->sg->sgl)
+ 		return;
+ 
+ 	/* unmap the pages mapped to the device */
+@@ -1254,13 +1254,13 @@ static void amdgpu_ttm_backend_unbind(struct ttm_bo_device *bdev,
+ 	struct amdgpu_ttm_tt *gtt = (void *)ttm;
+ 	int r;
+ 
+-	if (!gtt->bound)
+-		return;
+-
+ 	/* if the pages have userptr pinning then clear that first */
+ 	if (gtt->userptr)
+ 		amdgpu_ttm_tt_unpin_userptr(bdev, ttm);
+ 
++	if (!gtt->bound)
++		return;
++
+ 	if (gtt->offset == AMDGPU_BO_INVALID_OFFSET)
+ 		return;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+index f8bebf18ee362..665ead139c302 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+@@ -259,7 +259,7 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
+ 		if ((adev->asic_type == CHIP_POLARIS10 ||
+ 		     adev->asic_type == CHIP_POLARIS11) &&
+ 		    (adev->uvd.fw_version < FW_1_66_16))
+-			DRM_ERROR("POLARIS10/11 UVD firmware version %hu.%hu is too old.\n",
++			DRM_ERROR("POLARIS10/11 UVD firmware version %u.%u is too old.\n",
+ 				  version_major, version_minor);
+ 	} else {
+ 		unsigned int enc_major, enc_minor, dec_minor;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+index 1162913c8bf42..0526dec1d736e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+@@ -465,15 +465,22 @@ int amdgpu_xgmi_update_topology(struct amdgpu_hive_info *hive, struct amdgpu_dev
+ }
+ 
+ 
++/*
++ * NOTE psp_xgmi_node_info.num_hops layout is as follows:
++ * num_hops[7:6] = link type (0 = xGMI2, 1 = xGMI3, 2/3 = reserved)
++ * num_hops[5:3] = reserved
++ * num_hops[2:0] = number of hops
++ */
+ int amdgpu_xgmi_get_hops_count(struct amdgpu_device *adev,
+ 		struct amdgpu_device *peer_adev)
+ {
+ 	struct psp_xgmi_topology_info *top = &adev->psp.xgmi_context.top_info;
++	uint8_t num_hops_mask = 0x7;
+ 	int i;
+ 
+ 	for (i = 0 ; i < top->num_nodes; ++i)
+ 		if (top->nodes[i].node_id == peer_adev->gmc.xgmi.node_id)
+-			return top->nodes[i].num_hops;
++			return top->nodes[i].num_hops & num_hops_mask;
+ 	return	-EINVAL;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_debugfs.c b/drivers/gpu/drm/amd/amdkfd/kfd_debugfs.c
+index 511712c2e382d..673d5e34f213c 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_debugfs.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_debugfs.c
+@@ -33,6 +33,11 @@ static int kfd_debugfs_open(struct inode *inode, struct file *file)
+ 
+ 	return single_open(file, show, NULL);
+ }
++static int kfd_debugfs_hang_hws_read(struct seq_file *m, void *data)
++{
++	seq_printf(m, "echo gpu_id > hang_hws\n");
++	return 0;
++}
+ 
+ static ssize_t kfd_debugfs_hang_hws_write(struct file *file,
+ 	const char __user *user_buf, size_t size, loff_t *ppos)
+@@ -94,7 +99,7 @@ void kfd_debugfs_init(void)
+ 	debugfs_create_file("rls", S_IFREG | 0444, debugfs_root,
+ 			    kfd_debugfs_rls_by_device, &kfd_debugfs_fops);
+ 	debugfs_create_file("hang_hws", S_IFREG | 0200, debugfs_root,
+-			    NULL, &kfd_debugfs_hang_hws_fops);
++			    kfd_debugfs_hang_hws_read, &kfd_debugfs_hang_hws_fops);
+ }
+ 
+ void kfd_debugfs_fini(void)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index 8e5cfb1f8a512..6ea8a4b6efde3 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -1128,6 +1128,9 @@ static int set_sched_resources(struct device_queue_manager *dqm)
+ 
+ static int initialize_cpsch(struct device_queue_manager *dqm)
+ {
++	uint64_t num_sdma_queues;
++	uint64_t num_xgmi_sdma_queues;
++
+ 	pr_debug("num of pipes: %d\n", get_pipes_per_mec(dqm));
+ 
+ 	mutex_init(&dqm->lock_hidden);
+@@ -1136,8 +1139,18 @@ static int initialize_cpsch(struct device_queue_manager *dqm)
+ 	dqm->active_cp_queue_count = 0;
+ 	dqm->gws_queue_count = 0;
+ 	dqm->active_runlist = false;
+-	dqm->sdma_bitmap = ~0ULL >> (64 - get_num_sdma_queues(dqm));
+-	dqm->xgmi_sdma_bitmap = ~0ULL >> (64 - get_num_xgmi_sdma_queues(dqm));
++
++	num_sdma_queues = get_num_sdma_queues(dqm);
++	if (num_sdma_queues >= BITS_PER_TYPE(dqm->sdma_bitmap))
++		dqm->sdma_bitmap = ULLONG_MAX;
++	else
++		dqm->sdma_bitmap = (BIT_ULL(num_sdma_queues) - 1);
++
++	num_xgmi_sdma_queues = get_num_xgmi_sdma_queues(dqm);
++	if (num_xgmi_sdma_queues >= BITS_PER_TYPE(dqm->xgmi_sdma_bitmap))
++		dqm->xgmi_sdma_bitmap = ULLONG_MAX;
++	else
++		dqm->xgmi_sdma_bitmap = (BIT_ULL(num_xgmi_sdma_queues) - 1);
+ 
+ 	INIT_WORK(&dqm->hw_exception_work, kfd_process_hw_exception);
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index c07737c456776..d18341b7daacd 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -5328,6 +5328,15 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 
+ 	} while (stream == NULL && requested_bpc >= 6);
+ 
++	if (dc_result == DC_FAIL_ENC_VALIDATE && !aconnector->force_yuv420_output) {
++		DRM_DEBUG_KMS("Retry forcing YCbCr420 encoding\n");
++
++		aconnector->force_yuv420_output = true;
++		stream = create_validate_stream_for_sink(aconnector, drm_mode,
++						dm_state, old_stream);
++		aconnector->force_yuv420_output = false;
++	}
++
+ 	return stream;
+ }
+ 
+@@ -6800,10 +6809,6 @@ static int get_cursor_position(struct drm_plane *plane, struct drm_crtc *crtc,
+ 	int x, y;
+ 	int xorigin = 0, yorigin = 0;
+ 
+-	position->enable = false;
+-	position->x = 0;
+-	position->y = 0;
+-
+ 	if (!crtc || !plane->state->fb)
+ 		return 0;
+ 
+@@ -6850,7 +6855,7 @@ static void handle_cursor_update(struct drm_plane *plane,
+ 	struct dm_crtc_state *crtc_state = crtc ? to_dm_crtc_state(crtc->state) : NULL;
+ 	struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
+ 	uint64_t address = afb ? afb->address : 0;
+-	struct dc_cursor_position position;
++	struct dc_cursor_position position = {0};
+ 	struct dc_cursor_attributes attributes;
+ 	int ret;
+ 
+@@ -8659,7 +8664,7 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 	}
+ 
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
+-	if (adev->asic_type >= CHIP_NAVI10) {
++	if (dc_resource_is_dsc_encoding_supported(dc)) {
+ 		for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+ 			if (drm_atomic_crtc_needs_modeset(new_crtc_state)) {
+ 				ret = add_affected_mst_dsc_crtcs(state, crtc);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index a8a0e8cb1a118..1df7f1b180496 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -68,18 +68,6 @@ struct common_irq_params {
+ 	enum dc_irq_source irq_src;
+ };
+ 
+-/**
+- * struct irq_list_head - Linked-list for low context IRQ handlers.
+- *
+- * @head: The list_head within &struct handler_data
+- * @work: A work_struct containing the deferred handler work
+- */
+-struct irq_list_head {
+-	struct list_head head;
+-	/* In case this interrupt needs post-processing, 'work' will be queued*/
+-	struct work_struct work;
+-};
+-
+ /**
+  * struct dm_compressor_info - Buffer info used by frame buffer compression
+  * @cpu_addr: MMIO cpu addr
+@@ -270,7 +258,7 @@ struct amdgpu_display_manager {
+ 	 * Note that handlers are called in the same order as they were
+ 	 * registered (FIFO).
+ 	 */
+-	struct irq_list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
++	struct list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
+ 
+ 	/**
+ 	 * @irq_handler_list_high_tab:
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index 8cd646eef096c..e02a55fc1382f 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -150,7 +150,7 @@ static int parse_write_buffer_into_params(char *wr_buf, uint32_t wr_buf_size,
+  *
+  * --- to get dp configuration
+  *
+- * cat link_settings
++ * cat /sys/kernel/debug/dri/0/DP-x/link_settings
+  *
+  * It will list current, verified, reported, preferred dp configuration.
+  * current -- for current video mode
+@@ -163,7 +163,7 @@ static int parse_write_buffer_into_params(char *wr_buf, uint32_t wr_buf_size,
+  * echo <lane_count>  <link_rate> > link_settings
+  *
+  * for example, to force to  2 lane, 2.7GHz,
+- * echo 4 0xa > link_settings
++ * echo 4 0xa > /sys/kernel/debug/dri/0/DP-x/link_settings
+  *
+  * spread_spectrum could not be changed dynamically.
+  *
+@@ -171,7 +171,7 @@ static int parse_write_buffer_into_params(char *wr_buf, uint32_t wr_buf_size,
+  * done. please check link settings after force operation to see if HW get
+  * programming.
+  *
+- * cat link_settings
++ * cat /sys/kernel/debug/dri/0/DP-x/link_settings
+  *
+  * check current and preferred settings.
+  *
+@@ -255,7 +255,7 @@ static ssize_t dp_link_settings_write(struct file *f, const char __user *buf,
+ 	int max_param_num = 2;
+ 	uint8_t param_nums = 0;
+ 	long param[2];
+-	bool valid_input = false;
++	bool valid_input = true;
+ 
+ 	if (size == 0)
+ 		return -EINVAL;
+@@ -282,9 +282,9 @@ static ssize_t dp_link_settings_write(struct file *f, const char __user *buf,
+ 	case LANE_COUNT_ONE:
+ 	case LANE_COUNT_TWO:
+ 	case LANE_COUNT_FOUR:
+-		valid_input = true;
+ 		break;
+ 	default:
++		valid_input = false;
+ 		break;
+ 	}
+ 
+@@ -294,9 +294,9 @@ static ssize_t dp_link_settings_write(struct file *f, const char __user *buf,
+ 	case LINK_RATE_RBR2:
+ 	case LINK_RATE_HIGH2:
+ 	case LINK_RATE_HIGH3:
+-		valid_input = true;
+ 		break;
+ 	default:
++		valid_input = false;
+ 		break;
+ 	}
+ 
+@@ -310,10 +310,11 @@ static ssize_t dp_link_settings_write(struct file *f, const char __user *buf,
+ 	 * spread spectrum will not be changed
+ 	 */
+ 	prefer_link_settings.link_spread = link->cur_link_settings.link_spread;
++	prefer_link_settings.use_link_rate_set = false;
+ 	prefer_link_settings.lane_count = param[0];
+ 	prefer_link_settings.link_rate = param[1];
+ 
+-	dc_link_set_preferred_link_settings(dc, &prefer_link_settings, link);
++	dc_link_set_preferred_training_settings(dc, &prefer_link_settings, NULL, link, true);
+ 
+ 	kfree(wr_buf);
+ 	return size;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+index 357778556b06c..281b274e2b9b2 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+@@ -82,6 +82,7 @@ struct amdgpu_dm_irq_handler_data {
+ 	struct amdgpu_display_manager *dm;
+ 	/* DAL irq source which registered for this interrupt. */
+ 	enum dc_irq_source irq_source;
++	struct work_struct work;
+ };
+ 
+ #define DM_IRQ_TABLE_LOCK(adev, flags) \
+@@ -111,20 +112,10 @@ static void init_handler_common_data(struct amdgpu_dm_irq_handler_data *hcd,
+  */
+ static void dm_irq_work_func(struct work_struct *work)
+ {
+-	struct irq_list_head *irq_list_head =
+-		container_of(work, struct irq_list_head, work);
+-	struct list_head *handler_list = &irq_list_head->head;
+-	struct amdgpu_dm_irq_handler_data *handler_data;
+-
+-	list_for_each_entry(handler_data, handler_list, list) {
+-		DRM_DEBUG_KMS("DM_IRQ: work_func: for dal_src=%d\n",
+-				handler_data->irq_source);
++	struct amdgpu_dm_irq_handler_data *handler_data =
++		container_of(work, struct amdgpu_dm_irq_handler_data, work);
+ 
+-		DRM_DEBUG_KMS("DM_IRQ: schedule_work: for dal_src=%d\n",
+-			handler_data->irq_source);
+-
+-		handler_data->handler(handler_data->handler_arg);
+-	}
++	handler_data->handler(handler_data->handler_arg);
+ 
+ 	/* Call a DAL subcomponent which registered for interrupt notification
+ 	 * at INTERRUPT_LOW_IRQ_CONTEXT.
+@@ -156,7 +147,7 @@ static struct list_head *remove_irq_handler(struct amdgpu_device *adev,
+ 		break;
+ 	case INTERRUPT_LOW_IRQ_CONTEXT:
+ 	default:
+-		hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source].head;
++		hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source];
+ 		break;
+ 	}
+ 
+@@ -287,7 +278,8 @@ void *amdgpu_dm_irq_register_interrupt(struct amdgpu_device *adev,
+ 		break;
+ 	case INTERRUPT_LOW_IRQ_CONTEXT:
+ 	default:
+-		hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source].head;
++		hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source];
++		INIT_WORK(&handler_data->work, dm_irq_work_func);
+ 		break;
+ 	}
+ 
+@@ -369,7 +361,7 @@ void amdgpu_dm_irq_unregister_interrupt(struct amdgpu_device *adev,
+ int amdgpu_dm_irq_init(struct amdgpu_device *adev)
+ {
+ 	int src;
+-	struct irq_list_head *lh;
++	struct list_head *lh;
+ 
+ 	DRM_DEBUG_KMS("DM_IRQ\n");
+ 
+@@ -378,9 +370,7 @@ int amdgpu_dm_irq_init(struct amdgpu_device *adev)
+ 	for (src = 0; src < DAL_IRQ_SOURCES_NUMBER; src++) {
+ 		/* low context handler list init */
+ 		lh = &adev->dm.irq_handler_list_low_tab[src];
+-		INIT_LIST_HEAD(&lh->head);
+-		INIT_WORK(&lh->work, dm_irq_work_func);
+-
++		INIT_LIST_HEAD(lh);
+ 		/* high context handler init */
+ 		INIT_LIST_HEAD(&adev->dm.irq_handler_list_high_tab[src]);
+ 	}
+@@ -397,8 +387,11 @@ int amdgpu_dm_irq_init(struct amdgpu_device *adev)
+ void amdgpu_dm_irq_fini(struct amdgpu_device *adev)
+ {
+ 	int src;
+-	struct irq_list_head *lh;
++	struct list_head *lh;
++	struct list_head *entry, *tmp;
++	struct amdgpu_dm_irq_handler_data *handler;
+ 	unsigned long irq_table_flags;
++
+ 	DRM_DEBUG_KMS("DM_IRQ: releasing resources.\n");
+ 	for (src = 0; src < DAL_IRQ_SOURCES_NUMBER; src++) {
+ 		DM_IRQ_TABLE_LOCK(adev, irq_table_flags);
+@@ -407,7 +400,16 @@ void amdgpu_dm_irq_fini(struct amdgpu_device *adev)
+ 		 * (because no code can schedule a new one). */
+ 		lh = &adev->dm.irq_handler_list_low_tab[src];
+ 		DM_IRQ_TABLE_UNLOCK(adev, irq_table_flags);
+-		flush_work(&lh->work);
++
++		if (!list_empty(lh)) {
++			list_for_each_safe(entry, tmp, lh) {
++				handler = list_entry(
++					entry,
++					struct amdgpu_dm_irq_handler_data,
++					list);
++				flush_work(&handler->work);
++			}
++		}
+ 	}
+ }
+ 
+@@ -417,6 +419,8 @@ int amdgpu_dm_irq_suspend(struct amdgpu_device *adev)
+ 	struct list_head *hnd_list_h;
+ 	struct list_head *hnd_list_l;
+ 	unsigned long irq_table_flags;
++	struct list_head *entry, *tmp;
++	struct amdgpu_dm_irq_handler_data *handler;
+ 
+ 	DM_IRQ_TABLE_LOCK(adev, irq_table_flags);
+ 
+@@ -427,14 +431,22 @@ int amdgpu_dm_irq_suspend(struct amdgpu_device *adev)
+ 	 * will be disabled from manage_dm_interrupts on disable CRTC.
+ 	 */
+ 	for (src = DC_IRQ_SOURCE_HPD1; src <= DC_IRQ_SOURCE_HPD6RX; src++) {
+-		hnd_list_l = &adev->dm.irq_handler_list_low_tab[src].head;
++		hnd_list_l = &adev->dm.irq_handler_list_low_tab[src];
+ 		hnd_list_h = &adev->dm.irq_handler_list_high_tab[src];
+ 		if (!list_empty(hnd_list_l) || !list_empty(hnd_list_h))
+ 			dc_interrupt_set(adev->dm.dc, src, false);
+ 
+ 		DM_IRQ_TABLE_UNLOCK(adev, irq_table_flags);
+-		flush_work(&adev->dm.irq_handler_list_low_tab[src].work);
+ 
++		if (!list_empty(hnd_list_l)) {
++			list_for_each_safe (entry, tmp, hnd_list_l) {
++				handler = list_entry(
++					entry,
++					struct amdgpu_dm_irq_handler_data,
++					list);
++				flush_work(&handler->work);
++			}
++		}
+ 		DM_IRQ_TABLE_LOCK(adev, irq_table_flags);
+ 	}
+ 
+@@ -454,7 +466,7 @@ int amdgpu_dm_irq_resume_early(struct amdgpu_device *adev)
+ 
+ 	/* re-enable short pulse interrupts HW interrupt */
+ 	for (src = DC_IRQ_SOURCE_HPD1RX; src <= DC_IRQ_SOURCE_HPD6RX; src++) {
+-		hnd_list_l = &adev->dm.irq_handler_list_low_tab[src].head;
++		hnd_list_l = &adev->dm.irq_handler_list_low_tab[src];
+ 		hnd_list_h = &adev->dm.irq_handler_list_high_tab[src];
+ 		if (!list_empty(hnd_list_l) || !list_empty(hnd_list_h))
+ 			dc_interrupt_set(adev->dm.dc, src, true);
+@@ -480,7 +492,7 @@ int amdgpu_dm_irq_resume_late(struct amdgpu_device *adev)
+ 	 * will be enabled from manage_dm_interrupts on enable CRTC.
+ 	 */
+ 	for (src = DC_IRQ_SOURCE_HPD1; src <= DC_IRQ_SOURCE_HPD6; src++) {
+-		hnd_list_l = &adev->dm.irq_handler_list_low_tab[src].head;
++		hnd_list_l = &adev->dm.irq_handler_list_low_tab[src];
+ 		hnd_list_h = &adev->dm.irq_handler_list_high_tab[src];
+ 		if (!list_empty(hnd_list_l) || !list_empty(hnd_list_h))
+ 			dc_interrupt_set(adev->dm.dc, src, true);
+@@ -497,22 +509,53 @@ int amdgpu_dm_irq_resume_late(struct amdgpu_device *adev)
+ static void amdgpu_dm_irq_schedule_work(struct amdgpu_device *adev,
+ 					enum dc_irq_source irq_source)
+ {
+-	unsigned long irq_table_flags;
+-	struct work_struct *work = NULL;
++	struct  list_head *handler_list = &adev->dm.irq_handler_list_low_tab[irq_source];
++	struct  amdgpu_dm_irq_handler_data *handler_data;
++	bool    work_queued = false;
+ 
+-	DM_IRQ_TABLE_LOCK(adev, irq_table_flags);
++	if (list_empty(handler_list))
++		return;
+ 
+-	if (!list_empty(&adev->dm.irq_handler_list_low_tab[irq_source].head))
+-		work = &adev->dm.irq_handler_list_low_tab[irq_source].work;
++	list_for_each_entry (handler_data, handler_list, list) {
++		if (!queue_work(system_highpri_wq, &handler_data->work)) {
++			continue;
++		} else {
++			work_queued = true;
++			break;
++		}
++	}
+ 
+-	DM_IRQ_TABLE_UNLOCK(adev, irq_table_flags);
++	if (!work_queued) {
++		struct  amdgpu_dm_irq_handler_data *handler_data_add;
++		/*get the amdgpu_dm_irq_handler_data of first item pointed by handler_list*/
++		handler_data = container_of(handler_list->next, struct amdgpu_dm_irq_handler_data, list);
+ 
+-	if (work) {
+-		if (!schedule_work(work))
+-			DRM_INFO("amdgpu_dm_irq_schedule_work FAILED src %d\n",
+-						irq_source);
+-	}
++		/*allocate a new amdgpu_dm_irq_handler_data*/
++		handler_data_add = kzalloc(sizeof(*handler_data), GFP_KERNEL);
++		if (!handler_data_add) {
++			DRM_ERROR("DM_IRQ: failed to allocate irq handler!\n");
++			return;
++		}
++
++		/*copy new amdgpu_dm_irq_handler_data members from handler_data*/
++		handler_data_add->handler       = handler_data->handler;
++		handler_data_add->handler_arg   = handler_data->handler_arg;
++		handler_data_add->dm            = handler_data->dm;
++		handler_data_add->irq_source    = irq_source;
+ 
++		list_add_tail(&handler_data_add->list, handler_list);
++
++		INIT_WORK(&handler_data_add->work, dm_irq_work_func);
++
++		if (queue_work(system_highpri_wq, &handler_data_add->work))
++			DRM_DEBUG("Queued work for handling interrupt from "
++				  "display for IRQ source %d\n",
++				  irq_source);
++		else
++			DRM_ERROR("Failed to queue work for handling interrupt "
++				  "from display for IRQ source %d\n",
++				  irq_source);
++	}
+ }
+ 
+ /*
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
+index 95d883482227e..cab47bb211728 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
+@@ -240,6 +240,7 @@ static void dcn3_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	bool force_reset = false;
+ 	bool update_uclk = false;
+ 	bool p_state_change_support;
++	int total_plane_count;
+ 
+ 	if (dc->work_arounds.skip_clock_update || !clk_mgr->smu_present)
+ 		return;
+@@ -280,7 +281,8 @@ static void dcn3_update_clocks(struct clk_mgr *clk_mgr_base,
+ 		clk_mgr_base->clks.socclk_khz = new_clocks->socclk_khz;
+ 
+ 	clk_mgr_base->clks.prev_p_state_change_support = clk_mgr_base->clks.p_state_change_support;
+-	p_state_change_support = new_clocks->p_state_change_support || (display_count == 0);
++	total_plane_count = clk_mgr_helper_get_active_plane_cnt(dc, context);
++	p_state_change_support = new_clocks->p_state_change_support || (total_plane_count == 0);
+ 	if (should_update_pstate_support(safe_to_lower, p_state_change_support, clk_mgr_base->clks.p_state_change_support)) {
+ 		clk_mgr_base->clks.p_state_change_support = p_state_change_support;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index ffb21196bf599..921c4ca6e902a 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2345,7 +2345,8 @@ static void commit_planes_do_stream_update(struct dc *dc,
+ 					if (pipe_ctx->stream_res.audio && !dc->debug.az_endpoint_mute_only)
+ 						pipe_ctx->stream_res.audio->funcs->az_disable(pipe_ctx->stream_res.audio);
+ 
+-					dc->hwss.optimize_bandwidth(dc, dc->current_state);
++					dc->optimized_required = true;
++
+ 				} else {
+ 					if (dc->optimize_seamless_boot_streams == 0)
+ 						dc->hwss.prepare_bandwidth(dc, dc->current_state);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.h b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.h
+index 382465862f297..f72f02e016aea 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.h
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.h
+@@ -99,7 +99,6 @@ struct dce110_aux_registers {
+ 	AUX_SF(AUX_SW_CONTROL, AUX_SW_GO, mask_sh),\
+ 	AUX_SF(AUX_SW_DATA, AUX_SW_AUTOINCREMENT_DISABLE, mask_sh),\
+ 	AUX_SF(AUX_SW_DATA, AUX_SW_DATA_RW, mask_sh),\
+-	AUX_SF(AUX_SW_DATA, AUX_SW_AUTOINCREMENT_DISABLE, mask_sh),\
+ 	AUX_SF(AUX_SW_DATA, AUX_SW_INDEX, mask_sh),\
+ 	AUX_SF(AUX_SW_DATA, AUX_SW_DATA, mask_sh),\
+ 	AUX_SF(AUX_SW_STATUS, AUX_SW_REPLY_BYTE_COUNT, mask_sh),\
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+index 2455d210ccf68..8465cae180da7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+@@ -180,7 +180,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_0_soc = {
+ 		},
+ 	.min_dcfclk = 500.0, /* TODO: set this to actual min DCFCLK */
+ 	.num_states = 1,
+-	.sr_exit_time_us = 12,
++	.sr_exit_time_us = 15.5,
+ 	.sr_enter_plus_exit_time_us = 20,
+ 	.urgent_latency_us = 4.0,
+ 	.urgent_latency_pixel_data_only_us = 4.0,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
+index 45f028986a8db..b3f0476899d32 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
+@@ -3437,6 +3437,7 @@ void dml20_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 			mode_lib->vba.DCCEnabledInAnyPlane = true;
+ 		}
+ 	}
++	mode_lib->vba.UrgentLatency = mode_lib->vba.UrgentLatencyPixelDataOnly;
+ 	for (i = 0; i <= mode_lib->vba.soc.num_states; i++) {
+ 		locals->FabricAndDRAMBandwidthPerState[i] = dml_min(
+ 				mode_lib->vba.DRAMSpeedPerState[i] * mode_lib->vba.NumberOfChannels
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
+index 80170f9721ce9..1bcda7eba4a6f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
+@@ -3510,6 +3510,7 @@ void dml20v2_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode
+ 			mode_lib->vba.DCCEnabledInAnyPlane = true;
+ 		}
+ 	}
++	mode_lib->vba.UrgentLatency = mode_lib->vba.UrgentLatencyPixelDataOnly;
+ 	for (i = 0; i <= mode_lib->vba.soc.num_states; i++) {
+ 		locals->FabricAndDRAMBandwidthPerState[i] = dml_min(
+ 				mode_lib->vba.DRAMSpeedPerState[i] * mode_lib->vba.NumberOfChannels
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
+index 72423dc425dc0..799bae229e679 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
+@@ -293,13 +293,31 @@ static void handle_det_buf_split(struct display_mode_lib *mode_lib,
+ 	if (surf_linear) {
+ 		log2_swath_height_l = 0;
+ 		log2_swath_height_c = 0;
+-	} else if (!surf_vert) {
+-		log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_height) - req128_l;
+-		log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_height) - req128_c;
+ 	} else {
+-		log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_width) - req128_l;
+-		log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_width) - req128_c;
++		unsigned int swath_height_l;
++		unsigned int swath_height_c;
++
++		if (!surf_vert) {
++			swath_height_l = rq_param->misc.rq_l.blk256_height;
++			swath_height_c = rq_param->misc.rq_c.blk256_height;
++		} else {
++			swath_height_l = rq_param->misc.rq_l.blk256_width;
++			swath_height_c = rq_param->misc.rq_c.blk256_width;
++		}
++
++		if (swath_height_l > 0)
++			log2_swath_height_l = dml_log2(swath_height_l);
++
++		if (req128_l && log2_swath_height_l > 0)
++			log2_swath_height_l -= 1;
++
++		if (swath_height_c > 0)
++			log2_swath_height_c = dml_log2(swath_height_c);
++
++		if (req128_c && log2_swath_height_c > 0)
++			log2_swath_height_c -= 1;
+ 	}
++
+ 	rq_param->dlg.rq_l.swath_height = 1 << log2_swath_height_l;
+ 	rq_param->dlg.rq_c.swath_height = 1 << log2_swath_height_c;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
+index 9c78446c3a9d8..6a6d5970d1d58 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
+@@ -293,13 +293,31 @@ static void handle_det_buf_split(struct display_mode_lib *mode_lib,
+ 	if (surf_linear) {
+ 		log2_swath_height_l = 0;
+ 		log2_swath_height_c = 0;
+-	} else if (!surf_vert) {
+-		log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_height) - req128_l;
+-		log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_height) - req128_c;
+ 	} else {
+-		log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_width) - req128_l;
+-		log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_width) - req128_c;
++		unsigned int swath_height_l;
++		unsigned int swath_height_c;
++
++		if (!surf_vert) {
++			swath_height_l = rq_param->misc.rq_l.blk256_height;
++			swath_height_c = rq_param->misc.rq_c.blk256_height;
++		} else {
++			swath_height_l = rq_param->misc.rq_l.blk256_width;
++			swath_height_c = rq_param->misc.rq_c.blk256_width;
++		}
++
++		if (swath_height_l > 0)
++			log2_swath_height_l = dml_log2(swath_height_l);
++
++		if (req128_l && log2_swath_height_l > 0)
++			log2_swath_height_l -= 1;
++
++		if (swath_height_c > 0)
++			log2_swath_height_c = dml_log2(swath_height_c);
++
++		if (req128_c && log2_swath_height_c > 0)
++			log2_swath_height_c -= 1;
+ 	}
++
+ 	rq_param->dlg.rq_l.swath_height = 1 << log2_swath_height_l;
+ 	rq_param->dlg.rq_c.swath_height = 1 << log2_swath_height_c;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
+index edd41d3582910..dc1c81a6e3771 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
+@@ -277,13 +277,31 @@ static void handle_det_buf_split(
+ 	if (surf_linear) {
+ 		log2_swath_height_l = 0;
+ 		log2_swath_height_c = 0;
+-	} else if (!surf_vert) {
+-		log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_height) - req128_l;
+-		log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_height) - req128_c;
+ 	} else {
+-		log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_width) - req128_l;
+-		log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_width) - req128_c;
++		unsigned int swath_height_l;
++		unsigned int swath_height_c;
++
++		if (!surf_vert) {
++			swath_height_l = rq_param->misc.rq_l.blk256_height;
++			swath_height_c = rq_param->misc.rq_c.blk256_height;
++		} else {
++			swath_height_l = rq_param->misc.rq_l.blk256_width;
++			swath_height_c = rq_param->misc.rq_c.blk256_width;
++		}
++
++		if (swath_height_l > 0)
++			log2_swath_height_l = dml_log2(swath_height_l);
++
++		if (req128_l && log2_swath_height_l > 0)
++			log2_swath_height_l -= 1;
++
++		if (swath_height_c > 0)
++			log2_swath_height_c = dml_log2(swath_height_c);
++
++		if (req128_c && log2_swath_height_c > 0)
++			log2_swath_height_c -= 1;
+ 	}
++
+ 	rq_param->dlg.rq_l.swath_height = 1 << log2_swath_height_l;
+ 	rq_param->dlg.rq_c.swath_height = 1 << log2_swath_height_c;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_rq_dlg_calc_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_rq_dlg_calc_30.c
+index 416bf6fb67bd9..58c312f80a07c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_rq_dlg_calc_30.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_rq_dlg_calc_30.c
+@@ -237,13 +237,31 @@ static void handle_det_buf_split(struct display_mode_lib *mode_lib,
+ 	if (surf_linear) {
+ 		log2_swath_height_l = 0;
+ 		log2_swath_height_c = 0;
+-	} else if (!surf_vert) {
+-		log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_height) - req128_l;
+-		log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_height) - req128_c;
+ 	} else {
+-		log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_width) - req128_l;
+-		log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_width) - req128_c;
++		unsigned int swath_height_l;
++		unsigned int swath_height_c;
++
++		if (!surf_vert) {
++			swath_height_l = rq_param->misc.rq_l.blk256_height;
++			swath_height_c = rq_param->misc.rq_c.blk256_height;
++		} else {
++			swath_height_l = rq_param->misc.rq_l.blk256_width;
++			swath_height_c = rq_param->misc.rq_c.blk256_width;
++		}
++
++		if (swath_height_l > 0)
++			log2_swath_height_l = dml_log2(swath_height_l);
++
++		if (req128_l && log2_swath_height_l > 0)
++			log2_swath_height_l -= 1;
++
++		if (swath_height_c > 0)
++			log2_swath_height_c = dml_log2(swath_height_c);
++
++		if (req128_c && log2_swath_height_c > 0)
++			log2_swath_height_c -= 1;
+ 	}
++
+ 	rq_param->dlg.rq_l.swath_height = 1 << log2_swath_height_l;
+ 	rq_param->dlg.rq_c.swath_height = 1 << log2_swath_height_c;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+index 4c3e9cc301679..414da64f57340 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+@@ -344,13 +344,31 @@ static void handle_det_buf_split(
+ 	if (surf_linear) {
+ 		log2_swath_height_l = 0;
+ 		log2_swath_height_c = 0;
+-	} else if (!surf_vert) {
+-		log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_height) - req128_l;
+-		log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_height) - req128_c;
+ 	} else {
+-		log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_width) - req128_l;
+-		log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_width) - req128_c;
++		unsigned int swath_height_l;
++		unsigned int swath_height_c;
++
++		if (!surf_vert) {
++			swath_height_l = rq_param->misc.rq_l.blk256_height;
++			swath_height_c = rq_param->misc.rq_c.blk256_height;
++		} else {
++			swath_height_l = rq_param->misc.rq_l.blk256_width;
++			swath_height_c = rq_param->misc.rq_c.blk256_width;
++		}
++
++		if (swath_height_l > 0)
++			log2_swath_height_l = dml_log2(swath_height_l);
++
++		if (req128_l && log2_swath_height_l > 0)
++			log2_swath_height_l -= 1;
++
++		if (swath_height_c > 0)
++			log2_swath_height_c = dml_log2(swath_height_c);
++
++		if (req128_c && log2_swath_height_c > 0)
++			log2_swath_height_c -= 1;
+ 	}
++
+ 	rq_param->dlg.rq_l.swath_height = 1 << log2_swath_height_l;
+ 	rq_param->dlg.rq_c.swath_height = 1 << log2_swath_height_c;
+ 
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+index ed4eafc744d3d..132c269c7c893 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+@@ -5159,7 +5159,7 @@ static int vega10_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, ui
+ 
+ out:
+ 	smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_SetWorkloadMask,
+-						1 << power_profile_mode,
++						(!power_profile_mode) ? 0 : 1 << (power_profile_mode - 1),
+ 						NULL);
+ 	hwmgr->power_profile_mode = power_profile_mode;
+ 
+diff --git a/drivers/gpu/drm/arm/display/include/malidp_utils.h b/drivers/gpu/drm/arm/display/include/malidp_utils.h
+index 3bc383d5bf73d..49a1d7f3539c2 100644
+--- a/drivers/gpu/drm/arm/display/include/malidp_utils.h
++++ b/drivers/gpu/drm/arm/display/include/malidp_utils.h
+@@ -13,9 +13,6 @@
+ #define has_bit(nr, mask)	(BIT(nr) & (mask))
+ #define has_bits(bits, mask)	(((bits) & (mask)) == (bits))
+ 
+-#define dp_for_each_set_bit(bit, mask) \
+-	for_each_set_bit((bit), ((unsigned long *)&(mask)), sizeof(mask) * 8)
+-
+ #define dp_wait_cond(__cond, __tries, __min_range, __max_range)	\
+ ({							\
+ 	int num_tries = __tries;			\
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline.c b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline.c
+index 452e505a1fd39..79a6339840bb2 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline.c
+@@ -46,8 +46,9 @@ void komeda_pipeline_destroy(struct komeda_dev *mdev,
+ {
+ 	struct komeda_component *c;
+ 	int i;
++	unsigned long avail_comps = pipe->avail_comps;
+ 
+-	dp_for_each_set_bit(i, pipe->avail_comps) {
++	for_each_set_bit(i, &avail_comps, 32) {
+ 		c = komeda_pipeline_get_component(pipe, i);
+ 		komeda_component_destroy(mdev, c);
+ 	}
+@@ -246,6 +247,7 @@ static void komeda_pipeline_dump(struct komeda_pipeline *pipe)
+ {
+ 	struct komeda_component *c;
+ 	int id;
++	unsigned long avail_comps = pipe->avail_comps;
+ 
+ 	DRM_INFO("Pipeline-%d: n_layers: %d, n_scalers: %d, output: %s.\n",
+ 		 pipe->id, pipe->n_layers, pipe->n_scalers,
+@@ -257,7 +259,7 @@ static void komeda_pipeline_dump(struct komeda_pipeline *pipe)
+ 		 pipe->of_output_links[1] ?
+ 		 pipe->of_output_links[1]->full_name : "none");
+ 
+-	dp_for_each_set_bit(id, pipe->avail_comps) {
++	for_each_set_bit(id, &avail_comps, 32) {
+ 		c = komeda_pipeline_get_component(pipe, id);
+ 
+ 		komeda_component_dump(c);
+@@ -269,8 +271,9 @@ static void komeda_component_verify_inputs(struct komeda_component *c)
+ 	struct komeda_pipeline *pipe = c->pipeline;
+ 	struct komeda_component *input;
+ 	int id;
++	unsigned long supported_inputs = c->supported_inputs;
+ 
+-	dp_for_each_set_bit(id, c->supported_inputs) {
++	for_each_set_bit(id, &supported_inputs, 32) {
+ 		input = komeda_pipeline_get_component(pipe, id);
+ 		if (!input) {
+ 			c->supported_inputs &= ~(BIT(id));
+@@ -301,8 +304,9 @@ static void komeda_pipeline_assemble(struct komeda_pipeline *pipe)
+ 	struct komeda_component *c;
+ 	struct komeda_layer *layer;
+ 	int i, id;
++	unsigned long avail_comps = pipe->avail_comps;
+ 
+-	dp_for_each_set_bit(id, pipe->avail_comps) {
++	for_each_set_bit(id, &avail_comps, 32) {
+ 		c = komeda_pipeline_get_component(pipe, id);
+ 		komeda_component_verify_inputs(c);
+ 	}
+@@ -354,13 +358,15 @@ void komeda_pipeline_dump_register(struct komeda_pipeline *pipe,
+ {
+ 	struct komeda_component *c;
+ 	u32 id;
++	unsigned long avail_comps;
+ 
+ 	seq_printf(sf, "\n======== Pipeline-%d ==========\n", pipe->id);
+ 
+ 	if (pipe->funcs && pipe->funcs->dump_register)
+ 		pipe->funcs->dump_register(pipe, sf);
+ 
+-	dp_for_each_set_bit(id, pipe->avail_comps) {
++	avail_comps = pipe->avail_comps;
++	for_each_set_bit(id, &avail_comps, 32) {
+ 		c = komeda_pipeline_get_component(pipe, id);
+ 
+ 		seq_printf(sf, "\n------%s------\n", c->name);
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
+index 8f32ae7c25d06..c3cdf283ecefa 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
+@@ -1231,14 +1231,15 @@ komeda_pipeline_unbound_components(struct komeda_pipeline *pipe,
+ 	struct komeda_pipeline_state *old = priv_to_pipe_st(pipe->obj.state);
+ 	struct komeda_component_state *c_st;
+ 	struct komeda_component *c;
+-	u32 disabling_comps, id;
++	u32 id;
++	unsigned long disabling_comps;
+ 
+ 	WARN_ON(!old);
+ 
+ 	disabling_comps = (~new->active_comps) & old->active_comps;
+ 
+ 	/* unbound all disabling component */
+-	dp_for_each_set_bit(id, disabling_comps) {
++	for_each_set_bit(id, &disabling_comps, 32) {
+ 		c = komeda_pipeline_get_component(pipe, id);
+ 		c_st = komeda_component_get_state_and_set_user(c,
+ 				drm_st, NULL, new->crtc);
+@@ -1286,7 +1287,8 @@ bool komeda_pipeline_disable(struct komeda_pipeline *pipe,
+ 	struct komeda_pipeline_state *old;
+ 	struct komeda_component *c;
+ 	struct komeda_component_state *c_st;
+-	u32 id, disabling_comps = 0;
++	u32 id;
++	unsigned long disabling_comps;
+ 
+ 	old = komeda_pipeline_get_old_state(pipe, old_state);
+ 
+@@ -1296,10 +1298,10 @@ bool komeda_pipeline_disable(struct komeda_pipeline *pipe,
+ 		disabling_comps = old->active_comps &
+ 				  pipe->standalone_disabled_comps;
+ 
+-	DRM_DEBUG_ATOMIC("PIPE%d: active_comps: 0x%x, disabling_comps: 0x%x.\n",
++	DRM_DEBUG_ATOMIC("PIPE%d: active_comps: 0x%x, disabling_comps: 0x%lx.\n",
+ 			 pipe->id, old->active_comps, disabling_comps);
+ 
+-	dp_for_each_set_bit(id, disabling_comps) {
++	for_each_set_bit(id, &disabling_comps, 32) {
+ 		c = komeda_pipeline_get_component(pipe, id);
+ 		c_st = priv_to_comp_st(c->obj.state);
+ 
+@@ -1330,16 +1332,17 @@ void komeda_pipeline_update(struct komeda_pipeline *pipe,
+ 	struct komeda_pipeline_state *new = priv_to_pipe_st(pipe->obj.state);
+ 	struct komeda_pipeline_state *old;
+ 	struct komeda_component *c;
+-	u32 id, changed_comps = 0;
++	u32 id;
++	unsigned long changed_comps;
+ 
+ 	old = komeda_pipeline_get_old_state(pipe, old_state);
+ 
+ 	changed_comps = new->active_comps | old->active_comps;
+ 
+-	DRM_DEBUG_ATOMIC("PIPE%d: active_comps: 0x%x, changed: 0x%x.\n",
++	DRM_DEBUG_ATOMIC("PIPE%d: active_comps: 0x%x, changed: 0x%lx.\n",
+ 			 pipe->id, new->active_comps, changed_comps);
+ 
+-	dp_for_each_set_bit(id, changed_comps) {
++	for_each_set_bit(id, &changed_comps, 32) {
+ 		c = komeda_pipeline_get_component(pipe, id);
+ 
+ 		if (new->active_comps & BIT(c->id))
+diff --git a/drivers/gpu/drm/ast/ast_drv.c b/drivers/gpu/drm/ast/ast_drv.c
+index f0b4af1c390aa..59d2466d40c6f 100644
+--- a/drivers/gpu/drm/ast/ast_drv.c
++++ b/drivers/gpu/drm/ast/ast_drv.c
+@@ -30,6 +30,7 @@
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ 
++#include <drm/drm_atomic_helper.h>
+ #include <drm/drm_crtc_helper.h>
+ #include <drm/drm_drv.h>
+ #include <drm/drm_fb_helper.h>
+@@ -138,6 +139,7 @@ static void ast_pci_remove(struct pci_dev *pdev)
+ 	struct drm_device *dev = pci_get_drvdata(pdev);
+ 
+ 	drm_dev_unregister(dev);
++	drm_atomic_helper_shutdown(dev);
+ }
+ 
+ static int ast_drm_freeze(struct drm_device *dev)
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index 0a1e1cf57e199..a3c2f76668abe 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -688,7 +688,7 @@ ast_cursor_plane_helper_atomic_update(struct drm_plane *plane,
+ 	unsigned int offset_x, offset_y;
+ 
+ 	offset_x = AST_MAX_HWC_WIDTH - fb->width;
+-	offset_y = AST_MAX_HWC_WIDTH - fb->height;
++	offset_y = AST_MAX_HWC_HEIGHT - fb->height;
+ 
+ 	if (state->fb != old_state->fb) {
+ 		/* A new cursor image was installed. */
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 58f5dc2f6dd52..f6bdec7fa9253 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -84,6 +84,13 @@ static const struct drm_dmi_panel_orientation_data itworks_tw891 = {
+ 	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+ 
++static const struct drm_dmi_panel_orientation_data onegx1_pro = {
++	.width = 1200,
++	.height = 1920,
++	.bios_dates = (const char * const []){ "12/17/2020", NULL },
++	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
++};
++
+ static const struct drm_dmi_panel_orientation_data lcd720x1280_rightside_up = {
+ 	.width = 720,
+ 	.height = 1280,
+@@ -211,6 +218,13 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGM"),
+ 		},
+ 		.driver_data = (void *)&lcd1200x1920_rightside_up,
++	}, {	/* OneGX1 Pro */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SYSTEM_MANUFACTURER"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SYSTEM_PRODUCT_NAME"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Default string"),
++		},
++		.driver_data = (void *)&onegx1_pro,
+ 	}, {	/* VIOS LTH17 */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "VIOS"),
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index cfb806767fc52..1f23cb6ece588 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -2992,7 +2992,7 @@ int ilk_wm_max_level(const struct drm_i915_private *dev_priv)
+ 
+ static void intel_print_wm_latency(struct drm_i915_private *dev_priv,
+ 				   const char *name,
+-				   const u16 wm[8])
++				   const u16 wm[])
+ {
+ 	int level, max_level = ilk_wm_max_level(dev_priv);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_cmd_encoder.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_cmd_encoder.c
+index ff2c1d583c792..0392d4dfe270a 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_cmd_encoder.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_cmd_encoder.c
+@@ -20,7 +20,7 @@ static int pingpong_tearcheck_setup(struct drm_encoder *encoder,
+ {
+ 	struct mdp5_kms *mdp5_kms = get_kms(encoder);
+ 	struct device *dev = encoder->dev->dev;
+-	u32 total_lines_x100, vclks_line, cfg;
++	u32 total_lines, vclks_line, cfg;
+ 	long vsync_clk_speed;
+ 	struct mdp5_hw_mixer *mixer = mdp5_crtc_get_mixer(encoder->crtc);
+ 	int pp_id = mixer->pp;
+@@ -30,8 +30,8 @@ static int pingpong_tearcheck_setup(struct drm_encoder *encoder,
+ 		return -EINVAL;
+ 	}
+ 
+-	total_lines_x100 = mode->vtotal * drm_mode_vrefresh(mode);
+-	if (!total_lines_x100) {
++	total_lines = mode->vtotal * drm_mode_vrefresh(mode);
++	if (!total_lines) {
+ 		DRM_DEV_ERROR(dev, "%s: vtotal(%d) or vrefresh(%d) is 0\n",
+ 			      __func__, mode->vtotal, drm_mode_vrefresh(mode));
+ 		return -EINVAL;
+@@ -43,15 +43,23 @@ static int pingpong_tearcheck_setup(struct drm_encoder *encoder,
+ 							vsync_clk_speed);
+ 		return -EINVAL;
+ 	}
+-	vclks_line = vsync_clk_speed * 100 / total_lines_x100;
++	vclks_line = vsync_clk_speed / total_lines;
+ 
+ 	cfg = MDP5_PP_SYNC_CONFIG_VSYNC_COUNTER_EN
+ 		| MDP5_PP_SYNC_CONFIG_VSYNC_IN_EN;
+ 	cfg |= MDP5_PP_SYNC_CONFIG_VSYNC_COUNT(vclks_line);
+ 
++	/*
++	 * Tearcheck emits a blanking signal every vclks_line * vtotal * 2 ticks on
++	 * the vsync_clk equating to roughly half the desired panel refresh rate.
++	 * This is only necessary as stability fallback if interrupts from the
++	 * panel arrive too late or not at all, but is currently used by default
++	 * because these panel interrupts are not wired up yet.
++	 */
+ 	mdp5_write(mdp5_kms, REG_MDP5_PP_SYNC_CONFIG_VSYNC(pp_id), cfg);
+ 	mdp5_write(mdp5_kms,
+-		REG_MDP5_PP_SYNC_CONFIG_HEIGHT(pp_id), 0xfff0);
++		REG_MDP5_PP_SYNC_CONFIG_HEIGHT(pp_id), (2 * mode->vtotal));
++
+ 	mdp5_write(mdp5_kms,
+ 		REG_MDP5_PP_VSYNC_INIT_VAL(pp_id), mode->vdisplay);
+ 	mdp5_write(mdp5_kms, REG_MDP5_PP_RD_PTR_IRQ(pp_id), mode->vdisplay + 1);
+diff --git a/drivers/gpu/drm/msm/dp/dp_hpd.c b/drivers/gpu/drm/msm/dp/dp_hpd.c
+index 5b8fe32022b5f..e1c90fa47411f 100644
+--- a/drivers/gpu/drm/msm/dp/dp_hpd.c
++++ b/drivers/gpu/drm/msm/dp/dp_hpd.c
+@@ -34,8 +34,8 @@ int dp_hpd_connect(struct dp_usbpd *dp_usbpd, bool hpd)
+ 
+ 	dp_usbpd->hpd_high = hpd;
+ 
+-	if (!hpd_priv->dp_cb && !hpd_priv->dp_cb->configure
+-				&& !hpd_priv->dp_cb->disconnect) {
++	if (!hpd_priv->dp_cb || !hpd_priv->dp_cb->configure
++				|| !hpd_priv->dp_cb->disconnect) {
+ 		pr_err("hpd dp_cb not initialized\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
+index 862ef59d4d033..1f0802f5d84ef 100644
+--- a/drivers/gpu/drm/qxl/qxl_display.c
++++ b/drivers/gpu/drm/qxl/qxl_display.c
+@@ -1224,6 +1224,10 @@ int qxl_modeset_init(struct qxl_device *qdev)
+ 
+ void qxl_modeset_fini(struct qxl_device *qdev)
+ {
++	if (qdev->dumb_shadow_bo) {
++		drm_gem_object_put(&qdev->dumb_shadow_bo->tbo.base);
++		qdev->dumb_shadow_bo = NULL;
++	}
+ 	qxl_destroy_monitors_object(qdev);
+ 	drm_mode_config_cleanup(&qdev->ddev);
+ }
+diff --git a/drivers/gpu/drm/qxl/qxl_drv.c b/drivers/gpu/drm/qxl/qxl_drv.c
+index 6e7f16f4cec79..41cdf9d1e59dc 100644
+--- a/drivers/gpu/drm/qxl/qxl_drv.c
++++ b/drivers/gpu/drm/qxl/qxl_drv.c
+@@ -144,6 +144,8 @@ static void qxl_drm_release(struct drm_device *dev)
+ 	 * reodering qxl_modeset_fini() + qxl_device_fini() calls is
+ 	 * non-trivial though.
+ 	 */
++	if (!dev->registered)
++		return;
+ 	qxl_modeset_fini(qdev);
+ 	qxl_device_fini(qdev);
+ }
+diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
+index 36150b7f31a90..a65cb349fac2d 100644
+--- a/drivers/gpu/drm/radeon/radeon_ttm.c
++++ b/drivers/gpu/drm/radeon/radeon_ttm.c
+@@ -566,13 +566,14 @@ static void radeon_ttm_backend_unbind(struct ttm_bo_device *bdev, struct ttm_tt
+ 	struct radeon_ttm_tt *gtt = (void *)ttm;
+ 	struct radeon_device *rdev = radeon_get_rdev(bdev);
+ 
++	if (gtt->userptr)
++		radeon_ttm_tt_unpin_userptr(bdev, ttm);
++
+ 	if (!gtt->bound)
+ 		return;
+ 
+ 	radeon_gart_unbind(rdev, gtt->offset, ttm->num_pages);
+ 
+-	if (gtt->userptr)
+-		radeon_ttm_tt_unpin_userptr(bdev, ttm);
+ 	gtt->bound = false;
+ }
+ 
+diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
+index 09c012d54d58f..1ae5cd47d9546 100644
+--- a/drivers/gpu/drm/vkms/vkms_crtc.c
++++ b/drivers/gpu/drm/vkms/vkms_crtc.c
+@@ -18,7 +18,8 @@ static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
+ 
+ 	ret_overrun = hrtimer_forward_now(&output->vblank_hrtimer,
+ 					  output->period_ns);
+-	WARN_ON(ret_overrun != 1);
++	if (ret_overrun != 1)
++		pr_warn("%s: vblank timer overrun\n", __func__);
+ 
+ 	spin_lock(&output->lock);
+ 	ret = drm_crtc_handle_vblank(crtc);
+diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c
+index f72803a023910..28509b02a0b56 100644
+--- a/drivers/hwtracing/intel_th/gth.c
++++ b/drivers/hwtracing/intel_th/gth.c
+@@ -543,7 +543,7 @@ static void intel_th_gth_disable(struct intel_th_device *thdev,
+ 	output->active = false;
+ 
+ 	for_each_set_bit(master, gth->output[output->port].master,
+-			 TH_CONFIGURABLE_MASTERS) {
++			 TH_CONFIGURABLE_MASTERS + 1) {
+ 		gth_master_set(gth, master, -1);
+ 	}
+ 	spin_unlock(&gth->gth_lock);
+@@ -697,7 +697,7 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
+ 	othdev->output.port = -1;
+ 	othdev->output.active = false;
+ 	gth->output[port].output = NULL;
+-	for (master = 0; master <= TH_CONFIGURABLE_MASTERS; master++)
++	for (master = 0; master < TH_CONFIGURABLE_MASTERS + 1; master++)
+ 		if (gth->master[master] == port)
+ 			gth->master[master] = -1;
+ 	spin_unlock(&gth->gth_lock);
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index 251e75c9ba9d0..817cdb29bbd89 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -273,11 +273,21 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x51a6),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Alder Lake-M */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x54a6),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{
+ 		/* Alder Lake CPU */
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Rocket Lake CPU */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4c19),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{ 0 },
+ };
+ 
+diff --git a/drivers/input/touchscreen/ili210x.c b/drivers/input/touchscreen/ili210x.c
+index d8fccf048bf44..30576a5f2f045 100644
+--- a/drivers/input/touchscreen/ili210x.c
++++ b/drivers/input/touchscreen/ili210x.c
+@@ -87,7 +87,7 @@ static bool ili210x_touchdata_to_coords(const u8 *touchdata,
+ 					unsigned int *x, unsigned int *y,
+ 					unsigned int *z)
+ {
+-	if (touchdata[0] & BIT(finger))
++	if (!(touchdata[0] & BIT(finger)))
+ 		return false;
+ 
+ 	*x = get_unaligned_be16(touchdata + 1 + (finger * 4) + 0);
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 16fecc0febe83..7929bf12651ca 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -648,6 +648,10 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
+ 
+ 	irqnr = gic_read_iar();
+ 
++	/* Check for special IDs first */
++	if ((irqnr >= 1020 && irqnr <= 1023))
++		return;
++
+ 	if (gic_supports_nmi() &&
+ 	    unlikely(gic_read_rpr() == GICD_INT_NMI_PRI)) {
+ 		gic_handle_nmi(irqnr, regs);
+@@ -659,10 +663,6 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
+ 		gic_arch_enable_irqs();
+ 	}
+ 
+-	/* Check for special IDs first */
+-	if ((irqnr >= 1020 && irqnr <= 1023))
+-		return;
+-
+ 	if (static_branch_likely(&supports_deactivate_key))
+ 		gic_write_eoir(irqnr);
+ 	else
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index b64fede032dc5..4c7da1c4e6cb9 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -3929,6 +3929,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 			if (val >= (uint64_t)UINT_MAX * 1000 / HZ) {
+ 				r = -EINVAL;
+ 				ti->error = "Invalid bitmap_flush_interval argument";
++				goto bad;
+ 			}
+ 			ic->bitmap_flush_interval = msecs_to_jiffies(val);
+ 		} else if (!strncmp(opt_string, "internal_hash:", strlen("internal_hash:"))) {
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 6dca932d6f1d1..f5083b4a01958 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -1869,6 +1869,14 @@ static bool rs_takeover_requested(struct raid_set *rs)
+ 	return rs->md.new_level != rs->md.level;
+ }
+ 
++/* True if layout is set to reshape. */
++static bool rs_is_layout_change(struct raid_set *rs, bool use_mddev)
++{
++	return (use_mddev ? rs->md.delta_disks : rs->delta_disks) ||
++	       rs->md.new_layout != rs->md.layout ||
++	       rs->md.new_chunk_sectors != rs->md.chunk_sectors;
++}
++
+ /* True if @rs is requested to reshape by ctr */
+ static bool rs_reshape_requested(struct raid_set *rs)
+ {
+@@ -1881,9 +1889,7 @@ static bool rs_reshape_requested(struct raid_set *rs)
+ 	if (rs_is_raid0(rs))
+ 		return false;
+ 
+-	change = mddev->new_layout != mddev->layout ||
+-		 mddev->new_chunk_sectors != mddev->chunk_sectors ||
+-		 rs->delta_disks;
++	change = rs_is_layout_change(rs, false);
+ 
+ 	/* Historical case to support raid1 reshape without delta disks */
+ 	if (rs_is_raid1(rs)) {
+@@ -2818,7 +2824,7 @@ static sector_t _get_reshape_sectors(struct raid_set *rs)
+ }
+ 
+ /*
+- *
++ * Reshape:
+  * - change raid layout
+  * - change chunk size
+  * - add disks
+@@ -2927,6 +2933,20 @@ static int rs_setup_reshape(struct raid_set *rs)
+ 	return r;
+ }
+ 
++/*
++ * If the md resync thread has updated superblock with max reshape position
++ * at the end of a reshape but not (yet) reset the layout configuration
++ * changes -> reset the latter.
++ */
++static void rs_reset_inconclusive_reshape(struct raid_set *rs)
++{
++	if (!rs_is_reshaping(rs) && rs_is_layout_change(rs, true)) {
++		rs_set_cur(rs);
++		rs->md.delta_disks = 0;
++		rs->md.reshape_backwards = 0;
++	}
++}
++
+ /*
+  * Enable/disable discard support on RAID set depending on
+  * RAID level and discard properties of underlying RAID members.
+@@ -3213,11 +3233,14 @@ size_check:
+ 	if (r)
+ 		goto bad;
+ 
++	/* Catch any inconclusive reshape superblock content. */
++	rs_reset_inconclusive_reshape(rs);
++
+ 	/* Start raid set read-only and assumed clean to change in raid_resume() */
+ 	rs->md.ro = 1;
+ 	rs->md.in_sync = 1;
+ 
+-	/* Keep array frozen */
++	/* Keep array frozen until resume. */
+ 	set_bit(MD_RECOVERY_FROZEN, &rs->md.recovery);
+ 
+ 	/* Has to be held on running the array */
+@@ -3231,7 +3254,6 @@ size_check:
+ 	}
+ 
+ 	r = md_start(&rs->md);
+-
+ 	if (r) {
+ 		ti->error = "Failed to start raid array";
+ 		mddev_unlock(&rs->md);
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 729a72ec30cca..b1e867feb4f6b 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -569,6 +569,7 @@ out_tag_set:
+ 	blk_mq_free_tag_set(md->tag_set);
+ out_kfree_tag_set:
+ 	kfree(md->tag_set);
++	md->tag_set = NULL;
+ 
+ 	return err;
+ }
+@@ -578,6 +579,7 @@ void dm_mq_cleanup_mapped_device(struct mapped_device *md)
+ 	if (md->tag_set) {
+ 		blk_mq_free_tag_set(md->tag_set);
+ 		kfree(md->tag_set);
++		md->tag_set = NULL;
+ 	}
+ }
+ 
+diff --git a/drivers/md/persistent-data/dm-btree-internal.h b/drivers/md/persistent-data/dm-btree-internal.h
+index 564896659dd44..21d1a17e77c96 100644
+--- a/drivers/md/persistent-data/dm-btree-internal.h
++++ b/drivers/md/persistent-data/dm-btree-internal.h
+@@ -34,12 +34,12 @@ struct node_header {
+ 	__le32 max_entries;
+ 	__le32 value_size;
+ 	__le32 padding;
+-} __packed;
++} __attribute__((packed, aligned(8)));
+ 
+ struct btree_node {
+ 	struct node_header header;
+ 	__le64 keys[];
+-} __packed;
++} __attribute__((packed, aligned(8)));
+ 
+ 
+ /*
+diff --git a/drivers/md/persistent-data/dm-space-map-common.c b/drivers/md/persistent-data/dm-space-map-common.c
+index d8b4125e338ca..a213bf11738fb 100644
+--- a/drivers/md/persistent-data/dm-space-map-common.c
++++ b/drivers/md/persistent-data/dm-space-map-common.c
+@@ -339,6 +339,8 @@ int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
+ 	 */
+ 	begin = do_div(index_begin, ll->entries_per_block);
+ 	end = do_div(end, ll->entries_per_block);
++	if (end == 0)
++		end = ll->entries_per_block;
+ 
+ 	for (i = index_begin; i < index_end; i++, begin = 0) {
+ 		struct dm_block *blk;
+diff --git a/drivers/md/persistent-data/dm-space-map-common.h b/drivers/md/persistent-data/dm-space-map-common.h
+index 8de63ce39bdd5..87e17909ef521 100644
+--- a/drivers/md/persistent-data/dm-space-map-common.h
++++ b/drivers/md/persistent-data/dm-space-map-common.h
+@@ -33,7 +33,7 @@ struct disk_index_entry {
+ 	__le64 blocknr;
+ 	__le32 nr_free;
+ 	__le32 none_free_before;
+-} __packed;
++} __attribute__ ((packed, aligned(8)));
+ 
+ 
+ #define MAX_METADATA_BITMAPS 255
+@@ -43,7 +43,7 @@ struct disk_metadata_index {
+ 	__le64 blocknr;
+ 
+ 	struct disk_index_entry index[MAX_METADATA_BITMAPS];
+-} __packed;
++} __attribute__ ((packed, aligned(8)));
+ 
+ struct ll_disk;
+ 
+@@ -86,7 +86,7 @@ struct disk_sm_root {
+ 	__le64 nr_allocated;
+ 	__le64 bitmap_root;
+ 	__le64 ref_count_root;
+-} __packed;
++} __attribute__ ((packed, aligned(8)));
+ 
+ #define ENTRIES_PER_BYTE 4
+ 
+@@ -94,7 +94,7 @@ struct disk_bitmap_header {
+ 	__le32 csum;
+ 	__le32 not_used;
+ 	__le64 blocknr;
+-} __packed;
++} __attribute__ ((packed, aligned(8)));
+ 
+ enum allocation_event {
+ 	SM_NONE,
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 960d854c07f89..a6480568c7ebe 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -478,6 +478,8 @@ static void raid1_end_write_request(struct bio *bio)
+ 		if (!test_bit(Faulty, &rdev->flags))
+ 			set_bit(R1BIO_WriteError, &r1_bio->state);
+ 		else {
++			/* Fail the request */
++			set_bit(R1BIO_Degraded, &r1_bio->state);
+ 			/* Finished with this branch */
+ 			r1_bio->bios[mirror] = NULL;
+ 			to_put = bio;
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index 959fa28202597..ec9ebff28552b 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -241,6 +241,7 @@ static void dvb_media_device_free(struct dvb_device *dvbdev)
+ 
+ 	if (dvbdev->adapter->conn) {
+ 		media_device_unregister_entity(dvbdev->adapter->conn);
++		kfree(dvbdev->adapter->conn);
+ 		dvbdev->adapter->conn = NULL;
+ 		kfree(dvbdev->adapter->conn_pads);
+ 		dvbdev->adapter->conn_pads = NULL;
+diff --git a/drivers/media/i2c/adv7511-v4l2.c b/drivers/media/i2c/adv7511-v4l2.c
+index a3161d7090153..ab7883cff8b22 100644
+--- a/drivers/media/i2c/adv7511-v4l2.c
++++ b/drivers/media/i2c/adv7511-v4l2.c
+@@ -1964,7 +1964,7 @@ static int adv7511_remove(struct i2c_client *client)
+ 
+ 	adv7511_set_isr(sd, false);
+ 	adv7511_init_setup(sd);
+-	cancel_delayed_work(&state->edid_handler);
++	cancel_delayed_work_sync(&state->edid_handler);
+ 	i2c_unregister_device(state->i2c_edid);
+ 	i2c_unregister_device(state->i2c_cec);
+ 	i2c_unregister_device(state->i2c_pktmem);
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index 09004d928d11f..d1f58795794fd 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -3616,7 +3616,7 @@ static int adv76xx_remove(struct i2c_client *client)
+ 	io_write(sd, 0x6e, 0);
+ 	io_write(sd, 0x73, 0);
+ 
+-	cancel_delayed_work(&state->delayed_work_enable_hotplug);
++	cancel_delayed_work_sync(&state->delayed_work_enable_hotplug);
+ 	v4l2_async_unregister_subdev(sd);
+ 	media_entity_cleanup(&sd->entity);
+ 	adv76xx_unregister_clients(to_state(sd));
+diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
+index 0855f648416d1..f7d2b6cd3008b 100644
+--- a/drivers/media/i2c/adv7842.c
++++ b/drivers/media/i2c/adv7842.c
+@@ -3586,7 +3586,7 @@ static int adv7842_remove(struct i2c_client *client)
+ 	struct adv7842_state *state = to_state(sd);
+ 
+ 	adv7842_irq_enable(sd, false);
+-	cancel_delayed_work(&state->delayed_work_enable_hotplug);
++	cancel_delayed_work_sync(&state->delayed_work_enable_hotplug);
+ 	v4l2_device_unregister_subdev(sd);
+ 	media_entity_cleanup(&sd->entity);
+ 	adv7842_unregister_clients(sd);
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 831b5b54fd78c..1b309bb743c7b 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -2193,7 +2193,7 @@ static int tc358743_remove(struct i2c_client *client)
+ 		del_timer_sync(&state->timer);
+ 		flush_work(&state->work_i2c_poll);
+ 	}
+-	cancel_delayed_work(&state->delayed_work_enable_hotplug);
++	cancel_delayed_work_sync(&state->delayed_work_enable_hotplug);
+ 	cec_unregister_adapter(state->cec_adap);
+ 	v4l2_async_unregister_subdev(sd);
+ 	v4l2_device_unregister_subdev(sd);
+diff --git a/drivers/media/i2c/tda1997x.c b/drivers/media/i2c/tda1997x.c
+index a09bf0a39d058..89bb7e6dc7a42 100644
+--- a/drivers/media/i2c/tda1997x.c
++++ b/drivers/media/i2c/tda1997x.c
+@@ -2804,7 +2804,7 @@ static int tda1997x_remove(struct i2c_client *client)
+ 	media_entity_cleanup(&sd->entity);
+ 	v4l2_ctrl_handler_free(&state->hdl);
+ 	regulator_bulk_disable(TDA1997X_NUM_SUPPLIES, state->supplies);
+-	cancel_delayed_work(&state->delayed_work_enable_hpd);
++	cancel_delayed_work_sync(&state->delayed_work_enable_hpd);
+ 	mutex_destroy(&state->page_lock);
+ 	mutex_destroy(&state->lock);
+ 
+diff --git a/drivers/media/pci/saa7164/saa7164-encoder.c b/drivers/media/pci/saa7164/saa7164-encoder.c
+index 11e1eb6a6809e..1d1d32e043f16 100644
+--- a/drivers/media/pci/saa7164/saa7164-encoder.c
++++ b/drivers/media/pci/saa7164/saa7164-encoder.c
+@@ -1008,7 +1008,7 @@ int saa7164_encoder_register(struct saa7164_port *port)
+ 		printk(KERN_ERR "%s() failed (errno = %d), NO PCI configuration\n",
+ 			__func__, result);
+ 		result = -ENOMEM;
+-		goto failed;
++		goto fail_pci;
+ 	}
+ 
+ 	/* Establish encoder defaults here */
+@@ -1062,7 +1062,7 @@ int saa7164_encoder_register(struct saa7164_port *port)
+ 			  100000, ENCODER_DEF_BITRATE);
+ 	if (hdl->error) {
+ 		result = hdl->error;
+-		goto failed;
++		goto fail_hdl;
+ 	}
+ 
+ 	port->std = V4L2_STD_NTSC_M;
+@@ -1080,7 +1080,7 @@ int saa7164_encoder_register(struct saa7164_port *port)
+ 		printk(KERN_INFO "%s: can't allocate mpeg device\n",
+ 			dev->name);
+ 		result = -ENOMEM;
+-		goto failed;
++		goto fail_hdl;
+ 	}
+ 
+ 	port->v4l_device->ctrl_handler = hdl;
+@@ -1091,10 +1091,7 @@ int saa7164_encoder_register(struct saa7164_port *port)
+ 	if (result < 0) {
+ 		printk(KERN_INFO "%s: can't register mpeg device\n",
+ 			dev->name);
+-		/* TODO: We're going to leak here if we don't dealloc
+-		 The buffers above. The unreg function can't deal wit it.
+-		*/
+-		goto failed;
++		goto fail_reg;
+ 	}
+ 
+ 	printk(KERN_INFO "%s: registered device video%d [mpeg]\n",
+@@ -1116,9 +1113,14 @@ int saa7164_encoder_register(struct saa7164_port *port)
+ 
+ 	saa7164_api_set_encoder(port);
+ 	saa7164_api_get_encoder(port);
++	return 0;
+ 
+-	result = 0;
+-failed:
++fail_reg:
++	video_device_release(port->v4l_device);
++	port->v4l_device = NULL;
++fail_hdl:
++	v4l2_ctrl_handler_free(hdl);
++fail_pci:
+ 	return result;
+ }
+ 
+diff --git a/drivers/media/pci/sta2x11/Kconfig b/drivers/media/pci/sta2x11/Kconfig
+index 4dd98f94a91ed..27bb785136319 100644
+--- a/drivers/media/pci/sta2x11/Kconfig
++++ b/drivers/media/pci/sta2x11/Kconfig
+@@ -3,6 +3,7 @@ config STA2X11_VIP
+ 	tristate "STA2X11 VIP Video For Linux"
+ 	depends on PCI && VIDEO_V4L2 && VIRT_TO_BUS && I2C
+ 	depends on STA2X11 || COMPILE_TEST
++	select GPIOLIB if MEDIA_SUBDRV_AUTOSELECT
+ 	select VIDEO_ADV7180 if MEDIA_SUBDRV_AUTOSELECT
+ 	select VIDEOBUF2_DMA_CONTIG
+ 	select MEDIA_CONTROLLER
+diff --git a/drivers/media/platform/qcom/venus/hfi_parser.c b/drivers/media/platform/qcom/venus/hfi_parser.c
+index 363ee2a65453c..2dcf7eaea4ce2 100644
+--- a/drivers/media/platform/qcom/venus/hfi_parser.c
++++ b/drivers/media/platform/qcom/venus/hfi_parser.c
+@@ -239,8 +239,10 @@ u32 hfi_parser(struct venus_core *core, struct venus_inst *inst, void *buf,
+ 
+ 	parser_init(inst, &codecs, &domain);
+ 
+-	core->codecs_count = 0;
+-	memset(core->caps, 0, sizeof(core->caps));
++	if (core->res->hfi_version > HFI_VERSION_1XX) {
++		core->codecs_count = 0;
++		memset(core->caps, 0, sizeof(core->caps));
++	}
+ 
+ 	while (words_count) {
+ 		data = word + 1;
+diff --git a/drivers/media/platform/sti/bdisp/bdisp-debug.c b/drivers/media/platform/sti/bdisp/bdisp-debug.c
+index 2b270093009c7..a27f638df11c6 100644
+--- a/drivers/media/platform/sti/bdisp/bdisp-debug.c
++++ b/drivers/media/platform/sti/bdisp/bdisp-debug.c
+@@ -480,7 +480,7 @@ static int regs_show(struct seq_file *s, void *data)
+ 	int ret;
+ 	unsigned int i;
+ 
+-	ret = pm_runtime_get_sync(bdisp->dev);
++	ret = pm_runtime_resume_and_get(bdisp->dev);
+ 	if (ret < 0) {
+ 		seq_puts(s, "Cannot wake up IP\n");
+ 		return 0;
+diff --git a/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c b/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
+index ba5d078866072..2c159483c56ba 100644
+--- a/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
++++ b/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
+@@ -589,7 +589,7 @@ static int deinterlace_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 	int ret;
+ 
+ 	if (V4L2_TYPE_IS_OUTPUT(vq->type)) {
+-		ret = pm_runtime_get_sync(dev);
++		ret = pm_runtime_resume_and_get(dev);
+ 		if (ret < 0) {
+ 			dev_err(dev, "Failed to enable module\n");
+ 
+diff --git a/drivers/media/rc/ite-cir.c b/drivers/media/rc/ite-cir.c
+index 0c6229592e132..e5c4a6941d26b 100644
+--- a/drivers/media/rc/ite-cir.c
++++ b/drivers/media/rc/ite-cir.c
+@@ -276,8 +276,14 @@ static irqreturn_t ite_cir_isr(int irq, void *data)
+ 	/* read the interrupt flags */
+ 	iflags = dev->params.get_irq_causes(dev);
+ 
++	/* Check for RX overflow */
++	if (iflags & ITE_IRQ_RX_FIFO_OVERRUN) {
++		dev_warn(&dev->rdev->dev, "receive overflow\n");
++		ir_raw_event_reset(dev->rdev);
++	}
++
+ 	/* check for the receive interrupt */
+-	if (iflags & (ITE_IRQ_RX_FIFO | ITE_IRQ_RX_FIFO_OVERRUN)) {
++	if (iflags & ITE_IRQ_RX_FIFO) {
+ 		/* read the FIFO bytes */
+ 		rx_bytes =
+ 			dev->params.get_rx_bytes(dev, rx_buf,
+diff --git a/drivers/media/test-drivers/vivid/vivid-core.c b/drivers/media/test-drivers/vivid/vivid-core.c
+index aa8d350fd682a..1e356dc65d318 100644
+--- a/drivers/media/test-drivers/vivid/vivid-core.c
++++ b/drivers/media/test-drivers/vivid/vivid-core.c
+@@ -205,13 +205,13 @@ static const u8 vivid_hdmi_edid[256] = {
+ 	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x7b,
+ 
+-	0x02, 0x03, 0x3f, 0xf0, 0x51, 0x61, 0x60, 0x5f,
++	0x02, 0x03, 0x3f, 0xf1, 0x51, 0x61, 0x60, 0x5f,
+ 	0x5e, 0x5d, 0x10, 0x1f, 0x04, 0x13, 0x22, 0x21,
+ 	0x20, 0x05, 0x14, 0x02, 0x11, 0x01, 0x23, 0x09,
+ 	0x07, 0x07, 0x83, 0x01, 0x00, 0x00, 0x6d, 0x03,
+ 	0x0c, 0x00, 0x10, 0x00, 0x00, 0x3c, 0x21, 0x00,
+ 	0x60, 0x01, 0x02, 0x03, 0x67, 0xd8, 0x5d, 0xc4,
+-	0x01, 0x78, 0x00, 0x00, 0xe2, 0x00, 0xea, 0xe3,
++	0x01, 0x78, 0x00, 0x00, 0xe2, 0x00, 0xca, 0xe3,
+ 	0x05, 0x00, 0x00, 0xe3, 0x06, 0x01, 0x00, 0x4d,
+ 	0xd0, 0x00, 0xa0, 0xf0, 0x70, 0x3e, 0x80, 0x30,
+ 	0x20, 0x35, 0x00, 0xc0, 0x1c, 0x32, 0x00, 0x00,
+@@ -220,7 +220,7 @@ static const u8 vivid_hdmi_edid[256] = {
+ 	0x00, 0x00, 0x1a, 0x1a, 0x1d, 0x00, 0x80, 0x51,
+ 	0xd0, 0x1c, 0x20, 0x40, 0x80, 0x35, 0x00, 0xc0,
+ 	0x1c, 0x32, 0x00, 0x00, 0x1c, 0x00, 0x00, 0x00,
+-	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x63,
++	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x82,
+ };
+ 
+ static int vidioc_querycap(struct file *file, void  *priv,
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+index c1a7634e27b43..28e1fd64dd3c2 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-init.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+@@ -79,11 +79,17 @@ static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs)
+ 			}
+ 		}
+ 
+-		if ((ret = dvb_usb_adapter_stream_init(adap)) ||
+-			(ret = dvb_usb_adapter_dvb_init(adap, adapter_nrs)) ||
+-			(ret = dvb_usb_adapter_frontend_init(adap))) {
++		ret = dvb_usb_adapter_stream_init(adap);
++		if (ret)
+ 			return ret;
+-		}
++
++		ret = dvb_usb_adapter_dvb_init(adap, adapter_nrs);
++		if (ret)
++			goto dvb_init_err;
++
++		ret = dvb_usb_adapter_frontend_init(adap);
++		if (ret)
++			goto frontend_init_err;
+ 
+ 		/* use exclusive FE lock if there is multiple shared FEs */
+ 		if (adap->fe_adap[1].fe)
+@@ -103,6 +109,12 @@ static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs)
+ 	}
+ 
+ 	return 0;
++
++frontend_init_err:
++	dvb_usb_adapter_dvb_exit(adap);
++dvb_init_err:
++	dvb_usb_adapter_stream_exit(adap);
++	return ret;
+ }
+ 
+ static int dvb_usb_adapter_exit(struct dvb_usb_device *d)
+@@ -158,22 +170,20 @@ static int dvb_usb_init(struct dvb_usb_device *d, short *adapter_nums)
+ 
+ 		if (d->props.priv_init != NULL) {
+ 			ret = d->props.priv_init(d);
+-			if (ret != 0) {
+-				kfree(d->priv);
+-				d->priv = NULL;
+-				return ret;
+-			}
++			if (ret != 0)
++				goto err_priv_init;
+ 		}
+ 	}
+ 
+ 	/* check the capabilities and set appropriate variables */
+ 	dvb_usb_device_power_ctrl(d, 1);
+ 
+-	if ((ret = dvb_usb_i2c_init(d)) ||
+-		(ret = dvb_usb_adapter_init(d, adapter_nums))) {
+-		dvb_usb_exit(d);
+-		return ret;
+-	}
++	ret = dvb_usb_i2c_init(d);
++	if (ret)
++		goto err_i2c_init;
++	ret = dvb_usb_adapter_init(d, adapter_nums);
++	if (ret)
++		goto err_adapter_init;
+ 
+ 	if ((ret = dvb_usb_remote_init(d)))
+ 		err("could not initialize remote control.");
+@@ -181,6 +191,17 @@ static int dvb_usb_init(struct dvb_usb_device *d, short *adapter_nums)
+ 	dvb_usb_device_power_ctrl(d, 0);
+ 
+ 	return 0;
++
++err_adapter_init:
++	dvb_usb_adapter_exit(d);
++err_i2c_init:
++	dvb_usb_i2c_exit(d);
++	if (d->priv && d->props.priv_destroy)
++		d->props.priv_destroy(d);
++err_priv_init:
++	kfree(d->priv);
++	d->priv = NULL;
++	return ret;
+ }
+ 
+ /* determine the name and the state of the just found USB device */
+@@ -255,41 +276,50 @@ int dvb_usb_device_init(struct usb_interface *intf,
+ 	if (du != NULL)
+ 		*du = NULL;
+ 
+-	if ((desc = dvb_usb_find_device(udev, props, &cold)) == NULL) {
++	d = kzalloc(sizeof(*d), GFP_KERNEL);
++	if (!d) {
++		err("no memory for 'struct dvb_usb_device'");
++		return -ENOMEM;
++	}
++
++	memcpy(&d->props, props, sizeof(struct dvb_usb_device_properties));
++
++	desc = dvb_usb_find_device(udev, &d->props, &cold);
++	if (!desc) {
+ 		deb_err("something went very wrong, device was not found in current device list - let's see what comes next.\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto error;
+ 	}
+ 
+ 	if (cold) {
+ 		info("found a '%s' in cold state, will try to load a firmware", desc->name);
+ 		ret = dvb_usb_download_firmware(udev, props);
+ 		if (!props->no_reconnect || ret != 0)
+-			return ret;
++			goto error;
+ 	}
+ 
+ 	info("found a '%s' in warm state.", desc->name);
+-	d = kzalloc(sizeof(struct dvb_usb_device), GFP_KERNEL);
+-	if (d == NULL) {
+-		err("no memory for 'struct dvb_usb_device'");
+-		return -ENOMEM;
+-	}
+-
+ 	d->udev = udev;
+-	memcpy(&d->props, props, sizeof(struct dvb_usb_device_properties));
+ 	d->desc = desc;
+ 	d->owner = owner;
+ 
+ 	usb_set_intfdata(intf, d);
+ 
+-	if (du != NULL)
++	ret = dvb_usb_init(d, adapter_nums);
++	if (ret) {
++		info("%s error while loading driver (%d)", desc->name, ret);
++		goto error;
++	}
++
++	if (du)
+ 		*du = d;
+ 
+-	ret = dvb_usb_init(d, adapter_nums);
++	info("%s successfully initialized and connected.", desc->name);
++	return 0;
+ 
+-	if (ret == 0)
+-		info("%s successfully initialized and connected.", desc->name);
+-	else
+-		info("%s error while loading driver (%d)", desc->name, ret);
++ error:
++	usb_set_intfdata(intf, NULL);
++	kfree(d);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(dvb_usb_device_init);
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb.h b/drivers/media/usb/dvb-usb/dvb-usb.h
+index 741be0e694471..2b8ad2bde8a48 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb.h
++++ b/drivers/media/usb/dvb-usb/dvb-usb.h
+@@ -487,7 +487,7 @@ extern int __must_check
+ dvb_usb_generic_write(struct dvb_usb_device *, u8 *, u16);
+ 
+ /* commonly used remote control parsing */
+-extern int dvb_usb_nec_rc_key_to_event(struct dvb_usb_device *, u8[], u32 *, int *);
++extern int dvb_usb_nec_rc_key_to_event(struct dvb_usb_device *, u8[5], u32 *, int *);
+ 
+ /* commonly used firmware download types and function */
+ struct hexline {
+diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c b/drivers/media/usb/em28xx/em28xx-dvb.c
+index fb9cbfa81a84b..3cd9e9556fa9f 100644
+--- a/drivers/media/usb/em28xx/em28xx-dvb.c
++++ b/drivers/media/usb/em28xx/em28xx-dvb.c
+@@ -1984,6 +1984,7 @@ ret:
+ 	return result;
+ 
+ out_free:
++	em28xx_uninit_usb_xfer(dev, EM28XX_DIGITAL_MODE);
+ 	kfree(dvb);
+ 	dev->dvb = NULL;
+ 	goto ret;
+diff --git a/drivers/media/usb/gspca/gspca.c b/drivers/media/usb/gspca/gspca.c
+index 158c8e28ed2cc..47d8f28bfdfc2 100644
+--- a/drivers/media/usb/gspca/gspca.c
++++ b/drivers/media/usb/gspca/gspca.c
+@@ -1576,6 +1576,8 @@ out:
+ #endif
+ 	v4l2_ctrl_handler_free(gspca_dev->vdev.ctrl_handler);
+ 	v4l2_device_unregister(&gspca_dev->v4l2_dev);
++	if (sd_desc->probe_error)
++		sd_desc->probe_error(gspca_dev);
+ 	kfree(gspca_dev->usb_buf);
+ 	kfree(gspca_dev);
+ 	return ret;
+diff --git a/drivers/media/usb/gspca/gspca.h b/drivers/media/usb/gspca/gspca.h
+index b0ced2e140064..a6554d5e9e1a5 100644
+--- a/drivers/media/usb/gspca/gspca.h
++++ b/drivers/media/usb/gspca/gspca.h
+@@ -105,6 +105,7 @@ struct sd_desc {
+ 	cam_cf_op config;	/* called on probe */
+ 	cam_op init;		/* called on probe and resume */
+ 	cam_op init_controls;	/* called on probe */
++	cam_v_op probe_error;	/* called if probe failed, do cleanup here */
+ 	cam_op start;		/* called on stream on after URBs creation */
+ 	cam_pkt_op pkt_scan;
+ /* optional operations */
+diff --git a/drivers/media/usb/gspca/sq905.c b/drivers/media/usb/gspca/sq905.c
+index 97799cfb832e3..9491110709718 100644
+--- a/drivers/media/usb/gspca/sq905.c
++++ b/drivers/media/usb/gspca/sq905.c
+@@ -158,7 +158,7 @@ static int
+ sq905_read_data(struct gspca_dev *gspca_dev, u8 *data, int size, int need_lock)
+ {
+ 	int ret;
+-	int act_len;
++	int act_len = 0;
+ 
+ 	gspca_dev->usb_buf[0] = '\0';
+ 	if (need_lock)
+diff --git a/drivers/media/usb/gspca/stv06xx/stv06xx.c b/drivers/media/usb/gspca/stv06xx/stv06xx.c
+index 95673fc0a99c5..d9bc2aacc8851 100644
+--- a/drivers/media/usb/gspca/stv06xx/stv06xx.c
++++ b/drivers/media/usb/gspca/stv06xx/stv06xx.c
+@@ -529,12 +529,21 @@ static int sd_int_pkt_scan(struct gspca_dev *gspca_dev,
+ static int stv06xx_config(struct gspca_dev *gspca_dev,
+ 			  const struct usb_device_id *id);
+ 
++static void stv06xx_probe_error(struct gspca_dev *gspca_dev)
++{
++	struct sd *sd = (struct sd *)gspca_dev;
++
++	kfree(sd->sensor_priv);
++	sd->sensor_priv = NULL;
++}
++
+ /* sub-driver description */
+ static const struct sd_desc sd_desc = {
+ 	.name = MODULE_NAME,
+ 	.config = stv06xx_config,
+ 	.init = stv06xx_init,
+ 	.init_controls = stv06xx_init_controls,
++	.probe_error = stv06xx_probe_error,
+ 	.start = stv06xx_start,
+ 	.stopN = stv06xx_stopN,
+ 	.pkt_scan = stv06xx_pkt_scan,
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index cfe422d9f439b..3d8c54b826e99 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -2201,7 +2201,16 @@ static void new_to_req(struct v4l2_ctrl_ref *ref)
+ 	if (!ref)
+ 		return;
+ 	ptr_to_ptr(ref->ctrl, ref->ctrl->p_new, ref->p_req);
+-	ref->req = ref;
++	ref->valid_p_req = true;
++}
++
++/* Copy the current value to the request value */
++static void cur_to_req(struct v4l2_ctrl_ref *ref)
++{
++	if (!ref)
++		return;
++	ptr_to_ptr(ref->ctrl, ref->ctrl->p_cur, ref->p_req);
++	ref->valid_p_req = true;
+ }
+ 
+ /* Copy the request value to the new value */
+@@ -2209,8 +2218,8 @@ static void req_to_new(struct v4l2_ctrl_ref *ref)
+ {
+ 	if (!ref)
+ 		return;
+-	if (ref->req)
+-		ptr_to_ptr(ref->ctrl, ref->req->p_req, ref->ctrl->p_new);
++	if (ref->valid_p_req)
++		ptr_to_ptr(ref->ctrl, ref->p_req, ref->ctrl->p_new);
+ 	else
+ 		ptr_to_ptr(ref->ctrl, ref->ctrl->p_cur, ref->ctrl->p_new);
+ }
+@@ -3380,39 +3389,8 @@ static void v4l2_ctrl_request_queue(struct media_request_object *obj)
+ 	struct v4l2_ctrl_handler *hdl =
+ 		container_of(obj, struct v4l2_ctrl_handler, req_obj);
+ 	struct v4l2_ctrl_handler *main_hdl = obj->priv;
+-	struct v4l2_ctrl_handler *prev_hdl = NULL;
+-	struct v4l2_ctrl_ref *ref_ctrl, *ref_ctrl_prev = NULL;
+ 
+ 	mutex_lock(main_hdl->lock);
+-	if (list_empty(&main_hdl->requests_queued))
+-		goto queue;
+-
+-	prev_hdl = list_last_entry(&main_hdl->requests_queued,
+-				   struct v4l2_ctrl_handler, requests_queued);
+-	/*
+-	 * Note: prev_hdl and hdl must contain the same list of control
+-	 * references, so if any differences are detected then that is a
+-	 * driver bug and the WARN_ON is triggered.
+-	 */
+-	mutex_lock(prev_hdl->lock);
+-	ref_ctrl_prev = list_first_entry(&prev_hdl->ctrl_refs,
+-					 struct v4l2_ctrl_ref, node);
+-	list_for_each_entry(ref_ctrl, &hdl->ctrl_refs, node) {
+-		if (ref_ctrl->req)
+-			continue;
+-		while (ref_ctrl_prev->ctrl->id < ref_ctrl->ctrl->id) {
+-			/* Should never happen, but just in case... */
+-			if (list_is_last(&ref_ctrl_prev->node,
+-					 &prev_hdl->ctrl_refs))
+-				break;
+-			ref_ctrl_prev = list_next_entry(ref_ctrl_prev, node);
+-		}
+-		if (WARN_ON(ref_ctrl_prev->ctrl->id != ref_ctrl->ctrl->id))
+-			break;
+-		ref_ctrl->req = ref_ctrl_prev->req;
+-	}
+-	mutex_unlock(prev_hdl->lock);
+-queue:
+ 	list_add_tail(&hdl->requests_queued, &main_hdl->requests_queued);
+ 	hdl->request_is_queued = true;
+ 	mutex_unlock(main_hdl->lock);
+@@ -3469,7 +3447,7 @@ v4l2_ctrl_request_hdl_ctrl_find(struct v4l2_ctrl_handler *hdl, u32 id)
+ {
+ 	struct v4l2_ctrl_ref *ref = find_ref_lock(hdl, id);
+ 
+-	return (ref && ref->req == ref) ? ref->ctrl : NULL;
++	return (ref && ref->valid_p_req) ? ref->ctrl : NULL;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_ctrl_request_hdl_ctrl_find);
+ 
+@@ -3655,7 +3633,13 @@ static int class_check(struct v4l2_ctrl_handler *hdl, u32 which)
+ 	return find_ref_lock(hdl, which | 1) ? 0 : -EINVAL;
+ }
+ 
+-/* Get extended controls. Allocates the helpers array if needed. */
++/*
++ * Get extended controls. Allocates the helpers array if needed.
++ *
++ * Note that v4l2_g_ext_ctrls_common() with 'which' set to
++ * V4L2_CTRL_WHICH_REQUEST_VAL is only called if the request was
++ * completed, and in that case valid_p_req is true for all controls.
++ */
+ static int v4l2_g_ext_ctrls_common(struct v4l2_ctrl_handler *hdl,
+ 				   struct v4l2_ext_controls *cs,
+ 				   struct video_device *vdev)
+@@ -3664,9 +3648,10 @@ static int v4l2_g_ext_ctrls_common(struct v4l2_ctrl_handler *hdl,
+ 	struct v4l2_ctrl_helper *helpers = helper;
+ 	int ret;
+ 	int i, j;
+-	bool def_value;
++	bool is_default, is_request;
+ 
+-	def_value = (cs->which == V4L2_CTRL_WHICH_DEF_VAL);
++	is_default = (cs->which == V4L2_CTRL_WHICH_DEF_VAL);
++	is_request = (cs->which == V4L2_CTRL_WHICH_REQUEST_VAL);
+ 
+ 	cs->error_idx = cs->count;
+ 	cs->which = V4L2_CTRL_ID2WHICH(cs->which);
+@@ -3692,11 +3677,9 @@ static int v4l2_g_ext_ctrls_common(struct v4l2_ctrl_handler *hdl,
+ 			ret = -EACCES;
+ 
+ 	for (i = 0; !ret && i < cs->count; i++) {
+-		int (*ctrl_to_user)(struct v4l2_ext_control *c,
+-				    struct v4l2_ctrl *ctrl);
+ 		struct v4l2_ctrl *master;
+-
+-		ctrl_to_user = def_value ? def_to_user : cur_to_user;
++		bool is_volatile = false;
++		u32 idx = i;
+ 
+ 		if (helpers[i].mref == NULL)
+ 			continue;
+@@ -3706,31 +3689,48 @@ static int v4l2_g_ext_ctrls_common(struct v4l2_ctrl_handler *hdl,
+ 
+ 		v4l2_ctrl_lock(master);
+ 
+-		/* g_volatile_ctrl will update the new control values */
+-		if (!def_value &&
++		/*
++		 * g_volatile_ctrl will update the new control values.
++		 * This makes no sense for V4L2_CTRL_WHICH_DEF_VAL and
++		 * V4L2_CTRL_WHICH_REQUEST_VAL. In the case of requests
++		 * it is v4l2_ctrl_request_complete() that copies the
++		 * volatile controls at the time of request completion
++		 * to the request, so you don't want to do that again.
++		 */
++		if (!is_default && !is_request &&
+ 		    ((master->flags & V4L2_CTRL_FLAG_VOLATILE) ||
+ 		    (master->has_volatiles && !is_cur_manual(master)))) {
+ 			for (j = 0; j < master->ncontrols; j++)
+ 				cur_to_new(master->cluster[j]);
+ 			ret = call_op(master, g_volatile_ctrl);
+-			ctrl_to_user = new_to_user;
++			is_volatile = true;
+ 		}
+-		/* If OK, then copy the current (for non-volatile controls)
+-		   or the new (for volatile controls) control values to the
+-		   caller */
+-		if (!ret) {
+-			u32 idx = i;
+ 
+-			do {
+-				if (helpers[idx].ref->req)
+-					ret = req_to_user(cs->controls + idx,
+-						helpers[idx].ref->req);
+-				else
+-					ret = ctrl_to_user(cs->controls + idx,
+-						helpers[idx].ref->ctrl);
+-				idx = helpers[idx].next;
+-			} while (!ret && idx);
++		if (ret) {
++			v4l2_ctrl_unlock(master);
++			break;
+ 		}
++
++		/*
++		 * Copy the default value (if is_default is true), the
++		 * request value (if is_request is true and p_req is valid),
++		 * the new volatile value (if is_volatile is true) or the
++		 * current value.
++		 */
++		do {
++			struct v4l2_ctrl_ref *ref = helpers[idx].ref;
++
++			if (is_default)
++				ret = def_to_user(cs->controls + idx, ref->ctrl);
++			else if (is_request && ref->valid_p_req)
++				ret = req_to_user(cs->controls + idx, ref);
++			else if (is_volatile)
++				ret = new_to_user(cs->controls + idx, ref->ctrl);
++			else
++				ret = cur_to_user(cs->controls + idx, ref->ctrl);
++			idx = helpers[idx].next;
++		} while (!ret && idx);
++
+ 		v4l2_ctrl_unlock(master);
+ 	}
+ 
+@@ -4368,8 +4368,6 @@ void v4l2_ctrl_request_complete(struct media_request *req,
+ 		unsigned int i;
+ 
+ 		if (ctrl->flags & V4L2_CTRL_FLAG_VOLATILE) {
+-			ref->req = ref;
+-
+ 			v4l2_ctrl_lock(master);
+ 			/* g_volatile_ctrl will update the current control values */
+ 			for (i = 0; i < master->ncontrols; i++)
+@@ -4379,21 +4377,12 @@ void v4l2_ctrl_request_complete(struct media_request *req,
+ 			v4l2_ctrl_unlock(master);
+ 			continue;
+ 		}
+-		if (ref->req == ref)
++		if (ref->valid_p_req)
+ 			continue;
+ 
++		/* Copy the current control value into the request */
+ 		v4l2_ctrl_lock(ctrl);
+-		if (ref->req) {
+-			ptr_to_ptr(ctrl, ref->req->p_req, ref->p_req);
+-		} else {
+-			ptr_to_ptr(ctrl, ctrl->p_cur, ref->p_req);
+-			/*
+-			 * Set ref->req to ensure that when userspace wants to
+-			 * obtain the controls of this request it will take
+-			 * this value and not the current value of the control.
+-			 */
+-			ref->req = ref;
+-		}
++		cur_to_req(ref);
+ 		v4l2_ctrl_unlock(ctrl);
+ 	}
+ 
+@@ -4458,7 +4447,7 @@ int v4l2_ctrl_request_setup(struct media_request *req,
+ 				struct v4l2_ctrl_ref *r =
+ 					find_ref(hdl, master->cluster[i]->id);
+ 
+-				if (r->req && r == r->req) {
++				if (r->valid_p_req) {
+ 					have_new_data = true;
+ 					break;
+ 				}
+diff --git a/drivers/mfd/arizona-irq.c b/drivers/mfd/arizona-irq.c
+index 077d9ab112b71..d919ae9691e23 100644
+--- a/drivers/mfd/arizona-irq.c
++++ b/drivers/mfd/arizona-irq.c
+@@ -100,7 +100,7 @@ static irqreturn_t arizona_irq_thread(int irq, void *data)
+ 	unsigned int val;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(arizona->dev);
++	ret = pm_runtime_resume_and_get(arizona->dev);
+ 	if (ret < 0) {
+ 		dev_err(arizona->dev, "Failed to resume device: %d\n", ret);
+ 		return IRQ_NONE;
+diff --git a/drivers/mfd/da9063-i2c.c b/drivers/mfd/da9063-i2c.c
+index b8217ad303ce8..3419814d016b5 100644
+--- a/drivers/mfd/da9063-i2c.c
++++ b/drivers/mfd/da9063-i2c.c
+@@ -442,6 +442,16 @@ static int da9063_i2c_probe(struct i2c_client *i2c,
+ 		return ret;
+ 	}
+ 
++	/* If SMBus is not available and only I2C is possible, enter I2C mode */
++	if (i2c_check_functionality(i2c->adapter, I2C_FUNC_I2C)) {
++		ret = regmap_clear_bits(da9063->regmap, DA9063_REG_CONFIG_J,
++					DA9063_TWOWIRE_TO);
++		if (ret < 0) {
++			dev_err(da9063->dev, "Failed to set Two-Wire Bus Mode.\n");
++			return -EIO;
++		}
++	}
++
+ 	return da9063_device_init(da9063, i2c->irq);
+ }
+ 
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 42e27a2982180..3246598e4d7e3 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -571,6 +571,18 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 		main_md->part_curr = value & EXT_CSD_PART_CONFIG_ACC_MASK;
+ 	}
+ 
++	/*
++	 * Make sure to update CACHE_CTRL in case it was changed. The cache
++	 * will get turned back on if the card is re-initialized, e.g.
++	 * suspend/resume or hw reset in recovery.
++	 */
++	if ((MMC_EXTRACT_INDEX_FROM_ARG(cmd.arg) == EXT_CSD_CACHE_CTRL) &&
++	    (cmd.opcode == MMC_SWITCH)) {
++		u8 value = MMC_EXTRACT_VALUE_FROM_ARG(cmd.arg) & 1;
++
++		card->ext_csd.cache_ctrl = value;
++	}
++
+ 	/*
+ 	 * According to the SD specs, some commands require a delay after
+ 	 * issuing the command.
+@@ -2221,6 +2233,10 @@ enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req)
+ 	case MMC_ISSUE_ASYNC:
+ 		switch (req_op(req)) {
+ 		case REQ_OP_FLUSH:
++			if (!mmc_cache_enabled(host)) {
++				blk_mq_end_request(req, BLK_STS_OK);
++				return MMC_REQ_FINISHED;
++			}
+ 			ret = mmc_blk_cqe_issue_flush(mq, req);
+ 			break;
+ 		case REQ_OP_READ:
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index d42037f0f10d7..eaf4810fe656d 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -1204,7 +1204,7 @@ int mmc_set_uhs_voltage(struct mmc_host *host, u32 ocr)
+ 
+ 	err = mmc_wait_for_cmd(host, &cmd, 0);
+ 	if (err)
+-		return err;
++		goto power_cycle;
+ 
+ 	if (!mmc_host_is_spi(host) && (cmd.resp[0] & R1_ERROR))
+ 		return -EIO;
+@@ -2355,80 +2355,6 @@ void mmc_stop_host(struct mmc_host *host)
+ 	mmc_release_host(host);
+ }
+ 
+-#ifdef CONFIG_PM_SLEEP
+-/* Do the card removal on suspend if card is assumed removeable
+- * Do that in pm notifier while userspace isn't yet frozen, so we will be able
+-   to sync the card.
+-*/
+-static int mmc_pm_notify(struct notifier_block *notify_block,
+-			unsigned long mode, void *unused)
+-{
+-	struct mmc_host *host = container_of(
+-		notify_block, struct mmc_host, pm_notify);
+-	unsigned long flags;
+-	int err = 0;
+-
+-	switch (mode) {
+-	case PM_HIBERNATION_PREPARE:
+-	case PM_SUSPEND_PREPARE:
+-	case PM_RESTORE_PREPARE:
+-		spin_lock_irqsave(&host->lock, flags);
+-		host->rescan_disable = 1;
+-		spin_unlock_irqrestore(&host->lock, flags);
+-		cancel_delayed_work_sync(&host->detect);
+-
+-		if (!host->bus_ops)
+-			break;
+-
+-		/* Validate prerequisites for suspend */
+-		if (host->bus_ops->pre_suspend)
+-			err = host->bus_ops->pre_suspend(host);
+-		if (!err)
+-			break;
+-
+-		if (!mmc_card_is_removable(host)) {
+-			dev_warn(mmc_dev(host),
+-				 "pre_suspend failed for non-removable host: "
+-				 "%d\n", err);
+-			/* Avoid removing non-removable hosts */
+-			break;
+-		}
+-
+-		/* Calling bus_ops->remove() with a claimed host can deadlock */
+-		host->bus_ops->remove(host);
+-		mmc_claim_host(host);
+-		mmc_detach_bus(host);
+-		mmc_power_off(host);
+-		mmc_release_host(host);
+-		host->pm_flags = 0;
+-		break;
+-
+-	case PM_POST_SUSPEND:
+-	case PM_POST_HIBERNATION:
+-	case PM_POST_RESTORE:
+-
+-		spin_lock_irqsave(&host->lock, flags);
+-		host->rescan_disable = 0;
+-		spin_unlock_irqrestore(&host->lock, flags);
+-		_mmc_detect_change(host, 0, false);
+-
+-	}
+-
+-	return 0;
+-}
+-
+-void mmc_register_pm_notifier(struct mmc_host *host)
+-{
+-	host->pm_notify.notifier_call = mmc_pm_notify;
+-	register_pm_notifier(&host->pm_notify);
+-}
+-
+-void mmc_unregister_pm_notifier(struct mmc_host *host)
+-{
+-	unregister_pm_notifier(&host->pm_notify);
+-}
+-#endif
+-
+ static int __init mmc_init(void)
+ {
+ 	int ret;
+diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h
+index 575ac0257af2f..db3c9c68875d8 100644
+--- a/drivers/mmc/core/core.h
++++ b/drivers/mmc/core/core.h
+@@ -29,6 +29,7 @@ struct mmc_bus_ops {
+ 	int (*shutdown)(struct mmc_host *);
+ 	int (*hw_reset)(struct mmc_host *);
+ 	int (*sw_reset)(struct mmc_host *);
++	bool (*cache_enabled)(struct mmc_host *);
+ };
+ 
+ void mmc_attach_bus(struct mmc_host *host, const struct mmc_bus_ops *ops);
+@@ -93,14 +94,6 @@ int mmc_execute_tuning(struct mmc_card *card);
+ int mmc_hs200_to_hs400(struct mmc_card *card);
+ int mmc_hs400_to_hs200(struct mmc_card *card);
+ 
+-#ifdef CONFIG_PM_SLEEP
+-void mmc_register_pm_notifier(struct mmc_host *host);
+-void mmc_unregister_pm_notifier(struct mmc_host *host);
+-#else
+-static inline void mmc_register_pm_notifier(struct mmc_host *host) { }
+-static inline void mmc_unregister_pm_notifier(struct mmc_host *host) { }
+-#endif
+-
+ void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq);
+ bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq);
+ 
+@@ -171,4 +164,12 @@ static inline void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
+ 		host->ops->post_req(host, mrq, err);
+ }
+ 
++static inline bool mmc_cache_enabled(struct mmc_host *host)
++{
++	if (host->bus_ops->cache_enabled)
++		return host->bus_ops->cache_enabled(host);
++
++	return false;
++}
++
+ #endif
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index 96b2ca1f1b06d..fa59e6f4801c1 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -34,6 +34,42 @@
+ 
+ static DEFINE_IDA(mmc_host_ida);
+ 
++#ifdef CONFIG_PM_SLEEP
++static int mmc_host_class_prepare(struct device *dev)
++{
++	struct mmc_host *host = cls_dev_to_mmc_host(dev);
++
++	/*
++	 * It's safe to access the bus_ops pointer, as both userspace and the
++	 * workqueue for detecting cards are frozen at this point.
++	 */
++	if (!host->bus_ops)
++		return 0;
++
++	/* Validate conditions for system suspend. */
++	if (host->bus_ops->pre_suspend)
++		return host->bus_ops->pre_suspend(host);
++
++	return 0;
++}
++
++static void mmc_host_class_complete(struct device *dev)
++{
++	struct mmc_host *host = cls_dev_to_mmc_host(dev);
++
++	_mmc_detect_change(host, 0, false);
++}
++
++static const struct dev_pm_ops mmc_host_class_dev_pm_ops = {
++	.prepare = mmc_host_class_prepare,
++	.complete = mmc_host_class_complete,
++};
++
++#define MMC_HOST_CLASS_DEV_PM_OPS (&mmc_host_class_dev_pm_ops)
++#else
++#define MMC_HOST_CLASS_DEV_PM_OPS NULL
++#endif
++
+ static void mmc_host_classdev_release(struct device *dev)
+ {
+ 	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+@@ -45,6 +81,7 @@ static void mmc_host_classdev_release(struct device *dev)
+ static struct class mmc_host_class = {
+ 	.name		= "mmc_host",
+ 	.dev_release	= mmc_host_classdev_release,
++	.pm		= MMC_HOST_CLASS_DEV_PM_OPS,
+ };
+ 
+ int mmc_register_host_class(void)
+@@ -493,8 +530,6 @@ int mmc_add_host(struct mmc_host *host)
+ #endif
+ 
+ 	mmc_start_host(host);
+-	mmc_register_pm_notifier(host);
+-
+ 	return 0;
+ }
+ 
+@@ -510,7 +545,6 @@ EXPORT_SYMBOL(mmc_add_host);
+  */
+ void mmc_remove_host(struct mmc_host *host)
+ {
+-	mmc_unregister_pm_notifier(host);
+ 	mmc_stop_host(host);
+ 
+ #ifdef CONFIG_DEBUG_FS
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index 9ce34e8800335..7494d595035e3 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -2033,6 +2033,12 @@ static void mmc_detect(struct mmc_host *host)
+ 	}
+ }
+ 
++static bool _mmc_cache_enabled(struct mmc_host *host)
++{
++	return host->card->ext_csd.cache_size > 0 &&
++	       host->card->ext_csd.cache_ctrl & 1;
++}
++
+ static int _mmc_suspend(struct mmc_host *host, bool is_suspend)
+ {
+ 	int err = 0;
+@@ -2212,6 +2218,7 @@ static const struct mmc_bus_ops mmc_ops = {
+ 	.alive = mmc_alive,
+ 	.shutdown = mmc_shutdown,
+ 	.hw_reset = _mmc_hw_reset,
++	.cache_enabled = _mmc_cache_enabled,
+ };
+ 
+ /*
+diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
+index baa6314f69b41..ebad70e4481af 100644
+--- a/drivers/mmc/core/mmc_ops.c
++++ b/drivers/mmc/core/mmc_ops.c
+@@ -988,9 +988,7 @@ int mmc_flush_cache(struct mmc_card *card)
+ {
+ 	int err = 0;
+ 
+-	if (mmc_card_mmc(card) &&
+-			(card->ext_csd.cache_size > 0) &&
+-			(card->ext_csd.cache_ctrl & 1)) {
++	if (mmc_cache_enabled(card->host)) {
+ 		err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
+ 				 EXT_CSD_FLUSH_CACHE, 1,
+ 				 MMC_CACHE_FLUSH_TIMEOUT_MS);
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 6f054c449d467..636d4e3aa0e35 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -135,6 +135,9 @@ static int mmc_decode_csd(struct mmc_card *card)
+ 			csd->erase_size = UNSTUFF_BITS(resp, 39, 7) + 1;
+ 			csd->erase_size <<= csd->write_blkbits - 9;
+ 		}
++
++		if (UNSTUFF_BITS(resp, 13, 1))
++			mmc_card_set_readonly(card);
+ 		break;
+ 	case 1:
+ 		/*
+@@ -169,6 +172,9 @@ static int mmc_decode_csd(struct mmc_card *card)
+ 		csd->write_blkbits = 9;
+ 		csd->write_partial = 0;
+ 		csd->erase_size = 1;
++
++		if (UNSTUFF_BITS(resp, 13, 1))
++			mmc_card_set_readonly(card);
+ 		break;
+ 	default:
+ 		pr_err("%s: unrecognised CSD structure version %d\n",
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index 694a212cbe25a..1b0853a82189a 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -985,21 +985,37 @@ out:
+  */
+ static int mmc_sdio_pre_suspend(struct mmc_host *host)
+ {
+-	int i, err = 0;
++	int i;
+ 
+ 	for (i = 0; i < host->card->sdio_funcs; i++) {
+ 		struct sdio_func *func = host->card->sdio_func[i];
+ 		if (func && sdio_func_present(func) && func->dev.driver) {
+ 			const struct dev_pm_ops *pmops = func->dev.driver->pm;
+-			if (!pmops || !pmops->suspend || !pmops->resume) {
++			if (!pmops || !pmops->suspend || !pmops->resume)
+ 				/* force removal of entire card in that case */
+-				err = -ENOSYS;
+-				break;
+-			}
++				goto remove;
+ 		}
+ 	}
+ 
+-	return err;
++	return 0;
++
++remove:
++	if (!mmc_card_is_removable(host)) {
++		dev_warn(mmc_dev(host),
++			 "missing suspend/resume ops for non-removable SDIO card\n");
++		/* Don't remove a non-removable card - we can't re-detect it. */
++		return 0;
++	}
++
++	/* Remove the SDIO card and let it be re-detected later on. */
++	mmc_sdio_remove(host);
++	mmc_claim_host(host);
++	mmc_detach_bus(host);
++	mmc_power_off(host);
++	mmc_release_host(host);
++	host->pm_flags = 0;
++
++	return 0;
+ }
+ 
+ /*
+diff --git a/drivers/mmc/host/sdhci-brcmstb.c b/drivers/mmc/host/sdhci-brcmstb.c
+index f9780c65ebe98..f24623aac2dbe 100644
+--- a/drivers/mmc/host/sdhci-brcmstb.c
++++ b/drivers/mmc/host/sdhci-brcmstb.c
+@@ -199,7 +199,6 @@ static int sdhci_brcmstb_add_host(struct sdhci_host *host,
+ 	if (dma64) {
+ 		dev_dbg(mmc_dev(host->mmc), "Using 64 bit DMA\n");
+ 		cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
+-		cq_host->quirks |= CQHCI_QUIRK_SHORT_TXFR_DESC_SZ;
+ 	}
+ 
+ 	ret = cqhci_init(cq_host, host->mmc, dma64);
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 5d9b3106d2f70..d28809e479628 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -1504,7 +1504,7 @@ sdhci_esdhc_imx_probe_dt(struct platform_device *pdev,
+ 
+ 	mmc_of_parse_voltage(np, &host->ocr_mask);
+ 
+-	if (esdhc_is_usdhc(imx_data)) {
++	if (esdhc_is_usdhc(imx_data) && !IS_ERR(imx_data->pinctrl)) {
+ 		imx_data->pins_100mhz = pinctrl_lookup_state(imx_data->pinctrl,
+ 						ESDHC_PINCTRL_STATE_100MHZ);
+ 		imx_data->pins_200mhz = pinctrl_lookup_state(imx_data->pinctrl,
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 9552708846ca3..bf04a08eeba13 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -516,6 +516,7 @@ struct intel_host {
+ 	int	drv_strength;
+ 	bool	d3_retune;
+ 	bool	rpm_retune_ok;
++	bool	needs_pwr_off;
+ 	u32	glk_rx_ctrl1;
+ 	u32	glk_tun_val;
+ 	u32	active_ltr;
+@@ -643,9 +644,25 @@ out:
+ static void sdhci_intel_set_power(struct sdhci_host *host, unsigned char mode,
+ 				  unsigned short vdd)
+ {
++	struct sdhci_pci_slot *slot = sdhci_priv(host);
++	struct intel_host *intel_host = sdhci_pci_priv(slot);
+ 	int cntr;
+ 	u8 reg;
+ 
++	/*
++	 * Bus power may control card power, but a full reset still may not
++	 * reset the power, whereas a direct write to SDHCI_POWER_CONTROL can.
++	 * That might be needed to initialize correctly, if the card was left
++	 * powered on previously.
++	 */
++	if (intel_host->needs_pwr_off) {
++		intel_host->needs_pwr_off = false;
++		if (mode != MMC_POWER_OFF) {
++			sdhci_writeb(host, 0, SDHCI_POWER_CONTROL);
++			usleep_range(10000, 12500);
++		}
++	}
++
+ 	sdhci_set_power(host, mode, vdd);
+ 
+ 	if (mode == MMC_POWER_OFF)
+@@ -1135,6 +1152,14 @@ static int byt_sdio_probe_slot(struct sdhci_pci_slot *slot)
+ 	return 0;
+ }
+ 
++static void byt_needs_pwr_off(struct sdhci_pci_slot *slot)
++{
++	struct intel_host *intel_host = sdhci_pci_priv(slot);
++	u8 reg = sdhci_readb(slot->host, SDHCI_POWER_CONTROL);
++
++	intel_host->needs_pwr_off = reg  & SDHCI_POWER_ON;
++}
++
+ static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
+ {
+ 	byt_probe_slot(slot);
+@@ -1152,6 +1177,8 @@ static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
+ 	    slot->chip->pdev->subsystem_device == PCI_SUBDEVICE_ID_NI_78E3)
+ 		slot->host->mmc->caps2 |= MMC_CAP2_AVOID_3_3V;
+ 
++	byt_needs_pwr_off(slot);
++
+ 	return 0;
+ }
+ 
+@@ -1903,6 +1930,8 @@ static const struct pci_device_id pci_ids[] = {
+ 	SDHCI_PCI_DEVICE(INTEL, CMLH_SD,   intel_byt_sd),
+ 	SDHCI_PCI_DEVICE(INTEL, JSL_EMMC,  intel_glk_emmc),
+ 	SDHCI_PCI_DEVICE(INTEL, JSL_SD,    intel_byt_sd),
++	SDHCI_PCI_DEVICE(INTEL, LKF_EMMC,  intel_glk_emmc),
++	SDHCI_PCI_DEVICE(INTEL, LKF_SD,    intel_byt_sd),
+ 	SDHCI_PCI_DEVICE(O2, 8120,     o2),
+ 	SDHCI_PCI_DEVICE(O2, 8220,     o2),
+ 	SDHCI_PCI_DEVICE(O2, 8221,     o2),
+diff --git a/drivers/mmc/host/sdhci-pci.h b/drivers/mmc/host/sdhci-pci.h
+index d0ed232af0eb8..8f90c4163bb5c 100644
+--- a/drivers/mmc/host/sdhci-pci.h
++++ b/drivers/mmc/host/sdhci-pci.h
+@@ -57,6 +57,8 @@
+ #define PCI_DEVICE_ID_INTEL_CMLH_SD	0x06f5
+ #define PCI_DEVICE_ID_INTEL_JSL_EMMC	0x4dc4
+ #define PCI_DEVICE_ID_INTEL_JSL_SD	0x4df8
++#define PCI_DEVICE_ID_INTEL_LKF_EMMC	0x98c4
++#define PCI_DEVICE_ID_INTEL_LKF_SD	0x98f8
+ 
+ #define PCI_DEVICE_ID_SYSKONNECT_8000	0x8000
+ #define PCI_DEVICE_ID_VIA_95D0		0x95d0
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 41d193fa77bbf..8ea9132ebca4e 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -119,6 +119,10 @@
+ /* SDMMC CQE Base Address for Tegra Host Ver 4.1 and Higher */
+ #define SDHCI_TEGRA_CQE_BASE_ADDR			0xF000
+ 
++#define SDHCI_TEGRA_CQE_TRNS_MODE	(SDHCI_TRNS_MULTI | \
++					 SDHCI_TRNS_BLK_CNT_EN | \
++					 SDHCI_TRNS_DMA)
++
+ struct sdhci_tegra_soc_data {
+ 	const struct sdhci_pltfm_data *pdata;
+ 	u64 dma_mask;
+@@ -1156,6 +1160,7 @@ static void tegra_sdhci_voltage_switch(struct sdhci_host *host)
+ static void tegra_cqhci_writel(struct cqhci_host *cq_host, u32 val, int reg)
+ {
+ 	struct mmc_host *mmc = cq_host->mmc;
++	struct sdhci_host *host = mmc_priv(mmc);
+ 	u8 ctrl;
+ 	ktime_t timeout;
+ 	bool timed_out;
+@@ -1170,6 +1175,7 @@ static void tegra_cqhci_writel(struct cqhci_host *cq_host, u32 val, int reg)
+ 	 */
+ 	if (reg == CQHCI_CTL && !(val & CQHCI_HALT) &&
+ 	    cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT) {
++		sdhci_writew(host, SDHCI_TEGRA_CQE_TRNS_MODE, SDHCI_TRANSFER_MODE);
+ 		sdhci_cqe_enable(mmc);
+ 		writel(val, cq_host->mmio + reg);
+ 		timeout = ktime_add_us(ktime_get(), 50);
+@@ -1205,6 +1211,7 @@ static void sdhci_tegra_update_dcmd_desc(struct mmc_host *mmc,
+ static void sdhci_tegra_cqe_enable(struct mmc_host *mmc)
+ {
+ 	struct cqhci_host *cq_host = mmc->cqe_private;
++	struct sdhci_host *host = mmc_priv(mmc);
+ 	u32 val;
+ 
+ 	/*
+@@ -1218,6 +1225,7 @@ static void sdhci_tegra_cqe_enable(struct mmc_host *mmc)
+ 		if (val & CQHCI_ENABLE)
+ 			cqhci_writel(cq_host, (val & ~CQHCI_ENABLE),
+ 				     CQHCI_CFG);
++		sdhci_writew(host, SDHCI_TEGRA_CQE_TRNS_MODE, SDHCI_TRANSFER_MODE);
+ 		sdhci_cqe_enable(mmc);
+ 		if (val & CQHCI_ENABLE)
+ 			cqhci_writel(cq_host, val, CQHCI_CFG);
+@@ -1281,12 +1289,36 @@ static void tegra_sdhci_set_timeout(struct sdhci_host *host,
+ 	__sdhci_set_timeout(host, cmd);
+ }
+ 
++static void sdhci_tegra_cqe_pre_enable(struct mmc_host *mmc)
++{
++	struct cqhci_host *cq_host = mmc->cqe_private;
++	u32 reg;
++
++	reg = cqhci_readl(cq_host, CQHCI_CFG);
++	reg |= CQHCI_ENABLE;
++	cqhci_writel(cq_host, reg, CQHCI_CFG);
++}
++
++static void sdhci_tegra_cqe_post_disable(struct mmc_host *mmc)
++{
++	struct cqhci_host *cq_host = mmc->cqe_private;
++	struct sdhci_host *host = mmc_priv(mmc);
++	u32 reg;
++
++	reg = cqhci_readl(cq_host, CQHCI_CFG);
++	reg &= ~CQHCI_ENABLE;
++	cqhci_writel(cq_host, reg, CQHCI_CFG);
++	sdhci_writew(host, 0x0, SDHCI_TRANSFER_MODE);
++}
++
+ static const struct cqhci_host_ops sdhci_tegra_cqhci_ops = {
+ 	.write_l    = tegra_cqhci_writel,
+ 	.enable	= sdhci_tegra_cqe_enable,
+ 	.disable = sdhci_cqe_disable,
+ 	.dumpregs = sdhci_tegra_dumpregs,
+ 	.update_dcmd_desc = sdhci_tegra_update_dcmd_desc,
++	.pre_enable = sdhci_tegra_cqe_pre_enable,
++	.post_disable = sdhci_tegra_cqe_post_disable,
+ };
+ 
+ static int tegra_sdhci_set_dma_mask(struct sdhci_host *host)
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 6edf9fffd934a..58c977d581e7c 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -2997,6 +2997,37 @@ static bool sdhci_request_done(struct sdhci_host *host)
+ 		return true;
+ 	}
+ 
++	/*
++	 * The controller needs a reset of internal state machines
++	 * upon error conditions.
++	 */
++	if (sdhci_needs_reset(host, mrq)) {
++		/*
++		 * Do not finish until command and data lines are available for
++		 * reset. Note there can only be one other mrq, so it cannot
++		 * also be in mrqs_done, otherwise host->cmd and host->data_cmd
++		 * would both be null.
++		 */
++		if (host->cmd || host->data_cmd) {
++			spin_unlock_irqrestore(&host->lock, flags);
++			return true;
++		}
++
++		/* Some controllers need this kick or reset won't work here */
++		if (host->quirks & SDHCI_QUIRK_CLOCK_BEFORE_RESET)
++			/* This is to force an update */
++			host->ops->set_clock(host, host->clock);
++
++		/*
++		 * Spec says we should do both at the same time, but Ricoh
++		 * controllers do not like that.
++		 */
++		sdhci_do_reset(host, SDHCI_RESET_CMD);
++		sdhci_do_reset(host, SDHCI_RESET_DATA);
++
++		host->pending_reset = false;
++	}
++
+ 	/*
+ 	 * Always unmap the data buffers if they were mapped by
+ 	 * sdhci_prepare_data() whenever we finish with a request.
+@@ -3060,35 +3091,6 @@ static bool sdhci_request_done(struct sdhci_host *host)
+ 		}
+ 	}
+ 
+-	/*
+-	 * The controller needs a reset of internal state machines
+-	 * upon error conditions.
+-	 */
+-	if (sdhci_needs_reset(host, mrq)) {
+-		/*
+-		 * Do not finish until command and data lines are available for
+-		 * reset. Note there can only be one other mrq, so it cannot
+-		 * also be in mrqs_done, otherwise host->cmd and host->data_cmd
+-		 * would both be null.
+-		 */
+-		if (host->cmd || host->data_cmd) {
+-			spin_unlock_irqrestore(&host->lock, flags);
+-			return true;
+-		}
+-
+-		/* Some controllers need this kick or reset won't work here */
+-		if (host->quirks & SDHCI_QUIRK_CLOCK_BEFORE_RESET)
+-			/* This is to force an update */
+-			host->ops->set_clock(host, host->clock);
+-
+-		/* Spec says we should do both at the same time, but Ricoh
+-		   controllers do not like that. */
+-		sdhci_do_reset(host, SDHCI_RESET_CMD);
+-		sdhci_do_reset(host, SDHCI_RESET_DATA);
+-
+-		host->pending_reset = false;
+-	}
+-
+ 	host->mrqs_done[i] = NULL;
+ 
+ 	spin_unlock_irqrestore(&host->lock, flags);
+diff --git a/drivers/mmc/host/uniphier-sd.c b/drivers/mmc/host/uniphier-sd.c
+index 3092466a99abf..196e94bf37f0d 100644
+--- a/drivers/mmc/host/uniphier-sd.c
++++ b/drivers/mmc/host/uniphier-sd.c
+@@ -636,7 +636,7 @@ static int uniphier_sd_probe(struct platform_device *pdev)
+ 
+ 	ret = tmio_mmc_host_probe(host);
+ 	if (ret)
+-		goto free_host;
++		goto disable_clk;
+ 
+ 	ret = devm_request_irq(dev, irq, tmio_mmc_irq, IRQF_SHARED,
+ 			       dev_name(dev), host);
+@@ -647,6 +647,8 @@ static int uniphier_sd_probe(struct platform_device *pdev)
+ 
+ remove_host:
+ 	tmio_mmc_host_remove(host);
++disable_clk:
++	uniphier_sd_clk_disable(host);
+ free_host:
+ 	tmio_mmc_host_free(host);
+ 
+@@ -659,6 +661,7 @@ static int uniphier_sd_remove(struct platform_device *pdev)
+ 
+ 	tmio_mmc_host_remove(host);
+ 	uniphier_sd_clk_disable(host);
++	tmio_mmc_host_free(host);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mtd/maps/physmap-bt1-rom.c b/drivers/mtd/maps/physmap-bt1-rom.c
+index 27cfe1c63a2e9..d68ae75e19a0e 100644
+--- a/drivers/mtd/maps/physmap-bt1-rom.c
++++ b/drivers/mtd/maps/physmap-bt1-rom.c
+@@ -79,7 +79,7 @@ static void __xipram bt1_rom_map_copy_from(struct map_info *map,
+ 	if (shift) {
+ 		chunk = min_t(ssize_t, 4 - shift, len);
+ 		data = readl_relaxed(src - shift);
+-		memcpy(to, &data + shift, chunk);
++		memcpy(to, (char *)&data + shift, chunk);
+ 		src += chunk;
+ 		to += chunk;
+ 		len -= chunk;
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index e6ceec8f50dce..8aab1017b4600 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -883,10 +883,12 @@ static int atmel_nand_pmecc_correct_data(struct nand_chip *chip, void *buf,
+ 							  NULL, 0,
+ 							  chip->ecc.strength);
+ 
+-		if (ret >= 0)
++		if (ret >= 0) {
++			mtd->ecc_stats.corrected += ret;
+ 			max_bitflips = max(ret, max_bitflips);
+-		else
++		} else {
+ 			mtd->ecc_stats.failed++;
++		}
+ 
+ 		databuf += chip->ecc.size;
+ 		eccbuf += chip->ecc.bytes;
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index c352217946455..558d8a14810b6 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -1173,12 +1173,14 @@ static const struct spi_device_id spinand_ids[] = {
+ 	{ .name = "spi-nand" },
+ 	{ /* sentinel */ },
+ };
++MODULE_DEVICE_TABLE(spi, spinand_ids);
+ 
+ #ifdef CONFIG_OF
+ static const struct of_device_id spinand_of_ids[] = {
+ 	{ .compatible = "spi-nand" },
+ 	{ /* sentinel */ },
+ };
++MODULE_DEVICE_TABLE(of, spinand_of_ids);
+ #endif
+ 
+ static struct spi_mem_driver spinand_drv = {
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 06e1bf01fd920..2b26a875a8550 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -2981,6 +2981,37 @@ static void spi_nor_resume(struct mtd_info *mtd)
+ 		dev_err(dev, "resume() failed\n");
+ }
+ 
++static int spi_nor_get_device(struct mtd_info *mtd)
++{
++	struct mtd_info *master = mtd_get_master(mtd);
++	struct spi_nor *nor = mtd_to_spi_nor(master);
++	struct device *dev;
++
++	if (nor->spimem)
++		dev = nor->spimem->spi->controller->dev.parent;
++	else
++		dev = nor->dev;
++
++	if (!try_module_get(dev->driver->owner))
++		return -ENODEV;
++
++	return 0;
++}
++
++static void spi_nor_put_device(struct mtd_info *mtd)
++{
++	struct mtd_info *master = mtd_get_master(mtd);
++	struct spi_nor *nor = mtd_to_spi_nor(master);
++	struct device *dev;
++
++	if (nor->spimem)
++		dev = nor->spimem->spi->controller->dev.parent;
++	else
++		dev = nor->dev;
++
++	module_put(dev->driver->owner);
++}
++
+ void spi_nor_restore(struct spi_nor *nor)
+ {
+ 	/* restore the addressing mode */
+@@ -3157,6 +3188,8 @@ int spi_nor_scan(struct spi_nor *nor, const char *name,
+ 	mtd->_erase = spi_nor_erase;
+ 	mtd->_read = spi_nor_read;
+ 	mtd->_resume = spi_nor_resume;
++	mtd->_get_device = spi_nor_get_device;
++	mtd->_put_device = spi_nor_put_device;
+ 
+ 	if (nor->params->locking_ops) {
+ 		mtd->_lock = spi_nor_lock;
+diff --git a/drivers/mtd/spi-nor/macronix.c b/drivers/mtd/spi-nor/macronix.c
+index 9203abaac2297..662b212787d4d 100644
+--- a/drivers/mtd/spi-nor/macronix.c
++++ b/drivers/mtd/spi-nor/macronix.c
+@@ -73,9 +73,6 @@ static const struct flash_info macronix_parts[] = {
+ 			      SECT_4K | SPI_NOR_DUAL_READ |
+ 			      SPI_NOR_QUAD_READ) },
+ 	{ "mx25l25655e", INFO(0xc22619, 0, 64 * 1024, 512, 0) },
+-	{ "mx25l51245g", INFO(0xc2201a, 0, 64 * 1024, 1024,
+-			      SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
+-			      SPI_NOR_4B_OPCODES) },
+ 	{ "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024,
+ 			      SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
+ 			      SPI_NOR_4B_OPCODES) },
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
+index 47eb751a2570a..ee308d9aedcdc 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
+@@ -535,6 +535,16 @@ mlxsw_sp_mr_route_evif_resolve(struct mlxsw_sp_mr_table *mr_table,
+ 	u16 erif_index = 0;
+ 	int err;
+ 
++	/* Add the eRIF */
++	if (mlxsw_sp_mr_vif_valid(rve->mr_vif)) {
++		erif_index = mlxsw_sp_rif_index(rve->mr_vif->rif);
++		err = mr->mr_ops->route_erif_add(mlxsw_sp,
++						 rve->mr_route->route_priv,
++						 erif_index);
++		if (err)
++			return err;
++	}
++
+ 	/* Update the route action, as the new eVIF can be a tunnel or a pimreg
+ 	 * device which will require updating the action.
+ 	 */
+@@ -544,17 +554,7 @@ mlxsw_sp_mr_route_evif_resolve(struct mlxsw_sp_mr_table *mr_table,
+ 						      rve->mr_route->route_priv,
+ 						      route_action);
+ 		if (err)
+-			return err;
+-	}
+-
+-	/* Add the eRIF */
+-	if (mlxsw_sp_mr_vif_valid(rve->mr_vif)) {
+-		erif_index = mlxsw_sp_rif_index(rve->mr_vif->rif);
+-		err = mr->mr_ops->route_erif_add(mlxsw_sp,
+-						 rve->mr_route->route_priv,
+-						 erif_index);
+-		if (err)
+-			goto err_route_erif_add;
++			goto err_route_action_update;
+ 	}
+ 
+ 	/* Update the minimum MTU */
+@@ -572,14 +572,14 @@ mlxsw_sp_mr_route_evif_resolve(struct mlxsw_sp_mr_table *mr_table,
+ 	return 0;
+ 
+ err_route_min_mtu_update:
+-	if (mlxsw_sp_mr_vif_valid(rve->mr_vif))
+-		mr->mr_ops->route_erif_del(mlxsw_sp, rve->mr_route->route_priv,
+-					   erif_index);
+-err_route_erif_add:
+ 	if (route_action != rve->mr_route->route_action)
+ 		mr->mr_ops->route_action_update(mlxsw_sp,
+ 						rve->mr_route->route_priv,
+ 						rve->mr_route->route_action);
++err_route_action_update:
++	if (mlxsw_sp_mr_vif_valid(rve->mr_vif))
++		mr->mr_ops->route_erif_del(mlxsw_sp, rve->mr_route->route_priv,
++					   erif_index);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/sfc/farch.c b/drivers/net/ethernet/sfc/farch.c
+index d75cf5ff5686a..49df02ecee912 100644
+--- a/drivers/net/ethernet/sfc/farch.c
++++ b/drivers/net/ethernet/sfc/farch.c
+@@ -835,14 +835,14 @@ efx_farch_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
+ 		/* Transmit completion */
+ 		tx_ev_desc_ptr = EFX_QWORD_FIELD(*event, FSF_AZ_TX_EV_DESC_PTR);
+ 		tx_ev_q_label = EFX_QWORD_FIELD(*event, FSF_AZ_TX_EV_Q_LABEL);
+-		tx_queue = efx_channel_get_tx_queue(
+-			channel, tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
++		tx_queue = channel->tx_queue +
++				(tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
+ 		efx_xmit_done(tx_queue, tx_ev_desc_ptr);
+ 	} else if (EFX_QWORD_FIELD(*event, FSF_AZ_TX_EV_WQ_FF_FULL)) {
+ 		/* Rewrite the FIFO write pointer */
+ 		tx_ev_q_label = EFX_QWORD_FIELD(*event, FSF_AZ_TX_EV_Q_LABEL);
+-		tx_queue = efx_channel_get_tx_queue(
+-			channel, tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
++		tx_queue = channel->tx_queue +
++				(tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
+ 
+ 		netif_tx_lock(efx->net_dev);
+ 		efx_farch_notify_tx_desc(tx_queue);
+@@ -1081,16 +1081,16 @@ static void
+ efx_farch_handle_tx_flush_done(struct efx_nic *efx, efx_qword_t *event)
+ {
+ 	struct efx_tx_queue *tx_queue;
++	struct efx_channel *channel;
+ 	int qid;
+ 
+ 	qid = EFX_QWORD_FIELD(*event, FSF_AZ_DRIVER_EV_SUBDATA);
+ 	if (qid < EFX_MAX_TXQ_PER_CHANNEL * (efx->n_tx_channels + efx->n_extra_tx_channels)) {
+-		tx_queue = efx_get_tx_queue(efx, qid / EFX_MAX_TXQ_PER_CHANNEL,
+-					    qid % EFX_MAX_TXQ_PER_CHANNEL);
+-		if (atomic_cmpxchg(&tx_queue->flush_outstanding, 1, 0)) {
++		channel = efx_get_tx_channel(efx, qid / EFX_MAX_TXQ_PER_CHANNEL);
++		tx_queue = channel->tx_queue + (qid % EFX_MAX_TXQ_PER_CHANNEL);
++		if (atomic_cmpxchg(&tx_queue->flush_outstanding, 1, 0))
+ 			efx_farch_magic_event(tx_queue->channel,
+ 					      EFX_CHANNEL_MAGIC_TX_DRAIN(tx_queue));
+-		}
+ 	}
+ }
+ 
+diff --git a/drivers/net/wimax/i2400m/op-rfkill.c b/drivers/net/wimax/i2400m/op-rfkill.c
+index 5c79f052cad20..34f81f16b5a0c 100644
+--- a/drivers/net/wimax/i2400m/op-rfkill.c
++++ b/drivers/net/wimax/i2400m/op-rfkill.c
+@@ -86,7 +86,7 @@ int i2400m_op_rfkill_sw_toggle(struct wimax_dev *wimax_dev,
+ 	if (cmd == NULL)
+ 		goto error_alloc;
+ 	cmd->hdr.type = cpu_to_le16(I2400M_MT_CMD_RF_CONTROL);
+-	cmd->hdr.length = sizeof(cmd->sw_rf);
++	cmd->hdr.length = cpu_to_le16(sizeof(cmd->sw_rf));
+ 	cmd->hdr.version = cpu_to_le16(I2400M_L3L4_VERSION);
+ 	cmd->sw_rf.hdr.type = cpu_to_le16(I2400M_TLV_RF_OPERATION);
+ 	cmd->sw_rf.hdr.length = cpu_to_le16(sizeof(cmd->sw_rf.status));
+diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio.c b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+index 592e9dadcb556..3a243c5326471 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_sdio.c
++++ b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+@@ -1513,7 +1513,7 @@ static int rsi_restore(struct device *dev)
+ }
+ static const struct dev_pm_ops rsi_pm_ops = {
+ 	.suspend = rsi_suspend,
+-	.resume = rsi_resume,
++	.resume_noirq = rsi_resume,
+ 	.freeze = rsi_freeze,
+ 	.thaw = rsi_thaw,
+ 	.restore = rsi_restore,
+diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c
+index f40c05c33c3ab..5b8ee824b100a 100644
+--- a/drivers/nvme/target/discovery.c
++++ b/drivers/nvme/target/discovery.c
+@@ -177,12 +177,14 @@ static void nvmet_execute_disc_get_log_page(struct nvmet_req *req)
+ 	if (req->cmd->get_log_page.lid != NVME_LOG_DISC) {
+ 		req->error_loc =
+ 			offsetof(struct nvme_get_log_page_command, lid);
+-		status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
+ 		goto out;
+ 	}
+ 
+ 	/* Spec requires dword aligned offsets */
+ 	if (offset & 0x3) {
++		req->error_loc =
++			offsetof(struct nvme_get_log_page_command, lpo);
+ 		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
+ 		goto out;
+ 	}
+@@ -249,7 +251,7 @@ static void nvmet_execute_disc_identify(struct nvmet_req *req)
+ 
+ 	if (req->cmd->identify.cns != NVME_ID_CNS_CTRL) {
+ 		req->error_loc = offsetof(struct nvme_identify, cns);
+-		status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 9e971fffeb6a3..d5d9ea864fe66 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1874,20 +1874,10 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
+ 	int err;
+ 	int i, bars = 0;
+ 
+-	/*
+-	 * Power state could be unknown at this point, either due to a fresh
+-	 * boot or a device removal call.  So get the current power state
+-	 * so that things like MSI message writing will behave as expected
+-	 * (e.g. if the device really is in D0 at enable time).
+-	 */
+-	if (dev->pm_cap) {
+-		u16 pmcsr;
+-		pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
+-		dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK);
+-	}
+-
+-	if (atomic_inc_return(&dev->enable_cnt) > 1)
++	if (atomic_inc_return(&dev->enable_cnt) > 1) {
++		pci_update_current_state(dev, dev->current_state);
+ 		return 0;		/* already enabled */
++	}
+ 
+ 	bridge = pci_upstream_bridge(dev);
+ 	if (bridge)
+diff --git a/drivers/perf/arm_pmu_platform.c b/drivers/perf/arm_pmu_platform.c
+index 933bd8410fc2a..ef9676418c9f4 100644
+--- a/drivers/perf/arm_pmu_platform.c
++++ b/drivers/perf/arm_pmu_platform.c
+@@ -6,6 +6,7 @@
+  * Copyright (C) 2010 ARM Ltd., Will Deacon <will.deacon@arm.com>
+  */
+ #define pr_fmt(fmt) "hw perfevents: " fmt
++#define dev_fmt pr_fmt
+ 
+ #include <linux/bug.h>
+ #include <linux/cpumask.h>
+@@ -100,10 +101,8 @@ static int pmu_parse_irqs(struct arm_pmu *pmu)
+ 	struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
+ 
+ 	num_irqs = platform_irq_count(pdev);
+-	if (num_irqs < 0) {
+-		pr_err("unable to count PMU IRQs\n");
+-		return num_irqs;
+-	}
++	if (num_irqs < 0)
++		return dev_err_probe(&pdev->dev, num_irqs, "unable to count PMU IRQs\n");
+ 
+ 	/*
+ 	 * In this case we have no idea which CPUs are covered by the PMU.
+@@ -236,7 +235,7 @@ int arm_pmu_device_probe(struct platform_device *pdev,
+ 
+ 	ret = armpmu_register(pmu);
+ 	if (ret)
+-		goto out_free;
++		goto out_free_irqs;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/phy/ti/phy-twl4030-usb.c b/drivers/phy/ti/phy-twl4030-usb.c
+index 9887f908f5401..812e5409d3595 100644
+--- a/drivers/phy/ti/phy-twl4030-usb.c
++++ b/drivers/phy/ti/phy-twl4030-usb.c
+@@ -779,7 +779,7 @@ static int twl4030_usb_remove(struct platform_device *pdev)
+ 
+ 	usb_remove_phy(&twl->phy);
+ 	pm_runtime_get_sync(twl->dev);
+-	cancel_delayed_work(&twl->id_workaround_work);
++	cancel_delayed_work_sync(&twl->id_workaround_work);
+ 	device_remove_file(twl->dev, &dev_attr_vbus);
+ 
+ 	/* set transceiver mode to power on defaults */
+diff --git a/drivers/platform/x86/intel_pmc_core.c b/drivers/platform/x86/intel_pmc_core.c
+index e06b36e87a33f..ca32a4c80f62c 100644
+--- a/drivers/platform/x86/intel_pmc_core.c
++++ b/drivers/platform/x86/intel_pmc_core.c
+@@ -1186,9 +1186,15 @@ static const struct pci_device_id pmc_pci_ids[] = {
+  * the platform BIOS enforces 24Mhz crystal to shutdown
+  * before PMC can assert SLP_S0#.
+  */
++static bool xtal_ignore;
+ static int quirk_xtal_ignore(const struct dmi_system_id *id)
+ {
+-	struct pmc_dev *pmcdev = &pmc;
++	xtal_ignore = true;
++	return 0;
++}
++
++static void pmc_core_xtal_ignore(struct pmc_dev *pmcdev)
++{
+ 	u32 value;
+ 
+ 	value = pmc_core_reg_read(pmcdev, pmcdev->map->pm_vric1_offset);
+@@ -1197,7 +1203,6 @@ static int quirk_xtal_ignore(const struct dmi_system_id *id)
+ 	/* Low Voltage Mode Enable */
+ 	value &= ~SPT_PMC_VRIC1_SLPS0LVEN;
+ 	pmc_core_reg_write(pmcdev, pmcdev->map->pm_vric1_offset, value);
+-	return 0;
+ }
+ 
+ static const struct dmi_system_id pmc_core_dmi_table[]  = {
+@@ -1212,6 +1217,14 @@ static const struct dmi_system_id pmc_core_dmi_table[]  = {
+ 	{}
+ };
+ 
++static void pmc_core_do_dmi_quirks(struct pmc_dev *pmcdev)
++{
++	dmi_check_system(pmc_core_dmi_table);
++
++	if (xtal_ignore)
++		pmc_core_xtal_ignore(pmcdev);
++}
++
+ static int pmc_core_probe(struct platform_device *pdev)
+ {
+ 	static bool device_initialized;
+@@ -1253,7 +1266,7 @@ static int pmc_core_probe(struct platform_device *pdev)
+ 	mutex_init(&pmcdev->lock);
+ 	platform_set_drvdata(pdev, pmcdev);
+ 	pmcdev->pmc_xram_read_bit = pmc_core_check_read_lock_bit();
+-	dmi_check_system(pmc_core_dmi_table);
++	pmc_core_do_dmi_quirks(pmcdev);
+ 
+ 	/*
+ 	 * On TGL, due to a hardware limitation, the GBE LTR blocks PC10 when
+diff --git a/drivers/platform/x86/intel_speed_select_if/isst_if_mbox_pci.c b/drivers/platform/x86/intel_speed_select_if/isst_if_mbox_pci.c
+index 95f01e7a87d57..da958aa8468d0 100644
+--- a/drivers/platform/x86/intel_speed_select_if/isst_if_mbox_pci.c
++++ b/drivers/platform/x86/intel_speed_select_if/isst_if_mbox_pci.c
+@@ -21,12 +21,16 @@
+ #define PUNIT_MAILBOX_BUSY_BIT		31
+ 
+ /*
+- * The average time to complete some commands is about 40us. The current
+- * count is enough to satisfy 40us. But when the firmware is very busy, this
+- * causes timeout occasionally.  So increase to deal with some worst case
+- * scenarios. Most of the command still complete in few us.
++ * The average time to complete mailbox commands is less than 40us. Most of
++ * the commands complete in few micro seconds. But the same firmware handles
++ * requests from all power management features.
++ * We can create a scenario where we flood the firmware with requests then
++ * the mailbox response can be delayed for 100s of micro seconds. So define
++ * two timeouts. One for average case and one for long.
++ * If the firmware is taking more than average, just call cond_resched().
+  */
+-#define OS_MAILBOX_RETRY_COUNT		100
++#define OS_MAILBOX_TIMEOUT_AVG_US	40
++#define OS_MAILBOX_TIMEOUT_MAX_US	1000
+ 
+ struct isst_if_device {
+ 	struct mutex mutex;
+@@ -35,11 +39,13 @@ struct isst_if_device {
+ static int isst_if_mbox_cmd(struct pci_dev *pdev,
+ 			    struct isst_if_mbox_cmd *mbox_cmd)
+ {
+-	u32 retries, data;
++	s64 tm_delta = 0;
++	ktime_t tm;
++	u32 data;
+ 	int ret;
+ 
+ 	/* Poll for rb bit == 0 */
+-	retries = OS_MAILBOX_RETRY_COUNT;
++	tm = ktime_get();
+ 	do {
+ 		ret = pci_read_config_dword(pdev, PUNIT_MAILBOX_INTERFACE,
+ 					    &data);
+@@ -48,11 +54,14 @@ static int isst_if_mbox_cmd(struct pci_dev *pdev,
+ 
+ 		if (data & BIT_ULL(PUNIT_MAILBOX_BUSY_BIT)) {
+ 			ret = -EBUSY;
++			tm_delta = ktime_us_delta(ktime_get(), tm);
++			if (tm_delta > OS_MAILBOX_TIMEOUT_AVG_US)
++				cond_resched();
+ 			continue;
+ 		}
+ 		ret = 0;
+ 		break;
+-	} while (--retries);
++	} while (tm_delta < OS_MAILBOX_TIMEOUT_MAX_US);
+ 
+ 	if (ret)
+ 		return ret;
+@@ -74,7 +83,8 @@ static int isst_if_mbox_cmd(struct pci_dev *pdev,
+ 		return ret;
+ 
+ 	/* Poll for rb bit == 0 */
+-	retries = OS_MAILBOX_RETRY_COUNT;
++	tm_delta = 0;
++	tm = ktime_get();
+ 	do {
+ 		ret = pci_read_config_dword(pdev, PUNIT_MAILBOX_INTERFACE,
+ 					    &data);
+@@ -83,6 +93,9 @@ static int isst_if_mbox_cmd(struct pci_dev *pdev,
+ 
+ 		if (data & BIT_ULL(PUNIT_MAILBOX_BUSY_BIT)) {
+ 			ret = -EBUSY;
++			tm_delta = ktime_us_delta(ktime_get(), tm);
++			if (tm_delta > OS_MAILBOX_TIMEOUT_AVG_US)
++				cond_resched();
+ 			continue;
+ 		}
+ 
+@@ -96,7 +109,7 @@ static int isst_if_mbox_cmd(struct pci_dev *pdev,
+ 		mbox_cmd->resp_data = data;
+ 		ret = 0;
+ 		break;
+-	} while (--retries);
++	} while (tm_delta < OS_MAILBOX_TIMEOUT_MAX_US);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 315e0909e6a48..72a2bcf3ab32b 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -1631,27 +1631,6 @@ static int bq27xxx_battery_read_time(struct bq27xxx_device_info *di, u8 reg)
+ 	return tval * 60;
+ }
+ 
+-/*
+- * Read an average power register.
+- * Return < 0 if something fails.
+- */
+-static int bq27xxx_battery_read_pwr_avg(struct bq27xxx_device_info *di)
+-{
+-	int tval;
+-
+-	tval = bq27xxx_read(di, BQ27XXX_REG_AP, false);
+-	if (tval < 0) {
+-		dev_err(di->dev, "error reading average power register  %02x: %d\n",
+-			BQ27XXX_REG_AP, tval);
+-		return tval;
+-	}
+-
+-	if (di->opts & BQ27XXX_O_ZERO)
+-		return (tval * BQ27XXX_POWER_CONSTANT) / BQ27XXX_RS;
+-	else
+-		return tval;
+-}
+-
+ /*
+  * Returns true if a battery over temperature condition is detected
+  */
+@@ -1739,8 +1718,6 @@ void bq27xxx_battery_update(struct bq27xxx_device_info *di)
+ 		}
+ 		if (di->regs[BQ27XXX_REG_CYCT] != INVALID_REG_ADDR)
+ 			cache.cycle_count = bq27xxx_battery_read_cyct(di);
+-		if (di->regs[BQ27XXX_REG_AP] != INVALID_REG_ADDR)
+-			cache.power_avg = bq27xxx_battery_read_pwr_avg(di);
+ 
+ 		/* We only have to read charge design full once */
+ 		if (di->charge_design_full <= 0)
+@@ -1803,6 +1780,32 @@ static int bq27xxx_battery_current(struct bq27xxx_device_info *di,
+ 	return 0;
+ }
+ 
++/*
++ * Get the average power in µW
++ * Return < 0 if something fails.
++ */
++static int bq27xxx_battery_pwr_avg(struct bq27xxx_device_info *di,
++				   union power_supply_propval *val)
++{
++	int power;
++
++	power = bq27xxx_read(di, BQ27XXX_REG_AP, false);
++	if (power < 0) {
++		dev_err(di->dev,
++			"error reading average power register %02x: %d\n",
++			BQ27XXX_REG_AP, power);
++		return power;
++	}
++
++	if (di->opts & BQ27XXX_O_ZERO)
++		val->intval = (power * BQ27XXX_POWER_CONSTANT) / BQ27XXX_RS;
++	else
++		/* Other gauges return a signed value in units of 10mW */
++		val->intval = (int)((s16)power) * 10000;
++
++	return 0;
++}
++
+ static int bq27xxx_battery_status(struct bq27xxx_device_info *di,
+ 				  union power_supply_propval *val)
+ {
+@@ -1987,7 +1990,7 @@ static int bq27xxx_battery_get_property(struct power_supply *psy,
+ 		ret = bq27xxx_simple_value(di->cache.energy, val);
+ 		break;
+ 	case POWER_SUPPLY_PROP_POWER_AVG:
+-		ret = bq27xxx_simple_value(di->cache.power_avg, val);
++		ret = bq27xxx_battery_pwr_avg(di, val);
+ 		break;
+ 	case POWER_SUPPLY_PROP_HEALTH:
+ 		ret = bq27xxx_simple_value(di->cache.health, val);
+diff --git a/drivers/power/supply/cpcap-battery.c b/drivers/power/supply/cpcap-battery.c
+index cebc5c8fda1b5..793d4ca52f8a1 100644
+--- a/drivers/power/supply/cpcap-battery.c
++++ b/drivers/power/supply/cpcap-battery.c
+@@ -626,7 +626,7 @@ static irqreturn_t cpcap_battery_irq_thread(int irq, void *data)
+ 			break;
+ 	}
+ 
+-	if (!d)
++	if (list_entry_is_head(d, &ddata->irq_list, node))
+ 		return IRQ_NONE;
+ 
+ 	latest = cpcap_battery_latest(ddata);
+diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c
+index 22fff01425d63..891e1eb8e39d5 100644
+--- a/drivers/power/supply/cpcap-charger.c
++++ b/drivers/power/supply/cpcap-charger.c
+@@ -633,6 +633,9 @@ static void cpcap_usb_detect(struct work_struct *work)
+ 		return;
+ 	}
+ 
++	/* Delay for 80ms to avoid vbus bouncing when usb cable is plugged in */
++	usleep_range(80000, 120000);
++
+ 	/* Throttle chrgcurr2 interrupt for charger done and retry */
+ 	switch (ddata->state) {
+ 	case CPCAP_CHARGER_CHARGING:
+diff --git a/drivers/power/supply/generic-adc-battery.c b/drivers/power/supply/generic-adc-battery.c
+index caa829738ef79..58f09314741a7 100644
+--- a/drivers/power/supply/generic-adc-battery.c
++++ b/drivers/power/supply/generic-adc-battery.c
+@@ -382,7 +382,7 @@ static int gab_remove(struct platform_device *pdev)
+ 	}
+ 
+ 	kfree(adc_bat->psy_desc.properties);
+-	cancel_delayed_work(&adc_bat->bat_work);
++	cancel_delayed_work_sync(&adc_bat->bat_work);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/power/supply/lp8788-charger.c b/drivers/power/supply/lp8788-charger.c
+index e7931ffb7151d..397e5a03b7d9a 100644
+--- a/drivers/power/supply/lp8788-charger.c
++++ b/drivers/power/supply/lp8788-charger.c
+@@ -501,7 +501,7 @@ static int lp8788_set_irqs(struct platform_device *pdev,
+ 
+ 		ret = request_threaded_irq(virq, NULL,
+ 					lp8788_charger_irq_thread,
+-					0, name, pchg);
++					IRQF_ONESHOT, name, pchg);
+ 		if (ret)
+ 			break;
+ 	}
+diff --git a/drivers/power/supply/pm2301_charger.c b/drivers/power/supply/pm2301_charger.c
+index 2df6a2459d1f1..34f168f62178d 100644
+--- a/drivers/power/supply/pm2301_charger.c
++++ b/drivers/power/supply/pm2301_charger.c
+@@ -1090,7 +1090,7 @@ static int pm2xxx_wall_charger_probe(struct i2c_client *i2c_client,
+ 	ret = request_threaded_irq(gpio_to_irq(pm2->pdata->gpio_irq_number),
+ 				NULL,
+ 				pm2xxx_charger_irq[0].isr,
+-				pm2->pdata->irq_type,
++				pm2->pdata->irq_type | IRQF_ONESHOT,
+ 				pm2xxx_charger_irq[0].name, pm2);
+ 
+ 	if (ret != 0) {
+diff --git a/drivers/power/supply/s3c_adc_battery.c b/drivers/power/supply/s3c_adc_battery.c
+index 60b7f41ab0631..ff46bcf5db010 100644
+--- a/drivers/power/supply/s3c_adc_battery.c
++++ b/drivers/power/supply/s3c_adc_battery.c
+@@ -394,7 +394,7 @@ static int s3c_adc_bat_remove(struct platform_device *pdev)
+ 		gpio_free(pdata->gpio_charge_finished);
+ 	}
+ 
+-	cancel_delayed_work(&bat_work);
++	cancel_delayed_work_sync(&bat_work);
+ 
+ 	if (pdata->exit)
+ 		pdata->exit();
+diff --git a/drivers/power/supply/tps65090-charger.c b/drivers/power/supply/tps65090-charger.c
+index 6b0098e5a88b5..0990b2fa6cd8d 100644
+--- a/drivers/power/supply/tps65090-charger.c
++++ b/drivers/power/supply/tps65090-charger.c
+@@ -301,7 +301,7 @@ static int tps65090_charger_probe(struct platform_device *pdev)
+ 
+ 	if (irq != -ENXIO) {
+ 		ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
+-			tps65090_charger_isr, 0, "tps65090-charger", cdata);
++			tps65090_charger_isr, IRQF_ONESHOT, "tps65090-charger", cdata);
+ 		if (ret) {
+ 			dev_err(cdata->dev,
+ 				"Unable to register irq %d err %d\n", irq,
+diff --git a/drivers/power/supply/tps65217_charger.c b/drivers/power/supply/tps65217_charger.c
+index 814c2b81fdfec..ba33d1617e0b6 100644
+--- a/drivers/power/supply/tps65217_charger.c
++++ b/drivers/power/supply/tps65217_charger.c
+@@ -238,7 +238,7 @@ static int tps65217_charger_probe(struct platform_device *pdev)
+ 	for (i = 0; i < NUM_CHARGER_IRQS; i++) {
+ 		ret = devm_request_threaded_irq(&pdev->dev, irq[i], NULL,
+ 						tps65217_charger_irq,
+-						0, "tps65217-charger",
++						IRQF_ONESHOT, "tps65217-charger",
+ 						charger);
+ 		if (ret) {
+ 			dev_err(charger->dev,
+diff --git a/drivers/s390/crypto/zcrypt_card.c b/drivers/s390/crypto/zcrypt_card.c
+index 33b23884b133f..09fe6bb8880bc 100644
+--- a/drivers/s390/crypto/zcrypt_card.c
++++ b/drivers/s390/crypto/zcrypt_card.c
+@@ -192,5 +192,6 @@ void zcrypt_card_unregister(struct zcrypt_card *zc)
+ 	spin_unlock(&zcrypt_list_lock);
+ 	sysfs_remove_group(&zc->card->ap_dev.device.kobj,
+ 			   &zcrypt_card_attr_group);
++	zcrypt_card_put(zc);
+ }
+ EXPORT_SYMBOL(zcrypt_card_unregister);
+diff --git a/drivers/s390/crypto/zcrypt_queue.c b/drivers/s390/crypto/zcrypt_queue.c
+index 5062eae73d4aa..c3ffbd26b73ff 100644
+--- a/drivers/s390/crypto/zcrypt_queue.c
++++ b/drivers/s390/crypto/zcrypt_queue.c
+@@ -223,5 +223,6 @@ void zcrypt_queue_unregister(struct zcrypt_queue *zq)
+ 	sysfs_remove_group(&zq->queue->ap_dev.device.kobj,
+ 			   &zcrypt_queue_attr_group);
+ 	zcrypt_card_put(zc);
++	zcrypt_queue_put(zq);
+ }
+ EXPORT_SYMBOL(zcrypt_queue_unregister);
+diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
+index 308bda2e9c000..df5a3bbeba5eb 100644
+--- a/drivers/scsi/device_handler/scsi_dh_alua.c
++++ b/drivers/scsi/device_handler/scsi_dh_alua.c
+@@ -565,10 +565,11 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ 		 * even though it shouldn't according to T10.
+ 		 * The retry without rtpg_ext_hdr_req set
+ 		 * handles this.
++		 * Note:  some arrays return a sense key of ILLEGAL_REQUEST
++		 * with ASC 00h if they don't support the extended header.
+ 		 */
+ 		if (!(pg->flags & ALUA_RTPG_EXT_HDR_UNSUPP) &&
+-		    sense_hdr.sense_key == ILLEGAL_REQUEST &&
+-		    sense_hdr.asc == 0x24 && sense_hdr.ascq == 0) {
++		    sense_hdr.sense_key == ILLEGAL_REQUEST) {
+ 			pg->flags |= ALUA_RTPG_EXT_HDR_UNSUPP;
+ 			goto retry;
+ 		}
+diff --git a/drivers/scsi/libfc/fc_lport.c b/drivers/scsi/libfc/fc_lport.c
+index 6557fda85c5c7..abb14b206be04 100644
+--- a/drivers/scsi/libfc/fc_lport.c
++++ b/drivers/scsi/libfc/fc_lport.c
+@@ -1731,7 +1731,7 @@ void fc_lport_flogi_resp(struct fc_seq *sp, struct fc_frame *fp,
+ 
+ 	if (mfs < FC_SP_MIN_MAX_PAYLOAD || mfs > FC_SP_MAX_MAX_PAYLOAD) {
+ 		FC_LPORT_DBG(lport, "FLOGI bad mfs:%hu response, "
+-			     "lport->mfs:%hu\n", mfs, lport->mfs);
++			     "lport->mfs:%u\n", mfs, lport->mfs);
+ 		fc_lport_error(lport, fp);
+ 		goto out;
+ 	}
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index e94eac1946762..bdea2867516c0 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -1690,8 +1690,7 @@ lpfc_set_trunking(struct lpfc_hba *phba, char *buff_out)
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_MBOX,
+ 				"0071 Set trunk mode failed with status: %d",
+ 				rc);
+-	if (rc != MBX_TIMEOUT)
+-		mempool_free(mbox, phba->mbox_mem_pool);
++	mempool_free(mbox, phba->mbox_mem_pool);
+ 
+ 	return 0;
+ }
+@@ -6780,15 +6779,19 @@ lpfc_get_stats(struct Scsi_Host *shost)
+ 	pmboxq->ctx_buf = NULL;
+ 	pmboxq->vport = vport;
+ 
+-	if (vport->fc_flag & FC_OFFLINE_MODE)
++	if (vport->fc_flag & FC_OFFLINE_MODE) {
+ 		rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL);
+-	else
+-		rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
+-
+-	if (rc != MBX_SUCCESS) {
+-		if (rc != MBX_TIMEOUT)
++		if (rc != MBX_SUCCESS) {
+ 			mempool_free(pmboxq, phba->mbox_mem_pool);
+-		return NULL;
++			return NULL;
++		}
++	} else {
++		rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
++		if (rc != MBX_SUCCESS) {
++			if (rc != MBX_TIMEOUT)
++				mempool_free(pmboxq, phba->mbox_mem_pool);
++			return NULL;
++		}
+ 	}
+ 
+ 	memset(hs, 0, sizeof (struct fc_host_statistics));
+@@ -6812,15 +6815,19 @@ lpfc_get_stats(struct Scsi_Host *shost)
+ 	pmboxq->ctx_buf = NULL;
+ 	pmboxq->vport = vport;
+ 
+-	if (vport->fc_flag & FC_OFFLINE_MODE)
++	if (vport->fc_flag & FC_OFFLINE_MODE) {
+ 		rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL);
+-	else
+-		rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
+-
+-	if (rc != MBX_SUCCESS) {
+-		if (rc != MBX_TIMEOUT)
++		if (rc != MBX_SUCCESS) {
+ 			mempool_free(pmboxq, phba->mbox_mem_pool);
+-		return NULL;
++			return NULL;
++		}
++	} else {
++		rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
++		if (rc != MBX_SUCCESS) {
++			if (rc != MBX_TIMEOUT)
++				mempool_free(pmboxq, phba->mbox_mem_pool);
++			return NULL;
++		}
+ 	}
+ 
+ 	hs->link_failure_count = pmb->un.varRdLnk.linkFailureCnt;
+@@ -6893,15 +6900,19 @@ lpfc_reset_stats(struct Scsi_Host *shost)
+ 	pmboxq->vport = vport;
+ 
+ 	if ((vport->fc_flag & FC_OFFLINE_MODE) ||
+-		(!(psli->sli_flag & LPFC_SLI_ACTIVE)))
++		(!(psli->sli_flag & LPFC_SLI_ACTIVE))) {
+ 		rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL);
+-	else
+-		rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
+-
+-	if (rc != MBX_SUCCESS) {
+-		if (rc != MBX_TIMEOUT)
++		if (rc != MBX_SUCCESS) {
+ 			mempool_free(pmboxq, phba->mbox_mem_pool);
+-		return;
++			return;
++		}
++	} else {
++		rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
++		if (rc != MBX_SUCCESS) {
++			if (rc != MBX_TIMEOUT)
++				mempool_free(pmboxq, phba->mbox_mem_pool);
++			return;
++		}
+ 	}
+ 
+ 	memset(pmboxq, 0, sizeof(LPFC_MBOXQ_t));
+@@ -6911,15 +6922,19 @@ lpfc_reset_stats(struct Scsi_Host *shost)
+ 	pmboxq->vport = vport;
+ 
+ 	if ((vport->fc_flag & FC_OFFLINE_MODE) ||
+-	    (!(psli->sli_flag & LPFC_SLI_ACTIVE)))
++	    (!(psli->sli_flag & LPFC_SLI_ACTIVE))) {
+ 		rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL);
+-	else
++		if (rc != MBX_SUCCESS) {
++			mempool_free(pmboxq, phba->mbox_mem_pool);
++			return;
++		}
++	} else {
+ 		rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
+-
+-	if (rc != MBX_SUCCESS) {
+-		if (rc != MBX_TIMEOUT)
+-			mempool_free( pmboxq, phba->mbox_mem_pool);
+-		return;
++		if (rc != MBX_SUCCESS) {
++			if (rc != MBX_TIMEOUT)
++				mempool_free(pmboxq, phba->mbox_mem_pool);
++			return;
++		}
+ 	}
+ 
+ 	lso->link_failure_count = pmb->un.varRdLnk.linkFailureCnt;
+diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
+index 782f6f76f18aa..26f4a345bd14e 100644
+--- a/drivers/scsi/lpfc/lpfc_crtn.h
++++ b/drivers/scsi/lpfc/lpfc_crtn.h
+@@ -55,9 +55,6 @@ void lpfc_register_new_vport(struct lpfc_hba *, struct lpfc_vport *,
+ void lpfc_unreg_vpi(struct lpfc_hba *, uint16_t, LPFC_MBOXQ_t *);
+ void lpfc_init_link(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t, uint32_t);
+ void lpfc_request_features(struct lpfc_hba *, struct lpfcMboxq *);
+-void lpfc_supported_pages(struct lpfcMboxq *);
+-void lpfc_pc_sli4_params(struct lpfcMboxq *);
+-int lpfc_pc_sli4_params_get(struct lpfc_hba *, LPFC_MBOXQ_t *);
+ int lpfc_sli4_mbox_rsrc_extent(struct lpfc_hba *, struct lpfcMboxq *,
+ 			   uint16_t, uint16_t, bool);
+ int lpfc_get_sli4_parameters(struct lpfc_hba *, LPFC_MBOXQ_t *);
+diff --git a/drivers/scsi/lpfc/lpfc_hw4.h b/drivers/scsi/lpfc/lpfc_hw4.h
+index 12e4e76233e6a..47e832b7f2c25 100644
+--- a/drivers/scsi/lpfc/lpfc_hw4.h
++++ b/drivers/scsi/lpfc/lpfc_hw4.h
+@@ -124,6 +124,7 @@ struct lpfc_sli_intf {
+ /* Define SLI4 Alignment requirements. */
+ #define LPFC_ALIGN_16_BYTE	16
+ #define LPFC_ALIGN_64_BYTE	64
++#define SLI4_PAGE_SIZE		4096
+ 
+ /* Define SLI4 specific definitions. */
+ #define LPFC_MQ_CQE_BYTE_OFFSET	256
+@@ -2976,62 +2977,6 @@ struct lpfc_mbx_request_features {
+ #define lpfc_mbx_rq_ftr_rsp_mrqp_WORD		word3
+ };
+ 
+-struct lpfc_mbx_supp_pages {
+-	uint32_t word1;
+-#define qs_SHIFT 				0
+-#define qs_MASK					0x00000001
+-#define qs_WORD					word1
+-#define wr_SHIFT				1
+-#define wr_MASK 				0x00000001
+-#define wr_WORD					word1
+-#define pf_SHIFT				8
+-#define pf_MASK					0x000000ff
+-#define pf_WORD					word1
+-#define cpn_SHIFT				16
+-#define cpn_MASK				0x000000ff
+-#define cpn_WORD				word1
+-	uint32_t word2;
+-#define list_offset_SHIFT 			0
+-#define list_offset_MASK			0x000000ff
+-#define list_offset_WORD			word2
+-#define next_offset_SHIFT			8
+-#define next_offset_MASK			0x000000ff
+-#define next_offset_WORD			word2
+-#define elem_cnt_SHIFT				16
+-#define elem_cnt_MASK				0x000000ff
+-#define elem_cnt_WORD				word2
+-	uint32_t word3;
+-#define pn_0_SHIFT				24
+-#define pn_0_MASK  				0x000000ff
+-#define pn_0_WORD				word3
+-#define pn_1_SHIFT				16
+-#define pn_1_MASK				0x000000ff
+-#define pn_1_WORD				word3
+-#define pn_2_SHIFT				8
+-#define pn_2_MASK				0x000000ff
+-#define pn_2_WORD				word3
+-#define pn_3_SHIFT				0
+-#define pn_3_MASK				0x000000ff
+-#define pn_3_WORD				word3
+-	uint32_t word4;
+-#define pn_4_SHIFT				24
+-#define pn_4_MASK				0x000000ff
+-#define pn_4_WORD				word4
+-#define pn_5_SHIFT				16
+-#define pn_5_MASK				0x000000ff
+-#define pn_5_WORD				word4
+-#define pn_6_SHIFT				8
+-#define pn_6_MASK				0x000000ff
+-#define pn_6_WORD				word4
+-#define pn_7_SHIFT				0
+-#define pn_7_MASK				0x000000ff
+-#define pn_7_WORD				word4
+-	uint32_t rsvd[27];
+-#define LPFC_SUPP_PAGES			0
+-#define LPFC_BLOCK_GUARD_PROFILES	1
+-#define LPFC_SLI4_PARAMETERS		2
+-};
+-
+ struct lpfc_mbx_memory_dump_type3 {
+ 	uint32_t word1;
+ #define lpfc_mbx_memory_dump_type3_type_SHIFT    0
+@@ -3248,121 +3193,6 @@ struct user_eeprom {
+ 	uint8_t reserved191[57];
+ };
+ 
+-struct lpfc_mbx_pc_sli4_params {
+-	uint32_t word1;
+-#define qs_SHIFT				0
+-#define qs_MASK					0x00000001
+-#define qs_WORD					word1
+-#define wr_SHIFT				1
+-#define wr_MASK					0x00000001
+-#define wr_WORD					word1
+-#define pf_SHIFT				8
+-#define pf_MASK					0x000000ff
+-#define pf_WORD					word1
+-#define cpn_SHIFT				16
+-#define cpn_MASK				0x000000ff
+-#define cpn_WORD				word1
+-	uint32_t word2;
+-#define if_type_SHIFT				0
+-#define if_type_MASK				0x00000007
+-#define if_type_WORD				word2
+-#define sli_rev_SHIFT				4
+-#define sli_rev_MASK				0x0000000f
+-#define sli_rev_WORD				word2
+-#define sli_family_SHIFT			8
+-#define sli_family_MASK				0x000000ff
+-#define sli_family_WORD				word2
+-#define featurelevel_1_SHIFT			16
+-#define featurelevel_1_MASK			0x000000ff
+-#define featurelevel_1_WORD			word2
+-#define featurelevel_2_SHIFT			24
+-#define featurelevel_2_MASK			0x0000001f
+-#define featurelevel_2_WORD			word2
+-	uint32_t word3;
+-#define fcoe_SHIFT 				0
+-#define fcoe_MASK				0x00000001
+-#define fcoe_WORD				word3
+-#define fc_SHIFT				1
+-#define fc_MASK					0x00000001
+-#define fc_WORD					word3
+-#define nic_SHIFT				2
+-#define nic_MASK				0x00000001
+-#define nic_WORD				word3
+-#define iscsi_SHIFT				3
+-#define iscsi_MASK				0x00000001
+-#define iscsi_WORD				word3
+-#define rdma_SHIFT				4
+-#define rdma_MASK				0x00000001
+-#define rdma_WORD				word3
+-	uint32_t sge_supp_len;
+-#define SLI4_PAGE_SIZE 4096
+-	uint32_t word5;
+-#define if_page_sz_SHIFT			0
+-#define if_page_sz_MASK				0x0000ffff
+-#define if_page_sz_WORD				word5
+-#define loopbk_scope_SHIFT			24
+-#define loopbk_scope_MASK			0x0000000f
+-#define loopbk_scope_WORD			word5
+-#define rq_db_window_SHIFT			28
+-#define rq_db_window_MASK			0x0000000f
+-#define rq_db_window_WORD			word5
+-	uint32_t word6;
+-#define eq_pages_SHIFT				0
+-#define eq_pages_MASK				0x0000000f
+-#define eq_pages_WORD				word6
+-#define eqe_size_SHIFT				8
+-#define eqe_size_MASK				0x000000ff
+-#define eqe_size_WORD				word6
+-	uint32_t word7;
+-#define cq_pages_SHIFT				0
+-#define cq_pages_MASK				0x0000000f
+-#define cq_pages_WORD				word7
+-#define cqe_size_SHIFT				8
+-#define cqe_size_MASK				0x000000ff
+-#define cqe_size_WORD				word7
+-	uint32_t word8;
+-#define mq_pages_SHIFT				0
+-#define mq_pages_MASK				0x0000000f
+-#define mq_pages_WORD				word8
+-#define mqe_size_SHIFT				8
+-#define mqe_size_MASK				0x000000ff
+-#define mqe_size_WORD				word8
+-#define mq_elem_cnt_SHIFT			16
+-#define mq_elem_cnt_MASK			0x000000ff
+-#define mq_elem_cnt_WORD			word8
+-	uint32_t word9;
+-#define wq_pages_SHIFT				0
+-#define wq_pages_MASK				0x0000ffff
+-#define wq_pages_WORD				word9
+-#define wqe_size_SHIFT				8
+-#define wqe_size_MASK				0x000000ff
+-#define wqe_size_WORD				word9
+-	uint32_t word10;
+-#define rq_pages_SHIFT				0
+-#define rq_pages_MASK				0x0000ffff
+-#define rq_pages_WORD				word10
+-#define rqe_size_SHIFT				8
+-#define rqe_size_MASK				0x000000ff
+-#define rqe_size_WORD				word10
+-	uint32_t word11;
+-#define hdr_pages_SHIFT				0
+-#define hdr_pages_MASK				0x0000000f
+-#define hdr_pages_WORD				word11
+-#define hdr_size_SHIFT				8
+-#define hdr_size_MASK				0x0000000f
+-#define hdr_size_WORD				word11
+-#define hdr_pp_align_SHIFT			16
+-#define hdr_pp_align_MASK			0x0000ffff
+-#define hdr_pp_align_WORD			word11
+-	uint32_t word12;
+-#define sgl_pages_SHIFT				0
+-#define sgl_pages_MASK				0x0000000f
+-#define sgl_pages_WORD				word12
+-#define sgl_pp_align_SHIFT			16
+-#define sgl_pp_align_MASK			0x0000ffff
+-#define sgl_pp_align_WORD			word12
+-	uint32_t rsvd_13_63[51];
+-};
+ #define SLI4_PAGE_ALIGN(addr) (((addr)+((SLI4_PAGE_SIZE)-1)) \
+ 			       &(~((SLI4_PAGE_SIZE)-1)))
+ 
+@@ -3988,8 +3818,6 @@ struct lpfc_mqe {
+ 		struct lpfc_mbx_post_hdr_tmpl hdr_tmpl;
+ 		struct lpfc_mbx_query_fw_config query_fw_cfg;
+ 		struct lpfc_mbx_set_beacon_config beacon_config;
+-		struct lpfc_mbx_supp_pages supp_pages;
+-		struct lpfc_mbx_pc_sli4_params sli4_params;
+ 		struct lpfc_mbx_get_sli4_parameters get_sli4_parameters;
+ 		struct lpfc_mbx_set_link_diag_state link_diag_state;
+ 		struct lpfc_mbx_set_link_diag_loopback link_diag_loopback;
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 40fe889033d43..2dde5ddc687de 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -6548,8 +6548,6 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
+ 	LPFC_MBOXQ_t *mboxq;
+ 	MAILBOX_t *mb;
+ 	int rc, i, max_buf_size;
+-	uint8_t pn_page[LPFC_MAX_SUPPORTED_PAGES] = {0};
+-	struct lpfc_mqe *mqe;
+ 	int longs;
+ 	int extra;
+ 	uint64_t wwn;
+@@ -6783,32 +6781,6 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
+ 
+ 	lpfc_nvme_mod_param_dep(phba);
+ 
+-	/* Get the Supported Pages if PORT_CAPABILITIES is supported by port. */
+-	lpfc_supported_pages(mboxq);
+-	rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
+-	if (!rc) {
+-		mqe = &mboxq->u.mqe;
+-		memcpy(&pn_page[0], ((uint8_t *)&mqe->un.supp_pages.word3),
+-		       LPFC_MAX_SUPPORTED_PAGES);
+-		for (i = 0; i < LPFC_MAX_SUPPORTED_PAGES; i++) {
+-			switch (pn_page[i]) {
+-			case LPFC_SLI4_PARAMETERS:
+-				phba->sli4_hba.pc_sli4_params.supported = 1;
+-				break;
+-			default:
+-				break;
+-			}
+-		}
+-		/* Read the port's SLI4 Parameters capabilities if supported. */
+-		if (phba->sli4_hba.pc_sli4_params.supported)
+-			rc = lpfc_pc_sli4_params_get(phba, mboxq);
+-		if (rc) {
+-			mempool_free(mboxq, phba->mbox_mem_pool);
+-			rc = -EIO;
+-			goto out_free_bsmbx;
+-		}
+-	}
+-
+ 	/*
+ 	 * Get sli4 parameters that override parameters from Port capabilities.
+ 	 * If this call fails, it isn't critical unless the SLI4 parameters come
+@@ -9636,8 +9608,7 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
+ 				"3250 QUERY_FW_CFG mailbox failed with status "
+ 				"x%x add_status x%x, mbx status x%x\n",
+ 				shdr_status, shdr_add_status, rc);
+-		if (rc != MBX_TIMEOUT)
+-			mempool_free(mboxq, phba->mbox_mem_pool);
++		mempool_free(mboxq, phba->mbox_mem_pool);
+ 		rc = -ENXIO;
+ 		goto out_error;
+ 	}
+@@ -9653,8 +9624,7 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
+ 			"ulp1_mode:x%x\n", phba->sli4_hba.fw_func_mode,
+ 			phba->sli4_hba.ulp0_mode, phba->sli4_hba.ulp1_mode);
+ 
+-	if (rc != MBX_TIMEOUT)
+-		mempool_free(mboxq, phba->mbox_mem_pool);
++	mempool_free(mboxq, phba->mbox_mem_pool);
+ 
+ 	/*
+ 	 * Set up HBA Event Queues (EQs)
+@@ -10252,8 +10222,7 @@ lpfc_pci_function_reset(struct lpfc_hba *phba)
+ 		shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
+ 		shdr_add_status = bf_get(lpfc_mbox_hdr_add_status,
+ 					 &shdr->response);
+-		if (rc != MBX_TIMEOUT)
+-			mempool_free(mboxq, phba->mbox_mem_pool);
++		mempool_free(mboxq, phba->mbox_mem_pool);
+ 		if (shdr_status || shdr_add_status || rc) {
+ 			lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ 					"0495 SLI_FUNCTION_RESET mailbox "
+@@ -12049,78 +12018,6 @@ lpfc_sli4_hba_unset(struct lpfc_hba *phba)
+ 		phba->pport->work_port_events = 0;
+ }
+ 
+- /**
+- * lpfc_pc_sli4_params_get - Get the SLI4_PARAMS port capabilities.
+- * @phba: Pointer to HBA context object.
+- * @mboxq: Pointer to the mailboxq memory for the mailbox command response.
+- *
+- * This function is called in the SLI4 code path to read the port's
+- * sli4 capabilities.
+- *
+- * This function may be be called from any context that can block-wait
+- * for the completion.  The expectation is that this routine is called
+- * typically from probe_one or from the online routine.
+- **/
+-int
+-lpfc_pc_sli4_params_get(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
+-{
+-	int rc;
+-	struct lpfc_mqe *mqe;
+-	struct lpfc_pc_sli4_params *sli4_params;
+-	uint32_t mbox_tmo;
+-
+-	rc = 0;
+-	mqe = &mboxq->u.mqe;
+-
+-	/* Read the port's SLI4 Parameters port capabilities */
+-	lpfc_pc_sli4_params(mboxq);
+-	if (!phba->sli4_hba.intr_enable)
+-		rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
+-	else {
+-		mbox_tmo = lpfc_mbox_tmo_val(phba, mboxq);
+-		rc = lpfc_sli_issue_mbox_wait(phba, mboxq, mbox_tmo);
+-	}
+-
+-	if (unlikely(rc))
+-		return 1;
+-
+-	sli4_params = &phba->sli4_hba.pc_sli4_params;
+-	sli4_params->if_type = bf_get(if_type, &mqe->un.sli4_params);
+-	sli4_params->sli_rev = bf_get(sli_rev, &mqe->un.sli4_params);
+-	sli4_params->sli_family = bf_get(sli_family, &mqe->un.sli4_params);
+-	sli4_params->featurelevel_1 = bf_get(featurelevel_1,
+-					     &mqe->un.sli4_params);
+-	sli4_params->featurelevel_2 = bf_get(featurelevel_2,
+-					     &mqe->un.sli4_params);
+-	sli4_params->proto_types = mqe->un.sli4_params.word3;
+-	sli4_params->sge_supp_len = mqe->un.sli4_params.sge_supp_len;
+-	sli4_params->if_page_sz = bf_get(if_page_sz, &mqe->un.sli4_params);
+-	sli4_params->rq_db_window = bf_get(rq_db_window, &mqe->un.sli4_params);
+-	sli4_params->loopbk_scope = bf_get(loopbk_scope, &mqe->un.sli4_params);
+-	sli4_params->eq_pages_max = bf_get(eq_pages, &mqe->un.sli4_params);
+-	sli4_params->eqe_size = bf_get(eqe_size, &mqe->un.sli4_params);
+-	sli4_params->cq_pages_max = bf_get(cq_pages, &mqe->un.sli4_params);
+-	sli4_params->cqe_size = bf_get(cqe_size, &mqe->un.sli4_params);
+-	sli4_params->mq_pages_max = bf_get(mq_pages, &mqe->un.sli4_params);
+-	sli4_params->mqe_size = bf_get(mqe_size, &mqe->un.sli4_params);
+-	sli4_params->mq_elem_cnt = bf_get(mq_elem_cnt, &mqe->un.sli4_params);
+-	sli4_params->wq_pages_max = bf_get(wq_pages, &mqe->un.sli4_params);
+-	sli4_params->wqe_size = bf_get(wqe_size, &mqe->un.sli4_params);
+-	sli4_params->rq_pages_max = bf_get(rq_pages, &mqe->un.sli4_params);
+-	sli4_params->rqe_size = bf_get(rqe_size, &mqe->un.sli4_params);
+-	sli4_params->hdr_pages_max = bf_get(hdr_pages, &mqe->un.sli4_params);
+-	sli4_params->hdr_size = bf_get(hdr_size, &mqe->un.sli4_params);
+-	sli4_params->hdr_pp_align = bf_get(hdr_pp_align, &mqe->un.sli4_params);
+-	sli4_params->sgl_pages_max = bf_get(sgl_pages, &mqe->un.sli4_params);
+-	sli4_params->sgl_pp_align = bf_get(sgl_pp_align, &mqe->un.sli4_params);
+-
+-	/* Make sure that sge_supp_len can be handled by the driver */
+-	if (sli4_params->sge_supp_len > LPFC_MAX_SGE_SIZE)
+-		sli4_params->sge_supp_len = LPFC_MAX_SGE_SIZE;
+-
+-	return rc;
+-}
+-
+ /**
+  * lpfc_get_sli4_parameters - Get the SLI4 Config PARAMETERS.
+  * @phba: Pointer to HBA context object.
+@@ -12179,7 +12076,8 @@ lpfc_get_sli4_parameters(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
+ 	else
+ 		phba->sli3_options &= ~LPFC_SLI4_PHWQ_ENABLED;
+ 	sli4_params->sge_supp_len = mbx_sli4_parameters->sge_supp_len;
+-	sli4_params->loopbk_scope = bf_get(loopbk_scope, mbx_sli4_parameters);
++	sli4_params->loopbk_scope = bf_get(cfg_loopbk_scope,
++					   mbx_sli4_parameters);
+ 	sli4_params->oas_supported = bf_get(cfg_oas, mbx_sli4_parameters);
+ 	sli4_params->cqv = bf_get(cfg_cqv, mbx_sli4_parameters);
+ 	sli4_params->mqv = bf_get(cfg_mqv, mbx_sli4_parameters);
+diff --git a/drivers/scsi/lpfc/lpfc_mbox.c b/drivers/scsi/lpfc/lpfc_mbox.c
+index 3414ffcb26fed..8764fdfc41d49 100644
+--- a/drivers/scsi/lpfc/lpfc_mbox.c
++++ b/drivers/scsi/lpfc/lpfc_mbox.c
+@@ -2624,39 +2624,3 @@ lpfc_resume_rpi(struct lpfcMboxq *mbox, struct lpfc_nodelist *ndlp)
+ 	resume_rpi->event_tag = ndlp->phba->fc_eventTag;
+ }
+ 
+-/**
+- * lpfc_supported_pages - Initialize the PORT_CAPABILITIES supported pages
+- *                        mailbox command.
+- * @mbox: pointer to lpfc mbox command to initialize.
+- *
+- * The PORT_CAPABILITIES supported pages mailbox command is issued to
+- * retrieve the particular feature pages supported by the port.
+- **/
+-void
+-lpfc_supported_pages(struct lpfcMboxq *mbox)
+-{
+-	struct lpfc_mbx_supp_pages *supp_pages;
+-
+-	memset(mbox, 0, sizeof(*mbox));
+-	supp_pages = &mbox->u.mqe.un.supp_pages;
+-	bf_set(lpfc_mqe_command, &mbox->u.mqe, MBX_PORT_CAPABILITIES);
+-	bf_set(cpn, supp_pages, LPFC_SUPP_PAGES);
+-}
+-
+-/**
+- * lpfc_pc_sli4_params - Initialize the PORT_CAPABILITIES SLI4 Params mbox cmd.
+- * @mbox: pointer to lpfc mbox command to initialize.
+- *
+- * The PORT_CAPABILITIES SLI4 parameters mailbox command is issued to
+- * retrieve the particular SLI4 features supported by the port.
+- **/
+-void
+-lpfc_pc_sli4_params(struct lpfcMboxq *mbox)
+-{
+-	struct lpfc_mbx_pc_sli4_params *sli4_params;
+-
+-	memset(mbox, 0, sizeof(*mbox));
+-	sli4_params = &mbox->u.mqe.un.sli4_params;
+-	bf_set(lpfc_mqe_command, &mbox->u.mqe, MBX_PORT_CAPABILITIES);
+-	bf_set(cpn, sli4_params, LPFC_SLI4_PARAMETERS);
+-}
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index 92d6e7b98770d..6afcb1426e357 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -903,9 +903,14 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ 		}
+ 	} else if ((!(ndlp->nlp_type & NLP_FABRIC) &&
+ 		((ndlp->nlp_type & NLP_FCP_TARGET) ||
+-		!(ndlp->nlp_type & NLP_FCP_INITIATOR))) ||
++		(ndlp->nlp_type & NLP_NVME_TARGET) ||
++		(vport->fc_flag & FC_PT2PT))) ||
+ 		(ndlp->nlp_state == NLP_STE_ADISC_ISSUE)) {
+-		/* Only try to re-login if this is NOT a Fabric Node */
++		/* Only try to re-login if this is NOT a Fabric Node
++		 * AND the remote NPORT is a FCP/NVME Target or we
++		 * are in pt2pt mode. NLP_STE_ADISC_ISSUE is a special
++		 * case for LOGO as a response to ADISC behavior.
++		 */
+ 		mod_timer(&ndlp->nlp_delayfunc,
+ 			  jiffies + msecs_to_jiffies(1000 * 1));
+ 		spin_lock_irq(shost->host_lock);
+@@ -1979,8 +1984,6 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
+ 		ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
+ 
+ 		lpfc_issue_els_logo(vport, ndlp, 0);
+-		ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE;
+-		lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+ 		return ndlp->nlp_state;
+ 	}
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
+index d4ade7cdb93a9..deab8931ab48e 100644
+--- a/drivers/scsi/lpfc/lpfc_nvmet.c
++++ b/drivers/scsi/lpfc/lpfc_nvmet.c
+@@ -3300,7 +3300,6 @@ lpfc_nvmet_unsol_issue_abort(struct lpfc_hba *phba,
+ 	bf_set(wqe_rcvoxid, &wqe_abts->xmit_sequence.wqe_com, xri);
+ 
+ 	/* Word 10 */
+-	bf_set(wqe_dbde, &wqe_abts->xmit_sequence.wqe_com, 1);
+ 	bf_set(wqe_iod, &wqe_abts->xmit_sequence.wqe_com, LPFC_WQE_IOD_WRITE);
+ 	bf_set(wqe_lenloc, &wqe_abts->xmit_sequence.wqe_com,
+ 	       LPFC_WQE_LENLOC_WORD12);
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index f103340820c66..3e5c0718555ad 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -5546,12 +5546,10 @@ lpfc_sli4_get_ctl_attr(struct lpfc_hba *phba)
+ 			phba->sli4_hba.lnk_info.lnk_no,
+ 			phba->BIOSVersion);
+ out_free_mboxq:
+-	if (rc != MBX_TIMEOUT) {
+-		if (bf_get(lpfc_mqe_command, &mboxq->u.mqe) == MBX_SLI4_CONFIG)
+-			lpfc_sli4_mbox_cmd_free(phba, mboxq);
+-		else
+-			mempool_free(mboxq, phba->mbox_mem_pool);
+-	}
++	if (bf_get(lpfc_mqe_command, &mboxq->u.mqe) == MBX_SLI4_CONFIG)
++		lpfc_sli4_mbox_cmd_free(phba, mboxq);
++	else
++		mempool_free(mboxq, phba->mbox_mem_pool);
+ 	return rc;
+ }
+ 
+@@ -5652,12 +5650,10 @@ retrieve_ppname:
+ 	}
+ 
+ out_free_mboxq:
+-	if (rc != MBX_TIMEOUT) {
+-		if (bf_get(lpfc_mqe_command, &mboxq->u.mqe) == MBX_SLI4_CONFIG)
+-			lpfc_sli4_mbox_cmd_free(phba, mboxq);
+-		else
+-			mempool_free(mboxq, phba->mbox_mem_pool);
+-	}
++	if (bf_get(lpfc_mqe_command, &mboxq->u.mqe) == MBX_SLI4_CONFIG)
++		lpfc_sli4_mbox_cmd_free(phba, mboxq);
++	else
++		mempool_free(mboxq, phba->mbox_mem_pool);
+ 	return rc;
+ }
+ 
+@@ -16844,8 +16840,7 @@ lpfc_rq_destroy(struct lpfc_hba *phba, struct lpfc_queue *hrq,
+ 				"2509 RQ_DESTROY mailbox failed with "
+ 				"status x%x add_status x%x, mbx status x%x\n",
+ 				shdr_status, shdr_add_status, rc);
+-		if (rc != MBX_TIMEOUT)
+-			mempool_free(mbox, hrq->phba->mbox_mem_pool);
++		mempool_free(mbox, hrq->phba->mbox_mem_pool);
+ 		return -ENXIO;
+ 	}
+ 	bf_set(lpfc_mbx_rq_destroy_q_id, &mbox->u.mqe.un.rq_destroy.u.request,
+@@ -16942,7 +16937,9 @@ lpfc_sli4_post_sgl(struct lpfc_hba *phba,
+ 	shdr = (union lpfc_sli4_cfg_shdr *) &post_sgl_pages->header.cfg_shdr;
+ 	shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
+ 	shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response);
+-	if (rc != MBX_TIMEOUT)
++	if (!phba->sli4_hba.intr_enable)
++		mempool_free(mbox, phba->mbox_mem_pool);
++	else if (rc != MBX_TIMEOUT)
+ 		mempool_free(mbox, phba->mbox_mem_pool);
+ 	if (shdr_status || shdr_add_status || rc) {
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+@@ -17139,7 +17136,9 @@ lpfc_sli4_post_sgl_list(struct lpfc_hba *phba,
+ 	shdr = (union lpfc_sli4_cfg_shdr *) &sgl->cfg_shdr;
+ 	shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
+ 	shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response);
+-	if (rc != MBX_TIMEOUT)
++	if (!phba->sli4_hba.intr_enable)
++		lpfc_sli4_mbox_cmd_free(phba, mbox);
++	else if (rc != MBX_TIMEOUT)
+ 		lpfc_sli4_mbox_cmd_free(phba, mbox);
+ 	if (shdr_status || shdr_add_status || rc) {
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+@@ -17252,7 +17251,9 @@ lpfc_sli4_post_io_sgl_block(struct lpfc_hba *phba, struct list_head *nblist,
+ 	shdr = (union lpfc_sli4_cfg_shdr *)&sgl->cfg_shdr;
+ 	shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
+ 	shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response);
+-	if (rc != MBX_TIMEOUT)
++	if (!phba->sli4_hba.intr_enable)
++		lpfc_sli4_mbox_cmd_free(phba, mbox);
++	else if (rc != MBX_TIMEOUT)
+ 		lpfc_sli4_mbox_cmd_free(phba, mbox);
+ 	if (shdr_status || shdr_add_status || rc) {
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+@@ -17836,7 +17837,6 @@ lpfc_sli4_seq_abort_rsp_cmpl(struct lpfc_hba *phba,
+ 	if (cmd_iocbq) {
+ 		ndlp = (struct lpfc_nodelist *)cmd_iocbq->context1;
+ 		lpfc_nlp_put(ndlp);
+-		lpfc_nlp_not_used(ndlp);
+ 		lpfc_sli_release_iocbq(phba, cmd_iocbq);
+ 	}
+ 
+@@ -18608,8 +18608,7 @@ lpfc_sli4_post_rpi_hdr(struct lpfc_hba *phba, struct lpfc_rpi_hdr *rpi_page)
+ 	shdr = (union lpfc_sli4_cfg_shdr *) &hdr_tmpl->header.cfg_shdr;
+ 	shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
+ 	shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response);
+-	if (rc != MBX_TIMEOUT)
+-		mempool_free(mboxq, phba->mbox_mem_pool);
++	mempool_free(mboxq, phba->mbox_mem_pool);
+ 	if (shdr_status || shdr_add_status || rc) {
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ 				"2514 POST_RPI_HDR mailbox failed with "
+@@ -19853,7 +19852,9 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
+ 			break;
+ 		}
+ 	}
+-	if (rc != MBX_TIMEOUT)
++	if (!phba->sli4_hba.intr_enable)
++		mempool_free(mbox, phba->mbox_mem_pool);
++	else if (rc != MBX_TIMEOUT)
+ 		mempool_free(mbox, phba->mbox_mem_pool);
+ 	if (shdr_status || shdr_add_status || rc) {
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index ac25ec5f97388..3fbbdf084d67a 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -6804,6 +6804,8 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
+ 
+ 	ioc_info(ioc, "sending diag reset !!\n");
+ 
++	pci_cfg_access_lock(ioc->pdev);
++
+ 	drsprintk(ioc, ioc_info(ioc, "clear interrupts\n"));
+ 
+ 	count = 0;
+@@ -6894,10 +6896,12 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
+ 		goto out;
+ 	}
+ 
++	pci_cfg_access_unlock(ioc->pdev);
+ 	ioc_info(ioc, "diag reset: SUCCESS\n");
+ 	return 0;
+ 
+  out:
++	pci_cfg_access_unlock(ioc->pdev);
+ 	ioc_err(ioc, "diag reset: FAILED\n");
+ 	return -EFAULT;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index ab45ac1e5a72c..6a2c4a6fcded8 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -2855,6 +2855,8 @@ qla2x00_reset_host_stats(struct Scsi_Host *shost)
+ 	vha->qla_stats.jiffies_at_last_reset = get_jiffies_64();
+ 
+ 	if (IS_FWI2_CAPABLE(ha)) {
++		int rval;
++
+ 		stats = dma_alloc_coherent(&ha->pdev->dev,
+ 		    sizeof(*stats), &stats_dma, GFP_KERNEL);
+ 		if (!stats) {
+@@ -2864,7 +2866,11 @@ qla2x00_reset_host_stats(struct Scsi_Host *shost)
+ 		}
+ 
+ 		/* reset firmware statistics */
+-		qla24xx_get_isp_stats(base_vha, stats, stats_dma, BIT_0);
++		rval = qla24xx_get_isp_stats(base_vha, stats, stats_dma, BIT_0);
++		if (rval != QLA_SUCCESS)
++			ql_log(ql_log_warn, vha, 0x70de,
++			       "Resetting ISP statistics failed: rval = %d\n",
++			       rval);
+ 
+ 		dma_free_coherent(&ha->pdev->dev, sizeof(*stats),
+ 		    stats, stats_dma);
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index 23b604832a54d..7fa085969a63a 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -24,10 +24,11 @@ void qla2x00_bsg_job_done(srb_t *sp, int res)
+ 	struct bsg_job *bsg_job = sp->u.bsg_job;
+ 	struct fc_bsg_reply *bsg_reply = bsg_job->reply;
+ 
++	sp->free(sp);
++
+ 	bsg_reply->result = res;
+ 	bsg_job_done(bsg_job, bsg_reply->result,
+ 		       bsg_reply->reply_payload_rcv_len);
+-	sp->free(sp);
+ }
+ 
+ void qla2x00_bsg_sp_free(srb_t *sp)
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index d389f56fff54a..21be50b35bc27 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -1008,8 +1008,6 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
+ 	if (rval != QLA_SUCCESS) {
+ 		ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x3078,
+ 		    "Start scsi failed rval=%d for cmd=%p.\n", rval, cmd);
+-		if (rval == QLA_INTERFACE_ERROR)
+-			goto qc24_free_sp_fail_command;
+ 		goto qc24_host_busy_free_sp;
+ 	}
+ 
+@@ -1021,11 +1019,6 @@ qc24_host_busy_free_sp:
+ qc24_target_busy:
+ 	return SCSI_MLQUEUE_TARGET_BUSY;
+ 
+-qc24_free_sp_fail_command:
+-	sp->free(sp);
+-	CMD_SP(cmd) = NULL;
+-	qla2xxx_rel_qpair_sp(sp->qpair, sp);
+-
+ qc24_fail_command:
+ 	cmd->scsi_done(cmd);
+ 
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 9d0229656681f..5083e5d2b4675 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -5491,6 +5491,8 @@ static void pqi_fail_io_queued_for_device(struct pqi_ctrl_info *ctrl_info,
+ 
+ 				list_del(&io_request->request_list_entry);
+ 				set_host_byte(scmd, DID_RESET);
++				pqi_free_io_request(io_request);
++				scsi_dma_unmap(scmd);
+ 				pqi_scsi_done(scmd);
+ 			}
+ 
+@@ -5527,6 +5529,8 @@ static void pqi_fail_io_queued_for_all_devices(struct pqi_ctrl_info *ctrl_info)
+ 
+ 				list_del(&io_request->request_list_entry);
+ 				set_host_byte(scmd, DID_RESET);
++				pqi_free_io_request(io_request);
++				scsi_dma_unmap(scmd);
+ 				pqi_scsi_done(scmd);
+ 			}
+ 
+@@ -6601,6 +6605,7 @@ static int pqi_register_scsi(struct pqi_ctrl_info *ctrl_info)
+ 	shost->irq = pci_irq_vector(ctrl_info->pci_dev, 0);
+ 	shost->unique_id = shost->irq;
+ 	shost->nr_hw_queues = ctrl_info->num_queue_groups;
++	shost->host_tagset = 1;
+ 	shost->hostdata[0] = (unsigned long)ctrl_info;
+ 
+ 	rc = scsi_add_host(shost, &ctrl_info->pci_dev->dev);
+@@ -8221,6 +8226,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x152d, 0x8a37)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x193d, 0x8460)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x193d, 0x1104)
+@@ -8293,6 +8302,22 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x1bd4, 0x004f)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1bd4, 0x0051)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1bd4, 0x0052)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1bd4, 0x0053)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1bd4, 0x0054)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x19e5, 0xd227)
+@@ -8453,6 +8478,122 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_VENDOR_ID_ADAPTEC2, 0x1380)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1400)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1402)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1410)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1411)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1412)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1420)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1430)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1440)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1441)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1450)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1452)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1460)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1461)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1462)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1470)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1471)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1472)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1480)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1490)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x1491)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x14a0)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x14a1)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x14b0)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x14b1)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x14c0)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x14c1)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x14d0)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x14e0)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_ADAPTEC2, 0x14f0)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_VENDOR_ID_ADVANTECH, 0x8312)
+@@ -8517,6 +8658,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_VENDOR_ID_HP, 0x1001)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       PCI_VENDOR_ID_HP, 0x1002)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_VENDOR_ID_HP, 0x1100)
+@@ -8525,6 +8670,22 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_VENDOR_ID_HP, 0x1101)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1590, 0x0294)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1590, 0x02db)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1590, 0x02dc)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1590, 0x032e)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x1d8d, 0x0800)
+diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
+index df9a5ca8c99c4..0118bd986f902 100644
+--- a/drivers/soc/tegra/pmc.c
++++ b/drivers/soc/tegra/pmc.c
+@@ -317,6 +317,8 @@ struct tegra_pmc_soc {
+ 				   bool invert);
+ 	int (*irq_set_wake)(struct irq_data *data, unsigned int on);
+ 	int (*irq_set_type)(struct irq_data *data, unsigned int type);
++	int (*powergate_set)(struct tegra_pmc *pmc, unsigned int id,
++			     bool new_state);
+ 
+ 	const char * const *reset_sources;
+ 	unsigned int num_reset_sources;
+@@ -517,6 +519,63 @@ static int tegra_powergate_lookup(struct tegra_pmc *pmc, const char *name)
+ 	return -ENODEV;
+ }
+ 
++static int tegra20_powergate_set(struct tegra_pmc *pmc, unsigned int id,
++				 bool new_state)
++{
++	unsigned int retries = 100;
++	bool status;
++	int ret;
++
++	/*
++	 * As per TRM documentation, the toggle command will be dropped by PMC
++	 * if there is contention with a HW-initiated toggling (i.e. CPU core
++	 * power-gated), the command should be retried in that case.
++	 */
++	do {
++		tegra_pmc_writel(pmc, PWRGATE_TOGGLE_START | id, PWRGATE_TOGGLE);
++
++		/* wait for PMC to execute the command */
++		ret = readx_poll_timeout(tegra_powergate_state, id, status,
++					 status == new_state, 1, 10);
++	} while (ret == -ETIMEDOUT && retries--);
++
++	return ret;
++}
++
++static inline bool tegra_powergate_toggle_ready(struct tegra_pmc *pmc)
++{
++	return !(tegra_pmc_readl(pmc, PWRGATE_TOGGLE) & PWRGATE_TOGGLE_START);
++}
++
++static int tegra114_powergate_set(struct tegra_pmc *pmc, unsigned int id,
++				  bool new_state)
++{
++	bool status;
++	int err;
++
++	/* wait while PMC power gating is contended */
++	err = readx_poll_timeout(tegra_powergate_toggle_ready, pmc, status,
++				 status == true, 1, 100);
++	if (err)
++		return err;
++
++	tegra_pmc_writel(pmc, PWRGATE_TOGGLE_START | id, PWRGATE_TOGGLE);
++
++	/* wait for PMC to accept the command */
++	err = readx_poll_timeout(tegra_powergate_toggle_ready, pmc, status,
++				 status == true, 1, 100);
++	if (err)
++		return err;
++
++	/* wait for PMC to execute the command */
++	err = readx_poll_timeout(tegra_powergate_state, id, status,
++				 status == new_state, 10, 100000);
++	if (err)
++		return err;
++
++	return 0;
++}
++
+ /**
+  * tegra_powergate_set() - set the state of a partition
+  * @pmc: power management controller
+@@ -526,7 +585,6 @@ static int tegra_powergate_lookup(struct tegra_pmc *pmc, const char *name)
+ static int tegra_powergate_set(struct tegra_pmc *pmc, unsigned int id,
+ 			       bool new_state)
+ {
+-	bool status;
+ 	int err;
+ 
+ 	if (id == TEGRA_POWERGATE_3D && pmc->soc->has_gpu_clamps)
+@@ -539,10 +597,7 @@ static int tegra_powergate_set(struct tegra_pmc *pmc, unsigned int id,
+ 		return 0;
+ 	}
+ 
+-	tegra_pmc_writel(pmc, PWRGATE_TOGGLE_START | id, PWRGATE_TOGGLE);
+-
+-	err = readx_poll_timeout(tegra_powergate_state, id, status,
+-				 status == new_state, 10, 100000);
++	err = pmc->soc->powergate_set(pmc, id, new_state);
+ 
+ 	mutex_unlock(&pmc->powergates_lock);
+ 
+@@ -2699,6 +2754,7 @@ static const struct tegra_pmc_soc tegra20_pmc_soc = {
+ 	.regs = &tegra20_pmc_regs,
+ 	.init = tegra20_pmc_init,
+ 	.setup_irq_polarity = tegra20_pmc_setup_irq_polarity,
++	.powergate_set = tegra20_powergate_set,
+ 	.reset_sources = NULL,
+ 	.num_reset_sources = 0,
+ 	.reset_levels = NULL,
+@@ -2757,6 +2813,7 @@ static const struct tegra_pmc_soc tegra30_pmc_soc = {
+ 	.regs = &tegra20_pmc_regs,
+ 	.init = tegra20_pmc_init,
+ 	.setup_irq_polarity = tegra20_pmc_setup_irq_polarity,
++	.powergate_set = tegra20_powergate_set,
+ 	.reset_sources = tegra30_reset_sources,
+ 	.num_reset_sources = ARRAY_SIZE(tegra30_reset_sources),
+ 	.reset_levels = NULL,
+@@ -2811,6 +2868,7 @@ static const struct tegra_pmc_soc tegra114_pmc_soc = {
+ 	.regs = &tegra20_pmc_regs,
+ 	.init = tegra20_pmc_init,
+ 	.setup_irq_polarity = tegra20_pmc_setup_irq_polarity,
++	.powergate_set = tegra114_powergate_set,
+ 	.reset_sources = tegra30_reset_sources,
+ 	.num_reset_sources = ARRAY_SIZE(tegra30_reset_sources),
+ 	.reset_levels = NULL,
+@@ -2925,6 +2983,7 @@ static const struct tegra_pmc_soc tegra124_pmc_soc = {
+ 	.regs = &tegra20_pmc_regs,
+ 	.init = tegra20_pmc_init,
+ 	.setup_irq_polarity = tegra20_pmc_setup_irq_polarity,
++	.powergate_set = tegra114_powergate_set,
+ 	.reset_sources = tegra30_reset_sources,
+ 	.num_reset_sources = ARRAY_SIZE(tegra30_reset_sources),
+ 	.reset_levels = NULL,
+@@ -3048,6 +3107,7 @@ static const struct tegra_pmc_soc tegra210_pmc_soc = {
+ 	.regs = &tegra20_pmc_regs,
+ 	.init = tegra20_pmc_init,
+ 	.setup_irq_polarity = tegra20_pmc_setup_irq_polarity,
++	.powergate_set = tegra114_powergate_set,
+ 	.irq_set_wake = tegra210_pmc_irq_set_wake,
+ 	.irq_set_type = tegra210_pmc_irq_set_type,
+ 	.reset_sources = tegra210_reset_sources,
+diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
+index 580660599f461..c6d421a4b91b6 100644
+--- a/drivers/soundwire/cadence_master.c
++++ b/drivers/soundwire/cadence_master.c
+@@ -1449,10 +1449,12 @@ int sdw_cdns_clock_stop(struct sdw_cdns *cdns, bool block_wake)
+ 	}
+ 
+ 	/* Prepare slaves for clock stop */
+-	ret = sdw_bus_prep_clk_stop(&cdns->bus);
+-	if (ret < 0) {
+-		dev_err(cdns->dev, "prepare clock stop failed %d", ret);
+-		return ret;
++	if (slave_present) {
++		ret = sdw_bus_prep_clk_stop(&cdns->bus);
++		if (ret < 0 && ret != -ENODATA) {
++			dev_err(cdns->dev, "prepare clock stop failed %d\n", ret);
++			return ret;
++		}
+ 	}
+ 
+ 	/*
+diff --git a/drivers/spi/spi-ath79.c b/drivers/spi/spi-ath79.c
+index eb9a243e95265..98ace748cd986 100644
+--- a/drivers/spi/spi-ath79.c
++++ b/drivers/spi/spi-ath79.c
+@@ -156,8 +156,7 @@ static int ath79_spi_probe(struct platform_device *pdev)
+ 
+ 	master->use_gpio_descriptors = true;
+ 	master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32);
+-	master->setup = spi_bitbang_setup;
+-	master->cleanup = spi_bitbang_cleanup;
++	master->flags = SPI_MASTER_GPIO_SS;
+ 	if (pdata) {
+ 		master->bus_num = pdata->bus_num;
+ 		master->num_chipselect = pdata->num_chipselect;
+diff --git a/drivers/spi/spi-dln2.c b/drivers/spi/spi-dln2.c
+index 75b33d7d14b04..9a4d942fafcf5 100644
+--- a/drivers/spi/spi-dln2.c
++++ b/drivers/spi/spi-dln2.c
+@@ -780,7 +780,7 @@ exit_free_master:
+ 
+ static int dln2_spi_remove(struct platform_device *pdev)
+ {
+-	struct spi_master *master = spi_master_get(platform_get_drvdata(pdev));
++	struct spi_master *master = platform_get_drvdata(pdev);
+ 	struct dln2_spi *dln2 = spi_master_get_devdata(master);
+ 
+ 	pm_runtime_disable(&pdev->dev);
+diff --git a/drivers/spi/spi-omap-100k.c b/drivers/spi/spi-omap-100k.c
+index 36a4922a134a1..ccd817ee4917b 100644
+--- a/drivers/spi/spi-omap-100k.c
++++ b/drivers/spi/spi-omap-100k.c
+@@ -424,7 +424,7 @@ err:
+ 
+ static int omap1_spi100k_remove(struct platform_device *pdev)
+ {
+-	struct spi_master *master = spi_master_get(platform_get_drvdata(pdev));
++	struct spi_master *master = platform_get_drvdata(pdev);
+ 	struct omap1_spi100k *spi100k = spi_master_get_devdata(master);
+ 
+ 	pm_runtime_disable(&pdev->dev);
+@@ -438,7 +438,7 @@ static int omap1_spi100k_remove(struct platform_device *pdev)
+ #ifdef CONFIG_PM
+ static int omap1_spi100k_runtime_suspend(struct device *dev)
+ {
+-	struct spi_master *master = spi_master_get(dev_get_drvdata(dev));
++	struct spi_master *master = dev_get_drvdata(dev);
+ 	struct omap1_spi100k *spi100k = spi_master_get_devdata(master);
+ 
+ 	clk_disable_unprepare(spi100k->ick);
+@@ -449,7 +449,7 @@ static int omap1_spi100k_runtime_suspend(struct device *dev)
+ 
+ static int omap1_spi100k_runtime_resume(struct device *dev)
+ {
+-	struct spi_master *master = spi_master_get(dev_get_drvdata(dev));
++	struct spi_master *master = dev_get_drvdata(dev);
+ 	struct omap1_spi100k *spi100k = spi_master_get_devdata(master);
+ 	int ret;
+ 
+diff --git a/drivers/spi/spi-qup.c b/drivers/spi/spi-qup.c
+index 8dcb2e70735c9..d39dec6d1c91e 100644
+--- a/drivers/spi/spi-qup.c
++++ b/drivers/spi/spi-qup.c
+@@ -1263,7 +1263,7 @@ static int spi_qup_remove(struct platform_device *pdev)
+ 	struct spi_qup *controller = spi_master_get_devdata(master);
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c
+index 947e6b9dc9f4d..2786470a52011 100644
+--- a/drivers/spi/spi-stm32-qspi.c
++++ b/drivers/spi/spi-stm32-qspi.c
+@@ -727,21 +727,31 @@ static int __maybe_unused stm32_qspi_suspend(struct device *dev)
+ {
+ 	pinctrl_pm_select_sleep_state(dev);
+ 
+-	return 0;
++	return pm_runtime_force_suspend(dev);
+ }
+ 
+ static int __maybe_unused stm32_qspi_resume(struct device *dev)
+ {
+ 	struct stm32_qspi *qspi = dev_get_drvdata(dev);
++	int ret;
++
++	ret = pm_runtime_force_resume(dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	pinctrl_pm_select_default_state(dev);
+-	clk_prepare_enable(qspi->clk);
++
++	ret = pm_runtime_get_sync(dev);
++	if (ret < 0) {
++		pm_runtime_put_noidle(dev);
++		return ret;
++	}
+ 
+ 	writel_relaxed(qspi->cr_reg, qspi->io_base + QSPI_CR);
+ 	writel_relaxed(qspi->dcr_reg, qspi->io_base + QSPI_DCR);
+ 
+-	pm_runtime_mark_last_busy(qspi->dev);
+-	pm_runtime_put_autosuspend(qspi->dev);
++	pm_runtime_mark_last_busy(dev);
++	pm_runtime_put_autosuspend(dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c
+index 9417385c09217..e06aafe169e0c 100644
+--- a/drivers/spi/spi-ti-qspi.c
++++ b/drivers/spi/spi-ti-qspi.c
+@@ -733,6 +733,17 @@ static int ti_qspi_runtime_resume(struct device *dev)
+ 	return 0;
+ }
+ 
++static void ti_qspi_dma_cleanup(struct ti_qspi *qspi)
++{
++	if (qspi->rx_bb_addr)
++		dma_free_coherent(qspi->dev, QSPI_DMA_BUFFER_SIZE,
++				  qspi->rx_bb_addr,
++				  qspi->rx_bb_dma_addr);
++
++	if (qspi->rx_chan)
++		dma_release_channel(qspi->rx_chan);
++}
++
+ static const struct of_device_id ti_qspi_match[] = {
+ 	{.compatible = "ti,dra7xxx-qspi" },
+ 	{.compatible = "ti,am4372-qspi" },
+@@ -886,6 +897,8 @@ no_dma:
+ 	if (!ret)
+ 		return 0;
+ 
++	ti_qspi_dma_cleanup(qspi);
++
+ 	pm_runtime_disable(&pdev->dev);
+ free_master:
+ 	spi_master_put(master);
+@@ -904,12 +917,7 @@ static int ti_qspi_remove(struct platform_device *pdev)
+ 	pm_runtime_put_sync(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 
+-	if (qspi->rx_bb_addr)
+-		dma_free_coherent(qspi->dev, QSPI_DMA_BUFFER_SIZE,
+-				  qspi->rx_bb_addr,
+-				  qspi->rx_bb_dma_addr);
+-	if (qspi->rx_chan)
+-		dma_release_channel(qspi->rx_chan);
++	ti_qspi_dma_cleanup(qspi);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 4257a2d368f71..1eee8b3c1b381 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -787,7 +787,7 @@ int spi_register_board_info(struct spi_board_info const *info, unsigned n)
+ 
+ /*-------------------------------------------------------------------------*/
+ 
+-static void spi_set_cs(struct spi_device *spi, bool enable)
++static void spi_set_cs(struct spi_device *spi, bool enable, bool force)
+ {
+ 	bool enable1 = enable;
+ 
+@@ -795,7 +795,7 @@ static void spi_set_cs(struct spi_device *spi, bool enable)
+ 	 * Avoid calling into the driver (or doing delays) if the chip select
+ 	 * isn't actually changing from the last time this was called.
+ 	 */
+-	if ((spi->controller->last_cs_enable == enable) &&
++	if (!force && (spi->controller->last_cs_enable == enable) &&
+ 	    (spi->controller->last_cs_mode_high == (spi->mode & SPI_CS_HIGH)))
+ 		return;
+ 
+@@ -1243,7 +1243,7 @@ static int spi_transfer_one_message(struct spi_controller *ctlr,
+ 	struct spi_statistics *statm = &ctlr->statistics;
+ 	struct spi_statistics *stats = &msg->spi->statistics;
+ 
+-	spi_set_cs(msg->spi, true);
++	spi_set_cs(msg->spi, true, false);
+ 
+ 	SPI_STATISTICS_INCREMENT_FIELD(statm, messages);
+ 	SPI_STATISTICS_INCREMENT_FIELD(stats, messages);
+@@ -1311,9 +1311,9 @@ fallback_pio:
+ 					 &msg->transfers)) {
+ 				keep_cs = true;
+ 			} else {
+-				spi_set_cs(msg->spi, false);
++				spi_set_cs(msg->spi, false, false);
+ 				_spi_transfer_cs_change_delay(msg, xfer);
+-				spi_set_cs(msg->spi, true);
++				spi_set_cs(msg->spi, true, false);
+ 			}
+ 		}
+ 
+@@ -1322,7 +1322,7 @@ fallback_pio:
+ 
+ out:
+ 	if (ret != 0 || !keep_cs)
+-		spi_set_cs(msg->spi, false);
++		spi_set_cs(msg->spi, false, false);
+ 
+ 	if (msg->status == -EINPROGRESS)
+ 		msg->status = ret;
+@@ -3400,11 +3400,11 @@ int spi_setup(struct spi_device *spi)
+ 		 */
+ 		status = 0;
+ 
+-		spi_set_cs(spi, false);
++		spi_set_cs(spi, false, true);
+ 		pm_runtime_mark_last_busy(spi->controller->dev.parent);
+ 		pm_runtime_put_autosuspend(spi->controller->dev.parent);
+ 	} else {
+-		spi_set_cs(spi, false);
++		spi_set_cs(spi, false, true);
+ 	}
+ 
+ 	mutex_unlock(&spi->controller->io_mutex);
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_fops.c b/drivers/staging/media/atomisp/pci/atomisp_fops.c
+index 453bb69135505..f1e6b25978534 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_fops.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_fops.c
+@@ -221,6 +221,9 @@ int atomisp_q_video_buffers_to_css(struct atomisp_sub_device *asd,
+ 	unsigned long irqflags;
+ 	int err = 0;
+ 
++	if (WARN_ON(css_pipe_id >= IA_CSS_PIPE_ID_NUM))
++		return -EINVAL;
++
+ 	while (pipe->buffers_in_css < ATOMISP_CSS_Q_DEPTH) {
+ 		struct videobuf_buffer *vb;
+ 
+diff --git a/drivers/staging/media/imx/imx-media-capture.c b/drivers/staging/media/imx/imx-media-capture.c
+index c1931eb2540e3..b2f2cb3d6a609 100644
+--- a/drivers/staging/media/imx/imx-media-capture.c
++++ b/drivers/staging/media/imx/imx-media-capture.c
+@@ -557,7 +557,7 @@ static int capture_validate_fmt(struct capture_priv *priv)
+ 		priv->vdev.fmt.fmt.pix.height != f.fmt.pix.height ||
+ 		priv->vdev.cc->cs != cc->cs ||
+ 		priv->vdev.compose.width != compose.width ||
+-		priv->vdev.compose.height != compose.height) ? -EINVAL : 0;
++		priv->vdev.compose.height != compose.height) ? -EPIPE : 0;
+ }
+ 
+ static int capture_start_streaming(struct vb2_queue *vq, unsigned int count)
+diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c
+index 4dc8d9165f634..e0179616a29cf 100644
+--- a/drivers/staging/media/ipu3/ipu3-v4l2.c
++++ b/drivers/staging/media/ipu3/ipu3-v4l2.c
+@@ -686,6 +686,7 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
+ 
+ 	dev_dbg(dev, "IPU3 pipe %u pipe_id = %u", pipe, css_pipe->pipe_id);
+ 
++	css_q = imgu_node_to_queue(node);
+ 	for (i = 0; i < IPU3_CSS_QUEUES; i++) {
+ 		unsigned int inode = imgu_map_node(imgu, i);
+ 
+@@ -693,6 +694,18 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
+ 		if (inode == IMGU_NODE_STAT_3A || inode == IMGU_NODE_PARAMS)
+ 			continue;
+ 
++		/* CSS expects some format on OUT queue */
++		if (i != IPU3_CSS_QUEUE_OUT &&
++		    !imgu_pipe->nodes[inode].enabled) {
++			fmts[i] = NULL;
++			continue;
++		}
++
++		if (i == css_q) {
++			fmts[i] = &f->fmt.pix_mp;
++			continue;
++		}
++
+ 		if (try) {
+ 			fmts[i] = kmemdup(&imgu_pipe->nodes[inode].vdev_fmt.fmt.pix_mp,
+ 					  sizeof(struct v4l2_pix_format_mplane),
+@@ -705,10 +718,6 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
+ 			fmts[i] = &imgu_pipe->nodes[inode].vdev_fmt.fmt.pix_mp;
+ 		}
+ 
+-		/* CSS expects some format on OUT queue */
+-		if (i != IPU3_CSS_QUEUE_OUT &&
+-		    !imgu_pipe->nodes[inode].enabled)
+-			fmts[i] = NULL;
+ 	}
+ 
+ 	if (!try) {
+@@ -725,16 +734,10 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
+ 		rects[IPU3_CSS_RECT_GDC]->height = pad_fmt.height;
+ 	}
+ 
+-	/*
+-	 * imgu doesn't set the node to the value given by user
+-	 * before we return success from this function, so set it here.
+-	 */
+-	css_q = imgu_node_to_queue(node);
+ 	if (!fmts[css_q]) {
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+-	*fmts[css_q] = f->fmt.pix_mp;
+ 
+ 	if (try)
+ 		ret = imgu_css_fmt_try(&imgu->css, fmts, rects, pipe);
+@@ -745,15 +748,18 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
+ 	if (ret < 0)
+ 		goto out;
+ 
+-	if (try)
+-		f->fmt.pix_mp = *fmts[css_q];
+-	else
+-		f->fmt = imgu_pipe->nodes[node].vdev_fmt.fmt;
++	/*
++	 * imgu doesn't set the node to the value given by user
++	 * before we return success from this function, so set it here.
++	 */
++	if (!try)
++		imgu_pipe->nodes[node].vdev_fmt.fmt.pix_mp = f->fmt.pix_mp;
+ 
+ out:
+ 	if (try) {
+ 		for (i = 0; i < IPU3_CSS_QUEUES; i++)
+-			kfree(fmts[i]);
++			if (i != css_q)
++				kfree(fmts[i]);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
+index 723a51a3f4316..f10f0aa6cd374 100644
+--- a/drivers/target/target_core_pscsi.c
++++ b/drivers/target/target_core_pscsi.c
+@@ -620,8 +620,9 @@ static void pscsi_complete_cmd(struct se_cmd *cmd, u8 scsi_status,
+ 			unsigned char *buf;
+ 
+ 			buf = transport_kmap_data_sg(cmd);
+-			if (!buf)
++			if (!buf) {
+ 				; /* XXX: TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE */
++			}
+ 
+ 			if (cdb[0] == MODE_SENSE_10) {
+ 				if (!(buf[3] & 0x80))
+diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c
+index cf4718c6d35da..63542c1cc2914 100644
+--- a/drivers/tee/optee/core.c
++++ b/drivers/tee/optee/core.c
+@@ -79,16 +79,6 @@ int optee_from_msg_param(struct tee_param *params, size_t num_params,
+ 				return rc;
+ 			p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa;
+ 			p->u.memref.shm = shm;
+-
+-			/* Check that the memref is covered by the shm object */
+-			if (p->u.memref.size) {
+-				size_t o = p->u.memref.shm_offs +
+-					   p->u.memref.size - 1;
+-
+-				rc = tee_shm_get_pa(shm, o, NULL);
+-				if (rc)
+-					return rc;
+-			}
+ 			break;
+ 		case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
+ 		case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
+diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
+index ddc166e3a93eb..3f6a69ccc1737 100644
+--- a/drivers/thermal/cpufreq_cooling.c
++++ b/drivers/thermal/cpufreq_cooling.c
+@@ -123,7 +123,7 @@ static u32 cpu_power_to_freq(struct cpufreq_cooling_device *cpufreq_cdev,
+ {
+ 	int i;
+ 
+-	for (i = cpufreq_cdev->max_level; i >= 0; i--) {
++	for (i = cpufreq_cdev->max_level; i > 0; i--) {
+ 		if (power >= cpufreq_cdev->em->table[i].power)
+ 			break;
+ 	}
+diff --git a/drivers/thermal/gov_fair_share.c b/drivers/thermal/gov_fair_share.c
+index aaa07180ab482..645432ce63659 100644
+--- a/drivers/thermal/gov_fair_share.c
++++ b/drivers/thermal/gov_fair_share.c
+@@ -82,6 +82,8 @@ static int fair_share_throttle(struct thermal_zone_device *tz, int trip)
+ 	int total_instance = 0;
+ 	int cur_trip_level = get_trip_level(tz);
+ 
++	mutex_lock(&tz->lock);
++
+ 	list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+ 		if (instance->trip != trip)
+ 			continue;
+@@ -110,6 +112,8 @@ static int fair_share_throttle(struct thermal_zone_device *tz, int trip)
+ 		mutex_unlock(&instance->cdev->lock);
+ 		thermal_cdev_update(cdev);
+ 	}
++
++	mutex_unlock(&tz->lock);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index fea1eeac5b907..d76880ae68c83 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -2382,8 +2382,18 @@ static int gsmld_attach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
+ 		/* Don't register device 0 - this is the control channel and not
+ 		   a usable tty interface */
+ 		base = mux_num_to_base(gsm); /* Base for this MUX */
+-		for (i = 1; i < NUM_DLCI; i++)
+-			tty_register_device(gsm_tty_driver, base + i, NULL);
++		for (i = 1; i < NUM_DLCI; i++) {
++			struct device *dev;
++
++			dev = tty_register_device(gsm_tty_driver,
++							base + i, NULL);
++			if (IS_ERR(dev)) {
++				for (i--; i >= 1; i--)
++					tty_unregister_device(gsm_tty_driver,
++								base + i);
++				return PTR_ERR(dev);
++			}
++		}
+ 	}
+ 	return ret;
+ }
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index d04a162939a4d..8f88ee2a2c8d0 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -1382,6 +1382,7 @@ struct vc_data *vc_deallocate(unsigned int currcons)
+ 		atomic_notifier_call_chain(&vt_notifier_list, VT_DEALLOCATE, &param);
+ 		vcs_remove_sysfs(currcons);
+ 		visual_deinit(vc);
++		con_free_unimap(vc);
+ 		put_pid(vc->vt_pid);
+ 		vc_uniscr_set(vc, NULL);
+ 		kfree(vc->vc_screenbuf);
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 17202b2ee0638..22a86ae4f639c 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -3555,7 +3555,7 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
+ 	u16		portchange, portstatus;
+ 
+ 	if (!test_and_set_bit(port1, hub->child_usage_bits)) {
+-		status = pm_runtime_get_sync(&port_dev->dev);
++		status = pm_runtime_resume_and_get(&port_dev->dev);
+ 		if (status < 0) {
+ 			dev_dbg(&udev->dev, "can't resume usb port, status %d\n",
+ 					status);
+diff --git a/drivers/usb/dwc2/core_intr.c b/drivers/usb/dwc2/core_intr.c
+index 55f1d14fc4148..800c8b6c55ff1 100644
+--- a/drivers/usb/dwc2/core_intr.c
++++ b/drivers/usb/dwc2/core_intr.c
+@@ -307,6 +307,7 @@ static void dwc2_handle_conn_id_status_change_intr(struct dwc2_hsotg *hsotg)
+ static void dwc2_handle_session_req_intr(struct dwc2_hsotg *hsotg)
+ {
+ 	int ret;
++	u32 hprt0;
+ 
+ 	/* Clear interrupt */
+ 	dwc2_writel(hsotg, GINTSTS_SESSREQINT, GINTSTS);
+@@ -327,6 +328,13 @@ static void dwc2_handle_session_req_intr(struct dwc2_hsotg *hsotg)
+ 		 * established
+ 		 */
+ 		dwc2_hsotg_disconnect(hsotg);
++	} else {
++		/* Turn on the port power bit. */
++		hprt0 = dwc2_read_hprt0(hsotg);
++		hprt0 |= HPRT0_PWR;
++		dwc2_writel(hsotg, hprt0, HPRT0);
++		/* Connect hcd after port power is set. */
++		dwc2_hcd_connect(hsotg);
+ 	}
+ }
+ 
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 3101f0dcf6ae8..e07fd5ee8ed95 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -114,6 +114,8 @@ void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode)
+ 	dwc->current_dr_role = mode;
+ }
+ 
++static int dwc3_core_soft_reset(struct dwc3 *dwc);
++
+ static void __dwc3_set_mode(struct work_struct *work)
+ {
+ 	struct dwc3 *dwc = work_to_dwc(work);
+@@ -121,6 +123,8 @@ static void __dwc3_set_mode(struct work_struct *work)
+ 	int ret;
+ 	u32 reg;
+ 
++	mutex_lock(&dwc->mutex);
++
+ 	pm_runtime_get_sync(dwc->dev);
+ 
+ 	if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_OTG)
+@@ -154,6 +158,25 @@ static void __dwc3_set_mode(struct work_struct *work)
+ 		break;
+ 	}
+ 
++	/* For DRD host or device mode only */
++	if (dwc->desired_dr_role != DWC3_GCTL_PRTCAP_OTG) {
++		reg = dwc3_readl(dwc->regs, DWC3_GCTL);
++		reg |= DWC3_GCTL_CORESOFTRESET;
++		dwc3_writel(dwc->regs, DWC3_GCTL, reg);
++
++		/*
++		 * Wait for internal clocks to synchronized. DWC_usb31 and
++		 * DWC_usb32 may need at least 50ms (less for DWC_usb3). To
++		 * keep it consistent across different IPs, let's wait up to
++		 * 100ms before clearing GCTL.CORESOFTRESET.
++		 */
++		msleep(100);
++
++		reg = dwc3_readl(dwc->regs, DWC3_GCTL);
++		reg &= ~DWC3_GCTL_CORESOFTRESET;
++		dwc3_writel(dwc->regs, DWC3_GCTL, reg);
++	}
++
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 
+ 	dwc3_set_prtcap(dwc, dwc->desired_dr_role);
+@@ -178,6 +201,8 @@ static void __dwc3_set_mode(struct work_struct *work)
+ 		}
+ 		break;
+ 	case DWC3_GCTL_PRTCAP_DEVICE:
++		dwc3_core_soft_reset(dwc);
++
+ 		dwc3_event_buffers_setup(dwc);
+ 
+ 		if (dwc->usb2_phy)
+@@ -200,6 +225,7 @@ static void __dwc3_set_mode(struct work_struct *work)
+ out:
+ 	pm_runtime_mark_last_busy(dwc->dev);
+ 	pm_runtime_put_autosuspend(dwc->dev);
++	mutex_unlock(&dwc->mutex);
+ }
+ 
+ void dwc3_set_mode(struct dwc3 *dwc, u32 mode)
+@@ -1297,6 +1323,8 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+ 				"snps,usb3_lpm_capable");
+ 	dwc->usb2_lpm_disable = device_property_read_bool(dev,
+ 				"snps,usb2-lpm-disable");
++	dwc->usb2_gadget_lpm_disable = device_property_read_bool(dev,
++				"snps,usb2-gadget-lpm-disable");
+ 	device_property_read_u8(dev, "snps,rx-thr-num-pkt-prd",
+ 				&rx_thr_num_pkt_prd);
+ 	device_property_read_u8(dev, "snps,rx-max-burst-prd",
+@@ -1527,6 +1555,7 @@ static int dwc3_probe(struct platform_device *pdev)
+ 	dwc3_cache_hwparams(dwc);
+ 
+ 	spin_lock_init(&dwc->lock);
++	mutex_init(&dwc->mutex);
+ 
+ 	pm_runtime_set_active(dev);
+ 	pm_runtime_use_autosuspend(dev);
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 1b241f937d8f4..79e1b82e5e057 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -13,6 +13,7 @@
+ 
+ #include <linux/device.h>
+ #include <linux/spinlock.h>
++#include <linux/mutex.h>
+ #include <linux/ioport.h>
+ #include <linux/list.h>
+ #include <linux/bitops.h>
+@@ -942,6 +943,7 @@ struct dwc3_scratchpad_array {
+  * @scratch_addr: dma address of scratchbuf
+  * @ep0_in_setup: one control transfer is completed and enter setup phase
+  * @lock: for synchronizing
++ * @mutex: for mode switching
+  * @dev: pointer to our struct device
+  * @sysdev: pointer to the DMA-capable device
+  * @xhci: pointer to our xHCI child
+@@ -1026,7 +1028,8 @@ struct dwc3_scratchpad_array {
+  * @dis_start_transfer_quirk: set if start_transfer failure SW workaround is
+  *			not needed for DWC_usb31 version 1.70a-ea06 and below
+  * @usb3_lpm_capable: set if hadrware supports Link Power Management
+- * @usb2_lpm_disable: set to disable usb2 lpm
++ * @usb2_lpm_disable: set to disable usb2 lpm for host
++ * @usb2_gadget_lpm_disable: set to disable usb2 lpm for gadget
+  * @disable_scramble_quirk: set if we enable the disable scramble quirk
+  * @u2exit_lfps_quirk: set if we enable u2exit lfps quirk
+  * @u2ss_inp3_quirk: set if we enable P3 OK for U2/SS Inactive quirk
+@@ -1077,6 +1080,9 @@ struct dwc3 {
+ 	/* device lock */
+ 	spinlock_t		lock;
+ 
++	/* mode switching lock */
++	struct mutex		mutex;
++
+ 	struct device		*dev;
+ 	struct device		*sysdev;
+ 
+@@ -1227,6 +1233,7 @@ struct dwc3 {
+ 	unsigned		dis_start_transfer_quirk:1;
+ 	unsigned		usb3_lpm_capable:1;
+ 	unsigned		usb2_lpm_disable:1;
++	unsigned		usb2_gadget_lpm_disable:1;
+ 
+ 	unsigned		disable_scramble_quirk:1;
+ 	unsigned		u2exit_lfps_quirk:1;
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 65ff41e3a18eb..84d1487e9f060 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -308,13 +308,12 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
+ 	}
+ 
+ 	if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_STARTTRANSFER) {
+-		int		needs_wakeup;
++		int link_state;
+ 
+-		needs_wakeup = (dwc->link_state == DWC3_LINK_STATE_U1 ||
+-				dwc->link_state == DWC3_LINK_STATE_U2 ||
+-				dwc->link_state == DWC3_LINK_STATE_U3);
+-
+-		if (unlikely(needs_wakeup)) {
++		link_state = dwc3_gadget_get_link_state(dwc);
++		if (link_state == DWC3_LINK_STATE_U1 ||
++		    link_state == DWC3_LINK_STATE_U2 ||
++		    link_state == DWC3_LINK_STATE_U3) {
+ 			ret = __dwc3_gadget_wakeup(dwc);
+ 			dev_WARN_ONCE(dwc->dev, ret, "wakeup failed --> %d\n",
+ 					ret);
+@@ -608,12 +607,14 @@ static int dwc3_gadget_set_ep_config(struct dwc3_ep *dep, unsigned int action)
+ 		u8 bInterval_m1;
+ 
+ 		/*
+-		 * Valid range for DEPCFG.bInterval_m1 is from 0 to 13, and it
+-		 * must be set to 0 when the controller operates in full-speed.
++		 * Valid range for DEPCFG.bInterval_m1 is from 0 to 13.
++		 *
++		 * NOTE: The programming guide incorrectly stated bInterval_m1
++		 * must be set to 0 when operating in fullspeed. Internally the
++		 * controller does not have this limitation. See DWC_usb3x
++		 * programming guide section 3.2.2.1.
+ 		 */
+ 		bInterval_m1 = min_t(u8, desc->bInterval - 1, 13);
+-		if (dwc->gadget->speed == USB_SPEED_FULL)
+-			bInterval_m1 = 0;
+ 
+ 		if (usb_endpoint_type(desc) == USB_ENDPOINT_XFER_INT &&
+ 		    dwc->gadget->speed == USB_SPEED_FULL)
+@@ -1973,6 +1974,8 @@ static int __dwc3_gadget_wakeup(struct dwc3 *dwc)
+ 	case DWC3_LINK_STATE_RESET:
+ 	case DWC3_LINK_STATE_RX_DET:	/* in HS, means Early Suspend */
+ 	case DWC3_LINK_STATE_U3:	/* in HS, means SUSPEND */
++	case DWC3_LINK_STATE_U2:	/* in HS, means Sleep (L1) */
++	case DWC3_LINK_STATE_U1:
+ 	case DWC3_LINK_STATE_RESUME:
+ 		break;
+ 	default:
+@@ -3267,6 +3270,15 @@ static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
+ {
+ 	u32			reg;
+ 
++	/*
++	 * Ideally, dwc3_reset_gadget() would trigger the function
++	 * drivers to stop any active transfers through ep disable.
++	 * However, for functions which defer ep disable, such as mass
++	 * storage, we will need to rely on the call to stop active
++	 * transfers here, and avoid allowing of request queuing.
++	 */
++	dwc->connected = false;
++
+ 	/*
+ 	 * WORKAROUND: DWC3 revisions <1.88a have an issue which
+ 	 * would cause a missing Disconnect Event if there's a
+@@ -3389,6 +3401,7 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
+ 	/* Enable USB2 LPM Capability */
+ 
+ 	if (!DWC3_VER_IS_WITHIN(DWC3, ANY, 194A) &&
++	    !dwc->usb2_gadget_lpm_disable &&
+ 	    (speed != DWC3_DSTS_SUPERSPEED) &&
+ 	    (speed != DWC3_DSTS_SUPERSPEED_PLUS)) {
+ 		reg = dwc3_readl(dwc->regs, DWC3_DCFG);
+@@ -3415,6 +3428,12 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
+ 
+ 		dwc3_gadget_dctl_write_safe(dwc, reg);
+ 	} else {
++		if (dwc->usb2_gadget_lpm_disable) {
++			reg = dwc3_readl(dwc->regs, DWC3_DCFG);
++			reg &= ~DWC3_DCFG_LPM_CAP;
++			dwc3_writel(dwc->regs, DWC3_DCFG, reg);
++		}
++
+ 		reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+ 		reg &= ~DWC3_DCTL_HIRD_THRES_MASK;
+ 		dwc3_gadget_dctl_write_safe(dwc, reg);
+@@ -3862,7 +3881,7 @@ int dwc3_gadget_init(struct dwc3 *dwc)
+ 	dwc->gadget->speed		= USB_SPEED_UNKNOWN;
+ 	dwc->gadget->sg_supported	= true;
+ 	dwc->gadget->name		= "dwc3-gadget";
+-	dwc->gadget->lpm_capable	= true;
++	dwc->gadget->lpm_capable	= !dwc->usb2_gadget_lpm_disable;
+ 
+ 	/*
+ 	 * FIXME We might be setting max_speed to <SUPER, however versions
+diff --git a/drivers/usb/gadget/config.c b/drivers/usb/gadget/config.c
+index 2d115353424c2..8bb25773b61e9 100644
+--- a/drivers/usb/gadget/config.c
++++ b/drivers/usb/gadget/config.c
+@@ -194,9 +194,13 @@ EXPORT_SYMBOL_GPL(usb_assign_descriptors);
+ void usb_free_all_descriptors(struct usb_function *f)
+ {
+ 	usb_free_descriptors(f->fs_descriptors);
++	f->fs_descriptors = NULL;
+ 	usb_free_descriptors(f->hs_descriptors);
++	f->hs_descriptors = NULL;
+ 	usb_free_descriptors(f->ss_descriptors);
++	f->ss_descriptors = NULL;
+ 	usb_free_descriptors(f->ssp_descriptors);
++	f->ssp_descriptors = NULL;
+ }
+ EXPORT_SYMBOL_GPL(usb_free_all_descriptors);
+ 
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index f3443347874d2..ffe67d836b0ce 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -2639,6 +2639,7 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
+ 
+ 	do { /* lang_count > 0 so we can use do-while */
+ 		unsigned needed = needed_count;
++		u32 str_per_lang = str_count;
+ 
+ 		if (unlikely(len < 3))
+ 			goto error_free;
+@@ -2674,7 +2675,7 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
+ 
+ 			data += length + 1;
+ 			len -= length + 1;
+-		} while (--str_count);
++		} while (--str_per_lang);
+ 
+ 		s->id = 0;   /* terminator */
+ 		s->s = NULL;
+diff --git a/drivers/usb/gadget/function/f_uac1.c b/drivers/usb/gadget/function/f_uac1.c
+index 560382e0a8f38..e65f474ad7b3b 100644
+--- a/drivers/usb/gadget/function/f_uac1.c
++++ b/drivers/usb/gadget/function/f_uac1.c
+@@ -19,6 +19,9 @@
+ #include "u_audio.h"
+ #include "u_uac1.h"
+ 
++/* UAC1 spec: 3.7.2.3 Audio Channel Cluster Format */
++#define UAC1_CHANNEL_MASK 0x0FFF
++
+ struct f_uac1 {
+ 	struct g_audio g_audio;
+ 	u8 ac_intf, as_in_intf, as_out_intf;
+@@ -30,6 +33,11 @@ static inline struct f_uac1 *func_to_uac1(struct usb_function *f)
+ 	return container_of(f, struct f_uac1, g_audio.func);
+ }
+ 
++static inline struct f_uac1_opts *g_audio_to_uac1_opts(struct g_audio *audio)
++{
++	return container_of(audio->func.fi, struct f_uac1_opts, func_inst);
++}
++
+ /*
+  * DESCRIPTORS ... most are static, but strings and full
+  * configuration descriptors are built on demand.
+@@ -505,11 +513,42 @@ static void f_audio_disable(struct usb_function *f)
+ 
+ /*-------------------------------------------------------------------------*/
+ 
++static int f_audio_validate_opts(struct g_audio *audio, struct device *dev)
++{
++	struct f_uac1_opts *opts = g_audio_to_uac1_opts(audio);
++
++	if (!opts->p_chmask && !opts->c_chmask) {
++		dev_err(dev, "Error: no playback and capture channels\n");
++		return -EINVAL;
++	} else if (opts->p_chmask & ~UAC1_CHANNEL_MASK) {
++		dev_err(dev, "Error: unsupported playback channels mask\n");
++		return -EINVAL;
++	} else if (opts->c_chmask & ~UAC1_CHANNEL_MASK) {
++		dev_err(dev, "Error: unsupported capture channels mask\n");
++		return -EINVAL;
++	} else if ((opts->p_ssize < 1) || (opts->p_ssize > 4)) {
++		dev_err(dev, "Error: incorrect playback sample size\n");
++		return -EINVAL;
++	} else if ((opts->c_ssize < 1) || (opts->c_ssize > 4)) {
++		dev_err(dev, "Error: incorrect capture sample size\n");
++		return -EINVAL;
++	} else if (!opts->p_srate) {
++		dev_err(dev, "Error: incorrect playback sampling rate\n");
++		return -EINVAL;
++	} else if (!opts->c_srate) {
++		dev_err(dev, "Error: incorrect capture sampling rate\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ /* audio function driver setup/binding */
+ static int f_audio_bind(struct usb_configuration *c, struct usb_function *f)
+ {
+ 	struct usb_composite_dev	*cdev = c->cdev;
+ 	struct usb_gadget		*gadget = cdev->gadget;
++	struct device			*dev = &gadget->dev;
+ 	struct f_uac1			*uac1 = func_to_uac1(f);
+ 	struct g_audio			*audio = func_to_g_audio(f);
+ 	struct f_uac1_opts		*audio_opts;
+@@ -519,6 +558,10 @@ static int f_audio_bind(struct usb_configuration *c, struct usb_function *f)
+ 	int				rate;
+ 	int				status;
+ 
++	status = f_audio_validate_opts(audio, dev);
++	if (status)
++		return status;
++
+ 	audio_opts = container_of(f->fi, struct f_uac1_opts, func_inst);
+ 
+ 	us = usb_gstrings_attach(cdev, uac1_strings, ARRAY_SIZE(strings_uac1));
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index 6f03e944e0e31..dd960cea642f3 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -14,6 +14,9 @@
+ #include "u_audio.h"
+ #include "u_uac2.h"
+ 
++/* UAC2 spec: 4.1 Audio Channel Cluster Descriptor */
++#define UAC2_CHANNEL_MASK 0x07FFFFFF
++
+ /*
+  * The driver implements a simple UAC_2 topology.
+  * USB-OUT -> IT_1 -> OT_3 -> ALSA_Capture
+@@ -604,6 +607,36 @@ static void setup_descriptor(struct f_uac2_opts *opts)
+ 	hs_audio_desc[i] = NULL;
+ }
+ 
++static int afunc_validate_opts(struct g_audio *agdev, struct device *dev)
++{
++	struct f_uac2_opts *opts = g_audio_to_uac2_opts(agdev);
++
++	if (!opts->p_chmask && !opts->c_chmask) {
++		dev_err(dev, "Error: no playback and capture channels\n");
++		return -EINVAL;
++	} else if (opts->p_chmask & ~UAC2_CHANNEL_MASK) {
++		dev_err(dev, "Error: unsupported playback channels mask\n");
++		return -EINVAL;
++	} else if (opts->c_chmask & ~UAC2_CHANNEL_MASK) {
++		dev_err(dev, "Error: unsupported capture channels mask\n");
++		return -EINVAL;
++	} else if ((opts->p_ssize < 1) || (opts->p_ssize > 4)) {
++		dev_err(dev, "Error: incorrect playback sample size\n");
++		return -EINVAL;
++	} else if ((opts->c_ssize < 1) || (opts->c_ssize > 4)) {
++		dev_err(dev, "Error: incorrect capture sample size\n");
++		return -EINVAL;
++	} else if (!opts->p_srate) {
++		dev_err(dev, "Error: incorrect playback sampling rate\n");
++		return -EINVAL;
++	} else if (!opts->c_srate) {
++		dev_err(dev, "Error: incorrect capture sampling rate\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static int
+ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
+ {
+@@ -612,11 +645,13 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
+ 	struct usb_composite_dev *cdev = cfg->cdev;
+ 	struct usb_gadget *gadget = cdev->gadget;
+ 	struct device *dev = &gadget->dev;
+-	struct f_uac2_opts *uac2_opts;
++	struct f_uac2_opts *uac2_opts = g_audio_to_uac2_opts(agdev);
+ 	struct usb_string *us;
+ 	int ret;
+ 
+-	uac2_opts = container_of(fn->fi, struct f_uac2_opts, func_inst);
++	ret = afunc_validate_opts(agdev, dev);
++	if (ret)
++		return ret;
+ 
+ 	us = usb_gstrings_attach(cdev, fn_strings, ARRAY_SIZE(strings_fn));
+ 	if (IS_ERR(us))
+diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
+index 44b4352a26765..f48a00e497945 100644
+--- a/drivers/usb/gadget/function/f_uvc.c
++++ b/drivers/usb/gadget/function/f_uvc.c
+@@ -633,7 +633,12 @@ uvc_function_bind(struct usb_configuration *c, struct usb_function *f)
+ 
+ 	uvc_hs_streaming_ep.wMaxPacketSize =
+ 		cpu_to_le16(max_packet_size | ((max_packet_mult - 1) << 11));
+-	uvc_hs_streaming_ep.bInterval = opts->streaming_interval;
++
++	/* A high-bandwidth endpoint must specify a bInterval value of 1 */
++	if (max_packet_mult > 1)
++		uvc_hs_streaming_ep.bInterval = 1;
++	else
++		uvc_hs_streaming_ep.bInterval = opts->streaming_interval;
+ 
+ 	uvc_ss_streaming_ep.wMaxPacketSize = cpu_to_le16(max_packet_size);
+ 	uvc_ss_streaming_ep.bInterval = opts->streaming_interval;
+@@ -817,6 +822,7 @@ static struct usb_function_instance *uvc_alloc_inst(void)
+ 	pd->bmControls[0]		= 1;
+ 	pd->bmControls[1]		= 0;
+ 	pd->iProcessing			= 0;
++	pd->bmVideoStandards		= 0;
+ 
+ 	od = &opts->uvc_output_terminal;
+ 	od->bLength			= UVC_DT_OUTPUT_TERMINAL_SIZE;
+diff --git a/drivers/usb/gadget/legacy/webcam.c b/drivers/usb/gadget/legacy/webcam.c
+index a9f8eb8e1c767..2c9eab2b863d2 100644
+--- a/drivers/usb/gadget/legacy/webcam.c
++++ b/drivers/usb/gadget/legacy/webcam.c
+@@ -125,6 +125,7 @@ static const struct uvc_processing_unit_descriptor uvc_processing = {
+ 	.bmControls[0]		= 1,
+ 	.bmControls[1]		= 0,
+ 	.iProcessing		= 0,
++	.bmVideoStandards	= 0,
+ };
+ 
+ static const struct uvc_output_terminal_descriptor uvc_output_terminal = {
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index 17704ee2d7f54..92d01ddaee0d4 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -901,6 +901,21 @@ static int dummy_pullup(struct usb_gadget *_gadget, int value)
+ 	spin_lock_irqsave(&dum->lock, flags);
+ 	dum->pullup = (value != 0);
+ 	set_link_state(dum_hcd);
++	if (value == 0) {
++		/*
++		 * Emulate synchronize_irq(): wait for callbacks to finish.
++		 * This seems to be the best place to emulate the call to
++		 * synchronize_irq() that's in usb_gadget_remove_driver().
++		 * Doing it in dummy_udc_stop() would be too late since it
++		 * is called after the unbind callback and unbind shouldn't
++		 * be invoked until all the other callbacks are finished.
++		 */
++		while (dum->callback_usage > 0) {
++			spin_unlock_irqrestore(&dum->lock, flags);
++			usleep_range(1000, 2000);
++			spin_lock_irqsave(&dum->lock, flags);
++		}
++	}
+ 	spin_unlock_irqrestore(&dum->lock, flags);
+ 
+ 	usb_hcd_poll_rh_status(dummy_hcd_to_hcd(dum_hcd));
+@@ -1002,14 +1017,6 @@ static int dummy_udc_stop(struct usb_gadget *g)
+ 	spin_lock_irq(&dum->lock);
+ 	dum->ints_enabled = 0;
+ 	stop_activity(dum);
+-
+-	/* emulate synchronize_irq(): wait for callbacks to finish */
+-	while (dum->callback_usage > 0) {
+-		spin_unlock_irq(&dum->lock);
+-		usleep_range(1000, 2000);
+-		spin_lock_irq(&dum->lock);
+-	}
+-
+ 	dum->driver = NULL;
+ 	spin_unlock_irq(&dum->lock);
+ 
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index 580bef8eb4cbc..2319c9737c2bd 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -3883,7 +3883,7 @@ static int tegra_xudc_remove(struct platform_device *pdev)
+ 
+ 	pm_runtime_get_sync(xudc->dev);
+ 
+-	cancel_delayed_work(&xudc->plc_reset_work);
++	cancel_delayed_work_sync(&xudc->plc_reset_work);
+ 	cancel_work_sync(&xudc->usb_role_sw_work);
+ 
+ 	usb_del_gadget_udc(&xudc->gadget);
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 138ba4528dd3a..8ce043e6ed872 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -2143,6 +2143,15 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
+ 
+ 	if (major_revision == 0x03) {
+ 		rhub = &xhci->usb3_rhub;
++		/*
++		 * Some hosts incorrectly use sub-minor version for minor
++		 * version (i.e. 0x02 instead of 0x20 for bcdUSB 0x320 and 0x01
++		 * for bcdUSB 0x310). Since there is no USB release with sub
++		 * minor version 0x301 to 0x309, we can assume that they are
++		 * incorrect and fix it here.
++		 */
++		if (minor_revision > 0x00 && minor_revision < 0x10)
++			minor_revision <<= 4;
+ 	} else if (major_revision <= 0x02) {
+ 		rhub = &xhci->usb2_rhub;
+ 	} else {
+@@ -2254,6 +2263,9 @@ static void xhci_create_rhub_port_array(struct xhci_hcd *xhci,
+ 		return;
+ 	rhub->ports = kcalloc_node(rhub->num_ports, sizeof(*rhub->ports),
+ 			flags, dev_to_node(dev));
++	if (!rhub->ports)
++		return;
++
+ 	for (i = 0; i < HCS_MAX_PORTS(xhci->hcs_params1); i++) {
+ 		if (xhci->hw_ports[i].rhub != rhub ||
+ 		    xhci->hw_ports[i].hcd_portnum == DUPLICATE_ENTRY)
+diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
+index 2f27dc0d9c6bd..1c331577fca92 100644
+--- a/drivers/usb/host/xhci-mtk.c
++++ b/drivers/usb/host/xhci-mtk.c
+@@ -397,6 +397,8 @@ static void xhci_mtk_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	xhci->quirks |= XHCI_SPURIOUS_SUCCESS;
+ 	if (mtk->lpm_support)
+ 		xhci->quirks |= XHCI_LPM_SUPPORT;
++	if (mtk->u2_lpm_disable)
++		xhci->quirks |= XHCI_HW_LPM_DISABLE;
+ 
+ 	/*
+ 	 * MTK xHCI 0.96: PSA is 1 by default even if doesn't support stream,
+@@ -469,6 +471,7 @@ static int xhci_mtk_probe(struct platform_device *pdev)
+ 		return ret;
+ 
+ 	mtk->lpm_support = of_property_read_bool(node, "usb3-lpm-capable");
++	mtk->u2_lpm_disable = of_property_read_bool(node, "usb2-lpm-disable");
+ 	/* optional property, ignore the error if it does not exist */
+ 	of_property_read_u32(node, "mediatek,u3p-dis-msk",
+ 			     &mtk->u3p_dis_msk);
+diff --git a/drivers/usb/host/xhci-mtk.h b/drivers/usb/host/xhci-mtk.h
+index cbb09dfea62e0..080109012b9ac 100644
+--- a/drivers/usb/host/xhci-mtk.h
++++ b/drivers/usb/host/xhci-mtk.h
+@@ -150,6 +150,7 @@ struct xhci_hcd_mtk {
+ 	struct phy **phys;
+ 	int num_phys;
+ 	bool lpm_support;
++	bool u2_lpm_disable;
+ 	/* usb remote wakeup */
+ 	bool uwk_en;
+ 	struct regmap *uwk;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index c449de6164b18..dbe5553872ff0 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -228,6 +228,7 @@ static void xhci_zero_64b_regs(struct xhci_hcd *xhci)
+ 	struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
+ 	int err, i;
+ 	u64 val;
++	u32 intrs;
+ 
+ 	/*
+ 	 * Some Renesas controllers get into a weird state if they are
+@@ -266,7 +267,10 @@ static void xhci_zero_64b_regs(struct xhci_hcd *xhci)
+ 	if (upper_32_bits(val))
+ 		xhci_write_64(xhci, 0, &xhci->op_regs->cmd_ring);
+ 
+-	for (i = 0; i < HCS_MAX_INTRS(xhci->hcs_params1); i++) {
++	intrs = min_t(u32, HCS_MAX_INTRS(xhci->hcs_params1),
++		      ARRAY_SIZE(xhci->run_regs->ir_set));
++
++	for (i = 0; i < intrs; i++) {
+ 		struct xhci_intr_reg __iomem *ir;
+ 
+ 		ir = &xhci->run_regs->ir_set[i];
+@@ -3227,6 +3231,14 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
+ 
+ 	/* config ep command clears toggle if add and drop ep flags are set */
+ 	ctrl_ctx = xhci_get_input_control_ctx(cfg_cmd->in_ctx);
++	if (!ctrl_ctx) {
++		spin_unlock_irqrestore(&xhci->lock, flags);
++		xhci_free_command(xhci, cfg_cmd);
++		xhci_warn(xhci, "%s: Could not get input context, bad type.\n",
++				__func__);
++		goto cleanup;
++	}
++
+ 	xhci_setup_input_ctx_for_config_ep(xhci, cfg_cmd->in_ctx, vdev->out_ctx,
+ 					   ctrl_ctx, ep_flag, ep_flag);
+ 	xhci_endpoint_copy(xhci, cfg_cmd->in_ctx, vdev->out_ctx, ep_index);
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index fc0457db62e1a..8f09a387b7738 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -2070,7 +2070,7 @@ static void musb_irq_work(struct work_struct *data)
+ 	struct musb *musb = container_of(data, struct musb, irq_work.work);
+ 	int error;
+ 
+-	error = pm_runtime_get_sync(musb->controller);
++	error = pm_runtime_resume_and_get(musb->controller);
+ 	if (error < 0) {
+ 		dev_err(musb->controller, "Could not enable: %i\n", error);
+ 
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index 84e5949bc8617..80184153ac7d9 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -997,6 +997,7 @@ static int vhost_vdpa_mmap(struct file *file, struct vm_area_struct *vma)
+ 	if (vma->vm_end - vma->vm_start != notify.size)
+ 		return -ENOTSUPP;
+ 
++	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+ 	vma->vm_ops = &vhost_vdpa_vm_ops;
+ 	return 0;
+ }
+diff --git a/drivers/video/backlight/qcom-wled.c b/drivers/video/backlight/qcom-wled.c
+index 3bc7800eb0a93..cd11c57764381 100644
+--- a/drivers/video/backlight/qcom-wled.c
++++ b/drivers/video/backlight/qcom-wled.c
+@@ -336,19 +336,19 @@ static int wled3_sync_toggle(struct wled *wled)
+ 	unsigned int mask = GENMASK(wled->max_string_count - 1, 0);
+ 
+ 	rc = regmap_update_bits(wled->regmap,
+-				wled->ctrl_addr + WLED3_SINK_REG_SYNC,
++				wled->sink_addr + WLED3_SINK_REG_SYNC,
+ 				mask, mask);
+ 	if (rc < 0)
+ 		return rc;
+ 
+ 	rc = regmap_update_bits(wled->regmap,
+-				wled->ctrl_addr + WLED3_SINK_REG_SYNC,
++				wled->sink_addr + WLED3_SINK_REG_SYNC,
+ 				mask, WLED3_SINK_REG_SYNC_CLEAR);
+ 
+ 	return rc;
+ }
+ 
+-static int wled5_sync_toggle(struct wled *wled)
++static int wled5_mod_sync_toggle(struct wled *wled)
+ {
+ 	int rc;
+ 	u8 val;
+@@ -445,10 +445,23 @@ static int wled_update_status(struct backlight_device *bl)
+ 			goto unlock_mutex;
+ 		}
+ 
+-		rc = wled->wled_sync_toggle(wled);
+-		if (rc < 0) {
+-			dev_err(wled->dev, "wled sync failed rc:%d\n", rc);
+-			goto unlock_mutex;
++		if (wled->version < 5) {
++			rc = wled->wled_sync_toggle(wled);
++			if (rc < 0) {
++				dev_err(wled->dev, "wled sync failed rc:%d\n", rc);
++				goto unlock_mutex;
++			}
++		} else {
++			/*
++			 * For WLED5 toggling the MOD_SYNC_BIT updates the
++			 * brightness
++			 */
++			rc = wled5_mod_sync_toggle(wled);
++			if (rc < 0) {
++				dev_err(wled->dev, "wled mod sync failed rc:%d\n",
++					rc);
++				goto unlock_mutex;
++			}
+ 		}
+ 	}
+ 
+@@ -1459,7 +1472,7 @@ static int wled_configure(struct wled *wled)
+ 		size = ARRAY_SIZE(wled5_opts);
+ 		*cfg = wled5_config_defaults;
+ 		wled->wled_set_brightness = wled5_set_brightness;
+-		wled->wled_sync_toggle = wled5_sync_toggle;
++		wled->wled_sync_toggle = wled3_sync_toggle;
+ 		wled->wled_cabc_config = wled5_cabc_config;
+ 		wled->wled_ovp_delay = wled5_ovp_delay;
+ 		wled->wled_auto_detection_required =
+diff --git a/drivers/video/fbdev/core/fbcmap.c b/drivers/video/fbdev/core/fbcmap.c
+index e5ae33c1a8e84..e8a17fb715ace 100644
+--- a/drivers/video/fbdev/core/fbcmap.c
++++ b/drivers/video/fbdev/core/fbcmap.c
+@@ -101,17 +101,17 @@ int fb_alloc_cmap_gfp(struct fb_cmap *cmap, int len, int transp, gfp_t flags)
+ 		if (!len)
+ 			return 0;
+ 
+-		cmap->red = kmalloc(size, flags);
++		cmap->red = kzalloc(size, flags);
+ 		if (!cmap->red)
+ 			goto fail;
+-		cmap->green = kmalloc(size, flags);
++		cmap->green = kzalloc(size, flags);
+ 		if (!cmap->green)
+ 			goto fail;
+-		cmap->blue = kmalloc(size, flags);
++		cmap->blue = kzalloc(size, flags);
+ 		if (!cmap->blue)
+ 			goto fail;
+ 		if (transp) {
+-			cmap->transp = kmalloc(size, flags);
++			cmap->transp = kzalloc(size, flags);
+ 			if (!cmap->transp)
+ 				goto fail;
+ 		} else {
+diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c b/drivers/virt/nitro_enclaves/ne_misc_dev.c
+index f1964ea4b8269..e21e1e86ad15f 100644
+--- a/drivers/virt/nitro_enclaves/ne_misc_dev.c
++++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
+@@ -1524,7 +1524,8 @@ static const struct file_operations ne_enclave_fops = {
+  *			  enclave file descriptor to be further used for enclave
+  *			  resources handling e.g. memory regions and CPUs.
+  * @ne_pci_dev :	Private data associated with the PCI device.
+- * @slot_uid:		Generated unique slot id associated with an enclave.
++ * @slot_uid:		User pointer to store the generated unique slot id
++ *			associated with an enclave to.
+  *
+  * Context: Process context. This function is called with the ne_pci_dev enclave
+  *	    mutex held.
+@@ -1532,7 +1533,7 @@ static const struct file_operations ne_enclave_fops = {
+  * * Enclave fd on success.
+  * * Negative return value on failure.
+  */
+-static int ne_create_vm_ioctl(struct ne_pci_dev *ne_pci_dev, u64 *slot_uid)
++static int ne_create_vm_ioctl(struct ne_pci_dev *ne_pci_dev, u64 __user *slot_uid)
+ {
+ 	struct ne_pci_dev_cmd_reply cmd_reply = {};
+ 	int enclave_fd = -1;
+@@ -1634,7 +1635,18 @@ static int ne_create_vm_ioctl(struct ne_pci_dev *ne_pci_dev, u64 *slot_uid)
+ 
+ 	list_add(&ne_enclave->enclave_list_entry, &ne_pci_dev->enclaves_list);
+ 
+-	*slot_uid = ne_enclave->slot_uid;
++	if (copy_to_user(slot_uid, &ne_enclave->slot_uid, sizeof(ne_enclave->slot_uid))) {
++		/*
++		 * As we're holding the only reference to 'enclave_file', fput()
++		 * will call ne_enclave_release() which will do a proper cleanup
++		 * of all so far allocated resources, leaving only the unused fd
++		 * for us to free.
++		 */
++		fput(enclave_file);
++		put_unused_fd(enclave_fd);
++
++		return -EFAULT;
++	}
+ 
+ 	fd_install(enclave_fd, enclave_file);
+ 
+@@ -1671,34 +1683,13 @@ static long ne_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	switch (cmd) {
+ 	case NE_CREATE_VM: {
+ 		int enclave_fd = -1;
+-		struct file *enclave_file = NULL;
+ 		struct ne_pci_dev *ne_pci_dev = ne_devs.ne_pci_dev;
+-		int rc = -EINVAL;
+-		u64 slot_uid = 0;
++		u64 __user *slot_uid = (void __user *)arg;
+ 
+ 		mutex_lock(&ne_pci_dev->enclaves_list_mutex);
+-
+-		enclave_fd = ne_create_vm_ioctl(ne_pci_dev, &slot_uid);
+-		if (enclave_fd < 0) {
+-			rc = enclave_fd;
+-
+-			mutex_unlock(&ne_pci_dev->enclaves_list_mutex);
+-
+-			return rc;
+-		}
+-
++		enclave_fd = ne_create_vm_ioctl(ne_pci_dev, slot_uid);
+ 		mutex_unlock(&ne_pci_dev->enclaves_list_mutex);
+ 
+-		if (copy_to_user((void __user *)arg, &slot_uid, sizeof(slot_uid))) {
+-			enclave_file = fget(enclave_fd);
+-			/* Decrement file refs to have release() called. */
+-			fput(enclave_file);
+-			fput(enclave_file);
+-			put_unused_fd(enclave_fd);
+-
+-			return -EFAULT;
+-		}
+-
+ 		return enclave_fd;
+ 	}
+ 
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index eeface30facd3..7bdc86b74f152 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -80,10 +80,15 @@ static int compression_compress_pages(int type, struct list_head *ws,
+ 	case BTRFS_COMPRESS_NONE:
+ 	default:
+ 		/*
+-		 * This can't happen, the type is validated several times
+-		 * before we get here. As a sane fallback, return what the
+-		 * callers will understand as 'no compression happened'.
++		 * This can happen when compression races with remount setting
++		 * it to 'no compress', while caller doesn't call
++		 * inode_need_compress() to check if we really need to
++		 * compress.
++		 *
++		 * Not a big deal, just need to inform caller that we
++		 * haven't allocated any pages yet.
+ 		 */
++		*out_pages = 0;
+ 		return -E2BIG;
+ 	}
+ }
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 9faf15bd5a548..519cf145f9bd1 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -1367,10 +1367,30 @@ get_old_root(struct btrfs_root *root, u64 time_seq)
+ 				   "failed to read tree block %llu from get_old_root",
+ 				   logical);
+ 		} else {
++			struct tree_mod_elem *tm2;
++
+ 			btrfs_tree_read_lock(old);
+ 			eb = btrfs_clone_extent_buffer(old);
++			/*
++			 * After the lookup for the most recent tree mod operation
++			 * above and before we locked and cloned the extent buffer
++			 * 'old', a new tree mod log operation may have been added.
++			 * So lookup for a more recent one to make sure the number
++			 * of mod log operations we replay is consistent with the
++			 * number of items we have in the cloned extent buffer,
++			 * otherwise we can hit a BUG_ON when rewinding the extent
++			 * buffer.
++			 */
++			tm2 = tree_mod_log_search(fs_info, logical, time_seq);
+ 			btrfs_tree_read_unlock(old);
+ 			free_extent_buffer(old);
++			ASSERT(tm2);
++			ASSERT(tm2 == tm || tm2->seq > tm->seq);
++			if (!tm2 || tm2->seq < tm->seq) {
++				free_extent_buffer(eb);
++				return NULL;
++			}
++			tm = tm2;
+ 		}
+ 	} else if (old_root) {
+ 		eb_root_owner = btrfs_header_owner(eb_root);
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index f5135314e4b39..040db0dfba264 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -678,8 +678,6 @@ static noinline int create_subvol(struct inode *dir,
+ 	btrfs_set_root_otransid(root_item, trans->transid);
+ 
+ 	btrfs_tree_unlock(leaf);
+-	free_extent_buffer(leaf);
+-	leaf = NULL;
+ 
+ 	btrfs_set_root_dirid(root_item, new_dirid);
+ 
+@@ -688,8 +686,22 @@ static noinline int create_subvol(struct inode *dir,
+ 	key.type = BTRFS_ROOT_ITEM_KEY;
+ 	ret = btrfs_insert_root(trans, fs_info->tree_root, &key,
+ 				root_item);
+-	if (ret)
++	if (ret) {
++		/*
++		 * Since we don't abort the transaction in this case, free the
++		 * tree block so that we don't leak space and leave the
++		 * filesystem in an inconsistent state (an extent item in the
++		 * extent tree without backreferences). Also no need to have
++		 * the tree block locked since it is not in any tree at this
++		 * point, so no other task can find it and use it.
++		 */
++		btrfs_free_tree_block(trans, root, leaf, 0, 1);
++		free_extent_buffer(leaf);
+ 		goto fail;
++	}
++
++	free_extent_buffer(leaf);
++	leaf = NULL;
+ 
+ 	key.offset = (u64)-1;
+ 	new_root = btrfs_get_new_fs_root(fs_info, objectid, anon_dev);
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 6a44d8f5e12e2..c21545c5b34bd 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -733,10 +733,12 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
+ 	struct extent_buffer *eb;
+ 	struct btrfs_root_item *root_item;
+ 	struct btrfs_key root_key;
+-	int ret;
++	int ret = 0;
++	bool must_abort = false;
+ 
+ 	root_item = kmalloc(sizeof(*root_item), GFP_NOFS);
+-	BUG_ON(!root_item);
++	if (!root_item)
++		return ERR_PTR(-ENOMEM);
+ 
+ 	root_key.objectid = BTRFS_TREE_RELOC_OBJECTID;
+ 	root_key.type = BTRFS_ROOT_ITEM_KEY;
+@@ -748,7 +750,9 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
+ 		/* called by btrfs_init_reloc_root */
+ 		ret = btrfs_copy_root(trans, root, root->commit_root, &eb,
+ 				      BTRFS_TREE_RELOC_OBJECTID);
+-		BUG_ON(ret);
++		if (ret)
++			goto fail;
++
+ 		/*
+ 		 * Set the last_snapshot field to the generation of the commit
+ 		 * root - like this ctree.c:btrfs_block_can_be_shared() behaves
+@@ -769,9 +773,16 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
+ 		 */
+ 		ret = btrfs_copy_root(trans, root, root->node, &eb,
+ 				      BTRFS_TREE_RELOC_OBJECTID);
+-		BUG_ON(ret);
++		if (ret)
++			goto fail;
+ 	}
+ 
++	/*
++	 * We have changed references at this point, we must abort the
++	 * transaction if anything fails.
++	 */
++	must_abort = true;
++
+ 	memcpy(root_item, &root->root_item, sizeof(*root_item));
+ 	btrfs_set_root_bytenr(root_item, eb->start);
+ 	btrfs_set_root_level(root_item, btrfs_header_level(eb));
+@@ -789,14 +800,25 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
+ 
+ 	ret = btrfs_insert_root(trans, fs_info->tree_root,
+ 				&root_key, root_item);
+-	BUG_ON(ret);
++	if (ret)
++		goto fail;
++
+ 	kfree(root_item);
+ 
+ 	reloc_root = btrfs_read_tree_root(fs_info->tree_root, &root_key);
+-	BUG_ON(IS_ERR(reloc_root));
++	if (IS_ERR(reloc_root)) {
++		ret = PTR_ERR(reloc_root);
++		goto abort;
++	}
+ 	set_bit(BTRFS_ROOT_SHAREABLE, &reloc_root->state);
+ 	reloc_root->last_trans = trans->transid;
+ 	return reloc_root;
++fail:
++	kfree(root_item);
++abort:
++	if (must_abort)
++		btrfs_abort_transaction(trans, ret);
++	return ERR_PTR(ret);
+ }
+ 
+ /*
+@@ -875,7 +897,7 @@ int btrfs_update_reloc_root(struct btrfs_trans_handle *trans,
+ 	int ret;
+ 
+ 	if (!have_reloc_root(root))
+-		goto out;
++		return 0;
+ 
+ 	reloc_root = root->reloc_root;
+ 	root_item = &reloc_root->root_item;
+@@ -908,10 +930,8 @@ int btrfs_update_reloc_root(struct btrfs_trans_handle *trans,
+ 
+ 	ret = btrfs_update_root(trans, fs_info->tree_root,
+ 				&reloc_root->root_key, root_item);
+-	BUG_ON(ret);
+ 	btrfs_put_root(reloc_root);
+-out:
+-	return 0;
++	return ret;
+ }
+ 
+ /*
+@@ -1185,8 +1205,8 @@ int replace_path(struct btrfs_trans_handle *trans, struct reloc_control *rc,
+ 	int ret;
+ 	int slot;
+ 
+-	BUG_ON(src->root_key.objectid != BTRFS_TREE_RELOC_OBJECTID);
+-	BUG_ON(dest->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID);
++	ASSERT(src->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID);
++	ASSERT(dest->root_key.objectid != BTRFS_TREE_RELOC_OBJECTID);
+ 
+ 	last_snapshot = btrfs_root_last_snapshot(&src->root_item);
+ again:
+@@ -1221,7 +1241,7 @@ again:
+ 		struct btrfs_key first_key;
+ 
+ 		level = btrfs_header_level(parent);
+-		BUG_ON(level < lowest_level);
++		ASSERT(level >= lowest_level);
+ 
+ 		ret = btrfs_bin_search(parent, &key, &slot);
+ 		if (ret < 0)
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 261a50708cb89..af2f2f8704d8b 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1950,7 +1950,6 @@ static void cleanup_transaction(struct btrfs_trans_handle *trans, int err)
+ 	 */
+ 	BUG_ON(list_empty(&cur_trans->list));
+ 
+-	list_del_init(&cur_trans->list);
+ 	if (cur_trans == fs_info->running_transaction) {
+ 		cur_trans->state = TRANS_STATE_COMMIT_DOING;
+ 		spin_unlock(&fs_info->trans_lock);
+@@ -1959,6 +1958,17 @@ static void cleanup_transaction(struct btrfs_trans_handle *trans, int err)
+ 
+ 		spin_lock(&fs_info->trans_lock);
+ 	}
++
++	/*
++	 * Now that we know no one else is still using the transaction we can
++	 * remove the transaction from the list of transactions. This avoids
++	 * the transaction kthread from cleaning up the transaction while some
++	 * other task is still using it, which could result in a use-after-free
++	 * on things like log trees, as it forces the transaction kthread to
++	 * wait for this transaction to be cleaned up by us.
++	 */
++	list_del_init(&cur_trans->list);
++
+ 	spin_unlock(&fs_info->trans_lock);
+ 
+ 	btrfs_cleanup_one_transaction(trans->transaction, fs_info);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index fa359f473e3db..aabaebd1535f0 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -663,6 +663,7 @@ server_unresponsive(struct TCP_Server_Info *server)
+ 	 */
+ 	if ((server->tcpStatus == CifsGood ||
+ 	    server->tcpStatus == CifsNeedNegotiate) &&
++	    (!server->ops->can_echo || server->ops->can_echo(server)) &&
+ 	    time_after(jiffies, server->lstrp + 3 * server->echo_interval)) {
+ 		cifs_server_dbg(VFS, "has not responded in %lu seconds. Reconnecting...\n",
+ 			 (3 * server->echo_interval) / HZ);
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index c2fe85ca2ded3..1a0298d1e7cda 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -92,6 +92,12 @@ int cifs_try_adding_channels(struct cifs_ses *ses)
+ 		return 0;
+ 	}
+ 
++	if (!(ses->server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL)) {
++		cifs_dbg(VFS, "server %s does not support multichannel\n", ses->server->hostname);
++		ses->chan_max = 1;
++		return 0;
++	}
++
+ 	/*
+ 	 * Make a copy of the iface list at the time and use that
+ 	 * instead so as to not hold the iface spinlock for opening
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 46d247c722eec..6e45a25adeff8 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1705,18 +1705,14 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 	}
+ 
+  iqinf_exit:
+-	kfree(vars);
+-	kfree(buffer);
+-	SMB2_open_free(&rqst[0]);
+-	if (qi.flags & PASSTHRU_FSCTL)
+-		SMB2_ioctl_free(&rqst[1]);
+-	else
+-		SMB2_query_info_free(&rqst[1]);
+-
+-	SMB2_close_free(&rqst[2]);
++	cifs_small_buf_release(rqst[0].rq_iov[0].iov_base);
++	cifs_small_buf_release(rqst[1].rq_iov[0].iov_base);
++	cifs_small_buf_release(rqst[2].rq_iov[0].iov_base);
+ 	free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
+ 	free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
+ 	free_rsp_buf(resp_buftype[2], rsp_iov[2].iov_base);
++	kfree(vars);
++	kfree(buffer);
+ 	return rc;
+ 
+ e_fault:
+@@ -2174,7 +2170,7 @@ smb3_notify(const unsigned int xid, struct file *pfile,
+ 
+ 	cifs_sb = CIFS_SB(inode->i_sb);
+ 
+-	utf16_path = cifs_convert_path_to_utf16(path + 1, cifs_sb);
++	utf16_path = cifs_convert_path_to_utf16(path, cifs_sb);
+ 	if (utf16_path == NULL) {
+ 		rc = -ENOMEM;
+ 		goto notify_exit;
+@@ -4076,7 +4072,7 @@ smb2_get_enc_key(struct TCP_Server_Info *server, __u64 ses_id, int enc, u8 *key)
+ 	}
+ 	spin_unlock(&cifs_tcp_ses_lock);
+ 
+-	return 1;
++	return -EAGAIN;
+ }
+ /*
+  * Encrypt or decrypt @rqst message. @rqst[0] has the following format:
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index d1d550647cd64..d424f431263c8 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -840,6 +840,8 @@ SMB2_negotiate(const unsigned int xid, struct cifs_ses *ses)
+ 		req->SecurityMode = 0;
+ 
+ 	req->Capabilities = cpu_to_le32(server->vals->req_capabilities);
++	if (ses->chan_max > 1)
++		req->Capabilities |= cpu_to_le32(SMB2_GLOBAL_CAP_MULTI_CHANNEL);
+ 
+ 	/* ClientGUID must be zero for SMB2.02 dialect */
+ 	if (server->vals->protocol_id == SMB20_PROT_ID)
+@@ -1025,6 +1027,9 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ 
+ 	pneg_inbuf->Capabilities =
+ 			cpu_to_le32(server->vals->req_capabilities);
++	if (tcon->ses->chan_max > 1)
++		pneg_inbuf->Capabilities |= cpu_to_le32(SMB2_GLOBAL_CAP_MULTI_CHANNEL);
++
+ 	memcpy(pneg_inbuf->Guid, server->client_guid,
+ 					SMB2_CLIENT_GUID_SIZE);
+ 
+diff --git a/fs/ecryptfs/main.c b/fs/ecryptfs/main.c
+index e63259fdef288..b2f6a1937d239 100644
+--- a/fs/ecryptfs/main.c
++++ b/fs/ecryptfs/main.c
+@@ -492,6 +492,12 @@ static struct dentry *ecryptfs_mount(struct file_system_type *fs_type, int flags
+ 		goto out;
+ 	}
+ 
++	if (!dev_name) {
++		rc = -EINVAL;
++		err = "Device name cannot be null";
++		goto out;
++	}
++
+ 	rc = ecryptfs_parse_options(sbi, raw_data, &check_ruid);
+ 	if (rc) {
+ 		err = "Error parsing options";
+diff --git a/fs/erofs/erofs_fs.h b/fs/erofs/erofs_fs.h
+index 9ad1615f44743..e8d04d808fa62 100644
+--- a/fs/erofs/erofs_fs.h
++++ b/fs/erofs/erofs_fs.h
+@@ -75,6 +75,9 @@ static inline bool erofs_inode_is_data_compressed(unsigned int datamode)
+ #define EROFS_I_VERSION_BIT             0
+ #define EROFS_I_DATALAYOUT_BIT          1
+ 
++#define EROFS_I_ALL	\
++	((1 << (EROFS_I_DATALAYOUT_BIT + EROFS_I_DATALAYOUT_BITS)) - 1)
++
+ /* 32-byte reduced form of an ondisk inode */
+ struct erofs_inode_compact {
+ 	__le16 i_format;	/* inode format hints */
+diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
+index 3e21c0e8adae7..0a94a52a119fb 100644
+--- a/fs/erofs/inode.c
++++ b/fs/erofs/inode.c
+@@ -44,6 +44,13 @@ static struct page *erofs_read_inode(struct inode *inode,
+ 	dic = page_address(page) + *ofs;
+ 	ifmt = le16_to_cpu(dic->i_format);
+ 
++	if (ifmt & ~EROFS_I_ALL) {
++		erofs_err(inode->i_sb, "unsupported i_format %u of nid %llu",
++			  ifmt, vi->nid);
++		err = -EOPNOTSUPP;
++		goto err_out;
++	}
++
+ 	vi->datalayout = erofs_inode_datalayout(ifmt);
+ 	if (vi->datalayout >= EROFS_INODE_DATALAYOUT_MAX) {
+ 		erofs_err(inode->i_sb, "unsupported datalayout %u of nid %llu",
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 2d5b172f490e0..6094b2e9058b0 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -746,6 +746,12 @@ static __poll_t ep_scan_ready_list(struct eventpoll *ep,
+ 	 */
+ 	list_splice(&txlist, &ep->rdllist);
+ 	__pm_relax(ep->ws);
++
++	if (!list_empty(&ep->rdllist)) {
++		if (waitqueue_active(&ep->wq))
++			wake_up(&ep->wq);
++	}
++
+ 	write_unlock_irq(&ep->lock);
+ 
+ 	if (!ep_locked)
+diff --git a/fs/exfat/balloc.c b/fs/exfat/balloc.c
+index a987919686c0d..579c10f57c2b0 100644
+--- a/fs/exfat/balloc.c
++++ b/fs/exfat/balloc.c
+@@ -141,10 +141,6 @@ void exfat_free_bitmap(struct exfat_sb_info *sbi)
+ 	kfree(sbi->vol_amap);
+ }
+ 
+-/*
+- * If the value of "clu" is 0, it means cluster 2 which is the first cluster of
+- * the cluster heap.
+- */
+ int exfat_set_bitmap(struct inode *inode, unsigned int clu)
+ {
+ 	int i, b;
+@@ -162,10 +158,6 @@ int exfat_set_bitmap(struct inode *inode, unsigned int clu)
+ 	return 0;
+ }
+ 
+-/*
+- * If the value of "clu" is 0, it means cluster 2 which is the first cluster of
+- * the cluster heap.
+- */
+ void exfat_clear_bitmap(struct inode *inode, unsigned int clu)
+ {
+ 	int i, b;
+@@ -186,8 +178,7 @@ void exfat_clear_bitmap(struct inode *inode, unsigned int clu)
+ 		int ret_discard;
+ 
+ 		ret_discard = sb_issue_discard(sb,
+-			exfat_cluster_to_sector(sbi, clu +
+-						EXFAT_RESERVED_CLUSTERS),
++			exfat_cluster_to_sector(sbi, clu),
+ 			(1 << sbi->sect_per_clus_bits), GFP_NOFS, 0);
+ 
+ 		if (ret_discard == -EOPNOTSUPP) {
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 4008a674250cf..4a0411b229a5d 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -1032,8 +1032,10 @@ static int ext4_fc_perform_commit(journal_t *journal)
+ 		head.fc_tid = cpu_to_le32(
+ 			sbi->s_journal->j_running_transaction->t_tid);
+ 		if (!ext4_fc_add_tlv(sb, EXT4_FC_TAG_HEAD, sizeof(head),
+-			(u8 *)&head, &crc))
++			(u8 *)&head, &crc)) {
++			ret = -ENOSPC;
+ 			goto out;
++		}
+ 	}
+ 
+ 	spin_lock(&sbi->s_fc_lock);
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index b692355b8c770..7b28d44b0ddd1 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -372,15 +372,32 @@ truncate:
+ static int ext4_dio_write_end_io(struct kiocb *iocb, ssize_t size,
+ 				 int error, unsigned int flags)
+ {
+-	loff_t offset = iocb->ki_pos;
++	loff_t pos = iocb->ki_pos;
+ 	struct inode *inode = file_inode(iocb->ki_filp);
+ 
+ 	if (error)
+ 		return error;
+ 
+-	if (size && flags & IOMAP_DIO_UNWRITTEN)
+-		return ext4_convert_unwritten_extents(NULL, inode,
+-						      offset, size);
++	if (size && flags & IOMAP_DIO_UNWRITTEN) {
++		error = ext4_convert_unwritten_extents(NULL, inode, pos, size);
++		if (error < 0)
++			return error;
++	}
++	/*
++	 * If we are extending the file, we have to update i_size here before
++	 * page cache gets invalidated in iomap_dio_rw(). Otherwise racing
++	 * buffered reads could zero out too much from page cache pages. Update
++	 * of on-disk size will happen later in ext4_dio_write_iter() where
++	 * we have enough information to also perform orphan list handling etc.
++	 * Note that we perform all extending writes synchronously under
++	 * i_rwsem held exclusively so i_size update is safe here in that case.
++	 * If the write was not extending, we cannot see pos > i_size here
++	 * because operations reducing i_size like truncate wait for all
++	 * outstanding DIO before updating i_size.
++	 */
++	pos += size;
++	if (pos > i_size_read(inode))
++		i_size_write(inode, pos);
+ 
+ 	return 0;
+ }
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index b215c564bc318..c92558ede623e 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -1291,7 +1291,8 @@ got:
+ 
+ 	ei->i_extra_isize = sbi->s_want_extra_isize;
+ 	ei->i_inline_off = 0;
+-	if (ext4_has_feature_inline_data(sb))
++	if (ext4_has_feature_inline_data(sb) &&
++	    (!(ei->i_flags & EXT4_DAX_FL) || S_ISDIR(mode)))
+ 		ext4_set_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+ 	ret = inode;
+ 	err = dquot_alloc_inode(inode);
+@@ -1512,6 +1513,7 @@ int ext4_init_inode_table(struct super_block *sb, ext4_group_t group,
+ 	handle_t *handle;
+ 	ext4_fsblk_t blk;
+ 	int num, ret = 0, used_blks = 0;
++	unsigned long used_inos = 0;
+ 
+ 	/* This should not happen, but just to be sure check this */
+ 	if (sb_rdonly(sb)) {
+@@ -1542,22 +1544,37 @@ int ext4_init_inode_table(struct super_block *sb, ext4_group_t group,
+ 	 * used inodes so we need to skip blocks with used inodes in
+ 	 * inode table.
+ 	 */
+-	if (!(gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT)))
+-		used_blks = DIV_ROUND_UP((EXT4_INODES_PER_GROUP(sb) -
+-			    ext4_itable_unused_count(sb, gdp)),
+-			    sbi->s_inodes_per_block);
+-
+-	if ((used_blks < 0) || (used_blks > sbi->s_itb_per_group) ||
+-	    ((group == 0) && ((EXT4_INODES_PER_GROUP(sb) -
+-			       ext4_itable_unused_count(sb, gdp)) <
+-			      EXT4_FIRST_INO(sb)))) {
+-		ext4_error(sb, "Something is wrong with group %u: "
+-			   "used itable blocks: %d; "
+-			   "itable unused count: %u",
+-			   group, used_blks,
+-			   ext4_itable_unused_count(sb, gdp));
+-		ret = 1;
+-		goto err_out;
++	if (!(gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT))) {
++		used_inos = EXT4_INODES_PER_GROUP(sb) -
++			    ext4_itable_unused_count(sb, gdp);
++		used_blks = DIV_ROUND_UP(used_inos, sbi->s_inodes_per_block);
++
++		/* Bogus inode unused count? */
++		if (used_blks < 0 || used_blks > sbi->s_itb_per_group) {
++			ext4_error(sb, "Something is wrong with group %u: "
++				   "used itable blocks: %d; "
++				   "itable unused count: %u",
++				   group, used_blks,
++				   ext4_itable_unused_count(sb, gdp));
++			ret = 1;
++			goto err_out;
++		}
++
++		used_inos += group * EXT4_INODES_PER_GROUP(sb);
++		/*
++		 * Are there some uninitialized inodes in the inode table
++		 * before the first normal inode?
++		 */
++		if ((used_blks != sbi->s_itb_per_group) &&
++		     (used_inos < EXT4_FIRST_INO(sb))) {
++			ext4_error(sb, "Something is wrong with group %u: "
++				   "itable unused count: %u; "
++				   "itables initialized count: %ld",
++				   group, ext4_itable_unused_count(sb, gdp),
++				   used_inos);
++			ret = 1;
++			goto err_out;
++		}
+ 	}
+ 
+ 	blk = ext4_inode_table(sb, gdp) + used_blks;
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 106bf149e8ca8..cb54ea6461fd8 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -312,6 +312,12 @@ static void ext4_dax_dontcache(struct inode *inode, unsigned int flags)
+ static bool dax_compatible(struct inode *inode, unsigned int oldflags,
+ 			   unsigned int flags)
+ {
++	/* Allow the DAX flag to be changed on inline directories */
++	if (S_ISDIR(inode->i_mode)) {
++		flags &= ~EXT4_INLINE_DATA_FL;
++		oldflags &= ~EXT4_INLINE_DATA_FL;
++	}
++
+ 	if (flags & EXT4_DAX_FL) {
+ 		if ((oldflags & EXT4_DAX_MUT_EXCL) ||
+ 		     ext4_test_inode_state(inode,
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index 795c3ff2907c2..68fbeedd627bc 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -56,7 +56,7 @@ static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
+ 	wait_on_buffer(bh);
+ 	sb_end_write(sb);
+ 	if (unlikely(!buffer_uptodate(bh)))
+-		return 1;
++		return -EIO;
+ 
+ 	return 0;
+ }
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 594300d315ef2..c7f5b665834fc 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3039,9 +3039,6 @@ static void ext4_orphan_cleanup(struct super_block *sb,
+ 		sb->s_flags &= ~SB_RDONLY;
+ 	}
+ #ifdef CONFIG_QUOTA
+-	/* Needed for iput() to work correctly and not trash data */
+-	sb->s_flags |= SB_ACTIVE;
+-
+ 	/*
+ 	 * Turn on quotas which were not enabled for read-only mounts if
+ 	 * filesystem has quota feature, so that they are updated correctly.
+@@ -5492,8 +5489,10 @@ static int ext4_commit_super(struct super_block *sb, int sync)
+ 	struct buffer_head *sbh = EXT4_SB(sb)->s_sbh;
+ 	int error = 0;
+ 
+-	if (!sbh || block_device_ejected(sb))
+-		return error;
++	if (!sbh)
++		return -EINVAL;
++	if (block_device_ejected(sb))
++		return -ENODEV;
+ 
+ 	/*
+ 	 * If the file system is mounted read-only, don't update the
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index e65d73293a3f6..597a145c08ef5 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2781,6 +2781,9 @@ static void remove_nats_in_journal(struct f2fs_sb_info *sbi)
+ 		struct f2fs_nat_entry raw_ne;
+ 		nid_t nid = le32_to_cpu(nid_in_journal(journal, i));
+ 
++		if (f2fs_check_nid_range(sbi, nid))
++			continue;
++
+ 		raw_ne = nat_in_journal(journal, i);
+ 
+ 		ne = __lookup_nat_cache(nm_i, nid);
+diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c
+index 054ec852b5ea4..15ba36926fad7 100644
+--- a/fs/f2fs/verity.c
++++ b/fs/f2fs/verity.c
+@@ -152,40 +152,73 @@ static int f2fs_end_enable_verity(struct file *filp, const void *desc,
+ 				  size_t desc_size, u64 merkle_tree_size)
+ {
+ 	struct inode *inode = file_inode(filp);
++	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	u64 desc_pos = f2fs_verity_metadata_pos(inode) + merkle_tree_size;
+ 	struct fsverity_descriptor_location dloc = {
+ 		.version = cpu_to_le32(F2FS_VERIFY_VER),
+ 		.size = cpu_to_le32(desc_size),
+ 		.pos = cpu_to_le64(desc_pos),
+ 	};
+-	int err = 0;
++	int err = 0, err2 = 0;
+ 
+-	if (desc != NULL) {
+-		/* Succeeded; write the verity descriptor. */
+-		err = pagecache_write(inode, desc, desc_size, desc_pos);
++	/*
++	 * If an error already occurred (which fs/verity/ signals by passing
++	 * desc == NULL), then only clean-up is needed.
++	 */
++	if (desc == NULL)
++		goto cleanup;
+ 
+-		/* Write all pages before clearing FI_VERITY_IN_PROGRESS. */
+-		if (!err)
+-			err = filemap_write_and_wait(inode->i_mapping);
+-	}
++	/* Append the verity descriptor. */
++	err = pagecache_write(inode, desc, desc_size, desc_pos);
++	if (err)
++		goto cleanup;
++
++	/*
++	 * Write all pages (both data and verity metadata).  Note that this must
++	 * happen before clearing FI_VERITY_IN_PROGRESS; otherwise pages beyond
++	 * i_size won't be written properly.  For crash consistency, this also
++	 * must happen before the verity inode flag gets persisted.
++	 */
++	err = filemap_write_and_wait(inode->i_mapping);
++	if (err)
++		goto cleanup;
++
++	/* Set the verity xattr. */
++	err = f2fs_setxattr(inode, F2FS_XATTR_INDEX_VERITY,
++			    F2FS_XATTR_NAME_VERITY, &dloc, sizeof(dloc),
++			    NULL, XATTR_CREATE);
++	if (err)
++		goto cleanup;
+ 
+-	/* If we failed, truncate anything we wrote past i_size. */
+-	if (desc == NULL || err)
+-		f2fs_truncate(inode);
++	/* Finally, set the verity inode flag. */
++	file_set_verity(inode);
++	f2fs_set_inode_flags(inode);
++	f2fs_mark_inode_dirty_sync(inode, true);
+ 
+ 	clear_inode_flag(inode, FI_VERITY_IN_PROGRESS);
++	return 0;
+ 
+-	if (desc != NULL && !err) {
+-		err = f2fs_setxattr(inode, F2FS_XATTR_INDEX_VERITY,
+-				    F2FS_XATTR_NAME_VERITY, &dloc, sizeof(dloc),
+-				    NULL, XATTR_CREATE);
+-		if (!err) {
+-			file_set_verity(inode);
+-			f2fs_set_inode_flags(inode);
+-			f2fs_mark_inode_dirty_sync(inode, true);
+-		}
++cleanup:
++	/*
++	 * Verity failed to be enabled, so clean up by truncating any verity
++	 * metadata that was written beyond i_size (both from cache and from
++	 * disk) and clearing FI_VERITY_IN_PROGRESS.
++	 *
++	 * Taking i_gc_rwsem[WRITE] is needed to stop f2fs garbage collection
++	 * from re-instantiating cached pages we are truncating (since unlike
++	 * normal file accesses, garbage collection isn't limited by i_size).
++	 */
++	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++	truncate_inode_pages(inode->i_mapping, inode->i_size);
++	err2 = f2fs_truncate(inode);
++	if (err2) {
++		f2fs_err(sbi, "Truncating verity metadata failed (errno=%d)",
++			 err2);
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 	}
+-	return err;
++	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++	clear_inode_flag(inode, FI_VERITY_IN_PROGRESS);
++	return err ?: err2;
+ }
+ 
+ static int f2fs_get_verity_descriptor(struct inode *inode, void *buf,
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 8b306005453cc..7160d30068f32 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1093,6 +1093,7 @@ static ssize_t fuse_send_write_pages(struct fuse_io_args *ia,
+ 	struct fuse_file *ff = file->private_data;
+ 	struct fuse_mount *fm = ff->fm;
+ 	unsigned int offset, i;
++	bool short_write;
+ 	int err;
+ 
+ 	for (i = 0; i < ap->num_pages; i++)
+@@ -1105,32 +1106,38 @@ static ssize_t fuse_send_write_pages(struct fuse_io_args *ia,
+ 	if (!err && ia->write.out.size > count)
+ 		err = -EIO;
+ 
++	short_write = ia->write.out.size < count;
+ 	offset = ap->descs[0].offset;
+ 	count = ia->write.out.size;
+ 	for (i = 0; i < ap->num_pages; i++) {
+ 		struct page *page = ap->pages[i];
+ 
+-		if (!err && !offset && count >= PAGE_SIZE)
+-			SetPageUptodate(page);
+-
+-		if (count > PAGE_SIZE - offset)
+-			count -= PAGE_SIZE - offset;
+-		else
+-			count = 0;
+-		offset = 0;
+-
+-		unlock_page(page);
++		if (err) {
++			ClearPageUptodate(page);
++		} else {
++			if (count >= PAGE_SIZE - offset)
++				count -= PAGE_SIZE - offset;
++			else {
++				if (short_write)
++					ClearPageUptodate(page);
++				count = 0;
++			}
++			offset = 0;
++		}
++		if (ia->write.page_locked && (i == ap->num_pages - 1))
++			unlock_page(page);
+ 		put_page(page);
+ 	}
+ 
+ 	return err;
+ }
+ 
+-static ssize_t fuse_fill_write_pages(struct fuse_args_pages *ap,
++static ssize_t fuse_fill_write_pages(struct fuse_io_args *ia,
+ 				     struct address_space *mapping,
+ 				     struct iov_iter *ii, loff_t pos,
+ 				     unsigned int max_pages)
+ {
++	struct fuse_args_pages *ap = &ia->ap;
+ 	struct fuse_conn *fc = get_fuse_conn(mapping->host);
+ 	unsigned offset = pos & (PAGE_SIZE - 1);
+ 	size_t count = 0;
+@@ -1183,6 +1190,16 @@ static ssize_t fuse_fill_write_pages(struct fuse_args_pages *ap,
+ 		if (offset == PAGE_SIZE)
+ 			offset = 0;
+ 
++		/* If we copied full page, mark it uptodate */
++		if (tmp == PAGE_SIZE)
++			SetPageUptodate(page);
++
++		if (PageUptodate(page)) {
++			unlock_page(page);
++		} else {
++			ia->write.page_locked = true;
++			break;
++		}
+ 		if (!fc->big_writes)
+ 			break;
+ 	} while (iov_iter_count(ii) && count < fc->max_write &&
+@@ -1226,7 +1243,7 @@ static ssize_t fuse_perform_write(struct kiocb *iocb,
+ 			break;
+ 		}
+ 
+-		count = fuse_fill_write_pages(ap, mapping, ii, pos, nr_pages);
++		count = fuse_fill_write_pages(&ia, mapping, ii, pos, nr_pages);
+ 		if (count <= 0) {
+ 			err = count;
+ 		} else {
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index d64aee04e59a7..8150621101c6f 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -911,6 +911,7 @@ struct fuse_io_args {
+ 		struct {
+ 			struct fuse_write_in in;
+ 			struct fuse_write_out out;
++			bool page_locked;
+ 		} write;
+ 	};
+ 	struct fuse_args_pages ap;
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index 3d83c9e128487..f0a7f1b7b75fc 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -896,6 +896,7 @@ static int virtio_fs_probe(struct virtio_device *vdev)
+ out_vqs:
+ 	vdev->config->reset(vdev);
+ 	virtio_fs_cleanup_vqs(vdev, fs);
++	kfree(fs->vqs);
+ 
+ out:
+ 	vdev->priv = NULL;
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 9396666b73145..e8fc45fd751fb 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -349,7 +349,12 @@ static int start_this_handle(journal_t *journal, handle_t *handle,
+ 	}
+ 
+ alloc_transaction:
+-	if (!journal->j_running_transaction) {
++	/*
++	 * This check is racy but it is just an optimization of allocating new
++	 * transaction early if there are high chances we'll need it. If we
++	 * guess wrong, we'll retry or free unused transaction.
++	 */
++	if (!data_race(journal->j_running_transaction)) {
+ 		/*
+ 		 * If __GFP_FS is not present, then we may be being called from
+ 		 * inside the fs writeback layer, so we MUST NOT fail.
+@@ -1474,8 +1479,8 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 	 * crucial to catch bugs so let's do a reliable check until the
+ 	 * lockless handling is fully proven.
+ 	 */
+-	if (jh->b_transaction != transaction &&
+-	    jh->b_next_transaction != transaction) {
++	if (data_race(jh->b_transaction != transaction &&
++	    jh->b_next_transaction != transaction)) {
+ 		spin_lock(&jh->b_state_lock);
+ 		J_ASSERT_JH(jh, jh->b_transaction == transaction ||
+ 				jh->b_next_transaction == transaction);
+@@ -1483,8 +1488,8 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 	}
+ 	if (jh->b_modified == 1) {
+ 		/* If it's in our transaction it must be in BJ_Metadata list. */
+-		if (jh->b_transaction == transaction &&
+-		    jh->b_jlist != BJ_Metadata) {
++		if (data_race(jh->b_transaction == transaction &&
++		    jh->b_jlist != BJ_Metadata)) {
+ 			spin_lock(&jh->b_state_lock);
+ 			if (jh->b_transaction == transaction &&
+ 			    jh->b_jlist != BJ_Metadata)
+diff --git a/fs/jffs2/compr_rtime.c b/fs/jffs2/compr_rtime.c
+index 406d9cc84ba8d..79e771ab624f4 100644
+--- a/fs/jffs2/compr_rtime.c
++++ b/fs/jffs2/compr_rtime.c
+@@ -37,6 +37,9 @@ static int jffs2_rtime_compress(unsigned char *data_in,
+ 	int outpos = 0;
+ 	int pos=0;
+ 
++	if (*dstlen <= 3)
++		return -1;
++
+ 	memset(positions,0,sizeof(positions));
+ 
+ 	while (pos < (*sourcelen) && outpos <= (*dstlen)-2) {
+diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
+index f8fb89b10227c..4fc8cd698d1a4 100644
+--- a/fs/jffs2/file.c
++++ b/fs/jffs2/file.c
+@@ -57,6 +57,7 @@ const struct file_operations jffs2_file_operations =
+ 	.mmap =		generic_file_readonly_mmap,
+ 	.fsync =	jffs2_fsync,
+ 	.splice_read =	generic_file_splice_read,
++	.splice_write = iter_file_splice_write,
+ };
+ 
+ /* jffs2_file_inode_operations */
+diff --git a/fs/jffs2/scan.c b/fs/jffs2/scan.c
+index db72a9d2d0afb..b676056826beb 100644
+--- a/fs/jffs2/scan.c
++++ b/fs/jffs2/scan.c
+@@ -1079,7 +1079,7 @@ static int jffs2_scan_dirent_node(struct jffs2_sb_info *c, struct jffs2_eraseblo
+ 	memcpy(&fd->name, rd->name, checkedlen);
+ 	fd->name[checkedlen] = 0;
+ 
+-	crc = crc32(0, fd->name, rd->nsize);
++	crc = crc32(0, fd->name, checkedlen);
+ 	if (crc != je32_to_cpu(rd->name_crc)) {
+ 		pr_notice("%s(): Name CRC failed on node at 0x%08x: Read 0x%08x, calculated 0x%08x\n",
+ 			  __func__, ofs, je32_to_cpu(rd->name_crc), crc);
+diff --git a/fs/nfs/fs_context.c b/fs/nfs/fs_context.c
+index 29ec8b09a52d8..05b39e8f97b9a 100644
+--- a/fs/nfs/fs_context.c
++++ b/fs/nfs/fs_context.c
+@@ -937,6 +937,15 @@ static int nfs23_parse_monolithic(struct fs_context *fc,
+ 			memset(mntfh->data + mntfh->size, 0,
+ 			       sizeof(mntfh->data) - mntfh->size);
+ 
++		/*
++		 * for proto == XPRT_TRANSPORT_UDP, which is what uses
++		 * to_exponential, implying shift: limit the shift value
++		 * to BITS_PER_LONG (majortimeo is unsigned long)
++		 */
++		if (!(data->flags & NFS_MOUNT_TCP)) /* this will be UDP */
++			if (data->retrans >= 64) /* shift value is too large */
++				goto out_invalid_data;
++
+ 		/*
+ 		 * Translate to nfs_fs_context, which nfs_fill_super
+ 		 * can deal with.
+@@ -1037,6 +1046,9 @@ out_no_address:
+ 
+ out_invalid_fh:
+ 	return nfs_invalf(fc, "NFS: invalid root filehandle");
++
++out_invalid_data:
++	return nfs_invalf(fc, "NFS: invalid binary mount data");
+ }
+ 
+ #if IS_ENABLED(CONFIG_NFS_V4)
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index b8712b835b105..0a32d182dce46 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1344,7 +1344,7 @@ _pnfs_return_layout(struct inode *ino)
+ 	}
+ 	valid_layout = pnfs_layout_is_valid(lo);
+ 	pnfs_clear_layoutcommit(ino, &tmp_list);
+-	pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL, 0);
++	pnfs_mark_matching_lsegs_return(lo, &tmp_list, NULL, 0);
+ 
+ 	if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
+ 		struct pnfs_layout_range range = {
+@@ -2470,6 +2470,9 @@ pnfs_mark_matching_lsegs_return(struct pnfs_layout_hdr *lo,
+ 
+ 	assert_spin_locked(&lo->plh_inode->i_lock);
+ 
++	if (test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags))
++		tmp_list = &lo->plh_return_segs;
++
+ 	list_for_each_entry_safe(lseg, next, &lo->plh_segs, pls_list)
+ 		if (pnfs_match_lseg_recall(lseg, return_range, seq)) {
+ 			dprintk("%s: marking lseg %p iomode %d "
+@@ -2477,6 +2480,8 @@ pnfs_mark_matching_lsegs_return(struct pnfs_layout_hdr *lo,
+ 				lseg, lseg->pls_range.iomode,
+ 				lseg->pls_range.offset,
+ 				lseg->pls_range.length);
++			if (test_bit(NFS_LSEG_LAYOUTRETURN, &lseg->pls_flags))
++				tmp_list = &lo->plh_return_segs;
+ 			if (mark_lseg_invalid(lseg, tmp_list))
+ 				continue;
+ 			remaining++;
+diff --git a/fs/stat.c b/fs/stat.c
+index dacecdda2e796..1196af4d1ea03 100644
+--- a/fs/stat.c
++++ b/fs/stat.c
+@@ -77,12 +77,20 @@ int vfs_getattr_nosec(const struct path *path, struct kstat *stat,
+ 	/* SB_NOATIME means filesystem supplies dummy atime value */
+ 	if (inode->i_sb->s_flags & SB_NOATIME)
+ 		stat->result_mask &= ~STATX_ATIME;
++
++	/*
++	 * Note: If you add another clause to set an attribute flag, please
++	 * update attributes_mask below.
++	 */
+ 	if (IS_AUTOMOUNT(inode))
+ 		stat->attributes |= STATX_ATTR_AUTOMOUNT;
+ 
+ 	if (IS_DAX(inode))
+ 		stat->attributes |= STATX_ATTR_DAX;
+ 
++	stat->attributes_mask |= (STATX_ATTR_AUTOMOUNT |
++				  STATX_ATTR_DAX);
++
+ 	if (inode->i_op->getattr)
+ 		return inode->i_op->getattr(path, stat, request_mask,
+ 					    query_flags);
+diff --git a/fs/ubifs/replay.c b/fs/ubifs/replay.c
+index 9a151a1f5e260..1c6fc99fca30e 100644
+--- a/fs/ubifs/replay.c
++++ b/fs/ubifs/replay.c
+@@ -223,7 +223,8 @@ static bool inode_still_linked(struct ubifs_info *c, struct replay_entry *rino)
+ 	 */
+ 	list_for_each_entry_reverse(r, &c->replay_list, list) {
+ 		ubifs_assert(c, r->sqnum >= rino->sqnum);
+-		if (key_inum(c, &r->key) == key_inum(c, &rino->key))
++		if (key_inum(c, &r->key) == key_inum(c, &rino->key) &&
++		    key_type(c, &r->key) == UBIFS_INO_KEY)
+ 			return r->deletion == 0;
+ 
+ 	}
+diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
+index fcde59c65a81b..cb3d6b1c655de 100644
+--- a/include/crypto/acompress.h
++++ b/include/crypto/acompress.h
+@@ -165,6 +165,8 @@ static inline struct crypto_acomp *crypto_acomp_reqtfm(struct acomp_req *req)
+  * crypto_free_acomp() -- free ACOMPRESS tfm handle
+  *
+  * @tfm:	ACOMPRESS tfm handle allocated with crypto_alloc_acomp()
++ *
++ * If @tfm is a NULL or error pointer, this function does nothing.
+  */
+ static inline void crypto_free_acomp(struct crypto_acomp *tfm)
+ {
+diff --git a/include/crypto/aead.h b/include/crypto/aead.h
+index c32a6f5664e9a..fe956629f34cb 100644
+--- a/include/crypto/aead.h
++++ b/include/crypto/aead.h
+@@ -185,6 +185,8 @@ static inline struct crypto_tfm *crypto_aead_tfm(struct crypto_aead *tfm)
+ /**
+  * crypto_free_aead() - zeroize and free aead handle
+  * @tfm: cipher handle to be freed
++ *
++ * If @tfm is a NULL or error pointer, this function does nothing.
+  */
+ static inline void crypto_free_aead(struct crypto_aead *tfm)
+ {
+diff --git a/include/crypto/akcipher.h b/include/crypto/akcipher.h
+index 1d3aa252cabaf..5764b46bd1ec1 100644
+--- a/include/crypto/akcipher.h
++++ b/include/crypto/akcipher.h
+@@ -174,6 +174,8 @@ static inline struct crypto_akcipher *crypto_akcipher_reqtfm(
+  * crypto_free_akcipher() - free AKCIPHER tfm handle
+  *
+  * @tfm: AKCIPHER tfm handle allocated with crypto_alloc_akcipher()
++ *
++ * If @tfm is a NULL or error pointer, this function does nothing.
+  */
+ static inline void crypto_free_akcipher(struct crypto_akcipher *tfm)
+ {
+diff --git a/include/crypto/chacha.h b/include/crypto/chacha.h
+index 3a1c72fdb7cf5..dabaee6987186 100644
+--- a/include/crypto/chacha.h
++++ b/include/crypto/chacha.h
+@@ -47,13 +47,18 @@ static inline void hchacha_block(const u32 *state, u32 *out, int nrounds)
+ 		hchacha_block_generic(state, out, nrounds);
+ }
+ 
+-void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv);
+-static inline void chacha_init_generic(u32 *state, const u32 *key, const u8 *iv)
++static inline void chacha_init_consts(u32 *state)
+ {
+ 	state[0]  = 0x61707865; /* "expa" */
+ 	state[1]  = 0x3320646e; /* "nd 3" */
+ 	state[2]  = 0x79622d32; /* "2-by" */
+ 	state[3]  = 0x6b206574; /* "te k" */
++}
++
++void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv);
++static inline void chacha_init_generic(u32 *state, const u32 *key, const u8 *iv)
++{
++	chacha_init_consts(state);
+ 	state[4]  = key[0];
+ 	state[5]  = key[1];
+ 	state[6]  = key[2];
+diff --git a/include/crypto/hash.h b/include/crypto/hash.h
+index 13f8a6a54ca87..b2bc1e46e86a7 100644
+--- a/include/crypto/hash.h
++++ b/include/crypto/hash.h
+@@ -281,6 +281,8 @@ static inline struct crypto_tfm *crypto_ahash_tfm(struct crypto_ahash *tfm)
+ /**
+  * crypto_free_ahash() - zeroize and free the ahash handle
+  * @tfm: cipher handle to be freed
++ *
++ * If @tfm is a NULL or error pointer, this function does nothing.
+  */
+ static inline void crypto_free_ahash(struct crypto_ahash *tfm)
+ {
+@@ -724,6 +726,8 @@ static inline struct crypto_tfm *crypto_shash_tfm(struct crypto_shash *tfm)
+ /**
+  * crypto_free_shash() - zeroize and free the message digest handle
+  * @tfm: cipher handle to be freed
++ *
++ * If @tfm is a NULL or error pointer, this function does nothing.
+  */
+ static inline void crypto_free_shash(struct crypto_shash *tfm)
+ {
+diff --git a/include/crypto/kpp.h b/include/crypto/kpp.h
+index 88b591215d5c8..cccceadc164b9 100644
+--- a/include/crypto/kpp.h
++++ b/include/crypto/kpp.h
+@@ -154,6 +154,8 @@ static inline void crypto_kpp_set_flags(struct crypto_kpp *tfm, u32 flags)
+  * crypto_free_kpp() - free KPP tfm handle
+  *
+  * @tfm: KPP tfm handle allocated with crypto_alloc_kpp()
++ *
++ * If @tfm is a NULL or error pointer, this function does nothing.
+  */
+ static inline void crypto_free_kpp(struct crypto_kpp *tfm)
+ {
+diff --git a/include/crypto/rng.h b/include/crypto/rng.h
+index 8b4b844b4eef8..17bb3673d3c17 100644
+--- a/include/crypto/rng.h
++++ b/include/crypto/rng.h
+@@ -111,6 +111,8 @@ static inline struct rng_alg *crypto_rng_alg(struct crypto_rng *tfm)
+ /**
+  * crypto_free_rng() - zeroize and free RNG handle
+  * @tfm: cipher handle to be freed
++ *
++ * If @tfm is a NULL or error pointer, this function does nothing.
+  */
+ static inline void crypto_free_rng(struct crypto_rng *tfm)
+ {
+diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
+index 6a733b171a5d0..ef0fc9ed4342e 100644
+--- a/include/crypto/skcipher.h
++++ b/include/crypto/skcipher.h
+@@ -196,6 +196,8 @@ static inline struct crypto_tfm *crypto_skcipher_tfm(
+ /**
+  * crypto_free_skcipher() - zeroize and free cipher handle
+  * @tfm: cipher handle to be freed
++ *
++ * If @tfm is a NULL or error pointer, this function does nothing.
+  */
+ static inline void crypto_free_skcipher(struct crypto_skcipher *tfm)
+ {
+diff --git a/include/linux/mfd/da9063/registers.h b/include/linux/mfd/da9063/registers.h
+index 1dbabf1b3cb81..6e0f66a2e7279 100644
+--- a/include/linux/mfd/da9063/registers.h
++++ b/include/linux/mfd/da9063/registers.h
+@@ -1037,6 +1037,9 @@
+ #define		DA9063_NONKEY_PIN_AUTODOWN	0x02
+ #define		DA9063_NONKEY_PIN_AUTOFLPRT	0x03
+ 
++/* DA9063_REG_CONFIG_J (addr=0x10F) */
++#define DA9063_TWOWIRE_TO			0x40
++
+ /* DA9063_REG_MON_REG_5 (addr=0x116) */
+ #define DA9063_MON_A8_IDX_MASK			0x07
+ #define		DA9063_MON_A8_IDX_NONE		0x00
+diff --git a/include/linux/mfd/intel-m10-bmc.h b/include/linux/mfd/intel-m10-bmc.h
+index c8ef2f1654a44..06da62c25234f 100644
+--- a/include/linux/mfd/intel-m10-bmc.h
++++ b/include/linux/mfd/intel-m10-bmc.h
+@@ -11,7 +11,7 @@
+ 
+ #define M10BMC_LEGACY_SYS_BASE		0x300400
+ #define M10BMC_SYS_BASE			0x300800
+-#define M10BMC_MEM_END			0x200000fc
++#define M10BMC_MEM_END			0x1fffffff
+ 
+ /* Register offset of system registers */
+ #define NIOS2_FW_VERSION		0x0
+diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
+index c079b932330f2..40d7e98fc9902 100644
+--- a/include/linux/mmc/host.h
++++ b/include/linux/mmc/host.h
+@@ -285,9 +285,6 @@ struct mmc_host {
+ 	u32			ocr_avail_sdio;	/* SDIO-specific OCR */
+ 	u32			ocr_avail_sd;	/* SD-specific OCR */
+ 	u32			ocr_avail_mmc;	/* MMC-specific OCR */
+-#ifdef CONFIG_PM_SLEEP
+-	struct notifier_block	pm_notify;
+-#endif
+ 	struct wakeup_source	*ws;		/* Enable consume of uevents */
+ 	u32			max_current_330;
+ 	u32			max_current_300;
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 22ce0604b4480..072ac6c1ef2b6 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -607,6 +607,7 @@ struct swevent_hlist {
+ #define PERF_ATTACH_TASK_DATA	0x08
+ #define PERF_ATTACH_ITRACE	0x10
+ #define PERF_ATTACH_SCHED_CB	0x20
++#define PERF_ATTACH_CHILD	0x40
+ 
+ struct perf_cgroup;
+ struct perf_buffer;
+diff --git a/include/linux/power/bq27xxx_battery.h b/include/linux/power/bq27xxx_battery.h
+index 111a40d0d3d50..8d5f4f40fb418 100644
+--- a/include/linux/power/bq27xxx_battery.h
++++ b/include/linux/power/bq27xxx_battery.h
+@@ -53,7 +53,6 @@ struct bq27xxx_reg_cache {
+ 	int capacity;
+ 	int energy;
+ 	int flags;
+-	int power_avg;
+ 	int health;
+ };
+ 
+diff --git a/include/media/v4l2-ctrls.h b/include/media/v4l2-ctrls.h
+index cb25f345e9adc..9ecbb98908f05 100644
+--- a/include/media/v4l2-ctrls.h
++++ b/include/media/v4l2-ctrls.h
+@@ -303,12 +303,14 @@ struct v4l2_ctrl {
+  *		the control has been applied. This prevents applying controls
+  *		from a cluster with multiple controls twice (when the first
+  *		control of a cluster is applied, they all are).
+- * @req:	If set, this refers to another request that sets this control.
++ * @valid_p_req: If set, then p_req contains the control value for the request.
+  * @p_req:	If the control handler containing this control reference
+  *		is bound to a media request, then this points to the
+- *		value of the control that should be applied when the request
++ *		value of the control that must be applied when the request
+  *		is executed, or to the value of the control at the time
+- *		that the request was completed.
++ *		that the request was completed. If @valid_p_req is false,
++ *		then this control was never set for this request and the
++ *		control will not be updated when this request is applied.
+  *
+  * Each control handler has a list of these refs. The list_head is used to
+  * keep a sorted-by-control-ID list of all controls, while the next pointer
+@@ -321,7 +323,7 @@ struct v4l2_ctrl_ref {
+ 	struct v4l2_ctrl_helper *helper;
+ 	bool from_other_dev;
+ 	bool req_done;
+-	struct v4l2_ctrl_ref *req;
++	bool valid_p_req;
+ 	union v4l2_ctrl_ptr p_req;
+ };
+ 
+@@ -348,7 +350,7 @@ struct v4l2_ctrl_ref {
+  * @error:	The error code of the first failed control addition.
+  * @request_is_queued: True if the request was queued.
+  * @requests:	List to keep track of open control handler request objects.
+- *		For the parent control handler (@req_obj.req == NULL) this
++ *		For the parent control handler (@req_obj.ops == NULL) this
+  *		is the list header. When the parent control handler is
+  *		removed, it has to unbind and put all these requests since
+  *		they refer to the parent.
+diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
+index 2568cb0627ec0..fac8e89aed81d 100644
+--- a/include/scsi/libfcoe.h
++++ b/include/scsi/libfcoe.h
+@@ -249,7 +249,7 @@ int fcoe_ctlr_recv_flogi(struct fcoe_ctlr *, struct fc_lport *,
+ 			 struct fc_frame *);
+ 
+ /* libfcoe funcs */
+-u64 fcoe_wwn_from_mac(unsigned char mac[], unsigned int, unsigned int);
++u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int);
+ int fcoe_libfc_config(struct fc_lport *, struct fcoe_ctlr *,
+ 		      const struct libfc_function_template *, int init_fcp);
+ u32 fcoe_fc_crc(struct fc_frame *fp);
+diff --git a/include/uapi/linux/usb/video.h b/include/uapi/linux/usb/video.h
+index d854cb19c42c3..bfdae12cdacf8 100644
+--- a/include/uapi/linux/usb/video.h
++++ b/include/uapi/linux/usb/video.h
+@@ -302,9 +302,10 @@ struct uvc_processing_unit_descriptor {
+ 	__u8   bControlSize;
+ 	__u8   bmControls[2];
+ 	__u8   iProcessing;
++	__u8   bmVideoStandards;
+ } __attribute__((__packed__));
+ 
+-#define UVC_DT_PROCESSING_UNIT_SIZE(n)			(9+(n))
++#define UVC_DT_PROCESSING_UNIT_SIZE(n)			(10+(n))
+ 
+ /* 3.7.2.6. Extension Unit Descriptor */
+ struct uvc_extension_unit_descriptor {
+diff --git a/kernel/.gitignore b/kernel/.gitignore
+index 78701ea37c973..5518835ac35c7 100644
+--- a/kernel/.gitignore
++++ b/kernel/.gitignore
+@@ -1,4 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
++/config_data
+ kheaders.md5
+ timeconst.h
+ hz.bc
+diff --git a/kernel/Makefile b/kernel/Makefile
+index 88b60a6e5dd0a..e7905bdf6e972 100644
+--- a/kernel/Makefile
++++ b/kernel/Makefile
+@@ -134,10 +134,15 @@ obj-$(CONFIG_SCF_TORTURE_TEST) += scftorture.o
+ 
+ $(obj)/configs.o: $(obj)/config_data.gz
+ 
+-targets += config_data.gz
+-$(obj)/config_data.gz: $(KCONFIG_CONFIG) FORCE
++targets += config_data config_data.gz
++$(obj)/config_data.gz: $(obj)/config_data FORCE
+ 	$(call if_changed,gzip)
+ 
++filechk_cat = cat $<
++
++$(obj)/config_data: $(KCONFIG_CONFIG) FORCE
++	$(call filechk,cat)
++
+ $(obj)/kheaders.o: $(obj)/kheaders_data.tar.xz
+ 
+ quiet_cmd_genikh = CHK     $(obj)/kheaders_data.tar.xz
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 8e1b8126c0e49..45fa7167cee2d 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -2209,6 +2209,26 @@ out:
+ 	perf_event__header_size(leader);
+ }
+ 
++static void sync_child_event(struct perf_event *child_event);
++
++static void perf_child_detach(struct perf_event *event)
++{
++	struct perf_event *parent_event = event->parent;
++
++	if (!(event->attach_state & PERF_ATTACH_CHILD))
++		return;
++
++	event->attach_state &= ~PERF_ATTACH_CHILD;
++
++	if (WARN_ON_ONCE(!parent_event))
++		return;
++
++	lockdep_assert_held(&parent_event->child_mutex);
++
++	sync_child_event(event);
++	list_del_init(&event->child_list);
++}
++
+ static bool is_orphaned_event(struct perf_event *event)
+ {
+ 	return event->state == PERF_EVENT_STATE_DEAD;
+@@ -2316,6 +2336,7 @@ group_sched_out(struct perf_event *group_event,
+ }
+ 
+ #define DETACH_GROUP	0x01UL
++#define DETACH_CHILD	0x02UL
+ 
+ /*
+  * Cross CPU call to remove a performance event
+@@ -2339,6 +2360,8 @@ __perf_remove_from_context(struct perf_event *event,
+ 	event_sched_out(event, cpuctx, ctx);
+ 	if (flags & DETACH_GROUP)
+ 		perf_group_detach(event);
++	if (flags & DETACH_CHILD)
++		perf_child_detach(event);
+ 	list_del_event(event, ctx);
+ 
+ 	if (!ctx->nr_events && ctx->is_active) {
+@@ -2367,25 +2390,21 @@ static void perf_remove_from_context(struct perf_event *event, unsigned long fla
+ 
+ 	lockdep_assert_held(&ctx->mutex);
+ 
+-	event_function_call(event, __perf_remove_from_context, (void *)flags);
+-
+ 	/*
+-	 * The above event_function_call() can NO-OP when it hits
+-	 * TASK_TOMBSTONE. In that case we must already have been detached
+-	 * from the context (by perf_event_exit_event()) but the grouping
+-	 * might still be in-tact.
++	 * Because of perf_event_exit_task(), perf_remove_from_context() ought
++	 * to work in the face of TASK_TOMBSTONE, unlike every other
++	 * event_function_call() user.
+ 	 */
+-	WARN_ON_ONCE(event->attach_state & PERF_ATTACH_CONTEXT);
+-	if ((flags & DETACH_GROUP) &&
+-	    (event->attach_state & PERF_ATTACH_GROUP)) {
+-		/*
+-		 * Since in that case we cannot possibly be scheduled, simply
+-		 * detach now.
+-		 */
+-		raw_spin_lock_irq(&ctx->lock);
+-		perf_group_detach(event);
++	raw_spin_lock_irq(&ctx->lock);
++	if (!ctx->is_active) {
++		__perf_remove_from_context(event, __get_cpu_context(ctx),
++					   ctx, (void *)flags);
+ 		raw_spin_unlock_irq(&ctx->lock);
++		return;
+ 	}
++	raw_spin_unlock_irq(&ctx->lock);
++
++	event_function_call(event, __perf_remove_from_context, (void *)flags);
+ }
+ 
+ /*
+@@ -12249,14 +12268,17 @@ void perf_pmu_migrate_context(struct pmu *pmu, int src_cpu, int dst_cpu)
+ }
+ EXPORT_SYMBOL_GPL(perf_pmu_migrate_context);
+ 
+-static void sync_child_event(struct perf_event *child_event,
+-			       struct task_struct *child)
++static void sync_child_event(struct perf_event *child_event)
+ {
+ 	struct perf_event *parent_event = child_event->parent;
+ 	u64 child_val;
+ 
+-	if (child_event->attr.inherit_stat)
+-		perf_event_read_event(child_event, child);
++	if (child_event->attr.inherit_stat) {
++		struct task_struct *task = child_event->ctx->task;
++
++		if (task && task != TASK_TOMBSTONE)
++			perf_event_read_event(child_event, task);
++	}
+ 
+ 	child_val = perf_event_count(child_event);
+ 
+@@ -12271,60 +12293,53 @@ static void sync_child_event(struct perf_event *child_event,
+ }
+ 
+ static void
+-perf_event_exit_event(struct perf_event *child_event,
+-		      struct perf_event_context *child_ctx,
+-		      struct task_struct *child)
++perf_event_exit_event(struct perf_event *event, struct perf_event_context *ctx)
+ {
+-	struct perf_event *parent_event = child_event->parent;
++	struct perf_event *parent_event = event->parent;
++	unsigned long detach_flags = 0;
+ 
+-	/*
+-	 * Do not destroy the 'original' grouping; because of the context
+-	 * switch optimization the original events could've ended up in a
+-	 * random child task.
+-	 *
+-	 * If we were to destroy the original group, all group related
+-	 * operations would cease to function properly after this random
+-	 * child dies.
+-	 *
+-	 * Do destroy all inherited groups, we don't care about those
+-	 * and being thorough is better.
+-	 */
+-	raw_spin_lock_irq(&child_ctx->lock);
+-	WARN_ON_ONCE(child_ctx->is_active);
++	if (parent_event) {
++		/*
++		 * Do not destroy the 'original' grouping; because of the
++		 * context switch optimization the original events could've
++		 * ended up in a random child task.
++		 *
++		 * If we were to destroy the original group, all group related
++		 * operations would cease to function properly after this
++		 * random child dies.
++		 *
++		 * Do destroy all inherited groups, we don't care about those
++		 * and being thorough is better.
++		 */
++		detach_flags = DETACH_GROUP | DETACH_CHILD;
++		mutex_lock(&parent_event->child_mutex);
++	}
+ 
+-	if (parent_event)
+-		perf_group_detach(child_event);
+-	list_del_event(child_event, child_ctx);
+-	perf_event_set_state(child_event, PERF_EVENT_STATE_EXIT); /* is_event_hup() */
+-	raw_spin_unlock_irq(&child_ctx->lock);
++	perf_remove_from_context(event, detach_flags);
++
++	raw_spin_lock_irq(&ctx->lock);
++	if (event->state > PERF_EVENT_STATE_EXIT)
++		perf_event_set_state(event, PERF_EVENT_STATE_EXIT);
++	raw_spin_unlock_irq(&ctx->lock);
+ 
+ 	/*
+-	 * Parent events are governed by their filedesc, retain them.
++	 * Child events can be freed.
+ 	 */
+-	if (!parent_event) {
+-		perf_event_wakeup(child_event);
++	if (parent_event) {
++		mutex_unlock(&parent_event->child_mutex);
++		/*
++		 * Kick perf_poll() for is_event_hup();
++		 */
++		perf_event_wakeup(parent_event);
++		free_event(event);
++		put_event(parent_event);
+ 		return;
+ 	}
+-	/*
+-	 * Child events can be cleaned up.
+-	 */
+-
+-	sync_child_event(child_event, child);
+ 
+ 	/*
+-	 * Remove this event from the parent's list
+-	 */
+-	WARN_ON_ONCE(parent_event->ctx->parent_ctx);
+-	mutex_lock(&parent_event->child_mutex);
+-	list_del_init(&child_event->child_list);
+-	mutex_unlock(&parent_event->child_mutex);
+-
+-	/*
+-	 * Kick perf_poll() for is_event_hup().
++	 * Parent events are governed by their filedesc, retain them.
+ 	 */
+-	perf_event_wakeup(parent_event);
+-	free_event(child_event);
+-	put_event(parent_event);
++	perf_event_wakeup(event);
+ }
+ 
+ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
+@@ -12381,7 +12396,7 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
+ 	perf_event_task(child, child_ctx, 0);
+ 
+ 	list_for_each_entry_safe(child_event, next, &child_ctx->event_list, event_entry)
+-		perf_event_exit_event(child_event, child_ctx, child);
++		perf_event_exit_event(child_event, child_ctx);
+ 
+ 	mutex_unlock(&child_ctx->mutex);
+ 
+@@ -12641,6 +12656,7 @@ inherit_event(struct perf_event *parent_event,
+ 	 */
+ 	raw_spin_lock_irqsave(&child_ctx->lock, flags);
+ 	add_event_to_ctx(child_event, child_ctx);
++	child_event->attach_state |= PERF_ATTACH_CHILD;
+ 	raw_spin_unlock_irqrestore(&child_ctx->lock, flags);
+ 
+ 	/*
+diff --git a/kernel/futex.c b/kernel/futex.c
+index 7cf1987cfdb4f..3136aba177720 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -3714,8 +3714,7 @@ long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
+ 
+ 	if (op & FUTEX_CLOCK_REALTIME) {
+ 		flags |= FLAGS_CLOCKRT;
+-		if (cmd != FUTEX_WAIT && cmd != FUTEX_WAIT_BITSET && \
+-		    cmd != FUTEX_WAIT_REQUEUE_PI)
++		if (cmd != FUTEX_WAIT_BITSET &&	cmd != FUTEX_WAIT_REQUEUE_PI)
+ 			return -ENOSYS;
+ 	}
+ 
+@@ -3785,7 +3784,7 @@ SYSCALL_DEFINE6(futex, u32 __user *, uaddr, int, op, u32, val,
+ 		t = timespec64_to_ktime(ts);
+ 		if (cmd == FUTEX_WAIT)
+ 			t = ktime_add_safe(ktime_get(), t);
+-		else if (!(op & FUTEX_CLOCK_REALTIME))
++		else if (cmd != FUTEX_LOCK_PI && !(op & FUTEX_CLOCK_REALTIME))
+ 			t = timens_ktime_to_host(CLOCK_MONOTONIC, t);
+ 		tp = &t;
+ 	}
+@@ -3979,7 +3978,7 @@ SYSCALL_DEFINE6(futex_time32, u32 __user *, uaddr, int, op, u32, val,
+ 		t = timespec64_to_ktime(ts);
+ 		if (cmd == FUTEX_WAIT)
+ 			t = ktime_add_safe(ktime_get(), t);
+-		else if (!(op & FUTEX_CLOCK_REALTIME))
++		else if (cmd != FUTEX_LOCK_PI && !(op & FUTEX_CLOCK_REALTIME))
+ 			t = timens_ktime_to_host(CLOCK_MONOTONIC, t);
+ 		tp = &t;
+ 	}
+diff --git a/kernel/irq/matrix.c b/kernel/irq/matrix.c
+index 651a4ad6d711f..8e586858bcf41 100644
+--- a/kernel/irq/matrix.c
++++ b/kernel/irq/matrix.c
+@@ -423,7 +423,9 @@ void irq_matrix_free(struct irq_matrix *m, unsigned int cpu,
+ 	if (WARN_ON_ONCE(bit < m->alloc_start || bit >= m->alloc_end))
+ 		return;
+ 
+-	clear_bit(bit, cm->alloc_map);
++	if (WARN_ON_ONCE(!test_and_clear_bit(bit, cm->alloc_map)))
++		return;
++
+ 	cm->allocated--;
+ 	if(managed)
+ 		cm->managed_allocated--;
+diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
+index 3bf98db9c702d..23e7acb5c6679 100644
+--- a/kernel/kcsan/core.c
++++ b/kernel/kcsan/core.c
+@@ -639,8 +639,6 @@ void __init kcsan_init(void)
+ 
+ 	BUG_ON(!in_task());
+ 
+-	kcsan_debugfs_init();
+-
+ 	for_each_possible_cpu(cpu)
+ 		per_cpu(kcsan_rand_state, cpu) = (u32)get_cycles();
+ 
+diff --git a/kernel/kcsan/debugfs.c b/kernel/kcsan/debugfs.c
+index 3c8093a371b1c..209ad8dcfcecf 100644
+--- a/kernel/kcsan/debugfs.c
++++ b/kernel/kcsan/debugfs.c
+@@ -261,7 +261,9 @@ static const struct file_operations debugfs_ops =
+ 	.release = single_release
+ };
+ 
+-void __init kcsan_debugfs_init(void)
++static void __init kcsan_debugfs_init(void)
+ {
+ 	debugfs_create_file("kcsan", 0644, NULL, NULL, &debugfs_ops);
+ }
++
++late_initcall(kcsan_debugfs_init);
+diff --git a/kernel/kcsan/kcsan.h b/kernel/kcsan/kcsan.h
+index 8d4bf3431b3cc..87ccdb3b051fd 100644
+--- a/kernel/kcsan/kcsan.h
++++ b/kernel/kcsan/kcsan.h
+@@ -30,11 +30,6 @@ extern bool kcsan_enabled;
+ void kcsan_save_irqtrace(struct task_struct *task);
+ void kcsan_restore_irqtrace(struct task_struct *task);
+ 
+-/*
+- * Initialize debugfs file.
+- */
+-void kcsan_debugfs_init(void);
+-
+ /*
+  * Statistics counters displayed via debugfs; should only be modified in
+  * slow-paths.
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 5dc36c6e80fdc..8a5cc76ecac96 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3386,7 +3386,7 @@ static void fill_page_cache_func(struct work_struct *work)
+ 
+ 	for (i = 0; i < rcu_min_cached_objs; i++) {
+ 		bnode = (struct kvfree_rcu_bulk_data *)
+-			__get_free_page(GFP_KERNEL | __GFP_NOWARN);
++			__get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
+ 
+ 		if (bnode) {
+ 			raw_spin_lock_irqsave(&krcp->lock, flags);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 3486053060276..a0239649c7419 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -700,7 +700,13 @@ static u64 __sched_period(unsigned long nr_running)
+  */
+ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
+ {
+-	u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
++	unsigned int nr_running = cfs_rq->nr_running;
++	u64 slice;
++
++	if (sched_feat(ALT_PERIOD))
++		nr_running = rq_of(cfs_rq)->cfs.h_nr_running;
++
++	slice = __sched_period(nr_running + !se->on_rq);
+ 
+ 	for_each_sched_entity(se) {
+ 		struct load_weight *load;
+@@ -717,6 +723,10 @@ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
+ 		}
+ 		slice = __calc_delta(slice, se->load.weight, load);
+ 	}
++
++	if (sched_feat(BASE_SLICE))
++		slice = max(slice, (u64)sysctl_sched_min_granularity);
++
+ 	return slice;
+ }
+ 
+@@ -3948,6 +3958,8 @@ static inline void util_est_dequeue(struct cfs_rq *cfs_rq,
+ 	trace_sched_util_est_cfs_tp(cfs_rq);
+ }
+ 
++#define UTIL_EST_MARGIN (SCHED_CAPACITY_SCALE / 100)
++
+ /*
+  * Check if a (signed) value is within a specified (unsigned) margin,
+  * based on the observation that:
+@@ -3965,7 +3977,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
+ 				   struct task_struct *p,
+ 				   bool task_sleep)
+ {
+-	long last_ewma_diff;
++	long last_ewma_diff, last_enqueued_diff;
+ 	struct util_est ue;
+ 
+ 	if (!sched_feat(UTIL_EST))
+@@ -3986,6 +3998,8 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
+ 	if (ue.enqueued & UTIL_AVG_UNCHANGED)
+ 		return;
+ 
++	last_enqueued_diff = ue.enqueued;
++
+ 	/*
+ 	 * Reset EWMA on utilization increases, the moving average is used only
+ 	 * to smooth utilization decreases.
+@@ -3999,12 +4013,17 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
+ 	}
+ 
+ 	/*
+-	 * Skip update of task's estimated utilization when its EWMA is
++	 * Skip update of task's estimated utilization when its members are
+ 	 * already ~1% close to its last activation value.
+ 	 */
+ 	last_ewma_diff = ue.enqueued - ue.ewma;
+-	if (within_margin(last_ewma_diff, (SCHED_CAPACITY_SCALE / 100)))
++	last_enqueued_diff -= ue.enqueued;
++	if (within_margin(last_ewma_diff, UTIL_EST_MARGIN)) {
++		if (!within_margin(last_enqueued_diff, UTIL_EST_MARGIN))
++			goto done;
++
+ 		return;
++	}
+ 
+ 	/*
+ 	 * To avoid overestimation of actual task utilization, skip updates if
+@@ -7543,6 +7562,10 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
+ 	if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
+ 		return 0;
+ 
++	/* Disregard pcpu kthreads; they are where they need to be. */
++	if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
++		return 0;
++
+ 	if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {
+ 		int cpu;
+ 
+diff --git a/kernel/sched/features.h b/kernel/sched/features.h
+index 68d369cba9e45..f1bf5e12d889e 100644
+--- a/kernel/sched/features.h
++++ b/kernel/sched/features.h
+@@ -90,3 +90,6 @@ SCHED_FEAT(WA_BIAS, true)
+  */
+ SCHED_FEAT(UTIL_EST, true)
+ SCHED_FEAT(UTIL_EST_FASTUP, true)
++
++SCHED_FEAT(ALT_PERIOD, true)
++SCHED_FEAT(BASE_SLICE, true)
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index 967732c0766c5..651218ded9817 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -711,14 +711,15 @@ static void psi_group_change(struct psi_group *group, int cpu,
+ 	for (t = 0, m = clear; m; m &= ~(1 << t), t++) {
+ 		if (!(m & (1 << t)))
+ 			continue;
+-		if (groupc->tasks[t] == 0 && !psi_bug) {
++		if (groupc->tasks[t]) {
++			groupc->tasks[t]--;
++		} else if (!psi_bug) {
+ 			printk_deferred(KERN_ERR "psi: task underflow! cpu=%d t=%d tasks=[%u %u %u %u] clear=%x set=%x\n",
+ 					cpu, t, groupc->tasks[0],
+ 					groupc->tasks[1], groupc->tasks[2],
+ 					groupc->tasks[3], clear, set);
+ 			psi_bug = 1;
+ 		}
+-		groupc->tasks[t]--;
+ 	}
+ 
+ 	for (t = 0; set; set &= ~(1 << t), t++)
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index bf540f5a4115a..dd5697d7347b1 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -1191,8 +1191,8 @@ SYSCALL_DEFINE2(clock_adjtime32, clockid_t, which_clock,
+ 
+ 	err = do_clock_adjtime(which_clock, &ktx);
+ 
+-	if (err >= 0)
+-		err = put_old_timex32(utp, &ktx);
++	if (err >= 0 && put_old_timex32(utp, &ktx))
++		return -EFAULT;
+ 
+ 	return err;
+ }
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index b1983c2aeb53c..a6d15a3187d0e 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -5632,7 +5632,10 @@ int ftrace_regex_release(struct inode *inode, struct file *file)
+ 
+ 	parser = &iter->parser;
+ 	if (trace_parser_loaded(parser)) {
+-		ftrace_match_records(iter->hash, parser->buffer, parser->idx);
++		int enable = !(iter->flags & FTRACE_ITER_NOTRACE);
++
++		ftrace_process_regex(iter, parser->buffer,
++				     parser->idx, enable);
+ 	}
+ 
+ 	trace_parser_put(parser);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 8bfa4e78d8951..321f7f7a29b4b 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2387,14 +2387,13 @@ static void tracing_stop_tr(struct trace_array *tr)
+ 
+ static int trace_save_cmdline(struct task_struct *tsk)
+ {
+-	unsigned pid, idx;
++	unsigned tpid, idx;
+ 
+ 	/* treat recording of idle task as a success */
+ 	if (!tsk->pid)
+ 		return 1;
+ 
+-	if (unlikely(tsk->pid > PID_MAX_DEFAULT))
+-		return 0;
++	tpid = tsk->pid & (PID_MAX_DEFAULT - 1);
+ 
+ 	/*
+ 	 * It's not the end of the world if we don't get
+@@ -2405,26 +2404,15 @@ static int trace_save_cmdline(struct task_struct *tsk)
+ 	if (!arch_spin_trylock(&trace_cmdline_lock))
+ 		return 0;
+ 
+-	idx = savedcmd->map_pid_to_cmdline[tsk->pid];
++	idx = savedcmd->map_pid_to_cmdline[tpid];
+ 	if (idx == NO_CMDLINE_MAP) {
+ 		idx = (savedcmd->cmdline_idx + 1) % savedcmd->cmdline_num;
+ 
+-		/*
+-		 * Check whether the cmdline buffer at idx has a pid
+-		 * mapped. We are going to overwrite that entry so we
+-		 * need to clear the map_pid_to_cmdline. Otherwise we
+-		 * would read the new comm for the old pid.
+-		 */
+-		pid = savedcmd->map_cmdline_to_pid[idx];
+-		if (pid != NO_CMDLINE_MAP)
+-			savedcmd->map_pid_to_cmdline[pid] = NO_CMDLINE_MAP;
+-
+-		savedcmd->map_cmdline_to_pid[idx] = tsk->pid;
+-		savedcmd->map_pid_to_cmdline[tsk->pid] = idx;
+-
++		savedcmd->map_pid_to_cmdline[tpid] = idx;
+ 		savedcmd->cmdline_idx = idx;
+ 	}
+ 
++	savedcmd->map_cmdline_to_pid[idx] = tsk->pid;
+ 	set_cmdline(idx, tsk->comm);
+ 
+ 	arch_spin_unlock(&trace_cmdline_lock);
+@@ -2435,6 +2423,7 @@ static int trace_save_cmdline(struct task_struct *tsk)
+ static void __trace_find_cmdline(int pid, char comm[])
+ {
+ 	unsigned map;
++	int tpid;
+ 
+ 	if (!pid) {
+ 		strcpy(comm, "<idle>");
+@@ -2446,16 +2435,16 @@ static void __trace_find_cmdline(int pid, char comm[])
+ 		return;
+ 	}
+ 
+-	if (pid > PID_MAX_DEFAULT) {
+-		strcpy(comm, "<...>");
+-		return;
++	tpid = pid & (PID_MAX_DEFAULT - 1);
++	map = savedcmd->map_pid_to_cmdline[tpid];
++	if (map != NO_CMDLINE_MAP) {
++		tpid = savedcmd->map_cmdline_to_pid[map];
++		if (tpid == pid) {
++			strlcpy(comm, get_saved_cmdlines(map), TASK_COMM_LEN);
++			return;
++		}
+ 	}
+-
+-	map = savedcmd->map_pid_to_cmdline[pid];
+-	if (map != NO_CMDLINE_MAP)
+-		strlcpy(comm, get_saved_cmdlines(map), TASK_COMM_LEN);
+-	else
+-		strcpy(comm, "<...>");
++	strcpy(comm, "<...>");
+ }
+ 
+ void trace_find_cmdline(int pid, char comm[])
+diff --git a/kernel/trace/trace_clock.c b/kernel/trace/trace_clock.c
+index aaf6793ededaa..c1637f90c8a38 100644
+--- a/kernel/trace/trace_clock.c
++++ b/kernel/trace/trace_clock.c
+@@ -95,33 +95,49 @@ u64 notrace trace_clock_global(void)
+ {
+ 	unsigned long flags;
+ 	int this_cpu;
+-	u64 now;
++	u64 now, prev_time;
+ 
+ 	raw_local_irq_save(flags);
+ 
+ 	this_cpu = raw_smp_processor_id();
+-	now = sched_clock_cpu(this_cpu);
++
+ 	/*
+-	 * If in an NMI context then dont risk lockups and return the
+-	 * cpu_clock() time:
++	 * The global clock "guarantees" that the events are ordered
++	 * between CPUs. But if two events on two different CPUS call
++	 * trace_clock_global at roughly the same time, it really does
++	 * not matter which one gets the earlier time. Just make sure
++	 * that the same CPU will always show a monotonic clock.
++	 *
++	 * Use a read memory barrier to get the latest written
++	 * time that was recorded.
+ 	 */
+-	if (unlikely(in_nmi()))
+-		goto out;
++	smp_rmb();
++	prev_time = READ_ONCE(trace_clock_struct.prev_time);
++	now = sched_clock_cpu(this_cpu);
+ 
+-	arch_spin_lock(&trace_clock_struct.lock);
++	/* Make sure that now is always greater than prev_time */
++	if ((s64)(now - prev_time) < 0)
++		now = prev_time + 1;
+ 
+ 	/*
+-	 * TODO: if this happens often then maybe we should reset
+-	 * my_scd->clock to prev_time+1, to make sure
+-	 * we start ticking with the local clock from now on?
++	 * If in an NMI context then dont risk lockups and simply return
++	 * the current time.
+ 	 */
+-	if ((s64)(now - trace_clock_struct.prev_time) < 0)
+-		now = trace_clock_struct.prev_time + 1;
++	if (unlikely(in_nmi()))
++		goto out;
+ 
+-	trace_clock_struct.prev_time = now;
++	/* Tracing can cause strange recursion, always use a try lock */
++	if (arch_spin_trylock(&trace_clock_struct.lock)) {
++		/* Reread prev_time in case it was already updated */
++		prev_time = READ_ONCE(trace_clock_struct.prev_time);
++		if ((s64)(now - prev_time) < 0)
++			now = prev_time + 1;
+ 
+-	arch_spin_unlock(&trace_clock_struct.lock);
++		trace_clock_struct.prev_time = now;
+ 
++		/* The unlock acts as the wmb for the above rmb */
++		arch_spin_unlock(&trace_clock_struct.lock);
++	}
+  out:
+ 	raw_local_irq_restore(flags);
+ 
+diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
+index c70d6347afa2b..921d0a654243c 100644
+--- a/lib/dynamic_debug.c
++++ b/lib/dynamic_debug.c
+@@ -396,7 +396,7 @@ static int ddebug_parse_query(char *words[], int nwords,
+ 			/* tail :$info is function or line-range */
+ 			fline = strchr(query->filename, ':');
+ 			if (!fline)
+-				break;
++				continue;
+ 			*fline++ = '\0';
+ 			if (isalpha(*fline) || *fline == '*' || *fline == '?') {
+ 				/* take as function name */
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index 14c9a6af1b239..fd0fde639ec91 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -3102,8 +3102,6 @@ int bstr_printf(char *buf, size_t size, const char *fmt, const u32 *bin_buf)
+ 			switch (*fmt) {
+ 			case 'S':
+ 			case 's':
+-			case 'F':
+-			case 'f':
+ 			case 'x':
+ 			case 'K':
+ 			case 'e':
+diff --git a/net/bluetooth/ecdh_helper.h b/net/bluetooth/ecdh_helper.h
+index a6f8d03d4aaf6..830723971cf83 100644
+--- a/net/bluetooth/ecdh_helper.h
++++ b/net/bluetooth/ecdh_helper.h
+@@ -25,6 +25,6 @@
+ 
+ int compute_ecdh_secret(struct crypto_kpp *tfm, const u8 pair_public_key[64],
+ 			u8 secret[32]);
+-int set_ecdh_privkey(struct crypto_kpp *tfm, const u8 *private_key);
++int set_ecdh_privkey(struct crypto_kpp *tfm, const u8 private_key[32]);
+ int generate_ecdh_public_key(struct crypto_kpp *tfm, u8 public_key[64]);
+ int generate_ecdh_keys(struct crypto_kpp *tfm, u8 public_key[64]);
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index e8902a7e60f24..fc487f9812fc5 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -827,17 +827,17 @@ static void ovs_fragment(struct net *net, struct vport *vport,
+ 	}
+ 
+ 	if (key->eth.type == htons(ETH_P_IP)) {
+-		struct dst_entry ovs_dst;
++		struct rtable ovs_rt = { 0 };
+ 		unsigned long orig_dst;
+ 
+ 		prepare_frag(vport, skb, orig_network_offset,
+ 			     ovs_key_mac_proto(key));
+-		dst_init(&ovs_dst, &ovs_dst_ops, NULL, 1,
++		dst_init(&ovs_rt.dst, &ovs_dst_ops, NULL, 1,
+ 			 DST_OBSOLETE_NONE, DST_NOCOUNT);
+-		ovs_dst.dev = vport->dev;
++		ovs_rt.dst.dev = vport->dev;
+ 
+ 		orig_dst = skb->_skb_refdst;
+-		skb_dst_set_noref(skb, &ovs_dst);
++		skb_dst_set_noref(skb, &ovs_rt.dst);
+ 		IPCB(skb)->frag_max_size = mru;
+ 
+ 		ip_do_fragment(net, skb->sk, skb, ovs_vport_output);
+diff --git a/security/commoncap.c b/security/commoncap.c
+index a6c9bb4441d54..28d582ed80c9d 100644
+--- a/security/commoncap.c
++++ b/security/commoncap.c
+@@ -391,7 +391,7 @@ int cap_inode_getsecurity(struct inode *inode, const char *name, void **buffer,
+ 				 &tmpbuf, size, GFP_NOFS);
+ 	dput(dentry);
+ 
+-	if (ret < 0)
++	if (ret < 0 || !tmpbuf)
+ 		return ret;
+ 
+ 	fs_ns = inode->i_sb->s_user_ns;
+diff --git a/sound/isa/sb/emu8000.c b/sound/isa/sb/emu8000.c
+index 0aa545ac6e60c..1c90421a88dcd 100644
+--- a/sound/isa/sb/emu8000.c
++++ b/sound/isa/sb/emu8000.c
+@@ -1029,8 +1029,10 @@ snd_emu8000_create_mixer(struct snd_card *card, struct snd_emu8000 *emu)
+ 
+ 	memset(emu->controls, 0, sizeof(emu->controls));
+ 	for (i = 0; i < EMU8000_NUM_CONTROLS; i++) {
+-		if ((err = snd_ctl_add(card, emu->controls[i] = snd_ctl_new1(mixer_defs[i], emu))) < 0)
++		if ((err = snd_ctl_add(card, emu->controls[i] = snd_ctl_new1(mixer_defs[i], emu))) < 0) {
++			emu->controls[i] = NULL;
+ 			goto __error;
++		}
+ 	}
+ 	return 0;
+ 
+diff --git a/sound/isa/sb/sb16_csp.c b/sound/isa/sb/sb16_csp.c
+index 270af863e198b..1528e04a4d28e 100644
+--- a/sound/isa/sb/sb16_csp.c
++++ b/sound/isa/sb/sb16_csp.c
+@@ -1045,10 +1045,14 @@ static int snd_sb_qsound_build(struct snd_sb_csp * p)
+ 
+ 	spin_lock_init(&p->q_lock);
+ 
+-	if ((err = snd_ctl_add(card, p->qsound_switch = snd_ctl_new1(&snd_sb_qsound_switch, p))) < 0)
++	if ((err = snd_ctl_add(card, p->qsound_switch = snd_ctl_new1(&snd_sb_qsound_switch, p))) < 0) {
++		p->qsound_switch = NULL;
+ 		goto __error;
+-	if ((err = snd_ctl_add(card, p->qsound_space = snd_ctl_new1(&snd_sb_qsound_space, p))) < 0)
++	}
++	if ((err = snd_ctl_add(card, p->qsound_space = snd_ctl_new1(&snd_sb_qsound_space, p))) < 0) {
++		p->qsound_space = NULL;
+ 		goto __error;
++	}
+ 
+ 	return 0;
+ 
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 7aa9062f4f838..8098088b00568 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -930,18 +930,18 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8079, "HP EliteBook 840 G3", CXT_FIXUP_HP_DOCK),
+ 	SND_PCI_QUIRK(0x103c, 0x807C, "HP EliteBook 820 G3", CXT_FIXUP_HP_DOCK),
+ 	SND_PCI_QUIRK(0x103c, 0x80FD, "HP ProBook 640 G2", CXT_FIXUP_HP_DOCK),
+-	SND_PCI_QUIRK(0x103c, 0x828c, "HP EliteBook 840 G4", CXT_FIXUP_HP_DOCK),
+-	SND_PCI_QUIRK(0x103c, 0x83b2, "HP EliteBook 840 G5", CXT_FIXUP_HP_DOCK),
+-	SND_PCI_QUIRK(0x103c, 0x83b3, "HP EliteBook 830 G5", CXT_FIXUP_HP_DOCK),
+-	SND_PCI_QUIRK(0x103c, 0x83d3, "HP ProBook 640 G4", CXT_FIXUP_HP_DOCK),
+-	SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE),
+ 	SND_PCI_QUIRK(0x103c, 0x8115, "HP Z1 Gen3", CXT_FIXUP_HP_GATE_MIC),
+ 	SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO),
++	SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE),
+ 	SND_PCI_QUIRK(0x103c, 0x822e, "HP ProBook 440 G4", CXT_FIXUP_MUTE_LED_GPIO),
+-	SND_PCI_QUIRK(0x103c, 0x836e, "HP ProBook 455 G5", CXT_FIXUP_MUTE_LED_GPIO),
+-	SND_PCI_QUIRK(0x103c, 0x837f, "HP ProBook 470 G5", CXT_FIXUP_MUTE_LED_GPIO),
++	SND_PCI_QUIRK(0x103c, 0x828c, "HP EliteBook 840 G4", CXT_FIXUP_HP_DOCK),
+ 	SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x836e, "HP ProBook 455 G5", CXT_FIXUP_MUTE_LED_GPIO),
++	SND_PCI_QUIRK(0x103c, 0x837f, "HP ProBook 470 G5", CXT_FIXUP_MUTE_LED_GPIO),
++	SND_PCI_QUIRK(0x103c, 0x83b2, "HP EliteBook 840 G5", CXT_FIXUP_HP_DOCK),
++	SND_PCI_QUIRK(0x103c, 0x83b3, "HP EliteBook 830 G5", CXT_FIXUP_HP_DOCK),
++	SND_PCI_QUIRK(0x103c, 0x83d3, "HP ProBook 640 G4", CXT_FIXUP_HP_DOCK),
+ 	SND_PCI_QUIRK(0x103c, 0x8402, "HP ProBook 645 G4", CXT_FIXUP_MUTE_LED_GPIO),
+ 	SND_PCI_QUIRK(0x103c, 0x8427, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x844f, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index a7544b77d3f7c..d05d16ddbdf2c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2552,8 +2552,10 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
+@@ -4438,6 +4440,25 @@ static void alc236_fixup_hp_mute_led(struct hda_codec *codec,
+ 	alc236_fixup_hp_coef_micmute_led(codec, fix, action);
+ }
+ 
++static void alc236_fixup_hp_micmute_led_vref(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++		spec->cap_mute_led_nid = 0x1a;
++		snd_hda_gen_add_micmute_led_cdev(codec, vref_micmute_led_set);
++		codec->power_filter = led_power_filter;
++	}
++}
++
++static void alc236_fixup_hp_mute_led_micmute_vref(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	alc236_fixup_hp_mute_led_coefbit(codec, fix, action);
++	alc236_fixup_hp_micmute_led_vref(codec, fix, action);
++}
++
+ #if IS_REACHABLE(CONFIG_INPUT)
+ static void gpio2_mic_hotkey_event(struct hda_codec *codec,
+ 				   struct hda_jack_callback *event)
+@@ -6400,6 +6421,7 @@ enum {
+ 	ALC285_FIXUP_HP_MUTE_LED,
+ 	ALC236_FIXUP_HP_GPIO_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED,
++	ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
+ 	ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ 	ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
+ 	ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS,
+@@ -6415,6 +6437,8 @@ enum {
+ 	ALC269_FIXUP_LEMOTE_A1802,
+ 	ALC269_FIXUP_LEMOTE_A190X,
+ 	ALC256_FIXUP_INTEL_NUC8_RUGGED,
++	ALC233_FIXUP_INTEL_NUC8_DMIC,
++	ALC233_FIXUP_INTEL_NUC8_BOOST,
+ 	ALC256_FIXUP_INTEL_NUC10,
+ 	ALC255_FIXUP_XIAOMI_HEADSET_MIC,
+ 	ALC274_FIXUP_HP_MIC,
+@@ -7136,6 +7160,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc233_fixup_lenovo_line2_mic_hotkey,
+ 	},
++	[ALC233_FIXUP_INTEL_NUC8_DMIC] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc_fixup_inv_dmic,
++		.chained = true,
++		.chain_id = ALC233_FIXUP_INTEL_NUC8_BOOST,
++	},
++	[ALC233_FIXUP_INTEL_NUC8_BOOST] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269_fixup_limit_int_mic_boost
++	},
+ 	[ALC255_FIXUP_DELL_SPK_NOISE] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc_fixup_disable_aamix,
+@@ -7646,6 +7680,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc236_fixup_hp_mute_led,
+ 	},
++	[ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc236_fixup_hp_mute_led_micmute_vref,
++	},
+ 	[ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET] = {
+ 		.type = HDA_FIXUP_VERBS,
+ 		.v.verbs = (const struct hda_verb[]) {
+@@ -8051,6 +8089,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x103c, 0x802e, "HP Z240 SFF", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x802f, "HP Z240", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x8077, "HP", ALC256_FIXUP_HP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x103c, 0x8158, "HP", ALC256_FIXUP_HP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC),
+ 	SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360),
+@@ -8063,6 +8103,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8730, "HP ProBook 445 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED),
+@@ -8113,6 +8154,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
++	SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+@@ -8279,6 +8321,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+ 	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
++	SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+ 	SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+ 	SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10),
+ 
+@@ -8733,12 +8776,7 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x12, 0x90a60130},
+ 		{0x19, 0x03a11020},
+ 		{0x21, 0x0321101f}),
+-	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+-		{0x14, 0x90170110},
+-		{0x19, 0x04a11040},
+-		{0x21, 0x04211020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
+-		{0x12, 0x90a60130},
+ 		{0x14, 0x90170110},
+ 		{0x19, 0x04a11040},
+ 		{0x21, 0x04211020}),
+@@ -8909,6 +8947,10 @@ static const struct snd_hda_pin_quirk alc269_fallback_pin_fixup_tbl[] = {
+ 	SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
+ 		{0x19, 0x40000000},
+ 		{0x1a, 0x40000000}),
++	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
++		{0x14, 0x90170110},
++		{0x19, 0x04a11040},
++		{0x21, 0x04211020}),
+ 	{}
+ };
+ 
+diff --git a/sound/usb/clock.c b/sound/usb/clock.c
+index 674e15bf98ed5..e3d97e5112fde 100644
+--- a/sound/usb/clock.c
++++ b/sound/usb/clock.c
+@@ -296,7 +296,7 @@ static int __uac_clock_find_source(struct snd_usb_audio *chip,
+ 
+ 	selector = snd_usb_find_clock_selector(chip->ctrl_intf, entity_id);
+ 	if (selector) {
+-		int ret, i, cur;
++		int ret, i, cur, err;
+ 
+ 		/* the entity ID we are looking for is a selector.
+ 		 * find out what it currently selects */
+@@ -318,13 +318,17 @@ static int __uac_clock_find_source(struct snd_usb_audio *chip,
+ 		ret = __uac_clock_find_source(chip, fmt,
+ 					      selector->baCSourceID[ret - 1],
+ 					      visited, validate);
++		if (ret > 0) {
++			err = uac_clock_selector_set_val(chip, entity_id, cur);
++			if (err < 0)
++				return err;
++		}
++
+ 		if (!validate || ret > 0 || !chip->autoclock)
+ 			return ret;
+ 
+ 		/* The current clock source is invalid, try others. */
+ 		for (i = 1; i <= selector->bNrInPins; i++) {
+-			int err;
+-
+ 			if (i == cur)
+ 				continue;
+ 
+@@ -390,7 +394,7 @@ static int __uac3_clock_find_source(struct snd_usb_audio *chip,
+ 
+ 	selector = snd_usb_find_clock_selector_v3(chip->ctrl_intf, entity_id);
+ 	if (selector) {
+-		int ret, i, cur;
++		int ret, i, cur, err;
+ 
+ 		/* the entity ID we are looking for is a selector.
+ 		 * find out what it currently selects */
+@@ -412,6 +416,12 @@ static int __uac3_clock_find_source(struct snd_usb_audio *chip,
+ 		ret = __uac3_clock_find_source(chip, fmt,
+ 					       selector->baCSourceID[ret - 1],
+ 					       visited, validate);
++		if (ret > 0) {
++			err = uac_clock_selector_set_val(chip, entity_id, cur);
++			if (err < 0)
++				return err;
++		}
++
+ 		if (!validate || ret > 0 || !chip->autoclock)
+ 			return ret;
+ 
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 646deb6244b15..c5794e83fd800 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -337,6 +337,13 @@ static const struct usbmix_name_map bose_companion5_map[] = {
+ 	{ 0 }	/* terminator */
+ };
+ 
++/* Sennheiser Communications Headset [PC 8], the dB value is reported as -6 negative maximum  */
++static const struct usbmix_dB_map sennheiser_pc8_dB = {-9500, 0};
++static const struct usbmix_name_map sennheiser_pc8_map[] = {
++	{ 9, NULL, .dB = &sennheiser_pc8_dB },
++	{ 0 }   /* terminator */
++};
++
+ /*
+  * Dell usb dock with ALC4020 codec had a firmware problem where it got
+  * screwed up when zero volume is passed; just skip it as a workaround
+@@ -593,6 +600,11 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.id = USB_ID(0x17aa, 0x1046),
+ 		.map = lenovo_p620_rear_map,
+ 	},
++	{
++		/* Sennheiser Communications Headset [PC 8] */
++		.id = USB_ID(0x1395, 0x0025),
++		.map = sennheiser_pc8_map,
++	},
+ 	{ 0 } /* terminator */
+ };
+ 
+diff --git a/tools/power/x86/intel-speed-select/isst-display.c b/tools/power/x86/intel-speed-select/isst-display.c
+index e105fece47b61..f32ce0362eb7b 100644
+--- a/tools/power/x86/intel-speed-select/isst-display.c
++++ b/tools/power/x86/intel-speed-select/isst-display.c
+@@ -25,10 +25,14 @@ static void printcpulist(int str_len, char *str, int mask_size,
+ 			index = snprintf(&str[curr_index],
+ 					 str_len - curr_index, ",");
+ 			curr_index += index;
++			if (curr_index >= str_len)
++				break;
+ 		}
+ 		index = snprintf(&str[curr_index], str_len - curr_index, "%d",
+ 				 i);
+ 		curr_index += index;
++		if (curr_index >= str_len)
++			break;
+ 		first = 0;
+ 	}
+ }
+@@ -64,10 +68,14 @@ static void printcpumask(int str_len, char *str, int mask_size,
+ 		index = snprintf(&str[curr_index], str_len - curr_index, "%08x",
+ 				 mask[i]);
+ 		curr_index += index;
++		if (curr_index >= str_len)
++			break;
+ 		if (i) {
+ 			strncat(&str[curr_index], ",", str_len - curr_index);
+ 			curr_index++;
+ 		}
++		if (curr_index >= str_len)
++			break;
+ 	}
+ 
+ 	free(mask);
+@@ -185,7 +193,7 @@ static void _isst_pbf_display_information(int cpu, FILE *outf, int level,
+ 					  int disp_level)
+ {
+ 	char header[256];
+-	char value[256];
++	char value[512];
+ 
+ 	snprintf(header, sizeof(header), "speed-select-base-freq-properties");
+ 	format_and_print(outf, disp_level, header, NULL);
+@@ -349,7 +357,7 @@ void isst_ctdp_display_information(int cpu, FILE *outf, int tdp_level,
+ 				   struct isst_pkg_ctdp *pkg_dev)
+ {
+ 	char header[256];
+-	char value[256];
++	char value[512];
+ 	static int level;
+ 	int i;
+ 
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index f3a1746f7f454..ca69bdb0159fd 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -291,13 +291,16 @@ struct msr_sum_array {
+ /* The percpu MSR sum array.*/
+ struct msr_sum_array *per_cpu_msr_sum;
+ 
+-int idx_to_offset(int idx)
++off_t idx_to_offset(int idx)
+ {
+-	int offset;
++	off_t offset;
+ 
+ 	switch (idx) {
+ 	case IDX_PKG_ENERGY:
+-		offset = MSR_PKG_ENERGY_STATUS;
++		if (do_rapl & RAPL_AMD_F17H)
++			offset = MSR_PKG_ENERGY_STAT;
++		else
++			offset = MSR_PKG_ENERGY_STATUS;
+ 		break;
+ 	case IDX_DRAM_ENERGY:
+ 		offset = MSR_DRAM_ENERGY_STATUS;
+@@ -320,12 +323,13 @@ int idx_to_offset(int idx)
+ 	return offset;
+ }
+ 
+-int offset_to_idx(int offset)
++int offset_to_idx(off_t offset)
+ {
+ 	int idx;
+ 
+ 	switch (offset) {
+ 	case MSR_PKG_ENERGY_STATUS:
++	case MSR_PKG_ENERGY_STAT:
+ 		idx = IDX_PKG_ENERGY;
+ 		break;
+ 	case MSR_DRAM_ENERGY_STATUS:
+@@ -353,7 +357,7 @@ int idx_valid(int idx)
+ {
+ 	switch (idx) {
+ 	case IDX_PKG_ENERGY:
+-		return do_rapl & RAPL_PKG;
++		return do_rapl & (RAPL_PKG | RAPL_AMD_F17H);
+ 	case IDX_DRAM_ENERGY:
+ 		return do_rapl & RAPL_DRAM;
+ 	case IDX_PP0_ENERGY:
+@@ -3245,7 +3249,7 @@ static int update_msr_sum(struct thread_data *t, struct core_data *c, struct pkg
+ 
+ 	for (i = IDX_PKG_ENERGY; i < IDX_COUNT; i++) {
+ 		unsigned long long msr_cur, msr_last;
+-		int offset;
++		off_t offset;
+ 
+ 		if (!idx_valid(i))
+ 			continue;
+@@ -3254,7 +3258,8 @@ static int update_msr_sum(struct thread_data *t, struct core_data *c, struct pkg
+ 			continue;
+ 		ret = get_msr(cpu, offset, &msr_cur);
+ 		if (ret) {
+-			fprintf(outf, "Can not update msr(0x%x)\n", offset);
++			fprintf(outf, "Can not update msr(0x%llx)\n",
++				(unsigned long long)offset);
+ 			continue;
+ 		}
+ 
+diff --git a/tools/testing/selftests/arm64/mte/Makefile b/tools/testing/selftests/arm64/mte/Makefile
+index 2480226dfe57c..4084ef108d050 100644
+--- a/tools/testing/selftests/arm64/mte/Makefile
++++ b/tools/testing/selftests/arm64/mte/Makefile
+@@ -6,9 +6,7 @@ SRCS := $(filter-out mte_common_util.c,$(wildcard *.c))
+ PROGS := $(patsubst %.c,%,$(SRCS))
+ 
+ #Add mte compiler option
+-ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep gcc),)
+ CFLAGS += -march=armv8.5-a+memtag
+-endif
+ 
+ #check if the compiler works well
+ mte_cc_support := $(shell if ($(CC) $(CFLAGS) -E -x c /dev/null -o /dev/null 2>&1) then echo "1"; fi)
+diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
+index 39f8908988eab..70665ba88cbb1 100644
+--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
++++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
+@@ -278,22 +278,13 @@ int mte_switch_mode(int mte_option, unsigned long incl_mask)
+ 	return 0;
+ }
+ 
+-#define ID_AA64PFR1_MTE_SHIFT		8
+-#define ID_AA64PFR1_MTE			2
+-
+ int mte_default_setup(void)
+ {
+-	unsigned long hwcaps = getauxval(AT_HWCAP);
++	unsigned long hwcaps2 = getauxval(AT_HWCAP2);
+ 	unsigned long en = 0;
+ 	int ret;
+ 
+-	if (!(hwcaps & HWCAP_CPUID)) {
+-		ksft_print_msg("FAIL: CPUID registers unavailable\n");
+-		return KSFT_FAIL;
+-	}
+-	/* Read ID_AA64PFR1_EL1 register */
+-	asm volatile("mrs %0, id_aa64pfr1_el1" : "=r"(hwcaps) : : "memory");
+-	if (((hwcaps >> ID_AA64PFR1_MTE_SHIFT) & MT_TAG_MASK) != ID_AA64PFR1_MTE) {
++	if (!(hwcaps2 & HWCAP2_MTE)) {
+ 		ksft_print_msg("FAIL: MTE features unavailable\n");
+ 		return KSFT_SKIP;
+ 	}
+diff --git a/tools/testing/selftests/resctrl/Makefile b/tools/testing/selftests/resctrl/Makefile
+index d585cc1948cc7..6bcee2ec91a9c 100644
+--- a/tools/testing/selftests/resctrl/Makefile
++++ b/tools/testing/selftests/resctrl/Makefile
+@@ -1,5 +1,5 @@
+ CC = $(CROSS_COMPILE)gcc
+-CFLAGS = -g -Wall
++CFLAGS = -g -Wall -O2 -D_FORTIFY_SOURCE=2
+ SRCS=$(wildcard *.c)
+ OBJS=$(SRCS:.c=.o)
+ 
+diff --git a/tools/testing/selftests/resctrl/cache.c b/tools/testing/selftests/resctrl/cache.c
+index 38dbf4962e333..5922cc1b03867 100644
+--- a/tools/testing/selftests/resctrl/cache.c
++++ b/tools/testing/selftests/resctrl/cache.c
+@@ -182,7 +182,7 @@ int measure_cache_vals(struct resctrl_val_param *param, int bm_pid)
+ 	/*
+ 	 * Measure cache miss from perf.
+ 	 */
+-	if (!strcmp(param->resctrl_val, "cat")) {
++	if (!strncmp(param->resctrl_val, CAT_STR, sizeof(CAT_STR))) {
+ 		ret = get_llc_perf(&llc_perf_miss);
+ 		if (ret < 0)
+ 			return ret;
+@@ -192,7 +192,7 @@ int measure_cache_vals(struct resctrl_val_param *param, int bm_pid)
+ 	/*
+ 	 * Measure llc occupancy from resctrl.
+ 	 */
+-	if (!strcmp(param->resctrl_val, "cqm")) {
++	if (!strncmp(param->resctrl_val, CQM_STR, sizeof(CQM_STR))) {
+ 		ret = get_llc_occu_resctrl(&llc_occu_resc);
+ 		if (ret < 0)
+ 			return ret;
+@@ -234,7 +234,7 @@ int cat_val(struct resctrl_val_param *param)
+ 	if (ret)
+ 		return ret;
+ 
+-	if ((strcmp(resctrl_val, "cat") == 0)) {
++	if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR))) {
+ 		ret = initialize_llc_perf();
+ 		if (ret)
+ 			return ret;
+@@ -242,7 +242,7 @@ int cat_val(struct resctrl_val_param *param)
+ 
+ 	/* Test runs until the callback setup() tells the test to stop. */
+ 	while (1) {
+-		if (strcmp(resctrl_val, "cat") == 0) {
++		if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR))) {
+ 			ret = param->setup(1, param);
+ 			if (ret) {
+ 				ret = 0;
+diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
+index 5da43767b9731..20823725daca5 100644
+--- a/tools/testing/selftests/resctrl/cat_test.c
++++ b/tools/testing/selftests/resctrl/cat_test.c
+@@ -17,10 +17,10 @@
+ #define MAX_DIFF_PERCENT	4
+ #define MAX_DIFF		1000000
+ 
+-int count_of_bits;
+-char cbm_mask[256];
+-unsigned long long_mask;
+-unsigned long cache_size;
++static int count_of_bits;
++static char cbm_mask[256];
++static unsigned long long_mask;
++static unsigned long cache_size;
+ 
+ /*
+  * Change schemata. Write schemata to specified
+@@ -136,7 +136,7 @@ int cat_perf_miss_val(int cpu_no, int n, char *cache_type)
+ 		return -1;
+ 
+ 	/* Get default cbm mask for L3/L2 cache */
+-	ret = get_cbm_mask(cache_type);
++	ret = get_cbm_mask(cache_type, cbm_mask);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -164,7 +164,7 @@ int cat_perf_miss_val(int cpu_no, int n, char *cache_type)
+ 		return -1;
+ 
+ 	struct resctrl_val_param param = {
+-		.resctrl_val	= "cat",
++		.resctrl_val	= CAT_STR,
+ 		.cpu_no		= cpu_no,
+ 		.mum_resctrlfs	= 0,
+ 		.setup		= cat_setup,
+diff --git a/tools/testing/selftests/resctrl/cqm_test.c b/tools/testing/selftests/resctrl/cqm_test.c
+index c8756152bd615..271752e9ef5be 100644
+--- a/tools/testing/selftests/resctrl/cqm_test.c
++++ b/tools/testing/selftests/resctrl/cqm_test.c
+@@ -16,10 +16,10 @@
+ #define MAX_DIFF		2000000
+ #define MAX_DIFF_PERCENT	15
+ 
+-int count_of_bits;
+-char cbm_mask[256];
+-unsigned long long_mask;
+-unsigned long cache_size;
++static int count_of_bits;
++static char cbm_mask[256];
++static unsigned long long_mask;
++static unsigned long cache_size;
+ 
+ static int cqm_setup(int num, ...)
+ {
+@@ -86,7 +86,7 @@ static int check_results(struct resctrl_val_param *param, int no_of_bits)
+ 		return errno;
+ 	}
+ 
+-	while (fgets(temp, 1024, fp)) {
++	while (fgets(temp, sizeof(temp), fp)) {
+ 		char *token = strtok(temp, ":\t");
+ 		int fields = 0;
+ 
+@@ -125,7 +125,7 @@ int cqm_resctrl_val(int cpu_no, int n, char **benchmark_cmd)
+ 	if (!validate_resctrl_feature_request("cqm"))
+ 		return -1;
+ 
+-	ret = get_cbm_mask("L3");
++	ret = get_cbm_mask("L3", cbm_mask);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -145,7 +145,7 @@ int cqm_resctrl_val(int cpu_no, int n, char **benchmark_cmd)
+ 	}
+ 
+ 	struct resctrl_val_param param = {
+-		.resctrl_val	= "cqm",
++		.resctrl_val	= CQM_STR,
+ 		.ctrlgrp	= "c1",
+ 		.mongrp		= "m1",
+ 		.cpu_no		= cpu_no,
+diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
+index 79c611c99a3dd..51e5cf22632f7 100644
+--- a/tools/testing/selftests/resctrl/fill_buf.c
++++ b/tools/testing/selftests/resctrl/fill_buf.c
+@@ -115,7 +115,7 @@ static int fill_cache_read(unsigned char *start_ptr, unsigned char *end_ptr,
+ 
+ 	while (1) {
+ 		ret = fill_one_span_read(start_ptr, end_ptr);
+-		if (!strcmp(resctrl_val, "cat"))
++		if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR)))
+ 			break;
+ 	}
+ 
+@@ -134,7 +134,7 @@ static int fill_cache_write(unsigned char *start_ptr, unsigned char *end_ptr,
+ {
+ 	while (1) {
+ 		fill_one_span_write(start_ptr, end_ptr);
+-		if (!strcmp(resctrl_val, "cat"))
++		if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR)))
+ 			break;
+ 	}
+ 
+diff --git a/tools/testing/selftests/resctrl/mba_test.c b/tools/testing/selftests/resctrl/mba_test.c
+index 7bf8eaa6204bf..6449fbd96096a 100644
+--- a/tools/testing/selftests/resctrl/mba_test.c
++++ b/tools/testing/selftests/resctrl/mba_test.c
+@@ -141,7 +141,7 @@ void mba_test_cleanup(void)
+ int mba_schemata_change(int cpu_no, char *bw_report, char **benchmark_cmd)
+ {
+ 	struct resctrl_val_param param = {
+-		.resctrl_val	= "mba",
++		.resctrl_val	= MBA_STR,
+ 		.ctrlgrp	= "c1",
+ 		.mongrp		= "m1",
+ 		.cpu_no		= cpu_no,
+diff --git a/tools/testing/selftests/resctrl/mbm_test.c b/tools/testing/selftests/resctrl/mbm_test.c
+index 4700f7453f811..ec6cfe01c9c26 100644
+--- a/tools/testing/selftests/resctrl/mbm_test.c
++++ b/tools/testing/selftests/resctrl/mbm_test.c
+@@ -114,7 +114,7 @@ void mbm_test_cleanup(void)
+ int mbm_bw_change(int span, int cpu_no, char *bw_report, char **benchmark_cmd)
+ {
+ 	struct resctrl_val_param param = {
+-		.resctrl_val	= "mbm",
++		.resctrl_val	= MBM_STR,
+ 		.ctrlgrp	= "c1",
+ 		.mongrp		= "m1",
+ 		.span		= span,
+diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
+index 39bf59c6b9c56..9dcc96e1ad3d7 100644
+--- a/tools/testing/selftests/resctrl/resctrl.h
++++ b/tools/testing/selftests/resctrl/resctrl.h
+@@ -28,6 +28,10 @@
+ #define RESCTRL_PATH		"/sys/fs/resctrl"
+ #define PHYS_ID_PATH		"/sys/devices/system/cpu/cpu"
+ #define CBM_MASK_PATH		"/sys/fs/resctrl/info"
++#define L3_PATH			"/sys/fs/resctrl/info/L3"
++#define MB_PATH			"/sys/fs/resctrl/info/MB"
++#define L3_MON_PATH		"/sys/fs/resctrl/info/L3_MON"
++#define L3_MON_FEATURES_PATH	"/sys/fs/resctrl/info/L3_MON/mon_features"
+ 
+ #define PARENT_EXIT(err_msg)			\
+ 	do {					\
+@@ -62,11 +66,16 @@ struct resctrl_val_param {
+ 	int		(*setup)(int num, ...);
+ };
+ 
+-pid_t bm_pid, ppid;
+-int tests_run;
++#define MBM_STR			"mbm"
++#define MBA_STR			"mba"
++#define CQM_STR			"cqm"
++#define CAT_STR			"cat"
+ 
+-char llc_occup_path[1024];
+-bool is_amd;
++extern pid_t bm_pid, ppid;
++extern int tests_run;
++
++extern char llc_occup_path[1024];
++extern bool is_amd;
+ 
+ bool check_resctrlfs_support(void);
+ int filter_dmesg(void);
+@@ -74,7 +83,7 @@ int remount_resctrlfs(bool mum_resctrlfs);
+ int get_resource_id(int cpu_no, int *resource_id);
+ int umount_resctrlfs(void);
+ int validate_bw_report_request(char *bw_report);
+-bool validate_resctrl_feature_request(char *resctrl_val);
++bool validate_resctrl_feature_request(const char *resctrl_val);
+ char *fgrep(FILE *inf, const char *str);
+ int taskset_benchmark(pid_t bm_pid, int cpu_no);
+ void run_benchmark(int signum, siginfo_t *info, void *ucontext);
+@@ -92,7 +101,7 @@ void tests_cleanup(void);
+ void mbm_test_cleanup(void);
+ int mba_schemata_change(int cpu_no, char *bw_report, char **benchmark_cmd);
+ void mba_test_cleanup(void);
+-int get_cbm_mask(char *cache_type);
++int get_cbm_mask(char *cache_type, char *cbm_mask);
+ int get_cache_size(int cpu_no, char *cache_type, unsigned long *cache_size);
+ void ctrlc_handler(int signum, siginfo_t *info, void *ptr);
+ int cat_val(struct resctrl_val_param *param);
+diff --git a/tools/testing/selftests/resctrl/resctrl_tests.c b/tools/testing/selftests/resctrl/resctrl_tests.c
+index 425cc85ac8836..ac2269610aa9d 100644
+--- a/tools/testing/selftests/resctrl/resctrl_tests.c
++++ b/tools/testing/selftests/resctrl/resctrl_tests.c
+@@ -73,7 +73,7 @@ int main(int argc, char **argv)
+ 		}
+ 	}
+ 
+-	while ((c = getopt(argc_new, argv, "ht:b:")) != -1) {
++	while ((c = getopt(argc_new, argv, "ht:b:n:p:")) != -1) {
+ 		char *token;
+ 
+ 		switch (c) {
+@@ -85,13 +85,13 @@ int main(int argc, char **argv)
+ 			cqm_test = false;
+ 			cat_test = false;
+ 			while (token) {
+-				if (!strcmp(token, "mbm")) {
++				if (!strncmp(token, MBM_STR, sizeof(MBM_STR))) {
+ 					mbm_test = true;
+-				} else if (!strcmp(token, "mba")) {
++				} else if (!strncmp(token, MBA_STR, sizeof(MBA_STR))) {
+ 					mba_test = true;
+-				} else if (!strcmp(token, "cqm")) {
++				} else if (!strncmp(token, CQM_STR, sizeof(CQM_STR))) {
+ 					cqm_test = true;
+-				} else if (!strcmp(token, "cat")) {
++				} else if (!strncmp(token, CAT_STR, sizeof(CAT_STR))) {
+ 					cat_test = true;
+ 				} else {
+ 					printf("invalid argument\n");
+@@ -161,7 +161,7 @@ int main(int argc, char **argv)
+ 	if (!is_amd && mbm_test) {
+ 		printf("# Starting MBM BW change ...\n");
+ 		if (!has_ben)
+-			sprintf(benchmark_cmd[5], "%s", "mba");
++			sprintf(benchmark_cmd[5], "%s", MBA_STR);
+ 		res = mbm_bw_change(span, cpu_no, bw_report, benchmark_cmd);
+ 		printf("%sok MBM: bw change\n", res ? "not " : "");
+ 		mbm_test_cleanup();
+@@ -181,7 +181,7 @@ int main(int argc, char **argv)
+ 	if (cqm_test) {
+ 		printf("# Starting CQM test ...\n");
+ 		if (!has_ben)
+-			sprintf(benchmark_cmd[5], "%s", "cqm");
++			sprintf(benchmark_cmd[5], "%s", CQM_STR);
+ 		res = cqm_resctrl_val(cpu_no, no_of_bits, benchmark_cmd);
+ 		printf("%sok CQM: test\n", res ? "not " : "");
+ 		cqm_test_cleanup();
+diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
+index 520fea3606d17..8df557894059a 100644
+--- a/tools/testing/selftests/resctrl/resctrl_val.c
++++ b/tools/testing/selftests/resctrl/resctrl_val.c
+@@ -221,8 +221,8 @@ static int read_from_imc_dir(char *imc_dir, int count)
+  */
+ static int num_of_imcs(void)
+ {
++	char imc_dir[512], *temp;
+ 	unsigned int count = 0;
+-	char imc_dir[512];
+ 	struct dirent *ep;
+ 	int ret;
+ 	DIR *dp;
+@@ -230,7 +230,25 @@ static int num_of_imcs(void)
+ 	dp = opendir(DYN_PMU_PATH);
+ 	if (dp) {
+ 		while ((ep = readdir(dp))) {
+-			if (strstr(ep->d_name, UNCORE_IMC)) {
++			temp = strstr(ep->d_name, UNCORE_IMC);
++			if (!temp)
++				continue;
++
++			/*
++			 * imc counters are named as "uncore_imc_<n>", hence
++			 * increment the pointer to point to <n>. Note that
++			 * sizeof(UNCORE_IMC) would count for null character as
++			 * well and hence the last underscore character in
++			 * uncore_imc'_' need not be counted.
++			 */
++			temp = temp + sizeof(UNCORE_IMC);
++
++			/*
++			 * Some directories under "DYN_PMU_PATH" could have
++			 * names like "uncore_imc_free_running", hence, check if
++			 * first character is a numerical digit or not.
++			 */
++			if (temp[0] >= '0' && temp[0] <= '9') {
+ 				sprintf(imc_dir, "%s/%s/", DYN_PMU_PATH,
+ 					ep->d_name);
+ 				ret = read_from_imc_dir(imc_dir, count);
+@@ -282,9 +300,9 @@ static int initialize_mem_bw_imc(void)
+  * Memory B/W utilized by a process on a socket can be calculated using
+  * iMC counters. Perf events are used to read these counters.
+  *
+- * Return: >= 0 on success. < 0 on failure.
++ * Return: = 0 on success. < 0 on failure.
+  */
+-static float get_mem_bw_imc(int cpu_no, char *bw_report)
++static int get_mem_bw_imc(int cpu_no, char *bw_report, float *bw_imc)
+ {
+ 	float reads, writes, of_mul_read, of_mul_write;
+ 	int imc, j, ret;
+@@ -355,13 +373,18 @@ static float get_mem_bw_imc(int cpu_no, char *bw_report)
+ 		close(imc_counters_config[imc][WRITE].fd);
+ 	}
+ 
+-	if (strcmp(bw_report, "reads") == 0)
+-		return reads;
++	if (strcmp(bw_report, "reads") == 0) {
++		*bw_imc = reads;
++		return 0;
++	}
+ 
+-	if (strcmp(bw_report, "writes") == 0)
+-		return writes;
++	if (strcmp(bw_report, "writes") == 0) {
++		*bw_imc = writes;
++		return 0;
++	}
+ 
+-	return (reads + writes);
++	*bw_imc = reads + writes;
++	return 0;
+ }
+ 
+ void set_mbm_path(const char *ctrlgrp, const char *mongrp, int resource_id)
+@@ -397,10 +420,10 @@ static void initialize_mem_bw_resctrl(const char *ctrlgrp, const char *mongrp,
+ 		return;
+ 	}
+ 
+-	if (strcmp(resctrl_val, "mbm") == 0)
++	if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)))
+ 		set_mbm_path(ctrlgrp, mongrp, resource_id);
+ 
+-	if ((strcmp(resctrl_val, "mba") == 0)) {
++	if (!strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR))) {
+ 		if (ctrlgrp)
+ 			sprintf(mbm_total_path, CON_MBM_LOCAL_BYTES_PATH,
+ 				RESCTRL_PATH, ctrlgrp, resource_id);
+@@ -420,9 +443,8 @@ static void initialize_mem_bw_resctrl(const char *ctrlgrp, const char *mongrp,
+  * 1. If con_mon grp is given, then read from it
+  * 2. If con_mon grp is not given, then read from root con_mon grp
+  */
+-static unsigned long get_mem_bw_resctrl(void)
++static int get_mem_bw_resctrl(unsigned long *mbm_total)
+ {
+-	unsigned long mbm_total = 0;
+ 	FILE *fp;
+ 
+ 	fp = fopen(mbm_total_path, "r");
+@@ -431,7 +453,7 @@ static unsigned long get_mem_bw_resctrl(void)
+ 
+ 		return -1;
+ 	}
+-	if (fscanf(fp, "%lu", &mbm_total) <= 0) {
++	if (fscanf(fp, "%lu", mbm_total) <= 0) {
+ 		perror("Could not get mbm local bytes");
+ 		fclose(fp);
+ 
+@@ -439,7 +461,7 @@ static unsigned long get_mem_bw_resctrl(void)
+ 	}
+ 	fclose(fp);
+ 
+-	return mbm_total;
++	return 0;
+ }
+ 
+ pid_t bm_pid, ppid;
+@@ -524,14 +546,15 @@ static void initialize_llc_occu_resctrl(const char *ctrlgrp, const char *mongrp,
+ 		return;
+ 	}
+ 
+-	if (strcmp(resctrl_val, "cqm") == 0)
++	if (!strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR)))
+ 		set_cqm_path(ctrlgrp, mongrp, resource_id);
+ }
+ 
+ static int
+ measure_vals(struct resctrl_val_param *param, unsigned long *bw_resc_start)
+ {
+-	unsigned long bw_imc, bw_resc, bw_resc_end;
++	unsigned long bw_resc, bw_resc_end;
++	float bw_imc;
+ 	int ret;
+ 
+ 	/*
+@@ -541,13 +564,13 @@ measure_vals(struct resctrl_val_param *param, unsigned long *bw_resc_start)
+ 	 * Compare the two values to validate resctrl value.
+ 	 * It takes 1sec to measure the data.
+ 	 */
+-	bw_imc = get_mem_bw_imc(param->cpu_no, param->bw_report);
+-	if (bw_imc <= 0)
+-		return bw_imc;
++	ret = get_mem_bw_imc(param->cpu_no, param->bw_report, &bw_imc);
++	if (ret < 0)
++		return ret;
+ 
+-	bw_resc_end = get_mem_bw_resctrl();
+-	if (bw_resc_end <= 0)
+-		return bw_resc_end;
++	ret = get_mem_bw_resctrl(&bw_resc_end);
++	if (ret < 0)
++		return ret;
+ 
+ 	bw_resc = (bw_resc_end - *bw_resc_start) / MB;
+ 	ret = print_results_bw(param->filename, bm_pid, bw_imc, bw_resc);
+@@ -579,8 +602,8 @@ int resctrl_val(char **benchmark_cmd, struct resctrl_val_param *param)
+ 	if (strcmp(param->filename, "") == 0)
+ 		sprintf(param->filename, "stdio");
+ 
+-	if ((strcmp(resctrl_val, "mba")) == 0 ||
+-	    (strcmp(resctrl_val, "mbm")) == 0) {
++	if (!strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR)) ||
++	    !strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR))) {
+ 		ret = validate_bw_report_request(param->bw_report);
+ 		if (ret)
+ 			return ret;
+@@ -674,15 +697,15 @@ int resctrl_val(char **benchmark_cmd, struct resctrl_val_param *param)
+ 	if (ret)
+ 		goto out;
+ 
+-	if ((strcmp(resctrl_val, "mbm") == 0) ||
+-	    (strcmp(resctrl_val, "mba") == 0)) {
++	if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)) ||
++	    !strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR))) {
+ 		ret = initialize_mem_bw_imc();
+ 		if (ret)
+ 			goto out;
+ 
+ 		initialize_mem_bw_resctrl(param->ctrlgrp, param->mongrp,
+ 					  param->cpu_no, resctrl_val);
+-	} else if (strcmp(resctrl_val, "cqm") == 0)
++	} else if (!strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR)))
+ 		initialize_llc_occu_resctrl(param->ctrlgrp, param->mongrp,
+ 					    param->cpu_no, resctrl_val);
+ 
+@@ -710,8 +733,8 @@ int resctrl_val(char **benchmark_cmd, struct resctrl_val_param *param)
+ 
+ 	/* Test runs until the callback setup() tells the test to stop. */
+ 	while (1) {
+-		if ((strcmp(resctrl_val, "mbm") == 0) ||
+-		    (strcmp(resctrl_val, "mba") == 0)) {
++		if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)) ||
++		    !strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR))) {
+ 			ret = param->setup(1, param);
+ 			if (ret) {
+ 				ret = 0;
+@@ -721,7 +744,7 @@ int resctrl_val(char **benchmark_cmd, struct resctrl_val_param *param)
+ 			ret = measure_vals(param, &bw_resc_start);
+ 			if (ret)
+ 				break;
+-		} else if (strcmp(resctrl_val, "cqm") == 0) {
++		} else if (!strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR))) {
+ 			ret = param->setup(1, param);
+ 			if (ret) {
+ 				ret = 0;
+diff --git a/tools/testing/selftests/resctrl/resctrlfs.c b/tools/testing/selftests/resctrl/resctrlfs.c
+index 19c0ec4045a40..b57170f53861d 100644
+--- a/tools/testing/selftests/resctrl/resctrlfs.c
++++ b/tools/testing/selftests/resctrl/resctrlfs.c
+@@ -49,8 +49,6 @@ static int find_resctrl_mount(char *buffer)
+ 	return -ENOENT;
+ }
+ 
+-char cbm_mask[256];
+-
+ /*
+  * remount_resctrlfs - Remount resctrl FS at /sys/fs/resctrl
+  * @mum_resctrlfs:	Should the resctrl FS be remounted?
+@@ -205,16 +203,18 @@ int get_cache_size(int cpu_no, char *cache_type, unsigned long *cache_size)
+ /*
+  * get_cbm_mask - Get cbm mask for given cache
+  * @cache_type:	Cache level L2/L3
+- *
+- * Mask is stored in cbm_mask which is global variable.
++ * @cbm_mask:	cbm_mask returned as a string
+  *
+  * Return: = 0 on success, < 0 on failure.
+  */
+-int get_cbm_mask(char *cache_type)
++int get_cbm_mask(char *cache_type, char *cbm_mask)
+ {
+ 	char cbm_mask_path[1024];
+ 	FILE *fp;
+ 
++	if (!cbm_mask)
++		return -1;
++
+ 	sprintf(cbm_mask_path, "%s/%s/cbm_mask", CBM_MASK_PATH, cache_type);
+ 
+ 	fp = fopen(cbm_mask_path, "r");
+@@ -334,7 +334,7 @@ void run_benchmark(int signum, siginfo_t *info, void *ucontext)
+ 		operation = atoi(benchmark_cmd[4]);
+ 		sprintf(resctrl_val, "%s", benchmark_cmd[5]);
+ 
+-		if (strcmp(resctrl_val, "cqm") != 0)
++		if (strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR)))
+ 			buffer_span = span * MB;
+ 		else
+ 			buffer_span = span;
+@@ -459,8 +459,8 @@ int write_bm_pid_to_resctrl(pid_t bm_pid, char *ctrlgrp, char *mongrp,
+ 		goto out;
+ 
+ 	/* Create mon grp and write pid into it for "mbm" and "cqm" test */
+-	if ((strcmp(resctrl_val, "cqm") == 0) ||
+-	    (strcmp(resctrl_val, "mbm") == 0)) {
++	if (!strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR)) ||
++	    !strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR))) {
+ 		if (strlen(mongrp)) {
+ 			sprintf(monitorgroup_p, "%s/mon_groups", controlgroup);
+ 			sprintf(monitorgroup, "%s/%s", monitorgroup_p, mongrp);
+@@ -505,9 +505,9 @@ int write_schemata(char *ctrlgrp, char *schemata, int cpu_no, char *resctrl_val)
+ 	int resource_id, ret = 0;
+ 	FILE *fp;
+ 
+-	if ((strcmp(resctrl_val, "mba") != 0) &&
+-	    (strcmp(resctrl_val, "cat") != 0) &&
+-	    (strcmp(resctrl_val, "cqm") != 0))
++	if (strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR)) &&
++	    strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR)) &&
++	    strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR)))
+ 		return -ENOENT;
+ 
+ 	if (!schemata) {
+@@ -528,9 +528,10 @@ int write_schemata(char *ctrlgrp, char *schemata, int cpu_no, char *resctrl_val)
+ 	else
+ 		sprintf(controlgroup, "%s/schemata", RESCTRL_PATH);
+ 
+-	if (!strcmp(resctrl_val, "cat") || !strcmp(resctrl_val, "cqm"))
++	if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR)) ||
++	    !strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR)))
+ 		sprintf(schema, "%s%d%c%s", "L3:", resource_id, '=', schemata);
+-	if (strcmp(resctrl_val, "mba") == 0)
++	if (!strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR)))
+ 		sprintf(schema, "%s%d%c%s", "MB:", resource_id, '=', schemata);
+ 
+ 	fp = fopen(controlgroup, "w");
+@@ -615,26 +616,56 @@ char *fgrep(FILE *inf, const char *str)
+  * validate_resctrl_feature_request - Check if requested feature is valid.
+  * @resctrl_val:	Requested feature
+  *
+- * Return: 0 on success, non-zero on failure
++ * Return: True if the feature is supported, else false
+  */
+-bool validate_resctrl_feature_request(char *resctrl_val)
++bool validate_resctrl_feature_request(const char *resctrl_val)
+ {
+-	FILE *inf = fopen("/proc/cpuinfo", "r");
++	struct stat statbuf;
+ 	bool found = false;
+ 	char *res;
++	FILE *inf;
+ 
+-	if (!inf)
++	if (!resctrl_val)
+ 		return false;
+ 
+-	res = fgrep(inf, "flags");
+-
+-	if (res) {
+-		char *s = strchr(res, ':');
++	if (remount_resctrlfs(false))
++		return false;
+ 
+-		found = s && !strstr(s, resctrl_val);
+-		free(res);
++	if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR))) {
++		if (!stat(L3_PATH, &statbuf))
++			return true;
++	} else if (!strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR))) {
++		if (!stat(MB_PATH, &statbuf))
++			return true;
++	} else if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)) ||
++		   !strncmp(resctrl_val, CMT_STR, sizeof(CMT_STR))) {
++		if (!stat(L3_MON_PATH, &statbuf)) {
++			inf = fopen(L3_MON_FEATURES_PATH, "r");
++			if (!inf)
++				return false;
++
++			if (!strncmp(resctrl_val, CMT_STR, sizeof(CMT_STR))) {
++				res = fgrep(inf, "llc_occupancy");
++				if (res) {
++					found = true;
++					free(res);
++				}
++			}
++
++			if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR))) {
++				res = fgrep(inf, "mbm_total_bytes");
++				if (res) {
++					free(res);
++					res = fgrep(inf, "mbm_local_bytes");
++					if (res) {
++						found = true;
++						free(res);
++					}
++				}
++			}
++			fclose(inf);
++		}
+ 	}
+-	fclose(inf);
+ 
+ 	return found;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-05-14 14:07 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-05-14 14:07 UTC (permalink / raw
  To: gentoo-commits

commit:     55bd8ef17301314122ac11a224fa5d911554f951
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri May 14 14:07:18 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri May 14 14:07:34 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=55bd8ef1

Linux patch 5.10.37

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |     4 +
 1036_linux-5.10.37.patch | 21107 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 21111 insertions(+)

diff --git a/0000_README b/0000_README
index e9a544e..fc87a37 100644
--- a/0000_README
+++ b/0000_README
@@ -187,6 +187,10 @@ Patch:  1035_linux-5.10.36.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.36
 
+Patch:  1036_linux-5.10.37.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.37
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1036_linux-5.10.37.patch b/1036_linux-5.10.37.patch
new file mode 100644
index 0000000..981d7dd
--- /dev/null
+++ b/1036_linux-5.10.37.patch
@@ -0,0 +1,21107 @@
+diff --git a/Documentation/devicetree/bindings/serial/st,stm32-uart.yaml b/Documentation/devicetree/bindings/serial/st,stm32-uart.yaml
+index 06d5f251ec880..51f390e5c276c 100644
+--- a/Documentation/devicetree/bindings/serial/st,stm32-uart.yaml
++++ b/Documentation/devicetree/bindings/serial/st,stm32-uart.yaml
+@@ -77,7 +77,8 @@ required:
+   - interrupts
+   - clocks
+ 
+-additionalProperties: false
++additionalProperties:
++  type: object
+ 
+ examples:
+   - |
+diff --git a/Documentation/driver-api/xilinx/eemi.rst b/Documentation/driver-api/xilinx/eemi.rst
+index 9dcbc6f18d75d..c1bc47b9000dc 100644
+--- a/Documentation/driver-api/xilinx/eemi.rst
++++ b/Documentation/driver-api/xilinx/eemi.rst
+@@ -16,35 +16,8 @@ components running across different processing clusters on a chip or
+ device to communicate with a power management controller (PMC) on a
+ device to issue or respond to power management requests.
+ 
+-EEMI ops is a structure containing all eemi APIs supported by Zynq MPSoC.
+-The zynqmp-firmware driver maintain all EEMI APIs in zynqmp_eemi_ops
+-structure. Any driver who want to communicate with PMC using EEMI APIs
+-can call zynqmp_pm_get_eemi_ops().
+-
+-Example of EEMI ops::
+-
+-	/* zynqmp-firmware driver maintain all EEMI APIs */
+-	struct zynqmp_eemi_ops {
+-		int (*get_api_version)(u32 *version);
+-		int (*query_data)(struct zynqmp_pm_query_data qdata, u32 *out);
+-	};
+-
+-	static const struct zynqmp_eemi_ops eemi_ops = {
+-		.get_api_version = zynqmp_pm_get_api_version,
+-		.query_data = zynqmp_pm_query_data,
+-	};
+-
+-Example of EEMI ops usage::
+-
+-	static const struct zynqmp_eemi_ops *eemi_ops;
+-	u32 ret_payload[PAYLOAD_ARG_CNT];
+-	int ret;
+-
+-	eemi_ops = zynqmp_pm_get_eemi_ops();
+-	if (IS_ERR(eemi_ops))
+-		return PTR_ERR(eemi_ops);
+-
+-	ret = eemi_ops->query_data(qdata, ret_payload);
++Any driver who wants to communicate with PMC using EEMI APIs use the
++functions provided for each function.
+ 
+ IOCTL
+ ------
+diff --git a/Documentation/userspace-api/media/v4l/subdev-formats.rst b/Documentation/userspace-api/media/v4l/subdev-formats.rst
+index c9b7bb3ca089d..eff6727c69d30 100644
+--- a/Documentation/userspace-api/media/v4l/subdev-formats.rst
++++ b/Documentation/userspace-api/media/v4l/subdev-formats.rst
+@@ -1567,8 +1567,8 @@ The following tables list existing packed RGB formats.
+       - MEDIA_BUS_FMT_RGB101010_1X30
+       - 0x1018
+       -
+-      - 0
+-      - 0
++      -
++      -
+       - r\ :sub:`9`
+       - r\ :sub:`8`
+       - r\ :sub:`7`
+diff --git a/Makefile b/Makefile
+index ece5b660dcb06..39f14ad009aef 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 36
++SUBLEVEL = 37
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts b/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
+index 21ae880c7530f..c76b0046b4028 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
+@@ -707,9 +707,9 @@
+ 	multi-master;
+ 	status = "okay";
+ 
+-	si7021-a20@20 {
++	si7021-a20@40 {
+ 		compatible = "silabs,si7020";
+-		reg = <0x20>;
++		reg = <0x40>;
+ 	};
+ 
+ 	tmp275@48 {
+diff --git a/arch/arm/boot/dts/exynos4210-i9100.dts b/arch/arm/boot/dts/exynos4210-i9100.dts
+index 5370ee477186c..7777bf51a6e64 100644
+--- a/arch/arm/boot/dts/exynos4210-i9100.dts
++++ b/arch/arm/boot/dts/exynos4210-i9100.dts
+@@ -136,7 +136,7 @@
+ 			compatible = "maxim,max17042";
+ 
+ 			interrupt-parent = <&gpx2>;
+-			interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
++			interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+ 
+ 			pinctrl-0 = <&max17042_fuel_irq>;
+ 			pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/exynos4412-midas.dtsi b/arch/arm/boot/dts/exynos4412-midas.dtsi
+index 7e7c243ff196a..06450066b1787 100644
+--- a/arch/arm/boot/dts/exynos4412-midas.dtsi
++++ b/arch/arm/boot/dts/exynos4412-midas.dtsi
+@@ -174,7 +174,7 @@
+ 		max77693@66 {
+ 			compatible = "maxim,max77693";
+ 			interrupt-parent = <&gpx1>;
+-			interrupts = <5 IRQ_TYPE_EDGE_FALLING>;
++			interrupts = <5 IRQ_TYPE_LEVEL_LOW>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&max77693_irq>;
+ 			reg = <0x66>;
+@@ -223,7 +223,7 @@
+ 		max77693-fuel-gauge@36 {
+ 			compatible = "maxim,max17047";
+ 			interrupt-parent = <&gpx2>;
+-			interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
++			interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&max77693_fuel_irq>;
+ 			reg = <0x36>;
+@@ -668,7 +668,7 @@
+ 	max77686: max77686_pmic@9 {
+ 		compatible = "maxim,max77686";
+ 		interrupt-parent = <&gpx0>;
+-		interrupts = <7 IRQ_TYPE_NONE>;
++		interrupts = <7 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-0 = <&max77686_irq>;
+ 		pinctrl-names = "default";
+ 		reg = <0x09>;
+diff --git a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
+index 2983e91bc7dde..869d80be1b36e 100644
+--- a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
++++ b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
+@@ -279,7 +279,7 @@
+ 	max77686: pmic@9 {
+ 		compatible = "maxim,max77686";
+ 		interrupt-parent = <&gpx3>;
+-		interrupts = <2 IRQ_TYPE_NONE>;
++		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&max77686_irq>;
+ 		reg = <0x09>;
+diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+index 186790f39e4d3..d0e48c10aec2b 100644
+--- a/arch/arm/boot/dts/exynos5250-smdk5250.dts
++++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+@@ -134,7 +134,7 @@
+ 		compatible = "maxim,max77686";
+ 		reg = <0x09>;
+ 		interrupt-parent = <&gpx3>;
+-		interrupts = <2 IRQ_TYPE_NONE>;
++		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&max77686_irq>;
+ 		#clock-cells = <1>;
+diff --git a/arch/arm/boot/dts/exynos5250-snow-common.dtsi b/arch/arm/boot/dts/exynos5250-snow-common.dtsi
+index c952a615148e5..737f0e20a4525 100644
+--- a/arch/arm/boot/dts/exynos5250-snow-common.dtsi
++++ b/arch/arm/boot/dts/exynos5250-snow-common.dtsi
+@@ -292,7 +292,7 @@
+ 	max77686: max77686@9 {
+ 		compatible = "maxim,max77686";
+ 		interrupt-parent = <&gpx3>;
+-		interrupts = <2 IRQ_TYPE_NONE>;
++		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&max77686_irq>;
+ 		wakeup-source;
+diff --git a/arch/arm/boot/dts/r8a7790-lager.dts b/arch/arm/boot/dts/r8a7790-lager.dts
+index 09a152b915575..1d6f0c5d02e9a 100644
+--- a/arch/arm/boot/dts/r8a7790-lager.dts
++++ b/arch/arm/boot/dts/r8a7790-lager.dts
+@@ -53,6 +53,9 @@
+ 		i2c11 = &i2cexio1;
+ 		i2c12 = &i2chdmi;
+ 		i2c13 = &i2cpwr;
++		mmc0 = &mmcif1;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm/boot/dts/r8a7791-koelsch.dts b/arch/arm/boot/dts/r8a7791-koelsch.dts
+index f603cba5441fc..6af1727b82690 100644
+--- a/arch/arm/boot/dts/r8a7791-koelsch.dts
++++ b/arch/arm/boot/dts/r8a7791-koelsch.dts
+@@ -53,6 +53,9 @@
+ 		i2c12 = &i2cexio1;
+ 		i2c13 = &i2chdmi;
+ 		i2c14 = &i2cexio4;
++		mmc0 = &sdhi0;
++		mmc1 = &sdhi1;
++		mmc2 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm/boot/dts/r8a7791-porter.dts b/arch/arm/boot/dts/r8a7791-porter.dts
+index c6d563fb7ec7c..bf51e29c793a3 100644
+--- a/arch/arm/boot/dts/r8a7791-porter.dts
++++ b/arch/arm/boot/dts/r8a7791-porter.dts
+@@ -28,6 +28,8 @@
+ 		serial0 = &scif0;
+ 		i2c9 = &gpioi2c2;
+ 		i2c10 = &i2chdmi;
++		mmc0 = &sdhi0;
++		mmc1 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm/boot/dts/r8a7793-gose.dts b/arch/arm/boot/dts/r8a7793-gose.dts
+index abf487e8fe0f3..2b59a04913500 100644
+--- a/arch/arm/boot/dts/r8a7793-gose.dts
++++ b/arch/arm/boot/dts/r8a7793-gose.dts
+@@ -49,6 +49,9 @@
+ 		i2c10 = &gpioi2c4;
+ 		i2c11 = &i2chdmi;
+ 		i2c12 = &i2cexio4;
++		mmc0 = &sdhi0;
++		mmc1 = &sdhi1;
++		mmc2 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm/boot/dts/r8a7794-alt.dts b/arch/arm/boot/dts/r8a7794-alt.dts
+index 3f1cc5bbf3297..32025986b3b9b 100644
+--- a/arch/arm/boot/dts/r8a7794-alt.dts
++++ b/arch/arm/boot/dts/r8a7794-alt.dts
+@@ -19,6 +19,9 @@
+ 		i2c10 = &gpioi2c4;
+ 		i2c11 = &i2chdmi;
+ 		i2c12 = &i2cexio4;
++		mmc0 = &mmcif0;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi1;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm/boot/dts/r8a7794-silk.dts b/arch/arm/boot/dts/r8a7794-silk.dts
+index 677596f6c9c9a..af066ee5e2754 100644
+--- a/arch/arm/boot/dts/r8a7794-silk.dts
++++ b/arch/arm/boot/dts/r8a7794-silk.dts
+@@ -31,6 +31,8 @@
+ 		serial0 = &scif2;
+ 		i2c9 = &gpioi2c1;
+ 		i2c10 = &i2chdmi;
++		mmc0 = &mmcif0;
++		mmc1 = &sdhi1;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm/boot/dts/s5pv210-fascinate4g.dts b/arch/arm/boot/dts/s5pv210-fascinate4g.dts
+index ca064359dd308..b47d8300e536e 100644
+--- a/arch/arm/boot/dts/s5pv210-fascinate4g.dts
++++ b/arch/arm/boot/dts/s5pv210-fascinate4g.dts
+@@ -115,7 +115,7 @@
+ 	compatible = "maxim,max77836-battery";
+ 
+ 	interrupt-parent = <&gph3>;
+-	interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
++	interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+ 
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&fg_irq>;
+diff --git a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+index d84686e003709..dee4d32ab32c4 100644
+--- a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+@@ -1806,10 +1806,15 @@
+ 	usart2_idle_pins_c: usart2-idle-2 {
+ 		pins1 {
+ 			pinmux = <STM32_PINMUX('D', 5, ANALOG)>, /* USART2_TX */
+-				 <STM32_PINMUX('D', 4, ANALOG)>, /* USART2_RTS */
+ 				 <STM32_PINMUX('D', 3, ANALOG)>; /* USART2_CTS_NSS */
+ 		};
+ 		pins2 {
++			pinmux = <STM32_PINMUX('D', 4, AF7)>; /* USART2_RTS */
++			bias-disable;
++			drive-push-pull;
++			slew-rate = <3>;
++		};
++		pins3 {
+ 			pinmux = <STM32_PINMUX('D', 6, AF7)>; /* USART2_RX */
+ 			bias-disable;
+ 		};
+@@ -1855,10 +1860,15 @@
+ 	usart3_idle_pins_b: usart3-idle-1 {
+ 		pins1 {
+ 			pinmux = <STM32_PINMUX('B', 10, ANALOG)>, /* USART3_TX */
+-				 <STM32_PINMUX('G', 8, ANALOG)>, /* USART3_RTS */
+ 				 <STM32_PINMUX('I', 10, ANALOG)>; /* USART3_CTS_NSS */
+ 		};
+ 		pins2 {
++			pinmux = <STM32_PINMUX('G', 8, AF8)>; /* USART3_RTS */
++			bias-disable;
++			drive-push-pull;
++			slew-rate = <0>;
++		};
++		pins3 {
+ 			pinmux = <STM32_PINMUX('B', 12, AF8)>; /* USART3_RX */
+ 			bias-disable;
+ 		};
+@@ -1891,10 +1901,15 @@
+ 	usart3_idle_pins_c: usart3-idle-2 {
+ 		pins1 {
+ 			pinmux = <STM32_PINMUX('B', 10, ANALOG)>, /* USART3_TX */
+-				 <STM32_PINMUX('G', 8, ANALOG)>, /* USART3_RTS */
+ 				 <STM32_PINMUX('B', 13, ANALOG)>; /* USART3_CTS_NSS */
+ 		};
+ 		pins2 {
++			pinmux = <STM32_PINMUX('G', 8, AF8)>; /* USART3_RTS */
++			bias-disable;
++			drive-push-pull;
++			slew-rate = <0>;
++		};
++		pins3 {
+ 			pinmux = <STM32_PINMUX('B', 12, AF8)>; /* USART3_RX */
+ 			bias-disable;
+ 		};
+diff --git a/arch/arm/boot/dts/uniphier-pxs2.dtsi b/arch/arm/boot/dts/uniphier-pxs2.dtsi
+index b0b15c97306b8..e81e5937a60ae 100644
+--- a/arch/arm/boot/dts/uniphier-pxs2.dtsi
++++ b/arch/arm/boot/dts/uniphier-pxs2.dtsi
+@@ -583,7 +583,7 @@
+ 			clocks = <&sys_clk 6>;
+ 			reset-names = "ether";
+ 			resets = <&sys_rst 6>;
+-			phy-mode = "rgmii";
++			phy-mode = "rgmii-id";
+ 			local-mac-address = [00 00 00 00 00 00];
+ 			socionext,syscon-phy-mode = <&soc_glue 0>;
+ 
+diff --git a/arch/arm/crypto/poly1305-glue.c b/arch/arm/crypto/poly1305-glue.c
+index 3023c1acfa194..c31bd8f7c0927 100644
+--- a/arch/arm/crypto/poly1305-glue.c
++++ b/arch/arm/crypto/poly1305-glue.c
+@@ -29,7 +29,7 @@ void __weak poly1305_blocks_neon(void *state, const u8 *src, u32 len, u32 hibit)
+ 
+ static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
+ 
+-void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
++void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 key[POLY1305_KEY_SIZE])
+ {
+ 	poly1305_init_arm(&dctx->h, key);
+ 	dctx->s[0] = get_unaligned_le32(key + 16);
+diff --git a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
+index 29d8cf6df46bf..99c2d6fd6304a 100644
+--- a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
++++ b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
+@@ -56,7 +56,7 @@
+ 	tca6416: gpio@20 {
+ 		compatible = "ti,tca6416";
+ 		reg = <0x20>;
+-		reset-gpios = <&pio 65 GPIO_ACTIVE_HIGH>;
++		reset-gpios = <&pio 65 GPIO_ACTIVE_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&tca6416_pins>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+index c4ac6f5dc008d..96d36b38f2696 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+@@ -1015,7 +1015,7 @@
+ 		left_spkr: wsa8810-left{
+ 			compatible = "sdw10217201000";
+ 			reg = <0 1>;
+-			powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_HIGH>;
++			powerdown-gpios = <&wcdgpio 1 GPIO_ACTIVE_HIGH>;
+ 			#thermal-sensor-cells = <0>;
+ 			sound-name-prefix = "SpkrLeft";
+ 			#sound-dai-cells = <0>;
+@@ -1023,7 +1023,7 @@
+ 
+ 		right_spkr: wsa8810-right{
+ 			compatible = "sdw10217201000";
+-			powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_HIGH>;
++			powerdown-gpios = <&wcdgpio 1 GPIO_ACTIVE_HIGH>;
+ 			reg = <0 2>;
+ 			#thermal-sensor-cells = <0>;
+ 			sound-name-prefix = "SpkrRight";
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index f97f354af86f4..ea6e3a11e641b 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -2192,7 +2192,7 @@
+ 			#gpio-cells = <2>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <2>;
+-			gpio-ranges = <&tlmm 0 0 150>;
++			gpio-ranges = <&tlmm 0 0 151>;
+ 			wakeup-parent = <&pdc_intc>;
+ 
+ 			cci0_default: cci0-default {
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index f0a872e02686d..1aec54590a11a 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -748,7 +748,7 @@
+ 			      <0x0 0x03D00000 0x0 0x300000>;
+ 			reg-names = "west", "east", "north", "south";
+ 			interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
+-			gpio-ranges = <&tlmm 0 0 175>;
++			gpio-ranges = <&tlmm 0 0 176>;
+ 			gpio-controller;
+ 			#gpio-cells = <2>;
+ 			interrupt-controller;
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index d057d85a19fb2..d4547a192748b 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -216,7 +216,7 @@
+ 
+ 	pmu {
+ 		compatible = "arm,armv8-pmuv3";
+-		interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_HIGH>;
++		interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_LOW>;
+ 	};
+ 
+ 	psci {
+@@ -1555,7 +1555,7 @@
+ 			#gpio-cells = <2>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <2>;
+-			gpio-ranges = <&tlmm 0 0 180>;
++			gpio-ranges = <&tlmm 0 0 181>;
+ 			wakeup-parent = <&pdc>;
+ 
+ 			qup_i2c0_default: qup-i2c0-default {
+@@ -2379,7 +2379,7 @@
+ 				(GIC_CPU_MASK_SIMPLE(8) | IRQ_TYPE_LEVEL_LOW)>,
+ 			     <GIC_PPI 11
+ 				(GIC_CPU_MASK_SIMPLE(8) | IRQ_TYPE_LEVEL_LOW)>,
+-			     <GIC_PPI 12
++			     <GIC_PPI 10
+ 				(GIC_CPU_MASK_SIMPLE(8) | IRQ_TYPE_LEVEL_LOW)>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/renesas/hihope-common.dtsi b/arch/arm64/boot/dts/renesas/hihope-common.dtsi
+index 2eda9f66ae81d..e8bf6f0c4c400 100644
+--- a/arch/arm64/boot/dts/renesas/hihope-common.dtsi
++++ b/arch/arm64/boot/dts/renesas/hihope-common.dtsi
+@@ -12,6 +12,9 @@
+ 	aliases {
+ 		serial0 = &scif2;
+ 		serial1 = &hscif0;
++		mmc0 = &sdhi3;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/renesas/r8a774a1-beacon-rzg2m-kit.dts b/arch/arm64/boot/dts/renesas/r8a774a1-beacon-rzg2m-kit.dts
+index 2c5b057c30c62..ad26f5bf0648d 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774a1-beacon-rzg2m-kit.dts
++++ b/arch/arm64/boot/dts/renesas/r8a774a1-beacon-rzg2m-kit.dts
+@@ -21,6 +21,9 @@
+ 		serial4 = &hscif2;
+ 		serial5 = &scif5;
+ 		ethernet0 = &avb;
++		mmc0 = &sdhi3;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0-cat874.dts b/arch/arm64/boot/dts/renesas/r8a774c0-cat874.dts
+index 26aee004a44e2..c4b50a5e3d92c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0-cat874.dts
++++ b/arch/arm64/boot/dts/renesas/r8a774c0-cat874.dts
+@@ -17,6 +17,8 @@
+ 	aliases {
+ 		serial0 = &scif2;
+ 		serial1 = &hscif2;
++		mmc0 = &sdhi0;
++		mmc1 = &sdhi3;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/renesas/r8a77980.dtsi b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+index d6cae90d7fd9e..e6ef837c4a3b3 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77980.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+@@ -990,8 +990,8 @@
+ 
+ 					reg = <1>;
+ 
+-					vin4csi41: endpoint@2 {
+-						reg = <2>;
++					vin4csi41: endpoint@3 {
++						reg = <3>;
+ 						remote-endpoint = <&csi41vin4>;
+ 					};
+ 				};
+@@ -1018,8 +1018,8 @@
+ 
+ 					reg = <1>;
+ 
+-					vin5csi41: endpoint@2 {
+-						reg = <2>;
++					vin5csi41: endpoint@3 {
++						reg = <3>;
+ 						remote-endpoint = <&csi41vin5>;
+ 					};
+ 				};
+@@ -1046,8 +1046,8 @@
+ 
+ 					reg = <1>;
+ 
+-					vin6csi41: endpoint@2 {
+-						reg = <2>;
++					vin6csi41: endpoint@3 {
++						reg = <3>;
+ 						remote-endpoint = <&csi41vin6>;
+ 					};
+ 				};
+@@ -1074,8 +1074,8 @@
+ 
+ 					reg = <1>;
+ 
+-					vin7csi41: endpoint@2 {
+-						reg = <2>;
++					vin7csi41: endpoint@3 {
++						reg = <3>;
+ 						remote-endpoint = <&csi41vin7>;
+ 					};
+ 				};
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts b/arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts
+index e0ccca2222d2d..b9e3b6762ff42 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts
++++ b/arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts
+@@ -16,6 +16,9 @@
+ 	aliases {
+ 		serial0 = &scif2;
+ 		ethernet0 = &avb;
++		mmc0 = &sdhi3;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi1;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+index 6cf77ce9aa937..86ec32a919d29 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+@@ -50,10 +50,7 @@
+ 
+ 	pmu_a76 {
+ 		compatible = "arm,cortex-a76-pmu";
+-		interrupts-extended = <&gic GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>,
+-				      <&gic GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>,
+-				      <&gic GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
+-				      <&gic GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>;
++		interrupts-extended = <&gic GIC_PPI 7 IRQ_TYPE_LEVEL_LOW>;
+ 	};
+ 
+ 	/* External SCIF clock - to be overridden by boards that provide it */
+diff --git a/arch/arm64/boot/dts/renesas/salvator-common.dtsi b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+index 1bf77957d2c21..08b8525bb7257 100644
+--- a/arch/arm64/boot/dts/renesas/salvator-common.dtsi
++++ b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+@@ -36,6 +36,9 @@
+ 		serial0 = &scif2;
+ 		serial1 = &hscif1;
+ 		ethernet0 = &avb;
++		mmc0 = &sdhi2;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi3;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+index 202177706cdeb..05e64bfad0235 100644
+--- a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
++++ b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+@@ -16,6 +16,7 @@
+ 	aliases {
+ 		serial1 = &hscif0;
+ 		serial2 = &scif1;
++		mmc2 = &sdhi3;
+ 	};
+ 
+ 	clksndsel: clksndsel {
+diff --git a/arch/arm64/boot/dts/renesas/ulcb.dtsi b/arch/arm64/boot/dts/renesas/ulcb.dtsi
+index a2e085db87c53..e11521b4b9ca4 100644
+--- a/arch/arm64/boot/dts/renesas/ulcb.dtsi
++++ b/arch/arm64/boot/dts/renesas/ulcb.dtsi
+@@ -23,6 +23,8 @@
+ 	aliases {
+ 		serial0 = &scif2;
+ 		ethernet0 = &avb;
++		mmc0 = &sdhi2;
++		mmc1 = &sdhi0;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+index a87b8a6787196..8f2c1c1e2c64e 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
++++ b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+@@ -734,7 +734,7 @@
+ 			clocks = <&sys_clk 6>;
+ 			reset-names = "ether";
+ 			resets = <&sys_rst 6>;
+-			phy-mode = "rgmii";
++			phy-mode = "rgmii-id";
+ 			local-mac-address = [00 00 00 00 00 00];
+ 			socionext,syscon-phy-mode = <&soc_glue 0>;
+ 
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi b/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
+index 0e52dadf54b3a..be97da1322580 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
++++ b/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
+@@ -564,7 +564,7 @@
+ 			clocks = <&sys_clk 6>;
+ 			reset-names = "ether";
+ 			resets = <&sys_rst 6>;
+-			phy-mode = "rgmii";
++			phy-mode = "rgmii-id";
+ 			local-mac-address = [00 00 00 00 00 00];
+ 			socionext,syscon-phy-mode = <&soc_glue 0>;
+ 
+@@ -585,7 +585,7 @@
+ 			clocks = <&sys_clk 7>;
+ 			reset-names = "ether";
+ 			resets = <&sys_rst 7>;
+-			phy-mode = "rgmii";
++			phy-mode = "rgmii-id";
+ 			local-mac-address = [00 00 00 00 00 00];
+ 			socionext,syscon-phy-mode = <&soc_glue 1>;
+ 
+diff --git a/arch/arm64/crypto/poly1305-glue.c b/arch/arm64/crypto/poly1305-glue.c
+index f33ada70c4ed8..01e22fe408235 100644
+--- a/arch/arm64/crypto/poly1305-glue.c
++++ b/arch/arm64/crypto/poly1305-glue.c
+@@ -25,7 +25,7 @@ asmlinkage void poly1305_emit(void *state, u8 *digest, const u32 *nonce);
+ 
+ static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
+ 
+-void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
++void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 key[POLY1305_KEY_SIZE])
+ {
+ 	poly1305_init_arm64(&dctx->h, key);
+ 	dctx->s[0] = get_unaligned_le32(key + 16);
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index cc060c41adaab..912b83e784bbf 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -601,6 +601,7 @@ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+ static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
+ 
+ void kvm_arm_init_debug(void);
++void kvm_arm_vcpu_init_debug(struct kvm_vcpu *vcpu);
+ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
+ void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
+ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index a884d77739895..fce8cbecd6bc7 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -96,8 +96,7 @@
+ #endif /* CONFIG_ARM64_FORCE_52BIT */
+ 
+ extern phys_addr_t arm64_dma_phys_limit;
+-extern phys_addr_t arm64_dma32_phys_limit;
+-#define ARCH_LOW_ADDRESS_LIMIT	((arm64_dma_phys_limit ? : arm64_dma32_phys_limit) - 1)
++#define ARCH_LOW_ADDRESS_LIMIT	(arm64_dma_phys_limit - 1)
+ 
+ struct debug_info {
+ #ifdef CONFIG_HAVE_HW_BREAKPOINT
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index a1c2c955474e9..5e5dd99e8cee8 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -547,6 +547,8 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
+ 
+ 	vcpu->arch.has_run_once = true;
+ 
++	kvm_arm_vcpu_init_debug(vcpu);
++
+ 	if (likely(irqchip_in_kernel(kvm))) {
+ 		/*
+ 		 * Map the VGIC hardware resources before running a vcpu the
+diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
+index dbc8905116311..2484b2cca74bc 100644
+--- a/arch/arm64/kvm/debug.c
++++ b/arch/arm64/kvm/debug.c
+@@ -68,6 +68,64 @@ void kvm_arm_init_debug(void)
+ 	__this_cpu_write(mdcr_el2, kvm_call_hyp_ret(__kvm_get_mdcr_el2));
+ }
+ 
++/**
++ * kvm_arm_setup_mdcr_el2 - configure vcpu mdcr_el2 value
++ *
++ * @vcpu:	the vcpu pointer
++ *
++ * This ensures we will trap access to:
++ *  - Performance monitors (MDCR_EL2_TPM/MDCR_EL2_TPMCR)
++ *  - Debug ROM Address (MDCR_EL2_TDRA)
++ *  - OS related registers (MDCR_EL2_TDOSA)
++ *  - Statistical profiler (MDCR_EL2_TPMS/MDCR_EL2_E2PB)
++ *  - Self-hosted Trace Filter controls (MDCR_EL2_TTRF)
++ */
++static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
++{
++	/*
++	 * This also clears MDCR_EL2_E2PB_MASK to disable guest access
++	 * to the profiling buffer.
++	 */
++	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
++	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
++				MDCR_EL2_TPMS |
++				MDCR_EL2_TTRF |
++				MDCR_EL2_TPMCR |
++				MDCR_EL2_TDRA |
++				MDCR_EL2_TDOSA);
++
++	/* Is the VM being debugged by userspace? */
++	if (vcpu->guest_debug)
++		/* Route all software debug exceptions to EL2 */
++		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
++
++	/*
++	 * Trap debug register access when one of the following is true:
++	 *  - Userspace is using the hardware to debug the guest
++	 *  (KVM_GUESTDBG_USE_HW is set).
++	 *  - The guest is not using debug (KVM_ARM64_DEBUG_DIRTY is clear).
++	 */
++	if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) ||
++	    !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
++		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
++
++	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
++}
++
++/**
++ * kvm_arm_vcpu_init_debug - setup vcpu debug traps
++ *
++ * @vcpu:	the vcpu pointer
++ *
++ * Set vcpu initial mdcr_el2 value.
++ */
++void kvm_arm_vcpu_init_debug(struct kvm_vcpu *vcpu)
++{
++	preempt_disable();
++	kvm_arm_setup_mdcr_el2(vcpu);
++	preempt_enable();
++}
++
+ /**
+  * kvm_arm_reset_debug_ptr - reset the debug ptr to point to the vcpu state
+  */
+@@ -83,13 +141,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
+  * @vcpu:	the vcpu pointer
+  *
+  * This is called before each entry into the hypervisor to setup any
+- * debug related registers. Currently this just ensures we will trap
+- * access to:
+- *  - Performance monitors (MDCR_EL2_TPM/MDCR_EL2_TPMCR)
+- *  - Debug ROM Address (MDCR_EL2_TDRA)
+- *  - OS related registers (MDCR_EL2_TDOSA)
+- *  - Statistical profiler (MDCR_EL2_TPMS/MDCR_EL2_E2PB)
+- *  - Self-hosted Trace Filter controls (MDCR_EL2_TTRF)
++ * debug related registers.
+  *
+  * Additionally, KVM only traps guest accesses to the debug registers if
+  * the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY
+@@ -101,28 +153,14 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
+ 
+ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
+ {
+-	bool trap_debug = !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY);
+ 	unsigned long mdscr, orig_mdcr_el2 = vcpu->arch.mdcr_el2;
+ 
+ 	trace_kvm_arm_setup_debug(vcpu, vcpu->guest_debug);
+ 
+-	/*
+-	 * This also clears MDCR_EL2_E2PB_MASK to disable guest access
+-	 * to the profiling buffer.
+-	 */
+-	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
+-	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
+-				MDCR_EL2_TPMS |
+-				MDCR_EL2_TTRF |
+-				MDCR_EL2_TPMCR |
+-				MDCR_EL2_TDRA |
+-				MDCR_EL2_TDOSA);
++	kvm_arm_setup_mdcr_el2(vcpu);
+ 
+ 	/* Is Guest debugging in effect? */
+ 	if (vcpu->guest_debug) {
+-		/* Route all software debug exceptions to EL2 */
+-		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
+-
+ 		/* Save guest debug state */
+ 		save_guest_debug_regs(vcpu);
+ 
+@@ -176,7 +214,6 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
+ 
+ 			vcpu->arch.debug_ptr = &vcpu->arch.external_debug_state;
+ 			vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+-			trap_debug = true;
+ 
+ 			trace_kvm_arm_set_regset("BKPTS", get_num_brps(),
+ 						&vcpu->arch.debug_ptr->dbg_bcr[0],
+@@ -191,10 +228,6 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
+ 	BUG_ON(!vcpu->guest_debug &&
+ 		vcpu->arch.debug_ptr != &vcpu->arch.vcpu_debug_state);
+ 
+-	/* Trap debug register access */
+-	if (trap_debug)
+-		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
+-
+ 	/* If KDE or MDE are set, perform a full save/restore cycle. */
+ 	if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
+ 		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+@@ -203,7 +236,6 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
+ 	if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2)
+ 		write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+ 
+-	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
+ 	trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1));
+ }
+ 
+diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
+index e911eea36eb0e..53a127d3e460b 100644
+--- a/arch/arm64/kvm/reset.c
++++ b/arch/arm64/kvm/reset.c
+@@ -291,6 +291,11 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ 
+ 	/* Reset core registers */
+ 	memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
++	memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
++	vcpu->arch.ctxt.spsr_abt = 0;
++	vcpu->arch.ctxt.spsr_und = 0;
++	vcpu->arch.ctxt.spsr_irq = 0;
++	vcpu->arch.ctxt.spsr_fiq = 0;
+ 	vcpu_gp_regs(vcpu)->pstate = pstate;
+ 
+ 	/* Reset system registers */
+diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
+index 44419679f91ad..7740995de982e 100644
+--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
++++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
+@@ -87,8 +87,8 @@ int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write)
+ 			r = vgic_v3_set_redist_base(kvm, 0, *addr, 0);
+ 			goto out;
+ 		}
+-		rdreg = list_first_entry(&vgic->rd_regions,
+-					 struct vgic_redist_region, list);
++		rdreg = list_first_entry_or_null(&vgic->rd_regions,
++						 struct vgic_redist_region, list);
+ 		if (!rdreg)
+ 			addr_ptr = &undef_value;
+ 		else
+@@ -226,6 +226,9 @@ static int vgic_get_common_attr(struct kvm_device *dev,
+ 		u64 addr;
+ 		unsigned long type = (unsigned long)attr->attr;
+ 
++		if (copy_from_user(&addr, uaddr, sizeof(addr)))
++			return -EFAULT;
++
+ 		r = kvm_vgic_addr(dev->kvm, type, &addr, false);
+ 		if (r)
+ 			return (r == -ENODEV) ? -ENXIO : r;
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 916e0547fdccf..a985d292e8203 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -53,13 +53,13 @@ s64 memstart_addr __ro_after_init = -1;
+ EXPORT_SYMBOL(memstart_addr);
+ 
+ /*
+- * We create both ZONE_DMA and ZONE_DMA32. ZONE_DMA covers the first 1G of
+- * memory as some devices, namely the Raspberry Pi 4, have peripherals with
+- * this limited view of the memory. ZONE_DMA32 will cover the rest of the 32
+- * bit addressable memory area.
++ * If the corresponding config options are enabled, we create both ZONE_DMA
++ * and ZONE_DMA32. By default ZONE_DMA covers the 32-bit addressable memory
++ * unless restricted on specific platforms (e.g. 30-bit on Raspberry Pi 4).
++ * In such case, ZONE_DMA32 covers the rest of the 32-bit addressable memory,
++ * otherwise it is empty.
+  */
+ phys_addr_t arm64_dma_phys_limit __ro_after_init;
+-phys_addr_t arm64_dma32_phys_limit __ro_after_init;
+ 
+ #ifdef CONFIG_KEXEC_CORE
+ /*
+@@ -84,7 +84,7 @@ static void __init reserve_crashkernel(void)
+ 
+ 	if (crash_base == 0) {
+ 		/* Current arm64 boot protocol requires 2MB alignment */
+-		crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
++		crash_base = memblock_find_in_range(0, arm64_dma_phys_limit,
+ 				crash_size, SZ_2M);
+ 		if (crash_base == 0) {
+ 			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
+@@ -189,6 +189,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
+ 	unsigned long max_zone_pfns[MAX_NR_ZONES]  = {0};
+ 	unsigned int __maybe_unused acpi_zone_dma_bits;
+ 	unsigned int __maybe_unused dt_zone_dma_bits;
++	phys_addr_t __maybe_unused dma32_phys_limit = max_zone_phys(32);
+ 
+ #ifdef CONFIG_ZONE_DMA
+ 	acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
+@@ -198,8 +199,12 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
+ 	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
+ #endif
+ #ifdef CONFIG_ZONE_DMA32
+-	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(arm64_dma32_phys_limit);
++	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
++	if (!arm64_dma_phys_limit)
++		arm64_dma_phys_limit = dma32_phys_limit;
+ #endif
++	if (!arm64_dma_phys_limit)
++		arm64_dma_phys_limit = PHYS_MASK + 1;
+ 	max_zone_pfns[ZONE_NORMAL] = max;
+ 
+ 	free_area_init(max_zone_pfns);
+@@ -393,16 +398,9 @@ void __init arm64_memblock_init(void)
+ 
+ 	early_init_fdt_scan_reserved_mem();
+ 
+-	if (IS_ENABLED(CONFIG_ZONE_DMA32))
+-		arm64_dma32_phys_limit = max_zone_phys(32);
+-	else
+-		arm64_dma32_phys_limit = PHYS_MASK + 1;
+-
+ 	reserve_elfcorehdr();
+ 
+ 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
+-
+-	dma_contiguous_reserve(arm64_dma32_phys_limit);
+ }
+ 
+ void __init bootmem_init(void)
+@@ -437,6 +435,11 @@ void __init bootmem_init(void)
+ 	sparse_init();
+ 	zone_sizes_init(min, max);
+ 
++	/*
++	 * Reserve the CMA area after arm64_dma_phys_limit was initialised.
++	 */
++	dma_contiguous_reserve(arm64_dma_phys_limit);
++
+ 	/*
+ 	 * request_standard_resources() depends on crashkernel's memory being
+ 	 * reserved, so do it here.
+@@ -519,7 +522,7 @@ static void __init free_unused_memmap(void)
+ void __init mem_init(void)
+ {
+ 	if (swiotlb_force == SWIOTLB_FORCE ||
+-	    max_pfn > PFN_DOWN(arm64_dma_phys_limit ? : arm64_dma32_phys_limit))
++	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
+ 		swiotlb_init(1);
+ 	else
+ 		swiotlb_force = SWIOTLB_NO_FORCE;
+diff --git a/arch/ia64/kernel/efi.c b/arch/ia64/kernel/efi.c
+index f932b25fb817a..33282f33466e7 100644
+--- a/arch/ia64/kernel/efi.c
++++ b/arch/ia64/kernel/efi.c
+@@ -413,10 +413,10 @@ efi_get_pal_addr (void)
+ 		mask  = ~((1 << IA64_GRANULE_SHIFT) - 1);
+ 
+ 		printk(KERN_INFO "CPU %d: mapping PAL code "
+-                       "[0x%lx-0x%lx) into [0x%lx-0x%lx)\n",
+-                       smp_processor_id(), md->phys_addr,
+-                       md->phys_addr + efi_md_size(md),
+-                       vaddr & mask, (vaddr & mask) + IA64_GRANULE_SIZE);
++			"[0x%llx-0x%llx) into [0x%llx-0x%llx)\n",
++			smp_processor_id(), md->phys_addr,
++			md->phys_addr + efi_md_size(md),
++			vaddr & mask, (vaddr & mask) + IA64_GRANULE_SIZE);
+ #endif
+ 		return __va(md->phys_addr);
+ 	}
+@@ -558,6 +558,7 @@ efi_init (void)
+ 	{
+ 		efi_memory_desc_t *md;
+ 		void *p;
++		unsigned int i;
+ 
+ 		for (i = 0, p = efi_map_start; p < efi_map_end;
+ 		     ++i, p += efi_desc_size)
+@@ -584,7 +585,7 @@ efi_init (void)
+ 			}
+ 
+ 			printk("mem%02d: %s "
+-			       "range=[0x%016lx-0x%016lx) (%4lu%s)\n",
++			       "range=[0x%016llx-0x%016llx) (%4lu%s)\n",
+ 			       i, efi_md_typeattr_format(buf, sizeof(buf), md),
+ 			       md->phys_addr,
+ 			       md->phys_addr + efi_md_size(md), size, unit);
+diff --git a/arch/m68k/include/asm/mvme147hw.h b/arch/m68k/include/asm/mvme147hw.h
+index 257b29184af91..e28eb1c0e0bfb 100644
+--- a/arch/m68k/include/asm/mvme147hw.h
++++ b/arch/m68k/include/asm/mvme147hw.h
+@@ -66,6 +66,9 @@ struct pcc_regs {
+ #define PCC_INT_ENAB		0x08
+ 
+ #define PCC_TIMER_INT_CLR	0x80
++
++#define PCC_TIMER_TIC_EN	0x01
++#define PCC_TIMER_COC_EN	0x02
+ #define PCC_TIMER_CLR_OVF	0x04
+ 
+ #define PCC_LEVEL_ABORT		0x07
+diff --git a/arch/m68k/kernel/sys_m68k.c b/arch/m68k/kernel/sys_m68k.c
+index 1c235d8f53f36..f55bdcb8e4f15 100644
+--- a/arch/m68k/kernel/sys_m68k.c
++++ b/arch/m68k/kernel/sys_m68k.c
+@@ -388,6 +388,8 @@ sys_cacheflush (unsigned long addr, int scope, int cache, unsigned long len)
+ 		ret = -EPERM;
+ 		if (!capable(CAP_SYS_ADMIN))
+ 			goto out;
++
++		mmap_read_lock(current->mm);
+ 	} else {
+ 		struct vm_area_struct *vma;
+ 
+diff --git a/arch/m68k/mvme147/config.c b/arch/m68k/mvme147/config.c
+index 490700aa2212e..aab7880e078df 100644
+--- a/arch/m68k/mvme147/config.c
++++ b/arch/m68k/mvme147/config.c
+@@ -116,8 +116,10 @@ static irqreturn_t mvme147_timer_int (int irq, void *dev_id)
+ 	unsigned long flags;
+ 
+ 	local_irq_save(flags);
+-	m147_pcc->t1_int_cntrl = PCC_TIMER_INT_CLR;
+-	m147_pcc->t1_cntrl = PCC_TIMER_CLR_OVF;
++	m147_pcc->t1_cntrl = PCC_TIMER_CLR_OVF | PCC_TIMER_COC_EN |
++			     PCC_TIMER_TIC_EN;
++	m147_pcc->t1_int_cntrl = PCC_INT_ENAB | PCC_TIMER_INT_CLR |
++				 PCC_LEVEL_TIMER1;
+ 	clk_total += PCC_TIMER_CYCLES;
+ 	timer_routine(0, NULL);
+ 	local_irq_restore(flags);
+@@ -135,10 +137,10 @@ void mvme147_sched_init (irq_handler_t timer_routine)
+ 	/* Init the clock with a value */
+ 	/* The clock counter increments until 0xFFFF then reloads */
+ 	m147_pcc->t1_preload = PCC_TIMER_PRELOAD;
+-	m147_pcc->t1_cntrl = 0x0;	/* clear timer */
+-	m147_pcc->t1_cntrl = 0x3;	/* start timer */
+-	m147_pcc->t1_int_cntrl = PCC_TIMER_INT_CLR;  /* clear pending ints */
+-	m147_pcc->t1_int_cntrl = PCC_INT_ENAB|PCC_LEVEL_TIMER1;
++	m147_pcc->t1_cntrl = PCC_TIMER_CLR_OVF | PCC_TIMER_COC_EN |
++			     PCC_TIMER_TIC_EN;
++	m147_pcc->t1_int_cntrl = PCC_INT_ENAB | PCC_TIMER_INT_CLR |
++				 PCC_LEVEL_TIMER1;
+ 
+ 	clocksource_register_hz(&mvme147_clk, PCC_TIMER_CLOCK_FREQ);
+ }
+diff --git a/arch/m68k/mvme16x/config.c b/arch/m68k/mvme16x/config.c
+index 5b86d10e0f84e..d43d128b77471 100644
+--- a/arch/m68k/mvme16x/config.c
++++ b/arch/m68k/mvme16x/config.c
+@@ -367,6 +367,7 @@ static u32 clk_total;
+ #define PCCTOVR1_COC_EN      0x02
+ #define PCCTOVR1_OVR_CLR     0x04
+ 
++#define PCCTIC1_INT_LEVEL    6
+ #define PCCTIC1_INT_CLR      0x08
+ #define PCCTIC1_INT_EN       0x10
+ 
+@@ -376,8 +377,8 @@ static irqreturn_t mvme16x_timer_int (int irq, void *dev_id)
+ 	unsigned long flags;
+ 
+ 	local_irq_save(flags);
+-	out_8(PCCTIC1, in_8(PCCTIC1) | PCCTIC1_INT_CLR);
+-	out_8(PCCTOVR1, PCCTOVR1_OVR_CLR);
++	out_8(PCCTOVR1, PCCTOVR1_OVR_CLR | PCCTOVR1_TIC_EN | PCCTOVR1_COC_EN);
++	out_8(PCCTIC1, PCCTIC1_INT_EN | PCCTIC1_INT_CLR | PCCTIC1_INT_LEVEL);
+ 	clk_total += PCC_TIMER_CYCLES;
+ 	timer_routine(0, NULL);
+ 	local_irq_restore(flags);
+@@ -391,14 +392,15 @@ void mvme16x_sched_init (irq_handler_t timer_routine)
+     int irq;
+ 
+     /* Using PCCchip2 or MC2 chip tick timer 1 */
+-    out_be32(PCCTCNT1, 0);
+-    out_be32(PCCTCMP1, PCC_TIMER_CYCLES);
+-    out_8(PCCTOVR1, in_8(PCCTOVR1) | PCCTOVR1_TIC_EN | PCCTOVR1_COC_EN);
+-    out_8(PCCTIC1, PCCTIC1_INT_EN | 6);
+     if (request_irq(MVME16x_IRQ_TIMER, mvme16x_timer_int, IRQF_TIMER, "timer",
+                     timer_routine))
+ 	panic ("Couldn't register timer int");
+ 
++    out_be32(PCCTCNT1, 0);
++    out_be32(PCCTCMP1, PCC_TIMER_CYCLES);
++    out_8(PCCTOVR1, PCCTOVR1_OVR_CLR | PCCTOVR1_TIC_EN | PCCTOVR1_COC_EN);
++    out_8(PCCTIC1, PCCTIC1_INT_EN | PCCTIC1_INT_CLR | PCCTIC1_INT_LEVEL);
++
+     clocksource_register_hz(&mvme16x_clk, PCC_TIMER_CLOCK_FREQ);
+ 
+     if (brdno == 0x0162 || brdno == 0x172)
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 2000bb2b0220d..1917ccd392564 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -6,6 +6,7 @@ config MIPS
+ 	select ARCH_BINFMT_ELF_STATE if MIPS_FP_SUPPORT
+ 	select ARCH_HAS_FORTIFY_SOURCE
+ 	select ARCH_HAS_KCOV
++	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE if !EVA
+ 	select ARCH_HAS_PTE_SPECIAL if !(32BIT && CPU_HAS_RIXI)
+ 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+ 	select ARCH_HAS_UBSAN_SANITIZE_ALL
+diff --git a/arch/mips/boot/dts/brcm/bcm3368.dtsi b/arch/mips/boot/dts/brcm/bcm3368.dtsi
+index 69cbef4723775..d4b2b430dad01 100644
+--- a/arch/mips/boot/dts/brcm/bcm3368.dtsi
++++ b/arch/mips/boot/dts/brcm/bcm3368.dtsi
+@@ -59,7 +59,7 @@
+ 
+ 		periph_cntl: syscon@fff8c008 {
+ 			compatible = "syscon";
+-			reg = <0xfff8c000 0x4>;
++			reg = <0xfff8c008 0x4>;
+ 			native-endian;
+ 		};
+ 
+diff --git a/arch/mips/boot/dts/brcm/bcm63268.dtsi b/arch/mips/boot/dts/brcm/bcm63268.dtsi
+index 5acb49b618678..365fa75cd9ac5 100644
+--- a/arch/mips/boot/dts/brcm/bcm63268.dtsi
++++ b/arch/mips/boot/dts/brcm/bcm63268.dtsi
+@@ -59,7 +59,7 @@
+ 
+ 		periph_cntl: syscon@10000008 {
+ 			compatible = "syscon";
+-			reg = <0x10000000 0xc>;
++			reg = <0x10000008 0x4>;
+ 			native-endian;
+ 		};
+ 
+diff --git a/arch/mips/boot/dts/brcm/bcm6358.dtsi b/arch/mips/boot/dts/brcm/bcm6358.dtsi
+index f21176cac0381..89a3107cad28e 100644
+--- a/arch/mips/boot/dts/brcm/bcm6358.dtsi
++++ b/arch/mips/boot/dts/brcm/bcm6358.dtsi
+@@ -59,7 +59,7 @@
+ 
+ 		periph_cntl: syscon@fffe0008 {
+ 			compatible = "syscon";
+-			reg = <0xfffe0000 0x4>;
++			reg = <0xfffe0008 0x4>;
+ 			native-endian;
+ 		};
+ 
+diff --git a/arch/mips/boot/dts/brcm/bcm6362.dtsi b/arch/mips/boot/dts/brcm/bcm6362.dtsi
+index c98f9111e3c8b..0b2adefd75cec 100644
+--- a/arch/mips/boot/dts/brcm/bcm6362.dtsi
++++ b/arch/mips/boot/dts/brcm/bcm6362.dtsi
+@@ -59,7 +59,7 @@
+ 
+ 		periph_cntl: syscon@10000008 {
+ 			compatible = "syscon";
+-			reg = <0x10000000 0xc>;
++			reg = <0x10000008 0x4>;
+ 			native-endian;
+ 		};
+ 
+diff --git a/arch/mips/boot/dts/brcm/bcm6368.dtsi b/arch/mips/boot/dts/brcm/bcm6368.dtsi
+index 449c167dd8921..b84a3bfe8c51e 100644
+--- a/arch/mips/boot/dts/brcm/bcm6368.dtsi
++++ b/arch/mips/boot/dts/brcm/bcm6368.dtsi
+@@ -59,7 +59,7 @@
+ 
+ 		periph_cntl: syscon@100000008 {
+ 			compatible = "syscon";
+-			reg = <0x10000000 0xc>;
++			reg = <0x10000008 0x4>;
+ 			native-endian;
+ 		};
+ 
+diff --git a/arch/mips/crypto/poly1305-glue.c b/arch/mips/crypto/poly1305-glue.c
+index fc881b46d9111..bc6110fb98e0a 100644
+--- a/arch/mips/crypto/poly1305-glue.c
++++ b/arch/mips/crypto/poly1305-glue.c
+@@ -17,7 +17,7 @@ asmlinkage void poly1305_init_mips(void *state, const u8 *key);
+ asmlinkage void poly1305_blocks_mips(void *state, const u8 *src, u32 len, u32 hibit);
+ asmlinkage void poly1305_emit_mips(void *state, u8 *digest, const u32 *nonce);
+ 
+-void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
++void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 key[POLY1305_KEY_SIZE])
+ {
+ 	poly1305_init_mips(&dctx->h, key);
+ 	dctx->s[0] = get_unaligned_le32(key + 16);
+diff --git a/arch/mips/include/asm/asmmacro.h b/arch/mips/include/asm/asmmacro.h
+index 86f2323ebe6bc..ca83ada7015f5 100644
+--- a/arch/mips/include/asm/asmmacro.h
++++ b/arch/mips/include/asm/asmmacro.h
+@@ -44,8 +44,7 @@
+ 	.endm
+ #endif
+ 
+-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR5) || \
+-    defined(CONFIG_CPU_MIPSR6)
++#ifdef CONFIG_CPU_HAS_DIEI
+ 	.macro	local_irq_enable reg=t0
+ 	ei
+ 	irq_enable_hazard
+diff --git a/arch/mips/loongson64/init.c b/arch/mips/loongson64/init.c
+index ed75f7971261b..052cce6a8a998 100644
+--- a/arch/mips/loongson64/init.c
++++ b/arch/mips/loongson64/init.c
+@@ -82,7 +82,7 @@ static int __init add_legacy_isa_io(struct fwnode_handle *fwnode, resource_size_
+ 		return -ENOMEM;
+ 
+ 	range->fwnode = fwnode;
+-	range->size = size;
++	range->size = size = round_up(size, PAGE_SIZE);
+ 	range->hw_start = hw_start;
+ 	range->flags = LOGIC_PIO_CPU_MMIO;
+ 
+diff --git a/arch/mips/pci/pci-legacy.c b/arch/mips/pci/pci-legacy.c
+index 39052de915f34..3a909194284a6 100644
+--- a/arch/mips/pci/pci-legacy.c
++++ b/arch/mips/pci/pci-legacy.c
+@@ -166,8 +166,13 @@ void pci_load_of_ranges(struct pci_controller *hose, struct device_node *node)
+ 			res = hose->mem_resource;
+ 			break;
+ 		}
+-		if (res != NULL)
+-			of_pci_range_to_resource(&range, node, res);
++		if (res != NULL) {
++			res->name = node->full_name;
++			res->flags = range.flags;
++			res->start = range.cpu_addr;
++			res->end = range.cpu_addr + range.size - 1;
++			res->parent = res->child = res->sibling = NULL;
++		}
+ 	}
+ }
+ 
+diff --git a/arch/mips/pci/pci-mt7620.c b/arch/mips/pci/pci-mt7620.c
+index d360616037525..e032932348d6f 100644
+--- a/arch/mips/pci/pci-mt7620.c
++++ b/arch/mips/pci/pci-mt7620.c
+@@ -30,6 +30,7 @@
+ #define RALINK_GPIOMODE			0x60
+ 
+ #define PPLL_CFG1			0x9c
++#define PPLL_LD				BIT(23)
+ 
+ #define PPLL_DRV			0xa0
+ #define PDRV_SW_SET			BIT(31)
+@@ -239,8 +240,8 @@ static int mt7620_pci_hw_init(struct platform_device *pdev)
+ 	rt_sysc_m32(0, RALINK_PCIE0_CLK_EN, RALINK_CLKCFG1);
+ 	mdelay(100);
+ 
+-	if (!(rt_sysc_r32(PPLL_CFG1) & PDRV_SW_SET)) {
+-		dev_err(&pdev->dev, "MT7620 PPLL unlock\n");
++	if (!(rt_sysc_r32(PPLL_CFG1) & PPLL_LD)) {
++		dev_err(&pdev->dev, "pcie PLL not locked, aborting init\n");
+ 		reset_control_assert(rstpcie0);
+ 		rt_sysc_m32(RALINK_PCIE0_CLK_EN, 0, RALINK_CLKCFG1);
+ 		return -1;
+diff --git a/arch/mips/pci/pci-rt2880.c b/arch/mips/pci/pci-rt2880.c
+index e1f12e3981363..f1538d2be89e5 100644
+--- a/arch/mips/pci/pci-rt2880.c
++++ b/arch/mips/pci/pci-rt2880.c
+@@ -180,7 +180,6 @@ static inline void rt2880_pci_write_u32(unsigned long reg, u32 val)
+ 
+ int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+ {
+-	u16 cmd;
+ 	int irq = -1;
+ 
+ 	if (dev->bus->number != 0)
+@@ -188,8 +187,6 @@ int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+ 
+ 	switch (PCI_SLOT(dev->devfn)) {
+ 	case 0x00:
+-		rt2880_pci_write_u32(PCI_BASE_ADDRESS_0, 0x08000000);
+-		(void) rt2880_pci_read_u32(PCI_BASE_ADDRESS_0);
+ 		break;
+ 	case 0x11:
+ 		irq = RT288X_CPU_IRQ_PCI;
+@@ -201,16 +198,6 @@ int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+ 		break;
+ 	}
+ 
+-	pci_write_config_byte((struct pci_dev *) dev,
+-		PCI_CACHE_LINE_SIZE, 0x14);
+-	pci_write_config_byte((struct pci_dev *) dev, PCI_LATENCY_TIMER, 0xFF);
+-	pci_read_config_word((struct pci_dev *) dev, PCI_COMMAND, &cmd);
+-	cmd |= PCI_COMMAND_MASTER | PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
+-		PCI_COMMAND_INVALIDATE | PCI_COMMAND_FAST_BACK |
+-		PCI_COMMAND_SERR | PCI_COMMAND_WAIT | PCI_COMMAND_PARITY;
+-	pci_write_config_word((struct pci_dev *) dev, PCI_COMMAND, cmd);
+-	pci_write_config_byte((struct pci_dev *) dev, PCI_INTERRUPT_LINE,
+-			      dev->irq);
+ 	return irq;
+ }
+ 
+@@ -251,6 +238,30 @@ static int rt288x_pci_probe(struct platform_device *pdev)
+ 
+ int pcibios_plat_dev_init(struct pci_dev *dev)
+ {
++	static bool slot0_init;
++
++	/*
++	 * Nobody seems to initialize slot 0, but this platform requires it, so
++	 * do it once when some other slot is being enabled. The PCI subsystem
++	 * should configure other slots properly, so no need to do anything
++	 * special for those.
++	 */
++	if (!slot0_init && dev->bus->number == 0) {
++		u16 cmd;
++		u32 bar0;
++
++		slot0_init = true;
++
++		pci_bus_write_config_dword(dev->bus, 0, PCI_BASE_ADDRESS_0,
++					   0x08000000);
++		pci_bus_read_config_dword(dev->bus, 0, PCI_BASE_ADDRESS_0,
++					  &bar0);
++
++		pci_bus_read_config_word(dev->bus, 0, PCI_COMMAND, &cmd);
++		cmd |= PCI_COMMAND_MASTER | PCI_COMMAND_IO | PCI_COMMAND_MEMORY;
++		pci_bus_write_config_word(dev->bus, 0, PCI_COMMAND, cmd);
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 31ed8083571ff..5afa0ebd78ca5 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -222,7 +222,7 @@ config PPC
+ 	select HAVE_LIVEPATCH			if HAVE_DYNAMIC_FTRACE_WITH_REGS
+ 	select HAVE_MOD_ARCH_SPECIFIC
+ 	select HAVE_NMI				if PERF_EVENTS || (PPC64 && PPC_BOOK3S)
+-	select HAVE_HARDLOCKUP_DETECTOR_ARCH	if (PPC64 && PPC_BOOK3S)
++	select HAVE_HARDLOCKUP_DETECTOR_ARCH	if PPC64 && PPC_BOOK3S && SMP
+ 	select HAVE_OPROFILE
+ 	select HAVE_OPTPROBES			if PPC64
+ 	select HAVE_PERF_EVENTS
+diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
+index b88900f4832fd..52abca88b5b2b 100644
+--- a/arch/powerpc/Kconfig.debug
++++ b/arch/powerpc/Kconfig.debug
+@@ -352,6 +352,7 @@ config PPC_EARLY_DEBUG_CPM_ADDR
+ config FAIL_IOMMU
+ 	bool "Fault-injection capability for IOMMU"
+ 	depends on FAULT_INJECTION
++	depends on PCI || IBMVIO
+ 	help
+ 	  Provide fault-injection capability for IOMMU. Each device can
+ 	  be selectively enabled via the fail_iommu property.
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index cd3feeac6e87c..4a3dca0271f1e 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -7,6 +7,7 @@
+ #ifndef __ASSEMBLY__
+ #include <linux/mmdebug.h>
+ #include <linux/bug.h>
++#include <linux/sizes.h>
+ #endif
+ 
+ /*
+@@ -323,7 +324,8 @@ extern unsigned long pci_io_base;
+ #define  PHB_IO_END	(KERN_IO_START + FULL_IO_SIZE)
+ #define IOREMAP_BASE	(PHB_IO_END)
+ #define IOREMAP_START	(ioremap_bot)
+-#define IOREMAP_END	(KERN_IO_END)
++#define IOREMAP_END	(KERN_IO_END - FIXADDR_SIZE)
++#define FIXADDR_SIZE	SZ_32M
+ 
+ /* Advertise special mapping type for AGP */
+ #define HAVE_PAGE_AGP
+diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
+index c7813dc628fc9..59cab558e2f05 100644
+--- a/arch/powerpc/include/asm/book3s/64/radix.h
++++ b/arch/powerpc/include/asm/book3s/64/radix.h
+@@ -222,8 +222,10 @@ static inline void radix__set_pte_at(struct mm_struct *mm, unsigned long addr,
+ 	 * from ptesync, it should probably go into update_mmu_cache, rather
+ 	 * than set_pte_at (which is used to set ptes unrelated to faults).
+ 	 *
+-	 * Spurious faults to vmalloc region are not tolerated, so there is
+-	 * a ptesync in flush_cache_vmap.
++	 * Spurious faults from the kernel memory are not tolerated, so there
++	 * is a ptesync in flush_cache_vmap, and __map_kernel_page() follows
++	 * the pte update sequence from ISA Book III 6.10 Translation Table
++	 * Update Synchronization Requirements.
+ 	 */
+ }
+ 
+diff --git a/arch/powerpc/include/asm/fixmap.h b/arch/powerpc/include/asm/fixmap.h
+index 6bfc87915d5db..591b2f4deed53 100644
+--- a/arch/powerpc/include/asm/fixmap.h
++++ b/arch/powerpc/include/asm/fixmap.h
+@@ -23,12 +23,17 @@
+ #include <asm/kmap_types.h>
+ #endif
+ 
++#ifdef CONFIG_PPC64
++#define FIXADDR_TOP	(IOREMAP_END + FIXADDR_SIZE)
++#else
++#define FIXADDR_SIZE	0
+ #ifdef CONFIG_KASAN
+ #include <asm/kasan.h>
+ #define FIXADDR_TOP	(KASAN_SHADOW_START - PAGE_SIZE)
+ #else
+ #define FIXADDR_TOP	((unsigned long)(-PAGE_SIZE))
+ #endif
++#endif
+ 
+ /*
+  * Here we define all the compile-time 'special' virtual
+@@ -50,6 +55,7 @@
+  */
+ enum fixed_addresses {
+ 	FIX_HOLE,
++#ifdef CONFIG_PPC32
+ 	/* reserve the top 128K for early debugging purposes */
+ 	FIX_EARLY_DEBUG_TOP = FIX_HOLE,
+ 	FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+(ALIGN(SZ_128K, PAGE_SIZE)/PAGE_SIZE)-1,
+@@ -72,6 +78,7 @@ enum fixed_addresses {
+ 		       FIX_IMMR_SIZE,
+ #endif
+ 	/* FIX_PCIE_MCFG, */
++#endif /* CONFIG_PPC32 */
+ 	__end_of_permanent_fixed_addresses,
+ 
+ #define NR_FIX_BTMAPS		(SZ_256K / PAGE_SIZE)
+@@ -98,6 +105,8 @@ enum fixed_addresses {
+ static inline void __set_fixmap(enum fixed_addresses idx,
+ 				phys_addr_t phys, pgprot_t flags)
+ {
++	BUILD_BUG_ON(IS_ENABLED(CONFIG_PPC64) && __FIXADDR_SIZE > FIXADDR_SIZE);
++
+ 	if (__builtin_constant_p(idx))
+ 		BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
+ 	else if (WARN_ON(idx >= __end_of_fixed_addresses))
+diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
+index 6cb8aa3571917..57cd3892bfe05 100644
+--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
+@@ -6,6 +6,8 @@
+  * the ppc64 non-hashed page table.
+  */
+ 
++#include <linux/sizes.h>
++
+ #include <asm/nohash/64/pgtable-4k.h>
+ #include <asm/barrier.h>
+ #include <asm/asm-const.h>
+@@ -54,7 +56,8 @@
+ #define  PHB_IO_END	(KERN_IO_START + FULL_IO_SIZE)
+ #define IOREMAP_BASE	(PHB_IO_END)
+ #define IOREMAP_START	(ioremap_bot)
+-#define IOREMAP_END	(KERN_VIRT_START + KERN_VIRT_SIZE)
++#define IOREMAP_END	(KERN_VIRT_START + KERN_VIRT_SIZE - FIXADDR_SIZE)
++#define FIXADDR_SIZE	SZ_32M
+ 
+ 
+ /*
+diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
+index b2035b2f57ce3..635bdf947105c 100644
+--- a/arch/powerpc/include/asm/smp.h
++++ b/arch/powerpc/include/asm/smp.h
+@@ -121,6 +121,11 @@ static inline struct cpumask *cpu_sibling_mask(int cpu)
+ 	return per_cpu(cpu_sibling_map, cpu);
+ }
+ 
++static inline struct cpumask *cpu_core_mask(int cpu)
++{
++	return per_cpu(cpu_core_map, cpu);
++}
++
+ static inline struct cpumask *cpu_l2_cache_mask(int cpu)
+ {
+ 	return per_cpu(cpu_l2_cache_map, cpu);
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index 8482739d42f38..eddf362caedce 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -292,7 +292,7 @@ static void fadump_show_config(void)
+  * that is required for a kernel to boot successfully.
+  *
+  */
+-static inline u64 fadump_calculate_reserve_size(void)
++static __init u64 fadump_calculate_reserve_size(void)
+ {
+ 	u64 base, size, bootmem_min;
+ 	int ret;
+diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
+index c1545f22c077d..7a14a094be8ac 100644
+--- a/arch/powerpc/kernel/prom.c
++++ b/arch/powerpc/kernel/prom.c
+@@ -268,7 +268,7 @@ static struct feature_property {
+ };
+ 
+ #if defined(CONFIG_44x) && defined(CONFIG_PPC_FPU)
+-static inline void identical_pvr_fixup(unsigned long node)
++static __init void identical_pvr_fixup(unsigned long node)
+ {
+ 	unsigned int pvr;
+ 	const char *model = of_get_flat_dt_prop(node, "model", NULL);
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 7d6cf75a7fd80..dd34ea6744965 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -975,17 +975,12 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
+ 				local_memory_node(numa_cpu_lookup_table[cpu]));
+ 		}
+ #endif
+-		/*
+-		 * cpu_core_map is now more updated and exists only since
+-		 * its been exported for long. It only will have a snapshot
+-		 * of cpu_cpu_mask.
+-		 */
+-		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
+ 	}
+ 
+ 	/* Init the cpumasks so the boot CPU is related to itself */
+ 	cpumask_set_cpu(boot_cpuid, cpu_sibling_mask(boot_cpuid));
+ 	cpumask_set_cpu(boot_cpuid, cpu_l2_cache_mask(boot_cpuid));
++	cpumask_set_cpu(boot_cpuid, cpu_core_mask(boot_cpuid));
+ 
+ 	if (has_coregroup_support())
+ 		cpumask_set_cpu(boot_cpuid, cpu_coregroup_mask(boot_cpuid));
+@@ -1304,6 +1299,9 @@ static void remove_cpu_from_masks(int cpu)
+ 			set_cpus_unrelated(cpu, i, cpu_smallcore_mask);
+ 	}
+ 
++	for_each_cpu(i, cpu_core_mask(cpu))
++		set_cpus_unrelated(cpu, i, cpu_core_mask);
++
+ 	if (has_coregroup_support()) {
+ 		for_each_cpu(i, cpu_coregroup_mask(cpu))
+ 			set_cpus_unrelated(cpu, i, cpu_coregroup_mask);
+@@ -1364,8 +1362,11 @@ static void update_coregroup_mask(int cpu, cpumask_var_t *mask)
+ 
+ static void add_cpu_to_masks(int cpu)
+ {
++	struct cpumask *(*submask_fn)(int) = cpu_sibling_mask;
+ 	int first_thread = cpu_first_thread_sibling(cpu);
++	int chip_id = cpu_to_chip_id(cpu);
+ 	cpumask_var_t mask;
++	bool ret;
+ 	int i;
+ 
+ 	/*
+@@ -1381,12 +1382,36 @@ static void add_cpu_to_masks(int cpu)
+ 	add_cpu_to_smallcore_masks(cpu);
+ 
+ 	/* In CPU-hotplug path, hence use GFP_ATOMIC */
+-	alloc_cpumask_var_node(&mask, GFP_ATOMIC, cpu_to_node(cpu));
++	ret = alloc_cpumask_var_node(&mask, GFP_ATOMIC, cpu_to_node(cpu));
+ 	update_mask_by_l2(cpu, &mask);
+ 
+ 	if (has_coregroup_support())
+ 		update_coregroup_mask(cpu, &mask);
+ 
++	if (chip_id == -1 || !ret) {
++		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
++		goto out;
++	}
++
++	if (shared_caches)
++		submask_fn = cpu_l2_cache_mask;
++
++	/* Update core_mask with all the CPUs that are part of submask */
++	or_cpumasks_related(cpu, cpu, submask_fn, cpu_core_mask);
++
++	/* Skip all CPUs already part of current CPU core mask */
++	cpumask_andnot(mask, cpu_online_mask, cpu_core_mask(cpu));
++
++	for_each_cpu(i, mask) {
++		if (chip_id == cpu_to_chip_id(i)) {
++			or_cpumasks_related(cpu, i, submask_fn, cpu_core_mask);
++			cpumask_andnot(mask, mask, submask_fn(i));
++		} else {
++			cpumask_andnot(mask, mask, cpu_core_mask(i));
++		}
++	}
++
++out:
+ 	free_cpumask_var(mask);
+ }
+ 
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index e3b1839fc2519..280f7992ae993 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -3697,7 +3697,10 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	vcpu->arch.dec_expires = dec + tb;
+ 	vcpu->cpu = -1;
+ 	vcpu->arch.thread_cpu = -1;
++	/* Save guest CTRL register, set runlatch to 1 */
+ 	vcpu->arch.ctrl = mfspr(SPRN_CTRLF);
++	if (!(vcpu->arch.ctrl & 1))
++		mtspr(SPRN_CTRLT, vcpu->arch.ctrl | 1);
+ 
+ 	vcpu->arch.iamr = mfspr(SPRN_IAMR);
+ 	vcpu->arch.pspb = mfspr(SPRN_PSPB);
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index 3adcf730f4784..1d5eec847b883 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -108,7 +108,7 @@ static int early_map_kernel_page(unsigned long ea, unsigned long pa,
+ 
+ set_the_pte:
+ 	set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
+-	smp_wmb();
++	asm volatile("ptesync": : :"memory");
+ 	return 0;
+ }
+ 
+@@ -168,7 +168,7 @@ static int __map_kernel_page(unsigned long ea, unsigned long pa,
+ 
+ set_the_pte:
+ 	set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
+-	smp_wmb();
++	asm volatile("ptesync": : :"memory");
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
+index e1a21d34c6e49..5e8eedda45d39 100644
+--- a/arch/powerpc/perf/isa207-common.c
++++ b/arch/powerpc/perf/isa207-common.c
+@@ -400,8 +400,8 @@ ebb_bhrb:
+ 	 * EBB events are pinned & exclusive, so this should never actually
+ 	 * hit, but we leave it as a fallback in case.
+ 	 */
+-	mask  |= CNST_EBB_VAL(ebb);
+-	value |= CNST_EBB_MASK;
++	mask  |= CNST_EBB_MASK;
++	value |= CNST_EBB_VAL(ebb);
+ 
+ 	*maskp = mask;
+ 	*valp = value;
+diff --git a/arch/powerpc/perf/power10-events-list.h b/arch/powerpc/perf/power10-events-list.h
+index 60c1b8111082d..e66487804a599 100644
+--- a/arch/powerpc/perf/power10-events-list.h
++++ b/arch/powerpc/perf/power10-events-list.h
+@@ -66,5 +66,5 @@ EVENT(PM_RUN_INST_CMPL_ALT,			0x00002);
+  *     thresh end (TE)
+  */
+ 
+-EVENT(MEM_LOADS,				0x34340401e0);
+-EVENT(MEM_STORES,				0x343c0401e0);
++EVENT(MEM_LOADS,				0x35340401e0);
++EVENT(MEM_STORES,				0x353c0401e0);
+diff --git a/arch/powerpc/platforms/52xx/lite5200_sleep.S b/arch/powerpc/platforms/52xx/lite5200_sleep.S
+index 11475c58ea431..afee8b1515a8e 100644
+--- a/arch/powerpc/platforms/52xx/lite5200_sleep.S
++++ b/arch/powerpc/platforms/52xx/lite5200_sleep.S
+@@ -181,7 +181,7 @@ sram_code:
+   udelay: /* r11 - tb_ticks_per_usec, r12 - usecs, overwrites r13 */
+ 	mullw	r12, r12, r11
+ 	mftb	r13	/* start */
+-	addi	r12, r13, r12 /* end */
++	add	r12, r13, r12 /* end */
+     1:
+ 	mftb	r13	/* current */
+ 	cmp	cr0, r13, r12
+diff --git a/arch/powerpc/platforms/pseries/pci_dlpar.c b/arch/powerpc/platforms/pseries/pci_dlpar.c
+index f9ae17e8a0f46..a8f9140a24fa3 100644
+--- a/arch/powerpc/platforms/pseries/pci_dlpar.c
++++ b/arch/powerpc/platforms/pseries/pci_dlpar.c
+@@ -50,6 +50,7 @@ EXPORT_SYMBOL_GPL(init_phb_dynamic);
+ int remove_phb_dynamic(struct pci_controller *phb)
+ {
+ 	struct pci_bus *b = phb->bus;
++	struct pci_host_bridge *host_bridge = to_pci_host_bridge(b->bridge);
+ 	struct resource *res;
+ 	int rc, i;
+ 
+@@ -76,7 +77,8 @@ int remove_phb_dynamic(struct pci_controller *phb)
+ 	/* Remove the PCI bus and unregister the bridge device from sysfs */
+ 	phb->bus = NULL;
+ 	pci_remove_bus(b);
+-	device_unregister(b->bridge);
++	host_bridge->bus = NULL;
++	device_unregister(&host_bridge->dev);
+ 
+ 	/* Now release the IO resource */
+ 	if (res->flags & IORESOURCE_IO)
+diff --git a/arch/powerpc/platforms/pseries/vio.c b/arch/powerpc/platforms/pseries/vio.c
+index b2797cfe4e2b0..68276e05502b9 100644
+--- a/arch/powerpc/platforms/pseries/vio.c
++++ b/arch/powerpc/platforms/pseries/vio.c
+@@ -1286,6 +1286,10 @@ static int vio_bus_remove(struct device *dev)
+ int __vio_register_driver(struct vio_driver *viodrv, struct module *owner,
+ 			  const char *mod_name)
+ {
++	// vio_bus_type is only initialised for pseries
++	if (!machine_is(pseries))
++		return -ENODEV;
++
+ 	pr_debug("%s: driver %s registering\n", __func__, viodrv->name);
+ 
+ 	/* fill in 'struct driver' fields */
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index a80440af491a2..5b0f6b6278e3d 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -261,17 +261,20 @@ notrace void xmon_xive_do_dump(int cpu)
+ 	xmon_printf("\n");
+ }
+ 
++static struct irq_data *xive_get_irq_data(u32 hw_irq)
++{
++	unsigned int irq = irq_find_mapping(xive_irq_domain, hw_irq);
++
++	return irq ? irq_get_irq_data(irq) : NULL;
++}
++
+ int xmon_xive_get_irq_config(u32 hw_irq, struct irq_data *d)
+ {
+-	struct irq_chip *chip = irq_data_get_irq_chip(d);
+ 	int rc;
+ 	u32 target;
+ 	u8 prio;
+ 	u32 lirq;
+ 
+-	if (!is_xive_irq(chip))
+-		return -EINVAL;
+-
+ 	rc = xive_ops->get_irq_config(hw_irq, &target, &prio, &lirq);
+ 	if (rc) {
+ 		xmon_printf("IRQ 0x%08x : no config rc=%d\n", hw_irq, rc);
+@@ -281,6 +284,9 @@ int xmon_xive_get_irq_config(u32 hw_irq, struct irq_data *d)
+ 	xmon_printf("IRQ 0x%08x : target=0x%x prio=%02x lirq=0x%x ",
+ 		    hw_irq, target, prio, lirq);
+ 
++	if (!d)
++		d = xive_get_irq_data(hw_irq);
++
+ 	if (d) {
+ 		struct xive_irq_data *xd = irq_data_get_irq_handler_data(d);
+ 		u64 val = xive_esb_read(xd, XIVE_ESB_GET);
+@@ -1606,6 +1612,8 @@ static void xive_debug_show_irq(struct seq_file *m, u32 hw_irq, struct irq_data
+ 	u32 target;
+ 	u8 prio;
+ 	u32 lirq;
++	struct xive_irq_data *xd;
++	u64 val;
+ 
+ 	if (!is_xive_irq(chip))
+ 		return;
+@@ -1619,17 +1627,14 @@ static void xive_debug_show_irq(struct seq_file *m, u32 hw_irq, struct irq_data
+ 	seq_printf(m, "IRQ 0x%08x : target=0x%x prio=%02x lirq=0x%x ",
+ 		   hw_irq, target, prio, lirq);
+ 
+-	if (d) {
+-		struct xive_irq_data *xd = irq_data_get_irq_handler_data(d);
+-		u64 val = xive_esb_read(xd, XIVE_ESB_GET);
+-
+-		seq_printf(m, "flags=%c%c%c PQ=%c%c",
+-			   xd->flags & XIVE_IRQ_FLAG_STORE_EOI ? 'S' : ' ',
+-			   xd->flags & XIVE_IRQ_FLAG_LSI ? 'L' : ' ',
+-			   xd->flags & XIVE_IRQ_FLAG_H_INT_ESB ? 'H' : ' ',
+-			   val & XIVE_ESB_VAL_P ? 'P' : '-',
+-			   val & XIVE_ESB_VAL_Q ? 'Q' : '-');
+-	}
++	xd = irq_data_get_irq_handler_data(d);
++	val = xive_esb_read(xd, XIVE_ESB_GET);
++	seq_printf(m, "flags=%c%c%c PQ=%c%c",
++		   xd->flags & XIVE_IRQ_FLAG_STORE_EOI ? 'S' : ' ',
++		   xd->flags & XIVE_IRQ_FLAG_LSI ? 'L' : ' ',
++		   xd->flags & XIVE_IRQ_FLAG_H_INT_ESB ? 'H' : ' ',
++		   val & XIVE_ESB_VAL_P ? 'P' : '-',
++		   val & XIVE_ESB_VAL_Q ? 'Q' : '-');
+ 	seq_puts(m, "\n");
+ }
+ 
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 4d843e64496f4..e83ce909686c5 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -925,9 +925,9 @@ static int __init setup_hwcaps(void)
+ 	if (MACHINE_HAS_VX) {
+ 		elf_hwcap |= HWCAP_S390_VXRS;
+ 		if (test_facility(134))
+-			elf_hwcap |= HWCAP_S390_VXRS_EXT;
+-		if (test_facility(135))
+ 			elf_hwcap |= HWCAP_S390_VXRS_BCD;
++		if (test_facility(135))
++			elf_hwcap |= HWCAP_S390_VXRS_EXT;
+ 		if (test_facility(148))
+ 			elf_hwcap |= HWCAP_S390_VXRS_EXT2;
+ 		if (test_facility(152))
+diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
+index 6d6b57059493e..b9f85b2dc053f 100644
+--- a/arch/s390/kvm/gaccess.c
++++ b/arch/s390/kvm/gaccess.c
+@@ -976,7 +976,9 @@ int kvm_s390_check_low_addr_prot_real(struct kvm_vcpu *vcpu, unsigned long gra)
+  * kvm_s390_shadow_tables - walk the guest page table and create shadow tables
+  * @sg: pointer to the shadow guest address space structure
+  * @saddr: faulting address in the shadow gmap
+- * @pgt: pointer to the page table address result
++ * @pgt: pointer to the beginning of the page table for the given address if
++ *	 successful (return value 0), or to the first invalid DAT entry in
++ *	 case of exceptions (return value > 0)
+  * @fake: pgt references contiguous guest memory block, not a pgtable
+  */
+ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
+@@ -1034,6 +1036,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
+ 			rfte.val = ptr;
+ 			goto shadow_r2t;
+ 		}
++		*pgt = ptr + vaddr.rfx * 8;
+ 		rc = gmap_read_table(parent, ptr + vaddr.rfx * 8, &rfte.val);
+ 		if (rc)
+ 			return rc;
+@@ -1060,6 +1063,7 @@ shadow_r2t:
+ 			rste.val = ptr;
+ 			goto shadow_r3t;
+ 		}
++		*pgt = ptr + vaddr.rsx * 8;
+ 		rc = gmap_read_table(parent, ptr + vaddr.rsx * 8, &rste.val);
+ 		if (rc)
+ 			return rc;
+@@ -1087,6 +1091,7 @@ shadow_r3t:
+ 			rtte.val = ptr;
+ 			goto shadow_sgt;
+ 		}
++		*pgt = ptr + vaddr.rtx * 8;
+ 		rc = gmap_read_table(parent, ptr + vaddr.rtx * 8, &rtte.val);
+ 		if (rc)
+ 			return rc;
+@@ -1123,6 +1128,7 @@ shadow_sgt:
+ 			ste.val = ptr;
+ 			goto shadow_pgt;
+ 		}
++		*pgt = ptr + vaddr.sx * 8;
+ 		rc = gmap_read_table(parent, ptr + vaddr.sx * 8, &ste.val);
+ 		if (rc)
+ 			return rc;
+@@ -1157,6 +1163,8 @@ shadow_pgt:
+  * @vcpu: virtual cpu
+  * @sg: pointer to the shadow guest address space structure
+  * @saddr: faulting address in the shadow gmap
++ * @datptr: will contain the address of the faulting DAT table entry, or of
++ *	    the valid leaf, plus some flags
+  *
+  * Returns: - 0 if the shadow fault was successfully resolved
+  *	    - > 0 (pgm exception code) on exceptions while faulting
+@@ -1165,11 +1173,11 @@ shadow_pgt:
+  *	    - -ENOMEM if out of memory
+  */
+ int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *sg,
+-			  unsigned long saddr)
++			  unsigned long saddr, unsigned long *datptr)
+ {
+ 	union vaddress vaddr;
+ 	union page_table_entry pte;
+-	unsigned long pgt;
++	unsigned long pgt = 0;
+ 	int dat_protection, fake;
+ 	int rc;
+ 
+@@ -1191,8 +1199,20 @@ int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *sg,
+ 		pte.val = pgt + vaddr.px * PAGE_SIZE;
+ 		goto shadow_page;
+ 	}
+-	if (!rc)
+-		rc = gmap_read_table(sg->parent, pgt + vaddr.px * 8, &pte.val);
++
++	switch (rc) {
++	case PGM_SEGMENT_TRANSLATION:
++	case PGM_REGION_THIRD_TRANS:
++	case PGM_REGION_SECOND_TRANS:
++	case PGM_REGION_FIRST_TRANS:
++		pgt |= PEI_NOT_PTE;
++		break;
++	case 0:
++		pgt += vaddr.px * 8;
++		rc = gmap_read_table(sg->parent, pgt, &pte.val);
++	}
++	if (datptr)
++		*datptr = pgt | dat_protection * PEI_DAT_PROT;
+ 	if (!rc && pte.i)
+ 		rc = PGM_PAGE_TRANSLATION;
+ 	if (!rc && pte.z)
+diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
+index f4c51756c4623..7c72a5e3449f8 100644
+--- a/arch/s390/kvm/gaccess.h
++++ b/arch/s390/kvm/gaccess.h
+@@ -18,17 +18,14 @@
+ 
+ /**
+  * kvm_s390_real_to_abs - convert guest real address to guest absolute address
+- * @vcpu - guest virtual cpu
++ * @prefix - guest prefix
+  * @gra - guest real address
+  *
+  * Returns the guest absolute address that corresponds to the passed guest real
+- * address @gra of a virtual guest cpu by applying its prefix.
++ * address @gra of by applying the given prefix.
+  */
+-static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
+-						 unsigned long gra)
++static inline unsigned long _kvm_s390_real_to_abs(u32 prefix, unsigned long gra)
+ {
+-	unsigned long prefix  = kvm_s390_get_prefix(vcpu);
+-
+ 	if (gra < 2 * PAGE_SIZE)
+ 		gra += prefix;
+ 	else if (gra >= prefix && gra < prefix + 2 * PAGE_SIZE)
+@@ -36,6 +33,43 @@ static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
+ 	return gra;
+ }
+ 
++/**
++ * kvm_s390_real_to_abs - convert guest real address to guest absolute address
++ * @vcpu - guest virtual cpu
++ * @gra - guest real address
++ *
++ * Returns the guest absolute address that corresponds to the passed guest real
++ * address @gra of a virtual guest cpu by applying its prefix.
++ */
++static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
++						 unsigned long gra)
++{
++	return _kvm_s390_real_to_abs(kvm_s390_get_prefix(vcpu), gra);
++}
++
++/**
++ * _kvm_s390_logical_to_effective - convert guest logical to effective address
++ * @psw: psw of the guest
++ * @ga: guest logical address
++ *
++ * Convert a guest logical address to an effective address by applying the
++ * rules of the addressing mode defined by bits 31 and 32 of the given PSW
++ * (extendended/basic addressing mode).
++ *
++ * Depending on the addressing mode, the upper 40 bits (24 bit addressing
++ * mode), 33 bits (31 bit addressing mode) or no bits (64 bit addressing
++ * mode) of @ga will be zeroed and the remaining bits will be returned.
++ */
++static inline unsigned long _kvm_s390_logical_to_effective(psw_t *psw,
++							   unsigned long ga)
++{
++	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_64BIT)
++		return ga;
++	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_31BIT)
++		return ga & ((1UL << 31) - 1);
++	return ga & ((1UL << 24) - 1);
++}
++
+ /**
+  * kvm_s390_logical_to_effective - convert guest logical to effective address
+  * @vcpu: guest virtual cpu
+@@ -52,13 +86,7 @@ static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
+ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
+ 							  unsigned long ga)
+ {
+-	psw_t *psw = &vcpu->arch.sie_block->gpsw;
+-
+-	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_64BIT)
+-		return ga;
+-	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_31BIT)
+-		return ga & ((1UL << 31) - 1);
+-	return ga & ((1UL << 24) - 1);
++	return _kvm_s390_logical_to_effective(&vcpu->arch.sie_block->gpsw, ga);
+ }
+ 
+ /*
+@@ -359,7 +387,11 @@ void ipte_unlock(struct kvm_vcpu *vcpu);
+ int ipte_lock_held(struct kvm_vcpu *vcpu);
+ int kvm_s390_check_low_addr_prot_real(struct kvm_vcpu *vcpu, unsigned long gra);
+ 
++/* MVPG PEI indication bits */
++#define PEI_DAT_PROT 2
++#define PEI_NOT_PTE 4
++
+ int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *shadow,
+-			  unsigned long saddr);
++			  unsigned long saddr, unsigned long *datptr);
+ 
+ #endif /* __KVM_S390_GACCESS_H */
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 425d3d75320bf..20afffd6b9820 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -4308,16 +4308,16 @@ static void store_regs_fmt2(struct kvm_vcpu *vcpu)
+ 	kvm_run->s.regs.bpbc = (vcpu->arch.sie_block->fpf & FPF_BPBC) == FPF_BPBC;
+ 	kvm_run->s.regs.diag318 = vcpu->arch.diag318_info.val;
+ 	if (MACHINE_HAS_GS) {
++		preempt_disable();
+ 		__ctl_set_bit(2, 4);
+ 		if (vcpu->arch.gs_enabled)
+ 			save_gs_cb(current->thread.gs_cb);
+-		preempt_disable();
+ 		current->thread.gs_cb = vcpu->arch.host_gscb;
+ 		restore_gs_cb(vcpu->arch.host_gscb);
+-		preempt_enable();
+ 		if (!vcpu->arch.host_gscb)
+ 			__ctl_clear_bit(2, 4);
+ 		vcpu->arch.host_gscb = NULL;
++		preempt_enable();
+ 	}
+ 	/* SIE will save etoken directly into SDNX and therefore kvm_run */
+ }
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index 4f3cbf6003a9d..3fbf7081c000c 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -416,11 +416,6 @@ static void unshadow_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 		memcpy((void *)((u64)scb_o + 0xc0),
+ 		       (void *)((u64)scb_s + 0xc0), 0xf0 - 0xc0);
+ 		break;
+-	case ICPT_PARTEXEC:
+-		/* MVPG only */
+-		memcpy((void *)((u64)scb_o + 0xc0),
+-		       (void *)((u64)scb_s + 0xc0), 0xd0 - 0xc0);
+-		break;
+ 	}
+ 
+ 	if (scb_s->ihcpu != 0xffffU)
+@@ -619,10 +614,10 @@ static int map_prefix(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 	/* with mso/msl, the prefix lies at offset *mso* */
+ 	prefix += scb_s->mso;
+ 
+-	rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, prefix);
++	rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, prefix, NULL);
+ 	if (!rc && (scb_s->ecb & ECB_TE))
+ 		rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap,
+-					   prefix + PAGE_SIZE);
++					   prefix + PAGE_SIZE, NULL);
+ 	/*
+ 	 * We don't have to mprotect, we will be called for all unshadows.
+ 	 * SIE will detect if protection applies and trigger a validity.
+@@ -913,7 +908,7 @@ static int handle_fault(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 				    current->thread.gmap_addr, 1);
+ 
+ 	rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap,
+-				   current->thread.gmap_addr);
++				   current->thread.gmap_addr, NULL);
+ 	if (rc > 0) {
+ 		rc = inject_fault(vcpu, rc,
+ 				  current->thread.gmap_addr,
+@@ -935,7 +930,7 @@ static void handle_last_fault(struct kvm_vcpu *vcpu,
+ {
+ 	if (vsie_page->fault_addr)
+ 		kvm_s390_shadow_fault(vcpu, vsie_page->gmap,
+-				      vsie_page->fault_addr);
++				      vsie_page->fault_addr, NULL);
+ 	vsie_page->fault_addr = 0;
+ }
+ 
+@@ -982,6 +977,98 @@ static int handle_stfle(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 	return 0;
+ }
+ 
++/*
++ * Get a register for a nested guest.
++ * @vcpu the vcpu of the guest
++ * @vsie_page the vsie_page for the nested guest
++ * @reg the register number, the upper 4 bits are ignored.
++ * returns: the value of the register.
++ */
++static u64 vsie_get_register(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page, u8 reg)
++{
++	/* no need to validate the parameter and/or perform error handling */
++	reg &= 0xf;
++	switch (reg) {
++	case 15:
++		return vsie_page->scb_s.gg15;
++	case 14:
++		return vsie_page->scb_s.gg14;
++	default:
++		return vcpu->run->s.regs.gprs[reg];
++	}
++}
++
++static int vsie_handle_mvpg(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
++{
++	struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
++	unsigned long pei_dest, pei_src, src, dest, mask, prefix;
++	u64 *pei_block = &vsie_page->scb_o->mcic;
++	int edat, rc_dest, rc_src;
++	union ctlreg0 cr0;
++
++	cr0.val = vcpu->arch.sie_block->gcr[0];
++	edat = cr0.edat && test_kvm_facility(vcpu->kvm, 8);
++	mask = _kvm_s390_logical_to_effective(&scb_s->gpsw, PAGE_MASK);
++	prefix = scb_s->prefix << GUEST_PREFIX_SHIFT;
++
++	dest = vsie_get_register(vcpu, vsie_page, scb_s->ipb >> 20) & mask;
++	dest = _kvm_s390_real_to_abs(prefix, dest) + scb_s->mso;
++	src = vsie_get_register(vcpu, vsie_page, scb_s->ipb >> 16) & mask;
++	src = _kvm_s390_real_to_abs(prefix, src) + scb_s->mso;
++
++	rc_dest = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, dest, &pei_dest);
++	rc_src = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, src, &pei_src);
++	/*
++	 * Either everything went well, or something non-critical went wrong
++	 * e.g. because of a race. In either case, simply retry.
++	 */
++	if (rc_dest == -EAGAIN || rc_src == -EAGAIN || (!rc_dest && !rc_src)) {
++		retry_vsie_icpt(vsie_page);
++		return -EAGAIN;
++	}
++	/* Something more serious went wrong, propagate the error */
++	if (rc_dest < 0)
++		return rc_dest;
++	if (rc_src < 0)
++		return rc_src;
++
++	/* The only possible suppressing exception: just deliver it */
++	if (rc_dest == PGM_TRANSLATION_SPEC || rc_src == PGM_TRANSLATION_SPEC) {
++		clear_vsie_icpt(vsie_page);
++		rc_dest = kvm_s390_inject_program_int(vcpu, PGM_TRANSLATION_SPEC);
++		WARN_ON_ONCE(rc_dest);
++		return 1;
++	}
++
++	/*
++	 * Forward the PEI intercept to the guest if it was a page fault, or
++	 * also for segment and region table faults if EDAT applies.
++	 */
++	if (edat) {
++		rc_dest = rc_dest == PGM_ASCE_TYPE ? rc_dest : 0;
++		rc_src = rc_src == PGM_ASCE_TYPE ? rc_src : 0;
++	} else {
++		rc_dest = rc_dest != PGM_PAGE_TRANSLATION ? rc_dest : 0;
++		rc_src = rc_src != PGM_PAGE_TRANSLATION ? rc_src : 0;
++	}
++	if (!rc_dest && !rc_src) {
++		pei_block[0] = pei_dest;
++		pei_block[1] = pei_src;
++		return 1;
++	}
++
++	retry_vsie_icpt(vsie_page);
++
++	/*
++	 * The host has edat, and the guest does not, or it was an ASCE type
++	 * exception. The host needs to inject the appropriate DAT interrupts
++	 * into the guest.
++	 */
++	if (rc_dest)
++		return inject_fault(vcpu, rc_dest, dest, 1);
++	return inject_fault(vcpu, rc_src, src, 0);
++}
++
+ /*
+  * Run the vsie on a shadow scb and a shadow gmap, without any further
+  * sanity checks, handling SIE faults.
+@@ -1068,6 +1155,10 @@ static int do_vsie_run(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 		if ((scb_s->ipa & 0xf000) != 0xf000)
+ 			scb_s->ipa += 0x1000;
+ 		break;
++	case ICPT_PARTEXEC:
++		if (scb_s->ipa == 0xb254)
++			rc = vsie_handle_mvpg(vcpu, vsie_page);
++		break;
+ 	}
+ 	return rc;
+ }
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 183ee73d9019f..f3c8a8110f60c 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -562,6 +562,7 @@ config X86_UV
+ 	depends on X86_EXTENDED_PLATFORM
+ 	depends on NUMA
+ 	depends on EFI
++	depends on KEXEC_CORE
+ 	depends on X86_X2APIC
+ 	depends on PCI
+ 	help
+diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c
+index c44aba290fbb8..2bc312ffee52b 100644
+--- a/arch/x86/crypto/poly1305_glue.c
++++ b/arch/x86/crypto/poly1305_glue.c
+@@ -16,7 +16,7 @@
+ #include <asm/simd.h>
+ 
+ asmlinkage void poly1305_init_x86_64(void *ctx,
+-				     const u8 key[POLY1305_KEY_SIZE]);
++				     const u8 key[POLY1305_BLOCK_SIZE]);
+ asmlinkage void poly1305_blocks_x86_64(void *ctx, const u8 *inp,
+ 				       const size_t len, const u32 padbit);
+ asmlinkage void poly1305_emit_x86_64(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
+@@ -81,7 +81,7 @@ static void convert_to_base2_64(void *ctx)
+ 	state->is_base2_26 = 0;
+ }
+ 
+-static void poly1305_simd_init(void *ctx, const u8 key[POLY1305_KEY_SIZE])
++static void poly1305_simd_init(void *ctx, const u8 key[POLY1305_BLOCK_SIZE])
+ {
+ 	poly1305_init_x86_64(ctx, key);
+ }
+@@ -129,7 +129,7 @@ static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
+ 		poly1305_emit_avx(ctx, mac, nonce);
+ }
+ 
+-void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
++void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 key[POLY1305_KEY_SIZE])
+ {
+ 	poly1305_simd_init(&dctx->h, key);
+ 	dctx->s[0] = get_unaligned_le32(&key[16]);
+diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
+index be50ef8572cce..6a98a76516214 100644
+--- a/arch/x86/events/amd/iommu.c
++++ b/arch/x86/events/amd/iommu.c
+@@ -81,12 +81,12 @@ static struct attribute_group amd_iommu_events_group = {
+ };
+ 
+ struct amd_iommu_event_desc {
+-	struct kobj_attribute attr;
++	struct device_attribute attr;
+ 	const char *event;
+ };
+ 
+-static ssize_t _iommu_event_show(struct kobject *kobj,
+-				struct kobj_attribute *attr, char *buf)
++static ssize_t _iommu_event_show(struct device *dev,
++				struct device_attribute *attr, char *buf)
+ {
+ 	struct amd_iommu_event_desc *event =
+ 		container_of(attr, struct amd_iommu_event_desc, attr);
+diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
+index 7f014d450bc28..582c0ffb5e983 100644
+--- a/arch/x86/events/amd/uncore.c
++++ b/arch/x86/events/amd/uncore.c
+@@ -275,14 +275,14 @@ static struct attribute_group amd_uncore_attr_group = {
+ };
+ 
+ #define DEFINE_UNCORE_FORMAT_ATTR(_var, _name, _format)			\
+-static ssize_t __uncore_##_var##_show(struct kobject *kobj,		\
+-				struct kobj_attribute *attr,		\
++static ssize_t __uncore_##_var##_show(struct device *dev,		\
++				struct device_attribute *attr,		\
+ 				char *page)				\
+ {									\
+ 	BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE);			\
+ 	return sprintf(page, _format "\n");				\
+ }									\
+-static struct kobj_attribute format_attr_##_var =			\
++static struct device_attribute format_attr_##_var =			\
+ 	__ATTR(_name, 0444, __uncore_##_var##_show, NULL)
+ 
+ DEFINE_UNCORE_FORMAT_ATTR(event12,	event,		"config:0-7,32-35");
+diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
+index 235f5cde06fc3..40f466de89242 100644
+--- a/arch/x86/kernel/apic/x2apic_uv_x.c
++++ b/arch/x86/kernel/apic/x2apic_uv_x.c
+@@ -1652,6 +1652,9 @@ static __init int uv_system_init_hubless(void)
+ 	if (rc < 0)
+ 		return rc;
+ 
++	/* Set section block size for current node memory */
++	set_block_size();
++
+ 	/* Create user access node */
+ 	if (rc >= 0)
+ 		uv_setup_proc_files(1);
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index ec6f0415bc6d1..bbbd248fe9132 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -629,16 +629,16 @@ static ssize_t reload_store(struct device *dev,
+ 	if (val != 1)
+ 		return size;
+ 
+-	tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev, true);
+-	if (tmp_ret != UCODE_NEW)
+-		return size;
+-
+ 	get_online_cpus();
+ 
+ 	ret = check_online_cpus();
+ 	if (ret)
+ 		goto put;
+ 
++	tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev, true);
++	if (tmp_ret != UCODE_NEW)
++		goto put;
++
+ 	mutex_lock(&microcode_mutex);
+ 	ret = microcode_reload_late();
+ 	mutex_unlock(&microcode_mutex);
+diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
+index 22aad412f965e..629c4994f1654 100644
+--- a/arch/x86/kernel/e820.c
++++ b/arch/x86/kernel/e820.c
+@@ -31,8 +31,8 @@
+  *       - inform the user about the firmware's notion of memory layout
+  *         via /sys/firmware/memmap
+  *
+- *       - the hibernation code uses it to generate a kernel-independent MD5
+- *         fingerprint of the physical memory layout of a system.
++ *       - the hibernation code uses it to generate a kernel-independent CRC32
++ *         checksum of the physical memory layout of a system.
+  *
+  * - 'e820_table_kexec': a slightly modified (by the kernel) firmware version
+  *   passed to us by the bootloader - the major difference between
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 39f7d8c3c064b..535da74c124e4 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -159,6 +159,8 @@ NOKPROBE_SYMBOL(skip_prefixes);
+ int can_boost(struct insn *insn, void *addr)
+ {
+ 	kprobe_opcode_t opcode;
++	insn_byte_t prefix;
++	int i;
+ 
+ 	if (search_exception_tables((unsigned long)addr))
+ 		return 0;	/* Page fault may occur on this address. */
+@@ -171,9 +173,14 @@ int can_boost(struct insn *insn, void *addr)
+ 	if (insn->opcode.nbytes != 1)
+ 		return 0;
+ 
+-	/* Can't boost Address-size override prefix */
+-	if (unlikely(inat_is_address_size_prefix(insn->attr)))
+-		return 0;
++	for_each_insn_prefix(insn, i, prefix) {
++		insn_attr_t attr;
++
++		attr = inat_get_opcode_attribute(prefix);
++		/* Can't boost Address-size override prefix and CS override prefix */
++		if (prefix == 0x2e || inat_is_address_size_prefix(attr))
++			return 0;
++	}
+ 
+ 	opcode = insn->opcode.bytes[0];
+ 
+@@ -198,8 +205,8 @@ int can_boost(struct insn *insn, void *addr)
+ 		/* clear and set flags are boostable */
+ 		return (opcode == 0xf5 || (0xf7 < opcode && opcode < 0xfe));
+ 	default:
+-		/* CS override prefix and call are not boostable */
+-		return (opcode != 0x2e && opcode != 0x9a);
++		/* call is not boostable */
++		return opcode != 0x9a;
+ 	}
+ }
+ 
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index b95d1c533fef5..582387fc939f4 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -452,29 +452,52 @@ static bool match_smt(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+ 	return false;
+ }
+ 
++static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
++{
++	if (c->phys_proc_id == o->phys_proc_id &&
++	    c->cpu_die_id == o->cpu_die_id)
++		return true;
++	return false;
++}
++
+ /*
+- * Define snc_cpu[] for SNC (Sub-NUMA Cluster) CPUs.
++ * Unlike the other levels, we do not enforce keeping a
++ * multicore group inside a NUMA node.  If this happens, we will
++ * discard the MC level of the topology later.
++ */
++static bool match_pkg(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
++{
++	if (c->phys_proc_id == o->phys_proc_id)
++		return true;
++	return false;
++}
++
++/*
++ * Define intel_cod_cpu[] for Intel COD (Cluster-on-Die) CPUs.
+  *
+- * These are Intel CPUs that enumerate an LLC that is shared by
+- * multiple NUMA nodes. The LLC on these systems is shared for
+- * off-package data access but private to the NUMA node (half
+- * of the package) for on-package access.
++ * Any Intel CPU that has multiple nodes per package and does not
++ * match intel_cod_cpu[] has the SNC (Sub-NUMA Cluster) topology.
+  *
+- * CPUID (the source of the information about the LLC) can only
+- * enumerate the cache as being shared *or* unshared, but not
+- * this particular configuration. The CPU in this case enumerates
+- * the cache to be shared across the entire package (spanning both
+- * NUMA nodes).
++ * When in SNC mode, these CPUs enumerate an LLC that is shared
++ * by multiple NUMA nodes. The LLC is shared for off-package data
++ * access but private to the NUMA node (half of the package) for
++ * on-package access. CPUID (the source of the information about
++ * the LLC) can only enumerate the cache as shared or unshared,
++ * but not this particular configuration.
+  */
+ 
+-static const struct x86_cpu_id snc_cpu[] = {
+-	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X, NULL),
++static const struct x86_cpu_id intel_cod_cpu[] = {
++	X86_MATCH_INTEL_FAM6_MODEL(HASWELL_X, 0),	/* COD */
++	X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_X, 0),	/* COD */
++	X86_MATCH_INTEL_FAM6_MODEL(ANY, 1),		/* SNC */
+ 	{}
+ };
+ 
+ static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+ {
++	const struct x86_cpu_id *id = x86_match_cpu(intel_cod_cpu);
+ 	int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
++	bool intel_snc = id && id->driver_data;
+ 
+ 	/* Do not match if we do not have a valid APICID for cpu: */
+ 	if (per_cpu(cpu_llc_id, cpu1) == BAD_APICID)
+@@ -489,32 +512,12 @@ static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+ 	 * means 'c' does not share the LLC of 'o'. This will be
+ 	 * reflected to userspace.
+ 	 */
+-	if (!topology_same_node(c, o) && x86_match_cpu(snc_cpu))
++	if (match_pkg(c, o) && !topology_same_node(c, o) && intel_snc)
+ 		return false;
+ 
+ 	return topology_sane(c, o, "llc");
+ }
+ 
+-/*
+- * Unlike the other levels, we do not enforce keeping a
+- * multicore group inside a NUMA node.  If this happens, we will
+- * discard the MC level of the topology later.
+- */
+-static bool match_pkg(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+-{
+-	if (c->phys_proc_id == o->phys_proc_id)
+-		return true;
+-	return false;
+-}
+-
+-static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+-{
+-	if ((c->phys_proc_id == o->phys_proc_id) &&
+-		(c->cpu_die_id == o->cpu_die_id))
+-		return true;
+-	return false;
+-}
+-
+ 
+ #if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_MC)
+ static inline int x86_sched_itmt_flags(void)
+@@ -586,14 +589,23 @@ void set_cpu_sibling_map(int cpu)
+ 	for_each_cpu(i, cpu_sibling_setup_mask) {
+ 		o = &cpu_data(i);
+ 
++		if (match_pkg(c, o) && !topology_same_node(c, o))
++			x86_has_numa_in_package = true;
++
+ 		if ((i == cpu) || (has_smt && match_smt(c, o)))
+ 			link_mask(topology_sibling_cpumask, cpu, i);
+ 
+ 		if ((i == cpu) || (has_mp && match_llc(c, o)))
+ 			link_mask(cpu_llc_shared_mask, cpu, i);
+ 
++		if ((i == cpu) || (has_mp && match_die(c, o)))
++			link_mask(topology_die_cpumask, cpu, i);
+ 	}
+ 
++	threads = cpumask_weight(topology_sibling_cpumask(cpu));
++	if (threads > __max_smt_threads)
++		__max_smt_threads = threads;
++
+ 	/*
+ 	 * This needs a separate iteration over the cpus because we rely on all
+ 	 * topology_sibling_cpumask links to be set-up.
+@@ -607,8 +619,7 @@ void set_cpu_sibling_map(int cpu)
+ 			/*
+ 			 *  Does this new cpu bringup a new core?
+ 			 */
+-			if (cpumask_weight(
+-			    topology_sibling_cpumask(cpu)) == 1) {
++			if (threads == 1) {
+ 				/*
+ 				 * for each core in package, increment
+ 				 * the booted_cores for this new cpu
+@@ -625,16 +636,7 @@ void set_cpu_sibling_map(int cpu)
+ 			} else if (i != cpu && !c->booted_cores)
+ 				c->booted_cores = cpu_data(i).booted_cores;
+ 		}
+-		if (match_pkg(c, o) && !topology_same_node(c, o))
+-			x86_has_numa_in_package = true;
+-
+-		if ((i == cpu) || (has_mp && match_die(c, o)))
+-			link_mask(topology_die_cpumask, cpu, i);
+ 	}
+-
+-	threads = cpumask_weight(topology_sibling_cpumask(cpu));
+-	if (threads > __max_smt_threads)
+-		__max_smt_threads = threads;
+ }
+ 
+ /* maps the cpu to the sched domain representing multi-core */
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 1453b9b794425..d3f2b63167451 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -4220,7 +4220,7 @@ static bool valid_cr(int nr)
+ 	}
+ }
+ 
+-static int check_cr_read(struct x86_emulate_ctxt *ctxt)
++static int check_cr_access(struct x86_emulate_ctxt *ctxt)
+ {
+ 	if (!valid_cr(ctxt->modrm_reg))
+ 		return emulate_ud(ctxt);
+@@ -4228,80 +4228,6 @@ static int check_cr_read(struct x86_emulate_ctxt *ctxt)
+ 	return X86EMUL_CONTINUE;
+ }
+ 
+-static int check_cr_write(struct x86_emulate_ctxt *ctxt)
+-{
+-	u64 new_val = ctxt->src.val64;
+-	int cr = ctxt->modrm_reg;
+-	u64 efer = 0;
+-
+-	static u64 cr_reserved_bits[] = {
+-		0xffffffff00000000ULL,
+-		0, 0, 0, /* CR3 checked later */
+-		CR4_RESERVED_BITS,
+-		0, 0, 0,
+-		CR8_RESERVED_BITS,
+-	};
+-
+-	if (!valid_cr(cr))
+-		return emulate_ud(ctxt);
+-
+-	if (new_val & cr_reserved_bits[cr])
+-		return emulate_gp(ctxt, 0);
+-
+-	switch (cr) {
+-	case 0: {
+-		u64 cr4;
+-		if (((new_val & X86_CR0_PG) && !(new_val & X86_CR0_PE)) ||
+-		    ((new_val & X86_CR0_NW) && !(new_val & X86_CR0_CD)))
+-			return emulate_gp(ctxt, 0);
+-
+-		cr4 = ctxt->ops->get_cr(ctxt, 4);
+-		ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
+-
+-		if ((new_val & X86_CR0_PG) && (efer & EFER_LME) &&
+-		    !(cr4 & X86_CR4_PAE))
+-			return emulate_gp(ctxt, 0);
+-
+-		break;
+-		}
+-	case 3: {
+-		u64 rsvd = 0;
+-
+-		ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
+-		if (efer & EFER_LMA) {
+-			u64 maxphyaddr;
+-			u32 eax, ebx, ecx, edx;
+-
+-			eax = 0x80000008;
+-			ecx = 0;
+-			if (ctxt->ops->get_cpuid(ctxt, &eax, &ebx, &ecx,
+-						 &edx, true))
+-				maxphyaddr = eax & 0xff;
+-			else
+-				maxphyaddr = 36;
+-			rsvd = rsvd_bits(maxphyaddr, 63);
+-			if (ctxt->ops->get_cr(ctxt, 4) & X86_CR4_PCIDE)
+-				rsvd &= ~X86_CR3_PCID_NOFLUSH;
+-		}
+-
+-		if (new_val & rsvd)
+-			return emulate_gp(ctxt, 0);
+-
+-		break;
+-		}
+-	case 4: {
+-		ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
+-
+-		if ((efer & EFER_LMA) && !(new_val & X86_CR4_PAE))
+-			return emulate_gp(ctxt, 0);
+-
+-		break;
+-		}
+-	}
+-
+-	return X86EMUL_CONTINUE;
+-}
+-
+ static int check_dr7_gd(struct x86_emulate_ctxt *ctxt)
+ {
+ 	unsigned long dr7;
+@@ -4841,10 +4767,10 @@ static const struct opcode twobyte_table[256] = {
+ 	D(ImplicitOps | ModRM | SrcMem | NoAccess), /* 8 * reserved NOP */
+ 	D(ImplicitOps | ModRM | SrcMem | NoAccess), /* NOP + 7 * reserved NOP */
+ 	/* 0x20 - 0x2F */
+-	DIP(ModRM | DstMem | Priv | Op3264 | NoMod, cr_read, check_cr_read),
++	DIP(ModRM | DstMem | Priv | Op3264 | NoMod, cr_read, check_cr_access),
+ 	DIP(ModRM | DstMem | Priv | Op3264 | NoMod, dr_read, check_dr_read),
+ 	IIP(ModRM | SrcMem | Priv | Op3264 | NoMod, em_cr_write, cr_write,
+-						check_cr_write),
++						check_cr_access),
+ 	IIP(ModRM | SrcMem | Priv | Op3264 | NoMod, em_dr_write, dr_write,
+ 						check_dr_write),
+ 	N, N, N, N,
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 15717a28b212e..2f2576fd343e6 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -3195,14 +3195,14 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+ 		if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL &&
+ 		    (mmu->root_level >= PT64_ROOT_4LEVEL || mmu->direct_map)) {
+ 			mmu_free_root_page(kvm, &mmu->root_hpa, &invalid_list);
+-		} else {
++		} else if (mmu->pae_root) {
+ 			for (i = 0; i < 4; ++i)
+ 				if (mmu->pae_root[i] != 0)
+ 					mmu_free_root_page(kvm,
+ 							   &mmu->pae_root[i],
+ 							   &invalid_list);
+-			mmu->root_hpa = INVALID_PAGE;
+ 		}
++		mmu->root_hpa = INVALID_PAGE;
+ 		mmu->root_pgd = 0;
+ 	}
+ 
+@@ -3314,9 +3314,23 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
+ 	 * the shadow page table may be a PAE or a long mode page table.
+ 	 */
+ 	pm_mask = PT_PRESENT_MASK;
+-	if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL)
++	if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
+ 		pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;
+ 
++		/*
++		 * Allocate the page for the PDPTEs when shadowing 32-bit NPT
++		 * with 64-bit only when needed.  Unlike 32-bit NPT, it doesn't
++		 * need to be in low mem.  See also lm_root below.
++		 */
++		if (!vcpu->arch.mmu->pae_root) {
++			WARN_ON_ONCE(!tdp_enabled);
++
++			vcpu->arch.mmu->pae_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
++			if (!vcpu->arch.mmu->pae_root)
++				return -ENOMEM;
++		}
++	}
++
+ 	for (i = 0; i < 4; ++i) {
+ 		MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu->pae_root[i]));
+ 		if (vcpu->arch.mmu->root_level == PT32E_ROOT_LEVEL) {
+@@ -3339,21 +3353,19 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
+ 	vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root);
+ 
+ 	/*
+-	 * If we shadow a 32 bit page table with a long mode page
+-	 * table we enter this path.
++	 * When shadowing 32-bit or PAE NPT with 64-bit NPT, the PML4 and PDP
++	 * tables are allocated and initialized at MMU creation as there is no
++	 * equivalent level in the guest's NPT to shadow.  Allocate the tables
++	 * on demand, as running a 32-bit L1 VMM is very rare.  The PDP is
++	 * handled above (to share logic with PAE), deal with the PML4 here.
+ 	 */
+ 	if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
+ 		if (vcpu->arch.mmu->lm_root == NULL) {
+-			/*
+-			 * The additional page necessary for this is only
+-			 * allocated on demand.
+-			 */
+-
+ 			u64 *lm_root;
+ 
+ 			lm_root = (void*)get_zeroed_page(GFP_KERNEL_ACCOUNT);
+-			if (lm_root == NULL)
+-				return 1;
++			if (!lm_root)
++				return -ENOMEM;
+ 
+ 			lm_root[0] = __pa(vcpu->arch.mmu->pae_root) | pm_mask;
+ 
+@@ -3651,6 +3663,14 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
+ 	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+ 	bool async;
+ 
++	/*
++	 * Retry the page fault if the gfn hit a memslot that is being deleted
++	 * or moved.  This ensures any existing SPTEs for the old memslot will
++	 * be zapped before KVM inserts a new MMIO SPTE for the gfn.
++	 */
++	if (slot && (slot->flags & KVM_MEMSLOT_INVALID))
++		return true;
++
+ 	/* Don't expose private memslots to L2. */
+ 	if (is_guest_mode(vcpu) && !kvm_is_visible_memslot(slot)) {
+ 		*pfn = KVM_PFN_NOSLOT;
+@@ -4605,12 +4625,17 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer,
+ 	struct kvm_mmu *context = &vcpu->arch.guest_mmu;
+ 	union kvm_mmu_role new_role = kvm_calc_shadow_npt_root_page_role(vcpu);
+ 
+-	context->shadow_root_level = new_role.base.level;
+-
+ 	__kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base, false, false);
+ 
+-	if (new_role.as_u64 != context->mmu_role.as_u64)
++	if (new_role.as_u64 != context->mmu_role.as_u64) {
+ 		shadow_mmu_init_context(vcpu, context, cr0, cr4, efer, new_role);
++
++		/*
++		 * Override the level set by the common init helper, nested TDP
++		 * always uses the host's TDP configuration.
++		 */
++		context->shadow_root_level = new_role.base.level;
++	}
+ }
+ EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu);
+ 
+@@ -5297,9 +5322,11 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
+ 	 * while the PDP table is a per-vCPU construct that's allocated at MMU
+ 	 * creation.  When emulating 32-bit mode, cr3 is only 32 bits even on
+ 	 * x86_64.  Therefore we need to allocate the PDP table in the first
+-	 * 4GB of memory, which happens to fit the DMA32 zone.  Except for
+-	 * SVM's 32-bit NPT support, TDP paging doesn't use PAE paging and can
+-	 * skip allocating the PDP table.
++	 * 4GB of memory, which happens to fit the DMA32 zone.  TDP paging
++	 * generally doesn't use PAE paging and can skip allocating the PDP
++	 * table.  The main exception, handled here, is SVM's 32-bit NPT.  The
++	 * other exception is for shadowing L1's 32-bit or PAE NPT on 64-bit
++	 * KVM; that horror is handled on-demand by mmu_alloc_shadow_roots().
+ 	 */
+ 	if (tdp_enabled && kvm_mmu_get_tdp_level(vcpu) > PT32E_ROOT_LEVEL)
+ 		return 0;
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index e3e04988fdabe..16b10b9436dc5 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -168,6 +168,9 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ 	int asid, ret;
+ 
++	if (kvm->created_vcpus)
++		return -EINVAL;
++
+ 	ret = -EBUSY;
+ 	if (unlikely(sev->active))
+ 		return ret;
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 642f0da31ac4f..ca7a717477e70 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1805,7 +1805,7 @@ static void svm_set_dr7(struct kvm_vcpu *vcpu, unsigned long value)
+ 
+ static int pf_interception(struct vcpu_svm *svm)
+ {
+-	u64 fault_address = __sme_clr(svm->vmcb->control.exit_info_2);
++	u64 fault_address = svm->vmcb->control.exit_info_2;
+ 	u64 error_code = svm->vmcb->control.exit_info_1;
+ 
+ 	return kvm_handle_page_fault(&svm->vcpu, error_code, fault_address,
+@@ -2519,6 +2519,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_TSC_AUX:
+ 		if (!boot_cpu_has(X86_FEATURE_RDTSCP))
+ 			return 1;
++		if (!msr_info->host_initiated &&
++		    !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
++			return 1;
+ 		msr_info->data = svm->tsc_aux;
+ 		break;
+ 	/*
+@@ -2713,6 +2716,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ 		if (!boot_cpu_has(X86_FEATURE_RDTSCP))
+ 			return 1;
+ 
++		if (!msr->host_initiated &&
++		    !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
++			return 1;
++
+ 		/*
+ 		 * This is rare, so we update the MSR here instead of using
+ 		 * direct_access_msrs.  Doing that would require a rdmsr in
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 15532feb19f10..e8882715735ae 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -618,6 +618,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
+ 	}
+ 
+ 	/* KVM unconditionally exposes the FS/GS base MSRs to L1. */
++#ifdef CONFIG_X86_64
+ 	nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
+ 					     MSR_FS_BASE, MSR_TYPE_RW);
+ 
+@@ -626,6 +627,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
+ 
+ 	nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
+ 					     MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
++#endif
+ 
+ 	/*
+ 	 * Checking the L0->L1 bitmap is trying to verify two things:
+@@ -4613,9 +4615,9 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
+ 	else if (addr_size == 0)
+ 		off = (gva_t)sign_extend64(off, 15);
+ 	if (base_is_valid)
+-		off += kvm_register_read(vcpu, base_reg);
++		off += kvm_register_readl(vcpu, base_reg);
+ 	if (index_is_valid)
+-		off += kvm_register_read(vcpu, index_reg) << scaling;
++		off += kvm_register_readl(vcpu, index_reg) << scaling;
+ 	vmx_get_segment(vcpu, &s, seg_reg);
+ 
+ 	/*
+@@ -5491,16 +5493,11 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
+ 		if (!nested_vmx_check_eptp(vcpu, new_eptp))
+ 			return 1;
+ 
+-		kvm_mmu_unload(vcpu);
+ 		mmu->ept_ad = accessed_dirty;
+ 		mmu->mmu_role.base.ad_disabled = !accessed_dirty;
+ 		vmcs12->ept_pointer = new_eptp;
+-		/*
+-		 * TODO: Check what's the correct approach in case
+-		 * mmu reload fails. Currently, we just let the next
+-		 * reload potentially fail
+-		 */
+-		kvm_mmu_reload(vcpu);
++
++		kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
+ 	}
+ 
+ 	return 0;
+@@ -5729,7 +5726,7 @@ static bool nested_vmx_exit_handled_vmcs_access(struct kvm_vcpu *vcpu,
+ 
+ 	/* Decode instruction info and find the field to access */
+ 	vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+-	field = kvm_register_read(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
++	field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
+ 
+ 	/* Out-of-range fields always cause a VM exit from L2 to L1 */
+ 	if (field >> 15)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index f8835cabf29f3..fca4f452827b7 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -156,9 +156,11 @@ static u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
+ 	MSR_IA32_SPEC_CTRL,
+ 	MSR_IA32_PRED_CMD,
+ 	MSR_IA32_TSC,
++#ifdef CONFIG_X86_64
+ 	MSR_FS_BASE,
+ 	MSR_GS_BASE,
+ 	MSR_KERNEL_GS_BASE,
++#endif
+ 	MSR_IA32_SYSENTER_CS,
+ 	MSR_IA32_SYSENTER_ESP,
+ 	MSR_IA32_SYSENTER_EIP,
+@@ -5779,7 +5781,6 @@ void dump_vmcs(void)
+ 	u32 vmentry_ctl, vmexit_ctl;
+ 	u32 cpu_based_exec_ctrl, pin_based_exec_ctrl, secondary_exec_control;
+ 	unsigned long cr4;
+-	u64 efer;
+ 
+ 	if (!dump_invalid_vmcs) {
+ 		pr_warn_ratelimited("set kvm_intel.dump_invalid_vmcs=1 to dump internal KVM state.\n");
+@@ -5791,7 +5792,6 @@ void dump_vmcs(void)
+ 	cpu_based_exec_ctrl = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
+ 	pin_based_exec_ctrl = vmcs_read32(PIN_BASED_VM_EXEC_CONTROL);
+ 	cr4 = vmcs_readl(GUEST_CR4);
+-	efer = vmcs_read64(GUEST_IA32_EFER);
+ 	secondary_exec_control = 0;
+ 	if (cpu_has_secondary_exec_ctrls())
+ 		secondary_exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
+@@ -5803,9 +5803,7 @@ void dump_vmcs(void)
+ 	pr_err("CR4: actual=0x%016lx, shadow=0x%016lx, gh_mask=%016lx\n",
+ 	       cr4, vmcs_readl(CR4_READ_SHADOW), vmcs_readl(CR4_GUEST_HOST_MASK));
+ 	pr_err("CR3 = 0x%016lx\n", vmcs_readl(GUEST_CR3));
+-	if ((secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT) &&
+-	    (cr4 & X86_CR4_PAE) && !(efer & EFER_LMA))
+-	{
++	if (cpu_has_vmx_ept()) {
+ 		pr_err("PDPTR0 = 0x%016llx  PDPTR1 = 0x%016llx\n",
+ 		       vmcs_read64(GUEST_PDPTR0), vmcs_read64(GUEST_PDPTR1));
+ 		pr_err("PDPTR2 = 0x%016llx  PDPTR3 = 0x%016llx\n",
+@@ -5831,7 +5829,8 @@ void dump_vmcs(void)
+ 	if ((vmexit_ctl & (VM_EXIT_SAVE_IA32_PAT | VM_EXIT_SAVE_IA32_EFER)) ||
+ 	    (vmentry_ctl & (VM_ENTRY_LOAD_IA32_PAT | VM_ENTRY_LOAD_IA32_EFER)))
+ 		pr_err("EFER =     0x%016llx  PAT = 0x%016llx\n",
+-		       efer, vmcs_read64(GUEST_IA32_PAT));
++		       vmcs_read64(GUEST_IA32_EFER),
++		       vmcs_read64(GUEST_IA32_PAT));
+ 	pr_err("DebugCtl = 0x%016llx  DebugExceptions = 0x%016lx\n",
+ 	       vmcs_read64(GUEST_IA32_DEBUGCTL),
+ 	       vmcs_readl(GUEST_PENDING_DBG_EXCEPTIONS));
+@@ -6907,9 +6906,11 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
+ 	bitmap_fill(vmx->shadow_msr_intercept.write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
+ 
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_IA32_TSC, MSR_TYPE_R);
++#ifdef CONFIG_X86_64
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_FS_BASE, MSR_TYPE_RW);
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_GS_BASE, MSR_TYPE_RW);
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
++#endif
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0d8383b82bca4..0a5dd7568ebc8 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -11290,7 +11290,7 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
+ 
+ 		fallthrough;
+ 	case INVPCID_TYPE_ALL_INCL_GLOBAL:
+-		kvm_mmu_unload(vcpu);
++		kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
+ 		return kvm_skip_emulated_instruction(vcpu);
+ 
+ 	default:
+diff --git a/arch/x86/power/hibernate.c b/arch/x86/power/hibernate.c
+index cd3914fc9f3d4..e94e0050a583a 100644
+--- a/arch/x86/power/hibernate.c
++++ b/arch/x86/power/hibernate.c
+@@ -13,8 +13,8 @@
+ #include <linux/kdebug.h>
+ #include <linux/cpu.h>
+ #include <linux/pgtable.h>
+-
+-#include <crypto/hash.h>
++#include <linux/types.h>
++#include <linux/crc32.h>
+ 
+ #include <asm/e820/api.h>
+ #include <asm/init.h>
+@@ -54,95 +54,33 @@ int pfn_is_nosave(unsigned long pfn)
+ 	return pfn >= nosave_begin_pfn && pfn < nosave_end_pfn;
+ }
+ 
+-
+-#define MD5_DIGEST_SIZE 16
+-
+ struct restore_data_record {
+ 	unsigned long jump_address;
+ 	unsigned long jump_address_phys;
+ 	unsigned long cr3;
+ 	unsigned long magic;
+-	u8 e820_digest[MD5_DIGEST_SIZE];
++	unsigned long e820_checksum;
+ };
+ 
+-#if IS_BUILTIN(CONFIG_CRYPTO_MD5)
+ /**
+- * get_e820_md5 - calculate md5 according to given e820 table
++ * compute_e820_crc32 - calculate crc32 of a given e820 table
+  *
+  * @table: the e820 table to be calculated
+- * @buf: the md5 result to be stored to
++ *
++ * Return: the resulting checksum
+  */
+-static int get_e820_md5(struct e820_table *table, void *buf)
++static inline u32 compute_e820_crc32(struct e820_table *table)
+ {
+-	struct crypto_shash *tfm;
+-	struct shash_desc *desc;
+-	int size;
+-	int ret = 0;
+-
+-	tfm = crypto_alloc_shash("md5", 0, 0);
+-	if (IS_ERR(tfm))
+-		return -ENOMEM;
+-
+-	desc = kmalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm),
+-		       GFP_KERNEL);
+-	if (!desc) {
+-		ret = -ENOMEM;
+-		goto free_tfm;
+-	}
+-
+-	desc->tfm = tfm;
+-
+-	size = offsetof(struct e820_table, entries) +
++	int size = offsetof(struct e820_table, entries) +
+ 		sizeof(struct e820_entry) * table->nr_entries;
+ 
+-	if (crypto_shash_digest(desc, (u8 *)table, size, buf))
+-		ret = -EINVAL;
+-
+-	kfree_sensitive(desc);
+-
+-free_tfm:
+-	crypto_free_shash(tfm);
+-	return ret;
+-}
+-
+-static int hibernation_e820_save(void *buf)
+-{
+-	return get_e820_md5(e820_table_firmware, buf);
+-}
+-
+-static bool hibernation_e820_mismatch(void *buf)
+-{
+-	int ret;
+-	u8 result[MD5_DIGEST_SIZE];
+-
+-	memset(result, 0, MD5_DIGEST_SIZE);
+-	/* If there is no digest in suspend kernel, let it go. */
+-	if (!memcmp(result, buf, MD5_DIGEST_SIZE))
+-		return false;
+-
+-	ret = get_e820_md5(e820_table_firmware, result);
+-	if (ret)
+-		return true;
+-
+-	return memcmp(result, buf, MD5_DIGEST_SIZE) ? true : false;
+-}
+-#else
+-static int hibernation_e820_save(void *buf)
+-{
+-	return 0;
+-}
+-
+-static bool hibernation_e820_mismatch(void *buf)
+-{
+-	/* If md5 is not builtin for restore kernel, let it go. */
+-	return false;
++	return ~crc32_le(~0, (unsigned char const *)table, size);
+ }
+-#endif
+ 
+ #ifdef CONFIG_X86_64
+-#define RESTORE_MAGIC	0x23456789ABCDEF01UL
++#define RESTORE_MAGIC	0x23456789ABCDEF02UL
+ #else
+-#define RESTORE_MAGIC	0x12345678UL
++#define RESTORE_MAGIC	0x12345679UL
+ #endif
+ 
+ /**
+@@ -179,7 +117,8 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
+ 	 */
+ 	rdr->cr3 = restore_cr3 & ~CR3_PCID_MASK;
+ 
+-	return hibernation_e820_save(rdr->e820_digest);
++	rdr->e820_checksum = compute_e820_crc32(e820_table_firmware);
++	return 0;
+ }
+ 
+ /**
+@@ -200,7 +139,7 @@ int arch_hibernation_header_restore(void *addr)
+ 	jump_address_phys = rdr->jump_address_phys;
+ 	restore_cr3 = rdr->cr3;
+ 
+-	if (hibernation_e820_mismatch(rdr->e820_digest)) {
++	if (rdr->e820_checksum != compute_e820_crc32(e820_table_firmware)) {
+ 		pr_crit("Hibernate inconsistent memory map detected!\n");
+ 		return -ENODEV;
+ 	}
+diff --git a/crypto/async_tx/async_xor.c b/crypto/async_tx/async_xor.c
+index a057ecb1288d2..6cd7f7025df47 100644
+--- a/crypto/async_tx/async_xor.c
++++ b/crypto/async_tx/async_xor.c
+@@ -233,6 +233,7 @@ async_xor_offs(struct page *dest, unsigned int offset,
+ 		if (submit->flags & ASYNC_TX_XOR_DROP_DST) {
+ 			src_cnt--;
+ 			src_list++;
++			src_offs++;
+ 		}
+ 
+ 		/* wait for any prerequisite operations */
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 7a99b19bb893d..0a2da06e9d8bf 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -118,23 +118,15 @@ static DEFINE_PER_CPU(struct cpc_desc *, cpc_desc_ptr);
+  */
+ #define NUM_RETRIES 500ULL
+ 
+-struct cppc_attr {
+-	struct attribute attr;
+-	ssize_t (*show)(struct kobject *kobj,
+-			struct attribute *attr, char *buf);
+-	ssize_t (*store)(struct kobject *kobj,
+-			struct attribute *attr, const char *c, ssize_t count);
+-};
+-
+ #define define_one_cppc_ro(_name)		\
+-static struct cppc_attr _name =			\
++static struct kobj_attribute _name =		\
+ __ATTR(_name, 0444, show_##_name, NULL)
+ 
+ #define to_cpc_desc(a) container_of(a, struct cpc_desc, kobj)
+ 
+ #define show_cppc_data(access_fn, struct_name, member_name)		\
+ 	static ssize_t show_##member_name(struct kobject *kobj,		\
+-					struct attribute *attr,	char *buf) \
++				struct kobj_attribute *attr, char *buf)	\
+ 	{								\
+ 		struct cpc_desc *cpc_ptr = to_cpc_desc(kobj);		\
+ 		struct struct_name st_name = {0};			\
+@@ -160,7 +152,7 @@ show_cppc_data(cppc_get_perf_ctrs, cppc_perf_fb_ctrs, reference_perf);
+ show_cppc_data(cppc_get_perf_ctrs, cppc_perf_fb_ctrs, wraparound_time);
+ 
+ static ssize_t show_feedback_ctrs(struct kobject *kobj,
+-		struct attribute *attr, char *buf)
++		struct kobj_attribute *attr, char *buf)
+ {
+ 	struct cpc_desc *cpc_ptr = to_cpc_desc(kobj);
+ 	struct cppc_perf_fb_ctrs fb_ctrs = {0};
+diff --git a/drivers/ata/libahci_platform.c b/drivers/ata/libahci_platform.c
+index de638dafce21e..b2f5520882918 100644
+--- a/drivers/ata/libahci_platform.c
++++ b/drivers/ata/libahci_platform.c
+@@ -582,11 +582,13 @@ int ahci_platform_init_host(struct platform_device *pdev,
+ 	int i, irq, n_ports, rc;
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0) {
++	if (irq < 0) {
+ 		if (irq != -EPROBE_DEFER)
+ 			dev_err(dev, "no irq\n");
+ 		return irq;
+ 	}
++	if (!irq)
++		return -EINVAL;
+ 
+ 	hpriv->irq = irq;
+ 
+diff --git a/drivers/ata/pata_arasan_cf.c b/drivers/ata/pata_arasan_cf.c
+index e9cf31f384506..63f39440a9b42 100644
+--- a/drivers/ata/pata_arasan_cf.c
++++ b/drivers/ata/pata_arasan_cf.c
+@@ -818,12 +818,19 @@ static int arasan_cf_probe(struct platform_device *pdev)
+ 	else
+ 		quirk = CF_BROKEN_UDMA; /* as it is on spear1340 */
+ 
+-	/* if irq is 0, support only PIO */
+-	acdev->irq = platform_get_irq(pdev, 0);
+-	if (acdev->irq)
++	/*
++	 * If there's an error getting IRQ (or we do get IRQ0),
++	 * support only PIO
++	 */
++	ret = platform_get_irq(pdev, 0);
++	if (ret > 0) {
++		acdev->irq = ret;
+ 		irq_handler = arasan_cf_interrupt;
+-	else
++	} else	if (ret == -EPROBE_DEFER) {
++		return ret;
++	} else	{
+ 		quirk |= CF_BROKEN_MWDMA | CF_BROKEN_UDMA;
++	}
+ 
+ 	acdev->pbase = res->start;
+ 	acdev->vbase = devm_ioremap(&pdev->dev, res->start,
+diff --git a/drivers/ata/pata_ixp4xx_cf.c b/drivers/ata/pata_ixp4xx_cf.c
+index d1644a8ef9fa6..abc0e87ca1a8b 100644
+--- a/drivers/ata/pata_ixp4xx_cf.c
++++ b/drivers/ata/pata_ixp4xx_cf.c
+@@ -165,8 +165,12 @@ static int ixp4xx_pata_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq)
++	if (irq > 0)
+ 		irq_set_irq_type(irq, IRQ_TYPE_EDGE_RISING);
++	else if (irq < 0)
++		return irq;
++	else
++		return -EINVAL;
+ 
+ 	/* Setup expansion bus chip selects */
+ 	*data->cs0_cfg = data->cs0_bits;
+diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c
+index 664ef658a955f..b62446ea5f408 100644
+--- a/drivers/ata/sata_mv.c
++++ b/drivers/ata/sata_mv.c
+@@ -4097,6 +4097,10 @@ static int mv_platform_probe(struct platform_device *pdev)
+ 		n_ports = mv_platform_data->n_ports;
+ 		irq = platform_get_irq(pdev, 0);
+ 	}
++	if (irq < 0)
++		return irq;
++	if (!irq)
++		return -EINVAL;
+ 
+ 	host = ata_host_alloc_pinfo(&pdev->dev, ppi, n_ports);
+ 	hpriv = devm_kzalloc(&pdev->dev, sizeof(*hpriv), GFP_KERNEL);
+diff --git a/drivers/base/devtmpfs.c b/drivers/base/devtmpfs.c
+index eac184e6d6577..a71d141179439 100644
+--- a/drivers/base/devtmpfs.c
++++ b/drivers/base/devtmpfs.c
+@@ -416,7 +416,6 @@ static int __init devtmpfs_setup(void *p)
+ 	init_chroot(".");
+ out:
+ 	*(int *)p = err;
+-	complete(&setup_done);
+ 	return err;
+ }
+ 
+@@ -429,6 +428,7 @@ static int __ref devtmpfsd(void *p)
+ {
+ 	int err = devtmpfs_setup(p);
+ 
++	complete(&setup_done);
+ 	if (err)
+ 		return err;
+ 	devtmpfs_work_loop();
+diff --git a/drivers/base/node.c b/drivers/base/node.c
+index 6ffa470e29840..21965de8538be 100644
+--- a/drivers/base/node.c
++++ b/drivers/base/node.c
+@@ -268,21 +268,20 @@ static void node_init_cache_dev(struct node *node)
+ 	if (!dev)
+ 		return;
+ 
++	device_initialize(dev);
+ 	dev->parent = &node->dev;
+ 	dev->release = node_cache_release;
+ 	if (dev_set_name(dev, "memory_side_cache"))
+-		goto free_dev;
++		goto put_device;
+ 
+-	if (device_register(dev))
+-		goto free_name;
++	if (device_add(dev))
++		goto put_device;
+ 
+ 	pm_runtime_no_callbacks(dev);
+ 	node->cache_dev = dev;
+ 	return;
+-free_name:
+-	kfree_const(dev->kobj.name);
+-free_dev:
+-	kfree(dev);
++put_device:
++	put_device(dev);
+ }
+ 
+ /**
+@@ -319,25 +318,24 @@ void node_add_cache(unsigned int nid, struct node_cache_attrs *cache_attrs)
+ 		return;
+ 
+ 	dev = &info->dev;
++	device_initialize(dev);
+ 	dev->parent = node->cache_dev;
+ 	dev->release = node_cacheinfo_release;
+ 	dev->groups = cache_groups;
+ 	if (dev_set_name(dev, "index%d", cache_attrs->level))
+-		goto free_cache;
++		goto put_device;
+ 
+ 	info->cache_attrs = *cache_attrs;
+-	if (device_register(dev)) {
++	if (device_add(dev)) {
+ 		dev_warn(&node->dev, "failed to add cache level:%d\n",
+ 			 cache_attrs->level);
+-		goto free_name;
++		goto put_device;
+ 	}
+ 	pm_runtime_no_callbacks(dev);
+ 	list_add_tail(&info->node, &node->cache_attrs);
+ 	return;
+-free_name:
+-	kfree_const(dev->kobj.name);
+-free_cache:
+-	kfree(info);
++put_device:
++	put_device(dev);
+ }
+ 
+ static void node_remove_caches(struct node *node)
+diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
+index ff2ee87987c7e..211a335a608d7 100644
+--- a/drivers/base/regmap/regmap-debugfs.c
++++ b/drivers/base/regmap/regmap-debugfs.c
+@@ -660,6 +660,7 @@ void regmap_debugfs_exit(struct regmap *map)
+ 		regmap_debugfs_free_dump_cache(map);
+ 		mutex_unlock(&map->cache_lock);
+ 		kfree(map->debugfs_name);
++		map->debugfs_name = NULL;
+ 	} else {
+ 		struct regmap_debugfs_node *node, *tmp;
+ 
+diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
+index 172f720b8d637..f5df82c26c16f 100644
+--- a/drivers/block/null_blk_zoned.c
++++ b/drivers/block/null_blk_zoned.c
+@@ -149,6 +149,7 @@ void null_free_zoned_dev(struct nullb_device *dev)
+ {
+ 	bitmap_free(dev->zone_locks);
+ 	kvfree(dev->zones);
++	dev->zones = NULL;
+ }
+ 
+ static inline void null_lock_zone(struct nullb_device *dev, unsigned int zno)
+diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
+index a1b9df2c4ef1a..040829e2d0162 100644
+--- a/drivers/block/xen-blkback/common.h
++++ b/drivers/block/xen-blkback/common.h
+@@ -313,6 +313,7 @@ struct xen_blkif {
+ 
+ 	struct work_struct	free_work;
+ 	unsigned int 		nr_ring_pages;
++	bool			multi_ref;
+ 	/* All rings for this device. */
+ 	struct xen_blkif_ring	*rings;
+ 	unsigned int		nr_rings;
+diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
+index 9860d4842f36c..6c5e9373e91c3 100644
+--- a/drivers/block/xen-blkback/xenbus.c
++++ b/drivers/block/xen-blkback/xenbus.c
+@@ -998,14 +998,17 @@ static int read_per_ring_refs(struct xen_blkif_ring *ring, const char *dir)
+ 	for (i = 0; i < nr_grefs; i++) {
+ 		char ring_ref_name[RINGREF_NAME_LEN];
+ 
+-		snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
++		if (blkif->multi_ref)
++			snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
++		else {
++			WARN_ON(i != 0);
++			snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref");
++		}
++
+ 		err = xenbus_scanf(XBT_NIL, dir, ring_ref_name,
+ 				   "%u", &ring_ref[i]);
+ 
+ 		if (err != 1) {
+-			if (nr_grefs == 1)
+-				break;
+-
+ 			err = -EINVAL;
+ 			xenbus_dev_fatal(dev, err, "reading %s/%s",
+ 					 dir, ring_ref_name);
+@@ -1013,18 +1016,6 @@ static int read_per_ring_refs(struct xen_blkif_ring *ring, const char *dir)
+ 		}
+ 	}
+ 
+-	if (err != 1) {
+-		WARN_ON(nr_grefs != 1);
+-
+-		err = xenbus_scanf(XBT_NIL, dir, "ring-ref", "%u",
+-				   &ring_ref[0]);
+-		if (err != 1) {
+-			err = -EINVAL;
+-			xenbus_dev_fatal(dev, err, "reading %s/ring-ref", dir);
+-			return err;
+-		}
+-	}
+-
+ 	err = -ENOMEM;
+ 	for (i = 0; i < nr_grefs * XEN_BLKIF_REQS_PER_PAGE; i++) {
+ 		req = kzalloc(sizeof(*req), GFP_KERNEL);
+@@ -1129,10 +1120,15 @@ static int connect_ring(struct backend_info *be)
+ 		 blkif->nr_rings, blkif->blk_protocol, protocol,
+ 		 blkif->vbd.feature_gnt_persistent ? "persistent grants" : "");
+ 
+-	ring_page_order = xenbus_read_unsigned(dev->otherend,
+-					       "ring-page-order", 0);
+-
+-	if (ring_page_order > xen_blkif_max_ring_order) {
++	err = xenbus_scanf(XBT_NIL, dev->otherend, "ring-page-order", "%u",
++			   &ring_page_order);
++	if (err != 1) {
++		blkif->nr_ring_pages = 1;
++		blkif->multi_ref = false;
++	} else if (ring_page_order <= xen_blkif_max_ring_order) {
++		blkif->nr_ring_pages = 1 << ring_page_order;
++		blkif->multi_ref = true;
++	} else {
+ 		err = -EINVAL;
+ 		xenbus_dev_fatal(dev, err,
+ 				 "requested ring page order %d exceed max:%d",
+@@ -1141,8 +1137,6 @@ static int connect_ring(struct backend_info *be)
+ 		return err;
+ 	}
+ 
+-	blkif->nr_ring_pages = 1 << ring_page_order;
+-
+ 	if (blkif->nr_rings == 1)
+ 		return read_per_ring_refs(&blkif->rings[0], dev->otherend);
+ 	else {
+diff --git a/drivers/bus/qcom-ebi2.c b/drivers/bus/qcom-ebi2.c
+index 03ddcf426887b..0b8f53a688b8a 100644
+--- a/drivers/bus/qcom-ebi2.c
++++ b/drivers/bus/qcom-ebi2.c
+@@ -353,8 +353,10 @@ static int qcom_ebi2_probe(struct platform_device *pdev)
+ 
+ 		/* Figure out the chipselect */
+ 		ret = of_property_read_u32(child, "reg", &csindex);
+-		if (ret)
++		if (ret) {
++			of_node_put(child);
+ 			return ret;
++		}
+ 
+ 		if (csindex > 5) {
+ 			dev_err(dev,
+diff --git a/drivers/char/ttyprintk.c b/drivers/char/ttyprintk.c
+index 6a0059e508e38..93f5d11c830b7 100644
+--- a/drivers/char/ttyprintk.c
++++ b/drivers/char/ttyprintk.c
+@@ -158,12 +158,23 @@ static int tpk_ioctl(struct tty_struct *tty,
+ 	return 0;
+ }
+ 
++/*
++ * TTY operations hangup function.
++ */
++static void tpk_hangup(struct tty_struct *tty)
++{
++	struct ttyprintk_port *tpkp = tty->driver_data;
++
++	tty_port_hangup(&tpkp->port);
++}
++
+ static const struct tty_operations ttyprintk_ops = {
+ 	.open = tpk_open,
+ 	.close = tpk_close,
+ 	.write = tpk_write,
+ 	.write_room = tpk_write_room,
+ 	.ioctl = tpk_ioctl,
++	.hangup = tpk_hangup,
+ };
+ 
+ static const struct tty_port_operations null_ops = { };
+diff --git a/drivers/clk/clk-ast2600.c b/drivers/clk/clk-ast2600.c
+index a55b37fc2c8bd..bc3be5f3eae15 100644
+--- a/drivers/clk/clk-ast2600.c
++++ b/drivers/clk/clk-ast2600.c
+@@ -61,10 +61,10 @@ static void __iomem *scu_g6_base;
+ static const struct aspeed_gate_data aspeed_g6_gates[] = {
+ 	/*				    clk rst  name		parent	 flags */
+ 	[ASPEED_CLK_GATE_MCLK]		= {  0, -1, "mclk-gate",	"mpll",	 CLK_IS_CRITICAL }, /* SDRAM */
+-	[ASPEED_CLK_GATE_ECLK]		= {  1, -1, "eclk-gate",	"eclk",	 0 },	/* Video Engine */
++	[ASPEED_CLK_GATE_ECLK]		= {  1,  6, "eclk-gate",	"eclk",	 0 },	/* Video Engine */
+ 	[ASPEED_CLK_GATE_GCLK]		= {  2,  7, "gclk-gate",	NULL,	 0 },	/* 2D engine */
+ 	/* vclk parent - dclk/d1clk/hclk/mclk */
+-	[ASPEED_CLK_GATE_VCLK]		= {  3,  6, "vclk-gate",	NULL,	 0 },	/* Video Capture */
++	[ASPEED_CLK_GATE_VCLK]		= {  3, -1, "vclk-gate",	NULL,	 0 },	/* Video Capture */
+ 	[ASPEED_CLK_GATE_BCLK]		= {  4,  8, "bclk-gate",	"bclk",	 0 }, /* PCIe/PCI */
+ 	/* From dpll */
+ 	[ASPEED_CLK_GATE_DCLK]		= {  5, -1, "dclk-gate",	NULL,	 CLK_IS_CRITICAL }, /* DAC */
+diff --git a/drivers/clk/imx/clk-imx25.c b/drivers/clk/imx/clk-imx25.c
+index a66cabfbf94f1..66192fe0a898c 100644
+--- a/drivers/clk/imx/clk-imx25.c
++++ b/drivers/clk/imx/clk-imx25.c
+@@ -73,16 +73,6 @@ enum mx25_clks {
+ 
+ static struct clk *clk[clk_max];
+ 
+-static struct clk ** const uart_clks[] __initconst = {
+-	&clk[uart_ipg_per],
+-	&clk[uart1_ipg],
+-	&clk[uart2_ipg],
+-	&clk[uart3_ipg],
+-	&clk[uart4_ipg],
+-	&clk[uart5_ipg],
+-	NULL
+-};
+-
+ static int __init __mx25_clocks_init(void __iomem *ccm_base)
+ {
+ 	BUG_ON(!ccm_base);
+@@ -228,7 +218,7 @@ static int __init __mx25_clocks_init(void __iomem *ccm_base)
+ 	 */
+ 	clk_set_parent(clk[cko_sel], clk[ipg]);
+ 
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(6);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/clk/imx/clk-imx27.c b/drivers/clk/imx/clk-imx27.c
+index 5585ded8b8c6f..56a5fc402b10c 100644
+--- a/drivers/clk/imx/clk-imx27.c
++++ b/drivers/clk/imx/clk-imx27.c
+@@ -49,17 +49,6 @@ static const char *ssi_sel_clks[] = { "spll_gate", "mpll", };
+ static struct clk *clk[IMX27_CLK_MAX];
+ static struct clk_onecell_data clk_data;
+ 
+-static struct clk ** const uart_clks[] __initconst = {
+-	&clk[IMX27_CLK_PER1_GATE],
+-	&clk[IMX27_CLK_UART1_IPG_GATE],
+-	&clk[IMX27_CLK_UART2_IPG_GATE],
+-	&clk[IMX27_CLK_UART3_IPG_GATE],
+-	&clk[IMX27_CLK_UART4_IPG_GATE],
+-	&clk[IMX27_CLK_UART5_IPG_GATE],
+-	&clk[IMX27_CLK_UART6_IPG_GATE],
+-	NULL
+-};
+-
+ static void __init _mx27_clocks_init(unsigned long fref)
+ {
+ 	BUG_ON(!ccm);
+@@ -176,7 +165,7 @@ static void __init _mx27_clocks_init(unsigned long fref)
+ 
+ 	clk_prepare_enable(clk[IMX27_CLK_EMI_AHB_GATE]);
+ 
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(7);
+ 
+ 	imx_print_silicon_rev("i.MX27", mx27_revision());
+ }
+diff --git a/drivers/clk/imx/clk-imx35.c b/drivers/clk/imx/clk-imx35.c
+index c1df03665c09a..0fe5ac2101566 100644
+--- a/drivers/clk/imx/clk-imx35.c
++++ b/drivers/clk/imx/clk-imx35.c
+@@ -82,14 +82,6 @@ enum mx35_clks {
+ 
+ static struct clk *clk[clk_max];
+ 
+-static struct clk ** const uart_clks[] __initconst = {
+-	&clk[ipg],
+-	&clk[uart1_gate],
+-	&clk[uart2_gate],
+-	&clk[uart3_gate],
+-	NULL
+-};
+-
+ static void __init _mx35_clocks_init(void)
+ {
+ 	void __iomem *base;
+@@ -243,7 +235,7 @@ static void __init _mx35_clocks_init(void)
+ 	 */
+ 	clk_prepare_enable(clk[scc_gate]);
+ 
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(4);
+ 
+ 	imx_print_silicon_rev("i.MX35", mx35_revision());
+ }
+diff --git a/drivers/clk/imx/clk-imx5.c b/drivers/clk/imx/clk-imx5.c
+index 01e079b810261..e4493846454dd 100644
+--- a/drivers/clk/imx/clk-imx5.c
++++ b/drivers/clk/imx/clk-imx5.c
+@@ -128,30 +128,6 @@ static const char *ieee1588_sels[] = { "pll3_sw", "pll4_sw", "dummy" /* usbphy2_
+ static struct clk *clk[IMX5_CLK_END];
+ static struct clk_onecell_data clk_data;
+ 
+-static struct clk ** const uart_clks_mx51[] __initconst = {
+-	&clk[IMX5_CLK_UART1_IPG_GATE],
+-	&clk[IMX5_CLK_UART1_PER_GATE],
+-	&clk[IMX5_CLK_UART2_IPG_GATE],
+-	&clk[IMX5_CLK_UART2_PER_GATE],
+-	&clk[IMX5_CLK_UART3_IPG_GATE],
+-	&clk[IMX5_CLK_UART3_PER_GATE],
+-	NULL
+-};
+-
+-static struct clk ** const uart_clks_mx50_mx53[] __initconst = {
+-	&clk[IMX5_CLK_UART1_IPG_GATE],
+-	&clk[IMX5_CLK_UART1_PER_GATE],
+-	&clk[IMX5_CLK_UART2_IPG_GATE],
+-	&clk[IMX5_CLK_UART2_PER_GATE],
+-	&clk[IMX5_CLK_UART3_IPG_GATE],
+-	&clk[IMX5_CLK_UART3_PER_GATE],
+-	&clk[IMX5_CLK_UART4_IPG_GATE],
+-	&clk[IMX5_CLK_UART4_PER_GATE],
+-	&clk[IMX5_CLK_UART5_IPG_GATE],
+-	&clk[IMX5_CLK_UART5_PER_GATE],
+-	NULL
+-};
+-
+ static void __init mx5_clocks_common_init(void __iomem *ccm_base)
+ {
+ 	clk[IMX5_CLK_DUMMY]		= imx_clk_fixed("dummy", 0);
+@@ -382,7 +358,7 @@ static void __init mx50_clocks_init(struct device_node *np)
+ 	r = clk_round_rate(clk[IMX5_CLK_USBOH3_PER_GATE], 54000000);
+ 	clk_set_rate(clk[IMX5_CLK_USBOH3_PER_GATE], r);
+ 
+-	imx_register_uart_clocks(uart_clks_mx50_mx53);
++	imx_register_uart_clocks(5);
+ }
+ CLK_OF_DECLARE(imx50_ccm, "fsl,imx50-ccm", mx50_clocks_init);
+ 
+@@ -488,7 +464,7 @@ static void __init mx51_clocks_init(struct device_node *np)
+ 	val |= 1 << 23;
+ 	writel(val, MXC_CCM_CLPCR);
+ 
+-	imx_register_uart_clocks(uart_clks_mx51);
++	imx_register_uart_clocks(3);
+ }
+ CLK_OF_DECLARE(imx51_ccm, "fsl,imx51-ccm", mx51_clocks_init);
+ 
+@@ -633,6 +609,6 @@ static void __init mx53_clocks_init(struct device_node *np)
+ 	r = clk_round_rate(clk[IMX5_CLK_USBOH3_PER_GATE], 54000000);
+ 	clk_set_rate(clk[IMX5_CLK_USBOH3_PER_GATE], r);
+ 
+-	imx_register_uart_clocks(uart_clks_mx50_mx53);
++	imx_register_uart_clocks(5);
+ }
+ CLK_OF_DECLARE(imx53_ccm, "fsl,imx53-ccm", mx53_clocks_init);
+diff --git a/drivers/clk/imx/clk-imx6q.c b/drivers/clk/imx/clk-imx6q.c
+index b2ff187cedabc..f444bbe8244c2 100644
+--- a/drivers/clk/imx/clk-imx6q.c
++++ b/drivers/clk/imx/clk-imx6q.c
+@@ -140,13 +140,6 @@ static inline int clk_on_imx6dl(void)
+ 	return of_machine_is_compatible("fsl,imx6dl");
+ }
+ 
+-static const int uart_clk_ids[] __initconst = {
+-	IMX6QDL_CLK_UART_IPG,
+-	IMX6QDL_CLK_UART_SERIAL,
+-};
+-
+-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
+-
+ static int ldb_di_sel_by_clock_id(int clock_id)
+ {
+ 	switch (clock_id) {
+@@ -440,7 +433,6 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
+ 	struct device_node *np;
+ 	void __iomem *anatop_base, *base;
+ 	int ret;
+-	int i;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX6QDL_CLK_END), GFP_KERNEL);
+@@ -982,12 +974,6 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
+ 			       hws[IMX6QDL_CLK_PLL3_USB_OTG]->clk);
+ 	}
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(1);
+ }
+ CLK_OF_DECLARE(imx6q, "fsl,imx6q-ccm", imx6q_clocks_init);
+diff --git a/drivers/clk/imx/clk-imx6sl.c b/drivers/clk/imx/clk-imx6sl.c
+index 2f9361946a0e1..d997b5b078183 100644
+--- a/drivers/clk/imx/clk-imx6sl.c
++++ b/drivers/clk/imx/clk-imx6sl.c
+@@ -178,19 +178,11 @@ void imx6sl_set_wait_clk(bool enter)
+ 		imx6sl_enable_pll_arm(false);
+ }
+ 
+-static const int uart_clk_ids[] __initconst = {
+-	IMX6SL_CLK_UART,
+-	IMX6SL_CLK_UART_SERIAL,
+-};
+-
+-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
+-
+ static void __init imx6sl_clocks_init(struct device_node *ccm_node)
+ {
+ 	struct device_node *np;
+ 	void __iomem *base;
+ 	int ret;
+-	int i;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX6SL_CLK_END), GFP_KERNEL);
+@@ -447,12 +439,6 @@ static void __init imx6sl_clocks_init(struct device_node *ccm_node)
+ 	clk_set_parent(hws[IMX6SL_CLK_LCDIF_AXI_SEL]->clk,
+ 		       hws[IMX6SL_CLK_PLL2_PFD2]->clk);
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(2);
+ }
+ CLK_OF_DECLARE(imx6sl, "fsl,imx6sl-ccm", imx6sl_clocks_init);
+diff --git a/drivers/clk/imx/clk-imx6sll.c b/drivers/clk/imx/clk-imx6sll.c
+index 8e8288bda4d0b..31d777f300395 100644
+--- a/drivers/clk/imx/clk-imx6sll.c
++++ b/drivers/clk/imx/clk-imx6sll.c
+@@ -76,26 +76,10 @@ static u32 share_count_ssi1;
+ static u32 share_count_ssi2;
+ static u32 share_count_ssi3;
+ 
+-static const int uart_clk_ids[] __initconst = {
+-	IMX6SLL_CLK_UART1_IPG,
+-	IMX6SLL_CLK_UART1_SERIAL,
+-	IMX6SLL_CLK_UART2_IPG,
+-	IMX6SLL_CLK_UART2_SERIAL,
+-	IMX6SLL_CLK_UART3_IPG,
+-	IMX6SLL_CLK_UART3_SERIAL,
+-	IMX6SLL_CLK_UART4_IPG,
+-	IMX6SLL_CLK_UART4_SERIAL,
+-	IMX6SLL_CLK_UART5_IPG,
+-	IMX6SLL_CLK_UART5_SERIAL,
+-};
+-
+-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
+-
+ static void __init imx6sll_clocks_init(struct device_node *ccm_node)
+ {
+ 	struct device_node *np;
+ 	void __iomem *base;
+-	int i;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX6SLL_CLK_END), GFP_KERNEL);
+@@ -356,13 +340,7 @@ static void __init imx6sll_clocks_init(struct device_node *ccm_node)
+ 
+ 	of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(5);
+ 
+ 	/* Lower the AHB clock rate before changing the clock source. */
+ 	clk_set_rate(hws[IMX6SLL_CLK_AHB]->clk, 99000000);
+diff --git a/drivers/clk/imx/clk-imx6sx.c b/drivers/clk/imx/clk-imx6sx.c
+index 20dcce526d072..fc1bd23d45834 100644
+--- a/drivers/clk/imx/clk-imx6sx.c
++++ b/drivers/clk/imx/clk-imx6sx.c
+@@ -117,18 +117,10 @@ static u32 share_count_ssi3;
+ static u32 share_count_sai1;
+ static u32 share_count_sai2;
+ 
+-static const int uart_clk_ids[] __initconst = {
+-	IMX6SX_CLK_UART_IPG,
+-	IMX6SX_CLK_UART_SERIAL,
+-};
+-
+-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
+-
+ static void __init imx6sx_clocks_init(struct device_node *ccm_node)
+ {
+ 	struct device_node *np;
+ 	void __iomem *base;
+-	int i;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX6SX_CLK_CLK_END), GFP_KERNEL);
+@@ -556,12 +548,6 @@ static void __init imx6sx_clocks_init(struct device_node *ccm_node)
+ 	clk_set_parent(hws[IMX6SX_CLK_QSPI1_SEL]->clk, hws[IMX6SX_CLK_PLL2_BUS]->clk);
+ 	clk_set_parent(hws[IMX6SX_CLK_QSPI2_SEL]->clk, hws[IMX6SX_CLK_PLL2_BUS]->clk);
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(2);
+ }
+ CLK_OF_DECLARE(imx6sx, "fsl,imx6sx-ccm", imx6sx_clocks_init);
+diff --git a/drivers/clk/imx/clk-imx7d.c b/drivers/clk/imx/clk-imx7d.c
+index 22d24a6a05e70..c4e0f1c07192f 100644
+--- a/drivers/clk/imx/clk-imx7d.c
++++ b/drivers/clk/imx/clk-imx7d.c
+@@ -377,23 +377,10 @@ static const char *pll_video_bypass_sel[] = { "pll_video_main", "pll_video_main_
+ static struct clk_hw **hws;
+ static struct clk_hw_onecell_data *clk_hw_data;
+ 
+-static const int uart_clk_ids[] __initconst = {
+-	IMX7D_UART1_ROOT_CLK,
+-	IMX7D_UART2_ROOT_CLK,
+-	IMX7D_UART3_ROOT_CLK,
+-	IMX7D_UART4_ROOT_CLK,
+-	IMX7D_UART5_ROOT_CLK,
+-	IMX7D_UART6_ROOT_CLK,
+-	IMX7D_UART7_ROOT_CLK,
+-};
+-
+-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
+-
+ static void __init imx7d_clocks_init(struct device_node *ccm_node)
+ {
+ 	struct device_node *np;
+ 	void __iomem *base;
+-	int i;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX7D_CLK_END), GFP_KERNEL);
+@@ -897,14 +884,7 @@ static void __init imx7d_clocks_init(struct device_node *ccm_node)
+ 	hws[IMX7D_USB1_MAIN_480M_CLK] = imx_clk_hw_fixed_factor("pll_usb1_main_clk", "osc", 20, 1);
+ 	hws[IMX7D_USB_MAIN_480M_CLK] = imx_clk_hw_fixed_factor("pll_usb_main_clk", "osc", 20, 1);
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(7);
+ 
+ }
+ CLK_OF_DECLARE(imx7d, "fsl,imx7d-ccm", imx7d_clocks_init);
+diff --git a/drivers/clk/imx/clk-imx7ulp.c b/drivers/clk/imx/clk-imx7ulp.c
+index 634c0b6636b0e..779e09105da7d 100644
+--- a/drivers/clk/imx/clk-imx7ulp.c
++++ b/drivers/clk/imx/clk-imx7ulp.c
+@@ -43,19 +43,6 @@ static const struct clk_div_table ulp_div_table[] = {
+ 	{ /* sentinel */ },
+ };
+ 
+-static const int pcc2_uart_clk_ids[] __initconst = {
+-	IMX7ULP_CLK_LPUART4,
+-	IMX7ULP_CLK_LPUART5,
+-};
+-
+-static const int pcc3_uart_clk_ids[] __initconst = {
+-	IMX7ULP_CLK_LPUART6,
+-	IMX7ULP_CLK_LPUART7,
+-};
+-
+-static struct clk **pcc2_uart_clks[ARRAY_SIZE(pcc2_uart_clk_ids) + 1] __initdata;
+-static struct clk **pcc3_uart_clks[ARRAY_SIZE(pcc3_uart_clk_ids) + 1] __initdata;
+-
+ static void __init imx7ulp_clk_scg1_init(struct device_node *np)
+ {
+ 	struct clk_hw_onecell_data *clk_data;
+@@ -150,7 +137,6 @@ static void __init imx7ulp_clk_pcc2_init(struct device_node *np)
+ 	struct clk_hw_onecell_data *clk_data;
+ 	struct clk_hw **hws;
+ 	void __iomem *base;
+-	int i;
+ 
+ 	clk_data = kzalloc(struct_size(clk_data, hws, IMX7ULP_CLK_PCC2_END),
+ 			   GFP_KERNEL);
+@@ -190,13 +176,7 @@ static void __init imx7ulp_clk_pcc2_init(struct device_node *np)
+ 
+ 	of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_data);
+ 
+-	for (i = 0; i < ARRAY_SIZE(pcc2_uart_clk_ids); i++) {
+-		int index = pcc2_uart_clk_ids[i];
+-
+-		pcc2_uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(pcc2_uart_clks);
++	imx_register_uart_clocks(2);
+ }
+ CLK_OF_DECLARE(imx7ulp_clk_pcc2, "fsl,imx7ulp-pcc2", imx7ulp_clk_pcc2_init);
+ 
+@@ -205,7 +185,6 @@ static void __init imx7ulp_clk_pcc3_init(struct device_node *np)
+ 	struct clk_hw_onecell_data *clk_data;
+ 	struct clk_hw **hws;
+ 	void __iomem *base;
+-	int i;
+ 
+ 	clk_data = kzalloc(struct_size(clk_data, hws, IMX7ULP_CLK_PCC3_END),
+ 			   GFP_KERNEL);
+@@ -244,13 +223,7 @@ static void __init imx7ulp_clk_pcc3_init(struct device_node *np)
+ 
+ 	of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_data);
+ 
+-	for (i = 0; i < ARRAY_SIZE(pcc3_uart_clk_ids); i++) {
+-		int index = pcc3_uart_clk_ids[i];
+-
+-		pcc3_uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(pcc3_uart_clks);
++	imx_register_uart_clocks(7);
+ }
+ CLK_OF_DECLARE(imx7ulp_clk_pcc3, "fsl,imx7ulp-pcc3", imx7ulp_clk_pcc3_init);
+ 
+diff --git a/drivers/clk/imx/clk-imx8mm.c b/drivers/clk/imx/clk-imx8mm.c
+index f358ad9072990..4cbf86ab2eacf 100644
+--- a/drivers/clk/imx/clk-imx8mm.c
++++ b/drivers/clk/imx/clk-imx8mm.c
+@@ -291,20 +291,12 @@ static const char *imx8mm_clko2_sels[] = {"osc_24m", "sys_pll2_200m", "sys_pll1_
+ static struct clk_hw_onecell_data *clk_hw_data;
+ static struct clk_hw **hws;
+ 
+-static const int uart_clk_ids[] = {
+-	IMX8MM_CLK_UART1_ROOT,
+-	IMX8MM_CLK_UART2_ROOT,
+-	IMX8MM_CLK_UART3_ROOT,
+-	IMX8MM_CLK_UART4_ROOT,
+-};
+-static struct clk **uart_hws[ARRAY_SIZE(uart_clk_ids) + 1];
+-
+ static int imx8mm_clocks_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np = dev->of_node;
+ 	void __iomem *base;
+-	int ret, i;
++	int ret;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX8MM_CLK_END), GFP_KERNEL);
+@@ -622,13 +614,7 @@ static int imx8mm_clocks_probe(struct platform_device *pdev)
+ 		goto unregister_hws;
+ 	}
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_hws[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_hws);
++	imx_register_uart_clocks(4);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
+index f3c5e6cf55dd4..f98f252795396 100644
+--- a/drivers/clk/imx/clk-imx8mn.c
++++ b/drivers/clk/imx/clk-imx8mn.c
+@@ -284,20 +284,12 @@ static const char * const imx8mn_clko2_sels[] = {"osc_24m", "sys_pll2_200m", "sy
+ static struct clk_hw_onecell_data *clk_hw_data;
+ static struct clk_hw **hws;
+ 
+-static const int uart_clk_ids[] = {
+-	IMX8MN_CLK_UART1_ROOT,
+-	IMX8MN_CLK_UART2_ROOT,
+-	IMX8MN_CLK_UART3_ROOT,
+-	IMX8MN_CLK_UART4_ROOT,
+-};
+-static struct clk **uart_hws[ARRAY_SIZE(uart_clk_ids) + 1];
+-
+ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np = dev->of_node;
+ 	void __iomem *base;
+-	int ret, i;
++	int ret;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX8MN_CLK_END), GFP_KERNEL);
+@@ -573,13 +565,7 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ 		goto unregister_hws;
+ 	}
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_hws[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_hws);
++	imx_register_uart_clocks(4);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index 48e212477f52a..0391f5bda5e46 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -414,20 +414,11 @@ static const char * const imx8mp_dram_core_sels[] = {"dram_pll_out", "dram_alt_r
+ static struct clk_hw **hws;
+ static struct clk_hw_onecell_data *clk_hw_data;
+ 
+-static const int uart_clk_ids[] = {
+-	IMX8MP_CLK_UART1_ROOT,
+-	IMX8MP_CLK_UART2_ROOT,
+-	IMX8MP_CLK_UART3_ROOT,
+-	IMX8MP_CLK_UART4_ROOT,
+-};
+-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1];
+-
+ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np = dev->of_node;
+ 	void __iomem *anatop_base, *ccm_base;
+-	int i;
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx8mp-anatop");
+ 	anatop_base = of_iomap(np, 0);
+@@ -737,13 +728,7 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 
+ 	of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(4);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c
+index 06292d4a98ff7..4e6c81a702214 100644
+--- a/drivers/clk/imx/clk-imx8mq.c
++++ b/drivers/clk/imx/clk-imx8mq.c
+@@ -273,20 +273,12 @@ static const char * const imx8mq_clko2_sels[] = {"osc_25m", "sys2_pll_200m", "sy
+ static struct clk_hw_onecell_data *clk_hw_data;
+ static struct clk_hw **hws;
+ 
+-static const int uart_clk_ids[] = {
+-	IMX8MQ_CLK_UART1_ROOT,
+-	IMX8MQ_CLK_UART2_ROOT,
+-	IMX8MQ_CLK_UART3_ROOT,
+-	IMX8MQ_CLK_UART4_ROOT,
+-};
+-static struct clk **uart_hws[ARRAY_SIZE(uart_clk_ids) + 1];
+-
+ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np = dev->of_node;
+ 	void __iomem *base;
+-	int err, i;
++	int err;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX8MQ_CLK_END), GFP_KERNEL);
+@@ -607,13 +599,7 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ 		goto unregister_hws;
+ 	}
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_hws[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_hws);
++	imx_register_uart_clocks(4);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/clk/imx/clk.c b/drivers/clk/imx/clk.c
+index 47882c51cb853..7cc669934253a 100644
+--- a/drivers/clk/imx/clk.c
++++ b/drivers/clk/imx/clk.c
+@@ -147,8 +147,10 @@ void imx_cscmr1_fixup(u32 *val)
+ }
+ 
+ #ifndef MODULE
+-static int imx_keep_uart_clocks;
+-static struct clk ** const *imx_uart_clocks;
++
++static bool imx_keep_uart_clocks;
++static int imx_enabled_uart_clocks;
++static struct clk **imx_uart_clocks;
+ 
+ static int __init imx_keep_uart_clocks_param(char *str)
+ {
+@@ -161,24 +163,45 @@ __setup_param("earlycon", imx_keep_uart_earlycon,
+ __setup_param("earlyprintk", imx_keep_uart_earlyprintk,
+ 	      imx_keep_uart_clocks_param, 0);
+ 
+-void imx_register_uart_clocks(struct clk ** const clks[])
++void imx_register_uart_clocks(unsigned int clk_count)
+ {
++	imx_enabled_uart_clocks = 0;
++
++/* i.MX boards use device trees now.  For build tests without CONFIG_OF, do nothing */
++#ifdef CONFIG_OF
+ 	if (imx_keep_uart_clocks) {
+ 		int i;
+ 
+-		imx_uart_clocks = clks;
+-		for (i = 0; imx_uart_clocks[i]; i++)
+-			clk_prepare_enable(*imx_uart_clocks[i]);
++		imx_uart_clocks = kcalloc(clk_count, sizeof(struct clk *), GFP_KERNEL);
++
++		if (!of_stdout)
++			return;
++
++		for (i = 0; i < clk_count; i++) {
++			imx_uart_clocks[imx_enabled_uart_clocks] = of_clk_get(of_stdout, i);
++
++			/* Stop if there are no more of_stdout references */
++			if (IS_ERR(imx_uart_clocks[imx_enabled_uart_clocks]))
++				return;
++
++			/* Only enable the clock if it's not NULL */
++			if (imx_uart_clocks[imx_enabled_uart_clocks])
++				clk_prepare_enable(imx_uart_clocks[imx_enabled_uart_clocks++]);
++		}
+ 	}
++#endif
+ }
+ 
+ static int __init imx_clk_disable_uart(void)
+ {
+-	if (imx_keep_uart_clocks && imx_uart_clocks) {
++	if (imx_keep_uart_clocks && imx_enabled_uart_clocks) {
+ 		int i;
+ 
+-		for (i = 0; imx_uart_clocks[i]; i++)
+-			clk_disable_unprepare(*imx_uart_clocks[i]);
++		for (i = 0; i < imx_enabled_uart_clocks; i++) {
++			clk_disable_unprepare(imx_uart_clocks[i]);
++			clk_put(imx_uart_clocks[i]);
++		}
++		kfree(imx_uart_clocks);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/clk/imx/clk.h b/drivers/clk/imx/clk.h
+index 1d7be0c86538a..f04cbbab9fccd 100644
+--- a/drivers/clk/imx/clk.h
++++ b/drivers/clk/imx/clk.h
+@@ -13,9 +13,9 @@ extern spinlock_t imx_ccm_lock;
+ void imx_check_clocks(struct clk *clks[], unsigned int count);
+ void imx_check_clk_hws(struct clk_hw *clks[], unsigned int count);
+ #ifndef MODULE
+-void imx_register_uart_clocks(struct clk ** const clks[]);
++void imx_register_uart_clocks(unsigned int clk_count);
+ #else
+-static inline void imx_register_uart_clocks(struct clk ** const clks[])
++static inline void imx_register_uart_clocks(unsigned int clk_count)
+ {
+ }
+ #endif
+diff --git a/drivers/clk/mvebu/armada-37xx-periph.c b/drivers/clk/mvebu/armada-37xx-periph.c
+index f5746f9ea929f..32ac6b6b75306 100644
+--- a/drivers/clk/mvebu/armada-37xx-periph.c
++++ b/drivers/clk/mvebu/armada-37xx-periph.c
+@@ -84,6 +84,7 @@ struct clk_pm_cpu {
+ 	void __iomem *reg_div;
+ 	u8 shift_div;
+ 	struct regmap *nb_pm_base;
++	unsigned long l1_expiration;
+ };
+ 
+ #define to_clk_double_div(_hw) container_of(_hw, struct clk_double_div, hw)
+@@ -440,33 +441,6 @@ static u8 clk_pm_cpu_get_parent(struct clk_hw *hw)
+ 	return val;
+ }
+ 
+-static int clk_pm_cpu_set_parent(struct clk_hw *hw, u8 index)
+-{
+-	struct clk_pm_cpu *pm_cpu = to_clk_pm_cpu(hw);
+-	struct regmap *base = pm_cpu->nb_pm_base;
+-	int load_level;
+-
+-	/*
+-	 * We set the clock parent only if the DVFS is available but
+-	 * not enabled.
+-	 */
+-	if (IS_ERR(base) || armada_3700_pm_dvfs_is_enabled(base))
+-		return -EINVAL;
+-
+-	/* Set the parent clock for all the load level */
+-	for (load_level = 0; load_level < LOAD_LEVEL_NR; load_level++) {
+-		unsigned int reg, mask,  val,
+-			offset = ARMADA_37XX_NB_TBG_SEL_OFF;
+-
+-		armada_3700_pm_dvfs_update_regs(load_level, &reg, &offset);
+-
+-		val = index << offset;
+-		mask = ARMADA_37XX_NB_TBG_SEL_MASK << offset;
+-		regmap_update_bits(base, reg, mask, val);
+-	}
+-	return 0;
+-}
+-
+ static unsigned long clk_pm_cpu_recalc_rate(struct clk_hw *hw,
+ 					    unsigned long parent_rate)
+ {
+@@ -514,8 +488,10 @@ static long clk_pm_cpu_round_rate(struct clk_hw *hw, unsigned long rate,
+ }
+ 
+ /*
+- * Switching the CPU from the L2 or L3 frequencies (300 and 200 Mhz
+- * respectively) to L0 frequency (1.2 Ghz) requires a significant
++ * Workaround when base CPU frequnecy is 1000 or 1200 MHz
++ *
++ * Switching the CPU from the L2 or L3 frequencies (250/300 or 200 MHz
++ * respectively) to L0 frequency (1/1.2 GHz) requires a significant
+  * amount of time to let VDD stabilize to the appropriate
+  * voltage. This amount of time is large enough that it cannot be
+  * covered by the hardware countdown register. Due to this, the CPU
+@@ -525,26 +501,56 @@ static long clk_pm_cpu_round_rate(struct clk_hw *hw, unsigned long rate,
+  * To work around this problem, we prevent switching directly from the
+  * L2/L3 frequencies to the L0 frequency, and instead switch to the L1
+  * frequency in-between. The sequence therefore becomes:
+- * 1. First switch from L2/L3(200/300MHz) to L1(600MHZ)
++ * 1. First switch from L2/L3 (200/250/300 MHz) to L1 (500/600 MHz)
+  * 2. Sleep 20ms for stabling VDD voltage
+- * 3. Then switch from L1(600MHZ) to L0(1200Mhz).
++ * 3. Then switch from L1 (500/600 MHz) to L0 (1000/1200 MHz).
+  */
+-static void clk_pm_cpu_set_rate_wa(unsigned long rate, struct regmap *base)
++static void clk_pm_cpu_set_rate_wa(struct clk_pm_cpu *pm_cpu,
++				   unsigned int new_level, unsigned long rate,
++				   struct regmap *base)
+ {
+ 	unsigned int cur_level;
+ 
+-	if (rate != 1200 * 1000 * 1000)
+-		return;
+-
+ 	regmap_read(base, ARMADA_37XX_NB_CPU_LOAD, &cur_level);
+ 	cur_level &= ARMADA_37XX_NB_CPU_LOAD_MASK;
+-	if (cur_level <= ARMADA_37XX_DVFS_LOAD_1)
++
++	if (cur_level == new_level)
++		return;
++
++	/*
++	 * System wants to go to L1 on its own. If we are going from L2/L3,
++	 * remember when 20ms will expire. If from L0, set the value so that
++	 * next switch to L0 won't have to wait.
++	 */
++	if (new_level == ARMADA_37XX_DVFS_LOAD_1) {
++		if (cur_level == ARMADA_37XX_DVFS_LOAD_0)
++			pm_cpu->l1_expiration = jiffies;
++		else
++			pm_cpu->l1_expiration = jiffies + msecs_to_jiffies(20);
+ 		return;
++	}
++
++	/*
++	 * If we are setting to L2/L3, just invalidate L1 expiration time,
++	 * sleeping is not needed.
++	 */
++	if (rate < 1000*1000*1000)
++		goto invalidate_l1_exp;
++
++	/*
++	 * We are going to L0 with rate >= 1GHz. Check whether we have been at
++	 * L1 for long enough time. If not, go to L1 for 20ms.
++	 */
++	if (pm_cpu->l1_expiration && jiffies >= pm_cpu->l1_expiration)
++		goto invalidate_l1_exp;
+ 
+ 	regmap_update_bits(base, ARMADA_37XX_NB_CPU_LOAD,
+ 			   ARMADA_37XX_NB_CPU_LOAD_MASK,
+ 			   ARMADA_37XX_DVFS_LOAD_1);
+ 	msleep(20);
++
++invalidate_l1_exp:
++	pm_cpu->l1_expiration = 0;
+ }
+ 
+ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
+@@ -578,7 +584,9 @@ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
+ 			reg = ARMADA_37XX_NB_CPU_LOAD;
+ 			mask = ARMADA_37XX_NB_CPU_LOAD_MASK;
+ 
+-			clk_pm_cpu_set_rate_wa(rate, base);
++			/* Apply workaround when base CPU frequency is 1000 or 1200 MHz */
++			if (parent_rate >= 1000*1000*1000)
++				clk_pm_cpu_set_rate_wa(pm_cpu, load_level, rate, base);
+ 
+ 			regmap_update_bits(base, reg, mask, load_level);
+ 
+@@ -592,7 +600,6 @@ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ static const struct clk_ops clk_pm_cpu_ops = {
+ 	.get_parent = clk_pm_cpu_get_parent,
+-	.set_parent = clk_pm_cpu_set_parent,
+ 	.round_rate = clk_pm_cpu_round_rate,
+ 	.set_rate = clk_pm_cpu_set_rate,
+ 	.recalc_rate = clk_pm_cpu_recalc_rate,
+diff --git a/drivers/clk/qcom/a53-pll.c b/drivers/clk/qcom/a53-pll.c
+index 45cfc57bff924..af6ac17c7daeb 100644
+--- a/drivers/clk/qcom/a53-pll.c
++++ b/drivers/clk/qcom/a53-pll.c
+@@ -93,6 +93,7 @@ static const struct of_device_id qcom_a53pll_match_table[] = {
+ 	{ .compatible = "qcom,msm8916-a53pll" },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, qcom_a53pll_match_table);
+ 
+ static struct platform_driver qcom_a53pll_driver = {
+ 	.probe = qcom_a53pll_probe,
+diff --git a/drivers/clk/qcom/apss-ipq-pll.c b/drivers/clk/qcom/apss-ipq-pll.c
+index 30be87fb222aa..bef7899ad0d66 100644
+--- a/drivers/clk/qcom/apss-ipq-pll.c
++++ b/drivers/clk/qcom/apss-ipq-pll.c
+@@ -81,6 +81,7 @@ static const struct of_device_id apss_ipq_pll_match_table[] = {
+ 	{ .compatible = "qcom,ipq6018-a53pll" },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, apss_ipq_pll_match_table);
+ 
+ static struct platform_driver apss_ipq_pll_driver = {
+ 	.probe = apss_ipq_pll_probe,
+diff --git a/drivers/clk/uniphier/clk-uniphier-mux.c b/drivers/clk/uniphier/clk-uniphier-mux.c
+index 462c84321b2d2..1998e9d4cfc02 100644
+--- a/drivers/clk/uniphier/clk-uniphier-mux.c
++++ b/drivers/clk/uniphier/clk-uniphier-mux.c
+@@ -31,10 +31,10 @@ static int uniphier_clk_mux_set_parent(struct clk_hw *hw, u8 index)
+ static u8 uniphier_clk_mux_get_parent(struct clk_hw *hw)
+ {
+ 	struct uniphier_clk_mux *mux = to_uniphier_clk_mux(hw);
+-	int num_parents = clk_hw_get_num_parents(hw);
++	unsigned int num_parents = clk_hw_get_num_parents(hw);
+ 	int ret;
+ 	unsigned int val;
+-	u8 i;
++	unsigned int i;
+ 
+ 	ret = regmap_read(mux->regmap, mux->reg, &val);
+ 	if (ret)
+diff --git a/drivers/clk/zynqmp/pll.c b/drivers/clk/zynqmp/pll.c
+index 92f449ed38e51..abe6afbf3407b 100644
+--- a/drivers/clk/zynqmp/pll.c
++++ b/drivers/clk/zynqmp/pll.c
+@@ -14,10 +14,12 @@
+  * struct zynqmp_pll - PLL clock
+  * @hw:		Handle between common and hardware-specific interfaces
+  * @clk_id:	PLL clock ID
++ * @set_pll_mode:	Whether an IOCTL_SET_PLL_FRAC_MODE request be sent to ATF
+  */
+ struct zynqmp_pll {
+ 	struct clk_hw hw;
+ 	u32 clk_id;
++	bool set_pll_mode;
+ };
+ 
+ #define to_zynqmp_pll(_hw)	container_of(_hw, struct zynqmp_pll, hw)
+@@ -81,6 +83,8 @@ static inline void zynqmp_pll_set_mode(struct clk_hw *hw, bool on)
+ 	if (ret)
+ 		pr_warn_once("%s() PLL set frac mode failed for %s, ret = %d\n",
+ 			     __func__, clk_name, ret);
++	else
++		clk->set_pll_mode = true;
+ }
+ 
+ /**
+@@ -100,9 +104,7 @@ static long zynqmp_pll_round_rate(struct clk_hw *hw, unsigned long rate,
+ 	/* Enable the fractional mode if needed */
+ 	rate_div = (rate * FRAC_DIV) / *prate;
+ 	f = rate_div % FRAC_DIV;
+-	zynqmp_pll_set_mode(hw, !!f);
+-
+-	if (zynqmp_pll_get_mode(hw) == PLL_MODE_FRAC) {
++	if (f) {
+ 		if (rate > PS_PLL_VCO_MAX) {
+ 			fbdiv = rate / PS_PLL_VCO_MAX;
+ 			rate = rate / (fbdiv + 1);
+@@ -173,10 +175,12 @@ static int zynqmp_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	long rate_div, frac, m, f;
+ 	int ret;
+ 
+-	if (zynqmp_pll_get_mode(hw) == PLL_MODE_FRAC) {
+-		rate_div = (rate * FRAC_DIV) / parent_rate;
++	rate_div = (rate * FRAC_DIV) / parent_rate;
++	f = rate_div % FRAC_DIV;
++	zynqmp_pll_set_mode(hw, !!f);
++
++	if (f) {
+ 		m = rate_div / FRAC_DIV;
+-		f = rate_div % FRAC_DIV;
+ 		m = clamp_t(u32, m, (PLL_FBDIV_MIN), (PLL_FBDIV_MAX));
+ 		rate = parent_rate * m;
+ 		frac = (parent_rate * f) / FRAC_DIV;
+@@ -240,9 +244,15 @@ static int zynqmp_pll_enable(struct clk_hw *hw)
+ 	u32 clk_id = clk->clk_id;
+ 	int ret;
+ 
+-	if (zynqmp_pll_is_enabled(hw))
++	/*
++	 * Don't skip enabling clock if there is an IOCTL_SET_PLL_FRAC_MODE request
++	 * that has been sent to ATF.
++	 */
++	if (zynqmp_pll_is_enabled(hw) && (!clk->set_pll_mode))
+ 		return 0;
+ 
++	clk->set_pll_mode = false;
++
+ 	ret = zynqmp_pm_clock_enable(clk_id);
+ 	if (ret)
+ 		pr_warn_once("%s() clock enable failed for %s, ret = %d\n",
+diff --git a/drivers/clocksource/ingenic-ost.c b/drivers/clocksource/ingenic-ost.c
+index 029efc2731b49..6af2470136bd2 100644
+--- a/drivers/clocksource/ingenic-ost.c
++++ b/drivers/clocksource/ingenic-ost.c
+@@ -88,9 +88,9 @@ static int __init ingenic_ost_probe(struct platform_device *pdev)
+ 		return PTR_ERR(ost->regs);
+ 
+ 	map = device_node_to_regmap(dev->parent->of_node);
+-	if (!map) {
++	if (IS_ERR(map)) {
+ 		dev_err(dev, "regmap not found");
+-		return -EINVAL;
++		return PTR_ERR(map);
+ 	}
+ 
+ 	ost->clk = devm_clk_get(dev, "ost");
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index 33b3e8aa2cc50..3fae9ebb58b83 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -449,13 +449,13 @@ static int dmtimer_set_next_event(unsigned long cycles,
+ 	struct dmtimer_systimer *t = &clkevt->t;
+ 	void __iomem *pend = t->base + t->pend;
+ 
+-	writel_relaxed(0xffffffff - cycles, t->base + t->counter);
+ 	while (readl_relaxed(pend) & WP_TCRR)
+ 		cpu_relax();
++	writel_relaxed(0xffffffff - cycles, t->base + t->counter);
+ 
+-	writel_relaxed(OMAP_TIMER_CTRL_ST, t->base + t->ctrl);
+ 	while (readl_relaxed(pend) & WP_TCLR)
+ 		cpu_relax();
++	writel_relaxed(OMAP_TIMER_CTRL_ST, t->base + t->ctrl);
+ 
+ 	return 0;
+ }
+@@ -490,18 +490,18 @@ static int dmtimer_set_periodic(struct clock_event_device *evt)
+ 	dmtimer_clockevent_shutdown(evt);
+ 
+ 	/* Looks like we need to first set the load value separately */
+-	writel_relaxed(clkevt->period, t->base + t->load);
+ 	while (readl_relaxed(pend) & WP_TLDR)
+ 		cpu_relax();
++	writel_relaxed(clkevt->period, t->base + t->load);
+ 
+-	writel_relaxed(clkevt->period, t->base + t->counter);
+ 	while (readl_relaxed(pend) & WP_TCRR)
+ 		cpu_relax();
++	writel_relaxed(clkevt->period, t->base + t->counter);
+ 
+-	writel_relaxed(OMAP_TIMER_CTRL_AR | OMAP_TIMER_CTRL_ST,
+-		       t->base + t->ctrl);
+ 	while (readl_relaxed(pend) & WP_TCLR)
+ 		cpu_relax();
++	writel_relaxed(OMAP_TIMER_CTRL_AR | OMAP_TIMER_CTRL_ST,
++		       t->base + t->ctrl);
+ 
+ 	return 0;
+ }
+@@ -554,6 +554,7 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
+ 	dev->set_state_shutdown = dmtimer_clockevent_shutdown;
+ 	dev->set_state_periodic = dmtimer_set_periodic;
+ 	dev->set_state_oneshot = dmtimer_clockevent_shutdown;
++	dev->set_state_oneshot_stopped = dmtimer_clockevent_shutdown;
+ 	dev->tick_resume = dmtimer_clockevent_shutdown;
+ 	dev->cpumask = cpu_possible_mask;
+ 
+diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
+index b4af4094309b0..e4782f562e7a9 100644
+--- a/drivers/cpufreq/armada-37xx-cpufreq.c
++++ b/drivers/cpufreq/armada-37xx-cpufreq.c
+@@ -25,6 +25,10 @@
+ 
+ #include "cpufreq-dt.h"
+ 
++/* Clk register set */
++#define ARMADA_37XX_CLK_TBG_SEL		0
++#define ARMADA_37XX_CLK_TBG_SEL_CPU_OFF	22
++
+ /* Power management in North Bridge register set */
+ #define ARMADA_37XX_NB_L0L1	0x18
+ #define ARMADA_37XX_NB_L2L3	0x1C
+@@ -69,6 +73,8 @@
+ #define LOAD_LEVEL_NR	4
+ 
+ #define MIN_VOLT_MV 1000
++#define MIN_VOLT_MV_FOR_L1_1000MHZ 1108
++#define MIN_VOLT_MV_FOR_L1_1200MHZ 1155
+ 
+ /*  AVS value for the corresponding voltage (in mV) */
+ static int avs_map[] = {
+@@ -120,10 +126,15 @@ static struct armada_37xx_dvfs *armada_37xx_cpu_freq_info_get(u32 freq)
+  * will be configured then the DVFS will be enabled.
+  */
+ static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
+-						 struct clk *clk, u8 *divider)
++						 struct regmap *clk_base, u8 *divider)
+ {
++	u32 cpu_tbg_sel;
+ 	int load_lvl;
+-	struct clk *parent;
++
++	/* Determine to which TBG clock is CPU connected */
++	regmap_read(clk_base, ARMADA_37XX_CLK_TBG_SEL, &cpu_tbg_sel);
++	cpu_tbg_sel >>= ARMADA_37XX_CLK_TBG_SEL_CPU_OFF;
++	cpu_tbg_sel &= ARMADA_37XX_NB_TBG_SEL_MASK;
+ 
+ 	for (load_lvl = 0; load_lvl < LOAD_LEVEL_NR; load_lvl++) {
+ 		unsigned int reg, mask, val, offset = 0;
+@@ -142,6 +153,11 @@ static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
+ 		mask = (ARMADA_37XX_NB_CLK_SEL_MASK
+ 			<< ARMADA_37XX_NB_CLK_SEL_OFF);
+ 
++		/* Set TBG index, for all levels we use the same TBG */
++		val = cpu_tbg_sel << ARMADA_37XX_NB_TBG_SEL_OFF;
++		mask = (ARMADA_37XX_NB_TBG_SEL_MASK
++			<< ARMADA_37XX_NB_TBG_SEL_OFF);
++
+ 		/*
+ 		 * Set cpu divider based on the pre-computed array in
+ 		 * order to have balanced step.
+@@ -160,14 +176,6 @@ static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
+ 
+ 		regmap_update_bits(base, reg, mask, val);
+ 	}
+-
+-	/*
+-	 * Set cpu clock source, for all the level we keep the same
+-	 * clock source that the one already configured. For this one
+-	 * we need to use the clock framework
+-	 */
+-	parent = clk_get_parent(clk);
+-	clk_set_parent(clk, parent);
+ }
+ 
+ /*
+@@ -202,6 +210,8 @@ static u32 armada_37xx_avs_val_match(int target_vm)
+  * - L2 & L3 voltage should be about 150mv smaller than L0 voltage.
+  * This function calculates L1 & L2 & L3 AVS values dynamically based
+  * on L0 voltage and fill all AVS values to the AVS value table.
++ * When base CPU frequency is 1000 or 1200 MHz then there is additional
++ * minimal avs value for load L1.
+  */
+ static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
+ 						struct armada_37xx_dvfs *dvfs)
+@@ -233,6 +243,19 @@ static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
+ 		for (load_level = 1; load_level < LOAD_LEVEL_NR; load_level++)
+ 			dvfs->avs[load_level] = avs_min;
+ 
++		/*
++		 * Set the avs values for load L0 and L1 when base CPU frequency
++		 * is 1000/1200 MHz to its typical initial values according to
++		 * the Armada 3700 Hardware Specifications.
++		 */
++		if (dvfs->cpu_freq_max >= 1000*1000*1000) {
++			if (dvfs->cpu_freq_max >= 1200*1000*1000)
++				avs_min = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1200MHZ);
++			else
++				avs_min = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1000MHZ);
++			dvfs->avs[0] = dvfs->avs[1] = avs_min;
++		}
++
+ 		return;
+ 	}
+ 
+@@ -252,6 +275,26 @@ static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
+ 	target_vm = avs_map[l0_vdd_min] - 150;
+ 	target_vm = target_vm > MIN_VOLT_MV ? target_vm : MIN_VOLT_MV;
+ 	dvfs->avs[2] = dvfs->avs[3] = armada_37xx_avs_val_match(target_vm);
++
++	/*
++	 * Fix the avs value for load L1 when base CPU frequency is 1000/1200 MHz,
++	 * otherwise the CPU gets stuck when switching from load L1 to load L0.
++	 * Also ensure that avs value for load L1 is not higher than for L0.
++	 */
++	if (dvfs->cpu_freq_max >= 1000*1000*1000) {
++		u32 avs_min_l1;
++
++		if (dvfs->cpu_freq_max >= 1200*1000*1000)
++			avs_min_l1 = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1200MHZ);
++		else
++			avs_min_l1 = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1000MHZ);
++
++		if (avs_min_l1 > dvfs->avs[0])
++			avs_min_l1 = dvfs->avs[0];
++
++		if (dvfs->avs[1] < avs_min_l1)
++			dvfs->avs[1] = avs_min_l1;
++	}
+ }
+ 
+ static void __init armada37xx_cpufreq_avs_setup(struct regmap *base,
+@@ -358,11 +401,16 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 	struct platform_device *pdev;
+ 	unsigned long freq;
+ 	unsigned int cur_frequency, base_frequency;
+-	struct regmap *nb_pm_base, *avs_base;
++	struct regmap *nb_clk_base, *nb_pm_base, *avs_base;
+ 	struct device *cpu_dev;
+ 	int load_lvl, ret;
+ 	struct clk *clk, *parent;
+ 
++	nb_clk_base =
++		syscon_regmap_lookup_by_compatible("marvell,armada-3700-periph-clock-nb");
++	if (IS_ERR(nb_clk_base))
++		return -ENODEV;
++
+ 	nb_pm_base =
+ 		syscon_regmap_lookup_by_compatible("marvell,armada-3700-nb-pm");
+ 
+@@ -421,7 +469,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 		return -EINVAL;
+ 	}
+ 
+-	dvfs = armada_37xx_cpu_freq_info_get(cur_frequency);
++	dvfs = armada_37xx_cpu_freq_info_get(base_frequency);
+ 	if (!dvfs) {
+ 		clk_put(clk);
+ 		return -EINVAL;
+@@ -439,7 +487,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 	armada37xx_cpufreq_avs_configure(avs_base, dvfs);
+ 	armada37xx_cpufreq_avs_setup(avs_base, dvfs);
+ 
+-	armada37xx_cpufreq_dvfs_setup(nb_pm_base, clk, dvfs->divider);
++	armada37xx_cpufreq_dvfs_setup(nb_pm_base, nb_clk_base, dvfs->divider);
+ 	clk_put(clk);
+ 
+ 	for (load_lvl = ARMADA_37XX_DVFS_LOAD_0; load_lvl < LOAD_LEVEL_NR;
+@@ -473,7 +521,7 @@ disable_dvfs:
+ remove_opp:
+ 	/* clean-up the already added opp before leaving */
+ 	while (load_lvl-- > ARMADA_37XX_DVFS_LOAD_0) {
+-		freq = cur_frequency / dvfs->divider[load_lvl];
++		freq = base_frequency / dvfs->divider[load_lvl];
+ 		dev_pm_opp_remove(cpu_dev, freq);
+ 	}
+ 
+diff --git a/drivers/cpuidle/Kconfig.arm b/drivers/cpuidle/Kconfig.arm
+index 0844fadc4be85..334f83e56120c 100644
+--- a/drivers/cpuidle/Kconfig.arm
++++ b/drivers/cpuidle/Kconfig.arm
+@@ -107,7 +107,7 @@ config ARM_TEGRA_CPUIDLE
+ 
+ config ARM_QCOM_SPM_CPUIDLE
+ 	bool "CPU Idle Driver for Qualcomm Subsystem Power Manager (SPM)"
+-	depends on (ARCH_QCOM || COMPILE_TEST) && !ARM64
++	depends on (ARCH_QCOM || COMPILE_TEST) && !ARM64 && MMU
+ 	select ARM_CPU_SUSPEND
+ 	select CPU_IDLE_MULTIPLE_DRIVERS
+ 	select DT_IDLE_STATES
+diff --git a/drivers/crypto/allwinner/Kconfig b/drivers/crypto/allwinner/Kconfig
+index 0cdfe0e8cc66b..ce34048d0d68a 100644
+--- a/drivers/crypto/allwinner/Kconfig
++++ b/drivers/crypto/allwinner/Kconfig
+@@ -62,10 +62,10 @@ config CRYPTO_DEV_SUN8I_CE_DEBUG
+ config CRYPTO_DEV_SUN8I_CE_HASH
+ 	bool "Enable support for hash on sun8i-ce"
+ 	depends on CRYPTO_DEV_SUN8I_CE
+-	select MD5
+-	select SHA1
+-	select SHA256
+-	select SHA512
++	select CRYPTO_MD5
++	select CRYPTO_SHA1
++	select CRYPTO_SHA256
++	select CRYPTO_SHA512
+ 	help
+ 	  Say y to enable support for hash algorithms.
+ 
+@@ -123,8 +123,8 @@ config CRYPTO_DEV_SUN8I_SS_PRNG
+ config CRYPTO_DEV_SUN8I_SS_HASH
+ 	bool "Enable support for hash on sun8i-ss"
+ 	depends on CRYPTO_DEV_SUN8I_SS
+-	select MD5
+-	select SHA1
+-	select SHA256
++	select CRYPTO_MD5
++	select CRYPTO_SHA1
++	select CRYPTO_SHA256
+ 	help
+ 	  Say y to enable support for hash algorithms.
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+index b6ab2054f217b..756d5a7835482 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+@@ -347,8 +347,10 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
+ 	bf = (__le32 *)pad;
+ 
+ 	result = kzalloc(digestsize, GFP_KERNEL | GFP_DMA);
+-	if (!result)
++	if (!result) {
++		kfree(pad);
+ 		return -ENOMEM;
++	}
+ 
+ 	for (i = 0; i < MAX_SG; i++) {
+ 		rctx->t_dst[i].addr = 0;
+@@ -434,11 +436,10 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
+ 	dma_unmap_sg(ss->dev, areq->src, nr_sgs, DMA_TO_DEVICE);
+ 	dma_unmap_single(ss->dev, addr_res, digestsize, DMA_FROM_DEVICE);
+ 
+-	kfree(pad);
+-
+ 	memcpy(areq->result, result, algt->alg.hash.halg.digestsize);
+-	kfree(result);
+ theend:
++	kfree(pad);
++	kfree(result);
+ 	crypto_finalize_hash_request(engine, breq, err);
+ 	return 0;
+ }
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-prng.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-prng.c
+index 08a1473b21457..3191527928e41 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-prng.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-prng.c
+@@ -103,7 +103,8 @@ int sun8i_ss_prng_generate(struct crypto_rng *tfm, const u8 *src,
+ 	dma_iv = dma_map_single(ss->dev, ctx->seed, ctx->slen, DMA_TO_DEVICE);
+ 	if (dma_mapping_error(ss->dev, dma_iv)) {
+ 		dev_err(ss->dev, "Cannot DMA MAP IV\n");
+-		return -EFAULT;
++		err = -EFAULT;
++		goto err_free;
+ 	}
+ 
+ 	dma_dst = dma_map_single(ss->dev, d, todo, DMA_FROM_DEVICE);
+@@ -167,6 +168,7 @@ err_iv:
+ 		memcpy(ctx->seed, d + dlen, ctx->slen);
+ 	}
+ 	memzero_explicit(d, todo);
++err_free:
+ 	kfree(d);
+ 
+ 	return err;
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 476113e12489f..5b82ba7acc7cb 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -149,6 +149,9 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 
+ 	sev = psp->sev_data;
+ 
++	if (data && WARN_ON_ONCE(!virt_addr_valid(data)))
++		return -EINVAL;
++
+ 	/* Get the physical address of the command buffer */
+ 	phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0;
+ 	phys_msb = data ? upper_32_bits(__psp_pa(data)) : 0;
+diff --git a/drivers/crypto/ccp/tee-dev.c b/drivers/crypto/ccp/tee-dev.c
+index 5e697a90ea7f4..bcb81fef42118 100644
+--- a/drivers/crypto/ccp/tee-dev.c
++++ b/drivers/crypto/ccp/tee-dev.c
+@@ -36,6 +36,7 @@ static int tee_alloc_ring(struct psp_tee_device *tee, int ring_size)
+ 	if (!start_addr)
+ 		return -ENOMEM;
+ 
++	memset(start_addr, 0x0, ring_size);
+ 	rb_mgr->ring_start = start_addr;
+ 	rb_mgr->ring_size = ring_size;
+ 	rb_mgr->ring_pa = __psp_pa(start_addr);
+@@ -244,41 +245,54 @@ static int tee_submit_cmd(struct psp_tee_device *tee, enum tee_cmd_id cmd_id,
+ 			  void *buf, size_t len, struct tee_ring_cmd **resp)
+ {
+ 	struct tee_ring_cmd *cmd;
+-	u32 rptr, wptr;
+ 	int nloop = 1000, ret = 0;
++	u32 rptr;
+ 
+ 	*resp = NULL;
+ 
+ 	mutex_lock(&tee->rb_mgr.mutex);
+ 
+-	wptr = tee->rb_mgr.wptr;
+-
+-	/* Check if ring buffer is full */
++	/* Loop until empty entry found in ring buffer */
+ 	do {
++		/* Get pointer to ring buffer command entry */
++		cmd = (struct tee_ring_cmd *)
++			(tee->rb_mgr.ring_start + tee->rb_mgr.wptr);
++
+ 		rptr = ioread32(tee->io_regs + tee->vdata->ring_rptr_reg);
+ 
+-		if (!(wptr + sizeof(struct tee_ring_cmd) == rptr))
++		/* Check if ring buffer is full or command entry is waiting
++		 * for response from TEE
++		 */
++		if (!(tee->rb_mgr.wptr + sizeof(struct tee_ring_cmd) == rptr ||
++		      cmd->flag == CMD_WAITING_FOR_RESPONSE))
+ 			break;
+ 
+-		dev_info(tee->dev, "tee: ring buffer full. rptr = %u wptr = %u\n",
+-			 rptr, wptr);
++		dev_dbg(tee->dev, "tee: ring buffer full. rptr = %u wptr = %u\n",
++			rptr, tee->rb_mgr.wptr);
+ 
+-		/* Wait if ring buffer is full */
++		/* Wait if ring buffer is full or TEE is processing data */
+ 		mutex_unlock(&tee->rb_mgr.mutex);
+ 		schedule_timeout_interruptible(msecs_to_jiffies(10));
+ 		mutex_lock(&tee->rb_mgr.mutex);
+ 
+ 	} while (--nloop);
+ 
+-	if (!nloop && (wptr + sizeof(struct tee_ring_cmd) == rptr)) {
+-		dev_err(tee->dev, "tee: ring buffer full. rptr = %u wptr = %u\n",
+-			rptr, wptr);
++	if (!nloop &&
++	    (tee->rb_mgr.wptr + sizeof(struct tee_ring_cmd) == rptr ||
++	     cmd->flag == CMD_WAITING_FOR_RESPONSE)) {
++		dev_err(tee->dev, "tee: ring buffer full. rptr = %u wptr = %u response flag %u\n",
++			rptr, tee->rb_mgr.wptr, cmd->flag);
+ 		ret = -EBUSY;
+ 		goto unlock;
+ 	}
+ 
+-	/* Pointer to empty data entry in ring buffer */
+-	cmd = (struct tee_ring_cmd *)(tee->rb_mgr.ring_start + wptr);
++	/* Do not submit command if PSP got disabled while processing any
++	 * command in another thread
++	 */
++	if (psp_dead) {
++		ret = -EBUSY;
++		goto unlock;
++	}
+ 
+ 	/* Write command data into ring buffer */
+ 	cmd->cmd_id = cmd_id;
+@@ -286,6 +300,9 @@ static int tee_submit_cmd(struct psp_tee_device *tee, enum tee_cmd_id cmd_id,
+ 	memset(&cmd->buf[0], 0, sizeof(cmd->buf));
+ 	memcpy(&cmd->buf[0], buf, len);
+ 
++	/* Indicate driver is waiting for response */
++	cmd->flag = CMD_WAITING_FOR_RESPONSE;
++
+ 	/* Update local copy of write pointer */
+ 	tee->rb_mgr.wptr += sizeof(struct tee_ring_cmd);
+ 	if (tee->rb_mgr.wptr >= tee->rb_mgr.ring_size)
+@@ -353,12 +370,16 @@ int psp_tee_process_cmd(enum tee_cmd_id cmd_id, void *buf, size_t len,
+ 		return ret;
+ 
+ 	ret = tee_wait_cmd_completion(tee, resp, TEE_DEFAULT_TIMEOUT);
+-	if (ret)
++	if (ret) {
++		resp->flag = CMD_RESPONSE_TIMEDOUT;
+ 		return ret;
++	}
+ 
+ 	memcpy(buf, &resp->buf[0], len);
+ 	*status = resp->status;
+ 
++	resp->flag = CMD_RESPONSE_COPIED;
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(psp_tee_process_cmd);
+diff --git a/drivers/crypto/ccp/tee-dev.h b/drivers/crypto/ccp/tee-dev.h
+index f099601121150..49d26158b71e3 100644
+--- a/drivers/crypto/ccp/tee-dev.h
++++ b/drivers/crypto/ccp/tee-dev.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: MIT */
+ /*
+- * Copyright 2019 Advanced Micro Devices, Inc.
++ * Copyright (C) 2019,2021 Advanced Micro Devices, Inc.
+  *
+  * Author: Rijo Thomas <Rijo-john.Thomas@amd.com>
+  * Author: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
+@@ -18,7 +18,7 @@
+ #include <linux/mutex.h>
+ 
+ #define TEE_DEFAULT_TIMEOUT		10
+-#define MAX_BUFFER_SIZE			992
++#define MAX_BUFFER_SIZE			988
+ 
+ /**
+  * enum tee_ring_cmd_id - TEE interface commands for ring buffer configuration
+@@ -81,6 +81,20 @@ enum tee_cmd_state {
+ 	TEE_CMD_STATE_COMPLETED,
+ };
+ 
++/**
++ * enum cmd_resp_state - TEE command's response status maintained by driver
++ * @CMD_RESPONSE_INVALID:      initial state when no command is written to ring
++ * @CMD_WAITING_FOR_RESPONSE:  driver waiting for response from TEE
++ * @CMD_RESPONSE_TIMEDOUT:     failed to get response from TEE
++ * @CMD_RESPONSE_COPIED:       driver has copied response from TEE
++ */
++enum cmd_resp_state {
++	CMD_RESPONSE_INVALID,
++	CMD_WAITING_FOR_RESPONSE,
++	CMD_RESPONSE_TIMEDOUT,
++	CMD_RESPONSE_COPIED,
++};
++
+ /**
+  * struct tee_ring_cmd - Structure of the command buffer in TEE ring
+  * @cmd_id:      refers to &enum tee_cmd_id. Command id for the ring buffer
+@@ -91,6 +105,7 @@ enum tee_cmd_state {
+  * @pdata:       private data (currently unused)
+  * @res1:        reserved region
+  * @buf:         TEE command specific buffer
++ * @flag:	 refers to &enum cmd_resp_state
+  */
+ struct tee_ring_cmd {
+ 	u32 cmd_id;
+@@ -100,6 +115,7 @@ struct tee_ring_cmd {
+ 	u64 pdata;
+ 	u32 res1[2];
+ 	u8 buf[MAX_BUFFER_SIZE];
++	u32 flag;
+ 
+ 	/* Total size: 1024 bytes */
+ } __packed;
+diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
+index 13b908ea48738..884adeb63ba3c 100644
+--- a/drivers/crypto/chelsio/chcr_algo.c
++++ b/drivers/crypto/chelsio/chcr_algo.c
+@@ -768,13 +768,14 @@ static inline void create_wreq(struct chcr_context *ctx,
+ 	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	unsigned int tx_channel_id, rx_channel_id;
+ 	unsigned int txqidx = 0, rxqidx = 0;
+-	unsigned int qid, fid;
++	unsigned int qid, fid, portno;
+ 
+ 	get_qidxs(req, &txqidx, &rxqidx);
+ 	qid = u_ctx->lldi.rxq_ids[rxqidx];
+ 	fid = u_ctx->lldi.rxq_ids[0];
++	portno = rxqidx / ctx->rxq_perchan;
+ 	tx_channel_id = txqidx / ctx->txq_perchan;
+-	rx_channel_id = rxqidx / ctx->rxq_perchan;
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[portno]);
+ 
+ 
+ 	chcr_req->wreq.op_to_cctx_size = FILL_WR_OP_CCTX_SIZE;
+@@ -805,6 +806,7 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(wrparam->req);
+ 	struct chcr_context *ctx = c_ctx(tfm);
++	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	struct ablk_ctx *ablkctx = ABLK_CTX(ctx);
+ 	struct sk_buff *skb = NULL;
+ 	struct chcr_wr *chcr_req;
+@@ -821,6 +823,7 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam)
+ 	struct adapter *adap = padap(ctx->dev);
+ 	unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+ 	nents = sg_nents_xlen(reqctx->dstsg,  wrparam->bytes, CHCR_DST_SG_SIZE,
+ 			      reqctx->dst_ofst);
+ 	dst_size = get_space_for_phys_dsgl(nents);
+@@ -1579,6 +1582,7 @@ static struct sk_buff *create_hash_wr(struct ahash_request *req,
+ 	int error = 0;
+ 	unsigned int rx_channel_id = req_ctx->rxqidx / ctx->rxq_perchan;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+ 	transhdr_len = HASH_TRANSHDR_SIZE(param->kctx_len);
+ 	req_ctx->hctx_wr.imm = (transhdr_len + param->bfr_len +
+ 				param->sg_len) <= SGE_MAX_WR_LEN;
+@@ -2437,6 +2441,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
+ {
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct chcr_context *ctx = a_ctx(tfm);
++	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
+ 	struct chcr_authenc_ctx *actx = AUTHENC_CTX(aeadctx);
+ 	struct chcr_aead_reqctx *reqctx = aead_request_ctx(req);
+@@ -2456,6 +2461,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
+ 	struct adapter *adap = padap(ctx->dev);
+ 	unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+ 	if (req->cryptlen == 0)
+ 		return NULL;
+ 
+@@ -2709,9 +2715,11 @@ void chcr_add_aead_dst_ent(struct aead_request *req,
+ 	struct dsgl_walk dsgl_walk;
+ 	unsigned int authsize = crypto_aead_authsize(tfm);
+ 	struct chcr_context *ctx = a_ctx(tfm);
++	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	u32 temp;
+ 	unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+ 	dsgl_walk_init(&dsgl_walk, phys_cpl);
+ 	dsgl_walk_add_page(&dsgl_walk, IV + reqctx->b0_len, reqctx->iv_dma);
+ 	temp = req->assoclen + req->cryptlen +
+@@ -2751,9 +2759,11 @@ void chcr_add_cipher_dst_ent(struct skcipher_request *req,
+ 	struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req);
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(wrparam->req);
+ 	struct chcr_context *ctx = c_ctx(tfm);
++	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	struct dsgl_walk dsgl_walk;
+ 	unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+ 	dsgl_walk_init(&dsgl_walk, phys_cpl);
+ 	dsgl_walk_add_sg(&dsgl_walk, reqctx->dstsg, wrparam->bytes,
+ 			 reqctx->dst_ofst);
+@@ -2957,6 +2967,7 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl,
+ {
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct chcr_context *ctx = a_ctx(tfm);
++	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
+ 	struct chcr_aead_reqctx *reqctx = aead_request_ctx(req);
+ 	unsigned int cipher_mode = CHCR_SCMD_CIPHER_MODE_AES_CCM;
+@@ -2966,6 +2977,8 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl,
+ 	unsigned int tag_offset = 0, auth_offset = 0;
+ 	unsigned int assoclen;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
++
+ 	if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4309)
+ 		assoclen = req->assoclen - 8;
+ 	else
+@@ -3126,6 +3139,7 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
+ {
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct chcr_context *ctx = a_ctx(tfm);
++	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
+ 	struct chcr_aead_reqctx  *reqctx = aead_request_ctx(req);
+ 	struct sk_buff *skb = NULL;
+@@ -3142,6 +3156,7 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
+ 	struct adapter *adap = padap(ctx->dev);
+ 	unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+ 	if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106)
+ 		assoclen = req->assoclen - 8;
+ 
+diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+index 456979b136a27..ea932b6c4534f 100644
+--- a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+@@ -184,12 +184,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (ret)
+ 		goto out_err_free_reg;
+ 
+-	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+-
+ 	ret = adf_dev_init(accel_dev);
+ 	if (ret)
+ 		goto out_err_dev_shutdown;
+ 
++	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
++
+ 	ret = adf_dev_start(accel_dev);
+ 	if (ret)
+ 		goto out_err_dev_stop;
+diff --git a/drivers/crypto/qat/qat_c62xvf/adf_drv.c b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+index b9810f79eb848..6200ad448b119 100644
+--- a/drivers/crypto/qat/qat_c62xvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+@@ -184,12 +184,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (ret)
+ 		goto out_err_free_reg;
+ 
+-	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+-
+ 	ret = adf_dev_init(accel_dev);
+ 	if (ret)
+ 		goto out_err_dev_shutdown;
+ 
++	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
++
+ 	ret = adf_dev_start(accel_dev);
+ 	if (ret)
+ 		goto out_err_dev_stop;
+diff --git a/drivers/crypto/qat/qat_common/adf_isr.c b/drivers/crypto/qat/qat_common/adf_isr.c
+index 36136f7db509d..da6ef007a6aef 100644
+--- a/drivers/crypto/qat/qat_common/adf_isr.c
++++ b/drivers/crypto/qat/qat_common/adf_isr.c
+@@ -286,19 +286,32 @@ int adf_isr_resource_alloc(struct adf_accel_dev *accel_dev)
+ 
+ 	ret = adf_isr_alloc_msix_entry_table(accel_dev);
+ 	if (ret)
+-		return ret;
+-	if (adf_enable_msix(accel_dev))
+ 		goto err_out;
+ 
+-	if (adf_setup_bh(accel_dev))
+-		goto err_out;
++	ret = adf_enable_msix(accel_dev);
++	if (ret)
++		goto err_free_msix_table;
+ 
+-	if (adf_request_irqs(accel_dev))
+-		goto err_out;
++	ret = adf_setup_bh(accel_dev);
++	if (ret)
++		goto err_disable_msix;
++
++	ret = adf_request_irqs(accel_dev);
++	if (ret)
++		goto err_cleanup_bh;
+ 
+ 	return 0;
++
++err_cleanup_bh:
++	adf_cleanup_bh(accel_dev);
++
++err_disable_msix:
++	adf_disable_msix(&accel_dev->accel_pci_dev);
++
++err_free_msix_table:
++	adf_isr_free_msix_entry_table(accel_dev);
++
+ err_out:
+-	adf_isr_resource_free(accel_dev);
+-	return -EFAULT;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(adf_isr_resource_alloc);
+diff --git a/drivers/crypto/qat/qat_common/adf_transport.c b/drivers/crypto/qat/qat_common/adf_transport.c
+index 2ad774017200f..cdfd56c9e3458 100644
+--- a/drivers/crypto/qat/qat_common/adf_transport.c
++++ b/drivers/crypto/qat/qat_common/adf_transport.c
+@@ -153,6 +153,7 @@ static int adf_init_ring(struct adf_etr_ring_data *ring)
+ 		dev_err(&GET_DEV(accel_dev), "Ring address not aligned\n");
+ 		dma_free_coherent(&GET_DEV(accel_dev), ring_size_bytes,
+ 				  ring->base_addr, ring->dma_addr);
++		ring->base_addr = NULL;
+ 		return -EFAULT;
+ 	}
+ 
+diff --git a/drivers/crypto/qat/qat_common/adf_vf_isr.c b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+index c4a44dc6af3ee..31a36288623a2 100644
+--- a/drivers/crypto/qat/qat_common/adf_vf_isr.c
++++ b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+@@ -260,17 +260,26 @@ int adf_vf_isr_resource_alloc(struct adf_accel_dev *accel_dev)
+ 		goto err_out;
+ 
+ 	if (adf_setup_pf2vf_bh(accel_dev))
+-		goto err_out;
++		goto err_disable_msi;
+ 
+ 	if (adf_setup_bh(accel_dev))
+-		goto err_out;
++		goto err_cleanup_pf2vf_bh;
+ 
+ 	if (adf_request_msi_irq(accel_dev))
+-		goto err_out;
++		goto err_cleanup_bh;
+ 
+ 	return 0;
++
++err_cleanup_bh:
++	adf_cleanup_bh(accel_dev);
++
++err_cleanup_pf2vf_bh:
++	adf_cleanup_pf2vf_bh(accel_dev);
++
++err_disable_msi:
++	adf_disable_msi(accel_dev);
++
+ err_out:
+-	adf_vf_isr_resource_free(accel_dev);
+ 	return -EFAULT;
+ }
+ EXPORT_SYMBOL_GPL(adf_vf_isr_resource_alloc);
+diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+index 404cf9df69220..737508ded37b4 100644
+--- a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+@@ -184,12 +184,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (ret)
+ 		goto out_err_free_reg;
+ 
+-	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+-
+ 	ret = adf_dev_init(accel_dev);
+ 	if (ret)
+ 		goto out_err_dev_shutdown;
+ 
++	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
++
+ 	ret = adf_dev_start(accel_dev);
+ 	if (ret)
+ 		goto out_err_dev_stop;
+diff --git a/drivers/crypto/sa2ul.c b/drivers/crypto/sa2ul.c
+index 39d56ab12f275..4640fe0c1f221 100644
+--- a/drivers/crypto/sa2ul.c
++++ b/drivers/crypto/sa2ul.c
+@@ -1138,8 +1138,10 @@ static int sa_run(struct sa_req *req)
+ 		mapped_sg->sgt.sgl = src;
+ 		mapped_sg->sgt.orig_nents = src_nents;
+ 		ret = dma_map_sgtable(ddev, &mapped_sg->sgt, dir_src, 0);
+-		if (ret)
++		if (ret) {
++			kfree(rxd);
+ 			return ret;
++		}
+ 
+ 		mapped_sg->dir = dir_src;
+ 		mapped_sg->mapped = true;
+@@ -1147,8 +1149,10 @@ static int sa_run(struct sa_req *req)
+ 		mapped_sg->sgt.sgl = req->src;
+ 		mapped_sg->sgt.orig_nents = sg_nents;
+ 		ret = dma_map_sgtable(ddev, &mapped_sg->sgt, dir_src, 0);
+-		if (ret)
++		if (ret) {
++			kfree(rxd);
+ 			return ret;
++		}
+ 
+ 		mapped_sg->dir = dir_src;
+ 		mapped_sg->mapped = true;
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 861c100f9fac3..98f03a02d1122 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -377,7 +377,7 @@ static int devfreq_set_target(struct devfreq *devfreq, unsigned long new_freq,
+ 	devfreq->previous_freq = new_freq;
+ 
+ 	if (devfreq->suspend_freq)
+-		devfreq->resume_freq = cur_freq;
++		devfreq->resume_freq = new_freq;
+ 
+ 	return err;
+ }
+@@ -788,7 +788,8 @@ struct devfreq *devfreq_add_device(struct device *dev,
+ 
+ 	if (devfreq->profile->timer < 0
+ 		|| devfreq->profile->timer >= DEVFREQ_TIMER_NUM) {
+-		goto err_out;
++		mutex_unlock(&devfreq->lock);
++		goto err_dev;
+ 	}
+ 
+ 	if (!devfreq->profile->max_state && !devfreq->profile->freq_table) {
+diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
+index 3315e3c215864..5fa6b3ca0a385 100644
+--- a/drivers/firmware/Kconfig
++++ b/drivers/firmware/Kconfig
+@@ -237,6 +237,7 @@ config INTEL_STRATIX10_RSU
+ config QCOM_SCM
+ 	bool
+ 	depends on ARM || ARM64
++	depends on HAVE_ARM_SMCCC
+ 	select RESET_CONTROLLER
+ 
+ config QCOM_SCM_DOWNLOAD_MODE_DEFAULT
+diff --git a/drivers/firmware/qcom_scm-smc.c b/drivers/firmware/qcom_scm-smc.c
+index 497c13ba98d67..d111833364ba4 100644
+--- a/drivers/firmware/qcom_scm-smc.c
++++ b/drivers/firmware/qcom_scm-smc.c
+@@ -77,8 +77,10 @@ static void __scm_smc_do(const struct arm_smccc_args *smc,
+ 	}  while (res->a0 == QCOM_SCM_V2_EBUSY);
+ }
+ 
+-int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+-		 struct qcom_scm_res *res, bool atomic)
++
++int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
++		   enum qcom_scm_convention qcom_convention,
++		   struct qcom_scm_res *res, bool atomic)
+ {
+ 	int arglen = desc->arginfo & 0xf;
+ 	int i;
+@@ -87,9 +89,8 @@ int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+ 	size_t alloc_len;
+ 	gfp_t flag = atomic ? GFP_ATOMIC : GFP_KERNEL;
+ 	u32 smccc_call_type = atomic ? ARM_SMCCC_FAST_CALL : ARM_SMCCC_STD_CALL;
+-	u32 qcom_smccc_convention =
+-			(qcom_scm_convention == SMC_CONVENTION_ARM_32) ?
+-			ARM_SMCCC_SMC_32 : ARM_SMCCC_SMC_64;
++	u32 qcom_smccc_convention = (qcom_convention == SMC_CONVENTION_ARM_32) ?
++				    ARM_SMCCC_SMC_32 : ARM_SMCCC_SMC_64;
+ 	struct arm_smccc_res smc_res;
+ 	struct arm_smccc_args smc = {0};
+ 
+@@ -148,4 +149,5 @@ int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+ 	}
+ 
+ 	return (long)smc_res.a0 ? qcom_scm_remap_error(smc_res.a0) : 0;
++
+ }
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index 7be48c1bec96d..c5b20bdc08e9d 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -113,14 +113,10 @@ static void qcom_scm_clk_disable(void)
+ 	clk_disable_unprepare(__scm->bus_clk);
+ }
+ 
+-static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
+-					u32 cmd_id);
++enum qcom_scm_convention qcom_scm_convention = SMC_CONVENTION_UNKNOWN;
++static DEFINE_SPINLOCK(scm_query_lock);
+ 
+-enum qcom_scm_convention qcom_scm_convention;
+-static bool has_queried __read_mostly;
+-static DEFINE_SPINLOCK(query_lock);
+-
+-static void __query_convention(void)
++static enum qcom_scm_convention __get_convention(void)
+ {
+ 	unsigned long flags;
+ 	struct qcom_scm_desc desc = {
+@@ -133,36 +129,50 @@ static void __query_convention(void)
+ 		.owner = ARM_SMCCC_OWNER_SIP,
+ 	};
+ 	struct qcom_scm_res res;
++	enum qcom_scm_convention probed_convention;
+ 	int ret;
++	bool forced = false;
+ 
+-	spin_lock_irqsave(&query_lock, flags);
+-	if (has_queried)
+-		goto out;
++	if (likely(qcom_scm_convention != SMC_CONVENTION_UNKNOWN))
++		return qcom_scm_convention;
+ 
+-	qcom_scm_convention = SMC_CONVENTION_ARM_64;
+-	// Device isn't required as there is only one argument - no device
+-	// needed to dma_map_single to secure world
+-	ret = scm_smc_call(NULL, &desc, &res, true);
++	/*
++	 * Device isn't required as there is only one argument - no device
++	 * needed to dma_map_single to secure world
++	 */
++	probed_convention = SMC_CONVENTION_ARM_64;
++	ret = __scm_smc_call(NULL, &desc, probed_convention, &res, true);
+ 	if (!ret && res.result[0] == 1)
+-		goto out;
++		goto found;
++
++	/*
++	 * Some SC7180 firmwares didn't implement the
++	 * QCOM_SCM_INFO_IS_CALL_AVAIL call, so we fallback to forcing ARM_64
++	 * calling conventions on these firmwares. Luckily we don't make any
++	 * early calls into the firmware on these SoCs so the device pointer
++	 * will be valid here to check if the compatible matches.
++	 */
++	if (of_device_is_compatible(__scm ? __scm->dev->of_node : NULL, "qcom,scm-sc7180")) {
++		forced = true;
++		goto found;
++	}
+ 
+-	qcom_scm_convention = SMC_CONVENTION_ARM_32;
+-	ret = scm_smc_call(NULL, &desc, &res, true);
++	probed_convention = SMC_CONVENTION_ARM_32;
++	ret = __scm_smc_call(NULL, &desc, probed_convention, &res, true);
+ 	if (!ret && res.result[0] == 1)
+-		goto out;
+-
+-	qcom_scm_convention = SMC_CONVENTION_LEGACY;
+-out:
+-	has_queried = true;
+-	spin_unlock_irqrestore(&query_lock, flags);
+-	pr_info("qcom_scm: convention: %s\n",
+-		qcom_scm_convention_names[qcom_scm_convention]);
+-}
++		goto found;
++
++	probed_convention = SMC_CONVENTION_LEGACY;
++found:
++	spin_lock_irqsave(&scm_query_lock, flags);
++	if (probed_convention != qcom_scm_convention) {
++		qcom_scm_convention = probed_convention;
++		pr_info("qcom_scm: convention: %s%s\n",
++			qcom_scm_convention_names[qcom_scm_convention],
++			forced ? " (forced)" : "");
++	}
++	spin_unlock_irqrestore(&scm_query_lock, flags);
+ 
+-static inline enum qcom_scm_convention __get_convention(void)
+-{
+-	if (unlikely(!has_queried))
+-		__query_convention();
+ 	return qcom_scm_convention;
+ }
+ 
+@@ -219,8 +229,8 @@ static int qcom_scm_call_atomic(struct device *dev,
+ 	}
+ }
+ 
+-static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
+-					u32 cmd_id)
++static bool __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
++					 u32 cmd_id)
+ {
+ 	int ret;
+ 	struct qcom_scm_desc desc = {
+@@ -247,7 +257,7 @@ static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
+ 
+ 	ret = qcom_scm_call(dev, &desc, &res);
+ 
+-	return ret ? : res.result[0];
++	return ret ? false : !!res.result[0];
+ }
+ 
+ /**
+@@ -585,9 +595,8 @@ bool qcom_scm_pas_supported(u32 peripheral)
+ 	};
+ 	struct qcom_scm_res res;
+ 
+-	ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_PIL,
+-					   QCOM_SCM_PIL_PAS_IS_SUPPORTED);
+-	if (ret <= 0)
++	if (!__qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_PIL,
++					  QCOM_SCM_PIL_PAS_IS_SUPPORTED))
+ 		return false;
+ 
+ 	ret = qcom_scm_call(__scm->dev, &desc, &res);
+@@ -1054,17 +1063,18 @@ EXPORT_SYMBOL(qcom_scm_ice_set_key);
+  */
+ bool qcom_scm_hdcp_available(void)
+ {
++	bool avail;
+ 	int ret = qcom_scm_clk_enable();
+ 
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_HDCP,
++	avail = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_HDCP,
+ 						QCOM_SCM_HDCP_INVOKE);
+ 
+ 	qcom_scm_clk_disable();
+ 
+-	return ret > 0;
++	return avail;
+ }
+ EXPORT_SYMBOL(qcom_scm_hdcp_available);
+ 
+@@ -1236,7 +1246,7 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ 	__scm = scm;
+ 	__scm->dev = &pdev->dev;
+ 
+-	__query_convention();
++	__get_convention();
+ 
+ 	/*
+ 	 * If requested enable "download mode", from this point on warmboot
+diff --git a/drivers/firmware/qcom_scm.h b/drivers/firmware/qcom_scm.h
+index 95cd1ac30ab0b..632fe31424621 100644
+--- a/drivers/firmware/qcom_scm.h
++++ b/drivers/firmware/qcom_scm.h
+@@ -61,8 +61,11 @@ struct qcom_scm_res {
+ };
+ 
+ #define SCM_SMC_FNID(s, c)	((((s) & 0xFF) << 8) | ((c) & 0xFF))
+-extern int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+-			struct qcom_scm_res *res, bool atomic);
++extern int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
++			  enum qcom_scm_convention qcom_convention,
++			  struct qcom_scm_res *res, bool atomic);
++#define scm_smc_call(dev, desc, res, atomic) \
++	__scm_smc_call((dev), (desc), qcom_scm_convention, (res), (atomic))
+ 
+ #define SCM_LEGACY_FNID(s, c)	(((s) << 10) | ((c) & 0x3ff))
+ extern int scm_legacy_call_atomic(struct device *dev,
+diff --git a/drivers/firmware/xilinx/zynqmp.c b/drivers/firmware/xilinx/zynqmp.c
+index fd95edeb702b1..9e6504592646e 100644
+--- a/drivers/firmware/xilinx/zynqmp.c
++++ b/drivers/firmware/xilinx/zynqmp.c
+@@ -2,7 +2,7 @@
+ /*
+  * Xilinx Zynq MPSoC Firmware layer
+  *
+- *  Copyright (C) 2014-2020 Xilinx, Inc.
++ *  Copyright (C) 2014-2021 Xilinx, Inc.
+  *
+  *  Michal Simek <michal.simek@xilinx.com>
+  *  Davorin Mista <davorin.mista@aggios.com>
+@@ -1280,12 +1280,13 @@ static int zynqmp_firmware_probe(struct platform_device *pdev)
+ static int zynqmp_firmware_remove(struct platform_device *pdev)
+ {
+ 	struct pm_api_feature_data *feature_data;
++	struct hlist_node *tmp;
+ 	int i;
+ 
+ 	mfd_remove_devices(&pdev->dev);
+ 	zynqmp_pm_api_debugfs_exit();
+ 
+-	hash_for_each(pm_api_features_map, i, feature_data, hentry) {
++	hash_for_each_safe(pm_api_features_map, i, tmp, feature_data, hentry) {
+ 		hash_del(&feature_data->hentry);
+ 		kfree(feature_data);
+ 	}
+diff --git a/drivers/fpga/xilinx-spi.c b/drivers/fpga/xilinx-spi.c
+index 824abbbd631e4..d3e6f41e78bf7 100644
+--- a/drivers/fpga/xilinx-spi.c
++++ b/drivers/fpga/xilinx-spi.c
+@@ -233,25 +233,19 @@ static int xilinx_spi_probe(struct spi_device *spi)
+ 
+ 	/* PROGRAM_B is active low */
+ 	conf->prog_b = devm_gpiod_get(&spi->dev, "prog_b", GPIOD_OUT_LOW);
+-	if (IS_ERR(conf->prog_b)) {
+-		dev_err(&spi->dev, "Failed to get PROGRAM_B gpio: %ld\n",
+-			PTR_ERR(conf->prog_b));
+-		return PTR_ERR(conf->prog_b);
+-	}
++	if (IS_ERR(conf->prog_b))
++		return dev_err_probe(&spi->dev, PTR_ERR(conf->prog_b),
++				     "Failed to get PROGRAM_B gpio\n");
+ 
+ 	conf->init_b = devm_gpiod_get_optional(&spi->dev, "init-b", GPIOD_IN);
+-	if (IS_ERR(conf->init_b)) {
+-		dev_err(&spi->dev, "Failed to get INIT_B gpio: %ld\n",
+-			PTR_ERR(conf->init_b));
+-		return PTR_ERR(conf->init_b);
+-	}
++	if (IS_ERR(conf->init_b))
++		return dev_err_probe(&spi->dev, PTR_ERR(conf->init_b),
++				     "Failed to get INIT_B gpio\n");
+ 
+ 	conf->done = devm_gpiod_get(&spi->dev, "done", GPIOD_IN);
+-	if (IS_ERR(conf->done)) {
+-		dev_err(&spi->dev, "Failed to get DONE gpio: %ld\n",
+-			PTR_ERR(conf->done));
+-		return PTR_ERR(conf->done);
+-	}
++	if (IS_ERR(conf->done))
++		return dev_err_probe(&spi->dev, PTR_ERR(conf->done),
++				     "Failed to get DONE gpio\n");
+ 
+ 	mgr = devm_fpga_mgr_create(&spi->dev,
+ 				   "Xilinx Slave Serial FPGA Manager",
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+index 6e9a9e5dbea07..90e16d14e6c38 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+@@ -215,7 +215,11 @@ static int amdgpu_vmid_grab_idle(struct amdgpu_vm *vm,
+ 	/* Check if we have an idle VMID */
+ 	i = 0;
+ 	list_for_each_entry((*idle), &id_mgr->ids_lru, list) {
+-		fences[i] = amdgpu_sync_peek_fence(&(*idle)->active, ring);
++		/* Don't use per engine and per process VMID at the same time */
++		struct amdgpu_ring *r = adev->vm_manager.concurrent_flush ?
++			NULL : ring;
++
++		fences[i] = amdgpu_sync_peek_fence(&(*idle)->active, r);
+ 		if (!fences[i])
+ 			break;
+ 		++i;
+@@ -280,7 +284,7 @@ static int amdgpu_vmid_grab_reserved(struct amdgpu_vm *vm,
+ 	if (updates && (*id)->flushed_updates &&
+ 	    updates->context == (*id)->flushed_updates->context &&
+ 	    !dma_fence_is_later(updates, (*id)->flushed_updates))
+-	    updates = NULL;
++		updates = NULL;
+ 
+ 	if ((*id)->owner != vm->immediate.fence_context ||
+ 	    job->vm_pd_addr != (*id)->pd_gpu_addr ||
+@@ -289,6 +293,10 @@ static int amdgpu_vmid_grab_reserved(struct amdgpu_vm *vm,
+ 	     !dma_fence_is_signaled((*id)->last_flush))) {
+ 		struct dma_fence *tmp;
+ 
++		/* Don't use per engine and per process VMID at the same time */
++		if (adev->vm_manager.concurrent_flush)
++			ring = NULL;
++
+ 		/* to prevent one context starved by another context */
+ 		(*id)->pd_gpu_addr = 0;
+ 		tmp = amdgpu_sync_peek_fence(&(*id)->active, ring);
+@@ -364,12 +372,7 @@ static int amdgpu_vmid_grab_used(struct amdgpu_vm *vm,
+ 		if (updates && (!flushed || dma_fence_is_later(updates, flushed)))
+ 			needs_flush = true;
+ 
+-		/* Concurrent flushes are only possible starting with Vega10 and
+-		 * are broken on Navi10 and Navi14.
+-		 */
+-		if (needs_flush && (adev->asic_type < CHIP_VEGA10 ||
+-				    adev->asic_type == CHIP_NAVI10 ||
+-				    adev->asic_type == CHIP_NAVI14))
++		if (needs_flush && !adev->vm_manager.concurrent_flush)
+ 			continue;
+ 
+ 		/* Good, we can use this VMID. Remember this submission as
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 605d1545274c2..b47829ff30af7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -3173,6 +3173,12 @@ void amdgpu_vm_manager_init(struct amdgpu_device *adev)
+ {
+ 	unsigned i;
+ 
++	/* Concurrent flushes are only possible starting with Vega10 and
++	 * are broken on Navi10 and Navi14.
++	 */
++	adev->vm_manager.concurrent_flush = !(adev->asic_type < CHIP_VEGA10 ||
++					      adev->asic_type == CHIP_NAVI10 ||
++					      adev->asic_type == CHIP_NAVI14);
+ 	amdgpu_vmid_mgr_init(adev);
+ 
+ 	adev->vm_manager.fence_context =
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+index 58c83a7ad0fd9..c4218800e043f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+@@ -325,6 +325,7 @@ struct amdgpu_vm_manager {
+ 	/* Handling of VMIDs */
+ 	struct amdgpu_vmid_mgr			id_mgr[AMDGPU_MAX_VMHUBS];
+ 	unsigned int				first_kfd_vmid;
++	bool					concurrent_flush;
+ 
+ 	/* Handling of VM fences */
+ 	u64					fence_context;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
+index 66bbca61e3ef5..9318936aa8054 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
+@@ -20,6 +20,10 @@
+  * OTHER DEALINGS IN THE SOFTWARE.
+  */
+ 
++#include <linux/kconfig.h>
++
++#if IS_REACHABLE(CONFIG_AMD_IOMMU_V2)
++
+ #include <linux/printk.h>
+ #include <linux/device.h>
+ #include <linux/slab.h>
+@@ -355,3 +359,5 @@ int kfd_iommu_add_perf_counters(struct kfd_topology_device *kdev)
+ 
+ 	return 0;
+ }
++
++#endif
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.h b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.h
+index dd23d9fdf6a82..afd420b01a0c2 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.h
+@@ -23,7 +23,9 @@
+ #ifndef __KFD_IOMMU_H__
+ #define __KFD_IOMMU_H__
+ 
+-#if defined(CONFIG_AMD_IOMMU_V2_MODULE) || defined(CONFIG_AMD_IOMMU_V2)
++#include <linux/kconfig.h>
++
++#if IS_REACHABLE(CONFIG_AMD_IOMMU_V2)
+ 
+ #define KFD_SUPPORT_IOMMU_V2
+ 
+@@ -46,6 +48,9 @@ static inline int kfd_iommu_check_device(struct kfd_dev *kfd)
+ }
+ static inline int kfd_iommu_device_init(struct kfd_dev *kfd)
+ {
++#if IS_MODULE(CONFIG_AMD_IOMMU_V2)
++	WARN_ONCE(1, "iommu_v2 module is not usable by built-in KFD");
++#endif
+ 	return 0;
+ }
+ 
+@@ -73,6 +78,6 @@ static inline int kfd_iommu_add_perf_counters(struct kfd_topology_device *kdev)
+ 	return 0;
+ }
+ 
+-#endif /* defined(CONFIG_AMD_IOMMU_V2) */
++#endif /* IS_REACHABLE(CONFIG_AMD_IOMMU_V2) */
+ 
+ #endif /* __KFD_IOMMU_H__ */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index d18341b7daacd..8180894bbd1e3 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3685,6 +3685,23 @@ static int fill_dc_scaling_info(const struct drm_plane_state *state,
+ 	scaling_info->src_rect.x = state->src_x >> 16;
+ 	scaling_info->src_rect.y = state->src_y >> 16;
+ 
++	/*
++	 * For reasons we don't (yet) fully understand a non-zero
++	 * src_y coordinate into an NV12 buffer can cause a
++	 * system hang. To avoid hangs (and maybe be overly cautious)
++	 * let's reject both non-zero src_x and src_y.
++	 *
++	 * We currently know of only one use-case to reproduce a
++	 * scenario with non-zero src_x and src_y for NV12, which
++	 * is to gesture the YouTube Android app into full screen
++	 * on ChromeOS.
++	 */
++	if (state->fb &&
++	    state->fb->format->format == DRM_FORMAT_NV12 &&
++	    (scaling_info->src_rect.x != 0 ||
++	     scaling_info->src_rect.y != 0))
++		return -EINVAL;
++
+ 	scaling_info->src_rect.width = state->src_w >> 16;
+ 	if (scaling_info->src_rect.width == 0)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c b/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
+index 4e87e70237e3d..874b132fe1d78 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
+@@ -283,7 +283,7 @@ struct abm *dce_abm_create(
+ 	const struct dce_abm_shift *abm_shift,
+ 	const struct dce_abm_mask *abm_mask)
+ {
+-	struct dce_abm *abm_dce = kzalloc(sizeof(*abm_dce), GFP_KERNEL);
++	struct dce_abm *abm_dce = kzalloc(sizeof(*abm_dce), GFP_ATOMIC);
+ 
+ 	if (abm_dce == NULL) {
+ 		BREAK_TO_DEBUGGER();
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
+index f0cebe721bcc1..4216419503af4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
+@@ -925,7 +925,7 @@ struct dmcu *dcn10_dmcu_create(
+ 	const struct dce_dmcu_shift *dmcu_shift,
+ 	const struct dce_dmcu_mask *dmcu_mask)
+ {
+-	struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_KERNEL);
++	struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_ATOMIC);
+ 
+ 	if (dmcu_dce == NULL) {
+ 		BREAK_TO_DEBUGGER();
+@@ -946,7 +946,7 @@ struct dmcu *dcn20_dmcu_create(
+ 	const struct dce_dmcu_shift *dmcu_shift,
+ 	const struct dce_dmcu_mask *dmcu_mask)
+ {
+-	struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_KERNEL);
++	struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_ATOMIC);
+ 
+ 	if (dmcu_dce == NULL) {
+ 		BREAK_TO_DEBUGGER();
+@@ -967,7 +967,7 @@ struct dmcu *dcn21_dmcu_create(
+ 	const struct dce_dmcu_shift *dmcu_shift,
+ 	const struct dce_dmcu_mask *dmcu_mask)
+ {
+-	struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_KERNEL);
++	struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_ATOMIC);
+ 
+ 	if (dmcu_dce == NULL) {
+ 		BREAK_TO_DEBUGGER();
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c
+index 62cc2651e00c1..8774406120fc1 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c
+@@ -112,7 +112,7 @@ struct dccg *dccg2_create(
+ 	const struct dccg_shift *dccg_shift,
+ 	const struct dccg_mask *dccg_mask)
+ {
+-	struct dcn_dccg *dccg_dcn = kzalloc(sizeof(*dccg_dcn), GFP_KERNEL);
++	struct dcn_dccg *dccg_dcn = kzalloc(sizeof(*dccg_dcn), GFP_ATOMIC);
+ 	struct dccg *base;
+ 
+ 	if (dccg_dcn == NULL) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 4ea53c543e082..33488b3c5c3ce 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -1104,7 +1104,7 @@ struct dpp *dcn20_dpp_create(
+ 	uint32_t inst)
+ {
+ 	struct dcn20_dpp *dpp =
+-		kzalloc(sizeof(struct dcn20_dpp), GFP_KERNEL);
++		kzalloc(sizeof(struct dcn20_dpp), GFP_ATOMIC);
+ 
+ 	if (!dpp)
+ 		return NULL;
+@@ -1122,7 +1122,7 @@ struct input_pixel_processor *dcn20_ipp_create(
+ 	struct dc_context *ctx, uint32_t inst)
+ {
+ 	struct dcn10_ipp *ipp =
+-		kzalloc(sizeof(struct dcn10_ipp), GFP_KERNEL);
++		kzalloc(sizeof(struct dcn10_ipp), GFP_ATOMIC);
+ 
+ 	if (!ipp) {
+ 		BREAK_TO_DEBUGGER();
+@@ -1139,7 +1139,7 @@ struct output_pixel_processor *dcn20_opp_create(
+ 	struct dc_context *ctx, uint32_t inst)
+ {
+ 	struct dcn20_opp *opp =
+-		kzalloc(sizeof(struct dcn20_opp), GFP_KERNEL);
++		kzalloc(sizeof(struct dcn20_opp), GFP_ATOMIC);
+ 
+ 	if (!opp) {
+ 		BREAK_TO_DEBUGGER();
+@@ -1156,7 +1156,7 @@ struct dce_aux *dcn20_aux_engine_create(
+ 	uint32_t inst)
+ {
+ 	struct aux_engine_dce110 *aux_engine =
+-		kzalloc(sizeof(struct aux_engine_dce110), GFP_KERNEL);
++		kzalloc(sizeof(struct aux_engine_dce110), GFP_ATOMIC);
+ 
+ 	if (!aux_engine)
+ 		return NULL;
+@@ -1194,7 +1194,7 @@ struct dce_i2c_hw *dcn20_i2c_hw_create(
+ 	uint32_t inst)
+ {
+ 	struct dce_i2c_hw *dce_i2c_hw =
+-		kzalloc(sizeof(struct dce_i2c_hw), GFP_KERNEL);
++		kzalloc(sizeof(struct dce_i2c_hw), GFP_ATOMIC);
+ 
+ 	if (!dce_i2c_hw)
+ 		return NULL;
+@@ -1207,7 +1207,7 @@ struct dce_i2c_hw *dcn20_i2c_hw_create(
+ struct mpc *dcn20_mpc_create(struct dc_context *ctx)
+ {
+ 	struct dcn20_mpc *mpc20 = kzalloc(sizeof(struct dcn20_mpc),
+-					  GFP_KERNEL);
++					  GFP_ATOMIC);
+ 
+ 	if (!mpc20)
+ 		return NULL;
+@@ -1225,7 +1225,7 @@ struct hubbub *dcn20_hubbub_create(struct dc_context *ctx)
+ {
+ 	int i;
+ 	struct dcn20_hubbub *hubbub = kzalloc(sizeof(struct dcn20_hubbub),
+-					  GFP_KERNEL);
++					  GFP_ATOMIC);
+ 
+ 	if (!hubbub)
+ 		return NULL;
+@@ -1253,7 +1253,7 @@ struct timing_generator *dcn20_timing_generator_create(
+ 		uint32_t instance)
+ {
+ 	struct optc *tgn10 =
+-		kzalloc(sizeof(struct optc), GFP_KERNEL);
++		kzalloc(sizeof(struct optc), GFP_ATOMIC);
+ 
+ 	if (!tgn10)
+ 		return NULL;
+@@ -1332,7 +1332,7 @@ static struct clock_source *dcn20_clock_source_create(
+ 	bool dp_clk_src)
+ {
+ 	struct dce110_clk_src *clk_src =
+-		kzalloc(sizeof(struct dce110_clk_src), GFP_KERNEL);
++		kzalloc(sizeof(struct dce110_clk_src), GFP_ATOMIC);
+ 
+ 	if (!clk_src)
+ 		return NULL;
+@@ -1438,7 +1438,7 @@ struct display_stream_compressor *dcn20_dsc_create(
+ 	struct dc_context *ctx, uint32_t inst)
+ {
+ 	struct dcn20_dsc *dsc =
+-		kzalloc(sizeof(struct dcn20_dsc), GFP_KERNEL);
++		kzalloc(sizeof(struct dcn20_dsc), GFP_ATOMIC);
+ 
+ 	if (!dsc) {
+ 		BREAK_TO_DEBUGGER();
+@@ -1572,7 +1572,7 @@ struct hubp *dcn20_hubp_create(
+ 	uint32_t inst)
+ {
+ 	struct dcn20_hubp *hubp2 =
+-		kzalloc(sizeof(struct dcn20_hubp), GFP_KERNEL);
++		kzalloc(sizeof(struct dcn20_hubp), GFP_ATOMIC);
+ 
+ 	if (!hubp2)
+ 		return NULL;
+@@ -3391,7 +3391,7 @@ bool dcn20_mmhubbub_create(struct dc_context *ctx, struct resource_pool *pool)
+ 
+ static struct pp_smu_funcs *dcn20_pp_smu_create(struct dc_context *ctx)
+ {
+-	struct pp_smu_funcs *pp_smu = kzalloc(sizeof(*pp_smu), GFP_KERNEL);
++	struct pp_smu_funcs *pp_smu = kzalloc(sizeof(*pp_smu), GFP_ATOMIC);
+ 
+ 	if (!pp_smu)
+ 		return pp_smu;
+@@ -4142,7 +4142,7 @@ struct resource_pool *dcn20_create_resource_pool(
+ 		struct dc *dc)
+ {
+ 	struct dcn20_resource_pool *pool =
+-		kzalloc(sizeof(struct dcn20_resource_pool), GFP_KERNEL);
++		kzalloc(sizeof(struct dcn20_resource_pool), GFP_ATOMIC);
+ 
+ 	if (!pool)
+ 		return NULL;
+diff --git a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
+index 5e384a8a83dc2..51855a2624cf4 100644
+--- a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
++++ b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
+@@ -39,7 +39,7 @@
+ #define HDCP14_KSV_SIZE 5
+ #define HDCP14_MAX_KSV_FIFO_SIZE 127*HDCP14_KSV_SIZE
+ 
+-static const bool hdcp_cmd_is_read[] = {
++static const bool hdcp_cmd_is_read[HDCP_MESSAGE_ID_MAX] = {
+ 	[HDCP_MESSAGE_ID_READ_BKSV] = true,
+ 	[HDCP_MESSAGE_ID_READ_RI_R0] = true,
+ 	[HDCP_MESSAGE_ID_READ_PJ] = true,
+@@ -75,7 +75,7 @@ static const bool hdcp_cmd_is_read[] = {
+ 	[HDCP_MESSAGE_ID_WRITE_CONTENT_STREAM_TYPE] = false
+ };
+ 
+-static const uint8_t hdcp_i2c_offsets[] = {
++static const uint8_t hdcp_i2c_offsets[HDCP_MESSAGE_ID_MAX] = {
+ 	[HDCP_MESSAGE_ID_READ_BKSV] = 0x0,
+ 	[HDCP_MESSAGE_ID_READ_RI_R0] = 0x8,
+ 	[HDCP_MESSAGE_ID_READ_PJ] = 0xA,
+@@ -106,7 +106,8 @@ static const uint8_t hdcp_i2c_offsets[] = {
+ 	[HDCP_MESSAGE_ID_WRITE_REPEATER_AUTH_SEND_ACK] = 0x60,
+ 	[HDCP_MESSAGE_ID_WRITE_REPEATER_AUTH_STREAM_MANAGE] = 0x60,
+ 	[HDCP_MESSAGE_ID_READ_REPEATER_AUTH_STREAM_READY] = 0x80,
+-	[HDCP_MESSAGE_ID_READ_RXSTATUS] = 0x70
++	[HDCP_MESSAGE_ID_READ_RXSTATUS] = 0x70,
++	[HDCP_MESSAGE_ID_WRITE_CONTENT_STREAM_TYPE] = 0x0,
+ };
+ 
+ struct protection_properties {
+@@ -184,7 +185,7 @@ static const struct protection_properties hdmi_14_protection = {
+ 	.process_transaction = hdmi_14_process_transaction
+ };
+ 
+-static const uint32_t hdcp_dpcd_addrs[] = {
++static const uint32_t hdcp_dpcd_addrs[HDCP_MESSAGE_ID_MAX] = {
+ 	[HDCP_MESSAGE_ID_READ_BKSV] = 0x68000,
+ 	[HDCP_MESSAGE_ID_READ_RI_R0] = 0x68005,
+ 	[HDCP_MESSAGE_ID_READ_PJ] = 0xFFFFFFFF,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 5cc45b1cff7e7..e5893218fa4bb 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -2001,6 +2001,7 @@ int smu_set_power_limit(struct smu_context *smu, uint32_t limit)
+ 		dev_err(smu->adev->dev,
+ 			"New power limit (%d) is over the max allowed %d\n",
+ 			limit, smu->max_power_limit);
++		ret = -EINVAL;
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig
+index ef91646441b16..e145cbb35baca 100644
+--- a/drivers/gpu/drm/bridge/Kconfig
++++ b/drivers/gpu/drm/bridge/Kconfig
+@@ -54,6 +54,7 @@ config DRM_LONTIUM_LT9611
+ 	depends on OF
+ 	select DRM_PANEL_BRIDGE
+ 	select DRM_KMS_HELPER
++	select DRM_MIPI_DSI
+ 	select REGMAP_I2C
+ 	help
+ 	  Driver for Lontium LT9611 DSI to HDMI bridge
+@@ -138,6 +139,7 @@ config DRM_SII902X
+ 	tristate "Silicon Image sii902x RGB/HDMI bridge"
+ 	depends on OF
+ 	select DRM_KMS_HELPER
++	select DRM_MIPI_DSI
+ 	select REGMAP_I2C
+ 	select I2C_MUX
+ 	select SND_SOC_HDMI_CODEC if SND_SOC
+@@ -187,6 +189,7 @@ config DRM_TOSHIBA_TC358767
+ 	tristate "Toshiba TC358767 eDP bridge"
+ 	depends on OF
+ 	select DRM_KMS_HELPER
++	select DRM_MIPI_DSI
+ 	select REGMAP_I2C
+ 	select DRM_PANEL
+ 	help
+diff --git a/drivers/gpu/drm/bridge/panel.c b/drivers/gpu/drm/bridge/panel.c
+index 0ddc37551194e..c916f4b8907ef 100644
+--- a/drivers/gpu/drm/bridge/panel.c
++++ b/drivers/gpu/drm/bridge/panel.c
+@@ -87,6 +87,18 @@ static int panel_bridge_attach(struct drm_bridge *bridge,
+ 
+ static void panel_bridge_detach(struct drm_bridge *bridge)
+ {
++	struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge);
++	struct drm_connector *connector = &panel_bridge->connector;
++
++	/*
++	 * Cleanup the connector if we know it was initialized.
++	 *
++	 * FIXME: This wouldn't be needed if the panel_bridge structure was
++	 * allocated with drmm_kzalloc(). This might be tricky since the
++	 * drm_device pointer can only be retrieved when the bridge is attached.
++	 */
++	if (connector->dev)
++		drm_connector_cleanup(connector);
+ }
+ 
+ static void panel_bridge_pre_enable(struct drm_bridge *bridge)
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 9cf35dab25273..a08cc6b53bc2f 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -1154,6 +1154,7 @@ static void build_clear_payload_id_table(struct drm_dp_sideband_msg_tx *msg)
+ 
+ 	req.req_type = DP_CLEAR_PAYLOAD_ID_TABLE;
+ 	drm_dp_encode_sideband_req(&req, msg);
++	msg->path_msg = true;
+ }
+ 
+ static int build_enum_path_resources(struct drm_dp_sideband_msg_tx *msg,
+@@ -2824,15 +2825,21 @@ static int set_hdr_from_dst_qlock(struct drm_dp_sideband_msg_hdr *hdr,
+ 
+ 	req_type = txmsg->msg[0] & 0x7f;
+ 	if (req_type == DP_CONNECTION_STATUS_NOTIFY ||
+-		req_type == DP_RESOURCE_STATUS_NOTIFY)
++		req_type == DP_RESOURCE_STATUS_NOTIFY ||
++		req_type == DP_CLEAR_PAYLOAD_ID_TABLE)
+ 		hdr->broadcast = 1;
+ 	else
+ 		hdr->broadcast = 0;
+ 	hdr->path_msg = txmsg->path_msg;
+-	hdr->lct = mstb->lct;
+-	hdr->lcr = mstb->lct - 1;
+-	if (mstb->lct > 1)
+-		memcpy(hdr->rad, mstb->rad, mstb->lct / 2);
++	if (hdr->broadcast) {
++		hdr->lct = 1;
++		hdr->lcr = 6;
++	} else {
++		hdr->lct = mstb->lct;
++		hdr->lcr = mstb->lct - 1;
++	}
++
++	memcpy(hdr->rad, mstb->rad, hdr->lct / 2);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
+index d6017726cc2a0..e5432dcf69996 100644
+--- a/drivers/gpu/drm/drm_probe_helper.c
++++ b/drivers/gpu/drm/drm_probe_helper.c
+@@ -623,6 +623,7 @@ static void output_poll_execute(struct work_struct *work)
+ 	struct drm_connector_list_iter conn_iter;
+ 	enum drm_connector_status old_status;
+ 	bool repoll = false, changed;
++	u64 old_epoch_counter;
+ 
+ 	if (!dev->mode_config.poll_enabled)
+ 		return;
+@@ -659,8 +660,9 @@ static void output_poll_execute(struct work_struct *work)
+ 
+ 		repoll = true;
+ 
++		old_epoch_counter = connector->epoch_counter;
+ 		connector->status = drm_helper_probe_detect(connector, NULL, false);
+-		if (old_status != connector->status) {
++		if (old_epoch_counter != connector->epoch_counter) {
+ 			const char *old, *new;
+ 
+ 			/*
+@@ -689,6 +691,9 @@ static void output_poll_execute(struct work_struct *work)
+ 				      connector->base.id,
+ 				      connector->name,
+ 				      old, new);
++			DRM_DEBUG_KMS("[CONNECTOR:%d:%s] epoch counter %llu -> %llu\n",
++				      connector->base.id, connector->name,
++				      old_epoch_counter, connector->epoch_counter);
+ 
+ 			changed = true;
+ 		}
+diff --git a/drivers/gpu/drm/i915/gvt/display.c b/drivers/gpu/drm/i915/gvt/display.c
+index d7898e87791fe..234650230701d 100644
+--- a/drivers/gpu/drm/i915/gvt/display.c
++++ b/drivers/gpu/drm/i915/gvt/display.c
+@@ -173,21 +173,176 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu)
+ 	int pipe;
+ 
+ 	if (IS_BROXTON(dev_priv)) {
++		enum transcoder trans;
++		enum port port;
++
++		/* Clear PIPE, DDI, PHY, HPD before setting new */
+ 		vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= ~(BXT_DE_PORT_HP_DDIA |
+ 			BXT_DE_PORT_HP_DDIB |
+ 			BXT_DE_PORT_HP_DDIC);
+ 
++		for_each_pipe(dev_priv, pipe) {
++			vgpu_vreg_t(vgpu, PIPECONF(pipe)) &=
++				~(PIPECONF_ENABLE | I965_PIPECONF_ACTIVE);
++			vgpu_vreg_t(vgpu, DSPCNTR(pipe)) &= ~DISPLAY_PLANE_ENABLE;
++			vgpu_vreg_t(vgpu, SPRCTL(pipe)) &= ~SPRITE_ENABLE;
++			vgpu_vreg_t(vgpu, CURCNTR(pipe)) &= ~MCURSOR_MODE;
++			vgpu_vreg_t(vgpu, CURCNTR(pipe)) |= MCURSOR_MODE_DISABLE;
++		}
++
++		for (trans = TRANSCODER_A; trans <= TRANSCODER_EDP; trans++) {
++			vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(trans)) &=
++				~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK |
++				  TRANS_DDI_PORT_MASK | TRANS_DDI_FUNC_ENABLE);
++		}
++		vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) &=
++			~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK |
++			  TRANS_DDI_PORT_MASK);
++
++		for (port = PORT_A; port <= PORT_C; port++) {
++			vgpu_vreg_t(vgpu, BXT_PHY_CTL(port)) &=
++				~BXT_PHY_LANE_ENABLED;
++			vgpu_vreg_t(vgpu, BXT_PHY_CTL(port)) |=
++				(BXT_PHY_CMNLANE_POWERDOWN_ACK |
++				 BXT_PHY_LANE_POWERDOWN_ACK);
++
++			vgpu_vreg_t(vgpu, BXT_PORT_PLL_ENABLE(port)) &=
++				~(PORT_PLL_POWER_STATE | PORT_PLL_POWER_ENABLE |
++				  PORT_PLL_REF_SEL | PORT_PLL_LOCK |
++				  PORT_PLL_ENABLE);
++
++			vgpu_vreg_t(vgpu, DDI_BUF_CTL(port)) &=
++				~(DDI_INIT_DISPLAY_DETECTED |
++				  DDI_BUF_CTL_ENABLE);
++			vgpu_vreg_t(vgpu, DDI_BUF_CTL(port)) |= DDI_BUF_IS_IDLE;
++		}
++		vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
++			~(PORTA_HOTPLUG_ENABLE | PORTA_HOTPLUG_STATUS_MASK);
++		vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
++			~(PORTB_HOTPLUG_ENABLE | PORTB_HOTPLUG_STATUS_MASK);
++		vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
++			~(PORTC_HOTPLUG_ENABLE | PORTC_HOTPLUG_STATUS_MASK);
++		/* No hpd_invert set in vgpu vbt, need to clear invert mask */
++		vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= ~BXT_DDI_HPD_INVERT_MASK;
++		vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= ~BXT_DE_PORT_HOTPLUG_MASK;
++
++		vgpu_vreg_t(vgpu, BXT_P_CR_GT_DISP_PWRON) &= ~(BIT(0) | BIT(1));
++		vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) &=
++			~PHY_POWER_GOOD;
++		vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY1)) &=
++			~PHY_POWER_GOOD;
++		vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY0)) &= ~BIT(30);
++		vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY1)) &= ~BIT(30);
++
++		vgpu_vreg_t(vgpu, SFUSE_STRAP) &= ~SFUSE_STRAP_DDIB_DETECTED;
++		vgpu_vreg_t(vgpu, SFUSE_STRAP) &= ~SFUSE_STRAP_DDIC_DETECTED;
++
++		/*
++		 * Only 1 PIPE enabled in current vGPU display and PIPE_A is
++		 *  tied to TRANSCODER_A in HW, so it's safe to assume PIPE_A,
++		 *   TRANSCODER_A can be enabled. PORT_x depends on the input of
++		 *   setup_virtual_dp_monitor.
++		 */
++		vgpu_vreg_t(vgpu, PIPECONF(PIPE_A)) |= PIPECONF_ENABLE;
++		vgpu_vreg_t(vgpu, PIPECONF(PIPE_A)) |= I965_PIPECONF_ACTIVE;
++
++		/*
++		 * Golden M/N are calculated based on:
++		 *   24 bpp, 4 lanes, 154000 pixel clk (from virtual EDID),
++		 *   DP link clk 1620 MHz and non-constant_n.
++		 * TODO: calculate DP link symbol clk and stream clk m/n.
++		 */
++		vgpu_vreg_t(vgpu, PIPE_DATA_M1(TRANSCODER_A)) = 63 << TU_SIZE_SHIFT;
++		vgpu_vreg_t(vgpu, PIPE_DATA_M1(TRANSCODER_A)) |= 0x5b425e;
++		vgpu_vreg_t(vgpu, PIPE_DATA_N1(TRANSCODER_A)) = 0x800000;
++		vgpu_vreg_t(vgpu, PIPE_LINK_M1(TRANSCODER_A)) = 0x3cd6e;
++		vgpu_vreg_t(vgpu, PIPE_LINK_N1(TRANSCODER_A)) = 0x80000;
++
++		/* Enable per-DDI/PORT vreg */
+ 		if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) {
++			vgpu_vreg_t(vgpu, BXT_P_CR_GT_DISP_PWRON) |= BIT(1);
++			vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY1)) |=
++				PHY_POWER_GOOD;
++			vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY1)) |=
++				BIT(30);
++			vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_A)) |=
++				BXT_PHY_LANE_ENABLED;
++			vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_A)) &=
++				~(BXT_PHY_CMNLANE_POWERDOWN_ACK |
++				  BXT_PHY_LANE_POWERDOWN_ACK);
++			vgpu_vreg_t(vgpu, BXT_PORT_PLL_ENABLE(PORT_A)) |=
++				(PORT_PLL_POWER_STATE | PORT_PLL_POWER_ENABLE |
++				 PORT_PLL_REF_SEL | PORT_PLL_LOCK |
++				 PORT_PLL_ENABLE);
++			vgpu_vreg_t(vgpu, DDI_BUF_CTL(PORT_A)) |=
++				(DDI_BUF_CTL_ENABLE | DDI_INIT_DISPLAY_DETECTED);
++			vgpu_vreg_t(vgpu, DDI_BUF_CTL(PORT_A)) &=
++				~DDI_BUF_IS_IDLE;
++			vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_EDP)) |=
++				(TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST |
++				 TRANS_DDI_FUNC_ENABLE);
++			vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
++				PORTA_HOTPLUG_ENABLE;
+ 			vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
+ 				BXT_DE_PORT_HP_DDIA;
+ 		}
+ 
+ 		if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) {
++			vgpu_vreg_t(vgpu, SFUSE_STRAP) |= SFUSE_STRAP_DDIB_DETECTED;
++			vgpu_vreg_t(vgpu, BXT_P_CR_GT_DISP_PWRON) |= BIT(0);
++			vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) |=
++				PHY_POWER_GOOD;
++			vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY0)) |=
++				BIT(30);
++			vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_B)) |=
++				BXT_PHY_LANE_ENABLED;
++			vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_B)) &=
++				~(BXT_PHY_CMNLANE_POWERDOWN_ACK |
++				  BXT_PHY_LANE_POWERDOWN_ACK);
++			vgpu_vreg_t(vgpu, BXT_PORT_PLL_ENABLE(PORT_B)) |=
++				(PORT_PLL_POWER_STATE | PORT_PLL_POWER_ENABLE |
++				 PORT_PLL_REF_SEL | PORT_PLL_LOCK |
++				 PORT_PLL_ENABLE);
++			vgpu_vreg_t(vgpu, DDI_BUF_CTL(PORT_B)) |=
++				DDI_BUF_CTL_ENABLE;
++			vgpu_vreg_t(vgpu, DDI_BUF_CTL(PORT_B)) &=
++				~DDI_BUF_IS_IDLE;
++			vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) |=
++				(TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST |
++				 (PORT_B << TRANS_DDI_PORT_SHIFT) |
++				 TRANS_DDI_FUNC_ENABLE);
++			vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
++				PORTB_HOTPLUG_ENABLE;
+ 			vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
+ 				BXT_DE_PORT_HP_DDIB;
+ 		}
+ 
+ 		if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) {
++			vgpu_vreg_t(vgpu, SFUSE_STRAP) |= SFUSE_STRAP_DDIC_DETECTED;
++			vgpu_vreg_t(vgpu, BXT_P_CR_GT_DISP_PWRON) |= BIT(0);
++			vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) |=
++				PHY_POWER_GOOD;
++			vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY0)) |=
++				BIT(30);
++			vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_C)) |=
++				BXT_PHY_LANE_ENABLED;
++			vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_C)) &=
++				~(BXT_PHY_CMNLANE_POWERDOWN_ACK |
++				  BXT_PHY_LANE_POWERDOWN_ACK);
++			vgpu_vreg_t(vgpu, BXT_PORT_PLL_ENABLE(PORT_C)) |=
++				(PORT_PLL_POWER_STATE | PORT_PLL_POWER_ENABLE |
++				 PORT_PLL_REF_SEL | PORT_PLL_LOCK |
++				 PORT_PLL_ENABLE);
++			vgpu_vreg_t(vgpu, DDI_BUF_CTL(PORT_C)) |=
++				DDI_BUF_CTL_ENABLE;
++			vgpu_vreg_t(vgpu, DDI_BUF_CTL(PORT_C)) &=
++				~DDI_BUF_IS_IDLE;
++			vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) |=
++				(TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST |
++				 (PORT_B << TRANS_DDI_PORT_SHIFT) |
++				 TRANS_DDI_FUNC_ENABLE);
++			vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
++				PORTC_HOTPLUG_ENABLE;
+ 			vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
+ 				BXT_DE_PORT_HP_DDIC;
+ 		}
+@@ -519,6 +674,63 @@ void intel_vgpu_emulate_hotplug(struct intel_vgpu *vgpu, bool connected)
+ 		vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ 				PORTD_HOTPLUG_STATUS_MASK;
+ 		intel_vgpu_trigger_virtual_event(vgpu, DP_D_HOTPLUG);
++	} else if (IS_BROXTON(i915)) {
++		if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) {
++			if (connected) {
++				vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
++					BXT_DE_PORT_HP_DDIA;
++			} else {
++				vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &=
++					~BXT_DE_PORT_HP_DDIA;
++			}
++			vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |=
++				BXT_DE_PORT_HP_DDIA;
++			vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
++				~PORTA_HOTPLUG_STATUS_MASK;
++			vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
++				PORTA_HOTPLUG_LONG_DETECT;
++			intel_vgpu_trigger_virtual_event(vgpu, DP_A_HOTPLUG);
++		}
++		if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) {
++			if (connected) {
++				vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
++					BXT_DE_PORT_HP_DDIB;
++				vgpu_vreg_t(vgpu, SFUSE_STRAP) |=
++					SFUSE_STRAP_DDIB_DETECTED;
++			} else {
++				vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &=
++					~BXT_DE_PORT_HP_DDIB;
++				vgpu_vreg_t(vgpu, SFUSE_STRAP) &=
++					~SFUSE_STRAP_DDIB_DETECTED;
++			}
++			vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |=
++				BXT_DE_PORT_HP_DDIB;
++			vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
++				~PORTB_HOTPLUG_STATUS_MASK;
++			vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
++				PORTB_HOTPLUG_LONG_DETECT;
++			intel_vgpu_trigger_virtual_event(vgpu, DP_B_HOTPLUG);
++		}
++		if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) {
++			if (connected) {
++				vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
++					BXT_DE_PORT_HP_DDIC;
++				vgpu_vreg_t(vgpu, SFUSE_STRAP) |=
++					SFUSE_STRAP_DDIC_DETECTED;
++			} else {
++				vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &=
++					~BXT_DE_PORT_HP_DDIC;
++				vgpu_vreg_t(vgpu, SFUSE_STRAP) &=
++					~SFUSE_STRAP_DDIC_DETECTED;
++			}
++			vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |=
++				BXT_DE_PORT_HP_DDIC;
++			vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
++				~PORTC_HOTPLUG_STATUS_MASK;
++			vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
++				PORTC_HOTPLUG_LONG_DETECT;
++			intel_vgpu_trigger_virtual_event(vgpu, DP_C_HOTPLUG);
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/gvt/gvt.c b/drivers/gpu/drm/i915/gvt/gvt.c
+index c7c5612378832..5c9ef8e58a087 100644
+--- a/drivers/gpu/drm/i915/gvt/gvt.c
++++ b/drivers/gpu/drm/i915/gvt/gvt.c
+@@ -126,7 +126,7 @@ static bool intel_get_gvt_attrs(struct attribute_group ***intel_vgpu_type_groups
+ 	return true;
+ }
+ 
+-static bool intel_gvt_init_vgpu_type_groups(struct intel_gvt *gvt)
++static int intel_gvt_init_vgpu_type_groups(struct intel_gvt *gvt)
+ {
+ 	int i, j;
+ 	struct intel_vgpu_type *type;
+@@ -144,7 +144,7 @@ static bool intel_gvt_init_vgpu_type_groups(struct intel_gvt *gvt)
+ 		gvt_vgpu_type_groups[i] = group;
+ 	}
+ 
+-	return true;
++	return 0;
+ 
+ unwind:
+ 	for (j = 0; j < i; j++) {
+@@ -152,7 +152,7 @@ unwind:
+ 		kfree(group);
+ 	}
+ 
+-	return false;
++	return -ENOMEM;
+ }
+ 
+ static void intel_gvt_cleanup_vgpu_type_groups(struct intel_gvt *gvt)
+@@ -360,7 +360,7 @@ int intel_gvt_init_device(struct drm_i915_private *i915)
+ 		goto out_clean_thread;
+ 
+ 	ret = intel_gvt_init_vgpu_type_groups(gvt);
+-	if (ret == false) {
++	if (ret) {
+ 		gvt_err("failed to init vgpu type groups: %d\n", ret);
+ 		goto out_clean_types;
+ 	}
+diff --git a/drivers/gpu/drm/i915/gvt/mmio.c b/drivers/gpu/drm/i915/gvt/mmio.c
+index b6811f6a230df..24210b1eaec58 100644
+--- a/drivers/gpu/drm/i915/gvt/mmio.c
++++ b/drivers/gpu/drm/i915/gvt/mmio.c
+@@ -280,6 +280,11 @@ void intel_vgpu_reset_mmio(struct intel_vgpu *vgpu, bool dmlr)
+ 			vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_C)) |=
+ 				    BXT_PHY_CMNLANE_POWERDOWN_ACK |
+ 				    BXT_PHY_LANE_POWERDOWN_ACK;
++			vgpu_vreg_t(vgpu, SKL_FUSE_STATUS) |=
++				SKL_FUSE_DOWNLOAD_STATUS |
++				SKL_FUSE_PG_DIST_STATUS(SKL_PG0) |
++				SKL_FUSE_PG_DIST_STATUS(SKL_PG1) |
++				SKL_FUSE_PG_DIST_STATUS(SKL_PG2);
+ 		}
+ 	} else {
+ #define GVT_GEN8_MMIO_RESET_OFFSET		(0x44200)
+diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
+index 399582aeeefb9..821b6c3ff88b2 100644
+--- a/drivers/gpu/drm/i915/gvt/vgpu.c
++++ b/drivers/gpu/drm/i915/gvt/vgpu.c
+@@ -437,10 +437,9 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
+ 	if (ret)
+ 		goto out_clean_sched_policy;
+ 
+-	if (IS_BROADWELL(dev_priv))
++	if (IS_BROADWELL(dev_priv) || IS_BROXTON(dev_priv))
+ 		ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_B);
+-	/* FixMe: Re-enable APL/BXT once vfio_edid enabled */
+-	else if (!IS_BROXTON(dev_priv))
++	else
+ 		ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D);
+ 	if (ret)
+ 		goto out_clean_sched_policy;
+diff --git a/drivers/gpu/drm/mcde/mcde_dsi.c b/drivers/gpu/drm/mcde/mcde_dsi.c
+index 2314c81229920..b3fd3501c4127 100644
+--- a/drivers/gpu/drm/mcde/mcde_dsi.c
++++ b/drivers/gpu/drm/mcde/mcde_dsi.c
+@@ -760,7 +760,7 @@ static void mcde_dsi_start(struct mcde_dsi *d)
+ 		DSI_MCTL_MAIN_DATA_CTL_BTA_EN |
+ 		DSI_MCTL_MAIN_DATA_CTL_READ_EN |
+ 		DSI_MCTL_MAIN_DATA_CTL_REG_TE_EN;
+-	if (d->mdsi->mode_flags & MIPI_DSI_MODE_EOT_PACKET)
++	if (!(d->mdsi->mode_flags & MIPI_DSI_MODE_EOT_PACKET))
+ 		val |= DSI_MCTL_MAIN_DATA_CTL_HOST_EOT_GEN;
+ 	writel(val, d->regs + DSI_MCTL_MAIN_DATA_CTL);
+ 
+diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35510.c b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+index b9a0e56f33e24..ef70140c5b09d 100644
+--- a/drivers/gpu/drm/panel/panel-novatek-nt35510.c
++++ b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+@@ -898,8 +898,7 @@ static int nt35510_probe(struct mipi_dsi_device *dsi)
+ 	 */
+ 	dsi->hs_rate = 349440000;
+ 	dsi->lp_rate = 9600000;
+-	dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS |
+-		MIPI_DSI_MODE_EOT_PACKET;
++	dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;
+ 
+ 	/*
+ 	 * Every new incarnation of this display must have a unique
+diff --git a/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c b/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
+index 4aac0d1573dd0..70560cac53a99 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
++++ b/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
+@@ -184,9 +184,7 @@ static int s6d16d0_probe(struct mipi_dsi_device *dsi)
+ 	 * As we only send commands we do not need to be continuously
+ 	 * clocked.
+ 	 */
+-	dsi->mode_flags =
+-		MIPI_DSI_CLOCK_NON_CONTINUOUS |
+-		MIPI_DSI_MODE_EOT_PACKET;
++	dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;
+ 
+ 	s6->supply = devm_regulator_get(dev, "vdd1");
+ 	if (IS_ERR(s6->supply))
+diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c b/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
+index eec74c10dddaf..9c3563c61e8cc 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
++++ b/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
+@@ -97,7 +97,6 @@ static int s6e63m0_dsi_probe(struct mipi_dsi_device *dsi)
+ 	dsi->hs_rate = 349440000;
+ 	dsi->lp_rate = 9600000;
+ 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
+-		MIPI_DSI_MODE_EOT_PACKET |
+ 		MIPI_DSI_MODE_VIDEO_BURST;
+ 
+ 	ret = s6e63m0_probe(dev, s6e63m0_dsi_dcs_read, s6e63m0_dsi_dcs_write,
+diff --git a/drivers/gpu/drm/panel/panel-sony-acx424akp.c b/drivers/gpu/drm/panel/panel-sony-acx424akp.c
+index 065efae213f5b..95659a4d15e97 100644
+--- a/drivers/gpu/drm/panel/panel-sony-acx424akp.c
++++ b/drivers/gpu/drm/panel/panel-sony-acx424akp.c
+@@ -449,8 +449,7 @@ static int acx424akp_probe(struct mipi_dsi_device *dsi)
+ 			MIPI_DSI_MODE_VIDEO_BURST;
+ 	else
+ 		dsi->mode_flags =
+-			MIPI_DSI_CLOCK_NON_CONTINUOUS |
+-			MIPI_DSI_MODE_EOT_PACKET;
++			MIPI_DSI_CLOCK_NON_CONTINUOUS;
+ 
+ 	acx->supply = devm_regulator_get(dev, "vddi");
+ 	if (IS_ERR(acx->supply))
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index be8d68fb0e11e..1986862163178 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -495,8 +495,14 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
+ 		}
+ 		bo->base.pages = pages;
+ 		bo->base.pages_use_count = 1;
+-	} else
++	} else {
+ 		pages = bo->base.pages;
++		if (pages[page_offset]) {
++			/* Pages are already mapped, bail out. */
++			mutex_unlock(&bo->base.pages_lock);
++			goto out;
++		}
++	}
+ 
+ 	mapping = bo->base.base.filp->f_mapping;
+ 	mapping_set_unevictable(mapping);
+@@ -529,6 +535,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
+ 
+ 	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
+ 
++out:
+ 	panfrost_gem_mapping_put(bomapping);
+ 
+ 	return 0;
+@@ -600,6 +607,8 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
+ 		access_type = (fault_status >> 8) & 0x3;
+ 		source_id = (fault_status >> 16);
+ 
++		mmu_write(pfdev, MMU_INT_CLEAR, mask);
++
+ 		/* Page fault only */
+ 		ret = -1;
+ 		if ((status & mask) == BIT(i) && (exception_type & 0xF8) == 0xC0)
+@@ -623,8 +632,6 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
+ 				access_type, access_type_name(pfdev, fault_status),
+ 				source_id);
+ 
+-		mmu_write(pfdev, MMU_INT_CLEAR, mask);
+-
+ 		status &= ~mask;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
+index 54e3c3a974407..741cc983daf1c 100644
+--- a/drivers/gpu/drm/qxl/qxl_cmd.c
++++ b/drivers/gpu/drm/qxl/qxl_cmd.c
+@@ -268,7 +268,7 @@ int qxl_alloc_bo_reserved(struct qxl_device *qdev,
+ 	int ret;
+ 
+ 	ret = qxl_bo_create(qdev, size, false /* not kernel - device */,
+-			    false, QXL_GEM_DOMAIN_VRAM, NULL, &bo);
++			    false, QXL_GEM_DOMAIN_VRAM, 0, NULL, &bo);
+ 	if (ret) {
+ 		DRM_ERROR("failed to allocate VRAM BO\n");
+ 		return ret;
+diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
+index 1f0802f5d84ef..f22a1b776f4ba 100644
+--- a/drivers/gpu/drm/qxl/qxl_display.c
++++ b/drivers/gpu/drm/qxl/qxl_display.c
+@@ -791,8 +791,8 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
+ 				qdev->dumb_shadow_bo = NULL;
+ 			}
+ 			qxl_bo_create(qdev, surf.height * surf.stride,
+-				      true, true, QXL_GEM_DOMAIN_SURFACE, &surf,
+-				      &qdev->dumb_shadow_bo);
++				      true, true, QXL_GEM_DOMAIN_SURFACE, 0,
++				      &surf, &qdev->dumb_shadow_bo);
+ 		}
+ 		if (user_bo->shadow != qdev->dumb_shadow_bo) {
+ 			if (user_bo->shadow) {
+diff --git a/drivers/gpu/drm/qxl/qxl_drv.c b/drivers/gpu/drm/qxl/qxl_drv.c
+index 41cdf9d1e59dc..6e7f16f4cec79 100644
+--- a/drivers/gpu/drm/qxl/qxl_drv.c
++++ b/drivers/gpu/drm/qxl/qxl_drv.c
+@@ -144,8 +144,6 @@ static void qxl_drm_release(struct drm_device *dev)
+ 	 * reodering qxl_modeset_fini() + qxl_device_fini() calls is
+ 	 * non-trivial though.
+ 	 */
+-	if (!dev->registered)
+-		return;
+ 	qxl_modeset_fini(qdev);
+ 	qxl_device_fini(qdev);
+ }
+diff --git a/drivers/gpu/drm/qxl/qxl_gem.c b/drivers/gpu/drm/qxl/qxl_gem.c
+index 48e096285b4c6..a08da0bd9098b 100644
+--- a/drivers/gpu/drm/qxl/qxl_gem.c
++++ b/drivers/gpu/drm/qxl/qxl_gem.c
+@@ -55,7 +55,7 @@ int qxl_gem_object_create(struct qxl_device *qdev, int size,
+ 	/* At least align on page size */
+ 	if (alignment < PAGE_SIZE)
+ 		alignment = PAGE_SIZE;
+-	r = qxl_bo_create(qdev, size, kernel, false, initial_domain, surf, &qbo);
++	r = qxl_bo_create(qdev, size, kernel, false, initial_domain, 0, surf, &qbo);
+ 	if (r) {
+ 		if (r != -ERESTARTSYS)
+ 			DRM_ERROR(
+diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
+index 2bc364412e8b8..544a9e4df2a86 100644
+--- a/drivers/gpu/drm/qxl/qxl_object.c
++++ b/drivers/gpu/drm/qxl/qxl_object.c
+@@ -103,8 +103,8 @@ static const struct drm_gem_object_funcs qxl_object_funcs = {
+ 	.print_info = drm_gem_ttm_print_info,
+ };
+ 
+-int qxl_bo_create(struct qxl_device *qdev,
+-		  unsigned long size, bool kernel, bool pinned, u32 domain,
++int qxl_bo_create(struct qxl_device *qdev, unsigned long size,
++		  bool kernel, bool pinned, u32 domain, u32 priority,
+ 		  struct qxl_surface *surf,
+ 		  struct qxl_bo **bo_ptr)
+ {
+@@ -137,6 +137,7 @@ int qxl_bo_create(struct qxl_device *qdev,
+ 
+ 	qxl_ttm_placement_from_domain(bo, domain, pinned);
+ 
++	bo->tbo.priority = priority;
+ 	r = ttm_bo_init(&qdev->mman.bdev, &bo->tbo, size, type,
+ 			&bo->placement, 0, !kernel, size,
+ 			NULL, NULL, &qxl_ttm_bo_destroy);
+diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
+index 6b434e5ef795a..5762ea40d047c 100644
+--- a/drivers/gpu/drm/qxl/qxl_object.h
++++ b/drivers/gpu/drm/qxl/qxl_object.h
+@@ -84,6 +84,7 @@ static inline int qxl_bo_wait(struct qxl_bo *bo, u32 *mem_type,
+ extern int qxl_bo_create(struct qxl_device *qdev,
+ 			 unsigned long size,
+ 			 bool kernel, bool pinned, u32 domain,
++			 u32 priority,
+ 			 struct qxl_surface *surf,
+ 			 struct qxl_bo **bo_ptr);
+ extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+diff --git a/drivers/gpu/drm/qxl/qxl_release.c b/drivers/gpu/drm/qxl/qxl_release.c
+index 4fae3e393da14..b2a475a0ca4aa 100644
+--- a/drivers/gpu/drm/qxl/qxl_release.c
++++ b/drivers/gpu/drm/qxl/qxl_release.c
+@@ -199,11 +199,12 @@ qxl_release_free(struct qxl_device *qdev,
+ }
+ 
+ static int qxl_release_bo_alloc(struct qxl_device *qdev,
+-				struct qxl_bo **bo)
++				struct qxl_bo **bo,
++				u32 priority)
+ {
+ 	/* pin releases bo's they are too messy to evict */
+ 	return qxl_bo_create(qdev, PAGE_SIZE, false, true,
+-			     QXL_GEM_DOMAIN_VRAM, NULL, bo);
++			     QXL_GEM_DOMAIN_VRAM, priority, NULL, bo);
+ }
+ 
+ int qxl_release_list_add(struct qxl_release *release, struct qxl_bo *bo)
+@@ -326,13 +327,18 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size,
+ 	int ret = 0;
+ 	union qxl_release_info *info;
+ 	int cur_idx;
++	u32 priority;
+ 
+-	if (type == QXL_RELEASE_DRAWABLE)
++	if (type == QXL_RELEASE_DRAWABLE) {
+ 		cur_idx = 0;
+-	else if (type == QXL_RELEASE_SURFACE_CMD)
++		priority = 0;
++	} else if (type == QXL_RELEASE_SURFACE_CMD) {
+ 		cur_idx = 1;
+-	else if (type == QXL_RELEASE_CURSOR_CMD)
++		priority = 1;
++	} else if (type == QXL_RELEASE_CURSOR_CMD) {
+ 		cur_idx = 2;
++		priority = 1;
++	}
+ 	else {
+ 		DRM_ERROR("got illegal type: %d\n", type);
+ 		return -EINVAL;
+@@ -352,7 +358,7 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size,
+ 		qdev->current_release_bo[cur_idx] = NULL;
+ 	}
+ 	if (!qdev->current_release_bo[cur_idx]) {
+-		ret = qxl_release_bo_alloc(qdev, &qdev->current_release_bo[cur_idx]);
++		ret = qxl_release_bo_alloc(qdev, &qdev->current_release_bo[cur_idx], priority);
+ 		if (ret) {
+ 			mutex_unlock(&qdev->release_mutex);
+ 			qxl_release_free(qdev, *release);
+diff --git a/drivers/gpu/drm/radeon/radeon_dp_mst.c b/drivers/gpu/drm/radeon/radeon_dp_mst.c
+index 008308780443c..9bd6c06975385 100644
+--- a/drivers/gpu/drm/radeon/radeon_dp_mst.c
++++ b/drivers/gpu/drm/radeon/radeon_dp_mst.c
+@@ -242,6 +242,9 @@ radeon_dp_mst_detect(struct drm_connector *connector,
+ 		to_radeon_connector(connector);
+ 	struct radeon_connector *master = radeon_connector->mst_port;
+ 
++	if (drm_connector_is_unregistered(connector))
++		return connector_status_disconnected;
++
+ 	return drm_dp_mst_detect_port(connector, ctx, &master->mst_mgr,
+ 				      radeon_connector->port);
+ }
+diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
+index 99ee60f8b604d..8c0a572940e82 100644
+--- a/drivers/gpu/drm/radeon/radeon_kms.c
++++ b/drivers/gpu/drm/radeon/radeon_kms.c
+@@ -512,6 +512,7 @@ static int radeon_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 			*value = rdev->config.si.backend_enable_mask;
+ 		} else {
+ 			DRM_DEBUG_KMS("BACKEND_ENABLED_MASK is si+ only!\n");
++			return -EINVAL;
+ 		}
+ 		break;
+ 	case RADEON_INFO_MAX_SCLK:
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index 6e28f707092f0..62488ac149238 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -525,13 +525,42 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
+ {
+ 	struct ltdc_device *ldev = crtc_to_ltdc(crtc);
+ 	struct drm_device *ddev = crtc->dev;
++	struct drm_connector_list_iter iter;
++	struct drm_connector *connector = NULL;
++	struct drm_encoder *encoder = NULL;
++	struct drm_bridge *bridge = NULL;
+ 	struct drm_display_mode *mode = &crtc->state->adjusted_mode;
+ 	struct videomode vm;
+ 	u32 hsync, vsync, accum_hbp, accum_vbp, accum_act_w, accum_act_h;
+ 	u32 total_width, total_height;
++	u32 bus_flags = 0;
+ 	u32 val;
+ 	int ret;
+ 
++	/* get encoder from crtc */
++	drm_for_each_encoder(encoder, ddev)
++		if (encoder->crtc == crtc)
++			break;
++
++	if (encoder) {
++		/* get bridge from encoder */
++		list_for_each_entry(bridge, &encoder->bridge_chain, chain_node)
++			if (bridge->encoder == encoder)
++				break;
++
++		/* Get the connector from encoder */
++		drm_connector_list_iter_begin(ddev, &iter);
++		drm_for_each_connector_iter(connector, &iter)
++			if (connector->encoder == encoder)
++				break;
++		drm_connector_list_iter_end(&iter);
++	}
++
++	if (bridge && bridge->timings)
++		bus_flags = bridge->timings->input_bus_flags;
++	else if (connector)
++		bus_flags = connector->display_info.bus_flags;
++
+ 	if (!pm_runtime_active(ddev->dev)) {
+ 		ret = pm_runtime_get_sync(ddev->dev);
+ 		if (ret) {
+@@ -567,10 +596,10 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
+ 	if (vm.flags & DISPLAY_FLAGS_VSYNC_HIGH)
+ 		val |= GCR_VSPOL;
+ 
+-	if (vm.flags & DISPLAY_FLAGS_DE_LOW)
++	if (bus_flags & DRM_BUS_FLAG_DE_LOW)
+ 		val |= GCR_DEPOL;
+ 
+-	if (vm.flags & DISPLAY_FLAGS_PIXDATA_NEGEDGE)
++	if (bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
+ 		val |= GCR_PCPOL;
+ 
+ 	reg_update_bits(ldev->regs, LTDC_GCR,
+diff --git a/drivers/gpu/drm/tilcdc/tilcdc_crtc.c b/drivers/gpu/drm/tilcdc/tilcdc_crtc.c
+index 518220bd092a6..0aaa4a26b5db5 100644
+--- a/drivers/gpu/drm/tilcdc/tilcdc_crtc.c
++++ b/drivers/gpu/drm/tilcdc/tilcdc_crtc.c
+@@ -518,6 +518,15 @@ static void tilcdc_crtc_off(struct drm_crtc *crtc, bool shutdown)
+ 
+ 	drm_crtc_vblank_off(crtc);
+ 
++	spin_lock_irq(&crtc->dev->event_lock);
++
++	if (crtc->state->event) {
++		drm_crtc_send_vblank_event(crtc, crtc->state->event);
++		crtc->state->event = NULL;
++	}
++
++	spin_unlock_irq(&crtc->dev->event_lock);
++
+ 	tilcdc_crtc_disable_irqs(dev);
+ 
+ 	pm_runtime_put_sync(dev->dev);
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_dp.c b/drivers/gpu/drm/xlnx/zynqmp_dp.c
+index 99158ee67d02b..59d1fb017da01 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_dp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_dp.c
+@@ -866,7 +866,7 @@ static int zynqmp_dp_train(struct zynqmp_dp *dp)
+ 		return ret;
+ 
+ 	zynqmp_dp_write(dp, ZYNQMP_DP_SCRAMBLING_DISABLE, 1);
+-	memset(dp->train_set, 0, 4);
++	memset(dp->train_set, 0, sizeof(dp->train_set));
+ 	ret = zynqmp_dp_link_train_cr(dp);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index b93ce0d475e09..e220a05a05b48 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -938,6 +938,7 @@
+ #define USB_DEVICE_ID_ORTEK_IHOME_IMAC_A210S	0x8003
+ 
+ #define USB_VENDOR_ID_PLANTRONICS	0x047f
++#define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES	0xc056
+ 
+ #define USB_VENDOR_ID_PANASONIC		0x04da
+ #define USB_DEVICE_ID_PANABOARD_UBT780	0x1044
+diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c
+index c6c8e20f3e8d5..0ff03fed97709 100644
+--- a/drivers/hid/hid-lenovo.c
++++ b/drivers/hid/hid-lenovo.c
+@@ -33,6 +33,9 @@
+ 
+ #include "hid-ids.h"
+ 
++/* Userspace expects F20 for mic-mute KEY_MICMUTE does not work */
++#define LENOVO_KEY_MICMUTE KEY_F20
++
+ struct lenovo_drvdata {
+ 	u8 led_report[3]; /* Must be first for proper alignment */
+ 	int led_state;
+@@ -62,8 +65,8 @@ struct lenovo_drvdata {
+ #define TP10UBKBD_LED_OFF		1
+ #define TP10UBKBD_LED_ON		2
+ 
+-static void lenovo_led_set_tp10ubkbd(struct hid_device *hdev, u8 led_code,
+-				     enum led_brightness value)
++static int lenovo_led_set_tp10ubkbd(struct hid_device *hdev, u8 led_code,
++				    enum led_brightness value)
+ {
+ 	struct lenovo_drvdata *data = hid_get_drvdata(hdev);
+ 	int ret;
+@@ -75,10 +78,18 @@ static void lenovo_led_set_tp10ubkbd(struct hid_device *hdev, u8 led_code,
+ 	data->led_report[2] = value ? TP10UBKBD_LED_ON : TP10UBKBD_LED_OFF;
+ 	ret = hid_hw_raw_request(hdev, data->led_report[0], data->led_report, 3,
+ 				 HID_OUTPUT_REPORT, HID_REQ_SET_REPORT);
+-	if (ret)
+-		hid_err(hdev, "Set LED output report error: %d\n", ret);
++	if (ret != 3) {
++		if (ret != -ENODEV)
++			hid_err(hdev, "Set LED output report error: %d\n", ret);
++
++		ret = ret < 0 ? ret : -EIO;
++	} else {
++		ret = 0;
++	}
+ 
+ 	mutex_unlock(&data->led_report_mutex);
++
++	return ret;
+ }
+ 
+ static void lenovo_tp10ubkbd_sync_fn_lock(struct work_struct *work)
+@@ -126,7 +137,7 @@ static int lenovo_input_mapping_tpkbd(struct hid_device *hdev,
+ 	if (usage->hid == (HID_UP_BUTTON | 0x0010)) {
+ 		/* This sub-device contains trackpoint, mark it */
+ 		hid_set_drvdata(hdev, (void *)1);
+-		map_key_clear(KEY_MICMUTE);
++		map_key_clear(LENOVO_KEY_MICMUTE);
+ 		return 1;
+ 	}
+ 	return 0;
+@@ -141,7 +152,7 @@ static int lenovo_input_mapping_cptkbd(struct hid_device *hdev,
+ 	    (usage->hid & HID_USAGE_PAGE) == HID_UP_LNVENDOR) {
+ 		switch (usage->hid & HID_USAGE) {
+ 		case 0x00f1: /* Fn-F4: Mic mute */
+-			map_key_clear(KEY_MICMUTE);
++			map_key_clear(LENOVO_KEY_MICMUTE);
+ 			return 1;
+ 		case 0x00f2: /* Fn-F5: Brightness down */
+ 			map_key_clear(KEY_BRIGHTNESSDOWN);
+@@ -231,7 +242,7 @@ static int lenovo_input_mapping_tp10_ultrabook_kbd(struct hid_device *hdev,
+ 			map_key_clear(KEY_FN_ESC);
+ 			return 1;
+ 		case 9: /* Fn-F4: Mic mute */
+-			map_key_clear(KEY_MICMUTE);
++			map_key_clear(LENOVO_KEY_MICMUTE);
+ 			return 1;
+ 		case 10: /* Fn-F7: Control panel */
+ 			map_key_clear(KEY_CONFIG);
+@@ -349,7 +360,7 @@ static ssize_t attr_fn_lock_store(struct device *dev,
+ {
+ 	struct hid_device *hdev = to_hid_device(dev);
+ 	struct lenovo_drvdata *data = hid_get_drvdata(hdev);
+-	int value;
++	int value, ret;
+ 
+ 	if (kstrtoint(buf, 10, &value))
+ 		return -EINVAL;
+@@ -364,7 +375,9 @@ static ssize_t attr_fn_lock_store(struct device *dev,
+ 		lenovo_features_set_cptkbd(hdev);
+ 		break;
+ 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+-		lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value);
++		ret = lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value);
++		if (ret)
++			return ret;
+ 		break;
+ 	}
+ 
+@@ -498,6 +511,9 @@ static int lenovo_event_cptkbd(struct hid_device *hdev,
+ static int lenovo_event(struct hid_device *hdev, struct hid_field *field,
+ 		struct hid_usage *usage, __s32 value)
+ {
++	if (!hid_get_drvdata(hdev))
++		return 0;
++
+ 	switch (hdev->product) {
+ 	case USB_DEVICE_ID_LENOVO_CUSBKBD:
+ 	case USB_DEVICE_ID_LENOVO_CBTKBD:
+@@ -777,7 +793,7 @@ static enum led_brightness lenovo_led_brightness_get(
+ 				: LED_OFF;
+ }
+ 
+-static void lenovo_led_brightness_set(struct led_classdev *led_cdev,
++static int lenovo_led_brightness_set(struct led_classdev *led_cdev,
+ 			enum led_brightness value)
+ {
+ 	struct device *dev = led_cdev->dev->parent;
+@@ -785,6 +801,7 @@ static void lenovo_led_brightness_set(struct led_classdev *led_cdev,
+ 	struct lenovo_drvdata *data_pointer = hid_get_drvdata(hdev);
+ 	u8 tp10ubkbd_led[] = { TP10UBKBD_MUTE_LED, TP10UBKBD_MICMUTE_LED };
+ 	int led_nr = 0;
++	int ret = 0;
+ 
+ 	if (led_cdev == &data_pointer->led_micmute)
+ 		led_nr = 1;
+@@ -799,9 +816,11 @@ static void lenovo_led_brightness_set(struct led_classdev *led_cdev,
+ 		lenovo_led_set_tpkbd(hdev);
+ 		break;
+ 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+-		lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value);
++		ret = lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value);
+ 		break;
+ 	}
++
++	return ret;
+ }
+ 
+ static int lenovo_register_leds(struct hid_device *hdev)
+@@ -822,7 +841,8 @@ static int lenovo_register_leds(struct hid_device *hdev)
+ 
+ 	data->led_mute.name = name_mute;
+ 	data->led_mute.brightness_get = lenovo_led_brightness_get;
+-	data->led_mute.brightness_set = lenovo_led_brightness_set;
++	data->led_mute.brightness_set_blocking = lenovo_led_brightness_set;
++	data->led_mute.flags = LED_HW_PLUGGABLE;
+ 	data->led_mute.dev = &hdev->dev;
+ 	ret = led_classdev_register(&hdev->dev, &data->led_mute);
+ 	if (ret < 0)
+@@ -830,7 +850,8 @@ static int lenovo_register_leds(struct hid_device *hdev)
+ 
+ 	data->led_micmute.name = name_micm;
+ 	data->led_micmute.brightness_get = lenovo_led_brightness_get;
+-	data->led_micmute.brightness_set = lenovo_led_brightness_set;
++	data->led_micmute.brightness_set_blocking = lenovo_led_brightness_set;
++	data->led_micmute.flags = LED_HW_PLUGGABLE;
+ 	data->led_micmute.dev = &hdev->dev;
+ 	ret = led_classdev_register(&hdev->dev, &data->led_micmute);
+ 	if (ret < 0) {
+diff --git a/drivers/hid/hid-plantronics.c b/drivers/hid/hid-plantronics.c
+index 85b685efc12f3..e81b7cec2d124 100644
+--- a/drivers/hid/hid-plantronics.c
++++ b/drivers/hid/hid-plantronics.c
+@@ -13,6 +13,7 @@
+ 
+ #include <linux/hid.h>
+ #include <linux/module.h>
++#include <linux/jiffies.h>
+ 
+ #define PLT_HID_1_0_PAGE	0xffa00000
+ #define PLT_HID_2_0_PAGE	0xffa20000
+@@ -36,6 +37,16 @@
+ #define PLT_ALLOW_CONSUMER (field->application == HID_CP_CONSUMERCONTROL && \
+ 			    (usage->hid & HID_USAGE_PAGE) == HID_UP_CONSUMER)
+ 
++#define PLT_QUIRK_DOUBLE_VOLUME_KEYS BIT(0)
++
++#define PLT_DOUBLE_KEY_TIMEOUT 5 /* ms */
++
++struct plt_drv_data {
++	unsigned long device_type;
++	unsigned long last_volume_key_ts;
++	u32 quirks;
++};
++
+ static int plantronics_input_mapping(struct hid_device *hdev,
+ 				     struct hid_input *hi,
+ 				     struct hid_field *field,
+@@ -43,7 +54,8 @@ static int plantronics_input_mapping(struct hid_device *hdev,
+ 				     unsigned long **bit, int *max)
+ {
+ 	unsigned short mapped_key;
+-	unsigned long plt_type = (unsigned long)hid_get_drvdata(hdev);
++	struct plt_drv_data *drv_data = hid_get_drvdata(hdev);
++	unsigned long plt_type = drv_data->device_type;
+ 
+ 	/* special case for PTT products */
+ 	if (field->application == HID_GD_JOYSTICK)
+@@ -105,6 +117,30 @@ mapped:
+ 	return 1;
+ }
+ 
++static int plantronics_event(struct hid_device *hdev, struct hid_field *field,
++			     struct hid_usage *usage, __s32 value)
++{
++	struct plt_drv_data *drv_data = hid_get_drvdata(hdev);
++
++	if (drv_data->quirks & PLT_QUIRK_DOUBLE_VOLUME_KEYS) {
++		unsigned long prev_ts, cur_ts;
++
++		/* Usages are filtered in plantronics_usages. */
++
++		if (!value) /* Handle key presses only. */
++			return 0;
++
++		prev_ts = drv_data->last_volume_key_ts;
++		cur_ts = jiffies;
++		if (jiffies_to_msecs(cur_ts - prev_ts) <= PLT_DOUBLE_KEY_TIMEOUT)
++			return 1; /* Ignore the repeated key. */
++
++		drv_data->last_volume_key_ts = cur_ts;
++	}
++
++	return 0;
++}
++
+ static unsigned long plantronics_device_type(struct hid_device *hdev)
+ {
+ 	unsigned i, col_page;
+@@ -133,15 +169,24 @@ exit:
+ static int plantronics_probe(struct hid_device *hdev,
+ 			     const struct hid_device_id *id)
+ {
++	struct plt_drv_data *drv_data;
+ 	int ret;
+ 
++	drv_data = devm_kzalloc(&hdev->dev, sizeof(*drv_data), GFP_KERNEL);
++	if (!drv_data)
++		return -ENOMEM;
++
+ 	ret = hid_parse(hdev);
+ 	if (ret) {
+ 		hid_err(hdev, "parse failed\n");
+ 		goto err;
+ 	}
+ 
+-	hid_set_drvdata(hdev, (void *)plantronics_device_type(hdev));
++	drv_data->device_type = plantronics_device_type(hdev);
++	drv_data->quirks = id->driver_data;
++	drv_data->last_volume_key_ts = jiffies - msecs_to_jiffies(PLT_DOUBLE_KEY_TIMEOUT);
++
++	hid_set_drvdata(hdev, drv_data);
+ 
+ 	ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT |
+ 		HID_CONNECT_HIDINPUT_FORCE | HID_CONNECT_HIDDEV_FORCE);
+@@ -153,15 +198,26 @@ err:
+ }
+ 
+ static const struct hid_device_id plantronics_devices[] = {
++	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
++					 USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES),
++		.driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, HID_ANY_ID) },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(hid, plantronics_devices);
+ 
++static const struct hid_usage_id plantronics_usages[] = {
++	{ HID_CP_VOLUMEUP, EV_KEY, HID_ANY_ID },
++	{ HID_CP_VOLUMEDOWN, EV_KEY, HID_ANY_ID },
++	{ HID_TERMINATOR, HID_TERMINATOR, HID_TERMINATOR }
++};
++
+ static struct hid_driver plantronics_driver = {
+ 	.name = "plantronics",
+ 	.id_table = plantronics_devices,
++	.usage_table = plantronics_usages,
+ 	.input_mapping = plantronics_input_mapping,
++	.event = plantronics_event,
+ 	.probe = plantronics_probe,
+ };
+ module_hid_driver(plantronics_driver);
+diff --git a/drivers/hsi/hsi_core.c b/drivers/hsi/hsi_core.c
+index 47f0208aa7c37..a5f92e2889cb8 100644
+--- a/drivers/hsi/hsi_core.c
++++ b/drivers/hsi/hsi_core.c
+@@ -210,8 +210,6 @@ static void hsi_add_client_from_dt(struct hsi_port *port,
+ 	if (err)
+ 		goto err;
+ 
+-	dev_set_name(&cl->device, "%s", name);
+-
+ 	err = hsi_of_property_parse_mode(client, "hsi-mode", &mode);
+ 	if (err) {
+ 		err = hsi_of_property_parse_mode(client, "hsi-rx-mode",
+@@ -293,6 +291,7 @@ static void hsi_add_client_from_dt(struct hsi_port *port,
+ 	cl->device.release = hsi_client_release;
+ 	cl->device.of_node = client;
+ 
++	dev_set_name(&cl->device, "%s", name);
+ 	if (device_register(&cl->device) < 0) {
+ 		pr_err("hsi: failed to register client: %s\n", name);
+ 		put_device(&cl->device);
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index fbdda9938039a..f064fa6ef181a 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -583,7 +583,7 @@ static int __vmbus_open(struct vmbus_channel *newchannel,
+ 
+ 	if (newchannel->rescind) {
+ 		err = -ENODEV;
+-		goto error_free_info;
++		goto error_clean_msglist;
+ 	}
+ 
+ 	err = vmbus_post_msg(open_msg,
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 6be9f56cb6270..6476bfe193afd 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -725,6 +725,12 @@ static void init_vp_index(struct vmbus_channel *channel)
+ 	free_cpumask_var(available_mask);
+ }
+ 
++#define UNLOAD_DELAY_UNIT_MS	10		/* 10 milliseconds */
++#define UNLOAD_WAIT_MS		(100*1000)	/* 100 seconds */
++#define UNLOAD_WAIT_LOOPS	(UNLOAD_WAIT_MS/UNLOAD_DELAY_UNIT_MS)
++#define UNLOAD_MSG_MS		(5*1000)	/* Every 5 seconds */
++#define UNLOAD_MSG_LOOPS	(UNLOAD_MSG_MS/UNLOAD_DELAY_UNIT_MS)
++
+ static void vmbus_wait_for_unload(void)
+ {
+ 	int cpu;
+@@ -742,12 +748,17 @@ static void vmbus_wait_for_unload(void)
+ 	 * vmbus_connection.unload_event. If not, the last thing we can do is
+ 	 * read message pages for all CPUs directly.
+ 	 *
+-	 * Wait no more than 10 seconds so that the panic path can't get
+-	 * hung forever in case the response message isn't seen.
++	 * Wait up to 100 seconds since an Azure host must writeback any dirty
++	 * data in its disk cache before the VMbus UNLOAD request will
++	 * complete. This flushing has been empirically observed to take up
++	 * to 50 seconds in cases with a lot of dirty data, so allow additional
++	 * leeway and for inaccuracies in mdelay(). But eventually time out so
++	 * that the panic path can't get hung forever in case the response
++	 * message isn't seen.
+ 	 */
+-	for (i = 0; i < 1000; i++) {
++	for (i = 1; i <= UNLOAD_WAIT_LOOPS; i++) {
+ 		if (completion_done(&vmbus_connection.unload_event))
+-			break;
++			goto completed;
+ 
+ 		for_each_online_cpu(cpu) {
+ 			struct hv_per_cpu_context *hv_cpu
+@@ -770,9 +781,18 @@ static void vmbus_wait_for_unload(void)
+ 			vmbus_signal_eom(msg, message_type);
+ 		}
+ 
+-		mdelay(10);
++		/*
++		 * Give a notice periodically so someone watching the
++		 * serial output won't think it is completely hung.
++		 */
++		if (!(i % UNLOAD_MSG_LOOPS))
++			pr_notice("Waiting for VMBus UNLOAD to complete\n");
++
++		mdelay(UNLOAD_DELAY_UNIT_MS);
+ 	}
++	pr_err("Continuing even though VMBus UNLOAD did not complete\n");
+ 
++completed:
+ 	/*
+ 	 * We're crashing and already got the UNLOAD_RESPONSE, cleanup all
+ 	 * maybe-pending messages on all CPUs to be able to receive new
+diff --git a/drivers/hwmon/pmbus/pxe1610.c b/drivers/hwmon/pmbus/pxe1610.c
+index fa5c5dd29b7ae..212433eb6cc31 100644
+--- a/drivers/hwmon/pmbus/pxe1610.c
++++ b/drivers/hwmon/pmbus/pxe1610.c
+@@ -41,6 +41,15 @@ static int pxe1610_identify(struct i2c_client *client,
+ 				info->vrm_version[i] = vr13;
+ 				break;
+ 			default:
++				/*
++				 * If prior pages are available limit operation
++				 * to them
++				 */
++				if (i != 0) {
++					info->pages = i;
++					return 0;
++				}
++
+ 				return -ENODEV;
+ 			}
+ 		}
+diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c
+index e4b7f2a951ad5..c1bbc4caeb5c9 100644
+--- a/drivers/i2c/busses/i2c-cadence.c
++++ b/drivers/i2c/busses/i2c-cadence.c
+@@ -789,7 +789,7 @@ static int cdns_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 	bool change_role = false;
+ #endif
+ 
+-	ret = pm_runtime_get_sync(id->dev);
++	ret = pm_runtime_resume_and_get(id->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -911,7 +911,7 @@ static int cdns_reg_slave(struct i2c_client *slave)
+ 	if (slave->flags & I2C_CLIENT_TEN)
+ 		return -EAFNOSUPPORT;
+ 
+-	ret = pm_runtime_get_sync(id->dev);
++	ret = pm_runtime_resume_and_get(id->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1200,7 +1200,10 @@ static int cdns_i2c_probe(struct platform_device *pdev)
+ 	if (IS_ERR(id->membase))
+ 		return PTR_ERR(id->membase);
+ 
+-	id->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		return ret;
++	id->irq = ret;
+ 
+ 	id->adap.owner = THIS_MODULE;
+ 	id->adap.dev.of_node = pdev->dev.of_node;
+diff --git a/drivers/i2c/busses/i2c-emev2.c b/drivers/i2c/busses/i2c-emev2.c
+index a08554c1a5704..bdff0e6345d9a 100644
+--- a/drivers/i2c/busses/i2c-emev2.c
++++ b/drivers/i2c/busses/i2c-emev2.c
+@@ -395,7 +395,10 @@ static int em_i2c_probe(struct platform_device *pdev)
+ 
+ 	em_i2c_reset(&priv->adap);
+ 
+-	priv->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		goto err_clk;
++	priv->irq = ret;
+ 	ret = devm_request_irq(&pdev->dev, priv->irq, em_i2c_irq_handler, 0,
+ 				"em_i2c", priv);
+ 	if (ret)
+diff --git a/drivers/i2c/busses/i2c-img-scb.c b/drivers/i2c/busses/i2c-img-scb.c
+index 98a89301ed2a6..8e987945ed450 100644
+--- a/drivers/i2c/busses/i2c-img-scb.c
++++ b/drivers/i2c/busses/i2c-img-scb.c
+@@ -1057,7 +1057,7 @@ static int img_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 			atomic = true;
+ 	}
+ 
+-	ret = pm_runtime_get_sync(adap->dev.parent);
++	ret = pm_runtime_resume_and_get(adap->dev.parent);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1158,7 +1158,7 @@ static int img_i2c_init(struct img_i2c *i2c)
+ 	u32 rev;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(i2c->adap.dev.parent);
++	ret = pm_runtime_resume_and_get(i2c->adap.dev.parent);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i2c/busses/i2c-imx-lpi2c.c b/drivers/i2c/busses/i2c-imx-lpi2c.c
+index 9db6ccded5e9e..8b9ba055c4186 100644
+--- a/drivers/i2c/busses/i2c-imx-lpi2c.c
++++ b/drivers/i2c/busses/i2c-imx-lpi2c.c
+@@ -259,7 +259,7 @@ static int lpi2c_imx_master_enable(struct lpi2c_imx_struct *lpi2c_imx)
+ 	unsigned int temp;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(lpi2c_imx->adapter.dev.parent);
++	ret = pm_runtime_resume_and_get(lpi2c_imx->adapter.dev.parent);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index e6f8d6e45a15a..72af4b4d13180 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -1036,7 +1036,7 @@ static int i2c_imx_xfer(struct i2c_adapter *adapter,
+ 	struct imx_i2c_struct *i2c_imx = i2c_get_adapdata(adapter);
+ 	int result;
+ 
+-	result = pm_runtime_get_sync(i2c_imx->adapter.dev.parent);
++	result = pm_runtime_resume_and_get(i2c_imx->adapter.dev.parent);
+ 	if (result < 0)
+ 		return result;
+ 
+@@ -1280,7 +1280,7 @@ static int i2c_imx_remove(struct platform_device *pdev)
+ 	struct imx_i2c_struct *i2c_imx = platform_get_drvdata(pdev);
+ 	int irq, ret;
+ 
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i2c/busses/i2c-jz4780.c b/drivers/i2c/busses/i2c-jz4780.c
+index 2a946c2079284..e181db3fd2cce 100644
+--- a/drivers/i2c/busses/i2c-jz4780.c
++++ b/drivers/i2c/busses/i2c-jz4780.c
+@@ -826,7 +826,10 @@ static int jz4780_i2c_probe(struct platform_device *pdev)
+ 
+ 	jz4780_i2c_writew(i2c, JZ4780_I2C_INTM, 0x0);
+ 
+-	i2c->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		goto err;
++	i2c->irq = ret;
+ 	ret = devm_request_irq(&pdev->dev, i2c->irq, jz4780_i2c_irq, 0,
+ 			       dev_name(&pdev->dev), i2c);
+ 	if (ret)
+diff --git a/drivers/i2c/busses/i2c-mlxbf.c b/drivers/i2c/busses/i2c-mlxbf.c
+index 2fb0532d8a161..ab261d762dea3 100644
+--- a/drivers/i2c/busses/i2c-mlxbf.c
++++ b/drivers/i2c/busses/i2c-mlxbf.c
+@@ -2376,6 +2376,8 @@ static int mlxbf_i2c_probe(struct platform_device *pdev)
+ 	mlxbf_i2c_init_slave(pdev, priv);
+ 
+ 	irq = platform_get_irq(pdev, 0);
++	if (irq < 0)
++		return irq;
+ 	ret = devm_request_irq(dev, irq, mlxbf_smbus_irq,
+ 			       IRQF_ONESHOT | IRQF_SHARED | IRQF_PROBE_SHARED,
+ 			       dev_name(dev), priv);
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 2ffd2f354d0ae..86f70c7513192 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -479,7 +479,7 @@ static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
+ {
+ 	u16 control_reg;
+ 
+-	if (i2c->dev_comp->dma_sync) {
++	if (i2c->dev_comp->apdma_sync) {
+ 		writel(I2C_DMA_WARM_RST, i2c->pdmabase + OFFSET_RST);
+ 		udelay(10);
+ 		writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST);
+diff --git a/drivers/i2c/busses/i2c-omap.c b/drivers/i2c/busses/i2c-omap.c
+index 12ac4212aded8..d4f6c6d60683a 100644
+--- a/drivers/i2c/busses/i2c-omap.c
++++ b/drivers/i2c/busses/i2c-omap.c
+@@ -1404,9 +1404,9 @@ omap_i2c_probe(struct platform_device *pdev)
+ 	pm_runtime_set_autosuspend_delay(omap->dev, OMAP_I2C_PM_TIMEOUT);
+ 	pm_runtime_use_autosuspend(omap->dev);
+ 
+-	r = pm_runtime_get_sync(omap->dev);
++	r = pm_runtime_resume_and_get(omap->dev);
+ 	if (r < 0)
+-		goto err_free_mem;
++		goto err_disable_pm;
+ 
+ 	/*
+ 	 * Read the Rev hi bit-[15:14] ie scheme this is 1 indicates ver2.
+@@ -1513,8 +1513,8 @@ err_unuse_clocks:
+ 	omap_i2c_write_reg(omap, OMAP_I2C_CON_REG, 0);
+ 	pm_runtime_dont_use_autosuspend(omap->dev);
+ 	pm_runtime_put_sync(omap->dev);
++err_disable_pm:
+ 	pm_runtime_disable(&pdev->dev);
+-err_free_mem:
+ 
+ 	return r;
+ }
+@@ -1525,7 +1525,7 @@ static int omap_i2c_remove(struct platform_device *pdev)
+ 	int ret;
+ 
+ 	i2c_del_adapter(&omap->adapter);
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index ad6630e3cc779..8722ca23f889b 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -625,20 +625,11 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+  * generated. It turned out that taking a spinlock at the beginning of the ISR
+  * was already causing repeated messages. Thus, this driver was converted to
+  * the now lockless behaviour. Please keep this in mind when hacking the driver.
++ * R-Car Gen3 seems to have this fixed but earlier versions than R-Car Gen2 are
++ * likely affected. Therefore, we have different interrupt handler entries.
+  */
+-static irqreturn_t rcar_i2c_irq(int irq, void *ptr)
++static irqreturn_t rcar_i2c_irq(int irq, struct rcar_i2c_priv *priv, u32 msr)
+ {
+-	struct rcar_i2c_priv *priv = ptr;
+-	u32 msr;
+-
+-	/* Clear START or STOP immediately, except for REPSTART after read */
+-	if (likely(!(priv->flags & ID_P_REP_AFTER_RD)))
+-		rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_DATA);
+-
+-	msr = rcar_i2c_read(priv, ICMSR);
+-
+-	/* Only handle interrupts that are currently enabled */
+-	msr &= rcar_i2c_read(priv, ICMIER);
+ 	if (!msr) {
+ 		if (rcar_i2c_slave_irq(priv))
+ 			return IRQ_HANDLED;
+@@ -682,6 +673,41 @@ out:
+ 	return IRQ_HANDLED;
+ }
+ 
++static irqreturn_t rcar_i2c_gen2_irq(int irq, void *ptr)
++{
++	struct rcar_i2c_priv *priv = ptr;
++	u32 msr;
++
++	/* Clear START or STOP immediately, except for REPSTART after read */
++	if (likely(!(priv->flags & ID_P_REP_AFTER_RD)))
++		rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_DATA);
++
++	/* Only handle interrupts that are currently enabled */
++	msr = rcar_i2c_read(priv, ICMSR);
++	msr &= rcar_i2c_read(priv, ICMIER);
++
++	return rcar_i2c_irq(irq, priv, msr);
++}
++
++static irqreturn_t rcar_i2c_gen3_irq(int irq, void *ptr)
++{
++	struct rcar_i2c_priv *priv = ptr;
++	u32 msr;
++
++	/* Only handle interrupts that are currently enabled */
++	msr = rcar_i2c_read(priv, ICMSR);
++	msr &= rcar_i2c_read(priv, ICMIER);
++
++	/*
++	 * Clear START or STOP immediately, except for REPSTART after read or
++	 * if a spurious interrupt was detected.
++	 */
++	if (likely(!(priv->flags & ID_P_REP_AFTER_RD) && msr))
++		rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_DATA);
++
++	return rcar_i2c_irq(irq, priv, msr);
++}
++
+ static struct dma_chan *rcar_i2c_request_dma_chan(struct device *dev,
+ 					enum dma_transfer_direction dir,
+ 					dma_addr_t port_addr)
+@@ -928,6 +954,8 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 	struct rcar_i2c_priv *priv;
+ 	struct i2c_adapter *adap;
+ 	struct device *dev = &pdev->dev;
++	unsigned long irqflags = 0;
++	irqreturn_t (*irqhandler)(int irq, void *ptr) = rcar_i2c_gen3_irq;
+ 	int ret;
+ 
+ 	/* Otherwise logic will break because some bytes must always use PIO */
+@@ -976,6 +1004,11 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 
+ 	rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
+ 
++	if (priv->devtype < I2C_RCAR_GEN3) {
++		irqflags |= IRQF_NO_THREAD;
++		irqhandler = rcar_i2c_gen2_irq;
++	}
++
+ 	if (priv->devtype == I2C_RCAR_GEN3) {
+ 		priv->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+ 		if (!IS_ERR(priv->rstc)) {
+@@ -994,8 +1027,11 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 	if (of_property_read_bool(dev->of_node, "smbus"))
+ 		priv->flags |= ID_P_HOST_NOTIFY;
+ 
+-	priv->irq = platform_get_irq(pdev, 0);
+-	ret = devm_request_irq(dev, priv->irq, rcar_i2c_irq, 0, dev_name(dev), priv);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		goto out_pm_disable;
++	priv->irq = ret;
++	ret = devm_request_irq(dev, priv->irq, irqhandler, irqflags, dev_name(dev), priv);
+ 	if (ret < 0) {
+ 		dev_err(dev, "cannot get irq %d\n", priv->irq);
+ 		goto out_pm_disable;
+diff --git a/drivers/i2c/busses/i2c-sh7760.c b/drivers/i2c/busses/i2c-sh7760.c
+index c2005c789d2b0..319d1fa617c88 100644
+--- a/drivers/i2c/busses/i2c-sh7760.c
++++ b/drivers/i2c/busses/i2c-sh7760.c
+@@ -471,7 +471,10 @@ static int sh7760_i2c_probe(struct platform_device *pdev)
+ 		goto out2;
+ 	}
+ 
+-	id->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		goto out3;
++	id->irq = ret;
+ 
+ 	id->adap.nr = pdev->id;
+ 	id->adap.algo = &sh7760_i2c_algo;
+diff --git a/drivers/i2c/busses/i2c-sprd.c b/drivers/i2c/busses/i2c-sprd.c
+index 2917fecf6c80d..8ead7e021008c 100644
+--- a/drivers/i2c/busses/i2c-sprd.c
++++ b/drivers/i2c/busses/i2c-sprd.c
+@@ -290,7 +290,7 @@ static int sprd_i2c_master_xfer(struct i2c_adapter *i2c_adap,
+ 	struct sprd_i2c *i2c_dev = i2c_adap->algo_data;
+ 	int im, ret;
+ 
+-	ret = pm_runtime_get_sync(i2c_dev->dev);
++	ret = pm_runtime_resume_and_get(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -576,7 +576,7 @@ static int sprd_i2c_remove(struct platform_device *pdev)
+ 	struct sprd_i2c *i2c_dev = platform_get_drvdata(pdev);
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(i2c_dev->dev);
++	ret = pm_runtime_resume_and_get(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index 6747353345475..1e800b65e20a0 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -1652,7 +1652,7 @@ static int stm32f7_i2c_xfer(struct i2c_adapter *i2c_adap,
+ 	i2c_dev->msg_id = 0;
+ 	f7_msg->smbus = false;
+ 
+-	ret = pm_runtime_get_sync(i2c_dev->dev);
++	ret = pm_runtime_resume_and_get(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1698,7 +1698,7 @@ static int stm32f7_i2c_smbus_xfer(struct i2c_adapter *adapter, u16 addr,
+ 	f7_msg->read_write = read_write;
+ 	f7_msg->smbus = true;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1799,7 +1799,7 @@ static int stm32f7_i2c_reg_slave(struct i2c_client *slave)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1880,7 +1880,7 @@ static int stm32f7_i2c_unreg_slave(struct i2c_client *slave)
+ 
+ 	WARN_ON(!i2c_dev->slave[id]);
+ 
+-	ret = pm_runtime_get_sync(i2c_dev->dev);
++	ret = pm_runtime_resume_and_get(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -2277,7 +2277,7 @@ static int stm32f7_i2c_regs_backup(struct stm32f7_i2c_dev *i2c_dev)
+ 	int ret;
+ 	struct stm32f7_i2c_regs *backup_regs = &i2c_dev->backup_regs;
+ 
+-	ret = pm_runtime_get_sync(i2c_dev->dev);
++	ret = pm_runtime_resume_and_get(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -2299,7 +2299,7 @@ static int stm32f7_i2c_regs_restore(struct stm32f7_i2c_dev *i2c_dev)
+ 	int ret;
+ 	struct stm32f7_i2c_regs *backup_regs = &i2c_dev->backup_regs;
+ 
+-	ret = pm_runtime_get_sync(i2c_dev->dev);
++	ret = pm_runtime_resume_and_get(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index 087b2951942eb..2a8568b97c14d 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -706,7 +706,7 @@ static int xiic_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ 	dev_dbg(adap->dev.parent, "%s entry SR: 0x%x\n", __func__,
+ 		xiic_getreg8(i2c, XIIC_SR_REG_OFFSET));
+ 
+-	err = pm_runtime_get_sync(i2c->dev);
++	err = pm_runtime_resume_and_get(i2c->dev);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -873,7 +873,7 @@ static int xiic_i2c_remove(struct platform_device *pdev)
+ 	/* remove adapter & data */
+ 	i2c_del_adapter(&i2c->adap);
+ 
+-	ret = pm_runtime_get_sync(i2c->dev);
++	ret = pm_runtime_resume_and_get(i2c->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index b61bf53ec07af..1c6b78ad5ade4 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -2537,7 +2537,7 @@ int i3c_master_register(struct i3c_master_controller *master,
+ 
+ 	ret = i3c_master_bus_init(master);
+ 	if (ret)
+-		goto err_destroy_wq;
++		goto err_put_dev;
+ 
+ 	ret = device_add(&master->dev);
+ 	if (ret)
+@@ -2568,9 +2568,6 @@ err_del_dev:
+ err_cleanup_bus:
+ 	i3c_master_bus_cleanup(master);
+ 
+-err_destroy_wq:
+-	destroy_workqueue(master->wq);
+-
+ err_put_dev:
+ 	put_device(&master->dev);
+ 
+diff --git a/drivers/iio/accel/adis16201.c b/drivers/iio/accel/adis16201.c
+index f955cccb3e779..84bbdfd2f2ba3 100644
+--- a/drivers/iio/accel/adis16201.c
++++ b/drivers/iio/accel/adis16201.c
+@@ -215,7 +215,7 @@ static const struct iio_chan_spec adis16201_channels[] = {
+ 	ADIS_AUX_ADC_CHAN(ADIS16201_AUX_ADC_REG, ADIS16201_SCAN_AUX_ADC, 0, 12),
+ 	ADIS_INCLI_CHAN(X, ADIS16201_XINCL_OUT_REG, ADIS16201_SCAN_INCLI_X,
+ 			BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 14),
+-	ADIS_INCLI_CHAN(X, ADIS16201_YINCL_OUT_REG, ADIS16201_SCAN_INCLI_Y,
++	ADIS_INCLI_CHAN(Y, ADIS16201_YINCL_OUT_REG, ADIS16201_SCAN_INCLI_Y,
+ 			BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 14),
+ 	IIO_CHAN_SOFT_TIMESTAMP(7)
+ };
+diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
+index 86fda6182543b..e39b679126a2a 100644
+--- a/drivers/iio/adc/Kconfig
++++ b/drivers/iio/adc/Kconfig
+@@ -249,7 +249,7 @@ config AD799X
+ config AD9467
+ 	tristate "Analog Devices AD9467 High Speed ADC driver"
+ 	depends on SPI
+-	select ADI_AXI_ADC
++	depends on ADI_AXI_ADC
+ 	help
+ 	  Say yes here to build support for Analog Devices:
+ 	  * AD9467 16-Bit, 200 MSPS/250 MSPS Analog-to-Digital Converter
+diff --git a/drivers/iio/adc/ad7476.c b/drivers/iio/adc/ad7476.c
+index 66c55ae67791b..bf55726702443 100644
+--- a/drivers/iio/adc/ad7476.c
++++ b/drivers/iio/adc/ad7476.c
+@@ -316,25 +316,15 @@ static int ad7476_probe(struct spi_device *spi)
+ 	spi_message_init(&st->msg);
+ 	spi_message_add_tail(&st->xfer, &st->msg);
+ 
+-	ret = iio_triggered_buffer_setup(indio_dev, NULL,
+-			&ad7476_trigger_handler, NULL);
++	ret = devm_iio_triggered_buffer_setup(&spi->dev, indio_dev, NULL,
++					      &ad7476_trigger_handler, NULL);
+ 	if (ret)
+-		goto error_disable_reg;
++		return ret;
+ 
+ 	if (st->chip_info->reset)
+ 		st->chip_info->reset(st);
+ 
+-	ret = iio_device_register(indio_dev);
+-	if (ret)
+-		goto error_ring_unregister;
+-	return 0;
+-
+-error_ring_unregister:
+-	iio_triggered_buffer_cleanup(indio_dev);
+-error_disable_reg:
+-	regulator_disable(st->reg);
+-
+-	return ret;
++	return devm_iio_device_register(&spi->dev, indio_dev);
+ }
+ 
+ static const struct spi_device_id ad7476_id[] = {
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+index 18a1898e3e348..ae391ec4a7275 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+@@ -723,12 +723,16 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev,
+ 	}
+ }
+ 
+-static int inv_mpu6050_write_gyro_scale(struct inv_mpu6050_state *st, int val)
++static int inv_mpu6050_write_gyro_scale(struct inv_mpu6050_state *st, int val,
++					int val2)
+ {
+ 	int result, i;
+ 
++	if (val != 0)
++		return -EINVAL;
++
+ 	for (i = 0; i < ARRAY_SIZE(gyro_scale_6050); ++i) {
+-		if (gyro_scale_6050[i] == val) {
++		if (gyro_scale_6050[i] == val2) {
+ 			result = inv_mpu6050_set_gyro_fsr(st, i);
+ 			if (result)
+ 				return result;
+@@ -759,13 +763,17 @@ static int inv_write_raw_get_fmt(struct iio_dev *indio_dev,
+ 	return -EINVAL;
+ }
+ 
+-static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val)
++static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val,
++					 int val2)
+ {
+ 	int result, i;
+ 	u8 d;
+ 
++	if (val != 0)
++		return -EINVAL;
++
+ 	for (i = 0; i < ARRAY_SIZE(accel_scale); ++i) {
+-		if (accel_scale[i] == val) {
++		if (accel_scale[i] == val2) {
+ 			d = (i << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT);
+ 			result = regmap_write(st->map, st->reg->accl_config, d);
+ 			if (result)
+@@ -806,10 +814,10 @@ static int inv_mpu6050_write_raw(struct iio_dev *indio_dev,
+ 	case IIO_CHAN_INFO_SCALE:
+ 		switch (chan->type) {
+ 		case IIO_ANGL_VEL:
+-			result = inv_mpu6050_write_gyro_scale(st, val2);
++			result = inv_mpu6050_write_gyro_scale(st, val, val2);
+ 			break;
+ 		case IIO_ACCEL:
+-			result = inv_mpu6050_write_accel_scale(st, val2);
++			result = inv_mpu6050_write_accel_scale(st, val, val2);
+ 			break;
+ 		default:
+ 			result = -EINVAL;
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index bbba0cd42c89b..ee568bdf3c788 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -2137,7 +2137,8 @@ static int cm_req_handler(struct cm_work *work)
+ 		goto destroy;
+ 	}
+ 
+-	cm_process_routed_req(req_msg, work->mad_recv_wc->wc);
++	if (cm_id_priv->av.ah_attr.type != RDMA_AH_ATTR_TYPE_ROCE)
++		cm_process_routed_req(req_msg, work->mad_recv_wc->wc);
+ 
+ 	memset(&work->path[0], 0, sizeof(work->path[0]));
+ 	if (cm_req_has_alt_path(req_msg))
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index e3638f80e1d52..6af066a2c8c06 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -463,7 +463,6 @@ static void _cma_attach_to_dev(struct rdma_id_private *id_priv,
+ 	id_priv->id.route.addr.dev_addr.transport =
+ 		rdma_node_get_transport(cma_dev->device->node_type);
+ 	list_add_tail(&id_priv->list, &cma_dev->id_list);
+-	rdma_restrack_add(&id_priv->res);
+ 
+ 	trace_cm_id_attach(id_priv, cma_dev->device);
+ }
+@@ -700,6 +699,7 @@ static int cma_ib_acquire_dev(struct rdma_id_private *id_priv,
+ 	mutex_lock(&lock);
+ 	cma_attach_to_dev(id_priv, listen_id_priv->cma_dev);
+ 	mutex_unlock(&lock);
++	rdma_restrack_add(&id_priv->res);
+ 	return 0;
+ }
+ 
+@@ -754,8 +754,10 @@ static int cma_iw_acquire_dev(struct rdma_id_private *id_priv,
+ 	}
+ 
+ out:
+-	if (!ret)
++	if (!ret) {
+ 		cma_attach_to_dev(id_priv, cma_dev);
++		rdma_restrack_add(&id_priv->res);
++	}
+ 
+ 	mutex_unlock(&lock);
+ 	return ret;
+@@ -816,6 +818,7 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
+ 
+ found:
+ 	cma_attach_to_dev(id_priv, cma_dev);
++	rdma_restrack_add(&id_priv->res);
+ 	mutex_unlock(&lock);
+ 	addr = (struct sockaddr_ib *)cma_src_addr(id_priv);
+ 	memcpy(&addr->sib_addr, &sgid, sizeof(sgid));
+@@ -2529,6 +2532,7 @@ static int cma_listen_on_dev(struct rdma_id_private *id_priv,
+ 	       rdma_addr_size(cma_src_addr(id_priv)));
+ 
+ 	_cma_attach_to_dev(dev_id_priv, cma_dev);
++	rdma_restrack_add(&dev_id_priv->res);
+ 	cma_id_get(id_priv);
+ 	dev_id_priv->internal_id = 1;
+ 	dev_id_priv->afonly = id_priv->afonly;
+@@ -3169,6 +3173,7 @@ port_found:
+ 	ib_addr_set_pkey(&id_priv->id.route.addr.dev_addr, pkey);
+ 	id_priv->id.port_num = p;
+ 	cma_attach_to_dev(id_priv, cma_dev);
++	rdma_restrack_add(&id_priv->res);
+ 	cma_set_loopback(cma_src_addr(id_priv));
+ out:
+ 	mutex_unlock(&lock);
+@@ -3201,6 +3206,7 @@ static void addr_handler(int status, struct sockaddr *src_addr,
+ 		if (status)
+ 			pr_debug_ratelimited("RDMA CM: ADDR_ERROR: failed to acquire device. status %d\n",
+ 					     status);
++		rdma_restrack_add(&id_priv->res);
+ 	} else if (status) {
+ 		pr_debug_ratelimited("RDMA CM: ADDR_ERROR: failed to resolve IP. status %d\n", status);
+ 	}
+@@ -3812,6 +3818,8 @@ int rdma_bind_addr(struct rdma_cm_id *id, struct sockaddr *addr)
+ 	if (ret)
+ 		goto err2;
+ 
++	if (!cma_any_addr(addr))
++		rdma_restrack_add(&id_priv->res);
+ 	return 0;
+ err2:
+ 	if (id_priv->cma_dev)
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 995d4633b0a1c..d4d4959c2434c 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -2784,6 +2784,7 @@ do_rq:
+ 		dev_err(&cq->hwq.pdev->dev,
+ 			"FP: CQ Processed terminal reported rq_cons_idx 0x%x exceeds max 0x%x\n",
+ 			cqe_cons, rq->max_wqe);
++		rc = -EINVAL;
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+index fa7878336100a..3ca47004b7527 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+@@ -854,6 +854,7 @@ static int bnxt_qplib_alloc_dpi_tbl(struct bnxt_qplib_res     *res,
+ 
+ unmap_io:
+ 	pci_iounmap(res->pdev, dpit->dbr_bar_reg_iomem);
++	dpit->dbr_bar_reg_iomem = NULL;
+ 	return -ENOMEM;
+ }
+ 
+diff --git a/drivers/infiniband/hw/cxgb4/resource.c b/drivers/infiniband/hw/cxgb4/resource.c
+index 5c95c789f302d..e800e8e8bed5a 100644
+--- a/drivers/infiniband/hw/cxgb4/resource.c
++++ b/drivers/infiniband/hw/cxgb4/resource.c
+@@ -216,7 +216,7 @@ u32 c4iw_get_qpid(struct c4iw_rdev *rdev, struct c4iw_dev_ucontext *uctx)
+ 			goto out;
+ 		entry->qid = qid;
+ 		list_add_tail(&entry->entry, &uctx->cqids);
+-		for (i = qid; i & rdev->qpmask; i++) {
++		for (i = qid + 1; i & rdev->qpmask; i++) {
+ 			entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+ 			if (!entry)
+ 				goto out;
+diff --git a/drivers/infiniband/hw/hfi1/firmware.c b/drivers/infiniband/hw/hfi1/firmware.c
+index 0e83d4b61e463..2cf102b5abd44 100644
+--- a/drivers/infiniband/hw/hfi1/firmware.c
++++ b/drivers/infiniband/hw/hfi1/firmware.c
+@@ -1916,6 +1916,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 			dd_dev_err(dd, "%s: Failed CRC check at offset %ld\n",
+ 				   __func__, (ptr -
+ 				   (u32 *)dd->platform_config.data));
++			ret = -EINVAL;
+ 			goto bail;
+ 		}
+ 		/* Jump the CRC DWORD */
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
+index f3fb28e3d5d74..d213f65d4cdd0 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
+@@ -89,7 +89,7 @@ int hfi1_mmu_rb_register(void *ops_arg,
+ 	struct mmu_rb_handler *h;
+ 	int ret;
+ 
+-	h = kmalloc(sizeof(*h), GFP_KERNEL);
++	h = kzalloc(sizeof(*h), GFP_KERNEL);
+ 	if (!h)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_pble.c b/drivers/infiniband/hw/i40iw/i40iw_pble.c
+index 5f97643e22e53..ae7d227edad2f 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_pble.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_pble.c
+@@ -392,12 +392,9 @@ static enum i40iw_status_code add_pble_pool(struct i40iw_sc_dev *dev,
+ 	i40iw_debug(dev, I40IW_DEBUG_PBLE, "next_fpm_addr = %llx chunk_size[%u] = 0x%x\n",
+ 		    pble_rsrc->next_fpm_addr, chunk->size, chunk->size);
+ 	pble_rsrc->unallocated_pble -= (chunk->size >> 3);
+-	list_add(&chunk->list, &pble_rsrc->pinfo.clist);
+ 	sd_reg_val = (sd_entry_type == I40IW_SD_TYPE_PAGED) ?
+ 			sd_entry->u.pd_table.pd_page_addr.pa : sd_entry->u.bp.addr.pa;
+-	if (sd_entry->valid)
+-		return 0;
+-	if (dev->is_pf) {
++	if (dev->is_pf && !sd_entry->valid) {
+ 		ret_code = i40iw_hmc_sd_one(dev, hmc_info->hmc_fn_id,
+ 					    sd_reg_val, idx->sd_idx,
+ 					    sd_entry->entry_type, true);
+@@ -408,6 +405,7 @@ static enum i40iw_status_code add_pble_pool(struct i40iw_sc_dev *dev,
+ 	}
+ 
+ 	sd_entry->valid = true;
++	list_add(&chunk->list, &pble_rsrc->pinfo.clist);
+ 	return 0;
+  error:
+ 	kfree(chunk);
+diff --git a/drivers/infiniband/hw/mlx5/fs.c b/drivers/infiniband/hw/mlx5/fs.c
+index 492cfe063bcad..13d50b1781660 100644
+--- a/drivers/infiniband/hw/mlx5/fs.c
++++ b/drivers/infiniband/hw/mlx5/fs.c
+@@ -1528,8 +1528,8 @@ static struct mlx5_ib_flow_handler *raw_fs_rule_add(
+ 		dst_num++;
+ 	}
+ 
+-	handler = _create_raw_flow_rule(dev, ft_prio, dst, fs_matcher,
+-					flow_context, flow_act,
++	handler = _create_raw_flow_rule(dev, ft_prio, dst_num ? dst : NULL,
++					fs_matcher, flow_context, flow_act,
+ 					cmd_in, inlen, dst_num);
+ 
+ 	if (IS_ERR(handler)) {
+@@ -1885,8 +1885,9 @@ static int get_dests(struct uverbs_attr_bundle *attrs,
+ 		else
+ 			*dest_id = mqp->raw_packet_qp.rq.tirn;
+ 		*dest_type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+-	} else if (fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_EGRESS ||
+-		   fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_RDMA_TX) {
++	} else if ((fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_EGRESS ||
++		    fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_RDMA_TX) &&
++		   !(*flags & MLX5_IB_ATTR_CREATE_FLOW_FLAGS_DROP)) {
+ 		*dest_type = MLX5_FLOW_DESTINATION_TYPE_PORT;
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 75caeec378bda..6d2715f65d788 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -3079,6 +3079,19 @@ enum {
+ 	MLX5_PATH_FLAG_COUNTER	= 1 << 2,
+ };
+ 
++static int mlx5_to_ib_rate_map(u8 rate)
++{
++	static const int rates[] = { IB_RATE_PORT_CURRENT, IB_RATE_56_GBPS,
++				     IB_RATE_25_GBPS,	   IB_RATE_100_GBPS,
++				     IB_RATE_200_GBPS,	   IB_RATE_50_GBPS,
++				     IB_RATE_400_GBPS };
++
++	if (rate < ARRAY_SIZE(rates))
++		return rates[rate];
++
++	return rate - MLX5_STAT_RATE_OFFSET;
++}
++
+ static int ib_to_mlx5_rate_map(u8 rate)
+ {
+ 	switch (rate) {
+@@ -4420,7 +4433,7 @@ static void to_rdma_ah_attr(struct mlx5_ib_dev *ibdev,
+ 	rdma_ah_set_path_bits(ah_attr, MLX5_GET(ads, path, mlid));
+ 
+ 	static_rate = MLX5_GET(ads, path, stat_rate);
+-	rdma_ah_set_static_rate(ah_attr, static_rate ? static_rate - 5 : 0);
++	rdma_ah_set_static_rate(ah_attr, mlx5_to_ib_rate_map(static_rate));
+ 	if (MLX5_GET(ads, path, grh) ||
+ 	    ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) {
+ 		rdma_ah_set_grh(ah_attr, NULL, MLX5_GET(ads, path, flow_label),
+diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
+index c4bc58736e489..1715fbe0719d8 100644
+--- a/drivers/infiniband/hw/qedr/qedr_iw_cm.c
++++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
+@@ -636,8 +636,10 @@ int qedr_iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
+ 	memcpy(in_params.local_mac_addr, dev->ndev->dev_addr, ETH_ALEN);
+ 
+ 	if (test_and_set_bit(QEDR_IWARP_CM_WAIT_FOR_CONNECT,
+-			     &qp->iwarp_cm_flags))
++			     &qp->iwarp_cm_flags)) {
++		rc = -ENODEV;
+ 		goto err; /* QP already being destroyed */
++	}
+ 
+ 	rc = dev->ops->iwarp_connect(dev->rdma_ctx, &in_params, &out_params);
+ 	if (rc) {
+diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c
+index df0d173d6acba..da2e867a1ed93 100644
+--- a/drivers/infiniband/sw/rxe/rxe_av.c
++++ b/drivers/infiniband/sw/rxe/rxe_av.c
+@@ -88,7 +88,7 @@ void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr)
+ 		type = RXE_NETWORK_TYPE_IPV4;
+ 		break;
+ 	case RDMA_NETWORK_IPV6:
+-		type = RXE_NETWORK_TYPE_IPV4;
++		type = RXE_NETWORK_TYPE_IPV6;
+ 		break;
+ 	default:
+ 		/* not reached - checked in rxe_av_chk_attr */
+diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
+index 34a910cf0edbd..61c17db70d658 100644
+--- a/drivers/infiniband/sw/siw/siw_mem.c
++++ b/drivers/infiniband/sw/siw/siw_mem.c
+@@ -106,8 +106,6 @@ int siw_mr_add_mem(struct siw_mr *mr, struct ib_pd *pd, void *mem_obj,
+ 	mem->perms = rights & IWARP_ACCESS_MASK;
+ 	kref_init(&mem->ref);
+ 
+-	mr->mem = mem;
+-
+ 	get_random_bytes(&next, 4);
+ 	next &= 0x00ffffff;
+ 
+@@ -116,6 +114,8 @@ int siw_mr_add_mem(struct siw_mr *mr, struct ib_pd *pd, void *mem_obj,
+ 		kfree(mem);
+ 		return -ENOMEM;
+ 	}
++
++	mr->mem = mem;
+ 	/* Set the STag index part */
+ 	mem->stag = id << 8;
+ 	mr->base_mr.lkey = mr->base_mr.rkey = mem->stag;
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index bd478947b93a5..e653c83f8a356 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -438,23 +438,23 @@ isert_connect_request(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
+ 	isert_init_conn(isert_conn);
+ 	isert_conn->cm_id = cma_id;
+ 
+-	ret = isert_alloc_login_buf(isert_conn, cma_id->device);
+-	if (ret)
+-		goto out;
+-
+ 	device = isert_device_get(cma_id);
+ 	if (IS_ERR(device)) {
+ 		ret = PTR_ERR(device);
+-		goto out_rsp_dma_map;
++		goto out;
+ 	}
+ 	isert_conn->device = device;
+ 
++	ret = isert_alloc_login_buf(isert_conn, cma_id->device);
++	if (ret)
++		goto out_conn_dev;
++
+ 	isert_set_nego_params(isert_conn, &event->param.conn);
+ 
+ 	isert_conn->qp = isert_create_qp(isert_conn, cma_id);
+ 	if (IS_ERR(isert_conn->qp)) {
+ 		ret = PTR_ERR(isert_conn->qp);
+-		goto out_conn_dev;
++		goto out_rsp_dma_map;
+ 	}
+ 
+ 	ret = isert_login_post_recv(isert_conn);
+@@ -473,10 +473,10 @@ isert_connect_request(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
+ 
+ out_destroy_qp:
+ 	isert_destroy_qp(isert_conn);
+-out_conn_dev:
+-	isert_device_put(device);
+ out_rsp_dma_map:
+ 	isert_free_login_buf(isert_conn);
++out_conn_dev:
++	isert_device_put(device);
+ out:
+ 	kfree(isert_conn);
+ 	rdma_reject(cma_id, NULL, 0, IB_CM_REJ_CONSUMER_DEFINED);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 6ff97fbf87566..7db550ba25d7f 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -2803,8 +2803,8 @@ int rtrs_clt_remove_path_from_sysfs(struct rtrs_clt_sess *sess,
+ 	} while (!changed && old_state != RTRS_CLT_DEAD);
+ 
+ 	if (likely(changed)) {
+-		rtrs_clt_destroy_sess_files(sess, sysfs_self);
+ 		rtrs_clt_remove_path_from_arr(sess);
++		rtrs_clt_destroy_sess_files(sess, sysfs_self);
+ 		kobject_put(&sess->kobj);
+ 	}
+ 
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 53a8becac8276..07ecc7dc1822b 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -2378,6 +2378,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ 		pr_info("rejected SRP_LOGIN_REQ because target %s_%d is not enabled\n",
+ 			dev_name(&sdev->device->dev), port_num);
+ 		mutex_unlock(&sport->mutex);
++		ret = -EINVAL;
+ 		goto reject;
+ 	}
+ 
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 3c215f0a6052b..fa502c0e2e31b 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -1840,7 +1840,7 @@ static void __init late_iommu_features_init(struct amd_iommu *iommu)
+ 	 * IVHD and MMIO conflict.
+ 	 */
+ 	if (features != iommu->features)
+-		pr_warn(FW_WARN "EFR mismatch. Use IVHD EFR (%#llx : %#llx\n).",
++		pr_warn(FW_WARN "EFR mismatch. Use IVHD EFR (%#llx : %#llx).\n",
+ 			features, iommu->features);
+ }
+ 
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+index d4b7f40ccb029..57e5d223c4673 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+@@ -115,7 +115,7 @@
+ #define GERROR_PRIQ_ABT_ERR		(1 << 3)
+ #define GERROR_EVTQ_ABT_ERR		(1 << 2)
+ #define GERROR_CMDQ_ERR			(1 << 0)
+-#define GERROR_ERR_MASK			0xfd
++#define GERROR_ERR_MASK			0x1fd
+ 
+ #define ARM_SMMU_GERRORN		0x64
+ 
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 7e3db4c0324d3..db9bf5ac07228 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -656,7 +656,14 @@ static int domain_update_iommu_snooping(struct intel_iommu *skip)
+ 	rcu_read_lock();
+ 	for_each_active_iommu(iommu, drhd) {
+ 		if (iommu != skip) {
+-			if (!ecap_sc_support(iommu->ecap)) {
++			/*
++			 * If the hardware is operating in the scalable mode,
++			 * the snooping control is always supported since we
++			 * always set PASID-table-entry.PGSNP bit if the domain
++			 * is managed outside (UNMANAGED).
++			 */
++			if (!sm_supported(iommu) &&
++			    !ecap_sc_support(iommu->ecap)) {
+ 				ret = 0;
+ 				break;
+ 			}
+@@ -1021,8 +1028,11 @@ static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
+ 
+ 			domain_flush_cache(domain, tmp_page, VTD_PAGE_SIZE);
+ 			pteval = ((uint64_t)virt_to_dma_pfn(tmp_page) << VTD_PAGE_SHIFT) | DMA_PTE_READ | DMA_PTE_WRITE;
+-			if (domain_use_first_level(domain))
++			if (domain_use_first_level(domain)) {
+ 				pteval |= DMA_FL_PTE_XD | DMA_FL_PTE_US;
++				if (domain->domain.type == IOMMU_DOMAIN_DMA)
++					pteval |= DMA_FL_PTE_ACCESS;
++			}
+ 			if (cmpxchg64(&pte->val, 0ULL, pteval))
+ 				/* Someone else set it while we were thinking; use theirs. */
+ 				free_pgtable_page(tmp_page);
+@@ -1338,6 +1348,11 @@ static void iommu_set_root_entry(struct intel_iommu *iommu)
+ 		      readl, (sts & DMA_GSTS_RTPS), sts);
+ 
+ 	raw_spin_unlock_irqrestore(&iommu->register_lock, flag);
++
++	iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
++	if (sm_supported(iommu))
++		qi_flush_pasid_cache(iommu, 0, QI_PC_GLOBAL, 0);
++	iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+ }
+ 
+ void iommu_flush_write_buffer(struct intel_iommu *iommu)
+@@ -2347,14 +2362,19 @@ static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
+ 		return -EINVAL;
+ 
+ 	attr = prot & (DMA_PTE_READ | DMA_PTE_WRITE | DMA_PTE_SNP);
+-	if (domain_use_first_level(domain))
+-		attr |= DMA_FL_PTE_PRESENT | DMA_FL_PTE_XD | DMA_FL_PTE_US;
++	attr |= DMA_FL_PTE_PRESENT;
++	if (domain_use_first_level(domain)) {
++		attr |= DMA_FL_PTE_XD | DMA_FL_PTE_US;
+ 
+-	if (!sg) {
+-		sg_res = nr_pages;
+-		pteval = ((phys_addr_t)phys_pfn << VTD_PAGE_SHIFT) | attr;
++		if (domain->domain.type == IOMMU_DOMAIN_DMA) {
++			attr |= DMA_FL_PTE_ACCESS;
++			if (prot & DMA_PTE_WRITE)
++				attr |= DMA_FL_PTE_DIRTY;
++		}
+ 	}
+ 
++	pteval = ((phys_addr_t)phys_pfn << VTD_PAGE_SHIFT) | attr;
++
+ 	while (nr_pages > 0) {
+ 		uint64_t tmp;
+ 
+@@ -2506,6 +2526,10 @@ static void domain_context_clear_one(struct intel_iommu *iommu, u8 bus, u8 devfn
+ 				   (((u16)bus) << 8) | devfn,
+ 				   DMA_CCMD_MASK_NOBIT,
+ 				   DMA_CCMD_DEVICE_INVL);
++
++	if (sm_supported(iommu))
++		qi_flush_pasid_cache(iommu, did_old, QI_PC_ALL_PASIDS, 0);
++
+ 	iommu->flush.flush_iotlb(iommu,
+ 				 did_old,
+ 				 0,
+@@ -2599,6 +2623,9 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
+ 
+ 	flags |= (level == 5) ? PASID_FLAG_FL5LP : 0;
+ 
++	if (domain->domain.type == IOMMU_DOMAIN_UNMANAGED)
++		flags |= PASID_FLAG_PAGE_SNOOP;
++
+ 	return intel_pasid_setup_first_level(iommu, dev, (pgd_t *)pgd, pasid,
+ 					     domain->iommu_did[iommu->seq_id],
+ 					     flags);
+@@ -3369,8 +3396,6 @@ static int __init init_dmars(void)
+ 		register_pasid_allocator(iommu);
+ #endif
+ 		iommu_set_root_entry(iommu);
+-		iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+-		iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+ 	}
+ 
+ #ifdef CONFIG_INTEL_IOMMU_BROKEN_GFX_WA
+@@ -4148,12 +4173,7 @@ static int init_iommu_hw(void)
+ 		}
+ 
+ 		iommu_flush_write_buffer(iommu);
+-
+ 		iommu_set_root_entry(iommu);
+-
+-		iommu->flush.flush_context(iommu, 0, 0, 0,
+-					   DMA_CCMD_GLOBAL_INVL);
+-		iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+ 		iommu_enable_translation(iommu);
+ 		iommu_disable_protect_mem_regions(iommu);
+ 	}
+@@ -4481,8 +4501,6 @@ static int intel_iommu_add(struct dmar_drhd_unit *dmaru)
+ 		goto disable_iommu;
+ 
+ 	iommu_set_root_entry(iommu);
+-	iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+-	iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+ 	iommu_enable_translation(iommu);
+ 
+ 	iommu_disable_protect_mem_regions(iommu);
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index b92af83b79bdc..ce4ef2d245e3b 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -411,6 +411,16 @@ static inline void pasid_set_page_snoop(struct pasid_entry *pe, bool value)
+ 	pasid_set_bits(&pe->val[1], 1 << 23, value << 23);
+ }
+ 
++/*
++ * Setup the Page Snoop (PGSNP) field (Bit 88) of a scalable mode
++ * PASID entry.
++ */
++static inline void
++pasid_set_pgsnp(struct pasid_entry *pe)
++{
++	pasid_set_bits(&pe->val[1], 1ULL << 24, 1ULL << 24);
++}
++
+ /*
+  * Setup the First Level Page table Pointer field (Bit 140~191)
+  * of a scalable mode PASID entry.
+@@ -579,6 +589,9 @@ int intel_pasid_setup_first_level(struct intel_iommu *iommu,
+ 		}
+ 	}
+ 
++	if (flags & PASID_FLAG_PAGE_SNOOP)
++		pasid_set_pgsnp(pte);
++
+ 	pasid_set_domain_id(pte, did);
+ 	pasid_set_address_width(pte, iommu->agaw);
+ 	pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
+@@ -657,6 +670,9 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
+ 	pasid_set_fault_enable(pte);
+ 	pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
+ 
++	if (domain->domain.type == IOMMU_DOMAIN_UNMANAGED)
++		pasid_set_pgsnp(pte);
++
+ 	/*
+ 	 * Since it is a second level only translation setup, we should
+ 	 * set SRE bit as well (addresses are expected to be GPAs).
+diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
+index 444c0bec221a4..086ebd6973199 100644
+--- a/drivers/iommu/intel/pasid.h
++++ b/drivers/iommu/intel/pasid.h
+@@ -48,6 +48,7 @@
+  */
+ #define PASID_FLAG_SUPERVISOR_MODE	BIT(0)
+ #define PASID_FLAG_NESTED		BIT(1)
++#define PASID_FLAG_PAGE_SNOOP		BIT(2)
+ 
+ /*
+  * The PASID_FLAG_FL5LP flag Indicates using 5-level paging for first-
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index b200a3acc6ed9..6168dec7cb40d 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -899,7 +899,7 @@ intel_svm_prq_report(struct device *dev, struct page_req_dsc *desc)
+ 	/* Fill in event data for device specific processing */
+ 	memset(&event, 0, sizeof(struct iommu_fault_event));
+ 	event.fault.type = IOMMU_FAULT_PAGE_REQ;
+-	event.fault.prm.addr = desc->addr;
++	event.fault.prm.addr = (u64)desc->addr << VTD_PAGE_SHIFT;
+ 	event.fault.prm.pasid = desc->pasid;
+ 	event.fault.prm.grpid = desc->prg_index;
+ 	event.fault.prm.perm = prq_to_iommu_prot(desc);
+@@ -959,7 +959,17 @@ static irqreturn_t prq_event_thread(int irq, void *d)
+ 			       ((unsigned long long *)req)[1]);
+ 			goto no_pasid;
+ 		}
+-
++		/* We shall not receive page request for supervisor SVM */
++		if (req->pm_req && (req->rd_req | req->wr_req)) {
++			pr_err("Unexpected page request in Privilege Mode");
++			/* No need to find the matching sdev as for bad_req */
++			goto no_pasid;
++		}
++		/* DMA read with exec requeset is not supported. */
++		if (req->exe_req && req->rd_req) {
++			pr_err("Execution request not supported\n");
++			goto no_pasid;
++		}
+ 		if (!svm || svm->pasid != req->pasid) {
+ 			rcu_read_lock();
+ 			svm = ioasid_find(NULL, req->pasid, NULL);
+@@ -1061,12 +1071,12 @@ no_pasid:
+ 				QI_PGRP_RESP_TYPE;
+ 			resp.qw1 = QI_PGRP_IDX(req->prg_index) |
+ 				QI_PGRP_LPIG(req->lpig);
++			resp.qw2 = 0;
++			resp.qw3 = 0;
+ 
+ 			if (req->priv_data_present)
+ 				memcpy(&resp.qw2, req->priv_data,
+ 				       sizeof(req->priv_data));
+-			resp.qw2 = 0;
+-			resp.qw3 = 0;
+ 			qi_submit_sync(iommu, &resp, 1, 0);
+ 		}
+ prq_advance:
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 0d9adce6d812f..9b8664d388af0 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -2872,10 +2872,12 @@ EXPORT_SYMBOL_GPL(iommu_dev_has_feature);
+ 
+ int iommu_dev_enable_feature(struct device *dev, enum iommu_dev_features feat)
+ {
+-	const struct iommu_ops *ops = dev->bus->iommu_ops;
++	if (dev->iommu && dev->iommu->iommu_dev) {
++		const struct iommu_ops *ops = dev->iommu->iommu_dev->ops;
+ 
+-	if (ops && ops->dev_enable_feat)
+-		return ops->dev_enable_feat(dev, feat);
++		if (ops->dev_enable_feat)
++			return ops->dev_enable_feat(dev, feat);
++	}
+ 
+ 	return -ENODEV;
+ }
+@@ -2888,10 +2890,12 @@ EXPORT_SYMBOL_GPL(iommu_dev_enable_feature);
+  */
+ int iommu_dev_disable_feature(struct device *dev, enum iommu_dev_features feat)
+ {
+-	const struct iommu_ops *ops = dev->bus->iommu_ops;
++	if (dev->iommu && dev->iommu->iommu_dev) {
++		const struct iommu_ops *ops = dev->iommu->iommu_dev->ops;
+ 
+-	if (ops && ops->dev_disable_feat)
+-		return ops->dev_disable_feat(dev, feat);
++		if (ops->dev_disable_feat)
++			return ops->dev_disable_feat(dev, feat);
++	}
+ 
+ 	return -EBUSY;
+ }
+@@ -2899,10 +2903,12 @@ EXPORT_SYMBOL_GPL(iommu_dev_disable_feature);
+ 
+ bool iommu_dev_feature_enabled(struct device *dev, enum iommu_dev_features feat)
+ {
+-	const struct iommu_ops *ops = dev->bus->iommu_ops;
++	if (dev->iommu && dev->iommu->iommu_dev) {
++		const struct iommu_ops *ops = dev->iommu->iommu_dev->ops;
+ 
+-	if (ops && ops->dev_feat_enabled)
+-		return ops->dev_feat_enabled(dev, feat);
++		if (ops->dev_feat_enabled)
++			return ops->dev_feat_enabled(dev, feat);
++	}
+ 
+ 	return false;
+ }
+diff --git a/drivers/irqchip/irq-gic-v3-mbi.c b/drivers/irqchip/irq-gic-v3-mbi.c
+index 563a9b3662941..e81e89a81cb5b 100644
+--- a/drivers/irqchip/irq-gic-v3-mbi.c
++++ b/drivers/irqchip/irq-gic-v3-mbi.c
+@@ -303,7 +303,7 @@ int __init mbi_init(struct fwnode_handle *fwnode, struct irq_domain *parent)
+ 	reg = of_get_property(np, "mbi-alias", NULL);
+ 	if (reg) {
+ 		mbi_phys_base = of_translate_address(np, reg);
+-		if (mbi_phys_base == OF_BAD_ADDR) {
++		if (mbi_phys_base == (phys_addr_t)OF_BAD_ADDR) {
+ 			ret = -ENXIO;
+ 			goto err_free_mbi;
+ 		}
+diff --git a/drivers/mailbox/sprd-mailbox.c b/drivers/mailbox/sprd-mailbox.c
+index 4c325301a2fe8..94d9067dc8d09 100644
+--- a/drivers/mailbox/sprd-mailbox.c
++++ b/drivers/mailbox/sprd-mailbox.c
+@@ -60,6 +60,8 @@ struct sprd_mbox_priv {
+ 	struct clk		*clk;
+ 	u32			outbox_fifo_depth;
+ 
++	struct mutex		lock;
++	u32			refcnt;
+ 	struct mbox_chan	chan[SPRD_MBOX_CHAN_MAX];
+ };
+ 
+@@ -115,7 +117,11 @@ static irqreturn_t sprd_mbox_outbox_isr(int irq, void *data)
+ 		id = readl(priv->outbox_base + SPRD_MBOX_ID);
+ 
+ 		chan = &priv->chan[id];
+-		mbox_chan_received_data(chan, (void *)msg);
++		if (chan->cl)
++			mbox_chan_received_data(chan, (void *)msg);
++		else
++			dev_warn_ratelimited(priv->dev,
++				    "message's been dropped at ch[%d]\n", id);
+ 
+ 		/* Trigger to update outbox FIFO pointer */
+ 		writel(0x1, priv->outbox_base + SPRD_MBOX_TRIGGER);
+@@ -215,18 +221,22 @@ static int sprd_mbox_startup(struct mbox_chan *chan)
+ 	struct sprd_mbox_priv *priv = to_sprd_mbox_priv(chan->mbox);
+ 	u32 val;
+ 
+-	/* Select outbox FIFO mode and reset the outbox FIFO status */
+-	writel(0x0, priv->outbox_base + SPRD_MBOX_FIFO_RST);
++	mutex_lock(&priv->lock);
++	if (priv->refcnt++ == 0) {
++		/* Select outbox FIFO mode and reset the outbox FIFO status */
++		writel(0x0, priv->outbox_base + SPRD_MBOX_FIFO_RST);
+ 
+-	/* Enable inbox FIFO overflow and delivery interrupt */
+-	val = readl(priv->inbox_base + SPRD_MBOX_IRQ_MSK);
+-	val &= ~(SPRD_INBOX_FIFO_OVERFLOW_IRQ | SPRD_INBOX_FIFO_DELIVER_IRQ);
+-	writel(val, priv->inbox_base + SPRD_MBOX_IRQ_MSK);
++		/* Enable inbox FIFO overflow and delivery interrupt */
++		val = readl(priv->inbox_base + SPRD_MBOX_IRQ_MSK);
++		val &= ~(SPRD_INBOX_FIFO_OVERFLOW_IRQ | SPRD_INBOX_FIFO_DELIVER_IRQ);
++		writel(val, priv->inbox_base + SPRD_MBOX_IRQ_MSK);
+ 
+-	/* Enable outbox FIFO not empty interrupt */
+-	val = readl(priv->outbox_base + SPRD_MBOX_IRQ_MSK);
+-	val &= ~SPRD_OUTBOX_FIFO_NOT_EMPTY_IRQ;
+-	writel(val, priv->outbox_base + SPRD_MBOX_IRQ_MSK);
++		/* Enable outbox FIFO not empty interrupt */
++		val = readl(priv->outbox_base + SPRD_MBOX_IRQ_MSK);
++		val &= ~SPRD_OUTBOX_FIFO_NOT_EMPTY_IRQ;
++		writel(val, priv->outbox_base + SPRD_MBOX_IRQ_MSK);
++	}
++	mutex_unlock(&priv->lock);
+ 
+ 	return 0;
+ }
+@@ -235,9 +245,13 @@ static void sprd_mbox_shutdown(struct mbox_chan *chan)
+ {
+ 	struct sprd_mbox_priv *priv = to_sprd_mbox_priv(chan->mbox);
+ 
+-	/* Disable inbox & outbox interrupt */
+-	writel(SPRD_INBOX_FIFO_IRQ_MASK, priv->inbox_base + SPRD_MBOX_IRQ_MSK);
+-	writel(SPRD_OUTBOX_FIFO_IRQ_MASK, priv->outbox_base + SPRD_MBOX_IRQ_MSK);
++	mutex_lock(&priv->lock);
++	if (--priv->refcnt == 0) {
++		/* Disable inbox & outbox interrupt */
++		writel(SPRD_INBOX_FIFO_IRQ_MASK, priv->inbox_base + SPRD_MBOX_IRQ_MSK);
++		writel(SPRD_OUTBOX_FIFO_IRQ_MASK, priv->outbox_base + SPRD_MBOX_IRQ_MSK);
++	}
++	mutex_unlock(&priv->lock);
+ }
+ 
+ static const struct mbox_chan_ops sprd_mbox_ops = {
+@@ -266,6 +280,7 @@ static int sprd_mbox_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	priv->dev = dev;
++	mutex_init(&priv->lock);
+ 
+ 	/*
+ 	 * The Spreadtrum mailbox uses an inbox to send messages to the target
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 200c5d0f08bf5..ea3130e116801 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -1722,6 +1722,8 @@ void md_bitmap_flush(struct mddev *mddev)
+ 	md_bitmap_daemon_work(mddev);
+ 	bitmap->daemon_lastrun -= sleep;
+ 	md_bitmap_daemon_work(mddev);
++	if (mddev->bitmap_info.external)
++		md_super_wait(mddev);
+ 	md_bitmap_update_sb(bitmap);
+ }
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 7a0a228d64bbe..288d26013de27 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -748,7 +748,34 @@ void mddev_init(struct mddev *mddev)
+ }
+ EXPORT_SYMBOL_GPL(mddev_init);
+ 
++static struct mddev *mddev_find_locked(dev_t unit)
++{
++	struct mddev *mddev;
++
++	list_for_each_entry(mddev, &all_mddevs, all_mddevs)
++		if (mddev->unit == unit)
++			return mddev;
++
++	return NULL;
++}
++
+ static struct mddev *mddev_find(dev_t unit)
++{
++	struct mddev *mddev;
++
++	if (MAJOR(unit) != MD_MAJOR)
++		unit &= ~((1 << MdpMinorShift) - 1);
++
++	spin_lock(&all_mddevs_lock);
++	mddev = mddev_find_locked(unit);
++	if (mddev)
++		mddev_get(mddev);
++	spin_unlock(&all_mddevs_lock);
++
++	return mddev;
++}
++
++static struct mddev *mddev_find_or_alloc(dev_t unit)
+ {
+ 	struct mddev *mddev, *new = NULL;
+ 
+@@ -759,13 +786,13 @@ static struct mddev *mddev_find(dev_t unit)
+ 	spin_lock(&all_mddevs_lock);
+ 
+ 	if (unit) {
+-		list_for_each_entry(mddev, &all_mddevs, all_mddevs)
+-			if (mddev->unit == unit) {
+-				mddev_get(mddev);
+-				spin_unlock(&all_mddevs_lock);
+-				kfree(new);
+-				return mddev;
+-			}
++		mddev = mddev_find_locked(unit);
++		if (mddev) {
++			mddev_get(mddev);
++			spin_unlock(&all_mddevs_lock);
++			kfree(new);
++			return mddev;
++		}
+ 
+ 		if (new) {
+ 			list_add(&new->all_mddevs, &all_mddevs);
+@@ -791,12 +818,7 @@ static struct mddev *mddev_find(dev_t unit)
+ 				return NULL;
+ 			}
+ 
+-			is_free = 1;
+-			list_for_each_entry(mddev, &all_mddevs, all_mddevs)
+-				if (mddev->unit == dev) {
+-					is_free = 0;
+-					break;
+-				}
++			is_free = !mddev_find_locked(dev);
+ 		}
+ 		new->unit = dev;
+ 		new->md_minor = MINOR(dev);
+@@ -5656,7 +5678,7 @@ static int md_alloc(dev_t dev, char *name)
+ 	 * writing to /sys/module/md_mod/parameters/new_array.
+ 	 */
+ 	static DEFINE_MUTEX(disks_mutex);
+-	struct mddev *mddev = mddev_find(dev);
++	struct mddev *mddev = mddev_find_or_alloc(dev);
+ 	struct gendisk *disk;
+ 	int partitioned;
+ 	int shift;
+@@ -6539,11 +6561,9 @@ static void autorun_devices(int part)
+ 
+ 		md_probe(dev, NULL, NULL);
+ 		mddev = mddev_find(dev);
+-		if (!mddev || !mddev->gendisk) {
+-			if (mddev)
+-				mddev_put(mddev);
++		if (!mddev)
+ 			break;
+-		}
++
+ 		if (mddev_lock(mddev))
+ 			pr_warn("md: %s locked, cannot run\n", mdname(mddev));
+ 		else if (mddev->raid_disks || mddev->major_version
+@@ -7837,8 +7857,7 @@ static int md_open(struct block_device *bdev, fmode_t mode)
+ 		/* Wait until bdev->bd_disk is definitely gone */
+ 		if (work_pending(&mddev->del_work))
+ 			flush_workqueue(md_misc_wq);
+-		/* Then retry the open from the top */
+-		return -ERESTARTSYS;
++		return -EBUSY;
+ 	}
+ 	BUG_ON(mddev != bdev->bd_disk->private_data);
+ 
+@@ -8168,7 +8187,11 @@ static void *md_seq_start(struct seq_file *seq, loff_t *pos)
+ 	loff_t l = *pos;
+ 	struct mddev *mddev;
+ 
+-	if (l >= 0x10000)
++	if (l == 0x10000) {
++		++*pos;
++		return (void *)2;
++	}
++	if (l > 0x10000)
+ 		return NULL;
+ 	if (!l--)
+ 		/* header */
+@@ -9267,11 +9290,11 @@ void md_check_recovery(struct mddev *mddev)
+ 		}
+ 
+ 		if (mddev_is_clustered(mddev)) {
+-			struct md_rdev *rdev;
++			struct md_rdev *rdev, *tmp;
+ 			/* kick the device if another node issued a
+ 			 * remove disk.
+ 			 */
+-			rdev_for_each(rdev, mddev) {
++			rdev_for_each_safe(rdev, tmp, mddev) {
+ 				if (test_and_clear_bit(ClusterRemove, &rdev->flags) &&
+ 						rdev->raid_disk < 0)
+ 					md_kick_rdev_from_array(rdev);
+@@ -9588,7 +9611,7 @@ err_wq:
+ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
+ {
+ 	struct mdp_superblock_1 *sb = page_address(rdev->sb_page);
+-	struct md_rdev *rdev2;
++	struct md_rdev *rdev2, *tmp;
+ 	int role, ret;
+ 	char b[BDEVNAME_SIZE];
+ 
+@@ -9605,7 +9628,7 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
+ 	}
+ 
+ 	/* Check for change of roles in the active devices */
+-	rdev_for_each(rdev2, mddev) {
++	rdev_for_each_safe(rdev2, tmp, mddev) {
+ 		if (test_bit(Faulty, &rdev2->flags))
+ 			continue;
+ 
+diff --git a/drivers/media/common/saa7146/saa7146_core.c b/drivers/media/common/saa7146/saa7146_core.c
+index 21fb16cc5ca1e..e43edb0d76f4b 100644
+--- a/drivers/media/common/saa7146/saa7146_core.c
++++ b/drivers/media/common/saa7146/saa7146_core.c
+@@ -253,7 +253,7 @@ int saa7146_pgtable_build_single(struct pci_dev *pci, struct saa7146_pgtable *pt
+ 			 i, sg_dma_address(list), sg_dma_len(list),
+ 			 list->offset);
+ */
+-		for (p = 0; p * 4096 < list->length; p++, ptr++) {
++		for (p = 0; p * 4096 < sg_dma_len(list); p++, ptr++) {
+ 			*ptr = cpu_to_le32(sg_dma_address(list) + p * 4096);
+ 			nr_pages++;
+ 		}
+diff --git a/drivers/media/common/saa7146/saa7146_video.c b/drivers/media/common/saa7146/saa7146_video.c
+index ccd15b4d4920b..0d1be4042a403 100644
+--- a/drivers/media/common/saa7146/saa7146_video.c
++++ b/drivers/media/common/saa7146/saa7146_video.c
+@@ -247,9 +247,8 @@ static int saa7146_pgtable_build(struct saa7146_dev *dev, struct saa7146_buf *bu
+ 
+ 		/* walk all pages, copy all page addresses to ptr1 */
+ 		for (i = 0; i < length; i++, list++) {
+-			for (p = 0; p * 4096 < list->length; p++, ptr1++) {
++			for (p = 0; p * 4096 < sg_dma_len(list); p++, ptr1++)
+ 				*ptr1 = cpu_to_le32(sg_dma_address(list) - list->offset);
+-			}
+ 		}
+ /*
+ 		ptr1 = pt1->cpu;
+diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c
+index ad6d9d564a87e..c120cffb52ad4 100644
+--- a/drivers/media/dvb-frontends/m88ds3103.c
++++ b/drivers/media/dvb-frontends/m88ds3103.c
+@@ -1904,8 +1904,8 @@ static int m88ds3103_probe(struct i2c_client *client,
+ 
+ 		dev->dt_client = i2c_new_dummy_device(client->adapter,
+ 						      dev->dt_addr);
+-		if (!dev->dt_client) {
+-			ret = -ENODEV;
++		if (IS_ERR(dev->dt_client)) {
++			ret = PTR_ERR(dev->dt_client);
+ 			goto err_kfree;
+ 		}
+ 	}
+diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
+index 0ae66091a6962..4771d0ef2c46f 100644
+--- a/drivers/media/i2c/imx219.c
++++ b/drivers/media/i2c/imx219.c
+@@ -1026,29 +1026,47 @@ static int imx219_start_streaming(struct imx219 *imx219)
+ 	const struct imx219_reg_list *reg_list;
+ 	int ret;
+ 
++	ret = pm_runtime_get_sync(&client->dev);
++	if (ret < 0) {
++		pm_runtime_put_noidle(&client->dev);
++		return ret;
++	}
++
+ 	/* Apply default values of current mode */
+ 	reg_list = &imx219->mode->reg_list;
+ 	ret = imx219_write_regs(imx219, reg_list->regs, reg_list->num_of_regs);
+ 	if (ret) {
+ 		dev_err(&client->dev, "%s failed to set mode\n", __func__);
+-		return ret;
++		goto err_rpm_put;
+ 	}
+ 
+ 	ret = imx219_set_framefmt(imx219);
+ 	if (ret) {
+ 		dev_err(&client->dev, "%s failed to set frame format: %d\n",
+ 			__func__, ret);
+-		return ret;
++		goto err_rpm_put;
+ 	}
+ 
+ 	/* Apply customized values from user */
+ 	ret =  __v4l2_ctrl_handler_setup(imx219->sd.ctrl_handler);
+ 	if (ret)
+-		return ret;
++		goto err_rpm_put;
+ 
+ 	/* set stream on register */
+-	return imx219_write_reg(imx219, IMX219_REG_MODE_SELECT,
+-				IMX219_REG_VALUE_08BIT, IMX219_MODE_STREAMING);
++	ret = imx219_write_reg(imx219, IMX219_REG_MODE_SELECT,
++			       IMX219_REG_VALUE_08BIT, IMX219_MODE_STREAMING);
++	if (ret)
++		goto err_rpm_put;
++
++	/* vflip and hflip cannot change during streaming */
++	__v4l2_ctrl_grab(imx219->vflip, true);
++	__v4l2_ctrl_grab(imx219->hflip, true);
++
++	return 0;
++
++err_rpm_put:
++	pm_runtime_put(&client->dev);
++	return ret;
+ }
+ 
+ static void imx219_stop_streaming(struct imx219 *imx219)
+@@ -1061,12 +1079,16 @@ static void imx219_stop_streaming(struct imx219 *imx219)
+ 			       IMX219_REG_VALUE_08BIT, IMX219_MODE_STANDBY);
+ 	if (ret)
+ 		dev_err(&client->dev, "%s failed to set stream\n", __func__);
++
++	__v4l2_ctrl_grab(imx219->vflip, false);
++	__v4l2_ctrl_grab(imx219->hflip, false);
++
++	pm_runtime_put(&client->dev);
+ }
+ 
+ static int imx219_set_stream(struct v4l2_subdev *sd, int enable)
+ {
+ 	struct imx219 *imx219 = to_imx219(sd);
+-	struct i2c_client *client = v4l2_get_subdevdata(sd);
+ 	int ret = 0;
+ 
+ 	mutex_lock(&imx219->mutex);
+@@ -1076,36 +1098,23 @@ static int imx219_set_stream(struct v4l2_subdev *sd, int enable)
+ 	}
+ 
+ 	if (enable) {
+-		ret = pm_runtime_get_sync(&client->dev);
+-		if (ret < 0) {
+-			pm_runtime_put_noidle(&client->dev);
+-			goto err_unlock;
+-		}
+-
+ 		/*
+ 		 * Apply default & customized values
+ 		 * and then start streaming.
+ 		 */
+ 		ret = imx219_start_streaming(imx219);
+ 		if (ret)
+-			goto err_rpm_put;
++			goto err_unlock;
+ 	} else {
+ 		imx219_stop_streaming(imx219);
+-		pm_runtime_put(&client->dev);
+ 	}
+ 
+ 	imx219->streaming = enable;
+ 
+-	/* vflip and hflip cannot change during streaming */
+-	__v4l2_ctrl_grab(imx219->vflip, enable);
+-	__v4l2_ctrl_grab(imx219->hflip, enable);
+-
+ 	mutex_unlock(&imx219->mutex);
+ 
+ 	return ret;
+ 
+-err_rpm_put:
+-	pm_runtime_put(&client->dev);
+ err_unlock:
+ 	mutex_unlock(&imx219->mutex);
+ 
+diff --git a/drivers/media/pci/saa7134/saa7134-core.c b/drivers/media/pci/saa7134/saa7134-core.c
+index 391572a6ec76a..efb757d5168a6 100644
+--- a/drivers/media/pci/saa7134/saa7134-core.c
++++ b/drivers/media/pci/saa7134/saa7134-core.c
+@@ -243,7 +243,7 @@ int saa7134_pgtable_build(struct pci_dev *pci, struct saa7134_pgtable *pt,
+ 
+ 	ptr = pt->cpu + startpage;
+ 	for (i = 0; i < length; i++, list = sg_next(list)) {
+-		for (p = 0; p * 4096 < list->length; p++, ptr++)
++		for (p = 0; p * 4096 < sg_dma_len(list); p++, ptr++)
+ 			*ptr = cpu_to_le32(sg_dma_address(list) +
+ 						list->offset + p * 4096);
+ 	}
+diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
+index f2c4dadd6a0eb..7bb6babdcade0 100644
+--- a/drivers/media/platform/aspeed-video.c
++++ b/drivers/media/platform/aspeed-video.c
+@@ -514,8 +514,8 @@ static void aspeed_video_off(struct aspeed_video *video)
+ 	aspeed_video_write(video, VE_INTERRUPT_STATUS, 0xffffffff);
+ 
+ 	/* Turn off the relevant clocks */
+-	clk_disable(video->vclk);
+ 	clk_disable(video->eclk);
++	clk_disable(video->vclk);
+ 
+ 	clear_bit(VIDEO_CLOCKS_ON, &video->flags);
+ }
+@@ -526,8 +526,8 @@ static void aspeed_video_on(struct aspeed_video *video)
+ 		return;
+ 
+ 	/* Turn on the relevant clocks */
+-	clk_enable(video->eclk);
+ 	clk_enable(video->vclk);
++	clk_enable(video->eclk);
+ 
+ 	set_bit(VIDEO_CLOCKS_ON, &video->flags);
+ }
+@@ -1719,8 +1719,11 @@ static int aspeed_video_probe(struct platform_device *pdev)
+ 		return rc;
+ 
+ 	rc = aspeed_video_setup_video(video);
+-	if (rc)
++	if (rc) {
++		clk_unprepare(video->vclk);
++		clk_unprepare(video->eclk);
+ 		return rc;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index d5bfd6fff85b4..fd5993b3e6743 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -195,11 +195,11 @@ static int venus_probe(struct platform_device *pdev)
+ 	if (IS_ERR(core->base))
+ 		return PTR_ERR(core->base);
+ 
+-	core->video_path = of_icc_get(dev, "video-mem");
++	core->video_path = devm_of_icc_get(dev, "video-mem");
+ 	if (IS_ERR(core->video_path))
+ 		return PTR_ERR(core->video_path);
+ 
+-	core->cpucfg_path = of_icc_get(dev, "cpu-cfg");
++	core->cpucfg_path = devm_of_icc_get(dev, "cpu-cfg");
+ 	if (IS_ERR(core->cpucfg_path))
+ 		return PTR_ERR(core->cpucfg_path);
+ 
+@@ -334,9 +334,6 @@ static int venus_remove(struct platform_device *pdev)
+ 
+ 	hfi_destroy(core);
+ 
+-	icc_put(core->video_path);
+-	icc_put(core->cpucfg_path);
+-
+ 	v4l2_device_unregister(&core->v4l2_dev);
+ 	mutex_destroy(&core->pm_lock);
+ 	mutex_destroy(&core->lock);
+diff --git a/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c b/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
+index b55de9ab64d8b..3181d0781b613 100644
+--- a/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
++++ b/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
+@@ -151,8 +151,10 @@ static int sun6i_video_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 	}
+ 
+ 	subdev = sun6i_video_remote_subdev(video, NULL);
+-	if (!subdev)
++	if (!subdev) {
++		ret = -EINVAL;
+ 		goto stop_media_pipeline;
++	}
+ 
+ 	config.pixelformat = video->fmt.fmt.pix.pixelformat;
+ 	config.code = video->mbus_code;
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-out.c b/drivers/media/test-drivers/vivid/vivid-vid-out.c
+index ee3446e3217cc..cd6c247547d66 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-out.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-out.c
+@@ -1025,7 +1025,7 @@ int vivid_vid_out_s_fbuf(struct file *file, void *fh,
+ 		return -EINVAL;
+ 	}
+ 	dev->fbuf_out_flags &= ~(chroma_flags | alpha_flags);
+-	dev->fbuf_out_flags = a->flags & (chroma_flags | alpha_flags);
++	dev->fbuf_out_flags |= a->flags & (chroma_flags | alpha_flags);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/tuners/m88rs6000t.c b/drivers/media/tuners/m88rs6000t.c
+index b3505f4024764..8647c50b66e50 100644
+--- a/drivers/media/tuners/m88rs6000t.c
++++ b/drivers/media/tuners/m88rs6000t.c
+@@ -525,7 +525,7 @@ static int m88rs6000t_get_rf_strength(struct dvb_frontend *fe, u16 *strength)
+ 	PGA2_cri = PGA2_GC >> 2;
+ 	PGA2_crf = PGA2_GC & 0x03;
+ 
+-	for (i = 0; i <= RF_GC; i++)
++	for (i = 0; i <= RF_GC && i < ARRAY_SIZE(RFGS); i++)
+ 		RFG += RFGS[i];
+ 
+ 	if (RF_GC == 0)
+@@ -537,12 +537,12 @@ static int m88rs6000t_get_rf_strength(struct dvb_frontend *fe, u16 *strength)
+ 	if (RF_GC == 3)
+ 		RFG += 100;
+ 
+-	for (i = 0; i <= IF_GC; i++)
++	for (i = 0; i <= IF_GC && i < ARRAY_SIZE(IFGS); i++)
+ 		IFG += IFGS[i];
+ 
+ 	TIAG = TIA_GC * TIA_GS;
+ 
+-	for (i = 0; i <= BB_GC; i++)
++	for (i = 0; i <= BB_GC && i < ARRAY_SIZE(BBGS); i++)
+ 		BBG += BBGS[i];
+ 
+ 	PGA2G = PGA2_cri * PGA2_cri_GS + PGA2_crf * PGA2_crf_GS;
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index 3d8c54b826e99..41f8410d08d65 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -2356,7 +2356,15 @@ void v4l2_ctrl_handler_free(struct v4l2_ctrl_handler *hdl)
+ 	if (hdl == NULL || hdl->buckets == NULL)
+ 		return;
+ 
+-	if (!hdl->req_obj.req && !list_empty(&hdl->requests)) {
++	/*
++	 * If the main handler is freed and it is used by handler objects in
++	 * outstanding requests, then unbind and put those objects before
++	 * freeing the main handler.
++	 *
++	 * The main handler can be identified by having a NULL ops pointer in
++	 * the request object.
++	 */
++	if (!hdl->req_obj.ops && !list_empty(&hdl->requests)) {
+ 		struct v4l2_ctrl_handler *req, *next_req;
+ 
+ 		list_for_each_entry_safe(req, next_req, &hdl->requests, requests) {
+@@ -3402,8 +3410,8 @@ static void v4l2_ctrl_request_unbind(struct media_request_object *obj)
+ 		container_of(obj, struct v4l2_ctrl_handler, req_obj);
+ 	struct v4l2_ctrl_handler *main_hdl = obj->priv;
+ 
+-	list_del_init(&hdl->requests);
+ 	mutex_lock(main_hdl->lock);
++	list_del_init(&hdl->requests);
+ 	if (hdl->request_is_queued) {
+ 		list_del_init(&hdl->requests_queued);
+ 		hdl->request_is_queued = false;
+@@ -3462,8 +3470,11 @@ static int v4l2_ctrl_request_bind(struct media_request *req,
+ 	if (!ret) {
+ 		ret = media_request_object_bind(req, &req_ops,
+ 						from, false, &hdl->req_obj);
+-		if (!ret)
++		if (!ret) {
++			mutex_lock(from->lock);
+ 			list_add_tail(&hdl->requests, &from->requests);
++			mutex_unlock(from->lock);
++		}
+ 	}
+ 	return ret;
+ }
+diff --git a/drivers/memory/omap-gpmc.c b/drivers/memory/omap-gpmc.c
+index cfa730cfd1453..f80c2ea39ca4c 100644
+--- a/drivers/memory/omap-gpmc.c
++++ b/drivers/memory/omap-gpmc.c
+@@ -1009,8 +1009,8 @@ EXPORT_SYMBOL(gpmc_cs_request);
+ 
+ void gpmc_cs_free(int cs)
+ {
+-	struct gpmc_cs_data *gpmc = &gpmc_cs[cs];
+-	struct resource *res = &gpmc->mem;
++	struct gpmc_cs_data *gpmc;
++	struct resource *res;
+ 
+ 	spin_lock(&gpmc_mem_lock);
+ 	if (cs >= gpmc_cs_num || cs < 0 || !gpmc_cs_reserved(cs)) {
+@@ -1018,6 +1018,9 @@ void gpmc_cs_free(int cs)
+ 		spin_unlock(&gpmc_mem_lock);
+ 		return;
+ 	}
++	gpmc = &gpmc_cs[cs];
++	res = &gpmc->mem;
++
+ 	gpmc_cs_disable_mem(cs);
+ 	if (res->flags)
+ 		release_resource(res);
+diff --git a/drivers/memory/pl353-smc.c b/drivers/memory/pl353-smc.c
+index 73bd3023202f0..b42804b1801e6 100644
+--- a/drivers/memory/pl353-smc.c
++++ b/drivers/memory/pl353-smc.c
+@@ -63,7 +63,7 @@
+ /* ECC memory config register specific constants */
+ #define PL353_SMC_ECC_MEMCFG_MODE_MASK	0xC
+ #define PL353_SMC_ECC_MEMCFG_MODE_SHIFT	2
+-#define PL353_SMC_ECC_MEMCFG_PGSIZE_MASK	0xC
++#define PL353_SMC_ECC_MEMCFG_PGSIZE_MASK	0x3
+ 
+ #define PL353_SMC_DC_UPT_NAND_REGS	((4 << 23) |	/* CS: NAND chip */ \
+ 				 (2 << 21))	/* UpdateRegs operation */
+diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
+index da0fdb4c75959..1fe6c35b7503e 100644
+--- a/drivers/memory/renesas-rpc-if.c
++++ b/drivers/memory/renesas-rpc-if.c
+@@ -193,10 +193,10 @@ int rpcif_sw_init(struct rpcif *rpc, struct device *dev)
+ 	}
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dirmap");
+-	rpc->size = resource_size(res);
+ 	rpc->dirmap = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(rpc->dirmap))
+ 		rpc->dirmap = NULL;
++	rpc->size = resource_size(res);
+ 
+ 	rpc->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+ 
+diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c
+index c5ee4121a4d22..3d230f07eaf21 100644
+--- a/drivers/memory/samsung/exynos5422-dmc.c
++++ b/drivers/memory/samsung/exynos5422-dmc.c
+@@ -1298,7 +1298,9 @@ static int exynos5_dmc_init_clks(struct exynos5_dmc *dmc)
+ 
+ 	dmc->curr_volt = target_volt;
+ 
+-	clk_set_parent(dmc->mout_mx_mspll_ccore, dmc->mout_spll);
++	ret = clk_set_parent(dmc->mout_mx_mspll_ccore, dmc->mout_spll);
++	if (ret)
++		return ret;
+ 
+ 	clk_prepare_enable(dmc->fout_bpll);
+ 	clk_prepare_enable(dmc->mout_bpll);
+diff --git a/drivers/mfd/stm32-timers.c b/drivers/mfd/stm32-timers.c
+index add6033591242..44ed2fce03196 100644
+--- a/drivers/mfd/stm32-timers.c
++++ b/drivers/mfd/stm32-timers.c
+@@ -158,13 +158,18 @@ static const struct regmap_config stm32_timers_regmap_cfg = {
+ 
+ static void stm32_timers_get_arr_size(struct stm32_timers *ddata)
+ {
++	u32 arr;
++
++	/* Backup ARR to restore it after getting the maximum value */
++	regmap_read(ddata->regmap, TIM_ARR, &arr);
++
+ 	/*
+ 	 * Only the available bits will be written so when readback
+ 	 * we get the maximum value of auto reload register
+ 	 */
+ 	regmap_write(ddata->regmap, TIM_ARR, ~0L);
+ 	regmap_read(ddata->regmap, TIM_ARR, &ddata->max_arr);
+-	regmap_write(ddata->regmap, TIM_ARR, 0x0);
++	regmap_write(ddata->regmap, TIM_ARR, arr);
+ }
+ 
+ static int stm32_timers_dma_probe(struct device *dev,
+diff --git a/drivers/misc/lis3lv02d/lis3lv02d.c b/drivers/misc/lis3lv02d/lis3lv02d.c
+index dd65cedf3b125..9d14bf444481b 100644
+--- a/drivers/misc/lis3lv02d/lis3lv02d.c
++++ b/drivers/misc/lis3lv02d/lis3lv02d.c
+@@ -208,7 +208,7 @@ static int lis3_3dc_rates[16] = {0, 1, 10, 25, 50, 100, 200, 400, 1600, 5000};
+ static int lis3_3dlh_rates[4] = {50, 100, 400, 1000};
+ 
+ /* ODR is Output Data Rate */
+-static int lis3lv02d_get_odr(struct lis3lv02d *lis3)
++static int lis3lv02d_get_odr_index(struct lis3lv02d *lis3)
+ {
+ 	u8 ctrl;
+ 	int shift;
+@@ -216,15 +216,23 @@ static int lis3lv02d_get_odr(struct lis3lv02d *lis3)
+ 	lis3->read(lis3, CTRL_REG1, &ctrl);
+ 	ctrl &= lis3->odr_mask;
+ 	shift = ffs(lis3->odr_mask) - 1;
+-	return lis3->odrs[(ctrl >> shift)];
++	return (ctrl >> shift);
+ }
+ 
+ static int lis3lv02d_get_pwron_wait(struct lis3lv02d *lis3)
+ {
+-	int div = lis3lv02d_get_odr(lis3);
++	int odr_idx = lis3lv02d_get_odr_index(lis3);
++	int div = lis3->odrs[odr_idx];
+ 
+-	if (WARN_ONCE(div == 0, "device returned spurious data"))
++	if (div == 0) {
++		if (odr_idx == 0) {
++			/* Power-down mode, not sampling no need to sleep */
++			return 0;
++		}
++
++		dev_err(&lis3->pdev->dev, "Error unknown odrs-index: %d\n", odr_idx);
+ 		return -ENXIO;
++	}
+ 
+ 	/* LIS3 power on delay is quite long */
+ 	msleep(lis3->pwron_delay / div);
+@@ -816,9 +824,12 @@ static ssize_t lis3lv02d_rate_show(struct device *dev,
+ 			struct device_attribute *attr, char *buf)
+ {
+ 	struct lis3lv02d *lis3 = dev_get_drvdata(dev);
++	int odr_idx;
+ 
+ 	lis3lv02d_sysfs_poweron(lis3);
+-	return sprintf(buf, "%d\n", lis3lv02d_get_odr(lis3));
++
++	odr_idx = lis3lv02d_get_odr_index(lis3);
++	return sprintf(buf, "%d\n", lis3->odrs[odr_idx]);
+ }
+ 
+ static ssize_t lis3lv02d_rate_set(struct device *dev,
+diff --git a/drivers/misc/vmw_vmci/vmci_doorbell.c b/drivers/misc/vmw_vmci/vmci_doorbell.c
+index 345addd9306de..fa8a7fce4481b 100644
+--- a/drivers/misc/vmw_vmci/vmci_doorbell.c
++++ b/drivers/misc/vmw_vmci/vmci_doorbell.c
+@@ -326,7 +326,7 @@ int vmci_dbell_host_context_notify(u32 src_cid, struct vmci_handle handle)
+ bool vmci_dbell_register_notification_bitmap(u64 bitmap_ppn)
+ {
+ 	int result;
+-	struct vmci_notify_bm_set_msg bitmap_set_msg;
++	struct vmci_notify_bm_set_msg bitmap_set_msg = { };
+ 
+ 	bitmap_set_msg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+ 						  VMCI_SET_NOTIFY_BITMAP);
+diff --git a/drivers/misc/vmw_vmci/vmci_guest.c b/drivers/misc/vmw_vmci/vmci_guest.c
+index cc8eeb361fcdb..1018dc77269d4 100644
+--- a/drivers/misc/vmw_vmci/vmci_guest.c
++++ b/drivers/misc/vmw_vmci/vmci_guest.c
+@@ -168,7 +168,7 @@ static int vmci_check_host_caps(struct pci_dev *pdev)
+ 				VMCI_UTIL_NUM_RESOURCES * sizeof(u32);
+ 	struct vmci_datagram *check_msg;
+ 
+-	check_msg = kmalloc(msg_size, GFP_KERNEL);
++	check_msg = kzalloc(msg_size, GFP_KERNEL);
+ 	if (!check_msg) {
+ 		dev_err(&pdev->dev, "%s: Insufficient memory\n", __func__);
+ 		return -ENOMEM;
+diff --git a/drivers/mtd/maps/physmap-core.c b/drivers/mtd/maps/physmap-core.c
+index 001ed5deb622a..4f63b8430c710 100644
+--- a/drivers/mtd/maps/physmap-core.c
++++ b/drivers/mtd/maps/physmap-core.c
+@@ -69,8 +69,10 @@ static int physmap_flash_remove(struct platform_device *dev)
+ 	int i, err = 0;
+ 
+ 	info = platform_get_drvdata(dev);
+-	if (!info)
++	if (!info) {
++		err = -EINVAL;
+ 		goto out;
++	}
+ 
+ 	if (info->cmtd) {
+ 		err = mtd_device_unregister(info->cmtd);
+diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
+index b40f46a43fc66..69fb5dafa9ad6 100644
+--- a/drivers/mtd/mtdchar.c
++++ b/drivers/mtd/mtdchar.c
+@@ -651,16 +651,12 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
+ 	case MEMGETINFO:
+ 	case MEMREADOOB:
+ 	case MEMREADOOB64:
+-	case MEMLOCK:
+-	case MEMUNLOCK:
+ 	case MEMISLOCKED:
+ 	case MEMGETOOBSEL:
+ 	case MEMGETBADBLOCK:
+-	case MEMSETBADBLOCK:
+ 	case OTPSELECT:
+ 	case OTPGETREGIONCOUNT:
+ 	case OTPGETREGIONINFO:
+-	case OTPLOCK:
+ 	case ECCGETLAYOUT:
+ 	case ECCGETSTATS:
+ 	case MTDFILEMODE:
+@@ -671,9 +667,13 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
+ 	/* "dangerous" commands */
+ 	case MEMERASE:
+ 	case MEMERASE64:
++	case MEMLOCK:
++	case MEMUNLOCK:
++	case MEMSETBADBLOCK:
+ 	case MEMWRITEOOB:
+ 	case MEMWRITEOOB64:
+ 	case MEMWRITE:
++	case OTPLOCK:
+ 		if (!(file->f_mode & FMODE_WRITE))
+ 			return -EPERM;
+ 		break;
+diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
+index b07cbb0661fb1..1c8c407286783 100644
+--- a/drivers/mtd/mtdcore.c
++++ b/drivers/mtd/mtdcore.c
+@@ -820,6 +820,9 @@ int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types,
+ 
+ 	/* Prefer parsed partitions over driver-provided fallback */
+ 	ret = parse_mtd_partitions(mtd, types, parser_data);
++	if (ret == -EPROBE_DEFER)
++		goto out;
++
+ 	if (ret > 0)
+ 		ret = 0;
+ 	else if (nr_parts)
+diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
+index c3575b686f796..95d47422bbf20 100644
+--- a/drivers/mtd/mtdpart.c
++++ b/drivers/mtd/mtdpart.c
+@@ -331,7 +331,7 @@ static int __del_mtd_partitions(struct mtd_info *mtd)
+ 
+ 	list_for_each_entry_safe(child, next, &mtd->partitions, part.node) {
+ 		if (mtd_has_partitions(child))
+-			del_mtd_partitions(child);
++			__del_mtd_partitions(child);
+ 
+ 		pr_info("Deleting %s MTD partition\n", child->name);
+ 		ret = del_mtd_device(child);
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index 2da39ab892869..909b14cc8e55c 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -2688,6 +2688,12 @@ static int brcmnand_attach_chip(struct nand_chip *chip)
+ 
+ 	ret = brcmstb_choose_ecc_layout(host);
+ 
++	/* If OOB is written with ECC enabled it will cause ECC errors */
++	if (is_hamming_ecc(host->ctrl, &host->hwcfg)) {
++		chip->ecc.write_oob = brcmnand_write_oob_raw;
++		chip->ecc.read_oob = brcmnand_read_oob_raw;
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c
+index c88421a1c078d..ce05dd4088e9d 100644
+--- a/drivers/mtd/nand/raw/fsmc_nand.c
++++ b/drivers/mtd/nand/raw/fsmc_nand.c
+@@ -1078,11 +1078,13 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
+ 		host->read_dma_chan = dma_request_channel(mask, filter, NULL);
+ 		if (!host->read_dma_chan) {
+ 			dev_err(&pdev->dev, "Unable to get read dma channel\n");
++			ret = -ENODEV;
+ 			goto disable_clk;
+ 		}
+ 		host->write_dma_chan = dma_request_channel(mask, filter, NULL);
+ 		if (!host->write_dma_chan) {
+ 			dev_err(&pdev->dev, "Unable to get write dma channel\n");
++			ret = -ENODEV;
+ 			goto release_dma_read_chan;
+ 		}
+ 	}
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index 31a6210eb5d44..a6658567d55c0 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -2447,7 +2447,7 @@ static int gpmi_nand_init(struct gpmi_nand_data *this)
+ 	this->bch_geometry.auxiliary_size = 128;
+ 	ret = gpmi_alloc_dma_buffer(this);
+ 	if (ret)
+-		goto err_out;
++		return ret;
+ 
+ 	nand_controller_init(&this->base);
+ 	this->base.ops = &gpmi_nand_controller_ops;
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index dfc17a28a06b9..b99d2e9d1e2c4 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -2874,7 +2874,7 @@ static int qcom_probe_nand_devices(struct qcom_nand_controller *nandc)
+ 	struct device *dev = nandc->dev;
+ 	struct device_node *dn = dev->of_node, *child;
+ 	struct qcom_nand_host *host;
+-	int ret;
++	int ret = -ENODEV;
+ 
+ 	for_each_available_child_of_node(dn, child) {
+ 		host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL);
+@@ -2892,10 +2892,7 @@ static int qcom_probe_nand_devices(struct qcom_nand_controller *nandc)
+ 		list_add_tail(&host->node, &nandc->host_list);
+ 	}
+ 
+-	if (list_empty(&nandc->host_list))
+-		return -ENODEV;
+-
+-	return 0;
++	return ret;
+ }
+ 
+ /* parse custom DT properties here */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index a59c1f1fb31ed..7ddc2e2e4976a 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1731,14 +1731,16 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 
+ 	cons = rxcmp->rx_cmp_opaque;
+ 	if (unlikely(cons != rxr->rx_next_cons)) {
+-		int rc1 = bnxt_discard_rx(bp, cpr, raw_cons, rxcmp);
++		int rc1 = bnxt_discard_rx(bp, cpr, &tmp_raw_cons, rxcmp);
+ 
+ 		/* 0xffff is forced error, don't print it */
+ 		if (rxr->rx_next_cons != 0xffff)
+ 			netdev_warn(bp->dev, "RX cons %x != expected cons %x\n",
+ 				    cons, rxr->rx_next_cons);
+ 		bnxt_sched_reset(bp, rxr);
+-		return rc1;
++		if (rc1)
++			return rc1;
++		goto next_rx_no_prod_no_len;
+ 	}
+ 	rx_buf = &rxr->rx_buf_ring[cons];
+ 	data = rx_buf->data;
+@@ -9546,7 +9548,9 @@ static ssize_t bnxt_show_temp(struct device *dev,
+ 	if (!rc)
+ 		len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */
+ 	mutex_unlock(&bp->hwrm_cmd_lock);
+-	return rc ?: len;
++	if (rc)
++		return rc;
++	return len;
+ }
+ static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0);
+ 
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_regs.h b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_regs.h
+index e6d4ad99cc387..3f1c189646f4e 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_regs.h
++++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_regs.h
+@@ -521,7 +521,7 @@
+ #define    CN23XX_BAR1_INDEX_OFFSET                3
+ 
+ #define    CN23XX_PEM_BAR1_INDEX_REG(port, idx)		\
+-		(CN23XX_PEM_BAR1_INDEX_START + ((port) << CN23XX_PEM_OFFSET) + \
++		(CN23XX_PEM_BAR1_INDEX_START + (((u64)port) << CN23XX_PEM_OFFSET) + \
+ 		 ((idx) << CN23XX_BAR1_INDEX_OFFSET))
+ 
+ /*############################ DPI #########################*/
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+index 7a141ce32e86c..0ccd5b40ef5c9 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+@@ -776,7 +776,7 @@ static void nicvf_rcv_queue_config(struct nicvf *nic, struct queue_set *qs,
+ 	mbx.rq.msg = NIC_MBOX_MSG_RQ_CFG;
+ 	mbx.rq.qs_num = qs->vnic_id;
+ 	mbx.rq.rq_num = qidx;
+-	mbx.rq.cfg = (rq->caching << 26) | (rq->cq_qs << 19) |
++	mbx.rq.cfg = ((u64)rq->caching << 26) | (rq->cq_qs << 19) |
+ 			  (rq->cq_idx << 16) | (rq->cont_rbdr_qs << 9) |
+ 			  (rq->cont_qs_rbdr_idx << 8) |
+ 			  (rq->start_rbdr_qs << 1) | (rq->start_qs_rbdr_idx);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+index 83b46440408ba..bde8494215c41 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+@@ -174,31 +174,31 @@ static void set_nat_params(struct adapter *adap, struct filter_entry *f,
+ 				      WORD_MASK, f->fs.nat_lip[15] |
+ 				      f->fs.nat_lip[14] << 8 |
+ 				      f->fs.nat_lip[13] << 16 |
+-				      f->fs.nat_lip[12] << 24, 1);
++				      (u64)f->fs.nat_lip[12] << 24, 1);
+ 
+ 			set_tcb_field(adap, f, tid, TCB_SND_UNA_RAW_W + 1,
+ 				      WORD_MASK, f->fs.nat_lip[11] |
+ 				      f->fs.nat_lip[10] << 8 |
+ 				      f->fs.nat_lip[9] << 16 |
+-				      f->fs.nat_lip[8] << 24, 1);
++				      (u64)f->fs.nat_lip[8] << 24, 1);
+ 
+ 			set_tcb_field(adap, f, tid, TCB_SND_UNA_RAW_W + 2,
+ 				      WORD_MASK, f->fs.nat_lip[7] |
+ 				      f->fs.nat_lip[6] << 8 |
+ 				      f->fs.nat_lip[5] << 16 |
+-				      f->fs.nat_lip[4] << 24, 1);
++				      (u64)f->fs.nat_lip[4] << 24, 1);
+ 
+ 			set_tcb_field(adap, f, tid, TCB_SND_UNA_RAW_W + 3,
+ 				      WORD_MASK, f->fs.nat_lip[3] |
+ 				      f->fs.nat_lip[2] << 8 |
+ 				      f->fs.nat_lip[1] << 16 |
+-				      f->fs.nat_lip[0] << 24, 1);
++				      (u64)f->fs.nat_lip[0] << 24, 1);
+ 		} else {
+ 			set_tcb_field(adap, f, tid, TCB_RX_FRAG3_LEN_RAW_W,
+ 				      WORD_MASK, f->fs.nat_lip[3] |
+ 				      f->fs.nat_lip[2] << 8 |
+ 				      f->fs.nat_lip[1] << 16 |
+-				      f->fs.nat_lip[0] << 24, 1);
++				      (u64)f->fs.nat_lip[0] << 25, 1);
+ 		}
+ 	}
+ 
+@@ -208,25 +208,25 @@ static void set_nat_params(struct adapter *adap, struct filter_entry *f,
+ 				      WORD_MASK, f->fs.nat_fip[15] |
+ 				      f->fs.nat_fip[14] << 8 |
+ 				      f->fs.nat_fip[13] << 16 |
+-				      f->fs.nat_fip[12] << 24, 1);
++				      (u64)f->fs.nat_fip[12] << 24, 1);
+ 
+ 			set_tcb_field(adap, f, tid, TCB_RX_FRAG2_PTR_RAW_W + 1,
+ 				      WORD_MASK, f->fs.nat_fip[11] |
+ 				      f->fs.nat_fip[10] << 8 |
+ 				      f->fs.nat_fip[9] << 16 |
+-				      f->fs.nat_fip[8] << 24, 1);
++				      (u64)f->fs.nat_fip[8] << 24, 1);
+ 
+ 			set_tcb_field(adap, f, tid, TCB_RX_FRAG2_PTR_RAW_W + 2,
+ 				      WORD_MASK, f->fs.nat_fip[7] |
+ 				      f->fs.nat_fip[6] << 8 |
+ 				      f->fs.nat_fip[5] << 16 |
+-				      f->fs.nat_fip[4] << 24, 1);
++				      (u64)f->fs.nat_fip[4] << 24, 1);
+ 
+ 			set_tcb_field(adap, f, tid, TCB_RX_FRAG2_PTR_RAW_W + 3,
+ 				      WORD_MASK, f->fs.nat_fip[3] |
+ 				      f->fs.nat_fip[2] << 8 |
+ 				      f->fs.nat_fip[1] << 16 |
+-				      f->fs.nat_fip[0] << 24, 1);
++				      (u64)f->fs.nat_fip[0] << 24, 1);
+ 
+ 		} else {
+ 			set_tcb_field(adap, f, tid,
+@@ -234,13 +234,13 @@ static void set_nat_params(struct adapter *adap, struct filter_entry *f,
+ 				      WORD_MASK, f->fs.nat_fip[3] |
+ 				      f->fs.nat_fip[2] << 8 |
+ 				      f->fs.nat_fip[1] << 16 |
+-				      f->fs.nat_fip[0] << 24, 1);
++				      (u64)f->fs.nat_fip[0] << 24, 1);
+ 		}
+ 	}
+ 
+ 	set_tcb_field(adap, f, tid, TCB_PDU_HDR_LEN_W, WORD_MASK,
+ 		      (dp ? (nat_lp[1] | nat_lp[0] << 8) : 0) |
+-		      (sp ? (nat_fp[1] << 16 | nat_fp[0] << 24) : 0),
++		      (sp ? (nat_fp[1] << 16 | (u64)nat_fp[0] << 24) : 0),
+ 		      1);
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/Makefile b/drivers/net/ethernet/freescale/Makefile
+index 67c436400352f..de7b318422330 100644
+--- a/drivers/net/ethernet/freescale/Makefile
++++ b/drivers/net/ethernet/freescale/Makefile
+@@ -24,6 +24,4 @@ obj-$(CONFIG_FSL_DPAA_ETH) += dpaa/
+ 
+ obj-$(CONFIG_FSL_DPAA2_ETH) += dpaa2/
+ 
+-obj-$(CONFIG_FSL_ENETC) += enetc/
+-obj-$(CONFIG_FSL_ENETC_MDIO) += enetc/
+-obj-$(CONFIG_FSL_ENETC_VF) += enetc/
++obj-y += enetc/
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index a362516a31853..070bef303d184 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -3526,7 +3526,6 @@ static void hns3_nic_set_cpumask(struct hns3_nic_priv *priv)
+ 
+ static int hns3_nic_init_vector_data(struct hns3_nic_priv *priv)
+ {
+-	struct hnae3_ring_chain_node vector_ring_chain;
+ 	struct hnae3_handle *h = priv->ae_handle;
+ 	struct hns3_enet_tqp_vector *tqp_vector;
+ 	int ret;
+@@ -3558,6 +3557,8 @@ static int hns3_nic_init_vector_data(struct hns3_nic_priv *priv)
+ 	}
+ 
+ 	for (i = 0; i < priv->vector_num; i++) {
++		struct hnae3_ring_chain_node vector_ring_chain;
++
+ 		tqp_vector = &priv->tqp_vector[i];
+ 
+ 		tqp_vector->rx_group.total_bytes = 0;
+diff --git a/drivers/net/ethernet/marvell/prestera/prestera_main.c b/drivers/net/ethernet/marvell/prestera/prestera_main.c
+index da4b286d13377..feb69fcd908e3 100644
+--- a/drivers/net/ethernet/marvell/prestera/prestera_main.c
++++ b/drivers/net/ethernet/marvell/prestera/prestera_main.c
+@@ -436,7 +436,8 @@ static void prestera_port_handle_event(struct prestera_switch *sw,
+ 			netif_carrier_on(port->dev);
+ 			if (!delayed_work_pending(caching_dw))
+ 				queue_delayed_work(prestera_wq, caching_dw, 0);
+-		} else {
++		} else if (netif_running(port->dev) &&
++			   netif_carrier_ok(port->dev)) {
+ 			netif_carrier_off(port->dev);
+ 			if (delayed_work_pending(caching_dw))
+ 				cancel_delayed_work(caching_dw);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
+index cc67366495b09..bed154e9a1ef9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
+@@ -850,7 +850,7 @@ mlx5_fpga_ipsec_release_sa_ctx(struct mlx5_fpga_ipsec_sa_ctx *sa_ctx)
+ 		return;
+ 	}
+ 
+-	if (sa_ctx->fpga_xfrm->accel_xfrm.attrs.action &
++	if (sa_ctx->fpga_xfrm->accel_xfrm.attrs.action ==
+ 	    MLX5_ACCEL_ESP_ACTION_DECRYPT)
+ 		ida_simple_remove(&fipsec->halloc, sa_ctx->sa_handle);
+ 
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+index 97d2b03208de0..7a8187458724d 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+@@ -364,6 +364,7 @@ int nfp_devlink_port_register(struct nfp_app *app, struct nfp_port *port)
+ 
+ 	attrs.split = eth_port.is_split;
+ 	attrs.splittable = !attrs.split;
++	attrs.lanes = eth_port.port_lanes;
+ 	attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
+ 	attrs.phys.port_number = eth_port.label_port;
+ 	attrs.phys.split_subport_number = eth_port.label_subport;
+diff --git a/drivers/net/ethernet/qualcomm/emac/emac-mac.c b/drivers/net/ethernet/qualcomm/emac/emac-mac.c
+index 117188e3c7de2..87b8c032195d0 100644
+--- a/drivers/net/ethernet/qualcomm/emac/emac-mac.c
++++ b/drivers/net/ethernet/qualcomm/emac/emac-mac.c
+@@ -1437,6 +1437,7 @@ netdev_tx_t emac_mac_tx_buf_send(struct emac_adapter *adpt,
+ {
+ 	struct emac_tpd tpd;
+ 	u32 prod_idx;
++	int len;
+ 
+ 	memset(&tpd, 0, sizeof(tpd));
+ 
+@@ -1456,9 +1457,10 @@ netdev_tx_t emac_mac_tx_buf_send(struct emac_adapter *adpt,
+ 	if (skb_network_offset(skb) != ETH_HLEN)
+ 		TPD_TYP_SET(&tpd, 1);
+ 
++	len = skb->len;
+ 	emac_tx_fill_tpd(adpt, tx_q, skb, &tpd);
+ 
+-	netdev_sent_queue(adpt->netdev, skb->len);
++	netdev_sent_queue(adpt->netdev, len);
+ 
+ 	/* Make sure the are enough free descriptors to hold one
+ 	 * maximum-sized SKB.  We need one desc for each fragment,
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index bd30505fbc57a..f96eed67e1a2b 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -911,31 +911,20 @@ static int ravb_poll(struct napi_struct *napi, int budget)
+ 	int q = napi - priv->napi;
+ 	int mask = BIT(q);
+ 	int quota = budget;
+-	u32 ris0, tis;
+ 
+-	for (;;) {
+-		tis = ravb_read(ndev, TIS);
+-		ris0 = ravb_read(ndev, RIS0);
+-		if (!((ris0 & mask) || (tis & mask)))
+-			break;
++	/* Processing RX Descriptor Ring */
++	/* Clear RX interrupt */
++	ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0);
++	if (ravb_rx(ndev, &quota, q))
++		goto out;
+ 
+-		/* Processing RX Descriptor Ring */
+-		if (ris0 & mask) {
+-			/* Clear RX interrupt */
+-			ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0);
+-			if (ravb_rx(ndev, &quota, q))
+-				goto out;
+-		}
+-		/* Processing TX Descriptor Ring */
+-		if (tis & mask) {
+-			spin_lock_irqsave(&priv->lock, flags);
+-			/* Clear TX interrupt */
+-			ravb_write(ndev, ~(mask | TIS_RESERVED), TIS);
+-			ravb_tx_free(ndev, q, true);
+-			netif_wake_subqueue(ndev, q);
+-			spin_unlock_irqrestore(&priv->lock, flags);
+-		}
+-	}
++	/* Processing RX Descriptor Ring */
++	spin_lock_irqsave(&priv->lock, flags);
++	/* Clear TX interrupt */
++	ravb_write(ndev, ~(mask | TIS_RESERVED), TIS);
++	ravb_tx_free(ndev, q, true);
++	netif_wake_subqueue(ndev, q);
++	spin_unlock_irqrestore(&priv->lock, flags);
+ 
+ 	napi_complete(napi);
+ 
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index da6886dcac37c..4fa72b573c172 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -2928,8 +2928,7 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
+ 
+ 	/* Get the transmit queue */
+ 	tx_ev_q_label = EFX_QWORD_FIELD(*event, ESF_DZ_TX_QLABEL);
+-	tx_queue = efx_channel_get_tx_queue(channel,
+-					    tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
++	tx_queue = channel->tx_queue + (tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
+ 
+ 	if (!tx_queue->timestamping) {
+ 		/* Transmit completion */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 6012eadae4604..5b9478dffe103 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2727,8 +2727,15 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
+ 
+ 	/* Enable TSO */
+ 	if (priv->tso) {
+-		for (chan = 0; chan < tx_cnt; chan++)
++		for (chan = 0; chan < tx_cnt; chan++) {
++			struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan];
++
++			/* TSO and TBS cannot co-exist */
++			if (tx_q->tbs & STMMAC_TBS_AVAIL)
++				continue;
++
+ 			stmmac_enable_tso(priv, priv->ioaddr, 1, chan);
++		}
+ 	}
+ 
+ 	/* Enable Split Header */
+@@ -2820,9 +2827,8 @@ static int stmmac_open(struct net_device *dev)
+ 		struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan];
+ 		int tbs_en = priv->plat->tx_queues_cfg[chan].tbs_en;
+ 
++		/* Setup per-TXQ tbs flag before TX descriptor alloc */
+ 		tx_q->tbs |= tbs_en ? STMMAC_TBS_AVAIL : 0;
+-		if (stmmac_enable_tbs(priv, priv->ioaddr, tbs_en, chan))
+-			tx_q->tbs &= ~STMMAC_TBS_AVAIL;
+ 	}
+ 
+ 	ret = alloc_dma_desc_resources(priv);
+diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
+index c7031e1960d4a..03055c96f0760 100644
+--- a/drivers/net/ethernet/ti/davinci_emac.c
++++ b/drivers/net/ethernet/ti/davinci_emac.c
+@@ -169,11 +169,11 @@ static const char emac_version_string[] = "TI DaVinci EMAC Linux v6.1";
+ /* EMAC mac_status register */
+ #define EMAC_MACSTATUS_TXERRCODE_MASK	(0xF00000)
+ #define EMAC_MACSTATUS_TXERRCODE_SHIFT	(20)
+-#define EMAC_MACSTATUS_TXERRCH_MASK	(0x7)
++#define EMAC_MACSTATUS_TXERRCH_MASK	(0x70000)
+ #define EMAC_MACSTATUS_TXERRCH_SHIFT	(16)
+ #define EMAC_MACSTATUS_RXERRCODE_MASK	(0xF000)
+ #define EMAC_MACSTATUS_RXERRCODE_SHIFT	(12)
+-#define EMAC_MACSTATUS_RXERRCH_MASK	(0x7)
++#define EMAC_MACSTATUS_RXERRCH_MASK	(0x700)
+ #define EMAC_MACSTATUS_RXERRCH_SHIFT	(8)
+ 
+ /* EMAC RX register masks */
+diff --git a/drivers/net/ethernet/xscale/ixp4xx_eth.c b/drivers/net/ethernet/xscale/ixp4xx_eth.c
+index 2e52029235104..403358f2c8536 100644
+--- a/drivers/net/ethernet/xscale/ixp4xx_eth.c
++++ b/drivers/net/ethernet/xscale/ixp4xx_eth.c
+@@ -1086,7 +1086,7 @@ static int init_queues(struct port *port)
+ 	int i;
+ 
+ 	if (!ports_open) {
+-		dma_pool = dma_pool_create(DRV_NAME, port->netdev->dev.parent,
++		dma_pool = dma_pool_create(DRV_NAME, &port->netdev->dev,
+ 					   POOL_ALLOC_SIZE, 32, 0);
+ 		if (!dma_pool)
+ 			return -ENOMEM;
+@@ -1436,6 +1436,9 @@ static int ixp4xx_eth_probe(struct platform_device *pdev)
+ 	ndev->netdev_ops = &ixp4xx_netdev_ops;
+ 	ndev->ethtool_ops = &ixp4xx_ethtool_ops;
+ 	ndev->tx_queue_len = 100;
++	/* Inherit the DMA masks from the platform device */
++	ndev->dev.dma_mask = dev->dma_mask;
++	ndev->dev.coherent_dma_mask = dev->coherent_dma_mask;
+ 
+ 	netif_napi_add(ndev, &port->napi, eth_poll, NAPI_WEIGHT);
+ 
+diff --git a/drivers/net/fddi/Kconfig b/drivers/net/fddi/Kconfig
+index f722079dfb6ae..f99c1048c97e3 100644
+--- a/drivers/net/fddi/Kconfig
++++ b/drivers/net/fddi/Kconfig
+@@ -40,17 +40,20 @@ config DEFXX
+ 
+ config DEFXX_MMIO
+ 	bool
+-	prompt "Use MMIO instead of PIO" if PCI || EISA
++	prompt "Use MMIO instead of IOP" if PCI || EISA
+ 	depends on DEFXX
+-	default n if PCI || EISA
++	default n if EISA
+ 	default y
+ 	help
+ 	  This instructs the driver to use EISA or PCI memory-mapped I/O
+-	  (MMIO) as appropriate instead of programmed I/O ports (PIO).
++	  (MMIO) as appropriate instead of programmed I/O ports (IOP).
+ 	  Enabling this gives an improvement in processing time in parts
+-	  of the driver, but it may cause problems with EISA (DEFEA)
+-	  adapters.  TURBOchannel does not have the concept of I/O ports,
+-	  so MMIO is always used for these (DEFTA) adapters.
++	  of the driver, but it requires a memory window to be configured
++	  for EISA (DEFEA) adapters that may not always be available.
++	  Conversely some PCIe host bridges do not support IOP, so MMIO
++	  may be required to access PCI (DEFPA) adapters on downstream PCI
++	  buses with some systems.  TURBOchannel does not have the concept
++	  of I/O ports, so MMIO is always used for these (DEFTA) adapters.
+ 
+ 	  If unsure, say N.
+ 
+diff --git a/drivers/net/fddi/defxx.c b/drivers/net/fddi/defxx.c
+index 077c68498f048..c7ce6d5491afc 100644
+--- a/drivers/net/fddi/defxx.c
++++ b/drivers/net/fddi/defxx.c
+@@ -495,6 +495,25 @@ static const struct net_device_ops dfx_netdev_ops = {
+ 	.ndo_set_mac_address	= dfx_ctl_set_mac_address,
+ };
+ 
++static void dfx_register_res_alloc_err(const char *print_name, bool mmio,
++				       bool eisa)
++{
++	pr_err("%s: Cannot use %s, no address set, aborting\n",
++	       print_name, mmio ? "MMIO" : "I/O");
++	pr_err("%s: Recompile driver with \"CONFIG_DEFXX_MMIO=%c\"\n",
++	       print_name, mmio ? 'n' : 'y');
++	if (eisa && mmio)
++		pr_err("%s: Or run ECU and set adapter's MMIO location\n",
++		       print_name);
++}
++
++static void dfx_register_res_err(const char *print_name, bool mmio,
++				 unsigned long start, unsigned long len)
++{
++	pr_err("%s: Cannot reserve %s resource 0x%lx @ 0x%lx, aborting\n",
++	       print_name, mmio ? "MMIO" : "I/O", len, start);
++}
++
+ /*
+  * ================
+  * = dfx_register =
+@@ -568,15 +587,12 @@ static int dfx_register(struct device *bdev)
+ 	dev_set_drvdata(bdev, dev);
+ 
+ 	dfx_get_bars(bdev, bar_start, bar_len);
+-	if (dfx_bus_eisa && dfx_use_mmio && bar_start[0] == 0) {
+-		pr_err("%s: Cannot use MMIO, no address set, aborting\n",
+-		       print_name);
+-		pr_err("%s: Run ECU and set adapter's MMIO location\n",
+-		       print_name);
+-		pr_err("%s: Or recompile driver with \"CONFIG_DEFXX_MMIO=n\""
+-		       "\n", print_name);
++	if (bar_len[0] == 0 ||
++	    (dfx_bus_eisa && dfx_use_mmio && bar_start[0] == 0)) {
++		dfx_register_res_alloc_err(print_name, dfx_use_mmio,
++					   dfx_bus_eisa);
+ 		err = -ENXIO;
+-		goto err_out;
++		goto err_out_disable;
+ 	}
+ 
+ 	if (dfx_use_mmio)
+@@ -585,18 +601,16 @@ static int dfx_register(struct device *bdev)
+ 	else
+ 		region = request_region(bar_start[0], bar_len[0], print_name);
+ 	if (!region) {
+-		pr_err("%s: Cannot reserve %s resource 0x%lx @ 0x%lx, "
+-		       "aborting\n", dfx_use_mmio ? "MMIO" : "I/O", print_name,
+-		       (long)bar_len[0], (long)bar_start[0]);
++		dfx_register_res_err(print_name, dfx_use_mmio,
++				     bar_start[0], bar_len[0]);
+ 		err = -EBUSY;
+ 		goto err_out_disable;
+ 	}
+ 	if (bar_start[1] != 0) {
+ 		region = request_region(bar_start[1], bar_len[1], print_name);
+ 		if (!region) {
+-			pr_err("%s: Cannot reserve I/O resource "
+-			       "0x%lx @ 0x%lx, aborting\n", print_name,
+-			       (long)bar_len[1], (long)bar_start[1]);
++			dfx_register_res_err(print_name, 0,
++					     bar_start[1], bar_len[1]);
+ 			err = -EBUSY;
+ 			goto err_out_csr_region;
+ 		}
+@@ -604,9 +618,8 @@ static int dfx_register(struct device *bdev)
+ 	if (bar_start[2] != 0) {
+ 		region = request_region(bar_start[2], bar_len[2], print_name);
+ 		if (!region) {
+-			pr_err("%s: Cannot reserve I/O resource "
+-			       "0x%lx @ 0x%lx, aborting\n", print_name,
+-			       (long)bar_len[2], (long)bar_start[2]);
++			dfx_register_res_err(print_name, 0,
++					     bar_start[2], bar_len[2]);
+ 			err = -EBUSY;
+ 			goto err_out_bh_region;
+ 		}
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 11864ac101b8d..5ddb2dbb8572b 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -890,7 +890,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	__be16 sport;
+ 	int err;
+ 
+-	if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
++	if (!pskb_inet_may_pull(skb))
+ 		return -EINVAL;
+ 
+ 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+@@ -987,7 +987,7 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	__be16 sport;
+ 	int err;
+ 
+-	if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr)))
++	if (!pskb_inet_may_pull(skb))
+ 		return -EINVAL;
+ 
+ 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+diff --git a/drivers/net/phy/intel-xway.c b/drivers/net/phy/intel-xway.c
+index b7875b36097fe..574a8bca1ec46 100644
+--- a/drivers/net/phy/intel-xway.c
++++ b/drivers/net/phy/intel-xway.c
+@@ -11,6 +11,18 @@
+ 
+ #define XWAY_MDIO_IMASK			0x19	/* interrupt mask */
+ #define XWAY_MDIO_ISTAT			0x1A	/* interrupt status */
++#define XWAY_MDIO_LED			0x1B	/* led control */
++
++/* bit 15:12 are reserved */
++#define XWAY_MDIO_LED_LED3_EN		BIT(11)	/* Enable the integrated function of LED3 */
++#define XWAY_MDIO_LED_LED2_EN		BIT(10)	/* Enable the integrated function of LED2 */
++#define XWAY_MDIO_LED_LED1_EN		BIT(9)	/* Enable the integrated function of LED1 */
++#define XWAY_MDIO_LED_LED0_EN		BIT(8)	/* Enable the integrated function of LED0 */
++/* bit 7:4 are reserved */
++#define XWAY_MDIO_LED_LED3_DA		BIT(3)	/* Direct Access to LED3 */
++#define XWAY_MDIO_LED_LED2_DA		BIT(2)	/* Direct Access to LED2 */
++#define XWAY_MDIO_LED_LED1_DA		BIT(1)	/* Direct Access to LED1 */
++#define XWAY_MDIO_LED_LED0_DA		BIT(0)	/* Direct Access to LED0 */
+ 
+ #define XWAY_MDIO_INIT_WOL		BIT(15)	/* Wake-On-LAN */
+ #define XWAY_MDIO_INIT_MSRE		BIT(14)
+@@ -159,6 +171,15 @@ static int xway_gphy_config_init(struct phy_device *phydev)
+ 	/* Clear all pending interrupts */
+ 	phy_read(phydev, XWAY_MDIO_ISTAT);
+ 
++	/* Ensure that integrated led function is enabled for all leds */
++	err = phy_write(phydev, XWAY_MDIO_LED,
++			XWAY_MDIO_LED_LED0_EN |
++			XWAY_MDIO_LED_LED1_EN |
++			XWAY_MDIO_LED_LED2_EN |
++			XWAY_MDIO_LED_LED3_EN);
++	if (err)
++		return err;
++
+ 	phy_write_mmd(phydev, MDIO_MMD_VEND2, XWAY_MMD_LEDCH,
+ 		      XWAY_MMD_LEDCH_NACS_NONE |
+ 		      XWAY_MMD_LEDCH_SBF_F02HZ |
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index 823a89354466d..91616182c311f 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -861,22 +861,28 @@ static int m88e1111_get_downshift(struct phy_device *phydev, u8 *data)
+ 
+ static int m88e1111_set_downshift(struct phy_device *phydev, u8 cnt)
+ {
+-	int val;
++	int val, err;
+ 
+ 	if (cnt > MII_M1111_PHY_EXT_CR_DOWNSHIFT_MAX)
+ 		return -E2BIG;
+ 
+-	if (!cnt)
+-		return phy_clear_bits(phydev, MII_M1111_PHY_EXT_CR,
+-				      MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN);
++	if (!cnt) {
++		err = phy_clear_bits(phydev, MII_M1111_PHY_EXT_CR,
++				     MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN);
++	} else {
++		val = MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN;
++		val |= FIELD_PREP(MII_M1111_PHY_EXT_CR_DOWNSHIFT_MASK, cnt - 1);
+ 
+-	val = MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN;
+-	val |= FIELD_PREP(MII_M1111_PHY_EXT_CR_DOWNSHIFT_MASK, cnt - 1);
++		err = phy_modify(phydev, MII_M1111_PHY_EXT_CR,
++				 MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN |
++				 MII_M1111_PHY_EXT_CR_DOWNSHIFT_MASK,
++				 val);
++	}
+ 
+-	return phy_modify(phydev, MII_M1111_PHY_EXT_CR,
+-			  MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN |
+-			  MII_M1111_PHY_EXT_CR_DOWNSHIFT_MASK,
+-			  val);
++	if (err < 0)
++		return err;
++
++	return genphy_soft_reset(phydev);
+ }
+ 
+ static int m88e1111_get_tunable(struct phy_device *phydev,
+@@ -919,22 +925,28 @@ static int m88e1011_get_downshift(struct phy_device *phydev, u8 *data)
+ 
+ static int m88e1011_set_downshift(struct phy_device *phydev, u8 cnt)
+ {
+-	int val;
++	int val, err;
+ 
+ 	if (cnt > MII_M1011_PHY_SCR_DOWNSHIFT_MAX)
+ 		return -E2BIG;
+ 
+-	if (!cnt)
+-		return phy_clear_bits(phydev, MII_M1011_PHY_SCR,
+-				      MII_M1011_PHY_SCR_DOWNSHIFT_EN);
++	if (!cnt) {
++		err = phy_clear_bits(phydev, MII_M1011_PHY_SCR,
++				     MII_M1011_PHY_SCR_DOWNSHIFT_EN);
++	} else {
++		val = MII_M1011_PHY_SCR_DOWNSHIFT_EN;
++		val |= FIELD_PREP(MII_M1011_PHY_SCR_DOWNSHIFT_MASK, cnt - 1);
+ 
+-	val = MII_M1011_PHY_SCR_DOWNSHIFT_EN;
+-	val |= FIELD_PREP(MII_M1011_PHY_SCR_DOWNSHIFT_MASK, cnt - 1);
++		err = phy_modify(phydev, MII_M1011_PHY_SCR,
++				 MII_M1011_PHY_SCR_DOWNSHIFT_EN |
++				 MII_M1011_PHY_SCR_DOWNSHIFT_MASK,
++				 val);
++	}
+ 
+-	return phy_modify(phydev, MII_M1011_PHY_SCR,
+-			  MII_M1011_PHY_SCR_DOWNSHIFT_EN |
+-			  MII_M1011_PHY_SCR_DOWNSHIFT_MASK,
+-			  val);
++	if (err < 0)
++		return err;
++
++	return genphy_soft_reset(phydev);
+ }
+ 
+ static int m88e1011_get_tunable(struct phy_device *phydev,
+diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
+index 10722fed666de..caf7291ffaf83 100644
+--- a/drivers/net/phy/smsc.c
++++ b/drivers/net/phy/smsc.c
+@@ -152,10 +152,13 @@ static int lan87xx_config_aneg(struct phy_device *phydev)
+ 	return genphy_config_aneg(phydev);
+ }
+ 
+-static int lan87xx_config_aneg_ext(struct phy_device *phydev)
++static int lan95xx_config_aneg_ext(struct phy_device *phydev)
+ {
+ 	int rc;
+ 
++	if (phydev->phy_id != 0x0007c0f0) /* not (LAN9500A or LAN9505A) */
++		return lan87xx_config_aneg(phydev);
++
+ 	/* Extend Manual AutoMDIX timer */
+ 	rc = phy_read(phydev, PHY_EDPD_CONFIG);
+ 	if (rc < 0)
+@@ -408,7 +411,7 @@ static struct phy_driver smsc_phy_driver[] = {
+ 	.read_status	= lan87xx_read_status,
+ 	.config_init	= smsc_phy_config_init,
+ 	.soft_reset	= smsc_phy_reset,
+-	.config_aneg	= lan87xx_config_aneg_ext,
++	.config_aneg	= lan95xx_config_aneg_ext,
+ 
+ 	/* IRQ related */
+ 	.ack_interrupt	= smsc_phy_ack_interrupt,
+diff --git a/drivers/net/wan/hdlc_fr.c b/drivers/net/wan/hdlc_fr.c
+index 857912ae84d71..409e5a7ad8e26 100644
+--- a/drivers/net/wan/hdlc_fr.c
++++ b/drivers/net/wan/hdlc_fr.c
+@@ -415,7 +415,7 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 		if (pad > 0) { /* Pad the frame with zeros */
+ 			if (__skb_pad(skb, pad, false))
+-				goto out;
++				goto drop;
+ 			skb_put(skb, pad);
+ 		}
+ 	}
+@@ -448,9 +448,8 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	return NETDEV_TX_OK;
+ 
+ drop:
+-	kfree_skb(skb);
+-out:
+ 	dev->stats.tx_dropped++;
++	kfree_skb(skb);
+ 	return NETDEV_TX_OK;
+ }
+ 
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index 605c01fb73f15..f6562a343cb4e 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -51,6 +51,8 @@ struct lapbethdev {
+ 	struct list_head	node;
+ 	struct net_device	*ethdev;	/* link to ethernet device */
+ 	struct net_device	*axdev;		/* lapbeth device (lapb#) */
++	bool			up;
++	spinlock_t		up_lock;	/* Protects "up" */
+ };
+ 
+ static LIST_HEAD(lapbeth_devices);
+@@ -98,8 +100,9 @@ static int lapbeth_rcv(struct sk_buff *skb, struct net_device *dev, struct packe
+ 	rcu_read_lock();
+ 	lapbeth = lapbeth_get_x25_dev(dev);
+ 	if (!lapbeth)
+-		goto drop_unlock;
+-	if (!netif_running(lapbeth->axdev))
++		goto drop_unlock_rcu;
++	spin_lock_bh(&lapbeth->up_lock);
++	if (!lapbeth->up)
+ 		goto drop_unlock;
+ 
+ 	len = skb->data[0] + skb->data[1] * 256;
+@@ -114,11 +117,14 @@ static int lapbeth_rcv(struct sk_buff *skb, struct net_device *dev, struct packe
+ 		goto drop_unlock;
+ 	}
+ out:
++	spin_unlock_bh(&lapbeth->up_lock);
+ 	rcu_read_unlock();
+ 	return 0;
+ drop_unlock:
+ 	kfree_skb(skb);
+ 	goto out;
++drop_unlock_rcu:
++	rcu_read_unlock();
+ drop:
+ 	kfree_skb(skb);
+ 	return 0;
+@@ -148,13 +154,11 @@ static int lapbeth_data_indication(struct net_device *dev, struct sk_buff *skb)
+ static netdev_tx_t lapbeth_xmit(struct sk_buff *skb,
+ 				      struct net_device *dev)
+ {
++	struct lapbethdev *lapbeth = netdev_priv(dev);
+ 	int err;
+ 
+-	/*
+-	 * Just to be *really* sure not to send anything if the interface
+-	 * is down, the ethernet device may have gone.
+-	 */
+-	if (!netif_running(dev))
++	spin_lock_bh(&lapbeth->up_lock);
++	if (!lapbeth->up)
+ 		goto drop;
+ 
+ 	/* There should be a pseudo header of 1 byte added by upper layers.
+@@ -185,6 +189,7 @@ static netdev_tx_t lapbeth_xmit(struct sk_buff *skb,
+ 		goto drop;
+ 	}
+ out:
++	spin_unlock_bh(&lapbeth->up_lock);
+ 	return NETDEV_TX_OK;
+ drop:
+ 	kfree_skb(skb);
+@@ -276,6 +281,7 @@ static const struct lapb_register_struct lapbeth_callbacks = {
+  */
+ static int lapbeth_open(struct net_device *dev)
+ {
++	struct lapbethdev *lapbeth = netdev_priv(dev);
+ 	int err;
+ 
+ 	if ((err = lapb_register(dev, &lapbeth_callbacks)) != LAPB_OK) {
+@@ -283,13 +289,22 @@ static int lapbeth_open(struct net_device *dev)
+ 		return -ENODEV;
+ 	}
+ 
++	spin_lock_bh(&lapbeth->up_lock);
++	lapbeth->up = true;
++	spin_unlock_bh(&lapbeth->up_lock);
++
+ 	return 0;
+ }
+ 
+ static int lapbeth_close(struct net_device *dev)
+ {
++	struct lapbethdev *lapbeth = netdev_priv(dev);
+ 	int err;
+ 
++	spin_lock_bh(&lapbeth->up_lock);
++	lapbeth->up = false;
++	spin_unlock_bh(&lapbeth->up_lock);
++
+ 	if ((err = lapb_unregister(dev)) != LAPB_OK)
+ 		pr_err("lapb_unregister error: %d\n", err);
+ 
+@@ -347,6 +362,9 @@ static int lapbeth_new_device(struct net_device *dev)
+ 	dev_hold(dev);
+ 	lapbeth->ethdev = dev;
+ 
++	lapbeth->up = false;
++	spin_lock_init(&lapbeth->up_lock);
++
+ 	rc = -EIO;
+ 	if (register_netdevice(ndev))
+ 		goto fail;
+diff --git a/drivers/net/wireless/ath/ath10k/htc.c b/drivers/net/wireless/ath/ath10k/htc.c
+index 31df6dd04bf6f..540dd59112a5c 100644
+--- a/drivers/net/wireless/ath/ath10k/htc.c
++++ b/drivers/net/wireless/ath/ath10k/htc.c
+@@ -665,7 +665,7 @@ static int ath10k_htc_send_bundle(struct ath10k_htc_ep *ep,
+ 
+ 	ath10k_dbg(ar, ATH10K_DBG_HTC,
+ 		   "bundle tx status %d eid %d req count %d count %d len %d\n",
+-		   ret, ep->eid, skb_queue_len(&ep->tx_req_head), cn, bundle_skb->len);
++		   ret, ep->eid, skb_queue_len(&ep->tx_req_head), cn, skb_len);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index e7072fc4f487a..4f2fbc610d798 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -592,6 +592,9 @@ static void ath10k_wmi_event_tdls_peer(struct ath10k *ar, struct sk_buff *skb)
+ 					GFP_ATOMIC
+ 					);
+ 		break;
++	default:
++		kfree(tb);
++		return;
+ 	}
+ 
+ exit:
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+index db0c6fa9c9dc4..ff61ae34ecdf0 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+@@ -246,7 +246,7 @@ static unsigned int ath9k_regread(void *hw_priv, u32 reg_offset)
+ 	if (unlikely(r)) {
+ 		ath_dbg(common, WMI, "REGISTER READ FAILED: (0x%04x, %d)\n",
+ 			reg_offset, r);
+-		return -EIO;
++		return -1;
+ 	}
+ 
+ 	return be32_to_cpu(val);
+diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
+index 6609ce122e6e5..c86faebbc4594 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.c
++++ b/drivers/net/wireless/ath/ath9k/hw.c
+@@ -287,7 +287,7 @@ static bool ath9k_hw_read_revisions(struct ath_hw *ah)
+ 
+ 	srev = REG_READ(ah, AR_SREV);
+ 
+-	if (srev == -EIO) {
++	if (srev == -1) {
+ 		ath_err(ath9k_hw_common(ah),
+ 			"Failed to read SREV register");
+ 		return false;
+diff --git a/drivers/net/wireless/intel/ipw2x00/libipw_wx.c b/drivers/net/wireless/intel/ipw2x00/libipw_wx.c
+index a0cf78c418ac9..903de34028efb 100644
+--- a/drivers/net/wireless/intel/ipw2x00/libipw_wx.c
++++ b/drivers/net/wireless/intel/ipw2x00/libipw_wx.c
+@@ -633,8 +633,10 @@ int libipw_wx_set_encodeext(struct libipw_device *ieee,
+ 	}
+ 
+ 	if (ext->alg != IW_ENCODE_ALG_NONE) {
+-		memcpy(sec.keys[idx], ext->key, ext->key_len);
+-		sec.key_sizes[idx] = ext->key_len;
++		int key_len = clamp_val(ext->key_len, 0, SCM_KEY_LEN);
++
++		memcpy(sec.keys[idx], ext->key, key_len);
++		sec.key_sizes[idx] = key_len;
+ 		sec.flags |= (1 << idx);
+ 		if (ext->alg == IW_ENCODE_ALG_WEP) {
+ 			sec.encode_alg[idx] = SEC_ALG_WEP;
+diff --git a/drivers/net/wireless/marvell/mwl8k.c b/drivers/net/wireless/marvell/mwl8k.c
+index 23efd7075df6a..27b7d4b779e0b 100644
+--- a/drivers/net/wireless/marvell/mwl8k.c
++++ b/drivers/net/wireless/marvell/mwl8k.c
+@@ -1469,6 +1469,7 @@ static int mwl8k_txq_init(struct ieee80211_hw *hw, int index)
+ 	txq->skb = kcalloc(MWL8K_TX_DESCS, sizeof(*txq->skb), GFP_KERNEL);
+ 	if (txq->skb == NULL) {
+ 		pci_free_consistent(priv->pdev, size, txq->txd, txq->txd_dma);
++		txq->txd = NULL;
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 665a03ebf9efd..0fdfead45c77c 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -318,7 +318,7 @@ mt76_dma_tx_queue_skb_raw(struct mt76_dev *dev, enum mt76_txq_id qid,
+ 			  struct sk_buff *skb, u32 tx_info)
+ {
+ 	struct mt76_queue *q = dev->q_tx[qid];
+-	struct mt76_queue_buf buf;
++	struct mt76_queue_buf buf = {};
+ 	dma_addr_t addr;
+ 
+ 	if (q->queued + 1 >= q->ndesc - 1)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index f1f954ff46856..5795e44f8a529 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -688,7 +688,7 @@ mt7615_txp_skb_unmap_fw(struct mt76_dev *dev, struct mt7615_fw_txp *txp)
+ {
+ 	int i;
+ 
+-	for (i = 1; i < txp->nbuf; i++)
++	for (i = 0; i < txp->nbuf; i++)
+ 		dma_unmap_single(dev->dev, le32_to_cpu(txp->buf[i]),
+ 				 le16_to_cpu(txp->len[i]), DMA_TO_DEVICE);
+ }
+@@ -1817,10 +1817,8 @@ mt7615_mac_update_mib_stats(struct mt7615_phy *phy)
+ 	int i, aggr;
+ 	u32 val, val2;
+ 
+-	memset(mib, 0, sizeof(*mib));
+-
+-	mib->fcs_err_cnt = mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
+-					  MT_MIB_SDR3_FCS_ERR_MASK);
++	mib->fcs_err_cnt += mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
++					   MT_MIB_SDR3_FCS_ERR_MASK);
+ 
+ 	val = mt76_get_field(dev, MT_MIB_SDR14(ext_phy),
+ 			     MT_MIB_AMPDU_MPDU_COUNT);
+@@ -1833,24 +1831,16 @@ mt7615_mac_update_mib_stats(struct mt7615_phy *phy)
+ 	aggr = ext_phy ? ARRAY_SIZE(dev->mt76.aggr_stats) / 2 : 0;
+ 	for (i = 0; i < 4; i++) {
+ 		val = mt76_rr(dev, MT_MIB_MB_SDR1(ext_phy, i));
+-
+-		val2 = FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK, val);
+-		if (val2 > mib->ack_fail_cnt)
+-			mib->ack_fail_cnt = val2;
+-
+-		val2 = FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
+-		if (val2 > mib->ba_miss_cnt)
+-			mib->ba_miss_cnt = val2;
++		mib->ba_miss_cnt += FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
++		mib->ack_fail_cnt += FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK,
++					       val);
+ 
+ 		val = mt76_rr(dev, MT_MIB_MB_SDR0(ext_phy, i));
+-		val2 = FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK, val);
+-		if (val2 > mib->rts_retries_cnt) {
+-			mib->rts_cnt = FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
+-			mib->rts_retries_cnt = val2;
+-		}
++		mib->rts_cnt += FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
++		mib->rts_retries_cnt += FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK,
++						  val);
+ 
+ 		val = mt76_rr(dev, MT_TX_AGG_CNT(ext_phy, i));
+-
+ 		dev->mt76.aggr_stats[aggr++] += val & 0xffff;
+ 		dev->mt76.aggr_stats[aggr++] += val >> 16;
+ 	}
+@@ -2106,8 +2096,12 @@ void mt7615_tx_token_put(struct mt7615_dev *dev)
+ 	spin_lock_bh(&dev->token_lock);
+ 	idr_for_each_entry(&dev->token, txwi, id) {
+ 		mt7615_txp_skb_unmap(&dev->mt76, txwi);
+-		if (txwi->skb)
+-			dev_kfree_skb_any(txwi->skb);
++		if (txwi->skb) {
++			struct ieee80211_hw *hw;
++
++			hw = mt76_tx_status_get_hw(&dev->mt76, txwi->skb);
++			ieee80211_free_txskb(hw, txwi->skb);
++		}
+ 		mt76_put_txwi(&dev->mt76, txwi);
+ 	}
+ 	spin_unlock_bh(&dev->token_lock);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index 3186b7b2ca483..88cdc2badeae7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -851,11 +851,17 @@ mt7615_get_stats(struct ieee80211_hw *hw,
+ 	struct mt7615_phy *phy = mt7615_hw_phy(hw);
+ 	struct mib_stats *mib = &phy->mib;
+ 
++	mt7615_mutex_acquire(phy->dev);
++
+ 	stats->dot11RTSSuccessCount = mib->rts_cnt;
+ 	stats->dot11RTSFailureCount = mib->rts_retries_cnt;
+ 	stats->dot11FCSErrorCount = mib->fcs_err_cnt;
+ 	stats->dot11ACKFailureCount = mib->ack_fail_cnt;
+ 
++	memset(mib, 0, sizeof(*mib));
++
++	mt7615_mutex_release(phy->dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+index 5b06294d654aa..4cee76691786e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+@@ -161,11 +161,11 @@ struct mt7615_vif {
+ };
+ 
+ struct mib_stats {
+-	u16 ack_fail_cnt;
+-	u16 fcs_err_cnt;
+-	u16 rts_cnt;
+-	u16 rts_retries_cnt;
+-	u16 ba_miss_cnt;
++	u32 ack_fail_cnt;
++	u32 fcs_err_cnt;
++	u32 rts_cnt;
++	u32 rts_retries_cnt;
++	u32 ba_miss_cnt;
+ 	unsigned long aggr_per;
+ };
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c b/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
+index 7b81aef3684ed..726e4781d9d9f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
+@@ -161,10 +161,9 @@ void mt7615_unregister_device(struct mt7615_dev *dev)
+ 	mt76_unregister_device(&dev->mt76);
+ 	if (mcu_running)
+ 		mt7615_mcu_exit(dev);
+-	mt7615_dma_cleanup(dev);
+ 
+ 	mt7615_tx_token_put(dev);
+-
++	mt7615_dma_cleanup(dev);
+ 	tasklet_disable(&dev->irq_tasklet);
+ 
+ 	mt76_free_device(&dev->mt76);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c b/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
+index 595519c582558..d7d61a5b66a3c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
+@@ -195,11 +195,14 @@ static int mt7663s_tx_run_queue(struct mt76_dev *dev, enum mt76_txq_id qid)
+ 	int err, nframes = 0, len = 0, pse_sz = 0, ple_sz = 0;
+ 	struct mt76_queue *q = dev->q_tx[qid];
+ 	struct mt76_sdio *sdio = &dev->sdio;
++	u8 pad;
+ 
+ 	while (q->first != q->head) {
+ 		struct mt76_queue_entry *e = &q->entry[q->first];
+ 		struct sk_buff *iter;
+ 
++		smp_rmb();
++
+ 		if (!test_bit(MT76_STATE_MCU_RUNNING, &dev->phy.state)) {
+ 			__skb_put_zero(e->skb, 4);
+ 			err = __mt7663s_xmit_queue(dev, e->skb->data,
+@@ -210,7 +213,8 @@ static int mt7663s_tx_run_queue(struct mt76_dev *dev, enum mt76_txq_id qid)
+ 			goto next;
+ 		}
+ 
+-		if (len + e->skb->len + 4 > MT76S_XMIT_BUF_SZ)
++		pad = roundup(e->skb->len, 4) - e->skb->len;
++		if (len + e->skb->len + pad + 4 > MT76S_XMIT_BUF_SZ)
+ 			break;
+ 
+ 		if (mt7663s_tx_pick_quota(sdio, qid, e->buf_sz, &pse_sz,
+@@ -228,6 +232,11 @@ static int mt7663s_tx_run_queue(struct mt76_dev *dev, enum mt76_txq_id qid)
+ 			len += iter->len;
+ 			nframes++;
+ 		}
++
++		if (unlikely(pad)) {
++			memset(sdio->xmit_buf[qid] + len, 0, pad);
++			len += pad;
++		}
+ next:
+ 		q->first = (q->first + 1) % q->ndesc;
+ 		e->done = true;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+index 8f2ad32ade180..e4d7eb33a9f44 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+@@ -124,7 +124,7 @@ mt7915_ampdu_stat_read_phy(struct mt7915_phy *phy,
+ 		range[i] = mt76_rr(dev, MT_MIB_ARNG(ext_phy, i));
+ 
+ 	for (i = 0; i < ARRAY_SIZE(bound); i++)
+-		bound[i] = MT_MIB_ARNCR_RANGE(range[i / 4], i) + 1;
++		bound[i] = MT_MIB_ARNCR_RANGE(range[i / 4], i % 4) + 1;
+ 
+ 	seq_printf(file, "\nPhy %d\n", ext_phy);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index 6f159d99a5965..1e14d7782841e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -856,7 +856,7 @@ void mt7915_txp_skb_unmap(struct mt76_dev *dev,
+ 	int i;
+ 
+ 	txp = mt7915_txwi_to_txp(dev, t);
+-	for (i = 1; i < txp->nbuf; i++)
++	for (i = 0; i < txp->nbuf; i++)
+ 		dma_unmap_single(dev->dev, le32_to_cpu(txp->buf[i]),
+ 				 le16_to_cpu(txp->len[i]), DMA_TO_DEVICE);
+ }
+@@ -1277,39 +1277,30 @@ mt7915_mac_update_mib_stats(struct mt7915_phy *phy)
+ 	bool ext_phy = phy != &dev->phy;
+ 	int i, aggr0, aggr1;
+ 
+-	memset(mib, 0, sizeof(*mib));
+-
+-	mib->fcs_err_cnt = mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
+-					  MT_MIB_SDR3_FCS_ERR_MASK);
++	mib->fcs_err_cnt += mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
++					   MT_MIB_SDR3_FCS_ERR_MASK);
+ 
+ 	aggr0 = ext_phy ? ARRAY_SIZE(dev->mt76.aggr_stats) / 2 : 0;
+ 	for (i = 0, aggr1 = aggr0 + 4; i < 4; i++) {
+-		u32 val, val2;
++		u32 val;
+ 
+ 		val = mt76_rr(dev, MT_MIB_MB_SDR1(ext_phy, i));
+-
+-		val2 = FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK, val);
+-		if (val2 > mib->ack_fail_cnt)
+-			mib->ack_fail_cnt = val2;
+-
+-		val2 = FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
+-		if (val2 > mib->ba_miss_cnt)
+-			mib->ba_miss_cnt = val2;
++		mib->ba_miss_cnt += FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
++		mib->ack_fail_cnt +=
++			FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK, val);
+ 
+ 		val = mt76_rr(dev, MT_MIB_MB_SDR0(ext_phy, i));
+-		val2 = FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK, val);
+-		if (val2 > mib->rts_retries_cnt) {
+-			mib->rts_cnt = FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
+-			mib->rts_retries_cnt = val2;
+-		}
++		mib->rts_cnt += FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
++		mib->rts_retries_cnt +=
++			FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK, val);
+ 
+ 		val = mt76_rr(dev, MT_TX_AGG_CNT(ext_phy, i));
+-		val2 = mt76_rr(dev, MT_TX_AGG_CNT2(ext_phy, i));
+-
+ 		dev->mt76.aggr_stats[aggr0++] += val & 0xffff;
+ 		dev->mt76.aggr_stats[aggr0++] += val >> 16;
+-		dev->mt76.aggr_stats[aggr1++] += val2 & 0xffff;
+-		dev->mt76.aggr_stats[aggr1++] += val2 >> 16;
++
++		val = mt76_rr(dev, MT_TX_AGG_CNT2(ext_phy, i));
++		dev->mt76.aggr_stats[aggr1++] += val & 0xffff;
++		dev->mt76.aggr_stats[aggr1++] += val >> 16;
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index c48158392057e..e78d3efa3fdf4 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -651,13 +651,19 @@ mt7915_get_stats(struct ieee80211_hw *hw,
+ 		 struct ieee80211_low_level_stats *stats)
+ {
+ 	struct mt7915_phy *phy = mt7915_hw_phy(hw);
++	struct mt7915_dev *dev = mt7915_hw_dev(hw);
+ 	struct mib_stats *mib = &phy->mib;
+ 
++	mutex_lock(&dev->mt76.mutex);
+ 	stats->dot11RTSSuccessCount = mib->rts_cnt;
+ 	stats->dot11RTSFailureCount = mib->rts_retries_cnt;
+ 	stats->dot11FCSErrorCount = mib->fcs_err_cnt;
+ 	stats->dot11ACKFailureCount = mib->ack_fail_cnt;
+ 
++	memset(mib, 0, sizeof(*mib));
++
++	mutex_unlock(&dev->mt76.mutex);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+index 4b8908fa7eda6..c84110e34ede1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+@@ -99,11 +99,11 @@ struct mt7915_vif {
+ };
+ 
+ struct mib_stats {
+-	u16 ack_fail_cnt;
+-	u16 fcs_err_cnt;
+-	u16 rts_cnt;
+-	u16 rts_retries_cnt;
+-	u16 ba_miss_cnt;
++	u32 ack_fail_cnt;
++	u32 fcs_err_cnt;
++	u32 rts_cnt;
++	u32 rts_retries_cnt;
++	u32 ba_miss_cnt;
+ };
+ 
+ struct mt7915_phy {
+diff --git a/drivers/net/wireless/mediatek/mt76/sdio.c b/drivers/net/wireless/mediatek/mt76/sdio.c
+index 9a4d95a2a7072..439ea4158260e 100644
+--- a/drivers/net/wireless/mediatek/mt76/sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/sdio.c
+@@ -215,6 +215,9 @@ mt76s_tx_queue_skb(struct mt76_dev *dev, enum mt76_txq_id qid,
+ 
+ 	q->entry[q->head].skb = tx_info.skb;
+ 	q->entry[q->head].buf_sz = len;
++
++	smp_wmb();
++
+ 	q->head = (q->head + 1) % q->ndesc;
+ 	q->queued++;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt7601u/eeprom.c b/drivers/net/wireless/mediatek/mt7601u/eeprom.c
+index c868582c5d225..aa3b64902cf9b 100644
+--- a/drivers/net/wireless/mediatek/mt7601u/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt7601u/eeprom.c
+@@ -99,7 +99,7 @@ mt7601u_has_tssi(struct mt7601u_dev *dev, u8 *eeprom)
+ {
+ 	u16 nic_conf1 = get_unaligned_le16(eeprom + MT_EE_NIC_CONF_1);
+ 
+-	return ~nic_conf1 && (nic_conf1 & MT_EE_NIC_CONF_1_TX_ALC_EN);
++	return (u16)~nic_conf1 && (nic_conf1 & MT_EE_NIC_CONF_1_TX_ALC_EN);
+ }
+ 
+ static void
+diff --git a/drivers/net/wireless/microchip/wilc1000/sdio.c b/drivers/net/wireless/microchip/wilc1000/sdio.c
+index 351ff909ab1c7..e14b9fc2c67ac 100644
+--- a/drivers/net/wireless/microchip/wilc1000/sdio.c
++++ b/drivers/net/wireless/microchip/wilc1000/sdio.c
+@@ -947,7 +947,7 @@ static int wilc_sdio_sync_ext(struct wilc *wilc, int nint)
+ 			for (i = 0; (i < 3) && (nint > 0); i++, nint--)
+ 				reg |= BIT(i);
+ 
+-			ret = wilc_sdio_read_reg(wilc, WILC_INTR2_ENABLE, &reg);
++			ret = wilc_sdio_write_reg(wilc, WILC_INTR2_ENABLE, reg);
+ 			if (ret) {
+ 				dev_err(&func->dev,
+ 					"Failed write reg (%08x)...\n",
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c
+index 85093b3e53733..ed72a2aeb6c8e 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c
+@@ -249,7 +249,7 @@ u32 RTL8821AE_PHY_REG_ARRAY[] = {
+ 	0x824, 0x00030FE0,
+ 	0x828, 0x00000000,
+ 	0x82C, 0x002081DD,
+-	0x830, 0x2AAA8E24,
++	0x830, 0x2AAAEEC8,
+ 	0x834, 0x0037A706,
+ 	0x838, 0x06489B44,
+ 	0x83C, 0x0000095B,
+@@ -324,10 +324,10 @@ u32 RTL8821AE_PHY_REG_ARRAY[] = {
+ 	0x9D8, 0x00000000,
+ 	0x9DC, 0x00000000,
+ 	0x9E0, 0x00005D00,
+-	0x9E4, 0x00000002,
++	0x9E4, 0x00000003,
+ 	0x9E8, 0x00000001,
+ 	0xA00, 0x00D047C8,
+-	0xA04, 0x01FF000C,
++	0xA04, 0x01FF800C,
+ 	0xA08, 0x8C8A8300,
+ 	0xA0C, 0x2E68000F,
+ 	0xA10, 0x9500BB78,
+@@ -1320,7 +1320,11 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x083, 0x00021800,
+ 		0x084, 0x00028000,
+ 		0x085, 0x00048000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
++		0x086, 0x0009483A,
++	0xA0000000,	0x00000000,
+ 		0x086, 0x00094838,
++	0xB0000000,	0x00000000,
+ 		0x087, 0x00044980,
+ 		0x088, 0x00048000,
+ 		0x089, 0x0000D480,
+@@ -1409,36 +1413,32 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x03C, 0x000CA000,
+ 		0x0EF, 0x00000000,
+ 		0x0EF, 0x00001100,
+-	0xFF0F0104, 0xABCD,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0004ADF3,
+ 		0x034, 0x00049DF0,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0004ADF3,
+ 		0x034, 0x00049DF0,
+-	0xFF0F0404, 0xCDEF,
+-		0x034, 0x0004ADF3,
+-		0x034, 0x00049DF0,
+-	0xFF0F0200, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0004ADF5,
+ 		0x034, 0x00049DF2,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0004A0F3,
++		0x034, 0x000490B1,
++		0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0004A0F3,
+ 		0x034, 0x000490B1,
+-	0xCDCDCDCD, 0xCDCD,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0004ADF5,
++		0x034, 0x00049DF2,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0004ADF3,
++		0x034, 0x00049DF0,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x0004ADF7,
+ 		0x034, 0x00049DF3,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
+-		0x034, 0x00048DED,
+-		0x034, 0x00047DEA,
+-		0x034, 0x00046DE7,
+-		0x034, 0x00045CE9,
+-		0x034, 0x00044CE6,
+-		0x034, 0x000438C6,
+-		0x034, 0x00042886,
+-		0x034, 0x00041486,
+-		0x034, 0x00040447,
+-	0xFF0F0204, 0xCDEF,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00048DED,
+ 		0x034, 0x00047DEA,
+ 		0x034, 0x00046DE7,
+@@ -1448,7 +1448,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00042886,
+ 		0x034, 0x00041486,
+ 		0x034, 0x00040447,
+-	0xFF0F0404, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00048DED,
+ 		0x034, 0x00047DEA,
+ 		0x034, 0x00046DE7,
+@@ -1458,7 +1458,17 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00042886,
+ 		0x034, 0x00041486,
+ 		0x034, 0x00040447,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x000480AE,
++		0x034, 0x000470AB,
++		0x034, 0x0004608B,
++		0x034, 0x00045069,
++		0x034, 0x00044048,
++		0x034, 0x00043045,
++		0x034, 0x00042026,
++		0x034, 0x00041023,
++		0x034, 0x00040002,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x000480AE,
+ 		0x034, 0x000470AB,
+ 		0x034, 0x0004608B,
+@@ -1468,7 +1478,17 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00042026,
+ 		0x034, 0x00041023,
+ 		0x034, 0x00040002,
+-	0xCDCDCDCD, 0xCDCD,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x00048DED,
++		0x034, 0x00047DEA,
++		0x034, 0x00046DE7,
++		0x034, 0x00045CE9,
++		0x034, 0x00044CE6,
++		0x034, 0x000438C6,
++		0x034, 0x00042886,
++		0x034, 0x00041486,
++		0x034, 0x00040447,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x00048DEF,
+ 		0x034, 0x00047DEC,
+ 		0x034, 0x00046DE9,
+@@ -1478,38 +1498,36 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x0004248A,
+ 		0x034, 0x0004108D,
+ 		0x034, 0x0004008A,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0200, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x80000210,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0002ADF4,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0002A0F3,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0002A0F3,
+-	0xCDCDCDCD, 0xCDCD,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0002ADF4,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x0002ADF7,
+-	0xFF0F0200, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
+-		0x034, 0x00029DF4,
+-	0xFF0F0204, 0xCDEF,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00029DF4,
+-	0xFF0F0404, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00029DF4,
+-	0xFF0F0200, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00029DF1,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x000290F0,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x000290F0,
+-	0xCDCDCDCD, 0xCDCD,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x00029DF1,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x00029DF4,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x00029DF2,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
+-		0x034, 0x00028DF1,
+-		0x034, 0x00027DEE,
+-		0x034, 0x00026DEB,
+-		0x034, 0x00025CEC,
+-		0x034, 0x00024CE9,
+-		0x034, 0x000238CA,
+-		0x034, 0x00022889,
+-		0x034, 0x00021489,
+-		0x034, 0x0002044A,
+-	0xFF0F0204, 0xCDEF,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00028DF1,
+ 		0x034, 0x00027DEE,
+ 		0x034, 0x00026DEB,
+@@ -1519,7 +1537,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00022889,
+ 		0x034, 0x00021489,
+ 		0x034, 0x0002044A,
+-	0xFF0F0404, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00028DF1,
+ 		0x034, 0x00027DEE,
+ 		0x034, 0x00026DEB,
+@@ -1529,7 +1547,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00022889,
+ 		0x034, 0x00021489,
+ 		0x034, 0x0002044A,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x000280AF,
+ 		0x034, 0x000270AC,
+ 		0x034, 0x0002608B,
+@@ -1539,7 +1557,27 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00022026,
+ 		0x034, 0x00021023,
+ 		0x034, 0x00020002,
+-	0xCDCDCDCD, 0xCDCD,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x000280AF,
++		0x034, 0x000270AC,
++		0x034, 0x0002608B,
++		0x034, 0x00025069,
++		0x034, 0x00024048,
++		0x034, 0x00023045,
++		0x034, 0x00022026,
++		0x034, 0x00021023,
++		0x034, 0x00020002,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x00028DF1,
++		0x034, 0x00027DEE,
++		0x034, 0x00026DEB,
++		0x034, 0x00025CEC,
++		0x034, 0x00024CE9,
++		0x034, 0x000238CA,
++		0x034, 0x00022889,
++		0x034, 0x00021489,
++		0x034, 0x0002044A,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x00028DEE,
+ 		0x034, 0x00027DEB,
+ 		0x034, 0x00026CCD,
+@@ -1549,27 +1587,24 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00022849,
+ 		0x034, 0x00021449,
+ 		0x034, 0x0002004D,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F02C0, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x8000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0000A0D7,
++		0x034, 0x000090D3,
++		0x034, 0x000080B1,
++		0x034, 0x000070AE,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0000A0D7,
+ 		0x034, 0x000090D3,
+ 		0x034, 0x000080B1,
+ 		0x034, 0x000070AE,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x0000ADF7,
+ 		0x034, 0x00009DF4,
+ 		0x034, 0x00008DF1,
+ 		0x034, 0x00007DEE,
+-	0xFF0F02C0, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
+-		0x034, 0x00006DEB,
+-		0x034, 0x00005CEC,
+-		0x034, 0x00004CE9,
+-		0x034, 0x000038CA,
+-		0x034, 0x00002889,
+-		0x034, 0x00001489,
+-		0x034, 0x0000044A,
+-	0xFF0F0204, 0xCDEF,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00006DEB,
+ 		0x034, 0x00005CEC,
+ 		0x034, 0x00004CE9,
+@@ -1577,7 +1612,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00002889,
+ 		0x034, 0x00001489,
+ 		0x034, 0x0000044A,
+-	0xFF0F0404, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00006DEB,
+ 		0x034, 0x00005CEC,
+ 		0x034, 0x00004CE9,
+@@ -1585,7 +1620,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00002889,
+ 		0x034, 0x00001489,
+ 		0x034, 0x0000044A,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0000608D,
+ 		0x034, 0x0000506B,
+ 		0x034, 0x0000404A,
+@@ -1593,7 +1628,23 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00002044,
+ 		0x034, 0x00001025,
+ 		0x034, 0x00000004,
+-	0xCDCDCDCD, 0xCDCD,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0000608D,
++		0x034, 0x0000506B,
++		0x034, 0x0000404A,
++		0x034, 0x00003047,
++		0x034, 0x00002044,
++		0x034, 0x00001025,
++		0x034, 0x00000004,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x00006DEB,
++		0x034, 0x00005CEC,
++		0x034, 0x00004CE9,
++		0x034, 0x000038CA,
++		0x034, 0x00002889,
++		0x034, 0x00001489,
++		0x034, 0x0000044A,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x00006DCD,
+ 		0x034, 0x00005CCD,
+ 		0x034, 0x00004CCA,
+@@ -1601,11 +1652,11 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00002888,
+ 		0x034, 0x00001488,
+ 		0x034, 0x00000486,
+-	0xFF0F0104, 0xDEAD,
++	0xB0000000,	0x00000000,
+ 		0x0EF, 0x00000000,
+ 		0x018, 0x0001712A,
+ 		0x0EF, 0x00000040,
+-	0xFF0F0104, 0xABCD,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x035, 0x00000187,
+ 		0x035, 0x00008187,
+ 		0x035, 0x00010187,
+@@ -1615,7 +1666,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x035, 0x00040188,
+ 		0x035, 0x00048188,
+ 		0x035, 0x00050188,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x035, 0x00000187,
+ 		0x035, 0x00008187,
+ 		0x035, 0x00010187,
+@@ -1625,7 +1676,37 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x035, 0x00040188,
+ 		0x035, 0x00048188,
+ 		0x035, 0x00050188,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
++		0x035, 0x00000128,
++		0x035, 0x00008128,
++		0x035, 0x00010128,
++		0x035, 0x000201C8,
++		0x035, 0x000281C8,
++		0x035, 0x000301C8,
++		0x035, 0x000401C8,
++		0x035, 0x000481C8,
++		0x035, 0x000501C8,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x035, 0x00000145,
++		0x035, 0x00008145,
++		0x035, 0x00010145,
++		0x035, 0x00020196,
++		0x035, 0x00028196,
++		0x035, 0x00030196,
++		0x035, 0x000401C7,
++		0x035, 0x000481C7,
++		0x035, 0x000501C7,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x035, 0x00000128,
++		0x035, 0x00008128,
++		0x035, 0x00010128,
++		0x035, 0x000201C8,
++		0x035, 0x000281C8,
++		0x035, 0x000301C8,
++		0x035, 0x000401C8,
++		0x035, 0x000481C8,
++		0x035, 0x000501C8,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x035, 0x00000187,
+ 		0x035, 0x00008187,
+ 		0x035, 0x00010187,
+@@ -1635,7 +1716,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x035, 0x00040188,
+ 		0x035, 0x00048188,
+ 		0x035, 0x00050188,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x035, 0x00000145,
+ 		0x035, 0x00008145,
+ 		0x035, 0x00010145,
+@@ -1645,11 +1726,11 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x035, 0x000401C7,
+ 		0x035, 0x000481C7,
+ 		0x035, 0x000501C7,
+-	0xFF0F0104, 0xDEAD,
++	0xB0000000,	0x00000000,
+ 		0x0EF, 0x00000000,
+ 		0x018, 0x0001712A,
+ 		0x0EF, 0x00000010,
+-	0xFF0F0104, 0xABCD,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x036, 0x00085733,
+ 		0x036, 0x0008D733,
+ 		0x036, 0x00095733,
+@@ -1662,7 +1743,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x036, 0x000CE4B4,
+ 		0x036, 0x000D64B4,
+ 		0x036, 0x000DE4B4,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x036, 0x00085733,
+ 		0x036, 0x0008D733,
+ 		0x036, 0x00095733,
+@@ -1675,7 +1756,46 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x036, 0x000CE4B4,
+ 		0x036, 0x000D64B4,
+ 		0x036, 0x000DE4B4,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
++		0x036, 0x000063B5,
++		0x036, 0x0000E3B5,
++		0x036, 0x000163B5,
++		0x036, 0x0001E3B5,
++		0x036, 0x000263B5,
++		0x036, 0x0002E3B5,
++		0x036, 0x000363B5,
++		0x036, 0x0003E3B5,
++		0x036, 0x000463B5,
++		0x036, 0x0004E3B5,
++		0x036, 0x000563B5,
++		0x036, 0x0005E3B5,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x036, 0x000056B3,
++		0x036, 0x0000D6B3,
++		0x036, 0x000156B3,
++		0x036, 0x0001D6B3,
++		0x036, 0x00026634,
++		0x036, 0x0002E634,
++		0x036, 0x00036634,
++		0x036, 0x0003E634,
++		0x036, 0x000467B4,
++		0x036, 0x0004E7B4,
++		0x036, 0x000567B4,
++		0x036, 0x0005E7B4,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x036, 0x000063B5,
++		0x036, 0x0000E3B5,
++		0x036, 0x000163B5,
++		0x036, 0x0001E3B5,
++		0x036, 0x000263B5,
++		0x036, 0x0002E3B5,
++		0x036, 0x000363B5,
++		0x036, 0x0003E3B5,
++		0x036, 0x000463B5,
++		0x036, 0x0004E3B5,
++		0x036, 0x000563B5,
++		0x036, 0x0005E3B5,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x036, 0x00085733,
+ 		0x036, 0x0008D733,
+ 		0x036, 0x00095733,
+@@ -1688,7 +1808,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x036, 0x000CE4B4,
+ 		0x036, 0x000D64B4,
+ 		0x036, 0x000DE4B4,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x036, 0x000056B3,
+ 		0x036, 0x0000D6B3,
+ 		0x036, 0x000156B3,
+@@ -1701,103 +1821,162 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x036, 0x0004E7B4,
+ 		0x036, 0x000567B4,
+ 		0x036, 0x0005E7B4,
+-	0xFF0F0104, 0xDEAD,
++	0xB0000000,	0x00000000,
+ 		0x0EF, 0x00000000,
+ 		0x0EF, 0x00000008,
+-	0xFF0F0104, 0xABCD,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x000001C8,
+ 		0x03C, 0x00000492,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x000001C8,
+ 		0x03C, 0x00000492,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
++		0x03C, 0x000001B6,
++		0x03C, 0x00000492,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x03C, 0x0000022A,
++		0x03C, 0x00000594,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x03C, 0x000001B6,
++		0x03C, 0x00000492,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x000001C8,
+ 		0x03C, 0x00000492,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x03C, 0x0000022A,
+ 		0x03C, 0x00000594,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x00000800,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x00000800,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x00000800,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x00000820,
+-	0xCDCDCDCD, 0xCDCD,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x03C, 0x00000820,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x03C, 0x00000800,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x03C, 0x00000800,
++	0xA0000000,	0x00000000,
+ 		0x03C, 0x00000900,
+-	0xFF0F0104, 0xDEAD,
++	0xB0000000,	0x00000000,
+ 		0x0EF, 0x00000000,
+ 		0x018, 0x0001712A,
+ 		0x0EF, 0x00000002,
+-	0xFF0F0104, 0xABCD,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x008, 0x0004E400,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x008, 0x0004E400,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
++		0x008, 0x00002000,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x008, 0x00002000,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x008, 0x00002000,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x008, 0x00002000,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x008, 0x0004E400,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x008, 0x00002000,
+-	0xFF0F0104, 0xDEAD,
++	0xB0000000,	0x00000000,
+ 		0x0EF, 0x00000000,
+ 		0x0DF, 0x000000C0,
+-		0x01F, 0x00040064,
+-	0xFF0F0104, 0xABCD,
++		0x01F, 0x00000064,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x058, 0x000A7284,
+ 		0x059, 0x000600EC,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x058, 0x000A7284,
+ 		0x059, 0x000600EC,
+-	0xFF0F0404, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x058, 0x00081184,
++		0x059, 0x0006016C,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x058, 0x00081184,
++		0x059, 0x0006016C,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x058, 0x00081184,
++		0x059, 0x0006016C,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x058, 0x000A7284,
+ 		0x059, 0x000600EC,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x058, 0x00081184,
+ 		0x059, 0x0006016C,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x061, 0x000E8D73,
+ 		0x062, 0x00093FC5,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x061, 0x000E8D73,
+ 		0x062, 0x00093FC5,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
++		0x061, 0x000EFD83,
++		0x062, 0x00093FCC,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x061, 0x000EAD53,
++		0x062, 0x00093BC4,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x061, 0x000EFD83,
++		0x062, 0x00093FCC,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x061, 0x000E8D73,
+ 		0x062, 0x00093FC5,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x061, 0x000EAD53,
+ 		0x062, 0x00093BC4,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x063, 0x000110E9,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x063, 0x000110E9,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
++		0x063, 0x000110EB,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x063, 0x000110E9,
+-	0xFF0F0200, 0xCDEF,
+-		0x063, 0x000710E9,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x063, 0x000110E9,
+-	0xCDCDCDCD, 0xCDCD,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x063, 0x000110EB,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x063, 0x000110E9,
++	0xA0000000,	0x00000000,
+ 		0x063, 0x000714E9,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
++		0x064, 0x0001C27C,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
++		0x064, 0x0001C27C,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
+ 		0x064, 0x0001C27C,
+-	0xFF0F0204, 0xCDEF,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x064, 0x0001C67C,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
+ 		0x064, 0x0001C27C,
+-	0xFF0F0404, 0xCDEF,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x064, 0x0001C27C,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x064, 0x0001C67C,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0200, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
++		0x065, 0x00091016,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
++		0x065, 0x00091016,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
+ 		0x065, 0x00093016,
+-	0xFF0F02C0, 0xCDEF,
++		0x9000020c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x065, 0x00093015,
+-	0xCDCDCDCD, 0xCDCD,
++		0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x065, 0x00093015,
++		0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x065, 0x00093016,
++		0xA0000000,	0x00000000,
+ 		0x065, 0x00091016,
+-	0xFF0F0200, 0xDEAD,
++		0xB0000000,	0x00000000,
+ 		0x018, 0x00000006,
+ 		0x0EF, 0x00002000,
+ 		0x03B, 0x0003824B,
+@@ -1895,9 +2074,10 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x0B4, 0x0001214C,
+ 		0x0B7, 0x0003000C,
+ 		0x01C, 0x000539D2,
++		0x0C4, 0x000AFE00,
+ 		0x018, 0x0001F12A,
+-		0x0FE, 0x00000000,
+-		0x0FE, 0x00000000,
++		0xFFE, 0x00000000,
++		0xFFE, 0x00000000,
+ 		0x018, 0x0001712A,
+ 
+ };
+@@ -2017,6 +2197,7 @@ u32 RTL8812AE_MAC_REG_ARRAY[] = {
+ u32 RTL8812AE_MAC_1T_ARRAYLEN = ARRAY_SIZE(RTL8812AE_MAC_REG_ARRAY);
+ 
+ u32 RTL8821AE_MAC_REG_ARRAY[] = {
++		0x421, 0x0000000F,
+ 		0x428, 0x0000000A,
+ 		0x429, 0x00000010,
+ 		0x430, 0x00000000,
+@@ -2485,7 +2666,7 @@ u32 RTL8821AE_AGC_TAB_ARRAY[] = {
+ 		0x81C, 0xA6360001,
+ 		0x81C, 0xA5380001,
+ 		0x81C, 0xA43A0001,
+-		0x81C, 0xA33C0001,
++		0x81C, 0x683C0001,
+ 		0x81C, 0x673E0001,
+ 		0x81C, 0x66400001,
+ 		0x81C, 0x65420001,
+@@ -2519,7 +2700,66 @@ u32 RTL8821AE_AGC_TAB_ARRAY[] = {
+ 		0x81C, 0x017A0001,
+ 		0x81C, 0x017C0001,
+ 		0x81C, 0x017E0001,
+-	0xFF0F02C0, 0xABCD,
++	0x8000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x81C, 0xFB000101,
++		0x81C, 0xFA020101,
++		0x81C, 0xF9040101,
++		0x81C, 0xF8060101,
++		0x81C, 0xF7080101,
++		0x81C, 0xF60A0101,
++		0x81C, 0xF50C0101,
++		0x81C, 0xF40E0101,
++		0x81C, 0xF3100101,
++		0x81C, 0xF2120101,
++		0x81C, 0xF1140101,
++		0x81C, 0xF0160101,
++		0x81C, 0xEF180101,
++		0x81C, 0xEE1A0101,
++		0x81C, 0xED1C0101,
++		0x81C, 0xEC1E0101,
++		0x81C, 0xEB200101,
++		0x81C, 0xEA220101,
++		0x81C, 0xE9240101,
++		0x81C, 0xE8260101,
++		0x81C, 0xE7280101,
++		0x81C, 0xE62A0101,
++		0x81C, 0xE52C0101,
++		0x81C, 0xE42E0101,
++		0x81C, 0xE3300101,
++		0x81C, 0xA5320101,
++		0x81C, 0xA4340101,
++		0x81C, 0xA3360101,
++		0x81C, 0x87380101,
++		0x81C, 0x863A0101,
++		0x81C, 0x853C0101,
++		0x81C, 0x843E0101,
++		0x81C, 0x69400101,
++		0x81C, 0x68420101,
++		0x81C, 0x67440101,
++		0x81C, 0x66460101,
++		0x81C, 0x49480101,
++		0x81C, 0x484A0101,
++		0x81C, 0x474C0101,
++		0x81C, 0x2A4E0101,
++		0x81C, 0x29500101,
++		0x81C, 0x28520101,
++		0x81C, 0x27540101,
++		0x81C, 0x26560101,
++		0x81C, 0x25580101,
++		0x81C, 0x245A0101,
++		0x81C, 0x235C0101,
++		0x81C, 0x055E0101,
++		0x81C, 0x04600101,
++		0x81C, 0x03620101,
++		0x81C, 0x02640101,
++		0x81C, 0x01660101,
++		0x81C, 0x01680101,
++		0x81C, 0x016A0101,
++		0x81C, 0x016C0101,
++		0x81C, 0x016E0101,
++		0x81C, 0x01700101,
++		0x81C, 0x01720101,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x81C, 0xFB000101,
+ 		0x81C, 0xFA020101,
+ 		0x81C, 0xF9040101,
+@@ -2578,7 +2818,7 @@ u32 RTL8821AE_AGC_TAB_ARRAY[] = {
+ 		0x81C, 0x016E0101,
+ 		0x81C, 0x01700101,
+ 		0x81C, 0x01720101,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x81C, 0xFF000101,
+ 		0x81C, 0xFF020101,
+ 		0x81C, 0xFE040101,
+@@ -2637,7 +2877,7 @@ u32 RTL8821AE_AGC_TAB_ARRAY[] = {
+ 		0x81C, 0x046E0101,
+ 		0x81C, 0x03700101,
+ 		0x81C, 0x02720101,
+-	0xFF0F02C0, 0xDEAD,
++	0xB0000000,	0x00000000,
+ 		0x81C, 0x01740101,
+ 		0x81C, 0x01760101,
+ 		0x81C, 0x01780101,
+diff --git a/drivers/net/wireless/realtek/rtw88/debug.c b/drivers/net/wireless/realtek/rtw88/debug.c
+index efbba9caef3bf..8bb6cc8ca74e5 100644
+--- a/drivers/net/wireless/realtek/rtw88/debug.c
++++ b/drivers/net/wireless/realtek/rtw88/debug.c
+@@ -270,7 +270,7 @@ static ssize_t rtw_debugfs_set_rsvd_page(struct file *filp,
+ 
+ 	if (num != 2) {
+ 		rtw_warn(rtwdev, "invalid arguments\n");
+-		return num;
++		return -EINVAL;
+ 	}
+ 
+ 	debugfs_priv->rsvd_page.page_offset = offset;
+diff --git a/drivers/net/wireless/realtek/rtw88/phy.c b/drivers/net/wireless/realtek/rtw88/phy.c
+index 5cd9cc42648eb..36e2f0dba00c0 100644
+--- a/drivers/net/wireless/realtek/rtw88/phy.c
++++ b/drivers/net/wireless/realtek/rtw88/phy.c
+@@ -1518,7 +1518,7 @@ void rtw_phy_load_tables(struct rtw_dev *rtwdev)
+ }
+ EXPORT_SYMBOL(rtw_phy_load_tables);
+ 
+-static u8 rtw_get_channel_group(u8 channel)
++static u8 rtw_get_channel_group(u8 channel, u8 rate)
+ {
+ 	switch (channel) {
+ 	default:
+@@ -1562,6 +1562,7 @@ static u8 rtw_get_channel_group(u8 channel)
+ 	case 106:
+ 		return 4;
+ 	case 14:
++		return rate <= DESC_RATE11M ? 5 : 4;
+ 	case 108:
+ 	case 110:
+ 	case 112:
+@@ -1813,7 +1814,7 @@ void rtw_get_tx_power_params(struct rtw_dev *rtwdev, u8 path, u8 rate, u8 bw,
+ 	s8 *remnant = &pwr_param->pwr_remnant;
+ 
+ 	pwr_idx = &rtwdev->efuse.txpwr_idx_table[path];
+-	group = rtw_get_channel_group(ch);
++	group = rtw_get_channel_group(ch, rate);
+ 
+ 	/* base power index for 2.4G/5G */
+ 	if (IS_CH_2G_BAND(ch)) {
+diff --git a/drivers/net/wireless/ti/wlcore/boot.c b/drivers/net/wireless/ti/wlcore/boot.c
+index e14d88e558f04..85abd0a2d1c90 100644
+--- a/drivers/net/wireless/ti/wlcore/boot.c
++++ b/drivers/net/wireless/ti/wlcore/boot.c
+@@ -72,6 +72,7 @@ static int wlcore_validate_fw_ver(struct wl1271 *wl)
+ 	unsigned int *min_ver = (wl->fw_type == WL12XX_FW_TYPE_MULTI) ?
+ 		wl->min_mr_fw_ver : wl->min_sr_fw_ver;
+ 	char min_fw_str[32] = "";
++	int off = 0;
+ 	int i;
+ 
+ 	/* the chip must be exactly equal */
+@@ -105,13 +106,15 @@ static int wlcore_validate_fw_ver(struct wl1271 *wl)
+ 	return 0;
+ 
+ fail:
+-	for (i = 0; i < NUM_FW_VER; i++)
++	for (i = 0; i < NUM_FW_VER && off < sizeof(min_fw_str); i++)
+ 		if (min_ver[i] == WLCORE_FW_VER_IGNORE)
+-			snprintf(min_fw_str, sizeof(min_fw_str),
+-				  "%s*.", min_fw_str);
++			off += snprintf(min_fw_str + off,
++					sizeof(min_fw_str) - off,
++					"*.");
+ 		else
+-			snprintf(min_fw_str, sizeof(min_fw_str),
+-				  "%s%u.", min_fw_str, min_ver[i]);
++			off += snprintf(min_fw_str + off,
++					sizeof(min_fw_str) - off,
++					"%u.", min_ver[i]);
+ 
+ 	wl1271_error("Your WiFi FW version (%u.%u.%u.%u.%u) is invalid.\n"
+ 		     "Please use at least FW %s\n"
+diff --git a/drivers/net/wireless/ti/wlcore/debugfs.h b/drivers/net/wireless/ti/wlcore/debugfs.h
+index b143293e694f9..a9e13e6d65c50 100644
+--- a/drivers/net/wireless/ti/wlcore/debugfs.h
++++ b/drivers/net/wireless/ti/wlcore/debugfs.h
+@@ -78,13 +78,14 @@ static ssize_t sub## _ ##name## _read(struct file *file,		\
+ 	struct wl1271 *wl = file->private_data;				\
+ 	struct struct_type *stats = wl->stats.fw_stats;			\
+ 	char buf[DEBUGFS_FORMAT_BUFFER_SIZE] = "";			\
++	int pos = 0;							\
+ 	int i;								\
+ 									\
+ 	wl1271_debugfs_update_stats(wl);				\
+ 									\
+-	for (i = 0; i < len; i++)					\
+-		snprintf(buf, sizeof(buf), "%s[%d] = %d\n",		\
+-			 buf, i, stats->sub.name[i]);			\
++	for (i = 0; i < len && pos < sizeof(buf); i++)			\
++		pos += snprintf(buf + pos, sizeof(buf) - pos,		\
++			 "[%d] = %d\n", i, stats->sub.name[i]);		\
+ 									\
+ 	return wl1271_format_buffer(userbuf, count, ppos, "%s", buf);	\
+ }									\
+diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
+index f7464bd6d57cb..18e3435ab8f33 100644
+--- a/drivers/nfc/pn533/pn533.c
++++ b/drivers/nfc/pn533/pn533.c
+@@ -706,6 +706,9 @@ static bool pn533_target_type_a_is_valid(struct pn533_target_type_a *type_a,
+ 	if (PN533_TYPE_A_SEL_CASCADE(type_a->sel_res) != 0)
+ 		return false;
+ 
++	if (type_a->nfcid_len > NFC_NFCID1_MAXSIZE)
++		return false;
++
+ 	return true;
+ }
+ 
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index e812a0d0fdb3d..f750cf98ae264 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -667,6 +667,10 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id)
+ 		if (desc.state) {
+ 			/* found the group desc: update */
+ 			nvme_update_ns_ana_state(&desc, ns);
++		} else {
++			/* group desc not found: trigger a re-read */
++			set_bit(NVME_NS_ANA_PENDING, &ns->flags);
++			queue_work(nvme_wq, &ns->ctrl->ana_work);
+ 		}
+ 	} else {
+ 		ns->ana_state = NVME_ANA_OPTIMIZED; 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 716039ea4450e..c1f3446216c5c 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -852,7 +852,7 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
+ 				return nvme_setup_prp_simple(dev, req,
+ 							     &cmnd->rw, &bv);
+ 
+-			if (iod->nvmeq->qid &&
++			if (iod->nvmeq->qid && sgl_threshold &&
+ 			    dev->ctrl.sgls & ((1 << 0) | (1 << 1)))
+ 				return nvme_setup_sgl_simple(dev, req,
+ 							     &cmnd->rw, &bv);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 9444e5e2a95ba..4cf81f3841aee 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -874,7 +874,7 @@ static void nvme_tcp_state_change(struct sock *sk)
+ {
+ 	struct nvme_tcp_queue *queue;
+ 
+-	read_lock(&sk->sk_callback_lock);
++	read_lock_bh(&sk->sk_callback_lock);
+ 	queue = sk->sk_user_data;
+ 	if (!queue)
+ 		goto done;
+@@ -895,7 +895,7 @@ static void nvme_tcp_state_change(struct sock *sk)
+ 
+ 	queue->state_change(sk);
+ done:
+-	read_unlock(&sk->sk_callback_lock);
++	read_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+ static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index d658c6e8263af..d958b5da9b88a 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -525,11 +525,36 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
+ 	struct nvmet_tcp_cmd *cmd =
+ 		container_of(req, struct nvmet_tcp_cmd, req);
+ 	struct nvmet_tcp_queue	*queue = cmd->queue;
++	struct nvme_sgl_desc *sgl;
++	u32 len;
++
++	if (unlikely(cmd == queue->cmd)) {
++		sgl = &cmd->req.cmd->common.dptr.sgl;
++		len = le32_to_cpu(sgl->length);
++
++		/*
++		 * Wait for inline data before processing the response.
++		 * Avoid using helpers, this might happen before
++		 * nvmet_req_init is completed.
++		 */
++		if (queue->rcv_state == NVMET_TCP_RECV_PDU &&
++		    len && len < cmd->req.port->inline_data_size &&
++		    nvme_is_write(cmd->req.cmd))
++			return;
++	}
+ 
+ 	llist_add(&cmd->lentry, &queue->resp_list);
+ 	queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
+ }
+ 
++static void nvmet_tcp_execute_request(struct nvmet_tcp_cmd *cmd)
++{
++	if (unlikely(cmd->flags & NVMET_TCP_F_INIT_FAILED))
++		nvmet_tcp_queue_response(&cmd->req);
++	else
++		cmd->req.execute(&cmd->req);
++}
++
+ static int nvmet_try_send_data_pdu(struct nvmet_tcp_cmd *cmd)
+ {
+ 	u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue);
+@@ -961,7 +986,7 @@ static int nvmet_tcp_done_recv_pdu(struct nvmet_tcp_queue *queue)
+ 			le32_to_cpu(req->cmd->common.dptr.sgl.length));
+ 
+ 		nvmet_tcp_handle_req_failure(queue, queue->cmd, req);
+-		return -EAGAIN;
++		return 0;
+ 	}
+ 
+ 	ret = nvmet_tcp_map_data(queue->cmd);
+@@ -1104,10 +1129,8 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
+ 		return 0;
+ 	}
+ 
+-	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+-	    cmd->rbytes_done == cmd->req.transfer_len) {
+-		cmd->req.execute(&cmd->req);
+-	}
++	if (cmd->rbytes_done == cmd->req.transfer_len)
++		nvmet_tcp_execute_request(cmd);
+ 
+ 	nvmet_prepare_receive_pdu(queue);
+ 	return 0;
+@@ -1144,9 +1167,9 @@ static int nvmet_tcp_try_recv_ddgst(struct nvmet_tcp_queue *queue)
+ 		goto out;
+ 	}
+ 
+-	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+-	    cmd->rbytes_done == cmd->req.transfer_len)
+-		cmd->req.execute(&cmd->req);
++	if (cmd->rbytes_done == cmd->req.transfer_len)
++		nvmet_tcp_execute_request(cmd);
++
+ 	ret = 0;
+ out:
+ 	nvmet_prepare_receive_pdu(queue);
+@@ -1434,7 +1457,7 @@ static void nvmet_tcp_state_change(struct sock *sk)
+ {
+ 	struct nvmet_tcp_queue *queue;
+ 
+-	write_lock_bh(&sk->sk_callback_lock);
++	read_lock_bh(&sk->sk_callback_lock);
+ 	queue = sk->sk_user_data;
+ 	if (!queue)
+ 		goto done;
+@@ -1452,7 +1475,7 @@ static void nvmet_tcp_state_change(struct sock *sk)
+ 			queue->idx, sk->sk_state);
+ 	}
+ done:
+-	write_unlock_bh(&sk->sk_callback_lock);
++	read_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+ static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue)
+diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c
+index 5e9e60e2e591d..955b8b8c82386 100644
+--- a/drivers/nvmem/qfprom.c
++++ b/drivers/nvmem/qfprom.c
+@@ -104,6 +104,16 @@ static void qfprom_disable_fuse_blowing(const struct qfprom_priv *priv,
+ {
+ 	int ret;
+ 
++	/*
++	 * This may be a shared rail and may be able to run at a lower rate
++	 * when we're not blowing fuses.  At the moment, the regulator framework
++	 * applies voltage constraints even on disabled rails, so remove our
++	 * constraints and allow the rail to be adjusted by other users.
++	 */
++	ret = regulator_set_voltage(priv->vcc, 0, INT_MAX);
++	if (ret)
++		dev_warn(priv->dev, "Failed to set 0 voltage (ignoring)\n");
++
+ 	ret = regulator_disable(priv->vcc);
+ 	if (ret)
+ 		dev_warn(priv->dev, "Failed to disable regulator (ignoring)\n");
+@@ -149,6 +159,17 @@ static int qfprom_enable_fuse_blowing(const struct qfprom_priv *priv,
+ 		goto err_clk_prepared;
+ 	}
+ 
++	/*
++	 * Hardware requires 1.8V min for fuse blowing; this may be
++	 * a rail shared do don't specify a max--regulator constraints
++	 * will handle.
++	 */
++	ret = regulator_set_voltage(priv->vcc, 1800000, INT_MAX);
++	if (ret) {
++		dev_err(priv->dev, "Failed to set 1.8 voltage\n");
++		goto err_clk_rate_set;
++	}
++
+ 	ret = regulator_enable(priv->vcc);
+ 	if (ret) {
+ 		dev_err(priv->dev, "Failed to enable regulator\n");
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index 50bbe0edf5380..43a77d7200087 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -796,6 +796,7 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs,
+ 		if (!fragment->target) {
+ 			of_node_put(fragment->overlay);
+ 			ret = -EINVAL;
++			of_node_put(node);
+ 			goto err_free_fragments;
+ 		}
+ 
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index a222728238cae..90482d5246ff1 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -811,7 +811,8 @@ static int __init ks_pcie_host_init(struct pcie_port *pp)
+ 	int ret;
+ 
+ 	pp->bridge->ops = &ks_pcie_ops;
+-	pp->bridge->child_ops = &ks_child_pcie_ops;
++	if (!ks_pcie->is_am6)
++		pp->bridge->child_ops = &ks_child_pcie_ops;
+ 
+ 	ret = ks_pcie_config_legacy_irq(ks_pcie);
+ 	if (ret)
+diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c
+index 8e0db84f089da..c33b385ac918e 100644
+--- a/drivers/pci/controller/pci-xgene.c
++++ b/drivers/pci/controller/pci-xgene.c
+@@ -355,7 +355,8 @@ static int xgene_pcie_map_reg(struct xgene_pcie_port *port,
+ 	if (IS_ERR(port->csr_base))
+ 		return PTR_ERR(port->csr_base);
+ 
+-	port->cfg_base = devm_platform_ioremap_resource_byname(pdev, "cfg");
++	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
++	port->cfg_base = devm_ioremap_resource(dev, res);
+ 	if (IS_ERR(port->cfg_base))
+ 		return PTR_ERR(port->cfg_base);
+ 	port->cfg_addr = res->start;
+diff --git a/drivers/pci/vpd.c b/drivers/pci/vpd.c
+index 7915d10f9aa10..bd549070c0112 100644
+--- a/drivers/pci/vpd.c
++++ b/drivers/pci/vpd.c
+@@ -570,7 +570,6 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005d, quirk_blacklist_vpd);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID,
+ 		quirk_blacklist_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_QLOGIC, 0x2261, quirk_blacklist_vpd);
+ /*
+  * The Amazon Annapurna Labs 0x0031 device id is reused for other non Root Port
+  * device types, so the quirk is registered for the PCI_CLASS_BRIDGE_PCI class.
+diff --git a/drivers/phy/cadence/phy-cadence-sierra.c b/drivers/phy/cadence/phy-cadence-sierra.c
+index 453ef26fa1c7f..aaa0bbe473f76 100644
+--- a/drivers/phy/cadence/phy-cadence-sierra.c
++++ b/drivers/phy/cadence/phy-cadence-sierra.c
+@@ -319,6 +319,12 @@ static int cdns_sierra_phy_on(struct phy *gphy)
+ 	u32 val;
+ 	int ret;
+ 
++	ret = reset_control_deassert(sp->phy_rst);
++	if (ret) {
++		dev_err(dev, "Failed to take the PHY out of reset\n");
++		return ret;
++	}
++
+ 	/* Take the PHY lane group out of reset */
+ 	ret = reset_control_deassert(ins->lnk_rst);
+ 	if (ret) {
+@@ -618,7 +624,6 @@ static int cdns_sierra_phy_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(dev);
+ 	phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
+-	reset_control_deassert(sp->phy_rst);
+ 	return PTR_ERR_OR_ZERO(phy_provider);
+ 
+ put_child:
+diff --git a/drivers/phy/marvell/Kconfig b/drivers/phy/marvell/Kconfig
+index 8f6273c837ec3..8ab9031c98946 100644
+--- a/drivers/phy/marvell/Kconfig
++++ b/drivers/phy/marvell/Kconfig
+@@ -3,8 +3,8 @@
+ # Phy drivers for Marvell platforms
+ #
+ config ARMADA375_USBCLUSTER_PHY
+-	def_bool y
+-	depends on MACH_ARMADA_375 || COMPILE_TEST
++	bool "Armada 375 USB cluster PHY support" if COMPILE_TEST
++	default y if MACH_ARMADA_375
+ 	depends on OF && HAS_IOMEM
+ 	select GENERIC_PHY
+ 
+diff --git a/drivers/phy/ti/phy-j721e-wiz.c b/drivers/phy/ti/phy-j721e-wiz.c
+index c9cfafe89cbf1..e28e25f98708c 100644
+--- a/drivers/phy/ti/phy-j721e-wiz.c
++++ b/drivers/phy/ti/phy-j721e-wiz.c
+@@ -615,6 +615,12 @@ static void wiz_clock_cleanup(struct wiz *wiz, struct device_node *node)
+ 		of_clk_del_provider(clk_node);
+ 		of_node_put(clk_node);
+ 	}
++
++	for (i = 0; i < wiz->clk_div_sel_num; i++) {
++		clk_node = of_get_child_by_name(node, clk_div_sel[i].node_name);
++		of_clk_del_provider(clk_node);
++		of_node_put(clk_node);
++	}
+ }
+ 
+ static int wiz_clock_init(struct wiz *wiz, struct device_node *node)
+@@ -947,27 +953,24 @@ static int wiz_probe(struct platform_device *pdev)
+ 		goto err_get_sync;
+ 	}
+ 
++	ret = wiz_init(wiz);
++	if (ret) {
++		dev_err(dev, "WIZ initialization failed\n");
++		goto err_wiz_init;
++	}
++
+ 	serdes_pdev = of_platform_device_create(child_node, NULL, dev);
+ 	if (!serdes_pdev) {
+ 		dev_WARN(dev, "Unable to create SERDES platform device\n");
+ 		ret = -ENOMEM;
+-		goto err_pdev_create;
+-	}
+-	wiz->serdes_pdev = serdes_pdev;
+-
+-	ret = wiz_init(wiz);
+-	if (ret) {
+-		dev_err(dev, "WIZ initialization failed\n");
+ 		goto err_wiz_init;
+ 	}
++	wiz->serdes_pdev = serdes_pdev;
+ 
+ 	of_node_put(child_node);
+ 	return 0;
+ 
+ err_wiz_init:
+-	of_platform_device_destroy(&serdes_pdev->dev, NULL);
+-
+-err_pdev_create:
+ 	wiz_clock_cleanup(wiz, node);
+ 
+ err_get_sync:
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index f3cd7e2967126..12cc4eb186377 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -270,20 +270,44 @@ static void __maybe_unused pcs_writel(unsigned val, void __iomem *reg)
+ 	writel(val, reg);
+ }
+ 
++static unsigned int pcs_pin_reg_offset_get(struct pcs_device *pcs,
++					   unsigned int pin)
++{
++	unsigned int mux_bytes = pcs->width / BITS_PER_BYTE;
++
++	if (pcs->bits_per_mux) {
++		unsigned int pin_offset_bytes;
++
++		pin_offset_bytes = (pcs->bits_per_pin * pin) / BITS_PER_BYTE;
++		return (pin_offset_bytes / mux_bytes) * mux_bytes;
++	}
++
++	return pin * mux_bytes;
++}
++
++static unsigned int pcs_pin_shift_reg_get(struct pcs_device *pcs,
++					  unsigned int pin)
++{
++	return (pin % (pcs->width / pcs->bits_per_pin)) * pcs->bits_per_pin;
++}
++
+ static void pcs_pin_dbg_show(struct pinctrl_dev *pctldev,
+ 					struct seq_file *s,
+ 					unsigned pin)
+ {
+ 	struct pcs_device *pcs;
+-	unsigned val, mux_bytes;
++	unsigned int val;
+ 	unsigned long offset;
+ 	size_t pa;
+ 
+ 	pcs = pinctrl_dev_get_drvdata(pctldev);
+ 
+-	mux_bytes = pcs->width / BITS_PER_BYTE;
+-	offset = pin * mux_bytes;
++	offset = pcs_pin_reg_offset_get(pcs, pin);
+ 	val = pcs->read(pcs->base + offset);
++
++	if (pcs->bits_per_mux)
++		val &= pcs->fmask << pcs_pin_shift_reg_get(pcs, pin);
++
+ 	pa = pcs->res->start + offset;
+ 
+ 	seq_printf(s, "%zx %08x %s ", pa, val, DRIVER_NAME);
+@@ -384,7 +408,6 @@ static int pcs_request_gpio(struct pinctrl_dev *pctldev,
+ 	struct pcs_device *pcs = pinctrl_dev_get_drvdata(pctldev);
+ 	struct pcs_gpiofunc_range *frange = NULL;
+ 	struct list_head *pos, *tmp;
+-	int mux_bytes = 0;
+ 	unsigned data;
+ 
+ 	/* If function mask is null, return directly. */
+@@ -392,29 +415,27 @@ static int pcs_request_gpio(struct pinctrl_dev *pctldev,
+ 		return -ENOTSUPP;
+ 
+ 	list_for_each_safe(pos, tmp, &pcs->gpiofuncs) {
++		u32 offset;
++
+ 		frange = list_entry(pos, struct pcs_gpiofunc_range, node);
+ 		if (pin >= frange->offset + frange->npins
+ 			|| pin < frange->offset)
+ 			continue;
+-		mux_bytes = pcs->width / BITS_PER_BYTE;
+ 
+-		if (pcs->bits_per_mux) {
+-			int byte_num, offset, pin_shift;
++		offset = pcs_pin_reg_offset_get(pcs, pin);
+ 
+-			byte_num = (pcs->bits_per_pin * pin) / BITS_PER_BYTE;
+-			offset = (byte_num / mux_bytes) * mux_bytes;
+-			pin_shift = pin % (pcs->width / pcs->bits_per_pin) *
+-				    pcs->bits_per_pin;
++		if (pcs->bits_per_mux) {
++			int pin_shift = pcs_pin_shift_reg_get(pcs, pin);
+ 
+ 			data = pcs->read(pcs->base + offset);
+ 			data &= ~(pcs->fmask << pin_shift);
+ 			data |= frange->gpiofunc << pin_shift;
+ 			pcs->write(data, pcs->base + offset);
+ 		} else {
+-			data = pcs->read(pcs->base + pin * mux_bytes);
++			data = pcs->read(pcs->base + offset);
+ 			data &= ~pcs->fmask;
+ 			data |= frange->gpiofunc;
+-			pcs->write(data, pcs->base + pin * mux_bytes);
++			pcs->write(data, pcs->base + offset);
+ 		}
+ 		break;
+ 	}
+@@ -656,10 +677,8 @@ static const struct pinconf_ops pcs_pinconf_ops = {
+  * pcs_add_pin() - add a pin to the static per controller pin array
+  * @pcs: pcs driver instance
+  * @offset: register offset from base
+- * @pin_pos: unused
+  */
+-static int pcs_add_pin(struct pcs_device *pcs, unsigned offset,
+-		unsigned pin_pos)
++static int pcs_add_pin(struct pcs_device *pcs, unsigned int offset)
+ {
+ 	struct pcs_soc_data *pcs_soc = &pcs->socdata;
+ 	struct pinctrl_pin_desc *pin;
+@@ -728,17 +747,9 @@ static int pcs_allocate_pin_table(struct pcs_device *pcs)
+ 	for (i = 0; i < pcs->desc.npins; i++) {
+ 		unsigned offset;
+ 		int res;
+-		int byte_num;
+-		int pin_pos = 0;
+ 
+-		if (pcs->bits_per_mux) {
+-			byte_num = (pcs->bits_per_pin * i) / BITS_PER_BYTE;
+-			offset = (byte_num / mux_bytes) * mux_bytes;
+-			pin_pos = i % num_pins_in_register;
+-		} else {
+-			offset = i * mux_bytes;
+-		}
+-		res = pcs_add_pin(pcs, offset, pin_pos);
++		offset = pcs_pin_reg_offset_get(pcs, i);
++		res = pcs_add_pin(pcs, offset);
+ 		if (res < 0) {
+ 			dev_err(pcs->dev, "error adding pins: %i\n", res);
+ 			return res;
+diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
+index ca684ed760d14..a9d2a4b98e570 100644
+--- a/drivers/platform/x86/pmc_atom.c
++++ b/drivers/platform/x86/pmc_atom.c
+@@ -393,34 +393,10 @@ static const struct dmi_system_id critclk_systems[] = {
+ 	},
+ 	{
+ 		/* pmc_plt_clk* - are used for ethernet controllers */
+-		.ident = "Beckhoff CB3163",
++		.ident = "Beckhoff Baytrail",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
+-			DMI_MATCH(DMI_BOARD_NAME, "CB3163"),
+-		},
+-	},
+-	{
+-		/* pmc_plt_clk* - are used for ethernet controllers */
+-		.ident = "Beckhoff CB4063",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
+-			DMI_MATCH(DMI_BOARD_NAME, "CB4063"),
+-		},
+-	},
+-	{
+-		/* pmc_plt_clk* - are used for ethernet controllers */
+-		.ident = "Beckhoff CB6263",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
+-			DMI_MATCH(DMI_BOARD_NAME, "CB6263"),
+-		},
+-	},
+-	{
+-		/* pmc_plt_clk* - are used for ethernet controllers */
+-		.ident = "Beckhoff CB6363",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
+-			DMI_MATCH(DMI_BOARD_NAME, "CB6363"),
++			DMI_MATCH(DMI_PRODUCT_FAMILY, "CBxx63"),
+ 		},
+ 	},
+ 	{
+diff --git a/drivers/power/supply/bq25980_charger.c b/drivers/power/supply/bq25980_charger.c
+index c936f311eb4f0..b94ecf814e434 100644
+--- a/drivers/power/supply/bq25980_charger.c
++++ b/drivers/power/supply/bq25980_charger.c
+@@ -606,33 +606,6 @@ static int bq25980_get_state(struct bq25980_device *bq,
+ 	return 0;
+ }
+ 
+-static int bq25980_set_battery_property(struct power_supply *psy,
+-				enum power_supply_property psp,
+-				const union power_supply_propval *val)
+-{
+-	struct bq25980_device *bq = power_supply_get_drvdata(psy);
+-	int ret = 0;
+-
+-	switch (psp) {
+-	case POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT:
+-		ret = bq25980_set_const_charge_curr(bq, val->intval);
+-		if (ret)
+-			return ret;
+-		break;
+-
+-	case POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE:
+-		ret = bq25980_set_const_charge_volt(bq, val->intval);
+-		if (ret)
+-			return ret;
+-		break;
+-
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	return ret;
+-}
+-
+ static int bq25980_get_battery_property(struct power_supply *psy,
+ 				enum power_supply_property psp,
+ 				union power_supply_propval *val)
+@@ -701,6 +674,18 @@ static int bq25980_set_charger_property(struct power_supply *psy,
+ 			return ret;
+ 		break;
+ 
++	case POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT:
++		ret = bq25980_set_const_charge_curr(bq, val->intval);
++		if (ret)
++			return ret;
++		break;
++
++	case POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE:
++		ret = bq25980_set_const_charge_volt(bq, val->intval);
++		if (ret)
++			return ret;
++		break;
++
+ 	default:
+ 		return -EINVAL;
+ 	}
+@@ -922,7 +907,6 @@ static struct power_supply_desc bq25980_battery_desc = {
+ 	.name			= "bq25980-battery",
+ 	.type			= POWER_SUPPLY_TYPE_BATTERY,
+ 	.get_property		= bq25980_get_battery_property,
+-	.set_property		= bq25980_set_battery_property,
+ 	.properties		= bq25980_battery_props,
+ 	.num_properties		= ARRAY_SIZE(bq25980_battery_props),
+ 	.property_is_writeable	= bq25980_property_is_writeable,
+diff --git a/drivers/regulator/bd9576-regulator.c b/drivers/regulator/bd9576-regulator.c
+index a8b5832a5a1bb..204a2da054f53 100644
+--- a/drivers/regulator/bd9576-regulator.c
++++ b/drivers/regulator/bd9576-regulator.c
+@@ -206,7 +206,7 @@ static int bd957x_probe(struct platform_device *pdev)
+ {
+ 	struct regmap *regmap;
+ 	struct regulator_config config = { 0 };
+-	int i, err;
++	int i;
+ 	bool vout_mode, ddr_sel;
+ 	const struct bd957x_regulator_data *reg_data = &bd9576_regulators[0];
+ 	unsigned int num_reg_data = ARRAY_SIZE(bd9576_regulators);
+@@ -279,8 +279,7 @@ static int bd957x_probe(struct platform_device *pdev)
+ 		break;
+ 	default:
+ 		dev_err(&pdev->dev, "Unsupported chip type\n");
+-		err = -EINVAL;
+-		goto err;
++		return -EINVAL;
+ 	}
+ 
+ 	config.dev = pdev->dev.parent;
+@@ -300,8 +299,7 @@ static int bd957x_probe(struct platform_device *pdev)
+ 			dev_err(&pdev->dev,
+ 				"failed to register %s regulator\n",
+ 				desc->name);
+-			err = PTR_ERR(rdev);
+-			goto err;
++			return PTR_ERR(rdev);
+ 		}
+ 		/*
+ 		 * Clear the VOUT1 GPIO setting - rest of the regulators do not
+@@ -310,8 +308,7 @@ static int bd957x_probe(struct platform_device *pdev)
+ 		config.ena_gpiod = NULL;
+ 	}
+ 
+-err:
+-	return err;
++	return 0;
+ }
+ 
+ static const struct platform_device_id bd957x_pmic_id[] = {
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+index 22eecc89d41bd..6c2a97f80b120 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+@@ -1644,7 +1644,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 		idx = i * HISI_SAS_PHY_INT_NR;
+ 		for (j = 0; j < HISI_SAS_PHY_INT_NR; j++, idx++) {
+ 			irq = platform_get_irq(pdev, idx);
+-			if (!irq) {
++			if (irq < 0) {
+ 				dev_err(dev, "irq init: fail map phy interrupt %d\n",
+ 					idx);
+ 				return -ENOENT;
+@@ -1663,7 +1663,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 	idx = hisi_hba->n_phy * HISI_SAS_PHY_INT_NR;
+ 	for (i = 0; i < hisi_hba->queue_count; i++, idx++) {
+ 		irq = platform_get_irq(pdev, idx);
+-		if (!irq) {
++		if (irq < 0) {
+ 			dev_err(dev, "irq init: could not map cq interrupt %d\n",
+ 				idx);
+ 			return -ENOENT;
+@@ -1681,7 +1681,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 	idx = (hisi_hba->n_phy * HISI_SAS_PHY_INT_NR) + hisi_hba->queue_count;
+ 	for (i = 0; i < HISI_SAS_FATAL_INT_NR; i++, idx++) {
+ 		irq = platform_get_irq(pdev, idx);
+-		if (!irq) {
++		if (irq < 0) {
+ 			dev_err(dev, "irq init: could not map fatal interrupt %d\n",
+ 				idx);
+ 			return -ENOENT;
+diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
+index 57c9a71fa33a7..f6d6539c657f0 100644
+--- a/drivers/scsi/ibmvscsi/ibmvfc.c
++++ b/drivers/scsi/ibmvscsi/ibmvfc.c
+@@ -532,8 +532,17 @@ static void ibmvfc_set_host_action(struct ibmvfc_host *vhost,
+ 		if (vhost->action == IBMVFC_HOST_ACTION_ALLOC_TGTS)
+ 			vhost->action = action;
+ 		break;
++	case IBMVFC_HOST_ACTION_REENABLE:
++	case IBMVFC_HOST_ACTION_RESET:
++		vhost->action = action;
++		break;
+ 	case IBMVFC_HOST_ACTION_INIT:
+ 	case IBMVFC_HOST_ACTION_TGT_DEL:
++	case IBMVFC_HOST_ACTION_LOGO:
++	case IBMVFC_HOST_ACTION_QUERY_TGTS:
++	case IBMVFC_HOST_ACTION_TGT_DEL_FAILED:
++	case IBMVFC_HOST_ACTION_NONE:
++	default:
+ 		switch (vhost->action) {
+ 		case IBMVFC_HOST_ACTION_RESET:
+ 		case IBMVFC_HOST_ACTION_REENABLE:
+@@ -543,15 +552,6 @@ static void ibmvfc_set_host_action(struct ibmvfc_host *vhost,
+ 			break;
+ 		}
+ 		break;
+-	case IBMVFC_HOST_ACTION_LOGO:
+-	case IBMVFC_HOST_ACTION_QUERY_TGTS:
+-	case IBMVFC_HOST_ACTION_TGT_DEL_FAILED:
+-	case IBMVFC_HOST_ACTION_NONE:
+-	case IBMVFC_HOST_ACTION_RESET:
+-	case IBMVFC_HOST_ACTION_REENABLE:
+-	default:
+-		vhost->action = action;
+-		break;
+ 	}
+ }
+ 
+@@ -4658,26 +4658,45 @@ static void ibmvfc_do_work(struct ibmvfc_host *vhost)
+ 	case IBMVFC_HOST_ACTION_INIT_WAIT:
+ 		break;
+ 	case IBMVFC_HOST_ACTION_RESET:
+-		vhost->action = IBMVFC_HOST_ACTION_TGT_DEL;
+ 		spin_unlock_irqrestore(vhost->host->host_lock, flags);
+ 		rc = ibmvfc_reset_crq(vhost);
++
+ 		spin_lock_irqsave(vhost->host->host_lock, flags);
+-		if (rc == H_CLOSED)
++		if (!rc || rc == H_CLOSED)
+ 			vio_enable_interrupts(to_vio_dev(vhost->dev));
+-		if (rc || (rc = ibmvfc_send_crq_init(vhost)) ||
+-		    (rc = vio_enable_interrupts(to_vio_dev(vhost->dev)))) {
+-			ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
+-			dev_err(vhost->dev, "Error after reset (rc=%d)\n", rc);
++		if (vhost->action == IBMVFC_HOST_ACTION_RESET) {
++			/*
++			 * The only action we could have changed to would have
++			 * been reenable, in which case, we skip the rest of
++			 * this path and wait until we've done the re-enable
++			 * before sending the crq init.
++			 */
++			vhost->action = IBMVFC_HOST_ACTION_TGT_DEL;
++
++			if (rc || (rc = ibmvfc_send_crq_init(vhost)) ||
++			    (rc = vio_enable_interrupts(to_vio_dev(vhost->dev)))) {
++				ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
++				dev_err(vhost->dev, "Error after reset (rc=%d)\n", rc);
++			}
+ 		}
+ 		break;
+ 	case IBMVFC_HOST_ACTION_REENABLE:
+-		vhost->action = IBMVFC_HOST_ACTION_TGT_DEL;
+ 		spin_unlock_irqrestore(vhost->host->host_lock, flags);
+ 		rc = ibmvfc_reenable_crq_queue(vhost);
++
+ 		spin_lock_irqsave(vhost->host->host_lock, flags);
+-		if (rc || (rc = ibmvfc_send_crq_init(vhost))) {
+-			ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
+-			dev_err(vhost->dev, "Error after enable (rc=%d)\n", rc);
++		if (vhost->action == IBMVFC_HOST_ACTION_REENABLE) {
++			/*
++			 * The only action we could have changed to would have
++			 * been reset, in which case, we skip the rest of this
++			 * path and wait until we've done the reset before
++			 * sending the crq init.
++			 */
++			vhost->action = IBMVFC_HOST_ACTION_TGT_DEL;
++			if (rc || (rc = ibmvfc_send_crq_init(vhost))) {
++				ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
++				dev_err(vhost->dev, "Error after enable (rc=%d)\n", rc);
++			}
+ 		}
+ 		break;
+ 	case IBMVFC_HOST_ACTION_LOGO:
+diff --git a/drivers/scsi/jazz_esp.c b/drivers/scsi/jazz_esp.c
+index f0ed6863cc700..60a88a95a8e23 100644
+--- a/drivers/scsi/jazz_esp.c
++++ b/drivers/scsi/jazz_esp.c
+@@ -143,7 +143,9 @@ static int esp_jazz_probe(struct platform_device *dev)
+ 	if (!esp->command_block)
+ 		goto fail_unmap_regs;
+ 
+-	host->irq = platform_get_irq(dev, 0);
++	host->irq = err = platform_get_irq(dev, 0);
++	if (err < 0)
++		goto fail_unmap_command_block;
+ 	err = request_irq(host->irq, scsi_esp_intr, IRQF_SHARED, "ESP", esp);
+ 	if (err < 0)
+ 		goto fail_unmap_command_block;
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index c4705269e39fa..355d1c5f2194c 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -643,7 +643,7 @@ static void init_pci_device_addresses(struct pm8001_hba_info *pm8001_ha)
+  */
+ static int pm8001_chip_init(struct pm8001_hba_info *pm8001_ha)
+ {
+-	u8 i = 0;
++	u32 i = 0;
+ 	u16 deviceid;
+ 	pci_read_config_word(pm8001_ha->pdev, PCI_DEVICE_ID, &deviceid);
+ 	/* 8081 controllers need BAR shift to access MPI space
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 055f7649676ec..27b354860a16e 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -1488,9 +1488,9 @@ static int mpi_uninit_check(struct pm8001_hba_info *pm8001_ha)
+ 
+ 	/* wait until Inbound DoorBell Clear Register toggled */
+ 	if (IS_SPCV_12G(pm8001_ha->pdev)) {
+-		max_wait_count = 4 * 1000 * 1000;/* 4 sec */
++		max_wait_count = 30 * 1000 * 1000; /* 30 sec */
+ 	} else {
+-		max_wait_count = 2 * 1000 * 1000;/* 2 sec */
++		max_wait_count = 15 * 1000 * 1000; /* 15 sec */
+ 	}
+ 	do {
+ 		udelay(1);
+diff --git a/drivers/scsi/sni_53c710.c b/drivers/scsi/sni_53c710.c
+index 9e2e196bc2026..97c6f81b1d2a6 100644
+--- a/drivers/scsi/sni_53c710.c
++++ b/drivers/scsi/sni_53c710.c
+@@ -58,6 +58,7 @@ static int snirm710_probe(struct platform_device *dev)
+ 	struct NCR_700_Host_Parameters *hostdata;
+ 	struct Scsi_Host *host;
+ 	struct  resource *res;
++	int rc;
+ 
+ 	res = platform_get_resource(dev, IORESOURCE_MEM, 0);
+ 	if (!res)
+@@ -83,7 +84,9 @@ static int snirm710_probe(struct platform_device *dev)
+ 		goto out_kfree;
+ 	host->this_id = 7;
+ 	host->base = base;
+-	host->irq = platform_get_irq(dev, 0);
++	host->irq = rc = platform_get_irq(dev, 0);
++	if (rc < 0)
++		goto out_put_host;
+ 	if(request_irq(host->irq, NCR_700_intr, IRQF_SHARED, "snirm710", host)) {
+ 		printk(KERN_ERR "snirm710: request_irq failed!\n");
+ 		goto out_put_host;
+diff --git a/drivers/scsi/sun3x_esp.c b/drivers/scsi/sun3x_esp.c
+index 7de82f2c97579..d3489ac7ab28b 100644
+--- a/drivers/scsi/sun3x_esp.c
++++ b/drivers/scsi/sun3x_esp.c
+@@ -206,7 +206,9 @@ static int esp_sun3x_probe(struct platform_device *dev)
+ 	if (!esp->command_block)
+ 		goto fail_unmap_regs_dma;
+ 
+-	host->irq = platform_get_irq(dev, 0);
++	host->irq = err = platform_get_irq(dev, 0);
++	if (err < 0)
++		goto fail_unmap_command_block;
+ 	err = request_irq(host->irq, scsi_esp_intr, IRQF_SHARED,
+ 			  "SUN3X ESP", esp);
+ 	if (err < 0)
+diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
+index 3db0af66c71c0..24927cf485b47 100644
+--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
++++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
+@@ -377,7 +377,7 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0) {
+-		err = -ENODEV;
++		err = irq;
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/soc/aspeed/aspeed-lpc-snoop.c b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+index dbe5325a324d5..538d7aab8db5c 100644
+--- a/drivers/soc/aspeed/aspeed-lpc-snoop.c
++++ b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+@@ -95,8 +95,10 @@ static ssize_t snoop_file_read(struct file *file, char __user *buffer,
+ 			return -EINTR;
+ 	}
+ 	ret = kfifo_to_user(&chan->fifo, buffer, count, &copied);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static __poll_t snoop_file_poll(struct file *file,
+diff --git a/drivers/soc/qcom/mdt_loader.c b/drivers/soc/qcom/mdt_loader.c
+index 24cd193dec550..eba7f76f9d61a 100644
+--- a/drivers/soc/qcom/mdt_loader.c
++++ b/drivers/soc/qcom/mdt_loader.c
+@@ -230,6 +230,14 @@ static int __qcom_mdt_load(struct device *dev, const struct firmware *fw,
+ 			break;
+ 		}
+ 
++		if (phdr->p_filesz > phdr->p_memsz) {
++			dev_err(dev,
++				"refusing to load segment %d with p_filesz > p_memsz\n",
++				i);
++			ret = -EINVAL;
++			break;
++		}
++
+ 		ptr = mem_region + offset;
+ 
+ 		if (phdr->p_filesz && phdr->p_offset < fw->size) {
+@@ -253,6 +261,15 @@ static int __qcom_mdt_load(struct device *dev, const struct firmware *fw,
+ 				break;
+ 			}
+ 
++			if (seg_fw->size != phdr->p_filesz) {
++				dev_err(dev,
++					"failed to load segment %d from truncated file %s\n",
++					i, fw_name);
++				release_firmware(seg_fw);
++				ret = -EINVAL;
++				break;
++			}
++
+ 			release_firmware(seg_fw);
+ 		}
+ 
+diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
+index f63135c09667f..205cc96823b70 100644
+--- a/drivers/soc/qcom/pdr_interface.c
++++ b/drivers/soc/qcom/pdr_interface.c
+@@ -153,7 +153,7 @@ static int pdr_register_listener(struct pdr_handle *pdr,
+ 	if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+ 		pr_err("PDR: %s register listener failed: 0x%x\n",
+ 		       pds->service_path, resp.resp.error);
+-		return ret;
++		return -EREMOTEIO;
+ 	}
+ 
+ 	pds->state = resp.curr_state;
+diff --git a/drivers/soc/tegra/regulators-tegra30.c b/drivers/soc/tegra/regulators-tegra30.c
+index 7f21f31de09d6..0e776b20f6252 100644
+--- a/drivers/soc/tegra/regulators-tegra30.c
++++ b/drivers/soc/tegra/regulators-tegra30.c
+@@ -178,7 +178,7 @@ static int tegra30_voltage_update(struct tegra_regulator_coupler *tegra,
+ 	 * survive the voltage drop if it's running on a higher frequency.
+ 	 */
+ 	if (!cpu_min_uV_consumers)
+-		cpu_min_uV = cpu_uV;
++		cpu_min_uV = max(cpu_uV, cpu_min_uV);
+ 
+ 	/*
+ 	 * Bootloader shall set up voltages correctly, but if it
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index 1fe786855095a..3317a02bcc170 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -703,7 +703,7 @@ static int sdw_program_device_num(struct sdw_bus *bus)
+ 	struct sdw_slave *slave, *_s;
+ 	struct sdw_slave_id id;
+ 	struct sdw_msg msg;
+-	bool found = false;
++	bool found;
+ 	int count = 0, ret;
+ 	u64 addr;
+ 
+@@ -735,6 +735,7 @@ static int sdw_program_device_num(struct sdw_bus *bus)
+ 
+ 		sdw_extract_slave_id(bus, addr, &id);
+ 
++		found = false;
+ 		/* Now compare with entries */
+ 		list_for_each_entry_safe(slave, _s, &bus->slaves, node) {
+ 			if (sdw_compare_devid(slave, id) == 0) {
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index 1099b5d1262be..a418c3c7001c0 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -1375,8 +1375,16 @@ int sdw_stream_add_slave(struct sdw_slave *slave,
+ 	}
+ 
+ 	ret = sdw_config_stream(&slave->dev, stream, stream_config, true);
+-	if (ret)
++	if (ret) {
++		/*
++		 * sdw_release_master_stream will release s_rt in slave_rt_list in
++		 * stream_error case, but s_rt is only added to slave_rt_list
++		 * when sdw_config_stream is successful, so free s_rt explicitly
++		 * when sdw_config_stream is failed.
++		 */
++		kfree(s_rt);
+ 		goto stream_error;
++	}
+ 
+ 	list_add_tail(&s_rt->m_rt_node, &m_rt->slave_rt_list);
+ 
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index a2886ee44e4cb..5d98611dd999d 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -200,7 +200,7 @@ static int lpspi_prepare_xfer_hardware(struct spi_controller *controller)
+ 				spi_controller_get_devdata(controller);
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(fsl_lpspi->dev);
++	ret = pm_runtime_resume_and_get(fsl_lpspi->dev);
+ 	if (ret < 0) {
+ 		dev_err(fsl_lpspi->dev, "failed to enable clock\n");
+ 		return ret;
+diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
+index e4a8d203f9408..d0e5aa18b7bad 100644
+--- a/drivers/spi/spi-fsl-spi.c
++++ b/drivers/spi/spi-fsl-spi.c
+@@ -707,6 +707,11 @@ static int of_fsl_spi_probe(struct platform_device *ofdev)
+ 	struct resource mem;
+ 	int irq, type;
+ 	int ret;
++	bool spisel_boot = false;
++#if IS_ENABLED(CONFIG_FSL_SOC)
++	struct mpc8xxx_spi_probe_info *pinfo = NULL;
++#endif
++
+ 
+ 	ret = of_mpc8xxx_spi_probe(ofdev);
+ 	if (ret)
+@@ -715,9 +720,8 @@ static int of_fsl_spi_probe(struct platform_device *ofdev)
+ 	type = fsl_spi_get_type(&ofdev->dev);
+ 	if (type == TYPE_FSL) {
+ 		struct fsl_spi_platform_data *pdata = dev_get_platdata(dev);
+-		bool spisel_boot = false;
+ #if IS_ENABLED(CONFIG_FSL_SOC)
+-		struct mpc8xxx_spi_probe_info *pinfo = to_of_pinfo(pdata);
++		pinfo = to_of_pinfo(pdata);
+ 
+ 		spisel_boot = of_property_read_bool(np, "fsl,spisel_boot");
+ 		if (spisel_boot) {
+@@ -746,15 +750,24 @@ static int of_fsl_spi_probe(struct platform_device *ofdev)
+ 
+ 	ret = of_address_to_resource(np, 0, &mem);
+ 	if (ret)
+-		return ret;
++		goto unmap_out;
+ 
+ 	irq = platform_get_irq(ofdev, 0);
+-	if (irq < 0)
+-		return irq;
++	if (irq < 0) {
++		ret = irq;
++		goto unmap_out;
++	}
+ 
+ 	master = fsl_spi_probe(dev, &mem, irq);
+ 
+ 	return PTR_ERR_OR_ZERO(master);
++
++unmap_out:
++#if IS_ENABLED(CONFIG_FSL_SOC)
++	if (spisel_boot)
++		iounmap(pinfo->immr_spi_cs);
++#endif
++	return ret;
+ }
+ 
+ static int of_fsl_spi_remove(struct platform_device *ofdev)
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index 75a8a9428ff89..0aab37cd64e74 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -474,7 +474,7 @@ static int rockchip_spi_prepare_dma(struct rockchip_spi *rs,
+ 	return 1;
+ }
+ 
+-static void rockchip_spi_config(struct rockchip_spi *rs,
++static int rockchip_spi_config(struct rockchip_spi *rs,
+ 		struct spi_device *spi, struct spi_transfer *xfer,
+ 		bool use_dma, bool slave_mode)
+ {
+@@ -519,7 +519,9 @@ static void rockchip_spi_config(struct rockchip_spi *rs,
+ 		 * ctlr->bits_per_word_mask, so this shouldn't
+ 		 * happen
+ 		 */
+-		unreachable();
++		dev_err(rs->dev, "unknown bits per word: %d\n",
++			xfer->bits_per_word);
++		return -EINVAL;
+ 	}
+ 
+ 	if (use_dma) {
+@@ -552,6 +554,8 @@ static void rockchip_spi_config(struct rockchip_spi *rs,
+ 	 */
+ 	writel_relaxed(2 * DIV_ROUND_UP(rs->freq, 2 * xfer->speed_hz),
+ 			rs->regs + ROCKCHIP_SPI_BAUDR);
++
++	return 0;
+ }
+ 
+ static size_t rockchip_spi_max_transfer_size(struct spi_device *spi)
+@@ -575,6 +579,7 @@ static int rockchip_spi_transfer_one(
+ 		struct spi_transfer *xfer)
+ {
+ 	struct rockchip_spi *rs = spi_controller_get_devdata(ctlr);
++	int ret;
+ 	bool use_dma;
+ 
+ 	WARN_ON(readl_relaxed(rs->regs + ROCKCHIP_SPI_SSIENR) &&
+@@ -594,7 +599,9 @@ static int rockchip_spi_transfer_one(
+ 
+ 	use_dma = ctlr->can_dma ? ctlr->can_dma(ctlr, spi, xfer) : false;
+ 
+-	rockchip_spi_config(rs, spi, xfer, use_dma, ctlr->slave);
++	ret = rockchip_spi_config(rs, spi, xfer, use_dma, ctlr->slave);
++	if (ret)
++		return ret;
+ 
+ 	if (use_dma)
+ 		return rockchip_spi_prepare_dma(rs, ctlr, xfer);
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 53c4311cc6ab5..0318f02d62123 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -1830,7 +1830,7 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	int ret;
+ 
+-	master = spi_alloc_master(&pdev->dev, sizeof(struct stm32_spi));
++	master = devm_spi_alloc_master(&pdev->dev, sizeof(struct stm32_spi));
+ 	if (!master) {
+ 		dev_err(&pdev->dev, "spi master allocation failed\n");
+ 		return -ENOMEM;
+@@ -1848,18 +1848,16 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	spi->base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(spi->base)) {
+-		ret = PTR_ERR(spi->base);
+-		goto err_master_put;
+-	}
++	if (IS_ERR(spi->base))
++		return PTR_ERR(spi->base);
+ 
+ 	spi->phys_addr = (dma_addr_t)res->start;
+ 
+ 	spi->irq = platform_get_irq(pdev, 0);
+-	if (spi->irq <= 0) {
+-		ret = dev_err_probe(&pdev->dev, spi->irq, "failed to get irq\n");
+-		goto err_master_put;
+-	}
++	if (spi->irq <= 0)
++		return dev_err_probe(&pdev->dev, spi->irq,
++				     "failed to get irq\n");
++
+ 	ret = devm_request_threaded_irq(&pdev->dev, spi->irq,
+ 					spi->cfg->irq_handler_event,
+ 					spi->cfg->irq_handler_thread,
+@@ -1867,20 +1865,20 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "irq%d request failed: %d\n", spi->irq,
+ 			ret);
+-		goto err_master_put;
++		return ret;
+ 	}
+ 
+ 	spi->clk = devm_clk_get(&pdev->dev, NULL);
+ 	if (IS_ERR(spi->clk)) {
+ 		ret = PTR_ERR(spi->clk);
+ 		dev_err(&pdev->dev, "clk get failed: %d\n", ret);
+-		goto err_master_put;
++		return ret;
+ 	}
+ 
+ 	ret = clk_prepare_enable(spi->clk);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "clk enable failed: %d\n", ret);
+-		goto err_master_put;
++		return ret;
+ 	}
+ 	spi->clk_rate = clk_get_rate(spi->clk);
+ 	if (!spi->clk_rate) {
+@@ -1950,7 +1948,7 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
+ 
+-	ret = devm_spi_register_master(&pdev->dev, master);
++	ret = spi_register_master(master);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "spi master registration failed: %d\n",
+ 			ret);
+@@ -1976,8 +1974,6 @@ err_dma_release:
+ 		dma_release_channel(spi->dma_rx);
+ err_clk_disable:
+ 	clk_disable_unprepare(spi->clk);
+-err_master_put:
+-	spi_master_put(master);
+ 
+ 	return ret;
+ }
+@@ -1987,6 +1983,7 @@ static int stm32_spi_remove(struct platform_device *pdev)
+ 	struct spi_master *master = platform_get_drvdata(pdev);
+ 	struct stm32_spi *spi = spi_master_get_devdata(master);
+ 
++	spi_unregister_master(master);
+ 	spi->cfg->disable(spi);
+ 
+ 	if (master->dma_tx)
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index c8fa6ee18ae77..1dd2af9cc2374 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -157,6 +157,7 @@ enum mode_type {GQSPI_MODE_IO, GQSPI_MODE_DMA};
+  * @data_completion:	completion structure
+  */
+ struct zynqmp_qspi {
++	struct spi_controller *ctlr;
+ 	void __iomem *regs;
+ 	struct clk *refclk;
+ 	struct clk *pclk;
+@@ -173,6 +174,7 @@ struct zynqmp_qspi {
+ 	u32 genfifoentry;
+ 	enum mode_type mode;
+ 	struct completion data_completion;
++	struct mutex op_lock;
+ };
+ 
+ /**
+@@ -486,24 +488,10 @@ static int zynqmp_qspi_setup_op(struct spi_device *qspi)
+ {
+ 	struct spi_controller *ctlr = qspi->master;
+ 	struct zynqmp_qspi *xqspi = spi_controller_get_devdata(ctlr);
+-	struct device *dev = &ctlr->dev;
+-	int ret;
+ 
+ 	if (ctlr->busy)
+ 		return -EBUSY;
+ 
+-	ret = clk_enable(xqspi->refclk);
+-	if (ret) {
+-		dev_err(dev, "Cannot enable device clock.\n");
+-		return ret;
+-	}
+-
+-	ret = clk_enable(xqspi->pclk);
+-	if (ret) {
+-		dev_err(dev, "Cannot enable APB clock.\n");
+-		clk_disable(xqspi->refclk);
+-		return ret;
+-	}
+ 	zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, GQSPI_EN_MASK);
+ 
+ 	return 0;
+@@ -520,7 +508,7 @@ static void zynqmp_qspi_filltxfifo(struct zynqmp_qspi *xqspi, int size)
+ {
+ 	u32 count = 0, intermediate;
+ 
+-	while ((xqspi->bytes_to_transfer > 0) && (count < size)) {
++	while ((xqspi->bytes_to_transfer > 0) && (count < size) && (xqspi->txbuf)) {
+ 		memcpy(&intermediate, xqspi->txbuf, 4);
+ 		zynqmp_gqspi_write(xqspi, GQSPI_TXD_OFST, intermediate);
+ 
+@@ -579,7 +567,7 @@ static void zynqmp_qspi_fillgenfifo(struct zynqmp_qspi *xqspi, u8 nbits,
+ 		genfifoentry |= GQSPI_GENFIFO_DATA_XFER;
+ 		genfifoentry |= GQSPI_GENFIFO_TX;
+ 		transfer_len = xqspi->bytes_to_transfer;
+-	} else {
++	} else if (xqspi->rxbuf) {
+ 		genfifoentry &= ~GQSPI_GENFIFO_TX;
+ 		genfifoentry |= GQSPI_GENFIFO_DATA_XFER;
+ 		genfifoentry |= GQSPI_GENFIFO_RX;
+@@ -587,6 +575,11 @@ static void zynqmp_qspi_fillgenfifo(struct zynqmp_qspi *xqspi, u8 nbits,
+ 			transfer_len = xqspi->dma_rx_bytes;
+ 		else
+ 			transfer_len = xqspi->bytes_to_receive;
++	} else {
++		/* Sending dummy circles here */
++		genfifoentry &= ~(GQSPI_GENFIFO_TX | GQSPI_GENFIFO_RX);
++		genfifoentry |= GQSPI_GENFIFO_DATA_XFER;
++		transfer_len = xqspi->bytes_to_transfer;
+ 	}
+ 	genfifoentry |= zynqmp_qspi_selectspimode(xqspi, nbits);
+ 	xqspi->genfifoentry = genfifoentry;
+@@ -738,7 +731,7 @@ static irqreturn_t zynqmp_qspi_irq(int irq, void *dev_id)
+  * zynqmp_qspi_setuprxdma - This function sets up the RX DMA operation
+  * @xqspi:	xqspi is a pointer to the GQSPI instance.
+  */
+-static void zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
++static int zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
+ {
+ 	u32 rx_bytes, rx_rem, config_reg;
+ 	dma_addr_t addr;
+@@ -752,7 +745,7 @@ static void zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
+ 		zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST, config_reg);
+ 		xqspi->mode = GQSPI_MODE_IO;
+ 		xqspi->dma_rx_bytes = 0;
+-		return;
++		return 0;
+ 	}
+ 
+ 	rx_rem = xqspi->bytes_to_receive % 4;
+@@ -760,8 +753,10 @@ static void zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
+ 
+ 	addr = dma_map_single(xqspi->dev, (void *)xqspi->rxbuf,
+ 			      rx_bytes, DMA_FROM_DEVICE);
+-	if (dma_mapping_error(xqspi->dev, addr))
++	if (dma_mapping_error(xqspi->dev, addr)) {
+ 		dev_err(xqspi->dev, "ERR:rxdma:memory not mapped\n");
++		return -ENOMEM;
++	}
+ 
+ 	xqspi->dma_rx_bytes = rx_bytes;
+ 	xqspi->dma_addr = addr;
+@@ -782,6 +777,8 @@ static void zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
+ 
+ 	/* Write the number of bytes to transfer */
+ 	zynqmp_gqspi_write(xqspi, GQSPI_QSPIDMA_DST_SIZE_OFST, rx_bytes);
++
++	return 0;
+ }
+ 
+ /**
+@@ -818,11 +815,17 @@ static void zynqmp_qspi_write_op(struct zynqmp_qspi *xqspi, u8 tx_nbits,
+  * @genfifoentry:	genfifoentry is pointer to the variable in which
+  *			GENFIFO	mask is returned to calling function
+  */
+-static void zynqmp_qspi_read_op(struct zynqmp_qspi *xqspi, u8 rx_nbits,
++static int zynqmp_qspi_read_op(struct zynqmp_qspi *xqspi, u8 rx_nbits,
+ 				u32 genfifoentry)
+ {
++	int ret;
++
++	ret = zynqmp_qspi_setuprxdma(xqspi);
++	if (ret)
++		return ret;
+ 	zynqmp_qspi_fillgenfifo(xqspi, rx_nbits, genfifoentry);
+-	zynqmp_qspi_setuprxdma(xqspi);
++
++	return 0;
+ }
+ 
+ /**
+@@ -835,10 +838,13 @@ static void zynqmp_qspi_read_op(struct zynqmp_qspi *xqspi, u8 rx_nbits,
+  */
+ static int __maybe_unused zynqmp_qspi_suspend(struct device *dev)
+ {
+-	struct spi_controller *ctlr = dev_get_drvdata(dev);
+-	struct zynqmp_qspi *xqspi = spi_controller_get_devdata(ctlr);
++	struct zynqmp_qspi *xqspi = dev_get_drvdata(dev);
++	struct spi_controller *ctlr = xqspi->ctlr;
++	int ret;
+ 
+-	spi_controller_suspend(ctlr);
++	ret = spi_controller_suspend(ctlr);
++	if (ret)
++		return ret;
+ 
+ 	zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
+ 
+@@ -856,27 +862,13 @@ static int __maybe_unused zynqmp_qspi_suspend(struct device *dev)
+  */
+ static int __maybe_unused zynqmp_qspi_resume(struct device *dev)
+ {
+-	struct spi_controller *ctlr = dev_get_drvdata(dev);
+-	struct zynqmp_qspi *xqspi = spi_controller_get_devdata(ctlr);
+-	int ret = 0;
++	struct zynqmp_qspi *xqspi = dev_get_drvdata(dev);
++	struct spi_controller *ctlr = xqspi->ctlr;
+ 
+-	ret = clk_enable(xqspi->pclk);
+-	if (ret) {
+-		dev_err(dev, "Cannot enable APB clock.\n");
+-		return ret;
+-	}
+-
+-	ret = clk_enable(xqspi->refclk);
+-	if (ret) {
+-		dev_err(dev, "Cannot enable device clock.\n");
+-		clk_disable(xqspi->pclk);
+-		return ret;
+-	}
++	zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, GQSPI_EN_MASK);
+ 
+ 	spi_controller_resume(ctlr);
+ 
+-	clk_disable(xqspi->refclk);
+-	clk_disable(xqspi->pclk);
+ 	return 0;
+ }
+ 
+@@ -890,10 +882,10 @@ static int __maybe_unused zynqmp_qspi_resume(struct device *dev)
+  */
+ static int __maybe_unused zynqmp_runtime_suspend(struct device *dev)
+ {
+-	struct zynqmp_qspi *xqspi = (struct zynqmp_qspi *)dev_get_drvdata(dev);
++	struct zynqmp_qspi *xqspi = dev_get_drvdata(dev);
+ 
+-	clk_disable(xqspi->refclk);
+-	clk_disable(xqspi->pclk);
++	clk_disable_unprepare(xqspi->refclk);
++	clk_disable_unprepare(xqspi->pclk);
+ 
+ 	return 0;
+ }
+@@ -908,19 +900,19 @@ static int __maybe_unused zynqmp_runtime_suspend(struct device *dev)
+  */
+ static int __maybe_unused zynqmp_runtime_resume(struct device *dev)
+ {
+-	struct zynqmp_qspi *xqspi = (struct zynqmp_qspi *)dev_get_drvdata(dev);
++	struct zynqmp_qspi *xqspi = dev_get_drvdata(dev);
+ 	int ret;
+ 
+-	ret = clk_enable(xqspi->pclk);
++	ret = clk_prepare_enable(xqspi->pclk);
+ 	if (ret) {
+ 		dev_err(dev, "Cannot enable APB clock.\n");
+ 		return ret;
+ 	}
+ 
+-	ret = clk_enable(xqspi->refclk);
++	ret = clk_prepare_enable(xqspi->refclk);
+ 	if (ret) {
+ 		dev_err(dev, "Cannot enable device clock.\n");
+-		clk_disable(xqspi->pclk);
++		clk_disable_unprepare(xqspi->pclk);
+ 		return ret;
+ 	}
+ 
+@@ -944,25 +936,23 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 	struct zynqmp_qspi *xqspi = spi_controller_get_devdata
+ 				    (mem->spi->master);
+ 	int err = 0, i;
+-	u8 *tmpbuf;
+ 	u32 genfifoentry = 0;
++	u16 opcode = op->cmd.opcode;
++	u64 opaddr;
+ 
+ 	dev_dbg(xqspi->dev, "cmd:%#x mode:%d.%d.%d.%d\n",
+ 		op->cmd.opcode, op->cmd.buswidth, op->addr.buswidth,
+ 		op->dummy.buswidth, op->data.buswidth);
+ 
++	mutex_lock(&xqspi->op_lock);
+ 	zynqmp_qspi_config_op(xqspi, mem->spi);
+ 	zynqmp_qspi_chipselect(mem->spi, false);
+ 	genfifoentry |= xqspi->genfifocs;
+ 	genfifoentry |= xqspi->genfifobus;
+ 
+ 	if (op->cmd.opcode) {
+-		tmpbuf = kzalloc(op->cmd.nbytes, GFP_KERNEL | GFP_DMA);
+-		if (!tmpbuf)
+-			return -ENOMEM;
+-		tmpbuf[0] = op->cmd.opcode;
+ 		reinit_completion(&xqspi->data_completion);
+-		xqspi->txbuf = tmpbuf;
++		xqspi->txbuf = &opcode;
+ 		xqspi->rxbuf = NULL;
+ 		xqspi->bytes_to_transfer = op->cmd.nbytes;
+ 		xqspi->bytes_to_receive = 0;
+@@ -973,16 +963,15 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 		zynqmp_gqspi_write(xqspi, GQSPI_IER_OFST,
+ 				   GQSPI_IER_GENFIFOEMPTY_MASK |
+ 				   GQSPI_IER_TXNOT_FULL_MASK);
+-		if (!wait_for_completion_interruptible_timeout
++		if (!wait_for_completion_timeout
+ 		    (&xqspi->data_completion, msecs_to_jiffies(1000))) {
+ 			err = -ETIMEDOUT;
+-			kfree(tmpbuf);
+ 			goto return_err;
+ 		}
+-		kfree(tmpbuf);
+ 	}
+ 
+ 	if (op->addr.nbytes) {
++		xqspi->txbuf = &opaddr;
+ 		for (i = 0; i < op->addr.nbytes; i++) {
+ 			*(((u8 *)xqspi->txbuf) + i) = op->addr.val >>
+ 					(8 * (op->addr.nbytes - i - 1));
+@@ -1001,7 +990,7 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 				   GQSPI_IER_TXEMPTY_MASK |
+ 				   GQSPI_IER_GENFIFOEMPTY_MASK |
+ 				   GQSPI_IER_TXNOT_FULL_MASK);
+-		if (!wait_for_completion_interruptible_timeout
++		if (!wait_for_completion_timeout
+ 		    (&xqspi->data_completion, msecs_to_jiffies(1000))) {
+ 			err = -ETIMEDOUT;
+ 			goto return_err;
+@@ -1009,32 +998,23 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 	}
+ 
+ 	if (op->dummy.nbytes) {
+-		tmpbuf = kzalloc(op->dummy.nbytes, GFP_KERNEL | GFP_DMA);
+-		if (!tmpbuf)
+-			return -ENOMEM;
+-		memset(tmpbuf, 0xff, op->dummy.nbytes);
+-		reinit_completion(&xqspi->data_completion);
+-		xqspi->txbuf = tmpbuf;
++		xqspi->txbuf = NULL;
+ 		xqspi->rxbuf = NULL;
+-		xqspi->bytes_to_transfer = op->dummy.nbytes;
++		/*
++		 * xqspi->bytes_to_transfer here represents the dummy circles
++		 * which need to be sent.
++		 */
++		xqspi->bytes_to_transfer = op->dummy.nbytes * 8 / op->dummy.buswidth;
+ 		xqspi->bytes_to_receive = 0;
+-		zynqmp_qspi_write_op(xqspi, op->dummy.buswidth,
++		/*
++		 * Using op->data.buswidth instead of op->dummy.buswidth here because
++		 * we need to use it to configure the correct SPI mode.
++		 */
++		zynqmp_qspi_write_op(xqspi, op->data.buswidth,
+ 				     genfifoentry);
+ 		zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST,
+ 				   zynqmp_gqspi_read(xqspi, GQSPI_CONFIG_OFST) |
+ 				   GQSPI_CFG_START_GEN_FIFO_MASK);
+-		zynqmp_gqspi_write(xqspi, GQSPI_IER_OFST,
+-				   GQSPI_IER_TXEMPTY_MASK |
+-				   GQSPI_IER_GENFIFOEMPTY_MASK |
+-				   GQSPI_IER_TXNOT_FULL_MASK);
+-		if (!wait_for_completion_interruptible_timeout
+-		    (&xqspi->data_completion, msecs_to_jiffies(1000))) {
+-			err = -ETIMEDOUT;
+-			kfree(tmpbuf);
+-			goto return_err;
+-		}
+-
+-		kfree(tmpbuf);
+ 	}
+ 
+ 	if (op->data.nbytes) {
+@@ -1059,8 +1039,11 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 			xqspi->rxbuf = (u8 *)op->data.buf.in;
+ 			xqspi->bytes_to_receive = op->data.nbytes;
+ 			xqspi->bytes_to_transfer = 0;
+-			zynqmp_qspi_read_op(xqspi, op->data.buswidth,
++			err = zynqmp_qspi_read_op(xqspi, op->data.buswidth,
+ 					    genfifoentry);
++			if (err)
++				goto return_err;
++
+ 			zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST,
+ 					   zynqmp_gqspi_read
+ 					   (xqspi, GQSPI_CONFIG_OFST) |
+@@ -1076,7 +1059,7 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 						   GQSPI_IER_RXEMPTY_MASK);
+ 			}
+ 		}
+-		if (!wait_for_completion_interruptible_timeout
++		if (!wait_for_completion_timeout
+ 		    (&xqspi->data_completion, msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 	}
+@@ -1084,6 +1067,7 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ return_err:
+ 
+ 	zynqmp_qspi_chipselect(mem->spi, true);
++	mutex_unlock(&xqspi->op_lock);
+ 
+ 	return err;
+ }
+@@ -1120,6 +1104,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 
+ 	xqspi = spi_controller_get_devdata(ctlr);
+ 	xqspi->dev = dev;
++	xqspi->ctlr = ctlr;
+ 	platform_set_drvdata(pdev, xqspi);
+ 
+ 	xqspi->regs = devm_platform_ioremap_resource(pdev, 0);
+@@ -1135,13 +1120,11 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 		goto remove_master;
+ 	}
+ 
+-	init_completion(&xqspi->data_completion);
+-
+ 	xqspi->refclk = devm_clk_get(&pdev->dev, "ref_clk");
+ 	if (IS_ERR(xqspi->refclk)) {
+ 		dev_err(dev, "ref_clk clock not found.\n");
+ 		ret = PTR_ERR(xqspi->refclk);
+-		goto clk_dis_pclk;
++		goto remove_master;
+ 	}
+ 
+ 	ret = clk_prepare_enable(xqspi->pclk);
+@@ -1156,6 +1139,10 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 		goto clk_dis_pclk;
+ 	}
+ 
++	init_completion(&xqspi->data_completion);
++
++	mutex_init(&xqspi->op_lock);
++
+ 	pm_runtime_use_autosuspend(&pdev->dev);
+ 	pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT);
+ 	pm_runtime_set_active(&pdev->dev);
+@@ -1178,6 +1165,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 		goto clk_dis_all;
+ 	}
+ 
++	dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
+ 	ctlr->bits_per_word_mask = SPI_BPW_MASK(8);
+ 	ctlr->num_chipselect = GQSPI_DEFAULT_NUM_CS;
+ 	ctlr->mem_ops = &zynqmp_qspi_mem_ops;
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 1eee8b3c1b381..419de3d404814 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -2480,6 +2480,7 @@ struct spi_controller *__devm_spi_alloc_controller(struct device *dev,
+ 
+ 	ctlr = __spi_alloc_controller(dev, size, slave);
+ 	if (ctlr) {
++		ctlr->devm_allocated = true;
+ 		*ptr = ctlr;
+ 		devres_add(dev, ptr);
+ 	} else {
+@@ -2826,11 +2827,6 @@ int devm_spi_register_controller(struct device *dev,
+ }
+ EXPORT_SYMBOL_GPL(devm_spi_register_controller);
+ 
+-static int devm_spi_match_controller(struct device *dev, void *res, void *ctlr)
+-{
+-	return *(struct spi_controller **)res == ctlr;
+-}
+-
+ static int __unregister(struct device *dev, void *null)
+ {
+ 	spi_unregister_device(to_spi_device(dev));
+@@ -2877,8 +2873,7 @@ void spi_unregister_controller(struct spi_controller *ctlr)
+ 	/* Release the last reference on the controller if its driver
+ 	 * has not yet been converted to devm_spi_alloc_master/slave().
+ 	 */
+-	if (!devres_find(ctlr->dev.parent, devm_spi_release_controller,
+-			 devm_spi_match_controller, ctlr))
++	if (!ctlr->devm_allocated)
+ 		put_device(&ctlr->dev);
+ 
+ 	/* free bus id */
+diff --git a/drivers/staging/comedi/drivers/tests/ni_routes_test.c b/drivers/staging/comedi/drivers/tests/ni_routes_test.c
+index eaefaf596a376..02606e39625af 100644
+--- a/drivers/staging/comedi/drivers/tests/ni_routes_test.c
++++ b/drivers/staging/comedi/drivers/tests/ni_routes_test.c
+@@ -217,7 +217,8 @@ void test_ni_assign_device_routes(void)
+ 	const u8 *table, *oldtable;
+ 
+ 	init_pci_6070e();
+-	ni_assign_device_routes(ni_eseries, pci_6070e, &private.routing_tables);
++	ni_assign_device_routes(ni_eseries, pci_6070e, NULL,
++				&private.routing_tables);
+ 	devroutes = private.routing_tables.valid_routes;
+ 	table = private.routing_tables.route_values;
+ 
+@@ -253,7 +254,8 @@ void test_ni_assign_device_routes(void)
+ 	olddevroutes = devroutes;
+ 	oldtable = table;
+ 	init_pci_6220();
+-	ni_assign_device_routes(ni_mseries, pci_6220, &private.routing_tables);
++	ni_assign_device_routes(ni_mseries, pci_6220, NULL,
++				&private.routing_tables);
+ 	devroutes = private.routing_tables.valid_routes;
+ 	table = private.routing_tables.route_values;
+ 
+diff --git a/drivers/staging/fwserial/fwserial.c b/drivers/staging/fwserial/fwserial.c
+index c368082aae1aa..0f4655d7d520a 100644
+--- a/drivers/staging/fwserial/fwserial.c
++++ b/drivers/staging/fwserial/fwserial.c
+@@ -1218,13 +1218,12 @@ static int get_serial_info(struct tty_struct *tty,
+ 	struct fwtty_port *port = tty->driver_data;
+ 
+ 	mutex_lock(&port->port.mutex);
+-	ss->type =  PORT_UNKNOWN;
+-	ss->line =  port->port.tty->index;
+-	ss->flags = port->port.flags;
+-	ss->xmit_fifo_size = FWTTY_PORT_TXFIFO_LEN;
++	ss->line = port->index;
+ 	ss->baud_base = 400000000;
+-	ss->close_delay = port->port.close_delay;
++	ss->close_delay = jiffies_to_msecs(port->port.close_delay) / 10;
++	ss->closing_wait = 3000;
+ 	mutex_unlock(&port->port.mutex);
++
+ 	return 0;
+ }
+ 
+@@ -1232,20 +1231,20 @@ static int set_serial_info(struct tty_struct *tty,
+ 			   struct serial_struct *ss)
+ {
+ 	struct fwtty_port *port = tty->driver_data;
++	unsigned int cdelay;
+ 
+-	if (ss->irq != 0 || ss->port != 0 || ss->custom_divisor != 0 ||
+-	    ss->baud_base != 400000000)
+-		return -EPERM;
++	cdelay = msecs_to_jiffies(ss->close_delay * 10);
+ 
+ 	mutex_lock(&port->port.mutex);
+ 	if (!capable(CAP_SYS_ADMIN)) {
+-		if (((ss->flags & ~ASYNC_USR_MASK) !=
++		if (cdelay != port->port.close_delay ||
++		    ((ss->flags & ~ASYNC_USR_MASK) !=
+ 		     (port->port.flags & ~ASYNC_USR_MASK))) {
+ 			mutex_unlock(&port->port.mutex);
+ 			return -EPERM;
+ 		}
+ 	}
+-	port->port.close_delay = ss->close_delay * HZ / 100;
++	port->port.close_delay = cdelay;
+ 	mutex_unlock(&port->port.mutex);
+ 
+ 	return 0;
+diff --git a/drivers/staging/greybus/uart.c b/drivers/staging/greybus/uart.c
+index 607378bfebb7e..a520f7f213db0 100644
+--- a/drivers/staging/greybus/uart.c
++++ b/drivers/staging/greybus/uart.c
+@@ -614,10 +614,12 @@ static int get_serial_info(struct tty_struct *tty,
+ 	ss->line = gb_tty->minor;
+ 	ss->xmit_fifo_size = 16;
+ 	ss->baud_base = 9600;
+-	ss->close_delay = gb_tty->port.close_delay / 10;
++	ss->close_delay = jiffies_to_msecs(gb_tty->port.close_delay) / 10;
+ 	ss->closing_wait =
+ 		gb_tty->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+-		ASYNC_CLOSING_WAIT_NONE : gb_tty->port.closing_wait / 10;
++		ASYNC_CLOSING_WAIT_NONE :
++		jiffies_to_msecs(gb_tty->port.closing_wait) / 10;
++
+ 	return 0;
+ }
+ 
+@@ -629,17 +631,16 @@ static int set_serial_info(struct tty_struct *tty,
+ 	unsigned int close_delay;
+ 	int retval = 0;
+ 
+-	close_delay = ss->close_delay * 10;
++	close_delay = msecs_to_jiffies(ss->close_delay * 10);
+ 	closing_wait = ss->closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+-			ASYNC_CLOSING_WAIT_NONE : ss->closing_wait * 10;
++			ASYNC_CLOSING_WAIT_NONE :
++			msecs_to_jiffies(ss->closing_wait * 10);
+ 
+ 	mutex_lock(&gb_tty->port.mutex);
+ 	if (!capable(CAP_SYS_ADMIN)) {
+ 		if ((close_delay != gb_tty->port.close_delay) ||
+ 		    (closing_wait != gb_tty->port.closing_wait))
+ 			retval = -EPERM;
+-		else
+-			retval = -EOPNOTSUPP;
+ 	} else {
+ 		gb_tty->port.close_delay = close_delay;
+ 		gb_tty->port.closing_wait = closing_wait;
+diff --git a/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c b/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
+index 7ca7378b18592..0ab67b2aec671 100644
+--- a/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
++++ b/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
+@@ -843,8 +843,10 @@ static int lm3554_probe(struct i2c_client *client)
+ 		return -ENOMEM;
+ 
+ 	flash->pdata = lm3554_platform_data_func(client);
+-	if (IS_ERR(flash->pdata))
+-		return PTR_ERR(flash->pdata);
++	if (IS_ERR(flash->pdata)) {
++		err = PTR_ERR(flash->pdata);
++		goto fail1;
++	}
+ 
+ 	v4l2_i2c_subdev_init(&flash->sd, client, &lm3554_ops);
+ 	flash->sd.internal_ops = &lm3554_internal_ops;
+@@ -856,7 +858,7 @@ static int lm3554_probe(struct i2c_client *client)
+ 				   ARRAY_SIZE(lm3554_controls));
+ 	if (ret) {
+ 		dev_err(&client->dev, "error initialize a ctrl_handler.\n");
+-		goto fail2;
++		goto fail3;
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(lm3554_controls); i++)
+@@ -865,14 +867,14 @@ static int lm3554_probe(struct i2c_client *client)
+ 
+ 	if (flash->ctrl_handler.error) {
+ 		dev_err(&client->dev, "ctrl_handler error.\n");
+-		goto fail2;
++		goto fail3;
+ 	}
+ 
+ 	flash->sd.ctrl_handler = &flash->ctrl_handler;
+ 	err = media_entity_pads_init(&flash->sd.entity, 0, NULL);
+ 	if (err) {
+ 		dev_err(&client->dev, "error initialize a media entity.\n");
+-		goto fail1;
++		goto fail2;
+ 	}
+ 
+ 	flash->sd.entity.function = MEDIA_ENT_F_FLASH;
+@@ -884,14 +886,15 @@ static int lm3554_probe(struct i2c_client *client)
+ 	err = lm3554_gpio_init(client);
+ 	if (err) {
+ 		dev_err(&client->dev, "gpio request/direction_output fail");
+-		goto fail2;
++		goto fail3;
+ 	}
+ 	return atomisp_register_i2c_module(&flash->sd, NULL, LED_FLASH);
+-fail2:
++fail3:
+ 	media_entity_cleanup(&flash->sd.entity);
+ 	v4l2_ctrl_handler_free(&flash->ctrl_handler);
+-fail1:
++fail2:
+ 	v4l2_device_unregister_subdev(&flash->sd);
++fail1:
+ 	kfree(flash);
+ 
+ 	return err;
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
+index 2ae50decfc8bd..9da82855552de 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
+@@ -948,10 +948,8 @@ int atomisp_alloc_css_stat_bufs(struct atomisp_sub_device *asd,
+ 		dev_dbg(isp->dev, "allocating %d dis buffers\n", count);
+ 		while (count--) {
+ 			dis_buf = kzalloc(sizeof(struct atomisp_dis_buf), GFP_KERNEL);
+-			if (!dis_buf) {
+-				kfree(s3a_buf);
++			if (!dis_buf)
+ 				goto error;
+-			}
+ 			if (atomisp_css_allocate_stat_buffers(
+ 				asd, stream_id, NULL, dis_buf, NULL)) {
+ 				kfree(dis_buf);
+diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
+index f13af2329f486..0168f9839c905 100644
+--- a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
++++ b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
+@@ -857,16 +857,17 @@ static void free_private_pages(struct hmm_buffer_object *bo,
+ 	kfree(bo->page_obj);
+ }
+ 
+-static void free_user_pages(struct hmm_buffer_object *bo)
++static void free_user_pages(struct hmm_buffer_object *bo,
++			    unsigned int page_nr)
+ {
+ 	int i;
+ 
+ 	hmm_mem_stat.usr_size -= bo->pgnr;
+ 
+ 	if (bo->mem_type == HMM_BO_MEM_TYPE_PFN) {
+-		unpin_user_pages(bo->pages, bo->pgnr);
++		unpin_user_pages(bo->pages, page_nr);
+ 	} else {
+-		for (i = 0; i < bo->pgnr; i++)
++		for (i = 0; i < page_nr; i++)
+ 			put_page(bo->pages[i]);
+ 	}
+ 	kfree(bo->pages);
+@@ -942,6 +943,8 @@ static int alloc_user_pages(struct hmm_buffer_object *bo,
+ 		dev_err(atomisp_dev,
+ 			"get_user_pages err: bo->pgnr = %d, pgnr actually pinned = %d.\n",
+ 			bo->pgnr, page_nr);
++		if (page_nr < 0)
++			page_nr = 0;
+ 		goto out_of_mem;
+ 	}
+ 
+@@ -954,7 +957,7 @@ static int alloc_user_pages(struct hmm_buffer_object *bo,
+ 
+ out_of_mem:
+ 
+-	free_user_pages(bo);
++	free_user_pages(bo, page_nr);
+ 
+ 	return -ENOMEM;
+ }
+@@ -1037,7 +1040,7 @@ void hmm_bo_free_pages(struct hmm_buffer_object *bo)
+ 	if (bo->type == HMM_BO_PRIVATE)
+ 		free_private_pages(bo, &dynamic_pool, &reserved_pool);
+ 	else if (bo->type == HMM_BO_USER)
+-		free_user_pages(bo);
++		free_user_pages(bo, bo->pgnr);
+ 	else
+ 		dev_err(atomisp_dev, "invalid buffer type.\n");
+ 	mutex_unlock(&bo->mutex);
+diff --git a/drivers/staging/media/omap4iss/iss.c b/drivers/staging/media/omap4iss/iss.c
+index e06ea7ea1e502..3dac35f682388 100644
+--- a/drivers/staging/media/omap4iss/iss.c
++++ b/drivers/staging/media/omap4iss/iss.c
+@@ -1236,8 +1236,10 @@ static int iss_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		goto error;
+ 
+-	if (!omap4iss_get(iss))
++	if (!omap4iss_get(iss)) {
++		ret = -EINVAL;
+ 		goto error;
++	}
+ 
+ 	ret = iss_reset(iss);
+ 	if (ret < 0)
+diff --git a/drivers/staging/media/rkisp1/rkisp1-resizer.c b/drivers/staging/media/rkisp1/rkisp1-resizer.c
+index 1687d82e6c68d..4dcc342ac2b27 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-resizer.c
++++ b/drivers/staging/media/rkisp1/rkisp1-resizer.c
+@@ -520,14 +520,15 @@ static void rkisp1_rsz_set_src_fmt(struct rkisp1_resizer *rsz,
+ 				   struct v4l2_mbus_framefmt *format,
+ 				   unsigned int which)
+ {
+-	const struct rkisp1_isp_mbus_info *mbus_info;
+-	struct v4l2_mbus_framefmt *src_fmt;
++	const struct rkisp1_isp_mbus_info *sink_mbus_info;
++	struct v4l2_mbus_framefmt *src_fmt, *sink_fmt;
+ 
++	sink_fmt = rkisp1_rsz_get_pad_fmt(rsz, cfg, RKISP1_RSZ_PAD_SINK, which);
+ 	src_fmt = rkisp1_rsz_get_pad_fmt(rsz, cfg, RKISP1_RSZ_PAD_SRC, which);
+-	mbus_info = rkisp1_isp_mbus_info_get(src_fmt->code);
++	sink_mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
+ 
+ 	/* for YUV formats, userspace can change the mbus code on the src pad if it is supported */
+-	if (mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV &&
++	if (sink_mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV &&
+ 	    rkisp1_rsz_get_yuv_mbus_info(format->code))
+ 		src_fmt->code = format->code;
+ 
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_regs.h b/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
+index 66b152f18d17a..426387cf16ac7 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
+@@ -443,16 +443,17 @@
+ #define VE_DEC_H265_STATUS_STCD_BUSY		BIT(21)
+ #define VE_DEC_H265_STATUS_WB_BUSY		BIT(20)
+ #define VE_DEC_H265_STATUS_BS_DMA_BUSY		BIT(19)
+-#define VE_DEC_H265_STATUS_IQIT_BUSY		BIT(18)
++#define VE_DEC_H265_STATUS_IT_BUSY		BIT(18)
+ #define VE_DEC_H265_STATUS_INTER_BUSY		BIT(17)
+ #define VE_DEC_H265_STATUS_MORE_DATA		BIT(16)
+-#define VE_DEC_H265_STATUS_VLD_BUSY		BIT(14)
+-#define VE_DEC_H265_STATUS_DEBLOCKING_BUSY	BIT(13)
+-#define VE_DEC_H265_STATUS_DEBLOCKING_DRAM_BUSY	BIT(12)
+-#define VE_DEC_H265_STATUS_INTRA_BUSY		BIT(11)
+-#define VE_DEC_H265_STATUS_SAO_BUSY		BIT(10)
+-#define VE_DEC_H265_STATUS_MVP_BUSY		BIT(9)
+-#define VE_DEC_H265_STATUS_SWDEC_BUSY		BIT(8)
++#define VE_DEC_H265_STATUS_DBLK_BUSY		BIT(15)
++#define VE_DEC_H265_STATUS_IREC_BUSY		BIT(14)
++#define VE_DEC_H265_STATUS_INTRA_BUSY		BIT(13)
++#define VE_DEC_H265_STATUS_MCRI_BUSY		BIT(12)
++#define VE_DEC_H265_STATUS_IQIT_BUSY		BIT(11)
++#define VE_DEC_H265_STATUS_MVP_BUSY		BIT(10)
++#define VE_DEC_H265_STATUS_IS_BUSY		BIT(9)
++#define VE_DEC_H265_STATUS_VLD_BUSY		BIT(8)
+ #define VE_DEC_H265_STATUS_OVER_TIME		BIT(3)
+ #define VE_DEC_H265_STATUS_VLD_DATA_REQ		BIT(2)
+ #define VE_DEC_H265_STATUS_ERROR		BIT(1)
+diff --git a/drivers/staging/rtl8192u/r8192U_core.c b/drivers/staging/rtl8192u/r8192U_core.c
+index 27dc181c4c9b6..03d31e52b3999 100644
+--- a/drivers/staging/rtl8192u/r8192U_core.c
++++ b/drivers/staging/rtl8192u/r8192U_core.c
+@@ -3208,7 +3208,7 @@ static void rtl819x_update_rxcounts(struct r8192_priv *priv, u32 *TotalRxBcnNum,
+ 			     u32 *TotalRxDataNum)
+ {
+ 	u16			SlotIndex;
+-	u8			i;
++	u16			i;
+ 
+ 	*TotalRxBcnNum = 0;
+ 	*TotalRxDataNum = 0;
+diff --git a/drivers/tty/amiserial.c b/drivers/tty/amiserial.c
+index 13f63c01c5894..f60db967bf7b5 100644
+--- a/drivers/tty/amiserial.c
++++ b/drivers/tty/amiserial.c
+@@ -970,6 +970,7 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
+ 	if (!serial_isroot()) {
+ 		if ((ss->baud_base != state->baud_base) ||
+ 		    (ss->close_delay != port->close_delay) ||
++		    (ss->closing_wait != port->closing_wait) ||
+ 		    (ss->xmit_fifo_size != state->xmit_fifo_size) ||
+ 		    ((ss->flags & ~ASYNC_USR_MASK) !=
+ 		     (port->flags & ~ASYNC_USR_MASK))) {
+diff --git a/drivers/tty/moxa.c b/drivers/tty/moxa.c
+index 9f13f7d49dd78..f9f14104bd2c0 100644
+--- a/drivers/tty/moxa.c
++++ b/drivers/tty/moxa.c
+@@ -2040,7 +2040,7 @@ static int moxa_get_serial_info(struct tty_struct *tty,
+ 	ss->line = info->port.tty->index,
+ 	ss->flags = info->port.flags,
+ 	ss->baud_base = 921600,
+-	ss->close_delay = info->port.close_delay;
++	ss->close_delay = jiffies_to_msecs(info->port.close_delay) / 10;
+ 	mutex_unlock(&info->port.mutex);
+ 	return 0;
+ }
+@@ -2050,6 +2050,7 @@ static int moxa_set_serial_info(struct tty_struct *tty,
+ 		struct serial_struct *ss)
+ {
+ 	struct moxa_port *info = tty->driver_data;
++	unsigned int close_delay;
+ 
+ 	if (tty->index == MAX_PORTS)
+ 		return -EINVAL;
+@@ -2061,19 +2062,24 @@ static int moxa_set_serial_info(struct tty_struct *tty,
+ 			ss->baud_base != 921600)
+ 		return -EPERM;
+ 
++	close_delay = msecs_to_jiffies(ss->close_delay * 10);
++
+ 	mutex_lock(&info->port.mutex);
+ 	if (!capable(CAP_SYS_ADMIN)) {
+-		if (((ss->flags & ~ASYNC_USR_MASK) !=
++		if (close_delay != info->port.close_delay ||
++		    ss->type != info->type ||
++		    ((ss->flags & ~ASYNC_USR_MASK) !=
+ 		     (info->port.flags & ~ASYNC_USR_MASK))) {
+ 			mutex_unlock(&info->port.mutex);
+ 			return -EPERM;
+ 		}
+-	}
+-	info->port.close_delay = ss->close_delay * HZ / 100;
++	} else {
++		info->port.close_delay = close_delay;
+ 
+-	MoxaSetFifo(info, ss->type == PORT_16550A);
++		MoxaSetFifo(info, ss->type == PORT_16550A);
+ 
+-	info->type = ss->type;
++		info->type = ss->type;
++	}
+ 	mutex_unlock(&info->port.mutex);
+ 	return 0;
+ }
+diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
+index 76b94d0ff5865..84e8158088cd2 100644
+--- a/drivers/tty/serial/omap-serial.c
++++ b/drivers/tty/serial/omap-serial.c
+@@ -159,6 +159,8 @@ struct uart_omap_port {
+ 	u32			calc_latency;
+ 	struct work_struct	qos_work;
+ 	bool			is_suspending;
++
++	unsigned int		rs485_tx_filter_count;
+ };
+ 
+ #define to_uart_omap_port(p) ((container_of((p), struct uart_omap_port, port)))
+@@ -302,7 +304,8 @@ static void serial_omap_stop_tx(struct uart_port *port)
+ 			serial_out(up, UART_OMAP_SCR, up->scr);
+ 			res = (port->rs485.flags & SER_RS485_RTS_AFTER_SEND) ?
+ 				1 : 0;
+-			if (gpiod_get_value(up->rts_gpiod) != res) {
++			if (up->rts_gpiod &&
++			    gpiod_get_value(up->rts_gpiod) != res) {
+ 				if (port->rs485.delay_rts_after_send > 0)
+ 					mdelay(
+ 					port->rs485.delay_rts_after_send);
+@@ -328,19 +331,6 @@ static void serial_omap_stop_tx(struct uart_port *port)
+ 		serial_out(up, UART_IER, up->ier);
+ 	}
+ 
+-	if ((port->rs485.flags & SER_RS485_ENABLED) &&
+-	    !(port->rs485.flags & SER_RS485_RX_DURING_TX)) {
+-		/*
+-		 * Empty the RX FIFO, we are not interested in anything
+-		 * received during the half-duplex transmission.
+-		 */
+-		serial_out(up, UART_FCR, up->fcr | UART_FCR_CLEAR_RCVR);
+-		/* Re-enable RX interrupts */
+-		up->ier |= UART_IER_RLSI | UART_IER_RDI;
+-		up->port.read_status_mask |= UART_LSR_DR;
+-		serial_out(up, UART_IER, up->ier);
+-	}
+-
+ 	pm_runtime_mark_last_busy(up->dev);
+ 	pm_runtime_put_autosuspend(up->dev);
+ }
+@@ -366,6 +356,10 @@ static void transmit_chars(struct uart_omap_port *up, unsigned int lsr)
+ 		serial_out(up, UART_TX, up->port.x_char);
+ 		up->port.icount.tx++;
+ 		up->port.x_char = 0;
++		if ((up->port.rs485.flags & SER_RS485_ENABLED) &&
++		    !(up->port.rs485.flags & SER_RS485_RX_DURING_TX))
++			up->rs485_tx_filter_count++;
++
+ 		return;
+ 	}
+ 	if (uart_circ_empty(xmit) || uart_tx_stopped(&up->port)) {
+@@ -377,6 +371,10 @@ static void transmit_chars(struct uart_omap_port *up, unsigned int lsr)
+ 		serial_out(up, UART_TX, xmit->buf[xmit->tail]);
+ 		xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ 		up->port.icount.tx++;
++		if ((up->port.rs485.flags & SER_RS485_ENABLED) &&
++		    !(up->port.rs485.flags & SER_RS485_RX_DURING_TX))
++			up->rs485_tx_filter_count++;
++
+ 		if (uart_circ_empty(xmit))
+ 			break;
+ 	} while (--count > 0);
+@@ -411,7 +409,7 @@ static void serial_omap_start_tx(struct uart_port *port)
+ 
+ 		/* if rts not already enabled */
+ 		res = (port->rs485.flags & SER_RS485_RTS_ON_SEND) ? 1 : 0;
+-		if (gpiod_get_value(up->rts_gpiod) != res) {
++		if (up->rts_gpiod && gpiod_get_value(up->rts_gpiod) != res) {
+ 			gpiod_set_value(up->rts_gpiod, res);
+ 			if (port->rs485.delay_rts_before_send > 0)
+ 				mdelay(port->rs485.delay_rts_before_send);
+@@ -420,7 +418,7 @@ static void serial_omap_start_tx(struct uart_port *port)
+ 
+ 	if ((port->rs485.flags & SER_RS485_ENABLED) &&
+ 	    !(port->rs485.flags & SER_RS485_RX_DURING_TX))
+-		serial_omap_stop_rx(port);
++		up->rs485_tx_filter_count = 0;
+ 
+ 	serial_omap_enable_ier_thri(up);
+ 	pm_runtime_mark_last_busy(up->dev);
+@@ -491,8 +489,13 @@ static void serial_omap_rlsi(struct uart_omap_port *up, unsigned int lsr)
+ 	 * Read one data character out to avoid stalling the receiver according
+ 	 * to the table 23-246 of the omap4 TRM.
+ 	 */
+-	if (likely(lsr & UART_LSR_DR))
++	if (likely(lsr & UART_LSR_DR)) {
+ 		serial_in(up, UART_RX);
++		if ((up->port.rs485.flags & SER_RS485_ENABLED) &&
++		    !(up->port.rs485.flags & SER_RS485_RX_DURING_TX) &&
++		    up->rs485_tx_filter_count)
++			up->rs485_tx_filter_count--;
++	}
+ 
+ 	up->port.icount.rx++;
+ 	flag = TTY_NORMAL;
+@@ -543,6 +546,13 @@ static void serial_omap_rdi(struct uart_omap_port *up, unsigned int lsr)
+ 		return;
+ 
+ 	ch = serial_in(up, UART_RX);
++	if ((up->port.rs485.flags & SER_RS485_ENABLED) &&
++	    !(up->port.rs485.flags & SER_RS485_RX_DURING_TX) &&
++	    up->rs485_tx_filter_count) {
++		up->rs485_tx_filter_count--;
++		return;
++	}
++
+ 	flag = TTY_NORMAL;
+ 	up->port.icount.rx++;
+ 
+@@ -1407,18 +1417,13 @@ serial_omap_config_rs485(struct uart_port *port, struct serial_rs485 *rs485)
+ 	/* store new config */
+ 	port->rs485 = *rs485;
+ 
+-	/*
+-	 * Just as a precaution, only allow rs485
+-	 * to be enabled if the gpio pin is valid
+-	 */
+ 	if (up->rts_gpiod) {
+ 		/* enable / disable rts */
+ 		val = (port->rs485.flags & SER_RS485_ENABLED) ?
+ 			SER_RS485_RTS_AFTER_SEND : SER_RS485_RTS_ON_SEND;
+ 		val = (port->rs485.flags & val) ? 1 : 0;
+ 		gpiod_set_value(up->rts_gpiod, val);
+-	} else
+-		port->rs485.flags &= ~SER_RS485_ENABLED;
++	}
+ 
+ 	/* Enable interrupts */
+ 	up->ier = mode;
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index f86ec2d2635b7..9adb8362578c5 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -1196,7 +1196,7 @@ static int sc16is7xx_probe(struct device *dev,
+ 	ret = regmap_read(regmap,
+ 			  SC16IS7XX_LSR_REG << SC16IS7XX_REG_SHIFT, &val);
+ 	if (ret < 0)
+-		return ret;
++		return -EPROBE_DEFER;
+ 
+ 	/* Alloc port structure */
+ 	s = devm_kzalloc(dev, struct_size(s, p, devtype->nr_uart), GFP_KERNEL);
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 828f9ad1be49c..c6cbaccc19b0d 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1306,7 +1306,7 @@ static int uart_set_rs485_config(struct uart_port *port,
+ 	unsigned long flags;
+ 
+ 	if (!port->rs485_config)
+-		return -ENOIOCTLCMD;
++		return -ENOTTY;
+ 
+ 	if (copy_from_user(&rs485, rs485_user, sizeof(*rs485_user)))
+ 		return -EFAULT;
+@@ -1330,7 +1330,7 @@ static int uart_get_iso7816_config(struct uart_port *port,
+ 	struct serial_iso7816 aux;
+ 
+ 	if (!port->iso7816_config)
+-		return -ENOIOCTLCMD;
++		return -ENOTTY;
+ 
+ 	spin_lock_irqsave(&port->lock, flags);
+ 	aux = port->iso7816;
+@@ -1350,7 +1350,7 @@ static int uart_set_iso7816_config(struct uart_port *port,
+ 	unsigned long flags;
+ 
+ 	if (!port->iso7816_config)
+-		return -ENOIOCTLCMD;
++		return -ENOTTY;
+ 
+ 	if (copy_from_user(&iso7816, iso7816_user, sizeof(*iso7816_user)))
+ 		return -EFAULT;
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 6248304a001f4..2cf9fc915510c 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -34,15 +34,15 @@
+ #include "serial_mctrl_gpio.h"
+ #include "stm32-usart.h"
+ 
+-static void stm32_stop_tx(struct uart_port *port);
+-static void stm32_transmit_chars(struct uart_port *port);
++static void stm32_usart_stop_tx(struct uart_port *port);
++static void stm32_usart_transmit_chars(struct uart_port *port);
+ 
+ static inline struct stm32_port *to_stm32_port(struct uart_port *port)
+ {
+ 	return container_of(port, struct stm32_port, port);
+ }
+ 
+-static void stm32_set_bits(struct uart_port *port, u32 reg, u32 bits)
++static void stm32_usart_set_bits(struct uart_port *port, u32 reg, u32 bits)
+ {
+ 	u32 val;
+ 
+@@ -51,7 +51,7 @@ static void stm32_set_bits(struct uart_port *port, u32 reg, u32 bits)
+ 	writel_relaxed(val, port->membase + reg);
+ }
+ 
+-static void stm32_clr_bits(struct uart_port *port, u32 reg, u32 bits)
++static void stm32_usart_clr_bits(struct uart_port *port, u32 reg, u32 bits)
+ {
+ 	u32 val;
+ 
+@@ -60,8 +60,8 @@ static void stm32_clr_bits(struct uart_port *port, u32 reg, u32 bits)
+ 	writel_relaxed(val, port->membase + reg);
+ }
+ 
+-static void stm32_config_reg_rs485(u32 *cr1, u32 *cr3, u32 delay_ADE,
+-				   u32 delay_DDE, u32 baud)
++static void stm32_usart_config_reg_rs485(u32 *cr1, u32 *cr3, u32 delay_ADE,
++					 u32 delay_DDE, u32 baud)
+ {
+ 	u32 rs485_deat_dedt;
+ 	u32 rs485_deat_dedt_max = (USART_CR1_DEAT_MASK >> USART_CR1_DEAT_SHIFT);
+@@ -95,16 +95,16 @@ static void stm32_config_reg_rs485(u32 *cr1, u32 *cr3, u32 delay_ADE,
+ 	*cr1 |= rs485_deat_dedt;
+ }
+ 
+-static int stm32_config_rs485(struct uart_port *port,
+-			      struct serial_rs485 *rs485conf)
++static int stm32_usart_config_rs485(struct uart_port *port,
++				    struct serial_rs485 *rs485conf)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+-	struct stm32_usart_config *cfg = &stm32_port->info->cfg;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_config *cfg = &stm32_port->info->cfg;
+ 	u32 usartdiv, baud, cr1, cr3;
+ 	bool over8;
+ 
+-	stm32_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
++	stm32_usart_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+ 
+ 	port->rs485 = *rs485conf;
+ 
+@@ -122,9 +122,10 @@ static int stm32_config_rs485(struct uart_port *port,
+ 				   << USART_BRR_04_R_SHIFT;
+ 
+ 		baud = DIV_ROUND_CLOSEST(port->uartclk, usartdiv);
+-		stm32_config_reg_rs485(&cr1, &cr3,
+-				       rs485conf->delay_rts_before_send,
+-				       rs485conf->delay_rts_after_send, baud);
++		stm32_usart_config_reg_rs485(&cr1, &cr3,
++					     rs485conf->delay_rts_before_send,
++					     rs485conf->delay_rts_after_send,
++					     baud);
+ 
+ 		if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
+ 			cr3 &= ~USART_CR3_DEP;
+@@ -137,18 +138,19 @@ static int stm32_config_rs485(struct uart_port *port,
+ 		writel_relaxed(cr3, port->membase + ofs->cr3);
+ 		writel_relaxed(cr1, port->membase + ofs->cr1);
+ 	} else {
+-		stm32_clr_bits(port, ofs->cr3, USART_CR3_DEM | USART_CR3_DEP);
+-		stm32_clr_bits(port, ofs->cr1,
+-			       USART_CR1_DEDT_MASK | USART_CR1_DEAT_MASK);
++		stm32_usart_clr_bits(port, ofs->cr3,
++				     USART_CR3_DEM | USART_CR3_DEP);
++		stm32_usart_clr_bits(port, ofs->cr1,
++				     USART_CR1_DEDT_MASK | USART_CR1_DEAT_MASK);
+ 	}
+ 
+-	stm32_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
++	stm32_usart_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+ 
+ 	return 0;
+ }
+ 
+-static int stm32_init_rs485(struct uart_port *port,
+-			    struct platform_device *pdev)
++static int stm32_usart_init_rs485(struct uart_port *port,
++				  struct platform_device *pdev)
+ {
+ 	struct serial_rs485 *rs485conf = &port->rs485;
+ 
+@@ -162,11 +164,11 @@ static int stm32_init_rs485(struct uart_port *port,
+ 	return uart_get_rs485_mode(port);
+ }
+ 
+-static int stm32_pending_rx(struct uart_port *port, u32 *sr, int *last_res,
+-			    bool threaded)
++static int stm32_usart_pending_rx(struct uart_port *port, u32 *sr,
++				  int *last_res, bool threaded)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 	enum dma_status status;
+ 	struct dma_tx_state state;
+ 
+@@ -176,8 +178,7 @@ static int stm32_pending_rx(struct uart_port *port, u32 *sr, int *last_res,
+ 		status = dmaengine_tx_status(stm32_port->rx_ch,
+ 					     stm32_port->rx_ch->cookie,
+ 					     &state);
+-		if ((status == DMA_IN_PROGRESS) &&
+-		    (*last_res != state.residue))
++		if (status == DMA_IN_PROGRESS && (*last_res != state.residue))
+ 			return 1;
+ 		else
+ 			return 0;
+@@ -187,11 +188,11 @@ static int stm32_pending_rx(struct uart_port *port, u32 *sr, int *last_res,
+ 	return 0;
+ }
+ 
+-static unsigned long stm32_get_char(struct uart_port *port, u32 *sr,
+-				    int *last_res)
++static unsigned long stm32_usart_get_char(struct uart_port *port, u32 *sr,
++					  int *last_res)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 	unsigned long c;
+ 
+ 	if (stm32_port->rx_ch) {
+@@ -207,19 +208,22 @@ static unsigned long stm32_get_char(struct uart_port *port, u32 *sr,
+ 	return c;
+ }
+ 
+-static void stm32_receive_chars(struct uart_port *port, bool threaded)
++static void stm32_usart_receive_chars(struct uart_port *port, bool threaded)
+ {
+ 	struct tty_port *tport = &port->state->port;
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+-	unsigned long c;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	unsigned long c, flags;
+ 	u32 sr;
+ 	char flag;
+ 
+-	if (irqd_is_wakeup_set(irq_get_irq_data(port->irq)))
+-		pm_wakeup_event(tport->tty->dev, 0);
++	if (threaded)
++		spin_lock_irqsave(&port->lock, flags);
++	else
++		spin_lock(&port->lock);
+ 
+-	while (stm32_pending_rx(port, &sr, &stm32_port->last_res, threaded)) {
++	while (stm32_usart_pending_rx(port, &sr, &stm32_port->last_res,
++				      threaded)) {
+ 		sr |= USART_SR_DUMMY_RX;
+ 		flag = TTY_NORMAL;
+ 
+@@ -238,7 +242,7 @@ static void stm32_receive_chars(struct uart_port *port, bool threaded)
+ 			writel_relaxed(sr & USART_SR_ERR_MASK,
+ 				       port->membase + ofs->icr);
+ 
+-		c = stm32_get_char(port, &sr, &stm32_port->last_res);
++		c = stm32_usart_get_char(port, &sr, &stm32_port->last_res);
+ 		port->icount.rx++;
+ 		if (sr & USART_SR_ERR_MASK) {
+ 			if (sr & USART_SR_ORE) {
+@@ -273,58 +277,65 @@ static void stm32_receive_chars(struct uart_port *port, bool threaded)
+ 		uart_insert_char(port, sr, USART_SR_ORE, c, flag);
+ 	}
+ 
+-	spin_unlock(&port->lock);
++	if (threaded)
++		spin_unlock_irqrestore(&port->lock, flags);
++	else
++		spin_unlock(&port->lock);
++
+ 	tty_flip_buffer_push(tport);
+-	spin_lock(&port->lock);
+ }
+ 
+-static void stm32_tx_dma_complete(void *arg)
++static void stm32_usart_tx_dma_complete(void *arg)
+ {
+ 	struct uart_port *port = arg;
+ 	struct stm32_port *stm32port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
++	unsigned long flags;
+ 
+-	stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
++	dmaengine_terminate_async(stm32port->tx_ch);
++	stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
+ 	stm32port->tx_dma_busy = false;
+ 
+ 	/* Let's see if we have pending data to send */
+-	stm32_transmit_chars(port);
++	spin_lock_irqsave(&port->lock, flags);
++	stm32_usart_transmit_chars(port);
++	spin_unlock_irqrestore(&port->lock, flags);
+ }
+ 
+-static void stm32_tx_interrupt_enable(struct uart_port *port)
++static void stm32_usart_tx_interrupt_enable(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 
+ 	/*
+ 	 * Enables TX FIFO threashold irq when FIFO is enabled,
+ 	 * or TX empty irq when FIFO is disabled
+ 	 */
+ 	if (stm32_port->fifoen)
+-		stm32_set_bits(port, ofs->cr3, USART_CR3_TXFTIE);
++		stm32_usart_set_bits(port, ofs->cr3, USART_CR3_TXFTIE);
+ 	else
+-		stm32_set_bits(port, ofs->cr1, USART_CR1_TXEIE);
++		stm32_usart_set_bits(port, ofs->cr1, USART_CR1_TXEIE);
+ }
+ 
+-static void stm32_tx_interrupt_disable(struct uart_port *port)
++static void stm32_usart_tx_interrupt_disable(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 
+ 	if (stm32_port->fifoen)
+-		stm32_clr_bits(port, ofs->cr3, USART_CR3_TXFTIE);
++		stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_TXFTIE);
+ 	else
+-		stm32_clr_bits(port, ofs->cr1, USART_CR1_TXEIE);
++		stm32_usart_clr_bits(port, ofs->cr1, USART_CR1_TXEIE);
+ }
+ 
+-static void stm32_transmit_chars_pio(struct uart_port *port)
++static void stm32_usart_transmit_chars_pio(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 	struct circ_buf *xmit = &port->state->xmit;
+ 
+ 	if (stm32_port->tx_dma_busy) {
+-		stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
++		stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
+ 		stm32_port->tx_dma_busy = false;
+ 	}
+ 
+@@ -339,15 +350,15 @@ static void stm32_transmit_chars_pio(struct uart_port *port)
+ 
+ 	/* rely on TXE irq (mask or unmask) for sending remaining data */
+ 	if (uart_circ_empty(xmit))
+-		stm32_tx_interrupt_disable(port);
++		stm32_usart_tx_interrupt_disable(port);
+ 	else
+-		stm32_tx_interrupt_enable(port);
++		stm32_usart_tx_interrupt_enable(port);
+ }
+ 
+-static void stm32_transmit_chars_dma(struct uart_port *port)
++static void stm32_usart_transmit_chars_dma(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
+ 	struct circ_buf *xmit = &port->state->xmit;
+ 	struct dma_async_tx_descriptor *desc = NULL;
+ 	unsigned int count, i;
+@@ -386,7 +397,7 @@ static void stm32_transmit_chars_dma(struct uart_port *port)
+ 	if (!desc)
+ 		goto fallback_err;
+ 
+-	desc->callback = stm32_tx_dma_complete;
++	desc->callback = stm32_usart_tx_dma_complete;
+ 	desc->callback_param = port;
+ 
+ 	/* Push current DMA TX transaction in the pending queue */
+@@ -399,7 +410,7 @@ static void stm32_transmit_chars_dma(struct uart_port *port)
+ 	/* Issue pending DMA TX requests */
+ 	dma_async_issue_pending(stm32port->tx_ch);
+ 
+-	stm32_set_bits(port, ofs->cr3, USART_CR3_DMAT);
++	stm32_usart_set_bits(port, ofs->cr3, USART_CR3_DMAT);
+ 
+ 	xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
+ 	port->icount.tx += count;
+@@ -407,74 +418,79 @@ static void stm32_transmit_chars_dma(struct uart_port *port)
+ 
+ fallback_err:
+ 	for (i = count; i > 0; i--)
+-		stm32_transmit_chars_pio(port);
++		stm32_usart_transmit_chars_pio(port);
+ }
+ 
+-static void stm32_transmit_chars(struct uart_port *port)
++static void stm32_usart_transmit_chars(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 	struct circ_buf *xmit = &port->state->xmit;
+ 
+ 	if (port->x_char) {
+ 		if (stm32_port->tx_dma_busy)
+-			stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
++			stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
+ 		writel_relaxed(port->x_char, port->membase + ofs->tdr);
+ 		port->x_char = 0;
+ 		port->icount.tx++;
+ 		if (stm32_port->tx_dma_busy)
+-			stm32_set_bits(port, ofs->cr3, USART_CR3_DMAT);
++			stm32_usart_set_bits(port, ofs->cr3, USART_CR3_DMAT);
+ 		return;
+ 	}
+ 
+ 	if (uart_circ_empty(xmit) || uart_tx_stopped(port)) {
+-		stm32_tx_interrupt_disable(port);
++		stm32_usart_tx_interrupt_disable(port);
+ 		return;
+ 	}
+ 
+ 	if (ofs->icr == UNDEF_REG)
+-		stm32_clr_bits(port, ofs->isr, USART_SR_TC);
++		stm32_usart_clr_bits(port, ofs->isr, USART_SR_TC);
+ 	else
+ 		writel_relaxed(USART_ICR_TCCF, port->membase + ofs->icr);
+ 
+ 	if (stm32_port->tx_ch)
+-		stm32_transmit_chars_dma(port);
++		stm32_usart_transmit_chars_dma(port);
+ 	else
+-		stm32_transmit_chars_pio(port);
++		stm32_usart_transmit_chars_pio(port);
+ 
+ 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ 		uart_write_wakeup(port);
+ 
+ 	if (uart_circ_empty(xmit))
+-		stm32_tx_interrupt_disable(port);
++		stm32_usart_tx_interrupt_disable(port);
+ }
+ 
+-static irqreturn_t stm32_interrupt(int irq, void *ptr)
++static irqreturn_t stm32_usart_interrupt(int irq, void *ptr)
+ {
+ 	struct uart_port *port = ptr;
++	struct tty_port *tport = &port->state->port;
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 	u32 sr;
+ 
+-	spin_lock(&port->lock);
+-
+ 	sr = readl_relaxed(port->membase + ofs->isr);
+ 
+ 	if ((sr & USART_SR_RTOF) && ofs->icr != UNDEF_REG)
+ 		writel_relaxed(USART_ICR_RTOCF,
+ 			       port->membase + ofs->icr);
+ 
+-	if ((sr & USART_SR_WUF) && (ofs->icr != UNDEF_REG))
++	if ((sr & USART_SR_WUF) && ofs->icr != UNDEF_REG) {
++		/* Clear wake up flag and disable wake up interrupt */
+ 		writel_relaxed(USART_ICR_WUCF,
+ 			       port->membase + ofs->icr);
++		stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_WUFIE);
++		if (irqd_is_wakeup_set(irq_get_irq_data(port->irq)))
++			pm_wakeup_event(tport->tty->dev, 0);
++	}
+ 
+ 	if ((sr & USART_SR_RXNE) && !(stm32_port->rx_ch))
+-		stm32_receive_chars(port, false);
+-
+-	if ((sr & USART_SR_TXE) && !(stm32_port->tx_ch))
+-		stm32_transmit_chars(port);
++		stm32_usart_receive_chars(port, false);
+ 
+-	spin_unlock(&port->lock);
++	if ((sr & USART_SR_TXE) && !(stm32_port->tx_ch)) {
++		spin_lock(&port->lock);
++		stm32_usart_transmit_chars(port);
++		spin_unlock(&port->lock);
++	}
+ 
+ 	if (stm32_port->rx_ch)
+ 		return IRQ_WAKE_THREAD;
+@@ -482,43 +498,42 @@ static irqreturn_t stm32_interrupt(int irq, void *ptr)
+ 		return IRQ_HANDLED;
+ }
+ 
+-static irqreturn_t stm32_threaded_interrupt(int irq, void *ptr)
++static irqreturn_t stm32_usart_threaded_interrupt(int irq, void *ptr)
+ {
+ 	struct uart_port *port = ptr;
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 
+-	spin_lock(&port->lock);
+-
+ 	if (stm32_port->rx_ch)
+-		stm32_receive_chars(port, true);
+-
+-	spin_unlock(&port->lock);
++		stm32_usart_receive_chars(port, true);
+ 
+ 	return IRQ_HANDLED;
+ }
+ 
+-static unsigned int stm32_tx_empty(struct uart_port *port)
++static unsigned int stm32_usart_tx_empty(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 
+-	return readl_relaxed(port->membase + ofs->isr) & USART_SR_TXE;
++	if (readl_relaxed(port->membase + ofs->isr) & USART_SR_TC)
++		return TIOCSER_TEMT;
++
++	return 0;
+ }
+ 
+-static void stm32_set_mctrl(struct uart_port *port, unsigned int mctrl)
++static void stm32_usart_set_mctrl(struct uart_port *port, unsigned int mctrl)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 
+ 	if ((mctrl & TIOCM_RTS) && (port->status & UPSTAT_AUTORTS))
+-		stm32_set_bits(port, ofs->cr3, USART_CR3_RTSE);
++		stm32_usart_set_bits(port, ofs->cr3, USART_CR3_RTSE);
+ 	else
+-		stm32_clr_bits(port, ofs->cr3, USART_CR3_RTSE);
++		stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_RTSE);
+ 
+ 	mctrl_gpio_set(stm32_port->gpios, mctrl);
+ }
+ 
+-static unsigned int stm32_get_mctrl(struct uart_port *port)
++static unsigned int stm32_usart_get_mctrl(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 	unsigned int ret;
+@@ -529,23 +544,23 @@ static unsigned int stm32_get_mctrl(struct uart_port *port)
+ 	return mctrl_gpio_get(stm32_port->gpios, &ret);
+ }
+ 
+-static void stm32_enable_ms(struct uart_port *port)
++static void stm32_usart_enable_ms(struct uart_port *port)
+ {
+ 	mctrl_gpio_enable_ms(to_stm32_port(port)->gpios);
+ }
+ 
+-static void stm32_disable_ms(struct uart_port *port)
++static void stm32_usart_disable_ms(struct uart_port *port)
+ {
+ 	mctrl_gpio_disable_ms(to_stm32_port(port)->gpios);
+ }
+ 
+ /* Transmit stop */
+-static void stm32_stop_tx(struct uart_port *port)
++static void stm32_usart_stop_tx(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 	struct serial_rs485 *rs485conf = &port->rs485;
+ 
+-	stm32_tx_interrupt_disable(port);
++	stm32_usart_tx_interrupt_disable(port);
+ 
+ 	if (rs485conf->flags & SER_RS485_ENABLED) {
+ 		if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
+@@ -559,7 +574,7 @@ static void stm32_stop_tx(struct uart_port *port)
+ }
+ 
+ /* There are probably characters waiting to be transmitted. */
+-static void stm32_start_tx(struct uart_port *port)
++static void stm32_usart_start_tx(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 	struct serial_rs485 *rs485conf = &port->rs485;
+@@ -578,102 +593,91 @@ static void stm32_start_tx(struct uart_port *port)
+ 		}
+ 	}
+ 
+-	stm32_transmit_chars(port);
++	stm32_usart_transmit_chars(port);
+ }
+ 
+ /* Throttle the remote when input buffer is about to overflow. */
+-static void stm32_throttle(struct uart_port *port)
++static void stm32_usart_throttle(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&port->lock, flags);
+-	stm32_clr_bits(port, ofs->cr1, stm32_port->cr1_irq);
++	stm32_usart_clr_bits(port, ofs->cr1, stm32_port->cr1_irq);
+ 	if (stm32_port->cr3_irq)
+-		stm32_clr_bits(port, ofs->cr3, stm32_port->cr3_irq);
++		stm32_usart_clr_bits(port, ofs->cr3, stm32_port->cr3_irq);
+ 
+ 	spin_unlock_irqrestore(&port->lock, flags);
+ }
+ 
+ /* Unthrottle the remote, the input buffer can now accept data. */
+-static void stm32_unthrottle(struct uart_port *port)
++static void stm32_usart_unthrottle(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&port->lock, flags);
+-	stm32_set_bits(port, ofs->cr1, stm32_port->cr1_irq);
++	stm32_usart_set_bits(port, ofs->cr1, stm32_port->cr1_irq);
+ 	if (stm32_port->cr3_irq)
+-		stm32_set_bits(port, ofs->cr3, stm32_port->cr3_irq);
++		stm32_usart_set_bits(port, ofs->cr3, stm32_port->cr3_irq);
+ 
+ 	spin_unlock_irqrestore(&port->lock, flags);
+ }
+ 
+ /* Receive stop */
+-static void stm32_stop_rx(struct uart_port *port)
++static void stm32_usart_stop_rx(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 
+-	stm32_clr_bits(port, ofs->cr1, stm32_port->cr1_irq);
++	stm32_usart_clr_bits(port, ofs->cr1, stm32_port->cr1_irq);
+ 	if (stm32_port->cr3_irq)
+-		stm32_clr_bits(port, ofs->cr3, stm32_port->cr3_irq);
+-
++		stm32_usart_clr_bits(port, ofs->cr3, stm32_port->cr3_irq);
+ }
+ 
+ /* Handle breaks - ignored by us */
+-static void stm32_break_ctl(struct uart_port *port, int break_state)
++static void stm32_usart_break_ctl(struct uart_port *port, int break_state)
+ {
+ }
+ 
+-static int stm32_startup(struct uart_port *port)
++static int stm32_usart_startup(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_config *cfg = &stm32_port->info->cfg;
+ 	const char *name = to_platform_device(port->dev)->name;
+ 	u32 val;
+ 	int ret;
+ 
+-	ret = request_threaded_irq(port->irq, stm32_interrupt,
+-				   stm32_threaded_interrupt,
++	ret = request_threaded_irq(port->irq, stm32_usart_interrupt,
++				   stm32_usart_threaded_interrupt,
+ 				   IRQF_NO_SUSPEND, name, port);
+ 	if (ret)
+ 		return ret;
+ 
+ 	/* RX FIFO Flush */
+ 	if (ofs->rqr != UNDEF_REG)
+-		stm32_set_bits(port, ofs->rqr, USART_RQR_RXFRQ);
++		writel_relaxed(USART_RQR_RXFRQ, port->membase + ofs->rqr);
+ 
+-	/* Tx and RX FIFO configuration */
+-	if (stm32_port->fifoen) {
+-		val = readl_relaxed(port->membase + ofs->cr3);
+-		val &= ~(USART_CR3_TXFTCFG_MASK | USART_CR3_RXFTCFG_MASK);
+-		val |= USART_CR3_TXFTCFG_HALF << USART_CR3_TXFTCFG_SHIFT;
+-		val |= USART_CR3_RXFTCFG_HALF << USART_CR3_RXFTCFG_SHIFT;
+-		writel_relaxed(val, port->membase + ofs->cr3);
+-	}
+-
+-	/* RX FIFO enabling */
+-	val = stm32_port->cr1_irq | USART_CR1_RE;
+-	if (stm32_port->fifoen)
+-		val |= USART_CR1_FIFOEN;
+-	stm32_set_bits(port, ofs->cr1, val);
++	/* RX enabling */
++	val = stm32_port->cr1_irq | USART_CR1_RE | BIT(cfg->uart_enable_bit);
++	stm32_usart_set_bits(port, ofs->cr1, val);
+ 
+ 	return 0;
+ }
+ 
+-static void stm32_shutdown(struct uart_port *port)
++static void stm32_usart_shutdown(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+-	struct stm32_usart_config *cfg = &stm32_port->info->cfg;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_config *cfg = &stm32_port->info->cfg;
+ 	u32 val, isr;
+ 	int ret;
+ 
+ 	/* Disable modem control interrupts */
+-	stm32_disable_ms(port);
++	stm32_usart_disable_ms(port);
+ 
+ 	val = USART_CR1_TXEIE | USART_CR1_TE;
+ 	val |= stm32_port->cr1_irq | USART_CR1_RE;
+@@ -688,12 +692,17 @@ static void stm32_shutdown(struct uart_port *port)
+ 	if (ret)
+ 		dev_err(port->dev, "transmission complete not set\n");
+ 
+-	stm32_clr_bits(port, ofs->cr1, val);
++	/* flush RX & TX FIFO */
++	if (ofs->rqr != UNDEF_REG)
++		writel_relaxed(USART_RQR_TXFRQ | USART_RQR_RXFRQ,
++			       port->membase + ofs->rqr);
++
++	stm32_usart_clr_bits(port, ofs->cr1, val);
+ 
+ 	free_irq(port->irq, port);
+ }
+ 
+-static unsigned int stm32_get_databits(struct ktermios *termios)
++static unsigned int stm32_usart_get_databits(struct ktermios *termios)
+ {
+ 	unsigned int bits;
+ 
+@@ -723,18 +732,20 @@ static unsigned int stm32_get_databits(struct ktermios *termios)
+ 	return bits;
+ }
+ 
+-static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
+-			    struct ktermios *old)
++static void stm32_usart_set_termios(struct uart_port *port,
++				    struct ktermios *termios,
++				    struct ktermios *old)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+-	struct stm32_usart_config *cfg = &stm32_port->info->cfg;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_config *cfg = &stm32_port->info->cfg;
+ 	struct serial_rs485 *rs485conf = &port->rs485;
+ 	unsigned int baud, bits;
+ 	u32 usartdiv, mantissa, fraction, oversampling;
+ 	tcflag_t cflag = termios->c_cflag;
+-	u32 cr1, cr2, cr3;
++	u32 cr1, cr2, cr3, isr;
+ 	unsigned long flags;
++	int ret;
+ 
+ 	if (!stm32_port->hw_flow_control)
+ 		cflag &= ~CRTSCTS;
+@@ -743,26 +754,41 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
+ 
+ 	spin_lock_irqsave(&port->lock, flags);
+ 
++	ret = readl_relaxed_poll_timeout_atomic(port->membase + ofs->isr,
++						isr,
++						(isr & USART_SR_TC),
++						10, 100000);
++
++	/* Send the TC error message only when ISR_TC is not set. */
++	if (ret)
++		dev_err(port->dev, "Transmission is not complete\n");
++
+ 	/* Stop serial port and reset value */
+ 	writel_relaxed(0, port->membase + ofs->cr1);
+ 
+ 	/* flush RX & TX FIFO */
+ 	if (ofs->rqr != UNDEF_REG)
+-		stm32_set_bits(port, ofs->rqr,
+-			       USART_RQR_TXFRQ | USART_RQR_RXFRQ);
++		writel_relaxed(USART_RQR_TXFRQ | USART_RQR_RXFRQ,
++			       port->membase + ofs->rqr);
+ 
+ 	cr1 = USART_CR1_TE | USART_CR1_RE;
+ 	if (stm32_port->fifoen)
+ 		cr1 |= USART_CR1_FIFOEN;
+ 	cr2 = 0;
++
++	/* Tx and RX FIFO configuration */
+ 	cr3 = readl_relaxed(port->membase + ofs->cr3);
+-	cr3 &= USART_CR3_TXFTIE | USART_CR3_RXFTCFG_MASK | USART_CR3_RXFTIE
+-		| USART_CR3_TXFTCFG_MASK;
++	cr3 &= USART_CR3_TXFTIE | USART_CR3_RXFTIE;
++	if (stm32_port->fifoen) {
++		cr3 &= ~(USART_CR3_TXFTCFG_MASK | USART_CR3_RXFTCFG_MASK);
++		cr3 |= USART_CR3_TXFTCFG_HALF << USART_CR3_TXFTCFG_SHIFT;
++		cr3 |= USART_CR3_RXFTCFG_HALF << USART_CR3_RXFTCFG_SHIFT;
++	}
+ 
+ 	if (cflag & CSTOPB)
+ 		cr2 |= USART_CR2_STOP_2B;
+ 
+-	bits = stm32_get_databits(termios);
++	bits = stm32_usart_get_databits(termios);
+ 	stm32_port->rdr_mask = (BIT(bits) - 1);
+ 
+ 	if (cflag & PARENB) {
+@@ -813,12 +839,6 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
+ 		cr3 |= USART_CR3_CTSE | USART_CR3_RTSE;
+ 	}
+ 
+-	/* Handle modem control interrupts */
+-	if (UART_ENABLE_MS(port, termios->c_cflag))
+-		stm32_enable_ms(port);
+-	else
+-		stm32_disable_ms(port);
+-
+ 	usartdiv = DIV_ROUND_CLOSEST(port->uartclk, baud);
+ 
+ 	/*
+@@ -830,11 +850,11 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	if (usartdiv < 16) {
+ 		oversampling = 8;
+ 		cr1 |= USART_CR1_OVER8;
+-		stm32_set_bits(port, ofs->cr1, USART_CR1_OVER8);
++		stm32_usart_set_bits(port, ofs->cr1, USART_CR1_OVER8);
+ 	} else {
+ 		oversampling = 16;
+ 		cr1 &= ~USART_CR1_OVER8;
+-		stm32_clr_bits(port, ofs->cr1, USART_CR1_OVER8);
++		stm32_usart_clr_bits(port, ofs->cr1, USART_CR1_OVER8);
+ 	}
+ 
+ 	mantissa = (usartdiv / oversampling) << USART_BRR_DIV_M_SHIFT;
+@@ -871,9 +891,10 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
+ 		cr3 |= USART_CR3_DMAR;
+ 
+ 	if (rs485conf->flags & SER_RS485_ENABLED) {
+-		stm32_config_reg_rs485(&cr1, &cr3,
+-				       rs485conf->delay_rts_before_send,
+-				       rs485conf->delay_rts_after_send, baud);
++		stm32_usart_config_reg_rs485(&cr1, &cr3,
++					     rs485conf->delay_rts_before_send,
++					     rs485conf->delay_rts_after_send,
++					     baud);
+ 		if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
+ 			cr3 &= ~USART_CR3_DEP;
+ 			rs485conf->flags &= ~SER_RS485_RTS_AFTER_SEND;
+@@ -887,48 +908,60 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
+ 		cr1 &= ~(USART_CR1_DEDT_MASK | USART_CR1_DEAT_MASK);
+ 	}
+ 
++	/* Configure wake up from low power on start bit detection */
++	if (stm32_port->wakeirq > 0) {
++		cr3 &= ~USART_CR3_WUS_MASK;
++		cr3 |= USART_CR3_WUS_START_BIT;
++	}
++
+ 	writel_relaxed(cr3, port->membase + ofs->cr3);
+ 	writel_relaxed(cr2, port->membase + ofs->cr2);
+ 	writel_relaxed(cr1, port->membase + ofs->cr1);
+ 
+-	stm32_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
++	stm32_usart_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+ 	spin_unlock_irqrestore(&port->lock, flags);
++
++	/* Handle modem control interrupts */
++	if (UART_ENABLE_MS(port, termios->c_cflag))
++		stm32_usart_enable_ms(port);
++	else
++		stm32_usart_disable_ms(port);
+ }
+ 
+-static const char *stm32_type(struct uart_port *port)
++static const char *stm32_usart_type(struct uart_port *port)
+ {
+ 	return (port->type == PORT_STM32) ? DRIVER_NAME : NULL;
+ }
+ 
+-static void stm32_release_port(struct uart_port *port)
++static void stm32_usart_release_port(struct uart_port *port)
+ {
+ }
+ 
+-static int stm32_request_port(struct uart_port *port)
++static int stm32_usart_request_port(struct uart_port *port)
+ {
+ 	return 0;
+ }
+ 
+-static void stm32_config_port(struct uart_port *port, int flags)
++static void stm32_usart_config_port(struct uart_port *port, int flags)
+ {
+ 	if (flags & UART_CONFIG_TYPE)
+ 		port->type = PORT_STM32;
+ }
+ 
+ static int
+-stm32_verify_port(struct uart_port *port, struct serial_struct *ser)
++stm32_usart_verify_port(struct uart_port *port, struct serial_struct *ser)
+ {
+ 	/* No user changeable parameters */
+ 	return -EINVAL;
+ }
+ 
+-static void stm32_pm(struct uart_port *port, unsigned int state,
+-		unsigned int oldstate)
++static void stm32_usart_pm(struct uart_port *port, unsigned int state,
++			   unsigned int oldstate)
+ {
+ 	struct stm32_port *stm32port = container_of(port,
+ 			struct stm32_port, port);
+-	struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
+-	struct stm32_usart_config *cfg = &stm32port->info->cfg;
++	const struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
++	const struct stm32_usart_config *cfg = &stm32port->info->cfg;
+ 	unsigned long flags = 0;
+ 
+ 	switch (state) {
+@@ -937,7 +970,7 @@ static void stm32_pm(struct uart_port *port, unsigned int state,
+ 		break;
+ 	case UART_PM_STATE_OFF:
+ 		spin_lock_irqsave(&port->lock, flags);
+-		stm32_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
++		stm32_usart_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+ 		spin_unlock_irqrestore(&port->lock, flags);
+ 		pm_runtime_put_sync(port->dev);
+ 		break;
+@@ -945,49 +978,48 @@ static void stm32_pm(struct uart_port *port, unsigned int state,
+ }
+ 
+ static const struct uart_ops stm32_uart_ops = {
+-	.tx_empty	= stm32_tx_empty,
+-	.set_mctrl	= stm32_set_mctrl,
+-	.get_mctrl	= stm32_get_mctrl,
+-	.stop_tx	= stm32_stop_tx,
+-	.start_tx	= stm32_start_tx,
+-	.throttle	= stm32_throttle,
+-	.unthrottle	= stm32_unthrottle,
+-	.stop_rx	= stm32_stop_rx,
+-	.enable_ms	= stm32_enable_ms,
+-	.break_ctl	= stm32_break_ctl,
+-	.startup	= stm32_startup,
+-	.shutdown	= stm32_shutdown,
+-	.set_termios	= stm32_set_termios,
+-	.pm		= stm32_pm,
+-	.type		= stm32_type,
+-	.release_port	= stm32_release_port,
+-	.request_port	= stm32_request_port,
+-	.config_port	= stm32_config_port,
+-	.verify_port	= stm32_verify_port,
++	.tx_empty	= stm32_usart_tx_empty,
++	.set_mctrl	= stm32_usart_set_mctrl,
++	.get_mctrl	= stm32_usart_get_mctrl,
++	.stop_tx	= stm32_usart_stop_tx,
++	.start_tx	= stm32_usart_start_tx,
++	.throttle	= stm32_usart_throttle,
++	.unthrottle	= stm32_usart_unthrottle,
++	.stop_rx	= stm32_usart_stop_rx,
++	.enable_ms	= stm32_usart_enable_ms,
++	.break_ctl	= stm32_usart_break_ctl,
++	.startup	= stm32_usart_startup,
++	.shutdown	= stm32_usart_shutdown,
++	.set_termios	= stm32_usart_set_termios,
++	.pm		= stm32_usart_pm,
++	.type		= stm32_usart_type,
++	.release_port	= stm32_usart_release_port,
++	.request_port	= stm32_usart_request_port,
++	.config_port	= stm32_usart_config_port,
++	.verify_port	= stm32_usart_verify_port,
+ };
+ 
+-static int stm32_init_port(struct stm32_port *stm32port,
+-			  struct platform_device *pdev)
++static int stm32_usart_init_port(struct stm32_port *stm32port,
++				 struct platform_device *pdev)
+ {
+ 	struct uart_port *port = &stm32port->port;
+ 	struct resource *res;
+ 	int ret;
+ 
++	ret = platform_get_irq(pdev, 0);
++	if (ret <= 0)
++		return ret ? : -ENODEV;
++
+ 	port->iotype	= UPIO_MEM;
+ 	port->flags	= UPF_BOOT_AUTOCONF;
+ 	port->ops	= &stm32_uart_ops;
+ 	port->dev	= &pdev->dev;
+ 	port->fifosize	= stm32port->info->cfg.fifosize;
+ 	port->has_sysrq = IS_ENABLED(CONFIG_SERIAL_STM32_CONSOLE);
+-
+-	ret = platform_get_irq(pdev, 0);
+-	if (ret <= 0)
+-		return ret ? : -ENODEV;
+ 	port->irq = ret;
++	port->rs485_config = stm32_usart_config_rs485;
+ 
+-	port->rs485_config = stm32_config_rs485;
+-
+-	ret = stm32_init_rs485(port, pdev);
++	ret = stm32_usart_init_rs485(port, pdev);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1046,7 +1078,7 @@ err_clk:
+ 	return ret;
+ }
+ 
+-static struct stm32_port *stm32_of_get_stm32_port(struct platform_device *pdev)
++static struct stm32_port *stm32_usart_of_get_port(struct platform_device *pdev)
+ {
+ 	struct device_node *np = pdev->dev.of_node;
+ 	int id;
+@@ -1084,10 +1116,10 @@ static const struct of_device_id stm32_match[] = {
+ MODULE_DEVICE_TABLE(of, stm32_match);
+ #endif
+ 
+-static int stm32_of_dma_rx_probe(struct stm32_port *stm32port,
+-				 struct platform_device *pdev)
++static int stm32_usart_of_dma_rx_probe(struct stm32_port *stm32port,
++				       struct platform_device *pdev)
+ {
+-	struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
+ 	struct uart_port *port = &stm32port->port;
+ 	struct device *dev = &pdev->dev;
+ 	struct dma_slave_config config;
+@@ -1101,8 +1133,8 @@ static int stm32_of_dma_rx_probe(struct stm32_port *stm32port,
+ 		return -ENODEV;
+ 	}
+ 	stm32port->rx_buf = dma_alloc_coherent(&pdev->dev, RX_BUF_L,
+-						 &stm32port->rx_dma_buf,
+-						 GFP_KERNEL);
++					       &stm32port->rx_dma_buf,
++					       GFP_KERNEL);
+ 	if (!stm32port->rx_buf) {
+ 		ret = -ENOMEM;
+ 		goto alloc_err;
+@@ -1159,10 +1191,10 @@ alloc_err:
+ 	return ret;
+ }
+ 
+-static int stm32_of_dma_tx_probe(struct stm32_port *stm32port,
+-				 struct platform_device *pdev)
++static int stm32_usart_of_dma_tx_probe(struct stm32_port *stm32port,
++				       struct platform_device *pdev)
+ {
+-	struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
+ 	struct uart_port *port = &stm32port->port;
+ 	struct device *dev = &pdev->dev;
+ 	struct dma_slave_config config;
+@@ -1177,8 +1209,8 @@ static int stm32_of_dma_tx_probe(struct stm32_port *stm32port,
+ 		return -ENODEV;
+ 	}
+ 	stm32port->tx_buf = dma_alloc_coherent(&pdev->dev, TX_BUF_L,
+-						 &stm32port->tx_dma_buf,
+-						 GFP_KERNEL);
++					       &stm32port->tx_dma_buf,
++					       GFP_KERNEL);
+ 	if (!stm32port->tx_buf) {
+ 		ret = -ENOMEM;
+ 		goto alloc_err;
+@@ -1210,23 +1242,20 @@ alloc_err:
+ 	return ret;
+ }
+ 
+-static int stm32_serial_probe(struct platform_device *pdev)
++static int stm32_usart_serial_probe(struct platform_device *pdev)
+ {
+-	const struct of_device_id *match;
+ 	struct stm32_port *stm32port;
+ 	int ret;
+ 
+-	stm32port = stm32_of_get_stm32_port(pdev);
++	stm32port = stm32_usart_of_get_port(pdev);
+ 	if (!stm32port)
+ 		return -ENODEV;
+ 
+-	match = of_match_device(stm32_match, &pdev->dev);
+-	if (match && match->data)
+-		stm32port->info = (struct stm32_usart_info *)match->data;
+-	else
++	stm32port->info = of_device_get_match_data(&pdev->dev);
++	if (!stm32port->info)
+ 		return -EINVAL;
+ 
+-	ret = stm32_init_port(stm32port, pdev);
++	ret = stm32_usart_init_port(stm32port, pdev);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1243,15 +1272,11 @@ static int stm32_serial_probe(struct platform_device *pdev)
+ 		device_set_wakeup_enable(&pdev->dev, false);
+ 	}
+ 
+-	ret = uart_add_one_port(&stm32_usart_driver, &stm32port->port);
+-	if (ret)
+-		goto err_wirq;
+-
+-	ret = stm32_of_dma_rx_probe(stm32port, pdev);
++	ret = stm32_usart_of_dma_rx_probe(stm32port, pdev);
+ 	if (ret)
+ 		dev_info(&pdev->dev, "interrupt mode used for rx (no dma)\n");
+ 
+-	ret = stm32_of_dma_tx_probe(stm32port, pdev);
++	ret = stm32_usart_of_dma_tx_probe(stm32port, pdev);
+ 	if (ret)
+ 		dev_info(&pdev->dev, "interrupt mode used for tx (no dma)\n");
+ 
+@@ -1260,11 +1285,40 @@ static int stm32_serial_probe(struct platform_device *pdev)
+ 	pm_runtime_get_noresume(&pdev->dev);
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
++
++	ret = uart_add_one_port(&stm32_usart_driver, &stm32port->port);
++	if (ret)
++		goto err_port;
++
+ 	pm_runtime_put_sync(&pdev->dev);
+ 
+ 	return 0;
+ 
+-err_wirq:
++err_port:
++	pm_runtime_disable(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
++
++	if (stm32port->rx_ch) {
++		dmaengine_terminate_async(stm32port->rx_ch);
++		dma_release_channel(stm32port->rx_ch);
++	}
++
++	if (stm32port->rx_dma_buf)
++		dma_free_coherent(&pdev->dev,
++				  RX_BUF_L, stm32port->rx_buf,
++				  stm32port->rx_dma_buf);
++
++	if (stm32port->tx_ch) {
++		dmaengine_terminate_async(stm32port->tx_ch);
++		dma_release_channel(stm32port->tx_ch);
++	}
++
++	if (stm32port->tx_dma_buf)
++		dma_free_coherent(&pdev->dev,
++				  TX_BUF_L, stm32port->tx_buf,
++				  stm32port->tx_dma_buf);
++
+ 	if (stm32port->wakeirq > 0)
+ 		dev_pm_clear_wake_irq(&pdev->dev);
+ 
+@@ -1278,29 +1332,40 @@ err_uninit:
+ 	return ret;
+ }
+ 
+-static int stm32_serial_remove(struct platform_device *pdev)
++static int stm32_usart_serial_remove(struct platform_device *pdev)
+ {
+ 	struct uart_port *port = platform_get_drvdata(pdev);
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 	int err;
+ 
+ 	pm_runtime_get_sync(&pdev->dev);
++	err = uart_remove_one_port(&stm32_usart_driver, port);
++	if (err)
++		return(err);
+ 
+-	stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAR);
++	pm_runtime_disable(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
+ 
+-	if (stm32_port->rx_ch)
++	stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAR);
++
++	if (stm32_port->rx_ch) {
++		dmaengine_terminate_async(stm32_port->rx_ch);
+ 		dma_release_channel(stm32_port->rx_ch);
++	}
+ 
+ 	if (stm32_port->rx_dma_buf)
+ 		dma_free_coherent(&pdev->dev,
+ 				  RX_BUF_L, stm32_port->rx_buf,
+ 				  stm32_port->rx_dma_buf);
+ 
+-	stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
++	stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
+ 
+-	if (stm32_port->tx_ch)
++	if (stm32_port->tx_ch) {
++		dmaengine_terminate_async(stm32_port->tx_ch);
+ 		dma_release_channel(stm32_port->tx_ch);
++	}
+ 
+ 	if (stm32_port->tx_dma_buf)
+ 		dma_free_coherent(&pdev->dev,
+@@ -1314,20 +1379,14 @@ static int stm32_serial_remove(struct platform_device *pdev)
+ 
+ 	clk_disable_unprepare(stm32_port->clk);
+ 
+-	err = uart_remove_one_port(&stm32_usart_driver, port);
+-
+-	pm_runtime_disable(&pdev->dev);
+-	pm_runtime_put_noidle(&pdev->dev);
+-
+-	return err;
++	return 0;
+ }
+ 
+-
+ #ifdef CONFIG_SERIAL_STM32_CONSOLE
+-static void stm32_console_putchar(struct uart_port *port, int ch)
++static void stm32_usart_console_putchar(struct uart_port *port, int ch)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 
+ 	while (!(readl_relaxed(port->membase + ofs->isr) & USART_SR_TXE))
+ 		cpu_relax();
+@@ -1335,12 +1394,13 @@ static void stm32_console_putchar(struct uart_port *port, int ch)
+ 	writel_relaxed(ch, port->membase + ofs->tdr);
+ }
+ 
+-static void stm32_console_write(struct console *co, const char *s, unsigned cnt)
++static void stm32_usart_console_write(struct console *co, const char *s,
++				      unsigned int cnt)
+ {
+ 	struct uart_port *port = &stm32_ports[co->index].port;
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+-	struct stm32_usart_config *cfg = &stm32_port->info->cfg;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_config *cfg = &stm32_port->info->cfg;
+ 	unsigned long flags;
+ 	u32 old_cr1, new_cr1;
+ 	int locked = 1;
+@@ -1359,7 +1419,7 @@ static void stm32_console_write(struct console *co, const char *s, unsigned cnt)
+ 	new_cr1 |=  USART_CR1_TE | BIT(cfg->uart_enable_bit);
+ 	writel_relaxed(new_cr1, port->membase + ofs->cr1);
+ 
+-	uart_console_write(port, s, cnt, stm32_console_putchar);
++	uart_console_write(port, s, cnt, stm32_usart_console_putchar);
+ 
+ 	/* Restore interrupt state */
+ 	writel_relaxed(old_cr1, port->membase + ofs->cr1);
+@@ -1369,7 +1429,7 @@ static void stm32_console_write(struct console *co, const char *s, unsigned cnt)
+ 	local_irq_restore(flags);
+ }
+ 
+-static int stm32_console_setup(struct console *co, char *options)
++static int stm32_usart_console_setup(struct console *co, char *options)
+ {
+ 	struct stm32_port *stm32port;
+ 	int baud = 9600;
+@@ -1388,7 +1448,7 @@ static int stm32_console_setup(struct console *co, char *options)
+ 	 * this to be called during the uart port registration when the
+ 	 * driver gets probed and the port should be mapped at that point.
+ 	 */
+-	if (stm32port->port.mapbase == 0 || stm32port->port.membase == NULL)
++	if (stm32port->port.mapbase == 0 || !stm32port->port.membase)
+ 		return -ENXIO;
+ 
+ 	if (options)
+@@ -1400,8 +1460,8 @@ static int stm32_console_setup(struct console *co, char *options)
+ static struct console stm32_console = {
+ 	.name		= STM32_SERIAL_NAME,
+ 	.device		= uart_console_device,
+-	.write		= stm32_console_write,
+-	.setup		= stm32_console_setup,
++	.write		= stm32_usart_console_write,
++	.setup		= stm32_usart_console_setup,
+ 	.flags		= CON_PRINTBUFFER,
+ 	.index		= -1,
+ 	.data		= &stm32_usart_driver,
+@@ -1422,41 +1482,38 @@ static struct uart_driver stm32_usart_driver = {
+ 	.cons		= STM32_SERIAL_CONSOLE,
+ };
+ 
+-static void __maybe_unused stm32_serial_enable_wakeup(struct uart_port *port,
+-						      bool enable)
++static void __maybe_unused stm32_usart_serial_en_wakeup(struct uart_port *port,
++							bool enable)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+-	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+-	struct stm32_usart_config *cfg = &stm32_port->info->cfg;
+-	u32 val;
++	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 
+ 	if (stm32_port->wakeirq <= 0)
+ 		return;
+ 
++	/*
++	 * Enable low-power wake-up and wake-up irq if argument is set to
++	 * "enable", disable low-power wake-up and wake-up irq otherwise
++	 */
+ 	if (enable) {
+-		stm32_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+-		stm32_set_bits(port, ofs->cr1, USART_CR1_UESM);
+-		val = readl_relaxed(port->membase + ofs->cr3);
+-		val &= ~USART_CR3_WUS_MASK;
+-		/* Enable Wake up interrupt from low power on start bit */
+-		val |= USART_CR3_WUS_START_BIT | USART_CR3_WUFIE;
+-		writel_relaxed(val, port->membase + ofs->cr3);
+-		stm32_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
++		stm32_usart_set_bits(port, ofs->cr1, USART_CR1_UESM);
++		stm32_usart_set_bits(port, ofs->cr3, USART_CR3_WUFIE);
+ 	} else {
+-		stm32_clr_bits(port, ofs->cr1, USART_CR1_UESM);
++		stm32_usart_clr_bits(port, ofs->cr1, USART_CR1_UESM);
++		stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_WUFIE);
+ 	}
+ }
+ 
+-static int __maybe_unused stm32_serial_suspend(struct device *dev)
++static int __maybe_unused stm32_usart_serial_suspend(struct device *dev)
+ {
+ 	struct uart_port *port = dev_get_drvdata(dev);
+ 
+ 	uart_suspend_port(&stm32_usart_driver, port);
+ 
+ 	if (device_may_wakeup(dev))
+-		stm32_serial_enable_wakeup(port, true);
++		stm32_usart_serial_en_wakeup(port, true);
+ 	else
+-		stm32_serial_enable_wakeup(port, false);
++		stm32_usart_serial_en_wakeup(port, false);
+ 
+ 	/*
+ 	 * When "no_console_suspend" is enabled, keep the pinctrl default state
+@@ -1474,19 +1531,19 @@ static int __maybe_unused stm32_serial_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int __maybe_unused stm32_serial_resume(struct device *dev)
++static int __maybe_unused stm32_usart_serial_resume(struct device *dev)
+ {
+ 	struct uart_port *port = dev_get_drvdata(dev);
+ 
+ 	pinctrl_pm_select_default_state(dev);
+ 
+ 	if (device_may_wakeup(dev))
+-		stm32_serial_enable_wakeup(port, false);
++		stm32_usart_serial_en_wakeup(port, false);
+ 
+ 	return uart_resume_port(&stm32_usart_driver, port);
+ }
+ 
+-static int __maybe_unused stm32_serial_runtime_suspend(struct device *dev)
++static int __maybe_unused stm32_usart_runtime_suspend(struct device *dev)
+ {
+ 	struct uart_port *port = dev_get_drvdata(dev);
+ 	struct stm32_port *stm32port = container_of(port,
+@@ -1497,7 +1554,7 @@ static int __maybe_unused stm32_serial_runtime_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int __maybe_unused stm32_serial_runtime_resume(struct device *dev)
++static int __maybe_unused stm32_usart_runtime_resume(struct device *dev)
+ {
+ 	struct uart_port *port = dev_get_drvdata(dev);
+ 	struct stm32_port *stm32port = container_of(port,
+@@ -1507,14 +1564,15 @@ static int __maybe_unused stm32_serial_runtime_resume(struct device *dev)
+ }
+ 
+ static const struct dev_pm_ops stm32_serial_pm_ops = {
+-	SET_RUNTIME_PM_OPS(stm32_serial_runtime_suspend,
+-			   stm32_serial_runtime_resume, NULL)
+-	SET_SYSTEM_SLEEP_PM_OPS(stm32_serial_suspend, stm32_serial_resume)
++	SET_RUNTIME_PM_OPS(stm32_usart_runtime_suspend,
++			   stm32_usart_runtime_resume, NULL)
++	SET_SYSTEM_SLEEP_PM_OPS(stm32_usart_serial_suspend,
++				stm32_usart_serial_resume)
+ };
+ 
+ static struct platform_driver stm32_serial_driver = {
+-	.probe		= stm32_serial_probe,
+-	.remove		= stm32_serial_remove,
++	.probe		= stm32_usart_serial_probe,
++	.remove		= stm32_usart_serial_remove,
+ 	.driver	= {
+ 		.name	= DRIVER_NAME,
+ 		.pm	= &stm32_serial_pm_ops,
+@@ -1522,7 +1580,7 @@ static struct platform_driver stm32_serial_driver = {
+ 	},
+ };
+ 
+-static int __init usart_init(void)
++static int __init stm32_usart_init(void)
+ {
+ 	static char banner[] __initdata = "STM32 USART driver initialized";
+ 	int ret;
+@@ -1540,14 +1598,14 @@ static int __init usart_init(void)
+ 	return ret;
+ }
+ 
+-static void __exit usart_exit(void)
++static void __exit stm32_usart_exit(void)
+ {
+ 	platform_driver_unregister(&stm32_serial_driver);
+ 	uart_unregister_driver(&stm32_usart_driver);
+ }
+ 
+-module_init(usart_init);
+-module_exit(usart_exit);
++module_init(stm32_usart_init);
++module_exit(stm32_usart_exit);
+ 
+ MODULE_ALIAS("platform:" DRIVER_NAME);
+ MODULE_DESCRIPTION("STMicroelectronics STM32 serial port driver");
+diff --git a/drivers/tty/serial/stm32-usart.h b/drivers/tty/serial/stm32-usart.h
+index d4c916e78d403..94b568aa46bbd 100644
+--- a/drivers/tty/serial/stm32-usart.h
++++ b/drivers/tty/serial/stm32-usart.h
+@@ -127,9 +127,6 @@ struct stm32_usart_info stm32h7_info = {
+ /* Dummy bits */
+ #define USART_SR_DUMMY_RX	BIT(16)
+ 
+-/* USART_ICR (F7) */
+-#define USART_CR_TC		BIT(6)
+-
+ /* USART_DR */
+ #define USART_DR_MASK		GENMASK(8, 0)
+ 
+@@ -259,7 +256,7 @@ struct stm32_usart_info stm32h7_info = {
+ struct stm32_port {
+ 	struct uart_port port;
+ 	struct clk *clk;
+-	struct stm32_usart_info *info;
++	const struct stm32_usart_info *info;
+ 	struct dma_chan *rx_ch;  /* dma rx channel            */
+ 	dma_addr_t rx_dma_buf;   /* dma rx buffer bus address */
+ 	unsigned char *rx_buf;   /* dma rx buffer cpu address */
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 146bd67115623..bc5314092aa4e 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -2492,14 +2492,14 @@ out:
+  *	@p: pointer to result
+  *
+  *	Obtain the modem status bits from the tty driver if the feature
+- *	is supported. Return -EINVAL if it is not available.
++ *	is supported. Return -ENOTTY if it is not available.
+  *
+  *	Locking: none (up to the driver)
+  */
+ 
+ static int tty_tiocmget(struct tty_struct *tty, int __user *p)
+ {
+-	int retval = -EINVAL;
++	int retval = -ENOTTY;
+ 
+ 	if (tty->ops->tiocmget) {
+ 		retval = tty->ops->tiocmget(tty);
+@@ -2517,7 +2517,7 @@ static int tty_tiocmget(struct tty_struct *tty, int __user *p)
+  *	@p: pointer to desired bits
+  *
+  *	Set the modem status bits from the tty driver if the feature
+- *	is supported. Return -EINVAL if it is not available.
++ *	is supported. Return -ENOTTY if it is not available.
+  *
+  *	Locking: none (up to the driver)
+  */
+@@ -2529,7 +2529,7 @@ static int tty_tiocmset(struct tty_struct *tty, unsigned int cmd,
+ 	unsigned int set, clear, val;
+ 
+ 	if (tty->ops->tiocmset == NULL)
+-		return -EINVAL;
++		return -ENOTTY;
+ 
+ 	retval = get_user(val, p);
+ 	if (retval)
+diff --git a/drivers/tty/tty_ioctl.c b/drivers/tty/tty_ioctl.c
+index e18f318586ab4..803da2d111c8c 100644
+--- a/drivers/tty/tty_ioctl.c
++++ b/drivers/tty/tty_ioctl.c
+@@ -443,51 +443,6 @@ static int get_termio(struct tty_struct *tty, struct termio __user *termio)
+ 	return 0;
+ }
+ 
+-
+-#ifdef TCGETX
+-
+-/**
+- *	set_termiox	-	set termiox fields if possible
+- *	@tty: terminal
+- *	@arg: termiox structure from user
+- *	@opt: option flags for ioctl type
+- *
+- *	Implement the device calling points for the SYS5 termiox ioctl
+- *	interface in Linux
+- */
+-
+-static int set_termiox(struct tty_struct *tty, void __user *arg, int opt)
+-{
+-	struct termiox tnew;
+-	struct tty_ldisc *ld;
+-
+-	if (tty->termiox == NULL)
+-		return -EINVAL;
+-	if (copy_from_user(&tnew, arg, sizeof(struct termiox)))
+-		return -EFAULT;
+-
+-	ld = tty_ldisc_ref(tty);
+-	if (ld != NULL) {
+-		if ((opt & TERMIOS_FLUSH) && ld->ops->flush_buffer)
+-			ld->ops->flush_buffer(tty);
+-		tty_ldisc_deref(ld);
+-	}
+-	if (opt & TERMIOS_WAIT) {
+-		tty_wait_until_sent(tty, 0);
+-		if (signal_pending(current))
+-			return -ERESTARTSYS;
+-	}
+-
+-	down_write(&tty->termios_rwsem);
+-	if (tty->ops->set_termiox)
+-		tty->ops->set_termiox(tty, &tnew);
+-	up_write(&tty->termios_rwsem);
+-	return 0;
+-}
+-
+-#endif
+-
+-
+ #ifdef TIOCGETP
+ /*
+  * These are deprecated, but there is limited support..
+@@ -815,24 +770,12 @@ int tty_mode_ioctl(struct tty_struct *tty, struct file *file,
+ 		return ret;
+ #endif
+ #ifdef TCGETX
+-	case TCGETX: {
+-		struct termiox ktermx;
+-		if (real_tty->termiox == NULL)
+-			return -EINVAL;
+-		down_read(&real_tty->termios_rwsem);
+-		memcpy(&ktermx, real_tty->termiox, sizeof(struct termiox));
+-		up_read(&real_tty->termios_rwsem);
+-		if (copy_to_user(p, &ktermx, sizeof(struct termiox)))
+-			ret = -EFAULT;
+-		return ret;
+-	}
++	case TCGETX:
+ 	case TCSETX:
+-		return set_termiox(real_tty, p, 0);
+ 	case TCSETXW:
+-		return set_termiox(real_tty, p, TERMIOS_WAIT);
+ 	case TCSETXF:
+-		return set_termiox(real_tty, p, TERMIOS_FLUSH);
+-#endif		
++		return -ENOTTY;
++#endif
+ 	case TIOCGSOFTCAR:
+ 		copy_termios(real_tty, &kterm);
+ 		ret = put_user((kterm.c_cflag & CLOCAL) ? 1 : 0,
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index bc035ba6e0105..6fbabf56dbb76 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -929,8 +929,7 @@ static int get_serial_info(struct tty_struct *tty, struct serial_struct *ss)
+ {
+ 	struct acm *acm = tty->driver_data;
+ 
+-	ss->xmit_fifo_size = acm->writesize;
+-	ss->baud_base = le32_to_cpu(acm->line.dwDTERate);
++	ss->line = acm->minor;
+ 	ss->close_delay	= jiffies_to_msecs(acm->port.close_delay) / 10;
+ 	ss->closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+ 				ASYNC_CLOSING_WAIT_NONE :
+@@ -942,7 +941,6 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
+ {
+ 	struct acm *acm = tty->driver_data;
+ 	unsigned int closing_wait, close_delay;
+-	unsigned int old_closing_wait, old_close_delay;
+ 	int retval = 0;
+ 
+ 	close_delay = msecs_to_jiffies(ss->close_delay * 10);
+@@ -950,20 +948,12 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
+ 			ASYNC_CLOSING_WAIT_NONE :
+ 			msecs_to_jiffies(ss->closing_wait * 10);
+ 
+-	/* we must redo the rounding here, so that the values match */
+-	old_close_delay	= jiffies_to_msecs(acm->port.close_delay) / 10;
+-	old_closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+-				ASYNC_CLOSING_WAIT_NONE :
+-				jiffies_to_msecs(acm->port.closing_wait) / 10;
+-
+ 	mutex_lock(&acm->port.mutex);
+ 
+ 	if (!capable(CAP_SYS_ADMIN)) {
+-		if ((ss->close_delay != old_close_delay) ||
+-		    (ss->closing_wait != old_closing_wait))
++		if ((close_delay != acm->port.close_delay) ||
++		    (closing_wait != acm->port.closing_wait))
+ 			retval = -EPERM;
+-		else
+-			retval = -EOPNOTSUPP;
+ 	} else {
+ 		acm->port.close_delay  = close_delay;
+ 		acm->port.closing_wait = closing_wait;
+diff --git a/drivers/usb/dwc2/core_intr.c b/drivers/usb/dwc2/core_intr.c
+index 800c8b6c55ff1..510fd0572feb1 100644
+--- a/drivers/usb/dwc2/core_intr.c
++++ b/drivers/usb/dwc2/core_intr.c
+@@ -660,6 +660,71 @@ static u32 dwc2_read_common_intr(struct dwc2_hsotg *hsotg)
+ 		return 0;
+ }
+ 
++/**
++ * dwc_handle_gpwrdn_disc_det() - Handles the gpwrdn disconnect detect.
++ * Exits hibernation without restoring registers.
++ *
++ * @hsotg: Programming view of DWC_otg controller
++ * @gpwrdn: GPWRDN register
++ */
++static inline void dwc_handle_gpwrdn_disc_det(struct dwc2_hsotg *hsotg,
++					      u32 gpwrdn)
++{
++	u32 gpwrdn_tmp;
++
++	/* Switch-on voltage to the core */
++	gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
++	gpwrdn_tmp &= ~GPWRDN_PWRDNSWTCH;
++	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
++	udelay(5);
++
++	/* Reset core */
++	gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
++	gpwrdn_tmp &= ~GPWRDN_PWRDNRSTN;
++	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
++	udelay(5);
++
++	/* Disable Power Down Clamp */
++	gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
++	gpwrdn_tmp &= ~GPWRDN_PWRDNCLMP;
++	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
++	udelay(5);
++
++	/* Deassert reset core */
++	gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
++	gpwrdn_tmp |= GPWRDN_PWRDNRSTN;
++	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
++	udelay(5);
++
++	/* Disable PMU interrupt */
++	gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
++	gpwrdn_tmp &= ~GPWRDN_PMUINTSEL;
++	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
++
++	/* De-assert Wakeup Logic */
++	gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
++	gpwrdn_tmp &= ~GPWRDN_PMUACTV;
++	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
++
++	hsotg->hibernated = 0;
++	hsotg->bus_suspended = 0;
++
++	if (gpwrdn & GPWRDN_IDSTS) {
++		hsotg->op_state = OTG_STATE_B_PERIPHERAL;
++		dwc2_core_init(hsotg, false);
++		dwc2_enable_global_interrupts(hsotg);
++		dwc2_hsotg_core_init_disconnected(hsotg, false);
++		dwc2_hsotg_core_connect(hsotg);
++	} else {
++		hsotg->op_state = OTG_STATE_A_HOST;
++
++		/* Initialize the Core for Host mode */
++		dwc2_core_init(hsotg, false);
++		dwc2_enable_global_interrupts(hsotg);
++		dwc2_hcd_start(hsotg);
++	}
++}
++
+ /*
+  * GPWRDN interrupt handler.
+  *
+@@ -681,64 +746,14 @@ static void dwc2_handle_gpwrdn_intr(struct dwc2_hsotg *hsotg)
+ 
+ 	if ((gpwrdn & GPWRDN_DISCONN_DET) &&
+ 	    (gpwrdn & GPWRDN_DISCONN_DET_MSK) && !linestate) {
+-		u32 gpwrdn_tmp;
+-
+ 		dev_dbg(hsotg->dev, "%s: GPWRDN_DISCONN_DET\n", __func__);
+-
+-		/* Switch-on voltage to the core */
+-		gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+-		gpwrdn_tmp &= ~GPWRDN_PWRDNSWTCH;
+-		dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+-		udelay(10);
+-
+-		/* Reset core */
+-		gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+-		gpwrdn_tmp &= ~GPWRDN_PWRDNRSTN;
+-		dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+-		udelay(10);
+-
+-		/* Disable Power Down Clamp */
+-		gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+-		gpwrdn_tmp &= ~GPWRDN_PWRDNCLMP;
+-		dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+-		udelay(10);
+-
+-		/* Deassert reset core */
+-		gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+-		gpwrdn_tmp |= GPWRDN_PWRDNRSTN;
+-		dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+-		udelay(10);
+-
+-		/* Disable PMU interrupt */
+-		gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+-		gpwrdn_tmp &= ~GPWRDN_PMUINTSEL;
+-		dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+-
+-		/* De-assert Wakeup Logic */
+-		gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+-		gpwrdn_tmp &= ~GPWRDN_PMUACTV;
+-		dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+-
+-		hsotg->hibernated = 0;
+-
+-		if (gpwrdn & GPWRDN_IDSTS) {
+-			hsotg->op_state = OTG_STATE_B_PERIPHERAL;
+-			dwc2_core_init(hsotg, false);
+-			dwc2_enable_global_interrupts(hsotg);
+-			dwc2_hsotg_core_init_disconnected(hsotg, false);
+-			dwc2_hsotg_core_connect(hsotg);
+-		} else {
+-			hsotg->op_state = OTG_STATE_A_HOST;
+-
+-			/* Initialize the Core for Host mode */
+-			dwc2_core_init(hsotg, false);
+-			dwc2_enable_global_interrupts(hsotg);
+-			dwc2_hcd_start(hsotg);
+-		}
+-	}
+-
+-	if ((gpwrdn & GPWRDN_LNSTSCHG) &&
+-	    (gpwrdn & GPWRDN_LNSTSCHG_MSK) && linestate) {
++		/*
++		 * Call disconnect detect function to exit from
++		 * hibernation
++		 */
++		dwc_handle_gpwrdn_disc_det(hsotg, gpwrdn);
++	} else if ((gpwrdn & GPWRDN_LNSTSCHG) &&
++		   (gpwrdn & GPWRDN_LNSTSCHG_MSK) && linestate) {
+ 		dev_dbg(hsotg->dev, "%s: GPWRDN_LNSTSCHG\n", __func__);
+ 		if (hsotg->hw_params.hibernation &&
+ 		    hsotg->hibernated) {
+@@ -749,24 +764,21 @@ static void dwc2_handle_gpwrdn_intr(struct dwc2_hsotg *hsotg)
+ 				dwc2_exit_hibernation(hsotg, 1, 0, 1);
+ 			}
+ 		}
+-	}
+-	if ((gpwrdn & GPWRDN_RST_DET) && (gpwrdn & GPWRDN_RST_DET_MSK)) {
++	} else if ((gpwrdn & GPWRDN_RST_DET) &&
++		   (gpwrdn & GPWRDN_RST_DET_MSK)) {
+ 		dev_dbg(hsotg->dev, "%s: GPWRDN_RST_DET\n", __func__);
+ 		if (!linestate && (gpwrdn & GPWRDN_BSESSVLD))
+ 			dwc2_exit_hibernation(hsotg, 0, 1, 0);
+-	}
+-	if ((gpwrdn & GPWRDN_STS_CHGINT) &&
+-	    (gpwrdn & GPWRDN_STS_CHGINT_MSK) && linestate) {
++	} else if ((gpwrdn & GPWRDN_STS_CHGINT) &&
++		   (gpwrdn & GPWRDN_STS_CHGINT_MSK)) {
+ 		dev_dbg(hsotg->dev, "%s: GPWRDN_STS_CHGINT\n", __func__);
+-		if (hsotg->hw_params.hibernation &&
+-		    hsotg->hibernated) {
+-			if (gpwrdn & GPWRDN_IDSTS) {
+-				dwc2_exit_hibernation(hsotg, 0, 0, 0);
+-				call_gadget(hsotg, resume);
+-			} else {
+-				dwc2_exit_hibernation(hsotg, 1, 0, 1);
+-			}
+-		}
++		/*
++		 * As GPWRDN_STS_CHGINT exit from hibernation flow is
++		 * the same as in GPWRDN_DISCONN_DET flow. Call
++		 * disconnect detect helper function to exit from
++		 * hibernation.
++		 */
++		dwc_handle_gpwrdn_disc_det(hsotg, gpwrdn);
+ 	}
+ }
+ 
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 1a9789ec5847f..6af1dcbc36564 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -5580,7 +5580,15 @@ int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg, int rem_wakeup,
+ 		return ret;
+ 	}
+ 
+-	dwc2_hcd_rem_wakeup(hsotg);
++	if (rem_wakeup) {
++		dwc2_hcd_rem_wakeup(hsotg);
++		/*
++		 * Change "port_connect_status_change" flag to re-enumerate,
++		 * because after exit from hibernation port connection status
++		 * is not detected.
++		 */
++		hsotg->flags.b.port_connect_status_change = 1;
++	}
+ 
+ 	hsotg->hibernated = 0;
+ 	hsotg->bus_suspended = 0;
+diff --git a/drivers/usb/gadget/udc/aspeed-vhub/core.c b/drivers/usb/gadget/udc/aspeed-vhub/core.c
+index be7bb64e3594d..d11d3d14313f9 100644
+--- a/drivers/usb/gadget/udc/aspeed-vhub/core.c
++++ b/drivers/usb/gadget/udc/aspeed-vhub/core.c
+@@ -36,6 +36,7 @@ void ast_vhub_done(struct ast_vhub_ep *ep, struct ast_vhub_req *req,
+ 		   int status)
+ {
+ 	bool internal = req->internal;
++	struct ast_vhub *vhub = ep->vhub;
+ 
+ 	EPVDBG(ep, "completing request @%p, status %d\n", req, status);
+ 
+@@ -46,7 +47,7 @@ void ast_vhub_done(struct ast_vhub_ep *ep, struct ast_vhub_req *req,
+ 
+ 	if (req->req.dma) {
+ 		if (!WARN_ON(!ep->dev))
+-			usb_gadget_unmap_request(&ep->dev->gadget,
++			usb_gadget_unmap_request_by_dev(&vhub->pdev->dev,
+ 						 &req->req, ep->epn.is_in);
+ 		req->req.dma = 0;
+ 	}
+diff --git a/drivers/usb/gadget/udc/aspeed-vhub/epn.c b/drivers/usb/gadget/udc/aspeed-vhub/epn.c
+index 02d8bfae58fb1..cb164c615e6fc 100644
+--- a/drivers/usb/gadget/udc/aspeed-vhub/epn.c
++++ b/drivers/usb/gadget/udc/aspeed-vhub/epn.c
+@@ -376,7 +376,7 @@ static int ast_vhub_epn_queue(struct usb_ep* u_ep, struct usb_request *u_req,
+ 	if (ep->epn.desc_mode ||
+ 	    ((((unsigned long)u_req->buf & 7) == 0) &&
+ 	     (ep->epn.is_in || !(u_req->length & (u_ep->maxpacket - 1))))) {
+-		rc = usb_gadget_map_request(&ep->dev->gadget, u_req,
++		rc = usb_gadget_map_request_by_dev(&vhub->pdev->dev, u_req,
+ 					    ep->epn.is_in);
+ 		if (rc) {
+ 			dev_warn(&vhub->pdev->dev,
+diff --git a/drivers/usb/gadget/udc/fotg210-udc.c b/drivers/usb/gadget/udc/fotg210-udc.c
+index d6ca50f019853..75bf446f4a666 100644
+--- a/drivers/usb/gadget/udc/fotg210-udc.c
++++ b/drivers/usb/gadget/udc/fotg210-udc.c
+@@ -338,15 +338,16 @@ static void fotg210_start_dma(struct fotg210_ep *ep,
+ 		} else {
+ 			buffer = req->req.buf + req->req.actual;
+ 			length = ioread32(ep->fotg210->reg +
+-					FOTG210_FIBCR(ep->epnum - 1));
+-			length &= FIBCR_BCFX;
++					FOTG210_FIBCR(ep->epnum - 1)) & FIBCR_BCFX;
++			if (length > req->req.length - req->req.actual)
++				length = req->req.length - req->req.actual;
+ 		}
+ 	} else {
+ 		buffer = req->req.buf + req->req.actual;
+ 		if (req->req.length - req->req.actual > ep->ep.maxpacket)
+ 			length = ep->ep.maxpacket;
+ 		else
+-			length = req->req.length;
++			length = req->req.length - req->req.actual;
+ 	}
+ 
+ 	d = dma_map_single(dev, buffer, length,
+@@ -379,8 +380,7 @@ static void fotg210_ep0_queue(struct fotg210_ep *ep,
+ 	}
+ 	if (ep->dir_in) { /* if IN */
+ 		fotg210_start_dma(ep, req);
+-		if ((req->req.length == req->req.actual) ||
+-		    (req->req.actual < ep->ep.maxpacket))
++		if (req->req.length == req->req.actual)
+ 			fotg210_done(ep, req, 0);
+ 	} else { /* OUT */
+ 		u32 value = ioread32(ep->fotg210->reg + FOTG210_DMISGR0);
+@@ -820,7 +820,7 @@ static void fotg210_ep0in(struct fotg210_udc *fotg210)
+ 		if (req->req.length)
+ 			fotg210_start_dma(ep, req);
+ 
+-		if ((req->req.length - req->req.actual) < ep->ep.maxpacket)
++		if (req->req.actual == req->req.length)
+ 			fotg210_done(ep, req, 0);
+ 	} else {
+ 		fotg210_set_cxdone(fotg210);
+@@ -849,12 +849,16 @@ static void fotg210_out_fifo_handler(struct fotg210_ep *ep)
+ {
+ 	struct fotg210_request *req = list_entry(ep->queue.next,
+ 						 struct fotg210_request, queue);
++	int disgr1 = ioread32(ep->fotg210->reg + FOTG210_DISGR1);
+ 
+ 	fotg210_start_dma(ep, req);
+ 
+-	/* finish out transfer */
++	/* Complete the request when it's full or a short packet arrived.
++	 * Like other drivers, short_not_ok isn't handled.
++	 */
++
+ 	if (req->req.length == req->req.actual ||
+-	    req->req.actual < ep->ep.maxpacket)
++	    (disgr1 & DISGR1_SPK_INT(ep->epnum - 1)))
+ 		fotg210_done(ep, req, 0);
+ }
+ 
+@@ -1027,6 +1031,12 @@ static void fotg210_init(struct fotg210_udc *fotg210)
+ 	value &= ~DMCR_GLINT_EN;
+ 	iowrite32(value, fotg210->reg + FOTG210_DMCR);
+ 
++	/* enable only grp2 irqs we handle */
++	iowrite32(~(DISGR2_DMA_ERROR | DISGR2_RX0BYTE_INT | DISGR2_TX0BYTE_INT
++		    | DISGR2_ISO_SEQ_ABORT_INT | DISGR2_ISO_SEQ_ERR_INT
++		    | DISGR2_RESM_INT | DISGR2_SUSP_INT | DISGR2_USBRST_INT),
++		  fotg210->reg + FOTG210_DMISGR2);
++
+ 	/* disable all fifo interrupt */
+ 	iowrite32(~(u32)0, fotg210->reg + FOTG210_DMISGR1);
+ 
+diff --git a/drivers/usb/gadget/udc/pch_udc.c b/drivers/usb/gadget/udc/pch_udc.c
+index a3c1fc9242686..fd3656d0f760c 100644
+--- a/drivers/usb/gadget/udc/pch_udc.c
++++ b/drivers/usb/gadget/udc/pch_udc.c
+@@ -7,12 +7,14 @@
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include <linux/delay.h>
++#include <linux/dmi.h>
+ #include <linux/errno.h>
++#include <linux/gpio/consumer.h>
++#include <linux/gpio/machine.h>
+ #include <linux/list.h>
+ #include <linux/interrupt.h>
+ #include <linux/usb/ch9.h>
+ #include <linux/usb/gadget.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/irq.h>
+ 
+ #define PCH_VBUS_PERIOD		3000	/* VBUS polling period (msec) */
+@@ -596,18 +598,22 @@ static void pch_udc_reconnect(struct pch_udc_dev *dev)
+ static inline void pch_udc_vbus_session(struct pch_udc_dev *dev,
+ 					  int is_active)
+ {
++	unsigned long		iflags;
++
++	spin_lock_irqsave(&dev->lock, iflags);
+ 	if (is_active) {
+ 		pch_udc_reconnect(dev);
+ 		dev->vbus_session = 1;
+ 	} else {
+ 		if (dev->driver && dev->driver->disconnect) {
+-			spin_lock(&dev->lock);
++			spin_unlock_irqrestore(&dev->lock, iflags);
+ 			dev->driver->disconnect(&dev->gadget);
+-			spin_unlock(&dev->lock);
++			spin_lock_irqsave(&dev->lock, iflags);
+ 		}
+ 		pch_udc_set_disconnect(dev);
+ 		dev->vbus_session = 0;
+ 	}
++	spin_unlock_irqrestore(&dev->lock, iflags);
+ }
+ 
+ /**
+@@ -1166,20 +1172,25 @@ static int pch_udc_pcd_selfpowered(struct usb_gadget *gadget, int value)
+ static int pch_udc_pcd_pullup(struct usb_gadget *gadget, int is_on)
+ {
+ 	struct pch_udc_dev	*dev;
++	unsigned long		iflags;
+ 
+ 	if (!gadget)
+ 		return -EINVAL;
++
+ 	dev = container_of(gadget, struct pch_udc_dev, gadget);
++
++	spin_lock_irqsave(&dev->lock, iflags);
+ 	if (is_on) {
+ 		pch_udc_reconnect(dev);
+ 	} else {
+ 		if (dev->driver && dev->driver->disconnect) {
+-			spin_lock(&dev->lock);
++			spin_unlock_irqrestore(&dev->lock, iflags);
+ 			dev->driver->disconnect(&dev->gadget);
+-			spin_unlock(&dev->lock);
++			spin_lock_irqsave(&dev->lock, iflags);
+ 		}
+ 		pch_udc_set_disconnect(dev);
+ 	}
++	spin_unlock_irqrestore(&dev->lock, iflags);
+ 
+ 	return 0;
+ }
+@@ -1350,6 +1361,43 @@ static irqreturn_t pch_vbus_gpio_irq(int irq, void *data)
+ 	return IRQ_HANDLED;
+ }
+ 
++static struct gpiod_lookup_table minnowboard_udc_gpios = {
++	.dev_id		= "0000:02:02.4",
++	.table		= {
++		GPIO_LOOKUP("sch_gpio.33158", 12, NULL, GPIO_ACTIVE_HIGH),
++		{}
++	},
++};
++
++static const struct dmi_system_id pch_udc_gpio_dmi_table[] = {
++	{
++		.ident = "MinnowBoard",
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "MinnowBoard"),
++		},
++		.driver_data = &minnowboard_udc_gpios,
++	},
++	{ }
++};
++
++static void pch_vbus_gpio_remove_table(void *table)
++{
++	gpiod_remove_lookup_table(table);
++}
++
++static int pch_vbus_gpio_add_table(struct pch_udc_dev *dev)
++{
++	struct device *d = &dev->pdev->dev;
++	const struct dmi_system_id *dmi;
++
++	dmi = dmi_first_match(pch_udc_gpio_dmi_table);
++	if (!dmi)
++		return 0;
++
++	gpiod_add_lookup_table(dmi->driver_data);
++	return devm_add_action_or_reset(d, pch_vbus_gpio_remove_table, dmi->driver_data);
++}
++
+ /**
+  * pch_vbus_gpio_init() - This API initializes GPIO port detecting VBUS.
+  * @dev:		Reference to the driver structure
+@@ -1360,6 +1408,7 @@ static irqreturn_t pch_vbus_gpio_irq(int irq, void *data)
+  */
+ static int pch_vbus_gpio_init(struct pch_udc_dev *dev)
+ {
++	struct device *d = &dev->pdev->dev;
+ 	int err;
+ 	int irq_num = 0;
+ 	struct gpio_desc *gpiod;
+@@ -1367,8 +1416,12 @@ static int pch_vbus_gpio_init(struct pch_udc_dev *dev)
+ 	dev->vbus_gpio.port = NULL;
+ 	dev->vbus_gpio.intr = 0;
+ 
++	err = pch_vbus_gpio_add_table(dev);
++	if (err)
++		return err;
++
+ 	/* Retrieve the GPIO line from the USB gadget device */
+-	gpiod = devm_gpiod_get(dev->gadget.dev.parent, NULL, GPIOD_IN);
++	gpiod = devm_gpiod_get_optional(d, NULL, GPIOD_IN);
+ 	if (IS_ERR(gpiod))
+ 		return PTR_ERR(gpiod);
+ 	gpiod_set_consumer_name(gpiod, "pch_vbus");
+@@ -1756,7 +1809,7 @@ static struct usb_request *pch_udc_alloc_request(struct usb_ep *usbep,
+ 	}
+ 	/* prevent from using desc. - set HOST BUSY */
+ 	dma_desc->status |= PCH_UDC_BS_HST_BSY;
+-	dma_desc->dataptr = cpu_to_le32(DMA_ADDR_INVALID);
++	dma_desc->dataptr = lower_32_bits(DMA_ADDR_INVALID);
+ 	req->td_data = dma_desc;
+ 	req->td_data_last = dma_desc;
+ 	req->chain_len = 1;
+@@ -2298,6 +2351,21 @@ static void pch_udc_svc_data_out(struct pch_udc_dev *dev, int ep_num)
+ 		pch_udc_set_dma(dev, DMA_DIR_RX);
+ }
+ 
++static int pch_udc_gadget_setup(struct pch_udc_dev *dev)
++	__must_hold(&dev->lock)
++{
++	int rc;
++
++	/* In some cases we can get an interrupt before driver gets setup */
++	if (!dev->driver)
++		return -ESHUTDOWN;
++
++	spin_unlock(&dev->lock);
++	rc = dev->driver->setup(&dev->gadget, &dev->setup_data);
++	spin_lock(&dev->lock);
++	return rc;
++}
++
+ /**
+  * pch_udc_svc_control_in() - Handle Control IN endpoint interrupts
+  * @dev:	Reference to the device structure
+@@ -2369,15 +2437,12 @@ static void pch_udc_svc_control_out(struct pch_udc_dev *dev)
+ 			dev->gadget.ep0 = &dev->ep[UDC_EP0IN_IDX].ep;
+ 		else /* OUT */
+ 			dev->gadget.ep0 = &ep->ep;
+-		spin_lock(&dev->lock);
+ 		/* If Mass storage Reset */
+ 		if ((dev->setup_data.bRequestType == 0x21) &&
+ 		    (dev->setup_data.bRequest == 0xFF))
+ 			dev->prot_stall = 0;
+ 		/* call gadget with setup data received */
+-		setup_supported = dev->driver->setup(&dev->gadget,
+-						     &dev->setup_data);
+-		spin_unlock(&dev->lock);
++		setup_supported = pch_udc_gadget_setup(dev);
+ 
+ 		if (dev->setup_data.bRequestType & USB_DIR_IN) {
+ 			ep->td_data->status = (ep->td_data->status &
+@@ -2625,9 +2690,7 @@ static void pch_udc_svc_intf_interrupt(struct pch_udc_dev *dev)
+ 		dev->ep[i].halted = 0;
+ 	}
+ 	dev->stall = 0;
+-	spin_unlock(&dev->lock);
+-	dev->driver->setup(&dev->gadget, &dev->setup_data);
+-	spin_lock(&dev->lock);
++	pch_udc_gadget_setup(dev);
+ }
+ 
+ /**
+@@ -2662,9 +2725,7 @@ static void pch_udc_svc_cfg_interrupt(struct pch_udc_dev *dev)
+ 	dev->stall = 0;
+ 
+ 	/* call gadget zero with setup data received */
+-	spin_unlock(&dev->lock);
+-	dev->driver->setup(&dev->gadget, &dev->setup_data);
+-	spin_lock(&dev->lock);
++	pch_udc_gadget_setup(dev);
+ }
+ 
+ /**
+@@ -2870,14 +2931,20 @@ static void pch_udc_pcd_reinit(struct pch_udc_dev *dev)
+  * @dev:	Reference to the driver structure
+  *
+  * Return codes:
+- *	0: Success
++ *	0:		Success
++ *	-%ERRNO:	All kind of errors when retrieving VBUS GPIO
+  */
+ static int pch_udc_pcd_init(struct pch_udc_dev *dev)
+ {
++	int ret;
++
+ 	pch_udc_init(dev);
+ 	pch_udc_pcd_reinit(dev);
+-	pch_vbus_gpio_init(dev);
+-	return 0;
++
++	ret = pch_vbus_gpio_init(dev);
++	if (ret)
++		pch_udc_exit(dev);
++	return ret;
+ }
+ 
+ /**
+@@ -2938,7 +3005,7 @@ static int init_dma_pools(struct pch_udc_dev *dev)
+ 	dev->dma_addr = dma_map_single(&dev->pdev->dev, ep0out_buf,
+ 				       UDC_EP0OUT_BUFF_SIZE * 4,
+ 				       DMA_FROM_DEVICE);
+-	return 0;
++	return dma_mapping_error(&dev->pdev->dev, dev->dma_addr);
+ }
+ 
+ static int pch_udc_start(struct usb_gadget *g,
+@@ -3063,6 +3130,7 @@ static int pch_udc_probe(struct pci_dev *pdev,
+ 	if (retval)
+ 		return retval;
+ 
++	dev->pdev = pdev;
+ 	pci_set_drvdata(pdev, dev);
+ 
+ 	/* Determine BAR based on PCI ID */
+@@ -3078,16 +3146,10 @@ static int pch_udc_probe(struct pci_dev *pdev,
+ 
+ 	dev->base_addr = pcim_iomap_table(pdev)[bar];
+ 
+-	/*
+-	 * FIXME: add a GPIO descriptor table to pdev.dev using
+-	 * gpiod_add_descriptor_table() from <linux/gpio/machine.h> based on
+-	 * the PCI subsystem ID. The system-dependent GPIO is necessary for
+-	 * VBUS operation.
+-	 */
+-
+ 	/* initialize the hardware */
+-	if (pch_udc_pcd_init(dev))
+-		return -ENODEV;
++	retval = pch_udc_pcd_init(dev);
++	if (retval)
++		return retval;
+ 
+ 	pci_enable_msi(pdev);
+ 
+@@ -3104,7 +3166,6 @@ static int pch_udc_probe(struct pci_dev *pdev,
+ 
+ 	/* device struct setup */
+ 	spin_lock_init(&dev->lock);
+-	dev->pdev = pdev;
+ 	dev->gadget.ops = &pch_udc_ops;
+ 
+ 	retval = init_dma_pools(dev);
+diff --git a/drivers/usb/gadget/udc/r8a66597-udc.c b/drivers/usb/gadget/udc/r8a66597-udc.c
+index 896c1a016d550..65cae48834545 100644
+--- a/drivers/usb/gadget/udc/r8a66597-udc.c
++++ b/drivers/usb/gadget/udc/r8a66597-udc.c
+@@ -1849,6 +1849,8 @@ static int r8a66597_probe(struct platform_device *pdev)
+ 		return PTR_ERR(reg);
+ 
+ 	ires = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
++	if (!ires)
++		return -EINVAL;
+ 	irq = ires->start;
+ 	irq_trigger = ires->flags & IRQF_TRIGGER_MASK;
+ 
+diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c
+index 1d3ebb07ccd4d..b154b62abefa1 100644
+--- a/drivers/usb/gadget/udc/s3c2410_udc.c
++++ b/drivers/usb/gadget/udc/s3c2410_udc.c
+@@ -54,8 +54,6 @@ static struct clk		*udc_clock;
+ static struct clk		*usb_bus_clock;
+ static void __iomem		*base_addr;
+ static int			irq_usbd;
+-static u64			rsrc_start;
+-static u64			rsrc_len;
+ static struct dentry		*s3c2410_udc_debugfs_root;
+ 
+ static inline u32 udc_read(u32 reg)
+@@ -1752,7 +1750,8 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
+ 	udc_clock = clk_get(NULL, "usb-device");
+ 	if (IS_ERR(udc_clock)) {
+ 		dev_err(dev, "failed to get udc clock source\n");
+-		return PTR_ERR(udc_clock);
++		retval = PTR_ERR(udc_clock);
++		goto err_usb_bus_clk;
+ 	}
+ 
+ 	clk_prepare_enable(udc_clock);
+@@ -1775,7 +1774,7 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
+ 	base_addr = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(base_addr)) {
+ 		retval = PTR_ERR(base_addr);
+-		goto err_mem;
++		goto err_udc_clk;
+ 	}
+ 
+ 	the_controller = udc;
+@@ -1793,7 +1792,7 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
+ 	if (retval != 0) {
+ 		dev_err(dev, "cannot get irq %i, err %d\n", irq_usbd, retval);
+ 		retval = -EBUSY;
+-		goto err_map;
++		goto err_udc_clk;
+ 	}
+ 
+ 	dev_dbg(dev, "got irq %i\n", irq_usbd);
+@@ -1864,10 +1863,14 @@ err_gpio_claim:
+ 		gpio_free(udc_info->vbus_pin);
+ err_int:
+ 	free_irq(irq_usbd, udc);
+-err_map:
+-	iounmap(base_addr);
+-err_mem:
+-	release_mem_region(rsrc_start, rsrc_len);
++err_udc_clk:
++	clk_disable_unprepare(udc_clock);
++	clk_put(udc_clock);
++	udc_clock = NULL;
++err_usb_bus_clk:
++	clk_disable_unprepare(usb_bus_clock);
++	clk_put(usb_bus_clock);
++	usb_bus_clock = NULL;
+ 
+ 	return retval;
+ }
+@@ -1899,9 +1902,6 @@ static int s3c2410_udc_remove(struct platform_device *pdev)
+ 
+ 	free_irq(irq_usbd, udc);
+ 
+-	iounmap(base_addr);
+-	release_mem_region(rsrc_start, rsrc_len);
+-
+ 	if (!IS_ERR(udc_clock) && udc_clock != NULL) {
+ 		clk_disable_unprepare(udc_clock);
+ 		clk_put(udc_clock);
+diff --git a/drivers/usb/gadget/udc/snps_udc_plat.c b/drivers/usb/gadget/udc/snps_udc_plat.c
+index 32f1d3e90c264..99805d60a7ab3 100644
+--- a/drivers/usb/gadget/udc/snps_udc_plat.c
++++ b/drivers/usb/gadget/udc/snps_udc_plat.c
+@@ -114,8 +114,8 @@ static int udc_plat_probe(struct platform_device *pdev)
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	udc->virt_addr = devm_ioremap_resource(dev, res);
+-	if (IS_ERR(udc->regs))
+-		return PTR_ERR(udc->regs);
++	if (IS_ERR(udc->virt_addr))
++		return PTR_ERR(udc->virt_addr);
+ 
+ 	/* udc csr registers base */
+ 	udc->csr = udc->virt_addr + UDC_CSR_ADDR;
+diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
+index b45e5bf089979..8950d1f10a7fb 100644
+--- a/drivers/usb/host/xhci-mtk-sch.c
++++ b/drivers/usb/host/xhci-mtk-sch.c
+@@ -378,6 +378,31 @@ static void update_bus_bw(struct mu3h_sch_bw_info *sch_bw,
+ 	sch_ep->allocated = used;
+ }
+ 
++static int check_fs_bus_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
++{
++	struct mu3h_sch_tt *tt = sch_ep->sch_tt;
++	u32 num_esit, tmp;
++	int base;
++	int i, j;
++
++	num_esit = XHCI_MTK_MAX_ESIT / sch_ep->esit;
++	for (i = 0; i < num_esit; i++) {
++		base = offset + i * sch_ep->esit;
++
++		/*
++		 * Compared with hs bus, no matter what ep type,
++		 * the hub will always delay one uframe to send data
++		 */
++		for (j = 0; j < sch_ep->cs_count; j++) {
++			tmp = tt->fs_bus_bw[base + j] + sch_ep->bw_cost_per_microframe;
++			if (tmp > FS_PAYLOAD_MAX)
++				return -ERANGE;
++		}
++	}
++
++	return 0;
++}
++
+ static int check_sch_tt(struct usb_device *udev,
+ 	struct mu3h_sch_ep_info *sch_ep, u32 offset)
+ {
+@@ -402,7 +427,7 @@ static int check_sch_tt(struct usb_device *udev,
+ 			return -ERANGE;
+ 
+ 		for (i = 0; i < sch_ep->cs_count; i++)
+-			if (test_bit(offset + i, tt->split_bit_map))
++			if (test_bit(offset + i, tt->ss_bit_map))
+ 				return -ERANGE;
+ 
+ 	} else {
+@@ -432,7 +457,7 @@ static int check_sch_tt(struct usb_device *udev,
+ 			cs_count = 7; /* HW limit */
+ 
+ 		for (i = 0; i < cs_count + 2; i++) {
+-			if (test_bit(offset + i, tt->split_bit_map))
++			if (test_bit(offset + i, tt->ss_bit_map))
+ 				return -ERANGE;
+ 		}
+ 
+@@ -448,24 +473,44 @@ static int check_sch_tt(struct usb_device *udev,
+ 			sch_ep->num_budget_microframes = sch_ep->esit;
+ 	}
+ 
+-	return 0;
++	return check_fs_bus_bw(sch_ep, offset);
+ }
+ 
+ static void update_sch_tt(struct usb_device *udev,
+-	struct mu3h_sch_ep_info *sch_ep)
++	struct mu3h_sch_ep_info *sch_ep, bool used)
+ {
+ 	struct mu3h_sch_tt *tt = sch_ep->sch_tt;
+ 	u32 base, num_esit;
++	int bw_updated;
++	int bits;
+ 	int i, j;
+ 
+ 	num_esit = XHCI_MTK_MAX_ESIT / sch_ep->esit;
++	bits = (sch_ep->ep_type == ISOC_OUT_EP) ? sch_ep->cs_count : 1;
++
++	if (used)
++		bw_updated = sch_ep->bw_cost_per_microframe;
++	else
++		bw_updated = -sch_ep->bw_cost_per_microframe;
++
+ 	for (i = 0; i < num_esit; i++) {
+ 		base = sch_ep->offset + i * sch_ep->esit;
+-		for (j = 0; j < sch_ep->num_budget_microframes; j++)
+-			set_bit(base + j, tt->split_bit_map);
++
++		for (j = 0; j < bits; j++) {
++			if (used)
++				set_bit(base + j, tt->ss_bit_map);
++			else
++				clear_bit(base + j, tt->ss_bit_map);
++		}
++
++		for (j = 0; j < sch_ep->cs_count; j++)
++			tt->fs_bus_bw[base + j] += bw_updated;
+ 	}
+ 
+-	list_add_tail(&sch_ep->tt_endpoint, &tt->ep_list);
++	if (used)
++		list_add_tail(&sch_ep->tt_endpoint, &tt->ep_list);
++	else
++		list_del(&sch_ep->tt_endpoint);
+ }
+ 
+ static int check_sch_bw(struct usb_device *udev,
+@@ -535,7 +580,7 @@ static int check_sch_bw(struct usb_device *udev,
+ 		if (!tt_offset_ok)
+ 			return -ERANGE;
+ 
+-		update_sch_tt(udev, sch_ep);
++		update_sch_tt(udev, sch_ep, 1);
+ 	}
+ 
+ 	/* update bus bandwidth info */
+@@ -548,15 +593,16 @@ static void destroy_sch_ep(struct usb_device *udev,
+ 	struct mu3h_sch_bw_info *sch_bw, struct mu3h_sch_ep_info *sch_ep)
+ {
+ 	/* only release ep bw check passed by check_sch_bw() */
+-	if (sch_ep->allocated)
++	if (sch_ep->allocated) {
+ 		update_bus_bw(sch_bw, sch_ep, 0);
++		if (sch_ep->sch_tt)
++			update_sch_tt(udev, sch_ep, 0);
++	}
+ 
+-	list_del(&sch_ep->endpoint);
+-
+-	if (sch_ep->sch_tt) {
+-		list_del(&sch_ep->tt_endpoint);
++	if (sch_ep->sch_tt)
+ 		drop_tt(udev);
+-	}
++
++	list_del(&sch_ep->endpoint);
+ 	kfree(sch_ep);
+ }
+ 
+@@ -643,7 +689,7 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ 		 */
+ 		if (usb_endpoint_xfer_int(&ep->desc)
+ 			|| usb_endpoint_xfer_isoc(&ep->desc))
+-			ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(1));
++			ep_ctx->reserved[0] = cpu_to_le32(EP_BPKTS(1));
+ 
+ 		return 0;
+ 	}
+@@ -730,10 +776,10 @@ int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+ 		list_move_tail(&sch_ep->endpoint, &sch_bw->bw_ep_list);
+ 
+ 		ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
+-		ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(sch_ep->pkts)
++		ep_ctx->reserved[0] = cpu_to_le32(EP_BPKTS(sch_ep->pkts)
+ 			| EP_BCSCOUNT(sch_ep->cs_count)
+ 			| EP_BBM(sch_ep->burst_mode));
+-		ep_ctx->reserved[1] |= cpu_to_le32(EP_BOFFSET(sch_ep->offset)
++		ep_ctx->reserved[1] = cpu_to_le32(EP_BOFFSET(sch_ep->offset)
+ 			| EP_BREPEAT(sch_ep->repeat));
+ 
+ 		xhci_dbg(xhci, " PKTS:%x, CSCOUNT:%x, BM:%x, OFFSET:%x, REPEAT:%x\n",
+diff --git a/drivers/usb/host/xhci-mtk.h b/drivers/usb/host/xhci-mtk.h
+index 080109012b9ac..2fc0568ba054e 100644
+--- a/drivers/usb/host/xhci-mtk.h
++++ b/drivers/usb/host/xhci-mtk.h
+@@ -20,13 +20,15 @@
+ #define XHCI_MTK_MAX_ESIT	64
+ 
+ /**
+- * @split_bit_map: used to avoid split microframes overlay
++ * @ss_bit_map: used to avoid start split microframes overlay
++ * @fs_bus_bw: array to keep track of bandwidth already used for FS
+  * @ep_list: Endpoints using this TT
+  * @usb_tt: usb TT related
+  * @tt_port: TT port number
+  */
+ struct mu3h_sch_tt {
+-	DECLARE_BITMAP(split_bit_map, XHCI_MTK_MAX_ESIT);
++	DECLARE_BITMAP(ss_bit_map, XHCI_MTK_MAX_ESIT);
++	u32 fs_bus_bw[XHCI_MTK_MAX_ESIT];
+ 	struct list_head ep_list;
+ 	struct usb_tt *usb_tt;
+ 	int tt_port;
+diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
+index 97f37077b7f97..33b637d0d8d99 100644
+--- a/drivers/usb/roles/class.c
++++ b/drivers/usb/roles/class.c
+@@ -189,6 +189,8 @@ usb_role_switch_find_by_fwnode(const struct fwnode_handle *fwnode)
+ 		return NULL;
+ 
+ 	dev = class_find_device_by_fwnode(role_class, fwnode);
++	if (dev)
++		WARN_ON(!try_module_get(dev->parent->driver->owner));
+ 
+ 	return dev ? to_role_switch(dev) : NULL;
+ }
+diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c
+index 73075b9351c58..622e24b06b4b7 100644
+--- a/drivers/usb/serial/ti_usb_3410_5052.c
++++ b/drivers/usb/serial/ti_usb_3410_5052.c
+@@ -1420,14 +1420,19 @@ static int ti_set_serial_info(struct tty_struct *tty,
+ 	struct serial_struct *ss)
+ {
+ 	struct usb_serial_port *port = tty->driver_data;
+-	struct ti_port *tport = usb_get_serial_port_data(port);
++	struct tty_port *tport = &port->port;
+ 	unsigned cwait;
+ 
+ 	cwait = ss->closing_wait;
+ 	if (cwait != ASYNC_CLOSING_WAIT_NONE)
+ 		cwait = msecs_to_jiffies(10 * ss->closing_wait);
+ 
+-	tport->tp_port->port.closing_wait = cwait;
++	if (!capable(CAP_SYS_ADMIN)) {
++		if (cwait != tport->closing_wait)
++			return -EPERM;
++	}
++
++	tport->closing_wait = cwait;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/serial/usb_wwan.c b/drivers/usb/serial/usb_wwan.c
+index 4b9845807bee1..b2285d5a869de 100644
+--- a/drivers/usb/serial/usb_wwan.c
++++ b/drivers/usb/serial/usb_wwan.c
+@@ -140,10 +140,10 @@ int usb_wwan_get_serial_info(struct tty_struct *tty,
+ 	ss->line            = port->minor;
+ 	ss->port            = port->port_number;
+ 	ss->baud_base       = tty_get_baud_rate(port->port.tty);
+-	ss->close_delay	    = port->port.close_delay / 10;
++	ss->close_delay	    = jiffies_to_msecs(port->port.close_delay) / 10;
+ 	ss->closing_wait    = port->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+ 				 ASYNC_CLOSING_WAIT_NONE :
+-				 port->port.closing_wait / 10;
++				 jiffies_to_msecs(port->port.closing_wait) / 10;
+ 	return 0;
+ }
+ EXPORT_SYMBOL(usb_wwan_get_serial_info);
+@@ -155,9 +155,10 @@ int usb_wwan_set_serial_info(struct tty_struct *tty,
+ 	unsigned int closing_wait, close_delay;
+ 	int retval = 0;
+ 
+-	close_delay = ss->close_delay * 10;
++	close_delay = msecs_to_jiffies(ss->close_delay * 10);
+ 	closing_wait = ss->closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+-			ASYNC_CLOSING_WAIT_NONE : ss->closing_wait * 10;
++			ASYNC_CLOSING_WAIT_NONE :
++			msecs_to_jiffies(ss->closing_wait * 10);
+ 
+ 	mutex_lock(&port->port.mutex);
+ 
+diff --git a/drivers/usb/typec/stusb160x.c b/drivers/usb/typec/stusb160x.c
+index d21750bbbb44d..6eaeba9b096e1 100644
+--- a/drivers/usb/typec/stusb160x.c
++++ b/drivers/usb/typec/stusb160x.c
+@@ -682,8 +682,8 @@ static int stusb160x_probe(struct i2c_client *client)
+ 	}
+ 
+ 	fwnode = device_get_named_child_node(chip->dev, "connector");
+-	if (IS_ERR(fwnode))
+-		return PTR_ERR(fwnode);
++	if (!fwnode)
++		return -ENODEV;
+ 
+ 	/*
+ 	 * When both VDD and VSYS power supplies are present, the low power
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index f9f0af64da5f0..a06da1854c10c 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -20,6 +20,15 @@
+ 
+ #define PD_RETRY_COUNT 3
+ 
++#define tcpc_presenting_cc1_rd(reg) \
++	(!(TCPC_ROLE_CTRL_DRP & (reg)) && \
++	 (((reg) & (TCPC_ROLE_CTRL_CC1_MASK << TCPC_ROLE_CTRL_CC1_SHIFT)) == \
++	  (TCPC_ROLE_CTRL_CC_RD << TCPC_ROLE_CTRL_CC1_SHIFT)))
++#define tcpc_presenting_cc2_rd(reg) \
++	(!(TCPC_ROLE_CTRL_DRP & (reg)) && \
++	 (((reg) & (TCPC_ROLE_CTRL_CC2_MASK << TCPC_ROLE_CTRL_CC2_SHIFT)) == \
++	  (TCPC_ROLE_CTRL_CC_RD << TCPC_ROLE_CTRL_CC2_SHIFT)))
++
+ struct tcpci {
+ 	struct device *dev;
+ 
+@@ -174,19 +183,25 @@ static int tcpci_get_cc(struct tcpc_dev *tcpc,
+ 			enum typec_cc_status *cc1, enum typec_cc_status *cc2)
+ {
+ 	struct tcpci *tcpci = tcpc_to_tcpci(tcpc);
+-	unsigned int reg;
++	unsigned int reg, role_control;
+ 	int ret;
+ 
++	ret = regmap_read(tcpci->regmap, TCPC_ROLE_CTRL, &role_control);
++	if (ret < 0)
++		return ret;
++
+ 	ret = regmap_read(tcpci->regmap, TCPC_CC_STATUS, &reg);
+ 	if (ret < 0)
+ 		return ret;
+ 
+ 	*cc1 = tcpci_to_typec_cc((reg >> TCPC_CC_STATUS_CC1_SHIFT) &
+ 				 TCPC_CC_STATUS_CC1_MASK,
+-				 reg & TCPC_CC_STATUS_TERM);
++				 reg & TCPC_CC_STATUS_TERM ||
++				 tcpc_presenting_cc1_rd(role_control));
+ 	*cc2 = tcpci_to_typec_cc((reg >> TCPC_CC_STATUS_CC2_SHIFT) &
+ 				 TCPC_CC_STATUS_CC2_MASK,
+-				 reg & TCPC_CC_STATUS_TERM);
++				 reg & TCPC_CC_STATUS_TERM ||
++				 tcpc_presenting_cc2_rd(role_control));
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 563658096b675..912dbf8ca2dac 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -218,12 +218,27 @@ struct pd_mode_data {
+ 	struct typec_altmode_desc altmode_desc[ALTMODE_DISCOVERY_MAX];
+ };
+ 
++/*
++ * @min_volt: Actual min voltage at the local port
++ * @req_min_volt: Requested min voltage to the port partner
++ * @max_volt: Actual max voltage at the local port
++ * @req_max_volt: Requested max voltage to the port partner
++ * @max_curr: Actual max current at the local port
++ * @req_max_curr: Requested max current of the port partner
++ * @req_out_volt: Requested output voltage to the port partner
++ * @req_op_curr: Requested operating current to the port partner
++ * @supported: Parter has atleast one APDO hence supports PPS
++ * @active: PPS mode is active
++ */
+ struct pd_pps_data {
+ 	u32 min_volt;
++	u32 req_min_volt;
+ 	u32 max_volt;
++	u32 req_max_volt;
+ 	u32 max_curr;
+-	u32 out_volt;
+-	u32 op_curr;
++	u32 req_max_curr;
++	u32 req_out_volt;
++	u32 req_op_curr;
+ 	bool supported;
+ 	bool active;
+ };
+@@ -326,7 +341,10 @@ struct tcpm_port {
+ 	unsigned int operating_snk_mw;
+ 	bool update_sink_caps;
+ 
+-	/* Requested current / voltage */
++	/* Requested current / voltage to the port partner */
++	u32 req_current_limit;
++	u32 req_supply_voltage;
++	/* Actual current / voltage limit of the local port */
+ 	u32 current_limit;
+ 	u32 supply_voltage;
+ 
+@@ -1873,8 +1891,8 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 		case SNK_TRANSITION_SINK:
+ 			if (port->vbus_present) {
+ 				tcpm_set_current_limit(port,
+-						       port->current_limit,
+-						       port->supply_voltage);
++						       port->req_current_limit,
++						       port->req_supply_voltage);
+ 				port->explicit_contract = true;
+ 				tcpm_set_state(port, SNK_READY, 0);
+ 			} else {
+@@ -1916,8 +1934,8 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 			break;
+ 		case SNK_NEGOTIATE_PPS_CAPABILITIES:
+ 			/* Revert data back from any requested PPS updates */
+-			port->pps_data.out_volt = port->supply_voltage;
+-			port->pps_data.op_curr = port->current_limit;
++			port->pps_data.req_out_volt = port->supply_voltage;
++			port->pps_data.req_op_curr = port->current_limit;
+ 			port->pps_status = (type == PD_CTRL_WAIT ?
+ 					    -EAGAIN : -EOPNOTSUPP);
+ 			tcpm_set_state(port, SNK_READY, 0);
+@@ -1956,8 +1974,12 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 			break;
+ 		case SNK_NEGOTIATE_PPS_CAPABILITIES:
+ 			port->pps_data.active = true;
+-			port->supply_voltage = port->pps_data.out_volt;
+-			port->current_limit = port->pps_data.op_curr;
++			port->pps_data.min_volt = port->pps_data.req_min_volt;
++			port->pps_data.max_volt = port->pps_data.req_max_volt;
++			port->pps_data.max_curr = port->pps_data.req_max_curr;
++			port->req_supply_voltage = port->pps_data.req_out_volt;
++			port->req_current_limit = port->pps_data.req_op_curr;
++			power_supply_changed(port->psy);
+ 			tcpm_set_state(port, SNK_TRANSITION_SINK, 0);
+ 			break;
+ 		case SOFT_RESET_SEND:
+@@ -2474,17 +2496,16 @@ static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port)
+ 		src = port->source_caps[src_pdo];
+ 		snk = port->snk_pdo[snk_pdo];
+ 
+-		port->pps_data.min_volt = max(pdo_pps_apdo_min_voltage(src),
+-					      pdo_pps_apdo_min_voltage(snk));
+-		port->pps_data.max_volt = min(pdo_pps_apdo_max_voltage(src),
+-					      pdo_pps_apdo_max_voltage(snk));
+-		port->pps_data.max_curr = min_pps_apdo_current(src, snk);
+-		port->pps_data.out_volt = min(port->pps_data.max_volt,
+-					      max(port->pps_data.min_volt,
+-						  port->pps_data.out_volt));
+-		port->pps_data.op_curr = min(port->pps_data.max_curr,
+-					     port->pps_data.op_curr);
+-		power_supply_changed(port->psy);
++		port->pps_data.req_min_volt = max(pdo_pps_apdo_min_voltage(src),
++						  pdo_pps_apdo_min_voltage(snk));
++		port->pps_data.req_max_volt = min(pdo_pps_apdo_max_voltage(src),
++						  pdo_pps_apdo_max_voltage(snk));
++		port->pps_data.req_max_curr = min_pps_apdo_current(src, snk);
++		port->pps_data.req_out_volt = min(port->pps_data.max_volt,
++						  max(port->pps_data.min_volt,
++						      port->pps_data.req_out_volt));
++		port->pps_data.req_op_curr = min(port->pps_data.max_curr,
++						 port->pps_data.req_op_curr);
+ 	}
+ 
+ 	return src_pdo;
+@@ -2564,8 +2585,8 @@ static int tcpm_pd_build_request(struct tcpm_port *port, u32 *rdo)
+ 			 flags & RDO_CAP_MISMATCH ? " [mismatch]" : "");
+ 	}
+ 
+-	port->current_limit = ma;
+-	port->supply_voltage = mv;
++	port->req_current_limit = ma;
++	port->req_supply_voltage = mv;
+ 
+ 	return 0;
+ }
+@@ -2611,10 +2632,10 @@ static int tcpm_pd_build_pps_request(struct tcpm_port *port, u32 *rdo)
+ 			tcpm_log(port, "Invalid APDO selected!");
+ 			return -EINVAL;
+ 		}
+-		max_mv = port->pps_data.max_volt;
+-		max_ma = port->pps_data.max_curr;
+-		out_mv = port->pps_data.out_volt;
+-		op_ma = port->pps_data.op_curr;
++		max_mv = port->pps_data.req_max_volt;
++		max_ma = port->pps_data.req_max_curr;
++		out_mv = port->pps_data.req_out_volt;
++		op_ma = port->pps_data.req_op_curr;
+ 		break;
+ 	default:
+ 		tcpm_log(port, "Invalid PDO selected!");
+@@ -2661,8 +2682,8 @@ static int tcpm_pd_build_pps_request(struct tcpm_port *port, u32 *rdo)
+ 	tcpm_log(port, "Requesting APDO %d: %u mV, %u mA",
+ 		 src_pdo_index, out_mv, op_ma);
+ 
+-	port->pps_data.op_curr = op_ma;
+-	port->pps_data.out_volt = out_mv;
++	port->pps_data.req_op_curr = op_ma;
++	port->pps_data.req_out_volt = out_mv;
+ 
+ 	return 0;
+ }
+@@ -2890,8 +2911,6 @@ static void tcpm_reset_port(struct tcpm_port *port)
+ 	port->sink_cap_done = false;
+ 	if (port->tcpc->enable_frs)
+ 		port->tcpc->enable_frs(port->tcpc, false);
+-
+-	power_supply_changed(port->psy);
+ }
+ 
+ static void tcpm_detach(struct tcpm_port *port)
+@@ -4503,7 +4522,7 @@ static int tcpm_try_role(struct typec_port *p, int role)
+ 	return ret;
+ }
+ 
+-static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 op_curr)
++static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 req_op_curr)
+ {
+ 	unsigned int target_mw;
+ 	int ret;
+@@ -4521,22 +4540,22 @@ static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 op_curr)
+ 		goto port_unlock;
+ 	}
+ 
+-	if (op_curr > port->pps_data.max_curr) {
++	if (req_op_curr > port->pps_data.max_curr) {
+ 		ret = -EINVAL;
+ 		goto port_unlock;
+ 	}
+ 
+-	target_mw = (op_curr * port->pps_data.out_volt) / 1000;
++	target_mw = (req_op_curr * port->supply_voltage) / 1000;
+ 	if (target_mw < port->operating_snk_mw) {
+ 		ret = -EINVAL;
+ 		goto port_unlock;
+ 	}
+ 
+ 	/* Round down operating current to align with PPS valid steps */
+-	op_curr = op_curr - (op_curr % RDO_PROG_CURR_MA_STEP);
++	req_op_curr = req_op_curr - (req_op_curr % RDO_PROG_CURR_MA_STEP);
+ 
+ 	reinit_completion(&port->pps_complete);
+-	port->pps_data.op_curr = op_curr;
++	port->pps_data.req_op_curr = req_op_curr;
+ 	port->pps_status = 0;
+ 	port->pps_pending = true;
+ 	tcpm_set_state(port, SNK_NEGOTIATE_PPS_CAPABILITIES, 0);
+@@ -4558,7 +4577,7 @@ swap_unlock:
+ 	return ret;
+ }
+ 
+-static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 out_volt)
++static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 req_out_volt)
+ {
+ 	unsigned int target_mw;
+ 	int ret;
+@@ -4576,23 +4595,23 @@ static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 out_volt)
+ 		goto port_unlock;
+ 	}
+ 
+-	if (out_volt < port->pps_data.min_volt ||
+-	    out_volt > port->pps_data.max_volt) {
++	if (req_out_volt < port->pps_data.min_volt ||
++	    req_out_volt > port->pps_data.max_volt) {
+ 		ret = -EINVAL;
+ 		goto port_unlock;
+ 	}
+ 
+-	target_mw = (port->pps_data.op_curr * out_volt) / 1000;
++	target_mw = (port->current_limit * req_out_volt) / 1000;
+ 	if (target_mw < port->operating_snk_mw) {
+ 		ret = -EINVAL;
+ 		goto port_unlock;
+ 	}
+ 
+ 	/* Round down output voltage to align with PPS valid steps */
+-	out_volt = out_volt - (out_volt % RDO_PROG_VOLT_MV_STEP);
++	req_out_volt = req_out_volt - (req_out_volt % RDO_PROG_VOLT_MV_STEP);
+ 
+ 	reinit_completion(&port->pps_complete);
+-	port->pps_data.out_volt = out_volt;
++	port->pps_data.req_out_volt = req_out_volt;
+ 	port->pps_status = 0;
+ 	port->pps_pending = true;
+ 	tcpm_set_state(port, SNK_NEGOTIATE_PPS_CAPABILITIES, 0);
+@@ -4641,8 +4660,8 @@ static int tcpm_pps_activate(struct tcpm_port *port, bool activate)
+ 
+ 	/* Trigger PPS request or move back to standard PDO contract */
+ 	if (activate) {
+-		port->pps_data.out_volt = port->supply_voltage;
+-		port->pps_data.op_curr = port->current_limit;
++		port->pps_data.req_out_volt = port->supply_voltage;
++		port->pps_data.req_op_curr = port->current_limit;
+ 		tcpm_set_state(port, SNK_NEGOTIATE_PPS_CAPABILITIES, 0);
+ 	} else {
+ 		tcpm_set_state(port, SNK_NEGOTIATE_CAPABILITIES, 0);
+diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
+index d8e4594fe0090..30bfc314b743c 100644
+--- a/drivers/usb/typec/tps6598x.c
++++ b/drivers/usb/typec/tps6598x.c
+@@ -515,8 +515,8 @@ static int tps6598x_probe(struct i2c_client *client)
+ 		return ret;
+ 
+ 	fwnode = device_get_named_child_node(&client->dev, "connector");
+-	if (IS_ERR(fwnode))
+-		return PTR_ERR(fwnode);
++	if (!fwnode)
++		return -ENODEV;
+ 
+ 	tps->role_sw = fwnode_usb_role_switch_get(fwnode);
+ 	if (IS_ERR(tps->role_sw)) {
+diff --git a/drivers/usb/usbip/vudc_sysfs.c b/drivers/usb/usbip/vudc_sysfs.c
+index f7633ee655a17..d1cf6b51bf85d 100644
+--- a/drivers/usb/usbip/vudc_sysfs.c
++++ b/drivers/usb/usbip/vudc_sysfs.c
+@@ -156,12 +156,14 @@ static ssize_t usbip_sockfd_store(struct device *dev,
+ 		tcp_rx = kthread_create(&v_rx_loop, &udc->ud, "vudc_rx");
+ 		if (IS_ERR(tcp_rx)) {
+ 			sockfd_put(socket);
++			mutex_unlock(&udc->ud.sysfs_lock);
+ 			return -EINVAL;
+ 		}
+ 		tcp_tx = kthread_create(&v_tx_loop, &udc->ud, "vudc_tx");
+ 		if (IS_ERR(tcp_tx)) {
+ 			kthread_stop(tcp_rx);
+ 			sockfd_put(socket);
++			mutex_unlock(&udc->ud.sysfs_lock);
+ 			return -EINVAL;
+ 		}
+ 
+diff --git a/drivers/vfio/fsl-mc/vfio_fsl_mc.c b/drivers/vfio/fsl-mc/vfio_fsl_mc.c
+index f27e25112c403..8722f5effacd4 100644
+--- a/drivers/vfio/fsl-mc/vfio_fsl_mc.c
++++ b/drivers/vfio/fsl-mc/vfio_fsl_mc.c
+@@ -568,23 +568,39 @@ static int vfio_fsl_mc_init_device(struct vfio_fsl_mc_device *vdev)
+ 		dev_err(&mc_dev->dev, "VFIO_FSL_MC: Failed to setup DPRC (%d)\n", ret);
+ 		goto out_nc_unreg;
+ 	}
++	return 0;
++
++out_nc_unreg:
++	bus_unregister_notifier(&fsl_mc_bus_type, &vdev->nb);
++	return ret;
++}
+ 
++static int vfio_fsl_mc_scan_container(struct fsl_mc_device *mc_dev)
++{
++	int ret;
++
++	/* non dprc devices do not scan for other devices */
++	if (!is_fsl_mc_bus_dprc(mc_dev))
++		return 0;
+ 	ret = dprc_scan_container(mc_dev, false);
+ 	if (ret) {
+-		dev_err(&mc_dev->dev, "VFIO_FSL_MC: Container scanning failed (%d)\n", ret);
+-		goto out_dprc_cleanup;
++		dev_err(&mc_dev->dev,
++			"VFIO_FSL_MC: Container scanning failed (%d)\n", ret);
++		dprc_remove_devices(mc_dev, NULL, 0);
++		return ret;
+ 	}
+-
+ 	return 0;
++}
++
++static void vfio_fsl_uninit_device(struct vfio_fsl_mc_device *vdev)
++{
++	struct fsl_mc_device *mc_dev = vdev->mc_dev;
++
++	if (!is_fsl_mc_bus_dprc(mc_dev))
++		return;
+ 
+-out_dprc_cleanup:
+-	dprc_remove_devices(mc_dev, NULL, 0);
+ 	dprc_cleanup(mc_dev);
+-out_nc_unreg:
+ 	bus_unregister_notifier(&fsl_mc_bus_type, &vdev->nb);
+-	vdev->nb.notifier_call = NULL;
+-
+-	return ret;
+ }
+ 
+ static int vfio_fsl_mc_probe(struct fsl_mc_device *mc_dev)
+@@ -607,29 +623,39 @@ static int vfio_fsl_mc_probe(struct fsl_mc_device *mc_dev)
+ 	}
+ 
+ 	vdev->mc_dev = mc_dev;
+-
+-	ret = vfio_add_group_dev(dev, &vfio_fsl_mc_ops, vdev);
+-	if (ret) {
+-		dev_err(dev, "VFIO_FSL_MC: Failed to add to vfio group\n");
+-		goto out_group_put;
+-	}
++	mutex_init(&vdev->igate);
+ 
+ 	ret = vfio_fsl_mc_reflck_attach(vdev);
+ 	if (ret)
+-		goto out_group_dev;
++		goto out_group_put;
+ 
+ 	ret = vfio_fsl_mc_init_device(vdev);
+ 	if (ret)
+ 		goto out_reflck;
+ 
+-	mutex_init(&vdev->igate);
++	ret = vfio_add_group_dev(dev, &vfio_fsl_mc_ops, vdev);
++	if (ret) {
++		dev_err(dev, "VFIO_FSL_MC: Failed to add to vfio group\n");
++		goto out_device;
++	}
+ 
++	/*
++	 * This triggers recursion into vfio_fsl_mc_probe() on another device
++	 * and the vfio_fsl_mc_reflck_attach() must succeed, which relies on the
++	 * vfio_add_group_dev() above. It has no impact on this vdev, so it is
++	 * safe to be after the vfio device is made live.
++	 */
++	ret = vfio_fsl_mc_scan_container(mc_dev);
++	if (ret)
++		goto out_group_dev;
+ 	return 0;
+ 
+-out_reflck:
+-	vfio_fsl_mc_reflck_put(vdev->reflck);
+ out_group_dev:
+ 	vfio_del_group_dev(dev);
++out_device:
++	vfio_fsl_uninit_device(vdev);
++out_reflck:
++	vfio_fsl_mc_reflck_put(vdev->reflck);
+ out_group_put:
+ 	vfio_iommu_group_put(group, dev);
+ 	return ret;
+@@ -646,16 +672,10 @@ static int vfio_fsl_mc_remove(struct fsl_mc_device *mc_dev)
+ 
+ 	mutex_destroy(&vdev->igate);
+ 
++	dprc_remove_devices(mc_dev, NULL, 0);
++	vfio_fsl_uninit_device(vdev);
+ 	vfio_fsl_mc_reflck_put(vdev->reflck);
+ 
+-	if (is_fsl_mc_bus_dprc(mc_dev)) {
+-		dprc_remove_devices(mc_dev, NULL, 0);
+-		dprc_cleanup(mc_dev);
+-	}
+-
+-	if (vdev->nb.notifier_call)
+-		bus_unregister_notifier(&fsl_mc_bus_type, &vdev->nb);
+-
+ 	vfio_iommu_group_put(mc_dev->dev.iommu_group, dev);
+ 
+ 	return 0;
+diff --git a/drivers/vfio/mdev/mdev_sysfs.c b/drivers/vfio/mdev/mdev_sysfs.c
+index 917fd84c1c6f2..367ff5412a387 100644
+--- a/drivers/vfio/mdev/mdev_sysfs.c
++++ b/drivers/vfio/mdev/mdev_sysfs.c
+@@ -105,6 +105,7 @@ static struct mdev_type *add_mdev_supported_type(struct mdev_parent *parent,
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	type->kobj.kset = parent->mdev_types_kset;
++	type->parent = parent;
+ 
+ 	ret = kobject_init_and_add(&type->kobj, &mdev_type_ktype, NULL,
+ 				   "%s-%s", dev_driver_string(parent->dev),
+@@ -132,7 +133,6 @@ static struct mdev_type *add_mdev_supported_type(struct mdev_parent *parent,
+ 	}
+ 
+ 	type->group = group;
+-	type->parent = parent;
+ 	return type;
+ 
+ attrs_failed:
+diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
+index 465f646e33298..48b048edf1ee8 100644
+--- a/drivers/vfio/pci/vfio_pci.c
++++ b/drivers/vfio/pci/vfio_pci.c
+@@ -1926,6 +1926,68 @@ static int vfio_pci_bus_notifier(struct notifier_block *nb,
+ 	return 0;
+ }
+ 
++static int vfio_pci_vf_init(struct vfio_pci_device *vdev)
++{
++	struct pci_dev *pdev = vdev->pdev;
++	int ret;
++
++	if (!pdev->is_physfn)
++		return 0;
++
++	vdev->vf_token = kzalloc(sizeof(*vdev->vf_token), GFP_KERNEL);
++	if (!vdev->vf_token)
++		return -ENOMEM;
++
++	mutex_init(&vdev->vf_token->lock);
++	uuid_gen(&vdev->vf_token->uuid);
++
++	vdev->nb.notifier_call = vfio_pci_bus_notifier;
++	ret = bus_register_notifier(&pci_bus_type, &vdev->nb);
++	if (ret) {
++		kfree(vdev->vf_token);
++		return ret;
++	}
++	return 0;
++}
++
++static void vfio_pci_vf_uninit(struct vfio_pci_device *vdev)
++{
++	if (!vdev->vf_token)
++		return;
++
++	bus_unregister_notifier(&pci_bus_type, &vdev->nb);
++	WARN_ON(vdev->vf_token->users);
++	mutex_destroy(&vdev->vf_token->lock);
++	kfree(vdev->vf_token);
++}
++
++static int vfio_pci_vga_init(struct vfio_pci_device *vdev)
++{
++	struct pci_dev *pdev = vdev->pdev;
++	int ret;
++
++	if (!vfio_pci_is_vga(pdev))
++		return 0;
++
++	ret = vga_client_register(pdev, vdev, NULL, vfio_pci_set_vga_decode);
++	if (ret)
++		return ret;
++	vga_set_legacy_decoding(pdev, vfio_pci_set_vga_decode(vdev, false));
++	return 0;
++}
++
++static void vfio_pci_vga_uninit(struct vfio_pci_device *vdev)
++{
++	struct pci_dev *pdev = vdev->pdev;
++
++	if (!vfio_pci_is_vga(pdev))
++		return;
++	vga_client_register(pdev, NULL, NULL, NULL);
++	vga_set_legacy_decoding(pdev, VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM |
++					      VGA_RSRC_LEGACY_IO |
++					      VGA_RSRC_LEGACY_MEM);
++}
++
+ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ 	struct vfio_pci_device *vdev;
+@@ -1972,35 +2034,15 @@ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	INIT_LIST_HEAD(&vdev->vma_list);
+ 	init_rwsem(&vdev->memory_lock);
+ 
+-	ret = vfio_add_group_dev(&pdev->dev, &vfio_pci_ops, vdev);
++	ret = vfio_pci_reflck_attach(vdev);
+ 	if (ret)
+ 		goto out_free;
+-
+-	ret = vfio_pci_reflck_attach(vdev);
++	ret = vfio_pci_vf_init(vdev);
+ 	if (ret)
+-		goto out_del_group_dev;
+-
+-	if (pdev->is_physfn) {
+-		vdev->vf_token = kzalloc(sizeof(*vdev->vf_token), GFP_KERNEL);
+-		if (!vdev->vf_token) {
+-			ret = -ENOMEM;
+-			goto out_reflck;
+-		}
+-
+-		mutex_init(&vdev->vf_token->lock);
+-		uuid_gen(&vdev->vf_token->uuid);
+-
+-		vdev->nb.notifier_call = vfio_pci_bus_notifier;
+-		ret = bus_register_notifier(&pci_bus_type, &vdev->nb);
+-		if (ret)
+-			goto out_vf_token;
+-	}
+-
+-	if (vfio_pci_is_vga(pdev)) {
+-		vga_client_register(pdev, vdev, NULL, vfio_pci_set_vga_decode);
+-		vga_set_legacy_decoding(pdev,
+-					vfio_pci_set_vga_decode(vdev, false));
+-	}
++		goto out_reflck;
++	ret = vfio_pci_vga_init(vdev);
++	if (ret)
++		goto out_vf;
+ 
+ 	vfio_pci_probe_power_state(vdev);
+ 
+@@ -2018,15 +2060,20 @@ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		vfio_pci_set_power_state(vdev, PCI_D3hot);
+ 	}
+ 
+-	return ret;
++	ret = vfio_add_group_dev(&pdev->dev, &vfio_pci_ops, vdev);
++	if (ret)
++		goto out_power;
++	return 0;
+ 
+-out_vf_token:
+-	kfree(vdev->vf_token);
++out_power:
++	if (!disable_idle_d3)
++		vfio_pci_set_power_state(vdev, PCI_D0);
++out_vf:
++	vfio_pci_vf_uninit(vdev);
+ out_reflck:
+ 	vfio_pci_reflck_put(vdev->reflck);
+-out_del_group_dev:
+-	vfio_del_group_dev(&pdev->dev);
+ out_free:
++	kfree(vdev->pm_save);
+ 	kfree(vdev);
+ out_group_put:
+ 	vfio_iommu_group_put(group, &pdev->dev);
+@@ -2043,33 +2090,19 @@ static void vfio_pci_remove(struct pci_dev *pdev)
+ 	if (!vdev)
+ 		return;
+ 
+-	if (vdev->vf_token) {
+-		WARN_ON(vdev->vf_token->users);
+-		mutex_destroy(&vdev->vf_token->lock);
+-		kfree(vdev->vf_token);
+-	}
+-
+-	if (vdev->nb.notifier_call)
+-		bus_unregister_notifier(&pci_bus_type, &vdev->nb);
+-
++	vfio_pci_vf_uninit(vdev);
+ 	vfio_pci_reflck_put(vdev->reflck);
++	vfio_pci_vga_uninit(vdev);
+ 
+ 	vfio_iommu_group_put(pdev->dev.iommu_group, &pdev->dev);
+-	kfree(vdev->region);
+-	mutex_destroy(&vdev->ioeventfds_lock);
+ 
+ 	if (!disable_idle_d3)
+ 		vfio_pci_set_power_state(vdev, PCI_D0);
+ 
++	mutex_destroy(&vdev->ioeventfds_lock);
++	kfree(vdev->region);
+ 	kfree(vdev->pm_save);
+ 	kfree(vdev);
+-
+-	if (vfio_pci_is_vga(pdev)) {
+-		vga_client_register(pdev, NULL, NULL, NULL);
+-		vga_set_legacy_decoding(pdev,
+-				VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM |
+-				VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM);
+-	}
+ }
+ 
+ static pci_ers_result_t vfio_pci_aer_err_detected(struct pci_dev *pdev,
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 9dc6f4b1c4177..628ba3fed36df 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -1337,6 +1337,7 @@ static int afs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+ 
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	op->file[0].dv_delta = 1;
++	op->file[0].modification = true;
+ 	op->file[0].update_ctime = true;
+ 	op->dentry	= dentry;
+ 	op->create.mode	= S_IFDIR | mode;
+@@ -1418,6 +1419,7 @@ static int afs_rmdir(struct inode *dir, struct dentry *dentry)
+ 
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	op->file[0].dv_delta = 1;
++	op->file[0].modification = true;
+ 	op->file[0].update_ctime = true;
+ 
+ 	op->dentry	= dentry;
+@@ -1554,6 +1556,7 @@ static int afs_unlink(struct inode *dir, struct dentry *dentry)
+ 
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	op->file[0].dv_delta = 1;
++	op->file[0].modification = true;
+ 	op->file[0].update_ctime = true;
+ 
+ 	/* Try to make sure we have a callback promise on the victim. */
+@@ -1636,6 +1639,7 @@ static int afs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
+ 
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	op->file[0].dv_delta = 1;
++	op->file[0].modification = true;
+ 	op->file[0].update_ctime = true;
+ 
+ 	op->dentry	= dentry;
+@@ -1710,6 +1714,7 @@ static int afs_link(struct dentry *from, struct inode *dir,
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	afs_op_set_vnode(op, 1, vnode);
+ 	op->file[0].dv_delta = 1;
++	op->file[0].modification = true;
+ 	op->file[0].update_ctime = true;
+ 	op->file[1].update_ctime = true;
+ 
+@@ -1905,6 +1910,8 @@ static int afs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	afs_op_set_vnode(op, 1, new_dvnode); /* May be same as orig_dvnode */
+ 	op->file[0].dv_delta = 1;
+ 	op->file[1].dv_delta = 1;
++	op->file[0].modification = true;
++	op->file[1].modification = true;
+ 	op->file[0].update_ctime = true;
+ 	op->file[1].update_ctime = true;
+ 
+diff --git a/fs/afs/dir_silly.c b/fs/afs/dir_silly.c
+index 04f75a44f2432..dae9a57d7ec0c 100644
+--- a/fs/afs/dir_silly.c
++++ b/fs/afs/dir_silly.c
+@@ -73,6 +73,8 @@ static int afs_do_silly_rename(struct afs_vnode *dvnode, struct afs_vnode *vnode
+ 	afs_op_set_vnode(op, 1, dvnode);
+ 	op->file[0].dv_delta = 1;
+ 	op->file[1].dv_delta = 1;
++	op->file[0].modification = true;
++	op->file[1].modification = true;
+ 	op->file[0].update_ctime = true;
+ 	op->file[1].update_ctime = true;
+ 
+@@ -201,6 +203,7 @@ static int afs_do_silly_unlink(struct afs_vnode *dvnode, struct afs_vnode *vnode
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	afs_op_set_vnode(op, 1, vnode);
+ 	op->file[0].dv_delta = 1;
++	op->file[0].modification = true;
+ 	op->file[0].update_ctime = true;
+ 	op->file[1].op_unlinked = true;
+ 	op->file[1].update_ctime = true;
+diff --git a/fs/afs/fs_operation.c b/fs/afs/fs_operation.c
+index 71c58723763d2..a82515b47350e 100644
+--- a/fs/afs/fs_operation.c
++++ b/fs/afs/fs_operation.c
+@@ -118,6 +118,8 @@ static void afs_prepare_vnode(struct afs_operation *op, struct afs_vnode_param *
+ 		vp->cb_break_before	= afs_calc_vnode_cb_break(vnode);
+ 		if (vnode->lock_state != AFS_VNODE_LOCK_NONE)
+ 			op->flags	|= AFS_OPERATION_CUR_ONLY;
++		if (vp->modification)
++			set_bit(AFS_VNODE_MODIFYING, &vnode->flags);
+ 	}
+ 
+ 	if (vp->fid.vnode)
+@@ -223,6 +225,10 @@ int afs_put_operation(struct afs_operation *op)
+ 
+ 	if (op->ops && op->ops->put)
+ 		op->ops->put(op);
++	if (op->file[0].modification)
++		clear_bit(AFS_VNODE_MODIFYING, &op->file[0].vnode->flags);
++	if (op->file[1].modification && op->file[1].vnode != op->file[0].vnode)
++		clear_bit(AFS_VNODE_MODIFYING, &op->file[1].vnode->flags);
+ 	if (op->file[0].put_vnode)
+ 		iput(&op->file[0].vnode->vfs_inode);
+ 	if (op->file[1].put_vnode)
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 1d03eb1920ec0..ae3016a9fb23c 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -102,13 +102,13 @@ static int afs_inode_init_from_status(struct afs_operation *op,
+ 
+ 	switch (status->type) {
+ 	case AFS_FTYPE_FILE:
+-		inode->i_mode	= S_IFREG | status->mode;
++		inode->i_mode	= S_IFREG | (status->mode & S_IALLUGO);
+ 		inode->i_op	= &afs_file_inode_operations;
+ 		inode->i_fop	= &afs_file_operations;
+ 		inode->i_mapping->a_ops	= &afs_fs_aops;
+ 		break;
+ 	case AFS_FTYPE_DIR:
+-		inode->i_mode	= S_IFDIR | status->mode;
++		inode->i_mode	= S_IFDIR |  (status->mode & S_IALLUGO);
+ 		inode->i_op	= &afs_dir_inode_operations;
+ 		inode->i_fop	= &afs_dir_file_operations;
+ 		inode->i_mapping->a_ops	= &afs_dir_aops;
+@@ -198,7 +198,7 @@ static void afs_apply_status(struct afs_operation *op,
+ 	if (status->mode != vnode->status.mode) {
+ 		mode = inode->i_mode;
+ 		mode &= ~S_IALLUGO;
+-		mode |= status->mode;
++		mode |= status->mode & S_IALLUGO;
+ 		WRITE_ONCE(inode->i_mode, mode);
+ 	}
+ 
+@@ -293,8 +293,9 @@ void afs_vnode_commit_status(struct afs_operation *op, struct afs_vnode_param *v
+ 			op->flags &= ~AFS_OPERATION_DIR_CONFLICT;
+ 		}
+ 	} else if (vp->scb.have_status) {
+-		if (vp->dv_before + vp->dv_delta != vp->scb.status.data_version &&
+-		    vp->speculative)
++		if (vp->speculative &&
++		    (test_bit(AFS_VNODE_MODIFYING, &vnode->flags) ||
++		     vp->dv_before != vnode->status.data_version))
+ 			/* Ignore the result of a speculative bulk status fetch
+ 			 * if it splits around a modification op, thereby
+ 			 * appearing to regress the data version.
+@@ -909,6 +910,7 @@ int afs_setattr(struct dentry *dentry, struct iattr *attr)
+ 	}
+ 	op->ctime = attr->ia_ctime;
+ 	op->file[0].update_ctime = 1;
++	op->file[0].modification = true;
+ 
+ 	op->ops = &afs_setattr_operation;
+ 	ret = afs_do_sync_operation(op);
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 525ef075fcd90..ffe318ad2e026 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -640,6 +640,7 @@ struct afs_vnode {
+ #define AFS_VNODE_PSEUDODIR	7 		/* set if Vnode is a pseudo directory */
+ #define AFS_VNODE_NEW_CONTENT	8		/* Set if file has new content (create/trunc-0) */
+ #define AFS_VNODE_SILLY_DELETED	9		/* Set if file has been silly-deleted */
++#define AFS_VNODE_MODIFYING	10		/* Set if we're performing a modification op */
+ 
+ 	struct list_head	wb_keys;	/* List of keys available for writeback */
+ 	struct list_head	pending_locks;	/* locks waiting to be granted */
+@@ -756,6 +757,7 @@ struct afs_vnode_param {
+ 	bool			set_size:1;	/* Must update i_size */
+ 	bool			op_unlinked:1;	/* True if file was unlinked by op */
+ 	bool			speculative:1;	/* T if speculative status fetch (no vnode lock) */
++	bool			modification:1;	/* Set if the content gets modified */
+ };
+ 
+ /*
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index c9195fc67fd8f..d37b5cfcf28f5 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -450,6 +450,7 @@ static int afs_store_data(struct address_space *mapping,
+ 	afs_op_set_vnode(op, 0, vnode);
+ 	op->file[0].dv_delta = 1;
+ 	op->store.mapping = mapping;
++	op->file[0].modification = true;
+ 	op->store.first = first;
+ 	op->store.last = last;
+ 	op->store.first_offset = offset;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index dc1b0f6fd49b5..369ec81033d67 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -222,7 +222,7 @@ struct fixed_file_data {
+ struct io_buffer {
+ 	struct list_head list;
+ 	__u64 addr;
+-	__s32 len;
++	__u32 len;
+ 	__u16 bid;
+ };
+ 
+@@ -527,7 +527,7 @@ struct io_splice {
+ struct io_provide_buf {
+ 	struct file			*file;
+ 	__u64				addr;
+-	__s32				len;
++	__u32				len;
+ 	__u32				bgid;
+ 	__u16				nbufs;
+ 	__u16				bid;
+@@ -3996,7 +3996,7 @@ static int io_remove_buffers(struct io_kiocb *req, bool force_nonblock,
+ static int io_provide_buffers_prep(struct io_kiocb *req,
+ 				   const struct io_uring_sqe *sqe)
+ {
+-	unsigned long size;
++	unsigned long size, tmp_check;
+ 	struct io_provide_buf *p = &req->pbuf;
+ 	u64 tmp;
+ 
+@@ -4010,6 +4010,12 @@ static int io_provide_buffers_prep(struct io_kiocb *req,
+ 	p->addr = READ_ONCE(sqe->addr);
+ 	p->len = READ_ONCE(sqe->len);
+ 
++	if (check_mul_overflow((unsigned long)p->len, (unsigned long)p->nbufs,
++				&size))
++		return -EOVERFLOW;
++	if (check_add_overflow((unsigned long)p->addr, size, &tmp_check))
++		return -EOVERFLOW;
++
+ 	size = (unsigned long)p->len * p->nbufs;
+ 	if (!access_ok(u64_to_user_ptr(p->addr), size))
+ 		return -EFAULT;
+@@ -4034,7 +4040,7 @@ static int io_add_buffers(struct io_provide_buf *pbuf, struct io_buffer **head)
+ 			break;
+ 
+ 		buf->addr = addr;
+-		buf->len = pbuf->len;
++		buf->len = min_t(__u32, pbuf->len, MAX_RW_COUNT);
+ 		buf->bid = bid;
+ 		addr += pbuf->len;
+ 		bid++;
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 2e68cea148e0d..00440337efc1f 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1425,7 +1425,7 @@ static __be32 nfsd4_do_copy(struct nfsd4_copy *copy, bool sync)
+ 	return status;
+ }
+ 
+-static int dup_copy_fields(struct nfsd4_copy *src, struct nfsd4_copy *dst)
++static void dup_copy_fields(struct nfsd4_copy *src, struct nfsd4_copy *dst)
+ {
+ 	dst->cp_src_pos = src->cp_src_pos;
+ 	dst->cp_dst_pos = src->cp_dst_pos;
+@@ -1444,8 +1444,6 @@ static int dup_copy_fields(struct nfsd4_copy *src, struct nfsd4_copy *dst)
+ 	memcpy(&dst->stateid, &src->stateid, sizeof(src->stateid));
+ 	memcpy(&dst->c_fh, &src->c_fh, sizeof(src->c_fh));
+ 	dst->ss_mnt = src->ss_mnt;
+-
+-	return 0;
+ }
+ 
+ static void cleanup_async_copy(struct nfsd4_copy *copy)
+@@ -1537,11 +1535,9 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		if (!nfs4_init_copy_state(nn, copy))
+ 			goto out_err;
+ 		refcount_set(&async_copy->refcount, 1);
+-		memcpy(&copy->cp_res.cb_stateid, &copy->cp_stateid,
+-			sizeof(copy->cp_stateid));
+-		status = dup_copy_fields(copy, async_copy);
+-		if (status)
+-			goto out_err;
++		memcpy(&copy->cp_res.cb_stateid, &copy->cp_stateid.stid,
++			sizeof(copy->cp_res.cb_stateid));
++		dup_copy_fields(copy, async_copy);
+ 		async_copy->copy_task = kthread_create(nfsd4_do_async_copy,
+ 				async_copy, "%s", "copy thread");
+ 		if (IS_ERR(async_copy->copy_task))
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 89d5d59c7d7a4..e466c58f9ec4c 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -928,7 +928,7 @@ static int ovl_copy_up_one(struct dentry *parent, struct dentry *dentry,
+ static int ovl_copy_up_flags(struct dentry *dentry, int flags)
+ {
+ 	int err = 0;
+-	const struct cred *old_cred = ovl_override_creds(dentry->d_sb);
++	const struct cred *old_cred;
+ 	bool disconnected = (dentry->d_flags & DCACHE_DISCONNECTED);
+ 
+ 	/*
+@@ -939,6 +939,7 @@ static int ovl_copy_up_flags(struct dentry *dentry, int flags)
+ 	if (WARN_ON(disconnected && d_is_dir(dentry)))
+ 		return -EIO;
+ 
++	old_cred = ovl_override_creds(dentry->d_sb);
+ 	while (!err) {
+ 		struct dentry *next;
+ 		struct dentry *parent = NULL;
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 9f7af98ae2005..e43dc68bd1b54 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -308,9 +308,6 @@ int ovl_check_setxattr(struct dentry *dentry, struct dentry *upperdentry,
+ 		       enum ovl_xattr ox, const void *value, size_t size,
+ 		       int xerr);
+ int ovl_set_impure(struct dentry *dentry, struct dentry *upperdentry);
+-void ovl_set_flag(unsigned long flag, struct inode *inode);
+-void ovl_clear_flag(unsigned long flag, struct inode *inode);
+-bool ovl_test_flag(unsigned long flag, struct inode *inode);
+ bool ovl_inuse_trylock(struct dentry *dentry);
+ void ovl_inuse_unlock(struct dentry *dentry);
+ bool ovl_is_inuse(struct dentry *dentry);
+@@ -324,6 +321,21 @@ char *ovl_get_redirect_xattr(struct ovl_fs *ofs, struct dentry *dentry,
+ 			     int padding);
+ int ovl_sync_status(struct ovl_fs *ofs);
+ 
++static inline void ovl_set_flag(unsigned long flag, struct inode *inode)
++{
++	set_bit(flag, &OVL_I(inode)->flags);
++}
++
++static inline void ovl_clear_flag(unsigned long flag, struct inode *inode)
++{
++	clear_bit(flag, &OVL_I(inode)->flags);
++}
++
++static inline bool ovl_test_flag(unsigned long flag, struct inode *inode)
++{
++	return test_bit(flag, &OVL_I(inode)->flags);
++}
++
+ static inline bool ovl_is_impuredir(struct super_block *sb,
+ 				    struct dentry *dentry)
+ {
+@@ -427,6 +439,18 @@ int ovl_workdir_cleanup(struct inode *dir, struct vfsmount *mnt,
+ 			struct dentry *dentry, int level);
+ int ovl_indexdir_cleanup(struct ovl_fs *ofs);
+ 
++/*
++ * Can we iterate real dir directly?
++ *
++ * Non-merge dir may contain whiteouts from a time it was a merge upper, before
++ * lower dir was removed under it and possibly before it was rotated from upper
++ * to lower layer.
++ */
++static inline bool ovl_dir_is_real(struct dentry *dir)
++{
++	return !ovl_test_flag(OVL_WHITEOUTS, d_inode(dir));
++}
++
+ /* inode.c */
+ int ovl_set_nlink_upper(struct dentry *dentry);
+ int ovl_set_nlink_lower(struct dentry *dentry);
+diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
+index f404a78e6b607..cc1e802570644 100644
+--- a/fs/overlayfs/readdir.c
++++ b/fs/overlayfs/readdir.c
+@@ -319,18 +319,6 @@ static inline int ovl_dir_read(struct path *realpath,
+ 	return err;
+ }
+ 
+-/*
+- * Can we iterate real dir directly?
+- *
+- * Non-merge dir may contain whiteouts from a time it was a merge upper, before
+- * lower dir was removed under it and possibly before it was rotated from upper
+- * to lower layer.
+- */
+-static bool ovl_dir_is_real(struct dentry *dir)
+-{
+-	return !ovl_test_flag(OVL_WHITEOUTS, d_inode(dir));
+-}
+-
+ static void ovl_dir_reset(struct file *file)
+ {
+ 	struct ovl_dir_file *od = file->private_data;
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 6e7b8c882045c..e8b14d2c180c6 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -419,18 +419,20 @@ void ovl_inode_update(struct inode *inode, struct dentry *upperdentry)
+ 	}
+ }
+ 
+-static void ovl_dentry_version_inc(struct dentry *dentry, bool impurity)
++static void ovl_dir_version_inc(struct dentry *dentry, bool impurity)
+ {
+ 	struct inode *inode = d_inode(dentry);
+ 
+ 	WARN_ON(!inode_is_locked(inode));
++	WARN_ON(!d_is_dir(dentry));
+ 	/*
+-	 * Version is used by readdir code to keep cache consistent.  For merge
+-	 * dirs all changes need to be noted.  For non-merge dirs, cache only
+-	 * contains impure (ones which have been copied up and have origins)
+-	 * entries, so only need to note changes to impure entries.
++	 * Version is used by readdir code to keep cache consistent.
++	 * For merge dirs (or dirs with origin) all changes need to be noted.
++	 * For non-merge dirs, cache contains only impure entries (i.e. ones
++	 * which have been copied up and have origins), so only need to note
++	 * changes to impure entries.
+ 	 */
+-	if (OVL_TYPE_MERGE(ovl_path_type(dentry)) || impurity)
++	if (!ovl_dir_is_real(dentry) || impurity)
+ 		OVL_I(inode)->version++;
+ }
+ 
+@@ -439,7 +441,7 @@ void ovl_dir_modified(struct dentry *dentry, bool impurity)
+ 	/* Copy mtime/ctime */
+ 	ovl_copyattr(d_inode(ovl_dentry_upper(dentry)), d_inode(dentry));
+ 
+-	ovl_dentry_version_inc(dentry, impurity);
++	ovl_dir_version_inc(dentry, impurity);
+ }
+ 
+ u64 ovl_dentry_version_get(struct dentry *dentry)
+@@ -634,21 +636,6 @@ int ovl_set_impure(struct dentry *dentry, struct dentry *upperdentry)
+ 	return err;
+ }
+ 
+-void ovl_set_flag(unsigned long flag, struct inode *inode)
+-{
+-	set_bit(flag, &OVL_I(inode)->flags);
+-}
+-
+-void ovl_clear_flag(unsigned long flag, struct inode *inode)
+-{
+-	clear_bit(flag, &OVL_I(inode)->flags);
+-}
+-
+-bool ovl_test_flag(unsigned long flag, struct inode *inode)
+-{
+-	return test_bit(flag, &OVL_I(inode)->flags);
+-}
+-
+ /**
+  * Caller must hold a reference to inode to prevent it from being freed while
+  * it is marked inuse.
+diff --git a/fs/proc/array.c b/fs/proc/array.c
+index 65ec2029fa802..18a4588c35be6 100644
+--- a/fs/proc/array.c
++++ b/fs/proc/array.c
+@@ -341,8 +341,10 @@ static inline void task_seccomp(struct seq_file *m, struct task_struct *p)
+ 	seq_put_decimal_ull(m, "NoNewPrivs:\t", task_no_new_privs(p));
+ #ifdef CONFIG_SECCOMP
+ 	seq_put_decimal_ull(m, "\nSeccomp:\t", p->seccomp.mode);
++#ifdef CONFIG_SECCOMP_FILTER
+ 	seq_put_decimal_ull(m, "\nSeccomp_filters:\t",
+ 			    atomic_read(&p->seccomp.filter_count));
++#endif
+ #endif
+ 	seq_puts(m, "\nSpeculation_Store_Bypass:\t");
+ 	switch (arch_prctl_spec_ctrl_get(p, PR_SPEC_STORE_BYPASS)) {
+diff --git a/fs/xfs/libxfs/xfs_attr.c b/fs/xfs/libxfs/xfs_attr.c
+index fd8e6418a0d31..96ac7e562b871 100644
+--- a/fs/xfs/libxfs/xfs_attr.c
++++ b/fs/xfs/libxfs/xfs_attr.c
+@@ -928,6 +928,7 @@ restart:
+ 	 * Search to see if name already exists, and get back a pointer
+ 	 * to where it should go.
+ 	 */
++	error = 0;
+ 	retval = xfs_attr_node_hasname(args, &state);
+ 	if (retval != -ENOATTR && retval != -EEXIST)
+ 		goto out;
+diff --git a/include/crypto/internal/poly1305.h b/include/crypto/internal/poly1305.h
+index 064e52ca52480..196aa769f2968 100644
+--- a/include/crypto/internal/poly1305.h
++++ b/include/crypto/internal/poly1305.h
+@@ -18,7 +18,8 @@
+  * only the ε-almost-∆-universal hash function (not the full MAC) is computed.
+  */
+ 
+-void poly1305_core_setkey(struct poly1305_core_key *key, const u8 *raw_key);
++void poly1305_core_setkey(struct poly1305_core_key *key,
++			  const u8 raw_key[POLY1305_BLOCK_SIZE]);
+ static inline void poly1305_core_init(struct poly1305_state *state)
+ {
+ 	*state = (struct poly1305_state){};
+diff --git a/include/crypto/poly1305.h b/include/crypto/poly1305.h
+index f1f67fc749cf4..090692ec3bc73 100644
+--- a/include/crypto/poly1305.h
++++ b/include/crypto/poly1305.h
+@@ -58,8 +58,10 @@ struct poly1305_desc_ctx {
+ 	};
+ };
+ 
+-void poly1305_init_arch(struct poly1305_desc_ctx *desc, const u8 *key);
+-void poly1305_init_generic(struct poly1305_desc_ctx *desc, const u8 *key);
++void poly1305_init_arch(struct poly1305_desc_ctx *desc,
++			const u8 key[POLY1305_KEY_SIZE]);
++void poly1305_init_generic(struct poly1305_desc_ctx *desc,
++			   const u8 key[POLY1305_KEY_SIZE]);
+ 
+ static inline void poly1305_init(struct poly1305_desc_ctx *desc, const u8 *key)
+ {
+diff --git a/include/keys/trusted-type.h b/include/keys/trusted-type.h
+index a94c03a61d8f9..b2ed3481c6a02 100644
+--- a/include/keys/trusted-type.h
++++ b/include/keys/trusted-type.h
+@@ -30,6 +30,7 @@ struct trusted_key_options {
+ 	uint16_t keytype;
+ 	uint32_t keyhandle;
+ 	unsigned char keyauth[TPM_DIGEST_SIZE];
++	uint32_t blobauth_len;
+ 	unsigned char blobauth[TPM_DIGEST_SIZE];
+ 	uint32_t pcrinfo_len;
+ 	unsigned char pcrinfo[MAX_PCRINFO_SIZE];
+diff --git a/include/linux/firmware/xlnx-zynqmp.h b/include/linux/firmware/xlnx-zynqmp.h
+index 41a1bab98b7e1..4930ece07fd88 100644
+--- a/include/linux/firmware/xlnx-zynqmp.h
++++ b/include/linux/firmware/xlnx-zynqmp.h
+@@ -354,111 +354,131 @@ int zynqmp_pm_read_pggs(u32 index, u32 *value);
+ int zynqmp_pm_system_shutdown(const u32 type, const u32 subtype);
+ int zynqmp_pm_set_boot_health_status(u32 value);
+ #else
+-static inline struct zynqmp_eemi_ops *zynqmp_pm_get_eemi_ops(void)
+-{
+-	return ERR_PTR(-ENODEV);
+-}
+ static inline int zynqmp_pm_get_api_version(u32 *version)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_get_chipid(u32 *idcode, u32 *version)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_query_data(struct zynqmp_pm_query_data qdata,
+ 				       u32 *out)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_clock_enable(u32 clock_id)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_clock_disable(u32 clock_id)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_clock_getstate(u32 clock_id, u32 *state)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_clock_setdivider(u32 clock_id, u32 divider)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_clock_getdivider(u32 clock_id, u32 *divider)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_clock_setrate(u32 clock_id, u64 rate)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_clock_getrate(u32 clock_id, u64 *rate)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_clock_setparent(u32 clock_id, u32 parent_id)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_clock_getparent(u32 clock_id, u32 *parent_id)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_set_pll_frac_mode(u32 clk_id, u32 mode)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_get_pll_frac_mode(u32 clk_id, u32 *mode)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_set_pll_frac_data(u32 clk_id, u32 data)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_get_pll_frac_data(u32 clk_id, u32 *data)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_set_sd_tapdelay(u32 node_id, u32 type, u32 value)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_sd_dll_reset(u32 node_id, u32 type)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_reset_assert(const enum zynqmp_pm_reset reset,
+ 			   const enum zynqmp_pm_reset_action assert_flag)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_reset_get_status(const enum zynqmp_pm_reset reset,
+ 					     u32 *status)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_init_finalize(void)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_set_suspend_mode(u32 mode)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_request_node(const u32 node, const u32 capabilities,
+ 					 const u32 qos,
+ 					 const enum zynqmp_pm_request_ack ack)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_release_node(const u32 node)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_set_requirement(const u32 node,
+ 					const u32 capabilities,
+ 					const u32 qos,
+@@ -466,39 +486,48 @@ static inline int zynqmp_pm_set_requirement(const u32 node,
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_aes_engine(const u64 address, u32 *out)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_fpga_load(const u64 address, const u32 size,
+ 				      const u32 flags)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_fpga_get_status(u32 *value)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_write_ggs(u32 index, u32 value)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_read_ggs(u32 index, u32 *value)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_write_pggs(u32 index, u32 value)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_read_pggs(u32 index, u32 *value)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_system_shutdown(const u32 type, const u32 subtype)
+ {
+ 	return -ENODEV;
+ }
++
+ static inline int zynqmp_pm_set_boot_health_status(u32 value)
+ {
+ 	return -ENODEV;
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index 4a7e295c36401..8e144306e2622 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -637,8 +637,17 @@ int gpiochip_irqchip_add_key(struct gpio_chip *gc,
+ bool gpiochip_irqchip_irq_valid(const struct gpio_chip *gc,
+ 				unsigned int offset);
+ 
++#ifdef CONFIG_GPIOLIB_IRQCHIP
+ int gpiochip_irqchip_add_domain(struct gpio_chip *gc,
+ 				struct irq_domain *domain);
++#else
++static inline int gpiochip_irqchip_add_domain(struct gpio_chip *gc,
++					      struct irq_domain *domain)
++{
++	WARN_ON(1);
++	return -EINVAL;
++}
++#endif
+ 
+ #ifdef CONFIG_LOCKDEP
+ 
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 58684657960bf..8578db50ad734 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -262,6 +262,8 @@ struct hid_item {
+ #define HID_CP_SELECTION	0x000c0080
+ #define HID_CP_MEDIASELECTION	0x000c0087
+ #define HID_CP_SELECTDISC	0x000c00ba
++#define HID_CP_VOLUMEUP		0x000c00e9
++#define HID_CP_VOLUMEDOWN	0x000c00ea
+ #define HID_CP_PLAYBACKSPEED	0x000c00f1
+ #define HID_CP_PROXIMITY	0x000c0109
+ #define HID_CP_SPEAKERSYSTEM	0x000c0160
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index 94522685a0d94..c00ee3458a919 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -42,6 +42,8 @@
+ 
+ #define DMA_FL_PTE_PRESENT	BIT_ULL(0)
+ #define DMA_FL_PTE_US		BIT_ULL(2)
++#define DMA_FL_PTE_ACCESS	BIT_ULL(5)
++#define DMA_FL_PTE_DIRTY	BIT_ULL(6)
+ #define DMA_FL_PTE_XD		BIT_ULL(63)
+ 
+ #define ADDR_WIDTH_5LEVEL	(57)
+@@ -367,6 +369,7 @@ enum {
+ /* PASID cache invalidation granu */
+ #define QI_PC_ALL_PASIDS	0
+ #define QI_PC_PASID_SEL		1
++#define QI_PC_GLOBAL		3
+ 
+ #define QI_EIOTLB_ADDR(addr)	((u64)(addr) & VTD_PAGE_MASK)
+ #define QI_EIOTLB_IH(ih)	(((u64)ih) << 6)
+diff --git a/include/linux/iommu.h b/include/linux/iommu.h
+index f11f5072af5dc..e90c267e7f3e1 100644
+--- a/include/linux/iommu.h
++++ b/include/linux/iommu.h
+@@ -544,7 +544,7 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
+ 	 * structure can be rewritten.
+ 	 */
+ 	if (gather->pgsize != size ||
+-	    end < gather->start || start > gather->end) {
++	    end + 1 < gather->start || start > gather->end + 1) {
+ 		if (gather->pgsize)
+ 			iommu_iotlb_sync(domain, gather);
+ 		gather->pgsize = size;
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 7f2e2a09ebbd9..a2278b9ff57d2 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -190,8 +190,8 @@ int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
+ 		    int len, void *val);
+ int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
+ 			    int len, struct kvm_io_device *dev);
+-void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+-			       struct kvm_io_device *dev);
++int kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
++			      struct kvm_io_device *dev);
+ struct kvm_io_device *kvm_io_bus_get_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ 					 gpa_t addr);
+ 
+diff --git a/include/linux/platform_device.h b/include/linux/platform_device.h
+index 77a2aada106dc..17f9cd5626c83 100644
+--- a/include/linux/platform_device.h
++++ b/include/linux/platform_device.h
+@@ -350,4 +350,7 @@ static inline int is_sh_early_platform_device(struct platform_device *pdev)
+ }
+ #endif /* CONFIG_SUPERH */
+ 
++/* For now only SuperH uses it */
++void early_platform_cleanup(void);
++
+ #endif /* _PLATFORM_DEVICE_H_ */
+diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
+index b492ae00cc908..6c08a085367bf 100644
+--- a/include/linux/pm_runtime.h
++++ b/include/linux/pm_runtime.h
+@@ -265,7 +265,7 @@ static inline void pm_runtime_no_callbacks(struct device *dev) {}
+ static inline void pm_runtime_irq_safe(struct device *dev) {}
+ static inline bool pm_runtime_is_irq_safe(struct device *dev) { return false; }
+ 
+-static inline bool pm_runtime_callbacks_present(struct device *dev) { return false; }
++static inline bool pm_runtime_has_no_callbacks(struct device *dev) { return false; }
+ static inline void pm_runtime_mark_last_busy(struct device *dev) {}
+ static inline void __pm_runtime_use_autosuspend(struct device *dev,
+ 						bool use) {}
+diff --git a/include/linux/smp.h b/include/linux/smp.h
+index 9f13966d3d92d..04f44e0aa2e0b 100644
+--- a/include/linux/smp.h
++++ b/include/linux/smp.h
+@@ -74,7 +74,7 @@ void on_each_cpu_cond(smp_cond_func_t cond_func, smp_call_func_t func,
+ void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
+ 			   void *info, bool wait, const struct cpumask *mask);
+ 
+-int smp_call_function_single_async(int cpu, call_single_data_t *csd);
++int smp_call_function_single_async(int cpu, struct __call_single_data *csd);
+ 
+ #ifdef CONFIG_SMP
+ 
+diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
+index b390fdac15876..2d906b9c14992 100644
+--- a/include/linux/spi/spi.h
++++ b/include/linux/spi/spi.h
+@@ -511,6 +511,9 @@ struct spi_controller {
+ 
+ #define SPI_MASTER_GPIO_SS		BIT(5)	/* GPIO CS must select slave */
+ 
++	/* flag indicating this is a non-devres managed controller */
++	bool			devm_allocated;
++
+ 	/* flag indicating this is an SPI slave controller */
+ 	bool			slave;
+ 
+diff --git a/include/linux/tty.h b/include/linux/tty.h
+index bc8caac390fce..5972f43b9d5ae 100644
+--- a/include/linux/tty.h
++++ b/include/linux/tty.h
+@@ -303,7 +303,6 @@ struct tty_struct {
+ 	spinlock_t flow_lock;
+ 	/* Termios values are protected by the termios rwsem */
+ 	struct ktermios termios, termios_locked;
+-	struct termiox *termiox;	/* May be NULL for unsupported */
+ 	char name[64];
+ 	struct pid *pgrp;		/* Protected by ctrl lock */
+ 	/*
+diff --git a/include/linux/tty_driver.h b/include/linux/tty_driver.h
+index 358446247ccdc..2f719b471d524 100644
+--- a/include/linux/tty_driver.h
++++ b/include/linux/tty_driver.h
+@@ -224,19 +224,11 @@
+  *	line). See tty_do_resize() if you need to wrap the standard method
+  *	in your own logic - the usual case.
+  *
+- * void (*set_termiox)(struct tty_struct *tty, struct termiox *new);
+- *
+- *	Called when the device receives a termiox based ioctl. Passes down
+- *	the requested data from user space. This method will not be invoked
+- *	unless the tty also has a valid tty->termiox pointer.
+- *
+- *	Optional: Called under the termios lock
+- *
+  * int (*get_icount)(struct tty_struct *tty, struct serial_icounter *icount);
+  *
+  *	Called when the device receives a TIOCGICOUNT ioctl. Passed a kernel
+  *	structure to complete. This method is optional and will only be called
+- *	if provided (otherwise EINVAL will be returned).
++ *	if provided (otherwise ENOTTY will be returned).
+  */
+ 
+ #include <linux/export.h>
+@@ -285,7 +277,6 @@ struct tty_operations {
+ 	int (*tiocmset)(struct tty_struct *tty,
+ 			unsigned int set, unsigned int clear);
+ 	int (*resize)(struct tty_struct *tty, struct winsize *ws);
+-	int (*set_termiox)(struct tty_struct *tty, struct termiox *tnew);
+ 	int (*get_icount)(struct tty_struct *tty,
+ 				struct serial_icounter_struct *icount);
+ 	int  (*get_serial)(struct tty_struct *tty, struct serial_struct *p);
+diff --git a/include/linux/udp.h b/include/linux/udp.h
+index aa84597bdc33c..ae58ff3b6b5b8 100644
+--- a/include/linux/udp.h
++++ b/include/linux/udp.h
+@@ -51,7 +51,9 @@ struct udp_sock {
+ 					   * different encapsulation layer set
+ 					   * this
+ 					   */
+-			 gro_enabled:1;	/* Can accept GRO packets */
++			 gro_enabled:1,	/* Request GRO aggregation */
++			 accept_udp_l4:1,
++			 accept_udp_fraglist:1;
+ 	/*
+ 	 * Following member retains the information to create a UDP header
+ 	 * when the socket is uncorked.
+@@ -131,8 +133,16 @@ static inline void udp_cmsg_recv(struct msghdr *msg, struct sock *sk,
+ 
+ static inline bool udp_unexpected_gso(struct sock *sk, struct sk_buff *skb)
+ {
+-	return !udp_sk(sk)->gro_enabled && skb_is_gso(skb) &&
+-	       skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4;
++	if (!skb_is_gso(skb))
++		return false;
++
++	if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 && !udp_sk(sk)->accept_udp_l4)
++		return true;
++
++	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST && !udp_sk(sk)->accept_udp_fraglist)
++		return true;
++
++	return false;
+ }
+ 
+ #define udp_portaddr_for_each_entry(__sk, list) \
+diff --git a/include/net/addrconf.h b/include/net/addrconf.h
+index 18f783dcd55fa..78ea3e332688f 100644
+--- a/include/net/addrconf.h
++++ b/include/net/addrconf.h
+@@ -233,7 +233,6 @@ void ipv6_mc_unmap(struct inet6_dev *idev);
+ void ipv6_mc_remap(struct inet6_dev *idev);
+ void ipv6_mc_init_dev(struct inet6_dev *idev);
+ void ipv6_mc_destroy_dev(struct inet6_dev *idev);
+-int ipv6_mc_check_icmpv6(struct sk_buff *skb);
+ int ipv6_mc_check_mld(struct sk_buff *skb);
+ void addrconf_dad_failure(struct sk_buff *skb, struct inet6_ifaddr *ifp);
+ 
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 9873e1c8cd163..df611c8b6b595 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -669,6 +669,7 @@ struct hci_chan {
+ 	struct sk_buff_head data_q;
+ 	unsigned int	sent;
+ 	__u8		state;
++	bool		amp;
+ };
+ 
+ struct hci_conn_params {
+diff --git a/include/net/netfilter/nf_tables_offload.h b/include/net/netfilter/nf_tables_offload.h
+index 1d34fe154fe0b..434a6158852f3 100644
+--- a/include/net/netfilter/nf_tables_offload.h
++++ b/include/net/netfilter/nf_tables_offload.h
+@@ -4,11 +4,16 @@
+ #include <net/flow_offload.h>
+ #include <net/netfilter/nf_tables.h>
+ 
++enum nft_offload_reg_flags {
++	NFT_OFFLOAD_F_NETWORK2HOST	= (1 << 0),
++};
++
+ struct nft_offload_reg {
+ 	u32		key;
+ 	u32		len;
+ 	u32		base_offset;
+ 	u32		offset;
++	u32		flags;
+ 	struct nft_data data;
+ 	struct nft_data	mask;
+ };
+@@ -45,6 +50,7 @@ struct nft_flow_key {
+ 	struct flow_dissector_key_ports			tp;
+ 	struct flow_dissector_key_ip			ip;
+ 	struct flow_dissector_key_vlan			vlan;
++	struct flow_dissector_key_vlan			cvlan;
+ 	struct flow_dissector_key_eth_addrs		eth_addrs;
+ 	struct flow_dissector_key_meta			meta;
+ } __aligned(BITS_PER_LONG / 8); /* Ensure that we can do comparisons as longs. */
+@@ -71,13 +77,17 @@ struct nft_flow_rule *nft_flow_rule_create(struct net *net, const struct nft_rul
+ void nft_flow_rule_destroy(struct nft_flow_rule *flow);
+ int nft_flow_rule_offload_commit(struct net *net);
+ 
+-#define NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg)		\
++#define NFT_OFFLOAD_MATCH_FLAGS(__key, __base, __field, __len, __reg, __flags)	\
+ 	(__reg)->base_offset	=					\
+ 		offsetof(struct nft_flow_key, __base);			\
+ 	(__reg)->offset		=					\
+ 		offsetof(struct nft_flow_key, __base.__field);		\
+ 	(__reg)->len		= __len;				\
+ 	(__reg)->key		= __key;				\
++	(__reg)->flags		= __flags;
++
++#define NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg)		\
++	NFT_OFFLOAD_MATCH_FLAGS(__key, __base, __field, __len, __reg, 0)
+ 
+ #define NFT_OFFLOAD_MATCH_EXACT(__key, __base, __field, __len, __reg)	\
+ 	NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg)		\
+diff --git a/include/uapi/linux/if_packet.h b/include/uapi/linux/if_packet.h
+index 3d884d68eb301..c07caf7b40db6 100644
+--- a/include/uapi/linux/if_packet.h
++++ b/include/uapi/linux/if_packet.h
+@@ -2,6 +2,7 @@
+ #ifndef __LINUX_IF_PACKET_H
+ #define __LINUX_IF_PACKET_H
+ 
++#include <asm/byteorder.h>
+ #include <linux/types.h>
+ 
+ struct sockaddr_pkt {
+@@ -296,6 +297,17 @@ struct packet_mreq {
+ 	unsigned char	mr_address[8];
+ };
+ 
++struct fanout_args {
++#if defined(__LITTLE_ENDIAN_BITFIELD)
++	__u16		id;
++	__u16		type_flags;
++#else
++	__u16		type_flags;
++	__u16		id;
++#endif
++	__u32		max_num_members;
++};
++
+ #define PACKET_MR_MULTICAST	0
+ #define PACKET_MR_PROMISC	1
+ #define PACKET_MR_ALLMULTI	2
+diff --git a/include/uapi/linux/tty_flags.h b/include/uapi/linux/tty_flags.h
+index 900a32e634247..6a3ac496a56c1 100644
+--- a/include/uapi/linux/tty_flags.h
++++ b/include/uapi/linux/tty_flags.h
+@@ -39,7 +39,7 @@
+  * WARNING: These flags are no longer used and have been superceded by the
+  *	    TTY_PORT_ flags in the iflags field (and not userspace-visible)
+  */
+-#ifndef _KERNEL_
++#ifndef __KERNEL__
+ #define ASYNCB_INITIALIZED	31 /* Serial port was initialized */
+ #define ASYNCB_SUSPENDED	30 /* Serial port is suspended */
+ #define ASYNCB_NORMAL_ACTIVE	29 /* Normal device is active */
+@@ -81,7 +81,7 @@
+ #define ASYNC_SPD_WARP		(ASYNC_SPD_HI|ASYNC_SPD_SHI)
+ #define ASYNC_SPD_MASK		(ASYNC_SPD_HI|ASYNC_SPD_VHI|ASYNC_SPD_SHI)
+ 
+-#ifndef _KERNEL_
++#ifndef __KERNEL__
+ /* These flags are no longer used (and were always masked from userspace) */
+ #define ASYNC_INITIALIZED	(1U << ASYNCB_INITIALIZED)
+ #define ASYNC_NORMAL_ACTIVE	(1U << ASYNCB_NORMAL_ACTIVE)
+diff --git a/init/init_task.c b/init/init_task.c
+index 16d14c2ebb552..5fa18ed59d33e 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -210,7 +210,7 @@ struct task_struct init_task
+ #ifdef CONFIG_SECURITY
+ 	.security	= NULL,
+ #endif
+-#ifdef CONFIG_SECCOMP
++#ifdef CONFIG_SECCOMP_FILTER
+ 	.seccomp	= { .filter_count = ATOMIC_INIT(0) },
+ #endif
+ };
+diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
+index 31cb04a4dd2d5..add0b34f2b340 100644
+--- a/kernel/bpf/ringbuf.c
++++ b/kernel/bpf/ringbuf.c
+@@ -240,25 +240,20 @@ static int ringbuf_map_get_next_key(struct bpf_map *map, void *key,
+ 	return -ENOTSUPP;
+ }
+ 
+-static size_t bpf_ringbuf_mmap_page_cnt(const struct bpf_ringbuf *rb)
+-{
+-	size_t data_pages = (rb->mask + 1) >> PAGE_SHIFT;
+-
+-	/* consumer page + producer page + 2 x data pages */
+-	return RINGBUF_POS_PAGES + 2 * data_pages;
+-}
+-
+ static int ringbuf_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
+ {
+ 	struct bpf_ringbuf_map *rb_map;
+-	size_t mmap_sz;
+ 
+ 	rb_map = container_of(map, struct bpf_ringbuf_map, map);
+-	mmap_sz = bpf_ringbuf_mmap_page_cnt(rb_map->rb) << PAGE_SHIFT;
+-
+-	if (vma->vm_pgoff * PAGE_SIZE + (vma->vm_end - vma->vm_start) > mmap_sz)
+-		return -EINVAL;
+ 
++	if (vma->vm_flags & VM_WRITE) {
++		/* allow writable mapping for the consumer_pos only */
++		if (vma->vm_pgoff != 0 || vma->vm_end - vma->vm_start != PAGE_SIZE)
++			return -EPERM;
++	} else {
++		vma->vm_flags &= ~VM_MAYWRITE;
++	}
++	/* remap_vmalloc_range() checks size and offset constraints */
+ 	return remap_vmalloc_range(vma, rb_map->rb,
+ 				   vma->vm_pgoff + RINGBUF_PGOFF);
+ }
+@@ -334,6 +329,9 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
+ 		return NULL;
+ 
+ 	len = round_up(size + BPF_RINGBUF_HDR_SZ, 8);
++	if (len > rb->mask + 1)
++		return NULL;
++
+ 	cons_pos = smp_load_acquire(&rb->consumer_pos);
+ 
+ 	if (in_nmi()) {
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index b6656d181c9e7..69730943eaf80 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1303,9 +1303,7 @@ static bool __reg64_bound_s32(s64 a)
+ 
+ static bool __reg64_bound_u32(u64 a)
+ {
+-	if (a > U32_MIN && a < U32_MAX)
+-		return true;
+-	return false;
++	return a > U32_MIN && a < U32_MAX;
+ }
+ 
+ static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
+@@ -1316,10 +1314,10 @@ static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
+ 		reg->s32_min_value = (s32)reg->smin_value;
+ 		reg->s32_max_value = (s32)reg->smax_value;
+ 	}
+-	if (__reg64_bound_u32(reg->umin_value))
++	if (__reg64_bound_u32(reg->umin_value) && __reg64_bound_u32(reg->umax_value)) {
+ 		reg->u32_min_value = (u32)reg->umin_value;
+-	if (__reg64_bound_u32(reg->umax_value))
+ 		reg->u32_max_value = (u32)reg->umax_value;
++	}
+ 
+ 	/* Intersecting with the old var_off might have improved our bounds
+ 	 * slightly.  e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
+@@ -6343,11 +6341,10 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
+ 	s32 smin_val = src_reg->s32_min_value;
+ 	u32 umax_val = src_reg->u32_max_value;
+ 
+-	/* Assuming scalar64_min_max_and will be called so its safe
+-	 * to skip updating register for known 32-bit case.
+-	 */
+-	if (src_known && dst_known)
++	if (src_known && dst_known) {
++		__mark_reg32_known(dst_reg, var32_off.value);
+ 		return;
++	}
+ 
+ 	/* We get our minimum from the var_off, since that's inherently
+ 	 * bitwise.  Our maximum is the minimum of the operands' maxima.
+@@ -6367,7 +6364,6 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
+ 		dst_reg->s32_min_value = dst_reg->u32_min_value;
+ 		dst_reg->s32_max_value = dst_reg->u32_max_value;
+ 	}
+-
+ }
+ 
+ static void scalar_min_max_and(struct bpf_reg_state *dst_reg,
+@@ -6414,11 +6410,10 @@ static void scalar32_min_max_or(struct bpf_reg_state *dst_reg,
+ 	s32 smin_val = src_reg->s32_min_value;
+ 	u32 umin_val = src_reg->u32_min_value;
+ 
+-	/* Assuming scalar64_min_max_or will be called so it is safe
+-	 * to skip updating register for known case.
+-	 */
+-	if (src_known && dst_known)
++	if (src_known && dst_known) {
++		__mark_reg32_known(dst_reg, var32_off.value);
+ 		return;
++	}
+ 
+ 	/* We get our maximum from the var_off, and our minimum is the
+ 	 * maximum of the operands' minima
+@@ -6483,11 +6478,10 @@ static void scalar32_min_max_xor(struct bpf_reg_state *dst_reg,
+ 	struct tnum var32_off = tnum_subreg(dst_reg->var_off);
+ 	s32 smin_val = src_reg->s32_min_value;
+ 
+-	/* Assuming scalar64_min_max_xor will be called so it is safe
+-	 * to skip updating register for known case.
+-	 */
+-	if (src_known && dst_known)
++	if (src_known && dst_known) {
++		__mark_reg32_known(dst_reg, var32_off.value);
+ 		return;
++	}
+ 
+ 	/* We get both minimum and maximum from the var32_off. */
+ 	dst_reg->u32_min_value = var32_off.value;
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 8a5cc76ecac96..61e250cdd7c9c 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1019,7 +1019,6 @@ noinstr void rcu_nmi_enter(void)
+ 	} else if (!in_nmi()) {
+ 		instrumentation_begin();
+ 		rcu_irq_enter_check_tick();
+-		instrumentation_end();
+ 	} else  {
+ 		instrumentation_begin();
+ 	}
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 3a150445e0cba..3c3554d9ee50b 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -321,7 +321,7 @@ void update_rq_clock(struct rq *rq)
+ }
+ 
+ static inline void
+-rq_csd_init(struct rq *rq, call_single_data_t *csd, smp_call_func_t func)
++rq_csd_init(struct rq *rq, struct __call_single_data *csd, smp_call_func_t func)
+ {
+ 	csd->flags = 0;
+ 	csd->func = func;
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 2357921580f9c..6264584b51c25 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -8,8 +8,6 @@
+  */
+ #include "sched.h"
+ 
+-static DEFINE_SPINLOCK(sched_debug_lock);
+-
+ /*
+  * This allows printing both to /proc/sched_debug and
+  * to the console
+@@ -470,16 +468,37 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
+ #endif
+ 
+ #ifdef CONFIG_CGROUP_SCHED
++static DEFINE_SPINLOCK(sched_debug_lock);
+ static char group_path[PATH_MAX];
+ 
+-static char *task_group_path(struct task_group *tg)
++static void task_group_path(struct task_group *tg, char *path, int plen)
+ {
+-	if (autogroup_path(tg, group_path, PATH_MAX))
+-		return group_path;
++	if (autogroup_path(tg, path, plen))
++		return;
+ 
+-	cgroup_path(tg->css.cgroup, group_path, PATH_MAX);
++	cgroup_path(tg->css.cgroup, path, plen);
++}
+ 
+-	return group_path;
++/*
++ * Only 1 SEQ_printf_task_group_path() caller can use the full length
++ * group_path[] for cgroup path. Other simultaneous callers will have
++ * to use a shorter stack buffer. A "..." suffix is appended at the end
++ * of the stack buffer so that it will show up in case the output length
++ * matches the given buffer size to indicate possible path name truncation.
++ */
++#define SEQ_printf_task_group_path(m, tg, fmt...)			\
++{									\
++	if (spin_trylock(&sched_debug_lock)) {				\
++		task_group_path(tg, group_path, sizeof(group_path));	\
++		SEQ_printf(m, fmt, group_path);				\
++		spin_unlock(&sched_debug_lock);				\
++	} else {							\
++		char buf[128];						\
++		char *bufend = buf + sizeof(buf) - 3;			\
++		task_group_path(tg, buf, bufend - buf);			\
++		strcpy(bufend - 1, "...");				\
++		SEQ_printf(m, fmt, buf);				\
++	}								\
+ }
+ #endif
+ 
+@@ -506,7 +525,7 @@ print_task(struct seq_file *m, struct rq *rq, struct task_struct *p)
+ 	SEQ_printf(m, " %d %d", task_node(p), task_numa_group_id(p));
+ #endif
+ #ifdef CONFIG_CGROUP_SCHED
+-	SEQ_printf(m, " %s", task_group_path(task_group(p)));
++	SEQ_printf_task_group_path(m, task_group(p), " %s")
+ #endif
+ 
+ 	SEQ_printf(m, "\n");
+@@ -543,7 +562,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
+ 
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ 	SEQ_printf(m, "\n");
+-	SEQ_printf(m, "cfs_rq[%d]:%s\n", cpu, task_group_path(cfs_rq->tg));
++	SEQ_printf_task_group_path(m, cfs_rq->tg, "cfs_rq[%d]:%s\n", cpu);
+ #else
+ 	SEQ_printf(m, "\n");
+ 	SEQ_printf(m, "cfs_rq[%d]:\n", cpu);
+@@ -614,7 +633,7 @@ void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
+ {
+ #ifdef CONFIG_RT_GROUP_SCHED
+ 	SEQ_printf(m, "\n");
+-	SEQ_printf(m, "rt_rq[%d]:%s\n", cpu, task_group_path(rt_rq->tg));
++	SEQ_printf_task_group_path(m, rt_rq->tg, "rt_rq[%d]:%s\n", cpu);
+ #else
+ 	SEQ_printf(m, "\n");
+ 	SEQ_printf(m, "rt_rq[%d]:\n", cpu);
+@@ -666,7 +685,6 @@ void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq)
+ static void print_cpu(struct seq_file *m, int cpu)
+ {
+ 	struct rq *rq = cpu_rq(cpu);
+-	unsigned long flags;
+ 
+ #ifdef CONFIG_X86
+ 	{
+@@ -717,13 +735,11 @@ do {									\
+ 	}
+ #undef P
+ 
+-	spin_lock_irqsave(&sched_debug_lock, flags);
+ 	print_cfs_stats(m, cpu);
+ 	print_rt_stats(m, cpu);
+ 	print_dl_stats(m, cpu);
+ 
+ 	print_rq(m, rq, cpu);
+-	spin_unlock_irqrestore(&sched_debug_lock, flags);
+ 	SEQ_printf(m, "\n");
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index a0239649c7419..c80d1a039d19a 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7735,8 +7735,7 @@ static int detach_tasks(struct lb_env *env)
+ 			 * scheduler fails to find a good waiting task to
+ 			 * migrate.
+ 			 */
+-
+-			if ((load >> env->sd->nr_balance_failed) > env->imbalance)
++			if (shr_bound(load, env->sd->nr_balance_failed) > env->imbalance)
+ 				goto next;
+ 
+ 			env->imbalance -= load;
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index fac1b121d1130..fdebfcbdfca94 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -205,6 +205,13 @@ static inline void update_avg(u64 *avg, u64 sample)
+ 	*avg += diff / 8;
+ }
+ 
++/*
++ * Shifting a value by an exponent greater *or equal* to the size of said value
++ * is UB; cap at size-1.
++ */
++#define shr_bound(val, shift)							\
++	(val >> min_t(typeof(shift), shift, BITS_PER_TYPE(typeof(val)) - 1))
++
+ /*
+  * !! For sched_setattr_nocheck() (kernel) only !!
+  *
+diff --git a/kernel/smp.c b/kernel/smp.c
+index 25240fb2df949..f73a597c8e4cf 100644
+--- a/kernel/smp.c
++++ b/kernel/smp.c
+@@ -110,7 +110,7 @@ static DEFINE_PER_CPU(void *, cur_csd_info);
+ static atomic_t csd_bug_count = ATOMIC_INIT(0);
+ 
+ /* Record current CSD work for current CPU, NULL to erase. */
+-static void csd_lock_record(call_single_data_t *csd)
++static void csd_lock_record(struct __call_single_data *csd)
+ {
+ 	if (!csd) {
+ 		smp_mb(); /* NULL cur_csd after unlock. */
+@@ -125,7 +125,7 @@ static void csd_lock_record(call_single_data_t *csd)
+ 		  /* Or before unlock, as the case may be. */
+ }
+ 
+-static __always_inline int csd_lock_wait_getcpu(call_single_data_t *csd)
++static __always_inline int csd_lock_wait_getcpu(struct __call_single_data *csd)
+ {
+ 	unsigned int csd_type;
+ 
+@@ -140,7 +140,7 @@ static __always_inline int csd_lock_wait_getcpu(call_single_data_t *csd)
+  * the CSD_TYPE_SYNC/ASYNC types provide the destination CPU,
+  * so waiting on other types gets much less information.
+  */
+-static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, int *bug_id)
++static __always_inline bool csd_lock_wait_toolong(struct __call_single_data *csd, u64 ts0, u64 *ts1, int *bug_id)
+ {
+ 	int cpu = -1;
+ 	int cpux;
+@@ -204,7 +204,7 @@ static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 t
+  * previous function call. For multi-cpu calls its even more interesting
+  * as we'll have to ensure no other cpu is observing our csd.
+  */
+-static __always_inline void csd_lock_wait(call_single_data_t *csd)
++static __always_inline void csd_lock_wait(struct __call_single_data *csd)
+ {
+ 	int bug_id = 0;
+ 	u64 ts0, ts1;
+@@ -219,17 +219,17 @@ static __always_inline void csd_lock_wait(call_single_data_t *csd)
+ }
+ 
+ #else
+-static void csd_lock_record(call_single_data_t *csd)
++static void csd_lock_record(struct __call_single_data *csd)
+ {
+ }
+ 
+-static __always_inline void csd_lock_wait(call_single_data_t *csd)
++static __always_inline void csd_lock_wait(struct __call_single_data *csd)
+ {
+ 	smp_cond_load_acquire(&csd->flags, !(VAL & CSD_FLAG_LOCK));
+ }
+ #endif
+ 
+-static __always_inline void csd_lock(call_single_data_t *csd)
++static __always_inline void csd_lock(struct __call_single_data *csd)
+ {
+ 	csd_lock_wait(csd);
+ 	csd->flags |= CSD_FLAG_LOCK;
+@@ -242,7 +242,7 @@ static __always_inline void csd_lock(call_single_data_t *csd)
+ 	smp_wmb();
+ }
+ 
+-static __always_inline void csd_unlock(call_single_data_t *csd)
++static __always_inline void csd_unlock(struct __call_single_data *csd)
+ {
+ 	WARN_ON(!(csd->flags & CSD_FLAG_LOCK));
+ 
+@@ -276,7 +276,7 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
+  * for execution on the given CPU. data must already have
+  * ->func, ->info, and ->flags set.
+  */
+-static int generic_exec_single(int cpu, call_single_data_t *csd)
++static int generic_exec_single(int cpu, struct __call_single_data *csd)
+ {
+ 	if (cpu == smp_processor_id()) {
+ 		smp_call_func_t func = csd->func;
+@@ -542,7 +542,7 @@ EXPORT_SYMBOL(smp_call_function_single);
+  * NOTE: Be careful, there is unfortunately no current debugging facility to
+  * validate the correctness of this serialization.
+  */
+-int smp_call_function_single_async(int cpu, call_single_data_t *csd)
++int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
+ {
+ 	int err = 0;
+ 
+diff --git a/kernel/up.c b/kernel/up.c
+index c6f323dcd45bb..4edd5493eba24 100644
+--- a/kernel/up.c
++++ b/kernel/up.c
+@@ -25,7 +25,7 @@ int smp_call_function_single(int cpu, void (*func) (void *info), void *info,
+ }
+ EXPORT_SYMBOL(smp_call_function_single);
+ 
+-int smp_call_function_single_async(int cpu, call_single_data_t *csd)
++int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
+ {
+ 	unsigned long flags;
+ 
+diff --git a/lib/bug.c b/lib/bug.c
+index 7103440c0ee1a..4ab398a2de938 100644
+--- a/lib/bug.c
++++ b/lib/bug.c
+@@ -158,30 +158,27 @@ enum bug_trap_type report_bug(unsigned long bugaddr, struct pt_regs *regs)
+ 
+ 	file = NULL;
+ 	line = 0;
+-	warning = 0;
+ 
+-	if (bug) {
+ #ifdef CONFIG_DEBUG_BUGVERBOSE
+ #ifndef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
+-		file = bug->file;
++	file = bug->file;
+ #else
+-		file = (const char *)bug + bug->file_disp;
++	file = (const char *)bug + bug->file_disp;
+ #endif
+-		line = bug->line;
++	line = bug->line;
+ #endif
+-		warning = (bug->flags & BUGFLAG_WARNING) != 0;
+-		once = (bug->flags & BUGFLAG_ONCE) != 0;
+-		done = (bug->flags & BUGFLAG_DONE) != 0;
+-
+-		if (warning && once) {
+-			if (done)
+-				return BUG_TRAP_TYPE_WARN;
+-
+-			/*
+-			 * Since this is the only store, concurrency is not an issue.
+-			 */
+-			bug->flags |= BUGFLAG_DONE;
+-		}
++	warning = (bug->flags & BUGFLAG_WARNING) != 0;
++	once = (bug->flags & BUGFLAG_ONCE) != 0;
++	done = (bug->flags & BUGFLAG_DONE) != 0;
++
++	if (warning && once) {
++		if (done)
++			return BUG_TRAP_TYPE_WARN;
++
++		/*
++		 * Since this is the only store, concurrency is not an issue.
++		 */
++		bug->flags |= BUGFLAG_DONE;
+ 	}
+ 
+ 	/*
+diff --git a/lib/crypto/poly1305-donna32.c b/lib/crypto/poly1305-donna32.c
+index 3cc77d94390b2..7fb71845cc846 100644
+--- a/lib/crypto/poly1305-donna32.c
++++ b/lib/crypto/poly1305-donna32.c
+@@ -10,7 +10,8 @@
+ #include <asm/unaligned.h>
+ #include <crypto/internal/poly1305.h>
+ 
+-void poly1305_core_setkey(struct poly1305_core_key *key, const u8 raw_key[16])
++void poly1305_core_setkey(struct poly1305_core_key *key,
++			  const u8 raw_key[POLY1305_BLOCK_SIZE])
+ {
+ 	/* r &= 0xffffffc0ffffffc0ffffffc0fffffff */
+ 	key->key.r[0] = (get_unaligned_le32(&raw_key[0])) & 0x3ffffff;
+diff --git a/lib/crypto/poly1305-donna64.c b/lib/crypto/poly1305-donna64.c
+index 6ae181bb43450..d34cf40536689 100644
+--- a/lib/crypto/poly1305-donna64.c
++++ b/lib/crypto/poly1305-donna64.c
+@@ -12,7 +12,8 @@
+ 
+ typedef __uint128_t u128;
+ 
+-void poly1305_core_setkey(struct poly1305_core_key *key, const u8 raw_key[16])
++void poly1305_core_setkey(struct poly1305_core_key *key,
++			  const u8 raw_key[POLY1305_BLOCK_SIZE])
+ {
+ 	u64 t0, t1;
+ 
+diff --git a/lib/crypto/poly1305.c b/lib/crypto/poly1305.c
+index 9d2d14df0fee5..26d87fc3823e8 100644
+--- a/lib/crypto/poly1305.c
++++ b/lib/crypto/poly1305.c
+@@ -12,7 +12,8 @@
+ #include <linux/module.h>
+ #include <asm/unaligned.h>
+ 
+-void poly1305_init_generic(struct poly1305_desc_ctx *desc, const u8 *key)
++void poly1305_init_generic(struct poly1305_desc_ctx *desc,
++			   const u8 key[POLY1305_KEY_SIZE])
+ {
+ 	poly1305_core_setkey(&desc->core_r, key);
+ 	desc->s[0] = get_unaligned_le32(key + 16);
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index d72d2b90474a4..8d9f5fa4c6d39 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -3162,9 +3162,17 @@ static void drain_obj_stock(struct memcg_stock_pcp *stock)
+ 		unsigned int nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);
+ 
+ 		if (nr_pages) {
++			struct mem_cgroup *memcg;
++
+ 			rcu_read_lock();
+-			__memcg_kmem_uncharge(obj_cgroup_memcg(old), nr_pages);
++retry:
++			memcg = obj_cgroup_memcg(old);
++			if (unlikely(!css_tryget(&memcg->css)))
++				goto retry;
+ 			rcu_read_unlock();
++
++			__memcg_kmem_uncharge(memcg, nr_pages);
++			css_put(&memcg->css);
+ 		}
+ 
+ 		/*
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 570a20b425613..2d7a667f8e609 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1293,7 +1293,7 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
+ 		 * communicated in siginfo, see kill_proc()
+ 		 */
+ 		start = (page->index << PAGE_SHIFT) & ~(size - 1);
+-		unmap_mapping_range(page->mapping, start, start + size, 0);
++		unmap_mapping_range(page->mapping, start, size, 0);
+ 	}
+ 	kill_procs(&tokill, flags & MF_MUST_KILL, !unmap_success, pfn, flags);
+ 	rc = 0;
+diff --git a/mm/slab.c b/mm/slab.c
+index b1113561b98b1..b2cc2cf7d8a33 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -1789,8 +1789,7 @@ static int __ref setup_cpu_cache(struct kmem_cache *cachep, gfp_t gfp)
+ }
+ 
+ slab_flags_t kmem_cache_flags(unsigned int object_size,
+-	slab_flags_t flags, const char *name,
+-	void (*ctor)(void *))
++	slab_flags_t flags, const char *name)
+ {
+ 	return flags;
+ }
+diff --git a/mm/slab.h b/mm/slab.h
+index f9977d6613d61..e258ffcfb0ef2 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -110,8 +110,7 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
+ 		   slab_flags_t flags, void (*ctor)(void *));
+ 
+ slab_flags_t kmem_cache_flags(unsigned int object_size,
+-	slab_flags_t flags, const char *name,
+-	void (*ctor)(void *));
++	slab_flags_t flags, const char *name);
+ #else
+ static inline struct kmem_cache *
+ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
+@@ -119,8 +118,7 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
+ { return NULL; }
+ 
+ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
+-	slab_flags_t flags, const char *name,
+-	void (*ctor)(void *))
++	slab_flags_t flags, const char *name)
+ {
+ 	return flags;
+ }
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 8d96679668b4e..8f27ccf9f7f35 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -196,7 +196,7 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
+ 	size = ALIGN(size, sizeof(void *));
+ 	align = calculate_alignment(flags, align, size);
+ 	size = ALIGN(size, align);
+-	flags = kmem_cache_flags(size, flags, name, NULL);
++	flags = kmem_cache_flags(size, flags, name);
+ 
+ 	if (flags & SLAB_NEVER_MERGE)
+ 		return NULL;
+diff --git a/mm/slub.c b/mm/slub.c
+index fbc415c340095..05a501b67cd59 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -1397,7 +1397,6 @@ __setup("slub_debug", setup_slub_debug);
+  * @object_size:	the size of an object without meta data
+  * @flags:		flags to set
+  * @name:		name of the cache
+- * @ctor:		constructor function
+  *
+  * Debug option(s) are applied to @flags. In addition to the debug
+  * option(s), if a slab name (or multiple) is specified i.e.
+@@ -1405,8 +1404,7 @@ __setup("slub_debug", setup_slub_debug);
+  * then only the select slabs will receive the debug option(s).
+  */
+ slab_flags_t kmem_cache_flags(unsigned int object_size,
+-	slab_flags_t flags, const char *name,
+-	void (*ctor)(void *))
++	slab_flags_t flags, const char *name)
+ {
+ 	char *iter;
+ 	size_t len;
+@@ -1471,8 +1469,7 @@ static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n,
+ static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n,
+ 					struct page *page) {}
+ slab_flags_t kmem_cache_flags(unsigned int object_size,
+-	slab_flags_t flags, const char *name,
+-	void (*ctor)(void *))
++	slab_flags_t flags, const char *name)
+ {
+ 	return flags;
+ }
+@@ -3782,7 +3779,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
+ 
+ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
+ {
+-	s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
++	s->flags = kmem_cache_flags(s->size, flags, s->name);
+ #ifdef CONFIG_SLAB_FREELIST_HARDENED
+ 	s->random = get_random_long();
+ #endif
+diff --git a/mm/sparse.c b/mm/sparse.c
+index 7bd23f9d6cef6..33406ea2ecc44 100644
+--- a/mm/sparse.c
++++ b/mm/sparse.c
+@@ -547,6 +547,7 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
+ 			pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.",
+ 			       __func__, nid);
+ 			pnum_begin = pnum;
++			sparse_buffer_fini();
+ 			goto failed;
+ 		}
+ 		check_usemap_section_nr(nid, usage);
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index d0c1024bf6008..1c5a0a60292d2 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1789,8 +1789,6 @@ u32 hci_conn_get_phy(struct hci_conn *conn)
+ {
+ 	u32 phys = 0;
+ 
+-	hci_dev_lock(conn->hdev);
+-
+ 	/* BLUETOOTH CORE SPECIFICATION Version 5.2 | Vol 2, Part B page 471:
+ 	 * Table 6.2: Packets defined for synchronous, asynchronous, and
+ 	 * CSB logical transport types.
+@@ -1887,7 +1885,5 @@ u32 hci_conn_get_phy(struct hci_conn *conn)
+ 		break;
+ 	}
+ 
+-	hci_dev_unlock(conn->hdev);
+-
+ 	return phys;
+ }
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 17a72695865b5..e0a5428497352 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -4990,6 +4990,7 @@ static void hci_loglink_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		return;
+ 
+ 	hchan->handle = le16_to_cpu(ev->handle);
++	hchan->amp = true;
+ 
+ 	BT_DBG("hcon %p mgr %p hchan %p", hcon, hcon->amp_mgr, hchan);
+ 
+@@ -5022,7 +5023,7 @@ static void hci_disconn_loglink_complete_evt(struct hci_dev *hdev,
+ 	hci_dev_lock(hdev);
+ 
+ 	hchan = hci_chan_lookup_handle(hdev, le16_to_cpu(ev->handle));
+-	if (!hchan)
++	if (!hchan || !hchan->amp)
+ 		goto unlock;
+ 
+ 	amp_destroy_logical_link(hchan, ev->reason);
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index 610ed0817bd77..161ea93a53828 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -271,12 +271,16 @@ int hci_req_sync(struct hci_dev *hdev, int (*req)(struct hci_request *req,
+ {
+ 	int ret;
+ 
+-	if (!test_bit(HCI_UP, &hdev->flags))
+-		return -ENETDOWN;
+-
+ 	/* Serialize all requests */
+ 	hci_req_sync_lock(hdev);
+-	ret = __hci_req_sync(hdev, req, opt, timeout, hci_status);
++	/* check the state after obtaing the lock to protect the HCI_UP
++	 * against any races from hci_dev_do_close when the controller
++	 * gets removed.
++	 */
++	if (test_bit(HCI_UP, &hdev->flags))
++		ret = __hci_req_sync(hdev, req, opt, timeout, hci_status);
++	else
++		ret = -ENETDOWN;
+ 	hci_req_sync_unlock(hdev);
+ 
+ 	return ret;
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 54cb82a69056c..5015ece7adf7a 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -3070,25 +3070,14 @@ static int br_multicast_ipv4_rcv(struct net_bridge *br,
+ }
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+-static int br_ip6_multicast_mrd_rcv(struct net_bridge *br,
+-				    struct net_bridge_port *port,
+-				    struct sk_buff *skb)
++static void br_ip6_multicast_mrd_rcv(struct net_bridge *br,
++				     struct net_bridge_port *port,
++				     struct sk_buff *skb)
+ {
+-	int ret;
+-
+-	if (ipv6_hdr(skb)->nexthdr != IPPROTO_ICMPV6)
+-		return -ENOMSG;
+-
+-	ret = ipv6_mc_check_icmpv6(skb);
+-	if (ret < 0)
+-		return ret;
+-
+ 	if (icmp6_hdr(skb)->icmp6_type != ICMPV6_MRDISC_ADV)
+-		return -ENOMSG;
++		return;
+ 
+ 	br_multicast_mark_router(br, port);
+-
+-	return 0;
+ }
+ 
+ static int br_multicast_ipv6_rcv(struct net_bridge *br,
+@@ -3102,18 +3091,12 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br,
+ 
+ 	err = ipv6_mc_check_mld(skb);
+ 
+-	if (err == -ENOMSG) {
++	if (err == -ENOMSG || err == -ENODATA) {
+ 		if (!ipv6_addr_is_ll_all_nodes(&ipv6_hdr(skb)->daddr))
+ 			BR_INPUT_SKB_CB(skb)->mrouters_only = 1;
+-
+-		if (ipv6_addr_is_all_snoopers(&ipv6_hdr(skb)->daddr)) {
+-			err = br_ip6_multicast_mrd_rcv(br, port, skb);
+-
+-			if (err < 0 && err != -ENOMSG) {
+-				br_multicast_err_count(br, port, skb->protocol);
+-				return err;
+-			}
+-		}
++		if (err == -ENODATA &&
++		    ipv6_addr_is_all_snoopers(&ipv6_hdr(skb)->daddr))
++			br_ip6_multicast_mrd_rcv(br, port, skb);
+ 
+ 		return 0;
+ 	} else if (err < 0) {
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 64f4c7ec729dc..2f17a4ac82f0e 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5857,7 +5857,7 @@ static struct list_head *gro_list_prepare(struct napi_struct *napi,
+ 	return head;
+ }
+ 
+-static void skb_gro_reset_offset(struct sk_buff *skb)
++static inline void skb_gro_reset_offset(struct sk_buff *skb, u32 nhoff)
+ {
+ 	const struct skb_shared_info *pinfo = skb_shinfo(skb);
+ 	const skb_frag_t *frag0 = &pinfo->frags[0];
+@@ -5868,7 +5868,7 @@ static void skb_gro_reset_offset(struct sk_buff *skb)
+ 
+ 	if (!skb_headlen(skb) && pinfo->nr_frags &&
+ 	    !PageHighMem(skb_frag_page(frag0)) &&
+-	    (!NET_IP_ALIGN || !(skb_frag_off(frag0) & 3))) {
++	    (!NET_IP_ALIGN || !((skb_frag_off(frag0) + nhoff) & 3))) {
+ 		NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
+ 		NAPI_GRO_CB(skb)->frag0_len = min_t(unsigned int,
+ 						    skb_frag_size(frag0),
+@@ -6101,7 +6101,7 @@ gro_result_t napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
+ 	skb_mark_napi_id(skb, napi);
+ 	trace_napi_gro_receive_entry(skb);
+ 
+-	skb_gro_reset_offset(skb);
++	skb_gro_reset_offset(skb, 0);
+ 
+ 	ret = napi_skb_finish(napi, skb, dev_gro_receive(napi, skb));
+ 	trace_napi_gro_receive_exit(ret);
+@@ -6194,7 +6194,7 @@ static struct sk_buff *napi_frags_skb(struct napi_struct *napi)
+ 	napi->skb = NULL;
+ 
+ 	skb_reset_mac_header(skb);
+-	skb_gro_reset_offset(skb);
++	skb_gro_reset_offset(skb, hlen);
+ 
+ 	if (unlikely(skb_gro_header_hard(skb, hlen))) {
+ 		eth = skb_gro_header_slow(skb, hlen, 0);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 50a6d935376f5..798dc85bde5b7 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -66,6 +66,7 @@
+ #include <linux/types.h>
+ #include <linux/kernel.h>
+ #include <linux/mm.h>
++#include <linux/memblock.h>
+ #include <linux/string.h>
+ #include <linux/socket.h>
+ #include <linux/sockios.h>
+@@ -476,8 +477,10 @@ static void ipv4_confirm_neigh(const struct dst_entry *dst, const void *daddr)
+ 	__ipv4_confirm_neigh(dev, *(__force u32 *)pkey);
+ }
+ 
+-#define IP_IDENTS_SZ 2048u
+-
++/* Hash tables of size 2048..262144 depending on RAM size.
++ * Each bucket uses 8 bytes.
++ */
++static u32 ip_idents_mask __read_mostly;
+ static atomic_t *ip_idents __read_mostly;
+ static u32 *ip_tstamps __read_mostly;
+ 
+@@ -487,12 +490,16 @@ static u32 *ip_tstamps __read_mostly;
+  */
+ u32 ip_idents_reserve(u32 hash, int segs)
+ {
+-	u32 *p_tstamp = ip_tstamps + hash % IP_IDENTS_SZ;
+-	atomic_t *p_id = ip_idents + hash % IP_IDENTS_SZ;
+-	u32 old = READ_ONCE(*p_tstamp);
+-	u32 now = (u32)jiffies;
++	u32 bucket, old, now = (u32)jiffies;
++	atomic_t *p_id;
++	u32 *p_tstamp;
+ 	u32 delta = 0;
+ 
++	bucket = hash & ip_idents_mask;
++	p_tstamp = ip_tstamps + bucket;
++	p_id = ip_idents + bucket;
++	old = READ_ONCE(*p_tstamp);
++
+ 	if (old != now && cmpxchg(p_tstamp, old, now) == old)
+ 		delta = prandom_u32_max(now - old);
+ 
+@@ -3544,18 +3551,25 @@ struct ip_rt_acct __percpu *ip_rt_acct __read_mostly;
+ 
+ int __init ip_rt_init(void)
+ {
++	void *idents_hash;
+ 	int cpu;
+ 
+-	ip_idents = kmalloc_array(IP_IDENTS_SZ, sizeof(*ip_idents),
+-				  GFP_KERNEL);
+-	if (!ip_idents)
+-		panic("IP: failed to allocate ip_idents\n");
++	/* For modern hosts, this will use 2 MB of memory */
++	idents_hash = alloc_large_system_hash("IP idents",
++					      sizeof(*ip_idents) + sizeof(*ip_tstamps),
++					      0,
++					      16, /* one bucket per 64 KB */
++					      HASH_ZERO,
++					      NULL,
++					      &ip_idents_mask,
++					      2048,
++					      256*1024);
++
++	ip_idents = idents_hash;
+ 
+-	prandom_bytes(ip_idents, IP_IDENTS_SZ * sizeof(*ip_idents));
++	prandom_bytes(ip_idents, (ip_idents_mask + 1) * sizeof(*ip_idents));
+ 
+-	ip_tstamps = kcalloc(IP_IDENTS_SZ, sizeof(*ip_tstamps), GFP_KERNEL);
+-	if (!ip_tstamps)
+-		panic("IP: failed to allocate ip_tstamps\n");
++	ip_tstamps = idents_hash + (ip_idents_mask + 1) * sizeof(*ip_idents);
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		struct uncached_list *ul = &per_cpu(rt_uncached_list, cpu);
+diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
+index 563d016e74783..db5831e6c136a 100644
+--- a/net/ipv4/tcp_cong.c
++++ b/net/ipv4/tcp_cong.c
+@@ -230,6 +230,10 @@ int tcp_set_default_congestion_control(struct net *net, const char *name)
+ 		ret = -ENOENT;
+ 	} else if (!bpf_try_module_get(ca, ca->owner)) {
+ 		ret = -EBUSY;
++	} else if (!net_eq(net, &init_net) &&
++			!(ca->flags & TCP_CONG_NON_RESTRICTED)) {
++		/* Only init netns can set default to a restricted algorithm */
++		ret = -EPERM;
+ 	} else {
+ 		prev = xchg(&net->ipv4.tcp_congestion_control, ca);
+ 		if (prev)
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 4a2fd286787c0..9d28b2778e8fe 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2657,9 +2657,12 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
+ 
+ 	case UDP_GRO:
+ 		lock_sock(sk);
++
++		/* when enabling GRO, accept the related GSO packet type */
+ 		if (valbool)
+ 			udp_tunnel_encap_enable(sk->sk_socket);
+ 		up->gro_enabled = valbool;
++		up->accept_udp_l4 = valbool;
+ 		release_sock(sk);
+ 		break;
+ 
+diff --git a/net/ipv6/mcast_snoop.c b/net/ipv6/mcast_snoop.c
+index d3d6b6a66e5fa..04d5fcdfa6e00 100644
+--- a/net/ipv6/mcast_snoop.c
++++ b/net/ipv6/mcast_snoop.c
+@@ -109,7 +109,7 @@ static int ipv6_mc_check_mld_msg(struct sk_buff *skb)
+ 	struct mld_msg *mld;
+ 
+ 	if (!ipv6_mc_may_pull(skb, len))
+-		return -EINVAL;
++		return -ENODATA;
+ 
+ 	mld = (struct mld_msg *)skb_transport_header(skb);
+ 
+@@ -122,7 +122,7 @@ static int ipv6_mc_check_mld_msg(struct sk_buff *skb)
+ 	case ICMPV6_MGM_QUERY:
+ 		return ipv6_mc_check_mld_query(skb);
+ 	default:
+-		return -ENOMSG;
++		return -ENODATA;
+ 	}
+ }
+ 
+@@ -131,7 +131,7 @@ static inline __sum16 ipv6_mc_validate_checksum(struct sk_buff *skb)
+ 	return skb_checksum_validate(skb, IPPROTO_ICMPV6, ip6_compute_pseudo);
+ }
+ 
+-int ipv6_mc_check_icmpv6(struct sk_buff *skb)
++static int ipv6_mc_check_icmpv6(struct sk_buff *skb)
+ {
+ 	unsigned int len = skb_transport_offset(skb) + sizeof(struct icmp6hdr);
+ 	unsigned int transport_len = ipv6_transport_len(skb);
+@@ -150,7 +150,6 @@ int ipv6_mc_check_icmpv6(struct sk_buff *skb)
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL(ipv6_mc_check_icmpv6);
+ 
+ /**
+  * ipv6_mc_check_mld - checks whether this is a sane MLD packet
+@@ -161,7 +160,10 @@ EXPORT_SYMBOL(ipv6_mc_check_icmpv6);
+  *
+  * -EINVAL: A broken packet was detected, i.e. it violates some internet
+  *  standard
+- * -ENOMSG: IP header validation succeeded but it is not an MLD packet.
++ * -ENOMSG: IP header validation succeeded but it is not an ICMPv6 packet
++ *  with a hop-by-hop option.
++ * -ENODATA: IP+ICMPv6 header with hop-by-hop option validation succeeded
++ *  but it is not an MLD packet.
+  * -ENOMEM: A memory allocation failure happened.
+  *
+  * Caller needs to set the skb network header and free any returned skb if it
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 19c093bb3876e..73893025922fd 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -1150,8 +1150,11 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 	if (local->hw.wiphy->max_scan_ie_len)
+ 		local->hw.wiphy->max_scan_ie_len -= local->scan_ies_len;
+ 
+-	WARN_ON(!ieee80211_cs_list_valid(local->hw.cipher_schemes,
+-					 local->hw.n_cipher_schemes));
++	if (WARN_ON(!ieee80211_cs_list_valid(local->hw.cipher_schemes,
++					     local->hw.n_cipher_schemes))) {
++		result = -EINVAL;
++		goto fail_workqueue;
++	}
+ 
+ 	result = ieee80211_init_cipher_suites(local);
+ 	if (result < 0)
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index 9ae14270c543e..2b00f7f47693b 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -45,6 +45,48 @@ void nft_flow_rule_set_addr_type(struct nft_flow_rule *flow,
+ 		offsetof(struct nft_flow_key, control);
+ }
+ 
++struct nft_offload_ethertype {
++	__be16 value;
++	__be16 mask;
++};
++
++static void nft_flow_rule_transfer_vlan(struct nft_offload_ctx *ctx,
++					struct nft_flow_rule *flow)
++{
++	struct nft_flow_match *match = &flow->match;
++	struct nft_offload_ethertype ethertype;
++
++	if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_CONTROL) &&
++	    match->key.basic.n_proto != htons(ETH_P_8021Q) &&
++	    match->key.basic.n_proto != htons(ETH_P_8021AD))
++		return;
++
++	ethertype.value = match->key.basic.n_proto;
++	ethertype.mask = match->mask.basic.n_proto;
++
++	if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_VLAN) &&
++	    (match->key.vlan.vlan_tpid == htons(ETH_P_8021Q) ||
++	     match->key.vlan.vlan_tpid == htons(ETH_P_8021AD))) {
++		match->key.basic.n_proto = match->key.cvlan.vlan_tpid;
++		match->mask.basic.n_proto = match->mask.cvlan.vlan_tpid;
++		match->key.cvlan.vlan_tpid = match->key.vlan.vlan_tpid;
++		match->mask.cvlan.vlan_tpid = match->mask.vlan.vlan_tpid;
++		match->key.vlan.vlan_tpid = ethertype.value;
++		match->mask.vlan.vlan_tpid = ethertype.mask;
++		match->dissector.offset[FLOW_DISSECTOR_KEY_CVLAN] =
++			offsetof(struct nft_flow_key, cvlan);
++		match->dissector.used_keys |= BIT(FLOW_DISSECTOR_KEY_CVLAN);
++	} else {
++		match->key.basic.n_proto = match->key.vlan.vlan_tpid;
++		match->mask.basic.n_proto = match->mask.vlan.vlan_tpid;
++		match->key.vlan.vlan_tpid = ethertype.value;
++		match->mask.vlan.vlan_tpid = ethertype.mask;
++		match->dissector.offset[FLOW_DISSECTOR_KEY_VLAN] =
++			offsetof(struct nft_flow_key, vlan);
++		match->dissector.used_keys |= BIT(FLOW_DISSECTOR_KEY_VLAN);
++	}
++}
++
+ struct nft_flow_rule *nft_flow_rule_create(struct net *net,
+ 					   const struct nft_rule *rule)
+ {
+@@ -89,6 +131,8 @@ struct nft_flow_rule *nft_flow_rule_create(struct net *net,
+ 
+ 		expr = nft_expr_next(expr);
+ 	}
++	nft_flow_rule_transfer_vlan(ctx, flow);
++
+ 	flow->proto = ctx->dep.l3num;
+ 	kfree(ctx);
+ 
+diff --git a/net/netfilter/nft_cmp.c b/net/netfilter/nft_cmp.c
+index 00e563a72d3d7..1d42d06f5b64b 100644
+--- a/net/netfilter/nft_cmp.c
++++ b/net/netfilter/nft_cmp.c
+@@ -115,19 +115,56 @@ nla_put_failure:
+ 	return -1;
+ }
+ 
++union nft_cmp_offload_data {
++	u16	val16;
++	u32	val32;
++	u64	val64;
++};
++
++static void nft_payload_n2h(union nft_cmp_offload_data *data,
++			    const u8 *val, u32 len)
++{
++	switch (len) {
++	case 2:
++		data->val16 = ntohs(*((u16 *)val));
++		break;
++	case 4:
++		data->val32 = ntohl(*((u32 *)val));
++		break;
++	case 8:
++		data->val64 = be64_to_cpu(*((u64 *)val));
++		break;
++	default:
++		WARN_ON_ONCE(1);
++		break;
++	}
++}
++
+ static int __nft_cmp_offload(struct nft_offload_ctx *ctx,
+ 			     struct nft_flow_rule *flow,
+ 			     const struct nft_cmp_expr *priv)
+ {
+ 	struct nft_offload_reg *reg = &ctx->regs[priv->sreg];
++	union nft_cmp_offload_data _data, _datamask;
+ 	u8 *mask = (u8 *)&flow->match.mask;
+ 	u8 *key = (u8 *)&flow->match.key;
++	u8 *data, *datamask;
+ 
+ 	if (priv->op != NFT_CMP_EQ || priv->len > reg->len)
+ 		return -EOPNOTSUPP;
+ 
+-	memcpy(key + reg->offset, &priv->data, reg->len);
+-	memcpy(mask + reg->offset, &reg->mask, reg->len);
++	if (reg->flags & NFT_OFFLOAD_F_NETWORK2HOST) {
++		nft_payload_n2h(&_data, (u8 *)&priv->data, reg->len);
++		nft_payload_n2h(&_datamask, (u8 *)&reg->mask, reg->len);
++		data = (u8 *)&_data;
++		datamask = (u8 *)&_datamask;
++	} else {
++		data = (u8 *)&priv->data;
++		datamask = (u8 *)&reg->mask;
++	}
++
++	memcpy(key + reg->offset, data, reg->len);
++	memcpy(mask + reg->offset, datamask, reg->len);
+ 
+ 	flow->match.dissector.used_keys |= BIT(reg->key);
+ 	flow->match.dissector.offset[reg->key] = reg->base_offset;
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index 47d4e0e216514..1ebee25de6772 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -226,8 +226,9 @@ static int nft_payload_offload_ll(struct nft_offload_ctx *ctx,
+ 		if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
+ 			return -EOPNOTSUPP;
+ 
+-		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_VLAN, vlan,
+-				  vlan_tci, sizeof(__be16), reg);
++		NFT_OFFLOAD_MATCH_FLAGS(FLOW_DISSECTOR_KEY_VLAN, vlan,
++					vlan_tci, sizeof(__be16), reg,
++					NFT_OFFLOAD_F_NETWORK2HOST);
+ 		break;
+ 	case offsetof(struct vlan_ethhdr, h_vlan_encapsulated_proto):
+ 		if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
+@@ -241,16 +242,18 @@ static int nft_payload_offload_ll(struct nft_offload_ctx *ctx,
+ 		if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
+ 			return -EOPNOTSUPP;
+ 
+-		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_CVLAN, vlan,
+-				  vlan_tci, sizeof(__be16), reg);
++		NFT_OFFLOAD_MATCH_FLAGS(FLOW_DISSECTOR_KEY_CVLAN, cvlan,
++					vlan_tci, sizeof(__be16), reg,
++					NFT_OFFLOAD_F_NETWORK2HOST);
+ 		break;
+ 	case offsetof(struct vlan_ethhdr, h_vlan_encapsulated_proto) +
+ 							sizeof(struct vlan_hdr):
+ 		if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
+ 			return -EOPNOTSUPP;
+ 
+-		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_CVLAN, vlan,
++		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_CVLAN, cvlan,
+ 				  vlan_tpid, sizeof(__be16), reg);
++		nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_NETWORK);
+ 		break;
+ 	default:
+ 		return -EOPNOTSUPP;
+diff --git a/net/nfc/digital_dep.c b/net/nfc/digital_dep.c
+index 5971fb6f51cc7..dc21b4141b0af 100644
+--- a/net/nfc/digital_dep.c
++++ b/net/nfc/digital_dep.c
+@@ -1273,6 +1273,8 @@ static void digital_tg_recv_dep_req(struct nfc_digital_dev *ddev, void *arg,
+ 	}
+ 
+ 	rc = nfc_tm_data_received(ddev->nfc_dev, resp);
++	if (rc)
++		resp = NULL;
+ 
+ exit:
+ 	kfree_skb(ddev->chaining_skb);
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index a3b46f8888033..53dbe733f9981 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -109,12 +109,14 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
+ 					  GFP_KERNEL);
+ 	if (!llcp_sock->service_name) {
+ 		nfc_llcp_local_put(llcp_sock->local);
++		llcp_sock->local = NULL;
+ 		ret = -ENOMEM;
+ 		goto put_dev;
+ 	}
+ 	llcp_sock->ssap = nfc_llcp_get_sdp_ssap(local, llcp_sock);
+ 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
+ 		nfc_llcp_local_put(llcp_sock->local);
++		llcp_sock->local = NULL;
+ 		kfree(llcp_sock->service_name);
+ 		llcp_sock->service_name = NULL;
+ 		ret = -EADDRINUSE;
+@@ -709,6 +711,7 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
+ 	llcp_sock->ssap = nfc_llcp_get_local_ssap(local);
+ 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
+ 		nfc_llcp_local_put(llcp_sock->local);
++		llcp_sock->local = NULL;
+ 		ret = -ENOMEM;
+ 		goto put_dev;
+ 	}
+@@ -756,6 +759,7 @@ sock_unlink:
+ sock_llcp_release:
+ 	nfc_llcp_put_ssap(local, llcp_sock->ssap);
+ 	nfc_llcp_local_put(llcp_sock->local);
++	llcp_sock->local = NULL;
+ 
+ put_dev:
+ 	nfc_put_device(dev);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index a0121e7c98b14..449625c2ccc7a 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1358,7 +1358,7 @@ static unsigned int fanout_demux_rollover(struct packet_fanout *f,
+ 	struct packet_sock *po, *po_next, *po_skip = NULL;
+ 	unsigned int i, j, room = ROOM_NONE;
+ 
+-	po = pkt_sk(f->arr[idx]);
++	po = pkt_sk(rcu_dereference(f->arr[idx]));
+ 
+ 	if (try_self) {
+ 		room = packet_rcv_has_room(po, skb);
+@@ -1370,7 +1370,7 @@ static unsigned int fanout_demux_rollover(struct packet_fanout *f,
+ 
+ 	i = j = min_t(int, po->rollover->sock, num - 1);
+ 	do {
+-		po_next = pkt_sk(f->arr[i]);
++		po_next = pkt_sk(rcu_dereference(f->arr[i]));
+ 		if (po_next != po_skip && !READ_ONCE(po_next->pressure) &&
+ 		    packet_rcv_has_room(po_next, skb) == ROOM_NORMAL) {
+ 			if (i != j)
+@@ -1465,7 +1465,7 @@ static int packet_rcv_fanout(struct sk_buff *skb, struct net_device *dev,
+ 	if (fanout_has_flag(f, PACKET_FANOUT_FLAG_ROLLOVER))
+ 		idx = fanout_demux_rollover(f, skb, idx, true, num);
+ 
+-	po = pkt_sk(f->arr[idx]);
++	po = pkt_sk(rcu_dereference(f->arr[idx]));
+ 	return po->prot_hook.func(skb, dev, &po->prot_hook, orig_dev);
+ }
+ 
+@@ -1479,7 +1479,7 @@ static void __fanout_link(struct sock *sk, struct packet_sock *po)
+ 	struct packet_fanout *f = po->fanout;
+ 
+ 	spin_lock(&f->lock);
+-	f->arr[f->num_members] = sk;
++	rcu_assign_pointer(f->arr[f->num_members], sk);
+ 	smp_wmb();
+ 	f->num_members++;
+ 	if (f->num_members == 1)
+@@ -1494,11 +1494,14 @@ static void __fanout_unlink(struct sock *sk, struct packet_sock *po)
+ 
+ 	spin_lock(&f->lock);
+ 	for (i = 0; i < f->num_members; i++) {
+-		if (f->arr[i] == sk)
++		if (rcu_dereference_protected(f->arr[i],
++					      lockdep_is_held(&f->lock)) == sk)
+ 			break;
+ 	}
+ 	BUG_ON(i >= f->num_members);
+-	f->arr[i] = f->arr[f->num_members - 1];
++	rcu_assign_pointer(f->arr[i],
++			   rcu_dereference_protected(f->arr[f->num_members - 1],
++						     lockdep_is_held(&f->lock)));
+ 	f->num_members--;
+ 	if (f->num_members == 0)
+ 		__dev_remove_pack(&f->prot_hook);
+@@ -1636,13 +1639,15 @@ static bool fanout_find_new_id(struct sock *sk, u16 *new_id)
+ 	return false;
+ }
+ 
+-static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
++static int fanout_add(struct sock *sk, struct fanout_args *args)
+ {
+ 	struct packet_rollover *rollover = NULL;
+ 	struct packet_sock *po = pkt_sk(sk);
++	u16 type_flags = args->type_flags;
+ 	struct packet_fanout *f, *match;
+ 	u8 type = type_flags & 0xff;
+ 	u8 flags = type_flags >> 8;
++	u16 id = args->id;
+ 	int err;
+ 
+ 	switch (type) {
+@@ -1700,11 +1705,21 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
+ 		}
+ 	}
+ 	err = -EINVAL;
+-	if (match && match->flags != flags)
+-		goto out;
+-	if (!match) {
++	if (match) {
++		if (match->flags != flags)
++			goto out;
++		if (args->max_num_members &&
++		    args->max_num_members != match->max_num_members)
++			goto out;
++	} else {
++		if (args->max_num_members > PACKET_FANOUT_MAX)
++			goto out;
++		if (!args->max_num_members)
++			/* legacy PACKET_FANOUT_MAX */
++			args->max_num_members = 256;
+ 		err = -ENOMEM;
+-		match = kzalloc(sizeof(*match), GFP_KERNEL);
++		match = kvzalloc(struct_size(match, arr, args->max_num_members),
++				 GFP_KERNEL);
+ 		if (!match)
+ 			goto out;
+ 		write_pnet(&match->net, sock_net(sk));
+@@ -1720,6 +1735,7 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
+ 		match->prot_hook.func = packet_rcv_fanout;
+ 		match->prot_hook.af_packet_priv = match;
+ 		match->prot_hook.id_match = match_fanout_group;
++		match->max_num_members = args->max_num_members;
+ 		list_add(&match->list, &fanout_list);
+ 	}
+ 	err = -EINVAL;
+@@ -1730,7 +1746,7 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
+ 	    match->prot_hook.type == po->prot_hook.type &&
+ 	    match->prot_hook.dev == po->prot_hook.dev) {
+ 		err = -ENOSPC;
+-		if (refcount_read(&match->sk_ref) < PACKET_FANOUT_MAX) {
++		if (refcount_read(&match->sk_ref) < match->max_num_members) {
+ 			__dev_remove_pack(&po->prot_hook);
+ 			po->fanout = match;
+ 			po->rollover = rollover;
+@@ -1744,7 +1760,7 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
+ 
+ 	if (err && !refcount_read(&match->sk_ref)) {
+ 		list_del(&match->list);
+-		kfree(match);
++		kvfree(match);
+ 	}
+ 
+ out:
+@@ -3075,7 +3091,7 @@ static int packet_release(struct socket *sock)
+ 	kfree(po->rollover);
+ 	if (f) {
+ 		fanout_release_data(f);
+-		kfree(f);
++		kvfree(f);
+ 	}
+ 	/*
+ 	 *	Now the socket is dead. No more input will appear.
+@@ -3866,14 +3882,14 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ 	}
+ 	case PACKET_FANOUT:
+ 	{
+-		int val;
++		struct fanout_args args = { 0 };
+ 
+-		if (optlen != sizeof(val))
++		if (optlen != sizeof(int) && optlen != sizeof(args))
+ 			return -EINVAL;
+-		if (copy_from_sockptr(&val, optval, sizeof(val)))
++		if (copy_from_sockptr(&args, optval, optlen))
+ 			return -EFAULT;
+ 
+-		return fanout_add(sk, val & 0xffff, val >> 16);
++		return fanout_add(sk, &args);
+ 	}
+ 	case PACKET_FANOUT_DATA:
+ 	{
+diff --git a/net/packet/internal.h b/net/packet/internal.h
+index fd41ecb7f6059..7af1e9179385f 100644
+--- a/net/packet/internal.h
++++ b/net/packet/internal.h
+@@ -77,11 +77,12 @@ struct packet_ring_buffer {
+ };
+ 
+ extern struct mutex fanout_mutex;
+-#define PACKET_FANOUT_MAX	256
++#define PACKET_FANOUT_MAX	(1 << 16)
+ 
+ struct packet_fanout {
+ 	possible_net_t		net;
+ 	unsigned int		num_members;
++	u32			max_num_members;
+ 	u16			id;
+ 	u8			type;
+ 	u8			flags;
+@@ -90,10 +91,10 @@ struct packet_fanout {
+ 		struct bpf_prog __rcu	*bpf_prog;
+ 	};
+ 	struct list_head	list;
+-	struct sock		*arr[PACKET_FANOUT_MAX];
+ 	spinlock_t		lock;
+ 	refcount_t		sk_ref;
+ 	struct packet_type	prot_hook ____cacheline_aligned_in_smp;
++	struct sock	__rcu	*arr[];
+ };
+ 
+ struct packet_rollover {
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 9463c54c465af..3ac6b21ecf2c1 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -357,6 +357,18 @@ static struct sctp_af *sctp_sockaddr_af(struct sctp_sock *opt,
+ 	return af;
+ }
+ 
++static void sctp_auto_asconf_init(struct sctp_sock *sp)
++{
++	struct net *net = sock_net(&sp->inet.sk);
++
++	if (net->sctp.default_auto_asconf) {
++		spin_lock(&net->sctp.addr_wq_lock);
++		list_add_tail(&sp->auto_asconf_list, &net->sctp.auto_asconf_splist);
++		spin_unlock(&net->sctp.addr_wq_lock);
++		sp->do_auto_asconf = 1;
++	}
++}
++
+ /* Bind a local address either to an endpoint or to an association.  */
+ static int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
+ {
+@@ -418,8 +430,10 @@ static int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
+ 		return -EADDRINUSE;
+ 
+ 	/* Refresh ephemeral port.  */
+-	if (!bp->port)
++	if (!bp->port) {
+ 		bp->port = inet_sk(sk)->inet_num;
++		sctp_auto_asconf_init(sp);
++	}
+ 
+ 	/* Add the address to the bind address list.
+ 	 * Use GFP_ATOMIC since BHs will be disabled.
+@@ -1520,9 +1534,11 @@ static void sctp_close(struct sock *sk, long timeout)
+ 
+ 	/* Supposedly, no process has access to the socket, but
+ 	 * the net layers still may.
++	 * Also, sctp_destroy_sock() needs to be called with addr_wq_lock
++	 * held and that should be grabbed before socket lock.
+ 	 */
+-	local_bh_disable();
+-	bh_lock_sock(sk);
++	spin_lock_bh(&net->sctp.addr_wq_lock);
++	bh_lock_sock_nested(sk);
+ 
+ 	/* Hold the sock, since sk_common_release() will put sock_put()
+ 	 * and we have just a little more cleanup.
+@@ -1531,7 +1547,7 @@ static void sctp_close(struct sock *sk, long timeout)
+ 	sk_common_release(sk);
+ 
+ 	bh_unlock_sock(sk);
+-	local_bh_enable();
++	spin_unlock_bh(&net->sctp.addr_wq_lock);
+ 
+ 	sock_put(sk);
+ 
+@@ -4937,16 +4953,6 @@ static int sctp_init_sock(struct sock *sk)
+ 	sk_sockets_allocated_inc(sk);
+ 	sock_prot_inuse_add(net, sk->sk_prot, 1);
+ 
+-	if (net->sctp.default_auto_asconf) {
+-		spin_lock(&sock_net(sk)->sctp.addr_wq_lock);
+-		list_add_tail(&sp->auto_asconf_list,
+-		    &net->sctp.auto_asconf_splist);
+-		sp->do_auto_asconf = 1;
+-		spin_unlock(&sock_net(sk)->sctp.addr_wq_lock);
+-	} else {
+-		sp->do_auto_asconf = 0;
+-	}
+-
+ 	local_bh_enable();
+ 
+ 	return 0;
+@@ -4971,9 +4977,7 @@ static void sctp_destroy_sock(struct sock *sk)
+ 
+ 	if (sp->do_auto_asconf) {
+ 		sp->do_auto_asconf = 0;
+-		spin_lock_bh(&sock_net(sk)->sctp.addr_wq_lock);
+ 		list_del(&sp->auto_asconf_list);
+-		spin_unlock_bh(&sock_net(sk)->sctp.addr_wq_lock);
+ 	}
+ 	sctp_endpoint_free(sp->ep);
+ 	local_bh_disable();
+@@ -9282,6 +9286,8 @@ static int sctp_sock_migrate(struct sock *oldsk, struct sock *newsk,
+ 			return err;
+ 	}
+ 
++	sctp_auto_asconf_init(newsp);
++
+ 	/* Move any messages in the old socket's receive queue that are for the
+ 	 * peeled off association to the new socket's receive queue.
+ 	 */
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 86eb6d679225c..2301b66280def 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -1485,6 +1485,8 @@ int tipc_crypto_start(struct tipc_crypto **crypto, struct net *net,
+ 	/* Allocate statistic structure */
+ 	c->stats = alloc_percpu_gfp(struct tipc_crypto_stats, GFP_ATOMIC);
+ 	if (!c->stats) {
++		if (c->wq)
++			destroy_workqueue(c->wq);
+ 		kfree_sensitive(c);
+ 		return -ENOMEM;
+ 	}
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index e4370b1b74947..902cb6dd710bd 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -733,6 +733,23 @@ static int virtio_transport_reset_no_sock(const struct virtio_transport *t,
+ 	return t->send_pkt(reply);
+ }
+ 
++/* This function should be called with sk_lock held and SOCK_DONE set */
++static void virtio_transport_remove_sock(struct vsock_sock *vsk)
++{
++	struct virtio_vsock_sock *vvs = vsk->trans;
++	struct virtio_vsock_pkt *pkt, *tmp;
++
++	/* We don't need to take rx_lock, as the socket is closing and we are
++	 * removing it.
++	 */
++	list_for_each_entry_safe(pkt, tmp, &vvs->rx_queue, list) {
++		list_del(&pkt->list);
++		virtio_transport_free_pkt(pkt);
++	}
++
++	vsock_remove_sock(vsk);
++}
++
+ static void virtio_transport_wait_close(struct sock *sk, long timeout)
+ {
+ 	if (timeout) {
+@@ -765,7 +782,7 @@ static void virtio_transport_do_close(struct vsock_sock *vsk,
+ 	    (!cancel_timeout || cancel_delayed_work(&vsk->close_work))) {
+ 		vsk->close_work_scheduled = false;
+ 
+-		vsock_remove_sock(vsk);
++		virtio_transport_remove_sock(vsk);
+ 
+ 		/* Release refcnt obtained when we scheduled the timeout */
+ 		sock_put(sk);
+@@ -828,22 +845,15 @@ static bool virtio_transport_close(struct vsock_sock *vsk)
+ 
+ void virtio_transport_release(struct vsock_sock *vsk)
+ {
+-	struct virtio_vsock_sock *vvs = vsk->trans;
+-	struct virtio_vsock_pkt *pkt, *tmp;
+ 	struct sock *sk = &vsk->sk;
+ 	bool remove_sock = true;
+ 
+ 	if (sk->sk_type == SOCK_STREAM)
+ 		remove_sock = virtio_transport_close(vsk);
+ 
+-	list_for_each_entry_safe(pkt, tmp, &vvs->rx_queue, list) {
+-		list_del(&pkt->list);
+-		virtio_transport_free_pkt(pkt);
+-	}
+-
+ 	if (remove_sock) {
+ 		sock_set_flag(sk, SOCK_DONE);
+-		vsock_remove_sock(vsk);
++		virtio_transport_remove_sock(vsk);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(virtio_transport_release);
+diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
+index 8b65323207db5..1c9ecb18b8e64 100644
+--- a/net/vmw_vsock/vmci_transport.c
++++ b/net/vmw_vsock/vmci_transport.c
+@@ -568,8 +568,7 @@ vmci_transport_queue_pair_alloc(struct vmci_qp **qpair,
+ 			       peer, flags, VMCI_NO_PRIVILEGE_FLAGS);
+ out:
+ 	if (err < 0) {
+-		pr_err("Could not attach to queue pair with %d\n",
+-		       err);
++		pr_err_once("Could not attach to queue pair with %d\n", err);
+ 		err = vmci_transport_error_to_vsock_error(err);
+ 	}
+ 
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 345ef1c967685..87fc56bc4f1e7 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1753,6 +1753,8 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 
+ 		if (rdev->bss_entries >= bss_entries_limit &&
+ 		    !cfg80211_bss_expire_oldest(rdev)) {
++			if (!list_empty(&new->hidden_list))
++				list_del(&new->hidden_list);
+ 			kfree(new);
+ 			goto drop;
+ 		}
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 52fd1f96b241e..ca4716b92774b 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -380,12 +380,16 @@ static int xsk_generic_xmit(struct sock *sk)
+ 	struct sk_buff *skb;
+ 	unsigned long flags;
+ 	int err = 0;
++	u32 hr, tr;
+ 
+ 	mutex_lock(&xs->mutex);
+ 
+ 	if (xs->queue_id >= xs->dev->real_num_tx_queues)
+ 		goto out;
+ 
++	hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(xs->dev->needed_headroom));
++	tr = xs->dev->needed_tailroom;
++
+ 	while (xskq_cons_peek_desc(xs->tx, &desc, xs->pool)) {
+ 		char *buffer;
+ 		u64 addr;
+@@ -397,11 +401,13 @@ static int xsk_generic_xmit(struct sock *sk)
+ 		}
+ 
+ 		len = desc.len;
+-		skb = sock_alloc_send_skb(sk, len, 1, &err);
++		skb = sock_alloc_send_skb(sk, hr + len + tr, 1, &err);
+ 		if (unlikely(!skb))
+ 			goto out;
+ 
++		skb_reserve(skb, hr);
+ 		skb_put(skb, len);
++
+ 		addr = desc.addr;
+ 		buffer = xsk_buff_raw_get_data(xs->pool, addr);
+ 		err = skb_store_bits(skb, 0, buffer, len);
+diff --git a/samples/kfifo/bytestream-example.c b/samples/kfifo/bytestream-example.c
+index c406f03ee5519..5a90aa5278775 100644
+--- a/samples/kfifo/bytestream-example.c
++++ b/samples/kfifo/bytestream-example.c
+@@ -122,8 +122,10 @@ static ssize_t fifo_write(struct file *file, const char __user *buf,
+ 	ret = kfifo_from_user(&test, buf, count, &copied);
+ 
+ 	mutex_unlock(&write_lock);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static ssize_t fifo_read(struct file *file, char __user *buf,
+@@ -138,8 +140,10 @@ static ssize_t fifo_read(struct file *file, char __user *buf,
+ 	ret = kfifo_to_user(&test, buf, count, &copied);
+ 
+ 	mutex_unlock(&read_lock);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static const struct proc_ops fifo_proc_ops = {
+diff --git a/samples/kfifo/inttype-example.c b/samples/kfifo/inttype-example.c
+index 78977fc4a23f7..e5403d8c971a5 100644
+--- a/samples/kfifo/inttype-example.c
++++ b/samples/kfifo/inttype-example.c
+@@ -115,8 +115,10 @@ static ssize_t fifo_write(struct file *file, const char __user *buf,
+ 	ret = kfifo_from_user(&test, buf, count, &copied);
+ 
+ 	mutex_unlock(&write_lock);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static ssize_t fifo_read(struct file *file, char __user *buf,
+@@ -131,8 +133,10 @@ static ssize_t fifo_read(struct file *file, char __user *buf,
+ 	ret = kfifo_to_user(&test, buf, count, &copied);
+ 
+ 	mutex_unlock(&read_lock);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static const struct proc_ops fifo_proc_ops = {
+diff --git a/samples/kfifo/record-example.c b/samples/kfifo/record-example.c
+index c507998a2617c..f64f3d62d6c2a 100644
+--- a/samples/kfifo/record-example.c
++++ b/samples/kfifo/record-example.c
+@@ -129,8 +129,10 @@ static ssize_t fifo_write(struct file *file, const char __user *buf,
+ 	ret = kfifo_from_user(&test, buf, count, &copied);
+ 
+ 	mutex_unlock(&write_lock);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static ssize_t fifo_read(struct file *file, char __user *buf,
+@@ -145,8 +147,10 @@ static ssize_t fifo_read(struct file *file, char __user *buf,
+ 	ret = kfifo_to_user(&test, buf, count, &copied);
+ 
+ 	mutex_unlock(&read_lock);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static const struct proc_ops fifo_proc_ops = {
+diff --git a/security/integrity/ima/ima_template.c b/security/integrity/ima/ima_template.c
+index 1e89e2d3851f9..f83255a39e653 100644
+--- a/security/integrity/ima/ima_template.c
++++ b/security/integrity/ima/ima_template.c
+@@ -468,8 +468,8 @@ int ima_restore_measurement_list(loff_t size, void *buf)
+ 			}
+ 		}
+ 
+-		entry->pcr = !ima_canonical_fmt ? *(hdr[HDR_PCR].data) :
+-			     le32_to_cpu(*(hdr[HDR_PCR].data));
++		entry->pcr = !ima_canonical_fmt ? *(u32 *)(hdr[HDR_PCR].data) :
++			     le32_to_cpu(*(u32 *)(hdr[HDR_PCR].data));
+ 		ret = ima_restore_measurement_entry(entry);
+ 		if (ret < 0)
+ 			break;
+diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c
+index 7a937c3c52834..230c0b27b77d1 100644
+--- a/security/keys/trusted-keys/trusted_tpm1.c
++++ b/security/keys/trusted-keys/trusted_tpm1.c
+@@ -791,13 +791,33 @@ static int getoptions(char *c, struct trusted_key_payload *pay,
+ 				return -EINVAL;
+ 			break;
+ 		case Opt_blobauth:
+-			if (strlen(args[0].from) != 2 * SHA1_DIGEST_SIZE)
+-				return -EINVAL;
+-			res = hex2bin(opt->blobauth, args[0].from,
+-				      SHA1_DIGEST_SIZE);
+-			if (res < 0)
+-				return -EINVAL;
++			/*
++			 * TPM 1.2 authorizations are sha1 hashes passed in as
++			 * hex strings.  TPM 2.0 authorizations are simple
++			 * passwords (although it can take a hash as well)
++			 */
++			opt->blobauth_len = strlen(args[0].from);
++
++			if (opt->blobauth_len == 2 * TPM_DIGEST_SIZE) {
++				res = hex2bin(opt->blobauth, args[0].from,
++					      TPM_DIGEST_SIZE);
++				if (res < 0)
++					return -EINVAL;
++
++				opt->blobauth_len = TPM_DIGEST_SIZE;
++				break;
++			}
++
++			if (tpm2 && opt->blobauth_len <= sizeof(opt->blobauth)) {
++				memcpy(opt->blobauth, args[0].from,
++				       opt->blobauth_len);
++				break;
++			}
++
++			return -EINVAL;
++
+ 			break;
++
+ 		case Opt_migratable:
+ 			if (*args[0].from == '0')
+ 				pay->migratable = 0;
+diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
+index c87c4df8703d4..4c19d3abddbee 100644
+--- a/security/keys/trusted-keys/trusted_tpm2.c
++++ b/security/keys/trusted-keys/trusted_tpm2.c
+@@ -97,10 +97,12 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
+ 			     TPM_DIGEST_SIZE);
+ 
+ 	/* sensitive */
+-	tpm_buf_append_u16(&buf, 4 + TPM_DIGEST_SIZE + payload->key_len + 1);
++	tpm_buf_append_u16(&buf, 4 + options->blobauth_len + payload->key_len + 1);
++
++	tpm_buf_append_u16(&buf, options->blobauth_len);
++	if (options->blobauth_len)
++		tpm_buf_append(&buf, options->blobauth, options->blobauth_len);
+ 
+-	tpm_buf_append_u16(&buf, TPM_DIGEST_SIZE);
+-	tpm_buf_append(&buf, options->blobauth, TPM_DIGEST_SIZE);
+ 	tpm_buf_append_u16(&buf, payload->key_len + 1);
+ 	tpm_buf_append(&buf, payload->key, payload->key_len);
+ 	tpm_buf_append_u8(&buf, payload->migratable);
+@@ -265,7 +267,7 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip,
+ 			     NULL /* nonce */, 0,
+ 			     TPM2_SA_CONTINUE_SESSION,
+ 			     options->blobauth /* hmac */,
+-			     TPM_DIGEST_SIZE);
++			     options->blobauth_len);
+ 
+ 	rc = tpm_transmit_cmd(chip, &buf, 6, "unsealing");
+ 	if (rc > 0)
+diff --git a/security/selinux/include/classmap.h b/security/selinux/include/classmap.h
+index 40cebde62856a..b9fdba2ff4163 100644
+--- a/security/selinux/include/classmap.h
++++ b/security/selinux/include/classmap.h
+@@ -242,11 +242,12 @@ struct security_class_mapping secclass_map[] = {
+ 	{ "infiniband_endport",
+ 	  { "manage_subnet", NULL } },
+ 	{ "bpf",
+-	  {"map_create", "map_read", "map_write", "prog_load", "prog_run"} },
++	  { "map_create", "map_read", "map_write", "prog_load", "prog_run",
++	    NULL } },
+ 	{ "xdp_socket",
+ 	  { COMMON_SOCK_PERMS, NULL } },
+ 	{ "perf_event",
+-	  {"open", "cpu", "kernel", "tracepoint", "read", "write"} },
++	  { "open", "cpu", "kernel", "tracepoint", "read", "write", NULL } },
+ 	{ "lockdown",
+ 	  { "integrity", "confidentiality", NULL } },
+ 	{ NULL }
+diff --git a/sound/core/init.c b/sound/core/init.c
+index 018ce4ef12ec8..9f5270c90a10a 100644
+--- a/sound/core/init.c
++++ b/sound/core/init.c
+@@ -390,10 +390,8 @@ int snd_card_disconnect(struct snd_card *card)
+ 		return 0;
+ 	}
+ 	card->shutdown = 1;
+-	spin_unlock(&card->files_lock);
+ 
+ 	/* replace file->f_op with special dummy operations */
+-	spin_lock(&card->files_lock);
+ 	list_for_each_entry(mfile, &card->files_list, list) {
+ 		/* it's critical part, use endless loop */
+ 		/* we have no room to fail */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d05d16ddbdf2c..8ec57bd351dfe 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2470,13 +2470,13 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 		      ALC882_FIXUP_ACER_ASPIRE_8930G),
+ 	SND_PCI_QUIRK(0x1025, 0x0146, "Acer Aspire 6935G",
+ 		      ALC882_FIXUP_ACER_ASPIRE_8930G),
++	SND_PCI_QUIRK(0x1025, 0x0142, "Acer Aspire 7730G",
++		      ALC882_FIXUP_ACER_ASPIRE_4930G),
++	SND_PCI_QUIRK(0x1025, 0x0155, "Packard-Bell M5120", ALC882_FIXUP_PB_M5210),
+ 	SND_PCI_QUIRK(0x1025, 0x015e, "Acer Aspire 6930G",
+ 		      ALC882_FIXUP_ACER_ASPIRE_4930G),
+ 	SND_PCI_QUIRK(0x1025, 0x0166, "Acer Aspire 6530G",
+ 		      ALC882_FIXUP_ACER_ASPIRE_4930G),
+-	SND_PCI_QUIRK(0x1025, 0x0142, "Acer Aspire 7730G",
+-		      ALC882_FIXUP_ACER_ASPIRE_4930G),
+-	SND_PCI_QUIRK(0x1025, 0x0155, "Packard-Bell M5120", ALC882_FIXUP_PB_M5210),
+ 	SND_PCI_QUIRK(0x1025, 0x021e, "Acer Aspire 5739G",
+ 		      ALC882_FIXUP_ACER_ASPIRE_4930G),
+ 	SND_PCI_QUIRK(0x1025, 0x0259, "Acer Aspire 5935", ALC889_FIXUP_DAC_ROUTE),
+@@ -2489,11 +2489,11 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x835f, "Asus Eee 1601", ALC888_FIXUP_EEE1601),
+ 	SND_PCI_QUIRK(0x1043, 0x84bc, "ASUS ET2700", ALC887_FIXUP_ASUS_BASS),
+ 	SND_PCI_QUIRK(0x1043, 0x8691, "ASUS ROG Ranger VIII", ALC882_FIXUP_GPIO3),
++	SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
++	SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP),
+ 	SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT),
+ 	SND_PCI_QUIRK(0x104d, 0x905a, "Sony Vaio Z", ALC882_FIXUP_NO_PRIMARY_HP),
+ 	SND_PCI_QUIRK(0x104d, 0x9060, "Sony Vaio VPCL14M1R", ALC882_FIXUP_NO_PRIMARY_HP),
+-	SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
+-	SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP),
+ 
+ 	/* All Apple entries are in codec SSIDs */
+ 	SND_PCI_QUIRK(0x106b, 0x00a0, "MacBookPro 3,1", ALC889_FIXUP_MBP_VREF),
+@@ -2536,9 +2536,19 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1462, 0xda57, "MSI Z270-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+ 	SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
+ 	SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
++	SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x9506, "Clevo P955HQ", ALC1220_FIXUP_CLEVO_P950),
+-	SND_PCI_QUIRK(0x1558, 0x950A, "Clevo P955H[PR]", ALC1220_FIXUP_CLEVO_P950),
++	SND_PCI_QUIRK(0x1558, 0x950a, "Clevo P955H[PR]", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x95e1, "Clevo P95xER", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x95e2, "Clevo P950ER", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x95e3, "Clevo P955[ER]T", ALC1220_FIXUP_CLEVO_P950),
+@@ -2548,16 +2558,6 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x96e1, "Clevo P960[ER][CDFN]-K", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x97e1, "Clevo P970[ER][CDFN]", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x97e2, "Clevo P970RC-M", ALC1220_FIXUP_CLEVO_P950),
+-	SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Y530", ALC882_FIXUP_LENOVO_Y530),
+@@ -4331,6 +4331,35 @@ static void alc245_fixup_hp_x360_amp(struct hda_codec *codec,
+ 	}
+ }
+ 
++/* toggle GPIO2 at each time stream is started; we use PREPARE state instead */
++static void alc274_hp_envy_pcm_hook(struct hda_pcm_stream *hinfo,
++				    struct hda_codec *codec,
++				    struct snd_pcm_substream *substream,
++				    int action)
++{
++	switch (action) {
++	case HDA_GEN_PCM_ACT_PREPARE:
++		alc_update_gpio_data(codec, 0x04, true);
++		break;
++	case HDA_GEN_PCM_ACT_CLEANUP:
++		alc_update_gpio_data(codec, 0x04, false);
++		break;
++	}
++}
++
++static void alc274_fixup_hp_envy_gpio(struct hda_codec *codec,
++				      const struct hda_fixup *fix,
++				      int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PROBE) {
++		spec->gpio_mask |= 0x04;
++		spec->gpio_dir |= 0x04;
++		spec->gen.pcm_playback_hook = alc274_hp_envy_pcm_hook;
++	}
++}
++
+ static void alc_update_coef_led(struct hda_codec *codec,
+ 				struct alc_coef_led *led,
+ 				bool polarity, bool on)
+@@ -6443,6 +6472,7 @@ enum {
+ 	ALC255_FIXUP_XIAOMI_HEADSET_MIC,
+ 	ALC274_FIXUP_HP_MIC,
+ 	ALC274_FIXUP_HP_HEADSET_MIC,
++	ALC274_FIXUP_HP_ENVY_GPIO,
+ 	ALC256_FIXUP_ASUS_HPE,
+ 	ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+ 	ALC287_FIXUP_HP_GPIO_LED,
+@@ -7882,6 +7912,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC274_FIXUP_HP_MIC
+ 	},
++	[ALC274_FIXUP_HP_ENVY_GPIO] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc274_fixup_hp_envy_gpio,
++	},
+ 	[ALC256_FIXUP_ASUS_HPE] = {
+ 		.type = HDA_FIXUP_VERBS,
+ 		.v.verbs = (const struct hda_verb[]) {
+@@ -7947,12 +7981,12 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x0349, "Acer AOD260", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x047c, "Acer AC700", ALC269_FIXUP_ACER_AC700),
+ 	SND_PCI_QUIRK(0x1025, 0x072d, "Acer Aspire V5-571G", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+-	SND_PCI_QUIRK(0x1025, 0x080d, "Acer Aspire V5-122P", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x0740, "Acer AO725", ALC271_FIXUP_HP_GATE_MIC_JACK),
+ 	SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK),
+ 	SND_PCI_QUIRK(0x1025, 0x0762, "Acer Aspire E1-472", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+ 	SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+ 	SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
++	SND_PCI_QUIRK(0x1025, 0x080d, "Acer Aspire V5-122P", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x0840, "Acer Aspire E1", ALC269VB_FIXUP_ASPIRE_E1_COEF),
+ 	SND_PCI_QUIRK(0x1025, 0x101c, "Acer Veriton N2510G", ALC269_FIXUP_LIFEBOOK),
+ 	SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+@@ -8008,8 +8042,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0738, "Dell Precision 5820", ALC269_FIXUP_NO_SHUTUP),
+ 	SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
+-	SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3),
+ 	SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
++	SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3),
+ 	SND_PCI_QUIRK(0x1028, 0x080c, "Dell WYSE", ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x084b, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+ 	SND_PCI_QUIRK(0x1028, 0x084e, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+@@ -8019,8 +8053,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+-	SND_PCI_QUIRK(0x1028, 0x097e, "Dell Precision", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x097d, "Dell Precision", ALC289_FIXUP_DUAL_SPK),
++	SND_PCI_QUIRK(0x1028, 0x097e, "Dell Precision", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x098d, "Dell Precision", ALC233_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x09bf, "Dell Precision", ALC233_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0a2e, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
+@@ -8031,35 +8065,18 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x18e6, "HP", ALC269_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x218b, "HP", ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED),
+-	SND_PCI_QUIRK(0x103c, 0x225f, "HP", ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY),
+-	/* ALC282 */
+ 	SND_PCI_QUIRK(0x103c, 0x21f9, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2210, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2214, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x221b, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
++	SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x103c, 0x2221, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
++	SND_PCI_QUIRK(0x103c, 0x2225, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2236, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2237, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2238, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2239, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x224b, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
+-	SND_PCI_QUIRK(0x103c, 0x2268, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x226a, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x226b, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x226e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x2271, "HP", ALC286_FIXUP_HP_GPIO_LED),
+-	SND_PCI_QUIRK(0x103c, 0x2272, "HP", ALC280_FIXUP_HP_DOCK_PINS),
+-	SND_PCI_QUIRK(0x103c, 0x2273, "HP", ALC280_FIXUP_HP_DOCK_PINS),
+-	SND_PCI_QUIRK(0x103c, 0x229e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x22b2, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x22b7, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x22bf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x22cf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x22db, "HP", ALC280_FIXUP_HP_9480M),
+-	SND_PCI_QUIRK(0x103c, 0x22dc, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+-	SND_PCI_QUIRK(0x103c, 0x22fb, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+-	/* ALC290 */
+-	SND_PCI_QUIRK(0x103c, 0x221b, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+-	SND_PCI_QUIRK(0x103c, 0x2221, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+-	SND_PCI_QUIRK(0x103c, 0x2225, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2253, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2254, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2255, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+@@ -8067,26 +8084,41 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x2257, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2259, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x225a, "HP", ALC269_FIXUP_HP_DOCK_GPIO_MIC1_LED),
++	SND_PCI_QUIRK(0x103c, 0x225f, "HP", ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY),
+ 	SND_PCI_QUIRK(0x103c, 0x2260, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2263, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2264, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2265, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x2268, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x226a, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x226b, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x226e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x2271, "HP", ALC286_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2272, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
++	SND_PCI_QUIRK(0x103c, 0x2272, "HP", ALC280_FIXUP_HP_DOCK_PINS),
+ 	SND_PCI_QUIRK(0x103c, 0x2273, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
++	SND_PCI_QUIRK(0x103c, 0x2273, "HP", ALC280_FIXUP_HP_DOCK_PINS),
+ 	SND_PCI_QUIRK(0x103c, 0x2278, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x227f, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2282, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x228b, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x228e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x229e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x22b2, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x22b7, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x22bf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x22c4, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x22c5, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x22c7, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x22c8, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x22c4, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x22cf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x22db, "HP", ALC280_FIXUP_HP_9480M),
++	SND_PCI_QUIRK(0x103c, 0x22dc, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
++	SND_PCI_QUIRK(0x103c, 0x22fb, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2334, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2335, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x103c, 0x802e, "HP Z240 SFF", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x802f, "HP Z240", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8077, "HP", ALC256_FIXUP_HP_HEADSET_MIC),
+@@ -8101,6 +8133,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
+ 	SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8730, "HP ProBook 445 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+@@ -8128,16 +8161,18 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x10d0, "ASUS X540LA/X540LJ", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x1043, 0x11c0, "ASUS X556UR", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1043, 0x125e, "ASUS Q524UQK", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1271, "ASUS X430UN", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1290, "ASUS X441SA", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
++	SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ 	SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
++	SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x194e, "ASUS UX563FD", ALC294_FIXUP_ASUS_HPE),
+@@ -8150,32 +8185,31 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+-	SND_PCI_QUIRK(0x1043, 0x125e, "ASUS Q524UQK", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
+ 	SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
+-	SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ 	SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x83ce, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x8516, "ASUS X101CH", ALC269_FIXUP_ASUS_X101),
+-	SND_PCI_QUIRK(0x104d, 0x90b5, "Sony VAIO Pro 11", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x104d, 0x90b6, "Sony VAIO Pro 13", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x104d, 0x9073, "Sony VAIO", ALC275_FIXUP_SONY_VAIO_GPIO2),
+ 	SND_PCI_QUIRK(0x104d, 0x907b, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
+ 	SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
+ 	SND_PCI_QUIRK(0x104d, 0x9099, "Sony VAIO S13", ALC275_FIXUP_SONY_DISABLE_AAMIX),
++	SND_PCI_QUIRK(0x104d, 0x90b5, "Sony VAIO Pro 11", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x104d, 0x90b6, "Sony VAIO Pro 13", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook", ALC269_FIXUP_LIFEBOOK),
+ 	SND_PCI_QUIRK(0x10cf, 0x159f, "Lifebook E780", ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT),
+ 	SND_PCI_QUIRK(0x10cf, 0x15dc, "Lifebook T731", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+-	SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+ 	SND_PCI_QUIRK(0x10cf, 0x1629, "Lifebook U7x7", ALC255_FIXUP_LIFEBOOK_U7x7_HEADSET_MIC),
++	SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+ 	SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ 	SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
++	SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+@@ -8185,9 +8219,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+-	SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ 	SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++	SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+@@ -8243,9 +8277,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
+ 	SND_PCI_QUIRK(0x17aa, 0x21ca, "Thinkpad L412", ALC269_FIXUP_SKU_IGNORE),
+ 	SND_PCI_QUIRK(0x17aa, 0x21e9, "Thinkpad Edge 15", ALC269_FIXUP_SKU_IGNORE),
++	SND_PCI_QUIRK(0x17aa, 0x21f3, "Thinkpad T430", ALC269_FIXUP_LENOVO_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x21f6, "Thinkpad T530", ALC269_FIXUP_LENOVO_DOCK_LIMIT_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x21fa, "Thinkpad X230", ALC269_FIXUP_LENOVO_DOCK),
+-	SND_PCI_QUIRK(0x17aa, 0x21f3, "Thinkpad T430", ALC269_FIXUP_LENOVO_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x21fb, "Thinkpad T430s", ALC269_FIXUP_LENOVO_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2203, "Thinkpad X230 Tablet", ALC269_FIXUP_LENOVO_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2208, "Thinkpad T431s", ALC269_FIXUP_LENOVO_DOCK),
+@@ -8289,6 +8323,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
++	SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+ 	SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x501e, "Thinkpad L440", ALC292_FIXUP_TPT440_DOCK),
+@@ -8307,20 +8342,18 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x511e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+-	SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+ 	SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ 	SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
+ 	SND_PCI_QUIRK(0x1b35, 0x1235, "CZC B20", ALC269_FIXUP_CZC_B20),
+ 	SND_PCI_QUIRK(0x1b35, 0x1236, "CZC TMI", ALC269_FIXUP_CZC_TMI),
+ 	SND_PCI_QUIRK(0x1b35, 0x1237, "CZC L101", ALC269_FIXUP_CZC_L101),
+ 	SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
++	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
++	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+ 	SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+-	SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+-	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+ 	SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+ 	SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+ 	SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10),
+@@ -8777,6 +8810,16 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x19, 0x03a11020},
+ 		{0x21, 0x0321101f}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
++		{0x12, 0x90a60130},
++		{0x14, 0x90170110},
++		{0x19, 0x04a11040},
++		{0x21, 0x04211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
++		{0x14, 0x90170110},
++		{0x19, 0x04a11040},
++		{0x1d, 0x40600001},
++		{0x21, 0x04211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+ 		{0x14, 0x90170110},
+ 		{0x19, 0x04a11040},
+ 		{0x21, 0x04211020}),
+@@ -8947,10 +8990,6 @@ static const struct snd_hda_pin_quirk alc269_fallback_pin_fixup_tbl[] = {
+ 	SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
+ 		{0x19, 0x40000000},
+ 		{0x1a, 0x40000000}),
+-	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+-		{0x14, 0x90170110},
+-		{0x19, 0x04a11040},
+-		{0x21, 0x04211020}),
+ 	{}
+ };
+ 
+@@ -9266,8 +9305,7 @@ static const struct snd_pci_quirk alc861_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1393, "ASUS A6Rp", ALC861_FIXUP_ASUS_A6RP),
+ 	SND_PCI_QUIRK_VENDOR(0x1043, "ASUS laptop", ALC861_FIXUP_AMP_VREF_0F),
+ 	SND_PCI_QUIRK(0x1462, 0x7254, "HP DX2200", ALC861_FIXUP_NO_JACK_DETECT),
+-	SND_PCI_QUIRK(0x1584, 0x2b01, "Haier W18", ALC861_FIXUP_AMP_VREF_0F),
+-	SND_PCI_QUIRK(0x1584, 0x0000, "Uniwill ECS M31EI", ALC861_FIXUP_AMP_VREF_0F),
++	SND_PCI_QUIRK_VENDOR(0x1584, "Haier/Uniwill", ALC861_FIXUP_AMP_VREF_0F),
+ 	SND_PCI_QUIRK(0x1734, 0x10c7, "FSC Amilo Pi1505", ALC861_FIXUP_FSC_AMILO_PI1505),
+ 	{}
+ };
+@@ -10062,6 +10100,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x0349, "eMachines eM250", ALC662_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x034a, "Gateway LT27", ALC662_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x038b, "Acer Aspire 8943G", ALC662_FIXUP_ASPIRE),
++	SND_PCI_QUIRK(0x1025, 0x0566, "Acer Aspire Ethos 8951G", ALC669_FIXUP_ACER_ASPIRE_ETHOS),
+ 	SND_PCI_QUIRK(0x1025, 0x123c, "Acer Nitro N50-600", ALC662_FIXUP_ACER_NITRO_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x1025, 0x124e, "Acer 2660G", ALC662_FIXUP_ACER_X2660G_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x1028, 0x05d8, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+@@ -10078,9 +10117,9 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
+-	SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
+ 	SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
+ 	SND_PCI_QUIRK(0x1043, 0x12ff, "ASUS G751", ALC668_FIXUP_ASUS_G751),
++	SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
+ 	SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
+ 	SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
+ 	SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51),
+@@ -10100,7 +10139,6 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1b0a, 0x01b8, "ACER Veriton", ALC662_FIXUP_ACER_VERITON),
+ 	SND_PCI_QUIRK(0x1b35, 0x1234, "CZC ET26", ALC662_FIXUP_CZC_ET26),
+ 	SND_PCI_QUIRK(0x1b35, 0x2206, "CZC P10T", ALC662_FIXUP_CZC_P10T),
+-	SND_PCI_QUIRK(0x1025, 0x0566, "Acer Aspire Ethos 8951G", ALC669_FIXUP_ACER_ASPIRE_ETHOS),
+ 
+ #if 0
+ 	/* Below is a quirk table taken from the old code.
+diff --git a/sound/soc/codecs/ak5558.c b/sound/soc/codecs/ak5558.c
+index 65a248c92f669..adbdfdbc7a38b 100644
+--- a/sound/soc/codecs/ak5558.c
++++ b/sound/soc/codecs/ak5558.c
+@@ -272,7 +272,7 @@ static void ak5558_power_off(struct ak5558_priv *ak5558)
+ 	if (!ak5558->reset_gpiod)
+ 		return;
+ 
+-	gpiod_set_value_cansleep(ak5558->reset_gpiod, 0);
++	gpiod_set_value_cansleep(ak5558->reset_gpiod, 1);
+ 	usleep_range(1000, 2000);
+ }
+ 
+@@ -281,7 +281,7 @@ static void ak5558_power_on(struct ak5558_priv *ak5558)
+ 	if (!ak5558->reset_gpiod)
+ 		return;
+ 
+-	gpiod_set_value_cansleep(ak5558->reset_gpiod, 1);
++	gpiod_set_value_cansleep(ak5558->reset_gpiod, 0);
+ 	usleep_range(1000, 2000);
+ }
+ 
+diff --git a/sound/soc/codecs/tlv320aic32x4.c b/sound/soc/codecs/tlv320aic32x4.c
+index 9e3de9ded0efb..b8950758471fa 100644
+--- a/sound/soc/codecs/tlv320aic32x4.c
++++ b/sound/soc/codecs/tlv320aic32x4.c
+@@ -577,12 +577,12 @@ static const struct regmap_range_cfg aic32x4_regmap_pages[] = {
+ 		.window_start = 0,
+ 		.window_len = 128,
+ 		.range_min = 0,
+-		.range_max = AIC32X4_RMICPGAVOL,
++		.range_max = AIC32X4_REFPOWERUP,
+ 	},
+ };
+ 
+ const struct regmap_config aic32x4_regmap_config = {
+-	.max_register = AIC32X4_RMICPGAVOL,
++	.max_register = AIC32X4_REFPOWERUP,
+ 	.ranges = aic32x4_regmap_pages,
+ 	.num_ranges = ARRAY_SIZE(aic32x4_regmap_pages),
+ };
+@@ -1243,6 +1243,10 @@ int aic32x4_probe(struct device *dev, struct regmap *regmap)
+ 	if (ret)
+ 		goto err_disable_regulators;
+ 
++	ret = aic32x4_register_clocks(dev, aic32x4->mclk_name);
++	if (ret)
++		goto err_disable_regulators;
++
+ 	ret = devm_snd_soc_register_component(dev,
+ 			&soc_component_dev_aic32x4, &aic32x4_dai, 1);
+ 	if (ret) {
+@@ -1250,10 +1254,6 @@ int aic32x4_probe(struct device *dev, struct regmap *regmap)
+ 		goto err_disable_regulators;
+ 	}
+ 
+-	ret = aic32x4_register_clocks(dev, aic32x4->mclk_name);
+-	if (ret)
+-		goto err_disable_regulators;
+-
+ 	return 0;
+ 
+ err_disable_regulators:
+diff --git a/sound/soc/codecs/wm8960.c b/sound/soc/codecs/wm8960.c
+index ceaf3bbb18e66..9d325555e2191 100644
+--- a/sound/soc/codecs/wm8960.c
++++ b/sound/soc/codecs/wm8960.c
+@@ -608,10 +608,6 @@ static const int bclk_divs[] = {
+  *		- lrclk      = sysclk / dac_divs
+  *		- 10 * bclk  = sysclk / bclk_divs
+  *
+- *	If we cannot find an exact match for (sysclk, lrclk, bclk)
+- *	triplet, we relax the bclk such that bclk is chosen as the
+- *	closest available frequency greater than expected bclk.
+- *
+  * @wm8960: codec private data
+  * @mclk: MCLK used to derive sysclk
+  * @sysclk_idx: sysclk_divs index for found sysclk
+@@ -629,7 +625,7 @@ int wm8960_configure_sysclk(struct wm8960_priv *wm8960, int mclk,
+ {
+ 	int sysclk, bclk, lrclk;
+ 	int i, j, k;
+-	int diff, closest = mclk;
++	int diff;
+ 
+ 	/* marker for no match */
+ 	*bclk_idx = -1;
+@@ -653,12 +649,6 @@ int wm8960_configure_sysclk(struct wm8960_priv *wm8960, int mclk,
+ 					*bclk_idx = k;
+ 					break;
+ 				}
+-				if (diff > 0 && closest > diff) {
+-					*sysclk_idx = i;
+-					*dac_idx = j;
+-					*bclk_idx = k;
+-					closest = diff;
+-				}
+ 			}
+ 			if (k != ARRAY_SIZE(bclk_divs))
+ 				break;
+diff --git a/sound/soc/generic/audio-graph-card.c b/sound/soc/generic/audio-graph-card.c
+index 97b4f5480a31c..0c640308ed80b 100644
+--- a/sound/soc/generic/audio-graph-card.c
++++ b/sound/soc/generic/audio-graph-card.c
+@@ -340,7 +340,7 @@ static int graph_dai_link_of(struct asoc_simple_priv *priv,
+ 	struct device_node *top = dev->of_node;
+ 	struct asoc_simple_dai *cpu_dai;
+ 	struct asoc_simple_dai *codec_dai;
+-	int ret, single_cpu;
++	int ret, single_cpu = 0;
+ 
+ 	/* Do it only CPU turn */
+ 	if (!li->cpu)
+diff --git a/sound/soc/generic/simple-card.c b/sound/soc/generic/simple-card.c
+index 75365c7bb3930..d916ec69c24ff 100644
+--- a/sound/soc/generic/simple-card.c
++++ b/sound/soc/generic/simple-card.c
+@@ -258,7 +258,7 @@ static int simple_dai_link_of(struct asoc_simple_priv *priv,
+ 	struct device_node *plat = NULL;
+ 	char prop[128];
+ 	char *prefix = "";
+-	int ret, single_cpu;
++	int ret, single_cpu = 0;
+ 
+ 	/*
+ 	 *	 |CPU   |Codec   : turn
+diff --git a/sound/soc/intel/Makefile b/sound/soc/intel/Makefile
+index 4e0248d2accc7..7c5038803be73 100644
+--- a/sound/soc/intel/Makefile
++++ b/sound/soc/intel/Makefile
+@@ -5,7 +5,7 @@ obj-$(CONFIG_SND_SOC) += common/
+ # Platform Support
+ obj-$(CONFIG_SND_SST_ATOM_HIFI2_PLATFORM) += atom/
+ obj-$(CONFIG_SND_SOC_INTEL_CATPT) += catpt/
+-obj-$(CONFIG_SND_SOC_INTEL_SKYLAKE) += skylake/
++obj-$(CONFIG_SND_SOC_INTEL_SKYLAKE_COMMON) += skylake/
+ obj-$(CONFIG_SND_SOC_INTEL_KEEMBAY) += keembay/
+ 
+ # Machine support
+diff --git a/sound/soc/intel/boards/kbl_da7219_max98927.c b/sound/soc/intel/boards/kbl_da7219_max98927.c
+index cc9a2509ace29..e0149cf6127d0 100644
+--- a/sound/soc/intel/boards/kbl_da7219_max98927.c
++++ b/sound/soc/intel/boards/kbl_da7219_max98927.c
+@@ -282,11 +282,33 @@ static int kabylake_ssp_fixup(struct snd_soc_pcm_runtime *rtd,
+ 	struct snd_interval *chan = hw_param_interval(params,
+ 			SNDRV_PCM_HW_PARAM_CHANNELS);
+ 	struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);
+-	struct snd_soc_dpcm *dpcm = container_of(
+-			params, struct snd_soc_dpcm, hw_params);
+-	struct snd_soc_dai_link *fe_dai_link = dpcm->fe->dai_link;
+-	struct snd_soc_dai_link *be_dai_link = dpcm->be->dai_link;
++	struct snd_soc_dpcm *dpcm, *rtd_dpcm = NULL;
+ 
++	/*
++	 * The following loop will be called only for playback stream
++	 * In this platform, there is only one playback device on every SSP
++	 */
++	for_each_dpcm_fe(rtd, SNDRV_PCM_STREAM_PLAYBACK, dpcm) {
++		rtd_dpcm = dpcm;
++		break;
++	}
++
++	/*
++	 * This following loop will be called only for capture stream
++	 * In this platform, there is only one capture device on every SSP
++	 */
++	for_each_dpcm_fe(rtd, SNDRV_PCM_STREAM_CAPTURE, dpcm) {
++		rtd_dpcm = dpcm;
++		break;
++	}
++
++	if (!rtd_dpcm)
++		return -EINVAL;
++
++	/*
++	 * The above 2 loops are mutually exclusive based on the stream direction,
++	 * thus rtd_dpcm variable will never be overwritten
++	 */
+ 	/*
+ 	 * Topology for kblda7219m98373 & kblmax98373 supports only S24_LE,
+ 	 * where as kblda7219m98927 & kblmax98927 supports S16_LE by default.
+@@ -309,9 +331,9 @@ static int kabylake_ssp_fixup(struct snd_soc_pcm_runtime *rtd,
+ 	/*
+ 	 * The ADSP will convert the FE rate to 48k, stereo, 24 bit
+ 	 */
+-	if (!strcmp(fe_dai_link->name, "Kbl Audio Port") ||
+-	    !strcmp(fe_dai_link->name, "Kbl Audio Headset Playback") ||
+-	    !strcmp(fe_dai_link->name, "Kbl Audio Capture Port")) {
++	if (!strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Port") ||
++	    !strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Headset Playback") ||
++	    !strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Capture Port")) {
+ 		rate->min = rate->max = 48000;
+ 		chan->min = chan->max = 2;
+ 		snd_mask_none(fmt);
+@@ -322,7 +344,7 @@ static int kabylake_ssp_fixup(struct snd_soc_pcm_runtime *rtd,
+ 	 * The speaker on the SSP0 supports S16_LE and not S24_LE.
+ 	 * thus changing the mask here
+ 	 */
+-	if (!strcmp(be_dai_link->name, "SSP0-Codec"))
++	if (!strcmp(rtd_dpcm->be->dai_link->name, "SSP0-Codec"))
+ 		snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S16_LE);
+ 
+ 	return 0;
+diff --git a/sound/soc/intel/boards/sof_wm8804.c b/sound/soc/intel/boards/sof_wm8804.c
+index a46ba13e8eb0c..6a181e45143d7 100644
+--- a/sound/soc/intel/boards/sof_wm8804.c
++++ b/sound/soc/intel/boards/sof_wm8804.c
+@@ -124,7 +124,11 @@ static int sof_wm8804_hw_params(struct snd_pcm_substream *substream,
+ 	}
+ 
+ 	snd_soc_dai_set_clkdiv(codec_dai, WM8804_MCLK_DIV, mclk_div);
+-	snd_soc_dai_set_pll(codec_dai, 0, 0, sysclk, mclk_freq);
++	ret = snd_soc_dai_set_pll(codec_dai, 0, 0, sysclk, mclk_freq);
++	if (ret < 0) {
++		dev_err(rtd->card->dev, "Failed to set WM8804 PLL\n");
++		return ret;
++	}
+ 
+ 	ret = snd_soc_dai_set_sysclk(codec_dai, WM8804_TX_CLKSRC_PLL,
+ 				     sysclk, SND_SOC_CLOCK_OUT);
+diff --git a/sound/soc/intel/skylake/Makefile b/sound/soc/intel/skylake/Makefile
+index dd39149b89b1d..1c4649bccec5a 100644
+--- a/sound/soc/intel/skylake/Makefile
++++ b/sound/soc/intel/skylake/Makefile
+@@ -7,7 +7,7 @@ ifdef CONFIG_DEBUG_FS
+   snd-soc-skl-objs += skl-debug.o
+ endif
+ 
+-obj-$(CONFIG_SND_SOC_INTEL_SKYLAKE) += snd-soc-skl.o
++obj-$(CONFIG_SND_SOC_INTEL_SKYLAKE_COMMON) += snd-soc-skl.o
+ 
+ #Skylake Clock device support
+ snd-soc-skl-ssp-clk-objs := skl-ssp-clk.o
+diff --git a/sound/soc/samsung/tm2_wm5110.c b/sound/soc/samsung/tm2_wm5110.c
+index 9300fef9bf269..125e07f65d2b5 100644
+--- a/sound/soc/samsung/tm2_wm5110.c
++++ b/sound/soc/samsung/tm2_wm5110.c
+@@ -553,7 +553,7 @@ static int tm2_probe(struct platform_device *pdev)
+ 
+ 		ret = of_parse_phandle_with_args(dev->of_node, "i2s-controller",
+ 						 cells_name, i, &args);
+-		if (!args.np) {
++		if (ret) {
+ 			dev_err(dev, "i2s-controller property parse error: %d\n", i);
+ 			ret = -EINVAL;
+ 			goto dai_node_put;
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index fc7c359ae215a..258b81b399177 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -182,9 +182,8 @@ static int snd_usb_create_stream(struct snd_usb_audio *chip, int ctrlif, int int
+ 				ctrlif, interface);
+ 			return -EINVAL;
+ 		}
+-		usb_driver_claim_interface(&usb_audio_driver, iface, (void *)-1L);
+-
+-		return 0;
++		return usb_driver_claim_interface(&usb_audio_driver, iface,
++						  USB_AUDIO_IFACE_UNUSED);
+ 	}
+ 
+ 	if ((altsd->bInterfaceClass != USB_CLASS_AUDIO &&
+@@ -204,7 +203,8 @@ static int snd_usb_create_stream(struct snd_usb_audio *chip, int ctrlif, int int
+ 
+ 	if (! snd_usb_parse_audio_interface(chip, interface)) {
+ 		usb_set_interface(dev, interface, 0); /* reset the current interface */
+-		usb_driver_claim_interface(&usb_audio_driver, iface, (void *)-1L);
++		return usb_driver_claim_interface(&usb_audio_driver, iface,
++						  USB_AUDIO_IFACE_UNUSED);
+ 	}
+ 
+ 	return 0;
+@@ -864,7 +864,7 @@ static void usb_audio_disconnect(struct usb_interface *intf)
+ 	struct snd_card *card;
+ 	struct list_head *p;
+ 
+-	if (chip == (void *)-1L)
++	if (chip == USB_AUDIO_IFACE_UNUSED)
+ 		return;
+ 
+ 	card = chip->card;
+@@ -993,7 +993,7 @@ static int usb_audio_suspend(struct usb_interface *intf, pm_message_t message)
+ 	struct usb_mixer_interface *mixer;
+ 	struct list_head *p;
+ 
+-	if (chip == (void *)-1L)
++	if (chip == USB_AUDIO_IFACE_UNUSED)
+ 		return 0;
+ 
+ 	if (!chip->num_suspended_intf++) {
+@@ -1024,7 +1024,7 @@ static int __usb_audio_resume(struct usb_interface *intf, bool reset_resume)
+ 	struct list_head *p;
+ 	int err = 0;
+ 
+-	if (chip == (void *)-1L)
++	if (chip == USB_AUDIO_IFACE_UNUSED)
+ 		return 0;
+ 
+ 	atomic_inc(&chip->active); /* avoid autopm */
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 0c23fa6d8525d..cd46ca7cd28de 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1332,7 +1332,7 @@ static int snd_usbmidi_in_endpoint_create(struct snd_usb_midi *umidi,
+ 
+  error:
+ 	snd_usbmidi_in_endpoint_delete(ep);
+-	return -ENOMEM;
++	return err;
+ }
+ 
+ /*
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 5ab2a4580bfb2..bddef8ad57783 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -55,8 +55,12 @@ static int create_composite_quirk(struct snd_usb_audio *chip,
+ 		if (!iface)
+ 			continue;
+ 		if (quirk->ifnum != probed_ifnum &&
+-		    !usb_interface_claimed(iface))
+-			usb_driver_claim_interface(driver, iface, (void *)-1L);
++		    !usb_interface_claimed(iface)) {
++			err = usb_driver_claim_interface(driver, iface,
++							 USB_AUDIO_IFACE_UNUSED);
++			if (err < 0)
++				return err;
++		}
+ 	}
+ 
+ 	return 0;
+@@ -390,8 +394,12 @@ static int create_autodetect_quirks(struct snd_usb_audio *chip,
+ 			continue;
+ 
+ 		err = create_autodetect_quirk(chip, iface, driver);
+-		if (err >= 0)
+-			usb_driver_claim_interface(driver, iface, (void *)-1L);
++		if (err >= 0) {
++			err = usb_driver_claim_interface(driver, iface,
++							 USB_AUDIO_IFACE_UNUSED);
++			if (err < 0)
++				return err;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index 9667060ff92be..e54a98f465490 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -63,6 +63,8 @@ struct snd_usb_audio {
+ 	struct media_intf_devnode *ctl_intf_media_devnode;
+ };
+ 
++#define USB_AUDIO_IFACE_UNUSED	((void *)-1L)
++
+ #define usb_audio_err(chip, fmt, args...) \
+ 	dev_err(&(chip)->dev->dev, fmt, ##args)
+ #define usb_audio_warn(chip, fmt, args...) \
+diff --git a/tools/bpf/bpftool/btf.c b/tools/bpf/bpftool/btf.c
+index 2afb7d5b1aca2..592803af97340 100644
+--- a/tools/bpf/bpftool/btf.c
++++ b/tools/bpf/bpftool/btf.c
+@@ -519,6 +519,7 @@ static int do_dump(int argc, char **argv)
+ 			NEXT_ARG();
+ 			if (argc < 1) {
+ 				p_err("expecting value for 'format' option\n");
++				err = -EINVAL;
+ 				goto done;
+ 			}
+ 			if (strcmp(*argv, "c") == 0) {
+@@ -528,11 +529,13 @@ static int do_dump(int argc, char **argv)
+ 			} else {
+ 				p_err("unrecognized format specifier: '%s', possible values: raw, c",
+ 				      *argv);
++				err = -EINVAL;
+ 				goto done;
+ 			}
+ 			NEXT_ARG();
+ 		} else {
+ 			p_err("unrecognized option: '%s'", *argv);
++			err = -EINVAL;
+ 			goto done;
+ 		}
+ 	}
+diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
+index 682daaa49e6a2..33068d6ed5d6c 100644
+--- a/tools/bpf/bpftool/main.c
++++ b/tools/bpf/bpftool/main.c
+@@ -274,7 +274,7 @@ static int do_batch(int argc, char **argv)
+ 	int n_argc;
+ 	FILE *fp;
+ 	char *cp;
+-	int err;
++	int err = 0;
+ 	int i;
+ 
+ 	if (argc < 2) {
+@@ -368,7 +368,6 @@ static int do_batch(int argc, char **argv)
+ 	} else {
+ 		if (!json_output)
+ 			printf("processed %d commands\n", lines);
+-		err = 0;
+ 	}
+ err_close:
+ 	if (fp != stdin)
+diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
+index a7efbd84fbcc4..ce6faf1b90e83 100644
+--- a/tools/bpf/bpftool/map.c
++++ b/tools/bpf/bpftool/map.c
+@@ -99,7 +99,7 @@ static int do_dump_btf(const struct btf_dumper *d,
+ 		       void *value)
+ {
+ 	__u32 value_id;
+-	int ret;
++	int ret = 0;
+ 
+ 	/* start of key-value pair */
+ 	jsonw_start_object(d->jw);
+diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
+index bbcefb3ff5a57..4538ed762a209 100644
+--- a/tools/lib/bpf/bpf_core_read.h
++++ b/tools/lib/bpf/bpf_core_read.h
+@@ -88,11 +88,19 @@ enum bpf_enum_value_kind {
+ 	const void *p = (const void *)s + __CORE_RELO(s, field, BYTE_OFFSET); \
+ 	unsigned long long val;						      \
+ 									      \
++	/* This is a so-called barrier_var() operation that makes specified   \
++	 * variable "a black box" for optimizing compiler.		      \
++	 * It forces compiler to perform BYTE_OFFSET relocation on p and use  \
++	 * its calculated value in the switch below, instead of applying      \
++	 * the same relocation 4 times for each individual memory load.       \
++	 */								      \
++	asm volatile("" : "=r"(p) : "0"(p));				      \
++									      \
+ 	switch (__CORE_RELO(s, field, BYTE_SIZE)) {			      \
+-	case 1: val = *(const unsigned char *)p;			      \
+-	case 2: val = *(const unsigned short *)p;			      \
+-	case 4: val = *(const unsigned int *)p;				      \
+-	case 8: val = *(const unsigned long long *)p;			      \
++	case 1: val = *(const unsigned char *)p; break;			      \
++	case 2: val = *(const unsigned short *)p; break;		      \
++	case 4: val = *(const unsigned int *)p; break;			      \
++	case 8: val = *(const unsigned long long *)p; break;		      \
+ 	}								      \
+ 	val <<= __CORE_RELO(s, field, LSHIFT_U64);			      \
+ 	if (__CORE_RELO(s, field, SIGNED))				      \
+diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
+index f9ef37707888f..1c2e91ee041d8 100644
+--- a/tools/lib/bpf/bpf_tracing.h
++++ b/tools/lib/bpf/bpf_tracing.h
+@@ -413,20 +413,38 @@ typeof(name(0)) name(struct pt_regs *ctx)				    \
+ }									    \
+ static __always_inline typeof(name(0)) ____##name(struct pt_regs *ctx, ##args)
+ 
++#define ___bpf_fill0(arr, p, x) do {} while (0)
++#define ___bpf_fill1(arr, p, x) arr[p] = x
++#define ___bpf_fill2(arr, p, x, args...) arr[p] = x; ___bpf_fill1(arr, p + 1, args)
++#define ___bpf_fill3(arr, p, x, args...) arr[p] = x; ___bpf_fill2(arr, p + 1, args)
++#define ___bpf_fill4(arr, p, x, args...) arr[p] = x; ___bpf_fill3(arr, p + 1, args)
++#define ___bpf_fill5(arr, p, x, args...) arr[p] = x; ___bpf_fill4(arr, p + 1, args)
++#define ___bpf_fill6(arr, p, x, args...) arr[p] = x; ___bpf_fill5(arr, p + 1, args)
++#define ___bpf_fill7(arr, p, x, args...) arr[p] = x; ___bpf_fill6(arr, p + 1, args)
++#define ___bpf_fill8(arr, p, x, args...) arr[p] = x; ___bpf_fill7(arr, p + 1, args)
++#define ___bpf_fill9(arr, p, x, args...) arr[p] = x; ___bpf_fill8(arr, p + 1, args)
++#define ___bpf_fill10(arr, p, x, args...) arr[p] = x; ___bpf_fill9(arr, p + 1, args)
++#define ___bpf_fill11(arr, p, x, args...) arr[p] = x; ___bpf_fill10(arr, p + 1, args)
++#define ___bpf_fill12(arr, p, x, args...) arr[p] = x; ___bpf_fill11(arr, p + 1, args)
++#define ___bpf_fill(arr, args...) \
++	___bpf_apply(___bpf_fill, ___bpf_narg(args))(arr, 0, args)
++
+ /*
+  * BPF_SEQ_PRINTF to wrap bpf_seq_printf to-be-printed values
+  * in a structure.
+  */
+-#define BPF_SEQ_PRINTF(seq, fmt, args...)				    \
+-	({								    \
+-		_Pragma("GCC diagnostic push")				    \
+-		_Pragma("GCC diagnostic ignored \"-Wint-conversion\"")	    \
+-		static const char ___fmt[] = fmt;			    \
+-		unsigned long long ___param[] = { args };		    \
+-		_Pragma("GCC diagnostic pop")				    \
+-		int ___ret = bpf_seq_printf(seq, ___fmt, sizeof(___fmt),    \
+-					    ___param, sizeof(___param));    \
+-		___ret;							    \
+-	})
++#define BPF_SEQ_PRINTF(seq, fmt, args...)			\
++({								\
++	static const char ___fmt[] = fmt;			\
++	unsigned long long ___param[___bpf_narg(args)];		\
++								\
++	_Pragma("GCC diagnostic push")				\
++	_Pragma("GCC diagnostic ignored \"-Wint-conversion\"")	\
++	___bpf_fill(___param, args);				\
++	_Pragma("GCC diagnostic pop")				\
++								\
++	bpf_seq_printf(seq, ___fmt, sizeof(___fmt),		\
++		       ___param, sizeof(___param));		\
++})
+ 
+ #endif
+diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
+index 57247240a20ad..9cabc8b620e33 100644
+--- a/tools/lib/bpf/btf.h
++++ b/tools/lib/bpf/btf.h
+@@ -164,6 +164,7 @@ struct btf_dump_emit_type_decl_opts {
+ 	int indent_level;
+ 	/* strip all the const/volatile/restrict mods */
+ 	bool strip_mods;
++	size_t :0;
+ };
+ #define btf_dump_emit_type_decl_opts__last_field strip_mods
+ 
+diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
+index 6909ee81113a1..57d10b779dea0 100644
+--- a/tools/lib/bpf/libbpf.h
++++ b/tools/lib/bpf/libbpf.h
+@@ -507,6 +507,7 @@ struct xdp_link_info {
+ struct bpf_xdp_set_link_opts {
+ 	size_t sz;
+ 	int old_fd;
++	size_t :0;
+ };
+ #define bpf_xdp_set_link_opts__last_field old_fd
+ 
+diff --git a/tools/lib/perf/include/perf/event.h b/tools/lib/perf/include/perf/event.h
+index 988c539bedb6e..4a24b855d3ce2 100644
+--- a/tools/lib/perf/include/perf/event.h
++++ b/tools/lib/perf/include/perf/event.h
+@@ -8,6 +8,8 @@
+ #include <linux/bpf.h>
+ #include <sys/types.h> /* pid_t */
+ 
++#define event_contains(obj, mem) ((obj).header.size > offsetof(typeof(obj), mem))
++
+ struct perf_record_mmap {
+ 	struct perf_event_header header;
+ 	__u32			 pid, tid;
+@@ -336,8 +338,9 @@ struct perf_record_time_conv {
+ 	__u64			 time_zero;
+ 	__u64			 time_cycles;
+ 	__u64			 time_mask;
+-	bool			 cap_user_time_zero;
+-	bool			 cap_user_time_short;
++	__u8			 cap_user_time_zero;
++	__u8			 cap_user_time_short;
++	__u8			 reserved[6];	/* For alignment */
+ };
+ 
+ struct perf_record_header_feature {
+diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
+index 4ea7ec4f496e8..008f1683e5407 100644
+--- a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
++++ b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
+@@ -275,7 +275,7 @@
+   {
+     "EventName": "l2_pf_hit_l2",
+     "EventCode": "0x70",
+-    "BriefDescription": "L2 prefetch hit in L2.",
++    "BriefDescription": "L2 prefetch hit in L2. Use l2_cache_hits_from_l2_hwpf instead.",
+     "UMask": "0xff"
+   },
+   {
+diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/recommended.json b/tools/perf/pmu-events/arch/x86/amdzen1/recommended.json
+index 2cfe2d2f3bfdd..3c954543d1ae6 100644
+--- a/tools/perf/pmu-events/arch/x86/amdzen1/recommended.json
++++ b/tools/perf/pmu-events/arch/x86/amdzen1/recommended.json
+@@ -79,10 +79,10 @@
+     "UMask": "0x70"
+   },
+   {
+-    "MetricName": "l2_cache_hits_from_l2_hwpf",
++    "EventName": "l2_cache_hits_from_l2_hwpf",
++    "EventCode": "0x70",
+     "BriefDescription": "L2 Cache Hits from L2 HWPF",
+-    "MetricExpr": "l2_pf_hit_l2 + l2_pf_miss_l2_hit_l3 + l2_pf_miss_l2_l3",
+-    "MetricGroup": "l2_cache"
++    "UMask": "0xff"
+   },
+   {
+     "EventName": "l3_accesses",
+diff --git a/tools/perf/pmu-events/arch/x86/amdzen2/cache.json b/tools/perf/pmu-events/arch/x86/amdzen2/cache.json
+index f61b982f83ca3..8ba84a48188dd 100644
+--- a/tools/perf/pmu-events/arch/x86/amdzen2/cache.json
++++ b/tools/perf/pmu-events/arch/x86/amdzen2/cache.json
+@@ -205,7 +205,7 @@
+   {
+     "EventName": "l2_pf_hit_l2",
+     "EventCode": "0x70",
+-    "BriefDescription": "L2 prefetch hit in L2.",
++    "BriefDescription": "L2 prefetch hit in L2. Use l2_cache_hits_from_l2_hwpf instead.",
+     "UMask": "0xff"
+   },
+   {
+diff --git a/tools/perf/pmu-events/arch/x86/amdzen2/recommended.json b/tools/perf/pmu-events/arch/x86/amdzen2/recommended.json
+index 2ef91e25e6613..1c624cee9ef48 100644
+--- a/tools/perf/pmu-events/arch/x86/amdzen2/recommended.json
++++ b/tools/perf/pmu-events/arch/x86/amdzen2/recommended.json
+@@ -79,10 +79,10 @@
+     "UMask": "0x70"
+   },
+   {
+-    "MetricName": "l2_cache_hits_from_l2_hwpf",
++    "EventName": "l2_cache_hits_from_l2_hwpf",
++    "EventCode": "0x70",
+     "BriefDescription": "L2 Cache Hits from L2 HWPF",
+-    "MetricExpr": "l2_pf_hit_l2 + l2_pf_miss_l2_hit_l3 + l2_pf_miss_l2_l3",
+-    "MetricGroup": "l2_cache"
++    "UMask": "0xff"
+   },
+   {
+     "EventName": "l3_accesses",
+diff --git a/tools/perf/trace/beauty/fsconfig.sh b/tools/perf/trace/beauty/fsconfig.sh
+index 83fb24df05c9f..bc6ef7bb7a5f9 100755
+--- a/tools/perf/trace/beauty/fsconfig.sh
++++ b/tools/perf/trace/beauty/fsconfig.sh
+@@ -10,8 +10,7 @@ fi
+ linux_mount=${linux_header_dir}/mount.h
+ 
+ printf "static const char *fsconfig_cmds[] = {\n"
+-regex='^[[:space:]]*+FSCONFIG_([[:alnum:]_]+)[[:space:]]*=[[:space:]]*([[:digit:]]+)[[:space:]]*,[[:space:]]*.*'
+-egrep $regex ${linux_mount} | \
+-	sed -r "s/$regex/\2 \1/g"	| \
+-	xargs printf "\t[%s] = \"%s\",\n"
++ms='[[:space:]]*'
++sed -nr "s/^${ms}FSCONFIG_([[:alnum:]_]+)${ms}=${ms}([[:digit:]]+)${ms},.*/\t[\2] = \"\1\",/p" \
++	${linux_mount}
+ printf "};\n"
+diff --git a/tools/perf/util/jitdump.c b/tools/perf/util/jitdump.c
+index 055bab7a92b35..64d8f9ba8c034 100644
+--- a/tools/perf/util/jitdump.c
++++ b/tools/perf/util/jitdump.c
+@@ -369,21 +369,31 @@ jit_inject_event(struct jit_buf_desc *jd, union perf_event *event)
+ 
+ static uint64_t convert_timestamp(struct jit_buf_desc *jd, uint64_t timestamp)
+ {
+-	struct perf_tsc_conversion tc;
++	struct perf_tsc_conversion tc = { .time_shift = 0, };
++	struct perf_record_time_conv *time_conv = &jd->session->time_conv;
+ 
+ 	if (!jd->use_arch_timestamp)
+ 		return timestamp;
+ 
+-	tc.time_shift	       = jd->session->time_conv.time_shift;
+-	tc.time_mult	       = jd->session->time_conv.time_mult;
+-	tc.time_zero	       = jd->session->time_conv.time_zero;
+-	tc.time_cycles	       = jd->session->time_conv.time_cycles;
+-	tc.time_mask	       = jd->session->time_conv.time_mask;
+-	tc.cap_user_time_zero  = jd->session->time_conv.cap_user_time_zero;
+-	tc.cap_user_time_short = jd->session->time_conv.cap_user_time_short;
++	tc.time_shift = time_conv->time_shift;
++	tc.time_mult  = time_conv->time_mult;
++	tc.time_zero  = time_conv->time_zero;
+ 
+-	if (!tc.cap_user_time_zero)
+-		return 0;
++	/*
++	 * The event TIME_CONV was extended for the fields from "time_cycles"
++	 * when supported cap_user_time_short, for backward compatibility,
++	 * checks the event size and assigns these extended fields if these
++	 * fields are contained in the event.
++	 */
++	if (event_contains(*time_conv, time_cycles)) {
++		tc.time_cycles	       = time_conv->time_cycles;
++		tc.time_mask	       = time_conv->time_mask;
++		tc.cap_user_time_zero  = time_conv->cap_user_time_zero;
++		tc.cap_user_time_short = time_conv->cap_user_time_short;
++
++		if (!tc.cap_user_time_zero)
++			return 0;
++	}
+ 
+ 	return tsc_to_perf_time(timestamp, &tc);
+ }
+diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
+index 22098fffac4f1..63b619084b34a 100644
+--- a/tools/perf/util/session.c
++++ b/tools/perf/util/session.c
+@@ -945,6 +945,19 @@ static void perf_event__stat_round_swap(union perf_event *event,
+ 	event->stat_round.time = bswap_64(event->stat_round.time);
+ }
+ 
++static void perf_event__time_conv_swap(union perf_event *event,
++				       bool sample_id_all __maybe_unused)
++{
++	event->time_conv.time_shift = bswap_64(event->time_conv.time_shift);
++	event->time_conv.time_mult  = bswap_64(event->time_conv.time_mult);
++	event->time_conv.time_zero  = bswap_64(event->time_conv.time_zero);
++
++	if (event_contains(event->time_conv, time_cycles)) {
++		event->time_conv.time_cycles = bswap_64(event->time_conv.time_cycles);
++		event->time_conv.time_mask = bswap_64(event->time_conv.time_mask);
++	}
++}
++
+ typedef void (*perf_event__swap_op)(union perf_event *event,
+ 				    bool sample_id_all);
+ 
+@@ -981,7 +994,7 @@ static perf_event__swap_op perf_event__swap_ops[] = {
+ 	[PERF_RECORD_STAT]		  = perf_event__stat_swap,
+ 	[PERF_RECORD_STAT_ROUND]	  = perf_event__stat_round_swap,
+ 	[PERF_RECORD_EVENT_UPDATE]	  = perf_event__event_update_swap,
+-	[PERF_RECORD_TIME_CONV]		  = perf_event__all64_swap,
++	[PERF_RECORD_TIME_CONV]		  = perf_event__time_conv_swap,
+ 	[PERF_RECORD_HEADER_MAX]	  = NULL,
+ };
+ 
+diff --git a/tools/perf/util/symbol_fprintf.c b/tools/perf/util/symbol_fprintf.c
+index 35c936ce33efa..2664fb65e47ad 100644
+--- a/tools/perf/util/symbol_fprintf.c
++++ b/tools/perf/util/symbol_fprintf.c
+@@ -68,7 +68,7 @@ size_t dso__fprintf_symbols_by_name(struct dso *dso,
+ 
+ 	for (nd = rb_first_cached(&dso->symbol_names); nd; nd = rb_next(nd)) {
+ 		pos = rb_entry(nd, struct symbol_name_rb_node, rb_node);
+-		fprintf(fp, "%s\n", pos->sym.name);
++		ret += fprintf(fp, "%s\n", pos->sym.name);
+ 	}
+ 
+ 	return ret;
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index ca69bdb0159fd..424ed19a9d542 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -4795,33 +4795,12 @@ double discover_bclk(unsigned int family, unsigned int model)
+  * below this value, including the Digital Thermal Sensor (DTS),
+  * Package Thermal Management Sensor (PTM), and thermal event thresholds.
+  */
+-int read_tcc_activation_temp()
++int set_temperature_target(struct thread_data *t, struct core_data *c, struct pkg_data *p)
+ {
+ 	unsigned long long msr;
+-	unsigned int tcc, target_c, offset_c;
+-
+-	/* Temperature Target MSR is Nehalem and newer only */
+-	if (!do_nhm_platform_info)
+-		return 0;
+-
+-	if (get_msr(base_cpu, MSR_IA32_TEMPERATURE_TARGET, &msr))
+-		return 0;
+-
+-	target_c = (msr >> 16) & 0xFF;
+-
+-	offset_c = (msr >> 24) & 0xF;
+-
+-	tcc = target_c - offset_c;
+-
+-	if (!quiet)
+-		fprintf(outf, "cpu%d: MSR_IA32_TEMPERATURE_TARGET: 0x%08llx (%d C) (%d default - %d offset)\n",
+-			base_cpu, msr, tcc, target_c, offset_c);
+-
+-	return tcc;
+-}
++	unsigned int target_c_local;
++	int cpu;
+ 
+-int set_temperature_target(struct thread_data *t, struct core_data *c, struct pkg_data *p)
+-{
+ 	/* tcc_activation_temp is used only for dts or ptm */
+ 	if (!(do_dts || do_ptm))
+ 		return 0;
+@@ -4830,18 +4809,43 @@ int set_temperature_target(struct thread_data *t, struct core_data *c, struct pk
+ 	if (!(t->flags & CPU_IS_FIRST_THREAD_IN_CORE) || !(t->flags & CPU_IS_FIRST_CORE_IN_PACKAGE))
+ 		return 0;
+ 
++	cpu = t->cpu_id;
++	if (cpu_migrate(cpu)) {
++		fprintf(outf, "Could not migrate to CPU %d\n", cpu);
++		return -1;
++	}
++
+ 	if (tcc_activation_temp_override != 0) {
+ 		tcc_activation_temp = tcc_activation_temp_override;
+-		fprintf(outf, "Using cmdline TCC Target (%d C)\n", tcc_activation_temp);
++		fprintf(outf, "cpu%d: Using cmdline TCC Target (%d C)\n",
++			cpu, tcc_activation_temp);
+ 		return 0;
+ 	}
+ 
+-	tcc_activation_temp = read_tcc_activation_temp();
+-	if (tcc_activation_temp)
+-		return 0;
++	/* Temperature Target MSR is Nehalem and newer only */
++	if (!do_nhm_platform_info)
++		goto guess;
++
++	if (get_msr(base_cpu, MSR_IA32_TEMPERATURE_TARGET, &msr))
++		goto guess;
++
++	target_c_local = (msr >> 16) & 0xFF;
++
++	if (!quiet)
++		fprintf(outf, "cpu%d: MSR_IA32_TEMPERATURE_TARGET: 0x%08llx (%d C)\n",
++			cpu, msr, target_c_local);
++
++	if (!target_c_local)
++		goto guess;
++
++	tcc_activation_temp = target_c_local;
++
++	return 0;
+ 
++guess:
+ 	tcc_activation_temp = TJMAX_DEFAULT;
+-	fprintf(outf, "Guessing tjMax %d C, Please use -T to specify\n", tcc_activation_temp);
++	fprintf(outf, "cpu%d: Guessing tjMax %d C, Please use -T to specify\n",
++		cpu, tcc_activation_temp);
+ 
+ 	return 0;
+ }
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 9359377aeb35c..b5322d60068c4 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -196,7 +196,7 @@ $(BUILD_DIR)/libbpf $(BUILD_DIR)/bpftool $(BUILD_DIR)/resolve_btfids $(INCLUDE_D
+ 	$(call msg,MKDIR,,$@)
+ 	$(Q)mkdir -p $@
+ 
+-$(INCLUDE_DIR)/vmlinux.h: $(VMLINUX_BTF) | $(BPFTOOL) $(INCLUDE_DIR)
++$(INCLUDE_DIR)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL) | $(INCLUDE_DIR)
+ ifeq ($(VMLINUX_H),)
+ 	$(call msg,GEN,,$@)
+ 	$(Q)$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@
+@@ -333,7 +333,8 @@ $(TRUNNER_BPF_OBJS): $(TRUNNER_OUTPUT)/%.o:				\
+ 
+ $(TRUNNER_BPF_SKELS): $(TRUNNER_OUTPUT)/%.skel.h:			\
+ 		      $(TRUNNER_OUTPUT)/%.o				\
+-		      | $(BPFTOOL) $(TRUNNER_OUTPUT)
++		      $(BPFTOOL)					\
++		      | $(TRUNNER_OUTPUT)
+ 	$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
+ 	$(Q)$$(BPFTOOL) gen skeleton $$< > $$@
+ endif
+diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+index 30e40ff4b0d8e..8b641c306f263 100644
+--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
++++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+@@ -185,11 +185,6 @@ static int duration = 0;
+ 	.bpf_obj_file = "test_core_reloc_existence.o",			\
+ 	.btf_src_file = "btf__core_reloc_" #name ".o"			\
+ 
+-#define FIELD_EXISTS_ERR_CASE(name) {					\
+-	FIELD_EXISTS_CASE_COMMON(name),					\
+-	.fails = true,							\
+-}
+-
+ #define BITFIELDS_CASE_COMMON(objfile, test_name_prefix,  name)		\
+ 	.case_name = test_name_prefix#name,				\
+ 	.bpf_obj_file = objfile,					\
+@@ -197,7 +192,7 @@ static int duration = 0;
+ 
+ #define BITFIELDS_CASE(name, ...) {					\
+ 	BITFIELDS_CASE_COMMON("test_core_reloc_bitfields_probed.o",	\
+-			      "direct:", name),				\
++			      "probed:", name),				\
+ 	.input = STRUCT_TO_CHAR_PTR(core_reloc_##name) __VA_ARGS__,	\
+ 	.input_len = sizeof(struct core_reloc_##name),			\
+ 	.output = STRUCT_TO_CHAR_PTR(core_reloc_bitfields_output)	\
+@@ -205,7 +200,7 @@ static int duration = 0;
+ 	.output_len = sizeof(struct core_reloc_bitfields_output),	\
+ }, {									\
+ 	BITFIELDS_CASE_COMMON("test_core_reloc_bitfields_direct.o",	\
+-			      "probed:", name),				\
++			      "direct:", name),				\
+ 	.input = STRUCT_TO_CHAR_PTR(core_reloc_##name) __VA_ARGS__,	\
+ 	.input_len = sizeof(struct core_reloc_##name),			\
+ 	.output = STRUCT_TO_CHAR_PTR(core_reloc_bitfields_output)	\
+@@ -500,8 +495,7 @@ static struct core_reloc_test_case test_cases[] = {
+ 	ARRAYS_ERR_CASE(arrays___err_too_small),
+ 	ARRAYS_ERR_CASE(arrays___err_too_shallow),
+ 	ARRAYS_ERR_CASE(arrays___err_non_array),
+-	ARRAYS_ERR_CASE(arrays___err_wrong_val_type1),
+-	ARRAYS_ERR_CASE(arrays___err_wrong_val_type2),
++	ARRAYS_ERR_CASE(arrays___err_wrong_val_type),
+ 	ARRAYS_ERR_CASE(arrays___err_bad_zero_sz_arr),
+ 
+ 	/* enum/ptr/int handling scenarios */
+@@ -592,13 +586,25 @@ static struct core_reloc_test_case test_cases[] = {
+ 		},
+ 		.output_len = sizeof(struct core_reloc_existence_output),
+ 	},
+-
+-	FIELD_EXISTS_ERR_CASE(existence__err_int_sz),
+-	FIELD_EXISTS_ERR_CASE(existence__err_int_type),
+-	FIELD_EXISTS_ERR_CASE(existence__err_int_kind),
+-	FIELD_EXISTS_ERR_CASE(existence__err_arr_kind),
+-	FIELD_EXISTS_ERR_CASE(existence__err_arr_value_type),
+-	FIELD_EXISTS_ERR_CASE(existence__err_struct_type),
++	{
++		FIELD_EXISTS_CASE_COMMON(existence___wrong_field_defs),
++		.input = STRUCT_TO_CHAR_PTR(core_reloc_existence___wrong_field_defs) {
++		},
++		.input_len = sizeof(struct core_reloc_existence___wrong_field_defs),
++		.output = STRUCT_TO_CHAR_PTR(core_reloc_existence_output) {
++			.a_exists = 0,
++			.b_exists = 0,
++			.c_exists = 0,
++			.arr_exists = 0,
++			.s_exists = 0,
++			.a_value = 0xff000001u,
++			.b_value = 0xff000002u,
++			.c_value = 0xff000003u,
++			.arr_value = 0xff000004u,
++			.s_value = 0xff000005u,
++		},
++		.output_len = sizeof(struct core_reloc_existence_output),
++	},
+ 
+ 	/* bitfield relocation checks */
+ 	BITFIELDS_CASE(bitfields, {
+@@ -804,13 +810,20 @@ void test_core_reloc(void)
+ 			  "prog '%s' not found\n", probe_name))
+ 			goto cleanup;
+ 
++
++		if (test_case->btf_src_file) {
++			err = access(test_case->btf_src_file, R_OK);
++			if (!ASSERT_OK(err, "btf_src_file"))
++				goto cleanup;
++		}
++
+ 		load_attr.obj = obj;
+ 		load_attr.log_level = 0;
+ 		load_attr.target_btf_path = test_case->btf_src_file;
+ 		err = bpf_object__load_xattr(&load_attr);
+ 		if (err) {
+ 			if (!test_case->fails)
+-				CHECK(false, "obj_load", "failed to load prog '%s': %d\n", probe_name, err);
++				ASSERT_OK(err, "obj_load");
+ 			goto cleanup;
+ 		}
+ 
+@@ -844,10 +857,8 @@ void test_core_reloc(void)
+ 			goto cleanup;
+ 		}
+ 
+-		if (test_case->fails) {
+-			CHECK(false, "obj_load_fail", "should fail to load prog '%s'\n", probe_name);
++		if (!ASSERT_FALSE(test_case->fails, "obj_load_should_fail"))
+ 			goto cleanup;
+-		}
+ 
+ 		equal = memcmp(data->out, test_case->output,
+ 			       test_case->output_len) == 0;
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_kind.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_kind.c
+deleted file mode 100644
+index dd0ffa518f366..0000000000000
+--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_kind.c
++++ /dev/null
+@@ -1,3 +0,0 @@
+-#include "core_reloc_types.h"
+-
+-void f(struct core_reloc_existence___err_wrong_arr_kind x) {}
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_value_type.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_value_type.c
+deleted file mode 100644
+index bc83372088ad0..0000000000000
+--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_value_type.c
++++ /dev/null
+@@ -1,3 +0,0 @@
+-#include "core_reloc_types.h"
+-
+-void f(struct core_reloc_existence___err_wrong_arr_value_type x) {}
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_kind.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_kind.c
+deleted file mode 100644
+index 917bec41be081..0000000000000
+--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_kind.c
++++ /dev/null
+@@ -1,3 +0,0 @@
+-#include "core_reloc_types.h"
+-
+-void f(struct core_reloc_existence___err_wrong_int_kind x) {}
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_sz.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_sz.c
+deleted file mode 100644
+index 6ec7e6ec1c915..0000000000000
+--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_sz.c
++++ /dev/null
+@@ -1,3 +0,0 @@
+-#include "core_reloc_types.h"
+-
+-void f(struct core_reloc_existence___err_wrong_int_sz x) {}
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_type.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_type.c
+deleted file mode 100644
+index 7bbcacf2b0d17..0000000000000
+--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_type.c
++++ /dev/null
+@@ -1,3 +0,0 @@
+-#include "core_reloc_types.h"
+-
+-void f(struct core_reloc_existence___err_wrong_int_type x) {}
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_struct_type.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_struct_type.c
+deleted file mode 100644
+index f384dd38ec709..0000000000000
+--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_struct_type.c
++++ /dev/null
+@@ -1,3 +0,0 @@
+-#include "core_reloc_types.h"
+-
+-void f(struct core_reloc_existence___err_wrong_struct_type x) {}
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___wrong_field_defs.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___wrong_field_defs.c
+new file mode 100644
+index 0000000000000..d14b496190c3d
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___wrong_field_defs.c
+@@ -0,0 +1,3 @@
++#include "core_reloc_types.h"
++
++void f(struct core_reloc_existence___wrong_field_defs x) {}
+diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
+index e6e616cb7bc91..af58ef9a28caf 100644
+--- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
++++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
+@@ -683,27 +683,11 @@ struct core_reloc_existence___minimal {
+ 	int a;
+ };
+ 
+-struct core_reloc_existence___err_wrong_int_sz {
+-	short a;
+-};
+-
+-struct core_reloc_existence___err_wrong_int_type {
++struct core_reloc_existence___wrong_field_defs {
++	void *a;
+ 	int b[1];
+-};
+-
+-struct core_reloc_existence___err_wrong_int_kind {
+ 	struct{ int x; } c;
+-};
+-
+-struct core_reloc_existence___err_wrong_arr_kind {
+ 	int arr;
+-};
+-
+-struct core_reloc_existence___err_wrong_arr_value_type {
+-	short arr[1];
+-};
+-
+-struct core_reloc_existence___err_wrong_struct_type {
+ 	int s;
+ };
+ 
+diff --git a/tools/testing/selftests/bpf/verifier/array_access.c b/tools/testing/selftests/bpf/verifier/array_access.c
+index 1b138cd2b187d..1b1c798e92489 100644
+--- a/tools/testing/selftests/bpf/verifier/array_access.c
++++ b/tools/testing/selftests/bpf/verifier/array_access.c
+@@ -186,7 +186,7 @@
+ 	},
+ 	.fixup_map_hash_48b = { 3 },
+ 	.errstr_unpriv = "R0 leaks addr",
+-	.errstr = "invalid access to map value, value_size=48 off=44 size=8",
++	.errstr = "R0 unbounded memory access",
+ 	.result_unpriv = REJECT,
+ 	.result = REJECT,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh
+index cc0f07e72cf22..aa74be9f47c85 100644
+--- a/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh
+@@ -98,11 +98,7 @@ __tc_flower_test()
+ 			jq -r '[ .[] | select(.kind == "flower") |
+ 			.options | .in_hw ]' | jq .[] | wc -l)
+ 	[[ $((offload_count - 1)) -eq $count ]]
+-	if [[ $should_fail -eq 0 ]]; then
+-		check_err $? "Offload mismatch"
+-	else
+-		check_err_fail $should_fail $? "Offload more than expacted"
+-	fi
++	check_err_fail $should_fail $? "Attempt to offload $count rules (actual result $((offload_count - 1)))"
+ }
+ 
+ tc_flower_test()
+diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
+index a5ce26d548e4f..be17462fe1467 100644
+--- a/tools/testing/selftests/lib.mk
++++ b/tools/testing/selftests/lib.mk
+@@ -74,7 +74,8 @@ ifdef building_out_of_srctree
+ 		rsync -aq $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \
+ 	fi
+ 	@if [ "X$(TEST_PROGS)" != "X" ]; then \
+-		$(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) $(OUTPUT)/$(TEST_PROGS)) ; \
++		$(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) \
++				  $(addprefix $(OUTPUT)/,$(TEST_PROGS))) ; \
+ 	else \
+ 		$(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS)); \
+ 	fi
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+index c02291e9841e3..880e3ab9d088d 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+@@ -271,7 +271,7 @@ test_span_gre_fdb_roaming()
+ 
+ 	while ((RET == 0)); do
+ 		bridge fdb del dev $swp3 $h3mac vlan 555 master 2>/dev/null
+-		bridge fdb add dev $swp2 $h3mac vlan 555 master
++		bridge fdb add dev $swp2 $h3mac vlan 555 master static
+ 		sleep 1
+ 		fail_test_span_gre_dir $tundev ingress
+ 
+diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
+index e2c197fd4f9d4..6edfcf1f3bd66 100644
+--- a/virt/kvm/coalesced_mmio.c
++++ b/virt/kvm/coalesced_mmio.c
+@@ -174,21 +174,36 @@ int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
+ 					   struct kvm_coalesced_mmio_zone *zone)
+ {
+ 	struct kvm_coalesced_mmio_dev *dev, *tmp;
++	int r;
+ 
+ 	if (zone->pio != 1 && zone->pio != 0)
+ 		return -EINVAL;
+ 
+ 	mutex_lock(&kvm->slots_lock);
+ 
+-	list_for_each_entry_safe(dev, tmp, &kvm->coalesced_zones, list)
++	list_for_each_entry_safe(dev, tmp, &kvm->coalesced_zones, list) {
+ 		if (zone->pio == dev->zone.pio &&
+ 		    coalesced_mmio_in_range(dev, zone->addr, zone->size)) {
+-			kvm_io_bus_unregister_dev(kvm,
++			r = kvm_io_bus_unregister_dev(kvm,
+ 				zone->pio ? KVM_PIO_BUS : KVM_MMIO_BUS, &dev->dev);
+ 			kvm_iodevice_destructor(&dev->dev);
++
++			/*
++			 * On failure, unregister destroys all devices on the
++			 * bus _except_ the target device, i.e. coalesced_zones
++			 * has been modified.  No need to restart the walk as
++			 * there aren't any zones left.
++			 */
++			if (r)
++				break;
+ 		}
++	}
+ 
+ 	mutex_unlock(&kvm->slots_lock);
+ 
++	/*
++	 * Ignore the result of kvm_io_bus_unregister_dev(), from userspace's
++	 * perspective, the coalesced MMIO is most definitely unregistered.
++	 */
+ 	return 0;
+ }
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index ed4d2e3a00718..78bf3f5492143 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -4342,15 +4342,15 @@ int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
+ }
+ 
+ /* Caller must hold slots_lock. */
+-void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+-			       struct kvm_io_device *dev)
++int kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
++			      struct kvm_io_device *dev)
+ {
+ 	int i, j;
+ 	struct kvm_io_bus *new_bus, *bus;
+ 
+ 	bus = kvm_get_bus(kvm, bus_idx);
+ 	if (!bus)
+-		return;
++		return 0;
+ 
+ 	for (i = 0; i < bus->dev_count; i++)
+ 		if (bus->range[i].dev == dev) {
+@@ -4358,7 +4358,7 @@ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ 		}
+ 
+ 	if (i == bus->dev_count)
+-		return;
++		return 0;
+ 
+ 	new_bus = kmalloc(struct_size(bus, range, bus->dev_count - 1),
+ 			  GFP_KERNEL_ACCOUNT);
+@@ -4367,7 +4367,13 @@ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ 		new_bus->dev_count--;
+ 		memcpy(new_bus->range + i, bus->range + i + 1,
+ 				flex_array_size(new_bus, range, new_bus->dev_count - i));
+-	} else {
++	}
++
++	rcu_assign_pointer(kvm->buses[bus_idx], new_bus);
++	synchronize_srcu_expedited(&kvm->srcu);
++
++	/* Destroy the old bus _after_ installing the (null) bus. */
++	if (!new_bus) {
+ 		pr_err("kvm: failed to shrink bus, removing it completely\n");
+ 		for (j = 0; j < bus->dev_count; j++) {
+ 			if (j == i)
+@@ -4376,10 +4382,8 @@ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ 		}
+ 	}
+ 
+-	rcu_assign_pointer(kvm->buses[bus_idx], new_bus);
+-	synchronize_srcu_expedited(&kvm->srcu);
+ 	kfree(bus);
+-	return;
++	return new_bus ? 0 : -ENOMEM;
+ }
+ 
+ struct kvm_io_device *kvm_io_bus_get_dev(struct kvm *kvm, enum kvm_bus bus_idx,


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-05-19 12:24 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-05-19 12:24 UTC (permalink / raw
  To: gentoo-commits

commit:     d128b2db2aa0047a2a383406079438b0129706e0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 19 12:24:05 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 19 12:24:05 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d128b2db

Linux patch 5.10.38

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1037_linux-5.10.38.patch | 11404 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11408 insertions(+)

diff --git a/0000_README b/0000_README
index fc87a37..f072cf1 100644
--- a/0000_README
+++ b/0000_README
@@ -191,6 +191,10 @@ Patch:  1036_linux-5.10.37.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.37
 
+Patch:  1037_linux-5.10.38.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.38
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1037_linux-5.10.38.patch b/1037_linux-5.10.38.patch
new file mode 100644
index 0000000..6c6c9ca
--- /dev/null
+++ b/1037_linux-5.10.38.patch
@@ -0,0 +1,11404 @@
+diff --git a/.gitignore b/.gitignore
+index d01cda8e11779..67d2f35031283 100644
+--- a/.gitignore
++++ b/.gitignore
+@@ -55,6 +55,7 @@ modules.order
+ /tags
+ /TAGS
+ /linux
++/modules-only.symvers
+ /vmlinux
+ /vmlinux.32
+ /vmlinux.symvers
+diff --git a/Documentation/arm/memory.rst b/Documentation/arm/memory.rst
+index 0521b4ce5c961..34bb23c44a710 100644
+--- a/Documentation/arm/memory.rst
++++ b/Documentation/arm/memory.rst
+@@ -45,9 +45,14 @@ fffe8000	fffeffff	DTCM mapping area for platforms with
+ fffe0000	fffe7fff	ITCM mapping area for platforms with
+ 				ITCM mounted inside the CPU.
+ 
+-ffc00000	ffefffff	Fixmap mapping region.  Addresses provided
++ffc80000	ffefffff	Fixmap mapping region.  Addresses provided
+ 				by fix_to_virt() will be located here.
+ 
++ffc00000	ffc7ffff	Guard region
++
++ff800000	ffbfffff	Permanent, fixed read-only mapping of the
++				firmware provided DT blob
++
+ fee00000	feffffff	Mapping of PCI I/O space. This is a static
+ 				mapping within the vmalloc space.
+ 
+diff --git a/Documentation/devicetree/bindings/media/renesas,vin.yaml b/Documentation/devicetree/bindings/media/renesas,vin.yaml
+index ad2fe660364bd..c69cf8d0cb15b 100644
+--- a/Documentation/devicetree/bindings/media/renesas,vin.yaml
++++ b/Documentation/devicetree/bindings/media/renesas,vin.yaml
+@@ -278,23 +278,35 @@ required:
+   - interrupts
+   - clocks
+   - power-domains
+-  - resets
+-
+-if:
+-  properties:
+-    compatible:
+-      contains:
+-        enum:
+-          - renesas,vin-r8a7778
+-          - renesas,vin-r8a7779
+-          - renesas,rcar-gen2-vin
+-then:
+-  required:
+-    - port
+-else:
+-  required:
+-    - renesas,id
+-    - ports
++
++allOf:
++  - if:
++      not:
++        properties:
++          compatible:
++            contains:
++              enum:
++                - renesas,vin-r8a7778
++                - renesas,vin-r8a7779
++    then:
++      required:
++        - resets
++
++  - if:
++      properties:
++        compatible:
++          contains:
++            enum:
++              - renesas,vin-r8a7778
++              - renesas,vin-r8a7779
++              - renesas,rcar-gen2-vin
++    then:
++      required:
++        - port
++    else:
++      required:
++        - renesas,id
++        - ports
+ 
+ additionalProperties: false
+ 
+diff --git a/Documentation/devicetree/bindings/serial/8250.yaml b/Documentation/devicetree/bindings/serial/8250.yaml
+index c1d4c196f005b..460cb546c54a9 100644
+--- a/Documentation/devicetree/bindings/serial/8250.yaml
++++ b/Documentation/devicetree/bindings/serial/8250.yaml
+@@ -93,11 +93,6 @@ properties:
+               - mediatek,mt7622-btif
+               - mediatek,mt7623-btif
+           - const: mediatek,mtk-btif
+-      - items:
+-          - enum:
+-              - mediatek,mt7622-btif
+-              - mediatek,mt7623-btif
+-          - const: mediatek,mtk-btif
+       - items:
+           - const: mrvl,mmp-uart
+           - const: intel,xscale-uart
+diff --git a/Documentation/dontdiff b/Documentation/dontdiff
+index e361fc95ca293..82e3eee7363b0 100644
+--- a/Documentation/dontdiff
++++ b/Documentation/dontdiff
+@@ -178,6 +178,7 @@ mktables
+ mktree
+ mkutf8data
+ modpost
++modules-only.symvers
+ modules.builtin
+ modules.builtin.modinfo
+ modules.nsdeps
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 24cdfcf334ea1..4fef10dd29753 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -6694,6 +6694,7 @@ F:	Documentation/filesystems/f2fs.rst
+ F:	fs/f2fs/
+ F:	include/linux/f2fs_fs.h
+ F:	include/trace/events/f2fs.h
++F:	include/uapi/linux/f2fs.h
+ 
+ F71805F HARDWARE MONITORING DRIVER
+ M:	Jean Delvare <jdelvare@suse.com>
+diff --git a/Makefile b/Makefile
+index 39f14ad009aef..6e4e536a0d20f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 37
++SUBLEVEL = 38
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -1483,7 +1483,7 @@ endif # CONFIG_MODULES
+ # make distclean Remove editor backup files, patch leftover files and the like
+ 
+ # Directories & files removed with 'make clean'
+-CLEAN_FILES += include/ksym vmlinux.symvers \
++CLEAN_FILES += include/ksym vmlinux.symvers modules-only.symvers \
+ 	       modules.builtin modules.builtin.modinfo modules.nsdeps \
+ 	       compile_commands.json
+ 
+diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
+index d9c264dc25fcb..9926cd5a17b00 100644
+--- a/arch/arc/include/asm/page.h
++++ b/arch/arc/include/asm/page.h
+@@ -7,6 +7,18 @@
+ 
+ #include <uapi/asm/page.h>
+ 
++#ifdef CONFIG_ARC_HAS_PAE40
++
++#define MAX_POSSIBLE_PHYSMEM_BITS	40
++#define PAGE_MASK_PHYS			(0xff00000000ull | PAGE_MASK)
++
++#else /* CONFIG_ARC_HAS_PAE40 */
++
++#define MAX_POSSIBLE_PHYSMEM_BITS	32
++#define PAGE_MASK_PHYS			PAGE_MASK
++
++#endif /* CONFIG_ARC_HAS_PAE40 */
++
+ #ifndef __ASSEMBLY__
+ 
+ #define clear_page(paddr)		memset((paddr), 0, PAGE_SIZE)
+diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
+index 163641726a2b9..5878846f00cfe 100644
+--- a/arch/arc/include/asm/pgtable.h
++++ b/arch/arc/include/asm/pgtable.h
+@@ -107,8 +107,8 @@
+ #define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE)
+ 
+ /* Set of bits not changed in pte_modify */
+-#define _PAGE_CHG_MASK	(PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_SPECIAL)
+-
++#define _PAGE_CHG_MASK	(PAGE_MASK_PHYS | _PAGE_ACCESSED | _PAGE_DIRTY | \
++							   _PAGE_SPECIAL)
+ /* More Abbrevaited helpers */
+ #define PAGE_U_NONE     __pgprot(___DEF)
+ #define PAGE_U_R        __pgprot(___DEF | _PAGE_READ)
+@@ -132,13 +132,7 @@
+ #define PTE_BITS_IN_PD0		(_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_HW_SZ)
+ #define PTE_BITS_RWX		(_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ)
+ 
+-#ifdef CONFIG_ARC_HAS_PAE40
+-#define PTE_BITS_NON_RWX_IN_PD1	(0xff00000000 | PAGE_MASK | _PAGE_CACHEABLE)
+-#define MAX_POSSIBLE_PHYSMEM_BITS 40
+-#else
+-#define PTE_BITS_NON_RWX_IN_PD1	(PAGE_MASK | _PAGE_CACHEABLE)
+-#define MAX_POSSIBLE_PHYSMEM_BITS 32
+-#endif
++#define PTE_BITS_NON_RWX_IN_PD1	(PAGE_MASK_PHYS | _PAGE_CACHEABLE)
+ 
+ /**************************************************************************
+  * Mapping of vm_flags (Generic VM) to PTE flags (arch specific)
+diff --git a/arch/arc/include/uapi/asm/page.h b/arch/arc/include/uapi/asm/page.h
+index 2a97e2718a219..2a4ad619abfba 100644
+--- a/arch/arc/include/uapi/asm/page.h
++++ b/arch/arc/include/uapi/asm/page.h
+@@ -33,5 +33,4 @@
+ 
+ #define PAGE_MASK	(~(PAGE_SIZE-1))
+ 
+-
+ #endif /* _UAPI__ASM_ARC_PAGE_H */
+diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S
+index ea00c8a17f079..ae656bfc31c3d 100644
+--- a/arch/arc/kernel/entry.S
++++ b/arch/arc/kernel/entry.S
+@@ -177,7 +177,7 @@ tracesys:
+ 
+ 	; Do the Sys Call as we normally would.
+ 	; Validate the Sys Call number
+-	cmp     r8,  NR_syscalls
++	cmp     r8,  NR_syscalls - 1
+ 	mov.hi  r0, -ENOSYS
+ 	bhi     tracesys_exit
+ 
+@@ -255,7 +255,7 @@ ENTRY(EV_Trap)
+ 	;============ Normal syscall case
+ 
+ 	; syscall num shd not exceed the total system calls avail
+-	cmp     r8,  NR_syscalls
++	cmp     r8,  NR_syscalls - 1
+ 	mov.hi  r0, -ENOSYS
+ 	bhi     .Lret_from_system_call
+ 
+diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c
+index 3a35b82a718e3..da543fd422fed 100644
+--- a/arch/arc/mm/init.c
++++ b/arch/arc/mm/init.c
+@@ -158,7 +158,16 @@ void __init setup_arch_memory(void)
+ 	min_high_pfn = PFN_DOWN(high_mem_start);
+ 	max_high_pfn = PFN_DOWN(high_mem_start + high_mem_sz);
+ 
+-	max_zone_pfn[ZONE_HIGHMEM] = min_low_pfn;
++	/*
++	 * max_high_pfn should be ok here for both HIGHMEM and HIGHMEM+PAE.
++	 * For HIGHMEM without PAE max_high_pfn should be less than
++	 * min_low_pfn to guarantee that these two regions don't overlap.
++	 * For PAE case highmem is greater than lowmem, so it is natural
++	 * to use max_high_pfn.
++	 *
++	 * In both cases, holes should be handled by pfn_valid().
++	 */
++	max_zone_pfn[ZONE_HIGHMEM] = max_high_pfn;
+ 
+ 	high_memory = (void *)(min_high_pfn << PAGE_SHIFT);
+ 	kmap_init();
+diff --git a/arch/arc/mm/ioremap.c b/arch/arc/mm/ioremap.c
+index fac4adc902044..95c649fbc95af 100644
+--- a/arch/arc/mm/ioremap.c
++++ b/arch/arc/mm/ioremap.c
+@@ -53,9 +53,10 @@ EXPORT_SYMBOL(ioremap);
+ void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
+ 			   unsigned long flags)
+ {
++	unsigned int off;
+ 	unsigned long vaddr;
+ 	struct vm_struct *area;
+-	phys_addr_t off, end;
++	phys_addr_t end;
+ 	pgprot_t prot = __pgprot(flags);
+ 
+ 	/* Don't allow wraparound, zero size */
+@@ -72,7 +73,7 @@ void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
+ 
+ 	/* Mappings have to be page-aligned */
+ 	off = paddr & ~PAGE_MASK;
+-	paddr &= PAGE_MASK;
++	paddr &= PAGE_MASK_PHYS;
+ 	size = PAGE_ALIGN(end + 1) - paddr;
+ 
+ 	/*
+diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
+index 9bb3c24f36770..9c7c682472896 100644
+--- a/arch/arc/mm/tlb.c
++++ b/arch/arc/mm/tlb.c
+@@ -576,7 +576,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
+ 		      pte_t *ptep)
+ {
+ 	unsigned long vaddr = vaddr_unaligned & PAGE_MASK;
+-	phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK;
++	phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK_PHYS;
+ 	struct page *page = pfn_to_page(pte_pfn(*ptep));
+ 
+ 	create_tlb(vma, vaddr, ptep);
+diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
+index 3bf90d9e33353..a294a02f2d232 100644
+--- a/arch/arm/boot/dts/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/dra7-l4.dtsi
+@@ -1168,7 +1168,7 @@
+ 			};
+ 		};
+ 
+-		target-module@34000 {			/* 0x48034000, ap 7 46.0 */
++		timer3_target: target-module@34000 {	/* 0x48034000, ap 7 46.0 */
+ 			compatible = "ti,sysc-omap4-timer", "ti,sysc";
+ 			reg = <0x34000 0x4>,
+ 			      <0x34010 0x4>;
+@@ -1195,7 +1195,7 @@
+ 			};
+ 		};
+ 
+-		target-module@36000 {			/* 0x48036000, ap 9 4e.0 */
++		timer4_target: target-module@36000 {	/* 0x48036000, ap 9 4e.0 */
+ 			compatible = "ti,sysc-omap4-timer", "ti,sysc";
+ 			reg = <0x36000 0x4>,
+ 			      <0x36010 0x4>;
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index 4e1bbc0198eb7..7ecf8f86ac747 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -46,6 +46,7 @@
+ 
+ 	timer {
+ 		compatible = "arm,armv7-timer";
++		status = "disabled";	/* See ARM architected timer wrap erratum i940 */
+ 		interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ 			     <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ 			     <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+@@ -1090,3 +1091,22 @@
+ 		assigned-clock-parents = <&sys_32k_ck>;
+ 	};
+ };
++
++/* Local timers, see ARM architected timer wrap erratum i940 */
++&timer3_target {
++	ti,no-reset-on-init;
++	ti,no-idle;
++	timer@0 {
++		assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER3_CLKCTRL 24>;
++		assigned-clock-parents = <&timer_sys_clk_div>;
++	};
++};
++
++&timer4_target {
++	ti,no-reset-on-init;
++	ti,no-idle;
++	timer@0 {
++		assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER4_CLKCTRL 24>;
++		assigned-clock-parents = <&timer_sys_clk_div>;
++	};
++};
+diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
+index fc56fc3e19316..9575b404019c9 100644
+--- a/arch/arm/include/asm/fixmap.h
++++ b/arch/arm/include/asm/fixmap.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_FIXMAP_H
+ #define _ASM_FIXMAP_H
+ 
+-#define FIXADDR_START		0xffc00000UL
++#define FIXADDR_START		0xffc80000UL
+ #define FIXADDR_END		0xfff00000UL
+ #define FIXADDR_TOP		(FIXADDR_END - PAGE_SIZE)
+ 
+diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
+index 99035b5891ef4..f717d7122d9d1 100644
+--- a/arch/arm/include/asm/memory.h
++++ b/arch/arm/include/asm/memory.h
+@@ -67,6 +67,10 @@
+  */
+ #define XIP_VIRT_ADDR(physaddr)  (MODULES_VADDR + ((physaddr) & 0x000fffff))
+ 
++#define FDT_FIXED_BASE		UL(0xff800000)
++#define FDT_FIXED_SIZE		(2 * SECTION_SIZE)
++#define FDT_VIRT_BASE(physbase)	((void *)(FDT_FIXED_BASE | (physbase) % SECTION_SIZE))
++
+ #if !defined(CONFIG_SMP) && !defined(CONFIG_ARM_LPAE)
+ /*
+  * Allow 16MB-aligned ioremap pages
+@@ -107,6 +111,7 @@ extern unsigned long vectors_base;
+ #define MODULES_VADDR		PAGE_OFFSET
+ 
+ #define XIP_VIRT_ADDR(physaddr)  (physaddr)
++#define FDT_VIRT_BASE(physbase)  ((void *)(physbase))
+ 
+ #endif /* !CONFIG_MMU */
+ 
+diff --git a/arch/arm/include/asm/prom.h b/arch/arm/include/asm/prom.h
+index 1e36c40533c16..402e3f34c7ed8 100644
+--- a/arch/arm/include/asm/prom.h
++++ b/arch/arm/include/asm/prom.h
+@@ -9,12 +9,12 @@
+ 
+ #ifdef CONFIG_OF
+ 
+-extern const struct machine_desc *setup_machine_fdt(unsigned int dt_phys);
++extern const struct machine_desc *setup_machine_fdt(void *dt_virt);
+ extern void __init arm_dt_init_cpu_maps(void);
+ 
+ #else /* CONFIG_OF */
+ 
+-static inline const struct machine_desc *setup_machine_fdt(unsigned int dt_phys)
++static inline const struct machine_desc *setup_machine_fdt(void *dt_virt)
+ {
+ 	return NULL;
+ }
+diff --git a/arch/arm/kernel/atags.h b/arch/arm/kernel/atags.h
+index 067e12edc3419..f2819c25b6029 100644
+--- a/arch/arm/kernel/atags.h
++++ b/arch/arm/kernel/atags.h
+@@ -2,11 +2,11 @@
+ void convert_to_tag_list(struct tag *tags);
+ 
+ #ifdef CONFIG_ATAGS
+-const struct machine_desc *setup_machine_tags(phys_addr_t __atags_pointer,
++const struct machine_desc *setup_machine_tags(void *__atags_vaddr,
+ 	unsigned int machine_nr);
+ #else
+ static inline const struct machine_desc * __init __noreturn
+-setup_machine_tags(phys_addr_t __atags_pointer, unsigned int machine_nr)
++setup_machine_tags(void *__atags_vaddr, unsigned int machine_nr)
+ {
+ 	early_print("no ATAGS support: can't continue\n");
+ 	while (true);
+diff --git a/arch/arm/kernel/atags_parse.c b/arch/arm/kernel/atags_parse.c
+index 6c12d9fe694e3..373b61f9a4f01 100644
+--- a/arch/arm/kernel/atags_parse.c
++++ b/arch/arm/kernel/atags_parse.c
+@@ -174,7 +174,7 @@ static void __init squash_mem_tags(struct tag *tag)
+ }
+ 
+ const struct machine_desc * __init
+-setup_machine_tags(phys_addr_t __atags_pointer, unsigned int machine_nr)
++setup_machine_tags(void *atags_vaddr, unsigned int machine_nr)
+ {
+ 	struct tag *tags = (struct tag *)&default_tags;
+ 	const struct machine_desc *mdesc = NULL, *p;
+@@ -195,8 +195,8 @@ setup_machine_tags(phys_addr_t __atags_pointer, unsigned int machine_nr)
+ 	if (!mdesc)
+ 		return NULL;
+ 
+-	if (__atags_pointer)
+-		tags = phys_to_virt(__atags_pointer);
++	if (atags_vaddr)
++		tags = atags_vaddr;
+ 	else if (mdesc->atag_offset)
+ 		tags = (void *)(PAGE_OFFSET + mdesc->atag_offset);
+ 
+diff --git a/arch/arm/kernel/devtree.c b/arch/arm/kernel/devtree.c
+index 7f0745a97e20f..28311dd0fee68 100644
+--- a/arch/arm/kernel/devtree.c
++++ b/arch/arm/kernel/devtree.c
+@@ -203,12 +203,12 @@ static const void * __init arch_get_next_mach(const char *const **match)
+ 
+ /**
+  * setup_machine_fdt - Machine setup when an dtb was passed to the kernel
+- * @dt_phys: physical address of dt blob
++ * @dt_virt: virtual address of dt blob
+  *
+  * If a dtb was passed to the kernel in r2, then use it to choose the
+  * correct machine_desc and to setup the system.
+  */
+-const struct machine_desc * __init setup_machine_fdt(unsigned int dt_phys)
++const struct machine_desc * __init setup_machine_fdt(void *dt_virt)
+ {
+ 	const struct machine_desc *mdesc, *mdesc_best = NULL;
+ 
+@@ -221,7 +221,7 @@ const struct machine_desc * __init setup_machine_fdt(unsigned int dt_phys)
+ 	mdesc_best = &__mach_desc_GENERIC_DT;
+ #endif
+ 
+-	if (!dt_phys || !early_init_dt_verify(phys_to_virt(dt_phys)))
++	if (!dt_virt || !early_init_dt_verify(dt_virt))
+ 		return NULL;
+ 
+ 	mdesc = of_flat_dt_match_machine(mdesc_best, arch_get_next_mach);
+diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
+index 98c1e68bdfcbb..4af5c76796242 100644
+--- a/arch/arm/kernel/head.S
++++ b/arch/arm/kernel/head.S
+@@ -274,11 +274,10 @@ __create_page_tables:
+ 	 * We map 2 sections in case the ATAGs/DTB crosses a section boundary.
+ 	 */
+ 	mov	r0, r2, lsr #SECTION_SHIFT
+-	movs	r0, r0, lsl #SECTION_SHIFT
+-	subne	r3, r0, r8
+-	addne	r3, r3, #PAGE_OFFSET
+-	addne	r3, r4, r3, lsr #(SECTION_SHIFT - PMD_ORDER)
+-	orrne	r6, r7, r0
++	cmp	r2, #0
++	ldrne	r3, =FDT_FIXED_BASE >> (SECTION_SHIFT - PMD_ORDER)
++	addne	r3, r3, r4
++	orrne	r6, r7, r0, lsl #SECTION_SHIFT
+ 	strne	r6, [r3], #1 << PMD_ORDER
+ 	addne	r6, r6, #1 << SECTION_SHIFT
+ 	strne	r6, [r3]
+diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c
+index 08660ae9dcbce..b1423fb130ea4 100644
+--- a/arch/arm/kernel/hw_breakpoint.c
++++ b/arch/arm/kernel/hw_breakpoint.c
+@@ -886,7 +886,7 @@ static void breakpoint_handler(unsigned long unknown, struct pt_regs *regs)
+ 			info->trigger = addr;
+ 			pr_debug("breakpoint fired: address = 0x%x\n", addr);
+ 			perf_bp_event(bp, regs);
+-			if (!bp->overflow_handler)
++			if (is_default_overflow_handler(bp))
+ 				enable_single_step(bp, addr);
+ 			goto unlock;
+ 		}
+diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
+index 3f65d0ac9f632..f90479d8b50c8 100644
+--- a/arch/arm/kernel/setup.c
++++ b/arch/arm/kernel/setup.c
+@@ -18,6 +18,7 @@
+ #include <linux/of_platform.h>
+ #include <linux/init.h>
+ #include <linux/kexec.h>
++#include <linux/libfdt.h>
+ #include <linux/of_fdt.h>
+ #include <linux/cpu.h>
+ #include <linux/interrupt.h>
+@@ -1081,19 +1082,27 @@ void __init hyp_mode_check(void)
+ 
+ void __init setup_arch(char **cmdline_p)
+ {
+-	const struct machine_desc *mdesc;
++	const struct machine_desc *mdesc = NULL;
++	void *atags_vaddr = NULL;
++
++	if (__atags_pointer)
++		atags_vaddr = FDT_VIRT_BASE(__atags_pointer);
+ 
+ 	setup_processor();
+-	mdesc = setup_machine_fdt(__atags_pointer);
++	if (atags_vaddr) {
++		mdesc = setup_machine_fdt(atags_vaddr);
++		if (mdesc)
++			memblock_reserve(__atags_pointer,
++					 fdt_totalsize(atags_vaddr));
++	}
+ 	if (!mdesc)
+-		mdesc = setup_machine_tags(__atags_pointer, __machine_arch_type);
++		mdesc = setup_machine_tags(atags_vaddr, __machine_arch_type);
+ 	if (!mdesc) {
+ 		early_print("\nError: invalid dtb and unrecognized/unsupported machine ID\n");
+ 		early_print("  r1=0x%08x, r2=0x%08x\n", __machine_arch_type,
+ 			    __atags_pointer);
+ 		if (__atags_pointer)
+-			early_print("  r2[]=%*ph\n", 16,
+-				    phys_to_virt(__atags_pointer));
++			early_print("  r2[]=%*ph\n", 16, atags_vaddr);
+ 		dump_machine_table();
+ 	}
+ 
+diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
+index c23dbf8bebeeb..d54d69cf17322 100644
+--- a/arch/arm/mm/init.c
++++ b/arch/arm/mm/init.c
+@@ -223,7 +223,6 @@ void __init arm_memblock_init(const struct machine_desc *mdesc)
+ 	if (mdesc->reserve)
+ 		mdesc->reserve();
+ 
+-	early_init_fdt_reserve_self();
+ 	early_init_fdt_scan_reserved_mem();
+ 
+ 	/* reserve memory for DMA contiguous allocations */
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index ab69250a86bc3..fa259825310c5 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -39,6 +39,8 @@
+ #include "mm.h"
+ #include "tcm.h"
+ 
++extern unsigned long __atags_pointer;
++
+ /*
+  * empty_zero_page is a special page that is used for
+  * zero-initialized data and COW.
+@@ -946,7 +948,7 @@ static void __init create_mapping(struct map_desc *md)
+ 		return;
+ 	}
+ 
+-	if ((md->type == MT_DEVICE || md->type == MT_ROM) &&
++	if (md->type == MT_DEVICE &&
+ 	    md->virtual >= PAGE_OFFSET && md->virtual < FIXADDR_START &&
+ 	    (md->virtual < VMALLOC_START || md->virtual >= VMALLOC_END)) {
+ 		pr_warn("BUG: mapping for 0x%08llx at 0x%08lx out of vmalloc space\n",
+@@ -1333,6 +1335,15 @@ static void __init devicemaps_init(const struct machine_desc *mdesc)
+ 	for (addr = VMALLOC_START; addr < (FIXADDR_TOP & PMD_MASK); addr += PMD_SIZE)
+ 		pmd_clear(pmd_off_k(addr));
+ 
++	if (__atags_pointer) {
++		/* create a read-only mapping of the device tree */
++		map.pfn = __phys_to_pfn(__atags_pointer & SECTION_MASK);
++		map.virtual = FDT_FIXED_BASE;
++		map.length = FDT_FIXED_SIZE;
++		map.type = MT_ROM;
++		create_mapping(&map);
++	}
++
+ 	/*
+ 	 * Map the kernel if it is XIP.
+ 	 * It is always first in the modulearea.
+@@ -1489,8 +1500,7 @@ static void __init map_lowmem(void)
+ }
+ 
+ #ifdef CONFIG_ARM_PV_FIXUP
+-extern unsigned long __atags_pointer;
+-typedef void pgtables_remap(long long offset, unsigned long pgd, void *bdata);
++typedef void pgtables_remap(long long offset, unsigned long pgd);
+ pgtables_remap lpae_pgtables_remap_asm;
+ 
+ /*
+@@ -1503,7 +1513,6 @@ static void __init early_paging_init(const struct machine_desc *mdesc)
+ 	unsigned long pa_pgd;
+ 	unsigned int cr, ttbcr;
+ 	long long offset;
+-	void *boot_data;
+ 
+ 	if (!mdesc->pv_fixup)
+ 		return;
+@@ -1520,7 +1529,6 @@ static void __init early_paging_init(const struct machine_desc *mdesc)
+ 	 */
+ 	lpae_pgtables_remap = (pgtables_remap *)(unsigned long)__pa(lpae_pgtables_remap_asm);
+ 	pa_pgd = __pa(swapper_pg_dir);
+-	boot_data = __va(__atags_pointer);
+ 	barrier();
+ 
+ 	pr_info("Switching physical address space to 0x%08llx\n",
+@@ -1556,7 +1564,7 @@ static void __init early_paging_init(const struct machine_desc *mdesc)
+ 	 * needs to be assembly.  It's fairly simple, as we're using the
+ 	 * temporary tables setup by the initial assembly code.
+ 	 */
+-	lpae_pgtables_remap(offset, pa_pgd, boot_data);
++	lpae_pgtables_remap(offset, pa_pgd);
+ 
+ 	/* Re-enable the caches and cacheable TLB walks */
+ 	asm volatile("mcr p15, 0, %0, c2, c0, 2" : : "r" (ttbcr));
+diff --git a/arch/arm/mm/pv-fixup-asm.S b/arch/arm/mm/pv-fixup-asm.S
+index 8eade04167399..5c5e1952000ab 100644
+--- a/arch/arm/mm/pv-fixup-asm.S
++++ b/arch/arm/mm/pv-fixup-asm.S
+@@ -39,8 +39,8 @@ ENTRY(lpae_pgtables_remap_asm)
+ 
+ 	/* Update level 2 entries for the boot data */
+ 	add	r7, r2, #0x1000
+-	add	r7, r7, r3, lsr #SECTION_SHIFT - L2_ORDER
+-	bic	r7, r7, #(1 << L2_ORDER) - 1
++	movw	r3, #FDT_FIXED_BASE >> (SECTION_SHIFT - L2_ORDER)
++	add	r7, r7, r3
+ 	ldrd	r4, r5, [r7]
+ 	adds	r4, r4, r0
+ 	adc	r5, r5, r1
+diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
+index 1c26d7baa67f8..cfdde3a568059 100644
+--- a/arch/arm64/include/asm/daifflags.h
++++ b/arch/arm64/include/asm/daifflags.h
+@@ -131,6 +131,9 @@ static inline void local_daif_inherit(struct pt_regs *regs)
+ 	if (interrupts_enabled(regs))
+ 		trace_hardirqs_on();
+ 
++	if (system_uses_irq_prio_masking())
++		gic_write_pmr(regs->pmr_save);
++
+ 	/*
+ 	 * We can't use local_daif_restore(regs->pstate) here as
+ 	 * system_has_prio_mask_debugging() won't restore the I bit if it can
+diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
+index 70e0a7591245d..ec120ed18faf4 100644
+--- a/arch/arm64/kernel/entry-common.c
++++ b/arch/arm64/kernel/entry-common.c
+@@ -178,14 +178,6 @@ static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
+ {
+ 	unsigned long far = read_sysreg(far_el1);
+ 
+-	/*
+-	 * The CPU masked interrupts, and we are leaving them masked during
+-	 * do_debug_exception(). Update PMR as if we had called
+-	 * local_daif_mask().
+-	 */
+-	if (system_uses_irq_prio_masking())
+-		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+-
+ 	arm64_enter_el1_dbg(regs);
+ 	do_debug_exception(far, esr, regs);
+ 	arm64_exit_el1_dbg(regs);
+@@ -350,9 +342,6 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
+ 	/* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */
+ 	unsigned long far = read_sysreg(far_el1);
+ 
+-	if (system_uses_irq_prio_masking())
+-		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+-
+ 	enter_from_user_mode();
+ 	do_debug_exception(far, esr, regs);
+ 	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+@@ -360,9 +349,6 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
+ 
+ static void noinstr el0_svc(struct pt_regs *regs)
+ {
+-	if (system_uses_irq_prio_masking())
+-		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+-
+ 	enter_from_user_mode();
+ 	do_el0_svc(regs);
+ }
+@@ -437,9 +423,6 @@ static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
+ 
+ static void noinstr el0_svc_compat(struct pt_regs *regs)
+ {
+-	if (system_uses_irq_prio_masking())
+-		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+-
+ 	enter_from_user_mode();
+ 	do_el0_svc_compat(regs);
+ }
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 2da82c139e1cd..60d3991233600 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -259,6 +259,8 @@ alternative_else_nop_endif
+ alternative_if ARM64_HAS_IRQ_PRIO_MASKING
+ 	mrs_s	x20, SYS_ICC_PMR_EL1
+ 	str	x20, [sp, #S_PMR_SAVE]
++	mov	x20, #GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET
++	msr_s	SYS_ICC_PMR_EL1, x20
+ alternative_else_nop_endif
+ 
+ 	/* Re-enable tag checking (TCO set on exception entry) */
+@@ -464,8 +466,8 @@ tsk	.req	x28		// current thread_info
+ /*
+  * Interrupt handling.
+  */
+-	.macro	irq_handler
+-	ldr_l	x1, handle_arch_irq
++	.macro	irq_handler, handler:req
++	ldr_l	x1, \handler
+ 	mov	x0, sp
+ 	irq_stack_entry
+ 	blr	x1
+@@ -495,13 +497,41 @@ alternative_endif
+ #endif
+ 	.endm
+ 
+-	.macro	gic_prio_irq_setup, pmr:req, tmp:req
+-#ifdef CONFIG_ARM64_PSEUDO_NMI
+-	alternative_if ARM64_HAS_IRQ_PRIO_MASKING
+-	orr	\tmp, \pmr, #GIC_PRIO_PSR_I_SET
+-	msr_s	SYS_ICC_PMR_EL1, \tmp
+-	alternative_else_nop_endif
++	.macro el1_interrupt_handler, handler:req
++	enable_da_f
++
++	mov	x0, sp
++	bl	enter_el1_irq_or_nmi
++
++	irq_handler	\handler
++
++#ifdef CONFIG_PREEMPTION
++	ldr	x24, [tsk, #TSK_TI_PREEMPT]	// get preempt count
++alternative_if ARM64_HAS_IRQ_PRIO_MASKING
++	/*
++	 * DA_F were cleared at start of handling. If anything is set in DAIF,
++	 * we come back from an NMI, so skip preemption
++	 */
++	mrs	x0, daif
++	orr	x24, x24, x0
++alternative_else_nop_endif
++	cbnz	x24, 1f				// preempt count != 0 || NMI return path
++	bl	arm64_preempt_schedule_irq	// irq en/disable is done inside
++1:
+ #endif
++
++	mov	x0, sp
++	bl	exit_el1_irq_or_nmi
++	.endm
++
++	.macro el0_interrupt_handler, handler:req
++	user_exit_irqoff
++	enable_da_f
++
++	tbz	x22, #55, 1f
++	bl	do_el0_irq_bp_hardening
++1:
++	irq_handler	\handler
+ 	.endm
+ 
+ 	.text
+@@ -633,32 +663,7 @@ SYM_CODE_END(el1_sync)
+ 	.align	6
+ SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
+ 	kernel_entry 1
+-	gic_prio_irq_setup pmr=x20, tmp=x1
+-	enable_da_f
+-
+-	mov	x0, sp
+-	bl	enter_el1_irq_or_nmi
+-
+-	irq_handler
+-
+-#ifdef CONFIG_PREEMPTION
+-	ldr	x24, [tsk, #TSK_TI_PREEMPT]	// get preempt count
+-alternative_if ARM64_HAS_IRQ_PRIO_MASKING
+-	/*
+-	 * DA_F were cleared at start of handling. If anything is set in DAIF,
+-	 * we come back from an NMI, so skip preemption
+-	 */
+-	mrs	x0, daif
+-	orr	x24, x24, x0
+-alternative_else_nop_endif
+-	cbnz	x24, 1f				// preempt count != 0 || NMI return path
+-	bl	arm64_preempt_schedule_irq	// irq en/disable is done inside
+-1:
+-#endif
+-
+-	mov	x0, sp
+-	bl	exit_el1_irq_or_nmi
+-
++	el1_interrupt_handler handle_arch_irq
+ 	kernel_exit 1
+ SYM_CODE_END(el1_irq)
+ 
+@@ -698,22 +703,13 @@ SYM_CODE_END(el0_error_compat)
+ SYM_CODE_START_LOCAL_NOALIGN(el0_irq)
+ 	kernel_entry 0
+ el0_irq_naked:
+-	gic_prio_irq_setup pmr=x20, tmp=x0
+-	user_exit_irqoff
+-	enable_da_f
+-
+-	tbz	x22, #55, 1f
+-	bl	do_el0_irq_bp_hardening
+-1:
+-	irq_handler
+-
++	el0_interrupt_handler handle_arch_irq
+ 	b	ret_to_user
+ SYM_CODE_END(el0_irq)
+ 
+ SYM_CODE_START_LOCAL(el1_error)
+ 	kernel_entry 1
+ 	mrs	x1, esr_el1
+-	gic_prio_kentry_setup tmp=x2
+ 	enable_dbg
+ 	mov	x0, sp
+ 	bl	do_serror
+@@ -724,7 +720,6 @@ SYM_CODE_START_LOCAL(el0_error)
+ 	kernel_entry 0
+ el0_error_naked:
+ 	mrs	x25, esr_el1
+-	gic_prio_kentry_setup tmp=x2
+ 	user_exit_irqoff
+ 	enable_dbg
+ 	mov	x0, sp
+diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
+index ac485163a4a76..6d44c028d1c9e 100644
+--- a/arch/arm64/mm/flush.c
++++ b/arch/arm64/mm/flush.c
+@@ -55,8 +55,10 @@ void __sync_icache_dcache(pte_t pte)
+ {
+ 	struct page *page = pte_page(pte);
+ 
+-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
++	if (!test_bit(PG_dcache_clean, &page->flags)) {
+ 		sync_icache_aliases(page_address(page), page_size(page));
++		set_bit(PG_dcache_clean, &page->flags);
++	}
+ }
+ EXPORT_SYMBOL_GPL(__sync_icache_dcache);
+ 
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index 23c326a06b2d4..a14927360be26 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -444,6 +444,18 @@ SYM_FUNC_START(__cpu_setup)
+ 	mov	x10, #(SYS_GCR_EL1_RRND | SYS_GCR_EL1_EXCL_MASK)
+ 	msr_s	SYS_GCR_EL1, x10
+ 
++	/*
++	 * If GCR_EL1.RRND=1 is implemented the same way as RRND=0, then
++	 * RGSR_EL1.SEED must be non-zero for IRG to produce
++	 * pseudorandom numbers. As RGSR_EL1 is UNKNOWN out of reset, we
++	 * must initialize it.
++	 */
++	mrs	x10, CNTVCT_EL0
++	ands	x10, x10, #SYS_RGSR_EL1_SEED_MASK
++	csinc	x10, x10, xzr, ne
++	lsl	x10, x10, #SYS_RGSR_EL1_SEED_SHIFT
++	msr_s	SYS_RGSR_EL1, x10
++
+ 	/* clear any pending tag check faults in TFSR*_EL1 */
+ 	msr_s	SYS_TFSR_EL1, xzr
+ 	msr_s	SYS_TFSRE0_EL1, xzr
+diff --git a/arch/ia64/include/asm/module.h b/arch/ia64/include/asm/module.h
+index 5a29652e6defc..7271b9c5fc760 100644
+--- a/arch/ia64/include/asm/module.h
++++ b/arch/ia64/include/asm/module.h
+@@ -14,16 +14,20 @@
+ struct elf64_shdr;			/* forward declration */
+ 
+ struct mod_arch_specific {
++	/* Used only at module load time. */
+ 	struct elf64_shdr *core_plt;	/* core PLT section */
+ 	struct elf64_shdr *init_plt;	/* init PLT section */
+ 	struct elf64_shdr *got;		/* global offset table */
+ 	struct elf64_shdr *opd;		/* official procedure descriptors */
+ 	struct elf64_shdr *unwind;	/* unwind-table section */
+ 	unsigned long gp;		/* global-pointer for module */
++	unsigned int next_got_entry;	/* index of next available got entry */
+ 
++	/* Used at module run and cleanup time. */
+ 	void *core_unw_table;		/* core unwind-table cookie returned by unwinder */
+ 	void *init_unw_table;		/* init unwind-table cookie returned by unwinder */
+-	unsigned int next_got_entry;	/* index of next available got entry */
++	void *opd_addr;			/* symbolize uses .opd to get to actual function */
++	unsigned long opd_size;
+ };
+ 
+ #define ARCH_SHF_SMALL	SHF_IA_64_SHORT
+diff --git a/arch/ia64/kernel/module.c b/arch/ia64/kernel/module.c
+index 00a496cb346f6..2cba53c1da82e 100644
+--- a/arch/ia64/kernel/module.c
++++ b/arch/ia64/kernel/module.c
+@@ -905,9 +905,31 @@ register_unwind_table (struct module *mod)
+ int
+ module_finalize (const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *mod)
+ {
++	struct mod_arch_specific *mas = &mod->arch;
++
+ 	DEBUGP("%s: init: entry=%p\n", __func__, mod->init);
+-	if (mod->arch.unwind)
++	if (mas->unwind)
+ 		register_unwind_table(mod);
++
++	/*
++	 * ".opd" was already relocated to the final destination. Store
++	 * it's address for use in symbolizer.
++	 */
++	mas->opd_addr = (void *)mas->opd->sh_addr;
++	mas->opd_size = mas->opd->sh_size;
++
++	/*
++	 * Module relocation was already done at this point. Section
++	 * headers are about to be deleted. Wipe out load-time context.
++	 */
++	mas->core_plt = NULL;
++	mas->init_plt = NULL;
++	mas->got = NULL;
++	mas->opd = NULL;
++	mas->unwind = NULL;
++	mas->gp = 0;
++	mas->next_got_entry = 0;
++
+ 	return 0;
+ }
+ 
+@@ -926,10 +948,9 @@ module_arch_cleanup (struct module *mod)
+ 
+ void *dereference_module_function_descriptor(struct module *mod, void *ptr)
+ {
+-	Elf64_Shdr *opd = mod->arch.opd;
++	struct mod_arch_specific *mas = &mod->arch;
+ 
+-	if (ptr < (void *)opd->sh_addr ||
+-			ptr >= (void *)(opd->sh_addr + opd->sh_size))
++	if (ptr < mas->opd_addr || ptr >= mas->opd_addr + mas->opd_size)
+ 		return ptr;
+ 
+ 	return dereference_function_descriptor(ptr);
+diff --git a/arch/mips/include/asm/div64.h b/arch/mips/include/asm/div64.h
+index dc5ea57364408..ceece76fc971a 100644
+--- a/arch/mips/include/asm/div64.h
++++ b/arch/mips/include/asm/div64.h
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright (C) 2000, 2004  Maciej W. Rozycki
++ * Copyright (C) 2000, 2004, 2021  Maciej W. Rozycki
+  * Copyright (C) 2003, 07 Ralf Baechle (ralf@linux-mips.org)
+  *
+  * This file is subject to the terms and conditions of the GNU General Public
+@@ -9,25 +9,18 @@
+ #ifndef __ASM_DIV64_H
+ #define __ASM_DIV64_H
+ 
+-#include <asm-generic/div64.h>
+-
+-#if BITS_PER_LONG == 64
++#include <asm/bitsperlong.h>
+ 
+-#include <linux/types.h>
++#if BITS_PER_LONG == 32
+ 
+ /*
+  * No traps on overflows for any of these...
+  */
+ 
+-#define __div64_32(n, base)						\
+-({									\
++#define do_div64_32(res, high, low, base) ({				\
+ 	unsigned long __cf, __tmp, __tmp2, __i;				\
+ 	unsigned long __quot32, __mod32;				\
+-	unsigned long __high, __low;					\
+-	unsigned long long __n;						\
+ 									\
+-	__high = *__n >> 32;						\
+-	__low = __n;							\
+ 	__asm__(							\
+ 	"	.set	push					\n"	\
+ 	"	.set	noat					\n"	\
+@@ -51,18 +44,48 @@
+ 	"	subu	%0, %0, %z6				\n"	\
+ 	"	addiu	%2, %2, 1				\n"	\
+ 	"3:							\n"	\
+-	"	bnez	%4, 0b\n\t"					\
+-	"	 srl	%5, %1, 0x1f\n\t"				\
++	"	bnez	%4, 0b					\n"	\
++	"	 srl	%5, %1, 0x1f				\n"	\
+ 	"	.set	pop"						\
+ 	: "=&r" (__mod32), "=&r" (__tmp),				\
+ 	  "=&r" (__quot32), "=&r" (__cf),				\
+ 	  "=&r" (__i), "=&r" (__tmp2)					\
+-	: "Jr" (base), "0" (__high), "1" (__low));			\
++	: "Jr" (base), "0" (high), "1" (low));				\
+ 									\
+-	(__n) = __quot32;						\
++	(res) = __quot32;						\
+ 	__mod32;							\
+ })
+ 
+-#endif /* BITS_PER_LONG == 64 */
++#define __div64_32(n, base) ({						\
++	unsigned long __upper, __low, __high, __radix;			\
++	unsigned long long __quot;					\
++	unsigned long long __div;					\
++	unsigned long __mod;						\
++									\
++	__div = (*n);							\
++	__radix = (base);						\
++									\
++	__high = __div >> 32;						\
++	__low = __div;							\
++									\
++	if (__high < __radix) {						\
++		__upper = __high;					\
++		__high = 0;						\
++	} else {							\
++		__upper = __high % __radix;				\
++		__high /= __radix;					\
++	}								\
++									\
++	__mod = do_div64_32(__low, __upper, __low, __radix);		\
++									\
++	__quot = __high;						\
++	__quot = __quot << 32 | __low;					\
++	(*n) = __quot;							\
++	__mod;								\
++})
++
++#endif /* BITS_PER_LONG == 32 */
++
++#include <asm-generic/div64.h>
+ 
+ #endif /* __ASM_DIV64_H */
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index 31cb9199197ca..e6ae2bcdbeda0 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1739,7 +1739,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ 			set_isa(c, MIPS_CPU_ISA_M64R2);
+ 			break;
+ 		}
+-		c->writecombine = _CACHE_UNCACHED_ACCELERATED;
+ 		c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_EXT |
+ 				MIPS_ASE_LOONGSON_EXT2);
+ 		break;
+@@ -1769,7 +1768,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ 		 * register, we correct it here.
+ 		 */
+ 		c->options |= MIPS_CPU_FTLB | MIPS_CPU_TLBINV | MIPS_CPU_LDPTE;
+-		c->writecombine = _CACHE_UNCACHED_ACCELERATED;
+ 		c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM |
+ 			MIPS_ASE_LOONGSON_EXT | MIPS_ASE_LOONGSON_EXT2);
+ 		c->ases &= ~MIPS_ASE_VZ; /* VZ of Loongson-3A2000/3000 is incomplete */
+@@ -1780,7 +1778,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ 		set_elf_platform(cpu, "loongson3a");
+ 		set_isa(c, MIPS_CPU_ISA_M64R2);
+ 		decode_cpucfg(c);
+-		c->writecombine = _CACHE_UNCACHED_ACCELERATED;
+ 		break;
+ 	default:
+ 		panic("Unknown Loongson Processor ID!");
+diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
+index fef0b34a77c9d..f8e3d15ddf694 100644
+--- a/arch/powerpc/kernel/head_32.h
++++ b/arch/powerpc/kernel/head_32.h
+@@ -338,11 +338,7 @@ label:
+ 	lis	r1, emergency_ctx@ha
+ #endif
+ 	lwz	r1, emergency_ctx@l(r1)
+-	cmpwi	cr1, r1, 0
+-	bne	cr1, 1f
+-	lis	r1, init_thread_union@ha
+-	addi	r1, r1, init_thread_union@l
+-1:	addi	r1, r1, THREAD_SIZE - INT_FRAME_SIZE
++	addi	r1, r1, THREAD_SIZE - INT_FRAME_SIZE
+ 	EXCEPTION_PROLOG_2
+ 	SAVE_NVGPRS(r11)
+ 	addi	r3, r1, STACK_FRAME_OVERHEAD
+diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
+index 5b69a6a72a0e2..6806eefa52ceb 100644
+--- a/arch/powerpc/kernel/iommu.c
++++ b/arch/powerpc/kernel/iommu.c
+@@ -1050,7 +1050,7 @@ int iommu_take_ownership(struct iommu_table *tbl)
+ 
+ 	spin_lock_irqsave(&tbl->large_pool.lock, flags);
+ 	for (i = 0; i < tbl->nr_pools; i++)
+-		spin_lock(&tbl->pools[i].lock);
++		spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock);
+ 
+ 	iommu_table_release_pages(tbl);
+ 
+@@ -1078,7 +1078,7 @@ void iommu_release_ownership(struct iommu_table *tbl)
+ 
+ 	spin_lock_irqsave(&tbl->large_pool.lock, flags);
+ 	for (i = 0; i < tbl->nr_pools; i++)
+-		spin_lock(&tbl->pools[i].lock);
++		spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock);
+ 
+ 	memset(tbl->it_map, 0, sz);
+ 
+diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
+index 057d6b8e9bb0e..e7f2eb7837fc4 100644
+--- a/arch/powerpc/kernel/setup_32.c
++++ b/arch/powerpc/kernel/setup_32.c
+@@ -164,7 +164,7 @@ void __init irqstack_early_init(void)
+ }
+ 
+ #ifdef CONFIG_VMAP_STACK
+-void *emergency_ctx[NR_CPUS] __ro_after_init;
++void *emergency_ctx[NR_CPUS] __ro_after_init = {[0] = &init_stack};
+ 
+ void __init emergency_stack_init(void)
+ {
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index dd34ea6744965..db7ac77bea3a7 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -1442,6 +1442,9 @@ void start_secondary(void *unused)
+ 
+ 	vdso_getcpu_init();
+ #endif
++	set_numa_node(numa_cpu_lookup_table[cpu]);
++	set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]));
++
+ 	/* Update topology CPU masks */
+ 	add_cpu_to_masks(cpu);
+ 
+@@ -1460,9 +1463,6 @@ void start_secondary(void *unused)
+ 			shared_caches = true;
+ 	}
+ 
+-	set_numa_node(numa_cpu_lookup_table[cpu]);
+-	set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]));
+-
+ 	smp_wmb();
+ 	notify_cpu_starting(cpu);
+ 	set_cpu_online(cpu, true);
+diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
+index 92705d6dfb6e0..bda150ed33dec 100644
+--- a/arch/powerpc/lib/feature-fixups.c
++++ b/arch/powerpc/lib/feature-fixups.c
+@@ -14,6 +14,7 @@
+ #include <linux/string.h>
+ #include <linux/init.h>
+ #include <linux/sched/mm.h>
++#include <linux/stop_machine.h>
+ #include <asm/cputable.h>
+ #include <asm/code-patching.h>
+ #include <asm/page.h>
+@@ -227,11 +228,25 @@ static void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
+ 		                                           : "unknown");
+ }
+ 
++static int __do_stf_barrier_fixups(void *data)
++{
++	enum stf_barrier_type *types = data;
++
++	do_stf_entry_barrier_fixups(*types);
++	do_stf_exit_barrier_fixups(*types);
++
++	return 0;
++}
+ 
+ void do_stf_barrier_fixups(enum stf_barrier_type types)
+ {
+-	do_stf_entry_barrier_fixups(types);
+-	do_stf_exit_barrier_fixups(types);
++	/*
++	 * The call to the fallback entry flush, and the fallback/sync-ori exit
++	 * flush can not be safely patched in/out while other CPUs are executing
++	 * them. So call __do_stf_barrier_fixups() on one CPU while all other CPUs
++	 * spin in the stop machine core with interrupts hard disabled.
++	 */
++	stop_machine(__do_stf_barrier_fixups, &types, NULL);
+ }
+ 
+ void do_uaccess_flush_fixups(enum l1d_flush_type types)
+@@ -284,8 +299,9 @@ void do_uaccess_flush_fixups(enum l1d_flush_type types)
+ 						: "unknown");
+ }
+ 
+-void do_entry_flush_fixups(enum l1d_flush_type types)
++static int __do_entry_flush_fixups(void *data)
+ {
++	enum l1d_flush_type types = *(enum l1d_flush_type *)data;
+ 	unsigned int instrs[3], *dest;
+ 	long *start, *end;
+ 	int i;
+@@ -354,6 +370,19 @@ void do_entry_flush_fixups(enum l1d_flush_type types)
+ 							: "ori type" :
+ 		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
+ 						: "unknown");
++
++	return 0;
++}
++
++void do_entry_flush_fixups(enum l1d_flush_type types)
++{
++	/*
++	 * The call to the fallback flush can not be safely patched in/out while
++	 * other CPUs are executing it. So call __do_entry_flush_fixups() on one
++	 * CPU while all other CPUs spin in the stop machine core with interrupts
++	 * hard disabled.
++	 */
++	stop_machine(__do_entry_flush_fixups, &types, NULL);
+ }
+ 
+ void do_rfi_flush_fixups(enum l1d_flush_type types)
+diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
+index 24702c0a92e0f..0141d571476c5 100644
+--- a/arch/powerpc/mm/book3s64/hash_utils.c
++++ b/arch/powerpc/mm/book3s64/hash_utils.c
+@@ -336,7 +336,7 @@ repeat:
+ int htab_remove_mapping(unsigned long vstart, unsigned long vend,
+ 		      int psize, int ssize)
+ {
+-	unsigned long vaddr;
++	unsigned long vaddr, time_limit;
+ 	unsigned int step, shift;
+ 	int rc;
+ 	int ret = 0;
+@@ -349,8 +349,19 @@ int htab_remove_mapping(unsigned long vstart, unsigned long vend,
+ 
+ 	/* Unmap the full range specificied */
+ 	vaddr = ALIGN_DOWN(vstart, step);
++	time_limit = jiffies + HZ;
++
+ 	for (;vaddr < vend; vaddr += step) {
+ 		rc = mmu_hash_ops.hpte_removebolted(vaddr, psize, ssize);
++
++		/*
++		 * For large number of mappings introduce a cond_resched()
++		 * to prevent softlockup warnings.
++		 */
++		if (time_after(jiffies, time_limit)) {
++			cond_resched();
++			time_limit = jiffies + HZ;
++		}
+ 		if (rc == -ENOENT) {
+ 			ret = -ENOENT;
+ 			continue;
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index 12cbffd3c2e32..325f3b220f360 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -47,9 +47,6 @@ static void rtas_stop_self(void)
+ 
+ 	BUG_ON(rtas_stop_self_token == RTAS_UNKNOWN_SERVICE);
+ 
+-	printk("cpu %u (hwid %u) Ready to die...\n",
+-	       smp_processor_id(), hard_smp_processor_id());
+-
+ 	rtas_call_unlocked(&args, rtas_stop_self_token, 0, 1, NULL);
+ 
+ 	panic("Alas, I survived.\n");
+diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
+index ea028d9e0d242..d44567490d911 100644
+--- a/arch/riscv/kernel/smp.c
++++ b/arch/riscv/kernel/smp.c
+@@ -54,7 +54,7 @@ int riscv_hartid_to_cpuid(int hartid)
+ 			return i;
+ 
+ 	pr_err("Couldn't find cpu id for hartid [%d]\n", hartid);
+-	return i;
++	return -ENOENT;
+ }
+ 
+ void riscv_cpuid_to_hartid_mask(const struct cpumask *in, struct cpumask *out)
+diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
+index f656aabd1545c..0e3325790f3a9 100644
+--- a/arch/x86/include/asm/idtentry.h
++++ b/arch/x86/include/asm/idtentry.h
+@@ -588,6 +588,21 @@ DECLARE_IDTENTRY_RAW(X86_TRAP_MC,	exc_machine_check);
+ #endif
+ 
+ /* NMI */
++
++#if defined(CONFIG_X86_64) && IS_ENABLED(CONFIG_KVM_INTEL)
++/*
++ * Special NOIST entry point for VMX which invokes this on the kernel
++ * stack. asm_exc_nmi() requires an IST to work correctly vs. the NMI
++ * 'executing' marker.
++ *
++ * On 32bit this just uses the regular NMI entry point because 32-bit does
++ * not have ISTs.
++ */
++DECLARE_IDTENTRY(X86_TRAP_NMI,		exc_nmi_noist);
++#else
++#define asm_exc_nmi_noist		asm_exc_nmi
++#endif
++
+ DECLARE_IDTENTRY_NMI(X86_TRAP_NMI,	exc_nmi);
+ #ifdef CONFIG_XEN_PV
+ DECLARE_IDTENTRY_RAW(X86_TRAP_NMI,	xenpv_exc_nmi);
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 02d4c74d30e2b..ef56780022c3e 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -358,8 +358,6 @@ struct kvm_mmu {
+ 	int (*sync_page)(struct kvm_vcpu *vcpu,
+ 			 struct kvm_mmu_page *sp);
+ 	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
+-	void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+-			   u64 *spte, const void *pte);
+ 	hpa_t root_hpa;
+ 	gpa_t root_pgd;
+ 	union kvm_mmu_role mmu_role;
+@@ -1019,7 +1017,6 @@ struct kvm_arch {
+ struct kvm_vm_stat {
+ 	ulong mmu_shadow_zapped;
+ 	ulong mmu_pte_write;
+-	ulong mmu_pte_updated;
+ 	ulong mmu_pde_zapped;
+ 	ulong mmu_flooded;
+ 	ulong mmu_recycled;
+@@ -1671,6 +1668,7 @@ int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low,
+ 		    unsigned long icr, int op_64_bit);
+ 
+ void kvm_define_user_return_msr(unsigned index, u32 msr);
++int kvm_probe_user_return_msr(u32 msr);
+ int kvm_set_user_return_msr(unsigned index, u64 val, u64 mask);
+ 
+ u64 kvm_scale_tsc(struct kvm_vcpu *vcpu, u64 tsc);
+diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
+index bf250a339655f..2ef961cf4cfc5 100644
+--- a/arch/x86/kernel/nmi.c
++++ b/arch/x86/kernel/nmi.c
+@@ -524,6 +524,16 @@ nmi_restart:
+ 		mds_user_clear_cpu_buffers();
+ }
+ 
++#if defined(CONFIG_X86_64) && IS_ENABLED(CONFIG_KVM_INTEL)
++DEFINE_IDTENTRY_RAW(exc_nmi_noist)
++{
++	exc_nmi(regs);
++}
++#endif
++#if IS_MODULE(CONFIG_KVM_INTEL)
++EXPORT_SYMBOL_GPL(asm_exc_nmi_noist);
++#endif
++
+ void stop_nmi(void)
+ {
+ 	ignore_nmis++;
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 62157b1000f08..56a62d555e924 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -572,7 +572,8 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
+ 	case 7:
+ 		entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
+ 		entry->eax = 0;
+-		entry->ecx = F(RDPID);
++		if (kvm_cpu_cap_has(X86_FEATURE_RDTSCP))
++			entry->ecx = F(RDPID);
+ 		++array->nent;
+ 	default:
+ 		break;
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index d3f2b63167451..e82151ba95c09 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -4502,7 +4502,7 @@ static const struct opcode group8[] = {
+  * from the register case of group9.
+  */
+ static const struct gprefix pfx_0f_c7_7 = {
+-	N, N, N, II(DstMem | ModRM | Op3264 | EmulateOnUD, em_rdpid, rdtscp),
++	N, N, N, II(DstMem | ModRM | Op3264 | EmulateOnUD, em_rdpid, rdpid),
+ };
+ 
+ 
+diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
+index 43c93ffa76edf..7d5be04dc6616 100644
+--- a/arch/x86/kvm/kvm_emulate.h
++++ b/arch/x86/kvm/kvm_emulate.h
+@@ -468,6 +468,7 @@ enum x86_intercept {
+ 	x86_intercept_clgi,
+ 	x86_intercept_skinit,
+ 	x86_intercept_rdtscp,
++	x86_intercept_rdpid,
+ 	x86_intercept_icebp,
+ 	x86_intercept_wbinvd,
+ 	x86_intercept_monitor,
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 4ca81ae9bc8ad..5759eb075d2fc 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1908,8 +1908,8 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
+ 	if (!apic->lapic_timer.hv_timer_in_use)
+ 		goto out;
+ 	WARN_ON(rcuwait_active(&vcpu->wait));
+-	cancel_hv_timer(apic);
+ 	apic_timer_expired(apic, false);
++	cancel_hv_timer(apic);
+ 
+ 	if (apic_lvtt_period(apic) && apic->lapic_timer.period) {
+ 		advance_periodic_target_expiration(apic);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 2f2576fd343e6..ac5054763e38e 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -1715,13 +1715,6 @@ static int nonpaging_sync_page(struct kvm_vcpu *vcpu,
+ 	return 0;
+ }
+ 
+-static void nonpaging_update_pte(struct kvm_vcpu *vcpu,
+-				 struct kvm_mmu_page *sp, u64 *spte,
+-				 const void *pte)
+-{
+-	WARN_ON(1);
+-}
+-
+ #define KVM_PAGE_ARRAY_NR 16
+ 
+ struct kvm_mmu_pages {
+@@ -3820,7 +3813,6 @@ static void nonpaging_init_context(struct kvm_vcpu *vcpu,
+ 	context->gva_to_gpa = nonpaging_gva_to_gpa;
+ 	context->sync_page = nonpaging_sync_page;
+ 	context->invlpg = NULL;
+-	context->update_pte = nonpaging_update_pte;
+ 	context->root_level = 0;
+ 	context->shadow_root_level = PT32E_ROOT_LEVEL;
+ 	context->direct_map = true;
+@@ -4402,7 +4394,6 @@ static void paging64_init_context_common(struct kvm_vcpu *vcpu,
+ 	context->gva_to_gpa = paging64_gva_to_gpa;
+ 	context->sync_page = paging64_sync_page;
+ 	context->invlpg = paging64_invlpg;
+-	context->update_pte = paging64_update_pte;
+ 	context->shadow_root_level = level;
+ 	context->direct_map = false;
+ }
+@@ -4431,7 +4422,6 @@ static void paging32_init_context(struct kvm_vcpu *vcpu,
+ 	context->gva_to_gpa = paging32_gva_to_gpa;
+ 	context->sync_page = paging32_sync_page;
+ 	context->invlpg = paging32_invlpg;
+-	context->update_pte = paging32_update_pte;
+ 	context->shadow_root_level = PT32E_ROOT_LEVEL;
+ 	context->direct_map = false;
+ }
+@@ -4513,7 +4503,6 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
+ 	context->page_fault = kvm_tdp_page_fault;
+ 	context->sync_page = nonpaging_sync_page;
+ 	context->invlpg = NULL;
+-	context->update_pte = nonpaging_update_pte;
+ 	context->shadow_root_level = kvm_mmu_get_tdp_level(vcpu);
+ 	context->direct_map = true;
+ 	context->get_guest_pgd = get_cr3;
+@@ -4690,7 +4679,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
+ 	context->gva_to_gpa = ept_gva_to_gpa;
+ 	context->sync_page = ept_sync_page;
+ 	context->invlpg = ept_invlpg;
+-	context->update_pte = ept_update_pte;
+ 	context->root_level = level;
+ 	context->direct_map = false;
+ 	context->mmu_role.as_u64 = new_role.as_u64;
+@@ -4838,19 +4826,6 @@ void kvm_mmu_unload(struct kvm_vcpu *vcpu)
+ }
+ EXPORT_SYMBOL_GPL(kvm_mmu_unload);
+ 
+-static void mmu_pte_write_new_pte(struct kvm_vcpu *vcpu,
+-				  struct kvm_mmu_page *sp, u64 *spte,
+-				  const void *new)
+-{
+-	if (sp->role.level != PG_LEVEL_4K) {
+-		++vcpu->kvm->stat.mmu_pde_zapped;
+-		return;
+-        }
+-
+-	++vcpu->kvm->stat.mmu_pte_updated;
+-	vcpu->arch.mmu->update_pte(vcpu, sp, spte, new);
+-}
+-
+ static bool need_remote_flush(u64 old, u64 new)
+ {
+ 	if (!is_shadow_present_pte(old))
+@@ -4966,22 +4941,6 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte)
+ 	return spte;
+ }
+ 
+-/*
+- * Ignore various flags when determining if a SPTE can be immediately
+- * overwritten for the current MMU.
+- *  - level: explicitly checked in mmu_pte_write_new_pte(), and will never
+- *    match the current MMU role, as MMU's level tracks the root level.
+- *  - access: updated based on the new guest PTE
+- *  - quadrant: handled by get_written_sptes()
+- *  - invalid: always false (loop only walks valid shadow pages)
+- */
+-static const union kvm_mmu_page_role role_ign = {
+-	.level = 0xf,
+-	.access = 0x7,
+-	.quadrant = 0x3,
+-	.invalid = 0x1,
+-};
+-
+ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
+ 			      const u8 *new, int bytes,
+ 			      struct kvm_page_track_notifier_node *node)
+@@ -5032,14 +4991,10 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
+ 
+ 		local_flush = true;
+ 		while (npte--) {
+-			u32 base_role = vcpu->arch.mmu->mmu_role.base.word;
+-
+ 			entry = *spte;
+ 			mmu_page_zap_pte(vcpu->kvm, sp, spte, NULL);
+-			if (gentry &&
+-			    !((sp->role.word ^ base_role) & ~role_ign.word) &&
+-			    rmap_can_add(vcpu))
+-				mmu_pte_write_new_pte(vcpu, sp, spte, &gentry);
++			if (gentry && sp->role.level != PG_LEVEL_4K)
++				++vcpu->kvm->stat.mmu_pde_zapped;
+ 			if (need_remote_flush(entry, *spte))
+ 				remote_flush = true;
+ 			++spte;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index e8882715735ae..32e6f33c2c45b 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -3139,15 +3139,8 @@ static bool nested_get_evmcs_page(struct kvm_vcpu *vcpu)
+ 			nested_vmx_handle_enlightened_vmptrld(vcpu, false);
+ 
+ 		if (evmptrld_status == EVMPTRLD_VMFAIL ||
+-		    evmptrld_status == EVMPTRLD_ERROR) {
+-			pr_debug_ratelimited("%s: enlightened vmptrld failed\n",
+-					     __func__);
+-			vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+-			vcpu->run->internal.suberror =
+-				KVM_INTERNAL_ERROR_EMULATION;
+-			vcpu->run->internal.ndata = 0;
++		    evmptrld_status == EVMPTRLD_ERROR)
+ 			return false;
+-		}
+ 	}
+ 
+ 	return true;
+@@ -3235,8 +3228,16 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
+ 
+ static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
+ {
+-	if (!nested_get_evmcs_page(vcpu))
++	if (!nested_get_evmcs_page(vcpu)) {
++		pr_debug_ratelimited("%s: enlightened vmptrld failed\n",
++				     __func__);
++		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
++		vcpu->run->internal.suberror =
++			KVM_INTERNAL_ERROR_EMULATION;
++		vcpu->run->internal.ndata = 0;
++
+ 		return false;
++	}
+ 
+ 	if (is_guest_mode(vcpu) && !nested_get_vmcs12_pages(vcpu))
+ 		return false;
+@@ -4441,7 +4442,15 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
+ 	/* trying to cancel vmlaunch/vmresume is a bug */
+ 	WARN_ON_ONCE(vmx->nested.nested_run_pending);
+ 
+-	kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
++	if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
++		/*
++		 * KVM_REQ_GET_NESTED_STATE_PAGES is also used to map
++		 * Enlightened VMCS after migration and we still need to
++		 * do that when something is forcing L2->L1 exit prior to
++		 * the first L2 run.
++		 */
++		(void)nested_get_evmcs_page(vcpu);
++	}
+ 
+ 	/* Service the TLB flush request for L2 before switching to L1. */
+ 	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index fca4f452827b7..d7f8d2167fda0 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -36,6 +36,7 @@
+ #include <asm/debugreg.h>
+ #include <asm/desc.h>
+ #include <asm/fpu/internal.h>
++#include <asm/idtentry.h>
+ #include <asm/io.h>
+ #include <asm/irq_remapping.h>
+ #include <asm/kexec.h>
+@@ -6354,18 +6355,17 @@ static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
+ 
+ void vmx_do_interrupt_nmi_irqoff(unsigned long entry);
+ 
+-static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu, u32 intr_info)
++static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu,
++					unsigned long entry)
+ {
+-	unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
+-	gate_desc *desc = (gate_desc *)host_idt_base + vector;
+-
+ 	kvm_before_interrupt(vcpu);
+-	vmx_do_interrupt_nmi_irqoff(gate_offset(desc));
++	vmx_do_interrupt_nmi_irqoff(entry);
+ 	kvm_after_interrupt(vcpu);
+ }
+ 
+ static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
+ {
++	const unsigned long nmi_entry = (unsigned long)asm_exc_nmi_noist;
+ 	u32 intr_info = vmx_get_intr_info(&vmx->vcpu);
+ 
+ 	/* if exit due to PF check for async PF */
+@@ -6376,18 +6376,20 @@ static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
+ 		kvm_machine_check();
+ 	/* We need to handle NMIs before interrupts are enabled */
+ 	else if (is_nmi(intr_info))
+-		handle_interrupt_nmi_irqoff(&vmx->vcpu, intr_info);
++		handle_interrupt_nmi_irqoff(&vmx->vcpu, nmi_entry);
+ }
+ 
+ static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
+ {
+ 	u32 intr_info = vmx_get_intr_info(vcpu);
++	unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
++	gate_desc *desc = (gate_desc *)host_idt_base + vector;
+ 
+ 	if (WARN_ONCE(!is_external_intr(intr_info),
+ 	    "KVM: unexpected VM-Exit interrupt info: 0x%x", intr_info))
+ 		return;
+ 
+-	handle_interrupt_nmi_irqoff(vcpu, intr_info);
++	handle_interrupt_nmi_irqoff(vcpu, gate_offset(desc));
+ }
+ 
+ static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+@@ -6862,12 +6864,9 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
+ 
+ 	for (i = 0; i < ARRAY_SIZE(vmx_uret_msrs_list); ++i) {
+ 		u32 index = vmx_uret_msrs_list[i];
+-		u32 data_low, data_high;
+ 		int j = vmx->nr_uret_msrs;
+ 
+-		if (rdmsr_safe(index, &data_low, &data_high) < 0)
+-			continue;
+-		if (wrmsr_safe(index, data_low, data_high) < 0)
++		if (kvm_probe_user_return_msr(index))
+ 			continue;
+ 
+ 		vmx->guest_uret_msrs[j].slot = i;
+@@ -7300,9 +7299,11 @@ static __init void vmx_set_cpu_caps(void)
+ 	if (!cpu_has_vmx_xsaves())
+ 		kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
+ 
+-	/* CPUID 0x80000001 */
+-	if (!cpu_has_vmx_rdtscp())
++	/* CPUID 0x80000001 and 0x7 (RDPID) */
++	if (!cpu_has_vmx_rdtscp()) {
+ 		kvm_cpu_cap_clear(X86_FEATURE_RDTSCP);
++		kvm_cpu_cap_clear(X86_FEATURE_RDPID);
++	}
+ 
+ 	if (cpu_has_vmx_waitpkg())
+ 		kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG);
+@@ -7358,8 +7359,9 @@ static int vmx_check_intercept(struct kvm_vcpu *vcpu,
+ 	/*
+ 	 * RDPID causes #UD if disabled through secondary execution controls.
+ 	 * Because it is marked as EmulateOnUD, we need to intercept it here.
++	 * Note, RDPID is hidden behind ENABLE_RDTSCP.
+ 	 */
+-	case x86_intercept_rdtscp:
++	case x86_intercept_rdpid:
+ 		if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_RDTSCP)) {
+ 			exception->vector = UD_VECTOR;
+ 			exception->error_code_valid = false;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0a5dd7568ebc8..c071a83d543ae 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -233,7 +233,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
+ 	VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
+ 	VM_STAT("mmu_shadow_zapped", mmu_shadow_zapped),
+ 	VM_STAT("mmu_pte_write", mmu_pte_write),
+-	VM_STAT("mmu_pte_updated", mmu_pte_updated),
+ 	VM_STAT("mmu_pde_zapped", mmu_pde_zapped),
+ 	VM_STAT("mmu_flooded", mmu_flooded),
+ 	VM_STAT("mmu_recycled", mmu_recycled),
+@@ -323,6 +322,22 @@ static void kvm_on_user_return(struct user_return_notifier *urn)
+ 	}
+ }
+ 
++int kvm_probe_user_return_msr(u32 msr)
++{
++	u64 val;
++	int ret;
++
++	preempt_disable();
++	ret = rdmsrl_safe(msr, &val);
++	if (ret)
++		goto out;
++	ret = wrmsrl_safe(msr, val);
++out:
++	preempt_enable();
++	return ret;
++}
++EXPORT_SYMBOL_GPL(kvm_probe_user_return_msr);
++
+ void kvm_define_user_return_msr(unsigned slot, u32 msr)
+ {
+ 	BUG_ON(slot >= KVM_MAX_NR_USER_RETURN_MSRS);
+@@ -7849,6 +7864,18 @@ static void pvclock_gtod_update_fn(struct work_struct *work)
+ 
+ static DECLARE_WORK(pvclock_gtod_work, pvclock_gtod_update_fn);
+ 
++/*
++ * Indirection to move queue_work() out of the tk_core.seq write held
++ * region to prevent possible deadlocks against time accessors which
++ * are invoked with work related locks held.
++ */
++static void pvclock_irq_work_fn(struct irq_work *w)
++{
++	queue_work(system_long_wq, &pvclock_gtod_work);
++}
++
++static DEFINE_IRQ_WORK(pvclock_irq_work, pvclock_irq_work_fn);
++
+ /*
+  * Notification about pvclock gtod data update.
+  */
+@@ -7860,13 +7887,14 @@ static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused,
+ 
+ 	update_pvclock_gtod(tk);
+ 
+-	/* disable master clock if host does not trust, or does not
+-	 * use, TSC based clocksource.
++	/*
++	 * Disable master clock if host does not trust, or does not use,
++	 * TSC based clocksource. Delegate queue_work() to irq_work as
++	 * this is invoked with tk_core.seq write held.
+ 	 */
+ 	if (!gtod_is_based_on_tsc(gtod->clock.vclock_mode) &&
+ 	    atomic_read(&kvm_guest_has_master_clock) != 0)
+-		queue_work(system_long_wq, &pvclock_gtod_work);
+-
++		irq_work_queue(&pvclock_irq_work);
+ 	return 0;
+ }
+ 
+@@ -7982,6 +8010,8 @@ void kvm_arch_exit(void)
+ 	cpuhp_remove_state_nocalls(CPUHP_AP_X86_KVM_CLK_ONLINE);
+ #ifdef CONFIG_X86_64
+ 	pvclock_gtod_unregister_notifier(&pvclock_gtod_notifier);
++	irq_work_sync(&pvclock_irq_work);
++	cancel_work_sync(&pvclock_gtod_work);
+ #endif
+ 	kvm_x86_ops.hardware_enable = NULL;
+ 	kvm_mmu_module_exit();
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 5720978e4d09b..c91dca641eb46 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2210,10 +2210,9 @@ static void bfq_remove_request(struct request_queue *q,
+ 
+ }
+ 
+-static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
++static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
+ 		unsigned int nr_segs)
+ {
+-	struct request_queue *q = hctx->queue;
+ 	struct bfq_data *bfqd = q->elevator->elevator_data;
+ 	struct request *free = NULL;
+ 	/*
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 7e963b457f2ec..aaae531135aac 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1023,7 +1023,17 @@ static void __propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse,
+ 
+ 	lockdep_assert_held(&ioc->lock);
+ 
+-	inuse = clamp_t(u32, inuse, 1, active);
++	/*
++	 * For an active leaf node, its inuse shouldn't be zero or exceed
++	 * @active. An active internal node's inuse is solely determined by the
++	 * inuse to active ratio of its children regardless of @inuse.
++	 */
++	if (list_empty(&iocg->active_list) && iocg->child_active_sum) {
++		inuse = DIV64_U64_ROUND_UP(active * iocg->child_inuse_sum,
++					   iocg->child_active_sum);
++	} else {
++		inuse = clamp_t(u32, inuse, 1, active);
++	}
+ 
+ 	iocg->last_inuse = iocg->inuse;
+ 	if (save)
+@@ -1040,7 +1050,7 @@ static void __propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse,
+ 		/* update the level sums */
+ 		parent->child_active_sum += (s32)(active - child->active);
+ 		parent->child_inuse_sum += (s32)(inuse - child->inuse);
+-		/* apply the udpates */
++		/* apply the updates */
+ 		child->active = active;
+ 		child->inuse = inuse;
+ 
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index d1eafe2c045ca..581be65a53c15 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -348,14 +348,16 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
+ 		unsigned int nr_segs)
+ {
+ 	struct elevator_queue *e = q->elevator;
+-	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
+-	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
++	struct blk_mq_ctx *ctx;
++	struct blk_mq_hw_ctx *hctx;
+ 	bool ret = false;
+ 	enum hctx_type type;
+ 
+ 	if (e && e->type->ops.bio_merge)
+-		return e->type->ops.bio_merge(hctx, bio, nr_segs);
++		return e->type->ops.bio_merge(q, bio, nr_segs);
+ 
++	ctx = blk_mq_get_ctx(q);
++	hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
+ 	type = hctx->type;
+ 	if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
+ 	    list_empty_careful(&ctx->rq_lists[type]))
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 2a1eff60c7975..4bf9449b45868 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2203,8 +2203,9 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
+ 		/* Bypass scheduler for flush requests */
+ 		blk_insert_flush(rq);
+ 		blk_mq_run_hw_queue(data.hctx, true);
+-	} else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs ||
+-				!blk_queue_nonrot(q))) {
++	} else if (plug && (q->nr_hw_queues == 1 ||
++		   blk_mq_is_sbitmap_shared(rq->mq_hctx->flags) ||
++		   q->mq_ops->commit_rqs || !blk_queue_nonrot(q))) {
+ 		/*
+ 		 * Use plugging if we have a ->commit_rqs() hook as well, as
+ 		 * we know the driver uses bd->last in a smart fashion.
+@@ -3255,10 +3256,12 @@ EXPORT_SYMBOL(blk_mq_init_allocated_queue);
+ /* tags can _not_ be used after returning from blk_mq_exit_queue */
+ void blk_mq_exit_queue(struct request_queue *q)
+ {
+-	struct blk_mq_tag_set	*set = q->tag_set;
++	struct blk_mq_tag_set *set = q->tag_set;
+ 
+-	blk_mq_del_queue_tag_set(q);
++	/* Checks hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED. */
+ 	blk_mq_exit_hw_queues(q, set, set->nr_hw_queues);
++	/* May clear BLK_MQ_F_TAG_QUEUE_SHARED in hctx->flags. */
++	blk_mq_del_queue_tag_set(q);
+ }
+ 
+ static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
+diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
+index dc89199bc8c69..7f9ef773bf444 100644
+--- a/block/kyber-iosched.c
++++ b/block/kyber-iosched.c
+@@ -562,11 +562,12 @@ static void kyber_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
+ 	}
+ }
+ 
+-static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
++static bool kyber_bio_merge(struct request_queue *q, struct bio *bio,
+ 		unsigned int nr_segs)
+ {
++	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
++	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
+ 	struct kyber_hctx_data *khd = hctx->sched_data;
+-	struct blk_mq_ctx *ctx = blk_mq_get_ctx(hctx->queue);
+ 	struct kyber_ctx_queue *kcq = &khd->kcqs[ctx->index_hw[hctx->type]];
+ 	unsigned int sched_domain = kyber_sched_domain(bio->bi_opf);
+ 	struct list_head *rq_list = &kcq->rq_list[sched_domain];
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 800ac902809b8..2b9635d0dcba8 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -461,10 +461,9 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
+ 	return ELEVATOR_NO_MERGE;
+ }
+ 
+-static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
++static bool dd_bio_merge(struct request_queue *q, struct bio *bio,
+ 		unsigned int nr_segs)
+ {
+-	struct request_queue *q = hctx->queue;
+ 	struct deadline_data *dd = q->elevator->elevator_data;
+ 	struct request *free = NULL;
+ 	bool ret;
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index ef77dbcaf58f6..48ff6821a83d4 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -1301,6 +1301,7 @@ int acpi_dev_pm_attach(struct device *dev, bool power_on)
+ 		{"PNP0C0B", }, /* Generic ACPI fan */
+ 		{"INT3404", }, /* Fan */
+ 		{"INTC1044", }, /* Fan for Tiger Lake generation */
++		{"INTC1048", }, /* Fan for Alder Lake generation */
+ 		{}
+ 	};
+ 	struct acpi_device *adev = ACPI_COMPANION(dev);
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index b47f14ac75ae0..de0533bd4e086 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -705,6 +705,7 @@ int acpi_device_add(struct acpi_device *device,
+ 
+ 		result = acpi_device_set_name(device, acpi_device_bus_id);
+ 		if (result) {
++			kfree_const(acpi_device_bus_id->bus_id);
+ 			kfree(acpi_device_bus_id);
+ 			goto err_unlock;
+ 		}
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index d6d73ff94e88f..bc649da4899a0 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1637,6 +1637,7 @@ void pm_runtime_init(struct device *dev)
+ 	dev->power.request_pending = false;
+ 	dev->power.request = RPM_REQ_NONE;
+ 	dev->power.deferred_resume = false;
++	dev->power.needs_force_resume = 0;
+ 	INIT_WORK(&dev->power.work, pm_runtime_work);
+ 
+ 	dev->power.timer_expires = 0;
+@@ -1804,10 +1805,12 @@ int pm_runtime_force_suspend(struct device *dev)
+ 	 * its parent, but set its status to RPM_SUSPENDED anyway in case this
+ 	 * function will be called again for it in the meantime.
+ 	 */
+-	if (pm_runtime_need_not_resume(dev))
++	if (pm_runtime_need_not_resume(dev)) {
+ 		pm_runtime_set_suspended(dev);
+-	else
++	} else {
+ 		__update_runtime_status(dev, RPM_SUSPENDED);
++		dev->power.needs_force_resume = 1;
++	}
+ 
+ 	return 0;
+ 
+@@ -1834,7 +1837,7 @@ int pm_runtime_force_resume(struct device *dev)
+ 	int (*callback)(struct device *);
+ 	int ret = 0;
+ 
+-	if (!pm_runtime_status_suspended(dev) || pm_runtime_need_not_resume(dev))
++	if (!pm_runtime_status_suspended(dev) || !dev->power.needs_force_resume)
+ 		goto out;
+ 
+ 	/*
+@@ -1853,6 +1856,7 @@ int pm_runtime_force_resume(struct device *dev)
+ 
+ 	pm_runtime_mark_last_busy(dev);
+ out:
++	dev->power.needs_force_resume = 0;
+ 	pm_runtime_enable(dev);
+ 	return ret;
+ }
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 5e45eddbe2abc..9a70eab7edbf7 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -2031,7 +2031,8 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd)
+ 	 * config ref and try to destroy the workqueue from inside the work
+ 	 * queue.
+ 	 */
+-	flush_workqueue(nbd->recv_workq);
++	if (nbd->recv_workq)
++		flush_workqueue(nbd->recv_workq);
+ 	if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF,
+ 			       &nbd->config->runtime_flags))
+ 		nbd_config_put(nbd);
+diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
+index ba334fe7626db..71b86fee81c24 100644
+--- a/drivers/block/rnbd/rnbd-clt.c
++++ b/drivers/block/rnbd/rnbd-clt.c
+@@ -679,7 +679,11 @@ static void remap_devs(struct rnbd_clt_session *sess)
+ 		return;
+ 	}
+ 
+-	rtrs_clt_query(sess->rtrs, &attrs);
++	err = rtrs_clt_query(sess->rtrs, &attrs);
++	if (err) {
++		pr_err("rtrs_clt_query(\"%s\"): %d\n", sess->sessname, err);
++		return;
++	}
+ 	mutex_lock(&sess->lock);
+ 	sess->max_io_size = attrs.max_io_size;
+ 
+@@ -1211,7 +1215,11 @@ find_and_get_or_create_sess(const char *sessname,
+ 		err = PTR_ERR(sess->rtrs);
+ 		goto wake_up_and_put;
+ 	}
+-	rtrs_clt_query(sess->rtrs, &attrs);
++
++	err = rtrs_clt_query(sess->rtrs, &attrs);
++	if (err)
++		goto close_rtrs;
++
+ 	sess->max_io_size = attrs.max_io_size;
+ 	sess->queue_depth = attrs.queue_depth;
+ 
+diff --git a/drivers/block/rnbd/rnbd-clt.h b/drivers/block/rnbd/rnbd-clt.h
+index b193d59040503..2941e3862b9c3 100644
+--- a/drivers/block/rnbd/rnbd-clt.h
++++ b/drivers/block/rnbd/rnbd-clt.h
+@@ -79,7 +79,7 @@ struct rnbd_clt_session {
+ 	DECLARE_BITMAP(cpu_queues_bm, NR_CPUS);
+ 	int	__percpu	*cpu_rr; /* per-cpu var for CPU round-robin */
+ 	atomic_t		busy;
+-	int			queue_depth;
++	size_t			queue_depth;
+ 	u32			max_io_size;
+ 	struct blk_mq_tag_set	tag_set;
+ 	struct mutex		lock; /* protects state and devs_list */
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 2953b96b3ceda..175cb1c0d5698 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -392,7 +392,9 @@ static const struct usb_device_id blacklist_table[] = {
+ 
+ 	/* MediaTek Bluetooth devices */
+ 	{ USB_VENDOR_AND_INTERFACE_INFO(0x0e8d, 0xe0, 0x01, 0x01),
+-	  .driver_info = BTUSB_MEDIATEK },
++	  .driver_info = BTUSB_MEDIATEK |
++			 BTUSB_WIDEBAND_SPEECH |
++			 BTUSB_VALID_LE_STATES },
+ 
+ 	/* Additional Realtek 8723AE Bluetooth devices */
+ 	{ USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
+diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
+index eff1f12d981ab..c84d239512197 100644
+--- a/drivers/char/tpm/tpm2-cmd.c
++++ b/drivers/char/tpm/tpm2-cmd.c
+@@ -656,6 +656,7 @@ int tpm2_get_cc_attrs_tbl(struct tpm_chip *chip)
+ 
+ 	if (nr_commands !=
+ 	    be32_to_cpup((__be32 *)&buf.data[TPM_HEADER_SIZE + 5])) {
++		rc = -EFAULT;
+ 		tpm_buf_destroy(&buf);
+ 		goto out;
+ 	}
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index a2e0395cbe618..55b9d3965ae1b 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -709,16 +709,14 @@ static int tpm_tis_gen_interrupt(struct tpm_chip *chip)
+ 	cap_t cap;
+ 	int ret;
+ 
+-	/* TPM 2.0 */
+-	if (chip->flags & TPM_CHIP_FLAG_TPM2)
+-		return tpm2_get_tpm_pt(chip, 0x100, &cap2, desc);
+-
+-	/* TPM 1.2 */
+ 	ret = request_locality(chip, 0);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0);
++	if (chip->flags & TPM_CHIP_FLAG_TPM2)
++		ret = tpm2_get_tpm_pt(chip, 0x100, &cap2, desc);
++	else
++		ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0);
+ 
+ 	release_locality(chip, 0);
+ 
+@@ -1127,12 +1125,20 @@ int tpm_tis_resume(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	/* TPM 1.2 requires self-test on resume. This function actually returns
++	/*
++	 * TPM 1.2 requires self-test on resume. This function actually returns
+ 	 * an error code but for unknown reason it isn't handled.
+ 	 */
+-	if (!(chip->flags & TPM_CHIP_FLAG_TPM2))
++	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
++		ret = request_locality(chip, 0);
++		if (ret < 0)
++			return ret;
++
+ 		tpm1_do_selftest(chip);
+ 
++		release_locality(chip, 0);
++	}
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(tpm_tis_resume);
+diff --git a/drivers/clk/samsung/clk-exynos7.c b/drivers/clk/samsung/clk-exynos7.c
+index 87ee1bad9a9a8..4a5d2a914bd66 100644
+--- a/drivers/clk/samsung/clk-exynos7.c
++++ b/drivers/clk/samsung/clk-exynos7.c
+@@ -537,8 +537,13 @@ static const struct samsung_gate_clock top1_gate_clks[] __initconst = {
+ 	GATE(CLK_ACLK_FSYS0_200, "aclk_fsys0_200", "dout_aclk_fsys0_200",
+ 		ENABLE_ACLK_TOP13, 28, CLK_SET_RATE_PARENT |
+ 		CLK_IS_CRITICAL, 0),
++	/*
++	 * This clock is required for the CMU_FSYS1 registers access, keep it
++	 * enabled permanently until proper runtime PM support is added.
++	 */
+ 	GATE(CLK_ACLK_FSYS1_200, "aclk_fsys1_200", "dout_aclk_fsys1_200",
+-		ENABLE_ACLK_TOP13, 24, CLK_SET_RATE_PARENT, 0),
++		ENABLE_ACLK_TOP13, 24, CLK_SET_RATE_PARENT |
++		CLK_IS_CRITICAL, 0),
+ 
+ 	GATE(CLK_SCLK_PHY_FSYS1_26M, "sclk_phy_fsys1_26m",
+ 		"dout_sclk_phy_fsys1_26m", ENABLE_SCLK_TOP1_FSYS11,
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index 3fae9ebb58b83..b6f97960d8ee0 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -2,6 +2,7 @@
+ #include <linux/clk.h>
+ #include <linux/clocksource.h>
+ #include <linux/clockchips.h>
++#include <linux/cpuhotplug.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
+ #include <linux/iopoll.h>
+@@ -530,17 +531,17 @@ static void omap_clockevent_unidle(struct clock_event_device *evt)
+ 	writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->wakeup);
+ }
+ 
+-static int __init dmtimer_clockevent_init(struct device_node *np)
++static int __init dmtimer_clkevt_init_common(struct dmtimer_clockevent *clkevt,
++					     struct device_node *np,
++					     unsigned int features,
++					     const struct cpumask *cpumask,
++					     const char *name,
++					     int rating)
+ {
+-	struct dmtimer_clockevent *clkevt;
+ 	struct clock_event_device *dev;
+ 	struct dmtimer_systimer *t;
+ 	int error;
+ 
+-	clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL);
+-	if (!clkevt)
+-		return -ENOMEM;
+-
+ 	t = &clkevt->t;
+ 	dev = &clkevt->dev;
+ 
+@@ -548,25 +549,23 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
+ 	 * We mostly use cpuidle_coupled with ARM local timers for runtime,
+ 	 * so there's probably no use for CLOCK_EVT_FEAT_DYNIRQ here.
+ 	 */
+-	dev->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT;
+-	dev->rating = 300;
++	dev->features = features;
++	dev->rating = rating;
+ 	dev->set_next_event = dmtimer_set_next_event;
+ 	dev->set_state_shutdown = dmtimer_clockevent_shutdown;
+ 	dev->set_state_periodic = dmtimer_set_periodic;
+ 	dev->set_state_oneshot = dmtimer_clockevent_shutdown;
+ 	dev->set_state_oneshot_stopped = dmtimer_clockevent_shutdown;
+ 	dev->tick_resume = dmtimer_clockevent_shutdown;
+-	dev->cpumask = cpu_possible_mask;
++	dev->cpumask = cpumask;
+ 
+ 	dev->irq = irq_of_parse_and_map(np, 0);
+-	if (!dev->irq) {
+-		error = -ENXIO;
+-		goto err_out_free;
+-	}
++	if (!dev->irq)
++		return -ENXIO;
+ 
+ 	error = dmtimer_systimer_setup(np, &clkevt->t);
+ 	if (error)
+-		goto err_out_free;
++		return error;
+ 
+ 	clkevt->period = 0xffffffff - DIV_ROUND_CLOSEST(t->rate, HZ);
+ 
+@@ -578,38 +577,132 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
+ 	writel_relaxed(OMAP_TIMER_CTRL_POSTED, t->base + t->ifctrl);
+ 
+ 	error = request_irq(dev->irq, dmtimer_clockevent_interrupt,
+-			    IRQF_TIMER, "clockevent", clkevt);
++			    IRQF_TIMER, name, clkevt);
+ 	if (error)
+ 		goto err_out_unmap;
+ 
+ 	writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->irq_ena);
+ 	writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->wakeup);
+ 
+-	pr_info("TI gptimer clockevent: %s%lu Hz at %pOF\n",
+-		of_find_property(np, "ti,timer-alwon", NULL) ?
++	pr_info("TI gptimer %s: %s%lu Hz at %pOF\n",
++		name, of_find_property(np, "ti,timer-alwon", NULL) ?
+ 		"always-on " : "", t->rate, np->parent);
+ 
+-	clockevents_config_and_register(dev, t->rate,
+-					3, /* Timer internal resynch latency */
++	return 0;
++
++err_out_unmap:
++	iounmap(t->base);
++
++	return error;
++}
++
++static int __init dmtimer_clockevent_init(struct device_node *np)
++{
++	struct dmtimer_clockevent *clkevt;
++	int error;
++
++	clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL);
++	if (!clkevt)
++		return -ENOMEM;
++
++	error = dmtimer_clkevt_init_common(clkevt, np,
++					   CLOCK_EVT_FEAT_PERIODIC |
++					   CLOCK_EVT_FEAT_ONESHOT,
++					   cpu_possible_mask, "clockevent",
++					   300);
++	if (error)
++		goto err_out_free;
++
++	clockevents_config_and_register(&clkevt->dev, clkevt->t.rate,
++					3, /* Timer internal resync latency */
+ 					0xffffffff);
+ 
+ 	if (of_machine_is_compatible("ti,am33xx") ||
+ 	    of_machine_is_compatible("ti,am43")) {
+-		dev->suspend = omap_clockevent_idle;
+-		dev->resume = omap_clockevent_unidle;
++		clkevt->dev.suspend = omap_clockevent_idle;
++		clkevt->dev.resume = omap_clockevent_unidle;
+ 	}
+ 
+ 	return 0;
+ 
+-err_out_unmap:
+-	iounmap(t->base);
+-
+ err_out_free:
+ 	kfree(clkevt);
+ 
+ 	return error;
+ }
+ 
++/* Dmtimer as percpu timer. See dra7 ARM architected timer wrap erratum i940 */
++static DEFINE_PER_CPU(struct dmtimer_clockevent, dmtimer_percpu_timer);
++
++static int __init dmtimer_percpu_timer_init(struct device_node *np, int cpu)
++{
++	struct dmtimer_clockevent *clkevt;
++	int error;
++
++	if (!cpu_possible(cpu))
++		return -EINVAL;
++
++	if (!of_property_read_bool(np->parent, "ti,no-reset-on-init") ||
++	    !of_property_read_bool(np->parent, "ti,no-idle"))
++		pr_warn("Incomplete dtb for percpu dmtimer %pOF\n", np->parent);
++
++	clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu);
++
++	error = dmtimer_clkevt_init_common(clkevt, np, CLOCK_EVT_FEAT_ONESHOT,
++					   cpumask_of(cpu), "percpu-dmtimer",
++					   500);
++	if (error)
++		return error;
++
++	return 0;
++}
++
++/* See TRM for timer internal resynch latency */
++static int omap_dmtimer_starting_cpu(unsigned int cpu)
++{
++	struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu);
++	struct clock_event_device *dev = &clkevt->dev;
++	struct dmtimer_systimer *t = &clkevt->t;
++
++	clockevents_config_and_register(dev, t->rate, 3, ULONG_MAX);
++	irq_force_affinity(dev->irq, cpumask_of(cpu));
++
++	return 0;
++}
++
++static int __init dmtimer_percpu_timer_startup(void)
++{
++	struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, 0);
++	struct dmtimer_systimer *t = &clkevt->t;
++
++	if (t->sysc) {
++		cpuhp_setup_state(CPUHP_AP_TI_GP_TIMER_STARTING,
++				  "clockevents/omap/gptimer:starting",
++				  omap_dmtimer_starting_cpu, NULL);
++	}
++
++	return 0;
++}
++subsys_initcall(dmtimer_percpu_timer_startup);
++
++static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa)
++{
++	struct device_node *arm_timer;
++
++	arm_timer = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
++	if (of_device_is_available(arm_timer)) {
++		pr_warn_once("ARM architected timer wrap issue i940 detected\n");
++		return 0;
++	}
++
++	if (pa == 0x48034000)		/* dra7 dmtimer3 */
++		return dmtimer_percpu_timer_init(np, 0);
++	else if (pa == 0x48036000)	/* dra7 dmtimer4 */
++		return dmtimer_percpu_timer_init(np, 1);
++
++	return 0;
++}
++
+ /* Clocksource */
+ static struct dmtimer_clocksource *
+ to_dmtimer_clocksource(struct clocksource *cs)
+@@ -743,6 +836,9 @@ static int __init dmtimer_systimer_init(struct device_node *np)
+ 	if (clockevent == pa)
+ 		return dmtimer_clockevent_init(np);
+ 
++	if (of_machine_is_compatible("ti,dra7"))
++		return dmtimer_percpu_quirk_init(np, pa);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index c8ae8554f4c91..44a5d15a75728 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -3019,6 +3019,14 @@ static const struct x86_cpu_id hwp_support_ids[] __initconst = {
+ 	{}
+ };
+ 
++static bool intel_pstate_hwp_is_enabled(void)
++{
++	u64 value;
++
++	rdmsrl(MSR_PM_ENABLE, value);
++	return !!(value & 0x1);
++}
++
+ static int __init intel_pstate_init(void)
+ {
+ 	const struct x86_cpu_id *id;
+@@ -3037,8 +3045,12 @@ static int __init intel_pstate_init(void)
+ 		 * Avoid enabling HWP for processors without EPP support,
+ 		 * because that means incomplete HWP implementation which is a
+ 		 * corner case and supporting it is generally problematic.
++		 *
++		 * If HWP is enabled already, though, there is no choice but to
++		 * deal with it.
+ 		 */
+-		if (!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) {
++		if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) ||
++		    intel_pstate_hwp_is_enabled()) {
+ 			hwp_active++;
+ 			hwp_mode_bdw = id->driver_data;
+ 			intel_pstate.attr = hwp_cpufreq_attrs;
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 5b82ba7acc7cb..21caed429cc52 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -989,7 +989,7 @@ int sev_dev_init(struct psp_device *psp)
+ 	if (!sev->vdata) {
+ 		ret = -ENODEV;
+ 		dev_err(dev, "sev: missing driver data\n");
+-		goto e_err;
++		goto e_sev;
+ 	}
+ 
+ 	psp_set_sev_irq_handler(psp, sev_irq_handler, sev);
+@@ -1004,6 +1004,8 @@ int sev_dev_init(struct psp_device *psp)
+ 
+ e_irq:
+ 	psp_clear_sev_irq_handler(psp);
++e_sev:
++	devm_kfree(dev, sev);
+ e_err:
+ 	psp->sev_data = NULL;
+ 
+diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
+index c3976156db2fc..4da88578ed646 100644
+--- a/drivers/dma/idxd/cdev.c
++++ b/drivers/dma/idxd/cdev.c
+@@ -35,15 +35,15 @@ struct idxd_user_context {
+ 	unsigned int flags;
+ };
+ 
+-enum idxd_cdev_cleanup {
+-	CDEV_NORMAL = 0,
+-	CDEV_FAILED,
+-};
+-
+ static void idxd_cdev_dev_release(struct device *dev)
+ {
+-	dev_dbg(dev, "releasing cdev device\n");
+-	kfree(dev);
++	struct idxd_cdev *idxd_cdev = container_of(dev, struct idxd_cdev, dev);
++	struct idxd_cdev_context *cdev_ctx;
++	struct idxd_wq *wq = idxd_cdev->wq;
++
++	cdev_ctx = &ictx[wq->idxd->type];
++	ida_simple_remove(&cdev_ctx->minor_ida, idxd_cdev->minor);
++	kfree(idxd_cdev);
+ }
+ 
+ static struct device_type idxd_cdev_device_type = {
+@@ -58,14 +58,11 @@ static inline struct idxd_cdev *inode_idxd_cdev(struct inode *inode)
+ 	return container_of(cdev, struct idxd_cdev, cdev);
+ }
+ 
+-static inline struct idxd_wq *idxd_cdev_wq(struct idxd_cdev *idxd_cdev)
+-{
+-	return container_of(idxd_cdev, struct idxd_wq, idxd_cdev);
+-}
+-
+ static inline struct idxd_wq *inode_wq(struct inode *inode)
+ {
+-	return idxd_cdev_wq(inode_idxd_cdev(inode));
++	struct idxd_cdev *idxd_cdev = inode_idxd_cdev(inode);
++
++	return idxd_cdev->wq;
+ }
+ 
+ static int idxd_cdev_open(struct inode *inode, struct file *filp)
+@@ -172,11 +169,10 @@ static __poll_t idxd_cdev_poll(struct file *filp,
+ 	struct idxd_user_context *ctx = filp->private_data;
+ 	struct idxd_wq *wq = ctx->wq;
+ 	struct idxd_device *idxd = wq->idxd;
+-	struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
+ 	unsigned long flags;
+ 	__poll_t out = 0;
+ 
+-	poll_wait(filp, &idxd_cdev->err_queue, wait);
++	poll_wait(filp, &wq->err_queue, wait);
+ 	spin_lock_irqsave(&idxd->dev_lock, flags);
+ 	if (idxd->sw_err.valid)
+ 		out = EPOLLIN | EPOLLRDNORM;
+@@ -198,98 +194,67 @@ int idxd_cdev_get_major(struct idxd_device *idxd)
+ 	return MAJOR(ictx[idxd->type].devt);
+ }
+ 
+-static int idxd_wq_cdev_dev_setup(struct idxd_wq *wq)
++int idxd_wq_add_cdev(struct idxd_wq *wq)
+ {
+ 	struct idxd_device *idxd = wq->idxd;
+-	struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
+-	struct idxd_cdev_context *cdev_ctx;
++	struct idxd_cdev *idxd_cdev;
++	struct cdev *cdev;
+ 	struct device *dev;
+-	int minor, rc;
++	struct idxd_cdev_context *cdev_ctx;
++	int rc, minor;
+ 
+-	idxd_cdev->dev = kzalloc(sizeof(*idxd_cdev->dev), GFP_KERNEL);
+-	if (!idxd_cdev->dev)
++	idxd_cdev = kzalloc(sizeof(*idxd_cdev), GFP_KERNEL);
++	if (!idxd_cdev)
+ 		return -ENOMEM;
+ 
+-	dev = idxd_cdev->dev;
+-	dev->parent = &idxd->pdev->dev;
+-	dev_set_name(dev, "%s/wq%u.%u", idxd_get_dev_name(idxd),
+-		     idxd->id, wq->id);
+-	dev->bus = idxd_get_bus_type(idxd);
+-
++	idxd_cdev->wq = wq;
++	cdev = &idxd_cdev->cdev;
++	dev = &idxd_cdev->dev;
+ 	cdev_ctx = &ictx[wq->idxd->type];
+ 	minor = ida_simple_get(&cdev_ctx->minor_ida, 0, MINORMASK, GFP_KERNEL);
+ 	if (minor < 0) {
+-		rc = minor;
+-		kfree(dev);
+-		goto ida_err;
+-	}
+-
+-	dev->devt = MKDEV(MAJOR(cdev_ctx->devt), minor);
+-	dev->type = &idxd_cdev_device_type;
+-	rc = device_register(dev);
+-	if (rc < 0) {
+-		dev_err(&idxd->pdev->dev, "device register failed\n");
+-		goto dev_reg_err;
++		kfree(idxd_cdev);
++		return minor;
+ 	}
+ 	idxd_cdev->minor = minor;
+ 
+-	return 0;
+-
+- dev_reg_err:
+-	ida_simple_remove(&cdev_ctx->minor_ida, MINOR(dev->devt));
+-	put_device(dev);
+- ida_err:
+-	idxd_cdev->dev = NULL;
+-	return rc;
+-}
+-
+-static void idxd_wq_cdev_cleanup(struct idxd_wq *wq,
+-				 enum idxd_cdev_cleanup cdev_state)
+-{
+-	struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
+-	struct idxd_cdev_context *cdev_ctx;
+-
+-	cdev_ctx = &ictx[wq->idxd->type];
+-	if (cdev_state == CDEV_NORMAL)
+-		cdev_del(&idxd_cdev->cdev);
+-	device_unregister(idxd_cdev->dev);
+-	/*
+-	 * The device_type->release() will be called on the device and free
+-	 * the allocated struct device. We can just forget it.
+-	 */
+-	ida_simple_remove(&cdev_ctx->minor_ida, idxd_cdev->minor);
+-	idxd_cdev->dev = NULL;
+-	idxd_cdev->minor = -1;
+-}
+-
+-int idxd_wq_add_cdev(struct idxd_wq *wq)
+-{
+-	struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
+-	struct cdev *cdev = &idxd_cdev->cdev;
+-	struct device *dev;
+-	int rc;
++	device_initialize(dev);
++	dev->parent = &wq->conf_dev;
++	dev->bus = idxd_get_bus_type(idxd);
++	dev->type = &idxd_cdev_device_type;
++	dev->devt = MKDEV(MAJOR(cdev_ctx->devt), minor);
+ 
+-	rc = idxd_wq_cdev_dev_setup(wq);
++	rc = dev_set_name(dev, "%s/wq%u.%u", idxd_get_dev_name(idxd),
++			  idxd->id, wq->id);
+ 	if (rc < 0)
+-		return rc;
++		goto err;
+ 
+-	dev = idxd_cdev->dev;
++	wq->idxd_cdev = idxd_cdev;
+ 	cdev_init(cdev, &idxd_cdev_fops);
+-	cdev_set_parent(cdev, &dev->kobj);
+-	rc = cdev_add(cdev, dev->devt, 1);
++	rc = cdev_device_add(cdev, dev);
+ 	if (rc) {
+ 		dev_dbg(&wq->idxd->pdev->dev, "cdev_add failed: %d\n", rc);
+-		idxd_wq_cdev_cleanup(wq, CDEV_FAILED);
+-		return rc;
++		goto err;
+ 	}
+ 
+-	init_waitqueue_head(&idxd_cdev->err_queue);
+ 	return 0;
++
++ err:
++	put_device(dev);
++	wq->idxd_cdev = NULL;
++	return rc;
+ }
+ 
+ void idxd_wq_del_cdev(struct idxd_wq *wq)
+ {
+-	idxd_wq_cdev_cleanup(wq, CDEV_NORMAL);
++	struct idxd_cdev *idxd_cdev;
++	struct idxd_cdev_context *cdev_ctx;
++
++	cdev_ctx = &ictx[wq->idxd->type];
++	idxd_cdev = wq->idxd_cdev;
++	wq->idxd_cdev = NULL;
++	cdev_device_del(&idxd_cdev->cdev, &idxd_cdev->dev);
++	put_device(&idxd_cdev->dev);
+ }
+ 
+ int idxd_cdev_register(void)
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index 459e9fbc2253a..47aae5fe8273c 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -169,8 +169,6 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq)
+ 		desc->id = i;
+ 		desc->wq = wq;
+ 		desc->cpu = -1;
+-		dma_async_tx_descriptor_init(&desc->txd, &wq->dma_chan);
+-		desc->txd.tx_submit = idxd_dma_tx_submit;
+ 	}
+ 
+ 	return 0;
+@@ -378,7 +376,8 @@ static void idxd_cmd_exec(struct idxd_device *idxd, int cmd_code, u32 operand,
+ 
+ 	if (idxd_device_is_halted(idxd)) {
+ 		dev_warn(&idxd->pdev->dev, "Device is HALTED!\n");
+-		*status = IDXD_CMDSTS_HW_ERR;
++		if (status)
++			*status = IDXD_CMDSTS_HW_ERR;
+ 		return;
+ 	}
+ 
+diff --git a/drivers/dma/idxd/dma.c b/drivers/dma/idxd/dma.c
+index ec177a535d6dd..aa7435555de95 100644
+--- a/drivers/dma/idxd/dma.c
++++ b/drivers/dma/idxd/dma.c
+@@ -14,7 +14,10 @@
+ 
+ static inline struct idxd_wq *to_idxd_wq(struct dma_chan *c)
+ {
+-	return container_of(c, struct idxd_wq, dma_chan);
++	struct idxd_dma_chan *idxd_chan;
++
++	idxd_chan = container_of(c, struct idxd_dma_chan, chan);
++	return idxd_chan->wq;
+ }
+ 
+ void idxd_dma_complete_txd(struct idxd_desc *desc,
+@@ -144,7 +147,7 @@ static void idxd_dma_issue_pending(struct dma_chan *dma_chan)
+ {
+ }
+ 
+-dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx)
++static dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+ {
+ 	struct dma_chan *c = tx->chan;
+ 	struct idxd_wq *wq = to_idxd_wq(c);
+@@ -165,14 +168,25 @@ dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+ 
+ static void idxd_dma_release(struct dma_device *device)
+ {
++	struct idxd_dma_dev *idxd_dma = container_of(device, struct idxd_dma_dev, dma);
++
++	kfree(idxd_dma);
+ }
+ 
+ int idxd_register_dma_device(struct idxd_device *idxd)
+ {
+-	struct dma_device *dma = &idxd->dma_dev;
++	struct idxd_dma_dev *idxd_dma;
++	struct dma_device *dma;
++	struct device *dev = &idxd->pdev->dev;
++	int rc;
+ 
++	idxd_dma = kzalloc_node(sizeof(*idxd_dma), GFP_KERNEL, dev_to_node(dev));
++	if (!idxd_dma)
++		return -ENOMEM;
++
++	dma = &idxd_dma->dma;
+ 	INIT_LIST_HEAD(&dma->channels);
+-	dma->dev = &idxd->pdev->dev;
++	dma->dev = dev;
+ 
+ 	dma_cap_set(DMA_PRIVATE, dma->cap_mask);
+ 	dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask);
+@@ -188,35 +202,72 @@ int idxd_register_dma_device(struct idxd_device *idxd)
+ 	dma->device_alloc_chan_resources = idxd_dma_alloc_chan_resources;
+ 	dma->device_free_chan_resources = idxd_dma_free_chan_resources;
+ 
+-	return dma_async_device_register(&idxd->dma_dev);
++	rc = dma_async_device_register(dma);
++	if (rc < 0) {
++		kfree(idxd_dma);
++		return rc;
++	}
++
++	idxd_dma->idxd = idxd;
++	/*
++	 * This pointer is protected by the refs taken by the dma_chan. It will remain valid
++	 * as long as there are outstanding channels.
++	 */
++	idxd->idxd_dma = idxd_dma;
++	return 0;
+ }
+ 
+ void idxd_unregister_dma_device(struct idxd_device *idxd)
+ {
+-	dma_async_device_unregister(&idxd->dma_dev);
++	dma_async_device_unregister(&idxd->idxd_dma->dma);
+ }
+ 
+ int idxd_register_dma_channel(struct idxd_wq *wq)
+ {
+ 	struct idxd_device *idxd = wq->idxd;
+-	struct dma_device *dma = &idxd->dma_dev;
+-	struct dma_chan *chan = &wq->dma_chan;
+-	int rc;
++	struct dma_device *dma = &idxd->idxd_dma->dma;
++	struct device *dev = &idxd->pdev->dev;
++	struct idxd_dma_chan *idxd_chan;
++	struct dma_chan *chan;
++	int rc, i;
++
++	idxd_chan = kzalloc_node(sizeof(*idxd_chan), GFP_KERNEL, dev_to_node(dev));
++	if (!idxd_chan)
++		return -ENOMEM;
+ 
+-	memset(&wq->dma_chan, 0, sizeof(struct dma_chan));
++	chan = &idxd_chan->chan;
+ 	chan->device = dma;
+ 	list_add_tail(&chan->device_node, &dma->channels);
++
++	for (i = 0; i < wq->num_descs; i++) {
++		struct idxd_desc *desc = wq->descs[i];
++
++		dma_async_tx_descriptor_init(&desc->txd, chan);
++		desc->txd.tx_submit = idxd_dma_tx_submit;
++	}
++
+ 	rc = dma_async_device_channel_register(dma, chan);
+-	if (rc < 0)
++	if (rc < 0) {
++		kfree(idxd_chan);
+ 		return rc;
++	}
++
++	wq->idxd_chan = idxd_chan;
++	idxd_chan->wq = wq;
++	get_device(&wq->conf_dev);
+ 
+ 	return 0;
+ }
+ 
+ void idxd_unregister_dma_channel(struct idxd_wq *wq)
+ {
+-	struct dma_chan *chan = &wq->dma_chan;
++	struct idxd_dma_chan *idxd_chan = wq->idxd_chan;
++	struct dma_chan *chan = &idxd_chan->chan;
++	struct idxd_dma_dev *idxd_dma = wq->idxd->idxd_dma;
+ 
+-	dma_async_device_channel_unregister(&wq->idxd->dma_dev, chan);
++	dma_async_device_channel_unregister(&idxd_dma->dma, chan);
+ 	list_del(&chan->device_node);
++	kfree(wq->idxd_chan);
++	wq->idxd_chan = NULL;
++	put_device(&wq->conf_dev);
+ }
+diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
+index 1d7849cb91004..eef6996ecc598 100644
+--- a/drivers/dma/idxd/idxd.h
++++ b/drivers/dma/idxd/idxd.h
+@@ -14,6 +14,9 @@
+ 
+ extern struct kmem_cache *idxd_desc_pool;
+ 
++struct idxd_device;
++struct idxd_wq;
++
+ #define IDXD_REG_TIMEOUT	50
+ #define IDXD_DRAIN_TIMEOUT	5000
+ 
+@@ -68,10 +71,10 @@ enum idxd_wq_type {
+ };
+ 
+ struct idxd_cdev {
++	struct idxd_wq *wq;
+ 	struct cdev cdev;
+-	struct device *dev;
++	struct device dev;
+ 	int minor;
+-	struct wait_queue_head err_queue;
+ };
+ 
+ #define IDXD_ALLOCATED_BATCH_SIZE	128U
+@@ -88,10 +91,16 @@ enum idxd_complete_type {
+ 	IDXD_COMPLETE_ABORT,
+ };
+ 
++struct idxd_dma_chan {
++	struct dma_chan chan;
++	struct idxd_wq *wq;
++};
++
+ struct idxd_wq {
+ 	void __iomem *dportal;
+ 	struct device conf_dev;
+-	struct idxd_cdev idxd_cdev;
++	struct idxd_cdev *idxd_cdev;
++	struct wait_queue_head err_queue;
+ 	struct idxd_device *idxd;
+ 	int id;
+ 	enum idxd_wq_type type;
+@@ -112,7 +121,7 @@ struct idxd_wq {
+ 	int compls_size;
+ 	struct idxd_desc **descs;
+ 	struct sbitmap_queue sbq;
+-	struct dma_chan dma_chan;
++	struct idxd_dma_chan *idxd_chan;
+ 	char name[WQ_NAME_SIZE + 1];
+ 	u64 max_xfer_bytes;
+ 	u32 max_batch_size;
+@@ -147,6 +156,11 @@ enum idxd_device_flag {
+ 	IDXD_FLAG_CMD_RUNNING,
+ };
+ 
++struct idxd_dma_dev {
++	struct idxd_device *idxd;
++	struct dma_device dma;
++};
++
+ struct idxd_device {
+ 	enum idxd_type type;
+ 	struct device conf_dev;
+@@ -191,7 +205,7 @@ struct idxd_device {
+ 	int num_wq_irqs;
+ 	struct idxd_irq_entry *irq_entries;
+ 
+-	struct dma_device dma_dev;
++	struct idxd_dma_dev *idxd_dma;
+ 	struct workqueue_struct *wq;
+ 	struct work_struct work;
+ };
+@@ -313,7 +327,6 @@ void idxd_unregister_dma_channel(struct idxd_wq *wq);
+ void idxd_parse_completion_status(u8 status, enum dmaengine_tx_result *res);
+ void idxd_dma_complete_txd(struct idxd_desc *desc,
+ 			   enum idxd_complete_type comp_type);
+-dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx);
+ 
+ /* cdev */
+ int idxd_cdev_register(void);
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index fa8c4228f358a..f4c7ce8cb399c 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -175,7 +175,7 @@ static int idxd_setup_internals(struct idxd_device *idxd)
+ 		wq->id = i;
+ 		wq->idxd = idxd;
+ 		mutex_init(&wq->wq_lock);
+-		wq->idxd_cdev.minor = -1;
++		init_waitqueue_head(&wq->err_queue);
+ 		wq->max_xfer_bytes = idxd->max_xfer_bytes;
+ 		wq->max_batch_size = idxd->max_batch_size;
+ 		wq->wqcfg = devm_kzalloc(dev, idxd->wqcfg_size, GFP_KERNEL);
+diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
+index 6bb1c1773aae6..fc95791807051 100644
+--- a/drivers/dma/idxd/irq.c
++++ b/drivers/dma/idxd/irq.c
+@@ -75,7 +75,7 @@ static int process_misc_interrupts(struct idxd_device *idxd, u32 cause)
+ 			struct idxd_wq *wq = &idxd->wqs[id];
+ 
+ 			if (wq->type == IDXD_WQT_USER)
+-				wake_up_interruptible(&wq->idxd_cdev.err_queue);
++				wake_up_interruptible(&wq->err_queue);
+ 		} else {
+ 			int i;
+ 
+@@ -83,7 +83,7 @@ static int process_misc_interrupts(struct idxd_device *idxd, u32 cause)
+ 				struct idxd_wq *wq = &idxd->wqs[i];
+ 
+ 				if (wq->type == IDXD_WQT_USER)
+-					wake_up_interruptible(&wq->idxd_cdev.err_queue);
++					wake_up_interruptible(&wq->err_queue);
+ 			}
+ 		}
+ 
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index 7566b573d546e..7b41cdff1a2ce 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -1052,8 +1052,16 @@ static ssize_t wq_cdev_minor_show(struct device *dev,
+ 				  struct device_attribute *attr, char *buf)
+ {
+ 	struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev);
++	int minor = -1;
+ 
+-	return sprintf(buf, "%d\n", wq->idxd_cdev.minor);
++	mutex_lock(&wq->wq_lock);
++	if (wq->idxd_cdev)
++		minor = wq->idxd_cdev->minor;
++	mutex_unlock(&wq->wq_lock);
++
++	if (minor == -1)
++		return -ENXIO;
++	return sysfs_emit(buf, "%d\n", minor);
+ }
+ 
+ static struct device_attribute dev_attr_wq_cdev_minor =
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index 2f53fa0ae9a62..28f20f0b722f5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -75,6 +75,8 @@ int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 		}
+ 
+ 		ib->ptr = amdgpu_sa_bo_cpu_addr(ib->sa_bo);
++		/* flush the cache before commit the IB */
++		ib->flags = AMDGPU_IB_FLAG_EMIT_MEM_SYNC;
+ 
+ 		if (!vm)
+ 			ib->gpu_addr = amdgpu_sa_bo_gpu_addr(ib->sa_bo);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+index 79de68ac03f20..0c3b15992b814 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+@@ -643,6 +643,7 @@ struct hdcp_workqueue *hdcp_create_workqueue(struct amdgpu_device *adev, struct
+ 
+ 	/* File created at /sys/class/drm/card0/device/hdcp_srm*/
+ 	hdcp_work[0].attr = data_attr;
++	sysfs_bin_attr_init(&hdcp_work[0].attr);
+ 
+ 	if (sysfs_create_bin_file(&adev->dev->kobj, &hdcp_work[0].attr))
+ 		DRM_WARN("Failed to create device file hdcp_srm");
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 921c4ca6e902a..284ed1c8a35ac 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2504,6 +2504,10 @@ static void commit_planes_for_stream(struct dc *dc,
+ 						plane_state->triplebuffer_flips = true;
+ 				}
+ 			}
++			if (update_type == UPDATE_TYPE_FULL) {
++				/* force vsync flip when reconfiguring pipes to prevent underflow */
++				plane_state->flip_immediate = false;
++			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
+index 368818d2dfc63..cd9bd71da4b79 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright 2012-17 Advanced Micro Devices, Inc.
++ * Copyright 2012-2021 Advanced Micro Devices, Inc.
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the "Software"),
+@@ -181,11 +181,14 @@ void hubp2_vready_at_or_After_vsync(struct hubp *hubp,
+ 	else
+ 		Set HUBP_VREADY_AT_OR_AFTER_VSYNC = 0
+ 	*/
+-	if ((pipe_dest->vstartup_start - (pipe_dest->vready_offset+pipe_dest->vupdate_width
+-		+ pipe_dest->vupdate_offset) / pipe_dest->htotal) <= pipe_dest->vblank_end) {
+-		value = 1;
+-	} else
+-		value = 0;
++	if (pipe_dest->htotal != 0) {
++		if ((pipe_dest->vstartup_start - (pipe_dest->vready_offset+pipe_dest->vupdate_width
++			+ pipe_dest->vupdate_offset) / pipe_dest->htotal) <= pipe_dest->vblank_end) {
++			value = 1;
++		} else
++			value = 0;
++	}
++
+ 	REG_UPDATE(DCHUBP_CNTL, HUBP_VREADY_AT_OR_AFTER_VSYNC, value);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+index 3a367a5968ae1..972f2600f967f 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+@@ -789,6 +789,8 @@ enum mod_hdcp_status mod_hdcp_hdcp2_validate_rx_id_list(struct mod_hdcp *hdcp)
+ 			   TA_HDCP2_MSG_AUTHENTICATION_STATUS__RECEIVERID_REVOKED) {
+ 			hdcp->connection.is_hdcp2_revoked = 1;
+ 			status = MOD_HDCP_STATUS_HDCP2_RX_ID_LIST_REVOKED;
++		} else {
++			status = MOD_HDCP_STATUS_HDCP2_VALIDATE_RX_ID_LIST_FAILURE;
+ 		}
+ 	}
+ 	mutex_unlock(&psp->hdcp_context.mutex);
+diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
+index b73d51e766ce8..0e60aec0bb191 100644
+--- a/drivers/gpu/drm/i915/display/intel_overlay.c
++++ b/drivers/gpu/drm/i915/display/intel_overlay.c
+@@ -382,7 +382,7 @@ static void intel_overlay_off_tail(struct intel_overlay *overlay)
+ 		i830_overlay_clock_gating(dev_priv, true);
+ }
+ 
+-static void
++__i915_active_call static void
+ intel_overlay_last_flip_retire(struct i915_active *active)
+ {
+ 	struct intel_overlay *overlay =
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+index 3d69e51f3e4df..5754bccff4d15 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+@@ -189,7 +189,7 @@ compute_partial_view(const struct drm_i915_gem_object *obj,
+ 	struct i915_ggtt_view view;
+ 
+ 	if (i915_gem_object_is_tiled(obj))
+-		chunk = roundup(chunk, tile_row_pages(obj));
++		chunk = roundup(chunk, tile_row_pages(obj) ?: 1);
+ 
+ 	view.type = I915_GGTT_VIEW_PARTIAL;
+ 	view.partial.offset = rounddown(page_offset, chunk);
+diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+index 38c7069b77494..f08e25e95746e 100644
+--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
++++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+@@ -628,7 +628,6 @@ static int gen8_preallocate_top_level_pdp(struct i915_ppgtt *ppgtt)
+ 
+ 		err = pin_pt_dma(vm, pde->pt.base);
+ 		if (err) {
+-			i915_gem_object_put(pde->pt.base);
+ 			free_pd(vm, pde);
+ 			return err;
+ 		}
+diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+index 6614f67364862..b5937b39145a4 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
++++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+@@ -652,8 +652,8 @@ static void detect_bit_6_swizzle(struct i915_ggtt *ggtt)
+ 		 * banks of memory are paired and unswizzled on the
+ 		 * uneven portion, so leave that as unknown.
+ 		 */
+-		if (intel_uncore_read(uncore, C0DRB3) ==
+-		    intel_uncore_read(uncore, C1DRB3)) {
++		if (intel_uncore_read16(uncore, C0DRB3) ==
++		    intel_uncore_read16(uncore, C1DRB3)) {
+ 			swizzle_x = I915_BIT_6_SWIZZLE_9_10;
+ 			swizzle_y = I915_BIT_6_SWIZZLE_9;
+ 		}
+diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
+index 9ed19b8bca600..c4c2d24dc5094 100644
+--- a/drivers/gpu/drm/i915/i915_active.c
++++ b/drivers/gpu/drm/i915/i915_active.c
+@@ -1159,7 +1159,8 @@ static int auto_active(struct i915_active *ref)
+ 	return 0;
+ }
+ 
+-static void auto_retire(struct i915_active *ref)
++__i915_active_call static void
++auto_retire(struct i915_active *ref)
+ {
+ 	i915_active_put(ref);
+ }
+diff --git a/drivers/gpu/drm/msm/dp/dp_audio.c b/drivers/gpu/drm/msm/dp/dp_audio.c
+index 82a8673ab8daf..d7e4a39a904e2 100644
+--- a/drivers/gpu/drm/msm/dp/dp_audio.c
++++ b/drivers/gpu/drm/msm/dp/dp_audio.c
+@@ -527,6 +527,7 @@ int dp_audio_hw_params(struct device *dev,
+ 	dp_audio_setup_acr(audio);
+ 	dp_audio_safe_to_exit_level(audio);
+ 	dp_audio_enable(audio, true);
++	dp_display_signal_audio_start(dp_display);
+ 	dp_display->audio_enabled = true;
+ 
+ end:
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index a2db14f852f11..66f2ea3d42fc2 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -176,6 +176,15 @@ static int dp_del_event(struct dp_display_private *dp_priv, u32 event)
+ 	return 0;
+ }
+ 
++void dp_display_signal_audio_start(struct msm_dp *dp_display)
++{
++	struct dp_display_private *dp;
++
++	dp = container_of(dp_display, struct dp_display_private, dp_display);
++
++	reinit_completion(&dp->audio_comp);
++}
++
+ void dp_display_signal_audio_complete(struct msm_dp *dp_display)
+ {
+ 	struct dp_display_private *dp;
+@@ -620,7 +629,6 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ 	dp_add_event(dp, EV_DISCONNECT_PENDING_TIMEOUT, 0, DP_TIMEOUT_5_SECOND);
+ 
+ 	/* signal the disconnect event early to ensure proper teardown */
+-	reinit_completion(&dp->audio_comp);
+ 	dp_display_handle_plugged_change(g_dp_display, false);
+ 
+ 	dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK |
+@@ -841,7 +849,6 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data)
+ 	/* wait only if audio was enabled */
+ 	if (dp_display->audio_enabled) {
+ 		/* signal the disconnect event */
+-		reinit_completion(&dp->audio_comp);
+ 		dp_display_handle_plugged_change(dp_display, false);
+ 		if (!wait_for_completion_timeout(&dp->audio_comp,
+ 				HZ * 5))
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.h b/drivers/gpu/drm/msm/dp/dp_display.h
+index 6092ba1ed85ed..5173c89eedf7e 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.h
++++ b/drivers/gpu/drm/msm/dp/dp_display.h
+@@ -34,6 +34,7 @@ int dp_display_get_modes(struct msm_dp *dp_display,
+ int dp_display_request_irq(struct msm_dp *dp_display);
+ bool dp_display_check_video_test(struct msm_dp *dp_display);
+ int dp_display_get_test_bpp(struct msm_dp *dp_display);
++void dp_display_signal_audio_start(struct msm_dp *dp_display);
+ void dp_display_signal_audio_complete(struct msm_dp *dp_display);
+ 
+ #endif /* _DP_DISPLAY_H_ */
+diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
+index a6d8de01194ae..a813c00f109b5 100644
+--- a/drivers/gpu/drm/radeon/radeon.h
++++ b/drivers/gpu/drm/radeon/radeon.h
+@@ -1559,6 +1559,7 @@ struct radeon_dpm {
+ 	void                    *priv;
+ 	u32			new_active_crtcs;
+ 	int			new_active_crtc_count;
++	int			high_pixelclock_count;
+ 	u32			current_active_crtcs;
+ 	int			current_active_crtc_count;
+ 	bool single_display;
+diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c
+index 5d25917251892..aca6e5cfae53d 100644
+--- a/drivers/gpu/drm/radeon/radeon_atombios.c
++++ b/drivers/gpu/drm/radeon/radeon_atombios.c
+@@ -2126,11 +2126,14 @@ static int radeon_atombios_parse_power_table_1_3(struct radeon_device *rdev)
+ 		return state_index;
+ 	/* last mode is usually default, array is low to high */
+ 	for (i = 0; i < num_modes; i++) {
+-		rdev->pm.power_state[state_index].clock_info =
+-			kcalloc(1, sizeof(struct radeon_pm_clock_info),
+-				GFP_KERNEL);
++		/* avoid memory leaks from invalid modes or unknown frev. */
++		if (!rdev->pm.power_state[state_index].clock_info) {
++			rdev->pm.power_state[state_index].clock_info =
++				kzalloc(sizeof(struct radeon_pm_clock_info),
++					GFP_KERNEL);
++		}
+ 		if (!rdev->pm.power_state[state_index].clock_info)
+-			return state_index;
++			goto out;
+ 		rdev->pm.power_state[state_index].num_clock_modes = 1;
+ 		rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
+ 		switch (frev) {
+@@ -2249,17 +2252,24 @@ static int radeon_atombios_parse_power_table_1_3(struct radeon_device *rdev)
+ 			break;
+ 		}
+ 	}
++out:
++	/* free any unused clock_info allocation. */
++	if (state_index && state_index < num_modes) {
++		kfree(rdev->pm.power_state[state_index].clock_info);
++		rdev->pm.power_state[state_index].clock_info = NULL;
++	}
++
+ 	/* last mode is usually default */
+-	if (rdev->pm.default_power_state_index == -1) {
++	if (state_index && rdev->pm.default_power_state_index == -1) {
+ 		rdev->pm.power_state[state_index - 1].type =
+ 			POWER_STATE_TYPE_DEFAULT;
+ 		rdev->pm.default_power_state_index = state_index - 1;
+ 		rdev->pm.power_state[state_index - 1].default_clock_mode =
+ 			&rdev->pm.power_state[state_index - 1].clock_info[0];
+-		rdev->pm.power_state[state_index].flags &=
++		rdev->pm.power_state[state_index - 1].flags &=
+ 			~RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
+-		rdev->pm.power_state[state_index].misc = 0;
+-		rdev->pm.power_state[state_index].misc2 = 0;
++		rdev->pm.power_state[state_index - 1].misc = 0;
++		rdev->pm.power_state[state_index - 1].misc2 = 0;
+ 	}
+ 	return state_index;
+ }
+diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c
+index 05c4196a8212d..84b8d58f0718e 100644
+--- a/drivers/gpu/drm/radeon/radeon_pm.c
++++ b/drivers/gpu/drm/radeon/radeon_pm.c
+@@ -1747,6 +1747,7 @@ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev)
+ 	struct drm_device *ddev = rdev->ddev;
+ 	struct drm_crtc *crtc;
+ 	struct radeon_crtc *radeon_crtc;
++	struct radeon_connector *radeon_connector;
+ 
+ 	if (!rdev->pm.dpm_enabled)
+ 		return;
+@@ -1756,6 +1757,7 @@ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev)
+ 	/* update active crtc counts */
+ 	rdev->pm.dpm.new_active_crtcs = 0;
+ 	rdev->pm.dpm.new_active_crtc_count = 0;
++	rdev->pm.dpm.high_pixelclock_count = 0;
+ 	if (rdev->num_crtc && rdev->mode_info.mode_config_initialized) {
+ 		list_for_each_entry(crtc,
+ 				    &ddev->mode_config.crtc_list, head) {
+@@ -1763,6 +1765,12 @@ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev)
+ 			if (crtc->enabled) {
+ 				rdev->pm.dpm.new_active_crtcs |= (1 << radeon_crtc->crtc_id);
+ 				rdev->pm.dpm.new_active_crtc_count++;
++				if (!radeon_crtc->connector)
++					continue;
++
++				radeon_connector = to_radeon_connector(radeon_crtc->connector);
++				if (radeon_connector->pixelclock_for_modeset > 297000)
++					rdev->pm.dpm.high_pixelclock_count++;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
+index d1c73e9db889a..a84df439deb2f 100644
+--- a/drivers/gpu/drm/radeon/si_dpm.c
++++ b/drivers/gpu/drm/radeon/si_dpm.c
+@@ -2982,6 +2982,9 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
+ 		    (rdev->pdev->device == 0x6605)) {
+ 			max_sclk = 75000;
+ 		}
++
++		if (rdev->pm.dpm.high_pixelclock_count > 1)
++			disable_sclk_switching = true;
+ 	}
+ 
+ 	if (rps->vce_active) {
+diff --git a/drivers/hwmon/occ/common.c b/drivers/hwmon/occ/common.c
+index a71777990d496..d052502dc2c0e 100644
+--- a/drivers/hwmon/occ/common.c
++++ b/drivers/hwmon/occ/common.c
+@@ -209,9 +209,9 @@ int occ_update_response(struct occ *occ)
+ 		return rc;
+ 
+ 	/* limit the maximum rate of polling the OCC */
+-	if (time_after(jiffies, occ->last_update + OCC_UPDATE_FREQUENCY)) {
++	if (time_after(jiffies, occ->next_update)) {
+ 		rc = occ_poll(occ);
+-		occ->last_update = jiffies;
++		occ->next_update = jiffies + OCC_UPDATE_FREQUENCY;
+ 	} else {
+ 		rc = occ->last_error;
+ 	}
+@@ -1089,6 +1089,7 @@ int occ_setup(struct occ *occ, const char *name)
+ 		return rc;
+ 	}
+ 
++	occ->next_update = jiffies + OCC_UPDATE_FREQUENCY;
+ 	occ_parse_poll_response(occ);
+ 
+ 	rc = occ_setup_sensor_attrs(occ);
+diff --git a/drivers/hwmon/occ/common.h b/drivers/hwmon/occ/common.h
+index 67e6968b8978e..e6df719770e81 100644
+--- a/drivers/hwmon/occ/common.h
++++ b/drivers/hwmon/occ/common.h
+@@ -99,7 +99,7 @@ struct occ {
+ 	u8 poll_cmd_data;		/* to perform OCC poll command */
+ 	int (*send_cmd)(struct occ *occ, u8 *cmd);
+ 
+-	unsigned long last_update;
++	unsigned long next_update;
+ 	struct mutex lock;		/* lock OCC access */
+ 
+ 	struct device *hwmon;
+diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c
+index 3629b7885aca9..c594f45319fc5 100644
+--- a/drivers/hwtracing/coresight/coresight-platform.c
++++ b/drivers/hwtracing/coresight/coresight-platform.c
+@@ -90,6 +90,12 @@ static void of_coresight_get_ports_legacy(const struct device_node *node,
+ 	struct of_endpoint endpoint;
+ 	int in = 0, out = 0;
+ 
++	/*
++	 * Avoid warnings in of_graph_get_next_endpoint()
++	 * if the device doesn't have any graph connections
++	 */
++	if (!of_graph_is_present(node))
++		return;
+ 	do {
+ 		ep = of_graph_get_next_endpoint(node, ep);
+ 		if (!ep)
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 86f70c7513192..bf25acba2ed53 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -564,7 +564,7 @@ static const struct i2c_spec_values *mtk_i2c_get_spec(unsigned int speed)
+ 
+ static int mtk_i2c_max_step_cnt(unsigned int target_speed)
+ {
+-	if (target_speed > I2C_MAX_FAST_MODE_FREQ)
++	if (target_speed > I2C_MAX_FAST_MODE_PLUS_FREQ)
+ 		return MAX_HS_STEP_CNT_DIV;
+ 	else
+ 		return MAX_STEP_CNT_DIV;
+@@ -635,7 +635,7 @@ static int mtk_i2c_check_ac_timing(struct mtk_i2c *i2c,
+ 	if (sda_min > sda_max)
+ 		return -3;
+ 
+-	if (check_speed > I2C_MAX_FAST_MODE_FREQ) {
++	if (check_speed > I2C_MAX_FAST_MODE_PLUS_FREQ) {
+ 		if (i2c->dev_comp->ltiming_adjust) {
+ 			i2c->ac_timing.hs = I2C_TIME_DEFAULT_VALUE |
+ 				(sample_cnt << 12) | (high_cnt << 8);
+@@ -850,7 +850,7 @@ static int mtk_i2c_do_transfer(struct mtk_i2c *i2c, struct i2c_msg *msgs,
+ 
+ 	control_reg = mtk_i2c_readw(i2c, OFFSET_CONTROL) &
+ 			~(I2C_CONTROL_DIR_CHANGE | I2C_CONTROL_RS);
+-	if ((i2c->speed_hz > I2C_MAX_FAST_MODE_FREQ) || (left_num >= 1))
++	if ((i2c->speed_hz > I2C_MAX_FAST_MODE_PLUS_FREQ) || (left_num >= 1))
+ 		control_reg |= I2C_CONTROL_RS;
+ 
+ 	if (i2c->op == I2C_MASTER_WRRD)
+@@ -1067,7 +1067,8 @@ static int mtk_i2c_transfer(struct i2c_adapter *adap,
+ 		}
+ 	}
+ 
+-	if (i2c->auto_restart && num >= 2 && i2c->speed_hz > I2C_MAX_FAST_MODE_FREQ)
++	if (i2c->auto_restart && num >= 2 &&
++		i2c->speed_hz > I2C_MAX_FAST_MODE_PLUS_FREQ)
+ 		/* ignore the first restart irq after the master code,
+ 		 * otherwise the first transfer will be discarded.
+ 		 */
+diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
+index 6ceb11cc4be18..6ef38a8ee95cb 100644
+--- a/drivers/i2c/i2c-dev.c
++++ b/drivers/i2c/i2c-dev.c
+@@ -440,8 +440,13 @@ static long i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 				   sizeof(rdwr_arg)))
+ 			return -EFAULT;
+ 
+-		/* Put an arbitrary limit on the number of messages that can
+-		 * be sent at once */
++		if (!rdwr_arg.msgs || rdwr_arg.nmsgs == 0)
++			return -EINVAL;
++
++		/*
++		 * Put an arbitrary limit on the number of messages that can
++		 * be sent at once
++		 */
+ 		if (rdwr_arg.nmsgs > I2C_RDWR_IOCTL_MAX_MSGS)
+ 			return -EINVAL;
+ 
+diff --git a/drivers/iio/accel/Kconfig b/drivers/iio/accel/Kconfig
+index 2e0c62c391550..8acf277b8b258 100644
+--- a/drivers/iio/accel/Kconfig
++++ b/drivers/iio/accel/Kconfig
+@@ -211,7 +211,6 @@ config DMARD10
+ config HID_SENSOR_ACCEL_3D
+ 	depends on HID_SENSOR_HUB
+ 	select IIO_BUFFER
+-	select IIO_TRIGGERED_BUFFER
+ 	select HID_SENSOR_IIO_COMMON
+ 	select HID_SENSOR_IIO_TRIGGER
+ 	tristate "HID Accelerometers 3D"
+diff --git a/drivers/iio/common/hid-sensors/Kconfig b/drivers/iio/common/hid-sensors/Kconfig
+index 24d4925673363..2a3dd3b907bee 100644
+--- a/drivers/iio/common/hid-sensors/Kconfig
++++ b/drivers/iio/common/hid-sensors/Kconfig
+@@ -19,6 +19,7 @@ config HID_SENSOR_IIO_TRIGGER
+ 	tristate "Common module (trigger) for all HID Sensor IIO drivers"
+ 	depends on HID_SENSOR_HUB && HID_SENSOR_IIO_COMMON && IIO_BUFFER
+ 	select IIO_TRIGGER
++	select IIO_TRIGGERED_BUFFER
+ 	help
+ 	  Say yes here to build trigger support for HID sensors.
+ 	  Triggers will be send if all requested attributes were read.
+diff --git a/drivers/iio/gyro/Kconfig b/drivers/iio/gyro/Kconfig
+index 5824f2edf9758..20b5ac7ab66af 100644
+--- a/drivers/iio/gyro/Kconfig
++++ b/drivers/iio/gyro/Kconfig
+@@ -111,7 +111,6 @@ config FXAS21002C_SPI
+ config HID_SENSOR_GYRO_3D
+ 	depends on HID_SENSOR_HUB
+ 	select IIO_BUFFER
+-	select IIO_TRIGGERED_BUFFER
+ 	select HID_SENSOR_IIO_COMMON
+ 	select HID_SENSOR_IIO_TRIGGER
+ 	tristate "HID Gyroscope 3D"
+diff --git a/drivers/iio/gyro/mpu3050-core.c b/drivers/iio/gyro/mpu3050-core.c
+index 8ea6c2aa6263d..39e1c4306c474 100644
+--- a/drivers/iio/gyro/mpu3050-core.c
++++ b/drivers/iio/gyro/mpu3050-core.c
+@@ -271,7 +271,16 @@ static int mpu3050_read_raw(struct iio_dev *indio_dev,
+ 	case IIO_CHAN_INFO_OFFSET:
+ 		switch (chan->type) {
+ 		case IIO_TEMP:
+-			/* The temperature scaling is (x+23000)/280 Celsius */
++			/*
++			 * The temperature scaling is (x+23000)/280 Celsius
++			 * for the "best fit straight line" temperature range
++			 * of -30C..85C.  The 23000 includes room temperature
++			 * offset of +35C, 280 is the precision scale and x is
++			 * the 16-bit signed integer reported by hardware.
++			 *
++			 * Temperature value itself represents temperature of
++			 * the sensor die.
++			 */
+ 			*val = 23000;
+ 			return IIO_VAL_INT;
+ 		default:
+@@ -328,7 +337,7 @@ static int mpu3050_read_raw(struct iio_dev *indio_dev,
+ 				goto out_read_raw_unlock;
+ 			}
+ 
+-			*val = be16_to_cpu(raw_val);
++			*val = (s16)be16_to_cpu(raw_val);
+ 			ret = IIO_VAL_INT;
+ 
+ 			goto out_read_raw_unlock;
+diff --git a/drivers/iio/humidity/Kconfig b/drivers/iio/humidity/Kconfig
+index 6549fcf6db698..2de5494e7c225 100644
+--- a/drivers/iio/humidity/Kconfig
++++ b/drivers/iio/humidity/Kconfig
+@@ -52,7 +52,6 @@ config HID_SENSOR_HUMIDITY
+ 	tristate "HID Environmental humidity sensor"
+ 	depends on HID_SENSOR_HUB
+ 	select IIO_BUFFER
+-	select IIO_TRIGGERED_BUFFER
+ 	select HID_SENSOR_IIO_COMMON
+ 	select HID_SENSOR_IIO_TRIGGER
+ 	help
+diff --git a/drivers/iio/light/Kconfig b/drivers/iio/light/Kconfig
+index 33ad4dd0b5c7b..917f9becf9c75 100644
+--- a/drivers/iio/light/Kconfig
++++ b/drivers/iio/light/Kconfig
+@@ -256,7 +256,6 @@ config ISL29125
+ config HID_SENSOR_ALS
+ 	depends on HID_SENSOR_HUB
+ 	select IIO_BUFFER
+-	select IIO_TRIGGERED_BUFFER
+ 	select HID_SENSOR_IIO_COMMON
+ 	select HID_SENSOR_IIO_TRIGGER
+ 	tristate "HID ALS"
+@@ -270,7 +269,6 @@ config HID_SENSOR_ALS
+ config HID_SENSOR_PROX
+ 	depends on HID_SENSOR_HUB
+ 	select IIO_BUFFER
+-	select IIO_TRIGGERED_BUFFER
+ 	select HID_SENSOR_IIO_COMMON
+ 	select HID_SENSOR_IIO_TRIGGER
+ 	tristate "HID PROX"
+diff --git a/drivers/iio/light/gp2ap002.c b/drivers/iio/light/gp2ap002.c
+index 7ba7aa59437c3..040d8429a6e00 100644
+--- a/drivers/iio/light/gp2ap002.c
++++ b/drivers/iio/light/gp2ap002.c
+@@ -583,7 +583,7 @@ static int gp2ap002_probe(struct i2c_client *client,
+ 					"gp2ap002", indio_dev);
+ 	if (ret) {
+ 		dev_err(dev, "unable to request IRQ\n");
+-		goto out_disable_vio;
++		goto out_put_pm;
+ 	}
+ 	gp2ap002->irq = client->irq;
+ 
+@@ -613,8 +613,9 @@ static int gp2ap002_probe(struct i2c_client *client,
+ 
+ 	return 0;
+ 
+-out_disable_pm:
++out_put_pm:
+ 	pm_runtime_put_noidle(dev);
++out_disable_pm:
+ 	pm_runtime_disable(dev);
+ out_disable_vio:
+ 	regulator_disable(gp2ap002->vio);
+diff --git a/drivers/iio/light/tsl2583.c b/drivers/iio/light/tsl2583.c
+index 9e5490b7473bd..40b7dd266b314 100644
+--- a/drivers/iio/light/tsl2583.c
++++ b/drivers/iio/light/tsl2583.c
+@@ -341,6 +341,14 @@ static int tsl2583_als_calibrate(struct iio_dev *indio_dev)
+ 		return lux_val;
+ 	}
+ 
++	/* Avoid division by zero of lux_value later on */
++	if (lux_val == 0) {
++		dev_err(&chip->client->dev,
++			"%s: lux_val of 0 will produce out of range trim_value\n",
++			__func__);
++		return -ENODATA;
++	}
++
+ 	gain_trim_val = (unsigned int)(((chip->als_settings.als_cal_target)
+ 			* chip->als_settings.als_gain_trim) / lux_val);
+ 	if ((gain_trim_val < 250) || (gain_trim_val > 4000)) {
+diff --git a/drivers/iio/magnetometer/Kconfig b/drivers/iio/magnetometer/Kconfig
+index 1697a8c03506c..7e9489a355714 100644
+--- a/drivers/iio/magnetometer/Kconfig
++++ b/drivers/iio/magnetometer/Kconfig
+@@ -95,7 +95,6 @@ config MAG3110
+ config HID_SENSOR_MAGNETOMETER_3D
+ 	depends on HID_SENSOR_HUB
+ 	select IIO_BUFFER
+-	select IIO_TRIGGERED_BUFFER
+ 	select HID_SENSOR_IIO_COMMON
+ 	select HID_SENSOR_IIO_TRIGGER
+ 	tristate "HID Magenetometer 3D"
+diff --git a/drivers/iio/orientation/Kconfig b/drivers/iio/orientation/Kconfig
+index a505583cc2fda..396cbbb867f4c 100644
+--- a/drivers/iio/orientation/Kconfig
++++ b/drivers/iio/orientation/Kconfig
+@@ -9,7 +9,6 @@ menu "Inclinometer sensors"
+ config HID_SENSOR_INCLINOMETER_3D
+ 	depends on HID_SENSOR_HUB
+ 	select IIO_BUFFER
+-	select IIO_TRIGGERED_BUFFER
+ 	select HID_SENSOR_IIO_COMMON
+ 	select HID_SENSOR_IIO_TRIGGER
+ 	tristate "HID Inclinometer 3D"
+@@ -20,7 +19,6 @@ config HID_SENSOR_INCLINOMETER_3D
+ config HID_SENSOR_DEVICE_ROTATION
+ 	depends on HID_SENSOR_HUB
+ 	select IIO_BUFFER
+-	select IIO_TRIGGERED_BUFFER
+ 	select HID_SENSOR_IIO_COMMON
+ 	select HID_SENSOR_IIO_TRIGGER
+ 	tristate "HID Device Rotation"
+diff --git a/drivers/iio/pressure/Kconfig b/drivers/iio/pressure/Kconfig
+index 689b978db4f95..fc0d3cfca4186 100644
+--- a/drivers/iio/pressure/Kconfig
++++ b/drivers/iio/pressure/Kconfig
+@@ -79,7 +79,6 @@ config DPS310
+ config HID_SENSOR_PRESS
+ 	depends on HID_SENSOR_HUB
+ 	select IIO_BUFFER
+-	select IIO_TRIGGERED_BUFFER
+ 	select HID_SENSOR_IIO_COMMON
+ 	select HID_SENSOR_IIO_TRIGGER
+ 	tristate "HID PRESS"
+diff --git a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
+index c685f10b5ae48..cc206bfa09c78 100644
+--- a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
++++ b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
+@@ -160,6 +160,7 @@ static int lidar_get_measurement(struct lidar_data *data, u16 *reg)
+ 	ret = lidar_write_control(data, LIDAR_REG_CONTROL_ACQUIRE);
+ 	if (ret < 0) {
+ 		dev_err(&client->dev, "cannot send start measurement command");
++		pm_runtime_put_noidle(&client->dev);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/iio/temperature/Kconfig b/drivers/iio/temperature/Kconfig
+index f1f2a1499c9e2..4df60082c1fa8 100644
+--- a/drivers/iio/temperature/Kconfig
++++ b/drivers/iio/temperature/Kconfig
+@@ -45,7 +45,6 @@ config HID_SENSOR_TEMP
+ 	tristate "HID Environmental temperature sensor"
+ 	depends on HID_SENSOR_HUB
+ 	select IIO_BUFFER
+-	select IIO_TRIGGERED_BUFFER
+ 	select HID_SENSOR_IIO_COMMON
+ 	select HID_SENSOR_IIO_TRIGGER
+ 	help
+diff --git a/drivers/infiniband/hw/hfi1/ipoib.h b/drivers/infiniband/hw/hfi1/ipoib.h
+index b8c9d0a003fb3..1ee361c6d11a6 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib.h
++++ b/drivers/infiniband/hw/hfi1/ipoib.h
+@@ -52,8 +52,9 @@ union hfi1_ipoib_flow {
+  * @producer_lock: producer sync lock
+  * @consumer_lock: consumer sync lock
+  */
++struct ipoib_txreq;
+ struct hfi1_ipoib_circ_buf {
+-	void **items;
++	struct ipoib_txreq **items;
+ 	unsigned long head;
+ 	unsigned long tail;
+ 	unsigned long max_items;
+diff --git a/drivers/infiniband/hw/hfi1/ipoib_tx.c b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+index 9df292b51a05b..ab1eefffc14b3 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib_tx.c
++++ b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+@@ -702,14 +702,14 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv)
+ 
+ 	priv->tx_napis = kcalloc_node(dev->num_tx_queues,
+ 				      sizeof(struct napi_struct),
+-				      GFP_ATOMIC,
++				      GFP_KERNEL,
+ 				      priv->dd->node);
+ 	if (!priv->tx_napis)
+ 		goto free_txreq_cache;
+ 
+ 	priv->txqs = kcalloc_node(dev->num_tx_queues,
+ 				  sizeof(struct hfi1_ipoib_txq),
+-				  GFP_ATOMIC,
++				  GFP_KERNEL,
+ 				  priv->dd->node);
+ 	if (!priv->txqs)
+ 		goto free_tx_napis;
+@@ -741,9 +741,9 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv)
+ 					     priv->dd->node);
+ 
+ 		txq->tx_ring.items =
+-			vzalloc_node(array_size(tx_ring_size,
+-						sizeof(struct ipoib_txreq)),
+-				     priv->dd->node);
++			kcalloc_node(tx_ring_size,
++				     sizeof(struct ipoib_txreq *),
++				     GFP_KERNEL, priv->dd->node);
+ 		if (!txq->tx_ring.items)
+ 			goto free_txqs;
+ 
+@@ -764,7 +764,7 @@ free_txqs:
+ 		struct hfi1_ipoib_txq *txq = &priv->txqs[i];
+ 
+ 		netif_napi_del(txq->napi);
+-		vfree(txq->tx_ring.items);
++		kfree(txq->tx_ring.items);
+ 	}
+ 
+ 	kfree(priv->txqs);
+@@ -817,7 +817,7 @@ void hfi1_ipoib_txreq_deinit(struct hfi1_ipoib_dev_priv *priv)
+ 		hfi1_ipoib_drain_tx_list(txq);
+ 		netif_napi_del(txq->napi);
+ 		(void)hfi1_ipoib_drain_tx_ring(txq, txq->tx_ring.max_items);
+-		vfree(txq->tx_ring.items);
++		kfree(txq->tx_ring.items);
+ 	}
+ 
+ 	kfree(priv->txqs);
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index fa502c0e2e31b..cc9869cc48e41 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -12,7 +12,6 @@
+ #include <linux/acpi.h>
+ #include <linux/list.h>
+ #include <linux/bitmap.h>
+-#include <linux/delay.h>
+ #include <linux/slab.h>
+ #include <linux/syscore_ops.h>
+ #include <linux/interrupt.h>
+@@ -255,8 +254,6 @@ static enum iommu_init_state init_state = IOMMU_START_STATE;
+ static int amd_iommu_enable_interrupts(void);
+ static int __init iommu_go_to_state(enum iommu_init_state state);
+ static void init_device_table_dma(void);
+-static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
+-				u8 fxn, u64 *value, bool is_write);
+ 
+ static bool amd_iommu_pre_enabled = true;
+ 
+@@ -1720,53 +1717,16 @@ static int __init init_iommu_all(struct acpi_table_header *table)
+ 	return 0;
+ }
+ 
+-static void __init init_iommu_perf_ctr(struct amd_iommu *iommu)
++static void init_iommu_perf_ctr(struct amd_iommu *iommu)
+ {
+-	int retry;
++	u64 val;
+ 	struct pci_dev *pdev = iommu->dev;
+-	u64 val = 0xabcd, val2 = 0, save_reg, save_src;
+ 
+ 	if (!iommu_feature(iommu, FEATURE_PC))
+ 		return;
+ 
+ 	amd_iommu_pc_present = true;
+ 
+-	/* save the value to restore, if writable */
+-	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false) ||
+-	    iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, false))
+-		goto pc_false;
+-
+-	/*
+-	 * Disable power gating by programing the performance counter
+-	 * source to 20 (i.e. counts the reads and writes from/to IOMMU
+-	 * Reserved Register [MMIO Offset 1FF8h] that are ignored.),
+-	 * which never get incremented during this init phase.
+-	 * (Note: The event is also deprecated.)
+-	 */
+-	val = 20;
+-	if (iommu_pc_get_set_reg(iommu, 0, 0, 8, &val, true))
+-		goto pc_false;
+-
+-	/* Check if the performance counters can be written to */
+-	val = 0xabcd;
+-	for (retry = 5; retry; retry--) {
+-		if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true) ||
+-		    iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false) ||
+-		    val2)
+-			break;
+-
+-		/* Wait about 20 msec for power gating to disable and retry. */
+-		msleep(20);
+-	}
+-
+-	/* restore */
+-	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true) ||
+-	    iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, true))
+-		goto pc_false;
+-
+-	if (val != val2)
+-		goto pc_false;
+-
+ 	pci_info(pdev, "IOMMU performance counters supported\n");
+ 
+ 	val = readl(iommu->mmio_base + MMIO_CNTR_CONF_OFFSET);
+@@ -1774,11 +1734,6 @@ static void __init init_iommu_perf_ctr(struct amd_iommu *iommu)
+ 	iommu->max_counters = (u8) ((val >> 7) & 0xf);
+ 
+ 	return;
+-
+-pc_false:
+-	pci_err(pdev, "Unable to read/write to IOMMU perf counter.\n");
+-	amd_iommu_pc_present = false;
+-	return;
+ }
+ 
+ static ssize_t amd_iommu_show_cap(struct device *dev,
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index db9bf5ac07228..eececdeaa40f9 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -2373,7 +2373,10 @@ static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
+ 		}
+ 	}
+ 
+-	pteval = ((phys_addr_t)phys_pfn << VTD_PAGE_SHIFT) | attr;
++	if (!sg) {
++		sg_res = nr_pages;
++		pteval = ((phys_addr_t)phys_pfn << VTD_PAGE_SHIFT) | attr;
++	}
+ 
+ 	while (nr_pages > 0) {
+ 		uint64_t tmp;
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 6f0bf5db885cd..62bcef4bb95fd 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -1466,6 +1466,8 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
+ 	int i;
+ 	int putidx;
+ 
++	cdev->tx_skb = NULL;
++
+ 	/* Generate ID field for TX buffer Element */
+ 	/* Common to all supported M_CAN versions */
+ 	if (cf->can_id & CAN_EFF_FLAG) {
+@@ -1582,7 +1584,6 @@ static void m_can_tx_work_queue(struct work_struct *ws)
+ 						tx_work);
+ 
+ 	m_can_tx_handler(cdev);
+-	cdev->tx_skb = NULL;
+ }
+ 
+ static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index 42c3046fa3047..89897a2d41fa6 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -956,8 +956,6 @@ static int mcp251x_stop(struct net_device *net)
+ 
+ 	priv->force_quit = 1;
+ 	free_irq(spi->irq, priv);
+-	destroy_workqueue(priv->wq);
+-	priv->wq = NULL;
+ 
+ 	mutex_lock(&priv->mcp_lock);
+ 
+@@ -1224,24 +1222,15 @@ static int mcp251x_open(struct net_device *net)
+ 		goto out_close;
+ 	}
+ 
+-	priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM,
+-				   0);
+-	if (!priv->wq) {
+-		ret = -ENOMEM;
+-		goto out_clean;
+-	}
+-	INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler);
+-	INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler);
+-
+ 	ret = mcp251x_hw_wake(spi);
+ 	if (ret)
+-		goto out_free_wq;
++		goto out_free_irq;
+ 	ret = mcp251x_setup(net, spi);
+ 	if (ret)
+-		goto out_free_wq;
++		goto out_free_irq;
+ 	ret = mcp251x_set_normal_mode(spi);
+ 	if (ret)
+-		goto out_free_wq;
++		goto out_free_irq;
+ 
+ 	can_led_event(net, CAN_LED_EVENT_OPEN);
+ 
+@@ -1250,9 +1239,7 @@ static int mcp251x_open(struct net_device *net)
+ 
+ 	return 0;
+ 
+-out_free_wq:
+-	destroy_workqueue(priv->wq);
+-out_clean:
++out_free_irq:
+ 	free_irq(spi->irq, priv);
+ 	mcp251x_hw_sleep(spi);
+ out_close:
+@@ -1373,6 +1360,15 @@ static int mcp251x_can_probe(struct spi_device *spi)
+ 	if (ret)
+ 		goto out_clk;
+ 
++	priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM,
++				   0);
++	if (!priv->wq) {
++		ret = -ENOMEM;
++		goto out_clk;
++	}
++	INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler);
++	INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler);
++
+ 	priv->spi = spi;
+ 	mutex_init(&priv->mcp_lock);
+ 
+@@ -1417,6 +1413,8 @@ static int mcp251x_can_probe(struct spi_device *spi)
+ 	return 0;
+ 
+ error_probe:
++	destroy_workqueue(priv->wq);
++	priv->wq = NULL;
+ 	mcp251x_power_enable(priv->power, 0);
+ 
+ out_clk:
+@@ -1438,6 +1436,9 @@ static int mcp251x_can_remove(struct spi_device *spi)
+ 
+ 	mcp251x_power_enable(priv->power, 0);
+ 
++	destroy_workqueue(priv->wq);
++	priv->wq = NULL;
++
+ 	clk_disable_unprepare(priv->clk);
+ 
+ 	free_candev(net);
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index 096d818c167e2..68ff931993c25 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -2870,10 +2870,12 @@ static int mcp251xfd_probe(struct spi_device *spi)
+ 
+ 	err = mcp251xfd_register(priv);
+ 	if (err)
+-		goto out_free_candev;
++		goto out_can_rx_offload_del;
+ 
+ 	return 0;
+ 
++ out_can_rx_offload_del:
++	can_rx_offload_del(&priv->offload);
+  out_free_candev:
+ 	spi->max_speed_hz = priv->spi_max_speed_hz_orig;
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 7ddc2e2e4976a..4385b42a2b636 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -122,7 +122,10 @@ enum board_idx {
+ 	NETXTREME_E_VF,
+ 	NETXTREME_C_VF,
+ 	NETXTREME_S_VF,
++	NETXTREME_C_VF_HV,
++	NETXTREME_E_VF_HV,
+ 	NETXTREME_E_P5_VF,
++	NETXTREME_E_P5_VF_HV,
+ };
+ 
+ /* indexed by enum above */
+@@ -170,7 +173,10 @@ static const struct {
+ 	[NETXTREME_E_VF] = { "Broadcom NetXtreme-E Ethernet Virtual Function" },
+ 	[NETXTREME_C_VF] = { "Broadcom NetXtreme-C Ethernet Virtual Function" },
+ 	[NETXTREME_S_VF] = { "Broadcom NetXtreme-S Ethernet Virtual Function" },
++	[NETXTREME_C_VF_HV] = { "Broadcom NetXtreme-C Virtual Function for Hyper-V" },
++	[NETXTREME_E_VF_HV] = { "Broadcom NetXtreme-E Virtual Function for Hyper-V" },
+ 	[NETXTREME_E_P5_VF] = { "Broadcom BCM5750X NetXtreme-E Ethernet Virtual Function" },
++	[NETXTREME_E_P5_VF_HV] = { "Broadcom BCM5750X NetXtreme-E Virtual Function for Hyper-V" },
+ };
+ 
+ static const struct pci_device_id bnxt_pci_tbl[] = {
+@@ -222,15 +228,25 @@ static const struct pci_device_id bnxt_pci_tbl[] = {
+ 	{ PCI_VDEVICE(BROADCOM, 0xd804), .driver_data = BCM58804 },
+ #ifdef CONFIG_BNXT_SRIOV
+ 	{ PCI_VDEVICE(BROADCOM, 0x1606), .driver_data = NETXTREME_E_VF },
++	{ PCI_VDEVICE(BROADCOM, 0x1607), .driver_data = NETXTREME_E_VF_HV },
++	{ PCI_VDEVICE(BROADCOM, 0x1608), .driver_data = NETXTREME_E_VF_HV },
+ 	{ PCI_VDEVICE(BROADCOM, 0x1609), .driver_data = NETXTREME_E_VF },
++	{ PCI_VDEVICE(BROADCOM, 0x16bd), .driver_data = NETXTREME_E_VF_HV },
+ 	{ PCI_VDEVICE(BROADCOM, 0x16c1), .driver_data = NETXTREME_E_VF },
++	{ PCI_VDEVICE(BROADCOM, 0x16c2), .driver_data = NETXTREME_C_VF_HV },
++	{ PCI_VDEVICE(BROADCOM, 0x16c3), .driver_data = NETXTREME_C_VF_HV },
++	{ PCI_VDEVICE(BROADCOM, 0x16c4), .driver_data = NETXTREME_E_VF_HV },
++	{ PCI_VDEVICE(BROADCOM, 0x16c5), .driver_data = NETXTREME_E_VF_HV },
+ 	{ PCI_VDEVICE(BROADCOM, 0x16cb), .driver_data = NETXTREME_C_VF },
+ 	{ PCI_VDEVICE(BROADCOM, 0x16d3), .driver_data = NETXTREME_E_VF },
+ 	{ PCI_VDEVICE(BROADCOM, 0x16dc), .driver_data = NETXTREME_E_VF },
+ 	{ PCI_VDEVICE(BROADCOM, 0x16e1), .driver_data = NETXTREME_C_VF },
+ 	{ PCI_VDEVICE(BROADCOM, 0x16e5), .driver_data = NETXTREME_C_VF },
++	{ PCI_VDEVICE(BROADCOM, 0x16e6), .driver_data = NETXTREME_C_VF_HV },
+ 	{ PCI_VDEVICE(BROADCOM, 0x1806), .driver_data = NETXTREME_E_P5_VF },
+ 	{ PCI_VDEVICE(BROADCOM, 0x1807), .driver_data = NETXTREME_E_P5_VF },
++	{ PCI_VDEVICE(BROADCOM, 0x1808), .driver_data = NETXTREME_E_P5_VF_HV },
++	{ PCI_VDEVICE(BROADCOM, 0x1809), .driver_data = NETXTREME_E_P5_VF_HV },
+ 	{ PCI_VDEVICE(BROADCOM, 0xd800), .driver_data = NETXTREME_S_VF },
+ #endif
+ 	{ 0 }
+@@ -263,7 +279,8 @@ static struct workqueue_struct *bnxt_pf_wq;
+ static bool bnxt_vf_pciid(enum board_idx idx)
+ {
+ 	return (idx == NETXTREME_C_VF || idx == NETXTREME_E_VF ||
+-		idx == NETXTREME_S_VF || idx == NETXTREME_E_P5_VF);
++		idx == NETXTREME_S_VF || idx == NETXTREME_C_VF_HV ||
++		idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF);
+ }
+ 
+ #define DB_CP_REARM_FLAGS	(DB_KEY_CP | DB_IDX_VALID)
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index fb269d587b741..548d8095c0a79 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -768,7 +768,7 @@ static inline int enic_queue_wq_skb_encap(struct enic *enic, struct vnic_wq *wq,
+ 	return err;
+ }
+ 
+-static inline void enic_queue_wq_skb(struct enic *enic,
++static inline int enic_queue_wq_skb(struct enic *enic,
+ 	struct vnic_wq *wq, struct sk_buff *skb)
+ {
+ 	unsigned int mss = skb_shinfo(skb)->gso_size;
+@@ -814,6 +814,7 @@ static inline void enic_queue_wq_skb(struct enic *enic,
+ 		wq->to_use = buf->next;
+ 		dev_kfree_skb(skb);
+ 	}
++	return err;
+ }
+ 
+ /* netif_tx_lock held, process context with BHs disabled, or BH */
+@@ -857,7 +858,8 @@ static netdev_tx_t enic_hard_start_xmit(struct sk_buff *skb,
+ 		return NETDEV_TX_BUSY;
+ 	}
+ 
+-	enic_queue_wq_skb(enic, wq, skb);
++	if (enic_queue_wq_skb(enic, wq, skb))
++		goto error;
+ 
+ 	if (vnic_wq_desc_avail(wq) < MAX_SKB_FRAGS + ENIC_DESC_MAX_SPLITS)
+ 		netif_tx_stop_queue(txq);
+@@ -865,6 +867,7 @@ static netdev_tx_t enic_hard_start_xmit(struct sk_buff *skb,
+ 	if (!netdev_xmit_more() || netif_xmit_stopped(txq))
+ 		vnic_wq_doorbell(wq);
+ 
++error:
+ 	spin_unlock(&enic->wq_lock[txq_map]);
+ 
+ 	return NETDEV_TX_OK;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 070bef303d184..ef31489199706 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -526,8 +526,8 @@ static int hns3_nic_net_stop(struct net_device *netdev)
+ 	if (h->ae_algo->ops->set_timer_task)
+ 		h->ae_algo->ops->set_timer_task(priv->ae_handle, false);
+ 
+-	netif_tx_stop_all_queues(netdev);
+ 	netif_carrier_off(netdev);
++	netif_tx_disable(netdev);
+ 
+ 	hns3_nic_net_down(netdev);
+ 
+@@ -778,7 +778,7 @@ static int hns3_get_l4_protocol(struct sk_buff *skb, u8 *ol4_proto,
+  * and it is udp packet, which has a dest port as the IANA assigned.
+  * the hardware is expected to do the checksum offload, but the
+  * hardware will not do the checksum offload when udp dest port is
+- * 4789 or 6081.
++ * 4789, 4790 or 6081.
+  */
+ static bool hns3_tunnel_csum_bug(struct sk_buff *skb)
+ {
+@@ -788,7 +788,8 @@ static bool hns3_tunnel_csum_bug(struct sk_buff *skb)
+ 
+ 	if (!(!skb->encapsulation &&
+ 	      (l4.udp->dest == htons(IANA_VXLAN_UDP_PORT) ||
+-	      l4.udp->dest == htons(GENEVE_UDP_PORT))))
++	      l4.udp->dest == htons(GENEVE_UDP_PORT) ||
++	      l4.udp->dest == htons(4790))))
+ 		return false;
+ 
+ 	skb_checksum_help(skb);
+@@ -1192,23 +1193,21 @@ static unsigned int hns3_skb_bd_num(struct sk_buff *skb, unsigned int *bd_size,
+ }
+ 
+ static unsigned int hns3_tx_bd_num(struct sk_buff *skb, unsigned int *bd_size,
+-				   u8 max_non_tso_bd_num)
++				   u8 max_non_tso_bd_num, unsigned int bd_num,
++				   unsigned int recursion_level)
+ {
++#define HNS3_MAX_RECURSION_LEVEL	24
++
+ 	struct sk_buff *frag_skb;
+-	unsigned int bd_num = 0;
+ 
+ 	/* If the total len is within the max bd limit */
+-	if (likely(skb->len <= HNS3_MAX_BD_SIZE && !skb_has_frag_list(skb) &&
++	if (likely(skb->len <= HNS3_MAX_BD_SIZE && !recursion_level &&
++		   !skb_has_frag_list(skb) &&
+ 		   skb_shinfo(skb)->nr_frags < max_non_tso_bd_num))
+ 		return skb_shinfo(skb)->nr_frags + 1U;
+ 
+-	/* The below case will always be linearized, return
+-	 * HNS3_MAX_BD_NUM_TSO + 1U to make sure it is linearized.
+-	 */
+-	if (unlikely(skb->len > HNS3_MAX_TSO_SIZE ||
+-		     (!skb_is_gso(skb) && skb->len >
+-		      HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num))))
+-		return HNS3_MAX_TSO_BD_NUM + 1U;
++	if (unlikely(recursion_level >= HNS3_MAX_RECURSION_LEVEL))
++		return UINT_MAX;
+ 
+ 	bd_num = hns3_skb_bd_num(skb, bd_size, bd_num);
+ 
+@@ -1216,7 +1215,8 @@ static unsigned int hns3_tx_bd_num(struct sk_buff *skb, unsigned int *bd_size,
+ 		return bd_num;
+ 
+ 	skb_walk_frags(skb, frag_skb) {
+-		bd_num = hns3_skb_bd_num(frag_skb, bd_size, bd_num);
++		bd_num = hns3_tx_bd_num(frag_skb, bd_size, max_non_tso_bd_num,
++					bd_num, recursion_level + 1);
+ 		if (bd_num > HNS3_MAX_TSO_BD_NUM)
+ 			return bd_num;
+ 	}
+@@ -1276,6 +1276,43 @@ void hns3_shinfo_pack(struct skb_shared_info *shinfo, __u32 *size)
+ 		size[i] = skb_frag_size(&shinfo->frags[i]);
+ }
+ 
++static int hns3_skb_linearize(struct hns3_enet_ring *ring,
++			      struct sk_buff *skb,
++			      u8 max_non_tso_bd_num,
++			      unsigned int bd_num)
++{
++	/* 'bd_num == UINT_MAX' means the skb' fraglist has a
++	 * recursion level of over HNS3_MAX_RECURSION_LEVEL.
++	 */
++	if (bd_num == UINT_MAX) {
++		u64_stats_update_begin(&ring->syncp);
++		ring->stats.over_max_recursion++;
++		u64_stats_update_end(&ring->syncp);
++		return -ENOMEM;
++	}
++
++	/* The skb->len has exceeded the hw limitation, linearization
++	 * will not help.
++	 */
++	if (skb->len > HNS3_MAX_TSO_SIZE ||
++	    (!skb_is_gso(skb) && skb->len >
++	     HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num))) {
++		u64_stats_update_begin(&ring->syncp);
++		ring->stats.hw_limitation++;
++		u64_stats_update_end(&ring->syncp);
++		return -ENOMEM;
++	}
++
++	if (__skb_linearize(skb)) {
++		u64_stats_update_begin(&ring->syncp);
++		ring->stats.sw_err_cnt++;
++		u64_stats_update_end(&ring->syncp);
++		return -ENOMEM;
++	}
++
++	return 0;
++}
++
+ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring,
+ 				  struct net_device *netdev,
+ 				  struct sk_buff *skb)
+@@ -1285,7 +1322,7 @@ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring,
+ 	unsigned int bd_size[HNS3_MAX_TSO_BD_NUM + 1U];
+ 	unsigned int bd_num;
+ 
+-	bd_num = hns3_tx_bd_num(skb, bd_size, max_non_tso_bd_num);
++	bd_num = hns3_tx_bd_num(skb, bd_size, max_non_tso_bd_num, 0, 0);
+ 	if (unlikely(bd_num > max_non_tso_bd_num)) {
+ 		if (bd_num <= HNS3_MAX_TSO_BD_NUM && skb_is_gso(skb) &&
+ 		    !hns3_skb_need_linearized(skb, bd_size, bd_num,
+@@ -1294,16 +1331,11 @@ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring,
+ 			goto out;
+ 		}
+ 
+-		if (__skb_linearize(skb))
++		if (hns3_skb_linearize(ring, skb, max_non_tso_bd_num,
++				       bd_num))
+ 			return -ENOMEM;
+ 
+ 		bd_num = hns3_tx_bd_count(skb->len);
+-		if ((skb_is_gso(skb) && bd_num > HNS3_MAX_TSO_BD_NUM) ||
+-		    (!skb_is_gso(skb) &&
+-		     bd_num > max_non_tso_bd_num)) {
+-			trace_hns3_over_max_bd(skb);
+-			return -ENOMEM;
+-		}
+ 
+ 		u64_stats_update_begin(&ring->syncp);
+ 		ring->stats.tx_copy++;
+@@ -1327,6 +1359,10 @@ out:
+ 		return bd_num;
+ 	}
+ 
++	u64_stats_update_begin(&ring->syncp);
++	ring->stats.tx_busy++;
++	u64_stats_update_end(&ring->syncp);
++
+ 	return -EBUSY;
+ }
+ 
+@@ -1374,6 +1410,7 @@ static int hns3_fill_skb_to_desc(struct hns3_enet_ring *ring,
+ 				 struct sk_buff *skb, enum hns_desc_type type)
+ {
+ 	unsigned int size = skb_headlen(skb);
++	struct sk_buff *frag_skb;
+ 	int i, ret, bd_num = 0;
+ 
+ 	if (size) {
+@@ -1398,6 +1435,15 @@ static int hns3_fill_skb_to_desc(struct hns3_enet_ring *ring,
+ 		bd_num += ret;
+ 	}
+ 
++	skb_walk_frags(skb, frag_skb) {
++		ret = hns3_fill_skb_to_desc(ring, frag_skb,
++					    DESC_TYPE_FRAGLIST_SKB);
++		if (unlikely(ret < 0))
++			return ret;
++
++		bd_num += ret;
++	}
++
+ 	return bd_num;
+ }
+ 
+@@ -1428,8 +1474,6 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 	struct hns3_enet_ring *ring = &priv->ring[skb->queue_mapping];
+ 	struct netdev_queue *dev_queue;
+ 	int pre_ntu, next_to_use_head;
+-	struct sk_buff *frag_skb;
+-	int bd_num = 0;
+ 	bool doorbell;
+ 	int ret;
+ 
+@@ -1445,15 +1489,8 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 	ret = hns3_nic_maybe_stop_tx(ring, netdev, skb);
+ 	if (unlikely(ret <= 0)) {
+ 		if (ret == -EBUSY) {
+-			u64_stats_update_begin(&ring->syncp);
+-			ring->stats.tx_busy++;
+-			u64_stats_update_end(&ring->syncp);
+ 			hns3_tx_doorbell(ring, 0, true);
+ 			return NETDEV_TX_BUSY;
+-		} else if (ret == -ENOMEM) {
+-			u64_stats_update_begin(&ring->syncp);
+-			ring->stats.sw_err_cnt++;
+-			u64_stats_update_end(&ring->syncp);
+ 		}
+ 
+ 		hns3_rl_err(netdev, "xmit error: %d!\n", ret);
+@@ -1466,21 +1503,14 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 	if (unlikely(ret < 0))
+ 		goto fill_err;
+ 
++	/* 'ret < 0' means filling error, 'ret == 0' means skb->len is
++	 * zero, which is unlikely, and 'ret > 0' means how many tx desc
++	 * need to be notified to the hw.
++	 */
+ 	ret = hns3_fill_skb_to_desc(ring, skb, DESC_TYPE_SKB);
+-	if (unlikely(ret < 0))
++	if (unlikely(ret <= 0))
+ 		goto fill_err;
+ 
+-	bd_num += ret;
+-
+-	skb_walk_frags(skb, frag_skb) {
+-		ret = hns3_fill_skb_to_desc(ring, frag_skb,
+-					    DESC_TYPE_FRAGLIST_SKB);
+-		if (unlikely(ret < 0))
+-			goto fill_err;
+-
+-		bd_num += ret;
+-	}
+-
+ 	pre_ntu = ring->next_to_use ? (ring->next_to_use - 1) :
+ 					(ring->desc_num - 1);
+ 	ring->desc[pre_ntu].tx.bdtp_fe_sc_vld_ra_ri |=
+@@ -1491,7 +1521,7 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 	dev_queue = netdev_get_tx_queue(netdev, ring->queue_index);
+ 	doorbell = __netdev_tx_sent_queue(dev_queue, skb->len,
+ 					  netdev_xmit_more());
+-	hns3_tx_doorbell(ring, bd_num, doorbell);
++	hns3_tx_doorbell(ring, ret, doorbell);
+ 
+ 	return NETDEV_TX_OK;
+ 
+@@ -1656,11 +1686,15 @@ static void hns3_nic_get_stats64(struct net_device *netdev,
+ 			tx_drop += ring->stats.tx_l4_proto_err;
+ 			tx_drop += ring->stats.tx_l2l3l4_err;
+ 			tx_drop += ring->stats.tx_tso_err;
++			tx_drop += ring->stats.over_max_recursion;
++			tx_drop += ring->stats.hw_limitation;
+ 			tx_errors += ring->stats.sw_err_cnt;
+ 			tx_errors += ring->stats.tx_vlan_err;
+ 			tx_errors += ring->stats.tx_l4_proto_err;
+ 			tx_errors += ring->stats.tx_l2l3l4_err;
+ 			tx_errors += ring->stats.tx_tso_err;
++			tx_errors += ring->stats.over_max_recursion;
++			tx_errors += ring->stats.hw_limitation;
+ 		} while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+ 
+ 		/* fetch the rx stats */
+@@ -4393,6 +4427,11 @@ static int hns3_reset_notify_up_enet(struct hnae3_handle *handle)
+ 	struct hns3_nic_priv *priv = netdev_priv(kinfo->netdev);
+ 	int ret = 0;
+ 
++	if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state)) {
++		netdev_err(kinfo->netdev, "device is not initialized yet\n");
++		return -EFAULT;
++	}
++
+ 	clear_bit(HNS3_NIC_STATE_RESETTING, &priv->state);
+ 
+ 	if (netif_running(kinfo->netdev)) {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index 1c81dea0da1e6..398686b15a826 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -359,6 +359,8 @@ struct ring_stats {
+ 			u64 tx_l4_proto_err;
+ 			u64 tx_l2l3l4_err;
+ 			u64 tx_tso_err;
++			u64 over_max_recursion;
++			u64 hw_limitation;
+ 		};
+ 		struct {
+ 			u64 rx_pkts;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index 6b07b27711721..c0aa3be0cdfbb 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -39,6 +39,8 @@ static const struct hns3_stats hns3_txq_stats[] = {
+ 	HNS3_TQP_STAT("l4_proto_err", tx_l4_proto_err),
+ 	HNS3_TQP_STAT("l2l3l4_err", tx_l2l3l4_err),
+ 	HNS3_TQP_STAT("tso_err", tx_tso_err),
++	HNS3_TQP_STAT("over_max_recursion", over_max_recursion),
++	HNS3_TQP_STAT("hw_limitation", hw_limitation),
+ };
+ 
+ #define HNS3_TXQ_STATS_COUNT ARRAY_SIZE(hns3_txq_stats)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+index 9ee55ee0487d9..3226ca1761556 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+@@ -753,8 +753,9 @@ static int hclge_config_igu_egu_hw_err_int(struct hclge_dev *hdev, bool en)
+ 
+ 	/* configure IGU,EGU error interrupts */
+ 	hclge_cmd_setup_basic_desc(&desc, HCLGE_IGU_COMMON_INT_EN, false);
++	desc.data[0] = cpu_to_le32(HCLGE_IGU_ERR_INT_TYPE);
+ 	if (en)
+-		desc.data[0] = cpu_to_le32(HCLGE_IGU_ERR_INT_EN);
++		desc.data[0] |= cpu_to_le32(HCLGE_IGU_ERR_INT_EN);
+ 
+ 	desc.data[1] = cpu_to_le32(HCLGE_IGU_ERR_INT_EN_MASK);
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
+index 608fe26fc3fed..d647f3c841345 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
+@@ -32,7 +32,8 @@
+ #define HCLGE_TQP_ECC_ERR_INT_EN_MASK	0x0FFF
+ #define HCLGE_MSIX_SRAM_ECC_ERR_INT_EN_MASK	0x0F000000
+ #define HCLGE_MSIX_SRAM_ECC_ERR_INT_EN	0x0F000000
+-#define HCLGE_IGU_ERR_INT_EN	0x0000066F
++#define HCLGE_IGU_ERR_INT_EN	0x0000000F
++#define HCLGE_IGU_ERR_INT_TYPE	0x00000660
+ #define HCLGE_IGU_ERR_INT_EN_MASK	0x000F
+ #define HCLGE_IGU_TNL_ERR_INT_EN    0x0002AABF
+ #define HCLGE_IGU_TNL_ERR_INT_EN_MASK  0x003F
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index b856dbe4db73b..98190aa907818 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -10845,7 +10845,6 @@ static int hclge_get_64_bit_regs(struct hclge_dev *hdev, u32 regs_num,
+ #define REG_LEN_PER_LINE	(REG_NUM_PER_LINE * sizeof(u32))
+ #define REG_SEPARATOR_LINE	1
+ #define REG_NUM_REMAIN_MASK	3
+-#define BD_LIST_MAX_NUM		30
+ 
+ int hclge_query_bd_num_cmd_send(struct hclge_dev *hdev, struct hclge_desc *desc)
+ {
+@@ -10939,15 +10938,19 @@ static int hclge_get_dfx_reg_len(struct hclge_dev *hdev, int *len)
+ {
+ 	u32 dfx_reg_type_num = ARRAY_SIZE(hclge_dfx_bd_offset_list);
+ 	int data_len_per_desc, bd_num, i;
+-	int bd_num_list[BD_LIST_MAX_NUM];
++	int *bd_num_list;
+ 	u32 data_len;
+ 	int ret;
+ 
++	bd_num_list = kcalloc(dfx_reg_type_num, sizeof(int), GFP_KERNEL);
++	if (!bd_num_list)
++		return -ENOMEM;
++
+ 	ret = hclge_get_dfx_reg_bd_num(hdev, bd_num_list, dfx_reg_type_num);
+ 	if (ret) {
+ 		dev_err(&hdev->pdev->dev,
+ 			"Get dfx reg bd num fail, status is %d.\n", ret);
+-		return ret;
++		goto out;
+ 	}
+ 
+ 	data_len_per_desc = sizeof_field(struct hclge_desc, data);
+@@ -10958,6 +10961,8 @@ static int hclge_get_dfx_reg_len(struct hclge_dev *hdev, int *len)
+ 		*len += (data_len / REG_LEN_PER_LINE + 1) * REG_LEN_PER_LINE;
+ 	}
+ 
++out:
++	kfree(bd_num_list);
+ 	return ret;
+ }
+ 
+@@ -10965,16 +10970,20 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
+ {
+ 	u32 dfx_reg_type_num = ARRAY_SIZE(hclge_dfx_bd_offset_list);
+ 	int bd_num, bd_num_max, buf_len, i;
+-	int bd_num_list[BD_LIST_MAX_NUM];
+ 	struct hclge_desc *desc_src;
++	int *bd_num_list;
+ 	u32 *reg = data;
+ 	int ret;
+ 
++	bd_num_list = kcalloc(dfx_reg_type_num, sizeof(int), GFP_KERNEL);
++	if (!bd_num_list)
++		return -ENOMEM;
++
+ 	ret = hclge_get_dfx_reg_bd_num(hdev, bd_num_list, dfx_reg_type_num);
+ 	if (ret) {
+ 		dev_err(&hdev->pdev->dev,
+ 			"Get dfx reg bd num fail, status is %d.\n", ret);
+-		return ret;
++		goto out;
+ 	}
+ 
+ 	bd_num_max = bd_num_list[0];
+@@ -10983,8 +10992,10 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
+ 
+ 	buf_len = sizeof(*desc_src) * bd_num_max;
+ 	desc_src = kzalloc(buf_len, GFP_KERNEL);
+-	if (!desc_src)
+-		return -ENOMEM;
++	if (!desc_src) {
++		ret = -ENOMEM;
++		goto out;
++	}
+ 
+ 	for (i = 0; i < dfx_reg_type_num; i++) {
+ 		bd_num = bd_num_list[i];
+@@ -11000,6 +11011,8 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
+ 	}
+ 
+ 	kfree(desc_src);
++out:
++	kfree(bd_num_list);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index 9c8004fc9dc4f..e0254672831f4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -519,7 +519,7 @@ static void hclge_get_link_mode(struct hclge_vport *vport,
+ 	unsigned long advertising;
+ 	unsigned long supported;
+ 	unsigned long send_data;
+-	u8 msg_data[10];
++	u8 msg_data[10] = {};
+ 	u8 dest_vfid;
+ 
+ 	advertising = hdev->hw.mac.advertising[0];
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+index e898207025406..c194bba187d6c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+@@ -255,6 +255,8 @@ void hclge_mac_start_phy(struct hclge_dev *hdev)
+ 	if (!phydev)
+ 		return;
+ 
++	phy_loopback(phydev, false);
++
+ 	phy_start(phydev);
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+index 1e960c3c7ef05..e84054fb8213d 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+@@ -1565,8 +1565,10 @@ enum i40e_aq_phy_type {
+ 	I40E_PHY_TYPE_25GBASE_LR		= 0x22,
+ 	I40E_PHY_TYPE_25GBASE_AOC		= 0x23,
+ 	I40E_PHY_TYPE_25GBASE_ACC		= 0x24,
+-	I40E_PHY_TYPE_2_5GBASE_T		= 0x30,
+-	I40E_PHY_TYPE_5GBASE_T			= 0x31,
++	I40E_PHY_TYPE_2_5GBASE_T		= 0x26,
++	I40E_PHY_TYPE_5GBASE_T			= 0x27,
++	I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS	= 0x30,
++	I40E_PHY_TYPE_5GBASE_T_LINK_STATUS	= 0x31,
+ 	I40E_PHY_TYPE_MAX,
+ 	I40E_PHY_TYPE_NOT_SUPPORTED_HIGH_TEMP	= 0xFD,
+ 	I40E_PHY_TYPE_EMPTY			= 0xFE,
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c
+index a2dba32383f63..32f3facbed1a5 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_client.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_client.c
+@@ -375,6 +375,7 @@ void i40e_client_subtask(struct i40e_pf *pf)
+ 				clear_bit(__I40E_CLIENT_INSTANCE_OPENED,
+ 					  &cdev->state);
+ 				i40e_client_del_instance(pf);
++				return;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
+index adc9e4fa47891..ba109073d6052 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
+@@ -1154,8 +1154,8 @@ static enum i40e_media_type i40e_get_media_type(struct i40e_hw *hw)
+ 		break;
+ 	case I40E_PHY_TYPE_100BASE_TX:
+ 	case I40E_PHY_TYPE_1000BASE_T:
+-	case I40E_PHY_TYPE_2_5GBASE_T:
+-	case I40E_PHY_TYPE_5GBASE_T:
++	case I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS:
++	case I40E_PHY_TYPE_5GBASE_T_LINK_STATUS:
+ 	case I40E_PHY_TYPE_10GBASE_T:
+ 		media = I40E_MEDIA_TYPE_BASET;
+ 		break;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 31d48a85cfaf0..5d48bc0c3f6c4 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -841,8 +841,8 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw,
+ 							     10000baseT_Full);
+ 		break;
+ 	case I40E_PHY_TYPE_10GBASE_T:
+-	case I40E_PHY_TYPE_5GBASE_T:
+-	case I40E_PHY_TYPE_2_5GBASE_T:
++	case I40E_PHY_TYPE_5GBASE_T_LINK_STATUS:
++	case I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS:
+ 	case I40E_PHY_TYPE_1000BASE_T:
+ 	case I40E_PHY_TYPE_100BASE_TX:
+ 		ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg);
+@@ -1409,7 +1409,8 @@ static int i40e_set_fec_cfg(struct net_device *netdev, u8 fec_cfg)
+ 
+ 		memset(&config, 0, sizeof(config));
+ 		config.phy_type = abilities.phy_type;
+-		config.abilities = abilities.abilities;
++		config.abilities = abilities.abilities |
++				   I40E_AQ_PHY_ENABLE_ATOMIC_LINK;
+ 		config.phy_type_ext = abilities.phy_type_ext;
+ 		config.link_speed = abilities.link_speed;
+ 		config.eee_capability = abilities.eee_capability;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 62b439232fa50..011f484606a3a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -1810,10 +1810,6 @@ static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb,
+ 				 union i40e_rx_desc *rx_desc)
+ 
+ {
+-	/* XDP packets use error pointer so abort at this point */
+-	if (IS_ERR(skb))
+-		return true;
+-
+ 	/* ERR_MASK will only have valid bits if EOP set, and
+ 	 * what we are doing here is actually checking
+ 	 * I40E_RX_DESC_ERROR_RXE_SHIFT, since it is the zeroth bit in
+@@ -2426,7 +2422,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
+ 		}
+ 
+ 		/* exit if we failed to retrieve a buffer */
+-		if (!skb) {
++		if (!xdp_res && !skb) {
+ 			rx_ring->rx_stats.alloc_buff_failed++;
+ 			rx_buffer->pagecnt_bias++;
+ 			break;
+@@ -2438,7 +2434,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
+ 		if (i40e_is_non_eop(rx_ring, rx_desc, skb))
+ 			continue;
+ 
+-		if (i40e_cleanup_headers(rx_ring, skb, rx_desc)) {
++		if (xdp_res || i40e_cleanup_headers(rx_ring, skb, rx_desc)) {
+ 			skb = NULL;
+ 			continue;
+ 		}
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
+index c0bdc666f5571..add67f7b73e8b 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
+@@ -239,11 +239,8 @@ struct i40e_phy_info {
+ #define I40E_CAP_PHY_TYPE_25GBASE_ACC BIT_ULL(I40E_PHY_TYPE_25GBASE_ACC + \
+ 					     I40E_PHY_TYPE_OFFSET)
+ /* Offset for 2.5G/5G PHY Types value to bit number conversion */
+-#define I40E_PHY_TYPE_OFFSET2 (-10)
+-#define I40E_CAP_PHY_TYPE_2_5GBASE_T BIT_ULL(I40E_PHY_TYPE_2_5GBASE_T + \
+-					     I40E_PHY_TYPE_OFFSET2)
+-#define I40E_CAP_PHY_TYPE_5GBASE_T BIT_ULL(I40E_PHY_TYPE_5GBASE_T + \
+-					     I40E_PHY_TYPE_OFFSET2)
++#define I40E_CAP_PHY_TYPE_2_5GBASE_T BIT_ULL(I40E_PHY_TYPE_2_5GBASE_T)
++#define I40E_CAP_PHY_TYPE_5GBASE_T BIT_ULL(I40E_PHY_TYPE_5GBASE_T)
+ #define I40E_HW_CAP_MAX_GPIO			30
+ /* Capabilities of a PF or a VF or the whole device */
+ struct i40e_hw_capabilities {
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index dc5b3c06d1e01..ebd08543791bd 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -3899,8 +3899,6 @@ static void iavf_remove(struct pci_dev *pdev)
+ 
+ 	iounmap(hw->hw_addr);
+ 	pci_release_regions(pdev);
+-	iavf_free_all_tx_resources(adapter);
+-	iavf_free_all_rx_resources(adapter);
+ 	iavf_free_queues(adapter);
+ 	kfree(adapter->vf_res);
+ 	spin_lock_bh(&adapter->mac_vlan_list_lock);
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 170367eaa95aa..e1384503dd4d5 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -2684,38 +2684,46 @@ int ice_vsi_release(struct ice_vsi *vsi)
+ }
+ 
+ /**
+- * ice_vsi_rebuild_update_coalesce - set coalesce for a q_vector
++ * ice_vsi_rebuild_update_coalesce_intrl - set interrupt rate limit for a q_vector
+  * @q_vector: pointer to q_vector which is being updated
+- * @coalesce: pointer to array of struct with stored coalesce
++ * @stored_intrl_setting: original INTRL setting
+  *
+  * Set coalesce param in q_vector and update these parameters in HW.
+  */
+ static void
+-ice_vsi_rebuild_update_coalesce(struct ice_q_vector *q_vector,
+-				struct ice_coalesce_stored *coalesce)
++ice_vsi_rebuild_update_coalesce_intrl(struct ice_q_vector *q_vector,
++				      u16 stored_intrl_setting)
+ {
+-	struct ice_ring_container *rx_rc = &q_vector->rx;
+-	struct ice_ring_container *tx_rc = &q_vector->tx;
+ 	struct ice_hw *hw = &q_vector->vsi->back->hw;
+ 
+-	tx_rc->itr_setting = coalesce->itr_tx;
+-	rx_rc->itr_setting = coalesce->itr_rx;
+-
+-	/* dynamic ITR values will be updated during Tx/Rx */
+-	if (!ITR_IS_DYNAMIC(tx_rc->itr_setting))
+-		wr32(hw, GLINT_ITR(tx_rc->itr_idx, q_vector->reg_idx),
+-		     ITR_REG_ALIGN(tx_rc->itr_setting) >>
+-		     ICE_ITR_GRAN_S);
+-	if (!ITR_IS_DYNAMIC(rx_rc->itr_setting))
+-		wr32(hw, GLINT_ITR(rx_rc->itr_idx, q_vector->reg_idx),
+-		     ITR_REG_ALIGN(rx_rc->itr_setting) >>
+-		     ICE_ITR_GRAN_S);
+-
+-	q_vector->intrl = coalesce->intrl;
++	q_vector->intrl = stored_intrl_setting;
+ 	wr32(hw, GLINT_RATE(q_vector->reg_idx),
+ 	     ice_intrl_usec_to_reg(q_vector->intrl, hw->intrl_gran));
+ }
+ 
++/**
++ * ice_vsi_rebuild_update_coalesce_itr - set coalesce for a q_vector
++ * @q_vector: pointer to q_vector which is being updated
++ * @rc: pointer to ring container
++ * @stored_itr_setting: original ITR setting
++ *
++ * Set coalesce param in q_vector and update these parameters in HW.
++ */
++static void
++ice_vsi_rebuild_update_coalesce_itr(struct ice_q_vector *q_vector,
++				    struct ice_ring_container *rc,
++				    u16 stored_itr_setting)
++{
++	struct ice_hw *hw = &q_vector->vsi->back->hw;
++
++	rc->itr_setting = stored_itr_setting;
++
++	/* dynamic ITR values will be updated during Tx/Rx */
++	if (!ITR_IS_DYNAMIC(rc->itr_setting))
++		wr32(hw, GLINT_ITR(rc->itr_idx, q_vector->reg_idx),
++		     ITR_REG_ALIGN(rc->itr_setting) >> ICE_ITR_GRAN_S);
++}
++
+ /**
+  * ice_vsi_rebuild_get_coalesce - get coalesce from all q_vectors
+  * @vsi: VSI connected with q_vectors
+@@ -2735,6 +2743,11 @@ ice_vsi_rebuild_get_coalesce(struct ice_vsi *vsi,
+ 		coalesce[i].itr_tx = q_vector->tx.itr_setting;
+ 		coalesce[i].itr_rx = q_vector->rx.itr_setting;
+ 		coalesce[i].intrl = q_vector->intrl;
++
++		if (i < vsi->num_txq)
++			coalesce[i].tx_valid = true;
++		if (i < vsi->num_rxq)
++			coalesce[i].rx_valid = true;
+ 	}
+ 
+ 	return vsi->num_q_vectors;
+@@ -2759,17 +2772,59 @@ ice_vsi_rebuild_set_coalesce(struct ice_vsi *vsi,
+ 	if ((size && !coalesce) || !vsi)
+ 		return;
+ 
+-	for (i = 0; i < size && i < vsi->num_q_vectors; i++)
+-		ice_vsi_rebuild_update_coalesce(vsi->q_vectors[i],
+-						&coalesce[i]);
+-
+-	/* number of q_vectors increased, so assume coalesce settings were
+-	 * changed globally (i.e. ethtool -C eth0 instead of per-queue) and use
+-	 * the previous settings from q_vector 0 for all of the new q_vectors
++	/* There are a couple of cases that have to be handled here:
++	 *   1. The case where the number of queue vectors stays the same, but
++	 *      the number of Tx or Rx rings changes (the first for loop)
++	 *   2. The case where the number of queue vectors increased (the
++	 *      second for loop)
+ 	 */
+-	for (; i < vsi->num_q_vectors; i++)
+-		ice_vsi_rebuild_update_coalesce(vsi->q_vectors[i],
+-						&coalesce[0]);
++	for (i = 0; i < size && i < vsi->num_q_vectors; i++) {
++		/* There are 2 cases to handle here and they are the same for
++		 * both Tx and Rx:
++		 *   if the entry was valid previously (coalesce[i].[tr]x_valid
++		 *   and the loop variable is less than the number of rings
++		 *   allocated, then write the previous values
++		 *
++		 *   if the entry was not valid previously, but the number of
++		 *   rings is less than are allocated (this means the number of
++		 *   rings increased from previously), then write out the
++		 *   values in the first element
++		 */
++		if (i < vsi->alloc_rxq && coalesce[i].rx_valid)
++			ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
++							    &vsi->q_vectors[i]->rx,
++							    coalesce[i].itr_rx);
++		else if (i < vsi->alloc_rxq)
++			ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
++							    &vsi->q_vectors[i]->rx,
++							    coalesce[0].itr_rx);
++
++		if (i < vsi->alloc_txq && coalesce[i].tx_valid)
++			ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
++							    &vsi->q_vectors[i]->tx,
++							    coalesce[i].itr_tx);
++		else if (i < vsi->alloc_txq)
++			ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
++							    &vsi->q_vectors[i]->tx,
++							    coalesce[0].itr_tx);
++
++		ice_vsi_rebuild_update_coalesce_intrl(vsi->q_vectors[i],
++						      coalesce[i].intrl);
++	}
++
++	/* the number of queue vectors increased so write whatever is in
++	 * the first element
++	 */
++	for (; i < vsi->num_q_vectors; i++) {
++		ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
++						    &vsi->q_vectors[i]->tx,
++						    coalesce[0].itr_tx);
++		ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
++						    &vsi->q_vectors[i]->rx,
++						    coalesce[0].itr_rx);
++		ice_vsi_rebuild_update_coalesce_intrl(vsi->q_vectors[i],
++						      coalesce[0].intrl);
++	}
+ }
+ 
+ /**
+@@ -2798,9 +2853,11 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
+ 
+ 	coalesce = kcalloc(vsi->num_q_vectors,
+ 			   sizeof(struct ice_coalesce_stored), GFP_KERNEL);
+-	if (coalesce)
+-		prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi,
+-								  coalesce);
++	if (!coalesce)
++		return -ENOMEM;
++
++	prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce);
++
+ 	ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
+ 	ice_vsi_free_q_vectors(vsi);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
+index ff1a1cbd078e7..eab7ceae926b3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
+@@ -351,6 +351,8 @@ struct ice_coalesce_stored {
+ 	u16 itr_tx;
+ 	u16 itr_rx;
+ 	u8 intrl;
++	u8 tx_valid;
++	u8 rx_valid;
+ };
+ 
+ /* iterator for handling rings in ring container */
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 6d2d60675ffd7..d930fcda9c3b6 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -1319,7 +1319,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
+ 		skb->protocol = eth_type_trans(skb, netdev);
+ 
+ 		if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX &&
+-		    RX_DMA_VID(trxd.rxd3))
++		    (trxd.rxd2 & RX_DMA_VTAG))
+ 			__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+ 					       RX_DMA_VID(trxd.rxd3));
+ 		skb_record_rx_queue(skb, 0);
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+index 454cfcd465fda..73ce1f0f307a4 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+@@ -295,6 +295,7 @@
+ #define RX_DMA_LSO		BIT(30)
+ #define RX_DMA_PLEN0(_x)	(((_x) & 0x3fff) << 16)
+ #define RX_DMA_GET_PLEN0(_x)	(((_x) >> 16) & 0x3fff)
++#define RX_DMA_VTAG		BIT(15)
+ 
+ /* QDMA descriptor rxd3 */
+ #define RX_DMA_VID(_x)		((_x) & 0xfff)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index 38a23d209b33b..3736680680715 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -486,7 +486,7 @@ static void mlx5e_tx_mpwqe_session_start(struct mlx5e_txqsq *sq,
+ 
+ 	pi = mlx5e_txqsq_get_next_pi(sq, MLX5E_TX_MPW_MAX_WQEBBS);
+ 	wqe = MLX5E_TX_FETCH_WQE(sq, pi);
+-	prefetchw(wqe->data);
++	net_prefetchw(wqe->data);
+ 
+ 	*session = (struct mlx5e_tx_mpwqe) {
+ 		.wqe = wqe,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+index bf3250e0e59ca..749585fe6fc96 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+@@ -352,6 +352,8 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 	plat_dat->bsp_priv = gmac;
+ 	plat_dat->fix_mac_speed = ipq806x_gmac_fix_mac_speed;
+ 	plat_dat->multicast_filter_bins = 0;
++	plat_dat->tx_fifo_size = 8192;
++	plat_dat->rx_fifo_size = 8192;
+ 
+ 	err = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+ 	if (err)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index ced6d76a0d853..16c538cfaf59d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -617,6 +617,7 @@ static void dwmac4_set_filter(struct mac_device_info *hw,
+ 	value &= ~GMAC_PACKET_FILTER_PCF;
+ 	value &= ~GMAC_PACKET_FILTER_PM;
+ 	value &= ~GMAC_PACKET_FILTER_PR;
++	value &= ~GMAC_PACKET_FILTER_RA;
+ 	if (dev->flags & IFF_PROMISC) {
+ 		/* VLAN Tag Filter Fail Packets Queuing */
+ 		if (hw->vlan_fail_q_en) {
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 173ab6ceed1f6..eca86225a3413 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -4986,31 +4986,6 @@ int ath11k_wmi_pull_fw_stats(struct ath11k_base *ab, struct sk_buff *skb,
+ 	return 0;
+ }
+ 
+-static int
+-ath11k_pull_pdev_temp_ev(struct ath11k_base *ab, u8 *evt_buf,
+-			 u32 len, const struct wmi_pdev_temperature_event *ev)
+-{
+-	const void **tb;
+-	int ret;
+-
+-	tb = ath11k_wmi_tlv_parse_alloc(ab, evt_buf, len, GFP_ATOMIC);
+-	if (IS_ERR(tb)) {
+-		ret = PTR_ERR(tb);
+-		ath11k_warn(ab, "failed to parse tlv: %d\n", ret);
+-		return ret;
+-	}
+-
+-	ev = tb[WMI_TAG_PDEV_TEMPERATURE_EVENT];
+-	if (!ev) {
+-		ath11k_warn(ab, "failed to fetch pdev temp ev");
+-		kfree(tb);
+-		return -EPROTO;
+-	}
+-
+-	kfree(tb);
+-	return 0;
+-}
+-
+ size_t ath11k_wmi_fw_stats_num_vdevs(struct list_head *head)
+ {
+ 	struct ath11k_fw_stats_vdev *i;
+@@ -6390,23 +6365,37 @@ ath11k_wmi_pdev_temperature_event(struct ath11k_base *ab,
+ 				  struct sk_buff *skb)
+ {
+ 	struct ath11k *ar;
+-	struct wmi_pdev_temperature_event ev = {0};
++	const void **tb;
++	const struct wmi_pdev_temperature_event *ev;
++	int ret;
++
++	tb = ath11k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
++	if (IS_ERR(tb)) {
++		ret = PTR_ERR(tb);
++		ath11k_warn(ab, "failed to parse tlv: %d\n", ret);
++		return;
++	}
+ 
+-	if (ath11k_pull_pdev_temp_ev(ab, skb->data, skb->len, &ev) != 0) {
+-		ath11k_warn(ab, "failed to extract pdev temperature event");
++	ev = tb[WMI_TAG_PDEV_TEMPERATURE_EVENT];
++	if (!ev) {
++		ath11k_warn(ab, "failed to fetch pdev temp ev");
++		kfree(tb);
+ 		return;
+ 	}
+ 
+ 	ath11k_dbg(ab, ATH11K_DBG_WMI,
+-		   "pdev temperature ev temp %d pdev_id %d\n", ev.temp, ev.pdev_id);
++		   "pdev temperature ev temp %d pdev_id %d\n", ev->temp, ev->pdev_id);
+ 
+-	ar = ath11k_mac_get_ar_by_pdev_id(ab, ev.pdev_id);
++	ar = ath11k_mac_get_ar_by_pdev_id(ab, ev->pdev_id);
+ 	if (!ar) {
+-		ath11k_warn(ab, "invalid pdev id in pdev temperature ev %d", ev.pdev_id);
++		ath11k_warn(ab, "invalid pdev id in pdev temperature ev %d", ev->pdev_id);
++		kfree(tb);
+ 		return;
+ 	}
+ 
+-	ath11k_thermal_event_temperature(ar, ev.temp);
++	ath11k_thermal_event_temperature(ar, ev->temp);
++
++	kfree(tb);
+ }
+ 
+ static void ath11k_wmi_tlv_op_rx(struct ath11k_base *ab, struct sk_buff *skb)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 500fdb0b6c42b..eeb70560b746e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -73,10 +73,20 @@
+ #include "iwl-prph.h"
+ #include "internal.h"
+ 
++#define TRANS_CFG_MARKER BIT(0)
++#define _IS_A(cfg, _struct) __builtin_types_compatible_p(typeof(cfg),	\
++							 struct _struct)
++extern int _invalid_type;
++#define _TRANS_CFG_MARKER(cfg)						\
++	(__builtin_choose_expr(_IS_A(cfg, iwl_cfg_trans_params),	\
++			       TRANS_CFG_MARKER,			\
++	 __builtin_choose_expr(_IS_A(cfg, iwl_cfg), 0, _invalid_type)))
++#define _ASSIGN_CFG(cfg) (_TRANS_CFG_MARKER(cfg) + (kernel_ulong_t)&(cfg))
++
+ #define IWL_PCI_DEVICE(dev, subdev, cfg) \
+ 	.vendor = PCI_VENDOR_ID_INTEL,  .device = (dev), \
+ 	.subvendor = PCI_ANY_ID, .subdevice = (subdev), \
+-	.driver_data = (kernel_ulong_t)&(cfg)
++	.driver_data = _ASSIGN_CFG(cfg)
+ 
+ /* Hardware specific file defines the PCI IDs table for that hardware module */
+ static const struct pci_device_id iwl_hw_card_ids[] = {
+@@ -1018,20 +1028,23 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 
+ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ {
+-	const struct iwl_cfg_trans_params *trans =
+-		(struct iwl_cfg_trans_params *)(ent->driver_data);
++	const struct iwl_cfg_trans_params *trans;
+ 	const struct iwl_cfg *cfg_7265d __maybe_unused = NULL;
+ 	struct iwl_trans *iwl_trans;
+ 	struct iwl_trans_pcie *trans_pcie;
+ 	unsigned long flags;
+ 	int i, ret;
++	const struct iwl_cfg *cfg;
++
++	trans = (void *)(ent->driver_data & ~TRANS_CFG_MARKER);
++
+ 	/*
+ 	 * This is needed for backwards compatibility with the old
+ 	 * tables, so we don't need to change all the config structs
+ 	 * at the same time.  The cfg is used to compare with the old
+ 	 * full cfg structs.
+ 	 */
+-	const struct iwl_cfg *cfg = (struct iwl_cfg *)(ent->driver_data);
++	cfg = (void *)(ent->driver_data & ~TRANS_CFG_MARKER);
+ 
+ 	/* make sure trans is the first element in iwl_cfg */
+ 	BUILD_BUG_ON(offsetof(struct iwl_cfg, trans));
+@@ -1133,11 +1146,19 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ #endif
+ 	/*
+-	 * If we didn't set the cfg yet, assume the trans is actually
+-	 * a full cfg from the old tables.
++	 * If we didn't set the cfg yet, the PCI ID table entry should have
++	 * been a full config - if yes, use it, otherwise fail.
+ 	 */
+-	if (!iwl_trans->cfg)
++	if (!iwl_trans->cfg) {
++		if (ent->driver_data & TRANS_CFG_MARKER) {
++			pr_err("No config found for PCI dev %04x/%04x, rev=0x%x, rfid=0x%x\n",
++			       pdev->device, pdev->subsystem_device,
++			       iwl_trans->hw_rev, iwl_trans->hw_rf_id);
++			ret = -EINVAL;
++			goto out_free_trans;
++		}
+ 		iwl_trans->cfg = cfg;
++	}
+ 
+ 	/* if we don't have a name yet, copy name from the old cfg */
+ 	if (!iwl_trans->name)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
+index f4756bb946c36..e9cdcdc54d5c3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
+@@ -86,6 +86,7 @@ static int mt7615_check_eeprom(struct mt76_dev *dev)
+ 	switch (val) {
+ 	case 0x7615:
+ 	case 0x7622:
++	case 0x7663:
+ 		return 0;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index c31036f57aef8..62a971660da73 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -341,12 +341,20 @@ static int mt7615_mcu_drv_pmctrl(struct mt7615_dev *dev)
+ 	u32 addr;
+ 	int err;
+ 
+-	addr = is_mt7663(mdev) ? MT_PCIE_DOORBELL_PUSH : MT_CFG_LPCR_HOST;
++	if (is_mt7663(mdev)) {
++		/* Clear firmware own via N9 eint */
++		mt76_wr(dev, MT_PCIE_DOORBELL_PUSH, MT_CFG_LPCR_HOST_DRV_OWN);
++		mt76_poll(dev, MT_CONN_ON_MISC, MT_CFG_LPCR_HOST_FW_OWN, 0, 3000);
++
++		addr = MT_CONN_HIF_ON_LPCTL;
++	} else {
++		addr = MT_CFG_LPCR_HOST;
++	}
++
+ 	mt76_wr(dev, addr, MT_CFG_LPCR_HOST_DRV_OWN);
+ 
+ 	mt7622_trigger_hif_int(dev, true);
+ 
+-	addr = is_mt7663(mdev) ? MT_CONN_HIF_ON_LPCTL : MT_CFG_LPCR_HOST;
+ 	err = !mt76_poll_msec(dev, addr, MT_CFG_LPCR_HOST_FW_OWN, 0, 3000);
+ 
+ 	mt7622_trigger_hif_int(dev, false);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
+index 11b769af2f8f6..0f191bd28417c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
+@@ -446,6 +446,10 @@ int mt76x02_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 	    !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
+ 		return -EOPNOTSUPP;
+ 
++	/* MT76x0 GTK offloading does not work with more than one VIF */
++	if (is_mt76x0(dev) && !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
++		return -EOPNOTSUPP;
++
+ 	msta = sta ? (struct mt76x02_sta *)sta->drv_priv : NULL;
+ 	wcid = msta ? &msta->wcid : &mvif->group_wcid;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+index 7deba7ebd68ac..e4c5f968f706d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+@@ -104,7 +104,7 @@ int mt7915_eeprom_get_target_power(struct mt7915_dev *dev,
+ 				   struct ieee80211_channel *chan,
+ 				   u8 chain_idx)
+ {
+-	int index;
++	int index, target_power;
+ 	bool tssi_on;
+ 
+ 	if (chain_idx > 3)
+@@ -113,15 +113,22 @@ int mt7915_eeprom_get_target_power(struct mt7915_dev *dev,
+ 	tssi_on = mt7915_tssi_enabled(dev, chan->band);
+ 
+ 	if (chan->band == NL80211_BAND_2GHZ) {
+-		index = MT_EE_TX0_POWER_2G + chain_idx * 3 + !tssi_on;
++		index = MT_EE_TX0_POWER_2G + chain_idx * 3;
++		target_power = mt7915_eeprom_read(dev, index);
++
++		if (!tssi_on)
++			target_power += mt7915_eeprom_read(dev, index + 1);
+ 	} else {
+-		int group = tssi_on ?
+-			    mt7915_get_channel_group(chan->hw_value) : 8;
++		int group = mt7915_get_channel_group(chan->hw_value);
++
++		index = MT_EE_TX0_POWER_5G + chain_idx * 12;
++		target_power = mt7915_eeprom_read(dev, index + group);
+ 
+-		index = MT_EE_TX0_POWER_5G + chain_idx * 12 + group;
++		if (!tssi_on)
++			target_power += mt7915_eeprom_read(dev, index + 8);
+ 	}
+ 
+-	return mt7915_eeprom_read(dev, index);
++	return target_power;
+ }
+ 
+ static const u8 sku_cck_delta_map[] = {
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/event.c b/drivers/net/wireless/quantenna/qtnfmac/event.c
+index c775c177933b2..8dc80574d08d9 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/event.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/event.c
+@@ -570,8 +570,10 @@ qtnf_event_handle_external_auth(struct qtnf_vif *vif,
+ 		return 0;
+ 
+ 	if (ev->ssid_len) {
+-		memcpy(auth.ssid.ssid, ev->ssid, ev->ssid_len);
+-		auth.ssid.ssid_len = ev->ssid_len;
++		int len = clamp_val(ev->ssid_len, 0, IEEE80211_MAX_SSID_LEN);
++
++		memcpy(auth.ssid.ssid, ev->ssid, len);
++		auth.ssid.ssid_len = len;
+ 	}
+ 
+ 	auth.key_mgmt_suite = le32_to_cpu(ev->akm_suite);
+diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
+index ffb02e6142172..8ba0b0824ae9b 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.h
++++ b/drivers/net/wireless/realtek/rtw88/main.h
+@@ -1156,6 +1156,7 @@ struct rtw_chip_info {
+ 	bool en_dis_dpd;
+ 	u16 dpd_ratemask;
+ 	u8 iqk_threshold;
++	u8 lck_threshold;
+ 	const struct rtw_pwr_track_tbl *pwr_track_tbl;
+ 
+ 	u8 bfer_su_max_num;
+@@ -1485,6 +1486,7 @@ struct rtw_dm_info {
+ 	u8 tx_rate;
+ 	u8 thermal_avg[RTW_RF_PATH_MAX];
+ 	u8 thermal_meter_k;
++	u8 thermal_meter_lck;
+ 	s8 delta_power_index[RTW_RF_PATH_MAX];
+ 	s8 delta_power_index_last[RTW_RF_PATH_MAX];
+ 	u8 default_ofdm_index;
+diff --git a/drivers/net/wireless/realtek/rtw88/phy.c b/drivers/net/wireless/realtek/rtw88/phy.c
+index 36e2f0dba00c0..af8b703d11d4c 100644
+--- a/drivers/net/wireless/realtek/rtw88/phy.c
++++ b/drivers/net/wireless/realtek/rtw88/phy.c
+@@ -2154,6 +2154,20 @@ s8 rtw_phy_pwrtrack_get_pwridx(struct rtw_dev *rtwdev,
+ }
+ EXPORT_SYMBOL(rtw_phy_pwrtrack_get_pwridx);
+ 
++bool rtw_phy_pwrtrack_need_lck(struct rtw_dev *rtwdev)
++{
++	struct rtw_dm_info *dm_info = &rtwdev->dm_info;
++	u8 delta_lck;
++
++	delta_lck = abs(dm_info->thermal_avg[0] - dm_info->thermal_meter_lck);
++	if (delta_lck >= rtwdev->chip->lck_threshold) {
++		dm_info->thermal_meter_lck = dm_info->thermal_avg[0];
++		return true;
++	}
++	return false;
++}
++EXPORT_SYMBOL(rtw_phy_pwrtrack_need_lck);
++
+ bool rtw_phy_pwrtrack_need_iqk(struct rtw_dev *rtwdev)
+ {
+ 	struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+diff --git a/drivers/net/wireless/realtek/rtw88/phy.h b/drivers/net/wireless/realtek/rtw88/phy.h
+index b924ed07630a6..9623248c94667 100644
+--- a/drivers/net/wireless/realtek/rtw88/phy.h
++++ b/drivers/net/wireless/realtek/rtw88/phy.h
+@@ -55,6 +55,7 @@ u8 rtw_phy_pwrtrack_get_delta(struct rtw_dev *rtwdev, u8 path);
+ s8 rtw_phy_pwrtrack_get_pwridx(struct rtw_dev *rtwdev,
+ 			       struct rtw_swing_table *swing_table,
+ 			       u8 tbl_path, u8 therm_path, u8 delta);
++bool rtw_phy_pwrtrack_need_lck(struct rtw_dev *rtwdev);
+ bool rtw_phy_pwrtrack_need_iqk(struct rtw_dev *rtwdev);
+ void rtw_phy_config_swing_table(struct rtw_dev *rtwdev,
+ 				struct rtw_swing_table *swing_table);
+diff --git a/drivers/net/wireless/realtek/rtw88/reg.h b/drivers/net/wireless/realtek/rtw88/reg.h
+index 86b94c008a272..aca3dbdc2d5a5 100644
+--- a/drivers/net/wireless/realtek/rtw88/reg.h
++++ b/drivers/net/wireless/realtek/rtw88/reg.h
+@@ -639,8 +639,13 @@
+ #define RF_TXATANK	0x64
+ #define RF_TRXIQ	0x66
+ #define RF_RXIQGEN	0x8d
++#define RF_SYN_PFD	0xb0
+ #define RF_XTALX2	0xb8
++#define RF_SYN_CTRL	0xbb
+ #define RF_MALSEL	0xbe
++#define RF_SYN_AAC	0xc9
++#define RF_AAC_CTRL	0xca
++#define RF_FAST_LCK	0xcc
+ #define RF_RCKD		0xde
+ #define RF_TXADBG	0xde
+ #define RF_LUTDBG	0xdf
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index e37300e98517b..b718f5d810be8 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -1124,6 +1124,7 @@ static void rtw8822c_pwrtrack_init(struct rtw_dev *rtwdev)
+ 
+ 	dm_info->pwr_trk_triggered = false;
+ 	dm_info->thermal_meter_k = rtwdev->efuse.thermal_meter_k;
++	dm_info->thermal_meter_lck = rtwdev->efuse.thermal_meter_k;
+ }
+ 
+ static void rtw8822c_phy_set_param(struct rtw_dev *rtwdev)
+@@ -2106,6 +2107,26 @@ static void rtw8822c_false_alarm_statistics(struct rtw_dev *rtwdev)
+ 	rtw_write32_set(rtwdev, REG_RX_BREAK, BIT_COM_RX_GCK_EN);
+ }
+ 
++static void rtw8822c_do_lck(struct rtw_dev *rtwdev)
++{
++	u32 val;
++
++	rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_CTRL, RFREG_MASK, 0x80010);
++	rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_PFD, RFREG_MASK, 0x1F0FA);
++	fsleep(1);
++	rtw_write_rf(rtwdev, RF_PATH_A, RF_AAC_CTRL, RFREG_MASK, 0x80000);
++	rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_AAC, RFREG_MASK, 0x80001);
++	read_poll_timeout(rtw_read_rf, val, val != 0x1, 1000, 100000,
++			  true, rtwdev, RF_PATH_A, RF_AAC_CTRL, 0x1000);
++	rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_PFD, RFREG_MASK, 0x1F0F8);
++	rtw_write_rf(rtwdev, RF_PATH_B, RF_SYN_CTRL, RFREG_MASK, 0x80010);
++
++	rtw_write_rf(rtwdev, RF_PATH_A, RF_FAST_LCK, RFREG_MASK, 0x0f000);
++	rtw_write_rf(rtwdev, RF_PATH_A, RF_FAST_LCK, RFREG_MASK, 0x4f000);
++	fsleep(1);
++	rtw_write_rf(rtwdev, RF_PATH_A, RF_FAST_LCK, RFREG_MASK, 0x0f000);
++}
++
+ static void rtw8822c_do_iqk(struct rtw_dev *rtwdev)
+ {
+ 	struct rtw_iqk_para para = {0};
+@@ -3519,11 +3540,12 @@ static void __rtw8822c_pwr_track(struct rtw_dev *rtwdev)
+ 
+ 	rtw_phy_config_swing_table(rtwdev, &swing_table);
+ 
++	if (rtw_phy_pwrtrack_need_lck(rtwdev))
++		rtw8822c_do_lck(rtwdev);
++
+ 	for (i = 0; i < rtwdev->hal.rf_path_num; i++)
+ 		rtw8822c_pwr_track_path(rtwdev, &swing_table, i);
+ 
+-	if (rtw_phy_pwrtrack_need_iqk(rtwdev))
+-		rtw8822c_do_iqk(rtwdev);
+ }
+ 
+ static void rtw8822c_pwr_track(struct rtw_dev *rtwdev)
+@@ -4328,6 +4350,7 @@ struct rtw_chip_info rtw8822c_hw_spec = {
+ 	.dpd_ratemask = DIS_DPD_RATEALL,
+ 	.pwr_track_tbl = &rtw8822c_rtw_pwr_track_tbl,
+ 	.iqk_threshold = 8,
++	.lck_threshold = 8,
+ 	.bfer_su_max_num = 2,
+ 	.bfer_mu_max_num = 1,
+ 	.rx_ldpc = true,
+diff --git a/drivers/net/wireless/wl3501.h b/drivers/net/wireless/wl3501.h
+index b446cb3695579..87195c1dadf2c 100644
+--- a/drivers/net/wireless/wl3501.h
++++ b/drivers/net/wireless/wl3501.h
+@@ -379,16 +379,7 @@ struct wl3501_get_confirm {
+ 	u8	mib_value[100];
+ };
+ 
+-struct wl3501_join_req {
+-	u16			    next_blk;
+-	u8			    sig_id;
+-	u8			    reserved;
+-	struct iw_mgmt_data_rset    operational_rset;
+-	u16			    reserved2;
+-	u16			    timeout;
+-	u16			    probe_delay;
+-	u8			    timestamp[8];
+-	u8			    local_time[8];
++struct wl3501_req {
+ 	u16			    beacon_period;
+ 	u16			    dtim_period;
+ 	u16			    cap_info;
+@@ -401,6 +392,19 @@ struct wl3501_join_req {
+ 	struct iw_mgmt_data_rset    bss_basic_rset;
+ };
+ 
++struct wl3501_join_req {
++	u16			    next_blk;
++	u8			    sig_id;
++	u8			    reserved;
++	struct iw_mgmt_data_rset    operational_rset;
++	u16			    reserved2;
++	u16			    timeout;
++	u16			    probe_delay;
++	u8			    timestamp[8];
++	u8			    local_time[8];
++	struct wl3501_req	    req;
++};
++
+ struct wl3501_join_confirm {
+ 	u16	next_blk;
+ 	u8	sig_id;
+@@ -443,16 +447,7 @@ struct wl3501_scan_confirm {
+ 	u16			    status;
+ 	char			    timestamp[8];
+ 	char			    localtime[8];
+-	u16			    beacon_period;
+-	u16			    dtim_period;
+-	u16			    cap_info;
+-	u8			    bss_type;
+-	u8			    bssid[ETH_ALEN];
+-	struct iw_mgmt_essid_pset   ssid;
+-	struct iw_mgmt_ds_pset	    ds_pset;
+-	struct iw_mgmt_cf_pset	    cf_pset;
+-	struct iw_mgmt_ibss_pset    ibss_pset;
+-	struct iw_mgmt_data_rset    bss_basic_rset;
++	struct wl3501_req	    req;
+ 	u8			    rssi;
+ };
+ 
+@@ -471,8 +466,10 @@ struct wl3501_md_req {
+ 	u16	size;
+ 	u8	pri;
+ 	u8	service_class;
+-	u8	daddr[ETH_ALEN];
+-	u8	saddr[ETH_ALEN];
++	struct {
++		u8	daddr[ETH_ALEN];
++		u8	saddr[ETH_ALEN];
++	} addr;
+ };
+ 
+ struct wl3501_md_ind {
+@@ -484,8 +481,10 @@ struct wl3501_md_ind {
+ 	u8	reception;
+ 	u8	pri;
+ 	u8	service_class;
+-	u8	daddr[ETH_ALEN];
+-	u8	saddr[ETH_ALEN];
++	struct {
++		u8	daddr[ETH_ALEN];
++		u8	saddr[ETH_ALEN];
++	} addr;
+ };
+ 
+ struct wl3501_md_confirm {
+diff --git a/drivers/net/wireless/wl3501_cs.c b/drivers/net/wireless/wl3501_cs.c
+index 026e88b80bfc4..ff1701adbb179 100644
+--- a/drivers/net/wireless/wl3501_cs.c
++++ b/drivers/net/wireless/wl3501_cs.c
+@@ -471,6 +471,7 @@ static int wl3501_send_pkt(struct wl3501_card *this, u8 *data, u16 len)
+ 	struct wl3501_md_req sig = {
+ 		.sig_id = WL3501_SIG_MD_REQ,
+ 	};
++	size_t sig_addr_len = sizeof(sig.addr);
+ 	u8 *pdata = (char *)data;
+ 	int rc = -EIO;
+ 
+@@ -486,9 +487,9 @@ static int wl3501_send_pkt(struct wl3501_card *this, u8 *data, u16 len)
+ 			goto out;
+ 		}
+ 		rc = 0;
+-		memcpy(&sig.daddr[0], pdata, 12);
+-		pktlen = len - 12;
+-		pdata += 12;
++		memcpy(&sig.addr, pdata, sig_addr_len);
++		pktlen = len - sig_addr_len;
++		pdata += sig_addr_len;
+ 		sig.data = bf;
+ 		if (((*pdata) * 256 + (*(pdata + 1))) > 1500) {
+ 			u8 addr4[ETH_ALEN] = {
+@@ -591,7 +592,7 @@ static int wl3501_mgmt_join(struct wl3501_card *this, u16 stas)
+ 	struct wl3501_join_req sig = {
+ 		.sig_id		  = WL3501_SIG_JOIN_REQ,
+ 		.timeout	  = 10,
+-		.ds_pset = {
++		.req.ds_pset = {
+ 			.el = {
+ 				.id  = IW_MGMT_INFO_ELEMENT_DS_PARAMETER_SET,
+ 				.len = 1,
+@@ -600,7 +601,7 @@ static int wl3501_mgmt_join(struct wl3501_card *this, u16 stas)
+ 		},
+ 	};
+ 
+-	memcpy(&sig.beacon_period, &this->bss_set[stas].beacon_period, 72);
++	memcpy(&sig.req, &this->bss_set[stas].req, sizeof(sig.req));
+ 	return wl3501_esbq_exec(this, &sig, sizeof(sig));
+ }
+ 
+@@ -668,35 +669,37 @@ static void wl3501_mgmt_scan_confirm(struct wl3501_card *this, u16 addr)
+ 	if (sig.status == WL3501_STATUS_SUCCESS) {
+ 		pr_debug("success");
+ 		if ((this->net_type == IW_MODE_INFRA &&
+-		     (sig.cap_info & WL3501_MGMT_CAPABILITY_ESS)) ||
++		     (sig.req.cap_info & WL3501_MGMT_CAPABILITY_ESS)) ||
+ 		    (this->net_type == IW_MODE_ADHOC &&
+-		     (sig.cap_info & WL3501_MGMT_CAPABILITY_IBSS)) ||
++		     (sig.req.cap_info & WL3501_MGMT_CAPABILITY_IBSS)) ||
+ 		    this->net_type == IW_MODE_AUTO) {
+ 			if (!this->essid.el.len)
+ 				matchflag = 1;
+ 			else if (this->essid.el.len == 3 &&
+ 				 !memcmp(this->essid.essid, "ANY", 3))
+ 				matchflag = 1;
+-			else if (this->essid.el.len != sig.ssid.el.len)
++			else if (this->essid.el.len != sig.req.ssid.el.len)
+ 				matchflag = 0;
+-			else if (memcmp(this->essid.essid, sig.ssid.essid,
++			else if (memcmp(this->essid.essid, sig.req.ssid.essid,
+ 					this->essid.el.len))
+ 				matchflag = 0;
+ 			else
+ 				matchflag = 1;
+ 			if (matchflag) {
+ 				for (i = 0; i < this->bss_cnt; i++) {
+-					if (ether_addr_equal_unaligned(this->bss_set[i].bssid, sig.bssid)) {
++					if (ether_addr_equal_unaligned(this->bss_set[i].req.bssid,
++								       sig.req.bssid)) {
+ 						matchflag = 0;
+ 						break;
+ 					}
+ 				}
+ 			}
+ 			if (matchflag && (i < 20)) {
+-				memcpy(&this->bss_set[i].beacon_period,
+-				       &sig.beacon_period, 73);
++				memcpy(&this->bss_set[i].req,
++				       &sig.req, sizeof(sig.req));
+ 				this->bss_cnt++;
+ 				this->rssi = sig.rssi;
++				this->bss_set[i].rssi = sig.rssi;
+ 			}
+ 		}
+ 	} else if (sig.status == WL3501_STATUS_TIMEOUT) {
+@@ -888,19 +891,19 @@ static void wl3501_mgmt_join_confirm(struct net_device *dev, u16 addr)
+ 			if (this->join_sta_bss < this->bss_cnt) {
+ 				const int i = this->join_sta_bss;
+ 				memcpy(this->bssid,
+-				       this->bss_set[i].bssid, ETH_ALEN);
+-				this->chan = this->bss_set[i].ds_pset.chan;
++				       this->bss_set[i].req.bssid, ETH_ALEN);
++				this->chan = this->bss_set[i].req.ds_pset.chan;
+ 				iw_copy_mgmt_info_element(&this->keep_essid.el,
+-						     &this->bss_set[i].ssid.el);
++						     &this->bss_set[i].req.ssid.el);
+ 				wl3501_mgmt_auth(this);
+ 			}
+ 		} else {
+ 			const int i = this->join_sta_bss;
+ 
+-			memcpy(&this->bssid, &this->bss_set[i].bssid, ETH_ALEN);
+-			this->chan = this->bss_set[i].ds_pset.chan;
++			memcpy(&this->bssid, &this->bss_set[i].req.bssid, ETH_ALEN);
++			this->chan = this->bss_set[i].req.ds_pset.chan;
+ 			iw_copy_mgmt_info_element(&this->keep_essid.el,
+-						  &this->bss_set[i].ssid.el);
++						  &this->bss_set[i].req.ssid.el);
+ 			wl3501_online(dev);
+ 		}
+ 	} else {
+@@ -982,7 +985,8 @@ static inline void wl3501_md_ind_interrupt(struct net_device *dev,
+ 	} else {
+ 		skb->dev = dev;
+ 		skb_reserve(skb, 2); /* IP headers on 16 bytes boundaries */
+-		skb_copy_to_linear_data(skb, (unsigned char *)&sig.daddr, 12);
++		skb_copy_to_linear_data(skb, (unsigned char *)&sig.addr,
++					sizeof(sig.addr));
+ 		wl3501_receive(this, skb->data, pkt_len);
+ 		skb_put(skb, pkt_len);
+ 		skb->protocol	= eth_type_trans(skb, dev);
+@@ -1573,30 +1577,30 @@ static int wl3501_get_scan(struct net_device *dev, struct iw_request_info *info,
+ 	for (i = 0; i < this->bss_cnt; ++i) {
+ 		iwe.cmd			= SIOCGIWAP;
+ 		iwe.u.ap_addr.sa_family = ARPHRD_ETHER;
+-		memcpy(iwe.u.ap_addr.sa_data, this->bss_set[i].bssid, ETH_ALEN);
++		memcpy(iwe.u.ap_addr.sa_data, this->bss_set[i].req.bssid, ETH_ALEN);
+ 		current_ev = iwe_stream_add_event(info, current_ev,
+ 						  extra + IW_SCAN_MAX_DATA,
+ 						  &iwe, IW_EV_ADDR_LEN);
+ 		iwe.cmd		  = SIOCGIWESSID;
+ 		iwe.u.data.flags  = 1;
+-		iwe.u.data.length = this->bss_set[i].ssid.el.len;
++		iwe.u.data.length = this->bss_set[i].req.ssid.el.len;
+ 		current_ev = iwe_stream_add_point(info, current_ev,
+ 						  extra + IW_SCAN_MAX_DATA,
+ 						  &iwe,
+-						  this->bss_set[i].ssid.essid);
++						  this->bss_set[i].req.ssid.essid);
+ 		iwe.cmd	   = SIOCGIWMODE;
+-		iwe.u.mode = this->bss_set[i].bss_type;
++		iwe.u.mode = this->bss_set[i].req.bss_type;
+ 		current_ev = iwe_stream_add_event(info, current_ev,
+ 						  extra + IW_SCAN_MAX_DATA,
+ 						  &iwe, IW_EV_UINT_LEN);
+ 		iwe.cmd = SIOCGIWFREQ;
+-		iwe.u.freq.m = this->bss_set[i].ds_pset.chan;
++		iwe.u.freq.m = this->bss_set[i].req.ds_pset.chan;
+ 		iwe.u.freq.e = 0;
+ 		current_ev = iwe_stream_add_event(info, current_ev,
+ 						  extra + IW_SCAN_MAX_DATA,
+ 						  &iwe, IW_EV_FREQ_LEN);
+ 		iwe.cmd = SIOCGIWENCODE;
+-		if (this->bss_set[i].cap_info & WL3501_MGMT_CAPABILITY_PRIVACY)
++		if (this->bss_set[i].req.cap_info & WL3501_MGMT_CAPABILITY_PRIVACY)
+ 			iwe.u.data.flags = IW_ENCODE_ENABLED | IW_ENCODE_NOKEY;
+ 		else
+ 			iwe.u.data.flags = IW_ENCODE_DISABLED;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 610d2bc43ea2d..740de61d12a0e 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2622,7 +2622,8 @@ static void nvme_set_latency_tolerance(struct device *dev, s32 val)
+ 
+ 	if (ctrl->ps_max_latency_us != latency) {
+ 		ctrl->ps_max_latency_us = latency;
+-		nvme_configure_apst(ctrl);
++		if (ctrl->state == NVME_CTRL_LIVE)
++			nvme_configure_apst(ctrl);
+ 	}
+ }
+ 
+diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
+index 125dde3f410ee..6a9626ff07135 100644
+--- a/drivers/nvme/target/io-cmd-bdev.c
++++ b/drivers/nvme/target/io-cmd-bdev.c
+@@ -256,10 +256,9 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
+ 	if (is_pci_p2pdma_page(sg_page(req->sg)))
+ 		op |= REQ_NOMERGE;
+ 
+-	sector = le64_to_cpu(req->cmd->rw.slba);
+-	sector <<= (req->ns->blksize_shift - 9);
++	sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
+ 
+-	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
++	if (nvmet_use_inline_bvec(req)) {
+ 		bio = &req->b.inline_bio;
+ 		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+ 	} else {
+@@ -345,7 +344,7 @@ static u16 nvmet_bdev_discard_range(struct nvmet_req *req,
+ 	int ret;
+ 
+ 	ret = __blkdev_issue_discard(ns->bdev,
+-			le64_to_cpu(range->slba) << (ns->blksize_shift - 9),
++			nvmet_lba_to_sect(ns, range->slba),
+ 			le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
+ 			GFP_KERNEL, 0, bio);
+ 	if (ret && ret != -EOPNOTSUPP) {
+@@ -414,8 +413,7 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
+ 	if (!nvmet_check_transfer_len(req, 0))
+ 		return;
+ 
+-	sector = le64_to_cpu(write_zeroes->slba) <<
+-		(req->ns->blksize_shift - 9);
++	sector = nvmet_lba_to_sect(req->ns, write_zeroes->slba);
+ 	nr_sector = (((sector_t)le16_to_cpu(write_zeroes->length) + 1) <<
+ 		(req->ns->blksize_shift - 9));
+ 
+diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
+index 559a15ccc322c..bc91336080e01 100644
+--- a/drivers/nvme/target/nvmet.h
++++ b/drivers/nvme/target/nvmet.h
+@@ -601,4 +601,20 @@ static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
+ 	return ns->pi_type && ns->metadata_size == sizeof(struct t10_pi_tuple);
+ }
+ 
++static inline __le64 nvmet_sect_to_lba(struct nvmet_ns *ns, sector_t sect)
++{
++	return cpu_to_le64(sect >> (ns->blksize_shift - SECTOR_SHIFT));
++}
++
++static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
++{
++	return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
++}
++
++static inline bool nvmet_use_inline_bvec(struct nvmet_req *req)
++{
++	return req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN &&
++	       req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC;
++}
++
+ #endif /* _NVMET_H */
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 6c1f3ab7649c7..7d607f435e366 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -700,7 +700,7 @@ static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
+ {
+ 	struct nvmet_rdma_rsp *rsp =
+ 		container_of(wc->wr_cqe, struct nvmet_rdma_rsp, send_cqe);
+-	struct nvmet_rdma_queue *queue = cq->cq_context;
++	struct nvmet_rdma_queue *queue = wc->qp->qp_context;
+ 
+ 	nvmet_rdma_release_rsp(rsp);
+ 
+@@ -786,7 +786,7 @@ static void nvmet_rdma_write_data_done(struct ib_cq *cq, struct ib_wc *wc)
+ {
+ 	struct nvmet_rdma_rsp *rsp =
+ 		container_of(wc->wr_cqe, struct nvmet_rdma_rsp, write_cqe);
+-	struct nvmet_rdma_queue *queue = cq->cq_context;
++	struct nvmet_rdma_queue *queue = wc->qp->qp_context;
+ 	struct rdma_cm_id *cm_id = rsp->queue->cm_id;
+ 	u16 status;
+ 
+diff --git a/drivers/pci/controller/pcie-iproc-msi.c b/drivers/pci/controller/pcie-iproc-msi.c
+index 908475d27e0e7..eede4e8f3f75a 100644
+--- a/drivers/pci/controller/pcie-iproc-msi.c
++++ b/drivers/pci/controller/pcie-iproc-msi.c
+@@ -271,7 +271,7 @@ static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
+ 				    NULL, NULL);
+ 	}
+ 
+-	return hwirq;
++	return 0;
+ }
+ 
+ static void iproc_msi_irq_domain_free(struct irq_domain *domain,
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index e4e51d884553f..d41570715dc7f 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -830,13 +830,18 @@ static int pci_epf_test_bind(struct pci_epf *epf)
+ 		return -EINVAL;
+ 
+ 	epc_features = pci_epc_get_features(epc, epf->func_no);
+-	if (epc_features) {
+-		linkup_notifier = epc_features->linkup_notifier;
+-		core_init_notifier = epc_features->core_init_notifier;
+-		test_reg_bar = pci_epc_get_first_free_bar(epc_features);
+-		pci_epf_configure_bar(epf, epc_features);
++	if (!epc_features) {
++		dev_err(&epf->dev, "epc_features not implemented\n");
++		return -EOPNOTSUPP;
+ 	}
+ 
++	linkup_notifier = epc_features->linkup_notifier;
++	core_init_notifier = epc_features->core_init_notifier;
++	test_reg_bar = pci_epc_get_first_free_bar(epc_features);
++	if (test_reg_bar < 0)
++		return -EINVAL;
++	pci_epf_configure_bar(epf, epc_features);
++
+ 	epf_test->test_reg_bar = test_reg_bar;
+ 	epf_test->epc_features = epc_features;
+ 
+@@ -917,6 +922,7 @@ static int __init pci_epf_test_init(void)
+ 
+ 	ret = pci_epf_register_driver(&test_driver);
+ 	if (ret) {
++		destroy_workqueue(kpcitest_workqueue);
+ 		pr_err("Failed to register pci epf test driver --> %d\n", ret);
+ 		return ret;
+ 	}
+@@ -927,6 +933,8 @@ module_init(pci_epf_test_init);
+ 
+ static void __exit pci_epf_test_exit(void)
+ {
++	if (kpcitest_workqueue)
++		destroy_workqueue(kpcitest_workqueue);
+ 	pci_epf_unregister_driver(&test_driver);
+ }
+ module_exit(pci_epf_test_exit);
+diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
+index cadd3db0cbb08..ea7e7465ce7a6 100644
+--- a/drivers/pci/endpoint/pci-epc-core.c
++++ b/drivers/pci/endpoint/pci-epc-core.c
+@@ -87,24 +87,50 @@ EXPORT_SYMBOL_GPL(pci_epc_get);
+  * pci_epc_get_first_free_bar() - helper to get first unreserved BAR
+  * @epc_features: pci_epc_features structure that holds the reserved bar bitmap
+  *
+- * Invoke to get the first unreserved BAR that can be used for endpoint
++ * Invoke to get the first unreserved BAR that can be used by the endpoint
+  * function. For any incorrect value in reserved_bar return '0'.
+  */
+-unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features
+-					*epc_features)
++enum pci_barno
++pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features)
+ {
+-	int free_bar;
++	return pci_epc_get_next_free_bar(epc_features, BAR_0);
++}
++EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar);
++
++/**
++ * pci_epc_get_next_free_bar() - helper to get unreserved BAR starting from @bar
++ * @epc_features: pci_epc_features structure that holds the reserved bar bitmap
++ * @bar: the starting BAR number from where unreserved BAR should be searched
++ *
++ * Invoke to get the next unreserved BAR starting from @bar that can be used
++ * for endpoint function. For any incorrect value in reserved_bar return '0'.
++ */
++enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
++					 *epc_features, enum pci_barno bar)
++{
++	unsigned long free_bar;
+ 
+ 	if (!epc_features)
+-		return 0;
++		return BAR_0;
++
++	/* If 'bar - 1' is a 64-bit BAR, move to the next BAR */
++	if ((epc_features->bar_fixed_64bit << 1) & 1 << bar)
++		bar++;
++
++	/* Find if the reserved BAR is also a 64-bit BAR */
++	free_bar = epc_features->reserved_bar & epc_features->bar_fixed_64bit;
+ 
+-	free_bar = ffz(epc_features->reserved_bar);
++	/* Set the adjacent bit if the reserved BAR is also a 64-bit BAR */
++	free_bar <<= 1;
++	free_bar |= epc_features->reserved_bar;
++
++	free_bar = find_next_zero_bit(&free_bar, 6, bar);
+ 	if (free_bar > 5)
+-		return 0;
++		return NO_BAR;
+ 
+ 	return free_bar;
+ }
+-EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar);
++EXPORT_SYMBOL_GPL(pci_epc_get_next_free_bar);
+ 
+ /**
+  * pci_epc_get_features() - get the features supported by EPC
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 4289030b0fff7..ece90a23936d2 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -2367,6 +2367,7 @@ static struct pci_dev *pci_scan_device(struct pci_bus *bus, int devfn)
+ 	pci_set_of_node(dev);
+ 
+ 	if (pci_setup_device(dev)) {
++		pci_release_of_node(dev);
+ 		pci_bus_put(dev->bus);
+ 		kfree(dev);
+ 		return NULL;
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.c b/drivers/pinctrl/samsung/pinctrl-exynos.c
+index b9ea09fabf840..493079a47d054 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos.c
+@@ -55,7 +55,7 @@ static void exynos_irq_mask(struct irq_data *irqd)
+ 	struct exynos_irq_chip *our_chip = to_exynos_irq_chip(chip);
+ 	struct samsung_pin_bank *bank = irq_data_get_irq_chip_data(irqd);
+ 	unsigned long reg_mask = our_chip->eint_mask + bank->eint_offset;
+-	unsigned long mask;
++	unsigned int mask;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&bank->slock, flags);
+@@ -83,7 +83,7 @@ static void exynos_irq_unmask(struct irq_data *irqd)
+ 	struct exynos_irq_chip *our_chip = to_exynos_irq_chip(chip);
+ 	struct samsung_pin_bank *bank = irq_data_get_irq_chip_data(irqd);
+ 	unsigned long reg_mask = our_chip->eint_mask + bank->eint_offset;
+-	unsigned long mask;
++	unsigned int mask;
+ 	unsigned long flags;
+ 
+ 	/*
+@@ -483,7 +483,7 @@ static void exynos_irq_eint0_15(struct irq_desc *desc)
+ 	chained_irq_exit(chip, desc);
+ }
+ 
+-static inline void exynos_irq_demux_eint(unsigned long pend,
++static inline void exynos_irq_demux_eint(unsigned int pend,
+ 						struct irq_domain *domain)
+ {
+ 	unsigned int irq;
+@@ -500,8 +500,8 @@ static void exynos_irq_demux_eint16_31(struct irq_desc *desc)
+ {
+ 	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 	struct exynos_muxed_weint_data *eintd = irq_desc_get_handler_data(desc);
+-	unsigned long pend;
+-	unsigned long mask;
++	unsigned int pend;
++	unsigned int mask;
+ 	int i;
+ 
+ 	chained_irq_enter(chip, desc);
+diff --git a/drivers/pwm/pwm-atmel.c b/drivers/pwm/pwm-atmel.c
+index 6161e7e3e9ac6..d7cb0dfa25a52 100644
+--- a/drivers/pwm/pwm-atmel.c
++++ b/drivers/pwm/pwm-atmel.c
+@@ -319,7 +319,7 @@ static void atmel_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ 
+ 		cdty = atmel_pwm_ch_readl(atmel_pwm, pwm->hwpwm,
+ 					  atmel_pwm->data->regs.duty);
+-		tmp = (u64)cdty * NSEC_PER_SEC;
++		tmp = (u64)(cprd - cdty) * NSEC_PER_SEC;
+ 		tmp <<= pres;
+ 		state->duty_cycle = DIV64_U64_ROUND_UP(tmp, rate);
+ 
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index ba6f7551242de..ebc3e755bcbcd 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -1182,7 +1182,15 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ 			goto release_firmware;
+ 		}
+ 
+-		ptr = ioremap_wc(qproc->mpss_phys + offset, phdr->p_memsz);
++		if (phdr->p_filesz > phdr->p_memsz) {
++			dev_err(qproc->dev,
++				"refusing to load segment %d with p_filesz > p_memsz\n",
++				i);
++			ret = -EINVAL;
++			goto release_firmware;
++		}
++
++		ptr = memremap(qproc->mpss_phys + offset, phdr->p_memsz, MEMREMAP_WC);
+ 		if (!ptr) {
+ 			dev_err(qproc->dev,
+ 				"unable to map memory region: %pa+%zx-%x\n",
+@@ -1197,7 +1205,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ 					"failed to load segment %d from truncated file %s\n",
+ 					i, fw_name);
+ 				ret = -EINVAL;
+-				iounmap(ptr);
++				memunmap(ptr);
+ 				goto release_firmware;
+ 			}
+ 
+@@ -1209,7 +1217,17 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ 							ptr, phdr->p_filesz);
+ 			if (ret) {
+ 				dev_err(qproc->dev, "failed to load %s\n", fw_name);
+-				iounmap(ptr);
++				memunmap(ptr);
++				goto release_firmware;
++			}
++
++			if (seg_fw->size != phdr->p_filesz) {
++				dev_err(qproc->dev,
++					"failed to load segment %d from truncated file %s\n",
++					i, fw_name);
++				ret = -EINVAL;
++				release_firmware(seg_fw);
++				memunmap(ptr);
+ 				goto release_firmware;
+ 			}
+ 
+@@ -1220,7 +1238,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ 			memset(ptr + phdr->p_filesz, 0,
+ 			       phdr->p_memsz - phdr->p_filesz);
+ 		}
+-		iounmap(ptr);
++		memunmap(ptr);
+ 		size += phdr->p_memsz;
+ 
+ 		code_length = readl(qproc->rmb_base + RMB_PMI_CODE_LENGTH_REG);
+@@ -1287,11 +1305,11 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
+ 	}
+ 
+ 	if (!ret)
+-		ptr = ioremap_wc(qproc->mpss_phys + offset + cp_offset, size);
++		ptr = memremap(qproc->mpss_phys + offset + cp_offset, size, MEMREMAP_WC);
+ 
+ 	if (ptr) {
+ 		memcpy(dest, ptr, size);
+-		iounmap(ptr);
++		memunmap(ptr);
+ 	} else {
+ 		memset(dest, 0xff, size);
+ 	}
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index 27a05167c18c3..4840886532ff7 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -857,6 +857,7 @@ static int qcom_glink_rx_data(struct qcom_glink *glink, size_t avail)
+ 			dev_err(glink->dev,
+ 				"no intent found for channel %s intent %d",
+ 				channel->name, liid);
++			ret = -ENOENT;
+ 			goto advance_rx;
+ 		}
+ 	}
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index 9f5f54ca039d0..07a9cc91671b0 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -295,7 +295,11 @@ static int ds1307_get_time(struct device *dev, struct rtc_time *t)
+ 	t->tm_min = bcd2bin(regs[DS1307_REG_MIN] & 0x7f);
+ 	tmp = regs[DS1307_REG_HOUR] & 0x3f;
+ 	t->tm_hour = bcd2bin(tmp);
+-	t->tm_wday = bcd2bin(regs[DS1307_REG_WDAY] & 0x07) - 1;
++	/* rx8130 is bit position, not BCD */
++	if (ds1307->type == rx_8130)
++		t->tm_wday = fls(regs[DS1307_REG_WDAY] & 0x7f);
++	else
++		t->tm_wday = bcd2bin(regs[DS1307_REG_WDAY] & 0x07) - 1;
+ 	t->tm_mday = bcd2bin(regs[DS1307_REG_MDAY] & 0x3f);
+ 	tmp = regs[DS1307_REG_MONTH] & 0x1f;
+ 	t->tm_mon = bcd2bin(tmp) - 1;
+@@ -342,7 +346,11 @@ static int ds1307_set_time(struct device *dev, struct rtc_time *t)
+ 	regs[DS1307_REG_SECS] = bin2bcd(t->tm_sec);
+ 	regs[DS1307_REG_MIN] = bin2bcd(t->tm_min);
+ 	regs[DS1307_REG_HOUR] = bin2bcd(t->tm_hour);
+-	regs[DS1307_REG_WDAY] = bin2bcd(t->tm_wday + 1);
++	/* rx8130 is bit position, not BCD */
++	if (ds1307->type == rx_8130)
++		regs[DS1307_REG_WDAY] = 1 << t->tm_wday;
++	else
++		regs[DS1307_REG_WDAY] = bin2bcd(t->tm_wday + 1);
+ 	regs[DS1307_REG_MDAY] = bin2bcd(t->tm_mday);
+ 	regs[DS1307_REG_MONTH] = bin2bcd(t->tm_mon + 1);
+ 
+diff --git a/drivers/rtc/rtc-fsl-ftm-alarm.c b/drivers/rtc/rtc-fsl-ftm-alarm.c
+index 48d3b38ea3480..e08672e262036 100644
+--- a/drivers/rtc/rtc-fsl-ftm-alarm.c
++++ b/drivers/rtc/rtc-fsl-ftm-alarm.c
+@@ -310,6 +310,7 @@ static const struct of_device_id ftm_rtc_match[] = {
+ 	{ .compatible = "fsl,lx2160a-ftm-alarm", },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, ftm_rtc_match);
+ 
+ static const struct acpi_device_id ftm_imx_acpi_ids[] = {
+ 	{"NXP0014",},
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 52e8b555bd1dc..6faf34fa62206 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -1190,6 +1190,9 @@ static int qla24xx_post_prli_work(struct scsi_qla_host *vha, fc_port_t *fcport)
+ {
+ 	struct qla_work_evt *e;
+ 
++	if (vha->host->active_mode == MODE_TARGET)
++		return QLA_FUNCTION_FAILED;
++
+ 	e = qla2x00_alloc_work(vha, QLA_EVT_PRLI);
+ 	if (!e)
+ 		return QLA_FUNCTION_FAILED;
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 4215d9a8e5de4..08d4d40c510ea 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -8459,7 +8459,7 @@ static void ufshcd_vreg_set_lpm(struct ufs_hba *hba)
+ 	} else if (!ufshcd_is_ufs_dev_active(hba)) {
+ 		ufshcd_toggle_vreg(hba->dev, hba->vreg_info.vcc, false);
+ 		vcc_off = true;
+-		if (!ufshcd_is_link_active(hba)) {
++		if (ufshcd_is_link_hibern8(hba) || ufshcd_is_link_off(hba)) {
+ 			ufshcd_config_vreg_lpm(hba, hba->vreg_info.vccq);
+ 			ufshcd_config_vreg_lpm(hba, hba->vreg_info.vccq2);
+ 		}
+@@ -8481,7 +8481,7 @@ static int ufshcd_vreg_set_hpm(struct ufs_hba *hba)
+ 	    !hba->dev_info.is_lu_power_on_wp) {
+ 		ret = ufshcd_setup_vreg(hba, true);
+ 	} else if (!ufshcd_is_ufs_dev_active(hba)) {
+-		if (!ret && !ufshcd_is_link_active(hba)) {
++		if (!ufshcd_is_link_active(hba)) {
+ 			ret = ufshcd_config_vreg_hpm(hba, hba->vreg_info.vccq);
+ 			if (ret)
+ 				goto vcc_disable;
+@@ -8819,10 +8819,13 @@ int ufshcd_system_suspend(struct ufs_hba *hba)
+ 	if (!hba || !hba->is_powered)
+ 		return 0;
+ 
++	cancel_delayed_work_sync(&hba->rpm_dev_flush_recheck_work);
++
+ 	if ((ufs_get_pm_lvl_to_dev_pwr_mode(hba->spm_lvl) ==
+ 	     hba->curr_dev_pwr_mode) &&
+ 	    (ufs_get_pm_lvl_to_link_pwr_state(hba->spm_lvl) ==
+ 	     hba->uic_link_state) &&
++	     pm_runtime_suspended(hba->dev) &&
+ 	     !hba->dev_info.b_rpm_dev_flush_capable)
+ 		goto out;
+ 
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index d25c4a37e2aff..1263991de76f9 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -1107,7 +1107,7 @@ static struct platform_driver rkvdec_driver = {
+ 	.remove = rkvdec_remove,
+ 	.driver = {
+ 		   .name = "rkvdec",
+-		   .of_match_table = of_match_ptr(of_rkvdec_match),
++		   .of_match_table = of_rkvdec_match,
+ 		   .pm = &rkvdec_pm_ops,
+ 	},
+ };
+diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
+index d8ce3a687b80d..3c4c0516e58ab 100644
+--- a/drivers/thermal/qcom/tsens.c
++++ b/drivers/thermal/qcom/tsens.c
+@@ -755,8 +755,10 @@ int __init init_common(struct tsens_priv *priv)
+ 		for (i = VER_MAJOR; i <= VER_STEP; i++) {
+ 			priv->rf[i] = devm_regmap_field_alloc(dev, priv->srot_map,
+ 							      priv->fields[i]);
+-			if (IS_ERR(priv->rf[i]))
+-				return PTR_ERR(priv->rf[i]);
++			if (IS_ERR(priv->rf[i])) {
++				ret = PTR_ERR(priv->rf[i]);
++				goto err_put_device;
++			}
+ 		}
+ 		ret = regmap_field_read(priv->rf[VER_MINOR], &ver_minor);
+ 		if (ret)
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index 69ef12f852b7d..5b76f9a1280d5 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -704,14 +704,17 @@ static int thermal_of_populate_bind_params(struct device_node *np,
+ 
+ 	count = of_count_phandle_with_args(np, "cooling-device",
+ 					   "#cooling-cells");
+-	if (!count) {
++	if (count <= 0) {
+ 		pr_err("Add a cooling_device property with at least one device\n");
++		ret = -ENOENT;
+ 		goto end;
+ 	}
+ 
+ 	__tcbp = kcalloc(count, sizeof(*__tcbp), GFP_KERNEL);
+-	if (!__tcbp)
++	if (!__tcbp) {
++		ret = -ENOMEM;
+ 		goto end;
++	}
+ 
+ 	for (i = 0; i < count; i++) {
+ 		ret = of_parse_phandle_with_args(np, "cooling-device",
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 508b1c3f8b731..d1e4a7379bebd 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -321,12 +321,23 @@ exit:
+ 
+ }
+ 
+-static void kill_urbs(struct wdm_device *desc)
++static void poison_urbs(struct wdm_device *desc)
+ {
+ 	/* the order here is essential */
+-	usb_kill_urb(desc->command);
+-	usb_kill_urb(desc->validity);
+-	usb_kill_urb(desc->response);
++	usb_poison_urb(desc->command);
++	usb_poison_urb(desc->validity);
++	usb_poison_urb(desc->response);
++}
++
++static void unpoison_urbs(struct wdm_device *desc)
++{
++	/*
++	 *  the order here is not essential
++	 *  it is symmetrical just to be nice
++	 */
++	usb_unpoison_urb(desc->response);
++	usb_unpoison_urb(desc->validity);
++	usb_unpoison_urb(desc->command);
+ }
+ 
+ static void free_urbs(struct wdm_device *desc)
+@@ -741,11 +752,12 @@ static int wdm_release(struct inode *inode, struct file *file)
+ 	if (!desc->count) {
+ 		if (!test_bit(WDM_DISCONNECTING, &desc->flags)) {
+ 			dev_dbg(&desc->intf->dev, "wdm_release: cleanup\n");
+-			kill_urbs(desc);
++			poison_urbs(desc);
+ 			spin_lock_irq(&desc->iuspin);
+ 			desc->resp_count = 0;
+ 			spin_unlock_irq(&desc->iuspin);
+ 			desc->manage_power(desc->intf, 0);
++			unpoison_urbs(desc);
+ 		} else {
+ 			/* must avoid dev_printk here as desc->intf is invalid */
+ 			pr_debug(KBUILD_MODNAME " %s: device gone - cleaning up\n", __func__);
+@@ -1037,9 +1049,9 @@ static void wdm_disconnect(struct usb_interface *intf)
+ 	wake_up_all(&desc->wait);
+ 	mutex_lock(&desc->rlock);
+ 	mutex_lock(&desc->wlock);
++	poison_urbs(desc);
+ 	cancel_work_sync(&desc->rxwork);
+ 	cancel_work_sync(&desc->service_outs_intr);
+-	kill_urbs(desc);
+ 	mutex_unlock(&desc->wlock);
+ 	mutex_unlock(&desc->rlock);
+ 
+@@ -1080,9 +1092,10 @@ static int wdm_suspend(struct usb_interface *intf, pm_message_t message)
+ 		set_bit(WDM_SUSPENDING, &desc->flags);
+ 		spin_unlock_irq(&desc->iuspin);
+ 		/* callback submits work - order is essential */
+-		kill_urbs(desc);
++		poison_urbs(desc);
+ 		cancel_work_sync(&desc->rxwork);
+ 		cancel_work_sync(&desc->service_outs_intr);
++		unpoison_urbs(desc);
+ 	}
+ 	if (!PMSG_IS_AUTO(message)) {
+ 		mutex_unlock(&desc->wlock);
+@@ -1140,7 +1153,7 @@ static int wdm_pre_reset(struct usb_interface *intf)
+ 	wake_up_all(&desc->wait);
+ 	mutex_lock(&desc->rlock);
+ 	mutex_lock(&desc->wlock);
+-	kill_urbs(desc);
++	poison_urbs(desc);
+ 	cancel_work_sync(&desc->rxwork);
+ 	cancel_work_sync(&desc->service_outs_intr);
+ 	return 0;
+@@ -1151,6 +1164,7 @@ static int wdm_post_reset(struct usb_interface *intf)
+ 	struct wdm_device *desc = wdm_find_device(intf);
+ 	int rv;
+ 
++	unpoison_urbs(desc);
+ 	clear_bit(WDM_OVERFLOW, &desc->flags);
+ 	clear_bit(WDM_RESETTING, &desc->flags);
+ 	rv = recover_from_urb_loss(desc);
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 22a86ae4f639c..228e3d4e1a9fd 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -3592,9 +3592,6 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
+ 		 * sequence.
+ 		 */
+ 		status = hub_port_status(hub, port1, &portstatus, &portchange);
+-
+-		/* TRSMRCY = 10 msec */
+-		msleep(10);
+ 	}
+ 
+  SuspendCleared:
+@@ -3609,6 +3606,9 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
+ 				usb_clear_port_feature(hub->hdev, port1,
+ 						USB_PORT_FEAT_C_SUSPEND);
+ 		}
++
++		/* TRSMRCY = 10 msec */
++		msleep(10);
+ 	}
+ 
+ 	if (udev->persist_enabled)
+diff --git a/drivers/usb/dwc2/core.h b/drivers/usb/dwc2/core.h
+index 7161344c65221..641e4251cb7f1 100644
+--- a/drivers/usb/dwc2/core.h
++++ b/drivers/usb/dwc2/core.h
+@@ -112,6 +112,7 @@ struct dwc2_hsotg_req;
+  * @debugfs: File entry for debugfs file for this endpoint.
+  * @dir_in: Set to true if this endpoint is of the IN direction, which
+  *          means that it is sending data to the Host.
++ * @map_dir: Set to the value of dir_in when the DMA buffer is mapped.
+  * @index: The index for the endpoint registers.
+  * @mc: Multi Count - number of transactions per microframe
+  * @interval: Interval for periodic endpoints, in frames or microframes.
+@@ -161,6 +162,7 @@ struct dwc2_hsotg_ep {
+ 	unsigned short		fifo_index;
+ 
+ 	unsigned char           dir_in;
++	unsigned char           map_dir;
+ 	unsigned char           index;
+ 	unsigned char           mc;
+ 	u16                     interval;
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index ad4c94366dadf..d2f623d83bf78 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -422,7 +422,7 @@ static void dwc2_hsotg_unmap_dma(struct dwc2_hsotg *hsotg,
+ {
+ 	struct usb_request *req = &hs_req->req;
+ 
+-	usb_gadget_unmap_request(&hsotg->gadget, req, hs_ep->dir_in);
++	usb_gadget_unmap_request(&hsotg->gadget, req, hs_ep->map_dir);
+ }
+ 
+ /*
+@@ -1242,6 +1242,7 @@ static int dwc2_hsotg_map_dma(struct dwc2_hsotg *hsotg,
+ {
+ 	int ret;
+ 
++	hs_ep->map_dir = hs_ep->dir_in;
+ 	ret = usb_gadget_map_request(&hsotg->gadget, req, hs_ep->dir_in);
+ 	if (ret)
+ 		goto dma_error;
+diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c
+index 3db17806e92e7..e196673f5c647 100644
+--- a/drivers/usb/dwc3/dwc3-omap.c
++++ b/drivers/usb/dwc3/dwc3-omap.c
+@@ -437,8 +437,13 @@ static int dwc3_omap_extcon_register(struct dwc3_omap *omap)
+ 
+ 		if (extcon_get_state(edev, EXTCON_USB) == true)
+ 			dwc3_omap_set_mailbox(omap, OMAP_DWC3_VBUS_VALID);
++		else
++			dwc3_omap_set_mailbox(omap, OMAP_DWC3_VBUS_OFF);
++
+ 		if (extcon_get_state(edev, EXTCON_USB_HOST) == true)
+ 			dwc3_omap_set_mailbox(omap, OMAP_DWC3_ID_GROUND);
++		else
++			dwc3_omap_set_mailbox(omap, OMAP_DWC3_ID_FLOAT);
+ 
+ 		omap->edev = edev;
+ 	}
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 598daed8086f6..17117870f6cea 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -120,6 +120,7 @@ static const struct property_entry dwc3_pci_mrfld_properties[] = {
+ 	PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"),
+ 	PROPERTY_ENTRY_BOOL("snps,dis_u3_susphy_quirk"),
+ 	PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"),
++	PROPERTY_ENTRY_BOOL("snps,usb2-gadget-lpm-disable"),
+ 	PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"),
+ 	{}
+ };
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 84d1487e9f060..acf57a98969dc 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1676,7 +1676,9 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req)
+ 		}
+ 	}
+ 
+-	return __dwc3_gadget_kick_transfer(dep);
++	__dwc3_gadget_kick_transfer(dep);
++
++	return 0;
+ }
+ 
+ static int dwc3_gadget_ep_queue(struct usb_ep *ep, struct usb_request *request,
+@@ -2206,6 +2208,10 @@ static void dwc3_gadget_enable_irq(struct dwc3 *dwc)
+ 	if (DWC3_VER_IS_PRIOR(DWC3, 250A))
+ 		reg |= DWC3_DEVTEN_ULSTCNGEN;
+ 
++	/* On 2.30a and above this bit enables U3/L2-L1 Suspend Events */
++	if (!DWC3_VER_IS_PRIOR(DWC3, 230A))
++		reg |= DWC3_DEVTEN_EOPFEN;
++
+ 	dwc3_writel(dwc->regs, DWC3_DEVTEN, reg);
+ }
+ 
+@@ -3948,8 +3954,9 @@ err0:
+ 
+ void dwc3_gadget_exit(struct dwc3 *dwc)
+ {
+-	usb_del_gadget_udc(dwc->gadget);
++	usb_del_gadget(dwc->gadget);
+ 	dwc3_gadget_free_endpoints(dwc);
++	usb_put_gadget(dwc->gadget);
+ 	dma_free_coherent(dwc->sysdev, DWC3_BOUNCE_SIZE, dwc->bounce,
+ 			  dwc->bounce_addr);
+ 	kfree(dwc->setup_buf);
+diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c
+index 1d94fcfac2c28..bd958f059fe64 100644
+--- a/drivers/usb/host/fotg210-hcd.c
++++ b/drivers/usb/host/fotg210-hcd.c
+@@ -5568,7 +5568,7 @@ static int fotg210_hcd_probe(struct platform_device *pdev)
+ 	struct usb_hcd *hcd;
+ 	struct resource *res;
+ 	int irq;
+-	int retval = -ENODEV;
++	int retval;
+ 	struct fotg210_hcd *fotg210;
+ 
+ 	if (usb_disabled())
+@@ -5588,7 +5588,7 @@ static int fotg210_hcd_probe(struct platform_device *pdev)
+ 	hcd = usb_create_hcd(&fotg210_fotg210_hc_driver, dev,
+ 			dev_name(dev));
+ 	if (!hcd) {
+-		dev_err(dev, "failed to create hcd with err %d\n", retval);
++		dev_err(dev, "failed to create hcd\n");
+ 		retval = -ENOMEM;
+ 		goto fail_create_hcd;
+ 	}
+diff --git a/drivers/usb/host/xhci-ext-caps.h b/drivers/usb/host/xhci-ext-caps.h
+index fa59b242cd515..e8af0a125f84b 100644
+--- a/drivers/usb/host/xhci-ext-caps.h
++++ b/drivers/usb/host/xhci-ext-caps.h
+@@ -7,8 +7,9 @@
+  * Author: Sarah Sharp
+  * Some code borrowed from the Linux EHCI driver.
+  */
+-/* Up to 16 ms to halt an HC */
+-#define XHCI_MAX_HALT_USEC	(16*1000)
++
++/* HC should halt within 16 ms, but use 32 ms as some hosts take longer */
++#define XHCI_MAX_HALT_USEC	(32 * 1000)
+ /* HC not running - set to 1 when run/stop bit is cleared. */
+ #define XHCI_STS_HALT		(1<<0)
+ 
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 5bbccc9a0179f..7bc18cf8042cc 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -57,6 +57,7 @@
+ #define PCI_DEVICE_ID_INTEL_CML_XHCI			0xa3af
+ #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI		0x9a13
+ #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI		0x1138
++#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI		0x461e
+ 
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_4			0x43b9
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3			0x43ba
+@@ -166,8 +167,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	    (pdev->device == 0x15e0 || pdev->device == 0x15e1))
+ 		xhci->quirks |= XHCI_SNPS_BROKEN_SUSPEND;
+ 
+-	if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x15e5)
++	if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x15e5) {
+ 		xhci->quirks |= XHCI_DISABLE_SPARSE;
++		xhci->quirks |= XHCI_RESET_ON_RESUME;
++	}
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_AMD)
+ 		xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+@@ -243,7 +246,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
+-	     pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI))
++	     pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI))
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index dbe5553872ff0..a8d97e23f601f 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1397,7 +1397,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
+  * we need to issue an evaluate context command and wait on it.
+  */
+ static int xhci_check_maxpacket(struct xhci_hcd *xhci, unsigned int slot_id,
+-		unsigned int ep_index, struct urb *urb)
++		unsigned int ep_index, struct urb *urb, gfp_t mem_flags)
+ {
+ 	struct xhci_container_ctx *out_ctx;
+ 	struct xhci_input_control_ctx *ctrl_ctx;
+@@ -1428,7 +1428,7 @@ static int xhci_check_maxpacket(struct xhci_hcd *xhci, unsigned int slot_id,
+ 		 * changes max packet sizes.
+ 		 */
+ 
+-		command = xhci_alloc_command(xhci, true, GFP_KERNEL);
++		command = xhci_alloc_command(xhci, true, mem_flags);
+ 		if (!command)
+ 			return -ENOMEM;
+ 
+@@ -1524,7 +1524,7 @@ static int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
+ 		 */
+ 		if (urb->dev->speed == USB_SPEED_FULL) {
+ 			ret = xhci_check_maxpacket(xhci, slot_id,
+-					ep_index, urb);
++					ep_index, urb, mem_flags);
+ 			if (ret < 0) {
+ 				xhci_urb_free_priv(urb_priv);
+ 				urb->hcpriv = NULL;
+diff --git a/drivers/usb/musb/mediatek.c b/drivers/usb/musb/mediatek.c
+index eebeadd269461..6b92d037d8fc8 100644
+--- a/drivers/usb/musb/mediatek.c
++++ b/drivers/usb/musb/mediatek.c
+@@ -518,8 +518,8 @@ static int mtk_musb_probe(struct platform_device *pdev)
+ 
+ 	glue->xceiv = devm_usb_get_phy(dev, USB_PHY_TYPE_USB2);
+ 	if (IS_ERR(glue->xceiv)) {
+-		dev_err(dev, "fail to getting usb-phy %d\n", ret);
+ 		ret = PTR_ERR(glue->xceiv);
++		dev_err(dev, "fail to getting usb-phy %d\n", ret);
+ 		goto err_unregister_usb_phy;
+ 	}
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 912dbf8ca2dac..bdbd346dc59ff 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -2501,10 +2501,10 @@ static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port)
+ 		port->pps_data.req_max_volt = min(pdo_pps_apdo_max_voltage(src),
+ 						  pdo_pps_apdo_max_voltage(snk));
+ 		port->pps_data.req_max_curr = min_pps_apdo_current(src, snk);
+-		port->pps_data.req_out_volt = min(port->pps_data.max_volt,
+-						  max(port->pps_data.min_volt,
++		port->pps_data.req_out_volt = min(port->pps_data.req_max_volt,
++						  max(port->pps_data.req_min_volt,
+ 						      port->pps_data.req_out_volt));
+-		port->pps_data.req_op_curr = min(port->pps_data.max_curr,
++		port->pps_data.req_op_curr = min(port->pps_data.req_max_curr,
+ 						 port->pps_data.req_op_curr);
+ 	}
+ 
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 51a570d40a42e..b4615bb5daab8 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -495,7 +495,8 @@ static void ucsi_unregister_altmodes(struct ucsi_connector *con, u8 recipient)
+ 	}
+ }
+ 
+-static void ucsi_get_pdos(struct ucsi_connector *con, int is_partner)
++static int ucsi_get_pdos(struct ucsi_connector *con, int is_partner,
++			 u32 *pdos, int offset, int num_pdos)
+ {
+ 	struct ucsi *ucsi = con->ucsi;
+ 	u64 command;
+@@ -503,17 +504,39 @@ static void ucsi_get_pdos(struct ucsi_connector *con, int is_partner)
+ 
+ 	command = UCSI_COMMAND(UCSI_GET_PDOS) | UCSI_CONNECTOR_NUMBER(con->num);
+ 	command |= UCSI_GET_PDOS_PARTNER_PDO(is_partner);
+-	command |= UCSI_GET_PDOS_NUM_PDOS(UCSI_MAX_PDOS - 1);
++	command |= UCSI_GET_PDOS_PDO_OFFSET(offset);
++	command |= UCSI_GET_PDOS_NUM_PDOS(num_pdos - 1);
+ 	command |= UCSI_GET_PDOS_SRC_PDOS;
+-	ret = ucsi_send_command(ucsi, command, con->src_pdos,
+-			       sizeof(con->src_pdos));
+-	if (ret < 0) {
++	ret = ucsi_send_command(ucsi, command, pdos + offset,
++				num_pdos * sizeof(u32));
++	if (ret < 0)
+ 		dev_err(ucsi->dev, "UCSI_GET_PDOS failed (%d)\n", ret);
++	if (ret == 0 && offset == 0)
++		dev_warn(ucsi->dev, "UCSI_GET_PDOS returned 0 bytes\n");
++
++	return ret;
++}
++
++static void ucsi_get_src_pdos(struct ucsi_connector *con, int is_partner)
++{
++	int ret;
++
++	/* UCSI max payload means only getting at most 4 PDOs at a time */
++	ret = ucsi_get_pdos(con, 1, con->src_pdos, 0, UCSI_MAX_PDOS);
++	if (ret < 0)
+ 		return;
+-	}
++
+ 	con->num_pdos = ret / sizeof(u32); /* number of bytes to 32-bit PDOs */
+-	if (ret == 0)
+-		dev_warn(ucsi->dev, "UCSI_GET_PDOS returned 0 bytes\n");
++	if (con->num_pdos < UCSI_MAX_PDOS)
++		return;
++
++	/* get the remaining PDOs, if any */
++	ret = ucsi_get_pdos(con, 1, con->src_pdos, UCSI_MAX_PDOS,
++			    PDO_MAX_OBJECTS - UCSI_MAX_PDOS);
++	if (ret < 0)
++		return;
++
++	con->num_pdos += ret / sizeof(u32);
+ }
+ 
+ static void ucsi_pwr_opmode_change(struct ucsi_connector *con)
+@@ -522,7 +545,7 @@ static void ucsi_pwr_opmode_change(struct ucsi_connector *con)
+ 	case UCSI_CONSTAT_PWR_OPMODE_PD:
+ 		con->rdo = con->status.request_data_obj;
+ 		typec_set_pwr_opmode(con->port, TYPEC_PWR_MODE_PD);
+-		ucsi_get_pdos(con, 1);
++		ucsi_get_src_pdos(con, 1);
+ 		break;
+ 	case UCSI_CONSTAT_PWR_OPMODE_TYPEC1_5:
+ 		con->rdo = 0;
+@@ -887,6 +910,7 @@ static const struct typec_operations ucsi_ops = {
+ 	.pr_set = ucsi_pr_swap
+ };
+ 
++/* Caller must call fwnode_handle_put() after use */
+ static struct fwnode_handle *ucsi_find_fwnode(struct ucsi_connector *con)
+ {
+ 	struct fwnode_handle *fwnode;
+@@ -920,7 +944,7 @@ static int ucsi_register_port(struct ucsi *ucsi, int index)
+ 	command |= UCSI_CONNECTOR_NUMBER(con->num);
+ 	ret = ucsi_send_command(ucsi, command, &con->cap, sizeof(con->cap));
+ 	if (ret < 0)
+-		goto out;
++		goto out_unlock;
+ 
+ 	if (con->cap.op_mode & UCSI_CONCAP_OPMODE_DRP)
+ 		cap->data = TYPEC_PORT_DRD;
+@@ -1016,6 +1040,8 @@ static int ucsi_register_port(struct ucsi *ucsi, int index)
+ 	trace_ucsi_register_port(con->num, &con->status);
+ 
+ out:
++	fwnode_handle_put(cap->fwnode);
++out_unlock:
+ 	mutex_unlock(&con->lock);
+ 	return ret;
+ }
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index b7a92f2460507..047e17c4b4922 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -8,6 +8,7 @@
+ #include <linux/power_supply.h>
+ #include <linux/types.h>
+ #include <linux/usb/typec.h>
++#include <linux/usb/pd.h>
+ 
+ /* -------------------------------------------------------------------------- */
+ 
+@@ -133,7 +134,9 @@ void ucsi_connector_change(struct ucsi *ucsi, u8 num);
+ 
+ /* GET_PDOS command bits */
+ #define UCSI_GET_PDOS_PARTNER_PDO(_r_)		((u64)(_r_) << 23)
++#define UCSI_GET_PDOS_PDO_OFFSET(_r_)		((u64)(_r_) << 24)
+ #define UCSI_GET_PDOS_NUM_PDOS(_r_)		((u64)(_r_) << 32)
++#define UCSI_MAX_PDOS				(4)
+ #define UCSI_GET_PDOS_SRC_PDOS			((u64)1 << 34)
+ 
+ /* -------------------------------------------------------------------------- */
+@@ -300,7 +303,6 @@ struct ucsi {
+ 
+ #define UCSI_MAX_SVID		5
+ #define UCSI_MAX_ALTMODES	(UCSI_MAX_SVID * 6)
+-#define UCSI_MAX_PDOS		(4)
+ 
+ #define UCSI_TYPEC_VSAFE5V	5000
+ #define UCSI_TYPEC_1_5_CURRENT	1500
+@@ -327,7 +329,7 @@ struct ucsi_connector {
+ 	struct power_supply *psy;
+ 	struct power_supply_desc psy_desc;
+ 	u32 rdo;
+-	u32 src_pdos[UCSI_MAX_PDOS];
++	u32 src_pdos[PDO_MAX_OBJECTS];
+ 	int num_pdos;
+ };
+ 
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 5447c5156b2e6..b9651f797676c 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -1005,8 +1005,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+ 		err = mmu_interval_notifier_insert_locked(
+ 			&map->notifier, vma->vm_mm, vma->vm_start,
+ 			vma->vm_end - vma->vm_start, &gntdev_mmu_ops);
+-		if (err)
++		if (err) {
++			map->vma = NULL;
+ 			goto out_unlock_put;
++		}
+ 	}
+ 	mutex_unlock(&priv->lock);
+ 
+diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
+index 7762c1bb23cb3..87e6b7db892f5 100644
+--- a/drivers/xen/unpopulated-alloc.c
++++ b/drivers/xen/unpopulated-alloc.c
+@@ -27,11 +27,6 @@ static int fill_list(unsigned int nr_pages)
+ 	if (!res)
+ 		return -ENOMEM;
+ 
+-	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
+-	if (!pgmap)
+-		goto err_pgmap;
+-
+-	pgmap->type = MEMORY_DEVICE_GENERIC;
+ 	res->name = "Xen scratch";
+ 	res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+ 
+@@ -43,6 +38,13 @@ static int fill_list(unsigned int nr_pages)
+ 		goto err_resource;
+ 	}
+ 
++	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
++	if (!pgmap) {
++		ret = -ENOMEM;
++		goto err_pgmap;
++	}
++
++	pgmap->type = MEMORY_DEVICE_GENERIC;
+ 	pgmap->range = (struct range) {
+ 		.start = res->start,
+ 		.end = res->end,
+@@ -92,10 +94,10 @@ static int fill_list(unsigned int nr_pages)
+ 	return 0;
+ 
+ err_memremap:
+-	release_resource(res);
+-err_resource:
+ 	kfree(pgmap);
+ err_pgmap:
++	release_resource(res);
++err_resource:
+ 	kfree(res);
+ 	return ret;
+ }
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index c81a20cc10dc8..7e87549c5edaf 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2065,6 +2065,30 @@ static int start_ordered_ops(struct inode *inode, loff_t start, loff_t end)
+ 	return ret;
+ }
+ 
++static inline bool skip_inode_logging(const struct btrfs_log_ctx *ctx)
++{
++	struct btrfs_inode *inode = BTRFS_I(ctx->inode);
++	struct btrfs_fs_info *fs_info = inode->root->fs_info;
++
++	if (btrfs_inode_in_log(inode, fs_info->generation) &&
++	    list_empty(&ctx->ordered_extents))
++		return true;
++
++	/*
++	 * If we are doing a fast fsync we can not bail out if the inode's
++	 * last_trans is <= then the last committed transaction, because we only
++	 * update the last_trans of the inode during ordered extent completion,
++	 * and for a fast fsync we don't wait for that, we only wait for the
++	 * writeback to complete.
++	 */
++	if (inode->last_trans <= fs_info->last_trans_committed &&
++	    (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags) ||
++	     list_empty(&ctx->ordered_extents)))
++		return true;
++
++	return false;
++}
++
+ /*
+  * fsync call for both files and directories.  This logs the inode into
+  * the tree log instead of forcing full commits whenever possible.
+@@ -2080,7 +2104,6 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ {
+ 	struct dentry *dentry = file_dentry(file);
+ 	struct inode *inode = d_inode(dentry);
+-	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ 	struct btrfs_root *root = BTRFS_I(inode)->root;
+ 	struct btrfs_trans_handle *trans;
+ 	struct btrfs_log_ctx ctx;
+@@ -2187,17 +2210,8 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 
+ 	atomic_inc(&root->log_batch);
+ 
+-	/*
+-	 * If we are doing a fast fsync we can not bail out if the inode's
+-	 * last_trans is <= then the last committed transaction, because we only
+-	 * update the last_trans of the inode during ordered extent completion,
+-	 * and for a fast fsync we don't wait for that, we only wait for the
+-	 * writeback to complete.
+-	 */
+ 	smp_mb();
+-	if (btrfs_inode_in_log(BTRFS_I(inode), fs_info->generation) ||
+-	    (BTRFS_I(inode)->last_trans <= fs_info->last_trans_committed &&
+-	     (full_sync || list_empty(&ctx.ordered_extents)))) {
++	if (skip_inode_logging(&ctx)) {
+ 		/*
+ 		 * We've had everything committed since the last time we were
+ 		 * modified so clear this flag in case it was set for whatever
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 5b11bb9770664..8bc3e2f25e7de 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -6062,7 +6062,8 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
+ 	 * (since logging them is pointless, a link count of 0 means they
+ 	 * will never be accessible).
+ 	 */
+-	if (btrfs_inode_in_log(inode, trans->transid) ||
++	if ((btrfs_inode_in_log(inode, trans->transid) &&
++	     list_empty(&ctx->ordered_extents)) ||
+ 	    inode->vfs_inode.i_nlink == 0) {
+ 		ret = BTRFS_NO_LOG_SYNC;
+ 		goto end_no_trans;
+diff --git a/fs/ceph/export.c b/fs/ceph/export.c
+index e088843a7734c..baa6368bece59 100644
+--- a/fs/ceph/export.c
++++ b/fs/ceph/export.c
+@@ -178,8 +178,10 @@ static struct dentry *__fh_to_dentry(struct super_block *sb, u64 ino)
+ 		return ERR_CAST(inode);
+ 	/* We need LINK caps to reliably check i_nlink */
+ 	err = ceph_do_getattr(inode, CEPH_CAP_LINK_SHARED, false);
+-	if (err)
++	if (err) {
++		iput(inode);
+ 		return ERR_PTR(err);
++	}
+ 	/* -ESTALE if inode as been unlinked and no file is open */
+ 	if ((inode->i_nlink == 0) && (atomic_read(&inode->i_count) == 1)) {
+ 		iput(inode);
+diff --git a/fs/dax.c b/fs/dax.c
+index b3d27fdc67752..df5485b4bddf1 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -144,6 +144,16 @@ struct wait_exceptional_entry_queue {
+ 	struct exceptional_entry_key key;
+ };
+ 
++/**
++ * enum dax_wake_mode: waitqueue wakeup behaviour
++ * @WAKE_ALL: wake all waiters in the waitqueue
++ * @WAKE_NEXT: wake only the first waiter in the waitqueue
++ */
++enum dax_wake_mode {
++	WAKE_ALL,
++	WAKE_NEXT,
++};
++
+ static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas,
+ 		void *entry, struct exceptional_entry_key *key)
+ {
+@@ -182,7 +192,8 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait,
+  * The important information it's conveying is whether the entry at
+  * this index used to be a PMD entry.
+  */
+-static void dax_wake_entry(struct xa_state *xas, void *entry, bool wake_all)
++static void dax_wake_entry(struct xa_state *xas, void *entry,
++			   enum dax_wake_mode mode)
+ {
+ 	struct exceptional_entry_key key;
+ 	wait_queue_head_t *wq;
+@@ -196,7 +207,7 @@ static void dax_wake_entry(struct xa_state *xas, void *entry, bool wake_all)
+ 	 * must be in the waitqueue and the following check will see them.
+ 	 */
+ 	if (waitqueue_active(wq))
+-		__wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key);
++		__wake_up(wq, TASK_NORMAL, mode == WAKE_ALL ? 0 : 1, &key);
+ }
+ 
+ /*
+@@ -264,11 +275,11 @@ static void wait_entry_unlocked(struct xa_state *xas, void *entry)
+ 	finish_wait(wq, &ewait.wait);
+ }
+ 
+-static void put_unlocked_entry(struct xa_state *xas, void *entry)
++static void put_unlocked_entry(struct xa_state *xas, void *entry,
++			       enum dax_wake_mode mode)
+ {
+-	/* If we were the only waiter woken, wake the next one */
+ 	if (entry && !dax_is_conflict(entry))
+-		dax_wake_entry(xas, entry, false);
++		dax_wake_entry(xas, entry, mode);
+ }
+ 
+ /*
+@@ -286,7 +297,7 @@ static void dax_unlock_entry(struct xa_state *xas, void *entry)
+ 	old = xas_store(xas, entry);
+ 	xas_unlock_irq(xas);
+ 	BUG_ON(!dax_is_locked(old));
+-	dax_wake_entry(xas, entry, false);
++	dax_wake_entry(xas, entry, WAKE_NEXT);
+ }
+ 
+ /*
+@@ -524,7 +535,7 @@ retry:
+ 
+ 		dax_disassociate_entry(entry, mapping, false);
+ 		xas_store(xas, NULL);	/* undo the PMD join */
+-		dax_wake_entry(xas, entry, true);
++		dax_wake_entry(xas, entry, WAKE_ALL);
+ 		mapping->nrexceptional--;
+ 		entry = NULL;
+ 		xas_set(xas, index);
+@@ -622,7 +633,7 @@ struct page *dax_layout_busy_page_range(struct address_space *mapping,
+ 			entry = get_unlocked_entry(&xas, 0);
+ 		if (entry)
+ 			page = dax_busy_page(entry);
+-		put_unlocked_entry(&xas, entry);
++		put_unlocked_entry(&xas, entry, WAKE_NEXT);
+ 		if (page)
+ 			break;
+ 		if (++scanned % XA_CHECK_SCHED)
+@@ -664,7 +675,7 @@ static int __dax_invalidate_entry(struct address_space *mapping,
+ 	mapping->nrexceptional--;
+ 	ret = 1;
+ out:
+-	put_unlocked_entry(&xas, entry);
++	put_unlocked_entry(&xas, entry, WAKE_ALL);
+ 	xas_unlock_irq(&xas);
+ 	return ret;
+ }
+@@ -937,13 +948,13 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+ 	xas_lock_irq(xas);
+ 	xas_store(xas, entry);
+ 	xas_clear_mark(xas, PAGECACHE_TAG_DIRTY);
+-	dax_wake_entry(xas, entry, false);
++	dax_wake_entry(xas, entry, WAKE_NEXT);
+ 
+ 	trace_dax_writeback_one(mapping->host, index, count);
+ 	return ret;
+ 
+  put_unlocked:
+-	put_unlocked_entry(xas, entry);
++	put_unlocked_entry(xas, entry, WAKE_NEXT);
+ 	return ret;
+ }
+ 
+@@ -1684,7 +1695,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
+ 	/* Did we race with someone splitting entry or so? */
+ 	if (!entry || dax_is_conflict(entry) ||
+ 	    (order == 0 && !dax_is_pte_entry(entry))) {
+-		put_unlocked_entry(&xas, entry);
++		put_unlocked_entry(&xas, entry, WAKE_NEXT);
+ 		xas_unlock_irq(&xas);
+ 		trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
+ 						      VM_FAULT_NOPAGE);
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 86c7f04896207..720d65f224f09 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -35,7 +35,7 @@
+ static struct vfsmount *debugfs_mount;
+ static int debugfs_mount_count;
+ static bool debugfs_registered;
+-static unsigned int debugfs_allow = DEFAULT_DEBUGFS_ALLOW_BITS;
++static unsigned int debugfs_allow __ro_after_init = DEFAULT_DEBUGFS_ALLOW_BITS;
+ 
+ /*
+  * Don't allow access attributes to be changed whilst the kernel is locked down
+diff --git a/fs/dlm/config.c b/fs/dlm/config.c
+index 49c5f9407098e..73e6643903af5 100644
+--- a/fs/dlm/config.c
++++ b/fs/dlm/config.c
+@@ -125,7 +125,7 @@ static ssize_t cluster_cluster_name_store(struct config_item *item,
+ CONFIGFS_ATTR(cluster_, cluster_name);
+ 
+ static ssize_t cluster_set(struct dlm_cluster *cl, unsigned int *cl_field,
+-			   int *info_field, bool (*check_cb)(unsigned int x),
++			   int *info_field, int (*check_cb)(unsigned int x),
+ 			   const char *buf, size_t len)
+ {
+ 	unsigned int x;
+@@ -137,8 +137,11 @@ static ssize_t cluster_set(struct dlm_cluster *cl, unsigned int *cl_field,
+ 	if (rc)
+ 		return rc;
+ 
+-	if (check_cb && check_cb(x))
+-		return -EINVAL;
++	if (check_cb) {
++		rc = check_cb(x);
++		if (rc)
++			return rc;
++	}
+ 
+ 	*cl_field = x;
+ 	*info_field = x;
+@@ -161,14 +164,20 @@ static ssize_t cluster_##name##_show(struct config_item *item, char *buf)     \
+ }                                                                             \
+ CONFIGFS_ATTR(cluster_, name);
+ 
+-static bool dlm_check_zero(unsigned int x)
++static int dlm_check_zero(unsigned int x)
+ {
+-	return !x;
++	if (!x)
++		return -EINVAL;
++
++	return 0;
+ }
+ 
+-static bool dlm_check_buffer_size(unsigned int x)
++static int dlm_check_buffer_size(unsigned int x)
+ {
+-	return (x < DEFAULT_BUFFER_SIZE);
++	if (x < DEFAULT_BUFFER_SIZE)
++		return -EINVAL;
++
++	return 0;
+ }
+ 
+ CLUSTER_ATTR(tcp_port, dlm_check_zero);
+diff --git a/fs/dlm/debug_fs.c b/fs/dlm/debug_fs.c
+index d6bbccb0ed152..d5bd990bcab8b 100644
+--- a/fs/dlm/debug_fs.c
++++ b/fs/dlm/debug_fs.c
+@@ -542,6 +542,7 @@ static void *table_seq_next(struct seq_file *seq, void *iter_ptr, loff_t *pos)
+ 
+ 		if (bucket >= ls->ls_rsbtbl_size) {
+ 			kfree(ri);
++			++*pos;
+ 			return NULL;
+ 		}
+ 		tree = toss ? &ls->ls_rsbtbl[bucket].toss : &ls->ls_rsbtbl[bucket].keep;
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index 79f56f16bc2ce..44e2716ac1580 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -612,10 +612,7 @@ static void shutdown_connection(struct connection *con)
+ {
+ 	int ret;
+ 
+-	if (cancel_work_sync(&con->swork)) {
+-		log_print("canceled swork for node %d", con->nodeid);
+-		clear_bit(CF_WRITE_PENDING, &con->flags);
+-	}
++	flush_work(&con->swork);
+ 
+ 	mutex_lock(&con->sock_mutex);
+ 	/* nothing to shutdown */
+diff --git a/fs/dlm/midcomms.c b/fs/dlm/midcomms.c
+index fde3a6afe4bea..0bedfa8606a26 100644
+--- a/fs/dlm/midcomms.c
++++ b/fs/dlm/midcomms.c
+@@ -49,9 +49,10 @@ int dlm_process_incoming_buffer(int nodeid, unsigned char *buf, int len)
+ 		 * cannot deliver this message to upper layers
+ 		 */
+ 		msglen = get_unaligned_le16(&hd->h_length);
+-		if (msglen > DEFAULT_BUFFER_SIZE) {
+-			log_print("received invalid length header: %u, will abort message parsing",
+-				  msglen);
++		if (msglen > DEFAULT_BUFFER_SIZE ||
++		    msglen < sizeof(struct dlm_header)) {
++			log_print("received invalid length header: %u from node %d, will abort message parsing",
++				  msglen, nodeid);
+ 			return -EBADMSG;
+ 		}
+ 
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 4a0411b229a5d..896e1176e0449 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -1694,7 +1694,7 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 		}
+ 
+ 		/* Range is mapped and needs a state change */
+-		jbd_debug(1, "Converting from %d to %d %lld",
++		jbd_debug(1, "Converting from %ld to %d %lld",
+ 				map.m_flags & EXT4_MAP_UNWRITTEN,
+ 			ext4_ext_is_unwritten(ex), map.m_pblk);
+ 		ret = ext4_ext_replay_update_ex(inode, cur, map.m_len,
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index d3f407ba64c9e..f94b13075ea47 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -123,19 +123,6 @@ static void f2fs_unlock_rpages(struct compress_ctx *cc, int len)
+ 	f2fs_drop_rpages(cc, len, true);
+ }
+ 
+-static void f2fs_put_rpages_mapping(struct address_space *mapping,
+-				pgoff_t start, int len)
+-{
+-	int i;
+-
+-	for (i = 0; i < len; i++) {
+-		struct page *page = find_get_page(mapping, start + i);
+-
+-		put_page(page);
+-		put_page(page);
+-	}
+-}
+-
+ static void f2fs_put_rpages_wbc(struct compress_ctx *cc,
+ 		struct writeback_control *wbc, bool redirty, int unlock)
+ {
+@@ -164,13 +151,14 @@ int f2fs_init_compress_ctx(struct compress_ctx *cc)
+ 	return cc->rpages ? 0 : -ENOMEM;
+ }
+ 
+-void f2fs_destroy_compress_ctx(struct compress_ctx *cc)
++void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse)
+ {
+ 	page_array_free(cc->inode, cc->rpages, cc->cluster_size);
+ 	cc->rpages = NULL;
+ 	cc->nr_rpages = 0;
+ 	cc->nr_cpages = 0;
+-	cc->cluster_idx = NULL_CLUSTER;
++	if (!reuse)
++		cc->cluster_idx = NULL_CLUSTER;
+ }
+ 
+ void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page)
+@@ -986,7 +974,7 @@ retry:
+ 		}
+ 
+ 		if (PageUptodate(page))
+-			unlock_page(page);
++			f2fs_put_page(page, 1);
+ 		else
+ 			f2fs_compress_ctx_add_page(cc, page);
+ 	}
+@@ -996,33 +984,35 @@ retry:
+ 
+ 		ret = f2fs_read_multi_pages(cc, &bio, cc->cluster_size,
+ 					&last_block_in_bio, false, true);
+-		f2fs_destroy_compress_ctx(cc);
++		f2fs_put_rpages(cc);
++		f2fs_destroy_compress_ctx(cc, true);
+ 		if (ret)
+-			goto release_pages;
++			goto out;
+ 		if (bio)
+ 			f2fs_submit_bio(sbi, bio, DATA);
+ 
+ 		ret = f2fs_init_compress_ctx(cc);
+ 		if (ret)
+-			goto release_pages;
++			goto out;
+ 	}
+ 
+ 	for (i = 0; i < cc->cluster_size; i++) {
+ 		f2fs_bug_on(sbi, cc->rpages[i]);
+ 
+ 		page = find_lock_page(mapping, start_idx + i);
+-		f2fs_bug_on(sbi, !page);
++		if (!page) {
++			/* page can be truncated */
++			goto release_and_retry;
++		}
+ 
+ 		f2fs_wait_on_page_writeback(page, DATA, true, true);
+-
+ 		f2fs_compress_ctx_add_page(cc, page);
+-		f2fs_put_page(page, 0);
+ 
+ 		if (!PageUptodate(page)) {
++release_and_retry:
++			f2fs_put_rpages(cc);
+ 			f2fs_unlock_rpages(cc, i + 1);
+-			f2fs_put_rpages_mapping(mapping, start_idx,
+-					cc->cluster_size);
+-			f2fs_destroy_compress_ctx(cc);
++			f2fs_destroy_compress_ctx(cc, true);
+ 			goto retry;
+ 		}
+ 	}
+@@ -1053,10 +1043,10 @@ retry:
+ 	}
+ 
+ unlock_pages:
++	f2fs_put_rpages(cc);
+ 	f2fs_unlock_rpages(cc, i);
+-release_pages:
+-	f2fs_put_rpages_mapping(mapping, start_idx, i);
+-	f2fs_destroy_compress_ctx(cc);
++	f2fs_destroy_compress_ctx(cc, true);
++out:
+ 	return ret;
+ }
+ 
+@@ -1091,7 +1081,7 @@ bool f2fs_compress_write_end(struct inode *inode, void *fsdata,
+ 		set_cluster_dirty(&cc);
+ 
+ 	f2fs_put_rpages_wbc(&cc, NULL, false, 1);
+-	f2fs_destroy_compress_ctx(&cc);
++	f2fs_destroy_compress_ctx(&cc, false);
+ 
+ 	return first_index;
+ }
+@@ -1310,7 +1300,7 @@ unlock_continue:
+ 	f2fs_put_rpages(cc);
+ 	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
+ 	cc->cpages = NULL;
+-	f2fs_destroy_compress_ctx(cc);
++	f2fs_destroy_compress_ctx(cc, false);
+ 	return 0;
+ 
+ out_destroy_crypt:
+@@ -1321,7 +1311,8 @@ out_destroy_crypt:
+ 	for (i = 0; i < cc->nr_cpages; i++) {
+ 		if (!cc->cpages[i])
+ 			continue;
+-		f2fs_put_page(cc->cpages[i], 1);
++		f2fs_compress_free_page(cc->cpages[i]);
++		cc->cpages[i] = NULL;
+ 	}
+ out_put_cic:
+ 	kmem_cache_free(cic_entry_slab, cic);
+@@ -1471,7 +1462,7 @@ write:
+ 	err = f2fs_write_raw_pages(cc, submitted, wbc, io_type);
+ 	f2fs_put_rpages_wbc(cc, wbc, false, 0);
+ destroy_out:
+-	f2fs_destroy_compress_ctx(cc);
++	f2fs_destroy_compress_ctx(cc, false);
+ 	return err;
+ }
+ 
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 901bd1d963ee8..bdc0f3b2d7abf 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2419,7 +2419,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
+ 							max_nr_pages,
+ 							&last_block_in_bio,
+ 							rac != NULL, false);
+-				f2fs_destroy_compress_ctx(&cc);
++				f2fs_destroy_compress_ctx(&cc, false);
+ 				if (ret)
+ 					goto set_error_page;
+ 			}
+@@ -2464,7 +2464,7 @@ next_page:
+ 							max_nr_pages,
+ 							&last_block_in_bio,
+ 							rac != NULL, false);
+-				f2fs_destroy_compress_ctx(&cc);
++				f2fs_destroy_compress_ctx(&cc, false);
+ 			}
+ 		}
+ #endif
+@@ -3168,7 +3168,7 @@ next:
+ 		}
+ 	}
+ 	if (f2fs_compressed_file(inode))
+-		f2fs_destroy_compress_ctx(&cc);
++		f2fs_destroy_compress_ctx(&cc, false);
+ #endif
+ 	if (retry) {
+ 		index = 0;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 699815e94bd30..69a390c6064c6 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -402,85 +402,6 @@ static inline bool __has_cursum_space(struct f2fs_journal *journal,
+ 	return size <= MAX_SIT_JENTRIES(journal);
+ }
+ 
+-/*
+- * f2fs-specific ioctl commands
+- */
+-#define F2FS_IOCTL_MAGIC		0xf5
+-#define F2FS_IOC_START_ATOMIC_WRITE	_IO(F2FS_IOCTL_MAGIC, 1)
+-#define F2FS_IOC_COMMIT_ATOMIC_WRITE	_IO(F2FS_IOCTL_MAGIC, 2)
+-#define F2FS_IOC_START_VOLATILE_WRITE	_IO(F2FS_IOCTL_MAGIC, 3)
+-#define F2FS_IOC_RELEASE_VOLATILE_WRITE	_IO(F2FS_IOCTL_MAGIC, 4)
+-#define F2FS_IOC_ABORT_VOLATILE_WRITE	_IO(F2FS_IOCTL_MAGIC, 5)
+-#define F2FS_IOC_GARBAGE_COLLECT	_IOW(F2FS_IOCTL_MAGIC, 6, __u32)
+-#define F2FS_IOC_WRITE_CHECKPOINT	_IO(F2FS_IOCTL_MAGIC, 7)
+-#define F2FS_IOC_DEFRAGMENT		_IOWR(F2FS_IOCTL_MAGIC, 8,	\
+-						struct f2fs_defragment)
+-#define F2FS_IOC_MOVE_RANGE		_IOWR(F2FS_IOCTL_MAGIC, 9,	\
+-						struct f2fs_move_range)
+-#define F2FS_IOC_FLUSH_DEVICE		_IOW(F2FS_IOCTL_MAGIC, 10,	\
+-						struct f2fs_flush_device)
+-#define F2FS_IOC_GARBAGE_COLLECT_RANGE	_IOW(F2FS_IOCTL_MAGIC, 11,	\
+-						struct f2fs_gc_range)
+-#define F2FS_IOC_GET_FEATURES		_IOR(F2FS_IOCTL_MAGIC, 12, __u32)
+-#define F2FS_IOC_SET_PIN_FILE		_IOW(F2FS_IOCTL_MAGIC, 13, __u32)
+-#define F2FS_IOC_GET_PIN_FILE		_IOR(F2FS_IOCTL_MAGIC, 14, __u32)
+-#define F2FS_IOC_PRECACHE_EXTENTS	_IO(F2FS_IOCTL_MAGIC, 15)
+-#define F2FS_IOC_RESIZE_FS		_IOW(F2FS_IOCTL_MAGIC, 16, __u64)
+-#define F2FS_IOC_GET_COMPRESS_BLOCKS	_IOR(F2FS_IOCTL_MAGIC, 17, __u64)
+-#define F2FS_IOC_RELEASE_COMPRESS_BLOCKS				\
+-					_IOR(F2FS_IOCTL_MAGIC, 18, __u64)
+-#define F2FS_IOC_RESERVE_COMPRESS_BLOCKS				\
+-					_IOR(F2FS_IOCTL_MAGIC, 19, __u64)
+-#define F2FS_IOC_SEC_TRIM_FILE		_IOW(F2FS_IOCTL_MAGIC, 20,	\
+-						struct f2fs_sectrim_range)
+-
+-/*
+- * should be same as XFS_IOC_GOINGDOWN.
+- * Flags for going down operation used by FS_IOC_GOINGDOWN
+- */
+-#define F2FS_IOC_SHUTDOWN	_IOR('X', 125, __u32)	/* Shutdown */
+-#define F2FS_GOING_DOWN_FULLSYNC	0x0	/* going down with full sync */
+-#define F2FS_GOING_DOWN_METASYNC	0x1	/* going down with metadata */
+-#define F2FS_GOING_DOWN_NOSYNC		0x2	/* going down */
+-#define F2FS_GOING_DOWN_METAFLUSH	0x3	/* going down with meta flush */
+-#define F2FS_GOING_DOWN_NEED_FSCK	0x4	/* going down to trigger fsck */
+-
+-/*
+- * Flags used by F2FS_IOC_SEC_TRIM_FILE
+- */
+-#define F2FS_TRIM_FILE_DISCARD		0x1	/* send discard command */
+-#define F2FS_TRIM_FILE_ZEROOUT		0x2	/* zero out */
+-#define F2FS_TRIM_FILE_MASK		0x3
+-
+-struct f2fs_gc_range {
+-	u32 sync;
+-	u64 start;
+-	u64 len;
+-};
+-
+-struct f2fs_defragment {
+-	u64 start;
+-	u64 len;
+-};
+-
+-struct f2fs_move_range {
+-	u32 dst_fd;		/* destination fd */
+-	u64 pos_in;		/* start position in src_fd */
+-	u64 pos_out;		/* start position in dst_fd */
+-	u64 len;		/* size to move */
+-};
+-
+-struct f2fs_flush_device {
+-	u32 dev_num;		/* device number to flush */
+-	u32 segments;		/* # of segments to flush */
+-};
+-
+-struct f2fs_sectrim_range {
+-	u64 start;
+-	u64 len;
+-	u64 flags;
+-};
+-
+ /* for inline stuff */
+ #define DEF_INLINE_RESERVED_SIZE	1
+ static inline int get_extra_isize(struct inode *inode);
+@@ -3361,6 +3282,7 @@ block_t f2fs_get_unusable_blocks(struct f2fs_sb_info *sbi);
+ int f2fs_disable_cp_again(struct f2fs_sb_info *sbi, block_t unusable);
+ void f2fs_release_discard_addrs(struct f2fs_sb_info *sbi);
+ int f2fs_npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra);
++bool f2fs_segment_has_free_slot(struct f2fs_sb_info *sbi, int segno);
+ void f2fs_init_inmem_curseg(struct f2fs_sb_info *sbi);
+ void f2fs_save_inmem_curseg(struct f2fs_sb_info *sbi);
+ void f2fs_restore_inmem_curseg(struct f2fs_sb_info *sbi);
+@@ -3368,7 +3290,7 @@ void f2fs_get_new_segment(struct f2fs_sb_info *sbi,
+ 			unsigned int *newseg, bool new_sec, int dir);
+ void f2fs_allocate_segment_for_resize(struct f2fs_sb_info *sbi, int type,
+ 					unsigned int start, unsigned int end);
+-void f2fs_allocate_new_segment(struct f2fs_sb_info *sbi, int type);
++void f2fs_allocate_new_section(struct f2fs_sb_info *sbi, int type);
+ void f2fs_allocate_new_segments(struct f2fs_sb_info *sbi);
+ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range);
+ bool f2fs_exist_trim_candidates(struct f2fs_sb_info *sbi,
+@@ -3528,7 +3450,7 @@ void f2fs_destroy_post_read_wq(struct f2fs_sb_info *sbi);
+ int f2fs_start_gc_thread(struct f2fs_sb_info *sbi);
+ void f2fs_stop_gc_thread(struct f2fs_sb_info *sbi);
+ block_t f2fs_start_bidx_of_node(unsigned int node_ofs, struct inode *inode);
+-int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, bool background,
++int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, bool background, bool force,
+ 			unsigned int segno);
+ void f2fs_build_gc_manager(struct f2fs_sb_info *sbi);
+ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count);
+@@ -3934,7 +3856,7 @@ void f2fs_free_dic(struct decompress_io_ctx *dic);
+ void f2fs_decompress_end_io(struct page **rpages,
+ 			unsigned int cluster_size, bool err, bool verity);
+ int f2fs_init_compress_ctx(struct compress_ctx *cc);
+-void f2fs_destroy_compress_ctx(struct compress_ctx *cc);
++void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse);
+ void f2fs_init_compress_info(struct f2fs_sb_info *sbi);
+ int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi);
+ void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 498e3aac79340..5c74b29971976 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -31,6 +31,7 @@
+ #include "gc.h"
+ #include "trace.h"
+ #include <trace/events/f2fs.h>
++#include <uapi/linux/f2fs.h>
+ 
+ static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf)
+ {
+@@ -1615,9 +1616,10 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
+ 	struct f2fs_map_blocks map = { .m_next_pgofs = NULL,
+ 			.m_next_extent = NULL, .m_seg_type = NO_CHECK_TYPE,
+ 			.m_may_create = true };
+-	pgoff_t pg_end;
++	pgoff_t pg_start, pg_end;
+ 	loff_t new_size = i_size_read(inode);
+ 	loff_t off_end;
++	block_t expanded = 0;
+ 	int err;
+ 
+ 	err = inode_newsize_ok(inode, (len + offset));
+@@ -1630,11 +1632,12 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
+ 
+ 	f2fs_balance_fs(sbi, true);
+ 
++	pg_start = ((unsigned long long)offset) >> PAGE_SHIFT;
+ 	pg_end = ((unsigned long long)offset + len) >> PAGE_SHIFT;
+ 	off_end = (offset + len) & (PAGE_SIZE - 1);
+ 
+-	map.m_lblk = ((unsigned long long)offset) >> PAGE_SHIFT;
+-	map.m_len = pg_end - map.m_lblk;
++	map.m_lblk = pg_start;
++	map.m_len = pg_end - pg_start;
+ 	if (off_end)
+ 		map.m_len++;
+ 
+@@ -1642,19 +1645,15 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
+ 		return 0;
+ 
+ 	if (f2fs_is_pinned_file(inode)) {
+-		block_t len = (map.m_len >> sbi->log_blocks_per_seg) <<
+-					sbi->log_blocks_per_seg;
+-		block_t done = 0;
++		block_t sec_blks = BLKS_PER_SEC(sbi);
++		block_t sec_len = roundup(map.m_len, sec_blks);
+ 
+-		if (map.m_len % sbi->blocks_per_seg)
+-			len += sbi->blocks_per_seg;
+-
+-		map.m_len = sbi->blocks_per_seg;
++		map.m_len = sec_blks;
+ next_alloc:
+ 		if (has_not_enough_free_secs(sbi, 0,
+ 			GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi)))) {
+ 			down_write(&sbi->gc_lock);
+-			err = f2fs_gc(sbi, true, false, NULL_SEGNO);
++			err = f2fs_gc(sbi, true, false, false, NULL_SEGNO);
+ 			if (err && err != -ENODATA && err != -EAGAIN)
+ 				goto out_err;
+ 		}
+@@ -1662,7 +1661,7 @@ next_alloc:
+ 		down_write(&sbi->pin_sem);
+ 
+ 		f2fs_lock_op(sbi);
+-		f2fs_allocate_new_segment(sbi, CURSEG_COLD_DATA_PINNED);
++		f2fs_allocate_new_section(sbi, CURSEG_COLD_DATA_PINNED);
+ 		f2fs_unlock_op(sbi);
+ 
+ 		map.m_seg_type = CURSEG_COLD_DATA_PINNED;
+@@ -1670,24 +1669,25 @@ next_alloc:
+ 
+ 		up_write(&sbi->pin_sem);
+ 
+-		done += map.m_len;
+-		len -= map.m_len;
++		expanded += map.m_len;
++		sec_len -= map.m_len;
+ 		map.m_lblk += map.m_len;
+-		if (!err && len)
++		if (!err && sec_len)
+ 			goto next_alloc;
+ 
+-		map.m_len = done;
++		map.m_len = expanded;
+ 	} else {
+ 		err = f2fs_map_blocks(inode, &map, 1, F2FS_GET_BLOCK_PRE_AIO);
++		expanded = map.m_len;
+ 	}
+ out_err:
+ 	if (err) {
+ 		pgoff_t last_off;
+ 
+-		if (!map.m_len)
++		if (!expanded)
+ 			return err;
+ 
+-		last_off = map.m_lblk + map.m_len - 1;
++		last_off = pg_start + expanded - 1;
+ 
+ 		/* update new size to the failed position */
+ 		new_size = (last_off == pg_end) ? offset + len :
+@@ -2489,32 +2489,25 @@ static int f2fs_ioc_gc(struct file *filp, unsigned long arg)
+ 		down_write(&sbi->gc_lock);
+ 	}
+ 
+-	ret = f2fs_gc(sbi, sync, true, NULL_SEGNO);
++	ret = f2fs_gc(sbi, sync, true, false, NULL_SEGNO);
+ out:
+ 	mnt_drop_write_file(filp);
+ 	return ret;
+ }
+ 
+-static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
++static int __f2fs_ioc_gc_range(struct file *filp, struct f2fs_gc_range *range)
+ {
+-	struct inode *inode = file_inode(filp);
+-	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+-	struct f2fs_gc_range range;
++	struct f2fs_sb_info *sbi = F2FS_I_SB(file_inode(filp));
+ 	u64 end;
+ 	int ret;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+-
+-	if (copy_from_user(&range, (struct f2fs_gc_range __user *)arg,
+-							sizeof(range)))
+-		return -EFAULT;
+-
+ 	if (f2fs_readonly(sbi->sb))
+ 		return -EROFS;
+ 
+-	end = range.start + range.len;
+-	if (end < range.start || range.start < MAIN_BLKADDR(sbi) ||
++	end = range->start + range->len;
++	if (end < range->start || range->start < MAIN_BLKADDR(sbi) ||
+ 					end >= MAX_BLKADDR(sbi))
+ 		return -EINVAL;
+ 
+@@ -2523,7 +2516,7 @@ static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
+ 		return ret;
+ 
+ do_more:
+-	if (!range.sync) {
++	if (!range->sync) {
+ 		if (!down_write_trylock(&sbi->gc_lock)) {
+ 			ret = -EBUSY;
+ 			goto out;
+@@ -2532,20 +2525,31 @@ do_more:
+ 		down_write(&sbi->gc_lock);
+ 	}
+ 
+-	ret = f2fs_gc(sbi, range.sync, true, GET_SEGNO(sbi, range.start));
++	ret = f2fs_gc(sbi, range->sync, true, false,
++				GET_SEGNO(sbi, range->start));
+ 	if (ret) {
+ 		if (ret == -EBUSY)
+ 			ret = -EAGAIN;
+ 		goto out;
+ 	}
+-	range.start += BLKS_PER_SEC(sbi);
+-	if (range.start <= end)
++	range->start += BLKS_PER_SEC(sbi);
++	if (range->start <= end)
+ 		goto do_more;
+ out:
+ 	mnt_drop_write_file(filp);
+ 	return ret;
+ }
+ 
++static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
++{
++	struct f2fs_gc_range range;
++
++	if (copy_from_user(&range, (struct f2fs_gc_range __user *)arg,
++							sizeof(range)))
++		return -EFAULT;
++	return __f2fs_ioc_gc_range(filp, &range);
++}
++
+ static int f2fs_ioc_write_checkpoint(struct file *filp, unsigned long arg)
+ {
+ 	struct inode *inode = file_inode(filp);
+@@ -2882,9 +2886,9 @@ out:
+ 	return ret;
+ }
+ 
+-static int f2fs_ioc_move_range(struct file *filp, unsigned long arg)
++static int __f2fs_ioc_move_range(struct file *filp,
++				struct f2fs_move_range *range)
+ {
+-	struct f2fs_move_range range;
+ 	struct fd dst;
+ 	int err;
+ 
+@@ -2892,11 +2896,7 @@ static int f2fs_ioc_move_range(struct file *filp, unsigned long arg)
+ 			!(filp->f_mode & FMODE_WRITE))
+ 		return -EBADF;
+ 
+-	if (copy_from_user(&range, (struct f2fs_move_range __user *)arg,
+-							sizeof(range)))
+-		return -EFAULT;
+-
+-	dst = fdget(range.dst_fd);
++	dst = fdget(range->dst_fd);
+ 	if (!dst.file)
+ 		return -EBADF;
+ 
+@@ -2909,21 +2909,25 @@ static int f2fs_ioc_move_range(struct file *filp, unsigned long arg)
+ 	if (err)
+ 		goto err_out;
+ 
+-	err = f2fs_move_file_range(filp, range.pos_in, dst.file,
+-					range.pos_out, range.len);
++	err = f2fs_move_file_range(filp, range->pos_in, dst.file,
++					range->pos_out, range->len);
+ 
+ 	mnt_drop_write_file(filp);
+-	if (err)
+-		goto err_out;
+-
+-	if (copy_to_user((struct f2fs_move_range __user *)arg,
+-						&range, sizeof(range)))
+-		err = -EFAULT;
+ err_out:
+ 	fdput(dst);
+ 	return err;
+ }
+ 
++static int f2fs_ioc_move_range(struct file *filp, unsigned long arg)
++{
++	struct f2fs_move_range range;
++
++	if (copy_from_user(&range, (struct f2fs_move_range __user *)arg,
++							sizeof(range)))
++		return -EFAULT;
++	return __f2fs_ioc_move_range(filp, &range);
++}
++
+ static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg)
+ {
+ 	struct inode *inode = file_inode(filp);
+@@ -2975,7 +2979,7 @@ static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg)
+ 		sm->last_victim[GC_CB] = end_segno + 1;
+ 		sm->last_victim[GC_GREEDY] = end_segno + 1;
+ 		sm->last_victim[ALLOC_NEXT] = end_segno + 1;
+-		ret = f2fs_gc(sbi, true, true, start_segno);
++		ret = f2fs_gc(sbi, true, true, true, start_segno);
+ 		if (ret == -EAGAIN)
+ 			ret = 0;
+ 		else if (ret < 0)
+@@ -3960,13 +3964,8 @@ err:
+ 	return ret;
+ }
+ 
+-long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
++static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ {
+-	if (unlikely(f2fs_cp_error(F2FS_I_SB(file_inode(filp)))))
+-		return -EIO;
+-	if (!f2fs_is_checkpoint_ready(F2FS_I_SB(file_inode(filp))))
+-		return -ENOSPC;
+-
+ 	switch (cmd) {
+ 	case FS_IOC_GETFLAGS:
+ 		return f2fs_ioc_getflags(filp, arg);
+@@ -4053,6 +4052,16 @@ long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 	}
+ }
+ 
++long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
++{
++	if (unlikely(f2fs_cp_error(F2FS_I_SB(file_inode(filp)))))
++		return -EIO;
++	if (!f2fs_is_checkpoint_ready(F2FS_I_SB(file_inode(filp))))
++		return -ENOSPC;
++
++	return __f2fs_ioctl(filp, cmd, arg);
++}
++
+ static ssize_t f2fs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ {
+ 	struct file *file = iocb->ki_filp;
+@@ -4175,8 +4184,63 @@ out:
+ }
+ 
+ #ifdef CONFIG_COMPAT
++struct compat_f2fs_gc_range {
++	u32 sync;
++	compat_u64 start;
++	compat_u64 len;
++};
++#define F2FS_IOC32_GARBAGE_COLLECT_RANGE	_IOW(F2FS_IOCTL_MAGIC, 11,\
++						struct compat_f2fs_gc_range)
++
++static int f2fs_compat_ioc_gc_range(struct file *file, unsigned long arg)
++{
++	struct compat_f2fs_gc_range __user *urange;
++	struct f2fs_gc_range range;
++	int err;
++
++	urange = compat_ptr(arg);
++	err = get_user(range.sync, &urange->sync);
++	err |= get_user(range.start, &urange->start);
++	err |= get_user(range.len, &urange->len);
++	if (err)
++		return -EFAULT;
++
++	return __f2fs_ioc_gc_range(file, &range);
++}
++
++struct compat_f2fs_move_range {
++	u32 dst_fd;
++	compat_u64 pos_in;
++	compat_u64 pos_out;
++	compat_u64 len;
++};
++#define F2FS_IOC32_MOVE_RANGE		_IOWR(F2FS_IOCTL_MAGIC, 9,	\
++					struct compat_f2fs_move_range)
++
++static int f2fs_compat_ioc_move_range(struct file *file, unsigned long arg)
++{
++	struct compat_f2fs_move_range __user *urange;
++	struct f2fs_move_range range;
++	int err;
++
++	urange = compat_ptr(arg);
++	err = get_user(range.dst_fd, &urange->dst_fd);
++	err |= get_user(range.pos_in, &urange->pos_in);
++	err |= get_user(range.pos_out, &urange->pos_out);
++	err |= get_user(range.len, &urange->len);
++	if (err)
++		return -EFAULT;
++
++	return __f2fs_ioc_move_range(file, &range);
++}
++
+ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ {
++	if (unlikely(f2fs_cp_error(F2FS_I_SB(file_inode(file)))))
++		return -EIO;
++	if (!f2fs_is_checkpoint_ready(F2FS_I_SB(file_inode(file))))
++		return -ENOSPC;
++
+ 	switch (cmd) {
+ 	case FS_IOC32_GETFLAGS:
+ 		cmd = FS_IOC_GETFLAGS;
+@@ -4187,6 +4251,10 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	case FS_IOC32_GETVERSION:
+ 		cmd = FS_IOC_GETVERSION;
+ 		break;
++	case F2FS_IOC32_GARBAGE_COLLECT_RANGE:
++		return f2fs_compat_ioc_gc_range(file, arg);
++	case F2FS_IOC32_MOVE_RANGE:
++		return f2fs_compat_ioc_move_range(file, arg);
+ 	case F2FS_IOC_START_ATOMIC_WRITE:
+ 	case F2FS_IOC_COMMIT_ATOMIC_WRITE:
+ 	case F2FS_IOC_START_VOLATILE_WRITE:
+@@ -4204,10 +4272,8 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
+ 	case FS_IOC_GET_ENCRYPTION_NONCE:
+ 	case F2FS_IOC_GARBAGE_COLLECT:
+-	case F2FS_IOC_GARBAGE_COLLECT_RANGE:
+ 	case F2FS_IOC_WRITE_CHECKPOINT:
+ 	case F2FS_IOC_DEFRAGMENT:
+-	case F2FS_IOC_MOVE_RANGE:
+ 	case F2FS_IOC_FLUSH_DEVICE:
+ 	case F2FS_IOC_GET_FEATURES:
+ 	case FS_IOC_FSGETXATTR:
+@@ -4228,7 +4294,7 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	default:
+ 		return -ENOIOCTLCMD;
+ 	}
+-	return f2fs_ioctl(file, cmd, (unsigned long) compat_ptr(arg));
++	return __f2fs_ioctl(file, cmd, (unsigned long) compat_ptr(arg));
+ }
+ #endif
+ 
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 05641a1e36cc8..9b38cef4d50fe 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -112,7 +112,7 @@ do_gc:
+ 		sync_mode = F2FS_OPTION(sbi).bggc_mode == BGGC_MODE_SYNC;
+ 
+ 		/* if return value is not zero, no victim was selected */
+-		if (f2fs_gc(sbi, sync_mode, true, NULL_SEGNO))
++		if (f2fs_gc(sbi, sync_mode, true, false, NULL_SEGNO))
+ 			wait_ms = gc_th->no_gc_sleep_time;
+ 
+ 		trace_f2fs_background_gc(sbi->sb, wait_ms,
+@@ -392,10 +392,6 @@ static void add_victim_entry(struct f2fs_sb_info *sbi,
+ 		if (p->gc_mode == GC_AT &&
+ 			get_valid_blocks(sbi, segno, true) == 0)
+ 			return;
+-
+-		if (p->alloc_mode == AT_SSR &&
+-			get_seg_entry(sbi, segno)->ckpt_valid_blocks == 0)
+-			return;
+ 	}
+ 
+ 	for (i = 0; i < sbi->segs_per_sec; i++)
+@@ -728,11 +724,27 @@ retry:
+ 
+ 		if (sec_usage_check(sbi, secno))
+ 			goto next;
++
+ 		/* Don't touch checkpointed data */
+-		if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED) &&
+-					get_ckpt_valid_blocks(sbi, segno) &&
+-					p.alloc_mode == LFS))
+-			goto next;
++		if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
++			if (p.alloc_mode == LFS) {
++				/*
++				 * LFS is set to find source section during GC.
++				 * The victim should have no checkpointed data.
++				 */
++				if (get_ckpt_valid_blocks(sbi, segno, true))
++					goto next;
++			} else {
++				/*
++				 * SSR | AT_SSR are set to find target segment
++				 * for writes which can be full by checkpointed
++				 * and newly written blocks.
++				 */
++				if (!f2fs_segment_has_free_slot(sbi, segno))
++					goto next;
++			}
++		}
++
+ 		if (gc_type == BG_GC && test_bit(secno, dirty_i->victim_secmap))
+ 			goto next;
+ 
+@@ -1356,7 +1368,8 @@ out:
+  * the victim data block is ignored.
+  */
+ static int gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+-		struct gc_inode_list *gc_list, unsigned int segno, int gc_type)
++		struct gc_inode_list *gc_list, unsigned int segno, int gc_type,
++		bool force_migrate)
+ {
+ 	struct super_block *sb = sbi->sb;
+ 	struct f2fs_summary *entry;
+@@ -1385,8 +1398,8 @@ next_step:
+ 		 * race condition along with SSR block allocation.
+ 		 */
+ 		if ((gc_type == BG_GC && has_not_enough_free_secs(sbi, 0, 0)) ||
+-				get_valid_blocks(sbi, segno, true) ==
+-							BLKS_PER_SEC(sbi))
++			(!force_migrate && get_valid_blocks(sbi, segno, true) ==
++							BLKS_PER_SEC(sbi)))
+ 			return submitted;
+ 
+ 		if (check_valid_map(sbi, segno, off) == 0)
+@@ -1521,7 +1534,8 @@ static int __get_victim(struct f2fs_sb_info *sbi, unsigned int *victim,
+ 
+ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ 				unsigned int start_segno,
+-				struct gc_inode_list *gc_list, int gc_type)
++				struct gc_inode_list *gc_list, int gc_type,
++				bool force_migrate)
+ {
+ 	struct page *sum_page;
+ 	struct f2fs_summary_block *sum;
+@@ -1608,7 +1622,8 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ 								gc_type);
+ 		else
+ 			submitted += gc_data_segment(sbi, sum->entries, gc_list,
+-							segno, gc_type);
++							segno, gc_type,
++							force_migrate);
+ 
+ 		stat_inc_seg_count(sbi, type, gc_type);
+ 		migrated++;
+@@ -1636,7 +1651,7 @@ skip:
+ }
+ 
+ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
+-			bool background, unsigned int segno)
++			bool background, bool force, unsigned int segno)
+ {
+ 	int gc_type = sync ? FG_GC : BG_GC;
+ 	int sec_freed = 0, seg_freed = 0, total_freed = 0;
+@@ -1698,7 +1713,7 @@ gc_more:
+ 	if (ret)
+ 		goto stop;
+ 
+-	seg_freed = do_garbage_collect(sbi, segno, &gc_list, gc_type);
++	seg_freed = do_garbage_collect(sbi, segno, &gc_list, gc_type, force);
+ 	if (gc_type == FG_GC &&
+ 		seg_freed == f2fs_usable_segs_in_sec(sbi, segno))
+ 		sec_freed++;
+@@ -1837,7 +1852,7 @@ static int free_segment_range(struct f2fs_sb_info *sbi,
+ 			.iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
+ 		};
+ 
+-		do_garbage_collect(sbi, segno, &gc_list, FG_GC);
++		do_garbage_collect(sbi, segno, &gc_list, FG_GC, true);
+ 		put_gc_inode(&gc_list);
+ 
+ 		if (!gc_only && get_valid_blocks(sbi, segno, true)) {
+@@ -1976,7 +1991,20 @@ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count)
+ 
+ 	/* stop CP to protect MAIN_SEC in free_segment_range */
+ 	f2fs_lock_op(sbi);
++
++	spin_lock(&sbi->stat_lock);
++	if (shrunk_blocks + valid_user_blocks(sbi) +
++		sbi->current_reserved_blocks + sbi->unusable_block_count +
++		F2FS_OPTION(sbi).root_reserved_blocks > sbi->user_block_count)
++		err = -ENOSPC;
++	spin_unlock(&sbi->stat_lock);
++
++	if (err)
++		goto out_unlock;
++
+ 	err = free_segment_range(sbi, secs, true);
++
++out_unlock:
+ 	f2fs_unlock_op(sbi);
+ 	up_write(&sbi->gc_lock);
+ 	if (err)
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index b9e37f0b3e093..1d7dafdaffe30 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -218,7 +218,8 @@ out:
+ 
+ 	f2fs_put_page(page, 1);
+ 
+-	f2fs_balance_fs(sbi, dn.node_changed);
++	if (!err)
++		f2fs_balance_fs(sbi, dn.node_changed);
+ 
+ 	return err;
+ }
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index f2a4265318f5c..d04b449978aa8 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -327,23 +327,27 @@ void f2fs_drop_inmem_pages(struct inode *inode)
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct f2fs_inode_info *fi = F2FS_I(inode);
+ 
+-	while (!list_empty(&fi->inmem_pages)) {
++	do {
+ 		mutex_lock(&fi->inmem_lock);
++		if (list_empty(&fi->inmem_pages)) {
++			fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
++
++			spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
++			if (!list_empty(&fi->inmem_ilist))
++				list_del_init(&fi->inmem_ilist);
++			if (f2fs_is_atomic_file(inode)) {
++				clear_inode_flag(inode, FI_ATOMIC_FILE);
++				sbi->atomic_files--;
++			}
++			spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
++
++			mutex_unlock(&fi->inmem_lock);
++			break;
++		}
+ 		__revoke_inmem_pages(inode, &fi->inmem_pages,
+ 						true, false, true);
+ 		mutex_unlock(&fi->inmem_lock);
+-	}
+-
+-	fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
+-
+-	spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
+-	if (!list_empty(&fi->inmem_ilist))
+-		list_del_init(&fi->inmem_ilist);
+-	if (f2fs_is_atomic_file(inode)) {
+-		clear_inode_flag(inode, FI_ATOMIC_FILE);
+-		sbi->atomic_files--;
+-	}
+-	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
++	} while (1);
+ }
+ 
+ void f2fs_drop_inmem_page(struct inode *inode, struct page *page)
+@@ -507,7 +511,7 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
+ 	 */
+ 	if (has_not_enough_free_secs(sbi, 0, 0)) {
+ 		down_write(&sbi->gc_lock);
+-		f2fs_gc(sbi, false, false, NULL_SEGNO);
++		f2fs_gc(sbi, false, false, false, NULL_SEGNO);
+ 	}
+ }
+ 
+@@ -871,7 +875,7 @@ static void locate_dirty_segment(struct f2fs_sb_info *sbi, unsigned int segno)
+ 	mutex_lock(&dirty_i->seglist_lock);
+ 
+ 	valid_blocks = get_valid_blocks(sbi, segno, false);
+-	ckpt_valid_blocks = get_ckpt_valid_blocks(sbi, segno);
++	ckpt_valid_blocks = get_ckpt_valid_blocks(sbi, segno, false);
+ 
+ 	if (valid_blocks == 0 && (!is_sbi_flag_set(sbi, SBI_CP_DISABLED) ||
+ 		ckpt_valid_blocks == usable_blocks)) {
+@@ -956,7 +960,7 @@ static unsigned int get_free_segment(struct f2fs_sb_info *sbi)
+ 	for_each_set_bit(segno, dirty_i->dirty_segmap[DIRTY], MAIN_SEGS(sbi)) {
+ 		if (get_valid_blocks(sbi, segno, false))
+ 			continue;
+-		if (get_ckpt_valid_blocks(sbi, segno))
++		if (get_ckpt_valid_blocks(sbi, segno, false))
+ 			continue;
+ 		mutex_unlock(&dirty_i->seglist_lock);
+ 		return segno;
+@@ -2646,6 +2650,23 @@ static void __refresh_next_blkoff(struct f2fs_sb_info *sbi,
+ 		seg->next_blkoff++;
+ }
+ 
++bool f2fs_segment_has_free_slot(struct f2fs_sb_info *sbi, int segno)
++{
++	struct seg_entry *se = get_seg_entry(sbi, segno);
++	int entries = SIT_VBLOCK_MAP_SIZE / sizeof(unsigned long);
++	unsigned long *target_map = SIT_I(sbi)->tmp_map;
++	unsigned long *ckpt_map = (unsigned long *)se->ckpt_valid_map;
++	unsigned long *cur_map = (unsigned long *)se->cur_valid_map;
++	int i, pos;
++
++	for (i = 0; i < entries; i++)
++		target_map[i] = ckpt_map[i] | cur_map[i];
++
++	pos = __find_rev_next_zero_bit(target_map, sbi->blocks_per_seg, 0);
++
++	return pos < sbi->blocks_per_seg;
++}
++
+ /*
+  * This function always allocates a used segment(from dirty seglist) by SSR
+  * manner, so it should recover the existing segment information of valid blocks
+@@ -2903,7 +2924,8 @@ unlock:
+ 	up_read(&SM_I(sbi)->curseg_lock);
+ }
+ 
+-static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type)
++static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type,
++								bool new_sec)
+ {
+ 	struct curseg_info *curseg = CURSEG_I(sbi, type);
+ 	unsigned int old_segno;
+@@ -2911,32 +2933,42 @@ static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type)
+ 	if (!curseg->inited)
+ 		goto alloc;
+ 
+-	if (!curseg->next_blkoff &&
+-		!get_valid_blocks(sbi, curseg->segno, false) &&
+-		!get_ckpt_valid_blocks(sbi, curseg->segno))
+-		return;
++	if (curseg->next_blkoff ||
++		get_valid_blocks(sbi, curseg->segno, new_sec))
++		goto alloc;
+ 
++	if (!get_ckpt_valid_blocks(sbi, curseg->segno, new_sec))
++		return;
+ alloc:
+ 	old_segno = curseg->segno;
+ 	SIT_I(sbi)->s_ops->allocate_segment(sbi, type, true);
+ 	locate_dirty_segment(sbi, old_segno);
+ }
+ 
+-void f2fs_allocate_new_segment(struct f2fs_sb_info *sbi, int type)
++static void __allocate_new_section(struct f2fs_sb_info *sbi, int type)
+ {
++	__allocate_new_segment(sbi, type, true);
++}
++
++void f2fs_allocate_new_section(struct f2fs_sb_info *sbi, int type)
++{
++	down_read(&SM_I(sbi)->curseg_lock);
+ 	down_write(&SIT_I(sbi)->sentry_lock);
+-	__allocate_new_segment(sbi, type);
++	__allocate_new_section(sbi, type);
+ 	up_write(&SIT_I(sbi)->sentry_lock);
++	up_read(&SM_I(sbi)->curseg_lock);
+ }
+ 
+ void f2fs_allocate_new_segments(struct f2fs_sb_info *sbi)
+ {
+ 	int i;
+ 
++	down_read(&SM_I(sbi)->curseg_lock);
+ 	down_write(&SIT_I(sbi)->sentry_lock);
+ 	for (i = CURSEG_HOT_DATA; i <= CURSEG_COLD_DATA; i++)
+-		__allocate_new_segment(sbi, i);
++		__allocate_new_segment(sbi, i, false);
+ 	up_write(&SIT_I(sbi)->sentry_lock);
++	up_read(&SM_I(sbi)->curseg_lock);
+ }
+ 
+ static const struct segment_allocation default_salloc_ops = {
+@@ -3375,12 +3407,12 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
+ 		f2fs_inode_chksum_set(sbi, page);
+ 	}
+ 
+-	if (F2FS_IO_ALIGNED(sbi))
+-		fio->retry = false;
+-
+ 	if (fio) {
+ 		struct f2fs_bio_info *io;
+ 
++		if (F2FS_IO_ALIGNED(sbi))
++			fio->retry = false;
++
+ 		INIT_LIST_HEAD(&fio->list);
+ 		fio->in_list = true;
+ 		io = sbi->write_io[fio->type] + fio->temp;
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 229814b4f4a6c..1bf33fc27b8f8 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -361,8 +361,20 @@ static inline unsigned int get_valid_blocks(struct f2fs_sb_info *sbi,
+ }
+ 
+ static inline unsigned int get_ckpt_valid_blocks(struct f2fs_sb_info *sbi,
+-				unsigned int segno)
++				unsigned int segno, bool use_section)
+ {
++	if (use_section && __is_large_section(sbi)) {
++		unsigned int start_segno = START_SEGNO(segno);
++		unsigned int blocks = 0;
++		int i;
++
++		for (i = 0; i < sbi->segs_per_sec; i++, start_segno++) {
++			struct seg_entry *se = get_seg_entry(sbi, start_segno);
++
++			blocks += se->ckpt_valid_blocks;
++		}
++		return blocks;
++	}
+ 	return get_seg_entry(sbi, segno)->ckpt_valid_blocks;
+ }
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 4fffbef216af8..abc469dd9aea8 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1723,7 +1723,7 @@ static int f2fs_disable_checkpoint(struct f2fs_sb_info *sbi)
+ 
+ 	while (!f2fs_time_over(sbi, DISABLE_TIME)) {
+ 		down_write(&sbi->gc_lock);
+-		err = f2fs_gc(sbi, true, false, NULL_SEGNO);
++		err = f2fs_gc(sbi, true, false, false, NULL_SEGNO);
+ 		if (err == -ENODATA) {
+ 			err = 0;
+ 			break;
+diff --git a/fs/fuse/cuse.c b/fs/fuse/cuse.c
+index 45082269e6982..a37528b51798b 100644
+--- a/fs/fuse/cuse.c
++++ b/fs/fuse/cuse.c
+@@ -627,6 +627,8 @@ static int __init cuse_init(void)
+ 	cuse_channel_fops.owner		= THIS_MODULE;
+ 	cuse_channel_fops.open		= cuse_channel_open;
+ 	cuse_channel_fops.release	= cuse_channel_release;
++	/* CUSE is not prepared for FUSE_DEV_IOC_CLONE */
++	cuse_channel_fops.unlocked_ioctl	= NULL;
+ 
+ 	cuse_class = class_create(THIS_MODULE, "cuse");
+ 	if (IS_ERR(cuse_class))
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 7160d30068f32..8de9c24ac4ac6 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1761,8 +1761,17 @@ static void fuse_writepage_end(struct fuse_mount *fm, struct fuse_args *args,
+ 		container_of(args, typeof(*wpa), ia.ap.args);
+ 	struct inode *inode = wpa->inode;
+ 	struct fuse_inode *fi = get_fuse_inode(inode);
++	struct fuse_conn *fc = get_fuse_conn(inode);
+ 
+ 	mapping_set_error(inode->i_mapping, error);
++	/*
++	 * A writeback finished and this might have updated mtime/ctime on
++	 * server making local mtime/ctime stale.  Hence invalidate attrs.
++	 * Do this only if writeback_cache is not enabled.  If writeback_cache
++	 * is enabled, we trust local ctime/mtime.
++	 */
++	if (!fc->writeback_cache)
++		fuse_invalidate_attr(inode);
+ 	spin_lock(&fi->lock);
+ 	rb_erase(&wpa->writepages_entry, &fi->writepages);
+ 	while (wpa->next) {
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index f0a7f1b7b75fc..b9cfb1165ff42 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -1457,8 +1457,7 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ 		return -ENOMEM;
+ 	}
+ 
+-	fuse_conn_init(fc, fm, get_user_ns(current_user_ns()),
+-		       &virtio_fs_fiq_ops, fs);
++	fuse_conn_init(fc, fm, fsc->user_ns, &virtio_fs_fiq_ops, fs);
+ 	fc->release = fuse_free_conn;
+ 	fc->delete_stale = true;
+ 	fc->auto_submounts = true;
+diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
+index a930ddd156819..7054a542689f9 100644
+--- a/fs/hfsplus/extents.c
++++ b/fs/hfsplus/extents.c
+@@ -598,13 +598,15 @@ void hfsplus_file_truncate(struct inode *inode)
+ 		res = __hfsplus_ext_cache_extent(&fd, inode, alloc_cnt);
+ 		if (res)
+ 			break;
+-		hfs_brec_remove(&fd);
+ 
+-		mutex_unlock(&fd.tree->tree_lock);
+ 		start = hip->cached_start;
++		if (blk_cnt <= start)
++			hfs_brec_remove(&fd);
++		mutex_unlock(&fd.tree->tree_lock);
+ 		hfsplus_free_extents(sb, hip->cached_extents,
+ 				     alloc_cnt - start, alloc_cnt - blk_cnt);
+ 		hfsplus_dump_extent(hip->cached_extents);
++		mutex_lock(&fd.tree->tree_lock);
+ 		if (blk_cnt > start) {
+ 			hip->extent_state |= HFSPLUS_EXT_DIRTY;
+ 			break;
+@@ -612,7 +614,6 @@ void hfsplus_file_truncate(struct inode *inode)
+ 		alloc_cnt = start;
+ 		hip->cached_start = hip->cached_blocks = 0;
+ 		hip->extent_state &= ~(HFSPLUS_EXT_DIRTY | HFSPLUS_EXT_NEW);
+-		mutex_lock(&fd.tree->tree_lock);
+ 	}
+ 	hfs_find_exit(&fd);
+ 
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index 21c20fd5f9ee7..b7c24d152604d 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -131,6 +131,7 @@ static void huge_pagevec_release(struct pagevec *pvec)
+ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ {
+ 	struct inode *inode = file_inode(file);
++	struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);
+ 	loff_t len, vma_len;
+ 	int ret;
+ 	struct hstate *h = hstate_file(file);
+@@ -146,6 +147,10 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 	vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND;
+ 	vma->vm_ops = &hugetlb_vm_ops;
+ 
++	ret = seal_check_future_write(info->seals, vma);
++	if (ret)
++		return ret;
++
+ 	/*
+ 	 * page based offset in vm_pgoff could be sufficiently large to
+ 	 * overflow a loff_t when converted to byte offset.  This can
+diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
+index dc0694fcfcd12..1e07dfac4d811 100644
+--- a/fs/jbd2/recovery.c
++++ b/fs/jbd2/recovery.c
+@@ -245,15 +245,14 @@ static int fc_do_one_pass(journal_t *journal,
+ 		return 0;
+ 
+ 	while (next_fc_block <= journal->j_fc_last) {
+-		jbd_debug(3, "Fast commit replay: next block %ld",
++		jbd_debug(3, "Fast commit replay: next block %ld\n",
+ 			  next_fc_block);
+ 		err = jread(&bh, journal, next_fc_block);
+ 		if (err) {
+-			jbd_debug(3, "Fast commit replay: read error");
++			jbd_debug(3, "Fast commit replay: read error\n");
+ 			break;
+ 		}
+ 
+-		jbd_debug(3, "Processing fast commit blk with seq %d");
+ 		err = journal->j_fc_replay_callback(journal, bh, pass,
+ 					next_fc_block - journal->j_fc_first,
+ 					expected_commit_id);
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index e61dbc9b86ae2..be546ece383f5 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -132,12 +132,12 @@ static struct inode *nfs_layout_find_inode_by_stateid(struct nfs_client *clp,
+ 		list_for_each_entry_rcu(lo, &server->layouts, plh_layouts) {
+ 			if (!pnfs_layout_is_valid(lo))
+ 				continue;
+-			if (stateid != NULL &&
+-			    !nfs4_stateid_match_other(stateid, &lo->plh_stateid))
++			if (!nfs4_stateid_match_other(stateid, &lo->plh_stateid))
+ 				continue;
+-			if (!nfs_sb_active(server->super))
+-				continue;
+-			inode = igrab(lo->plh_inode);
++			if (nfs_sb_active(server->super))
++				inode = igrab(lo->plh_inode);
++			else
++				inode = ERR_PTR(-EAGAIN);
+ 			rcu_read_unlock();
+ 			if (inode)
+ 				return inode;
+@@ -171,9 +171,10 @@ static struct inode *nfs_layout_find_inode_by_fh(struct nfs_client *clp,
+ 				continue;
+ 			if (nfsi->layout != lo)
+ 				continue;
+-			if (!nfs_sb_active(server->super))
+-				continue;
+-			inode = igrab(lo->plh_inode);
++			if (nfs_sb_active(server->super))
++				inode = igrab(lo->plh_inode);
++			else
++				inode = ERR_PTR(-EAGAIN);
+ 			rcu_read_unlock();
+ 			if (inode)
+ 				return inode;
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index fd0eda328943b..a8a02081942d2 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -106,7 +106,7 @@ static int decode_nfs_fh(struct xdr_stream *xdr, struct nfs_fh *fh)
+ 	if (unlikely(!p))
+ 		return -ENOBUFS;
+ 	fh->size = be32_to_cpup(p++);
+-	if (fh->size > sizeof(struct nfs_fh)) {
++	if (fh->size > NFS_MAXFHSIZE) {
+ 		printk(KERN_ERR "NFS flexfiles: Too big fh received %d\n",
+ 		       fh->size);
+ 		return -EOVERFLOW;
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 43af053f467a7..6e2e948f1475e 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1642,10 +1642,10 @@ EXPORT_SYMBOL_GPL(_nfs_display_fhandle);
+  */
+ static int nfs_inode_attrs_need_update(const struct inode *inode, const struct nfs_fattr *fattr)
+ {
+-	const struct nfs_inode *nfsi = NFS_I(inode);
++	unsigned long attr_gencount = NFS_I(inode)->attr_gencount;
+ 
+-	return ((long)fattr->gencount - (long)nfsi->attr_gencount) > 0 ||
+-		((long)nfsi->attr_gencount - (long)nfs_read_attr_generation_counter() > 0);
++	return (long)(fattr->gencount - attr_gencount) > 0 ||
++	       (long)(attr_gencount - nfs_read_attr_generation_counter()) > 0;
+ }
+ 
+ static int nfs_refresh_inode_locked(struct inode *inode, struct nfs_fattr *fattr)
+@@ -2074,7 +2074,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ 			nfsi->attrtimeo_timestamp = now;
+ 		}
+ 		/* Set the barrier to be more recent than this fattr */
+-		if ((long)fattr->gencount - (long)nfsi->attr_gencount > 0)
++		if ((long)(fattr->gencount - nfsi->attr_gencount) > 0)
+ 			nfsi->attr_gencount = fattr->gencount;
+ 	}
+ 
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 4fc61e3d098da..4ebcd9dd15352 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -46,11 +46,12 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ {
+ 	struct inode *inode = file_inode(filep);
+ 	struct nfs_server *server = NFS_SERVER(inode);
++	u32 bitmask[3];
+ 	struct nfs42_falloc_args args = {
+ 		.falloc_fh	= NFS_FH(inode),
+ 		.falloc_offset	= offset,
+ 		.falloc_length	= len,
+-		.falloc_bitmask	= nfs4_fattr_bitmap,
++		.falloc_bitmask	= bitmask,
+ 	};
+ 	struct nfs42_falloc_res res = {
+ 		.falloc_server	= server,
+@@ -68,6 +69,10 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ 		return status;
+ 	}
+ 
++	memcpy(bitmask, server->cache_consistency_bitmask, sizeof(bitmask));
++	if (server->attr_bitmask[1] & FATTR4_WORD1_SPACE_USED)
++		bitmask[1] |= FATTR4_WORD1_SPACE_USED;
++
+ 	res.falloc_fattr = nfs_alloc_fattr();
+ 	if (!res.falloc_fattr)
+ 		return -ENOMEM;
+@@ -75,7 +80,8 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ 	status = nfs4_call_sync(server->client, server, msg,
+ 				&args.seq_args, &res.seq_res, 0);
+ 	if (status == 0)
+-		status = nfs_post_op_update_inode(inode, res.falloc_fattr);
++		status = nfs_post_op_update_inode_force_wcc(inode,
++							    res.falloc_fattr);
+ 
+ 	kfree(res.falloc_fattr);
+ 	return status;
+@@ -84,7 +90,8 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ 				loff_t offset, loff_t len)
+ {
+-	struct nfs_server *server = NFS_SERVER(file_inode(filep));
++	struct inode *inode = file_inode(filep);
++	struct nfs_server *server = NFS_SERVER(inode);
+ 	struct nfs4_exception exception = { };
+ 	struct nfs_lock_context *lock;
+ 	int err;
+@@ -93,9 +100,13 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ 	if (IS_ERR(lock))
+ 		return PTR_ERR(lock);
+ 
+-	exception.inode = file_inode(filep);
++	exception.inode = inode;
+ 	exception.state = lock->open_context->state;
+ 
++	err = nfs_sync_inode(inode);
++	if (err)
++		goto out;
++
+ 	do {
+ 		err = _nfs42_proc_fallocate(msg, filep, lock, offset, len);
+ 		if (err == -ENOTSUPP) {
+@@ -104,7 +115,7 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ 		}
+ 		err = nfs4_handle_exception(server, err, &exception);
+ 	} while (exception.retry);
+-
++out:
+ 	nfs_put_lock_context(lock);
+ 	return err;
+ }
+@@ -142,16 +153,13 @@ int nfs42_proc_deallocate(struct file *filep, loff_t offset, loff_t len)
+ 		return -EOPNOTSUPP;
+ 
+ 	inode_lock(inode);
+-	err = nfs_sync_inode(inode);
+-	if (err)
+-		goto out_unlock;
+ 
+ 	err = nfs42_proc_fallocate(&msg, filep, offset, len);
+ 	if (err == 0)
+ 		truncate_pagecache_range(inode, offset, (offset + len) -1);
+ 	if (err == -EOPNOTSUPP)
+ 		NFS_SERVER(inode)->caps &= ~NFS_CAP_DEALLOCATE;
+-out_unlock:
++
+ 	inode_unlock(inode);
+ 	return err;
+ }
+@@ -657,7 +665,10 @@ static loff_t _nfs42_proc_llseek(struct file *filep,
+ 	if (status)
+ 		return status;
+ 
+-	return vfs_setpos(filep, res.sr_offset, inode->i_sb->s_maxbytes);
++	if (whence == SEEK_DATA && res.sr_eof)
++		return -NFS4ERR_NXIO;
++	else
++		return vfs_setpos(filep, res.sr_offset, inode->i_sb->s_maxbytes);
+ }
+ 
+ loff_t nfs42_proc_llseek(struct file *filep, loff_t offset, int whence)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 15ac6b6893e76..06b70de0cc0d0 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -112,9 +112,10 @@ static int nfs41_test_stateid(struct nfs_server *, nfs4_stateid *,
+ static int nfs41_free_stateid(struct nfs_server *, const nfs4_stateid *,
+ 		const struct cred *, bool);
+ #endif
+-static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode,
+-		struct nfs_server *server,
+-		struct nfs4_label *label);
++static void nfs4_bitmask_set(__u32 bitmask[NFS4_BITMASK_SZ],
++			     const __u32 *src, struct inode *inode,
++			     struct nfs_server *server,
++			     struct nfs4_label *label);
+ 
+ #ifdef CONFIG_NFS_V4_SECURITY_LABEL
+ static inline struct nfs4_label *
+@@ -3596,6 +3597,7 @@ static void nfs4_close_prepare(struct rpc_task *task, void *data)
+ 	struct nfs4_closedata *calldata = data;
+ 	struct nfs4_state *state = calldata->state;
+ 	struct inode *inode = calldata->inode;
++	struct nfs_server *server = NFS_SERVER(inode);
+ 	struct pnfs_layout_hdr *lo;
+ 	bool is_rdonly, is_wronly, is_rdwr;
+ 	int call_close = 0;
+@@ -3652,8 +3654,10 @@ static void nfs4_close_prepare(struct rpc_task *task, void *data)
+ 	if (calldata->arg.fmode == 0 || calldata->arg.fmode == FMODE_READ) {
+ 		/* Close-to-open cache consistency revalidation */
+ 		if (!nfs4_have_delegation(inode, FMODE_READ)) {
+-			calldata->arg.bitmask = NFS_SERVER(inode)->cache_consistency_bitmask;
+-			nfs4_bitmask_adjust(calldata->arg.bitmask, inode, NFS_SERVER(inode), NULL);
++			nfs4_bitmask_set(calldata->arg.bitmask_store,
++					 server->cache_consistency_bitmask,
++					 inode, server, NULL);
++			calldata->arg.bitmask = calldata->arg.bitmask_store;
+ 		} else
+ 			calldata->arg.bitmask = NULL;
+ 	}
+@@ -5418,19 +5422,17 @@ bool nfs4_write_need_cache_consistency_data(struct nfs_pgio_header *hdr)
+ 	return nfs4_have_delegation(hdr->inode, FMODE_READ) == 0;
+ }
+ 
+-static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode,
+-				struct nfs_server *server,
+-				struct nfs4_label *label)
++static void nfs4_bitmask_set(__u32 bitmask[NFS4_BITMASK_SZ], const __u32 *src,
++			     struct inode *inode, struct nfs_server *server,
++			     struct nfs4_label *label)
+ {
+-
+ 	unsigned long cache_validity = READ_ONCE(NFS_I(inode)->cache_validity);
++	unsigned int i;
+ 
+-	if ((cache_validity & NFS_INO_INVALID_DATA) ||
+-		(cache_validity & NFS_INO_REVAL_PAGECACHE) ||
+-		(cache_validity & NFS_INO_REVAL_FORCED) ||
+-		(cache_validity & NFS_INO_INVALID_OTHER))
+-		nfs4_bitmap_copy_adjust(bitmask, nfs4_bitmask(server, label), inode);
++	memcpy(bitmask, src, sizeof(*bitmask) * NFS4_BITMASK_SZ);
+ 
++	if (cache_validity & (NFS_INO_INVALID_CHANGE | NFS_INO_REVAL_PAGECACHE))
++		bitmask[0] |= FATTR4_WORD0_CHANGE;
+ 	if (cache_validity & NFS_INO_INVALID_ATIME)
+ 		bitmask[1] |= FATTR4_WORD1_TIME_ACCESS;
+ 	if (cache_validity & NFS_INO_INVALID_OTHER)
+@@ -5439,16 +5441,22 @@ static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode,
+ 				FATTR4_WORD1_NUMLINKS;
+ 	if (label && label->len && cache_validity & NFS_INO_INVALID_LABEL)
+ 		bitmask[2] |= FATTR4_WORD2_SECURITY_LABEL;
+-	if (cache_validity & NFS_INO_INVALID_CHANGE)
+-		bitmask[0] |= FATTR4_WORD0_CHANGE;
+ 	if (cache_validity & NFS_INO_INVALID_CTIME)
+ 		bitmask[1] |= FATTR4_WORD1_TIME_METADATA;
+ 	if (cache_validity & NFS_INO_INVALID_MTIME)
+ 		bitmask[1] |= FATTR4_WORD1_TIME_MODIFY;
+-	if (cache_validity & NFS_INO_INVALID_SIZE)
+-		bitmask[0] |= FATTR4_WORD0_SIZE;
+ 	if (cache_validity & NFS_INO_INVALID_BLOCKS)
+ 		bitmask[1] |= FATTR4_WORD1_SPACE_USED;
++
++	if (nfs4_have_delegation(inode, FMODE_READ) &&
++	    !(cache_validity & NFS_INO_REVAL_FORCED))
++		bitmask[0] &= ~FATTR4_WORD0_SIZE;
++	else if (cache_validity &
++		 (NFS_INO_INVALID_SIZE | NFS_INO_REVAL_PAGECACHE))
++		bitmask[0] |= FATTR4_WORD0_SIZE;
++
++	for (i = 0; i < NFS4_BITMASK_SZ; i++)
++		bitmask[i] &= server->attr_bitmask[i];
+ }
+ 
+ static void nfs4_proc_write_setup(struct nfs_pgio_header *hdr,
+@@ -5461,8 +5469,10 @@ static void nfs4_proc_write_setup(struct nfs_pgio_header *hdr,
+ 		hdr->args.bitmask = NULL;
+ 		hdr->res.fattr = NULL;
+ 	} else {
+-		hdr->args.bitmask = server->cache_consistency_bitmask;
+-		nfs4_bitmask_adjust(hdr->args.bitmask, hdr->inode, server, NULL);
++		nfs4_bitmask_set(hdr->args.bitmask_store,
++				 server->cache_consistency_bitmask,
++				 hdr->inode, server, NULL);
++		hdr->args.bitmask = hdr->args.bitmask_store;
+ 	}
+ 
+ 	if (!hdr->pgio_done_cb)
+@@ -6504,8 +6514,10 @@ static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred,
+ 
+ 	data->args.fhandle = &data->fh;
+ 	data->args.stateid = &data->stateid;
+-	data->args.bitmask = server->cache_consistency_bitmask;
+-	nfs4_bitmask_adjust(data->args.bitmask, inode, server, NULL);
++	nfs4_bitmask_set(data->args.bitmask_store,
++			 server->cache_consistency_bitmask, inode, server,
++			 NULL);
++	data->args.bitmask = data->args.bitmask_store;
+ 	nfs_copy_fh(&data->fh, NFS_FH(inode));
+ 	nfs4_stateid_copy(&data->stateid, stateid);
+ 	data->res.fattr = &data->fattr;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 55cf60b71cde0..ac20f79bbedd6 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4874,6 +4874,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
+ 	if (nf)
+ 		nfsd_file_put(nf);
+ 
++	status = nfserrno(nfsd_open_break_lease(cur_fh->fh_dentry->d_inode,
++								access));
++	if (status)
++		goto out_put_access;
++
+ 	status = nfsd4_truncate(rqstp, cur_fh, open);
+ 	if (status)
+ 		goto out_put_access;
+@@ -6856,11 +6861,20 @@ out:
+ static __be32 nfsd_test_lock(struct svc_rqst *rqstp, struct svc_fh *fhp, struct file_lock *lock)
+ {
+ 	struct nfsd_file *nf;
+-	__be32 err = nfsd_file_acquire(rqstp, fhp, NFSD_MAY_READ, &nf);
+-	if (!err) {
+-		err = nfserrno(vfs_test_lock(nf->nf_file, lock));
+-		nfsd_file_put(nf);
+-	}
++	__be32 err;
++
++	err = nfsd_file_acquire(rqstp, fhp, NFSD_MAY_READ, &nf);
++	if (err)
++		return err;
++	fh_lock(fhp); /* to block new leases till after test_lock: */
++	err = nfserrno(nfsd_open_break_lease(fhp->fh_dentry->d_inode,
++							NFSD_MAY_READ));
++	if (err)
++		goto out;
++	err = nfserrno(vfs_test_lock(nf->nf_file, lock));
++out:
++	fh_unlock(fhp);
++	nfsd_file_put(nf);
+ 	return err;
+ }
+ 
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index 6c0a05f55d6b1..09e4d8a499a38 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -754,7 +754,7 @@ int remove_proc_subtree(const char *name, struct proc_dir_entry *parent)
+ 	while (1) {
+ 		next = pde_subdir_first(de);
+ 		if (next) {
+-			if (unlikely(pde_is_permanent(root))) {
++			if (unlikely(pde_is_permanent(next))) {
+ 				write_unlock(&proc_subdir_lock);
+ 				WARN(1, "removing permanent /proc entry '%s/%s'",
+ 					next->parent->name, next->name);
+diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c
+index 7b1128398976e..89d492916deaf 100644
+--- a/fs/squashfs/file.c
++++ b/fs/squashfs/file.c
+@@ -211,11 +211,11 @@ failure:
+  * If the skip factor is limited in this way then the file will use multiple
+  * slots.
+  */
+-static inline int calculate_skip(int blocks)
++static inline int calculate_skip(u64 blocks)
+ {
+-	int skip = blocks / ((SQUASHFS_META_ENTRIES + 1)
++	u64 skip = blocks / ((SQUASHFS_META_ENTRIES + 1)
+ 		 * SQUASHFS_META_INDEXES);
+-	return min(SQUASHFS_CACHED_BLKS - 1, skip + 1);
++	return min((u64) SQUASHFS_CACHED_BLKS - 1, skip + 1);
+ }
+ 
+ 
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index bc56287a1ed13..8fb893ed205e3 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -135,6 +135,7 @@ enum cpuhp_state {
+ 	CPUHP_AP_RISCV_TIMER_STARTING,
+ 	CPUHP_AP_CLINT_TIMER_STARTING,
+ 	CPUHP_AP_CSKY_TIMER_STARTING,
++	CPUHP_AP_TI_GP_TIMER_STARTING,
+ 	CPUHP_AP_HYPERV_TIMER_STARTING,
+ 	CPUHP_AP_KVM_STARTING,
+ 	CPUHP_AP_KVM_ARM_VGIC_INIT_STARTING,
+diff --git a/include/linux/elevator.h b/include/linux/elevator.h
+index bacc40a0bdf39..bc26b4e11f62f 100644
+--- a/include/linux/elevator.h
++++ b/include/linux/elevator.h
+@@ -34,7 +34,7 @@ struct elevator_mq_ops {
+ 	void (*depth_updated)(struct blk_mq_hw_ctx *);
+ 
+ 	bool (*allow_merge)(struct request_queue *, struct request *, struct bio *);
+-	bool (*bio_merge)(struct blk_mq_hw_ctx *, struct bio *, unsigned int);
++	bool (*bio_merge)(struct request_queue *, struct bio *, unsigned int);
+ 	int (*request_merge)(struct request_queue *q, struct request **, struct bio *);
+ 	void (*request_merged)(struct request_queue *, struct request *, enum elv_merge);
+ 	void (*requests_merged)(struct request_queue *, struct request *, struct request *);
+diff --git a/include/linux/i2c.h b/include/linux/i2c.h
+index 56622658b2158..a670ae129f4b9 100644
+--- a/include/linux/i2c.h
++++ b/include/linux/i2c.h
+@@ -687,6 +687,8 @@ struct i2c_adapter_quirks {
+ #define I2C_AQ_NO_ZERO_LEN_READ		BIT(5)
+ #define I2C_AQ_NO_ZERO_LEN_WRITE	BIT(6)
+ #define I2C_AQ_NO_ZERO_LEN		(I2C_AQ_NO_ZERO_LEN_READ | I2C_AQ_NO_ZERO_LEN_WRITE)
++/* adapter cannot do repeated START */
++#define I2C_AQ_NO_REP_START		BIT(7)
+ 
+ /*
+  * i2c_adapter is the structure used to identify a physical i2c bus along
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 08a48d3eaeaae..5106db3ad1ce3 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -3178,5 +3178,37 @@ unsigned long wp_shared_mapping_range(struct address_space *mapping,
+ 
+ extern int sysctl_nr_trim_pages;
+ 
++/**
++ * seal_check_future_write - Check for F_SEAL_FUTURE_WRITE flag and handle it
++ * @seals: the seals to check
++ * @vma: the vma to operate on
++ *
++ * Check whether F_SEAL_FUTURE_WRITE is set; if so, do proper check/handling on
++ * the vma flags.  Return 0 if check pass, or <0 for errors.
++ */
++static inline int seal_check_future_write(int seals, struct vm_area_struct *vma)
++{
++	if (seals & F_SEAL_FUTURE_WRITE) {
++		/*
++		 * New PROT_WRITE and MAP_SHARED mmaps are not allowed when
++		 * "future write" seal active.
++		 */
++		if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
++			return -EPERM;
++
++		/*
++		 * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as
++		 * MAP_SHARED and read-only, take care to not allow mprotect to
++		 * revert protections on such mappings. Do this only for shared
++		 * mappings. For private mappings, don't need to mask
++		 * VM_MAYWRITE as we still want them to be COW-writable.
++		 */
++		if (vma->vm_flags & VM_SHARED)
++			vma->vm_flags &= ~(VM_MAYWRITE);
++	}
++
++	return 0;
++}
++
+ #endif /* __KERNEL__ */
+ #endif /* _LINUX_MM_H */
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 3433ecc9c1f74..a4fff7d7abe58 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -97,10 +97,10 @@ struct page {
+ 		};
+ 		struct {	/* page_pool used by netstack */
+ 			/**
+-			 * @dma_addr: might require a 64-bit value even on
++			 * @dma_addr: might require a 64-bit value on
+ 			 * 32-bit architectures.
+ 			 */
+-			dma_addr_t dma_addr;
++			unsigned long dma_addr[2];
+ 		};
+ 		struct {	/* slab, slob and slub */
+ 			union {
+diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
+index d63cb862d58e5..5491ad5f48a94 100644
+--- a/include/linux/nfs_xdr.h
++++ b/include/linux/nfs_xdr.h
+@@ -15,6 +15,8 @@
+ #define NFS_DEF_FILE_IO_SIZE	(4096U)
+ #define NFS_MIN_FILE_IO_SIZE	(1024U)
+ 
++#define NFS_BITMASK_SZ		3
++
+ struct nfs4_string {
+ 	unsigned int len;
+ 	char *data;
+@@ -525,7 +527,8 @@ struct nfs_closeargs {
+ 	struct nfs_seqid *	seqid;
+ 	fmode_t			fmode;
+ 	u32			share_access;
+-	u32 *			bitmask;
++	const u32 *		bitmask;
++	u32			bitmask_store[NFS_BITMASK_SZ];
+ 	struct nfs4_layoutreturn_args *lr_args;
+ };
+ 
+@@ -608,7 +611,8 @@ struct nfs4_delegreturnargs {
+ 	struct nfs4_sequence_args	seq_args;
+ 	const struct nfs_fh *fhandle;
+ 	const nfs4_stateid *stateid;
+-	u32 * bitmask;
++	const u32 *bitmask;
++	u32 bitmask_store[NFS_BITMASK_SZ];
+ 	struct nfs4_layoutreturn_args *lr_args;
+ };
+ 
+@@ -648,7 +652,8 @@ struct nfs_pgio_args {
+ 	union {
+ 		unsigned int		replen;			/* used by read */
+ 		struct {
+-			u32 *			bitmask;	/* used by write */
++			const u32 *		bitmask;	/* used by write */
++			u32 bitmask_store[NFS_BITMASK_SZ];	/* used by write */
+ 			enum nfs3_stable_how	stable;		/* used by write */
+ 		};
+ 	};
+diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h
+index cc66bec8be905..88d311bad9846 100644
+--- a/include/linux/pci-epc.h
++++ b/include/linux/pci-epc.h
+@@ -201,8 +201,10 @@ int pci_epc_start(struct pci_epc *epc);
+ void pci_epc_stop(struct pci_epc *epc);
+ const struct pci_epc_features *pci_epc_get_features(struct pci_epc *epc,
+ 						    u8 func_no);
+-unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features
+-					*epc_features);
++enum pci_barno
++pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features);
++enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
++					 *epc_features, enum pci_barno bar);
+ struct pci_epc *pci_epc_get(const char *epc_name);
+ void pci_epc_put(struct pci_epc *epc);
+ 
+diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h
+index 6644ff3b07024..fa3aca43eb192 100644
+--- a/include/linux/pci-epf.h
++++ b/include/linux/pci-epf.h
+@@ -21,6 +21,7 @@ enum pci_notify_event {
+ };
+ 
+ enum pci_barno {
++	NO_BAR = -1,
+ 	BAR_0,
+ 	BAR_1,
+ 	BAR_2,
+diff --git a/include/linux/pm.h b/include/linux/pm.h
+index 47aca6bac1d6a..52d9724db9dc6 100644
+--- a/include/linux/pm.h
++++ b/include/linux/pm.h
+@@ -600,6 +600,7 @@ struct dev_pm_info {
+ 	unsigned int		idle_notification:1;
+ 	unsigned int		request_pending:1;
+ 	unsigned int		deferred_resume:1;
++	unsigned int		needs_force_resume:1;
+ 	unsigned int		runtime_auto:1;
+ 	bool			ignore_children:1;
+ 	unsigned int		no_callbacks:1;
+diff --git a/include/net/page_pool.h b/include/net/page_pool.h
+index 81d7773f96cdf..b139e7bf45fe4 100644
+--- a/include/net/page_pool.h
++++ b/include/net/page_pool.h
+@@ -191,7 +191,17 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
+ 
+ static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+ {
+-	return page->dma_addr;
++	dma_addr_t ret = page->dma_addr[0];
++	if (sizeof(dma_addr_t) > sizeof(unsigned long))
++		ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
++	return ret;
++}
++
++static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
++{
++	page->dma_addr[0] = addr;
++	if (sizeof(dma_addr_t) > sizeof(unsigned long))
++		page->dma_addr[1] = upper_32_bits(addr);
+ }
+ 
+ static inline bool is_page_pool_compiled_in(void)
+diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
+index f8f1e85ff1300..56b113e3cd6aa 100644
+--- a/include/trace/events/f2fs.h
++++ b/include/trace/events/f2fs.h
+@@ -6,6 +6,7 @@
+ #define _TRACE_F2FS_H
+ 
+ #include <linux/tracepoint.h>
++#include <uapi/linux/f2fs.h>
+ 
+ #define show_dev(dev)		MAJOR(dev), MINOR(dev)
+ #define show_dev_ino(entry)	show_dev(entry->dev), (unsigned long)entry->ino
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 2a03263b5f9d4..23db248a7fdbe 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -1141,7 +1141,6 @@ DECLARE_EVENT_CLASS(xprt_writelock_event,
+ 
+ DEFINE_WRITELOCK_EVENT(reserve_xprt);
+ DEFINE_WRITELOCK_EVENT(release_xprt);
+-DEFINE_WRITELOCK_EVENT(transmit_queued);
+ 
+ DECLARE_EVENT_CLASS(xprt_cong_event,
+ 	TP_PROTO(
+diff --git a/include/uapi/linux/f2fs.h b/include/uapi/linux/f2fs.h
+new file mode 100644
+index 0000000000000..28bcfe8d2c27e
+--- /dev/null
++++ b/include/uapi/linux/f2fs.h
+@@ -0,0 +1,87 @@
++/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
++
++#ifndef _UAPI_LINUX_F2FS_H
++#define _UAPI_LINUX_F2FS_H
++#include <linux/types.h>
++#include <linux/ioctl.h>
++
++/*
++ * f2fs-specific ioctl commands
++ */
++#define F2FS_IOCTL_MAGIC		0xf5
++#define F2FS_IOC_START_ATOMIC_WRITE	_IO(F2FS_IOCTL_MAGIC, 1)
++#define F2FS_IOC_COMMIT_ATOMIC_WRITE	_IO(F2FS_IOCTL_MAGIC, 2)
++#define F2FS_IOC_START_VOLATILE_WRITE	_IO(F2FS_IOCTL_MAGIC, 3)
++#define F2FS_IOC_RELEASE_VOLATILE_WRITE	_IO(F2FS_IOCTL_MAGIC, 4)
++#define F2FS_IOC_ABORT_VOLATILE_WRITE	_IO(F2FS_IOCTL_MAGIC, 5)
++#define F2FS_IOC_GARBAGE_COLLECT	_IOW(F2FS_IOCTL_MAGIC, 6, __u32)
++#define F2FS_IOC_WRITE_CHECKPOINT	_IO(F2FS_IOCTL_MAGIC, 7)
++#define F2FS_IOC_DEFRAGMENT		_IOWR(F2FS_IOCTL_MAGIC, 8,	\
++						struct f2fs_defragment)
++#define F2FS_IOC_MOVE_RANGE		_IOWR(F2FS_IOCTL_MAGIC, 9,	\
++						struct f2fs_move_range)
++#define F2FS_IOC_FLUSH_DEVICE		_IOW(F2FS_IOCTL_MAGIC, 10,	\
++						struct f2fs_flush_device)
++#define F2FS_IOC_GARBAGE_COLLECT_RANGE	_IOW(F2FS_IOCTL_MAGIC, 11,	\
++						struct f2fs_gc_range)
++#define F2FS_IOC_GET_FEATURES		_IOR(F2FS_IOCTL_MAGIC, 12, __u32)
++#define F2FS_IOC_SET_PIN_FILE		_IOW(F2FS_IOCTL_MAGIC, 13, __u32)
++#define F2FS_IOC_GET_PIN_FILE		_IOR(F2FS_IOCTL_MAGIC, 14, __u32)
++#define F2FS_IOC_PRECACHE_EXTENTS	_IO(F2FS_IOCTL_MAGIC, 15)
++#define F2FS_IOC_RESIZE_FS		_IOW(F2FS_IOCTL_MAGIC, 16, __u64)
++#define F2FS_IOC_GET_COMPRESS_BLOCKS	_IOR(F2FS_IOCTL_MAGIC, 17, __u64)
++#define F2FS_IOC_RELEASE_COMPRESS_BLOCKS				\
++					_IOR(F2FS_IOCTL_MAGIC, 18, __u64)
++#define F2FS_IOC_RESERVE_COMPRESS_BLOCKS				\
++					_IOR(F2FS_IOCTL_MAGIC, 19, __u64)
++#define F2FS_IOC_SEC_TRIM_FILE		_IOW(F2FS_IOCTL_MAGIC, 20,	\
++						struct f2fs_sectrim_range)
++
++/*
++ * should be same as XFS_IOC_GOINGDOWN.
++ * Flags for going down operation used by FS_IOC_GOINGDOWN
++ */
++#define F2FS_IOC_SHUTDOWN	_IOR('X', 125, __u32)	/* Shutdown */
++#define F2FS_GOING_DOWN_FULLSYNC	0x0	/* going down with full sync */
++#define F2FS_GOING_DOWN_METASYNC	0x1	/* going down with metadata */
++#define F2FS_GOING_DOWN_NOSYNC		0x2	/* going down */
++#define F2FS_GOING_DOWN_METAFLUSH	0x3	/* going down with meta flush */
++#define F2FS_GOING_DOWN_NEED_FSCK	0x4	/* going down to trigger fsck */
++
++/*
++ * Flags used by F2FS_IOC_SEC_TRIM_FILE
++ */
++#define F2FS_TRIM_FILE_DISCARD		0x1	/* send discard command */
++#define F2FS_TRIM_FILE_ZEROOUT		0x2	/* zero out */
++#define F2FS_TRIM_FILE_MASK		0x3
++
++struct f2fs_gc_range {
++	__u32 sync;
++	__u64 start;
++	__u64 len;
++};
++
++struct f2fs_defragment {
++	__u64 start;
++	__u64 len;
++};
++
++struct f2fs_move_range {
++	__u32 dst_fd;		/* destination fd */
++	__u64 pos_in;		/* start position in src_fd */
++	__u64 pos_out;		/* start position in dst_fd */
++	__u64 len;		/* size to move */
++};
++
++struct f2fs_flush_device {
++	__u32 dev_num;		/* device number to flush */
++	__u32 segments;		/* # of segments to flush */
++};
++
++struct f2fs_sectrim_range {
++	__u64 start;
++	__u64 len;
++	__u64 flags;
++};
++
++#endif /* _UAPI_LINUX_F2FS_H */
+diff --git a/include/uapi/linux/netfilter/xt_SECMARK.h b/include/uapi/linux/netfilter/xt_SECMARK.h
+index 1f2a708413f5d..beb2cadba8a9c 100644
+--- a/include/uapi/linux/netfilter/xt_SECMARK.h
++++ b/include/uapi/linux/netfilter/xt_SECMARK.h
+@@ -20,4 +20,10 @@ struct xt_secmark_target_info {
+ 	char secctx[SECMARK_SECCTX_MAX];
+ };
+ 
++struct xt_secmark_target_info_v1 {
++	__u8 mode;
++	char secctx[SECMARK_SECCTX_MAX];
++	__u32 secid;
++};
++
+ #endif /*_XT_SECMARK_H_target */
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index ba4055a192e4c..0f61b14b00995 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -563,7 +563,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
+ 		enum dma_data_direction dir, unsigned long attrs)
+ {
+ 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
+-	unsigned int index, i;
++	unsigned int i;
++	int index;
+ 	phys_addr_t tlb_addr;
+ 
+ 	if (no_iotlb_memory)
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index 7825adcc5efc3..aea9104265f29 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -740,8 +740,10 @@ static int kexec_calculate_store_digests(struct kimage *image)
+ 
+ 	sha_region_sz = KEXEC_SEGMENT_MAX * sizeof(struct kexec_sha_region);
+ 	sha_regions = vzalloc(sha_region_sz);
+-	if (!sha_regions)
++	if (!sha_regions) {
++		ret = -ENOMEM;
+ 		goto out_free_desc;
++	}
+ 
+ 	desc->tfm   = tfm;
+ 
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 3ae2f56cc79de..817545ff80b9b 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -450,7 +450,7 @@ int walk_system_ram_res(u64 start, u64 end, void *arg,
+ {
+ 	unsigned long flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
+ 
+-	return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, true,
++	return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, false,
+ 				     arg, func);
+ }
+ 
+@@ -463,7 +463,7 @@ int walk_mem_res(u64 start, u64 end, void *arg,
+ {
+ 	unsigned long flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+ 
+-	return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, true,
++	return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, false,
+ 				     arg, func);
+ }
+ 
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 3c3554d9ee50b..57b2362518849 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -936,7 +936,7 @@ DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);
+ 
+ static inline unsigned int uclamp_bucket_id(unsigned int clamp_value)
+ {
+-	return clamp_value / UCLAMP_BUCKET_DELTA;
++	return min_t(unsigned int, clamp_value / UCLAMP_BUCKET_DELTA, UCLAMP_BUCKETS - 1);
+ }
+ 
+ static inline unsigned int uclamp_none(enum uclamp_id clamp_id)
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index c80d1a039d19a..1ad0e52487f6b 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -10840,16 +10840,22 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
+ {
+ 	struct cfs_rq *cfs_rq;
+ 
++	list_add_leaf_cfs_rq(cfs_rq_of(se));
++
+ 	/* Start to propagate at parent */
+ 	se = se->parent;
+ 
+ 	for_each_sched_entity(se) {
+ 		cfs_rq = cfs_rq_of(se);
+ 
+-		if (cfs_rq_throttled(cfs_rq))
+-			break;
++		if (!cfs_rq_throttled(cfs_rq)){
++			update_load_avg(cfs_rq, se, UPDATE_TG);
++			list_add_leaf_cfs_rq(cfs_rq);
++			continue;
++		}
+ 
+-		update_load_avg(cfs_rq, se, UPDATE_TG);
++		if (list_add_leaf_cfs_rq(cfs_rq))
++			break;
+ 	}
+ }
+ #else
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 71109065bd8eb..01bf977090dc2 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -172,7 +172,6 @@ static u64 __read_mostly sample_period;
+ static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts);
+ static DEFINE_PER_CPU(struct hrtimer, watchdog_hrtimer);
+ static DEFINE_PER_CPU(bool, softlockup_touch_sync);
+-static DEFINE_PER_CPU(bool, soft_watchdog_warn);
+ static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts);
+ static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
+ static unsigned long soft_lockup_nmi_warn;
+@@ -236,7 +235,7 @@ static void set_sample_period(void)
+ }
+ 
+ /* Commands for resetting the watchdog */
+-static void __touch_watchdog(void)
++static void update_touch_ts(void)
+ {
+ 	__this_cpu_write(watchdog_touch_ts, get_timestamp());
+ }
+@@ -331,7 +330,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work);
+  */
+ static int softlockup_fn(void *data)
+ {
+-	__touch_watchdog();
++	update_touch_ts();
+ 	complete(this_cpu_ptr(&softlockup_completion));
+ 
+ 	return 0;
+@@ -374,7 +373,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
+ 
+ 		/* Clear the guest paused flag on watchdog reset */
+ 		kvm_check_and_clear_guest_paused();
+-		__touch_watchdog();
++		update_touch_ts();
+ 		return HRTIMER_RESTART;
+ 	}
+ 
+@@ -394,21 +393,18 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
+ 		if (kvm_check_and_clear_guest_paused())
+ 			return HRTIMER_RESTART;
+ 
+-		/* only warn once */
+-		if (__this_cpu_read(soft_watchdog_warn) == true)
+-			return HRTIMER_RESTART;
+-
++		/*
++		 * Prevent multiple soft-lockup reports if one cpu is already
++		 * engaged in dumping all cpu back traces.
++		 */
+ 		if (softlockup_all_cpu_backtrace) {
+-			/* Prevent multiple soft-lockup reports if one cpu is already
+-			 * engaged in dumping cpu back traces
+-			 */
+-			if (test_and_set_bit(0, &soft_lockup_nmi_warn)) {
+-				/* Someone else will report us. Let's give up */
+-				__this_cpu_write(soft_watchdog_warn, true);
++			if (test_and_set_bit_lock(0, &soft_lockup_nmi_warn))
+ 				return HRTIMER_RESTART;
+-			}
+ 		}
+ 
++		/* Start period for the next softlockup warning. */
++		update_touch_ts();
++
+ 		pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
+ 			smp_processor_id(), duration,
+ 			current->comm, task_pid_nr(current));
+@@ -420,22 +416,14 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
+ 			dump_stack();
+ 
+ 		if (softlockup_all_cpu_backtrace) {
+-			/* Avoid generating two back traces for current
+-			 * given that one is already made above
+-			 */
+ 			trigger_allbutself_cpu_backtrace();
+-
+-			clear_bit(0, &soft_lockup_nmi_warn);
+-			/* Barrier to sync with other cpus */
+-			smp_mb__after_atomic();
++			clear_bit_unlock(0, &soft_lockup_nmi_warn);
+ 		}
+ 
+ 		add_taint(TAINT_SOFTLOCKUP, LOCKDEP_STILL_OK);
+ 		if (softlockup_panic)
+ 			panic("softlockup: hung tasks");
+-		__this_cpu_write(soft_watchdog_warn, true);
+-	} else
+-		__this_cpu_write(soft_watchdog_warn, false);
++	}
+ 
+ 	return HRTIMER_RESTART;
+ }
+@@ -460,7 +448,7 @@ static void watchdog_enable(unsigned int cpu)
+ 		      HRTIMER_MODE_REL_PINNED_HARD);
+ 
+ 	/* Initialize timestamp */
+-	__touch_watchdog();
++	update_touch_ts();
+ 	/* Enable the perf event */
+ 	if (watchdog_enabled & NMI_WATCHDOG_ENABLED)
+ 		watchdog_nmi_enable(cpu);
+diff --git a/lib/kobject_uevent.c b/lib/kobject_uevent.c
+index 7998affa45d49..c87d5b6a8a55a 100644
+--- a/lib/kobject_uevent.c
++++ b/lib/kobject_uevent.c
+@@ -251,12 +251,13 @@ static int kobj_usermode_filter(struct kobject *kobj)
+ 
+ static int init_uevent_argv(struct kobj_uevent_env *env, const char *subsystem)
+ {
++	int buffer_size = sizeof(env->buf) - env->buflen;
+ 	int len;
+ 
+-	len = strlcpy(&env->buf[env->buflen], subsystem,
+-		      sizeof(env->buf) - env->buflen);
+-	if (len >= (sizeof(env->buf) - env->buflen)) {
+-		WARN(1, KERN_ERR "init_uevent_argv: buffer size too small\n");
++	len = strlcpy(&env->buf[env->buflen], subsystem, buffer_size);
++	if (len >= buffer_size) {
++		pr_warn("init_uevent_argv: buffer size of %d too small, needed %d\n",
++			buffer_size, len);
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/lib/nlattr.c b/lib/nlattr.c
+index 74019c8ebf6be..fe60f9ae9db17 100644
+--- a/lib/nlattr.c
++++ b/lib/nlattr.c
+@@ -816,7 +816,7 @@ int nla_strcmp(const struct nlattr *nla, const char *str)
+ 	int attrlen = nla_len(nla);
+ 	int d;
+ 
+-	if (attrlen > 0 && buf[attrlen - 1] == '\0')
++	while (attrlen > 0 && buf[attrlen - 1] == '\0')
+ 		attrlen--;
+ 
+ 	d = attrlen - len;
+diff --git a/lib/test_kasan.c b/lib/test_kasan.c
+index 400507f1e5db0..28c7c123a1857 100644
+--- a/lib/test_kasan.c
++++ b/lib/test_kasan.c
+@@ -449,8 +449,20 @@ static char global_array[10];
+ 
+ static void kasan_global_oob(struct kunit *test)
+ {
+-	volatile int i = 3;
+-	char *p = &global_array[ARRAY_SIZE(global_array) + i];
++	/*
++	 * Deliberate out-of-bounds access. To prevent CONFIG_UBSAN_LOCAL_BOUNDS
++	 * from failing here and panicing the kernel, access the array via a
++	 * volatile pointer, which will prevent the compiler from being able to
++	 * determine the array bounds.
++	 *
++	 * This access uses a volatile pointer to char (char *volatile) rather
++	 * than the more conventional pointer to volatile char (volatile char *)
++	 * because we want to prevent the compiler from making inferences about
++	 * the pointer itself (i.e. its array bounds), not the data that it
++	 * refers to.
++	 */
++	char *volatile array = global_array;
++	char *p = &array[ARRAY_SIZE(global_array) + 3];
+ 
+ 	/* Only generic mode instruments globals. */
+ 	if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+@@ -479,8 +491,9 @@ static void ksize_unpoisons_memory(struct kunit *test)
+ static void kasan_stack_oob(struct kunit *test)
+ {
+ 	char stack_array[10];
+-	volatile int i = OOB_TAG_OFF;
+-	char *p = &stack_array[ARRAY_SIZE(stack_array) + i];
++	/* See comment in kasan_global_oob. */
++	char *volatile array = stack_array;
++	char *p = &array[ARRAY_SIZE(stack_array) + OOB_TAG_OFF];
+ 
+ 	if (!IS_ENABLED(CONFIG_KASAN_STACK)) {
+ 		kunit_info(test, "CONFIG_KASAN_STACK is not enabled");
+@@ -494,7 +507,9 @@ static void kasan_alloca_oob_left(struct kunit *test)
+ {
+ 	volatile int i = 10;
+ 	char alloca_array[i];
+-	char *p = alloca_array - 1;
++	/* See comment in kasan_global_oob. */
++	char *volatile array = alloca_array;
++	char *p = array - 1;
+ 
+ 	/* Only generic mode instruments dynamic allocas. */
+ 	if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+@@ -514,7 +529,9 @@ static void kasan_alloca_oob_right(struct kunit *test)
+ {
+ 	volatile int i = 10;
+ 	char alloca_array[i];
+-	char *p = alloca_array + i;
++	/* See comment in kasan_global_oob. */
++	char *volatile array = alloca_array;
++	char *p = array + i;
+ 
+ 	/* Only generic mode instruments dynamic allocas. */
+ 	if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+diff --git a/mm/gup.c b/mm/gup.c
+index 054ff923d3d92..c2826f3afe722 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -1561,54 +1561,60 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
+ 					struct vm_area_struct **vmas,
+ 					unsigned int gup_flags)
+ {
+-	unsigned long i;
+-	unsigned long step;
+-	bool drain_allow = true;
+-	bool migrate_allow = true;
++	unsigned long i, isolation_error_count;
++	bool drain_allow;
+ 	LIST_HEAD(cma_page_list);
+ 	long ret = nr_pages;
++	struct page *prev_head, *head;
+ 	struct migration_target_control mtc = {
+ 		.nid = NUMA_NO_NODE,
+ 		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
+ 	};
+ 
+ check_again:
+-	for (i = 0; i < nr_pages;) {
+-
+-		struct page *head = compound_head(pages[i]);
+-
+-		/*
+-		 * gup may start from a tail page. Advance step by the left
+-		 * part.
+-		 */
+-		step = compound_nr(head) - (pages[i] - head);
++	prev_head = NULL;
++	isolation_error_count = 0;
++	drain_allow = true;
++	for (i = 0; i < nr_pages; i++) {
++		head = compound_head(pages[i]);
++		if (head == prev_head)
++			continue;
++		prev_head = head;
+ 		/*
+ 		 * If we get a page from the CMA zone, since we are going to
+ 		 * be pinning these entries, we might as well move them out
+ 		 * of the CMA zone if possible.
+ 		 */
+ 		if (is_migrate_cma_page(head)) {
+-			if (PageHuge(head))
+-				isolate_huge_page(head, &cma_page_list);
+-			else {
++			if (PageHuge(head)) {
++				if (!isolate_huge_page(head, &cma_page_list))
++					isolation_error_count++;
++			} else {
+ 				if (!PageLRU(head) && drain_allow) {
+ 					lru_add_drain_all();
+ 					drain_allow = false;
+ 				}
+ 
+-				if (!isolate_lru_page(head)) {
+-					list_add_tail(&head->lru, &cma_page_list);
+-					mod_node_page_state(page_pgdat(head),
+-							    NR_ISOLATED_ANON +
+-							    page_is_file_lru(head),
+-							    thp_nr_pages(head));
++				if (isolate_lru_page(head)) {
++					isolation_error_count++;
++					continue;
+ 				}
++				list_add_tail(&head->lru, &cma_page_list);
++				mod_node_page_state(page_pgdat(head),
++						    NR_ISOLATED_ANON +
++						    page_is_file_lru(head),
++						    thp_nr_pages(head));
+ 			}
+ 		}
+-
+-		i += step;
+ 	}
+ 
++	/*
++	 * If list is empty, and no isolation errors, means that all pages are
++	 * in the correct zone.
++	 */
++	if (list_empty(&cma_page_list) && !isolation_error_count)
++		return ret;
++
+ 	if (!list_empty(&cma_page_list)) {
+ 		/*
+ 		 * drop the above get_user_pages reference.
+@@ -1619,34 +1625,28 @@ check_again:
+ 			for (i = 0; i < nr_pages; i++)
+ 				put_page(pages[i]);
+ 
+-		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
+-			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
+-			/*
+-			 * some of the pages failed migration. Do get_user_pages
+-			 * without migration.
+-			 */
+-			migrate_allow = false;
+-
++		ret = migrate_pages(&cma_page_list, alloc_migration_target,
++				    NULL, (unsigned long)&mtc, MIGRATE_SYNC,
++				    MR_CONTIG_RANGE);
++		if (ret) {
+ 			if (!list_empty(&cma_page_list))
+ 				putback_movable_pages(&cma_page_list);
++			return ret > 0 ? -ENOMEM : ret;
+ 		}
+-		/*
+-		 * We did migrate all the pages, Try to get the page references
+-		 * again migrating any new CMA pages which we failed to isolate
+-		 * earlier.
+-		 */
+-		ret = __get_user_pages_locked(mm, start, nr_pages,
+-						   pages, vmas, NULL,
+-						   gup_flags);
+-
+-		if ((ret > 0) && migrate_allow) {
+-			nr_pages = ret;
+-			drain_allow = true;
+-			goto check_again;
+-		}
++
++		/* We unpinned pages before migration, pin them again */
++		ret = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
++					      NULL, gup_flags);
++		if (ret <= 0)
++			return ret;
++		nr_pages = ret;
+ 	}
+ 
+-	return ret;
++	/*
++	 * check again because pages were unpinned, and we also might have
++	 * had isolation errors and need more pages to migrate.
++	 */
++	goto check_again;
+ }
+ #else
+ static long check_and_migrate_cma_pages(struct mm_struct *mm,
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 573f1a0183be6..900851a4f9146 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -745,13 +745,20 @@ void hugetlb_fix_reserve_counts(struct inode *inode)
+ {
+ 	struct hugepage_subpool *spool = subpool_inode(inode);
+ 	long rsv_adjust;
++	bool reserved = false;
+ 
+ 	rsv_adjust = hugepage_subpool_get_pages(spool, 1);
+-	if (rsv_adjust) {
++	if (rsv_adjust > 0) {
+ 		struct hstate *h = hstate_inode(inode);
+ 
+-		hugetlb_acct_memory(h, 1);
++		if (!hugetlb_acct_memory(h, 1))
++			reserved = true;
++	} else if (!rsv_adjust) {
++		reserved = true;
+ 	}
++
++	if (!reserved)
++		pr_warn("hugetlb: Huge Page Reserved count may go negative.\n");
+ }
+ 
+ /*
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index abab394c42062..a6238118ac4c7 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -714,17 +714,17 @@ next:
+ 		if (pte_write(pteval))
+ 			writable = true;
+ 	}
+-	if (likely(writable)) {
+-		if (likely(referenced)) {
+-			result = SCAN_SUCCEED;
+-			trace_mm_collapse_huge_page_isolate(page, none_or_zero,
+-							    referenced, writable, result);
+-			return 1;
+-		}
+-	} else {
++
++	if (unlikely(!writable)) {
+ 		result = SCAN_PAGE_RO;
++	} else if (unlikely(!referenced)) {
++		result = SCAN_LACK_REFERENCED_PAGE;
++	} else {
++		result = SCAN_SUCCEED;
++		trace_mm_collapse_huge_page_isolate(page, none_or_zero,
++						    referenced, writable, result);
++		return 1;
+ 	}
+-
+ out:
+ 	release_pte_pages(pte, _pte, compound_pagelist);
+ 	trace_mm_collapse_huge_page_isolate(page, none_or_zero,
+diff --git a/mm/ksm.c b/mm/ksm.c
+index 0960750bb316d..25b8362a4f895 100644
+--- a/mm/ksm.c
++++ b/mm/ksm.c
+@@ -794,6 +794,7 @@ static void remove_rmap_item_from_tree(struct rmap_item *rmap_item)
+ 		stable_node->rmap_hlist_len--;
+ 
+ 		put_anon_vma(rmap_item->anon_vma);
++		rmap_item->head = NULL;
+ 		rmap_item->address &= PAGE_MASK;
+ 
+ 	} else if (rmap_item->address & UNSTABLE_FLAG) {
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 9d7ca1bd7f4b3..7982256a51250 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -2914,6 +2914,13 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
+ 
+ 			swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE);
+ 			entry = swp_entry_to_pte(swp_entry);
++		} else {
++			/*
++			 * For now we only support migrating to un-addressable
++			 * device memory.
++			 */
++			pr_warn_once("Unsupported ZONE_DEVICE page type.\n");
++			goto abort;
+ 		}
+ 	} else {
+ 		entry = mk_pte(page, vma->vm_page_prot);
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 537c137698f8a..6e487bf555f9e 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2256,25 +2256,11 @@ out_nomem:
+ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
+ {
+ 	struct shmem_inode_info *info = SHMEM_I(file_inode(file));
++	int ret;
+ 
+-	if (info->seals & F_SEAL_FUTURE_WRITE) {
+-		/*
+-		 * New PROT_WRITE and MAP_SHARED mmaps are not allowed when
+-		 * "future write" seal active.
+-		 */
+-		if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
+-			return -EPERM;
+-
+-		/*
+-		 * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as
+-		 * MAP_SHARED and read-only, take care to not allow mprotect to
+-		 * revert protections on such mappings. Do this only for shared
+-		 * mappings. For private mappings, don't need to mask
+-		 * VM_MAYWRITE as we still want them to be COW-writable.
+-		 */
+-		if (vma->vm_flags & VM_SHARED)
+-			vma->vm_flags &= ~(VM_MAYWRITE);
+-	}
++	ret = seal_check_future_write(info->seals, vma);
++	if (ret)
++		return ret;
+ 
+ 	/* arm64 - allow memory tagging on RAM-based files */
+ 	vma->vm_flags |= VM_MTE_ALLOWED;
+@@ -2378,8 +2364,18 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
+ 	pgoff_t offset, max_off;
+ 
+ 	ret = -ENOMEM;
+-	if (!shmem_inode_acct_block(inode, 1))
++	if (!shmem_inode_acct_block(inode, 1)) {
++		/*
++		 * We may have got a page, returned -ENOENT triggering a retry,
++		 * and now we find ourselves with -ENOMEM. Release the page, to
++		 * avoid a BUG_ON in our caller.
++		 */
++		if (unlikely(*pagep)) {
++			put_page(*pagep);
++			*pagep = NULL;
++		}
+ 		goto out;
++	}
+ 
+ 	if (!*pagep) {
+ 		page = shmem_alloc_page(gfp, info, pgoff);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index e0a5428497352..4676e4b0be2bf 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5897,7 +5897,7 @@ static void hci_le_phy_update_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+ 
+-	if (!ev->status)
++	if (ev->status)
+ 		return;
+ 
+ 	hci_dev_lock(hdev);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 1ab27b90ddcbc..cdc3863371739 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -451,6 +451,8 @@ struct l2cap_chan *l2cap_chan_create(void)
+ 	if (!chan)
+ 		return NULL;
+ 
++	skb_queue_head_init(&chan->tx_q);
++	skb_queue_head_init(&chan->srej_q);
+ 	mutex_init(&chan->lock);
+ 
+ 	/* Set default lock nesting level */
+@@ -516,7 +518,9 @@ void l2cap_chan_set_defaults(struct l2cap_chan *chan)
+ 	chan->flush_to = L2CAP_DEFAULT_FLUSH_TO;
+ 	chan->retrans_timeout = L2CAP_DEFAULT_RETRANS_TO;
+ 	chan->monitor_timeout = L2CAP_DEFAULT_MONITOR_TO;
++
+ 	chan->conf_state = 0;
++	set_bit(CONF_NOT_COMPLETE, &chan->conf_state);
+ 
+ 	set_bit(FLAG_FORCE_ACTIVE, &chan->flags);
+ }
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index f1b1edd0b6974..c99d65ef13b1e 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -179,9 +179,17 @@ static int l2cap_sock_connect(struct socket *sock, struct sockaddr *addr,
+ 	struct l2cap_chan *chan = l2cap_pi(sk)->chan;
+ 	struct sockaddr_l2 la;
+ 	int len, err = 0;
++	bool zapped;
+ 
+ 	BT_DBG("sk %p", sk);
+ 
++	lock_sock(sk);
++	zapped = sock_flag(sk, SOCK_ZAPPED);
++	release_sock(sk);
++
++	if (zapped)
++		return -EINVAL;
++
+ 	if (!addr || alen < offsetofend(struct sockaddr, sa_family) ||
+ 	    addr->sa_family != AF_BLUETOOTH)
+ 		return -EINVAL;
+diff --git a/net/bridge/br_arp_nd_proxy.c b/net/bridge/br_arp_nd_proxy.c
+index dfec65eca8a6e..3db1def4437b3 100644
+--- a/net/bridge/br_arp_nd_proxy.c
++++ b/net/bridge/br_arp_nd_proxy.c
+@@ -160,7 +160,9 @@ void br_do_proxy_suppress_arp(struct sk_buff *skb, struct net_bridge *br,
+ 	if (br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED)) {
+ 		if (p && (p->flags & BR_NEIGH_SUPPRESS))
+ 			return;
+-		if (ipv4_is_zeronet(sip) || sip == tip) {
++		if (parp->ar_op != htons(ARPOP_RREQUEST) &&
++		    parp->ar_op != htons(ARPOP_RREPLY) &&
++		    (ipv4_is_zeronet(sip) || sip == tip)) {
+ 			/* prevent flooding to neigh suppress ports */
+ 			BR_INPUT_SKB_CB(skb)->proxyarp_replied = 1;
+ 			return;
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index d48b37b15b276..c52e5ea654e99 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -822,8 +822,10 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys,
+ 		key_addrs = skb_flow_dissector_target(flow_dissector,
+ 						      FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+ 						      target_container);
+-		memcpy(&key_addrs->v6addrs, &flow_keys->ipv6_src,
+-		       sizeof(key_addrs->v6addrs));
++		memcpy(&key_addrs->v6addrs.src, &flow_keys->ipv6_src,
++		       sizeof(key_addrs->v6addrs.src));
++		memcpy(&key_addrs->v6addrs.dst, &flow_keys->ipv6_dst,
++		       sizeof(key_addrs->v6addrs.dst));
+ 		key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+ 	}
+ 
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index ef98372facf63..08fbf4049c108 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -172,8 +172,10 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
+ 					  struct page *page,
+ 					  unsigned int dma_sync_size)
+ {
++	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
++
+ 	dma_sync_size = min(dma_sync_size, pool->p.max_len);
+-	dma_sync_single_range_for_device(pool->p.dev, page->dma_addr,
++	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
+ 					 pool->p.offset, dma_sync_size,
+ 					 pool->p.dma_dir);
+ }
+@@ -224,7 +226,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
+ 		put_page(page);
+ 		return NULL;
+ 	}
+-	page->dma_addr = dma;
++	page_pool_set_dma_addr(page, dma);
+ 
+ 	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+ 		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
+@@ -292,13 +294,13 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
+ 		 */
+ 		goto skip_dma_unmap;
+ 
+-	dma = page->dma_addr;
++	dma = page_pool_get_dma_addr(page);
+ 
+-	/* When page is unmapped, it cannot be returned our pool */
++	/* When page is unmapped, it cannot be returned to our pool */
+ 	dma_unmap_page_attrs(pool->p.dev, dma,
+ 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
+ 			     DMA_ATTR_SKIP_CPU_SYNC);
+-	page->dma_addr = 0;
++	page_pool_set_dma_addr(page, 0);
+ skip_dma_unmap:
+ 	/* This may be the last page returned, releasing the pool, so
+ 	 * it is not safe to reference pool afterwards.
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index ec2cd7aab5add..2917af3f5ac19 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -489,7 +489,7 @@ store_link_ksettings_for_user(void __user *to,
+ {
+ 	struct ethtool_link_usettings link_usettings;
+ 
+-	memcpy(&link_usettings.base, &from->base, sizeof(link_usettings));
++	memcpy(&link_usettings, from, sizeof(link_usettings));
+ 	bitmap_to_arr32(link_usettings.link_modes.supported,
+ 			from->link_modes.supported,
+ 			__ETHTOOL_LINK_MODE_MASK_NBITS);
+diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c
+index 50d3c8896f917..25a55086d2b66 100644
+--- a/net/ethtool/netlink.c
++++ b/net/ethtool/netlink.c
+@@ -384,7 +384,8 @@ static int ethnl_default_dump_one(struct sk_buff *skb, struct net_device *dev,
+ 	int ret;
+ 
+ 	ehdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+-			   &ethtool_genl_family, 0, ctx->ops->reply_cmd);
++			   &ethtool_genl_family, NLM_F_MULTI,
++			   ctx->ops->reply_cmd);
+ 	if (!ehdr)
+ 		return -EMSGSIZE;
+ 
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index ecfeffc06c55c..82961ff4da9b7 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -192,7 +192,6 @@ static int vti6_tnl_create2(struct net_device *dev)
+ 
+ 	strcpy(t->parms.name, dev->name);
+ 
+-	dev_hold(dev);
+ 	vti6_tnl_link(ip6n, t);
+ 
+ 	return 0;
+@@ -931,6 +930,7 @@ static inline int vti6_dev_init_gen(struct net_device *dev)
+ 	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
+ 	if (!dev->tstats)
+ 		return -ENOMEM;
++	dev_hold(dev);
+ 	return 0;
+ }
+ 
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index ef19c3399b893..6d3220c66931a 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1295,6 +1295,11 @@ static void ieee80211_chswitch_post_beacon(struct ieee80211_sub_if_data *sdata)
+ 
+ 	sdata->vif.csa_active = false;
+ 	ifmgd->csa_waiting_bcn = false;
++	/*
++	 * If the CSA IE is still present on the beacon after the switch,
++	 * we need to consider it as a new CSA (possibly to self).
++	 */
++	ifmgd->beacon_crc_valid = false;
+ 
+ 	ret = drv_post_channel_switch(sdata);
+ 	if (ret) {
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 6317b9bc86815..01a675fa2aa2b 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -445,8 +445,7 @@ static void mptcp_sock_destruct(struct sock *sk)
+ 	 * ESTABLISHED state and will not have the SOCK_DEAD flag.
+ 	 * Both result in warnings from inet_sock_destruct.
+ 	 */
+-
+-	if (sk->sk_state == TCP_ESTABLISHED) {
++	if ((1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) {
+ 		sk->sk_state = TCP_CLOSE;
+ 		WARN_ON_ONCE(sk->sk_socket);
+ 		sock_orphan(sk);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 2e76935db2c88..7bf7bfa0c7d9c 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -6015,9 +6015,9 @@ err_obj_ht:
+ 	INIT_LIST_HEAD(&obj->list);
+ 	return err;
+ err_trans:
+-	kfree(obj->key.name);
+-err_userdata:
+ 	kfree(obj->udata);
++err_userdata:
++	kfree(obj->key.name);
+ err_strdup:
+ 	if (obj->ops->destroy)
+ 		obj->ops->destroy(&ctx, obj);
+diff --git a/net/netfilter/nfnetlink_osf.c b/net/netfilter/nfnetlink_osf.c
+index 916a3c7f9eafe..79fbf37291f38 100644
+--- a/net/netfilter/nfnetlink_osf.c
++++ b/net/netfilter/nfnetlink_osf.c
+@@ -186,6 +186,8 @@ static const struct tcphdr *nf_osf_hdr_ctx_init(struct nf_osf_hdr_ctx *ctx,
+ 
+ 		ctx->optp = skb_header_pointer(skb, ip_hdrlen(skb) +
+ 				sizeof(struct tcphdr), ctx->optsize, opts);
++		if (!ctx->optp)
++			return NULL;
+ 	}
+ 
+ 	return tcp;
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index 4d3f147e8d8dc..d7083bcb20e8c 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -393,9 +393,17 @@ static void nft_rhash_destroy(const struct nft_set *set)
+ 				    (void *)set);
+ }
+ 
++/* Number of buckets is stored in u32, so cap our result to 1U<<31 */
++#define NFT_MAX_BUCKETS (1U << 31)
++
+ static u32 nft_hash_buckets(u32 size)
+ {
+-	return roundup_pow_of_two(size * 4 / 3);
++	u64 val = div_u64((u64)size * 4, 3);
++
++	if (val >= NFT_MAX_BUCKETS)
++		return NFT_MAX_BUCKETS;
++
++	return roundup_pow_of_two(val);
+ }
+ 
+ static bool nft_rhash_estimate(const struct nft_set_desc *desc, u32 features,
+diff --git a/net/netfilter/xt_SECMARK.c b/net/netfilter/xt_SECMARK.c
+index 75625d13e976c..498a0bf6f0444 100644
+--- a/net/netfilter/xt_SECMARK.c
++++ b/net/netfilter/xt_SECMARK.c
+@@ -24,10 +24,9 @@ MODULE_ALIAS("ip6t_SECMARK");
+ static u8 mode;
+ 
+ static unsigned int
+-secmark_tg(struct sk_buff *skb, const struct xt_action_param *par)
++secmark_tg(struct sk_buff *skb, const struct xt_secmark_target_info_v1 *info)
+ {
+ 	u32 secmark = 0;
+-	const struct xt_secmark_target_info *info = par->targinfo;
+ 
+ 	switch (mode) {
+ 	case SECMARK_MODE_SEL:
+@@ -41,7 +40,7 @@ secmark_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ 	return XT_CONTINUE;
+ }
+ 
+-static int checkentry_lsm(struct xt_secmark_target_info *info)
++static int checkentry_lsm(struct xt_secmark_target_info_v1 *info)
+ {
+ 	int err;
+ 
+@@ -73,15 +72,15 @@ static int checkentry_lsm(struct xt_secmark_target_info *info)
+ 	return 0;
+ }
+ 
+-static int secmark_tg_check(const struct xt_tgchk_param *par)
++static int
++secmark_tg_check(const char *table, struct xt_secmark_target_info_v1 *info)
+ {
+-	struct xt_secmark_target_info *info = par->targinfo;
+ 	int err;
+ 
+-	if (strcmp(par->table, "mangle") != 0 &&
+-	    strcmp(par->table, "security") != 0) {
++	if (strcmp(table, "mangle") != 0 &&
++	    strcmp(table, "security") != 0) {
+ 		pr_info_ratelimited("only valid in \'mangle\' or \'security\' table, not \'%s\'\n",
+-				    par->table);
++				    table);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -116,25 +115,76 @@ static void secmark_tg_destroy(const struct xt_tgdtor_param *par)
+ 	}
+ }
+ 
+-static struct xt_target secmark_tg_reg __read_mostly = {
+-	.name       = "SECMARK",
+-	.revision   = 0,
+-	.family     = NFPROTO_UNSPEC,
+-	.checkentry = secmark_tg_check,
+-	.destroy    = secmark_tg_destroy,
+-	.target     = secmark_tg,
+-	.targetsize = sizeof(struct xt_secmark_target_info),
+-	.me         = THIS_MODULE,
++static int secmark_tg_check_v0(const struct xt_tgchk_param *par)
++{
++	struct xt_secmark_target_info *info = par->targinfo;
++	struct xt_secmark_target_info_v1 newinfo = {
++		.mode	= info->mode,
++	};
++	int ret;
++
++	memcpy(newinfo.secctx, info->secctx, SECMARK_SECCTX_MAX);
++
++	ret = secmark_tg_check(par->table, &newinfo);
++	info->secid = newinfo.secid;
++
++	return ret;
++}
++
++static unsigned int
++secmark_tg_v0(struct sk_buff *skb, const struct xt_action_param *par)
++{
++	const struct xt_secmark_target_info *info = par->targinfo;
++	struct xt_secmark_target_info_v1 newinfo = {
++		.secid	= info->secid,
++	};
++
++	return secmark_tg(skb, &newinfo);
++}
++
++static int secmark_tg_check_v1(const struct xt_tgchk_param *par)
++{
++	return secmark_tg_check(par->table, par->targinfo);
++}
++
++static unsigned int
++secmark_tg_v1(struct sk_buff *skb, const struct xt_action_param *par)
++{
++	return secmark_tg(skb, par->targinfo);
++}
++
++static struct xt_target secmark_tg_reg[] __read_mostly = {
++	{
++		.name		= "SECMARK",
++		.revision	= 0,
++		.family		= NFPROTO_UNSPEC,
++		.checkentry	= secmark_tg_check_v0,
++		.destroy	= secmark_tg_destroy,
++		.target		= secmark_tg_v0,
++		.targetsize	= sizeof(struct xt_secmark_target_info),
++		.me		= THIS_MODULE,
++	},
++	{
++		.name		= "SECMARK",
++		.revision	= 1,
++		.family		= NFPROTO_UNSPEC,
++		.checkentry	= secmark_tg_check_v1,
++		.destroy	= secmark_tg_destroy,
++		.target		= secmark_tg_v1,
++		.targetsize	= sizeof(struct xt_secmark_target_info_v1),
++		.usersize	= offsetof(struct xt_secmark_target_info_v1, secid),
++		.me		= THIS_MODULE,
++	},
+ };
+ 
+ static int __init secmark_tg_init(void)
+ {
+-	return xt_register_target(&secmark_tg_reg);
++	return xt_register_targets(secmark_tg_reg, ARRAY_SIZE(secmark_tg_reg));
+ }
+ 
+ static void __exit secmark_tg_exit(void)
+ {
+-	xt_unregister_target(&secmark_tg_reg);
++	xt_unregister_targets(secmark_tg_reg, ARRAY_SIZE(secmark_tg_reg));
+ }
+ 
+ module_init(secmark_tg_init);
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 14316ba9b3b32..a5212a3f86e2f 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -209,16 +209,16 @@ static bool fl_range_port_dst_cmp(struct cls_fl_filter *filter,
+ 				  struct fl_flow_key *key,
+ 				  struct fl_flow_key *mkey)
+ {
+-	__be16 min_mask, max_mask, min_val, max_val;
++	u16 min_mask, max_mask, min_val, max_val;
+ 
+-	min_mask = htons(filter->mask->key.tp_range.tp_min.dst);
+-	max_mask = htons(filter->mask->key.tp_range.tp_max.dst);
+-	min_val = htons(filter->key.tp_range.tp_min.dst);
+-	max_val = htons(filter->key.tp_range.tp_max.dst);
++	min_mask = ntohs(filter->mask->key.tp_range.tp_min.dst);
++	max_mask = ntohs(filter->mask->key.tp_range.tp_max.dst);
++	min_val = ntohs(filter->key.tp_range.tp_min.dst);
++	max_val = ntohs(filter->key.tp_range.tp_max.dst);
+ 
+ 	if (min_mask && max_mask) {
+-		if (htons(key->tp_range.tp.dst) < min_val ||
+-		    htons(key->tp_range.tp.dst) > max_val)
++		if (ntohs(key->tp_range.tp.dst) < min_val ||
++		    ntohs(key->tp_range.tp.dst) > max_val)
+ 			return false;
+ 
+ 		/* skb does not have min and max values */
+@@ -232,16 +232,16 @@ static bool fl_range_port_src_cmp(struct cls_fl_filter *filter,
+ 				  struct fl_flow_key *key,
+ 				  struct fl_flow_key *mkey)
+ {
+-	__be16 min_mask, max_mask, min_val, max_val;
++	u16 min_mask, max_mask, min_val, max_val;
+ 
+-	min_mask = htons(filter->mask->key.tp_range.tp_min.src);
+-	max_mask = htons(filter->mask->key.tp_range.tp_max.src);
+-	min_val = htons(filter->key.tp_range.tp_min.src);
+-	max_val = htons(filter->key.tp_range.tp_max.src);
++	min_mask = ntohs(filter->mask->key.tp_range.tp_min.src);
++	max_mask = ntohs(filter->mask->key.tp_range.tp_max.src);
++	min_val = ntohs(filter->key.tp_range.tp_min.src);
++	max_val = ntohs(filter->key.tp_range.tp_max.src);
+ 
+ 	if (min_mask && max_mask) {
+-		if (htons(key->tp_range.tp.src) < min_val ||
+-		    htons(key->tp_range.tp.src) > max_val)
++		if (ntohs(key->tp_range.tp.src) < min_val ||
++		    ntohs(key->tp_range.tp.src) > max_val)
+ 			return false;
+ 
+ 		/* skb does not have min and max values */
+@@ -779,16 +779,16 @@ static int fl_set_key_port_range(struct nlattr **tb, struct fl_flow_key *key,
+ 		       TCA_FLOWER_UNSPEC, sizeof(key->tp_range.tp_max.src));
+ 
+ 	if (mask->tp_range.tp_min.dst && mask->tp_range.tp_max.dst &&
+-	    htons(key->tp_range.tp_max.dst) <=
+-	    htons(key->tp_range.tp_min.dst)) {
++	    ntohs(key->tp_range.tp_max.dst) <=
++	    ntohs(key->tp_range.tp_min.dst)) {
+ 		NL_SET_ERR_MSG_ATTR(extack,
+ 				    tb[TCA_FLOWER_KEY_PORT_DST_MIN],
+ 				    "Invalid destination port range (min must be strictly smaller than max)");
+ 		return -EINVAL;
+ 	}
+ 	if (mask->tp_range.tp_min.src && mask->tp_range.tp_max.src &&
+-	    htons(key->tp_range.tp_max.src) <=
+-	    htons(key->tp_range.tp_min.src)) {
++	    ntohs(key->tp_range.tp_max.src) <=
++	    ntohs(key->tp_range.tp_min.src)) {
+ 		NL_SET_ERR_MSG_ATTR(extack,
+ 				    tb[TCA_FLOWER_KEY_PORT_SRC_MIN],
+ 				    "Invalid source port range (min must be strictly smaller than max)");
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index c966c05a0be92..00853065dfa06 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -900,6 +900,12 @@ static int parse_taprio_schedule(struct taprio_sched *q, struct nlattr **tb,
+ 
+ 		list_for_each_entry(entry, &new->entries, list)
+ 			cycle = ktime_add_ns(cycle, entry->interval);
++
++		if (!cycle) {
++			NL_SET_ERR_MSG(extack, "'cycle_time' can never be 0");
++			return -EINVAL;
++		}
++
+ 		new->cycle_time = cycle;
+ 	}
+ 
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index 9a56ae2f36515..b9d6babe28702 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -3126,7 +3126,7 @@ static __be16 sctp_process_asconf_param(struct sctp_association *asoc,
+ 		 * primary.
+ 		 */
+ 		if (af->is_any(&addr))
+-			memcpy(&addr.v4, sctp_source(asconf), sizeof(addr));
++			memcpy(&addr, sctp_source(asconf), sizeof(addr));
+ 
+ 		if (security_sctp_bind_connect(asoc->ep->base.sk,
+ 					       SCTP_PARAM_SET_PRIMARY,
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index c669f8bd1eab2..b65bdaa84228f 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -1841,20 +1841,35 @@ static enum sctp_disposition sctp_sf_do_dupcook_a(
+ 			SCTP_TO(SCTP_EVENT_TIMEOUT_T4_RTO));
+ 	sctp_add_cmd_sf(commands, SCTP_CMD_PURGE_ASCONF_QUEUE, SCTP_NULL());
+ 
+-	repl = sctp_make_cookie_ack(new_asoc, chunk);
++	/* Update the content of current association. */
++	if (sctp_assoc_update((struct sctp_association *)asoc, new_asoc)) {
++		struct sctp_chunk *abort;
++
++		abort = sctp_make_abort(asoc, NULL, sizeof(struct sctp_errhdr));
++		if (abort) {
++			sctp_init_cause(abort, SCTP_ERROR_RSRC_LOW, 0);
++			sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
++		}
++		sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR, SCTP_ERROR(ECONNABORTED));
++		sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_FAILED,
++				SCTP_PERR(SCTP_ERROR_RSRC_LOW));
++		SCTP_INC_STATS(net, SCTP_MIB_ABORTEDS);
++		SCTP_DEC_STATS(net, SCTP_MIB_CURRESTAB);
++		goto nomem;
++	}
++
++	repl = sctp_make_cookie_ack(asoc, chunk);
+ 	if (!repl)
+ 		goto nomem;
+ 
+ 	/* Report association restart to upper layer. */
+ 	ev = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_RESTART, 0,
+-					     new_asoc->c.sinit_num_ostreams,
+-					     new_asoc->c.sinit_max_instreams,
++					     asoc->c.sinit_num_ostreams,
++					     asoc->c.sinit_max_instreams,
+ 					     NULL, GFP_ATOMIC);
+ 	if (!ev)
+ 		goto nomem_ev;
+ 
+-	/* Update the content of current association. */
+-	sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));
+ 	sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev));
+ 	if ((sctp_state(asoc, SHUTDOWN_PENDING) ||
+ 	     sctp_state(asoc, SHUTDOWN_SENT)) &&
+@@ -1918,7 +1933,8 @@ static enum sctp_disposition sctp_sf_do_dupcook_b(
+ 	sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));
+ 	sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,
+ 			SCTP_STATE(SCTP_STATE_ESTABLISHED));
+-	SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB);
++	if (asoc->state < SCTP_STATE_ESTABLISHED)
++		SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB);
+ 	sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL());
+ 
+ 	repl = sctp_make_cookie_ack(new_asoc, chunk);
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 5dd4faaf7d6e5..030d7f30b13fe 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -2147,6 +2147,9 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
+ 	struct smc_sock *smc;
+ 	int val, rc;
+ 
++	if (level == SOL_TCP && optname == TCP_ULP)
++		return -EOPNOTSUPP;
++
+ 	smc = smc_sk(sk);
+ 
+ 	/* generic setsockopts reaching us here always apply to the
+@@ -2171,7 +2174,6 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
+ 	if (rc || smc->use_fallback)
+ 		goto out;
+ 	switch (optname) {
+-	case TCP_ULP:
+ 	case TCP_FASTOPEN:
+ 	case TCP_FASTOPEN_CONNECT:
+ 	case TCP_FASTOPEN_KEY:
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 3259120462ed7..4a0e8e458a9ad 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1802,7 +1802,6 @@ call_allocate(struct rpc_task *task)
+ 
+ 	status = xprt->ops->buf_alloc(task);
+ 	trace_rpc_buf_alloc(task, status);
+-	xprt_inject_disconnect(xprt);
+ 	if (status == 0)
+ 		return;
+ 	if (status != -ENOMEM) {
+@@ -2460,12 +2459,6 @@ call_decode(struct rpc_task *task)
+ 		task->tk_flags &= ~RPC_CALL_MAJORSEEN;
+ 	}
+ 
+-	/*
+-	 * Ensure that we see all writes made by xprt_complete_rqst()
+-	 * before it changed req->rq_reply_bytes_recvd.
+-	 */
+-	smp_rmb();
+-
+ 	/*
+ 	 * Did we ever call xprt_complete_rqst()? If not, we should assume
+ 	 * the message is incomplete.
+@@ -2474,6 +2467,11 @@ call_decode(struct rpc_task *task)
+ 	if (!req->rq_reply_bytes_recvd)
+ 		goto out;
+ 
++	/* Ensure that we see all writes made by xprt_complete_rqst()
++	 * before it changed req->rq_reply_bytes_recvd.
++	 */
++	smp_rmb();
++
+ 	req->rq_rcv_buf.len = req->rq_private_buf.len;
+ 	trace_rpc_xdr_recvfrom(task, &req->rq_rcv_buf);
+ 
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index fa7b7ae2c2c5f..eba1714bf09ab 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1176,7 +1176,7 @@ static int svc_tcp_sendto(struct svc_rqst *rqstp)
+ 		goto out_notconn;
+ 	err = svc_tcp_sendmsg(svsk->sk_sock, &msg, xdr, marker, &sent);
+ 	xdr_free_bvec(xdr);
+-	trace_svcsock_tcp_send(xprt, err < 0 ? err : sent);
++	trace_svcsock_tcp_send(xprt, err < 0 ? (long)err : sent);
+ 	if (err < 0 || sent != (xdr->len + sizeof(marker)))
+ 		goto out_close;
+ 	mutex_unlock(&xprt->xpt_mutex);
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 57f09ea3ef2af..a85759d8cde85 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -670,9 +670,9 @@ int xprt_adjust_timeout(struct rpc_rqst *req)
+ 	const struct rpc_timeout *to = req->rq_task->tk_client->cl_timeout;
+ 	int status = 0;
+ 
+-	if (time_before(jiffies, req->rq_minortimeo))
+-		return status;
+ 	if (time_before(jiffies, req->rq_majortimeo)) {
++		if (time_before(jiffies, req->rq_minortimeo))
++			return status;
+ 		if (to->to_exponential)
+ 			req->rq_timeout <<= 1;
+ 		else
+@@ -1441,8 +1441,6 @@ bool xprt_prepare_transmit(struct rpc_task *task)
+ 	struct rpc_xprt	*xprt = req->rq_xprt;
+ 
+ 	if (!xprt_lock_write(xprt, task)) {
+-		trace_xprt_transmit_queued(xprt, task);
+-
+ 		/* Race breaker: someone may have transmitted us */
+ 		if (!test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
+ 			rpc_wake_up_queued_task_set_status(&xprt->sending,
+@@ -1455,7 +1453,10 @@ bool xprt_prepare_transmit(struct rpc_task *task)
+ 
+ void xprt_end_transmit(struct rpc_task *task)
+ {
+-	xprt_release_write(task->tk_rqstp->rq_xprt, task);
++	struct rpc_xprt	*xprt = task->tk_rqstp->rq_xprt;
++
++	xprt_inject_disconnect(xprt);
++	xprt_release_write(xprt, task);
+ }
+ 
+ /**
+@@ -1857,7 +1858,6 @@ void xprt_release(struct rpc_task *task)
+ 	spin_unlock(&xprt->transport_lock);
+ 	if (req->rq_buffer)
+ 		xprt->ops->buf_free(task);
+-	xprt_inject_disconnect(xprt);
+ 	xdr_free_bvec(&req->rq_rcv_buf);
+ 	xdr_free_bvec(&req->rq_snd_buf);
+ 	if (req->rq_cred != NULL)
+diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
+index 44888f5badef5..bf3627dce5529 100644
+--- a/net/sunrpc/xprtrdma/frwr_ops.c
++++ b/net/sunrpc/xprtrdma/frwr_ops.c
+@@ -242,6 +242,7 @@ int frwr_query_device(struct rpcrdma_ep *ep, const struct ib_device *device)
+ 	ep->re_attr.cap.max_send_wr += 1; /* for ib_drain_sq */
+ 	ep->re_attr.cap.max_recv_wr = ep->re_max_requests;
+ 	ep->re_attr.cap.max_recv_wr += RPCRDMA_BACKWARD_WRS;
++	ep->re_attr.cap.max_recv_wr += RPCRDMA_MAX_RECV_BATCH;
+ 	ep->re_attr.cap.max_recv_wr += 1; /* for ib_drain_rq */
+ 
+ 	ep->re_max_rdma_segs =
+@@ -554,7 +555,6 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
+ 		mr = container_of(frwr, struct rpcrdma_mr, frwr);
+ 		bad_wr = bad_wr->next;
+ 
+-		list_del_init(&mr->mr_list);
+ 		frwr_mr_recycle(mr);
+ 	}
+ }
+diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
+index c48536f2121fb..ca267a855a12c 100644
+--- a/net/sunrpc/xprtrdma/rpc_rdma.c
++++ b/net/sunrpc/xprtrdma/rpc_rdma.c
+@@ -1467,9 +1467,10 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *rep)
+ 		credits = 1;	/* don't deadlock */
+ 	else if (credits > r_xprt->rx_ep->re_max_requests)
+ 		credits = r_xprt->rx_ep->re_max_requests;
++	rpcrdma_post_recvs(r_xprt, credits + (buf->rb_bc_srv_max_requests << 1),
++			   false);
+ 	if (buf->rb_credits != credits)
+ 		rpcrdma_update_cwnd(r_xprt, credits);
+-	rpcrdma_post_recvs(r_xprt, false);
+ 
+ 	req = rpcr_to_rdmar(rqst);
+ 	if (req->rl_reply) {
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 035060c05fd5a..f93ff4282bf4f 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -262,8 +262,10 @@ xprt_rdma_connect_worker(struct work_struct *work)
+  * xprt_rdma_inject_disconnect - inject a connection fault
+  * @xprt: transport context
+  *
+- * If @xprt is connected, disconnect it to simulate spurious connection
+- * loss.
++ * If @xprt is connected, disconnect it to simulate spurious
++ * connection loss. Caller must hold @xprt's send lock to
++ * ensure that data structures and hardware resources are
++ * stable during the rdma_disconnect() call.
+  */
+ static void
+ xprt_rdma_inject_disconnect(struct rpc_xprt *xprt)
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index ad6e2e4994ce8..04325f0267c1c 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -535,7 +535,7 @@ int rpcrdma_xprt_connect(struct rpcrdma_xprt *r_xprt)
+ 	 * outstanding Receives.
+ 	 */
+ 	rpcrdma_ep_get(ep);
+-	rpcrdma_post_recvs(r_xprt, true);
++	rpcrdma_post_recvs(r_xprt, 1, true);
+ 
+ 	rc = rdma_connect(ep->re_id, &ep->re_remote_cma);
+ 	if (rc)
+@@ -1377,21 +1377,21 @@ int rpcrdma_post_sends(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
+ /**
+  * rpcrdma_post_recvs - Refill the Receive Queue
+  * @r_xprt: controlling transport instance
+- * @temp: mark Receive buffers to be deleted after use
++ * @needed: current credit grant
++ * @temp: mark Receive buffers to be deleted after one use
+  *
+  */
+-void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, bool temp)
++void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, int needed, bool temp)
+ {
+ 	struct rpcrdma_buffer *buf = &r_xprt->rx_buf;
+ 	struct rpcrdma_ep *ep = r_xprt->rx_ep;
+ 	struct ib_recv_wr *wr, *bad_wr;
+ 	struct rpcrdma_rep *rep;
+-	int needed, count, rc;
++	int count, rc;
+ 
+ 	rc = 0;
+ 	count = 0;
+ 
+-	needed = buf->rb_credits + (buf->rb_bc_srv_max_requests << 1);
+ 	if (likely(ep->re_receive_count > needed))
+ 		goto out;
+ 	needed -= ep->re_receive_count;
+diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
+index 43974ef39a505..3cacc6f4c5271 100644
+--- a/net/sunrpc/xprtrdma/xprt_rdma.h
++++ b/net/sunrpc/xprtrdma/xprt_rdma.h
+@@ -452,7 +452,7 @@ int rpcrdma_xprt_connect(struct rpcrdma_xprt *r_xprt);
+ void rpcrdma_xprt_disconnect(struct rpcrdma_xprt *r_xprt);
+ 
+ int rpcrdma_post_sends(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req);
+-void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, bool temp);
++void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, int needed, bool temp);
+ 
+ /*
+  * Buffer calls - xprtrdma/verbs.c
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index 1c7aa51cc2a3d..49e8933136526 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -693,7 +693,7 @@ static int tipc_nl_compat_link_dump(struct tipc_nl_compat_msg *msg,
+ 	if (err)
+ 		return err;
+ 
+-	link_info.dest = nla_get_flag(link[TIPC_NLA_LINK_DEST]);
++	link_info.dest = htonl(nla_get_flag(link[TIPC_NLA_LINK_DEST]));
+ 	link_info.up = htonl(nla_get_flag(link[TIPC_NLA_LINK_UP]));
+ 	nla_strlcpy(link_info.str, link[TIPC_NLA_LINK_NAME],
+ 		    TIPC_MAX_LINK_NAME);
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index ef6de0fb4e312..be9fd5a720117 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -126,13 +126,12 @@ static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
+ static inline bool xp_aligned_validate_desc(struct xsk_buff_pool *pool,
+ 					    struct xdp_desc *desc)
+ {
+-	u64 chunk, chunk_end;
++	u64 chunk;
+ 
+-	chunk = xp_aligned_extract_addr(pool, desc->addr);
+-	chunk_end = xp_aligned_extract_addr(pool, desc->addr + desc->len);
+-	if (chunk != chunk_end)
++	if (desc->len > pool->chunk_size)
+ 		return false;
+ 
++	chunk = xp_aligned_extract_addr(pool, desc->addr);
+ 	if (chunk >= pool->addrs_cnt)
+ 		return false;
+ 
+diff --git a/samples/bpf/tracex1_kern.c b/samples/bpf/tracex1_kern.c
+index 3f4599c9a2022..ef30d2b353b0f 100644
+--- a/samples/bpf/tracex1_kern.c
++++ b/samples/bpf/tracex1_kern.c
+@@ -26,7 +26,7 @@
+ SEC("kprobe/__netif_receive_skb_core")
+ int bpf_prog1(struct pt_regs *ctx)
+ {
+-	/* attaches to kprobe netif_receive_skb,
++	/* attaches to kprobe __netif_receive_skb_core,
+ 	 * looks for packets on loobpack device and prints them
+ 	 */
+ 	char devname[IFNAMSIZ];
+@@ -35,7 +35,7 @@ int bpf_prog1(struct pt_regs *ctx)
+ 	int len;
+ 
+ 	/* non-portable! works for the given kernel only */
+-	skb = (struct sk_buff *) PT_REGS_PARM1(ctx);
++	bpf_probe_read_kernel(&skb, sizeof(skb), (void *)PT_REGS_PARM1(ctx));
+ 	dev = _(skb->dev);
+ 	len = _(skb->len);
+ 
+diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost
+index f54b6ac37ac2e..12a87be0fb446 100644
+--- a/scripts/Makefile.modpost
++++ b/scripts/Makefile.modpost
+@@ -65,7 +65,20 @@ else
+ ifeq ($(KBUILD_EXTMOD),)
+ 
+ input-symdump := vmlinux.symvers
+-output-symdump := Module.symvers
++output-symdump := modules-only.symvers
++
++quiet_cmd_cat = GEN     $@
++      cmd_cat = cat $(real-prereqs) > $@
++
++ifneq ($(wildcard vmlinux.symvers),)
++
++__modpost: Module.symvers
++Module.symvers: vmlinux.symvers modules-only.symvers FORCE
++	$(call if_changed,cat)
++
++targets += Module.symvers
++
++endif
+ 
+ else
+ 
+diff --git a/scripts/kconfig/nconf.c b/scripts/kconfig/nconf.c
+index e0f9655291665..af814b39b8765 100644
+--- a/scripts/kconfig/nconf.c
++++ b/scripts/kconfig/nconf.c
+@@ -504,8 +504,8 @@ static int get_mext_match(const char *match_str, match_f flag)
+ 	else if (flag == FIND_NEXT_MATCH_UP)
+ 		--match_start;
+ 
++	match_start = (match_start + items_num) % items_num;
+ 	index = match_start;
+-	index = (index + items_num) % items_num;
+ 	while (true) {
+ 		char *str = k_menu_items[index].str;
+ 		if (strcasestr(str, match_str) != NULL)
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index f882ce0d9327f..e08f75aed4293 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -2481,19 +2481,6 @@ fail:
+ 	fatal("parse error in symbol dump file\n");
+ }
+ 
+-/* For normal builds always dump all symbols.
+- * For external modules only dump symbols
+- * that are not read from kernel Module.symvers.
+- **/
+-static int dump_sym(struct symbol *sym)
+-{
+-	if (!external_module)
+-		return 1;
+-	if (sym->module->from_dump)
+-		return 0;
+-	return 1;
+-}
+-
+ static void write_dump(const char *fname)
+ {
+ 	struct buffer buf = { };
+@@ -2504,7 +2491,7 @@ static void write_dump(const char *fname)
+ 	for (n = 0; n < SYMBOL_HASH_SIZE ; n++) {
+ 		symbol = symbolhash[n];
+ 		while (symbol) {
+-			if (dump_sym(symbol)) {
++			if (!symbol->module->from_dump) {
+ 				namespace = symbol->namespace;
+ 				buf_printf(&buf, "0x%08x\t%s\t%s\t%s\t%s\n",
+ 					   symbol->crc, symbol->name,
+diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c
+index 230c0b27b77d1..4c3cffcd296ac 100644
+--- a/security/keys/trusted-keys/trusted_tpm1.c
++++ b/security/keys/trusted-keys/trusted_tpm1.c
+@@ -500,10 +500,12 @@ static int tpm_seal(struct tpm_buf *tb, uint16_t keytype,
+ 
+ 	ret = tpm_get_random(chip, td->nonceodd, TPM_NONCE_SIZE);
+ 	if (ret < 0)
+-		return ret;
++		goto out;
+ 
+-	if (ret != TPM_NONCE_SIZE)
+-		return -EIO;
++	if (ret != TPM_NONCE_SIZE) {
++		ret = -EIO;
++		goto out;
++	}
+ 
+ 	ordinal = htonl(TPM_ORD_SEAL);
+ 	datsize = htonl(datalen);
+diff --git a/sound/firewire/bebob/bebob_stream.c b/sound/firewire/bebob/bebob_stream.c
+index bbae04793c50e..c18017e0a3d95 100644
+--- a/sound/firewire/bebob/bebob_stream.c
++++ b/sound/firewire/bebob/bebob_stream.c
+@@ -517,20 +517,22 @@ int snd_bebob_stream_init_duplex(struct snd_bebob *bebob)
+ static int keep_resources(struct snd_bebob *bebob, struct amdtp_stream *stream,
+ 			  unsigned int rate, unsigned int index)
+ {
+-	struct snd_bebob_stream_formation *formation;
++	unsigned int pcm_channels;
++	unsigned int midi_ports;
+ 	struct cmp_connection *conn;
+ 	int err;
+ 
+ 	if (stream == &bebob->tx_stream) {
+-		formation = bebob->tx_stream_formations + index;
++		pcm_channels = bebob->tx_stream_formations[index].pcm;
++		midi_ports = bebob->midi_input_ports;
+ 		conn = &bebob->out_conn;
+ 	} else {
+-		formation = bebob->rx_stream_formations + index;
++		pcm_channels = bebob->rx_stream_formations[index].pcm;
++		midi_ports = bebob->midi_output_ports;
+ 		conn = &bebob->in_conn;
+ 	}
+ 
+-	err = amdtp_am824_set_parameters(stream, rate, formation->pcm,
+-					 formation->midi, false);
++	err = amdtp_am824_set_parameters(stream, rate, pcm_channels, midi_ports, false);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/sound/pci/hda/ideapad_s740_helper.c b/sound/pci/hda/ideapad_s740_helper.c
+new file mode 100644
+index 0000000000000..564b9086e52db
+--- /dev/null
++++ b/sound/pci/hda/ideapad_s740_helper.c
+@@ -0,0 +1,492 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Fixes for Lenovo Ideapad S740, to be included from codec driver */
++
++static const struct hda_verb alc285_ideapad_s740_coefs[] = {
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x10 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0320 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0041 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0041 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001d },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004e },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001d },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004e },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x002a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x002a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0046 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0046 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0044 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0044 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{}
++};
++
++static void alc285_fixup_ideapad_s740_coef(struct hda_codec *codec,
++					   const struct hda_fixup *fix,
++					   int action)
++{
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		snd_hda_add_verbs(codec, alc285_ideapad_s740_coefs);
++		break;
++	}
++}
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 8c6f10cbced32..6d2a4dfcfe436 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2653,7 +2653,7 @@ static void generic_acomp_pin_eld_notify(void *audio_ptr, int port, int dev_id)
+ 	/* skip notification during system suspend (but not in runtime PM);
+ 	 * the state will be updated at resume
+ 	 */
+-	if (snd_power_get_state(codec->card) != SNDRV_CTL_POWER_D0)
++	if (codec->core.dev.power.power_state.event == PM_EVENT_SUSPEND)
+ 		return;
+ 	/* ditto during suspend/resume process itself */
+ 	if (snd_hdac_is_in_pm(&codec->core))
+@@ -2839,7 +2839,7 @@ static void intel_pin_eld_notify(void *audio_ptr, int port, int pipe)
+ 	/* skip notification during system suspend (but not in runtime PM);
+ 	 * the state will be updated at resume
+ 	 */
+-	if (snd_power_get_state(codec->card) != SNDRV_CTL_POWER_D0)
++	if (codec->core.dev.power.power_state.event == PM_EVENT_SUSPEND)
+ 		return;
+ 	/* ditto during suspend/resume process itself */
+ 	if (snd_hdac_is_in_pm(&codec->core))
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8ec57bd351dfe..1fe70f2fe4fe8 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6282,6 +6282,9 @@ static void alc_fixup_thinkpad_acpi(struct hda_codec *codec,
+ /* for alc295_fixup_hp_top_speakers */
+ #include "hp_x360_helper.c"
+ 
++/* for alc285_fixup_ideapad_s740_coef() */
++#include "ideapad_s740_helper.c"
++
+ enum {
+ 	ALC269_FIXUP_GPIO2,
+ 	ALC269_FIXUP_SONY_VAIO,
+@@ -6481,6 +6484,7 @@ enum {
+ 	ALC282_FIXUP_ACER_DISABLE_LINEOUT,
+ 	ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST,
+ 	ALC256_FIXUP_ACER_HEADSET_MIC,
++	ALC285_FIXUP_IDEAPAD_S740_COEF,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7973,6 +7977,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
+ 	},
++	[ALC285_FIXUP_IDEAPAD_S740_COEF] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_ideapad_s740_coef,
++		.chained = true,
++		.chain_id = ALC269_FIXUP_THINKPAD_ACPI,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8320,6 +8330,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
++	SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+diff --git a/sound/pci/rme9652/hdsp.c b/sound/pci/rme9652/hdsp.c
+index cea53a878c360..4aee30db034dd 100644
+--- a/sound/pci/rme9652/hdsp.c
++++ b/sound/pci/rme9652/hdsp.c
+@@ -5321,7 +5321,8 @@ static int snd_hdsp_free(struct hdsp *hdsp)
+ 	if (hdsp->port)
+ 		pci_release_regions(hdsp->pci);
+ 
+-	pci_disable_device(hdsp->pci);
++	if (pci_is_enabled(hdsp->pci))
++		pci_disable_device(hdsp->pci);
+ 	return 0;
+ }
+ 
+diff --git a/sound/pci/rme9652/hdspm.c b/sound/pci/rme9652/hdspm.c
+index 4a1f576dd9cfa..51c3c6a08a1c5 100644
+--- a/sound/pci/rme9652/hdspm.c
++++ b/sound/pci/rme9652/hdspm.c
+@@ -6891,7 +6891,8 @@ static int snd_hdspm_free(struct hdspm * hdspm)
+ 	if (hdspm->port)
+ 		pci_release_regions(hdspm->pci);
+ 
+-	pci_disable_device(hdspm->pci);
++	if (pci_is_enabled(hdspm->pci))
++		pci_disable_device(hdspm->pci);
+ 	return 0;
+ }
+ 
+diff --git a/sound/pci/rme9652/rme9652.c b/sound/pci/rme9652/rme9652.c
+index 7ab10028d9fa1..8def24673f35f 100644
+--- a/sound/pci/rme9652/rme9652.c
++++ b/sound/pci/rme9652/rme9652.c
+@@ -1740,7 +1740,8 @@ static int snd_rme9652_free(struct snd_rme9652 *rme9652)
+ 	if (rme9652->port)
+ 		pci_release_regions(rme9652->pci);
+ 
+-	pci_disable_device(rme9652->pci);
++	if (pci_is_enabled(rme9652->pci))
++		pci_disable_device(rme9652->pci);
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/codecs/rt286.c b/sound/soc/codecs/rt286.c
+index 5fb9653d9131f..eec2dd93ecbb0 100644
+--- a/sound/soc/codecs/rt286.c
++++ b/sound/soc/codecs/rt286.c
+@@ -171,6 +171,9 @@ static bool rt286_readable_register(struct device *dev, unsigned int reg)
+ 	case RT286_PROC_COEF:
+ 	case RT286_SET_AMP_GAIN_ADC_IN1:
+ 	case RT286_SET_AMP_GAIN_ADC_IN2:
++	case RT286_SET_GPIO_MASK:
++	case RT286_SET_GPIO_DIRECTION:
++	case RT286_SET_GPIO_DATA:
+ 	case RT286_SET_POWER(RT286_DAC_OUT1):
+ 	case RT286_SET_POWER(RT286_DAC_OUT2):
+ 	case RT286_SET_POWER(RT286_ADC_IN1):
+@@ -1117,12 +1120,11 @@ static const struct dmi_system_id force_combo_jack_table[] = {
+ 	{ }
+ };
+ 
+-static const struct dmi_system_id dmi_dell_dino[] = {
++static const struct dmi_system_id dmi_dell[] = {
+ 	{
+-		.ident = "Dell Dino",
++		.ident = "Dell",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "XPS 13 9343")
+ 		}
+ 	},
+ 	{ }
+@@ -1133,7 +1135,7 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
+ {
+ 	struct rt286_platform_data *pdata = dev_get_platdata(&i2c->dev);
+ 	struct rt286_priv *rt286;
+-	int i, ret, val;
++	int i, ret, vendor_id;
+ 
+ 	rt286 = devm_kzalloc(&i2c->dev,	sizeof(*rt286),
+ 				GFP_KERNEL);
+@@ -1149,14 +1151,15 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
+ 	}
+ 
+ 	ret = regmap_read(rt286->regmap,
+-		RT286_GET_PARAM(AC_NODE_ROOT, AC_PAR_VENDOR_ID), &val);
++		RT286_GET_PARAM(AC_NODE_ROOT, AC_PAR_VENDOR_ID), &vendor_id);
+ 	if (ret != 0) {
+ 		dev_err(&i2c->dev, "I2C error %d\n", ret);
+ 		return ret;
+ 	}
+-	if (val != RT286_VENDOR_ID && val != RT288_VENDOR_ID) {
++	if (vendor_id != RT286_VENDOR_ID && vendor_id != RT288_VENDOR_ID) {
+ 		dev_err(&i2c->dev,
+-			"Device with ID register %#x is not rt286\n", val);
++			"Device with ID register %#x is not rt286\n",
++			vendor_id);
+ 		return -ENODEV;
+ 	}
+ 
+@@ -1180,8 +1183,8 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
+ 	if (pdata)
+ 		rt286->pdata = *pdata;
+ 
+-	if (dmi_check_system(force_combo_jack_table) ||
+-		dmi_check_system(dmi_dell_dino))
++	if ((vendor_id == RT288_VENDOR_ID && dmi_check_system(dmi_dell)) ||
++		dmi_check_system(force_combo_jack_table))
+ 		rt286->pdata.cbj_en = true;
+ 
+ 	regmap_write(rt286->regmap, RT286_SET_AUDIO_POWER, AC_PWRST_D3);
+@@ -1220,7 +1223,7 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
+ 	regmap_update_bits(rt286->regmap, RT286_DEPOP_CTRL3, 0xf777, 0x4737);
+ 	regmap_update_bits(rt286->regmap, RT286_DEPOP_CTRL4, 0x00ff, 0x003f);
+ 
+-	if (dmi_check_system(dmi_dell_dino)) {
++	if (vendor_id == RT288_VENDOR_ID && dmi_check_system(dmi_dell)) {
+ 		regmap_update_bits(rt286->regmap,
+ 			RT286_SET_GPIO_MASK, 0x40, 0x40);
+ 		regmap_update_bits(rt286->regmap,
+diff --git a/sound/soc/codecs/rt5670.c b/sound/soc/codecs/rt5670.c
+index a0c8f58d729b3..47ce074289ca9 100644
+--- a/sound/soc/codecs/rt5670.c
++++ b/sound/soc/codecs/rt5670.c
+@@ -2908,6 +2908,18 @@ static const struct dmi_system_id dmi_platform_intel_quirks[] = {
+ 						 RT5670_GPIO1_IS_IRQ |
+ 						 RT5670_JD_MODE3),
+ 	},
++	{
++		.callback = rt5670_quirk_cb,
++		.ident = "Dell Venue 10 Pro 5055",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Venue 10 Pro 5055"),
++		},
++		.driver_data = (unsigned long *)(RT5670_DMIC_EN |
++						 RT5670_DMIC2_INR |
++						 RT5670_GPIO1_IS_IRQ |
++						 RT5670_JD_MODE1),
++	},
+ 	{
+ 		.callback = rt5670_quirk_cb,
+ 		.ident = "Aegex 10 tablet (RU2)",
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index d5812e73eb63f..1ef0464249d1b 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -478,6 +478,9 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100TAF"),
+ 		},
+ 		.driver_data = (void *)(BYT_RT5640_IN1_MAP |
++					BYT_RT5640_JD_SRC_JD2_IN4N |
++					BYT_RT5640_OVCD_TH_2000UA |
++					BYT_RT5640_OVCD_SF_0P75 |
+ 					BYT_RT5640_MONO_SPEAKER |
+ 					BYT_RT5640_DIFF_MIC |
+ 					BYT_RT5640_SSP0_AIF2 |
+@@ -511,6 +514,23 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{
++		/* Chuwi Hi8 (CWI509) */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
++			DMI_MATCH(DMI_BOARD_NAME, "BYT-PA03C"),
++			DMI_MATCH(DMI_SYS_VENDOR, "ilife"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "S806"),
++		},
++		.driver_data = (void *)(BYT_RT5640_IN1_MAP |
++					BYT_RT5640_JD_SRC_JD2_IN4N |
++					BYT_RT5640_OVCD_TH_2000UA |
++					BYT_RT5640_OVCD_SF_0P75 |
++					BYT_RT5640_MONO_SPEAKER |
++					BYT_RT5640_DIFF_MIC |
++					BYT_RT5640_SSP0_AIF1 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Circuitco"),
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 1d7677376e742..9dc982c2c7760 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -187,6 +187,17 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 					SOF_RT715_DAI_ID_FIX |
+ 					SOF_SDW_FOUR_SPK),
+ 	},
++	/* AlderLake devices */
++	{
++		.callback = sof_sdw_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Alder Lake Client Platform"),
++		},
++		.driver_data = (void *)(SOF_RT711_JD_SRC_JD1 |
++					SOF_SDW_TGL_HDMI |
++					SOF_SDW_PCH_DMIC),
++	},
+ 	{}
+ };
+ 
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index 6e670b3e92a00..289928d4c0c99 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -1428,8 +1428,75 @@ static int rsnd_hw_params(struct snd_soc_component *component,
+ 		}
+ 		if (io->converted_chan)
+ 			dev_dbg(dev, "convert channels = %d\n", io->converted_chan);
+-		if (io->converted_rate)
++		if (io->converted_rate) {
++			/*
++			 * SRC supports convert rates from params_rate(hw_params)/k_down
++			 * to params_rate(hw_params)*k_up, where k_up is always 6, and
++			 * k_down depends on number of channels and SRC unit.
++			 * So all SRC units can upsample audio up to 6 times regardless
++			 * its number of channels. And all SRC units can downsample
++			 * 2 channel audio up to 6 times too.
++			 */
++			int k_up = 6;
++			int k_down = 6;
++			int channel;
++			struct rsnd_mod *src_mod = rsnd_io_to_mod_src(io);
++
+ 			dev_dbg(dev, "convert rate     = %d\n", io->converted_rate);
++
++			channel = io->converted_chan ? io->converted_chan :
++				  params_channels(hw_params);
++
++			switch (rsnd_mod_id(src_mod)) {
++			/*
++			 * SRC0 can downsample 4, 6 and 8 channel audio up to 4 times.
++			 * SRC1, SRC3 and SRC4 can downsample 4 channel audio
++			 * up to 4 times.
++			 * SRC1, SRC3 and SRC4 can downsample 6 and 8 channel audio
++			 * no more than twice.
++			 */
++			case 1:
++			case 3:
++			case 4:
++				if (channel > 4) {
++					k_down = 2;
++					break;
++				}
++				fallthrough;
++			case 0:
++				if (channel > 2)
++					k_down = 4;
++				break;
++
++			/* Other SRC units do not support more than 2 channels */
++			default:
++				if (channel > 2)
++					return -EINVAL;
++			}
++
++			if (params_rate(hw_params) > io->converted_rate * k_down) {
++				hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->min =
++					io->converted_rate * k_down;
++				hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->max =
++					io->converted_rate * k_down;
++				hw_params->cmask |= SNDRV_PCM_HW_PARAM_RATE;
++			} else if (params_rate(hw_params) * k_up < io->converted_rate) {
++				hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->min =
++					(io->converted_rate + k_up - 1) / k_up;
++				hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->max =
++					(io->converted_rate + k_up - 1) / k_up;
++				hw_params->cmask |= SNDRV_PCM_HW_PARAM_RATE;
++			}
++
++			/*
++			 * TBD: Max SRC input and output rates also depend on number
++			 * of channels and SRC unit:
++			 * SRC1, SRC3 and SRC4 do not support more than 128kHz
++			 * for 6 channel and 96kHz for 8 channel audio.
++			 * Perhaps this function should return EINVAL if the input or
++			 * the output rate exceeds the limitation.
++			 */
++		}
+ 	}
+ 
+ 	return rsnd_dai_call(hw_params, io, substream, hw_params);
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index d0ded427a8363..042207c116514 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -507,10 +507,15 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
+ 			 struct rsnd_priv *priv)
+ {
+ 	struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
++	int ret;
+ 
+ 	if (!rsnd_ssi_is_run_mods(mod, io))
+ 		return 0;
+ 
++	ret = rsnd_ssi_master_clk_start(mod, io);
++	if (ret < 0)
++		return ret;
++
+ 	ssi->usrcnt++;
+ 
+ 	rsnd_mod_power_on(mod);
+@@ -792,7 +797,6 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod,
+ 						       SSI_SYS_STATUS(i * 2),
+ 						       0xf << (id * 4));
+ 					stop = true;
+-					break;
+ 				}
+ 			}
+ 			break;
+@@ -810,7 +814,6 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod,
+ 						SSI_SYS_STATUS((i * 2) + 1),
+ 						0xf << 4);
+ 					stop = true;
+-					break;
+ 				}
+ 			}
+ 			break;
+@@ -1060,13 +1063,6 @@ static int rsnd_ssi_pio_pointer(struct rsnd_mod *mod,
+ 	return 0;
+ }
+ 
+-static int rsnd_ssi_prepare(struct rsnd_mod *mod,
+-			    struct rsnd_dai_stream *io,
+-			    struct rsnd_priv *priv)
+-{
+-	return rsnd_ssi_master_clk_start(mod, io);
+-}
+-
+ static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
+ 	.name		= SSI_NAME,
+ 	.probe		= rsnd_ssi_common_probe,
+@@ -1079,7 +1075,6 @@ static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
+ 	.pointer	= rsnd_ssi_pio_pointer,
+ 	.pcm_new	= rsnd_ssi_pcm_new,
+ 	.hw_params	= rsnd_ssi_hw_params,
+-	.prepare	= rsnd_ssi_prepare,
+ 	.get_status	= rsnd_ssi_get_status,
+ };
+ 
+@@ -1166,7 +1161,6 @@ static struct rsnd_mod_ops rsnd_ssi_dma_ops = {
+ 	.pcm_new	= rsnd_ssi_pcm_new,
+ 	.fallback	= rsnd_ssi_fallback,
+ 	.hw_params	= rsnd_ssi_hw_params,
+-	.prepare	= rsnd_ssi_prepare,
+ 	.get_status	= rsnd_ssi_get_status,
+ };
+ 
+diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
+index 06cd709a3453a..86c31c787fb91 100644
+--- a/tools/lib/bpf/ringbuf.c
++++ b/tools/lib/bpf/ringbuf.c
+@@ -202,9 +202,11 @@ static inline int roundup_len(__u32 len)
+ 	return (len + 7) / 8 * 8;
+ }
+ 
+-static int ringbuf_process_ring(struct ring* r)
++static int64_t ringbuf_process_ring(struct ring* r)
+ {
+-	int *len_ptr, len, err, cnt = 0;
++	int *len_ptr, len, err;
++	/* 64-bit to avoid overflow in case of extreme application behavior */
++	int64_t cnt = 0;
+ 	unsigned long cons_pos, prod_pos;
+ 	bool got_new_data;
+ 	void *sample;
+@@ -244,12 +246,14 @@ done:
+ }
+ 
+ /* Consume available ring buffer(s) data without event polling.
+- * Returns number of records consumed across all registered ring buffers, or
+- * negative number if any of the callbacks return error.
++ * Returns number of records consumed across all registered ring buffers (or
++ * INT_MAX, whichever is less), or negative number if any of the callbacks
++ * return error.
+  */
+ int ring_buffer__consume(struct ring_buffer *rb)
+ {
+-	int i, err, res = 0;
++	int64_t err, res = 0;
++	int i;
+ 
+ 	for (i = 0; i < rb->ring_cnt; i++) {
+ 		struct ring *ring = &rb->rings[i];
+@@ -259,18 +263,24 @@ int ring_buffer__consume(struct ring_buffer *rb)
+ 			return err;
+ 		res += err;
+ 	}
++	if (res > INT_MAX)
++		return INT_MAX;
+ 	return res;
+ }
+ 
+ /* Poll for available data and consume records, if any are available.
+- * Returns number of records consumed, or negative number, if any of the
+- * registered callbacks returned error.
++ * Returns number of records consumed (or INT_MAX, whichever is less), or
++ * negative number, if any of the registered callbacks returned error.
+  */
+ int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
+ {
+-	int i, cnt, err, res = 0;
++	int i, cnt;
++	int64_t err, res = 0;
+ 
+ 	cnt = epoll_wait(rb->epoll_fd, rb->events, rb->ring_cnt, timeout_ms);
++	if (cnt < 0)
++		return -errno;
++
+ 	for (i = 0; i < cnt; i++) {
+ 		__u32 ring_id = rb->events[i].data.fd;
+ 		struct ring *ring = &rb->rings[ring_id];
+@@ -280,5 +290,7 @@ int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
+ 			return err;
+ 		res += err;
+ 	}
+-	return cnt < 0 ? -errno : res;
++	if (res > INT_MAX)
++		return INT_MAX;
++	return res;
+ }
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index ce8516e4de34f..2abbd75fbf2e3 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -530,6 +530,7 @@ ifndef NO_LIBELF
+       ifdef LIBBPF_DYNAMIC
+         ifeq ($(feature-libbpf), 1)
+           EXTLIBS += -lbpf
++          $(call detected,CONFIG_LIBBPF_DYNAMIC)
+         else
+           dummy := $(error Error: No libbpf devel library found, please install libbpf-devel);
+         endif
+diff --git a/tools/perf/util/Build b/tools/perf/util/Build
+index e2563d0154eb6..0cf27354aa451 100644
+--- a/tools/perf/util/Build
++++ b/tools/perf/util/Build
+@@ -140,7 +140,14 @@ perf-$(CONFIG_LIBELF) += symbol-elf.o
+ perf-$(CONFIG_LIBELF) += probe-file.o
+ perf-$(CONFIG_LIBELF) += probe-event.o
+ 
++ifdef CONFIG_LIBBPF_DYNAMIC
++  hashmap := 1
++endif
+ ifndef CONFIG_LIBBPF
++  hashmap := 1
++endif
++
++ifdef hashmap
+ perf-y += hashmap.o
+ endif
+ 
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh
+index 6f3a70df63bc6..e00435753008a 100644
+--- a/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh
+@@ -120,12 +120,13 @@ __mirror_gre_test()
+ 	sleep 5
+ 
+ 	for ((i = 0; i < count; ++i)); do
++		local sip=$(mirror_gre_ipv6_addr 1 $i)::1
+ 		local dip=$(mirror_gre_ipv6_addr 1 $i)::2
+ 		local htun=h3-gt6-$i
+ 		local message
+ 
+ 		icmp6_capture_install $htun
+-		mirror_test v$h1 "" $dip $htun 100 10
++		mirror_test v$h1 $sip $dip $htun 100 10
+ 		icmp6_capture_uninstall $htun
+ 	done
+ }
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh b/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh
+index b0cb1aaffddab..33ddd01689bee 100644
+--- a/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh
+@@ -507,8 +507,8 @@ do_red_test()
+ 	check_err $? "backlog $backlog / $limit Got $pct% marked packets, expected == 0."
+ 	local diff=$((limit - backlog))
+ 	pct=$((100 * diff / limit))
+-	((0 <= pct && pct <= 5))
+-	check_err $? "backlog $backlog / $limit expected <= 5% distance"
++	((0 <= pct && pct <= 10))
++	check_err $? "backlog $backlog / $limit expected <= 10% distance"
+ 	log_test "TC $((vlan - 10)): RED backlog > limit"
+ 
+ 	stop_traffic
+diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
+index be17462fe1467..0af84ad48aa77 100644
+--- a/tools/testing/selftests/lib.mk
++++ b/tools/testing/selftests/lib.mk
+@@ -1,6 +1,10 @@
+ # This mimics the top-level Makefile. We do it explicitly here so that this
+ # Makefile can operate with or without the kbuild infrastructure.
++ifneq ($(LLVM),)
++CC := clang
++else
+ CC := $(CROSS_COMPILE)gcc
++endif
+ 
+ ifeq (0,$(MAKELEVEL))
+     ifeq ($(OUTPUT),)
+diff --git a/tools/testing/selftests/net/forwarding/mirror_lib.sh b/tools/testing/selftests/net/forwarding/mirror_lib.sh
+index 13db1cb50e57b..6406cd76a19d8 100644
+--- a/tools/testing/selftests/net/forwarding/mirror_lib.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_lib.sh
+@@ -20,6 +20,13 @@ mirror_uninstall()
+ 	tc filter del dev $swp1 $direction pref 1000
+ }
+ 
++is_ipv6()
++{
++	local addr=$1; shift
++
++	[[ -z ${addr//[0-9a-fA-F:]/} ]]
++}
++
+ mirror_test()
+ {
+ 	local vrf_name=$1; shift
+@@ -29,9 +36,17 @@ mirror_test()
+ 	local pref=$1; shift
+ 	local expect=$1; shift
+ 
++	if is_ipv6 $dip; then
++		local proto=-6
++		local type="icmp6 type=128" # Echo request.
++	else
++		local proto=
++		local type="icmp echoreq"
++	fi
++
+ 	local t0=$(tc_rule_stats_get $dev $pref)
+-	$MZ $vrf_name ${sip:+-A $sip} -B $dip -a own -b bc -q \
+-	    -c 10 -d 100msec -t icmp type=8
++	$MZ $proto $vrf_name ${sip:+-A $sip} -B $dip -a own -b bc -q \
++	    -c 10 -d 100msec -t $type
+ 	sleep 0.5
+ 	local t1=$(tc_rule_stats_get $dev $pref)
+ 	local delta=$((t1 - t0))
+diff --git a/tools/testing/selftests/powerpc/security/entry_flush.c b/tools/testing/selftests/powerpc/security/entry_flush.c
+index 78cf914fa3217..68ce377b205e9 100644
+--- a/tools/testing/selftests/powerpc/security/entry_flush.c
++++ b/tools/testing/selftests/powerpc/security/entry_flush.c
+@@ -53,7 +53,7 @@ int entry_flush_test(void)
+ 
+ 	entry_flush = entry_flush_orig;
+ 
+-	fd = perf_event_open_counter(PERF_TYPE_RAW, /* L1d miss */ 0x400f0, -1);
++	fd = perf_event_open_counter(PERF_TYPE_HW_CACHE, PERF_L1D_READ_MISS_CONFIG, -1);
+ 	FAIL_IF(fd < 0);
+ 
+ 	p = (char *)memalign(zero_size, CACHELINE_SIZE);
+diff --git a/tools/testing/selftests/powerpc/security/flush_utils.h b/tools/testing/selftests/powerpc/security/flush_utils.h
+index 07a5eb3014669..7a3d60292916e 100644
+--- a/tools/testing/selftests/powerpc/security/flush_utils.h
++++ b/tools/testing/selftests/powerpc/security/flush_utils.h
+@@ -9,6 +9,10 @@
+ 
+ #define CACHELINE_SIZE 128
+ 
++#define PERF_L1D_READ_MISS_CONFIG	((PERF_COUNT_HW_CACHE_L1D) | 		\
++					(PERF_COUNT_HW_CACHE_OP_READ << 8) |	\
++					(PERF_COUNT_HW_CACHE_RESULT_MISS << 16))
++
+ void syscall_loop(char *p, unsigned long iterations,
+ 		  unsigned long zero_size);
+ 
+diff --git a/tools/testing/selftests/powerpc/security/rfi_flush.c b/tools/testing/selftests/powerpc/security/rfi_flush.c
+index 7565fd786640f..f73484a6470fa 100644
+--- a/tools/testing/selftests/powerpc/security/rfi_flush.c
++++ b/tools/testing/selftests/powerpc/security/rfi_flush.c
+@@ -54,7 +54,7 @@ int rfi_flush_test(void)
+ 
+ 	rfi_flush = rfi_flush_orig;
+ 
+-	fd = perf_event_open_counter(PERF_TYPE_RAW, /* L1d miss */ 0x400f0, -1);
++	fd = perf_event_open_counter(PERF_TYPE_HW_CACHE, PERF_L1D_READ_MISS_CONFIG, -1);
+ 	FAIL_IF(fd < 0);
+ 
+ 	p = (char *)memalign(zero_size, CACHELINE_SIZE);
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 78bf3f5492143..f446c36f58003 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2717,8 +2717,8 @@ static void grow_halt_poll_ns(struct kvm_vcpu *vcpu)
+ 	if (val < grow_start)
+ 		val = grow_start;
+ 
+-	if (val > halt_poll_ns)
+-		val = halt_poll_ns;
++	if (val > vcpu->kvm->max_halt_poll_ns)
++		val = vcpu->kvm->max_halt_poll_ns;
+ 
+ 	vcpu->halt_poll_ns = val;
+ out:
+@@ -2797,7 +2797,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
+ 				goto out;
+ 			}
+ 			poll_end = cur = ktime_get();
+-		} while (single_task_running() && ktime_before(cur, stop));
++		} while (single_task_running() && !need_resched() &&
++			 ktime_before(cur, stop));
+ 	}
+ 
+ 	prepare_to_rcuwait(&vcpu->wait);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-05-22 16:59 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-05-22 16:59 UTC (permalink / raw
  To: gentoo-commits

commit:     f92df8dfc6b40639117994692438712a3886dbc0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat May 22 16:59:19 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat May 22 16:59:19 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f92df8df

Linux patch 5.10.39

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1038_linux-5.10.39.patch | 1921 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1925 insertions(+)

diff --git a/0000_README b/0000_README
index f072cf1..824db67 100644
--- a/0000_README
+++ b/0000_README
@@ -195,6 +195,10 @@ Patch:  1037_linux-5.10.38.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.38
 
+Patch:  1038_linux-5.10.39.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.39
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1038_linux-5.10.39.patch b/1038_linux-5.10.39.patch
new file mode 100644
index 0000000..5fc9f40
--- /dev/null
+++ b/1038_linux-5.10.39.patch
@@ -0,0 +1,1921 @@
+diff --git a/Documentation/sphinx/parse-headers.pl b/Documentation/sphinx/parse-headers.pl
+index 1910079f984fb..b063f2f1cfb25 100755
+--- a/Documentation/sphinx/parse-headers.pl
++++ b/Documentation/sphinx/parse-headers.pl
+@@ -1,4 +1,4 @@
+-#!/usr/bin/perl
++#!/usr/bin/env perl
+ use strict;
+ use Text::Tabs;
+ use Getopt::Long;
+diff --git a/Documentation/target/tcm_mod_builder.py b/Documentation/target/tcm_mod_builder.py
+index 1548d84204996..54492aa813b9b 100755
+--- a/Documentation/target/tcm_mod_builder.py
++++ b/Documentation/target/tcm_mod_builder.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python
+ # The TCM v4 multi-protocol fabric module generation script for drivers/target/$NEW_MOD
+ #
+ # Copyright (c) 2010 Rising Tide Systems
+diff --git a/Documentation/trace/postprocess/decode_msr.py b/Documentation/trace/postprocess/decode_msr.py
+index 0ab40e0db5809..aa9cc7abd5c2b 100644
+--- a/Documentation/trace/postprocess/decode_msr.py
++++ b/Documentation/trace/postprocess/decode_msr.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python
+ # add symbolic names to read_msr / write_msr in trace
+ # decode_msr msr-index.h < trace
+ import sys
+diff --git a/Documentation/trace/postprocess/trace-pagealloc-postprocess.pl b/Documentation/trace/postprocess/trace-pagealloc-postprocess.pl
+index 0a120aae33ce5..b9b7d80c2f9d2 100644
+--- a/Documentation/trace/postprocess/trace-pagealloc-postprocess.pl
++++ b/Documentation/trace/postprocess/trace-pagealloc-postprocess.pl
+@@ -1,4 +1,4 @@
+-#!/usr/bin/perl
++#!/usr/bin/env perl
+ # This is a POC (proof of concept or piece of crap, take your pick) for reading the
+ # text representation of trace output related to page allocation. It makes an attempt
+ # to extract some high-level information on what is going on. The accuracy of the parser
+diff --git a/Documentation/trace/postprocess/trace-vmscan-postprocess.pl b/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
+index 995da15b16cab..2f4e39875fb39 100644
+--- a/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
++++ b/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
+@@ -1,4 +1,4 @@
+-#!/usr/bin/perl
++#!/usr/bin/env perl
+ # This is a POC for reading the text representation of trace output related to
+ # page reclaim. It makes an attempt to extract some high-level information on
+ # what is going on. The accuracy of the parser may vary
+diff --git a/Makefile b/Makefile
+index 6e4e536a0d20f..38b703568da45 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 38
++SUBLEVEL = 39
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
+index be8050b0c3dfb..70993af22d80c 100644
+--- a/arch/arm/kernel/asm-offsets.c
++++ b/arch/arm/kernel/asm-offsets.c
+@@ -24,6 +24,7 @@
+ #include <asm/vdso_datapage.h>
+ #include <asm/hardware/cache-l2x0.h>
+ #include <linux/kbuild.h>
++#include <linux/arm-smccc.h>
+ #include "signal.h"
+ 
+ /*
+@@ -148,6 +149,8 @@ int main(void)
+   DEFINE(SLEEP_SAVE_SP_PHYS,	offsetof(struct sleep_save_sp, save_ptr_stash_phys));
+   DEFINE(SLEEP_SAVE_SP_VIRT,	offsetof(struct sleep_save_sp, save_ptr_stash));
+ #endif
++  DEFINE(ARM_SMCCC_QUIRK_ID_OFFS,	offsetof(struct arm_smccc_quirk, id));
++  DEFINE(ARM_SMCCC_QUIRK_STATE_OFFS,	offsetof(struct arm_smccc_quirk, state));
+   BLANK();
+   DEFINE(DMA_BIDIRECTIONAL,	DMA_BIDIRECTIONAL);
+   DEFINE(DMA_TO_DEVICE,		DMA_TO_DEVICE);
+diff --git a/arch/arm/kernel/smccc-call.S b/arch/arm/kernel/smccc-call.S
+index 00664c78facab..931df62a78312 100644
+--- a/arch/arm/kernel/smccc-call.S
++++ b/arch/arm/kernel/smccc-call.S
+@@ -3,7 +3,9 @@
+  * Copyright (c) 2015, Linaro Limited
+  */
+ #include <linux/linkage.h>
++#include <linux/arm-smccc.h>
+ 
++#include <asm/asm-offsets.h>
+ #include <asm/opcodes-sec.h>
+ #include <asm/opcodes-virt.h>
+ #include <asm/unwind.h>
+@@ -27,7 +29,14 @@ UNWIND(	.fnstart)
+ UNWIND(	.save	{r4-r7})
+ 	ldm	r12, {r4-r7}
+ 	\instr
+-	pop	{r4-r7}
++	ldr	r4, [sp, #36]
++	cmp	r4, #0
++	beq	1f			// No quirk structure
++	ldr     r5, [r4, #ARM_SMCCC_QUIRK_ID_OFFS]
++	cmp     r5, #ARM_SMCCC_QUIRK_QCOM_A6
++	bne	1f			// No quirk present
++	str	r6, [r4, #ARM_SMCCC_QUIRK_STATE_OFFS]
++1:	pop	{r4-r7}
+ 	ldr	r12, [sp, #(4 * 4)]
+ 	stm	r12, {r0-r3}
+ 	bx	lr
+diff --git a/arch/arm/kernel/suspend.c b/arch/arm/kernel/suspend.c
+index 24bd20564be77..43f0a3ebf3909 100644
+--- a/arch/arm/kernel/suspend.c
++++ b/arch/arm/kernel/suspend.c
+@@ -1,4 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0
++#include <linux/ftrace.h>
+ #include <linux/init.h>
+ #include <linux/slab.h>
+ #include <linux/mm_types.h>
+@@ -25,6 +26,13 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
+ 	if (!idmap_pgd)
+ 		return -EINVAL;
+ 
++	/*
++	 * Function graph tracer state gets incosistent when the kernel
++	 * calls functions that never return (aka suspend finishers) hence
++	 * disable graph tracing during their execution.
++	 */
++	pause_graph_tracing();
++
+ 	/*
+ 	 * Provide a temporary page table with an identity mapping for
+ 	 * the MMU-enable code, required for resuming.  On successful
+@@ -32,6 +40,9 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
+ 	 * back to the correct page tables.
+ 	 */
+ 	ret = __cpu_suspend(arg, fn, __mpidr);
++
++	unpause_graph_tracing();
++
+ 	if (ret == 0) {
+ 		cpu_switch_mm(mm->pgd, mm);
+ 		local_flush_bp_all();
+@@ -45,7 +56,13 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
+ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
+ {
+ 	u32 __mpidr = cpu_logical_map(smp_processor_id());
+-	return __cpu_suspend(arg, fn, __mpidr);
++	int ret;
++
++	pause_graph_tracing();
++	ret = __cpu_suspend(arg, fn, __mpidr);
++	unpause_graph_tracing();
++
++	return ret;
+ }
+ #define	idmap_pgd	NULL
+ #endif
+diff --git a/arch/ia64/scripts/unwcheck.py b/arch/ia64/scripts/unwcheck.py
+index c55276e31b6b6..bfd1b671e35fc 100644
+--- a/arch/ia64/scripts/unwcheck.py
++++ b/arch/ia64/scripts/unwcheck.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # Usage: unwcheck.py FILE
+diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
+index 845002cc2e571..04dad33800418 100644
+--- a/arch/riscv/include/asm/ftrace.h
++++ b/arch/riscv/include/asm/ftrace.h
+@@ -13,9 +13,19 @@
+ #endif
+ #define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
+ 
++/*
++ * Clang prior to 13 had "mcount" instead of "_mcount":
++ * https://reviews.llvm.org/D98881
++ */
++#if defined(CONFIG_CC_IS_GCC) || CONFIG_CLANG_VERSION >= 130000
++#define MCOUNT_NAME _mcount
++#else
++#define MCOUNT_NAME mcount
++#endif
++
+ #define ARCH_SUPPORTS_FTRACE_OPS 1
+ #ifndef __ASSEMBLY__
+-void _mcount(void);
++void MCOUNT_NAME(void);
+ static inline unsigned long ftrace_call_adjust(unsigned long addr)
+ {
+ 	return addr;
+@@ -36,7 +46,7 @@ struct dyn_arch_ftrace {
+  * both auipc and jalr at the same time.
+  */
+ 
+-#define MCOUNT_ADDR		((unsigned long)_mcount)
++#define MCOUNT_ADDR		((unsigned long)MCOUNT_NAME)
+ #define JALR_SIGN_MASK		(0x00000800)
+ #define JALR_OFFSET_MASK	(0x00000fff)
+ #define AUIPC_OFFSET_MASK	(0xfffff000)
+diff --git a/arch/riscv/kernel/mcount.S b/arch/riscv/kernel/mcount.S
+index 8a5593ff9ff3d..6d462681c9c02 100644
+--- a/arch/riscv/kernel/mcount.S
++++ b/arch/riscv/kernel/mcount.S
+@@ -47,8 +47,8 @@
+ 
+ ENTRY(ftrace_stub)
+ #ifdef CONFIG_DYNAMIC_FTRACE
+-       .global _mcount
+-       .set    _mcount, ftrace_stub
++       .global MCOUNT_NAME
++       .set    MCOUNT_NAME, ftrace_stub
+ #endif
+ 	ret
+ ENDPROC(ftrace_stub)
+@@ -78,7 +78,7 @@ ENDPROC(return_to_handler)
+ #endif
+ 
+ #ifndef CONFIG_DYNAMIC_FTRACE
+-ENTRY(_mcount)
++ENTRY(MCOUNT_NAME)
+ 	la	t4, ftrace_stub
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ 	la	t0, ftrace_graph_return
+@@ -124,6 +124,6 @@ do_trace:
+ 	jalr	t5
+ 	RESTORE_ABI_STATE
+ 	ret
+-ENDPROC(_mcount)
++ENDPROC(MCOUNT_NAME)
+ #endif
+-EXPORT_SYMBOL(_mcount)
++EXPORT_SYMBOL(MCOUNT_NAME)
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index 71a315e73cbe7..ca2b40dfd24b8 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -41,11 +41,10 @@ KASAN_SANITIZE := n
+ $(obj)/vdso.o: $(obj)/vdso.so
+ 
+ # link rule for the .so file, .lds has to be first
+-SYSCFLAGS_vdso.so.dbg = $(c_flags)
+ $(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) FORCE
+ 	$(call if_changed,vdsold)
+-SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
+-	-Wl,--build-id=sha1 -Wl,--hash-style=both
++LDFLAGS_vdso.so.dbg = -shared -s -soname=linux-vdso.so.1 \
++	--build-id=sha1 --hash-style=both --eh-frame-hdr
+ 
+ # We also create a special relocatable object that should mirror the symbol
+ # table and layout of the linked DSO. With ld --just-symbols we can then
+@@ -60,13 +59,10 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
+ 
+ # actual build commands
+ # The DSO images are built using a special linker script
+-# Add -lgcc so rv32 gets static muldi3 and lshrdi3 definitions.
+ # Make sure only to export the intended __vdso_xxx symbol offsets.
+ quiet_cmd_vdsold = VDSOLD  $@
+-      cmd_vdsold = $(CC) $(KBUILD_CFLAGS) $(call cc-option, -no-pie) -nostdlib -nostartfiles $(SYSCFLAGS_$(@F)) \
+-                           -Wl,-T,$(filter-out FORCE,$^) -o $@.tmp && \
+-                   $(CROSS_COMPILE)objcopy \
+-                           $(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@ && \
++      cmd_vdsold = $(LD) $(ld_flags) -T $(filter-out FORCE,$^) -o $@.tmp && \
++                   $(OBJCOPY) $(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@ && \
+                    rm $@.tmp
+ 
+ # Extracts symbol offsets from the VDSO, converting them into an assembly file
+diff --git a/arch/um/Kconfig.debug b/arch/um/Kconfig.debug
+index 315d368e63adc..1dfb2959c73b8 100644
+--- a/arch/um/Kconfig.debug
++++ b/arch/um/Kconfig.debug
+@@ -17,6 +17,7 @@ config GCOV
+ 	bool "Enable gcov support"
+ 	depends on DEBUG_INFO
+ 	depends on !KCOV
++	depends on !MODULES
+ 	help
+ 	  This option allows developers to retrieve coverage data from a UML
+ 	  session.
+diff --git a/arch/um/kernel/Makefile b/arch/um/kernel/Makefile
+index 5aa882011e041..e698e0c7dbdca 100644
+--- a/arch/um/kernel/Makefile
++++ b/arch/um/kernel/Makefile
+@@ -21,7 +21,6 @@ obj-y = config.o exec.o exitcode.o irq.o ksyms.o mem.o \
+ 
+ obj-$(CONFIG_BLK_DEV_INITRD) += initrd.o
+ obj-$(CONFIG_GPROF)	+= gprof_syms.o
+-obj-$(CONFIG_GCOV)	+= gmon_syms.o
+ obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+ obj-$(CONFIG_STACKTRACE) += stacktrace.o
+ 
+diff --git a/arch/um/kernel/dyn.lds.S b/arch/um/kernel/dyn.lds.S
+index dacbfabf66d8e..2f2a8ce92f1ee 100644
+--- a/arch/um/kernel/dyn.lds.S
++++ b/arch/um/kernel/dyn.lds.S
+@@ -6,6 +6,12 @@ OUTPUT_ARCH(ELF_ARCH)
+ ENTRY(_start)
+ jiffies = jiffies_64;
+ 
++VERSION {
++  {
++    local: *;
++  };
++}
++
+ SECTIONS
+ {
+   PROVIDE (__executable_start = START);
+diff --git a/arch/um/kernel/gmon_syms.c b/arch/um/kernel/gmon_syms.c
+deleted file mode 100644
+index 9361a8eb9bf1a..0000000000000
+--- a/arch/um/kernel/gmon_syms.c
++++ /dev/null
+@@ -1,16 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Copyright (C) 2001 - 2007 Jeff Dike (jdike@{addtoit,linux.intel}.com)
+- */
+-
+-#include <linux/module.h>
+-
+-extern void __bb_init_func(void *)  __attribute__((weak));
+-EXPORT_SYMBOL(__bb_init_func);
+-
+-extern void __gcov_init(void *)  __attribute__((weak));
+-EXPORT_SYMBOL(__gcov_init);
+-extern void __gcov_merge_add(void *, unsigned int)  __attribute__((weak));
+-EXPORT_SYMBOL(__gcov_merge_add);
+-extern void __gcov_exit(void)  __attribute__((weak));
+-EXPORT_SYMBOL(__gcov_exit);
+diff --git a/arch/um/kernel/uml.lds.S b/arch/um/kernel/uml.lds.S
+index 45d957d7004ca..7a8e2b123e29c 100644
+--- a/arch/um/kernel/uml.lds.S
++++ b/arch/um/kernel/uml.lds.S
+@@ -7,6 +7,12 @@ OUTPUT_ARCH(ELF_ARCH)
+ ENTRY(_start)
+ jiffies = jiffies_64;
+ 
++VERSION {
++  {
++    local: *;
++  };
++}
++
+ SECTIONS
+ {
+   /* This must contain the right address - not quite the default ELF one.*/
+diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
+index fee8b9c0520c9..9009393f44c78 100644
+--- a/arch/x86/lib/msr-smp.c
++++ b/arch/x86/lib/msr-smp.c
+@@ -253,7 +253,7 @@ static void __wrmsr_safe_regs_on_cpu(void *info)
+ 	rv->err = wrmsr_safe_regs(rv->regs);
+ }
+ 
+-int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 *regs)
++int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
+ {
+ 	int err;
+ 	struct msr_regs_info rv;
+@@ -266,7 +266,7 @@ int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 *regs)
+ }
+ EXPORT_SYMBOL(rdmsr_safe_regs_on_cpu);
+ 
+-int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 *regs)
++int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
+ {
+ 	int err;
+ 	struct msr_regs_info rv;
+diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
+index 08d71dafa0015..58c8cc8fe0e11 100644
+--- a/drivers/dma/dw-edma/dw-edma-core.c
++++ b/drivers/dma/dw-edma/dw-edma-core.c
+@@ -937,22 +937,21 @@ int dw_edma_remove(struct dw_edma_chip *chip)
+ 	/* Power management */
+ 	pm_runtime_disable(dev);
+ 
++	/* Deregister eDMA device */
++	dma_async_device_unregister(&dw->wr_edma);
+ 	list_for_each_entry_safe(chan, _chan, &dw->wr_edma.channels,
+ 				 vc.chan.device_node) {
+-		list_del(&chan->vc.chan.device_node);
+ 		tasklet_kill(&chan->vc.task);
++		list_del(&chan->vc.chan.device_node);
+ 	}
+ 
++	dma_async_device_unregister(&dw->rd_edma);
+ 	list_for_each_entry_safe(chan, _chan, &dw->rd_edma.channels,
+ 				 vc.chan.device_node) {
+-		list_del(&chan->vc.chan.device_node);
+ 		tasklet_kill(&chan->vc.task);
++		list_del(&chan->vc.chan.device_node);
+ 	}
+ 
+-	/* Deregister eDMA device */
+-	dma_async_device_unregister(&dw->wr_edma);
+-	dma_async_device_unregister(&dw->rd_edma);
+-
+ 	/* Turn debugfs off */
+ 	dw_edma_v0_core_debugfs_off();
+ 
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index 863f059bc498a..6f11714ce0239 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -1407,6 +1407,20 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
+ 			.no_edge_events_on_boot = true,
+ 		},
+ 	},
++	{
++		/*
++		 * The Dell Venue 10 Pro 5055, with Bay Trail SoC + TI PMIC uses an
++		 * external embedded-controller connected via I2C + an ACPI GPIO
++		 * event handler on INT33FFC:02 pin 12, causing spurious wakeups.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Venue 10 Pro 5055"),
++		},
++		.driver_data = &(struct acpi_gpiolib_dmi_quirk) {
++			.ignore_wake = "INT33FC:02@12",
++		},
++	},
+ 	{
+ 		/*
+ 		 * HP X2 10 models with Cherry Trail SoC + TI PMIC use an
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 8180894bbd1e3..fbbb1bde6b063 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8611,6 +8611,53 @@ static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm
+ }
+ #endif
+ 
++static int validate_overlay(struct drm_atomic_state *state)
++{
++	int i;
++	struct drm_plane *plane;
++	struct drm_plane_state *old_plane_state, *new_plane_state;
++	struct drm_plane_state *primary_state, *overlay_state = NULL;
++
++	/* Check if primary plane is contained inside overlay */
++	for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
++		if (plane->type == DRM_PLANE_TYPE_OVERLAY) {
++			if (drm_atomic_plane_disabling(plane->state, new_plane_state))
++				return 0;
++
++			overlay_state = new_plane_state;
++			continue;
++		}
++	}
++
++	/* check if we're making changes to the overlay plane */
++	if (!overlay_state)
++		return 0;
++
++	/* check if overlay plane is enabled */
++	if (!overlay_state->crtc)
++		return 0;
++
++	/* find the primary plane for the CRTC that the overlay is enabled on */
++	primary_state = drm_atomic_get_plane_state(state, overlay_state->crtc->primary);
++	if (IS_ERR(primary_state))
++		return PTR_ERR(primary_state);
++
++	/* check if primary plane is enabled */
++	if (!primary_state->crtc)
++		return 0;
++
++	/* Perform the bounds check to ensure the overlay plane covers the primary */
++	if (primary_state->crtc_x < overlay_state->crtc_x ||
++	    primary_state->crtc_y < overlay_state->crtc_y ||
++	    primary_state->crtc_x + primary_state->crtc_w > overlay_state->crtc_x + overlay_state->crtc_w ||
++	    primary_state->crtc_y + primary_state->crtc_h > overlay_state->crtc_y + overlay_state->crtc_h) {
++		DRM_DEBUG_ATOMIC("Overlay plane is enabled with hardware cursor but does not fully cover primary plane\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ /**
+  * amdgpu_dm_atomic_check() - Atomic check implementation for AMDgpu DM.
+  * @dev: The DRM device
+@@ -8789,6 +8836,10 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 			goto fail;
+ 	}
+ 
++	ret = validate_overlay(state);
++	if (ret)
++		goto fail;
++
+ 	/* Add new/modified planes */
+ 	for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
+ 		ret = dm_update_plane_state(dc, state, plane,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index f2c8719b8395e..52df6202a9543 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -1110,7 +1110,6 @@ static int navi10_force_clk_levels(struct smu_context *smu,
+ 	case SMU_SOCCLK:
+ 	case SMU_MCLK:
+ 	case SMU_UCLK:
+-	case SMU_DCEFCLK:
+ 	case SMU_FCLK:
+ 		/* There is only 2 levels for fine grained DPM */
+ 		if (navi10_is_support_fine_grained_dpm(smu, clk_type)) {
+@@ -1130,6 +1129,10 @@ static int navi10_force_clk_levels(struct smu_context *smu,
+ 		if (ret)
+ 			return size;
+ 		break;
++	case SMU_DCEFCLK:
++		dev_info(smu->adev->dev,"Setting DCEFCLK min/max dpm level is not supported!\n");
++		break;
++
+ 	default:
+ 		break;
+ 	}
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 31da8fae6fa9d..471bbb78884b5 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -1018,7 +1018,6 @@ static int sienna_cichlid_force_clk_levels(struct smu_context *smu,
+ 	case SMU_SOCCLK:
+ 	case SMU_MCLK:
+ 	case SMU_UCLK:
+-	case SMU_DCEFCLK:
+ 	case SMU_FCLK:
+ 		/* There is only 2 levels for fine grained DPM */
+ 		if (sienna_cichlid_is_support_fine_grained_dpm(smu, clk_type)) {
+@@ -1038,6 +1037,9 @@ static int sienna_cichlid_force_clk_levels(struct smu_context *smu,
+ 		if (ret)
+ 			goto forec_level_out;
+ 		break;
++	case SMU_DCEFCLK:
++		dev_info(smu->adev->dev,"Setting DCEFCLK min/max dpm level is not supported!\n");
++		break;
+ 	default:
+ 		break;
+ 	}
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 1937b3d6342ae..6f52f81339242 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -5655,7 +5655,18 @@ intel_dp_check_mst_status(struct intel_dp *intel_dp)
+ 	drm_WARN_ON_ONCE(&i915->drm, intel_dp->active_mst_links < 0);
+ 
+ 	for (;;) {
+-		u8 esi[DP_DPRX_ESI_LEN] = {};
++		/*
++		 * The +2 is because DP_DPRX_ESI_LEN is 14, but we then
++		 * pass in "esi+10" to drm_dp_channel_eq_ok(), which
++		 * takes a 6-byte array. So we actually need 16 bytes
++		 * here.
++		 *
++		 * Somebody who knows what the limits actually are
++		 * should check this, but for now this is at least
++		 * harmless and avoids a valid compiler warning about
++		 * using more of the array than we have allocated.
++		 */
++		u8 esi[DP_DPRX_ESI_LEN+2] = {};
+ 		bool handled;
+ 		int retry;
+ 
+diff --git a/drivers/input/touchscreen/elants_i2c.c b/drivers/input/touchscreen/elants_i2c.c
+index 50c348297e383..03a4825359448 100644
+--- a/drivers/input/touchscreen/elants_i2c.c
++++ b/drivers/input/touchscreen/elants_i2c.c
+@@ -38,6 +38,7 @@
+ #include <linux/of.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/regulator/consumer.h>
++#include <linux/uuid.h>
+ #include <asm/unaligned.h>
+ 
+ /* Device, Driver information */
+@@ -1224,6 +1225,40 @@ static void elants_i2c_power_off(void *_data)
+ 	}
+ }
+ 
++#ifdef CONFIG_ACPI
++static const struct acpi_device_id i2c_hid_ids[] = {
++	{"ACPI0C50", 0 },
++	{"PNP0C50", 0 },
++	{ },
++};
++
++static const guid_t i2c_hid_guid =
++	GUID_INIT(0x3CDFF6F7, 0x4267, 0x4555,
++		  0xAD, 0x05, 0xB3, 0x0A, 0x3D, 0x89, 0x38, 0xDE);
++
++static bool elants_acpi_is_hid_device(struct device *dev)
++{
++	acpi_handle handle = ACPI_HANDLE(dev);
++	union acpi_object *obj;
++
++	if (acpi_match_device_ids(ACPI_COMPANION(dev), i2c_hid_ids))
++		return false;
++
++	obj = acpi_evaluate_dsm_typed(handle, &i2c_hid_guid, 1, 1, NULL, ACPI_TYPE_INTEGER);
++	if (obj) {
++		ACPI_FREE(obj);
++		return true;
++	}
++
++	return false;
++}
++#else
++static bool elants_acpi_is_hid_device(struct device *dev)
++{
++	return false;
++}
++#endif
++
+ static int elants_i2c_probe(struct i2c_client *client,
+ 			    const struct i2c_device_id *id)
+ {
+@@ -1232,9 +1267,14 @@ static int elants_i2c_probe(struct i2c_client *client,
+ 	unsigned long irqflags;
+ 	int error;
+ 
++	/* Don't bind to i2c-hid compatible devices, these are handled by the i2c-hid drv. */
++	if (elants_acpi_is_hid_device(&client->dev)) {
++		dev_warn(&client->dev, "This device appears to be an I2C-HID device, not binding\n");
++		return -ENODEV;
++	}
++
+ 	if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
+-		dev_err(&client->dev,
+-			"%s: i2c check functionality error\n", DEVICE_NAME);
++		dev_err(&client->dev, "I2C check functionality error\n");
+ 		return -ENXIO;
+ 	}
+ 
+diff --git a/drivers/input/touchscreen/silead.c b/drivers/input/touchscreen/silead.c
+index 8fa2f3b7cfd8b..e8b6c3137420b 100644
+--- a/drivers/input/touchscreen/silead.c
++++ b/drivers/input/touchscreen/silead.c
+@@ -20,6 +20,7 @@
+ #include <linux/input/mt.h>
+ #include <linux/input/touchscreen.h>
+ #include <linux/pm.h>
++#include <linux/pm_runtime.h>
+ #include <linux/irq.h>
+ #include <linux/regulator/consumer.h>
+ 
+@@ -335,10 +336,8 @@ static int silead_ts_get_id(struct i2c_client *client)
+ 
+ 	error = i2c_smbus_read_i2c_block_data(client, SILEAD_REG_ID,
+ 					      sizeof(chip_id), (u8 *)&chip_id);
+-	if (error < 0) {
+-		dev_err(&client->dev, "Chip ID read error %d\n", error);
++	if (error < 0)
+ 		return error;
+-	}
+ 
+ 	data->chip_id = le32_to_cpu(chip_id);
+ 	dev_info(&client->dev, "Silead chip ID: 0x%8X", data->chip_id);
+@@ -351,12 +350,49 @@ static int silead_ts_setup(struct i2c_client *client)
+ 	int error;
+ 	u32 status;
+ 
++	/*
++	 * Some buggy BIOS-es bring up the chip in a stuck state where it
++	 * blocks the I2C bus. The following steps are necessary to
++	 * unstuck the chip / bus:
++	 * 1. Turn off the Silead chip.
++	 * 2. Try to do an I2C transfer with the chip, this will fail in
++	 *    response to which the I2C-bus-driver will call:
++	 *    i2c_recover_bus() which will unstuck the I2C-bus. Note the
++	 *    unstuck-ing of the I2C bus only works if we first drop the
++	 *    chip off the bus by turning it off.
++	 * 3. Turn the chip back on.
++	 *
++	 * On the x86/ACPI systems were this problem is seen, step 1. and
++	 * 3. require making ACPI calls and dealing with ACPI Power
++	 * Resources. The workaround below runtime-suspends the chip to
++	 * turn it off, leaving it up to the ACPI subsystem to deal with
++	 * this.
++	 */
++
++	if (device_property_read_bool(&client->dev,
++				      "silead,stuck-controller-bug")) {
++		pm_runtime_set_active(&client->dev);
++		pm_runtime_enable(&client->dev);
++		pm_runtime_allow(&client->dev);
++
++		pm_runtime_suspend(&client->dev);
++
++		dev_warn(&client->dev, FW_BUG "Stuck I2C bus: please ignore the next 'controller timed out' error\n");
++		silead_ts_get_id(client);
++
++		/* The forbid will also resume the device */
++		pm_runtime_forbid(&client->dev);
++		pm_runtime_disable(&client->dev);
++	}
++
+ 	silead_ts_set_power(client, SILEAD_POWER_OFF);
+ 	silead_ts_set_power(client, SILEAD_POWER_ON);
+ 
+ 	error = silead_ts_get_id(client);
+-	if (error)
++	if (error) {
++		dev_err(&client->dev, "Chip ID read error %d\n", error);
+ 		return error;
++	}
+ 
+ 	error = silead_ts_init(client);
+ 	if (error)
+diff --git a/drivers/isdn/capi/kcapi.c b/drivers/isdn/capi/kcapi.c
+index 7168778fbbe19..cb0afe8971623 100644
+--- a/drivers/isdn/capi/kcapi.c
++++ b/drivers/isdn/capi/kcapi.c
+@@ -721,7 +721,7 @@ u16 capi20_put_message(struct capi20_appl *ap, struct sk_buff *skb)
+  * Return value: CAPI result code
+  */
+ 
+-u16 capi20_get_manufacturer(u32 contr, u8 *buf)
++u16 capi20_get_manufacturer(u32 contr, u8 buf[CAPI_MANUFACTURER_LEN])
+ {
+ 	struct capi_ctr *ctr;
+ 	u16 ret;
+@@ -787,7 +787,7 @@ u16 capi20_get_version(u32 contr, struct capi_version *verp)
+  * Return value: CAPI result code
+  */
+ 
+-u16 capi20_get_serial(u32 contr, u8 *serial)
++u16 capi20_get_serial(u32 contr, u8 serial[CAPI_SERIAL_LEN])
+ {
+ 	struct capi_ctr *ctr;
+ 	u16 ret;
+diff --git a/drivers/misc/kgdbts.c b/drivers/misc/kgdbts.c
+index 945701bce5536..2e081a58da6c5 100644
+--- a/drivers/misc/kgdbts.c
++++ b/drivers/misc/kgdbts.c
+@@ -95,19 +95,19 @@
+ 
+ #include <asm/sections.h>
+ 
+-#define v1printk(a...) do { \
+-	if (verbose) \
+-		printk(KERN_INFO a); \
+-	} while (0)
+-#define v2printk(a...) do { \
+-	if (verbose > 1) \
+-		printk(KERN_INFO a); \
+-		touch_nmi_watchdog();	\
+-	} while (0)
+-#define eprintk(a...) do { \
+-		printk(KERN_ERR a); \
+-		WARN_ON(1); \
+-	} while (0)
++#define v1printk(a...) do {		\
++	if (verbose)			\
++		printk(KERN_INFO a);	\
++} while (0)
++#define v2printk(a...) do {		\
++	if (verbose > 1)		\
++		printk(KERN_INFO a);	\
++	touch_nmi_watchdog();		\
++} while (0)
++#define eprintk(a...) do {		\
++	printk(KERN_ERR a);		\
++	WARN_ON(1);			\
++} while (0)
+ #define MAX_CONFIG_LEN		40
+ 
+ static struct kgdb_io kgdbts_io_ops;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
+index 17410fe866267..7d49fd4edc9ec 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
+@@ -2671,7 +2671,7 @@ do { \
+ 	seq_printf(seq, "%-12s", s); \
+ 	for (i = 0; i < n; ++i) \
+ 		seq_printf(seq, " %16" fmt_spec, v); \
+-		seq_putc(seq, '\n'); \
++	seq_putc(seq, '\n'); \
+ } while (0)
+ #define S(s, v) S3("s", s, v)
+ #define T3(fmt_spec, s, v) S3(fmt_spec, s, tx[i].v)
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+index 3334c9e2152ab..546301272271d 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+@@ -2559,12 +2559,12 @@ int cxgb4_ethofld_send_flowc(struct net_device *dev, u32 eotid, u32 tc)
+ 	spin_lock_bh(&eosw_txq->lock);
+ 	if (tc != FW_SCHED_CLS_NONE) {
+ 		if (eosw_txq->state != CXGB4_EO_STATE_CLOSED)
+-			goto out_unlock;
++			goto out_free_skb;
+ 
+ 		next_state = CXGB4_EO_STATE_FLOWC_OPEN_SEND;
+ 	} else {
+ 		if (eosw_txq->state != CXGB4_EO_STATE_ACTIVE)
+-			goto out_unlock;
++			goto out_free_skb;
+ 
+ 		next_state = CXGB4_EO_STATE_FLOWC_CLOSE_SEND;
+ 	}
+@@ -2600,17 +2600,19 @@ int cxgb4_ethofld_send_flowc(struct net_device *dev, u32 eotid, u32 tc)
+ 		eosw_txq_flush_pending_skbs(eosw_txq);
+ 
+ 	ret = eosw_txq_enqueue(eosw_txq, skb);
+-	if (ret) {
+-		dev_consume_skb_any(skb);
+-		goto out_unlock;
+-	}
++	if (ret)
++		goto out_free_skb;
+ 
+ 	eosw_txq->state = next_state;
+ 	eosw_txq->flowc_idx = eosw_txq->pidx;
+ 	eosw_txq_advance(eosw_txq, 1);
+ 	ethofld_xmit(dev, eosw_txq);
+ 
+-out_unlock:
++	spin_unlock_bh(&eosw_txq->lock);
++	return 0;
++
++out_free_skb:
++	dev_consume_skb_any(skb);
+ 	spin_unlock_bh(&eosw_txq->lock);
+ 	return ret;
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+index 62aa0e95beb70..a7249e4071f16 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+@@ -222,7 +222,7 @@ static void dwmac4_dma_rx_chan_op_mode(void __iomem *ioaddr, int mode,
+ 				       u32 channel, int fifosz, u8 qmode)
+ {
+ 	unsigned int rqs = fifosz / 256 - 1;
+-	u32 mtl_rx_op, mtl_rx_int;
++	u32 mtl_rx_op;
+ 
+ 	mtl_rx_op = readl(ioaddr + MTL_CHAN_RX_OP_MODE(channel));
+ 
+@@ -283,11 +283,6 @@ static void dwmac4_dma_rx_chan_op_mode(void __iomem *ioaddr, int mode,
+ 	}
+ 
+ 	writel(mtl_rx_op, ioaddr + MTL_CHAN_RX_OP_MODE(channel));
+-
+-	/* Enable MTL RX overflow */
+-	mtl_rx_int = readl(ioaddr + MTL_CHAN_INT_CTRL(channel));
+-	writel(mtl_rx_int | MTL_RX_OVERFLOW_INT_EN,
+-	       ioaddr + MTL_CHAN_INT_CTRL(channel));
+ }
+ 
+ static void dwmac4_dma_tx_chan_op_mode(void __iomem *ioaddr, int mode,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 5b9478dffe103..4374ce4671ad2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -4138,7 +4138,6 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
+ 	/* To handle GMAC own interrupts */
+ 	if ((priv->plat->has_gmac) || xmac) {
+ 		int status = stmmac_host_irq_status(priv, priv->hw, &priv->xstats);
+-		int mtl_status;
+ 
+ 		if (unlikely(status)) {
+ 			/* For LPI we need to save the tx status */
+@@ -4149,17 +4148,8 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
+ 		}
+ 
+ 		for (queue = 0; queue < queues_count; queue++) {
+-			struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
+-
+-			mtl_status = stmmac_host_mtl_irq_status(priv, priv->hw,
+-								queue);
+-			if (mtl_status != -EINVAL)
+-				status |= mtl_status;
+-
+-			if (status & CORE_IRQ_MTL_RX_OVERFLOW)
+-				stmmac_set_rx_tail_ptr(priv, priv->ioaddr,
+-						       rx_q->rx_tail_addr,
+-						       queue);
++			status = stmmac_host_mtl_irq_status(priv, priv->hw,
++							    queue);
+ 		}
+ 
+ 		/* PCS link status */
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 038ce4e5e84ba..286f836a53bfe 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -406,9 +406,13 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
+ 	offset += hdr_padded_len;
+ 	p += hdr_padded_len;
+ 
+-	copy = len;
+-	if (copy > skb_tailroom(skb))
+-		copy = skb_tailroom(skb);
++	/* Copy all frame if it fits skb->head, otherwise
++	 * we let virtio_net_hdr_to_skb() and GRO pull headers as needed.
++	 */
++	if (len <= skb_tailroom(skb))
++		copy = len;
++	else
++		copy = ETH_HLEN + metasize;
+ 	skb_put_data(skb, p, copy);
+ 
+ 	if (metasize) {
+diff --git a/drivers/net/wireless/cisco/airo.c b/drivers/net/wireless/cisco/airo.c
+index 87b9398b03fd4..0569f37e9ed59 100644
+--- a/drivers/net/wireless/cisco/airo.c
++++ b/drivers/net/wireless/cisco/airo.c
+@@ -3825,6 +3825,68 @@ static inline void set_auth_type(struct airo_info *local, int auth_type)
+ 		local->last_auth = auth_type;
+ }
+ 
++static int noinline_for_stack airo_readconfig(struct airo_info *ai, u8 *mac, int lock)
++{
++	int i, status;
++	/* large variables, so don't inline this function,
++	 * maybe change to kmalloc
++	 */
++	tdsRssiRid rssi_rid;
++	CapabilityRid cap_rid;
++
++	kfree(ai->SSID);
++	ai->SSID = NULL;
++	// general configuration (read/modify/write)
++	status = readConfigRid(ai, lock);
++	if (status != SUCCESS) return ERROR;
++
++	status = readCapabilityRid(ai, &cap_rid, lock);
++	if (status != SUCCESS) return ERROR;
++
++	status = PC4500_readrid(ai, RID_RSSI, &rssi_rid, sizeof(rssi_rid), lock);
++	if (status == SUCCESS) {
++		if (ai->rssi || (ai->rssi = kmalloc(512, GFP_KERNEL)) != NULL)
++			memcpy(ai->rssi, (u8*)&rssi_rid + 2, 512); /* Skip RID length member */
++	}
++	else {
++		kfree(ai->rssi);
++		ai->rssi = NULL;
++		if (cap_rid.softCap & cpu_to_le16(8))
++			ai->config.rmode |= RXMODE_NORMALIZED_RSSI;
++		else
++			airo_print_warn(ai->dev->name, "unknown received signal "
++					"level scale");
++	}
++	ai->config.opmode = adhoc ? MODE_STA_IBSS : MODE_STA_ESS;
++	set_auth_type(ai, AUTH_OPEN);
++	ai->config.modulation = MOD_CCK;
++
++	if (le16_to_cpu(cap_rid.len) >= sizeof(cap_rid) &&
++	    (cap_rid.extSoftCap & cpu_to_le16(1)) &&
++	    micsetup(ai) == SUCCESS) {
++		ai->config.opmode |= MODE_MIC;
++		set_bit(FLAG_MIC_CAPABLE, &ai->flags);
++	}
++
++	/* Save off the MAC */
++	for (i = 0; i < ETH_ALEN; i++) {
++		mac[i] = ai->config.macAddr[i];
++	}
++
++	/* Check to see if there are any insmod configured
++	   rates to add */
++	if (rates[0]) {
++		memset(ai->config.rates, 0, sizeof(ai->config.rates));
++		for (i = 0; i < 8 && rates[i]; i++) {
++			ai->config.rates[i] = rates[i];
++		}
++	}
++	set_bit (FLAG_COMMIT, &ai->flags);
++
++	return SUCCESS;
++}
++
++
+ static u16 setup_card(struct airo_info *ai, u8 *mac, int lock)
+ {
+ 	Cmd cmd;
+@@ -3871,58 +3933,9 @@ static u16 setup_card(struct airo_info *ai, u8 *mac, int lock)
+ 	if (lock)
+ 		up(&ai->sem);
+ 	if (ai->config.len == 0) {
+-		int i;
+-		tdsRssiRid rssi_rid;
+-		CapabilityRid cap_rid;
+-
+-		kfree(ai->SSID);
+-		ai->SSID = NULL;
+-		// general configuration (read/modify/write)
+-		status = readConfigRid(ai, lock);
+-		if (status != SUCCESS) return ERROR;
+-
+-		status = readCapabilityRid(ai, &cap_rid, lock);
+-		if (status != SUCCESS) return ERROR;
+-
+-		status = PC4500_readrid(ai, RID_RSSI,&rssi_rid, sizeof(rssi_rid), lock);
+-		if (status == SUCCESS) {
+-			if (ai->rssi || (ai->rssi = kmalloc(512, GFP_KERNEL)) != NULL)
+-				memcpy(ai->rssi, (u8*)&rssi_rid + 2, 512); /* Skip RID length member */
+-		}
+-		else {
+-			kfree(ai->rssi);
+-			ai->rssi = NULL;
+-			if (cap_rid.softCap & cpu_to_le16(8))
+-				ai->config.rmode |= RXMODE_NORMALIZED_RSSI;
+-			else
+-				airo_print_warn(ai->dev->name, "unknown received signal "
+-						"level scale");
+-		}
+-		ai->config.opmode = adhoc ? MODE_STA_IBSS : MODE_STA_ESS;
+-		set_auth_type(ai, AUTH_OPEN);
+-		ai->config.modulation = MOD_CCK;
+-
+-		if (le16_to_cpu(cap_rid.len) >= sizeof(cap_rid) &&
+-		    (cap_rid.extSoftCap & cpu_to_le16(1)) &&
+-		    micsetup(ai) == SUCCESS) {
+-			ai->config.opmode |= MODE_MIC;
+-			set_bit(FLAG_MIC_CAPABLE, &ai->flags);
+-		}
+-
+-		/* Save off the MAC */
+-		for (i = 0; i < ETH_ALEN; i++) {
+-			mac[i] = ai->config.macAddr[i];
+-		}
+-
+-		/* Check to see if there are any insmod configured
+-		   rates to add */
+-		if (rates[0]) {
+-			memset(ai->config.rates, 0, sizeof(ai->config.rates));
+-			for (i = 0; i < 8 && rates[i]; i++) {
+-				ai->config.rates[i] = rates[i];
+-			}
+-		}
+-		set_bit (FLAG_COMMIT, &ai->flags);
++		status = airo_readconfig(ai, mac, lock);
++		if (status != SUCCESS)
++			return ERROR;
+ 	}
+ 
+ 	/* Setup the SSIDs if present */
+diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
+index e20dea5c44f7b..6a8274caa3bc7 100644
+--- a/drivers/nvme/target/admin-cmd.c
++++ b/drivers/nvme/target/admin-cmd.c
+@@ -313,7 +313,7 @@ static void nvmet_execute_get_log_page(struct nvmet_req *req)
+ 	case NVME_LOG_ANA:
+ 		return nvmet_execute_get_log_page_ana(req);
+ 	}
+-	pr_err("unhandled lid %d on qid %d\n",
++	pr_debug("unhandled lid %d on qid %d\n",
+ 	       req->cmd->get_log_page.lid, req->sq->qid);
+ 	req->error_loc = offsetof(struct nvme_get_log_page_command, lid);
+ 	nvmet_req_complete(req, NVME_SC_INVALID_FIELD | NVME_SC_DNR);
+@@ -657,7 +657,7 @@ static void nvmet_execute_identify(struct nvmet_req *req)
+ 		return nvmet_execute_identify_desclist(req);
+ 	}
+ 
+-	pr_err("unhandled identify cns %d on qid %d\n",
++	pr_debug("unhandled identify cns %d on qid %d\n",
+ 	       req->cmd->identify.cns, req->sq->qid);
+ 	req->error_loc = offsetof(struct nvme_identify, cns);
+ 	nvmet_req_complete(req, NVME_SC_INVALID_FIELD | NVME_SC_DNR);
+@@ -972,7 +972,7 @@ u16 nvmet_parse_admin_cmd(struct nvmet_req *req)
+ 		return 0;
+ 	}
+ 
+-	pr_err("unhandled cmd %d on qid %d\n", cmd->common.opcode,
++	pr_debug("unhandled cmd %d on qid %d\n", cmd->common.opcode,
+ 	       req->sq->qid);
+ 	req->error_loc = offsetof(struct nvme_common_command, opcode);
+ 	return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index f920e7efe118f..d788f4d7f9aa3 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -1660,7 +1660,7 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
+ 	if (pcie->ep_state == EP_STATE_ENABLED)
+ 		return;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n",
+ 			ret);
+diff --git a/drivers/pci/controller/pci-thunder-ecam.c b/drivers/pci/controller/pci-thunder-ecam.c
+index 7e8835fee5f73..d79395881d767 100644
+--- a/drivers/pci/controller/pci-thunder-ecam.c
++++ b/drivers/pci/controller/pci-thunder-ecam.c
+@@ -116,7 +116,7 @@ static int thunder_ecam_p2_config_read(struct pci_bus *bus, unsigned int devfn,
+ 	 * the config space access window.  Since we are working with
+ 	 * the high-order 32 bits, shift everything down by 32 bits.
+ 	 */
+-	node_bits = (cfg->res.start >> 32) & (1 << 12);
++	node_bits = upper_32_bits(cfg->res.start) & (1 << 12);
+ 
+ 	v |= node_bits;
+ 	set_val(v, where, size, val);
+diff --git a/drivers/pci/controller/pci-thunder-pem.c b/drivers/pci/controller/pci-thunder-pem.c
+index 3f847969143e8..4b12dd42bf23b 100644
+--- a/drivers/pci/controller/pci-thunder-pem.c
++++ b/drivers/pci/controller/pci-thunder-pem.c
+@@ -12,6 +12,7 @@
+ #include <linux/pci-acpi.h>
+ #include <linux/pci-ecam.h>
+ #include <linux/platform_device.h>
++#include <linux/io-64-nonatomic-lo-hi.h>
+ #include "../pci.h"
+ 
+ #if defined(CONFIG_PCI_HOST_THUNDER_PEM) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))
+@@ -315,9 +316,9 @@ static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg,
+ 	 * structure here for the BAR.
+ 	 */
+ 	bar4_start = res_pem->start + 0xf00000;
+-	pem_pci->ea_entry[0] = (u32)bar4_start | 2;
+-	pem_pci->ea_entry[1] = (u32)(res_pem->end - bar4_start) & ~3u;
+-	pem_pci->ea_entry[2] = (u32)(bar4_start >> 32);
++	pem_pci->ea_entry[0] = lower_32_bits(bar4_start) | 2;
++	pem_pci->ea_entry[1] = lower_32_bits(res_pem->end - bar4_start) & ~3u;
++	pem_pci->ea_entry[2] = upper_32_bits(bar4_start);
+ 
+ 	cfg->priv = pem_pci;
+ 	return 0;
+@@ -325,9 +326,9 @@ static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg,
+ 
+ #if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
+ 
+-#define PEM_RES_BASE		0x87e0c0000000UL
+-#define PEM_NODE_MASK		GENMASK(45, 44)
+-#define PEM_INDX_MASK		GENMASK(26, 24)
++#define PEM_RES_BASE		0x87e0c0000000ULL
++#define PEM_NODE_MASK		GENMASK_ULL(45, 44)
++#define PEM_INDX_MASK		GENMASK_ULL(26, 24)
+ #define PEM_MIN_DOM_IN_NODE	4
+ #define PEM_MAX_DOM_IN_NODE	10
+ 
+diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
+index 3365c93abf0e2..f031302ad4019 100644
+--- a/drivers/pci/hotplug/acpiphp_glue.c
++++ b/drivers/pci/hotplug/acpiphp_glue.c
+@@ -533,6 +533,7 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge)
+ 			slot->flags &= ~SLOT_ENABLED;
+ 			continue;
+ 		}
++		pci_dev_put(dev);
+ 	}
+ }
+ 
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index f86cae9aa1f41..09ebc134d0d74 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -606,6 +606,12 @@ static inline int pci_dev_specific_reset(struct pci_dev *dev, int probe)
+ #if defined(CONFIG_PCI_QUIRKS) && defined(CONFIG_ARM64)
+ int acpi_get_rc_resources(struct device *dev, const char *hid, u16 segment,
+ 			  struct resource *res);
++#else
++static inline int acpi_get_rc_resources(struct device *dev, const char *hid,
++					u16 segment, struct resource *res)
++{
++	return -ENODEV;
++}
+ #endif
+ 
+ u32 pci_rebar_get_possible_sizes(struct pci_dev *pdev, int bar);
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index 31be31161350e..036d54dc52e24 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -475,6 +475,11 @@ static int cros_typec_enable_dp(struct cros_typec_data *typec,
+ 		return -ENOTSUPP;
+ 	}
+ 
++	if (!pd_ctrl->dp_mode) {
++		dev_err(typec->dev, "No valid DP mode provided.\n");
++		return -EINVAL;
++	}
++
+ 	/* Status VDO. */
+ 	dp_data.status = DP_STATUS_ENABLED;
+ 	if (port->mux_flags & USB_PD_MUX_HPD_IRQ)
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 3e5c0718555ad..bf171ef61abd5 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -11590,13 +11590,20 @@ lpfc_sli_validate_fcp_iocb(struct lpfc_iocbq *iocbq, struct lpfc_vport *vport,
+ 			   lpfc_ctx_cmd ctx_cmd)
+ {
+ 	struct lpfc_io_buf *lpfc_cmd;
++	IOCB_t *icmd = NULL;
+ 	int rc = 1;
+ 
+ 	if (iocbq->vport != vport)
+ 		return rc;
+ 
+-	if (!(iocbq->iocb_flag &  LPFC_IO_FCP) ||
+-	    !(iocbq->iocb_flag & LPFC_IO_ON_TXCMPLQ))
++	if (!(iocbq->iocb_flag & LPFC_IO_FCP) ||
++	    !(iocbq->iocb_flag & LPFC_IO_ON_TXCMPLQ) ||
++	      iocbq->iocb_flag & LPFC_DRIVER_ABORTED)
++		return rc;
++
++	icmd = &iocbq->iocb;
++	if (icmd->ulpCommand == CMD_ABORT_XRI_CN ||
++	    icmd->ulpCommand == CMD_CLOSE_XRI_CN)
+ 		return rc;
+ 
+ 	lpfc_cmd = container_of(iocbq, struct lpfc_io_buf, cur_iocbq);
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index 7d5814a95e1ed..c6950f157b99f 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -1391,7 +1391,7 @@ static int tcmu_run_tmr_queue(struct tcmu_dev *udev)
+ 	return 1;
+ }
+ 
+-static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
++static bool tcmu_handle_completions(struct tcmu_dev *udev)
+ {
+ 	struct tcmu_mailbox *mb;
+ 	struct tcmu_cmd *cmd;
+@@ -1434,7 +1434,7 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
+ 			pr_err("cmd_id %u not found, ring is broken\n",
+ 			       entry->hdr.cmd_id);
+ 			set_bit(TCMU_DEV_BIT_BROKEN, &udev->flags);
+-			break;
++			return false;
+ 		}
+ 
+ 		tcmu_handle_completion(cmd, entry);
+diff --git a/drivers/usb/host/sl811-hcd.c b/drivers/usb/host/sl811-hcd.c
+index adaf4063690a4..9465fce99c822 100644
+--- a/drivers/usb/host/sl811-hcd.c
++++ b/drivers/usb/host/sl811-hcd.c
+@@ -1287,11 +1287,10 @@ sl811h_hub_control(
+ 			goto error;
+ 		put_unaligned_le32(sl811->port1, buf);
+ 
+-#ifndef	VERBOSE
+-	if (*(u16*)(buf+2))	/* only if wPortChange is interesting */
+-#endif
+-		dev_dbg(hcd->self.controller, "GetPortStatus %08x\n",
+-			sl811->port1);
++		if (__is_defined(VERBOSE) ||
++		    *(u16*)(buf+2)) /* only if wPortChange is interesting */
++			dev_dbg(hcd->self.controller, "GetPortStatus %08x\n",
++				sl811->port1);
+ 		break;
+ 	case SetPortFeature:
+ 		if (wIndex != 1 || wLength != 0)
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 718533f0fb90b..cacea6bafc229 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -1903,6 +1903,7 @@ ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 	struct inode *bd_inode = bdev_file_inode(file);
+ 	loff_t size = i_size_read(bd_inode);
+ 	struct blk_plug plug;
++	size_t shorted = 0;
+ 	ssize_t ret;
+ 
+ 	if (bdev_read_only(I_BDEV(bd_inode)))
+@@ -1920,12 +1921,17 @@ ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 	if ((iocb->ki_flags & (IOCB_NOWAIT | IOCB_DIRECT)) == IOCB_NOWAIT)
+ 		return -EOPNOTSUPP;
+ 
+-	iov_iter_truncate(from, size - iocb->ki_pos);
++	size -= iocb->ki_pos;
++	if (iov_iter_count(from) > size) {
++		shorted = iov_iter_count(from) - size;
++		iov_iter_truncate(from, size);
++	}
+ 
+ 	blk_start_plug(&plug);
+ 	ret = __generic_file_write_iter(iocb, from);
+ 	if (ret > 0)
+ 		ret = generic_write_sync(iocb, ret);
++	iov_iter_reexpand(from, iov_iter_count(from) + shorted);
+ 	blk_finish_plug(&plug);
+ 	return ret;
+ }
+@@ -1937,13 +1943,21 @@ ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ 	struct inode *bd_inode = bdev_file_inode(file);
+ 	loff_t size = i_size_read(bd_inode);
+ 	loff_t pos = iocb->ki_pos;
++	size_t shorted = 0;
++	ssize_t ret;
+ 
+ 	if (pos >= size)
+ 		return 0;
+ 
+ 	size -= pos;
+-	iov_iter_truncate(to, size);
+-	return generic_file_read_iter(iocb, to);
++	if (iov_iter_count(to) > size) {
++		shorted = iov_iter_count(to) - size;
++		iov_iter_truncate(to, size);
++	}
++
++	ret = generic_file_read_iter(iocb, to);
++	iov_iter_reexpand(to, iov_iter_count(to) + shorted);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(blkdev_read_iter);
+ 
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 576d01275bbd7..e4fc99afa25a9 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1866,6 +1866,7 @@ static int try_nonblocking_invalidate(struct inode *inode)
+ 	u32 invalidating_gen = ci->i_rdcache_gen;
+ 
+ 	spin_unlock(&ci->i_ceph_lock);
++	ceph_fscache_invalidate(inode);
+ 	invalidate_mapping_pages(&inode->i_data, 0, -1);
+ 	spin_lock(&ci->i_ceph_lock);
+ 
+diff --git a/fs/ceph/export.c b/fs/ceph/export.c
+index baa6368bece59..042bb4a02c0a2 100644
+--- a/fs/ceph/export.c
++++ b/fs/ceph/export.c
+@@ -129,6 +129,10 @@ static struct inode *__lookup_inode(struct super_block *sb, u64 ino)
+ 
+ 	vino.ino = ino;
+ 	vino.snap = CEPH_NOSNAP;
++
++	if (ceph_vino_is_reserved(vino))
++		return ERR_PTR(-ESTALE);
++
+ 	inode = ceph_find_inode(sb, vino);
+ 	if (!inode) {
+ 		struct ceph_mds_request *req;
+@@ -214,6 +218,10 @@ static struct dentry *__snapfh_to_dentry(struct super_block *sb,
+ 		vino.ino = sfh->ino;
+ 		vino.snap = sfh->snapid;
+ 	}
++
++	if (ceph_vino_is_reserved(vino))
++		return ERR_PTR(-ESTALE);
++
+ 	inode = ceph_find_inode(sb, vino);
+ 	if (inode)
+ 		return d_obtain_alias(inode);
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 2462a9a84b956..346fcdfcd3e91 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -56,6 +56,9 @@ struct inode *ceph_get_inode(struct super_block *sb, struct ceph_vino vino)
+ {
+ 	struct inode *inode;
+ 
++	if (ceph_vino_is_reserved(vino))
++		return ERR_PTR(-EREMOTEIO);
++
+ 	inode = iget5_locked(sb, (unsigned long)vino.ino, ceph_ino_compare,
+ 			     ceph_set_ino_cb, &vino);
+ 	if (!inode)
+@@ -87,14 +90,15 @@ struct inode *ceph_get_snapdir(struct inode *parent)
+ 	inode->i_mtime = parent->i_mtime;
+ 	inode->i_ctime = parent->i_ctime;
+ 	inode->i_atime = parent->i_atime;
+-	inode->i_op = &ceph_snapdir_iops;
+-	inode->i_fop = &ceph_snapdir_fops;
+-	ci->i_snap_caps = CEPH_CAP_PIN; /* so we can open */
+ 	ci->i_rbytes = 0;
+ 	ci->i_btime = ceph_inode(parent)->i_btime;
+ 
+-	if (inode->i_state & I_NEW)
++	if (inode->i_state & I_NEW) {
++		inode->i_op = &ceph_snapdir_iops;
++		inode->i_fop = &ceph_snapdir_fops;
++		ci->i_snap_caps = CEPH_CAP_PIN; /* so we can open */
+ 		unlock_new_inode(inode);
++	}
+ 
+ 	return inode;
+ }
+@@ -1912,6 +1916,7 @@ static void ceph_do_invalidate_pages(struct inode *inode)
+ 	orig_gen = ci->i_rdcache_gen;
+ 	spin_unlock(&ci->i_ceph_lock);
+ 
++	ceph_fscache_invalidate(inode);
+ 	if (invalidate_inode_pages2(inode->i_mapping) < 0) {
+ 		pr_err("invalidate_pages %p fails\n", inode);
+ 	}
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 8f1d7500a7ecb..d560752b764d8 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -433,6 +433,13 @@ static int ceph_parse_deleg_inos(void **p, void *end,
+ 
+ 		ceph_decode_64_safe(p, end, start, bad);
+ 		ceph_decode_64_safe(p, end, len, bad);
++
++		/* Don't accept a delegation of system inodes */
++		if (start < CEPH_INO_SYSTEM_BASE) {
++			pr_warn_ratelimited("ceph: ignoring reserved inode range delegation (start=0x%llx len=0x%llx)\n",
++					start, len);
++			continue;
++		}
+ 		while (len--) {
+ 			int err = xa_insert(&s->s_delegated_inos, ino = start++,
+ 					    DELEGATED_INO_AVAILABLE,
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index 482473e4cce1a..c33f744a8e11c 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -529,10 +529,34 @@ static inline int ceph_ino_compare(struct inode *inode, void *data)
+ 		ci->i_vino.snap == pvino->snap;
+ }
+ 
++/*
++ * The MDS reserves a set of inodes for its own usage. These should never
++ * be accessible by clients, and so the MDS has no reason to ever hand these
++ * out. The range is CEPH_MDS_INO_MDSDIR_OFFSET..CEPH_INO_SYSTEM_BASE.
++ *
++ * These come from src/mds/mdstypes.h in the ceph sources.
++ */
++#define CEPH_MAX_MDS		0x100
++#define CEPH_NUM_STRAY		10
++#define CEPH_MDS_INO_MDSDIR_OFFSET	(1 * CEPH_MAX_MDS)
++#define CEPH_INO_SYSTEM_BASE		((6*CEPH_MAX_MDS) + (CEPH_MAX_MDS * CEPH_NUM_STRAY))
++
++static inline bool ceph_vino_is_reserved(const struct ceph_vino vino)
++{
++	if (vino.ino < CEPH_INO_SYSTEM_BASE &&
++	    vino.ino >= CEPH_MDS_INO_MDSDIR_OFFSET) {
++		WARN_RATELIMIT(1, "Attempt to access reserved inode number 0x%llx", vino.ino);
++		return true;
++	}
++	return false;
++}
+ 
+ static inline struct inode *ceph_find_inode(struct super_block *sb,
+ 					    struct ceph_vino vino)
+ {
++	if (ceph_vino_is_reserved(vino))
++		return NULL;
++
+ 	/*
+ 	 * NB: The hashval will be run through the fs/inode.c hash function
+ 	 * anyway, so there is no need to squash the inode number down to
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 6e2e948f1475e..dc2cbca98fb0d 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -207,7 +207,8 @@ static void nfs_set_cache_invalid(struct inode *inode, unsigned long flags)
+ 				| NFS_INO_INVALID_SIZE
+ 				| NFS_INO_REVAL_PAGECACHE
+ 				| NFS_INO_INVALID_XATTR);
+-	}
++	} else if (flags & NFS_INO_REVAL_PAGECACHE)
++		flags |= NFS_INO_INVALID_CHANGE | NFS_INO_INVALID_SIZE;
+ 
+ 	if (inode->i_mapping->nrpages == 0)
+ 		flags &= ~(NFS_INO_INVALID_DATA|NFS_INO_DATA_INVAL_DEFER);
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 98775d7fa6963..b465f8f3e554f 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -65,14 +65,18 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 	skb_reset_mac_header(skb);
+ 
+ 	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
+-		u16 start = __virtio16_to_cpu(little_endian, hdr->csum_start);
+-		u16 off = __virtio16_to_cpu(little_endian, hdr->csum_offset);
++		u32 start = __virtio16_to_cpu(little_endian, hdr->csum_start);
++		u32 off = __virtio16_to_cpu(little_endian, hdr->csum_offset);
++		u32 needed = start + max_t(u32, thlen, off + sizeof(__sum16));
++
++		if (!pskb_may_pull(skb, needed))
++			return -EINVAL;
+ 
+ 		if (!skb_partial_csum_set(skb, start, off))
+ 			return -EINVAL;
+ 
+ 		p_off = skb_transport_offset(skb) + thlen;
+-		if (p_off > skb_headlen(skb))
++		if (!pskb_may_pull(skb, p_off))
+ 			return -EINVAL;
+ 	} else {
+ 		/* gso packets without NEEDS_CSUM do not set transport_offset.
+@@ -102,14 +106,14 @@ retry:
+ 			}
+ 
+ 			p_off = keys.control.thoff + thlen;
+-			if (p_off > skb_headlen(skb) ||
++			if (!pskb_may_pull(skb, p_off) ||
+ 			    keys.basic.ip_proto != ip_proto)
+ 				return -EINVAL;
+ 
+ 			skb_set_transport_header(skb, keys.control.thoff);
+ 		} else if (gso_type) {
+ 			p_off = thlen;
+-			if (p_off > skb_headlen(skb))
++			if (!pskb_may_pull(skb, p_off))
+ 				return -EINVAL;
+ 		}
+ 	}
+diff --git a/lib/stackdepot.c b/lib/stackdepot.c
+index 2caffc64e4c82..25bbac46605e9 100644
+--- a/lib/stackdepot.c
++++ b/lib/stackdepot.c
+@@ -70,7 +70,7 @@ static void *stack_slabs[STACK_ALLOC_MAX_SLABS];
+ static int depot_index;
+ static int next_slab_inited;
+ static size_t depot_offset;
+-static DEFINE_SPINLOCK(depot_lock);
++static DEFINE_RAW_SPINLOCK(depot_lock);
+ 
+ static bool init_stack_slab(void **prealloc)
+ {
+@@ -281,7 +281,7 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
+ 			prealloc = page_address(page);
+ 	}
+ 
+-	spin_lock_irqsave(&depot_lock, flags);
++	raw_spin_lock_irqsave(&depot_lock, flags);
+ 
+ 	found = find_stack(*bucket, entries, nr_entries, hash);
+ 	if (!found) {
+@@ -305,7 +305,7 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
+ 		WARN_ON(!init_stack_slab(&prealloc));
+ 	}
+ 
+-	spin_unlock_irqrestore(&depot_lock, flags);
++	raw_spin_unlock_irqrestore(&depot_lock, flags);
+ exit:
+ 	if (prealloc) {
+ 		/* Nobody used this memory, ok to free it. */
+diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
+index 92d64abffa87d..73f71c22f4c03 100644
+--- a/net/bridge/br_netlink.c
++++ b/net/bridge/br_netlink.c
+@@ -99,8 +99,9 @@ static size_t br_get_link_af_size_filtered(const struct net_device *dev,
+ 
+ 	rcu_read_lock();
+ 	if (netif_is_bridge_port(dev)) {
+-		p = br_port_get_rcu(dev);
+-		vg = nbp_vlan_group_rcu(p);
++		p = br_port_get_check_rcu(dev);
++		if (p)
++			vg = nbp_vlan_group_rcu(p);
+ 	} else if (dev->priv_flags & IFF_EBRIDGE) {
+ 		br = netdev_priv(dev);
+ 		vg = br_vlan_group_rcu(br);
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index b4e06ae088348..90c72e4c0a8fc 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -493,6 +493,10 @@ static int fill_frame_info(struct hsr_frame_info *frame,
+ 	struct ethhdr *ethhdr;
+ 	__be16 proto;
+ 
++	/* Check if skb contains hsr_ethhdr */
++	if (skb->mac_len < sizeof(struct hsr_ethhdr))
++		return -EINVAL;
++
+ 	memset(frame, 0, sizeof(*frame));
+ 	frame->is_supervision = is_supervision_frame(port->hsr, skb);
+ 	frame->node_src = hsr_get_node(port, &hsr->node_db, skb,
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 640f71a7b29d9..09fa49bbf617d 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -387,7 +387,6 @@ static struct ip6_tnl *ip6gre_tunnel_locate(struct net *net,
+ 	if (!(nt->parms.o_flags & TUNNEL_SEQ))
+ 		dev->features |= NETIF_F_LLTX;
+ 
+-	dev_hold(dev);
+ 	ip6gre_tunnel_link(ign, nt);
+ 	return nt;
+ 
+@@ -1496,6 +1495,7 @@ static int ip6gre_tunnel_init_common(struct net_device *dev)
+ 	}
+ 	ip6gre_tnl_init_features(dev);
+ 
++	dev_hold(dev);
+ 	return 0;
+ 
+ cleanup_dst_cache_init:
+@@ -1538,8 +1538,6 @@ static void ip6gre_fb_tunnel_init(struct net_device *dev)
+ 	strcpy(tunnel->parms.name, dev->name);
+ 
+ 	tunnel->hlen		= sizeof(struct ipv6hdr) + 4;
+-
+-	dev_hold(dev);
+ }
+ 
+ static struct inet6_protocol ip6gre_protocol __read_mostly = {
+@@ -1889,6 +1887,7 @@ static int ip6erspan_tap_init(struct net_device *dev)
+ 	dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ 	ip6erspan_tnl_link_config(tunnel, 1);
+ 
++	dev_hold(dev);
+ 	return 0;
+ 
+ cleanup_dst_cache_init:
+@@ -1988,8 +1987,6 @@ static int ip6gre_newlink_common(struct net *src_net, struct net_device *dev,
+ 	if (tb[IFLA_MTU])
+ 		ip6_tnl_change_mtu(dev, nla_get_u32(tb[IFLA_MTU]));
+ 
+-	dev_hold(dev);
+-
+ out:
+ 	return err;
+ }
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index ecc1abfca0650..42ca2d05c480d 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -293,7 +293,6 @@ static int ip6_tnl_create2(struct net_device *dev)
+ 
+ 	strcpy(t->parms.name, dev->name);
+ 
+-	dev_hold(dev);
+ 	ip6_tnl_link(ip6n, t);
+ 	return 0;
+ 
+@@ -1913,6 +1912,7 @@ ip6_tnl_dev_init_gen(struct net_device *dev)
+ 	dev->min_mtu = ETH_MIN_MTU;
+ 	dev->max_mtu = IP6_MAX_MTU - dev->hard_header_len;
+ 
++	dev_hold(dev);
+ 	return 0;
+ 
+ destroy_dst:
+@@ -1956,7 +1956,6 @@ static int __net_init ip6_fb_tnl_dev_init(struct net_device *dev)
+ 	struct ip6_tnl_net *ip6n = net_generic(net, ip6_tnl_net_id);
+ 
+ 	t->parms.proto = IPPROTO_IPV6;
+-	dev_hold(dev);
+ 
+ 	rcu_assign_pointer(ip6n->tnls_wc[0], t);
+ 	return 0;
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index 82961ff4da9b7..23aeeb46f99fc 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -962,7 +962,6 @@ static int __net_init vti6_fb_tnl_dev_init(struct net_device *dev)
+ 	struct vti6_net *ip6n = net_generic(net, vti6_net_id);
+ 
+ 	t->parms.proto = IPPROTO_IPV6;
+-	dev_hold(dev);
+ 
+ 	rcu_assign_pointer(ip6n->tnls_wc[0], t);
+ 	return 0;
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index 146ba7fa5bf62..a6a3d759246ec 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -218,8 +218,6 @@ static int ipip6_tunnel_create(struct net_device *dev)
+ 
+ 	ipip6_tunnel_clone_6rd(dev, sitn);
+ 
+-	dev_hold(dev);
+-
+ 	ipip6_tunnel_link(sitn, t);
+ 	return 0;
+ 
+@@ -1456,7 +1454,7 @@ static int ipip6_tunnel_init(struct net_device *dev)
+ 		dev->tstats = NULL;
+ 		return err;
+ 	}
+-
++	dev_hold(dev);
+ 	return 0;
+ }
+ 
+@@ -1472,7 +1470,6 @@ static void __net_init ipip6_fb_tunnel_init(struct net_device *dev)
+ 	iph->ihl		= 5;
+ 	iph->ttl		= 64;
+ 
+-	dev_hold(dev);
+ 	rcu_assign_pointer(sitn->tunnels_wc[0], tunnel);
+ }
+ 
+diff --git a/scripts/bloat-o-meter b/scripts/bloat-o-meter
+index d7ca46c612b34..dcd8d8750b8bf 100755
+--- a/scripts/bloat-o-meter
++++ b/scripts/bloat-o-meter
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python3
+ #
+ # Copyright 2004 Matt Mackall <mpm@selenic.com>
+ #
+diff --git a/scripts/config b/scripts/config
+index eee5b7f3a092a..8c8d7c3d7accc 100755
+--- a/scripts/config
++++ b/scripts/config
+@@ -1,4 +1,4 @@
+-#!/bin/bash
++#!/usr/bin/env bash
+ # SPDX-License-Identifier: GPL-2.0
+ # Manipulate options in a .config file from the command line
+ 
+diff --git a/scripts/diffconfig b/scripts/diffconfig
+index 89abf777f1973..d5da5fa05d1d3 100755
+--- a/scripts/diffconfig
++++ b/scripts/diffconfig
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python3
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # diffconfig - a tool to compare .config files.
+diff --git a/scripts/get_abi.pl b/scripts/get_abi.pl
+index 68dab828a722d..92d9aa6cc4f5d 100755
+--- a/scripts/get_abi.pl
++++ b/scripts/get_abi.pl
+@@ -1,4 +1,4 @@
+-#!/usr/bin/perl
++#!/usr/bin/env perl
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ use strict;
+diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
+index 0bafed857e171..4f84657f55c23 100755
+--- a/scripts/recordmcount.pl
++++ b/scripts/recordmcount.pl
+@@ -395,7 +395,7 @@ if ($arch eq "x86_64") {
+     $mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s_mcount\$";
+ } elsif ($arch eq "riscv") {
+     $function_regex = "^([0-9a-fA-F]+)\\s+<([^.0-9][0-9a-zA-Z_\\.]+)>:";
+-    $mcount_regex = "^\\s*([0-9a-fA-F]+):\\sR_RISCV_CALL\\s_mcount\$";
++    $mcount_regex = "^\\s*([0-9a-fA-F]+):\\sR_RISCV_CALL(_PLT)?\\s_?mcount\$";
+     $type = ".quad";
+     $alignment = 2;
+ } elsif ($arch eq "nds32") {
+diff --git a/scripts/show_delta b/scripts/show_delta
+index 264399307c4fc..28e67e1781941 100755
+--- a/scripts/show_delta
++++ b/scripts/show_delta
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python
+ # SPDX-License-Identifier: GPL-2.0-only
+ #
+ # show_deltas: Read list of printk messages instrumented with
+diff --git a/scripts/sphinx-pre-install b/scripts/sphinx-pre-install
+index 40fa6923e80ae..828a8615a9181 100755
+--- a/scripts/sphinx-pre-install
++++ b/scripts/sphinx-pre-install
+@@ -1,4 +1,4 @@
+-#!/usr/bin/perl
++#!/usr/bin/env perl
+ # SPDX-License-Identifier: GPL-2.0-or-later
+ use strict;
+ 
+diff --git a/scripts/split-man.pl b/scripts/split-man.pl
+index c3db607ee9ec1..96bd99dc977a5 100755
+--- a/scripts/split-man.pl
++++ b/scripts/split-man.pl
+@@ -1,4 +1,4 @@
+-#!/usr/bin/perl
++#!/usr/bin/env perl
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # Author: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
+diff --git a/scripts/tracing/draw_functrace.py b/scripts/tracing/draw_functrace.py
+index b657357585209..74f8aadfd4cbc 100755
+--- a/scripts/tracing/draw_functrace.py
++++ b/scripts/tracing/draw_functrace.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python
+ # SPDX-License-Identifier: GPL-2.0-only
+ 
+ """
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index 96903295a9677..7c49a7e92dd21 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -1202,11 +1202,17 @@ static const char *get_line_out_pfx(struct hda_codec *codec, int ch,
+ 		*index = ch;
+ 		return "Headphone";
+ 	case AUTO_PIN_LINE_OUT:
+-		/* This deals with the case where we have two DACs and
+-		 * one LO, one HP and one Speaker */
+-		if (!ch && cfg->speaker_outs && cfg->hp_outs) {
+-			bool hp_lo_shared = !path_has_mixer(codec, spec->hp_paths[0], ctl_type);
+-			bool spk_lo_shared = !path_has_mixer(codec, spec->speaker_paths[0], ctl_type);
++		/* This deals with the case where one HP or one Speaker or
++		 * one HP + one Speaker need to share the DAC with LO
++		 */
++		if (!ch) {
++			bool hp_lo_shared = false, spk_lo_shared = false;
++
++			if (cfg->speaker_outs)
++				spk_lo_shared = !path_has_mixer(codec,
++								spec->speaker_paths[0],	ctl_type);
++			if (cfg->hp_outs)
++				hp_lo_shared = !path_has_mixer(codec, spec->hp_paths[0], ctl_type);
+ 			if (hp_lo_shared && spk_lo_shared)
+ 				return spec->vmaster_mute.hook ? "PCM" : "Master";
+ 			if (hp_lo_shared)
+diff --git a/tools/perf/python/tracepoint.py b/tools/perf/python/tracepoint.py
+index eb76f6516247e..461848c7f57dc 100755
+--- a/tools/perf/python/tracepoint.py
++++ b/tools/perf/python/tracepoint.py
+@@ -1,4 +1,4 @@
+-#! /usr/bin/python
++#! /usr/bin/env python
+ # SPDX-License-Identifier: GPL-2.0
+ # -*- python -*-
+ # -*- coding: utf-8 -*-
+diff --git a/tools/perf/python/twatch.py b/tools/perf/python/twatch.py
+index ff87ccf5b7085..04f3db29b9bc1 100755
+--- a/tools/perf/python/twatch.py
++++ b/tools/perf/python/twatch.py
+@@ -1,4 +1,4 @@
+-#! /usr/bin/python
++#! /usr/bin/env python
+ # SPDX-License-Identifier: GPL-2.0-only
+ # -*- python -*-
+ # -*- coding: utf-8 -*-
+diff --git a/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py b/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
+index 3c47865bb247a..e15e20696d17b 100755
+--- a/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
++++ b/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python
+ # SPDX-License-Identifier: GPL-2.0-only
+ # -*- coding: utf-8 -*-
+ #
+diff --git a/tools/testing/ktest/compare-ktest-sample.pl b/tools/testing/ktest/compare-ktest-sample.pl
+index 4118eb4a842d2..ebea21d0a1be8 100755
+--- a/tools/testing/ktest/compare-ktest-sample.pl
++++ b/tools/testing/ktest/compare-ktest-sample.pl
+@@ -1,4 +1,4 @@
+-#!/usr/bin/perl
++#!/usr/bin/env perl
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ open (IN,"ktest.pl");
+diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
+index d4f7846d0745b..21516e293d171 100755
+--- a/tools/testing/kunit/kunit.py
++++ b/tools/testing/kunit/kunit.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python3
++#!/usr/bin/env python3
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # A thin wrapper on top of the KUnit Kernel
+diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
+index 3fbe1acd531ae..9a036e9d44554 100755
+--- a/tools/testing/kunit/kunit_tool_test.py
++++ b/tools/testing/kunit/kunit_tool_test.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python3
++#!/usr/bin/env python3
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # A collection of tests for tools/testing/kunit/kunit.py
+diff --git a/tools/testing/selftests/bpf/test_offload.py b/tools/testing/selftests/bpf/test_offload.py
+index b99bb8ed3ed4e..edaffd43da835 100755
+--- a/tools/testing/selftests/bpf/test_offload.py
++++ b/tools/testing/selftests/bpf/test_offload.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python3
++#!/usr/bin/env python3
+ 
+ # Copyright (C) 2017 Netronome Systems, Inc.
+ # Copyright (c) 2019 Mellanox Technologies. All rights reserved
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py b/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py
+index 0d4b9327c9b3f..2223337eed0c1 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py
++++ b/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ import subprocess
+diff --git a/tools/testing/selftests/kselftest/prefix.pl b/tools/testing/selftests/kselftest/prefix.pl
+index 31f7c2a0a8bd4..12a7f4ca2684d 100755
+--- a/tools/testing/selftests/kselftest/prefix.pl
++++ b/tools/testing/selftests/kselftest/prefix.pl
+@@ -1,4 +1,4 @@
+-#!/usr/bin/perl
++#!/usr/bin/env perl
+ # SPDX-License-Identifier: GPL-2.0
+ # Prefix all lines with "# ", unbuffered. Command being piped in may need
+ # to have unbuffering forced with "stdbuf -i0 -o0 -e0 $cmd".
+diff --git a/tools/testing/selftests/net/devlink_port_split.py b/tools/testing/selftests/net/devlink_port_split.py
+index 58bb7e9b88cec..834066d465fc1 100755
+--- a/tools/testing/selftests/net/devlink_port_split.py
++++ b/tools/testing/selftests/net/devlink_port_split.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python3
++#!/usr/bin/env python3
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ from subprocess import PIPE, Popen
+diff --git a/tools/testing/selftests/tc-testing/tdc_batch.py b/tools/testing/selftests/tc-testing/tdc_batch.py
+index 995f66ce43eba..35d5d94937842 100755
+--- a/tools/testing/selftests/tc-testing/tdc_batch.py
++++ b/tools/testing/selftests/tc-testing/tdc_batch.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python3
++#!/usr/bin/env python3
+ 
+ """
+ tdc_batch.py - a script to generate TC batch file
+diff --git a/tools/testing/selftests/tc-testing/tdc_multibatch.py b/tools/testing/selftests/tc-testing/tdc_multibatch.py
+index 5e7237952e497..48e1f17ff2e86 100755
+--- a/tools/testing/selftests/tc-testing/tdc_multibatch.py
++++ b/tools/testing/selftests/tc-testing/tdc_multibatch.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python3
++#!/usr/bin/env python3
+ # SPDX-License-Identifier: GPL-2.0
+ """
+ tdc_multibatch.py - a thin wrapper over tdc_batch.py to generate multiple batch


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-05-26 12:07 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-05-26 12:07 UTC (permalink / raw
  To: gentoo-commits

commit:     b18dd2e6aa98a9e8d8484a4cb5a7d6efc041be5d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 26 12:07:17 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 26 12:07:17 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b18dd2e6

Linux patch 5.10.40

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1039_linux-5.10.40.patch | 3700 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3704 insertions(+)

diff --git a/0000_README b/0000_README
index 824db67..27b8de0 100644
--- a/0000_README
+++ b/0000_README
@@ -199,6 +199,10 @@ Patch:  1038_linux-5.10.39.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.39
 
+Patch:  1039_linux-5.10.40.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.40
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1039_linux-5.10.40.patch b/1039_linux-5.10.40.patch
new file mode 100644
index 0000000..05aa3e0
--- /dev/null
+++ b/1039_linux-5.10.40.patch
@@ -0,0 +1,3700 @@
+diff --git a/Documentation/powerpc/syscall64-abi.rst b/Documentation/powerpc/syscall64-abi.rst
+index cf9b2857c72aa..d8242049bdcb5 100644
+--- a/Documentation/powerpc/syscall64-abi.rst
++++ b/Documentation/powerpc/syscall64-abi.rst
+@@ -96,6 +96,16 @@ auxiliary vector.
+ 
+ scv 0 syscalls will always behave as PPC_FEATURE2_HTM_NOSC.
+ 
++ptrace
++------
++When ptracing system calls (PTRACE_SYSCALL), the pt_regs.trap value contains
++the system call type that can be used to distinguish between sc and scv 0
++system calls, and the different register conventions can be accounted for.
++
++If the value of (pt_regs.trap & 0xfff0) is 0xc00 then the system call was
++performed with the sc instruction, if it is 0x3000 then the system call was
++performed with the scv 0 instruction.
++
+ vsyscall
+ ========
+ 
+diff --git a/Makefile b/Makefile
+index 38b703568da45..42c915ccc5b80 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 39
++SUBLEVEL = 40
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/openrisc/kernel/setup.c b/arch/openrisc/kernel/setup.c
+index 2416a9f915330..c6f9e7b9f7cb2 100644
+--- a/arch/openrisc/kernel/setup.c
++++ b/arch/openrisc/kernel/setup.c
+@@ -278,6 +278,8 @@ void calibrate_delay(void)
+ 	pr_cont("%lu.%02lu BogoMIPS (lpj=%lu)\n",
+ 		loops_per_jiffy / (500000 / HZ),
+ 		(loops_per_jiffy / (5000 / HZ)) % 100, loops_per_jiffy);
++
++	of_node_put(cpu);
+ }
+ 
+ void __init setup_arch(char **cmdline_p)
+diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
+index 8348feaaf46e5..5e88c351e6a45 100644
+--- a/arch/openrisc/mm/init.c
++++ b/arch/openrisc/mm/init.c
+@@ -76,7 +76,6 @@ static void __init map_ram(void)
+ 	/* These mark extents of read-only kernel pages...
+ 	 * ...from vmlinux.lds.S
+ 	 */
+-	struct memblock_region *region;
+ 
+ 	v = PAGE_OFFSET;
+ 
+@@ -122,7 +121,7 @@ static void __init map_ram(void)
+ 		}
+ 
+ 		printk(KERN_INFO "%s: Memory: 0x%x-0x%x\n", __func__,
+-		       region->base, region->base + region->size);
++		       start, end);
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
+index c1fbccb043903..3e8e19f5746c7 100644
+--- a/arch/powerpc/include/asm/hvcall.h
++++ b/arch/powerpc/include/asm/hvcall.h
+@@ -437,6 +437,9 @@
+  */
+ long plpar_hcall_norets(unsigned long opcode, ...);
+ 
++/* Variant which does not do hcall tracing */
++long plpar_hcall_norets_notrace(unsigned long opcode, ...);
++
+ /**
+  * plpar_hcall: - Make a pseries hypervisor call
+  * @opcode: The hypervisor call to make.
+diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
+index 9362c94fe3aa0..588bfb9a0579c 100644
+--- a/arch/powerpc/include/asm/paravirt.h
++++ b/arch/powerpc/include/asm/paravirt.h
+@@ -24,19 +24,35 @@ static inline u32 yield_count_of(int cpu)
+ 	return be32_to_cpu(yield_count);
+ }
+ 
++/*
++ * Spinlock code confers and prods, so don't trace the hcalls because the
++ * tracing code takes spinlocks which can cause recursion deadlocks.
++ *
++ * These calls are made while the lock is not held: the lock slowpath yields if
++ * it can not acquire the lock, and unlock slow path might prod if a waiter has
++ * yielded). So this may not be a problem for simple spin locks because the
++ * tracing does not technically recurse on the lock, but we avoid it anyway.
++ *
++ * However the queued spin lock contended path is more strictly ordered: the
++ * H_CONFER hcall is made after the task has queued itself on the lock, so then
++ * recursing on that lock will cause the task to then queue up again behind the
++ * first instance (or worse: queued spinlocks use tricks that assume a context
++ * never waits on more than one spinlock, so such recursion may cause random
++ * corruption in the lock code).
++ */
+ static inline void yield_to_preempted(int cpu, u32 yield_count)
+ {
+-	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
++	plpar_hcall_norets_notrace(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
+ }
+ 
+ static inline void prod_cpu(int cpu)
+ {
+-	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
++	plpar_hcall_norets_notrace(H_PROD, get_hard_smp_processor_id(cpu));
+ }
+ 
+ static inline void yield_to_any(void)
+ {
+-	plpar_hcall_norets(H_CONFER, -1, 0);
++	plpar_hcall_norets_notrace(H_CONFER, -1, 0);
+ }
+ #else
+ static inline bool is_shared_processor(void)
+diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
+index d6f262df4f346..a7e7688f57eca 100644
+--- a/arch/powerpc/include/asm/ptrace.h
++++ b/arch/powerpc/include/asm/ptrace.h
+@@ -19,6 +19,7 @@
+ #ifndef _ASM_POWERPC_PTRACE_H
+ #define _ASM_POWERPC_PTRACE_H
+ 
++#include <linux/err.h>
+ #include <uapi/asm/ptrace.h>
+ #include <asm/asm-const.h>
+ 
+@@ -144,25 +145,6 @@ extern unsigned long profile_pc(struct pt_regs *regs);
+ long do_syscall_trace_enter(struct pt_regs *regs);
+ void do_syscall_trace_leave(struct pt_regs *regs);
+ 
+-#define kernel_stack_pointer(regs) ((regs)->gpr[1])
+-static inline int is_syscall_success(struct pt_regs *regs)
+-{
+-	return !(regs->ccr & 0x10000000);
+-}
+-
+-static inline long regs_return_value(struct pt_regs *regs)
+-{
+-	if (is_syscall_success(regs))
+-		return regs->gpr[3];
+-	else
+-		return -regs->gpr[3];
+-}
+-
+-static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
+-{
+-	regs->gpr[3] = rc;
+-}
+-
+ #ifdef __powerpc64__
+ #define user_mode(regs) ((((regs)->msr) >> MSR_PR_LG) & 0x1)
+ #else
+@@ -245,6 +227,31 @@ static inline void set_trap_norestart(struct pt_regs *regs)
+ 	regs->trap |= 0x10;
+ }
+ 
++#define kernel_stack_pointer(regs) ((regs)->gpr[1])
++static inline int is_syscall_success(struct pt_regs *regs)
++{
++	if (trap_is_scv(regs))
++		return !IS_ERR_VALUE((unsigned long)regs->gpr[3]);
++	else
++		return !(regs->ccr & 0x10000000);
++}
++
++static inline long regs_return_value(struct pt_regs *regs)
++{
++	if (trap_is_scv(regs))
++		return regs->gpr[3];
++
++	if (is_syscall_success(regs))
++		return regs->gpr[3];
++	else
++		return -regs->gpr[3];
++}
++
++static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
++{
++	regs->gpr[3] = rc;
++}
++
+ #define arch_has_single_step()	(1)
+ #define arch_has_block_step()	(true)
+ #define ARCH_HAS_USER_SINGLE_STEP_REPORT
+diff --git a/arch/powerpc/include/asm/syscall.h b/arch/powerpc/include/asm/syscall.h
+index fd1b518eed17c..ba0f88f3a30da 100644
+--- a/arch/powerpc/include/asm/syscall.h
++++ b/arch/powerpc/include/asm/syscall.h
+@@ -41,11 +41,17 @@ static inline void syscall_rollback(struct task_struct *task,
+ static inline long syscall_get_error(struct task_struct *task,
+ 				     struct pt_regs *regs)
+ {
+-	/*
+-	 * If the system call failed,
+-	 * regs->gpr[3] contains a positive ERRORCODE.
+-	 */
+-	return (regs->ccr & 0x10000000UL) ? -regs->gpr[3] : 0;
++	if (trap_is_scv(regs)) {
++		unsigned long error = regs->gpr[3];
++
++		return IS_ERR_VALUE(error) ? error : 0;
++	} else {
++		/*
++		 * If the system call failed,
++		 * regs->gpr[3] contains a positive ERRORCODE.
++		 */
++		return (regs->ccr & 0x10000000UL) ? -regs->gpr[3] : 0;
++	}
+ }
+ 
+ static inline long syscall_get_return_value(struct task_struct *task,
+@@ -58,18 +64,22 @@ static inline void syscall_set_return_value(struct task_struct *task,
+ 					    struct pt_regs *regs,
+ 					    int error, long val)
+ {
+-	/*
+-	 * In the general case it's not obvious that we must deal with CCR
+-	 * here, as the syscall exit path will also do that for us. However
+-	 * there are some places, eg. the signal code, which check ccr to
+-	 * decide if the value in r3 is actually an error.
+-	 */
+-	if (error) {
+-		regs->ccr |= 0x10000000L;
+-		regs->gpr[3] = error;
++	if (trap_is_scv(regs)) {
++		regs->gpr[3] = (long) error ?: val;
+ 	} else {
+-		regs->ccr &= ~0x10000000L;
+-		regs->gpr[3] = val;
++		/*
++		 * In the general case it's not obvious that we must deal with
++		 * CCR here, as the syscall exit path will also do that for us.
++		 * However there are some places, eg. the signal code, which
++		 * check ccr to decide if the value in r3 is actually an error.
++		 */
++		if (error) {
++			regs->ccr |= 0x10000000L;
++			regs->gpr[3] = error;
++		} else {
++			regs->ccr &= ~0x10000000L;
++			regs->gpr[3] = val;
++		}
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 3b871ecb3a921..3f8426bccd168 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -368,11 +368,11 @@ void __init early_setup(unsigned long dt_ptr)
+ 	apply_feature_fixups();
+ 	setup_feature_keys();
+ 
+-	early_ioremap_setup();
+-
+ 	/* Initialize the hash table or TLB handling */
+ 	early_init_mmu();
+ 
++	early_ioremap_setup();
++
+ 	/*
+ 	 * After firmware and early platform setup code has set things up,
+ 	 * we note the SPR values for configurable control/performance
+diff --git a/arch/powerpc/platforms/pseries/hvCall.S b/arch/powerpc/platforms/pseries/hvCall.S
+index 2136e42833af3..8a2b8d64265bc 100644
+--- a/arch/powerpc/platforms/pseries/hvCall.S
++++ b/arch/powerpc/platforms/pseries/hvCall.S
+@@ -102,6 +102,16 @@ END_FTR_SECTION(0, 1);						\
+ #define HCALL_BRANCH(LABEL)
+ #endif
+ 
++_GLOBAL_TOC(plpar_hcall_norets_notrace)
++	HMT_MEDIUM
++
++	mfcr	r0
++	stw	r0,8(r1)
++	HVSC				/* invoke the hypervisor */
++	lwz	r0,8(r1)
++	mtcrf	0xff,r0
++	blr				/* return r3 = status */
++
+ _GLOBAL_TOC(plpar_hcall_norets)
+ 	HMT_MEDIUM
+ 
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index 764170fdb0f74..1c3ac0f663369 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -1827,8 +1827,7 @@ void hcall_tracepoint_unregfunc(void)
+ 
+ /*
+  * Since the tracing code might execute hcalls we need to guard against
+- * recursion. One example of this are spinlocks calling H_YIELD on
+- * shared processor partitions.
++ * recursion.
+  */
+ static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
+ 
+diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
+index 017de6cc87dc6..72f655c238cf1 100644
+--- a/arch/x86/boot/compressed/head_64.S
++++ b/arch/x86/boot/compressed/head_64.S
+@@ -172,11 +172,21 @@ SYM_FUNC_START(startup_32)
+ 	 */
+ 	call	get_sev_encryption_bit
+ 	xorl	%edx, %edx
++#ifdef	CONFIG_AMD_MEM_ENCRYPT
+ 	testl	%eax, %eax
+ 	jz	1f
+ 	subl	$32, %eax	/* Encryption bit is always above bit 31 */
+ 	bts	%eax, %edx	/* Set encryption mask for page tables */
++	/*
++	 * Mark SEV as active in sev_status so that startup32_check_sev_cbit()
++	 * will do a check. The sev_status memory will be fully initialized
++	 * with the contents of MSR_AMD_SEV_STATUS later in
++	 * set_sev_encryption_mask(). For now it is sufficient to know that SEV
++	 * is active.
++	 */
++	movl	$1, rva(sev_status)(%ebp)
+ 1:
++#endif
+ 
+ 	/* Initialize Page tables to 0 */
+ 	leal	rva(pgtable)(%ebx), %edi
+@@ -261,6 +271,9 @@ SYM_FUNC_START(startup_32)
+ 	movl	%esi, %edx
+ 1:
+ #endif
++	/* Check if the C-bit position is correct when SEV is active */
++	call	startup32_check_sev_cbit
++
+ 	pushl	$__KERNEL_CS
+ 	pushl	%eax
+ 
+@@ -786,6 +799,78 @@ SYM_DATA_START_LOCAL(loaded_image_proto)
+ SYM_DATA_END(loaded_image_proto)
+ #endif
+ 
++/*
++ * Check for the correct C-bit position when the startup_32 boot-path is used.
++ *
++ * The check makes use of the fact that all memory is encrypted when paging is
++ * disabled. The function creates 64 bits of random data using the RDRAND
++ * instruction. RDRAND is mandatory for SEV guests, so always available. If the
++ * hypervisor violates that the kernel will crash right here.
++ *
++ * The 64 bits of random data are stored to a memory location and at the same
++ * time kept in the %eax and %ebx registers. Since encryption is always active
++ * when paging is off the random data will be stored encrypted in main memory.
++ *
++ * Then paging is enabled. When the C-bit position is correct all memory is
++ * still mapped encrypted and comparing the register values with memory will
++ * succeed. An incorrect C-bit position will map all memory unencrypted, so that
++ * the compare will use the encrypted random data and fail.
++ */
++	__HEAD
++	.code32
++SYM_FUNC_START(startup32_check_sev_cbit)
++#ifdef CONFIG_AMD_MEM_ENCRYPT
++	pushl	%eax
++	pushl	%ebx
++	pushl	%ecx
++	pushl	%edx
++
++	/* Check for non-zero sev_status */
++	movl	rva(sev_status)(%ebp), %eax
++	testl	%eax, %eax
++	jz	4f
++
++	/*
++	 * Get two 32-bit random values - Don't bail out if RDRAND fails
++	 * because it is better to prevent forward progress if no random value
++	 * can be gathered.
++	 */
++1:	rdrand	%eax
++	jnc	1b
++2:	rdrand	%ebx
++	jnc	2b
++
++	/* Store to memory and keep it in the registers */
++	movl	%eax, rva(sev_check_data)(%ebp)
++	movl	%ebx, rva(sev_check_data+4)(%ebp)
++
++	/* Enable paging to see if encryption is active */
++	movl	%cr0, %edx			 /* Backup %cr0 in %edx */
++	movl	$(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
++	movl	%ecx, %cr0
++
++	cmpl	%eax, rva(sev_check_data)(%ebp)
++	jne	3f
++	cmpl	%ebx, rva(sev_check_data+4)(%ebp)
++	jne	3f
++
++	movl	%edx, %cr0	/* Restore previous %cr0 */
++
++	jmp	4f
++
++3:	/* Check failed - hlt the machine */
++	hlt
++	jmp	3b
++
++4:
++	popl	%edx
++	popl	%ecx
++	popl	%ebx
++	popl	%eax
++#endif
++	ret
++SYM_FUNC_END(startup32_check_sev_cbit)
++
+ /*
+  * Stack and heap for uncompression
+  */
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 0b9975200ae35..ee659b5faf714 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -5563,7 +5563,7 @@ __init int intel_pmu_init(void)
+ 	 * Check all LBT MSR here.
+ 	 * Disable LBR access if any LBR MSRs can not be accessed.
+ 	 */
+-	if (x86_pmu.lbr_nr && !check_msr(x86_pmu.lbr_tos, 0x3UL))
++	if (x86_pmu.lbr_tos && !check_msr(x86_pmu.lbr_tos, 0x3UL))
+ 		x86_pmu.lbr_nr = 0;
+ 	for (i = 0; i < x86_pmu.lbr_nr; i++) {
+ 		if (!(check_msr(x86_pmu.lbr_from + i, 0xffffUL) &&
+diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
+index 387b716698187..ecb20b17b7df6 100644
+--- a/arch/x86/kernel/sev-es-shared.c
++++ b/arch/x86/kernel/sev-es-shared.c
+@@ -63,6 +63,7 @@ static bool sev_es_negotiate_protocol(void)
+ 
+ static __always_inline void vc_ghcb_invalidate(struct ghcb *ghcb)
+ {
++	ghcb->save.sw_exit_code = 0;
+ 	memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
+ }
+ 
+diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
+index 04a780abb512d..e0cdab7cb632b 100644
+--- a/arch/x86/kernel/sev-es.c
++++ b/arch/x86/kernel/sev-es.c
+@@ -191,8 +191,18 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
+ 	if (unlikely(data->ghcb_active)) {
+ 		/* GHCB is already in use - save its contents */
+ 
+-		if (unlikely(data->backup_ghcb_active))
+-			return NULL;
++		if (unlikely(data->backup_ghcb_active)) {
++			/*
++			 * Backup-GHCB is also already in use. There is no way
++			 * to continue here so just kill the machine. To make
++			 * panic() work, mark GHCBs inactive so that messages
++			 * can be printed out.
++			 */
++			data->ghcb_active        = false;
++			data->backup_ghcb_active = false;
++
++			panic("Unable to handle #VC exception! GHCB and Backup GHCB are already in use");
++		}
+ 
+ 		/* Mark backup_ghcb active before writing to it */
+ 		data->backup_ghcb_active = true;
+@@ -209,24 +219,6 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
+ 	return ghcb;
+ }
+ 
+-static __always_inline void sev_es_put_ghcb(struct ghcb_state *state)
+-{
+-	struct sev_es_runtime_data *data;
+-	struct ghcb *ghcb;
+-
+-	data = this_cpu_read(runtime_data);
+-	ghcb = &data->ghcb_page;
+-
+-	if (state->ghcb) {
+-		/* Restore GHCB from Backup */
+-		*ghcb = *state->ghcb;
+-		data->backup_ghcb_active = false;
+-		state->ghcb = NULL;
+-	} else {
+-		data->ghcb_active = false;
+-	}
+-}
+-
+ /* Needed in vc_early_forward_exception */
+ void do_early_exception(struct pt_regs *regs, int trapnr);
+ 
+@@ -296,31 +288,44 @@ static enum es_result vc_write_mem(struct es_em_ctxt *ctxt,
+ 	u16 d2;
+ 	u8  d1;
+ 
+-	/* If instruction ran in kernel mode and the I/O buffer is in kernel space */
+-	if (!user_mode(ctxt->regs) && !access_ok(target, size)) {
+-		memcpy(dst, buf, size);
+-		return ES_OK;
+-	}
+-
++	/*
++	 * This function uses __put_user() independent of whether kernel or user
++	 * memory is accessed. This works fine because __put_user() does no
++	 * sanity checks of the pointer being accessed. All that it does is
++	 * to report when the access failed.
++	 *
++	 * Also, this function runs in atomic context, so __put_user() is not
++	 * allowed to sleep. The page-fault handler detects that it is running
++	 * in atomic context and will not try to take mmap_sem and handle the
++	 * fault, so additional pagefault_enable()/disable() calls are not
++	 * needed.
++	 *
++	 * The access can't be done via copy_to_user() here because
++	 * vc_write_mem() must not use string instructions to access unsafe
++	 * memory. The reason is that MOVS is emulated by the #VC handler by
++	 * splitting the move up into a read and a write and taking a nested #VC
++	 * exception on whatever of them is the MMIO access. Using string
++	 * instructions here would cause infinite nesting.
++	 */
+ 	switch (size) {
+ 	case 1:
+ 		memcpy(&d1, buf, 1);
+-		if (put_user(d1, target))
++		if (__put_user(d1, target))
+ 			goto fault;
+ 		break;
+ 	case 2:
+ 		memcpy(&d2, buf, 2);
+-		if (put_user(d2, target))
++		if (__put_user(d2, target))
+ 			goto fault;
+ 		break;
+ 	case 4:
+ 		memcpy(&d4, buf, 4);
+-		if (put_user(d4, target))
++		if (__put_user(d4, target))
+ 			goto fault;
+ 		break;
+ 	case 8:
+ 		memcpy(&d8, buf, 8);
+-		if (put_user(d8, target))
++		if (__put_user(d8, target))
+ 			goto fault;
+ 		break;
+ 	default:
+@@ -351,30 +356,43 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
+ 	u16 d2;
+ 	u8  d1;
+ 
+-	/* If instruction ran in kernel mode and the I/O buffer is in kernel space */
+-	if (!user_mode(ctxt->regs) && !access_ok(s, size)) {
+-		memcpy(buf, src, size);
+-		return ES_OK;
+-	}
+-
++	/*
++	 * This function uses __get_user() independent of whether kernel or user
++	 * memory is accessed. This works fine because __get_user() does no
++	 * sanity checks of the pointer being accessed. All that it does is
++	 * to report when the access failed.
++	 *
++	 * Also, this function runs in atomic context, so __get_user() is not
++	 * allowed to sleep. The page-fault handler detects that it is running
++	 * in atomic context and will not try to take mmap_sem and handle the
++	 * fault, so additional pagefault_enable()/disable() calls are not
++	 * needed.
++	 *
++	 * The access can't be done via copy_from_user() here because
++	 * vc_read_mem() must not use string instructions to access unsafe
++	 * memory. The reason is that MOVS is emulated by the #VC handler by
++	 * splitting the move up into a read and a write and taking a nested #VC
++	 * exception on whatever of them is the MMIO access. Using string
++	 * instructions here would cause infinite nesting.
++	 */
+ 	switch (size) {
+ 	case 1:
+-		if (get_user(d1, s))
++		if (__get_user(d1, s))
+ 			goto fault;
+ 		memcpy(buf, &d1, 1);
+ 		break;
+ 	case 2:
+-		if (get_user(d2, s))
++		if (__get_user(d2, s))
+ 			goto fault;
+ 		memcpy(buf, &d2, 2);
+ 		break;
+ 	case 4:
+-		if (get_user(d4, s))
++		if (__get_user(d4, s))
+ 			goto fault;
+ 		memcpy(buf, &d4, 4);
+ 		break;
+ 	case 8:
+-		if (get_user(d8, s))
++		if (__get_user(d8, s))
+ 			goto fault;
+ 		memcpy(buf, &d8, 8);
+ 		break;
+@@ -434,6 +452,29 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt
+ /* Include code shared with pre-decompression boot stage */
+ #include "sev-es-shared.c"
+ 
++static __always_inline void sev_es_put_ghcb(struct ghcb_state *state)
++{
++	struct sev_es_runtime_data *data;
++	struct ghcb *ghcb;
++
++	data = this_cpu_read(runtime_data);
++	ghcb = &data->ghcb_page;
++
++	if (state->ghcb) {
++		/* Restore GHCB from Backup */
++		*ghcb = *state->ghcb;
++		data->backup_ghcb_active = false;
++		state->ghcb = NULL;
++	} else {
++		/*
++		 * Invalidate the GHCB so a VMGEXIT instruction issued
++		 * from userspace won't appear to be valid.
++		 */
++		vc_ghcb_invalidate(ghcb);
++		data->ghcb_active = false;
++	}
++}
++
+ void noinstr __sev_es_nmi_complete(void)
+ {
+ 	struct ghcb_state state;
+@@ -1228,6 +1269,10 @@ static __always_inline void vc_forward_exception(struct es_em_ctxt *ctxt)
+ 	case X86_TRAP_UD:
+ 		exc_invalid_op(ctxt->regs);
+ 		break;
++	case X86_TRAP_PF:
++		write_cr2(ctxt->fi.cr2);
++		exc_page_fault(ctxt->regs, error_code);
++		break;
+ 	case X86_TRAP_AC:
+ 		exc_alignment_check(ctxt->regs, error_code);
+ 		break;
+@@ -1257,7 +1302,6 @@ static __always_inline bool on_vc_fallback_stack(struct pt_regs *regs)
+  */
+ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
+ {
+-	struct sev_es_runtime_data *data = this_cpu_read(runtime_data);
+ 	irqentry_state_t irq_state;
+ 	struct ghcb_state state;
+ 	struct es_em_ctxt ctxt;
+@@ -1283,16 +1327,6 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
+ 	 */
+ 
+ 	ghcb = sev_es_get_ghcb(&state);
+-	if (!ghcb) {
+-		/*
+-		 * Mark GHCBs inactive so that panic() is able to print the
+-		 * message.
+-		 */
+-		data->ghcb_active        = false;
+-		data->backup_ghcb_active = false;
+-
+-		panic("Unable to handle #VC exception! GHCB and Backup GHCB are already in use");
+-	}
+ 
+ 	vc_ghcb_invalidate(ghcb);
+ 	result = vc_init_em_ctxt(&ctxt, regs, error_code);
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 9a5a50cdaab59..8064df6382227 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1262,16 +1262,16 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ 	/* Get mfn list */
+ 	xen_build_dynamic_phys_to_machine();
+ 
++	/* Work out if we support NX */
++	get_cpu_cap(&boot_cpu_data);
++	x86_configure_nx();
++
+ 	/*
+ 	 * Set up kernel GDT and segment registers, mainly so that
+ 	 * -fstack-protector code can be executed.
+ 	 */
+ 	xen_setup_gdt(0);
+ 
+-	/* Work out if we support NX */
+-	get_cpu_cap(&boot_cpu_data);
+-	x86_configure_nx();
+-
+ 	/* Determine virtual and physical address sizes */
+ 	get_cpu_address_sizes(&boot_cpu_data);
+ 
+diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
+index 9874fc1c815b5..1831099306aa9 100644
+--- a/drivers/cdrom/gdrom.c
++++ b/drivers/cdrom/gdrom.c
+@@ -743,6 +743,13 @@ static const struct blk_mq_ops gdrom_mq_ops = {
+ static int probe_gdrom(struct platform_device *devptr)
+ {
+ 	int err;
++
++	/*
++	 * Ensure our "one" device is initialized properly in case of previous
++	 * usages of it
++	 */
++	memset(&gd, 0, sizeof(gd));
++
+ 	/* Start the device */
+ 	if (gdrom_execute_diagnostic() != 1) {
+ 		pr_warn("ATA Probe for GDROM failed\n");
+@@ -831,6 +838,8 @@ static int remove_gdrom(struct platform_device *devptr)
+ 	if (gdrom_major)
+ 		unregister_blkdev(gdrom_major, GDROM_DEV_NAME);
+ 	unregister_cdrom(gd.cd_info);
++	kfree(gd.cd_info);
++	kfree(gd.toc);
+ 
+ 	return 0;
+ }
+@@ -846,7 +855,7 @@ static struct platform_driver gdrom_driver = {
+ static int __init init_gdrom(void)
+ {
+ 	int rc;
+-	gd.toc = NULL;
++
+ 	rc = platform_driver_register(&gdrom_driver);
+ 	if (rc)
+ 		return rc;
+@@ -862,8 +871,6 @@ static void __exit exit_gdrom(void)
+ {
+ 	platform_device_unregister(pd);
+ 	platform_driver_unregister(&gdrom_driver);
+-	kfree(gd.toc);
+-	kfree(gd.cd_info);
+ }
+ 
+ module_init(init_gdrom);
+diff --git a/drivers/firmware/arm_scpi.c b/drivers/firmware/arm_scpi.c
+index d0dee37ad5228..4ceba5ef78958 100644
+--- a/drivers/firmware/arm_scpi.c
++++ b/drivers/firmware/arm_scpi.c
+@@ -552,8 +552,10 @@ static unsigned long scpi_clk_get_val(u16 clk_id)
+ 
+ 	ret = scpi_send_message(CMD_GET_CLOCK_VALUE, &le_clk_id,
+ 				sizeof(le_clk_id), &rate, sizeof(rate));
++	if (ret)
++		return 0;
+ 
+-	return ret ? ret : le32_to_cpu(rate);
++	return le32_to_cpu(rate);
+ }
+ 
+ static int scpi_clk_set_val(u16 clk_id, unsigned long rate)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index ab7755a3885a6..532250c2b19ee 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -267,7 +267,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
+ 	*addr += offset & ~PAGE_MASK;
+ 
+ 	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
+-	num_bytes = num_pages * 8;
++	num_bytes = num_pages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
+ 
+ 	r = amdgpu_job_alloc_with_ib(adev, num_dw * 4 + num_bytes,
+ 				     AMDGPU_IB_POOL_DELAYED, &job);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 4ebb43e090999..fc8da5fed779b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -1334,9 +1334,10 @@ static const struct soc15_reg_golden golden_settings_gc_10_1_2[] =
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG, 0xffffffff, 0x20000000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG2, 0xffffffff, 0x00000420),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG3, 0xffffffff, 0x00000200),
+-	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x04800000),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x04900000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DFSM_TILES_IN_FLIGHT, 0x0000ffff, 0x0000003f),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_LAST_OF_BURST_CONFIG, 0xffffffff, 0x03860204),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG, 0x0c1800ff, 0x00000044),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL, 0x1ff0ffff, 0x00000500),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGE_PRIV_CONTROL, 0x00007fff, 0x000001fe),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL1_PIPE_STEER, 0xffffffff, 0xe4e4e4e4),
+@@ -1354,12 +1355,13 @@ static const struct soc15_reg_golden golden_settings_gc_10_1_2[] =
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_ENHANCE_2, 0x00000820, 0x00000820),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_LINE_STIPPLE_STATE, 0x0000ff0f, 0x00000000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmRMI_SPARE, 0xffffffff, 0xffff3101),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSPI_CONFIG_CNTL_1, 0x001f0000, 0x00070104),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_ALU_CLK_CTRL, 0xffffffff, 0xffffffff),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_ARB_CONFIG, 0x00000133, 0x00000130),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_LDS_CLK_CTRL, 0xffffffff, 0xffffffff),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmTA_CNTL_AUX, 0xfff7ffff, 0x01030000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmTCP_CNTL, 0xffdf80ff, 0x479c0010),
+-	SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0xffffffff, 0x00800000)
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0xffffffff, 0x00c00000)
+ };
+ 
+ static void gfx_v10_rlcg_wreg(struct amdgpu_device *adev, u32 offset, u32 v)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 957c12b727676..fb15e8b5af32f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -4859,7 +4859,7 @@ static void gfx_v9_0_update_3d_clock_gating(struct amdgpu_device *adev,
+ 	amdgpu_gfx_rlc_enter_safe_mode(adev);
+ 
+ 	/* Enable 3D CGCG/CGLS */
+-	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)) {
++	if (enable) {
+ 		/* write cmd to clear cgcg/cgls ov */
+ 		def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
+ 		/* unset CGCG override */
+@@ -4871,8 +4871,12 @@ static void gfx_v9_0_update_3d_clock_gating(struct amdgpu_device *adev,
+ 		/* enable 3Dcgcg FSM(0x0000363f) */
+ 		def = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D);
+ 
+-		data = (0x36 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
+-			RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
++		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)
++			data = (0x36 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
++				RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
++		else
++			data = 0x0 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT;
++
+ 		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGLS)
+ 			data |= (0x000F << RLC_CGCG_CGLS_CTRL_3D__CGLS_REP_COMPANSAT_DELAY__SHIFT) |
+ 				RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK;
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+index 9c72b95b74639..be23d637c3d77 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+@@ -124,6 +124,10 @@ static const struct soc15_reg_golden golden_settings_sdma_nv14[] = {
+ 
+ static const struct soc15_reg_golden golden_settings_sdma_nv12[] = {
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_RLC3_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_GB_ADDR_CONFIG, 0x001877ff, 0x00000044),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 0x001877ff, 0x00000044),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_GB_ADDR_CONFIG, 0x001877ff, 0x00000044),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 0x001877ff, 0x00000044),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_RLC3_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index 7efc618887e21..37226cbbbd11a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -1183,7 +1183,6 @@ static int soc15_common_early_init(void *handle)
+ 			adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
+ 				AMD_CG_SUPPORT_GFX_MGLS |
+ 				AMD_CG_SUPPORT_GFX_CP_LS |
+-				AMD_CG_SUPPORT_GFX_3D_CGCG |
+ 				AMD_CG_SUPPORT_GFX_3D_CGLS |
+ 				AMD_CG_SUPPORT_GFX_CGCG |
+ 				AMD_CG_SUPPORT_GFX_CGLS |
+@@ -1203,7 +1202,6 @@ static int soc15_common_early_init(void *handle)
+ 				AMD_CG_SUPPORT_GFX_MGLS |
+ 				AMD_CG_SUPPORT_GFX_RLC_LS |
+ 				AMD_CG_SUPPORT_GFX_CP_LS |
+-				AMD_CG_SUPPORT_GFX_3D_CGCG |
+ 				AMD_CG_SUPPORT_GFX_3D_CGLS |
+ 				AMD_CG_SUPPORT_GFX_CGCG |
+ 				AMD_CG_SUPPORT_GFX_CGLS |
+diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
+index 4adbc2bba97fb..c724fba8a87b9 100644
+--- a/drivers/gpu/drm/i915/gt/gen7_renderclear.c
++++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
+@@ -397,7 +397,10 @@ static void emit_batch(struct i915_vma * const vma,
+ 	gen7_emit_pipeline_invalidate(&cmds);
+ 	batch_add(&cmds, MI_LOAD_REGISTER_IMM(2));
+ 	batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_0_GEN7));
+-	batch_add(&cmds, 0xffff0000);
++	batch_add(&cmds, 0xffff0000 |
++			((IS_IVB_GT1(i915) || IS_VALLEYVIEW(i915)) ?
++			 HIZ_RAW_STALL_OPT_DISABLE :
++			 0));
+ 	batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_1));
+ 	batch_add(&cmds, 0xffff0000 | PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
+ 	gen7_emit_pipeline_invalidate(&cmds);
+diff --git a/drivers/hwmon/lm80.c b/drivers/hwmon/lm80.c
+index ac4adb44b224d..97ab491d2922c 100644
+--- a/drivers/hwmon/lm80.c
++++ b/drivers/hwmon/lm80.c
+@@ -596,7 +596,6 @@ static int lm80_probe(struct i2c_client *client)
+ 	struct device *dev = &client->dev;
+ 	struct device *hwmon_dev;
+ 	struct lm80_data *data;
+-	int rv;
+ 
+ 	data = devm_kzalloc(dev, sizeof(struct lm80_data), GFP_KERNEL);
+ 	if (!data)
+@@ -609,14 +608,8 @@ static int lm80_probe(struct i2c_client *client)
+ 	lm80_init_client(client);
+ 
+ 	/* A few vars need to be filled upon startup */
+-	rv = lm80_read_value(client, LM80_REG_FAN_MIN(1));
+-	if (rv < 0)
+-		return rv;
+-	data->fan[f_min][0] = rv;
+-	rv = lm80_read_value(client, LM80_REG_FAN_MIN(2));
+-	if (rv < 0)
+-		return rv;
+-	data->fan[f_min][1] = rv;
++	data->fan[f_min][0] = lm80_read_value(client, LM80_REG_FAN_MIN(1));
++	data->fan[f_min][1] = lm80_read_value(client, LM80_REG_FAN_MIN(2));
+ 
+ 	hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name,
+ 							   data, lm80_groups);
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 6af066a2c8c06..d1e94147fb165 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -482,6 +482,7 @@ static void cma_release_dev(struct rdma_id_private *id_priv)
+ 	list_del(&id_priv->list);
+ 	cma_dev_put(id_priv->cma_dev);
+ 	id_priv->cma_dev = NULL;
++	id_priv->id.device = NULL;
+ 	if (id_priv->id.route.addr.dev_addr.sgid_attr) {
+ 		rdma_put_gid_attr(id_priv->id.route.addr.dev_addr.sgid_attr);
+ 		id_priv->id.route.addr.dev_addr.sgid_attr = NULL;
+@@ -1864,6 +1865,7 @@ static void _destroy_id(struct rdma_id_private *id_priv,
+ 				iw_destroy_cm_id(id_priv->cm_id.iw);
+ 		}
+ 		cma_leave_mc_groups(id_priv);
++		rdma_restrack_del(&id_priv->res);
+ 		cma_release_dev(id_priv);
+ 	}
+ 
+@@ -1877,7 +1879,6 @@ static void _destroy_id(struct rdma_id_private *id_priv,
+ 	kfree(id_priv->id.route.path_rec);
+ 
+ 	put_net(id_priv->id.route.addr.dev_addr.net);
+-	rdma_restrack_del(&id_priv->res);
+ 	kfree(id_priv);
+ }
+ 
+@@ -3740,7 +3741,7 @@ int rdma_listen(struct rdma_cm_id *id, int backlog)
+ 	}
+ 
+ 	id_priv->backlog = backlog;
+-	if (id->device) {
++	if (id_priv->cma_dev) {
+ 		if (rdma_cap_ib_cm(id->device, 1)) {
+ 			ret = cma_ib_listen(id_priv);
+ 			if (ret)
+diff --git a/drivers/infiniband/core/uverbs_std_types_device.c b/drivers/infiniband/core/uverbs_std_types_device.c
+index 9ec6971056fa8..049684880ae03 100644
+--- a/drivers/infiniband/core/uverbs_std_types_device.c
++++ b/drivers/infiniband/core/uverbs_std_types_device.c
+@@ -117,8 +117,8 @@ static int UVERBS_HANDLER(UVERBS_METHOD_INFO_HANDLES)(
+ 		return ret;
+ 
+ 	uapi_object = uapi_get_object(attrs->ufile->device->uapi, object_id);
+-	if (!uapi_object)
+-		return -EINVAL;
++	if (IS_ERR(uapi_object))
++		return PTR_ERR(uapi_object);
+ 
+ 	handles = gather_objects_handle(attrs->ufile, uapi_object, attrs,
+ 					out_len, &total);
+@@ -331,6 +331,9 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_GID_TABLE)(
+ 	if (ret)
+ 		return ret;
+ 
++	if (!user_entry_size)
++		return -EINVAL;
++
+ 	max_entries = uverbs_attr_ptr_get_array_size(
+ 		attrs, UVERBS_ATTR_QUERY_GID_TABLE_RESP_ENTRIES,
+ 		user_entry_size);
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index efb9ec99b68bd..06a8732576193 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -559,9 +559,8 @@ static bool devx_is_valid_obj_id(struct uverbs_attr_bundle *attrs,
+ 	case UVERBS_OBJECT_QP:
+ 	{
+ 		struct mlx5_ib_qp *qp = to_mqp(uobj->object);
+-		enum ib_qp_type	qp_type = qp->ibqp.qp_type;
+ 
+-		if (qp_type == IB_QPT_RAW_PACKET ||
++		if (qp->type == IB_QPT_RAW_PACKET ||
+ 		    (qp->flags & IB_QP_CREATE_SOURCE_QPN)) {
+ 			struct mlx5_ib_raw_packet_qp *raw_packet_qp =
+ 							 &qp->raw_packet_qp;
+@@ -578,10 +577,9 @@ static bool devx_is_valid_obj_id(struct uverbs_attr_bundle *attrs,
+ 					       sq->tisn) == obj_id);
+ 		}
+ 
+-		if (qp_type == MLX5_IB_QPT_DCT)
++		if (qp->type == MLX5_IB_QPT_DCT)
+ 			return get_enc_obj_id(MLX5_CMD_OP_CREATE_DCT,
+ 					      qp->dct.mdct.mqp.qpn) == obj_id;
+-
+ 		return get_enc_obj_id(MLX5_CMD_OP_CREATE_QP,
+ 				      qp->ibqp.qp_num) == obj_id;
+ 	}
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index beec0d7c0d6e8..b19506707e45c 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -4762,6 +4762,7 @@ static void *mlx5_ib_add_slave_port(struct mlx5_core_dev *mdev)
+ 
+ 		if (bound) {
+ 			rdma_roce_rescan_device(&dev->ib_dev);
++			mpi->ibdev->ib_active = true;
+ 			break;
+ 		}
+ 	}
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 656a5b4be847e..1e716fe7014cc 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -231,6 +231,7 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 	if (err) {
+ 		vfree(qp->sq.queue->buf);
+ 		kfree(qp->sq.queue);
++		qp->sq.queue = NULL;
+ 		return err;
+ 	}
+ 
+@@ -284,6 +285,7 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 		if (err) {
+ 			vfree(qp->rq.queue->buf);
+ 			kfree(qp->rq.queue);
++			qp->rq.queue = NULL;
+ 			return err;
+ 		}
+ 	}
+@@ -344,6 +346,11 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
+ err2:
+ 	rxe_queue_cleanup(qp->sq.queue);
+ err1:
++	qp->pd = NULL;
++	qp->rcq = NULL;
++	qp->scq = NULL;
++	qp->srq = NULL;
++
+ 	if (srq)
+ 		rxe_drop_ref(srq);
+ 	rxe_drop_ref(scq);
+diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
+index fb25e8011f5a4..34e847a91eb80 100644
+--- a/drivers/infiniband/sw/siw/siw_verbs.c
++++ b/drivers/infiniband/sw/siw/siw_verbs.c
+@@ -300,7 +300,6 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd,
+ 	struct siw_ucontext *uctx =
+ 		rdma_udata_to_drv_context(udata, struct siw_ucontext,
+ 					  base_ucontext);
+-	struct siw_cq *scq = NULL, *rcq = NULL;
+ 	unsigned long flags;
+ 	int num_sqe, num_rqe, rv = 0;
+ 	size_t length;
+@@ -340,10 +339,8 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd,
+ 		rv = -EINVAL;
+ 		goto err_out;
+ 	}
+-	scq = to_siw_cq(attrs->send_cq);
+-	rcq = to_siw_cq(attrs->recv_cq);
+ 
+-	if (!scq || (!rcq && !attrs->srq)) {
++	if (!attrs->send_cq || (!attrs->recv_cq && !attrs->srq)) {
+ 		siw_dbg(base_dev, "send CQ or receive CQ invalid\n");
+ 		rv = -EINVAL;
+ 		goto err_out;
+@@ -375,7 +372,7 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd,
+ 	else {
+ 		/* Zero sized SQ is not supported */
+ 		rv = -EINVAL;
+-		goto err_out;
++		goto err_out_xa;
+ 	}
+ 	if (num_rqe)
+ 		num_rqe = roundup_pow_of_two(num_rqe);
+@@ -398,8 +395,8 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd,
+ 		}
+ 	}
+ 	qp->pd = pd;
+-	qp->scq = scq;
+-	qp->rcq = rcq;
++	qp->scq = to_siw_cq(attrs->send_cq);
++	qp->rcq = to_siw_cq(attrs->recv_cq);
+ 
+ 	if (attrs->srq) {
+ 		/*
+diff --git a/drivers/leds/leds-lp5523.c b/drivers/leds/leds-lp5523.c
+index fc433e63b1dc0..b1590cb4a1887 100644
+--- a/drivers/leds/leds-lp5523.c
++++ b/drivers/leds/leds-lp5523.c
+@@ -307,7 +307,7 @@ static int lp5523_init_program_engine(struct lp55xx_chip *chip)
+ 	usleep_range(3000, 6000);
+ 	ret = lp55xx_read(chip, LP5523_REG_STATUS, &status);
+ 	if (ret)
+-		return ret;
++		goto out;
+ 	status &= LP5523_ENG_STATUS_MASK;
+ 
+ 	if (status != LP5523_ENG_STATUS_MASK) {
+diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
+index 11890db71f3fe..962f7df0691ef 100644
+--- a/drivers/md/dm-snap.c
++++ b/drivers/md/dm-snap.c
+@@ -1408,6 +1408,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 
+ 	if (!s->store->chunk_size) {
+ 		ti->error = "Chunk size not set";
++		r = -EINVAL;
+ 		goto bad_read_metadata;
+ 	}
+ 
+diff --git a/drivers/media/platform/rcar_drif.c b/drivers/media/platform/rcar_drif.c
+index f318cd4b8086f..083dba95beaa0 100644
+--- a/drivers/media/platform/rcar_drif.c
++++ b/drivers/media/platform/rcar_drif.c
+@@ -915,7 +915,6 @@ static int rcar_drif_g_fmt_sdr_cap(struct file *file, void *priv,
+ {
+ 	struct rcar_drif_sdr *sdr = video_drvdata(file);
+ 
+-	memset(f->fmt.sdr.reserved, 0, sizeof(f->fmt.sdr.reserved));
+ 	f->fmt.sdr.pixelformat = sdr->fmt->pixelformat;
+ 	f->fmt.sdr.buffersize = sdr->fmt->buffersize;
+ 
+diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
+index 926408b41270c..7a6f01ace78ac 100644
+--- a/drivers/misc/eeprom/at24.c
++++ b/drivers/misc/eeprom/at24.c
+@@ -763,7 +763,8 @@ static int at24_probe(struct i2c_client *client)
+ 	at24->nvmem = devm_nvmem_register(dev, &nvmem_config);
+ 	if (IS_ERR(at24->nvmem)) {
+ 		pm_runtime_disable(dev);
+-		regulator_disable(at24->vcc_reg);
++		if (!pm_runtime_status_suspended(dev))
++			regulator_disable(at24->vcc_reg);
+ 		return PTR_ERR(at24->nvmem);
+ 	}
+ 
+@@ -774,7 +775,8 @@ static int at24_probe(struct i2c_client *client)
+ 	err = at24_read(at24, 0, &test_byte, 1);
+ 	if (err) {
+ 		pm_runtime_disable(dev);
+-		regulator_disable(at24->vcc_reg);
++		if (!pm_runtime_status_suspended(dev))
++			regulator_disable(at24->vcc_reg);
+ 		return -ENODEV;
+ 	}
+ 
+diff --git a/drivers/misc/ics932s401.c b/drivers/misc/ics932s401.c
+index 2bdf560ee681b..0f9ea75b0b189 100644
+--- a/drivers/misc/ics932s401.c
++++ b/drivers/misc/ics932s401.c
+@@ -134,7 +134,7 @@ static struct ics932s401_data *ics932s401_update_device(struct device *dev)
+ 	for (i = 0; i < NUM_MIRRORED_REGS; i++) {
+ 		temp = i2c_smbus_read_word_data(client, regs_to_copy[i]);
+ 		if (temp < 0)
+-			data->regs[regs_to_copy[i]] = 0;
++			temp = 0;
+ 		data->regs[regs_to_copy[i]] = temp >> 8;
+ 	}
+ 
+diff --git a/drivers/mmc/host/sdhci-pci-gli.c b/drivers/mmc/host/sdhci-pci-gli.c
+index 9887485a41348..23b89b4cad088 100644
+--- a/drivers/mmc/host/sdhci-pci-gli.c
++++ b/drivers/mmc/host/sdhci-pci-gli.c
+@@ -555,8 +555,13 @@ static void sdhci_gli_voltage_switch(struct sdhci_host *host)
+ 	 *
+ 	 * Wait 5ms after set 1.8V signal enable in Host Control 2 register
+ 	 * to ensure 1.8V signal enable bit is set by GL9750/GL9755.
++	 *
++	 * ...however, the controller in the NUC10i3FNK4 (a 9755) requires
++	 * slightly longer than 5ms before the control register reports that
++	 * 1.8V is ready, and far longer still before the card will actually
++	 * work reliably.
+ 	 */
+-	usleep_range(5000, 5500);
++	usleep_range(100000, 110000);
+ }
+ 
+ static void sdhci_gl9750_reset(struct sdhci_host *host, u8 mask)
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
+index d8a3ecaed3fc6..d8f0863b39342 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
+@@ -1048,7 +1048,7 @@ int qlcnic_do_lb_test(struct qlcnic_adapter *adapter, u8 mode)
+ 	for (i = 0; i < QLCNIC_NUM_ILB_PKT; i++) {
+ 		skb = netdev_alloc_skb(adapter->netdev, QLCNIC_ILB_PKT_SIZE);
+ 		if (!skb)
+-			break;
++			goto error;
+ 		qlcnic_create_loopback_buff(skb->data, adapter->mac_addr);
+ 		skb_put(skb, QLCNIC_ILB_PKT_SIZE);
+ 		adapter->ahw->diag_cnt = 0;
+@@ -1072,6 +1072,7 @@ int qlcnic_do_lb_test(struct qlcnic_adapter *adapter, u8 mode)
+ 			cnt++;
+ 	}
+ 	if (cnt != i) {
++error:
+ 		dev_err(&adapter->pdev->dev,
+ 			"LB Test: failed, TX[%d], RX[%d]\n", i, cnt);
+ 		if (mode != QLCNIC_ILB_MODE)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
+index 0e1ca2cba3c7c..e18dee7fe6876 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
+@@ -30,7 +30,7 @@ struct sunxi_priv_data {
+ static int sun7i_gmac_init(struct platform_device *pdev, void *priv)
+ {
+ 	struct sunxi_priv_data *gmac = priv;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (gmac->regulator) {
+ 		ret = regulator_enable(gmac->regulator);
+@@ -51,11 +51,11 @@ static int sun7i_gmac_init(struct platform_device *pdev, void *priv)
+ 	} else {
+ 		clk_set_rate(gmac->tx_clk, SUN7I_GMAC_MII_RATE);
+ 		ret = clk_prepare(gmac->tx_clk);
+-		if (ret)
+-			return ret;
++		if (ret && gmac->regulator)
++			regulator_disable(gmac->regulator);
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static void sun7i_gmac_exit(struct platform_device *pdev, void *priv)
+diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
+index 707ccdd03b19e..74e748662ec01 100644
+--- a/drivers/net/ethernet/sun/niu.c
++++ b/drivers/net/ethernet/sun/niu.c
+@@ -8144,10 +8144,10 @@ static int niu_pci_vpd_scan_props(struct niu *np, u32 start, u32 end)
+ 				     "VPD_SCAN: Reading in property [%s] len[%d]\n",
+ 				     namebuf, prop_len);
+ 			for (i = 0; i < prop_len; i++) {
+-				err = niu_pci_eeprom_read(np, off + i);
+-				if (err >= 0)
+-					*prop_buf = err;
+-				++prop_buf;
++				err =  niu_pci_eeprom_read(np, off + i);
++				if (err < 0)
++					return err;
++				*prop_buf++ = err;
+ 			}
+ 		}
+ 
+@@ -8158,14 +8158,14 @@ static int niu_pci_vpd_scan_props(struct niu *np, u32 start, u32 end)
+ }
+ 
+ /* ESPC_PIO_EN_ENABLE must be set */
+-static void niu_pci_vpd_fetch(struct niu *np, u32 start)
++static int niu_pci_vpd_fetch(struct niu *np, u32 start)
+ {
+ 	u32 offset;
+ 	int err;
+ 
+ 	err = niu_pci_eeprom_read16_swp(np, start + 1);
+ 	if (err < 0)
+-		return;
++		return err;
+ 
+ 	offset = err + 3;
+ 
+@@ -8174,12 +8174,14 @@ static void niu_pci_vpd_fetch(struct niu *np, u32 start)
+ 		u32 end;
+ 
+ 		err = niu_pci_eeprom_read(np, here);
++		if (err < 0)
++			return err;
+ 		if (err != 0x90)
+-			return;
++			return -EINVAL;
+ 
+ 		err = niu_pci_eeprom_read16_swp(np, here + 1);
+ 		if (err < 0)
+-			return;
++			return err;
+ 
+ 		here = start + offset + 3;
+ 		end = start + offset + err;
+@@ -8187,9 +8189,12 @@ static void niu_pci_vpd_fetch(struct niu *np, u32 start)
+ 		offset += err;
+ 
+ 		err = niu_pci_vpd_scan_props(np, here, end);
+-		if (err < 0 || err == 1)
+-			return;
++		if (err < 0)
++			return err;
++		if (err == 1)
++			return -EINVAL;
+ 	}
++	return 0;
+ }
+ 
+ /* ESPC_PIO_EN_ENABLE must be set */
+@@ -9280,8 +9285,11 @@ static int niu_get_invariants(struct niu *np)
+ 		offset = niu_pci_vpd_offset(np);
+ 		netif_printk(np, probe, KERN_DEBUG, np->dev,
+ 			     "%s() VPD offset [%08x]\n", __func__, offset);
+-		if (offset)
+-			niu_pci_vpd_fetch(np, offset);
++		if (offset) {
++			err = niu_pci_vpd_fetch(np, offset);
++			if (err < 0)
++				return err;
++		}
+ 		nw64(ESPC_PIO_EN, 0);
+ 
+ 		if (np->flags & NIU_FLAGS_VPD_VALID) {
+diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wireless/realtek/rtlwifi/base.c
+index 6e8bd99e8911d..1866f6c2acab1 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/base.c
++++ b/drivers/net/wireless/realtek/rtlwifi/base.c
+@@ -440,9 +440,14 @@ static void rtl_watchdog_wq_callback(struct work_struct *work);
+ static void rtl_fwevt_wq_callback(struct work_struct *work);
+ static void rtl_c2hcmd_wq_callback(struct work_struct *work);
+ 
+-static void _rtl_init_deferred_work(struct ieee80211_hw *hw)
++static int _rtl_init_deferred_work(struct ieee80211_hw *hw)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
++	struct workqueue_struct *wq;
++
++	wq = alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name);
++	if (!wq)
++		return -ENOMEM;
+ 
+ 	/* <1> timer */
+ 	timer_setup(&rtlpriv->works.watchdog_timer,
+@@ -451,11 +456,7 @@ static void _rtl_init_deferred_work(struct ieee80211_hw *hw)
+ 		    rtl_easy_concurrent_retrytimer_callback, 0);
+ 	/* <2> work queue */
+ 	rtlpriv->works.hw = hw;
+-	rtlpriv->works.rtl_wq = alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name);
+-	if (unlikely(!rtlpriv->works.rtl_wq)) {
+-		pr_err("Failed to allocate work queue\n");
+-		return;
+-	}
++	rtlpriv->works.rtl_wq = wq;
+ 
+ 	INIT_DELAYED_WORK(&rtlpriv->works.watchdog_wq,
+ 			  rtl_watchdog_wq_callback);
+@@ -466,6 +467,7 @@ static void _rtl_init_deferred_work(struct ieee80211_hw *hw)
+ 			  rtl_swlps_rfon_wq_callback);
+ 	INIT_DELAYED_WORK(&rtlpriv->works.fwevt_wq, rtl_fwevt_wq_callback);
+ 	INIT_DELAYED_WORK(&rtlpriv->works.c2hcmd_wq, rtl_c2hcmd_wq_callback);
++	return 0;
+ }
+ 
+ void rtl_deinit_deferred_work(struct ieee80211_hw *hw, bool ips_wq)
+@@ -565,9 +567,7 @@ int rtl_init_core(struct ieee80211_hw *hw)
+ 	rtlmac->link_state = MAC80211_NOLINK;
+ 
+ 	/* <6> init deferred work */
+-	_rtl_init_deferred_work(hw);
+-
+-	return 0;
++	return _rtl_init_deferred_work(hw);
+ }
+ EXPORT_SYMBOL_GPL(rtl_init_core);
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 740de61d12a0e..f520a71a361fc 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3131,7 +3131,7 @@ int nvme_init_identify(struct nvme_ctrl *ctrl)
+ 		ctrl->hmmaxd = le16_to_cpu(id->hmmaxd);
+ 	}
+ 
+-	ret = nvme_mpath_init(ctrl, id);
++	ret = nvme_mpath_init_identify(ctrl, id);
+ 	kfree(id);
+ 
+ 	if (ret < 0)
+@@ -4517,6 +4517,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
+ 		min(default_ps_max_latency_us, (unsigned long)S32_MAX));
+ 
+ 	nvme_fault_inject_init(&ctrl->fault_inject, dev_name(ctrl->device));
++	nvme_mpath_init_ctrl(ctrl);
+ 
+ 	return 0;
+ out_free_name:
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 41257daf7464d..a0bcec33b0208 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -2460,6 +2460,18 @@ nvme_fc_terminate_exchange(struct request *req, void *data, bool reserved)
+ static void
+ __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
+ {
++	int q;
++
++	/*
++	 * if aborting io, the queues are no longer good, mark them
++	 * all as not live.
++	 */
++	if (ctrl->ctrl.queue_count > 1) {
++		for (q = 1; q < ctrl->ctrl.queue_count; q++)
++			clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[q].flags);
++	}
++	clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[0].flags);
++
+ 	/*
+ 	 * If io queues are present, stop them and terminate all outstanding
+ 	 * ios on them. As FC allocates FC exchange for each io, the
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index f750cf98ae264..2747efc03825c 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -708,9 +708,18 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
+ 	put_disk(head->disk);
+ }
+ 
+-int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
++void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl)
+ {
+-	int error;
++	mutex_init(&ctrl->ana_lock);
++	timer_setup(&ctrl->anatt_timer, nvme_anatt_timeout, 0);
++	INIT_WORK(&ctrl->ana_work, nvme_ana_work);
++}
++
++int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
++{
++	size_t max_transfer_size = ctrl->max_hw_sectors << SECTOR_SHIFT;
++	size_t ana_log_size;
++	int error = 0;
+ 
+ 	/* check if multipath is enabled and we have the capability */
+ 	if (!multipath || !ctrl->subsys ||
+@@ -722,37 +731,31 @@ int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+ 	ctrl->nanagrpid = le32_to_cpu(id->nanagrpid);
+ 	ctrl->anagrpmax = le32_to_cpu(id->anagrpmax);
+ 
+-	mutex_init(&ctrl->ana_lock);
+-	timer_setup(&ctrl->anatt_timer, nvme_anatt_timeout, 0);
+-	ctrl->ana_log_size = sizeof(struct nvme_ana_rsp_hdr) +
+-		ctrl->nanagrpid * sizeof(struct nvme_ana_group_desc);
+-	ctrl->ana_log_size += ctrl->max_namespaces * sizeof(__le32);
+-
+-	if (ctrl->ana_log_size > ctrl->max_hw_sectors << SECTOR_SHIFT) {
++	ana_log_size = sizeof(struct nvme_ana_rsp_hdr) +
++		ctrl->nanagrpid * sizeof(struct nvme_ana_group_desc) +
++		ctrl->max_namespaces * sizeof(__le32);
++	if (ana_log_size > max_transfer_size) {
+ 		dev_err(ctrl->device,
+-			"ANA log page size (%zd) larger than MDTS (%d).\n",
+-			ctrl->ana_log_size,
+-			ctrl->max_hw_sectors << SECTOR_SHIFT);
++			"ANA log page size (%zd) larger than MDTS (%zd).\n",
++			ana_log_size, max_transfer_size);
+ 		dev_err(ctrl->device, "disabling ANA support.\n");
+-		return 0;
++		goto out_uninit;
+ 	}
+-
+-	INIT_WORK(&ctrl->ana_work, nvme_ana_work);
+-	kfree(ctrl->ana_log_buf);
+-	ctrl->ana_log_buf = kmalloc(ctrl->ana_log_size, GFP_KERNEL);
+-	if (!ctrl->ana_log_buf) {
+-		error = -ENOMEM;
+-		goto out;
++	if (ana_log_size > ctrl->ana_log_size) {
++		nvme_mpath_stop(ctrl);
++		kfree(ctrl->ana_log_buf);
++		ctrl->ana_log_buf = kmalloc(ana_log_size, GFP_KERNEL);
++		if (!ctrl->ana_log_buf)
++			return -ENOMEM;
+ 	}
+-
++	ctrl->ana_log_size = ana_log_size;
+ 	error = nvme_read_ana_log(ctrl);
+ 	if (error)
+-		goto out_free_ana_log_buf;
++		goto out_uninit;
+ 	return 0;
+-out_free_ana_log_buf:
+-	kfree(ctrl->ana_log_buf);
+-	ctrl->ana_log_buf = NULL;
+-out:
++
++out_uninit:
++	nvme_mpath_uninit(ctrl);
+ 	return error;
+ }
+ 
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index f843540cc238e..3cb3c82061d7e 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -654,7 +654,8 @@ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl);
+ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl,struct nvme_ns_head *head);
+ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id);
+ void nvme_mpath_remove_disk(struct nvme_ns_head *head);
+-int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id);
++int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id);
++void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl);
+ void nvme_mpath_uninit(struct nvme_ctrl *ctrl);
+ void nvme_mpath_stop(struct nvme_ctrl *ctrl);
+ bool nvme_mpath_clear_current_path(struct nvme_ns *ns);
+@@ -730,7 +731,10 @@ static inline void nvme_trace_bio_complete(struct request *req,
+         blk_status_t status)
+ {
+ }
+-static inline int nvme_mpath_init(struct nvme_ctrl *ctrl,
++static inline void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl)
++{
++}
++static inline int nvme_mpath_init_identify(struct nvme_ctrl *ctrl,
+ 		struct nvme_id_ctrl *id)
+ {
+ 	if (ctrl->subsys->cmic & (1 << 3))
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 4cf81f3841aee..82b2611d39a2f 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -940,7 +940,6 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ 		if (ret <= 0)
+ 			return ret;
+ 
+-		nvme_tcp_advance_req(req, ret);
+ 		if (queue->data_digest)
+ 			nvme_tcp_ddgst_update(queue->snd_hash, page,
+ 					offset, ret);
+@@ -957,6 +956,7 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ 			}
+ 			return 1;
+ 		}
++		nvme_tcp_advance_req(req, ret);
+ 	}
+ 	return -EAGAIN;
+ }
+@@ -1140,7 +1140,8 @@ static void nvme_tcp_io_work(struct work_struct *w)
+ 				pending = true;
+ 			else if (unlikely(result < 0))
+ 				break;
+-		}
++		} else
++			pending = !llist_empty(&queue->req_list);
+ 
+ 		result = nvme_tcp_try_recv(queue);
+ 		if (result > 0)
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 1e79d33c1df7e..46e4f7ea34c8b 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -757,8 +757,6 @@ void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq,
+ {
+ 	cq->qid = qid;
+ 	cq->size = size;
+-
+-	ctrl->cqs[qid] = cq;
+ }
+ 
+ void nvmet_sq_setup(struct nvmet_ctrl *ctrl, struct nvmet_sq *sq,
+@@ -1355,20 +1353,14 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
+ 	if (!ctrl->changed_ns_list)
+ 		goto out_free_ctrl;
+ 
+-	ctrl->cqs = kcalloc(subsys->max_qid + 1,
+-			sizeof(struct nvmet_cq *),
+-			GFP_KERNEL);
+-	if (!ctrl->cqs)
+-		goto out_free_changed_ns_list;
+-
+ 	ctrl->sqs = kcalloc(subsys->max_qid + 1,
+ 			sizeof(struct nvmet_sq *),
+ 			GFP_KERNEL);
+ 	if (!ctrl->sqs)
+-		goto out_free_cqs;
++		goto out_free_changed_ns_list;
+ 
+ 	if (subsys->cntlid_min > subsys->cntlid_max)
+-		goto out_free_cqs;
++		goto out_free_sqs;
+ 
+ 	ret = ida_simple_get(&cntlid_ida,
+ 			     subsys->cntlid_min, subsys->cntlid_max,
+@@ -1406,8 +1398,6 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
+ 
+ out_free_sqs:
+ 	kfree(ctrl->sqs);
+-out_free_cqs:
+-	kfree(ctrl->cqs);
+ out_free_changed_ns_list:
+ 	kfree(ctrl->changed_ns_list);
+ out_free_ctrl:
+@@ -1437,7 +1427,6 @@ static void nvmet_ctrl_free(struct kref *ref)
+ 
+ 	nvmet_async_events_free(ctrl);
+ 	kfree(ctrl->sqs);
+-	kfree(ctrl->cqs);
+ 	kfree(ctrl->changed_ns_list);
+ 	kfree(ctrl);
+ 
+diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
+index 0abbefd9925e3..b575997244482 100644
+--- a/drivers/nvme/target/io-cmd-file.c
++++ b/drivers/nvme/target/io-cmd-file.c
+@@ -49,9 +49,11 @@ int nvmet_file_ns_enable(struct nvmet_ns *ns)
+ 
+ 	ns->file = filp_open(ns->device_path, flags, 0);
+ 	if (IS_ERR(ns->file)) {
+-		pr_err("failed to open file %s: (%ld)\n",
+-				ns->device_path, PTR_ERR(ns->file));
+-		return PTR_ERR(ns->file);
++		ret = PTR_ERR(ns->file);
++		pr_err("failed to open file %s: (%d)\n",
++			ns->device_path, ret);
++		ns->file = NULL;
++		return ret;
+ 	}
+ 
+ 	ret = nvmet_file_ns_revalidate(ns);
+diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
+index f6d81239be215..b869b686e9623 100644
+--- a/drivers/nvme/target/loop.c
++++ b/drivers/nvme/target/loop.c
+@@ -578,8 +578,10 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
+ 
+ 	ret = nvme_init_ctrl(&ctrl->ctrl, dev, &nvme_loop_ctrl_ops,
+ 				0 /* no quirks, we're perfect! */);
+-	if (ret)
++	if (ret) {
++		kfree(ctrl);
+ 		goto out;
++	}
+ 
+ 	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING))
+ 		WARN_ON_ONCE(1);
+diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
+index bc91336080e01..ea96487b5424e 100644
+--- a/drivers/nvme/target/nvmet.h
++++ b/drivers/nvme/target/nvmet.h
+@@ -164,7 +164,6 @@ static inline struct nvmet_port *ana_groups_to_port(
+ 
+ struct nvmet_ctrl {
+ 	struct nvmet_subsys	*subsys;
+-	struct nvmet_cq		**cqs;
+ 	struct nvmet_sq		**sqs;
+ 
+ 	bool			cmd_seen;
+diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
+index bbc4e71a16ff8..38800e86ed8ad 100644
+--- a/drivers/platform/mellanox/mlxbf-tmfifo.c
++++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
+@@ -294,6 +294,9 @@ mlxbf_tmfifo_get_next_desc(struct mlxbf_tmfifo_vring *vring)
+ 	if (vring->next_avail == virtio16_to_cpu(vdev, vr->avail->idx))
+ 		return NULL;
+ 
++	/* Make sure 'avail->idx' is visible already. */
++	virtio_rmb(false);
++
+ 	idx = vring->next_avail % vr->num;
+ 	head = virtio16_to_cpu(vdev, vr->avail->ring[idx]);
+ 	if (WARN_ON(head >= vr->num))
+@@ -322,7 +325,7 @@ static void mlxbf_tmfifo_release_desc(struct mlxbf_tmfifo_vring *vring,
+ 	 * done or not. Add a memory barrier here to make sure the update above
+ 	 * completes before updating the idx.
+ 	 */
+-	mb();
++	virtio_mb(false);
+ 	vr->used->idx = cpu_to_virtio16(vdev, vr_idx + 1);
+ }
+ 
+@@ -733,6 +736,12 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
+ 		desc = NULL;
+ 		fifo->vring[is_rx] = NULL;
+ 
++		/*
++		 * Make sure the load/store are in order before
++		 * returning back to virtio.
++		 */
++		virtio_mb(false);
++
+ 		/* Notify upper layer that packet is done. */
+ 		spin_lock_irqsave(&fifo->spin_lock[is_rx], flags);
+ 		vring_interrupt(0, vring->vq);
+diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig
+index 0d91d136bc3b7..a1858689d6e10 100644
+--- a/drivers/platform/x86/Kconfig
++++ b/drivers/platform/x86/Kconfig
+@@ -821,7 +821,7 @@ config INTEL_HID_EVENT
+ 
+ config INTEL_INT0002_VGPIO
+ 	tristate "Intel ACPI INT0002 Virtual GPIO driver"
+-	depends on GPIOLIB && ACPI
++	depends on GPIOLIB && ACPI && PM_SLEEP
+ 	select GPIOLIB_IRQCHIP
+ 	help
+ 	  Some peripherals on Bay Trail and Cherry Trail platforms signal a
+diff --git a/drivers/platform/x86/dell-smbios-wmi.c b/drivers/platform/x86/dell-smbios-wmi.c
+index 27a298b7c541b..c97bd4a452422 100644
+--- a/drivers/platform/x86/dell-smbios-wmi.c
++++ b/drivers/platform/x86/dell-smbios-wmi.c
+@@ -271,7 +271,8 @@ int init_dell_smbios_wmi(void)
+ 
+ void exit_dell_smbios_wmi(void)
+ {
+-	wmi_driver_unregister(&dell_smbios_wmi_driver);
++	if (wmi_supported)
++		wmi_driver_unregister(&dell_smbios_wmi_driver);
+ }
+ 
+ MODULE_DEVICE_TABLE(wmi, dell_smbios_wmi_id_table);
+diff --git a/drivers/platform/x86/intel_int0002_vgpio.c b/drivers/platform/x86/intel_int0002_vgpio.c
+index 289c6655d425d..569342aa8926e 100644
+--- a/drivers/platform/x86/intel_int0002_vgpio.c
++++ b/drivers/platform/x86/intel_int0002_vgpio.c
+@@ -51,6 +51,12 @@
+ #define GPE0A_STS_PORT			0x420
+ #define GPE0A_EN_PORT			0x428
+ 
++struct int0002_data {
++	struct gpio_chip chip;
++	int parent_irq;
++	int wake_enable_count;
++};
++
+ /*
+  * As this is not a real GPIO at all, but just a hack to model an event in
+  * ACPI the get / set functions are dummy functions.
+@@ -98,14 +104,16 @@ static void int0002_irq_mask(struct irq_data *data)
+ static int int0002_irq_set_wake(struct irq_data *data, unsigned int on)
+ {
+ 	struct gpio_chip *chip = irq_data_get_irq_chip_data(data);
+-	struct platform_device *pdev = to_platform_device(chip->parent);
+-	int irq = platform_get_irq(pdev, 0);
++	struct int0002_data *int0002 = container_of(chip, struct int0002_data, chip);
+ 
+-	/* Propagate to parent irq */
++	/*
++	 * Applying of the wakeup flag to our parent IRQ is delayed till system
++	 * suspend, because we only want to do this when using s2idle.
++	 */
+ 	if (on)
+-		enable_irq_wake(irq);
++		int0002->wake_enable_count++;
+ 	else
+-		disable_irq_wake(irq);
++		int0002->wake_enable_count--;
+ 
+ 	return 0;
+ }
+@@ -135,7 +143,7 @@ static bool int0002_check_wake(void *data)
+ 	return (gpe_sts_reg & GPE0A_PME_B0_STS_BIT);
+ }
+ 
+-static struct irq_chip int0002_byt_irqchip = {
++static struct irq_chip int0002_irqchip = {
+ 	.name			= DRV_NAME,
+ 	.irq_ack		= int0002_irq_ack,
+ 	.irq_mask		= int0002_irq_mask,
+@@ -143,21 +151,9 @@ static struct irq_chip int0002_byt_irqchip = {
+ 	.irq_set_wake		= int0002_irq_set_wake,
+ };
+ 
+-static struct irq_chip int0002_cht_irqchip = {
+-	.name			= DRV_NAME,
+-	.irq_ack		= int0002_irq_ack,
+-	.irq_mask		= int0002_irq_mask,
+-	.irq_unmask		= int0002_irq_unmask,
+-	/*
+-	 * No set_wake, on CHT the IRQ is typically shared with the ACPI SCI
+-	 * and we don't want to mess with the ACPI SCI irq settings.
+-	 */
+-	.flags			= IRQCHIP_SKIP_SET_WAKE,
+-};
+-
+ static const struct x86_cpu_id int0002_cpu_ids[] = {
+-	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT,	&int0002_byt_irqchip),
+-	X86_MATCH_INTEL_FAM6_MODEL(ATOM_AIRMONT,	&int0002_cht_irqchip),
++	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(ATOM_AIRMONT, NULL),
+ 	{}
+ };
+ 
+@@ -172,8 +168,9 @@ static int int0002_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	const struct x86_cpu_id *cpu_id;
+-	struct gpio_chip *chip;
++	struct int0002_data *int0002;
+ 	struct gpio_irq_chip *girq;
++	struct gpio_chip *chip;
+ 	int irq, ret;
+ 
+ 	/* Menlow has a different INT0002 device? <sigh> */
+@@ -185,10 +182,13 @@ static int int0002_probe(struct platform_device *pdev)
+ 	if (irq < 0)
+ 		return irq;
+ 
+-	chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL);
+-	if (!chip)
++	int0002 = devm_kzalloc(dev, sizeof(*int0002), GFP_KERNEL);
++	if (!int0002)
+ 		return -ENOMEM;
+ 
++	int0002->parent_irq = irq;
++
++	chip = &int0002->chip;
+ 	chip->label = DRV_NAME;
+ 	chip->parent = dev;
+ 	chip->owner = THIS_MODULE;
+@@ -214,7 +214,7 @@ static int int0002_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	girq = &chip->irq;
+-	girq->chip = (struct irq_chip *)cpu_id->driver_data;
++	girq->chip = &int0002_irqchip;
+ 	/* This let us handle the parent IRQ in the driver */
+ 	girq->parent_handler = NULL;
+ 	girq->num_parents = 0;
+@@ -230,6 +230,7 @@ static int int0002_probe(struct platform_device *pdev)
+ 
+ 	acpi_register_wakeup_handler(irq, int0002_check_wake, NULL);
+ 	device_init_wakeup(dev, true);
++	dev_set_drvdata(dev, int0002);
+ 	return 0;
+ }
+ 
+@@ -240,6 +241,36 @@ static int int0002_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static int int0002_suspend(struct device *dev)
++{
++	struct int0002_data *int0002 = dev_get_drvdata(dev);
++
++	/*
++	 * The INT0002 parent IRQ is often shared with the ACPI GPE IRQ, don't
++	 * muck with it when firmware based suspend is used, otherwise we may
++	 * cause spurious wakeups from firmware managed suspend.
++	 */
++	if (!pm_suspend_via_firmware() && int0002->wake_enable_count)
++		enable_irq_wake(int0002->parent_irq);
++
++	return 0;
++}
++
++static int int0002_resume(struct device *dev)
++{
++	struct int0002_data *int0002 = dev_get_drvdata(dev);
++
++	if (!pm_suspend_via_firmware() && int0002->wake_enable_count)
++		disable_irq_wake(int0002->parent_irq);
++
++	return 0;
++}
++
++static const struct dev_pm_ops int0002_pm_ops = {
++	.suspend = int0002_suspend,
++	.resume = int0002_resume,
++};
++
+ static const struct acpi_device_id int0002_acpi_ids[] = {
+ 	{ "INT0002", 0 },
+ 	{ },
+@@ -250,6 +281,7 @@ static struct platform_driver int0002_driver = {
+ 	.driver = {
+ 		.name			= DRV_NAME,
+ 		.acpi_match_table	= int0002_acpi_ids,
++		.pm			= &int0002_pm_ops,
+ 	},
+ 	.probe	= int0002_probe,
+ 	.remove	= int0002_remove,
+diff --git a/drivers/rapidio/rio_cm.c b/drivers/rapidio/rio_cm.c
+index 50ec53d67a4c0..db4c265287ae6 100644
+--- a/drivers/rapidio/rio_cm.c
++++ b/drivers/rapidio/rio_cm.c
+@@ -2127,6 +2127,14 @@ static int riocm_add_mport(struct device *dev,
+ 		return -ENODEV;
+ 	}
+ 
++	cm->rx_wq = create_workqueue(DRV_NAME "/rxq");
++	if (!cm->rx_wq) {
++		rio_release_inb_mbox(mport, cmbox);
++		rio_release_outb_mbox(mport, cmbox);
++		kfree(cm);
++		return -ENOMEM;
++	}
++
+ 	/*
+ 	 * Allocate and register inbound messaging buffers to be ready
+ 	 * to receive channel and system management requests
+@@ -2137,15 +2145,6 @@ static int riocm_add_mport(struct device *dev,
+ 	cm->rx_slots = RIOCM_RX_RING_SIZE;
+ 	mutex_init(&cm->rx_lock);
+ 	riocm_rx_fill(cm, RIOCM_RX_RING_SIZE);
+-	cm->rx_wq = create_workqueue(DRV_NAME "/rxq");
+-	if (!cm->rx_wq) {
+-		riocm_error("failed to allocate IBMBOX_%d on %s",
+-			    cmbox, mport->name);
+-		rio_release_outb_mbox(mport, cmbox);
+-		kfree(cm);
+-		return -ENOMEM;
+-	}
+-
+ 	INIT_WORK(&cm->rx_work, rio_ibmsg_handler);
+ 
+ 	cm->tx_slot = 0;
+diff --git a/drivers/rtc/rtc-pcf85063.c b/drivers/rtc/rtc-pcf85063.c
+index f8b99cb729590..62684ca3a665e 100644
+--- a/drivers/rtc/rtc-pcf85063.c
++++ b/drivers/rtc/rtc-pcf85063.c
+@@ -486,6 +486,7 @@ static struct clk *pcf85063_clkout_register_clk(struct pcf85063 *pcf85063)
+ {
+ 	struct clk *clk;
+ 	struct clk_init_data init;
++	struct device_node *node = pcf85063->rtc->dev.parent->of_node;
+ 
+ 	init.name = "pcf85063-clkout";
+ 	init.ops = &pcf85063_clkout_ops;
+@@ -495,15 +496,13 @@ static struct clk *pcf85063_clkout_register_clk(struct pcf85063 *pcf85063)
+ 	pcf85063->clkout_hw.init = &init;
+ 
+ 	/* optional override of the clockname */
+-	of_property_read_string(pcf85063->rtc->dev.of_node,
+-				"clock-output-names", &init.name);
++	of_property_read_string(node, "clock-output-names", &init.name);
+ 
+ 	/* register the clock */
+ 	clk = devm_clk_register(&pcf85063->rtc->dev, &pcf85063->clkout_hw);
+ 
+ 	if (!IS_ERR(clk))
+-		of_clk_add_provider(pcf85063->rtc->dev.of_node,
+-				    of_clk_src_simple_get, clk);
++		of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ 
+ 	return clk;
+ }
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 46d185cb9ea80..a464d0a4f4653 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -536,7 +536,9 @@ static void qedf_update_link_speed(struct qedf_ctx *qedf,
+ 	if (linkmode_intersects(link->supported_caps, sup_caps))
+ 		lport->link_supported_speeds |= FC_PORTSPEED_20GBIT;
+ 
+-	fc_host_supported_speeds(lport->host) = lport->link_supported_speeds;
++	if (lport->host && lport->host->shost_data)
++		fc_host_supported_speeds(lport->host) =
++			lport->link_supported_speeds;
+ }
+ 
+ static void qedf_bw_update(void *dev)
+diff --git a/drivers/scsi/qla2xxx/qla_nx.c b/drivers/scsi/qla2xxx/qla_nx.c
+index b3ba0de5d4fb8..0563c9530dcad 100644
+--- a/drivers/scsi/qla2xxx/qla_nx.c
++++ b/drivers/scsi/qla2xxx/qla_nx.c
+@@ -1066,7 +1066,8 @@ qla82xx_write_flash_dword(struct qla_hw_data *ha, uint32_t flashaddr,
+ 		return ret;
+ 	}
+ 
+-	if (qla82xx_flash_set_write_enable(ha))
++	ret = qla82xx_flash_set_write_enable(ha);
++	if (ret < 0)
+ 		goto done_write;
+ 
+ 	qla82xx_wr_32(ha, QLA82XX_ROMUSB_ROM_WDATA, data);
+diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c
+index 074a6a055a4c0..55b7161d76977 100644
+--- a/drivers/scsi/ufs/ufs-hisi.c
++++ b/drivers/scsi/ufs/ufs-hisi.c
+@@ -478,21 +478,24 @@ static int ufs_hisi_init_common(struct ufs_hba *hba)
+ 	host->hba = hba;
+ 	ufshcd_set_variant(hba, host);
+ 
+-	host->rst  = devm_reset_control_get(dev, "rst");
++	host->rst = devm_reset_control_get(dev, "rst");
+ 	if (IS_ERR(host->rst)) {
+ 		dev_err(dev, "%s: failed to get reset control\n", __func__);
+-		return PTR_ERR(host->rst);
++		err = PTR_ERR(host->rst);
++		goto error;
+ 	}
+ 
+ 	ufs_hisi_set_pm_lvl(hba);
+ 
+ 	err = ufs_hisi_get_resource(host);
+-	if (err) {
+-		ufshcd_set_variant(hba, NULL);
+-		return err;
+-	}
++	if (err)
++		goto error;
+ 
+ 	return 0;
++
++error:
++	ufshcd_set_variant(hba, NULL);
++	return err;
+ }
+ 
+ static int ufs_hi3660_init(struct ufs_hba *hba)
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 08d4d40c510ea..854c96e630077 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -2768,7 +2768,7 @@ static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba,
+  * ufshcd_exec_dev_cmd - API for sending device management requests
+  * @hba: UFS hba
+  * @cmd_type: specifies the type (NOP, Query...)
+- * @timeout: time in seconds
++ * @timeout: timeout in milliseconds
+  *
+  * NOTE: Since there is only one available tag for device management commands,
+  * it is expected you hold the hba->dev_cmd.lock mutex.
+@@ -2798,6 +2798,9 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
+ 	}
+ 	tag = req->tag;
+ 	WARN_ON_ONCE(!ufshcd_valid_tag(hba, tag));
++	/* Set the timeout such that the SCSI error handler is not activated. */
++	req->timeout = msecs_to_jiffies(2 * timeout);
++	blk_mq_start_request(req);
+ 
+ 	init_completion(&wait);
+ 	lrbp = &hba->lrb[tag];
+diff --git a/drivers/tee/amdtee/amdtee_private.h b/drivers/tee/amdtee/amdtee_private.h
+index 337c8d82f74eb..6d0f7062bb870 100644
+--- a/drivers/tee/amdtee/amdtee_private.h
++++ b/drivers/tee/amdtee/amdtee_private.h
+@@ -21,6 +21,7 @@
+ #define TEEC_SUCCESS			0x00000000
+ #define TEEC_ERROR_GENERIC		0xFFFF0000
+ #define TEEC_ERROR_BAD_PARAMETERS	0xFFFF0006
++#define TEEC_ERROR_OUT_OF_MEMORY	0xFFFF000C
+ #define TEEC_ERROR_COMMUNICATION	0xFFFF000E
+ 
+ #define TEEC_ORIGIN_COMMS		0x00000002
+@@ -93,6 +94,18 @@ struct amdtee_shm_data {
+ 	u32     buf_id;
+ };
+ 
++/**
++ * struct amdtee_ta_data - Keeps track of all TAs loaded in AMD Secure
++ *			   Processor
++ * @ta_handle:	Handle to TA loaded in TEE
++ * @refcount:	Reference count for the loaded TA
++ */
++struct amdtee_ta_data {
++	struct list_head list_node;
++	u32 ta_handle;
++	u32 refcount;
++};
++
+ #define LOWER_TWO_BYTE_MASK	0x0000FFFF
+ 
+ /**
+diff --git a/drivers/tee/amdtee/call.c b/drivers/tee/amdtee/call.c
+index 096dd4d92d39c..07f36ac834c88 100644
+--- a/drivers/tee/amdtee/call.c
++++ b/drivers/tee/amdtee/call.c
+@@ -121,15 +121,69 @@ static int amd_params_to_tee_params(struct tee_param *tee, u32 count,
+ 	return ret;
+ }
+ 
++static DEFINE_MUTEX(ta_refcount_mutex);
++static struct list_head ta_list = LIST_HEAD_INIT(ta_list);
++
++static u32 get_ta_refcount(u32 ta_handle)
++{
++	struct amdtee_ta_data *ta_data;
++	u32 count = 0;
++
++	/* Caller must hold a mutex */
++	list_for_each_entry(ta_data, &ta_list, list_node)
++		if (ta_data->ta_handle == ta_handle)
++			return ++ta_data->refcount;
++
++	ta_data = kzalloc(sizeof(*ta_data), GFP_KERNEL);
++	if (ta_data) {
++		ta_data->ta_handle = ta_handle;
++		ta_data->refcount = 1;
++		count = ta_data->refcount;
++		list_add(&ta_data->list_node, &ta_list);
++	}
++
++	return count;
++}
++
++static u32 put_ta_refcount(u32 ta_handle)
++{
++	struct amdtee_ta_data *ta_data;
++	u32 count = 0;
++
++	/* Caller must hold a mutex */
++	list_for_each_entry(ta_data, &ta_list, list_node)
++		if (ta_data->ta_handle == ta_handle) {
++			count = --ta_data->refcount;
++			if (count == 0) {
++				list_del(&ta_data->list_node);
++				kfree(ta_data);
++				break;
++			}
++		}
++
++	return count;
++}
++
+ int handle_unload_ta(u32 ta_handle)
+ {
+ 	struct tee_cmd_unload_ta cmd = {0};
+-	u32 status;
++	u32 status, count;
+ 	int ret;
+ 
+ 	if (!ta_handle)
+ 		return -EINVAL;
+ 
++	mutex_lock(&ta_refcount_mutex);
++
++	count = put_ta_refcount(ta_handle);
++
++	if (count) {
++		pr_debug("unload ta: not unloading %u count %u\n",
++			 ta_handle, count);
++		ret = -EBUSY;
++		goto unlock;
++	}
++
+ 	cmd.ta_handle = ta_handle;
+ 
+ 	ret = psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA, (void *)&cmd,
+@@ -137,8 +191,12 @@ int handle_unload_ta(u32 ta_handle)
+ 	if (!ret && status != 0) {
+ 		pr_err("unload ta: status = 0x%x\n", status);
+ 		ret = -EBUSY;
++	} else {
++		pr_debug("unloaded ta handle %u\n", ta_handle);
+ 	}
+ 
++unlock:
++	mutex_unlock(&ta_refcount_mutex);
+ 	return ret;
+ }
+ 
+@@ -340,7 +398,8 @@ int handle_open_session(struct tee_ioctl_open_session_arg *arg, u32 *info,
+ 
+ int handle_load_ta(void *data, u32 size, struct tee_ioctl_open_session_arg *arg)
+ {
+-	struct tee_cmd_load_ta cmd = {0};
++	struct tee_cmd_unload_ta unload_cmd = {};
++	struct tee_cmd_load_ta load_cmd = {};
+ 	phys_addr_t blob;
+ 	int ret;
+ 
+@@ -353,21 +412,36 @@ int handle_load_ta(void *data, u32 size, struct tee_ioctl_open_session_arg *arg)
+ 		return -EINVAL;
+ 	}
+ 
+-	cmd.hi_addr = upper_32_bits(blob);
+-	cmd.low_addr = lower_32_bits(blob);
+-	cmd.size = size;
++	load_cmd.hi_addr = upper_32_bits(blob);
++	load_cmd.low_addr = lower_32_bits(blob);
++	load_cmd.size = size;
+ 
+-	ret = psp_tee_process_cmd(TEE_CMD_ID_LOAD_TA, (void *)&cmd,
+-				  sizeof(cmd), &arg->ret);
++	mutex_lock(&ta_refcount_mutex);
++
++	ret = psp_tee_process_cmd(TEE_CMD_ID_LOAD_TA, (void *)&load_cmd,
++				  sizeof(load_cmd), &arg->ret);
+ 	if (ret) {
+ 		arg->ret_origin = TEEC_ORIGIN_COMMS;
+ 		arg->ret = TEEC_ERROR_COMMUNICATION;
+-	} else {
+-		set_session_id(cmd.ta_handle, 0, &arg->session);
++	} else if (arg->ret == TEEC_SUCCESS) {
++		ret = get_ta_refcount(load_cmd.ta_handle);
++		if (!ret) {
++			arg->ret_origin = TEEC_ORIGIN_COMMS;
++			arg->ret = TEEC_ERROR_OUT_OF_MEMORY;
++
++			/* Unload the TA on error */
++			unload_cmd.ta_handle = load_cmd.ta_handle;
++			psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA,
++					    (void *)&unload_cmd,
++					    sizeof(unload_cmd), &ret);
++		} else {
++			set_session_id(load_cmd.ta_handle, 0, &arg->session);
++		}
+ 	}
++	mutex_unlock(&ta_refcount_mutex);
+ 
+ 	pr_debug("load TA: TA handle = 0x%x, RO = 0x%x, ret = 0x%x\n",
+-		 cmd.ta_handle, arg->ret_origin, arg->ret);
++		 load_cmd.ta_handle, arg->ret_origin, arg->ret);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/tee/amdtee/core.c b/drivers/tee/amdtee/core.c
+index 8a6a8f30bb427..da6b88e80dc07 100644
+--- a/drivers/tee/amdtee/core.c
++++ b/drivers/tee/amdtee/core.c
+@@ -59,10 +59,9 @@ static void release_session(struct amdtee_session *sess)
+ 			continue;
+ 
+ 		handle_close_session(sess->ta_handle, sess->session_info[i]);
++		handle_unload_ta(sess->ta_handle);
+ 	}
+ 
+-	/* Unload Trusted Application once all sessions are closed */
+-	handle_unload_ta(sess->ta_handle);
+ 	kfree(sess);
+ }
+ 
+@@ -224,8 +223,6 @@ static void destroy_session(struct kref *ref)
+ 	struct amdtee_session *sess = container_of(ref, struct amdtee_session,
+ 						   refcount);
+ 
+-	/* Unload the TA from TEE */
+-	handle_unload_ta(sess->ta_handle);
+ 	mutex_lock(&session_list_mutex);
+ 	list_del(&sess->list_node);
+ 	mutex_unlock(&session_list_mutex);
+@@ -238,7 +235,7 @@ int amdtee_open_session(struct tee_context *ctx,
+ {
+ 	struct amdtee_context_data *ctxdata = ctx->data;
+ 	struct amdtee_session *sess = NULL;
+-	u32 session_info;
++	u32 session_info, ta_handle;
+ 	size_t ta_size;
+ 	int rc, i;
+ 	void *ta;
+@@ -259,11 +256,14 @@ int amdtee_open_session(struct tee_context *ctx,
+ 	if (arg->ret != TEEC_SUCCESS)
+ 		goto out;
+ 
++	ta_handle = get_ta_handle(arg->session);
++
+ 	mutex_lock(&session_list_mutex);
+ 	sess = alloc_session(ctxdata, arg->session);
+ 	mutex_unlock(&session_list_mutex);
+ 
+ 	if (!sess) {
++		handle_unload_ta(ta_handle);
+ 		rc = -ENOMEM;
+ 		goto out;
+ 	}
+@@ -277,6 +277,7 @@ int amdtee_open_session(struct tee_context *ctx,
+ 
+ 	if (i >= TEE_NUM_SESSIONS) {
+ 		pr_err("reached maximum session count %d\n", TEE_NUM_SESSIONS);
++		handle_unload_ta(ta_handle);
+ 		kref_put(&sess->refcount, destroy_session);
+ 		rc = -ENOMEM;
+ 		goto out;
+@@ -289,12 +290,13 @@ int amdtee_open_session(struct tee_context *ctx,
+ 		spin_lock(&sess->lock);
+ 		clear_bit(i, sess->sess_mask);
+ 		spin_unlock(&sess->lock);
++		handle_unload_ta(ta_handle);
+ 		kref_put(&sess->refcount, destroy_session);
+ 		goto out;
+ 	}
+ 
+ 	sess->session_info[i] = session_info;
+-	set_session_id(sess->ta_handle, i, &arg->session);
++	set_session_id(ta_handle, i, &arg->session);
+ out:
+ 	free_pages((u64)ta, get_order(ta_size));
+ 	return rc;
+@@ -329,6 +331,7 @@ int amdtee_close_session(struct tee_context *ctx, u32 session)
+ 
+ 	/* Close the session */
+ 	handle_close_session(ta_handle, session_info);
++	handle_unload_ta(ta_handle);
+ 
+ 	kref_put(&sess->refcount, destroy_session);
+ 
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index e0c00a1b07639..51b0ecabf2ec9 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -818,9 +818,6 @@ static int mvebu_uart_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!match)
+-		return -ENODEV;
+-
+ 	/* Assume that all UART ports have a DT alias or none has */
+ 	id = of_alias_get_id(pdev->dev.of_node, "serial");
+ 	if (!pdev->dev.of_node || id < 0)
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 8f88ee2a2c8d0..06757b1d4aecd 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -1172,7 +1172,7 @@ static inline int resize_screen(struct vc_data *vc, int width, int height,
+ 	/* Resizes the resolution of the display adapater */
+ 	int err = 0;
+ 
+-	if (vc->vc_mode != KD_GRAPHICS && vc->vc_sw->con_resize)
++	if (vc->vc_sw->con_resize)
+ 		err = vc->vc_sw->con_resize(vc, width, height, user);
+ 
+ 	return err;
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index 5f61b25a9aaa8..09b8d02acd996 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -771,21 +771,58 @@ static int vt_resizex(struct vc_data *vc, struct vt_consize __user *cs)
+ 	if (copy_from_user(&v, cs, sizeof(struct vt_consize)))
+ 		return -EFAULT;
+ 
+-	if (v.v_vlin)
+-		pr_info_once("\"struct vt_consize\"->v_vlin is ignored. Please report if you need this.\n");
+-	if (v.v_clin)
+-		pr_info_once("\"struct vt_consize\"->v_clin is ignored. Please report if you need this.\n");
++	/* FIXME: Should check the copies properly */
++	if (!v.v_vlin)
++		v.v_vlin = vc->vc_scan_lines;
++
++	if (v.v_clin) {
++		int rows = v.v_vlin / v.v_clin;
++		if (v.v_rows != rows) {
++			if (v.v_rows) /* Parameters don't add up */
++				return -EINVAL;
++			v.v_rows = rows;
++		}
++	}
++
++	if (v.v_vcol && v.v_ccol) {
++		int cols = v.v_vcol / v.v_ccol;
++		if (v.v_cols != cols) {
++			if (v.v_cols)
++				return -EINVAL;
++			v.v_cols = cols;
++		}
++	}
++
++	if (v.v_clin > 32)
++		return -EINVAL;
+ 
+-	console_lock();
+ 	for (i = 0; i < MAX_NR_CONSOLES; i++) {
+-		vc = vc_cons[i].d;
++		struct vc_data *vcp;
+ 
+-		if (vc) {
+-			vc->vc_resize_user = 1;
+-			vc_resize(vc, v.v_cols, v.v_rows);
++		if (!vc_cons[i].d)
++			continue;
++		console_lock();
++		vcp = vc_cons[i].d;
++		if (vcp) {
++			int ret;
++			int save_scan_lines = vcp->vc_scan_lines;
++			int save_cell_height = vcp->vc_cell_height;
++
++			if (v.v_vlin)
++				vcp->vc_scan_lines = v.v_vlin;
++			if (v.v_clin)
++				vcp->vc_cell_height = v.v_clin;
++			vcp->vc_resize_user = 1;
++			ret = vc_resize(vcp, v.v_cols, v.v_rows);
++			if (ret) {
++				vcp->vc_scan_lines = save_scan_lines;
++				vcp->vc_cell_height = save_cell_height;
++				console_unlock();
++				return ret;
++			}
+ 		}
++		console_unlock();
+ 	}
+-	console_unlock();
+ 
+ 	return 0;
+ }
+diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c
+index 4dae2320b103e..c31febe90d4ea 100644
+--- a/drivers/uio/uio_hv_generic.c
++++ b/drivers/uio/uio_hv_generic.c
+@@ -296,8 +296,10 @@ hv_uio_probe(struct hv_device *dev,
+ 
+ 	ret = vmbus_establish_gpadl(channel, pdata->recv_buf,
+ 				    RECV_BUFFER_SIZE, &pdata->recv_gpadl);
+-	if (ret)
++	if (ret) {
++		vfree(pdata->recv_buf);
+ 		goto fail_close;
++	}
+ 
+ 	/* put Global Physical Address Label in name */
+ 	snprintf(pdata->recv_name, sizeof(pdata->recv_name),
+@@ -316,8 +318,10 @@ hv_uio_probe(struct hv_device *dev,
+ 
+ 	ret = vmbus_establish_gpadl(channel, pdata->send_buf,
+ 				    SEND_BUFFER_SIZE, &pdata->send_gpadl);
+-	if (ret)
++	if (ret) {
++		vfree(pdata->send_buf);
+ 		goto fail_close;
++	}
+ 
+ 	snprintf(pdata->send_name, sizeof(pdata->send_name),
+ 		 "send:%u", pdata->send_gpadl);
+diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c
+index 17876f0179b57..5dc88a914b349 100644
+--- a/drivers/video/console/vgacon.c
++++ b/drivers/video/console/vgacon.c
+@@ -384,7 +384,7 @@ static void vgacon_init(struct vc_data *c, int init)
+ 		vc_resize(c, vga_video_num_columns, vga_video_num_lines);
+ 
+ 	c->vc_scan_lines = vga_scan_lines;
+-	c->vc_font.height = vga_video_font_height;
++	c->vc_font.height = c->vc_cell_height = vga_video_font_height;
+ 	c->vc_complement_mask = 0x7700;
+ 	if (vga_512_chars)
+ 		c->vc_hi_font_mask = 0x0800;
+@@ -519,32 +519,32 @@ static void vgacon_cursor(struct vc_data *c, int mode)
+ 		switch (CUR_SIZE(c->vc_cursor_type)) {
+ 		case CUR_UNDERLINE:
+ 			vgacon_set_cursor_size(c->state.x,
+-					       c->vc_font.height -
+-					       (c->vc_font.height <
++					       c->vc_cell_height -
++					       (c->vc_cell_height <
+ 						10 ? 2 : 3),
+-					       c->vc_font.height -
+-					       (c->vc_font.height <
++					       c->vc_cell_height -
++					       (c->vc_cell_height <
+ 						10 ? 1 : 2));
+ 			break;
+ 		case CUR_TWO_THIRDS:
+ 			vgacon_set_cursor_size(c->state.x,
+-					       c->vc_font.height / 3,
+-					       c->vc_font.height -
+-					       (c->vc_font.height <
++					       c->vc_cell_height / 3,
++					       c->vc_cell_height -
++					       (c->vc_cell_height <
+ 						10 ? 1 : 2));
+ 			break;
+ 		case CUR_LOWER_THIRD:
+ 			vgacon_set_cursor_size(c->state.x,
+-					       (c->vc_font.height * 2) / 3,
+-					       c->vc_font.height -
+-					       (c->vc_font.height <
++					       (c->vc_cell_height * 2) / 3,
++					       c->vc_cell_height -
++					       (c->vc_cell_height <
+ 						10 ? 1 : 2));
+ 			break;
+ 		case CUR_LOWER_HALF:
+ 			vgacon_set_cursor_size(c->state.x,
+-					       c->vc_font.height / 2,
+-					       c->vc_font.height -
+-					       (c->vc_font.height <
++					       c->vc_cell_height / 2,
++					       c->vc_cell_height -
++					       (c->vc_cell_height <
+ 						10 ? 1 : 2));
+ 			break;
+ 		case CUR_NONE:
+@@ -555,7 +555,7 @@ static void vgacon_cursor(struct vc_data *c, int mode)
+ 			break;
+ 		default:
+ 			vgacon_set_cursor_size(c->state.x, 1,
+-					       c->vc_font.height);
++					       c->vc_cell_height);
+ 			break;
+ 		}
+ 		break;
+@@ -566,13 +566,13 @@ static int vgacon_doresize(struct vc_data *c,
+ 		unsigned int width, unsigned int height)
+ {
+ 	unsigned long flags;
+-	unsigned int scanlines = height * c->vc_font.height;
++	unsigned int scanlines = height * c->vc_cell_height;
+ 	u8 scanlines_lo = 0, r7 = 0, vsync_end = 0, mode, max_scan;
+ 
+ 	raw_spin_lock_irqsave(&vga_lock, flags);
+ 
+ 	vgacon_xres = width * VGA_FONTWIDTH;
+-	vgacon_yres = height * c->vc_font.height;
++	vgacon_yres = height * c->vc_cell_height;
+ 	if (vga_video_type >= VIDEO_TYPE_VGAC) {
+ 		outb_p(VGA_CRTC_MAX_SCAN, vga_video_port_reg);
+ 		max_scan = inb_p(vga_video_port_val);
+@@ -627,9 +627,9 @@ static int vgacon_doresize(struct vc_data *c,
+ static int vgacon_switch(struct vc_data *c)
+ {
+ 	int x = c->vc_cols * VGA_FONTWIDTH;
+-	int y = c->vc_rows * c->vc_font.height;
++	int y = c->vc_rows * c->vc_cell_height;
+ 	int rows = screen_info.orig_video_lines * vga_default_font_height/
+-		c->vc_font.height;
++		c->vc_cell_height;
+ 	/*
+ 	 * We need to save screen size here as it's the only way
+ 	 * we can spot the screen has been resized and we need to
+@@ -1060,7 +1060,7 @@ static int vgacon_adjust_height(struct vc_data *vc, unsigned fontheight)
+ 				cursor_size_lastto = 0;
+ 				c->vc_sw->con_cursor(c, CM_DRAW);
+ 			}
+-			c->vc_font.height = fontheight;
++			c->vc_font.height = c->vc_cell_height = fontheight;
+ 			vc_resize(c, 0, rows);	/* Adjust console size */
+ 		}
+ 	}
+@@ -1108,12 +1108,20 @@ static int vgacon_resize(struct vc_data *c, unsigned int width,
+ 	if ((width << 1) * height > vga_vram_size)
+ 		return -EINVAL;
+ 
++	if (user) {
++		/*
++		 * Ho ho!  Someone (svgatextmode, eh?) may have reprogrammed
++		 * the video mode!  Set the new defaults then and go away.
++		 */
++		screen_info.orig_video_cols = width;
++		screen_info.orig_video_lines = height;
++		vga_default_font_height = c->vc_cell_height;
++		return 0;
++	}
+ 	if (width % 2 || width > screen_info.orig_video_cols ||
+ 	    height > (screen_info.orig_video_lines * vga_default_font_height)/
+-	    c->vc_font.height)
+-		/* let svgatextmode tinker with video timings and
+-		   return success */
+-		return (user) ? 0 : -EINVAL;
++	    c->vc_cell_height)
++		return -EINVAL;
+ 
+ 	if (con_is_visible(c) && !vga_is_gfx) /* who knows */
+ 		vgacon_doresize(c, width, height);
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 26581194fdf81..42c72d051158f 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -2031,7 +2031,7 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
+ 			return -EINVAL;
+ 
+ 		DPRINTK("resize now %ix%i\n", var.xres, var.yres);
+-		if (con_is_visible(vc)) {
++		if (con_is_visible(vc) && vc->vc_mode == KD_TEXT) {
+ 			var.activate = FB_ACTIVATE_NOW |
+ 				FB_ACTIVATE_FORCE;
+ 			fb_set_var(info, &var);
+diff --git a/drivers/video/fbdev/hgafb.c b/drivers/video/fbdev/hgafb.c
+index a45fcff1461fb..0fe32737ba084 100644
+--- a/drivers/video/fbdev/hgafb.c
++++ b/drivers/video/fbdev/hgafb.c
+@@ -286,7 +286,7 @@ static int hga_card_detect(void)
+ 
+ 	hga_vram = ioremap(0xb0000, hga_vram_len);
+ 	if (!hga_vram)
+-		goto error;
++		return -ENOMEM;
+ 
+ 	if (request_region(0x3b0, 12, "hgafb"))
+ 		release_io_ports = 1;
+@@ -346,13 +346,18 @@ static int hga_card_detect(void)
+ 			hga_type_name = "Hercules";
+ 			break;
+ 	}
+-	return 1;
++	return 0;
+ error:
+ 	if (release_io_ports)
+ 		release_region(0x3b0, 12);
+ 	if (release_io_port)
+ 		release_region(0x3bf, 1);
+-	return 0;
++
++	iounmap(hga_vram);
++
++	pr_err("hgafb: HGA card not detected.\n");
++
++	return -EINVAL;
+ }
+ 
+ /**
+@@ -550,13 +555,11 @@ static const struct fb_ops hgafb_ops = {
+ static int hgafb_probe(struct platform_device *pdev)
+ {
+ 	struct fb_info *info;
++	int ret;
+ 
+-	if (! hga_card_detect()) {
+-		printk(KERN_INFO "hgafb: HGA card not detected.\n");
+-		if (hga_vram)
+-			iounmap(hga_vram);
+-		return -EINVAL;
+-	}
++	ret = hga_card_detect();
++	if (ret)
++		return ret;
+ 
+ 	printk(KERN_INFO "hgafb: %s with %ldK of memory detected.\n",
+ 		hga_type_name, hga_vram_len/1024);
+diff --git a/drivers/video/fbdev/imsttfb.c b/drivers/video/fbdev/imsttfb.c
+index 3ac053b884958..e04411701ec85 100644
+--- a/drivers/video/fbdev/imsttfb.c
++++ b/drivers/video/fbdev/imsttfb.c
+@@ -1512,11 +1512,6 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	info->fix.smem_start = addr;
+ 	info->screen_base = (__u8 *)ioremap(addr, par->ramdac == IBM ?
+ 					    0x400000 : 0x800000);
+-	if (!info->screen_base) {
+-		release_mem_region(addr, size);
+-		framebuffer_release(info);
+-		return -ENOMEM;
+-	}
+ 	info->fix.mmio_start = addr + 0x800000;
+ 	par->dc_regs = ioremap(addr + 0x800000, 0x1000);
+ 	par->cmap_regs_phys = addr + 0x840000;
+diff --git a/drivers/xen/xen-pciback/vpci.c b/drivers/xen/xen-pciback/vpci.c
+index 5447b5ab7c766..1221cfd914cb0 100644
+--- a/drivers/xen/xen-pciback/vpci.c
++++ b/drivers/xen/xen-pciback/vpci.c
+@@ -70,7 +70,7 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev,
+ 				   struct pci_dev *dev, int devid,
+ 				   publish_pci_dev_cb publish_cb)
+ {
+-	int err = 0, slot, func = -1;
++	int err = 0, slot, func = PCI_FUNC(dev->devfn);
+ 	struct pci_dev_entry *t, *dev_entry;
+ 	struct vpci_dev_data *vpci_dev = pdev->pci_dev_data;
+ 
+@@ -95,22 +95,25 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev,
+ 
+ 	/*
+ 	 * Keep multi-function devices together on the virtual PCI bus, except
+-	 * virtual functions.
++	 * that we want to keep virtual functions at func 0 on their own. They
++	 * aren't multi-function devices and hence their presence at func 0
++	 * may cause guests to not scan the other functions.
+ 	 */
+-	if (!dev->is_virtfn) {
++	if (!dev->is_virtfn || func) {
+ 		for (slot = 0; slot < PCI_SLOT_MAX; slot++) {
+ 			if (list_empty(&vpci_dev->dev_list[slot]))
+ 				continue;
+ 
+ 			t = list_entry(list_first(&vpci_dev->dev_list[slot]),
+ 				       struct pci_dev_entry, list);
++			if (t->dev->is_virtfn && !PCI_FUNC(t->dev->devfn))
++				continue;
+ 
+ 			if (match_slot(dev, t->dev)) {
+ 				dev_info(&dev->dev, "vpci: assign to virtual slot %d func %d\n",
+-					 slot, PCI_FUNC(dev->devfn));
++					 slot, func);
+ 				list_add_tail(&dev_entry->list,
+ 					      &vpci_dev->dev_list[slot]);
+-				func = PCI_FUNC(dev->devfn);
+ 				goto unlock;
+ 			}
+ 		}
+@@ -123,7 +126,6 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev,
+ 				 slot);
+ 			list_add_tail(&dev_entry->list,
+ 				      &vpci_dev->dev_list[slot]);
+-			func = dev->is_virtfn ? 0 : PCI_FUNC(dev->devfn);
+ 			goto unlock;
+ 		}
+ 	}
+diff --git a/drivers/xen/xen-pciback/xenbus.c b/drivers/xen/xen-pciback/xenbus.c
+index e7c692cfb2cf8..cad56ea61376d 100644
+--- a/drivers/xen/xen-pciback/xenbus.c
++++ b/drivers/xen/xen-pciback/xenbus.c
+@@ -359,7 +359,8 @@ out:
+ 	return err;
+ }
+ 
+-static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev)
++static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev,
++				 enum xenbus_state state)
+ {
+ 	int err = 0;
+ 	int num_devs;
+@@ -373,9 +374,7 @@ static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev)
+ 	dev_dbg(&pdev->xdev->dev, "Reconfiguring device ...\n");
+ 
+ 	mutex_lock(&pdev->dev_lock);
+-	/* Make sure we only reconfigure once */
+-	if (xenbus_read_driver_state(pdev->xdev->nodename) !=
+-	    XenbusStateReconfiguring)
++	if (xenbus_read_driver_state(pdev->xdev->nodename) != state)
+ 		goto out;
+ 
+ 	err = xenbus_scanf(XBT_NIL, pdev->xdev->nodename, "num_devs", "%d",
+@@ -500,6 +499,10 @@ static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev)
+ 		}
+ 	}
+ 
++	if (state != XenbusStateReconfiguring)
++		/* Make sure we only reconfigure once. */
++		goto out;
++
+ 	err = xenbus_switch_state(pdev->xdev, XenbusStateReconfigured);
+ 	if (err) {
+ 		xenbus_dev_fatal(pdev->xdev, err,
+@@ -525,7 +528,7 @@ static void xen_pcibk_frontend_changed(struct xenbus_device *xdev,
+ 		break;
+ 
+ 	case XenbusStateReconfiguring:
+-		xen_pcibk_reconfigure(pdev);
++		xen_pcibk_reconfigure(pdev, XenbusStateReconfiguring);
+ 		break;
+ 
+ 	case XenbusStateConnected:
+@@ -664,6 +667,15 @@ static void xen_pcibk_be_watch(struct xenbus_watch *watch,
+ 		xen_pcibk_setup_backend(pdev);
+ 		break;
+ 
++	case XenbusStateInitialised:
++		/*
++		 * We typically move to Initialised when the first device was
++		 * added. Hence subsequent devices getting added may need
++		 * reconfiguring.
++		 */
++		xen_pcibk_reconfigure(pdev, XenbusStateInitialised);
++		break;
++
+ 	default:
+ 		break;
+ 	}
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 4162ef602a024..94c24b2a211bf 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2967,6 +2967,7 @@ void btrfs_run_delayed_iputs(struct btrfs_fs_info *fs_info)
+ 		inode = list_first_entry(&fs_info->delayed_iputs,
+ 				struct btrfs_inode, delayed_iput);
+ 		run_delayed_iput_locked(fs_info, inode);
++		cond_resched_lock(&fs_info->delayed_iput_lock);
+ 	}
+ 	spin_unlock(&fs_info->delayed_iput_lock);
+ }
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 6e45a25adeff8..a9d1555301446 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1764,6 +1764,8 @@ smb2_copychunk_range(const unsigned int xid,
+ 			cpu_to_le32(min_t(u32, len, tcon->max_bytes_chunk));
+ 
+ 		/* Request server copy to target from src identified by key */
++		kfree(retbuf);
++		retbuf = NULL;
+ 		rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid,
+ 			trgtfile->fid.volatile_fid, FSCTL_SRV_COPYCHUNK_WRITE,
+ 			true /* is_fsctl */, (char *)pcchunk,
+diff --git a/fs/ecryptfs/crypto.c b/fs/ecryptfs/crypto.c
+index 0681540c48d98..adf0707263a1b 100644
+--- a/fs/ecryptfs/crypto.c
++++ b/fs/ecryptfs/crypto.c
+@@ -296,10 +296,8 @@ static int crypt_scatterlist(struct ecryptfs_crypt_stat *crypt_stat,
+ 	struct extent_crypt_result ecr;
+ 	int rc = 0;
+ 
+-	if (!crypt_stat || !crypt_stat->tfm
+-	       || !(crypt_stat->flags & ECRYPTFS_STRUCT_INITIALIZED))
+-		return -EINVAL;
+-
++	BUG_ON(!crypt_stat || !crypt_stat->tfm
++	       || !(crypt_stat->flags & ECRYPTFS_STRUCT_INITIALIZED));
+ 	if (unlikely(ecryptfs_verbosity > 0)) {
+ 		ecryptfs_printk(KERN_DEBUG, "Key size [%zd]; key:\n",
+ 				crypt_stat->key_size);
+diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h
+index 153734816b49c..d5b9c8d40c18e 100644
+--- a/include/linux/console_struct.h
++++ b/include/linux/console_struct.h
+@@ -101,6 +101,7 @@ struct vc_data {
+ 	unsigned int	vc_rows;
+ 	unsigned int	vc_size_row;		/* Bytes per row */
+ 	unsigned int	vc_scan_lines;		/* # of scan lines */
++	unsigned int	vc_cell_height;		/* CRTC character cell height */
+ 	unsigned long	vc_origin;		/* [!] Start of real screen */
+ 	unsigned long	vc_scr_end;		/* [!] End of real screen */
+ 	unsigned long	vc_visible_origin;	/* [!] Top of visible window */
+diff --git a/ipc/mqueue.c b/ipc/mqueue.c
+index beff0cfcd1e87..05d2176cc4712 100644
+--- a/ipc/mqueue.c
++++ b/ipc/mqueue.c
+@@ -1003,12 +1003,14 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
+ 				  struct mqueue_inode_info *info,
+ 				  struct ext_wait_queue *this)
+ {
++	struct task_struct *task;
++
+ 	list_del(&this->list);
+-	get_task_struct(this->task);
++	task = get_task_struct(this->task);
+ 
+ 	/* see MQ_BARRIER for purpose/pairing */
+ 	smp_store_release(&this->state, STATE_READY);
+-	wake_q_add_safe(wake_q, this->task);
++	wake_q_add_safe(wake_q, task);
+ }
+ 
+ /* pipelined_send() - send a message directly to the task waiting in
+diff --git a/ipc/msg.c b/ipc/msg.c
+index acd1bc7af55a2..6e6c8e0c9380e 100644
+--- a/ipc/msg.c
++++ b/ipc/msg.c
+@@ -251,11 +251,13 @@ static void expunge_all(struct msg_queue *msq, int res,
+ 	struct msg_receiver *msr, *t;
+ 
+ 	list_for_each_entry_safe(msr, t, &msq->q_receivers, r_list) {
+-		get_task_struct(msr->r_tsk);
++		struct task_struct *r_tsk;
++
++		r_tsk = get_task_struct(msr->r_tsk);
+ 
+ 		/* see MSG_BARRIER for purpose/pairing */
+ 		smp_store_release(&msr->r_msg, ERR_PTR(res));
+-		wake_q_add_safe(wake_q, msr->r_tsk);
++		wake_q_add_safe(wake_q, r_tsk);
+ 	}
+ }
+ 
+diff --git a/ipc/sem.c b/ipc/sem.c
+index f6c30a85dadf9..7d9c06b0ad6e2 100644
+--- a/ipc/sem.c
++++ b/ipc/sem.c
+@@ -784,12 +784,14 @@ would_block:
+ static inline void wake_up_sem_queue_prepare(struct sem_queue *q, int error,
+ 					     struct wake_q_head *wake_q)
+ {
+-	get_task_struct(q->sleeper);
++	struct task_struct *sleeper;
++
++	sleeper = get_task_struct(q->sleeper);
+ 
+ 	/* see SEM_BARRIER_2 for purpuse/pairing */
+ 	smp_store_release(&q->status, error);
+ 
+-	wake_q_add_safe(wake_q, q->sleeper);
++	wake_q_add_safe(wake_q, sleeper);
+ }
+ 
+ static void unlink_queue(struct sem_array *sma, struct sem_queue *q)
+diff --git a/kernel/kcsan/debugfs.c b/kernel/kcsan/debugfs.c
+index 209ad8dcfcecf..62a52be8f6ba9 100644
+--- a/kernel/kcsan/debugfs.c
++++ b/kernel/kcsan/debugfs.c
+@@ -261,9 +261,10 @@ static const struct file_operations debugfs_ops =
+ 	.release = single_release
+ };
+ 
+-static void __init kcsan_debugfs_init(void)
++static int __init kcsan_debugfs_init(void)
+ {
+ 	debugfs_create_file("kcsan", 0644, NULL, NULL, &debugfs_ops);
++	return 0;
+ }
+ 
+ late_initcall(kcsan_debugfs_init);
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 38d7c03e694cd..858b96b438cee 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -5664,7 +5664,7 @@ void lock_contended(struct lockdep_map *lock, unsigned long ip)
+ {
+ 	unsigned long flags;
+ 
+-	trace_lock_acquired(lock, ip);
++	trace_lock_contended(lock, ip);
+ 
+ 	if (unlikely(!lock_stat || !lockdep_enabled()))
+ 		return;
+@@ -5682,7 +5682,7 @@ void lock_acquired(struct lockdep_map *lock, unsigned long ip)
+ {
+ 	unsigned long flags;
+ 
+-	trace_lock_contended(lock, ip);
++	trace_lock_acquired(lock, ip);
+ 
+ 	if (unlikely(!lock_stat || !lockdep_enabled()))
+ 		return;
+diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
+index a7276aaf2abc0..db9301591e3fc 100644
+--- a/kernel/locking/mutex-debug.c
++++ b/kernel/locking/mutex-debug.c
+@@ -57,7 +57,7 @@ void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
+ 	task->blocked_on = waiter;
+ }
+ 
+-void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
++void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
+ 			 struct task_struct *task)
+ {
+ 	DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
+@@ -65,7 +65,7 @@ void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
+ 	DEBUG_LOCKS_WARN_ON(task->blocked_on != waiter);
+ 	task->blocked_on = NULL;
+ 
+-	list_del_init(&waiter->list);
++	INIT_LIST_HEAD(&waiter->list);
+ 	waiter->task = NULL;
+ }
+ 
+diff --git a/kernel/locking/mutex-debug.h b/kernel/locking/mutex-debug.h
+index 1edd3f45a4ecb..53e631e1d76da 100644
+--- a/kernel/locking/mutex-debug.h
++++ b/kernel/locking/mutex-debug.h
+@@ -22,7 +22,7 @@ extern void debug_mutex_free_waiter(struct mutex_waiter *waiter);
+ extern void debug_mutex_add_waiter(struct mutex *lock,
+ 				   struct mutex_waiter *waiter,
+ 				   struct task_struct *task);
+-extern void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
++extern void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
+ 				struct task_struct *task);
+ extern void debug_mutex_unlock(struct mutex *lock);
+ extern void debug_mutex_init(struct mutex *lock, const char *name,
+diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
+index 2c25b830203cd..15ac7c4bb1117 100644
+--- a/kernel/locking/mutex.c
++++ b/kernel/locking/mutex.c
+@@ -204,7 +204,7 @@ static inline bool __mutex_waiter_is_first(struct mutex *lock, struct mutex_wait
+  * Add @waiter to a given location in the lock wait_list and set the
+  * FLAG_WAITERS flag if it's the first waiter.
+  */
+-static void __sched
++static void
+ __mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
+ 		   struct list_head *list)
+ {
+@@ -215,6 +215,16 @@ __mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
+ 		__mutex_set_flag(lock, MUTEX_FLAG_WAITERS);
+ }
+ 
++static void
++__mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter)
++{
++	list_del(&waiter->list);
++	if (likely(list_empty(&lock->wait_list)))
++		__mutex_clear_flag(lock, MUTEX_FLAGS);
++
++	debug_mutex_remove_waiter(lock, waiter, current);
++}
++
+ /*
+  * Give up ownership to a specific task, when @task = NULL, this is equivalent
+  * to a regular unlock. Sets PICKUP on a handoff, clears HANDOF, preserves
+@@ -1071,9 +1081,7 @@ acquired:
+ 			__ww_mutex_check_waiters(lock, ww_ctx);
+ 	}
+ 
+-	mutex_remove_waiter(lock, &waiter, current);
+-	if (likely(list_empty(&lock->wait_list)))
+-		__mutex_clear_flag(lock, MUTEX_FLAGS);
++	__mutex_remove_waiter(lock, &waiter);
+ 
+ 	debug_mutex_free_waiter(&waiter);
+ 
+@@ -1090,7 +1098,7 @@ skip_wait:
+ 
+ err:
+ 	__set_current_state(TASK_RUNNING);
+-	mutex_remove_waiter(lock, &waiter, current);
++	__mutex_remove_waiter(lock, &waiter);
+ err_early_kill:
+ 	spin_unlock(&lock->wait_lock);
+ 	debug_mutex_free_waiter(&waiter);
+diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h
+index 1c2287d3fa719..f0c710b1d1927 100644
+--- a/kernel/locking/mutex.h
++++ b/kernel/locking/mutex.h
+@@ -10,12 +10,10 @@
+  * !CONFIG_DEBUG_MUTEXES case. Most of them are NOPs:
+  */
+ 
+-#define mutex_remove_waiter(lock, waiter, task) \
+-		__list_del((waiter)->list.prev, (waiter)->list.next)
+-
+ #define debug_mutex_wake_waiter(lock, waiter)		do { } while (0)
+ #define debug_mutex_free_waiter(waiter)			do { } while (0)
+ #define debug_mutex_add_waiter(lock, waiter, ti)	do { } while (0)
++#define debug_mutex_remove_waiter(lock, waiter, ti)     do { } while (0)
+ #define debug_mutex_unlock(lock)			do { } while (0)
+ #define debug_mutex_init(lock, name, key)		do { } while (0)
+ 
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index 79de1294f8ebd..eb4d04cb3aaf5 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -169,6 +169,21 @@ void __ptrace_unlink(struct task_struct *child)
+ 	spin_unlock(&child->sighand->siglock);
+ }
+ 
++static bool looks_like_a_spurious_pid(struct task_struct *task)
++{
++	if (task->exit_code != ((PTRACE_EVENT_EXEC << 8) | SIGTRAP))
++		return false;
++
++	if (task_pid_vnr(task) == task->ptrace_message)
++		return false;
++	/*
++	 * The tracee changed its pid but the PTRACE_EVENT_EXEC event
++	 * was not wait()'ed, most probably debugger targets the old
++	 * leader which was destroyed in de_thread().
++	 */
++	return true;
++}
++
+ /* Ensure that nothing can wake it up, even SIGKILL */
+ static bool ptrace_freeze_traced(struct task_struct *task)
+ {
+@@ -179,7 +194,8 @@ static bool ptrace_freeze_traced(struct task_struct *task)
+ 		return ret;
+ 
+ 	spin_lock_irq(&task->sighand->siglock);
+-	if (task_is_traced(task) && !__fatal_signal_pending(task)) {
++	if (task_is_traced(task) && !looks_like_a_spurious_pid(task) &&
++	    !__fatal_signal_pending(task)) {
+ 		task->state = __TASK_TRACED;
+ 		ret = true;
+ 	}
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index bf4bef13d9354..2b7879afc333b 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -2733,6 +2733,15 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	if (skb->len < sizeof(*key))
+ 		return SMP_INVALID_PARAMS;
+ 
++	/* Check if remote and local public keys are the same and debug key is
++	 * not in use.
++	 */
++	if (!test_bit(SMP_FLAG_DEBUG_KEY, &smp->flags) &&
++	    !crypto_memneq(key, smp->local_pk, 64)) {
++		bt_dev_err(hdev, "Remote and local public keys are identical");
++		return SMP_UNSPECIFIED;
++	}
++
+ 	memcpy(smp->remote_pk, key, 64);
+ 
+ 	if (test_bit(SMP_FLAG_REMOTE_OOB, &smp->flags)) {
+diff --git a/sound/firewire/Kconfig b/sound/firewire/Kconfig
+index 25778765cbfe9..9897bd26a4388 100644
+--- a/sound/firewire/Kconfig
++++ b/sound/firewire/Kconfig
+@@ -38,7 +38,7 @@ config SND_OXFW
+ 	   * Mackie(Loud) Onyx 1640i (former model)
+ 	   * Mackie(Loud) Onyx Satellite
+ 	   * Mackie(Loud) Tapco Link.Firewire
+-	   * Mackie(Loud) d.2 pro/d.4 pro
++	   * Mackie(Loud) d.4 pro
+ 	   * Mackie(Loud) U.420/U.420d
+ 	   * TASCAM FireOne
+ 	   * Stanton Controllers & Systems 1 Deck/Mixer
+@@ -84,7 +84,7 @@ config SND_BEBOB
+ 	  * PreSonus FIREBOX/FIREPOD/FP10/Inspire1394
+ 	  * BridgeCo RDAudio1/Audio5
+ 	  * Mackie Onyx 1220/1620/1640 (FireWire I/O Card)
+-	  * Mackie d.2 (FireWire Option)
++	  * Mackie d.2 (FireWire Option) and d.2 Pro
+ 	  * Stanton FinalScratch 2 (ScratchAmp)
+ 	  * Tascam IF-FW/DM
+ 	  * Behringer XENIX UFX 1204/1604
+diff --git a/sound/firewire/amdtp-stream-trace.h b/sound/firewire/amdtp-stream-trace.h
+index 26e7cb555d3c5..aa53c13b89d34 100644
+--- a/sound/firewire/amdtp-stream-trace.h
++++ b/sound/firewire/amdtp-stream-trace.h
+@@ -14,8 +14,8 @@
+ #include <linux/tracepoint.h>
+ 
+ TRACE_EVENT(amdtp_packet,
+-	TP_PROTO(const struct amdtp_stream *s, u32 cycles, const __be32 *cip_header, unsigned int payload_length, unsigned int data_blocks, unsigned int data_block_counter, unsigned int index),
+-	TP_ARGS(s, cycles, cip_header, payload_length, data_blocks, data_block_counter, index),
++	TP_PROTO(const struct amdtp_stream *s, u32 cycles, const __be32 *cip_header, unsigned int payload_length, unsigned int data_blocks, unsigned int data_block_counter, unsigned int packet_index, unsigned int index),
++	TP_ARGS(s, cycles, cip_header, payload_length, data_blocks, data_block_counter, packet_index, index),
+ 	TP_STRUCT__entry(
+ 		__field(unsigned int, second)
+ 		__field(unsigned int, cycle)
+@@ -48,7 +48,7 @@ TRACE_EVENT(amdtp_packet,
+ 		__entry->payload_quadlets = payload_length / sizeof(__be32);
+ 		__entry->data_blocks = data_blocks;
+ 		__entry->data_block_counter = data_block_counter,
+-		__entry->packet_index = s->packet_index;
++		__entry->packet_index = packet_index;
+ 		__entry->irq = !!in_interrupt();
+ 		__entry->index = index;
+ 	),
+diff --git a/sound/firewire/amdtp-stream.c b/sound/firewire/amdtp-stream.c
+index 4e2f2bb7879fb..e0faa6601966c 100644
+--- a/sound/firewire/amdtp-stream.c
++++ b/sound/firewire/amdtp-stream.c
+@@ -526,7 +526,7 @@ static void build_it_pkt_header(struct amdtp_stream *s, unsigned int cycle,
+ 	}
+ 
+ 	trace_amdtp_packet(s, cycle, cip_header, payload_length, data_blocks,
+-			   data_block_counter, index);
++			   data_block_counter, s->packet_index, index);
+ }
+ 
+ static int check_cip_header(struct amdtp_stream *s, const __be32 *buf,
+@@ -630,21 +630,27 @@ static int parse_ir_ctx_header(struct amdtp_stream *s, unsigned int cycle,
+ 			       unsigned int *payload_length,
+ 			       unsigned int *data_blocks,
+ 			       unsigned int *data_block_counter,
+-			       unsigned int *syt, unsigned int index)
++			       unsigned int *syt, unsigned int packet_index, unsigned int index)
+ {
+ 	const __be32 *cip_header;
++	unsigned int cip_header_size;
+ 	int err;
+ 
+ 	*payload_length = be32_to_cpu(ctx_header[0]) >> ISO_DATA_LENGTH_SHIFT;
+-	if (*payload_length > s->ctx_data.tx.ctx_header_size +
+-					s->ctx_data.tx.max_ctx_payload_length) {
++
++	if (!(s->flags & CIP_NO_HEADER))
++		cip_header_size = 8;
++	else
++		cip_header_size = 0;
++
++	if (*payload_length > cip_header_size + s->ctx_data.tx.max_ctx_payload_length) {
+ 		dev_err(&s->unit->device,
+ 			"Detect jumbo payload: %04x %04x\n",
+-			*payload_length, s->ctx_data.tx.max_ctx_payload_length);
++			*payload_length, cip_header_size + s->ctx_data.tx.max_ctx_payload_length);
+ 		return -EIO;
+ 	}
+ 
+-	if (!(s->flags & CIP_NO_HEADER)) {
++	if (cip_header_size > 0) {
+ 		cip_header = ctx_header + 2;
+ 		err = check_cip_header(s, cip_header, *payload_length,
+ 				       data_blocks, data_block_counter, syt);
+@@ -662,7 +668,7 @@ static int parse_ir_ctx_header(struct amdtp_stream *s, unsigned int cycle,
+ 	}
+ 
+ 	trace_amdtp_packet(s, cycle, cip_header, *payload_length, *data_blocks,
+-			   *data_block_counter, index);
++			   *data_block_counter, packet_index, index);
+ 
+ 	return err;
+ }
+@@ -701,12 +707,13 @@ static int generate_device_pkt_descs(struct amdtp_stream *s,
+ 				     unsigned int packets)
+ {
+ 	unsigned int dbc = s->data_block_counter;
++	unsigned int packet_index = s->packet_index;
++	unsigned int queue_size = s->queue_size;
+ 	int i;
+ 	int err;
+ 
+ 	for (i = 0; i < packets; ++i) {
+ 		struct pkt_desc *desc = descs + i;
+-		unsigned int index = (s->packet_index + i) % s->queue_size;
+ 		unsigned int cycle;
+ 		unsigned int payload_length;
+ 		unsigned int data_blocks;
+@@ -715,7 +722,7 @@ static int generate_device_pkt_descs(struct amdtp_stream *s,
+ 		cycle = compute_cycle_count(ctx_header[1]);
+ 
+ 		err = parse_ir_ctx_header(s, cycle, ctx_header, &payload_length,
+-					  &data_blocks, &dbc, &syt, i);
++					  &data_blocks, &dbc, &syt, packet_index, i);
+ 		if (err < 0)
+ 			return err;
+ 
+@@ -723,13 +730,15 @@ static int generate_device_pkt_descs(struct amdtp_stream *s,
+ 		desc->syt = syt;
+ 		desc->data_blocks = data_blocks;
+ 		desc->data_block_counter = dbc;
+-		desc->ctx_payload = s->buffer.packets[index].buffer;
++		desc->ctx_payload = s->buffer.packets[packet_index].buffer;
+ 
+ 		if (!(s->flags & CIP_DBC_IS_END_EVENT))
+ 			dbc = (dbc + desc->data_blocks) & 0xff;
+ 
+ 		ctx_header +=
+ 			s->ctx_data.tx.ctx_header_size / sizeof(*ctx_header);
++
++		packet_index = (packet_index + 1) % queue_size;
+ 	}
+ 
+ 	s->data_block_counter = dbc;
+@@ -1065,23 +1074,22 @@ static int amdtp_stream_start(struct amdtp_stream *s, int channel, int speed,
+ 		s->data_block_counter = 0;
+ 	}
+ 
+-	/* initialize packet buffer */
++	// initialize packet buffer.
++	max_ctx_payload_size = amdtp_stream_get_max_payload(s);
+ 	if (s->direction == AMDTP_IN_STREAM) {
+ 		dir = DMA_FROM_DEVICE;
+ 		type = FW_ISO_CONTEXT_RECEIVE;
+-		if (!(s->flags & CIP_NO_HEADER))
++		if (!(s->flags & CIP_NO_HEADER)) {
++			max_ctx_payload_size -= 8;
+ 			ctx_header_size = IR_CTX_HEADER_SIZE_CIP;
+-		else
++		} else {
+ 			ctx_header_size = IR_CTX_HEADER_SIZE_NO_CIP;
+-
+-		max_ctx_payload_size = amdtp_stream_get_max_payload(s) -
+-				       ctx_header_size;
++		}
+ 	} else {
+ 		dir = DMA_TO_DEVICE;
+ 		type = FW_ISO_CONTEXT_TRANSMIT;
+ 		ctx_header_size = 0;	// No effect for IT context.
+ 
+-		max_ctx_payload_size = amdtp_stream_get_max_payload(s);
+ 		if (!(s->flags & CIP_NO_HEADER))
+ 			max_ctx_payload_size -= IT_PKT_HEADER_SIZE_CIP;
+ 	}
+diff --git a/sound/firewire/bebob/bebob.c b/sound/firewire/bebob/bebob.c
+index 2c8e3392a4903..daeecfa8b9aac 100644
+--- a/sound/firewire/bebob/bebob.c
++++ b/sound/firewire/bebob/bebob.c
+@@ -387,7 +387,7 @@ static const struct ieee1394_device_id bebob_id_table[] = {
+ 	SND_BEBOB_DEV_ENTRY(VEN_BRIDGECO, 0x00010049, &spec_normal),
+ 	/* Mackie, Onyx 1220/1620/1640 (Firewire I/O Card) */
+ 	SND_BEBOB_DEV_ENTRY(VEN_MACKIE2, 0x00010065, &spec_normal),
+-	/* Mackie, d.2 (Firewire Option) */
++	// Mackie, d.2 (Firewire option card) and d.2 Pro (the card is built-in).
+ 	SND_BEBOB_DEV_ENTRY(VEN_MACKIE1, 0x00010067, &spec_normal),
+ 	/* Stanton, ScratchAmp */
+ 	SND_BEBOB_DEV_ENTRY(VEN_STANTON, 0x00000001, &spec_normal),
+diff --git a/sound/firewire/dice/dice-alesis.c b/sound/firewire/dice/dice-alesis.c
+index 0916864511d50..27c13b9cc9efd 100644
+--- a/sound/firewire/dice/dice-alesis.c
++++ b/sound/firewire/dice/dice-alesis.c
+@@ -16,7 +16,7 @@ alesis_io14_tx_pcm_chs[MAX_STREAMS][SND_DICE_RATE_MODE_COUNT] = {
+ static const unsigned int
+ alesis_io26_tx_pcm_chs[MAX_STREAMS][SND_DICE_RATE_MODE_COUNT] = {
+ 	{10, 10, 4},	/* Tx0 = Analog + S/PDIF. */
+-	{16, 8, 0},	/* Tx1 = ADAT1 + ADAT2. */
++	{16, 4, 0},	/* Tx1 = ADAT1 + ADAT2 (available at low rate). */
+ };
+ 
+ int snd_dice_detect_alesis_formats(struct snd_dice *dice)
+diff --git a/sound/firewire/dice/dice-tcelectronic.c b/sound/firewire/dice/dice-tcelectronic.c
+index a8875d24ba2aa..43a3bcb15b3d1 100644
+--- a/sound/firewire/dice/dice-tcelectronic.c
++++ b/sound/firewire/dice/dice-tcelectronic.c
+@@ -38,8 +38,8 @@ static const struct dice_tc_spec konnekt_24d = {
+ };
+ 
+ static const struct dice_tc_spec konnekt_live = {
+-	.tx_pcm_chs = {{16, 16, 16}, {0, 0, 0} },
+-	.rx_pcm_chs = {{16, 16, 16}, {0, 0, 0} },
++	.tx_pcm_chs = {{16, 16, 6}, {0, 0, 0} },
++	.rx_pcm_chs = {{16, 16, 6}, {0, 0, 0} },
+ 	.has_midi = true,
+ };
+ 
+diff --git a/sound/firewire/oxfw/oxfw.c b/sound/firewire/oxfw/oxfw.c
+index 1f1e3236efb8e..9eea25c46dc7e 100644
+--- a/sound/firewire/oxfw/oxfw.c
++++ b/sound/firewire/oxfw/oxfw.c
+@@ -355,7 +355,6 @@ static const struct ieee1394_device_id oxfw_id_table[] = {
+ 	 *  Onyx-i series (former models):	0x081216
+ 	 *  Mackie Onyx Satellite:		0x00200f
+ 	 *  Tapco LINK.firewire 4x6:		0x000460
+-	 *  d.2 pro:				Unknown
+ 	 *  d.4 pro:				Unknown
+ 	 *  U.420:				Unknown
+ 	 *  U.420d:				Unknown
+diff --git a/sound/isa/sb/sb8.c b/sound/isa/sb/sb8.c
+index 438109f167d61..ae93191ffdc9f 100644
+--- a/sound/isa/sb/sb8.c
++++ b/sound/isa/sb/sb8.c
+@@ -96,10 +96,6 @@ static int snd_sb8_probe(struct device *pdev, unsigned int dev)
+ 
+ 	/* block the 0x388 port to avoid PnP conflicts */
+ 	acard->fm_res = request_region(0x388, 4, "SoundBlaster FM");
+-	if (!acard->fm_res) {
+-		err = -EBUSY;
+-		goto _err;
+-	}
+ 
+ 	if (port[dev] != SNDRV_AUTO_PORT) {
+ 		if ((err = snd_sbdsp_create(card, port[dev], irq[dev],
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 1fe70f2fe4fe8..43a63db4ab6ad 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -395,7 +395,6 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ 	case 0x10ec0282:
+ 	case 0x10ec0283:
+ 	case 0x10ec0286:
+-	case 0x10ec0287:
+ 	case 0x10ec0288:
+ 	case 0x10ec0285:
+ 	case 0x10ec0298:
+@@ -406,6 +405,10 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ 	case 0x10ec0275:
+ 		alc_update_coef_idx(codec, 0xe, 0, 1<<0);
+ 		break;
++	case 0x10ec0287:
++		alc_update_coef_idx(codec, 0x10, 1<<9, 0);
++		alc_write_coef_idx(codec, 0x8, 0x4ab7);
++		break;
+ 	case 0x10ec0293:
+ 		alc_update_coef_idx(codec, 0xa, 1<<13, 0);
+ 		break;
+@@ -5717,6 +5720,18 @@ static void alc_fixup_tpt470_dacs(struct hda_codec *codec,
+ 		spec->gen.preferred_dacs = preferred_pairs;
+ }
+ 
++static void alc295_fixup_asus_dacs(struct hda_codec *codec,
++				   const struct hda_fixup *fix, int action)
++{
++	static const hda_nid_t preferred_pairs[] = {
++		0x17, 0x02, 0x21, 0x03, 0
++	};
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE)
++		spec->gen.preferred_dacs = preferred_pairs;
++}
++
+ static void alc_shutup_dell_xps13(struct hda_codec *codec)
+ {
+ 	struct alc_spec *spec = codec->spec;
+@@ -6232,6 +6247,35 @@ static void alc294_fixup_gx502_hp(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc294_gu502_toggle_output(struct hda_codec *codec,
++				       struct hda_jack_callback *cb)
++{
++	/* Windows sets 0x10 to 0x8420 for Node 0x20 which is
++	 * responsible from changes between speakers and headphones
++	 */
++	if (snd_hda_jack_detect_state(codec, 0x21) == HDA_JACK_PRESENT)
++		alc_write_coef_idx(codec, 0x10, 0x8420);
++	else
++		alc_write_coef_idx(codec, 0x10, 0x0a20);
++}
++
++static void alc294_fixup_gu502_hp(struct hda_codec *codec,
++				  const struct hda_fixup *fix, int action)
++{
++	if (!is_jack_detectable(codec, 0x21))
++		return;
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		snd_hda_jack_detect_enable_callback(codec, 0x21,
++				alc294_gu502_toggle_output);
++		break;
++	case HDA_FIXUP_ACT_INIT:
++		alc294_gu502_toggle_output(codec, NULL);
++		break;
++	}
++}
++
+ static void  alc285_fixup_hp_gpio_amp_init(struct hda_codec *codec,
+ 			      const struct hda_fixup *fix, int action)
+ {
+@@ -6449,6 +6493,9 @@ enum {
+ 	ALC294_FIXUP_ASUS_GX502_HP,
+ 	ALC294_FIXUP_ASUS_GX502_PINS,
+ 	ALC294_FIXUP_ASUS_GX502_VERBS,
++	ALC294_FIXUP_ASUS_GU502_HP,
++	ALC294_FIXUP_ASUS_GU502_PINS,
++	ALC294_FIXUP_ASUS_GU502_VERBS,
+ 	ALC285_FIXUP_HP_GPIO_LED,
+ 	ALC285_FIXUP_HP_MUTE_LED,
+ 	ALC236_FIXUP_HP_GPIO_LED,
+@@ -6485,6 +6532,9 @@ enum {
+ 	ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST,
+ 	ALC256_FIXUP_ACER_HEADSET_MIC,
+ 	ALC285_FIXUP_IDEAPAD_S740_COEF,
++	ALC295_FIXUP_ASUS_DACS,
++	ALC295_FIXUP_HP_OMEN,
++	ALC285_FIXUP_HP_SPECTRE_X360,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7687,6 +7737,35 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc294_fixup_gx502_hp,
+ 	},
++	[ALC294_FIXUP_ASUS_GU502_PINS] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x01a11050 }, /* rear HP mic */
++			{ 0x1a, 0x01a11830 }, /* rear external mic */
++			{ 0x21, 0x012110f0 }, /* rear HP out */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC294_FIXUP_ASUS_GU502_VERBS
++	},
++	[ALC294_FIXUP_ASUS_GU502_VERBS] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			/* set 0x15 to HP-OUT ctrl */
++			{ 0x15, AC_VERB_SET_PIN_WIDGET_CONTROL, 0xc0 },
++			/* unmute the 0x15 amp */
++			{ 0x15, AC_VERB_SET_AMP_GAIN_MUTE, 0xb000 },
++			/* set 0x1b to HP-OUT */
++			{ 0x1b, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x24 },
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC294_FIXUP_ASUS_GU502_HP
++	},
++	[ALC294_FIXUP_ASUS_GU502_HP] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc294_fixup_gu502_hp,
++	},
+ 	[ALC294_FIXUP_ASUS_COEF_1B] = {
+ 		.type = HDA_FIXUP_VERBS,
+ 		.v.verbs = (const struct hda_verb[]) {
+@@ -7983,6 +8062,39 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_THINKPAD_ACPI,
+ 	},
++	[ALC295_FIXUP_ASUS_DACS] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc295_fixup_asus_dacs,
++	},
++	[ALC295_FIXUP_HP_OMEN] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x12, 0xb7a60130 },
++			{ 0x13, 0x40000000 },
++			{ 0x14, 0x411111f0 },
++			{ 0x16, 0x411111f0 },
++			{ 0x17, 0x90170110 },
++			{ 0x18, 0x411111f0 },
++			{ 0x19, 0x02a11030 },
++			{ 0x1a, 0x411111f0 },
++			{ 0x1b, 0x04a19030 },
++			{ 0x1d, 0x40600001 },
++			{ 0x1e, 0x411111f0 },
++			{ 0x21, 0x03211020 },
++			{}
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HP_LINE1_MIC1_LED,
++	},
++	[ALC285_FIXUP_HP_SPECTRE_X360] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x14, 0x90170110 }, /* enable top speaker */
++			{}
++		},
++		.chained = true,
++		.chain_id = ALC285_FIXUP_SPEAKER2_TO_DAC1,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8141,7 +8253,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x82c0, "HP G3 mini premium", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++	SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++	SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
+ 	SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
+@@ -8181,6 +8295,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ 	SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
++	SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS),
+ 	SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+@@ -8198,6 +8313,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
++	SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+@@ -8254,12 +8370,19 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x50b8, "Clevo NK50SZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50d5, "Clevo NP50D5", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50f0, "Clevo NH50A[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x50f2, "Clevo NH50E[PR]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50f3, "Clevo NH58DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x50f5, "Clevo NH55EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x50f6, "Clevo NH55DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x5101, "Clevo S510WU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x5157, "Clevo W517GU1", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x51a1, "Clevo NS50MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x70f2, "Clevo NH79EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x70f3, "Clevo NH77DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -8277,9 +8400,17 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x8a51, "Clevo NH70RCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8d50, "Clevo NH55RCQ-M", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x951d, "Clevo N950T[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x9600, "Clevo N960K[PR]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x961d, "Clevo N960S[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x971d, "Clevo N970T[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xa500, "Clevo NL53RU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0xa600, "Clevo NL5XNU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0xb018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0xb019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0xb022, "Clevo NH77D[DC][QW]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0xc018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0xc019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0xc022, "Clevo NH77[DC][QW]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS),
+ 	SND_PCI_QUIRK(0x17aa, 0x1048, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE),
+@@ -8544,6 +8675,8 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
+ 	{.id = ALC274_FIXUP_HP_MIC, .name = "alc274-hp-mic-detect"},
+ 	{.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"},
++	{.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"},
++	{.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"},
+ 	{}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
+index 3349e455a871a..6fb6f36d0d377 100644
+--- a/sound/pci/intel8x0.c
++++ b/sound/pci/intel8x0.c
+@@ -354,6 +354,7 @@ struct ichdev {
+ 	unsigned int ali_slot;			/* ALI DMA slot */
+ 	struct ac97_pcm *pcm;
+ 	int pcm_open_flag;
++	unsigned int prepared:1;
+ 	unsigned int suspended: 1;
+ };
+ 
+@@ -714,6 +715,9 @@ static inline void snd_intel8x0_update(struct intel8x0 *chip, struct ichdev *ich
+ 	int status, civ, i, step;
+ 	int ack = 0;
+ 
++	if (!ichdev->prepared || ichdev->suspended)
++		return;
++
+ 	spin_lock_irqsave(&chip->reg_lock, flags);
+ 	status = igetbyte(chip, port + ichdev->roff_sr);
+ 	civ = igetbyte(chip, port + ICH_REG_OFF_CIV);
+@@ -904,6 +908,7 @@ static int snd_intel8x0_hw_params(struct snd_pcm_substream *substream,
+ 	if (ichdev->pcm_open_flag) {
+ 		snd_ac97_pcm_close(ichdev->pcm);
+ 		ichdev->pcm_open_flag = 0;
++		ichdev->prepared = 0;
+ 	}
+ 	err = snd_ac97_pcm_open(ichdev->pcm, params_rate(hw_params),
+ 				params_channels(hw_params),
+@@ -925,6 +930,7 @@ static int snd_intel8x0_hw_free(struct snd_pcm_substream *substream)
+ 	if (ichdev->pcm_open_flag) {
+ 		snd_ac97_pcm_close(ichdev->pcm);
+ 		ichdev->pcm_open_flag = 0;
++		ichdev->prepared = 0;
+ 	}
+ 	return 0;
+ }
+@@ -999,6 +1005,7 @@ static int snd_intel8x0_pcm_prepare(struct snd_pcm_substream *substream)
+ 			ichdev->pos_shift = (runtime->sample_bits > 16) ? 2 : 1;
+ 	}
+ 	snd_intel8x0_setup_periods(chip, ichdev);
++	ichdev->prepared = 1;
+ 	return 0;
+ }
+ 
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index a030dd65eb280..9602929b7de90 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -699,6 +699,10 @@ static int line6_init_cap_control(struct usb_line6 *line6)
+ 		line6->buffer_message = kmalloc(LINE6_MIDI_MESSAGE_MAXLEN, GFP_KERNEL);
+ 		if (!line6->buffer_message)
+ 			return -ENOMEM;
++
++		ret = line6_init_midi(line6);
++		if (ret < 0)
++			return ret;
+ 	} else {
+ 		ret = line6_hwdep_init(line6);
+ 		if (ret < 0)
+diff --git a/sound/usb/line6/pod.c b/sound/usb/line6/pod.c
+index cd44cb5f1310c..16e644330c4d6 100644
+--- a/sound/usb/line6/pod.c
++++ b/sound/usb/line6/pod.c
+@@ -376,11 +376,6 @@ static int pod_init(struct usb_line6 *line6,
+ 	if (err < 0)
+ 		return err;
+ 
+-	/* initialize MIDI subsystem: */
+-	err = line6_init_midi(line6);
+-	if (err < 0)
+-		return err;
+-
+ 	/* initialize PCM subsystem: */
+ 	err = line6_init_pcm(line6, &pod_pcm_properties);
+ 	if (err < 0)
+diff --git a/sound/usb/line6/variax.c b/sound/usb/line6/variax.c
+index ed158f04de80f..c2245aa93b08f 100644
+--- a/sound/usb/line6/variax.c
++++ b/sound/usb/line6/variax.c
+@@ -159,7 +159,6 @@ static int variax_init(struct usb_line6 *line6,
+ 		       const struct usb_device_id *id)
+ {
+ 	struct usb_line6_variax *variax = line6_to_variax(line6);
+-	int err;
+ 
+ 	line6->process_message = line6_variax_process_message;
+ 	line6->disconnect = line6_variax_disconnect;
+@@ -172,11 +171,6 @@ static int variax_init(struct usb_line6 *line6,
+ 	if (variax->buffer_activate == NULL)
+ 		return -ENOMEM;
+ 
+-	/* initialize MIDI subsystem: */
+-	err = line6_init_midi(&variax->line6);
+-	if (err < 0)
+-		return err;
+-
+ 	/* initiate startup procedure: */
+ 	schedule_delayed_work(&line6->startup_work,
+ 			      msecs_to_jiffies(VARIAX_STARTUP_DELAY1));
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index cd46ca7cd28de..fa91290ad89db 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1889,8 +1889,12 @@ static int snd_usbmidi_get_ms_info(struct snd_usb_midi *umidi,
+ 		ms_ep = find_usb_ms_endpoint_descriptor(hostep);
+ 		if (!ms_ep)
+ 			continue;
++		if (ms_ep->bLength <= sizeof(*ms_ep))
++			continue;
+ 		if (ms_ep->bNumEmbMIDIJack > 0x10)
+ 			continue;
++		if (ms_ep->bLength < sizeof(*ms_ep) + ms_ep->bNumEmbMIDIJack)
++			continue;
+ 		if (usb_endpoint_dir_out(ep)) {
+ 			if (endpoints[epidx].out_ep) {
+ 				if (++epidx >= MIDI_MAX_ENDPOINTS) {
+diff --git a/tools/testing/selftests/exec/Makefile b/tools/testing/selftests/exec/Makefile
+index cf69b2fcce59e..dd61118df66ed 100644
+--- a/tools/testing/selftests/exec/Makefile
++++ b/tools/testing/selftests/exec/Makefile
+@@ -28,8 +28,8 @@ $(OUTPUT)/execveat.denatured: $(OUTPUT)/execveat
+ 	cp $< $@
+ 	chmod -x $@
+ $(OUTPUT)/load_address_4096: load_address.c
+-	$(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x1000 -pie $< -o $@
++	$(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x1000 -pie -static $< -o $@
+ $(OUTPUT)/load_address_2097152: load_address.c
+-	$(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x200000 -pie $< -o $@
++	$(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x200000 -pie -static $< -o $@
+ $(OUTPUT)/load_address_16777216: load_address.c
+-	$(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x1000000 -pie $< -o $@
++	$(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x1000000 -pie -static $< -o $@
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 1b6c7d33c4ff2..dc21dc49b426f 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -1753,16 +1753,25 @@ TEST_F(TRACE_poke, getpid_runs_normally)
+ # define SYSCALL_RET_SET(_regs, _val)				\
+ 	do {							\
+ 		typeof(_val) _result = (_val);			\
+-		/*						\
+-		 * A syscall error is signaled by CR0 SO bit	\
+-		 * and the code is stored as a positive value.	\
+-		 */						\
+-		if (_result < 0) {				\
+-			SYSCALL_RET(_regs) = -_result;		\
+-			(_regs).ccr |= 0x10000000;		\
+-		} else {					\
++		if ((_regs.trap & 0xfff0) == 0x3000) {		\
++			/*					\
++			 * scv 0 system call uses -ve result	\
++			 * for error, so no need to adjust.	\
++			 */					\
+ 			SYSCALL_RET(_regs) = _result;		\
+-			(_regs).ccr &= ~0x10000000;		\
++		} else {					\
++			/*					\
++			 * A syscall error is signaled by the	\
++			 * CR0 SO bit and the code is stored as	\
++			 * a positive value.			\
++			 */					\
++			if (_result < 0) {			\
++				SYSCALL_RET(_regs) = -_result;	\
++				(_regs).ccr |= 0x10000000;	\
++			} else {				\
++				SYSCALL_RET(_regs) = _result;	\
++				(_regs).ccr &= ~0x10000000;	\
++			}					\
+ 		}						\
+ 	} while (0)
+ # define SYSCALL_RET_SET_ON_PTRACE_EXIT


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-05-28 12:15 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-05-28 12:15 UTC (permalink / raw
  To: gentoo-commits

commit:     582727a23f4f5b78c6ac19772e3dde0425b13bc5
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri May 28 12:14:39 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri May 28 12:14:52 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=582727a2

Linux patch 5.10.41

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |   4 +
 1040_linux-5.10.41.patch | 344 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 348 insertions(+)

diff --git a/0000_README b/0000_README
index 27b8de0..6130f63 100644
--- a/0000_README
+++ b/0000_README
@@ -203,6 +203,10 @@ Patch:  1039_linux-5.10.40.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.40
 
+Patch:  1040_linux-5.10.41.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.41
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1040_linux-5.10.41.patch b/1040_linux-5.10.41.patch
new file mode 100644
index 0000000..9699d6e
--- /dev/null
+++ b/1040_linux-5.10.41.patch
@@ -0,0 +1,344 @@
+diff --git a/Makefile b/Makefile
+index 42c915ccc5b80..81011c92dd46f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 40
++SUBLEVEL = 41
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index ca7a717477e70..9d4eb114613c2 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -3532,15 +3532,15 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+ 	 * have them in state 'on' as recorded before entering guest mode.
+ 	 * Same as enter_from_user_mode().
+ 	 *
+-	 * guest_exit_irqoff() restores host context and reinstates RCU if
+-	 * enabled and required.
++	 * context_tracking_guest_exit() restores host context and reinstates
++	 * RCU if enabled and required.
+ 	 *
+ 	 * This needs to be done before the below as native_read_msr()
+ 	 * contains a tracepoint and x86_spec_ctrl_restore_host() calls
+ 	 * into world and some more.
+ 	 */
+ 	lockdep_hardirqs_off(CALLER_ADDR0);
+-	guest_exit_irqoff();
++	context_tracking_guest_exit();
+ 
+ 	instrumentation_begin();
+ 	trace_hardirqs_off_finish();
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index d7f8d2167fda0..45877364e6829 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6640,15 +6640,15 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+ 	 * have them in state 'on' as recorded before entering guest mode.
+ 	 * Same as enter_from_user_mode().
+ 	 *
+-	 * guest_exit_irqoff() restores host context and reinstates RCU if
+-	 * enabled and required.
++	 * context_tracking_guest_exit() restores host context and reinstates
++	 * RCU if enabled and required.
+ 	 *
+ 	 * This needs to be done before the below as native_read_msr()
+ 	 * contains a tracepoint and x86_spec_ctrl_restore_host() calls
+ 	 * into world and some more.
+ 	 */
+ 	lockdep_hardirqs_off(CALLER_ADDR0);
+-	guest_exit_irqoff();
++	context_tracking_guest_exit();
+ 
+ 	instrumentation_begin();
+ 	trace_hardirqs_off_finish();
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index c071a83d543ae..7f767d59b09d3 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -9063,6 +9063,15 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 	local_irq_disable();
+ 	kvm_after_interrupt(vcpu);
+ 
++	/*
++	 * Wait until after servicing IRQs to account guest time so that any
++	 * ticks that occurred while running the guest are properly accounted
++	 * to the guest.  Waiting until IRQs are enabled degrades the accuracy
++	 * of accounting via context tracking, but the loss of accuracy is
++	 * acceptable for all known use cases.
++	 */
++	vtime_account_guest_exit();
++
+ 	if (lapic_in_kernel(vcpu)) {
+ 		s64 delta = vcpu->arch.apic->lapic_timer.advance_expire_delta;
+ 		if (delta != S64_MIN) {
+diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
+index d53cd331c4dd3..f5d127a5d819b 100644
+--- a/include/linux/context_tracking.h
++++ b/include/linux/context_tracking.h
+@@ -129,16 +129,26 @@ static __always_inline void guest_enter_irqoff(void)
+ 	}
+ }
+ 
+-static __always_inline void guest_exit_irqoff(void)
++static __always_inline void context_tracking_guest_exit(void)
+ {
+ 	if (context_tracking_enabled())
+ 		__context_tracking_exit(CONTEXT_GUEST);
++}
+ 
+-	instrumentation_begin();
++static __always_inline void vtime_account_guest_exit(void)
++{
+ 	if (vtime_accounting_enabled_this_cpu())
+ 		vtime_guest_exit(current);
+ 	else
+ 		current->flags &= ~PF_VCPU;
++}
++
++static __always_inline void guest_exit_irqoff(void)
++{
++	context_tracking_guest_exit();
++
++	instrumentation_begin();
++	vtime_account_guest_exit();
+ 	instrumentation_end();
+ }
+ 
+@@ -157,12 +167,19 @@ static __always_inline void guest_enter_irqoff(void)
+ 	instrumentation_end();
+ }
+ 
++static __always_inline void context_tracking_guest_exit(void) { }
++
++static __always_inline void vtime_account_guest_exit(void)
++{
++	vtime_account_kernel(current);
++	current->flags &= ~PF_VCPU;
++}
++
+ static __always_inline void guest_exit_irqoff(void)
+ {
+ 	instrumentation_begin();
+ 	/* Flush the guest cputime we spent on the guest */
+-	vtime_account_kernel(current);
+-	current->flags &= ~PF_VCPU;
++	vtime_account_guest_exit();
+ 	instrumentation_end();
+ }
+ #endif /* CONFIG_VIRT_CPU_ACCOUNTING_GEN */
+diff --git a/include/net/nfc/nci_core.h b/include/net/nfc/nci_core.h
+index 43c9c5d2bedbd..33979017b7824 100644
+--- a/include/net/nfc/nci_core.h
++++ b/include/net/nfc/nci_core.h
+@@ -298,6 +298,7 @@ int nci_nfcc_loopback(struct nci_dev *ndev, void *data, size_t data_len,
+ 		      struct sk_buff **resp);
+ 
+ struct nci_hci_dev *nci_hci_allocate(struct nci_dev *ndev);
++void nci_hci_deallocate(struct nci_dev *ndev);
+ int nci_hci_send_event(struct nci_dev *ndev, u8 gate, u8 event,
+ 		       const u8 *param, size_t param_len);
+ int nci_hci_send_cmd(struct nci_dev *ndev, u8 gate,
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 69730943eaf80..364b9760d1a73 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5666,18 +5666,10 @@ enum {
+ };
+ 
+ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
+-			      const struct bpf_reg_state *off_reg,
+-			      u32 *alu_limit, u8 opcode)
++			      u32 *alu_limit, bool mask_to_left)
+ {
+-	bool off_is_neg = off_reg->smin_value < 0;
+-	bool mask_to_left = (opcode == BPF_ADD &&  off_is_neg) ||
+-			    (opcode == BPF_SUB && !off_is_neg);
+ 	u32 max = 0, ptr_limit = 0;
+ 
+-	if (!tnum_is_const(off_reg->var_off) &&
+-	    (off_reg->smin_value < 0) != (off_reg->smax_value < 0))
+-		return REASON_BOUNDS;
+-
+ 	switch (ptr_reg->type) {
+ 	case PTR_TO_STACK:
+ 		/* Offset 0 is out-of-bounds, but acceptable start for the
+@@ -5743,15 +5735,20 @@ static bool sanitize_needed(u8 opcode)
+ 	return opcode == BPF_ADD || opcode == BPF_SUB;
+ }
+ 
++struct bpf_sanitize_info {
++	struct bpf_insn_aux_data aux;
++	bool mask_to_left;
++};
++
+ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 			    struct bpf_insn *insn,
+ 			    const struct bpf_reg_state *ptr_reg,
+ 			    const struct bpf_reg_state *off_reg,
+ 			    struct bpf_reg_state *dst_reg,
+-			    struct bpf_insn_aux_data *tmp_aux,
++			    struct bpf_sanitize_info *info,
+ 			    const bool commit_window)
+ {
+-	struct bpf_insn_aux_data *aux = commit_window ? cur_aux(env) : tmp_aux;
++	struct bpf_insn_aux_data *aux = commit_window ? cur_aux(env) : &info->aux;
+ 	struct bpf_verifier_state *vstate = env->cur_state;
+ 	bool off_is_imm = tnum_is_const(off_reg->var_off);
+ 	bool off_is_neg = off_reg->smin_value < 0;
+@@ -5772,7 +5769,16 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 	if (vstate->speculative)
+ 		goto do_sim;
+ 
+-	err = retrieve_ptr_limit(ptr_reg, off_reg, &alu_limit, opcode);
++	if (!commit_window) {
++		if (!tnum_is_const(off_reg->var_off) &&
++		    (off_reg->smin_value < 0) != (off_reg->smax_value < 0))
++			return REASON_BOUNDS;
++
++		info->mask_to_left = (opcode == BPF_ADD &&  off_is_neg) ||
++				     (opcode == BPF_SUB && !off_is_neg);
++	}
++
++	err = retrieve_ptr_limit(ptr_reg, &alu_limit, info->mask_to_left);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -5780,8 +5786,8 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 		/* In commit phase we narrow the masking window based on
+ 		 * the observed pointer move after the simulated operation.
+ 		 */
+-		alu_state = tmp_aux->alu_state;
+-		alu_limit = abs(tmp_aux->alu_limit - alu_limit);
++		alu_state = info->aux.alu_state;
++		alu_limit = abs(info->aux.alu_limit - alu_limit);
+ 	} else {
+ 		alu_state  = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
+ 		alu_state |= off_is_imm ? BPF_ALU_IMMEDIATE : 0;
+@@ -5796,8 +5802,12 @@ do_sim:
+ 	/* If we're in commit phase, we're done here given we already
+ 	 * pushed the truncated dst_reg into the speculative verification
+ 	 * stack.
++	 *
++	 * Also, when register is a known constant, we rewrite register-based
++	 * operation to immediate-based, and thus do not need masking (and as
++	 * a consequence, do not need to simulate the zero-truncation either).
+ 	 */
+-	if (commit_window)
++	if (commit_window || off_is_imm)
+ 		return 0;
+ 
+ 	/* Simulate and find potential out-of-bounds access under
+@@ -5942,7 +5952,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 	    smin_ptr = ptr_reg->smin_value, smax_ptr = ptr_reg->smax_value;
+ 	u64 umin_val = off_reg->umin_value, umax_val = off_reg->umax_value,
+ 	    umin_ptr = ptr_reg->umin_value, umax_ptr = ptr_reg->umax_value;
+-	struct bpf_insn_aux_data tmp_aux = {};
++	struct bpf_sanitize_info info = {};
+ 	u8 opcode = BPF_OP(insn->code);
+ 	u32 dst = insn->dst_reg;
+ 	int ret;
+@@ -6011,7 +6021,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 
+ 	if (sanitize_needed(opcode)) {
+ 		ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg,
+-				       &tmp_aux, false);
++				       &info, false);
+ 		if (ret < 0)
+ 			return sanitize_err(env, insn, ret, off_reg, dst_reg);
+ 	}
+@@ -6152,7 +6162,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		return -EACCES;
+ 	if (sanitize_needed(opcode)) {
+ 		ret = sanitize_ptr_alu(env, insn, dst_reg, off_reg, dst_reg,
+-				       &tmp_aux, true);
++				       &info, true);
+ 		if (ret < 0)
+ 			return sanitize_err(env, insn, ret, off_reg, dst_reg);
+ 	}
+diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
+index 741da8f81c2b8..32e8154363cab 100644
+--- a/net/nfc/nci/core.c
++++ b/net/nfc/nci/core.c
+@@ -1175,6 +1175,7 @@ EXPORT_SYMBOL(nci_allocate_device);
+ void nci_free_device(struct nci_dev *ndev)
+ {
+ 	nfc_free_device(ndev->nfc_dev);
++	nci_hci_deallocate(ndev);
+ 	kfree(ndev);
+ }
+ EXPORT_SYMBOL(nci_free_device);
+diff --git a/net/nfc/nci/hci.c b/net/nfc/nci/hci.c
+index c18e76d6d8ba0..04e55ccb33836 100644
+--- a/net/nfc/nci/hci.c
++++ b/net/nfc/nci/hci.c
+@@ -795,3 +795,8 @@ struct nci_hci_dev *nci_hci_allocate(struct nci_dev *ndev)
+ 
+ 	return hdev;
+ }
++
++void nci_hci_deallocate(struct nci_dev *ndev)
++{
++	kfree(ndev->hci_dev);
++}
+diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
+index 7a3dbc259cecc..a74b517f74974 100644
+--- a/tools/perf/util/unwind-libdw.c
++++ b/tools/perf/util/unwind-libdw.c
+@@ -20,10 +20,24 @@
+ 
+ static char *debuginfo_path;
+ 
++static int __find_debuginfo(Dwfl_Module *mod __maybe_unused, void **userdata,
++			    const char *modname __maybe_unused, Dwarf_Addr base __maybe_unused,
++			    const char *file_name, const char *debuglink_file __maybe_unused,
++			    GElf_Word debuglink_crc __maybe_unused, char **debuginfo_file_name)
++{
++	const struct dso *dso = *userdata;
++
++	assert(dso);
++	if (dso->symsrc_filename && strcmp (file_name, dso->symsrc_filename))
++		*debuginfo_file_name = strdup(dso->symsrc_filename);
++	return -1;
++}
++
+ static const Dwfl_Callbacks offline_callbacks = {
+-	.find_debuginfo		= dwfl_standard_find_debuginfo,
++	.find_debuginfo		= __find_debuginfo,
+ 	.debuginfo_path		= &debuginfo_path,
+ 	.section_address	= dwfl_offline_section_address,
++	// .find_elf is not set as we use dwfl_report_elf() instead.
+ };
+ 
+ static int __report_module(struct addr_location *al, u64 ip,
+@@ -53,9 +67,22 @@ static int __report_module(struct addr_location *al, u64 ip,
+ 	}
+ 
+ 	if (!mod)
+-		mod = dwfl_report_elf(ui->dwfl, dso->short_name,
+-				      (dso->symsrc_filename ? dso->symsrc_filename : dso->long_name), -1, al->map->start - al->map->pgoff,
+-				      false);
++		mod = dwfl_report_elf(ui->dwfl, dso->short_name, dso->long_name, -1,
++				      al->map->start - al->map->pgoff, false);
++	if (!mod) {
++		char filename[PATH_MAX];
++
++		if (dso__build_id_filename(dso, filename, sizeof(filename), false))
++			mod = dwfl_report_elf(ui->dwfl, dso->short_name, filename, -1,
++					      al->map->start - al->map->pgoff, false);
++	}
++
++	if (mod) {
++		void **userdatap;
++
++		dwfl_module_info(mod, &userdatap, NULL, NULL, NULL, NULL, NULL, NULL);
++		*userdatap = dso;
++	}
+ 
+ 	return mod && dwfl_addrmodule(ui->dwfl, ip) == mod ? 0 : -1;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-06-03 10:26 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-06-03 10:26 UTC (permalink / raw
  To: gentoo-commits

commit:     443db2b6aaa282d4532502a3914ce4395ef58694
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Jun  3 10:25:47 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Jun  3 10:26:00 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=443db2b6

Linux patch 5.10.42

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1041_linux-5.10.42.patch | 8349 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 8353 insertions(+)

diff --git a/0000_README b/0000_README
index 6130f63..fb74799 100644
--- a/0000_README
+++ b/0000_README
@@ -207,6 +207,10 @@ Patch:  1040_linux-5.10.41.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.41
 
+Patch:  1041_linux-5.10.42.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.42
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1041_linux-5.10.42.patch b/1041_linux-5.10.42.patch
new file mode 100644
index 0000000..606cd82
--- /dev/null
+++ b/1041_linux-5.10.42.patch
@@ -0,0 +1,8349 @@
+diff --git a/Documentation/userspace-api/seccomp_filter.rst b/Documentation/userspace-api/seccomp_filter.rst
+index bd9165241b6c8..6efb41cc80725 100644
+--- a/Documentation/userspace-api/seccomp_filter.rst
++++ b/Documentation/userspace-api/seccomp_filter.rst
+@@ -250,14 +250,14 @@ Users can read via ``ioctl(SECCOMP_IOCTL_NOTIF_RECV)``  (or ``poll()``) on a
+ seccomp notification fd to receive a ``struct seccomp_notif``, which contains
+ five members: the input length of the structure, a unique-per-filter ``id``,
+ the ``pid`` of the task which triggered this request (which may be 0 if the
+-task is in a pid ns not visible from the listener's pid namespace), a ``flags``
+-member which for now only has ``SECCOMP_NOTIF_FLAG_SIGNALED``, representing
+-whether or not the notification is a result of a non-fatal signal, and the
+-``data`` passed to seccomp. Userspace can then make a decision based on this
+-information about what to do, and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a
+-response, indicating what should be returned to userspace. The ``id`` member of
+-``struct seccomp_notif_resp`` should be the same ``id`` as in ``struct
+-seccomp_notif``.
++task is in a pid ns not visible from the listener's pid namespace). The
++notification also contains the ``data`` passed to seccomp, and a filters flag.
++The structure should be zeroed out prior to calling the ioctl.
++
++Userspace can then make a decision based on this information about what to do,
++and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a response, indicating what should be
++returned to userspace. The ``id`` member of ``struct seccomp_notif_resp`` should
++be the same ``id`` as in ``struct seccomp_notif``.
+ 
+ It is worth noting that ``struct seccomp_data`` contains the values of register
+ arguments to the syscall, but does not contain pointers to memory. The task's
+diff --git a/Makefile b/Makefile
+index 81011c92dd46f..290903d0e7dab 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 41
++SUBLEVEL = 42
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
+index 00bc6f1234ba3..472122d731b0d 100644
+--- a/arch/arm64/include/asm/kvm_emulate.h
++++ b/arch/arm64/include/asm/kvm_emulate.h
+@@ -505,4 +505,9 @@ static __always_inline void __kvm_skip_instr(struct kvm_vcpu *vcpu)
+ 	write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
+ }
+ 
++static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature)
++{
++	return test_bit(feature, vcpu->arch.features);
++}
++
+ #endif /* __ARM64_KVM_EMULATE_H__ */
+diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
+index 53a127d3e460b..b969c2157ad2e 100644
+--- a/arch/arm64/kvm/reset.c
++++ b/arch/arm64/kvm/reset.c
+@@ -223,6 +223,25 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
+ 	return 0;
+ }
+ 
++static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
++{
++	struct kvm_vcpu *tmp;
++	bool is32bit;
++	int i;
++
++	is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
++	if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1) && is32bit)
++		return false;
++
++	/* Check that the vcpus are either all 32bit or all 64bit */
++	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
++		if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
++			return false;
++	}
++
++	return true;
++}
++
+ /**
+  * kvm_reset_vcpu - sets core registers and sys_regs to reset value
+  * @vcpu: The VCPU pointer
+@@ -274,13 +293,14 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ 		}
+ 	}
+ 
++	if (!vcpu_allowed_register_width(vcpu)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	switch (vcpu->arch.target) {
+ 	default:
+ 		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
+-			if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1)) {
+-				ret = -EINVAL;
+-				goto out;
+-			}
+ 			pstate = VCPU_RESET_PSTATE_SVC;
+ 		} else {
+ 			pstate = VCPU_RESET_PSTATE_EL1;
+diff --git a/arch/mips/alchemy/board-xxs1500.c b/arch/mips/alchemy/board-xxs1500.c
+index b184baa4e56a6..f175bce2987fa 100644
+--- a/arch/mips/alchemy/board-xxs1500.c
++++ b/arch/mips/alchemy/board-xxs1500.c
+@@ -18,6 +18,7 @@
+ #include <asm/reboot.h>
+ #include <asm/setup.h>
+ #include <asm/mach-au1x00/au1000.h>
++#include <asm/mach-au1x00/gpio-au1000.h>
+ #include <prom.h>
+ 
+ const char *get_system_type(void)
+diff --git a/arch/mips/ralink/of.c b/arch/mips/ralink/of.c
+index cbae9d23ab7ff..a971f1aca096c 100644
+--- a/arch/mips/ralink/of.c
++++ b/arch/mips/ralink/of.c
+@@ -8,6 +8,7 @@
+ 
+ #include <linux/io.h>
+ #include <linux/clk.h>
++#include <linux/export.h>
+ #include <linux/init.h>
+ #include <linux/sizes.h>
+ #include <linux/of_fdt.h>
+@@ -25,6 +26,7 @@
+ 
+ __iomem void *rt_sysc_membase;
+ __iomem void *rt_memc_membase;
++EXPORT_SYMBOL_GPL(rt_sysc_membase);
+ 
+ __iomem void *plat_of_remap_node(const char *node)
+ {
+diff --git a/arch/openrisc/include/asm/barrier.h b/arch/openrisc/include/asm/barrier.h
+new file mode 100644
+index 0000000000000..7538294721bed
+--- /dev/null
++++ b/arch/openrisc/include/asm/barrier.h
+@@ -0,0 +1,9 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __ASM_BARRIER_H
++#define __ASM_BARRIER_H
++
++#define mb() asm volatile ("l.msync" ::: "memory")
++
++#include <asm-generic/barrier.h>
++
++#endif /* __ASM_BARRIER_H */
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 7f767d59b09d3..109041630d30b 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3006,6 +3006,8 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
+ 				       st->preempted & KVM_VCPU_FLUSH_TLB);
+ 		if (xchg(&st->preempted, 0) & KVM_VCPU_FLUSH_TLB)
+ 			kvm_vcpu_flush_tlb_guest(vcpu);
++	} else {
++		st->preempted = 0;
+ 	}
+ 
+ 	vcpu->arch.st.preempted = 0;
+diff --git a/drivers/acpi/acpi_apd.c b/drivers/acpi/acpi_apd.c
+index 39359ce0eb2c0..645e82a66bb07 100644
+--- a/drivers/acpi/acpi_apd.c
++++ b/drivers/acpi/acpi_apd.c
+@@ -226,6 +226,7 @@ static const struct acpi_device_id acpi_apd_device_ids[] = {
+ 	{ "AMDI0010", APD_ADDR(wt_i2c_desc) },
+ 	{ "AMD0020", APD_ADDR(cz_uart_desc) },
+ 	{ "AMDI0020", APD_ADDR(cz_uart_desc) },
++	{ "AMDI0022", APD_ADDR(cz_uart_desc) },
+ 	{ "AMD0030", },
+ 	{ "AMD0040", APD_ADDR(fch_misc_desc)},
+ 	{ "HYGO0010", APD_ADDR(wt_i2c_desc) },
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 96f73aaf71da3..8b3c3fcf35a96 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -83,6 +83,11 @@ int device_links_read_lock_held(void)
+ {
+ 	return srcu_read_lock_held(&device_links_srcu);
+ }
++
++static void device_link_synchronize_removal(void)
++{
++	synchronize_srcu(&device_links_srcu);
++}
+ #else /* !CONFIG_SRCU */
+ static DECLARE_RWSEM(device_links_lock);
+ 
+@@ -113,6 +118,10 @@ int device_links_read_lock_held(void)
+ 	return lockdep_is_held(&device_links_lock);
+ }
+ #endif
++
++static inline void device_link_synchronize_removal(void)
++{
++}
+ #endif /* !CONFIG_SRCU */
+ 
+ static bool device_is_ancestor(struct device *dev, struct device *target)
+@@ -332,8 +341,13 @@ static struct attribute *devlink_attrs[] = {
+ };
+ ATTRIBUTE_GROUPS(devlink);
+ 
+-static void device_link_free(struct device_link *link)
++static void device_link_release_fn(struct work_struct *work)
+ {
++	struct device_link *link = container_of(work, struct device_link, rm_work);
++
++	/* Ensure that all references to the link object have been dropped. */
++	device_link_synchronize_removal();
++
+ 	while (refcount_dec_not_one(&link->rpm_active))
+ 		pm_runtime_put(link->supplier);
+ 
+@@ -342,24 +356,19 @@ static void device_link_free(struct device_link *link)
+ 	kfree(link);
+ }
+ 
+-#ifdef CONFIG_SRCU
+-static void __device_link_free_srcu(struct rcu_head *rhead)
+-{
+-	device_link_free(container_of(rhead, struct device_link, rcu_head));
+-}
+-
+ static void devlink_dev_release(struct device *dev)
+ {
+ 	struct device_link *link = to_devlink(dev);
+ 
+-	call_srcu(&device_links_srcu, &link->rcu_head, __device_link_free_srcu);
+-}
+-#else
+-static void devlink_dev_release(struct device *dev)
+-{
+-	device_link_free(to_devlink(dev));
++	INIT_WORK(&link->rm_work, device_link_release_fn);
++	/*
++	 * It may take a while to complete this work because of the SRCU
++	 * synchronization in device_link_release_fn() and if the consumer or
++	 * supplier devices get deleted when it runs, so put it into the "long"
++	 * workqueue.
++	 */
++	queue_work(system_long_wq, &link->rm_work);
+ }
+-#endif
+ 
+ static struct class devlink_class = {
+ 	.name = "devlink",
+diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
+index ed3b7dab678db..8b55085650ad0 100644
+--- a/drivers/char/hpet.c
++++ b/drivers/char/hpet.c
+@@ -984,6 +984,8 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
+ 		hdp->hd_phys_address = fixmem32->address;
+ 		hdp->hd_address = ioremap(fixmem32->address,
+ 						HPET_RANGE_SIZE);
++		if (!hdp->hd_address)
++			return AE_ERROR;
+ 
+ 		if (hpet_is_known(hdp)) {
+ 			iounmap(hdp->hd_address);
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_main.c b/drivers/crypto/cavium/nitrox/nitrox_main.c
+index 9d14be97e3819..cee2a2713038d 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_main.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_main.c
+@@ -451,7 +451,6 @@ static int nitrox_probe(struct pci_dev *pdev,
+ 	err = pci_request_mem_regions(pdev, nitrox_driver_name);
+ 	if (err) {
+ 		pci_disable_device(pdev);
+-		dev_err(&pdev->dev, "Failed to request mem regions!\n");
+ 		return err;
+ 	}
+ 	pci_set_master(pdev);
+diff --git a/drivers/dma/qcom/hidma_mgmt.c b/drivers/dma/qcom/hidma_mgmt.c
+index 806ca02c52d71..62026607f3f8b 100644
+--- a/drivers/dma/qcom/hidma_mgmt.c
++++ b/drivers/dma/qcom/hidma_mgmt.c
+@@ -418,8 +418,23 @@ static int __init hidma_mgmt_init(void)
+ 		hidma_mgmt_of_populate_channels(child);
+ 	}
+ #endif
+-	return platform_driver_register(&hidma_mgmt_driver);
++	/*
++	 * We do not check for return value here, as it is assumed that
++	 * platform_driver_register must not fail. The reason for this is that
++	 * the (potential) hidma_mgmt_of_populate_channels calls above are not
++	 * cleaned up if it does fail, and to do this work is quite
++	 * complicated. In particular, various calls of of_address_to_resource,
++	 * of_irq_to_resource, platform_device_register_full, of_dma_configure,
++	 * and of_msi_configure which then call other functions and so on, must
++	 * be cleaned up - this is not a trivial exercise.
++	 *
++	 * Currently, this module is not intended to be unloaded, and there is
++	 * no module_exit function defined which does the needed cleanup. For
++	 * this reason, we have to assume success here.
++	 */
++	platform_driver_register(&hidma_mgmt_driver);
+ 
++	return 0;
+ }
+ module_init(hidma_mgmt_init);
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/gpio/gpio-cadence.c b/drivers/gpio/gpio-cadence.c
+index a4d3239d25944..4ab3fcd9b9ba6 100644
+--- a/drivers/gpio/gpio-cadence.c
++++ b/drivers/gpio/gpio-cadence.c
+@@ -278,6 +278,7 @@ static const struct of_device_id cdns_of_ids[] = {
+ 	{ .compatible = "cdns,gpio-r1p02" },
+ 	{ /* sentinel */ },
+ };
++MODULE_DEVICE_TABLE(of, cdns_of_ids);
+ 
+ static struct platform_driver cdns_gpio_driver = {
+ 	.driver = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c
+index 50016bf9c4279..8346caec771f6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c
+@@ -157,16 +157,16 @@ static uint32_t get_sdma_rlc_reg_offset(struct amdgpu_device *adev,
+ 				mmSDMA0_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
+ 		break;
+ 	case 1:
+-		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA1, 0,
++		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0,
+ 				mmSDMA1_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
+ 		break;
+ 	case 2:
+-		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA2, 0,
+-				mmSDMA2_RLC0_RB_CNTL) - mmSDMA2_RLC0_RB_CNTL;
++		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0,
++				mmSDMA2_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
+ 		break;
+ 	case 3:
+-		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA3, 0,
+-				mmSDMA3_RLC0_RB_CNTL) - mmSDMA2_RLC0_RB_CNTL;
++		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0,
++				mmSDMA3_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
+ 		break;
+ 	}
+ 
+@@ -451,7 +451,7 @@ static int hqd_sdma_dump_v10_3(struct kgd_dev *kgd,
+ 			engine_id, queue_id);
+ 	uint32_t i = 0, reg;
+ #undef HQD_N_REGS
+-#define HQD_N_REGS (19+6+7+10)
++#define HQD_N_REGS (19+6+7+12)
+ 
+ 	*dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL);
+ 	if (*dump == NULL)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 7f2689d4b86da..87c7c45f1bb73 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4368,7 +4368,6 @@ out:
+ 			r = amdgpu_ib_ring_tests(tmp_adev);
+ 			if (r) {
+ 				dev_err(tmp_adev->dev, "ib ring test failed (%d).\n", r);
+-				r = amdgpu_device_ip_suspend(tmp_adev);
+ 				need_full_reset = true;
+ 				r = -EAGAIN;
+ 				goto end;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+index 1ea8af48ae2f5..43f29ee0e3b06 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+@@ -289,10 +289,13 @@ out:
+ static int amdgpu_fbdev_destroy(struct drm_device *dev, struct amdgpu_fbdev *rfbdev)
+ {
+ 	struct amdgpu_framebuffer *rfb = &rfbdev->rfb;
++	int i;
+ 
+ 	drm_fb_helper_unregister_fbi(&rfbdev->helper);
+ 
+ 	if (rfb->base.obj[0]) {
++		for (i = 0; i < rfb->base.format->num_planes; i++)
++			drm_gem_object_put(rfb->base.obj[0]);
+ 		amdgpufb_destroy_pinned_object(rfb->base.obj[0]);
+ 		rfb->base.obj[0] = NULL;
+ 		drm_framebuffer_unregister_private(&rfb->base);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index 532250c2b19ee..5207ad654f18e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -1381,6 +1381,7 @@ static void amdgpu_ttm_tt_unpopulate(struct ttm_bo_device *bdev, struct ttm_tt *
+ 	if (gtt && gtt->userptr) {
+ 		amdgpu_ttm_tt_set_user_pages(ttm, NULL);
+ 		kfree(ttm->sg);
++		ttm->sg = NULL;
+ 		ttm->page_flags &= ~TTM_PAGE_FLAG_SG;
+ 		return;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+index 94caf5204c8bd..ae8c0f897d59b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+@@ -172,6 +172,8 @@ static int jpeg_v2_0_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	cancel_delayed_work_sync(&adev->vcn.idle_work);
++
+ 	if (adev->jpeg.cur_state != AMD_PG_STATE_GATE &&
+ 	      RREG32_SOC15(JPEG, 0, mmUVD_JRBC_STATUS))
+ 		jpeg_v2_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+index 845306f63cdb4..63b3501823898 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+@@ -198,8 +198,6 @@ static int jpeg_v2_5_hw_fini(void *handle)
+ 		if (adev->jpeg.cur_state != AMD_PG_STATE_GATE &&
+ 		      RREG32_SOC15(JPEG, i, mmUVD_JRBC_STATUS))
+ 			jpeg_v2_5_set_powergating_state(adev, AMD_PG_STATE_GATE);
+-
+-		ring->sched.ready = false;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+index 3a0dff53654df..9259e35f0f55a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+@@ -166,8 +166,6 @@ static int jpeg_v3_0_hw_fini(void *handle)
+ 	      RREG32_SOC15(JPEG, 0, mmUVD_JRBC_STATUS))
+ 		jpeg_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
+ 
+-	ring->sched.ready = false;
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+index 2a485052e3abe..1bd330d431479 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+@@ -476,11 +476,6 @@ static void sdma_v5_2_gfx_stop(struct amdgpu_device *adev)
+ 		ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, 0);
+ 		WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_IB_CNTL), ib_cntl);
+ 	}
+-
+-	sdma0->sched.ready = false;
+-	sdma1->sched.ready = false;
+-	sdma2->sched.ready = false;
+-	sdma3->sched.ready = false;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+index 86e1ef732ebec..aa8ae0ca62f91 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+@@ -232,9 +232,13 @@ static int vcn_v1_0_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	cancel_delayed_work_sync(&adev->vcn.idle_work);
++
+ 	if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) ||
+-		RREG32_SOC15(VCN, 0, mmUVD_STATUS))
++		(adev->vcn.cur_state != AMD_PG_STATE_GATE &&
++		 RREG32_SOC15(VCN, 0, mmUVD_STATUS))) {
+ 		vcn_v1_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+index e5d29dee0c882..fc939d4f4841e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+@@ -262,6 +262,8 @@ static int vcn_v2_0_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	cancel_delayed_work_sync(&adev->vcn.idle_work);
++
+ 	if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) ||
+ 	    (adev->vcn.cur_state != AMD_PG_STATE_GATE &&
+ 	      RREG32_SOC15(VCN, 0, mmUVD_STATUS)))
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+index 0f1d3ef8baa72..2c328362eee3c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+@@ -321,6 +321,8 @@ static int vcn_v2_5_hw_fini(void *handle)
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 	int i;
+ 
++	cancel_delayed_work_sync(&adev->vcn.idle_work);
++
+ 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+ 		if (adev->vcn.harvest_config & (1 << i))
+ 			continue;
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index b5f8f3d731cb0..700621ddc02e2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -346,7 +346,7 @@ static int vcn_v3_0_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 	struct amdgpu_ring *ring;
+-	int i, j;
++	int i;
+ 
+ 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+ 		if (adev->vcn.harvest_config & (1 << i))
+@@ -361,12 +361,6 @@ static int vcn_v3_0_hw_fini(void *handle)
+ 				vcn_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
+ 			}
+ 		}
+-		ring->sched.ready = false;
+-
+-		for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
+-			ring = &adev->vcn.inst[i].ring_enc[j];
+-			ring->sched.ready = false;
+-		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index f0039599e02f7..62778ccea055b 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -1049,6 +1049,24 @@ static bool dc_link_detect_helper(struct dc_link *link,
+ 			    dc_is_dvi_signal(link->connector_signal)) {
+ 				if (prev_sink)
+ 					dc_sink_release(prev_sink);
++				link_disconnect_sink(link);
++
++				return false;
++			}
++			/*
++			 * Abort detection for DP connectors if we have
++			 * no EDID and connector is active converter
++			 * as there are no display downstream
++			 *
++			 */
++			if (dc_is_dp_sst_signal(link->connector_signal) &&
++				(link->dpcd_caps.dongle_type ==
++						DISPLAY_DONGLE_DP_VGA_CONVERTER ||
++				link->dpcd_caps.dongle_type ==
++						DISPLAY_DONGLE_DP_DVI_CONVERTER)) {
++				if (prev_sink)
++					dc_sink_release(prev_sink);
++				link_disconnect_sink(link);
+ 
+ 				return false;
+ 			}
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index 52df6202a9543..2937784bc8249 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -2606,6 +2606,8 @@ static ssize_t navi10_get_gpu_metrics(struct smu_context *smu,
+ 
+ static int navi10_enable_mgpu_fan_boost(struct smu_context *smu)
+ {
++	struct smu_table_context *table_context = &smu->smu_table;
++	PPTable_t *smc_pptable = table_context->driver_pptable;
+ 	struct amdgpu_device *adev = smu->adev;
+ 	uint32_t param = 0;
+ 
+@@ -2613,6 +2615,13 @@ static int navi10_enable_mgpu_fan_boost(struct smu_context *smu)
+ 	if (adev->asic_type == CHIP_NAVI12)
+ 		return 0;
+ 
++	/*
++	 * Skip the MGpuFanBoost setting for those ASICs
++	 * which do not support it
++	 */
++	if (!smc_pptable->MGpuFanBoostLimitRpm)
++		return 0;
++
+ 	/* Workaround for WS SKU */
+ 	if (adev->pdev->device == 0x7312 &&
+ 	    adev->pdev->revision == 0)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 471bbb78884b5..8556c229ff598 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -2715,6 +2715,16 @@ static ssize_t sienna_cichlid_get_gpu_metrics(struct smu_context *smu,
+ 
+ static int sienna_cichlid_enable_mgpu_fan_boost(struct smu_context *smu)
+ {
++	struct smu_table_context *table_context = &smu->smu_table;
++	PPTable_t *smc_pptable = table_context->driver_pptable;
++
++	/*
++	 * Skip the MGpuFanBoost setting for those ASICs
++	 * which do not support it
++	 */
++	if (!smc_pptable->MGpuFanBoostLimitRpm)
++		return 0;
++
+ 	return smu_cmn_send_smc_msg_with_param(smu,
+ 					       SMU_MSG_SetMGpuFanBoostLimitRpm,
+ 					       0,
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 6f52f81339242..eb02ecb6e345a 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -4136,7 +4136,7 @@ static void chv_dp_post_pll_disable(struct intel_atomic_state *state,
+  * link status information
+  */
+ bool
+-intel_dp_get_link_status(struct intel_dp *intel_dp, u8 link_status[DP_LINK_STATUS_SIZE])
++intel_dp_get_link_status(struct intel_dp *intel_dp, u8 *link_status)
+ {
+ 	return drm_dp_dpcd_read(&intel_dp->aux, DP_LANE0_1_STATUS, link_status,
+ 				DP_LINK_STATUS_SIZE) == DP_LINK_STATUS_SIZE;
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index db56732bdd260..2753067c08e68 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -485,11 +485,12 @@ static int meson_probe_remote(struct platform_device *pdev,
+ static void meson_drv_shutdown(struct platform_device *pdev)
+ {
+ 	struct meson_drm *priv = dev_get_drvdata(&pdev->dev);
+-	struct drm_device *drm = priv->drm;
+ 
+-	DRM_DEBUG_DRIVER("\n");
+-	drm_kms_helper_poll_fini(drm);
+-	drm_atomic_helper_shutdown(drm);
++	if (!priv)
++		return;
++
++	drm_kms_helper_poll_fini(priv->drm);
++	drm_atomic_helper_shutdown(priv->drm);
+ }
+ 
+ static int meson_drv_probe(struct platform_device *pdev)
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 877fe3733a42b..e42b87e96f747 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -391,11 +391,9 @@ static int i801_check_post(struct i801_priv *priv, int status)
+ 		dev_err(&priv->pci_dev->dev, "Transaction timeout\n");
+ 		/* try to stop the current command */
+ 		dev_dbg(&priv->pci_dev->dev, "Terminating the current operation\n");
+-		outb_p(inb_p(SMBHSTCNT(priv)) | SMBHSTCNT_KILL,
+-		       SMBHSTCNT(priv));
++		outb_p(SMBHSTCNT_KILL, SMBHSTCNT(priv));
+ 		usleep_range(1000, 2000);
+-		outb_p(inb_p(SMBHSTCNT(priv)) & (~SMBHSTCNT_KILL),
+-		       SMBHSTCNT(priv));
++		outb_p(0, SMBHSTCNT(priv));
+ 
+ 		/* Check if it worked */
+ 		status = inb_p(SMBHSTSTS(priv));
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index bf25acba2ed53..dcde71ae63419 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -478,6 +478,11 @@ static void mtk_i2c_clock_disable(struct mtk_i2c *i2c)
+ static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
+ {
+ 	u16 control_reg;
++	u16 intr_stat_reg;
++
++	mtk_i2c_writew(i2c, I2C_CHN_CLR_FLAG, OFFSET_START);
++	intr_stat_reg = mtk_i2c_readw(i2c, OFFSET_INTR_STAT);
++	mtk_i2c_writew(i2c, intr_stat_reg, OFFSET_INTR_STAT);
+ 
+ 	if (i2c->dev_comp->apdma_sync) {
+ 		writel(I2C_DMA_WARM_RST, i2c->pdmabase + OFFSET_RST);
+diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c
+index 3eafe0eb3e4cc..40fa9e4af5d1c 100644
+--- a/drivers/i2c/busses/i2c-s3c2410.c
++++ b/drivers/i2c/busses/i2c-s3c2410.c
+@@ -483,7 +483,10 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
+ 					 * forces us to send a new START
+ 					 * when we change direction
+ 					 */
++					dev_dbg(i2c->dev,
++						"missing START before write->read\n");
+ 					s3c24xx_i2c_stop(i2c, -EINVAL);
++					break;
+ 				}
+ 
+ 				goto retry_write;
+diff --git a/drivers/i2c/busses/i2c-sh_mobile.c b/drivers/i2c/busses/i2c-sh_mobile.c
+index bdd60770779ad..c253535dc18ec 100644
+--- a/drivers/i2c/busses/i2c-sh_mobile.c
++++ b/drivers/i2c/busses/i2c-sh_mobile.c
+@@ -807,7 +807,7 @@ static const struct sh_mobile_dt_config r8a7740_dt_config = {
+ static const struct of_device_id sh_mobile_i2c_dt_ids[] = {
+ 	{ .compatible = "renesas,iic-r8a73a4", .data = &fast_clock_dt_config },
+ 	{ .compatible = "renesas,iic-r8a7740", .data = &r8a7740_dt_config },
+-	{ .compatible = "renesas,iic-r8a774c0", .data = &fast_clock_dt_config },
++	{ .compatible = "renesas,iic-r8a774c0", .data = &v2_freq_calc_dt_config },
+ 	{ .compatible = "renesas,iic-r8a7790", .data = &v2_freq_calc_dt_config },
+ 	{ .compatible = "renesas,iic-r8a7791", .data = &v2_freq_calc_dt_config },
+ 	{ .compatible = "renesas,iic-r8a7792", .data = &v2_freq_calc_dt_config },
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index 766c733336045..9c2401c5848ec 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -616,6 +616,13 @@ static int ad7124_of_parse_channel_config(struct iio_dev *indio_dev,
+ 		if (ret)
+ 			goto err;
+ 
++		if (channel >= indio_dev->num_channels) {
++			dev_err(indio_dev->dev.parent,
++				"Channel index >= number of channels\n");
++			ret = -EINVAL;
++			goto err;
++		}
++
+ 		ret = of_property_read_u32_array(child, "diff-channels",
+ 						 ain, 2);
+ 		if (ret)
+@@ -707,6 +714,11 @@ static int ad7124_setup(struct ad7124_state *st)
+ 	return ret;
+ }
+ 
++static void ad7124_reg_disable(void *r)
++{
++	regulator_disable(r);
++}
++
+ static int ad7124_probe(struct spi_device *spi)
+ {
+ 	const struct ad7124_chip_info *info;
+@@ -752,17 +764,20 @@ static int ad7124_probe(struct spi_device *spi)
+ 		ret = regulator_enable(st->vref[i]);
+ 		if (ret)
+ 			return ret;
++
++		ret = devm_add_action_or_reset(&spi->dev, ad7124_reg_disable,
++					       st->vref[i]);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	st->mclk = devm_clk_get(&spi->dev, "mclk");
+-	if (IS_ERR(st->mclk)) {
+-		ret = PTR_ERR(st->mclk);
+-		goto error_regulator_disable;
+-	}
++	if (IS_ERR(st->mclk))
++		return PTR_ERR(st->mclk);
+ 
+ 	ret = clk_prepare_enable(st->mclk);
+ 	if (ret < 0)
+-		goto error_regulator_disable;
++		return ret;
+ 
+ 	ret = ad7124_soft_reset(st);
+ 	if (ret < 0)
+@@ -792,11 +807,6 @@ error_remove_trigger:
+ 	ad_sd_cleanup_buffer_and_trigger(indio_dev);
+ error_clk_disable_unprepare:
+ 	clk_disable_unprepare(st->mclk);
+-error_regulator_disable:
+-	for (i = ARRAY_SIZE(st->vref) - 1; i >= 0; i--) {
+-		if (!IS_ERR_OR_NULL(st->vref[i]))
+-			regulator_disable(st->vref[i]);
+-	}
+ 
+ 	return ret;
+ }
+@@ -805,17 +815,11 @@ static int ad7124_remove(struct spi_device *spi)
+ {
+ 	struct iio_dev *indio_dev = spi_get_drvdata(spi);
+ 	struct ad7124_state *st = iio_priv(indio_dev);
+-	int i;
+ 
+ 	iio_device_unregister(indio_dev);
+ 	ad_sd_cleanup_buffer_and_trigger(indio_dev);
+ 	clk_disable_unprepare(st->mclk);
+ 
+-	for (i = ARRAY_SIZE(st->vref) - 1; i >= 0; i--) {
+-		if (!IS_ERR_OR_NULL(st->vref[i]))
+-			regulator_disable(st->vref[i]);
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iio/adc/ad7192.c b/drivers/iio/adc/ad7192.c
+index 2ed580521d815..1141cc13a1249 100644
+--- a/drivers/iio/adc/ad7192.c
++++ b/drivers/iio/adc/ad7192.c
+@@ -912,7 +912,7 @@ static int ad7192_probe(struct spi_device *spi)
+ {
+ 	struct ad7192_state *st;
+ 	struct iio_dev *indio_dev;
+-	int ret, voltage_uv = 0;
++	int ret;
+ 
+ 	if (!spi->irq) {
+ 		dev_err(&spi->dev, "no IRQ?\n");
+@@ -949,15 +949,12 @@ static int ad7192_probe(struct spi_device *spi)
+ 		goto error_disable_avdd;
+ 	}
+ 
+-	voltage_uv = regulator_get_voltage(st->avdd);
+-
+-	if (voltage_uv > 0) {
+-		st->int_vref_mv = voltage_uv / 1000;
+-	} else {
+-		ret = voltage_uv;
++	ret = regulator_get_voltage(st->avdd);
++	if (ret < 0) {
+ 		dev_err(&spi->dev, "Device tree error, reference voltage undefined\n");
+ 		goto error_disable_avdd;
+ 	}
++	st->int_vref_mv = ret / 1000;
+ 
+ 	spi_set_drvdata(spi, indio_dev);
+ 	st->chip_info = of_device_get_match_data(&spi->dev);
+@@ -1014,7 +1011,9 @@ static int ad7192_probe(struct spi_device *spi)
+ 	return 0;
+ 
+ error_disable_clk:
+-	clk_disable_unprepare(st->mclk);
++	if (st->clock_sel == AD7192_CLK_EXT_MCLK1_2 ||
++	    st->clock_sel == AD7192_CLK_EXT_MCLK2)
++		clk_disable_unprepare(st->mclk);
+ error_remove_trigger:
+ 	ad_sd_cleanup_buffer_and_trigger(indio_dev);
+ error_disable_dvdd:
+@@ -1031,7 +1030,9 @@ static int ad7192_remove(struct spi_device *spi)
+ 	struct ad7192_state *st = iio_priv(indio_dev);
+ 
+ 	iio_device_unregister(indio_dev);
+-	clk_disable_unprepare(st->mclk);
++	if (st->clock_sel == AD7192_CLK_EXT_MCLK1_2 ||
++	    st->clock_sel == AD7192_CLK_EXT_MCLK2)
++		clk_disable_unprepare(st->mclk);
+ 	ad_sd_cleanup_buffer_and_trigger(indio_dev);
+ 
+ 	regulator_disable(st->dvdd);
+diff --git a/drivers/iio/adc/ad7768-1.c b/drivers/iio/adc/ad7768-1.c
+index 0e93b0766eb45..c7e15c45140ad 100644
+--- a/drivers/iio/adc/ad7768-1.c
++++ b/drivers/iio/adc/ad7768-1.c
+@@ -166,6 +166,10 @@ struct ad7768_state {
+ 	 * transfer buffers to live in their own cache lines.
+ 	 */
+ 	union {
++		struct {
++			__be32 chan;
++			s64 timestamp;
++		} scan;
+ 		__be32 d32;
+ 		u8 d8[2];
+ 	} data ____cacheline_aligned;
+@@ -459,11 +463,11 @@ static irqreturn_t ad7768_trigger_handler(int irq, void *p)
+ 
+ 	mutex_lock(&st->lock);
+ 
+-	ret = spi_read(st->spi, &st->data.d32, 3);
++	ret = spi_read(st->spi, &st->data.scan.chan, 3);
+ 	if (ret < 0)
+ 		goto err_unlock;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, &st->data.d32,
++	iio_push_to_buffers_with_timestamp(indio_dev, &st->data.scan,
+ 					   iio_get_time_ns(indio_dev));
+ 
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/adc/ad7793.c b/drivers/iio/adc/ad7793.c
+index 5e980a06258e6..440ef4c7be074 100644
+--- a/drivers/iio/adc/ad7793.c
++++ b/drivers/iio/adc/ad7793.c
+@@ -279,6 +279,7 @@ static int ad7793_setup(struct iio_dev *indio_dev,
+ 	id &= AD7793_ID_MASK;
+ 
+ 	if (id != st->chip_info->id) {
++		ret = -ENODEV;
+ 		dev_err(&st->sd.spi->dev, "device ID query failed\n");
+ 		goto out;
+ 	}
+diff --git a/drivers/iio/adc/ad7923.c b/drivers/iio/adc/ad7923.c
+index a2cc966580540..8c1e866f72e85 100644
+--- a/drivers/iio/adc/ad7923.c
++++ b/drivers/iio/adc/ad7923.c
+@@ -59,8 +59,10 @@ struct ad7923_state {
+ 	/*
+ 	 * DMA (thus cache coherency maintenance) requires the
+ 	 * transfer buffers to live in their own cache lines.
++	 * Ensure rx_buf can be directly used in iio_push_to_buffers_with_timetamp
++	 * Length = 8 channels + 4 extra for 8 byte timestamp
+ 	 */
+-	__be16				rx_buf[4] ____cacheline_aligned;
++	__be16				rx_buf[12] ____cacheline_aligned;
+ 	__be16				tx_buf[4];
+ };
+ 
+diff --git a/drivers/iio/dac/ad5770r.c b/drivers/iio/dac/ad5770r.c
+index 84dcf149261f9..42decba1463cc 100644
+--- a/drivers/iio/dac/ad5770r.c
++++ b/drivers/iio/dac/ad5770r.c
+@@ -524,23 +524,29 @@ static int ad5770r_channel_config(struct ad5770r_state *st)
+ 	device_for_each_child_node(&st->spi->dev, child) {
+ 		ret = fwnode_property_read_u32(child, "num", &num);
+ 		if (ret)
+-			return ret;
+-		if (num >= AD5770R_MAX_CHANNELS)
+-			return -EINVAL;
++			goto err_child_out;
++		if (num >= AD5770R_MAX_CHANNELS) {
++			ret = -EINVAL;
++			goto err_child_out;
++		}
+ 
+ 		ret = fwnode_property_read_u32_array(child,
+ 						     "adi,range-microamp",
+ 						     tmp, 2);
+ 		if (ret)
+-			return ret;
++			goto err_child_out;
+ 
+ 		min = tmp[0] / 1000;
+ 		max = tmp[1] / 1000;
+ 		ret = ad5770r_store_output_range(st, min, max, num);
+ 		if (ret)
+-			return ret;
++			goto err_child_out;
+ 	}
+ 
++	return 0;
++
++err_child_out:
++	fwnode_handle_put(child);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/iio/gyro/fxas21002c_core.c b/drivers/iio/gyro/fxas21002c_core.c
+index 129eead8febc0..b7523357d8eba 100644
+--- a/drivers/iio/gyro/fxas21002c_core.c
++++ b/drivers/iio/gyro/fxas21002c_core.c
+@@ -399,6 +399,7 @@ static int fxas21002c_temp_get(struct fxas21002c_data *data, int *val)
+ 	ret = regmap_field_read(data->regmap_fields[F_TEMP], &temp);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to read temp: %d\n", ret);
++		fxas21002c_pm_put(data);
+ 		goto data_unlock;
+ 	}
+ 
+@@ -432,6 +433,7 @@ static int fxas21002c_axis_get(struct fxas21002c_data *data,
+ 			       &axis_be, sizeof(axis_be));
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to read axis: %d: %d\n", index, ret);
++		fxas21002c_pm_put(data);
+ 		goto data_unlock;
+ 	}
+ 
+diff --git a/drivers/interconnect/qcom/bcm-voter.c b/drivers/interconnect/qcom/bcm-voter.c
+index 887d13721e521..dd0e3bd50b94c 100644
+--- a/drivers/interconnect/qcom/bcm-voter.c
++++ b/drivers/interconnect/qcom/bcm-voter.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /*
+- * Copyright (c) 2020, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved.
+  */
+ 
+ #include <asm/div64.h>
+@@ -212,6 +212,7 @@ struct bcm_voter *of_bcm_voter_get(struct device *dev, const char *name)
+ 	}
+ 	mutex_unlock(&bcm_voter_lock);
+ 
++	of_node_put(node);
+ 	return voter;
+ }
+ EXPORT_SYMBOL_GPL(of_bcm_voter_get);
+@@ -369,6 +370,7 @@ static const struct of_device_id bcm_voter_of_match[] = {
+ 	{ .compatible = "qcom,bcm-voter" },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, bcm_voter_of_match);
+ 
+ static struct platform_driver qcom_icc_bcm_voter_driver = {
+ 	.probe = qcom_icc_bcm_voter_probe,
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index 02e7c10a4224b..b8d0b56a75751 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -1137,7 +1137,7 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
+ 
+ 		err = iommu_device_register(&iommu->iommu);
+ 		if (err)
+-			goto err_unmap;
++			goto err_sysfs;
+ 	}
+ 
+ 	drhd->iommu = iommu;
+@@ -1145,6 +1145,8 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
+ 
+ 	return 0;
+ 
++err_sysfs:
++	iommu_device_sysfs_remove(&iommu->iommu);
+ err_unmap:
+ 	unmap_iommu(iommu);
+ error_free_seq_id:
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index eececdeaa40f9..b21c8224b1c84 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -2606,9 +2606,9 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
+ 				    struct device *dev,
+ 				    u32 pasid)
+ {
+-	int flags = PASID_FLAG_SUPERVISOR_MODE;
+ 	struct dma_pte *pgd = domain->pgd;
+ 	int agaw, level;
++	int flags = 0;
+ 
+ 	/*
+ 	 * Skip top levels of page tables for iommu which has
+@@ -2624,7 +2624,10 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
+ 	if (level != 4 && level != 5)
+ 		return -EINVAL;
+ 
+-	flags |= (level == 5) ? PASID_FLAG_FL5LP : 0;
++	if (pasid != PASID_RID2PASID)
++		flags |= PASID_FLAG_SUPERVISOR_MODE;
++	if (level == 5)
++		flags |= PASID_FLAG_FL5LP;
+ 
+ 	if (domain->domain.type == IOMMU_DOMAIN_UNMANAGED)
+ 		flags |= PASID_FLAG_PAGE_SNOOP;
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index ce4ef2d245e3b..1e7c17989084e 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -677,7 +677,8 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
+ 	 * Since it is a second level only translation setup, we should
+ 	 * set SRE bit as well (addresses are expected to be GPAs).
+ 	 */
+-	pasid_set_sre(pte);
++	if (pasid != PASID_RID2PASID)
++		pasid_set_sre(pte);
+ 	pasid_set_present(pte);
+ 	pasid_flush_caches(iommu, pte, pasid, did);
+ 
+diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
+index 2bfdd57348443..81dea4caf561f 100644
+--- a/drivers/iommu/virtio-iommu.c
++++ b/drivers/iommu/virtio-iommu.c
+@@ -1138,6 +1138,7 @@ static struct virtio_device_id id_table[] = {
+ 	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
+ 	{ 0 },
+ };
++MODULE_DEVICE_TABLE(virtio, id_table);
+ 
+ static struct virtio_driver virtio_iommu_drv = {
+ 	.driver.name		= KBUILD_MODNAME,
+diff --git a/drivers/isdn/hardware/mISDN/hfcsusb.c b/drivers/isdn/hardware/mISDN/hfcsusb.c
+index 70061991915a5..cd5642cef01fd 100644
+--- a/drivers/isdn/hardware/mISDN/hfcsusb.c
++++ b/drivers/isdn/hardware/mISDN/hfcsusb.c
+@@ -46,7 +46,7 @@ static void hfcsusb_start_endpoint(struct hfcsusb *hw, int channel);
+ static void hfcsusb_stop_endpoint(struct hfcsusb *hw, int channel);
+ static int  hfcsusb_setup_bch(struct bchannel *bch, int protocol);
+ static void deactivate_bchannel(struct bchannel *bch);
+-static void hfcsusb_ph_info(struct hfcsusb *hw);
++static int  hfcsusb_ph_info(struct hfcsusb *hw);
+ 
+ /* start next background transfer for control channel */
+ static void
+@@ -241,7 +241,7 @@ hfcusb_l2l1B(struct mISDNchannel *ch, struct sk_buff *skb)
+  * send full D/B channel status information
+  * as MPH_INFORMATION_IND
+  */
+-static void
++static int
+ hfcsusb_ph_info(struct hfcsusb *hw)
+ {
+ 	struct ph_info *phi;
+@@ -250,7 +250,7 @@ hfcsusb_ph_info(struct hfcsusb *hw)
+ 
+ 	phi = kzalloc(struct_size(phi, bch, dch->dev.nrbchan), GFP_ATOMIC);
+ 	if (!phi)
+-		return;
++		return -ENOMEM;
+ 
+ 	phi->dch.ch.protocol = hw->protocol;
+ 	phi->dch.ch.Flags = dch->Flags;
+@@ -263,6 +263,8 @@ hfcsusb_ph_info(struct hfcsusb *hw)
+ 	_queue_data(&dch->dev.D, MPH_INFORMATION_IND, MISDN_ID_ANY,
+ 		    struct_size(phi, bch, dch->dev.nrbchan), phi, GFP_ATOMIC);
+ 	kfree(phi);
++
++	return 0;
+ }
+ 
+ /*
+@@ -347,8 +349,7 @@ hfcusb_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb)
+ 			ret = l1_event(dch->l1, hh->prim);
+ 		break;
+ 	case MPH_INFORMATION_REQ:
+-		hfcsusb_ph_info(hw);
+-		ret = 0;
++		ret = hfcsusb_ph_info(hw);
+ 		break;
+ 	}
+ 
+@@ -403,8 +404,7 @@ hfc_l1callback(struct dchannel *dch, u_int cmd)
+ 			       hw->name, __func__, cmd);
+ 		return -1;
+ 	}
+-	hfcsusb_ph_info(hw);
+-	return 0;
++	return hfcsusb_ph_info(hw);
+ }
+ 
+ static int
+@@ -746,8 +746,7 @@ hfcsusb_setup_bch(struct bchannel *bch, int protocol)
+ 			handle_led(hw, (bch->nr == 1) ? LED_B1_OFF :
+ 				   LED_B2_OFF);
+ 	}
+-	hfcsusb_ph_info(hw);
+-	return 0;
++	return hfcsusb_ph_info(hw);
+ }
+ 
+ static void
+diff --git a/drivers/isdn/hardware/mISDN/mISDNinfineon.c b/drivers/isdn/hardware/mISDN/mISDNinfineon.c
+index a16c7a2a7f3d0..88d592bafdb02 100644
+--- a/drivers/isdn/hardware/mISDN/mISDNinfineon.c
++++ b/drivers/isdn/hardware/mISDN/mISDNinfineon.c
+@@ -630,17 +630,19 @@ static void
+ release_io(struct inf_hw *hw)
+ {
+ 	if (hw->cfg.mode) {
+-		if (hw->cfg.p) {
++		if (hw->cfg.mode == AM_MEMIO) {
+ 			release_mem_region(hw->cfg.start, hw->cfg.size);
+-			iounmap(hw->cfg.p);
++			if (hw->cfg.p)
++				iounmap(hw->cfg.p);
+ 		} else
+ 			release_region(hw->cfg.start, hw->cfg.size);
+ 		hw->cfg.mode = AM_NONE;
+ 	}
+ 	if (hw->addr.mode) {
+-		if (hw->addr.p) {
++		if (hw->addr.mode == AM_MEMIO) {
+ 			release_mem_region(hw->addr.start, hw->addr.size);
+-			iounmap(hw->addr.p);
++			if (hw->addr.p)
++				iounmap(hw->addr.p);
+ 		} else
+ 			release_region(hw->addr.start, hw->addr.size);
+ 		hw->addr.mode = AM_NONE;
+@@ -670,9 +672,12 @@ setup_io(struct inf_hw *hw)
+ 				(ulong)hw->cfg.start, (ulong)hw->cfg.size);
+ 			return err;
+ 		}
+-		if (hw->ci->cfg_mode == AM_MEMIO)
+-			hw->cfg.p = ioremap(hw->cfg.start, hw->cfg.size);
+ 		hw->cfg.mode = hw->ci->cfg_mode;
++		if (hw->ci->cfg_mode == AM_MEMIO) {
++			hw->cfg.p = ioremap(hw->cfg.start, hw->cfg.size);
++			if (!hw->cfg.p)
++				return -ENOMEM;
++		}
+ 		if (debug & DEBUG_HW)
+ 			pr_notice("%s: IO cfg %lx (%lu bytes) mode%d\n",
+ 				  hw->name, (ulong)hw->cfg.start,
+@@ -697,12 +702,12 @@ setup_io(struct inf_hw *hw)
+ 				(ulong)hw->addr.start, (ulong)hw->addr.size);
+ 			return err;
+ 		}
++		hw->addr.mode = hw->ci->addr_mode;
+ 		if (hw->ci->addr_mode == AM_MEMIO) {
+ 			hw->addr.p = ioremap(hw->addr.start, hw->addr.size);
+-			if (unlikely(!hw->addr.p))
++			if (!hw->addr.p)
+ 				return -ENOMEM;
+ 		}
+-		hw->addr.mode = hw->ci->addr_mode;
+ 		if (debug & DEBUG_HW)
+ 			pr_notice("%s: IO addr %lx (%lu bytes) mode%d\n",
+ 				  hw->name, (ulong)hw->addr.start,
+diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
+index 962f7df0691ef..41735a25d50aa 100644
+--- a/drivers/md/dm-snap.c
++++ b/drivers/md/dm-snap.c
+@@ -854,7 +854,7 @@ static int dm_add_exception(void *context, chunk_t old, chunk_t new)
+ static uint32_t __minimum_chunk_size(struct origin *o)
+ {
+ 	struct dm_snapshot *snap;
+-	unsigned chunk_size = 0;
++	unsigned chunk_size = rounddown_pow_of_two(UINT_MAX);
+ 
+ 	if (o)
+ 		list_for_each_entry(snap, &o->snapshots, list)
+diff --git a/drivers/media/dvb-frontends/sp8870.c b/drivers/media/dvb-frontends/sp8870.c
+index 655db8272268d..9767159aeb9b2 100644
+--- a/drivers/media/dvb-frontends/sp8870.c
++++ b/drivers/media/dvb-frontends/sp8870.c
+@@ -281,7 +281,7 @@ static int sp8870_set_frontend_parameters(struct dvb_frontend *fe)
+ 
+ 	// read status reg in order to clear pending irqs
+ 	err = sp8870_readreg(state, 0x200);
+-	if (err)
++	if (err < 0)
+ 		return err;
+ 
+ 	// system controller start
+diff --git a/drivers/media/usb/gspca/cpia1.c b/drivers/media/usb/gspca/cpia1.c
+index a4f7431486f31..d93d384286c16 100644
+--- a/drivers/media/usb/gspca/cpia1.c
++++ b/drivers/media/usb/gspca/cpia1.c
+@@ -1424,7 +1424,6 @@ static int sd_config(struct gspca_dev *gspca_dev,
+ {
+ 	struct sd *sd = (struct sd *) gspca_dev;
+ 	struct cam *cam;
+-	int ret;
+ 
+ 	sd->mainsFreq = FREQ_DEF == V4L2_CID_POWER_LINE_FREQUENCY_60HZ;
+ 	reset_camera_params(gspca_dev);
+@@ -1436,10 +1435,7 @@ static int sd_config(struct gspca_dev *gspca_dev,
+ 	cam->cam_mode = mode;
+ 	cam->nmodes = ARRAY_SIZE(mode);
+ 
+-	ret = goto_low_power(gspca_dev);
+-	if (ret)
+-		gspca_err(gspca_dev, "Cannot go to low power mode: %d\n",
+-			  ret);
++	goto_low_power(gspca_dev);
+ 	/* Check the firmware version. */
+ 	sd->params.version.firmwareVersion = 0;
+ 	get_version_information(gspca_dev);
+diff --git a/drivers/media/usb/gspca/m5602/m5602_mt9m111.c b/drivers/media/usb/gspca/m5602/m5602_mt9m111.c
+index bfa3b381d8a26..bf1af6ed9131e 100644
+--- a/drivers/media/usb/gspca/m5602/m5602_mt9m111.c
++++ b/drivers/media/usb/gspca/m5602/m5602_mt9m111.c
+@@ -195,7 +195,7 @@ static const struct v4l2_ctrl_config mt9m111_greenbal_cfg = {
+ int mt9m111_probe(struct sd *sd)
+ {
+ 	u8 data[2] = {0x00, 0x00};
+-	int i, rc = 0;
++	int i, err;
+ 	struct gspca_dev *gspca_dev = (struct gspca_dev *)sd;
+ 
+ 	if (force_sensor) {
+@@ -213,18 +213,18 @@ int mt9m111_probe(struct sd *sd)
+ 	/* Do the preinit */
+ 	for (i = 0; i < ARRAY_SIZE(preinit_mt9m111); i++) {
+ 		if (preinit_mt9m111[i][0] == BRIDGE) {
+-			rc |= m5602_write_bridge(sd,
+-				preinit_mt9m111[i][1],
+-				preinit_mt9m111[i][2]);
++			err = m5602_write_bridge(sd,
++					preinit_mt9m111[i][1],
++					preinit_mt9m111[i][2]);
+ 		} else {
+ 			data[0] = preinit_mt9m111[i][2];
+ 			data[1] = preinit_mt9m111[i][3];
+-			rc |= m5602_write_sensor(sd,
+-				preinit_mt9m111[i][1], data, 2);
++			err = m5602_write_sensor(sd,
++					preinit_mt9m111[i][1], data, 2);
+ 		}
++		if (err < 0)
++			return err;
+ 	}
+-	if (rc < 0)
+-		return rc;
+ 
+ 	if (m5602_read_sensor(sd, MT9M111_SC_CHIPVER, data, 2))
+ 		return -ENODEV;
+diff --git a/drivers/media/usb/gspca/m5602/m5602_po1030.c b/drivers/media/usb/gspca/m5602/m5602_po1030.c
+index d680b777f097f..8fd99ceee4b67 100644
+--- a/drivers/media/usb/gspca/m5602/m5602_po1030.c
++++ b/drivers/media/usb/gspca/m5602/m5602_po1030.c
+@@ -154,8 +154,8 @@ static const struct v4l2_ctrl_config po1030_greenbal_cfg = {
+ 
+ int po1030_probe(struct sd *sd)
+ {
+-	int rc = 0;
+ 	u8 dev_id_h = 0, i;
++	int err;
+ 	struct gspca_dev *gspca_dev = (struct gspca_dev *)sd;
+ 
+ 	if (force_sensor) {
+@@ -174,14 +174,14 @@ int po1030_probe(struct sd *sd)
+ 	for (i = 0; i < ARRAY_SIZE(preinit_po1030); i++) {
+ 		u8 data = preinit_po1030[i][2];
+ 		if (preinit_po1030[i][0] == SENSOR)
+-			rc |= m5602_write_sensor(sd,
+-				preinit_po1030[i][1], &data, 1);
++			err = m5602_write_sensor(sd, preinit_po1030[i][1],
++						 &data, 1);
+ 		else
+-			rc |= m5602_write_bridge(sd, preinit_po1030[i][1],
+-						data);
++			err = m5602_write_bridge(sd, preinit_po1030[i][1],
++						 data);
++		if (err < 0)
++			return err;
+ 	}
+-	if (rc < 0)
+-		return rc;
+ 
+ 	if (m5602_read_sensor(sd, PO1030_DEVID_H, &dev_id_h, 1))
+ 		return -ENODEV;
+diff --git a/drivers/misc/kgdbts.c b/drivers/misc/kgdbts.c
+index 2e081a58da6c5..49489153cd162 100644
+--- a/drivers/misc/kgdbts.c
++++ b/drivers/misc/kgdbts.c
+@@ -100,8 +100,9 @@
+ 		printk(KERN_INFO a);	\
+ } while (0)
+ #define v2printk(a...) do {		\
+-	if (verbose > 1)		\
++	if (verbose > 1) {		\
+ 		printk(KERN_INFO a);	\
++	}				\
+ 	touch_nmi_watchdog();		\
+ } while (0)
+ #define eprintk(a...) do {		\
+diff --git a/drivers/misc/lis3lv02d/lis3lv02d.h b/drivers/misc/lis3lv02d/lis3lv02d.h
+index c394c0b08519a..7ac788fae1b86 100644
+--- a/drivers/misc/lis3lv02d/lis3lv02d.h
++++ b/drivers/misc/lis3lv02d/lis3lv02d.h
+@@ -271,6 +271,7 @@ struct lis3lv02d {
+ 	int			regs_size;
+ 	u8                      *reg_cache;
+ 	bool			regs_stored;
++	bool			init_required;
+ 	u8                      odr_mask;  /* ODR bit mask */
+ 	u8			whoami;    /* indicates measurement precision */
+ 	s16 (*read_data) (struct lis3lv02d *lis3, int reg);
+diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c
+index 2161c1234ad72..fee603039e872 100644
+--- a/drivers/misc/mei/interrupt.c
++++ b/drivers/misc/mei/interrupt.c
+@@ -277,6 +277,9 @@ static int mei_cl_irq_read(struct mei_cl *cl, struct mei_cl_cb *cb,
+ 		return ret;
+ 	}
+ 
++	pm_runtime_mark_last_busy(dev->dev);
++	pm_request_autosuspend(dev->dev);
++
+ 	list_move_tail(&cb->list, &cl->rd_pending);
+ 
+ 	return 0;
+diff --git a/drivers/net/caif/caif_serial.c b/drivers/net/caif/caif_serial.c
+index bcc14c5875bf0..d025ea4349339 100644
+--- a/drivers/net/caif/caif_serial.c
++++ b/drivers/net/caif/caif_serial.c
+@@ -270,9 +270,6 @@ static netdev_tx_t caif_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct ser_device *ser;
+ 
+-	if (WARN_ON(!dev))
+-		return -EINVAL;
+-
+ 	ser = netdev_priv(dev);
+ 
+ 	/* Send flow off once, on high water mark */
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index de7692b763d83..190025a0a98ed 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1128,14 +1128,6 @@ mt7530_port_set_vlan_aware(struct dsa_switch *ds, int port)
+ {
+ 	struct mt7530_priv *priv = ds->priv;
+ 
+-	/* The real fabric path would be decided on the membership in the
+-	 * entry of VLAN table. PCR_MATRIX set up here with ALL_MEMBERS
+-	 * means potential VLAN can be consisting of certain subset of all
+-	 * ports.
+-	 */
+-	mt7530_rmw(priv, MT7530_PCR_P(port),
+-		   PCR_MATRIX_MASK, PCR_MATRIX(MT7530_ALL_MEMBERS));
+-
+ 	/* Trapped into security mode allows packet forwarding through VLAN
+ 	 * table lookup. CPU port is set to fallback mode to let untagged
+ 	 * frames pass through.
+diff --git a/drivers/net/dsa/sja1105/sja1105_dynamic_config.c b/drivers/net/dsa/sja1105/sja1105_dynamic_config.c
+index b777d3f375736..12cd04b568030 100644
+--- a/drivers/net/dsa/sja1105/sja1105_dynamic_config.c
++++ b/drivers/net/dsa/sja1105/sja1105_dynamic_config.c
+@@ -167,9 +167,10 @@ enum sja1105_hostcmd {
+ 	SJA1105_HOSTCMD_INVALIDATE = 4,
+ };
+ 
++/* Command and entry overlap */
+ static void
+-sja1105_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+-			      enum packing_op op)
++sja1105et_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
++				enum packing_op op)
+ {
+ 	const int size = SJA1105_SIZE_DYN_CMD;
+ 
+@@ -179,6 +180,20 @@ sja1105_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+ 	sja1105_packing(buf, &cmd->index,    9,  0, size, op);
+ }
+ 
++/* Command and entry are separate */
++static void
++sja1105pqrs_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
++				  enum packing_op op)
++{
++	u8 *p = buf + SJA1105_SIZE_VL_LOOKUP_ENTRY;
++	const int size = SJA1105_SIZE_DYN_CMD;
++
++	sja1105_packing(p, &cmd->valid,   31, 31, size, op);
++	sja1105_packing(p, &cmd->errors,  30, 30, size, op);
++	sja1105_packing(p, &cmd->rdwrset, 29, 29, size, op);
++	sja1105_packing(p, &cmd->index,    9,  0, size, op);
++}
++
+ static size_t sja1105et_vl_lookup_entry_packing(void *buf, void *entry_ptr,
+ 						enum packing_op op)
+ {
+@@ -641,7 +656,7 @@ static size_t sja1105pqrs_cbs_entry_packing(void *buf, void *entry_ptr,
+ const struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
+ 	[BLK_IDX_VL_LOOKUP] = {
+ 		.entry_packing = sja1105et_vl_lookup_entry_packing,
+-		.cmd_packing = sja1105_vl_lookup_cmd_packing,
++		.cmd_packing = sja1105et_vl_lookup_cmd_packing,
+ 		.access = OP_WRITE,
+ 		.max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT,
+ 		.packed_size = SJA1105ET_SIZE_VL_LOOKUP_DYN_CMD,
+@@ -725,7 +740,7 @@ const struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
+ const struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = {
+ 	[BLK_IDX_VL_LOOKUP] = {
+ 		.entry_packing = sja1105_vl_lookup_entry_packing,
+-		.cmd_packing = sja1105_vl_lookup_cmd_packing,
++		.cmd_packing = sja1105pqrs_vl_lookup_cmd_packing,
+ 		.access = (OP_READ | OP_WRITE),
+ 		.max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT,
+ 		.packed_size = SJA1105PQRS_SIZE_VL_LOOKUP_DYN_CMD,
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index 1a855816cbc9d..e273b2bd82ba7 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -25,6 +25,8 @@
+ #include "sja1105_sgmii.h"
+ #include "sja1105_tas.h"
+ 
++#define SJA1105_DEFAULT_VLAN		(VLAN_N_VID - 1)
++
+ static const struct dsa_switch_ops sja1105_switch_ops;
+ 
+ static void sja1105_hw_reset(struct gpio_desc *gpio, unsigned int pulse_len,
+@@ -204,6 +206,7 @@ static int sja1105_init_mii_settings(struct sja1105_private *priv,
+ 		default:
+ 			dev_err(dev, "Unsupported PHY mode %s!\n",
+ 				phy_modes(ports[i].phy_mode));
++			return -EINVAL;
+ 		}
+ 
+ 		/* Even though the SerDes port is able to drive SGMII autoneg
+@@ -292,6 +295,13 @@ static int sja1105_init_l2_lookup_params(struct sja1105_private *priv)
+ 	return 0;
+ }
+ 
++/* Set up a default VLAN for untagged traffic injected from the CPU
++ * using management routes (e.g. STP, PTP) as opposed to tag_8021q.
++ * All DT-defined ports are members of this VLAN, and there are no
++ * restrictions on forwarding (since the CPU selects the destination).
++ * Frames from this VLAN will always be transmitted as untagged, and
++ * neither the bridge nor the 8021q module cannot create this VLAN ID.
++ */
+ static int sja1105_init_static_vlan(struct sja1105_private *priv)
+ {
+ 	struct sja1105_table *table;
+@@ -301,17 +311,13 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
+ 		.vmemb_port = 0,
+ 		.vlan_bc = 0,
+ 		.tag_port = 0,
+-		.vlanid = 1,
++		.vlanid = SJA1105_DEFAULT_VLAN,
+ 	};
+ 	struct dsa_switch *ds = priv->ds;
+ 	int port;
+ 
+ 	table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
+ 
+-	/* The static VLAN table will only contain the initial pvid of 1.
+-	 * All other VLANs are to be configured through dynamic entries,
+-	 * and kept in the static configuration table as backing memory.
+-	 */
+ 	if (table->entry_count) {
+ 		kfree(table->entries);
+ 		table->entry_count = 0;
+@@ -324,9 +330,6 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
+ 
+ 	table->entry_count = 1;
+ 
+-	/* VLAN 1: all DT-defined ports are members; no restrictions on
+-	 * forwarding; always transmit as untagged.
+-	 */
+ 	for (port = 0; port < ds->num_ports; port++) {
+ 		struct sja1105_bridge_vlan *v;
+ 
+@@ -337,15 +340,12 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
+ 		pvid.vlan_bc |= BIT(port);
+ 		pvid.tag_port &= ~BIT(port);
+ 
+-		/* Let traffic that don't need dsa_8021q (e.g. STP, PTP) be
+-		 * transmitted as untagged.
+-		 */
+ 		v = kzalloc(sizeof(*v), GFP_KERNEL);
+ 		if (!v)
+ 			return -ENOMEM;
+ 
+ 		v->port = port;
+-		v->vid = 1;
++		v->vid = SJA1105_DEFAULT_VLAN;
+ 		v->untagged = true;
+ 		if (dsa_is_cpu_port(ds, port))
+ 			v->pvid = true;
+@@ -2756,11 +2756,22 @@ static int sja1105_vlan_add_one(struct dsa_switch *ds, int port, u16 vid,
+ 	bool pvid = flags & BRIDGE_VLAN_INFO_PVID;
+ 	struct sja1105_bridge_vlan *v;
+ 
+-	list_for_each_entry(v, vlan_list, list)
+-		if (v->port == port && v->vid == vid &&
+-		    v->untagged == untagged && v->pvid == pvid)
++	list_for_each_entry(v, vlan_list, list) {
++		if (v->port == port && v->vid == vid) {
+ 			/* Already added */
+-			return 0;
++			if (v->untagged == untagged && v->pvid == pvid)
++				/* Nothing changed */
++				return 0;
++
++			/* It's the same VLAN, but some of the flags changed
++			 * and the user did not bother to delete it first.
++			 * Update it and trigger sja1105_build_vlan_table.
++			 */
++			v->untagged = untagged;
++			v->pvid = pvid;
++			return 1;
++		}
++	}
+ 
+ 	v = kzalloc(sizeof(*v), GFP_KERNEL);
+ 	if (!v) {
+@@ -2911,13 +2922,13 @@ static int sja1105_setup(struct dsa_switch *ds)
+ 	rc = sja1105_static_config_load(priv, ports);
+ 	if (rc < 0) {
+ 		dev_err(ds->dev, "Failed to load static config: %d\n", rc);
+-		return rc;
++		goto out_ptp_clock_unregister;
+ 	}
+ 	/* Configure the CGU (PHY link modes and speeds) */
+ 	rc = sja1105_clocking_setup(priv);
+ 	if (rc < 0) {
+ 		dev_err(ds->dev, "Failed to configure MII clocking: %d\n", rc);
+-		return rc;
++		goto out_static_config_free;
+ 	}
+ 	/* On SJA1105, VLAN filtering per se is always enabled in hardware.
+ 	 * The only thing we can do to disable it is lie about what the 802.1Q
+@@ -2938,7 +2949,7 @@ static int sja1105_setup(struct dsa_switch *ds)
+ 
+ 	rc = sja1105_devlink_setup(ds);
+ 	if (rc < 0)
+-		return rc;
++		goto out_static_config_free;
+ 
+ 	/* The DSA/switchdev model brings up switch ports in standalone mode by
+ 	 * default, and that means vlan_filtering is 0 since they're not under
+@@ -2947,6 +2958,17 @@ static int sja1105_setup(struct dsa_switch *ds)
+ 	rtnl_lock();
+ 	rc = sja1105_setup_8021q_tagging(ds, true);
+ 	rtnl_unlock();
++	if (rc)
++		goto out_devlink_teardown;
++
++	return 0;
++
++out_devlink_teardown:
++	sja1105_devlink_teardown(ds);
++out_ptp_clock_unregister:
++	sja1105_ptp_clock_unregister(ds);
++out_static_config_free:
++	sja1105_static_config_free(&priv->static_config);
+ 
+ 	return rc;
+ }
+@@ -3461,8 +3483,10 @@ static int sja1105_probe(struct spi_device *spi)
+ 		priv->cbs = devm_kcalloc(dev, priv->info->num_cbs_shapers,
+ 					 sizeof(struct sja1105_cbs_entry),
+ 					 GFP_KERNEL);
+-		if (!priv->cbs)
+-			return -ENOMEM;
++		if (!priv->cbs) {
++			rc = -ENOMEM;
++			goto out_unregister_switch;
++		}
+ 	}
+ 
+ 	/* Connections between dsa_port and sja1105_port */
+@@ -3487,7 +3511,7 @@ static int sja1105_probe(struct spi_device *spi)
+ 			dev_err(ds->dev,
+ 				"failed to create deferred xmit thread: %d\n",
+ 				rc);
+-			goto out;
++			goto out_destroy_workers;
+ 		}
+ 		skb_queue_head_init(&sp->xmit_queue);
+ 		sp->xmit_tpid = ETH_P_SJA1105;
+@@ -3497,7 +3521,8 @@ static int sja1105_probe(struct spi_device *spi)
+ 	}
+ 
+ 	return 0;
+-out:
++
++out_destroy_workers:
+ 	while (port-- > 0) {
+ 		struct sja1105_port *sp = &priv->ports[port];
+ 
+@@ -3506,6 +3531,10 @@ out:
+ 
+ 		kthread_destroy_worker(sp->xmit_worker);
+ 	}
++
++out_unregister_switch:
++	dsa_unregister_switch(ds);
++
+ 	return rc;
+ }
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c
+index 3e8a179f39db4..633b103896535 100644
+--- a/drivers/net/ethernet/broadcom/bnx2.c
++++ b/drivers/net/ethernet/broadcom/bnx2.c
+@@ -8247,9 +8247,9 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
+ 		BNX2_WR(bp, PCI_COMMAND, reg);
+ 	} else if ((BNX2_CHIP_ID(bp) == BNX2_CHIP_ID_5706_A1) &&
+ 		!(bp->flags & BNX2_FLAG_PCIX)) {
+-
+ 		dev_err(&pdev->dev,
+ 			"5706 A1 can only be used in a PCIX bus, aborting\n");
++		rc = -EPERM;
+ 		goto err_out_unmap;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 4385b42a2b636..adfaa9a850dd3 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -280,7 +280,8 @@ static bool bnxt_vf_pciid(enum board_idx idx)
+ {
+ 	return (idx == NETXTREME_C_VF || idx == NETXTREME_E_VF ||
+ 		idx == NETXTREME_S_VF || idx == NETXTREME_C_VF_HV ||
+-		idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF);
++		idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF ||
++		idx == NETXTREME_E_P5_VF_HV);
+ }
+ 
+ #define DB_CP_REARM_FLAGS	(DB_KEY_CP | DB_IDX_VALID)
+@@ -6833,14 +6834,7 @@ ctx_err:
+ static void bnxt_hwrm_set_pg_attr(struct bnxt_ring_mem_info *rmem, u8 *pg_attr,
+ 				  __le64 *pg_dir)
+ {
+-	u8 pg_size = 0;
+-
+-	if (BNXT_PAGE_SHIFT == 13)
+-		pg_size = 1 << 4;
+-	else if (BNXT_PAGE_SIZE == 16)
+-		pg_size = 2 << 4;
+-
+-	*pg_attr = pg_size;
++	BNXT_SET_CTX_PAGE_ATTR(*pg_attr);
+ 	if (rmem->depth >= 1) {
+ 		if (rmem->depth == 2)
+ 			*pg_attr |= 2;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index e4e926c65118a..a95c5afa2f018 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1440,6 +1440,16 @@ struct bnxt_ctx_pg_info {
+ #define BNXT_MAX_TQM_RINGS		\
+ 	(BNXT_MAX_TQM_SP_RINGS + BNXT_MAX_TQM_FP_RINGS)
+ 
++#define BNXT_SET_CTX_PAGE_ATTR(attr)					\
++do {									\
++	if (BNXT_PAGE_SIZE == 0x2000)					\
++		attr = FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_PG_8K;	\
++	else if (BNXT_PAGE_SIZE == 0x10000)				\
++		attr = FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_64K;	\
++	else								\
++		attr = FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_4K;	\
++} while (0)
++
+ struct bnxt_ctx_mem_info {
+ 	u32	qp_max_entries;
+ 	u16	qp_min_qp1_entries;
+diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c
+index 7d00d3a8ded4d..e0d18e9171080 100644
+--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
++++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
+@@ -1153,7 +1153,7 @@ static void octeon_destroy_resources(struct octeon_device *oct)
+  * @lio: per-network private data
+  * @start_stop: whether to start or stop
+  */
+-static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
++static int send_rx_ctrl_cmd(struct lio *lio, int start_stop)
+ {
+ 	struct octeon_soft_command *sc;
+ 	union octnet_cmd *ncmd;
+@@ -1161,15 +1161,15 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
+ 	int retval;
+ 
+ 	if (oct->props[lio->ifidx].rx_on == start_stop)
+-		return;
++		return 0;
+ 
+ 	sc = (struct octeon_soft_command *)
+ 		octeon_alloc_soft_command(oct, OCTNET_CMD_SIZE,
+ 					  16, 0);
+ 	if (!sc) {
+ 		netif_info(lio, rx_err, lio->netdev,
+-			   "Failed to allocate octeon_soft_command\n");
+-		return;
++			   "Failed to allocate octeon_soft_command struct\n");
++		return -ENOMEM;
+ 	}
+ 
+ 	ncmd = (union octnet_cmd *)sc->virtdptr;
+@@ -1192,18 +1192,19 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
+ 	if (retval == IQ_SEND_FAILED) {
+ 		netif_info(lio, rx_err, lio->netdev, "Failed to send RX Control message\n");
+ 		octeon_free_soft_command(oct, sc);
+-		return;
+ 	} else {
+ 		/* Sleep on a wait queue till the cond flag indicates that the
+ 		 * response arrived or timed-out.
+ 		 */
+ 		retval = wait_for_sc_completion_timeout(oct, sc, 0);
+ 		if (retval)
+-			return;
++			return retval;
+ 
+ 		oct->props[lio->ifidx].rx_on = start_stop;
+ 		WRITE_ONCE(sc->caller_is_done, true);
+ 	}
++
++	return retval;
+ }
+ 
+ /**
+@@ -1778,6 +1779,7 @@ static int liquidio_open(struct net_device *netdev)
+ 	struct octeon_device_priv *oct_priv =
+ 		(struct octeon_device_priv *)oct->priv;
+ 	struct napi_struct *napi, *n;
++	int ret = 0;
+ 
+ 	if (oct->props[lio->ifidx].napi_enabled == 0) {
+ 		tasklet_disable(&oct_priv->droq_tasklet);
+@@ -1813,7 +1815,9 @@ static int liquidio_open(struct net_device *netdev)
+ 	netif_info(lio, ifup, lio->netdev, "Interface Open, ready for traffic\n");
+ 
+ 	/* tell Octeon to start forwarding packets to host */
+-	send_rx_ctrl_cmd(lio, 1);
++	ret = send_rx_ctrl_cmd(lio, 1);
++	if (ret)
++		return ret;
+ 
+ 	/* start periodical statistics fetch */
+ 	INIT_DELAYED_WORK(&lio->stats_wk.work, lio_fetch_stats);
+@@ -1824,7 +1828,7 @@ static int liquidio_open(struct net_device *netdev)
+ 	dev_info(&oct->pci_dev->dev, "%s interface is opened\n",
+ 		 netdev->name);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ /**
+@@ -1838,6 +1842,7 @@ static int liquidio_stop(struct net_device *netdev)
+ 	struct octeon_device_priv *oct_priv =
+ 		(struct octeon_device_priv *)oct->priv;
+ 	struct napi_struct *napi, *n;
++	int ret = 0;
+ 
+ 	ifstate_reset(lio, LIO_IFSTATE_RUNNING);
+ 
+@@ -1854,7 +1859,9 @@ static int liquidio_stop(struct net_device *netdev)
+ 	lio->link_changes++;
+ 
+ 	/* Tell Octeon that nic interface is down. */
+-	send_rx_ctrl_cmd(lio, 0);
++	ret = send_rx_ctrl_cmd(lio, 0);
++	if (ret)
++		return ret;
+ 
+ 	if (OCTEON_CN23XX_PF(oct)) {
+ 		if (!oct->msix_on)
+@@ -1889,7 +1896,7 @@ static int liquidio_stop(struct net_device *netdev)
+ 
+ 	dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
+index 103440f97bc84..226a7842d2fdb 100644
+--- a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
++++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
+@@ -595,7 +595,7 @@ static void octeon_destroy_resources(struct octeon_device *oct)
+  * @lio: per-network private data
+  * @start_stop: whether to start or stop
+  */
+-static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
++static int send_rx_ctrl_cmd(struct lio *lio, int start_stop)
+ {
+ 	struct octeon_device *oct = (struct octeon_device *)lio->oct_dev;
+ 	struct octeon_soft_command *sc;
+@@ -603,11 +603,16 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
+ 	int retval;
+ 
+ 	if (oct->props[lio->ifidx].rx_on == start_stop)
+-		return;
++		return 0;
+ 
+ 	sc = (struct octeon_soft_command *)
+ 		octeon_alloc_soft_command(oct, OCTNET_CMD_SIZE,
+ 					  16, 0);
++	if (!sc) {
++		netif_info(lio, rx_err, lio->netdev,
++			   "Failed to allocate octeon_soft_command struct\n");
++		return -ENOMEM;
++	}
+ 
+ 	ncmd = (union octnet_cmd *)sc->virtdptr;
+ 
+@@ -635,11 +640,13 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
+ 		 */
+ 		retval = wait_for_sc_completion_timeout(oct, sc, 0);
+ 		if (retval)
+-			return;
++			return retval;
+ 
+ 		oct->props[lio->ifidx].rx_on = start_stop;
+ 		WRITE_ONCE(sc->caller_is_done, true);
+ 	}
++
++	return retval;
+ }
+ 
+ /**
+@@ -906,6 +913,7 @@ static int liquidio_open(struct net_device *netdev)
+ 	struct octeon_device_priv *oct_priv =
+ 		(struct octeon_device_priv *)oct->priv;
+ 	struct napi_struct *napi, *n;
++	int ret = 0;
+ 
+ 	if (!oct->props[lio->ifidx].napi_enabled) {
+ 		tasklet_disable(&oct_priv->droq_tasklet);
+@@ -932,11 +940,13 @@ static int liquidio_open(struct net_device *netdev)
+ 					(LIQUIDIO_NDEV_STATS_POLL_TIME_MS));
+ 
+ 	/* tell Octeon to start forwarding packets to host */
+-	send_rx_ctrl_cmd(lio, 1);
++	ret = send_rx_ctrl_cmd(lio, 1);
++	if (ret)
++		return ret;
+ 
+ 	dev_info(&oct->pci_dev->dev, "%s interface is opened\n", netdev->name);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ /**
+@@ -950,9 +960,12 @@ static int liquidio_stop(struct net_device *netdev)
+ 	struct octeon_device_priv *oct_priv =
+ 		(struct octeon_device_priv *)oct->priv;
+ 	struct napi_struct *napi, *n;
++	int ret = 0;
+ 
+ 	/* tell Octeon to stop forwarding packets to host */
+-	send_rx_ctrl_cmd(lio, 0);
++	ret = send_rx_ctrl_cmd(lio, 0);
++	if (ret)
++		return ret;
+ 
+ 	netif_info(lio, ifdown, lio->netdev, "Stopping interface!\n");
+ 	/* Inform that netif carrier is down */
+@@ -986,7 +999,7 @@ static int liquidio_stop(struct net_device *netdev)
+ 
+ 	dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+index bde8494215c41..e664e05b9f026 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+@@ -1042,7 +1042,7 @@ void clear_all_filters(struct adapter *adapter)
+ 				cxgb4_del_filter(dev, f->tid, &f->fs);
+ 		}
+ 
+-		sb = t4_read_reg(adapter, LE_DB_SRVR_START_INDEX_A);
++		sb = adapter->tids.stid_base;
+ 		for (i = 0; i < sb; i++) {
+ 			f = (struct filter_entry *)adapter->tids.tid_tab[i];
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 7fd264a6d0854..23c13f34a5727 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -6484,9 +6484,9 @@ static void cxgb4_ktls_dev_del(struct net_device *netdev,
+ 
+ 	adap->uld[CXGB4_ULD_KTLS].tlsdev_ops->tls_dev_del(netdev, tls_ctx,
+ 							  direction);
+-	cxgb4_set_ktls_feature(adap, FW_PARAMS_PARAM_DEV_KTLS_HW_DISABLE);
+ 
+ out_unlock:
++	cxgb4_set_ktls_feature(adap, FW_PARAMS_PARAM_DEV_KTLS_HW_DISABLE);
+ 	mutex_unlock(&uld_mutex);
+ }
+ 
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
+index 3a50d5a62aceb..f9353826b2455 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
+@@ -59,6 +59,7 @@ static int chcr_get_nfrags_to_send(struct sk_buff *skb, u32 start, u32 len)
+ }
+ 
+ static int chcr_init_tcb_fields(struct chcr_ktls_info *tx_info);
++static void clear_conn_resources(struct chcr_ktls_info *tx_info);
+ /*
+  * chcr_ktls_save_keys: calculate and save crypto keys.
+  * @tx_info - driver specific tls info.
+@@ -370,10 +371,14 @@ static void chcr_ktls_dev_del(struct net_device *netdev,
+ 				chcr_get_ktls_tx_context(tls_ctx);
+ 	struct chcr_ktls_info *tx_info = tx_ctx->chcr_info;
+ 	struct ch_ktls_port_stats_debug *port_stats;
++	struct chcr_ktls_uld_ctx *u_ctx;
+ 
+ 	if (!tx_info)
+ 		return;
+ 
++	u_ctx = tx_info->adap->uld[CXGB4_ULD_KTLS].handle;
++	if (u_ctx && u_ctx->detach)
++		return;
+ 	/* clear l2t entry */
+ 	if (tx_info->l2te)
+ 		cxgb4_l2t_release(tx_info->l2te);
+@@ -390,6 +395,8 @@ static void chcr_ktls_dev_del(struct net_device *netdev,
+ 	if (tx_info->tid != -1) {
+ 		cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
+ 				 tx_info->tid, tx_info->ip_family);
++
++		xa_erase(&u_ctx->tid_list, tx_info->tid);
+ 	}
+ 
+ 	port_stats = &tx_info->adap->ch_ktls_stats.ktls_port[tx_info->port_id];
+@@ -417,6 +424,7 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
+ 	struct tls_context *tls_ctx = tls_get_ctx(sk);
+ 	struct ch_ktls_port_stats_debug *port_stats;
+ 	struct chcr_ktls_ofld_ctx_tx *tx_ctx;
++	struct chcr_ktls_uld_ctx *u_ctx;
+ 	struct chcr_ktls_info *tx_info;
+ 	struct dst_entry *dst;
+ 	struct adapter *adap;
+@@ -431,6 +439,7 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
+ 	adap = pi->adapter;
+ 	port_stats = &adap->ch_ktls_stats.ktls_port[pi->port_id];
+ 	atomic64_inc(&port_stats->ktls_tx_connection_open);
++	u_ctx = adap->uld[CXGB4_ULD_KTLS].handle;
+ 
+ 	if (direction == TLS_OFFLOAD_CTX_DIR_RX) {
+ 		pr_err("not expecting for RX direction\n");
+@@ -440,6 +449,9 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
+ 	if (tx_ctx->chcr_info)
+ 		goto out;
+ 
++	if (u_ctx && u_ctx->detach)
++		goto out;
++
+ 	tx_info = kvzalloc(sizeof(*tx_info), GFP_KERNEL);
+ 	if (!tx_info)
+ 		goto out;
+@@ -575,6 +587,8 @@ free_tid:
+ 	cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
+ 			 tx_info->tid, tx_info->ip_family);
+ 
++	xa_erase(&u_ctx->tid_list, tx_info->tid);
++
+ put_module:
+ 	/* release module refcount */
+ 	module_put(THIS_MODULE);
+@@ -639,8 +653,12 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
+ {
+ 	const struct cpl_act_open_rpl *p = (void *)input;
+ 	struct chcr_ktls_info *tx_info = NULL;
++	struct chcr_ktls_ofld_ctx_tx *tx_ctx;
++	struct chcr_ktls_uld_ctx *u_ctx;
+ 	unsigned int atid, tid, status;
++	struct tls_context *tls_ctx;
+ 	struct tid_info *t;
++	int ret = 0;
+ 
+ 	tid = GET_TID(p);
+ 	status = AOPEN_STATUS_G(ntohl(p->atid_status));
+@@ -672,14 +690,29 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
+ 	if (!status) {
+ 		tx_info->tid = tid;
+ 		cxgb4_insert_tid(t, tx_info, tx_info->tid, tx_info->ip_family);
++		/* Adding tid */
++		tls_ctx = tls_get_ctx(tx_info->sk);
++		tx_ctx = chcr_get_ktls_tx_context(tls_ctx);
++		u_ctx = adap->uld[CXGB4_ULD_KTLS].handle;
++		if (u_ctx) {
++			ret = xa_insert_bh(&u_ctx->tid_list, tid, tx_ctx,
++					   GFP_NOWAIT);
++			if (ret < 0) {
++				pr_err("%s: Failed to allocate tid XA entry = %d\n",
++				       __func__, tx_info->tid);
++				tx_info->open_state = CH_KTLS_OPEN_FAILURE;
++				goto out;
++			}
++		}
+ 		tx_info->open_state = CH_KTLS_OPEN_SUCCESS;
+ 	} else {
+ 		tx_info->open_state = CH_KTLS_OPEN_FAILURE;
+ 	}
++out:
+ 	spin_unlock(&tx_info->lock);
+ 
+ 	complete(&tx_info->completion);
+-	return 0;
++	return ret;
+ }
+ 
+ /*
+@@ -2097,6 +2130,8 @@ static void *chcr_ktls_uld_add(const struct cxgb4_lld_info *lldi)
+ 		goto out;
+ 	}
+ 	u_ctx->lldi = *lldi;
++	u_ctx->detach = false;
++	xa_init_flags(&u_ctx->tid_list, XA_FLAGS_LOCK_BH);
+ out:
+ 	return u_ctx;
+ }
+@@ -2130,6 +2165,45 @@ static int chcr_ktls_uld_rx_handler(void *handle, const __be64 *rsp,
+ 	return 0;
+ }
+ 
++static void clear_conn_resources(struct chcr_ktls_info *tx_info)
++{
++	/* clear l2t entry */
++	if (tx_info->l2te)
++		cxgb4_l2t_release(tx_info->l2te);
++
++#if IS_ENABLED(CONFIG_IPV6)
++	/* clear clip entry */
++	if (tx_info->ip_family == AF_INET6)
++		cxgb4_clip_release(tx_info->netdev, (const u32 *)
++				   &tx_info->sk->sk_v6_rcv_saddr,
++				   1);
++#endif
++
++	/* clear tid */
++	if (tx_info->tid != -1)
++		cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
++				 tx_info->tid, tx_info->ip_family);
++}
++
++static void ch_ktls_reset_all_conn(struct chcr_ktls_uld_ctx *u_ctx)
++{
++	struct ch_ktls_port_stats_debug *port_stats;
++	struct chcr_ktls_ofld_ctx_tx *tx_ctx;
++	struct chcr_ktls_info *tx_info;
++	unsigned long index;
++
++	xa_for_each(&u_ctx->tid_list, index, tx_ctx) {
++		tx_info = tx_ctx->chcr_info;
++		clear_conn_resources(tx_info);
++		port_stats = &tx_info->adap->ch_ktls_stats.ktls_port[tx_info->port_id];
++		atomic64_inc(&port_stats->ktls_tx_connection_close);
++		kvfree(tx_info);
++		tx_ctx->chcr_info = NULL;
++		/* release module refcount */
++		module_put(THIS_MODULE);
++	}
++}
++
+ static int chcr_ktls_uld_state_change(void *handle, enum cxgb4_state new_state)
+ {
+ 	struct chcr_ktls_uld_ctx *u_ctx = handle;
+@@ -2146,7 +2220,10 @@ static int chcr_ktls_uld_state_change(void *handle, enum cxgb4_state new_state)
+ 	case CXGB4_STATE_DETACH:
+ 		pr_info("%s: Down\n", pci_name(u_ctx->lldi.pdev));
+ 		mutex_lock(&dev_mutex);
++		u_ctx->detach = true;
+ 		list_del(&u_ctx->entry);
++		ch_ktls_reset_all_conn(u_ctx);
++		xa_destroy(&u_ctx->tid_list);
+ 		mutex_unlock(&dev_mutex);
+ 		break;
+ 	default:
+@@ -2185,6 +2262,7 @@ static void __exit chcr_ktls_exit(void)
+ 		adap = pci_get_drvdata(u_ctx->lldi.pdev);
+ 		memset(&adap->ch_ktls_stats, 0, sizeof(adap->ch_ktls_stats));
+ 		list_del(&u_ctx->entry);
++		xa_destroy(&u_ctx->tid_list);
+ 		kfree(u_ctx);
+ 	}
+ 	mutex_unlock(&dev_mutex);
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h
+index 18b3b1f024156..10572dc55365a 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h
++++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h
+@@ -75,6 +75,8 @@ struct chcr_ktls_ofld_ctx_tx {
+ struct chcr_ktls_uld_ctx {
+ 	struct list_head entry;
+ 	struct cxgb4_lld_info lldi;
++	struct xarray tid_list;
++	bool detach;
+ };
+ 
+ static inline struct chcr_ktls_ofld_ctx_tx *
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
+index 188d871f6b8cd..c320cc8ca68d6 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
+@@ -1564,8 +1564,10 @@ found_ok_skb:
+ 			cerr = put_cmsg(msg, SOL_TLS, TLS_GET_RECORD_TYPE,
+ 					sizeof(thdr->type), &thdr->type);
+ 
+-			if (cerr && thdr->type != TLS_RECORD_TYPE_DATA)
+-				return -EIO;
++			if (cerr && thdr->type != TLS_RECORD_TYPE_DATA) {
++				copied = -EIO;
++				break;
++			}
+ 			/*  don't send tls header, skip copy */
+ 			goto skip_copy;
+ 		}
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 55c28fbc5f9ea..960def41cc555 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3277,7 +3277,9 @@ static int fec_enet_init(struct net_device *ndev)
+ 		return ret;
+ 	}
+ 
+-	fec_enet_alloc_queue(ndev);
++	ret = fec_enet_alloc_queue(ndev);
++	if (ret)
++		return ret;
+ 
+ 	bd_size = (fep->total_tx_ring_size + fep->total_rx_ring_size) * dsize;
+ 
+@@ -3285,7 +3287,8 @@ static int fec_enet_init(struct net_device *ndev)
+ 	cbd_base = dmam_alloc_coherent(&fep->pdev->dev, bd_size, &bd_dma,
+ 				       GFP_KERNEL);
+ 	if (!cbd_base) {
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto free_queue_mem;
+ 	}
+ 
+ 	/* Get the Ethernet address */
+@@ -3363,6 +3366,10 @@ static int fec_enet_init(struct net_device *ndev)
+ 		fec_enet_update_ethtool_stats(ndev);
+ 
+ 	return 0;
++
++free_queue_mem:
++	fec_enet_free_queue(ndev);
++	return ret;
+ }
+ 
+ #ifdef CONFIG_OF
+diff --git a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
+index a7b7a4aace791..b0c0504950d81 100644
+--- a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
++++ b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
+@@ -548,8 +548,8 @@ static int fmvj18x_get_hwinfo(struct pcmcia_device *link, u_char *node_id)
+ 
+     base = ioremap(link->resource[2]->start, resource_size(link->resource[2]));
+     if (!base) {
+-	    pcmcia_release_window(link, link->resource[2]);
+-	    return -ENOMEM;
++	pcmcia_release_window(link, link->resource[2]);
++	return -1;
+     }
+ 
+     pcmcia_map_mem_page(link, link->resource[2], 0);
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 02e7d74779f46..d6e35421d8f7b 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -180,7 +180,7 @@ static int gve_napi_poll(struct napi_struct *napi, int budget)
+ 	/* Double check we have no extra work.
+ 	 * Ensure unmask synchronizes with checking for work.
+ 	 */
+-	dma_rmb();
++	mb();
+ 	if (block->tx)
+ 		reschedule |= gve_tx_poll(block, -1);
+ 	if (block->rx)
+@@ -220,6 +220,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
+ 		int vecs_left = new_num_ntfy_blks % 2;
+ 
+ 		priv->num_ntfy_blks = new_num_ntfy_blks;
++		priv->mgmt_msix_idx = priv->num_ntfy_blks;
+ 		priv->tx_cfg.max_queues = min_t(int, priv->tx_cfg.max_queues,
+ 						vecs_per_type);
+ 		priv->rx_cfg.max_queues = min_t(int, priv->rx_cfg.max_queues,
+@@ -300,20 +301,22 @@ static void gve_free_notify_blocks(struct gve_priv *priv)
+ {
+ 	int i;
+ 
+-	/* Free the irqs */
+-	for (i = 0; i < priv->num_ntfy_blks; i++) {
+-		struct gve_notify_block *block = &priv->ntfy_blocks[i];
+-		int msix_idx = i;
++	if (priv->msix_vectors) {
++		/* Free the irqs */
++		for (i = 0; i < priv->num_ntfy_blks; i++) {
++			struct gve_notify_block *block = &priv->ntfy_blocks[i];
++			int msix_idx = i;
+ 
+-		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
+-				      NULL);
+-		free_irq(priv->msix_vectors[msix_idx].vector, block);
++			irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
++					      NULL);
++			free_irq(priv->msix_vectors[msix_idx].vector, block);
++		}
++		free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
+ 	}
+ 	dma_free_coherent(&priv->pdev->dev,
+ 			  priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks),
+ 			  priv->ntfy_blocks, priv->ntfy_block_bus);
+ 	priv->ntfy_blocks = NULL;
+-	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
+ 	pci_disable_msix(priv->pdev);
+ 	kvfree(priv->msix_vectors);
+ 	priv->msix_vectors = NULL;
+diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
+index d0244feb03011..b653197b34d10 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx.c
++++ b/drivers/net/ethernet/google/gve/gve_tx.c
+@@ -207,10 +207,12 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
+ 		goto abort_with_info;
+ 
+ 	tx->tx_fifo.qpl = gve_assign_tx_qpl(priv);
++	if (!tx->tx_fifo.qpl)
++		goto abort_with_desc;
+ 
+ 	/* map Tx FIFO */
+ 	if (gve_tx_fifo_init(priv, &tx->tx_fifo))
+-		goto abort_with_desc;
++		goto abort_with_qpl;
+ 
+ 	tx->q_resources =
+ 		dma_alloc_coherent(hdev,
+@@ -229,6 +231,8 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
+ 
+ abort_with_fifo:
+ 	gve_tx_fifo_release(priv, &tx->tx_fifo);
++abort_with_qpl:
++	gve_unassign_qpl(priv, tx->tx_fifo.qpl->id);
+ abort_with_desc:
+ 	dma_free_coherent(hdev, bytes, tx->desc, tx->bus);
+ 	tx->desc = NULL;
+@@ -478,7 +482,7 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
+ 	struct gve_tx_ring *tx;
+ 	int nsegs;
+ 
+-	WARN(skb_get_queue_mapping(skb) > priv->tx_cfg.num_queues,
++	WARN(skb_get_queue_mapping(skb) >= priv->tx_cfg.num_queues,
+ 	     "skb queue index out of range");
+ 	tx = &priv->tx[skb_get_queue_mapping(skb)];
+ 	if (unlikely(gve_maybe_stop_tx(tx, skb))) {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index ef31489199706..92ca3b21968fe 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -792,8 +792,6 @@ static bool hns3_tunnel_csum_bug(struct sk_buff *skb)
+ 	      l4.udp->dest == htons(4790))))
+ 		return false;
+ 
+-	skb_checksum_help(skb);
+-
+ 	return true;
+ }
+ 
+@@ -871,8 +869,7 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
+ 			/* the stack computes the IP header already,
+ 			 * driver calculate l4 checksum when not TSO.
+ 			 */
+-			skb_checksum_help(skb);
+-			return 0;
++			return skb_checksum_help(skb);
+ 		}
+ 
+ 		hns3_set_outer_l2l3l4(skb, ol4_proto, ol_type_vlan_len_msec);
+@@ -917,7 +914,7 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
+ 		break;
+ 	case IPPROTO_UDP:
+ 		if (hns3_tunnel_csum_bug(skb))
+-			break;
++			return skb_checksum_help(skb);
+ 
+ 		hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4CS_B, 1);
+ 		hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4T_S,
+@@ -942,8 +939,7 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
+ 		/* the stack computes the IP header already,
+ 		 * driver calculate l4 checksum when not TSO.
+ 		 */
+-		skb_checksum_help(skb);
+-		return 0;
++		return skb_checksum_help(skb);
+ 	}
+ 
+ 	return 0;
+@@ -4113,12 +4109,6 @@ static int hns3_client_init(struct hnae3_handle *handle)
+ 	if (ret)
+ 		goto out_init_phy;
+ 
+-	ret = register_netdev(netdev);
+-	if (ret) {
+-		dev_err(priv->dev, "probe register netdev fail!\n");
+-		goto out_reg_netdev_fail;
+-	}
+-
+ 	/* the device can work without cpu rmap, only aRFS needs it */
+ 	ret = hns3_set_rx_cpu_rmap(netdev);
+ 	if (ret)
+@@ -4146,17 +4136,23 @@ static int hns3_client_init(struct hnae3_handle *handle)
+ 
+ 	set_bit(HNS3_NIC_STATE_INITED, &priv->state);
+ 
++	ret = register_netdev(netdev);
++	if (ret) {
++		dev_err(priv->dev, "probe register netdev fail!\n");
++		goto out_reg_netdev_fail;
++	}
++
+ 	if (netif_msg_drv(handle))
+ 		hns3_info_show(priv);
+ 
+ 	return ret;
+ 
++out_reg_netdev_fail:
++	hns3_dbg_uninit(handle);
+ out_client_start:
+ 	hns3_free_rx_cpu_rmap(netdev);
+ 	hns3_nic_uninit_irq(priv);
+ out_init_irq_fail:
+-	unregister_netdev(netdev);
+-out_reg_netdev_fail:
+ 	hns3_uninit_phy(netdev);
+ out_init_phy:
+ 	hns3_uninit_all_ring(priv);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index e0254672831f4..2c2d53f5c56e1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -678,7 +678,6 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
+ 	unsigned int flag;
+ 	int ret = 0;
+ 
+-	memset(&resp_msg, 0, sizeof(resp_msg));
+ 	/* handle all the mailbox requests in the queue */
+ 	while (!hclge_cmd_crq_empty(&hdev->hw)) {
+ 		if (test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) {
+@@ -706,6 +705,9 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
+ 
+ 		trace_hclge_pf_mbx_get(hdev, req);
+ 
++		/* clear the resp_msg before processing every mailbox message */
++		memset(&resp_msg, 0, sizeof(resp_msg));
++
+ 		switch (req->msg.code) {
+ 		case HCLGE_MBX_MAP_RING_TO_VECTOR:
+ 			ret = hclge_map_unmap_ring_to_vf_vector(vport, true,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index 988db46bff0ee..214a38de3f415 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -467,12 +467,16 @@ static int ixgbe_set_vf_vlan(struct ixgbe_adapter *adapter, int add, int vid,
+ 	return err;
+ }
+ 
+-static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
++static int ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 max_frame, u32 vf)
+ {
+ 	struct ixgbe_hw *hw = &adapter->hw;
+-	int max_frame = msgbuf[1];
+ 	u32 max_frs;
+ 
++	if (max_frame < ETH_MIN_MTU || max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) {
++		e_err(drv, "VF max_frame %d out of range\n", max_frame);
++		return -EINVAL;
++	}
++
+ 	/*
+ 	 * For 82599EB we have to keep all PFs and VFs operating with
+ 	 * the same max_frame value in order to avoid sending an oversize
+@@ -533,12 +537,6 @@ static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+ 		}
+ 	}
+ 
+-	/* MTU < 68 is an error and causes problems on some kernels */
+-	if (max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) {
+-		e_err(drv, "VF max_frame %d out of range\n", max_frame);
+-		return -EINVAL;
+-	}
+-
+ 	/* pull current max frame size from hardware */
+ 	max_frs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
+ 	max_frs &= IXGBE_MHADD_MFS_MASK;
+@@ -1249,7 +1247,7 @@ static int ixgbe_rcv_msg_from_vf(struct ixgbe_adapter *adapter, u32 vf)
+ 		retval = ixgbe_set_vf_vlan_msg(adapter, msgbuf, vf);
+ 		break;
+ 	case IXGBE_VF_SET_LPE:
+-		retval = ixgbe_set_vf_lpe(adapter, msgbuf, vf);
++		retval = ixgbe_set_vf_lpe(adapter, msgbuf[1], vf);
+ 		break;
+ 	case IXGBE_VF_SET_MACVLAN:
+ 		retval = ixgbe_set_vf_macvlan_msg(adapter, msgbuf, vf);
+diff --git a/drivers/net/ethernet/lantiq_xrx200.c b/drivers/net/ethernet/lantiq_xrx200.c
+index 51ed8a54d3801..135ba5b6ae980 100644
+--- a/drivers/net/ethernet/lantiq_xrx200.c
++++ b/drivers/net/ethernet/lantiq_xrx200.c
+@@ -154,6 +154,7 @@ static int xrx200_close(struct net_device *net_dev)
+ 
+ static int xrx200_alloc_skb(struct xrx200_chan *ch)
+ {
++	dma_addr_t mapping;
+ 	int ret = 0;
+ 
+ 	ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(ch->priv->net_dev,
+@@ -163,16 +164,17 @@ static int xrx200_alloc_skb(struct xrx200_chan *ch)
+ 		goto skip;
+ 	}
+ 
+-	ch->dma.desc_base[ch->dma.desc].addr = dma_map_single(ch->priv->dev,
+-			ch->skb[ch->dma.desc]->data, XRX200_DMA_DATA_LEN,
+-			DMA_FROM_DEVICE);
+-	if (unlikely(dma_mapping_error(ch->priv->dev,
+-				       ch->dma.desc_base[ch->dma.desc].addr))) {
++	mapping = dma_map_single(ch->priv->dev, ch->skb[ch->dma.desc]->data,
++				 XRX200_DMA_DATA_LEN, DMA_FROM_DEVICE);
++	if (unlikely(dma_mapping_error(ch->priv->dev, mapping))) {
+ 		dev_kfree_skb_any(ch->skb[ch->dma.desc]);
+ 		ret = -ENOMEM;
+ 		goto skip;
+ 	}
+ 
++	ch->dma.desc_base[ch->dma.desc].addr = mapping;
++	/* Make sure the address is written before we give it to HW */
++	wmb();
+ skip:
+ 	ch->dma.desc_base[ch->dma.desc].ctl =
+ 		LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) |
+@@ -196,6 +198,8 @@ static int xrx200_hw_receive(struct xrx200_chan *ch)
+ 	ch->dma.desc %= LTQ_DESC_NUM;
+ 
+ 	if (ret) {
++		ch->skb[ch->dma.desc] = skb;
++		net_dev->stats.rx_dropped++;
+ 		netdev_err(net_dev, "failed to allocate new rx buffer\n");
+ 		return ret;
+ 	}
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+index 8347758430675..a1aefce55e655 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+@@ -909,6 +909,14 @@ enum mvpp22_ptp_packet_format {
+ 
+ #define MVPP2_DESC_DMA_MASK	DMA_BIT_MASK(40)
+ 
++/* Buffer header info bits */
++#define MVPP2_B_HDR_INFO_MC_ID_MASK	0xfff
++#define MVPP2_B_HDR_INFO_MC_ID(info)	((info) & MVPP2_B_HDR_INFO_MC_ID_MASK)
++#define MVPP2_B_HDR_INFO_LAST_OFFS	12
++#define MVPP2_B_HDR_INFO_LAST_MASK	BIT(12)
++#define MVPP2_B_HDR_INFO_IS_LAST(info) \
++	   (((info) & MVPP2_B_HDR_INFO_LAST_MASK) >> MVPP2_B_HDR_INFO_LAST_OFFS)
++
+ struct mvpp2_tai;
+ 
+ /* Definitions */
+@@ -918,6 +926,20 @@ struct mvpp2_rss_table {
+ 	u32 indir[MVPP22_RSS_TABLE_ENTRIES];
+ };
+ 
++struct mvpp2_buff_hdr {
++	__le32 next_phys_addr;
++	__le32 next_dma_addr;
++	__le16 byte_count;
++	__le16 info;
++	__le16 reserved1;	/* bm_qset (for future use, BM) */
++	u8 next_phys_addr_high;
++	u8 next_dma_addr_high;
++	__le16 reserved2;
++	__le16 reserved3;
++	__le16 reserved4;
++	__le16 reserved5;
++};
++
+ /* Shared Packet Processor resources */
+ struct mvpp2 {
+ 	/* Shared registers' base addresses */
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index f5333fc27e14f..6aa13c9f9fc9c 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -3481,6 +3481,35 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct mvpp2_rx_queue *rxq,
+ 	return ret;
+ }
+ 
++static void mvpp2_buff_hdr_pool_put(struct mvpp2_port *port, struct mvpp2_rx_desc *rx_desc,
++				    int pool, u32 rx_status)
++{
++	phys_addr_t phys_addr, phys_addr_next;
++	dma_addr_t dma_addr, dma_addr_next;
++	struct mvpp2_buff_hdr *buff_hdr;
++
++	phys_addr = mvpp2_rxdesc_dma_addr_get(port, rx_desc);
++	dma_addr = mvpp2_rxdesc_cookie_get(port, rx_desc);
++
++	do {
++		buff_hdr = (struct mvpp2_buff_hdr *)phys_to_virt(phys_addr);
++
++		phys_addr_next = le32_to_cpu(buff_hdr->next_phys_addr);
++		dma_addr_next = le32_to_cpu(buff_hdr->next_dma_addr);
++
++		if (port->priv->hw_version >= MVPP22) {
++			phys_addr_next |= ((u64)buff_hdr->next_phys_addr_high << 32);
++			dma_addr_next |= ((u64)buff_hdr->next_dma_addr_high << 32);
++		}
++
++		mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
++
++		phys_addr = phys_addr_next;
++		dma_addr = dma_addr_next;
++
++	} while (!MVPP2_B_HDR_INFO_IS_LAST(le16_to_cpu(buff_hdr->info)));
++}
++
+ /* Main rx processing */
+ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
+ 		    int rx_todo, struct mvpp2_rx_queue *rxq)
+@@ -3527,14 +3556,6 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
+ 			MVPP2_RXD_BM_POOL_ID_OFFS;
+ 		bm_pool = &port->priv->bm_pools[pool];
+ 
+-		/* In case of an error, release the requested buffer pointer
+-		 * to the Buffer Manager. This request process is controlled
+-		 * by the hardware, and the information about the buffer is
+-		 * comprised by the RX descriptor.
+-		 */
+-		if (rx_status & MVPP2_RXD_ERR_SUMMARY)
+-			goto err_drop_frame;
+-
+ 		if (port->priv->percpu_pools) {
+ 			pp = port->priv->page_pool[pool];
+ 			dma_dir = page_pool_get_dma_dir(pp);
+@@ -3546,6 +3567,18 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
+ 					rx_bytes + MVPP2_MH_SIZE,
+ 					dma_dir);
+ 
++		/* Buffer header not supported */
++		if (rx_status & MVPP2_RXD_BUF_HDR)
++			goto err_drop_frame;
++
++		/* In case of an error, release the requested buffer pointer
++		 * to the Buffer Manager. This request process is controlled
++		 * by the hardware, and the information about the buffer is
++		 * comprised by the RX descriptor.
++		 */
++		if (rx_status & MVPP2_RXD_ERR_SUMMARY)
++			goto err_drop_frame;
++
+ 		/* Prefetch header */
+ 		prefetch(data);
+ 
+@@ -3627,7 +3660,10 @@ err_drop_frame:
+ 		dev->stats.rx_errors++;
+ 		mvpp2_rx_error(port, rx_desc);
+ 		/* Return the buffer to the pool */
+-		mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
++		if (rx_status & MVPP2_RXD_BUF_HDR)
++			mvpp2_buff_hdr_pool_put(port, rx_desc, pool, rx_status);
++		else
++			mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
+ 	}
+ 
+ 	rcu_read_unlock();
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index d930fcda9c3b6..a2d3f04a9ff22 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -679,32 +679,53 @@ static int mtk_set_mac_address(struct net_device *dev, void *p)
+ void mtk_stats_update_mac(struct mtk_mac *mac)
+ {
+ 	struct mtk_hw_stats *hw_stats = mac->hw_stats;
+-	unsigned int base = MTK_GDM1_TX_GBCNT;
+-	u64 stats;
+-
+-	base += hw_stats->reg_offset;
++	struct mtk_eth *eth = mac->hw;
+ 
+ 	u64_stats_update_begin(&hw_stats->syncp);
+ 
+-	hw_stats->rx_bytes += mtk_r32(mac->hw, base);
+-	stats =  mtk_r32(mac->hw, base + 0x04);
+-	if (stats)
+-		hw_stats->rx_bytes += (stats << 32);
+-	hw_stats->rx_packets += mtk_r32(mac->hw, base + 0x08);
+-	hw_stats->rx_overflow += mtk_r32(mac->hw, base + 0x10);
+-	hw_stats->rx_fcs_errors += mtk_r32(mac->hw, base + 0x14);
+-	hw_stats->rx_short_errors += mtk_r32(mac->hw, base + 0x18);
+-	hw_stats->rx_long_errors += mtk_r32(mac->hw, base + 0x1c);
+-	hw_stats->rx_checksum_errors += mtk_r32(mac->hw, base + 0x20);
+-	hw_stats->rx_flow_control_packets +=
+-					mtk_r32(mac->hw, base + 0x24);
+-	hw_stats->tx_skip += mtk_r32(mac->hw, base + 0x28);
+-	hw_stats->tx_collisions += mtk_r32(mac->hw, base + 0x2c);
+-	hw_stats->tx_bytes += mtk_r32(mac->hw, base + 0x30);
+-	stats =  mtk_r32(mac->hw, base + 0x34);
+-	if (stats)
+-		hw_stats->tx_bytes += (stats << 32);
+-	hw_stats->tx_packets += mtk_r32(mac->hw, base + 0x38);
++	if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) {
++		hw_stats->tx_packets += mtk_r32(mac->hw, MT7628_SDM_TPCNT);
++		hw_stats->tx_bytes += mtk_r32(mac->hw, MT7628_SDM_TBCNT);
++		hw_stats->rx_packets += mtk_r32(mac->hw, MT7628_SDM_RPCNT);
++		hw_stats->rx_bytes += mtk_r32(mac->hw, MT7628_SDM_RBCNT);
++		hw_stats->rx_checksum_errors +=
++			mtk_r32(mac->hw, MT7628_SDM_CS_ERR);
++	} else {
++		unsigned int offs = hw_stats->reg_offset;
++		u64 stats;
++
++		hw_stats->rx_bytes += mtk_r32(mac->hw,
++					      MTK_GDM1_RX_GBCNT_L + offs);
++		stats = mtk_r32(mac->hw, MTK_GDM1_RX_GBCNT_H + offs);
++		if (stats)
++			hw_stats->rx_bytes += (stats << 32);
++		hw_stats->rx_packets +=
++			mtk_r32(mac->hw, MTK_GDM1_RX_GPCNT + offs);
++		hw_stats->rx_overflow +=
++			mtk_r32(mac->hw, MTK_GDM1_RX_OERCNT + offs);
++		hw_stats->rx_fcs_errors +=
++			mtk_r32(mac->hw, MTK_GDM1_RX_FERCNT + offs);
++		hw_stats->rx_short_errors +=
++			mtk_r32(mac->hw, MTK_GDM1_RX_SERCNT + offs);
++		hw_stats->rx_long_errors +=
++			mtk_r32(mac->hw, MTK_GDM1_RX_LENCNT + offs);
++		hw_stats->rx_checksum_errors +=
++			mtk_r32(mac->hw, MTK_GDM1_RX_CERCNT + offs);
++		hw_stats->rx_flow_control_packets +=
++			mtk_r32(mac->hw, MTK_GDM1_RX_FCCNT + offs);
++		hw_stats->tx_skip +=
++			mtk_r32(mac->hw, MTK_GDM1_TX_SKIPCNT + offs);
++		hw_stats->tx_collisions +=
++			mtk_r32(mac->hw, MTK_GDM1_TX_COLCNT + offs);
++		hw_stats->tx_bytes +=
++			mtk_r32(mac->hw, MTK_GDM1_TX_GBCNT_L + offs);
++		stats =  mtk_r32(mac->hw, MTK_GDM1_TX_GBCNT_H + offs);
++		if (stats)
++			hw_stats->tx_bytes += (stats << 32);
++		hw_stats->tx_packets +=
++			mtk_r32(mac->hw, MTK_GDM1_TX_GPCNT + offs);
++	}
++
+ 	u64_stats_update_end(&hw_stats->syncp);
+ }
+ 
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+index 73ce1f0f307a4..54a7cd93cc0fe 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+@@ -266,8 +266,21 @@
+ /* QDMA FQ Free Page Buffer Length Register */
+ #define MTK_QDMA_FQ_BLEN	0x1B2C
+ 
+-/* GMA1 Received Good Byte Count Register */
+-#define MTK_GDM1_TX_GBCNT	0x2400
++/* GMA1 counter / statics register */
++#define MTK_GDM1_RX_GBCNT_L	0x2400
++#define MTK_GDM1_RX_GBCNT_H	0x2404
++#define MTK_GDM1_RX_GPCNT	0x2408
++#define MTK_GDM1_RX_OERCNT	0x2410
++#define MTK_GDM1_RX_FERCNT	0x2414
++#define MTK_GDM1_RX_SERCNT	0x2418
++#define MTK_GDM1_RX_LENCNT	0x241c
++#define MTK_GDM1_RX_CERCNT	0x2420
++#define MTK_GDM1_RX_FCCNT	0x2424
++#define MTK_GDM1_TX_SKIPCNT	0x2428
++#define MTK_GDM1_TX_COLCNT	0x242c
++#define MTK_GDM1_TX_GBCNT_L	0x2430
++#define MTK_GDM1_TX_GBCNT_H	0x2434
++#define MTK_GDM1_TX_GPCNT	0x2438
+ #define MTK_STAT_OFFSET		0x40
+ 
+ /* QDMA descriptor txd4 */
+@@ -478,6 +491,13 @@
+ #define MT7628_SDM_MAC_ADRL	(MT7628_SDM_OFFSET + 0x0c)
+ #define MT7628_SDM_MAC_ADRH	(MT7628_SDM_OFFSET + 0x10)
+ 
++/* Counter / stat register */
++#define MT7628_SDM_TPCNT	(MT7628_SDM_OFFSET + 0x100)
++#define MT7628_SDM_TBCNT	(MT7628_SDM_OFFSET + 0x104)
++#define MT7628_SDM_RPCNT	(MT7628_SDM_OFFSET + 0x108)
++#define MT7628_SDM_RBCNT	(MT7628_SDM_OFFSET + 0x10c)
++#define MT7628_SDM_CS_ERR	(MT7628_SDM_OFFSET + 0x110)
++
+ struct mtk_rx_dma {
+ 	unsigned int rxd1;
+ 	unsigned int rxd2;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+index 1434df66fcf2e..3616b77caa0ad 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+@@ -2027,8 +2027,6 @@ static int mlx4_en_set_tunable(struct net_device *dev,
+ 	return ret;
+ }
+ 
+-#define MLX4_EEPROM_PAGE_LEN 256
+-
+ static int mlx4_en_get_module_info(struct net_device *dev,
+ 				   struct ethtool_modinfo *modinfo)
+ {
+@@ -2063,7 +2061,7 @@ static int mlx4_en_get_module_info(struct net_device *dev,
+ 		break;
+ 	case MLX4_MODULE_ID_SFP:
+ 		modinfo->type = ETH_MODULE_SFF_8472;
+-		modinfo->eeprom_len = MLX4_EEPROM_PAGE_LEN;
++		modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/port.c b/drivers/net/ethernet/mellanox/mlx4/port.c
+index ba6ac31a339dc..256a06b3c096b 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/port.c
++++ b/drivers/net/ethernet/mellanox/mlx4/port.c
+@@ -1973,6 +1973,7 @@ EXPORT_SYMBOL(mlx4_get_roce_gid_from_slave);
+ #define I2C_ADDR_LOW  0x50
+ #define I2C_ADDR_HIGH 0x51
+ #define I2C_PAGE_SIZE 256
++#define I2C_HIGH_PAGE_SIZE 128
+ 
+ /* Module Info Data */
+ struct mlx4_cable_info {
+@@ -2026,6 +2027,88 @@ static inline const char *cable_info_mad_err_str(u16 mad_status)
+ 	return "Unknown Error";
+ }
+ 
++static int mlx4_get_module_id(struct mlx4_dev *dev, u8 port, u8 *module_id)
++{
++	struct mlx4_cmd_mailbox *inbox, *outbox;
++	struct mlx4_mad_ifc *inmad, *outmad;
++	struct mlx4_cable_info *cable_info;
++	int ret;
++
++	inbox = mlx4_alloc_cmd_mailbox(dev);
++	if (IS_ERR(inbox))
++		return PTR_ERR(inbox);
++
++	outbox = mlx4_alloc_cmd_mailbox(dev);
++	if (IS_ERR(outbox)) {
++		mlx4_free_cmd_mailbox(dev, inbox);
++		return PTR_ERR(outbox);
++	}
++
++	inmad = (struct mlx4_mad_ifc *)(inbox->buf);
++	outmad = (struct mlx4_mad_ifc *)(outbox->buf);
++
++	inmad->method = 0x1; /* Get */
++	inmad->class_version = 0x1;
++	inmad->mgmt_class = 0x1;
++	inmad->base_version = 0x1;
++	inmad->attr_id = cpu_to_be16(0xFF60); /* Module Info */
++
++	cable_info = (struct mlx4_cable_info *)inmad->data;
++	cable_info->dev_mem_address = 0;
++	cable_info->page_num = 0;
++	cable_info->i2c_addr = I2C_ADDR_LOW;
++	cable_info->size = cpu_to_be16(1);
++
++	ret = mlx4_cmd_box(dev, inbox->dma, outbox->dma, port, 3,
++			   MLX4_CMD_MAD_IFC, MLX4_CMD_TIME_CLASS_C,
++			   MLX4_CMD_NATIVE);
++	if (ret)
++		goto out;
++
++	if (be16_to_cpu(outmad->status)) {
++		/* Mad returned with bad status */
++		ret = be16_to_cpu(outmad->status);
++		mlx4_warn(dev,
++			  "MLX4_CMD_MAD_IFC Get Module ID attr(%x) port(%d) i2c_addr(%x) offset(%d) size(%d): Response Mad Status(%x) - %s\n",
++			  0xFF60, port, I2C_ADDR_LOW, 0, 1, ret,
++			  cable_info_mad_err_str(ret));
++		ret = -ret;
++		goto out;
++	}
++	cable_info = (struct mlx4_cable_info *)outmad->data;
++	*module_id = cable_info->data[0];
++out:
++	mlx4_free_cmd_mailbox(dev, inbox);
++	mlx4_free_cmd_mailbox(dev, outbox);
++	return ret;
++}
++
++static void mlx4_sfp_eeprom_params_set(u8 *i2c_addr, u8 *page_num, u16 *offset)
++{
++	*i2c_addr = I2C_ADDR_LOW;
++	*page_num = 0;
++
++	if (*offset < I2C_PAGE_SIZE)
++		return;
++
++	*i2c_addr = I2C_ADDR_HIGH;
++	*offset -= I2C_PAGE_SIZE;
++}
++
++static void mlx4_qsfp_eeprom_params_set(u8 *i2c_addr, u8 *page_num, u16 *offset)
++{
++	/* Offsets 0-255 belong to page 0.
++	 * Offsets 256-639 belong to pages 01, 02, 03.
++	 * For example, offset 400 is page 02: 1 + (400 - 256) / 128 = 2
++	 */
++	if (*offset < I2C_PAGE_SIZE)
++		*page_num = 0;
++	else
++		*page_num = 1 + (*offset - I2C_PAGE_SIZE) / I2C_HIGH_PAGE_SIZE;
++	*i2c_addr = I2C_ADDR_LOW;
++	*offset -= *page_num * I2C_HIGH_PAGE_SIZE;
++}
++
+ /**
+  * mlx4_get_module_info - Read cable module eeprom data
+  * @dev: mlx4_dev.
+@@ -2045,12 +2128,30 @@ int mlx4_get_module_info(struct mlx4_dev *dev, u8 port,
+ 	struct mlx4_cmd_mailbox *inbox, *outbox;
+ 	struct mlx4_mad_ifc *inmad, *outmad;
+ 	struct mlx4_cable_info *cable_info;
+-	u16 i2c_addr;
++	u8 module_id, i2c_addr, page_num;
+ 	int ret;
+ 
+ 	if (size > MODULE_INFO_MAX_READ)
+ 		size = MODULE_INFO_MAX_READ;
+ 
++	ret = mlx4_get_module_id(dev, port, &module_id);
++	if (ret)
++		return ret;
++
++	switch (module_id) {
++	case MLX4_MODULE_ID_SFP:
++		mlx4_sfp_eeprom_params_set(&i2c_addr, &page_num, &offset);
++		break;
++	case MLX4_MODULE_ID_QSFP:
++	case MLX4_MODULE_ID_QSFP_PLUS:
++	case MLX4_MODULE_ID_QSFP28:
++		mlx4_qsfp_eeprom_params_set(&i2c_addr, &page_num, &offset);
++		break;
++	default:
++		mlx4_err(dev, "Module ID not recognized: %#x\n", module_id);
++		return -EINVAL;
++	}
++
+ 	inbox = mlx4_alloc_cmd_mailbox(dev);
+ 	if (IS_ERR(inbox))
+ 		return PTR_ERR(inbox);
+@@ -2076,11 +2177,9 @@ int mlx4_get_module_info(struct mlx4_dev *dev, u8 port,
+ 		 */
+ 		size -= offset + size - I2C_PAGE_SIZE;
+ 
+-	i2c_addr = I2C_ADDR_LOW;
+-
+ 	cable_info = (struct mlx4_cable_info *)inmad->data;
+ 	cable_info->dev_mem_address = cpu_to_be16(offset);
+-	cable_info->page_num = 0;
++	cable_info->page_num = page_num;
+ 	cable_info->i2c_addr = i2c_addr;
+ 	cable_info->size = cpu_to_be16(size);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
+index 95f2b26a3ee31..9c076aa20306a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
+@@ -223,6 +223,8 @@ static void mlx5e_rep_changelowerstate_event(struct net_device *netdev, void *pt
+ 	rpriv = priv->ppriv;
+ 	fwd_vport_num = rpriv->rep->vport;
+ 	lag_dev = netdev_master_upper_dev_get(netdev);
++	if (!lag_dev)
++		return;
+ 
+ 	netdev_dbg(netdev, "lag_dev(%s)'s slave vport(%d) is txable(%d)\n",
+ 		   lag_dev->name, fwd_vport_num, net_lag_port_dev_txable(netdev));
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+index 76177f7c5ec29..e6f782743fbe8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+@@ -643,7 +643,7 @@ bool mlx5e_rep_tc_update_skb(struct mlx5_cqe64 *cqe,
+ 	}
+ 
+ 	if (chain) {
+-		tc_skb_ext = skb_ext_add(skb, TC_SKB_EXT);
++		tc_skb_ext = tc_skb_ext_alloc(skb);
+ 		if (!tc_skb_ext) {
+ 			WARN_ON(1);
+ 			return false;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+index 7ad332d8625b9..93877becfae26 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+@@ -35,6 +35,7 @@
+ #include <linux/ipv6.h>
+ #include <linux/tcp.h>
+ #include <linux/mlx5/fs.h>
++#include <linux/mlx5/mpfs.h>
+ #include "en.h"
+ #include "lib/mpfs.h"
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 9a12df43becc4..f18b52be32e98 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -2920,7 +2920,7 @@ static int mlx5e_update_netdev_queues(struct mlx5e_priv *priv)
+ 	int err;
+ 
+ 	old_num_txqs = netdev->real_num_tx_queues;
+-	old_ntc = netdev->num_tc;
++	old_ntc = netdev->num_tc ? : 1;
+ 
+ 	nch = priv->channels.params.num_channels;
+ 	ntc = priv->channels.params.num_tc;
+@@ -5385,6 +5385,11 @@ err_free_netdev:
+ 	return NULL;
+ }
+ 
++static void mlx5e_reset_channels(struct net_device *netdev)
++{
++	netdev_reset_tc(netdev);
++}
++
+ int mlx5e_attach_netdev(struct mlx5e_priv *priv)
+ {
+ 	const bool take_rtnl = priv->netdev->reg_state == NETREG_REGISTERED;
+@@ -5438,6 +5443,7 @@ err_cleanup_tx:
+ 	profile->cleanup_tx(priv);
+ 
+ out:
++	mlx5e_reset_channels(priv->netdev);
+ 	set_bit(MLX5E_STATE_DESTROYING, &priv->state);
+ 	cancel_work_sync(&priv->update_stats_work);
+ 	return err;
+@@ -5455,6 +5461,7 @@ void mlx5e_detach_netdev(struct mlx5e_priv *priv)
+ 
+ 	profile->cleanup_rx(priv);
+ 	profile->cleanup_tx(priv);
++	mlx5e_reset_channels(priv->netdev);
+ 	cancel_work_sync(&priv->update_stats_work);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 3079a82f1f412..1bdeb948f56d7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -4025,8 +4025,12 @@ static int add_vlan_push_action(struct mlx5e_priv *priv,
+ 	if (err)
+ 		return err;
+ 
+-	*out_dev = dev_get_by_index_rcu(dev_net(vlan_dev),
+-					dev_get_iflink(vlan_dev));
++	rcu_read_lock();
++	*out_dev = dev_get_by_index_rcu(dev_net(vlan_dev), dev_get_iflink(vlan_dev));
++	rcu_read_unlock();
++	if (!*out_dev)
++		return -ENODEV;
++
+ 	if (is_vlan_dev(*out_dev))
+ 		err = add_vlan_push_action(priv, attr, out_dev, action);
+ 
+@@ -5490,7 +5494,7 @@ bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe,
+ 	}
+ 
+ 	if (chain) {
+-		tc_skb_ext = skb_ext_add(skb, TC_SKB_EXT);
++		tc_skb_ext = tc_skb_ext_alloc(skb);
+ 		if (WARN_ON(!tc_skb_ext))
+ 			return false;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index d4ee0a9c03dbf..d61539b5567c0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -35,6 +35,7 @@
+ #include <linux/mlx5/mlx5_ifc.h>
+ #include <linux/mlx5/vport.h>
+ #include <linux/mlx5/fs.h>
++#include <linux/mlx5/mpfs.h>
+ #include "esw/acl/lgcy.h"
+ #include "mlx5_core.h"
+ #include "lib/eq.h"
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+index ec679560a95d0..1cbb330b9f42b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+@@ -76,10 +76,11 @@ mlx5_eswitch_termtbl_create(struct mlx5_core_dev *dev,
+ 	/* As this is the terminating action then the termination table is the
+ 	 * same prio as the slow path
+ 	 */
+-	ft_attr.flags = MLX5_FLOW_TABLE_TERMINATION |
++	ft_attr.flags = MLX5_FLOW_TABLE_TERMINATION | MLX5_FLOW_TABLE_UNMANAGED |
+ 			MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
+-	ft_attr.prio = FDB_SLOW_PATH;
++	ft_attr.prio = FDB_TC_OFFLOAD;
+ 	ft_attr.max_fte = 1;
++	ft_attr.level = 1;
+ 	ft_attr.autogroup.max_num_groups = 1;
+ 	tt->termtbl = mlx5_create_auto_grouped_flow_table(root_ns, &ft_attr);
+ 	if (IS_ERR(tt->termtbl)) {
+@@ -171,19 +172,6 @@ mlx5_eswitch_termtbl_put(struct mlx5_eswitch *esw,
+ 	}
+ }
+ 
+-static bool mlx5_eswitch_termtbl_is_encap_reformat(struct mlx5_pkt_reformat *rt)
+-{
+-	switch (rt->reformat_type) {
+-	case MLX5_REFORMAT_TYPE_L2_TO_VXLAN:
+-	case MLX5_REFORMAT_TYPE_L2_TO_NVGRE:
+-	case MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL:
+-	case MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL:
+-		return true;
+-	default:
+-		return false;
+-	}
+-}
+-
+ static void
+ mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src,
+ 				  struct mlx5_flow_act *dst)
+@@ -201,14 +189,6 @@ mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src,
+ 			memset(&src->vlan[1], 0, sizeof(src->vlan[1]));
+ 		}
+ 	}
+-
+-	if (src->action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT &&
+-	    mlx5_eswitch_termtbl_is_encap_reformat(src->pkt_reformat)) {
+-		src->action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+-		dst->action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+-		dst->pkt_reformat = src->pkt_reformat;
+-		src->pkt_reformat = NULL;
+-	}
+ }
+ 
+ static bool mlx5_eswitch_offload_is_uplink_port(const struct mlx5_eswitch *esw,
+@@ -237,6 +217,7 @@ mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw,
+ 	int i;
+ 
+ 	if (!MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, termination_table) ||
++	    !MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level) ||
+ 	    attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH ||
+ 	    !mlx5_eswitch_offload_is_uplink_port(esw, spec))
+ 		return false;
+@@ -278,6 +259,14 @@ mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw,
+ 		if (dest[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT)
+ 			continue;
+ 
++		if (attr->dests[num_vport_dests].flags & MLX5_ESW_DEST_ENCAP) {
++			term_tbl_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
++			term_tbl_act.pkt_reformat = attr->dests[num_vport_dests].pkt_reformat;
++		} else {
++			term_tbl_act.action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
++			term_tbl_act.pkt_reformat = NULL;
++		}
++
+ 		/* get the terminating table for the action list */
+ 		tt = mlx5_eswitch_termtbl_get_create(esw, &term_tbl_act,
+ 						     &dest[i], attr);
+@@ -299,6 +288,9 @@ mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw,
+ 		goto revert_changes;
+ 
+ 	/* create the FTE */
++	flow_act->action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
++	flow_act->pkt_reformat = NULL;
++	flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
+ 	rule = mlx5_add_flow_rules(fdb, spec, flow_act, dest, num_dest);
+ 	if (IS_ERR(rule))
+ 		goto revert_changes;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+index 88e58ac902def..15c3a9058e728 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+@@ -307,6 +307,11 @@ int mlx5_lag_mp_init(struct mlx5_lag *ldev)
+ 	struct lag_mp *mp = &ldev->lag_mp;
+ 	int err;
+ 
++	/* always clear mfi, as it might become stale when a route delete event
++	 * has been missed
++	 */
++	mp->mfi = NULL;
++
+ 	if (mp->fib_nb.notifier_call)
+ 		return 0;
+ 
+@@ -335,4 +340,5 @@ void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev)
+ 	unregister_fib_notifier(&init_net, &mp->fib_nb);
+ 	destroy_workqueue(mp->wq);
+ 	mp->fib_nb.notifier_call = NULL;
++	mp->mfi = NULL;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
+index fd8449ff9e176..839a01da110f3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
+@@ -33,6 +33,7 @@
+ #include <linux/etherdevice.h>
+ #include <linux/mlx5/driver.h>
+ #include <linux/mlx5/mlx5_ifc.h>
++#include <linux/mlx5/mpfs.h>
+ #include <linux/mlx5/eswitch.h>
+ #include "mlx5_core.h"
+ #include "lib/mpfs.h"
+@@ -175,6 +176,7 @@ out:
+ 	mutex_unlock(&mpfs->lock);
+ 	return err;
+ }
++EXPORT_SYMBOL(mlx5_mpfs_add_mac);
+ 
+ int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac)
+ {
+@@ -206,3 +208,4 @@ unlock:
+ 	mutex_unlock(&mpfs->lock);
+ 	return err;
+ }
++EXPORT_SYMBOL(mlx5_mpfs_del_mac);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.h
+index 4a7b2c3203a7e..4a293542a7aa1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.h
+@@ -84,12 +84,9 @@ struct l2addr_node {
+ #ifdef CONFIG_MLX5_MPFS
+ int  mlx5_mpfs_init(struct mlx5_core_dev *dev);
+ void mlx5_mpfs_cleanup(struct mlx5_core_dev *dev);
+-int  mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac);
+-int  mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac);
+ #else /* #ifndef CONFIG_MLX5_MPFS */
+ static inline int  mlx5_mpfs_init(struct mlx5_core_dev *dev) { return 0; }
+ static inline void mlx5_mpfs_cleanup(struct mlx5_core_dev *dev) {}
+-static inline int  mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; }
+-static inline int  mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; }
+ #endif
++
+ #endif
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 4374ce4671ad2..3134f7e669f80 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -1052,7 +1052,6 @@ static void stmmac_check_pcs_mode(struct stmmac_priv *priv)
+  */
+ static int stmmac_init_phy(struct net_device *dev)
+ {
+-	struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL };
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+ 	struct device_node *node;
+ 	int ret;
+@@ -1078,8 +1077,12 @@ static int stmmac_init_phy(struct net_device *dev)
+ 		ret = phylink_connect_phy(priv->phylink, phydev);
+ 	}
+ 
+-	phylink_ethtool_get_wol(priv->phylink, &wol);
+-	device_set_wakeup_capable(priv->device, !!wol.supported);
++	if (!priv->plat->pmt) {
++		struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL };
++
++		phylink_ethtool_get_wol(priv->phylink, &wol);
++		device_set_wakeup_capable(priv->device, !!wol.supported);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/ethernet/ti/netcp_core.c b/drivers/net/ethernet/ti/netcp_core.c
+index d7a144b4a09f0..dc50e948195de 100644
+--- a/drivers/net/ethernet/ti/netcp_core.c
++++ b/drivers/net/ethernet/ti/netcp_core.c
+@@ -1350,8 +1350,8 @@ int netcp_txpipe_open(struct netcp_tx_pipe *tx_pipe)
+ 	tx_pipe->dma_queue = knav_queue_open(name, tx_pipe->dma_queue_id,
+ 					     KNAV_QUEUE_SHARED);
+ 	if (IS_ERR(tx_pipe->dma_queue)) {
+-		dev_err(dev, "Could not open DMA queue for channel \"%s\": %d\n",
+-			name, ret);
++		dev_err(dev, "Could not open DMA queue for channel \"%s\": %pe\n",
++			name, tx_pipe->dma_queue);
+ 		ret = PTR_ERR(tx_pipe->dma_queue);
+ 		goto err;
+ 	}
+diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
+index 6c2371084c55a..da862db09d7b7 100644
+--- a/drivers/net/ipa/ipa.h
++++ b/drivers/net/ipa/ipa.h
+@@ -56,6 +56,7 @@ enum ipa_flag {
+  * @mem_virt:		Virtual address of IPA-local memory space
+  * @mem_offset:		Offset from @mem_virt used for access to IPA memory
+  * @mem_size:		Total size (bytes) of memory at @mem_virt
++ * @mem_count:		Number of entries in the mem array
+  * @mem:		Array of IPA-local memory region descriptors
+  * @imem_iova:		I/O virtual address of IPA region in IMEM
+  * @imem_size;		Size of IMEM region
+@@ -102,6 +103,7 @@ struct ipa {
+ 	void *mem_virt;
+ 	u32 mem_offset;
+ 	u32 mem_size;
++	u32 mem_count;
+ 	const struct ipa_mem *mem;
+ 
+ 	unsigned long imem_iova;
+diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
+index 2d45c444a67fa..a78d66051a17d 100644
+--- a/drivers/net/ipa/ipa_mem.c
++++ b/drivers/net/ipa/ipa_mem.c
+@@ -181,7 +181,7 @@ int ipa_mem_config(struct ipa *ipa)
+ 	 * for the region, write "canary" values in the space prior to
+ 	 * the region's base address.
+ 	 */
+-	for (mem_id = 0; mem_id < IPA_MEM_COUNT; mem_id++) {
++	for (mem_id = 0; mem_id < ipa->mem_count; mem_id++) {
+ 		const struct ipa_mem *mem = &ipa->mem[mem_id];
+ 		u16 canary_count;
+ 		__le32 *canary;
+@@ -488,6 +488,7 @@ int ipa_mem_init(struct ipa *ipa, const struct ipa_mem_data *mem_data)
+ 	ipa->mem_size = resource_size(res);
+ 
+ 	/* The ipa->mem[] array is indexed by enum ipa_mem_id values */
++	ipa->mem_count = mem_data->local_count;
+ 	ipa->mem = mem_data->local;
+ 
+ 	ret = ipa_imem_init(ipa, mem_data->imem_addr, mem_data->imem_size);
+diff --git a/drivers/net/mdio/mdio-octeon.c b/drivers/net/mdio/mdio-octeon.c
+index d1e1009d51afe..6faf39314ac93 100644
+--- a/drivers/net/mdio/mdio-octeon.c
++++ b/drivers/net/mdio/mdio-octeon.c
+@@ -71,7 +71,6 @@ static int octeon_mdiobus_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ fail_register:
+-	mdiobus_free(bus->mii_bus);
+ 	smi_en.u64 = 0;
+ 	oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN);
+ 	return err;
+@@ -85,7 +84,6 @@ static int octeon_mdiobus_remove(struct platform_device *pdev)
+ 	bus = platform_get_drvdata(pdev);
+ 
+ 	mdiobus_unregister(bus->mii_bus);
+-	mdiobus_free(bus->mii_bus);
+ 	smi_en.u64 = 0;
+ 	oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN);
+ 	return 0;
+diff --git a/drivers/net/mdio/mdio-thunder.c b/drivers/net/mdio/mdio-thunder.c
+index 3d7eda99d34e2..dd7430c998a2a 100644
+--- a/drivers/net/mdio/mdio-thunder.c
++++ b/drivers/net/mdio/mdio-thunder.c
+@@ -126,7 +126,6 @@ static void thunder_mdiobus_pci_remove(struct pci_dev *pdev)
+ 			continue;
+ 
+ 		mdiobus_unregister(bus->mii_bus);
+-		mdiobus_free(bus->mii_bus);
+ 		oct_mdio_writeq(0, bus->register_base + SMI_EN);
+ 	}
+ 	pci_release_regions(pdev);
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index 4909405803d57..fbfcbd0dcfcbc 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -1689,7 +1689,7 @@ static int hso_serial_tiocmset(struct tty_struct *tty,
+ 	spin_unlock_irqrestore(&serial->serial_lock, flags);
+ 
+ 	return usb_control_msg(serial->parent->usb,
+-			       usb_rcvctrlpipe(serial->parent->usb, 0), 0x22,
++			       usb_sndctrlpipe(serial->parent->usb, 0), 0x22,
+ 			       0x21, val, if_num, NULL, 0,
+ 			       USB_CTRL_SET_TIMEOUT);
+ }
+@@ -2436,7 +2436,7 @@ static int hso_rfkill_set_block(void *data, bool blocked)
+ 	if (hso_dev->usb_gone)
+ 		rv = 0;
+ 	else
+-		rv = usb_control_msg(hso_dev->usb, usb_rcvctrlpipe(hso_dev->usb, 0),
++		rv = usb_control_msg(hso_dev->usb, usb_sndctrlpipe(hso_dev->usb, 0),
+ 				       enabled ? 0x82 : 0x81, 0x40, 0, 0, NULL, 0,
+ 				       USB_CTRL_SET_TIMEOUT);
+ 	mutex_unlock(&hso_dev->mutex);
+@@ -2618,32 +2618,31 @@ static struct hso_device *hso_create_bulk_serial_device(
+ 		num_urbs = 2;
+ 		serial->tiocmget = kzalloc(sizeof(struct hso_tiocmget),
+ 					   GFP_KERNEL);
++		if (!serial->tiocmget)
++			goto exit;
+ 		serial->tiocmget->serial_state_notification
+ 			= kzalloc(sizeof(struct hso_serial_state_notification),
+ 					   GFP_KERNEL);
+-		/* it isn't going to break our heart if serial->tiocmget
+-		 *  allocation fails don't bother checking this.
+-		 */
+-		if (serial->tiocmget && serial->tiocmget->serial_state_notification) {
+-			tiocmget = serial->tiocmget;
+-			tiocmget->endp = hso_get_ep(interface,
+-						    USB_ENDPOINT_XFER_INT,
+-						    USB_DIR_IN);
+-			if (!tiocmget->endp) {
+-				dev_err(&interface->dev, "Failed to find INT IN ep\n");
+-				goto exit;
+-			}
+-
+-			tiocmget->urb = usb_alloc_urb(0, GFP_KERNEL);
+-			if (tiocmget->urb) {
+-				mutex_init(&tiocmget->mutex);
+-				init_waitqueue_head(&tiocmget->waitq);
+-			} else
+-				hso_free_tiomget(serial);
++		if (!serial->tiocmget->serial_state_notification)
++			goto exit;
++		tiocmget = serial->tiocmget;
++		tiocmget->endp = hso_get_ep(interface,
++					    USB_ENDPOINT_XFER_INT,
++					    USB_DIR_IN);
++		if (!tiocmget->endp) {
++			dev_err(&interface->dev, "Failed to find INT IN ep\n");
++			goto exit;
+ 		}
+-	}
+-	else
++
++		tiocmget->urb = usb_alloc_urb(0, GFP_KERNEL);
++		if (!tiocmget->urb)
++			goto exit;
++
++		mutex_init(&tiocmget->mutex);
++		init_waitqueue_head(&tiocmget->waitq);
++	} else {
+ 		num_urbs = 1;
++	}
+ 
+ 	if (hso_serial_common_create(serial, num_urbs, BULK_URB_RX_SIZE,
+ 				     BULK_URB_TX_SIZE))
+diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
+index 8689835a52145..d44657b54d2b6 100644
+--- a/drivers/net/usb/smsc75xx.c
++++ b/drivers/net/usb/smsc75xx.c
+@@ -1483,7 +1483,7 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	ret = smsc75xx_wait_ready(dev, 0);
+ 	if (ret < 0) {
+ 		netdev_warn(dev->net, "device not ready in smsc75xx_bind\n");
+-		return ret;
++		goto err;
+ 	}
+ 
+ 	smsc75xx_init_mac_address(dev);
+@@ -1492,7 +1492,7 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	ret = smsc75xx_reset(dev);
+ 	if (ret < 0) {
+ 		netdev_warn(dev->net, "smsc75xx_reset error %d\n", ret);
+-		return ret;
++		goto err;
+ 	}
+ 
+ 	dev->net->netdev_ops = &smsc75xx_netdev_ops;
+@@ -1502,6 +1502,10 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len;
+ 	dev->net->max_mtu = MAX_SINGLE_PACKET_SIZE;
+ 	return 0;
++
++err:
++	kfree(pdata);
++	return ret;
+ }
+ 
+ static void smsc75xx_unbind(struct usbnet *dev, struct usb_interface *intf)
+diff --git a/drivers/net/wireless/ath/ath10k/htt.h b/drivers/net/wireless/ath/ath10k/htt.h
+index cad59494f1752..2ee2c655d5991 100644
+--- a/drivers/net/wireless/ath/ath10k/htt.h
++++ b/drivers/net/wireless/ath/ath10k/htt.h
+@@ -845,6 +845,7 @@ enum htt_security_types {
+ 
+ #define ATH10K_HTT_TXRX_PEER_SECURITY_MAX 2
+ #define ATH10K_TXRX_NUM_EXT_TIDS 19
++#define ATH10K_TXRX_NON_QOS_TID 16
+ 
+ enum htt_security_flags {
+ #define HTT_SECURITY_TYPE_MASK 0x7F
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index 5c1af20218833..28ec3c5b4d1fc 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -1746,16 +1746,97 @@ static void ath10k_htt_rx_h_csum_offload(struct sk_buff *msdu)
+ 	msdu->ip_summed = ath10k_htt_rx_get_csum_state(msdu);
+ }
+ 
++static u64 ath10k_htt_rx_h_get_pn(struct ath10k *ar, struct sk_buff *skb,
++				  u16 offset,
++				  enum htt_rx_mpdu_encrypt_type enctype)
++{
++	struct ieee80211_hdr *hdr;
++	u64 pn = 0;
++	u8 *ehdr;
++
++	hdr = (struct ieee80211_hdr *)(skb->data + offset);
++	ehdr = skb->data + offset + ieee80211_hdrlen(hdr->frame_control);
++
++	if (enctype == HTT_RX_MPDU_ENCRYPT_AES_CCM_WPA2) {
++		pn = ehdr[0];
++		pn |= (u64)ehdr[1] << 8;
++		pn |= (u64)ehdr[4] << 16;
++		pn |= (u64)ehdr[5] << 24;
++		pn |= (u64)ehdr[6] << 32;
++		pn |= (u64)ehdr[7] << 40;
++	}
++	return pn;
++}
++
++static bool ath10k_htt_rx_h_frag_multicast_check(struct ath10k *ar,
++						 struct sk_buff *skb,
++						 u16 offset)
++{
++	struct ieee80211_hdr *hdr;
++
++	hdr = (struct ieee80211_hdr *)(skb->data + offset);
++	return !is_multicast_ether_addr(hdr->addr1);
++}
++
++static bool ath10k_htt_rx_h_frag_pn_check(struct ath10k *ar,
++					  struct sk_buff *skb,
++					  u16 peer_id,
++					  u16 offset,
++					  enum htt_rx_mpdu_encrypt_type enctype)
++{
++	struct ath10k_peer *peer;
++	union htt_rx_pn_t *last_pn, new_pn = {0};
++	struct ieee80211_hdr *hdr;
++	bool more_frags;
++	u8 tid, frag_number;
++	u32 seq;
++
++	peer = ath10k_peer_find_by_id(ar, peer_id);
++	if (!peer) {
++		ath10k_dbg(ar, ATH10K_DBG_HTT, "invalid peer for frag pn check\n");
++		return false;
++	}
++
++	hdr = (struct ieee80211_hdr *)(skb->data + offset);
++	if (ieee80211_is_data_qos(hdr->frame_control))
++		tid = ieee80211_get_tid(hdr);
++	else
++		tid = ATH10K_TXRX_NON_QOS_TID;
++
++	last_pn = &peer->frag_tids_last_pn[tid];
++	new_pn.pn48 = ath10k_htt_rx_h_get_pn(ar, skb, offset, enctype);
++	more_frags = ieee80211_has_morefrags(hdr->frame_control);
++	frag_number = le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_FRAG;
++	seq = (__le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_SEQ) >> 4;
++
++	if (frag_number == 0) {
++		last_pn->pn48 = new_pn.pn48;
++		peer->frag_tids_seq[tid] = seq;
++	} else {
++		if (seq != peer->frag_tids_seq[tid])
++			return false;
++
++		if (new_pn.pn48 != last_pn->pn48 + 1)
++			return false;
++
++		last_pn->pn48 = new_pn.pn48;
++	}
++
++	return true;
++}
++
+ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
+ 				 struct sk_buff_head *amsdu,
+ 				 struct ieee80211_rx_status *status,
+ 				 bool fill_crypt_header,
+ 				 u8 *rx_hdr,
+-				 enum ath10k_pkt_rx_err *err)
++				 enum ath10k_pkt_rx_err *err,
++				 u16 peer_id,
++				 bool frag)
+ {
+ 	struct sk_buff *first;
+ 	struct sk_buff *last;
+-	struct sk_buff *msdu;
++	struct sk_buff *msdu, *temp;
+ 	struct htt_rx_desc *rxd;
+ 	struct ieee80211_hdr *hdr;
+ 	enum htt_rx_mpdu_encrypt_type enctype;
+@@ -1768,6 +1849,7 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
+ 	bool is_decrypted;
+ 	bool is_mgmt;
+ 	u32 attention;
++	bool frag_pn_check = true, multicast_check = true;
+ 
+ 	if (skb_queue_empty(amsdu))
+ 		return;
+@@ -1866,7 +1948,37 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
+ 	}
+ 
+ 	skb_queue_walk(amsdu, msdu) {
++		if (frag && !fill_crypt_header && is_decrypted &&
++		    enctype == HTT_RX_MPDU_ENCRYPT_AES_CCM_WPA2)
++			frag_pn_check = ath10k_htt_rx_h_frag_pn_check(ar,
++								      msdu,
++								      peer_id,
++								      0,
++								      enctype);
++
++		if (frag)
++			multicast_check = ath10k_htt_rx_h_frag_multicast_check(ar,
++									       msdu,
++									       0);
++
++		if (!frag_pn_check || !multicast_check) {
++			/* Discard the fragment with invalid PN or multicast DA
++			 */
++			temp = msdu->prev;
++			__skb_unlink(msdu, amsdu);
++			dev_kfree_skb_any(msdu);
++			msdu = temp;
++			frag_pn_check = true;
++			multicast_check = true;
++			continue;
++		}
++
+ 		ath10k_htt_rx_h_csum_offload(msdu);
++
++		if (frag && !fill_crypt_header &&
++		    enctype == HTT_RX_MPDU_ENCRYPT_TKIP_WPA)
++			status->flag &= ~RX_FLAG_MMIC_STRIPPED;
++
+ 		ath10k_htt_rx_h_undecap(ar, msdu, status, first_hdr, enctype,
+ 					is_decrypted);
+ 
+@@ -1884,6 +1996,11 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
+ 
+ 		hdr = (void *)msdu->data;
+ 		hdr->frame_control &= ~__cpu_to_le16(IEEE80211_FCTL_PROTECTED);
++
++		if (frag && !fill_crypt_header &&
++		    enctype == HTT_RX_MPDU_ENCRYPT_TKIP_WPA)
++			status->flag &= ~RX_FLAG_IV_STRIPPED &
++					~RX_FLAG_MMIC_STRIPPED;
+ 	}
+ }
+ 
+@@ -1991,14 +2108,62 @@ static void ath10k_htt_rx_h_unchain(struct ath10k *ar,
+ 	ath10k_unchain_msdu(amsdu, unchain_cnt);
+ }
+ 
++static bool ath10k_htt_rx_validate_amsdu(struct ath10k *ar,
++					 struct sk_buff_head *amsdu)
++{
++	u8 *subframe_hdr;
++	struct sk_buff *first;
++	bool is_first, is_last;
++	struct htt_rx_desc *rxd;
++	struct ieee80211_hdr *hdr;
++	size_t hdr_len, crypto_len;
++	enum htt_rx_mpdu_encrypt_type enctype;
++	int bytes_aligned = ar->hw_params.decap_align_bytes;
++
++	first = skb_peek(amsdu);
++
++	rxd = (void *)first->data - sizeof(*rxd);
++	hdr = (void *)rxd->rx_hdr_status;
++
++	is_first = !!(rxd->msdu_end.common.info0 &
++		      __cpu_to_le32(RX_MSDU_END_INFO0_FIRST_MSDU));
++	is_last = !!(rxd->msdu_end.common.info0 &
++		     __cpu_to_le32(RX_MSDU_END_INFO0_LAST_MSDU));
++
++	/* Return in case of non-aggregated msdu */
++	if (is_first && is_last)
++		return true;
++
++	/* First msdu flag is not set for the first msdu of the list */
++	if (!is_first)
++		return false;
++
++	enctype = MS(__le32_to_cpu(rxd->mpdu_start.info0),
++		     RX_MPDU_START_INFO0_ENCRYPT_TYPE);
++
++	hdr_len = ieee80211_hdrlen(hdr->frame_control);
++	crypto_len = ath10k_htt_rx_crypto_param_len(ar, enctype);
++
++	subframe_hdr = (u8 *)hdr + round_up(hdr_len, bytes_aligned) +
++		       crypto_len;
++
++	/* Validate if the amsdu has a proper first subframe.
++	 * There are chances a single msdu can be received as amsdu when
++	 * the unauthenticated amsdu flag of a QoS header
++	 * gets flipped in non-SPP AMSDU's, in such cases the first
++	 * subframe has llc/snap header in place of a valid da.
++	 * return false if the da matches rfc1042 pattern
++	 */
++	if (ether_addr_equal(subframe_hdr, rfc1042_header))
++		return false;
++
++	return true;
++}
++
+ static bool ath10k_htt_rx_amsdu_allowed(struct ath10k *ar,
+ 					struct sk_buff_head *amsdu,
+ 					struct ieee80211_rx_status *rx_status)
+ {
+-	/* FIXME: It might be a good idea to do some fuzzy-testing to drop
+-	 * invalid/dangerous frames.
+-	 */
+-
+ 	if (!rx_status->freq) {
+ 		ath10k_dbg(ar, ATH10K_DBG_HTT, "no channel configured; ignoring frame(s)!\n");
+ 		return false;
+@@ -2009,6 +2174,11 @@ static bool ath10k_htt_rx_amsdu_allowed(struct ath10k *ar,
+ 		return false;
+ 	}
+ 
++	if (!ath10k_htt_rx_validate_amsdu(ar, amsdu)) {
++		ath10k_dbg(ar, ATH10K_DBG_HTT, "invalid amsdu received\n");
++		return false;
++	}
++
+ 	return true;
+ }
+ 
+@@ -2071,7 +2241,8 @@ static int ath10k_htt_rx_handle_amsdu(struct ath10k_htt *htt)
+ 		ath10k_htt_rx_h_unchain(ar, &amsdu, &drop_cnt, &unchain_cnt);
+ 
+ 	ath10k_htt_rx_h_filter(ar, &amsdu, rx_status, &drop_cnt_filter);
+-	ath10k_htt_rx_h_mpdu(ar, &amsdu, rx_status, true, first_hdr, &err);
++	ath10k_htt_rx_h_mpdu(ar, &amsdu, rx_status, true, first_hdr, &err, 0,
++			     false);
+ 	msdus_to_queue = skb_queue_len(&amsdu);
+ 	ath10k_htt_rx_h_enqueue(ar, &amsdu, rx_status);
+ 
+@@ -2204,6 +2375,11 @@ static bool ath10k_htt_rx_proc_rx_ind_hl(struct ath10k_htt *htt,
+ 	fw_desc = &rx->fw_desc;
+ 	rx_desc_len = fw_desc->len;
+ 
++	if (fw_desc->u.bits.discard) {
++		ath10k_dbg(ar, ATH10K_DBG_HTT, "htt discard mpdu\n");
++		goto err;
++	}
++
+ 	/* I have not yet seen any case where num_mpdu_ranges > 1.
+ 	 * qcacld does not seem handle that case either, so we introduce the
+ 	 * same limitiation here as well.
+@@ -2509,6 +2685,13 @@ static bool ath10k_htt_rx_proc_rx_frag_ind_hl(struct ath10k_htt *htt,
+ 	rx_desc = (struct htt_hl_rx_desc *)(skb->data + tot_hdr_len);
+ 	rx_desc_info = __le32_to_cpu(rx_desc->info);
+ 
++	hdr = (struct ieee80211_hdr *)((u8 *)rx_desc + rx_hl->fw_desc.len);
++
++	if (is_multicast_ether_addr(hdr->addr1)) {
++		/* Discard the fragment with multicast DA */
++		goto err;
++	}
++
+ 	if (!MS(rx_desc_info, HTT_RX_DESC_HL_INFO_ENCRYPTED)) {
+ 		spin_unlock_bh(&ar->data_lock);
+ 		return ath10k_htt_rx_proc_rx_ind_hl(htt, &resp->rx_ind_hl, skb,
+@@ -2516,8 +2699,6 @@ static bool ath10k_htt_rx_proc_rx_frag_ind_hl(struct ath10k_htt *htt,
+ 						    HTT_RX_NON_TKIP_MIC);
+ 	}
+ 
+-	hdr = (struct ieee80211_hdr *)((u8 *)rx_desc + rx_hl->fw_desc.len);
+-
+ 	if (ieee80211_has_retry(hdr->frame_control))
+ 		goto err;
+ 
+@@ -3027,7 +3208,7 @@ static int ath10k_htt_rx_in_ord_ind(struct ath10k *ar, struct sk_buff *skb)
+ 			ath10k_htt_rx_h_ppdu(ar, &amsdu, status, vdev_id);
+ 			ath10k_htt_rx_h_filter(ar, &amsdu, status, NULL);
+ 			ath10k_htt_rx_h_mpdu(ar, &amsdu, status, false, NULL,
+-					     NULL);
++					     NULL, peer_id, frag);
+ 			ath10k_htt_rx_h_enqueue(ar, &amsdu, status);
+ 			break;
+ 		case -EAGAIN:
+diff --git a/drivers/net/wireless/ath/ath10k/rx_desc.h b/drivers/net/wireless/ath/ath10k/rx_desc.h
+index dec1582005b94..13a1cae6b51b0 100644
+--- a/drivers/net/wireless/ath/ath10k/rx_desc.h
++++ b/drivers/net/wireless/ath/ath10k/rx_desc.h
+@@ -1282,7 +1282,19 @@ struct fw_rx_desc_base {
+ #define FW_RX_DESC_UDP              (1 << 6)
+ 
+ struct fw_rx_desc_hl {
+-	u8 info0;
++	union {
++		struct {
++		u8 discard:1,
++		   forward:1,
++		   any_err:1,
++		   dup_err:1,
++		   reserved:1,
++		   inspect:1,
++		   extension:2;
++		} bits;
++		u8 info0;
++	} u;
++
+ 	u8 version;
+ 	u8 len;
+ 	u8 flags;
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 3638501a09593..2bff8eb507d4d 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -832,6 +832,24 @@ static void ath11k_dp_rx_frags_cleanup(struct dp_rx_tid *rx_tid, bool rel_link_d
+ 	__skb_queue_purge(&rx_tid->rx_frags);
+ }
+ 
++void ath11k_peer_frags_flush(struct ath11k *ar, struct ath11k_peer *peer)
++{
++	struct dp_rx_tid *rx_tid;
++	int i;
++
++	lockdep_assert_held(&ar->ab->base_lock);
++
++	for (i = 0; i <= IEEE80211_NUM_TIDS; i++) {
++		rx_tid = &peer->rx_tid[i];
++
++		spin_unlock_bh(&ar->ab->base_lock);
++		del_timer_sync(&rx_tid->frag_timer);
++		spin_lock_bh(&ar->ab->base_lock);
++
++		ath11k_dp_rx_frags_cleanup(rx_tid, true);
++	}
++}
++
+ void ath11k_peer_rx_tid_cleanup(struct ath11k *ar, struct ath11k_peer *peer)
+ {
+ 	struct dp_rx_tid *rx_tid;
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.h b/drivers/net/wireless/ath/ath11k/dp_rx.h
+index fbea45f79c9b0..6986752fc4b68 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.h
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.h
+@@ -49,6 +49,7 @@ int ath11k_dp_peer_rx_pn_replay_config(struct ath11k_vif *arvif,
+ 				       const u8 *peer_addr,
+ 				       enum set_key_cmd key_cmd,
+ 				       struct ieee80211_key_conf *key);
++void ath11k_peer_frags_flush(struct ath11k *ar, struct ath11k_peer *peer);
+ void ath11k_peer_rx_tid_cleanup(struct ath11k *ar, struct ath11k_peer *peer);
+ void ath11k_peer_rx_tid_delete(struct ath11k *ar,
+ 			       struct ath11k_peer *peer, u8 tid);
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index e9e6b0c4de220..0738c784616f1 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -2525,6 +2525,12 @@ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 	 */
+ 	spin_lock_bh(&ab->base_lock);
+ 	peer = ath11k_peer_find(ab, arvif->vdev_id, peer_addr);
++
++	/* flush the fragments cache during key (re)install to
++	 * ensure all frags in the new frag list belong to the same key.
++	 */
++	if (peer && cmd == SET_KEY)
++		ath11k_peer_frags_flush(ar, peer);
+ 	spin_unlock_bh(&ab->base_lock);
+ 
+ 	if (!peer) {
+diff --git a/drivers/net/wireless/ath/ath6kl/debug.c b/drivers/net/wireless/ath/ath6kl/debug.c
+index 7506cea46f589..433a047f3747b 100644
+--- a/drivers/net/wireless/ath/ath6kl/debug.c
++++ b/drivers/net/wireless/ath/ath6kl/debug.c
+@@ -1027,14 +1027,17 @@ static ssize_t ath6kl_lrssi_roam_write(struct file *file,
+ {
+ 	struct ath6kl *ar = file->private_data;
+ 	unsigned long lrssi_roam_threshold;
++	int ret;
+ 
+ 	if (kstrtoul_from_user(user_buf, count, 0, &lrssi_roam_threshold))
+ 		return -EINVAL;
+ 
+ 	ar->lrssi_roam_threshold = lrssi_roam_threshold;
+ 
+-	ath6kl_wmi_set_roam_lrssi_cmd(ar->wmi, ar->lrssi_roam_threshold);
++	ret = ath6kl_wmi_set_roam_lrssi_cmd(ar->wmi, ar->lrssi_roam_threshold);
+ 
++	if (ret)
++		return ret;
+ 	return count;
+ }
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+index f9ebb98b0e3c7..b6d0bc73923fc 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+@@ -1217,13 +1217,9 @@ static struct sdio_driver brcmf_sdmmc_driver = {
+ 	},
+ };
+ 
+-void brcmf_sdio_register(void)
++int brcmf_sdio_register(void)
+ {
+-	int ret;
+-
+-	ret = sdio_register_driver(&brcmf_sdmmc_driver);
+-	if (ret)
+-		brcmf_err("sdio_register_driver failed: %d\n", ret);
++	return sdio_register_driver(&brcmf_sdmmc_driver);
+ }
+ 
+ void brcmf_sdio_exit(void)
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h
+index 08f9d47f2e5ca..3f5da3bb6aa59 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h
+@@ -275,11 +275,26 @@ void brcmf_bus_add_txhdrlen(struct device *dev, uint len);
+ 
+ #ifdef CONFIG_BRCMFMAC_SDIO
+ void brcmf_sdio_exit(void);
+-void brcmf_sdio_register(void);
++int brcmf_sdio_register(void);
++#else
++static inline void brcmf_sdio_exit(void) { }
++static inline int brcmf_sdio_register(void) { return 0; }
+ #endif
++
+ #ifdef CONFIG_BRCMFMAC_USB
+ void brcmf_usb_exit(void);
+-void brcmf_usb_register(void);
++int brcmf_usb_register(void);
++#else
++static inline void brcmf_usb_exit(void) { }
++static inline int brcmf_usb_register(void) { return 0; }
++#endif
++
++#ifdef CONFIG_BRCMFMAC_PCIE
++void brcmf_pcie_exit(void);
++int brcmf_pcie_register(void);
++#else
++static inline void brcmf_pcie_exit(void) { }
++static inline int brcmf_pcie_register(void) { return 0; }
+ #endif
+ 
+ #endif /* BRCMFMAC_BUS_H */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index 3dd28f5fef19e..61039538a15bc 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -1518,40 +1518,34 @@ void brcmf_bus_change_state(struct brcmf_bus *bus, enum brcmf_bus_state state)
+ 	}
+ }
+ 
+-static void brcmf_driver_register(struct work_struct *work)
+-{
+-#ifdef CONFIG_BRCMFMAC_SDIO
+-	brcmf_sdio_register();
+-#endif
+-#ifdef CONFIG_BRCMFMAC_USB
+-	brcmf_usb_register();
+-#endif
+-#ifdef CONFIG_BRCMFMAC_PCIE
+-	brcmf_pcie_register();
+-#endif
+-}
+-static DECLARE_WORK(brcmf_driver_work, brcmf_driver_register);
+-
+ int __init brcmf_core_init(void)
+ {
+-	if (!schedule_work(&brcmf_driver_work))
+-		return -EBUSY;
++	int err;
+ 
++	err = brcmf_sdio_register();
++	if (err)
++		return err;
++
++	err = brcmf_usb_register();
++	if (err)
++		goto error_usb_register;
++
++	err = brcmf_pcie_register();
++	if (err)
++		goto error_pcie_register;
+ 	return 0;
++
++error_pcie_register:
++	brcmf_usb_exit();
++error_usb_register:
++	brcmf_sdio_exit();
++	return err;
+ }
+ 
+ void __exit brcmf_core_exit(void)
+ {
+-	cancel_work_sync(&brcmf_driver_work);
+-
+-#ifdef CONFIG_BRCMFMAC_SDIO
+ 	brcmf_sdio_exit();
+-#endif
+-#ifdef CONFIG_BRCMFMAC_USB
+ 	brcmf_usb_exit();
+-#endif
+-#ifdef CONFIG_BRCMFMAC_PCIE
+ 	brcmf_pcie_exit();
+-#endif
+ }
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index d8db0dbcfe091..603aff421e38e 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -2138,15 +2138,10 @@ static struct pci_driver brcmf_pciedrvr = {
+ };
+ 
+ 
+-void brcmf_pcie_register(void)
++int brcmf_pcie_register(void)
+ {
+-	int err;
+-
+ 	brcmf_dbg(PCIE, "Enter\n");
+-	err = pci_register_driver(&brcmf_pciedrvr);
+-	if (err)
+-		brcmf_err(NULL, "PCIE driver registration failed, err=%d\n",
+-			  err);
++	return pci_register_driver(&brcmf_pciedrvr);
+ }
+ 
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h
+index d026401d20010..8e6c227e8315c 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h
+@@ -11,9 +11,4 @@ struct brcmf_pciedev {
+ 	struct brcmf_pciedev_info *devinfo;
+ };
+ 
+-
+-void brcmf_pcie_exit(void);
+-void brcmf_pcie_register(void);
+-
+-
+ #endif /* BRCMFMAC_PCIE_H */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
+index 586f4dfc638b9..9fb68c2dc7e39 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
+@@ -1584,12 +1584,8 @@ void brcmf_usb_exit(void)
+ 	usb_deregister(&brcmf_usbdrvr);
+ }
+ 
+-void brcmf_usb_register(void)
++int brcmf_usb_register(void)
+ {
+-	int ret;
+-
+ 	brcmf_dbg(USB, "Enter\n");
+-	ret = usb_register(&brcmf_usbdrvr);
+-	if (ret)
+-		brcmf_err("usb_register failed %d\n", ret);
++	return usb_register(&brcmf_usbdrvr);
+ }
+diff --git a/drivers/net/wireless/marvell/libertas/mesh.c b/drivers/net/wireless/marvell/libertas/mesh.c
+index f5b78257d5518..c68814841583f 100644
+--- a/drivers/net/wireless/marvell/libertas/mesh.c
++++ b/drivers/net/wireless/marvell/libertas/mesh.c
+@@ -801,24 +801,6 @@ static const struct attribute_group mesh_ie_group = {
+ 	.attrs = mesh_ie_attrs,
+ };
+ 
+-static void lbs_persist_config_init(struct net_device *dev)
+-{
+-	int ret;
+-	ret = sysfs_create_group(&(dev->dev.kobj), &boot_opts_group);
+-	if (ret)
+-		pr_err("failed to create boot_opts_group.\n");
+-
+-	ret = sysfs_create_group(&(dev->dev.kobj), &mesh_ie_group);
+-	if (ret)
+-		pr_err("failed to create mesh_ie_group.\n");
+-}
+-
+-static void lbs_persist_config_remove(struct net_device *dev)
+-{
+-	sysfs_remove_group(&(dev->dev.kobj), &boot_opts_group);
+-	sysfs_remove_group(&(dev->dev.kobj), &mesh_ie_group);
+-}
+-
+ 
+ /***************************************************************************
+  * Initializing and starting, stopping mesh
+@@ -1014,6 +996,10 @@ static int lbs_add_mesh(struct lbs_private *priv)
+ 	SET_NETDEV_DEV(priv->mesh_dev, priv->dev->dev.parent);
+ 
+ 	mesh_dev->flags |= IFF_BROADCAST | IFF_MULTICAST;
++	mesh_dev->sysfs_groups[0] = &lbs_mesh_attr_group;
++	mesh_dev->sysfs_groups[1] = &boot_opts_group;
++	mesh_dev->sysfs_groups[2] = &mesh_ie_group;
++
+ 	/* Register virtual mesh interface */
+ 	ret = register_netdev(mesh_dev);
+ 	if (ret) {
+@@ -1021,19 +1007,10 @@ static int lbs_add_mesh(struct lbs_private *priv)
+ 		goto err_free_netdev;
+ 	}
+ 
+-	ret = sysfs_create_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group);
+-	if (ret)
+-		goto err_unregister;
+-
+-	lbs_persist_config_init(mesh_dev);
+-
+ 	/* Everything successful */
+ 	ret = 0;
+ 	goto done;
+ 
+-err_unregister:
+-	unregister_netdev(mesh_dev);
+-
+ err_free_netdev:
+ 	free_netdev(mesh_dev);
+ 
+@@ -1054,8 +1031,6 @@ void lbs_remove_mesh(struct lbs_private *priv)
+ 
+ 	netif_stop_queue(mesh_dev);
+ 	netif_carrier_off(mesh_dev);
+-	sysfs_remove_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group);
+-	lbs_persist_config_remove(mesh_dev);
+ 	unregister_netdev(mesh_dev);
+ 	priv->mesh_dev = NULL;
+ 	kfree(mesh_dev->ieee80211_ptr);
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index d958b5da9b88a..4df4f37e6b895 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -538,7 +538,7 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
+ 		 * nvmet_req_init is completed.
+ 		 */
+ 		if (queue->rcv_state == NVMET_TCP_RECV_PDU &&
+-		    len && len < cmd->req.port->inline_data_size &&
++		    len && len <= cmd->req.port->inline_data_size &&
+ 		    nvme_is_write(cmd->req.cmd))
+ 			return;
+ 	}
+diff --git a/drivers/platform/x86/hp-wireless.c b/drivers/platform/x86/hp-wireless.c
+index 12c31fd5d5ae2..0753ef18e7211 100644
+--- a/drivers/platform/x86/hp-wireless.c
++++ b/drivers/platform/x86/hp-wireless.c
+@@ -17,12 +17,14 @@ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Alex Hung");
+ MODULE_ALIAS("acpi*:HPQ6001:*");
+ MODULE_ALIAS("acpi*:WSTADEF:*");
++MODULE_ALIAS("acpi*:AMDI0051:*");
+ 
+ static struct input_dev *hpwl_input_dev;
+ 
+ static const struct acpi_device_id hpwl_ids[] = {
+ 	{"HPQ6001", 0},
+ 	{"WSTADEF", 0},
++	{"AMDI0051", 0},
+ 	{"", 0},
+ };
+ 
+diff --git a/drivers/platform/x86/hp_accel.c b/drivers/platform/x86/hp_accel.c
+index 799cbe2ffcf36..8c0867bda8280 100644
+--- a/drivers/platform/x86/hp_accel.c
++++ b/drivers/platform/x86/hp_accel.c
+@@ -88,6 +88,9 @@ MODULE_DEVICE_TABLE(acpi, lis3lv02d_device_ids);
+ static int lis3lv02d_acpi_init(struct lis3lv02d *lis3)
+ {
+ 	struct acpi_device *dev = lis3->bus_priv;
++	if (!lis3->init_required)
++		return 0;
++
+ 	if (acpi_evaluate_object(dev->handle, METHOD_NAME__INI,
+ 				 NULL, NULL) != AE_OK)
+ 		return -EINVAL;
+@@ -356,6 +359,7 @@ static int lis3lv02d_add(struct acpi_device *device)
+ 	}
+ 
+ 	/* call the core layer do its init */
++	lis3_dev.init_required = true;
+ 	ret = lis3lv02d_init_device(&lis3_dev);
+ 	if (ret)
+ 		return ret;
+@@ -403,11 +407,27 @@ static int lis3lv02d_suspend(struct device *dev)
+ 
+ static int lis3lv02d_resume(struct device *dev)
+ {
++	lis3_dev.init_required = false;
++	lis3lv02d_poweron(&lis3_dev);
++	return 0;
++}
++
++static int lis3lv02d_restore(struct device *dev)
++{
++	lis3_dev.init_required = true;
+ 	lis3lv02d_poweron(&lis3_dev);
+ 	return 0;
+ }
+ 
+-static SIMPLE_DEV_PM_OPS(hp_accel_pm, lis3lv02d_suspend, lis3lv02d_resume);
++static const struct dev_pm_ops hp_accel_pm = {
++	.suspend = lis3lv02d_suspend,
++	.resume = lis3lv02d_resume,
++	.freeze = lis3lv02d_suspend,
++	.thaw = lis3lv02d_resume,
++	.poweroff = lis3lv02d_suspend,
++	.restore = lis3lv02d_restore,
++};
++
+ #define HP_ACCEL_PM (&hp_accel_pm)
+ #else
+ #define HP_ACCEL_PM NULL
+diff --git a/drivers/platform/x86/intel_punit_ipc.c b/drivers/platform/x86/intel_punit_ipc.c
+index 05cced59e251a..f58b8543f6ac5 100644
+--- a/drivers/platform/x86/intel_punit_ipc.c
++++ b/drivers/platform/x86/intel_punit_ipc.c
+@@ -312,6 +312,7 @@ static const struct acpi_device_id punit_ipc_acpi_ids[] = {
+ 	{ "INT34D4", 0 },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(acpi, punit_ipc_acpi_ids);
+ 
+ static struct platform_driver intel_punit_ipc_driver = {
+ 	.probe = intel_punit_ipc_probe,
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index c4de932302d6b..3743d895399e7 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -115,6 +115,32 @@ static const struct ts_dmi_data chuwi_hi10_plus_data = {
+ 	.properties     = chuwi_hi10_plus_props,
+ };
+ 
++static const struct property_entry chuwi_hi10_pro_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 8),
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 8),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1912),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1272),
++	PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hi10-pro.fw"),
++	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
++	PROPERTY_ENTRY_BOOL("silead,home-button"),
++	{ }
++};
++
++static const struct ts_dmi_data chuwi_hi10_pro_data = {
++	.embedded_fw = {
++		.name	= "silead/gsl1680-chuwi-hi10-pro.fw",
++		.prefix = { 0xf0, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00 },
++		.length	= 42504,
++		.sha256	= { 0xdb, 0x92, 0x68, 0xa8, 0xdb, 0x81, 0x31, 0x00,
++			    0x1f, 0x58, 0x89, 0xdb, 0x19, 0x1b, 0x15, 0x8c,
++			    0x05, 0x14, 0xf4, 0x95, 0xba, 0x15, 0x45, 0x98,
++			    0x42, 0xa3, 0xbb, 0x65, 0xe3, 0x30, 0xa5, 0x93 },
++	},
++	.acpi_name      = "MSSL1680:00",
++	.properties     = chuwi_hi10_pro_props,
++};
++
+ static const struct property_entry chuwi_vi8_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-min-x", 4),
+ 	PROPERTY_ENTRY_U32("touchscreen-min-y", 6),
+@@ -872,6 +898,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
+ 		},
+ 	},
++	{
++		/* Chuwi Hi10 Prus (CWI597) */
++		.driver_data = (void *)&chuwi_hi10_pro_data,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Hi10 pro tablet"),
++			DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++		},
++	},
+ 	{
+ 		/* Chuwi Vi8 (CWI506) */
+ 		.driver_data = (void *)&chuwi_vi8_data,
+@@ -1043,6 +1078,14 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BIOS_VERSION, "jumperx.T87.KFBNEEA"),
+ 		},
+ 	},
++	{
++		/* Mediacom WinPad 7.0 W700 (same hw as Wintron surftab 7") */
++		.driver_data = (void *)&trekstor_surftab_wintron70_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "MEDIACOM"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "WinPad 7 W10 - WPW700"),
++		},
++	},
+ 	{
+ 		/* Mediacom Flexbook Edge 11 (same hw as TS Primebook C11) */
+ 		.driver_data = (void *)&trekstor_primebook_c11_data,
+diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c
+index b9febc581b1f4..8d1b2771c1aa0 100644
+--- a/drivers/s390/cio/vfio_ccw_cp.c
++++ b/drivers/s390/cio/vfio_ccw_cp.c
+@@ -638,6 +638,10 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
+ 	static DEFINE_RATELIMIT_STATE(ratelimit_state, 5 * HZ, 1);
+ 	int ret;
+ 
++	/* this is an error in the caller */
++	if (cp->initialized)
++		return -EBUSY;
++
+ 	/*
+ 	 * We only support prefetching the channel program. We assume all channel
+ 	 * programs executed by supported guests likewise support prefetching.
+diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c
+index ccb061ab0a0ad..7231de2767a96 100644
+--- a/drivers/scsi/BusLogic.c
++++ b/drivers/scsi/BusLogic.c
+@@ -3078,11 +3078,11 @@ static int blogic_qcmd_lck(struct scsi_cmnd *command,
+ 		ccb->opcode = BLOGIC_INITIATOR_CCB_SG;
+ 		ccb->datalen = count * sizeof(struct blogic_sg_seg);
+ 		if (blogic_multimaster_type(adapter))
+-			ccb->data = (void *)((unsigned int) ccb->dma_handle +
++			ccb->data = (unsigned int) ccb->dma_handle +
+ 					((unsigned long) &ccb->sglist -
+-					(unsigned long) ccb));
++					(unsigned long) ccb);
+ 		else
+-			ccb->data = ccb->sglist;
++			ccb->data = virt_to_32bit_virt(ccb->sglist);
+ 
+ 		scsi_for_each_sg(command, sg, count, i) {
+ 			ccb->sglist[i].segbytes = sg_dma_len(sg);
+diff --git a/drivers/scsi/BusLogic.h b/drivers/scsi/BusLogic.h
+index 6182cc8a0344a..e081ad47d1cf4 100644
+--- a/drivers/scsi/BusLogic.h
++++ b/drivers/scsi/BusLogic.h
+@@ -814,7 +814,7 @@ struct blogic_ccb {
+ 	unsigned char cdblen;				/* Byte 2 */
+ 	unsigned char sense_datalen;			/* Byte 3 */
+ 	u32 datalen;					/* Bytes 4-7 */
+-	void *data;					/* Bytes 8-11 */
++	u32 data;					/* Bytes 8-11 */
+ 	unsigned char:8;				/* Byte 12 */
+ 	unsigned char:8;				/* Byte 13 */
+ 	enum blogic_adapter_status adapter_status;	/* Byte 14 */
+diff --git a/drivers/scsi/libsas/sas_port.c b/drivers/scsi/libsas/sas_port.c
+index 19cf418928faa..e3d03d744713d 100644
+--- a/drivers/scsi/libsas/sas_port.c
++++ b/drivers/scsi/libsas/sas_port.c
+@@ -25,7 +25,7 @@ static bool phy_is_wideport_member(struct asd_sas_port *port, struct asd_sas_phy
+ 
+ static void sas_resume_port(struct asd_sas_phy *phy)
+ {
+-	struct domain_device *dev;
++	struct domain_device *dev, *n;
+ 	struct asd_sas_port *port = phy->port;
+ 	struct sas_ha_struct *sas_ha = phy->ha;
+ 	struct sas_internal *si = to_sas_internal(sas_ha->core.shost->transportt);
+@@ -44,7 +44,7 @@ static void sas_resume_port(struct asd_sas_phy *phy)
+ 	 * 1/ presume every device came back
+ 	 * 2/ force the next revalidation to check all expander phys
+ 	 */
+-	list_for_each_entry(dev, &port->dev_list, dev_list_node) {
++	list_for_each_entry_safe(dev, n, &port->dev_list, dev_list_node) {
+ 		int i, rc;
+ 
+ 		rc = sas_notify_lldd_dev_found(dev);
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 355d1c5f2194c..2114d2dd3501a 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -3703,11 +3703,13 @@ static int mpi_hw_event(struct pm8001_hba_info *pm8001_ha, void* piomb)
+ 	case HW_EVENT_PHY_START_STATUS:
+ 		pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_START_STATUS status = %x\n",
+ 			   status);
+-		if (status == 0) {
++		if (status == 0)
+ 			phy->phy_state = 1;
+-			if (pm8001_ha->flags == PM8001F_RUN_TIME &&
+-					phy->enable_completion != NULL)
+-				complete(phy->enable_completion);
++
++		if (pm8001_ha->flags == PM8001F_RUN_TIME &&
++				phy->enable_completion != NULL) {
++			complete(phy->enable_completion);
++			phy->enable_completion = NULL;
+ 		}
+ 		break;
+ 	case HW_EVENT_SAS_PHY_UP:
+diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
+index 7657d68e12d5f..0c0c886c7371d 100644
+--- a/drivers/scsi/pm8001/pm8001_init.c
++++ b/drivers/scsi/pm8001/pm8001_init.c
+@@ -1139,8 +1139,8 @@ static int pm8001_pci_probe(struct pci_dev *pdev,
+ 		goto err_out_shost;
+ 	}
+ 	list_add_tail(&pm8001_ha->list, &hba_list);
+-	scsi_scan_host(pm8001_ha->shost);
+ 	pm8001_ha->flags = PM8001F_RUN_TIME;
++	scsi_scan_host(pm8001_ha->shost);
+ 	return 0;
+ 
+ err_out_shost:
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index 474468df2a78d..39de9a9360d3e 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -264,12 +264,17 @@ void pm8001_scan_start(struct Scsi_Host *shost)
+ 	int i;
+ 	struct pm8001_hba_info *pm8001_ha;
+ 	struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost);
++	DECLARE_COMPLETION_ONSTACK(completion);
+ 	pm8001_ha = sha->lldd_ha;
+ 	/* SAS_RE_INITIALIZATION not available in SPCv/ve */
+ 	if (pm8001_ha->chip_id == chip_8001)
+ 		PM8001_CHIP_DISP->sas_re_init_req(pm8001_ha);
+-	for (i = 0; i < pm8001_ha->chip->n_phy; ++i)
++	for (i = 0; i < pm8001_ha->chip->n_phy; ++i) {
++		pm8001_ha->phy[i].enable_completion = &completion;
+ 		PM8001_CHIP_DISP->phy_start_req(pm8001_ha, i);
++		wait_for_completion(&completion);
++		msleep(300);
++	}
+ }
+ 
+ int pm8001_scan_finished(struct Scsi_Host *shost, unsigned long time)
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 27b354860a16e..a203a4fc2674a 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -3432,13 +3432,13 @@ static int mpi_phy_start_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	pm8001_dbg(pm8001_ha, INIT,
+ 		   "phy start resp status:0x%x, phyid:0x%x\n",
+ 		   status, phy_id);
+-	if (status == 0) {
++	if (status == 0)
+ 		phy->phy_state = PHY_LINK_DOWN;
+-		if (pm8001_ha->flags == PM8001F_RUN_TIME &&
+-				phy->enable_completion != NULL) {
+-			complete(phy->enable_completion);
+-			phy->enable_completion = NULL;
+-		}
++
++	if (pm8001_ha->flags == PM8001F_RUN_TIME &&
++			phy->enable_completion != NULL) {
++		complete(phy->enable_completion);
++		phy->enable_completion = NULL;
+ 	}
+ 	return 0;
+ 
+diff --git a/drivers/scsi/ufs/ufs-mediatek.c b/drivers/scsi/ufs/ufs-mediatek.c
+index 09d2ac20508b5..aace13399a7f3 100644
+--- a/drivers/scsi/ufs/ufs-mediatek.c
++++ b/drivers/scsi/ufs/ufs-mediatek.c
+@@ -824,6 +824,7 @@ static void ufs_mtk_vreg_set_lpm(struct ufs_hba *hba, bool lpm)
+ static int ufs_mtk_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
+ {
+ 	int err;
++	struct arm_smccc_res res;
+ 
+ 	if (ufshcd_is_link_hibern8(hba)) {
+ 		err = ufs_mtk_link_set_lpm(hba);
+@@ -844,6 +845,9 @@ static int ufs_mtk_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
+ 		ufs_mtk_vreg_set_lpm(hba, true);
+ 	}
+ 
++	if (ufshcd_is_link_off(hba))
++		ufs_mtk_device_reset_ctrl(0, res);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 0287366874882..fb45e6af66381 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -1375,11 +1375,13 @@ poll_mode:
+ 	ret = spi_register_controller(ctlr);
+ 	if (ret != 0) {
+ 		dev_err(&pdev->dev, "Problem registering DSPI ctlr\n");
+-		goto out_free_irq;
++		goto out_release_dma;
+ 	}
+ 
+ 	return ret;
+ 
++out_release_dma:
++	dspi_release_dma(dspi);
+ out_free_irq:
+ 	if (dspi->irq)
+ 		free_irq(dspi->irq, dspi);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 419de3d404814..a6f1e94af13c5 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -814,16 +814,29 @@ static void spi_set_cs(struct spi_device *spi, bool enable, bool force)
+ 
+ 	if (spi->cs_gpiod || gpio_is_valid(spi->cs_gpio)) {
+ 		if (!(spi->mode & SPI_NO_CS)) {
+-			if (spi->cs_gpiod)
+-				/* polarity handled by gpiolib */
+-				gpiod_set_value_cansleep(spi->cs_gpiod,
+-							 enable1);
+-			else
++			if (spi->cs_gpiod) {
++				/*
++				 * Historically ACPI has no means of the GPIO polarity and
++				 * thus the SPISerialBus() resource defines it on the per-chip
++				 * basis. In order to avoid a chain of negations, the GPIO
++				 * polarity is considered being Active High. Even for the cases
++				 * when _DSD() is involved (in the updated versions of ACPI)
++				 * the GPIO CS polarity must be defined Active High to avoid
++				 * ambiguity. That's why we use enable, that takes SPI_CS_HIGH
++				 * into account.
++				 */
++				if (has_acpi_companion(&spi->dev))
++					gpiod_set_value_cansleep(spi->cs_gpiod, !enable);
++				else
++					/* Polarity handled by GPIO library */
++					gpiod_set_value_cansleep(spi->cs_gpiod, enable1);
++			} else {
+ 				/*
+ 				 * invert the enable line, as active low is
+ 				 * default for SPI.
+ 				 */
+ 				gpio_set_value_cansleep(spi->cs_gpio, !enable);
++			}
+ 		}
+ 		/* Some SPI masters need both GPIO CS & slave_select */
+ 		if ((spi->controller->flags & SPI_MASTER_GPIO_SS) &&
+diff --git a/drivers/staging/emxx_udc/emxx_udc.c b/drivers/staging/emxx_udc/emxx_udc.c
+index a30b4f5b199b5..3897f8e8f5e0d 100644
+--- a/drivers/staging/emxx_udc/emxx_udc.c
++++ b/drivers/staging/emxx_udc/emxx_udc.c
+@@ -2062,7 +2062,7 @@ static int _nbu2ss_nuke(struct nbu2ss_udc *udc,
+ 			struct nbu2ss_ep *ep,
+ 			int status)
+ {
+-	struct nbu2ss_req *req;
++	struct nbu2ss_req *req, *n;
+ 
+ 	/* Endpoint Disable */
+ 	_nbu2ss_epn_exit(udc, ep);
+@@ -2074,7 +2074,7 @@ static int _nbu2ss_nuke(struct nbu2ss_udc *udc,
+ 		return 0;
+ 
+ 	/* called with irqs blocked */
+-	list_for_each_entry(req, &ep->queue, queue) {
++	list_for_each_entry_safe(req, n, &ep->queue, queue) {
+ 		_nbu2ss_ep_done(ep, req, status);
+ 	}
+ 
+diff --git a/drivers/staging/iio/cdc/ad7746.c b/drivers/staging/iio/cdc/ad7746.c
+index dfd71e99e872e..eab534dc4bcc0 100644
+--- a/drivers/staging/iio/cdc/ad7746.c
++++ b/drivers/staging/iio/cdc/ad7746.c
+@@ -700,7 +700,6 @@ static int ad7746_probe(struct i2c_client *client,
+ 		indio_dev->num_channels = ARRAY_SIZE(ad7746_channels);
+ 	else
+ 		indio_dev->num_channels =  ARRAY_SIZE(ad7746_channels) - 2;
+-	indio_dev->num_channels = ARRAY_SIZE(ad7746_channels);
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 
+ 	if (pdata) {
+diff --git a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
+index 6e479deff76b0..a337600d5bc4c 100644
+--- a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
++++ b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
+@@ -231,6 +231,8 @@ struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *adev,
+ 	if (ACPI_FAILURE(status))
+ 		trip_cnt = 0;
+ 	else {
++		int i;
++
+ 		int34x_thermal_zone->aux_trips =
+ 			kcalloc(trip_cnt,
+ 				sizeof(*int34x_thermal_zone->aux_trips),
+@@ -241,6 +243,8 @@ struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *adev,
+ 		}
+ 		trip_mask = BIT(trip_cnt) - 1;
+ 		int34x_thermal_zone->aux_trip_nr = trip_cnt;
++		for (i = 0; i < trip_cnt; ++i)
++			int34x_thermal_zone->aux_trips[i] = THERMAL_TEMP_INVALID;
+ 	}
+ 
+ 	trip_cnt = int340x_thermal_read_trips(int34x_thermal_zone);
+diff --git a/drivers/thermal/intel/x86_pkg_temp_thermal.c b/drivers/thermal/intel/x86_pkg_temp_thermal.c
+index b81c33202f41a..4f5d97329ee32 100644
+--- a/drivers/thermal/intel/x86_pkg_temp_thermal.c
++++ b/drivers/thermal/intel/x86_pkg_temp_thermal.c
+@@ -164,7 +164,7 @@ static int sys_get_trip_temp(struct thermal_zone_device *tzd,
+ 	if (thres_reg_value)
+ 		*temp = zonedev->tj_max - thres_reg_value * 1000;
+ 	else
+-		*temp = 0;
++		*temp = THERMAL_TEMP_INVALID;
+ 	pr_debug("sys_get_trip_temp %d\n", *temp);
+ 
+ 	return 0;
+diff --git a/drivers/thunderbolt/dma_port.c b/drivers/thunderbolt/dma_port.c
+index 847dd07a7b172..de219953c8b37 100644
+--- a/drivers/thunderbolt/dma_port.c
++++ b/drivers/thunderbolt/dma_port.c
+@@ -364,15 +364,15 @@ int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
+ 			void *buf, size_t size)
+ {
+ 	unsigned int retries = DMA_PORT_RETRIES;
+-	unsigned int offset;
+-
+-	offset = address & 3;
+-	address = address & ~3;
+ 
+ 	do {
+-		u32 nbytes = min_t(u32, size, MAIL_DATA_DWORDS * 4);
++		unsigned int offset;
++		size_t nbytes;
+ 		int ret;
+ 
++		offset = address & 3;
++		nbytes = min_t(size_t, size + offset, MAIL_DATA_DWORDS * 4);
++
+ 		ret = dma_port_flash_read_block(dma, address, dma->buf,
+ 						ALIGN(nbytes, 4));
+ 		if (ret) {
+@@ -384,6 +384,7 @@ int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
+ 			return ret;
+ 		}
+ 
++		nbytes -= offset;
+ 		memcpy(buf, dma->buf + offset, nbytes);
+ 
+ 		size -= nbytes;
+diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
+index f2583b4053e48..c05ec6fad77f6 100644
+--- a/drivers/thunderbolt/usb4.c
++++ b/drivers/thunderbolt/usb4.c
+@@ -108,15 +108,15 @@ static int usb4_do_read_data(u16 address, void *buf, size_t size,
+ 	unsigned int retries = USB4_DATA_RETRIES;
+ 	unsigned int offset;
+ 
+-	offset = address & 3;
+-	address = address & ~3;
+-
+ 	do {
+-		size_t nbytes = min_t(size_t, size, USB4_DATA_DWORDS * 4);
+ 		unsigned int dwaddress, dwords;
+ 		u8 data[USB4_DATA_DWORDS * 4];
++		size_t nbytes;
+ 		int ret;
+ 
++		offset = address & 3;
++		nbytes = min_t(size_t, size + offset, USB4_DATA_DWORDS * 4);
++
+ 		dwaddress = address / 4;
+ 		dwords = ALIGN(nbytes, 4) / 4;
+ 
+@@ -127,6 +127,7 @@ static int usb4_do_read_data(u16 address, void *buf, size_t size,
+ 			return ret;
+ 		}
+ 
++		nbytes -= offset;
+ 		memcpy(buf, data + offset, nbytes);
+ 
+ 		size -= nbytes;
+diff --git a/drivers/tty/serial/8250/8250.h b/drivers/tty/serial/8250/8250.h
+index 52bb21205bb68..34aa2714f3c93 100644
+--- a/drivers/tty/serial/8250/8250.h
++++ b/drivers/tty/serial/8250/8250.h
+@@ -88,6 +88,7 @@ struct serial8250_config {
+ #define UART_BUG_NOMSR	(1 << 2)	/* UART has buggy MSR status bits (Au1x00) */
+ #define UART_BUG_THRE	(1 << 3)	/* UART has buggy THRE reassertion */
+ #define UART_BUG_PARITY	(1 << 4)	/* UART mishandles parity if FIFO enabled */
++#define UART_BUG_TXRACE	(1 << 5)	/* UART Tx fails to set remote DR */
+ 
+ 
+ #ifdef CONFIG_SERIAL_8250_SHARE_IRQ
+diff --git a/drivers/tty/serial/8250/8250_aspeed_vuart.c b/drivers/tty/serial/8250/8250_aspeed_vuart.c
+index c33e02cbde930..ec0d1da71a203 100644
+--- a/drivers/tty/serial/8250/8250_aspeed_vuart.c
++++ b/drivers/tty/serial/8250/8250_aspeed_vuart.c
+@@ -403,6 +403,7 @@ static int aspeed_vuart_probe(struct platform_device *pdev)
+ 	port.port.status = UPSTAT_SYNC_FIFO;
+ 	port.port.dev = &pdev->dev;
+ 	port.port.has_sysrq = IS_ENABLED(CONFIG_SERIAL_8250_CONSOLE);
++	port.bugs |= UART_BUG_TXRACE;
+ 
+ 	rc = sysfs_create_group(&vuart->dev->kobj, &aspeed_vuart_attr_group);
+ 	if (rc < 0)
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index 9e204f9b799a1..a3a0154da567d 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -714,6 +714,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = {
+ 	{ "APMC0D08", 0},
+ 	{ "AMD0020", 0 },
+ 	{ "AMDI0020", 0 },
++	{ "AMDI0022", 0 },
+ 	{ "BRCM2032", 0 },
+ 	{ "HISI0031", 0 },
+ 	{ },
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index d5a513efb2613..13929ab64dceb 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -56,6 +56,8 @@ struct serial_private {
+ 	int			line[];
+ };
+ 
++#define PCI_DEVICE_ID_HPE_PCI_SERIAL	0x37e
++
+ static const struct pci_device_id pci_use_msi[] = {
+ 	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9900,
+ 			 0xA000, 0x1000) },
+@@ -63,6 +65,8 @@ static const struct pci_device_id pci_use_msi[] = {
+ 			 0xA000, 0x1000) },
+ 	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9922,
+ 			 0xA000, 0x1000) },
++	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_HP_3PAR, PCI_DEVICE_ID_HPE_PCI_SERIAL,
++			 PCI_ANY_ID, PCI_ANY_ID) },
+ 	{ }
+ };
+ 
+@@ -1997,6 +2001,16 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
+ 		.init		= pci_hp_diva_init,
+ 		.setup		= pci_hp_diva_setup,
+ 	},
++	/*
++	 * HPE PCI serial device
++	 */
++	{
++		.vendor         = PCI_VENDOR_ID_HP_3PAR,
++		.device         = PCI_DEVICE_ID_HPE_PCI_SERIAL,
++		.subvendor      = PCI_ANY_ID,
++		.subdevice      = PCI_ANY_ID,
++		.setup		= pci_hp_diva_setup,
++	},
+ 	/*
+ 	 * Intel
+ 	 */
+@@ -3944,21 +3958,26 @@ pciserial_init_ports(struct pci_dev *dev, const struct pciserial_board *board)
+ 	uart.port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF | UPF_SHARE_IRQ;
+ 	uart.port.uartclk = board->base_baud * 16;
+ 
+-	if (pci_match_id(pci_use_msi, dev)) {
+-		dev_dbg(&dev->dev, "Using MSI(-X) interrupts\n");
+-		pci_set_master(dev);
+-		rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES);
++	if (board->flags & FL_NOIRQ) {
++		uart.port.irq = 0;
+ 	} else {
+-		dev_dbg(&dev->dev, "Using legacy interrupts\n");
+-		rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY);
+-	}
+-	if (rc < 0) {
+-		kfree(priv);
+-		priv = ERR_PTR(rc);
+-		goto err_deinit;
++		if (pci_match_id(pci_use_msi, dev)) {
++			dev_dbg(&dev->dev, "Using MSI(-X) interrupts\n");
++			pci_set_master(dev);
++			rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES);
++		} else {
++			dev_dbg(&dev->dev, "Using legacy interrupts\n");
++			rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY);
++		}
++		if (rc < 0) {
++			kfree(priv);
++			priv = ERR_PTR(rc);
++			goto err_deinit;
++		}
++
++		uart.port.irq = pci_irq_vector(dev, 0);
+ 	}
+ 
+-	uart.port.irq = pci_irq_vector(dev, 0);
+ 	uart.port.dev = &dev->dev;
+ 
+ 	for (i = 0; i < nr_ports; i++) {
+@@ -4973,6 +4992,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 	{	PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_DIVA_AUX,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_b2_1_115200 },
++	/* HPE PCI serial device */
++	{	PCI_VENDOR_ID_HP_3PAR, PCI_DEVICE_ID_HPE_PCI_SERIAL,
++		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
++		pbn_b1_1_115200 },
+ 
+ 	{	PCI_VENDOR_ID_DCI, PCI_DEVICE_ID_DCI_PCCOM2,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index b0af13074cd36..6e141429c9808 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1815,6 +1815,18 @@ void serial8250_tx_chars(struct uart_8250_port *up)
+ 	count = up->tx_loadsz;
+ 	do {
+ 		serial_out(up, UART_TX, xmit->buf[xmit->tail]);
++		if (up->bugs & UART_BUG_TXRACE) {
++			/*
++			 * The Aspeed BMC virtual UARTs have a bug where data
++			 * may get stuck in the BMC's Tx FIFO from bursts of
++			 * writes on the APB interface.
++			 *
++			 * Delay back-to-back writes by a read cycle to avoid
++			 * stalling the VUART. Read a register that won't have
++			 * side-effects and discard the result.
++			 */
++			serial_in(up, UART_SCR);
++		}
+ 		xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ 		port->icount.tx++;
+ 		if (uart_circ_empty(xmit))
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 8434bd5a8ec78..5bf8dd6198bbd 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -1528,6 +1528,8 @@ static int __init max310x_uart_init(void)
+ 
+ #ifdef CONFIG_SPI_MASTER
+ 	ret = spi_register_driver(&max310x_spi_driver);
++	if (ret)
++		uart_unregister_driver(&max310x_uart);
+ #endif
+ 
+ 	return ret;
+diff --git a/drivers/tty/serial/rp2.c b/drivers/tty/serial/rp2.c
+index 5690c09cc0417..944a4c0105795 100644
+--- a/drivers/tty/serial/rp2.c
++++ b/drivers/tty/serial/rp2.c
+@@ -195,7 +195,6 @@ struct rp2_card {
+ 	void __iomem			*bar0;
+ 	void __iomem			*bar1;
+ 	spinlock_t			card_lock;
+-	struct completion		fw_loaded;
+ };
+ 
+ #define RP_ID(prod) PCI_VDEVICE(RP, (prod))
+@@ -664,17 +663,10 @@ static void rp2_remove_ports(struct rp2_card *card)
+ 	card->initialized_ports = 0;
+ }
+ 
+-static void rp2_fw_cb(const struct firmware *fw, void *context)
++static int rp2_load_firmware(struct rp2_card *card, const struct firmware *fw)
+ {
+-	struct rp2_card *card = context;
+ 	resource_size_t phys_base;
+-	int i, rc = -ENOENT;
+-
+-	if (!fw) {
+-		dev_err(&card->pdev->dev, "cannot find '%s' firmware image\n",
+-			RP2_FW_NAME);
+-		goto no_fw;
+-	}
++	int i, rc = 0;
+ 
+ 	phys_base = pci_resource_start(card->pdev, 1);
+ 
+@@ -720,23 +712,13 @@ static void rp2_fw_cb(const struct firmware *fw, void *context)
+ 		card->initialized_ports++;
+ 	}
+ 
+-	release_firmware(fw);
+-no_fw:
+-	/*
+-	 * rp2_fw_cb() is called from a workqueue long after rp2_probe()
+-	 * has already returned success.  So if something failed here,
+-	 * we'll just leave the now-dormant device in place until somebody
+-	 * unbinds it.
+-	 */
+-	if (rc)
+-		dev_warn(&card->pdev->dev, "driver initialization failed\n");
+-
+-	complete(&card->fw_loaded);
++	return rc;
+ }
+ 
+ static int rp2_probe(struct pci_dev *pdev,
+ 				   const struct pci_device_id *id)
+ {
++	const struct firmware *fw;
+ 	struct rp2_card *card;
+ 	struct rp2_uart_port *ports;
+ 	void __iomem * const *bars;
+@@ -747,7 +729,6 @@ static int rp2_probe(struct pci_dev *pdev,
+ 		return -ENOMEM;
+ 	pci_set_drvdata(pdev, card);
+ 	spin_lock_init(&card->card_lock);
+-	init_completion(&card->fw_loaded);
+ 
+ 	rc = pcim_enable_device(pdev);
+ 	if (rc)
+@@ -780,21 +761,23 @@ static int rp2_probe(struct pci_dev *pdev,
+ 		return -ENOMEM;
+ 	card->ports = ports;
+ 
+-	rc = devm_request_irq(&pdev->dev, pdev->irq, rp2_uart_interrupt,
+-			      IRQF_SHARED, DRV_NAME, card);
+-	if (rc)
++	rc = request_firmware(&fw, RP2_FW_NAME, &pdev->dev);
++	if (rc < 0) {
++		dev_err(&pdev->dev, "cannot find '%s' firmware image\n",
++			RP2_FW_NAME);
+ 		return rc;
++	}
+ 
+-	/*
+-	 * Only catastrophic errors (e.g. ENOMEM) are reported here.
+-	 * If the FW image is missing, we'll find out in rp2_fw_cb()
+-	 * and print an error message.
+-	 */
+-	rc = request_firmware_nowait(THIS_MODULE, 1, RP2_FW_NAME, &pdev->dev,
+-				     GFP_KERNEL, card, rp2_fw_cb);
++	rc = rp2_load_firmware(card, fw);
++
++	release_firmware(fw);
++	if (rc < 0)
++		return rc;
++
++	rc = devm_request_irq(&pdev->dev, pdev->irq, rp2_uart_interrupt,
++			      IRQF_SHARED, DRV_NAME, card);
+ 	if (rc)
+ 		return rc;
+-	dev_dbg(&pdev->dev, "waiting for firmware blob...\n");
+ 
+ 	return 0;
+ }
+@@ -803,7 +786,6 @@ static void rp2_remove(struct pci_dev *pdev)
+ {
+ 	struct rp2_card *card = pci_get_drvdata(pdev);
+ 
+-	wait_for_completion(&card->fw_loaded);
+ 	rp2_remove_ports(card);
+ }
+ 
+diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
+index bd13014a1c537..fdce1a7995920 100644
+--- a/drivers/tty/serial/serial-tegra.c
++++ b/drivers/tty/serial/serial-tegra.c
+@@ -333,7 +333,7 @@ static void tegra_uart_fifo_reset(struct tegra_uart_port *tup, u8 fcr_bits)
+ 
+ 	do {
+ 		lsr = tegra_uart_read(tup, UART_LSR);
+-		if ((lsr | UART_LSR_TEMT) && !(lsr & UART_LSR_DR))
++		if ((lsr & UART_LSR_TEMT) && !(lsr & UART_LSR_DR))
+ 			break;
+ 		udelay(1);
+ 	} while (--tmout);
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index c6cbaccc19b0d..68a0ff6054761 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -865,9 +865,11 @@ static int uart_set_info(struct tty_struct *tty, struct tty_port *port,
+ 		goto check_and_exit;
+ 	}
+ 
+-	retval = security_locked_down(LOCKDOWN_TIOCSSERIAL);
+-	if (retval && (change_irq || change_port))
+-		goto exit;
++	if (change_irq || change_port) {
++		retval = security_locked_down(LOCKDOWN_TIOCSSERIAL);
++		if (retval)
++			goto exit;
++	}
+ 
+ 	/*
+ 	 * Ask the low level driver to verify the settings.
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index e1179e74a2b89..3b1aaa93d750e 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -1023,10 +1023,10 @@ static int scif_set_rtrg(struct uart_port *port, int rx_trig)
+ {
+ 	unsigned int bits;
+ 
++	if (rx_trig >= port->fifosize)
++		rx_trig = port->fifosize - 1;
+ 	if (rx_trig < 1)
+ 		rx_trig = 1;
+-	if (rx_trig >= port->fifosize)
+-		rx_trig = port->fifosize;
+ 
+ 	/* HSCIF can be set to an arbitrary level. */
+ 	if (sci_getreg(port, HSRTRGR)->size) {
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index 533236366a03b..2218941d35a3f 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -1218,7 +1218,12 @@ static int do_proc_bulk(struct usb_dev_state *ps,
+ 	ret = usbfs_increase_memory_usage(len1 + sizeof(struct urb));
+ 	if (ret)
+ 		return ret;
+-	tbuf = kmalloc(len1, GFP_KERNEL);
++
++	/*
++	 * len1 can be almost arbitrarily large.  Don't WARN if it's
++	 * too big, just fail the request.
++	 */
++	tbuf = kmalloc(len1, GFP_KERNEL | __GFP_NOWARN);
+ 	if (!tbuf) {
+ 		ret = -ENOMEM;
+ 		goto done;
+@@ -1696,7 +1701,7 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	if (num_sgs) {
+ 		as->urb->sg = kmalloc_array(num_sgs,
+ 					    sizeof(struct scatterlist),
+-					    GFP_KERNEL);
++					    GFP_KERNEL | __GFP_NOWARN);
+ 		if (!as->urb->sg) {
+ 			ret = -ENOMEM;
+ 			goto error;
+@@ -1731,7 +1736,7 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 					(uurb_start - as->usbm->vm_start);
+ 		} else {
+ 			as->urb->transfer_buffer = kmalloc(uurb->buffer_length,
+-					GFP_KERNEL);
++					GFP_KERNEL | __GFP_NOWARN);
+ 			if (!as->urb->transfer_buffer) {
+ 				ret = -ENOMEM;
+ 				goto error;
+diff --git a/drivers/usb/core/hub.h b/drivers/usb/core/hub.h
+index 73f4482d833a7..22ea1f4f2d66d 100644
+--- a/drivers/usb/core/hub.h
++++ b/drivers/usb/core/hub.h
+@@ -148,8 +148,10 @@ static inline unsigned hub_power_on_good_delay(struct usb_hub *hub)
+ {
+ 	unsigned delay = hub->descriptor->bPwrOn2PwrGood * 2;
+ 
+-	/* Wait at least 100 msec for power to become stable */
+-	return max(delay, 100U);
++	if (!hub->hdev->parent)	/* root hub */
++		return delay;
++	else /* Wait at least 100 msec for power to become stable */
++		return max(delay, 100U);
+ }
+ 
+ static inline int hub_port_debounce_be_connected(struct usb_hub *hub,
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index acf57a98969dc..ead877e7c87f9 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1236,6 +1236,7 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep,
+ 			req->start_sg = sg_next(s);
+ 
+ 		req->num_queued_sgs++;
++		req->num_pending_sgs--;
+ 
+ 		/*
+ 		 * The number of pending SG entries may not correspond to the
+@@ -1243,7 +1244,7 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep,
+ 		 * don't include unused SG entries.
+ 		 */
+ 		if (length == 0) {
+-			req->num_pending_sgs -= req->request.num_mapped_sgs - req->num_queued_sgs;
++			req->num_pending_sgs = 0;
+ 			break;
+ 		}
+ 
+@@ -2784,15 +2785,15 @@ static int dwc3_gadget_ep_reclaim_trb_sg(struct dwc3_ep *dep,
+ 	struct dwc3_trb *trb = &dep->trb_pool[dep->trb_dequeue];
+ 	struct scatterlist *sg = req->sg;
+ 	struct scatterlist *s;
+-	unsigned int pending = req->num_pending_sgs;
++	unsigned int num_queued = req->num_queued_sgs;
+ 	unsigned int i;
+ 	int ret = 0;
+ 
+-	for_each_sg(sg, s, pending, i) {
++	for_each_sg(sg, s, num_queued, i) {
+ 		trb = &dep->trb_pool[dep->trb_dequeue];
+ 
+ 		req->sg = sg_next(s);
+-		req->num_pending_sgs--;
++		req->num_queued_sgs--;
+ 
+ 		ret = dwc3_gadget_ep_reclaim_completed_trb(dep, req,
+ 				trb, event, status, true);
+@@ -2815,7 +2816,7 @@ static int dwc3_gadget_ep_reclaim_trb_linear(struct dwc3_ep *dep,
+ 
+ static bool dwc3_gadget_ep_request_completed(struct dwc3_request *req)
+ {
+-	return req->num_pending_sgs == 0;
++	return req->num_pending_sgs == 0 && req->num_queued_sgs == 0;
+ }
+ 
+ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+@@ -2824,7 +2825,7 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+ {
+ 	int ret;
+ 
+-	if (req->num_pending_sgs)
++	if (req->request.num_mapped_sgs)
+ 		ret = dwc3_gadget_ep_reclaim_trb_sg(dep, req, event,
+ 				status);
+ 	else
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 0c418ce50ba0f..f1b35a39d1ba8 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -1488,7 +1488,7 @@ static void usb3_start_pipen(struct renesas_usb3_ep *usb3_ep,
+ 			     struct renesas_usb3_request *usb3_req)
+ {
+ 	struct renesas_usb3 *usb3 = usb3_ep_to_usb3(usb3_ep);
+-	struct renesas_usb3_request *usb3_req_first = usb3_get_request(usb3_ep);
++	struct renesas_usb3_request *usb3_req_first;
+ 	unsigned long flags;
+ 	int ret = -EAGAIN;
+ 	u32 enable_bits = 0;
+@@ -1496,7 +1496,8 @@ static void usb3_start_pipen(struct renesas_usb3_ep *usb3_ep,
+ 	spin_lock_irqsave(&usb3->lock, flags);
+ 	if (usb3_ep->halt || usb3_ep->started)
+ 		goto out;
+-	if (usb3_req != usb3_req_first)
++	usb3_req_first = __usb3_get_request(usb3_ep);
++	if (!usb3_req_first || usb3_req != usb3_req_first)
+ 		goto out;
+ 
+ 	if (usb3_pn_change(usb3, usb3_ep->num) < 0)
+diff --git a/drivers/usb/misc/trancevibrator.c b/drivers/usb/misc/trancevibrator.c
+index a3dfc77578ea1..26baba3ab7d73 100644
+--- a/drivers/usb/misc/trancevibrator.c
++++ b/drivers/usb/misc/trancevibrator.c
+@@ -61,9 +61,9 @@ static ssize_t speed_store(struct device *dev, struct device_attribute *attr,
+ 	/* Set speed */
+ 	retval = usb_control_msg(tv->udev, usb_sndctrlpipe(tv->udev, 0),
+ 				 0x01, /* vendor request: set speed */
+-				 USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++				 USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+ 				 tv->speed, /* speed value */
+-				 0, NULL, 0, USB_CTRL_GET_TIMEOUT);
++				 0, NULL, 0, USB_CTRL_SET_TIMEOUT);
+ 	if (retval) {
+ 		tv->speed = old;
+ 		dev_dbg(&tv->udev->dev, "retval = %d\n", retval);
+diff --git a/drivers/usb/misc/uss720.c b/drivers/usb/misc/uss720.c
+index b5d6616442635..748139d262633 100644
+--- a/drivers/usb/misc/uss720.c
++++ b/drivers/usb/misc/uss720.c
+@@ -736,6 +736,7 @@ static int uss720_probe(struct usb_interface *intf,
+ 	parport_announce_port(pp);
+ 
+ 	usb_set_intfdata(intf, pp);
++	usb_put_dev(usbdev);
+ 	return 0;
+ 
+ probe_abort:
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 56cd70ba201c7..7c64b6ee5c194 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1034,6 +1034,9 @@ static const struct usb_device_id id_table_combined[] = {
+ 	/* Sienna devices */
+ 	{ USB_DEVICE(FTDI_VID, FTDI_SIENNA_PID) },
+ 	{ USB_DEVICE(ECHELON_VID, ECHELON_U20_PID) },
++	/* IDS GmbH devices */
++	{ USB_DEVICE(IDS_VID, IDS_SI31A_PID) },
++	{ USB_DEVICE(IDS_VID, IDS_CM31A_PID) },
+ 	/* U-Blox devices */
+ 	{ USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ZED_PID) },
+ 	{ USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ODIN_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 3d47c6d72256e..d854e04a4286e 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -1567,6 +1567,13 @@
+ #define UNJO_VID			0x22B7
+ #define UNJO_ISODEBUG_V1_PID		0x150D
+ 
++/*
++ * IDS GmbH
++ */
++#define IDS_VID				0x2CAF
++#define IDS_SI31A_PID			0x13A2
++#define IDS_CM31A_PID			0x13A3
++
+ /*
+  * U-Blox products (http://www.u-blox.com).
+  */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index c6969ca728390..61d94641ddc08 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1240,6 +1240,10 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1901, 0xff),	/* Telit LN940 (MBIM) */
+ 	  .driver_info = NCTRL(0) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7010, 0xff),	/* Telit LE910-S1 (RNDIS) */
++	  .driver_info = NCTRL(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7011, 0xff),	/* Telit LE910-S1 (ECM) */
++	  .driver_info = NCTRL(2) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9010),				/* Telit SBL FN980 flashing device */
+ 	  .driver_info = NCTRL(0) | ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 29dda60e3bcde..1bbe18f3f9f11 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -113,6 +113,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(SONY_VENDOR_ID, SONY_QN3USB_PRODUCT_ID) },
+ 	{ USB_DEVICE(SANWA_VENDOR_ID, SANWA_PRODUCT_ID) },
+ 	{ USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530_PRODUCT_ID) },
++	{ USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530GC_PRODUCT_ID) },
+ 	{ USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) },
+ 	{ USB_DEVICE(AT_VENDOR_ID, AT_VTKIT3_PRODUCT_ID) },
+ 	{ }					/* Terminating entry */
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index 0f681ddbfd288..6097ee8fccb25 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -158,6 +158,7 @@
+ /* ADLINK ND-6530 RS232,RS485 and RS422 adapter */
+ #define ADLINK_VENDOR_ID		0x0b63
+ #define ADLINK_ND6530_PRODUCT_ID	0x6530
++#define ADLINK_ND6530GC_PRODUCT_ID	0x653a
+ 
+ /* SMART USB Serial Adapter */
+ #define SMART_VENDOR_ID	0x0b8c
+diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c
+index 622e24b06b4b7..afc4f960a2a55 100644
+--- a/drivers/usb/serial/ti_usb_3410_5052.c
++++ b/drivers/usb/serial/ti_usb_3410_5052.c
+@@ -37,6 +37,7 @@
+ /* Vendor and product ids */
+ #define TI_VENDOR_ID			0x0451
+ #define IBM_VENDOR_ID			0x04b3
++#define STARTECH_VENDOR_ID		0x14b0
+ #define TI_3410_PRODUCT_ID		0x3410
+ #define IBM_4543_PRODUCT_ID		0x4543
+ #define IBM_454B_PRODUCT_ID		0x454b
+@@ -372,6 +373,7 @@ static const struct usb_device_id ti_id_table_3410[] = {
+ 	{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1131_PRODUCT_ID) },
+ 	{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1150_PRODUCT_ID) },
+ 	{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1151_PRODUCT_ID) },
++	{ USB_DEVICE(STARTECH_VENDOR_ID, TI_3410_PRODUCT_ID) },
+ 	{ }	/* terminator */
+ };
+ 
+@@ -410,6 +412,7 @@ static const struct usb_device_id ti_id_table_combined[] = {
+ 	{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1131_PRODUCT_ID) },
+ 	{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1150_PRODUCT_ID) },
+ 	{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1151_PRODUCT_ID) },
++	{ USB_DEVICE(STARTECH_VENDOR_ID, TI_3410_PRODUCT_ID) },
+ 	{ }	/* terminator */
+ };
+ 
+diff --git a/drivers/usb/typec/mux.c b/drivers/usb/typec/mux.c
+index cf720e944aaaa..42acdc8b684fe 100644
+--- a/drivers/usb/typec/mux.c
++++ b/drivers/usb/typec/mux.c
+@@ -191,6 +191,7 @@ static void *typec_mux_match(struct fwnode_handle *fwnode, const char *id,
+ 	bool match;
+ 	int nval;
+ 	u16 *val;
++	int ret;
+ 	int i;
+ 
+ 	/*
+@@ -218,10 +219,10 @@ static void *typec_mux_match(struct fwnode_handle *fwnode, const char *id,
+ 	if (!val)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	nval = fwnode_property_read_u16_array(fwnode, "svid", val, nval);
+-	if (nval < 0) {
++	ret = fwnode_property_read_u16_array(fwnode, "svid", val, nval);
++	if (ret < 0) {
+ 		kfree(val);
+-		return ERR_PTR(nval);
++		return ERR_PTR(ret);
+ 	}
+ 
+ 	for (i = 0; i < nval; i++) {
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 65cfbd3771301..8af30d07f6880 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -9,6 +9,7 @@
+ #include <linux/mlx5/vport.h>
+ #include <linux/mlx5/fs.h>
+ #include <linux/mlx5/device.h>
++#include <linux/mlx5/mpfs.h>
+ #include "mlx5_vnet.h"
+ #include "mlx5_vdpa_ifc.h"
+ #include "mlx5_vdpa.h"
+@@ -1839,11 +1840,16 @@ static int mlx5_vdpa_set_map(struct vdpa_device *vdev, struct vhost_iotlb *iotlb
+ static void mlx5_vdpa_free(struct vdpa_device *vdev)
+ {
+ 	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
++	struct mlx5_core_dev *pfmdev;
+ 	struct mlx5_vdpa_net *ndev;
+ 
+ 	ndev = to_mlx5_vdpa_ndev(mvdev);
+ 
+ 	free_resources(ndev);
++	if (!is_zero_ether_addr(ndev->config.mac)) {
++		pfmdev = pci_get_drvdata(pci_physfn(mvdev->mdev->pdev));
++		mlx5_mpfs_del_mac(pfmdev, ndev->config.mac);
++	}
+ 	mlx5_vdpa_free_resources(&ndev->mvdev);
+ 	mutex_destroy(&ndev->reslock);
+ }
+@@ -1962,6 +1968,7 @@ static void init_mvqs(struct mlx5_vdpa_net *ndev)
+ void *mlx5_vdpa_add_dev(struct mlx5_core_dev *mdev)
+ {
+ 	struct virtio_net_config *config;
++	struct mlx5_core_dev *pfmdev;
+ 	struct mlx5_vdpa_dev *mvdev;
+ 	struct mlx5_vdpa_net *ndev;
+ 	u32 max_vqs;
+@@ -1990,10 +1997,17 @@ void *mlx5_vdpa_add_dev(struct mlx5_core_dev *mdev)
+ 	if (err)
+ 		goto err_mtu;
+ 
++	if (!is_zero_ether_addr(config->mac)) {
++		pfmdev = pci_get_drvdata(pci_physfn(mdev->pdev));
++		err = mlx5_mpfs_add_mac(pfmdev, config->mac);
++		if (err)
++			goto err_mtu;
++	}
++
+ 	mvdev->vdev.dma_dev = mdev->device;
+ 	err = mlx5_vdpa_alloc_resources(&ndev->mvdev);
+ 	if (err)
+-		goto err_mtu;
++		goto err_mpfs;
+ 
+ 	err = alloc_resources(ndev);
+ 	if (err)
+@@ -2009,6 +2023,9 @@ err_reg:
+ 	free_resources(ndev);
+ err_res:
+ 	mlx5_vdpa_free_resources(&ndev->mvdev);
++err_mpfs:
++	if (!is_zero_ether_addr(config->mac))
++		mlx5_mpfs_del_mac(pfmdev, config->mac);
+ err_mtu:
+ 	mutex_destroy(&ndev->reslock);
+ 	put_device(&mvdev->vdev.dev);
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 628ba3fed36df..92d7fd7436cb8 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -1837,7 +1837,9 @@ static void afs_rename_edit_dir(struct afs_operation *op)
+ 	new_inode = d_inode(new_dentry);
+ 	if (new_inode) {
+ 		spin_lock(&new_inode->i_lock);
+-		if (new_inode->i_nlink > 0)
++		if (S_ISDIR(new_inode->i_mode))
++			clear_nlink(new_inode);
++		else if (new_inode->i_nlink > 0)
+ 			drop_nlink(new_inode);
+ 		spin_unlock(&new_inode->i_lock);
+ 	}
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index cacea6bafc229..29f020c4b2d0d 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -1408,6 +1408,9 @@ int bdev_disk_changed(struct block_device *bdev, bool invalidate)
+ 
+ 	lockdep_assert_held(&bdev->bd_mutex);
+ 
++	if (!(disk->flags & GENHD_FL_UP))
++		return -ENXIO;
++
+ rescan:
+ 	ret = blk_drop_partitions(bdev);
+ 	if (ret)
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 30cf917a58e92..81e98a457130f 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4662,7 +4662,7 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ 		  u64 start, u64 len)
+ {
+ 	int ret = 0;
+-	u64 off = start;
++	u64 off;
+ 	u64 max = start + len;
+ 	u32 flags = 0;
+ 	u32 found_type;
+@@ -4698,6 +4698,11 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ 		goto out_free_ulist;
+ 	}
+ 
++	/*
++	 * We can't initialize that to 'start' as this could miss extents due
++	 * to extent item merging
++	 */
++	off = 0;
+ 	start = round_down(start, btrfs_inode_sectorsize(inode));
+ 	len = round_up(max, btrfs_inode_sectorsize(inode)) - start;
+ 
+diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
+index c4f87df532833..eeb66e797e0bf 100644
+--- a/fs/btrfs/reflink.c
++++ b/fs/btrfs/reflink.c
+@@ -281,6 +281,11 @@ copy_inline_extent:
+ 	ret = btrfs_inode_set_file_extent_range(BTRFS_I(dst), 0, aligned_end);
+ out:
+ 	if (!ret && !trans) {
++		/*
++		 * Release path before starting a new transaction so we don't
++		 * hold locks that would confuse lockdep.
++		 */
++		btrfs_release_path(path);
+ 		/*
+ 		 * No transaction here means we copied the inline extent into a
+ 		 * page of the destination inode.
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 8bc3e2f25e7de..9a0cfa0e124da 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1823,8 +1823,6 @@ static noinline int link_to_fixup_dir(struct btrfs_trans_handle *trans,
+ 		ret = btrfs_update_inode(trans, root, inode);
+ 	} else if (ret == -EEXIST) {
+ 		ret = 0;
+-	} else {
+-		BUG(); /* Logic Error */
+ 	}
+ 	iput(inode);
+ 
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index d424f431263c8..ab509965656e3 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -951,6 +951,13 @@ SMB2_negotiate(const unsigned int xid, struct cifs_ses *ses)
+ 	/* Internal types */
+ 	server->capabilities |= SMB2_NT_FIND | SMB2_LARGE_FILES;
+ 
++	/*
++	 * SMB3.0 supports only 1 cipher and doesn't have a encryption neg context
++	 * Set the cipher type manually.
++	 */
++	if (server->dialect == SMB30_PROT_ID && (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
++		server->cipher_type = SMB2_ENCRYPTION_AES128_CCM;
++
+ 	security_blob = smb2_get_data_area_len(&blob_offset, &blob_length,
+ 					       (struct smb2_sync_hdr *)rsp);
+ 	/*
+@@ -3891,10 +3898,10 @@ smb2_new_read_req(void **buf, unsigned int *total_len,
+ 			 * Related requests use info from previous read request
+ 			 * in chain.
+ 			 */
+-			shdr->SessionId = 0xFFFFFFFF;
++			shdr->SessionId = 0xFFFFFFFFFFFFFFFF;
+ 			shdr->TreeId = 0xFFFFFFFF;
+-			req->PersistentFileId = 0xFFFFFFFF;
+-			req->VolatileFileId = 0xFFFFFFFF;
++			req->PersistentFileId = 0xFFFFFFFFFFFFFFFF;
++			req->VolatileFileId = 0xFFFFFFFFFFFFFFFF;
+ 		}
+ 	}
+ 	if (remaining_bytes > io_parms->length)
+diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
+index 7f5aa0403e167..ae5ed3a074943 100644
+--- a/fs/nfs/filelayout/filelayout.c
++++ b/fs/nfs/filelayout/filelayout.c
+@@ -718,7 +718,7 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo,
+ 		if (unlikely(!p))
+ 			goto out_err;
+ 		fl->fh_array[i]->size = be32_to_cpup(p++);
+-		if (sizeof(struct nfs_fh) < fl->fh_array[i]->size) {
++		if (fl->fh_array[i]->size > NFS_MAXFHSIZE) {
+ 			printk(KERN_ERR "NFS: Too big fh %d received %d\n",
+ 			       i, fl->fh_array[i]->size);
+ 			goto out_err;
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 57b3821d975a3..a1e5c6b85dedc 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -211,7 +211,7 @@ static loff_t nfs4_file_llseek(struct file *filep, loff_t offset, int whence)
+ 	case SEEK_HOLE:
+ 	case SEEK_DATA:
+ 		ret = nfs42_proc_llseek(filep, offset, whence);
+-		if (ret != -ENOTSUPP)
++		if (ret != -EOPNOTSUPP)
+ 			return ret;
+ 		fallthrough;
+ 	default:
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 06b70de0cc0d0..c92d6ff0fceab 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1688,7 +1688,7 @@ static void nfs_set_open_stateid_locked(struct nfs4_state *state,
+ 		rcu_read_unlock();
+ 		trace_nfs4_open_stateid_update_wait(state->inode, stateid, 0);
+ 
+-		if (!signal_pending(current)) {
++		if (!fatal_signal_pending(current)) {
+ 			if (schedule_timeout(5*HZ) == 0)
+ 				status = -EAGAIN;
+ 			else
+@@ -3463,7 +3463,7 @@ static bool nfs4_refresh_open_old_stateid(nfs4_stateid *dst,
+ 		write_sequnlock(&state->seqlock);
+ 		trace_nfs4_close_stateid_update_wait(state->inode, dst, 0);
+ 
+-		if (signal_pending(current))
++		if (fatal_signal_pending(current))
+ 			status = -EINTR;
+ 		else
+ 			if (schedule_timeout(5*HZ) != 0)
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 78c9c4bdef2b6..98b9c1ed366ee 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -1094,15 +1094,16 @@ nfs_pageio_do_add_request(struct nfs_pageio_descriptor *desc,
+ 	struct nfs_page *prev = NULL;
+ 	unsigned int size;
+ 
+-	if (mirror->pg_count != 0) {
+-		prev = nfs_list_entry(mirror->pg_list.prev);
+-	} else {
++	if (list_empty(&mirror->pg_list)) {
+ 		if (desc->pg_ops->pg_init)
+ 			desc->pg_ops->pg_init(desc, req);
+ 		if (desc->pg_error < 0)
+ 			return 0;
+ 		mirror->pg_base = req->wb_pgbase;
+-	}
++		mirror->pg_count = 0;
++		mirror->pg_recoalesce = 0;
++	} else
++		prev = nfs_list_entry(mirror->pg_list.prev);
+ 
+ 	if (desc->pg_maxretrans && req->wb_nio > desc->pg_maxretrans) {
+ 		if (NFS_SERVER(desc->pg_inode)->flags & NFS_MOUNT_SOFTERR)
+@@ -1127,17 +1128,16 @@ static void nfs_pageio_doio(struct nfs_pageio_descriptor *desc)
+ {
+ 	struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);
+ 
+-
+ 	if (!list_empty(&mirror->pg_list)) {
+ 		int error = desc->pg_ops->pg_doio(desc);
+ 		if (error < 0)
+ 			desc->pg_error = error;
+-		else
++		if (list_empty(&mirror->pg_list)) {
+ 			mirror->pg_bytes_written += mirror->pg_count;
+-	}
+-	if (list_empty(&mirror->pg_list)) {
+-		mirror->pg_count = 0;
+-		mirror->pg_base = 0;
++			mirror->pg_count = 0;
++			mirror->pg_base = 0;
++			mirror->pg_recoalesce = 0;
++		}
+ 	}
+ }
+ 
+@@ -1227,7 +1227,6 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
+ 
+ 	do {
+ 		list_splice_init(&mirror->pg_list, &head);
+-		mirror->pg_bytes_written -= mirror->pg_count;
+ 		mirror->pg_count = 0;
+ 		mirror->pg_base = 0;
+ 		mirror->pg_recoalesce = 0;
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 0a32d182dce46..4d20125e982a0 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1317,6 +1317,11 @@ _pnfs_return_layout(struct inode *ino)
+ {
+ 	struct pnfs_layout_hdr *lo = NULL;
+ 	struct nfs_inode *nfsi = NFS_I(ino);
++	struct pnfs_layout_range range = {
++		.iomode		= IOMODE_ANY,
++		.offset		= 0,
++		.length		= NFS4_MAX_UINT64,
++	};
+ 	LIST_HEAD(tmp_list);
+ 	const struct cred *cred;
+ 	nfs4_stateid stateid;
+@@ -1344,16 +1349,10 @@ _pnfs_return_layout(struct inode *ino)
+ 	}
+ 	valid_layout = pnfs_layout_is_valid(lo);
+ 	pnfs_clear_layoutcommit(ino, &tmp_list);
+-	pnfs_mark_matching_lsegs_return(lo, &tmp_list, NULL, 0);
++	pnfs_mark_matching_lsegs_return(lo, &tmp_list, &range, 0);
+ 
+-	if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
+-		struct pnfs_layout_range range = {
+-			.iomode		= IOMODE_ANY,
+-			.offset		= 0,
+-			.length		= NFS4_MAX_UINT64,
+-		};
++	if (NFS_SERVER(ino)->pnfs_curr_ld->return_range)
+ 		NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo, &range);
+-	}
+ 
+ 	/* Don't send a LAYOUTRETURN if list was initially empty */
+ 	if (!test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags) ||
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 55ce0ee9c5c73..297ea12b3cfd2 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -2704,6 +2704,10 @@ static ssize_t proc_pid_attr_write(struct file * file, const char __user * buf,
+ 	void *page;
+ 	int rv;
+ 
++	/* A task may only write when it was the opener. */
++	if (file->f_cred != current_real_cred())
++		return -EPERM;
++
+ 	rcu_read_lock();
+ 	task = pid_task(proc_pid(inode), PIDTYPE_PID);
+ 	if (!task) {
+diff --git a/include/linux/bits.h b/include/linux/bits.h
+index 7f475d59a0974..87d112650dfbb 100644
+--- a/include/linux/bits.h
++++ b/include/linux/bits.h
+@@ -22,7 +22,7 @@
+ #include <linux/build_bug.h>
+ #define GENMASK_INPUT_CHECK(h, l) \
+ 	(BUILD_BUG_ON_ZERO(__builtin_choose_expr( \
+-		__builtin_constant_p((l) > (h)), (l) > (h), 0)))
++		__is_constexpr((l) > (h)), (l) > (h), 0)))
+ #else
+ /*
+  * BUILD_BUG_ON_ZERO is not available in h files included from asm files,
+diff --git a/include/linux/const.h b/include/linux/const.h
+index 81b8aae5a8559..435ddd72d2c46 100644
+--- a/include/linux/const.h
++++ b/include/linux/const.h
+@@ -3,4 +3,12 @@
+ 
+ #include <vdso/const.h>
+ 
++/*
++ * This returns a constant expression while determining if an argument is
++ * a constant expression, most importantly without evaluating the argument.
++ * Glory to Martin Uecker <Martin.Uecker@med.uni-goettingen.de>
++ */
++#define __is_constexpr(x) \
++	(sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))
++
+ #endif /* _LINUX_CONST_H */
+diff --git a/include/linux/device.h b/include/linux/device.h
+index 75a24b32fee8a..8d97871631d02 100644
+--- a/include/linux/device.h
++++ b/include/linux/device.h
+@@ -570,7 +570,7 @@ struct device {
+  * @flags: Link flags.
+  * @rpm_active: Whether or not the consumer device is runtime-PM-active.
+  * @kref: Count repeated addition of the same link.
+- * @rcu_head: An RCU head to use for deferred execution of SRCU callbacks.
++ * @rm_work: Work structure used for removing the link.
+  * @supplier_preactivated: Supplier has been made active before consumer probe.
+  */
+ struct device_link {
+@@ -583,9 +583,7 @@ struct device_link {
+ 	u32 flags;
+ 	refcount_t rpm_active;
+ 	struct kref kref;
+-#ifdef CONFIG_SRCU
+-	struct rcu_head rcu_head;
+-#endif
++	struct work_struct rm_work;
+ 	bool supplier_preactivated; /* Owned by consumer probe. */
+ };
+ 
+diff --git a/include/linux/minmax.h b/include/linux/minmax.h
+index c0f57b0c64d90..5433c08fcc685 100644
+--- a/include/linux/minmax.h
++++ b/include/linux/minmax.h
+@@ -2,6 +2,8 @@
+ #ifndef _LINUX_MINMAX_H
+ #define _LINUX_MINMAX_H
+ 
++#include <linux/const.h>
++
+ /*
+  * min()/max()/clamp() macros must accomplish three things:
+  *
+@@ -17,14 +19,6 @@
+ #define __typecheck(x, y) \
+ 	(!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
+ 
+-/*
+- * This returns a constant expression while determining if an argument is
+- * a constant expression, most importantly without evaluating the argument.
+- * Glory to Martin Uecker <Martin.Uecker@med.uni-goettingen.de>
+- */
+-#define __is_constexpr(x) \
+-	(sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))
+-
+ #define __no_side_effects(x, y) \
+ 		(__is_constexpr(x) && __is_constexpr(y))
+ 
+diff --git a/include/linux/mlx5/mpfs.h b/include/linux/mlx5/mpfs.h
+new file mode 100644
+index 0000000000000..bf700c8d55164
+--- /dev/null
++++ b/include/linux/mlx5/mpfs.h
+@@ -0,0 +1,18 @@
++/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
++ * Copyright (c) 2021 Mellanox Technologies Ltd.
++ */
++
++#ifndef _MLX5_MPFS_
++#define _MLX5_MPFS_
++
++struct mlx5_core_dev;
++
++#ifdef CONFIG_MLX5_MPFS
++int  mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac);
++int  mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac);
++#else /* #ifndef CONFIG_MLX5_MPFS */
++static inline int  mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; }
++static inline int  mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; }
++#endif
++
++#endif
+diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
+index 3ac5037d1c3da..cad1fa2b6baa2 100644
+--- a/include/linux/sunrpc/xprt.h
++++ b/include/linux/sunrpc/xprt.h
+@@ -367,6 +367,8 @@ struct rpc_xprt *	xprt_alloc(struct net *net, size_t size,
+ 				unsigned int num_prealloc,
+ 				unsigned int max_req);
+ void			xprt_free(struct rpc_xprt *);
++void			xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task);
++bool			xprt_wake_up_backlog(struct rpc_xprt *xprt, struct rpc_rqst *req);
+ 
+ static inline int
+ xprt_enable_swap(struct rpc_xprt *xprt)
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index d5ab8d99739f1..8a1bf2dbadd0c 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -5624,7 +5624,7 @@ unsigned int ieee80211_get_mesh_hdrlen(struct ieee80211s_hdr *meshhdr);
+  */
+ int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr,
+ 				  const u8 *addr, enum nl80211_iftype iftype,
+-				  u8 data_offset);
++				  u8 data_offset, bool is_amsdu);
+ 
+ /**
+  * ieee80211_data_to_8023 - convert an 802.11 data frame to 802.3
+@@ -5636,7 +5636,7 @@ int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr,
+ static inline int ieee80211_data_to_8023(struct sk_buff *skb, const u8 *addr,
+ 					 enum nl80211_iftype iftype)
+ {
+-	return ieee80211_data_to_8023_exthdr(skb, NULL, addr, iftype, 0);
++	return ieee80211_data_to_8023_exthdr(skb, NULL, addr, iftype, 0, false);
+ }
+ 
+ /**
+diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
+index 16e8b2f8d006a..b338638f22792 100644
+--- a/include/net/netfilter/nf_flow_table.h
++++ b/include/net/netfilter/nf_flow_table.h
+@@ -126,7 +126,6 @@ enum nf_flow_flags {
+ 	NF_FLOW_HW,
+ 	NF_FLOW_HW_DYING,
+ 	NF_FLOW_HW_DEAD,
+-	NF_FLOW_HW_REFRESH,
+ 	NF_FLOW_HW_PENDING,
+ };
+ 
+diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
+index d4d4612363519..b608be532964f 100644
+--- a/include/net/pkt_cls.h
++++ b/include/net/pkt_cls.h
+@@ -709,6 +709,17 @@ tc_cls_common_offload_init(struct flow_cls_common_offload *cls_common,
+ 		cls_common->extack = extack;
+ }
+ 
++#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
++static inline struct tc_skb_ext *tc_skb_ext_alloc(struct sk_buff *skb)
++{
++	struct tc_skb_ext *tc_skb_ext = skb_ext_add(skb, TC_SKB_EXT);
++
++	if (tc_skb_ext)
++		memset(tc_skb_ext, 0, sizeof(*tc_skb_ext));
++	return tc_skb_ext;
++}
++#endif
++
+ enum tc_matchall_command {
+ 	TC_CLSMATCHALL_REPLACE,
+ 	TC_CLSMATCHALL_DESTROY,
+diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
+index 4ed32e6b02014..2be90a54a4044 100644
+--- a/include/net/pkt_sched.h
++++ b/include/net/pkt_sched.h
+@@ -123,12 +123,7 @@ void __qdisc_run(struct Qdisc *q);
+ static inline void qdisc_run(struct Qdisc *q)
+ {
+ 	if (qdisc_run_begin(q)) {
+-		/* NOLOCK qdisc must check 'state' under the qdisc seqlock
+-		 * to avoid racing with dev_qdisc_reset()
+-		 */
+-		if (!(q->flags & TCQ_F_NOLOCK) ||
+-		    likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)))
+-			__qdisc_run(q);
++		__qdisc_run(q);
+ 		qdisc_run_end(q);
+ 	}
+ }
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 3648164faa060..4dd2c9e34976e 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -36,6 +36,7 @@ struct qdisc_rate_table {
+ enum qdisc_state_t {
+ 	__QDISC_STATE_SCHED,
+ 	__QDISC_STATE_DEACTIVATED,
++	__QDISC_STATE_MISSED,
+ };
+ 
+ struct qdisc_size_table {
+@@ -159,8 +160,33 @@ static inline bool qdisc_is_empty(const struct Qdisc *qdisc)
+ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
+ {
+ 	if (qdisc->flags & TCQ_F_NOLOCK) {
++		if (spin_trylock(&qdisc->seqlock))
++			goto nolock_empty;
++
++		/* If the MISSED flag is set, it means other thread has
++		 * set the MISSED flag before second spin_trylock(), so
++		 * we can return false here to avoid multi cpus doing
++		 * the set_bit() and second spin_trylock() concurrently.
++		 */
++		if (test_bit(__QDISC_STATE_MISSED, &qdisc->state))
++			return false;
++
++		/* Set the MISSED flag before the second spin_trylock(),
++		 * if the second spin_trylock() return false, it means
++		 * other cpu holding the lock will do dequeuing for us
++		 * or it will see the MISSED flag set after releasing
++		 * lock and reschedule the net_tx_action() to do the
++		 * dequeuing.
++		 */
++		set_bit(__QDISC_STATE_MISSED, &qdisc->state);
++
++		/* Retry again in case other CPU may not see the new flag
++		 * after it releases the lock at the end of qdisc_run_end().
++		 */
+ 		if (!spin_trylock(&qdisc->seqlock))
+ 			return false;
++
++nolock_empty:
+ 		WRITE_ONCE(qdisc->empty, false);
+ 	} else if (qdisc_is_running(qdisc)) {
+ 		return false;
+@@ -176,8 +202,15 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
+ static inline void qdisc_run_end(struct Qdisc *qdisc)
+ {
+ 	write_seqcount_end(&qdisc->running);
+-	if (qdisc->flags & TCQ_F_NOLOCK)
++	if (qdisc->flags & TCQ_F_NOLOCK) {
+ 		spin_unlock(&qdisc->seqlock);
++
++		if (unlikely(test_bit(__QDISC_STATE_MISSED,
++				      &qdisc->state))) {
++			clear_bit(__QDISC_STATE_MISSED, &qdisc->state);
++			__netif_schedule(qdisc);
++		}
++	}
+ }
+ 
+ static inline bool qdisc_may_bulk(const struct Qdisc *qdisc)
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 261195598df39..f68184b8c0aa5 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2197,13 +2197,15 @@ static inline void skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
+ 	sk_mem_charge(sk, skb->truesize);
+ }
+ 
+-static inline void skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk)
++static inline __must_check bool skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk)
+ {
+ 	if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) {
+ 		skb_orphan(skb);
+ 		skb->destructor = sock_efree;
+ 		skb->sk = sk;
++		return true;
+ 	}
++	return false;
+ }
+ 
+ void sk_reset_timer(struct sock *sk, struct timer_list *timer,
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 364b9760d1a73..4f50d6f128be3 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -12364,12 +12364,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
+ 	if (is_priv)
+ 		env->test_state_freq = attr->prog_flags & BPF_F_TEST_STATE_FREQ;
+ 
+-	if (bpf_prog_is_dev_bound(env->prog->aux)) {
+-		ret = bpf_prog_offload_verifier_prep(env->prog);
+-		if (ret)
+-			goto skip_full_check;
+-	}
+-
+ 	env->explored_states = kvcalloc(state_htab_size(env),
+ 				       sizeof(struct bpf_verifier_state_list *),
+ 				       GFP_USER);
+@@ -12393,6 +12387,12 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
+ 	if (ret < 0)
+ 		goto skip_full_check;
+ 
++	if (bpf_prog_is_dev_bound(env->prog->aux)) {
++		ret = bpf_prog_offload_verifier_prep(env->prog);
++		if (ret)
++			goto skip_full_check;
++	}
++
+ 	ret = check_cfg(env);
+ 	if (ret < 0)
+ 		goto skip_full_check;
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index 0ceaaba36c2e1..0aabfcaf269a9 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -864,28 +864,30 @@ static int seccomp_do_user_notification(int this_syscall,
+ 
+ 	up(&match->notif->request);
+ 	wake_up_poll(&match->wqh, EPOLLIN | EPOLLRDNORM);
+-	mutex_unlock(&match->notify_lock);
+ 
+ 	/*
+ 	 * This is where we wait for a reply from userspace.
+ 	 */
+-wait:
+-	err = wait_for_completion_interruptible(&n.ready);
+-	mutex_lock(&match->notify_lock);
+-	if (err == 0) {
+-		/* Check if we were woken up by a addfd message */
++	do {
++		mutex_unlock(&match->notify_lock);
++		err = wait_for_completion_interruptible(&n.ready);
++		mutex_lock(&match->notify_lock);
++		if (err != 0)
++			goto interrupted;
++
+ 		addfd = list_first_entry_or_null(&n.addfd,
+ 						 struct seccomp_kaddfd, list);
+-		if (addfd && n.state != SECCOMP_NOTIFY_REPLIED) {
++		/* Check if we were woken up by a addfd message */
++		if (addfd)
+ 			seccomp_handle_addfd(addfd);
+-			mutex_unlock(&match->notify_lock);
+-			goto wait;
+-		}
+-		ret = n.val;
+-		err = n.error;
+-		flags = n.flags;
+-	}
+ 
++	}  while (n.state != SECCOMP_NOTIFY_REPLIED);
++
++	ret = n.val;
++	err = n.error;
++	flags = n.flags;
++
++interrupted:
+ 	/* If there were any pending addfd calls, clear them out */
+ 	list_for_each_entry_safe(addfd, tmp, &n.addfd, list) {
+ 		/* The process went away before we got a chance to handle it */
+diff --git a/net/bluetooth/cmtp/core.c b/net/bluetooth/cmtp/core.c
+index 07cfa3249f83a..0a2d78e811cf5 100644
+--- a/net/bluetooth/cmtp/core.c
++++ b/net/bluetooth/cmtp/core.c
+@@ -392,6 +392,11 @@ int cmtp_add_connection(struct cmtp_connadd_req *req, struct socket *sock)
+ 	if (!(session->flags & BIT(CMTP_LOOPBACK))) {
+ 		err = cmtp_attach_device(session);
+ 		if (err < 0) {
++			/* Caller will call fput in case of failure, and so
++			 * will cmtp_session kthread.
++			 */
++			get_file(session->sock->file);
++
+ 			atomic_inc(&session->terminate);
+ 			wake_up_interruptible(sk_sleep(session->sock->sk));
+ 			up_write(&cmtp_session_sem);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2f17a4ac82f0e..0c9ce36afc8cf 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3764,7 +3764,8 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
+ 
+ 	if (q->flags & TCQ_F_NOLOCK) {
+ 		rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
+-		qdisc_run(q);
++		if (likely(!netif_xmit_frozen_or_stopped(txq)))
++			qdisc_run(q);
+ 
+ 		if (unlikely(to_free))
+ 			kfree_skb_list(to_free);
+@@ -4910,25 +4911,43 @@ static __latent_entropy void net_tx_action(struct softirq_action *h)
+ 		sd->output_queue_tailp = &sd->output_queue;
+ 		local_irq_enable();
+ 
++		rcu_read_lock();
++
+ 		while (head) {
+ 			struct Qdisc *q = head;
+ 			spinlock_t *root_lock = NULL;
+ 
+ 			head = head->next_sched;
+ 
+-			if (!(q->flags & TCQ_F_NOLOCK)) {
+-				root_lock = qdisc_lock(q);
+-				spin_lock(root_lock);
+-			}
+ 			/* We need to make sure head->next_sched is read
+ 			 * before clearing __QDISC_STATE_SCHED
+ 			 */
+ 			smp_mb__before_atomic();
++
++			if (!(q->flags & TCQ_F_NOLOCK)) {
++				root_lock = qdisc_lock(q);
++				spin_lock(root_lock);
++			} else if (unlikely(test_bit(__QDISC_STATE_DEACTIVATED,
++						     &q->state))) {
++				/* There is a synchronize_net() between
++				 * STATE_DEACTIVATED flag being set and
++				 * qdisc_reset()/some_qdisc_is_busy() in
++				 * dev_deactivate(), so we can safely bail out
++				 * early here to avoid data race between
++				 * qdisc_deactivate() and some_qdisc_is_busy()
++				 * for lockless qdisc.
++				 */
++				clear_bit(__QDISC_STATE_SCHED, &q->state);
++				continue;
++			}
++
+ 			clear_bit(__QDISC_STATE_SCHED, &q->state);
+ 			qdisc_run(q);
+ 			if (root_lock)
+ 				spin_unlock(root_lock);
+ 		}
++
++		rcu_read_unlock();
+ 	}
+ 
+ 	xfrm_dev_backlog(sd);
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 9358bc4a3711f..ef6bdbb63ecbb 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -3782,6 +3782,7 @@ static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room,
+ 		__skb_push(skb, head_room);
+ 		memset(skb->data, 0, head_room);
+ 		skb_reset_mac_header(skb);
++		skb_reset_mac_len(skb);
+ 	}
+ 
+ 	return ret;
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 8339978d46ff8..a18c2973b8c6d 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -132,6 +132,9 @@ static void neigh_update_gc_list(struct neighbour *n)
+ 	write_lock_bh(&n->tbl->lock);
+ 	write_lock(&n->lock);
+ 
++	if (n->dead)
++		goto out;
++
+ 	/* remove from the gc list if new state is permanent or if neighbor
+ 	 * is externally learned; otherwise entry should be on the gc list
+ 	 */
+@@ -148,6 +151,7 @@ static void neigh_update_gc_list(struct neighbour *n)
+ 		atomic_inc(&n->tbl->gc_entries);
+ 	}
+ 
++out:
+ 	write_unlock(&n->lock);
+ 	write_unlock_bh(&n->tbl->lock);
+ }
+diff --git a/net/core/sock.c b/net/core/sock.c
+index c75c1e723a840..dee29f41beaf8 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2099,10 +2099,10 @@ void skb_orphan_partial(struct sk_buff *skb)
+ 	if (skb_is_tcp_pure_ack(skb))
+ 		return;
+ 
+-	if (can_skb_orphan_partial(skb))
+-		skb_set_owner_sk_safe(skb, skb->sk);
+-	else
+-		skb_orphan(skb);
++	if (can_skb_orphan_partial(skb) && skb_set_owner_sk_safe(skb, skb->sk))
++		return;
++
++	skb_orphan(skb);
+ }
+ EXPORT_SYMBOL(skb_orphan_partial);
+ 
+diff --git a/net/dsa/master.c b/net/dsa/master.c
+index 3a44da35dfeba..45bd627b4e7bc 100644
+--- a/net/dsa/master.c
++++ b/net/dsa/master.c
+@@ -147,8 +147,7 @@ static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset,
+ 	struct dsa_switch *ds = cpu_dp->ds;
+ 	int port = cpu_dp->index;
+ 	int len = ETH_GSTRING_LEN;
+-	int mcount = 0, count;
+-	unsigned int i;
++	int mcount = 0, count, i;
+ 	uint8_t pfx[4];
+ 	uint8_t *ndata;
+ 
+@@ -178,6 +177,8 @@ static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset,
+ 		 */
+ 		ds->ops->get_strings(ds, port, stringset, ndata);
+ 		count = ds->ops->get_sset_count(ds, port, stringset);
++		if (count < 0)
++			return;
+ 		for (i = 0; i < count; i++) {
+ 			memmove(ndata + (i * len + sizeof(pfx)),
+ 				ndata + i * len, len - sizeof(pfx));
+diff --git a/net/dsa/slave.c b/net/dsa/slave.c
+index c6806eef906f9..9281c9c6a253e 100644
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -746,13 +746,15 @@ static int dsa_slave_get_sset_count(struct net_device *dev, int sset)
+ 	struct dsa_switch *ds = dp->ds;
+ 
+ 	if (sset == ETH_SS_STATS) {
+-		int count;
++		int count = 0;
+ 
+-		count = 4;
+-		if (ds->ops->get_sset_count)
+-			count += ds->ops->get_sset_count(ds, dp->index, sset);
++		if (ds->ops->get_sset_count) {
++			count = ds->ops->get_sset_count(ds, dp->index, sset);
++			if (count < 0)
++				return count;
++		}
+ 
+-		return count;
++		return count + 4;
+ 	}
+ 
+ 	return -EOPNOTSUPP;
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index 6f4c34b6a5d69..fec1b014c0a26 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -218,6 +218,7 @@ static netdev_tx_t hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	if (master) {
+ 		skb->dev = master->dev;
+ 		skb_reset_mac_header(skb);
++		skb_reset_mac_len(skb);
+ 		hsr_forward_skb(skb, master);
+ 	} else {
+ 		atomic_long_inc(&dev->tx_dropped);
+@@ -261,6 +262,7 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master, u16 proto)
+ 		goto out;
+ 
+ 	skb_reset_mac_header(skb);
++	skb_reset_mac_len(skb);
+ 	skb_reset_network_header(skb);
+ 	skb_reset_transport_header(skb);
+ 
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index 90c72e4c0a8fc..baf4765be6d78 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -451,25 +451,31 @@ static void handle_std_frame(struct sk_buff *skb,
+ 	}
+ }
+ 
+-void hsr_fill_frame_info(__be16 proto, struct sk_buff *skb,
+-			 struct hsr_frame_info *frame)
++int hsr_fill_frame_info(__be16 proto, struct sk_buff *skb,
++			struct hsr_frame_info *frame)
+ {
+ 	if (proto == htons(ETH_P_PRP) ||
+ 	    proto == htons(ETH_P_HSR)) {
++		/* Check if skb contains hsr_ethhdr */
++		if (skb->mac_len < sizeof(struct hsr_ethhdr))
++			return -EINVAL;
++
+ 		/* HSR tagged frame :- Data or Supervision */
+ 		frame->skb_std = NULL;
+ 		frame->skb_prp = NULL;
+ 		frame->skb_hsr = skb;
+ 		frame->sequence_nr = hsr_get_skb_sequence_nr(skb);
+-		return;
++		return 0;
+ 	}
+ 
+ 	/* Standard frame or PRP from master port */
+ 	handle_std_frame(skb, frame);
++
++	return 0;
+ }
+ 
+-void prp_fill_frame_info(__be16 proto, struct sk_buff *skb,
+-			 struct hsr_frame_info *frame)
++int prp_fill_frame_info(__be16 proto, struct sk_buff *skb,
++			struct hsr_frame_info *frame)
+ {
+ 	/* Supervision frame */
+ 	struct prp_rct *rct = skb_get_PRP_rct(skb);
+@@ -480,9 +486,11 @@ void prp_fill_frame_info(__be16 proto, struct sk_buff *skb,
+ 		frame->skb_std = NULL;
+ 		frame->skb_prp = skb;
+ 		frame->sequence_nr = prp_get_skb_sequence_nr(rct);
+-		return;
++		return 0;
+ 	}
+ 	handle_std_frame(skb, frame);
++
++	return 0;
+ }
+ 
+ static int fill_frame_info(struct hsr_frame_info *frame,
+@@ -492,9 +500,10 @@ static int fill_frame_info(struct hsr_frame_info *frame,
+ 	struct hsr_vlan_ethhdr *vlan_hdr;
+ 	struct ethhdr *ethhdr;
+ 	__be16 proto;
++	int ret;
+ 
+-	/* Check if skb contains hsr_ethhdr */
+-	if (skb->mac_len < sizeof(struct hsr_ethhdr))
++	/* Check if skb contains ethhdr */
++	if (skb->mac_len < sizeof(struct ethhdr))
+ 		return -EINVAL;
+ 
+ 	memset(frame, 0, sizeof(*frame));
+@@ -521,7 +530,10 @@ static int fill_frame_info(struct hsr_frame_info *frame,
+ 
+ 	frame->is_from_san = false;
+ 	frame->port_rcv = port;
+-	hsr->proto_ops->fill_frame_info(proto, skb, frame);
++	ret = hsr->proto_ops->fill_frame_info(proto, skb, frame);
++	if (ret)
++		return ret;
++
+ 	check_local_dest(port->hsr, skb, frame);
+ 
+ 	return 0;
+diff --git a/net/hsr/hsr_forward.h b/net/hsr/hsr_forward.h
+index 618140d484ad6..008f45786f068 100644
+--- a/net/hsr/hsr_forward.h
++++ b/net/hsr/hsr_forward.h
+@@ -23,8 +23,8 @@ struct sk_buff *hsr_get_untagged_frame(struct hsr_frame_info *frame,
+ struct sk_buff *prp_get_untagged_frame(struct hsr_frame_info *frame,
+ 				       struct hsr_port *port);
+ bool prp_drop_frame(struct hsr_frame_info *frame, struct hsr_port *port);
+-void prp_fill_frame_info(__be16 proto, struct sk_buff *skb,
+-			 struct hsr_frame_info *frame);
+-void hsr_fill_frame_info(__be16 proto, struct sk_buff *skb,
+-			 struct hsr_frame_info *frame);
++int prp_fill_frame_info(__be16 proto, struct sk_buff *skb,
++			struct hsr_frame_info *frame);
++int hsr_fill_frame_info(__be16 proto, struct sk_buff *skb,
++			struct hsr_frame_info *frame);
+ #endif /* __HSR_FORWARD_H */
+diff --git a/net/hsr/hsr_main.h b/net/hsr/hsr_main.h
+index f79ca55d69868..9a25a5d349aef 100644
+--- a/net/hsr/hsr_main.h
++++ b/net/hsr/hsr_main.h
+@@ -192,8 +192,8 @@ struct hsr_proto_ops {
+ 					       struct hsr_port *port);
+ 	struct sk_buff * (*create_tagged_frame)(struct hsr_frame_info *frame,
+ 						struct hsr_port *port);
+-	void (*fill_frame_info)(__be16 proto, struct sk_buff *skb,
+-				struct hsr_frame_info *frame);
++	int (*fill_frame_info)(__be16 proto, struct sk_buff *skb,
++			       struct hsr_frame_info *frame);
+ 	bool (*invalid_dan_ingress_frame)(__be16 protocol);
+ 	void (*update_san_info)(struct hsr_node *node, bool is_sup);
+ };
+diff --git a/net/hsr/hsr_slave.c b/net/hsr/hsr_slave.c
+index 36d5fcf09c619..aecc05a28fa19 100644
+--- a/net/hsr/hsr_slave.c
++++ b/net/hsr/hsr_slave.c
+@@ -58,12 +58,11 @@ static rx_handler_result_t hsr_handle_frame(struct sk_buff **pskb)
+ 		goto finish_pass;
+ 
+ 	skb_push(skb, ETH_HLEN);
+-
+-	if (skb_mac_header(skb) != skb->data) {
+-		WARN_ONCE(1, "%s:%d: Malformed frame at source port %s)\n",
+-			  __func__, __LINE__, port->dev->name);
+-		goto finish_consume;
+-	}
++	skb_reset_mac_header(skb);
++	if ((!hsr->prot_version && protocol == htons(ETH_P_PRP)) ||
++	    protocol == htons(ETH_P_HSR))
++		skb_set_network_header(skb, ETH_HLEN + HSR_HLEN);
++	skb_reset_mac_len(skb);
+ 
+ 	hsr_forward_skb(skb, port);
+ 
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index 8cd2782a31e4c..9fb5077f8e9a7 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -1601,10 +1601,7 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
+ 		     IPV6_TLV_PADN, 0 };
+ 
+ 	/* we assume size > sizeof(ra) here */
+-	/* limit our allocations to order-0 page */
+-	size = min_t(int, size, SKB_MAX_ORDER(0, 0));
+ 	skb = sock_alloc_send_skb(sk, size, 1, &err);
+-
+ 	if (!skb)
+ 		return NULL;
+ 
+diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c
+index 47a0dc46cbdb0..28e44782c94d1 100644
+--- a/net/ipv6/reassembly.c
++++ b/net/ipv6/reassembly.c
+@@ -343,7 +343,7 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
+ 	hdr = ipv6_hdr(skb);
+ 	fhdr = (struct frag_hdr *)skb_transport_header(skb);
+ 
+-	if (!(fhdr->frag_off & htons(0xFFF9))) {
++	if (!(fhdr->frag_off & htons(IP6_OFFSET | IP6_MF))) {
+ 		/* It is not a fragmented frame */
+ 		skb->transport_header += sizeof(struct frag_hdr);
+ 		__IP6_INC_STATS(net,
+@@ -351,6 +351,8 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
+ 
+ 		IP6CB(skb)->nhoff = (u8 *)fhdr - skb_network_header(skb);
+ 		IP6CB(skb)->flags |= IP6SKB_FRAGMENTED;
++		IP6CB(skb)->frag_max_size = ntohs(hdr->payload_len) +
++					    sizeof(struct ipv6hdr);
+ 		return 1;
+ 	}
+ 
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index d6913784be2bd..be40f6b16199f 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -50,12 +50,6 @@ struct ieee80211_local;
+ #define IEEE80211_ENCRYPT_HEADROOM 8
+ #define IEEE80211_ENCRYPT_TAILROOM 18
+ 
+-/* IEEE 802.11 (Ch. 9.5 Defragmentation) requires support for concurrent
+- * reception of at least three fragmented frames. This limit can be increased
+- * by changing this define, at the cost of slower frame reassembly and
+- * increased memory use (about 2 kB of RAM per entry). */
+-#define IEEE80211_FRAGMENT_MAX 4
+-
+ /* power level hasn't been configured (or set to automatic) */
+ #define IEEE80211_UNSET_POWER_LEVEL	INT_MIN
+ 
+@@ -88,18 +82,6 @@ extern const u8 ieee80211_ac_to_qos_mask[IEEE80211_NUM_ACS];
+ 
+ #define IEEE80211_MAX_NAN_INSTANCE_ID 255
+ 
+-struct ieee80211_fragment_entry {
+-	struct sk_buff_head skb_list;
+-	unsigned long first_frag_time;
+-	u16 seq;
+-	u16 extra_len;
+-	u16 last_frag;
+-	u8 rx_queue;
+-	bool check_sequential_pn; /* needed for CCMP/GCMP */
+-	u8 last_pn[6]; /* PN of the last fragment if CCMP was used */
+-};
+-
+-
+ struct ieee80211_bss {
+ 	u32 device_ts_beacon, device_ts_presp;
+ 
+@@ -241,8 +223,15 @@ struct ieee80211_rx_data {
+ 	 */
+ 	int security_idx;
+ 
+-	u32 tkip_iv32;
+-	u16 tkip_iv16;
++	union {
++		struct {
++			u32 iv32;
++			u16 iv16;
++		} tkip;
++		struct {
++			u8 pn[IEEE80211_CCMP_PN_LEN];
++		} ccm_gcm;
++	};
+ };
+ 
+ struct ieee80211_csa_settings {
+@@ -906,9 +895,7 @@ struct ieee80211_sub_if_data {
+ 
+ 	char name[IFNAMSIZ];
+ 
+-	/* Fragment table for host-based reassembly */
+-	struct ieee80211_fragment_entry	fragments[IEEE80211_FRAGMENT_MAX];
+-	unsigned int fragment_next;
++	struct ieee80211_fragment_cache frags;
+ 
+ 	/* TID bitmap for NoAck policy */
+ 	u16 noack_map;
+@@ -2327,4 +2314,7 @@ u32 ieee80211_calc_expected_tx_airtime(struct ieee80211_hw *hw,
+ #define debug_noinline
+ #endif
+ 
++void ieee80211_init_frag_cache(struct ieee80211_fragment_cache *cache);
++void ieee80211_destroy_frag_cache(struct ieee80211_fragment_cache *cache);
++
+ #endif /* IEEE80211_I_H */
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index f3c3557a9e4c4..30589b4c09da4 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -8,7 +8,7 @@
+  * Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright (c) 2016        Intel Deutschland GmbH
+- * Copyright (C) 2018-2020 Intel Corporation
++ * Copyright (C) 2018-2021 Intel Corporation
+  */
+ #include <linux/slab.h>
+ #include <linux/kernel.h>
+@@ -679,16 +679,12 @@ static void ieee80211_set_multicast_list(struct net_device *dev)
+  */
+ static void ieee80211_teardown_sdata(struct ieee80211_sub_if_data *sdata)
+ {
+-	int i;
+-
+ 	/* free extra data */
+ 	ieee80211_free_keys(sdata, false);
+ 
+ 	ieee80211_debugfs_remove_netdev(sdata);
+ 
+-	for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++)
+-		__skb_queue_purge(&sdata->fragments[i].skb_list);
+-	sdata->fragment_next = 0;
++	ieee80211_destroy_frag_cache(&sdata->frags);
+ 
+ 	if (ieee80211_vif_is_mesh(&sdata->vif))
+ 		ieee80211_mesh_teardown_sdata(sdata);
+@@ -1950,8 +1946,7 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
+ 	sdata->wdev.wiphy = local->hw.wiphy;
+ 	sdata->local = local;
+ 
+-	for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++)
+-		skb_queue_head_init(&sdata->fragments[i].skb_list);
++	ieee80211_init_frag_cache(&sdata->frags);
+ 
+ 	INIT_LIST_HEAD(&sdata->key_list);
+ 
+diff --git a/net/mac80211/key.c b/net/mac80211/key.c
+index 8c5f829ff6d71..6a72c33679ba9 100644
+--- a/net/mac80211/key.c
++++ b/net/mac80211/key.c
+@@ -799,6 +799,7 @@ int ieee80211_key_link(struct ieee80211_key *key,
+ 		       struct ieee80211_sub_if_data *sdata,
+ 		       struct sta_info *sta)
+ {
++	static atomic_t key_color = ATOMIC_INIT(0);
+ 	struct ieee80211_key *old_key;
+ 	int idx = key->conf.keyidx;
+ 	bool pairwise = key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE;
+@@ -850,6 +851,12 @@ int ieee80211_key_link(struct ieee80211_key *key,
+ 	key->sdata = sdata;
+ 	key->sta = sta;
+ 
++	/*
++	 * Assign a unique ID to every key so we can easily prevent mixed
++	 * key and fragment cache attacks.
++	 */
++	key->color = atomic_inc_return(&key_color);
++
+ 	increment_tailroom_need_count(sdata);
+ 
+ 	ret = ieee80211_key_replace(sdata, sta, pairwise, old_key, key);
+diff --git a/net/mac80211/key.h b/net/mac80211/key.h
+index 7ad72e9b4991d..1e326c89d7217 100644
+--- a/net/mac80211/key.h
++++ b/net/mac80211/key.h
+@@ -128,6 +128,8 @@ struct ieee80211_key {
+ 	} debugfs;
+ #endif
+ 
++	unsigned int color;
++
+ 	/*
+ 	 * key config, must be last because it contains key
+ 	 * material as variable length member
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 98517423b0b76..ef8ff0bc66f17 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -6,7 +6,7 @@
+  * Copyright 2007-2010	Johannes Berg <johannes@sipsolutions.net>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright(c) 2015 - 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2020 Intel Corporation
++ * Copyright (C) 2018-2021 Intel Corporation
+  */
+ 
+ #include <linux/jiffies.h>
+@@ -2133,19 +2133,34 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
+ 	return result;
+ }
+ 
++void ieee80211_init_frag_cache(struct ieee80211_fragment_cache *cache)
++{
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(cache->entries); i++)
++		skb_queue_head_init(&cache->entries[i].skb_list);
++}
++
++void ieee80211_destroy_frag_cache(struct ieee80211_fragment_cache *cache)
++{
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(cache->entries); i++)
++		__skb_queue_purge(&cache->entries[i].skb_list);
++}
++
+ static inline struct ieee80211_fragment_entry *
+-ieee80211_reassemble_add(struct ieee80211_sub_if_data *sdata,
++ieee80211_reassemble_add(struct ieee80211_fragment_cache *cache,
+ 			 unsigned int frag, unsigned int seq, int rx_queue,
+ 			 struct sk_buff **skb)
+ {
+ 	struct ieee80211_fragment_entry *entry;
+ 
+-	entry = &sdata->fragments[sdata->fragment_next++];
+-	if (sdata->fragment_next >= IEEE80211_FRAGMENT_MAX)
+-		sdata->fragment_next = 0;
++	entry = &cache->entries[cache->next++];
++	if (cache->next >= IEEE80211_FRAGMENT_MAX)
++		cache->next = 0;
+ 
+-	if (!skb_queue_empty(&entry->skb_list))
+-		__skb_queue_purge(&entry->skb_list);
++	__skb_queue_purge(&entry->skb_list);
+ 
+ 	__skb_queue_tail(&entry->skb_list, *skb); /* no need for locking */
+ 	*skb = NULL;
+@@ -2160,14 +2175,14 @@ ieee80211_reassemble_add(struct ieee80211_sub_if_data *sdata,
+ }
+ 
+ static inline struct ieee80211_fragment_entry *
+-ieee80211_reassemble_find(struct ieee80211_sub_if_data *sdata,
++ieee80211_reassemble_find(struct ieee80211_fragment_cache *cache,
+ 			  unsigned int frag, unsigned int seq,
+ 			  int rx_queue, struct ieee80211_hdr *hdr)
+ {
+ 	struct ieee80211_fragment_entry *entry;
+ 	int i, idx;
+ 
+-	idx = sdata->fragment_next;
++	idx = cache->next;
+ 	for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++) {
+ 		struct ieee80211_hdr *f_hdr;
+ 		struct sk_buff *f_skb;
+@@ -2176,7 +2191,7 @@ ieee80211_reassemble_find(struct ieee80211_sub_if_data *sdata,
+ 		if (idx < 0)
+ 			idx = IEEE80211_FRAGMENT_MAX - 1;
+ 
+-		entry = &sdata->fragments[idx];
++		entry = &cache->entries[idx];
+ 		if (skb_queue_empty(&entry->skb_list) || entry->seq != seq ||
+ 		    entry->rx_queue != rx_queue ||
+ 		    entry->last_frag + 1 != frag)
+@@ -2204,15 +2219,27 @@ ieee80211_reassemble_find(struct ieee80211_sub_if_data *sdata,
+ 	return NULL;
+ }
+ 
++static bool requires_sequential_pn(struct ieee80211_rx_data *rx, __le16 fc)
++{
++	return rx->key &&
++		(rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP ||
++		 rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP_256 ||
++		 rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP ||
++		 rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP_256) &&
++		ieee80211_has_protected(fc);
++}
++
+ static ieee80211_rx_result debug_noinline
+ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
+ {
++	struct ieee80211_fragment_cache *cache = &rx->sdata->frags;
+ 	struct ieee80211_hdr *hdr;
+ 	u16 sc;
+ 	__le16 fc;
+ 	unsigned int frag, seq;
+ 	struct ieee80211_fragment_entry *entry;
+ 	struct sk_buff *skb;
++	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
+ 
+ 	hdr = (struct ieee80211_hdr *)rx->skb->data;
+ 	fc = hdr->frame_control;
+@@ -2228,6 +2255,9 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
+ 		goto out_no_led;
+ 	}
+ 
++	if (rx->sta)
++		cache = &rx->sta->frags;
++
+ 	if (likely(!ieee80211_has_morefrags(fc) && frag == 0))
+ 		goto out;
+ 
+@@ -2246,20 +2276,17 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
+ 
+ 	if (frag == 0) {
+ 		/* This is the first fragment of a new frame. */
+-		entry = ieee80211_reassemble_add(rx->sdata, frag, seq,
++		entry = ieee80211_reassemble_add(cache, frag, seq,
+ 						 rx->seqno_idx, &(rx->skb));
+-		if (rx->key &&
+-		    (rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP ||
+-		     rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP_256 ||
+-		     rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP ||
+-		     rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP_256) &&
+-		    ieee80211_has_protected(fc)) {
++		if (requires_sequential_pn(rx, fc)) {
+ 			int queue = rx->security_idx;
+ 
+ 			/* Store CCMP/GCMP PN so that we can verify that the
+ 			 * next fragment has a sequential PN value.
+ 			 */
+ 			entry->check_sequential_pn = true;
++			entry->is_protected = true;
++			entry->key_color = rx->key->color;
+ 			memcpy(entry->last_pn,
+ 			       rx->key->u.ccmp.rx_pn[queue],
+ 			       IEEE80211_CCMP_PN_LEN);
+@@ -2271,6 +2298,11 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
+ 				     sizeof(rx->key->u.gcmp.rx_pn[queue]));
+ 			BUILD_BUG_ON(IEEE80211_CCMP_PN_LEN !=
+ 				     IEEE80211_GCMP_PN_LEN);
++		} else if (rx->key &&
++			   (ieee80211_has_protected(fc) ||
++			    (status->flag & RX_FLAG_DECRYPTED))) {
++			entry->is_protected = true;
++			entry->key_color = rx->key->color;
+ 		}
+ 		return RX_QUEUED;
+ 	}
+@@ -2278,7 +2310,7 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
+ 	/* This is a fragment for a frame that should already be pending in
+ 	 * fragment cache. Add this fragment to the end of the pending entry.
+ 	 */
+-	entry = ieee80211_reassemble_find(rx->sdata, frag, seq,
++	entry = ieee80211_reassemble_find(cache, frag, seq,
+ 					  rx->seqno_idx, hdr);
+ 	if (!entry) {
+ 		I802_DEBUG_INC(rx->local->rx_handlers_drop_defrag);
+@@ -2293,25 +2325,39 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
+ 	if (entry->check_sequential_pn) {
+ 		int i;
+ 		u8 pn[IEEE80211_CCMP_PN_LEN], *rpn;
+-		int queue;
+ 
+-		if (!rx->key ||
+-		    (rx->key->conf.cipher != WLAN_CIPHER_SUITE_CCMP &&
+-		     rx->key->conf.cipher != WLAN_CIPHER_SUITE_CCMP_256 &&
+-		     rx->key->conf.cipher != WLAN_CIPHER_SUITE_GCMP &&
+-		     rx->key->conf.cipher != WLAN_CIPHER_SUITE_GCMP_256))
++		if (!requires_sequential_pn(rx, fc))
++			return RX_DROP_UNUSABLE;
++
++		/* Prevent mixed key and fragment cache attacks */
++		if (entry->key_color != rx->key->color)
+ 			return RX_DROP_UNUSABLE;
++
+ 		memcpy(pn, entry->last_pn, IEEE80211_CCMP_PN_LEN);
+ 		for (i = IEEE80211_CCMP_PN_LEN - 1; i >= 0; i--) {
+ 			pn[i]++;
+ 			if (pn[i])
+ 				break;
+ 		}
+-		queue = rx->security_idx;
+-		rpn = rx->key->u.ccmp.rx_pn[queue];
++
++		rpn = rx->ccm_gcm.pn;
+ 		if (memcmp(pn, rpn, IEEE80211_CCMP_PN_LEN))
+ 			return RX_DROP_UNUSABLE;
+ 		memcpy(entry->last_pn, pn, IEEE80211_CCMP_PN_LEN);
++	} else if (entry->is_protected &&
++		   (!rx->key ||
++		    (!ieee80211_has_protected(fc) &&
++		     !(status->flag & RX_FLAG_DECRYPTED)) ||
++		    rx->key->color != entry->key_color)) {
++		/* Drop this as a mixed key or fragment cache attack, even
++		 * if for TKIP Michael MIC should protect us, and WEP is a
++		 * lost cause anyway.
++		 */
++		return RX_DROP_UNUSABLE;
++	} else if (entry->is_protected && rx->key &&
++		   entry->key_color != rx->key->color &&
++		   (status->flag & RX_FLAG_DECRYPTED)) {
++		return RX_DROP_UNUSABLE;
+ 	}
+ 
+ 	skb_pull(rx->skb, ieee80211_hdrlen(fc));
+@@ -2504,13 +2550,13 @@ static bool ieee80211_frame_allowed(struct ieee80211_rx_data *rx, __le16 fc)
+ 	struct ethhdr *ehdr = (struct ethhdr *) rx->skb->data;
+ 
+ 	/*
+-	 * Allow EAPOL frames to us/the PAE group address regardless
+-	 * of whether the frame was encrypted or not.
++	 * Allow EAPOL frames to us/the PAE group address regardless of
++	 * whether the frame was encrypted or not, and always disallow
++	 * all other destination addresses for them.
+ 	 */
+-	if (ehdr->h_proto == rx->sdata->control_port_protocol &&
+-	    (ether_addr_equal(ehdr->h_dest, rx->sdata->vif.addr) ||
+-	     ether_addr_equal(ehdr->h_dest, pae_group_addr)))
+-		return true;
++	if (unlikely(ehdr->h_proto == rx->sdata->control_port_protocol))
++		return ether_addr_equal(ehdr->h_dest, rx->sdata->vif.addr) ||
++		       ether_addr_equal(ehdr->h_dest, pae_group_addr);
+ 
+ 	if (ieee80211_802_1x_port_control(rx) ||
+ 	    ieee80211_drop_unencrypted(rx, fc))
+@@ -2535,8 +2581,28 @@ static void ieee80211_deliver_skb_to_local_stack(struct sk_buff *skb,
+ 		cfg80211_rx_control_port(dev, skb, noencrypt);
+ 		dev_kfree_skb(skb);
+ 	} else {
++		struct ethhdr *ehdr = (void *)skb_mac_header(skb);
++
+ 		memset(skb->cb, 0, sizeof(skb->cb));
+ 
++		/*
++		 * 802.1X over 802.11 requires that the authenticator address
++		 * be used for EAPOL frames. However, 802.1X allows the use of
++		 * the PAE group address instead. If the interface is part of
++		 * a bridge and we pass the frame with the PAE group address,
++		 * then the bridge will forward it to the network (even if the
++		 * client was not associated yet), which isn't supposed to
++		 * happen.
++		 * To avoid that, rewrite the destination address to our own
++		 * address, so that the authenticator (e.g. hostapd) will see
++		 * the frame, but bridge won't forward it anywhere else. Note
++		 * that due to earlier filtering, the only other address can
++		 * be the PAE group address.
++		 */
++		if (unlikely(skb->protocol == sdata->control_port_protocol &&
++			     !ether_addr_equal(ehdr->h_dest, sdata->vif.addr)))
++			ether_addr_copy(ehdr->h_dest, sdata->vif.addr);
++
+ 		/* deliver to local stack */
+ 		if (rx->list)
+ 			list_add_tail(&skb->list, rx->list);
+@@ -2576,6 +2642,7 @@ ieee80211_deliver_skb(struct ieee80211_rx_data *rx)
+ 	if ((sdata->vif.type == NL80211_IFTYPE_AP ||
+ 	     sdata->vif.type == NL80211_IFTYPE_AP_VLAN) &&
+ 	    !(sdata->flags & IEEE80211_SDATA_DONT_BRIDGE_PACKETS) &&
++	    ehdr->h_proto != rx->sdata->control_port_protocol &&
+ 	    (sdata->vif.type != NL80211_IFTYPE_AP_VLAN || !sdata->u.vlan.sta)) {
+ 		if (is_multicast_ether_addr(ehdr->h_dest) &&
+ 		    ieee80211_vif_get_num_mcast_if(sdata) != 0) {
+@@ -2685,7 +2752,7 @@ __ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx, u8 data_offset)
+ 	if (ieee80211_data_to_8023_exthdr(skb, &ethhdr,
+ 					  rx->sdata->vif.addr,
+ 					  rx->sdata->vif.type,
+-					  data_offset))
++					  data_offset, true))
+ 		return RX_DROP_UNUSABLE;
+ 
+ 	ieee80211_amsdu_to_8023s(skb, &frame_list, dev->dev_addr,
+@@ -2742,6 +2809,23 @@ ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx)
+ 	if (is_multicast_ether_addr(hdr->addr1))
+ 		return RX_DROP_UNUSABLE;
+ 
++	if (rx->key) {
++		/*
++		 * We should not receive A-MSDUs on pre-HT connections,
++		 * and HT connections cannot use old ciphers. Thus drop
++		 * them, as in those cases we couldn't even have SPP
++		 * A-MSDUs or such.
++		 */
++		switch (rx->key->conf.cipher) {
++		case WLAN_CIPHER_SUITE_WEP40:
++		case WLAN_CIPHER_SUITE_WEP104:
++		case WLAN_CIPHER_SUITE_TKIP:
++			return RX_DROP_UNUSABLE;
++		default:
++			break;
++		}
++	}
++
+ 	return __ieee80211_rx_h_amsdu(rx, 0);
+ }
+ 
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index ec6973ee88ef4..f2fb69da9b6e1 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -4,7 +4,7 @@
+  * Copyright 2006-2007	Jiri Benc <jbenc@suse.cz>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright (C) 2015 - 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2020 Intel Corporation
++ * Copyright (C) 2018-2021 Intel Corporation
+  */
+ 
+ #include <linux/module.h>
+@@ -392,6 +392,8 @@ struct sta_info *sta_info_alloc(struct ieee80211_sub_if_data *sdata,
+ 
+ 	u64_stats_init(&sta->rx_stats.syncp);
+ 
++	ieee80211_init_frag_cache(&sta->frags);
++
+ 	sta->sta_state = IEEE80211_STA_NONE;
+ 
+ 	/* Mark TID as unreserved */
+@@ -1102,6 +1104,8 @@ static void __sta_info_destroy_part2(struct sta_info *sta)
+ 
+ 	ieee80211_sta_debugfs_remove(sta);
+ 
++	ieee80211_destroy_frag_cache(&sta->frags);
++
+ 	cleanup_single_sta(sta);
+ }
+ 
+diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
+index 7afd07636b81d..355e006432ccc 100644
+--- a/net/mac80211/sta_info.h
++++ b/net/mac80211/sta_info.h
+@@ -3,7 +3,7 @@
+  * Copyright 2002-2005, Devicescape Software, Inc.
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright(c) 2015-2017 Intel Deutschland GmbH
+- * Copyright(c) 2020 Intel Corporation
++ * Copyright(c) 2020-2021 Intel Corporation
+  */
+ 
+ #ifndef STA_INFO_H
+@@ -436,6 +436,34 @@ struct ieee80211_sta_rx_stats {
+ 	u64 msdu[IEEE80211_NUM_TIDS + 1];
+ };
+ 
++/*
++ * IEEE 802.11-2016 (10.6 "Defragmentation") recommends support for "concurrent
++ * reception of at least one MSDU per access category per associated STA"
++ * on APs, or "at least one MSDU per access category" on other interface types.
++ *
++ * This limit can be increased by changing this define, at the cost of slower
++ * frame reassembly and increased memory use while fragments are pending.
++ */
++#define IEEE80211_FRAGMENT_MAX 4
++
++struct ieee80211_fragment_entry {
++	struct sk_buff_head skb_list;
++	unsigned long first_frag_time;
++	u16 seq;
++	u16 extra_len;
++	u16 last_frag;
++	u8 rx_queue;
++	u8 check_sequential_pn:1, /* needed for CCMP/GCMP */
++	   is_protected:1;
++	u8 last_pn[6]; /* PN of the last fragment if CCMP was used */
++	unsigned int key_color;
++};
++
++struct ieee80211_fragment_cache {
++	struct ieee80211_fragment_entry	entries[IEEE80211_FRAGMENT_MAX];
++	unsigned int next;
++};
++
+ /*
+  * The bandwidth threshold below which the per-station CoDel parameters will be
+  * scaled to be more lenient (to prevent starvation of slow stations). This
+@@ -529,6 +557,7 @@ struct ieee80211_sta_rx_stats {
+  * @status_stats.last_ack_signal: last ACK signal
+  * @status_stats.ack_signal_filled: last ACK signal validity
+  * @status_stats.avg_ack_signal: average ACK signal
++ * @frags: fragment cache
+  */
+ struct sta_info {
+ 	/* General information, mostly static */
+@@ -637,6 +666,8 @@ struct sta_info {
+ 
+ 	struct cfg80211_chan_def tdls_chandef;
+ 
++	struct ieee80211_fragment_cache frags;
++
+ 	/* keep last! */
+ 	struct ieee80211_sta sta;
+ };
+diff --git a/net/mac80211/wpa.c b/net/mac80211/wpa.c
+index 91bf32af55e9a..bca47fad5a162 100644
+--- a/net/mac80211/wpa.c
++++ b/net/mac80211/wpa.c
+@@ -3,6 +3,7 @@
+  * Copyright 2002-2004, Instant802 Networks, Inc.
+  * Copyright 2008, Jouni Malinen <j@w1.fi>
+  * Copyright (C) 2016-2017 Intel Deutschland GmbH
++ * Copyright (C) 2020-2021 Intel Corporation
+  */
+ 
+ #include <linux/netdevice.h>
+@@ -167,8 +168,8 @@ ieee80211_rx_h_michael_mic_verify(struct ieee80211_rx_data *rx)
+ 
+ update_iv:
+ 	/* update IV in key information to be able to detect replays */
+-	rx->key->u.tkip.rx[rx->security_idx].iv32 = rx->tkip_iv32;
+-	rx->key->u.tkip.rx[rx->security_idx].iv16 = rx->tkip_iv16;
++	rx->key->u.tkip.rx[rx->security_idx].iv32 = rx->tkip.iv32;
++	rx->key->u.tkip.rx[rx->security_idx].iv16 = rx->tkip.iv16;
+ 
+ 	return RX_CONTINUE;
+ 
+@@ -294,8 +295,8 @@ ieee80211_crypto_tkip_decrypt(struct ieee80211_rx_data *rx)
+ 					  key, skb->data + hdrlen,
+ 					  skb->len - hdrlen, rx->sta->sta.addr,
+ 					  hdr->addr1, hwaccel, rx->security_idx,
+-					  &rx->tkip_iv32,
+-					  &rx->tkip_iv16);
++					  &rx->tkip.iv32,
++					  &rx->tkip.iv16);
+ 	if (res != TKIP_DECRYPT_OK)
+ 		return RX_DROP_UNUSABLE;
+ 
+@@ -553,6 +554,8 @@ ieee80211_crypto_ccmp_decrypt(struct ieee80211_rx_data *rx,
+ 		}
+ 
+ 		memcpy(key->u.ccmp.rx_pn[queue], pn, IEEE80211_CCMP_PN_LEN);
++		if (unlikely(ieee80211_is_frag(hdr)))
++			memcpy(rx->ccm_gcm.pn, pn, IEEE80211_CCMP_PN_LEN);
+ 	}
+ 
+ 	/* Remove CCMP header and MIC */
+@@ -781,6 +784,8 @@ ieee80211_crypto_gcmp_decrypt(struct ieee80211_rx_data *rx)
+ 		}
+ 
+ 		memcpy(key->u.gcmp.rx_pn[queue], pn, IEEE80211_GCMP_PN_LEN);
++		if (unlikely(ieee80211_is_frag(hdr)))
++			memcpy(rx->ccm_gcm.pn, pn, IEEE80211_CCMP_PN_LEN);
+ 	}
+ 
+ 	/* Remove GCMP header and MIC */
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index a044dd43411d9..91034a221983c 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -127,7 +127,6 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 			memcpy(mp_opt->hmac, ptr, MPTCPOPT_HMAC_LEN);
+ 			pr_debug("MP_JOIN hmac");
+ 		} else {
+-			pr_warn("MP_JOIN bad option size");
+ 			mp_opt->mp_join = 0;
+ 		}
+ 		break;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 27f6672589ce2..7832b20baac2e 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -758,11 +758,17 @@ static bool mptcp_skb_can_collapse_to(u64 write_seq,
+ 	return mpext && mpext->data_seq + mpext->data_len == write_seq;
+ }
+ 
++/* we can append data to the given data frag if:
++ * - there is space available in the backing page_frag
++ * - the data frag tail matches the current page_frag free offset
++ * - the data frag end sequence number matches the current write seq
++ */
+ static bool mptcp_frag_can_collapse_to(const struct mptcp_sock *msk,
+ 				       const struct page_frag *pfrag,
+ 				       const struct mptcp_data_frag *df)
+ {
+ 	return df && pfrag->page == df->page &&
++		pfrag->offset == (df->offset + df->data_len) &&
+ 		df->data_seq + df->data_len == msk->write_seq;
+ }
+ 
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 01a675fa2aa2b..bdd6af38a9ae3 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -740,7 +740,6 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ 
+ 	data_len = mpext->data_len;
+ 	if (data_len == 0) {
+-		pr_err("Infinite mapping not handled");
+ 		MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX);
+ 		return MAPPING_INVALID;
+ 	}
+diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
+index b03feb6e1226a..f4029fc2c8846 100644
+--- a/net/netfilter/nf_flow_table_core.c
++++ b/net/netfilter/nf_flow_table_core.c
+@@ -259,8 +259,7 @@ void flow_offload_refresh(struct nf_flowtable *flow_table,
+ {
+ 	flow->timeout = nf_flowtable_time_stamp + NF_FLOW_TIMEOUT;
+ 
+-	if (likely(!nf_flowtable_hw_offload(flow_table) ||
+-		   !test_and_clear_bit(NF_FLOW_HW_REFRESH, &flow->flags)))
++	if (likely(!nf_flowtable_hw_offload(flow_table)))
+ 		return;
+ 
+ 	nf_flow_offload_add(flow_table, flow);
+diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
+index 1c5460e7bce87..92047cea3c170 100644
+--- a/net/netfilter/nf_flow_table_offload.c
++++ b/net/netfilter/nf_flow_table_offload.c
+@@ -753,10 +753,11 @@ static void flow_offload_work_add(struct flow_offload_work *offload)
+ 
+ 	err = flow_offload_rule_add(offload, flow_rule);
+ 	if (err < 0)
+-		set_bit(NF_FLOW_HW_REFRESH, &offload->flow->flags);
+-	else
+-		set_bit(IPS_HW_OFFLOAD_BIT, &offload->flow->ct->status);
++		goto out;
++
++	set_bit(IPS_HW_OFFLOAD_BIT, &offload->flow->ct->status);
+ 
++out:
+ 	nf_flow_offload_destroy(flow_rule);
+ }
+ 
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 9944523f5c2c3..2d73f265b12c9 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -408,8 +408,8 @@ int pipapo_refill(unsigned long *map, int len, int rules, unsigned long *dst,
+  *
+  * Return: true on match, false otherwise.
+  */
+-static bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+-			      const u32 *key, const struct nft_set_ext **ext)
++bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
++		       const u32 *key, const struct nft_set_ext **ext)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+ 	unsigned long *res_map, *fill_map;
+diff --git a/net/netfilter/nft_set_pipapo.h b/net/netfilter/nft_set_pipapo.h
+index 25a75591583eb..d84afb8fa79a1 100644
+--- a/net/netfilter/nft_set_pipapo.h
++++ b/net/netfilter/nft_set_pipapo.h
+@@ -178,6 +178,8 @@ struct nft_pipapo_elem {
+ 
+ int pipapo_refill(unsigned long *map, int len, int rules, unsigned long *dst,
+ 		  union nft_pipapo_map_bucket *mt, bool match_only);
++bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
++		       const u32 *key, const struct nft_set_ext **ext);
+ 
+ /**
+  * pipapo_and_field_buckets_4bit() - Intersect 4-bit buckets
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index d65ae0e23028d..eabdb8d552eef 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -1131,6 +1131,9 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	bool map_index;
+ 	int i, ret = 0;
+ 
++	if (unlikely(!irq_fpu_usable()))
++		return nft_pipapo_lookup(net, set, key, ext);
++
+ 	m = rcu_dereference(priv->match);
+ 
+ 	/* This also protects access to all data related to scratch maps */
+diff --git a/net/openvswitch/meter.c b/net/openvswitch/meter.c
+index 8fbefd52af7f2..e594b4d6b58a9 100644
+--- a/net/openvswitch/meter.c
++++ b/net/openvswitch/meter.c
+@@ -611,6 +611,14 @@ bool ovs_meter_execute(struct datapath *dp, struct sk_buff *skb,
+ 	spin_lock(&meter->lock);
+ 
+ 	long_delta_ms = (now_ms - meter->used); /* ms */
++	if (long_delta_ms < 0) {
++		/* This condition means that we have several threads fighting
++		 * for a meter lock, and the one who received the packets a
++		 * bit later wins. Assuming that all racing threads received
++		 * packets at the same time to avoid overflow.
++		 */
++		long_delta_ms = 0;
++	}
+ 
+ 	/* Make sure delta_ms will not be too large, so that bucket will not
+ 	 * wrap around below.
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 449625c2ccc7a..ddb68aa836f71 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -421,7 +421,8 @@ static __u32 tpacket_get_timestamp(struct sk_buff *skb, struct timespec64 *ts,
+ 	    ktime_to_timespec64_cond(shhwtstamps->hwtstamp, ts))
+ 		return TP_STATUS_TS_RAW_HARDWARE;
+ 
+-	if (ktime_to_timespec64_cond(skb->tstamp, ts))
++	if ((flags & SOF_TIMESTAMPING_SOFTWARE) &&
++	    ktime_to_timespec64_cond(skb->tstamp, ts))
+ 		return TP_STATUS_TS_SOFTWARE;
+ 
+ 	return 0;
+@@ -2339,7 +2340,12 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ 
+ 	skb_copy_bits(skb, 0, h.raw + macoff, snaplen);
+ 
+-	if (!(ts_status = tpacket_get_timestamp(skb, &ts, po->tp_tstamp)))
++	/* Always timestamp; prefer an existing software timestamp taken
++	 * closer to the time of capture.
++	 */
++	ts_status = tpacket_get_timestamp(skb, &ts,
++					  po->tp_tstamp | SOF_TIMESTAMPING_SOFTWARE);
++	if (!ts_status)
+ 		ktime_get_real_ts64(&ts);
+ 
+ 	status |= ts_status;
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 9383dc29ead5d..a281da07bb1d2 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1625,7 +1625,7 @@ int tcf_classify_ingress(struct sk_buff *skb,
+ 
+ 	/* If we missed on some chain */
+ 	if (ret == TC_ACT_UNSPEC && last_executed_chain) {
+-		ext = skb_ext_add(skb, TC_SKB_EXT);
++		ext = tc_skb_ext_alloc(skb);
+ 		if (WARN_ON_ONCE(!ext))
+ 			return TC_ACT_SHOT;
+ 		ext->chain = last_executed_chain;
+diff --git a/net/sched/sch_dsmark.c b/net/sched/sch_dsmark.c
+index 2b88710994d71..76ed1a05ded27 100644
+--- a/net/sched/sch_dsmark.c
++++ b/net/sched/sch_dsmark.c
+@@ -406,7 +406,8 @@ static void dsmark_reset(struct Qdisc *sch)
+ 	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+ 
+ 	pr_debug("%s(sch %p,[qdisc %p])\n", __func__, sch, p);
+-	qdisc_reset(p->q);
++	if (p->q)
++		qdisc_reset(p->q);
+ 	sch->qstats.backlog = 0;
+ 	sch->q.qlen = 0;
+ }
+diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
+index 949163fe68afd..cac684952edc5 100644
+--- a/net/sched/sch_fq_pie.c
++++ b/net/sched/sch_fq_pie.c
+@@ -138,8 +138,15 @@ static int fq_pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 
+ 	/* Classifies packet into corresponding flow */
+ 	idx = fq_pie_classify(skb, sch, &ret);
+-	sel_flow = &q->flows[idx];
++	if (idx == 0) {
++		if (ret & __NET_XMIT_BYPASS)
++			qdisc_qstats_drop(sch);
++		__qdisc_drop(skb, to_free);
++		return ret;
++	}
++	idx--;
+ 
++	sel_flow = &q->flows[idx];
+ 	/* Checks whether adding a new packet would exceed memory limit */
+ 	get_pie_cb(skb)->mem_usage = skb->truesize;
+ 	memory_limited = q->memory_usage > q->memory_limit + skb->truesize;
+@@ -297,9 +304,9 @@ static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt,
+ 			goto flow_error;
+ 		}
+ 		q->flows_cnt = nla_get_u32(tb[TCA_FQ_PIE_FLOWS]);
+-		if (!q->flows_cnt || q->flows_cnt >= 65536) {
++		if (!q->flows_cnt || q->flows_cnt > 65536) {
+ 			NL_SET_ERR_MSG_MOD(extack,
+-					   "Number of flows must range in [1..65535]");
++					   "Number of flows must range in [1..65536]");
+ 			goto flow_error;
+ 		}
+ 	}
+@@ -367,7 +374,7 @@ static void fq_pie_timer(struct timer_list *t)
+ 	struct fq_pie_sched_data *q = from_timer(q, t, adapt_timer);
+ 	struct Qdisc *sch = q->sch;
+ 	spinlock_t *root_lock; /* to lock qdisc for probability calculations */
+-	u16 idx;
++	u32 idx;
+ 
+ 	root_lock = qdisc_lock(qdisc_root_sleeping(sch));
+ 	spin_lock(root_lock);
+@@ -388,7 +395,7 @@ static int fq_pie_init(struct Qdisc *sch, struct nlattr *opt,
+ {
+ 	struct fq_pie_sched_data *q = qdisc_priv(sch);
+ 	int err;
+-	u16 idx;
++	u32 idx;
+ 
+ 	pie_params_init(&q->p_params);
+ 	sch->limit = 10 * 1024;
+@@ -500,7 +507,7 @@ static int fq_pie_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
+ static void fq_pie_reset(struct Qdisc *sch)
+ {
+ 	struct fq_pie_sched_data *q = qdisc_priv(sch);
+-	u16 idx;
++	u32 idx;
+ 
+ 	INIT_LIST_HEAD(&q->new_flows);
+ 	INIT_LIST_HEAD(&q->old_flows);
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 49eae93d1489d..854d2b38db856 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -35,6 +35,25 @@
+ const struct Qdisc_ops *default_qdisc_ops = &pfifo_fast_ops;
+ EXPORT_SYMBOL(default_qdisc_ops);
+ 
++static void qdisc_maybe_clear_missed(struct Qdisc *q,
++				     const struct netdev_queue *txq)
++{
++	clear_bit(__QDISC_STATE_MISSED, &q->state);
++
++	/* Make sure the below netif_xmit_frozen_or_stopped()
++	 * checking happens after clearing STATE_MISSED.
++	 */
++	smp_mb__after_atomic();
++
++	/* Checking netif_xmit_frozen_or_stopped() again to
++	 * make sure STATE_MISSED is set if the STATE_MISSED
++	 * set by netif_tx_wake_queue()'s rescheduling of
++	 * net_tx_action() is cleared by the above clear_bit().
++	 */
++	if (!netif_xmit_frozen_or_stopped(txq))
++		set_bit(__QDISC_STATE_MISSED, &q->state);
++}
++
+ /* Main transmission queue. */
+ 
+ /* Modifications to data participating in scheduling must be protected with
+@@ -74,6 +93,7 @@ static inline struct sk_buff *__skb_dequeue_bad_txq(struct Qdisc *q)
+ 			}
+ 		} else {
+ 			skb = SKB_XOFF_MAGIC;
++			qdisc_maybe_clear_missed(q, txq);
+ 		}
+ 	}
+ 
+@@ -242,6 +262,7 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate,
+ 			}
+ 		} else {
+ 			skb = NULL;
++			qdisc_maybe_clear_missed(q, txq);
+ 		}
+ 		if (lock)
+ 			spin_unlock(lock);
+@@ -251,8 +272,10 @@ validate:
+ 	*validate = true;
+ 
+ 	if ((q->flags & TCQ_F_ONETXQUEUE) &&
+-	    netif_xmit_frozen_or_stopped(txq))
++	    netif_xmit_frozen_or_stopped(txq)) {
++		qdisc_maybe_clear_missed(q, txq);
+ 		return skb;
++	}
+ 
+ 	skb = qdisc_dequeue_skb_bad_txq(q);
+ 	if (unlikely(skb)) {
+@@ -311,6 +334,8 @@ bool sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q,
+ 		HARD_TX_LOCK(dev, txq, smp_processor_id());
+ 		if (!netif_xmit_frozen_or_stopped(txq))
+ 			skb = dev_hard_start_xmit(skb, dev, txq, &ret);
++		else
++			qdisc_maybe_clear_missed(q, txq);
+ 
+ 		HARD_TX_UNLOCK(dev, txq);
+ 	} else {
+@@ -640,8 +665,10 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
+ {
+ 	struct pfifo_fast_priv *priv = qdisc_priv(qdisc);
+ 	struct sk_buff *skb = NULL;
++	bool need_retry = true;
+ 	int band;
+ 
++retry:
+ 	for (band = 0; band < PFIFO_FAST_BANDS && !skb; band++) {
+ 		struct skb_array *q = band2list(priv, band);
+ 
+@@ -652,6 +679,23 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
+ 	}
+ 	if (likely(skb)) {
+ 		qdisc_update_stats_at_dequeue(qdisc, skb);
++	} else if (need_retry &&
++		   test_bit(__QDISC_STATE_MISSED, &qdisc->state)) {
++		/* Delay clearing the STATE_MISSED here to reduce
++		 * the overhead of the second spin_trylock() in
++		 * qdisc_run_begin() and __netif_schedule() calling
++		 * in qdisc_run_end().
++		 */
++		clear_bit(__QDISC_STATE_MISSED, &qdisc->state);
++
++		/* Make sure dequeuing happens after clearing
++		 * STATE_MISSED.
++		 */
++		smp_mb__after_atomic();
++
++		need_retry = false;
++
++		goto retry;
+ 	} else {
+ 		WRITE_ONCE(qdisc->empty, true);
+ 	}
+@@ -1158,8 +1202,10 @@ static void dev_reset_queue(struct net_device *dev,
+ 	qdisc_reset(qdisc);
+ 
+ 	spin_unlock_bh(qdisc_lock(qdisc));
+-	if (nolock)
++	if (nolock) {
++		clear_bit(__QDISC_STATE_MISSED, &qdisc->state);
+ 		spin_unlock_bh(&qdisc->seqlock);
++	}
+ }
+ 
+ static bool some_qdisc_is_busy(struct net_device *dev)
+diff --git a/net/smc/smc_ism.c b/net/smc/smc_ism.c
+index 6abbdd09a580c..8e33c0128d73a 100644
+--- a/net/smc/smc_ism.c
++++ b/net/smc/smc_ism.c
+@@ -304,6 +304,14 @@ struct smcd_dev *smcd_alloc_dev(struct device *parent, const char *name,
+ 		return NULL;
+ 	}
+ 
++	smcd->event_wq = alloc_ordered_workqueue("ism_evt_wq-%s)",
++						 WQ_MEM_RECLAIM, name);
++	if (!smcd->event_wq) {
++		kfree(smcd->conn);
++		kfree(smcd);
++		return NULL;
++	}
++
+ 	smcd->dev.parent = parent;
+ 	smcd->dev.release = smcd_release;
+ 	device_initialize(&smcd->dev);
+@@ -317,19 +325,14 @@ struct smcd_dev *smcd_alloc_dev(struct device *parent, const char *name,
+ 	INIT_LIST_HEAD(&smcd->vlan);
+ 	INIT_LIST_HEAD(&smcd->lgr_list);
+ 	init_waitqueue_head(&smcd->lgrs_deleted);
+-	smcd->event_wq = alloc_ordered_workqueue("ism_evt_wq-%s)",
+-						 WQ_MEM_RECLAIM, name);
+-	if (!smcd->event_wq) {
+-		kfree(smcd->conn);
+-		kfree(smcd);
+-		return NULL;
+-	}
+ 	return smcd;
+ }
+ EXPORT_SYMBOL_GPL(smcd_alloc_dev);
+ 
+ int smcd_register_dev(struct smcd_dev *smcd)
+ {
++	int rc;
++
+ 	mutex_lock(&smcd_dev_list.mutex);
+ 	if (list_empty(&smcd_dev_list.list)) {
+ 		u8 *system_eid = NULL;
+@@ -349,7 +352,14 @@ int smcd_register_dev(struct smcd_dev *smcd)
+ 			    dev_name(&smcd->dev), smcd->pnetid,
+ 			    smcd->pnetid_by_user ? " (user defined)" : "");
+ 
+-	return device_add(&smcd->dev);
++	rc = device_add(&smcd->dev);
++	if (rc) {
++		mutex_lock(&smcd_dev_list.mutex);
++		list_del(&smcd->list);
++		mutex_unlock(&smcd_dev_list.mutex);
++	}
++
++	return rc;
+ }
+ EXPORT_SYMBOL_GPL(smcd_register_dev);
+ 
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 4a0e8e458a9ad..84c8a534029c9 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1680,13 +1680,6 @@ call_reserveresult(struct rpc_task *task)
+ 		return;
+ 	}
+ 
+-	/*
+-	 * Even though there was an error, we may have acquired
+-	 * a request slot somehow.  Make sure not to leak it.
+-	 */
+-	if (task->tk_rqstp)
+-		xprt_release(task);
+-
+ 	switch (status) {
+ 	case -ENOMEM:
+ 		rpc_delay(task, HZ >> 2);
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index a85759d8cde85..9a50764be9160 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -70,6 +70,7 @@
+ static void	 xprt_init(struct rpc_xprt *xprt, struct net *net);
+ static __be32	xprt_alloc_xid(struct rpc_xprt *xprt);
+ static void	 xprt_destroy(struct rpc_xprt *xprt);
++static void	 xprt_request_init(struct rpc_task *task);
+ 
+ static DEFINE_SPINLOCK(xprt_list_lock);
+ static LIST_HEAD(xprt_list);
+@@ -1574,17 +1575,40 @@ xprt_transmit(struct rpc_task *task)
+ 	spin_unlock(&xprt->queue_lock);
+ }
+ 
+-static void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task)
++static void xprt_complete_request_init(struct rpc_task *task)
++{
++	if (task->tk_rqstp)
++		xprt_request_init(task);
++}
++
++void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task)
+ {
+ 	set_bit(XPRT_CONGESTED, &xprt->state);
+-	rpc_sleep_on(&xprt->backlog, task, NULL);
++	rpc_sleep_on(&xprt->backlog, task, xprt_complete_request_init);
++}
++EXPORT_SYMBOL_GPL(xprt_add_backlog);
++
++static bool __xprt_set_rq(struct rpc_task *task, void *data)
++{
++	struct rpc_rqst *req = data;
++
++	if (task->tk_rqstp == NULL) {
++		memset(req, 0, sizeof(*req));	/* mark unused */
++		task->tk_rqstp = req;
++		return true;
++	}
++	return false;
+ }
+ 
+-static void xprt_wake_up_backlog(struct rpc_xprt *xprt)
++bool xprt_wake_up_backlog(struct rpc_xprt *xprt, struct rpc_rqst *req)
+ {
+-	if (rpc_wake_up_next(&xprt->backlog) == NULL)
++	if (rpc_wake_up_first(&xprt->backlog, __xprt_set_rq, req) == NULL) {
+ 		clear_bit(XPRT_CONGESTED, &xprt->state);
++		return false;
++	}
++	return true;
+ }
++EXPORT_SYMBOL_GPL(xprt_wake_up_backlog);
+ 
+ static bool xprt_throttle_congested(struct rpc_xprt *xprt, struct rpc_task *task)
+ {
+@@ -1594,7 +1618,7 @@ static bool xprt_throttle_congested(struct rpc_xprt *xprt, struct rpc_task *task
+ 		goto out;
+ 	spin_lock(&xprt->reserve_lock);
+ 	if (test_bit(XPRT_CONGESTED, &xprt->state)) {
+-		rpc_sleep_on(&xprt->backlog, task, NULL);
++		xprt_add_backlog(xprt, task);
+ 		ret = true;
+ 	}
+ 	spin_unlock(&xprt->reserve_lock);
+@@ -1671,11 +1695,11 @@ EXPORT_SYMBOL_GPL(xprt_alloc_slot);
+ void xprt_free_slot(struct rpc_xprt *xprt, struct rpc_rqst *req)
+ {
+ 	spin_lock(&xprt->reserve_lock);
+-	if (!xprt_dynamic_free_slot(xprt, req)) {
++	if (!xprt_wake_up_backlog(xprt, req) &&
++	    !xprt_dynamic_free_slot(xprt, req)) {
+ 		memset(req, 0, sizeof(*req));	/* mark unused */
+ 		list_add(&req->rq_list, &xprt->free);
+ 	}
+-	xprt_wake_up_backlog(xprt);
+ 	spin_unlock(&xprt->reserve_lock);
+ }
+ EXPORT_SYMBOL_GPL(xprt_free_slot);
+@@ -1862,10 +1886,10 @@ void xprt_release(struct rpc_task *task)
+ 	xdr_free_bvec(&req->rq_snd_buf);
+ 	if (req->rq_cred != NULL)
+ 		put_rpccred(req->rq_cred);
+-	task->tk_rqstp = NULL;
+ 	if (req->rq_release_snd_buf)
+ 		req->rq_release_snd_buf(req);
+ 
++	task->tk_rqstp = NULL;
+ 	if (likely(!bc_prealloc(req)))
+ 		xprt->ops->free_slot(xprt, req);
+ 	else
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index f93ff4282bf4f..c26db0a379967 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -520,9 +520,8 @@ xprt_rdma_alloc_slot(struct rpc_xprt *xprt, struct rpc_task *task)
+ 	return;
+ 
+ out_sleep:
+-	set_bit(XPRT_CONGESTED, &xprt->state);
+-	rpc_sleep_on(&xprt->backlog, task, NULL);
+ 	task->tk_status = -EAGAIN;
++	xprt_add_backlog(xprt, task);
+ }
+ 
+ /**
+@@ -537,10 +536,11 @@ xprt_rdma_free_slot(struct rpc_xprt *xprt, struct rpc_rqst *rqst)
+ 	struct rpcrdma_xprt *r_xprt =
+ 		container_of(xprt, struct rpcrdma_xprt, rx_xprt);
+ 
+-	memset(rqst, 0, sizeof(*rqst));
+-	rpcrdma_buffer_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst));
+-	if (unlikely(!rpc_wake_up_next(&xprt->backlog)))
+-		clear_bit(XPRT_CONGESTED, &xprt->state);
++	rpcrdma_reply_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst));
++	if (!xprt_wake_up_backlog(xprt, rqst)) {
++		memset(rqst, 0, sizeof(*rqst));
++		rpcrdma_buffer_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst));
++	}
+ }
+ 
+ static bool rpcrdma_check_regbuf(struct rpcrdma_xprt *r_xprt,
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 04325f0267c1c..25554260a5931 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -1197,6 +1197,20 @@ void rpcrdma_mr_put(struct rpcrdma_mr *mr)
+ 	rpcrdma_mr_push(mr, &mr->mr_req->rl_free_mrs);
+ }
+ 
++/**
++ * rpcrdma_reply_put - Put reply buffers back into pool
++ * @buffers: buffer pool
++ * @req: object to return
++ *
++ */
++void rpcrdma_reply_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req)
++{
++	if (req->rl_reply) {
++		rpcrdma_rep_put(buffers, req->rl_reply);
++		req->rl_reply = NULL;
++	}
++}
++
+ /**
+  * rpcrdma_buffer_get - Get a request buffer
+  * @buffers: Buffer pool from which to obtain a buffer
+@@ -1225,9 +1239,7 @@ rpcrdma_buffer_get(struct rpcrdma_buffer *buffers)
+  */
+ void rpcrdma_buffer_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req)
+ {
+-	if (req->rl_reply)
+-		rpcrdma_rep_put(buffers, req->rl_reply);
+-	req->rl_reply = NULL;
++	rpcrdma_reply_put(buffers, req);
+ 
+ 	spin_lock(&buffers->rb_lock);
+ 	list_add(&req->rl_list, &buffers->rb_send_bufs);
+diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
+index 3cacc6f4c5271..702f0344523cc 100644
+--- a/net/sunrpc/xprtrdma/xprt_rdma.h
++++ b/net/sunrpc/xprtrdma/xprt_rdma.h
+@@ -472,6 +472,7 @@ void rpcrdma_mrs_refresh(struct rpcrdma_xprt *r_xprt);
+ struct rpcrdma_req *rpcrdma_buffer_get(struct rpcrdma_buffer *);
+ void rpcrdma_buffer_put(struct rpcrdma_buffer *buffers,
+ 			struct rpcrdma_req *req);
++void rpcrdma_reply_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req);
+ void rpcrdma_recv_buffer_put(struct rpcrdma_rep *);
+ 
+ bool rpcrdma_regbuf_realloc(struct rpcrdma_regbuf *rb, size_t size,
+diff --git a/net/tipc/core.c b/net/tipc/core.c
+index c2ff42900b539..40c03085c0eaf 100644
+--- a/net/tipc/core.c
++++ b/net/tipc/core.c
+@@ -121,6 +121,8 @@ static void __net_exit tipc_exit_net(struct net *net)
+ #ifdef CONFIG_TIPC_CRYPTO
+ 	tipc_crypto_stop(&tipc_net(net)->crypto_tx);
+ #endif
++	while (atomic_read(&tn->wq_count))
++		cond_resched();
+ }
+ 
+ static void __net_exit tipc_pernet_pre_exit(struct net *net)
+diff --git a/net/tipc/core.h b/net/tipc/core.h
+index 1d57a4d3b05e2..992924a849be6 100644
+--- a/net/tipc/core.h
++++ b/net/tipc/core.h
+@@ -151,6 +151,8 @@ struct tipc_net {
+ #endif
+ 	/* Work item for net finalize */
+ 	struct tipc_net_work final_work;
++	/* The numbers of work queues in schedule */
++	atomic_t wq_count;
+ };
+ 
+ static inline struct tipc_net *tipc_net(struct net *net)
+diff --git a/net/tipc/msg.c b/net/tipc/msg.c
+index 32c79c59052b6..88a3ed80094cd 100644
+--- a/net/tipc/msg.c
++++ b/net/tipc/msg.c
+@@ -151,18 +151,13 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
+ 		if (unlikely(head))
+ 			goto err;
+ 		*buf = NULL;
++		if (skb_has_frag_list(frag) && __skb_linearize(frag))
++			goto err;
+ 		frag = skb_unshare(frag, GFP_ATOMIC);
+ 		if (unlikely(!frag))
+ 			goto err;
+ 		head = *headbuf = frag;
+ 		TIPC_SKB_CB(head)->tail = NULL;
+-		if (skb_is_nonlinear(head)) {
+-			skb_walk_frags(head, tail) {
+-				TIPC_SKB_CB(head)->tail = tail;
+-			}
+-		} else {
+-			skb_frag_list_init(head);
+-		}
+ 		return 0;
+ 	}
+ 
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 5b18c6a46cfb8..9f7cc9e1e4ef3 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1244,7 +1244,10 @@ void tipc_sk_mcast_rcv(struct net *net, struct sk_buff_head *arrvq,
+ 		spin_lock_bh(&inputq->lock);
+ 		if (skb_peek(arrvq) == skb) {
+ 			skb_queue_splice_tail_init(&tmpq, inputq);
+-			__skb_dequeue(arrvq);
++			/* Decrease the skb's refcnt as increasing in the
++			 * function tipc_skb_peek
++			 */
++			kfree_skb(__skb_dequeue(arrvq));
+ 		}
+ 		spin_unlock_bh(&inputq->lock);
+ 		__skb_queue_purge(&tmpq);
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index 1d17f4470ee2a..a236281082726 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -806,6 +806,7 @@ static void cleanup_bearer(struct work_struct *work)
+ 		kfree_rcu(rcast, rcu);
+ 	}
+ 
++	atomic_dec(&tipc_net(sock_net(ub->ubsock->sk))->wq_count);
+ 	dst_cache_destroy(&ub->rcast.dst_cache);
+ 	udp_tunnel_sock_release(ub->ubsock);
+ 	synchronize_net();
+@@ -826,6 +827,7 @@ static void tipc_udp_disable(struct tipc_bearer *b)
+ 	RCU_INIT_POINTER(ub->bearer, NULL);
+ 
+ 	/* sock_release need to be done outside of rtnl lock */
++	atomic_inc(&tipc_net(sock_net(ub->ubsock->sk))->wq_count);
+ 	INIT_WORK(&ub->work, cleanup_bearer);
+ 	schedule_work(&ub->work);
+ }
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 845c628ac1b27..3abe5257f7577 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -37,6 +37,7 @@
+ 
+ #include <linux/sched/signal.h>
+ #include <linux/module.h>
++#include <linux/splice.h>
+ #include <crypto/aead.h>
+ 
+ #include <net/strparser.h>
+@@ -1282,7 +1283,7 @@ int tls_sw_sendpage(struct sock *sk, struct page *page,
+ }
+ 
+ static struct sk_buff *tls_wait_data(struct sock *sk, struct sk_psock *psock,
+-				     int flags, long timeo, int *err)
++				     bool nonblock, long timeo, int *err)
+ {
+ 	struct tls_context *tls_ctx = tls_get_ctx(sk);
+ 	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
+@@ -1307,7 +1308,7 @@ static struct sk_buff *tls_wait_data(struct sock *sk, struct sk_psock *psock,
+ 		if (sock_flag(sk, SOCK_DONE))
+ 			return NULL;
+ 
+-		if ((flags & MSG_DONTWAIT) || !timeo) {
++		if (nonblock || !timeo) {
+ 			*err = -EAGAIN;
+ 			return NULL;
+ 		}
+@@ -1787,7 +1788,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 		bool async_capable;
+ 		bool async = false;
+ 
+-		skb = tls_wait_data(sk, psock, flags, timeo, &err);
++		skb = tls_wait_data(sk, psock, flags & MSG_DONTWAIT, timeo, &err);
+ 		if (!skb) {
+ 			if (psock) {
+ 				int ret = __tcp_bpf_recvmsg(sk, psock,
+@@ -1991,9 +1992,9 @@ ssize_t tls_sw_splice_read(struct socket *sock,  loff_t *ppos,
+ 
+ 	lock_sock(sk);
+ 
+-	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
++	timeo = sock_rcvtimeo(sk, flags & SPLICE_F_NONBLOCK);
+ 
+-	skb = tls_wait_data(sk, NULL, flags, timeo, &err);
++	skb = tls_wait_data(sk, NULL, flags & SPLICE_F_NONBLOCK, timeo, &err);
+ 	if (!skb)
+ 		goto splice_read_end;
+ 
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index e4247c3543566..2731267fd0f9e 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -541,7 +541,7 @@ EXPORT_SYMBOL(ieee80211_get_mesh_hdrlen);
+ 
+ int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr,
+ 				  const u8 *addr, enum nl80211_iftype iftype,
+-				  u8 data_offset)
++				  u8 data_offset, bool is_amsdu)
+ {
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
+ 	struct {
+@@ -629,7 +629,7 @@ int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr,
+ 	skb_copy_bits(skb, hdrlen, &payload, sizeof(payload));
+ 	tmp.h_proto = payload.proto;
+ 
+-	if (likely((ether_addr_equal(payload.hdr, rfc1042_header) &&
++	if (likely((!is_amsdu && ether_addr_equal(payload.hdr, rfc1042_header) &&
+ 		    tmp.h_proto != htons(ETH_P_AARP) &&
+ 		    tmp.h_proto != htons(ETH_P_IPX)) ||
+ 		   ether_addr_equal(payload.hdr, bridge_tunnel_header)))
+@@ -771,6 +771,9 @@ void ieee80211_amsdu_to_8023s(struct sk_buff *skb, struct sk_buff_head *list,
+ 		remaining = skb->len - offset;
+ 		if (subframe_len > remaining)
+ 			goto purge;
++		/* mitigate A-MSDU aggregation injection attacks */
++		if (ether_addr_equal(eth.h_dest, rfc1042_header))
++			goto purge;
+ 
+ 		offset += sizeof(struct ethhdr);
+ 		last = remaining <= subframe_len + padding;
+diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
+index 3edae90188936..2e4508a6cb3a7 100644
+--- a/samples/bpf/xdpsock_user.c
++++ b/samples/bpf/xdpsock_user.c
+@@ -1257,7 +1257,7 @@ static void tx_only(struct xsk_socket_info *xsk, u32 *frame_nb, int batch_size)
+ 	for (i = 0; i < batch_size; i++) {
+ 		struct xdp_desc *tx_desc = xsk_ring_prod__tx_desc(&xsk->tx,
+ 								  idx + i);
+-		tx_desc->addr = (*frame_nb + i) << XSK_UMEM__DEFAULT_FRAME_SHIFT;
++		tx_desc->addr = (*frame_nb + i) * opt_xsk_frame_size;
+ 		tx_desc->len = PKT_SIZE;
+ 	}
+ 
+diff --git a/scripts/clang-tools/gen_compile_commands.py b/scripts/clang-tools/gen_compile_commands.py
+index 19963708bcf87..8ddb5d099029f 100755
+--- a/scripts/clang-tools/gen_compile_commands.py
++++ b/scripts/clang-tools/gen_compile_commands.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # Copyright (C) Google LLC, 2018
+diff --git a/scripts/clang-tools/run-clang-tools.py b/scripts/clang-tools/run-clang-tools.py
+index fa7655c7cec0e..f754415af398b 100755
+--- a/scripts/clang-tools/run-clang-tools.py
++++ b/scripts/clang-tools/run-clang-tools.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # Copyright (C) Google LLC, 2020
+diff --git a/sound/isa/gus/gus_main.c b/sound/isa/gus/gus_main.c
+index afc088f0377ce..b7518122a10d6 100644
+--- a/sound/isa/gus/gus_main.c
++++ b/sound/isa/gus/gus_main.c
+@@ -77,17 +77,8 @@ static const struct snd_kcontrol_new snd_gus_joystick_control = {
+ 
+ static void snd_gus_init_control(struct snd_gus_card *gus)
+ {
+-	int ret;
+-
+-	if (!gus->ace_flag) {
+-		ret =
+-			snd_ctl_add(gus->card,
+-					snd_ctl_new1(&snd_gus_joystick_control,
+-						gus));
+-		if (ret)
+-			snd_printk(KERN_ERR "gus: snd_ctl_add failed: %d\n",
+-					ret);
+-	}
++	if (!gus->ace_flag)
++		snd_ctl_add(gus->card, snd_ctl_new1(&snd_gus_joystick_control, gus));
+ }
+ 
+ /*
+diff --git a/sound/isa/sb/sb16_main.c b/sound/isa/sb/sb16_main.c
+index 38dc1fde25f3c..aa48705310231 100644
+--- a/sound/isa/sb/sb16_main.c
++++ b/sound/isa/sb/sb16_main.c
+@@ -846,14 +846,10 @@ int snd_sb16dsp_pcm(struct snd_sb *chip, int device)
+ 	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_sb16_playback_ops);
+ 	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_sb16_capture_ops);
+ 
+-	if (chip->dma16 >= 0 && chip->dma8 != chip->dma16) {
+-		err = snd_ctl_add(card, snd_ctl_new1(
+-					&snd_sb16_dma_control, chip));
+-		if (err)
+-			return err;
+-	} else {
++	if (chip->dma16 >= 0 && chip->dma8 != chip->dma16)
++		snd_ctl_add(card, snd_ctl_new1(&snd_sb16_dma_control, chip));
++	else
+ 		pcm->info_flags = SNDRV_PCM_INFO_HALF_DUPLEX;
+-	}
+ 
+ 	snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV,
+ 				       card->dev, 64*1024, 128*1024);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 43a63db4ab6ad..d8424d226714f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2603,6 +2603,28 @@ static const struct hda_model_fixup alc882_fixup_models[] = {
+ 	{}
+ };
+ 
++static const struct snd_hda_pin_quirk alc882_pin_fixup_tbl[] = {
++	SND_HDA_PIN_QUIRK(0x10ec1220, 0x1043, "ASUS", ALC1220_FIXUP_CLEVO_P950,
++		{0x14, 0x01014010},
++		{0x15, 0x01011012},
++		{0x16, 0x01016011},
++		{0x18, 0x01a19040},
++		{0x19, 0x02a19050},
++		{0x1a, 0x0181304f},
++		{0x1b, 0x0221401f},
++		{0x1e, 0x01456130}),
++	SND_HDA_PIN_QUIRK(0x10ec1220, 0x1462, "MS-7C35", ALC1220_FIXUP_CLEVO_P950,
++		{0x14, 0x01015010},
++		{0x15, 0x01011012},
++		{0x16, 0x01011011},
++		{0x18, 0x01a11040},
++		{0x19, 0x02a19050},
++		{0x1a, 0x0181104f},
++		{0x1b, 0x0221401f},
++		{0x1e, 0x01451130}),
++	{}
++};
++
+ /*
+  * BIOS auto configuration
+  */
+@@ -2644,6 +2666,7 @@ static int patch_alc882(struct hda_codec *codec)
+ 
+ 	snd_hda_pick_fixup(codec, alc882_fixup_models, alc882_fixup_tbl,
+ 		       alc882_fixups);
++	snd_hda_pick_pin_fixup(codec, alc882_pin_fixup_tbl, alc882_fixups, true);
+ 	snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_PRE_PROBE);
+ 
+ 	alc_auto_parse_customize_define(codec);
+@@ -6535,6 +6558,8 @@ enum {
+ 	ALC295_FIXUP_ASUS_DACS,
+ 	ALC295_FIXUP_HP_OMEN,
+ 	ALC285_FIXUP_HP_SPECTRE_X360,
++	ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP,
++	ALC623_FIXUP_LENOVO_THINKSTATION_P340,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8095,6 +8120,18 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC285_FIXUP_SPEAKER2_TO_DAC1,
+ 	},
++	[ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_ideapad_s740_coef,
++		.chained = true,
++		.chain_id = ALC285_FIXUP_THINKPAD_HEADSET_JACK,
++	},
++	[ALC623_FIXUP_LENOVO_THINKSTATION_P340] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc_fixup_no_shutup,
++		.chained = true,
++		.chain_id = ALC283_FIXUP_HEADSET_MIC,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8277,6 +8314,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884c, "HP EliteBook 840 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
++	SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
++	SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
++	SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -8412,7 +8453,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0xc019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xc022, "Clevo NH77[DC][QW]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS),
+-	SND_PCI_QUIRK(0x17aa, 0x1048, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x17aa, 0x1048, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
+ 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE),
+ 	SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE),
+ 	SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
+@@ -8462,6 +8503,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
++	SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+@@ -8677,6 +8719,8 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"},
+ 	{.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"},
+ 	{.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"},
++	{.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"},
++	{.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"},
+ 	{}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/soc/codecs/cs35l33.c b/sound/soc/codecs/cs35l33.c
+index 6042194d95d3e..8894369e329af 100644
+--- a/sound/soc/codecs/cs35l33.c
++++ b/sound/soc/codecs/cs35l33.c
+@@ -1201,6 +1201,7 @@ static int cs35l33_i2c_probe(struct i2c_client *i2c_client,
+ 		dev_err(&i2c_client->dev,
+ 			"CS35L33 Device ID (%X). Expected ID %X\n",
+ 			devid, CS35L33_CHIP_ID);
++		ret = -EINVAL;
+ 		goto err_enable;
+ 	}
+ 
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index 4d82d24c7828d..7c6b10bc0b8c5 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -398,6 +398,9 @@ static const struct regmap_config cs42l42_regmap = {
+ 	.reg_defaults = cs42l42_reg_defaults,
+ 	.num_reg_defaults = ARRAY_SIZE(cs42l42_reg_defaults),
+ 	.cache_type = REGCACHE_RBTREE,
++
++	.use_single_read = true,
++	.use_single_write = true,
+ };
+ 
+ static DECLARE_TLV_DB_SCALE(adc_tlv, -9600, 100, false);
+diff --git a/sound/soc/codecs/cs43130.c b/sound/soc/codecs/cs43130.c
+index 7fb34422a2a4b..8f70dee958786 100644
+--- a/sound/soc/codecs/cs43130.c
++++ b/sound/soc/codecs/cs43130.c
+@@ -1735,6 +1735,14 @@ static DEVICE_ATTR(hpload_dc_r, 0444, cs43130_show_dc_r, NULL);
+ static DEVICE_ATTR(hpload_ac_l, 0444, cs43130_show_ac_l, NULL);
+ static DEVICE_ATTR(hpload_ac_r, 0444, cs43130_show_ac_r, NULL);
+ 
++static struct attribute *hpload_attrs[] = {
++	&dev_attr_hpload_dc_l.attr,
++	&dev_attr_hpload_dc_r.attr,
++	&dev_attr_hpload_ac_l.attr,
++	&dev_attr_hpload_ac_r.attr,
++};
++ATTRIBUTE_GROUPS(hpload);
++
+ static struct reg_sequence hp_en_cal_seq[] = {
+ 	{CS43130_INT_MASK_4, CS43130_INT_MASK_ALL},
+ 	{CS43130_HP_MEAS_LOAD_1, 0},
+@@ -2302,25 +2310,15 @@ static int cs43130_probe(struct snd_soc_component *component)
+ 
+ 	cs43130->hpload_done = false;
+ 	if (cs43130->dc_meas) {
+-		ret = device_create_file(component->dev, &dev_attr_hpload_dc_l);
+-		if (ret < 0)
+-			return ret;
+-
+-		ret = device_create_file(component->dev, &dev_attr_hpload_dc_r);
+-		if (ret < 0)
+-			return ret;
+-
+-		ret = device_create_file(component->dev, &dev_attr_hpload_ac_l);
+-		if (ret < 0)
+-			return ret;
+-
+-		ret = device_create_file(component->dev, &dev_attr_hpload_ac_r);
+-		if (ret < 0)
++		ret = sysfs_create_groups(&component->dev->kobj, hpload_groups);
++		if (ret)
+ 			return ret;
+ 
+ 		cs43130->wq = create_singlethread_workqueue("cs43130_hp");
+-		if (!cs43130->wq)
++		if (!cs43130->wq) {
++			sysfs_remove_groups(&component->dev->kobj, hpload_groups);
+ 			return -ENOMEM;
++		}
+ 		INIT_WORK(&cs43130->work, cs43130_imp_meas);
+ 	}
+ 
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index 4fb2ec7c8867b..7a30a12519a70 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -839,18 +839,8 @@ int asoc_qcom_lpass_cpu_platform_probe(struct platform_device *pdev)
+ 		if (dai_id == LPASS_DP_RX)
+ 			continue;
+ 
+-		drvdata->mi2s_osr_clk[dai_id] = devm_clk_get(dev,
++		drvdata->mi2s_osr_clk[dai_id] = devm_clk_get_optional(dev,
+ 					     variant->dai_osr_clk_names[i]);
+-		if (IS_ERR(drvdata->mi2s_osr_clk[dai_id])) {
+-			dev_warn(dev,
+-				"%s() error getting optional %s: %ld\n",
+-				__func__,
+-				variant->dai_osr_clk_names[i],
+-				PTR_ERR(drvdata->mi2s_osr_clk[dai_id]));
+-
+-			drvdata->mi2s_osr_clk[dai_id] = NULL;
+-		}
+-
+ 		drvdata->mi2s_bit_clk[dai_id] = devm_clk_get(dev,
+ 						variant->dai_bit_clk_names[i]);
+ 		if (IS_ERR(drvdata->mi2s_bit_clk[dai_id])) {
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 5171b3dc1eb9e..8297117f4766e 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -3017,7 +3017,7 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ 	case USB_ID(0x1235, 0x8203): /* Focusrite Scarlett 6i6 2nd Gen */
+ 	case USB_ID(0x1235, 0x8204): /* Focusrite Scarlett 18i8 2nd Gen */
+ 	case USB_ID(0x1235, 0x8201): /* Focusrite Scarlett 18i20 2nd Gen */
+-		err = snd_scarlett_gen2_controls_create(mixer);
++		err = snd_scarlett_gen2_init(mixer);
+ 		break;
+ 
+ 	case USB_ID(0x041e, 0x323b): /* Creative Sound Blaster E1 */
+diff --git a/sound/usb/mixer_scarlett_gen2.c b/sound/usb/mixer_scarlett_gen2.c
+index 4bbec56c7df34..9a98b0c048e33 100644
+--- a/sound/usb/mixer_scarlett_gen2.c
++++ b/sound/usb/mixer_scarlett_gen2.c
+@@ -635,7 +635,7 @@ static int scarlett2_usb(
+ 	/* send a second message to get the response */
+ 
+ 	err = snd_usb_ctl_msg(mixer->chip->dev,
+-			usb_sndctrlpipe(mixer->chip->dev, 0),
++			usb_rcvctrlpipe(mixer->chip->dev, 0),
+ 			SCARLETT2_USB_VENDOR_SPECIFIC_CMD_RESP,
+ 			USB_RECIP_INTERFACE | USB_TYPE_CLASS | USB_DIR_IN,
+ 			0,
+@@ -1997,38 +1997,11 @@ static int scarlett2_mixer_status_create(struct usb_mixer_interface *mixer)
+ 	return usb_submit_urb(mixer->urb, GFP_KERNEL);
+ }
+ 
+-/* Entry point */
+-int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer)
++static int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer,
++					     const struct scarlett2_device_info *info)
+ {
+-	const struct scarlett2_device_info *info;
+ 	int err;
+ 
+-	/* only use UAC_VERSION_2 */
+-	if (!mixer->protocol)
+-		return 0;
+-
+-	switch (mixer->chip->usb_id) {
+-	case USB_ID(0x1235, 0x8203):
+-		info = &s6i6_gen2_info;
+-		break;
+-	case USB_ID(0x1235, 0x8204):
+-		info = &s18i8_gen2_info;
+-		break;
+-	case USB_ID(0x1235, 0x8201):
+-		info = &s18i20_gen2_info;
+-		break;
+-	default: /* device not (yet) supported */
+-		return -EINVAL;
+-	}
+-
+-	if (!(mixer->chip->setup & SCARLETT2_ENABLE)) {
+-		usb_audio_err(mixer->chip,
+-			"Focusrite Scarlett Gen 2 Mixer Driver disabled; "
+-			"use options snd_usb_audio device_setup=1 "
+-			"to enable and report any issues to g@b4.vu");
+-		return 0;
+-	}
+-
+ 	/* Initialise private data, routing, sequence number */
+ 	err = scarlett2_init_private(mixer, info);
+ 	if (err < 0)
+@@ -2073,3 +2046,51 @@ int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer)
+ 
+ 	return 0;
+ }
++
++int snd_scarlett_gen2_init(struct usb_mixer_interface *mixer)
++{
++	struct snd_usb_audio *chip = mixer->chip;
++	const struct scarlett2_device_info *info;
++	int err;
++
++	/* only use UAC_VERSION_2 */
++	if (!mixer->protocol)
++		return 0;
++
++	switch (chip->usb_id) {
++	case USB_ID(0x1235, 0x8203):
++		info = &s6i6_gen2_info;
++		break;
++	case USB_ID(0x1235, 0x8204):
++		info = &s18i8_gen2_info;
++		break;
++	case USB_ID(0x1235, 0x8201):
++		info = &s18i20_gen2_info;
++		break;
++	default: /* device not (yet) supported */
++		return -EINVAL;
++	}
++
++	if (!(chip->setup & SCARLETT2_ENABLE)) {
++		usb_audio_info(chip,
++			"Focusrite Scarlett Gen 2 Mixer Driver disabled; "
++			"use options snd_usb_audio vid=0x%04x pid=0x%04x "
++			"device_setup=1 to enable and report any issues "
++			"to g@b4.vu",
++			USB_ID_VENDOR(chip->usb_id),
++			USB_ID_PRODUCT(chip->usb_id));
++		return 0;
++	}
++
++	usb_audio_info(chip,
++		"Focusrite Scarlett Gen 2 Mixer Driver enabled pid=0x%04x",
++		USB_ID_PRODUCT(chip->usb_id));
++
++	err = snd_scarlett_gen2_controls_create(mixer, info);
++	if (err < 0)
++		usb_audio_err(mixer->chip,
++			      "Error initialising Scarlett Mixer Driver: %d",
++			      err);
++
++	return err;
++}
+diff --git a/sound/usb/mixer_scarlett_gen2.h b/sound/usb/mixer_scarlett_gen2.h
+index 52e1dad77afd4..668c6b0cb50a6 100644
+--- a/sound/usb/mixer_scarlett_gen2.h
++++ b/sound/usb/mixer_scarlett_gen2.h
+@@ -2,6 +2,6 @@
+ #ifndef __USB_MIXER_SCARLETT_GEN2_H
+ #define __USB_MIXER_SCARLETT_GEN2_H
+ 
+-int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer);
++int snd_scarlett_gen2_init(struct usb_mixer_interface *mixer);
+ 
+ #endif /* __USB_MIXER_SCARLETT_GEN2_H */
+diff --git a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
+index 790944c356025..baee8591ac76a 100644
+--- a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
++++ b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
+@@ -30,7 +30,8 @@ CGROUP COMMANDS
+ |	*ATTACH_TYPE* := { **ingress** | **egress** | **sock_create** | **sock_ops** | **device** |
+ |		**bind4** | **bind6** | **post_bind4** | **post_bind6** | **connect4** | **connect6** |
+ |               **getpeername4** | **getpeername6** | **getsockname4** | **getsockname6** | **sendmsg4** |
+-|               **sendmsg6** | **recvmsg4** | **recvmsg6** | **sysctl** | **getsockopt** | **setsockopt** }
++|               **sendmsg6** | **recvmsg4** | **recvmsg6** | **sysctl** | **getsockopt** | **setsockopt** |
++|               **sock_release** }
+ |	*ATTACH_FLAGS* := { **multi** | **override** }
+ 
+ DESCRIPTION
+@@ -106,6 +107,7 @@ DESCRIPTION
+ 		  **getpeername6** call to getpeername(2) for an inet6 socket (since 5.8);
+ 		  **getsockname4** call to getsockname(2) for an inet4 socket (since 5.8);
+ 		  **getsockname6** call to getsockname(2) for an inet6 socket (since 5.8).
++		  **sock_release** closing an userspace inet socket (since 5.9).
+ 
+ 	**bpftool cgroup detach** *CGROUP* *ATTACH_TYPE* *PROG*
+ 		  Detach *PROG* from the cgroup *CGROUP* and attach type
+diff --git a/tools/bpf/bpftool/Documentation/bpftool-prog.rst b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
+index 358c7309d4191..fe1b38e7e887d 100644
+--- a/tools/bpf/bpftool/Documentation/bpftool-prog.rst
++++ b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
+@@ -44,7 +44,7 @@ PROG COMMANDS
+ |		**cgroup/connect4** | **cgroup/connect6** | **cgroup/getpeername4** | **cgroup/getpeername6** |
+ |               **cgroup/getsockname4** | **cgroup/getsockname6** | **cgroup/sendmsg4** | **cgroup/sendmsg6** |
+ |		**cgroup/recvmsg4** | **cgroup/recvmsg6** | **cgroup/sysctl** |
+-|		**cgroup/getsockopt** | **cgroup/setsockopt** |
++|		**cgroup/getsockopt** | **cgroup/setsockopt** | **cgroup/sock_release** |
+ |		**struct_ops** | **fentry** | **fexit** | **freplace** | **sk_lookup**
+ |	}
+ |       *ATTACH_TYPE* := {
+diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool
+index 3f1da30c4da6e..f783e7c5a8df8 100644
+--- a/tools/bpf/bpftool/bash-completion/bpftool
++++ b/tools/bpf/bpftool/bash-completion/bpftool
+@@ -478,7 +478,7 @@ _bpftool()
+                                 cgroup/recvmsg4 cgroup/recvmsg6 \
+                                 cgroup/post_bind4 cgroup/post_bind6 \
+                                 cgroup/sysctl cgroup/getsockopt \
+-                                cgroup/setsockopt struct_ops \
++                                cgroup/setsockopt cgroup/sock_release struct_ops \
+                                 fentry fexit freplace sk_lookup" -- \
+                                                    "$cur" ) )
+                             return 0
+@@ -1008,7 +1008,7 @@ _bpftool()
+                         device bind4 bind6 post_bind4 post_bind6 connect4 connect6 \
+                         getpeername4 getpeername6 getsockname4 getsockname6 \
+                         sendmsg4 sendmsg6 recvmsg4 recvmsg6 sysctl getsockopt \
+-                        setsockopt'
++                        setsockopt sock_release'
+                     local ATTACH_FLAGS='multi override'
+                     local PROG_TYPE='id pinned tag name'
+                     case $prev in
+@@ -1019,7 +1019,7 @@ _bpftool()
+                         ingress|egress|sock_create|sock_ops|device|bind4|bind6|\
+                         post_bind4|post_bind6|connect4|connect6|getpeername4|\
+                         getpeername6|getsockname4|getsockname6|sendmsg4|sendmsg6|\
+-                        recvmsg4|recvmsg6|sysctl|getsockopt|setsockopt)
++                        recvmsg4|recvmsg6|sysctl|getsockopt|setsockopt|sock_release)
+                             COMPREPLY=( $( compgen -W "$PROG_TYPE" -- \
+                                 "$cur" ) )
+                             return 0
+diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c
+index d901cc1b904af..6e53b1d393f4a 100644
+--- a/tools/bpf/bpftool/cgroup.c
++++ b/tools/bpf/bpftool/cgroup.c
+@@ -28,7 +28,8 @@
+ 	"                        connect6 | getpeername4 | getpeername6 |\n"   \
+ 	"                        getsockname4 | getsockname6 | sendmsg4 |\n"   \
+ 	"                        sendmsg6 | recvmsg4 | recvmsg6 |\n"           \
+-	"                        sysctl | getsockopt | setsockopt }"
++	"                        sysctl | getsockopt | setsockopt |\n"	       \
++	"                        sock_release }"
+ 
+ static unsigned int query_flags;
+ 
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index acdb2c245f0a4..14237ffb90bae 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -2105,7 +2105,7 @@ static int do_help(int argc, char **argv)
+ 		"                 cgroup/getpeername4 | cgroup/getpeername6 |\n"
+ 		"                 cgroup/getsockname4 | cgroup/getsockname6 | cgroup/sendmsg4 |\n"
+ 		"                 cgroup/sendmsg6 | cgroup/recvmsg4 | cgroup/recvmsg6 |\n"
+-		"                 cgroup/getsockopt | cgroup/setsockopt |\n"
++		"                 cgroup/getsockopt | cgroup/setsockopt | cgroup/sock_release |\n"
+ 		"                 struct_ops | fentry | fexit | freplace | sk_lookup }\n"
+ 		"       ATTACH_TYPE := { msg_verdict | stream_verdict | stream_parser |\n"
+ 		"                        flow_dissector }\n"
+diff --git a/tools/include/linux/bits.h b/tools/include/linux/bits.h
+index 7f475d59a0974..87d112650dfbb 100644
+--- a/tools/include/linux/bits.h
++++ b/tools/include/linux/bits.h
+@@ -22,7 +22,7 @@
+ #include <linux/build_bug.h>
+ #define GENMASK_INPUT_CHECK(h, l) \
+ 	(BUILD_BUG_ON_ZERO(__builtin_choose_expr( \
+-		__builtin_constant_p((l) > (h)), (l) > (h), 0)))
++		__is_constexpr((l) > (h)), (l) > (h), 0)))
+ #else
+ /*
+  * BUILD_BUG_ON_ZERO is not available in h files included from asm files,
+diff --git a/tools/include/linux/const.h b/tools/include/linux/const.h
+index 81b8aae5a8559..435ddd72d2c46 100644
+--- a/tools/include/linux/const.h
++++ b/tools/include/linux/const.h
+@@ -3,4 +3,12 @@
+ 
+ #include <vdso/const.h>
+ 
++/*
++ * This returns a constant expression while determining if an argument is
++ * a constant expression, most importantly without evaluating the argument.
++ * Glory to Martin Uecker <Martin.Uecker@med.uni-goettingen.de>
++ */
++#define __is_constexpr(x) \
++	(sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))
++
+ #endif /* _LINUX_CONST_H */
+diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
+index e47644cab3fab..dcfdf6a322dc4 100644
+--- a/tools/perf/pmu-events/jevents.c
++++ b/tools/perf/pmu-events/jevents.c
+@@ -894,7 +894,7 @@ static int get_maxfds(void)
+ 	struct rlimit rlim;
+ 
+ 	if (getrlimit(RLIMIT_NOFILE, &rlim) == 0)
+-		return min((int)rlim.rlim_max / 2, 512);
++		return min(rlim.rlim_max / 2, (rlim_t)512);
+ 
+ 	return 512;
+ }
+diff --git a/tools/perf/scripts/python/exported-sql-viewer.py b/tools/perf/scripts/python/exported-sql-viewer.py
+index 7daa8bb70a5a0..711d4f9f5645c 100755
+--- a/tools/perf/scripts/python/exported-sql-viewer.py
++++ b/tools/perf/scripts/python/exported-sql-viewer.py
+@@ -91,6 +91,11 @@
+ from __future__ import print_function
+ 
+ import sys
++# Only change warnings if the python -W option was not used
++if not sys.warnoptions:
++	import warnings
++	# PySide2 causes deprecation warnings, ignore them.
++	warnings.filterwarnings("ignore", category=DeprecationWarning)
+ import argparse
+ import weakref
+ import threading
+@@ -125,8 +130,9 @@ if pyside_version_1:
+ 	from PySide.QtGui import *
+ 	from PySide.QtSql import *
+ 
+-from decimal import *
+-from ctypes import *
++from decimal import Decimal, ROUND_HALF_UP
++from ctypes import CDLL, Structure, create_string_buffer, addressof, sizeof, \
++		   c_void_p, c_bool, c_byte, c_char, c_int, c_uint, c_longlong, c_ulonglong
+ from multiprocessing import Process, Array, Value, Event
+ 
+ # xrange is range in Python3
+@@ -3868,7 +3874,7 @@ def CopyTableCellsToClipboard(view, as_csv=False, with_hdr=False):
+ 	if with_hdr:
+ 		model = indexes[0].model()
+ 		for col in range(min_col, max_col + 1):
+-			val = model.headerData(col, Qt.Horizontal)
++			val = model.headerData(col, Qt.Horizontal, Qt.DisplayRole)
+ 			if as_csv:
+ 				text += sep + ToCSValue(val)
+ 				sep = ","
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index 197eb58a39cb7..e6029d4c096fb 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -1120,6 +1120,8 @@ static bool intel_pt_fup_event(struct intel_pt_decoder *decoder)
+ 		decoder->set_fup_tx_flags = false;
+ 		decoder->tx_flags = decoder->fup_tx_flags;
+ 		decoder->state.type = INTEL_PT_TRANSACTION;
++		if (decoder->fup_tx_flags & INTEL_PT_ABORT_TX)
++			decoder->state.type |= INTEL_PT_BRANCH;
+ 		decoder->state.from_ip = decoder->ip;
+ 		decoder->state.to_ip = 0;
+ 		decoder->state.flags = decoder->fup_tx_flags;
+@@ -1194,8 +1196,10 @@ static int intel_pt_walk_fup(struct intel_pt_decoder *decoder)
+ 			return 0;
+ 		if (err == -EAGAIN ||
+ 		    intel_pt_fup_with_nlip(decoder, &intel_pt_insn, ip, err)) {
++			bool no_tip = decoder->pkt_state != INTEL_PT_STATE_FUP;
++
+ 			decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+-			if (intel_pt_fup_event(decoder))
++			if (intel_pt_fup_event(decoder) && no_tip)
+ 				return 0;
+ 			return -EAGAIN;
+ 		}
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index dc023b8c6003a..e5aaf1337be98 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -647,8 +647,10 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
+ 
+ 			*ip += intel_pt_insn->length;
+ 
+-			if (to_ip && *ip == to_ip)
++			if (to_ip && *ip == to_ip) {
++				intel_pt_insn->length = 0;
+ 				goto out_no_cache;
++			}
+ 
+ 			if (*ip >= al.map->end)
+ 				break;
+@@ -1131,6 +1133,7 @@ static void intel_pt_set_pid_tid_cpu(struct intel_pt *pt,
+ 
+ static void intel_pt_sample_flags(struct intel_pt_queue *ptq)
+ {
++	ptq->insn_len = 0;
+ 	if (ptq->state->flags & INTEL_PT_ABORT_TX) {
+ 		ptq->flags = PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_TX_ABORT;
+ 	} else if (ptq->state->flags & INTEL_PT_ASYNC) {
+diff --git a/tools/testing/selftests/gpio/Makefile b/tools/testing/selftests/gpio/Makefile
+index 32bdc978a711c..acf4088a9891b 100644
+--- a/tools/testing/selftests/gpio/Makefile
++++ b/tools/testing/selftests/gpio/Makefile
+@@ -11,22 +11,24 @@ LDLIBS += $(VAR_LDLIBS)
+ 
+ TEST_PROGS := gpio-mockup.sh
+ TEST_FILES := gpio-mockup-sysfs.sh
+-TEST_PROGS_EXTENDED := gpio-mockup-chardev
++TEST_GEN_PROGS_EXTENDED := gpio-mockup-chardev
+ 
+-GPIODIR := $(realpath ../../../gpio)
+-GPIOOBJ := gpio-utils.o
++KSFT_KHDR_INSTALL := 1
++include ../lib.mk
+ 
+-all: $(TEST_PROGS_EXTENDED)
++GPIODIR := $(realpath ../../../gpio)
++GPIOOUT := $(OUTPUT)/tools-gpio/
++GPIOOBJ := $(GPIOOUT)/gpio-utils.o
+ 
+ override define CLEAN
+-	$(RM) $(TEST_PROGS_EXTENDED)
+-	$(MAKE) -C $(GPIODIR) OUTPUT=$(GPIODIR)/ clean
++	$(RM) $(TEST_GEN_PROGS_EXTENDED)
++	$(RM) -rf $(GPIOOUT)
+ endef
+ 
+-KSFT_KHDR_INSTALL := 1
+-include ../lib.mk
++$(TEST_GEN_PROGS_EXTENDED): $(GPIOOBJ)
+ 
+-$(TEST_PROGS_EXTENDED): $(GPIODIR)/$(GPIOOBJ)
++$(GPIOOUT):
++	mkdir -p $@
+ 
+-$(GPIODIR)/$(GPIOOBJ):
+-	$(MAKE) OUTPUT=$(GPIODIR)/ -C $(GPIODIR)
++$(GPIOOBJ): $(GPIOOUT)
++	$(MAKE) OUTPUT=$(GPIOOUT) -C $(GPIODIR)
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
+index 1cda2e11b3ad9..773c5027553d2 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
+@@ -9,11 +9,11 @@
+         "setup": [
+             "$IP link add dev $DUMMY type dummy || /bin/true"
+         ],
+-        "cmdUnderTest": "$TC qdisc add dev $DUMMY root fq_pie flows 65536",
+-        "expExitCode": "2",
++        "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_pie flows 65536",
++        "expExitCode": "0",
+         "verifyCmd": "$TC qdisc show dev $DUMMY",
+-        "matchPattern": "qdisc",
+-        "matchCount": "0",
++        "matchPattern": "qdisc fq_pie 1: root refcnt 2 limit 10240p flows 65536",
++        "matchCount": "1",
+         "teardown": [
+             "$IP link del dev $DUMMY"
+         ]
+diff --git a/virt/lib/irqbypass.c b/virt/lib/irqbypass.c
+index c9bb3957f58a7..28fda42e471bb 100644
+--- a/virt/lib/irqbypass.c
++++ b/virt/lib/irqbypass.c
+@@ -40,21 +40,17 @@ static int __connect(struct irq_bypass_producer *prod,
+ 	if (prod->add_consumer)
+ 		ret = prod->add_consumer(prod, cons);
+ 
+-	if (ret)
+-		goto err_add_consumer;
+-
+-	ret = cons->add_producer(cons, prod);
+-	if (ret)
+-		goto err_add_producer;
++	if (!ret) {
++		ret = cons->add_producer(cons, prod);
++		if (ret && prod->del_consumer)
++			prod->del_consumer(prod, cons);
++	}
+ 
+ 	if (cons->start)
+ 		cons->start(cons);
+ 	if (prod->start)
+ 		prod->start(prod);
+-err_add_producer:
+-	if (prod->del_consumer)
+-		prod->del_consumer(prod, cons);
+-err_add_consumer:
++
+ 	return ret;
+ }
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-06-08 22:42 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-06-08 22:42 UTC (permalink / raw
  To: gentoo-commits

commit:     22f4eb42e182ecb54358b492c391c9e06ee89753
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun  8 22:41:36 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun  8 22:41:36 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=22f4eb42

CONFIG opt to enable a subset of Kernel Self Protection Project settings

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 177 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 171 insertions(+), 6 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index e754a3e..635de00 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,14 +1,14 @@
---- a/Kconfig	2020-04-15 11:05:30.202413863 -0400
-+++ b/Kconfig	2020-04-15 10:37:45.683952949 -0400
-@@ -32,3 +32,5 @@ source "lib/Kconfig"
+--- a/Kconfig	2021-06-04 19:03:33.646823432 -0400
++++ b/Kconfig	2021-06-04 19:03:40.508892817 -0400
+@@ -30,3 +30,5 @@ source "lib/Kconfig"
  source "lib/Kconfig.debug"
  
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2020-09-24 03:06:47.590000000 -0400
-+++ b/distro/Kconfig	2020-09-24 11:31:29.403150624 -0400
-@@ -0,0 +1,158 @@
+--- /dev/null	2021-06-08 16:56:49.698138501 -0400
++++ b/distro/Kconfig	2021-06-08 17:11:33.377999003 -0400
+@@ -0,0 +1,263 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -166,4 +166,169 @@
 +
 +endmenu
 +
++menu "Enable Kernel Self Protection Project Recommendations"
++	visible if GENTOO_LINUX
++
++config GENTOO_KERNEL_SELF_PROTECTION
++	bool "Architecture Independent Kernel Self Protection Project Recommendations"
++
++	help
++  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
++		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
++		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due
++		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for
++		dependency information on your specific architecture.
++		Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
++		for X86_64
++
++	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL
++
++	select BUG
++	select STRICT_KERNEL_RWX
++	select DEBUG_WX
++	select STACKPROTECTOR
++	select STACKPROTECTOR_STRONG
++	select STRICT_DEVMEM
++	select IO_STRICT_DEVMEM
++	select SYN_COOKIES
++	select DEBUG_CREDENTIALS
++	select DEBUG_NOTIFIERS
++	select DEBUG_LIST
++	select DEBUG_SG
++	select BUG_ON_DATA_CORRUPTION
++	select SCHED_STACK_END_CHECK
++	select SECCOMP
++	select SECCOMP_FILTER
++	select SECURITY_YAMA
++	select SLAB_FREELIST_RANDOM
++	select SLAB_FREELIST_HARDENED
++	select SHUFFLE_PAGE_ALLOCATOR
++	select SLUB_DEBUG
++	select PAGE_POISONING
++	select PAGE_POISONING_NO_SANITY
++	select PAGE_POISONING_ZERO
++	select INIT_ON_ALLOC_DEFAULT_ON
++	select INIT_ON_FREE_DEFAULT_ON
++	select VMAP_STACK
++	select REFCOUNT_FULL
++	select FORTIFY_SOURCE
++	select SECURITY_DMESG_RESTRICT
++	select PANIC_ON_OOPS
++	select CONFIG_GCC_PLUGINS
++	select GCC_PLUGIN_LATENT_ENTROPY
++	select GCC_PLUGIN_STRUCTLEAK
++	select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL
++	select GCC_PLUGIN_STACKLEAK
++	select GCC_PLUGIN_RANDSTRUCT
++	select GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
++
++menu "Architecture Specific Self Protection Project Recommendations"
++
++config GENTOO_KERNEL_SELF_PROTECTION_X86_64
++	bool "X86_64 KSPP Settings"
++
++	depends on !X86_MSR && X86_64
++	default n
++	
++	select RANDOMIZE_BASE
++	select RANDOMIZE_MEMORY
++	select LEGACY_VSYSCALL_NONE
++ 	select PAGE_TABLE_ISOLATION
++
++
++config GENTOO_KERNEL_SELF_PROTECTION_ARM64
++	bool "ARM64 KSPP Settings"
++
++	depends on ARM64
++	default n
++
++	select RANDOMIZE_BASE
++	select ARM64_SW_TTBR0_PAN
++	select CONFIG_UNMAP_KERNEL_AT_EL0
++
++config GENTOO_KERNEL_SELF_PROTECTION_X86_32
++	bool "X86_32 KSPP Settings"
++
++	depends on !X86_MSR && !MODIFY_LDT_SYSCALL && !M486 && X86_32
++	default n
++
++	select HIGHMEM64G
++	select X86_PAE
++	select RANDOMIZE_BASE
++	select PAGE_TABLE_ISOLATION
++
++config GENTOO_KERNEL_SELF_PROTECTION_ARM
++	bool "ARM KSPP Settings"
++
++	depends on !OABI_COMPAT && ARM
++	default n
++
++	select VMSPLIT_3G
++	select STRICT_MEMORY_RWX
++	select CPU_SW_DOMAIN_PAN
++
++endmenu
++
++endmenu
++
 +endmenu
+diff --git a/security/Kconfig b/security/Kconfig
+index 7561f6f99..01f0bf73f 100644
+--- a/security/Kconfig
++++ b/security/Kconfig
+@@ -166,6 +166,7 @@ config HARDENED_USERCOPY
+ config HARDENED_USERCOPY_FALLBACK
+ 	bool "Allow usercopy whitelist violations to fallback to object size"
+ 	depends on HARDENED_USERCOPY
++	depends on !GENTOO_KERNEL_SELF_PROTECTION
+ 	default y
+ 	help
+ 	  This is a temporary option that allows missing usercopy whitelists
+@@ -181,6 +182,7 @@ config HARDENED_USERCOPY_PAGESPAN
+ 	bool "Refuse to copy allocations that span multiple pages"
+ 	depends on HARDENED_USERCOPY
+ 	depends on EXPERT
++	depends on !GENTOO_KERNEL_SELF_PROTECTION
+ 	help
+ 	  When a multi-page allocation is done without __GFP_COMP,
+ 	  hardened usercopy will reject attempts to copy it. There are,
+diff --git a/security/selinux/Kconfig b/security/selinux/Kconfig
+index 9e921fc72..f29bc13fa 100644
+--- a/security/selinux/Kconfig
++++ b/security/selinux/Kconfig
+@@ -26,6 +26,7 @@ config SECURITY_SELINUX_BOOTPARAM
+ config SECURITY_SELINUX_DISABLE
+ 	bool "NSA SELinux runtime disable"
+ 	depends on SECURITY_SELINUX
++	depends on !GENTOO_KERNEL_SELF_PROTECTION
+ 	select SECURITY_WRITABLE_HOOKS
+ 	default n
+ 	help
+-- 
+2.31.1
+
+From bd3ff0b16792c18c0614c2b95e148943209f460a Mon Sep 17 00:00:00 2001
+From: Georgy Yakovlev <gyakovlev@gentoo.org>
+Date: Tue, 8 Jun 2021 13:59:57 -0700
+Subject: [PATCH 2/2] set DEFAULT_MMAP_MIN_ADDR by default
+
+---
+ mm/Kconfig | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/mm/Kconfig b/mm/Kconfig
+index 24c045b24..e13fc740c 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -321,6 +321,8 @@ config KSM
+ config DEFAULT_MMAP_MIN_ADDR
+ 	int "Low address space to protect from user allocation"
+ 	depends on MMU
++	default 65536 if ( X86_64 || X86_32 || PPC64 || IA64 ) && GENTOO_KERNEL_SELF_PROTECTION
++	default 32768 if ( ARM64 || ARM ) && GENTOO_KERNEL_SELF_PROTECTION
+ 	default 4096
+ 	help
+ 	  This is the portion of low virtual memory which should be protected
+-- 
+2.31.1
+```


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-06-10 12:09 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-06-10 12:09 UTC (permalink / raw
  To: gentoo-commits

commit:     1d2a2e1dbbbafa2ae51bb6ee4258ae1441d706c3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jun 10 12:08:56 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jun 10 12:08:56 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1d2a2e1d

Linux patch 5.10.43

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1042_linux-5.10.43.patch | 5652 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5656 insertions(+)

diff --git a/0000_README b/0000_README
index fb74799..f258b9d 100644
--- a/0000_README
+++ b/0000_README
@@ -211,6 +211,10 @@ Patch:  1041_linux-5.10.42.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.42
 
+Patch:  1042_linux-5.10.43.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.43
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1042_linux-5.10.43.patch b/1042_linux-5.10.43.patch
new file mode 100644
index 0000000..7d99626
--- /dev/null
+++ b/1042_linux-5.10.43.patch
@@ -0,0 +1,5652 @@
+diff --git a/Makefile b/Makefile
+index 290903d0e7dab..ec9ee8032a985 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 42
++SUBLEVEL = 43
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi b/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
+index 7d2c72562c735..9148a01ed6d9f 100644
+--- a/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
++++ b/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
+@@ -105,9 +105,13 @@
+ 	phy-reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
+ 	phy-reset-duration = <20>;
+ 	phy-supply = <&sw2_reg>;
+-	phy-handle = <&ethphy0>;
+ 	status = "okay";
+ 
++	fixed-link {
++		speed = <1000>;
++		full-duplex;
++	};
++
+ 	mdio {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm/boot/dts/imx6q-dhcom-som.dtsi b/arch/arm/boot/dts/imx6q-dhcom-som.dtsi
+index 236fc205c3890..d0768ae429faa 100644
+--- a/arch/arm/boot/dts/imx6q-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/imx6q-dhcom-som.dtsi
+@@ -406,6 +406,18 @@
+ 	vin-supply = <&sw1_reg>;
+ };
+ 
++&reg_pu {
++	vin-supply = <&sw1_reg>;
++};
++
++&reg_vdd1p1 {
++	vin-supply = <&sw2_reg>;
++};
++
++&reg_vdd2p5 {
++	vin-supply = <&sw2_reg>;
++};
++
+ &uart1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_uart1>;
+diff --git a/arch/arm/boot/dts/imx6qdl-emcon-avari.dtsi b/arch/arm/boot/dts/imx6qdl-emcon-avari.dtsi
+index 828cf3e39784a..c4e146f3341bb 100644
+--- a/arch/arm/boot/dts/imx6qdl-emcon-avari.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-emcon-avari.dtsi
+@@ -126,7 +126,7 @@
+ 		compatible = "nxp,pca8574";
+ 		reg = <0x3a>;
+ 		gpio-controller;
+-		#gpio-cells = <1>;
++		#gpio-cells = <2>;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/imx7d-meerkat96.dts b/arch/arm/boot/dts/imx7d-meerkat96.dts
+index 5339210b63d0f..dd8003bd1fc09 100644
+--- a/arch/arm/boot/dts/imx7d-meerkat96.dts
++++ b/arch/arm/boot/dts/imx7d-meerkat96.dts
+@@ -193,7 +193,7 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_usdhc1>;
+ 	keep-power-in-suspend;
+-	tuning-step = <2>;
++	fsl,tuning-step = <2>;
+ 	vmmc-supply = <&reg_3p3v>;
+ 	no-1-8-v;
+ 	broken-cd;
+diff --git a/arch/arm/boot/dts/imx7d-pico.dtsi b/arch/arm/boot/dts/imx7d-pico.dtsi
+index e57da0d32b98d..e519897fae082 100644
+--- a/arch/arm/boot/dts/imx7d-pico.dtsi
++++ b/arch/arm/boot/dts/imx7d-pico.dtsi
+@@ -351,7 +351,7 @@
+ 	pinctrl-2 = <&pinctrl_usdhc1_200mhz>;
+ 	cd-gpios = <&gpio5 0 GPIO_ACTIVE_LOW>;
+ 	bus-width = <4>;
+-	tuning-step = <2>;
++	fsl,tuning-step = <2>;
+ 	vmmc-supply = <&reg_3p3v>;
+ 	wakeup-source;
+ 	no-1-8-v;
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var4.dts b/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var4.dts
+index df212ed5bb942..e65d1c477e2ce 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var4.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var4.dts
+@@ -31,11 +31,10 @@
+ 			reg = <0x4>;
+ 			eee-broken-1000t;
+ 			eee-broken-100tx;
+-
+ 			qca,clk-out-frequency = <125000000>;
+ 			qca,clk-out-strength = <AR803X_STRENGTH_FULL>;
+-
+-			vddio-supply = <&vddh>;
++			qca,keep-pll-enabled;
++			vddio-supply = <&vddio>;
+ 
+ 			vddio: vddio-regulator {
+ 				regulator-name = "VDDIO";
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+index 62f4dcb96e70d..f3b58bb9b8408 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+@@ -192,8 +192,8 @@
+ 		ddr: memory-controller@1080000 {
+ 			compatible = "fsl,qoriq-memory-controller";
+ 			reg = <0x0 0x1080000 0x0 0x1000>;
+-			interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>;
+-			big-endian;
++			interrupts = <GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>;
++			little-endian;
+ 		};
+ 
+ 		dcfg: syscon@1e00000 {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi b/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
+index fa7a041ffcfde..825c83c71a9f1 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
+@@ -45,8 +45,8 @@
+ 	reg_12p0_main: regulator-12p0-main {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "12V_MAIN";
+-		regulator-min-microvolt = <5000000>;
+-		regulator-max-microvolt = <5000000>;
++		regulator-min-microvolt = <12000000>;
++		regulator-max-microvolt = <12000000>;
+ 		regulator-always-on;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index 72d6496e88dd4..689538244392c 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -78,6 +78,8 @@
+ 		#size-cells = <2>;
+ 		ranges = <0x00 0x30000000 0x00 0x30000000 0x00 0x0c400000>;
+ 		ti,sci-dev-id = <199>;
++		dma-coherent;
++		dma-ranges;
+ 
+ 		main_navss_intr: interrupt-controller1 {
+ 			compatible = "ti,sci-intr";
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index b246a4acba416..568f11e23830c 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -464,14 +464,14 @@ static bool trap_bvr(struct kvm_vcpu *vcpu,
+ 		     struct sys_reg_params *p,
+ 		     const struct sys_reg_desc *rd)
+ {
+-	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
++	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm];
+ 
+ 	if (p->is_write)
+ 		reg_to_dbg(vcpu, p, dbg_reg);
+ 	else
+ 		dbg_to_reg(vcpu, p, dbg_reg);
+ 
+-	trace_trap_reg(__func__, rd->reg, p->is_write, *dbg_reg);
++	trace_trap_reg(__func__, rd->CRm, p->is_write, *dbg_reg);
+ 
+ 	return true;
+ }
+@@ -479,7 +479,7 @@ static bool trap_bvr(struct kvm_vcpu *vcpu,
+ static int set_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ 		const struct kvm_one_reg *reg, void __user *uaddr)
+ {
+-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
++	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm];
+ 
+ 	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+ 		return -EFAULT;
+@@ -489,7 +489,7 @@ static int set_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ 	const struct kvm_one_reg *reg, void __user *uaddr)
+ {
+-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
++	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm];
+ 
+ 	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+ 		return -EFAULT;
+@@ -499,21 +499,21 @@ static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ static void reset_bvr(struct kvm_vcpu *vcpu,
+ 		      const struct sys_reg_desc *rd)
+ {
+-	vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg] = rd->val;
++	vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm] = rd->val;
+ }
+ 
+ static bool trap_bcr(struct kvm_vcpu *vcpu,
+ 		     struct sys_reg_params *p,
+ 		     const struct sys_reg_desc *rd)
+ {
+-	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
++	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm];
+ 
+ 	if (p->is_write)
+ 		reg_to_dbg(vcpu, p, dbg_reg);
+ 	else
+ 		dbg_to_reg(vcpu, p, dbg_reg);
+ 
+-	trace_trap_reg(__func__, rd->reg, p->is_write, *dbg_reg);
++	trace_trap_reg(__func__, rd->CRm, p->is_write, *dbg_reg);
+ 
+ 	return true;
+ }
+@@ -521,7 +521,7 @@ static bool trap_bcr(struct kvm_vcpu *vcpu,
+ static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ 		const struct kvm_one_reg *reg, void __user *uaddr)
+ {
+-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
++	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm];
+ 
+ 	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+ 		return -EFAULT;
+@@ -532,7 +532,7 @@ static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ 	const struct kvm_one_reg *reg, void __user *uaddr)
+ {
+-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
++	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm];
+ 
+ 	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+ 		return -EFAULT;
+@@ -542,22 +542,22 @@ static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ static void reset_bcr(struct kvm_vcpu *vcpu,
+ 		      const struct sys_reg_desc *rd)
+ {
+-	vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg] = rd->val;
++	vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm] = rd->val;
+ }
+ 
+ static bool trap_wvr(struct kvm_vcpu *vcpu,
+ 		     struct sys_reg_params *p,
+ 		     const struct sys_reg_desc *rd)
+ {
+-	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
++	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm];
+ 
+ 	if (p->is_write)
+ 		reg_to_dbg(vcpu, p, dbg_reg);
+ 	else
+ 		dbg_to_reg(vcpu, p, dbg_reg);
+ 
+-	trace_trap_reg(__func__, rd->reg, p->is_write,
+-		vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg]);
++	trace_trap_reg(__func__, rd->CRm, p->is_write,
++		vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm]);
+ 
+ 	return true;
+ }
+@@ -565,7 +565,7 @@ static bool trap_wvr(struct kvm_vcpu *vcpu,
+ static int set_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ 		const struct kvm_one_reg *reg, void __user *uaddr)
+ {
+-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
++	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm];
+ 
+ 	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+ 		return -EFAULT;
+@@ -575,7 +575,7 @@ static int set_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ 	const struct kvm_one_reg *reg, void __user *uaddr)
+ {
+-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
++	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm];
+ 
+ 	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+ 		return -EFAULT;
+@@ -585,21 +585,21 @@ static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ static void reset_wvr(struct kvm_vcpu *vcpu,
+ 		      const struct sys_reg_desc *rd)
+ {
+-	vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg] = rd->val;
++	vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm] = rd->val;
+ }
+ 
+ static bool trap_wcr(struct kvm_vcpu *vcpu,
+ 		     struct sys_reg_params *p,
+ 		     const struct sys_reg_desc *rd)
+ {
+-	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
++	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm];
+ 
+ 	if (p->is_write)
+ 		reg_to_dbg(vcpu, p, dbg_reg);
+ 	else
+ 		dbg_to_reg(vcpu, p, dbg_reg);
+ 
+-	trace_trap_reg(__func__, rd->reg, p->is_write, *dbg_reg);
++	trace_trap_reg(__func__, rd->CRm, p->is_write, *dbg_reg);
+ 
+ 	return true;
+ }
+@@ -607,7 +607,7 @@ static bool trap_wcr(struct kvm_vcpu *vcpu,
+ static int set_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ 		const struct kvm_one_reg *reg, void __user *uaddr)
+ {
+-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
++	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm];
+ 
+ 	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+ 		return -EFAULT;
+@@ -617,7 +617,7 @@ static int set_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ 	const struct kvm_one_reg *reg, void __user *uaddr)
+ {
+-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
++	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm];
+ 
+ 	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+ 		return -EFAULT;
+@@ -627,7 +627,7 @@ static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+ static void reset_wcr(struct kvm_vcpu *vcpu,
+ 		      const struct sys_reg_desc *rd)
+ {
+-	vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg] = rd->val;
++	vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm] = rd->val;
+ }
+ 
+ static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
+index 01ab2163659e4..e8c2a6373157d 100644
+--- a/arch/powerpc/kernel/kprobes.c
++++ b/arch/powerpc/kernel/kprobes.c
+@@ -108,7 +108,6 @@ int arch_prepare_kprobe(struct kprobe *p)
+ 	int ret = 0;
+ 	struct kprobe *prev;
+ 	struct ppc_inst insn = ppc_inst_read((struct ppc_inst *)p->addr);
+-	struct ppc_inst prefix = ppc_inst_read((struct ppc_inst *)(p->addr - 1));
+ 
+ 	if ((unsigned long)p->addr & 0x03) {
+ 		printk("Attempt to register kprobe at an unaligned address\n");
+@@ -116,7 +115,8 @@ int arch_prepare_kprobe(struct kprobe *p)
+ 	} else if (IS_MTMSRD(insn) || IS_RFID(insn) || IS_RFI(insn)) {
+ 		printk("Cannot register a kprobe on rfi/rfid or mtmsr[d]\n");
+ 		ret = -EINVAL;
+-	} else if (ppc_inst_prefixed(prefix)) {
++	} else if ((unsigned long)p->addr & ~PAGE_MASK &&
++		   ppc_inst_prefixed(ppc_inst_read((struct ppc_inst *)(p->addr - 1)))) {
+ 		printk("Cannot register a kprobe on the second word of prefixed instruction\n");
+ 		ret = -EINVAL;
+ 	}
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index ca2b40dfd24b8..24d936c147cdf 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -23,7 +23,7 @@ ifneq ($(c-gettimeofday-y),)
+ endif
+ 
+ # Build rules
+-targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.lds vdso-dummy.o
++targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.lds vdso-syms.S
+ obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+ 
+ obj-y += vdso.o vdso-syms.o
+@@ -41,7 +41,7 @@ KASAN_SANITIZE := n
+ $(obj)/vdso.o: $(obj)/vdso.so
+ 
+ # link rule for the .so file, .lds has to be first
+-$(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) FORCE
++$(obj)/vdso.so.dbg: $(obj)/vdso.lds $(obj-vdso) FORCE
+ 	$(call if_changed,vdsold)
+ LDFLAGS_vdso.so.dbg = -shared -s -soname=linux-vdso.so.1 \
+ 	--build-id=sha1 --hash-style=both --eh-frame-hdr
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 51abd44ab8c2d..3b4412c83eec0 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -174,6 +174,7 @@ static inline int apic_is_clustered_box(void)
+ extern int setup_APIC_eilvt(u8 lvt_off, u8 vector, u8 msg_type, u8 mask);
+ extern void lapic_assign_system_vectors(void);
+ extern void lapic_assign_legacy_vector(unsigned int isairq, bool replace);
++extern void lapic_update_legacy_vectors(void);
+ extern void lapic_online(void);
+ extern void lapic_offline(void);
+ extern bool apic_needs_pit(void);
+diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
+index 5861d34f97718..09db5b8f1444a 100644
+--- a/arch/x86/include/asm/disabled-features.h
++++ b/arch/x86/include/asm/disabled-features.h
+@@ -56,11 +56,8 @@
+ # define DISABLE_PTI		(1 << (X86_FEATURE_PTI & 31))
+ #endif
+ 
+-#ifdef CONFIG_IOMMU_SUPPORT
+-# define DISABLE_ENQCMD	0
+-#else
+-# define DISABLE_ENQCMD (1 << (X86_FEATURE_ENQCMD & 31))
+-#endif
++/* Force disable because it's broken beyond repair */
++#define DISABLE_ENQCMD		(1 << (X86_FEATURE_ENQCMD & 31))
+ 
+ /*
+  * Make sure to add features to the correct mask
+diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
+index 38f4936045ab6..8b9bfaad6e662 100644
+--- a/arch/x86/include/asm/fpu/api.h
++++ b/arch/x86/include/asm/fpu/api.h
+@@ -79,10 +79,6 @@ extern int cpu_has_xfeatures(u64 xfeatures_mask, const char **feature_name);
+  */
+ #define PASID_DISABLED	0
+ 
+-#ifdef CONFIG_IOMMU_SUPPORT
+-/* Update current's PASID MSR/state by mm's PASID. */
+-void update_pasid(void);
+-#else
+ static inline void update_pasid(void) { }
+-#endif
++
+ #endif /* _ASM_X86_FPU_API_H */
+diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
+index 8d33ad80704f2..ceeba9f631722 100644
+--- a/arch/x86/include/asm/fpu/internal.h
++++ b/arch/x86/include/asm/fpu/internal.h
+@@ -584,13 +584,6 @@ static inline void switch_fpu_finish(struct fpu *new_fpu)
+ 			pkru_val = pk->pkru;
+ 	}
+ 	__write_pkru(pkru_val);
+-
+-	/*
+-	 * Expensive PASID MSR write will be avoided in update_pasid() because
+-	 * TIF_NEED_FPU_LOAD was set. And the PASID state won't be updated
+-	 * unless it's different from mm->pasid to reduce overhead.
+-	 */
+-	update_pasid();
+ }
+ 
+ #endif /* _ASM_X86_FPU_INTERNAL_H */
+diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
+index 3381198525126..69299878b200a 100644
+--- a/arch/x86/include/asm/kvm_para.h
++++ b/arch/x86/include/asm/kvm_para.h
+@@ -7,8 +7,6 @@
+ #include <linux/interrupt.h>
+ #include <uapi/asm/kvm_para.h>
+ 
+-extern void kvmclock_init(void);
+-
+ #ifdef CONFIG_KVM_GUEST
+ bool kvm_check_and_clear_guest_paused(void);
+ #else
+@@ -86,13 +84,14 @@ static inline long kvm_hypercall4(unsigned int nr, unsigned long p1,
+ }
+ 
+ #ifdef CONFIG_KVM_GUEST
++void kvmclock_init(void);
++void kvmclock_disable(void);
+ bool kvm_para_available(void);
+ unsigned int kvm_arch_para_features(void);
+ unsigned int kvm_arch_para_hints(void);
+ void kvm_async_pf_task_wait_schedule(u32 token);
+ void kvm_async_pf_task_wake(u32 token);
+ u32 kvm_read_and_reset_apf_flags(void);
+-void kvm_disable_steal_time(void);
+ bool __kvm_handle_async_pf(struct pt_regs *regs, u32 token);
+ 
+ DECLARE_STATIC_KEY_FALSE(kvm_async_pf_enabled);
+@@ -137,11 +136,6 @@ static inline u32 kvm_read_and_reset_apf_flags(void)
+ 	return 0;
+ }
+ 
+-static inline void kvm_disable_steal_time(void)
+-{
+-	return;
+-}
+-
+ static __always_inline bool kvm_handle_async_pf(struct pt_regs *regs, u32 token)
+ {
+ 	return false;
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 539f3e88ca7cd..24539a05c58c7 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -2539,6 +2539,7 @@ static void __init apic_bsp_setup(bool upmode)
+ 	end_local_APIC_setup();
+ 	irq_remap_enable_fault_handling();
+ 	setup_IO_APIC();
++	lapic_update_legacy_vectors();
+ }
+ 
+ #ifdef CONFIG_UP_LATE_INIT
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index 758bbf25ef748..bd557e9f5dd8e 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -687,6 +687,26 @@ void lapic_assign_legacy_vector(unsigned int irq, bool replace)
+ 	irq_matrix_assign_system(vector_matrix, ISA_IRQ_VECTOR(irq), replace);
+ }
+ 
++void __init lapic_update_legacy_vectors(void)
++{
++	unsigned int i;
++
++	if (IS_ENABLED(CONFIG_X86_IO_APIC) && nr_ioapics > 0)
++		return;
++
++	/*
++	 * If the IO/APIC is disabled via config, kernel command line or
++	 * lack of enumeration then all legacy interrupts are routed
++	 * through the PIC. Make sure that they are marked as legacy
++	 * vectors. PIC_CASCADE_IRQ has already been marked in
++	 * lapic_assign_system_vectors().
++	 */
++	for (i = 0; i < nr_legacy_irqs(); i++) {
++		if (i != PIC_CASCADE_IR)
++			lapic_assign_legacy_vector(i, true);
++	}
++}
++
+ void __init lapic_assign_system_vectors(void)
+ {
+ 	unsigned int i, vector = 0;
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index 5d8047441a0aa..67f1a03b9b235 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -1402,60 +1402,3 @@ int proc_pid_arch_status(struct seq_file *m, struct pid_namespace *ns,
+ 	return 0;
+ }
+ #endif /* CONFIG_PROC_PID_ARCH_STATUS */
+-
+-#ifdef CONFIG_IOMMU_SUPPORT
+-void update_pasid(void)
+-{
+-	u64 pasid_state;
+-	u32 pasid;
+-
+-	if (!cpu_feature_enabled(X86_FEATURE_ENQCMD))
+-		return;
+-
+-	if (!current->mm)
+-		return;
+-
+-	pasid = READ_ONCE(current->mm->pasid);
+-	/* Set the valid bit in the PASID MSR/state only for valid pasid. */
+-	pasid_state = pasid == PASID_DISABLED ?
+-		      pasid : pasid | MSR_IA32_PASID_VALID;
+-
+-	/*
+-	 * No need to hold fregs_lock() since the task's fpstate won't
+-	 * be changed by others (e.g. ptrace) while the task is being
+-	 * switched to or is in IPI.
+-	 */
+-	if (!test_thread_flag(TIF_NEED_FPU_LOAD)) {
+-		/* The MSR is active and can be directly updated. */
+-		wrmsrl(MSR_IA32_PASID, pasid_state);
+-	} else {
+-		struct fpu *fpu = &current->thread.fpu;
+-		struct ia32_pasid_state *ppasid_state;
+-		struct xregs_state *xsave;
+-
+-		/*
+-		 * The CPU's xstate registers are not currently active. Just
+-		 * update the PASID state in the memory buffer here. The
+-		 * PASID MSR will be loaded when returning to user mode.
+-		 */
+-		xsave = &fpu->state.xsave;
+-		xsave->header.xfeatures |= XFEATURE_MASK_PASID;
+-		ppasid_state = get_xsave_addr(xsave, XFEATURE_PASID);
+-		/*
+-		 * Since XFEATURE_MASK_PASID is set in xfeatures, ppasid_state
+-		 * won't be NULL and no need to check its value.
+-		 *
+-		 * Only update the task's PASID state when it's different
+-		 * from the mm's pasid.
+-		 */
+-		if (ppasid_state->pasid != pasid_state) {
+-			/*
+-			 * Invalid fpregs so that state restoring will pick up
+-			 * the PASID state.
+-			 */
+-			__fpu_invalidate_fpregs_state(fpu);
+-			ppasid_state->pasid = pasid_state;
+-		}
+-	}
+-}
+-#endif /* CONFIG_IOMMU_SUPPORT */
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index 7f57ede3cb8e7..7462b79c39de6 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -26,6 +26,7 @@
+ #include <linux/kprobes.h>
+ #include <linux/nmi.h>
+ #include <linux/swait.h>
++#include <linux/syscore_ops.h>
+ #include <asm/timer.h>
+ #include <asm/cpu.h>
+ #include <asm/traps.h>
+@@ -37,6 +38,7 @@
+ #include <asm/tlb.h>
+ #include <asm/cpuidle_haltpoll.h>
+ #include <asm/ptrace.h>
++#include <asm/reboot.h>
+ #include <asm/svm.h>
+ 
+ DEFINE_STATIC_KEY_FALSE(kvm_async_pf_enabled);
+@@ -374,6 +376,14 @@ static void kvm_pv_disable_apf(void)
+ 	pr_info("Unregister pv shared memory for cpu %d\n", smp_processor_id());
+ }
+ 
++static void kvm_disable_steal_time(void)
++{
++	if (!has_steal_clock)
++		return;
++
++	wrmsr(MSR_KVM_STEAL_TIME, 0, 0);
++}
++
+ static void kvm_pv_guest_cpu_reboot(void *unused)
+ {
+ 	/*
+@@ -416,14 +426,6 @@ static u64 kvm_steal_clock(int cpu)
+ 	return steal;
+ }
+ 
+-void kvm_disable_steal_time(void)
+-{
+-	if (!has_steal_clock)
+-		return;
+-
+-	wrmsr(MSR_KVM_STEAL_TIME, 0, 0);
+-}
+-
+ static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
+ {
+ 	early_set_memory_decrypted((unsigned long) ptr, size);
+@@ -460,6 +462,27 @@ static bool pv_tlb_flush_supported(void)
+ 
+ static DEFINE_PER_CPU(cpumask_var_t, __pv_cpu_mask);
+ 
++static void kvm_guest_cpu_offline(bool shutdown)
++{
++	kvm_disable_steal_time();
++	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
++		wrmsrl(MSR_KVM_PV_EOI_EN, 0);
++	kvm_pv_disable_apf();
++	if (!shutdown)
++		apf_task_wake_all();
++	kvmclock_disable();
++}
++
++static int kvm_cpu_online(unsigned int cpu)
++{
++	unsigned long flags;
++
++	local_irq_save(flags);
++	kvm_guest_cpu_init();
++	local_irq_restore(flags);
++	return 0;
++}
++
+ #ifdef CONFIG_SMP
+ 
+ static bool pv_ipi_supported(void)
+@@ -587,29 +610,46 @@ static void __init kvm_smp_prepare_boot_cpu(void)
+ 	kvm_spinlock_init();
+ }
+ 
+-static void kvm_guest_cpu_offline(void)
++static int kvm_cpu_down_prepare(unsigned int cpu)
+ {
+-	kvm_disable_steal_time();
+-	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
+-		wrmsrl(MSR_KVM_PV_EOI_EN, 0);
+-	kvm_pv_disable_apf();
+-	apf_task_wake_all();
++	unsigned long flags;
++
++	local_irq_save(flags);
++	kvm_guest_cpu_offline(false);
++	local_irq_restore(flags);
++	return 0;
+ }
+ 
+-static int kvm_cpu_online(unsigned int cpu)
++#endif
++
++static int kvm_suspend(void)
+ {
+-	local_irq_disable();
+-	kvm_guest_cpu_init();
+-	local_irq_enable();
++	kvm_guest_cpu_offline(false);
++
+ 	return 0;
+ }
+ 
+-static int kvm_cpu_down_prepare(unsigned int cpu)
++static void kvm_resume(void)
+ {
+-	local_irq_disable();
+-	kvm_guest_cpu_offline();
+-	local_irq_enable();
+-	return 0;
++	kvm_cpu_online(raw_smp_processor_id());
++}
++
++static struct syscore_ops kvm_syscore_ops = {
++	.suspend	= kvm_suspend,
++	.resume		= kvm_resume,
++};
++
++/*
++ * After a PV feature is registered, the host will keep writing to the
++ * registered memory location. If the guest happens to shutdown, this memory
++ * won't be valid. In cases like kexec, in which you install a new kernel, this
++ * means a random memory location will be kept being written.
++ */
++#ifdef CONFIG_KEXEC_CORE
++static void kvm_crash_shutdown(struct pt_regs *regs)
++{
++	kvm_guest_cpu_offline(true);
++	native_machine_crash_shutdown(regs);
+ }
+ #endif
+ 
+@@ -681,6 +721,12 @@ static void __init kvm_guest_init(void)
+ 	kvm_guest_cpu_init();
+ #endif
+ 
++#ifdef CONFIG_KEXEC_CORE
++	machine_ops.crash_shutdown = kvm_crash_shutdown;
++#endif
++
++	register_syscore_ops(&kvm_syscore_ops);
++
+ 	/*
+ 	 * Hard lockup detection is enabled by default. Disable it, as guests
+ 	 * can get false positives too easily, for example if the host is
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index 5ee705b44560b..c4ac26333bc41 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -20,7 +20,6 @@
+ #include <asm/hypervisor.h>
+ #include <asm/mem_encrypt.h>
+ #include <asm/x86_init.h>
+-#include <asm/reboot.h>
+ #include <asm/kvmclock.h>
+ 
+ static int kvmclock __initdata = 1;
+@@ -204,28 +203,9 @@ static void kvm_setup_secondary_clock(void)
+ }
+ #endif
+ 
+-/*
+- * After the clock is registered, the host will keep writing to the
+- * registered memory location. If the guest happens to shutdown, this memory
+- * won't be valid. In cases like kexec, in which you install a new kernel, this
+- * means a random memory location will be kept being written. So before any
+- * kind of shutdown from our side, we unregister the clock by writing anything
+- * that does not have the 'enable' bit set in the msr
+- */
+-#ifdef CONFIG_KEXEC_CORE
+-static void kvm_crash_shutdown(struct pt_regs *regs)
+-{
+-	native_write_msr(msr_kvm_system_time, 0, 0);
+-	kvm_disable_steal_time();
+-	native_machine_crash_shutdown(regs);
+-}
+-#endif
+-
+-static void kvm_shutdown(void)
++void kvmclock_disable(void)
+ {
+ 	native_write_msr(msr_kvm_system_time, 0, 0);
+-	kvm_disable_steal_time();
+-	native_machine_shutdown();
+ }
+ 
+ static void __init kvmclock_init_mem(void)
+@@ -352,10 +332,6 @@ void __init kvmclock_init(void)
+ #endif
+ 	x86_platform.save_sched_clock_state = kvm_save_sched_clock_state;
+ 	x86_platform.restore_sched_clock_state = kvm_restore_sched_clock_state;
+-	machine_ops.shutdown  = kvm_shutdown;
+-#ifdef CONFIG_KEXEC_CORE
+-	machine_ops.crash_shutdown  = kvm_crash_shutdown;
+-#endif
+ 	kvm_get_preset_lpj();
+ 
+ 	/*
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 9d4eb114613c2..41d44fb5f753d 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -2362,7 +2362,7 @@ static int cr_interception(struct vcpu_svm *svm)
+ 	err = 0;
+ 	if (cr >= 16) { /* mov to cr */
+ 		cr -= 16;
+-		val = kvm_register_read(&svm->vcpu, reg);
++		val = kvm_register_readl(&svm->vcpu, reg);
+ 		trace_kvm_cr_write(cr, val);
+ 		switch (cr) {
+ 		case 0:
+@@ -2408,7 +2408,7 @@ static int cr_interception(struct vcpu_svm *svm)
+ 			kvm_queue_exception(&svm->vcpu, UD_VECTOR);
+ 			return 1;
+ 		}
+-		kvm_register_write(&svm->vcpu, reg, val);
++		kvm_register_writel(&svm->vcpu, reg, val);
+ 		trace_kvm_cr_read(cr, val);
+ 	}
+ 	return kvm_complete_insn_gp(&svm->vcpu, err);
+@@ -2439,13 +2439,13 @@ static int dr_interception(struct vcpu_svm *svm)
+ 	if (dr >= 16) { /* mov to DRn */
+ 		if (!kvm_require_dr(&svm->vcpu, dr - 16))
+ 			return 1;
+-		val = kvm_register_read(&svm->vcpu, reg);
++		val = kvm_register_readl(&svm->vcpu, reg);
+ 		kvm_set_dr(&svm->vcpu, dr - 16, val);
+ 	} else {
+ 		if (!kvm_require_dr(&svm->vcpu, dr))
+ 			return 1;
+ 		kvm_get_dr(&svm->vcpu, dr, &val);
+-		kvm_register_write(&svm->vcpu, reg, val);
++		kvm_register_writel(&svm->vcpu, reg, val);
+ 	}
+ 
+ 	return kvm_skip_emulated_instruction(&svm->vcpu);
+diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
+index a19374d261013..65f599e9075bc 100644
+--- a/arch/x86/mm/mem_encrypt_identity.c
++++ b/arch/x86/mm/mem_encrypt_identity.c
+@@ -504,10 +504,6 @@ void __init sme_enable(struct boot_params *bp)
+ #define AMD_SME_BIT	BIT(0)
+ #define AMD_SEV_BIT	BIT(1)
+ 
+-	/* Check the SEV MSR whether SEV or SME is enabled */
+-	sev_status   = __rdmsr(MSR_AMD64_SEV);
+-	feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
+-
+ 	/*
+ 	 * Check for the SME/SEV feature:
+ 	 *   CPUID Fn8000_001F[EAX]
+@@ -519,11 +515,16 @@ void __init sme_enable(struct boot_params *bp)
+ 	eax = 0x8000001f;
+ 	ecx = 0;
+ 	native_cpuid(&eax, &ebx, &ecx, &edx);
+-	if (!(eax & feature_mask))
++	/* Check whether SEV or SME is supported */
++	if (!(eax & (AMD_SEV_BIT | AMD_SME_BIT)))
+ 		return;
+ 
+ 	me_mask = 1UL << (ebx & 0x3f);
+ 
++	/* Check the SEV MSR whether SEV or SME is enabled */
++	sev_status   = __rdmsr(MSR_AMD64_SEV);
++	feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
++
+ 	/* Check if memory encryption is enabled */
+ 	if (feature_mask == AMD_SME_BIT) {
+ 		/*
+diff --git a/drivers/acpi/acpica/utdelete.c b/drivers/acpi/acpica/utdelete.c
+index 4c0d4e4341961..72d2c0b656339 100644
+--- a/drivers/acpi/acpica/utdelete.c
++++ b/drivers/acpi/acpica/utdelete.c
+@@ -285,6 +285,14 @@ static void acpi_ut_delete_internal_obj(union acpi_operand_object *object)
+ 		}
+ 		break;
+ 
++	case ACPI_TYPE_LOCAL_ADDRESS_HANDLER:
++
++		ACPI_DEBUG_PRINT((ACPI_DB_ALLOCATIONS,
++				  "***** Address handler %p\n", object));
++
++		acpi_os_delete_mutex(object->address_space.context_mutex);
++		break;
++
+ 	default:
+ 
+ 		break;
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 9afbe4992a1dd..818dc7f54f038 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -1330,6 +1330,34 @@ err_allow_idle:
+ 	return error;
+ }
+ 
++static int sysc_reinit_module(struct sysc *ddata, bool leave_enabled)
++{
++	struct device *dev = ddata->dev;
++	int error;
++
++	/* Disable target module if it is enabled */
++	if (ddata->enabled) {
++		error = sysc_runtime_suspend(dev);
++		if (error)
++			dev_warn(dev, "reinit suspend failed: %i\n", error);
++	}
++
++	/* Enable target module */
++	error = sysc_runtime_resume(dev);
++	if (error)
++		dev_warn(dev, "reinit resume failed: %i\n", error);
++
++	if (leave_enabled)
++		return error;
++
++	/* Disable target module if no leave_enabled was set */
++	error = sysc_runtime_suspend(dev);
++	if (error)
++		dev_warn(dev, "reinit suspend failed: %i\n", error);
++
++	return error;
++}
++
+ static int __maybe_unused sysc_noirq_suspend(struct device *dev)
+ {
+ 	struct sysc *ddata;
+@@ -1340,12 +1368,18 @@ static int __maybe_unused sysc_noirq_suspend(struct device *dev)
+ 	    (SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_NO_IDLE))
+ 		return 0;
+ 
+-	return pm_runtime_force_suspend(dev);
++	if (!ddata->enabled)
++		return 0;
++
++	ddata->needs_resume = 1;
++
++	return sysc_runtime_suspend(dev);
+ }
+ 
+ static int __maybe_unused sysc_noirq_resume(struct device *dev)
+ {
+ 	struct sysc *ddata;
++	int error = 0;
+ 
+ 	ddata = dev_get_drvdata(dev);
+ 
+@@ -1353,7 +1387,19 @@ static int __maybe_unused sysc_noirq_resume(struct device *dev)
+ 	    (SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_NO_IDLE))
+ 		return 0;
+ 
+-	return pm_runtime_force_resume(dev);
++	if (ddata->cfg.quirks & SYSC_QUIRK_REINIT_ON_RESUME) {
++		error = sysc_reinit_module(ddata, ddata->needs_resume);
++		if (error)
++			dev_warn(dev, "noirq_resume failed: %i\n", error);
++	} else if (ddata->needs_resume) {
++		error = sysc_runtime_resume(dev);
++		if (error)
++			dev_warn(dev, "noirq_resume failed: %i\n", error);
++	}
++
++	ddata->needs_resume = 0;
++
++	return error;
+ }
+ 
+ static const struct dev_pm_ops sysc_pm_ops = {
+@@ -1404,9 +1450,9 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ 		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
+ 	/* Uarts on omap4 and later */
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x50411e03, 0xffff00ff,
+-		   SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
++		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47422e03, 0xffffffff,
+-		   SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
++		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
+ 
+ 	/* Quirks that need to be set based on the module address */
+ 	SYSC_QUIRK("mcpdm", 0x40132000, 0, 0x10, -ENODEV, 0x50000800, 0xffffffff,
+@@ -1462,7 +1508,8 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ 	SYSC_QUIRK("usb_otg_hs", 0, 0x400, 0x404, 0x408, 0x00000050,
+ 		   0xffffffff, SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
+ 	SYSC_QUIRK("usb_otg_hs", 0, 0, 0x10, -ENODEV, 0x4ea2080d, 0xffffffff,
+-		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
++		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY |
++		   SYSC_QUIRK_REINIT_ON_RESUME),
+ 	SYSC_QUIRK("wdt", 0, 0, 0x10, 0x14, 0x502a0500, 0xfffff0f0,
+ 		   SYSC_MODULE_QUIRK_WDT),
+ 	/* PRUSS on am3, am4 and am5 */
+diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c
+index e15d484b6a5a7..ea7ca74fc1730 100644
+--- a/drivers/firmware/efi/cper.c
++++ b/drivers/firmware/efi/cper.c
+@@ -276,8 +276,7 @@ static int cper_dimm_err_location(struct cper_mem_err_compact *mem, char *msg)
+ 	if (!msg || !(mem->validation_bits & CPER_MEM_VALID_MODULE_HANDLE))
+ 		return 0;
+ 
+-	n = 0;
+-	len = CPER_REC_LEN - 1;
++	len = CPER_REC_LEN;
+ 	dmi_memdev_name(mem->mem_dev_handle, &bank, &device);
+ 	if (bank && device)
+ 		n = snprintf(msg, len, "DIMM location: %s %s ", bank, device);
+@@ -286,7 +285,6 @@ static int cper_dimm_err_location(struct cper_mem_err_compact *mem, char *msg)
+ 			     "DIMM location: not present. DMI handle: 0x%.4x ",
+ 			     mem->mem_dev_handle);
+ 
+-	msg[n] = '\0';
+ 	return n;
+ }
+ 
+diff --git a/drivers/firmware/efi/fdtparams.c b/drivers/firmware/efi/fdtparams.c
+index bb042ab7c2be6..e901f8564ca0c 100644
+--- a/drivers/firmware/efi/fdtparams.c
++++ b/drivers/firmware/efi/fdtparams.c
+@@ -98,6 +98,9 @@ u64 __init efi_get_fdt_params(struct efi_memory_map_data *mm)
+ 	BUILD_BUG_ON(ARRAY_SIZE(target) != ARRAY_SIZE(name));
+ 	BUILD_BUG_ON(ARRAY_SIZE(target) != ARRAY_SIZE(dt_params[0].params));
+ 
++	if (!fdt)
++		return 0;
++
+ 	for (i = 0; i < ARRAY_SIZE(dt_params); i++) {
+ 		node = fdt_path_offset(fdt, dt_params[i].path);
+ 		if (node < 0)
+diff --git a/drivers/firmware/efi/libstub/file.c b/drivers/firmware/efi/libstub/file.c
+index 4e81c6077188e..dd95f330fe6e1 100644
+--- a/drivers/firmware/efi/libstub/file.c
++++ b/drivers/firmware/efi/libstub/file.c
+@@ -103,7 +103,7 @@ static int find_file_option(const efi_char16_t *cmdline, int cmdline_len,
+ 		return 0;
+ 
+ 	/* Skip any leading slashes */
+-	while (cmdline[i] == L'/' || cmdline[i] == L'\\')
++	while (i < cmdline_len && (cmdline[i] == L'/' || cmdline[i] == L'\\'))
+ 		i++;
+ 
+ 	while (--result_len > 0 && i < cmdline_len) {
+diff --git a/drivers/firmware/efi/memattr.c b/drivers/firmware/efi/memattr.c
+index 5737cb0fcd44e..0a9aba5f9ceff 100644
+--- a/drivers/firmware/efi/memattr.c
++++ b/drivers/firmware/efi/memattr.c
+@@ -67,11 +67,6 @@ static bool entry_is_valid(const efi_memory_desc_t *in, efi_memory_desc_t *out)
+ 		return false;
+ 	}
+ 
+-	if (!(in->attribute & (EFI_MEMORY_RO | EFI_MEMORY_XP))) {
+-		pr_warn("Entry attributes invalid: RO and XP bits both cleared\n");
+-		return false;
+-	}
+-
+ 	if (PAGE_SIZE > EFI_PAGE_SIZE &&
+ 	    (!PAGE_ALIGNED(in->phys_addr) ||
+ 	     !PAGE_ALIGNED(in->num_pages << EFI_PAGE_SHIFT))) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+index c80d8339f58c4..2c1c5f7f98deb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+@@ -337,7 +337,6 @@ static int amdgpu_ctx_query2(struct amdgpu_device *adev,
+ {
+ 	struct amdgpu_ctx *ctx;
+ 	struct amdgpu_ctx_mgr *mgr;
+-	unsigned long ras_counter;
+ 
+ 	if (!fpriv)
+ 		return -EINVAL;
+@@ -362,21 +361,6 @@ static int amdgpu_ctx_query2(struct amdgpu_device *adev,
+ 	if (atomic_read(&ctx->guilty))
+ 		out->state.flags |= AMDGPU_CTX_QUERY2_FLAGS_GUILTY;
+ 
+-	/*query ue count*/
+-	ras_counter = amdgpu_ras_query_error_count(adev, false);
+-	/*ras counter is monotonic increasing*/
+-	if (ras_counter != ctx->ras_counter_ue) {
+-		out->state.flags |= AMDGPU_CTX_QUERY2_FLAGS_RAS_UE;
+-		ctx->ras_counter_ue = ras_counter;
+-	}
+-
+-	/*query ce count*/
+-	ras_counter = amdgpu_ras_query_error_count(adev, true);
+-	if (ras_counter != ctx->ras_counter_ce) {
+-		out->state.flags |= AMDGPU_CTX_QUERY2_FLAGS_RAS_CE;
+-		ctx->ras_counter_ce = ras_counter;
+-	}
+-
+ 	mutex_unlock(&mgr->lock);
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+index 63b3501823898..8c84e35c2719b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+@@ -187,14 +187,14 @@ static int jpeg_v2_5_hw_init(void *handle)
+ static int jpeg_v2_5_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+-	struct amdgpu_ring *ring;
+ 	int i;
+ 
++	cancel_delayed_work_sync(&adev->vcn.idle_work);
++
+ 	for (i = 0; i < adev->jpeg.num_jpeg_inst; ++i) {
+ 		if (adev->jpeg.harvest_config & (1 << i))
+ 			continue;
+ 
+-		ring = &adev->jpeg.inst[i].ring_dec;
+ 		if (adev->jpeg.cur_state != AMD_PG_STATE_GATE &&
+ 		      RREG32_SOC15(JPEG, i, mmUVD_JRBC_STATUS))
+ 			jpeg_v2_5_set_powergating_state(adev, AMD_PG_STATE_GATE);
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+index 9259e35f0f55a..e00c88abeaed1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+@@ -159,9 +159,9 @@ static int jpeg_v3_0_hw_init(void *handle)
+ static int jpeg_v3_0_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+-	struct amdgpu_ring *ring;
+ 
+-	ring = &adev->jpeg.inst->ring_dec;
++	cancel_delayed_work_sync(&adev->vcn.idle_work);
++
+ 	if (adev->jpeg.cur_state != AMD_PG_STATE_GATE &&
+ 	      RREG32_SOC15(JPEG, 0, mmUVD_JRBC_STATUS))
+ 		jpeg_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+index 666bfa4a0b8ea..53f0899eb3166 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+@@ -356,6 +356,7 @@ static int uvd_v6_0_enc_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+ 
+ error:
+ 	dma_fence_put(fence);
++	amdgpu_bo_unpin(bo);
+ 	amdgpu_bo_unreserve(bo);
+ 	amdgpu_bo_unref(&bo);
+ 	return r;
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index 700621ddc02e2..c9c888be12285 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -345,15 +345,14 @@ done:
+ static int vcn_v3_0_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+-	struct amdgpu_ring *ring;
+ 	int i;
+ 
++	cancel_delayed_work_sync(&adev->vcn.idle_work);
++
+ 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+ 		if (adev->vcn.harvest_config & (1 << i))
+ 			continue;
+ 
+-		ring = &adev->vcn.inst[i].ring_dec;
+-
+ 		if (!amdgpu_sriov_vf(adev)) {
+ 			if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) ||
+ 					(adev->vcn.cur_state != AMD_PG_STATE_GATE &&
+diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
+index e424a6d1a68c9..7a72faf29f272 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_request.c
++++ b/drivers/gpu/drm/i915/selftests/i915_request.c
+@@ -1391,8 +1391,8 @@ static int live_breadcrumbs_smoketest(void *arg)
+ 
+ 	for (n = 0; n < smoke[0].ncontexts; n++) {
+ 		smoke[0].contexts[n] = live_context(i915, file);
+-		if (!smoke[0].contexts[n]) {
+-			ret = -ENOMEM;
++		if (IS_ERR(smoke[0].contexts[n])) {
++			ret = PTR_ERR(smoke[0].contexts[n]);
+ 			goto out_contexts;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index e69ea810e18d9..c8217f4858a15 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -931,8 +931,7 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
+ 		DPU_DEBUG("REG_DMA is not defined");
+ 	}
+ 
+-	if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))
+-		dpu_kms_parse_data_bus_icc_path(dpu_kms);
++	dpu_kms_parse_data_bus_icc_path(dpu_kms);
+ 
+ 	pm_runtime_get_sync(&dpu_kms->pdev->dev);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+index cd4078807db1b..3416e9617ee9a 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+@@ -31,40 +31,8 @@ struct dpu_mdss {
+ 	void __iomem *mmio;
+ 	struct dss_module_power mp;
+ 	struct dpu_irq_controller irq_controller;
+-	struct icc_path *path[2];
+-	u32 num_paths;
+ };
+ 
+-static int dpu_mdss_parse_data_bus_icc_path(struct drm_device *dev,
+-						struct dpu_mdss *dpu_mdss)
+-{
+-	struct icc_path *path0 = of_icc_get(dev->dev, "mdp0-mem");
+-	struct icc_path *path1 = of_icc_get(dev->dev, "mdp1-mem");
+-
+-	if (IS_ERR_OR_NULL(path0))
+-		return PTR_ERR_OR_ZERO(path0);
+-
+-	dpu_mdss->path[0] = path0;
+-	dpu_mdss->num_paths = 1;
+-
+-	if (!IS_ERR_OR_NULL(path1)) {
+-		dpu_mdss->path[1] = path1;
+-		dpu_mdss->num_paths++;
+-	}
+-
+-	return 0;
+-}
+-
+-static void dpu_mdss_icc_request_bw(struct msm_mdss *mdss)
+-{
+-	struct dpu_mdss *dpu_mdss = to_dpu_mdss(mdss);
+-	int i;
+-	u64 avg_bw = dpu_mdss->num_paths ? MAX_BW / dpu_mdss->num_paths : 0;
+-
+-	for (i = 0; i < dpu_mdss->num_paths; i++)
+-		icc_set_bw(dpu_mdss->path[i], avg_bw, kBps_to_icc(MAX_BW));
+-}
+-
+ static void dpu_mdss_irq(struct irq_desc *desc)
+ {
+ 	struct dpu_mdss *dpu_mdss = irq_desc_get_handler_data(desc);
+@@ -178,8 +146,6 @@ static int dpu_mdss_enable(struct msm_mdss *mdss)
+ 	struct dss_module_power *mp = &dpu_mdss->mp;
+ 	int ret;
+ 
+-	dpu_mdss_icc_request_bw(mdss);
+-
+ 	ret = msm_dss_enable_clk(mp->clk_config, mp->num_clk, true);
+ 	if (ret) {
+ 		DPU_ERROR("clock enable failed, ret:%d\n", ret);
+@@ -213,15 +179,12 @@ static int dpu_mdss_disable(struct msm_mdss *mdss)
+ {
+ 	struct dpu_mdss *dpu_mdss = to_dpu_mdss(mdss);
+ 	struct dss_module_power *mp = &dpu_mdss->mp;
+-	int ret, i;
++	int ret;
+ 
+ 	ret = msm_dss_enable_clk(mp->clk_config, mp->num_clk, false);
+ 	if (ret)
+ 		DPU_ERROR("clock disable failed, ret:%d\n", ret);
+ 
+-	for (i = 0; i < dpu_mdss->num_paths; i++)
+-		icc_set_bw(dpu_mdss->path[i], 0, 0);
+-
+ 	return ret;
+ }
+ 
+@@ -232,7 +195,6 @@ static void dpu_mdss_destroy(struct drm_device *dev)
+ 	struct dpu_mdss *dpu_mdss = to_dpu_mdss(priv->mdss);
+ 	struct dss_module_power *mp = &dpu_mdss->mp;
+ 	int irq;
+-	int i;
+ 
+ 	pm_runtime_suspend(dev->dev);
+ 	pm_runtime_disable(dev->dev);
+@@ -242,9 +204,6 @@ static void dpu_mdss_destroy(struct drm_device *dev)
+ 	msm_dss_put_clk(mp->clk_config, mp->num_clk);
+ 	devm_kfree(&pdev->dev, mp->clk_config);
+ 
+-	for (i = 0; i < dpu_mdss->num_paths; i++)
+-		icc_put(dpu_mdss->path[i]);
+-
+ 	if (dpu_mdss->mmio)
+ 		devm_iounmap(&pdev->dev, dpu_mdss->mmio);
+ 	dpu_mdss->mmio = NULL;
+@@ -276,12 +235,6 @@ int dpu_mdss_init(struct drm_device *dev)
+ 
+ 	DRM_DEBUG("mapped mdss address space @%pK\n", dpu_mdss->mmio);
+ 
+-	if (!of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss")) {
+-		ret = dpu_mdss_parse_data_bus_icc_path(dev, dpu_mdss);
+-		if (ret)
+-			return ret;
+-	}
+-
+ 	mp = &dpu_mdss->mp;
+ 	ret = msm_dss_parse_clock(pdev, mp);
+ 	if (ret) {
+@@ -307,8 +260,6 @@ int dpu_mdss_init(struct drm_device *dev)
+ 
+ 	pm_runtime_enable(dev->dev);
+ 
+-	dpu_mdss_icc_request_bw(priv->mdss);
+-
+ 	return ret;
+ 
+ irq_error:
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 74ebfb12c360e..66b1051620390 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -1259,6 +1259,7 @@ static int hidpp20_battery_map_status_voltage(u8 data[3], int *voltage,
+ 	int status;
+ 
+ 	long flags = (long) data[2];
++	*level = POWER_SUPPLY_CAPACITY_LEVEL_UNKNOWN;
+ 
+ 	if (flags & 0x80)
+ 		switch (flags & 0x07) {
+diff --git a/drivers/hid/hid-magicmouse.c b/drivers/hid/hid-magicmouse.c
+index abd86903875f0..fc4c074597539 100644
+--- a/drivers/hid/hid-magicmouse.c
++++ b/drivers/hid/hid-magicmouse.c
+@@ -597,7 +597,7 @@ static int magicmouse_probe(struct hid_device *hdev,
+ 	if (id->vendor == USB_VENDOR_ID_APPLE &&
+ 	    id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
+ 	    hdev->type != HID_TYPE_USBMOUSE)
+-		return 0;
++		return -ENODEV;
+ 
+ 	msc = devm_kzalloc(&hdev->dev, sizeof(*msc), GFP_KERNEL);
+ 	if (msc == NULL) {
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 8429ebe7097e4..8580ace596c25 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -604,9 +604,13 @@ static struct mt_report_data *mt_allocate_report_data(struct mt_device *td,
+ 		if (!(HID_MAIN_ITEM_VARIABLE & field->flags))
+ 			continue;
+ 
+-		for (n = 0; n < field->report_count; n++) {
+-			if (field->usage[n].hid == HID_DG_CONTACTID)
+-				rdata->is_mt_collection = true;
++		if (field->logical == HID_DG_FINGER || td->hdev->group != HID_GROUP_MULTITOUCH_WIN_8) {
++			for (n = 0; n < field->report_count; n++) {
++				if (field->usage[n].hid == HID_DG_CONTACTID) {
++					rdata->is_mt_collection = true;
++					break;
++				}
++			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index cb7758d59014e..1f08c848c33de 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -50,6 +50,7 @@
+ #define I2C_HID_QUIRK_BOGUS_IRQ			BIT(4)
+ #define I2C_HID_QUIRK_RESET_ON_RESUME		BIT(5)
+ #define I2C_HID_QUIRK_BAD_INPUT_SIZE		BIT(6)
++#define I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET	BIT(7)
+ 
+ 
+ /* flags */
+@@ -183,6 +184,11 @@ static const struct i2c_hid_quirks {
+ 		 I2C_HID_QUIRK_RESET_ON_RESUME },
+ 	{ USB_VENDOR_ID_ITE, I2C_DEVICE_ID_ITE_LENOVO_LEGION_Y720,
+ 		I2C_HID_QUIRK_BAD_INPUT_SIZE },
++	/*
++	 * Sending the wakeup after reset actually break ELAN touchscreen controller
++	 */
++	{ USB_VENDOR_ID_ELAN, HID_ANY_ID,
++		 I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET },
+ 	{ 0, 0 }
+ };
+ 
+@@ -466,7 +472,8 @@ static int i2c_hid_hwreset(struct i2c_client *client)
+ 	}
+ 
+ 	/* At least some SIS devices need this after reset */
+-	ret = i2c_hid_set_power(client, I2C_HID_PWR_ON);
++	if (!(ihid->quirks & I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET))
++		ret = i2c_hid_set_power(client, I2C_HID_PWR_ON);
+ 
+ out_unlock:
+ 	mutex_unlock(&ihid->reset_lock);
+@@ -1131,8 +1138,8 @@ static int i2c_hid_probe(struct i2c_client *client,
+ 	hid->vendor = le16_to_cpu(ihid->hdesc.wVendorID);
+ 	hid->product = le16_to_cpu(ihid->hdesc.wProductID);
+ 
+-	snprintf(hid->name, sizeof(hid->name), "%s %04hX:%04hX",
+-		 client->name, hid->vendor, hid->product);
++	snprintf(hid->name, sizeof(hid->name), "%s %04X:%04X",
++		 client->name, (u16)hid->vendor, (u16)hid->product);
+ 	strlcpy(hid->phys, dev_name(&client->dev), sizeof(hid->phys));
+ 
+ 	ihid->quirks = i2c_hid_lookup_quirk(hid->vendor, hid->product);
+diff --git a/drivers/hid/usbhid/hid-pidff.c b/drivers/hid/usbhid/hid-pidff.c
+index fddac7c72f645..07a9fe97d2e05 100644
+--- a/drivers/hid/usbhid/hid-pidff.c
++++ b/drivers/hid/usbhid/hid-pidff.c
+@@ -1292,6 +1292,7 @@ int hid_pidff_init(struct hid_device *hid)
+ 
+ 	if (pidff->pool[PID_DEVICE_MANAGED_POOL].value &&
+ 	    pidff->pool[PID_DEVICE_MANAGED_POOL].value[0] == 0) {
++		error = -EPERM;
+ 		hid_notice(hid,
+ 			   "device does not support device managed pool\n");
+ 		goto fail;
+diff --git a/drivers/hwmon/dell-smm-hwmon.c b/drivers/hwmon/dell-smm-hwmon.c
+index 73b9db9e3aab6..63b74e781c5d9 100644
+--- a/drivers/hwmon/dell-smm-hwmon.c
++++ b/drivers/hwmon/dell-smm-hwmon.c
+@@ -838,10 +838,10 @@ static struct attribute *i8k_attrs[] = {
+ static umode_t i8k_is_visible(struct kobject *kobj, struct attribute *attr,
+ 			      int index)
+ {
+-	if (disallow_fan_support && index >= 8)
++	if (disallow_fan_support && index >= 20)
+ 		return 0;
+ 	if (disallow_fan_type_call &&
+-	    (index == 9 || index == 12 || index == 15))
++	    (index == 21 || index == 25 || index == 28))
+ 		return 0;
+ 	if (index >= 0 && index <= 1 &&
+ 	    !(i8k_hwmon_flags & I8K_HWMON_HAVE_TEMP1))
+diff --git a/drivers/hwmon/pmbus/isl68137.c b/drivers/hwmon/pmbus/isl68137.c
+index 7cad76e07f701..3f1b826dac8a0 100644
+--- a/drivers/hwmon/pmbus/isl68137.c
++++ b/drivers/hwmon/pmbus/isl68137.c
+@@ -244,8 +244,8 @@ static int isl68137_probe(struct i2c_client *client)
+ 		info->read_word_data = raa_dmpvr2_read_word_data;
+ 		break;
+ 	case raa_dmpvr2_2rail_nontc:
+-		info->func[0] &= ~PMBUS_HAVE_TEMP;
+-		info->func[1] &= ~PMBUS_HAVE_TEMP;
++		info->func[0] &= ~PMBUS_HAVE_TEMP3;
++		info->func[1] &= ~PMBUS_HAVE_TEMP3;
+ 		fallthrough;
+ 	case raa_dmpvr2_2rail:
+ 		info->pages = 2;
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 4a6dd05d6dbf9..86f028febce35 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -654,6 +654,14 @@ static int geni_i2c_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static void geni_i2c_shutdown(struct platform_device *pdev)
++{
++	struct geni_i2c_dev *gi2c = platform_get_drvdata(pdev);
++
++	/* Make client i2c transfers start failing */
++	i2c_mark_adapter_suspended(&gi2c->adap);
++}
++
+ static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev)
+ {
+ 	int ret;
+@@ -694,6 +702,8 @@ static int __maybe_unused geni_i2c_suspend_noirq(struct device *dev)
+ {
+ 	struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
+ 
++	i2c_mark_adapter_suspended(&gi2c->adap);
++
+ 	if (!gi2c->suspended) {
+ 		geni_i2c_runtime_suspend(dev);
+ 		pm_runtime_disable(dev);
+@@ -703,8 +713,16 @@ static int __maybe_unused geni_i2c_suspend_noirq(struct device *dev)
+ 	return 0;
+ }
+ 
++static int __maybe_unused geni_i2c_resume_noirq(struct device *dev)
++{
++	struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
++
++	i2c_mark_adapter_resumed(&gi2c->adap);
++	return 0;
++}
++
+ static const struct dev_pm_ops geni_i2c_pm_ops = {
+-	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(geni_i2c_suspend_noirq, NULL)
++	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(geni_i2c_suspend_noirq, geni_i2c_resume_noirq)
+ 	SET_RUNTIME_PM_OPS(geni_i2c_runtime_suspend, geni_i2c_runtime_resume,
+ 									NULL)
+ };
+@@ -718,6 +736,7 @@ MODULE_DEVICE_TABLE(of, geni_i2c_dt_match);
+ static struct platform_driver geni_i2c_driver = {
+ 	.probe  = geni_i2c_probe,
+ 	.remove = geni_i2c_remove,
++	.shutdown = geni_i2c_shutdown,
+ 	.driver = {
+ 		.name = "geni_i2c",
+ 		.pm = &geni_i2c_pm_ops,
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+index 27308600da153..2dd4869156291 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+@@ -2177,8 +2177,6 @@ int cxgb4_update_mac_filt(struct port_info *pi, unsigned int viid,
+ 			  bool persistent, u8 *smt_idx);
+ int cxgb4_get_msix_idx_from_bmap(struct adapter *adap);
+ void cxgb4_free_msix_idx_in_bmap(struct adapter *adap, u32 msix_idx);
+-int cxgb_open(struct net_device *dev);
+-int cxgb_close(struct net_device *dev);
+ void cxgb4_enable_rx(struct adapter *adap, struct sge_rspq *q);
+ void cxgb4_quiesce_rx(struct sge_rspq *q);
+ int cxgb4_port_mirror_alloc(struct net_device *dev);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 23c13f34a5727..04dcb5e4b3161 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -2834,7 +2834,7 @@ static void cxgb_down(struct adapter *adapter)
+ /*
+  * net_device operations
+  */
+-int cxgb_open(struct net_device *dev)
++static int cxgb_open(struct net_device *dev)
+ {
+ 	struct port_info *pi = netdev_priv(dev);
+ 	struct adapter *adapter = pi->adapter;
+@@ -2882,7 +2882,7 @@ out_unlock:
+ 	return err;
+ }
+ 
+-int cxgb_close(struct net_device *dev)
++static int cxgb_close(struct net_device *dev)
+ {
+ 	struct port_info *pi = netdev_priv(dev);
+ 	struct adapter *adapter = pi->adapter;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+index 1b88bd1c2dbe4..dd9be229819a5 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+@@ -997,20 +997,16 @@ int cxgb4_tc_flower_destroy(struct net_device *dev,
+ 	if (!ch_flower)
+ 		return -ENOENT;
+ 
++	rhashtable_remove_fast(&adap->flower_tbl, &ch_flower->node,
++			       adap->flower_ht_params);
++
+ 	ret = cxgb4_flow_rule_destroy(dev, ch_flower->fs.tc_prio,
+ 				      &ch_flower->fs, ch_flower->filter_id);
+ 	if (ret)
+-		goto err;
++		netdev_err(dev, "Flow rule destroy failed for tid: %u, ret: %d",
++			   ch_flower->filter_id, ret);
+ 
+-	ret = rhashtable_remove_fast(&adap->flower_tbl, &ch_flower->node,
+-				     adap->flower_ht_params);
+-	if (ret) {
+-		netdev_err(dev, "Flow remove from rhashtable failed");
+-		goto err;
+-	}
+ 	kfree_rcu(ch_flower, rcu);
+-
+-err:
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c
+index 6c259de96f969..338b04f339b3d 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c
+@@ -589,7 +589,8 @@ int cxgb4_setup_tc_mqprio(struct net_device *dev,
+ 	 * down before configuring tc params.
+ 	 */
+ 	if (netif_running(dev)) {
+-		cxgb_close(dev);
++		netif_tx_stop_all_queues(dev);
++		netif_carrier_off(dev);
+ 		needs_bring_up = true;
+ 	}
+ 
+@@ -615,8 +616,10 @@ int cxgb4_setup_tc_mqprio(struct net_device *dev,
+ 	}
+ 
+ out:
+-	if (needs_bring_up)
+-		cxgb_open(dev);
++	if (needs_bring_up) {
++		netif_tx_start_all_queues(dev);
++		netif_carrier_on(dev);
++	}
+ 
+ 	mutex_unlock(&adap->tc_mqprio->mqprio_mutex);
+ 	return ret;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+index 546301272271d..ccb6bd002b20d 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+@@ -2552,6 +2552,12 @@ int cxgb4_ethofld_send_flowc(struct net_device *dev, u32 eotid, u32 tc)
+ 	if (!eosw_txq)
+ 		return -ENOMEM;
+ 
++	if (!(adap->flags & CXGB4_FW_OK)) {
++		/* Don't stall caller when access to FW is lost */
++		complete(&eosw_txq->completion);
++		return -EIO;
++	}
++
+ 	skb = alloc_skb(len, GFP_KERNEL);
+ 	if (!skb)
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 011f484606a3a..c40ac82db863e 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -2205,15 +2205,20 @@ static int i40e_run_xdp(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
+ 	case XDP_TX:
+ 		xdp_ring = rx_ring->vsi->xdp_rings[rx_ring->queue_index];
+ 		result = i40e_xmit_xdp_tx_ring(xdp, xdp_ring);
++		if (result == I40E_XDP_CONSUMED)
++			goto out_failure;
+ 		break;
+ 	case XDP_REDIRECT:
+ 		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
+-		result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED;
++		if (err)
++			goto out_failure;
++		result = I40E_XDP_REDIR;
+ 		break;
+ 	default:
+ 		bpf_warn_invalid_xdp_action(act);
+ 		fallthrough;
+ 	case XDP_ABORTED:
++out_failure:
+ 		trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
+ 		fallthrough; /* handle aborts by dropping packet */
+ 	case XDP_DROP:
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+index 8557807b41717..86c79f71c685a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+@@ -159,21 +159,28 @@ static int i40e_run_xdp_zc(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
+ 	xdp_prog = READ_ONCE(rx_ring->xdp_prog);
+ 	act = bpf_prog_run_xdp(xdp_prog, xdp);
+ 
++	if (likely(act == XDP_REDIRECT)) {
++		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
++		if (err)
++			goto out_failure;
++		rcu_read_unlock();
++		return I40E_XDP_REDIR;
++	}
++
+ 	switch (act) {
+ 	case XDP_PASS:
+ 		break;
+ 	case XDP_TX:
+ 		xdp_ring = rx_ring->vsi->xdp_rings[rx_ring->queue_index];
+ 		result = i40e_xmit_xdp_tx_ring(xdp, xdp_ring);
+-		break;
+-	case XDP_REDIRECT:
+-		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
+-		result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED;
++		if (result == I40E_XDP_CONSUMED)
++			goto out_failure;
+ 		break;
+ 	default:
+ 		bpf_warn_invalid_xdp_action(act);
+ 		fallthrough;
+ 	case XDP_ABORTED:
++out_failure:
+ 		trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
+ 		fallthrough; /* handle aborts by dropping packet */
+ 	case XDP_DROP:
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index d70573f5072c6..a7975afecf70f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -1797,49 +1797,6 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
+ 		ice_ethtool_advertise_link_mode(ICE_AQ_LINK_SPEED_100GB,
+ 						100000baseKR4_Full);
+ 	}
+-
+-	/* Autoneg PHY types */
+-	if (phy_types_low & ICE_PHY_TYPE_LOW_100BASE_TX ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_1000BASE_T ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_1000BASE_KX ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_2500BASE_T ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_2500BASE_KX ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_5GBASE_T ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_5GBASE_KR ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_10GBASE_T ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_10GBASE_KR_CR1 ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_T ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_CR ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_CR_S ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_CR1 ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_KR ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_KR_S ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_KR1 ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_40GBASE_CR4 ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_40GBASE_KR4) {
+-		ethtool_link_ksettings_add_link_mode(ks, supported,
+-						     Autoneg);
+-		ethtool_link_ksettings_add_link_mode(ks, advertising,
+-						     Autoneg);
+-	}
+-	if (phy_types_low & ICE_PHY_TYPE_LOW_50GBASE_CR2 ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_50GBASE_KR2 ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_50GBASE_CP ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4) {
+-		ethtool_link_ksettings_add_link_mode(ks, supported,
+-						     Autoneg);
+-		ethtool_link_ksettings_add_link_mode(ks, advertising,
+-						     Autoneg);
+-	}
+-	if (phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_CR4 ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_KR4 ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4 ||
+-	    phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_CP2) {
+-		ethtool_link_ksettings_add_link_mode(ks, supported,
+-						     Autoneg);
+-		ethtool_link_ksettings_add_link_mode(ks, advertising,
+-						     Autoneg);
+-	}
+ }
+ 
+ #define TEST_SET_BITS_TIMEOUT	50
+@@ -1996,9 +1953,7 @@ ice_get_link_ksettings(struct net_device *netdev,
+ 		ks->base.port = PORT_TP;
+ 		break;
+ 	case ICE_MEDIA_BACKPLANE:
+-		ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg);
+ 		ethtool_link_ksettings_add_link_mode(ks, supported, Backplane);
+-		ethtool_link_ksettings_add_link_mode(ks, advertising, Autoneg);
+ 		ethtool_link_ksettings_add_link_mode(ks, advertising,
+ 						     Backplane);
+ 		ks->base.port = PORT_NONE;
+@@ -2073,6 +2028,12 @@ ice_get_link_ksettings(struct net_device *netdev,
+ 	if (caps->link_fec_options & ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN)
+ 		ethtool_link_ksettings_add_link_mode(ks, supported, FEC_RS);
+ 
++	/* Set supported and advertised autoneg */
++	if (ice_is_phy_caps_an_enabled(caps)) {
++		ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg);
++		ethtool_link_ksettings_add_link_mode(ks, advertising, Autoneg);
++	}
++
+ done:
+ 	kfree(caps);
+ 	return err;
+diff --git a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
+index 90abc8612a6ab..406dd6bd97a7d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
++++ b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
+@@ -31,6 +31,7 @@
+ #define PF_FW_ATQLEN_ATQOVFL_M			BIT(29)
+ #define PF_FW_ATQLEN_ATQCRIT_M			BIT(30)
+ #define VF_MBX_ARQLEN(_VF)			(0x0022BC00 + ((_VF) * 4))
++#define VF_MBX_ATQLEN(_VF)			(0x0022A800 + ((_VF) * 4))
+ #define PF_FW_ATQLEN_ATQENABLE_M		BIT(31)
+ #define PF_FW_ATQT				0x00080400
+ #define PF_MBX_ARQBAH				0x0022E400
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index e1384503dd4d5..fb20c6971f4c7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -192,6 +192,8 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
+ 		break;
+ 	case ICE_VSI_VF:
+ 		vf = &pf->vf[vsi->vf_id];
++		if (vf->num_req_qs)
++			vf->num_vf_qs = vf->num_req_qs;
+ 		vsi->alloc_txq = vf->num_vf_qs;
+ 		vsi->alloc_rxq = vf->num_vf_qs;
+ 		/* pf->num_msix_per_vf includes (VF miscellaneous vector +
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 0f2544c420ac3..442a9bcbf60a7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -537,34 +537,35 @@ static int
+ ice_run_xdp(struct ice_ring *rx_ring, struct xdp_buff *xdp,
+ 	    struct bpf_prog *xdp_prog)
+ {
+-	int err, result = ICE_XDP_PASS;
+ 	struct ice_ring *xdp_ring;
++	int err, result;
+ 	u32 act;
+ 
+ 	act = bpf_prog_run_xdp(xdp_prog, xdp);
+ 	switch (act) {
+ 	case XDP_PASS:
+-		break;
++		return ICE_XDP_PASS;
+ 	case XDP_TX:
+ 		xdp_ring = rx_ring->vsi->xdp_rings[smp_processor_id()];
+ 		result = ice_xmit_xdp_buff(xdp, xdp_ring);
+-		break;
++		if (result == ICE_XDP_CONSUMED)
++			goto out_failure;
++		return result;
+ 	case XDP_REDIRECT:
+ 		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
+-		result = !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED;
+-		break;
++		if (err)
++			goto out_failure;
++		return ICE_XDP_REDIR;
+ 	default:
+ 		bpf_warn_invalid_xdp_action(act);
+ 		fallthrough;
+ 	case XDP_ABORTED:
++out_failure:
+ 		trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
+ 		fallthrough;
+ 	case XDP_DROP:
+-		result = ICE_XDP_CONSUMED;
+-		break;
++		return ICE_XDP_CONSUMED;
+ 	}
+-
+-	return result;
+ }
+ 
+ /**
+@@ -2373,6 +2374,7 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_ring *tx_ring)
+ 	struct ice_tx_offload_params offload = { 0 };
+ 	struct ice_vsi *vsi = tx_ring->vsi;
+ 	struct ice_tx_buf *first;
++	struct ethhdr *eth;
+ 	unsigned int count;
+ 	int tso, csum;
+ 
+@@ -2419,7 +2421,9 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_ring *tx_ring)
+ 		goto out_drop;
+ 
+ 	/* allow CONTROL frames egress from main VSI if FW LLDP disabled */
+-	if (unlikely(skb->priority == TC_PRIO_CONTROL &&
++	eth = (struct ethhdr *)skb_mac_header(skb);
++	if (unlikely((skb->priority == TC_PRIO_CONTROL ||
++		      eth->h_proto == htons(ETH_P_LLDP)) &&
+ 		     vsi->type == ICE_VSI_PF &&
+ 		     vsi->port_info->qos_cfg.is_sw_lldp))
+ 		offload.cd_qw1 |= (u64)(ICE_TX_DESC_DTYPE_CTX |
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index b3161c5def465..c9f82fd3cf48d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -435,13 +435,15 @@ static void ice_trigger_vf_reset(struct ice_vf *vf, bool is_vflr, bool is_pfr)
+ 	 */
+ 	clear_bit(ICE_VF_STATE_INIT, vf->vf_states);
+ 
+-	/* VF_MBX_ARQLEN is cleared by PFR, so the driver needs to clear it
+-	 * in the case of VFR. If this is done for PFR, it can mess up VF
+-	 * resets because the VF driver may already have started cleanup
+-	 * by the time we get here.
++	/* VF_MBX_ARQLEN and VF_MBX_ATQLEN are cleared by PFR, so the driver
++	 * needs to clear them in the case of VFR/VFLR. If this is done for
++	 * PFR, it can mess up VF resets because the VF driver may already
++	 * have started cleanup by the time we get here.
+ 	 */
+-	if (!is_pfr)
++	if (!is_pfr) {
+ 		wr32(hw, VF_MBX_ARQLEN(vf->vf_id), 0);
++		wr32(hw, VF_MBX_ATQLEN(vf->vf_id), 0);
++	}
+ 
+ 	/* In the case of a VFLR, the HW has already reset the VF and we
+ 	 * just need to clean up, so don't hit the VFRTRIG register.
+@@ -1339,7 +1341,12 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)
+ 	}
+ 
+ 	ice_vf_pre_vsi_rebuild(vf);
+-	ice_vf_rebuild_vsi_with_release(vf);
++
++	if (ice_vf_rebuild_vsi_with_release(vf)) {
++		dev_err(dev, "Failed to release and setup the VF%u's VSI\n", vf->vf_id);
++		return false;
++	}
++
+ 	ice_vf_post_vsi_rebuild(vf);
+ 
+ 	return true;
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index 98101a8e2952d..9f36f8d7a9854 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -524,21 +524,29 @@ ice_run_xdp_zc(struct ice_ring *rx_ring, struct xdp_buff *xdp)
+ 	}
+ 
+ 	act = bpf_prog_run_xdp(xdp_prog, xdp);
++
++	if (likely(act == XDP_REDIRECT)) {
++		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
++		if (err)
++			goto out_failure;
++		rcu_read_unlock();
++		return ICE_XDP_REDIR;
++	}
++
+ 	switch (act) {
+ 	case XDP_PASS:
+ 		break;
+ 	case XDP_TX:
+ 		xdp_ring = rx_ring->vsi->xdp_rings[rx_ring->q_index];
+ 		result = ice_xmit_xdp_buff(xdp, xdp_ring);
+-		break;
+-	case XDP_REDIRECT:
+-		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
+-		result = !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED;
++		if (result == ICE_XDP_CONSUMED)
++			goto out_failure;
+ 		break;
+ 	default:
+ 		bpf_warn_invalid_xdp_action(act);
+ 		fallthrough;
+ 	case XDP_ABORTED:
++out_failure:
+ 		trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
+ 		fallthrough;
+ 	case XDP_DROP:
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 368f0aac5e1d4..5c87c0a7ce3d7 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -8419,18 +8419,20 @@ static struct sk_buff *igb_run_xdp(struct igb_adapter *adapter,
+ 		break;
+ 	case XDP_TX:
+ 		result = igb_xdp_xmit_back(adapter, xdp);
++		if (result == IGB_XDP_CONSUMED)
++			goto out_failure;
+ 		break;
+ 	case XDP_REDIRECT:
+ 		err = xdp_do_redirect(adapter->netdev, xdp, xdp_prog);
+-		if (!err)
+-			result = IGB_XDP_REDIR;
+-		else
+-			result = IGB_XDP_CONSUMED;
++		if (err)
++			goto out_failure;
++		result = IGB_XDP_REDIR;
+ 		break;
+ 	default:
+ 		bpf_warn_invalid_xdp_action(act);
+ 		fallthrough;
+ 	case XDP_ABORTED:
++out_failure:
+ 		trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
+ 		fallthrough;
+ 	case XDP_DROP:
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 0b9fddbc5db4f..1bfba87f1ff60 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -2218,23 +2218,23 @@ static struct sk_buff *ixgbe_run_xdp(struct ixgbe_adapter *adapter,
+ 		break;
+ 	case XDP_TX:
+ 		xdpf = xdp_convert_buff_to_frame(xdp);
+-		if (unlikely(!xdpf)) {
+-			result = IXGBE_XDP_CONSUMED;
+-			break;
+-		}
++		if (unlikely(!xdpf))
++			goto out_failure;
+ 		result = ixgbe_xmit_xdp_ring(adapter, xdpf);
++		if (result == IXGBE_XDP_CONSUMED)
++			goto out_failure;
+ 		break;
+ 	case XDP_REDIRECT:
+ 		err = xdp_do_redirect(adapter->netdev, xdp, xdp_prog);
+-		if (!err)
+-			result = IXGBE_XDP_REDIR;
+-		else
+-			result = IXGBE_XDP_CONSUMED;
++		if (err)
++			goto out_failure;
++		result = IXGBE_XDP_REDIR;
+ 		break;
+ 	default:
+ 		bpf_warn_invalid_xdp_action(act);
+ 		fallthrough;
+ 	case XDP_ABORTED:
++out_failure:
+ 		trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
+ 		fallthrough; /* handle aborts by dropping packet */
+ 	case XDP_DROP:
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+index 3771857cf887c..f72d2978263b9 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+@@ -104,25 +104,30 @@ static int ixgbe_run_xdp_zc(struct ixgbe_adapter *adapter,
+ 	xdp_prog = READ_ONCE(rx_ring->xdp_prog);
+ 	act = bpf_prog_run_xdp(xdp_prog, xdp);
+ 
++	if (likely(act == XDP_REDIRECT)) {
++		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
++		if (err)
++			goto out_failure;
++		rcu_read_unlock();
++		return IXGBE_XDP_REDIR;
++	}
++
+ 	switch (act) {
+ 	case XDP_PASS:
+ 		break;
+ 	case XDP_TX:
+ 		xdpf = xdp_convert_buff_to_frame(xdp);
+-		if (unlikely(!xdpf)) {
+-			result = IXGBE_XDP_CONSUMED;
+-			break;
+-		}
++		if (unlikely(!xdpf))
++			goto out_failure;
+ 		result = ixgbe_xmit_xdp_ring(adapter, xdpf);
+-		break;
+-	case XDP_REDIRECT:
+-		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
+-		result = !err ? IXGBE_XDP_REDIR : IXGBE_XDP_CONSUMED;
++		if (result == IXGBE_XDP_CONSUMED)
++			goto out_failure;
+ 		break;
+ 	default:
+ 		bpf_warn_invalid_xdp_action(act);
+ 		fallthrough;
+ 	case XDP_ABORTED:
++out_failure:
+ 		trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
+ 		fallthrough; /* handle aborts by dropping packet */
+ 	case XDP_DROP:
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+index 82fce27f682bb..a7d0a459969a2 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+@@ -1072,11 +1072,14 @@ static struct sk_buff *ixgbevf_run_xdp(struct ixgbevf_adapter *adapter,
+ 	case XDP_TX:
+ 		xdp_ring = adapter->xdp_ring[rx_ring->queue_index];
+ 		result = ixgbevf_xmit_xdp_ring(xdp_ring, xdp);
++		if (result == IXGBEVF_XDP_CONSUMED)
++			goto out_failure;
+ 		break;
+ 	default:
+ 		bpf_warn_invalid_xdp_action(act);
+ 		fallthrough;
+ 	case XDP_ABORTED:
++out_failure:
+ 		trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
+ 		fallthrough; /* handle aborts by dropping packet */
+ 	case XDP_DROP:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index 986f0d86e94dc..bc7c1962f9e66 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1618,12 +1618,13 @@ static int mlx5e_set_fecparam(struct net_device *netdev,
+ {
+ 	struct mlx5e_priv *priv = netdev_priv(netdev);
+ 	struct mlx5_core_dev *mdev = priv->mdev;
++	unsigned long fec_bitmap;
+ 	u16 fec_policy = 0;
+ 	int mode;
+ 	int err;
+ 
+-	if (bitmap_weight((unsigned long *)&fecparam->fec,
+-			  ETHTOOL_FEC_LLRS_BIT + 1) > 1)
++	bitmap_from_arr32(&fec_bitmap, &fecparam->fec, sizeof(fecparam->fec) * BITS_PER_BYTE);
++	if (bitmap_weight(&fec_bitmap, ETHTOOL_FEC_LLRS_BIT + 1) > 1)
+ 		return -EOPNOTSUPP;
+ 
+ 	for (mode = 0; mode < ARRAY_SIZE(pplm_fec_2_ethtool); mode++) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 1bdeb948f56d7..80abdb0b47d7e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -2253,11 +2253,13 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ 				    misc_parameters);
+ 	struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+ 	struct flow_dissector *dissector = rule->match.dissector;
++	enum fs_flow_table_type fs_type;
+ 	u16 addr_type = 0;
+ 	u8 ip_proto = 0;
+ 	u8 *match_level;
+ 	int err;
+ 
++	fs_type = mlx5e_is_eswitch_flow(flow) ? FS_FT_FDB : FS_FT_NIC_RX;
+ 	match_level = outer_match_level;
+ 
+ 	if (dissector->used_keys &
+@@ -2382,6 +2384,13 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ 		if (match.mask->vlan_id ||
+ 		    match.mask->vlan_priority ||
+ 		    match.mask->vlan_tpid) {
++			if (!MLX5_CAP_FLOWTABLE_TYPE(priv->mdev, ft_field_support.outer_second_vid,
++						     fs_type)) {
++				NL_SET_ERR_MSG_MOD(extack,
++						   "Matching on CVLAN is not supported");
++				return -EOPNOTSUPP;
++			}
++
+ 			if (match.key->vlan_tpid == htons(ETH_P_8021AD)) {
+ 				MLX5_SET(fte_match_set_misc, misc_c,
+ 					 outer_second_svlan_tag, 1);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index f9042e147c7f6..ee710ce007950 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -354,6 +354,9 @@ static void mlx5_sync_reset_abort_event(struct work_struct *work)
+ 						      reset_abort_work);
+ 	struct mlx5_core_dev *dev = fw_reset->dev;
+ 
++	if (!test_bit(MLX5_FW_RESET_FLAGS_RESET_REQUESTED, &fw_reset->reset_flags))
++		return;
++
+ 	mlx5_sync_reset_clear_reset_requested(dev, true);
+ 	mlx5_core_warn(dev, "PCI Sync FW Update Reset Aborted.\n");
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_fw.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_fw.c
+index 1fbcd012bb855..7ccfd40586cee 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_fw.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_fw.c
+@@ -112,7 +112,8 @@ int mlx5dr_fw_create_md_tbl(struct mlx5dr_domain *dmn,
+ 	int ret;
+ 
+ 	ft_attr.table_type = MLX5_FLOW_TABLE_TYPE_FDB;
+-	ft_attr.level = dmn->info.caps.max_ft_level - 2;
++	ft_attr.level = min_t(int, dmn->info.caps.max_ft_level - 2,
++			      MLX5_FT_MAX_MULTIPATH_LEVEL);
+ 	ft_attr.reformat_en = reformat_req;
+ 	ft_attr.decap_en = reformat_req;
+ 
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index 854c6624e6859..1d3bf810f2ca1 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -1827,6 +1827,15 @@ cdc_ncm_speed_change(struct usbnet *dev,
+ 	uint32_t rx_speed = le32_to_cpu(data->DLBitRRate);
+ 	uint32_t tx_speed = le32_to_cpu(data->ULBitRate);
+ 
++	/* if the speed hasn't changed, don't report it.
++	 * RTL8156 shipped before 2021 sends notification about every 32ms.
++	 */
++	if (dev->rx_speed == rx_speed && dev->tx_speed == tx_speed)
++		return;
++
++	dev->rx_speed = rx_speed;
++	dev->tx_speed = tx_speed;
++
+ 	/*
+ 	 * Currently the USB-NET API does not support reporting the actual
+ 	 * device speed. Do print it instead.
+@@ -1867,7 +1876,8 @@ static void cdc_ncm_status(struct usbnet *dev, struct urb *urb)
+ 		 * USB_CDC_NOTIFY_NETWORK_CONNECTION notification shall be
+ 		 * sent by device after USB_CDC_NOTIFY_SPEED_CHANGE.
+ 		 */
+-		usbnet_link_change(dev, !!event->wValue, 0);
++		if (netif_carrier_ok(dev->net) != !!event->wValue)
++			usbnet_link_change(dev, !!event->wValue, 0);
+ 		break;
+ 
+ 	case USB_CDC_NOTIFY_SPEED_CHANGE:
+diff --git a/drivers/net/wireguard/Makefile b/drivers/net/wireguard/Makefile
+index fc52b2cb500b3..dbe1f8514efc3 100644
+--- a/drivers/net/wireguard/Makefile
++++ b/drivers/net/wireguard/Makefile
+@@ -1,5 +1,4 @@
+-ccflags-y := -O3
+-ccflags-y += -D'pr_fmt(fmt)=KBUILD_MODNAME ": " fmt'
++ccflags-y := -D'pr_fmt(fmt)=KBUILD_MODNAME ": " fmt'
+ ccflags-$(CONFIG_WIREGUARD_DEBUG) += -DDEBUG
+ wireguard-y := main.o
+ wireguard-y += noise.o
+diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
+index 3725e9cd85f4f..b7197e80f2264 100644
+--- a/drivers/net/wireguard/allowedips.c
++++ b/drivers/net/wireguard/allowedips.c
+@@ -6,6 +6,8 @@
+ #include "allowedips.h"
+ #include "peer.h"
+ 
++static struct kmem_cache *node_cache;
++
+ static void swap_endian(u8 *dst, const u8 *src, u8 bits)
+ {
+ 	if (bits == 32) {
+@@ -28,8 +30,11 @@ static void copy_and_assign_cidr(struct allowedips_node *node, const u8 *src,
+ 	node->bitlen = bits;
+ 	memcpy(node->bits, src, bits / 8U);
+ }
+-#define CHOOSE_NODE(parent, key) \
+-	parent->bit[(key[parent->bit_at_a] >> parent->bit_at_b) & 1]
++
++static inline u8 choose(struct allowedips_node *node, const u8 *key)
++{
++	return (key[node->bit_at_a] >> node->bit_at_b) & 1;
++}
+ 
+ static void push_rcu(struct allowedips_node **stack,
+ 		     struct allowedips_node __rcu *p, unsigned int *len)
+@@ -40,6 +45,11 @@ static void push_rcu(struct allowedips_node **stack,
+ 	}
+ }
+ 
++static void node_free_rcu(struct rcu_head *rcu)
++{
++	kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
++}
++
+ static void root_free_rcu(struct rcu_head *rcu)
+ {
+ 	struct allowedips_node *node, *stack[128] = {
+@@ -49,7 +59,7 @@ static void root_free_rcu(struct rcu_head *rcu)
+ 	while (len > 0 && (node = stack[--len])) {
+ 		push_rcu(stack, node->bit[0], &len);
+ 		push_rcu(stack, node->bit[1], &len);
+-		kfree(node);
++		kmem_cache_free(node_cache, node);
+ 	}
+ }
+ 
+@@ -66,60 +76,6 @@ static void root_remove_peer_lists(struct allowedips_node *root)
+ 	}
+ }
+ 
+-static void walk_remove_by_peer(struct allowedips_node __rcu **top,
+-				struct wg_peer *peer, struct mutex *lock)
+-{
+-#define REF(p) rcu_access_pointer(p)
+-#define DEREF(p) rcu_dereference_protected(*(p), lockdep_is_held(lock))
+-#define PUSH(p) ({                                                             \
+-		WARN_ON(IS_ENABLED(DEBUG) && len >= 128);                      \
+-		stack[len++] = p;                                              \
+-	})
+-
+-	struct allowedips_node __rcu **stack[128], **nptr;
+-	struct allowedips_node *node, *prev;
+-	unsigned int len;
+-
+-	if (unlikely(!peer || !REF(*top)))
+-		return;
+-
+-	for (prev = NULL, len = 0, PUSH(top); len > 0; prev = node) {
+-		nptr = stack[len - 1];
+-		node = DEREF(nptr);
+-		if (!node) {
+-			--len;
+-			continue;
+-		}
+-		if (!prev || REF(prev->bit[0]) == node ||
+-		    REF(prev->bit[1]) == node) {
+-			if (REF(node->bit[0]))
+-				PUSH(&node->bit[0]);
+-			else if (REF(node->bit[1]))
+-				PUSH(&node->bit[1]);
+-		} else if (REF(node->bit[0]) == prev) {
+-			if (REF(node->bit[1]))
+-				PUSH(&node->bit[1]);
+-		} else {
+-			if (rcu_dereference_protected(node->peer,
+-				lockdep_is_held(lock)) == peer) {
+-				RCU_INIT_POINTER(node->peer, NULL);
+-				list_del_init(&node->peer_list);
+-				if (!node->bit[0] || !node->bit[1]) {
+-					rcu_assign_pointer(*nptr, DEREF(
+-					       &node->bit[!REF(node->bit[0])]));
+-					kfree_rcu(node, rcu);
+-					node = DEREF(nptr);
+-				}
+-			}
+-			--len;
+-		}
+-	}
+-
+-#undef REF
+-#undef DEREF
+-#undef PUSH
+-}
+-
+ static unsigned int fls128(u64 a, u64 b)
+ {
+ 	return a ? fls64(a) + 64U : fls64(b);
+@@ -159,7 +115,7 @@ static struct allowedips_node *find_node(struct allowedips_node *trie, u8 bits,
+ 			found = node;
+ 		if (node->cidr == bits)
+ 			break;
+-		node = rcu_dereference_bh(CHOOSE_NODE(node, key));
++		node = rcu_dereference_bh(node->bit[choose(node, key)]);
+ 	}
+ 	return found;
+ }
+@@ -191,8 +147,7 @@ static bool node_placement(struct allowedips_node __rcu *trie, const u8 *key,
+ 			   u8 cidr, u8 bits, struct allowedips_node **rnode,
+ 			   struct mutex *lock)
+ {
+-	struct allowedips_node *node = rcu_dereference_protected(trie,
+-						lockdep_is_held(lock));
++	struct allowedips_node *node = rcu_dereference_protected(trie, lockdep_is_held(lock));
+ 	struct allowedips_node *parent = NULL;
+ 	bool exact = false;
+ 
+@@ -202,13 +157,24 @@ static bool node_placement(struct allowedips_node __rcu *trie, const u8 *key,
+ 			exact = true;
+ 			break;
+ 		}
+-		node = rcu_dereference_protected(CHOOSE_NODE(parent, key),
+-						 lockdep_is_held(lock));
++		node = rcu_dereference_protected(parent->bit[choose(parent, key)], lockdep_is_held(lock));
+ 	}
+ 	*rnode = parent;
+ 	return exact;
+ }
+ 
++static inline void connect_node(struct allowedips_node **parent, u8 bit, struct allowedips_node *node)
++{
++	node->parent_bit_packed = (unsigned long)parent | bit;
++	rcu_assign_pointer(*parent, node);
++}
++
++static inline void choose_and_connect_node(struct allowedips_node *parent, struct allowedips_node *node)
++{
++	u8 bit = choose(parent, node->bits);
++	connect_node(&parent->bit[bit], bit, node);
++}
++
+ static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key,
+ 	       u8 cidr, struct wg_peer *peer, struct mutex *lock)
+ {
+@@ -218,13 +184,13 @@ static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key,
+ 		return -EINVAL;
+ 
+ 	if (!rcu_access_pointer(*trie)) {
+-		node = kzalloc(sizeof(*node), GFP_KERNEL);
++		node = kmem_cache_zalloc(node_cache, GFP_KERNEL);
+ 		if (unlikely(!node))
+ 			return -ENOMEM;
+ 		RCU_INIT_POINTER(node->peer, peer);
+ 		list_add_tail(&node->peer_list, &peer->allowedips_list);
+ 		copy_and_assign_cidr(node, key, cidr, bits);
+-		rcu_assign_pointer(*trie, node);
++		connect_node(trie, 2, node);
+ 		return 0;
+ 	}
+ 	if (node_placement(*trie, key, cidr, bits, &node, lock)) {
+@@ -233,7 +199,7 @@ static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key,
+ 		return 0;
+ 	}
+ 
+-	newnode = kzalloc(sizeof(*newnode), GFP_KERNEL);
++	newnode = kmem_cache_zalloc(node_cache, GFP_KERNEL);
+ 	if (unlikely(!newnode))
+ 		return -ENOMEM;
+ 	RCU_INIT_POINTER(newnode->peer, peer);
+@@ -243,10 +209,10 @@ static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key,
+ 	if (!node) {
+ 		down = rcu_dereference_protected(*trie, lockdep_is_held(lock));
+ 	} else {
+-		down = rcu_dereference_protected(CHOOSE_NODE(node, key),
+-						 lockdep_is_held(lock));
++		const u8 bit = choose(node, key);
++		down = rcu_dereference_protected(node->bit[bit], lockdep_is_held(lock));
+ 		if (!down) {
+-			rcu_assign_pointer(CHOOSE_NODE(node, key), newnode);
++			connect_node(&node->bit[bit], bit, newnode);
+ 			return 0;
+ 		}
+ 	}
+@@ -254,30 +220,29 @@ static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key,
+ 	parent = node;
+ 
+ 	if (newnode->cidr == cidr) {
+-		rcu_assign_pointer(CHOOSE_NODE(newnode, down->bits), down);
++		choose_and_connect_node(newnode, down);
+ 		if (!parent)
+-			rcu_assign_pointer(*trie, newnode);
++			connect_node(trie, 2, newnode);
+ 		else
+-			rcu_assign_pointer(CHOOSE_NODE(parent, newnode->bits),
+-					   newnode);
+-	} else {
+-		node = kzalloc(sizeof(*node), GFP_KERNEL);
+-		if (unlikely(!node)) {
+-			list_del(&newnode->peer_list);
+-			kfree(newnode);
+-			return -ENOMEM;
+-		}
+-		INIT_LIST_HEAD(&node->peer_list);
+-		copy_and_assign_cidr(node, newnode->bits, cidr, bits);
++			choose_and_connect_node(parent, newnode);
++		return 0;
++	}
+ 
+-		rcu_assign_pointer(CHOOSE_NODE(node, down->bits), down);
+-		rcu_assign_pointer(CHOOSE_NODE(node, newnode->bits), newnode);
+-		if (!parent)
+-			rcu_assign_pointer(*trie, node);
+-		else
+-			rcu_assign_pointer(CHOOSE_NODE(parent, node->bits),
+-					   node);
++	node = kmem_cache_zalloc(node_cache, GFP_KERNEL);
++	if (unlikely(!node)) {
++		list_del(&newnode->peer_list);
++		kmem_cache_free(node_cache, newnode);
++		return -ENOMEM;
+ 	}
++	INIT_LIST_HEAD(&node->peer_list);
++	copy_and_assign_cidr(node, newnode->bits, cidr, bits);
++
++	choose_and_connect_node(node, down);
++	choose_and_connect_node(node, newnode);
++	if (!parent)
++		connect_node(trie, 2, node);
++	else
++		choose_and_connect_node(parent, node);
+ 	return 0;
+ }
+ 
+@@ -335,9 +300,41 @@ int wg_allowedips_insert_v6(struct allowedips *table, const struct in6_addr *ip,
+ void wg_allowedips_remove_by_peer(struct allowedips *table,
+ 				  struct wg_peer *peer, struct mutex *lock)
+ {
++	struct allowedips_node *node, *child, **parent_bit, *parent, *tmp;
++	bool free_parent;
++
++	if (list_empty(&peer->allowedips_list))
++		return;
+ 	++table->seq;
+-	walk_remove_by_peer(&table->root4, peer, lock);
+-	walk_remove_by_peer(&table->root6, peer, lock);
++	list_for_each_entry_safe(node, tmp, &peer->allowedips_list, peer_list) {
++		list_del_init(&node->peer_list);
++		RCU_INIT_POINTER(node->peer, NULL);
++		if (node->bit[0] && node->bit[1])
++			continue;
++		child = rcu_dereference_protected(node->bit[!rcu_access_pointer(node->bit[0])],
++						  lockdep_is_held(lock));
++		if (child)
++			child->parent_bit_packed = node->parent_bit_packed;
++		parent_bit = (struct allowedips_node **)(node->parent_bit_packed & ~3UL);
++		*parent_bit = child;
++		parent = (void *)parent_bit -
++			 offsetof(struct allowedips_node, bit[node->parent_bit_packed & 1]);
++		free_parent = !rcu_access_pointer(node->bit[0]) &&
++			      !rcu_access_pointer(node->bit[1]) &&
++			      (node->parent_bit_packed & 3) <= 1 &&
++			      !rcu_access_pointer(parent->peer);
++		if (free_parent)
++			child = rcu_dereference_protected(
++					parent->bit[!(node->parent_bit_packed & 1)],
++					lockdep_is_held(lock));
++		call_rcu(&node->rcu, node_free_rcu);
++		if (!free_parent)
++			continue;
++		if (child)
++			child->parent_bit_packed = parent->parent_bit_packed;
++		*(struct allowedips_node **)(parent->parent_bit_packed & ~3UL) = child;
++		call_rcu(&parent->rcu, node_free_rcu);
++	}
+ }
+ 
+ int wg_allowedips_read_node(struct allowedips_node *node, u8 ip[16], u8 *cidr)
+@@ -374,4 +371,16 @@ struct wg_peer *wg_allowedips_lookup_src(struct allowedips *table,
+ 	return NULL;
+ }
+ 
++int __init wg_allowedips_slab_init(void)
++{
++	node_cache = KMEM_CACHE(allowedips_node, 0);
++	return node_cache ? 0 : -ENOMEM;
++}
++
++void wg_allowedips_slab_uninit(void)
++{
++	rcu_barrier();
++	kmem_cache_destroy(node_cache);
++}
++
+ #include "selftest/allowedips.c"
+diff --git a/drivers/net/wireguard/allowedips.h b/drivers/net/wireguard/allowedips.h
+index e5c83cafcef4c..2346c797eb4d8 100644
+--- a/drivers/net/wireguard/allowedips.h
++++ b/drivers/net/wireguard/allowedips.h
+@@ -15,14 +15,11 @@ struct wg_peer;
+ struct allowedips_node {
+ 	struct wg_peer __rcu *peer;
+ 	struct allowedips_node __rcu *bit[2];
+-	/* While it may seem scandalous that we waste space for v4,
+-	 * we're alloc'ing to the nearest power of 2 anyway, so this
+-	 * doesn't actually make a difference.
+-	 */
+-	u8 bits[16] __aligned(__alignof(u64));
+ 	u8 cidr, bit_at_a, bit_at_b, bitlen;
++	u8 bits[16] __aligned(__alignof(u64));
+ 
+-	/* Keep rarely used list at bottom to be beyond cache line. */
++	/* Keep rarely used members at bottom to be beyond cache line. */
++	unsigned long parent_bit_packed;
+ 	union {
+ 		struct list_head peer_list;
+ 		struct rcu_head rcu;
+@@ -33,7 +30,7 @@ struct allowedips {
+ 	struct allowedips_node __rcu *root4;
+ 	struct allowedips_node __rcu *root6;
+ 	u64 seq;
+-};
++} __aligned(4); /* We pack the lower 2 bits of &root, but m68k only gives 16-bit alignment. */
+ 
+ void wg_allowedips_init(struct allowedips *table);
+ void wg_allowedips_free(struct allowedips *table, struct mutex *mutex);
+@@ -56,4 +53,7 @@ struct wg_peer *wg_allowedips_lookup_src(struct allowedips *table,
+ bool wg_allowedips_selftest(void);
+ #endif
+ 
++int wg_allowedips_slab_init(void);
++void wg_allowedips_slab_uninit(void);
++
+ #endif /* _WG_ALLOWEDIPS_H */
+diff --git a/drivers/net/wireguard/main.c b/drivers/net/wireguard/main.c
+index 7a7d5f1a80fc7..75dbe77b0b4b4 100644
+--- a/drivers/net/wireguard/main.c
++++ b/drivers/net/wireguard/main.c
+@@ -21,13 +21,22 @@ static int __init mod_init(void)
+ {
+ 	int ret;
+ 
++	ret = wg_allowedips_slab_init();
++	if (ret < 0)
++		goto err_allowedips;
++
+ #ifdef DEBUG
++	ret = -ENOTRECOVERABLE;
+ 	if (!wg_allowedips_selftest() || !wg_packet_counter_selftest() ||
+ 	    !wg_ratelimiter_selftest())
+-		return -ENOTRECOVERABLE;
++		goto err_peer;
+ #endif
+ 	wg_noise_init();
+ 
++	ret = wg_peer_init();
++	if (ret < 0)
++		goto err_peer;
++
+ 	ret = wg_device_init();
+ 	if (ret < 0)
+ 		goto err_device;
+@@ -44,6 +53,10 @@ static int __init mod_init(void)
+ err_netlink:
+ 	wg_device_uninit();
+ err_device:
++	wg_peer_uninit();
++err_peer:
++	wg_allowedips_slab_uninit();
++err_allowedips:
+ 	return ret;
+ }
+ 
+@@ -51,6 +64,8 @@ static void __exit mod_exit(void)
+ {
+ 	wg_genetlink_uninit();
+ 	wg_device_uninit();
++	wg_peer_uninit();
++	wg_allowedips_slab_uninit();
+ }
+ 
+ module_init(mod_init);
+diff --git a/drivers/net/wireguard/peer.c b/drivers/net/wireguard/peer.c
+index cd5cb0292cb67..1acd00ab2fbcb 100644
+--- a/drivers/net/wireguard/peer.c
++++ b/drivers/net/wireguard/peer.c
+@@ -15,6 +15,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/list.h>
+ 
++static struct kmem_cache *peer_cache;
+ static atomic64_t peer_counter = ATOMIC64_INIT(0);
+ 
+ struct wg_peer *wg_peer_create(struct wg_device *wg,
+@@ -29,10 +30,10 @@ struct wg_peer *wg_peer_create(struct wg_device *wg,
+ 	if (wg->num_peers >= MAX_PEERS_PER_DEVICE)
+ 		return ERR_PTR(ret);
+ 
+-	peer = kzalloc(sizeof(*peer), GFP_KERNEL);
++	peer = kmem_cache_zalloc(peer_cache, GFP_KERNEL);
+ 	if (unlikely(!peer))
+ 		return ERR_PTR(ret);
+-	if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL))
++	if (unlikely(dst_cache_init(&peer->endpoint_cache, GFP_KERNEL)))
+ 		goto err;
+ 
+ 	peer->device = wg;
+@@ -64,7 +65,7 @@ struct wg_peer *wg_peer_create(struct wg_device *wg,
+ 	return peer;
+ 
+ err:
+-	kfree(peer);
++	kmem_cache_free(peer_cache, peer);
+ 	return ERR_PTR(ret);
+ }
+ 
+@@ -88,7 +89,7 @@ static void peer_make_dead(struct wg_peer *peer)
+ 	/* Mark as dead, so that we don't allow jumping contexts after. */
+ 	WRITE_ONCE(peer->is_dead, true);
+ 
+-	/* The caller must now synchronize_rcu() for this to take effect. */
++	/* The caller must now synchronize_net() for this to take effect. */
+ }
+ 
+ static void peer_remove_after_dead(struct wg_peer *peer)
+@@ -160,7 +161,7 @@ void wg_peer_remove(struct wg_peer *peer)
+ 	lockdep_assert_held(&peer->device->device_update_lock);
+ 
+ 	peer_make_dead(peer);
+-	synchronize_rcu();
++	synchronize_net();
+ 	peer_remove_after_dead(peer);
+ }
+ 
+@@ -178,7 +179,7 @@ void wg_peer_remove_all(struct wg_device *wg)
+ 		peer_make_dead(peer);
+ 		list_add_tail(&peer->peer_list, &dead_peers);
+ 	}
+-	synchronize_rcu();
++	synchronize_net();
+ 	list_for_each_entry_safe(peer, temp, &dead_peers, peer_list)
+ 		peer_remove_after_dead(peer);
+ }
+@@ -193,7 +194,8 @@ static void rcu_release(struct rcu_head *rcu)
+ 	/* The final zeroing takes care of clearing any remaining handshake key
+ 	 * material and other potentially sensitive information.
+ 	 */
+-	kfree_sensitive(peer);
++	memzero_explicit(peer, sizeof(*peer));
++	kmem_cache_free(peer_cache, peer);
+ }
+ 
+ static void kref_release(struct kref *refcount)
+@@ -225,3 +227,14 @@ void wg_peer_put(struct wg_peer *peer)
+ 		return;
+ 	kref_put(&peer->refcount, kref_release);
+ }
++
++int __init wg_peer_init(void)
++{
++	peer_cache = KMEM_CACHE(wg_peer, 0);
++	return peer_cache ? 0 : -ENOMEM;
++}
++
++void wg_peer_uninit(void)
++{
++	kmem_cache_destroy(peer_cache);
++}
+diff --git a/drivers/net/wireguard/peer.h b/drivers/net/wireguard/peer.h
+index 0809cda08bfa4..74227aa2d5b5a 100644
+--- a/drivers/net/wireguard/peer.h
++++ b/drivers/net/wireguard/peer.h
+@@ -80,4 +80,7 @@ void wg_peer_put(struct wg_peer *peer);
+ void wg_peer_remove(struct wg_peer *peer);
+ void wg_peer_remove_all(struct wg_device *wg);
+ 
++int wg_peer_init(void);
++void wg_peer_uninit(void);
++
+ #endif /* _WG_PEER_H */
+diff --git a/drivers/net/wireguard/selftest/allowedips.c b/drivers/net/wireguard/selftest/allowedips.c
+index 846db14cb046b..e173204ae7d78 100644
+--- a/drivers/net/wireguard/selftest/allowedips.c
++++ b/drivers/net/wireguard/selftest/allowedips.c
+@@ -19,32 +19,22 @@
+ 
+ #include <linux/siphash.h>
+ 
+-static __init void swap_endian_and_apply_cidr(u8 *dst, const u8 *src, u8 bits,
+-					      u8 cidr)
+-{
+-	swap_endian(dst, src, bits);
+-	memset(dst + (cidr + 7) / 8, 0, bits / 8 - (cidr + 7) / 8);
+-	if (cidr)
+-		dst[(cidr + 7) / 8 - 1] &= ~0U << ((8 - (cidr % 8)) % 8);
+-}
+-
+ static __init void print_node(struct allowedips_node *node, u8 bits)
+ {
+ 	char *fmt_connection = KERN_DEBUG "\t\"%p/%d\" -> \"%p/%d\";\n";
+-	char *fmt_declaration = KERN_DEBUG
+-		"\t\"%p/%d\"[style=%s, color=\"#%06x\"];\n";
++	char *fmt_declaration = KERN_DEBUG "\t\"%p/%d\"[style=%s, color=\"#%06x\"];\n";
++	u8 ip1[16], ip2[16], cidr1, cidr2;
+ 	char *style = "dotted";
+-	u8 ip1[16], ip2[16];
+ 	u32 color = 0;
+ 
++	if (node == NULL)
++		return;
+ 	if (bits == 32) {
+ 		fmt_connection = KERN_DEBUG "\t\"%pI4/%d\" -> \"%pI4/%d\";\n";
+-		fmt_declaration = KERN_DEBUG
+-			"\t\"%pI4/%d\"[style=%s, color=\"#%06x\"];\n";
++		fmt_declaration = KERN_DEBUG "\t\"%pI4/%d\"[style=%s, color=\"#%06x\"];\n";
+ 	} else if (bits == 128) {
+ 		fmt_connection = KERN_DEBUG "\t\"%pI6/%d\" -> \"%pI6/%d\";\n";
+-		fmt_declaration = KERN_DEBUG
+-			"\t\"%pI6/%d\"[style=%s, color=\"#%06x\"];\n";
++		fmt_declaration = KERN_DEBUG "\t\"%pI6/%d\"[style=%s, color=\"#%06x\"];\n";
+ 	}
+ 	if (node->peer) {
+ 		hsiphash_key_t key = { { 0 } };
+@@ -55,24 +45,20 @@ static __init void print_node(struct allowedips_node *node, u8 bits)
+ 			hsiphash_1u32(0xabad1dea, &key) % 200;
+ 		style = "bold";
+ 	}
+-	swap_endian_and_apply_cidr(ip1, node->bits, bits, node->cidr);
+-	printk(fmt_declaration, ip1, node->cidr, style, color);
++	wg_allowedips_read_node(node, ip1, &cidr1);
++	printk(fmt_declaration, ip1, cidr1, style, color);
+ 	if (node->bit[0]) {
+-		swap_endian_and_apply_cidr(ip2,
+-				rcu_dereference_raw(node->bit[0])->bits, bits,
+-				node->cidr);
+-		printk(fmt_connection, ip1, node->cidr, ip2,
+-		       rcu_dereference_raw(node->bit[0])->cidr);
+-		print_node(rcu_dereference_raw(node->bit[0]), bits);
++		wg_allowedips_read_node(rcu_dereference_raw(node->bit[0]), ip2, &cidr2);
++		printk(fmt_connection, ip1, cidr1, ip2, cidr2);
+ 	}
+ 	if (node->bit[1]) {
+-		swap_endian_and_apply_cidr(ip2,
+-				rcu_dereference_raw(node->bit[1])->bits,
+-				bits, node->cidr);
+-		printk(fmt_connection, ip1, node->cidr, ip2,
+-		       rcu_dereference_raw(node->bit[1])->cidr);
+-		print_node(rcu_dereference_raw(node->bit[1]), bits);
++		wg_allowedips_read_node(rcu_dereference_raw(node->bit[1]), ip2, &cidr2);
++		printk(fmt_connection, ip1, cidr1, ip2, cidr2);
+ 	}
++	if (node->bit[0])
++		print_node(rcu_dereference_raw(node->bit[0]), bits);
++	if (node->bit[1])
++		print_node(rcu_dereference_raw(node->bit[1]), bits);
+ }
+ 
+ static __init void print_tree(struct allowedips_node __rcu *top, u8 bits)
+@@ -121,8 +107,8 @@ static __init inline union nf_inet_addr horrible_cidr_to_mask(u8 cidr)
+ {
+ 	union nf_inet_addr mask;
+ 
+-	memset(&mask, 0x00, 128 / 8);
+-	memset(&mask, 0xff, cidr / 8);
++	memset(&mask, 0, sizeof(mask));
++	memset(&mask.all, 0xff, cidr / 8);
+ 	if (cidr % 32)
+ 		mask.all[cidr / 32] = (__force u32)htonl(
+ 			(0xFFFFFFFFUL << (32 - (cidr % 32))) & 0xFFFFFFFFUL);
+@@ -149,42 +135,36 @@ horrible_mask_self(struct horrible_allowedips_node *node)
+ }
+ 
+ static __init inline bool
+-horrible_match_v4(const struct horrible_allowedips_node *node,
+-		  struct in_addr *ip)
++horrible_match_v4(const struct horrible_allowedips_node *node, struct in_addr *ip)
+ {
+ 	return (ip->s_addr & node->mask.ip) == node->ip.ip;
+ }
+ 
+ static __init inline bool
+-horrible_match_v6(const struct horrible_allowedips_node *node,
+-		  struct in6_addr *ip)
++horrible_match_v6(const struct horrible_allowedips_node *node, struct in6_addr *ip)
+ {
+-	return (ip->in6_u.u6_addr32[0] & node->mask.ip6[0]) ==
+-		       node->ip.ip6[0] &&
+-	       (ip->in6_u.u6_addr32[1] & node->mask.ip6[1]) ==
+-		       node->ip.ip6[1] &&
+-	       (ip->in6_u.u6_addr32[2] & node->mask.ip6[2]) ==
+-		       node->ip.ip6[2] &&
++	return (ip->in6_u.u6_addr32[0] & node->mask.ip6[0]) == node->ip.ip6[0] &&
++	       (ip->in6_u.u6_addr32[1] & node->mask.ip6[1]) == node->ip.ip6[1] &&
++	       (ip->in6_u.u6_addr32[2] & node->mask.ip6[2]) == node->ip.ip6[2] &&
+ 	       (ip->in6_u.u6_addr32[3] & node->mask.ip6[3]) == node->ip.ip6[3];
+ }
+ 
+ static __init void
+-horrible_insert_ordered(struct horrible_allowedips *table,
+-			struct horrible_allowedips_node *node)
++horrible_insert_ordered(struct horrible_allowedips *table, struct horrible_allowedips_node *node)
+ {
+ 	struct horrible_allowedips_node *other = NULL, *where = NULL;
+ 	u8 my_cidr = horrible_mask_to_cidr(node->mask);
+ 
+ 	hlist_for_each_entry(other, &table->head, table) {
+-		if (!memcmp(&other->mask, &node->mask,
+-			    sizeof(union nf_inet_addr)) &&
+-		    !memcmp(&other->ip, &node->ip,
+-			    sizeof(union nf_inet_addr)) &&
+-		    other->ip_version == node->ip_version) {
++		if (other->ip_version == node->ip_version &&
++		    !memcmp(&other->mask, &node->mask, sizeof(union nf_inet_addr)) &&
++		    !memcmp(&other->ip, &node->ip, sizeof(union nf_inet_addr))) {
+ 			other->value = node->value;
+ 			kfree(node);
+ 			return;
+ 		}
++	}
++	hlist_for_each_entry(other, &table->head, table) {
+ 		where = other;
+ 		if (horrible_mask_to_cidr(other->mask) <= my_cidr)
+ 			break;
+@@ -201,8 +181,7 @@ static __init int
+ horrible_allowedips_insert_v4(struct horrible_allowedips *table,
+ 			      struct in_addr *ip, u8 cidr, void *value)
+ {
+-	struct horrible_allowedips_node *node = kzalloc(sizeof(*node),
+-							GFP_KERNEL);
++	struct horrible_allowedips_node *node = kzalloc(sizeof(*node), GFP_KERNEL);
+ 
+ 	if (unlikely(!node))
+ 		return -ENOMEM;
+@@ -219,8 +198,7 @@ static __init int
+ horrible_allowedips_insert_v6(struct horrible_allowedips *table,
+ 			      struct in6_addr *ip, u8 cidr, void *value)
+ {
+-	struct horrible_allowedips_node *node = kzalloc(sizeof(*node),
+-							GFP_KERNEL);
++	struct horrible_allowedips_node *node = kzalloc(sizeof(*node), GFP_KERNEL);
+ 
+ 	if (unlikely(!node))
+ 		return -ENOMEM;
+@@ -234,39 +212,43 @@ horrible_allowedips_insert_v6(struct horrible_allowedips *table,
+ }
+ 
+ static __init void *
+-horrible_allowedips_lookup_v4(struct horrible_allowedips *table,
+-			      struct in_addr *ip)
++horrible_allowedips_lookup_v4(struct horrible_allowedips *table, struct in_addr *ip)
+ {
+ 	struct horrible_allowedips_node *node;
+-	void *ret = NULL;
+ 
+ 	hlist_for_each_entry(node, &table->head, table) {
+-		if (node->ip_version != 4)
+-			continue;
+-		if (horrible_match_v4(node, ip)) {
+-			ret = node->value;
+-			break;
+-		}
++		if (node->ip_version == 4 && horrible_match_v4(node, ip))
++			return node->value;
+ 	}
+-	return ret;
++	return NULL;
+ }
+ 
+ static __init void *
+-horrible_allowedips_lookup_v6(struct horrible_allowedips *table,
+-			      struct in6_addr *ip)
++horrible_allowedips_lookup_v6(struct horrible_allowedips *table, struct in6_addr *ip)
+ {
+ 	struct horrible_allowedips_node *node;
+-	void *ret = NULL;
+ 
+ 	hlist_for_each_entry(node, &table->head, table) {
+-		if (node->ip_version != 6)
++		if (node->ip_version == 6 && horrible_match_v6(node, ip))
++			return node->value;
++	}
++	return NULL;
++}
++
++
++static __init void
++horrible_allowedips_remove_by_value(struct horrible_allowedips *table, void *value)
++{
++	struct horrible_allowedips_node *node;
++	struct hlist_node *h;
++
++	hlist_for_each_entry_safe(node, h, &table->head, table) {
++		if (node->value != value)
+ 			continue;
+-		if (horrible_match_v6(node, ip)) {
+-			ret = node->value;
+-			break;
+-		}
++		hlist_del(&node->table);
++		kfree(node);
+ 	}
+-	return ret;
++
+ }
+ 
+ static __init bool randomized_test(void)
+@@ -296,6 +278,7 @@ static __init bool randomized_test(void)
+ 			goto free;
+ 		}
+ 		kref_init(&peers[i]->refcount);
++		INIT_LIST_HEAD(&peers[i]->allowedips_list);
+ 	}
+ 
+ 	mutex_lock(&mutex);
+@@ -333,7 +316,7 @@ static __init bool randomized_test(void)
+ 			if (wg_allowedips_insert_v4(&t,
+ 						    (struct in_addr *)mutated,
+ 						    cidr, peer, &mutex) < 0) {
+-				pr_err("allowedips random malloc: FAIL\n");
++				pr_err("allowedips random self-test malloc: FAIL\n");
+ 				goto free_locked;
+ 			}
+ 			if (horrible_allowedips_insert_v4(&h,
+@@ -396,23 +379,33 @@ static __init bool randomized_test(void)
+ 		print_tree(t.root6, 128);
+ 	}
+ 
+-	for (i = 0; i < NUM_QUERIES; ++i) {
+-		prandom_bytes(ip, 4);
+-		if (lookup(t.root4, 32, ip) !=
+-		    horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip)) {
+-			pr_err("allowedips random self-test: FAIL\n");
+-			goto free;
++	for (j = 0;; ++j) {
++		for (i = 0; i < NUM_QUERIES; ++i) {
++			prandom_bytes(ip, 4);
++			if (lookup(t.root4, 32, ip) != horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip)) {
++				horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip);
++				pr_err("allowedips random v4 self-test: FAIL\n");
++				goto free;
++			}
++			prandom_bytes(ip, 16);
++			if (lookup(t.root6, 128, ip) != horrible_allowedips_lookup_v6(&h, (struct in6_addr *)ip)) {
++				pr_err("allowedips random v6 self-test: FAIL\n");
++				goto free;
++			}
+ 		}
++		if (j >= NUM_PEERS)
++			break;
++		mutex_lock(&mutex);
++		wg_allowedips_remove_by_peer(&t, peers[j], &mutex);
++		mutex_unlock(&mutex);
++		horrible_allowedips_remove_by_value(&h, peers[j]);
+ 	}
+ 
+-	for (i = 0; i < NUM_QUERIES; ++i) {
+-		prandom_bytes(ip, 16);
+-		if (lookup(t.root6, 128, ip) !=
+-		    horrible_allowedips_lookup_v6(&h, (struct in6_addr *)ip)) {
+-			pr_err("allowedips random self-test: FAIL\n");
+-			goto free;
+-		}
++	if (t.root4 || t.root6) {
++		pr_err("allowedips random self-test removal: FAIL\n");
++		goto free;
+ 	}
++
+ 	ret = true;
+ 
+ free:
+diff --git a/drivers/net/wireguard/socket.c b/drivers/net/wireguard/socket.c
+index c33e2c81635fa..c8cd385d233b6 100644
+--- a/drivers/net/wireguard/socket.c
++++ b/drivers/net/wireguard/socket.c
+@@ -430,7 +430,7 @@ void wg_socket_reinit(struct wg_device *wg, struct sock *new4,
+ 	if (new4)
+ 		wg->incoming_port = ntohs(inet_sk(new4)->inet_sport);
+ 	mutex_unlock(&wg->socket_update_lock);
+-	synchronize_rcu();
++	synchronize_net();
+ 	sock_free(old4);
+ 	sock_free(old6);
+ }
+diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
+index e02a4fbb74de5..7ce9807fc24c5 100644
+--- a/drivers/net/xen-netback/interface.c
++++ b/drivers/net/xen-netback/interface.c
+@@ -685,6 +685,7 @@ static void xenvif_disconnect_queue(struct xenvif_queue *queue)
+ {
+ 	if (queue->task) {
+ 		kthread_stop(queue->task);
++		put_task_struct(queue->task);
+ 		queue->task = NULL;
+ 	}
+ 
+@@ -745,6 +746,11 @@ int xenvif_connect_data(struct xenvif_queue *queue,
+ 	if (IS_ERR(task))
+ 		goto kthread_err;
+ 	queue->task = task;
++	/*
++	 * Take a reference to the task in order to prevent it from being freed
++	 * if the thread function returns before kthread_stop is called.
++	 */
++	get_task_struct(task);
+ 
+ 	task = kthread_run(xenvif_dealloc_kthread, queue,
+ 			   "%s-dealloc", queue->name);
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 8b326508a480e..e6d58402b829d 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1327,16 +1327,17 @@ static int nvme_rdma_map_sg_inline(struct nvme_rdma_queue *queue,
+ 		int count)
+ {
+ 	struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
+-	struct scatterlist *sgl = req->data_sgl.sg_table.sgl;
+ 	struct ib_sge *sge = &req->sge[1];
++	struct scatterlist *sgl;
+ 	u32 len = 0;
+ 	int i;
+ 
+-	for (i = 0; i < count; i++, sgl++, sge++) {
++	for_each_sg(req->data_sgl.sg_table.sgl, sgl, count, i) {
+ 		sge->addr = sg_dma_address(sgl);
+ 		sge->length = sg_dma_len(sgl);
+ 		sge->lkey = queue->device->pd->local_dma_lkey;
+ 		len += sge->length;
++		sge++;
+ 	}
+ 
+ 	sg->addr = cpu_to_le64(queue->ctrl->ctrl.icdoff);
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 46e4f7ea34c8b..8b939e9db470c 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -988,19 +988,23 @@ static unsigned int nvmet_data_transfer_len(struct nvmet_req *req)
+ 	return req->transfer_len - req->metadata_len;
+ }
+ 
+-static int nvmet_req_alloc_p2pmem_sgls(struct nvmet_req *req)
++static int nvmet_req_alloc_p2pmem_sgls(struct pci_dev *p2p_dev,
++		struct nvmet_req *req)
+ {
+-	req->sg = pci_p2pmem_alloc_sgl(req->p2p_dev, &req->sg_cnt,
++	req->sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->sg_cnt,
+ 			nvmet_data_transfer_len(req));
+ 	if (!req->sg)
+ 		goto out_err;
+ 
+ 	if (req->metadata_len) {
+-		req->metadata_sg = pci_p2pmem_alloc_sgl(req->p2p_dev,
++		req->metadata_sg = pci_p2pmem_alloc_sgl(p2p_dev,
+ 				&req->metadata_sg_cnt, req->metadata_len);
+ 		if (!req->metadata_sg)
+ 			goto out_free_sg;
+ 	}
++
++	req->p2p_dev = p2p_dev;
++
+ 	return 0;
+ out_free_sg:
+ 	pci_p2pmem_free_sgl(req->p2p_dev, req->sg);
+@@ -1008,25 +1012,19 @@ out_err:
+ 	return -ENOMEM;
+ }
+ 
+-static bool nvmet_req_find_p2p_dev(struct nvmet_req *req)
++static struct pci_dev *nvmet_req_find_p2p_dev(struct nvmet_req *req)
+ {
+-	if (!IS_ENABLED(CONFIG_PCI_P2PDMA))
+-		return false;
+-
+-	if (req->sq->ctrl && req->sq->qid && req->ns) {
+-		req->p2p_dev = radix_tree_lookup(&req->sq->ctrl->p2p_ns_map,
+-						 req->ns->nsid);
+-		if (req->p2p_dev)
+-			return true;
+-	}
+-
+-	req->p2p_dev = NULL;
+-	return false;
++	if (!IS_ENABLED(CONFIG_PCI_P2PDMA) ||
++	    !req->sq->ctrl || !req->sq->qid || !req->ns)
++		return NULL;
++	return radix_tree_lookup(&req->sq->ctrl->p2p_ns_map, req->ns->nsid);
+ }
+ 
+ int nvmet_req_alloc_sgls(struct nvmet_req *req)
+ {
+-	if (nvmet_req_find_p2p_dev(req) && !nvmet_req_alloc_p2pmem_sgls(req))
++	struct pci_dev *p2p_dev = nvmet_req_find_p2p_dev(req);
++
++	if (p2p_dev && !nvmet_req_alloc_p2pmem_sgls(p2p_dev, req))
+ 		return 0;
+ 
+ 	req->sg = sgl_alloc(nvmet_data_transfer_len(req), GFP_KERNEL,
+@@ -1055,6 +1053,7 @@ void nvmet_req_free_sgls(struct nvmet_req *req)
+ 		pci_p2pmem_free_sgl(req->p2p_dev, req->sg);
+ 		if (req->metadata_sg)
+ 			pci_p2pmem_free_sgl(req->p2p_dev, req->metadata_sg);
++		req->p2p_dev = NULL;
+ 	} else {
+ 		sgl_free(req->sg);
+ 		if (req->metadata_sg)
+diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c
+index 780d7c4fd7565..0790de29f0ca2 100644
+--- a/drivers/tee/optee/call.c
++++ b/drivers/tee/optee/call.c
+@@ -217,6 +217,7 @@ int optee_open_session(struct tee_context *ctx,
+ 	struct optee_msg_arg *msg_arg;
+ 	phys_addr_t msg_parg;
+ 	struct optee_session *sess = NULL;
++	uuid_t client_uuid;
+ 
+ 	/* +2 for the meta parameters added below */
+ 	shm = get_msg_arg(ctx, arg->num_params + 2, &msg_arg, &msg_parg);
+@@ -237,10 +238,11 @@ int optee_open_session(struct tee_context *ctx,
+ 	memcpy(&msg_arg->params[0].u.value, arg->uuid, sizeof(arg->uuid));
+ 	msg_arg->params[1].u.value.c = arg->clnt_login;
+ 
+-	rc = tee_session_calc_client_uuid((uuid_t *)&msg_arg->params[1].u.value,
+-					  arg->clnt_login, arg->clnt_uuid);
++	rc = tee_session_calc_client_uuid(&client_uuid, arg->clnt_login,
++					  arg->clnt_uuid);
+ 	if (rc)
+ 		goto out;
++	export_uuid(msg_arg->params[1].u.octets, &client_uuid);
+ 
+ 	rc = optee_to_msg_param(msg_arg->params + 2, arg->num_params, param);
+ 	if (rc)
+diff --git a/drivers/tee/optee/optee_msg.h b/drivers/tee/optee/optee_msg.h
+index 7b2d919da2ace..c7ac7d02d6cc9 100644
+--- a/drivers/tee/optee/optee_msg.h
++++ b/drivers/tee/optee/optee_msg.h
+@@ -9,7 +9,7 @@
+ #include <linux/types.h>
+ 
+ /*
+- * This file defines the OP-TEE message protocol used to communicate
++ * This file defines the OP-TEE message protocol (ABI) used to communicate
+  * with an instance of OP-TEE running in secure world.
+  *
+  * This file is divided into three sections.
+@@ -146,9 +146,10 @@ struct optee_msg_param_value {
+  * @tmem:	parameter by temporary memory reference
+  * @rmem:	parameter by registered memory reference
+  * @value:	parameter by opaque value
++ * @octets:	parameter by octet string
+  *
+  * @attr & OPTEE_MSG_ATTR_TYPE_MASK indicates if tmem, rmem or value is used in
+- * the union. OPTEE_MSG_ATTR_TYPE_VALUE_* indicates value,
++ * the union. OPTEE_MSG_ATTR_TYPE_VALUE_* indicates value or octets,
+  * OPTEE_MSG_ATTR_TYPE_TMEM_* indicates @tmem and
+  * OPTEE_MSG_ATTR_TYPE_RMEM_* indicates @rmem,
+  * OPTEE_MSG_ATTR_TYPE_NONE indicates that none of the members are used.
+@@ -159,6 +160,7 @@ struct optee_msg_param {
+ 		struct optee_msg_param_tmem tmem;
+ 		struct optee_msg_param_rmem rmem;
+ 		struct optee_msg_param_value value;
++		u8 octets[24];
+ 	} u;
+ };
+ 
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 2cf9fc915510c..844059861f9e1 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -213,14 +213,11 @@ static void stm32_usart_receive_chars(struct uart_port *port, bool threaded)
+ 	struct tty_port *tport = &port->state->port;
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+-	unsigned long c, flags;
++	unsigned long c;
+ 	u32 sr;
+ 	char flag;
+ 
+-	if (threaded)
+-		spin_lock_irqsave(&port->lock, flags);
+-	else
+-		spin_lock(&port->lock);
++	spin_lock(&port->lock);
+ 
+ 	while (stm32_usart_pending_rx(port, &sr, &stm32_port->last_res,
+ 				      threaded)) {
+@@ -277,10 +274,7 @@ static void stm32_usart_receive_chars(struct uart_port *port, bool threaded)
+ 		uart_insert_char(port, sr, USART_SR_ORE, c, flag);
+ 	}
+ 
+-	if (threaded)
+-		spin_unlock_irqrestore(&port->lock, flags);
+-	else
+-		spin_unlock(&port->lock);
++	spin_unlock(&port->lock);
+ 
+ 	tty_flip_buffer_push(tport);
+ }
+@@ -653,7 +647,8 @@ static int stm32_usart_startup(struct uart_port *port)
+ 
+ 	ret = request_threaded_irq(port->irq, stm32_usart_interrupt,
+ 				   stm32_usart_threaded_interrupt,
+-				   IRQF_NO_SUSPEND, name, port);
++				   IRQF_ONESHOT | IRQF_NO_SUSPEND,
++				   name, port);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1126,6 +1121,13 @@ static int stm32_usart_of_dma_rx_probe(struct stm32_port *stm32port,
+ 	struct dma_async_tx_descriptor *desc = NULL;
+ 	int ret;
+ 
++	/*
++	 * Using DMA and threaded handler for the console could lead to
++	 * deadlocks.
++	 */
++	if (uart_console(port))
++		return -ENODEV;
++
+ 	/* Request DMA RX channel */
+ 	stm32port->rx_ch = dma_request_slave_channel(dev, "rx");
+ 	if (!stm32port->rx_ch) {
+diff --git a/drivers/usb/dwc2/core_intr.c b/drivers/usb/dwc2/core_intr.c
+index 510fd0572feb1..e3f429f1575e9 100644
+--- a/drivers/usb/dwc2/core_intr.c
++++ b/drivers/usb/dwc2/core_intr.c
+@@ -707,7 +707,11 @@ static inline void dwc_handle_gpwrdn_disc_det(struct dwc2_hsotg *hsotg,
+ 	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+ 
+ 	hsotg->hibernated = 0;
++
++#if IS_ENABLED(CONFIG_USB_DWC2_HOST) ||	\
++	IS_ENABLED(CONFIG_USB_DWC2_DUAL_ROLE)
+ 	hsotg->bus_suspended = 0;
++#endif
+ 
+ 	if (gpwrdn & GPWRDN_IDSTS) {
+ 		hsotg->op_state = OTG_STATE_B_PERIPHERAL;
+diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
+index 0f28bf99efebc..4e1107767e29b 100644
+--- a/drivers/vfio/pci/Kconfig
++++ b/drivers/vfio/pci/Kconfig
+@@ -2,6 +2,7 @@
+ config VFIO_PCI
+ 	tristate "VFIO support for PCI devices"
+ 	depends on VFIO && PCI && EVENTFD
++	depends on MMU
+ 	select VFIO_VIRQFD
+ 	select IRQ_BYPASS_MANAGER
+ 	help
+diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
+index a402adee8a215..47f21a6ca7fe9 100644
+--- a/drivers/vfio/pci/vfio_pci_config.c
++++ b/drivers/vfio/pci/vfio_pci_config.c
+@@ -1581,7 +1581,7 @@ static int vfio_ecap_init(struct vfio_pci_device *vdev)
+ 			if (len == 0xFF) {
+ 				len = vfio_ext_cap_len(vdev, ecap, epos);
+ 				if (len < 0)
+-					return ret;
++					return len;
+ 			}
+ 		}
+ 
+diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
+index fb4b385191f28..e83a7cd15c956 100644
+--- a/drivers/vfio/platform/vfio_platform_common.c
++++ b/drivers/vfio/platform/vfio_platform_common.c
+@@ -289,7 +289,7 @@ err_irq:
+ 	vfio_platform_regions_cleanup(vdev);
+ err_reg:
+ 	mutex_unlock(&driver_lock);
+-	module_put(THIS_MODULE);
++	module_put(vdev->parent_module);
+ 	return ret;
+ }
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 51c18da4792ec..73ebe0c5fdbc9 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -1297,16 +1297,20 @@ int btrfs_discard_extent(struct btrfs_fs_info *fs_info, u64 bytenr,
+ 		for (i = 0; i < bbio->num_stripes; i++, stripe++) {
+ 			u64 bytes;
+ 			struct request_queue *req_q;
++			struct btrfs_device *device = stripe->dev;
+ 
+-			if (!stripe->dev->bdev) {
++			if (!device->bdev) {
+ 				ASSERT(btrfs_test_opt(fs_info, DEGRADED));
+ 				continue;
+ 			}
+-			req_q = bdev_get_queue(stripe->dev->bdev);
++			req_q = bdev_get_queue(device->bdev);
+ 			if (!blk_queue_discard(req_q))
+ 				continue;
+ 
+-			ret = btrfs_issue_discard(stripe->dev->bdev,
++			if (!test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state))
++				continue;
++
++			ret = btrfs_issue_discard(device->bdev,
+ 						  stripe->physical,
+ 						  stripe->length,
+ 						  &bytes);
+@@ -1830,7 +1834,7 @@ static int cleanup_ref_head(struct btrfs_trans_handle *trans,
+ 	trace_run_delayed_ref_head(fs_info, head, 0);
+ 	btrfs_delayed_ref_unlock(head);
+ 	btrfs_put_delayed_ref_head(head);
+-	return 0;
++	return ret;
+ }
+ 
+ static struct btrfs_delayed_ref_head *btrfs_obtain_ref_head(
+diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
+index 8f4f2bd6d9b95..48a2ea6d70921 100644
+--- a/fs/btrfs/file-item.c
++++ b/fs/btrfs/file-item.c
+@@ -690,7 +690,7 @@ int btrfs_del_csums(struct btrfs_trans_handle *trans,
+ 	u64 end_byte = bytenr + len;
+ 	u64 csum_end;
+ 	struct extent_buffer *leaf;
+-	int ret;
++	int ret = 0;
+ 	u16 csum_size = btrfs_super_csum_size(fs_info->super_copy);
+ 	int blocksize_bits = fs_info->sb->s_blocksize_bits;
+ 
+@@ -709,6 +709,7 @@ int btrfs_del_csums(struct btrfs_trans_handle *trans,
+ 		path->leave_spinning = 1;
+ 		ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
+ 		if (ret > 0) {
++			ret = 0;
+ 			if (path->slots[0] == 0)
+ 				break;
+ 			path->slots[0]--;
+@@ -765,7 +766,7 @@ int btrfs_del_csums(struct btrfs_trans_handle *trans,
+ 			ret = btrfs_del_items(trans, root, path,
+ 					      path->slots[0], del_nr);
+ 			if (ret)
+-				goto out;
++				break;
+ 			if (key.offset == bytenr)
+ 				break;
+ 		} else if (key.offset < bytenr && csum_end > end_byte) {
+@@ -809,8 +810,9 @@ int btrfs_del_csums(struct btrfs_trans_handle *trans,
+ 			ret = btrfs_split_item(trans, root, path, &key, offset);
+ 			if (ret && ret != -EAGAIN) {
+ 				btrfs_abort_transaction(trans, ret);
+-				goto out;
++				break;
+ 			}
++			ret = 0;
+ 
+ 			key.offset = end_byte - 1;
+ 		} else {
+@@ -820,8 +822,6 @@ int btrfs_del_csums(struct btrfs_trans_handle *trans,
+ 		}
+ 		btrfs_release_path(path);
+ 	}
+-	ret = 0;
+-out:
+ 	btrfs_free_path(path);
+ 	return ret;
+ }
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 94c24b2a211bf..4f26dae63b64a 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2760,6 +2760,18 @@ out:
+ 	if (ret || truncated) {
+ 		u64 unwritten_start = start;
+ 
++		/*
++		 * If we failed to finish this ordered extent for any reason we
++		 * need to make sure BTRFS_ORDERED_IOERR is set on the ordered
++		 * extent, and mark the inode with the error if it wasn't
++		 * already set.  Any error during writeback would have already
++		 * set the mapping error, so we need to set it if we're the ones
++		 * marking this ordered extent as failed.
++		 */
++		if (ret && !test_and_set_bit(BTRFS_ORDERED_IOERR,
++					     &ordered_extent->flags))
++			mapping_set_error(ordered_extent->inode->i_mapping, -EIO);
++
+ 		if (truncated)
+ 			unwritten_start += logical_len;
+ 		clear_extent_uptodate(io_tree, unwritten_start, end, NULL);
+@@ -8878,6 +8890,7 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ 	int ret2;
+ 	bool root_log_pinned = false;
+ 	bool dest_log_pinned = false;
++	bool need_abort = false;
+ 
+ 	/* we only allow rename subvolume link between subvolumes */
+ 	if (old_ino != BTRFS_FIRST_FREE_OBJECTID && root != dest)
+@@ -8934,6 +8947,7 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ 					     old_idx);
+ 		if (ret)
+ 			goto out_fail;
++		need_abort = true;
+ 	}
+ 
+ 	/* And now for the dest. */
+@@ -8949,8 +8963,11 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ 					     new_ino,
+ 					     btrfs_ino(BTRFS_I(old_dir)),
+ 					     new_idx);
+-		if (ret)
++		if (ret) {
++			if (need_abort)
++				btrfs_abort_transaction(trans, ret);
+ 			goto out_fail;
++		}
+ 	}
+ 
+ 	/* Update inode version and ctime/mtime. */
+diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
+index eeb66e797e0bf..96ef9fed9a656 100644
+--- a/fs/btrfs/reflink.c
++++ b/fs/btrfs/reflink.c
+@@ -207,10 +207,7 @@ static int clone_copy_inline_extent(struct inode *dst,
+ 			 * inline extent's data to the page.
+ 			 */
+ 			ASSERT(key.offset > 0);
+-			ret = copy_inline_to_page(BTRFS_I(dst), new_key->offset,
+-						  inline_data, size, datal,
+-						  comp_type);
+-			goto out;
++			goto copy_to_page;
+ 		}
+ 	} else if (i_size_read(dst) <= datal) {
+ 		struct btrfs_file_extent_item *ei;
+@@ -226,13 +223,10 @@ static int clone_copy_inline_extent(struct inode *dst,
+ 		    BTRFS_FILE_EXTENT_INLINE)
+ 			goto copy_inline_extent;
+ 
+-		ret = copy_inline_to_page(BTRFS_I(dst), new_key->offset,
+-					  inline_data, size, datal, comp_type);
+-		goto out;
++		goto copy_to_page;
+ 	}
+ 
+ copy_inline_extent:
+-	ret = 0;
+ 	/*
+ 	 * We have no extent items, or we have an extent at offset 0 which may
+ 	 * or may not be inlined. All these cases are dealt the same way.
+@@ -244,11 +238,13 @@ copy_inline_extent:
+ 		 * clone. Deal with all these cases by copying the inline extent
+ 		 * data into the respective page at the destination inode.
+ 		 */
+-		ret = copy_inline_to_page(BTRFS_I(dst), new_key->offset,
+-					  inline_data, size, datal, comp_type);
+-		goto out;
++		goto copy_to_page;
+ 	}
+ 
++	/*
++	 * Release path before starting a new transaction so we don't hold locks
++	 * that would confuse lockdep.
++	 */
+ 	btrfs_release_path(path);
+ 	/*
+ 	 * If we end up here it means were copy the inline extent into a leaf
+@@ -281,11 +277,6 @@ copy_inline_extent:
+ 	ret = btrfs_inode_set_file_extent_range(BTRFS_I(dst), 0, aligned_end);
+ out:
+ 	if (!ret && !trans) {
+-		/*
+-		 * Release path before starting a new transaction so we don't
+-		 * hold locks that would confuse lockdep.
+-		 */
+-		btrfs_release_path(path);
+ 		/*
+ 		 * No transaction here means we copied the inline extent into a
+ 		 * page of the destination inode.
+@@ -306,6 +297,21 @@ out:
+ 		*trans_out = trans;
+ 
+ 	return ret;
++
++copy_to_page:
++	/*
++	 * Release our path because we don't need it anymore and also because
++	 * copy_inline_to_page() needs to reserve data and metadata, which may
++	 * need to flush delalloc when we are low on available space and
++	 * therefore cause a deadlock if writeback of an inline extent needs to
++	 * write to the same leaf or an ordered extent completion needs to write
++	 * to the same leaf.
++	 */
++	btrfs_release_path(path);
++
++	ret = copy_inline_to_page(BTRFS_I(dst), new_key->offset,
++				  inline_data, size, datal, comp_type);
++	goto out;
+ }
+ 
+ /**
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 40845428b739c..d4a3a56726aa8 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -1440,22 +1440,14 @@ static int check_extent_data_ref(struct extent_buffer *leaf,
+ 		return -EUCLEAN;
+ 	}
+ 	for (; ptr < end; ptr += sizeof(*dref)) {
+-		u64 root_objectid;
+-		u64 owner;
+ 		u64 offset;
+-		u64 hash;
+ 
++		/*
++		 * We cannot check the extent_data_ref hash due to possible
++		 * overflow from the leaf due to hash collisions.
++		 */
+ 		dref = (struct btrfs_extent_data_ref *)ptr;
+-		root_objectid = btrfs_extent_data_ref_root(leaf, dref);
+-		owner = btrfs_extent_data_ref_objectid(leaf, dref);
+ 		offset = btrfs_extent_data_ref_offset(leaf, dref);
+-		hash = hash_extent_data_ref(root_objectid, owner, offset);
+-		if (hash != key->offset) {
+-			extent_err(leaf, slot,
+-	"invalid extent data ref hash, item has 0x%016llx key has 0x%016llx",
+-				   hash, key->offset);
+-			return -EUCLEAN;
+-		}
+ 		if (!IS_ALIGNED(offset, leaf->fs_info->sectorsize)) {
+ 			extent_err(leaf, slot,
+ 	"invalid extent data backref offset, have %llu expect aligned to %u",
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 9a0cfa0e124da..300951088a11c 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1752,6 +1752,7 @@ static noinline int fixup_inode_link_counts(struct btrfs_trans_handle *trans,
+ 			break;
+ 
+ 		if (ret == 1) {
++			ret = 0;
+ 			if (path->slots[0] == 0)
+ 				break;
+ 			path->slots[0]--;
+@@ -1764,17 +1765,19 @@ static noinline int fixup_inode_link_counts(struct btrfs_trans_handle *trans,
+ 
+ 		ret = btrfs_del_item(trans, root, path);
+ 		if (ret)
+-			goto out;
++			break;
+ 
+ 		btrfs_release_path(path);
+ 		inode = read_one_inode(root, key.offset);
+-		if (!inode)
+-			return -EIO;
++		if (!inode) {
++			ret = -EIO;
++			break;
++		}
+ 
+ 		ret = fixup_inode_link_count(trans, root, inode);
+ 		iput(inode);
+ 		if (ret)
+-			goto out;
++			break;
+ 
+ 		/*
+ 		 * fixup on a directory may create new entries,
+@@ -1783,8 +1786,6 @@ static noinline int fixup_inode_link_counts(struct btrfs_trans_handle *trans,
+ 		 */
+ 		key.offset = (u64)-1;
+ 	}
+-	ret = 0;
+-out:
+ 	btrfs_release_path(path);
+ 	return ret;
+ }
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 12eac88373032..e6542ba264330 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -3206,7 +3206,10 @@ static int ext4_split_extent_at(handle_t *handle,
+ 		ext4_ext_mark_unwritten(ex2);
+ 
+ 	err = ext4_ext_insert_extent(handle, inode, ppath, &newex, flags);
+-	if (err == -ENOSPC && (EXT4_EXT_MAY_ZEROOUT & split_flag)) {
++	if (err != -ENOSPC && err != -EDQUOT)
++		goto out;
++
++	if (EXT4_EXT_MAY_ZEROOUT & split_flag) {
+ 		if (split_flag & (EXT4_EXT_DATA_VALID1|EXT4_EXT_DATA_VALID2)) {
+ 			if (split_flag & EXT4_EXT_DATA_VALID1) {
+ 				err = ext4_ext_zeroout(inode, ex2);
+@@ -3232,25 +3235,22 @@ static int ext4_split_extent_at(handle_t *handle,
+ 					      ext4_ext_pblock(&orig_ex));
+ 		}
+ 
+-		if (err)
+-			goto fix_extent_len;
+-		/* update the extent length and mark as initialized */
+-		ex->ee_len = cpu_to_le16(ee_len);
+-		ext4_ext_try_to_merge(handle, inode, path, ex);
+-		err = ext4_ext_dirty(handle, inode, path + path->p_depth);
+-		if (err)
+-			goto fix_extent_len;
+-
+-		/* update extent status tree */
+-		err = ext4_zeroout_es(inode, &zero_ex);
+-
+-		goto out;
+-	} else if (err)
+-		goto fix_extent_len;
+-
+-out:
+-	ext4_ext_show_leaf(inode, path);
+-	return err;
++		if (!err) {
++			/* update the extent length and mark as initialized */
++			ex->ee_len = cpu_to_le16(ee_len);
++			ext4_ext_try_to_merge(handle, inode, path, ex);
++			err = ext4_ext_dirty(handle, inode, path + path->p_depth);
++			if (!err)
++				/* update extent status tree */
++				err = ext4_zeroout_es(inode, &zero_ex);
++			/* If we failed at this point, we don't know in which
++			 * state the extent tree exactly is so don't try to fix
++			 * length of the original extent as it may do even more
++			 * damage.
++			 */
++			goto out;
++		}
++	}
+ 
+ fix_extent_len:
+ 	ex->ee_len = orig_ex.ee_len;
+@@ -3260,6 +3260,9 @@ fix_extent_len:
+ 	 */
+ 	ext4_ext_dirty(handle, inode, path + path->p_depth);
+ 	return err;
++out:
++	ext4_ext_show_leaf(inode, path);
++	return err;
+ }
+ 
+ /*
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 896e1176e0449..53647fa038773 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -1227,18 +1227,6 @@ static void ext4_fc_cleanup(journal_t *journal, int full)
+ 
+ /* Ext4 Replay Path Routines */
+ 
+-/* Get length of a particular tlv */
+-static inline int ext4_fc_tag_len(struct ext4_fc_tl *tl)
+-{
+-	return le16_to_cpu(tl->fc_len);
+-}
+-
+-/* Get a pointer to "value" of a tlv */
+-static inline u8 *ext4_fc_tag_val(struct ext4_fc_tl *tl)
+-{
+-	return (u8 *)tl + sizeof(*tl);
+-}
+-
+ /* Helper struct for dentry replay routines */
+ struct dentry_info_args {
+ 	int parent_ino, dname_len, ino, inode_len;
+@@ -1246,28 +1234,29 @@ struct dentry_info_args {
+ };
+ 
+ static inline void tl_to_darg(struct dentry_info_args *darg,
+-				struct  ext4_fc_tl *tl)
++			      struct  ext4_fc_tl *tl, u8 *val)
+ {
+-	struct ext4_fc_dentry_info *fcd;
++	struct ext4_fc_dentry_info fcd;
+ 
+-	fcd = (struct ext4_fc_dentry_info *)ext4_fc_tag_val(tl);
++	memcpy(&fcd, val, sizeof(fcd));
+ 
+-	darg->parent_ino = le32_to_cpu(fcd->fc_parent_ino);
+-	darg->ino = le32_to_cpu(fcd->fc_ino);
+-	darg->dname = fcd->fc_dname;
+-	darg->dname_len = ext4_fc_tag_len(tl) -
+-			sizeof(struct ext4_fc_dentry_info);
++	darg->parent_ino = le32_to_cpu(fcd.fc_parent_ino);
++	darg->ino = le32_to_cpu(fcd.fc_ino);
++	darg->dname = val + offsetof(struct ext4_fc_dentry_info, fc_dname);
++	darg->dname_len = le16_to_cpu(tl->fc_len) -
++		sizeof(struct ext4_fc_dentry_info);
+ }
+ 
+ /* Unlink replay function */
+-static int ext4_fc_replay_unlink(struct super_block *sb, struct ext4_fc_tl *tl)
++static int ext4_fc_replay_unlink(struct super_block *sb, struct ext4_fc_tl *tl,
++				 u8 *val)
+ {
+ 	struct inode *inode, *old_parent;
+ 	struct qstr entry;
+ 	struct dentry_info_args darg;
+ 	int ret = 0;
+ 
+-	tl_to_darg(&darg, tl);
++	tl_to_darg(&darg, tl, val);
+ 
+ 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_UNLINK, darg.ino,
+ 			darg.parent_ino, darg.dname_len);
+@@ -1357,13 +1346,14 @@ out:
+ }
+ 
+ /* Link replay function */
+-static int ext4_fc_replay_link(struct super_block *sb, struct ext4_fc_tl *tl)
++static int ext4_fc_replay_link(struct super_block *sb, struct ext4_fc_tl *tl,
++			       u8 *val)
+ {
+ 	struct inode *inode;
+ 	struct dentry_info_args darg;
+ 	int ret = 0;
+ 
+-	tl_to_darg(&darg, tl);
++	tl_to_darg(&darg, tl, val);
+ 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_LINK, darg.ino,
+ 			darg.parent_ino, darg.dname_len);
+ 
+@@ -1408,9 +1398,10 @@ static int ext4_fc_record_modified_inode(struct super_block *sb, int ino)
+ /*
+  * Inode replay function
+  */
+-static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl)
++static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl,
++				u8 *val)
+ {
+-	struct ext4_fc_inode *fc_inode;
++	struct ext4_fc_inode fc_inode;
+ 	struct ext4_inode *raw_inode;
+ 	struct ext4_inode *raw_fc_inode;
+ 	struct inode *inode = NULL;
+@@ -1418,9 +1409,9 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl)
+ 	int inode_len, ino, ret, tag = le16_to_cpu(tl->fc_tag);
+ 	struct ext4_extent_header *eh;
+ 
+-	fc_inode = (struct ext4_fc_inode *)ext4_fc_tag_val(tl);
++	memcpy(&fc_inode, val, sizeof(fc_inode));
+ 
+-	ino = le32_to_cpu(fc_inode->fc_ino);
++	ino = le32_to_cpu(fc_inode.fc_ino);
+ 	trace_ext4_fc_replay(sb, tag, ino, 0, 0);
+ 
+ 	inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL);
+@@ -1432,12 +1423,13 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl)
+ 
+ 	ext4_fc_record_modified_inode(sb, ino);
+ 
+-	raw_fc_inode = (struct ext4_inode *)fc_inode->fc_raw_inode;
++	raw_fc_inode = (struct ext4_inode *)
++		(val + offsetof(struct ext4_fc_inode, fc_raw_inode));
+ 	ret = ext4_get_fc_inode_loc(sb, ino, &iloc);
+ 	if (ret)
+ 		goto out;
+ 
+-	inode_len = ext4_fc_tag_len(tl) - sizeof(struct ext4_fc_inode);
++	inode_len = le16_to_cpu(tl->fc_len) - sizeof(struct ext4_fc_inode);
+ 	raw_inode = ext4_raw_inode(&iloc);
+ 
+ 	memcpy(raw_inode, raw_fc_inode, offsetof(struct ext4_inode, i_block));
+@@ -1505,14 +1497,15 @@ out:
+  * inode for which we are trying to create a dentry here, should already have
+  * been replayed before we start here.
+  */
+-static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl)
++static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl,
++				 u8 *val)
+ {
+ 	int ret = 0;
+ 	struct inode *inode = NULL;
+ 	struct inode *dir = NULL;
+ 	struct dentry_info_args darg;
+ 
+-	tl_to_darg(&darg, tl);
++	tl_to_darg(&darg, tl, val);
+ 
+ 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_CREAT, darg.ino,
+ 			darg.parent_ino, darg.dname_len);
+@@ -1591,9 +1584,9 @@ static int ext4_fc_record_regions(struct super_block *sb, int ino,
+ 
+ /* Replay add range tag */
+ static int ext4_fc_replay_add_range(struct super_block *sb,
+-				struct ext4_fc_tl *tl)
++				    struct ext4_fc_tl *tl, u8 *val)
+ {
+-	struct ext4_fc_add_range *fc_add_ex;
++	struct ext4_fc_add_range fc_add_ex;
+ 	struct ext4_extent newex, *ex;
+ 	struct inode *inode;
+ 	ext4_lblk_t start, cur;
+@@ -1603,15 +1596,14 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 	struct ext4_ext_path *path = NULL;
+ 	int ret;
+ 
+-	fc_add_ex = (struct ext4_fc_add_range *)ext4_fc_tag_val(tl);
+-	ex = (struct ext4_extent *)&fc_add_ex->fc_ex;
++	memcpy(&fc_add_ex, val, sizeof(fc_add_ex));
++	ex = (struct ext4_extent *)&fc_add_ex.fc_ex;
+ 
+ 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_ADD_RANGE,
+-		le32_to_cpu(fc_add_ex->fc_ino), le32_to_cpu(ex->ee_block),
++		le32_to_cpu(fc_add_ex.fc_ino), le32_to_cpu(ex->ee_block),
+ 		ext4_ext_get_actual_len(ex));
+ 
+-	inode = ext4_iget(sb, le32_to_cpu(fc_add_ex->fc_ino),
+-				EXT4_IGET_NORMAL);
++	inode = ext4_iget(sb, le32_to_cpu(fc_add_ex.fc_ino), EXT4_IGET_NORMAL);
+ 	if (IS_ERR(inode)) {
+ 		jbd_debug(1, "Inode not found.");
+ 		return 0;
+@@ -1720,32 +1712,33 @@ next:
+ 
+ /* Replay DEL_RANGE tag */
+ static int
+-ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl)
++ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
++			 u8 *val)
+ {
+ 	struct inode *inode;
+-	struct ext4_fc_del_range *lrange;
++	struct ext4_fc_del_range lrange;
+ 	struct ext4_map_blocks map;
+ 	ext4_lblk_t cur, remaining;
+ 	int ret;
+ 
+-	lrange = (struct ext4_fc_del_range *)ext4_fc_tag_val(tl);
+-	cur = le32_to_cpu(lrange->fc_lblk);
+-	remaining = le32_to_cpu(lrange->fc_len);
++	memcpy(&lrange, val, sizeof(lrange));
++	cur = le32_to_cpu(lrange.fc_lblk);
++	remaining = le32_to_cpu(lrange.fc_len);
+ 
+ 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_DEL_RANGE,
+-		le32_to_cpu(lrange->fc_ino), cur, remaining);
++		le32_to_cpu(lrange.fc_ino), cur, remaining);
+ 
+-	inode = ext4_iget(sb, le32_to_cpu(lrange->fc_ino), EXT4_IGET_NORMAL);
++	inode = ext4_iget(sb, le32_to_cpu(lrange.fc_ino), EXT4_IGET_NORMAL);
+ 	if (IS_ERR(inode)) {
+-		jbd_debug(1, "Inode %d not found", le32_to_cpu(lrange->fc_ino));
++		jbd_debug(1, "Inode %d not found", le32_to_cpu(lrange.fc_ino));
+ 		return 0;
+ 	}
+ 
+ 	ret = ext4_fc_record_modified_inode(sb, inode->i_ino);
+ 
+ 	jbd_debug(1, "DEL_RANGE, inode %ld, lblk %d, len %d\n",
+-			inode->i_ino, le32_to_cpu(lrange->fc_lblk),
+-			le32_to_cpu(lrange->fc_len));
++			inode->i_ino, le32_to_cpu(lrange.fc_lblk),
++			le32_to_cpu(lrange.fc_len));
+ 	while (remaining > 0) {
+ 		map.m_lblk = cur;
+ 		map.m_len = remaining;
+@@ -1766,8 +1759,8 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl)
+ 	}
+ 
+ 	ret = ext4_punch_hole(inode,
+-		le32_to_cpu(lrange->fc_lblk) << sb->s_blocksize_bits,
+-		le32_to_cpu(lrange->fc_len) <<  sb->s_blocksize_bits);
++		le32_to_cpu(lrange.fc_lblk) << sb->s_blocksize_bits,
++		le32_to_cpu(lrange.fc_len) <<  sb->s_blocksize_bits);
+ 	if (ret)
+ 		jbd_debug(1, "ext4_punch_hole returned %d", ret);
+ 	ext4_ext_replay_shrink_inode(inode,
+@@ -1909,11 +1902,11 @@ static int ext4_fc_replay_scan(journal_t *journal,
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	struct ext4_fc_replay_state *state;
+ 	int ret = JBD2_FC_REPLAY_CONTINUE;
+-	struct ext4_fc_add_range *ext;
+-	struct ext4_fc_tl *tl;
+-	struct ext4_fc_tail *tail;
+-	__u8 *start, *end;
+-	struct ext4_fc_head *head;
++	struct ext4_fc_add_range ext;
++	struct ext4_fc_tl tl;
++	struct ext4_fc_tail tail;
++	__u8 *start, *end, *cur, *val;
++	struct ext4_fc_head head;
+ 	struct ext4_extent *ex;
+ 
+ 	state = &sbi->s_fc_replay_state;
+@@ -1940,15 +1933,17 @@ static int ext4_fc_replay_scan(journal_t *journal,
+ 	}
+ 
+ 	state->fc_replay_expected_off++;
+-	fc_for_each_tl(start, end, tl) {
++	for (cur = start; cur < end; cur = cur + sizeof(tl) + le16_to_cpu(tl.fc_len)) {
++		memcpy(&tl, cur, sizeof(tl));
++		val = cur + sizeof(tl);
+ 		jbd_debug(3, "Scan phase, tag:%s, blk %lld\n",
+-			  tag2str(le16_to_cpu(tl->fc_tag)), bh->b_blocknr);
+-		switch (le16_to_cpu(tl->fc_tag)) {
++			  tag2str(le16_to_cpu(tl.fc_tag)), bh->b_blocknr);
++		switch (le16_to_cpu(tl.fc_tag)) {
+ 		case EXT4_FC_TAG_ADD_RANGE:
+-			ext = (struct ext4_fc_add_range *)ext4_fc_tag_val(tl);
+-			ex = (struct ext4_extent *)&ext->fc_ex;
++			memcpy(&ext, val, sizeof(ext));
++			ex = (struct ext4_extent *)&ext.fc_ex;
+ 			ret = ext4_fc_record_regions(sb,
+-				le32_to_cpu(ext->fc_ino),
++				le32_to_cpu(ext.fc_ino),
+ 				le32_to_cpu(ex->ee_block), ext4_ext_pblock(ex),
+ 				ext4_ext_get_actual_len(ex));
+ 			if (ret < 0)
+@@ -1962,18 +1957,18 @@ static int ext4_fc_replay_scan(journal_t *journal,
+ 		case EXT4_FC_TAG_INODE:
+ 		case EXT4_FC_TAG_PAD:
+ 			state->fc_cur_tag++;
+-			state->fc_crc = ext4_chksum(sbi, state->fc_crc, tl,
+-					sizeof(*tl) + ext4_fc_tag_len(tl));
++			state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,
++					sizeof(tl) + le16_to_cpu(tl.fc_len));
+ 			break;
+ 		case EXT4_FC_TAG_TAIL:
+ 			state->fc_cur_tag++;
+-			tail = (struct ext4_fc_tail *)ext4_fc_tag_val(tl);
+-			state->fc_crc = ext4_chksum(sbi, state->fc_crc, tl,
+-						sizeof(*tl) +
++			memcpy(&tail, val, sizeof(tail));
++			state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,
++						sizeof(tl) +
+ 						offsetof(struct ext4_fc_tail,
+ 						fc_crc));
+-			if (le32_to_cpu(tail->fc_tid) == expected_tid &&
+-				le32_to_cpu(tail->fc_crc) == state->fc_crc) {
++			if (le32_to_cpu(tail.fc_tid) == expected_tid &&
++				le32_to_cpu(tail.fc_crc) == state->fc_crc) {
+ 				state->fc_replay_num_tags = state->fc_cur_tag;
+ 				state->fc_regions_valid =
+ 					state->fc_regions_used;
+@@ -1984,19 +1979,19 @@ static int ext4_fc_replay_scan(journal_t *journal,
+ 			state->fc_crc = 0;
+ 			break;
+ 		case EXT4_FC_TAG_HEAD:
+-			head = (struct ext4_fc_head *)ext4_fc_tag_val(tl);
+-			if (le32_to_cpu(head->fc_features) &
++			memcpy(&head, val, sizeof(head));
++			if (le32_to_cpu(head.fc_features) &
+ 				~EXT4_FC_SUPPORTED_FEATURES) {
+ 				ret = -EOPNOTSUPP;
+ 				break;
+ 			}
+-			if (le32_to_cpu(head->fc_tid) != expected_tid) {
++			if (le32_to_cpu(head.fc_tid) != expected_tid) {
+ 				ret = JBD2_FC_REPLAY_STOP;
+ 				break;
+ 			}
+ 			state->fc_cur_tag++;
+-			state->fc_crc = ext4_chksum(sbi, state->fc_crc, tl,
+-					sizeof(*tl) + ext4_fc_tag_len(tl));
++			state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,
++					    sizeof(tl) + le16_to_cpu(tl.fc_len));
+ 			break;
+ 		default:
+ 			ret = state->fc_replay_num_tags ?
+@@ -2020,11 +2015,11 @@ static int ext4_fc_replay(journal_t *journal, struct buffer_head *bh,
+ {
+ 	struct super_block *sb = journal->j_private;
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	struct ext4_fc_tl *tl;
+-	__u8 *start, *end;
++	struct ext4_fc_tl tl;
++	__u8 *start, *end, *cur, *val;
+ 	int ret = JBD2_FC_REPLAY_CONTINUE;
+ 	struct ext4_fc_replay_state *state = &sbi->s_fc_replay_state;
+-	struct ext4_fc_tail *tail;
++	struct ext4_fc_tail tail;
+ 
+ 	if (pass == PASS_SCAN) {
+ 		state->fc_current_pass = PASS_SCAN;
+@@ -2051,49 +2046,52 @@ static int ext4_fc_replay(journal_t *journal, struct buffer_head *bh,
+ 	start = (u8 *)bh->b_data;
+ 	end = (__u8 *)bh->b_data + journal->j_blocksize - 1;
+ 
+-	fc_for_each_tl(start, end, tl) {
++	for (cur = start; cur < end; cur = cur + sizeof(tl) + le16_to_cpu(tl.fc_len)) {
++		memcpy(&tl, cur, sizeof(tl));
++		val = cur + sizeof(tl);
++
+ 		if (state->fc_replay_num_tags == 0) {
+ 			ret = JBD2_FC_REPLAY_STOP;
+ 			ext4_fc_set_bitmaps_and_counters(sb);
+ 			break;
+ 		}
+ 		jbd_debug(3, "Replay phase, tag:%s\n",
+-				tag2str(le16_to_cpu(tl->fc_tag)));
++				tag2str(le16_to_cpu(tl.fc_tag)));
+ 		state->fc_replay_num_tags--;
+-		switch (le16_to_cpu(tl->fc_tag)) {
++		switch (le16_to_cpu(tl.fc_tag)) {
+ 		case EXT4_FC_TAG_LINK:
+-			ret = ext4_fc_replay_link(sb, tl);
++			ret = ext4_fc_replay_link(sb, &tl, val);
+ 			break;
+ 		case EXT4_FC_TAG_UNLINK:
+-			ret = ext4_fc_replay_unlink(sb, tl);
++			ret = ext4_fc_replay_unlink(sb, &tl, val);
+ 			break;
+ 		case EXT4_FC_TAG_ADD_RANGE:
+-			ret = ext4_fc_replay_add_range(sb, tl);
++			ret = ext4_fc_replay_add_range(sb, &tl, val);
+ 			break;
+ 		case EXT4_FC_TAG_CREAT:
+-			ret = ext4_fc_replay_create(sb, tl);
++			ret = ext4_fc_replay_create(sb, &tl, val);
+ 			break;
+ 		case EXT4_FC_TAG_DEL_RANGE:
+-			ret = ext4_fc_replay_del_range(sb, tl);
++			ret = ext4_fc_replay_del_range(sb, &tl, val);
+ 			break;
+ 		case EXT4_FC_TAG_INODE:
+-			ret = ext4_fc_replay_inode(sb, tl);
++			ret = ext4_fc_replay_inode(sb, &tl, val);
+ 			break;
+ 		case EXT4_FC_TAG_PAD:
+ 			trace_ext4_fc_replay(sb, EXT4_FC_TAG_PAD, 0,
+-				ext4_fc_tag_len(tl), 0);
++					     le16_to_cpu(tl.fc_len), 0);
+ 			break;
+ 		case EXT4_FC_TAG_TAIL:
+ 			trace_ext4_fc_replay(sb, EXT4_FC_TAG_TAIL, 0,
+-				ext4_fc_tag_len(tl), 0);
+-			tail = (struct ext4_fc_tail *)ext4_fc_tag_val(tl);
+-			WARN_ON(le32_to_cpu(tail->fc_tid) != expected_tid);
++					     le16_to_cpu(tl.fc_len), 0);
++			memcpy(&tail, val, sizeof(tail));
++			WARN_ON(le32_to_cpu(tail.fc_tid) != expected_tid);
+ 			break;
+ 		case EXT4_FC_TAG_HEAD:
+ 			break;
+ 		default:
+-			trace_ext4_fc_replay(sb, le16_to_cpu(tl->fc_tag), 0,
+-				ext4_fc_tag_len(tl), 0);
++			trace_ext4_fc_replay(sb, le16_to_cpu(tl.fc_tag), 0,
++					     le16_to_cpu(tl.fc_len), 0);
+ 			ret = -ECANCELED;
+ 			break;
+ 		}
+diff --git a/fs/ext4/fast_commit.h b/fs/ext4/fast_commit.h
+index 3a6e5a1fa1b80..d8d0998a5c163 100644
+--- a/fs/ext4/fast_commit.h
++++ b/fs/ext4/fast_commit.h
+@@ -146,12 +146,5 @@ struct ext4_fc_replay_state {
+ 
+ #define region_last(__region) (((__region)->lblk) + ((__region)->len) - 1)
+ 
+-#define fc_for_each_tl(__start, __end, __tl)				\
+-	for (tl = (struct ext4_fc_tl *)start;				\
+-		(u8 *)tl < (u8 *)end;					\
+-		tl = (struct ext4_fc_tl *)((u8 *)tl +			\
+-					sizeof(struct ext4_fc_tl) +	\
+-					+ le16_to_cpu(tl->fc_len)))
+-
+ 
+ #endif /* __FAST_COMMIT_H__ */
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index c92558ede623e..b294ebcb4db4b 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -322,14 +322,16 @@ void ext4_free_inode(handle_t *handle, struct inode *inode)
+ 	if (is_directory) {
+ 		count = ext4_used_dirs_count(sb, gdp) - 1;
+ 		ext4_used_dirs_set(sb, gdp, count);
+-		percpu_counter_dec(&sbi->s_dirs_counter);
++		if (percpu_counter_initialized(&sbi->s_dirs_counter))
++			percpu_counter_dec(&sbi->s_dirs_counter);
+ 	}
+ 	ext4_inode_bitmap_csum_set(sb, block_group, gdp, bitmap_bh,
+ 				   EXT4_INODES_PER_GROUP(sb) / 8);
+ 	ext4_group_desc_csum_set(sb, block_group, gdp);
+ 	ext4_unlock_group(sb, block_group);
+ 
+-	percpu_counter_inc(&sbi->s_freeinodes_counter);
++	if (percpu_counter_initialized(&sbi->s_freeinodes_counter))
++		percpu_counter_inc(&sbi->s_freeinodes_counter);
+ 	if (sbi->s_log_groups_per_flex) {
+ 		struct flex_groups *fg;
+ 
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index b6229fe1aa233..9c390c3d7fb15 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -2738,7 +2738,7 @@ static int ext4_mb_init_backend(struct super_block *sb)
+ 		 */
+ 		if (sbi->s_es->s_log_groups_per_flex >= 32) {
+ 			ext4_msg(sb, KERN_ERR, "too many log groups per flexible block group");
+-			goto err_freesgi;
++			goto err_freebuddy;
+ 		}
+ 		sbi->s_mb_prefetch = min_t(uint, 1 << sbi->s_es->s_log_groups_per_flex,
+ 			BLK_MAX_SEGMENT_SIZE >> (sb->s_blocksize_bits - 9));
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index c7f5b665834fc..21c4ba2513ce5 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -4451,14 +4451,20 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	}
+ 
+ 	if (sb->s_blocksize != blocksize) {
++		/*
++		 * bh must be released before kill_bdev(), otherwise
++		 * it won't be freed and its page also. kill_bdev()
++		 * is called by sb_set_blocksize().
++		 */
++		brelse(bh);
+ 		/* Validate the filesystem blocksize */
+ 		if (!sb_set_blocksize(sb, blocksize)) {
+ 			ext4_msg(sb, KERN_ERR, "bad block size %d",
+ 					blocksize);
++			bh = NULL;
+ 			goto failed_mount;
+ 		}
+ 
+-		brelse(bh);
+ 		logical_sb_block = sb_block * EXT4_MIN_BLOCK_SIZE;
+ 		offset = do_div(logical_sb_block, blocksize);
+ 		bh = ext4_sb_bread_unmovable(sb, logical_sb_block);
+@@ -5181,8 +5187,9 @@ failed_mount:
+ 		kfree(get_qf_name(sb, sbi, i));
+ #endif
+ 	fscrypt_free_dummy_policy(&sbi->s_dummy_enc_policy);
+-	ext4_blkdev_remove(sbi);
++	/* ext4_blkdev_remove() calls kill_bdev(), release bh before it. */
+ 	brelse(bh);
++	ext4_blkdev_remove(sbi);
+ out_fail:
+ 	sb->s_fs_info = NULL;
+ 	kfree(sbi->s_blockgroup_lock);
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 35a6fd103761b..ea2f2de448063 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -1457,9 +1457,11 @@ void gfs2_glock_dq(struct gfs2_holder *gh)
+ 	    glock_blocked_by_withdraw(gl) &&
+ 	    gh->gh_gl != sdp->sd_jinode_gl) {
+ 		sdp->sd_glock_dqs_held++;
++		spin_unlock(&gl->gl_lockref.lock);
+ 		might_sleep();
+ 		wait_on_bit(&sdp->sd_flags, SDF_WITHDRAW_RECOVERY,
+ 			    TASK_UNINTERRUPTIBLE);
++		spin_lock(&gl->gl_lockref.lock);
+ 	}
+ 	if (gh->gh_flags & GL_NOCACHE)
+ 		handle_callback(gl, LM_ST_UNLOCKED, 0, false);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 369ec81033d67..fdbaaf579cc60 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -545,7 +545,7 @@ struct io_statx {
+ struct io_completion {
+ 	struct file			*file;
+ 	struct list_head		list;
+-	int				cflags;
++	u32				cflags;
+ };
+ 
+ struct io_async_connect {
+@@ -1711,7 +1711,8 @@ static void io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+ 	}
+ }
+ 
+-static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags)
++static void __io_cqring_fill_event(struct io_kiocb *req, long res,
++				   unsigned int cflags)
+ {
+ 	struct io_ring_ctx *ctx = req->ctx;
+ 	struct io_uring_cqe *cqe;
+@@ -6266,6 +6267,7 @@ static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
+ 	if (prev) {
+ 		io_async_find_and_cancel(ctx, req, prev->user_data, -ETIME);
+ 		io_put_req_deferred(prev, 1);
++		io_put_req_deferred(req, 1);
+ 	} else {
+ 		io_cqring_add_event(req, -ETIME, 0);
+ 		io_put_req_deferred(req, 1);
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index 8880071ee4ee0..2b296d720c9fa 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -1855,6 +1855,45 @@ out:
+ 	return ret;
+ }
+ 
++/*
++ * zero out partial blocks of one cluster.
++ *
++ * start: file offset where zero starts, will be made upper block aligned.
++ * len: it will be trimmed to the end of current cluster if "start + len"
++ *      is bigger than it.
++ */
++static int ocfs2_zeroout_partial_cluster(struct inode *inode,
++					u64 start, u64 len)
++{
++	int ret;
++	u64 start_block, end_block, nr_blocks;
++	u64 p_block, offset;
++	u32 cluster, p_cluster, nr_clusters;
++	struct super_block *sb = inode->i_sb;
++	u64 end = ocfs2_align_bytes_to_clusters(sb, start);
++
++	if (start + len < end)
++		end = start + len;
++
++	start_block = ocfs2_blocks_for_bytes(sb, start);
++	end_block = ocfs2_blocks_for_bytes(sb, end);
++	nr_blocks = end_block - start_block;
++	if (!nr_blocks)
++		return 0;
++
++	cluster = ocfs2_bytes_to_clusters(sb, start);
++	ret = ocfs2_get_clusters(inode, cluster, &p_cluster,
++				&nr_clusters, NULL);
++	if (ret)
++		return ret;
++	if (!p_cluster)
++		return 0;
++
++	offset = start_block - ocfs2_clusters_to_blocks(sb, cluster);
++	p_block = ocfs2_clusters_to_blocks(sb, p_cluster) + offset;
++	return sb_issue_zeroout(sb, p_block, nr_blocks, GFP_NOFS);
++}
++
+ /*
+  * Parts of this function taken from xfs_change_file_space()
+  */
+@@ -1865,7 +1904,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ {
+ 	int ret;
+ 	s64 llen;
+-	loff_t size;
++	loff_t size, orig_isize;
+ 	struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+ 	struct buffer_head *di_bh = NULL;
+ 	handle_t *handle;
+@@ -1896,6 +1935,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 		goto out_inode_unlock;
+ 	}
+ 
++	orig_isize = i_size_read(inode);
+ 	switch (sr->l_whence) {
+ 	case 0: /*SEEK_SET*/
+ 		break;
+@@ -1903,7 +1943,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 		sr->l_start += f_pos;
+ 		break;
+ 	case 2: /*SEEK_END*/
+-		sr->l_start += i_size_read(inode);
++		sr->l_start += orig_isize;
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+@@ -1957,6 +1997,14 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 	default:
+ 		ret = -EINVAL;
+ 	}
++
++	/* zeroout eof blocks in the cluster. */
++	if (!ret && change_size && orig_isize < size) {
++		ret = ocfs2_zeroout_partial_cluster(inode, orig_isize,
++					size - orig_isize);
++		if (!ret)
++			i_size_write(inode, size);
++	}
+ 	up_write(&OCFS2_I(inode)->ip_alloc_sem);
+ 	if (ret) {
+ 		mlog_errno(ret);
+@@ -1973,9 +2021,6 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 		goto out_inode_unlock;
+ 	}
+ 
+-	if (change_size && i_size_read(inode) < size)
+-		i_size_write(inode, size);
+-
+ 	inode->i_ctime = inode->i_mtime = current_time(inode);
+ 	ret = ocfs2_mark_inode_dirty(handle, inode, di_bh);
+ 	if (ret < 0)
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index cc9ee07769745..af8f4e2cf21d1 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -1223,6 +1223,8 @@ enum mlx5_fc_bulk_alloc_bitmask {
+ 
+ #define MLX5_FC_BULK_NUM_FCS(fc_enum) (MLX5_FC_BULK_SIZE_FACTOR * (fc_enum))
+ 
++#define MLX5_FT_MAX_MULTIPATH_LEVEL 63
++
+ enum {
+ 	MLX5_STEERING_FORMAT_CONNECTX_5   = 0,
+ 	MLX5_STEERING_FORMAT_CONNECTX_6DX = 1,
+diff --git a/include/linux/platform_data/ti-sysc.h b/include/linux/platform_data/ti-sysc.h
+index fafc1beea504a..9837fb011f2fb 100644
+--- a/include/linux/platform_data/ti-sysc.h
++++ b/include/linux/platform_data/ti-sysc.h
+@@ -50,6 +50,7 @@ struct sysc_regbits {
+ 	s8 emufree_shift;
+ };
+ 
++#define SYSC_QUIRK_REINIT_ON_RESUME	BIT(27)
+ #define SYSC_QUIRK_GPMC_DEBUG		BIT(26)
+ #define SYSC_MODULE_QUIRK_ENA_RESETDONE	BIT(25)
+ #define SYSC_MODULE_QUIRK_PRUSS		BIT(24)
+diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h
+index 2e4f7721fc4e7..8110c29fab42d 100644
+--- a/include/linux/usb/usbnet.h
++++ b/include/linux/usb/usbnet.h
+@@ -83,6 +83,8 @@ struct usbnet {
+ #		define EVENT_LINK_CHANGE	11
+ #		define EVENT_SET_RX_MODE	12
+ #		define EVENT_NO_IP_ALIGN	13
++	u32			rx_speed;	/* in bps - NOT Mbps */
++	u32			tx_speed;	/* in bps - NOT Mbps */
+ };
+ 
+ static inline struct usb_driver *driver_of(struct usb_interface *intf)
+diff --git a/include/net/caif/caif_dev.h b/include/net/caif/caif_dev.h
+index 48ecca8530ffa..b655d8666f555 100644
+--- a/include/net/caif/caif_dev.h
++++ b/include/net/caif/caif_dev.h
+@@ -119,7 +119,7 @@ void caif_free_client(struct cflayer *adap_layer);
+  * The link_support layer is used to add any Link Layer specific
+  * framing.
+  */
+-void caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev,
++int caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev,
+ 			struct cflayer *link_support, int head_room,
+ 			struct cflayer **layer, int (**rcv_func)(
+ 				struct sk_buff *, struct net_device *,
+diff --git a/include/net/caif/cfcnfg.h b/include/net/caif/cfcnfg.h
+index 2aa5e91d84576..8819ff4db35a6 100644
+--- a/include/net/caif/cfcnfg.h
++++ b/include/net/caif/cfcnfg.h
+@@ -62,7 +62,7 @@ void cfcnfg_remove(struct cfcnfg *cfg);
+  * @fcs:	Specify if checksum is used in CAIF Framing Layer.
+  * @head_room:	Head space needed by link specific protocol.
+  */
+-void
++int
+ cfcnfg_add_phy_layer(struct cfcnfg *cnfg,
+ 		     struct net_device *dev, struct cflayer *phy_layer,
+ 		     enum cfcnfg_phy_preference pref,
+diff --git a/include/net/caif/cfserl.h b/include/net/caif/cfserl.h
+index 14a55e03bb3ce..67cce8757175a 100644
+--- a/include/net/caif/cfserl.h
++++ b/include/net/caif/cfserl.h
+@@ -9,4 +9,5 @@
+ #include <net/caif/caif_layer.h>
+ 
+ struct cflayer *cfserl_create(int instance, bool use_stx);
++void cfserl_release(struct cflayer *layer);
+ #endif
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 2bdd802212fe0..43891b28fc482 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -193,7 +193,11 @@ struct tls_offload_context_tx {
+ 	(sizeof(struct tls_offload_context_tx) + TLS_DRIVER_STATE_SIZE_TX)
+ 
+ enum tls_context_flags {
+-	TLS_RX_SYNC_RUNNING = 0,
++	/* tls_device_down was called after the netdev went down, device state
++	 * was released, and kTLS works in software, even though rx_conf is
++	 * still TLS_HW (needed for transition).
++	 */
++	TLS_RX_DEV_DEGRADED = 0,
+ 	/* Unlike RX where resync is driven entirely by the core in TX only
+ 	 * the driver knows when things went out of sync, so we need the flag
+ 	 * to be atomic.
+@@ -265,6 +269,7 @@ struct tls_context {
+ 
+ 	/* cache cold stuff */
+ 	struct proto *sk_proto;
++	struct sock *sk;
+ 
+ 	void (*sk_destruct)(struct sock *sk);
+ 
+@@ -447,6 +452,9 @@ static inline u16 tls_user_config(struct tls_context *ctx, bool tx)
+ struct sk_buff *
+ tls_validate_xmit_skb(struct sock *sk, struct net_device *dev,
+ 		      struct sk_buff *skb);
++struct sk_buff *
++tls_validate_xmit_skb_sw(struct sock *sk, struct net_device *dev,
++			 struct sk_buff *skb);
+ 
+ static inline bool tls_is_sk_tx_device_offloaded(struct sock *sk)
+ {
+diff --git a/init/main.c b/init/main.c
+index d9d9141112511..b4449544390ca 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -1505,7 +1505,7 @@ static noinline void __init kernel_init_freeable(void)
+ 	 */
+ 	set_mems_allowed(node_states[N_MEMORY]);
+ 
+-	cad_pid = task_pid(current);
++	cad_pid = get_pid(task_pid(current));
+ 
+ 	smp_prepare_cpus(setup_max_cpus);
+ 
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index c489430cac78c..f7e99bb8c3b6c 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -14,6 +14,7 @@
+ #include <linux/jiffies.h>
+ #include <linux/pid_namespace.h>
+ #include <linux/proc_ns.h>
++#include <linux/security.h>
+ 
+ #include "../../lib/kstrtox.h"
+ 
+@@ -707,14 +708,6 @@ bpf_base_func_proto(enum bpf_func_id func_id)
+ 		return &bpf_spin_lock_proto;
+ 	case BPF_FUNC_spin_unlock:
+ 		return &bpf_spin_unlock_proto;
+-	case BPF_FUNC_trace_printk:
+-		if (!perfmon_capable())
+-			return NULL;
+-		return bpf_get_trace_printk_proto();
+-	case BPF_FUNC_snprintf_btf:
+-		if (!perfmon_capable())
+-			return NULL;
+-		return &bpf_snprintf_btf_proto;
+ 	case BPF_FUNC_jiffies64:
+ 		return &bpf_jiffies64_proto;
+ 	case BPF_FUNC_per_cpu_ptr:
+@@ -729,16 +722,22 @@ bpf_base_func_proto(enum bpf_func_id func_id)
+ 		return NULL;
+ 
+ 	switch (func_id) {
++	case BPF_FUNC_trace_printk:
++		return bpf_get_trace_printk_proto();
+ 	case BPF_FUNC_get_current_task:
+ 		return &bpf_get_current_task_proto;
+ 	case BPF_FUNC_probe_read_user:
+ 		return &bpf_probe_read_user_proto;
+ 	case BPF_FUNC_probe_read_kernel:
+-		return &bpf_probe_read_kernel_proto;
++		return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?
++		       NULL : &bpf_probe_read_kernel_proto;
+ 	case BPF_FUNC_probe_read_user_str:
+ 		return &bpf_probe_read_user_str_proto;
+ 	case BPF_FUNC_probe_read_kernel_str:
+-		return &bpf_probe_read_kernel_str_proto;
++		return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?
++		       NULL : &bpf_probe_read_kernel_str_proto;
++	case BPF_FUNC_snprintf_btf:
++		return &bpf_snprintf_btf_proto;
+ 	default:
+ 		return NULL;
+ 	}
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index fcbfc95649967..01710831fd02f 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -212,16 +212,11 @@ const struct bpf_func_proto bpf_probe_read_user_str_proto = {
+ static __always_inline int
+ bpf_probe_read_kernel_common(void *dst, u32 size, const void *unsafe_ptr)
+ {
+-	int ret = security_locked_down(LOCKDOWN_BPF_READ);
++	int ret;
+ 
+-	if (unlikely(ret < 0))
+-		goto fail;
+ 	ret = copy_from_kernel_nofault(dst, unsafe_ptr, size);
+ 	if (unlikely(ret < 0))
+-		goto fail;
+-	return ret;
+-fail:
+-	memset(dst, 0, size);
++		memset(dst, 0, size);
+ 	return ret;
+ }
+ 
+@@ -243,10 +238,7 @@ const struct bpf_func_proto bpf_probe_read_kernel_proto = {
+ static __always_inline int
+ bpf_probe_read_kernel_str_common(void *dst, u32 size, const void *unsafe_ptr)
+ {
+-	int ret = security_locked_down(LOCKDOWN_BPF_READ);
+-
+-	if (unlikely(ret < 0))
+-		goto fail;
++	int ret;
+ 
+ 	/*
+ 	 * The strncpy_from_kernel_nofault() call will likely not fill the
+@@ -259,11 +251,7 @@ bpf_probe_read_kernel_str_common(void *dst, u32 size, const void *unsafe_ptr)
+ 	 */
+ 	ret = strncpy_from_kernel_nofault(dst, unsafe_ptr, size);
+ 	if (unlikely(ret < 0))
+-		goto fail;
+-
+-	return ret;
+-fail:
+-	memset(dst, 0, size);
++		memset(dst, 0, size);
+ 	return ret;
+ }
+ 
+@@ -1293,16 +1281,20 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+ 	case BPF_FUNC_probe_read_user:
+ 		return &bpf_probe_read_user_proto;
+ 	case BPF_FUNC_probe_read_kernel:
+-		return &bpf_probe_read_kernel_proto;
++		return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?
++		       NULL : &bpf_probe_read_kernel_proto;
+ 	case BPF_FUNC_probe_read_user_str:
+ 		return &bpf_probe_read_user_str_proto;
+ 	case BPF_FUNC_probe_read_kernel_str:
+-		return &bpf_probe_read_kernel_str_proto;
++		return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?
++		       NULL : &bpf_probe_read_kernel_str_proto;
+ #ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+ 	case BPF_FUNC_probe_read:
+-		return &bpf_probe_read_compat_proto;
++		return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?
++		       NULL : &bpf_probe_read_compat_proto;
+ 	case BPF_FUNC_probe_read_str:
+-		return &bpf_probe_read_compat_str_proto;
++		return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?
++		       NULL : &bpf_probe_read_compat_str_proto;
+ #endif
+ #ifdef CONFIG_CGROUPS
+ 	case BPF_FUNC_get_current_cgroup_id:
+diff --git a/lib/lz4/lz4_decompress.c b/lib/lz4/lz4_decompress.c
+index 00cb0d0b73e16..8a7724a6ce2fb 100644
+--- a/lib/lz4/lz4_decompress.c
++++ b/lib/lz4/lz4_decompress.c
+@@ -263,7 +263,11 @@ static FORCE_INLINE int LZ4_decompress_generic(
+ 				}
+ 			}
+ 
+-			LZ4_memcpy(op, ip, length);
++			/*
++			 * supports overlapping memory regions; only matters
++			 * for in-place decompression scenarios
++			 */
++			LZ4_memmove(op, ip, length);
+ 			ip += length;
+ 			op += length;
+ 
+diff --git a/lib/lz4/lz4defs.h b/lib/lz4/lz4defs.h
+index c91dd96ef6291..673bd206aa98b 100644
+--- a/lib/lz4/lz4defs.h
++++ b/lib/lz4/lz4defs.h
+@@ -146,6 +146,7 @@ static FORCE_INLINE void LZ4_writeLE16(void *memPtr, U16 value)
+  * environments. This is needed when decompressing the Linux Kernel, for example.
+  */
+ #define LZ4_memcpy(dst, src, size) __builtin_memcpy(dst, src, size)
++#define LZ4_memmove(dst, src, size) __builtin_memmove(dst, src, size)
+ 
+ static FORCE_INLINE void LZ4_copy8(void *dst, const void *src)
+ {
+diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
+index c05d9dcf78911..750bfef26be37 100644
+--- a/mm/debug_vm_pgtable.c
++++ b/mm/debug_vm_pgtable.c
+@@ -163,7 +163,7 @@ static void __init pmd_advanced_tests(struct mm_struct *mm,
+ 
+ 	pr_debug("Validating PMD advanced\n");
+ 	/* Align the address wrt HPAGE_PMD_SIZE */
+-	vaddr = (vaddr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
++	vaddr &= HPAGE_PMD_MASK;
+ 
+ 	pgtable_trans_huge_deposit(mm, pmdp, pgtable);
+ 
+@@ -285,7 +285,7 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
+ 
+ 	pr_debug("Validating PUD advanced\n");
+ 	/* Align the address wrt HPAGE_PUD_SIZE */
+-	vaddr = (vaddr & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE;
++	vaddr &= HPAGE_PUD_MASK;
+ 
+ 	set_pud_at(mm, vaddr, pudp, pud);
+ 	pudp_set_wrprotect(mm, vaddr, pudp);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 900851a4f9146..bc1006a327338 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4708,10 +4708,20 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
+ 	struct page *page;
+ 
+ 	if (!*pagep) {
+-		ret = -ENOMEM;
++		/* If a page already exists, then it's UFFDIO_COPY for
++		 * a non-missing case. Return -EEXIST.
++		 */
++		if (vm_shared &&
++		    hugetlbfs_pagecache_present(h, dst_vma, dst_addr)) {
++			ret = -EEXIST;
++			goto out;
++		}
++
+ 		page = alloc_huge_page(dst_vma, dst_addr, 0);
+-		if (IS_ERR(page))
++		if (IS_ERR(page)) {
++			ret = -ENOMEM;
+ 			goto out;
++		}
+ 
+ 		ret = copy_huge_page_from_user(page,
+ 						(const void __user *) src_addr,
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 7ffa706e5c305..81cc7fdc9c8fd 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -8870,6 +8870,8 @@ bool take_page_off_buddy(struct page *page)
+ 			del_page_from_free_list(page_head, zone, page_order);
+ 			break_down_buddy_pages(zone, page_head, page, 0,
+ 						page_order, migratetype);
++			if (!is_migrate_isolate(migratetype))
++				__mod_zone_freepage_state(zone, -1, migratetype);
+ 			ret = true;
+ 			break;
+ 		}
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 0152bc6b67967..86ebfc6ae6986 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1602,8 +1602,13 @@ setup_failed:
+ 	} else {
+ 		/* Init failed, cleanup */
+ 		flush_work(&hdev->tx_work);
+-		flush_work(&hdev->cmd_work);
++
++		/* Since hci_rx_work() is possible to awake new cmd_work
++		 * it should be flushed first to avoid unexpected call of
++		 * hci_cmd_work()
++		 */
+ 		flush_work(&hdev->rx_work);
++		flush_work(&hdev->cmd_work);
+ 
+ 		skb_queue_purge(&hdev->cmd_q);
+ 		skb_queue_purge(&hdev->rx_q);
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 251b9128f530a..eed0dd066e12c 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -762,7 +762,7 @@ void hci_sock_dev_event(struct hci_dev *hdev, int event)
+ 		/* Detach sockets from device */
+ 		read_lock(&hci_sk_list.lock);
+ 		sk_for_each(sk, &hci_sk_list.head) {
+-			bh_lock_sock_nested(sk);
++			lock_sock(sk);
+ 			if (hci_pi(sk)->hdev == hdev) {
+ 				hci_pi(sk)->hdev = NULL;
+ 				sk->sk_err = EPIPE;
+@@ -771,7 +771,7 @@ void hci_sock_dev_event(struct hci_dev *hdev, int event)
+ 
+ 				hci_dev_put(hdev);
+ 			}
+-			bh_unlock_sock(sk);
++			release_sock(sk);
+ 		}
+ 		read_unlock(&hci_sk_list.lock);
+ 	}
+diff --git a/net/caif/caif_dev.c b/net/caif/caif_dev.c
+index c10e5a55758d2..440139706130a 100644
+--- a/net/caif/caif_dev.c
++++ b/net/caif/caif_dev.c
+@@ -308,7 +308,7 @@ static void dev_flowctrl(struct net_device *dev, int on)
+ 	caifd_put(caifd);
+ }
+ 
+-void caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev,
++int caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev,
+ 		     struct cflayer *link_support, int head_room,
+ 		     struct cflayer **layer,
+ 		     int (**rcv_func)(struct sk_buff *, struct net_device *,
+@@ -319,11 +319,12 @@ void caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev,
+ 	enum cfcnfg_phy_preference pref;
+ 	struct cfcnfg *cfg = get_cfcnfg(dev_net(dev));
+ 	struct caif_device_entry_list *caifdevs;
++	int res;
+ 
+ 	caifdevs = caif_device_list(dev_net(dev));
+ 	caifd = caif_device_alloc(dev);
+ 	if (!caifd)
+-		return;
++		return -ENOMEM;
+ 	*layer = &caifd->layer;
+ 	spin_lock_init(&caifd->flow_lock);
+ 
+@@ -344,7 +345,7 @@ void caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev,
+ 	strlcpy(caifd->layer.name, dev->name,
+ 		sizeof(caifd->layer.name));
+ 	caifd->layer.transmit = transmit;
+-	cfcnfg_add_phy_layer(cfg,
++	res = cfcnfg_add_phy_layer(cfg,
+ 				dev,
+ 				&caifd->layer,
+ 				pref,
+@@ -354,6 +355,7 @@ void caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev,
+ 	mutex_unlock(&caifdevs->lock);
+ 	if (rcv_func)
+ 		*rcv_func = receive;
++	return res;
+ }
+ EXPORT_SYMBOL(caif_enroll_dev);
+ 
+@@ -368,6 +370,7 @@ static int caif_device_notify(struct notifier_block *me, unsigned long what,
+ 	struct cflayer *layer, *link_support;
+ 	int head_room = 0;
+ 	struct caif_device_entry_list *caifdevs;
++	int res;
+ 
+ 	cfg = get_cfcnfg(dev_net(dev));
+ 	caifdevs = caif_device_list(dev_net(dev));
+@@ -393,8 +396,10 @@ static int caif_device_notify(struct notifier_block *me, unsigned long what,
+ 				break;
+ 			}
+ 		}
+-		caif_enroll_dev(dev, caifdev, link_support, head_room,
++		res = caif_enroll_dev(dev, caifdev, link_support, head_room,
+ 				&layer, NULL);
++		if (res)
++			cfserl_release(link_support);
+ 		caifdev->flowctrl = dev_flowctrl;
+ 		break;
+ 
+diff --git a/net/caif/caif_usb.c b/net/caif/caif_usb.c
+index a0116b9503d9d..b02e1292f7f19 100644
+--- a/net/caif/caif_usb.c
++++ b/net/caif/caif_usb.c
+@@ -115,6 +115,11 @@ static struct cflayer *cfusbl_create(int phyid, u8 ethaddr[ETH_ALEN],
+ 	return (struct cflayer *) this;
+ }
+ 
++static void cfusbl_release(struct cflayer *layer)
++{
++	kfree(layer);
++}
++
+ static struct packet_type caif_usb_type __read_mostly = {
+ 	.type = cpu_to_be16(ETH_P_802_EX1),
+ };
+@@ -127,6 +132,7 @@ static int cfusbl_device_notify(struct notifier_block *me, unsigned long what,
+ 	struct cflayer *layer, *link_support;
+ 	struct usbnet *usbnet;
+ 	struct usb_device *usbdev;
++	int res;
+ 
+ 	/* Check whether we have a NCM device, and find its VID/PID. */
+ 	if (!(dev->dev.parent && dev->dev.parent->driver &&
+@@ -169,8 +175,11 @@ static int cfusbl_device_notify(struct notifier_block *me, unsigned long what,
+ 	if (dev->num_tx_queues > 1)
+ 		pr_warn("USB device uses more than one tx queue\n");
+ 
+-	caif_enroll_dev(dev, &common, link_support, CFUSB_MAX_HEADLEN,
++	res = caif_enroll_dev(dev, &common, link_support, CFUSB_MAX_HEADLEN,
+ 			&layer, &caif_usb_type.func);
++	if (res)
++		goto err;
++
+ 	if (!pack_added)
+ 		dev_add_pack(&caif_usb_type);
+ 	pack_added = true;
+@@ -178,6 +187,9 @@ static int cfusbl_device_notify(struct notifier_block *me, unsigned long what,
+ 	strlcpy(layer->name, dev->name, sizeof(layer->name));
+ 
+ 	return 0;
++err:
++	cfusbl_release(link_support);
++	return res;
+ }
+ 
+ static struct notifier_block caif_device_notifier = {
+diff --git a/net/caif/cfcnfg.c b/net/caif/cfcnfg.c
+index 399239a14420f..cac30e676ac94 100644
+--- a/net/caif/cfcnfg.c
++++ b/net/caif/cfcnfg.c
+@@ -450,7 +450,7 @@ unlock:
+ 	rcu_read_unlock();
+ }
+ 
+-void
++int
+ cfcnfg_add_phy_layer(struct cfcnfg *cnfg,
+ 		     struct net_device *dev, struct cflayer *phy_layer,
+ 		     enum cfcnfg_phy_preference pref,
+@@ -459,7 +459,7 @@ cfcnfg_add_phy_layer(struct cfcnfg *cnfg,
+ {
+ 	struct cflayer *frml;
+ 	struct cfcnfg_phyinfo *phyinfo = NULL;
+-	int i;
++	int i, res = 0;
+ 	u8 phyid;
+ 
+ 	mutex_lock(&cnfg->lock);
+@@ -473,12 +473,15 @@ cfcnfg_add_phy_layer(struct cfcnfg *cnfg,
+ 			goto got_phyid;
+ 	}
+ 	pr_warn("Too many CAIF Link Layers (max 6)\n");
++	res = -EEXIST;
+ 	goto out;
+ 
+ got_phyid:
+ 	phyinfo = kzalloc(sizeof(struct cfcnfg_phyinfo), GFP_ATOMIC);
+-	if (!phyinfo)
++	if (!phyinfo) {
++		res = -ENOMEM;
+ 		goto out_err;
++	}
+ 
+ 	phy_layer->id = phyid;
+ 	phyinfo->pref = pref;
+@@ -492,8 +495,10 @@ got_phyid:
+ 
+ 	frml = cffrml_create(phyid, fcs);
+ 
+-	if (!frml)
++	if (!frml) {
++		res = -ENOMEM;
+ 		goto out_err;
++	}
+ 	phyinfo->frm_layer = frml;
+ 	layer_set_up(frml, cnfg->mux);
+ 
+@@ -511,11 +516,12 @@ got_phyid:
+ 	list_add_rcu(&phyinfo->node, &cnfg->phys);
+ out:
+ 	mutex_unlock(&cnfg->lock);
+-	return;
++	return res;
+ 
+ out_err:
+ 	kfree(phyinfo);
+ 	mutex_unlock(&cnfg->lock);
++	return res;
+ }
+ EXPORT_SYMBOL(cfcnfg_add_phy_layer);
+ 
+diff --git a/net/caif/cfserl.c b/net/caif/cfserl.c
+index e11725a4bb0ed..40cd57ad0a0f4 100644
+--- a/net/caif/cfserl.c
++++ b/net/caif/cfserl.c
+@@ -31,6 +31,11 @@ static int cfserl_transmit(struct cflayer *layr, struct cfpkt *pkt);
+ static void cfserl_ctrlcmd(struct cflayer *layr, enum caif_ctrlcmd ctrl,
+ 			   int phyid);
+ 
++void cfserl_release(struct cflayer *layer)
++{
++	kfree(layer);
++}
++
+ struct cflayer *cfserl_create(int instance, bool use_stx)
+ {
+ 	struct cfserl *this = kzalloc(sizeof(struct cfserl), GFP_ATOMIC);
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 5d397838bceb6..90badb6f72271 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -693,7 +693,6 @@ static int devlink_nl_port_attrs_put(struct sk_buff *msg,
+ 	case DEVLINK_PORT_FLAVOUR_PHYSICAL:
+ 	case DEVLINK_PORT_FLAVOUR_CPU:
+ 	case DEVLINK_PORT_FLAVOUR_DSA:
+-	case DEVLINK_PORT_FLAVOUR_VIRTUAL:
+ 		if (nla_put_u32(msg, DEVLINK_ATTR_PORT_NUMBER,
+ 				attrs->phys.port_number))
+ 			return -EMSGSIZE;
+@@ -8376,7 +8375,6 @@ static int __devlink_port_phys_port_name_get(struct devlink_port *devlink_port,
+ 
+ 	switch (attrs->flavour) {
+ 	case DEVLINK_PORT_FLAVOUR_PHYSICAL:
+-	case DEVLINK_PORT_FLAVOUR_VIRTUAL:
+ 		if (!attrs->split)
+ 			n = snprintf(name, len, "p%u", attrs->phys.port_number);
+ 		else
+@@ -8413,6 +8411,8 @@ static int __devlink_port_phys_port_name_get(struct devlink_port *devlink_port,
+ 		n = snprintf(name, len, "pf%uvf%u",
+ 			     attrs->pci_vf.pf, attrs->pci_vf.vf);
+ 		break;
++	case DEVLINK_PORT_FLAVOUR_VIRTUAL:
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	if (n >= len)
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index a18c2973b8c6d..c452ebf209394 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -239,6 +239,7 @@ static int neigh_forced_gc(struct neigh_table *tbl)
+ 
+ 			write_lock(&n->lock);
+ 			if ((n->nud_state == NUD_FAILED) ||
++			    (n->nud_state == NUD_NOARP) ||
+ 			    (tbl->is_multicast &&
+ 			     tbl->is_multicast(n->primary_key)) ||
+ 			    time_after(tref, n->updated))
+diff --git a/net/core/sock.c b/net/core/sock.c
+index dee29f41beaf8..7de51ea15cdfc 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -807,10 +807,18 @@ void sock_set_rcvbuf(struct sock *sk, int val)
+ }
+ EXPORT_SYMBOL(sock_set_rcvbuf);
+ 
++static void __sock_set_mark(struct sock *sk, u32 val)
++{
++	if (val != sk->sk_mark) {
++		sk->sk_mark = val;
++		sk_dst_reset(sk);
++	}
++}
++
+ void sock_set_mark(struct sock *sk, u32 val)
+ {
+ 	lock_sock(sk);
+-	sk->sk_mark = val;
++	__sock_set_mark(sk, val);
+ 	release_sock(sk);
+ }
+ EXPORT_SYMBOL(sock_set_mark);
+@@ -1118,10 +1126,10 @@ set_sndbuf:
+ 	case SO_MARK:
+ 		if (!ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN)) {
+ 			ret = -EPERM;
+-		} else if (val != sk->sk_mark) {
+-			sk->sk_mark = val;
+-			sk_dst_reset(sk);
++			break;
+ 		}
++
++		__sock_set_mark(sk, val);
+ 		break;
+ 
+ 	case SO_RXQ_OVFL:
+diff --git a/net/dsa/tag_8021q.c b/net/dsa/tag_8021q.c
+index 8e3e8a5b85593..a00b513c22a1d 100644
+--- a/net/dsa/tag_8021q.c
++++ b/net/dsa/tag_8021q.c
+@@ -64,7 +64,7 @@
+ #define DSA_8021Q_SUBVLAN_HI_SHIFT	9
+ #define DSA_8021Q_SUBVLAN_HI_MASK	GENMASK(9, 9)
+ #define DSA_8021Q_SUBVLAN_LO_SHIFT	4
+-#define DSA_8021Q_SUBVLAN_LO_MASK	GENMASK(4, 3)
++#define DSA_8021Q_SUBVLAN_LO_MASK	GENMASK(5, 4)
+ #define DSA_8021Q_SUBVLAN_HI(x)		(((x) & GENMASK(2, 2)) >> 2)
+ #define DSA_8021Q_SUBVLAN_LO(x)		((x) & GENMASK(1, 0))
+ #define DSA_8021Q_SUBVLAN(x)		\
+diff --git a/net/ieee802154/nl-mac.c b/net/ieee802154/nl-mac.c
+index d19c40c684e80..71be751123210 100644
+--- a/net/ieee802154/nl-mac.c
++++ b/net/ieee802154/nl-mac.c
+@@ -680,8 +680,10 @@ int ieee802154_llsec_getparams(struct sk_buff *skb, struct genl_info *info)
+ 	    nla_put_u8(msg, IEEE802154_ATTR_LLSEC_SECLEVEL, params.out_level) ||
+ 	    nla_put_u32(msg, IEEE802154_ATTR_LLSEC_FRAME_COUNTER,
+ 			be32_to_cpu(params.frame_counter)) ||
+-	    ieee802154_llsec_fill_key_id(msg, &params.out_key))
++	    ieee802154_llsec_fill_key_id(msg, &params.out_key)) {
++		rc = -ENOBUFS;
+ 		goto out_free;
++	}
+ 
+ 	dev_put(dev);
+ 
+diff --git a/net/ieee802154/nl-phy.c b/net/ieee802154/nl-phy.c
+index 2cdc7e63fe172..88215b5c93aa4 100644
+--- a/net/ieee802154/nl-phy.c
++++ b/net/ieee802154/nl-phy.c
+@@ -241,8 +241,10 @@ int ieee802154_add_iface(struct sk_buff *skb, struct genl_info *info)
+ 	}
+ 
+ 	if (nla_put_string(msg, IEEE802154_ATTR_PHY_NAME, wpan_phy_name(phy)) ||
+-	    nla_put_string(msg, IEEE802154_ATTR_DEV_NAME, dev->name))
++	    nla_put_string(msg, IEEE802154_ATTR_DEV_NAME, dev->name)) {
++		rc = -EMSGSIZE;
+ 		goto nla_put_failure;
++	}
+ 	dev_put(dev);
+ 
+ 	wpan_phy_put(phy);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 71e578ed8699f..ccff4738313c1 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3671,11 +3671,11 @@ static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
+ 	if (nh) {
+ 		if (rt->fib6_src.plen) {
+ 			NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing");
+-			goto out;
++			goto out_free;
+ 		}
+ 		if (!nexthop_get(nh)) {
+ 			NL_SET_ERR_MSG(extack, "Nexthop has been deleted");
+-			goto out;
++			goto out_free;
+ 		}
+ 		rt->nh = nh;
+ 		fib6_nh = nexthop_fib6_nh(rt->nh);
+@@ -3712,6 +3712,10 @@ static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
+ out:
+ 	fib6_info_release(rt);
+ 	return ERR_PTR(err);
++out_free:
++	ip_fib_metrics_put(rt->fib6_metrics);
++	kfree(rt);
++	return ERR_PTR(err);
+ }
+ 
+ int ip6_route_add(struct fib6_config *cfg, gfp_t gfp_flags,
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index bdd6af38a9ae3..96b6aca9d0ae7 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -527,21 +527,20 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ 
+ 	/* if the sk is MP_CAPABLE, we try to fetch the client key */
+ 	if (subflow_req->mp_capable) {
+-		if (TCP_SKB_CB(skb)->seq != subflow_req->ssn_offset + 1) {
+-			/* here we can receive and accept an in-window,
+-			 * out-of-order pkt, which will not carry the MP_CAPABLE
+-			 * opt even on mptcp enabled paths
+-			 */
+-			goto create_msk;
+-		}
+-
++		/* we can receive and accept an in-window, out-of-order pkt,
++		 * which may not carry the MP_CAPABLE opt even on mptcp enabled
++		 * paths: always try to extract the peer key, and fallback
++		 * for packets missing it.
++		 * Even OoO DSS packets coming legitly after dropped or
++		 * reordered MPC will cause fallback, but we don't have other
++		 * options.
++		 */
+ 		mptcp_get_options(skb, &mp_opt);
+ 		if (!mp_opt.mp_capable) {
+ 			fallback = true;
+ 			goto create_child;
+ 		}
+ 
+-create_msk:
+ 		new_msk = mptcp_sk_clone(listener->conn, &mp_opt, req);
+ 		if (!new_msk)
+ 			fallback = true;
+diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
+index d45dbcba8b49c..c25097092a060 100644
+--- a/net/netfilter/ipvs/ip_vs_ctl.c
++++ b/net/netfilter/ipvs/ip_vs_ctl.c
+@@ -1367,7 +1367,7 @@ ip_vs_add_service(struct netns_ipvs *ipvs, struct ip_vs_service_user_kern *u,
+ 	ip_vs_addr_copy(svc->af, &svc->addr, &u->addr);
+ 	svc->port = u->port;
+ 	svc->fwmark = u->fwmark;
+-	svc->flags = u->flags;
++	svc->flags = u->flags & ~IP_VS_SVC_F_HASHED;
+ 	svc->timeout = u->timeout * HZ;
+ 	svc->netmask = u->netmask;
+ 	svc->ipvs = ipvs;
+diff --git a/net/netfilter/nf_conntrack_proto.c b/net/netfilter/nf_conntrack_proto.c
+index 47e9319d2cf31..71892822bbf5d 100644
+--- a/net/netfilter/nf_conntrack_proto.c
++++ b/net/netfilter/nf_conntrack_proto.c
+@@ -660,7 +660,7 @@ int nf_conntrack_proto_init(void)
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+ cleanup_sockopt:
+-	nf_unregister_sockopt(&so_getorigdst6);
++	nf_unregister_sockopt(&so_getorigdst);
+ #endif
+ 	return ret;
+ }
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 7bf7bfa0c7d9c..e34d05cc57549 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3263,8 +3263,10 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 			if (n == NFT_RULE_MAXEXPRS)
+ 				goto err1;
+ 			err = nf_tables_expr_parse(&ctx, tmp, &info[n]);
+-			if (err < 0)
++			if (err < 0) {
++				NL_SET_BAD_ATTR(extack, tmp);
+ 				goto err1;
++			}
+ 			size += info[n].ops->size;
+ 			n++;
+ 		}
+diff --git a/net/netfilter/nfnetlink_cthelper.c b/net/netfilter/nfnetlink_cthelper.c
+index 5b0d0a77379c6..91afbf8ac8cf0 100644
+--- a/net/netfilter/nfnetlink_cthelper.c
++++ b/net/netfilter/nfnetlink_cthelper.c
+@@ -380,10 +380,14 @@ static int
+ nfnl_cthelper_update(const struct nlattr * const tb[],
+ 		     struct nf_conntrack_helper *helper)
+ {
++	u32 size;
+ 	int ret;
+ 
+-	if (tb[NFCTH_PRIV_DATA_LEN])
+-		return -EBUSY;
++	if (tb[NFCTH_PRIV_DATA_LEN]) {
++		size = ntohl(nla_get_be32(tb[NFCTH_PRIV_DATA_LEN]));
++		if (size != helper->data_len)
++			return -EBUSY;
++	}
+ 
+ 	if (tb[NFCTH_POLICY]) {
+ 		ret = nfnl_cthelper_update_policy(helper, tb[NFCTH_POLICY]);
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index a1b0aac46e9e0..70d46e0bbf064 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -1218,7 +1218,7 @@ static void nft_ct_expect_obj_eval(struct nft_object *obj,
+ 	struct nf_conn *ct;
+ 
+ 	ct = nf_ct_get(pkt->skb, &ctinfo);
+-	if (!ct || ctinfo == IP_CT_UNTRACKED) {
++	if (!ct || nf_ct_is_confirmed(ct) || nf_ct_is_template(ct)) {
+ 		regs->verdict.code = NFT_BREAK;
+ 		return;
+ 	}
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index 53dbe733f9981..6cfd30fc07985 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -110,6 +110,7 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
+ 	if (!llcp_sock->service_name) {
+ 		nfc_llcp_local_put(llcp_sock->local);
+ 		llcp_sock->local = NULL;
++		llcp_sock->dev = NULL;
+ 		ret = -ENOMEM;
+ 		goto put_dev;
+ 	}
+@@ -119,6 +120,7 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
+ 		llcp_sock->local = NULL;
+ 		kfree(llcp_sock->service_name);
+ 		llcp_sock->service_name = NULL;
++		llcp_sock->dev = NULL;
+ 		ret = -EADDRINUSE;
+ 		goto put_dev;
+ 	}
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index aba3cd85f284f..315a5b2f3add8 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -979,7 +979,7 @@ static int tcf_ct_act(struct sk_buff *skb, const struct tc_action *a,
+ 	 */
+ 	cached = tcf_ct_skb_nfct_cached(net, skb, p->zone, force);
+ 	if (!cached) {
+-		if (!commit && tcf_ct_flow_table_lookup(p, skb, family)) {
++		if (tcf_ct_flow_table_lookup(p, skb, family)) {
+ 			skip_add = true;
+ 			goto do_nat;
+ 		}
+@@ -1019,10 +1019,11 @@ do_nat:
+ 		 * even if the connection is already confirmed.
+ 		 */
+ 		nf_conntrack_confirm(skb);
+-	} else if (!skip_add) {
+-		tcf_ct_flow_table_process_conn(p->ct_ft, ct, ctinfo);
+ 	}
+ 
++	if (!skip_add)
++		tcf_ct_flow_table_process_conn(p->ct_ft, ct, ctinfo);
++
+ out_push:
+ 	skb_push_rcsum(skb, nh_ofs);
+ 
+@@ -1198,9 +1199,6 @@ static int tcf_ct_fill_params(struct net *net,
+ 				   sizeof(p->zone));
+ 	}
+ 
+-	if (p->zone == NF_CT_DEFAULT_ZONE_ID)
+-		return 0;
+-
+ 	nf_ct_zone_init(&zone, p->zone, NF_CT_DEFAULT_ZONE_DIR, 0);
+ 	tmpl = nf_ct_tmpl_alloc(net, &zone, GFP_KERNEL);
+ 	if (!tmpl) {
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index 6504141104521..12e535b43d887 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -234,7 +234,8 @@ void tipc_bearer_remove_dest(struct net *net, u32 bearer_id, u32 dest)
+  */
+ static int tipc_enable_bearer(struct net *net, const char *name,
+ 			      u32 disc_domain, u32 prio,
+-			      struct nlattr *attr[])
++			      struct nlattr *attr[],
++			      struct netlink_ext_ack *extack)
+ {
+ 	struct tipc_net *tn = tipc_net(net);
+ 	struct tipc_bearer_names b_names;
+@@ -245,20 +246,24 @@ static int tipc_enable_bearer(struct net *net, const char *name,
+ 	int bearer_id = 0;
+ 	int res = -EINVAL;
+ 	char *errstr = "";
++	u32 i;
+ 
+ 	if (!bearer_name_validate(name, &b_names)) {
+ 		errstr = "illegal name";
++		NL_SET_ERR_MSG(extack, "Illegal name");
+ 		goto rejected;
+ 	}
+ 
+ 	if (prio > TIPC_MAX_LINK_PRI && prio != TIPC_MEDIA_LINK_PRI) {
+ 		errstr = "illegal priority";
++		NL_SET_ERR_MSG(extack, "Illegal priority");
+ 		goto rejected;
+ 	}
+ 
+ 	m = tipc_media_find(b_names.media_name);
+ 	if (!m) {
+ 		errstr = "media not registered";
++		NL_SET_ERR_MSG(extack, "Media not registered");
+ 		goto rejected;
+ 	}
+ 
+@@ -266,33 +271,43 @@ static int tipc_enable_bearer(struct net *net, const char *name,
+ 		prio = m->priority;
+ 
+ 	/* Check new bearer vs existing ones and find free bearer id if any */
+-	while (bearer_id < MAX_BEARERS) {
+-		b = rtnl_dereference(tn->bearer_list[bearer_id]);
+-		if (!b)
+-			break;
++	bearer_id = MAX_BEARERS;
++	i = MAX_BEARERS;
++	while (i-- != 0) {
++		b = rtnl_dereference(tn->bearer_list[i]);
++		if (!b) {
++			bearer_id = i;
++			continue;
++		}
+ 		if (!strcmp(name, b->name)) {
+ 			errstr = "already enabled";
++			NL_SET_ERR_MSG(extack, "Already enabled");
+ 			goto rejected;
+ 		}
+-		bearer_id++;
+-		if (b->priority != prio)
+-			continue;
+-		if (++with_this_prio <= 2)
+-			continue;
+-		pr_warn("Bearer <%s>: already 2 bearers with priority %u\n",
+-			name, prio);
+-		if (prio == TIPC_MIN_LINK_PRI) {
+-			errstr = "cannot adjust to lower";
+-			goto rejected;
++
++		if (b->priority == prio &&
++		    (++with_this_prio > 2)) {
++			pr_warn("Bearer <%s>: already 2 bearers with priority %u\n",
++				name, prio);
++
++			if (prio == TIPC_MIN_LINK_PRI) {
++				errstr = "cannot adjust to lower";
++				NL_SET_ERR_MSG(extack, "Cannot adjust to lower");
++				goto rejected;
++			}
++
++			pr_warn("Bearer <%s>: trying with adjusted priority\n",
++				name);
++			prio--;
++			bearer_id = MAX_BEARERS;
++			i = MAX_BEARERS;
++			with_this_prio = 1;
+ 		}
+-		pr_warn("Bearer <%s>: trying with adjusted priority\n", name);
+-		prio--;
+-		bearer_id = 0;
+-		with_this_prio = 1;
+ 	}
+ 
+ 	if (bearer_id >= MAX_BEARERS) {
+ 		errstr = "max 3 bearers permitted";
++		NL_SET_ERR_MSG(extack, "Max 3 bearers permitted");
+ 		goto rejected;
+ 	}
+ 
+@@ -306,6 +321,7 @@ static int tipc_enable_bearer(struct net *net, const char *name,
+ 	if (res) {
+ 		kfree(b);
+ 		errstr = "failed to enable media";
++		NL_SET_ERR_MSG(extack, "Failed to enable media");
+ 		goto rejected;
+ 	}
+ 
+@@ -322,6 +338,7 @@ static int tipc_enable_bearer(struct net *net, const char *name,
+ 	if (res) {
+ 		bearer_disable(net, b);
+ 		errstr = "failed to create discoverer";
++		NL_SET_ERR_MSG(extack, "Failed to create discoverer");
+ 		goto rejected;
+ 	}
+ 
+@@ -894,6 +911,7 @@ int tipc_nl_bearer_get(struct sk_buff *skb, struct genl_info *info)
+ 	bearer = tipc_bearer_find(net, name);
+ 	if (!bearer) {
+ 		err = -EINVAL;
++		NL_SET_ERR_MSG(info->extack, "Bearer not found");
+ 		goto err_out;
+ 	}
+ 
+@@ -933,8 +951,10 @@ int __tipc_nl_bearer_disable(struct sk_buff *skb, struct genl_info *info)
+ 	name = nla_data(attrs[TIPC_NLA_BEARER_NAME]);
+ 
+ 	bearer = tipc_bearer_find(net, name);
+-	if (!bearer)
++	if (!bearer) {
++		NL_SET_ERR_MSG(info->extack, "Bearer not found");
+ 		return -EINVAL;
++	}
+ 
+ 	bearer_disable(net, bearer);
+ 
+@@ -992,7 +1012,8 @@ int __tipc_nl_bearer_enable(struct sk_buff *skb, struct genl_info *info)
+ 			prio = nla_get_u32(props[TIPC_NLA_PROP_PRIO]);
+ 	}
+ 
+-	return tipc_enable_bearer(net, bearer, domain, prio, attrs);
++	return tipc_enable_bearer(net, bearer, domain, prio, attrs,
++				  info->extack);
+ }
+ 
+ int tipc_nl_bearer_enable(struct sk_buff *skb, struct genl_info *info)
+@@ -1031,6 +1052,7 @@ int tipc_nl_bearer_add(struct sk_buff *skb, struct genl_info *info)
+ 	b = tipc_bearer_find(net, name);
+ 	if (!b) {
+ 		rtnl_unlock();
++		NL_SET_ERR_MSG(info->extack, "Bearer not found");
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1071,8 +1093,10 @@ int __tipc_nl_bearer_set(struct sk_buff *skb, struct genl_info *info)
+ 	name = nla_data(attrs[TIPC_NLA_BEARER_NAME]);
+ 
+ 	b = tipc_bearer_find(net, name);
+-	if (!b)
++	if (!b) {
++		NL_SET_ERR_MSG(info->extack, "Bearer not found");
+ 		return -EINVAL;
++	}
+ 
+ 	if (attrs[TIPC_NLA_BEARER_PROP]) {
+ 		struct nlattr *props[TIPC_NLA_PROP_MAX + 1];
+@@ -1091,12 +1115,18 @@ int __tipc_nl_bearer_set(struct sk_buff *skb, struct genl_info *info)
+ 		if (props[TIPC_NLA_PROP_WIN])
+ 			b->max_win = nla_get_u32(props[TIPC_NLA_PROP_WIN]);
+ 		if (props[TIPC_NLA_PROP_MTU]) {
+-			if (b->media->type_id != TIPC_MEDIA_TYPE_UDP)
++			if (b->media->type_id != TIPC_MEDIA_TYPE_UDP) {
++				NL_SET_ERR_MSG(info->extack,
++					       "MTU property is unsupported");
+ 				return -EINVAL;
++			}
+ #ifdef CONFIG_TIPC_MEDIA_UDP
+ 			if (tipc_udp_mtu_bad(nla_get_u32
+-					     (props[TIPC_NLA_PROP_MTU])))
++					     (props[TIPC_NLA_PROP_MTU]))) {
++				NL_SET_ERR_MSG(info->extack,
++					       "MTU value is out-of-range");
+ 				return -EINVAL;
++			}
+ 			b->mtu = nla_get_u32(props[TIPC_NLA_PROP_MTU]);
+ 			tipc_node_apply_property(net, b, TIPC_NLA_PROP_MTU);
+ #endif
+@@ -1224,6 +1254,7 @@ int tipc_nl_media_get(struct sk_buff *skb, struct genl_info *info)
+ 	rtnl_lock();
+ 	media = tipc_media_find(name);
+ 	if (!media) {
++		NL_SET_ERR_MSG(info->extack, "Media not found");
+ 		err = -EINVAL;
+ 		goto err_out;
+ 	}
+@@ -1260,9 +1291,10 @@ int __tipc_nl_media_set(struct sk_buff *skb, struct genl_info *info)
+ 	name = nla_data(attrs[TIPC_NLA_MEDIA_NAME]);
+ 
+ 	m = tipc_media_find(name);
+-	if (!m)
++	if (!m) {
++		NL_SET_ERR_MSG(info->extack, "Media not found");
+ 		return -EINVAL;
+-
++	}
+ 	if (attrs[TIPC_NLA_MEDIA_PROP]) {
+ 		struct nlattr *props[TIPC_NLA_PROP_MAX + 1];
+ 
+@@ -1278,12 +1310,18 @@ int __tipc_nl_media_set(struct sk_buff *skb, struct genl_info *info)
+ 		if (props[TIPC_NLA_PROP_WIN])
+ 			m->max_win = nla_get_u32(props[TIPC_NLA_PROP_WIN]);
+ 		if (props[TIPC_NLA_PROP_MTU]) {
+-			if (m->type_id != TIPC_MEDIA_TYPE_UDP)
++			if (m->type_id != TIPC_MEDIA_TYPE_UDP) {
++				NL_SET_ERR_MSG(info->extack,
++					       "MTU property is unsupported");
+ 				return -EINVAL;
++			}
+ #ifdef CONFIG_TIPC_MEDIA_UDP
+ 			if (tipc_udp_mtu_bad(nla_get_u32
+-					     (props[TIPC_NLA_PROP_MTU])))
++					     (props[TIPC_NLA_PROP_MTU]))) {
++				NL_SET_ERR_MSG(info->extack,
++					       "MTU value is out-of-range");
+ 				return -EINVAL;
++			}
+ 			m->mtu = nla_get_u32(props[TIPC_NLA_PROP_MTU]);
+ #endif
+ 		}
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index a3ab2d3d4e4ea..f718c7346088f 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -50,6 +50,7 @@ static void tls_device_gc_task(struct work_struct *work);
+ static DECLARE_WORK(tls_device_gc_work, tls_device_gc_task);
+ static LIST_HEAD(tls_device_gc_list);
+ static LIST_HEAD(tls_device_list);
++static LIST_HEAD(tls_device_down_list);
+ static DEFINE_SPINLOCK(tls_device_lock);
+ 
+ static void tls_device_free_ctx(struct tls_context *ctx)
+@@ -680,15 +681,13 @@ static void tls_device_resync_rx(struct tls_context *tls_ctx,
+ 	struct tls_offload_context_rx *rx_ctx = tls_offload_ctx_rx(tls_ctx);
+ 	struct net_device *netdev;
+ 
+-	if (WARN_ON(test_and_set_bit(TLS_RX_SYNC_RUNNING, &tls_ctx->flags)))
+-		return;
+-
+ 	trace_tls_device_rx_resync_send(sk, seq, rcd_sn, rx_ctx->resync_type);
++	rcu_read_lock();
+ 	netdev = READ_ONCE(tls_ctx->netdev);
+ 	if (netdev)
+ 		netdev->tlsdev_ops->tls_dev_resync(netdev, sk, seq, rcd_sn,
+ 						   TLS_OFFLOAD_CTX_DIR_RX);
+-	clear_bit_unlock(TLS_RX_SYNC_RUNNING, &tls_ctx->flags);
++	rcu_read_unlock();
+ 	TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSRXDEVICERESYNC);
+ }
+ 
+@@ -761,6 +760,8 @@ void tls_device_rx_resync_new_rec(struct sock *sk, u32 rcd_len, u32 seq)
+ 
+ 	if (tls_ctx->rx_conf != TLS_HW)
+ 		return;
++	if (unlikely(test_bit(TLS_RX_DEV_DEGRADED, &tls_ctx->flags)))
++		return;
+ 
+ 	prot = &tls_ctx->prot_info;
+ 	rx_ctx = tls_offload_ctx_rx(tls_ctx);
+@@ -963,6 +964,17 @@ int tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx,
+ 
+ 	ctx->sw.decrypted |= is_decrypted;
+ 
++	if (unlikely(test_bit(TLS_RX_DEV_DEGRADED, &tls_ctx->flags))) {
++		if (likely(is_encrypted || is_decrypted))
++			return 0;
++
++		/* After tls_device_down disables the offload, the next SKB will
++		 * likely have initial fragments decrypted, and final ones not
++		 * decrypted. We need to reencrypt that single SKB.
++		 */
++		return tls_device_reencrypt(sk, skb);
++	}
++
+ 	/* Return immediately if the record is either entirely plaintext or
+ 	 * entirely ciphertext. Otherwise handle reencrypt partially decrypted
+ 	 * record.
+@@ -1290,6 +1302,26 @@ static int tls_device_down(struct net_device *netdev)
+ 	spin_unlock_irqrestore(&tls_device_lock, flags);
+ 
+ 	list_for_each_entry_safe(ctx, tmp, &list, list)	{
++		/* Stop offloaded TX and switch to the fallback.
++		 * tls_is_sk_tx_device_offloaded will return false.
++		 */
++		WRITE_ONCE(ctx->sk->sk_validate_xmit_skb, tls_validate_xmit_skb_sw);
++
++		/* Stop the RX and TX resync.
++		 * tls_dev_resync must not be called after tls_dev_del.
++		 */
++		WRITE_ONCE(ctx->netdev, NULL);
++
++		/* Start skipping the RX resync logic completely. */
++		set_bit(TLS_RX_DEV_DEGRADED, &ctx->flags);
++
++		/* Sync with inflight packets. After this point:
++		 * TX: no non-encrypted packets will be passed to the driver.
++		 * RX: resync requests from the driver will be ignored.
++		 */
++		synchronize_net();
++
++		/* Release the offload context on the driver side. */
+ 		if (ctx->tx_conf == TLS_HW)
+ 			netdev->tlsdev_ops->tls_dev_del(netdev, ctx,
+ 							TLS_OFFLOAD_CTX_DIR_TX);
+@@ -1297,15 +1329,21 @@ static int tls_device_down(struct net_device *netdev)
+ 		    !test_bit(TLS_RX_DEV_CLOSED, &ctx->flags))
+ 			netdev->tlsdev_ops->tls_dev_del(netdev, ctx,
+ 							TLS_OFFLOAD_CTX_DIR_RX);
+-		WRITE_ONCE(ctx->netdev, NULL);
+-		smp_mb__before_atomic(); /* pairs with test_and_set_bit() */
+-		while (test_bit(TLS_RX_SYNC_RUNNING, &ctx->flags))
+-			usleep_range(10, 200);
++
+ 		dev_put(netdev);
+-		list_del_init(&ctx->list);
+ 
+-		if (refcount_dec_and_test(&ctx->refcount))
+-			tls_device_free_ctx(ctx);
++		/* Move the context to a separate list for two reasons:
++		 * 1. When the context is deallocated, list_del is called.
++		 * 2. It's no longer an offloaded context, so we don't want to
++		 *    run offload-specific code on this context.
++		 */
++		spin_lock_irqsave(&tls_device_lock, flags);
++		list_move_tail(&ctx->list, &tls_device_down_list);
++		spin_unlock_irqrestore(&tls_device_lock, flags);
++
++		/* Device contexts for RX and TX will be freed in on sk_destruct
++		 * by tls_device_free_ctx. rx_conf and tx_conf stay in TLS_HW.
++		 */
+ 	}
+ 
+ 	up_write(&device_offload_lock);
+diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
+index 28895333701e4..0d40016bf69e0 100644
+--- a/net/tls/tls_device_fallback.c
++++ b/net/tls/tls_device_fallback.c
+@@ -430,6 +430,13 @@ struct sk_buff *tls_validate_xmit_skb(struct sock *sk,
+ }
+ EXPORT_SYMBOL_GPL(tls_validate_xmit_skb);
+ 
++struct sk_buff *tls_validate_xmit_skb_sw(struct sock *sk,
++					 struct net_device *dev,
++					 struct sk_buff *skb)
++{
++	return tls_sw_fallback(sk, skb);
++}
++
+ struct sk_buff *tls_encrypt_skb(struct sk_buff *skb)
+ {
+ 	return tls_sw_fallback(skb->sk, skb);
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 8d93cea99f2cb..32a51b20509c9 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -633,6 +633,7 @@ struct tls_context *tls_ctx_create(struct sock *sk)
+ 	mutex_init(&ctx->tx_lock);
+ 	rcu_assign_pointer(icsk->icsk_ulp_data, ctx);
+ 	ctx->sk_proto = READ_ONCE(sk->sk_prot);
++	ctx->sk = sk;
+ 	return ctx;
+ }
+ 
+diff --git a/samples/vfio-mdev/mdpy-fb.c b/samples/vfio-mdev/mdpy-fb.c
+index 21dbf63d6e415..9ec93d90e8a5a 100644
+--- a/samples/vfio-mdev/mdpy-fb.c
++++ b/samples/vfio-mdev/mdpy-fb.c
+@@ -117,22 +117,27 @@ static int mdpy_fb_probe(struct pci_dev *pdev,
+ 	if (format != DRM_FORMAT_XRGB8888) {
+ 		pci_err(pdev, "format mismatch (0x%x != 0x%x)\n",
+ 			format, DRM_FORMAT_XRGB8888);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_release_regions;
+ 	}
+ 	if (width < 100	 || width > 10000) {
+ 		pci_err(pdev, "width (%d) out of range\n", width);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_release_regions;
+ 	}
+ 	if (height < 100 || height > 10000) {
+ 		pci_err(pdev, "height (%d) out of range\n", height);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_release_regions;
+ 	}
+ 	pci_info(pdev, "mdpy found: %dx%d framebuffer\n",
+ 		 width, height);
+ 
+ 	info = framebuffer_alloc(sizeof(struct mdpy_fb_par), &pdev->dev);
+-	if (!info)
++	if (!info) {
++		ret = -ENOMEM;
+ 		goto err_release_regions;
++	}
+ 	pci_set_drvdata(pdev, info);
+ 	par = info->par;
+ 
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index 765ea66665a8c..c15c8314671b7 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -520,9 +520,10 @@ static void snd_timer_notify1(struct snd_timer_instance *ti, int event)
+ 		return;
+ 	if (timer->hw.flags & SNDRV_TIMER_HW_SLAVE)
+ 		return;
++	event += 10; /* convert to SNDRV_TIMER_EVENT_MXXX */
+ 	list_for_each_entry(ts, &ti->slave_active_head, active_list)
+ 		if (ts->ccallback)
+-			ts->ccallback(ts, event + 100, &tstamp, resolution);
++			ts->ccallback(ts, event, &tstamp, resolution);
+ }
+ 
+ /* start/continue a master timer */
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index eec1775dfffe9..4cec1bd77e6fe 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2973,6 +2973,7 @@ static int hda_codec_runtime_resume(struct device *dev)
+ #ifdef CONFIG_PM_SLEEP
+ static int hda_codec_pm_prepare(struct device *dev)
+ {
++	dev->power.power_state = PMSG_SUSPEND;
+ 	return pm_runtime_suspended(dev);
+ }
+ 
+@@ -2980,6 +2981,10 @@ static void hda_codec_pm_complete(struct device *dev)
+ {
+ 	struct hda_codec *codec = dev_to_hda_codec(dev);
+ 
++	/* If no other pm-functions are called between prepare() and complete() */
++	if (dev->power.power_state.event == PM_EVENT_SUSPEND)
++		dev->power.power_state = PMSG_RESUME;
++
+ 	if (pm_runtime_suspended(dev) && (codec->jackpoll_interval ||
+ 	    hda_codec_need_resume(codec) || codec->forced_resume))
+ 		pm_request_resume(dev);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d8424d226714f..cc13a68197f3c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8289,6 +8289,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x82bf, "HP G3 mini", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x82c0, "HP G3 mini premium", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++	SND_PCI_QUIRK(0x103c, 0x841c, "HP Pavilion 15-CK0xx", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+diff --git a/tools/perf/util/dwarf-aux.c b/tools/perf/util/dwarf-aux.c
+index 7b2d471a6419d..4343356f3cf9a 100644
+--- a/tools/perf/util/dwarf-aux.c
++++ b/tools/perf/util/dwarf-aux.c
+@@ -975,9 +975,13 @@ static int __die_find_variable_cb(Dwarf_Die *die_mem, void *data)
+ 	if ((tag == DW_TAG_formal_parameter ||
+ 	     tag == DW_TAG_variable) &&
+ 	    die_compare_name(die_mem, fvp->name) &&
+-	/* Does the DIE have location information or external instance? */
++	/*
++	 * Does the DIE have location information or const value
++	 * or external instance?
++	 */
+ 	    (dwarf_attr(die_mem, DW_AT_external, &attr) ||
+-	     dwarf_attr(die_mem, DW_AT_location, &attr)))
++	     dwarf_attr(die_mem, DW_AT_location, &attr) ||
++	     dwarf_attr(die_mem, DW_AT_const_value, &attr)))
+ 		return DIE_FIND_CB_END;
+ 	if (dwarf_haspc(die_mem, fvp->addr))
+ 		return DIE_FIND_CB_CONTINUE;
+diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
+index 76dd349aa48d8..fdafbfcef6871 100644
+--- a/tools/perf/util/probe-finder.c
++++ b/tools/perf/util/probe-finder.c
+@@ -190,6 +190,9 @@ static int convert_variable_location(Dwarf_Die *vr_die, Dwarf_Addr addr,
+ 	    immediate_value_is_supported()) {
+ 		Dwarf_Sword snum;
+ 
++		if (!tvar)
++			return 0;
++
+ 		dwarf_formsdata(&attr, &snum);
+ 		ret = asprintf(&tvar->value, "\\%ld", (long)snum);
+ 
+diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh
+index 7ed7cd95e58fe..ebc4ee0fe179f 100755
+--- a/tools/testing/selftests/wireguard/netns.sh
++++ b/tools/testing/selftests/wireguard/netns.sh
+@@ -363,6 +363,7 @@ ip1 -6 rule add table main suppress_prefixlength 0
+ ip1 -4 route add default dev wg0 table 51820
+ ip1 -4 rule add not fwmark 51820 table 51820
+ ip1 -4 rule add table main suppress_prefixlength 0
++n1 bash -c 'printf 0 > /proc/sys/net/ipv4/conf/vethc/rp_filter'
+ # Flood the pings instead of sending just one, to trigger routing table reference counting bugs.
+ n1 ping -W 1 -c 100 -f 192.168.99.7
+ n1 ping -W 1 -c 100 -f abab::1111
+diff --git a/tools/testing/selftests/wireguard/qemu/kernel.config b/tools/testing/selftests/wireguard/qemu/kernel.config
+index 4eecb432a66c1..74db83a0aedd8 100644
+--- a/tools/testing/selftests/wireguard/qemu/kernel.config
++++ b/tools/testing/selftests/wireguard/qemu/kernel.config
+@@ -19,7 +19,6 @@ CONFIG_NETFILTER_XTABLES=y
+ CONFIG_NETFILTER_XT_NAT=y
+ CONFIG_NETFILTER_XT_MATCH_LENGTH=y
+ CONFIG_NETFILTER_XT_MARK=y
+-CONFIG_NF_CONNTRACK_IPV4=y
+ CONFIG_NF_NAT_IPV4=y
+ CONFIG_IP_NF_IPTABLES=y
+ CONFIG_IP_NF_FILTER=y


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-06-10 13:14 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-06-10 13:14 UTC (permalink / raw
  To: gentoo-commits

commit:     7d2f5cd45d7ee4d8067e089f463f822329ff3741
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jun 10 13:13:28 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jun 10 13:13:28 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7d2f5cd4

Updated corrected Kernel Self Protection config patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 635de00..4eee26b 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -170,16 +170,16 @@
 +	visible if GENTOO_LINUX
 +
 +config GENTOO_KERNEL_SELF_PROTECTION
-+	bool "Architecture Independent Kernel Self Protection Project Recommendations"
++	bool "Architecture Independant Kernel Self Protection Project Recommendations"
 +
 +	help
-+  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
-+		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
-+		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due
-+		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for
-+		dependency information on your specific architecture.
-+		Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
-+		for X86_64
++  Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
++	See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
++	Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
++	to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for 
++	dependency information on your specific architecture.
++	Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
++	for X86_64
 +
 +	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL
 +
@@ -214,7 +214,7 @@
 +	select FORTIFY_SOURCE
 +	select SECURITY_DMESG_RESTRICT
 +	select PANIC_ON_OOPS
-+	select CONFIG_GCC_PLUGINS
++	select CONFIG_GCC_PLUGINS=y
 +	select GCC_PLUGIN_LATENT_ENTROPY
 +	select GCC_PLUGIN_STRUCTLEAK
 +	select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL
@@ -233,7 +233,7 @@
 +	select RANDOMIZE_BASE
 +	select RANDOMIZE_MEMORY
 +	select LEGACY_VSYSCALL_NONE
-+ 	select PAGE_TABLE_ISOLATION
++ select PAGE_TABLE_ISOLATION
 +
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_ARM64


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-06-11 17:34 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-06-11 17:34 UTC (permalink / raw
  To: gentoo-commits

commit:     cbcbe56bc8beb2afe161042e2d5beaef40693a99
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jun 11 17:33:34 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jun 11 17:33:34 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cbcbe56b

Fix type in KSP Patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 4eee26b..b671313 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -214,7 +214,7 @@
 +	select FORTIFY_SOURCE
 +	select SECURITY_DMESG_RESTRICT
 +	select PANIC_ON_OOPS
-+	select CONFIG_GCC_PLUGINS=y
++	select CONFIG_GCC_PLUGINS
 +	select GCC_PLUGIN_LATENT_ENTROPY
 +	select GCC_PLUGIN_STRUCTLEAK
 +	select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-06-16 12:24 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-06-16 12:24 UTC (permalink / raw
  To: gentoo-commits

commit:     7d3885d1402609ffb18ce792e92b299368ad942c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 16 12:24:12 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 16 12:24:12 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7d3885d1

Linux patch 5.10.44

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1043_linux-5.10.44.patch | 3608 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3612 insertions(+)

diff --git a/0000_README b/0000_README
index f258b9d..e96b155 100644
--- a/0000_README
+++ b/0000_README
@@ -215,6 +215,10 @@ Patch:  1042_linux-5.10.43.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.43
 
+Patch:  1043_linux-5.10.44.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.44
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1043_linux-5.10.44.patch b/1043_linux-5.10.44.patch
new file mode 100644
index 0000000..c7232fd
--- /dev/null
+++ b/1043_linux-5.10.44.patch
@@ -0,0 +1,3608 @@
+diff --git a/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml b/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
+index db61f0731a203..2e35aeaa8781d 100644
+--- a/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
++++ b/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
+@@ -57,7 +57,7 @@ patternProperties:
+           rate
+ 
+       sound-dai:
+-        $ref: /schemas/types.yaml#/definitions/phandle
++        $ref: /schemas/types.yaml#/definitions/phandle-array
+         description: phandle of the CPU DAI
+ 
+     patternProperties:
+@@ -71,7 +71,7 @@ patternProperties:
+ 
+         properties:
+           sound-dai:
+-            $ref: /schemas/types.yaml#/definitions/phandle
++            $ref: /schemas/types.yaml#/definitions/phandle-array
+             description: phandle of the codec DAI
+ 
+         required:
+diff --git a/Documentation/virt/kvm/mmu.rst b/Documentation/virt/kvm/mmu.rst
+index 5bfe28b0728e8..20d85daed395e 100644
+--- a/Documentation/virt/kvm/mmu.rst
++++ b/Documentation/virt/kvm/mmu.rst
+@@ -171,8 +171,8 @@ Shadow pages contain the following information:
+     shadow pages) so role.quadrant takes values in the range 0..3.  Each
+     quadrant maps 1GB virtual address space.
+   role.access:
+-    Inherited guest access permissions in the form uwx.  Note execute
+-    permission is positive, not negative.
++    Inherited guest access permissions from the parent ptes in the form uwx.
++    Note execute permission is positive, not negative.
+   role.invalid:
+     The page is invalid and should not be used.  It is a root page that is
+     currently pinned (by a cpu hardware register pointing to it); once it is
+diff --git a/Makefile b/Makefile
+index ec9ee8032a985..ae33e048eb8db 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 43
++SUBLEVEL = 44
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/include/asm/cpuidle.h b/arch/arm/include/asm/cpuidle.h
+index 0d67ed682e077..bc4ffa7ca04c7 100644
+--- a/arch/arm/include/asm/cpuidle.h
++++ b/arch/arm/include/asm/cpuidle.h
+@@ -7,9 +7,11 @@
+ #ifdef CONFIG_CPU_IDLE
+ extern int arm_cpuidle_simple_enter(struct cpuidle_device *dev,
+ 		struct cpuidle_driver *drv, int index);
++#define __cpuidle_method_section __used __section("__cpuidle_method_of_table")
+ #else
+ static inline int arm_cpuidle_simple_enter(struct cpuidle_device *dev,
+ 		struct cpuidle_driver *drv, int index) { return -ENODEV; }
++#define __cpuidle_method_section __maybe_unused /* drop silently */
+ #endif
+ 
+ /* Common ARM WFI state */
+@@ -42,8 +44,7 @@ struct of_cpuidle_method {
+ 
+ #define CPUIDLE_METHOD_OF_DECLARE(name, _method, _ops)			\
+ 	static const struct of_cpuidle_method __cpuidle_method_of_table_##name \
+-	__used __section("__cpuidle_method_of_table")			\
+-	= { .method = _method, .ops = _ops }
++	__cpuidle_method_section = { .method = _method, .ops = _ops }
+ 
+ extern int arm_cpuidle_suspend(int index);
+ 
+diff --git a/arch/mips/lib/mips-atomic.c b/arch/mips/lib/mips-atomic.c
+index de03838b343b8..a9b72eacfc0b3 100644
+--- a/arch/mips/lib/mips-atomic.c
++++ b/arch/mips/lib/mips-atomic.c
+@@ -37,7 +37,7 @@
+  */
+ notrace void arch_local_irq_disable(void)
+ {
+-	preempt_disable();
++	preempt_disable_notrace();
+ 
+ 	__asm__ __volatile__(
+ 	"	.set	push						\n"
+@@ -53,7 +53,7 @@ notrace void arch_local_irq_disable(void)
+ 	: /* no inputs */
+ 	: "memory");
+ 
+-	preempt_enable();
++	preempt_enable_notrace();
+ }
+ EXPORT_SYMBOL(arch_local_irq_disable);
+ 
+@@ -61,7 +61,7 @@ notrace unsigned long arch_local_irq_save(void)
+ {
+ 	unsigned long flags;
+ 
+-	preempt_disable();
++	preempt_disable_notrace();
+ 
+ 	__asm__ __volatile__(
+ 	"	.set	push						\n"
+@@ -78,7 +78,7 @@ notrace unsigned long arch_local_irq_save(void)
+ 	: /* no inputs */
+ 	: "memory");
+ 
+-	preempt_enable();
++	preempt_enable_notrace();
+ 
+ 	return flags;
+ }
+@@ -88,7 +88,7 @@ notrace void arch_local_irq_restore(unsigned long flags)
+ {
+ 	unsigned long __tmp1;
+ 
+-	preempt_disable();
++	preempt_disable_notrace();
+ 
+ 	__asm__ __volatile__(
+ 	"	.set	push						\n"
+@@ -106,7 +106,7 @@ notrace void arch_local_irq_restore(unsigned long flags)
+ 	: "0" (flags)
+ 	: "memory");
+ 
+-	preempt_enable();
++	preempt_enable_notrace();
+ }
+ EXPORT_SYMBOL(arch_local_irq_restore);
+ 
+diff --git a/arch/powerpc/boot/dts/fsl/p1010si-post.dtsi b/arch/powerpc/boot/dts/fsl/p1010si-post.dtsi
+index 1b4aafc1f6a27..9716a0484ecf0 100644
+--- a/arch/powerpc/boot/dts/fsl/p1010si-post.dtsi
++++ b/arch/powerpc/boot/dts/fsl/p1010si-post.dtsi
+@@ -122,7 +122,15 @@
+ 	};
+ 
+ /include/ "pq3-i2c-0.dtsi"
++	i2c@3000 {
++		fsl,i2c-erratum-a004447;
++	};
++
+ /include/ "pq3-i2c-1.dtsi"
++	i2c@3100 {
++		fsl,i2c-erratum-a004447;
++	};
++
+ /include/ "pq3-duart-0.dtsi"
+ /include/ "pq3-espi-0.dtsi"
+ 	spi0: spi@7000 {
+diff --git a/arch/powerpc/boot/dts/fsl/p2041si-post.dtsi b/arch/powerpc/boot/dts/fsl/p2041si-post.dtsi
+index 872e4485dc3f0..ddc018d42252f 100644
+--- a/arch/powerpc/boot/dts/fsl/p2041si-post.dtsi
++++ b/arch/powerpc/boot/dts/fsl/p2041si-post.dtsi
+@@ -371,7 +371,23 @@
+ 	};
+ 
+ /include/ "qoriq-i2c-0.dtsi"
++	i2c@118000 {
++		fsl,i2c-erratum-a004447;
++	};
++
++	i2c@118100 {
++		fsl,i2c-erratum-a004447;
++	};
++
+ /include/ "qoriq-i2c-1.dtsi"
++	i2c@119000 {
++		fsl,i2c-erratum-a004447;
++	};
++
++	i2c@119100 {
++		fsl,i2c-erratum-a004447;
++	};
++
+ /include/ "qoriq-duart-0.dtsi"
+ /include/ "qoriq-duart-1.dtsi"
+ /include/ "qoriq-gpio-0.dtsi"
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 3112186a4f4b2..16159950fcf5b 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -5067,9 +5067,10 @@ static struct intel_uncore_type icx_uncore_m2m = {
+ 	.perf_ctr	= SNR_M2M_PCI_PMON_CTR0,
+ 	.event_ctl	= SNR_M2M_PCI_PMON_CTL0,
+ 	.event_mask	= SNBEP_PMON_RAW_EVENT_MASK,
++	.event_mask_ext	= SNR_M2M_PCI_PMON_UMASK_EXT,
+ 	.box_ctl	= SNR_M2M_PCI_PMON_BOX_CTL,
+ 	.ops		= &snr_m2m_uncore_pci_ops,
+-	.format_group	= &skx_uncore_format_group,
++	.format_group	= &snr_m2m_uncore_format_group,
+ };
+ 
+ static struct attribute *icx_upi_uncore_formats_attr[] = {
+diff --git a/arch/x86/kernel/cpu/perfctr-watchdog.c b/arch/x86/kernel/cpu/perfctr-watchdog.c
+index a5ee607a3b893..a548d9104604f 100644
+--- a/arch/x86/kernel/cpu/perfctr-watchdog.c
++++ b/arch/x86/kernel/cpu/perfctr-watchdog.c
+@@ -63,7 +63,7 @@ static inline unsigned int nmi_perfctr_msr_to_bit(unsigned int msr)
+ 		case 15:
+ 			return msr - MSR_P4_BPU_PERFCTR0;
+ 		}
+-		fallthrough;
++		break;
+ 	case X86_VENDOR_ZHAOXIN:
+ 	case X86_VENDOR_CENTAUR:
+ 		return msr - MSR_ARCH_PERFMON_PERFCTR0;
+@@ -96,7 +96,7 @@ static inline unsigned int nmi_evntsel_msr_to_bit(unsigned int msr)
+ 		case 15:
+ 			return msr - MSR_P4_BSU_ESCR0;
+ 		}
+-		fallthrough;
++		break;
+ 	case X86_VENDOR_ZHAOXIN:
+ 	case X86_VENDOR_CENTAUR:
+ 		return msr - MSR_ARCH_PERFMON_EVENTSEL0;
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index 50e268eb8e1a9..00a0bfaed6e86 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -90,8 +90,8 @@ struct guest_walker {
+ 	gpa_t pte_gpa[PT_MAX_FULL_LEVELS];
+ 	pt_element_t __user *ptep_user[PT_MAX_FULL_LEVELS];
+ 	bool pte_writable[PT_MAX_FULL_LEVELS];
+-	unsigned pt_access;
+-	unsigned pte_access;
++	unsigned int pt_access[PT_MAX_FULL_LEVELS];
++	unsigned int pte_access;
+ 	gfn_t gfn;
+ 	struct x86_exception fault;
+ };
+@@ -418,13 +418,15 @@ retry_walk:
+ 		}
+ 
+ 		walker->ptes[walker->level - 1] = pte;
++
++		/* Convert to ACC_*_MASK flags for struct guest_walker.  */
++		walker->pt_access[walker->level - 1] = FNAME(gpte_access)(pt_access ^ walk_nx_mask);
+ 	} while (!is_last_gpte(mmu, walker->level, pte));
+ 
+ 	pte_pkey = FNAME(gpte_pkeys)(vcpu, pte);
+ 	accessed_dirty = have_ad ? pte_access & PT_GUEST_ACCESSED_MASK : 0;
+ 
+ 	/* Convert to ACC_*_MASK flags for struct guest_walker.  */
+-	walker->pt_access = FNAME(gpte_access)(pt_access ^ walk_nx_mask);
+ 	walker->pte_access = FNAME(gpte_access)(pte_access ^ walk_nx_mask);
+ 	errcode = permission_fault(vcpu, mmu, walker->pte_access, pte_pkey, access);
+ 	if (unlikely(errcode))
+@@ -463,7 +465,8 @@ retry_walk:
+ 	}
+ 
+ 	pgprintk("%s: pte %llx pte_access %x pt_access %x\n",
+-		 __func__, (u64)pte, walker->pte_access, walker->pt_access);
++		 __func__, (u64)pte, walker->pte_access,
++		 walker->pt_access[walker->level - 1]);
+ 	return 1;
+ 
+ error:
+@@ -635,7 +638,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
+ 	bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled;
+ 	struct kvm_mmu_page *sp = NULL;
+ 	struct kvm_shadow_walk_iterator it;
+-	unsigned direct_access, access = gw->pt_access;
++	unsigned int direct_access, access;
+ 	int top_level, level, req_level, ret;
+ 	gfn_t base_gfn = gw->gfn;
+ 
+@@ -667,6 +670,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
+ 		sp = NULL;
+ 		if (!is_shadow_present_pte(*it.sptep)) {
+ 			table_gfn = gw->table_gfn[it.level - 2];
++			access = gw->pt_access[it.level - 2];
+ 			sp = kvm_mmu_get_page(vcpu, table_gfn, addr, it.level-1,
+ 					      false, access);
+ 		}
+diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
+index aef960f90f26e..a2835d784f4be 100644
+--- a/arch/x86/kvm/trace.h
++++ b/arch/x86/kvm/trace.h
+@@ -1514,16 +1514,16 @@ TRACE_EVENT(kvm_nested_vmenter_failed,
+ 	TP_ARGS(msg, err),
+ 
+ 	TP_STRUCT__entry(
+-		__field(const char *, msg)
++		__string(msg, msg)
+ 		__field(u32, err)
+ 	),
+ 
+ 	TP_fast_assign(
+-		__entry->msg = msg;
++		__assign_str(msg, msg);
+ 		__entry->err = err;
+ 	),
+ 
+-	TP_printk("%s%s", __entry->msg, !__entry->err ? "" :
++	TP_printk("%s%s", __get_str(msg), !__entry->err ? "" :
+ 		__print_symbolic(__entry->err, VMX_VMENTER_INSTRUCTION_ERRORS))
+ );
+ 
+diff --git a/crypto/async_tx/async_xor.c b/crypto/async_tx/async_xor.c
+index 6cd7f7025df47..d8a91521144e0 100644
+--- a/crypto/async_tx/async_xor.c
++++ b/crypto/async_tx/async_xor.c
+@@ -233,7 +233,8 @@ async_xor_offs(struct page *dest, unsigned int offset,
+ 		if (submit->flags & ASYNC_TX_XOR_DROP_DST) {
+ 			src_cnt--;
+ 			src_list++;
+-			src_offs++;
++			if (src_offs)
++				src_offs++;
+ 		}
+ 
+ 		/* wait for any prerequisite operations */
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index aff13bf4d9470..31c9d0c8ae11f 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -1290,10 +1290,8 @@ static void acpi_sleep_hibernate_setup(void)
+ 		return;
+ 
+ 	acpi_get_table(ACPI_SIG_FACS, 1, (struct acpi_table_header **)&facs);
+-	if (facs) {
++	if (facs)
+ 		s4_hardware_signature = facs->hardware_signature;
+-		acpi_put_table((struct acpi_table_header *)facs);
+-	}
+ }
+ #else /* !CONFIG_HIBERNATION */
+ static inline void acpi_sleep_hibernate_setup(void) {}
+diff --git a/drivers/gpio/gpio-wcd934x.c b/drivers/gpio/gpio-wcd934x.c
+index 1cbce59908558..97e6caedf1f33 100644
+--- a/drivers/gpio/gpio-wcd934x.c
++++ b/drivers/gpio/gpio-wcd934x.c
+@@ -7,7 +7,7 @@
+ #include <linux/slab.h>
+ #include <linux/of_device.h>
+ 
+-#define WCD_PIN_MASK(p) BIT(p - 1)
++#define WCD_PIN_MASK(p) BIT(p)
+ #define WCD_REG_DIR_CTL_OFFSET 0x42
+ #define WCD_REG_VAL_CTL_OFFSET 0x43
+ #define WCD934X_NPINS		5
+diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
+index f2d46b7ac6f9f..232abbba36868 100644
+--- a/drivers/gpu/drm/drm_auth.c
++++ b/drivers/gpu/drm/drm_auth.c
+@@ -314,9 +314,10 @@ int drm_master_open(struct drm_file *file_priv)
+ void drm_master_release(struct drm_file *file_priv)
+ {
+ 	struct drm_device *dev = file_priv->minor->dev;
+-	struct drm_master *master = file_priv->master;
++	struct drm_master *master;
+ 
+ 	mutex_lock(&dev->master_mutex);
++	master = file_priv->master;
+ 	if (file_priv->magic)
+ 		idr_remove(&file_priv->master->magic_map, file_priv->magic);
+ 
+diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
+index 789ee65ac1f57..ae647be4a49fb 100644
+--- a/drivers/gpu/drm/drm_ioctl.c
++++ b/drivers/gpu/drm/drm_ioctl.c
+@@ -118,17 +118,18 @@ int drm_getunique(struct drm_device *dev, void *data,
+ 		  struct drm_file *file_priv)
+ {
+ 	struct drm_unique *u = data;
+-	struct drm_master *master = file_priv->master;
++	struct drm_master *master;
+ 
+-	mutex_lock(&master->dev->master_mutex);
++	mutex_lock(&dev->master_mutex);
++	master = file_priv->master;
+ 	if (u->unique_len >= master->unique_len) {
+ 		if (copy_to_user(u->unique, master->unique, master->unique_len)) {
+-			mutex_unlock(&master->dev->master_mutex);
++			mutex_unlock(&dev->master_mutex);
+ 			return -EFAULT;
+ 		}
+ 	}
+ 	u->unique_len = master->unique_len;
+-	mutex_unlock(&master->dev->master_mutex);
++	mutex_unlock(&dev->master_mutex);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/mcde/mcde_dsi.c b/drivers/gpu/drm/mcde/mcde_dsi.c
+index b3fd3501c4127..5275b2723293b 100644
+--- a/drivers/gpu/drm/mcde/mcde_dsi.c
++++ b/drivers/gpu/drm/mcde/mcde_dsi.c
+@@ -577,7 +577,7 @@ static void mcde_dsi_setup_video_mode(struct mcde_dsi *d,
+ 	 * porches and sync.
+ 	 */
+ 	/* (ps/s) / (pixels/s) = ps/pixels */
+-	pclk = DIV_ROUND_UP_ULL(1000000000000, mode->clock);
++	pclk = DIV_ROUND_UP_ULL(1000000000000, (mode->clock * 1000));
+ 	dev_dbg(d->dev, "picoseconds between two pixels: %llu\n",
+ 		pclk);
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 722c2fe3bfd56..2dcbe02846cd9 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -154,7 +154,7 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ 	 * GPU registers so we need to add 0x1a800 to the register value on A630
+ 	 * to get the right value from PM4.
+ 	 */
+-	get_stats_counter(ring, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L + 0x1a800,
++	get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
+ 		rbmemptr_stats(ring, index, alwayson_start));
+ 
+ 	/* Invalidate CCU depth and color */
+@@ -184,7 +184,7 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ 
+ 	get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP_0_LO,
+ 		rbmemptr_stats(ring, index, cpcycles_end));
+-	get_stats_counter(ring, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L + 0x1a800,
++	get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
+ 		rbmemptr_stats(ring, index, alwayson_end));
+ 
+ 	/* Write the fence to the scratch register */
+@@ -203,8 +203,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ 	OUT_RING(ring, submit->seqno);
+ 
+ 	trace_msm_gpu_submit_flush(submit,
+-		gmu_read64(&a6xx_gpu->gmu, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L,
+-			REG_A6XX_GMU_ALWAYS_ON_COUNTER_H));
++		gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
++			REG_A6XX_CP_ALWAYS_ON_COUNTER_HI));
+ 
+ 	a6xx_flush(gpu, ring);
+ }
+@@ -459,6 +459,113 @@ static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state)
+ 	gpu_write(gpu, REG_A6XX_RBBM_CLOCK_CNTL, state ? clock_cntl_on : 0);
+ }
+ 
++/* For a615, a616, a618, A619, a630, a640 and a680 */
++static const u32 a6xx_protect[] = {
++	A6XX_PROTECT_RDONLY(0x00000, 0x04ff),
++	A6XX_PROTECT_RDONLY(0x00501, 0x0005),
++	A6XX_PROTECT_RDONLY(0x0050b, 0x02f4),
++	A6XX_PROTECT_NORDWR(0x0050e, 0x0000),
++	A6XX_PROTECT_NORDWR(0x00510, 0x0000),
++	A6XX_PROTECT_NORDWR(0x00534, 0x0000),
++	A6XX_PROTECT_NORDWR(0x00800, 0x0082),
++	A6XX_PROTECT_NORDWR(0x008a0, 0x0008),
++	A6XX_PROTECT_NORDWR(0x008ab, 0x0024),
++	A6XX_PROTECT_RDONLY(0x008de, 0x00ae),
++	A6XX_PROTECT_NORDWR(0x00900, 0x004d),
++	A6XX_PROTECT_NORDWR(0x0098d, 0x0272),
++	A6XX_PROTECT_NORDWR(0x00e00, 0x0001),
++	A6XX_PROTECT_NORDWR(0x00e03, 0x000c),
++	A6XX_PROTECT_NORDWR(0x03c00, 0x00c3),
++	A6XX_PROTECT_RDONLY(0x03cc4, 0x1fff),
++	A6XX_PROTECT_NORDWR(0x08630, 0x01cf),
++	A6XX_PROTECT_NORDWR(0x08e00, 0x0000),
++	A6XX_PROTECT_NORDWR(0x08e08, 0x0000),
++	A6XX_PROTECT_NORDWR(0x08e50, 0x001f),
++	A6XX_PROTECT_NORDWR(0x09624, 0x01db),
++	A6XX_PROTECT_NORDWR(0x09e70, 0x0001),
++	A6XX_PROTECT_NORDWR(0x09e78, 0x0187),
++	A6XX_PROTECT_NORDWR(0x0a630, 0x01cf),
++	A6XX_PROTECT_NORDWR(0x0ae02, 0x0000),
++	A6XX_PROTECT_NORDWR(0x0ae50, 0x032f),
++	A6XX_PROTECT_NORDWR(0x0b604, 0x0000),
++	A6XX_PROTECT_NORDWR(0x0be02, 0x0001),
++	A6XX_PROTECT_NORDWR(0x0be20, 0x17df),
++	A6XX_PROTECT_NORDWR(0x0f000, 0x0bff),
++	A6XX_PROTECT_RDONLY(0x0fc00, 0x1fff),
++	A6XX_PROTECT_NORDWR(0x11c00, 0x0000), /* note: infinite range */
++};
++
++/* These are for a620 and a650 */
++static const u32 a650_protect[] = {
++	A6XX_PROTECT_RDONLY(0x00000, 0x04ff),
++	A6XX_PROTECT_RDONLY(0x00501, 0x0005),
++	A6XX_PROTECT_RDONLY(0x0050b, 0x02f4),
++	A6XX_PROTECT_NORDWR(0x0050e, 0x0000),
++	A6XX_PROTECT_NORDWR(0x00510, 0x0000),
++	A6XX_PROTECT_NORDWR(0x00534, 0x0000),
++	A6XX_PROTECT_NORDWR(0x00800, 0x0082),
++	A6XX_PROTECT_NORDWR(0x008a0, 0x0008),
++	A6XX_PROTECT_NORDWR(0x008ab, 0x0024),
++	A6XX_PROTECT_RDONLY(0x008de, 0x00ae),
++	A6XX_PROTECT_NORDWR(0x00900, 0x004d),
++	A6XX_PROTECT_NORDWR(0x0098d, 0x0272),
++	A6XX_PROTECT_NORDWR(0x00e00, 0x0001),
++	A6XX_PROTECT_NORDWR(0x00e03, 0x000c),
++	A6XX_PROTECT_NORDWR(0x03c00, 0x00c3),
++	A6XX_PROTECT_RDONLY(0x03cc4, 0x1fff),
++	A6XX_PROTECT_NORDWR(0x08630, 0x01cf),
++	A6XX_PROTECT_NORDWR(0x08e00, 0x0000),
++	A6XX_PROTECT_NORDWR(0x08e08, 0x0000),
++	A6XX_PROTECT_NORDWR(0x08e50, 0x001f),
++	A6XX_PROTECT_NORDWR(0x08e80, 0x027f),
++	A6XX_PROTECT_NORDWR(0x09624, 0x01db),
++	A6XX_PROTECT_NORDWR(0x09e60, 0x0011),
++	A6XX_PROTECT_NORDWR(0x09e78, 0x0187),
++	A6XX_PROTECT_NORDWR(0x0a630, 0x01cf),
++	A6XX_PROTECT_NORDWR(0x0ae02, 0x0000),
++	A6XX_PROTECT_NORDWR(0x0ae50, 0x032f),
++	A6XX_PROTECT_NORDWR(0x0b604, 0x0000),
++	A6XX_PROTECT_NORDWR(0x0b608, 0x0007),
++	A6XX_PROTECT_NORDWR(0x0be02, 0x0001),
++	A6XX_PROTECT_NORDWR(0x0be20, 0x17df),
++	A6XX_PROTECT_NORDWR(0x0f000, 0x0bff),
++	A6XX_PROTECT_RDONLY(0x0fc00, 0x1fff),
++	A6XX_PROTECT_NORDWR(0x18400, 0x1fff),
++	A6XX_PROTECT_NORDWR(0x1a800, 0x1fff),
++	A6XX_PROTECT_NORDWR(0x1f400, 0x0443),
++	A6XX_PROTECT_RDONLY(0x1f844, 0x007b),
++	A6XX_PROTECT_NORDWR(0x1f887, 0x001b),
++	A6XX_PROTECT_NORDWR(0x1f8c0, 0x0000), /* note: infinite range */
++};
++
++static void a6xx_set_cp_protect(struct msm_gpu *gpu)
++{
++	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
++	const u32 *regs = a6xx_protect;
++	unsigned i, count = ARRAY_SIZE(a6xx_protect), count_max = 32;
++
++	BUILD_BUG_ON(ARRAY_SIZE(a6xx_protect) > 32);
++	BUILD_BUG_ON(ARRAY_SIZE(a650_protect) > 48);
++
++	if (adreno_is_a650(adreno_gpu)) {
++		regs = a650_protect;
++		count = ARRAY_SIZE(a650_protect);
++		count_max = 48;
++	}
++
++	/*
++	 * Enable access protection to privileged registers, fault on an access
++	 * protect violation and select the last span to protect from the start
++	 * address all the way to the end of the register address space
++	 */
++	gpu_write(gpu, REG_A6XX_CP_PROTECT_CNTL, BIT(0) | BIT(1) | BIT(3));
++
++	for (i = 0; i < count - 1; i++)
++		gpu_write(gpu, REG_A6XX_CP_PROTECT(i), regs[i]);
++	/* last CP_PROTECT to have "infinite" length on the last entry */
++	gpu_write(gpu, REG_A6XX_CP_PROTECT(count_max - 1), regs[i]);
++}
++
+ static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
+ {
+ 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+@@ -486,7 +593,7 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
+ 		rgb565_predicator << 11 | amsbc << 4 | lower_bit << 1);
+ 	gpu_write(gpu, REG_A6XX_TPL1_NC_MODE_CNTL, lower_bit << 1);
+ 	gpu_write(gpu, REG_A6XX_SP_NC_MODE_CNTL,
+-		uavflagprd_inv >> 4 | lower_bit << 1);
++		uavflagprd_inv << 4 | lower_bit << 1);
+ 	gpu_write(gpu, REG_A6XX_UCHE_MODE_CNTL, lower_bit << 21);
+ }
+ 
+@@ -722,41 +829,7 @@ static int a6xx_hw_init(struct msm_gpu *gpu)
+ 	}
+ 
+ 	/* Protect registers from the CP */
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT_CNTL, 0x00000003);
+-
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(0),
+-		A6XX_PROTECT_RDONLY(0x600, 0x51));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(1), A6XX_PROTECT_RW(0xae50, 0x2));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(2), A6XX_PROTECT_RW(0x9624, 0x13));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(3), A6XX_PROTECT_RW(0x8630, 0x8));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(4), A6XX_PROTECT_RW(0x9e70, 0x1));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(5), A6XX_PROTECT_RW(0x9e78, 0x187));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(6), A6XX_PROTECT_RW(0xf000, 0x810));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(7),
+-		A6XX_PROTECT_RDONLY(0xfc00, 0x3));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(8), A6XX_PROTECT_RW(0x50e, 0x0));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(9), A6XX_PROTECT_RDONLY(0x50f, 0x0));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(10), A6XX_PROTECT_RW(0x510, 0x0));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(11),
+-		A6XX_PROTECT_RDONLY(0x0, 0x4f9));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(12),
+-		A6XX_PROTECT_RDONLY(0x501, 0xa));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(13),
+-		A6XX_PROTECT_RDONLY(0x511, 0x44));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(14), A6XX_PROTECT_RW(0xe00, 0xe));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(15), A6XX_PROTECT_RW(0x8e00, 0x0));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(16), A6XX_PROTECT_RW(0x8e50, 0xf));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(17), A6XX_PROTECT_RW(0xbe02, 0x0));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(18),
+-		A6XX_PROTECT_RW(0xbe20, 0x11f3));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(19), A6XX_PROTECT_RW(0x800, 0x82));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(20), A6XX_PROTECT_RW(0x8a0, 0x8));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(21), A6XX_PROTECT_RW(0x8ab, 0x19));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(22), A6XX_PROTECT_RW(0x900, 0x4d));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(23), A6XX_PROTECT_RW(0x98d, 0x76));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(24),
+-			A6XX_PROTECT_RDONLY(0x980, 0x4));
+-	gpu_write(gpu, REG_A6XX_CP_PROTECT(25), A6XX_PROTECT_RW(0xa630, 0x0));
++	a6xx_set_cp_protect(gpu);
+ 
+ 	/* Enable expanded apriv for targets that support it */
+ 	if (gpu->hw_apriv) {
+@@ -1055,7 +1128,7 @@ static int a6xx_pm_suspend(struct msm_gpu *gpu)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (adreno_gpu->base.hw_apriv || a6xx_gpu->has_whereami)
++	if (a6xx_gpu->shadow_bo)
+ 		for (i = 0; i < gpu->nr_rings; i++)
+ 			a6xx_gpu->shadow[i] = 0;
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+index 3eeebf6a754b9..69765a722cae6 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+@@ -37,7 +37,7 @@ struct a6xx_gpu {
+  * REG_CP_PROTECT_REG(n) - this will block both reads and writes for _len
+  * registers starting at _reg.
+  */
+-#define A6XX_PROTECT_RW(_reg, _len) \
++#define A6XX_PROTECT_NORDWR(_reg, _len) \
+ 	((1 << 31) | \
+ 	(((_len) & 0x3FFF) << 18) | ((_reg) & 0x3FFFF))
+ 
+diff --git a/drivers/i2c/busses/i2c-mpc.c b/drivers/i2c/busses/i2c-mpc.c
+index d94f05c8b8b79..af349661fd769 100644
+--- a/drivers/i2c/busses/i2c-mpc.c
++++ b/drivers/i2c/busses/i2c-mpc.c
+@@ -23,6 +23,7 @@
+ 
+ #include <linux/clk.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/fsl_devices.h>
+ #include <linux/i2c.h>
+ #include <linux/interrupt.h>
+@@ -49,6 +50,7 @@
+ #define CCR_MTX  0x10
+ #define CCR_TXAK 0x08
+ #define CCR_RSTA 0x04
++#define CCR_RSVD 0x02
+ 
+ #define CSR_MCF  0x80
+ #define CSR_MAAS 0x40
+@@ -70,6 +72,7 @@ struct mpc_i2c {
+ 	u8 fdr, dfsrr;
+ #endif
+ 	struct clk *clk_per;
++	bool has_errata_A004447;
+ };
+ 
+ struct mpc_i2c_divider {
+@@ -176,6 +179,75 @@ static int i2c_wait(struct mpc_i2c *i2c, unsigned timeout, int writing)
+ 	return 0;
+ }
+ 
++static int i2c_mpc_wait_sr(struct mpc_i2c *i2c, int mask)
++{
++	void __iomem *addr = i2c->base + MPC_I2C_SR;
++	u8 val;
++
++	return readb_poll_timeout(addr, val, val & mask, 0, 100);
++}
++
++/*
++ * Workaround for Erratum A004447. From the P2040CE Rev Q
++ *
++ * 1.  Set up the frequency divider and sampling rate.
++ * 2.  I2CCR - a0h
++ * 3.  Poll for I2CSR[MBB] to get set.
++ * 4.  If I2CSR[MAL] is set (an indication that SDA is stuck low), then go to
++ *     step 5. If MAL is not set, then go to step 13.
++ * 5.  I2CCR - 00h
++ * 6.  I2CCR - 22h
++ * 7.  I2CCR - a2h
++ * 8.  Poll for I2CSR[MBB] to get set.
++ * 9.  Issue read to I2CDR.
++ * 10. Poll for I2CSR[MIF] to be set.
++ * 11. I2CCR - 82h
++ * 12. Workaround complete. Skip the next steps.
++ * 13. Issue read to I2CDR.
++ * 14. Poll for I2CSR[MIF] to be set.
++ * 15. I2CCR - 80h
++ */
++static void mpc_i2c_fixup_A004447(struct mpc_i2c *i2c)
++{
++	int ret;
++	u32 val;
++
++	writeccr(i2c, CCR_MEN | CCR_MSTA);
++	ret = i2c_mpc_wait_sr(i2c, CSR_MBB);
++	if (ret) {
++		dev_err(i2c->dev, "timeout waiting for CSR_MBB\n");
++		return;
++	}
++
++	val = readb(i2c->base + MPC_I2C_SR);
++
++	if (val & CSR_MAL) {
++		writeccr(i2c, 0x00);
++		writeccr(i2c, CCR_MSTA | CCR_RSVD);
++		writeccr(i2c, CCR_MEN | CCR_MSTA | CCR_RSVD);
++		ret = i2c_mpc_wait_sr(i2c, CSR_MBB);
++		if (ret) {
++			dev_err(i2c->dev, "timeout waiting for CSR_MBB\n");
++			return;
++		}
++		val = readb(i2c->base + MPC_I2C_DR);
++		ret = i2c_mpc_wait_sr(i2c, CSR_MIF);
++		if (ret) {
++			dev_err(i2c->dev, "timeout waiting for CSR_MIF\n");
++			return;
++		}
++		writeccr(i2c, CCR_MEN | CCR_RSVD);
++	} else {
++		val = readb(i2c->base + MPC_I2C_DR);
++		ret = i2c_mpc_wait_sr(i2c, CSR_MIF);
++		if (ret) {
++			dev_err(i2c->dev, "timeout waiting for CSR_MIF\n");
++			return;
++		}
++		writeccr(i2c, CCR_MEN);
++	}
++}
++
+ #if defined(CONFIG_PPC_MPC52xx) || defined(CONFIG_PPC_MPC512x)
+ static const struct mpc_i2c_divider mpc_i2c_dividers_52xx[] = {
+ 	{20, 0x20}, {22, 0x21}, {24, 0x22}, {26, 0x23},
+@@ -586,7 +658,7 @@ static int mpc_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ 			if ((status & (CSR_MCF | CSR_MBB | CSR_RXAK)) != 0) {
+ 				writeb(status & ~CSR_MAL,
+ 				       i2c->base + MPC_I2C_SR);
+-				mpc_i2c_fixup(i2c);
++				i2c_recover_bus(&i2c->adap);
+ 			}
+ 			return -EIO;
+ 		}
+@@ -622,7 +694,7 @@ static int mpc_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ 			if ((status & (CSR_MCF | CSR_MBB | CSR_RXAK)) != 0) {
+ 				writeb(status & ~CSR_MAL,
+ 				       i2c->base + MPC_I2C_SR);
+-				mpc_i2c_fixup(i2c);
++				i2c_recover_bus(&i2c->adap);
+ 			}
+ 			return -EIO;
+ 		}
+@@ -637,6 +709,18 @@ static u32 mpc_functionality(struct i2c_adapter *adap)
+ 	  | I2C_FUNC_SMBUS_READ_BLOCK_DATA | I2C_FUNC_SMBUS_BLOCK_PROC_CALL;
+ }
+ 
++static int fsl_i2c_bus_recovery(struct i2c_adapter *adap)
++{
++	struct mpc_i2c *i2c = i2c_get_adapdata(adap);
++
++	if (i2c->has_errata_A004447)
++		mpc_i2c_fixup_A004447(i2c);
++	else
++		mpc_i2c_fixup(i2c);
++
++	return 0;
++}
++
+ static const struct i2c_algorithm mpc_algo = {
+ 	.master_xfer = mpc_xfer,
+ 	.functionality = mpc_functionality,
+@@ -648,6 +732,10 @@ static struct i2c_adapter mpc_ops = {
+ 	.timeout = HZ,
+ };
+ 
++static struct i2c_bus_recovery_info fsl_i2c_recovery_info = {
++	.recover_bus = fsl_i2c_bus_recovery,
++};
++
+ static const struct of_device_id mpc_i2c_of_match[];
+ static int fsl_i2c_probe(struct platform_device *op)
+ {
+@@ -732,6 +820,8 @@ static int fsl_i2c_probe(struct platform_device *op)
+ 	dev_info(i2c->dev, "timeout %u us\n", mpc_ops.timeout * 1000000 / HZ);
+ 
+ 	platform_set_drvdata(op, i2c);
++	if (of_property_read_bool(op->dev.of_node, "fsl,i2c-erratum-a004447"))
++		i2c->has_errata_A004447 = true;
+ 
+ 	i2c->adap = mpc_ops;
+ 	of_address_to_resource(op->dev.of_node, 0, &res);
+@@ -740,6 +830,7 @@ static int fsl_i2c_probe(struct platform_device *op)
+ 	i2c_set_adapdata(&i2c->adap, i2c);
+ 	i2c->adap.dev.parent = &op->dev;
+ 	i2c->adap.dev.of_node = of_node_get(op->dev.of_node);
++	i2c->adap.bus_recovery_info = &fsl_i2c_recovery_info;
+ 
+ 	result = i2c_add_adapter(&i2c->adap);
+ 	if (result < 0)
+diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
+index cd0fba6b09642..7b11aff8a5ea7 100644
+--- a/drivers/infiniband/hw/mlx4/main.c
++++ b/drivers/infiniband/hw/mlx4/main.c
+@@ -580,12 +580,9 @@ static int mlx4_ib_query_device(struct ib_device *ibdev,
+ 	props->cq_caps.max_cq_moderation_count = MLX4_MAX_CQ_COUNT;
+ 	props->cq_caps.max_cq_moderation_period = MLX4_MAX_CQ_PERIOD;
+ 
+-	if (!mlx4_is_slave(dev->dev))
+-		err = mlx4_get_internal_clock_params(dev->dev, &clock_params);
+-
+ 	if (uhw->outlen >= resp.response_length + sizeof(resp.hca_core_clock_offset)) {
+ 		resp.response_length += sizeof(resp.hca_core_clock_offset);
+-		if (!err && !mlx4_is_slave(dev->dev)) {
++		if (!mlx4_get_internal_clock_params(dev->dev, &clock_params)) {
+ 			resp.comp_mask |= MLX4_IB_QUERY_DEV_RESP_MASK_CORE_CLOCK_OFFSET;
+ 			resp.hca_core_clock_offset = clock_params.offset % PAGE_SIZE;
+ 		}
+diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
+index fb62f1d04afa7..372adb7ceb74e 100644
+--- a/drivers/infiniband/hw/mlx5/cq.c
++++ b/drivers/infiniband/hw/mlx5/cq.c
+@@ -838,15 +838,14 @@ static void destroy_cq_user(struct mlx5_ib_cq *cq, struct ib_udata *udata)
+ 	ib_umem_release(cq->buf.umem);
+ }
+ 
+-static void init_cq_frag_buf(struct mlx5_ib_cq *cq,
+-			     struct mlx5_ib_cq_buf *buf)
++static void init_cq_frag_buf(struct mlx5_ib_cq_buf *buf)
+ {
+ 	int i;
+ 	void *cqe;
+ 	struct mlx5_cqe64 *cqe64;
+ 
+ 	for (i = 0; i < buf->nent; i++) {
+-		cqe = get_cqe(cq, i);
++		cqe = mlx5_frag_buf_get_wqe(&buf->fbc, i);
+ 		cqe64 = buf->cqe_size == 64 ? cqe : cqe + 64;
+ 		cqe64->op_own = MLX5_CQE_INVALID << 4;
+ 	}
+@@ -872,7 +871,7 @@ static int create_cq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
+ 	if (err)
+ 		goto err_db;
+ 
+-	init_cq_frag_buf(cq, &cq->buf);
++	init_cq_frag_buf(&cq->buf);
+ 
+ 	*inlen = MLX5_ST_SZ_BYTES(create_cq_in) +
+ 		 MLX5_FLD_SZ_BYTES(create_cq_in, pas[0]) *
+@@ -1177,7 +1176,7 @@ static int resize_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
+ 	if (err)
+ 		goto ex;
+ 
+-	init_cq_frag_buf(cq, cq->resize_buf);
++	init_cq_frag_buf(cq->resize_buf);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_netlink.c b/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
+index d5a90a66b45cf..5b05cf3837da1 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
+@@ -163,6 +163,7 @@ static size_t ipoib_get_size(const struct net_device *dev)
+ 
+ static struct rtnl_link_ops ipoib_link_ops __read_mostly = {
+ 	.kind		= "ipoib",
++	.netns_refund   = true,
+ 	.maxtype	= IFLA_IPOIB_MAX,
+ 	.policy		= ipoib_policy,
+ 	.priv_size	= sizeof(struct ipoib_dev_priv),
+diff --git a/drivers/isdn/hardware/mISDN/netjet.c b/drivers/isdn/hardware/mISDN/netjet.c
+index ee925b58bbcea..2a1ddd47a0968 100644
+--- a/drivers/isdn/hardware/mISDN/netjet.c
++++ b/drivers/isdn/hardware/mISDN/netjet.c
+@@ -1100,7 +1100,6 @@ nj_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		card->typ = NETJET_S_TJ300;
+ 
+ 	card->base = pci_resource_start(pdev, 0);
+-	card->irq = pdev->irq;
+ 	pci_set_drvdata(pdev, card);
+ 	err = setup_instance(card);
+ 	if (err)
+diff --git a/drivers/md/dm-verity-verify-sig.c b/drivers/md/dm-verity-verify-sig.c
+index 614e43db93aa8..919154ae4cae6 100644
+--- a/drivers/md/dm-verity-verify-sig.c
++++ b/drivers/md/dm-verity-verify-sig.c
+@@ -15,7 +15,7 @@
+ #define DM_VERITY_VERIFY_ERR(s) DM_VERITY_ROOT_HASH_VERIFICATION " " s
+ 
+ static bool require_signatures;
+-module_param(require_signatures, bool, false);
++module_param(require_signatures, bool, 0444);
+ MODULE_PARM_DESC(require_signatures,
+ 		"Verify the roothash of dm-verity hash tree");
+ 
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index acb9c81a4e456..addaaf2810e26 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -660,14 +660,19 @@ static int renesas_sdhi_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ 
+ 	/* Issue CMD19 twice for each tap */
+ 	for (i = 0; i < 2 * priv->tap_num; i++) {
++		int cmd_error;
++
+ 		/* Set sampling clock position */
+ 		sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_TAPSET, i % priv->tap_num);
+ 
+-		if (mmc_send_tuning(mmc, opcode, NULL) == 0)
++		if (mmc_send_tuning(mmc, opcode, &cmd_error) == 0)
+ 			set_bit(i, priv->taps);
+ 
+ 		if (sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_SMPCMP) == 0)
+ 			set_bit(i, priv->smpcmp);
++
++		if (cmd_error)
++			mmc_abort_tuning(mmc, opcode);
+ 	}
+ 
+ 	ret = renesas_sdhi_select_tuning(host);
+@@ -897,7 +902,7 @@ static const struct soc_device_attribute sdhi_quirks_match[]  = {
+ 	{ .soc_id = "r8a7795", .revision = "ES3.*", .data = &sdhi_quirks_bad_taps2367 },
+ 	{ .soc_id = "r8a7796", .revision = "ES1.[012]", .data = &sdhi_quirks_4tap_nohs400 },
+ 	{ .soc_id = "r8a7796", .revision = "ES1.*", .data = &sdhi_quirks_r8a7796_es13 },
+-	{ .soc_id = "r8a7796", .revision = "ES3.*", .data = &sdhi_quirks_bad_taps1357 },
++	{ .soc_id = "r8a77961", .data = &sdhi_quirks_bad_taps1357 },
+ 	{ .soc_id = "r8a77965", .data = &sdhi_quirks_r8a77965 },
+ 	{ .soc_id = "r8a77980", .data = &sdhi_quirks_nohs400 },
+ 	{ .soc_id = "r8a77990", .data = &sdhi_quirks_r8a77990 },
+diff --git a/drivers/net/appletalk/cops.c b/drivers/net/appletalk/cops.c
+index ba8e70a8e3125..6b12ce822e51a 100644
+--- a/drivers/net/appletalk/cops.c
++++ b/drivers/net/appletalk/cops.c
+@@ -327,6 +327,8 @@ static int __init cops_probe1(struct net_device *dev, int ioaddr)
+ 			break;
+ 	}
+ 
++	dev->base_addr = ioaddr;
++
+ 	/* Reserve any actual interrupt. */
+ 	if (dev->irq) {
+ 		retval = request_irq(dev->irq, cops_interrupt, 0, dev->name, dev);
+@@ -334,8 +336,6 @@ static int __init cops_probe1(struct net_device *dev, int ioaddr)
+ 			goto err_out;
+ 	}
+ 
+-	dev->base_addr = ioaddr;
+-
+         lp = netdev_priv(dev);
+         spin_lock_init(&lp->lock);
+ 
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 47afc5938c26b..345a3f61c723c 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1502,6 +1502,7 @@ static struct slave *bond_alloc_slave(struct bonding *bond,
+ 
+ 	slave->bond = bond;
+ 	slave->dev = slave_dev;
++	INIT_DELAYED_WORK(&slave->notify_work, bond_netdev_notify_work);
+ 
+ 	if (bond_kobj_init(slave))
+ 		return NULL;
+@@ -1514,7 +1515,6 @@ static struct slave *bond_alloc_slave(struct bonding *bond,
+ 			return NULL;
+ 		}
+ 	}
+-	INIT_DELAYED_WORK(&slave->notify_work, bond_netdev_notify_work);
+ 
+ 	return slave;
+ }
+diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
+index abfd3802bb517..b3aa99eb6c2c5 100644
+--- a/drivers/net/dsa/microchip/ksz9477.c
++++ b/drivers/net/dsa/microchip/ksz9477.c
+@@ -1532,6 +1532,7 @@ static const struct ksz_chip_data ksz9477_switch_chips[] = {
+ 		.num_statics = 16,
+ 		.cpu_ports = 0x7F,	/* can be configured as cpu port */
+ 		.port_cnt = 7,		/* total physical port count */
++		.phy_errata_9477 = true,
+ 	},
+ };
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+index 9c2f51f230351..9108b497b3c99 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+@@ -1224,8 +1224,10 @@ int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
+ 		goto failed;
+ 
+ 	/* SR-IOV capability was enabled but there are no VFs*/
+-	if (iov->total == 0)
++	if (iov->total == 0) {
++		err = -EINVAL;
+ 		goto failed;
++	}
+ 
+ 	iov->nr_virtfn = min_t(u16, iov->total, num_vfs_param);
+ 
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 390f45e49eaf7..1e8bf6b9834bb 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -2709,6 +2709,9 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
+ 	struct gem_stats *hwstat = &bp->hw_stats.gem;
+ 	struct net_device_stats *nstat = &bp->dev->stats;
+ 
++	if (!netif_running(bp->dev))
++		return nstat;
++
+ 	gem_update_stats(bp);
+ 
+ 	nstat->rx_errors = (hwstat->rx_frame_check_sequence_errors +
+diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c
+index f6cfec81ccc3b..dc4ac1a2b6b67 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/fw.c
++++ b/drivers/net/ethernet/mellanox/mlx4/fw.c
+@@ -823,6 +823,7 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
+ #define QUERY_DEV_CAP_MAD_DEMUX_OFFSET		0xb0
+ #define QUERY_DEV_CAP_DMFS_HIGH_RATE_QPN_BASE_OFFSET	0xa8
+ #define QUERY_DEV_CAP_DMFS_HIGH_RATE_QPN_RANGE_OFFSET	0xac
++#define QUERY_DEV_CAP_MAP_CLOCK_TO_USER 0xc1
+ #define QUERY_DEV_CAP_QP_RATE_LIMIT_NUM_OFFSET	0xcc
+ #define QUERY_DEV_CAP_QP_RATE_LIMIT_MAX_OFFSET	0xd0
+ #define QUERY_DEV_CAP_QP_RATE_LIMIT_MIN_OFFSET	0xd2
+@@ -841,6 +842,8 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
+ 
+ 	if (mlx4_is_mfunc(dev))
+ 		disable_unsupported_roce_caps(outbox);
++	MLX4_GET(field, outbox, QUERY_DEV_CAP_MAP_CLOCK_TO_USER);
++	dev_cap->map_clock_to_user = field & 0x80;
+ 	MLX4_GET(field, outbox, QUERY_DEV_CAP_RSVD_QP_OFFSET);
+ 	dev_cap->reserved_qps = 1 << (field & 0xf);
+ 	MLX4_GET(field, outbox, QUERY_DEV_CAP_MAX_QP_OFFSET);
+diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.h b/drivers/net/ethernet/mellanox/mlx4/fw.h
+index 8f020f26ebf5f..cf64e54eecb05 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/fw.h
++++ b/drivers/net/ethernet/mellanox/mlx4/fw.h
+@@ -131,6 +131,7 @@ struct mlx4_dev_cap {
+ 	u32 health_buffer_addrs;
+ 	struct mlx4_port_cap port_cap[MLX4_MAX_PORTS + 1];
+ 	bool wol_port[MLX4_MAX_PORTS + 1];
++	bool map_clock_to_user;
+ };
+ 
+ struct mlx4_func_cap {
+diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
+index c326b434734e1..00c84656b2e7e 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/main.c
++++ b/drivers/net/ethernet/mellanox/mlx4/main.c
+@@ -498,6 +498,7 @@ static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
+ 		}
+ 	}
+ 
++	dev->caps.map_clock_to_user  = dev_cap->map_clock_to_user;
+ 	dev->caps.uar_page_size	     = PAGE_SIZE;
+ 	dev->caps.num_uars	     = dev_cap->uar_size / PAGE_SIZE;
+ 	dev->caps.local_ca_ack_delay = dev_cap->local_ca_ack_delay;
+@@ -1948,6 +1949,11 @@ int mlx4_get_internal_clock_params(struct mlx4_dev *dev,
+ 	if (mlx4_is_slave(dev))
+ 		return -EOPNOTSUPP;
+ 
++	if (!dev->caps.map_clock_to_user) {
++		mlx4_dbg(dev, "Map clock to user is not supported.\n");
++		return -EOPNOTSUPP;
++	}
++
+ 	if (!params)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index 27740c027681b..a83b3d69a6565 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -114,7 +114,7 @@ static int ql_sem_spinlock(struct ql3_adapter *qdev,
+ 		value = readl(&port_regs->CommonRegs.semaphoreReg);
+ 		if ((value & (sem_mask >> 16)) == sem_bits)
+ 			return 0;
+-		ssleep(1);
++		mdelay(1000);
+ 	} while (--seconds);
+ 	return -1;
+ }
+diff --git a/drivers/net/ethernet/sfc/nic.c b/drivers/net/ethernet/sfc/nic.c
+index d1e908846f5dd..22fbb0ae77fba 100644
+--- a/drivers/net/ethernet/sfc/nic.c
++++ b/drivers/net/ethernet/sfc/nic.c
+@@ -90,6 +90,7 @@ int efx_nic_init_interrupt(struct efx_nic *efx)
+ 				  efx->pci_dev->irq);
+ 			goto fail1;
+ 		}
++		efx->irqs_hooked = true;
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 757e950fb745b..b848439fa837c 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -608,7 +608,8 @@ void mdiobus_unregister(struct mii_bus *bus)
+ 	struct mdio_device *mdiodev;
+ 	int i;
+ 
+-	BUG_ON(bus->state != MDIOBUS_REGISTERED);
++	if (WARN_ON_ONCE(bus->state != MDIOBUS_REGISTERED))
++		return;
+ 	bus->state = MDIOBUS_UNREGISTERED;
+ 
+ 	for (i = 0; i < PHY_MAX_ADDR; i++) {
+diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
+index a44d49d63968a..494675aeaaad7 100644
+--- a/drivers/nvme/host/Kconfig
++++ b/drivers/nvme/host/Kconfig
+@@ -71,7 +71,8 @@ config NVME_FC
+ config NVME_TCP
+ 	tristate "NVM Express over Fabrics TCP host driver"
+ 	depends on INET
+-	depends on BLK_DEV_NVME
++	depends on BLOCK
++	select NVME_CORE
+ 	select NVME_FABRICS
+ 	select CRYPTO
+ 	select CRYPTO_CRC32C
+diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
+index 8575724734e02..7015fba2e5125 100644
+--- a/drivers/nvme/host/fabrics.c
++++ b/drivers/nvme/host/fabrics.c
+@@ -336,6 +336,11 @@ static void nvmf_log_connect_error(struct nvme_ctrl *ctrl,
+ 			cmd->connect.recfmt);
+ 		break;
+ 
++	case NVME_SC_HOST_PATH_ERROR:
++		dev_err(ctrl->device,
++			"Connect command failed: host path error\n");
++		break;
++
+ 	default:
+ 		dev_err(ctrl->device,
+ 			"Connect command failed, error wo/DNR bit: %d\n",
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 8b939e9db470c..9a8fa2e582d5b 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -379,10 +379,10 @@ static void nvmet_keep_alive_timer(struct work_struct *work)
+ {
+ 	struct nvmet_ctrl *ctrl = container_of(to_delayed_work(work),
+ 			struct nvmet_ctrl, ka_work);
+-	bool cmd_seen = ctrl->cmd_seen;
++	bool reset_tbkas = ctrl->reset_tbkas;
+ 
+-	ctrl->cmd_seen = false;
+-	if (cmd_seen) {
++	ctrl->reset_tbkas = false;
++	if (reset_tbkas) {
+ 		pr_debug("ctrl %d reschedule traffic based keep-alive timer\n",
+ 			ctrl->cntlid);
+ 		schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
+@@ -792,6 +792,13 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
+ 	percpu_ref_exit(&sq->ref);
+ 
+ 	if (ctrl) {
++		/*
++		 * The teardown flow may take some time, and the host may not
++		 * send us keep-alive during this period, hence reset the
++		 * traffic based keep-alive timer so we don't trigger a
++		 * controller teardown as a result of a keep-alive expiration.
++		 */
++		ctrl->reset_tbkas = true;
+ 		nvmet_ctrl_put(ctrl);
+ 		sq->ctrl = NULL; /* allows reusing the queue later */
+ 	}
+@@ -942,7 +949,7 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
+ 	}
+ 
+ 	if (sq->ctrl)
+-		sq->ctrl->cmd_seen = true;
++		sq->ctrl->reset_tbkas = true;
+ 
+ 	return true;
+ 
+diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
+index ea96487b5424e..4bf6d21290c23 100644
+--- a/drivers/nvme/target/nvmet.h
++++ b/drivers/nvme/target/nvmet.h
+@@ -166,7 +166,7 @@ struct nvmet_ctrl {
+ 	struct nvmet_subsys	*subsys;
+ 	struct nvmet_sq		**sqs;
+ 
+-	bool			cmd_seen;
++	bool			reset_tbkas;
+ 
+ 	struct mutex		lock;
+ 	u64			cap;
+diff --git a/drivers/phy/broadcom/phy-brcm-usb-init.h b/drivers/phy/broadcom/phy-brcm-usb-init.h
+index 899b9eb43fad6..a39f30fa2e991 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb-init.h
++++ b/drivers/phy/broadcom/phy-brcm-usb-init.h
+@@ -78,7 +78,7 @@ static inline u32 brcm_usb_readl(void __iomem *addr)
+ 	 * Other architectures (e.g., ARM) either do not support big endian, or
+ 	 * else leave I/O in little endian mode.
+ 	 */
+-	if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(__BIG_ENDIAN))
++	if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ 		return __raw_readl(addr);
+ 	else
+ 		return readl_relaxed(addr);
+@@ -87,7 +87,7 @@ static inline u32 brcm_usb_readl(void __iomem *addr)
+ static inline void brcm_usb_writel(u32 val, void __iomem *addr)
+ {
+ 	/* See brcmnand_readl() comments */
+-	if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(__BIG_ENDIAN))
++	if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ 		__raw_writel(val, addr);
+ 	else
+ 		writel_relaxed(val, addr);
+diff --git a/drivers/phy/cadence/phy-cadence-sierra.c b/drivers/phy/cadence/phy-cadence-sierra.c
+index aaa0bbe473f76..7d990613ce837 100644
+--- a/drivers/phy/cadence/phy-cadence-sierra.c
++++ b/drivers/phy/cadence/phy-cadence-sierra.c
+@@ -614,6 +614,7 @@ static int cdns_sierra_phy_probe(struct platform_device *pdev)
+ 	sp->nsubnodes = node;
+ 
+ 	if (sp->num_lanes > SIERRA_MAX_LANES) {
++		ret = -EINVAL;
+ 		dev_err(dev, "Invalid lane configuration\n");
+ 		goto put_child2;
+ 	}
+diff --git a/drivers/phy/ti/phy-j721e-wiz.c b/drivers/phy/ti/phy-j721e-wiz.c
+index e28e25f98708c..dceac77148721 100644
+--- a/drivers/phy/ti/phy-j721e-wiz.c
++++ b/drivers/phy/ti/phy-j721e-wiz.c
+@@ -894,6 +894,7 @@ static int wiz_probe(struct platform_device *pdev)
+ 
+ 		if (wiz->typec_dir_delay < WIZ_TYPEC_DIR_DEBOUNCE_MIN ||
+ 		    wiz->typec_dir_delay > WIZ_TYPEC_DIR_DEBOUNCE_MAX) {
++			ret = -EINVAL;
+ 			dev_err(dev, "Invalid typec-dir-debounce property\n");
+ 			goto err_addr_to_resource;
+ 		}
+diff --git a/drivers/regulator/bd718x7-regulator.c b/drivers/regulator/bd718x7-regulator.c
+index 3333b8905f1b7..2c097ee6cb021 100644
+--- a/drivers/regulator/bd718x7-regulator.c
++++ b/drivers/regulator/bd718x7-regulator.c
+@@ -364,7 +364,7 @@ BD718XX_OPS(bd71837_buck_regulator_ops, regulator_list_voltage_linear_range,
+ 	    NULL);
+ 
+ BD718XX_OPS(bd71837_buck_regulator_nolinear_ops, regulator_list_voltage_table,
+-	    regulator_map_voltage_ascend, bd718xx_set_voltage_sel_restricted,
++	    regulator_map_voltage_ascend, bd71837_set_voltage_sel_restricted,
+ 	    regulator_get_voltage_sel_regmap, regulator_set_voltage_time_sel,
+ 	    NULL);
+ /*
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 7b3de8b0b1caf..043b5f63b94a1 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1422,6 +1422,12 @@ static int set_machine_constraints(struct regulator_dev *rdev)
+ 	 * and we have control then make sure it is enabled.
+ 	 */
+ 	if (rdev->constraints->always_on || rdev->constraints->boot_on) {
++		/* If we want to enable this regulator, make sure that we know
++		 * the supplying regulator.
++		 */
++		if (rdev->supply_name && !rdev->supply)
++			return -EPROBE_DEFER;
++
+ 		if (rdev->supply) {
+ 			ret = regulator_enable(rdev->supply);
+ 			if (ret < 0) {
+diff --git a/drivers/regulator/fan53880.c b/drivers/regulator/fan53880.c
+index e83eb4fb1876a..1684faf82ed25 100644
+--- a/drivers/regulator/fan53880.c
++++ b/drivers/regulator/fan53880.c
+@@ -51,6 +51,7 @@ static const struct regulator_ops fan53880_ops = {
+ 		      REGULATOR_LINEAR_RANGE(800000, 0xf, 0x73, 25000),	\
+ 		},							\
+ 		.n_linear_ranges = 2,					\
++		.n_voltages =	   0x74,				\
+ 		.vsel_reg =	   FAN53880_LDO ## _num ## VOUT,	\
+ 		.vsel_mask =	   0x7f,				\
+ 		.enable_reg =	   FAN53880_ENABLE,			\
+@@ -76,6 +77,7 @@ static const struct regulator_desc fan53880_regulators[] = {
+ 		      REGULATOR_LINEAR_RANGE(600000, 0x1f, 0xf7, 12500),
+ 		},
+ 		.n_linear_ranges = 2,
++		.n_voltages =	   0xf8,
+ 		.vsel_reg =	   FAN53880_BUCKVOUT,
+ 		.vsel_mask =	   0x7f,
+ 		.enable_reg =	   FAN53880_ENABLE,
+@@ -95,6 +97,7 @@ static const struct regulator_desc fan53880_regulators[] = {
+ 		      REGULATOR_LINEAR_RANGE(3000000, 0x4, 0x70, 25000),
+ 		},
+ 		.n_linear_ranges = 2,
++		.n_voltages =	   0x71,
+ 		.vsel_reg =	   FAN53880_BOOSTVOUT,
+ 		.vsel_mask =	   0x7f,
+ 		.enable_reg =	   FAN53880_ENABLE_BOOST,
+diff --git a/drivers/regulator/max77620-regulator.c b/drivers/regulator/max77620-regulator.c
+index 8d9731e4052bf..5c439c850d090 100644
+--- a/drivers/regulator/max77620-regulator.c
++++ b/drivers/regulator/max77620-regulator.c
+@@ -814,6 +814,13 @@ static int max77620_regulator_probe(struct platform_device *pdev)
+ 	config.dev = dev;
+ 	config.driver_data = pmic;
+ 
++	/*
++	 * Set of_node_reuse flag to prevent driver core from attempting to
++	 * claim any pinmux resources already claimed by the parent device.
++	 * Otherwise PMIC driver will fail to re-probe.
++	 */
++	device_set_of_node_from_dev(&pdev->dev, pdev->dev.parent);
++
+ 	for (id = 0; id < MAX77620_NUM_REGS; id++) {
+ 		struct regulator_dev *rdev;
+ 		struct regulator_desc *rdesc;
+diff --git a/drivers/regulator/rtmv20-regulator.c b/drivers/regulator/rtmv20-regulator.c
+index 852fb2596ffda..5adc552dffd58 100644
+--- a/drivers/regulator/rtmv20-regulator.c
++++ b/drivers/regulator/rtmv20-regulator.c
+@@ -103,9 +103,47 @@ static int rtmv20_lsw_disable(struct regulator_dev *rdev)
+ 	return 0;
+ }
+ 
++static int rtmv20_lsw_set_current_limit(struct regulator_dev *rdev, int min_uA,
++					int max_uA)
++{
++	int sel;
++
++	if (min_uA > RTMV20_LSW_MAXUA || max_uA < RTMV20_LSW_MINUA)
++		return -EINVAL;
++
++	if (max_uA > RTMV20_LSW_MAXUA)
++		max_uA = RTMV20_LSW_MAXUA;
++
++	sel = (max_uA - RTMV20_LSW_MINUA) / RTMV20_LSW_STEPUA;
++
++	/* Ensure the selected setting is still in range */
++	if ((sel * RTMV20_LSW_STEPUA + RTMV20_LSW_MINUA) < min_uA)
++		return -EINVAL;
++
++	sel <<= ffs(rdev->desc->csel_mask) - 1;
++
++	return regmap_update_bits(rdev->regmap, rdev->desc->csel_reg,
++				  rdev->desc->csel_mask, sel);
++}
++
++static int rtmv20_lsw_get_current_limit(struct regulator_dev *rdev)
++{
++	unsigned int val;
++	int ret;
++
++	ret = regmap_read(rdev->regmap, rdev->desc->csel_reg, &val);
++	if (ret)
++		return ret;
++
++	val &= rdev->desc->csel_mask;
++	val >>= ffs(rdev->desc->csel_mask) - 1;
++
++	return val * RTMV20_LSW_STEPUA + RTMV20_LSW_MINUA;
++}
++
+ static const struct regulator_ops rtmv20_regulator_ops = {
+-	.set_current_limit = regulator_set_current_limit_regmap,
+-	.get_current_limit = regulator_get_current_limit_regmap,
++	.set_current_limit = rtmv20_lsw_set_current_limit,
++	.get_current_limit = rtmv20_lsw_get_current_limit,
+ 	.enable = rtmv20_lsw_enable,
+ 	.disable = rtmv20_lsw_disable,
+ 	.is_enabled = regulator_is_enabled_regmap,
+diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
+index 8c625b530035f..9b61e9b131ade 100644
+--- a/drivers/s390/cio/vfio_ccw_drv.c
++++ b/drivers/s390/cio/vfio_ccw_drv.c
+@@ -86,6 +86,7 @@ static void vfio_ccw_sch_io_todo(struct work_struct *work)
+ 	struct vfio_ccw_private *private;
+ 	struct irb *irb;
+ 	bool is_final;
++	bool cp_is_finished = false;
+ 
+ 	private = container_of(work, struct vfio_ccw_private, io_work);
+ 	irb = &private->irb;
+@@ -94,14 +95,21 @@ static void vfio_ccw_sch_io_todo(struct work_struct *work)
+ 		     (SCSW_ACTL_DEVACT | SCSW_ACTL_SCHACT));
+ 	if (scsw_is_solicited(&irb->scsw)) {
+ 		cp_update_scsw(&private->cp, &irb->scsw);
+-		if (is_final && private->state == VFIO_CCW_STATE_CP_PENDING)
++		if (is_final && private->state == VFIO_CCW_STATE_CP_PENDING) {
+ 			cp_free(&private->cp);
++			cp_is_finished = true;
++		}
+ 	}
+ 	mutex_lock(&private->io_mutex);
+ 	memcpy(private->io_region->irb_area, irb, sizeof(*irb));
+ 	mutex_unlock(&private->io_mutex);
+ 
+-	if (private->mdev && is_final)
++	/*
++	 * Reset to IDLE only if processing of a channel program
++	 * has finished. Do not overwrite a possible processing
++	 * state if the final interrupt was for HSCH or CSCH.
++	 */
++	if (private->mdev && cp_is_finished)
+ 		private->state = VFIO_CCW_STATE_IDLE;
+ 
+ 	if (private->io_trigger)
+diff --git a/drivers/s390/cio/vfio_ccw_fsm.c b/drivers/s390/cio/vfio_ccw_fsm.c
+index 23e61aa638e4e..e435a9cd92dac 100644
+--- a/drivers/s390/cio/vfio_ccw_fsm.c
++++ b/drivers/s390/cio/vfio_ccw_fsm.c
+@@ -318,6 +318,7 @@ static void fsm_io_request(struct vfio_ccw_private *private,
+ 	}
+ 
+ err_out:
++	private->state = VFIO_CCW_STATE_IDLE;
+ 	trace_vfio_ccw_fsm_io_request(scsw->cmd.fctl, schid,
+ 				      io_region->ret_code, errstr);
+ }
+diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
+index 1ad5f7018ec2d..2280f51dd679d 100644
+--- a/drivers/s390/cio/vfio_ccw_ops.c
++++ b/drivers/s390/cio/vfio_ccw_ops.c
+@@ -276,8 +276,6 @@ static ssize_t vfio_ccw_mdev_write_io_region(struct vfio_ccw_private *private,
+ 	}
+ 
+ 	vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_IO_REQ);
+-	if (region->ret_code != 0)
+-		private->state = VFIO_CCW_STATE_IDLE;
+ 	ret = (region->ret_code != 0) ? region->ret_code : count;
+ 
+ out_unlock:
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_io.c b/drivers/scsi/bnx2fc/bnx2fc_io.c
+index 1a0dc18d69155..ed300a279a387 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_io.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_io.c
+@@ -1220,6 +1220,7 @@ int bnx2fc_eh_abort(struct scsi_cmnd *sc_cmd)
+ 		   was a result from the ABTS request rather than the CLEANUP
+ 		   request */
+ 		set_bit(BNX2FC_FLAG_IO_CLEANUP,	&io_req->req_flags);
++		rc = FAILED;
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 19170c7ac336f..e9a82a390672c 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -3359,14 +3359,14 @@ hisi_sas_v3_destroy_irqs(struct pci_dev *pdev, struct hisi_hba *hisi_hba)
+ {
+ 	int i;
+ 
+-	free_irq(pci_irq_vector(pdev, 1), hisi_hba);
+-	free_irq(pci_irq_vector(pdev, 2), hisi_hba);
+-	free_irq(pci_irq_vector(pdev, 11), hisi_hba);
++	devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 1), hisi_hba);
++	devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 2), hisi_hba);
++	devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 11), hisi_hba);
+ 	for (i = 0; i < hisi_hba->cq_nvecs; i++) {
+ 		struct hisi_sas_cq *cq = &hisi_hba->cq[i];
+ 		int nr = hisi_sas_intr_conv ? 16 : 16 + i;
+ 
+-		free_irq(pci_irq_vector(pdev, nr), cq);
++		devm_free_irq(&pdev->dev, pci_irq_vector(pdev, nr), cq);
+ 	}
+ 	pci_free_irq_vectors(pdev);
+ }
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index 2f162603876f9..b93dd8ef4ac82 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -254,12 +254,11 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+ 
+ 	device_enable_async_suspend(&shost->shost_dev);
+ 
++	get_device(&shost->shost_gendev);
+ 	error = device_add(&shost->shost_dev);
+ 	if (error)
+ 		goto out_del_gendev;
+ 
+-	get_device(&shost->shost_gendev);
+-
+ 	if (shost->transportt->host_size) {
+ 		shost->shost_data = kzalloc(shost->transportt->host_size,
+ 					 GFP_KERNEL);
+@@ -278,33 +277,36 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+ 
+ 		if (!shost->work_q) {
+ 			error = -EINVAL;
+-			goto out_free_shost_data;
++			goto out_del_dev;
+ 		}
+ 	}
+ 
+ 	error = scsi_sysfs_add_host(shost);
+ 	if (error)
+-		goto out_destroy_host;
++		goto out_del_dev;
+ 
+ 	scsi_proc_host_add(shost);
+ 	scsi_autopm_put_host(shost);
+ 	return error;
+ 
+- out_destroy_host:
+-	if (shost->work_q)
+-		destroy_workqueue(shost->work_q);
+- out_free_shost_data:
+-	kfree(shost->shost_data);
++	/*
++	 * Any host allocation in this function will be freed in
++	 * scsi_host_dev_release().
++	 */
+  out_del_dev:
+ 	device_del(&shost->shost_dev);
+  out_del_gendev:
++	/*
++	 * Host state is SHOST_RUNNING so we have to explicitly release
++	 * ->shost_dev.
++	 */
++	put_device(&shost->shost_dev);
+ 	device_del(&shost->shost_gendev);
+  out_disable_runtime_pm:
+ 	device_disable_async_suspend(&shost->shost_gendev);
+ 	pm_runtime_disable(&shost->shost_gendev);
+ 	pm_runtime_set_suspended(&shost->shost_gendev);
+ 	pm_runtime_put_noidle(&shost->shost_gendev);
+-	scsi_mq_destroy_tags(shost);
+  fail:
+ 	return error;
+ }
+@@ -345,7 +347,7 @@ static void scsi_host_dev_release(struct device *dev)
+ 
+ 	ida_simple_remove(&host_index_ida, shost->host_no);
+ 
+-	if (parent)
++	if (shost->shost_state != SHOST_CREATED)
+ 		put_device(parent);
+ 	kfree(shost);
+ }
+@@ -392,8 +394,10 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
+ 	mutex_init(&shost->scan_mutex);
+ 
+ 	index = ida_simple_get(&host_index_ida, 0, 0, GFP_KERNEL);
+-	if (index < 0)
+-		goto fail_kfree;
++	if (index < 0) {
++		kfree(shost);
++		return NULL;
++	}
+ 	shost->host_no = index;
+ 
+ 	shost->dma_channel = 0xff;
+@@ -486,7 +490,7 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
+ 		shost_printk(KERN_WARNING, shost,
+ 			"error handler thread failed to spawn, error = %ld\n",
+ 			PTR_ERR(shost->ehandler));
+-		goto fail_index_remove;
++		goto fail;
+ 	}
+ 
+ 	shost->tmf_work_q = alloc_workqueue("scsi_tmf_%d",
+@@ -495,17 +499,18 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
+ 	if (!shost->tmf_work_q) {
+ 		shost_printk(KERN_WARNING, shost,
+ 			     "failed to create tmf workq\n");
+-		goto fail_kthread;
++		goto fail;
+ 	}
+ 	scsi_proc_hostdir_add(shost->hostt);
+ 	return shost;
++ fail:
++	/*
++	 * Host state is still SHOST_CREATED and that is enough to release
++	 * ->shost_gendev. scsi_host_dev_release() will free
++	 * dev_name(&shost->shost_dev).
++	 */
++	put_device(&shost->shost_gendev);
+ 
+- fail_kthread:
+-	kthread_stop(shost->ehandler);
+- fail_index_remove:
+-	ida_simple_remove(&host_index_ida, shost->host_no);
+- fail_kfree:
+-	kfree(shost);
+ 	return NULL;
+ }
+ EXPORT_SYMBOL(scsi_host_alloc);
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index dcae8f071c355..8d4976725a75a 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -1559,10 +1559,12 @@ void qlt_stop_phase2(struct qla_tgt *tgt)
+ 		return;
+ 	}
+ 
++	mutex_lock(&tgt->ha->optrom_mutex);
+ 	mutex_lock(&vha->vha_tgt.tgt_mutex);
+ 	tgt->tgt_stop = 0;
+ 	tgt->tgt_stopped = 1;
+ 	mutex_unlock(&vha->vha_tgt.tgt_mutex);
++	mutex_unlock(&tgt->ha->optrom_mutex);
+ 
+ 	ql_dbg(ql_dbg_tgt_mgt, vha, 0xf00c, "Stop of tgt %p finished\n",
+ 	    tgt);
+diff --git a/drivers/scsi/vmw_pvscsi.c b/drivers/scsi/vmw_pvscsi.c
+index 081f54ab7d86c..1421b1394d816 100644
+--- a/drivers/scsi/vmw_pvscsi.c
++++ b/drivers/scsi/vmw_pvscsi.c
+@@ -587,7 +587,13 @@ static void pvscsi_complete_request(struct pvscsi_adapter *adapter,
+ 		case BTSTAT_SUCCESS:
+ 		case BTSTAT_LINKED_COMMAND_COMPLETED:
+ 		case BTSTAT_LINKED_COMMAND_COMPLETED_WITH_FLAG:
+-			/* If everything went fine, let's move on..  */
++			/*
++			 * Commands like INQUIRY may transfer less data than
++			 * requested by the initiator via bufflen. Set residual
++			 * count to make upper layer aware of the actual amount
++			 * of data returned.
++			 */
++			scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen);
+ 			cmd->result = (DID_OK << 16);
+ 			break;
+ 
+diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c
+index 197485f2c2b22..29ee555a42f90 100644
+--- a/drivers/spi/spi-bcm2835.c
++++ b/drivers/spi/spi-bcm2835.c
+@@ -68,7 +68,7 @@
+ #define BCM2835_SPI_FIFO_SIZE		64
+ #define BCM2835_SPI_FIFO_SIZE_3_4	48
+ #define BCM2835_SPI_DMA_MIN_LENGTH	96
+-#define BCM2835_SPI_NUM_CS		4   /* raise as necessary */
++#define BCM2835_SPI_NUM_CS		24  /* raise as necessary */
+ #define BCM2835_SPI_MODE_BITS	(SPI_CPOL | SPI_CPHA | SPI_CS_HIGH \
+ 				| SPI_NO_CS | SPI_3WIRE)
+ 
+@@ -1195,6 +1195,12 @@ static int bcm2835_spi_setup(struct spi_device *spi)
+ 	struct gpio_chip *chip;
+ 	u32 cs;
+ 
++	if (spi->chip_select >= BCM2835_SPI_NUM_CS) {
++		dev_err(&spi->dev, "only %d chip-selects supported\n",
++			BCM2835_SPI_NUM_CS - 1);
++		return -EINVAL;
++	}
++
+ 	/*
+ 	 * Precalculate SPI slave's CS register value for ->prepare_message():
+ 	 * The driver always uses software-controlled GPIO chip select, hence
+@@ -1288,7 +1294,7 @@ static int bcm2835_spi_probe(struct platform_device *pdev)
+ 	ctlr->use_gpio_descriptors = true;
+ 	ctlr->mode_bits = BCM2835_SPI_MODE_BITS;
+ 	ctlr->bits_per_word_mask = SPI_BPW_MASK(8);
+-	ctlr->num_chipselect = BCM2835_SPI_NUM_CS;
++	ctlr->num_chipselect = 3;
+ 	ctlr->setup = bcm2835_spi_setup;
+ 	ctlr->transfer_one = bcm2835_spi_transfer_one;
+ 	ctlr->handle_err = bcm2835_spi_handle_err;
+diff --git a/drivers/spi/spi-bitbang.c b/drivers/spi/spi-bitbang.c
+index 1a7352abd8786..3d8948a17095b 100644
+--- a/drivers/spi/spi-bitbang.c
++++ b/drivers/spi/spi-bitbang.c
+@@ -181,6 +181,8 @@ int spi_bitbang_setup(struct spi_device *spi)
+ {
+ 	struct spi_bitbang_cs	*cs = spi->controller_state;
+ 	struct spi_bitbang	*bitbang;
++	bool			initial_setup = false;
++	int			retval;
+ 
+ 	bitbang = spi_master_get_devdata(spi->master);
+ 
+@@ -189,22 +191,30 @@ int spi_bitbang_setup(struct spi_device *spi)
+ 		if (!cs)
+ 			return -ENOMEM;
+ 		spi->controller_state = cs;
++		initial_setup = true;
+ 	}
+ 
+ 	/* per-word shift register access, in hardware or bitbanging */
+ 	cs->txrx_word = bitbang->txrx_word[spi->mode & (SPI_CPOL|SPI_CPHA)];
+-	if (!cs->txrx_word)
+-		return -EINVAL;
++	if (!cs->txrx_word) {
++		retval = -EINVAL;
++		goto err_free;
++	}
+ 
+ 	if (bitbang->setup_transfer) {
+-		int retval = bitbang->setup_transfer(spi, NULL);
++		retval = bitbang->setup_transfer(spi, NULL);
+ 		if (retval < 0)
+-			return retval;
++			goto err_free;
+ 	}
+ 
+ 	dev_dbg(&spi->dev, "%s, %u nsec/bit\n", __func__, 2 * cs->nsecs);
+ 
+ 	return 0;
++
++err_free:
++	if (initial_setup)
++		kfree(cs);
++	return retval;
+ }
+ EXPORT_SYMBOL_GPL(spi_bitbang_setup);
+ 
+diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
+index d0e5aa18b7bad..bdf94cc7be1af 100644
+--- a/drivers/spi/spi-fsl-spi.c
++++ b/drivers/spi/spi-fsl-spi.c
+@@ -440,6 +440,7 @@ static int fsl_spi_setup(struct spi_device *spi)
+ {
+ 	struct mpc8xxx_spi *mpc8xxx_spi;
+ 	struct fsl_spi_reg __iomem *reg_base;
++	bool initial_setup = false;
+ 	int retval;
+ 	u32 hw_mode;
+ 	struct spi_mpc8xxx_cs *cs = spi_get_ctldata(spi);
+@@ -452,6 +453,7 @@ static int fsl_spi_setup(struct spi_device *spi)
+ 		if (!cs)
+ 			return -ENOMEM;
+ 		spi_set_ctldata(spi, cs);
++		initial_setup = true;
+ 	}
+ 	mpc8xxx_spi = spi_master_get_devdata(spi->master);
+ 
+@@ -475,6 +477,8 @@ static int fsl_spi_setup(struct spi_device *spi)
+ 	retval = fsl_spi_setup_transfer(spi, NULL);
+ 	if (retval < 0) {
+ 		cs->hw_mode = hw_mode; /* Restore settings */
++		if (initial_setup)
++			kfree(cs);
+ 		return retval;
+ 	}
+ 
+diff --git a/drivers/spi/spi-omap-uwire.c b/drivers/spi/spi-omap-uwire.c
+index 71402f71ddd85..df28c6664aba6 100644
+--- a/drivers/spi/spi-omap-uwire.c
++++ b/drivers/spi/spi-omap-uwire.c
+@@ -424,15 +424,22 @@ done:
+ static int uwire_setup(struct spi_device *spi)
+ {
+ 	struct uwire_state *ust = spi->controller_state;
++	bool initial_setup = false;
++	int status;
+ 
+ 	if (ust == NULL) {
+ 		ust = kzalloc(sizeof(*ust), GFP_KERNEL);
+ 		if (ust == NULL)
+ 			return -ENOMEM;
+ 		spi->controller_state = ust;
++		initial_setup = true;
+ 	}
+ 
+-	return uwire_setup_transfer(spi, NULL);
++	status = uwire_setup_transfer(spi, NULL);
++	if (status && initial_setup)
++		kfree(ust);
++
++	return status;
+ }
+ 
+ static void uwire_cleanup(struct spi_device *spi)
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index d4c9510af3931..3596bbe4b7760 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -1032,8 +1032,22 @@ static void omap2_mcspi_release_dma(struct spi_master *master)
+ 	}
+ }
+ 
++static void omap2_mcspi_cleanup(struct spi_device *spi)
++{
++	struct omap2_mcspi_cs	*cs;
++
++	if (spi->controller_state) {
++		/* Unlink controller state from context save list */
++		cs = spi->controller_state;
++		list_del(&cs->node);
++
++		kfree(cs);
++	}
++}
++
+ static int omap2_mcspi_setup(struct spi_device *spi)
+ {
++	bool			initial_setup = false;
+ 	int			ret;
+ 	struct omap2_mcspi	*mcspi = spi_master_get_devdata(spi->master);
+ 	struct omap2_mcspi_regs	*ctx = &mcspi->ctx;
+@@ -1051,35 +1065,28 @@ static int omap2_mcspi_setup(struct spi_device *spi)
+ 		spi->controller_state = cs;
+ 		/* Link this to context save list */
+ 		list_add_tail(&cs->node, &ctx->cs);
++		initial_setup = true;
+ 	}
+ 
+ 	ret = pm_runtime_get_sync(mcspi->dev);
+ 	if (ret < 0) {
+ 		pm_runtime_put_noidle(mcspi->dev);
++		if (initial_setup)
++			omap2_mcspi_cleanup(spi);
+ 
+ 		return ret;
+ 	}
+ 
+ 	ret = omap2_mcspi_setup_transfer(spi, NULL);
++	if (ret && initial_setup)
++		omap2_mcspi_cleanup(spi);
++
+ 	pm_runtime_mark_last_busy(mcspi->dev);
+ 	pm_runtime_put_autosuspend(mcspi->dev);
+ 
+ 	return ret;
+ }
+ 
+-static void omap2_mcspi_cleanup(struct spi_device *spi)
+-{
+-	struct omap2_mcspi_cs	*cs;
+-
+-	if (spi->controller_state) {
+-		/* Unlink controller state from context save list */
+-		cs = spi->controller_state;
+-		list_del(&cs->node);
+-
+-		kfree(cs);
+-	}
+-}
+-
+ static irqreturn_t omap2_mcspi_irq_handler(int irq, void *data)
+ {
+ 	struct omap2_mcspi *mcspi = data;
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index d6b534d38e5da..56a62095ec8c2 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -1254,6 +1254,8 @@ static int setup_cs(struct spi_device *spi, struct chip_data *chip,
+ 		chip->gpio_cs_inverted = spi->mode & SPI_CS_HIGH;
+ 
+ 		err = gpiod_direction_output(gpiod, !chip->gpio_cs_inverted);
++		if (err)
++			gpiod_put(chip->gpiod_cs);
+ 	}
+ 
+ 	return err;
+@@ -1267,6 +1269,7 @@ static int setup(struct spi_device *spi)
+ 	struct driver_data *drv_data =
+ 		spi_controller_get_devdata(spi->controller);
+ 	uint tx_thres, tx_hi_thres, rx_thres;
++	int err;
+ 
+ 	switch (drv_data->ssp_type) {
+ 	case QUARK_X1000_SSP:
+@@ -1413,7 +1416,11 @@ static int setup(struct spi_device *spi)
+ 	if (drv_data->ssp_type == CE4100_SSP)
+ 		return 0;
+ 
+-	return setup_cs(spi, chip, chip_info);
++	err = setup_cs(spi, chip, chip_info);
++	if (err)
++		kfree(chip);
++
++	return err;
+ }
+ 
+ static void cleanup(struct spi_device *spi)
+diff --git a/drivers/spi/spi-sprd.c b/drivers/spi/spi-sprd.c
+index b41a75749b498..28e70db9bbba8 100644
+--- a/drivers/spi/spi-sprd.c
++++ b/drivers/spi/spi-sprd.c
+@@ -1068,6 +1068,7 @@ static const struct of_device_id sprd_spi_of_match[] = {
+ 	{ .compatible = "sprd,sc9860-spi", },
+ 	{ /* sentinel */ }
+ };
++MODULE_DEVICE_TABLE(of, sprd_spi_of_match);
+ 
+ static struct platform_driver sprd_spi_driver = {
+ 	.driver = {
+diff --git a/drivers/spi/spi-zynq-qspi.c b/drivers/spi/spi-zynq-qspi.c
+index 5d8a5ee62fa23..2765289028fae 100644
+--- a/drivers/spi/spi-zynq-qspi.c
++++ b/drivers/spi/spi-zynq-qspi.c
+@@ -528,18 +528,17 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 	struct zynq_qspi *xqspi = spi_controller_get_devdata(mem->spi->master);
+ 	int err = 0, i;
+ 	u8 *tmpbuf;
+-	u8 opcode = op->cmd.opcode;
+ 
+ 	dev_dbg(xqspi->dev, "cmd:%#x mode:%d.%d.%d.%d\n",
+-		opcode, op->cmd.buswidth, op->addr.buswidth,
++		op->cmd.opcode, op->cmd.buswidth, op->addr.buswidth,
+ 		op->dummy.buswidth, op->data.buswidth);
+ 
+ 	zynq_qspi_chipselect(mem->spi, true);
+ 	zynq_qspi_config_op(xqspi, mem->spi);
+ 
+-	if (op->cmd.nbytes) {
++	if (op->cmd.opcode) {
+ 		reinit_completion(&xqspi->data_completion);
+-		xqspi->txbuf = &opcode;
++		xqspi->txbuf = (u8 *)&op->cmd.opcode;
+ 		xqspi->rxbuf = NULL;
+ 		xqspi->tx_bytes = op->cmd.nbytes;
+ 		xqspi->rx_bytes = op->cmd.nbytes;
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index a6f1e94af13c5..0cf67de741e78 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -47,10 +47,6 @@ static void spidev_release(struct device *dev)
+ {
+ 	struct spi_device	*spi = to_spi_device(dev);
+ 
+-	/* spi controllers may cleanup for released devices */
+-	if (spi->controller->cleanup)
+-		spi->controller->cleanup(spi);
+-
+ 	spi_controller_put(spi->controller);
+ 	kfree(spi->driver_override);
+ 	kfree(spi);
+@@ -550,6 +546,12 @@ static int spi_dev_check(struct device *dev, void *data)
+ 	return 0;
+ }
+ 
++static void spi_cleanup(struct spi_device *spi)
++{
++	if (spi->controller->cleanup)
++		spi->controller->cleanup(spi);
++}
++
+ /**
+  * spi_add_device - Add spi_device allocated with spi_alloc_device
+  * @spi: spi_device to register
+@@ -614,11 +616,13 @@ int spi_add_device(struct spi_device *spi)
+ 
+ 	/* Device may be bound to an active driver when this returns */
+ 	status = device_add(&spi->dev);
+-	if (status < 0)
++	if (status < 0) {
+ 		dev_err(dev, "can't add %s, status %d\n",
+ 				dev_name(&spi->dev), status);
+-	else
++		spi_cleanup(spi);
++	} else {
+ 		dev_dbg(dev, "registered child %s\n", dev_name(&spi->dev));
++	}
+ 
+ done:
+ 	mutex_unlock(&spi_add_lock);
+@@ -711,7 +715,9 @@ void spi_unregister_device(struct spi_device *spi)
+ 	}
+ 	if (ACPI_COMPANION(&spi->dev))
+ 		acpi_device_clear_enumerated(ACPI_COMPANION(&spi->dev));
+-	device_unregister(&spi->dev);
++	device_del(&spi->dev);
++	spi_cleanup(spi);
++	put_device(&spi->dev);
+ }
+ EXPORT_SYMBOL_GPL(spi_unregister_device);
+ 
+diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c b/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
+index ea3ae3d38337e..b7993e25764d5 100644
+--- a/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
++++ b/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
+@@ -2384,7 +2384,7 @@ void rtw_cfg80211_indicate_sta_assoc(struct adapter *padapter, u8 *pmgmt_frame,
+ 	DBG_871X(FUNC_ADPT_FMT"\n", FUNC_ADPT_ARG(padapter));
+ 
+ 	{
+-		struct station_info sinfo;
++		struct station_info sinfo = {};
+ 		u8 ie_offset;
+ 		if (GetFrameSubType(pmgmt_frame) == WIFI_ASSOCREQ)
+ 			ie_offset = _ASOCREQ_IE_OFFSET_;
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index 0aa85cc07ff19..c24c0e3440e39 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -3255,8 +3255,10 @@ static int __cdns3_gadget_init(struct cdns3 *cdns)
+ 	pm_runtime_get_sync(cdns->dev);
+ 
+ 	ret = cdns3_gadget_start(cdns);
+-	if (ret)
++	if (ret) {
++		pm_runtime_put_sync(cdns->dev);
+ 		return ret;
++	}
+ 
+ 	/*
+ 	 * Because interrupt line can be shared with other components in
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index 60ea932afe2b8..5f35cdd2cf1dd 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -2055,6 +2055,7 @@ static int udc_start(struct ci_hdrc *ci)
+ 	ci->gadget.name         = ci->platdata->name;
+ 	ci->gadget.otg_caps	= otg_caps;
+ 	ci->gadget.sg_supported = 1;
++	ci->gadget.irq		= ci->irq;
+ 
+ 	if (ci->platdata->flags & CI_HDRC_REQUIRES_ALIGNED_DMA)
+ 		ci->gadget.quirk_avoids_skb_reserve = 1;
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index bdf1f98dfad8c..ffe301d6ea359 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -651,7 +651,7 @@ static int dwc3_meson_g12a_setup_regmaps(struct dwc3_meson_g12a *priv,
+ 		return PTR_ERR(priv->usb_glue_regmap);
+ 
+ 	/* Create a regmap for each USB2 PHY control register set */
+-	for (i = 0; i < priv->usb2_ports; i++) {
++	for (i = 0; i < priv->drvdata->num_phys; i++) {
+ 		struct regmap_config u2p_regmap_config = {
+ 			.reg_bits = 8,
+ 			.val_bits = 32,
+@@ -659,6 +659,9 @@ static int dwc3_meson_g12a_setup_regmaps(struct dwc3_meson_g12a *priv,
+ 			.max_register = U2P_R1,
+ 		};
+ 
++		if (!strstr(priv->drvdata->phy_names[i], "usb2"))
++			continue;
++
+ 		u2p_regmap_config.name = devm_kasprintf(priv->dev, GFP_KERNEL,
+ 							"u2p-%d", i);
+ 		if (!u2p_regmap_config.name)
+@@ -772,13 +775,13 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ 
+ 	ret = priv->drvdata->usb_init(priv);
+ 	if (ret)
+-		goto err_disable_clks;
++		goto err_disable_regulator;
+ 
+ 	/* Init PHYs */
+ 	for (i = 0 ; i < PHY_COUNT ; ++i) {
+ 		ret = phy_init(priv->phys[i]);
+ 		if (ret)
+-			goto err_disable_clks;
++			goto err_disable_regulator;
+ 	}
+ 
+ 	/* Set PHY Power */
+@@ -816,6 +819,10 @@ err_phys_exit:
+ 	for (i = 0 ; i < PHY_COUNT ; ++i)
+ 		phy_exit(priv->phys[i]);
+ 
++err_disable_regulator:
++	if (priv->vbus)
++		regulator_disable(priv->vbus);
++
+ err_disable_clks:
+ 	clk_bulk_disable_unprepare(priv->drvdata->num_clks,
+ 				   priv->drvdata->clks);
+diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
+index 8b668ef46f7f1..3cd2942643725 100644
+--- a/drivers/usb/dwc3/ep0.c
++++ b/drivers/usb/dwc3/ep0.c
+@@ -292,6 +292,9 @@ static struct dwc3_ep *dwc3_wIndex_to_dep(struct dwc3 *dwc, __le16 wIndex_le)
+ 		epnum |= 1;
+ 
+ 	dep = dwc->eps[epnum];
++	if (dep == NULL)
++		return NULL;
++
+ 	if (dep->flags & DWC3_EP_ENABLED)
+ 		return dep;
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index ead877e7c87f9..8bccdd7b0ca2e 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2143,13 +2143,10 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 	}
+ 
+ 	/*
+-	 * Synchronize any pending event handling before executing the controller
+-	 * halt routine.
++	 * Synchronize and disable any further event handling while controller
++	 * is being enabled/disabled.
+ 	 */
+-	if (!is_on) {
+-		dwc3_gadget_disable_irq(dwc);
+-		synchronize_irq(dwc->irq_gadget);
+-	}
++	disable_irq(dwc->irq_gadget);
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 
+@@ -2187,6 +2184,8 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 
+ 	ret = dwc3_gadget_run_stop(dwc, is_on, false);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
++	enable_irq(dwc->irq_gadget);
++
+ 	pm_runtime_put(dwc->dev);
+ 
+ 	return ret;
+@@ -3936,6 +3935,7 @@ err5:
+ 	dwc3_gadget_free_endpoints(dwc);
+ err4:
+ 	usb_put_gadget(dwc->gadget);
++	dwc->gadget = NULL;
+ err3:
+ 	dma_free_coherent(dwc->sysdev, DWC3_BOUNCE_SIZE, dwc->bounce,
+ 			dwc->bounce_addr);
+@@ -3955,6 +3955,9 @@ err0:
+ 
+ void dwc3_gadget_exit(struct dwc3 *dwc)
+ {
++	if (!dwc->gadget)
++		return;
++
+ 	usb_del_gadget(dwc->gadget);
+ 	dwc3_gadget_free_endpoints(dwc);
+ 	usb_put_gadget(dwc->gadget);
+diff --git a/drivers/usb/gadget/config.c b/drivers/usb/gadget/config.c
+index 8bb25773b61e9..05507606b2b42 100644
+--- a/drivers/usb/gadget/config.c
++++ b/drivers/usb/gadget/config.c
+@@ -164,6 +164,14 @@ int usb_assign_descriptors(struct usb_function *f,
+ {
+ 	struct usb_gadget *g = f->config->cdev->gadget;
+ 
++	/* super-speed-plus descriptor falls back to super-speed one,
++	 * if such a descriptor was provided, thus avoiding a NULL
++	 * pointer dereference if a 5gbps capable gadget is used with
++	 * a 10gbps capable config (device port + cable + host port)
++	 */
++	if (!ssp)
++		ssp = ss;
++
+ 	if (fs) {
+ 		f->fs_descriptors = usb_copy_descriptors(fs);
+ 		if (!f->fs_descriptors)
+diff --git a/drivers/usb/gadget/function/f_ecm.c b/drivers/usb/gadget/function/f_ecm.c
+index 7f5cf488b2b1e..ffe2486fce71c 100644
+--- a/drivers/usb/gadget/function/f_ecm.c
++++ b/drivers/usb/gadget/function/f_ecm.c
+@@ -791,7 +791,7 @@ ecm_bind(struct usb_configuration *c, struct usb_function *f)
+ 		fs_ecm_notify_desc.bEndpointAddress;
+ 
+ 	status = usb_assign_descriptors(f, ecm_fs_function, ecm_hs_function,
+-			ecm_ss_function, NULL);
++			ecm_ss_function, ecm_ss_function);
+ 	if (status)
+ 		goto fail;
+ 
+diff --git a/drivers/usb/gadget/function/f_eem.c b/drivers/usb/gadget/function/f_eem.c
+index cfcc4e81fb776..2cd9942707b46 100644
+--- a/drivers/usb/gadget/function/f_eem.c
++++ b/drivers/usb/gadget/function/f_eem.c
+@@ -302,7 +302,7 @@ static int eem_bind(struct usb_configuration *c, struct usb_function *f)
+ 	eem_ss_out_desc.bEndpointAddress = eem_fs_out_desc.bEndpointAddress;
+ 
+ 	status = usb_assign_descriptors(f, eem_fs_function, eem_hs_function,
+-			eem_ss_function, NULL);
++			eem_ss_function, eem_ss_function);
+ 	if (status)
+ 		goto fail;
+ 
+@@ -495,7 +495,7 @@ static int eem_unwrap(struct gether *port,
+ 			skb2 = skb_clone(skb, GFP_ATOMIC);
+ 			if (unlikely(!skb2)) {
+ 				DBG(cdev, "unable to unframe EEM packet\n");
+-				continue;
++				goto next;
+ 			}
+ 			skb_trim(skb2, len - ETH_FCS_LEN);
+ 
+@@ -505,7 +505,7 @@ static int eem_unwrap(struct gether *port,
+ 						GFP_ATOMIC);
+ 			if (unlikely(!skb3)) {
+ 				dev_kfree_skb_any(skb2);
+-				continue;
++				goto next;
+ 			}
+ 			dev_kfree_skb_any(skb2);
+ 			skb_queue_tail(list, skb3);
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index ffe67d836b0ce..7df180b110afc 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -3566,6 +3566,9 @@ static void ffs_func_unbind(struct usb_configuration *c,
+ 		ffs->func = NULL;
+ 	}
+ 
++	/* Drain any pending AIO completions */
++	drain_workqueue(ffs->io_completion_wq);
++
+ 	if (!--opts->refcnt)
+ 		functionfs_unbind(ffs);
+ 
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index 1125f4715830d..e556993081170 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -802,7 +802,8 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 		hidg_fs_out_ep_desc.bEndpointAddress;
+ 
+ 	status = usb_assign_descriptors(f, hidg_fs_descriptors,
+-			hidg_hs_descriptors, hidg_ss_descriptors, NULL);
++			hidg_hs_descriptors, hidg_ss_descriptors,
++			hidg_ss_descriptors);
+ 	if (status)
+ 		goto fail;
+ 
+diff --git a/drivers/usb/gadget/function/f_loopback.c b/drivers/usb/gadget/function/f_loopback.c
+index 1803646b36780..90215a81c178b 100644
+--- a/drivers/usb/gadget/function/f_loopback.c
++++ b/drivers/usb/gadget/function/f_loopback.c
+@@ -207,7 +207,7 @@ autoconf_fail:
+ 	ss_loop_sink_desc.bEndpointAddress = fs_loop_sink_desc.bEndpointAddress;
+ 
+ 	ret = usb_assign_descriptors(f, fs_loopback_descs, hs_loopback_descs,
+-			ss_loopback_descs, NULL);
++			ss_loopback_descs, ss_loopback_descs);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
+index 019bea8e09cce..855127249f242 100644
+--- a/drivers/usb/gadget/function/f_ncm.c
++++ b/drivers/usb/gadget/function/f_ncm.c
+@@ -583,7 +583,7 @@ static void ncm_do_notify(struct f_ncm *ncm)
+ 		data[0] = cpu_to_le32(ncm_bitrate(cdev->gadget));
+ 		data[1] = data[0];
+ 
+-		DBG(cdev, "notify speed %d\n", ncm_bitrate(cdev->gadget));
++		DBG(cdev, "notify speed %u\n", ncm_bitrate(cdev->gadget));
+ 		ncm->notify_state = NCM_NOTIFY_CONNECT;
+ 		break;
+ 	}
+@@ -1101,11 +1101,11 @@ static struct sk_buff *ncm_wrap_ntb(struct gether *port,
+ 			ncm->ndp_dgram_count = 1;
+ 
+ 			/* Note: we skip opts->next_ndp_index */
+-		}
+ 
+-		/* Delay the timer. */
+-		hrtimer_start(&ncm->task_timer, TX_TIMEOUT_NSECS,
+-			      HRTIMER_MODE_REL_SOFT);
++			/* Start the timer. */
++			hrtimer_start(&ncm->task_timer, TX_TIMEOUT_NSECS,
++				      HRTIMER_MODE_REL_SOFT);
++		}
+ 
+ 		/* Add the datagram position entries */
+ 		ntb_ndp = skb_put_zero(ncm->skb_tx_ndp, dgram_idx_len);
+diff --git a/drivers/usb/gadget/function/f_printer.c b/drivers/usb/gadget/function/f_printer.c
+index 2f1eb2e81d306..236ecc9689985 100644
+--- a/drivers/usb/gadget/function/f_printer.c
++++ b/drivers/usb/gadget/function/f_printer.c
+@@ -1099,7 +1099,8 @@ autoconf_fail:
+ 	ss_ep_out_desc.bEndpointAddress = fs_ep_out_desc.bEndpointAddress;
+ 
+ 	ret = usb_assign_descriptors(f, fs_printer_function,
+-			hs_printer_function, ss_printer_function, NULL);
++			hs_printer_function, ss_printer_function,
++			ss_printer_function);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/usb/gadget/function/f_rndis.c b/drivers/usb/gadget/function/f_rndis.c
+index 0739b05a0ef7b..ee95e8f5f9d48 100644
+--- a/drivers/usb/gadget/function/f_rndis.c
++++ b/drivers/usb/gadget/function/f_rndis.c
+@@ -789,7 +789,7 @@ rndis_bind(struct usb_configuration *c, struct usb_function *f)
+ 	ss_notify_desc.bEndpointAddress = fs_notify_desc.bEndpointAddress;
+ 
+ 	status = usb_assign_descriptors(f, eth_fs_function, eth_hs_function,
+-			eth_ss_function, NULL);
++			eth_ss_function, eth_ss_function);
+ 	if (status)
+ 		goto fail;
+ 
+diff --git a/drivers/usb/gadget/function/f_serial.c b/drivers/usb/gadget/function/f_serial.c
+index e627138463504..1ed8ff0ac2d31 100644
+--- a/drivers/usb/gadget/function/f_serial.c
++++ b/drivers/usb/gadget/function/f_serial.c
+@@ -233,7 +233,7 @@ static int gser_bind(struct usb_configuration *c, struct usb_function *f)
+ 	gser_ss_out_desc.bEndpointAddress = gser_fs_out_desc.bEndpointAddress;
+ 
+ 	status = usb_assign_descriptors(f, gser_fs_function, gser_hs_function,
+-			gser_ss_function, NULL);
++			gser_ss_function, gser_ss_function);
+ 	if (status)
+ 		goto fail;
+ 	dev_dbg(&cdev->gadget->dev, "generic ttyGS%d: %s speed IN/%s OUT/%s\n",
+diff --git a/drivers/usb/gadget/function/f_sourcesink.c b/drivers/usb/gadget/function/f_sourcesink.c
+index ed68a4860b7d8..282737e4609ce 100644
+--- a/drivers/usb/gadget/function/f_sourcesink.c
++++ b/drivers/usb/gadget/function/f_sourcesink.c
+@@ -431,7 +431,8 @@ no_iso:
+ 	ss_iso_sink_desc.bEndpointAddress = fs_iso_sink_desc.bEndpointAddress;
+ 
+ 	ret = usb_assign_descriptors(f, fs_source_sink_descs,
+-			hs_source_sink_descs, ss_source_sink_descs, NULL);
++			hs_source_sink_descs, ss_source_sink_descs,
++			ss_source_sink_descs);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/usb/gadget/function/f_subset.c b/drivers/usb/gadget/function/f_subset.c
+index 4d945254905d9..51c1cae162d9b 100644
+--- a/drivers/usb/gadget/function/f_subset.c
++++ b/drivers/usb/gadget/function/f_subset.c
+@@ -358,7 +358,7 @@ geth_bind(struct usb_configuration *c, struct usb_function *f)
+ 		fs_subset_out_desc.bEndpointAddress;
+ 
+ 	status = usb_assign_descriptors(f, fs_eth_function, hs_eth_function,
+-			ss_eth_function, NULL);
++			ss_eth_function, ss_eth_function);
+ 	if (status)
+ 		goto fail;
+ 
+diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c
+index 410fa89eae8f6..5a2e9ce2bc352 100644
+--- a/drivers/usb/gadget/function/f_tcm.c
++++ b/drivers/usb/gadget/function/f_tcm.c
+@@ -2061,7 +2061,8 @@ static int tcm_bind(struct usb_configuration *c, struct usb_function *f)
+ 	uasp_fs_cmd_desc.bEndpointAddress = uasp_ss_cmd_desc.bEndpointAddress;
+ 
+ 	ret = usb_assign_descriptors(f, uasp_fs_function_desc,
+-			uasp_hs_function_desc, uasp_ss_function_desc, NULL);
++			uasp_hs_function_desc, uasp_ss_function_desc,
++			uasp_ss_function_desc);
+ 	if (ret)
+ 		goto ep_fail;
+ 
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index 8f09a387b7738..4c8f0112481f3 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -2009,9 +2009,8 @@ static void musb_pm_runtime_check_session(struct musb *musb)
+ 			schedule_delayed_work(&musb->irq_work,
+ 					      msecs_to_jiffies(1000));
+ 			musb->quirk_retries--;
+-			break;
+ 		}
+-		fallthrough;
++		break;
+ 	case MUSB_QUIRK_B_INVALID_VBUS_91:
+ 		if (musb->quirk_retries && !musb->flush_irq_work) {
+ 			musb_dbg(musb,
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index b5f4e584f3c9e..28a728f883bc5 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -533,6 +533,12 @@ struct cp210x_single_port_config {
+ #define CP210X_2NCONFIG_GPIO_RSTLATCH_IDX	587
+ #define CP210X_2NCONFIG_GPIO_CONTROL_IDX	600
+ 
++/* CP2102N QFN20 port configuration values */
++#define CP2102N_QFN20_GPIO2_TXLED_MODE		BIT(2)
++#define CP2102N_QFN20_GPIO3_RXLED_MODE		BIT(3)
++#define CP2102N_QFN20_GPIO1_RS485_MODE		BIT(4)
++#define CP2102N_QFN20_GPIO0_CLK_MODE		BIT(6)
++
+ /* CP210X_VENDOR_SPECIFIC, CP210X_WRITE_LATCH call writes these 0x2 bytes. */
+ struct cp210x_gpio_write {
+ 	u8	mask;
+@@ -1884,7 +1890,19 @@ static int cp2102n_gpioconf_init(struct usb_serial *serial)
+ 	priv->gpio_pushpull = (gpio_pushpull >> 3) & 0x0f;
+ 
+ 	/* 0 indicates GPIO mode, 1 is alternate function */
+-	priv->gpio_altfunc = (gpio_ctrl >> 2) & 0x0f;
++	if (priv->partnum == CP210X_PARTNUM_CP2102N_QFN20) {
++		/* QFN20 is special... */
++		if (gpio_ctrl & CP2102N_QFN20_GPIO0_CLK_MODE)   /* GPIO 0 */
++			priv->gpio_altfunc |= BIT(0);
++		if (gpio_ctrl & CP2102N_QFN20_GPIO1_RS485_MODE) /* GPIO 1 */
++			priv->gpio_altfunc |= BIT(1);
++		if (gpio_ctrl & CP2102N_QFN20_GPIO2_TXLED_MODE) /* GPIO 2 */
++			priv->gpio_altfunc |= BIT(2);
++		if (gpio_ctrl & CP2102N_QFN20_GPIO3_RXLED_MODE) /* GPIO 3 */
++			priv->gpio_altfunc |= BIT(3);
++	} else {
++		priv->gpio_altfunc = (gpio_ctrl >> 2) & 0x0f;
++	}
+ 
+ 	if (priv->partnum == CP210X_PARTNUM_CP2102N_QFN28) {
+ 		/*
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 7c64b6ee5c194..1aef9b1e1c4eb 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -611,6 +611,7 @@ static const struct usb_device_id id_table_combined[] = {
+ 		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLX_PLUS_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORION_IO_PID) },
++	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORIONMX_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_SYNAPSE_SS200_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX2_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index d854e04a4286e..add602bebd820 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -581,6 +581,7 @@
+ #define FTDI_NT_ORIONLXM_PID		0x7c90	/* OrionLXm Substation Automation Platform */
+ #define FTDI_NT_ORIONLX_PLUS_PID	0x7c91	/* OrionLX+ Substation Automation Platform */
+ #define FTDI_NT_ORION_IO_PID		0x7c92	/* Orion I/O */
++#define FTDI_NT_ORIONMX_PID		0x7c93	/* OrionMX */
+ 
+ /*
+  * Synapse Wireless product ids (FTDI_VID)
+diff --git a/drivers/usb/serial/omninet.c b/drivers/usb/serial/omninet.c
+index 5b6e982a9376b..ff02eff704162 100644
+--- a/drivers/usb/serial/omninet.c
++++ b/drivers/usb/serial/omninet.c
+@@ -26,6 +26,7 @@
+ 
+ #define ZYXEL_VENDOR_ID		0x0586
+ #define ZYXEL_OMNINET_ID	0x1000
++#define ZYXEL_OMNI_56K_PLUS_ID	0x1500
+ /* This one seems to be a re-branded ZyXEL device */
+ #define BT_IGNITIONPRO_ID	0x2000
+ 
+@@ -40,6 +41,7 @@ static int omninet_port_remove(struct usb_serial_port *port);
+ 
+ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(ZYXEL_VENDOR_ID, ZYXEL_OMNINET_ID) },
++	{ USB_DEVICE(ZYXEL_VENDOR_ID, ZYXEL_OMNI_56K_PLUS_ID) },
+ 	{ USB_DEVICE(ZYXEL_VENDOR_ID, BT_IGNITIONPRO_ID) },
+ 	{ }						/* Terminating entry */
+ };
+diff --git a/drivers/usb/serial/quatech2.c b/drivers/usb/serial/quatech2.c
+index 872d1bc86ab43..a2c3c0944f996 100644
+--- a/drivers/usb/serial/quatech2.c
++++ b/drivers/usb/serial/quatech2.c
+@@ -416,7 +416,7 @@ static void qt2_close(struct usb_serial_port *port)
+ 
+ 	/* flush the port transmit buffer */
+ 	i = usb_control_msg(serial->dev,
+-			    usb_rcvctrlpipe(serial->dev, 0),
++			    usb_sndctrlpipe(serial->dev, 0),
+ 			    QT2_FLUSH_DEVICE, 0x40, 1,
+ 			    port_priv->device_port, NULL, 0, QT2_USB_TIMEOUT);
+ 
+@@ -426,7 +426,7 @@ static void qt2_close(struct usb_serial_port *port)
+ 
+ 	/* flush the port receive buffer */
+ 	i = usb_control_msg(serial->dev,
+-			    usb_rcvctrlpipe(serial->dev, 0),
++			    usb_sndctrlpipe(serial->dev, 0),
+ 			    QT2_FLUSH_DEVICE, 0x40, 0,
+ 			    port_priv->device_port, NULL, 0, QT2_USB_TIMEOUT);
+ 
+@@ -654,7 +654,7 @@ static int qt2_attach(struct usb_serial *serial)
+ 	int status;
+ 
+ 	/* power on unit */
+-	status = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0),
++	status = usb_control_msg(serial->dev, usb_sndctrlpipe(serial->dev, 0),
+ 				 0xc2, 0x40, 0x8000, 0, NULL, 0,
+ 				 QT2_USB_TIMEOUT);
+ 	if (status < 0) {
+diff --git a/drivers/usb/typec/mux.c b/drivers/usb/typec/mux.c
+index 42acdc8b684fe..b9035c3407b56 100644
+--- a/drivers/usb/typec/mux.c
++++ b/drivers/usb/typec/mux.c
+@@ -239,7 +239,7 @@ find_mux:
+ 	dev = class_find_device(&typec_mux_class, NULL, fwnode,
+ 				mux_fwnode_match);
+ 
+-	return dev ? to_typec_switch(dev) : ERR_PTR(-EPROBE_DEFER);
++	return dev ? to_typec_mux(dev) : ERR_PTR(-EPROBE_DEFER);
+ }
+ 
+ /**
+diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c
+index 17896bd87fc3f..acdef6fbb85e0 100644
+--- a/drivers/usb/typec/mux/intel_pmc_mux.c
++++ b/drivers/usb/typec/mux/intel_pmc_mux.c
+@@ -573,6 +573,11 @@ static int pmc_usb_probe_iom(struct pmc_usb *pmc)
+ 		return -ENOMEM;
+ 	}
+ 
++	if (IS_ERR(pmc->iom_base)) {
++		put_device(&adev->dev);
++		return PTR_ERR(pmc->iom_base);
++	}
++
+ 	pmc->iom_adev = adev;
+ 
+ 	return 0;
+@@ -623,8 +628,10 @@ static int pmc_usb_probe(struct platform_device *pdev)
+ 			break;
+ 
+ 		ret = pmc_usb_register_port(pmc, i, fwnode);
+-		if (ret)
++		if (ret) {
++			fwnode_handle_put(fwnode);
+ 			goto err_remove_ports;
++		}
+ 	}
+ 
+ 	platform_set_drvdata(pdev, pmc);
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index bdbd346dc59ff..61929d37d7fc4 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -5187,6 +5187,10 @@ void tcpm_unregister_port(struct tcpm_port *port)
+ {
+ 	int i;
+ 
++	hrtimer_cancel(&port->enable_frs_timer);
++	hrtimer_cancel(&port->vdm_state_machine_timer);
++	hrtimer_cancel(&port->state_machine_timer);
++
+ 	tcpm_reset_port(port);
+ 	for (i = 0; i < ARRAY_SIZE(port->port_altmode); i++)
+ 		typec_unregister_altmode(port->port_altmode[i]);
+diff --git a/drivers/usb/typec/tcpm/wcove.c b/drivers/usb/typec/tcpm/wcove.c
+index 9b745f432c910..7e9c279bf49df 100644
+--- a/drivers/usb/typec/tcpm/wcove.c
++++ b/drivers/usb/typec/tcpm/wcove.c
+@@ -377,7 +377,7 @@ static int wcove_pd_transmit(struct tcpc_dev *tcpc,
+ 		const u8 *data = (void *)msg;
+ 		int i;
+ 
+-		for (i = 0; i < pd_header_cnt(msg->header) * 4 + 2; i++) {
++		for (i = 0; i < pd_header_cnt_le(msg->header) * 4 + 2; i++) {
+ 			ret = regmap_write(wcove->regmap, USBC_TX_DATA + i,
+ 					   data[i]);
+ 			if (ret)
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index b4615bb5daab8..310b5caeb05ae 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -1118,6 +1118,7 @@ err_unregister:
+ 	}
+ 
+ err_reset:
++	memset(&ucsi->cap, 0, sizeof(ucsi->cap));
+ 	ucsi_reset_ppm(ucsi);
+ err:
+ 	return ret;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 56f3b9acd2154..e025cd8f3f071 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2467,6 +2467,24 @@ static int validate_super(struct btrfs_fs_info *fs_info,
+ 		ret = -EINVAL;
+ 	}
+ 
++	if (memcmp(fs_info->fs_devices->fsid, fs_info->super_copy->fsid,
++		   BTRFS_FSID_SIZE)) {
++		btrfs_err(fs_info,
++		"superblock fsid doesn't match fsid of fs_devices: %pU != %pU",
++			fs_info->super_copy->fsid, fs_info->fs_devices->fsid);
++		ret = -EINVAL;
++	}
++
++	if (btrfs_fs_incompat(fs_info, METADATA_UUID) &&
++	    memcmp(fs_info->fs_devices->metadata_uuid,
++		   fs_info->super_copy->metadata_uuid, BTRFS_FSID_SIZE)) {
++		btrfs_err(fs_info,
++"superblock metadata_uuid doesn't match metadata uuid of fs_devices: %pU != %pU",
++			fs_info->super_copy->metadata_uuid,
++			fs_info->fs_devices->metadata_uuid);
++		ret = -EINVAL;
++	}
++
+ 	if (memcmp(fs_info->fs_devices->metadata_uuid, sb->dev_item.fsid,
+ 		   BTRFS_FSID_SIZE) != 0) {
+ 		btrfs_err(fs_info,
+@@ -2969,14 +2987,6 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 
+ 	disk_super = fs_info->super_copy;
+ 
+-	ASSERT(!memcmp(fs_info->fs_devices->fsid, fs_info->super_copy->fsid,
+-		       BTRFS_FSID_SIZE));
+-
+-	if (btrfs_fs_incompat(fs_info, METADATA_UUID)) {
+-		ASSERT(!memcmp(fs_info->fs_devices->metadata_uuid,
+-				fs_info->super_copy->metadata_uuid,
+-				BTRFS_FSID_SIZE));
+-	}
+ 
+ 	features = btrfs_super_flags(disk_super);
+ 	if (features & BTRFS_SUPER_FLAG_CHANGING_FSID_V2) {
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 7e87549c5edaf..ffa48ac98d1e5 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1088,7 +1088,7 @@ int btrfs_mark_extent_written(struct btrfs_trans_handle *trans,
+ 	int del_nr = 0;
+ 	int del_slot = 0;
+ 	int recow;
+-	int ret;
++	int ret = 0;
+ 	u64 ino = btrfs_ino(inode);
+ 
+ 	path = btrfs_alloc_path();
+@@ -1309,7 +1309,7 @@ again:
+ 	}
+ out:
+ 	btrfs_free_path(path);
+-	return 0;
++	return ret;
+ }
+ 
+ /*
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 4b8cc93913f74..723d425796cca 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -406,7 +406,7 @@ struct nfs_client *nfs_get_client(const struct nfs_client_initdata *cl_init)
+ 
+ 	if (cl_init->hostname == NULL) {
+ 		WARN_ON(1);
+-		return NULL;
++		return ERR_PTR(-EINVAL);
+ 	}
+ 
+ 	/* see if the client already exists */
+diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
+index 065cb04222a1b..543d916f79abb 100644
+--- a/fs/nfs/nfs4_fs.h
++++ b/fs/nfs/nfs4_fs.h
+@@ -205,6 +205,7 @@ struct nfs4_exception {
+ 	struct inode *inode;
+ 	nfs4_stateid *stateid;
+ 	long timeout;
++	unsigned char task_is_privileged : 1;
+ 	unsigned char delay : 1,
+ 		      recovering : 1,
+ 		      retry : 1;
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index be7915c861cef..7491323a58207 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -435,8 +435,8 @@ struct nfs_client *nfs4_init_client(struct nfs_client *clp,
+ 		 */
+ 		nfs_mark_client_ready(clp, -EPERM);
+ 	}
+-	nfs_put_client(clp);
+ 	clear_bit(NFS_CS_TSM_POSSIBLE, &clp->cl_flags);
++	nfs_put_client(clp);
+ 	return old;
+ 
+ error:
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index c92d6ff0fceab..5365000e83bd6 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -592,6 +592,8 @@ int nfs4_handle_exception(struct nfs_server *server, int errorcode, struct nfs4_
+ 		goto out_retry;
+ 	}
+ 	if (exception->recovering) {
++		if (exception->task_is_privileged)
++			return -EDEADLOCK;
+ 		ret = nfs4_wait_clnt_recover(clp);
+ 		if (test_bit(NFS_MIG_FAILED, &server->mig_status))
+ 			return -EIO;
+@@ -617,6 +619,8 @@ nfs4_async_handle_exception(struct rpc_task *task, struct nfs_server *server,
+ 		goto out_retry;
+ 	}
+ 	if (exception->recovering) {
++		if (exception->task_is_privileged)
++			return -EDEADLOCK;
+ 		rpc_sleep_on(&clp->cl_rpcwaitq, task, NULL);
+ 		if (test_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) == 0)
+ 			rpc_wake_up_queued_task(&clp->cl_rpcwaitq, task);
+@@ -5942,6 +5946,14 @@ static int nfs4_proc_set_acl(struct inode *inode, const void *buf, size_t buflen
+ 	do {
+ 		err = __nfs4_proc_set_acl(inode, buf, buflen);
+ 		trace_nfs4_set_acl(inode, err);
++		if (err == -NFS4ERR_BADOWNER || err == -NFS4ERR_BADNAME) {
++			/*
++			 * no need to retry since the kernel
++			 * isn't involved in encoding the ACEs.
++			 */
++			err = -EINVAL;
++			break;
++		}
+ 		err = nfs4_handle_exception(NFS_SERVER(inode), err,
+ 				&exception);
+ 	} while (exception.retry);
+@@ -6383,6 +6395,7 @@ static void nfs4_delegreturn_done(struct rpc_task *task, void *calldata)
+ 	struct nfs4_exception exception = {
+ 		.inode = data->inode,
+ 		.stateid = &data->stateid,
++		.task_is_privileged = data->args.seq_args.sa_privileged,
+ 	};
+ 
+ 	if (!nfs4_sequence_done(task, &data->res.seq_res))
+@@ -6506,7 +6519,6 @@ static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred,
+ 	data = kzalloc(sizeof(*data), GFP_NOFS);
+ 	if (data == NULL)
+ 		return -ENOMEM;
+-	nfs4_init_sequence(&data->args.seq_args, &data->res.seq_res, 1, 0);
+ 
+ 	nfs4_state_protect(server->nfs_client,
+ 			NFS_SP4_MACH_CRED_CLEANUP,
+@@ -6537,6 +6549,12 @@ static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred,
+ 		}
+ 	}
+ 
++	if (!data->inode)
++		nfs4_init_sequence(&data->args.seq_args, &data->res.seq_res, 1,
++				   1);
++	else
++		nfs4_init_sequence(&data->args.seq_args, &data->res.seq_res, 1,
++				   0);
+ 	task_setup_data.callback_data = data;
+ 	msg.rpc_argp = &data->args;
+ 	msg.rpc_resp = &data->res;
+@@ -9619,15 +9637,20 @@ int nfs4_proc_layoutreturn(struct nfs4_layoutreturn *lrp, bool sync)
+ 			&task_setup_data.rpc_client, &msg);
+ 
+ 	dprintk("--> %s\n", __func__);
++	lrp->inode = nfs_igrab_and_active(lrp->args.inode);
+ 	if (!sync) {
+-		lrp->inode = nfs_igrab_and_active(lrp->args.inode);
+ 		if (!lrp->inode) {
+ 			nfs4_layoutreturn_release(lrp);
+ 			return -EAGAIN;
+ 		}
+ 		task_setup_data.flags |= RPC_TASK_ASYNC;
+ 	}
+-	nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1, 0);
++	if (!lrp->inode)
++		nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1,
++				   1);
++	else
++		nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1,
++				   0);
+ 	task = rpc_run_task(&task_setup_data);
+ 	if (IS_ERR(task))
+ 		return PTR_ERR(task);
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 297ea12b3cfd2..df9b17dd92cb3 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -2675,6 +2675,13 @@ out:
+ }
+ 
+ #ifdef CONFIG_SECURITY
++static int proc_pid_attr_open(struct inode *inode, struct file *file)
++{
++	file->private_data = NULL;
++	__mem_open(inode, file, PTRACE_MODE_READ_FSCREDS);
++	return 0;
++}
++
+ static ssize_t proc_pid_attr_read(struct file * file, char __user * buf,
+ 				  size_t count, loff_t *ppos)
+ {
+@@ -2705,7 +2712,7 @@ static ssize_t proc_pid_attr_write(struct file * file, const char __user * buf,
+ 	int rv;
+ 
+ 	/* A task may only write when it was the opener. */
+-	if (file->f_cred != current_real_cred())
++	if (file->private_data != current->mm)
+ 		return -EPERM;
+ 
+ 	rcu_read_lock();
+@@ -2755,9 +2762,11 @@ out:
+ }
+ 
+ static const struct file_operations proc_pid_attr_operations = {
++	.open		= proc_pid_attr_open,
+ 	.read		= proc_pid_attr_read,
+ 	.write		= proc_pid_attr_write,
+ 	.llseek		= generic_file_llseek,
++	.release	= mem_release,
+ };
+ 
+ #define LSM_DIR_OPS(LSM) \
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index d7efbc5490e8c..18468b46c4506 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -971,6 +971,7 @@
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ #define PERCPU_DECRYPTED_SECTION					\
+ 	. = ALIGN(PAGE_SIZE);						\
++	*(.data..decrypted)						\
+ 	*(.data..percpu..decrypted)					\
+ 	. = ALIGN(PAGE_SIZE);
+ #else
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index a2278b9ff57d2..c66c702a4f079 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -1104,7 +1104,15 @@ __gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
+ static inline unsigned long
+ __gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
+ {
+-	return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;
++	/*
++	 * The index was checked originally in search_memslots.  To avoid
++	 * that a malicious guest builds a Spectre gadget out of e.g. page
++	 * table walks, do not let the processor speculate loads outside
++	 * the guest's registered memslots.
++	 */
++	unsigned long offset = gfn - slot->base_gfn;
++	offset = array_index_nospec(offset, slot->npages);
++	return slot->userspace_addr + offset * PAGE_SIZE;
+ }
+ 
+ static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
+diff --git a/include/linux/mfd/rohm-bd71828.h b/include/linux/mfd/rohm-bd71828.h
+index 017a4c01cb315..61f0974c33d72 100644
+--- a/include/linux/mfd/rohm-bd71828.h
++++ b/include/linux/mfd/rohm-bd71828.h
+@@ -26,11 +26,11 @@ enum {
+ 	BD71828_REGULATOR_AMOUNT,
+ };
+ 
+-#define BD71828_BUCK1267_VOLTS		0xEF
+-#define BD71828_BUCK3_VOLTS		0x10
+-#define BD71828_BUCK4_VOLTS		0x20
+-#define BD71828_BUCK5_VOLTS		0x10
+-#define BD71828_LDO_VOLTS		0x32
++#define BD71828_BUCK1267_VOLTS		0x100
++#define BD71828_BUCK3_VOLTS		0x20
++#define BD71828_BUCK4_VOLTS		0x40
++#define BD71828_BUCK5_VOLTS		0x20
++#define BD71828_LDO_VOLTS		0x40
+ /* LDO6 is fixed 1.8V voltage */
+ #define BD71828_LDO_6_VOLTAGE		1800000
+ 
+diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h
+index 06e066e04a4bb..eb8169c03d899 100644
+--- a/include/linux/mlx4/device.h
++++ b/include/linux/mlx4/device.h
+@@ -631,6 +631,7 @@ struct mlx4_caps {
+ 	bool			wol_port[MLX4_MAX_PORTS + 1];
+ 	struct mlx4_rate_limit_caps rl_caps;
+ 	u32			health_buffer_addrs;
++	bool			map_clock_to_user;
+ };
+ 
+ struct mlx4_buf_list {
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 76cd21fa55016..2660ee4b08adf 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -348,11 +348,19 @@ struct load_weight {
+  * Only for tasks we track a moving average of the past instantaneous
+  * estimated utilization. This allows to absorb sporadic drops in utilization
+  * of an otherwise almost periodic task.
++ *
++ * The UTIL_AVG_UNCHANGED flag is used to synchronize util_est with util_avg
++ * updates. When a task is dequeued, its util_est should not be updated if its
++ * util_avg has not been updated in the meantime.
++ * This information is mapped into the MSB bit of util_est.enqueued at dequeue
++ * time. Since max value of util_est.enqueued for a task is 1024 (PELT util_avg
++ * for a task) it is safe to use MSB.
+  */
+ struct util_est {
+ 	unsigned int			enqueued;
+ 	unsigned int			ewma;
+ #define UTIL_EST_WEIGHT_SHIFT		2
++#define UTIL_AVG_UNCHANGED		0x80000000
+ } __attribute__((__aligned__(sizeof(u64))));
+ 
+ /*
+diff --git a/include/linux/usb/pd.h b/include/linux/usb/pd.h
+index 3a805e2ecbc99..433040ff840a3 100644
+--- a/include/linux/usb/pd.h
++++ b/include/linux/usb/pd.h
+@@ -459,7 +459,7 @@ static inline unsigned int rdo_max_power(u32 rdo)
+ #define PD_T_RECEIVER_RESPONSE	15	/* 15ms max */
+ #define PD_T_SOURCE_ACTIVITY	45
+ #define PD_T_SINK_ACTIVITY	135
+-#define PD_T_SINK_WAIT_CAP	240
++#define PD_T_SINK_WAIT_CAP	310	/* 310 - 620 ms */
+ #define PD_T_PS_TRANSITION	500
+ #define PD_T_SRC_TRANSITION	35
+ #define PD_T_DRP_SNK		40
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index ed7d02e8bc939..aaf2fbaa0cc76 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -4960,6 +4960,12 @@ int btf_distill_func_proto(struct bpf_verifier_log *log,
+ 	m->ret_size = ret;
+ 
+ 	for (i = 0; i < nargs; i++) {
++		if (i == nargs - 1 && args[i].type == 0) {
++			bpf_log(log,
++				"The function %s with variable args is unsupported.\n",
++				tname);
++			return -EINVAL;
++		}
+ 		ret = __get_type_size(btf, args[i].type, &t);
+ 		if (ret < 0) {
+ 			bpf_log(log,
+@@ -4967,6 +4973,12 @@ int btf_distill_func_proto(struct bpf_verifier_log *log,
+ 				tname, i, btf_kind_str[BTF_INFO_KIND(t->info)]);
+ 			return -EINVAL;
+ 		}
++		if (ret == 0) {
++			bpf_log(log,
++				"The function %s has malformed void argument.\n",
++				tname);
++			return -EINVAL;
++		}
+ 		m->arg_size[i] = ret;
+ 	}
+ 	m->nr_args = nargs;
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index a5751784ad740..f6dddb3a8f4a2 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -820,6 +820,10 @@ static int cgroup1_rename(struct kernfs_node *kn, struct kernfs_node *new_parent
+ 	struct cgroup *cgrp = kn->priv;
+ 	int ret;
+ 
++	/* do not accept '\n' to prevent making /proc/<pid>/cgroup unparsable */
++	if (strchr(new_name_str, '\n'))
++		return -EINVAL;
++
+ 	if (kernfs_type(kn) != KERNFS_DIR)
+ 		return -ENOTDIR;
+ 	if (kn->parent != new_parent)
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 5d1fdf7c3ec65..c8b811e039cc2 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -5665,8 +5665,6 @@ int __init cgroup_init_early(void)
+ 	return 0;
+ }
+ 
+-static u16 cgroup_disable_mask __initdata;
+-
+ /**
+  * cgroup_init - cgroup initialization
+  *
+@@ -5725,12 +5723,8 @@ int __init cgroup_init(void)
+ 		 * disabled flag and cftype registration needs kmalloc,
+ 		 * both of which aren't available during early_init.
+ 		 */
+-		if (cgroup_disable_mask & (1 << ssid)) {
+-			static_branch_disable(cgroup_subsys_enabled_key[ssid]);
+-			printk(KERN_INFO "Disabling %s control group subsystem\n",
+-			       ss->name);
++		if (!cgroup_ssid_enabled(ssid))
+ 			continue;
+-		}
+ 
+ 		if (cgroup1_ssid_disabled(ssid))
+ 			printk(KERN_INFO "Disabling %s control group subsystem in v1 mounts\n",
+@@ -6245,7 +6239,10 @@ static int __init cgroup_disable(char *str)
+ 			if (strcmp(token, ss->name) &&
+ 			    strcmp(token, ss->legacy_name))
+ 				continue;
+-			cgroup_disable_mask |= 1 << i;
++
++			static_branch_disable(cgroup_subsys_enabled_key[i]);
++			pr_info("Disabling %s control group subsystem\n",
++				ss->name);
+ 		}
+ 	}
+ 	return 1;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 45fa7167cee2d..7e0fdc19043e4 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -4547,7 +4547,9 @@ find_get_context(struct pmu *pmu, struct task_struct *task,
+ 		cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
+ 		ctx = &cpuctx->ctx;
+ 		get_ctx(ctx);
++		raw_spin_lock_irqsave(&ctx->lock, flags);
+ 		++ctx->pin_count;
++		raw_spin_unlock_irqrestore(&ctx->lock, flags);
+ 
+ 		return ctx;
+ 	}
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 6264584b51c25..70a5782724363 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -888,6 +888,7 @@ __initcall(init_sched_debug_procfs);
+ #define __PS(S, F) SEQ_printf(m, "%-45s:%21Ld\n", S, (long long)(F))
+ #define __P(F) __PS(#F, F)
+ #define   P(F) __PS(#F, p->F)
++#define   PM(F, M) __PS(#F, p->F & (M))
+ #define __PSN(S, F) SEQ_printf(m, "%-45s:%14Ld.%06ld\n", S, SPLIT_NS((long long)(F)))
+ #define __PN(F) __PSN(#F, F)
+ #define   PN(F) __PSN(#F, p->F)
+@@ -1014,7 +1015,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
+ 	P(se.avg.util_avg);
+ 	P(se.avg.last_update_time);
+ 	P(se.avg.util_est.ewma);
+-	P(se.avg.util_est.enqueued);
++	PM(se.avg.util_est.enqueued, ~UTIL_AVG_UNCHANGED);
+ #endif
+ #ifdef CONFIG_UCLAMP_TASK
+ 	__PS("uclamp.min", p->uclamp_req[UCLAMP_MIN].value);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 1ad0e52487f6b..ff8a172a69ca9 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3501,10 +3501,9 @@ update_tg_cfs_runnable(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cf
+ static inline void
+ update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq)
+ {
+-	long delta_avg, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum;
++	long delta, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum;
+ 	unsigned long load_avg;
+ 	u64 load_sum = 0;
+-	s64 delta_sum;
+ 	u32 divider;
+ 
+ 	if (!runnable_sum)
+@@ -3551,13 +3550,13 @@ update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq
+ 	load_sum = (s64)se_weight(se) * runnable_sum;
+ 	load_avg = div_s64(load_sum, divider);
+ 
+-	delta_sum = load_sum - (s64)se_weight(se) * se->avg.load_sum;
+-	delta_avg = load_avg - se->avg.load_avg;
++	delta = load_avg - se->avg.load_avg;
+ 
+ 	se->avg.load_sum = runnable_sum;
+ 	se->avg.load_avg = load_avg;
+-	add_positive(&cfs_rq->avg.load_avg, delta_avg);
+-	add_positive(&cfs_rq->avg.load_sum, delta_sum);
++
++	add_positive(&cfs_rq->avg.load_avg, delta);
++	cfs_rq->avg.load_sum = cfs_rq->avg.load_avg * divider;
+ }
+ 
+ static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum)
+@@ -3904,7 +3903,7 @@ static inline unsigned long _task_util_est(struct task_struct *p)
+ {
+ 	struct util_est ue = READ_ONCE(p->se.avg.util_est);
+ 
+-	return (max(ue.ewma, ue.enqueued) | UTIL_AVG_UNCHANGED);
++	return max(ue.ewma, (ue.enqueued & ~UTIL_AVG_UNCHANGED));
+ }
+ 
+ static inline unsigned long task_util_est(struct task_struct *p)
+@@ -4004,7 +4003,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
+ 	 * Reset EWMA on utilization increases, the moving average is used only
+ 	 * to smooth utilization decreases.
+ 	 */
+-	ue.enqueued = (task_util(p) | UTIL_AVG_UNCHANGED);
++	ue.enqueued = task_util(p);
+ 	if (sched_feat(UTIL_EST_FASTUP)) {
+ 		if (ue.ewma < ue.enqueued) {
+ 			ue.ewma = ue.enqueued;
+@@ -4053,6 +4052,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
+ 	ue.ewma  += last_ewma_diff;
+ 	ue.ewma >>= UTIL_EST_WEIGHT_SHIFT;
+ done:
++	ue.enqueued |= UTIL_AVG_UNCHANGED;
+ 	WRITE_ONCE(p->se.avg.util_est, ue);
+ 
+ 	trace_sched_util_est_se_tp(&p->se);
+@@ -7961,7 +7961,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
+ 		/* Propagate pending load changes to the parent, if any: */
+ 		se = cfs_rq->tg->se[cpu];
+ 		if (se && !skip_blocked_update(se))
+-			update_load_avg(cfs_rq_of(se), se, 0);
++			update_load_avg(cfs_rq_of(se), se, UPDATE_TG);
+ 
+ 		/*
+ 		 * There can be a lot of idle CPU cgroups.  Don't let fully
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index 795e43e02afc6..0b9aeebb9c325 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -42,15 +42,6 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ 	return LOAD_AVG_MAX - 1024 + avg->period_contrib;
+ }
+ 
+-/*
+- * When a task is dequeued, its estimated utilization should not be update if
+- * its util_avg has not been updated at least once.
+- * This flag is used to synchronize util_avg updates with util_est updates.
+- * We map this information into the LSB bit of the utilization saved at
+- * dequeue time (i.e. util_est.dequeued).
+- */
+-#define UTIL_AVG_UNCHANGED 0x1
+-
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ 	unsigned int enqueued;
+@@ -58,7 +49,7 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
+ 	if (!sched_feat(UTIL_EST))
+ 		return;
+ 
+-	/* Avoid store if the flag has been already set */
++	/* Avoid store if the flag has been already reset */
+ 	enqueued = avg->util_est.enqueued;
+ 	if (!(enqueued & UTIL_AVG_UNCHANGED))
+ 		return;
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index a6d15a3187d0e..30010614b9237 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -1968,12 +1968,18 @@ static int ftrace_hash_ipmodify_update(struct ftrace_ops *ops,
+ 
+ static void print_ip_ins(const char *fmt, const unsigned char *p)
+ {
++	char ins[MCOUNT_INSN_SIZE];
+ 	int i;
+ 
++	if (copy_from_kernel_nofault(ins, p, MCOUNT_INSN_SIZE)) {
++		printk(KERN_CONT "%s[FAULT] %px\n", fmt, p);
++		return;
++	}
++
+ 	printk(KERN_CONT "%s", fmt);
+ 
+ 	for (i = 0; i < MCOUNT_INSN_SIZE; i++)
+-		printk(KERN_CONT "%s%02x", i ? ":" : "", p[i]);
++		printk(KERN_CONT "%s%02x", i ? ":" : "", ins[i]);
+ }
+ 
+ enum ftrace_bug_type ftrace_bug_type;
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 321f7f7a29b4b..b2c141eaca020 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2734,7 +2734,7 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
+ 	    (entry = this_cpu_read(trace_buffered_event))) {
+ 		/* Try to use the per cpu buffer first */
+ 		val = this_cpu_inc_return(trace_buffered_event_cnt);
+-		if ((len < (PAGE_SIZE - sizeof(*entry))) && val == 1) {
++		if ((len < (PAGE_SIZE - sizeof(*entry) - sizeof(entry->array[0]))) && val == 1) {
+ 			trace_event_setup(entry, type, flags, pc);
+ 			entry->array[0] = len;
+ 			return entry;
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 1e2ca744dadbe..b23f7d1044be7 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -50,6 +50,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/sched/isolation.h>
+ #include <linux/nmi.h>
++#include <linux/kvm_para.h>
+ 
+ #include "workqueue_internal.h"
+ 
+@@ -5758,6 +5759,7 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
+ {
+ 	unsigned long thresh = READ_ONCE(wq_watchdog_thresh) * HZ;
+ 	bool lockup_detected = false;
++	unsigned long now = jiffies;
+ 	struct worker_pool *pool;
+ 	int pi;
+ 
+@@ -5772,6 +5774,12 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
+ 		if (list_empty(&pool->worklist))
+ 			continue;
+ 
++		/*
++		 * If a virtual machine is stopped by the host it can look to
++		 * the watchdog like a stall.
++		 */
++		kvm_check_and_clear_guest_paused();
++
+ 		/* get the latest of pool and touched timestamps */
+ 		pool_ts = READ_ONCE(pool->watchdog_ts);
+ 		touched = READ_ONCE(wq_watchdog_touched);
+@@ -5790,12 +5798,12 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
+ 		}
+ 
+ 		/* did we stall? */
+-		if (time_after(jiffies, ts + thresh)) {
++		if (time_after(now, ts + thresh)) {
+ 			lockup_detected = true;
+ 			pr_emerg("BUG: workqueue lockup - pool");
+ 			pr_cont_pool_info(pool);
+ 			pr_cont(" stuck for %us!\n",
+-				jiffies_to_msecs(jiffies - pool_ts) / 1000);
++				jiffies_to_msecs(now - pool_ts) / 1000);
+ 		}
+ 	}
+ 
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index daca50d6bb128..e527f5686e2bf 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -453,11 +453,13 @@ void netlink_table_ungrab(void)
+ static inline void
+ netlink_lock_table(void)
+ {
++	unsigned long flags;
++
+ 	/* read_lock() synchronizes us to netlink_table_grab */
+ 
+-	read_lock(&nl_table_lock);
++	read_lock_irqsave(&nl_table_lock, flags);
+ 	atomic_inc(&nl_table_users);
+-	read_unlock(&nl_table_lock);
++	read_unlock_irqrestore(&nl_table_lock, flags);
+ }
+ 
+ static inline void
+diff --git a/net/nfc/rawsock.c b/net/nfc/rawsock.c
+index 9c7eb8455ba8e..5f1d438a0a23f 100644
+--- a/net/nfc/rawsock.c
++++ b/net/nfc/rawsock.c
+@@ -329,7 +329,7 @@ static int rawsock_create(struct net *net, struct socket *sock,
+ 		return -ESOCKTNOSUPPORT;
+ 
+ 	if (sock->type == SOCK_RAW) {
+-		if (!capable(CAP_NET_RAW))
++		if (!ns_capable(net->user_ns, CAP_NET_RAW))
+ 			return -EPERM;
+ 		sock->ops = &rawsock_raw_ops;
+ 	} else {
+diff --git a/net/rds/connection.c b/net/rds/connection.c
+index f2fcab182095c..a3bc4b54d4910 100644
+--- a/net/rds/connection.c
++++ b/net/rds/connection.c
+@@ -240,12 +240,23 @@ static struct rds_connection *__rds_conn_create(struct net *net,
+ 	if (loop_trans) {
+ 		rds_trans_put(loop_trans);
+ 		conn->c_loopback = 1;
+-		if (is_outgoing && trans->t_prefer_loopback) {
+-			/* "outgoing" connection - and the transport
+-			 * says it wants the connection handled by the
+-			 * loopback transport. This is what TCP does.
+-			 */
+-			trans = &rds_loop_transport;
++		if (trans->t_prefer_loopback) {
++			if (likely(is_outgoing)) {
++				/* "outgoing" connection to local address.
++				 * Protocol says it wants the connection
++				 * handled by the loopback transport.
++				 * This is what TCP does.
++				 */
++				trans = &rds_loop_transport;
++			} else {
++				/* No transport currently in use
++				 * should end up here, but if it
++				 * does, reset/destroy the connection.
++				 */
++				kmem_cache_free(rds_conn_slab, conn);
++				conn = ERR_PTR(-EOPNOTSUPP);
++				goto out;
++			}
+ 		}
+ 	}
+ 
+diff --git a/net/rds/tcp.c b/net/rds/tcp.c
+index 43db0eca911fa..abf19c0e3ba0b 100644
+--- a/net/rds/tcp.c
++++ b/net/rds/tcp.c
+@@ -313,8 +313,8 @@ out:
+ }
+ #endif
+ 
+-static int rds_tcp_laddr_check(struct net *net, const struct in6_addr *addr,
+-			       __u32 scope_id)
++int rds_tcp_laddr_check(struct net *net, const struct in6_addr *addr,
++			__u32 scope_id)
+ {
+ 	struct net_device *dev = NULL;
+ #if IS_ENABLED(CONFIG_IPV6)
+diff --git a/net/rds/tcp.h b/net/rds/tcp.h
+index bad9cf49d5657..dc8d745d68575 100644
+--- a/net/rds/tcp.h
++++ b/net/rds/tcp.h
+@@ -59,7 +59,8 @@ u32 rds_tcp_snd_una(struct rds_tcp_connection *tc);
+ u64 rds_tcp_map_seq(struct rds_tcp_connection *tc, u32 seq);
+ extern struct rds_transport rds_tcp_transport;
+ void rds_tcp_accept_work(struct sock *sk);
+-
++int rds_tcp_laddr_check(struct net *net, const struct in6_addr *addr,
++			__u32 scope_id);
+ /* tcp_connect.c */
+ int rds_tcp_conn_path_connect(struct rds_conn_path *cp);
+ void rds_tcp_conn_path_shutdown(struct rds_conn_path *conn);
+diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c
+index 101cf14215a0b..09cadd556d1e1 100644
+--- a/net/rds/tcp_listen.c
++++ b/net/rds/tcp_listen.c
+@@ -167,6 +167,12 @@ int rds_tcp_accept_one(struct socket *sock)
+ 	}
+ #endif
+ 
++	if (!rds_tcp_laddr_check(sock_net(sock->sk), peer_addr, dev_if)) {
++		/* local address connection is only allowed via loopback */
++		ret = -EOPNOTSUPP;
++		goto out;
++	}
++
+ 	conn = rds_conn_create(sock_net(sock->sk),
+ 			       my_addr, peer_addr,
+ 			       &rds_tcp_transport, 0, GFP_KERNEL, dev_if);
+diff --git a/sound/core/seq/seq_timer.c b/sound/core/seq/seq_timer.c
+index 1645e4142e302..9863be6fd43e1 100644
+--- a/sound/core/seq/seq_timer.c
++++ b/sound/core/seq/seq_timer.c
+@@ -297,8 +297,16 @@ int snd_seq_timer_open(struct snd_seq_queue *q)
+ 		return err;
+ 	}
+ 	spin_lock_irq(&tmr->lock);
+-	tmr->timeri = t;
++	if (tmr->timeri)
++		err = -EBUSY;
++	else
++		tmr->timeri = t;
+ 	spin_unlock_irq(&tmr->lock);
++	if (err < 0) {
++		snd_timer_close(t);
++		snd_timer_instance_free(t);
++		return err;
++	}
+ 	return 0;
+ }
+ 
+diff --git a/sound/firewire/amdtp-stream.c b/sound/firewire/amdtp-stream.c
+index e0faa6601966c..5805c5de39fbf 100644
+--- a/sound/firewire/amdtp-stream.c
++++ b/sound/firewire/amdtp-stream.c
+@@ -804,7 +804,7 @@ static void generate_pkt_descs(struct amdtp_stream *s, struct pkt_desc *descs,
+ static inline void cancel_stream(struct amdtp_stream *s)
+ {
+ 	s->packet_index = -1;
+-	if (current_work() == &s->period_work)
++	if (in_interrupt())
+ 		amdtp_stream_pcm_abort(s);
+ 	WRITE_ONCE(s->pcm_buffer_pointer, SNDRV_PCM_POS_XRUN);
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index cc13a68197f3c..e46e43dac6bfd 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6560,6 +6560,7 @@ enum {
+ 	ALC285_FIXUP_HP_SPECTRE_X360,
+ 	ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP,
+ 	ALC623_FIXUP_LENOVO_THINKSTATION_P340,
++	ALC255_FIXUP_ACER_HEADPHONE_AND_MIC,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8132,6 +8133,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC283_FIXUP_HEADSET_MIC,
+ 	},
++	[ALC255_FIXUP_ACER_HEADPHONE_AND_MIC] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x21, 0x03211030 }, /* Change the Headphone location to Left */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC255_FIXUP_XIAOMI_HEADSET_MIC
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8168,6 +8178,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+ 	SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS),
+ 	SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X),
+@@ -8296,6 +8307,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
++	SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
++	SND_PCI_QUIRK(0x103c, 0x8720, "HP EliteBook x360 1040 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8730, "HP ProBook 445 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+@@ -8314,10 +8327,12 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884c, "HP EliteBook 840 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
++	SND_PCI_QUIRK(0x103c, 0x888d, "HP ZBook Power 15.6 inch G8 Mobile Workstation PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+@@ -8722,6 +8737,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"},
+ 	{.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"},
+ 	{.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"},
++	{.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"},
+ 	{}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/soc/amd/raven/acp3x-pcm-dma.c b/sound/soc/amd/raven/acp3x-pcm-dma.c
+index 417cda24030cd..2447a1e6e913f 100644
+--- a/sound/soc/amd/raven/acp3x-pcm-dma.c
++++ b/sound/soc/amd/raven/acp3x-pcm-dma.c
+@@ -237,10 +237,6 @@ static int acp3x_dma_open(struct snd_soc_component *component,
+ 		return ret;
+ 	}
+ 
+-	if (!adata->play_stream && !adata->capture_stream &&
+-	    !adata->i2ssp_play_stream && !adata->i2ssp_capture_stream)
+-		rv_writel(1, adata->acp3x_base + mmACP_EXTERNAL_INTR_ENB);
+-
+ 	i2s_data->acp3x_base = adata->acp3x_base;
+ 	runtime->private_data = i2s_data;
+ 	return ret;
+@@ -367,12 +363,6 @@ static int acp3x_dma_close(struct snd_soc_component *component,
+ 		}
+ 	}
+ 
+-	/* Disable ACP irq, when the current stream is being closed and
+-	 * another stream is also not active.
+-	 */
+-	if (!adata->play_stream && !adata->capture_stream &&
+-		!adata->i2ssp_play_stream && !adata->i2ssp_capture_stream)
+-		rv_writel(0, adata->acp3x_base + mmACP_EXTERNAL_INTR_ENB);
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/amd/raven/acp3x.h b/sound/soc/amd/raven/acp3x.h
+index 03fe93913e12e..c3f0c8b7545db 100644
+--- a/sound/soc/amd/raven/acp3x.h
++++ b/sound/soc/amd/raven/acp3x.h
+@@ -77,6 +77,7 @@
+ #define ACP_POWER_OFF_IN_PROGRESS	0x03
+ 
+ #define ACP3x_ITER_IRER_SAMP_LEN_MASK	0x38
++#define ACP_EXT_INTR_STAT_CLEAR_MASK 0xFFFFFFFF
+ 
+ struct acp3x_platform_info {
+ 	u16 play_i2s_instance;
+diff --git a/sound/soc/amd/raven/pci-acp3x.c b/sound/soc/amd/raven/pci-acp3x.c
+index 77f2d93896067..df83d2ce75ea7 100644
+--- a/sound/soc/amd/raven/pci-acp3x.c
++++ b/sound/soc/amd/raven/pci-acp3x.c
+@@ -76,6 +76,19 @@ static int acp3x_reset(void __iomem *acp3x_base)
+ 	return -ETIMEDOUT;
+ }
+ 
++static void acp3x_enable_interrupts(void __iomem *acp_base)
++{
++	rv_writel(0x01, acp_base + mmACP_EXTERNAL_INTR_ENB);
++}
++
++static void acp3x_disable_interrupts(void __iomem *acp_base)
++{
++	rv_writel(ACP_EXT_INTR_STAT_CLEAR_MASK, acp_base +
++		  mmACP_EXTERNAL_INTR_STAT);
++	rv_writel(0x00, acp_base + mmACP_EXTERNAL_INTR_CNTL);
++	rv_writel(0x00, acp_base + mmACP_EXTERNAL_INTR_ENB);
++}
++
+ static int acp3x_init(struct acp3x_dev_data *adata)
+ {
+ 	void __iomem *acp3x_base = adata->acp3x_base;
+@@ -93,6 +106,7 @@ static int acp3x_init(struct acp3x_dev_data *adata)
+ 		pr_err("ACP3x reset failed\n");
+ 		return ret;
+ 	}
++	acp3x_enable_interrupts(acp3x_base);
+ 	return 0;
+ }
+ 
+@@ -100,6 +114,7 @@ static int acp3x_deinit(void __iomem *acp3x_base)
+ {
+ 	int ret;
+ 
++	acp3x_disable_interrupts(acp3x_base);
+ 	/* Reset */
+ 	ret = acp3x_reset(acp3x_base);
+ 	if (ret) {
+diff --git a/sound/soc/codecs/max98088.c b/sound/soc/codecs/max98088.c
+index 4be24e7f51c89..f8e49e45ce33f 100644
+--- a/sound/soc/codecs/max98088.c
++++ b/sound/soc/codecs/max98088.c
+@@ -41,6 +41,7 @@ struct max98088_priv {
+ 	enum max98088_type devtype;
+ 	struct max98088_pdata *pdata;
+ 	struct clk *mclk;
++	unsigned char mclk_prescaler;
+ 	unsigned int sysclk;
+ 	struct max98088_cdata dai[2];
+ 	int eq_textcnt;
+@@ -998,13 +999,16 @@ static int max98088_dai1_hw_params(struct snd_pcm_substream *substream,
+        /* Configure NI when operating as master */
+        if (snd_soc_component_read(component, M98088_REG_14_DAI1_FORMAT)
+                & M98088_DAI_MAS) {
++               unsigned long pclk;
++
+                if (max98088->sysclk == 0) {
+                        dev_err(component->dev, "Invalid system clock frequency\n");
+                        return -EINVAL;
+                }
+                ni = 65536ULL * (rate < 50000 ? 96ULL : 48ULL)
+                                * (unsigned long long int)rate;
+-               do_div(ni, (unsigned long long int)max98088->sysclk);
++               pclk = DIV_ROUND_CLOSEST(max98088->sysclk, max98088->mclk_prescaler);
++               ni = DIV_ROUND_CLOSEST_ULL(ni, pclk);
+                snd_soc_component_write(component, M98088_REG_12_DAI1_CLKCFG_HI,
+                        (ni >> 8) & 0x7F);
+                snd_soc_component_write(component, M98088_REG_13_DAI1_CLKCFG_LO,
+@@ -1065,13 +1069,16 @@ static int max98088_dai2_hw_params(struct snd_pcm_substream *substream,
+        /* Configure NI when operating as master */
+        if (snd_soc_component_read(component, M98088_REG_1C_DAI2_FORMAT)
+                & M98088_DAI_MAS) {
++               unsigned long pclk;
++
+                if (max98088->sysclk == 0) {
+                        dev_err(component->dev, "Invalid system clock frequency\n");
+                        return -EINVAL;
+                }
+                ni = 65536ULL * (rate < 50000 ? 96ULL : 48ULL)
+                                * (unsigned long long int)rate;
+-               do_div(ni, (unsigned long long int)max98088->sysclk);
++               pclk = DIV_ROUND_CLOSEST(max98088->sysclk, max98088->mclk_prescaler);
++               ni = DIV_ROUND_CLOSEST_ULL(ni, pclk);
+                snd_soc_component_write(component, M98088_REG_1A_DAI2_CLKCFG_HI,
+                        (ni >> 8) & 0x7F);
+                snd_soc_component_write(component, M98088_REG_1B_DAI2_CLKCFG_LO,
+@@ -1113,8 +1120,10 @@ static int max98088_dai_set_sysclk(struct snd_soc_dai *dai,
+         */
+        if ((freq >= 10000000) && (freq < 20000000)) {
+                snd_soc_component_write(component, M98088_REG_10_SYS_CLK, 0x10);
++               max98088->mclk_prescaler = 1;
+        } else if ((freq >= 20000000) && (freq < 30000000)) {
+                snd_soc_component_write(component, M98088_REG_10_SYS_CLK, 0x20);
++               max98088->mclk_prescaler = 2;
+        } else {
+                dev_err(component->dev, "Invalid master clock frequency\n");
+                return -EINVAL;
+diff --git a/sound/soc/codecs/sti-sas.c b/sound/soc/codecs/sti-sas.c
+index ec9933b054ad3..423daac9d5a9f 100644
+--- a/sound/soc/codecs/sti-sas.c
++++ b/sound/soc/codecs/sti-sas.c
+@@ -411,6 +411,7 @@ static const struct of_device_id sti_sas_dev_match[] = {
+ 	},
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, sti_sas_dev_match);
+ 
+ static int sti_sas_driver_probe(struct platform_device *pdev)
+ {
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 1ef0464249d1b..ca14730232ba9 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -570,6 +570,17 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{	/* Glavey TM800A550L */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
++			DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
++			/* Above strings are too generic, also match on BIOS version */
++			DMI_MATCH(DMI_BIOS_VERSION, "ZY-8-BI-PX4S70VTR400-X423B-005-D"),
++		},
++		.driver_data = (void *)(BYTCR_INPUT_DEFAULTS |
++					BYT_RT5640_SSP0_AIF1 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+@@ -648,6 +659,20 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_MONO_SPEAKER |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{	/* Lenovo Miix 3-830 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 3-830"),
++		},
++		.driver_data = (void *)(BYT_RT5640_IN1_MAP |
++					BYT_RT5640_JD_SRC_JD2_IN4N |
++					BYT_RT5640_OVCD_TH_2000UA |
++					BYT_RT5640_OVCD_SF_0P75 |
++					BYT_RT5640_MONO_SPEAKER |
++					BYT_RT5640_DIFF_MIC |
++					BYT_RT5640_SSP0_AIF1 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{	/* Linx Linx7 tablet */
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LINX"),
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index bf65cba232e67..b22674e3a89c9 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -2231,6 +2231,8 @@ static char *fmt_single_name(struct device *dev, int *id)
+ 		return NULL;
+ 
+ 	name = devm_kstrdup(dev, devname, GFP_KERNEL);
++	if (!name)
++		return NULL;
+ 
+ 	/* are we a "%s.%d" name (platform and SPI components) */
+ 	found = strstr(name, dev->driver->name);
+diff --git a/tools/bootconfig/main.c b/tools/bootconfig/main.c
+index 7362bef1a3683..6cd6080cac04c 100644
+--- a/tools/bootconfig/main.c
++++ b/tools/bootconfig/main.c
+@@ -399,6 +399,7 @@ static int apply_xbc(const char *path, const char *xbc_path)
+ 	}
+ 	/* TODO: Ensure the @path is initramfs/initrd image */
+ 	if (fstat(fd, &stat) < 0) {
++		ret = -errno;
+ 		pr_err("Failed to get the size of %s\n", path);
+ 		goto out;
+ 	}
+diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
+index 63b619084b34a..9dddec19a494e 100644
+--- a/tools/perf/util/session.c
++++ b/tools/perf/util/session.c
+@@ -1699,6 +1699,7 @@ int perf_session__peek_event(struct perf_session *session, off_t file_offset,
+ 	if (event->header.size < hdr_sz || event->header.size > buf_sz)
+ 		return -1;
+ 
++	buf += hdr_sz;
+ 	rest = event->header.size - hdr_sz;
+ 
+ 	if (readn(fd, buf, rest) != (ssize_t)rest)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-06-18 11:37 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-06-18 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     5f098a1db04014be1f66642a7062bf5eb66e1567
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jun 18 11:37:01 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jun 18 11:37:01 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5f098a1d

Linux patch 5.10.45

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1044_linux-5.10.45.patch | 1077 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1081 insertions(+)

diff --git a/0000_README b/0000_README
index e96b155..5ac74f5 100644
--- a/0000_README
+++ b/0000_README
@@ -219,6 +219,10 @@ Patch:  1043_linux-5.10.44.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.44
 
+Patch:  1044_linux-5.10.45.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.45
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1044_linux-5.10.45.patch b/1044_linux-5.10.45.patch
new file mode 100644
index 0000000..d41c3c1
--- /dev/null
+++ b/1044_linux-5.10.45.patch
@@ -0,0 +1,1077 @@
+diff --git a/Makefile b/Makefile
+index ae33e048eb8db..808b68483002f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 44
++SUBLEVEL = 45
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/mach-omap1/pm.c b/arch/arm/mach-omap1/pm.c
+index 2c1e2b32b9b36..a745d64d46995 100644
+--- a/arch/arm/mach-omap1/pm.c
++++ b/arch/arm/mach-omap1/pm.c
+@@ -655,9 +655,13 @@ static int __init omap_pm_init(void)
+ 		irq = INT_7XX_WAKE_UP_REQ;
+ 	else if (cpu_is_omap16xx())
+ 		irq = INT_1610_WAKE_UP_REQ;
+-	if (request_irq(irq, omap_wakeup_interrupt, 0, "peripheral wakeup",
+-			NULL))
+-		pr_err("Failed to request irq %d (peripheral wakeup)\n", irq);
++	else
++		irq = -1;
++
++	if (irq >= 0) {
++		if (request_irq(irq, omap_wakeup_interrupt, 0, "peripheral wakeup", NULL))
++			pr_err("Failed to request irq %d (peripheral wakeup)\n", irq);
++	}
+ 
+ 	/* Program new power ramp-up time
+ 	 * (0 for most boards since we don't lower voltage when in deep sleep)
+diff --git a/arch/arm/mach-omap2/board-n8x0.c b/arch/arm/mach-omap2/board-n8x0.c
+index 418a61ecb8275..5e86145db0e2a 100644
+--- a/arch/arm/mach-omap2/board-n8x0.c
++++ b/arch/arm/mach-omap2/board-n8x0.c
+@@ -322,6 +322,7 @@ static int n8x0_mmc_get_cover_state(struct device *dev, int slot)
+ 
+ static void n8x0_mmc_callback(void *data, u8 card_mask)
+ {
++#ifdef CONFIG_MMC_OMAP
+ 	int bit, *openp, index;
+ 
+ 	if (board_is_n800()) {
+@@ -339,7 +340,6 @@ static void n8x0_mmc_callback(void *data, u8 card_mask)
+ 	else
+ 		*openp = 0;
+ 
+-#ifdef CONFIG_MMC_OMAP
+ 	omap_mmc_notify_cover_event(mmc_device, index, *openp);
+ #else
+ 	pr_warn("MMC: notify cover event not available\n");
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index 0289a97325d12..e241e0e85ac81 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -36,6 +36,15 @@ else
+ 	KBUILD_LDFLAGS += -melf32lriscv
+ endif
+ 
++ifeq ($(CONFIG_LD_IS_LLD),y)
++	KBUILD_CFLAGS += -mno-relax
++	KBUILD_AFLAGS += -mno-relax
++ifneq ($(LLVM_IAS),1)
++	KBUILD_CFLAGS += -Wa,-mno-relax
++	KBUILD_AFLAGS += -Wa,-mno-relax
++endif
++endif
++
+ # ISA string setting
+ riscv-march-$(CONFIG_ARCH_RV32I)	:= rv32ima
+ riscv-march-$(CONFIG_ARCH_RV64I)	:= rv64ima
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 175cb1c0d5698..b1f0b13cc8bc6 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -385,6 +385,8 @@ static const struct usb_device_id blacklist_table[] = {
+ 	/* Realtek 8822CE Bluetooth devices */
+ 	{ USB_DEVICE(0x0bda, 0xb00c), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0bda, 0xc822), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
+ 
+ 	/* Realtek Bluetooth devices */
+ 	{ USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01),
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+index 8f4a8f8d81463..39b6c6bfab453 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+@@ -101,7 +101,8 @@ static int amdgpu_fru_read_eeprom(struct amdgpu_device *adev, uint32_t addrptr,
+ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
+ {
+ 	unsigned char buff[34];
+-	int addrptr = 0, size = 0;
++	int addrptr, size;
++	int len;
+ 
+ 	if (!is_fru_eeprom_supported(adev))
+ 		return 0;
+@@ -109,7 +110,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
+ 	/* If algo exists, it means that the i2c_adapter's initialized */
+ 	if (!adev->pm.smu_i2c.algo) {
+ 		DRM_WARN("Cannot access FRU, EEPROM accessor not initialized");
+-		return 0;
++		return -ENODEV;
+ 	}
+ 
+ 	/* There's a lot of repetition here. This is due to the FRU having
+@@ -128,7 +129,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
+ 	size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+ 	if (size < 1) {
+ 		DRM_ERROR("Failed to read FRU Manufacturer, ret:%d", size);
+-		return size;
++		return -EINVAL;
+ 	}
+ 
+ 	/* Increment the addrptr by the size of the field, and 1 due to the
+@@ -138,43 +139,45 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
+ 	size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+ 	if (size < 1) {
+ 		DRM_ERROR("Failed to read FRU product name, ret:%d", size);
+-		return size;
++		return -EINVAL;
+ 	}
+ 
++	len = size;
+ 	/* Product name should only be 32 characters. Any more,
+ 	 * and something could be wrong. Cap it at 32 to be safe
+ 	 */
+-	if (size > 32) {
++	if (len >= sizeof(adev->product_name)) {
+ 		DRM_WARN("FRU Product Number is larger than 32 characters. This is likely a mistake");
+-		size = 32;
++		len = sizeof(adev->product_name) - 1;
+ 	}
+ 	/* Start at 2 due to buff using fields 0 and 1 for the address */
+-	memcpy(adev->product_name, &buff[2], size);
+-	adev->product_name[size] = '\0';
++	memcpy(adev->product_name, &buff[2], len);
++	adev->product_name[len] = '\0';
+ 
+ 	addrptr += size + 1;
+ 	size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+ 	if (size < 1) {
+ 		DRM_ERROR("Failed to read FRU product number, ret:%d", size);
+-		return size;
++		return -EINVAL;
+ 	}
+ 
++	len = size;
+ 	/* Product number should only be 16 characters. Any more,
+ 	 * and something could be wrong. Cap it at 16 to be safe
+ 	 */
+-	if (size > 16) {
++	if (len >= sizeof(adev->product_number)) {
+ 		DRM_WARN("FRU Product Number is larger than 16 characters. This is likely a mistake");
+-		size = 16;
++		len = sizeof(adev->product_number) - 1;
+ 	}
+-	memcpy(adev->product_number, &buff[2], size);
+-	adev->product_number[size] = '\0';
++	memcpy(adev->product_number, &buff[2], len);
++	adev->product_number[len] = '\0';
+ 
+ 	addrptr += size + 1;
+ 	size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+ 
+ 	if (size < 1) {
+ 		DRM_ERROR("Failed to read FRU product version, ret:%d", size);
+-		return size;
++		return -EINVAL;
+ 	}
+ 
+ 	addrptr += size + 1;
+@@ -182,18 +185,19 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
+ 
+ 	if (size < 1) {
+ 		DRM_ERROR("Failed to read FRU serial number, ret:%d", size);
+-		return size;
++		return -EINVAL;
+ 	}
+ 
++	len = size;
+ 	/* Serial number should only be 16 characters. Any more,
+ 	 * and something could be wrong. Cap it at 16 to be safe
+ 	 */
+-	if (size > 16) {
++	if (len >= sizeof(adev->serial)) {
+ 		DRM_WARN("FRU Serial Number is larger than 16 characters. This is likely a mistake");
+-		size = 16;
++		len = sizeof(adev->serial) - 1;
+ 	}
+-	memcpy(adev->serial, &buff[2], size);
+-	adev->serial[size] = '\0';
++	memcpy(adev->serial, &buff[2], len);
++	adev->serial[len] = '\0';
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+index 919d2fb7427b1..60b7563f4c05d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+@@ -73,6 +73,7 @@ struct psp_ring
+ 	uint64_t			ring_mem_mc_addr;
+ 	void				*ring_mem_handle;
+ 	uint32_t			ring_size;
++	uint32_t			ring_wptr;
+ };
+ 
+ /* More registers may will be supported */
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+index 6c5d9612abcb6..cb764b5545527 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+@@ -732,7 +732,7 @@ static uint32_t psp_v11_0_ring_get_wptr(struct psp_context *psp)
+ 	struct amdgpu_device *adev = psp->adev;
+ 
+ 	if (amdgpu_sriov_vf(adev))
+-		data = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_102);
++		data = psp->km_ring.ring_wptr;
+ 	else
+ 		data = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_67);
+ 
+@@ -746,6 +746,7 @@ static void psp_v11_0_ring_set_wptr(struct psp_context *psp, uint32_t value)
+ 	if (amdgpu_sriov_vf(adev)) {
+ 		WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_102, value);
+ 		WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_101, GFX_CTRL_CMD_ID_CONSUME_CMD);
++		psp->km_ring.ring_wptr = value;
+ 	} else
+ 		WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_67, value);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c b/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
+index f2e725f72d2f1..908664a5774bb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
+@@ -379,7 +379,7 @@ static uint32_t psp_v3_1_ring_get_wptr(struct psp_context *psp)
+ 	struct amdgpu_device *adev = psp->adev;
+ 
+ 	if (amdgpu_sriov_vf(adev))
+-		data = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_102);
++		data = psp->km_ring.ring_wptr;
+ 	else
+ 		data = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_67);
+ 	return data;
+@@ -394,6 +394,7 @@ static void psp_v3_1_ring_set_wptr(struct psp_context *psp, uint32_t value)
+ 		/* send interrupt to PSP for SRIOV ring write pointer update */
+ 		WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_101,
+ 			GFX_CTRL_CMD_ID_CONSUME_CMD);
++		psp->km_ring.ring_wptr = value;
+ 	} else
+ 		WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_67, value);
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index fbbb1bde6b063..df26c07cb9120 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -870,7 +870,8 @@ static int dm_dmub_hw_init(struct amdgpu_device *adev)
+ 		abm->dmcu_is_running = dmcu->funcs->is_dmcu_initialized(dmcu);
+ 	}
+ 
+-	adev->dm.dc->ctx->dmub_srv = dc_dmub_srv_create(adev->dm.dc, dmub_srv);
++	if (!adev->dm.dc->ctx->dmub_srv)
++		adev->dm.dc->ctx->dmub_srv = dc_dmub_srv_create(adev->dm.dc, dmub_srv);
+ 	if (!adev->dm.dc->ctx->dmub_srv) {
+ 		DRM_ERROR("Couldn't allocate DC DMUB server!\n");
+ 		return -ENOMEM;
+@@ -1755,7 +1756,6 @@ static int dm_suspend(void *handle)
+ 
+ 	amdgpu_dm_irq_suspend(adev);
+ 
+-
+ 	dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 33488b3c5c3ce..1812ec7ee11bb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -3232,7 +3232,7 @@ static noinline bool dcn20_validate_bandwidth_fp(struct dc *dc,
+ 	voltage_supported = dcn20_validate_bandwidth_internal(dc, context, false);
+ 	dummy_pstate_supported = context->bw_ctx.bw.dcn.clk.p_state_change_support;
+ 
+-	if (voltage_supported && dummy_pstate_supported) {
++	if (voltage_supported && (dummy_pstate_supported || !(context->stream_count))) {
+ 		context->bw_ctx.bw.dcn.clk.p_state_change_support = false;
+ 		goto restore_dml_state;
+ 	}
+diff --git a/drivers/gpu/drm/tegra/sor.c b/drivers/gpu/drm/tegra/sor.c
+index 7b88261f57bb6..32c83f2e386ca 100644
+--- a/drivers/gpu/drm/tegra/sor.c
++++ b/drivers/gpu/drm/tegra/sor.c
+@@ -3125,21 +3125,21 @@ static int tegra_sor_init(struct host1x_client *client)
+ 		if (err < 0) {
+ 			dev_err(sor->dev, "failed to acquire SOR reset: %d\n",
+ 				err);
+-			return err;
++			goto rpm_put;
+ 		}
+ 
+ 		err = reset_control_assert(sor->rst);
+ 		if (err < 0) {
+ 			dev_err(sor->dev, "failed to assert SOR reset: %d\n",
+ 				err);
+-			return err;
++			goto rpm_put;
+ 		}
+ 	}
+ 
+ 	err = clk_prepare_enable(sor->clk);
+ 	if (err < 0) {
+ 		dev_err(sor->dev, "failed to enable clock: %d\n", err);
+-		return err;
++		goto rpm_put;
+ 	}
+ 
+ 	usleep_range(1000, 3000);
+@@ -3150,7 +3150,7 @@ static int tegra_sor_init(struct host1x_client *client)
+ 			dev_err(sor->dev, "failed to deassert SOR reset: %d\n",
+ 				err);
+ 			clk_disable_unprepare(sor->clk);
+-			return err;
++			goto rpm_put;
+ 		}
+ 
+ 		reset_control_release(sor->rst);
+@@ -3171,6 +3171,12 @@ static int tegra_sor_init(struct host1x_client *client)
+ 	}
+ 
+ 	return 0;
++
++rpm_put:
++	if (sor->rst)
++		pm_runtime_put(sor->dev);
++
++	return err;
+ }
+ 
+ static int tegra_sor_exit(struct host1x_client *client)
+@@ -3916,17 +3922,10 @@ static int tegra_sor_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, sor);
+ 	pm_runtime_enable(&pdev->dev);
+ 
+-	INIT_LIST_HEAD(&sor->client.list);
++	host1x_client_init(&sor->client);
+ 	sor->client.ops = &sor_client_ops;
+ 	sor->client.dev = &pdev->dev;
+ 
+-	err = host1x_client_register(&sor->client);
+-	if (err < 0) {
+-		dev_err(&pdev->dev, "failed to register host1x client: %d\n",
+-			err);
+-		goto rpm_disable;
+-	}
+-
+ 	/*
+ 	 * On Tegra210 and earlier, provide our own implementation for the
+ 	 * pad output clock.
+@@ -3938,13 +3937,13 @@ static int tegra_sor_probe(struct platform_device *pdev)
+ 				      sor->index);
+ 		if (!name) {
+ 			err = -ENOMEM;
+-			goto unregister;
++			goto uninit;
+ 		}
+ 
+ 		err = host1x_client_resume(&sor->client);
+ 		if (err < 0) {
+ 			dev_err(sor->dev, "failed to resume: %d\n", err);
+-			goto unregister;
++			goto uninit;
+ 		}
+ 
+ 		sor->clk_pad = tegra_clk_sor_pad_register(sor, name);
+@@ -3955,14 +3954,20 @@ static int tegra_sor_probe(struct platform_device *pdev)
+ 		err = PTR_ERR(sor->clk_pad);
+ 		dev_err(sor->dev, "failed to register SOR pad clock: %d\n",
+ 			err);
+-		goto unregister;
++		goto uninit;
++	}
++
++	err = __host1x_client_register(&sor->client);
++	if (err < 0) {
++		dev_err(&pdev->dev, "failed to register host1x client: %d\n",
++			err);
++		goto uninit;
+ 	}
+ 
+ 	return 0;
+ 
+-unregister:
+-	host1x_client_unregister(&sor->client);
+-rpm_disable:
++uninit:
++	host1x_client_exit(&sor->client);
+ 	pm_runtime_disable(&pdev->dev);
+ remove:
+ 	tegra_output_remove(&sor->output);
+diff --git a/drivers/gpu/host1x/bus.c b/drivers/gpu/host1x/bus.c
+index 9e2cb6968819e..6e3b49d0de66d 100644
+--- a/drivers/gpu/host1x/bus.c
++++ b/drivers/gpu/host1x/bus.c
+@@ -703,6 +703,29 @@ void host1x_driver_unregister(struct host1x_driver *driver)
+ }
+ EXPORT_SYMBOL(host1x_driver_unregister);
+ 
++/**
++ * __host1x_client_init() - initialize a host1x client
++ * @client: host1x client
++ * @key: lock class key for the client-specific mutex
++ */
++void __host1x_client_init(struct host1x_client *client, struct lock_class_key *key)
++{
++	INIT_LIST_HEAD(&client->list);
++	__mutex_init(&client->lock, "host1x client lock", key);
++	client->usecount = 0;
++}
++EXPORT_SYMBOL(__host1x_client_init);
++
++/**
++ * host1x_client_exit() - uninitialize a host1x client
++ * @client: host1x client
++ */
++void host1x_client_exit(struct host1x_client *client)
++{
++	mutex_destroy(&client->lock);
++}
++EXPORT_SYMBOL(host1x_client_exit);
++
+ /**
+  * __host1x_client_register() - register a host1x client
+  * @client: host1x client
+@@ -715,16 +738,11 @@ EXPORT_SYMBOL(host1x_driver_unregister);
+  * device and call host1x_device_init(), which will in turn call each client's
+  * &host1x_client_ops.init implementation.
+  */
+-int __host1x_client_register(struct host1x_client *client,
+-			     struct lock_class_key *key)
++int __host1x_client_register(struct host1x_client *client)
+ {
+ 	struct host1x *host1x;
+ 	int err;
+ 
+-	INIT_LIST_HEAD(&client->list);
+-	__mutex_init(&client->lock, "host1x client lock", key);
+-	client->usecount = 0;
+-
+ 	mutex_lock(&devices_lock);
+ 
+ 	list_for_each_entry(host1x, &devices, list) {
+diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
+index 9b56226ce0d1c..54bc563a8dff1 100644
+--- a/drivers/hid/Kconfig
++++ b/drivers/hid/Kconfig
+@@ -93,11 +93,11 @@ menu "Special HID drivers"
+ 	depends on HID
+ 
+ config HID_A4TECH
+-	tristate "A4 tech mice"
++	tristate "A4TECH mice"
+ 	depends on HID
+ 	default !EXPERT
+ 	help
+-	Support for A4 tech X5 and WOP-35 / Trust 450L mice.
++	Support for some A4TECH mice with two scroll wheels.
+ 
+ config HID_ACCUTOUCH
+ 	tristate "Accutouch touch device"
+diff --git a/drivers/hid/hid-a4tech.c b/drivers/hid/hid-a4tech.c
+index 3a8c4a5971f70..2cbc32dda7f74 100644
+--- a/drivers/hid/hid-a4tech.c
++++ b/drivers/hid/hid-a4tech.c
+@@ -147,6 +147,8 @@ static const struct hid_device_id a4_devices[] = {
+ 		.driver_data = A4_2WHEEL_MOUSE_HACK_B8 },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_RP_649),
+ 		.driver_data = A4_2WHEEL_MOUSE_HACK_B8 },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_NB_95),
++		.driver_data = A4_2WHEEL_MOUSE_HACK_B8 },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(hid, a4_devices);
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 097cb1ee31268..0f69f35f2957e 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -2005,6 +2005,9 @@ int hid_connect(struct hid_device *hdev, unsigned int connect_mask)
+ 	case BUS_I2C:
+ 		bus = "I2C";
+ 		break;
++	case BUS_VIRTUAL:
++		bus = "VIRTUAL";
++		break;
+ 	default:
+ 		bus = "<UNKNOWN>";
+ 	}
+diff --git a/drivers/hid/hid-debug.c b/drivers/hid/hid-debug.c
+index d7eaf91003706..982737827b871 100644
+--- a/drivers/hid/hid-debug.c
++++ b/drivers/hid/hid-debug.c
+@@ -929,6 +929,7 @@ static const char *keys[KEY_MAX + 1] = {
+ 	[KEY_APPSELECT] = "AppSelect",
+ 	[KEY_SCREENSAVER] = "ScreenSaver",
+ 	[KEY_VOICECOMMAND] = "VoiceCommand",
++	[KEY_EMOJI_PICKER] = "EmojiPicker",
+ 	[KEY_BRIGHTNESS_MIN] = "BrightnessMin",
+ 	[KEY_BRIGHTNESS_MAX] = "BrightnessMax",
+ 	[KEY_BRIGHTNESS_AUTO] = "BrightnessAuto",
+diff --git a/drivers/hid/hid-gt683r.c b/drivers/hid/hid-gt683r.c
+index 898871c8c768e..29ccb0accfba8 100644
+--- a/drivers/hid/hid-gt683r.c
++++ b/drivers/hid/hid-gt683r.c
+@@ -54,6 +54,7 @@ static const struct hid_device_id gt683r_led_id[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MSI, USB_DEVICE_ID_MSI_GT683R_LED_PANEL) },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(hid, gt683r_led_id);
+ 
+ static void gt683r_brightness_set(struct led_classdev *led_cdev,
+ 				enum led_brightness brightness)
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index e220a05a05b48..136b58a91c04c 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -26,6 +26,7 @@
+ #define USB_DEVICE_ID_A4TECH_WCP32PU	0x0006
+ #define USB_DEVICE_ID_A4TECH_X5_005D	0x000a
+ #define USB_DEVICE_ID_A4TECH_RP_649	0x001a
++#define USB_DEVICE_ID_A4TECH_NB_95	0x022b
+ 
+ #define USB_VENDOR_ID_AASHIMA		0x06d6
+ #define USB_DEVICE_ID_AASHIMA_GAMEPAD	0x0025
+@@ -741,6 +742,7 @@
+ #define USB_DEVICE_ID_LENOVO_X1_COVER	0x6085
+ #define USB_DEVICE_ID_LENOVO_X1_TAB	0x60a3
+ #define USB_DEVICE_ID_LENOVO_X1_TAB3	0x60b5
++#define USB_DEVICE_ID_LENOVO_OPTICAL_USB_MOUSE_600E	0x600e
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D	0x608d
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019	0x6019
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_602E	0x602e
+@@ -1034,6 +1036,7 @@
+ #define USB_DEVICE_ID_SAITEK_X52	0x075c
+ #define USB_DEVICE_ID_SAITEK_X52_2	0x0255
+ #define USB_DEVICE_ID_SAITEK_X52_PRO	0x0762
++#define USB_DEVICE_ID_SAITEK_X65	0x0b6a
+ 
+ #define USB_VENDOR_ID_SAMSUNG		0x0419
+ #define USB_DEVICE_ID_SAMSUNG_IR_REMOTE	0x0001
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 32024905fd70f..d1ab2dccf6fd7 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -957,6 +957,9 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ 
+ 		case 0x0cd: map_key_clear(KEY_PLAYPAUSE);	break;
+ 		case 0x0cf: map_key_clear(KEY_VOICECOMMAND);	break;
++
++		case 0x0d9: map_key_clear(KEY_EMOJI_PICKER);	break;
++
+ 		case 0x0e0: map_abs_clear(ABS_VOLUME);		break;
+ 		case 0x0e2: map_key_clear(KEY_MUTE);		break;
+ 		case 0x0e5: map_key_clear(KEY_BASSBOOST);	break;
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 8580ace596c25..e5a3704b9fe8f 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1580,13 +1580,13 @@ static int mt_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ 		/* we do not set suffix = "Touchscreen" */
+ 		hi->input->name = hdev->name;
+ 		break;
+-	case HID_DG_STYLUS:
+-		/* force BTN_STYLUS to allow tablet matching in udev */
+-		__set_bit(BTN_STYLUS, hi->input->keybit);
+-		break;
+ 	case HID_VD_ASUS_CUSTOM_MEDIA_KEYS:
+ 		suffix = "Custom Media Keys";
+ 		break;
++	case HID_DG_STYLUS:
++		/* force BTN_STYLUS to allow tablet matching in udev */
++		__set_bit(BTN_STYLUS, hi->input->keybit);
++		fallthrough;
+ 	case HID_DG_PEN:
+ 		suffix = "Stylus";
+ 		break;
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 2e38340e19dfb..be53c723c729d 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -110,6 +110,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_PENSKETCH_M912), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M406XE), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE_ID2), HID_QUIRK_ALWAYS_POLL },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_OPTICAL_USB_MOUSE_600E), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_602E), HID_QUIRK_ALWAYS_POLL },
+@@ -158,6 +159,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52_2), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52_PRO), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X65), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD2), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SENNHEISER, USB_DEVICE_ID_SENNHEISER_BTD500USB), HID_QUIRK_NOGET },
+@@ -212,6 +214,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_WCP32PU) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_X5_005D) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_RP_649) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_NB_95) },
+ #endif
+ #if IS_ENABLED(CONFIG_HID_ACCUTOUCH)
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_ACCUTOUCH_2216) },
+diff --git a/drivers/hid/hid-sensor-hub.c b/drivers/hid/hid-sensor-hub.c
+index 3dd7d32467378..f9983145d4e70 100644
+--- a/drivers/hid/hid-sensor-hub.c
++++ b/drivers/hid/hid-sensor-hub.c
+@@ -210,16 +210,21 @@ int sensor_hub_set_feature(struct hid_sensor_hub_device *hsdev, u32 report_id,
+ 	buffer_size = buffer_size / sizeof(__s32);
+ 	if (buffer_size) {
+ 		for (i = 0; i < buffer_size; ++i) {
+-			hid_set_field(report->field[field_index], i,
+-				      (__force __s32)cpu_to_le32(*buf32));
++			ret = hid_set_field(report->field[field_index], i,
++					    (__force __s32)cpu_to_le32(*buf32));
++			if (ret)
++				goto done_proc;
++
+ 			++buf32;
+ 		}
+ 	}
+ 	if (remaining_bytes) {
+ 		value = 0;
+ 		memcpy(&value, (u8 *)buf32, remaining_bytes);
+-		hid_set_field(report->field[field_index], i,
+-			      (__force __s32)cpu_to_le32(value));
++		ret = hid_set_field(report->field[field_index], i,
++				    (__force __s32)cpu_to_le32(value));
++		if (ret)
++			goto done_proc;
+ 	}
+ 	hid_hw_request(hsdev->hdev, report, HID_REQ_SET_REPORT);
+ 	hid_hw_wait(hsdev->hdev);
+diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
+index 17a29ee0ac6c2..8d4ac4b9fb9da 100644
+--- a/drivers/hid/usbhid/hid-core.c
++++ b/drivers/hid/usbhid/hid-core.c
+@@ -374,7 +374,7 @@ static int hid_submit_ctrl(struct hid_device *hid)
+ 	raw_report = usbhid->ctrl[usbhid->ctrltail].raw_report;
+ 	dir = usbhid->ctrl[usbhid->ctrltail].dir;
+ 
+-	len = ((report->size - 1) >> 3) + 1 + (report->id > 0);
++	len = hid_report_len(report);
+ 	if (dir == USB_DIR_OUT) {
+ 		usbhid->urbctrl->pipe = usb_sndctrlpipe(hid_to_usb_dev(hid), 0);
+ 		usbhid->urbctrl->transfer_buffer_length = len;
+diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+index c84c8bf2bc20e..fc99ad8e4a388 100644
+--- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
++++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+@@ -3815,6 +3815,7 @@ static int myri10ge_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		dev_err(&pdev->dev,
+ 			"invalid sram_size %dB or board span %ldB\n",
+ 			mgp->sram_size, mgp->board_span);
++		status = -EINVAL;
+ 		goto abort_with_ioremap;
+ 	}
+ 	memcpy_fromio(mgp->eeprom_strings,
+diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
+index b869b686e9623..16d71cc5a50eb 100644
+--- a/drivers/nvme/target/loop.c
++++ b/drivers/nvme/target/loop.c
+@@ -251,7 +251,8 @@ static const struct blk_mq_ops nvme_loop_admin_mq_ops = {
+ 
+ static void nvme_loop_destroy_admin_queue(struct nvme_loop_ctrl *ctrl)
+ {
+-	clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags);
++	if (!test_and_clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags))
++		return;
+ 	nvmet_sq_destroy(&ctrl->queues[0].nvme_sq);
+ 	blk_cleanup_queue(ctrl->ctrl.admin_q);
+ 	blk_cleanup_queue(ctrl->ctrl.fabrics_q);
+@@ -287,6 +288,7 @@ static void nvme_loop_destroy_io_queues(struct nvme_loop_ctrl *ctrl)
+ 		clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[i].flags);
+ 		nvmet_sq_destroy(&ctrl->queues[i].nvme_sq);
+ 	}
++	ctrl->ctrl.queue_count = 1;
+ }
+ 
+ static int nvme_loop_init_io_queues(struct nvme_loop_ctrl *ctrl)
+@@ -393,6 +395,7 @@ static int nvme_loop_configure_admin_queue(struct nvme_loop_ctrl *ctrl)
+ 	return 0;
+ 
+ out_cleanup_queue:
++	clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags);
+ 	blk_cleanup_queue(ctrl->ctrl.admin_q);
+ out_cleanup_fabrics_q:
+ 	blk_cleanup_queue(ctrl->ctrl.fabrics_q);
+@@ -450,8 +453,10 @@ static void nvme_loop_reset_ctrl_work(struct work_struct *work)
+ 	nvme_loop_shutdown_ctrl(ctrl);
+ 
+ 	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
+-		/* state change failure should never happen */
+-		WARN_ON_ONCE(1);
++		if (ctrl->ctrl.state != NVME_CTRL_DELETING &&
++		    ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO)
++			/* state change failure for non-deleted ctrl? */
++			WARN_ON_ONCE(1);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index a464d0a4f4653..846a02de4d510 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -1827,22 +1827,20 @@ static int qedf_vport_create(struct fc_vport *vport, bool disabled)
+ 		fcoe_wwn_to_str(vport->port_name, buf, sizeof(buf));
+ 		QEDF_WARN(&(base_qedf->dbg_ctx), "Failed to create vport, "
+ 			   "WWPN (0x%s) already exists.\n", buf);
+-		goto err1;
++		return rc;
+ 	}
+ 
+ 	if (atomic_read(&base_qedf->link_state) != QEDF_LINK_UP) {
+ 		QEDF_WARN(&(base_qedf->dbg_ctx), "Cannot create vport "
+ 			   "because link is not up.\n");
+-		rc = -EIO;
+-		goto err1;
++		return -EIO;
+ 	}
+ 
+ 	vn_port = libfc_vport_create(vport, sizeof(struct qedf_ctx));
+ 	if (!vn_port) {
+ 		QEDF_WARN(&(base_qedf->dbg_ctx), "Could not create lport "
+ 			   "for vport.\n");
+-		rc = -ENOMEM;
+-		goto err1;
++		return -ENOMEM;
+ 	}
+ 
+ 	fcoe_wwn_to_str(vport->port_name, buf, sizeof(buf));
+@@ -1866,7 +1864,7 @@ static int qedf_vport_create(struct fc_vport *vport, bool disabled)
+ 	if (rc) {
+ 		QEDF_ERR(&(base_qedf->dbg_ctx), "Could not allocate memory "
+ 		    "for lport stats.\n");
+-		goto err2;
++		goto err;
+ 	}
+ 
+ 	fc_set_wwnn(vn_port, vport->node_name);
+@@ -1884,7 +1882,7 @@ static int qedf_vport_create(struct fc_vport *vport, bool disabled)
+ 	if (rc) {
+ 		QEDF_WARN(&base_qedf->dbg_ctx,
+ 			  "Error adding Scsi_Host rc=0x%x.\n", rc);
+-		goto err2;
++		goto err;
+ 	}
+ 
+ 	/* Set default dev_loss_tmo based on module parameter */
+@@ -1925,9 +1923,10 @@ static int qedf_vport_create(struct fc_vport *vport, bool disabled)
+ 	vport_qedf->dbg_ctx.host_no = vn_port->host->host_no;
+ 	vport_qedf->dbg_ctx.pdev = base_qedf->pdev;
+ 
+-err2:
++	return 0;
++
++err:
+ 	scsi_host_put(vn_port->host);
+-err1:
+ 	return rc;
+ }
+ 
+@@ -1968,8 +1967,7 @@ static int qedf_vport_destroy(struct fc_vport *vport)
+ 	fc_lport_free_stats(vn_port);
+ 
+ 	/* Release Scsi_Host */
+-	if (vn_port->host)
+-		scsi_host_put(vn_port->host);
++	scsi_host_put(vn_port->host);
+ 
+ out:
+ 	return 0;
+diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
+index ba84244c1b4f6..9a8f9f902f3b4 100644
+--- a/drivers/scsi/scsi_devinfo.c
++++ b/drivers/scsi/scsi_devinfo.c
+@@ -184,6 +184,7 @@ static struct {
+ 	{"HP", "C3323-300", "4269", BLIST_NOTQ},
+ 	{"HP", "C5713A", NULL, BLIST_NOREPORTLUN},
+ 	{"HP", "DISK-SUBSYSTEM", "*", BLIST_REPORTLUN2},
++	{"HPE", "OPEN-", "*", BLIST_REPORTLUN2 | BLIST_TRY_VPD_PAGES},
+ 	{"IBM", "AuSaV1S2", NULL, BLIST_FORCELUN},
+ 	{"IBM", "ProFibre 4000R", "*", BLIST_SPARSELUN | BLIST_LARGELUN},
+ 	{"IBM", "2105", NULL, BLIST_RETRY_HWERROR},
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 484f0ba0a65bb..61b79804d462c 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -3038,9 +3038,7 @@ __transport_wait_for_tasks(struct se_cmd *cmd, bool fabric_stop,
+ 	__releases(&cmd->t_state_lock)
+ 	__acquires(&cmd->t_state_lock)
+ {
+-
+-	assert_spin_locked(&cmd->t_state_lock);
+-	WARN_ON_ONCE(!irqs_disabled());
++	lockdep_assert_held(&cmd->t_state_lock);
+ 
+ 	if (fabric_stop)
+ 		cmd->transport_state |= CMD_T_FABRIC_STOP;
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index b39b339feddc9..16fb0184ce5e1 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -938,8 +938,11 @@ static ssize_t gfs2_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 		current->backing_dev_info = inode_to_bdi(inode);
+ 		buffered = iomap_file_buffered_write(iocb, from, &gfs2_iomap_ops);
+ 		current->backing_dev_info = NULL;
+-		if (unlikely(buffered <= 0))
++		if (unlikely(buffered <= 0)) {
++			if (!ret)
++				ret = buffered;
+ 			goto out_unlock;
++		}
+ 
+ 		/*
+ 		 * We need to ensure that the page cache pages are written to
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index ea2f2de448063..cd43c481df4b4 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -569,6 +569,16 @@ out_locked:
+ 	spin_unlock(&gl->gl_lockref.lock);
+ }
+ 
++static bool is_system_glock(struct gfs2_glock *gl)
++{
++	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
++	struct gfs2_inode *m_ip = GFS2_I(sdp->sd_statfs_inode);
++
++	if (gl == m_ip->i_gl)
++		return true;
++	return false;
++}
++
+ /**
+  * do_xmote - Calls the DLM to change the state of a lock
+  * @gl: The lock state
+@@ -658,17 +668,25 @@ skip_inval:
+ 	 * to see sd_log_error and withdraw, and in the meantime, requeue the
+ 	 * work for later.
+ 	 *
++	 * We make a special exception for some system glocks, such as the
++	 * system statfs inode glock, which needs to be granted before the
++	 * gfs2_quotad daemon can exit, and that exit needs to finish before
++	 * we can unmount the withdrawn file system.
++	 *
+ 	 * However, if we're just unlocking the lock (say, for unmount, when
+ 	 * gfs2_gl_hash_clear calls clear_glock) and recovery is complete
+ 	 * then it's okay to tell dlm to unlock it.
+ 	 */
+ 	if (unlikely(sdp->sd_log_error && !gfs2_withdrawn(sdp)))
+ 		gfs2_withdraw_delayed(sdp);
+-	if (glock_blocked_by_withdraw(gl)) {
+-		if (target != LM_ST_UNLOCKED ||
+-		    test_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags)) {
++	if (glock_blocked_by_withdraw(gl) &&
++	    (target != LM_ST_UNLOCKED ||
++	     test_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags))) {
++		if (!is_system_glock(gl)) {
+ 			gfs2_glock_queue_work(gl, GL_GLOCK_DFT_HOLD);
+ 			goto out;
++		} else {
++			clear_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags);
+ 		}
+ 	}
+ 
+@@ -1766,6 +1784,7 @@ __acquires(&lru_lock)
+ 	while(!list_empty(list)) {
+ 		gl = list_first_entry(list, struct gfs2_glock, gl_lru);
+ 		list_del_init(&gl->gl_lru);
++		clear_bit(GLF_LRU, &gl->gl_flags);
+ 		if (!spin_trylock(&gl->gl_lockref.lock)) {
+ add_back_to_lru:
+ 			list_add(&gl->gl_lru, &lru_list);
+@@ -1811,7 +1830,6 @@ static long gfs2_scan_glock_lru(int nr)
+ 		if (!test_bit(GLF_LOCK, &gl->gl_flags)) {
+ 			list_move(&gl->gl_lru, &dispose);
+ 			atomic_dec(&lru_count);
+-			clear_bit(GLF_LRU, &gl->gl_flags);
+ 			freed++;
+ 			continue;
+ 		}
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 8578db50ad734..6ed2a97eb55f1 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -1156,8 +1156,7 @@ static inline void hid_hw_wait(struct hid_device *hdev)
+  */
+ static inline u32 hid_report_len(struct hid_report *report)
+ {
+-	/* equivalent to DIV_ROUND_UP(report->size, 8) + !!(report->id > 0) */
+-	return ((report->size - 1) >> 3) + 1 + (report->id > 0);
++	return DIV_ROUND_UP(report->size, 8) + (report->id > 0);
+ }
+ 
+ int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
+diff --git a/include/linux/host1x.h b/include/linux/host1x.h
+index 9eb77c87a83b0..ed0005ce4285c 100644
+--- a/include/linux/host1x.h
++++ b/include/linux/host1x.h
+@@ -320,12 +320,30 @@ static inline struct host1x_device *to_host1x_device(struct device *dev)
+ int host1x_device_init(struct host1x_device *device);
+ int host1x_device_exit(struct host1x_device *device);
+ 
+-int __host1x_client_register(struct host1x_client *client,
+-			     struct lock_class_key *key);
+-#define host1x_client_register(class) \
+-	({ \
+-		static struct lock_class_key __key; \
+-		__host1x_client_register(class, &__key); \
++void __host1x_client_init(struct host1x_client *client, struct lock_class_key *key);
++void host1x_client_exit(struct host1x_client *client);
++
++#define host1x_client_init(client)			\
++	({						\
++		static struct lock_class_key __key;	\
++		__host1x_client_init(client, &__key);	\
++	})
++
++int __host1x_client_register(struct host1x_client *client);
++
++/*
++ * Note that this wrapper calls __host1x_client_init() for compatibility
++ * with existing callers. Callers that want to separately initialize and
++ * register a host1x client must first initialize using either of the
++ * __host1x_client_init() or host1x_client_init() functions and then use
++ * the low-level __host1x_client_register() function to avoid the client
++ * getting reinitialized.
++ */
++#define host1x_client_register(client)			\
++	({						\
++		static struct lock_class_key __key;	\
++		__host1x_client_init(client, &__key);	\
++		__host1x_client_register(client);	\
+ 	})
+ 
+ int host1x_client_unregister(struct host1x_client *client);
+diff --git a/include/uapi/linux/input-event-codes.h b/include/uapi/linux/input-event-codes.h
+index ee93428ced9a1..225ec87d4f228 100644
+--- a/include/uapi/linux/input-event-codes.h
++++ b/include/uapi/linux/input-event-codes.h
+@@ -611,6 +611,7 @@
+ #define KEY_VOICECOMMAND		0x246	/* Listening Voice Command */
+ #define KEY_ASSISTANT		0x247	/* AL Context-aware desktop assistant */
+ #define KEY_KBD_LAYOUT_NEXT	0x248	/* AC Next Keyboard Layout Select */
++#define KEY_EMOJI_PICKER	0x249	/* Show/hide emoji picker (HUTRR101) */
+ 
+ #define KEY_BRIGHTNESS_MIN		0x250	/* Set Brightness to Minimum */
+ #define KEY_BRIGHTNESS_MAX		0x251	/* Set Brightness to Maximum */
+diff --git a/net/compat.c b/net/compat.c
+index ddd15af3a2837..210fc3b4d0d83 100644
+--- a/net/compat.c
++++ b/net/compat.c
+@@ -177,7 +177,7 @@ int cmsghdr_from_user_compat_to_kern(struct msghdr *kmsg, struct sock *sk,
+ 	if (kcmlen > stackbuf_size)
+ 		kcmsg_base = kcmsg = sock_kmalloc(sk, kcmlen, GFP_KERNEL);
+ 	if (kcmsg == NULL)
+-		return -ENOBUFS;
++		return -ENOMEM;
+ 
+ 	/* Now copy them over neatly. */
+ 	memset(kcmsg, 0, kcmlen);
+diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c
+index 7bcfb16854cbb..9258ffc4ebffc 100644
+--- a/net/core/fib_rules.c
++++ b/net/core/fib_rules.c
+@@ -1168,7 +1168,7 @@ static void notify_rule_change(int event, struct fib_rule *rule,
+ {
+ 	struct net *net;
+ 	struct sk_buff *skb;
+-	int err = -ENOBUFS;
++	int err = -ENOMEM;
+ 
+ 	net = ops->fro_net;
+ 	skb = nlmsg_new(fib_rule_nlmsg_size(ops, rule), GFP_KERNEL);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index eae8e87930cd7..83894723ebeea 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -4842,8 +4842,10 @@ static int rtnl_bridge_notify(struct net_device *dev)
+ 	if (err < 0)
+ 		goto errout;
+ 
+-	if (!skb->len)
++	if (!skb->len) {
++		err = -EINVAL;
+ 		goto errout;
++	}
+ 
+ 	rtnl_notify(skb, net, 0, RTNLGRP_LINK, NULL, GFP_ATOMIC);
+ 	return 0;
+diff --git a/net/ieee802154/nl802154.c b/net/ieee802154/nl802154.c
+index f0b47d43c9f6e..b34e4f827e756 100644
+--- a/net/ieee802154/nl802154.c
++++ b/net/ieee802154/nl802154.c
+@@ -1298,19 +1298,20 @@ ieee802154_llsec_parse_dev_addr(struct nlattr *nla,
+ 	if (!nla || nla_parse_nested_deprecated(attrs, NL802154_DEV_ADDR_ATTR_MAX, nla, nl802154_dev_addr_policy, NULL))
+ 		return -EINVAL;
+ 
+-	if (!attrs[NL802154_DEV_ADDR_ATTR_PAN_ID] ||
+-	    !attrs[NL802154_DEV_ADDR_ATTR_MODE] ||
+-	    !(attrs[NL802154_DEV_ADDR_ATTR_SHORT] ||
+-	      attrs[NL802154_DEV_ADDR_ATTR_EXTENDED]))
++	if (!attrs[NL802154_DEV_ADDR_ATTR_PAN_ID] || !attrs[NL802154_DEV_ADDR_ATTR_MODE])
+ 		return -EINVAL;
+ 
+ 	addr->pan_id = nla_get_le16(attrs[NL802154_DEV_ADDR_ATTR_PAN_ID]);
+ 	addr->mode = nla_get_u32(attrs[NL802154_DEV_ADDR_ATTR_MODE]);
+ 	switch (addr->mode) {
+ 	case NL802154_DEV_ADDR_SHORT:
++		if (!attrs[NL802154_DEV_ADDR_ATTR_SHORT])
++			return -EINVAL;
+ 		addr->short_addr = nla_get_le16(attrs[NL802154_DEV_ADDR_ATTR_SHORT]);
+ 		break;
+ 	case NL802154_DEV_ADDR_EXTENDED:
++		if (!attrs[NL802154_DEV_ADDR_ATTR_EXTENDED])
++			return -EINVAL;
+ 		addr->extended_addr = nla_get_le64(attrs[NL802154_DEV_ADDR_ATTR_EXTENDED]);
+ 		break;
+ 	default:
+diff --git a/net/ipv4/ipconfig.c b/net/ipv4/ipconfig.c
+index 3cd13e1bc6a70..213a1c91507d9 100644
+--- a/net/ipv4/ipconfig.c
++++ b/net/ipv4/ipconfig.c
+@@ -870,7 +870,7 @@ static void __init ic_bootp_send_if(struct ic_device *d, unsigned long jiffies_d
+ 
+ 
+ /*
+- *  Copy BOOTP-supplied string if not already set.
++ *  Copy BOOTP-supplied string
+  */
+ static int __init ic_bootp_string(char *dest, char *src, int len, int max)
+ {
+@@ -919,12 +919,15 @@ static void __init ic_do_bootp_ext(u8 *ext)
+ 		}
+ 		break;
+ 	case 12:	/* Host name */
+-		ic_bootp_string(utsname()->nodename, ext+1, *ext,
+-				__NEW_UTS_LEN);
+-		ic_host_name_set = 1;
++		if (!ic_host_name_set) {
++			ic_bootp_string(utsname()->nodename, ext+1, *ext,
++					__NEW_UTS_LEN);
++			ic_host_name_set = 1;
++		}
+ 		break;
+ 	case 15:	/* Domain name (DNS) */
+-		ic_bootp_string(ic_domain, ext+1, *ext, sizeof(ic_domain));
++		if (!ic_domain[0])
++			ic_bootp_string(ic_domain, ext+1, *ext, sizeof(ic_domain));
+ 		break;
+ 	case 17:	/* Root path */
+ 		if (!root_server_path[0])
+diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
+index e65a50192432c..03ed170b8125e 100644
+--- a/net/x25/af_x25.c
++++ b/net/x25/af_x25.c
+@@ -546,7 +546,7 @@ static int x25_create(struct net *net, struct socket *sock, int protocol,
+ 	if (protocol)
+ 		goto out;
+ 
+-	rc = -ENOBUFS;
++	rc = -ENOMEM;
+ 	if ((sk = x25_alloc_socket(net, kern)) == NULL)
+ 		goto out;
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-06-23 15:12 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-06-23 15:12 UTC (permalink / raw
  To: gentoo-commits

commit:     735c9f35ae8368863f1218a6b171c1fdad8d1ede
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 23 15:12:20 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 23 15:12:20 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=735c9f35

Linuxpatch 5.10.46

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1045_linux-5.10.46.patch | 5158 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5162 insertions(+)

diff --git a/0000_README b/0000_README
index 5ac74f5..6abe7e2 100644
--- a/0000_README
+++ b/0000_README
@@ -223,6 +223,10 @@ Patch:  1044_linux-5.10.45.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.45
 
+Patch:  1045_linux-5.10.46.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.46
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1045_linux-5.10.46.patch b/1045_linux-5.10.46.patch
new file mode 100644
index 0000000..c2e48b5
--- /dev/null
+++ b/1045_linux-5.10.46.patch
@@ -0,0 +1,5158 @@
+diff --git a/Documentation/vm/slub.rst b/Documentation/vm/slub.rst
+index 03f294a638bd8..d3028554b1e9c 100644
+--- a/Documentation/vm/slub.rst
++++ b/Documentation/vm/slub.rst
+@@ -181,7 +181,7 @@ SLUB Debug output
+ Here is a sample of slub debug output::
+ 
+  ====================================================================
+- BUG kmalloc-8: Redzone overwritten
++ BUG kmalloc-8: Right Redzone overwritten
+  --------------------------------------------------------------------
+ 
+  INFO: 0xc90f6d28-0xc90f6d2b. First byte 0x00 instead of 0xcc
+@@ -189,10 +189,10 @@ Here is a sample of slub debug output::
+  INFO: Object 0xc90f6d20 @offset=3360 fp=0xc90f6d58
+  INFO: Allocated in get_modalias+0x61/0xf5 age=53 cpu=1 pid=554
+ 
+- Bytes b4 0xc90f6d10:  00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
+-   Object 0xc90f6d20:  31 30 31 39 2e 30 30 35                         1019.005
+-  Redzone 0xc90f6d28:  00 cc cc cc                                     .
+-  Padding 0xc90f6d50:  5a 5a 5a 5a 5a 5a 5a 5a                         ZZZZZZZZ
++ Bytes b4 (0xc90f6d10): 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
++ Object   (0xc90f6d20): 31 30 31 39 2e 30 30 35                         1019.005
++ Redzone  (0xc90f6d28): 00 cc cc cc                                     .
++ Padding  (0xc90f6d50): 5a 5a 5a 5a 5a 5a 5a 5a                         ZZZZZZZZ
+ 
+    [<c010523d>] dump_trace+0x63/0x1eb
+    [<c01053df>] show_trace_log_lvl+0x1a/0x2f
+diff --git a/Makefile b/Makefile
+index 808b68483002f..7ab22f105a032 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 45
++SUBLEVEL = 46
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arc/include/uapi/asm/sigcontext.h b/arch/arc/include/uapi/asm/sigcontext.h
+index 95f8a4380e110..7a5449dfcb290 100644
+--- a/arch/arc/include/uapi/asm/sigcontext.h
++++ b/arch/arc/include/uapi/asm/sigcontext.h
+@@ -18,6 +18,7 @@
+  */
+ struct sigcontext {
+ 	struct user_regs_struct regs;
++	struct user_regs_arcv2 v2abi;
+ };
+ 
+ #endif /* _ASM_ARC_SIGCONTEXT_H */
+diff --git a/arch/arc/kernel/signal.c b/arch/arc/kernel/signal.c
+index 98e575dbcce51..9d5996e014c01 100644
+--- a/arch/arc/kernel/signal.c
++++ b/arch/arc/kernel/signal.c
+@@ -61,6 +61,41 @@ struct rt_sigframe {
+ 	unsigned int sigret_magic;
+ };
+ 
++static int save_arcv2_regs(struct sigcontext *mctx, struct pt_regs *regs)
++{
++	int err = 0;
++#ifndef CONFIG_ISA_ARCOMPACT
++	struct user_regs_arcv2 v2abi;
++
++	v2abi.r30 = regs->r30;
++#ifdef CONFIG_ARC_HAS_ACCL_REGS
++	v2abi.r58 = regs->r58;
++	v2abi.r59 = regs->r59;
++#else
++	v2abi.r58 = v2abi.r59 = 0;
++#endif
++	err = __copy_to_user(&mctx->v2abi, &v2abi, sizeof(v2abi));
++#endif
++	return err;
++}
++
++static int restore_arcv2_regs(struct sigcontext *mctx, struct pt_regs *regs)
++{
++	int err = 0;
++#ifndef CONFIG_ISA_ARCOMPACT
++	struct user_regs_arcv2 v2abi;
++
++	err = __copy_from_user(&v2abi, &mctx->v2abi, sizeof(v2abi));
++
++	regs->r30 = v2abi.r30;
++#ifdef CONFIG_ARC_HAS_ACCL_REGS
++	regs->r58 = v2abi.r58;
++	regs->r59 = v2abi.r59;
++#endif
++#endif
++	return err;
++}
++
+ static int
+ stash_usr_regs(struct rt_sigframe __user *sf, struct pt_regs *regs,
+ 	       sigset_t *set)
+@@ -94,6 +129,10 @@ stash_usr_regs(struct rt_sigframe __user *sf, struct pt_regs *regs,
+ 
+ 	err = __copy_to_user(&(sf->uc.uc_mcontext.regs.scratch), &uregs.scratch,
+ 			     sizeof(sf->uc.uc_mcontext.regs.scratch));
++
++	if (is_isa_arcv2())
++		err |= save_arcv2_regs(&(sf->uc.uc_mcontext), regs);
++
+ 	err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(sigset_t));
+ 
+ 	return err ? -EFAULT : 0;
+@@ -109,6 +148,10 @@ static int restore_usr_regs(struct pt_regs *regs, struct rt_sigframe __user *sf)
+ 	err |= __copy_from_user(&uregs.scratch,
+ 				&(sf->uc.uc_mcontext.regs.scratch),
+ 				sizeof(sf->uc.uc_mcontext.regs.scratch));
++
++	if (is_isa_arcv2())
++		err |= restore_arcv2_regs(&(sf->uc.uc_mcontext), regs);
++
+ 	if (err)
+ 		return -EFAULT;
+ 
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 81c458e996d9b..963e8cb936e28 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -1284,7 +1284,7 @@ ENDPROC(stack_overflow)
+ 	je	1f
+ 	larl	%r13,.Lsie_entry
+ 	slgr	%r9,%r13
+-	larl	%r13,.Lsie_skip
++	lghi	%r13,.Lsie_skip - .Lsie_entry
+ 	clgr	%r9,%r13
+ 	jh	1f
+ 	oi	__LC_CPU_FLAGS+7, _CIF_MCCK_GUEST
+diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
+index ceeba9f631722..fdee23ea4e173 100644
+--- a/arch/x86/include/asm/fpu/internal.h
++++ b/arch/x86/include/asm/fpu/internal.h
+@@ -578,10 +578,17 @@ static inline void switch_fpu_finish(struct fpu *new_fpu)
+ 	 * PKRU state is switched eagerly because it needs to be valid before we
+ 	 * return to userland e.g. for a copy_to_user() operation.
+ 	 */
+-	if (current->mm) {
++	if (!(current->flags & PF_KTHREAD)) {
++		/*
++		 * If the PKRU bit in xsave.header.xfeatures is not set,
++		 * then the PKRU component was in init state, which means
++		 * XRSTOR will set PKRU to 0. If the bit is not set then
++		 * get_xsave_addr() will return NULL because the PKRU value
++		 * in memory is not valid. This means pkru_val has to be
++		 * set to 0 and not to init_pkru_value.
++		 */
+ 		pk = get_xsave_addr(&new_fpu->state.xsave, XFEATURE_PKRU);
+-		if (pk)
+-			pkru_val = pk->pkru;
++		pkru_val = pk ? pk->pkru : 0;
+ 	}
+ 	__write_pkru(pkru_val);
+ }
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index a4ec65317a7fa..ec3ae30547920 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -307,13 +307,17 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
+ 		return 0;
+ 	}
+ 
+-	if (!access_ok(buf, size))
+-		return -EACCES;
++	if (!access_ok(buf, size)) {
++		ret = -EACCES;
++		goto out;
++	}
+ 
+-	if (!static_cpu_has(X86_FEATURE_FPU))
+-		return fpregs_soft_set(current, NULL,
+-				       0, sizeof(struct user_i387_ia32_struct),
+-				       NULL, buf) != 0;
++	if (!static_cpu_has(X86_FEATURE_FPU)) {
++		ret = fpregs_soft_set(current, NULL, 0,
++				      sizeof(struct user_i387_ia32_struct),
++				      NULL, buf);
++		goto out;
++	}
+ 
+ 	if (use_xsave()) {
+ 		struct _fpx_sw_bytes fx_sw_user;
+@@ -369,6 +373,25 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
+ 			fpregs_unlock();
+ 			return 0;
+ 		}
++
++		/*
++		 * The above did an FPU restore operation, restricted to
++		 * the user portion of the registers, and failed, but the
++		 * microcode might have modified the FPU registers
++		 * nevertheless.
++		 *
++		 * If the FPU registers do not belong to current, then
++		 * invalidate the FPU register state otherwise the task might
++		 * preempt current and return to user space with corrupted
++		 * FPU registers.
++		 *
++		 * In case current owns the FPU registers then no further
++		 * action is required. The fixup below will handle it
++		 * correctly.
++		 */
++		if (test_thread_flag(TIF_NEED_FPU_LOAD))
++			__cpu_invalidate_fpregs_state();
++
+ 		fpregs_unlock();
+ 	} else {
+ 		/*
+@@ -377,7 +400,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
+ 		 */
+ 		ret = __copy_from_user(&env, buf, sizeof(env));
+ 		if (ret)
+-			goto err_out;
++			goto out;
+ 		envp = &env;
+ 	}
+ 
+@@ -405,16 +428,9 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
+ 	if (use_xsave() && !fx_only) {
+ 		u64 init_bv = xfeatures_mask_user() & ~user_xfeatures;
+ 
+-		if (using_compacted_format()) {
+-			ret = copy_user_to_xstate(&fpu->state.xsave, buf_fx);
+-		} else {
+-			ret = __copy_from_user(&fpu->state.xsave, buf_fx, state_size);
+-
+-			if (!ret && state_size > offsetof(struct xregs_state, header))
+-				ret = validate_user_xstate_header(&fpu->state.xsave.header);
+-		}
++		ret = copy_user_to_xstate(&fpu->state.xsave, buf_fx);
+ 		if (ret)
+-			goto err_out;
++			goto out;
+ 
+ 		sanitize_restored_user_xstate(&fpu->state, envp, user_xfeatures,
+ 					      fx_only);
+@@ -434,7 +450,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
+ 		ret = __copy_from_user(&fpu->state.fxsave, buf_fx, state_size);
+ 		if (ret) {
+ 			ret = -EFAULT;
+-			goto err_out;
++			goto out;
+ 		}
+ 
+ 		sanitize_restored_user_xstate(&fpu->state, envp, user_xfeatures,
+@@ -452,7 +468,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
+ 	} else {
+ 		ret = __copy_from_user(&fpu->state.fsave, buf_fx, state_size);
+ 		if (ret)
+-			goto err_out;
++			goto out;
+ 
+ 		fpregs_lock();
+ 		ret = copy_kernel_to_fregs_err(&fpu->state.fsave);
+@@ -463,7 +479,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
+ 		fpregs_deactivate(fpu);
+ 	fpregs_unlock();
+ 
+-err_out:
++out:
+ 	if (ret)
+ 		fpu__clear_user_states(fpu);
+ 	return ret;
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 5759eb075d2fc..677d21082454f 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1405,6 +1405,9 @@ int kvm_lapic_reg_read(struct kvm_lapic *apic, u32 offset, int len,
+ 	if (!apic_x2apic_mode(apic))
+ 		valid_reg_mask |= APIC_REG_MASK(APIC_ARBPRI);
+ 
++	if (alignment + len > 4)
++		return 1;
++
+ 	if (offset > 0x3f0 || !(valid_reg_mask & APIC_REG_MASK(offset)))
+ 		return 1;
+ 
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index ac5054763e38e..6b794344c02db 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -4705,9 +4705,33 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
+ 	context->inject_page_fault = kvm_inject_page_fault;
+ }
+ 
++static union kvm_mmu_role kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu)
++{
++	union kvm_mmu_role role = kvm_calc_shadow_root_page_role_common(vcpu, false);
++
++	/*
++	 * Nested MMUs are used only for walking L2's gva->gpa, they never have
++	 * shadow pages of their own and so "direct" has no meaning.   Set it
++	 * to "true" to try to detect bogus usage of the nested MMU.
++	 */
++	role.base.direct = true;
++
++	if (!is_paging(vcpu))
++		role.base.level = 0;
++	else if (is_long_mode(vcpu))
++		role.base.level = is_la57_mode(vcpu) ? PT64_ROOT_5LEVEL :
++						       PT64_ROOT_4LEVEL;
++	else if (is_pae(vcpu))
++		role.base.level = PT32E_ROOT_LEVEL;
++	else
++		role.base.level = PT32_ROOT_LEVEL;
++
++	return role;
++}
++
+ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
+ {
+-	union kvm_mmu_role new_role = kvm_calc_mmu_role_common(vcpu, false);
++	union kvm_mmu_role new_role = kvm_calc_nested_mmu_role(vcpu);
+ 	struct kvm_mmu *g_context = &vcpu->arch.nested_mmu;
+ 
+ 	if (new_role.as_u64 == g_context->mmu_role.as_u64)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 109041630d30b..d3372cb973079 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -6876,7 +6876,10 @@ static unsigned emulator_get_hflags(struct x86_emulate_ctxt *ctxt)
+ 
+ static void emulator_set_hflags(struct x86_emulate_ctxt *ctxt, unsigned emul_flags)
+ {
+-	emul_to_vcpu(ctxt)->arch.hflags = emul_flags;
++	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
++
++	vcpu->arch.hflags = emul_flags;
++	kvm_mmu_reset_context(vcpu);
+ }
+ 
+ static int emulator_pre_leave_smm(struct x86_emulate_ctxt *ctxt,
+@@ -8018,6 +8021,7 @@ void kvm_arch_exit(void)
+ 	kvm_x86_ops.hardware_enable = NULL;
+ 	kvm_mmu_module_exit();
+ 	free_percpu(user_return_msrs);
++	kmem_cache_destroy(x86_emulator_cache);
+ 	kmem_cache_destroy(x86_fpu_cache);
+ }
+ 
+diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
+index 9e5ccc56f8e07..356b746dfbe7a 100644
+--- a/arch/x86/mm/ioremap.c
++++ b/arch/x86/mm/ioremap.c
+@@ -118,7 +118,9 @@ static void __ioremap_check_other(resource_size_t addr, struct ioremap_desc *des
+ 	if (!IS_ENABLED(CONFIG_EFI))
+ 		return;
+ 
+-	if (efi_mem_type(addr) == EFI_RUNTIME_SERVICES_DATA)
++	if (efi_mem_type(addr) == EFI_RUNTIME_SERVICES_DATA ||
++	    (efi_mem_type(addr) == EFI_BOOT_SERVICES_DATA &&
++	     efi_mem_attributes(addr) & EFI_MEMORY_RUNTIME))
+ 		desc->flags |= IORES_MAP_ENCRYPTED;
+ }
+ 
+diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
+index 5eb4dc2b97dac..e94da744386f3 100644
+--- a/arch/x86/mm/numa.c
++++ b/arch/x86/mm/numa.c
+@@ -254,7 +254,13 @@ int __init numa_cleanup_meminfo(struct numa_meminfo *mi)
+ 
+ 		/* make sure all non-reserved blocks are inside the limits */
+ 		bi->start = max(bi->start, low);
+-		bi->end = min(bi->end, high);
++
++		/* preserve info for non-RAM areas above 'max_pfn': */
++		if (bi->end > high) {
++			numa_add_memblk_to(bi->nid, high, bi->end,
++					   &numa_reserved_meminfo);
++			bi->end = high;
++		}
+ 
+ 		/* and there's no empty block */
+ 		if (bi->start >= bi->end)
+diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
+index 90284ffda58a7..f2db761ee5488 100644
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -59,6 +59,7 @@ config DMA_OF
+ #devices
+ config ALTERA_MSGDMA
+ 	tristate "Altera / Intel mSGDMA Engine"
++	depends on HAS_IOMEM
+ 	select DMA_ENGINE
+ 	help
+ 	  Enable support for Altera / Intel mSGDMA controller.
+diff --git a/drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c b/drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c
+index 4ec909e0b8106..4ae057922ef1f 100644
+--- a/drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c
++++ b/drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c
+@@ -332,6 +332,7 @@ static int __cold dpaa2_qdma_setup(struct fsl_mc_device *ls_dev)
+ 	}
+ 
+ 	if (priv->dpdmai_attr.version.major > DPDMAI_VER_MAJOR) {
++		err = -EINVAL;
+ 		dev_err(dev, "DPDMAI major version mismatch\n"
+ 			     "Found %u.%u, supported version is %u.%u\n",
+ 				priv->dpdmai_attr.version.major,
+@@ -341,6 +342,7 @@ static int __cold dpaa2_qdma_setup(struct fsl_mc_device *ls_dev)
+ 	}
+ 
+ 	if (priv->dpdmai_attr.version.minor > DPDMAI_VER_MINOR) {
++		err = -EINVAL;
+ 		dev_err(dev, "DPDMAI minor version mismatch\n"
+ 			     "Found %u.%u, supported version is %u.%u\n",
+ 				priv->dpdmai_attr.version.major,
+@@ -475,6 +477,7 @@ static int __cold dpaa2_qdma_dpio_setup(struct dpaa2_qdma_priv *priv)
+ 		ppriv->store =
+ 			dpaa2_io_store_create(DPAA2_QDMA_STORE_SIZE, dev);
+ 		if (!ppriv->store) {
++			err = -ENOMEM;
+ 			dev_err(dev, "dpaa2_io_store_create() failed\n");
+ 			goto err_store;
+ 		}
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index f4c7ce8cb399c..048a23018a3df 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -518,6 +518,7 @@ module_init(idxd_init_module);
+ 
+ static void __exit idxd_exit_module(void)
+ {
++	idxd_unregister_driver();
+ 	pci_unregister_driver(&idxd_pci_driver);
+ 	idxd_cdev_remove();
+ 	idxd_unregister_bus_type();
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index 0f5c19370f6d7..dfbf514188f37 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -2696,13 +2696,15 @@ static struct dma_async_tx_descriptor *pl330_prep_dma_cyclic(
+ 	for (i = 0; i < len / period_len; i++) {
+ 		desc = pl330_get_desc(pch);
+ 		if (!desc) {
++			unsigned long iflags;
++
+ 			dev_err(pch->dmac->ddma.dev, "%s:%d Unable to fetch desc\n",
+ 				__func__, __LINE__);
+ 
+ 			if (!first)
+ 				return NULL;
+ 
+-			spin_lock_irqsave(&pl330->pool_lock, flags);
++			spin_lock_irqsave(&pl330->pool_lock, iflags);
+ 
+ 			while (!list_empty(&first->node)) {
+ 				desc = list_entry(first->node.next,
+@@ -2712,7 +2714,7 @@ static struct dma_async_tx_descriptor *pl330_prep_dma_cyclic(
+ 
+ 			list_move_tail(&first->node, &pl330->desc_pool);
+ 
+-			spin_unlock_irqrestore(&pl330->pool_lock, flags);
++			spin_unlock_irqrestore(&pl330->pool_lock, iflags);
+ 
+ 			return NULL;
+ 		}
+diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
+index 3bcb689162c67..ef038f3c5e328 100644
+--- a/drivers/dma/qcom/Kconfig
++++ b/drivers/dma/qcom/Kconfig
+@@ -10,6 +10,7 @@ config QCOM_BAM_DMA
+ 
+ config QCOM_HIDMA_MGMT
+ 	tristate "Qualcomm Technologies HIDMA Management support"
++	depends on HAS_IOMEM
+ 	select DMA_ENGINE
+ 	help
+ 	  Enable support for the Qualcomm Technologies HIDMA Management.
+diff --git a/drivers/dma/sf-pdma/Kconfig b/drivers/dma/sf-pdma/Kconfig
+index f8ffa02e279ff..ba46a0a15a936 100644
+--- a/drivers/dma/sf-pdma/Kconfig
++++ b/drivers/dma/sf-pdma/Kconfig
+@@ -1,5 +1,6 @@
+ config SF_PDMA
+ 	tristate "Sifive PDMA controller driver"
++	depends on HAS_IOMEM
+ 	select DMA_ENGINE
+ 	select DMA_VIRTUAL_CHANNELS
+ 	help
+diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c
+index 77ab1f4730be9..b35b97cb8fd25 100644
+--- a/drivers/dma/ste_dma40.c
++++ b/drivers/dma/ste_dma40.c
+@@ -3676,6 +3676,9 @@ static int __init d40_probe(struct platform_device *pdev)
+ 
+ 	kfree(base->lcla_pool.base_unaligned);
+ 
++	if (base->lcpa_base)
++		iounmap(base->lcpa_base);
++
+ 	if (base->phy_lcpa)
+ 		release_mem_region(base->phy_lcpa,
+ 				   base->lcpa_size);
+diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
+index 70b29bd079c9f..ff7dfb3fdeb47 100644
+--- a/drivers/dma/xilinx/xilinx_dpdma.c
++++ b/drivers/dma/xilinx/xilinx_dpdma.c
+@@ -1459,7 +1459,7 @@ static void xilinx_dpdma_enable_irq(struct xilinx_dpdma_device *xdev)
+  */
+ static void xilinx_dpdma_disable_irq(struct xilinx_dpdma_device *xdev)
+ {
+-	dpdma_write(xdev->reg, XILINX_DPDMA_IDS, XILINX_DPDMA_INTR_ERR_ALL);
++	dpdma_write(xdev->reg, XILINX_DPDMA_IDS, XILINX_DPDMA_INTR_ALL);
+ 	dpdma_write(xdev->reg, XILINX_DPDMA_EIDS, XILINX_DPDMA_EINTR_ALL);
+ }
+ 
+@@ -1596,6 +1596,26 @@ static struct dma_chan *of_dma_xilinx_xlate(struct of_phandle_args *dma_spec,
+ 	return dma_get_slave_channel(&xdev->chan[chan_id]->vchan.chan);
+ }
+ 
++static void dpdma_hw_init(struct xilinx_dpdma_device *xdev)
++{
++	unsigned int i;
++	void __iomem *reg;
++
++	/* Disable all interrupts */
++	xilinx_dpdma_disable_irq(xdev);
++
++	/* Stop all channels */
++	for (i = 0; i < ARRAY_SIZE(xdev->chan); i++) {
++		reg = xdev->reg + XILINX_DPDMA_CH_BASE
++				+ XILINX_DPDMA_CH_OFFSET * i;
++		dpdma_clr(reg, XILINX_DPDMA_CH_CNTL, XILINX_DPDMA_CH_CNTL_ENABLE);
++	}
++
++	/* Clear the interrupt status registers */
++	dpdma_write(xdev->reg, XILINX_DPDMA_ISR, XILINX_DPDMA_INTR_ALL);
++	dpdma_write(xdev->reg, XILINX_DPDMA_EISR, XILINX_DPDMA_EINTR_ALL);
++}
++
+ static int xilinx_dpdma_probe(struct platform_device *pdev)
+ {
+ 	struct xilinx_dpdma_device *xdev;
+@@ -1622,6 +1642,8 @@ static int xilinx_dpdma_probe(struct platform_device *pdev)
+ 	if (IS_ERR(xdev->reg))
+ 		return PTR_ERR(xdev->reg);
+ 
++	dpdma_hw_init(xdev);
++
+ 	xdev->irq = platform_get_irq(pdev, 0);
+ 	if (xdev->irq < 0) {
+ 		dev_err(xdev->dev, "failed to get platform irq\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index fc8da5fed779b..3c92dacbc24ad 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -6590,8 +6590,12 @@ static int gfx_v10_0_kiq_init_register(struct amdgpu_ring *ring)
+ 	if (ring->use_doorbell) {
+ 		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER,
+ 			(adev->doorbell_index.kiq * 2) << 2);
++		/* If GC has entered CGPG, ringing doorbell > first page doesn't
++		 * wakeup GC. Enlarge CP_MEC_DOORBELL_RANGE_UPPER to workaround
++		 * this issue.
++		 */
+ 		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
+-			(adev->doorbell_index.userqueue_end * 2) << 2);
++			(adev->doorbell.size - 4));
+ 	}
+ 
+ 	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL,
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index fb15e8b5af32f..1859d293ef712 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3619,8 +3619,12 @@ static int gfx_v9_0_kiq_init_register(struct amdgpu_ring *ring)
+ 	if (ring->use_doorbell) {
+ 		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER,
+ 					(adev->doorbell_index.kiq * 2) << 2);
++		/* If GC has entered CGPG, ringing doorbell > first page doesn't
++		 * wakeup GC. Enlarge CP_MEC_DOORBELL_RANGE_UPPER to workaround
++		 * this issue.
++		 */
+ 		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
+-					(adev->doorbell_index.userqueue_end * 2) << 2);
++					(adev->doorbell.size - 4));
+ 	}
+ 
+ 	WREG32_SOC15_RLC(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL,
+diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c
+index 57fb3eb3a4b45..1f4e3396d097c 100644
+--- a/drivers/gpu/drm/radeon/radeon_uvd.c
++++ b/drivers/gpu/drm/radeon/radeon_uvd.c
+@@ -286,7 +286,7 @@ int radeon_uvd_resume(struct radeon_device *rdev)
+ 	if (rdev->uvd.vcpu_bo == NULL)
+ 		return -EINVAL;
+ 
+-	memcpy(rdev->uvd.cpu_addr, rdev->uvd_fw->data, rdev->uvd_fw->size);
++	memcpy_toio((void __iomem *)rdev->uvd.cpu_addr, rdev->uvd_fw->data, rdev->uvd_fw->size);
+ 
+ 	size = radeon_bo_size(rdev->uvd.vcpu_bo);
+ 	size -= rdev->uvd_fw->size;
+@@ -294,7 +294,7 @@ int radeon_uvd_resume(struct radeon_device *rdev)
+ 	ptr = rdev->uvd.cpu_addr;
+ 	ptr += rdev->uvd_fw->size;
+ 
+-	memset(ptr, 0, size);
++	memset_io((void __iomem *)ptr, 0, size);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+index bbdfd5e26ec88..f75fb157f2ff7 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
++++ b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+@@ -209,7 +209,7 @@ static int sun8i_dw_hdmi_bind(struct device *dev, struct device *master,
+ 		goto err_disable_clk_tmds;
+ 	}
+ 
+-	ret = sun8i_hdmi_phy_probe(hdmi, phy_node);
++	ret = sun8i_hdmi_phy_get(hdmi, phy_node);
+ 	of_node_put(phy_node);
+ 	if (ret) {
+ 		dev_err(dev, "Couldn't get the HDMI PHY\n");
+@@ -242,7 +242,6 @@ static int sun8i_dw_hdmi_bind(struct device *dev, struct device *master,
+ 
+ cleanup_encoder:
+ 	drm_encoder_cleanup(encoder);
+-	sun8i_hdmi_phy_remove(hdmi);
+ err_disable_clk_tmds:
+ 	clk_disable_unprepare(hdmi->clk_tmds);
+ err_assert_ctrl_reset:
+@@ -263,7 +262,6 @@ static void sun8i_dw_hdmi_unbind(struct device *dev, struct device *master,
+ 	struct sun8i_dw_hdmi *hdmi = dev_get_drvdata(dev);
+ 
+ 	dw_hdmi_unbind(hdmi->hdmi);
+-	sun8i_hdmi_phy_remove(hdmi);
+ 	clk_disable_unprepare(hdmi->clk_tmds);
+ 	reset_control_assert(hdmi->rst_ctrl);
+ 	gpiod_set_value(hdmi->ddc_en, 0);
+@@ -320,7 +318,32 @@ static struct platform_driver sun8i_dw_hdmi_pltfm_driver = {
+ 		.of_match_table = sun8i_dw_hdmi_dt_ids,
+ 	},
+ };
+-module_platform_driver(sun8i_dw_hdmi_pltfm_driver);
++
++static int __init sun8i_dw_hdmi_init(void)
++{
++	int ret;
++
++	ret = platform_driver_register(&sun8i_dw_hdmi_pltfm_driver);
++	if (ret)
++		return ret;
++
++	ret = platform_driver_register(&sun8i_hdmi_phy_driver);
++	if (ret) {
++		platform_driver_unregister(&sun8i_dw_hdmi_pltfm_driver);
++		return ret;
++	}
++
++	return ret;
++}
++
++static void __exit sun8i_dw_hdmi_exit(void)
++{
++	platform_driver_unregister(&sun8i_dw_hdmi_pltfm_driver);
++	platform_driver_unregister(&sun8i_hdmi_phy_driver);
++}
++
++module_init(sun8i_dw_hdmi_init);
++module_exit(sun8i_dw_hdmi_exit);
+ 
+ MODULE_AUTHOR("Jernej Skrabec <jernej.skrabec@siol.net>");
+ MODULE_DESCRIPTION("Allwinner DW HDMI bridge");
+diff --git a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
+index d4b55af0592f8..74f6ed0e25709 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
++++ b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
+@@ -195,14 +195,15 @@ struct sun8i_dw_hdmi {
+ 	struct gpio_desc		*ddc_en;
+ };
+ 
++extern struct platform_driver sun8i_hdmi_phy_driver;
++
+ static inline struct sun8i_dw_hdmi *
+ encoder_to_sun8i_dw_hdmi(struct drm_encoder *encoder)
+ {
+ 	return container_of(encoder, struct sun8i_dw_hdmi, encoder);
+ }
+ 
+-int sun8i_hdmi_phy_probe(struct sun8i_dw_hdmi *hdmi, struct device_node *node);
+-void sun8i_hdmi_phy_remove(struct sun8i_dw_hdmi *hdmi);
++int sun8i_hdmi_phy_get(struct sun8i_dw_hdmi *hdmi, struct device_node *node);
+ 
+ void sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy);
+ void sun8i_hdmi_phy_set_ops(struct sun8i_hdmi_phy *phy,
+diff --git a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+index 9994edf675096..c9239708d398c 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
++++ b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+@@ -5,6 +5,7 @@
+ 
+ #include <linux/delay.h>
+ #include <linux/of_address.h>
++#include <linux/of_platform.h>
+ 
+ #include "sun8i_dw_hdmi.h"
+ 
+@@ -597,10 +598,30 @@ static const struct of_device_id sun8i_hdmi_phy_of_table[] = {
+ 	{ /* sentinel */ }
+ };
+ 
+-int sun8i_hdmi_phy_probe(struct sun8i_dw_hdmi *hdmi, struct device_node *node)
++int sun8i_hdmi_phy_get(struct sun8i_dw_hdmi *hdmi, struct device_node *node)
++{
++	struct platform_device *pdev = of_find_device_by_node(node);
++	struct sun8i_hdmi_phy *phy;
++
++	if (!pdev)
++		return -EPROBE_DEFER;
++
++	phy = platform_get_drvdata(pdev);
++	if (!phy)
++		return -EPROBE_DEFER;
++
++	hdmi->phy = phy;
++
++	put_device(&pdev->dev);
++
++	return 0;
++}
++
++static int sun8i_hdmi_phy_probe(struct platform_device *pdev)
+ {
+ 	const struct of_device_id *match;
+-	struct device *dev = hdmi->dev;
++	struct device *dev = &pdev->dev;
++	struct device_node *node = dev->of_node;
+ 	struct sun8i_hdmi_phy *phy;
+ 	struct resource res;
+ 	void __iomem *regs;
+@@ -704,7 +725,7 @@ int sun8i_hdmi_phy_probe(struct sun8i_dw_hdmi *hdmi, struct device_node *node)
+ 		clk_prepare_enable(phy->clk_phy);
+ 	}
+ 
+-	hdmi->phy = phy;
++	platform_set_drvdata(pdev, phy);
+ 
+ 	return 0;
+ 
+@@ -728,9 +749,9 @@ err_put_clk_bus:
+ 	return ret;
+ }
+ 
+-void sun8i_hdmi_phy_remove(struct sun8i_dw_hdmi *hdmi)
++static int sun8i_hdmi_phy_remove(struct platform_device *pdev)
+ {
+-	struct sun8i_hdmi_phy *phy = hdmi->phy;
++	struct sun8i_hdmi_phy *phy = platform_get_drvdata(pdev);
+ 
+ 	clk_disable_unprepare(phy->clk_mod);
+ 	clk_disable_unprepare(phy->clk_bus);
+@@ -744,4 +765,14 @@ void sun8i_hdmi_phy_remove(struct sun8i_dw_hdmi *hdmi)
+ 	clk_put(phy->clk_pll1);
+ 	clk_put(phy->clk_mod);
+ 	clk_put(phy->clk_bus);
++	return 0;
+ }
++
++struct platform_driver sun8i_hdmi_phy_driver = {
++	.probe  = sun8i_hdmi_phy_probe,
++	.remove = sun8i_hdmi_phy_remove,
++	.driver = {
++		.name = "sun8i-hdmi-phy",
++		.of_match_table = sun8i_hdmi_phy_of_table,
++	},
++};
+diff --git a/drivers/hwmon/scpi-hwmon.c b/drivers/hwmon/scpi-hwmon.c
+index 25aac40f2764a..919877970ae3b 100644
+--- a/drivers/hwmon/scpi-hwmon.c
++++ b/drivers/hwmon/scpi-hwmon.c
+@@ -99,6 +99,15 @@ scpi_show_sensor(struct device *dev, struct device_attribute *attr, char *buf)
+ 
+ 	scpi_scale_reading(&value, sensor);
+ 
++	/*
++	 * Temperature sensor values are treated as signed values based on
++	 * observation even though that is not explicitly specified, and
++	 * because an unsigned u64 temperature does not really make practical
++	 * sense especially when the temperature is below zero degrees Celsius.
++	 */
++	if (sensor->info.class == TEMPERATURE)
++		return sprintf(buf, "%lld\n", (s64)value);
++
+ 	return sprintf(buf, "%llu\n", value);
+ }
+ 
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 7929bf12651ca..1005b182bab47 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -642,11 +642,45 @@ static inline void gic_handle_nmi(u32 irqnr, struct pt_regs *regs)
+ 		nmi_exit();
+ }
+ 
++static u32 do_read_iar(struct pt_regs *regs)
++{
++	u32 iar;
++
++	if (gic_supports_nmi() && unlikely(!interrupts_enabled(regs))) {
++		u64 pmr;
++
++		/*
++		 * We were in a context with IRQs disabled. However, the
++		 * entry code has set PMR to a value that allows any
++		 * interrupt to be acknowledged, and not just NMIs. This can
++		 * lead to surprising effects if the NMI has been retired in
++		 * the meantime, and that there is an IRQ pending. The IRQ
++		 * would then be taken in NMI context, something that nobody
++		 * wants to debug twice.
++		 *
++		 * Until we sort this, drop PMR again to a level that will
++		 * actually only allow NMIs before reading IAR, and then
++		 * restore it to what it was.
++		 */
++		pmr = gic_read_pmr();
++		gic_pmr_mask_irqs();
++		isb();
++
++		iar = gic_read_iar();
++
++		gic_write_pmr(pmr);
++	} else {
++		iar = gic_read_iar();
++	}
++
++	return iar;
++}
++
+ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
+ {
+ 	u32 irqnr;
+ 
+-	irqnr = gic_read_iar();
++	irqnr = do_read_iar(regs);
+ 
+ 	/* Check for special IDs first */
+ 	if ((irqnr >= 1020 && irqnr <= 1023))
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index e97f2e0da6b07..6d03f1d6c4d38 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -82,6 +82,8 @@ struct mcba_priv {
+ 	bool can_ka_first_pass;
+ 	bool can_speed_check;
+ 	atomic_t free_ctx_cnt;
++	void *rxbuf[MCBA_MAX_RX_URBS];
++	dma_addr_t rxbuf_dma[MCBA_MAX_RX_URBS];
+ };
+ 
+ /* CAN frame */
+@@ -633,6 +635,7 @@ static int mcba_usb_start(struct mcba_priv *priv)
+ 	for (i = 0; i < MCBA_MAX_RX_URBS; i++) {
+ 		struct urb *urb = NULL;
+ 		u8 *buf;
++		dma_addr_t buf_dma;
+ 
+ 		/* create a URB, and a buffer for it */
+ 		urb = usb_alloc_urb(0, GFP_KERNEL);
+@@ -642,7 +645,7 @@ static int mcba_usb_start(struct mcba_priv *priv)
+ 		}
+ 
+ 		buf = usb_alloc_coherent(priv->udev, MCBA_USB_RX_BUFF_SIZE,
+-					 GFP_KERNEL, &urb->transfer_dma);
++					 GFP_KERNEL, &buf_dma);
+ 		if (!buf) {
+ 			netdev_err(netdev, "No memory left for USB buffer\n");
+ 			usb_free_urb(urb);
+@@ -661,11 +664,14 @@ static int mcba_usb_start(struct mcba_priv *priv)
+ 		if (err) {
+ 			usb_unanchor_urb(urb);
+ 			usb_free_coherent(priv->udev, MCBA_USB_RX_BUFF_SIZE,
+-					  buf, urb->transfer_dma);
++					  buf, buf_dma);
+ 			usb_free_urb(urb);
+ 			break;
+ 		}
+ 
++		priv->rxbuf[i] = buf;
++		priv->rxbuf_dma[i] = buf_dma;
++
+ 		/* Drop reference, USB core will take care of freeing it */
+ 		usb_free_urb(urb);
+ 	}
+@@ -708,7 +714,14 @@ static int mcba_usb_open(struct net_device *netdev)
+ 
+ static void mcba_urb_unlink(struct mcba_priv *priv)
+ {
++	int i;
++
+ 	usb_kill_anchored_urbs(&priv->rx_submitted);
++
++	for (i = 0; i < MCBA_MAX_RX_URBS; ++i)
++		usb_free_coherent(priv->udev, MCBA_USB_RX_BUFF_SIZE,
++				  priv->rxbuf[i], priv->rxbuf_dma[i]);
++
+ 	usb_kill_anchored_urbs(&priv->tx_submitted);
+ }
+ 
+diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c
+index 9e02f88645931..5e90df42b2013 100644
+--- a/drivers/net/ethernet/atheros/alx/main.c
++++ b/drivers/net/ethernet/atheros/alx/main.c
+@@ -1849,6 +1849,7 @@ out_free_netdev:
+ 	free_netdev(netdev);
+ out_pci_release:
+ 	pci_release_mem_regions(pdev);
++	pci_disable_pcie_error_reporting(pdev);
+ out_pci_disable:
+ 	pci_disable_device(pdev);
+ 	return err;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index adfaa9a850dd3..db1b89f570794 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -7184,7 +7184,7 @@ skip_rdma:
+ 	entries_sp = ctx->vnic_max_vnic_entries + ctx->qp_max_l2_entries +
+ 		     2 * (extra_qps + ctx->qp_min_qp1_entries) + min;
+ 	entries_sp = roundup(entries_sp, ctx->tqm_entries_multiple);
+-	entries = ctx->qp_max_l2_entries + extra_qps + ctx->qp_min_qp1_entries;
++	entries = ctx->qp_max_l2_entries + 2 * (extra_qps + ctx->qp_min_qp1_entries);
+ 	entries = roundup(entries, ctx->tqm_entries_multiple);
+ 	entries = clamp_t(u32, entries, min, ctx->tqm_max_entries_per_ring);
+ 	for (i = 0; i < ctx->tqm_fp_rings_count + 1; i++) {
+@@ -11353,6 +11353,8 @@ static void bnxt_fw_init_one_p3(struct bnxt *bp)
+ 	bnxt_hwrm_coal_params_qcaps(bp);
+ }
+ 
++static int bnxt_probe_phy(struct bnxt *bp, bool fw_dflt);
++
+ static int bnxt_fw_init_one(struct bnxt *bp)
+ {
+ 	int rc;
+@@ -11367,6 +11369,9 @@ static int bnxt_fw_init_one(struct bnxt *bp)
+ 		netdev_err(bp->dev, "Firmware init phase 2 failed\n");
+ 		return rc;
+ 	}
++	rc = bnxt_probe_phy(bp, false);
++	if (rc)
++		return rc;
+ 	rc = bnxt_approve_mac(bp, bp->dev->dev_addr, false);
+ 	if (rc)
+ 		return rc;
+@@ -12741,6 +12746,7 @@ init_err_pci_clean:
+ 	bnxt_hwrm_func_drv_unrgtr(bp);
+ 	bnxt_free_hwrm_short_cmd_req(bp);
+ 	bnxt_free_hwrm_resources(bp);
++	bnxt_ethtool_free(bp);
+ 	kfree(bp->fw_health);
+ 	bp->fw_health = NULL;
+ 	bnxt_cleanup_pci(bp);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
+index 61ea3ec5c3fcc..83ed10ac86606 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
+@@ -1337,13 +1337,27 @@ static int cxgb4_ethtool_flash_phy(struct net_device *netdev,
+ 		return ret;
+ 	}
+ 
+-	spin_lock_bh(&adap->win0_lock);
++	/* We have to RESET the chip/firmware because we need the
++	 * chip in uninitialized state for loading new PHY image.
++	 * Otherwise, the running firmware will only store the PHY
++	 * image in local RAM which will be lost after next reset.
++	 */
++	ret = t4_fw_reset(adap, adap->mbox, PIORSTMODE_F | PIORST_F);
++	if (ret < 0) {
++		dev_err(adap->pdev_dev,
++			"Set FW to RESET for flashing PHY FW failed. ret: %d\n",
++			ret);
++		return ret;
++	}
++
+ 	ret = t4_load_phy_fw(adap, MEMWIN_NIC, NULL, data, size);
+-	spin_unlock_bh(&adap->win0_lock);
+-	if (ret)
+-		dev_err(adap->pdev_dev, "Failed to load PHY FW\n");
++	if (ret < 0) {
++		dev_err(adap->pdev_dev, "Failed to load PHY FW. ret: %d\n",
++			ret);
++		return ret;
++	}
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int cxgb4_ethtool_flash_fw(struct net_device *netdev,
+@@ -1610,16 +1624,14 @@ static struct filter_entry *cxgb4_get_filter_entry(struct adapter *adap,
+ 						   u32 ftid)
+ {
+ 	struct tid_info *t = &adap->tids;
+-	struct filter_entry *f;
+ 
+-	if (ftid < t->nhpftids)
+-		f = &adap->tids.hpftid_tab[ftid];
+-	else if (ftid < t->nftids)
+-		f = &adap->tids.ftid_tab[ftid - t->nhpftids];
+-	else
+-		f = lookup_tid(&adap->tids, ftid);
++	if (ftid >= t->hpftid_base && ftid < t->hpftid_base + t->nhpftids)
++		return &t->hpftid_tab[ftid - t->hpftid_base];
+ 
+-	return f;
++	if (ftid >= t->ftid_base && ftid < t->ftid_base + t->nftids)
++		return &t->ftid_tab[ftid - t->ftid_base];
++
++	return lookup_tid(t, ftid);
+ }
+ 
+ static void cxgb4_fill_filter_rule(struct ethtool_rx_flow_spec *fs,
+@@ -1826,6 +1838,11 @@ static int cxgb4_ntuple_del_filter(struct net_device *dev,
+ 	filter_id = filter_info->loc_array[cmd->fs.location];
+ 	f = cxgb4_get_filter_entry(adapter, filter_id);
+ 
++	if (f->fs.prio)
++		filter_id -= adapter->tids.hpftid_base;
++	else if (!f->fs.hash)
++		filter_id -= (adapter->tids.ftid_base - adapter->tids.nhpftids);
++
+ 	ret = cxgb4_flow_rule_destroy(dev, f->fs.tc_prio, &f->fs, filter_id);
+ 	if (ret)
+ 		goto err;
+@@ -1885,6 +1902,11 @@ static int cxgb4_ntuple_set_filter(struct net_device *netdev,
+ 
+ 	filter_info = &adapter->ethtool_filters->port[pi->port_id];
+ 
++	if (fs.prio)
++		tid += adapter->tids.hpftid_base;
++	else if (!fs.hash)
++		tid += (adapter->tids.ftid_base - adapter->tids.nhpftids);
++
+ 	filter_info->loc_array[cmd->fs.location] = tid;
+ 	set_bit(cmd->fs.location, filter_info->bmap);
+ 	filter_info->in_use++;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+index e664e05b9f026..5fbc087268dbe 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+@@ -198,7 +198,7 @@ static void set_nat_params(struct adapter *adap, struct filter_entry *f,
+ 				      WORD_MASK, f->fs.nat_lip[3] |
+ 				      f->fs.nat_lip[2] << 8 |
+ 				      f->fs.nat_lip[1] << 16 |
+-				      (u64)f->fs.nat_lip[0] << 25, 1);
++				      (u64)f->fs.nat_lip[0] << 24, 1);
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 04dcb5e4b3161..8be525c5e2e4a 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -4428,10 +4428,8 @@ static int adap_init0_phy(struct adapter *adap)
+ 
+ 	/* Load PHY Firmware onto adapter.
+ 	 */
+-	spin_lock_bh(&adap->win0_lock);
+ 	ret = t4_load_phy_fw(adap, MEMWIN_NIC, phy_info->phy_fw_version,
+ 			     (u8 *)phyf->data, phyf->size);
+-	spin_unlock_bh(&adap->win0_lock);
+ 	if (ret < 0)
+ 		dev_err(adap->pdev_dev, "PHY Firmware transfer error %d\n",
+ 			-ret);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index 581670dced6ec..964ea3491b80b 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -3067,16 +3067,19 @@ int t4_read_flash(struct adapter *adapter, unsigned int addr,
+  *	@addr: the start address to write
+  *	@n: length of data to write in bytes
+  *	@data: the data to write
++ *	@byte_oriented: whether to store data as bytes or as words
+  *
+  *	Writes up to a page of data (256 bytes) to the serial flash starting
+  *	at the given address.  All the data must be written to the same page.
++ *	If @byte_oriented is set the write data is stored as byte stream
++ *	(i.e. matches what on disk), otherwise in big-endian.
+  */
+ static int t4_write_flash(struct adapter *adapter, unsigned int addr,
+-			  unsigned int n, const u8 *data)
++			  unsigned int n, const u8 *data, bool byte_oriented)
+ {
+-	int ret;
+-	u32 buf[64];
+ 	unsigned int i, c, left, val, offset = addr & 0xff;
++	u32 buf[64];
++	int ret;
+ 
+ 	if (addr >= adapter->params.sf_size || offset + n > SF_PAGE_SIZE)
+ 		return -EINVAL;
+@@ -3087,10 +3090,14 @@ static int t4_write_flash(struct adapter *adapter, unsigned int addr,
+ 	    (ret = sf1_write(adapter, 4, 1, 1, val)) != 0)
+ 		goto unlock;
+ 
+-	for (left = n; left; left -= c) {
++	for (left = n; left; left -= c, data += c) {
+ 		c = min(left, 4U);
+-		for (val = 0, i = 0; i < c; ++i)
+-			val = (val << 8) + *data++;
++		for (val = 0, i = 0; i < c; ++i) {
++			if (byte_oriented)
++				val = (val << 8) + data[i];
++			else
++				val = (val << 8) + data[c - i - 1];
++		}
+ 
+ 		ret = sf1_write(adapter, c, c != left, 1, val);
+ 		if (ret)
+@@ -3103,7 +3110,8 @@ static int t4_write_flash(struct adapter *adapter, unsigned int addr,
+ 	t4_write_reg(adapter, SF_OP_A, 0);    /* unlock SF */
+ 
+ 	/* Read the page to verify the write succeeded */
+-	ret = t4_read_flash(adapter, addr & ~0xff, ARRAY_SIZE(buf), buf, 1);
++	ret = t4_read_flash(adapter, addr & ~0xff, ARRAY_SIZE(buf), buf,
++			    byte_oriented);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -3699,7 +3707,7 @@ int t4_load_fw(struct adapter *adap, const u8 *fw_data, unsigned int size)
+ 	 */
+ 	memcpy(first_page, fw_data, SF_PAGE_SIZE);
+ 	((struct fw_hdr *)first_page)->fw_ver = cpu_to_be32(0xffffffff);
+-	ret = t4_write_flash(adap, fw_start, SF_PAGE_SIZE, first_page);
++	ret = t4_write_flash(adap, fw_start, SF_PAGE_SIZE, first_page, true);
+ 	if (ret)
+ 		goto out;
+ 
+@@ -3707,14 +3715,14 @@ int t4_load_fw(struct adapter *adap, const u8 *fw_data, unsigned int size)
+ 	for (size -= SF_PAGE_SIZE; size; size -= SF_PAGE_SIZE) {
+ 		addr += SF_PAGE_SIZE;
+ 		fw_data += SF_PAGE_SIZE;
+-		ret = t4_write_flash(adap, addr, SF_PAGE_SIZE, fw_data);
++		ret = t4_write_flash(adap, addr, SF_PAGE_SIZE, fw_data, true);
+ 		if (ret)
+ 			goto out;
+ 	}
+ 
+-	ret = t4_write_flash(adap,
+-			     fw_start + offsetof(struct fw_hdr, fw_ver),
+-			     sizeof(hdr->fw_ver), (const u8 *)&hdr->fw_ver);
++	ret = t4_write_flash(adap, fw_start + offsetof(struct fw_hdr, fw_ver),
++			     sizeof(hdr->fw_ver), (const u8 *)&hdr->fw_ver,
++			     true);
+ out:
+ 	if (ret)
+ 		dev_err(adap->pdev_dev, "firmware download failed, error %d\n",
+@@ -3819,9 +3827,11 @@ int t4_load_phy_fw(struct adapter *adap, int win,
+ 	/* Copy the supplied PHY Firmware image to the adapter memory location
+ 	 * allocated by the adapter firmware.
+ 	 */
++	spin_lock_bh(&adap->win0_lock);
+ 	ret = t4_memory_rw(adap, win, mtype, maddr,
+ 			   phy_fw_size, (__be32 *)phy_fw_data,
+ 			   T4_MEMORY_WRITE);
++	spin_unlock_bh(&adap->win0_lock);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -10215,7 +10225,7 @@ int t4_load_cfg(struct adapter *adap, const u8 *cfg_data, unsigned int size)
+ 			n = size - i;
+ 		else
+ 			n = SF_PAGE_SIZE;
+-		ret = t4_write_flash(adap, addr, n, cfg_data);
++		ret = t4_write_flash(adap, addr, n, cfg_data, true);
+ 		if (ret)
+ 			goto out;
+ 
+@@ -10684,13 +10694,14 @@ int t4_load_boot(struct adapter *adap, u8 *boot_data,
+ 	for (size -= SF_PAGE_SIZE; size; size -= SF_PAGE_SIZE) {
+ 		addr += SF_PAGE_SIZE;
+ 		boot_data += SF_PAGE_SIZE;
+-		ret = t4_write_flash(adap, addr, SF_PAGE_SIZE, boot_data);
++		ret = t4_write_flash(adap, addr, SF_PAGE_SIZE, boot_data,
++				     false);
+ 		if (ret)
+ 			goto out;
+ 	}
+ 
+ 	ret = t4_write_flash(adap, boot_sector, SF_PAGE_SIZE,
+-			     (const u8 *)header);
++			     (const u8 *)header, false);
+ 
+ out:
+ 	if (ret)
+@@ -10765,7 +10776,7 @@ int t4_load_bootcfg(struct adapter *adap, const u8 *cfg_data, unsigned int size)
+ 	for (i = 0; i < size; i += SF_PAGE_SIZE) {
+ 		n = min_t(u32, size - i, SF_PAGE_SIZE);
+ 
+-		ret = t4_write_flash(adap, addr, n, cfg_data);
++		ret = t4_write_flash(adap, addr, n, cfg_data, false);
+ 		if (ret)
+ 			goto out;
+ 
+@@ -10777,7 +10788,8 @@ int t4_load_bootcfg(struct adapter *adap, const u8 *cfg_data, unsigned int size)
+ 	for (i = 0; i < npad; i++) {
+ 		u8 data = 0;
+ 
+-		ret = t4_write_flash(adap, cfg_addr + size + i, 1, &data);
++		ret = t4_write_flash(adap, cfg_addr + size + i, 1, &data,
++				     false);
+ 		if (ret)
+ 			goto out;
+ 	}
+diff --git a/drivers/net/ethernet/ec_bhf.c b/drivers/net/ethernet/ec_bhf.c
+index 46b0dbab8aadc..7c992172933bc 100644
+--- a/drivers/net/ethernet/ec_bhf.c
++++ b/drivers/net/ethernet/ec_bhf.c
+@@ -576,10 +576,12 @@ static void ec_bhf_remove(struct pci_dev *dev)
+ 	struct ec_bhf_priv *priv = netdev_priv(net_dev);
+ 
+ 	unregister_netdev(net_dev);
+-	free_netdev(net_dev);
+ 
+ 	pci_iounmap(dev, priv->dma_io);
+ 	pci_iounmap(dev, priv->io);
++
++	free_netdev(net_dev);
++
+ 	pci_release_regions(dev);
+ 	pci_clear_master(dev);
+ 	pci_disable_device(dev);
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 676e437d78f6a..cb1e1ad652d09 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -5905,6 +5905,7 @@ drv_cleanup:
+ unmap_bars:
+ 	be_unmap_pci_bars(adapter);
+ free_netdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	free_netdev(netdev);
+ rel_reg:
+ 	pci_release_regions(pdev);
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index 1753807cbf97e..d71eac7e19249 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -215,15 +215,13 @@ static u64 fec_ptp_read(const struct cyclecounter *cc)
+ {
+ 	struct fec_enet_private *fep =
+ 		container_of(cc, struct fec_enet_private, cc);
+-	const struct platform_device_id *id_entry =
+-		platform_get_device_id(fep->pdev);
+ 	u32 tempval;
+ 
+ 	tempval = readl(fep->hwp + FEC_ATIME_CTRL);
+ 	tempval |= FEC_T_CTRL_CAPTURE;
+ 	writel(tempval, fep->hwp + FEC_ATIME_CTRL);
+ 
+-	if (id_entry->driver_data & FEC_QUIRK_BUG_CAPTURE)
++	if (fep->quirks & FEC_QUIRK_BUG_CAPTURE)
+ 		udelay(1);
+ 
+ 	return readl(fep->hwp + FEC_ATIME);
+@@ -604,6 +602,10 @@ void fec_ptp_init(struct platform_device *pdev, int irq_idx)
+ 	fep->ptp_caps.enable = fec_ptp_enable;
+ 
+ 	fep->cycle_speed = clk_get_rate(fep->clk_ptp);
++	if (!fep->cycle_speed) {
++		fep->cycle_speed = NSEC_PER_SEC;
++		dev_err(&fep->pdev->dev, "clk_ptp clock rate is zero\n");
++	}
+ 	fep->ptp_inc = NSEC_PER_SEC / fep->cycle_speed;
+ 
+ 	spin_lock_init(&fep->tmreg_lock);
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index fb20c6971f4c7..dc944d605a741 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -1705,12 +1705,13 @@ setup_rings:
+  * ice_vsi_cfg_txqs - Configure the VSI for Tx
+  * @vsi: the VSI being configured
+  * @rings: Tx ring array to be configured
++ * @count: number of Tx ring array elements
+  *
+  * Return 0 on success and a negative value on error
+  * Configure the Tx VSI for operation.
+  */
+ static int
+-ice_vsi_cfg_txqs(struct ice_vsi *vsi, struct ice_ring **rings)
++ice_vsi_cfg_txqs(struct ice_vsi *vsi, struct ice_ring **rings, u16 count)
+ {
+ 	struct ice_aqc_add_tx_qgrp *qg_buf;
+ 	u16 q_idx = 0;
+@@ -1722,7 +1723,7 @@ ice_vsi_cfg_txqs(struct ice_vsi *vsi, struct ice_ring **rings)
+ 
+ 	qg_buf->num_txqs = 1;
+ 
+-	for (q_idx = 0; q_idx < vsi->num_txq; q_idx++) {
++	for (q_idx = 0; q_idx < count; q_idx++) {
+ 		err = ice_vsi_cfg_txq(vsi, rings[q_idx], qg_buf);
+ 		if (err)
+ 			goto err_cfg_txqs;
+@@ -1742,7 +1743,7 @@ err_cfg_txqs:
+  */
+ int ice_vsi_cfg_lan_txqs(struct ice_vsi *vsi)
+ {
+-	return ice_vsi_cfg_txqs(vsi, vsi->tx_rings);
++	return ice_vsi_cfg_txqs(vsi, vsi->tx_rings, vsi->num_txq);
+ }
+ 
+ /**
+@@ -1757,7 +1758,7 @@ int ice_vsi_cfg_xdp_txqs(struct ice_vsi *vsi)
+ 	int ret;
+ 	int i;
+ 
+-	ret = ice_vsi_cfg_txqs(vsi, vsi->xdp_rings);
++	ret = ice_vsi_cfg_txqs(vsi, vsi->xdp_rings, vsi->num_xdp_txq);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1955,17 +1956,18 @@ int ice_vsi_stop_all_rx_rings(struct ice_vsi *vsi)
+  * @rst_src: reset source
+  * @rel_vmvf_num: Relative ID of VF/VM
+  * @rings: Tx ring array to be stopped
++ * @count: number of Tx ring array elements
+  */
+ static int
+ ice_vsi_stop_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
+-		      u16 rel_vmvf_num, struct ice_ring **rings)
++		      u16 rel_vmvf_num, struct ice_ring **rings, u16 count)
+ {
+ 	u16 q_idx;
+ 
+ 	if (vsi->num_txq > ICE_LAN_TXQ_MAX_QDIS)
+ 		return -EINVAL;
+ 
+-	for (q_idx = 0; q_idx < vsi->num_txq; q_idx++) {
++	for (q_idx = 0; q_idx < count; q_idx++) {
+ 		struct ice_txq_meta txq_meta = { };
+ 		int status;
+ 
+@@ -1993,7 +1995,7 @@ int
+ ice_vsi_stop_lan_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
+ 			  u16 rel_vmvf_num)
+ {
+-	return ice_vsi_stop_tx_rings(vsi, rst_src, rel_vmvf_num, vsi->tx_rings);
++	return ice_vsi_stop_tx_rings(vsi, rst_src, rel_vmvf_num, vsi->tx_rings, vsi->num_txq);
+ }
+ 
+ /**
+@@ -2002,7 +2004,7 @@ ice_vsi_stop_lan_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
+  */
+ int ice_vsi_stop_xdp_tx_rings(struct ice_vsi *vsi)
+ {
+-	return ice_vsi_stop_tx_rings(vsi, ICE_NO_RESET, 0, vsi->xdp_rings);
++	return ice_vsi_stop_tx_rings(vsi, ICE_NO_RESET, 0, vsi->xdp_rings, vsi->num_xdp_txq);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 6f30aad7695fb..1567ddd4c5b87 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2539,6 +2539,20 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 	return (ret || xdp_ring_err) ? -ENOMEM : 0;
+ }
+ 
++/**
++ * ice_xdp_safe_mode - XDP handler for safe mode
++ * @dev: netdevice
++ * @xdp: XDP command
++ */
++static int ice_xdp_safe_mode(struct net_device __always_unused *dev,
++			     struct netdev_bpf *xdp)
++{
++	NL_SET_ERR_MSG_MOD(xdp->extack,
++			   "Please provide working DDP firmware package in order to use XDP\n"
++			   "Refer to Documentation/networking/device_drivers/ethernet/intel/ice.rst");
++	return -EOPNOTSUPP;
++}
++
+ /**
+  * ice_xdp - implements XDP handler
+  * @dev: netdevice
+@@ -6786,6 +6800,7 @@ static const struct net_device_ops ice_netdev_safe_mode_ops = {
+ 	.ndo_change_mtu = ice_change_mtu,
+ 	.ndo_get_stats64 = ice_get_stats64,
+ 	.ndo_tx_timeout = ice_tx_timeout,
++	.ndo_bpf = ice_xdp_safe_mode,
+ };
+ 
+ static const struct net_device_ops ice_netdev_ops = {
+diff --git a/drivers/net/ethernet/lantiq_xrx200.c b/drivers/net/ethernet/lantiq_xrx200.c
+index 135ba5b6ae980..072075bc60ee9 100644
+--- a/drivers/net/ethernet/lantiq_xrx200.c
++++ b/drivers/net/ethernet/lantiq_xrx200.c
+@@ -154,6 +154,7 @@ static int xrx200_close(struct net_device *net_dev)
+ 
+ static int xrx200_alloc_skb(struct xrx200_chan *ch)
+ {
++	struct sk_buff *skb = ch->skb[ch->dma.desc];
+ 	dma_addr_t mapping;
+ 	int ret = 0;
+ 
+@@ -168,6 +169,7 @@ static int xrx200_alloc_skb(struct xrx200_chan *ch)
+ 				 XRX200_DMA_DATA_LEN, DMA_FROM_DEVICE);
+ 	if (unlikely(dma_mapping_error(ch->priv->dev, mapping))) {
+ 		dev_kfree_skb_any(ch->skb[ch->dma.desc]);
++		ch->skb[ch->dma.desc] = skb;
+ 		ret = -ENOMEM;
+ 		goto skip;
+ 	}
+@@ -198,7 +200,6 @@ static int xrx200_hw_receive(struct xrx200_chan *ch)
+ 	ch->dma.desc %= LTQ_DESC_NUM;
+ 
+ 	if (ret) {
+-		ch->skb[ch->dma.desc] = skb;
+ 		net_dev->stats.rx_dropped++;
+ 		netdev_err(net_dev, "failed to allocate new rx buffer\n");
+ 		return ret;
+@@ -352,8 +353,8 @@ static irqreturn_t xrx200_dma_irq(int irq, void *ptr)
+ 	struct xrx200_chan *ch = ptr;
+ 
+ 	if (napi_schedule_prep(&ch->napi)) {
+-		__napi_schedule(&ch->napi);
+ 		ltq_dma_disable_irq(&ch->dma);
++		__napi_schedule(&ch->napi);
+ 	}
+ 
+ 	ltq_dma_ack_irq(&ch->dma);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+index 3d45341e2216f..26f7fab109d97 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+@@ -532,9 +532,6 @@ void mlx5e_ipsec_build_netdev(struct mlx5e_priv *priv)
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 	struct net_device *netdev = priv->netdev;
+ 
+-	if (!priv->ipsec)
+-		return;
+-
+ 	if (!(mlx5_accel_ipsec_device_caps(mdev) & MLX5_ACCEL_IPSEC_CAP_ESP) ||
+ 	    !MLX5_CAP_ETH(mdev, swp)) {
+ 		mlx5_core_dbg(mdev, "mlx5e: ESP and SWP offload not supported\n");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index f18b52be32e98..d81fa8e561991 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -4958,13 +4958,9 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ 	}
+ 
+ 	if (mlx5_vxlan_allowed(mdev->vxlan) || mlx5_geneve_tx_allowed(mdev)) {
+-		netdev->hw_features     |= NETIF_F_GSO_UDP_TUNNEL |
+-					   NETIF_F_GSO_UDP_TUNNEL_CSUM;
+-		netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL |
+-					   NETIF_F_GSO_UDP_TUNNEL_CSUM;
+-		netdev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM;
+-		netdev->vlan_features |= NETIF_F_GSO_UDP_TUNNEL |
+-					 NETIF_F_GSO_UDP_TUNNEL_CSUM;
++		netdev->hw_features     |= NETIF_F_GSO_UDP_TUNNEL;
++		netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL;
++		netdev->vlan_features |= NETIF_F_GSO_UDP_TUNNEL;
+ 	}
+ 
+ 	if (mlx5e_tunnel_proto_supported(mdev, IPPROTO_GRE)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 80abdb0b47d7e..59837af959d06 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -5206,7 +5206,7 @@ static void mlx5e_tc_hairpin_update_dead_peer(struct mlx5e_priv *priv,
+ 	list_for_each_entry_safe(hpe, tmp, &init_wait_list, dead_peer_wait_list) {
+ 		wait_for_completion(&hpe->res_ready);
+ 		if (!IS_ERR_OR_NULL(hpe->hp) && hpe->peer_vhca_id == peer_vhca_id)
+-			hpe->hp->pair->peer_gone = true;
++			mlx5_core_hairpin_clear_dead_peer(hpe->hp->pair);
+ 
+ 		mlx5e_hairpin_put(priv, hpe);
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index d61539b5567c0..401b2f5128dd4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1302,6 +1302,12 @@ static int esw_enable_vport(struct mlx5_eswitch *esw, u16 vport_num,
+ 	    (!vport_num && mlx5_core_is_ecpf(esw->dev)))
+ 		vport->info.trusted = true;
+ 
++	/* External controller host PF has factory programmed MAC.
++	 * Read it from the device.
++	 */
++	if (mlx5_core_is_ecpf(esw->dev) && vport_num == MLX5_VPORT_PF)
++		mlx5_query_nic_vport_mac_address(esw->dev, vport_num, true, vport->info.mac);
++
+ 	esw_vport_change_handle_locked(vport);
+ 
+ 	esw->enabled_vports++;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mr.c b/drivers/net/ethernet/mellanox/mlx5/core/mr.c
+index 9eb51f06d3ae2..d1972508338cf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/mr.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/mr.c
+@@ -54,7 +54,7 @@ int mlx5_core_create_mkey(struct mlx5_core_dev *dev,
+ 	mkey_index = MLX5_GET(create_mkey_out, lout, mkey_index);
+ 	mkey->iova = MLX5_GET64(mkc, mkc, start_addr);
+ 	mkey->size = MLX5_GET64(mkc, mkc, len);
+-	mkey->key |= mlx5_idx_to_mkey(mkey_index);
++	mkey->key = (u32)mlx5_mkey_variant(mkey->key) | mlx5_idx_to_mkey(mkey_index);
+ 	mkey->pd = MLX5_GET(mkc, mkc, pd);
+ 
+ 	mlx5_core_dbg(dev, "out 0x%x, mkey 0x%x\n", mkey_index, mkey->key);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/rdma.c b/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
+index 8e0dddc6383f0..2389239acadc9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
+@@ -156,6 +156,9 @@ void mlx5_rdma_enable_roce(struct mlx5_core_dev *dev)
+ {
+ 	int err;
+ 
++	if (!MLX5_CAP_GEN(dev, roce))
++		return;
++
+ 	err = mlx5_nic_vport_enable_roce(dev);
+ 	if (err) {
+ 		mlx5_core_err(dev, "Failed to enable RoCE: %d\n", err);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
+index 51bbd88ff021c..fd56cae0d54fc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
+@@ -78,9 +78,9 @@ int mlx5dr_cmd_query_esw_caps(struct mlx5_core_dev *mdev,
+ 	caps->uplink_icm_address_tx =
+ 		MLX5_CAP64_ESW_FLOWTABLE(mdev,
+ 					 sw_steering_uplink_icm_address_tx);
+-	caps->sw_owner =
+-		MLX5_CAP_ESW_FLOWTABLE_FDB(mdev,
+-					   sw_owner);
++	caps->sw_owner_v2 = MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, sw_owner_v2);
++	if (!caps->sw_owner_v2)
++		caps->sw_owner = MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, sw_owner);
+ 
+ 	return 0;
+ }
+@@ -113,10 +113,15 @@ int mlx5dr_cmd_query_device(struct mlx5_core_dev *mdev,
+ 	caps->nic_tx_allow_address =
+ 		MLX5_CAP64_FLOWTABLE(mdev, sw_steering_nic_tx_action_allow_icm_address);
+ 
+-	caps->rx_sw_owner = MLX5_CAP_FLOWTABLE_NIC_RX(mdev, sw_owner);
+-	caps->max_ft_level = MLX5_CAP_FLOWTABLE_NIC_RX(mdev, max_ft_level);
++	caps->rx_sw_owner_v2 = MLX5_CAP_FLOWTABLE_NIC_RX(mdev, sw_owner_v2);
++	caps->tx_sw_owner_v2 = MLX5_CAP_FLOWTABLE_NIC_TX(mdev, sw_owner_v2);
++
++	if (!caps->rx_sw_owner_v2)
++		caps->rx_sw_owner = MLX5_CAP_FLOWTABLE_NIC_RX(mdev, sw_owner);
++	if (!caps->tx_sw_owner_v2)
++		caps->tx_sw_owner = MLX5_CAP_FLOWTABLE_NIC_TX(mdev, sw_owner);
+ 
+-	caps->tx_sw_owner = MLX5_CAP_FLOWTABLE_NIC_TX(mdev, sw_owner);
++	caps->max_ft_level = MLX5_CAP_FLOWTABLE_NIC_RX(mdev, max_ft_level);
+ 
+ 	caps->log_icm_size = MLX5_CAP_DEV_MEM(mdev, log_steering_sw_icm_size);
+ 	caps->hdr_modify_icm_addr =
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
+index aa2c2d6c44e6b..00d861361428f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
+@@ -4,6 +4,11 @@
+ #include <linux/mlx5/eswitch.h>
+ #include "dr_types.h"
+ 
++#define DR_DOMAIN_SW_STEERING_SUPPORTED(dmn, dmn_type)	\
++	((dmn)->info.caps.dmn_type##_sw_owner ||	\
++	 ((dmn)->info.caps.dmn_type##_sw_owner_v2 &&	\
++	  (dmn)->info.caps.sw_format_ver <= MLX5_STEERING_FORMAT_CONNECTX_6DX))
++
+ static int dr_domain_init_cache(struct mlx5dr_domain *dmn)
+ {
+ 	/* Per vport cached FW FT for checksum recalculation, this
+@@ -181,6 +186,7 @@ static int dr_domain_query_fdb_caps(struct mlx5_core_dev *mdev,
+ 		return ret;
+ 
+ 	dmn->info.caps.fdb_sw_owner = dmn->info.caps.esw_caps.sw_owner;
++	dmn->info.caps.fdb_sw_owner_v2 = dmn->info.caps.esw_caps.sw_owner_v2;
+ 	dmn->info.caps.esw_rx_drop_address = dmn->info.caps.esw_caps.drop_icm_address_rx;
+ 	dmn->info.caps.esw_tx_drop_address = dmn->info.caps.esw_caps.drop_icm_address_tx;
+ 
+@@ -223,18 +229,13 @@ static int dr_domain_caps_init(struct mlx5_core_dev *mdev,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (dmn->info.caps.sw_format_ver != MLX5_STEERING_FORMAT_CONNECTX_5) {
+-		mlx5dr_err(dmn, "SW steering is not supported on this device\n");
+-		return -EOPNOTSUPP;
+-	}
+-
+ 	ret = dr_domain_query_fdb_caps(mdev, dmn);
+ 	if (ret)
+ 		return ret;
+ 
+ 	switch (dmn->type) {
+ 	case MLX5DR_DOMAIN_TYPE_NIC_RX:
+-		if (!dmn->info.caps.rx_sw_owner)
++		if (!DR_DOMAIN_SW_STEERING_SUPPORTED(dmn, rx))
+ 			return -ENOTSUPP;
+ 
+ 		dmn->info.supp_sw_steering = true;
+@@ -243,7 +244,7 @@ static int dr_domain_caps_init(struct mlx5_core_dev *mdev,
+ 		dmn->info.rx.drop_icm_addr = dmn->info.caps.nic_rx_drop_address;
+ 		break;
+ 	case MLX5DR_DOMAIN_TYPE_NIC_TX:
+-		if (!dmn->info.caps.tx_sw_owner)
++		if (!DR_DOMAIN_SW_STEERING_SUPPORTED(dmn, tx))
+ 			return -ENOTSUPP;
+ 
+ 		dmn->info.supp_sw_steering = true;
+@@ -255,7 +256,7 @@ static int dr_domain_caps_init(struct mlx5_core_dev *mdev,
+ 		if (!dmn->info.caps.eswitch_manager)
+ 			return -ENOTSUPP;
+ 
+-		if (!dmn->info.caps.fdb_sw_owner)
++		if (!DR_DOMAIN_SW_STEERING_SUPPORTED(dmn, fdb))
+ 			return -ENOTSUPP;
+ 
+ 		dmn->info.rx.ste_type = MLX5DR_STE_TYPE_RX;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
+index cf62ea4f882e6..42c49f09e9d3f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
+@@ -597,7 +597,8 @@ struct mlx5dr_esw_caps {
+ 	u64 drop_icm_address_tx;
+ 	u64 uplink_icm_address_rx;
+ 	u64 uplink_icm_address_tx;
+-	bool sw_owner;
++	u8 sw_owner:1;
++	u8 sw_owner_v2:1;
+ };
+ 
+ struct mlx5dr_cmd_vport_cap {
+@@ -630,6 +631,9 @@ struct mlx5dr_cmd_caps {
+ 	bool rx_sw_owner;
+ 	bool tx_sw_owner;
+ 	bool fdb_sw_owner;
++	u8 rx_sw_owner_v2:1;
++	u8 tx_sw_owner_v2:1;
++	u8 fdb_sw_owner_v2:1;
+ 	u32 num_vports;
+ 	struct mlx5dr_esw_caps esw_caps;
+ 	struct mlx5dr_cmd_vport_cap *vports_caps;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
+index 7914fe3fc68d8..454968ba68313 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
+@@ -124,7 +124,11 @@ int mlx5dr_action_destroy(struct mlx5dr_action *action);
+ static inline bool
+ mlx5dr_is_supported(struct mlx5_core_dev *dev)
+ {
+-	return MLX5_CAP_ESW_FLOWTABLE_FDB(dev, sw_owner);
++	return MLX5_CAP_GEN(dev, roce) &&
++	       (MLX5_CAP_ESW_FLOWTABLE_FDB(dev, sw_owner) ||
++		(MLX5_CAP_ESW_FLOWTABLE_FDB(dev, sw_owner_v2) &&
++		 (MLX5_CAP_GEN(dev, steering_format_version) <=
++		  MLX5_STEERING_FORMAT_CONNECTX_6DX)));
+ }
+ 
+ #endif /* _MLX5DR_H_ */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/transobj.c b/drivers/net/ethernet/mellanox/mlx5/core/transobj.c
+index 01cc00ad8acf2..b6931bbe52d29 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/transobj.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/transobj.c
+@@ -424,6 +424,15 @@ err_modify_sq:
+ 	return err;
+ }
+ 
++static void mlx5_hairpin_unpair_peer_sq(struct mlx5_hairpin *hp)
++{
++	int i;
++
++	for (i = 0; i < hp->num_channels; i++)
++		mlx5_hairpin_modify_sq(hp->peer_mdev, hp->sqn[i], MLX5_SQC_STATE_RDY,
++				       MLX5_SQC_STATE_RST, 0, 0);
++}
++
+ static void mlx5_hairpin_unpair_queues(struct mlx5_hairpin *hp)
+ {
+ 	int i;
+@@ -432,13 +441,9 @@ static void mlx5_hairpin_unpair_queues(struct mlx5_hairpin *hp)
+ 	for (i = 0; i < hp->num_channels; i++)
+ 		mlx5_hairpin_modify_rq(hp->func_mdev, hp->rqn[i], MLX5_RQC_STATE_RDY,
+ 				       MLX5_RQC_STATE_RST, 0, 0);
+-
+ 	/* unset peer SQs */
+-	if (hp->peer_gone)
+-		return;
+-	for (i = 0; i < hp->num_channels; i++)
+-		mlx5_hairpin_modify_sq(hp->peer_mdev, hp->sqn[i], MLX5_SQC_STATE_RDY,
+-				       MLX5_SQC_STATE_RST, 0, 0);
++	if (!hp->peer_gone)
++		mlx5_hairpin_unpair_peer_sq(hp);
+ }
+ 
+ struct mlx5_hairpin *
+@@ -485,3 +490,16 @@ void mlx5_core_hairpin_destroy(struct mlx5_hairpin *hp)
+ 	mlx5_hairpin_destroy_queues(hp);
+ 	kfree(hp);
+ }
++
++void mlx5_core_hairpin_clear_dead_peer(struct mlx5_hairpin *hp)
++{
++	int i;
++
++	mlx5_hairpin_unpair_peer_sq(hp);
++
++	/* destroy peer SQ */
++	for (i = 0; i < hp->num_channels; i++)
++		mlx5_core_destroy_sq(hp->peer_mdev, hp->sqn[i]);
++
++	hp->peer_gone = true;
++}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/vport.c b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+index bdafc85fd874d..fc91bbf7d0c37 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/vport.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+@@ -464,8 +464,6 @@ int mlx5_modify_nic_vport_node_guid(struct mlx5_core_dev *mdev,
+ 	void *in;
+ 	int err;
+ 
+-	if (!vport)
+-		return -EINVAL;
+ 	if (!MLX5_CAP_GEN(mdev, vport_group_manager))
+ 		return -EACCES;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+index bf85ce9835d7f..42e4437ac3c16 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+@@ -708,7 +708,8 @@ mlxsw_thermal_module_tz_init(struct mlxsw_thermal_module *module_tz)
+ 							MLXSW_THERMAL_TRIP_MASK,
+ 							module_tz,
+ 							&mlxsw_thermal_module_ops,
+-							NULL, 0, 0);
++							NULL, 0,
++							module_tz->parent->polling_delay);
+ 	if (IS_ERR(module_tz->tzdev)) {
+ 		err = PTR_ERR(module_tz->tzdev);
+ 		return err;
+@@ -830,7 +831,8 @@ mlxsw_thermal_gearbox_tz_init(struct mlxsw_thermal_module *gearbox_tz)
+ 						MLXSW_THERMAL_TRIP_MASK,
+ 						gearbox_tz,
+ 						&mlxsw_thermal_gearbox_ops,
+-						NULL, 0, 0);
++						NULL, 0,
++						gearbox_tz->parent->polling_delay);
+ 	if (IS_ERR(gearbox_tz->tzdev))
+ 		return PTR_ERR(gearbox_tz->tzdev);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
+index 3c3069afc0a31..c670bf3464c2a 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
+@@ -3641,7 +3641,7 @@ MLXSW_ITEM32(reg, qeec, max_shaper_bs, 0x1C, 0, 6);
+ #define MLXSW_REG_QEEC_HIGHEST_SHAPER_BS	25
+ #define MLXSW_REG_QEEC_LOWEST_SHAPER_BS_SP1	5
+ #define MLXSW_REG_QEEC_LOWEST_SHAPER_BS_SP2	11
+-#define MLXSW_REG_QEEC_LOWEST_SHAPER_BS_SP3	5
++#define MLXSW_REG_QEEC_LOWEST_SHAPER_BS_SP3	11
+ 
+ static inline void mlxsw_reg_qeec_pack(char *payload, u8 local_port,
+ 				       enum mlxsw_reg_qeec_hr hr, u8 index,
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index aa400b925b08e..5bfc7acfd13a9 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -355,6 +355,7 @@ static u32 ocelot_read_eq_avail(struct ocelot *ocelot, int port)
+ 
+ int ocelot_port_flush(struct ocelot *ocelot, int port)
+ {
++	unsigned int pause_ena;
+ 	int err, val;
+ 
+ 	/* Disable dequeuing from the egress queues */
+@@ -363,6 +364,7 @@ int ocelot_port_flush(struct ocelot *ocelot, int port)
+ 		       QSYS_PORT_MODE, port);
+ 
+ 	/* Disable flow control */
++	ocelot_fields_read(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA, &pause_ena);
+ 	ocelot_fields_write(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA, 0);
+ 
+ 	/* Disable priority flow control */
+@@ -398,6 +400,9 @@ int ocelot_port_flush(struct ocelot *ocelot, int port)
+ 	/* Clear flushing again. */
+ 	ocelot_rmw_gix(ocelot, 0, REW_PORT_CFG_FLUSH_ENA, REW_PORT_CFG, port);
+ 
++	/* Re-enable flow control */
++	ocelot_fields_write(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA, pause_ena);
++
+ 	return err;
+ }
+ EXPORT_SYMBOL(ocelot_port_flush);
+diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
+index d258e0ccf9465..e2046b6d65a30 100644
+--- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
++++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
+@@ -1602,6 +1602,8 @@ err_out_free_netdev:
+ 	free_netdev(netdev);
+ 
+ err_out_free_res:
++	if (NX_IS_REVISION_P3(pdev->revision))
++		pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_regions(pdev);
+ 
+ err_out_disable_pdev:
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+index c2faf96fcade8..27c07b2412f46 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+@@ -2692,6 +2692,7 @@ err_out_free_hw_res:
+ 	kfree(ahw);
+ 
+ err_out_free_res:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_regions(pdev);
+ 
+ err_out_disable_pdev:
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+index fcdecddb28122..8d51b0cb545ca 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+@@ -26,7 +26,7 @@ static int rmnet_is_real_dev_registered(const struct net_device *real_dev)
+ }
+ 
+ /* Needs rtnl lock */
+-static struct rmnet_port*
++struct rmnet_port*
+ rmnet_get_port_rtnl(const struct net_device *real_dev)
+ {
+ 	return rtnl_dereference(real_dev->rx_handler_data);
+@@ -253,7 +253,10 @@ static int rmnet_config_notify_cb(struct notifier_block *nb,
+ 		netdev_dbg(real_dev, "Kernel unregister\n");
+ 		rmnet_force_unassociate_device(real_dev);
+ 		break;
+-
++	case NETDEV_CHANGEMTU:
++		if (rmnet_vnd_validate_real_dev_mtu(real_dev))
++			return NOTIFY_BAD;
++		break;
+ 	default:
+ 		break;
+ 	}
+@@ -329,9 +332,17 @@ static int rmnet_changelink(struct net_device *dev, struct nlattr *tb[],
+ 
+ 	if (data[IFLA_RMNET_FLAGS]) {
+ 		struct ifla_rmnet_flags *flags;
++		u32 old_data_format;
+ 
++		old_data_format = port->data_format;
+ 		flags = nla_data(data[IFLA_RMNET_FLAGS]);
+ 		port->data_format = flags->flags & flags->mask;
++
++		if (rmnet_vnd_update_dev_mtu(port, real_dev)) {
++			port->data_format = old_data_format;
++			NL_SET_ERR_MSG_MOD(extack, "Invalid MTU on real dev");
++			return -EINVAL;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
+index be515982d6286..8d8d4690a0745 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
+@@ -73,4 +73,6 @@ int rmnet_add_bridge(struct net_device *rmnet_dev,
+ 		     struct netlink_ext_ack *extack);
+ int rmnet_del_bridge(struct net_device *rmnet_dev,
+ 		     struct net_device *slave_dev);
++struct rmnet_port*
++rmnet_get_port_rtnl(const struct net_device *real_dev);
+ #endif /* _RMNET_CONFIG_H_ */
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
+index d58b51d277f18..2adcf24848a45 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
+@@ -58,9 +58,30 @@ static netdev_tx_t rmnet_vnd_start_xmit(struct sk_buff *skb,
+ 	return NETDEV_TX_OK;
+ }
+ 
++static int rmnet_vnd_headroom(struct rmnet_port *port)
++{
++	u32 headroom;
++
++	headroom = sizeof(struct rmnet_map_header);
++
++	if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV4)
++		headroom += sizeof(struct rmnet_map_ul_csum_header);
++
++	return headroom;
++}
++
+ static int rmnet_vnd_change_mtu(struct net_device *rmnet_dev, int new_mtu)
+ {
+-	if (new_mtu < 0 || new_mtu > RMNET_MAX_PACKET_SIZE)
++	struct rmnet_priv *priv = netdev_priv(rmnet_dev);
++	struct rmnet_port *port;
++	u32 headroom;
++
++	port = rmnet_get_port_rtnl(priv->real_dev);
++
++	headroom = rmnet_vnd_headroom(port);
++
++	if (new_mtu < 0 || new_mtu > RMNET_MAX_PACKET_SIZE ||
++	    new_mtu > (priv->real_dev->mtu - headroom))
+ 		return -EINVAL;
+ 
+ 	rmnet_dev->mtu = new_mtu;
+@@ -104,24 +125,24 @@ static void rmnet_get_stats64(struct net_device *dev,
+ 			      struct rtnl_link_stats64 *s)
+ {
+ 	struct rmnet_priv *priv = netdev_priv(dev);
+-	struct rmnet_vnd_stats total_stats;
++	struct rmnet_vnd_stats total_stats = { };
+ 	struct rmnet_pcpu_stats *pcpu_ptr;
++	struct rmnet_vnd_stats snapshot;
+ 	unsigned int cpu, start;
+ 
+-	memset(&total_stats, 0, sizeof(struct rmnet_vnd_stats));
+-
+ 	for_each_possible_cpu(cpu) {
+ 		pcpu_ptr = per_cpu_ptr(priv->pcpu_stats, cpu);
+ 
+ 		do {
+ 			start = u64_stats_fetch_begin_irq(&pcpu_ptr->syncp);
+-			total_stats.rx_pkts += pcpu_ptr->stats.rx_pkts;
+-			total_stats.rx_bytes += pcpu_ptr->stats.rx_bytes;
+-			total_stats.tx_pkts += pcpu_ptr->stats.tx_pkts;
+-			total_stats.tx_bytes += pcpu_ptr->stats.tx_bytes;
++			snapshot = pcpu_ptr->stats;	/* struct assignment */
+ 		} while (u64_stats_fetch_retry_irq(&pcpu_ptr->syncp, start));
+ 
+-		total_stats.tx_drops += pcpu_ptr->stats.tx_drops;
++		total_stats.rx_pkts += snapshot.rx_pkts;
++		total_stats.rx_bytes += snapshot.rx_bytes;
++		total_stats.tx_pkts += snapshot.tx_pkts;
++		total_stats.tx_bytes += snapshot.tx_bytes;
++		total_stats.tx_drops += snapshot.tx_drops;
+ 	}
+ 
+ 	s->rx_packets = total_stats.rx_pkts;
+@@ -229,6 +250,7 @@ int rmnet_vnd_newlink(u8 id, struct net_device *rmnet_dev,
+ 
+ {
+ 	struct rmnet_priv *priv = netdev_priv(rmnet_dev);
++	u32 headroom;
+ 	int rc;
+ 
+ 	if (rmnet_get_endpoint(port, id)) {
+@@ -242,6 +264,13 @@ int rmnet_vnd_newlink(u8 id, struct net_device *rmnet_dev,
+ 
+ 	priv->real_dev = real_dev;
+ 
++	headroom = rmnet_vnd_headroom(port);
++
++	if (rmnet_vnd_change_mtu(rmnet_dev, real_dev->mtu - headroom)) {
++		NL_SET_ERR_MSG_MOD(extack, "Invalid MTU on real dev");
++		return -EINVAL;
++	}
++
+ 	rc = register_netdevice(rmnet_dev);
+ 	if (!rc) {
+ 		ep->egress_dev = rmnet_dev;
+@@ -283,3 +312,45 @@ int rmnet_vnd_do_flow_control(struct net_device *rmnet_dev, int enable)
+ 
+ 	return 0;
+ }
++
++int rmnet_vnd_validate_real_dev_mtu(struct net_device *real_dev)
++{
++	struct hlist_node *tmp_ep;
++	struct rmnet_endpoint *ep;
++	struct rmnet_port *port;
++	unsigned long bkt_ep;
++	u32 headroom;
++
++	port = rmnet_get_port_rtnl(real_dev);
++
++	headroom = rmnet_vnd_headroom(port);
++
++	hash_for_each_safe(port->muxed_ep, bkt_ep, tmp_ep, ep, hlnode) {
++		if (ep->egress_dev->mtu > (real_dev->mtu - headroom))
++			return -1;
++	}
++
++	return 0;
++}
++
++int rmnet_vnd_update_dev_mtu(struct rmnet_port *port,
++			     struct net_device *real_dev)
++{
++	struct hlist_node *tmp_ep;
++	struct rmnet_endpoint *ep;
++	unsigned long bkt_ep;
++	u32 headroom;
++
++	headroom = rmnet_vnd_headroom(port);
++
++	hash_for_each_safe(port->muxed_ep, bkt_ep, tmp_ep, ep, hlnode) {
++		if (ep->egress_dev->mtu <= (real_dev->mtu - headroom))
++			continue;
++
++		if (rmnet_vnd_change_mtu(ep->egress_dev,
++					 real_dev->mtu - headroom))
++			return -1;
++	}
++
++	return 0;
++}
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h
+index 4967f3461ed1e..dc3a4443ef0af 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h
+@@ -18,4 +18,7 @@ int rmnet_vnd_dellink(u8 id, struct rmnet_port *port,
+ void rmnet_vnd_rx_fixup(struct sk_buff *skb, struct net_device *dev);
+ void rmnet_vnd_tx_fixup(struct sk_buff *skb, struct net_device *dev);
+ void rmnet_vnd_setup(struct net_device *dev);
++int rmnet_vnd_validate_real_dev_mtu(struct net_device *real_dev);
++int rmnet_vnd_update_dev_mtu(struct rmnet_port *port,
++			     struct net_device *real_dev);
+ #endif /* _RMNET_VND_H_ */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h b/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h
+index b70d44ac09906..3c73453725f94 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h
+@@ -76,10 +76,10 @@ enum power_event {
+ #define LPI_CTRL_STATUS_TLPIEN	0x00000001	/* Transmit LPI Entry */
+ 
+ /* GMAC HW ADDR regs */
+-#define GMAC_ADDR_HIGH(reg)	(((reg > 15) ? 0x00000800 : 0x00000040) + \
+-				(reg * 8))
+-#define GMAC_ADDR_LOW(reg)	(((reg > 15) ? 0x00000804 : 0x00000044) + \
+-				(reg * 8))
++#define GMAC_ADDR_HIGH(reg)	((reg > 15) ? 0x00000800 + (reg - 16) * 8 : \
++				 0x00000040 + (reg * 8))
++#define GMAC_ADDR_LOW(reg)	((reg > 15) ? 0x00000804 + (reg - 16) * 8 : \
++				 0x00000044 + (reg * 8))
+ #define GMAC_MAX_PERFECT_ADDRESSES	1
+ 
+ #define GMAC_PCS_BASE		0x000000c0	/* PCS register base */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index af34a4cadbb0a..ff95400594fc1 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -626,6 +626,8 @@ error_pclk_get:
+ void stmmac_remove_config_dt(struct platform_device *pdev,
+ 			     struct plat_stmmacenet_data *plat)
+ {
++	clk_disable_unprepare(plat->stmmac_clk);
++	clk_disable_unprepare(plat->pclk);
+ 	of_node_put(plat->phy_node);
+ 	of_node_put(plat->mdio_node);
+ }
+diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
+index 030185301014c..01bb36e7cff0a 100644
+--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
++++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
+@@ -849,7 +849,7 @@ temac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 		smp_mb();
+ 
+ 		/* Space might have just been freed - check again */
+-		if (temac_check_tx_bd_space(lp, num_frag))
++		if (temac_check_tx_bd_space(lp, num_frag + 1))
+ 			return NETDEV_TX_BUSY;
+ 
+ 		netif_wake_queue(ndev);
+@@ -876,7 +876,6 @@ temac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 		return NETDEV_TX_OK;
+ 	}
+ 	cur_p->phys = cpu_to_be32(skb_dma_addr);
+-	ptr_to_txbd((void *)skb, cur_p);
+ 
+ 	for (ii = 0; ii < num_frag; ii++) {
+ 		if (++lp->tx_bd_tail >= lp->tx_bd_num)
+@@ -915,6 +914,11 @@ temac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	}
+ 	cur_p->app0 |= cpu_to_be32(STS_CTRL_APP0_EOP);
+ 
++	/* Mark last fragment with skb address, so it can be consumed
++	 * in temac_start_xmit_done()
++	 */
++	ptr_to_txbd((void *)skb, cur_p);
++
+ 	tail_p = lp->tx_bd_p + sizeof(*lp->tx_bd_v) * lp->tx_bd_tail;
+ 	lp->tx_bd_tail++;
+ 	if (lp->tx_bd_tail >= lp->tx_bd_num)
+diff --git a/drivers/net/hamradio/mkiss.c b/drivers/net/hamradio/mkiss.c
+index 17be2bb2985cd..920e9f888cc35 100644
+--- a/drivers/net/hamradio/mkiss.c
++++ b/drivers/net/hamradio/mkiss.c
+@@ -799,6 +799,7 @@ static void mkiss_close(struct tty_struct *tty)
+ 	ax->tty = NULL;
+ 
+ 	unregister_netdev(ax->dev);
++	free_netdev(ax->dev);
+ }
+ 
+ /* Perform I/O control on an active ax25 channel. */
+diff --git a/drivers/net/usb/cdc_eem.c b/drivers/net/usb/cdc_eem.c
+index 0eeec80bec311..e4a5703666461 100644
+--- a/drivers/net/usb/cdc_eem.c
++++ b/drivers/net/usb/cdc_eem.c
+@@ -123,10 +123,10 @@ static struct sk_buff *eem_tx_fixup(struct usbnet *dev, struct sk_buff *skb,
+ 	}
+ 
+ 	skb2 = skb_copy_expand(skb, EEM_HEAD, ETH_FCS_LEN + padlen, flags);
++	dev_kfree_skb_any(skb);
+ 	if (!skb2)
+ 		return NULL;
+ 
+-	dev_kfree_skb_any(skb);
+ 	skb = skb2;
+ 
+ done:
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index 1d3bf810f2ca1..04c4f1570bc8c 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -1900,7 +1900,7 @@ static void cdc_ncm_status(struct usbnet *dev, struct urb *urb)
+ static const struct driver_info cdc_ncm_info = {
+ 	.description = "CDC NCM",
+ 	.flags = FLAG_POINTTOPOINT | FLAG_NO_SETINT | FLAG_MULTI_PACKET
+-			| FLAG_LINK_INTR,
++			| FLAG_LINK_INTR | FLAG_ETHER,
+ 	.bind = cdc_ncm_bind,
+ 	.unbind = cdc_ncm_unbind,
+ 	.manage_power = usbnet_manage_power,
+diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
+index d44657b54d2b6..378a12ae2d957 100644
+--- a/drivers/net/usb/smsc75xx.c
++++ b/drivers/net/usb/smsc75xx.c
+@@ -1483,7 +1483,7 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	ret = smsc75xx_wait_ready(dev, 0);
+ 	if (ret < 0) {
+ 		netdev_warn(dev->net, "device not ready in smsc75xx_bind\n");
+-		goto err;
++		goto free_pdata;
+ 	}
+ 
+ 	smsc75xx_init_mac_address(dev);
+@@ -1492,7 +1492,7 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	ret = smsc75xx_reset(dev);
+ 	if (ret < 0) {
+ 		netdev_warn(dev->net, "smsc75xx_reset error %d\n", ret);
+-		goto err;
++		goto cancel_work;
+ 	}
+ 
+ 	dev->net->netdev_ops = &smsc75xx_netdev_ops;
+@@ -1503,8 +1503,11 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	dev->net->max_mtu = MAX_SINGLE_PACKET_SIZE;
+ 	return 0;
+ 
+-err:
++cancel_work:
++	cancel_work_sync(&pdata->set_multicast);
++free_pdata:
+ 	kfree(pdata);
++	dev->data[0] = 0;
+ 	return ret;
+ }
+ 
+@@ -1515,7 +1518,6 @@ static void smsc75xx_unbind(struct usbnet *dev, struct usb_interface *intf)
+ 		cancel_work_sync(&pdata->set_multicast);
+ 		netif_dbg(dev, ifdown, dev->net, "free pdata\n");
+ 		kfree(pdata);
+-		pdata = NULL;
+ 		dev->data[0] = 0;
+ 	}
+ }
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index b9b7e00b72a84..bc96ac0c5769c 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1184,9 +1184,6 @@ static int vrf_dev_init(struct net_device *dev)
+ 
+ 	dev->flags = IFF_MASTER | IFF_NOARP;
+ 
+-	/* MTU is irrelevant for VRF device; set to 64k similar to lo */
+-	dev->mtu = 64 * 1024;
+-
+ 	/* similarly, oper state is irrelevant; set to up to avoid confusion */
+ 	dev->operstate = IF_OPER_UP;
+ 	netdev_lockdep_set_classes(dev);
+@@ -1620,7 +1617,8 @@ static void vrf_setup(struct net_device *dev)
+ 	 * which breaks networking.
+ 	 */
+ 	dev->min_mtu = IPV6_MIN_MTU;
+-	dev->max_mtu = ETH_MAX_MTU;
++	dev->max_mtu = IP6_MAX_MTU;
++	dev->mtu = dev->max_mtu;
+ }
+ 
+ static int vrf_validate(struct nlattr *tb[], struct nlattr *data[],
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 0be485a253273..41be72c74e3a4 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -514,7 +514,7 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
+ 		udelay(PIO_RETRY_DELAY);
+ 	}
+ 
+-	dev_err(dev, "config read/write timed out\n");
++	dev_err(dev, "PIO read/write transfer time out\n");
+ 	return -ETIMEDOUT;
+ }
+ 
+@@ -657,6 +657,35 @@ static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
+ 	return true;
+ }
+ 
++static bool advk_pcie_pio_is_running(struct advk_pcie *pcie)
++{
++	struct device *dev = &pcie->pdev->dev;
++
++	/*
++	 * Trying to start a new PIO transfer when previous has not completed
++	 * cause External Abort on CPU which results in kernel panic:
++	 *
++	 *     SError Interrupt on CPU0, code 0xbf000002 -- SError
++	 *     Kernel panic - not syncing: Asynchronous SError Interrupt
++	 *
++	 * Functions advk_pcie_rd_conf() and advk_pcie_wr_conf() are protected
++	 * by raw_spin_lock_irqsave() at pci_lock_config() level to prevent
++	 * concurrent calls at the same time. But because PIO transfer may take
++	 * about 1.5s when link is down or card is disconnected, it means that
++	 * advk_pcie_wait_pio() does not always have to wait for completion.
++	 *
++	 * Some versions of ARM Trusted Firmware handles this External Abort at
++	 * EL3 level and mask it to prevent kernel panic. Relevant TF-A commit:
++	 * https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/commit/?id=3c7dcdac5c50
++	 */
++	if (advk_readl(pcie, PIO_START)) {
++		dev_err(dev, "Previous PIO read/write transfer is still running\n");
++		return true;
++	}
++
++	return false;
++}
++
+ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 			     int where, int size, u32 *val)
+ {
+@@ -673,9 +702,10 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 		return pci_bridge_emul_conf_read(&pcie->bridge, where,
+ 						 size, val);
+ 
+-	/* Start PIO */
+-	advk_writel(pcie, 0, PIO_START);
+-	advk_writel(pcie, 1, PIO_ISR);
++	if (advk_pcie_pio_is_running(pcie)) {
++		*val = 0xffffffff;
++		return PCIBIOS_SET_FAILED;
++	}
+ 
+ 	/* Program the control register */
+ 	reg = advk_readl(pcie, PIO_CTRL);
+@@ -694,7 +724,8 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 	/* Program the data strobe */
+ 	advk_writel(pcie, 0xf, PIO_WR_DATA_STRB);
+ 
+-	/* Start the transfer */
++	/* Clear PIO DONE ISR and start the transfer */
++	advk_writel(pcie, 1, PIO_ISR);
+ 	advk_writel(pcie, 1, PIO_START);
+ 
+ 	ret = advk_pcie_wait_pio(pcie);
+@@ -734,9 +765,8 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
+ 	if (where % size)
+ 		return PCIBIOS_SET_FAILED;
+ 
+-	/* Start PIO */
+-	advk_writel(pcie, 0, PIO_START);
+-	advk_writel(pcie, 1, PIO_ISR);
++	if (advk_pcie_pio_is_running(pcie))
++		return PCIBIOS_SET_FAILED;
+ 
+ 	/* Program the control register */
+ 	reg = advk_readl(pcie, PIO_CTRL);
+@@ -763,7 +793,8 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
+ 	/* Program the data strobe */
+ 	advk_writel(pcie, data_strobe, PIO_WR_DATA_STRB);
+ 
+-	/* Start the transfer */
++	/* Clear PIO DONE ISR and start the transfer */
++	advk_writel(pcie, 1, PIO_ISR);
+ 	advk_writel(pcie, 1, PIO_START);
+ 
+ 	ret = advk_pcie_wait_pio(pcie);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index b570f297e3ec1..16fb3d7714d51 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3557,6 +3557,18 @@ static void quirk_no_bus_reset(struct pci_dev *dev)
+ 	dev->dev_flags |= PCI_DEV_FLAGS_NO_BUS_RESET;
+ }
+ 
++/*
++ * Some NVIDIA GPU devices do not work with bus reset, SBR needs to be
++ * prevented for those affected devices.
++ */
++static void quirk_nvidia_no_bus_reset(struct pci_dev *dev)
++{
++	if ((dev->device & 0xffc0) == 0x2340)
++		quirk_no_bus_reset(dev);
++}
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
++			 quirk_nvidia_no_bus_reset);
++
+ /*
+  * Some Atheros AR9xxx and QCA988x chips do not behave after a bus reset.
+  * The device will throw a Link Down error on AER-capable systems and
+@@ -3577,6 +3589,16 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0034, quirk_no_bus_reset);
+  */
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CAVIUM, 0xa100, quirk_no_bus_reset);
+ 
++/*
++ * Some TI KeyStone C667X devices do not support bus/hot reset.  The PCIESS
++ * automatically disables LTSSM when Secondary Bus Reset is received and
++ * the device stops working.  Prevent bus reset for these devices.  With
++ * this change, the device can be assigned to VMs with VFIO, but it will
++ * leak state between VMs.  Reference
++ * https://e2e.ti.com/support/processors/f/791/t/954382
++ */
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TI, 0xb005, quirk_no_bus_reset);
++
+ static void quirk_no_pm_reset(struct pci_dev *dev)
+ {
+ 	/*
+@@ -3912,6 +3934,69 @@ static int delay_250ms_after_flr(struct pci_dev *dev, int probe)
+ 	return 0;
+ }
+ 
++#define PCI_DEVICE_ID_HINIC_VF      0x375E
++#define HINIC_VF_FLR_TYPE           0x1000
++#define HINIC_VF_FLR_CAP_BIT        (1UL << 30)
++#define HINIC_VF_OP                 0xE80
++#define HINIC_VF_FLR_PROC_BIT       (1UL << 18)
++#define HINIC_OPERATION_TIMEOUT     15000	/* 15 seconds */
++
++/* Device-specific reset method for Huawei Intelligent NIC virtual functions */
++static int reset_hinic_vf_dev(struct pci_dev *pdev, int probe)
++{
++	unsigned long timeout;
++	void __iomem *bar;
++	u32 val;
++
++	if (probe)
++		return 0;
++
++	bar = pci_iomap(pdev, 0, 0);
++	if (!bar)
++		return -ENOTTY;
++
++	/* Get and check firmware capabilities */
++	val = ioread32be(bar + HINIC_VF_FLR_TYPE);
++	if (!(val & HINIC_VF_FLR_CAP_BIT)) {
++		pci_iounmap(pdev, bar);
++		return -ENOTTY;
++	}
++
++	/* Set HINIC_VF_FLR_PROC_BIT for the start of FLR */
++	val = ioread32be(bar + HINIC_VF_OP);
++	val = val | HINIC_VF_FLR_PROC_BIT;
++	iowrite32be(val, bar + HINIC_VF_OP);
++
++	pcie_flr(pdev);
++
++	/*
++	 * The device must recapture its Bus and Device Numbers after FLR
++	 * in order generate Completions.  Issue a config write to let the
++	 * device capture this information.
++	 */
++	pci_write_config_word(pdev, PCI_VENDOR_ID, 0);
++
++	/* Firmware clears HINIC_VF_FLR_PROC_BIT when reset is complete */
++	timeout = jiffies + msecs_to_jiffies(HINIC_OPERATION_TIMEOUT);
++	do {
++		val = ioread32be(bar + HINIC_VF_OP);
++		if (!(val & HINIC_VF_FLR_PROC_BIT))
++			goto reset_complete;
++		msleep(20);
++	} while (time_before(jiffies, timeout));
++
++	val = ioread32be(bar + HINIC_VF_OP);
++	if (!(val & HINIC_VF_FLR_PROC_BIT))
++		goto reset_complete;
++
++	pci_warn(pdev, "Reset dev timeout, FLR ack reg: %#010x\n", val);
++
++reset_complete:
++	pci_iounmap(pdev, bar);
++
++	return 0;
++}
++
+ static const struct pci_dev_reset_methods pci_dev_reset_methods[] = {
+ 	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82599_SFP_VF,
+ 		 reset_intel_82599_sfp_virtfn },
+@@ -3923,6 +4008,8 @@ static const struct pci_dev_reset_methods pci_dev_reset_methods[] = {
+ 	{ PCI_VENDOR_ID_INTEL, 0x0953, delay_250ms_after_flr },
+ 	{ PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
+ 		reset_chelsio_generic_dev },
++	{ PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_HINIC_VF,
++		reset_hinic_vf_dev },
+ 	{ 0 }
+ };
+ 
+@@ -4763,6 +4850,8 @@ static const struct pci_dev_acs_enabled {
+ 	{ PCI_VENDOR_ID_AMPERE, 0xE00A, pci_quirk_xgene_acs },
+ 	{ PCI_VENDOR_ID_AMPERE, 0xE00B, pci_quirk_xgene_acs },
+ 	{ PCI_VENDOR_ID_AMPERE, 0xE00C, pci_quirk_xgene_acs },
++	/* Broadcom multi-function device */
++	{ PCI_VENDOR_ID_BROADCOM, 0x16D7, pci_quirk_mf_endpoint_acs },
+ 	{ PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs },
+ 	/* Amazon Annapurna Labs */
+ 	{ PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs },
+diff --git a/drivers/phy/mediatek/phy-mtk-tphy.c b/drivers/phy/mediatek/phy-mtk-tphy.c
+index cdbcc49f71152..731c483a04dea 100644
+--- a/drivers/phy/mediatek/phy-mtk-tphy.c
++++ b/drivers/phy/mediatek/phy-mtk-tphy.c
+@@ -949,6 +949,8 @@ static int mtk_phy_init(struct phy *phy)
+ 		break;
+ 	default:
+ 		dev_err(tphy->dev, "incompatible PHY type\n");
++		clk_disable_unprepare(instance->ref_clk);
++		clk_disable_unprepare(instance->da_ref_clk);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 1c25af28a7233..5c2f2e337b57b 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -8806,6 +8806,7 @@ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
+ 	TPACPI_Q_LNV3('N', '2', 'O', TPACPI_FAN_2CTL),	/* P1 / X1 Extreme (2nd gen) */
+ 	TPACPI_Q_LNV3('N', '2', 'V', TPACPI_FAN_2CTL),	/* P1 / X1 Extreme (3nd gen) */
+ 	TPACPI_Q_LNV3('N', '3', '0', TPACPI_FAN_2CTL),	/* P15 (1st gen) / P15v (1st gen) */
++	TPACPI_Q_LNV3('N', '3', '2', TPACPI_FAN_2CTL),	/* X1 Carbon (9th gen) */
+ };
+ 
+ static int __init fan_init(struct ibm_init_struct *iibm)
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index 03a246e60fd98..21c4c34c52d8d 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -63,7 +63,7 @@ static void enqueue_external_timestamp(struct timestamp_event_queue *queue,
+ 	spin_unlock_irqrestore(&queue->lock, flags);
+ }
+ 
+-s32 scaled_ppm_to_ppb(long ppm)
++long scaled_ppm_to_ppb(long ppm)
+ {
+ 	/*
+ 	 * The 'freq' field in the 'struct timex' is in parts per
+@@ -80,7 +80,7 @@ s32 scaled_ppm_to_ppb(long ppm)
+ 	s64 ppb = 1 + ppm;
+ 	ppb *= 125;
+ 	ppb >>= 13;
+-	return (s32) ppb;
++	return (long) ppb;
+ }
+ EXPORT_SYMBOL(scaled_ppm_to_ppb);
+ 
+@@ -138,7 +138,7 @@ static int ptp_clock_adjtime(struct posix_clock *pc, struct __kernel_timex *tx)
+ 		delta = ktime_to_ns(kt);
+ 		err = ops->adjtime(ops, delta);
+ 	} else if (tx->modes & ADJ_FREQUENCY) {
+-		s32 ppb = scaled_ppm_to_ppb(tx->freq);
++		long ppb = scaled_ppm_to_ppb(tx->freq);
+ 		if (ppb > ops->max_adj || ppb < -ops->max_adj)
+ 			return -ERANGE;
+ 		if (ops->adjfine)
+diff --git a/drivers/regulator/cros-ec-regulator.c b/drivers/regulator/cros-ec-regulator.c
+index eb3fc1db4edc8..c4754f3cf2337 100644
+--- a/drivers/regulator/cros-ec-regulator.c
++++ b/drivers/regulator/cros-ec-regulator.c
+@@ -225,8 +225,9 @@ static int cros_ec_regulator_probe(struct platform_device *pdev)
+ 
+ 	drvdata->dev = devm_regulator_register(dev, &drvdata->desc, &cfg);
+ 	if (IS_ERR(drvdata->dev)) {
++		ret = PTR_ERR(drvdata->dev);
+ 		dev_err(&pdev->dev, "Failed to register regulator: %d\n", ret);
+-		return PTR_ERR(drvdata->dev);
++		return ret;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, drvdata);
+diff --git a/drivers/regulator/rt4801-regulator.c b/drivers/regulator/rt4801-regulator.c
+index 2055a9cb13ba5..7a87788d3f092 100644
+--- a/drivers/regulator/rt4801-regulator.c
++++ b/drivers/regulator/rt4801-regulator.c
+@@ -66,7 +66,7 @@ static int rt4801_enable(struct regulator_dev *rdev)
+ 	struct gpio_descs *gpios = priv->enable_gpios;
+ 	int id = rdev_get_id(rdev), ret;
+ 
+-	if (gpios->ndescs <= id) {
++	if (!gpios || gpios->ndescs <= id) {
+ 		dev_warn(&rdev->dev, "no dedicated gpio can control\n");
+ 		goto bypass_gpio;
+ 	}
+@@ -88,7 +88,7 @@ static int rt4801_disable(struct regulator_dev *rdev)
+ 	struct gpio_descs *gpios = priv->enable_gpios;
+ 	int id = rdev_get_id(rdev);
+ 
+-	if (gpios->ndescs <= id) {
++	if (!gpios || gpios->ndescs <= id) {
+ 		dev_warn(&rdev->dev, "no dedicated gpio can control\n");
+ 		goto bypass_gpio;
+ 	}
+diff --git a/drivers/regulator/rtmv20-regulator.c b/drivers/regulator/rtmv20-regulator.c
+index 5adc552dffd58..4bca64de0f672 100644
+--- a/drivers/regulator/rtmv20-regulator.c
++++ b/drivers/regulator/rtmv20-regulator.c
+@@ -27,6 +27,7 @@
+ #define RTMV20_REG_LDIRQ	0x30
+ #define RTMV20_REG_LDSTAT	0x40
+ #define RTMV20_REG_LDMASK	0x50
++#define RTMV20_MAX_REGS		(RTMV20_REG_LDMASK + 1)
+ 
+ #define RTMV20_VID_MASK		GENMASK(7, 4)
+ #define RICHTEK_VID		0x80
+@@ -313,6 +314,7 @@ static const struct regmap_config rtmv20_regmap_config = {
+ 	.val_bits = 8,
+ 	.cache_type = REGCACHE_RBTREE,
+ 	.max_register = RTMV20_REG_LDMASK,
++	.num_reg_defaults_raw = RTMV20_MAX_REGS,
+ 
+ 	.writeable_reg = rtmv20_is_accessible_reg,
+ 	.readable_reg = rtmv20_is_accessible_reg,
+diff --git a/drivers/s390/crypto/ap_queue.c b/drivers/s390/crypto/ap_queue.c
+index ecefc25eff0c0..337353c9655ed 100644
+--- a/drivers/s390/crypto/ap_queue.c
++++ b/drivers/s390/crypto/ap_queue.c
+@@ -135,12 +135,13 @@ static struct ap_queue_status ap_sm_recv(struct ap_queue *aq)
+ {
+ 	struct ap_queue_status status;
+ 	struct ap_message *ap_msg;
++	bool found = false;
+ 
+ 	status = ap_dqap(aq->qid, &aq->reply->psmid,
+ 			 aq->reply->msg, aq->reply->len);
+ 	switch (status.response_code) {
+ 	case AP_RESPONSE_NORMAL:
+-		aq->queue_count--;
++		aq->queue_count = max_t(int, 0, aq->queue_count - 1);
+ 		if (aq->queue_count > 0)
+ 			mod_timer(&aq->timeout,
+ 				  jiffies + aq->request_timeout);
+@@ -150,8 +151,14 @@ static struct ap_queue_status ap_sm_recv(struct ap_queue *aq)
+ 			list_del_init(&ap_msg->list);
+ 			aq->pendingq_count--;
+ 			ap_msg->receive(aq, ap_msg, aq->reply);
++			found = true;
+ 			break;
+ 		}
++		if (!found) {
++			AP_DBF_WARN("%s unassociated reply psmid=0x%016llx on 0x%02x.%04x\n",
++				    __func__, aq->reply->psmid,
++				    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
++		}
+ 		fallthrough;
+ 	case AP_RESPONSE_NO_PENDING_REPLY:
+ 		if (!status.queue_empty || aq->queue_count <= 0)
+@@ -232,7 +239,7 @@ static enum ap_sm_wait ap_sm_write(struct ap_queue *aq)
+ 			   ap_msg->flags & AP_MSG_FLAG_SPECIAL);
+ 	switch (status.response_code) {
+ 	case AP_RESPONSE_NORMAL:
+-		aq->queue_count++;
++		aq->queue_count = max_t(int, 1, aq->queue_count + 1);
+ 		if (aq->queue_count == 1)
+ 			mod_timer(&aq->timeout, jiffies + aq->request_timeout);
+ 		list_move_tail(&ap_msg->list, &aq->pendingq);
+diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c
+index 2786470a52011..4f24f63922126 100644
+--- a/drivers/spi/spi-stm32-qspi.c
++++ b/drivers/spi/spi-stm32-qspi.c
+@@ -293,7 +293,7 @@ static int stm32_qspi_wait_cmd(struct stm32_qspi *qspi,
+ 	int err = 0;
+ 
+ 	if (!op->data.nbytes)
+-		return stm32_qspi_wait_nobusy(qspi);
++		goto wait_nobusy;
+ 
+ 	if (readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF)
+ 		goto out;
+@@ -314,6 +314,9 @@ static int stm32_qspi_wait_cmd(struct stm32_qspi *qspi,
+ out:
+ 	/* clear flags */
+ 	writel_relaxed(FCR_CTCF | FCR_CTEF, qspi->io_base + QSPI_FCR);
++wait_nobusy:
++	if (!err)
++		err = stm32_qspi_wait_nobusy(qspi);
+ 
+ 	return err;
+ }
+diff --git a/drivers/spi/spi-zynq-qspi.c b/drivers/spi/spi-zynq-qspi.c
+index 2765289028fae..68193db8b2e3c 100644
+--- a/drivers/spi/spi-zynq-qspi.c
++++ b/drivers/spi/spi-zynq-qspi.c
+@@ -678,14 +678,14 @@ static int zynq_qspi_probe(struct platform_device *pdev)
+ 	xqspi->irq = platform_get_irq(pdev, 0);
+ 	if (xqspi->irq <= 0) {
+ 		ret = -ENXIO;
+-		goto remove_master;
++		goto clk_dis_all;
+ 	}
+ 	ret = devm_request_irq(&pdev->dev, xqspi->irq, zynq_qspi_irq,
+ 			       0, pdev->name, xqspi);
+ 	if (ret != 0) {
+ 		ret = -ENXIO;
+ 		dev_err(&pdev->dev, "request_irq failed\n");
+-		goto remove_master;
++		goto clk_dis_all;
+ 	}
+ 
+ 	ret = of_property_read_u32(np, "num-cs",
+@@ -693,8 +693,9 @@ static int zynq_qspi_probe(struct platform_device *pdev)
+ 	if (ret < 0) {
+ 		ctlr->num_chipselect = 1;
+ 	} else if (num_cs > ZYNQ_QSPI_MAX_NUM_CS) {
++		ret = -EINVAL;
+ 		dev_err(&pdev->dev, "only 2 chip selects are available\n");
+-		goto remove_master;
++		goto clk_dis_all;
+ 	} else {
+ 		ctlr->num_chipselect = num_cs;
+ 	}
+diff --git a/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c b/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c
+index caaf9e34f1ee2..09b0b8a16e994 100644
+--- a/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c
++++ b/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c
+@@ -127,7 +127,7 @@ static int rt2880_pmx_group_enable(struct pinctrl_dev *pctrldev,
+ 	if (p->groups[group].enabled) {
+ 		dev_err(p->dev, "%s is already enabled\n",
+ 			p->groups[group].name);
+-		return -EBUSY;
++		return 0;
+ 	}
+ 
+ 	p->groups[group].enabled = 1;
+diff --git a/drivers/usb/chipidea/usbmisc_imx.c b/drivers/usb/chipidea/usbmisc_imx.c
+index 6d8331e7da99e..425b29168b4d0 100644
+--- a/drivers/usb/chipidea/usbmisc_imx.c
++++ b/drivers/usb/chipidea/usbmisc_imx.c
+@@ -686,6 +686,16 @@ static int imx7d_charger_secondary_detection(struct imx_usbmisc_data *data)
+ 	int val;
+ 	unsigned long flags;
+ 
++	/* Clear VDATSRCENB0 to disable VDP_SRC and IDM_SNK required by BC 1.2 spec */
++	spin_lock_irqsave(&usbmisc->lock, flags);
++	val = readl(usbmisc->base + MX7D_USB_OTG_PHY_CFG2);
++	val &= ~MX7D_USB_OTG_PHY_CFG2_CHRG_VDATSRCENB0;
++	writel(val, usbmisc->base + MX7D_USB_OTG_PHY_CFG2);
++	spin_unlock_irqrestore(&usbmisc->lock, flags);
++
++	/* TVDMSRC_DIS */
++	msleep(20);
++
+ 	/* VDM_SRC is connected to D- and IDP_SINK is connected to D+ */
+ 	spin_lock_irqsave(&usbmisc->lock, flags);
+ 	val = readl(usbmisc->base + MX7D_USB_OTG_PHY_CFG2);
+@@ -695,7 +705,8 @@ static int imx7d_charger_secondary_detection(struct imx_usbmisc_data *data)
+ 				usbmisc->base + MX7D_USB_OTG_PHY_CFG2);
+ 	spin_unlock_irqrestore(&usbmisc->lock, flags);
+ 
+-	usleep_range(1000, 2000);
++	/* TVDMSRC_ON */
++	msleep(40);
+ 
+ 	/*
+ 	 * Per BC 1.2, check voltage of D+:
+@@ -798,7 +809,8 @@ static int imx7d_charger_primary_detection(struct imx_usbmisc_data *data)
+ 				usbmisc->base + MX7D_USB_OTG_PHY_CFG2);
+ 	spin_unlock_irqrestore(&usbmisc->lock, flags);
+ 
+-	usleep_range(1000, 2000);
++	/* TVDPSRC_ON */
++	msleep(40);
+ 
+ 	/* Check if D- is less than VDAT_REF to determine an SDP per BC 1.2 */
+ 	val = readl(usbmisc->base + MX7D_USB_OTG_PHY_STATUS);
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 228e3d4e1a9fd..357730e8f52f2 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -40,6 +40,8 @@
+ #define USB_VENDOR_GENESYS_LOGIC		0x05e3
+ #define USB_VENDOR_SMSC				0x0424
+ #define USB_PRODUCT_USB5534B			0x5534
++#define USB_VENDOR_CYPRESS			0x04b4
++#define USB_PRODUCT_CY7C65632			0x6570
+ #define HUB_QUIRK_CHECK_PORT_AUTOSUSPEND	0x01
+ #define HUB_QUIRK_DISABLE_AUTOSUSPEND		0x02
+ 
+@@ -5643,6 +5645,11 @@ static const struct usb_device_id hub_id_table[] = {
+       .idProduct = USB_PRODUCT_USB5534B,
+       .bInterfaceClass = USB_CLASS_HUB,
+       .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND},
++    { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
++                   | USB_DEVICE_ID_MATCH_PRODUCT,
++      .idVendor = USB_VENDOR_CYPRESS,
++      .idProduct = USB_PRODUCT_CY7C65632,
++      .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND},
+     { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
+ 			| USB_DEVICE_ID_MATCH_INT_CLASS,
+       .idVendor = USB_VENDOR_GENESYS_LOGIC,
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index e07fd5ee8ed95..7537dd50ad533 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1642,8 +1642,8 @@ static int dwc3_remove(struct platform_device *pdev)
+ 
+ 	pm_runtime_get_sync(&pdev->dev);
+ 
+-	dwc3_debugfs_exit(dwc);
+ 	dwc3_core_exit_mode(dwc);
++	dwc3_debugfs_exit(dwc);
+ 
+ 	dwc3_core_exit(dwc);
+ 	dwc3_ulpi_exit(dwc);
+diff --git a/drivers/usb/dwc3/debug.h b/drivers/usb/dwc3/debug.h
+index 8ab3949423604..74d9c2c38193d 100644
+--- a/drivers/usb/dwc3/debug.h
++++ b/drivers/usb/dwc3/debug.h
+@@ -413,9 +413,12 @@ static inline const char *dwc3_gadget_generic_cmd_status_string(int status)
+ 
+ 
+ #ifdef CONFIG_DEBUG_FS
++extern void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep);
+ extern void dwc3_debugfs_init(struct dwc3 *d);
+ extern void dwc3_debugfs_exit(struct dwc3 *d);
+ #else
++static inline void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep)
++{  }
+ static inline void dwc3_debugfs_init(struct dwc3 *d)
+ {  }
+ static inline void dwc3_debugfs_exit(struct dwc3 *d)
+diff --git a/drivers/usb/dwc3/debugfs.c b/drivers/usb/dwc3/debugfs.c
+index 5da4f6082d930..3ebe3e6c284d2 100644
+--- a/drivers/usb/dwc3/debugfs.c
++++ b/drivers/usb/dwc3/debugfs.c
+@@ -890,30 +890,14 @@ static void dwc3_debugfs_create_endpoint_files(struct dwc3_ep *dep,
+ 	}
+ }
+ 
+-static void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep,
+-		struct dentry *parent)
++void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep)
+ {
+ 	struct dentry		*dir;
+ 
+-	dir = debugfs_create_dir(dep->name, parent);
++	dir = debugfs_create_dir(dep->name, dep->dwc->root);
+ 	dwc3_debugfs_create_endpoint_files(dep, dir);
+ }
+ 
+-static void dwc3_debugfs_create_endpoint_dirs(struct dwc3 *dwc,
+-		struct dentry *parent)
+-{
+-	int			i;
+-
+-	for (i = 0; i < dwc->num_eps; i++) {
+-		struct dwc3_ep	*dep = dwc->eps[i];
+-
+-		if (!dep)
+-			continue;
+-
+-		dwc3_debugfs_create_endpoint_dir(dep, parent);
+-	}
+-}
+-
+ void dwc3_debugfs_init(struct dwc3 *dwc)
+ {
+ 	struct dentry		*root;
+@@ -944,7 +928,6 @@ void dwc3_debugfs_init(struct dwc3 *dwc)
+ 				&dwc3_testmode_fops);
+ 		debugfs_create_file("link_state", 0644, root, dwc,
+ 				    &dwc3_link_state_fops);
+-		dwc3_debugfs_create_endpoint_dirs(dwc, root);
+ 	}
+ }
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 8bccdd7b0ca2e..14a7c05abfe8f 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2664,6 +2664,8 @@ static int dwc3_gadget_init_endpoint(struct dwc3 *dwc, u8 epnum)
+ 	INIT_LIST_HEAD(&dep->started_list);
+ 	INIT_LIST_HEAD(&dep->cancelled_list);
+ 
++	dwc3_debugfs_create_endpoint_dir(dep);
++
+ 	return 0;
+ }
+ 
+@@ -2707,6 +2709,7 @@ static void dwc3_gadget_free_endpoints(struct dwc3 *dwc)
+ 			list_del(&dep->endpoint.ep_list);
+ 		}
+ 
++		debugfs_remove_recursive(debugfs_lookup(dep->name, dwc->root));
+ 		kfree(dep);
+ 	}
+ }
+diff --git a/fs/afs/main.c b/fs/afs/main.c
+index b2975256dadbd..179004b15566d 100644
+--- a/fs/afs/main.c
++++ b/fs/afs/main.c
+@@ -203,8 +203,8 @@ static int __init afs_init(void)
+ 		goto error_fs;
+ 
+ 	afs_proc_symlink = proc_symlink("fs/afs", NULL, "../self/net/afs");
+-	if (IS_ERR(afs_proc_symlink)) {
+-		ret = PTR_ERR(afs_proc_symlink);
++	if (!afs_proc_symlink) {
++		ret = -ENOMEM;
+ 		goto error_proc;
+ 	}
+ 
+diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
+index dcab112e1f001..086b6bacbad17 100644
+--- a/fs/notify/fanotify/fanotify_user.c
++++ b/fs/notify/fanotify/fanotify_user.c
+@@ -378,7 +378,7 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ 					info_type, fanotify_info_name(info),
+ 					info->name_len, buf, count);
+ 		if (ret < 0)
+-			return ret;
++			goto out_close_fd;
+ 
+ 		buf += ret;
+ 		count -= ret;
+@@ -426,7 +426,7 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ 					fanotify_event_object_fh(event),
+ 					info_type, dot, dot_len, buf, count);
+ 		if (ret < 0)
+-			return ret;
++			goto out_close_fd;
+ 
+ 		buf += ret;
+ 		count -= ret;
+diff --git a/include/linux/mfd/rohm-bd70528.h b/include/linux/mfd/rohm-bd70528.h
+index a57af878fd0cd..4a5966475a35a 100644
+--- a/include/linux/mfd/rohm-bd70528.h
++++ b/include/linux/mfd/rohm-bd70528.h
+@@ -26,9 +26,7 @@ struct bd70528_data {
+ 	struct mutex rtc_timer_lock;
+ };
+ 
+-#define BD70528_BUCK_VOLTS 17
+-#define BD70528_BUCK_VOLTS 17
+-#define BD70528_BUCK_VOLTS 17
++#define BD70528_BUCK_VOLTS 0x10
+ #define BD70528_LDO_VOLTS 0x20
+ 
+ #define BD70528_REG_BUCK1_EN	0x0F
+diff --git a/include/linux/mlx5/transobj.h b/include/linux/mlx5/transobj.h
+index 028f442530cf5..60ffeb6b67ae7 100644
+--- a/include/linux/mlx5/transobj.h
++++ b/include/linux/mlx5/transobj.h
+@@ -85,4 +85,5 @@ mlx5_core_hairpin_create(struct mlx5_core_dev *func_mdev,
+ 			 struct mlx5_hairpin_params *params);
+ 
+ void mlx5_core_hairpin_destroy(struct mlx5_hairpin *pair);
++void mlx5_core_hairpin_clear_dead_peer(struct mlx5_hairpin *hp);
+ #endif /* __TRANSOBJ_H__ */
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index a4fff7d7abe58..4eb38918da8f8 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -448,13 +448,6 @@ struct mm_struct {
+ 		 */
+ 		atomic_t has_pinned;
+ 
+-		/**
+-		 * @write_protect_seq: Locked when any thread is write
+-		 * protecting pages mapped by this mm to enforce a later COW,
+-		 * for instance during page table copying for fork().
+-		 */
+-		seqcount_t write_protect_seq;
+-
+ #ifdef CONFIG_MMU
+ 		atomic_long_t pgtables_bytes;	/* PTE page table pages */
+ #endif
+@@ -463,6 +456,18 @@ struct mm_struct {
+ 		spinlock_t page_table_lock; /* Protects page tables and some
+ 					     * counters
+ 					     */
++		/*
++		 * With some kernel config, the current mmap_lock's offset
++		 * inside 'mm_struct' is at 0x120, which is very optimal, as
++		 * its two hot fields 'count' and 'owner' sit in 2 different
++		 * cachelines,  and when mmap_lock is highly contended, both
++		 * of the 2 fields will be accessed frequently, current layout
++		 * will help to reduce cache bouncing.
++		 *
++		 * So please be careful with adding new fields before
++		 * mmap_lock, which can easily push the 2 fields into one
++		 * cacheline.
++		 */
+ 		struct rw_semaphore mmap_lock;
+ 
+ 		struct list_head mmlist; /* List of maybe swapped mm's.	These
+@@ -483,7 +488,15 @@ struct mm_struct {
+ 		unsigned long stack_vm;	   /* VM_STACK */
+ 		unsigned long def_flags;
+ 
++		/**
++		 * @write_protect_seq: Locked when any thread is write
++		 * protecting pages mapped by this mm to enforce a later COW,
++		 * for instance during page table copying for fork().
++		 */
++		seqcount_t write_protect_seq;
++
+ 		spinlock_t arg_lock; /* protect the below fields */
++
+ 		unsigned long start_code, end_code, start_data, end_data;
+ 		unsigned long start_brk, brk, start_stack;
+ 		unsigned long arg_start, arg_end, env_start, env_end;
+diff --git a/include/linux/ptp_clock_kernel.h b/include/linux/ptp_clock_kernel.h
+index d3e8ba5c71258..6d6b42143effc 100644
+--- a/include/linux/ptp_clock_kernel.h
++++ b/include/linux/ptp_clock_kernel.h
+@@ -222,7 +222,7 @@ extern int ptp_clock_index(struct ptp_clock *ptp);
+  * @ppm:    Parts per million, but with a 16 bit binary fractional field
+  */
+ 
+-extern s32 scaled_ppm_to_ppb(long ppm);
++extern long scaled_ppm_to_ppb(long ppm);
+ 
+ /**
+  * ptp_find_pin() - obtain the pin index of a given auxiliary function
+diff --git a/include/linux/socket.h b/include/linux/socket.h
+index e9cb30d8cbfb1..9aa530d497da8 100644
+--- a/include/linux/socket.h
++++ b/include/linux/socket.h
+@@ -437,6 +437,4 @@ extern int __sys_getpeername(int fd, struct sockaddr __user *usockaddr,
+ extern int __sys_socketpair(int family, int type, int protocol,
+ 			    int __user *usockvec);
+ extern int __sys_shutdown(int fd, int how);
+-
+-extern struct ns_common *get_net_ns(struct ns_common *ns);
+ #endif /* _LINUX_SOCKET_H */
+diff --git a/include/linux/swapops.h b/include/linux/swapops.h
+index d9b7c9132c2f6..6430a94c69818 100644
+--- a/include/linux/swapops.h
++++ b/include/linux/swapops.h
+@@ -23,6 +23,16 @@
+ #define SWP_TYPE_SHIFT	(BITS_PER_XA_VALUE - MAX_SWAPFILES_SHIFT)
+ #define SWP_OFFSET_MASK	((1UL << SWP_TYPE_SHIFT) - 1)
+ 
++/* Clear all flags but only keep swp_entry_t related information */
++static inline pte_t pte_swp_clear_flags(pte_t pte)
++{
++	if (pte_swp_soft_dirty(pte))
++		pte = pte_swp_clear_soft_dirty(pte);
++	if (pte_swp_uffd_wp(pte))
++		pte = pte_swp_clear_uffd_wp(pte);
++	return pte;
++}
++
+ /*
+  * Store a type+offset into a swp_entry_t in an arch-independent format
+  */
+@@ -66,10 +76,7 @@ static inline swp_entry_t pte_to_swp_entry(pte_t pte)
+ {
+ 	swp_entry_t arch_entry;
+ 
+-	if (pte_swp_soft_dirty(pte))
+-		pte = pte_swp_clear_soft_dirty(pte);
+-	if (pte_swp_uffd_wp(pte))
+-		pte = pte_swp_clear_uffd_wp(pte);
++	pte = pte_swp_clear_flags(pte);
+ 	arch_entry = __pte_to_swp_entry(pte);
+ 	return swp_entry(__swp_type(arch_entry), __swp_offset(arch_entry));
+ }
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index dcdba96814a2b..6ff49c13717bb 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -6335,7 +6335,12 @@ bool ieee80211_tx_prepare_skb(struct ieee80211_hw *hw,
+ 
+ /**
+  * ieee80211_parse_tx_radiotap - Sanity-check and parse the radiotap header
+- *				 of injected frames
++ *				 of injected frames.
++ *
++ * To accurately parse and take into account rate and retransmission fields,
++ * you must initialize the chandef field in the ieee80211_tx_info structure
++ * of the skb before calling this function.
++ *
+  * @skb: packet injected by userspace
+  * @dev: the &struct device of this 802.11 device
+  */
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index 22bc07f4b043d..eb0e7731f3b1c 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -203,6 +203,8 @@ struct net *copy_net_ns(unsigned long flags, struct user_namespace *user_ns,
+ void net_ns_get_ownership(const struct net *net, kuid_t *uid, kgid_t *gid);
+ 
+ void net_ns_barrier(void);
++
++struct ns_common *get_net_ns(struct ns_common *ns);
+ #else /* CONFIG_NET_NS */
+ #include <linux/sched.h>
+ #include <linux/nsproxy.h>
+@@ -222,6 +224,11 @@ static inline void net_ns_get_ownership(const struct net *net,
+ }
+ 
+ static inline void net_ns_barrier(void) {}
++
++static inline struct ns_common *get_net_ns(struct ns_common *ns)
++{
++	return ERR_PTR(-EINVAL);
++}
+ #endif /* CONFIG_NET_NS */
+ 
+ 
+diff --git a/include/uapi/linux/in.h b/include/uapi/linux/in.h
+index 7d6687618d808..d1b327036ae43 100644
+--- a/include/uapi/linux/in.h
++++ b/include/uapi/linux/in.h
+@@ -289,6 +289,9 @@ struct sockaddr_in {
+ /* Address indicating an error return. */
+ #define	INADDR_NONE		((unsigned long int) 0xffffffff)
+ 
++/* Dummy address for src of ICMP replies if no real address is set (RFC7600). */
++#define	INADDR_DUMMY		((unsigned long int) 0xc0000008)
++
+ /* Network number for local host loopback. */
+ #define	IN_LOOPBACKNET		127
+ 
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 4f50d6f128be3..e97724e36dfb5 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5740,6 +5740,27 @@ struct bpf_sanitize_info {
+ 	bool mask_to_left;
+ };
+ 
++static struct bpf_verifier_state *
++sanitize_speculative_path(struct bpf_verifier_env *env,
++			  const struct bpf_insn *insn,
++			  u32 next_idx, u32 curr_idx)
++{
++	struct bpf_verifier_state *branch;
++	struct bpf_reg_state *regs;
++
++	branch = push_stack(env, next_idx, curr_idx, true);
++	if (branch && insn) {
++		regs = branch->frame[branch->curframe]->regs;
++		if (BPF_SRC(insn->code) == BPF_K) {
++			mark_reg_unknown(env, regs, insn->dst_reg);
++		} else if (BPF_SRC(insn->code) == BPF_X) {
++			mark_reg_unknown(env, regs, insn->dst_reg);
++			mark_reg_unknown(env, regs, insn->src_reg);
++		}
++	}
++	return branch;
++}
++
+ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 			    struct bpf_insn *insn,
+ 			    const struct bpf_reg_state *ptr_reg,
+@@ -5823,12 +5844,26 @@ do_sim:
+ 		tmp = *dst_reg;
+ 		*dst_reg = *ptr_reg;
+ 	}
+-	ret = push_stack(env, env->insn_idx + 1, env->insn_idx, true);
++	ret = sanitize_speculative_path(env, NULL, env->insn_idx + 1,
++					env->insn_idx);
+ 	if (!ptr_is_dst_reg && ret)
+ 		*dst_reg = tmp;
+ 	return !ret ? REASON_STACK : 0;
+ }
+ 
++static void sanitize_mark_insn_seen(struct bpf_verifier_env *env)
++{
++	struct bpf_verifier_state *vstate = env->cur_state;
++
++	/* If we simulate paths under speculation, we don't update the
++	 * insn as 'seen' such that when we verify unreachable paths in
++	 * the non-speculative domain, sanitize_dead_code() can still
++	 * rewrite/sanitize them.
++	 */
++	if (!vstate->speculative)
++		env->insn_aux_data[env->insn_idx].seen = env->pass_cnt;
++}
++
+ static int sanitize_err(struct bpf_verifier_env *env,
+ 			const struct bpf_insn *insn, int reason,
+ 			const struct bpf_reg_state *off_reg,
+@@ -7974,14 +8009,28 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 		if (err)
+ 			return err;
+ 	}
++
+ 	if (pred == 1) {
+-		/* only follow the goto, ignore fall-through */
++		/* Only follow the goto, ignore fall-through. If needed, push
++		 * the fall-through branch for simulation under speculative
++		 * execution.
++		 */
++		if (!env->bypass_spec_v1 &&
++		    !sanitize_speculative_path(env, insn, *insn_idx + 1,
++					       *insn_idx))
++			return -EFAULT;
+ 		*insn_idx += insn->off;
+ 		return 0;
+ 	} else if (pred == 0) {
+-		/* only follow fall-through branch, since
+-		 * that's where the program will go
++		/* Only follow the fall-through branch, since that's where the
++		 * program will go. If needed, push the goto branch for
++		 * simulation under speculative execution.
+ 		 */
++		if (!env->bypass_spec_v1 &&
++		    !sanitize_speculative_path(env, insn,
++					       *insn_idx + insn->off + 1,
++					       *insn_idx))
++			return -EFAULT;
+ 		return 0;
+ 	}
+ 
+@@ -9811,7 +9860,7 @@ static int do_check(struct bpf_verifier_env *env)
+ 		}
+ 
+ 		regs = cur_regs(env);
+-		env->insn_aux_data[env->insn_idx].seen = env->pass_cnt;
++		sanitize_mark_insn_seen(env);
+ 		prev_insn_idx = env->insn_idx;
+ 
+ 		if (class == BPF_ALU || class == BPF_ALU64) {
+@@ -10031,7 +10080,7 @@ process_bpf_exit:
+ 					return err;
+ 
+ 				env->insn_idx++;
+-				env->insn_aux_data[env->insn_idx].seen = env->pass_cnt;
++				sanitize_mark_insn_seen(env);
+ 			} else {
+ 				verbose(env, "invalid BPF_LD mode\n");
+ 				return -EINVAL;
+@@ -10439,6 +10488,7 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env,
+ {
+ 	struct bpf_insn_aux_data *new_data, *old_data = env->insn_aux_data;
+ 	struct bpf_insn *insn = new_prog->insnsi;
++	u32 old_seen = old_data[off].seen;
+ 	u32 prog_len;
+ 	int i;
+ 
+@@ -10459,7 +10509,8 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env,
+ 	memcpy(new_data + off + cnt - 1, old_data + off,
+ 	       sizeof(struct bpf_insn_aux_data) * (prog_len - off - cnt + 1));
+ 	for (i = off; i < off + cnt - 1; i++) {
+-		new_data[i].seen = env->pass_cnt;
++		/* Expand insni[off]'s seen count to the patched range. */
++		new_data[i].seen = old_seen;
+ 		new_data[i].zext_dst = insn_has_def32(env, insn + i);
+ 	}
+ 	env->insn_aux_data = new_data;
+@@ -11703,6 +11754,9 @@ static void free_states(struct bpf_verifier_env *env)
+  * insn_aux_data was touched. These variables are compared to clear temporary
+  * data from failed pass. For testing and experiments do_check_common() can be
+  * run multiple times even when prior attempt to verify is unsuccessful.
++ *
++ * Note that special handling is needed on !env->bypass_spec_v1 if this is
++ * ever called outside of error path with subsequent program rejection.
+  */
+ static void sanitize_insn_aux_data(struct bpf_verifier_env *env)
+ {
+diff --git a/kernel/crash_core.c b/kernel/crash_core.c
+index 106e4500fd53d..4a5fed2f497b8 100644
+--- a/kernel/crash_core.c
++++ b/kernel/crash_core.c
+@@ -463,6 +463,7 @@ static int __init crash_save_vmcoreinfo_init(void)
+ 	VMCOREINFO_LENGTH(mem_section, NR_SECTION_ROOTS);
+ 	VMCOREINFO_STRUCT_SIZE(mem_section);
+ 	VMCOREINFO_OFFSET(mem_section, section_mem_map);
++	VMCOREINFO_NUMBER(SECTION_SIZE_BITS);
+ 	VMCOREINFO_NUMBER(MAX_PHYSMEM_BITS);
+ #endif
+ 	VMCOREINFO_STRUCT_SIZE(page);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index ff8a172a69ca9..d6e1c90de570a 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3767,11 +3767,17 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
+  */
+ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
+ {
++	/*
++	 * cfs_rq->avg.period_contrib can be used for both cfs_rq and se.
++	 * See ___update_load_avg() for details.
++	 */
++	u32 divider = get_pelt_divider(&cfs_rq->avg);
++
+ 	dequeue_load_avg(cfs_rq, se);
+ 	sub_positive(&cfs_rq->avg.util_avg, se->avg.util_avg);
+-	sub_positive(&cfs_rq->avg.util_sum, se->avg.util_sum);
++	cfs_rq->avg.util_sum = cfs_rq->avg.util_avg * divider;
+ 	sub_positive(&cfs_rq->avg.runnable_avg, se->avg.runnable_avg);
+-	sub_positive(&cfs_rq->avg.runnable_sum, se->avg.runnable_sum);
++	cfs_rq->avg.runnable_sum = cfs_rq->avg.runnable_avg * divider;
+ 
+ 	add_tg_cfs_propagate(cfs_rq, -se->avg.load_sum);
+ 
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index b2c141eaca020..b09c598065019 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2195,9 +2195,6 @@ struct saved_cmdlines_buffer {
+ };
+ static struct saved_cmdlines_buffer *savedcmd;
+ 
+-/* temporary disable recording */
+-static atomic_t trace_record_taskinfo_disabled __read_mostly;
+-
+ static inline char *get_saved_cmdlines(int idx)
+ {
+ 	return &savedcmd->saved_cmdlines[idx * TASK_COMM_LEN];
+@@ -2483,8 +2480,6 @@ static bool tracing_record_taskinfo_skip(int flags)
+ {
+ 	if (unlikely(!(flags & (TRACE_RECORD_CMDLINE | TRACE_RECORD_TGID))))
+ 		return true;
+-	if (atomic_read(&trace_record_taskinfo_disabled) || !tracing_is_on())
+-		return true;
+ 	if (!__this_cpu_read(trace_taskinfo_save))
+ 		return true;
+ 	return false;
+@@ -3685,9 +3680,6 @@ static void *s_start(struct seq_file *m, loff_t *pos)
+ 		return ERR_PTR(-EBUSY);
+ #endif
+ 
+-	if (!iter->snapshot)
+-		atomic_inc(&trace_record_taskinfo_disabled);
+-
+ 	if (*pos != iter->pos) {
+ 		iter->ent = NULL;
+ 		iter->cpu = 0;
+@@ -3730,9 +3722,6 @@ static void s_stop(struct seq_file *m, void *p)
+ 		return;
+ #endif
+ 
+-	if (!iter->snapshot)
+-		atomic_dec(&trace_record_taskinfo_disabled);
+-
+ 	trace_access_unlock(iter->cpu_file);
+ 	trace_event_read_unlock();
+ }
+diff --git a/kernel/trace/trace_clock.c b/kernel/trace/trace_clock.c
+index c1637f90c8a38..4702efb00ff21 100644
+--- a/kernel/trace/trace_clock.c
++++ b/kernel/trace/trace_clock.c
+@@ -115,9 +115,9 @@ u64 notrace trace_clock_global(void)
+ 	prev_time = READ_ONCE(trace_clock_struct.prev_time);
+ 	now = sched_clock_cpu(this_cpu);
+ 
+-	/* Make sure that now is always greater than prev_time */
++	/* Make sure that now is always greater than or equal to prev_time */
+ 	if ((s64)(now - prev_time) < 0)
+-		now = prev_time + 1;
++		now = prev_time;
+ 
+ 	/*
+ 	 * If in an NMI context then dont risk lockups and simply return
+@@ -131,7 +131,7 @@ u64 notrace trace_clock_global(void)
+ 		/* Reread prev_time in case it was already updated */
+ 		prev_time = READ_ONCE(trace_clock_struct.prev_time);
+ 		if ((s64)(now - prev_time) < 0)
+-			now = prev_time + 1;
++			now = prev_time;
+ 
+ 		trace_clock_struct.prev_time = now;
+ 
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 2d7a667f8e609..25fb82320e3d5 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1445,7 +1445,12 @@ int memory_failure(unsigned long pfn, int flags)
+ 		return 0;
+ 	}
+ 
+-	if (!PageTransTail(p) && !PageLRU(p))
++	/*
++	 * __munlock_pagevec may clear a writeback page's LRU flag without
++	 * page_lock. We need wait writeback completion for this page or it
++	 * may trigger vfs BUG while evict inode.
++	 */
++	if (!PageTransTail(p) && !PageLRU(p) && !PageWriteback(p))
+ 		goto identify_page_state;
+ 
+ 	/*
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 8f27ccf9f7f35..ec832904f4084 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -87,8 +87,7 @@ EXPORT_SYMBOL(kmem_cache_size);
+ #ifdef CONFIG_DEBUG_VM
+ static int kmem_cache_sanity_check(const char *name, unsigned int size)
+ {
+-	if (!name || in_interrupt() || size < sizeof(void *) ||
+-		size > KMALLOC_MAX_SIZE) {
++	if (!name || in_interrupt() || size > KMALLOC_MAX_SIZE) {
+ 		pr_err("kmem_cache_create(%s) integrity check failed\n", name);
+ 		return -EINVAL;
+ 	}
+diff --git a/mm/slub.c b/mm/slub.c
+index 05a501b67cd59..f5fc44208bdc3 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -15,6 +15,7 @@
+ #include <linux/module.h>
+ #include <linux/bit_spinlock.h>
+ #include <linux/interrupt.h>
++#include <linux/swab.h>
+ #include <linux/bitops.h>
+ #include <linux/slab.h>
+ #include "slab.h"
+@@ -698,15 +699,15 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
+ 	       p, p - addr, get_freepointer(s, p));
+ 
+ 	if (s->flags & SLAB_RED_ZONE)
+-		print_section(KERN_ERR, "Redzone ", p - s->red_left_pad,
++		print_section(KERN_ERR, "Redzone  ", p - s->red_left_pad,
+ 			      s->red_left_pad);
+ 	else if (p > addr + 16)
+ 		print_section(KERN_ERR, "Bytes b4 ", p - 16, 16);
+ 
+-	print_section(KERN_ERR, "Object ", p,
++	print_section(KERN_ERR,         "Object   ", p,
+ 		      min_t(unsigned int, s->object_size, PAGE_SIZE));
+ 	if (s->flags & SLAB_RED_ZONE)
+-		print_section(KERN_ERR, "Redzone ", p + s->object_size,
++		print_section(KERN_ERR, "Redzone  ", p + s->object_size,
+ 			s->inuse - s->object_size);
+ 
+ 	off = get_info_end(s);
+@@ -718,7 +719,7 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
+ 
+ 	if (off != size_from_object(s))
+ 		/* Beginning of the filler is the free pointer */
+-		print_section(KERN_ERR, "Padding ", p + off,
++		print_section(KERN_ERR, "Padding  ", p + off,
+ 			      size_from_object(s) - off);
+ 
+ 	dump_stack();
+@@ -895,11 +896,11 @@ static int check_object(struct kmem_cache *s, struct page *page,
+ 	u8 *endobject = object + s->object_size;
+ 
+ 	if (s->flags & SLAB_RED_ZONE) {
+-		if (!check_bytes_and_report(s, page, object, "Redzone",
++		if (!check_bytes_and_report(s, page, object, "Left Redzone",
+ 			object - s->red_left_pad, val, s->red_left_pad))
+ 			return 0;
+ 
+-		if (!check_bytes_and_report(s, page, object, "Redzone",
++		if (!check_bytes_and_report(s, page, object, "Right Redzone",
+ 			endobject, val, s->inuse - s->object_size))
+ 			return 0;
+ 	} else {
+@@ -914,7 +915,7 @@ static int check_object(struct kmem_cache *s, struct page *page,
+ 		if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON) &&
+ 			(!check_bytes_and_report(s, page, p, "Poison", p,
+ 					POISON_FREE, s->object_size - 1) ||
+-			 !check_bytes_and_report(s, page, p, "Poison",
++			 !check_bytes_and_report(s, page, p, "End Poison",
+ 				p + s->object_size - 1, POISON_END, 1)))
+ 			return 0;
+ 		/*
+@@ -3639,7 +3640,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
+ {
+ 	slab_flags_t flags = s->flags;
+ 	unsigned int size = s->object_size;
+-	unsigned int freepointer_area;
+ 	unsigned int order;
+ 
+ 	/*
+@@ -3648,13 +3648,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
+ 	 * the possible location of the free pointer.
+ 	 */
+ 	size = ALIGN(size, sizeof(void *));
+-	/*
+-	 * This is the area of the object where a freepointer can be
+-	 * safely written. If redzoning adds more to the inuse size, we
+-	 * can't use that portion for writing the freepointer, so
+-	 * s->offset must be limited within this for the general case.
+-	 */
+-	freepointer_area = size;
+ 
+ #ifdef CONFIG_SLUB_DEBUG
+ 	/*
+@@ -3680,19 +3673,21 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
+ 
+ 	/*
+ 	 * With that we have determined the number of bytes in actual use
+-	 * by the object. This is the potential offset to the free pointer.
++	 * by the object and redzoning.
+ 	 */
+ 	s->inuse = size;
+ 
+-	if (((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
+-		s->ctor)) {
++	if ((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
++	    ((flags & SLAB_RED_ZONE) && s->object_size < sizeof(void *)) ||
++	    s->ctor) {
+ 		/*
+ 		 * Relocate free pointer after the object if it is not
+ 		 * permitted to overwrite the first word of the object on
+ 		 * kmem_cache_free.
+ 		 *
+ 		 * This is the case if we do RCU, have a constructor or
+-		 * destructor or are poisoning the objects.
++		 * destructor, are poisoning the objects, or are
++		 * redzoning an object smaller than sizeof(void *).
+ 		 *
+ 		 * The assumption that s->offset >= s->inuse means free
+ 		 * pointer is outside of the object is used in the
+@@ -3701,13 +3696,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
+ 		 */
+ 		s->offset = size;
+ 		size += sizeof(void *);
+-	} else if (freepointer_area > sizeof(void *)) {
++	} else {
+ 		/*
+ 		 * Store freelist pointer near middle of object to keep
+ 		 * it away from the edges of the object to avoid small
+ 		 * sized over/underflows from neighboring allocations.
+ 		 */
+-		s->offset = ALIGN(freepointer_area / 2, sizeof(void *));
++		s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
+ 	}
+ 
+ #ifdef CONFIG_SLUB_DEBUG
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 5256c10049b0f..5af6b0f770de6 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -1903,7 +1903,7 @@ unsigned int count_swap_pages(int type, int free)
+ 
+ static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
+ {
+-	return pte_same(pte_swp_clear_soft_dirty(pte), swp_pte);
++	return pte_same(pte_swp_clear_flags(pte), swp_pte);
+ }
+ 
+ /*
+diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c
+index 206d0b424712e..c0aa54d21c649 100644
+--- a/net/batman-adv/bat_iv_ogm.c
++++ b/net/batman-adv/bat_iv_ogm.c
+@@ -410,8 +410,10 @@ static void batadv_iv_ogm_emit(struct batadv_forw_packet *forw_packet)
+ 	if (WARN_ON(!forw_packet->if_outgoing))
+ 		return;
+ 
+-	if (WARN_ON(forw_packet->if_outgoing->soft_iface != soft_iface))
++	if (forw_packet->if_outgoing->soft_iface != soft_iface) {
++		pr_warn("%s: soft interface switch for queued OGM\n", __func__);
+ 		return;
++	}
+ 
+ 	if (forw_packet->if_incoming->if_status != BATADV_IF_ACTIVE)
+ 		return;
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index 8424464186a6b..5e5726048a1af 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -98,8 +98,8 @@ struct br_vlan_stats {
+ };
+ 
+ struct br_tunnel_info {
+-	__be64			tunnel_id;
+-	struct metadata_dst	*tunnel_dst;
++	__be64				tunnel_id;
++	struct metadata_dst __rcu	*tunnel_dst;
+ };
+ 
+ /* private vlan flags */
+diff --git a/net/bridge/br_vlan_tunnel.c b/net/bridge/br_vlan_tunnel.c
+index 169e005fbda29..debe167202782 100644
+--- a/net/bridge/br_vlan_tunnel.c
++++ b/net/bridge/br_vlan_tunnel.c
+@@ -41,26 +41,33 @@ static struct net_bridge_vlan *br_vlan_tunnel_lookup(struct rhashtable *tbl,
+ 				      br_vlan_tunnel_rht_params);
+ }
+ 
++static void vlan_tunnel_info_release(struct net_bridge_vlan *vlan)
++{
++	struct metadata_dst *tdst = rtnl_dereference(vlan->tinfo.tunnel_dst);
++
++	WRITE_ONCE(vlan->tinfo.tunnel_id, 0);
++	RCU_INIT_POINTER(vlan->tinfo.tunnel_dst, NULL);
++	dst_release(&tdst->dst);
++}
++
+ void vlan_tunnel_info_del(struct net_bridge_vlan_group *vg,
+ 			  struct net_bridge_vlan *vlan)
+ {
+-	if (!vlan->tinfo.tunnel_dst)
++	if (!rcu_access_pointer(vlan->tinfo.tunnel_dst))
+ 		return;
+ 	rhashtable_remove_fast(&vg->tunnel_hash, &vlan->tnode,
+ 			       br_vlan_tunnel_rht_params);
+-	vlan->tinfo.tunnel_id = 0;
+-	dst_release(&vlan->tinfo.tunnel_dst->dst);
+-	vlan->tinfo.tunnel_dst = NULL;
++	vlan_tunnel_info_release(vlan);
+ }
+ 
+ static int __vlan_tunnel_info_add(struct net_bridge_vlan_group *vg,
+ 				  struct net_bridge_vlan *vlan, u32 tun_id)
+ {
+-	struct metadata_dst *metadata = NULL;
++	struct metadata_dst *metadata = rtnl_dereference(vlan->tinfo.tunnel_dst);
+ 	__be64 key = key32_to_tunnel_id(cpu_to_be32(tun_id));
+ 	int err;
+ 
+-	if (vlan->tinfo.tunnel_dst)
++	if (metadata)
+ 		return -EEXIST;
+ 
+ 	metadata = __ip_tun_set_dst(0, 0, 0, 0, 0, TUNNEL_KEY,
+@@ -69,8 +76,8 @@ static int __vlan_tunnel_info_add(struct net_bridge_vlan_group *vg,
+ 		return -EINVAL;
+ 
+ 	metadata->u.tun_info.mode |= IP_TUNNEL_INFO_TX | IP_TUNNEL_INFO_BRIDGE;
+-	vlan->tinfo.tunnel_dst = metadata;
+-	vlan->tinfo.tunnel_id = key;
++	rcu_assign_pointer(vlan->tinfo.tunnel_dst, metadata);
++	WRITE_ONCE(vlan->tinfo.tunnel_id, key);
+ 
+ 	err = rhashtable_lookup_insert_fast(&vg->tunnel_hash, &vlan->tnode,
+ 					    br_vlan_tunnel_rht_params);
+@@ -79,9 +86,7 @@ static int __vlan_tunnel_info_add(struct net_bridge_vlan_group *vg,
+ 
+ 	return 0;
+ out:
+-	dst_release(&vlan->tinfo.tunnel_dst->dst);
+-	vlan->tinfo.tunnel_dst = NULL;
+-	vlan->tinfo.tunnel_id = 0;
++	vlan_tunnel_info_release(vlan);
+ 
+ 	return err;
+ }
+@@ -182,12 +187,15 @@ int br_handle_ingress_vlan_tunnel(struct sk_buff *skb,
+ int br_handle_egress_vlan_tunnel(struct sk_buff *skb,
+ 				 struct net_bridge_vlan *vlan)
+ {
++	struct metadata_dst *tunnel_dst;
++	__be64 tunnel_id;
+ 	int err;
+ 
+-	if (!vlan || !vlan->tinfo.tunnel_id)
++	if (!vlan)
+ 		return 0;
+ 
+-	if (unlikely(!skb_vlan_tag_present(skb)))
++	tunnel_id = READ_ONCE(vlan->tinfo.tunnel_id);
++	if (!tunnel_id || unlikely(!skb_vlan_tag_present(skb)))
+ 		return 0;
+ 
+ 	skb_dst_drop(skb);
+@@ -195,7 +203,9 @@ int br_handle_egress_vlan_tunnel(struct sk_buff *skb,
+ 	if (err)
+ 		return err;
+ 
+-	skb_dst_set(skb, dst_clone(&vlan->tinfo.tunnel_dst->dst));
++	tunnel_dst = rcu_dereference(vlan->tinfo.tunnel_dst);
++	if (tunnel_dst && dst_hold_safe(&tunnel_dst->dst))
++		skb_dst_set(skb, &tunnel_dst->dst);
+ 
+ 	return 0;
+ }
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index 909b9e684e043..f3e4d9528fa38 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -125,7 +125,7 @@ struct bcm_sock {
+ 	struct sock sk;
+ 	int bound;
+ 	int ifindex;
+-	struct notifier_block notifier;
++	struct list_head notifier;
+ 	struct list_head rx_ops;
+ 	struct list_head tx_ops;
+ 	unsigned long dropped_usr_msgs;
+@@ -133,6 +133,10 @@ struct bcm_sock {
+ 	char procname [32]; /* inode number in decimal with \0 */
+ };
+ 
++static LIST_HEAD(bcm_notifier_list);
++static DEFINE_SPINLOCK(bcm_notifier_lock);
++static struct bcm_sock *bcm_busy_notifier;
++
+ static inline struct bcm_sock *bcm_sk(const struct sock *sk)
+ {
+ 	return (struct bcm_sock *)sk;
+@@ -402,6 +406,7 @@ static enum hrtimer_restart bcm_tx_timeout_handler(struct hrtimer *hrtimer)
+ 		if (!op->count && (op->flags & TX_COUNTEVT)) {
+ 
+ 			/* create notification to user */
++			memset(&msg_head, 0, sizeof(msg_head));
+ 			msg_head.opcode  = TX_EXPIRED;
+ 			msg_head.flags   = op->flags;
+ 			msg_head.count   = op->count;
+@@ -439,6 +444,7 @@ static void bcm_rx_changed(struct bcm_op *op, struct canfd_frame *data)
+ 	/* this element is not throttled anymore */
+ 	data->flags &= (BCM_CAN_FLAGS_MASK|RX_RECV);
+ 
++	memset(&head, 0, sizeof(head));
+ 	head.opcode  = RX_CHANGED;
+ 	head.flags   = op->flags;
+ 	head.count   = op->count;
+@@ -560,6 +566,7 @@ static enum hrtimer_restart bcm_rx_timeout_handler(struct hrtimer *hrtimer)
+ 	}
+ 
+ 	/* create notification to user */
++	memset(&msg_head, 0, sizeof(msg_head));
+ 	msg_head.opcode  = RX_TIMEOUT;
+ 	msg_head.flags   = op->flags;
+ 	msg_head.count   = op->count;
+@@ -1378,20 +1385,15 @@ static int bcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ /*
+  * notification handler for netdevice status changes
+  */
+-static int bcm_notifier(struct notifier_block *nb, unsigned long msg,
+-			void *ptr)
++static void bcm_notify(struct bcm_sock *bo, unsigned long msg,
++		       struct net_device *dev)
+ {
+-	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+-	struct bcm_sock *bo = container_of(nb, struct bcm_sock, notifier);
+ 	struct sock *sk = &bo->sk;
+ 	struct bcm_op *op;
+ 	int notify_enodev = 0;
+ 
+ 	if (!net_eq(dev_net(dev), sock_net(sk)))
+-		return NOTIFY_DONE;
+-
+-	if (dev->type != ARPHRD_CAN)
+-		return NOTIFY_DONE;
++		return;
+ 
+ 	switch (msg) {
+ 
+@@ -1426,7 +1428,28 @@ static int bcm_notifier(struct notifier_block *nb, unsigned long msg,
+ 				sk->sk_error_report(sk);
+ 		}
+ 	}
++}
+ 
++static int bcm_notifier(struct notifier_block *nb, unsigned long msg,
++			void *ptr)
++{
++	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
++
++	if (dev->type != ARPHRD_CAN)
++		return NOTIFY_DONE;
++	if (msg != NETDEV_UNREGISTER && msg != NETDEV_DOWN)
++		return NOTIFY_DONE;
++	if (unlikely(bcm_busy_notifier)) /* Check for reentrant bug. */
++		return NOTIFY_DONE;
++
++	spin_lock(&bcm_notifier_lock);
++	list_for_each_entry(bcm_busy_notifier, &bcm_notifier_list, notifier) {
++		spin_unlock(&bcm_notifier_lock);
++		bcm_notify(bcm_busy_notifier, msg, dev);
++		spin_lock(&bcm_notifier_lock);
++	}
++	bcm_busy_notifier = NULL;
++	spin_unlock(&bcm_notifier_lock);
+ 	return NOTIFY_DONE;
+ }
+ 
+@@ -1446,9 +1469,9 @@ static int bcm_init(struct sock *sk)
+ 	INIT_LIST_HEAD(&bo->rx_ops);
+ 
+ 	/* set notifier */
+-	bo->notifier.notifier_call = bcm_notifier;
+-
+-	register_netdevice_notifier(&bo->notifier);
++	spin_lock(&bcm_notifier_lock);
++	list_add_tail(&bo->notifier, &bcm_notifier_list);
++	spin_unlock(&bcm_notifier_lock);
+ 
+ 	return 0;
+ }
+@@ -1471,7 +1494,14 @@ static int bcm_release(struct socket *sock)
+ 
+ 	/* remove bcm_ops, timer, rx_unregister(), etc. */
+ 
+-	unregister_netdevice_notifier(&bo->notifier);
++	spin_lock(&bcm_notifier_lock);
++	while (bcm_busy_notifier == bo) {
++		spin_unlock(&bcm_notifier_lock);
++		schedule_timeout_uninterruptible(1);
++		spin_lock(&bcm_notifier_lock);
++	}
++	list_del(&bo->notifier);
++	spin_unlock(&bcm_notifier_lock);
+ 
+ 	lock_sock(sk);
+ 
+@@ -1692,6 +1722,10 @@ static struct pernet_operations canbcm_pernet_ops __read_mostly = {
+ 	.exit = canbcm_pernet_exit,
+ };
+ 
++static struct notifier_block canbcm_notifier = {
++	.notifier_call = bcm_notifier
++};
++
+ static int __init bcm_module_init(void)
+ {
+ 	int err;
+@@ -1705,12 +1739,14 @@ static int __init bcm_module_init(void)
+ 	}
+ 
+ 	register_pernet_subsys(&canbcm_pernet_ops);
++	register_netdevice_notifier(&canbcm_notifier);
+ 	return 0;
+ }
+ 
+ static void __exit bcm_module_exit(void)
+ {
+ 	can_proto_unregister(&bcm_can_proto);
++	unregister_netdevice_notifier(&canbcm_notifier);
+ 	unregister_pernet_subsys(&canbcm_pernet_ops);
+ }
+ 
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index d5780ab29e098..1adefb14527d8 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -143,10 +143,14 @@ struct isotp_sock {
+ 	u32 force_tx_stmin;
+ 	u32 force_rx_stmin;
+ 	struct tpcon rx, tx;
+-	struct notifier_block notifier;
++	struct list_head notifier;
+ 	wait_queue_head_t wait;
+ };
+ 
++static LIST_HEAD(isotp_notifier_list);
++static DEFINE_SPINLOCK(isotp_notifier_lock);
++static struct isotp_sock *isotp_busy_notifier;
++
+ static inline struct isotp_sock *isotp_sk(const struct sock *sk)
+ {
+ 	return (struct isotp_sock *)sk;
+@@ -1008,7 +1012,14 @@ static int isotp_release(struct socket *sock)
+ 	/* wait for complete transmission of current pdu */
+ 	wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
+ 
+-	unregister_netdevice_notifier(&so->notifier);
++	spin_lock(&isotp_notifier_lock);
++	while (isotp_busy_notifier == so) {
++		spin_unlock(&isotp_notifier_lock);
++		schedule_timeout_uninterruptible(1);
++		spin_lock(&isotp_notifier_lock);
++	}
++	list_del(&so->notifier);
++	spin_unlock(&isotp_notifier_lock);
+ 
+ 	lock_sock(sk);
+ 
+@@ -1284,21 +1295,16 @@ static int isotp_getsockopt(struct socket *sock, int level, int optname,
+ 	return 0;
+ }
+ 
+-static int isotp_notifier(struct notifier_block *nb, unsigned long msg,
+-			  void *ptr)
++static void isotp_notify(struct isotp_sock *so, unsigned long msg,
++			 struct net_device *dev)
+ {
+-	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+-	struct isotp_sock *so = container_of(nb, struct isotp_sock, notifier);
+ 	struct sock *sk = &so->sk;
+ 
+ 	if (!net_eq(dev_net(dev), sock_net(sk)))
+-		return NOTIFY_DONE;
+-
+-	if (dev->type != ARPHRD_CAN)
+-		return NOTIFY_DONE;
++		return;
+ 
+ 	if (so->ifindex != dev->ifindex)
+-		return NOTIFY_DONE;
++		return;
+ 
+ 	switch (msg) {
+ 	case NETDEV_UNREGISTER:
+@@ -1324,7 +1330,28 @@ static int isotp_notifier(struct notifier_block *nb, unsigned long msg,
+ 			sk->sk_error_report(sk);
+ 		break;
+ 	}
++}
+ 
++static int isotp_notifier(struct notifier_block *nb, unsigned long msg,
++			  void *ptr)
++{
++	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
++
++	if (dev->type != ARPHRD_CAN)
++		return NOTIFY_DONE;
++	if (msg != NETDEV_UNREGISTER && msg != NETDEV_DOWN)
++		return NOTIFY_DONE;
++	if (unlikely(isotp_busy_notifier)) /* Check for reentrant bug. */
++		return NOTIFY_DONE;
++
++	spin_lock(&isotp_notifier_lock);
++	list_for_each_entry(isotp_busy_notifier, &isotp_notifier_list, notifier) {
++		spin_unlock(&isotp_notifier_lock);
++		isotp_notify(isotp_busy_notifier, msg, dev);
++		spin_lock(&isotp_notifier_lock);
++	}
++	isotp_busy_notifier = NULL;
++	spin_unlock(&isotp_notifier_lock);
+ 	return NOTIFY_DONE;
+ }
+ 
+@@ -1361,8 +1388,9 @@ static int isotp_init(struct sock *sk)
+ 
+ 	init_waitqueue_head(&so->wait);
+ 
+-	so->notifier.notifier_call = isotp_notifier;
+-	register_netdevice_notifier(&so->notifier);
++	spin_lock(&isotp_notifier_lock);
++	list_add_tail(&so->notifier, &isotp_notifier_list);
++	spin_unlock(&isotp_notifier_lock);
+ 
+ 	return 0;
+ }
+@@ -1409,6 +1437,10 @@ static const struct can_proto isotp_can_proto = {
+ 	.prot = &isotp_proto,
+ };
+ 
++static struct notifier_block canisotp_notifier = {
++	.notifier_call = isotp_notifier
++};
++
+ static __init int isotp_module_init(void)
+ {
+ 	int err;
+@@ -1418,6 +1450,8 @@ static __init int isotp_module_init(void)
+ 	err = can_proto_register(&isotp_can_proto);
+ 	if (err < 0)
+ 		pr_err("can: registration of isotp protocol failed\n");
++	else
++		register_netdevice_notifier(&canisotp_notifier);
+ 
+ 	return err;
+ }
+@@ -1425,6 +1459,7 @@ static __init int isotp_module_init(void)
+ static __exit void isotp_module_exit(void)
+ {
+ 	can_proto_unregister(&isotp_can_proto);
++	unregister_netdevice_notifier(&canisotp_notifier);
+ }
+ 
+ module_init(isotp_module_init);
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index e09d087ba2409..c3946c3558826 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -330,6 +330,9 @@ static void j1939_session_skb_drop_old(struct j1939_session *session)
+ 
+ 	if ((do_skcb->offset + do_skb->len) < offset_start) {
+ 		__skb_unlink(do_skb, &session->skb_queue);
++		/* drop ref taken in j1939_session_skb_queue() */
++		skb_unref(do_skb);
++
+ 		kfree_skb(do_skb);
+ 	}
+ 	spin_unlock_irqrestore(&session->skb_queue.lock, flags);
+@@ -349,12 +352,13 @@ void j1939_session_skb_queue(struct j1939_session *session,
+ 
+ 	skcb->flags |= J1939_ECU_LOCAL_SRC;
+ 
++	skb_get(skb);
+ 	skb_queue_tail(&session->skb_queue, skb);
+ }
+ 
+ static struct
+-sk_buff *j1939_session_skb_find_by_offset(struct j1939_session *session,
+-					  unsigned int offset_start)
++sk_buff *j1939_session_skb_get_by_offset(struct j1939_session *session,
++					 unsigned int offset_start)
+ {
+ 	struct j1939_priv *priv = session->priv;
+ 	struct j1939_sk_buff_cb *do_skcb;
+@@ -371,6 +375,10 @@ sk_buff *j1939_session_skb_find_by_offset(struct j1939_session *session,
+ 			skb = do_skb;
+ 		}
+ 	}
++
++	if (skb)
++		skb_get(skb);
++
+ 	spin_unlock_irqrestore(&session->skb_queue.lock, flags);
+ 
+ 	if (!skb)
+@@ -381,12 +389,12 @@ sk_buff *j1939_session_skb_find_by_offset(struct j1939_session *session,
+ 	return skb;
+ }
+ 
+-static struct sk_buff *j1939_session_skb_find(struct j1939_session *session)
++static struct sk_buff *j1939_session_skb_get(struct j1939_session *session)
+ {
+ 	unsigned int offset_start;
+ 
+ 	offset_start = session->pkt.dpo * 7;
+-	return j1939_session_skb_find_by_offset(session, offset_start);
++	return j1939_session_skb_get_by_offset(session, offset_start);
+ }
+ 
+ /* see if we are receiver
+@@ -776,7 +784,7 @@ static int j1939_session_tx_dat(struct j1939_session *session)
+ 	int ret = 0;
+ 	u8 dat[8];
+ 
+-	se_skb = j1939_session_skb_find_by_offset(session, session->pkt.tx * 7);
++	se_skb = j1939_session_skb_get_by_offset(session, session->pkt.tx * 7);
+ 	if (!se_skb)
+ 		return -ENOBUFS;
+ 
+@@ -801,7 +809,8 @@ static int j1939_session_tx_dat(struct j1939_session *session)
+ 			netdev_err_once(priv->ndev,
+ 					"%s: 0x%p: requested data outside of queued buffer: offset %i, len %i, pkt.tx: %i\n",
+ 					__func__, session, skcb->offset, se_skb->len , session->pkt.tx);
+-			return -EOVERFLOW;
++			ret = -EOVERFLOW;
++			goto out_free;
+ 		}
+ 
+ 		if (!len) {
+@@ -835,6 +844,12 @@ static int j1939_session_tx_dat(struct j1939_session *session)
+ 	if (pkt_done)
+ 		j1939_tp_set_rxtimeout(session, 250);
+ 
++ out_free:
++	if (ret)
++		kfree_skb(se_skb);
++	else
++		consume_skb(se_skb);
++
+ 	return ret;
+ }
+ 
+@@ -1007,7 +1022,7 @@ static int j1939_xtp_txnext_receiver(struct j1939_session *session)
+ static int j1939_simple_txnext(struct j1939_session *session)
+ {
+ 	struct j1939_priv *priv = session->priv;
+-	struct sk_buff *se_skb = j1939_session_skb_find(session);
++	struct sk_buff *se_skb = j1939_session_skb_get(session);
+ 	struct sk_buff *skb;
+ 	int ret;
+ 
+@@ -1015,8 +1030,10 @@ static int j1939_simple_txnext(struct j1939_session *session)
+ 		return 0;
+ 
+ 	skb = skb_clone(se_skb, GFP_ATOMIC);
+-	if (!skb)
+-		return -ENOMEM;
++	if (!skb) {
++		ret = -ENOMEM;
++		goto out_free;
++	}
+ 
+ 	can_skb_set_owner(skb, se_skb->sk);
+ 
+@@ -1024,12 +1041,18 @@ static int j1939_simple_txnext(struct j1939_session *session)
+ 
+ 	ret = j1939_send_one(priv, skb);
+ 	if (ret)
+-		return ret;
++		goto out_free;
+ 
+ 	j1939_sk_errqueue(session, J1939_ERRQUEUE_SCHED);
+ 	j1939_sk_queue_activate_next(session);
+ 
+-	return 0;
++ out_free:
++	if (ret)
++		kfree_skb(se_skb);
++	else
++		consume_skb(se_skb);
++
++	return ret;
+ }
+ 
+ static bool j1939_session_deactivate_locked(struct j1939_session *session)
+@@ -1170,9 +1193,10 @@ static void j1939_session_completed(struct j1939_session *session)
+ 	struct sk_buff *skb;
+ 
+ 	if (!session->transmission) {
+-		skb = j1939_session_skb_find(session);
++		skb = j1939_session_skb_get(session);
+ 		/* distribute among j1939 receivers */
+ 		j1939_sk_recv(session->priv, skb);
++		consume_skb(skb);
+ 	}
+ 
+ 	j1939_session_deactivate_activate_next(session);
+@@ -1744,7 +1768,7 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ {
+ 	struct j1939_priv *priv = session->priv;
+ 	struct j1939_sk_buff_cb *skcb;
+-	struct sk_buff *se_skb;
++	struct sk_buff *se_skb = NULL;
+ 	const u8 *dat;
+ 	u8 *tpdat;
+ 	int offset;
+@@ -1786,7 +1810,7 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ 		goto out_session_cancel;
+ 	}
+ 
+-	se_skb = j1939_session_skb_find_by_offset(session, packet * 7);
++	se_skb = j1939_session_skb_get_by_offset(session, packet * 7);
+ 	if (!se_skb) {
+ 		netdev_warn(priv->ndev, "%s: 0x%p: no skb found\n", __func__,
+ 			    session);
+@@ -1848,11 +1872,13 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ 		j1939_tp_set_rxtimeout(session, 250);
+ 	}
+ 	session->last_cmd = 0xff;
++	consume_skb(se_skb);
+ 	j1939_session_put(session);
+ 
+ 	return;
+ 
+  out_session_cancel:
++	kfree_skb(se_skb);
+ 	j1939_session_timers_cancel(session);
+ 	j1939_session_cancel(session, J1939_XTP_ABORT_FAULT);
+ 	j1939_session_put(session);
+diff --git a/net/can/raw.c b/net/can/raw.c
+index 95113b0898b24..4a7c063deb6ce 100644
+--- a/net/can/raw.c
++++ b/net/can/raw.c
+@@ -83,7 +83,7 @@ struct raw_sock {
+ 	struct sock sk;
+ 	int bound;
+ 	int ifindex;
+-	struct notifier_block notifier;
++	struct list_head notifier;
+ 	int loopback;
+ 	int recv_own_msgs;
+ 	int fd_frames;
+@@ -95,6 +95,10 @@ struct raw_sock {
+ 	struct uniqframe __percpu *uniq;
+ };
+ 
++static LIST_HEAD(raw_notifier_list);
++static DEFINE_SPINLOCK(raw_notifier_lock);
++static struct raw_sock *raw_busy_notifier;
++
+ /* Return pointer to store the extra msg flags for raw_recvmsg().
+  * We use the space of one unsigned int beyond the 'struct sockaddr_can'
+  * in skb->cb.
+@@ -263,21 +267,16 @@ static int raw_enable_allfilters(struct net *net, struct net_device *dev,
+ 	return err;
+ }
+ 
+-static int raw_notifier(struct notifier_block *nb,
+-			unsigned long msg, void *ptr)
++static void raw_notify(struct raw_sock *ro, unsigned long msg,
++		       struct net_device *dev)
+ {
+-	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+-	struct raw_sock *ro = container_of(nb, struct raw_sock, notifier);
+ 	struct sock *sk = &ro->sk;
+ 
+ 	if (!net_eq(dev_net(dev), sock_net(sk)))
+-		return NOTIFY_DONE;
+-
+-	if (dev->type != ARPHRD_CAN)
+-		return NOTIFY_DONE;
++		return;
+ 
+ 	if (ro->ifindex != dev->ifindex)
+-		return NOTIFY_DONE;
++		return;
+ 
+ 	switch (msg) {
+ 	case NETDEV_UNREGISTER:
+@@ -305,7 +304,28 @@ static int raw_notifier(struct notifier_block *nb,
+ 			sk->sk_error_report(sk);
+ 		break;
+ 	}
++}
++
++static int raw_notifier(struct notifier_block *nb, unsigned long msg,
++			void *ptr)
++{
++	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
++
++	if (dev->type != ARPHRD_CAN)
++		return NOTIFY_DONE;
++	if (msg != NETDEV_UNREGISTER && msg != NETDEV_DOWN)
++		return NOTIFY_DONE;
++	if (unlikely(raw_busy_notifier)) /* Check for reentrant bug. */
++		return NOTIFY_DONE;
+ 
++	spin_lock(&raw_notifier_lock);
++	list_for_each_entry(raw_busy_notifier, &raw_notifier_list, notifier) {
++		spin_unlock(&raw_notifier_lock);
++		raw_notify(raw_busy_notifier, msg, dev);
++		spin_lock(&raw_notifier_lock);
++	}
++	raw_busy_notifier = NULL;
++	spin_unlock(&raw_notifier_lock);
+ 	return NOTIFY_DONE;
+ }
+ 
+@@ -334,9 +354,9 @@ static int raw_init(struct sock *sk)
+ 		return -ENOMEM;
+ 
+ 	/* set notifier */
+-	ro->notifier.notifier_call = raw_notifier;
+-
+-	register_netdevice_notifier(&ro->notifier);
++	spin_lock(&raw_notifier_lock);
++	list_add_tail(&ro->notifier, &raw_notifier_list);
++	spin_unlock(&raw_notifier_lock);
+ 
+ 	return 0;
+ }
+@@ -351,7 +371,14 @@ static int raw_release(struct socket *sock)
+ 
+ 	ro = raw_sk(sk);
+ 
+-	unregister_netdevice_notifier(&ro->notifier);
++	spin_lock(&raw_notifier_lock);
++	while (raw_busy_notifier == ro) {
++		spin_unlock(&raw_notifier_lock);
++		schedule_timeout_uninterruptible(1);
++		spin_lock(&raw_notifier_lock);
++	}
++	list_del(&ro->notifier);
++	spin_unlock(&raw_notifier_lock);
+ 
+ 	lock_sock(sk);
+ 
+@@ -881,6 +908,10 @@ static const struct can_proto raw_can_proto = {
+ 	.prot       = &raw_proto,
+ };
+ 
++static struct notifier_block canraw_notifier = {
++	.notifier_call = raw_notifier
++};
++
+ static __init int raw_module_init(void)
+ {
+ 	int err;
+@@ -890,6 +921,8 @@ static __init int raw_module_init(void)
+ 	err = can_proto_register(&raw_can_proto);
+ 	if (err < 0)
+ 		pr_err("can: registration of raw protocol failed\n");
++	else
++		register_netdevice_notifier(&canraw_notifier);
+ 
+ 	return err;
+ }
+@@ -897,6 +930,7 @@ static __init int raw_module_init(void)
+ static __exit void raw_module_exit(void)
+ {
+ 	can_proto_unregister(&raw_can_proto);
++	unregister_netdevice_notifier(&canraw_notifier);
+ }
+ 
+ module_init(raw_module_init);
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index dbc66b896287a..5c9d95f30be60 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -650,6 +650,18 @@ void __put_net(struct net *net)
+ }
+ EXPORT_SYMBOL_GPL(__put_net);
+ 
++/**
++ * get_net_ns - increment the refcount of the network namespace
++ * @ns: common namespace (net)
++ *
++ * Returns the net's common namespace.
++ */
++struct ns_common *get_net_ns(struct ns_common *ns)
++{
++	return &get_net(container_of(ns, struct net, ns))->ns;
++}
++EXPORT_SYMBOL_GPL(get_net_ns);
++
+ struct net *get_net_ns_by_fd(int fd)
+ {
+ 	struct file *file;
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 83894723ebeea..dd46592464058 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -4842,10 +4842,12 @@ static int rtnl_bridge_notify(struct net_device *dev)
+ 	if (err < 0)
+ 		goto errout;
+ 
+-	if (!skb->len) {
+-		err = -EINVAL;
++	/* Notification info is only filled for bridge ports, not the bridge
++	 * device itself. Therefore, a zero notification length is valid and
++	 * should not result in an error.
++	 */
++	if (!skb->len)
+ 		goto errout;
+-	}
+ 
+ 	rtnl_notify(skb, net, 0, RTNLGRP_LINK, NULL, GFP_ATOMIC);
+ 	return 0;
+diff --git a/net/ethtool/strset.c b/net/ethtool/strset.c
+index c3a5489964cde..9908b922cce8d 100644
+--- a/net/ethtool/strset.c
++++ b/net/ethtool/strset.c
+@@ -328,6 +328,8 @@ static int strset_reply_size(const struct ethnl_req_info *req_base,
+ 	int len = 0;
+ 	int ret;
+ 
++	len += nla_total_size(0); /* ETHTOOL_A_STRSET_STRINGSETS */
++
+ 	for (i = 0; i < ETH_SS_COUNT; i++) {
+ 		const struct strset_info *set_info = &data->sets[i];
+ 
+diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
+index be09c7669a799..ca217a6f488f6 100644
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -472,6 +472,7 @@ void cipso_v4_doi_free(struct cipso_v4_doi *doi_def)
+ 		kfree(doi_def->map.std->lvl.local);
+ 		kfree(doi_def->map.std->cat.cipso);
+ 		kfree(doi_def->map.std->cat.local);
++		kfree(doi_def->map.std);
+ 		break;
+ 	}
+ 	kfree(doi_def);
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index ff3818333fcfb..b71b836cc7d19 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -759,6 +759,13 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ 		icmp_param.data_len = room;
+ 	icmp_param.head_len = sizeof(struct icmphdr);
+ 
++	/* if we don't have a source address at this point, fall back to the
++	 * dummy address instead of sending out a packet with a source address
++	 * of 0.0.0.0
++	 */
++	if (!fl4.saddr)
++		fl4.saddr = htonl(INADDR_DUMMY);
++
+ 	icmp_push_reply(&icmp_param, &fl4, &ipc, &rt);
+ ende:
+ 	ip_rt_put(rt);
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 7b272bbed2b43..6b3c558a4f232 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -1801,6 +1801,7 @@ void ip_mc_destroy_dev(struct in_device *in_dev)
+ 	while ((i = rtnl_dereference(in_dev->mc_list)) != NULL) {
+ 		in_dev->mc_list = i->next_rcu;
+ 		in_dev->mc_count--;
++		ip_mc_clear_src(i);
+ 		ip_ma_put(i);
+ 	}
+ }
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 798dc85bde5b7..e968bb47d5bd8 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -2076,6 +2076,19 @@ martian_source:
+ 	return err;
+ }
+ 
++/* get device for dst_alloc with local routes */
++static struct net_device *ip_rt_get_dev(struct net *net,
++					const struct fib_result *res)
++{
++	struct fib_nh_common *nhc = res->fi ? res->nhc : NULL;
++	struct net_device *dev = NULL;
++
++	if (nhc)
++		dev = l3mdev_master_dev_rcu(nhc->nhc_dev);
++
++	return dev ? : net->loopback_dev;
++}
++
+ /*
+  *	NOTE. We drop all the packets that has local source
+  *	addresses, because every properly looped back packet
+@@ -2232,7 +2245,7 @@ local_input:
+ 		}
+ 	}
+ 
+-	rth = rt_dst_alloc(l3mdev_master_dev_rcu(dev) ? : net->loopback_dev,
++	rth = rt_dst_alloc(ip_rt_get_dev(net, res),
+ 			   flags | RTCF_LOCAL, res->type,
+ 			   IN_DEV_CONF_GET(in_dev, NOPOLICY), false);
+ 	if (!rth)
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 9d28b2778e8fe..fbb9a11fe4a37 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2569,6 +2569,9 @@ void udp_destroy_sock(struct sock *sk)
+ {
+ 	struct udp_sock *up = udp_sk(sk);
+ 	bool slow = lock_sock_fast(sk);
++
++	/* protects from races with udp_abort() */
++	sock_set_flag(sk, SOCK_DEAD);
+ 	udp_flush_pending_frames(sk);
+ 	unlock_sock_fast(sk, slow);
+ 	if (static_branch_unlikely(&udp_encap_needed_key)) {
+@@ -2819,10 +2822,17 @@ int udp_abort(struct sock *sk, int err)
+ {
+ 	lock_sock(sk);
+ 
++	/* udp{v6}_destroy_sock() sets it under the sk lock, avoid racing
++	 * with close()
++	 */
++	if (sock_flag(sk, SOCK_DEAD))
++		goto out;
++
+ 	sk->sk_err = err;
+ 	sk->sk_error_report(sk);
+ 	__udp_disconnect(sk, 0);
+ 
++out:
+ 	release_sock(sk);
+ 
+ 	return 0;
+diff --git a/net/ipv6/netfilter/nft_fib_ipv6.c b/net/ipv6/netfilter/nft_fib_ipv6.c
+index e204163c7036c..92f3235fa2874 100644
+--- a/net/ipv6/netfilter/nft_fib_ipv6.c
++++ b/net/ipv6/netfilter/nft_fib_ipv6.c
+@@ -135,6 +135,17 @@ void nft_fib6_eval_type(const struct nft_expr *expr, struct nft_regs *regs,
+ }
+ EXPORT_SYMBOL_GPL(nft_fib6_eval_type);
+ 
++static bool nft_fib_v6_skip_icmpv6(const struct sk_buff *skb, u8 next, const struct ipv6hdr *iph)
++{
++	if (likely(next != IPPROTO_ICMPV6))
++		return false;
++
++	if (ipv6_addr_type(&iph->saddr) != IPV6_ADDR_ANY)
++		return false;
++
++	return ipv6_addr_type(&iph->daddr) & IPV6_ADDR_LINKLOCAL;
++}
++
+ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 		   const struct nft_pktinfo *pkt)
+ {
+@@ -163,10 +174,13 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 
+ 	lookup_flags = nft_fib6_flowi_init(&fl6, priv, pkt, oif, iph);
+ 
+-	if (nft_hook(pkt) == NF_INET_PRE_ROUTING &&
+-	    nft_fib_is_loopback(pkt->skb, nft_in(pkt))) {
+-		nft_fib_store_result(dest, priv, nft_in(pkt));
+-		return;
++	if (nft_hook(pkt) == NF_INET_PRE_ROUTING ||
++	    nft_hook(pkt) == NF_INET_INGRESS) {
++		if (nft_fib_is_loopback(pkt->skb, nft_in(pkt)) ||
++		    nft_fib_v6_skip_icmpv6(pkt->skb, pkt->tprot, iph)) {
++			nft_fib_store_result(dest, priv, nft_in(pkt));
++			return;
++		}
+ 	}
+ 
+ 	*dest = 0;
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 29d9691359b9c..e2de58d6cdce2 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1596,6 +1596,9 @@ void udpv6_destroy_sock(struct sock *sk)
+ {
+ 	struct udp_sock *up = udp_sk(sk);
+ 	lock_sock(sk);
++
++	/* protects from races with udp_abort() */
++	sock_set_flag(sk, SOCK_DEAD);
+ 	udp_v6_flush_pending_frames(sk);
+ 	release_sock(sk);
+ 
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index d4cc9ac2d7033..6b50cb5e0e3cc 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -251,13 +251,24 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
+ 	struct ieee80211_mgmt *mgmt = (void *)skb->data;
+ 	struct ieee80211_bss *bss;
+ 	struct ieee80211_channel *channel;
++	size_t min_hdr_len = offsetof(struct ieee80211_mgmt,
++				      u.probe_resp.variable);
++
++	if (!ieee80211_is_probe_resp(mgmt->frame_control) &&
++	    !ieee80211_is_beacon(mgmt->frame_control) &&
++	    !ieee80211_is_s1g_beacon(mgmt->frame_control))
++		return;
+ 
+ 	if (ieee80211_is_s1g_beacon(mgmt->frame_control)) {
+-		if (skb->len < 15)
+-			return;
+-	} else if (skb->len < 24 ||
+-		 (!ieee80211_is_probe_resp(mgmt->frame_control) &&
+-		  !ieee80211_is_beacon(mgmt->frame_control)))
++		if (ieee80211_is_s1g_short_beacon(mgmt->frame_control))
++			min_hdr_len = offsetof(struct ieee80211_ext,
++					       u.s1g_short_beacon.variable);
++		else
++			min_hdr_len = offsetof(struct ieee80211_ext,
++					       u.s1g_beacon);
++	}
++
++	if (skb->len < min_hdr_len)
+ 		return;
+ 
+ 	sdata1 = rcu_dereference(local->scan_sdata);
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 1d8526d89505f..20b3581a1c43f 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -2030,6 +2030,26 @@ void ieee80211_xmit(struct ieee80211_sub_if_data *sdata,
+ 	ieee80211_tx(sdata, sta, skb, false);
+ }
+ 
++static bool ieee80211_validate_radiotap_len(struct sk_buff *skb)
++{
++	struct ieee80211_radiotap_header *rthdr =
++		(struct ieee80211_radiotap_header *)skb->data;
++
++	/* check for not even having the fixed radiotap header part */
++	if (unlikely(skb->len < sizeof(struct ieee80211_radiotap_header)))
++		return false; /* too short to be possibly valid */
++
++	/* is it a header version we can trust to find length from? */
++	if (unlikely(rthdr->it_version))
++		return false; /* only version 0 is supported */
++
++	/* does the skb contain enough to deliver on the alleged length? */
++	if (unlikely(skb->len < ieee80211_get_radiotap_len(skb->data)))
++		return false; /* skb too short for claimed rt header extent */
++
++	return true;
++}
++
+ bool ieee80211_parse_tx_radiotap(struct sk_buff *skb,
+ 				 struct net_device *dev)
+ {
+@@ -2038,8 +2058,6 @@ bool ieee80211_parse_tx_radiotap(struct sk_buff *skb,
+ 	struct ieee80211_radiotap_header *rthdr =
+ 		(struct ieee80211_radiotap_header *) skb->data;
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+-	struct ieee80211_supported_band *sband =
+-		local->hw.wiphy->bands[info->band];
+ 	int ret = ieee80211_radiotap_iterator_init(&iterator, rthdr, skb->len,
+ 						   NULL);
+ 	u16 txflags;
+@@ -2052,17 +2070,8 @@ bool ieee80211_parse_tx_radiotap(struct sk_buff *skb,
+ 	u8 vht_mcs = 0, vht_nss = 0;
+ 	int i;
+ 
+-	/* check for not even having the fixed radiotap header part */
+-	if (unlikely(skb->len < sizeof(struct ieee80211_radiotap_header)))
+-		return false; /* too short to be possibly valid */
+-
+-	/* is it a header version we can trust to find length from? */
+-	if (unlikely(rthdr->it_version))
+-		return false; /* only version 0 is supported */
+-
+-	/* does the skb contain enough to deliver on the alleged length? */
+-	if (unlikely(skb->len < ieee80211_get_radiotap_len(skb->data)))
+-		return false; /* skb too short for claimed rt header extent */
++	if (!ieee80211_validate_radiotap_len(skb))
++		return false;
+ 
+ 	info->flags |= IEEE80211_TX_INTFL_DONT_ENCRYPT |
+ 		       IEEE80211_TX_CTL_DONTFRAG;
+@@ -2186,6 +2195,9 @@ bool ieee80211_parse_tx_radiotap(struct sk_buff *skb,
+ 		return false;
+ 
+ 	if (rate_found) {
++		struct ieee80211_supported_band *sband =
++			local->hw.wiphy->bands[info->band];
++
+ 		info->control.flags |= IEEE80211_TX_CTRL_RATE_INJECT;
+ 
+ 		for (i = 0; i < IEEE80211_TX_MAX_RATES; i++) {
+@@ -2199,7 +2211,7 @@ bool ieee80211_parse_tx_radiotap(struct sk_buff *skb,
+ 		} else if (rate_flags & IEEE80211_TX_RC_VHT_MCS) {
+ 			ieee80211_rate_set_vht(info->control.rates, vht_mcs,
+ 					       vht_nss);
+-		} else {
++		} else if (sband) {
+ 			for (i = 0; i < sband->n_bitrates; i++) {
+ 				if (rate * 5 != sband->bitrates[i].bitrate)
+ 					continue;
+@@ -2236,8 +2248,8 @@ netdev_tx_t ieee80211_monitor_start_xmit(struct sk_buff *skb,
+ 	info->flags = IEEE80211_TX_CTL_REQ_TX_STATUS |
+ 		      IEEE80211_TX_CTL_INJECTED;
+ 
+-	/* Sanity-check and process the injection radiotap header */
+-	if (!ieee80211_parse_tx_radiotap(skb, dev))
++	/* Sanity-check the length of the radiotap header */
++	if (!ieee80211_validate_radiotap_len(skb))
+ 		goto fail;
+ 
+ 	/* we now know there is a radiotap header with a length we can use */
+@@ -2353,6 +2365,14 @@ netdev_tx_t ieee80211_monitor_start_xmit(struct sk_buff *skb,
+ 
+ 	info->band = chandef->chan->band;
+ 
++	/*
++	 * Process the radiotap header. This will now take into account the
++	 * selected chandef above to accurately set injection rates and
++	 * retransmissions.
++	 */
++	if (!ieee80211_parse_tx_radiotap(skb, dev))
++		goto fail_rcu;
++
+ 	/* remove the injection radiotap header */
+ 	skb_pull(skb, len_rthdr);
+ 
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 91034a221983c..ac0233c9cd349 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -314,6 +314,8 @@ void mptcp_get_options(const struct sk_buff *skb,
+ 			length--;
+ 			continue;
+ 		default:
++			if (length < 2)
++				return;
+ 			opsize = *ptr++;
+ 			if (opsize < 2) /* "silly options" */
+ 				return;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 7832b20baac2e..3ca8b359e399a 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -276,11 +276,13 @@ static bool __mptcp_move_skb(struct mptcp_sock *msk, struct sock *ssk,
+ 
+ 	/* try to fetch required memory from subflow */
+ 	if (!sk_rmem_schedule(sk, skb, skb->truesize)) {
+-		if (ssk->sk_forward_alloc < skb->truesize)
+-			goto drop;
+-		__sk_mem_reclaim(ssk, skb->truesize);
+-		if (!sk_rmem_schedule(sk, skb, skb->truesize))
++		int amount = sk_mem_pages(skb->truesize) << SK_MEM_QUANTUM_SHIFT;
++
++		if (ssk->sk_forward_alloc < amount)
+ 			goto drop;
++
++		ssk->sk_forward_alloc -= amount;
++		sk->sk_forward_alloc += amount;
+ 	}
+ 
+ 	/* the skb map_seq accounts for the skb offset:
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 96b6aca9d0ae7..851fb3d8c791d 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -655,10 +655,10 @@ static u64 expand_seq(u64 old_seq, u16 old_data_len, u64 seq)
+ 	return seq | ((old_seq + old_data_len + 1) & GENMASK_ULL(63, 32));
+ }
+ 
+-static void warn_bad_map(struct mptcp_subflow_context *subflow, u32 ssn)
++static void dbg_bad_map(struct mptcp_subflow_context *subflow, u32 ssn)
+ {
+-	WARN_ONCE(1, "Bad mapping: ssn=%d map_seq=%d map_data_len=%d",
+-		  ssn, subflow->map_subflow_seq, subflow->map_data_len);
++	pr_debug("Bad mapping: ssn=%d map_seq=%d map_data_len=%d",
++		 ssn, subflow->map_subflow_seq, subflow->map_data_len);
+ }
+ 
+ static bool skb_is_fully_mapped(struct sock *ssk, struct sk_buff *skb)
+@@ -683,13 +683,13 @@ static bool validate_mapping(struct sock *ssk, struct sk_buff *skb)
+ 		/* Mapping covers data later in the subflow stream,
+ 		 * currently unsupported.
+ 		 */
+-		warn_bad_map(subflow, ssn);
++		dbg_bad_map(subflow, ssn);
+ 		return false;
+ 	}
+ 	if (unlikely(!before(ssn, subflow->map_subflow_seq +
+ 				  subflow->map_data_len))) {
+ 		/* Mapping does covers past subflow data, invalid */
+-		warn_bad_map(subflow, ssn + skb->len);
++		dbg_bad_map(subflow, ssn);
+ 		return false;
+ 	}
+ 	return true;
+diff --git a/net/netfilter/nf_synproxy_core.c b/net/netfilter/nf_synproxy_core.c
+index d7d34a62d3bf5..2fc4ae960769d 100644
+--- a/net/netfilter/nf_synproxy_core.c
++++ b/net/netfilter/nf_synproxy_core.c
+@@ -31,6 +31,9 @@ synproxy_parse_options(const struct sk_buff *skb, unsigned int doff,
+ 	int length = (th->doff * 4) - sizeof(*th);
+ 	u8 buf[40], *ptr;
+ 
++	if (unlikely(length < 0))
++		return false;
++
+ 	ptr = skb_header_pointer(skb, doff + sizeof(*th), length, buf);
+ 	if (ptr == NULL)
+ 		return false;
+@@ -47,6 +50,8 @@ synproxy_parse_options(const struct sk_buff *skb, unsigned int doff,
+ 			length--;
+ 			continue;
+ 		default:
++			if (length < 2)
++				return true;
+ 			opsize = *ptr++;
+ 			if (opsize < 2)
+ 				return true;
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 93a7edcff11e7..0d9baddb9cd49 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -429,7 +429,7 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ 	struct qrtr_sock *ipc;
+ 	struct sk_buff *skb;
+ 	struct qrtr_cb *cb;
+-	unsigned int size;
++	size_t size;
+ 	unsigned int ver;
+ 	size_t hdrlen;
+ 
+diff --git a/net/rds/recv.c b/net/rds/recv.c
+index aba4afe4dfedc..967d115f97efd 100644
+--- a/net/rds/recv.c
++++ b/net/rds/recv.c
+@@ -714,7 +714,7 @@ int rds_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 
+ 		if (rds_cmsg_recv(inc, msg, rs)) {
+ 			ret = -EFAULT;
+-			goto out;
++			break;
+ 		}
+ 		rds_recvmsg_zcookie(rs, msg);
+ 
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 315a5b2f3add8..7ef074c6dd160 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -900,14 +900,19 @@ static int tcf_ct_act_nat(struct sk_buff *skb,
+ 	}
+ 
+ 	err = ct_nat_execute(skb, ct, ctinfo, range, maniptype);
+-	if (err == NF_ACCEPT &&
+-	    ct->status & IPS_SRC_NAT && ct->status & IPS_DST_NAT) {
+-		if (maniptype == NF_NAT_MANIP_SRC)
+-			maniptype = NF_NAT_MANIP_DST;
+-		else
+-			maniptype = NF_NAT_MANIP_SRC;
+-
+-		err = ct_nat_execute(skb, ct, ctinfo, range, maniptype);
++	if (err == NF_ACCEPT && ct->status & IPS_DST_NAT) {
++		if (ct->status & IPS_SRC_NAT) {
++			if (maniptype == NF_NAT_MANIP_SRC)
++				maniptype = NF_NAT_MANIP_DST;
++			else
++				maniptype = NF_NAT_MANIP_SRC;
++
++			err = ct_nat_execute(skb, ct, ctinfo, range,
++					     maniptype);
++		} else if (CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL) {
++			err = ct_nat_execute(skb, ct, ctinfo, NULL,
++					     NF_NAT_MANIP_SRC);
++		}
+ 	}
+ 	return err;
+ #else
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index 7d37638ee1c7a..5c15968b5155b 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -943,7 +943,7 @@ static struct tcphdr *cake_get_tcphdr(const struct sk_buff *skb,
+ 	}
+ 
+ 	tcph = skb_header_pointer(skb, offset, sizeof(_tcph), &_tcph);
+-	if (!tcph)
++	if (!tcph || tcph->doff < 5)
+ 		return NULL;
+ 
+ 	return skb_header_pointer(skb, offset,
+@@ -967,6 +967,8 @@ static const void *cake_get_tcpopt(const struct tcphdr *tcph,
+ 			length--;
+ 			continue;
+ 		}
++		if (length < 2)
++			break;
+ 		opsize = *ptr++;
+ 		if (opsize < 2 || opsize > length)
+ 			break;
+@@ -1104,6 +1106,8 @@ static bool cake_tcph_may_drop(const struct tcphdr *tcph,
+ 			length--;
+ 			continue;
+ 		}
++		if (length < 2)
++			break;
+ 		opsize = *ptr++;
+ 		if (opsize < 2 || opsize > length)
+ 			break;
+diff --git a/net/socket.c b/net/socket.c
+index 6e6cccc2104f7..002d5952ae5d8 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -1080,19 +1080,6 @@ static long sock_do_ioctl(struct net *net, struct socket *sock,
+  *	what to do with it - that's up to the protocol still.
+  */
+ 
+-/**
+- *	get_net_ns - increment the refcount of the network namespace
+- *	@ns: common namespace (net)
+- *
+- *	Returns the net's common namespace.
+- */
+-
+-struct ns_common *get_net_ns(struct ns_common *ns)
+-{
+-	return &get_net(container_of(ns, struct net, ns))->ns;
+-}
+-EXPORT_SYMBOL_GPL(get_net_ns);
+-
+ static long sock_ioctl(struct file *file, unsigned cmd, unsigned long arg)
+ {
+ 	struct socket *sock;
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 41c3303c33577..39be4b52329b5 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -535,12 +535,14 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ 	u->path.mnt = NULL;
+ 	state = sk->sk_state;
+ 	sk->sk_state = TCP_CLOSE;
++
++	skpair = unix_peer(sk);
++	unix_peer(sk) = NULL;
++
+ 	unix_state_unlock(sk);
+ 
+ 	wake_up_interruptible_all(&u->peer_wait);
+ 
+-	skpair = unix_peer(sk);
+-
+ 	if (skpair != NULL) {
+ 		if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) {
+ 			unix_state_lock(skpair);
+@@ -555,7 +557,6 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ 
+ 		unix_dgram_peer_wake_disconnect(sk, skpair);
+ 		sock_put(skpair); /* It may now die */
+-		unix_peer(sk) = NULL;
+ 	}
+ 
+ 	/* Try to flush out this socket. Throw out buffers at least */
+diff --git a/net/wireless/Makefile b/net/wireless/Makefile
+index 2eee93985ab0d..af590ae606b69 100644
+--- a/net/wireless/Makefile
++++ b/net/wireless/Makefile
+@@ -28,7 +28,7 @@ $(obj)/shipped-certs.c: $(wildcard $(srctree)/$(src)/certs/*.hex)
+ 	@$(kecho) "  GEN     $@"
+ 	@(echo '#include "reg.h"'; \
+ 	  echo 'const u8 shipped_regdb_certs[] = {'; \
+-	  cat $^ ; \
++	  echo | cat - $^ ; \
+ 	  echo '};'; \
+ 	  echo 'unsigned int shipped_regdb_certs_len = sizeof(shipped_regdb_certs);'; \
+ 	 ) > $@
+diff --git a/net/wireless/pmsr.c b/net/wireless/pmsr.c
+index a95c79d183492..a817d8e3e4b36 100644
+--- a/net/wireless/pmsr.c
++++ b/net/wireless/pmsr.c
+@@ -324,6 +324,7 @@ void cfg80211_pmsr_complete(struct wireless_dev *wdev,
+ 			    gfp_t gfp)
+ {
+ 	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
++	struct cfg80211_pmsr_request *tmp, *prev, *to_free = NULL;
+ 	struct sk_buff *msg;
+ 	void *hdr;
+ 
+@@ -354,9 +355,20 @@ free_msg:
+ 	nlmsg_free(msg);
+ free_request:
+ 	spin_lock_bh(&wdev->pmsr_lock);
+-	list_del(&req->list);
++	/*
++	 * cfg80211_pmsr_process_abort() may have already moved this request
++	 * to the free list, and will free it later. In this case, don't free
++	 * it here.
++	 */
++	list_for_each_entry_safe(tmp, prev, &wdev->pmsr_list, list) {
++		if (tmp == req) {
++			list_del(&req->list);
++			to_free = req;
++			break;
++		}
++	}
+ 	spin_unlock_bh(&wdev->pmsr_lock);
+-	kfree(req);
++	kfree(to_free);
+ }
+ EXPORT_SYMBOL_GPL(cfg80211_pmsr_complete);
+ 
+diff --git a/sound/soc/codecs/rt5659.c b/sound/soc/codecs/rt5659.c
+index 91a4ef7f620ca..a9b079d56fd69 100644
+--- a/sound/soc/codecs/rt5659.c
++++ b/sound/soc/codecs/rt5659.c
+@@ -2433,13 +2433,18 @@ static int set_dmic_power(struct snd_soc_dapm_widget *w,
+ 	return 0;
+ }
+ 
+-static const struct snd_soc_dapm_widget rt5659_dapm_widgets[] = {
++static const struct snd_soc_dapm_widget rt5659_particular_dapm_widgets[] = {
+ 	SND_SOC_DAPM_SUPPLY("LDO2", RT5659_PWR_ANLG_3, RT5659_PWR_LDO2_BIT, 0,
+ 		NULL, 0),
+-	SND_SOC_DAPM_SUPPLY("PLL", RT5659_PWR_ANLG_3, RT5659_PWR_PLL_BIT, 0,
+-		NULL, 0),
++	SND_SOC_DAPM_SUPPLY("MICBIAS1", RT5659_PWR_ANLG_2, RT5659_PWR_MB1_BIT,
++		0, NULL, 0),
+ 	SND_SOC_DAPM_SUPPLY("Mic Det Power", RT5659_PWR_VOL,
+ 		RT5659_PWR_MIC_DET_BIT, 0, NULL, 0),
++};
++
++static const struct snd_soc_dapm_widget rt5659_dapm_widgets[] = {
++	SND_SOC_DAPM_SUPPLY("PLL", RT5659_PWR_ANLG_3, RT5659_PWR_PLL_BIT, 0,
++		NULL, 0),
+ 	SND_SOC_DAPM_SUPPLY("Mono Vref", RT5659_PWR_ANLG_1,
+ 		RT5659_PWR_VREF3_BIT, 0, NULL, 0),
+ 
+@@ -2464,8 +2469,6 @@ static const struct snd_soc_dapm_widget rt5659_dapm_widgets[] = {
+ 		RT5659_ADC_MONO_R_ASRC_SFT, 0, NULL, 0),
+ 
+ 	/* Input Side */
+-	SND_SOC_DAPM_SUPPLY("MICBIAS1", RT5659_PWR_ANLG_2, RT5659_PWR_MB1_BIT,
+-		0, NULL, 0),
+ 	SND_SOC_DAPM_SUPPLY("MICBIAS2", RT5659_PWR_ANLG_2, RT5659_PWR_MB2_BIT,
+ 		0, NULL, 0),
+ 	SND_SOC_DAPM_SUPPLY("MICBIAS3", RT5659_PWR_ANLG_2, RT5659_PWR_MB3_BIT,
+@@ -3660,10 +3663,23 @@ static int rt5659_set_bias_level(struct snd_soc_component *component,
+ 
+ static int rt5659_probe(struct snd_soc_component *component)
+ {
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
+ 	struct rt5659_priv *rt5659 = snd_soc_component_get_drvdata(component);
+ 
+ 	rt5659->component = component;
+ 
++	switch (rt5659->pdata.jd_src) {
++	case RT5659_JD_HDA_HEADER:
++		break;
++
++	default:
++		snd_soc_dapm_new_controls(dapm,
++			rt5659_particular_dapm_widgets,
++			ARRAY_SIZE(rt5659_particular_dapm_widgets));
++		break;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/codecs/rt5682-sdw.c b/sound/soc/codecs/rt5682-sdw.c
+index 58fb13132602c..aa6c325faeab2 100644
+--- a/sound/soc/codecs/rt5682-sdw.c
++++ b/sound/soc/codecs/rt5682-sdw.c
+@@ -455,7 +455,8 @@ static int rt5682_io_init(struct device *dev, struct sdw_slave *slave)
+ 
+ 	regmap_update_bits(rt5682->regmap, RT5682_CBJ_CTRL_2,
+ 		RT5682_EXT_JD_SRC, RT5682_EXT_JD_SRC_MANUAL);
+-	regmap_write(rt5682->regmap, RT5682_CBJ_CTRL_1, 0xd042);
++	regmap_write(rt5682->regmap, RT5682_CBJ_CTRL_1, 0xd142);
++	regmap_update_bits(rt5682->regmap, RT5682_CBJ_CTRL_5, 0x0700, 0x0600);
+ 	regmap_update_bits(rt5682->regmap, RT5682_CBJ_CTRL_3,
+ 		RT5682_CBJ_IN_BUF_EN, RT5682_CBJ_IN_BUF_EN);
+ 	regmap_update_bits(rt5682->regmap, RT5682_SAR_IL_CMD_1,
+diff --git a/sound/soc/codecs/tas2562.h b/sound/soc/codecs/tas2562.h
+index 81866aeb3fbfa..55b2a1f52ca37 100644
+--- a/sound/soc/codecs/tas2562.h
++++ b/sound/soc/codecs/tas2562.h
+@@ -57,13 +57,13 @@
+ #define TAS2562_TDM_CFG0_RAMPRATE_MASK		BIT(5)
+ #define TAS2562_TDM_CFG0_RAMPRATE_44_1		BIT(5)
+ #define TAS2562_TDM_CFG0_SAMPRATE_MASK		GENMASK(3, 1)
+-#define TAS2562_TDM_CFG0_SAMPRATE_7305_8KHZ	0x0
+-#define TAS2562_TDM_CFG0_SAMPRATE_14_7_16KHZ	0x1
+-#define TAS2562_TDM_CFG0_SAMPRATE_22_05_24KHZ	0x2
+-#define TAS2562_TDM_CFG0_SAMPRATE_29_4_32KHZ	0x3
+-#define TAS2562_TDM_CFG0_SAMPRATE_44_1_48KHZ	0x4
+-#define TAS2562_TDM_CFG0_SAMPRATE_88_2_96KHZ	0x5
+-#define TAS2562_TDM_CFG0_SAMPRATE_176_4_192KHZ	0x6
++#define TAS2562_TDM_CFG0_SAMPRATE_7305_8KHZ	(0x0 << 1)
++#define TAS2562_TDM_CFG0_SAMPRATE_14_7_16KHZ	(0x1 << 1)
++#define TAS2562_TDM_CFG0_SAMPRATE_22_05_24KHZ	(0x2 << 1)
++#define TAS2562_TDM_CFG0_SAMPRATE_29_4_32KHZ	(0x3 << 1)
++#define TAS2562_TDM_CFG0_SAMPRATE_44_1_48KHZ	(0x4 << 1)
++#define TAS2562_TDM_CFG0_SAMPRATE_88_2_96KHZ	(0x5 << 1)
++#define TAS2562_TDM_CFG0_SAMPRATE_176_4_192KHZ	(0x6 << 1)
+ 
+ #define TAS2562_TDM_CFG2_RIGHT_JUSTIFY	BIT(6)
+ 
+diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c
+index a2dd3b6b7fec1..7cd14d6b9436a 100644
+--- a/sound/soc/fsl/fsl-asoc-card.c
++++ b/sound/soc/fsl/fsl-asoc-card.c
+@@ -720,6 +720,7 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ 	/* Initialize sound card */
+ 	priv->pdev = pdev;
+ 	priv->card.dev = &pdev->dev;
++	priv->card.owner = THIS_MODULE;
+ 	ret = snd_soc_of_parse_card_name(&priv->card, "model");
+ 	if (ret) {
+ 		snprintf(priv->name, sizeof(priv->name), "%s-audio",
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index 7a30a12519a70..e620a62ef534f 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -93,8 +93,30 @@ static void lpass_cpu_daiops_shutdown(struct snd_pcm_substream *substream,
+ 		struct snd_soc_dai *dai)
+ {
+ 	struct lpass_data *drvdata = snd_soc_dai_get_drvdata(dai);
++	struct lpaif_i2sctl *i2sctl = drvdata->i2sctl;
++	unsigned int id = dai->driver->id;
+ 
+ 	clk_disable_unprepare(drvdata->mi2s_osr_clk[dai->driver->id]);
++	/*
++	 * Ensure LRCLK is disabled even in device node validation.
++	 * Will not impact if disabled in lpass_cpu_daiops_trigger()
++	 * suspend.
++	 */
++	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
++		regmap_fields_write(i2sctl->spken, id, LPAIF_I2SCTL_SPKEN_DISABLE);
++	else
++		regmap_fields_write(i2sctl->micen, id, LPAIF_I2SCTL_MICEN_DISABLE);
++
++	/*
++	 * BCLK may not be enabled if lpass_cpu_daiops_prepare is called before
++	 * lpass_cpu_daiops_shutdown. It's paired with the clk_enable in
++	 * lpass_cpu_daiops_prepare.
++	 */
++	if (drvdata->mi2s_was_prepared[dai->driver->id]) {
++		drvdata->mi2s_was_prepared[dai->driver->id] = false;
++		clk_disable(drvdata->mi2s_bit_clk[dai->driver->id]);
++	}
++
+ 	clk_unprepare(drvdata->mi2s_bit_clk[dai->driver->id]);
+ }
+ 
+@@ -275,6 +297,18 @@ static int lpass_cpu_daiops_trigger(struct snd_pcm_substream *substream,
+ 	case SNDRV_PCM_TRIGGER_START:
+ 	case SNDRV_PCM_TRIGGER_RESUME:
+ 	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
++		/*
++		 * Ensure lpass BCLK/LRCLK is enabled during
++		 * device resume as lpass_cpu_daiops_prepare() is not called
++		 * after the device resumes. We don't check mi2s_was_prepared before
++		 * enable/disable BCLK in trigger events because:
++		 *  1. These trigger events are paired, so the BCLK
++		 *     enable_count is balanced.
++		 *  2. the BCLK can be shared (ex: headset and headset mic),
++		 *     we need to increase the enable_count so that we don't
++		 *     turn off the shared BCLK while other devices are using
++		 *     it.
++		 */
+ 		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ 			ret = regmap_fields_write(i2sctl->spken, id,
+ 						 LPAIF_I2SCTL_SPKEN_ENABLE);
+@@ -296,6 +330,10 @@ static int lpass_cpu_daiops_trigger(struct snd_pcm_substream *substream,
+ 	case SNDRV_PCM_TRIGGER_STOP:
+ 	case SNDRV_PCM_TRIGGER_SUSPEND:
+ 	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
++		/*
++		 * To ensure lpass BCLK/LRCLK is disabled during
++		 * device suspend.
++		 */
+ 		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ 			ret = regmap_fields_write(i2sctl->spken, id,
+ 						 LPAIF_I2SCTL_SPKEN_DISABLE);
+@@ -315,12 +353,53 @@ static int lpass_cpu_daiops_trigger(struct snd_pcm_substream *substream,
+ 	return ret;
+ }
+ 
++static int lpass_cpu_daiops_prepare(struct snd_pcm_substream *substream,
++		struct snd_soc_dai *dai)
++{
++	struct lpass_data *drvdata = snd_soc_dai_get_drvdata(dai);
++	struct lpaif_i2sctl *i2sctl = drvdata->i2sctl;
++	unsigned int id = dai->driver->id;
++	int ret;
++
++	/*
++	 * Ensure lpass BCLK/LRCLK is enabled bit before playback/capture
++	 * data flow starts. This allows other codec to have some delay before
++	 * the data flow.
++	 * (ex: to drop start up pop noise before capture starts).
++	 */
++	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
++		ret = regmap_fields_write(i2sctl->spken, id, LPAIF_I2SCTL_SPKEN_ENABLE);
++	else
++		ret = regmap_fields_write(i2sctl->micen, id, LPAIF_I2SCTL_MICEN_ENABLE);
++
++	if (ret) {
++		dev_err(dai->dev, "error writing to i2sctl reg: %d\n", ret);
++		return ret;
++	}
++
++	/*
++	 * Check mi2s_was_prepared before enabling BCLK as lpass_cpu_daiops_prepare can
++	 * be called multiple times. It's paired with the clk_disable in
++	 * lpass_cpu_daiops_shutdown.
++	 */
++	if (!drvdata->mi2s_was_prepared[dai->driver->id]) {
++		ret = clk_enable(drvdata->mi2s_bit_clk[id]);
++		if (ret) {
++			dev_err(dai->dev, "error in enabling mi2s bit clk: %d\n", ret);
++			return ret;
++		}
++		drvdata->mi2s_was_prepared[dai->driver->id] = true;
++	}
++	return 0;
++}
++
+ const struct snd_soc_dai_ops asoc_qcom_lpass_cpu_dai_ops = {
+ 	.set_sysclk	= lpass_cpu_daiops_set_sysclk,
+ 	.startup	= lpass_cpu_daiops_startup,
+ 	.shutdown	= lpass_cpu_daiops_shutdown,
+ 	.hw_params	= lpass_cpu_daiops_hw_params,
+ 	.trigger	= lpass_cpu_daiops_trigger,
++	.prepare	= lpass_cpu_daiops_prepare,
+ };
+ EXPORT_SYMBOL_GPL(asoc_qcom_lpass_cpu_dai_ops);
+ 
+diff --git a/sound/soc/qcom/lpass.h b/sound/soc/qcom/lpass.h
+index 1d926dd5f5900..0484ad39b3dce 100644
+--- a/sound/soc/qcom/lpass.h
++++ b/sound/soc/qcom/lpass.h
+@@ -67,6 +67,10 @@ struct lpass_data {
+ 	/* MI2S SD lines to use for playback/capture */
+ 	unsigned int mi2s_playback_sd_mode[LPASS_MAX_MI2S_PORTS];
+ 	unsigned int mi2s_capture_sd_mode[LPASS_MAX_MI2S_PORTS];
++
++	/* The state of MI2S prepare dai_ops was called */
++	bool mi2s_was_prepared[LPASS_MAX_MI2S_PORTS];
++
+ 	int hdmi_port_enable;
+ 
+ 	/* low-power audio interface (LPAIF) registers */
+diff --git a/tools/include/uapi/linux/in.h b/tools/include/uapi/linux/in.h
+index 7d6687618d808..d1b327036ae43 100644
+--- a/tools/include/uapi/linux/in.h
++++ b/tools/include/uapi/linux/in.h
+@@ -289,6 +289,9 @@ struct sockaddr_in {
+ /* Address indicating an error return. */
+ #define	INADDR_NONE		((unsigned long int) 0xffffffff)
+ 
++/* Dummy address for src of ICMP replies if no real address is set (RFC7600). */
++#define	INADDR_DUMMY		((unsigned long int) 0xc0000008)
++
+ /* Network number for local host loopback. */
+ #define	IN_LOOPBACKNET		127
+ 
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index 7150e34cf2afb..3028f932e10c0 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -779,7 +779,7 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
+ 			goto out_put_ctx;
+ 		}
+ 		if (xsk->fd == umem->fd)
+-			umem->rx_ring_setup_done = true;
++			umem->tx_ring_setup_done = true;
+ 	}
+ 
+ 	err = xsk_get_mmap_offsets(xsk->fd, &off);
+diff --git a/tools/perf/trace/beauty/include/linux/socket.h b/tools/perf/trace/beauty/include/linux/socket.h
+index e9cb30d8cbfb1..9aa530d497da8 100644
+--- a/tools/perf/trace/beauty/include/linux/socket.h
++++ b/tools/perf/trace/beauty/include/linux/socket.h
+@@ -437,6 +437,4 @@ extern int __sys_getpeername(int fd, struct sockaddr __user *usockaddr,
+ extern int __sys_socketpair(int family, int type, int protocol,
+ 			    int __user *usockvec);
+ extern int __sys_shutdown(int fd, int how);
+-
+-extern struct ns_common *get_net_ns(struct ns_common *ns);
+ #endif /* _LINUX_SOCKET_H */
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index 2b5707738609e..6fad54c7ecb4a 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -1384,12 +1384,37 @@ ipv4_rt_replace()
+ 	ipv4_rt_replace_mpath
+ }
+ 
++# checks that cached input route on VRF port is deleted
++# when VRF is deleted
++ipv4_local_rt_cache()
++{
++	run_cmd "ip addr add 10.0.0.1/32 dev lo"
++	run_cmd "ip netns add test-ns"
++	run_cmd "ip link add veth-outside type veth peer name veth-inside"
++	run_cmd "ip link add vrf-100 type vrf table 1100"
++	run_cmd "ip link set veth-outside master vrf-100"
++	run_cmd "ip link set veth-inside netns test-ns"
++	run_cmd "ip link set veth-outside up"
++	run_cmd "ip link set vrf-100 up"
++	run_cmd "ip route add 10.1.1.1/32 dev veth-outside table 1100"
++	run_cmd "ip netns exec test-ns ip link set veth-inside up"
++	run_cmd "ip netns exec test-ns ip addr add 10.1.1.1/32 dev veth-inside"
++	run_cmd "ip netns exec test-ns ip route add 10.0.0.1/32 dev veth-inside"
++	run_cmd "ip netns exec test-ns ip route add default via 10.0.0.1"
++	run_cmd "ip netns exec test-ns ping 10.0.0.1 -c 1 -i 1"
++	run_cmd "ip link delete vrf-100"
++
++	# if we do not hang test is a success
++	log_test $? 0 "Cached route removed from VRF port device"
++}
++
+ ipv4_route_test()
+ {
+ 	route_setup
+ 
+ 	ipv4_rt_add
+ 	ipv4_rt_replace
++	ipv4_local_rt_cache
+ 
+ 	route_cleanup
+ }
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+index e927df83efb91..987a914ee0df2 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+@@ -195,9 +195,6 @@ ip -net "$ns4" link set ns4eth3 up
+ ip -net "$ns4" route add default via 10.0.3.2
+ ip -net "$ns4" route add default via dead:beef:3::2
+ 
+-# use TCP syn cookies, even if no flooding was detected.
+-ip netns exec "$ns2" sysctl -q net.ipv4.tcp_syncookies=2
+-
+ set_ethtool_flags() {
+ 	local ns="$1"
+ 	local dev="$2"
+@@ -666,6 +663,14 @@ for sender in $ns1 $ns2 $ns3 $ns4;do
+ 		exit $ret
+ 	fi
+ 
++	# ns1<->ns2 is not subject to reordering/tc delays. Use it to test
++	# mptcp syncookie support.
++	if [ $sender = $ns1 ]; then
++		ip netns exec "$ns2" sysctl -q net.ipv4.tcp_syncookies=2
++	else
++		ip netns exec "$ns2" sysctl -q net.ipv4.tcp_syncookies=1
++	fi
++
+ 	run_tests "$ns2" $sender 10.0.1.2
+ 	run_tests "$ns2" $sender dead:beef:1::2
+ 	run_tests "$ns2" $sender 10.0.2.1


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-06-30 14:23 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-06-30 14:23 UTC (permalink / raw
  To: gentoo-commits

commit:     82457685705b0a1d80b955f129f8502a94616833
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 30 14:23:21 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 30 14:23:21 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=82457685

Linuxpatch 5.10.47

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1046_linux-5.10.47.patch | 4107 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4111 insertions(+)

diff --git a/0000_README b/0000_README
index 6abe7e2..9ae25ca 100644
--- a/0000_README
+++ b/0000_README
@@ -227,6 +227,10 @@ Patch:  1045_linux-5.10.46.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.46
 
+Patch:  1046_linux-5.10.47.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.47
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1046_linux-5.10.47.patch b/1046_linux-5.10.47.patch
new file mode 100644
index 0000000..168bbc5
--- /dev/null
+++ b/1046_linux-5.10.47.patch
@@ -0,0 +1,4107 @@
+diff --git a/Makefile b/Makefile
+index 7ab22f105a032..fb2937bca41b3 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 46
++SUBLEVEL = 47
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
+index f90479d8b50c8..b06602cea99c7 100644
+--- a/arch/arm/kernel/setup.c
++++ b/arch/arm/kernel/setup.c
+@@ -544,9 +544,11 @@ void notrace cpu_init(void)
+ 	 * In Thumb-2, msr with an immediate value is not allowed.
+ 	 */
+ #ifdef CONFIG_THUMB2_KERNEL
+-#define PLC	"r"
++#define PLC_l	"l"
++#define PLC_r	"r"
+ #else
+-#define PLC	"I"
++#define PLC_l	"I"
++#define PLC_r	"I"
+ #endif
+ 
+ 	/*
+@@ -568,15 +570,15 @@ void notrace cpu_init(void)
+ 	"msr	cpsr_c, %9"
+ 	    :
+ 	    : "r" (stk),
+-	      PLC (PSR_F_BIT | PSR_I_BIT | IRQ_MODE),
++	      PLC_r (PSR_F_BIT | PSR_I_BIT | IRQ_MODE),
+ 	      "I" (offsetof(struct stack, irq[0])),
+-	      PLC (PSR_F_BIT | PSR_I_BIT | ABT_MODE),
++	      PLC_r (PSR_F_BIT | PSR_I_BIT | ABT_MODE),
+ 	      "I" (offsetof(struct stack, abt[0])),
+-	      PLC (PSR_F_BIT | PSR_I_BIT | UND_MODE),
++	      PLC_r (PSR_F_BIT | PSR_I_BIT | UND_MODE),
+ 	      "I" (offsetof(struct stack, und[0])),
+-	      PLC (PSR_F_BIT | PSR_I_BIT | FIQ_MODE),
++	      PLC_r (PSR_F_BIT | PSR_I_BIT | FIQ_MODE),
+ 	      "I" (offsetof(struct stack, fiq[0])),
+-	      PLC (PSR_F_BIT | PSR_I_BIT | SVC_MODE)
++	      PLC_l (PSR_F_BIT | PSR_I_BIT | SVC_MODE)
+ 	    : "r14");
+ #endif
+ }
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index a985d292e8203..c0a7f0d90b39d 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -174,14 +174,21 @@ static void __init reserve_elfcorehdr(void)
+ #endif /* CONFIG_CRASH_DUMP */
+ 
+ /*
+- * Return the maximum physical address for a zone with a given address size
+- * limit. It currently assumes that for memory starting above 4G, 32-bit
+- * devices will use a DMA offset.
++ * Return the maximum physical address for a zone accessible by the given bits
++ * limit. If DRAM starts above 32-bit, expand the zone to the maximum
++ * available memory, otherwise cap it at 32-bit.
+  */
+ static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
+ {
+-	phys_addr_t offset = memblock_start_of_DRAM() & GENMASK_ULL(63, zone_bits);
+-	return min(offset + (1ULL << zone_bits), memblock_end_of_DRAM());
++	phys_addr_t zone_mask = DMA_BIT_MASK(zone_bits);
++	phys_addr_t phys_start = memblock_start_of_DRAM();
++
++	if (phys_start > U32_MAX)
++		zone_mask = PHYS_ADDR_MAX;
++	else if (phys_start > zone_mask)
++		zone_mask = U32_MAX;
++
++	return min(zone_mask, memblock_end_of_DRAM() - 1) + 1;
+ }
+ 
+ static void __init zone_sizes_init(unsigned long min, unsigned long max)
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index afdad76078506..58dc93e566179 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -469,6 +469,21 @@ void __init mark_linear_text_alias_ro(void)
+ 			    PAGE_KERNEL_RO);
+ }
+ 
++static bool crash_mem_map __initdata;
++
++static int __init enable_crash_mem_map(char *arg)
++{
++	/*
++	 * Proper parameter parsing is done by reserve_crashkernel(). We only
++	 * need to know if the linear map has to avoid block mappings so that
++	 * the crashkernel reservations can be unmapped later.
++	 */
++	crash_mem_map = true;
++
++	return 0;
++}
++early_param("crashkernel", enable_crash_mem_map);
++
+ static void __init map_mem(pgd_t *pgdp)
+ {
+ 	phys_addr_t kernel_start = __pa_symbol(_text);
+@@ -477,7 +492,7 @@ static void __init map_mem(pgd_t *pgdp)
+ 	int flags = 0;
+ 	u64 i;
+ 
+-	if (rodata_full || debug_pagealloc_enabled())
++	if (rodata_full || crash_mem_map || debug_pagealloc_enabled())
+ 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+ 
+ 	/*
+@@ -487,11 +502,6 @@ static void __init map_mem(pgd_t *pgdp)
+ 	 * the following for-loop
+ 	 */
+ 	memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
+-#ifdef CONFIG_KEXEC_CORE
+-	if (crashk_res.end)
+-		memblock_mark_nomap(crashk_res.start,
+-				    resource_size(&crashk_res));
+-#endif
+ 
+ 	/* map all the memory banks */
+ 	for_each_mem_range(i, &start, &end) {
+@@ -519,21 +529,6 @@ static void __init map_mem(pgd_t *pgdp)
+ 	__map_memblock(pgdp, kernel_start, kernel_end,
+ 		       PAGE_KERNEL, NO_CONT_MAPPINGS);
+ 	memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
+-
+-#ifdef CONFIG_KEXEC_CORE
+-	/*
+-	 * Use page-level mappings here so that we can shrink the region
+-	 * in page granularity and put back unused memory to buddy system
+-	 * through /sys/kernel/kexec_crash_size interface.
+-	 */
+-	if (crashk_res.end) {
+-		__map_memblock(pgdp, crashk_res.start, crashk_res.end + 1,
+-			       PAGE_KERNEL,
+-			       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
+-		memblock_clear_nomap(crashk_res.start,
+-				     resource_size(&crashk_res));
+-	}
+-#endif
+ }
+ 
+ void mark_rodata_ro(void)
+diff --git a/arch/mips/generic/board-boston.its.S b/arch/mips/generic/board-boston.its.S
+index a7f51f97b9102..c45ad27594218 100644
+--- a/arch/mips/generic/board-boston.its.S
++++ b/arch/mips/generic/board-boston.its.S
+@@ -1,22 +1,22 @@
+ / {
+ 	images {
+-		fdt@boston {
++		fdt-boston {
+ 			description = "img,boston Device Tree";
+ 			data = /incbin/("boot/dts/img/boston.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+ 	};
+ 
+ 	configurations {
+-		conf@boston {
++		conf-boston {
+ 			description = "Boston Linux kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@boston";
++			kernel = "kernel";
++			fdt = "fdt-boston";
+ 		};
+ 	};
+ };
+diff --git a/arch/mips/generic/board-ni169445.its.S b/arch/mips/generic/board-ni169445.its.S
+index e4cb4f95a8cc1..0a2e8f7a8526f 100644
+--- a/arch/mips/generic/board-ni169445.its.S
++++ b/arch/mips/generic/board-ni169445.its.S
+@@ -1,22 +1,22 @@
+ / {
+ 	images {
+-		fdt@ni169445 {
++		fdt-ni169445 {
+ 			description = "NI 169445 device tree";
+ 			data = /incbin/("boot/dts/ni/169445.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+ 	};
+ 
+ 	configurations {
+-		conf@ni169445 {
++		conf-ni169445 {
+ 			description = "NI 169445 Linux Kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@ni169445";
++			kernel = "kernel";
++			fdt = "fdt-ni169445";
+ 		};
+ 	};
+ };
+diff --git a/arch/mips/generic/board-ocelot.its.S b/arch/mips/generic/board-ocelot.its.S
+index 3da23988149a6..8c7e3a1b68d3d 100644
+--- a/arch/mips/generic/board-ocelot.its.S
++++ b/arch/mips/generic/board-ocelot.its.S
+@@ -1,40 +1,40 @@
+ /* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
+ / {
+ 	images {
+-		fdt@ocelot_pcb123 {
++		fdt-ocelot_pcb123 {
+ 			description = "MSCC Ocelot PCB123 Device Tree";
+ 			data = /incbin/("boot/dts/mscc/ocelot_pcb123.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+ 
+-		fdt@ocelot_pcb120 {
++		fdt-ocelot_pcb120 {
+ 			description = "MSCC Ocelot PCB120 Device Tree";
+ 			data = /incbin/("boot/dts/mscc/ocelot_pcb120.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+ 	};
+ 
+ 	configurations {
+-		conf@ocelot_pcb123 {
++		conf-ocelot_pcb123 {
+ 			description = "Ocelot Linux kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@ocelot_pcb123";
++			kernel = "kernel";
++			fdt = "fdt-ocelot_pcb123";
+ 		};
+ 
+-		conf@ocelot_pcb120 {
++		conf-ocelot_pcb120 {
+ 			description = "Ocelot Linux kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@ocelot_pcb120";
++			kernel = "kernel";
++			fdt = "fdt-ocelot_pcb120";
+ 		};
+ 	};
+ };
+diff --git a/arch/mips/generic/board-xilfpga.its.S b/arch/mips/generic/board-xilfpga.its.S
+index a2e773d3f14f4..08c1e900eb4ed 100644
+--- a/arch/mips/generic/board-xilfpga.its.S
++++ b/arch/mips/generic/board-xilfpga.its.S
+@@ -1,22 +1,22 @@
+ / {
+ 	images {
+-		fdt@xilfpga {
++		fdt-xilfpga {
+ 			description = "MIPSfpga (xilfpga) Device Tree";
+ 			data = /incbin/("boot/dts/xilfpga/nexys4ddr.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+ 	};
+ 
+ 	configurations {
+-		conf@xilfpga {
++		conf-xilfpga {
+ 			description = "MIPSfpga Linux kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@xilfpga";
++			kernel = "kernel";
++			fdt = "fdt-xilfpga";
+ 		};
+ 	};
+ };
+diff --git a/arch/mips/generic/vmlinux.its.S b/arch/mips/generic/vmlinux.its.S
+index 1a08438fd8930..3e254676540f4 100644
+--- a/arch/mips/generic/vmlinux.its.S
++++ b/arch/mips/generic/vmlinux.its.S
+@@ -6,7 +6,7 @@
+ 	#address-cells = <ADDR_CELLS>;
+ 
+ 	images {
+-		kernel@0 {
++		kernel {
+ 			description = KERNEL_NAME;
+ 			data = /incbin/(VMLINUX_BINARY);
+ 			type = "kernel";
+@@ -15,18 +15,18 @@
+ 			compression = VMLINUX_COMPRESSION;
+ 			load = /bits/ ADDR_BITS <VMLINUX_LOAD_ADDRESS>;
+ 			entry = /bits/ ADDR_BITS <VMLINUX_ENTRY_ADDRESS>;
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+ 	};
+ 
+ 	configurations {
+-		default = "conf@default";
++		default = "conf-default";
+ 
+-		conf@default {
++		conf-default {
+ 			description = "Generic Linux kernel";
+-			kernel = "kernel@0";
++			kernel = "kernel";
+ 		};
+ 	};
+ };
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index e241e0e85ac81..226c366072da3 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -14,7 +14,7 @@ ifeq ($(CONFIG_DYNAMIC_FTRACE),y)
+ 	LDFLAGS_vmlinux := --no-relax
+ endif
+ 
+-ifeq ($(CONFIG_64BIT)$(CONFIG_CMODEL_MEDLOW),yy)
++ifeq ($(CONFIG_CMODEL_MEDLOW),y)
+ KBUILD_CFLAGS_MODULE += -mcmodel=medany
+ endif
+ 
+diff --git a/arch/s390/include/asm/stacktrace.h b/arch/s390/include/asm/stacktrace.h
+index ee056f4a4fa30..ee582896b6a3f 100644
+--- a/arch/s390/include/asm/stacktrace.h
++++ b/arch/s390/include/asm/stacktrace.h
+@@ -90,12 +90,16 @@ struct stack_frame {
+ 	CALL_ARGS_4(arg1, arg2, arg3, arg4);				\
+ 	register unsigned long r4 asm("6") = (unsigned long)(arg5)
+ 
+-#define CALL_FMT_0 "=&d" (r2) :
+-#define CALL_FMT_1 "+&d" (r2) :
+-#define CALL_FMT_2 CALL_FMT_1 "d" (r3),
+-#define CALL_FMT_3 CALL_FMT_2 "d" (r4),
+-#define CALL_FMT_4 CALL_FMT_3 "d" (r5),
+-#define CALL_FMT_5 CALL_FMT_4 "d" (r6),
++/*
++ * To keep this simple mark register 2-6 as being changed (volatile)
++ * by the called function, even though register 6 is saved/nonvolatile.
++ */
++#define CALL_FMT_0 "=&d" (r2)
++#define CALL_FMT_1 "+&d" (r2)
++#define CALL_FMT_2 CALL_FMT_1, "+&d" (r3)
++#define CALL_FMT_3 CALL_FMT_2, "+&d" (r4)
++#define CALL_FMT_4 CALL_FMT_3, "+&d" (r5)
++#define CALL_FMT_5 CALL_FMT_4, "+&d" (r6)
+ 
+ #define CALL_CLOBBER_5 "0", "1", "14", "cc", "memory"
+ #define CALL_CLOBBER_4 CALL_CLOBBER_5
+@@ -117,7 +121,7 @@ struct stack_frame {
+ 		"	brasl	14,%[_fn]\n"				\
+ 		"	la	15,0(%[_prev])\n"			\
+ 		: [_prev] "=&a" (prev), CALL_FMT_##nr			\
+-		  [_stack] "R" (stack),					\
++		: [_stack] "R" (stack),					\
+ 		  [_bc] "i" (offsetof(struct stack_frame, back_chain)),	\
+ 		  [_frame] "d" (frame),					\
+ 		  [_fn] "X" (fn) : CALL_CLOBBER_##nr);			\
+diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
+index 2e4d91f3feea4..93a3122cd15ff 100644
+--- a/arch/x86/entry/common.c
++++ b/arch/x86/entry/common.c
+@@ -127,8 +127,8 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
+ 		/* User code screwed up. */
+ 		regs->ax = -EFAULT;
+ 
+-		instrumentation_end();
+ 		local_irq_disable();
++		instrumentation_end();
+ 		irqentry_exit_to_user_mode(regs);
+ 		return false;
+ 	}
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index a88c94d656931..e6db1a1f22d7d 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -45,9 +45,11 @@
+ #include "perf_event.h"
+ 
+ struct x86_pmu x86_pmu __read_mostly;
++static struct pmu pmu;
+ 
+ DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events) = {
+ 	.enabled = 1,
++	.pmu = &pmu,
+ };
+ 
+ DEFINE_STATIC_KEY_FALSE(rdpmc_never_available_key);
+@@ -372,10 +374,12 @@ int x86_reserve_hardware(void)
+ 	if (!atomic_inc_not_zero(&pmc_refcount)) {
+ 		mutex_lock(&pmc_reserve_mutex);
+ 		if (atomic_read(&pmc_refcount) == 0) {
+-			if (!reserve_pmc_hardware())
++			if (!reserve_pmc_hardware()) {
+ 				err = -EBUSY;
+-			else
++			} else {
+ 				reserve_ds_buffers();
++				reserve_lbr_buffers();
++			}
+ 		}
+ 		if (!err)
+ 			atomic_inc(&pmc_refcount);
+@@ -710,16 +714,23 @@ void x86_pmu_enable_all(int added)
+ 	}
+ }
+ 
+-static struct pmu pmu;
+-
+ static inline int is_x86_event(struct perf_event *event)
+ {
+ 	return event->pmu == &pmu;
+ }
+ 
+-struct pmu *x86_get_pmu(void)
++struct pmu *x86_get_pmu(unsigned int cpu)
+ {
+-	return &pmu;
++	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
++
++	/*
++	 * All CPUs of the hybrid type have been offline.
++	 * The x86_get_pmu() should not be invoked.
++	 */
++	if (WARN_ON_ONCE(!cpuc->pmu))
++		return &pmu;
++
++	return cpuc->pmu;
+ }
+ /*
+  * Event scheduler state:
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index ee659b5faf714..3b8b8eede1a8a 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4747,7 +4747,7 @@ static void update_tfa_sched(void *ignored)
+ 	 * and if so force schedule out for all event types all contexts
+ 	 */
+ 	if (test_bit(3, cpuc->active_mask))
+-		perf_pmu_resched(x86_get_pmu());
++		perf_pmu_resched(x86_get_pmu(smp_processor_id()));
+ }
+ 
+ static ssize_t show_sysctl_tfa(struct device *cdev,
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 31a7a6566d077..945d470f62d0f 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -2076,7 +2076,7 @@ void __init intel_ds_init(void)
+ 					PERF_SAMPLE_TIME;
+ 				x86_pmu.flags |= PMU_FL_PEBS_ALL;
+ 				pebs_qual = "-baseline";
+-				x86_get_pmu()->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
++				x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
+ 			} else {
+ 				/* Only basic record supported */
+ 				x86_pmu.large_pebs_flags &=
+@@ -2091,7 +2091,7 @@ void __init intel_ds_init(void)
+ 
+ 			if (x86_pmu.intel_cap.pebs_output_pt_available) {
+ 				pr_cont("PEBS-via-PT, ");
+-				x86_get_pmu()->capabilities |= PERF_PMU_CAP_AUX_OUTPUT;
++				x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_AUX_OUTPUT;
+ 			}
+ 
+ 			break;
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index e2b0efcba1017..9c1a013d56822 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -658,7 +658,6 @@ static inline bool branch_user_callstack(unsigned br_sel)
+ 
+ void intel_pmu_lbr_add(struct perf_event *event)
+ {
+-	struct kmem_cache *kmem_cache = event->pmu->task_ctx_cache;
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ 
+ 	if (!x86_pmu.lbr_nr)
+@@ -696,16 +695,11 @@ void intel_pmu_lbr_add(struct perf_event *event)
+ 	perf_sched_cb_inc(event->ctx->pmu);
+ 	if (!cpuc->lbr_users++ && !event->total_time_running)
+ 		intel_pmu_lbr_reset();
+-
+-	if (static_cpu_has(X86_FEATURE_ARCH_LBR) &&
+-	    kmem_cache && !cpuc->lbr_xsave &&
+-	    (cpuc->lbr_users != cpuc->lbr_pebs_users))
+-		cpuc->lbr_xsave = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
+ }
+ 
+ void release_lbr_buffers(void)
+ {
+-	struct kmem_cache *kmem_cache = x86_get_pmu()->task_ctx_cache;
++	struct kmem_cache *kmem_cache;
+ 	struct cpu_hw_events *cpuc;
+ 	int cpu;
+ 
+@@ -714,6 +708,7 @@ void release_lbr_buffers(void)
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		cpuc = per_cpu_ptr(&cpu_hw_events, cpu);
++		kmem_cache = x86_get_pmu(cpu)->task_ctx_cache;
+ 		if (kmem_cache && cpuc->lbr_xsave) {
+ 			kmem_cache_free(kmem_cache, cpuc->lbr_xsave);
+ 			cpuc->lbr_xsave = NULL;
+@@ -721,6 +716,27 @@ void release_lbr_buffers(void)
+ 	}
+ }
+ 
++void reserve_lbr_buffers(void)
++{
++	struct kmem_cache *kmem_cache;
++	struct cpu_hw_events *cpuc;
++	int cpu;
++
++	if (!static_cpu_has(X86_FEATURE_ARCH_LBR))
++		return;
++
++	for_each_possible_cpu(cpu) {
++		cpuc = per_cpu_ptr(&cpu_hw_events, cpu);
++		kmem_cache = x86_get_pmu(cpu)->task_ctx_cache;
++		if (!kmem_cache || cpuc->lbr_xsave)
++			continue;
++
++		cpuc->lbr_xsave = kmem_cache_alloc_node(kmem_cache,
++							GFP_KERNEL | __GFP_ZERO,
++							cpu_to_node(cpu));
++	}
++}
++
+ void intel_pmu_lbr_del(struct perf_event *event)
+ {
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+@@ -1609,7 +1625,7 @@ void intel_pmu_lbr_init_hsw(void)
+ 	x86_pmu.lbr_sel_mask = LBR_SEL_MASK;
+ 	x86_pmu.lbr_sel_map  = hsw_lbr_sel_map;
+ 
+-	x86_get_pmu()->task_ctx_cache = create_lbr_kmem_cache(size, 0);
++	x86_get_pmu(smp_processor_id())->task_ctx_cache = create_lbr_kmem_cache(size, 0);
+ 
+ 	if (lbr_from_signext_quirk_needed())
+ 		static_branch_enable(&lbr_from_quirk_key);
+@@ -1629,7 +1645,7 @@ __init void intel_pmu_lbr_init_skl(void)
+ 	x86_pmu.lbr_sel_mask = LBR_SEL_MASK;
+ 	x86_pmu.lbr_sel_map  = hsw_lbr_sel_map;
+ 
+-	x86_get_pmu()->task_ctx_cache = create_lbr_kmem_cache(size, 0);
++	x86_get_pmu(smp_processor_id())->task_ctx_cache = create_lbr_kmem_cache(size, 0);
+ 
+ 	/*
+ 	 * SW branch filter usage:
+@@ -1726,7 +1742,7 @@ static bool is_arch_lbr_xsave_available(void)
+ 
+ void __init intel_pmu_arch_lbr_init(void)
+ {
+-	struct pmu *pmu = x86_get_pmu();
++	struct pmu *pmu = x86_get_pmu(smp_processor_id());
+ 	union cpuid28_eax eax;
+ 	union cpuid28_ebx ebx;
+ 	union cpuid28_ecx ecx;
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 6a8edfe59b09c..f07d77cffb3c6 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -326,6 +326,8 @@ struct cpu_hw_events {
+ 	int				n_pair; /* Large increment events */
+ 
+ 	void				*kfree_on_online[X86_PERF_KFREE_MAX];
++
++	struct pmu			*pmu;
+ };
+ 
+ #define __EVENT_CONSTRAINT_RANGE(c, e, n, m, w, o, f) {	\
+@@ -897,7 +899,7 @@ static struct perf_pmu_events_ht_attr event_attr_##v = {		\
+ 	.event_str_ht	= ht,						\
+ }
+ 
+-struct pmu *x86_get_pmu(void);
++struct pmu *x86_get_pmu(unsigned int cpu);
+ extern struct x86_pmu x86_pmu __read_mostly;
+ 
+ static __always_inline struct x86_perf_task_context_opt *task_context_opt(void *ctx)
+@@ -1122,6 +1124,8 @@ void reserve_ds_buffers(void);
+ 
+ void release_lbr_buffers(void);
+ 
++void reserve_lbr_buffers(void);
++
+ extern struct event_constraint bts_constraint;
+ extern struct event_constraint vlbr_constraint;
+ 
+@@ -1267,6 +1271,10 @@ static inline void release_lbr_buffers(void)
+ {
+ }
+ 
++static inline void reserve_lbr_buffers(void)
++{
++}
++
+ static inline int intel_pmu_init(void)
+ {
+ 	return 0;
+diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
+index fdee23ea4e173..16bf4d4a8159e 100644
+--- a/arch/x86/include/asm/fpu/internal.h
++++ b/arch/x86/include/asm/fpu/internal.h
+@@ -204,6 +204,14 @@ static inline void copy_fxregs_to_kernel(struct fpu *fpu)
+ 		asm volatile("fxsaveq %[fx]" : [fx] "=m" (fpu->state.fxsave));
+ }
+ 
++static inline void fxsave(struct fxregs_state *fx)
++{
++	if (IS_ENABLED(CONFIG_X86_32))
++		asm volatile( "fxsave %[fx]" : [fx] "=m" (*fx));
++	else
++		asm volatile("fxsaveq %[fx]" : [fx] "=m" (*fx));
++}
++
+ /* These macros all use (%edi)/(%rdi) as the single memory argument. */
+ #define XSAVE		".byte " REX_PREFIX "0x0f,0xae,0x27"
+ #define XSAVEOPT	".byte " REX_PREFIX "0x0f,0xae,0x37"
+@@ -268,28 +276,6 @@ static inline void copy_fxregs_to_kernel(struct fpu *fpu)
+ 		     : "D" (st), "m" (*st), "a" (lmask), "d" (hmask)	\
+ 		     : "memory")
+ 
+-/*
+- * This function is called only during boot time when x86 caps are not set
+- * up and alternative can not be used yet.
+- */
+-static inline void copy_xregs_to_kernel_booting(struct xregs_state *xstate)
+-{
+-	u64 mask = xfeatures_mask_all;
+-	u32 lmask = mask;
+-	u32 hmask = mask >> 32;
+-	int err;
+-
+-	WARN_ON(system_state != SYSTEM_BOOTING);
+-
+-	if (boot_cpu_has(X86_FEATURE_XSAVES))
+-		XSTATE_OP(XSAVES, xstate, lmask, hmask, err);
+-	else
+-		XSTATE_OP(XSAVE, xstate, lmask, hmask, err);
+-
+-	/* We should never fault when copying to a kernel buffer: */
+-	WARN_ON_FPU(err);
+-}
+-
+ /*
+  * This function is called only during boot time when x86 caps are not set
+  * up and alternative can not be used yet.
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index ec3ae30547920..b7b92cdf3add4 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -221,28 +221,18 @@ sanitize_restored_user_xstate(union fpregs_state *state,
+ 
+ 	if (use_xsave()) {
+ 		/*
+-		 * Note: we don't need to zero the reserved bits in the
+-		 * xstate_header here because we either didn't copy them at all,
+-		 * or we checked earlier that they aren't set.
++		 * Clear all feature bits which are not set in
++		 * user_xfeatures and clear all extended features
++		 * for fx_only mode.
+ 		 */
++		u64 mask = fx_only ? XFEATURE_MASK_FPSSE : user_xfeatures;
+ 
+ 		/*
+-		 * 'user_xfeatures' might have bits clear which are
+-		 * set in header->xfeatures. This represents features that
+-		 * were in init state prior to a signal delivery, and need
+-		 * to be reset back to the init state.  Clear any user
+-		 * feature bits which are set in the kernel buffer to get
+-		 * them back to the init state.
+-		 *
+-		 * Supervisor state is unchanged by input from userspace.
+-		 * Ensure supervisor state bits stay set and supervisor
+-		 * state is not modified.
++		 * Supervisor state has to be preserved. The sigframe
++		 * restore can only modify user features, i.e. @mask
++		 * cannot contain them.
+ 		 */
+-		if (fx_only)
+-			header->xfeatures = XFEATURE_MASK_FPSSE;
+-		else
+-			header->xfeatures &= user_xfeatures |
+-					     xfeatures_mask_supervisor();
++		header->xfeatures &= mask | xfeatures_mask_supervisor();
+ 	}
+ 
+ 	if (use_fxsr()) {
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index 67f1a03b9b235..80dcf0417f30b 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -440,6 +440,25 @@ static void __init print_xstate_offset_size(void)
+ 	}
+ }
+ 
++/*
++ * All supported features have either init state all zeros or are
++ * handled in setup_init_fpu() individually. This is an explicit
++ * feature list and does not use XFEATURE_MASK*SUPPORTED to catch
++ * newly added supported features at build time and make people
++ * actually look at the init state for the new feature.
++ */
++#define XFEATURES_INIT_FPSTATE_HANDLED		\
++	(XFEATURE_MASK_FP |			\
++	 XFEATURE_MASK_SSE |			\
++	 XFEATURE_MASK_YMM |			\
++	 XFEATURE_MASK_OPMASK |			\
++	 XFEATURE_MASK_ZMM_Hi256 |		\
++	 XFEATURE_MASK_Hi16_ZMM	 |		\
++	 XFEATURE_MASK_PKRU |			\
++	 XFEATURE_MASK_BNDREGS |		\
++	 XFEATURE_MASK_BNDCSR |			\
++	 XFEATURE_MASK_PASID)
++
+ /*
+  * setup the xstate image representing the init state
+  */
+@@ -447,6 +466,10 @@ static void __init setup_init_fpu_buf(void)
+ {
+ 	static int on_boot_cpu __initdata = 1;
+ 
++	BUILD_BUG_ON((XFEATURE_MASK_USER_SUPPORTED |
++		      XFEATURE_MASK_SUPERVISOR_SUPPORTED) !=
++		     XFEATURES_INIT_FPSTATE_HANDLED);
++
+ 	WARN_ON_FPU(!on_boot_cpu);
+ 	on_boot_cpu = 0;
+ 
+@@ -466,10 +489,22 @@ static void __init setup_init_fpu_buf(void)
+ 	copy_kernel_to_xregs_booting(&init_fpstate.xsave);
+ 
+ 	/*
+-	 * Dump the init state again. This is to identify the init state
+-	 * of any feature which is not represented by all zero's.
++	 * All components are now in init state. Read the state back so
++	 * that init_fpstate contains all non-zero init state. This only
++	 * works with XSAVE, but not with XSAVEOPT and XSAVES because
++	 * those use the init optimization which skips writing data for
++	 * components in init state.
++	 *
++	 * XSAVE could be used, but that would require to reshuffle the
++	 * data when XSAVES is available because XSAVES uses xstate
++	 * compaction. But doing so is a pointless exercise because most
++	 * components have an all zeros init state except for the legacy
++	 * ones (FP and SSE). Those can be saved with FXSAVE into the
++	 * legacy area. Adding new features requires to ensure that init
++	 * state is all zeroes or if not to add the necessary handling
++	 * here.
+ 	 */
+-	copy_xregs_to_kernel_booting(&init_fpstate.xsave);
++	fxsave(&init_fpstate.fxsave);
+ }
+ 
+ static int xfeature_uncompacted_offset(int xfeature_nr)
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 16b10b9436dc5..01547bdbfb061 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -130,9 +130,25 @@ static void sev_asid_free(int asid)
+ 	mutex_unlock(&sev_bitmap_lock);
+ }
+ 
+-static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
++static void sev_decommission(unsigned int handle)
+ {
+ 	struct sev_data_decommission *decommission;
++
++	if (!handle)
++		return;
++
++	decommission = kzalloc(sizeof(*decommission), GFP_KERNEL);
++	if (!decommission)
++		return;
++
++	decommission->handle = handle;
++	sev_guest_decommission(decommission, NULL);
++
++	kfree(decommission);
++}
++
++static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
++{
+ 	struct sev_data_deactivate *data;
+ 
+ 	if (!handle)
+@@ -152,15 +168,7 @@ static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
+ 
+ 	kfree(data);
+ 
+-	decommission = kzalloc(sizeof(*decommission), GFP_KERNEL);
+-	if (!decommission)
+-		return;
+-
+-	/* decommission handle */
+-	decommission->handle = handle;
+-	sev_guest_decommission(decommission, NULL);
+-
+-	kfree(decommission);
++	sev_decommission(handle);
+ }
+ 
+ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
+@@ -288,8 +296,10 @@ static int sev_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ 
+ 	/* Bind ASID to this guest */
+ 	ret = sev_bind_asid(kvm, start->handle, error);
+-	if (ret)
++	if (ret) {
++		sev_decommission(start->handle);
+ 		goto e_free_session;
++	}
+ 
+ 	/* return handle to userspace */
+ 	params.handle = start->handle;
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index 0a0e168be1cbe..9b0e771302cee 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -779,4 +779,48 @@ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1571, pci_amd_enable_64bit_bar);
+ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x15b1, pci_amd_enable_64bit_bar);
+ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1601, pci_amd_enable_64bit_bar);
+ 
++#define RS690_LOWER_TOP_OF_DRAM2	0x30
++#define RS690_LOWER_TOP_OF_DRAM2_VALID	0x1
++#define RS690_UPPER_TOP_OF_DRAM2	0x31
++#define RS690_HTIU_NB_INDEX		0xA8
++#define RS690_HTIU_NB_INDEX_WR_ENABLE	0x100
++#define RS690_HTIU_NB_DATA		0xAC
++
++/*
++ * Some BIOS implementations support RAM above 4GB, but do not configure the
++ * PCI host to respond to bus master accesses for these addresses. These
++ * implementations set the TOP_OF_DRAM_SLOT1 register correctly, so PCI DMA
++ * works as expected for addresses below 4GB.
++ *
++ * Reference: "AMD RS690 ASIC Family Register Reference Guide" (pg. 2-57)
++ * https://www.amd.com/system/files/TechDocs/43372_rs690_rrg_3.00o.pdf
++ */
++static void rs690_fix_64bit_dma(struct pci_dev *pdev)
++{
++	u32 val = 0;
++	phys_addr_t top_of_dram = __pa(high_memory - 1) + 1;
++
++	if (top_of_dram <= (1ULL << 32))
++		return;
++
++	pci_write_config_dword(pdev, RS690_HTIU_NB_INDEX,
++				RS690_LOWER_TOP_OF_DRAM2);
++	pci_read_config_dword(pdev, RS690_HTIU_NB_DATA, &val);
++
++	if (val)
++		return;
++
++	pci_info(pdev, "Adjusting top of DRAM to %pa for 64-bit DMA support\n", &top_of_dram);
++
++	pci_write_config_dword(pdev, RS690_HTIU_NB_INDEX,
++		RS690_UPPER_TOP_OF_DRAM2 | RS690_HTIU_NB_INDEX_WR_ENABLE);
++	pci_write_config_dword(pdev, RS690_HTIU_NB_DATA, top_of_dram >> 32);
++
++	pci_write_config_dword(pdev, RS690_HTIU_NB_INDEX,
++		RS690_LOWER_TOP_OF_DRAM2 | RS690_HTIU_NB_INDEX_WR_ENABLE);
++	pci_write_config_dword(pdev, RS690_HTIU_NB_DATA,
++		top_of_dram | RS690_LOWER_TOP_OF_DRAM2_VALID);
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7910, rs690_fix_64bit_dma);
++
+ #endif
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 8064df6382227..d3cdf467d91fa 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -586,8 +586,10 @@ DEFINE_IDTENTRY_RAW(xenpv_exc_debug)
+ DEFINE_IDTENTRY_RAW(exc_xen_unknown_trap)
+ {
+ 	/* This should never happen and there is no way to handle it. */
++	instrumentation_begin();
+ 	pr_err("Unknown trap in Xen PV mode.");
+ 	BUG();
++	instrumentation_end();
+ }
+ 
+ struct trap_array_entry {
+diff --git a/certs/Kconfig b/certs/Kconfig
+index c94e93d8bccf0..ab88d2a7f3c7f 100644
+--- a/certs/Kconfig
++++ b/certs/Kconfig
+@@ -83,4 +83,21 @@ config SYSTEM_BLACKLIST_HASH_LIST
+ 	  wrapper to incorporate the list into the kernel.  Each <hash> should
+ 	  be a string of hex digits.
+ 
++config SYSTEM_REVOCATION_LIST
++	bool "Provide system-wide ring of revocation certificates"
++	depends on SYSTEM_BLACKLIST_KEYRING
++	depends on PKCS7_MESSAGE_PARSER=y
++	help
++	  If set, this allows revocation certificates to be stored in the
++	  blacklist keyring and implements a hook whereby a PKCS#7 message can
++	  be checked to see if it matches such a certificate.
++
++config SYSTEM_REVOCATION_KEYS
++	string "X.509 certificates to be preloaded into the system blacklist keyring"
++	depends on SYSTEM_REVOCATION_LIST
++	help
++	  If set, this option should be the filename of a PEM-formatted file
++	  containing X.509 certificates to be included in the default blacklist
++	  keyring.
++
+ endmenu
+diff --git a/certs/Makefile b/certs/Makefile
+index f4c25b67aad90..b6db52ebf0beb 100644
+--- a/certs/Makefile
++++ b/certs/Makefile
+@@ -3,8 +3,9 @@
+ # Makefile for the linux kernel signature checking certificates.
+ #
+ 
+-obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o
+-obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist.o
++obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o common.o
++obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist.o common.o
++obj-$(CONFIG_SYSTEM_REVOCATION_LIST) += revocation_certificates.o
+ ifneq ($(CONFIG_SYSTEM_BLACKLIST_HASH_LIST),"")
+ obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist_hashes.o
+ else
+@@ -29,7 +30,7 @@ $(obj)/x509_certificate_list: scripts/extract-cert $(SYSTEM_TRUSTED_KEYS_SRCPREF
+ 	$(call if_changed,extract_certs,$(SYSTEM_TRUSTED_KEYS_SRCPREFIX)$(CONFIG_SYSTEM_TRUSTED_KEYS))
+ endif # CONFIG_SYSTEM_TRUSTED_KEYRING
+ 
+-clean-files := x509_certificate_list .x509.list
++clean-files := x509_certificate_list .x509.list x509_revocation_list
+ 
+ ifeq ($(CONFIG_MODULE_SIG),y)
+ ###############################################################################
+@@ -104,3 +105,17 @@ targets += signing_key.x509
+ $(obj)/signing_key.x509: scripts/extract-cert $(X509_DEP) FORCE
+ 	$(call if_changed,extract_certs,$(MODULE_SIG_KEY_SRCPREFIX)$(CONFIG_MODULE_SIG_KEY))
+ endif # CONFIG_MODULE_SIG
++
++ifeq ($(CONFIG_SYSTEM_REVOCATION_LIST),y)
++
++$(eval $(call config_filename,SYSTEM_REVOCATION_KEYS))
++
++$(obj)/revocation_certificates.o: $(obj)/x509_revocation_list
++
++quiet_cmd_extract_certs  = EXTRACT_CERTS   $(patsubst "%",%,$(2))
++      cmd_extract_certs  = scripts/extract-cert $(2) $@
++
++targets += x509_revocation_list
++$(obj)/x509_revocation_list: scripts/extract-cert $(SYSTEM_REVOCATION_KEYS_SRCPREFIX)$(SYSTEM_REVOCATION_KEYS_FILENAME) FORCE
++	$(call if_changed,extract_certs,$(SYSTEM_REVOCATION_KEYS_SRCPREFIX)$(CONFIG_SYSTEM_REVOCATION_KEYS))
++endif
+diff --git a/certs/blacklist.c b/certs/blacklist.c
+index f1c434b04b5e4..c973de883cf02 100644
+--- a/certs/blacklist.c
++++ b/certs/blacklist.c
+@@ -16,9 +16,15 @@
+ #include <linux/seq_file.h>
+ #include <keys/system_keyring.h>
+ #include "blacklist.h"
++#include "common.h"
+ 
+ static struct key *blacklist_keyring;
+ 
++#ifdef CONFIG_SYSTEM_REVOCATION_LIST
++extern __initconst const u8 revocation_certificate_list[];
++extern __initconst const unsigned long revocation_certificate_list_size;
++#endif
++
+ /*
+  * The description must be a type prefix, a colon and then an even number of
+  * hex digits.  The hash is kept in the description.
+@@ -144,6 +150,49 @@ int is_binary_blacklisted(const u8 *hash, size_t hash_len)
+ }
+ EXPORT_SYMBOL_GPL(is_binary_blacklisted);
+ 
++#ifdef CONFIG_SYSTEM_REVOCATION_LIST
++/**
++ * add_key_to_revocation_list - Add a revocation certificate to the blacklist
++ * @data: The data blob containing the certificate
++ * @size: The size of data blob
++ */
++int add_key_to_revocation_list(const char *data, size_t size)
++{
++	key_ref_t key;
++
++	key = key_create_or_update(make_key_ref(blacklist_keyring, true),
++				   "asymmetric",
++				   NULL,
++				   data,
++				   size,
++				   ((KEY_POS_ALL & ~KEY_POS_SETATTR) | KEY_USR_VIEW),
++				   KEY_ALLOC_NOT_IN_QUOTA | KEY_ALLOC_BUILT_IN);
++
++	if (IS_ERR(key)) {
++		pr_err("Problem with revocation key (%ld)\n", PTR_ERR(key));
++		return PTR_ERR(key);
++	}
++
++	return 0;
++}
++
++/**
++ * is_key_on_revocation_list - Determine if the key for a PKCS#7 message is revoked
++ * @pkcs7: The PKCS#7 message to check
++ */
++int is_key_on_revocation_list(struct pkcs7_message *pkcs7)
++{
++	int ret;
++
++	ret = pkcs7_validate_trust(pkcs7, blacklist_keyring);
++
++	if (ret == 0)
++		return -EKEYREJECTED;
++
++	return -ENOKEY;
++}
++#endif
++
+ /*
+  * Initialise the blacklist
+  */
+@@ -177,3 +226,18 @@ static int __init blacklist_init(void)
+  * Must be initialised before we try and load the keys into the keyring.
+  */
+ device_initcall(blacklist_init);
++
++#ifdef CONFIG_SYSTEM_REVOCATION_LIST
++/*
++ * Load the compiled-in list of revocation X.509 certificates.
++ */
++static __init int load_revocation_certificate_list(void)
++{
++	if (revocation_certificate_list_size)
++		pr_notice("Loading compiled-in revocation X.509 certificates\n");
++
++	return load_certificate_list(revocation_certificate_list, revocation_certificate_list_size,
++				     blacklist_keyring);
++}
++late_initcall(load_revocation_certificate_list);
++#endif
+diff --git a/certs/blacklist.h b/certs/blacklist.h
+index 1efd6fa0dc608..51b320cf85749 100644
+--- a/certs/blacklist.h
++++ b/certs/blacklist.h
+@@ -1,3 +1,5 @@
+ #include <linux/kernel.h>
++#include <linux/errno.h>
++#include <crypto/pkcs7.h>
+ 
+ extern const char __initconst *const blacklist_hashes[];
+diff --git a/certs/common.c b/certs/common.c
+new file mode 100644
+index 0000000000000..16a220887a53e
+--- /dev/null
++++ b/certs/common.c
+@@ -0,0 +1,57 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++
++#include <linux/kernel.h>
++#include <linux/key.h>
++#include "common.h"
++
++int load_certificate_list(const u8 cert_list[],
++			  const unsigned long list_size,
++			  const struct key *keyring)
++{
++	key_ref_t key;
++	const u8 *p, *end;
++	size_t plen;
++
++	p = cert_list;
++	end = p + list_size;
++	while (p < end) {
++		/* Each cert begins with an ASN.1 SEQUENCE tag and must be more
++		 * than 256 bytes in size.
++		 */
++		if (end - p < 4)
++			goto dodgy_cert;
++		if (p[0] != 0x30 &&
++		    p[1] != 0x82)
++			goto dodgy_cert;
++		plen = (p[2] << 8) | p[3];
++		plen += 4;
++		if (plen > end - p)
++			goto dodgy_cert;
++
++		key = key_create_or_update(make_key_ref(keyring, 1),
++					   "asymmetric",
++					   NULL,
++					   p,
++					   plen,
++					   ((KEY_POS_ALL & ~KEY_POS_SETATTR) |
++					   KEY_USR_VIEW | KEY_USR_READ),
++					   KEY_ALLOC_NOT_IN_QUOTA |
++					   KEY_ALLOC_BUILT_IN |
++					   KEY_ALLOC_BYPASS_RESTRICTION);
++		if (IS_ERR(key)) {
++			pr_err("Problem loading in-kernel X.509 certificate (%ld)\n",
++			       PTR_ERR(key));
++		} else {
++			pr_notice("Loaded X.509 cert '%s'\n",
++				  key_ref_to_ptr(key)->description);
++			key_ref_put(key);
++		}
++		p += plen;
++	}
++
++	return 0;
++
++dodgy_cert:
++	pr_err("Problem parsing in-kernel X.509 certificate list\n");
++	return 0;
++}
+diff --git a/certs/common.h b/certs/common.h
+new file mode 100644
+index 0000000000000..abdb5795936b7
+--- /dev/null
++++ b/certs/common.h
+@@ -0,0 +1,9 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++
++#ifndef _CERT_COMMON_H
++#define _CERT_COMMON_H
++
++int load_certificate_list(const u8 cert_list[], const unsigned long list_size,
++			  const struct key *keyring);
++
++#endif
+diff --git a/certs/revocation_certificates.S b/certs/revocation_certificates.S
+new file mode 100644
+index 0000000000000..f21aae8a8f0ef
+--- /dev/null
++++ b/certs/revocation_certificates.S
+@@ -0,0 +1,21 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#include <linux/export.h>
++#include <linux/init.h>
++
++	__INITRODATA
++
++	.align 8
++	.globl revocation_certificate_list
++revocation_certificate_list:
++__revocation_list_start:
++	.incbin "certs/x509_revocation_list"
++__revocation_list_end:
++
++	.align 8
++	.globl revocation_certificate_list_size
++revocation_certificate_list_size:
++#ifdef CONFIG_64BIT
++	.quad __revocation_list_end - __revocation_list_start
++#else
++	.long __revocation_list_end - __revocation_list_start
++#endif
+diff --git a/certs/system_keyring.c b/certs/system_keyring.c
+index 798291177186c..a44a8915c94cf 100644
+--- a/certs/system_keyring.c
++++ b/certs/system_keyring.c
+@@ -15,6 +15,7 @@
+ #include <keys/asymmetric-type.h>
+ #include <keys/system_keyring.h>
+ #include <crypto/pkcs7.h>
++#include "common.h"
+ 
+ static struct key *builtin_trusted_keys;
+ #ifdef CONFIG_SECONDARY_TRUSTED_KEYRING
+@@ -136,54 +137,10 @@ device_initcall(system_trusted_keyring_init);
+  */
+ static __init int load_system_certificate_list(void)
+ {
+-	key_ref_t key;
+-	const u8 *p, *end;
+-	size_t plen;
+-
+ 	pr_notice("Loading compiled-in X.509 certificates\n");
+ 
+-	p = system_certificate_list;
+-	end = p + system_certificate_list_size;
+-	while (p < end) {
+-		/* Each cert begins with an ASN.1 SEQUENCE tag and must be more
+-		 * than 256 bytes in size.
+-		 */
+-		if (end - p < 4)
+-			goto dodgy_cert;
+-		if (p[0] != 0x30 &&
+-		    p[1] != 0x82)
+-			goto dodgy_cert;
+-		plen = (p[2] << 8) | p[3];
+-		plen += 4;
+-		if (plen > end - p)
+-			goto dodgy_cert;
+-
+-		key = key_create_or_update(make_key_ref(builtin_trusted_keys, 1),
+-					   "asymmetric",
+-					   NULL,
+-					   p,
+-					   plen,
+-					   ((KEY_POS_ALL & ~KEY_POS_SETATTR) |
+-					   KEY_USR_VIEW | KEY_USR_READ),
+-					   KEY_ALLOC_NOT_IN_QUOTA |
+-					   KEY_ALLOC_BUILT_IN |
+-					   KEY_ALLOC_BYPASS_RESTRICTION);
+-		if (IS_ERR(key)) {
+-			pr_err("Problem loading in-kernel X.509 certificate (%ld)\n",
+-			       PTR_ERR(key));
+-		} else {
+-			pr_notice("Loaded X.509 cert '%s'\n",
+-				  key_ref_to_ptr(key)->description);
+-			key_ref_put(key);
+-		}
+-		p += plen;
+-	}
+-
+-	return 0;
+-
+-dodgy_cert:
+-	pr_err("Problem parsing in-kernel X.509 certificate list\n");
+-	return 0;
++	return load_certificate_list(system_certificate_list, system_certificate_list_size,
++				     builtin_trusted_keys);
+ }
+ late_initcall(load_system_certificate_list);
+ 
+@@ -241,6 +198,12 @@ int verify_pkcs7_message_sig(const void *data, size_t len,
+ 			pr_devel("PKCS#7 platform keyring is not available\n");
+ 			goto error;
+ 		}
++
++		ret = is_key_on_revocation_list(pkcs7);
++		if (ret != -ENOKEY) {
++			pr_devel("PKCS#7 platform key is on revocation list\n");
++			goto error;
++		}
+ 	}
+ 	ret = pkcs7_validate_trust(pkcs7, trusted_keys);
+ 	if (ret < 0) {
+diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
+index f2db761ee5488..f28bb2334e747 100644
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -693,6 +693,7 @@ config XILINX_ZYNQMP_DMA
+ 
+ config XILINX_ZYNQMP_DPDMA
+ 	tristate "Xilinx DPDMA Engine"
++	depends on HAS_IOMEM && OF
+ 	select DMA_ENGINE
+ 	select DMA_VIRTUAL_CHANNELS
+ 	help
+diff --git a/drivers/dma/mediatek/mtk-uart-apdma.c b/drivers/dma/mediatek/mtk-uart-apdma.c
+index 27c07350971dd..375e7e647df6b 100644
+--- a/drivers/dma/mediatek/mtk-uart-apdma.c
++++ b/drivers/dma/mediatek/mtk-uart-apdma.c
+@@ -131,10 +131,7 @@ static unsigned int mtk_uart_apdma_read(struct mtk_chan *c, unsigned int reg)
+ 
+ static void mtk_uart_apdma_desc_free(struct virt_dma_desc *vd)
+ {
+-	struct dma_chan *chan = vd->tx.chan;
+-	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+-
+-	kfree(c->desc);
++	kfree(container_of(vd, struct mtk_uart_apdma_desc, vd));
+ }
+ 
+ static void mtk_uart_apdma_start_tx(struct mtk_chan *c)
+@@ -207,14 +204,9 @@ static void mtk_uart_apdma_start_rx(struct mtk_chan *c)
+ 
+ static void mtk_uart_apdma_tx_handler(struct mtk_chan *c)
+ {
+-	struct mtk_uart_apdma_desc *d = c->desc;
+-
+ 	mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
+ 	mtk_uart_apdma_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
+ 	mtk_uart_apdma_write(c, VFF_EN, VFF_EN_CLR_B);
+-
+-	list_del(&d->vd.node);
+-	vchan_cookie_complete(&d->vd);
+ }
+ 
+ static void mtk_uart_apdma_rx_handler(struct mtk_chan *c)
+@@ -245,9 +237,17 @@ static void mtk_uart_apdma_rx_handler(struct mtk_chan *c)
+ 
+ 	c->rx_status = d->avail_len - cnt;
+ 	mtk_uart_apdma_write(c, VFF_RPT, wg);
++}
+ 
+-	list_del(&d->vd.node);
+-	vchan_cookie_complete(&d->vd);
++static void mtk_uart_apdma_chan_complete_handler(struct mtk_chan *c)
++{
++	struct mtk_uart_apdma_desc *d = c->desc;
++
++	if (d) {
++		list_del(&d->vd.node);
++		vchan_cookie_complete(&d->vd);
++		c->desc = NULL;
++	}
+ }
+ 
+ static irqreturn_t mtk_uart_apdma_irq_handler(int irq, void *dev_id)
+@@ -261,6 +261,7 @@ static irqreturn_t mtk_uart_apdma_irq_handler(int irq, void *dev_id)
+ 		mtk_uart_apdma_rx_handler(c);
+ 	else if (c->dir == DMA_MEM_TO_DEV)
+ 		mtk_uart_apdma_tx_handler(c);
++	mtk_uart_apdma_chan_complete_handler(c);
+ 	spin_unlock_irqrestore(&c->vc.lock, flags);
+ 
+ 	return IRQ_HANDLED;
+@@ -348,7 +349,7 @@ static struct dma_async_tx_descriptor *mtk_uart_apdma_prep_slave_sg
+ 		return NULL;
+ 
+ 	/* Now allocate and setup the descriptor */
+-	d = kzalloc(sizeof(*d), GFP_ATOMIC);
++	d = kzalloc(sizeof(*d), GFP_NOWAIT);
+ 	if (!d)
+ 		return NULL;
+ 
+@@ -366,7 +367,7 @@ static void mtk_uart_apdma_issue_pending(struct dma_chan *chan)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&c->vc.lock, flags);
+-	if (vchan_issue_pending(&c->vc)) {
++	if (vchan_issue_pending(&c->vc) && !c->desc) {
+ 		vd = vchan_next_desc(&c->vc);
+ 		c->desc = to_mtk_uart_apdma_desc(&vd->tx);
+ 
+diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
+index a57705356e8bb..991a7b5da29f0 100644
+--- a/drivers/dma/sh/rcar-dmac.c
++++ b/drivers/dma/sh/rcar-dmac.c
+@@ -1874,7 +1874,7 @@ static int rcar_dmac_probe(struct platform_device *pdev)
+ 
+ 	/* Enable runtime PM and initialize the device. */
+ 	pm_runtime_enable(&pdev->dev);
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "runtime PM get sync failed (%d)\n", ret);
+ 		return ret;
+diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c
+index 08cfbfab837bb..9d473923712ad 100644
+--- a/drivers/dma/stm32-mdma.c
++++ b/drivers/dma/stm32-mdma.c
+@@ -1448,7 +1448,7 @@ static int stm32_mdma_alloc_chan_resources(struct dma_chan *c)
+ 		return -ENOMEM;
+ 	}
+ 
+-	ret = pm_runtime_get_sync(dmadev->ddev.dev);
++	ret = pm_runtime_resume_and_get(dmadev->ddev.dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1714,7 +1714,7 @@ static int stm32_mdma_pm_suspend(struct device *dev)
+ 	u32 ccr, id;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
+index ff7dfb3fdeb47..6c709803203ad 100644
+--- a/drivers/dma/xilinx/xilinx_dpdma.c
++++ b/drivers/dma/xilinx/xilinx_dpdma.c
+@@ -113,6 +113,7 @@
+ #define XILINX_DPDMA_CH_VDO				0x020
+ #define XILINX_DPDMA_CH_PYLD_SZ				0x024
+ #define XILINX_DPDMA_CH_DESC_ID				0x028
++#define XILINX_DPDMA_CH_DESC_ID_MASK			GENMASK(15, 0)
+ 
+ /* DPDMA descriptor fields */
+ #define XILINX_DPDMA_DESC_CONTROL_PREEMBLE		0xa5
+@@ -866,7 +867,8 @@ static void xilinx_dpdma_chan_queue_transfer(struct xilinx_dpdma_chan *chan)
+ 	 * will be used, but it should be enough.
+ 	 */
+ 	list_for_each_entry(sw_desc, &desc->descriptors, node)
+-		sw_desc->hw.desc_id = desc->vdesc.tx.cookie;
++		sw_desc->hw.desc_id = desc->vdesc.tx.cookie
++				    & XILINX_DPDMA_CH_DESC_ID_MASK;
+ 
+ 	sw_desc = list_first_entry(&desc->descriptors,
+ 				   struct xilinx_dpdma_sw_desc, node);
+@@ -1086,7 +1088,8 @@ static void xilinx_dpdma_chan_vsync_irq(struct  xilinx_dpdma_chan *chan)
+ 	if (!chan->running || !pending)
+ 		goto out;
+ 
+-	desc_id = dpdma_read(chan->reg, XILINX_DPDMA_CH_DESC_ID);
++	desc_id = dpdma_read(chan->reg, XILINX_DPDMA_CH_DESC_ID)
++		& XILINX_DPDMA_CH_DESC_ID_MASK;
+ 
+ 	/* If the retrigger raced with vsync, retry at the next frame. */
+ 	sw_desc = list_first_entry(&pending->descriptors,
+diff --git a/drivers/dma/xilinx/zynqmp_dma.c b/drivers/dma/xilinx/zynqmp_dma.c
+index d8419565b92cc..5fecf5aa6e858 100644
+--- a/drivers/dma/xilinx/zynqmp_dma.c
++++ b/drivers/dma/xilinx/zynqmp_dma.c
+@@ -468,7 +468,7 @@ static int zynqmp_dma_alloc_chan_resources(struct dma_chan *dchan)
+ 	struct zynqmp_dma_desc_sw *desc;
+ 	int i, ret;
+ 
+-	ret = pm_runtime_get_sync(chan->dev);
++	ret = pm_runtime_resume_and_get(chan->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
+index ade3ecf2ee495..2613881a66e66 100644
+--- a/drivers/gpio/gpiolib-cdev.c
++++ b/drivers/gpio/gpiolib-cdev.c
+@@ -1865,6 +1865,7 @@ static void gpio_v2_line_info_changed_to_v1(
+ 		struct gpio_v2_line_info_changed *lic_v2,
+ 		struct gpioline_info_changed *lic_v1)
+ {
++	memset(lic_v1, 0, sizeof(*lic_v1));
+ 	gpio_v2_line_info_to_v1(&lic_v2->info, &lic_v1->info);
+ 	lic_v1->timestamp = lic_v2->timestamp_ns;
+ 	lic_v1->event_type = lic_v2->event_type;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+index 1b56dbc1f304e..e93ccdc5faf4e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+@@ -238,9 +238,21 @@ static int amdgpu_dma_buf_pin(struct dma_buf_attachment *attach)
+ {
+ 	struct drm_gem_object *obj = attach->dmabuf->priv;
+ 	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
++	int r;
+ 
+ 	/* pin buffer into GTT */
+-	return amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT);
++	r = amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT);
++	if (r)
++		return r;
++
++	if (bo->tbo.moving) {
++		r = dma_fence_wait(bo->tbo.moving, true);
++		if (r) {
++			amdgpu_bo_unpin(bo);
++			return r;
++		}
++	}
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 3c92dacbc24ad..fc8da5fed779b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -6590,12 +6590,8 @@ static int gfx_v10_0_kiq_init_register(struct amdgpu_ring *ring)
+ 	if (ring->use_doorbell) {
+ 		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER,
+ 			(adev->doorbell_index.kiq * 2) << 2);
+-		/* If GC has entered CGPG, ringing doorbell > first page doesn't
+-		 * wakeup GC. Enlarge CP_MEC_DOORBELL_RANGE_UPPER to workaround
+-		 * this issue.
+-		 */
+ 		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
+-			(adev->doorbell.size - 4));
++			(adev->doorbell_index.userqueue_end * 2) << 2);
+ 	}
+ 
+ 	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL,
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 1859d293ef712..fb15e8b5af32f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3619,12 +3619,8 @@ static int gfx_v9_0_kiq_init_register(struct amdgpu_ring *ring)
+ 	if (ring->use_doorbell) {
+ 		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER,
+ 					(adev->doorbell_index.kiq * 2) << 2);
+-		/* If GC has entered CGPG, ringing doorbell > first page doesn't
+-		 * wakeup GC. Enlarge CP_MEC_DOORBELL_RANGE_UPPER to workaround
+-		 * this issue.
+-		 */
+ 		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
+-					(adev->doorbell.size - 4));
++					(adev->doorbell_index.userqueue_end * 2) << 2);
+ 	}
+ 
+ 	WREG32_SOC15_RLC(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL,
+diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
+index b2ecb91f8ddc0..5f5b87f995468 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
++++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
+@@ -111,7 +111,22 @@ int nouveau_gem_prime_pin(struct drm_gem_object *obj)
+ 	if (ret)
+ 		return -EINVAL;
+ 
+-	return 0;
++	ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL);
++	if (ret)
++		goto error;
++
++	if (nvbo->bo.moving)
++		ret = dma_fence_wait(nvbo->bo.moving, true);
++
++	ttm_bo_unreserve(&nvbo->bo);
++	if (ret)
++		goto error;
++
++	return ret;
++
++error:
++	nouveau_bo_unpin(nvbo);
++	return ret;
+ }
+ 
+ void nouveau_gem_prime_unpin(struct drm_gem_object *obj)
+diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
+index b9de0e51c0be9..cbad81578190f 100644
+--- a/drivers/gpu/drm/radeon/radeon_prime.c
++++ b/drivers/gpu/drm/radeon/radeon_prime.c
+@@ -94,9 +94,19 @@ int radeon_gem_prime_pin(struct drm_gem_object *obj)
+ 
+ 	/* pin buffer into GTT */
+ 	ret = radeon_bo_pin(bo, RADEON_GEM_DOMAIN_GTT, NULL);
+-	if (likely(ret == 0))
+-		bo->prime_shared_count++;
+-
++	if (unlikely(ret))
++		goto error;
++
++	if (bo->tbo.moving) {
++		ret = dma_fence_wait(bo->tbo.moving, false);
++		if (unlikely(ret)) {
++			radeon_bo_unpin(bo);
++			goto error;
++		}
++	}
++
++	bo->prime_shared_count++;
++error:
+ 	radeon_bo_unreserve(bo);
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index af5f01eff872c..88a8cb840cd54 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -146,6 +146,8 @@ vc4_hdmi_connector_detect(struct drm_connector *connector, bool force)
+ 	struct vc4_hdmi *vc4_hdmi = connector_to_vc4_hdmi(connector);
+ 	bool connected = false;
+ 
++	WARN_ON(pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev));
++
+ 	if (vc4_hdmi->hpd_gpio) {
+ 		if (gpio_get_value_cansleep(vc4_hdmi->hpd_gpio) ^
+ 		    vc4_hdmi->hpd_active_low)
+@@ -167,10 +169,12 @@ vc4_hdmi_connector_detect(struct drm_connector *connector, bool force)
+ 			}
+ 		}
+ 
++		pm_runtime_put(&vc4_hdmi->pdev->dev);
+ 		return connector_status_connected;
+ 	}
+ 
+ 	cec_phys_addr_invalidate(vc4_hdmi->cec_adap);
++	pm_runtime_put(&vc4_hdmi->pdev->dev);
+ 	return connector_status_disconnected;
+ }
+ 
+@@ -415,7 +419,6 @@ static void vc4_hdmi_encoder_post_crtc_powerdown(struct drm_encoder *encoder)
+ 		   HDMI_READ(HDMI_VID_CTL) & ~VC4_HD_VID_CTL_ENABLE);
+ 
+ 	clk_disable_unprepare(vc4_hdmi->pixel_bvb_clock);
+-	clk_disable_unprepare(vc4_hdmi->hsm_clock);
+ 	clk_disable_unprepare(vc4_hdmi->pixel_clock);
+ 
+ 	ret = pm_runtime_put(&vc4_hdmi->pdev->dev);
+@@ -666,13 +669,6 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder)
+ 		return;
+ 	}
+ 
+-	ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
+-	if (ret) {
+-		DRM_ERROR("Failed to turn on HSM clock: %d\n", ret);
+-		clk_disable_unprepare(vc4_hdmi->pixel_clock);
+-		return;
+-	}
+-
+ 	vc4_hdmi_cec_update_clk_div(vc4_hdmi);
+ 
+ 	/*
+@@ -683,7 +679,6 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder)
+ 			       (hsm_rate > VC4_HSM_MID_CLOCK ? 150000000 : 75000000));
+ 	if (ret) {
+ 		DRM_ERROR("Failed to set pixel bvb clock rate: %d\n", ret);
+-		clk_disable_unprepare(vc4_hdmi->hsm_clock);
+ 		clk_disable_unprepare(vc4_hdmi->pixel_clock);
+ 		return;
+ 	}
+@@ -691,7 +686,6 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder)
+ 	ret = clk_prepare_enable(vc4_hdmi->pixel_bvb_clock);
+ 	if (ret) {
+ 		DRM_ERROR("Failed to turn on pixel bvb clock: %d\n", ret);
+-		clk_disable_unprepare(vc4_hdmi->hsm_clock);
+ 		clk_disable_unprepare(vc4_hdmi->pixel_clock);
+ 		return;
+ 	}
+@@ -1724,6 +1718,29 @@ static int vc5_hdmi_init_resources(struct vc4_hdmi *vc4_hdmi)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_PM
++static int vc4_hdmi_runtime_suspend(struct device *dev)
++{
++	struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
++
++	clk_disable_unprepare(vc4_hdmi->hsm_clock);
++
++	return 0;
++}
++
++static int vc4_hdmi_runtime_resume(struct device *dev)
++{
++	struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
++	int ret;
++
++	ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++#endif
++
+ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ {
+ 	const struct vc4_hdmi_variant *variant = of_device_get_match_data(dev);
+@@ -1959,11 +1976,18 @@ static const struct of_device_id vc4_hdmi_dt_match[] = {
+ 	{}
+ };
+ 
++static const struct dev_pm_ops vc4_hdmi_pm_ops = {
++	SET_RUNTIME_PM_OPS(vc4_hdmi_runtime_suspend,
++			   vc4_hdmi_runtime_resume,
++			   NULL)
++};
++
+ struct platform_driver vc4_hdmi_driver = {
+ 	.probe = vc4_hdmi_dev_probe,
+ 	.remove = vc4_hdmi_dev_remove,
+ 	.driver = {
+ 		.name = "vc4_hdmi",
+ 		.of_match_table = vc4_hdmi_dt_match,
++		.pm = &vc4_hdmi_pm_ops,
+ 	},
+ };
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index e42b87e96f747..eab6fd6b890eb 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -974,6 +974,9 @@ static s32 i801_access(struct i2c_adapter *adap, u16 addr,
+ 	}
+ 
+ out:
++	/* Unlock the SMBus device for use by BIOS/ACPI */
++	outb_p(SMBHSTSTS_INUSE_STS, SMBHSTSTS(priv));
++
+ 	pm_runtime_mark_last_busy(&priv->pci_dev->dev);
+ 	pm_runtime_put_autosuspend(&priv->pci_dev->dev);
+ 	mutex_unlock(&priv->acpi_lock);
+diff --git a/drivers/i2c/busses/i2c-robotfuzz-osif.c b/drivers/i2c/busses/i2c-robotfuzz-osif.c
+index a39f7d0927973..66dfa211e736b 100644
+--- a/drivers/i2c/busses/i2c-robotfuzz-osif.c
++++ b/drivers/i2c/busses/i2c-robotfuzz-osif.c
+@@ -83,7 +83,7 @@ static int osif_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs,
+ 			}
+ 		}
+ 
+-		ret = osif_usb_read(adapter, OSIFI2C_STOP, 0, 0, NULL, 0);
++		ret = osif_usb_write(adapter, OSIFI2C_STOP, 0, 0, NULL, 0);
+ 		if (ret) {
+ 			dev_err(&adapter->dev, "failure sending STOP\n");
+ 			return -EREMOTEIO;
+@@ -153,7 +153,7 @@ static int osif_probe(struct usb_interface *interface,
+ 	 * Set bus frequency. The frequency is:
+ 	 * 120,000,000 / ( 16 + 2 * div * 4^prescale).
+ 	 * Using dev = 52, prescale = 0 give 100KHz */
+-	ret = osif_usb_read(&priv->adapter, OSIFI2C_SET_BIT_RATE, 52, 0,
++	ret = osif_usb_write(&priv->adapter, OSIFI2C_SET_BIT_RATE, 52, 0,
+ 			    NULL, 0);
+ 	if (ret) {
+ 		dev_err(&interface->dev, "failure sending bit rate");
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index 4ec41579940a3..d3f40c9a8c6c8 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -165,6 +165,7 @@ struct meson_host {
+ 
+ 	unsigned int bounce_buf_size;
+ 	void *bounce_buf;
++	void __iomem *bounce_iomem_buf;
+ 	dma_addr_t bounce_dma_addr;
+ 	struct sd_emmc_desc *descs;
+ 	dma_addr_t descs_dma_addr;
+@@ -734,6 +735,47 @@ static void meson_mmc_desc_chain_transfer(struct mmc_host *mmc, u32 cmd_cfg)
+ 	writel(start, host->regs + SD_EMMC_START);
+ }
+ 
++/* local sg copy to buffer version with _to/fromio usage for dram_access_quirk */
++static void meson_mmc_copy_buffer(struct meson_host *host, struct mmc_data *data,
++				  size_t buflen, bool to_buffer)
++{
++	unsigned int sg_flags = SG_MITER_ATOMIC;
++	struct scatterlist *sgl = data->sg;
++	unsigned int nents = data->sg_len;
++	struct sg_mapping_iter miter;
++	unsigned int offset = 0;
++
++	if (to_buffer)
++		sg_flags |= SG_MITER_FROM_SG;
++	else
++		sg_flags |= SG_MITER_TO_SG;
++
++	sg_miter_start(&miter, sgl, nents, sg_flags);
++
++	while ((offset < buflen) && sg_miter_next(&miter)) {
++		unsigned int len;
++
++		len = min(miter.length, buflen - offset);
++
++		/* When dram_access_quirk, the bounce buffer is a iomem mapping */
++		if (host->dram_access_quirk) {
++			if (to_buffer)
++				memcpy_toio(host->bounce_iomem_buf + offset, miter.addr, len);
++			else
++				memcpy_fromio(miter.addr, host->bounce_iomem_buf + offset, len);
++		} else {
++			if (to_buffer)
++				memcpy(host->bounce_buf + offset, miter.addr, len);
++			else
++				memcpy(miter.addr, host->bounce_buf + offset, len);
++		}
++
++		offset += len;
++	}
++
++	sg_miter_stop(&miter);
++}
++
+ static void meson_mmc_start_cmd(struct mmc_host *mmc, struct mmc_command *cmd)
+ {
+ 	struct meson_host *host = mmc_priv(mmc);
+@@ -777,8 +819,7 @@ static void meson_mmc_start_cmd(struct mmc_host *mmc, struct mmc_command *cmd)
+ 		if (data->flags & MMC_DATA_WRITE) {
+ 			cmd_cfg |= CMD_CFG_DATA_WR;
+ 			WARN_ON(xfer_bytes > host->bounce_buf_size);
+-			sg_copy_to_buffer(data->sg, data->sg_len,
+-					  host->bounce_buf, xfer_bytes);
++			meson_mmc_copy_buffer(host, data, xfer_bytes, true);
+ 			dma_wmb();
+ 		}
+ 
+@@ -947,8 +988,7 @@ static irqreturn_t meson_mmc_irq_thread(int irq, void *dev_id)
+ 	if (meson_mmc_bounce_buf_read(data)) {
+ 		xfer_bytes = data->blksz * data->blocks;
+ 		WARN_ON(xfer_bytes > host->bounce_buf_size);
+-		sg_copy_from_buffer(data->sg, data->sg_len,
+-				    host->bounce_buf, xfer_bytes);
++		meson_mmc_copy_buffer(host, data, xfer_bytes, false);
+ 	}
+ 
+ 	next_cmd = meson_mmc_get_next_command(cmd);
+@@ -1168,7 +1208,7 @@ static int meson_mmc_probe(struct platform_device *pdev)
+ 		 * instead of the DDR memory
+ 		 */
+ 		host->bounce_buf_size = SD_EMMC_SRAM_DATA_BUF_LEN;
+-		host->bounce_buf = host->regs + SD_EMMC_SRAM_DATA_BUF_OFF;
++		host->bounce_iomem_buf = host->regs + SD_EMMC_SRAM_DATA_BUF_OFF;
+ 		host->bounce_dma_addr = res->start + SD_EMMC_SRAM_DATA_BUF_OFF;
+ 	} else {
+ 		/* data bounce buffer */
+diff --git a/drivers/net/caif/caif_serial.c b/drivers/net/caif/caif_serial.c
+index d025ea4349339..39fbd0be179c2 100644
+--- a/drivers/net/caif/caif_serial.c
++++ b/drivers/net/caif/caif_serial.c
+@@ -351,6 +351,7 @@ static int ldisc_open(struct tty_struct *tty)
+ 	rtnl_lock();
+ 	result = register_netdevice(dev);
+ 	if (result) {
++		tty_kref_put(tty);
+ 		rtnl_unlock();
+ 		free_netdev(dev);
+ 		return -ENODEV;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
+index 17d5b649eb36b..e81dd34a3cac2 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
+@@ -1266,9 +1266,11 @@ int qed_dcbx_get_config_params(struct qed_hwfn *p_hwfn,
+ 		p_hwfn->p_dcbx_info->set.ver_num |= DCBX_CONFIG_VERSION_STATIC;
+ 
+ 	p_hwfn->p_dcbx_info->set.enabled = dcbx_info->operational.enabled;
++	BUILD_BUG_ON(sizeof(dcbx_info->operational.params) !=
++		     sizeof(p_hwfn->p_dcbx_info->set.config.params));
+ 	memcpy(&p_hwfn->p_dcbx_info->set.config.params,
+ 	       &dcbx_info->operational.params,
+-	       sizeof(struct qed_dcbx_admin_params));
++	       sizeof(p_hwfn->p_dcbx_info->set.config.params));
+ 	p_hwfn->p_dcbx_info->set.config.valid = true;
+ 
+ 	memcpy(params, &p_hwfn->p_dcbx_info->set, sizeof(struct qed_dcbx_set));
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 3bb36f4a984e8..a6bf80b529679 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -1673,7 +1673,7 @@ static void rtl8169_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+ {
+ 	switch(stringset) {
+ 	case ETH_SS_STATS:
+-		memcpy(data, *rtl8169_gstrings, sizeof(rtl8169_gstrings));
++		memcpy(data, rtl8169_gstrings, sizeof(rtl8169_gstrings));
+ 		break;
+ 	}
+ }
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index 6d84266c03caf..5cab2d3c00236 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -2287,7 +2287,7 @@ static void sh_eth_get_strings(struct net_device *ndev, u32 stringset, u8 *data)
+ {
+ 	switch (stringset) {
+ 	case ETH_SS_STATS:
+-		memcpy(data, *sh_eth_gstrings_stats,
++		memcpy(data, sh_eth_gstrings_stats,
+ 		       sizeof(sh_eth_gstrings_stats));
+ 		break;
+ 	}
+diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
+index 01bb36e7cff0a..6bd3a389d389c 100644
+--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
++++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
+@@ -774,12 +774,15 @@ static void temac_start_xmit_done(struct net_device *ndev)
+ 	stat = be32_to_cpu(cur_p->app0);
+ 
+ 	while (stat & STS_CTRL_APP0_CMPLT) {
++		/* Make sure that the other fields are read after bd is
++		 * released by dma
++		 */
++		rmb();
+ 		dma_unmap_single(ndev->dev.parent, be32_to_cpu(cur_p->phys),
+ 				 be32_to_cpu(cur_p->len), DMA_TO_DEVICE);
+ 		skb = (struct sk_buff *)ptr_from_txbd(cur_p);
+ 		if (skb)
+ 			dev_consume_skb_irq(skb);
+-		cur_p->app0 = 0;
+ 		cur_p->app1 = 0;
+ 		cur_p->app2 = 0;
+ 		cur_p->app3 = 0;
+@@ -788,6 +791,12 @@ static void temac_start_xmit_done(struct net_device *ndev)
+ 		ndev->stats.tx_packets++;
+ 		ndev->stats.tx_bytes += be32_to_cpu(cur_p->len);
+ 
++		/* app0 must be visible last, as it is used to flag
++		 * availability of the bd
++		 */
++		smp_mb();
++		cur_p->app0 = 0;
++
+ 		lp->tx_bd_ci++;
+ 		if (lp->tx_bd_ci >= lp->tx_bd_num)
+ 			lp->tx_bd_ci = 0;
+@@ -814,6 +823,9 @@ static inline int temac_check_tx_bd_space(struct temac_local *lp, int num_frag)
+ 		if (cur_p->app0)
+ 			return NETDEV_TX_BUSY;
+ 
++		/* Make sure to read next bd app0 after this one */
++		rmb();
++
+ 		tail++;
+ 		if (tail >= lp->tx_bd_num)
+ 			tail = 0;
+@@ -930,6 +942,11 @@ temac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	wmb();
+ 	lp->dma_out(lp, TX_TAILDESC_PTR, tail_p); /* DMA start */
+ 
++	if (temac_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1)) {
++		netdev_info(ndev, "%s -> netif_stop_queue\n", __func__);
++		netif_stop_queue(ndev);
++	}
++
+ 	return NETDEV_TX_OK;
+ }
+ 
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index 69d3eacc2b96c..c716074fdef0b 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -792,16 +792,12 @@ static int dp83867_phy_reset(struct phy_device *phydev)
+ {
+ 	int err;
+ 
+-	err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESET);
++	err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESTART);
+ 	if (err < 0)
+ 		return err;
+ 
+ 	usleep_range(10, 20);
+ 
+-	/* After reset FORCE_LINK_GOOD bit is set. Although the
+-	 * default value should be unset. Disable FORCE_LINK_GOOD
+-	 * for the phy to work properly.
+-	 */
+ 	return phy_modify(phydev, MII_DP83867_PHYCTRL,
+ 			 DP83867_PHYCR_FORCE_LINK_GOOD, 0);
+ }
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index f5010f8ac1ec7..95e27fb7d2c10 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -6054,7 +6054,7 @@ static void rtl8152_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+ {
+ 	switch (stringset) {
+ 	case ETH_SS_STATS:
+-		memcpy(data, *rtl8152_gstrings, sizeof(rtl8152_gstrings));
++		memcpy(data, rtl8152_gstrings, sizeof(rtl8152_gstrings));
+ 		break;
+ 	}
+ }
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 3b3fc7c9c91dc..f147d4feedb91 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -1623,8 +1623,13 @@ static int mac80211_hwsim_start(struct ieee80211_hw *hw)
+ static void mac80211_hwsim_stop(struct ieee80211_hw *hw)
+ {
+ 	struct mac80211_hwsim_data *data = hw->priv;
++
+ 	data->started = false;
+ 	hrtimer_cancel(&data->beacon_timer);
++
++	while (!skb_queue_empty(&data->pending))
++		ieee80211_free_txskb(hw, skb_dequeue(&data->pending));
++
+ 	wiphy_dbg(hw->wiphy, "%s\n", __func__);
+ }
+ 
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index d5d9ea864fe66..9e971fffeb6a3 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1874,11 +1874,21 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
+ 	int err;
+ 	int i, bars = 0;
+ 
+-	if (atomic_inc_return(&dev->enable_cnt) > 1) {
+-		pci_update_current_state(dev, dev->current_state);
+-		return 0;		/* already enabled */
++	/*
++	 * Power state could be unknown at this point, either due to a fresh
++	 * boot or a device removal call.  So get the current power state
++	 * so that things like MSI message writing will behave as expected
++	 * (e.g. if the device really is in D0 at enable time).
++	 */
++	if (dev->pm_cap) {
++		u16 pmcsr;
++		pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
++		dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK);
+ 	}
+ 
++	if (atomic_inc_return(&dev->enable_cnt) > 1)
++		return 0;		/* already enabled */
++
+ 	bridge = pci_upstream_bridge(dev);
+ 	if (bridge)
+ 		pci_enable_bridge(bridge);
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index 7d9bdedcd71bb..3af4430543dca 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -1229,7 +1229,7 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl,
+ 	struct device *dev = pctl->dev;
+ 	struct resource res;
+ 	int npins = STM32_GPIO_PINS_PER_BANK;
+-	int bank_nr, err;
++	int bank_nr, err, i = 0;
+ 
+ 	if (!IS_ERR(bank->rstc))
+ 		reset_control_deassert(bank->rstc);
+@@ -1251,9 +1251,14 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl,
+ 
+ 	of_property_read_string(np, "st,bank-name", &bank->gpio_chip.label);
+ 
+-	if (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, 0, &args)) {
++	if (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, i, &args)) {
+ 		bank_nr = args.args[1] / STM32_GPIO_PINS_PER_BANK;
+ 		bank->gpio_chip.base = args.args[1];
++
++		npins = args.args[2];
++		while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3,
++							 ++i, &args))
++			npins += args.args[2];
+ 	} else {
+ 		bank_nr = pctl->nbanks;
+ 		bank->gpio_chip.base = bank_nr * STM32_GPIO_PINS_PER_BANK;
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 20a6564f87d9f..01f87bcab3dd1 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1389,6 +1389,22 @@ static void sd_uninit_command(struct scsi_cmnd *SCpnt)
+ 	}
+ }
+ 
++static bool sd_need_revalidate(struct block_device *bdev,
++		struct scsi_disk *sdkp)
++{
++	if (sdkp->device->removable || sdkp->write_prot) {
++		if (bdev_check_media_change(bdev))
++			return true;
++	}
++
++	/*
++	 * Force a full rescan after ioctl(BLKRRPART).  While the disk state has
++	 * nothing to do with partitions, BLKRRPART is used to force a full
++	 * revalidate after things like a format for historical reasons.
++	 */
++	return test_bit(GD_NEED_PART_SCAN, &bdev->bd_disk->state);
++}
++
+ /**
+  *	sd_open - open a scsi disk device
+  *	@bdev: Block device of the scsi disk to open
+@@ -1425,10 +1441,8 @@ static int sd_open(struct block_device *bdev, fmode_t mode)
+ 	if (!scsi_block_when_processing_errors(sdev))
+ 		goto error_out;
+ 
+-	if (sdev->removable || sdkp->write_prot) {
+-		if (bdev_check_media_change(bdev))
+-			sd_revalidate_disk(bdev->bd_disk);
+-	}
++	if (sd_need_revalidate(bdev, sdkp))
++		sd_revalidate_disk(bdev->bd_disk);
+ 
+ 	/*
+ 	 * If the drive is empty, just let the open fail.
+diff --git a/drivers/spi/spi-nxp-fspi.c b/drivers/spi/spi-nxp-fspi.c
+index ab9035662717a..bcc0b5a3a459c 100644
+--- a/drivers/spi/spi-nxp-fspi.c
++++ b/drivers/spi/spi-nxp-fspi.c
+@@ -1033,12 +1033,6 @@ static int nxp_fspi_probe(struct platform_device *pdev)
+ 		goto err_put_ctrl;
+ 	}
+ 
+-	/* Clear potential interrupts */
+-	reg = fspi_readl(f, f->iobase + FSPI_INTR);
+-	if (reg)
+-		fspi_writel(f, reg, f->iobase + FSPI_INTR);
+-
+-
+ 	/* find the resources - controller memory mapped space */
+ 	if (is_acpi_node(f->dev->fwnode))
+ 		res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+@@ -1076,6 +1070,11 @@ static int nxp_fspi_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	/* Clear potential interrupts */
++	reg = fspi_readl(f, f->iobase + FSPI_INTR);
++	if (reg)
++		fspi_writel(f, reg, f->iobase + FSPI_INTR);
++
+ 	/* find the irq */
+ 	ret = platform_get_irq(pdev, 0);
+ 	if (ret < 0)
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index 35c83f65475b2..8b0507f69c156 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -1302,6 +1302,45 @@ ceph_find_incompatible(struct page *page)
+ 	return NULL;
+ }
+ 
++/**
++ * prep_noread_page - prep a page for writing without reading first
++ * @page: page being prepared
++ * @pos: starting position for the write
++ * @len: length of write
++ *
++ * In some cases, write_begin doesn't need to read at all:
++ * - full page write
++ * - file is currently zero-length
++ * - write that lies in a page that is completely beyond EOF
++ * - write that covers the the page from start to EOF or beyond it
++ *
++ * If any of these criteria are met, then zero out the unwritten parts
++ * of the page and return true. Otherwise, return false.
++ */
++static bool skip_page_read(struct page *page, loff_t pos, size_t len)
++{
++	struct inode *inode = page->mapping->host;
++	loff_t i_size = i_size_read(inode);
++	size_t offset = offset_in_page(pos);
++
++	/* Full page write */
++	if (offset == 0 && len >= PAGE_SIZE)
++		return true;
++
++	/* pos beyond last page in the file */
++	if (pos - offset >= i_size)
++		goto zero_out;
++
++	/* write that covers the whole page from start to EOF or beyond it */
++	if (offset == 0 && (pos + len) >= i_size)
++		goto zero_out;
++
++	return false;
++zero_out:
++	zero_user_segments(page, 0, offset, offset + len, PAGE_SIZE);
++	return true;
++}
++
+ /*
+  * We are only allowed to write into/dirty the page if the page is
+  * clean, or already dirty within the same snap context.
+@@ -1315,7 +1354,6 @@ static int ceph_write_begin(struct file *file, struct address_space *mapping,
+ 	struct ceph_snap_context *snapc;
+ 	struct page *page = NULL;
+ 	pgoff_t index = pos >> PAGE_SHIFT;
+-	int pos_in_page = pos & ~PAGE_MASK;
+ 	int r = 0;
+ 
+ 	dout("write_begin file %p inode %p page %p %d~%d\n", file, inode, page, (int)pos, (int)len);
+@@ -1350,19 +1388,9 @@ static int ceph_write_begin(struct file *file, struct address_space *mapping,
+ 			break;
+ 		}
+ 
+-		/*
+-		 * In some cases we don't need to read at all:
+-		 * - full page write
+-		 * - write that lies completely beyond EOF
+-		 * - write that covers the the page from start to EOF or beyond it
+-		 */
+-		if ((pos_in_page == 0 && len == PAGE_SIZE) ||
+-		    (pos >= i_size_read(inode)) ||
+-		    (pos_in_page == 0 && (pos + len) >= i_size_read(inode))) {
+-			zero_user_segments(page, 0, pos_in_page,
+-					   pos_in_page + len, PAGE_SIZE);
++		/* No need to read in some cases */
++		if (skip_page_read(page, pos, len))
+ 			break;
+-		}
+ 
+ 		/*
+ 		 * We need to read it. If we get back -EINPROGRESS, then the page was
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 209535d5b8d38..3d2e3dd4ee01d 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -578,6 +578,7 @@ static int ceph_finish_async_create(struct inode *dir, struct dentry *dentry,
+ 	struct ceph_inode_info *ci = ceph_inode(dir);
+ 	struct inode *inode;
+ 	struct timespec64 now;
++	struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(dir->i_sb);
+ 	struct ceph_vino vino = { .ino = req->r_deleg_ino,
+ 				  .snap = CEPH_NOSNAP };
+ 
+@@ -615,8 +616,10 @@ static int ceph_finish_async_create(struct inode *dir, struct dentry *dentry,
+ 
+ 	ceph_file_layout_to_legacy(lo, &in.layout);
+ 
++	down_read(&mdsc->snap_rwsem);
+ 	ret = ceph_fill_inode(inode, NULL, &iinfo, NULL, req->r_session,
+ 			      req->r_fmode, NULL);
++	up_read(&mdsc->snap_rwsem);
+ 	if (ret) {
+ 		dout("%s failed to fill inode: %d\n", __func__, ret);
+ 		ceph_dir_clear_complete(dir);
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 346fcdfcd3e91..57cd78e942c08 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -762,6 +762,8 @@ int ceph_fill_inode(struct inode *inode, struct page *locked_page,
+ 	bool new_version = false;
+ 	bool fill_inline = false;
+ 
++	lockdep_assert_held(&mdsc->snap_rwsem);
++
+ 	dout("%s %p ino %llx.%llx v %llu had %llu\n", __func__,
+ 	     inode, ceph_vinop(inode), le64_to_cpu(info->version),
+ 	     ci->i_version);
+diff --git a/fs/nilfs2/sysfs.c b/fs/nilfs2/sysfs.c
+index 303d71430bdd1..9c6c0e2e5880a 100644
+--- a/fs/nilfs2/sysfs.c
++++ b/fs/nilfs2/sysfs.c
+@@ -1053,6 +1053,7 @@ void nilfs_sysfs_delete_device_group(struct the_nilfs *nilfs)
+ 	nilfs_sysfs_delete_superblock_group(nilfs);
+ 	nilfs_sysfs_delete_segctor_group(nilfs);
+ 	kobject_del(&nilfs->ns_dev_kobj);
++	kobject_put(&nilfs->ns_dev_kobj);
+ 	kfree(nilfs->ns_dev_subgroups);
+ }
+ 
+diff --git a/include/keys/system_keyring.h b/include/keys/system_keyring.h
+index fb8b07daa9d15..875e002a41804 100644
+--- a/include/keys/system_keyring.h
++++ b/include/keys/system_keyring.h
+@@ -31,6 +31,7 @@ extern int restrict_link_by_builtin_and_secondary_trusted(
+ #define restrict_link_by_builtin_and_secondary_trusted restrict_link_by_builtin_trusted
+ #endif
+ 
++extern struct pkcs7_message *pkcs7;
+ #ifdef CONFIG_SYSTEM_BLACKLIST_KEYRING
+ extern int mark_hash_blacklisted(const char *hash);
+ extern int is_hash_blacklisted(const u8 *hash, size_t hash_len,
+@@ -49,6 +50,20 @@ static inline int is_binary_blacklisted(const u8 *hash, size_t hash_len)
+ }
+ #endif
+ 
++#ifdef CONFIG_SYSTEM_REVOCATION_LIST
++extern int add_key_to_revocation_list(const char *data, size_t size);
++extern int is_key_on_revocation_list(struct pkcs7_message *pkcs7);
++#else
++static inline int add_key_to_revocation_list(const char *data, size_t size)
++{
++	return 0;
++}
++static inline int is_key_on_revocation_list(struct pkcs7_message *pkcs7)
++{
++	return -ENOKEY;
++}
++#endif
++
+ #ifdef CONFIG_IMA_BLACKLIST_KEYRING
+ extern struct key *ima_blacklist_keyring;
+ 
+diff --git a/include/linux/debug_locks.h b/include/linux/debug_locks.h
+index 2915f56ad4214..edb5c186b0b7a 100644
+--- a/include/linux/debug_locks.h
++++ b/include/linux/debug_locks.h
+@@ -27,8 +27,10 @@ extern int debug_locks_off(void);
+ 	int __ret = 0;							\
+ 									\
+ 	if (!oops_in_progress && unlikely(c)) {				\
++		instrumentation_begin();				\
+ 		if (debug_locks_off() && !debug_locks_silent)		\
+ 			WARN(1, "DEBUG_LOCKS_WARN_ON(%s)", #c);		\
++		instrumentation_end();					\
+ 		__ret = 1;						\
+ 	}								\
+ 	__ret;								\
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index 0365aa97f8e73..ff55be0117397 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -297,6 +297,7 @@ struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
+ extern vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t orig_pmd);
+ 
+ extern struct page *huge_zero_page;
++extern unsigned long huge_zero_pfn;
+ 
+ static inline bool is_huge_zero_page(struct page *page)
+ {
+@@ -305,7 +306,7 @@ static inline bool is_huge_zero_page(struct page *page)
+ 
+ static inline bool is_huge_zero_pmd(pmd_t pmd)
+ {
+-	return is_huge_zero_page(pmd_page(pmd));
++	return READ_ONCE(huge_zero_pfn) == pmd_pfn(pmd) && pmd_present(pmd);
+ }
+ 
+ static inline bool is_huge_zero_pud(pud_t pud)
+@@ -451,6 +452,11 @@ static inline bool is_huge_zero_page(struct page *page)
+ 	return false;
+ }
+ 
++static inline bool is_huge_zero_pmd(pmd_t pmd)
++{
++	return false;
++}
++
+ static inline bool is_huge_zero_pud(pud_t pud)
+ {
+ 	return false;
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index b5807f23caf80..5b68c9787f7c2 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -628,17 +628,6 @@ static inline int hstate_index(struct hstate *h)
+ 	return h - hstates;
+ }
+ 
+-pgoff_t __basepage_index(struct page *page);
+-
+-/* Return page->index in PAGE_SIZE units */
+-static inline pgoff_t basepage_index(struct page *page)
+-{
+-	if (!PageCompound(page))
+-		return page->index;
+-
+-	return __basepage_index(page);
+-}
+-
+ extern int dissolve_free_huge_page(struct page *page);
+ extern int dissolve_free_huge_pages(unsigned long start_pfn,
+ 				    unsigned long end_pfn);
+@@ -871,11 +860,6 @@ static inline int hstate_index(struct hstate *h)
+ 	return 0;
+ }
+ 
+-static inline pgoff_t basepage_index(struct page *page)
+-{
+-	return page->index;
+-}
+-
+ static inline int dissolve_free_huge_page(struct page *page)
+ {
+ 	return 0;
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 5106db3ad1ce3..289c26f055cdd 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -1648,6 +1648,7 @@ struct zap_details {
+ 	struct address_space *check_mapping;	/* Check page->mapping if set */
+ 	pgoff_t	first_index;			/* Lowest page->index to unmap */
+ 	pgoff_t last_index;			/* Highest page->index to unmap */
++	struct page *single_page;		/* Locked page to be unmapped */
+ };
+ 
+ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+@@ -1695,6 +1696,7 @@ extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
+ extern int fixup_user_fault(struct mm_struct *mm,
+ 			    unsigned long address, unsigned int fault_flags,
+ 			    bool *unlocked);
++void unmap_mapping_page(struct page *page);
+ void unmap_mapping_pages(struct address_space *mapping,
+ 		pgoff_t start, pgoff_t nr, bool even_cows);
+ void unmap_mapping_range(struct address_space *mapping,
+@@ -1715,6 +1717,7 @@ static inline int fixup_user_fault(struct mm_struct *mm, unsigned long address,
+ 	BUG();
+ 	return -EFAULT;
+ }
++static inline void unmap_mapping_page(struct page *page) { }
+ static inline void unmap_mapping_pages(struct address_space *mapping,
+ 		pgoff_t start, pgoff_t nr, bool even_cows) { }
+ static inline void unmap_mapping_range(struct address_space *mapping,
+diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
+index 2ad72d2c8cc52..5d0767cb424aa 100644
+--- a/include/linux/mmdebug.h
++++ b/include/linux/mmdebug.h
+@@ -37,6 +37,18 @@ void dump_mm(const struct mm_struct *mm);
+ 			BUG();						\
+ 		}							\
+ 	} while (0)
++#define VM_WARN_ON_ONCE_PAGE(cond, page)	({			\
++	static bool __section(".data.once") __warned;			\
++	int __ret_warn_once = !!(cond);					\
++									\
++	if (unlikely(__ret_warn_once && !__warned)) {			\
++		dump_page(page, "VM_WARN_ON_ONCE_PAGE(" __stringify(cond)")");\
++		__warned = true;					\
++		WARN_ON(1);						\
++	}								\
++	unlikely(__ret_warn_once);					\
++})
++
+ #define VM_WARN_ON(cond) (void)WARN_ON(cond)
+ #define VM_WARN_ON_ONCE(cond) (void)WARN_ON_ONCE(cond)
+ #define VM_WARN_ONCE(cond, format...) (void)WARN_ONCE(cond, format)
+@@ -48,6 +60,7 @@ void dump_mm(const struct mm_struct *mm);
+ #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
+ #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
+ #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
++#define VM_WARN_ON_ONCE_PAGE(cond, page)  BUILD_BUG_ON_INVALID(cond)
+ #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
+ #define VM_WARN(cond, format...) BUILD_BUG_ON_INVALID(cond)
+ #endif
+diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
+index b032f094a7827..fcb3f040102af 100644
+--- a/include/linux/pagemap.h
++++ b/include/linux/pagemap.h
+@@ -501,7 +501,7 @@ static inline struct page *read_mapping_page(struct address_space *mapping,
+ }
+ 
+ /*
+- * Get index of the page with in radix-tree
++ * Get index of the page within radix-tree (but not for hugetlb pages).
+  * (TODO: remove once hugetlb pages will have ->index in PAGE_SIZE)
+  */
+ static inline pgoff_t page_to_index(struct page *page)
+@@ -520,15 +520,16 @@ static inline pgoff_t page_to_index(struct page *page)
+ 	return pgoff;
+ }
+ 
++extern pgoff_t hugetlb_basepage_index(struct page *page);
++
+ /*
+- * Get the offset in PAGE_SIZE.
+- * (TODO: hugepage should have ->index in PAGE_SIZE)
++ * Get the offset in PAGE_SIZE (even for hugetlb pages).
++ * (TODO: hugetlb pages should have ->index in PAGE_SIZE)
+  */
+ static inline pgoff_t page_to_pgoff(struct page *page)
+ {
+-	if (unlikely(PageHeadHuge(page)))
+-		return page->index << compound_order(page);
+-
++	if (unlikely(PageHuge(page)))
++		return hugetlb_basepage_index(page);
+ 	return page_to_index(page);
+ }
+ 
+diff --git a/include/linux/rmap.h b/include/linux/rmap.h
+index def5c62c93b3b..8d04e7deedc66 100644
+--- a/include/linux/rmap.h
++++ b/include/linux/rmap.h
+@@ -91,6 +91,7 @@ enum ttu_flags {
+ 
+ 	TTU_SPLIT_HUGE_PMD	= 0x4,	/* split huge PMD if any */
+ 	TTU_IGNORE_MLOCK	= 0x8,	/* ignore mlock */
++	TTU_SYNC		= 0x10,	/* avoid racy checks with PVMW_SYNC */
+ 	TTU_IGNORE_HWPOISON	= 0x20,	/* corrupted page is recoverable */
+ 	TTU_BATCH_FLUSH		= 0x40,	/* Batch TLB flushes where possible
+ 					 * and caller guarantees they will
+diff --git a/include/net/sock.h b/include/net/sock.h
+index f68184b8c0aa5..3c7addf951509 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1900,7 +1900,8 @@ static inline u32 net_tx_rndhash(void)
+ 
+ static inline void sk_set_txhash(struct sock *sk)
+ {
+-	sk->sk_txhash = net_tx_rndhash();
++	/* This pairs with READ_ONCE() in skb_set_hash_from_sk() */
++	WRITE_ONCE(sk->sk_txhash, net_tx_rndhash());
+ }
+ 
+ static inline bool sk_rethink_txhash(struct sock *sk)
+@@ -2172,9 +2173,12 @@ static inline void sock_poll_wait(struct file *filp, struct socket *sock,
+ 
+ static inline void skb_set_hash_from_sk(struct sk_buff *skb, struct sock *sk)
+ {
+-	if (sk->sk_txhash) {
++	/* This pairs with WRITE_ONCE() in sk_set_txhash() */
++	u32 txhash = READ_ONCE(sk->sk_txhash);
++
++	if (txhash) {
+ 		skb->l4_hash = 1;
+-		skb->hash = sk->sk_txhash;
++		skb->hash = txhash;
+ 	}
+ }
+ 
+@@ -2232,8 +2236,13 @@ struct sk_buff *sock_dequeue_err_skb(struct sock *sk);
+ static inline int sock_error(struct sock *sk)
+ {
+ 	int err;
+-	if (likely(!sk->sk_err))
++
++	/* Avoid an atomic operation for the common case.
++	 * This is racy since another cpu/thread can change sk_err under us.
++	 */
++	if (likely(data_race(!sk->sk_err)))
+ 		return 0;
++
+ 	err = xchg(&sk->sk_err, 0);
+ 	return -err;
+ }
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 0f61b14b00995..0ed0e1f215c75 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -667,6 +667,9 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
+ 	if (orig_addr == INVALID_PHYS_ADDR)
+ 		return;
+ 
++	orig_addr += (tlb_addr & (IO_TLB_SIZE - 1)) -
++		swiotlb_align_offset(hwdev, orig_addr);
++
+ 	switch (target) {
+ 	case SYNC_FOR_CPU:
+ 		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+diff --git a/kernel/futex.c b/kernel/futex.c
+index 3136aba177720..98a6e1b80bfe4 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -35,7 +35,6 @@
+ #include <linux/jhash.h>
+ #include <linux/pagemap.h>
+ #include <linux/syscalls.h>
+-#include <linux/hugetlb.h>
+ #include <linux/freezer.h>
+ #include <linux/memblock.h>
+ #include <linux/fault-inject.h>
+@@ -652,7 +651,7 @@ again:
+ 
+ 		key->both.offset |= FUT_OFF_INODE; /* inode-based key */
+ 		key->shared.i_seq = get_inode_sequence_number(inode);
+-		key->shared.pgoff = basepage_index(tail);
++		key->shared.pgoff = page_to_pgoff(tail);
+ 		rcu_read_unlock();
+ 	}
+ 
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 5edf7e19ab262..36be4364b313a 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -1044,8 +1044,38 @@ void kthread_flush_work(struct kthread_work *work)
+ EXPORT_SYMBOL_GPL(kthread_flush_work);
+ 
+ /*
+- * This function removes the work from the worker queue. Also it makes sure
+- * that it won't get queued later via the delayed work's timer.
++ * Make sure that the timer is neither set nor running and could
++ * not manipulate the work list_head any longer.
++ *
++ * The function is called under worker->lock. The lock is temporary
++ * released but the timer can't be set again in the meantime.
++ */
++static void kthread_cancel_delayed_work_timer(struct kthread_work *work,
++					      unsigned long *flags)
++{
++	struct kthread_delayed_work *dwork =
++		container_of(work, struct kthread_delayed_work, work);
++	struct kthread_worker *worker = work->worker;
++
++	/*
++	 * del_timer_sync() must be called to make sure that the timer
++	 * callback is not running. The lock must be temporary released
++	 * to avoid a deadlock with the callback. In the meantime,
++	 * any queuing is blocked by setting the canceling counter.
++	 */
++	work->canceling++;
++	raw_spin_unlock_irqrestore(&worker->lock, *flags);
++	del_timer_sync(&dwork->timer);
++	raw_spin_lock_irqsave(&worker->lock, *flags);
++	work->canceling--;
++}
++
++/*
++ * This function removes the work from the worker queue.
++ *
++ * It is called under worker->lock. The caller must make sure that
++ * the timer used by delayed work is not running, e.g. by calling
++ * kthread_cancel_delayed_work_timer().
+  *
+  * The work might still be in use when this function finishes. See the
+  * current_work proceed by the worker.
+@@ -1053,28 +1083,8 @@ EXPORT_SYMBOL_GPL(kthread_flush_work);
+  * Return: %true if @work was pending and successfully canceled,
+  *	%false if @work was not pending
+  */
+-static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork,
+-				  unsigned long *flags)
++static bool __kthread_cancel_work(struct kthread_work *work)
+ {
+-	/* Try to cancel the timer if exists. */
+-	if (is_dwork) {
+-		struct kthread_delayed_work *dwork =
+-			container_of(work, struct kthread_delayed_work, work);
+-		struct kthread_worker *worker = work->worker;
+-
+-		/*
+-		 * del_timer_sync() must be called to make sure that the timer
+-		 * callback is not running. The lock must be temporary released
+-		 * to avoid a deadlock with the callback. In the meantime,
+-		 * any queuing is blocked by setting the canceling counter.
+-		 */
+-		work->canceling++;
+-		raw_spin_unlock_irqrestore(&worker->lock, *flags);
+-		del_timer_sync(&dwork->timer);
+-		raw_spin_lock_irqsave(&worker->lock, *flags);
+-		work->canceling--;
+-	}
+-
+ 	/*
+ 	 * Try to remove the work from a worker list. It might either
+ 	 * be from worker->work_list or from worker->delayed_work_list.
+@@ -1127,11 +1137,23 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
+ 	/* Work must not be used with >1 worker, see kthread_queue_work() */
+ 	WARN_ON_ONCE(work->worker != worker);
+ 
+-	/* Do not fight with another command that is canceling this work. */
++	/*
++	 * Temporary cancel the work but do not fight with another command
++	 * that is canceling the work as well.
++	 *
++	 * It is a bit tricky because of possible races with another
++	 * mod_delayed_work() and cancel_delayed_work() callers.
++	 *
++	 * The timer must be canceled first because worker->lock is released
++	 * when doing so. But the work can be removed from the queue (list)
++	 * only when it can be queued again so that the return value can
++	 * be used for reference counting.
++	 */
++	kthread_cancel_delayed_work_timer(work, &flags);
+ 	if (work->canceling)
+ 		goto out;
++	ret = __kthread_cancel_work(work);
+ 
+-	ret = __kthread_cancel_work(work, true, &flags);
+ fast_queue:
+ 	__kthread_queue_delayed_work(worker, dwork, delay);
+ out:
+@@ -1153,7 +1175,10 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
+ 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
+ 	WARN_ON_ONCE(work->worker != worker);
+ 
+-	ret = __kthread_cancel_work(work, is_dwork, &flags);
++	if (is_dwork)
++		kthread_cancel_delayed_work_timer(work, &flags);
++
++	ret = __kthread_cancel_work(work);
+ 
+ 	if (worker->current_work != work)
+ 		goto out_fast;
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 858b96b438cee..cdca007551e71 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -842,7 +842,7 @@ static int count_matching_names(struct lock_class *new_class)
+ }
+ 
+ /* used from NMI context -- must be lockless */
+-static __always_inline struct lock_class *
++static noinstr struct lock_class *
+ look_up_lock_class(const struct lockdep_map *lock, unsigned int subclass)
+ {
+ 	struct lockdep_subclass_key *key;
+@@ -850,12 +850,14 @@ look_up_lock_class(const struct lockdep_map *lock, unsigned int subclass)
+ 	struct lock_class *class;
+ 
+ 	if (unlikely(subclass >= MAX_LOCKDEP_SUBCLASSES)) {
++		instrumentation_begin();
+ 		debug_locks_off();
+ 		printk(KERN_ERR
+ 			"BUG: looking up invalid subclass: %u\n", subclass);
+ 		printk(KERN_ERR
+ 			"turning off the locking correctness validator.\n");
+ 		dump_stack();
++		instrumentation_end();
+ 		return NULL;
+ 	}
+ 
+diff --git a/kernel/module.c b/kernel/module.c
+index 908d46abe1656..185b2655bc206 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -272,9 +272,18 @@ static void module_assert_mutex_or_preempt(void)
+ #endif
+ }
+ 
++#ifdef CONFIG_MODULE_SIG
+ static bool sig_enforce = IS_ENABLED(CONFIG_MODULE_SIG_FORCE);
+ module_param(sig_enforce, bool_enable_only, 0644);
+ 
++void set_module_sig_enforced(void)
++{
++	sig_enforce = true;
++}
++#else
++#define sig_enforce false
++#endif
++
+ /*
+  * Export sig_enforce kernel cmdline parameter to allow other subsystems rely
+  * on that instead of directly to CONFIG_MODULE_SIG_FORCE config.
+@@ -285,11 +294,6 @@ bool is_module_sig_enforced(void)
+ }
+ EXPORT_SYMBOL(is_module_sig_enforced);
+ 
+-void set_module_sig_enforced(void)
+-{
+-	sig_enforce = true;
+-}
+-
+ /* Block module loading/unloading? */
+ int modules_disabled = 0;
+ core_param(nomodule, modules_disabled, bint, 0);
+diff --git a/lib/debug_locks.c b/lib/debug_locks.c
+index 06d3135bd184c..a75ee30b77cb8 100644
+--- a/lib/debug_locks.c
++++ b/lib/debug_locks.c
+@@ -36,7 +36,7 @@ EXPORT_SYMBOL_GPL(debug_locks_silent);
+ /*
+  * Generic 'turn off all lock debugging' function:
+  */
+-noinstr int debug_locks_off(void)
++int debug_locks_off(void)
+ {
+ 	if (debug_locks && __debug_locks_off()) {
+ 		if (!debug_locks_silent) {
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index d9ade23ac2b22..6301ecc1f679a 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -61,6 +61,7 @@ static struct shrinker deferred_split_shrinker;
+ 
+ static atomic_t huge_zero_refcount;
+ struct page *huge_zero_page __read_mostly;
++unsigned long huge_zero_pfn __read_mostly = ~0UL;
+ 
+ bool transparent_hugepage_enabled(struct vm_area_struct *vma)
+ {
+@@ -97,6 +98,7 @@ retry:
+ 		__free_pages(zero_page, compound_order(zero_page));
+ 		goto retry;
+ 	}
++	WRITE_ONCE(huge_zero_pfn, page_to_pfn(zero_page));
+ 
+ 	/* We take additional reference here. It will be put back by shrinker */
+ 	atomic_set(&huge_zero_refcount, 2);
+@@ -146,6 +148,7 @@ static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
+ 	if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
+ 		struct page *zero_page = xchg(&huge_zero_page, NULL);
+ 		BUG_ON(zero_page == NULL);
++		WRITE_ONCE(huge_zero_pfn, ~0UL);
+ 		__free_pages(zero_page, compound_order(zero_page));
+ 		return HPAGE_PMD_NR;
+ 	}
+@@ -2031,7 +2034,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
+ 	count_vm_event(THP_SPLIT_PMD);
+ 
+ 	if (!vma_is_anonymous(vma)) {
+-		_pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd);
++		old_pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd);
+ 		/*
+ 		 * We are going to unmap this huge page. So
+ 		 * just go ahead and zap it
+@@ -2040,16 +2043,25 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
+ 			zap_deposited_table(mm, pmd);
+ 		if (vma_is_special_huge(vma))
+ 			return;
+-		page = pmd_page(_pmd);
+-		if (!PageDirty(page) && pmd_dirty(_pmd))
+-			set_page_dirty(page);
+-		if (!PageReferenced(page) && pmd_young(_pmd))
+-			SetPageReferenced(page);
+-		page_remove_rmap(page, true);
+-		put_page(page);
++		if (unlikely(is_pmd_migration_entry(old_pmd))) {
++			swp_entry_t entry;
++
++			entry = pmd_to_swp_entry(old_pmd);
++			page = migration_entry_to_page(entry);
++		} else {
++			page = pmd_page(old_pmd);
++			if (!PageDirty(page) && pmd_dirty(old_pmd))
++				set_page_dirty(page);
++			if (!PageReferenced(page) && pmd_young(old_pmd))
++				SetPageReferenced(page);
++			page_remove_rmap(page, true);
++			put_page(page);
++		}
+ 		add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
+ 		return;
+-	} else if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
++	}
++
++	if (is_huge_zero_pmd(*pmd)) {
+ 		/*
+ 		 * FIXME: Do we want to invalidate secondary mmu by calling
+ 		 * mmu_notifier_invalidate_range() see comments below inside
+@@ -2330,17 +2342,17 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
+ 
+ static void unmap_page(struct page *page)
+ {
+-	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK |
++	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_SYNC |
+ 		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
+-	bool unmap_success;
+ 
+ 	VM_BUG_ON_PAGE(!PageHead(page), page);
+ 
+ 	if (PageAnon(page))
+ 		ttu_flags |= TTU_SPLIT_FREEZE;
+ 
+-	unmap_success = try_to_unmap(page, ttu_flags);
+-	VM_BUG_ON_PAGE(!unmap_success, page);
++	try_to_unmap(page, ttu_flags);
++
++	VM_WARN_ON_ONCE_PAGE(page_mapped(page), page);
+ }
+ 
+ static void remap_page(struct page *page, unsigned int nr)
+@@ -2630,7 +2642,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
+ 	struct deferred_split *ds_queue = get_deferred_split_queue(head);
+ 	struct anon_vma *anon_vma = NULL;
+ 	struct address_space *mapping = NULL;
+-	int count, mapcount, extra_pins, ret;
++	int extra_pins, ret;
+ 	unsigned long flags;
+ 	pgoff_t end;
+ 
+@@ -2690,7 +2702,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
+ 	}
+ 
+ 	unmap_page(head);
+-	VM_BUG_ON_PAGE(compound_mapcount(head), head);
+ 
+ 	/* prevent PageLRU to go away from under us, and freeze lru stats */
+ 	spin_lock_irqsave(&pgdata->lru_lock, flags);
+@@ -2709,9 +2720,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
+ 
+ 	/* Prevent deferred_split_scan() touching ->_refcount */
+ 	spin_lock(&ds_queue->split_queue_lock);
+-	count = page_count(head);
+-	mapcount = total_mapcount(head);
+-	if (!mapcount && page_ref_freeze(head, 1 + extra_pins)) {
++	if (page_ref_freeze(head, 1 + extra_pins)) {
+ 		if (!list_empty(page_deferred_list(head))) {
+ 			ds_queue->split_queue_len--;
+ 			list_del(page_deferred_list(head));
+@@ -2727,16 +2736,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
+ 		__split_huge_page(page, list, end, flags);
+ 		ret = 0;
+ 	} else {
+-		if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
+-			pr_alert("total_mapcount: %u, page_count(): %u\n",
+-					mapcount, count);
+-			if (PageTail(page))
+-				dump_page(head, NULL);
+-			dump_page(page, "total_mapcount(head) > 0");
+-			BUG();
+-		}
+ 		spin_unlock(&ds_queue->split_queue_lock);
+-fail:		if (mapping)
++fail:
++		if (mapping)
+ 			xa_unlock(&mapping->i_pages);
+ 		spin_unlock_irqrestore(&pgdata->lru_lock, flags);
+ 		remap_page(head, thp_nr_pages(head));
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index bc1006a327338..d4f89c2f95446 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1635,15 +1635,12 @@ struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage)
+ 	return NULL;
+ }
+ 
+-pgoff_t __basepage_index(struct page *page)
++pgoff_t hugetlb_basepage_index(struct page *page)
+ {
+ 	struct page *page_head = compound_head(page);
+ 	pgoff_t index = page_index(page_head);
+ 	unsigned long compound_idx;
+ 
+-	if (!PageHuge(page_head))
+-		return page_index(page);
+-
+ 	if (compound_order(page_head) >= MAX_ORDER)
+ 		compound_idx = page_to_pfn(page) - page_to_pfn(page_head);
+ 	else
+diff --git a/mm/internal.h b/mm/internal.h
+index c43ccdddb0f6e..840b8a330b9ac 100644
+--- a/mm/internal.h
++++ b/mm/internal.h
+@@ -379,27 +379,52 @@ static inline void mlock_migrate_page(struct page *newpage, struct page *page)
+ extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
+ 
+ /*
+- * At what user virtual address is page expected in @vma?
++ * At what user virtual address is page expected in vma?
++ * Returns -EFAULT if all of the page is outside the range of vma.
++ * If page is a compound head, the entire compound page is considered.
+  */
+ static inline unsigned long
+-__vma_address(struct page *page, struct vm_area_struct *vma)
++vma_address(struct page *page, struct vm_area_struct *vma)
+ {
+-	pgoff_t pgoff = page_to_pgoff(page);
+-	return vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
++	pgoff_t pgoff;
++	unsigned long address;
++
++	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
++	pgoff = page_to_pgoff(page);
++	if (pgoff >= vma->vm_pgoff) {
++		address = vma->vm_start +
++			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
++		/* Check for address beyond vma (or wrapped through 0?) */
++		if (address < vma->vm_start || address >= vma->vm_end)
++			address = -EFAULT;
++	} else if (PageHead(page) &&
++		   pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
++		/* Test above avoids possibility of wrap to 0 on 32-bit */
++		address = vma->vm_start;
++	} else {
++		address = -EFAULT;
++	}
++	return address;
+ }
+ 
++/*
++ * Then at what user virtual address will none of the page be found in vma?
++ * Assumes that vma_address() already returned a good starting address.
++ * If page is a compound head, the entire compound page is considered.
++ */
+ static inline unsigned long
+-vma_address(struct page *page, struct vm_area_struct *vma)
++vma_address_end(struct page *page, struct vm_area_struct *vma)
+ {
+-	unsigned long start, end;
+-
+-	start = __vma_address(page, vma);
+-	end = start + thp_size(page) - PAGE_SIZE;
+-
+-	/* page should be within @vma mapping range */
+-	VM_BUG_ON_VMA(end < vma->vm_start || start >= vma->vm_end, vma);
+-
+-	return max(start, vma->vm_start);
++	pgoff_t pgoff;
++	unsigned long address;
++
++	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
++	pgoff = page_to_pgoff(page) + compound_nr(page);
++	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
++	/* Check for address beyond vma (or wrapped through 0?) */
++	if (address < vma->vm_start || address > vma->vm_end)
++		address = vma->vm_end;
++	return address;
+ }
+ 
+ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
+diff --git a/mm/memory.c b/mm/memory.c
+index b70bd3ba33888..eb31b3e4ef93b 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1355,7 +1355,18 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
+ 			else if (zap_huge_pmd(tlb, vma, pmd, addr))
+ 				goto next;
+ 			/* fall through */
++		} else if (details && details->single_page &&
++			   PageTransCompound(details->single_page) &&
++			   next - addr == HPAGE_PMD_SIZE && pmd_none(*pmd)) {
++			spinlock_t *ptl = pmd_lock(tlb->mm, pmd);
++			/*
++			 * Take and drop THP pmd lock so that we cannot return
++			 * prematurely, while zap_huge_pmd() has cleared *pmd,
++			 * but not yet decremented compound_mapcount().
++			 */
++			spin_unlock(ptl);
+ 		}
++
+ 		/*
+ 		 * Here there can be other concurrent MADV_DONTNEED or
+ 		 * trans huge page faults running, and if the pmd is
+@@ -3185,6 +3196,36 @@ static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
+ 	}
+ }
+ 
++/**
++ * unmap_mapping_page() - Unmap single page from processes.
++ * @page: The locked page to be unmapped.
++ *
++ * Unmap this page from any userspace process which still has it mmaped.
++ * Typically, for efficiency, the range of nearby pages has already been
++ * unmapped by unmap_mapping_pages() or unmap_mapping_range().  But once
++ * truncation or invalidation holds the lock on a page, it may find that
++ * the page has been remapped again: and then uses unmap_mapping_page()
++ * to unmap it finally.
++ */
++void unmap_mapping_page(struct page *page)
++{
++	struct address_space *mapping = page->mapping;
++	struct zap_details details = { };
++
++	VM_BUG_ON(!PageLocked(page));
++	VM_BUG_ON(PageTail(page));
++
++	details.check_mapping = mapping;
++	details.first_index = page->index;
++	details.last_index = page->index + thp_nr_pages(page) - 1;
++	details.single_page = page;
++
++	i_mmap_lock_write(mapping);
++	if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)))
++		unmap_mapping_range_tree(&mapping->i_mmap, &details);
++	i_mmap_unlock_write(mapping);
++}
++
+ /**
+  * unmap_mapping_pages() - Unmap pages from processes.
+  * @mapping: The address space containing pages to be unmapped.
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 7982256a51250..278e6f3fa62ce 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -326,6 +326,7 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
+ 		goto out;
+ 
+ 	page = migration_entry_to_page(entry);
++	page = compound_head(page);
+ 
+ 	/*
+ 	 * Once page cache replacement of page migration started, page_count
+diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
+index 5e77b269c330a..610ebbee787cc 100644
+--- a/mm/page_vma_mapped.c
++++ b/mm/page_vma_mapped.c
+@@ -115,6 +115,13 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
+ 	return pfn_is_match(pvmw->page, pfn);
+ }
+ 
++static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
++{
++	pvmw->address = (pvmw->address + size) & ~(size - 1);
++	if (!pvmw->address)
++		pvmw->address = ULONG_MAX;
++}
++
+ /**
+  * page_vma_mapped_walk - check if @pvmw->page is mapped in @pvmw->vma at
+  * @pvmw->address
+@@ -143,6 +150,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
+ {
+ 	struct mm_struct *mm = pvmw->vma->vm_mm;
+ 	struct page *page = pvmw->page;
++	unsigned long end;
+ 	pgd_t *pgd;
+ 	p4d_t *p4d;
+ 	pud_t *pud;
+@@ -152,10 +160,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
+ 	if (pvmw->pmd && !pvmw->pte)
+ 		return not_found(pvmw);
+ 
+-	if (pvmw->pte)
+-		goto next_pte;
++	if (unlikely(PageHuge(page))) {
++		/* The only possible mapping was handled on last iteration */
++		if (pvmw->pte)
++			return not_found(pvmw);
+ 
+-	if (unlikely(PageHuge(pvmw->page))) {
+ 		/* when pud is not present, pte will be NULL */
+ 		pvmw->pte = huge_pte_offset(mm, pvmw->address, page_size(page));
+ 		if (!pvmw->pte)
+@@ -167,78 +176,108 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
+ 			return not_found(pvmw);
+ 		return true;
+ 	}
+-restart:
+-	pgd = pgd_offset(mm, pvmw->address);
+-	if (!pgd_present(*pgd))
+-		return false;
+-	p4d = p4d_offset(pgd, pvmw->address);
+-	if (!p4d_present(*p4d))
+-		return false;
+-	pud = pud_offset(p4d, pvmw->address);
+-	if (!pud_present(*pud))
+-		return false;
+-	pvmw->pmd = pmd_offset(pud, pvmw->address);
++
+ 	/*
+-	 * Make sure the pmd value isn't cached in a register by the
+-	 * compiler and used as a stale value after we've observed a
+-	 * subsequent update.
++	 * Seek to next pte only makes sense for THP.
++	 * But more important than that optimization, is to filter out
++	 * any PageKsm page: whose page->index misleads vma_address()
++	 * and vma_address_end() to disaster.
+ 	 */
+-	pmde = READ_ONCE(*pvmw->pmd);
+-	if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
+-		pvmw->ptl = pmd_lock(mm, pvmw->pmd);
+-		if (likely(pmd_trans_huge(*pvmw->pmd))) {
+-			if (pvmw->flags & PVMW_MIGRATION)
+-				return not_found(pvmw);
+-			if (pmd_page(*pvmw->pmd) != page)
+-				return not_found(pvmw);
+-			return true;
+-		} else if (!pmd_present(*pvmw->pmd)) {
+-			if (thp_migration_supported()) {
+-				if (!(pvmw->flags & PVMW_MIGRATION))
++	end = PageTransCompound(page) ?
++		vma_address_end(page, pvmw->vma) :
++		pvmw->address + PAGE_SIZE;
++	if (pvmw->pte)
++		goto next_pte;
++restart:
++	do {
++		pgd = pgd_offset(mm, pvmw->address);
++		if (!pgd_present(*pgd)) {
++			step_forward(pvmw, PGDIR_SIZE);
++			continue;
++		}
++		p4d = p4d_offset(pgd, pvmw->address);
++		if (!p4d_present(*p4d)) {
++			step_forward(pvmw, P4D_SIZE);
++			continue;
++		}
++		pud = pud_offset(p4d, pvmw->address);
++		if (!pud_present(*pud)) {
++			step_forward(pvmw, PUD_SIZE);
++			continue;
++		}
++
++		pvmw->pmd = pmd_offset(pud, pvmw->address);
++		/*
++		 * Make sure the pmd value isn't cached in a register by the
++		 * compiler and used as a stale value after we've observed a
++		 * subsequent update.
++		 */
++		pmde = READ_ONCE(*pvmw->pmd);
++
++		if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
++			pvmw->ptl = pmd_lock(mm, pvmw->pmd);
++			pmde = *pvmw->pmd;
++			if (likely(pmd_trans_huge(pmde))) {
++				if (pvmw->flags & PVMW_MIGRATION)
+ 					return not_found(pvmw);
+-				if (is_migration_entry(pmd_to_swp_entry(*pvmw->pmd))) {
+-					swp_entry_t entry = pmd_to_swp_entry(*pvmw->pmd);
++				if (pmd_page(pmde) != page)
++					return not_found(pvmw);
++				return true;
++			}
++			if (!pmd_present(pmde)) {
++				swp_entry_t entry;
+ 
+-					if (migration_entry_to_page(entry) != page)
+-						return not_found(pvmw);
+-					return true;
+-				}
++				if (!thp_migration_supported() ||
++				    !(pvmw->flags & PVMW_MIGRATION))
++					return not_found(pvmw);
++				entry = pmd_to_swp_entry(pmde);
++				if (!is_migration_entry(entry) ||
++				    migration_entry_to_page(entry) != page)
++					return not_found(pvmw);
++				return true;
+ 			}
+-			return not_found(pvmw);
+-		} else {
+ 			/* THP pmd was split under us: handle on pte level */
+ 			spin_unlock(pvmw->ptl);
+ 			pvmw->ptl = NULL;
++		} else if (!pmd_present(pmde)) {
++			/*
++			 * If PVMW_SYNC, take and drop THP pmd lock so that we
++			 * cannot return prematurely, while zap_huge_pmd() has
++			 * cleared *pmd but not decremented compound_mapcount().
++			 */
++			if ((pvmw->flags & PVMW_SYNC) &&
++			    PageTransCompound(page)) {
++				spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
++
++				spin_unlock(ptl);
++			}
++			step_forward(pvmw, PMD_SIZE);
++			continue;
+ 		}
+-	} else if (!pmd_present(pmde)) {
+-		return false;
+-	}
+-	if (!map_pte(pvmw))
+-		goto next_pte;
+-	while (1) {
++		if (!map_pte(pvmw))
++			goto next_pte;
++this_pte:
+ 		if (check_pte(pvmw))
+ 			return true;
+ next_pte:
+-		/* Seek to next pte only makes sense for THP */
+-		if (!PageTransHuge(pvmw->page) || PageHuge(pvmw->page))
+-			return not_found(pvmw);
+ 		do {
+ 			pvmw->address += PAGE_SIZE;
+-			if (pvmw->address >= pvmw->vma->vm_end ||
+-			    pvmw->address >=
+-					__vma_address(pvmw->page, pvmw->vma) +
+-					thp_size(pvmw->page))
++			if (pvmw->address >= end)
+ 				return not_found(pvmw);
+ 			/* Did we cross page table boundary? */
+-			if (pvmw->address % PMD_SIZE == 0) {
+-				pte_unmap(pvmw->pte);
++			if ((pvmw->address & (PMD_SIZE - PAGE_SIZE)) == 0) {
+ 				if (pvmw->ptl) {
+ 					spin_unlock(pvmw->ptl);
+ 					pvmw->ptl = NULL;
+ 				}
++				pte_unmap(pvmw->pte);
++				pvmw->pte = NULL;
+ 				goto restart;
+-			} else {
+-				pvmw->pte++;
++			}
++			pvmw->pte++;
++			if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
++				pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
++				spin_lock(pvmw->ptl);
+ 			}
+ 		} while (pte_none(*pvmw->pte));
+ 
+@@ -246,7 +285,10 @@ next_pte:
+ 			pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
+ 			spin_lock(pvmw->ptl);
+ 		}
+-	}
++		goto this_pte;
++	} while (pvmw->address < end);
++
++	return false;
+ }
+ 
+ /**
+@@ -265,14 +307,10 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
+ 		.vma = vma,
+ 		.flags = PVMW_SYNC,
+ 	};
+-	unsigned long start, end;
+-
+-	start = __vma_address(page, vma);
+-	end = start + thp_size(page) - PAGE_SIZE;
+ 
+-	if (unlikely(end < vma->vm_start || start >= vma->vm_end))
++	pvmw.address = vma_address(page, vma);
++	if (pvmw.address == -EFAULT)
+ 		return 0;
+-	pvmw.address = max(start, vma->vm_start);
+ 	if (!page_vma_mapped_walk(&pvmw))
+ 		return 0;
+ 	page_vma_mapped_walk_done(&pvmw);
+diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
+index 9578db83e312f..4e640baf97948 100644
+--- a/mm/pgtable-generic.c
++++ b/mm/pgtable-generic.c
+@@ -135,8 +135,8 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
+ {
+ 	pmd_t pmd;
+ 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+-	VM_BUG_ON((pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
+-			   !pmd_devmap(*pmdp)) || !pmd_present(*pmdp));
++	VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
++			   !pmd_devmap(*pmdp));
+ 	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
+ 	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+ 	return pmd;
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 6657000b18d41..14f84f70c5571 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -700,7 +700,6 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
+  */
+ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
+ {
+-	unsigned long address;
+ 	if (PageAnon(page)) {
+ 		struct anon_vma *page__anon_vma = page_anon_vma(page);
+ 		/*
+@@ -710,15 +709,13 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
+ 		if (!vma->anon_vma || !page__anon_vma ||
+ 		    vma->anon_vma->root != page__anon_vma->root)
+ 			return -EFAULT;
+-	} else if (page->mapping) {
+-		if (!vma->vm_file || vma->vm_file->f_mapping != page->mapping)
+-			return -EFAULT;
+-	} else
++	} else if (!vma->vm_file) {
+ 		return -EFAULT;
+-	address = __vma_address(page, vma);
+-	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
++	} else if (vma->vm_file->f_mapping != compound_head(page)->mapping) {
+ 		return -EFAULT;
+-	return address;
++	}
++
++	return vma_address(page, vma);
+ }
+ 
+ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
+@@ -912,7 +909,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
+ 	 */
+ 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
+ 				0, vma, vma->vm_mm, address,
+-				min(vma->vm_end, address + page_size(page)));
++				vma_address_end(page, vma));
+ 	mmu_notifier_invalidate_range_start(&range);
+ 
+ 	while (page_vma_mapped_walk(&pvmw)) {
+@@ -1385,6 +1382,15 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ 	struct mmu_notifier_range range;
+ 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
+ 
++	/*
++	 * When racing against e.g. zap_pte_range() on another cpu,
++	 * in between its ptep_get_and_clear_full() and page_remove_rmap(),
++	 * try_to_unmap() may return false when it is about to become true,
++	 * if page table locking is skipped: use TTU_SYNC to wait for that.
++	 */
++	if (flags & TTU_SYNC)
++		pvmw.flags = PVMW_SYNC;
++
+ 	/* munlock has nothing to gain from examining un-locked vmas */
+ 	if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED))
+ 		return true;
+@@ -1406,9 +1412,10 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ 	 * Note that the page can not be free in this function as call of
+ 	 * try_to_unmap() must hold a reference on the page.
+ 	 */
++	range.end = PageKsm(page) ?
++			address + PAGE_SIZE : vma_address_end(page, vma);
+ 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+-				address,
+-				min(vma->vm_end, address + page_size(page)));
++				address, range.end);
+ 	if (PageHuge(page)) {
+ 		/*
+ 		 * If sharing is possible, start and end will be adjusted
+@@ -1716,9 +1723,9 @@ static bool invalid_migration_vma(struct vm_area_struct *vma, void *arg)
+ 	return vma_is_temporary_stack(vma);
+ }
+ 
+-static int page_mapcount_is_zero(struct page *page)
++static int page_not_mapped(struct page *page)
+ {
+-	return !total_mapcount(page);
++	return !page_mapped(page);
+ }
+ 
+ /**
+@@ -1736,7 +1743,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
+ 	struct rmap_walk_control rwc = {
+ 		.rmap_one = try_to_unmap_one,
+ 		.arg = (void *)flags,
+-		.done = page_mapcount_is_zero,
++		.done = page_not_mapped,
+ 		.anon_lock = page_lock_anon_vma_read,
+ 	};
+ 
+@@ -1757,14 +1764,15 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
+ 	else
+ 		rmap_walk(page, &rwc);
+ 
+-	return !page_mapcount(page) ? true : false;
++	/*
++	 * When racing against e.g. zap_pte_range() on another cpu,
++	 * in between its ptep_get_and_clear_full() and page_remove_rmap(),
++	 * try_to_unmap() may return false when it is about to become true,
++	 * if page table locking is skipped: use TTU_SYNC to wait for that.
++	 */
++	return !page_mapcount(page);
+ }
+ 
+-static int page_not_mapped(struct page *page)
+-{
+-	return !page_mapped(page);
+-};
+-
+ /**
+  * try_to_munlock - try to munlock a page
+  * @page: the page to be munlocked
+@@ -1859,6 +1867,7 @@ static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
+ 		struct vm_area_struct *vma = avc->vma;
+ 		unsigned long address = vma_address(page, vma);
+ 
++		VM_BUG_ON_VMA(address == -EFAULT, vma);
+ 		cond_resched();
+ 
+ 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
+@@ -1913,6 +1922,7 @@ static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
+ 			pgoff_start, pgoff_end) {
+ 		unsigned long address = vma_address(page, vma);
+ 
++		VM_BUG_ON_VMA(address == -EFAULT, vma);
+ 		cond_resched();
+ 
+ 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
+diff --git a/mm/truncate.c b/mm/truncate.c
+index 960edf5803ca9..8914ca4ce4b1e 100644
+--- a/mm/truncate.c
++++ b/mm/truncate.c
+@@ -173,13 +173,10 @@ void do_invalidatepage(struct page *page, unsigned int offset,
+  * its lock, b) when a concurrent invalidate_mapping_pages got there first and
+  * c) when tmpfs swizzles a page between a tmpfs inode and swapper_space.
+  */
+-static void
+-truncate_cleanup_page(struct address_space *mapping, struct page *page)
++static void truncate_cleanup_page(struct page *page)
+ {
+-	if (page_mapped(page)) {
+-		unsigned int nr = thp_nr_pages(page);
+-		unmap_mapping_pages(mapping, page->index, nr, false);
+-	}
++	if (page_mapped(page))
++		unmap_mapping_page(page);
+ 
+ 	if (page_has_private(page))
+ 		do_invalidatepage(page, 0, thp_size(page));
+@@ -224,7 +221,7 @@ int truncate_inode_page(struct address_space *mapping, struct page *page)
+ 	if (page->mapping != mapping)
+ 		return -EIO;
+ 
+-	truncate_cleanup_page(mapping, page);
++	truncate_cleanup_page(page);
+ 	delete_from_page_cache(page);
+ 	return 0;
+ }
+@@ -362,7 +359,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
+ 			pagevec_add(&locked_pvec, page);
+ 		}
+ 		for (i = 0; i < pagevec_count(&locked_pvec); i++)
+-			truncate_cleanup_page(mapping, locked_pvec.pages[i]);
++			truncate_cleanup_page(locked_pvec.pages[i]);
+ 		delete_from_page_cache_batch(mapping, &locked_pvec);
+ 		for (i = 0; i < pagevec_count(&locked_pvec); i++)
+ 			unlock_page(locked_pvec.pages[i]);
+@@ -737,6 +734,16 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
+ 				continue;
+ 			}
+ 
++			if (!did_range_unmap && page_mapped(page)) {
++				/*
++				 * If page is mapped, before taking its lock,
++				 * zap the rest of the file in one hit.
++				 */
++				unmap_mapping_pages(mapping, index,
++						(1 + end - index), false);
++				did_range_unmap = 1;
++			}
++
+ 			lock_page(page);
+ 			WARN_ON(page_to_index(page) != index);
+ 			if (page->mapping != mapping) {
+@@ -744,23 +751,11 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
+ 				continue;
+ 			}
+ 			wait_on_page_writeback(page);
+-			if (page_mapped(page)) {
+-				if (!did_range_unmap) {
+-					/*
+-					 * Zap the rest of the file in one hit.
+-					 */
+-					unmap_mapping_pages(mapping, index,
+-						(1 + end - index), false);
+-					did_range_unmap = 1;
+-				} else {
+-					/*
+-					 * Just zap this page
+-					 */
+-					unmap_mapping_pages(mapping, index,
+-								1, false);
+-				}
+-			}
++
++			if (page_mapped(page))
++				unmap_mapping_page(page);
+ 			BUG_ON(page_mapped(page));
++
+ 			ret2 = do_launder_page(mapping, page);
+ 			if (ret2 == 0) {
+ 				if (!invalidate_complete_page2(mapping, page))
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 2917af3f5ac19..68ff19af195c6 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -1421,7 +1421,7 @@ static int ethtool_get_any_eeprom(struct net_device *dev, void __user *useraddr,
+ 	if (eeprom.offset + eeprom.len > total_len)
+ 		return -EINVAL;
+ 
+-	data = kmalloc(PAGE_SIZE, GFP_USER);
++	data = kzalloc(PAGE_SIZE, GFP_USER);
+ 	if (!data)
+ 		return -ENOMEM;
+ 
+@@ -1486,7 +1486,7 @@ static int ethtool_set_eeprom(struct net_device *dev, void __user *useraddr)
+ 	if (eeprom.offset + eeprom.len > ops->get_eeprom_len(dev))
+ 		return -EINVAL;
+ 
+-	data = kmalloc(PAGE_SIZE, GFP_USER);
++	data = kzalloc(PAGE_SIZE, GFP_USER);
+ 	if (!data)
+ 		return -ENOMEM;
+ 
+@@ -1765,7 +1765,7 @@ static int ethtool_self_test(struct net_device *dev, char __user *useraddr)
+ 		return -EFAULT;
+ 
+ 	test.len = test_len;
+-	data = kmalloc_array(test_len, sizeof(u64), GFP_USER);
++	data = kcalloc(test_len, sizeof(u64), GFP_USER);
+ 	if (!data)
+ 		return -ENOMEM;
+ 
+@@ -2281,7 +2281,7 @@ static int ethtool_get_tunable(struct net_device *dev, void __user *useraddr)
+ 	ret = ethtool_tunable_valid(&tuna);
+ 	if (ret)
+ 		return ret;
+-	data = kmalloc(tuna.len, GFP_USER);
++	data = kzalloc(tuna.len, GFP_USER);
+ 	if (!data)
+ 		return -ENOMEM;
+ 	ret = ops->get_tunable(dev, &tuna, data);
+@@ -2473,7 +2473,7 @@ static int get_phy_tunable(struct net_device *dev, void __user *useraddr)
+ 	ret = ethtool_phy_tunable_valid(&tuna);
+ 	if (ret)
+ 		return ret;
+-	data = kmalloc(tuna.len, GFP_USER);
++	data = kzalloc(tuna.len, GFP_USER);
+ 	if (!data)
+ 		return -ENOMEM;
+ 	if (phy_drv_tunable) {
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index b7260c8cef2e5..8267349afe231 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -572,7 +572,7 @@ int inet_dgram_connect(struct socket *sock, struct sockaddr *uaddr,
+ 			return err;
+ 	}
+ 
+-	if (!inet_sk(sk)->inet_num && inet_autobind(sk))
++	if (data_race(!inet_sk(sk)->inet_num) && inet_autobind(sk))
+ 		return -EAGAIN;
+ 	return sk->sk_prot->connect(sk, uaddr, addr_len);
+ }
+@@ -799,7 +799,7 @@ int inet_send_prepare(struct sock *sk)
+ 	sock_rps_record_flow(sk);
+ 
+ 	/* We may need to bind the socket. */
+-	if (!inet_sk(sk)->inet_num && !sk->sk_prot->no_autobind &&
++	if (data_race(!inet_sk(sk)->inet_num) && !sk->sk_prot->no_autobind &&
+ 	    inet_autobind(sk))
+ 		return -EAGAIN;
+ 
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index 123a6d39438f8..7c18597774297 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -1989,7 +1989,7 @@ static int inet_set_link_af(struct net_device *dev, const struct nlattr *nla)
+ 		return -EAFNOSUPPORT;
+ 
+ 	if (nla_parse_nested_deprecated(tb, IFLA_INET_MAX, nla, NULL, NULL) < 0)
+-		BUG();
++		return -EINVAL;
+ 
+ 	if (tb[IFLA_INET_CONF]) {
+ 		nla_for_each_nested(a, tb[IFLA_INET_CONF], rem)
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 248856b301c45..8ce8b7300b9d3 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -952,6 +952,7 @@ bool ping_rcv(struct sk_buff *skb)
+ 	struct sock *sk;
+ 	struct net *net = dev_net(skb->dev);
+ 	struct icmphdr *icmph = icmp_hdr(skb);
++	bool rc = false;
+ 
+ 	/* We assume the packet has already been checked by icmp_rcv */
+ 
+@@ -966,14 +967,15 @@ bool ping_rcv(struct sk_buff *skb)
+ 		struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC);
+ 
+ 		pr_debug("rcv on socket %p\n", sk);
+-		if (skb2)
+-			ping_queue_rcv_skb(sk, skb2);
++		if (skb2 && !ping_queue_rcv_skb(sk, skb2))
++			rc = true;
+ 		sock_put(sk);
+-		return true;
+ 	}
+-	pr_debug("no socket, dropping\n");
+ 
+-	return false;
++	if (!rc)
++		pr_debug("no socket, dropping\n");
++
++	return rc;
+ }
+ EXPORT_SYMBOL_GPL(ping_rcv);
+ 
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 4c881f5d9080c..884d430e23cb3 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -5799,7 +5799,7 @@ static int inet6_set_link_af(struct net_device *dev, const struct nlattr *nla)
+ 		return -EAFNOSUPPORT;
+ 
+ 	if (nla_parse_nested_deprecated(tb, IFLA_INET6_MAX, nla, NULL, NULL) < 0)
+-		BUG();
++		return -EINVAL;
+ 
+ 	if (tb[IFLA_INET6_TOKEN]) {
+ 		err = inet6_set_iftoken(idev, nla_data(tb[IFLA_INET6_TOKEN]));
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index be40f6b16199f..a83f0c2fcdf77 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1445,7 +1445,7 @@ ieee80211_get_sband(struct ieee80211_sub_if_data *sdata)
+ 	rcu_read_lock();
+ 	chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
+ 
+-	if (WARN_ON_ONCE(!chanctx_conf)) {
++	if (!chanctx_conf) {
+ 		rcu_read_unlock();
+ 		return NULL;
+ 	}
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 6d3220c66931a..fbe26e912300d 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -4019,10 +4019,14 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
+ 		if (elems.mbssid_config_ie)
+ 			bss_conf->profile_periodicity =
+ 				elems.mbssid_config_ie->profile_periodicity;
++		else
++			bss_conf->profile_periodicity = 0;
+ 
+ 		if (elems.ext_capab_len >= 11 &&
+ 		    (elems.ext_capab[10] & WLAN_EXT_CAPA11_EMA_SUPPORT))
+ 			bss_conf->ema_ap = true;
++		else
++			bss_conf->ema_ap = false;
+ 
+ 		/* continue assoc process */
+ 		ifmgd->assoc_data->timeout = jiffies;
+@@ -5749,12 +5753,16 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
+ 					      beacon_ies->data, beacon_ies->len);
+ 		if (elem && elem->datalen >= 3)
+ 			sdata->vif.bss_conf.profile_periodicity = elem->data[2];
++		else
++			sdata->vif.bss_conf.profile_periodicity = 0;
+ 
+ 		elem = cfg80211_find_elem(WLAN_EID_EXT_CAPABILITY,
+ 					  beacon_ies->data, beacon_ies->len);
+ 		if (elem && elem->datalen >= 11 &&
+ 		    (elem->data[10] & WLAN_EXT_CAPA11_EMA_SUPPORT))
+ 			sdata->vif.bss_conf.ema_ap = true;
++		else
++			sdata->vif.bss_conf.ema_ap = false;
+ 	} else {
+ 		assoc_data->timeout = jiffies;
+ 		assoc_data->timeout_started = true;
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index ef8ff0bc66f17..38b5695c2a0c8 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -2250,17 +2250,15 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
+ 	sc = le16_to_cpu(hdr->seq_ctrl);
+ 	frag = sc & IEEE80211_SCTL_FRAG;
+ 
+-	if (is_multicast_ether_addr(hdr->addr1)) {
+-		I802_DEBUG_INC(rx->local->dot11MulticastReceivedFrameCount);
+-		goto out_no_led;
+-	}
+-
+ 	if (rx->sta)
+ 		cache = &rx->sta->frags;
+ 
+ 	if (likely(!ieee80211_has_morefrags(fc) && frag == 0))
+ 		goto out;
+ 
++	if (is_multicast_ether_addr(hdr->addr1))
++		return RX_DROP_MONITOR;
++
+ 	I802_DEBUG_INC(rx->local->rx_handlers_fragments);
+ 
+ 	if (skb_linearize(rx->skb))
+@@ -2386,7 +2384,6 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
+ 
+  out:
+ 	ieee80211_led_rx(rx->local);
+- out_no_led:
+ 	if (rx->sta)
+ 		rx->sta->rx_stats.packets++;
+ 	return RX_CONTINUE;
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index d8f9fb0646a4d..fbf56a203c0e8 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -954,7 +954,7 @@ static void ieee80211_parse_extension_element(u32 *crc,
+ 
+ 	switch (elem->data[0]) {
+ 	case WLAN_EID_EXT_HE_MU_EDCA:
+-		if (len == sizeof(*elems->mu_edca_param_set)) {
++		if (len >= sizeof(*elems->mu_edca_param_set)) {
+ 			elems->mu_edca_param_set = data;
+ 			if (crc)
+ 				*crc = crc32_be(*crc, (void *)elem,
+@@ -975,7 +975,7 @@ static void ieee80211_parse_extension_element(u32 *crc,
+ 		}
+ 		break;
+ 	case WLAN_EID_EXT_UORA:
+-		if (len == 1)
++		if (len >= 1)
+ 			elems->uora_element = data;
+ 		break;
+ 	case WLAN_EID_EXT_MAX_CHANNEL_SWITCH_TIME:
+@@ -983,7 +983,7 @@ static void ieee80211_parse_extension_element(u32 *crc,
+ 			elems->max_channel_switch_time = data;
+ 		break;
+ 	case WLAN_EID_EXT_MULTIPLE_BSSID_CONFIGURATION:
+-		if (len == sizeof(*elems->mbssid_config_ie))
++		if (len >= sizeof(*elems->mbssid_config_ie))
+ 			elems->mbssid_config_ie = data;
+ 		break;
+ 	case WLAN_EID_EXT_HE_SPR:
+@@ -992,7 +992,7 @@ static void ieee80211_parse_extension_element(u32 *crc,
+ 			elems->he_spr = data;
+ 		break;
+ 	case WLAN_EID_EXT_HE_6GHZ_CAPA:
+-		if (len == sizeof(*elems->he_6ghz_capa))
++		if (len >= sizeof(*elems->he_6ghz_capa))
+ 			elems->he_6ghz_capa = data;
+ 		break;
+ 	}
+@@ -1081,14 +1081,14 @@ _ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action,
+ 
+ 		switch (id) {
+ 		case WLAN_EID_LINK_ID:
+-			if (elen + 2 != sizeof(struct ieee80211_tdls_lnkie)) {
++			if (elen + 2 < sizeof(struct ieee80211_tdls_lnkie)) {
+ 				elem_parse_failed = true;
+ 				break;
+ 			}
+ 			elems->lnk_id = (void *)(pos - 2);
+ 			break;
+ 		case WLAN_EID_CHAN_SWITCH_TIMING:
+-			if (elen != sizeof(struct ieee80211_ch_switch_timing)) {
++			if (elen < sizeof(struct ieee80211_ch_switch_timing)) {
+ 				elem_parse_failed = true;
+ 				break;
+ 			}
+@@ -1251,7 +1251,7 @@ _ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action,
+ 			elems->sec_chan_offs = (void *)pos;
+ 			break;
+ 		case WLAN_EID_CHAN_SWITCH_PARAM:
+-			if (elen !=
++			if (elen <
+ 			    sizeof(*elems->mesh_chansw_params_ie)) {
+ 				elem_parse_failed = true;
+ 				break;
+@@ -1260,7 +1260,7 @@ _ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action,
+ 			break;
+ 		case WLAN_EID_WIDE_BW_CHANNEL_SWITCH:
+ 			if (!action ||
+-			    elen != sizeof(*elems->wide_bw_chansw_ie)) {
++			    elen < sizeof(*elems->wide_bw_chansw_ie)) {
+ 				elem_parse_failed = true;
+ 				break;
+ 			}
+@@ -1279,7 +1279,7 @@ _ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action,
+ 			ie = cfg80211_find_ie(WLAN_EID_WIDE_BW_CHANNEL_SWITCH,
+ 					      pos, elen);
+ 			if (ie) {
+-				if (ie[1] == sizeof(*elems->wide_bw_chansw_ie))
++				if (ie[1] >= sizeof(*elems->wide_bw_chansw_ie))
+ 					elems->wide_bw_chansw_ie =
+ 						(void *)(ie + 2);
+ 				else
+@@ -1323,7 +1323,7 @@ _ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action,
+ 			elems->cisco_dtpc_elem = pos;
+ 			break;
+ 		case WLAN_EID_ADDBA_EXT:
+-			if (elen != sizeof(struct ieee80211_addba_ext_ie)) {
++			if (elen < sizeof(struct ieee80211_addba_ext_ie)) {
+ 				elem_parse_failed = true;
+ 				break;
+ 			}
+@@ -1349,7 +1349,7 @@ _ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action,
+ 							  elem, elems);
+ 			break;
+ 		case WLAN_EID_S1G_CAPABILITIES:
+-			if (elen == sizeof(*elems->s1g_capab))
++			if (elen >= sizeof(*elems->s1g_capab))
+ 				elems->s1g_capab = (void *)pos;
+ 			else
+ 				elem_parse_failed = true;
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index ddb68aa836f71..08144559eed56 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2682,7 +2682,7 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ 	}
+ 	if (likely(saddr == NULL)) {
+ 		dev	= packet_cached_dev_get(po);
+-		proto	= po->num;
++		proto	= READ_ONCE(po->num);
+ 	} else {
+ 		err = -EINVAL;
+ 		if (msg->msg_namelen < sizeof(struct sockaddr_ll))
+@@ -2895,7 +2895,7 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 
+ 	if (likely(saddr == NULL)) {
+ 		dev	= packet_cached_dev_get(po);
+-		proto	= po->num;
++		proto	= READ_ONCE(po->num);
+ 	} else {
+ 		err = -EINVAL;
+ 		if (msg->msg_namelen < sizeof(struct sockaddr_ll))
+@@ -3033,10 +3033,13 @@ static int packet_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 	struct sock *sk = sock->sk;
+ 	struct packet_sock *po = pkt_sk(sk);
+ 
+-	if (po->tx_ring.pg_vec)
++	/* Reading tx_ring.pg_vec without holding pg_vec_lock is racy.
++	 * tpacket_snd() will redo the check safely.
++	 */
++	if (data_race(po->tx_ring.pg_vec))
+ 		return tpacket_snd(po, msg);
+-	else
+-		return packet_snd(sock, msg, len);
++
++	return packet_snd(sock, msg, len);
+ }
+ 
+ /*
+@@ -3167,7 +3170,7 @@ static int packet_do_bind(struct sock *sk, const char *name, int ifindex,
+ 			/* prevents packet_notifier() from calling
+ 			 * register_prot_hook()
+ 			 */
+-			po->num = 0;
++			WRITE_ONCE(po->num, 0);
+ 			__unregister_prot_hook(sk, true);
+ 			rcu_read_lock();
+ 			dev_curr = po->prot_hook.dev;
+@@ -3177,17 +3180,17 @@ static int packet_do_bind(struct sock *sk, const char *name, int ifindex,
+ 		}
+ 
+ 		BUG_ON(po->running);
+-		po->num = proto;
++		WRITE_ONCE(po->num, proto);
+ 		po->prot_hook.type = proto;
+ 
+ 		if (unlikely(unlisted)) {
+ 			dev_put(dev);
+ 			po->prot_hook.dev = NULL;
+-			po->ifindex = -1;
++			WRITE_ONCE(po->ifindex, -1);
+ 			packet_cached_dev_reset(po);
+ 		} else {
+ 			po->prot_hook.dev = dev;
+-			po->ifindex = dev ? dev->ifindex : 0;
++			WRITE_ONCE(po->ifindex, dev ? dev->ifindex : 0);
+ 			packet_cached_dev_assign(po, dev);
+ 		}
+ 	}
+@@ -3501,7 +3504,7 @@ static int packet_getname_spkt(struct socket *sock, struct sockaddr *uaddr,
+ 	uaddr->sa_family = AF_PACKET;
+ 	memset(uaddr->sa_data, 0, sizeof(uaddr->sa_data));
+ 	rcu_read_lock();
+-	dev = dev_get_by_index_rcu(sock_net(sk), pkt_sk(sk)->ifindex);
++	dev = dev_get_by_index_rcu(sock_net(sk), READ_ONCE(pkt_sk(sk)->ifindex));
+ 	if (dev)
+ 		strlcpy(uaddr->sa_data, dev->name, sizeof(uaddr->sa_data));
+ 	rcu_read_unlock();
+@@ -3516,16 +3519,18 @@ static int packet_getname(struct socket *sock, struct sockaddr *uaddr,
+ 	struct sock *sk = sock->sk;
+ 	struct packet_sock *po = pkt_sk(sk);
+ 	DECLARE_SOCKADDR(struct sockaddr_ll *, sll, uaddr);
++	int ifindex;
+ 
+ 	if (peer)
+ 		return -EOPNOTSUPP;
+ 
++	ifindex = READ_ONCE(po->ifindex);
+ 	sll->sll_family = AF_PACKET;
+-	sll->sll_ifindex = po->ifindex;
+-	sll->sll_protocol = po->num;
++	sll->sll_ifindex = ifindex;
++	sll->sll_protocol = READ_ONCE(po->num);
+ 	sll->sll_pkttype = 0;
+ 	rcu_read_lock();
+-	dev = dev_get_by_index_rcu(sock_net(sk), po->ifindex);
++	dev = dev_get_by_index_rcu(sock_net(sk), ifindex);
+ 	if (dev) {
+ 		sll->sll_hatype = dev->type;
+ 		sll->sll_halen = dev->addr_len;
+@@ -4104,7 +4109,7 @@ static int packet_notifier(struct notifier_block *this,
+ 				}
+ 				if (msg == NETDEV_UNREGISTER) {
+ 					packet_cached_dev_reset(po);
+-					po->ifindex = -1;
++					WRITE_ONCE(po->ifindex, -1);
+ 					if (po->prot_hook.dev)
+ 						dev_put(po->prot_hook.dev);
+ 					po->prot_hook.dev = NULL;
+@@ -4410,7 +4415,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 	was_running = po->running;
+ 	num = po->num;
+ 	if (was_running) {
+-		po->num = 0;
++		WRITE_ONCE(po->num, 0);
+ 		__unregister_prot_hook(sk, false);
+ 	}
+ 	spin_unlock(&po->bind_lock);
+@@ -4445,7 +4450,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 
+ 	spin_lock(&po->bind_lock);
+ 	if (was_running) {
+-		po->num = num;
++		WRITE_ONCE(po->num, num);
+ 		register_prot_hook(sk);
+ 	}
+ 	spin_unlock(&po->bind_lock);
+@@ -4613,8 +4618,8 @@ static int packet_seq_show(struct seq_file *seq, void *v)
+ 			   s,
+ 			   refcount_read(&s->sk_refcnt),
+ 			   s->sk_type,
+-			   ntohs(po->num),
+-			   po->ifindex,
++			   ntohs(READ_ONCE(po->num)),
++			   READ_ONCE(po->ifindex),
+ 			   po->running,
+ 			   atomic_read(&s->sk_rmem_alloc),
+ 			   from_kuid_munged(seq_user_ns(seq), sock_i_uid(s)),
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 2731267fd0f9e..4fb8d1b14e76a 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1059,6 +1059,9 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
+ 		case NL80211_IFTYPE_MESH_POINT:
+ 			/* mesh should be handled? */
+ 			break;
++		case NL80211_IFTYPE_OCB:
++			cfg80211_leave_ocb(rdev, dev);
++			break;
+ 		default:
+ 			break;
+ 		}
+diff --git a/scripts/Makefile b/scripts/Makefile
+index c36106bce80ee..9adb6d247818f 100644
+--- a/scripts/Makefile
++++ b/scripts/Makefile
+@@ -14,6 +14,7 @@ hostprogs-always-$(CONFIG_ASN1)				+= asn1_compiler
+ hostprogs-always-$(CONFIG_MODULE_SIG_FORMAT)		+= sign-file
+ hostprogs-always-$(CONFIG_SYSTEM_TRUSTED_KEYRING)	+= extract-cert
+ hostprogs-always-$(CONFIG_SYSTEM_EXTRA_CERTIFICATE)	+= insert-sys-cert
++hostprogs-always-$(CONFIG_SYSTEM_REVOCATION_LIST)	+= extract-cert
+ 
+ HOSTCFLAGS_sorttable.o = -I$(srctree)/tools/include
+ HOSTCFLAGS_asn1_compiler.o = -I$(srctree)/include
+diff --git a/scripts/recordmcount.h b/scripts/recordmcount.h
+index f9b19524da112..1e9baa5c4fc6e 100644
+--- a/scripts/recordmcount.h
++++ b/scripts/recordmcount.h
+@@ -192,15 +192,20 @@ static unsigned int get_symindex(Elf_Sym const *sym, Elf32_Word const *symtab,
+ 				 Elf32_Word const *symtab_shndx)
+ {
+ 	unsigned long offset;
++	unsigned short shndx = w2(sym->st_shndx);
+ 	int index;
+ 
+-	if (sym->st_shndx != SHN_XINDEX)
+-		return w2(sym->st_shndx);
++	if (shndx > SHN_UNDEF && shndx < SHN_LORESERVE)
++		return shndx;
+ 
+-	offset = (unsigned long)sym - (unsigned long)symtab;
+-	index = offset / sizeof(*sym);
++	if (shndx == SHN_XINDEX) {
++		offset = (unsigned long)sym - (unsigned long)symtab;
++		index = offset / sizeof(*sym);
+ 
+-	return w(symtab_shndx[index]);
++		return w(symtab_shndx[index]);
++	}
++
++	return 0;
+ }
+ 
+ static unsigned int get_shnum(Elf_Ehdr const *ehdr, Elf_Shdr const *shdr0)
+diff --git a/security/integrity/platform_certs/keyring_handler.c b/security/integrity/platform_certs/keyring_handler.c
+index c5ba695c10e3a..5604bd57c9907 100644
+--- a/security/integrity/platform_certs/keyring_handler.c
++++ b/security/integrity/platform_certs/keyring_handler.c
+@@ -55,6 +55,15 @@ static __init void uefi_blacklist_binary(const char *source,
+ 	uefi_blacklist_hash(source, data, len, "bin:", 4);
+ }
+ 
++/*
++ * Add an X509 cert to the revocation list.
++ */
++static __init void uefi_revocation_list_x509(const char *source,
++					     const void *data, size_t len)
++{
++	add_key_to_revocation_list(data, len);
++}
++
+ /*
+  * Return the appropriate handler for particular signature list types found in
+  * the UEFI db and MokListRT tables.
+@@ -76,5 +85,7 @@ __init efi_element_handler_t get_handler_for_dbx(const efi_guid_t *sig_type)
+ 		return uefi_blacklist_x509_tbs;
+ 	if (efi_guidcmp(*sig_type, efi_cert_sha256_guid) == 0)
+ 		return uefi_blacklist_binary;
++	if (efi_guidcmp(*sig_type, efi_cert_x509_guid) == 0)
++		return uefi_revocation_list_x509;
+ 	return 0;
+ }
+diff --git a/security/integrity/platform_certs/load_uefi.c b/security/integrity/platform_certs/load_uefi.c
+index ee4b4c666854f..f290f78c3f301 100644
+--- a/security/integrity/platform_certs/load_uefi.c
++++ b/security/integrity/platform_certs/load_uefi.c
+@@ -132,8 +132,9 @@ static int __init load_moklist_certs(void)
+ static int __init load_uefi_certs(void)
+ {
+ 	efi_guid_t secure_var = EFI_IMAGE_SECURITY_DATABASE_GUID;
+-	void *db = NULL, *dbx = NULL;
+-	unsigned long dbsize = 0, dbxsize = 0;
++	efi_guid_t mok_var = EFI_SHIM_LOCK_GUID;
++	void *db = NULL, *dbx = NULL, *mokx = NULL;
++	unsigned long dbsize = 0, dbxsize = 0, mokxsize = 0;
+ 	efi_status_t status;
+ 	int rc = 0;
+ 
+@@ -175,6 +176,21 @@ static int __init load_uefi_certs(void)
+ 		kfree(dbx);
+ 	}
+ 
++	mokx = get_cert_list(L"MokListXRT", &mok_var, &mokxsize, &status);
++	if (!mokx) {
++		if (status == EFI_NOT_FOUND)
++			pr_debug("mokx variable wasn't found\n");
++		else
++			pr_info("Couldn't get mokx list\n");
++	} else {
++		rc = parse_efi_signature_list("UEFI:MokListXRT",
++					      mokx, mokxsize,
++					      get_handler_for_dbx);
++		if (rc)
++			pr_err("Couldn't parse mokx signatures %d\n", rc);
++		kfree(mokx);
++	}
++
+ 	/* Load the MokListRT certs */
+ 	rc = load_moklist_certs();
+ 
+diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
+index 126c6727a6b09..49805fd16fdf5 100644
+--- a/tools/testing/selftests/kvm/lib/kvm_util.c
++++ b/tools/testing/selftests/kvm/lib/kvm_util.c
+@@ -55,7 +55,7 @@ int kvm_check_cap(long cap)
+ 		exit(KSFT_SKIP);
+ 
+ 	ret = ioctl(kvm_fd, KVM_CHECK_EXTENSION, cap);
+-	TEST_ASSERT(ret != -1, "KVM_CHECK_EXTENSION IOCTL failed,\n"
++	TEST_ASSERT(ret >= 0, "KVM_CHECK_EXTENSION IOCTL failed,\n"
+ 		"  rc: %i errno: %i", ret, errno);
+ 
+ 	close(kvm_fd);
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index f446c36f58003..1353439691cf7 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1883,6 +1883,13 @@ static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault)
+ 	return true;
+ }
+ 
++static int kvm_try_get_pfn(kvm_pfn_t pfn)
++{
++	if (kvm_is_reserved_pfn(pfn))
++		return 1;
++	return get_page_unless_zero(pfn_to_page(pfn));
++}
++
+ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
+ 			       unsigned long addr, bool *async,
+ 			       bool write_fault, bool *writable,
+@@ -1932,13 +1939,21 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
+ 	 * Whoever called remap_pfn_range is also going to call e.g.
+ 	 * unmap_mapping_range before the underlying pages are freed,
+ 	 * causing a call to our MMU notifier.
++	 *
++	 * Certain IO or PFNMAP mappings can be backed with valid
++	 * struct pages, but be allocated without refcounting e.g.,
++	 * tail pages of non-compound higher order allocations, which
++	 * would then underflow the refcount when the caller does the
++	 * required put_page. Don't allow those pages here.
+ 	 */ 
+-	kvm_get_pfn(pfn);
++	if (!kvm_try_get_pfn(pfn))
++		r = -EFAULT;
+ 
+ out:
+ 	pte_unmap_unlock(ptep, ptl);
+ 	*p_pfn = pfn;
+-	return 0;
++
++	return r;
+ }
+ 
+ /*


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-01 14:32 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-01 14:32 UTC (permalink / raw
  To: gentoo-commits

commit:     b469118be3739f0b7453dfa4ef3c86647d0b5fb3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jul  1 14:32:27 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jul  1 14:32:27 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b469118b

Update to CPU Opt patch 06062021

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5010_enable-cpu-optimizations-universal.patch | 74 +++++++++++++--------------
 1 file changed, 36 insertions(+), 38 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index 1868f23..c45d13b 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,23 +1,18 @@
-From 59db769ad69e080c512b3890e1d27d6120f4a1a4 Mon Sep 17 00:00:00 2001
+From 4af44fbc97bc51eb742f0d6555bde23cf580d4e3 Mon Sep 17 00:00:00 2001
 From: graysky <graysky@archlinux.us>
-Date: Mon, 12 Apr 2021 07:09:27 -0400
+Date: Sun, 6 Jun 2021 09:41:36 -0400
 Subject: [PATCH] more uarches for kernel 5.8+
 MIME-Version: 1.0
 Content-Type: text/plain; charset=UTF-8
 Content-Transfer-Encoding: 8bit
 
-WARNING
-This patch works with all gcc versions 9.0+ and with kernel version 5.8+ and should
-NOT be applied when compiling on older versions of gcc due to key name changes
-of the march flags introduced with the version 4.9 release of gcc.[1]
-
 FEATURES
 This patch adds additional CPU options to the Linux kernel accessible under:
  Processor type and features  --->
   Processor family --->
 
-With the release of gcc 11.0, several generic 64-bit levels are offered which
-are good for supported Intel or AMD CPUs:
+With the release of gcc 11.1 and clang 12.0, several generic 64-bit levels are
+offered which are good for supported Intel or AMD CPUs:
 • x86-64-v2
 • x86-64-v3
 • x86-64-v4
@@ -26,7 +21,7 @@ Users of glibc 2.33 and above can see which level is supported by current
 hardware by running:
   /lib/ld-linux-x86-64.so.2 --help | grep supported
 
-Alternatively, compare the flags from /proc/cpuinfo to this list.[2]
+Alternatively, compare the flags from /proc/cpuinfo to this list.[1]
 
 CPU-specific microarchitectures include:
 • AMD Improved K8-family
@@ -62,13 +57,15 @@ CPU-specific microarchitectures include:
 • Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)‡
 
 Notes: If not otherwise noted, gcc >=9.1 is required for support.
-       *Requires gcc >=10.1  †Required gcc >=10.3  ‡Required gcc >=11.0
+       *Requires gcc >=10.1 or clang >=10.0
+       †Required gcc >=10.3 or clang >=12.0
+       ‡Required gcc >=11.1 or clang >=12.0
 
 It also offers to compile passing the 'native' option which, "selects the CPU
 to generate code for at compilation time by determining the processor type of
 the compiling machine. Using -march=native enables all instruction subsets
 supported by the local machine and will produce code optimized for the local
-machine under the constraints of the selected instruction set."[3]
+machine under the constraints of the selected instruction set."[2]
 
 Users of Intel CPUs should select the 'Intel-Native' option and users of AMD
 CPUs should select the 'AMD-Native' option.
@@ -76,9 +73,9 @@ CPUs should select the 'AMD-Native' option.
 MINOR NOTES RELATING TO INTEL ATOM PROCESSORS
 This patch also changes -march=atom to -march=bonnell in accordance with the
 gcc v4.9 changes. Upstream is using the deprecated -match=atom flags when I
-believe it should use the newer -march=bonnell flag for atom processors.[4]
+believe it should use the newer -march=bonnell flag for atom processors.[3]
 
-It is not recommended to compile on Atom-CPUs with the 'native' option.[5] The
+It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
 recommendation is to use the 'atom' option instead.
 
 BENEFITS
@@ -90,18 +87,19 @@ https://github.com/graysky2/kernel_gcc_patch
 
 REQUIREMENTS
 linux version >=5.8
-gcc version >=9.0
+gcc version >=9.0 or clang version >=9.0
 
 ACKNOWLEDGMENTS
-This patch builds on the seminal work by Jeroen.[6]
+This patch builds on the seminal work by Jeroen.[5]
 
 REFERENCES
-1.  https://gcc.gnu.org/gcc-4.9/changes.html
-2.  https://gitlab.com/x86-psABIs/x86-64-ABI/-/commit/77566eb03bc6a326811cb7e9
-3.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
-4.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
-5.  https://github.com/graysky2/kernel_gcc_patch/issues/15
-6.  http://www.linuxforge.net/docs/linux/linux-gcc.php
+1.  https://gitlab.com/x86-psABIs/x86-64-ABI/-/commit/77566eb03bc6a326811cb7e9
+2.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
+3.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
+4.  https://github.com/graysky2/kernel_gcc_patch/issues/15
+5.  http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+Signed-off-by: graysky <graysky@archlinux.us>
 ---
  arch/x86/Kconfig.cpu            | 332 ++++++++++++++++++++++++++++++--
  arch/x86/Makefile               |  47 ++++-
@@ -109,7 +107,7 @@ REFERENCES
  3 files changed, 428 insertions(+), 17 deletions(-)
 
 diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 814fe0d349b0..872b9cf598e3 100644
+index 814fe0d349b0..8acf6519d279 100644
 --- a/arch/x86/Kconfig.cpu
 +++ b/arch/x86/Kconfig.cpu
 @@ -157,7 +157,7 @@ config MPENTIUM4
@@ -221,7 +219,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config MZEN3
 +	bool "AMD Zen 3"
-+	depends on GCC_VERSION > 100300
++	depends on ( CC_IS_GCC && GCC_VERSION >= 100300 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
 +	help
 +	  Select this for AMD Family 19h Zen 3 processors.
 +
@@ -380,7 +378,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config MCOOPERLAKE
 +	bool "Intel Cooper Lake"
-+	depends on GCC_VERSION > 100100
++	depends on ( CC_IS_GCC && GCC_VERSION > 100100 ) || ( CC_IS_CLANG && CLANG_VERSION >= 100000 )
 +	select X86_P6_NOP
 +	help
 +
@@ -390,7 +388,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config MTIGERLAKE
 +	bool "Intel Tiger Lake"
-+	depends on GCC_VERSION > 100100
++	depends on  ( CC_IS_GCC && GCC_VERSION > 100100 ) || ( CC_IS_CLANG && CLANG_VERSION >= 100000 )
 +	select X86_P6_NOP
 +	help
 +
@@ -400,7 +398,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config MSAPPHIRERAPIDS
 +	bool "Intel Sapphire Rapids"
-+	depends on GCC_VERSION > 110000
++	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
 +	select X86_P6_NOP
 +	help
 +
@@ -410,7 +408,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config MROCKETLAKE
 +	bool "Intel Rocket Lake"
-+	depends on GCC_VERSION > 110000
++	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
 +	select X86_P6_NOP
 +	help
 +
@@ -420,7 +418,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config MALDERLAKE
 +	bool "Intel Alder Lake"
-+	depends on GCC_VERSION > 110000
++	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
 +	select X86_P6_NOP
 +	help
 +
@@ -437,7 +435,7 @@ index 814fe0d349b0..872b9cf598e3 100644
  
 +config GENERIC_CPU2
 +	bool "Generic-x86-64-v2"
-+	depends on GCC_VERSION > 110000
++	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && LANG_VERSION >= 120000 )
 +	depends on X86_64
 +	help
 +	  Generic x86-64 CPU.
@@ -445,7 +443,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config GENERIC_CPU3
 +	bool "Generic-x86-64-v3"
-+	depends on GCC_VERSION > 110000
++	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && LANG_VERSION >= 120000 )
 +	depends on X86_64
 +	help
 +	  Generic x86-64-v3 CPU with v3 instructions.
@@ -453,27 +451,27 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config GENERIC_CPU4
 +	bool "Generic-x86-64-v4"
-+	depends on GCC_VERSION > 110000
++	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && LANG_VERSION >= 120000 )
 +	depends on X86_64
 +	help
 +	  Generic x86-64 CPU with v4 instructions.
 +	  Run equally well on all x86-64 CPUs with min support of x86-64-v4.
 +
 +config MNATIVE_INTEL
-+	bool "Intel-Native optimizations autodetected by GCC"
++	bool "Intel-Native optimizations autodetected by the compiler"
 +	help
 +
-+	  GCC 4.2 and above support -march=native, which automatically detects
++	  Clang 3.8, GCC 4.2 and above support -march=native, which automatically detects
 +	  the optimum settings to use based on your processor. Do NOT use this
 +	  for AMD CPUs.  Intel Only!
 +
 +	  Enables -march=native
 +
 +config MNATIVE_AMD
-+	bool "AMD-Native optimizations autodetected by GCC"
++	bool "AMD-Native optimizations autodetected by the compiler"
 +	help
 +
-+	  GCC 4.2 and above support -march=native, which automatically detects
++	  Clang 3.8, GCC 4.2 and above support -march=native, which automatically detects
 +	  the optimum settings to use based on your processor. Do NOT use this
 +	  for Intel CPUs.  AMD Only!
 +
@@ -538,10 +536,10 @@ index 814fe0d349b0..872b9cf598e3 100644
  	default "4"
  
 diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 9a85eae37b17..facf9a278fe3 100644
+index 78faf9c7e3ae..ee0cd507af8b 100644
 --- a/arch/x86/Makefile
 +++ b/arch/x86/Makefile
-@@ -113,11 +113,48 @@ else
+@@ -114,11 +114,48 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
          cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
          cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-02 19:38 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-02 19:38 UTC (permalink / raw
  To: gentoo-commits

commit:     da09e43808cfff024220fc10065e6744764e576e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jul  2 19:37:43 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jul  2 19:37:43 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=da09e438

Update shiftfs patchset and correct typo in patch name

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  2 +-
 ...-20.04.patch => 5000_shiftfs-ubuntu-20.04.patch | 89 +++++++++++++---------
 2 files changed, 53 insertions(+), 38 deletions(-)

diff --git a/0000_README b/0000_README
index 9ae25ca..58e20cf 100644
--- a/0000_README
+++ b/0000_README
@@ -255,7 +255,7 @@ Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
 
-Patch:  5000_shifts-ubuntu-20.04.patch
+Patch:  5000_shiftfs-ubuntu-20.04.patch
 From:   https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/focal
 Desc:   UID/GID shifting overlay filesystem for containers 
 

diff --git a/5000_shifts-ubuntu-20.04.patch b/5000_shiftfs-ubuntu-20.04.patch
similarity index 97%
rename from 5000_shifts-ubuntu-20.04.patch
rename to 5000_shiftfs-ubuntu-20.04.patch
index 665fc66..9ea7400 100644
--- a/5000_shifts-ubuntu-20.04.patch
+++ b/5000_shiftfs-ubuntu-20.04.patch
@@ -1,6 +1,6 @@
---- /dev/null	2021-01-08 13:33:13.190303432 -0500
-+++ b/fs/shiftfs.c	2021-01-08 19:02:40.000000000 -0500
-@@ -0,0 +1,2157 @@
+--- /dev/null	2021-07-02 07:32:48.742034238 -0400
++++ b/fs/shiftfs.c	2021-07-02 13:55:18.327684885 -0400
+@@ -0,0 +1,2172 @@
 +#include <linux/btrfs.h>
 +#include <linux/capability.h>
 +#include <linux/cred.h>
@@ -1427,7 +1427,7 @@
 +	kfree(v1);
 +	kfree(v2);
 +
-+	return ret;
++	return ret ? -EFAULT: 0;
 +}
 +
 +static int shiftfs_btrfs_ioctl_fd_replace(int cmd, void __user *arg,
@@ -1441,6 +1441,9 @@
 +	struct btrfs_ioctl_vol_args *v1 = NULL;
 +	struct btrfs_ioctl_vol_args_v2 *v2 = NULL;
 +
++	*b1 = NULL;
++	*b2 = NULL;
++
 +	if (!is_btrfs_snap_ioctl(cmd))
 +		return 0;
 +
@@ -1449,29 +1452,29 @@
 +		if (IS_ERR(v1))
 +			return PTR_ERR(v1);
 +		oldfd = v1->fd;
-+		*b1 = v1;
 +	} else {
 +		v2 = memdup_user(arg, sizeof(*v2));
 +		if (IS_ERR(v2))
 +			return PTR_ERR(v2);
 +		oldfd = v2->fd;
-+		*b2 = v2;
 +	}
 +
 +	src = fdget(oldfd);
-+	if (!src.file)
-+		return -EINVAL;
++	if (!src.file) {
++		ret = -EINVAL;
++		goto err_free;
++	}
 +
 +	ret = shiftfs_real_fdget(src.file, &lfd);
 +	if (ret) {
 +		fdput(src);
-+		return ret;
++		goto err_free;
 +	}
 +
 +	/*
 +	 * shiftfs_real_fdget() does not take a reference to lfd.file, so
 +	 * take a reference here to offset the one which will be put by
-+	 * __close_fd(), and make sure that reference is put on fdput(lfd).
++	 * close_fd(), and make sure that reference is put on fdput(lfd).
 +	 */
 +	get_file(lfd.file);
 +	lfd.flags |= FDPUT_FPUT;
@@ -1480,7 +1483,8 @@
 +	*newfd = get_unused_fd_flags(lfd.file->f_flags);
 +	if (*newfd < 0) {
 +		fdput(lfd);
-+		return *newfd;
++		ret = *newfd;
++		goto err_free;
 +	}
 +
 +	fd_install(*newfd, lfd.file);
@@ -1495,8 +1499,19 @@
 +		v2->fd = oldfd;
 +	}
 +
-+	if (ret)
++	if (!ret) {
++		*b1 = v1;
++		*b2 = v2;
++	} else {
 +		shiftfs_btrfs_ioctl_fd_restore(cmd, *newfd, arg, v1, v2);
++		ret = -EFAULT;
++	}
++
++	return ret;
++
++err_free:
++	kfree(v1);
++	kfree(v2);
 +
 +	return ret;
 +}
@@ -2158,45 +2173,45 @@
 +MODULE_LICENSE("GPL v2");
 +module_init(shiftfs_init)
 +module_exit(shiftfs_exit)
---- a/include/uapi/linux/magic.h	2021-01-06 19:08:45.234777659 -0500
-+++ b/include/uapi/linux/magic.h	2021-01-06 19:09:53.900375394 -0500
-@@ -96,4 +96,6 @@
- #define DEVMEM_MAGIC		0x454d444d	/* "DMEM" */
+--- a/include/uapi/linux/magic.h	2021-07-02 13:19:57.024999483 -0400
++++ b/include/uapi/linux/magic.h	2021-07-02 13:21:16.215074343 -0400
+@@ -98,4 +98,6 @@
  #define Z3FOLD_MAGIC		0x33
+ #define PPC_CMM_MAGIC		0xc7571590
  
-+#define SHIFTFS_MAGIC         0x6a656a62
++#define SHIFTFS_MAGIC		0x6a656a62
 +
  #endif /* __LINUX_MAGIC_H__ */
---- a/fs/Makefile	2021-01-08 18:08:28.187064015 -0500
-+++ b/fs/Makefile	2021-01-08 18:09:00.788217579 -0500
+--- a/fs/Makefile	2021-07-02 13:22:24.815163699 -0400
++++ b/fs/Makefile	2021-07-02 13:22:43.991858989 -0400
 @@ -136,3 +136,4 @@ obj-$(CONFIG_EFIVAR_FS)		+= efivarfs/
  obj-$(CONFIG_EROFS_FS)		+= erofs/
  obj-$(CONFIG_VBOXSF_FS)		+= vboxsf/
  obj-$(CONFIG_ZONEFS_FS)		+= zonefs/
-+obj-$(CONFIG_SHIFT_FS)    += shiftfs.o
---- a/fs/Kconfig	2021-01-06 19:14:17.709697891 -0500
-+++ b/fs/Kconfig	2021-01-06 19:15:23.413281282 -0500
-@@ -122,6 +122,24 @@ source "fs/autofs/Kconfig"
++obj-$(CONFIG_SHIFT_FS)		+= shiftfs.o
+--- a/fs/Kconfig	2021-07-02 13:24:13.908678796 -0400
++++ b/fs/Kconfig	2021-07-02 13:28:26.312574889 -0400
+@@ -123,6 +123,24 @@ source "fs/autofs/Kconfig"
  source "fs/fuse/Kconfig"
  source "fs/overlayfs/Kconfig"
  
 +config SHIFT_FS
-+  tristate "UID/GID shifting overlay filesystem for containers"
-+  help
-+    This filesystem can overlay any mounted filesystem and shift
-+    the uid/gid the files appear at.  The idea is that
-+    unprivileged containers can use this to mount root volumes
-+    using this technique.
++	tristate "UID/GID shifting overlay filesystem for containers"
++	help
++	  This filesystem can overlay any mounted filesystem and shift
++	  the uid/gid the files appear at.  The idea is that
++	  unprivileged containers can use this to mount root volumes
++	  using this technique.
 +
 +config SHIFT_FS_POSIX_ACL
-+  bool "shiftfs POSIX Access Control Lists"
-+  depends on SHIFT_FS
-+  select FS_POSIX_ACL
-+  help
-+    POSIX Access Control Lists (ACLs) support permissions for users and
-+    groups beyond the owner/group/world scheme.
-+
-+    If you don't know what Access Control Lists are, say N.
++	bool "shiftfs POSIX Access Control Lists"
++	depends on SHIFT_FS
++	select FS_POSIX_ACL
++	help
++	  POSIX Access Control Lists (ACLs) support permissions for users and
++	  groups beyond the owner/group/world scheme.
++
++	  If you don't know what Access Control Lists are, say N.
 +
  menu "Caches"
  


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-07 13:13 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-07 13:13 UTC (permalink / raw
  To: gentoo-commits

commit:     b2fdf6344cc68032ac329ca3e3cc916dadc0420c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul  7 13:12:58 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul  7 13:12:58 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b2fdf634

Linux patch 5.10.48

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 ++
 1047_linux-5.10.48.patch | 123 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 127 insertions(+)

diff --git a/0000_README b/0000_README
index 58e20cf..dc7b9b6 100644
--- a/0000_README
+++ b/0000_README
@@ -231,6 +231,10 @@ Patch:  1046_linux-5.10.47.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.47
 
+Patch:  1047_linux-5.10.48.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.48
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1047_linux-5.10.48.patch b/1047_linux-5.10.48.patch
new file mode 100644
index 0000000..fb36685
--- /dev/null
+++ b/1047_linux-5.10.48.patch
@@ -0,0 +1,123 @@
+diff --git a/Makefile b/Makefile
+index fb2937bca41b3..52dcfe3371c4c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 47
++SUBLEVEL = 48
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index ef56780022c3e..d1ac2de41ea8a 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -296,6 +296,7 @@ union kvm_mmu_extended_role {
+ 		unsigned int cr4_pke:1;
+ 		unsigned int cr4_smap:1;
+ 		unsigned int cr4_smep:1;
++		unsigned int cr4_la57:1;
+ 		unsigned int maxphyaddr:6;
+ 	};
+ };
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 6b794344c02db..f2eeaf197294d 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -4442,6 +4442,7 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu)
+ 	ext.cr4_smap = !!kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
+ 	ext.cr4_pse = !!is_pse(vcpu);
+ 	ext.cr4_pke = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE);
++	ext.cr4_la57 = !!kvm_read_cr4_bits(vcpu, X86_CR4_LA57);
+ 	ext.maxphyaddr = cpuid_maxphyaddr(vcpu);
+ 
+ 	ext.valid = 1;
+diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
+index 14751c7ccd1f4..d1300fc003ed7 100644
+--- a/drivers/gpio/Kconfig
++++ b/drivers/gpio/Kconfig
+@@ -1337,6 +1337,7 @@ config GPIO_TPS68470
+ config GPIO_TQMX86
+ 	tristate "TQ-Systems QTMX86 GPIO"
+ 	depends on MFD_TQMX86 || COMPILE_TEST
++	depends on HAS_IOPORT_MAP
+ 	select GPIOLIB_IRQCHIP
+ 	help
+ 	  This driver supports GPIO on the TQMX86 IO controller.
+@@ -1404,6 +1405,7 @@ menu "PCI GPIO expanders"
+ config GPIO_AMD8111
+ 	tristate "AMD 8111 GPIO driver"
+ 	depends on X86 || COMPILE_TEST
++	depends on HAS_IOPORT_MAP
+ 	help
+ 	  The AMD 8111 south bridge contains 32 GPIO pins which can be used.
+ 
+diff --git a/drivers/gpio/gpio-mxc.c b/drivers/gpio/gpio-mxc.c
+index 643f4c557ac2a..ba6ed2a413f51 100644
+--- a/drivers/gpio/gpio-mxc.c
++++ b/drivers/gpio/gpio-mxc.c
+@@ -361,7 +361,7 @@ static int mxc_gpio_init_gc(struct mxc_gpio_port *port, int irq_base)
+ 	ct->chip.irq_unmask = irq_gc_mask_set_bit;
+ 	ct->chip.irq_set_type = gpio_set_irq_type;
+ 	ct->chip.irq_set_wake = gpio_set_wake_irq;
+-	ct->chip.flags = IRQCHIP_MASK_ON_SUSPEND;
++	ct->chip.flags = IRQCHIP_MASK_ON_SUSPEND | IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND;
+ 	ct->regs.ack = GPIO_ISR;
+ 	ct->regs.mask = GPIO_IMR;
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
+index 7daa12eec01bb..b4946b595d86e 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
+@@ -590,7 +590,7 @@ nouveau_bo_sync_for_device(struct nouveau_bo *nvbo)
+ 	struct ttm_dma_tt *ttm_dma = (struct ttm_dma_tt *)nvbo->bo.ttm;
+ 	int i;
+ 
+-	if (!ttm_dma)
++	if (!ttm_dma || !ttm_dma->dma_address)
+ 		return;
+ 
+ 	/* Don't waste time looping if the object is coherent */
+@@ -610,7 +610,7 @@ nouveau_bo_sync_for_cpu(struct nouveau_bo *nvbo)
+ 	struct ttm_dma_tt *ttm_dma = (struct ttm_dma_tt *)nvbo->bo.ttm;
+ 	int i;
+ 
+-	if (!ttm_dma)
++	if (!ttm_dma || !ttm_dma->dma_address)
+ 		return;
+ 
+ 	/* Don't waste time looping if the object is coherent */
+diff --git a/drivers/infiniband/hw/mlx5/fs.c b/drivers/infiniband/hw/mlx5/fs.c
+index 13d50b1781660..b3391ecedda7e 100644
+--- a/drivers/infiniband/hw/mlx5/fs.c
++++ b/drivers/infiniband/hw/mlx5/fs.c
+@@ -2136,6 +2136,13 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_FLOW_MATCHER_CREATE)(
+ 	if (err)
+ 		goto end;
+ 
++	if (obj->ns_type == MLX5_FLOW_NAMESPACE_FDB &&
++	    mlx5_eswitch_mode(dev->mdev->priv.eswitch) !=
++			      MLX5_ESWITCH_OFFLOADS) {
++		err = -EINVAL;
++		goto end;
++	}
++
+ 	uobj->object = obj;
+ 	obj->mdev = dev->mdev;
+ 	atomic_set(&obj->usecnt, 0);
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index fd4b582110b29..77961f0583674 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -220,6 +220,8 @@ static unsigned int sr_get_events(struct scsi_device *sdev)
+ 		return DISK_EVENT_EJECT_REQUEST;
+ 	else if (med->media_event_code == 2)
+ 		return DISK_EVENT_MEDIA_CHANGE;
++	else if (med->media_event_code == 3)
++		return DISK_EVENT_EJECT_REQUEST;
+ 	return 0;
+ }
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-08  3:27 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-07-08  3:27 UTC (permalink / raw
  To: gentoo-commits

commit:     0516d30fad4f3ec1e6c5f821a798c695034f77f5
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Jul  8 03:23:26 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Jul  8 03:26:38 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0516d30f

Add KVM: PPC: Book3S HV: Save and restore FSCR in the P9 path

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README                         |  4 +++
 1700_P9_save_and_restore_fscr.patch | 56 +++++++++++++++++++++++++++++++++++++
 2 files changed, 60 insertions(+)

diff --git a/0000_README b/0000_README
index dc7b9b6..aefbc8e 100644
--- a/0000_README
+++ b/0000_README
@@ -243,6 +243,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1700_P9_save_and_restore_fscr.patch
+From:   https://github.com/torvalds/linux/commit/25edcc50d76c.patch
+Desc:   Fix qemu on P9 ppc64.
+
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1700_P9_save_and_restore_fscr.patch b/1700_P9_save_and_restore_fscr.patch
new file mode 100644
index 0000000..59a7c7e
--- /dev/null
+++ b/1700_P9_save_and_restore_fscr.patch
@@ -0,0 +1,56 @@
+From 25edcc50d76c834479d11fcc7de46f3da4d95121 Mon Sep 17 00:00:00 2001
+From: Fabiano Rosas <farosas@linux.ibm.com>
+Date: Thu, 4 Feb 2021 17:05:17 -0300
+Subject: [PATCH] KVM: PPC: Book3S HV: Save and restore FSCR in the P9 path
+
+The Facility Status and Control Register is a privileged SPR that
+defines the availability of some features in problem state. Since it
+can be written by the guest, we must restore it to the previous host
+value after guest exit.
+
+This restoration is currently done by taking the value from
+current->thread.fscr, which in the P9 path is not enough anymore
+because the guest could context switch the QEMU thread, causing the
+guest-current value to be saved into the thread struct.
+
+The above situation manifested when running a QEMU linked against a
+libc with System Call Vectored support, which causes scv
+instructions to be run by QEMU early during the guest boot (during
+SLOF), at which point the FSCR is 0 due to guest entry. After a few
+scv calls (1 to a couple hundred), the context switching happens and
+the QEMU thread runs with the guest value, resulting in a Facility
+Unavailable interrupt.
+
+This patch saves and restores the host value of FSCR in the inner
+guest entry loop in a way independent of current->thread.fscr. The old
+way of doing it is still kept in place because it works for the old
+entry path.
+
+Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
+Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
+---
+ arch/powerpc/kvm/book3s_hv.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 89c686c17f0606..f6d470157fcb62 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -3611,6 +3611,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	unsigned long host_tidr = mfspr(SPRN_TIDR);
+ 	unsigned long host_iamr = mfspr(SPRN_IAMR);
+ 	unsigned long host_amr = mfspr(SPRN_AMR);
++	unsigned long host_fscr = mfspr(SPRN_FSCR);
+ 	s64 dec;
+ 	u64 tb;
+ 	int trap, save_pmu;
+@@ -3751,6 +3752,9 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	if (host_amr != vcpu->arch.amr)
+ 		mtspr(SPRN_AMR, host_amr);
+ 
++	if (host_fscr != vcpu->arch.fscr)
++		mtspr(SPRN_FSCR, host_fscr);
++
+ 	msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
+ 	store_fp_state(&vcpu->arch.fp);
+ #ifdef CONFIG_ALTIVEC


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-08 12:27 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-08 12:27 UTC (permalink / raw
  To: gentoo-commits

commit:     fafbdba1d0c62ba216f0cdbd963cad7c4a9ee612
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jul  8 12:26:29 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jul  8 12:26:29 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fafbdba1

KSPP:Fix DEVMEM Select and move help text

Bug: https://bugs.gentoo.org/798315

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index b671313..c063c6d 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,8 +6,8 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-06-08 16:56:49.698138501 -0400
-+++ b/distro/Kconfig	2021-06-08 17:11:33.377999003 -0400
+--- /dev/null	2021-07-04 10:53:51.006624416 -0400
++++ b/distro/Kconfig	2021-07-04 11:07:33.534248860 -0400
 @@ -0,0 +1,263 @@
 +menu "Gentoo Linux"
 +
@@ -172,15 +172,6 @@
 +config GENTOO_KERNEL_SELF_PROTECTION
 +	bool "Architecture Independant Kernel Self Protection Project Recommendations"
 +
-+	help
-+  Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
-+	See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
-+	Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
-+	to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for 
-+	dependency information on your specific architecture.
-+	Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
-+	for X86_64
-+
 +	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL
 +
 +	select BUG
@@ -188,8 +179,8 @@
 +	select DEBUG_WX
 +	select STACKPROTECTOR
 +	select STACKPROTECTOR_STRONG
-+	select STRICT_DEVMEM
-+	select IO_STRICT_DEVMEM
++	select STRICT_DEVMEM if DEVMEM=y
++	select IO_STRICT_DEVMEM if DEVMEM=y
 +	select SYN_COOKIES
 +	select DEBUG_CREDENTIALS
 +	select DEBUG_NOTIFIERS
@@ -222,6 +213,15 @@
 +	select GCC_PLUGIN_RANDSTRUCT
 +	select GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
 +
++	help
++  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
++		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
++		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
++		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for 
++		dependency information on your specific architecture.
++		Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
++		for X86_64
++
 +menu "Architecture Specific Self Protection Project Recommendations"
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_X86_64


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-11 14:43 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-11 14:43 UTC (permalink / raw
  To: gentoo-commits

commit:     9c16e6b001749e8de8bac1a8c050e9cd6b68da3b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 11 14:42:58 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jul 11 14:42:58 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9c16e6b0

Linux patch 5.10.49

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1048_linux-5.10.49.patch | 516 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 520 insertions(+)

diff --git a/0000_README b/0000_README
index aefbc8e..534b76a 100644
--- a/0000_README
+++ b/0000_README
@@ -235,6 +235,10 @@ Patch:  1047_linux-5.10.48.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.48
 
+Patch:  1048_linux-5.10.49.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.49
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1048_linux-5.10.49.patch b/1048_linux-5.10.49.patch
new file mode 100644
index 0000000..481f258
--- /dev/null
+++ b/1048_linux-5.10.49.patch
@@ -0,0 +1,516 @@
+diff --git a/Makefile b/Makefile
+index 52dcfe3371c4c..c51b73455ea33 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 48
++SUBLEVEL = 49
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/hexagon/Makefile b/arch/hexagon/Makefile
+index c168c6980d050..74b644ea8a00a 100644
+--- a/arch/hexagon/Makefile
++++ b/arch/hexagon/Makefile
+@@ -10,6 +10,9 @@ LDFLAGS_vmlinux += -G0
+ # Do not use single-byte enums; these will overflow.
+ KBUILD_CFLAGS += -fno-short-enums
+ 
++# We must use long-calls:
++KBUILD_CFLAGS += -mlong-calls
++
+ # Modules must use either long-calls, or use pic/plt.
+ # Use long-calls for now, it's easier.  And faster.
+ # KBUILD_CFLAGS_MODULE += -fPIC
+@@ -30,9 +33,6 @@ TIR_NAME := r19
+ KBUILD_CFLAGS += -ffixed-$(TIR_NAME) -DTHREADINFO_REG=$(TIR_NAME) -D__linux__
+ KBUILD_AFLAGS += -DTHREADINFO_REG=$(TIR_NAME)
+ 
+-LIBGCC := $(shell $(CC) $(KBUILD_CFLAGS) -print-libgcc-file-name 2>/dev/null)
+-libs-y += $(LIBGCC)
+-
+ head-y := arch/hexagon/kernel/head.o
+ 
+ core-y += arch/hexagon/kernel/ \
+diff --git a/arch/hexagon/include/asm/futex.h b/arch/hexagon/include/asm/futex.h
+index 6b9c554aee78e..9fb00a0ae89f7 100644
+--- a/arch/hexagon/include/asm/futex.h
++++ b/arch/hexagon/include/asm/futex.h
+@@ -21,7 +21,7 @@
+ 	"3:\n" \
+ 	".section .fixup,\"ax\"\n" \
+ 	"4: %1 = #%5;\n" \
+-	"   jump 3b\n" \
++	"   jump ##3b\n" \
+ 	".previous\n" \
+ 	".section __ex_table,\"a\"\n" \
+ 	".long 1b,4b,2b,4b\n" \
+@@ -90,7 +90,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr, u32 oldval,
+ 	"3:\n"
+ 	".section .fixup,\"ax\"\n"
+ 	"4: %0 = #%6\n"
+-	"   jump 3b\n"
++	"   jump ##3b\n"
+ 	".previous\n"
+ 	".section __ex_table,\"a\"\n"
+ 	".long 1b,4b,2b,4b\n"
+diff --git a/arch/hexagon/include/asm/timex.h b/arch/hexagon/include/asm/timex.h
+index 78338d8ada83f..8d4ec76fceb45 100644
+--- a/arch/hexagon/include/asm/timex.h
++++ b/arch/hexagon/include/asm/timex.h
+@@ -8,6 +8,7 @@
+ 
+ #include <asm-generic/timex.h>
+ #include <asm/timer-regs.h>
++#include <asm/hexagon_vm.h>
+ 
+ /* Using TCX0 as our clock.  CLOCK_TICK_RATE scheduled to be removed. */
+ #define CLOCK_TICK_RATE              TCX0_CLK_RATE
+@@ -16,7 +17,7 @@
+ 
+ static inline int read_current_timer(unsigned long *timer_val)
+ {
+-	*timer_val = (unsigned long) __vmgettime();
++	*timer_val = __vmgettime();
+ 	return 0;
+ }
+ 
+diff --git a/arch/hexagon/kernel/hexagon_ksyms.c b/arch/hexagon/kernel/hexagon_ksyms.c
+index 6fb1aaab1c298..35545a7386a06 100644
+--- a/arch/hexagon/kernel/hexagon_ksyms.c
++++ b/arch/hexagon/kernel/hexagon_ksyms.c
+@@ -35,8 +35,8 @@ EXPORT_SYMBOL(_dflt_cache_att);
+ DECLARE_EXPORT(__hexagon_memcpy_likely_aligned_min32bytes_mult8bytes);
+ 
+ /* Additional functions */
+-DECLARE_EXPORT(__divsi3);
+-DECLARE_EXPORT(__modsi3);
+-DECLARE_EXPORT(__udivsi3);
+-DECLARE_EXPORT(__umodsi3);
++DECLARE_EXPORT(__hexagon_divsi3);
++DECLARE_EXPORT(__hexagon_modsi3);
++DECLARE_EXPORT(__hexagon_udivsi3);
++DECLARE_EXPORT(__hexagon_umodsi3);
+ DECLARE_EXPORT(csum_tcpudp_magic);
+diff --git a/arch/hexagon/kernel/ptrace.c b/arch/hexagon/kernel/ptrace.c
+index a5a89e944257c..8975f9b4cedf0 100644
+--- a/arch/hexagon/kernel/ptrace.c
++++ b/arch/hexagon/kernel/ptrace.c
+@@ -35,7 +35,7 @@ void user_disable_single_step(struct task_struct *child)
+ 
+ static int genregs_get(struct task_struct *target,
+ 		   const struct user_regset *regset,
+-		   srtuct membuf to)
++		   struct membuf to)
+ {
+ 	struct pt_regs *regs = task_pt_regs(target);
+ 
+@@ -54,7 +54,7 @@ static int genregs_get(struct task_struct *target,
+ 	membuf_store(&to, regs->m0);
+ 	membuf_store(&to, regs->m1);
+ 	membuf_store(&to, regs->usr);
+-	membuf_store(&to, regs->p3_0);
++	membuf_store(&to, regs->preds);
+ 	membuf_store(&to, regs->gp);
+ 	membuf_store(&to, regs->ugp);
+ 	membuf_store(&to, pt_elr(regs)); // pc
+diff --git a/arch/hexagon/lib/Makefile b/arch/hexagon/lib/Makefile
+index 54be529d17a25..a64641e89d5fe 100644
+--- a/arch/hexagon/lib/Makefile
++++ b/arch/hexagon/lib/Makefile
+@@ -2,4 +2,5 @@
+ #
+ # Makefile for hexagon-specific library files.
+ #
+-obj-y = checksum.o io.o memcpy.o memset.o
++obj-y = checksum.o io.o memcpy.o memset.o memcpy_likely_aligned.o \
++         divsi3.o modsi3.o udivsi3.o  umodsi3.o
+diff --git a/arch/hexagon/lib/divsi3.S b/arch/hexagon/lib/divsi3.S
+new file mode 100644
+index 0000000000000..783e09424c2c8
+--- /dev/null
++++ b/arch/hexagon/lib/divsi3.S
+@@ -0,0 +1,67 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (c) 2021, The Linux Foundation. All rights reserved.
++ */
++
++#include <linux/linkage.h>
++
++SYM_FUNC_START(__hexagon_divsi3)
++        {
++                p0 = cmp.gt(r0,#-1)
++                p1 = cmp.gt(r1,#-1)
++                r3:2 = vabsw(r1:0)
++        }
++        {
++                p3 = xor(p0,p1)
++                r4 = sub(r2,r3)
++                r6 = cl0(r2)
++                p0 = cmp.gtu(r3,r2)
++        }
++        {
++                r0 = mux(p3,#-1,#1)
++                r7 = cl0(r3)
++                p1 = cmp.gtu(r3,r4)
++        }
++        {
++                r0 = mux(p0,#0,r0)
++                p0 = or(p0,p1)
++                if (p0.new) jumpr:nt r31
++                r6 = sub(r7,r6)
++        }
++        {
++                r7 = r6
++                r5:4 = combine(#1,r3)
++                r6 = add(#1,lsr(r6,#1))
++                p0 = cmp.gtu(r6,#4)
++        }
++        {
++                r5:4 = vaslw(r5:4,r7)
++                if (!p0) r6 = #3
++        }
++        {
++                loop0(1f,r6)
++                r7:6 = vlsrw(r5:4,#1)
++                r1:0 = #0
++        }
++        .falign
++1:
++        {
++                r5:4 = vlsrw(r5:4,#2)
++                if (!p0.new) r0 = add(r0,r5)
++                if (!p0.new) r2 = sub(r2,r4)
++                p0 = cmp.gtu(r4,r2)
++        }
++        {
++                r7:6 = vlsrw(r7:6,#2)
++                if (!p0.new) r0 = add(r0,r7)
++                if (!p0.new) r2 = sub(r2,r6)
++                p0 = cmp.gtu(r6,r2)
++        }:endloop0
++        {
++                if (!p0) r0 = add(r0,r7)
++        }
++        {
++                if (p3) r0 = sub(r1,r0)
++                jumpr r31
++        }
++SYM_FUNC_END(__hexagon_divsi3)
+diff --git a/arch/hexagon/lib/memcpy_likely_aligned.S b/arch/hexagon/lib/memcpy_likely_aligned.S
+new file mode 100644
+index 0000000000000..6a541fb90a540
+--- /dev/null
++++ b/arch/hexagon/lib/memcpy_likely_aligned.S
+@@ -0,0 +1,56 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (c) 2021, The Linux Foundation. All rights reserved.
++ */
++
++#include <linux/linkage.h>
++
++SYM_FUNC_START(__hexagon_memcpy_likely_aligned_min32bytes_mult8bytes)
++        {
++                p0 = bitsclr(r1,#7)
++                p0 = bitsclr(r0,#7)
++                if (p0.new) r5:4 = memd(r1)
++                if (p0.new) r7:6 = memd(r1+#8)
++        }
++        {
++                if (!p0) jump:nt .Lmemcpy_call
++                if (p0) r9:8 = memd(r1+#16)
++                if (p0) r11:10 = memd(r1+#24)
++                p0 = cmp.gtu(r2,#64)
++        }
++        {
++                if (p0) jump:nt .Lmemcpy_call
++                if (!p0) memd(r0) = r5:4
++                if (!p0) memd(r0+#8) = r7:6
++                p0 = cmp.gtu(r2,#32)
++        }
++        {
++                p1 = cmp.gtu(r2,#40)
++                p2 = cmp.gtu(r2,#48)
++                if (p0) r13:12 = memd(r1+#32)
++                if (p1.new) r15:14 = memd(r1+#40)
++        }
++        {
++                memd(r0+#16) = r9:8
++                memd(r0+#24) = r11:10
++        }
++        {
++                if (p0) memd(r0+#32) = r13:12
++                if (p1) memd(r0+#40) = r15:14
++                if (!p2) jumpr:t r31
++        }
++        {
++                p0 = cmp.gtu(r2,#56)
++                r5:4 = memd(r1+#48)
++                if (p0.new) r7:6 = memd(r1+#56)
++        }
++        {
++                memd(r0+#48) = r5:4
++                if (p0) memd(r0+#56) = r7:6
++                jumpr r31
++        }
++
++.Lmemcpy_call:
++        jump memcpy
++
++SYM_FUNC_END(__hexagon_memcpy_likely_aligned_min32bytes_mult8bytes)
+diff --git a/arch/hexagon/lib/modsi3.S b/arch/hexagon/lib/modsi3.S
+new file mode 100644
+index 0000000000000..9ea1c86efac2b
+--- /dev/null
++++ b/arch/hexagon/lib/modsi3.S
+@@ -0,0 +1,46 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (c) 2021, The Linux Foundation. All rights reserved.
++ */
++
++#include <linux/linkage.h>
++
++SYM_FUNC_START(__hexagon_modsi3)
++        {
++                p2 = cmp.ge(r0,#0)
++                r2 = abs(r0)
++                r1 = abs(r1)
++        }
++        {
++                r3 = cl0(r2)
++                r4 = cl0(r1)
++                p0 = cmp.gtu(r1,r2)
++        }
++        {
++                r3 = sub(r4,r3)
++                if (p0) jumpr r31
++        }
++        {
++                p1 = cmp.eq(r3,#0)
++                loop0(1f,r3)
++                r0 = r2
++                r2 = lsl(r1,r3)
++        }
++        .falign
++1:
++        {
++                p0 = cmp.gtu(r2,r0)
++                if (!p0.new) r0 = sub(r0,r2)
++                r2 = lsr(r2,#1)
++                if (p1) r1 = #0
++        }:endloop0
++        {
++                p0 = cmp.gtu(r2,r0)
++                if (!p0.new) r0 = sub(r0,r1)
++                if (p2) jumpr r31
++        }
++        {
++                r0 = neg(r0)
++                jumpr r31
++        }
++SYM_FUNC_END(__hexagon_modsi3)
+diff --git a/arch/hexagon/lib/udivsi3.S b/arch/hexagon/lib/udivsi3.S
+new file mode 100644
+index 0000000000000..477f27b9311cd
+--- /dev/null
++++ b/arch/hexagon/lib/udivsi3.S
+@@ -0,0 +1,38 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (c) 2021, The Linux Foundation. All rights reserved.
++ */
++
++#include <linux/linkage.h>
++
++SYM_FUNC_START(__hexagon_udivsi3)
++        {
++                r2 = cl0(r0)
++                r3 = cl0(r1)
++                r5:4 = combine(#1,#0)
++                p0 = cmp.gtu(r1,r0)
++        }
++        {
++                r6 = sub(r3,r2)
++                r4 = r1
++                r1:0 = combine(r0,r4)
++                if (p0) jumpr r31
++        }
++        {
++                r3:2 = vlslw(r5:4,r6)
++                loop0(1f,r6)
++        }
++        .falign
++1:
++        {
++                p0 = cmp.gtu(r2,r1)
++                if (!p0.new) r1 = sub(r1,r2)
++                if (!p0.new) r0 = add(r0,r3)
++                r3:2 = vlsrw(r3:2,#1)
++        }:endloop0
++        {
++                p0 = cmp.gtu(r2,r1)
++                if (!p0.new) r0 = add(r0,r3)
++                jumpr r31
++        }
++SYM_FUNC_END(__hexagon_udivsi3)
+diff --git a/arch/hexagon/lib/umodsi3.S b/arch/hexagon/lib/umodsi3.S
+new file mode 100644
+index 0000000000000..280bf06a55e7d
+--- /dev/null
++++ b/arch/hexagon/lib/umodsi3.S
+@@ -0,0 +1,36 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (c) 2021, The Linux Foundation. All rights reserved.
++ */
++
++#include <linux/linkage.h>
++
++SYM_FUNC_START(__hexagon_umodsi3)
++        {
++                r2 = cl0(r0)
++                r3 = cl0(r1)
++                p0 = cmp.gtu(r1,r0)
++        }
++        {
++                r2 = sub(r3,r2)
++                if (p0) jumpr r31
++        }
++        {
++                loop0(1f,r2)
++                p1 = cmp.eq(r2,#0)
++                r2 = lsl(r1,r2)
++        }
++        .falign
++1:
++        {
++                p0 = cmp.gtu(r2,r0)
++                if (!p0.new) r0 = sub(r0,r2)
++                r2 = lsr(r2,#1)
++                if (p1) r1 = #0
++        }:endloop0
++        {
++                p0 = cmp.gtu(r2,r0)
++                if (!p0.new) r0 = sub(r0,r1)
++                jumpr r31
++        }
++SYM_FUNC_END(__hexagon_umodsi3)
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 280f7992ae993..965b702208d85 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -3583,6 +3583,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	unsigned long host_tidr = mfspr(SPRN_TIDR);
+ 	unsigned long host_iamr = mfspr(SPRN_IAMR);
+ 	unsigned long host_amr = mfspr(SPRN_AMR);
++	unsigned long host_fscr = mfspr(SPRN_FSCR);
+ 	s64 dec;
+ 	u64 tb;
+ 	int trap, save_pmu;
+@@ -3726,6 +3727,9 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	if (host_amr != vcpu->arch.amr)
+ 		mtspr(SPRN_AMR, host_amr);
+ 
++	if (host_fscr != vcpu->arch.fscr)
++		mtspr(SPRN_FSCR, host_fscr);
++
+ 	msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
+ 	store_fp_state(&vcpu->arch.fp);
+ #ifdef CONFIG_ALTIVEC
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 5ad5282641350..282f3d2388cc2 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -1588,6 +1588,31 @@ static int uvc_scan_chain_forward(struct uvc_video_chain *chain,
+ 				return -EINVAL;
+ 			}
+ 
++			/*
++			 * Some devices reference an output terminal as the
++			 * source of extension units. This is incorrect, as
++			 * output terminals only have an input pin, and thus
++			 * can't be connected to any entity in the forward
++			 * direction. The resulting topology would cause issues
++			 * when registering the media controller graph. To
++			 * avoid this problem, connect the extension unit to
++			 * the source of the output terminal instead.
++			 */
++			if (UVC_ENTITY_IS_OTERM(entity)) {
++				struct uvc_entity *source;
++
++				source = uvc_entity_by_id(chain->dev,
++							  entity->baSourceID[0]);
++				if (!source) {
++					uvc_trace(UVC_TRACE_DESCR,
++						"Can't connect extension unit %u in chain\n",
++						forward->id);
++					break;
++				}
++
++				forward->baSourceID[0] = source->id;
++			}
++
+ 			list_add_tail(&forward->chain, &chain->entities);
+ 			if (uvc_trace_param & UVC_TRACE_PROBE) {
+ 				if (!found)
+@@ -1608,6 +1633,13 @@ static int uvc_scan_chain_forward(struct uvc_video_chain *chain,
+ 				return -EINVAL;
+ 			}
+ 
++			if (UVC_ENTITY_IS_OTERM(entity)) {
++				uvc_trace(UVC_TRACE_DESCR,
++					"Unsupported connection between output terminals %u and %u\n",
++					entity->id, forward->id);
++				break;
++			}
++
+ 			list_add_tail(&forward->chain, &chain->entities);
+ 			if (uvc_trace_param & UVC_TRACE_PROBE) {
+ 				if (!found)
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 29bec07205142..af0f6ad32522c 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -583,6 +583,9 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
+ 	}
+ 
+ 	info->eoi_time = 0;
++
++	/* is_active hasn't been reset yet, do it now. */
++	smp_store_release(&info->is_active, 0);
+ 	do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
+ }
+ 
+@@ -1807,10 +1810,22 @@ static void lateeoi_ack_dynirq(struct irq_data *data)
+ 	struct irq_info *info = info_for_irq(data->irq);
+ 	evtchn_port_t evtchn = info ? info->evtchn : 0;
+ 
+-	if (VALID_EVTCHN(evtchn)) {
+-		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
+-		ack_dynirq(data);
+-	}
++	if (!VALID_EVTCHN(evtchn))
++		return;
++
++	do_mask(info, EVT_MASK_REASON_EOI_PENDING);
++
++	if (unlikely(irqd_is_setaffinity_pending(data)) &&
++	    likely(!irqd_irq_disabled(data))) {
++		do_mask(info, EVT_MASK_REASON_TEMPORARY);
++
++		clear_evtchn(evtchn);
++
++		irq_move_masked_irq(data);
++
++		do_unmask(info, EVT_MASK_REASON_TEMPORARY);
++	} else
++		clear_evtchn(evtchn);
+ }
+ 
+ static void lateeoi_mask_ack_dynirq(struct irq_data *data)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-11 15:11 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-11 15:11 UTC (permalink / raw
  To: gentoo-commits

commit:     10611e78768a62db9ca233f50ba3f17e2451bc18
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 11 15:10:40 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jul 11 15:10:40 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=10611e78

Remove redundant patch

Removed: 1700_P9_save_and_restore_fscr.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                         |  4 ---
 1700_P9_save_and_restore_fscr.patch | 56 -------------------------------------
 2 files changed, 60 deletions(-)

diff --git a/0000_README b/0000_README
index 534b76a..908e68d 100644
--- a/0000_README
+++ b/0000_README
@@ -247,10 +247,6 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
-Patch:  1700_P9_save_and_restore_fscr.patch
-From:   https://github.com/torvalds/linux/commit/25edcc50d76c.patch
-Desc:   Fix qemu on P9 ppc64.
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1700_P9_save_and_restore_fscr.patch b/1700_P9_save_and_restore_fscr.patch
deleted file mode 100644
index 59a7c7e..0000000
--- a/1700_P9_save_and_restore_fscr.patch
+++ /dev/null
@@ -1,56 +0,0 @@
-From 25edcc50d76c834479d11fcc7de46f3da4d95121 Mon Sep 17 00:00:00 2001
-From: Fabiano Rosas <farosas@linux.ibm.com>
-Date: Thu, 4 Feb 2021 17:05:17 -0300
-Subject: [PATCH] KVM: PPC: Book3S HV: Save and restore FSCR in the P9 path
-
-The Facility Status and Control Register is a privileged SPR that
-defines the availability of some features in problem state. Since it
-can be written by the guest, we must restore it to the previous host
-value after guest exit.
-
-This restoration is currently done by taking the value from
-current->thread.fscr, which in the P9 path is not enough anymore
-because the guest could context switch the QEMU thread, causing the
-guest-current value to be saved into the thread struct.
-
-The above situation manifested when running a QEMU linked against a
-libc with System Call Vectored support, which causes scv
-instructions to be run by QEMU early during the guest boot (during
-SLOF), at which point the FSCR is 0 due to guest entry. After a few
-scv calls (1 to a couple hundred), the context switching happens and
-the QEMU thread runs with the guest value, resulting in a Facility
-Unavailable interrupt.
-
-This patch saves and restores the host value of FSCR in the inner
-guest entry loop in a way independent of current->thread.fscr. The old
-way of doing it is still kept in place because it works for the old
-entry path.
-
-Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
-Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
----
- arch/powerpc/kvm/book3s_hv.c | 4 ++++
- 1 file changed, 4 insertions(+)
-
-diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
-index 89c686c17f0606..f6d470157fcb62 100644
---- a/arch/powerpc/kvm/book3s_hv.c
-+++ b/arch/powerpc/kvm/book3s_hv.c
-@@ -3611,6 +3611,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
- 	unsigned long host_tidr = mfspr(SPRN_TIDR);
- 	unsigned long host_iamr = mfspr(SPRN_IAMR);
- 	unsigned long host_amr = mfspr(SPRN_AMR);
-+	unsigned long host_fscr = mfspr(SPRN_FSCR);
- 	s64 dec;
- 	u64 tb;
- 	int trap, save_pmu;
-@@ -3751,6 +3752,9 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
- 	if (host_amr != vcpu->arch.amr)
- 		mtspr(SPRN_AMR, host_amr);
- 
-+	if (host_fscr != vcpu->arch.fscr)
-+		mtspr(SPRN_FSCR, host_fscr);
-+
- 	msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
- 	store_fp_state(&vcpu->arch.fp);
- #ifdef CONFIG_ALTIVEC


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-12 17:25 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-12 17:25 UTC (permalink / raw
  To: gentoo-commits

commit:     da5f7d3accc013203b1edbe8926be726a0a3507d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jul 12 17:25:04 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jul 12 17:25:04 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=da5f7d3a

Revert ibmvnic: remove duplicate napi_schedule call in open function

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 ++
 ...nic-remove-duplicate-napi_schedule-call-i.patch | 46 ++++++++++++++++++++++
 2 files changed, 50 insertions(+)

diff --git a/0000_README b/0000_README
index 908e68d..7e89cd1 100644
--- a/0000_README
+++ b/0000_README
@@ -251,6 +251,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2400_revert-ibmvnic-remove-duplicate-napi_schedule-call-i.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/plain/queue-5.10/revert-ibmvnic-remove-duplicate-napi_schedule-call-i.patch
+Desc:   Revert ibmvnic: remove duplicate napi_schedule call in open function
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2400_revert-ibmvnic-remove-duplicate-napi_schedule-call-i.patch b/2400_revert-ibmvnic-remove-duplicate-napi_schedule-call-i.patch
new file mode 100644
index 0000000..6acd432
--- /dev/null
+++ b/2400_revert-ibmvnic-remove-duplicate-napi_schedule-call-i.patch
@@ -0,0 +1,46 @@
+From a795f695bbc648e27d56f99c38fc1afbe832b088 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 23 Jun 2021 21:13:11 -0700
+Subject: Revert "ibmvnic: remove duplicate napi_schedule call in open
+ function"
+
+From: Dany Madden <drt@linux.ibm.com>
+
+[ Upstream commit 2ca220f92878470c6ba03f9946e412323093cc94 ]
+
+This reverts commit 7c451f3ef676c805a4b77a743a01a5c21a250a73.
+
+When a vnic interface is taken down and then up, connectivity is not
+restored. We bisected it to this commit. Reverting this commit until
+we can fully investigate the issue/benefit of the change.
+
+Fixes: 7c451f3ef676 ("ibmvnic: remove duplicate napi_schedule call in open function")
+Reported-by: Cristobal Forno <cforno12@linux.ibm.com>
+Reported-by: Abdul Haleem <abdhalee@in.ibm.com>
+Signed-off-by: Dany Madden <drt@linux.ibm.com>
+Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/ethernet/ibm/ibmvnic.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 8cc444684491..765b38c8b252 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1166,6 +1166,11 @@ static int __ibmvnic_open(struct net_device *netdev)
+ 
+ 	netif_tx_start_all_queues(netdev);
+ 
++	if (prev_state == VNIC_CLOSED) {
++		for (i = 0; i < adapter->req_rx_queues; i++)
++			napi_schedule(&adapter->napi[i]);
++	}
++
+ 	adapter->state = VNIC_OPEN;
+ 	return rc;
+ }
+-- 
+2.30.2
+


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-13 12:37 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-13 12:37 UTC (permalink / raw
  To: gentoo-commits

commit:     2f2274d731535d1fd720be05087fbe5f24aa66c2
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 13 12:36:49 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jul 13 12:36:49 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2f2274d7

Update Homepage for CPU Optimization patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/0000_README b/0000_README
index 7e89cd1..2efc79e 100644
--- a/0000_README
+++ b/0000_README
@@ -272,5 +272,5 @@ From:   https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/focal
 Desc:   UID/GID shifting overlay filesystem for containers 
 
 Patch:  5010_enable-cpu-optimizations-universal.patch
-From:   https://github.com/graysky2/kernel_gcc_patch/
+From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.8 patch enables gcc = v9+ optimizations for additional CPUs.


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-14 16:21 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-14 16:21 UTC (permalink / raw
  To: gentoo-commits

commit:     225807b2829dd6d5b341d875303fb27c82164a08
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 14 16:21:41 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 14 16:21:41 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=225807b2

Linux patch 5.10.50

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1049_linux-5.10.50.patch | 21067 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 21071 insertions(+)

diff --git a/0000_README b/0000_README
index 2efc79e..e9246be 100644
--- a/0000_README
+++ b/0000_README
@@ -239,6 +239,10 @@ Patch:  1048_linux-5.10.49.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.49
 
+Patch:  1049_linux-5.10.50.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.50
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1049_linux-5.10.50.patch b/1049_linux-5.10.50.patch
new file mode 100644
index 0000000..2eba41f
--- /dev/null
+++ b/1049_linux-5.10.50.patch
@@ -0,0 +1,21067 @@
+diff --git a/Documentation/ABI/testing/evm b/Documentation/ABI/testing/evm
+index 3c477ba48a312..2243b72e41107 100644
+--- a/Documentation/ABI/testing/evm
++++ b/Documentation/ABI/testing/evm
+@@ -49,8 +49,30 @@ Description:
+ 		modification of EVM-protected metadata and
+ 		disable all further modification of policy
+ 
+-		Note that once a key has been loaded, it will no longer be
+-		possible to enable metadata modification.
++		Echoing a value is additive, the new value is added to the
++		existing initialization flags.
++
++		For example, after::
++
++		  echo 2 ><securityfs>/evm
++
++		another echo can be performed::
++
++		  echo 1 ><securityfs>/evm
++
++		and the resulting value will be 3.
++
++		Note that once an HMAC key has been loaded, it will no longer
++		be possible to enable metadata modification. Signaling that an
++		HMAC key has been loaded will clear the corresponding flag.
++		For example, if the current value is 6 (2 and 4 set)::
++
++		  echo 1 ><securityfs>/evm
++
++		will set the new value to 3 (4 cleared).
++
++		Loading an HMAC key is the only way to disable metadata
++		modification.
+ 
+ 		Until key loading has been signaled EVM can not create
+ 		or validate the 'security.evm' xattr, but returns
+diff --git a/Documentation/ABI/testing/sysfs-bus-papr-pmem b/Documentation/ABI/testing/sysfs-bus-papr-pmem
+index 8316c33862a04..0aa02bf2bde5c 100644
+--- a/Documentation/ABI/testing/sysfs-bus-papr-pmem
++++ b/Documentation/ABI/testing/sysfs-bus-papr-pmem
+@@ -39,9 +39,11 @@ KernelVersion:	v5.9
+ Contact:	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, linux-nvdimm@lists.01.org,
+ Description:
+ 		(RO) Report various performance stats related to papr-scm NVDIMM
+-		device.  Each stat is reported on a new line with each line
+-		composed of a stat-identifier followed by it value. Below are
+-		currently known dimm performance stats which are reported:
++		device. This attribute is only available for NVDIMM devices
++		that support reporting NVDIMM performance stats. Each stat is
++		reported on a new line with each line composed of a
++		stat-identifier followed by it value. Below are currently known
++		dimm performance stats which are reported:
+ 
+ 		* "CtlResCt" : Controller Reset Count
+ 		* "CtlResTm" : Controller Reset Elapsed Time
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 26bfe7ae711b8..f103667d3727f 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -577,6 +577,12 @@
+ 			loops can be debugged more effectively on production
+ 			systems.
+ 
++	clocksource.max_cswd_read_retries= [KNL]
++			Number of clocksource_watchdog() retries due to
++			external delays before the clock will be marked
++			unstable.  Defaults to three retries, that is,
++			four attempts to read the clock under test.
++
+ 	clearcpuid=BITNUM[,BITNUM...] [X86]
+ 			Disable CPUID feature X for the kernel. See
+ 			arch/x86/include/asm/cpufeatures.h for the valid bit
+diff --git a/Documentation/hwmon/max31790.rst b/Documentation/hwmon/max31790.rst
+index f301385d8cef3..7b097c3b9b908 100644
+--- a/Documentation/hwmon/max31790.rst
++++ b/Documentation/hwmon/max31790.rst
+@@ -38,6 +38,7 @@ Sysfs entries
+ fan[1-12]_input    RO  fan tachometer speed in RPM
+ fan[1-12]_fault    RO  fan experienced fault
+ fan[1-6]_target    RW  desired fan speed in RPM
+-pwm[1-6]_enable    RW  regulator mode, 0=disabled, 1=manual mode, 2=rpm mode
+-pwm[1-6]           RW  fan target duty cycle (0-255)
++pwm[1-6]_enable    RW  regulator mode, 0=disabled (duty cycle=0%), 1=manual mode, 2=rpm mode
++pwm[1-6]           RW  read: current pwm duty cycle,
++                       write: target pwm duty cycle (0-255)
+ ================== === =======================================================
+diff --git a/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst b/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst
+index ce728c757eaf8..b864869b42bc8 100644
+--- a/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst
++++ b/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst
+@@ -4030,7 +4030,7 @@ enum v4l2_mpeg_video_hevc_size_of_length_field -
+     :stub-columns: 0
+     :widths:       1 1 2
+ 
+-    * - ``V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT``
++    * - ``V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT_ENABLED``
+       - 0x00000001
+       -
+     * - ``V4L2_HEVC_PPS_FLAG_OUTPUT_FLAG_PRESENT``
+@@ -4238,6 +4238,9 @@ enum v4l2_mpeg_video_hevc_size_of_length_field -
+     * - ``V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_LOOP_FILTER_ACROSS_SLICES_ENABLED``
+       - 0x00000100
+       -
++    * - ``V4L2_HEVC_SLICE_PARAMS_FLAG_DEPENDENT_SLICE_SEGMENT``
++      - 0x00000200
++      -
+ 
+ .. c:type:: v4l2_hevc_dpb_entry
+ 
+diff --git a/Documentation/vm/arch_pgtable_helpers.rst b/Documentation/vm/arch_pgtable_helpers.rst
+index f3591ee3aaa89..552567d863b86 100644
+--- a/Documentation/vm/arch_pgtable_helpers.rst
++++ b/Documentation/vm/arch_pgtable_helpers.rst
+@@ -50,7 +50,7 @@ PTE Page Table Helpers
+ +---------------------------+--------------------------------------------------+
+ | pte_mkwrite               | Creates a writable PTE                           |
+ +---------------------------+--------------------------------------------------+
+-| pte_mkwrprotect           | Creates a write protected PTE                    |
++| pte_wrprotect             | Creates a write protected PTE                    |
+ +---------------------------+--------------------------------------------------+
+ | pte_mkspecial             | Creates a special PTE                            |
+ +---------------------------+--------------------------------------------------+
+@@ -120,7 +120,7 @@ PMD Page Table Helpers
+ +---------------------------+--------------------------------------------------+
+ | pmd_mkwrite               | Creates a writable PMD                           |
+ +---------------------------+--------------------------------------------------+
+-| pmd_mkwrprotect           | Creates a write protected PMD                    |
++| pmd_wrprotect             | Creates a write protected PMD                    |
+ +---------------------------+--------------------------------------------------+
+ | pmd_mkspecial             | Creates a special PMD                            |
+ +---------------------------+--------------------------------------------------+
+@@ -186,7 +186,7 @@ PUD Page Table Helpers
+ +---------------------------+--------------------------------------------------+
+ | pud_mkwrite               | Creates a writable PUD                           |
+ +---------------------------+--------------------------------------------------+
+-| pud_mkwrprotect           | Creates a write protected PUD                    |
++| pud_wrprotect             | Creates a write protected PUD                    |
+ +---------------------------+--------------------------------------------------+
+ | pud_mkdevmap              | Creates a ZONE_DEVICE mapped PUD                 |
+ +---------------------------+--------------------------------------------------+
+@@ -224,7 +224,7 @@ HugeTLB Page Table Helpers
+ +---------------------------+--------------------------------------------------+
+ | huge_pte_mkwrite          | Creates a writable HugeTLB                       |
+ +---------------------------+--------------------------------------------------+
+-| huge_pte_mkwrprotect      | Creates a write protected HugeTLB                |
++| huge_pte_wrprotect        | Creates a write protected HugeTLB                |
+ +---------------------------+--------------------------------------------------+
+ | huge_ptep_get_and_clear   | Clears a HugeTLB                                 |
+ +---------------------------+--------------------------------------------------+
+diff --git a/Makefile b/Makefile
+index c51b73455ea33..695f8e739a91b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 49
++SUBLEVEL = 50
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -978,7 +978,7 @@ LDFLAGS_vmlinux	+= $(call ld-option, -X,)
+ endif
+ 
+ ifeq ($(CONFIG_RELR),y)
+-LDFLAGS_vmlinux	+= --pack-dyn-relocs=relr
++LDFLAGS_vmlinux	+= --pack-dyn-relocs=relr --use-android-relr-tags
+ endif
+ 
+ # We never want expected sections to be placed heuristically by the
+diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
+index f4dd9f3f30010..4b2575f936d46 100644
+--- a/arch/alpha/kernel/smp.c
++++ b/arch/alpha/kernel/smp.c
+@@ -166,7 +166,6 @@ smp_callin(void)
+ 	DBGS(("smp_callin: commencing CPU %d current %p active_mm %p\n",
+ 	      cpuid, current, current->active_mm));
+ 
+-	preempt_disable();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+ 
+diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c
+index 52906d3145371..db0e104d68355 100644
+--- a/arch/arc/kernel/smp.c
++++ b/arch/arc/kernel/smp.c
+@@ -189,7 +189,6 @@ void start_kernel_secondary(void)
+ 	pr_info("## CPU%u LIVE ##: Executing Code...\n", cpu);
+ 
+ 	local_irq_enable();
+-	preempt_disable();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+ 
+diff --git a/arch/arm/boot/dts/sama5d4.dtsi b/arch/arm/boot/dts/sama5d4.dtsi
+index 04f24cf752d34..e5c2c52013e3e 100644
+--- a/arch/arm/boot/dts/sama5d4.dtsi
++++ b/arch/arm/boot/dts/sama5d4.dtsi
+@@ -809,7 +809,7 @@
+ 					0xffffffff 0x3ffcfe7c 0x1c010101	/* pioA */
+ 					0x7fffffff 0xfffccc3a 0x3f00cc3a	/* pioB */
+ 					0xffffffff 0x3ff83fff 0xff00ffff	/* pioC */
+-					0x0003ff00 0x8002a800 0x00000000	/* pioD */
++					0xb003ff00 0x8002a800 0x00000000	/* pioD */
+ 					0xffffffff 0x7fffffff 0x76fff1bf	/* pioE */
+ 					>;
+ 
+diff --git a/arch/arm/boot/dts/ste-href.dtsi b/arch/arm/boot/dts/ste-href.dtsi
+index ff47cbf6ed3b7..359c1219b0bab 100644
+--- a/arch/arm/boot/dts/ste-href.dtsi
++++ b/arch/arm/boot/dts/ste-href.dtsi
+@@ -4,6 +4,7 @@
+  */
+ 
+ #include <dt-bindings/interrupt-controller/irq.h>
++#include <dt-bindings/leds/common.h>
+ #include "ste-href-family-pinctrl.dtsi"
+ 
+ / {
+@@ -64,17 +65,20 @@
+ 					reg = <0>;
+ 					led-cur = /bits/ 8 <0x2f>;
+ 					max-cur = /bits/ 8 <0x5f>;
++					color = <LED_COLOR_ID_BLUE>;
+ 					linux,default-trigger = "heartbeat";
+ 				};
+ 				chan@1 {
+ 					reg = <1>;
+ 					led-cur = /bits/ 8 <0x2f>;
+ 					max-cur = /bits/ 8 <0x5f>;
++					color = <LED_COLOR_ID_BLUE>;
+ 				};
+ 				chan@2 {
+ 					reg = <2>;
+ 					led-cur = /bits/ 8 <0x2f>;
+ 					max-cur = /bits/ 8 <0x5f>;
++					color = <LED_COLOR_ID_BLUE>;
+ 				};
+ 			};
+ 			lp5521@34 {
+@@ -88,16 +92,19 @@
+ 					reg = <0>;
+ 					led-cur = /bits/ 8 <0x2f>;
+ 					max-cur = /bits/ 8 <0x5f>;
++					color = <LED_COLOR_ID_BLUE>;
+ 				};
+ 				chan@1 {
+ 					reg = <1>;
+ 					led-cur = /bits/ 8 <0x2f>;
+ 					max-cur = /bits/ 8 <0x5f>;
++					color = <LED_COLOR_ID_BLUE>;
+ 				};
+ 				chan@2 {
+ 					reg = <2>;
+ 					led-cur = /bits/ 8 <0x2f>;
+ 					max-cur = /bits/ 8 <0x5f>;
++					color = <LED_COLOR_ID_BLUE>;
+ 				};
+ 			};
+ 			bh1780@29 {
+diff --git a/arch/arm/kernel/perf_event_v7.c b/arch/arm/kernel/perf_event_v7.c
+index 2924d7910b106..eb2190477da10 100644
+--- a/arch/arm/kernel/perf_event_v7.c
++++ b/arch/arm/kernel/perf_event_v7.c
+@@ -773,10 +773,10 @@ static inline void armv7pmu_write_counter(struct perf_event *event, u64 value)
+ 		pr_err("CPU%u writing wrong counter %d\n",
+ 			smp_processor_id(), idx);
+ 	} else if (idx == ARMV7_IDX_CYCLE_COUNTER) {
+-		asm volatile("mcr p15, 0, %0, c9, c13, 0" : : "r" (value));
++		asm volatile("mcr p15, 0, %0, c9, c13, 0" : : "r" ((u32)value));
+ 	} else {
+ 		armv7_pmnc_select_counter(idx);
+-		asm volatile("mcr p15, 0, %0, c9, c13, 2" : : "r" (value));
++		asm volatile("mcr p15, 0, %0, c9, c13, 2" : : "r" ((u32)value));
+ 	}
+ }
+ 
+diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
+index 48099c6e1e4a6..8aa7fa949c232 100644
+--- a/arch/arm/kernel/smp.c
++++ b/arch/arm/kernel/smp.c
+@@ -432,7 +432,6 @@ asmlinkage void secondary_start_kernel(void)
+ #endif
+ 	pr_debug("CPU%u: Booted secondary processor\n", cpu);
+ 
+-	preempt_disable();
+ 	trace_hardirqs_off();
+ 
+ 	/*
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index a89e47d95eef2..879115dfdf828 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -134,7 +134,7 @@
+ 
+ 			uart0: serial@12000 {
+ 				compatible = "marvell,armada-3700-uart";
+-				reg = <0x12000 0x200>;
++				reg = <0x12000 0x18>;
+ 				clocks = <&xtalclk>;
+ 				interrupts =
+ 				<GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
+index f68a0e64482a1..5ef624fef44a2 100644
+--- a/arch/arm64/include/asm/asm-uaccess.h
++++ b/arch/arm64/include/asm/asm-uaccess.h
+@@ -15,10 +15,10 @@
+ 	.macro	__uaccess_ttbr0_disable, tmp1
+ 	mrs	\tmp1, ttbr1_el1			// swapper_pg_dir
+ 	bic	\tmp1, \tmp1, #TTBR_ASID_MASK
+-	sub	\tmp1, \tmp1, #RESERVED_TTBR0_SIZE	// reserved_ttbr0 just before swapper_pg_dir
++	sub	\tmp1, \tmp1, #PAGE_SIZE		// reserved_pg_dir just before swapper_pg_dir
+ 	msr	ttbr0_el1, \tmp1			// set reserved TTBR0_EL1
+ 	isb
+-	add	\tmp1, \tmp1, #RESERVED_TTBR0_SIZE
++	add	\tmp1, \tmp1, #PAGE_SIZE
+ 	msr	ttbr1_el1, \tmp1		// set reserved ASID
+ 	isb
+ 	.endm
+diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
+index 19ca76ea60d98..587c504a4c8b2 100644
+--- a/arch/arm64/include/asm/kernel-pgtable.h
++++ b/arch/arm64/include/asm/kernel-pgtable.h
+@@ -89,12 +89,6 @@
+ #define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end))
+ #define IDMAP_DIR_SIZE		(IDMAP_PGTABLE_LEVELS * PAGE_SIZE)
+ 
+-#ifdef CONFIG_ARM64_SW_TTBR0_PAN
+-#define RESERVED_TTBR0_SIZE	(PAGE_SIZE)
+-#else
+-#define RESERVED_TTBR0_SIZE	(0)
+-#endif
+-
+ /* Initial memory map size */
+ #if ARM64_SWAPPER_USES_SECTION_MAPS
+ #define SWAPPER_BLOCK_SHIFT	SECTION_SHIFT
+diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
+index 4e2ba94778450..5a54a5ab5f928 100644
+--- a/arch/arm64/include/asm/mmu_context.h
++++ b/arch/arm64/include/asm/mmu_context.h
+@@ -36,11 +36,11 @@ static inline void contextidr_thread_switch(struct task_struct *next)
+ }
+ 
+ /*
+- * Set TTBR0 to empty_zero_page. No translations will be possible via TTBR0.
++ * Set TTBR0 to reserved_pg_dir. No translations will be possible via TTBR0.
+  */
+ static inline void cpu_set_reserved_ttbr0(void)
+ {
+-	unsigned long ttbr = phys_to_ttbr(__pa_symbol(empty_zero_page));
++	unsigned long ttbr = phys_to_ttbr(__pa_symbol(reserved_pg_dir));
+ 
+ 	write_sysreg(ttbr, ttbr0_el1);
+ 	isb();
+@@ -192,9 +192,9 @@ static inline void update_saved_ttbr0(struct task_struct *tsk,
+ 		return;
+ 
+ 	if (mm == &init_mm)
+-		ttbr = __pa_symbol(empty_zero_page);
++		ttbr = phys_to_ttbr(__pa_symbol(reserved_pg_dir));
+ 	else
+-		ttbr = virt_to_phys(mm->pgd) | ASID(mm) << 48;
++		ttbr = phys_to_ttbr(virt_to_phys(mm->pgd)) | ASID(mm) << 48;
+ 
+ 	WRITE_ONCE(task_thread_info(tsk)->ttbr0, ttbr);
+ }
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index 717f13d52ecc5..10ffbc96ac31f 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -530,6 +530,7 @@ extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+ extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
+ extern pgd_t idmap_pg_end[];
+ extern pgd_t tramp_pg_dir[PTRS_PER_PGD];
++extern pgd_t reserved_pg_dir[PTRS_PER_PGD];
+ 
+ extern void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd);
+ 
+diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
+index 80e946b2abee2..e83f0982b99c1 100644
+--- a/arch/arm64/include/asm/preempt.h
++++ b/arch/arm64/include/asm/preempt.h
+@@ -23,7 +23,7 @@ static inline void preempt_count_set(u64 pc)
+ } while (0)
+ 
+ #define init_idle_preempt_count(p, cpu) do { \
+-	task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
++	task_thread_info(p)->preempt_count = PREEMPT_DISABLED; \
+ } while (0)
+ 
+ static inline void set_preempt_need_resched(void)
+diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
+index 991dd5f031e46..385a189f7d39e 100644
+--- a/arch/arm64/include/asm/uaccess.h
++++ b/arch/arm64/include/asm/uaccess.h
+@@ -113,8 +113,8 @@ static inline void __uaccess_ttbr0_disable(void)
+ 	local_irq_save(flags);
+ 	ttbr = read_sysreg(ttbr1_el1);
+ 	ttbr &= ~TTBR_ASID_MASK;
+-	/* reserved_ttbr0 placed before swapper_pg_dir */
+-	write_sysreg(ttbr - RESERVED_TTBR0_SIZE, ttbr0_el1);
++	/* reserved_pg_dir placed before swapper_pg_dir */
++	write_sysreg(ttbr - PAGE_SIZE, ttbr0_el1);
+ 	isb();
+ 	/* Set reserved ASID */
+ 	write_sysreg(ttbr, ttbr1_el1);
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 60d3991233600..fe83d6d67ec3d 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -770,9 +770,10 @@ SYM_CODE_END(ret_to_user)
+  */
+ 	.pushsection ".entry.tramp.text", "ax"
+ 
++	// Move from tramp_pg_dir to swapper_pg_dir
+ 	.macro tramp_map_kernel, tmp
+ 	mrs	\tmp, ttbr1_el1
+-	add	\tmp, \tmp, #(PAGE_SIZE + RESERVED_TTBR0_SIZE)
++	add	\tmp, \tmp, #(2 * PAGE_SIZE)
+ 	bic	\tmp, \tmp, #USER_ASID_FLAG
+ 	msr	ttbr1_el1, \tmp
+ #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
+@@ -789,9 +790,10 @@ alternative_else_nop_endif
+ #endif /* CONFIG_QCOM_FALKOR_ERRATUM_1003 */
+ 	.endm
+ 
++	// Move from swapper_pg_dir to tramp_pg_dir
+ 	.macro tramp_unmap_kernel, tmp
+ 	mrs	\tmp, ttbr1_el1
+-	sub	\tmp, \tmp, #(PAGE_SIZE + RESERVED_TTBR0_SIZE)
++	sub	\tmp, \tmp, #(2 * PAGE_SIZE)
+ 	orr	\tmp, \tmp, #USER_ASID_FLAG
+ 	msr	ttbr1_el1, \tmp
+ 	/*
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index 11852e05ee32a..cdb3d4549b3a9 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -312,7 +312,7 @@ static ssize_t slots_show(struct device *dev, struct device_attribute *attr,
+ 	struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);
+ 	u32 slots = cpu_pmu->reg_pmmir & ARMV8_PMU_SLOTS_MASK;
+ 
+-	return snprintf(page, PAGE_SIZE, "0x%08x\n", slots);
++	return sysfs_emit(page, "0x%08x\n", slots);
+ }
+ 
+ static DEVICE_ATTR_RO(slots);
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index 133257ffd8591..eb4b24652c105 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -366,7 +366,7 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
+ 	 * faults in case uaccess_enable() is inadvertently called by the init
+ 	 * thread.
+ 	 */
+-	init_task.thread_info.ttbr0 = __pa_symbol(empty_zero_page);
++	init_task.thread_info.ttbr0 = phys_to_ttbr(__pa_symbol(reserved_pg_dir));
+ #endif
+ 
+ 	if (boot_args[1] || boot_args[2] || boot_args[3]) {
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index 18e9727d3f645..feee5a3cd1288 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -223,7 +223,6 @@ asmlinkage notrace void secondary_start_kernel(void)
+ 		init_gic_priority_masking();
+ 
+ 	rcu_cpu_starting(cpu);
+-	preempt_disable();
+ 	trace_hardirqs_off();
+ 
+ 	/*
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 1bda604f4c704..30c1029789427 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -164,13 +164,11 @@ SECTIONS
+ 	. += PAGE_SIZE;
+ #endif
+ 
+-#ifdef CONFIG_ARM64_SW_TTBR0_PAN
+-	reserved_ttbr0 = .;
+-	. += RESERVED_TTBR0_SIZE;
+-#endif
++	reserved_pg_dir = .;
++	. += PAGE_SIZE;
++
+ 	swapper_pg_dir = .;
+ 	. += PAGE_SIZE;
+-	swapper_pg_end = .;
+ 
+ 	. = ALIGN(SEGMENT_ALIGN);
+ 	__init_begin = .;
+diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
+index 2dd164bb1c5a9..4b30260e1abf4 100644
+--- a/arch/arm64/kvm/pmu-emul.c
++++ b/arch/arm64/kvm/pmu-emul.c
+@@ -578,6 +578,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
+ 		kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
+ 
+ 	if (val & ARMV8_PMU_PMCR_P) {
++		mask &= ~BIT(ARMV8_PMU_CYCLE_IDX);
+ 		for_each_set_bit(i, &mask, 32)
+ 			kvm_pmu_set_counter_value(vcpu, i, 0);
+ 	}
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index a14927360be26..aacc7eab9b2ff 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -168,7 +168,7 @@ SYM_FUNC_END(cpu_do_resume)
+ 	.pushsection ".idmap.text", "awx"
+ 
+ .macro	__idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
+-	adrp	\tmp1, empty_zero_page
++	adrp	\tmp1, reserved_pg_dir
+ 	phys_to_ttbr \tmp2, \tmp1
+ 	offset_ttbr1 \tmp2, \tmp1
+ 	msr	ttbr1_el1, \tmp2
+diff --git a/arch/csky/kernel/smp.c b/arch/csky/kernel/smp.c
+index 041d0de6a1b67..1a8d7eaf1ff71 100644
+--- a/arch/csky/kernel/smp.c
++++ b/arch/csky/kernel/smp.c
+@@ -282,7 +282,6 @@ void csky_start_secondary(void)
+ 	pr_info("CPU%u Online: %s...\n", cpu, __func__);
+ 
+ 	local_irq_enable();
+-	preempt_disable();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+ 
+diff --git a/arch/csky/mm/syscache.c b/arch/csky/mm/syscache.c
+index ffade2f9a4c87..cd847ad62c7ee 100644
+--- a/arch/csky/mm/syscache.c
++++ b/arch/csky/mm/syscache.c
+@@ -12,14 +12,17 @@ SYSCALL_DEFINE3(cacheflush,
+ 		int, cache)
+ {
+ 	switch (cache) {
+-	case ICACHE:
+ 	case BCACHE:
+-		flush_icache_mm_range(current->mm,
+-				(unsigned long)addr,
+-				(unsigned long)addr + bytes);
+ 	case DCACHE:
+ 		dcache_wb_range((unsigned long)addr,
+ 				(unsigned long)addr + bytes);
++		if (cache != BCACHE)
++			break;
++		fallthrough;
++	case ICACHE:
++		flush_icache_mm_range(current->mm,
++				(unsigned long)addr,
++				(unsigned long)addr + bytes);
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/arch/ia64/kernel/mca_drv.c b/arch/ia64/kernel/mca_drv.c
+index 4d0ab323dee8c..2a40268c3d494 100644
+--- a/arch/ia64/kernel/mca_drv.c
++++ b/arch/ia64/kernel/mca_drv.c
+@@ -343,7 +343,7 @@ init_record_index_pools(void)
+ 
+ 	/* - 2 - */
+ 	sect_min_size = sal_log_sect_min_sizes[0];
+-	for (i = 1; i < sizeof sal_log_sect_min_sizes/sizeof(size_t); i++)
++	for (i = 1; i < ARRAY_SIZE(sal_log_sect_min_sizes); i++)
+ 		if (sect_min_size > sal_log_sect_min_sizes[i])
+ 			sect_min_size = sal_log_sect_min_sizes[i];
+ 
+diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
+index 093040f7e626a..0cad990385c04 100644
+--- a/arch/ia64/kernel/smpboot.c
++++ b/arch/ia64/kernel/smpboot.c
+@@ -440,7 +440,6 @@ start_secondary (void *unused)
+ #endif
+ 	efi_map_pal_code();
+ 	cpu_init();
+-	preempt_disable();
+ 	smp_callin();
+ 
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+diff --git a/arch/m68k/Kconfig.machine b/arch/m68k/Kconfig.machine
+index 17e8c3a292d77..e161a4e1493b4 100644
+--- a/arch/m68k/Kconfig.machine
++++ b/arch/m68k/Kconfig.machine
+@@ -23,6 +23,9 @@ config ATARI
+ 	  this kernel on an Atari, say Y here and browse the material
+ 	  available in <file:Documentation/m68k>; otherwise say N.
+ 
++config ATARI_KBD_CORE
++	bool
++
+ config MAC
+ 	bool "Macintosh support"
+ 	depends on MMU
+diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h
+index f1f788b571666..9f021cf51aa71 100644
+--- a/arch/mips/include/asm/highmem.h
++++ b/arch/mips/include/asm/highmem.h
+@@ -36,7 +36,7 @@ extern pte_t *pkmap_page_table;
+  * easily, subsequent pte tables have to be allocated in one physical
+  * chunk of RAM.
+  */
+-#ifdef CONFIG_PHYS_ADDR_T_64BIT
++#if defined(CONFIG_PHYS_ADDR_T_64BIT) || defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
+ #define LAST_PKMAP 512
+ #else
+ #define LAST_PKMAP 1024
+diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
+index 48d84d5fcc361..ff25926c5458c 100644
+--- a/arch/mips/kernel/smp.c
++++ b/arch/mips/kernel/smp.c
+@@ -348,7 +348,6 @@ asmlinkage void start_secondary(void)
+ 	 */
+ 
+ 	calibrate_delay();
+-	preempt_disable();
+ 	cpu = smp_processor_id();
+ 	cpu_data[cpu].udelay_val = loops_per_jiffy;
+ 
+diff --git a/arch/openrisc/kernel/smp.c b/arch/openrisc/kernel/smp.c
+index 29c82ef2e207c..e4dad76066aed 100644
+--- a/arch/openrisc/kernel/smp.c
++++ b/arch/openrisc/kernel/smp.c
+@@ -134,8 +134,6 @@ asmlinkage __init void secondary_start_kernel(void)
+ 	set_cpu_online(cpu, true);
+ 
+ 	local_irq_enable();
+-
+-	preempt_disable();
+ 	/*
+ 	 * OK, it's off to the idle thread for us
+ 	 */
+diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
+index 10227f667c8a6..1405b603b91b6 100644
+--- a/arch/parisc/kernel/smp.c
++++ b/arch/parisc/kernel/smp.c
+@@ -302,7 +302,6 @@ void __init smp_callin(unsigned long pdce_proc)
+ #endif
+ 
+ 	smp_cpu_init(slave_id);
+-	preempt_disable();
+ 
+ 	flush_cache_all_local(); /* start with known state */
+ 	flush_tlb_all_local(NULL);
+diff --git a/arch/powerpc/include/asm/cputhreads.h b/arch/powerpc/include/asm/cputhreads.h
+index 98c8bd155bf9d..b167186aaee4a 100644
+--- a/arch/powerpc/include/asm/cputhreads.h
++++ b/arch/powerpc/include/asm/cputhreads.h
+@@ -98,6 +98,36 @@ static inline int cpu_last_thread_sibling(int cpu)
+ 	return cpu | (threads_per_core - 1);
+ }
+ 
++/*
++ * tlb_thread_siblings are siblings which share a TLB. This is not
++ * architected, is not something a hypervisor could emulate and a future
++ * CPU may change behaviour even in compat mode, so this should only be
++ * used on PowerNV, and only with care.
++ */
++static inline int cpu_first_tlb_thread_sibling(int cpu)
++{
++	if (cpu_has_feature(CPU_FTR_ARCH_300) && (threads_per_core == 8))
++		return cpu & ~0x6;	/* Big Core */
++	else
++		return cpu_first_thread_sibling(cpu);
++}
++
++static inline int cpu_last_tlb_thread_sibling(int cpu)
++{
++	if (cpu_has_feature(CPU_FTR_ARCH_300) && (threads_per_core == 8))
++		return cpu | 0x6;	/* Big Core */
++	else
++		return cpu_last_thread_sibling(cpu);
++}
++
++static inline int cpu_tlb_thread_sibling_step(void)
++{
++	if (cpu_has_feature(CPU_FTR_ARCH_300) && (threads_per_core == 8))
++		return 2;		/* Big Core */
++	else
++		return 1;
++}
++
+ static inline u32 get_tensr(void)
+ {
+ #ifdef	CONFIG_BOOKE
+diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
+index b7e173754a2e9..ea8b002820cec 100644
+--- a/arch/powerpc/kernel/mce_power.c
++++ b/arch/powerpc/kernel/mce_power.c
+@@ -475,12 +475,11 @@ static int mce_find_instr_ea_and_phys(struct pt_regs *regs, uint64_t *addr,
+ 	return -1;
+ }
+ 
+-static int mce_handle_ierror(struct pt_regs *regs,
++static int mce_handle_ierror(struct pt_regs *regs, unsigned long srr1,
+ 		const struct mce_ierror_table table[],
+ 		struct mce_error_info *mce_err, uint64_t *addr,
+ 		uint64_t *phys_addr)
+ {
+-	uint64_t srr1 = regs->msr;
+ 	int handled = 0;
+ 	int i;
+ 
+@@ -683,19 +682,19 @@ static long mce_handle_ue_error(struct pt_regs *regs,
+ }
+ 
+ static long mce_handle_error(struct pt_regs *regs,
++		unsigned long srr1,
+ 		const struct mce_derror_table dtable[],
+ 		const struct mce_ierror_table itable[])
+ {
+ 	struct mce_error_info mce_err = { 0 };
+ 	uint64_t addr, phys_addr = ULONG_MAX;
+-	uint64_t srr1 = regs->msr;
+ 	long handled;
+ 
+ 	if (SRR1_MC_LOADSTORE(srr1))
+ 		handled = mce_handle_derror(regs, dtable, &mce_err, &addr,
+ 				&phys_addr);
+ 	else
+-		handled = mce_handle_ierror(regs, itable, &mce_err, &addr,
++		handled = mce_handle_ierror(regs, srr1, itable, &mce_err, &addr,
+ 				&phys_addr);
+ 
+ 	if (!handled && mce_err.error_type == MCE_ERROR_TYPE_UE)
+@@ -711,16 +710,20 @@ long __machine_check_early_realmode_p7(struct pt_regs *regs)
+ 	/* P7 DD1 leaves top bits of DSISR undefined */
+ 	regs->dsisr &= 0x0000ffff;
+ 
+-	return mce_handle_error(regs, mce_p7_derror_table, mce_p7_ierror_table);
++	return mce_handle_error(regs, regs->msr,
++			mce_p7_derror_table, mce_p7_ierror_table);
+ }
+ 
+ long __machine_check_early_realmode_p8(struct pt_regs *regs)
+ {
+-	return mce_handle_error(regs, mce_p8_derror_table, mce_p8_ierror_table);
++	return mce_handle_error(regs, regs->msr,
++			mce_p8_derror_table, mce_p8_ierror_table);
+ }
+ 
+ long __machine_check_early_realmode_p9(struct pt_regs *regs)
+ {
++	unsigned long srr1 = regs->msr;
++
+ 	/*
+ 	 * On POWER9 DD2.1 and below, it's possible to get a machine check
+ 	 * caused by a paste instruction where only DSISR bit 25 is set. This
+@@ -734,10 +737,39 @@ long __machine_check_early_realmode_p9(struct pt_regs *regs)
+ 	if (SRR1_MC_LOADSTORE(regs->msr) && regs->dsisr == 0x02000000)
+ 		return 1;
+ 
+-	return mce_handle_error(regs, mce_p9_derror_table, mce_p9_ierror_table);
++	/*
++	 * Async machine check due to bad real address from store or foreign
++	 * link time out comes with the load/store bit (PPC bit 42) set in
++	 * SRR1, but the cause comes in SRR1 not DSISR. Clear bit 42 so we're
++	 * directed to the ierror table so it will find the cause (which
++	 * describes it correctly as a store error).
++	 */
++	if (SRR1_MC_LOADSTORE(srr1) &&
++			((srr1 & 0x081c0000) == 0x08140000 ||
++			 (srr1 & 0x081c0000) == 0x08180000)) {
++		srr1 &= ~PPC_BIT(42);
++	}
++
++	return mce_handle_error(regs, srr1,
++			mce_p9_derror_table, mce_p9_ierror_table);
+ }
+ 
+ long __machine_check_early_realmode_p10(struct pt_regs *regs)
+ {
+-	return mce_handle_error(regs, mce_p10_derror_table, mce_p10_ierror_table);
++	unsigned long srr1 = regs->msr;
++
++	/*
++	 * Async machine check due to bad real address from store comes with
++	 * the load/store bit (PPC bit 42) set in SRR1, but the cause comes in
++	 * SRR1 not DSISR. Clear bit 42 so we're directed to the ierror table
++	 * so it will find the cause (which describes it correctly as a store
++	 * error).
++	 */
++	if (SRR1_MC_LOADSTORE(srr1) &&
++			(srr1 & 0x081c0000) == 0x08140000) {
++		srr1 &= ~PPC_BIT(42);
++	}
++
++	return mce_handle_error(regs, srr1,
++			mce_p10_derror_table, mce_p10_ierror_table);
+ }
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 1a1d2657fe8dd..3064694afea17 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -1227,6 +1227,19 @@ struct task_struct *__switch_to(struct task_struct *prev,
+ 			__flush_tlb_pending(batch);
+ 		batch->active = 0;
+ 	}
++
++	/*
++	 * On POWER9 the copy-paste buffer can only paste into
++	 * foreign real addresses, so unprivileged processes can not
++	 * see the data or use it in any way unless they have
++	 * foreign real mappings. If the new process has the foreign
++	 * real address mappings, we must issue a cp_abort to clear
++	 * any state and prevent snooping, corruption or a covert
++	 * channel. ISA v3.1 supports paste into local memory.
++	 */
++	if (new->mm && (cpu_has_feature(CPU_FTR_ARCH_31) ||
++			atomic_read(&new->mm->context.vas_windows)))
++		asm volatile(PPC_CP_ABORT);
+ #endif /* CONFIG_PPC_BOOK3S_64 */
+ 
+ #ifdef CONFIG_PPC_ADV_DEBUG_REGS
+@@ -1272,30 +1285,33 @@ struct task_struct *__switch_to(struct task_struct *prev,
+ 
+ 	last = _switch(old_thread, new_thread);
+ 
++	/*
++	 * Nothing after _switch will be run for newly created tasks,
++	 * because they switch directly to ret_from_fork/ret_from_kernel_thread
++	 * etc. Code added here should have a comment explaining why that is
++	 * okay.
++	 */
++
+ #ifdef CONFIG_PPC_BOOK3S_64
++	/*
++	 * This applies to a process that was context switched while inside
++	 * arch_enter_lazy_mmu_mode(), to re-activate the batch that was
++	 * deactivated above, before _switch(). This will never be the case
++	 * for new tasks.
++	 */
+ 	if (current_thread_info()->local_flags & _TLF_LAZY_MMU) {
+ 		current_thread_info()->local_flags &= ~_TLF_LAZY_MMU;
+ 		batch = this_cpu_ptr(&ppc64_tlb_batch);
+ 		batch->active = 1;
+ 	}
+ 
+-	if (current->thread.regs) {
++	/*
++	 * Math facilities are masked out of the child MSR in copy_thread.
++	 * A new task does not need to restore_math because it will
++	 * demand fault them.
++	 */
++	if (current->thread.regs)
+ 		restore_math(current->thread.regs);
+-
+-		/*
+-		 * On POWER9 the copy-paste buffer can only paste into
+-		 * foreign real addresses, so unprivileged processes can not
+-		 * see the data or use it in any way unless they have
+-		 * foreign real mappings. If the new process has the foreign
+-		 * real address mappings, we must issue a cp_abort to clear
+-		 * any state and prevent snooping, corruption or a covert
+-		 * channel. ISA v3.1 supports paste into local memory.
+-		 */
+-		if (current->mm &&
+-			(cpu_has_feature(CPU_FTR_ARCH_31) ||
+-			atomic_read(&current->mm->context.vas_windows)))
+-			asm volatile(PPC_CP_ABORT);
+-	}
+ #endif /* CONFIG_PPC_BOOK3S_64 */
+ 
+ 	return last;
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index db7ac77bea3a7..26a028a9233af 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -600,6 +600,8 @@ static void nmi_stop_this_cpu(struct pt_regs *regs)
+ 	/*
+ 	 * IRQs are already hard disabled by the smp_handle_nmi_ipi.
+ 	 */
++	set_cpu_online(smp_processor_id(), false);
++
+ 	spin_begin();
+ 	while (1)
+ 		spin_cpu_relax();
+@@ -615,6 +617,15 @@ void smp_send_stop(void)
+ static void stop_this_cpu(void *dummy)
+ {
+ 	hard_irq_disable();
++
++	/*
++	 * Offlining CPUs in stop_this_cpu can result in scheduler warnings,
++	 * (see commit de6e5d38417e), but printk_safe_flush_on_panic() wants
++	 * to know other CPUs are offline before it breaks locks to flush
++	 * printk buffers, in case we panic()ed while holding the lock.
++	 */
++	set_cpu_online(smp_processor_id(), false);
++
+ 	spin_begin();
+ 	while (1)
+ 		spin_cpu_relax();
+@@ -1426,7 +1437,6 @@ void start_secondary(void *unused)
+ 	smp_store_cpu_info(cpu);
+ 	set_dec(tb_ticks_per_jiffy);
+ 	rcu_cpu_starting(cpu);
+-	preempt_disable();
+ 	cpu_callin_map[cpu] = 1;
+ 
+ 	if (smp_ops->setup_cpu)
+diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
+index b6440657ef92d..2f926ea9b7b94 100644
+--- a/arch/powerpc/kernel/stacktrace.c
++++ b/arch/powerpc/kernel/stacktrace.c
+@@ -19,6 +19,7 @@
+ #include <asm/ptrace.h>
+ #include <asm/processor.h>
+ #include <linux/ftrace.h>
++#include <linux/delay.h>
+ #include <asm/kprobes.h>
+ 
+ #include <asm/paca.h>
+@@ -230,17 +231,31 @@ static void handle_backtrace_ipi(struct pt_regs *regs)
+ 
+ static void raise_backtrace_ipi(cpumask_t *mask)
+ {
++	struct paca_struct *p;
+ 	unsigned int cpu;
++	u64 delay_us;
+ 
+ 	for_each_cpu(cpu, mask) {
+-		if (cpu == smp_processor_id())
++		if (cpu == smp_processor_id()) {
+ 			handle_backtrace_ipi(NULL);
+-		else
+-			smp_send_safe_nmi_ipi(cpu, handle_backtrace_ipi, 5 * USEC_PER_SEC);
+-	}
++			continue;
++		}
+ 
+-	for_each_cpu(cpu, mask) {
+-		struct paca_struct *p = paca_ptrs[cpu];
++		delay_us = 5 * USEC_PER_SEC;
++
++		if (smp_send_safe_nmi_ipi(cpu, handle_backtrace_ipi, delay_us)) {
++			// Now wait up to 5s for the other CPU to do its backtrace
++			while (cpumask_test_cpu(cpu, mask) && delay_us) {
++				udelay(1);
++				delay_us--;
++			}
++
++			// Other CPU cleared itself from the mask
++			if (delay_us)
++				continue;
++		}
++
++		p = paca_ptrs[cpu];
+ 
+ 		cpumask_clear_cpu(cpu, mask);
+ 
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 965b702208d85..2325b7a6e95f8 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -2578,7 +2578,7 @@ static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu)
+ 	cpumask_t *cpu_in_guest;
+ 	int i;
+ 
+-	cpu = cpu_first_thread_sibling(cpu);
++	cpu = cpu_first_tlb_thread_sibling(cpu);
+ 	if (nested) {
+ 		cpumask_set_cpu(cpu, &nested->need_tlb_flush);
+ 		cpu_in_guest = &nested->cpu_in_guest;
+@@ -2592,9 +2592,10 @@ static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu)
+ 	 * the other side is the first smp_mb() in kvmppc_run_core().
+ 	 */
+ 	smp_mb();
+-	for (i = 0; i < threads_per_core; ++i)
+-		if (cpumask_test_cpu(cpu + i, cpu_in_guest))
+-			smp_call_function_single(cpu + i, do_nothing, NULL, 1);
++	for (i = cpu; i <= cpu_last_tlb_thread_sibling(cpu);
++					i += cpu_tlb_thread_sibling_step())
++		if (cpumask_test_cpu(i, cpu_in_guest))
++			smp_call_function_single(i, do_nothing, NULL, 1);
+ }
+ 
+ static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu)
+@@ -2625,8 +2626,8 @@ static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu)
+ 	 */
+ 	if (prev_cpu != pcpu) {
+ 		if (prev_cpu >= 0 &&
+-		    cpu_first_thread_sibling(prev_cpu) !=
+-		    cpu_first_thread_sibling(pcpu))
++		    cpu_first_tlb_thread_sibling(prev_cpu) !=
++		    cpu_first_tlb_thread_sibling(pcpu))
+ 			radix_flush_cpu(kvm, prev_cpu, vcpu);
+ 		if (nested)
+ 			nested->prev_cpu[vcpu->arch.nested_vcpu_id] = pcpu;
+diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
+index 8f58dd20b362a..4621905bdd9ea 100644
+--- a/arch/powerpc/kvm/book3s_hv_builtin.c
++++ b/arch/powerpc/kvm/book3s_hv_builtin.c
+@@ -893,7 +893,7 @@ void kvmppc_check_need_tlb_flush(struct kvm *kvm, int pcpu,
+ 	 * Thus we make all 4 threads use the same bit.
+ 	 */
+ 	if (cpu_has_feature(CPU_FTR_ARCH_300))
+-		pcpu = cpu_first_thread_sibling(pcpu);
++		pcpu = cpu_first_tlb_thread_sibling(pcpu);
+ 
+ 	if (nested)
+ 		need_tlb_flush = &nested->need_tlb_flush;
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index 33b58549a9aaf..065738819db9b 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -51,7 +51,8 @@ void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
+ 	hr->ppr = vcpu->arch.ppr;
+ }
+ 
+-static void byteswap_pt_regs(struct pt_regs *regs)
++/* Use noinline_for_stack due to https://bugs.llvm.org/show_bug.cgi?id=49610 */
++static noinline_for_stack void byteswap_pt_regs(struct pt_regs *regs)
+ {
+ 	unsigned long *addr = (unsigned long *) regs;
+ 
+diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+index 88da2764c1bb9..3ddc83d2e8493 100644
+--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
++++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+@@ -67,7 +67,7 @@ static int global_invalidates(struct kvm *kvm)
+ 		 * so use the bit for the first thread to represent the core.
+ 		 */
+ 		if (cpu_has_feature(CPU_FTR_ARCH_300))
+-			cpu = cpu_first_thread_sibling(cpu);
++			cpu = cpu_first_tlb_thread_sibling(cpu);
+ 		cpumask_clear_cpu(cpu, &kvm->arch.need_tlb_flush);
+ 	}
+ 
+diff --git a/arch/powerpc/platforms/cell/smp.c b/arch/powerpc/platforms/cell/smp.c
+index c855a0aeb49cc..d7ab868aab54a 100644
+--- a/arch/powerpc/platforms/cell/smp.c
++++ b/arch/powerpc/platforms/cell/smp.c
+@@ -78,9 +78,6 @@ static inline int smp_startup_cpu(unsigned int lcpu)
+ 
+ 	pcpu = get_hard_smp_processor_id(lcpu);
+ 
+-	/* Fixup atomic count: it exited inside IRQ handler. */
+-	task_thread_info(paca_ptrs[lcpu]->__current)->preempt_count	= 0;
+-
+ 	/*
+ 	 * If the RTAS start-cpu token does not exist then presume the
+ 	 * cpu is already spinning.
+diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
+index 835163f54244a..057acbb9116dd 100644
+--- a/arch/powerpc/platforms/pseries/papr_scm.c
++++ b/arch/powerpc/platforms/pseries/papr_scm.c
+@@ -18,6 +18,7 @@
+ #include <asm/plpar_wrappers.h>
+ #include <asm/papr_pdsm.h>
+ #include <asm/mce.h>
++#include <asm/unaligned.h>
+ 
+ #define BIND_ANY_ADDR (~0ul)
+ 
+@@ -867,6 +868,20 @@ static ssize_t flags_show(struct device *dev,
+ }
+ DEVICE_ATTR_RO(flags);
+ 
++static umode_t papr_nd_attribute_visible(struct kobject *kobj,
++					 struct attribute *attr, int n)
++{
++	struct device *dev = kobj_to_dev(kobj);
++	struct nvdimm *nvdimm = to_nvdimm(dev);
++	struct papr_scm_priv *p = nvdimm_provider_data(nvdimm);
++
++	/* For if perf-stats not available remove perf_stats sysfs */
++	if (attr == &dev_attr_perf_stats.attr && p->stat_buffer_len == 0)
++		return 0;
++
++	return attr->mode;
++}
++
+ /* papr_scm specific dimm attributes */
+ static struct attribute *papr_nd_attributes[] = {
+ 	&dev_attr_flags.attr,
+@@ -876,6 +891,7 @@ static struct attribute *papr_nd_attributes[] = {
+ 
+ static struct attribute_group papr_nd_attribute_group = {
+ 	.name = "papr",
++	.is_visible = papr_nd_attribute_visible,
+ 	.attrs = papr_nd_attributes,
+ };
+ 
+@@ -891,7 +907,6 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
+ 	struct nd_region_desc ndr_desc;
+ 	unsigned long dimm_flags;
+ 	int target_nid, online_nid;
+-	ssize_t stat_size;
+ 
+ 	p->bus_desc.ndctl = papr_scm_ndctl;
+ 	p->bus_desc.module = THIS_MODULE;
+@@ -962,16 +977,6 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
+ 	list_add_tail(&p->region_list, &papr_nd_regions);
+ 	mutex_unlock(&papr_ndr_lock);
+ 
+-	/* Try retriving the stat buffer and see if its supported */
+-	stat_size = drc_pmem_query_stats(p, NULL, 0);
+-	if (stat_size > 0) {
+-		p->stat_buffer_len = stat_size;
+-		dev_dbg(&p->pdev->dev, "Max perf-stat size %lu-bytes\n",
+-			p->stat_buffer_len);
+-	} else {
+-		dev_info(&p->pdev->dev, "Dimm performance stats unavailable\n");
+-	}
+-
+ 	return 0;
+ 
+ err:	nvdimm_bus_unregister(p->bus);
+@@ -1047,8 +1052,10 @@ static int papr_scm_probe(struct platform_device *pdev)
+ 	u32 drc_index, metadata_size;
+ 	u64 blocks, block_size;
+ 	struct papr_scm_priv *p;
++	u8 uuid_raw[UUID_SIZE];
+ 	const char *uuid_str;
+-	u64 uuid[2];
++	ssize_t stat_size;
++	uuid_t uuid;
+ 	int rc;
+ 
+ 	/* check we have all the required DT properties */
+@@ -1090,16 +1097,23 @@ static int papr_scm_probe(struct platform_device *pdev)
+ 	p->is_volatile = !of_property_read_bool(dn, "ibm,cache-flush-required");
+ 
+ 	/* We just need to ensure that set cookies are unique across */
+-	uuid_parse(uuid_str, (uuid_t *) uuid);
++	uuid_parse(uuid_str, &uuid);
++
+ 	/*
+-	 * cookie1 and cookie2 are not really little endian
+-	 * we store a little endian representation of the
+-	 * uuid str so that we can compare this with the label
+-	 * area cookie irrespective of the endian config with which
+-	 * the kernel is built.
++	 * The cookie1 and cookie2 are not really little endian.
++	 * We store a raw buffer representation of the
++	 * uuid string so that we can compare this with the label
++	 * area cookie irrespective of the endian configuration
++	 * with which the kernel is built.
++	 *
++	 * Historically we stored the cookie in the below format.
++	 * for a uuid string 72511b67-0b3b-42fd-8d1d-5be3cae8bcaa
++	 *	cookie1 was 0xfd423b0b671b5172
++	 *	cookie2 was 0xaabce8cae35b1d8d
+ 	 */
+-	p->nd_set.cookie1 = cpu_to_le64(uuid[0]);
+-	p->nd_set.cookie2 = cpu_to_le64(uuid[1]);
++	export_uuid(uuid_raw, &uuid);
++	p->nd_set.cookie1 = get_unaligned_le64(&uuid_raw[0]);
++	p->nd_set.cookie2 = get_unaligned_le64(&uuid_raw[8]);
+ 
+ 	/* might be zero */
+ 	p->metadata_size = metadata_size;
+@@ -1124,6 +1138,14 @@ static int papr_scm_probe(struct platform_device *pdev)
+ 	p->res.name  = pdev->name;
+ 	p->res.flags = IORESOURCE_MEM;
+ 
++	/* Try retrieving the stat buffer and see if its supported */
++	stat_size = drc_pmem_query_stats(p, NULL, 0);
++	if (stat_size > 0) {
++		p->stat_buffer_len = stat_size;
++		dev_dbg(&p->pdev->dev, "Max perf-stat size %lu-bytes\n",
++			p->stat_buffer_len);
++	}
++
+ 	rc = papr_scm_nvdimm_init(p);
+ 	if (rc)
+ 		goto err2;
+diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
+index 92922491a81c6..624e80b00eb18 100644
+--- a/arch/powerpc/platforms/pseries/smp.c
++++ b/arch/powerpc/platforms/pseries/smp.c
+@@ -104,9 +104,6 @@ static inline int smp_startup_cpu(unsigned int lcpu)
+ 		return 1;
+ 	}
+ 
+-	/* Fixup atomic count: it exited inside IRQ handler. */
+-	task_thread_info(paca_ptrs[lcpu]->__current)->preempt_count	= 0;
+-
+ 	/* 
+ 	 * If the RTAS start-cpu token does not exist then presume the
+ 	 * cpu is already spinning.
+diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
+index 96167d55ed984..0b04e0eae3ab5 100644
+--- a/arch/riscv/kernel/smpboot.c
++++ b/arch/riscv/kernel/smpboot.c
+@@ -166,7 +166,6 @@ asmlinkage __visible void smp_callin(void)
+ 	 * Disable preemption before enabling interrupts, so we don't try to
+ 	 * schedule a CPU that hasn't actually started yet.
+ 	 */
+-	preempt_disable();
+ 	local_irq_enable();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index 4a2a12be04c96..896b68e541b2e 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -154,6 +154,8 @@ config S390
+ 	select HAVE_FUTEX_CMPXCHG if FUTEX
+ 	select HAVE_GCC_PLUGINS
+ 	select HAVE_GENERIC_VDSO
++	select HAVE_IOREMAP_PROT if PCI
++	select HAVE_IRQ_EXIT_ON_IRQ_STACK
+ 	select HAVE_KERNEL_BZIP2
+ 	select HAVE_KERNEL_GZIP
+ 	select HAVE_KERNEL_LZ4
+@@ -856,7 +858,7 @@ config CMM_IUCV
+ config APPLDATA_BASE
+ 	def_bool n
+ 	prompt "Linux - VM Monitor Stream, base infrastructure"
+-	depends on PROC_FS
++	depends on PROC_SYSCTL
+ 	help
+ 	  This provides a kernel interface for creating and updating z/VM APPLDATA
+ 	  monitor records. The monitor records are updated at certain time
+diff --git a/arch/s390/boot/uv.c b/arch/s390/boot/uv.c
+index 87641dd65ccf9..b3501ea5039e4 100644
+--- a/arch/s390/boot/uv.c
++++ b/arch/s390/boot/uv.c
+@@ -36,6 +36,7 @@ void uv_query_info(void)
+ 		uv_info.max_sec_stor_addr = ALIGN(uvcb.max_guest_stor_addr, PAGE_SIZE);
+ 		uv_info.max_num_sec_conf = uvcb.max_num_sec_conf;
+ 		uv_info.max_guest_cpu_id = uvcb.max_guest_cpu_id;
++		uv_info.uv_feature_indications = uvcb.uv_feature_indications;
+ 	}
+ 
+ #ifdef CONFIG_PROTECTED_VIRTUALIZATION_GUEST
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index b5dbae78969b9..2338345912a31 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -864,6 +864,25 @@ static inline int pte_unused(pte_t pte)
+ 	return pte_val(pte) & _PAGE_UNUSED;
+ }
+ 
++/*
++ * Extract the pgprot value from the given pte while at the same time making it
++ * usable for kernel address space mappings where fault driven dirty and
++ * young/old accounting is not supported, i.e _PAGE_PROTECT and _PAGE_INVALID
++ * must not be set.
++ */
++static inline pgprot_t pte_pgprot(pte_t pte)
++{
++	unsigned long pte_flags = pte_val(pte) & _PAGE_CHG_MASK;
++
++	if (pte_write(pte))
++		pte_flags |= pgprot_val(PAGE_KERNEL);
++	else
++		pte_flags |= pgprot_val(PAGE_KERNEL_RO);
++	pte_flags |= pte_val(pte) & mio_wb_bit_mask;
++
++	return __pgprot(pte_flags);
++}
++
+ /*
+  * pgd/pmd/pte modification functions
+  */
+diff --git a/arch/s390/include/asm/preempt.h b/arch/s390/include/asm/preempt.h
+index 6ede29907fbf7..b5f545db461a4 100644
+--- a/arch/s390/include/asm/preempt.h
++++ b/arch/s390/include/asm/preempt.h
+@@ -29,12 +29,6 @@ static inline void preempt_count_set(int pc)
+ 				  old, new) != old);
+ }
+ 
+-#define init_task_preempt_count(p)	do { } while (0)
+-
+-#define init_idle_preempt_count(p, cpu)	do { \
+-	S390_lowcore.preempt_count = PREEMPT_ENABLED; \
+-} while (0)
+-
+ static inline void set_preempt_need_resched(void)
+ {
+ 	__atomic_and(~PREEMPT_NEED_RESCHED, &S390_lowcore.preempt_count);
+@@ -88,12 +82,6 @@ static inline void preempt_count_set(int pc)
+ 	S390_lowcore.preempt_count = pc;
+ }
+ 
+-#define init_task_preempt_count(p)	do { } while (0)
+-
+-#define init_idle_preempt_count(p, cpu)	do { \
+-	S390_lowcore.preempt_count = PREEMPT_ENABLED; \
+-} while (0)
+-
+ static inline void set_preempt_need_resched(void)
+ {
+ }
+@@ -130,6 +118,10 @@ static inline bool should_resched(int preempt_offset)
+ 
+ #endif /* CONFIG_HAVE_MARCH_Z196_FEATURES */
+ 
++#define init_task_preempt_count(p)	do { } while (0)
++/* Deferred to CPU bringup time */
++#define init_idle_preempt_count(p, cpu)	do { } while (0)
++
+ #ifdef CONFIG_PREEMPTION
+ extern asmlinkage void preempt_schedule(void);
+ #define __preempt_schedule() preempt_schedule()
+diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
+index 7b98d4caee779..12c5f006c1364 100644
+--- a/arch/s390/include/asm/uv.h
++++ b/arch/s390/include/asm/uv.h
+@@ -73,6 +73,10 @@ enum uv_cmds_inst {
+ 	BIT_UVC_CMD_UNPIN_PAGE_SHARED = 22,
+ };
+ 
++enum uv_feat_ind {
++	BIT_UV_FEAT_MISC = 0,
++};
++
+ struct uv_cb_header {
+ 	u16 len;
+ 	u16 cmd;	/* Command Code */
+@@ -97,7 +101,8 @@ struct uv_cb_qui {
+ 	u64 max_guest_stor_addr;
+ 	u8  reserved88[158 - 136];
+ 	u16 max_guest_cpu_id;
+-	u8  reserveda0[200 - 160];
++	u64 uv_feature_indications;
++	u8  reserveda0[200 - 168];
+ } __packed __aligned(8);
+ 
+ /* Initialize Ultravisor */
+@@ -274,6 +279,7 @@ struct uv_info {
+ 	unsigned long max_sec_stor_addr;
+ 	unsigned int max_num_sec_conf;
+ 	unsigned short max_guest_cpu_id;
++	unsigned long uv_feature_indications;
+ };
+ 
+ extern struct uv_info uv_info;
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index e83ce909686c5..83a3f346e5bd9 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -454,6 +454,7 @@ static void __init setup_lowcore_dat_off(void)
+ 	lc->br_r1_trampoline = 0x07f1;	/* br %r1 */
+ 	lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
+ 	lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
++	lc->preempt_count = PREEMPT_DISABLED;
+ 
+ 	set_prefix((u32)(unsigned long) lc);
+ 	lowcore_ptr[0] = lc;
+diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
+index 791bc373418bd..5674792726cd9 100644
+--- a/arch/s390/kernel/smp.c
++++ b/arch/s390/kernel/smp.c
+@@ -215,6 +215,7 @@ static int pcpu_alloc_lowcore(struct pcpu *pcpu, int cpu)
+ 	lc->br_r1_trampoline = 0x07f1;	/* br %r1 */
+ 	lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
+ 	lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
++	lc->preempt_count = PREEMPT_DISABLED;
+ 	if (nmi_alloc_per_cpu(lc))
+ 		goto out_async;
+ 	if (vdso_alloc_per_cpu(lc))
+@@ -863,7 +864,6 @@ static void smp_init_secondary(void)
+ 	set_cpu_flag(CIF_ASCE_SECONDARY);
+ 	cpu_init();
+ 	rcu_cpu_starting(cpu);
+-	preempt_disable();
+ 	init_cpu_timer();
+ 	vtime_init();
+ 	pfault_init();
+diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
+index b2d2ad1530676..c811b2313100b 100644
+--- a/arch/s390/kernel/uv.c
++++ b/arch/s390/kernel/uv.c
+@@ -364,6 +364,15 @@ static ssize_t uv_query_facilities(struct kobject *kobj,
+ static struct kobj_attribute uv_query_facilities_attr =
+ 	__ATTR(facilities, 0444, uv_query_facilities, NULL);
+ 
++static ssize_t uv_query_feature_indications(struct kobject *kobj,
++					    struct kobj_attribute *attr, char *buf)
++{
++	return sysfs_emit(buf, "%lx\n", uv_info.uv_feature_indications);
++}
++
++static struct kobj_attribute uv_query_feature_indications_attr =
++	__ATTR(feature_indications, 0444, uv_query_feature_indications, NULL);
++
+ static ssize_t uv_query_max_guest_cpus(struct kobject *kobj,
+ 				       struct kobj_attribute *attr, char *page)
+ {
+@@ -396,6 +405,7 @@ static struct kobj_attribute uv_query_max_guest_addr_attr =
+ 
+ static struct attribute *uv_query_attrs[] = {
+ 	&uv_query_facilities_attr.attr,
++	&uv_query_feature_indications_attr.attr,
+ 	&uv_query_max_guest_cpus_attr.attr,
+ 	&uv_query_max_guest_vms_attr.attr,
+ 	&uv_query_max_guest_addr_attr.attr,
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 20afffd6b9820..f94b4f78d4dab 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -327,31 +327,31 @@ static void allow_cpu_feat(unsigned long nr)
+ 
+ static inline int plo_test_bit(unsigned char nr)
+ {
+-	register unsigned long r0 asm("0") = (unsigned long) nr | 0x100;
++	unsigned long function = (unsigned long)nr | 0x100;
+ 	int cc;
+ 
+ 	asm volatile(
++		"	lgr	0,%[function]\n"
+ 		/* Parameter registers are ignored for "test bit" */
+ 		"	plo	0,0,0,0(0)\n"
+ 		"	ipm	%0\n"
+ 		"	srl	%0,28\n"
+ 		: "=d" (cc)
+-		: "d" (r0)
+-		: "cc");
++		: [function] "d" (function)
++		: "cc", "0");
+ 	return cc == 0;
+ }
+ 
+ static __always_inline void __insn32_query(unsigned int opcode, u8 *query)
+ {
+-	register unsigned long r0 asm("0") = 0;	/* query function */
+-	register unsigned long r1 asm("1") = (unsigned long) query;
+-
+ 	asm volatile(
+-		/* Parameter regs are ignored */
++		"	lghi	0,0\n"
++		"	lgr	1,%[query]\n"
++		/* Parameter registers are ignored */
+ 		"	.insn	rrf,%[opc] << 16,2,4,6,0\n"
+ 		:
+-		: "d" (r0), "a" (r1), [opc] "i" (opcode)
+-		: "cc", "memory");
++		: [query] "d" ((unsigned long)query), [opc] "i" (opcode)
++		: "cc", "memory", "0", "1");
+ }
+ 
+ #define INSN_SORTL 0xb938
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index 996884dcc9fdb..ed517fad0d035 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -805,6 +805,32 @@ void do_secure_storage_access(struct pt_regs *regs)
+ 	struct page *page;
+ 	int rc;
+ 
++	/*
++	 * bit 61 tells us if the address is valid, if it's not we
++	 * have a major problem and should stop the kernel or send a
++	 * SIGSEGV to the process. Unfortunately bit 61 is not
++	 * reliable without the misc UV feature so we need to check
++	 * for that as well.
++	 */
++	if (test_bit_inv(BIT_UV_FEAT_MISC, &uv_info.uv_feature_indications) &&
++	    !test_bit_inv(61, &regs->int_parm_long)) {
++		/*
++		 * When this happens, userspace did something that it
++		 * was not supposed to do, e.g. branching into secure
++		 * memory. Trigger a segmentation fault.
++		 */
++		if (user_mode(regs)) {
++			send_sig(SIGSEGV, current, 0);
++			return;
++		}
++
++		/*
++		 * The kernel should never run into this case and we
++		 * have no way out of this situation.
++		 */
++		panic("Unexpected PGM 0x3d with TEID bit 61=0");
++	}
++
+ 	switch (get_fault_type(regs)) {
+ 	case USER_FAULT:
+ 		mm = current->mm;
+diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
+index 372acdc9033eb..65924d9ec2459 100644
+--- a/arch/sh/kernel/smp.c
++++ b/arch/sh/kernel/smp.c
+@@ -186,8 +186,6 @@ asmlinkage void start_secondary(void)
+ 
+ 	per_cpu_trap_init();
+ 
+-	preempt_disable();
+-
+ 	notify_cpu_starting(cpu);
+ 
+ 	local_irq_enable();
+diff --git a/arch/sparc/kernel/smp_32.c b/arch/sparc/kernel/smp_32.c
+index 50c127ab46d5b..22b148e5a5f88 100644
+--- a/arch/sparc/kernel/smp_32.c
++++ b/arch/sparc/kernel/smp_32.c
+@@ -348,7 +348,6 @@ static void sparc_start_secondary(void *arg)
+ 	 */
+ 	arch_cpu_pre_starting(arg);
+ 
+-	preempt_disable();
+ 	cpu = smp_processor_id();
+ 
+ 	notify_cpu_starting(cpu);
+diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
+index e38d8bf454e86..ae5faa1d989d2 100644
+--- a/arch/sparc/kernel/smp_64.c
++++ b/arch/sparc/kernel/smp_64.c
+@@ -138,9 +138,6 @@ void smp_callin(void)
+ 
+ 	set_cpu_online(cpuid, true);
+ 
+-	/* idle thread is expected to have preempt disabled */
+-	preempt_disable();
+-
+ 	local_irq_enable();
+ 
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+diff --git a/arch/x86/crypto/curve25519-x86_64.c b/arch/x86/crypto/curve25519-x86_64.c
+index 5af8021b98cea..11b4c83c715e3 100644
+--- a/arch/x86/crypto/curve25519-x86_64.c
++++ b/arch/x86/crypto/curve25519-x86_64.c
+@@ -1500,7 +1500,7 @@ static int __init curve25519_mod_init(void)
+ static void __exit curve25519_mod_exit(void)
+ {
+ 	if (IS_REACHABLE(CONFIG_CRYPTO_KPP) &&
+-	    (boot_cpu_has(X86_FEATURE_BMI2) || boot_cpu_has(X86_FEATURE_ADX)))
++	    static_branch_likely(&curve25519_use_bmi2_adx))
+ 		crypto_unregister_kpp(&curve25519_alg);
+ }
+ 
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index cad08703c4ad7..f18f3932e971a 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -508,7 +508,7 @@ SYM_CODE_START(\asmsym)
+ 
+ 	movq	%rsp, %rdi		/* pt_regs pointer */
+ 
+-	call	\cfunc
++	call	kernel_\cfunc
+ 
+ 	/*
+ 	 * No need to switch back to the IST stack. The current stack is either
+@@ -519,7 +519,7 @@ SYM_CODE_START(\asmsym)
+ 
+ 	/* Switch to the regular task stack */
+ .Lfrom_usermode_switch_stack_\@:
+-	idtentry_body safe_stack_\cfunc, has_error_code=1
++	idtentry_body user_\cfunc, has_error_code=1
+ 
+ _ASM_NOKPROBE(\asmsym)
+ SYM_CODE_END(\asmsym)
+diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
+index 0e3325790f3a9..dc2a8b1657f4a 100644
+--- a/arch/x86/include/asm/idtentry.h
++++ b/arch/x86/include/asm/idtentry.h
+@@ -315,8 +315,8 @@ static __always_inline void __##func(struct pt_regs *regs)
+  */
+ #define DECLARE_IDTENTRY_VC(vector, func)				\
+ 	DECLARE_IDTENTRY_RAW_ERRORCODE(vector, func);			\
+-	__visible noinstr void ist_##func(struct pt_regs *regs, unsigned long error_code);	\
+-	__visible noinstr void safe_stack_##func(struct pt_regs *regs, unsigned long error_code)
++	__visible noinstr void kernel_##func(struct pt_regs *regs, unsigned long error_code);	\
++	__visible noinstr void   user_##func(struct pt_regs *regs, unsigned long error_code)
+ 
+ /**
+  * DEFINE_IDTENTRY_IST - Emit code for IST entry points
+@@ -358,33 +358,24 @@ static __always_inline void __##func(struct pt_regs *regs)
+ 	DEFINE_IDTENTRY_RAW_ERRORCODE(func)
+ 
+ /**
+- * DEFINE_IDTENTRY_VC_SAFE_STACK - Emit code for VMM communication handler
+-				   which runs on a safe stack.
++ * DEFINE_IDTENTRY_VC_KERNEL - Emit code for VMM communication handler
++			       when raised from kernel mode
+  * @func:	Function name of the entry point
+  *
+  * Maps to DEFINE_IDTENTRY_RAW_ERRORCODE
+  */
+-#define DEFINE_IDTENTRY_VC_SAFE_STACK(func)				\
+-	DEFINE_IDTENTRY_RAW_ERRORCODE(safe_stack_##func)
++#define DEFINE_IDTENTRY_VC_KERNEL(func)				\
++	DEFINE_IDTENTRY_RAW_ERRORCODE(kernel_##func)
+ 
+ /**
+- * DEFINE_IDTENTRY_VC_IST - Emit code for VMM communication handler
+-			    which runs on the VC fall-back stack
++ * DEFINE_IDTENTRY_VC_USER - Emit code for VMM communication handler
++			     when raised from user mode
+  * @func:	Function name of the entry point
+  *
+  * Maps to DEFINE_IDTENTRY_RAW_ERRORCODE
+  */
+-#define DEFINE_IDTENTRY_VC_IST(func)				\
+-	DEFINE_IDTENTRY_RAW_ERRORCODE(ist_##func)
+-
+-/**
+- * DEFINE_IDTENTRY_VC - Emit code for VMM communication handler
+- * @func:	Function name of the entry point
+- *
+- * Maps to DEFINE_IDTENTRY_RAW_ERRORCODE
+- */
+-#define DEFINE_IDTENTRY_VC(func)					\
+-	DEFINE_IDTENTRY_RAW_ERRORCODE(func)
++#define DEFINE_IDTENTRY_VC_USER(func)				\
++	DEFINE_IDTENTRY_RAW_ERRORCODE(user_##func)
+ 
+ #else	/* CONFIG_X86_64 */
+ 
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index d1ac2de41ea8a..b1cd8334db11a 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -84,7 +84,7 @@
+ #define KVM_REQ_APICV_UPDATE \
+ 	KVM_ARCH_REQ_FLAGS(25, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+ #define KVM_REQ_TLB_FLUSH_CURRENT	KVM_ARCH_REQ(26)
+-#define KVM_REQ_HV_TLB_FLUSH \
++#define KVM_REQ_TLB_FLUSH_GUEST \
+ 	KVM_ARCH_REQ_FLAGS(27, KVM_REQUEST_NO_WAKEUP)
+ #define KVM_REQ_APF_READY		KVM_ARCH_REQ(28)
+ #define KVM_REQ_MSR_FILTER_CHANGED	KVM_ARCH_REQ(29)
+diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
+index 69485ca13665f..a334dd0d7c42c 100644
+--- a/arch/x86/include/asm/preempt.h
++++ b/arch/x86/include/asm/preempt.h
+@@ -43,7 +43,7 @@ static __always_inline void preempt_count_set(int pc)
+ #define init_task_preempt_count(p) do { } while (0)
+ 
+ #define init_idle_preempt_count(p, cpu) do { \
+-	per_cpu(__preempt_count, (cpu)) = PREEMPT_ENABLED; \
++	per_cpu(__preempt_count, (cpu)) = PREEMPT_DISABLED; \
+ } while (0)
+ 
+ /*
+diff --git a/arch/x86/include/uapi/asm/hwcap2.h b/arch/x86/include/uapi/asm/hwcap2.h
+index 5fdfcb47000f9..054604aba9f00 100644
+--- a/arch/x86/include/uapi/asm/hwcap2.h
++++ b/arch/x86/include/uapi/asm/hwcap2.h
+@@ -2,10 +2,12 @@
+ #ifndef _ASM_X86_HWCAP2_H
+ #define _ASM_X86_HWCAP2_H
+ 
++#include <linux/const.h>
++
+ /* MONITOR/MWAIT enabled in Ring 3 */
+-#define HWCAP2_RING3MWAIT		(1 << 0)
++#define HWCAP2_RING3MWAIT		_BITUL(0)
+ 
+ /* Kernel allows FSGSBASE instructions available in Ring 3 */
+-#define HWCAP2_FSGSBASE			BIT(1)
++#define HWCAP2_FSGSBASE			_BITUL(1)
+ 
+ #endif
+diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
+index e0cdab7cb632b..f3202b2e3c157 100644
+--- a/arch/x86/kernel/sev-es.c
++++ b/arch/x86/kernel/sev-es.c
+@@ -12,7 +12,6 @@
+ #include <linux/sched/debug.h>	/* For show_regs() */
+ #include <linux/percpu-defs.h>
+ #include <linux/mem_encrypt.h>
+-#include <linux/lockdep.h>
+ #include <linux/printk.h>
+ #include <linux/mm_types.h>
+ #include <linux/set_memory.h>
+@@ -180,11 +179,19 @@ void noinstr __sev_es_ist_exit(void)
+ 	this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], *(unsigned long *)ist);
+ }
+ 
+-static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
++/*
++ * Nothing shall interrupt this code path while holding the per-CPU
++ * GHCB. The backup GHCB is only for NMIs interrupting this path.
++ *
++ * Callers must disable local interrupts around it.
++ */
++static noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state)
+ {
+ 	struct sev_es_runtime_data *data;
+ 	struct ghcb *ghcb;
+ 
++	WARN_ON(!irqs_disabled());
++
+ 	data = this_cpu_read(runtime_data);
+ 	ghcb = &data->ghcb_page;
+ 
+@@ -201,7 +208,9 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
+ 			data->ghcb_active        = false;
+ 			data->backup_ghcb_active = false;
+ 
++			instrumentation_begin();
+ 			panic("Unable to handle #VC exception! GHCB and Backup GHCB are already in use");
++			instrumentation_end();
+ 		}
+ 
+ 		/* Mark backup_ghcb active before writing to it */
+@@ -452,11 +461,13 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt
+ /* Include code shared with pre-decompression boot stage */
+ #include "sev-es-shared.c"
+ 
+-static __always_inline void sev_es_put_ghcb(struct ghcb_state *state)
++static noinstr void __sev_put_ghcb(struct ghcb_state *state)
+ {
+ 	struct sev_es_runtime_data *data;
+ 	struct ghcb *ghcb;
+ 
++	WARN_ON(!irqs_disabled());
++
+ 	data = this_cpu_read(runtime_data);
+ 	ghcb = &data->ghcb_page;
+ 
+@@ -480,7 +491,7 @@ void noinstr __sev_es_nmi_complete(void)
+ 	struct ghcb_state state;
+ 	struct ghcb *ghcb;
+ 
+-	ghcb = sev_es_get_ghcb(&state);
++	ghcb = __sev_get_ghcb(&state);
+ 
+ 	vc_ghcb_invalidate(ghcb);
+ 	ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_NMI_COMPLETE);
+@@ -490,7 +501,7 @@ void noinstr __sev_es_nmi_complete(void)
+ 	sev_es_wr_ghcb_msr(__pa_nodebug(ghcb));
+ 	VMGEXIT();
+ 
+-	sev_es_put_ghcb(&state);
++	__sev_put_ghcb(&state);
+ }
+ 
+ static u64 get_jump_table_addr(void)
+@@ -502,7 +513,7 @@ static u64 get_jump_table_addr(void)
+ 
+ 	local_irq_save(flags);
+ 
+-	ghcb = sev_es_get_ghcb(&state);
++	ghcb = __sev_get_ghcb(&state);
+ 
+ 	vc_ghcb_invalidate(ghcb);
+ 	ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_AP_JUMP_TABLE);
+@@ -516,7 +527,7 @@ static u64 get_jump_table_addr(void)
+ 	    ghcb_sw_exit_info_2_is_valid(ghcb))
+ 		ret = ghcb->save.sw_exit_info_2;
+ 
+-	sev_es_put_ghcb(&state);
++	__sev_put_ghcb(&state);
+ 
+ 	local_irq_restore(flags);
+ 
+@@ -641,7 +652,7 @@ static void sev_es_ap_hlt_loop(void)
+ 	struct ghcb_state state;
+ 	struct ghcb *ghcb;
+ 
+-	ghcb = sev_es_get_ghcb(&state);
++	ghcb = __sev_get_ghcb(&state);
+ 
+ 	while (true) {
+ 		vc_ghcb_invalidate(ghcb);
+@@ -658,7 +669,7 @@ static void sev_es_ap_hlt_loop(void)
+ 			break;
+ 	}
+ 
+-	sev_es_put_ghcb(&state);
++	__sev_put_ghcb(&state);
+ }
+ 
+ /*
+@@ -748,7 +759,7 @@ void __init sev_es_init_vc_handling(void)
+ 	sev_es_setup_play_dead();
+ 
+ 	/* Secondary CPUs use the runtime #VC handler */
+-	initial_vc_handler = (unsigned long)safe_stack_exc_vmm_communication;
++	initial_vc_handler = (unsigned long)kernel_exc_vmm_communication;
+ }
+ 
+ static void __init vc_early_forward_exception(struct es_em_ctxt *ctxt)
+@@ -1186,14 +1197,6 @@ static enum es_result vc_handle_trap_ac(struct ghcb *ghcb,
+ 	return ES_EXCEPTION;
+ }
+ 
+-static __always_inline void vc_handle_trap_db(struct pt_regs *regs)
+-{
+-	if (user_mode(regs))
+-		noist_exc_debug(regs);
+-	else
+-		exc_debug(regs);
+-}
+-
+ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
+ 					 struct ghcb *ghcb,
+ 					 unsigned long exit_code)
+@@ -1289,44 +1292,15 @@ static __always_inline bool on_vc_fallback_stack(struct pt_regs *regs)
+ 	return (sp >= __this_cpu_ist_bottom_va(VC2) && sp < __this_cpu_ist_top_va(VC2));
+ }
+ 
+-/*
+- * Main #VC exception handler. It is called when the entry code was able to
+- * switch off the IST to a safe kernel stack.
+- *
+- * With the current implementation it is always possible to switch to a safe
+- * stack because #VC exceptions only happen at known places, like intercepted
+- * instructions or accesses to MMIO areas/IO ports. They can also happen with
+- * code instrumentation when the hypervisor intercepts #DB, but the critical
+- * paths are forbidden to be instrumented, so #DB exceptions currently also
+- * only happen in safe places.
+- */
+-DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
++static bool vc_raw_handle_exception(struct pt_regs *regs, unsigned long error_code)
+ {
+-	irqentry_state_t irq_state;
+ 	struct ghcb_state state;
+ 	struct es_em_ctxt ctxt;
+ 	enum es_result result;
+ 	struct ghcb *ghcb;
++	bool ret = true;
+ 
+-	/*
+-	 * Handle #DB before calling into !noinstr code to avoid recursive #DB.
+-	 */
+-	if (error_code == SVM_EXIT_EXCP_BASE + X86_TRAP_DB) {
+-		vc_handle_trap_db(regs);
+-		return;
+-	}
+-
+-	irq_state = irqentry_nmi_enter(regs);
+-	lockdep_assert_irqs_disabled();
+-	instrumentation_begin();
+-
+-	/*
+-	 * This is invoked through an interrupt gate, so IRQs are disabled. The
+-	 * code below might walk page-tables for user or kernel addresses, so
+-	 * keep the IRQs disabled to protect us against concurrent TLB flushes.
+-	 */
+-
+-	ghcb = sev_es_get_ghcb(&state);
++	ghcb = __sev_get_ghcb(&state);
+ 
+ 	vc_ghcb_invalidate(ghcb);
+ 	result = vc_init_em_ctxt(&ctxt, regs, error_code);
+@@ -1334,7 +1308,7 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
+ 	if (result == ES_OK)
+ 		result = vc_handle_exitcode(&ctxt, ghcb, error_code);
+ 
+-	sev_es_put_ghcb(&state);
++	__sev_put_ghcb(&state);
+ 
+ 	/* Done - now check the result */
+ 	switch (result) {
+@@ -1344,15 +1318,18 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
+ 	case ES_UNSUPPORTED:
+ 		pr_err_ratelimited("Unsupported exit-code 0x%02lx in early #VC exception (IP: 0x%lx)\n",
+ 				   error_code, regs->ip);
+-		goto fail;
++		ret = false;
++		break;
+ 	case ES_VMM_ERROR:
+ 		pr_err_ratelimited("Failure in communication with VMM (exit-code 0x%02lx IP: 0x%lx)\n",
+ 				   error_code, regs->ip);
+-		goto fail;
++		ret = false;
++		break;
+ 	case ES_DECODE_FAILED:
+ 		pr_err_ratelimited("Failed to decode instruction (exit-code 0x%02lx IP: 0x%lx)\n",
+ 				   error_code, regs->ip);
+-		goto fail;
++		ret = false;
++		break;
+ 	case ES_EXCEPTION:
+ 		vc_forward_exception(&ctxt);
+ 		break;
+@@ -1368,24 +1345,52 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
+ 		BUG();
+ 	}
+ 
+-out:
+-	instrumentation_end();
+-	irqentry_nmi_exit(regs, irq_state);
++	return ret;
++}
+ 
+-	return;
++static __always_inline bool vc_is_db(unsigned long error_code)
++{
++	return error_code == SVM_EXIT_EXCP_BASE + X86_TRAP_DB;
++}
+ 
+-fail:
+-	if (user_mode(regs)) {
+-		/*
+-		 * Do not kill the machine if user-space triggered the
+-		 * exception. Send SIGBUS instead and let user-space deal with
+-		 * it.
+-		 */
+-		force_sig_fault(SIGBUS, BUS_OBJERR, (void __user *)0);
+-	} else {
+-		pr_emerg("PANIC: Unhandled #VC exception in kernel space (result=%d)\n",
+-			 result);
++/*
++ * Runtime #VC exception handler when raised from kernel mode. Runs in NMI mode
++ * and will panic when an error happens.
++ */
++DEFINE_IDTENTRY_VC_KERNEL(exc_vmm_communication)
++{
++	irqentry_state_t irq_state;
+ 
++	/*
++	 * With the current implementation it is always possible to switch to a
++	 * safe stack because #VC exceptions only happen at known places, like
++	 * intercepted instructions or accesses to MMIO areas/IO ports. They can
++	 * also happen with code instrumentation when the hypervisor intercepts
++	 * #DB, but the critical paths are forbidden to be instrumented, so #DB
++	 * exceptions currently also only happen in safe places.
++	 *
++	 * But keep this here in case the noinstr annotations are violated due
++	 * to bug elsewhere.
++	 */
++	if (unlikely(on_vc_fallback_stack(regs))) {
++		instrumentation_begin();
++		panic("Can't handle #VC exception from unsupported context\n");
++		instrumentation_end();
++	}
++
++	/*
++	 * Handle #DB before calling into !noinstr code to avoid recursive #DB.
++	 */
++	if (vc_is_db(error_code)) {
++		exc_debug(regs);
++		return;
++	}
++
++	irq_state = irqentry_nmi_enter(regs);
++
++	instrumentation_begin();
++
++	if (!vc_raw_handle_exception(regs, error_code)) {
+ 		/* Show some debug info */
+ 		show_regs(regs);
+ 
+@@ -1396,23 +1401,38 @@ fail:
+ 		panic("Returned from Terminate-Request to Hypervisor\n");
+ 	}
+ 
+-	goto out;
++	instrumentation_end();
++	irqentry_nmi_exit(regs, irq_state);
+ }
+ 
+-/* This handler runs on the #VC fall-back stack. It can cause further #VC exceptions */
+-DEFINE_IDTENTRY_VC_IST(exc_vmm_communication)
++/*
++ * Runtime #VC exception handler when raised from user mode. Runs in IRQ mode
++ * and will kill the current task with SIGBUS when an error happens.
++ */
++DEFINE_IDTENTRY_VC_USER(exc_vmm_communication)
+ {
++	/*
++	 * Handle #DB before calling into !noinstr code to avoid recursive #DB.
++	 */
++	if (vc_is_db(error_code)) {
++		noist_exc_debug(regs);
++		return;
++	}
++
++	irqentry_enter_from_user_mode(regs);
+ 	instrumentation_begin();
+-	panic("Can't handle #VC exception from unsupported context\n");
+-	instrumentation_end();
+-}
+ 
+-DEFINE_IDTENTRY_VC(exc_vmm_communication)
+-{
+-	if (likely(!on_vc_fallback_stack(regs)))
+-		safe_stack_exc_vmm_communication(regs, error_code);
+-	else
+-		ist_exc_vmm_communication(regs, error_code);
++	if (!vc_raw_handle_exception(regs, error_code)) {
++		/*
++		 * Do not kill the machine if user-space triggered the
++		 * exception. Send SIGBUS instead and let user-space deal with
++		 * it.
++		 */
++		force_sig_fault(SIGBUS, BUS_OBJERR, (void __user *)0);
++	}
++
++	instrumentation_end();
++	irqentry_exit_to_user_mode(regs);
+ }
+ 
+ bool __init handle_vc_boot_ghcb(struct pt_regs *regs)
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 582387fc939f4..8baff500914ea 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -230,7 +230,6 @@ static void notrace start_secondary(void *unused)
+ 	cpu_init_exception_handling();
+ 	cpu_init();
+ 	x86_cpuinit.early_percpu_clock_init();
+-	preempt_disable();
+ 	smp_callin();
+ 
+ 	enable_start_cpu0 = 0;
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index f70dffc2771f5..56289170753c5 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -1151,7 +1151,8 @@ static struct clocksource clocksource_tsc = {
+ 	.mask			= CLOCKSOURCE_MASK(64),
+ 	.flags			= CLOCK_SOURCE_IS_CONTINUOUS |
+ 				  CLOCK_SOURCE_VALID_FOR_HRES |
+-				  CLOCK_SOURCE_MUST_VERIFY,
++				  CLOCK_SOURCE_MUST_VERIFY |
++				  CLOCK_SOURCE_VERIFY_PERCPU,
+ 	.vdso_clock_mode	= VDSO_CLOCKMODE_TSC,
+ 	.enable			= tsc_cs_enable,
+ 	.resume			= tsc_resume,
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 5c7c4060b45cb..bb39f493447cf 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -1564,7 +1564,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *current_vcpu, u64 ingpa,
+ 	 * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
+ 	 * analyze it here, flush TLB regardless of the specified address space.
+ 	 */
+-	kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH,
++	kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST,
+ 				    NULL, vcpu_mask, &hv_vcpu->tlb_flush);
+ 
+ ret_success:
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index f2eeaf197294d..7e6dc454ea28d 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -4133,7 +4133,15 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
+ void
+ reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
+ {
+-	bool uses_nx = context->nx ||
++	/*
++	 * KVM uses NX when TDP is disabled to handle a variety of scenarios,
++	 * notably for huge SPTEs if iTLB multi-hit mitigation is enabled and
++	 * to generate correct permissions for CR0.WP=0/CR4.SMEP=1/EFER.NX=0.
++	 * The iTLB multi-hit workaround can be toggled at any time, so assume
++	 * NX can be used by any non-nested shadow MMU to avoid having to reset
++	 * MMU contexts.  Note, KVM forces EFER.NX=1 when TDP is disabled.
++	 */
++	bool uses_nx = context->nx || !tdp_enabled ||
+ 		context->mmu_role.base.smep_andnot_wp;
+ 	struct rsvd_bits_validate *shadow_zero_check;
+ 	int i;
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index 00a0bfaed6e86..d6cd702e85b68 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -471,8 +471,7 @@ retry_walk:
+ 
+ error:
+ 	errcode |= write_fault | user_fault;
+-	if (fetch_fault && (mmu->nx ||
+-			    kvm_read_cr4_bits(vcpu, X86_CR4_SMEP)))
++	if (fetch_fault && (mmu->nx || mmu->mmu_role.ext.cr4_smep))
+ 		errcode |= PFERR_FETCH_MASK;
+ 
+ 	walker->fault.vector = PF_VECTOR;
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index 61c00f8631f1a..f2ddf663e72e9 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -527,7 +527,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, int write,
+ 					  kvm_pfn_t pfn, bool prefault)
+ {
+ 	u64 new_spte;
+-	int ret = 0;
++	int ret = RET_PF_FIXED;
+ 	int make_spte_ret = 0;
+ 
+ 	if (unlikely(is_noslot_pfn(pfn))) {
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 32e6f33c2c45b..67554bc7adb26 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -1142,12 +1142,19 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3, bool ne
+ 
+ 	/*
+ 	 * Unconditionally skip the TLB flush on fast CR3 switch, all TLB
+-	 * flushes are handled by nested_vmx_transition_tlb_flush().  See
+-	 * nested_vmx_transition_mmu_sync for details on skipping the MMU sync.
++	 * flushes are handled by nested_vmx_transition_tlb_flush().
+ 	 */
+-	if (!nested_ept)
+-		kvm_mmu_new_pgd(vcpu, cr3, true,
+-				!nested_vmx_transition_mmu_sync(vcpu));
++	if (!nested_ept) {
++		kvm_mmu_new_pgd(vcpu, cr3, true, true);
++
++		/*
++		 * A TLB flush on VM-Enter/VM-Exit flushes all linear mappings
++		 * across all PCIDs, i.e. all PGDs need to be synchronized.
++		 * See nested_vmx_transition_mmu_sync() for more details.
++		 */
++		if (nested_vmx_transition_mmu_sync(vcpu))
++			kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
++	}
+ 
+ 	vcpu->arch.cr3 = cr3;
+ 	kvm_register_mark_available(vcpu, VCPU_EXREG_CR3);
+@@ -5477,8 +5484,6 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
+ {
+ 	u32 index = kvm_rcx_read(vcpu);
+ 	u64 new_eptp;
+-	bool accessed_dirty;
+-	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
+ 
+ 	if (!nested_cpu_has_eptp_switching(vmcs12) ||
+ 	    !nested_cpu_has_ept(vmcs12))
+@@ -5487,13 +5492,10 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
+ 	if (index >= VMFUNC_EPTP_ENTRIES)
+ 		return 1;
+ 
+-
+ 	if (kvm_vcpu_read_guest_page(vcpu, vmcs12->eptp_list_address >> PAGE_SHIFT,
+ 				     &new_eptp, index * 8, 8))
+ 		return 1;
+ 
+-	accessed_dirty = !!(new_eptp & VMX_EPTP_AD_ENABLE_BIT);
+-
+ 	/*
+ 	 * If the (L2) guest does a vmfunc to the currently
+ 	 * active ept pointer, we don't have to do anything else
+@@ -5502,8 +5504,6 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
+ 		if (!nested_vmx_check_eptp(vcpu, new_eptp))
+ 			return 1;
+ 
+-		mmu->ept_ad = accessed_dirty;
+-		mmu->mmu_role.base.ad_disabled = !accessed_dirty;
+ 		vmcs12->ept_pointer = new_eptp;
+ 
+ 		kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
+@@ -5529,7 +5529,7 @@ static int handle_vmfunc(struct kvm_vcpu *vcpu)
+ 	}
+ 
+ 	vmcs12 = get_vmcs12(vcpu);
+-	if ((vmcs12->vm_function_control & (1 << function)) == 0)
++	if (!(vmcs12->vm_function_control & BIT_ULL(function)))
+ 		goto fail;
+ 
+ 	switch (function) {
+@@ -5787,6 +5787,9 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
+ 		else if (is_breakpoint(intr_info) &&
+ 			 vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP)
+ 			return true;
++		else if (is_alignment_check(intr_info) &&
++			 !vmx_guest_inject_ac(vcpu))
++			return true;
+ 		return false;
+ 	case EXIT_REASON_EXTERNAL_INTERRUPT:
+ 		return true;
+diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h
+index 1472c6c376f74..571d9ad80a59e 100644
+--- a/arch/x86/kvm/vmx/vmcs.h
++++ b/arch/x86/kvm/vmx/vmcs.h
+@@ -117,6 +117,11 @@ static inline bool is_gp_fault(u32 intr_info)
+ 	return is_exception_n(intr_info, GP_VECTOR);
+ }
+ 
++static inline bool is_alignment_check(u32 intr_info)
++{
++	return is_exception_n(intr_info, AC_VECTOR);
++}
++
+ static inline bool is_machine_check(u32 intr_info)
+ {
+ 	return is_exception_n(intr_info, MC_VECTOR);
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 45877364e6829..de24d3826788a 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -4755,7 +4755,7 @@ static int handle_machine_check(struct kvm_vcpu *vcpu)
+  *  - Guest has #AC detection enabled in CR0
+  *  - Guest EFLAGS has AC bit set
+  */
+-static inline bool guest_inject_ac(struct kvm_vcpu *vcpu)
++bool vmx_guest_inject_ac(struct kvm_vcpu *vcpu)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
+ 		return true;
+@@ -4864,7 +4864,7 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
+ 		kvm_run->debug.arch.exception = ex_no;
+ 		break;
+ 	case AC_VECTOR:
+-		if (guest_inject_ac(vcpu)) {
++		if (vmx_guest_inject_ac(vcpu)) {
+ 			kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
+ 			return 1;
+ 		}
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index ae3a89ac0600d..73d87d44b6578 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -352,6 +352,7 @@ void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
+ u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa,
+ 		   int root_level);
+ 
++bool vmx_guest_inject_ac(struct kvm_vcpu *vcpu);
+ void update_exception_bitmap(struct kvm_vcpu *vcpu);
+ void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
+ bool vmx_nmi_blocked(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index d3372cb973079..7bf88e6cbd0e9 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -8852,7 +8852,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		}
+ 		if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
+ 			kvm_vcpu_flush_tlb_current(vcpu);
+-		if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
++		if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
+ 			kvm_vcpu_flush_tlb_guest(vcpu);
+ 
+ 		if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
+diff --git a/arch/xtensa/kernel/smp.c b/arch/xtensa/kernel/smp.c
+index cd85a7a2722ba..1254da07ead1f 100644
+--- a/arch/xtensa/kernel/smp.c
++++ b/arch/xtensa/kernel/smp.c
+@@ -145,7 +145,6 @@ void secondary_start_kernel(void)
+ 	cpumask_set_cpu(cpu, mm_cpumask(mm));
+ 	enter_lazy_tlb(mm, current);
+ 
+-	preempt_disable();
+ 	trace_hardirqs_off();
+ 
+ 	calibrate_delay();
+diff --git a/block/blk-flush.c b/block/blk-flush.c
+index fd5cee9f1a3be..7ee7e5e8905d5 100644
+--- a/block/blk-flush.c
++++ b/block/blk-flush.c
+@@ -220,8 +220,6 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
+ 	unsigned long flags = 0;
+ 	struct blk_flush_queue *fq = blk_get_flush_queue(q, flush_rq->mq_ctx);
+ 
+-	blk_account_io_flush(flush_rq);
+-
+ 	/* release the tag's ownership to the req cloned from */
+ 	spin_lock_irqsave(&fq->mq_flush_lock, flags);
+ 
+@@ -231,6 +229,7 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
+ 		return;
+ 	}
+ 
++	blk_account_io_flush(flush_rq);
+ 	/*
+ 	 * Flush request has to be marked as IDLE when it is really ended
+ 	 * because its .end_io() is called from timeout code path too for
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index 7cdd566966473..349cd7d3af815 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -552,10 +552,14 @@ static inline unsigned int blk_rq_get_max_segments(struct request *rq)
+ static inline int ll_new_hw_segment(struct request *req, struct bio *bio,
+ 		unsigned int nr_phys_segs)
+ {
+-	if (req->nr_phys_segments + nr_phys_segs > blk_rq_get_max_segments(req))
++	if (blk_integrity_merge_bio(req->q, req, bio) == false)
+ 		goto no_merge;
+ 
+-	if (blk_integrity_merge_bio(req->q, req, bio) == false)
++	/* discard request merge won't add new segment */
++	if (req_op(req) == REQ_OP_DISCARD)
++		return 1;
++
++	if (req->nr_phys_segments + nr_phys_segs > blk_rq_get_max_segments(req))
+ 		goto no_merge;
+ 
+ 	/*
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index 9c92053e704dc..c4f2f6c123aed 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -199,6 +199,20 @@ struct bt_iter_data {
+ 	bool reserved;
+ };
+ 
++static struct request *blk_mq_find_and_get_req(struct blk_mq_tags *tags,
++		unsigned int bitnr)
++{
++	struct request *rq;
++	unsigned long flags;
++
++	spin_lock_irqsave(&tags->lock, flags);
++	rq = tags->rqs[bitnr];
++	if (!rq || !refcount_inc_not_zero(&rq->ref))
++		rq = NULL;
++	spin_unlock_irqrestore(&tags->lock, flags);
++	return rq;
++}
++
+ static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
+ {
+ 	struct bt_iter_data *iter_data = data;
+@@ -206,18 +220,22 @@ static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
+ 	struct blk_mq_tags *tags = hctx->tags;
+ 	bool reserved = iter_data->reserved;
+ 	struct request *rq;
++	bool ret = true;
+ 
+ 	if (!reserved)
+ 		bitnr += tags->nr_reserved_tags;
+-	rq = tags->rqs[bitnr];
+-
+ 	/*
+ 	 * We can hit rq == NULL here, because the tagging functions
+ 	 * test and set the bit before assigning ->rqs[].
+ 	 */
+-	if (rq && rq->q == hctx->queue && rq->mq_hctx == hctx)
+-		return iter_data->fn(hctx, rq, iter_data->data, reserved);
+-	return true;
++	rq = blk_mq_find_and_get_req(tags, bitnr);
++	if (!rq)
++		return true;
++
++	if (rq->q == hctx->queue && rq->mq_hctx == hctx)
++		ret = iter_data->fn(hctx, rq, iter_data->data, reserved);
++	blk_mq_put_rq_ref(rq);
++	return ret;
+ }
+ 
+ /**
+@@ -264,6 +282,8 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
+ 	struct blk_mq_tags *tags = iter_data->tags;
+ 	bool reserved = iter_data->flags & BT_TAG_ITER_RESERVED;
+ 	struct request *rq;
++	bool ret = true;
++	bool iter_static_rqs = !!(iter_data->flags & BT_TAG_ITER_STATIC_RQS);
+ 
+ 	if (!reserved)
+ 		bitnr += tags->nr_reserved_tags;
+@@ -272,16 +292,19 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
+ 	 * We can hit rq == NULL here, because the tagging functions
+ 	 * test and set the bit before assigning ->rqs[].
+ 	 */
+-	if (iter_data->flags & BT_TAG_ITER_STATIC_RQS)
++	if (iter_static_rqs)
+ 		rq = tags->static_rqs[bitnr];
+ 	else
+-		rq = tags->rqs[bitnr];
++		rq = blk_mq_find_and_get_req(tags, bitnr);
+ 	if (!rq)
+ 		return true;
+-	if ((iter_data->flags & BT_TAG_ITER_STARTED) &&
+-	    !blk_mq_request_started(rq))
+-		return true;
+-	return iter_data->fn(rq, iter_data->data, reserved);
++
++	if (!(iter_data->flags & BT_TAG_ITER_STARTED) ||
++	    blk_mq_request_started(rq))
++		ret = iter_data->fn(rq, iter_data->data, reserved);
++	if (!iter_static_rqs)
++		blk_mq_put_rq_ref(rq);
++	return ret;
+ }
+ 
+ /**
+@@ -348,6 +371,9 @@ void blk_mq_all_tag_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
+  *		indicates whether or not @rq is a reserved request. Return
+  *		true to continue iterating tags, false to stop.
+  * @priv:	Will be passed as second argument to @fn.
++ *
++ * We grab one request reference before calling @fn and release it after
++ * @fn returns.
+  */
+ void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
+ 		busy_tag_iter_fn *fn, void *priv)
+@@ -516,6 +542,7 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
+ 
+ 	tags->nr_tags = total_tags;
+ 	tags->nr_reserved_tags = reserved_tags;
++	spin_lock_init(&tags->lock);
+ 
+ 	if (flags & BLK_MQ_F_TAG_HCTX_SHARED)
+ 		return tags;
+diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
+index 7d3e6b333a4a9..f887988e5ef60 100644
+--- a/block/blk-mq-tag.h
++++ b/block/blk-mq-tag.h
+@@ -20,6 +20,12 @@ struct blk_mq_tags {
+ 	struct request **rqs;
+ 	struct request **static_rqs;
+ 	struct list_head page_list;
++
++	/*
++	 * used to clear request reference in rqs[] before freeing one
++	 * request pool
++	 */
++	spinlock_t lock;
+ };
+ 
+ extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 4bf9449b45868..a368eb6dc6470 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -927,6 +927,14 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
+ 	return false;
+ }
+ 
++void blk_mq_put_rq_ref(struct request *rq)
++{
++	if (is_flush_rq(rq, rq->mq_hctx))
++		rq->end_io(rq, 0);
++	else if (refcount_dec_and_test(&rq->ref))
++		__blk_mq_free_request(rq);
++}
++
+ static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
+ 		struct request *rq, void *priv, bool reserved)
+ {
+@@ -960,11 +968,7 @@ static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
+ 	if (blk_mq_req_expired(rq, next))
+ 		blk_mq_rq_timed_out(rq, reserved);
+ 
+-	if (is_flush_rq(rq, hctx))
+-		rq->end_io(rq, 0);
+-	else if (refcount_dec_and_test(&rq->ref))
+-		__blk_mq_free_request(rq);
+-
++	blk_mq_put_rq_ref(rq);
+ 	return true;
+ }
+ 
+@@ -1238,9 +1242,6 @@ static void blk_mq_update_dispatch_busy(struct blk_mq_hw_ctx *hctx, bool busy)
+ {
+ 	unsigned int ewma;
+ 
+-	if (hctx->queue->elevator)
+-		return;
+-
+ 	ewma = hctx->dispatch_busy;
+ 
+ 	if (!ewma && !busy)
+@@ -2272,6 +2273,45 @@ queue_exit:
+ 	return BLK_QC_T_NONE;
+ }
+ 
++static size_t order_to_size(unsigned int order)
++{
++	return (size_t)PAGE_SIZE << order;
++}
++
++/* called before freeing request pool in @tags */
++static void blk_mq_clear_rq_mapping(struct blk_mq_tag_set *set,
++		struct blk_mq_tags *tags, unsigned int hctx_idx)
++{
++	struct blk_mq_tags *drv_tags = set->tags[hctx_idx];
++	struct page *page;
++	unsigned long flags;
++
++	list_for_each_entry(page, &tags->page_list, lru) {
++		unsigned long start = (unsigned long)page_address(page);
++		unsigned long end = start + order_to_size(page->private);
++		int i;
++
++		for (i = 0; i < set->queue_depth; i++) {
++			struct request *rq = drv_tags->rqs[i];
++			unsigned long rq_addr = (unsigned long)rq;
++
++			if (rq_addr >= start && rq_addr < end) {
++				WARN_ON_ONCE(refcount_read(&rq->ref) != 0);
++				cmpxchg(&drv_tags->rqs[i], rq, NULL);
++			}
++		}
++	}
++
++	/*
++	 * Wait until all pending iteration is done.
++	 *
++	 * Request reference is cleared and it is guaranteed to be observed
++	 * after the ->lock is released.
++	 */
++	spin_lock_irqsave(&drv_tags->lock, flags);
++	spin_unlock_irqrestore(&drv_tags->lock, flags);
++}
++
+ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
+ 		     unsigned int hctx_idx)
+ {
+@@ -2290,6 +2330,8 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
+ 		}
+ 	}
+ 
++	blk_mq_clear_rq_mapping(set, tags, hctx_idx);
++
+ 	while (!list_empty(&tags->page_list)) {
+ 		page = list_first_entry(&tags->page_list, struct page, lru);
+ 		list_del_init(&page->lru);
+@@ -2349,11 +2391,6 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
+ 	return tags;
+ }
+ 
+-static size_t order_to_size(unsigned int order)
+-{
+-	return (size_t)PAGE_SIZE << order;
+-}
+-
+ static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
+ 			       unsigned int hctx_idx, int node)
+ {
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index d2359f7cfd5f2..f792a0920ebb1 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -47,6 +47,7 @@ void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
+ void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
+ struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx,
+ 					struct blk_mq_ctx *start);
++void blk_mq_put_rq_ref(struct request *rq);
+ 
+ /*
+  * Internal helpers for allocating/freeing the request map
+diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
+index 2bc43e94f4c40..2bcb3495e376b 100644
+--- a/block/blk-rq-qos.h
++++ b/block/blk-rq-qos.h
+@@ -7,6 +7,7 @@
+ #include <linux/blk_types.h>
+ #include <linux/atomic.h>
+ #include <linux/wait.h>
++#include <linux/blk-mq.h>
+ 
+ #include "blk-mq-debugfs.h"
+ 
+@@ -99,8 +100,21 @@ static inline void rq_wait_init(struct rq_wait *rq_wait)
+ 
+ static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
+ {
++	/*
++	 * No IO can be in-flight when adding rqos, so freeze queue, which
++	 * is fine since we only support rq_qos for blk-mq queue.
++	 *
++	 * Reuse ->queue_lock for protecting against other concurrent
++	 * rq_qos adding/deleting
++	 */
++	blk_mq_freeze_queue(q);
++
++	spin_lock_irq(&q->queue_lock);
+ 	rqos->next = q->rq_qos;
+ 	q->rq_qos = rqos;
++	spin_unlock_irq(&q->queue_lock);
++
++	blk_mq_unfreeze_queue(q);
+ 
+ 	if (rqos->ops->debugfs_attrs)
+ 		blk_mq_debugfs_register_rqos(rqos);
+@@ -110,12 +124,22 @@ static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
+ {
+ 	struct rq_qos **cur;
+ 
++	/*
++	 * See comment in rq_qos_add() about freezing queue & using
++	 * ->queue_lock.
++	 */
++	blk_mq_freeze_queue(q);
++
++	spin_lock_irq(&q->queue_lock);
+ 	for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
+ 		if (*cur == rqos) {
+ 			*cur = rqos->next;
+ 			break;
+ 		}
+ 	}
++	spin_unlock_irq(&q->queue_lock);
++
++	blk_mq_unfreeze_queue(q);
+ 
+ 	blk_mq_debugfs_unregister_rqos(rqos);
+ }
+diff --git a/block/blk-wbt.c b/block/blk-wbt.c
+index fd410086fe1de..35d81b5deae1c 100644
+--- a/block/blk-wbt.c
++++ b/block/blk-wbt.c
+@@ -77,7 +77,8 @@ enum {
+ 
+ static inline bool rwb_enabled(struct rq_wb *rwb)
+ {
+-	return rwb && rwb->wb_normal != 0;
++	return rwb && rwb->enable_state != WBT_STATE_OFF_DEFAULT &&
++		      rwb->wb_normal != 0;
+ }
+ 
+ static void wb_timestamp(struct rq_wb *rwb, unsigned long *var)
+@@ -636,9 +637,13 @@ void wbt_set_write_cache(struct request_queue *q, bool write_cache_on)
+ void wbt_enable_default(struct request_queue *q)
+ {
+ 	struct rq_qos *rqos = wbt_rq_qos(q);
++
+ 	/* Throttling already enabled? */
+-	if (rqos)
++	if (rqos) {
++		if (RQWB(rqos)->enable_state == WBT_STATE_OFF_DEFAULT)
++			RQWB(rqos)->enable_state = WBT_STATE_ON_DEFAULT;
+ 		return;
++	}
+ 
+ 	/* Queue not registered? Maybe shutting down... */
+ 	if (!blk_queue_registered(q))
+@@ -702,7 +707,7 @@ void wbt_disable_default(struct request_queue *q)
+ 	rwb = RQWB(rqos);
+ 	if (rwb->enable_state == WBT_STATE_ON_DEFAULT) {
+ 		blk_stat_deactivate(rwb->cb);
+-		rwb->wb_normal = 0;
++		rwb->enable_state = WBT_STATE_OFF_DEFAULT;
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(wbt_disable_default);
+diff --git a/block/blk-wbt.h b/block/blk-wbt.h
+index 16bdc85b8df92..2eb01becde8c4 100644
+--- a/block/blk-wbt.h
++++ b/block/blk-wbt.h
+@@ -34,6 +34,7 @@ enum {
+ enum {
+ 	WBT_STATE_ON_DEFAULT	= 1,
+ 	WBT_STATE_ON_MANUAL	= 2,
++	WBT_STATE_OFF_DEFAULT
+ };
+ 
+ struct rq_wb {
+diff --git a/crypto/shash.c b/crypto/shash.c
+index 2e3433ad97629..0a0a50cb694f0 100644
+--- a/crypto/shash.c
++++ b/crypto/shash.c
+@@ -20,12 +20,24 @@
+ 
+ static const struct crypto_type crypto_shash_type;
+ 
+-int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
+-		    unsigned int keylen)
++static int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
++			   unsigned int keylen)
+ {
+ 	return -ENOSYS;
+ }
+-EXPORT_SYMBOL_GPL(shash_no_setkey);
++
++/*
++ * Check whether an shash algorithm has a setkey function.
++ *
++ * For CFI compatibility, this must not be an inline function.  This is because
++ * when CFI is enabled, modules won't get the same address for shash_no_setkey
++ * (if it were exported, which inlining would require) as the core kernel will.
++ */
++bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
++{
++	return alg->setkey != shash_no_setkey;
++}
++EXPORT_SYMBOL_GPL(crypto_shash_alg_has_setkey);
+ 
+ static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
+ 				  unsigned int keylen)
+diff --git a/crypto/sm2.c b/crypto/sm2.c
+index 767e160333f6e..db8a4a265669d 100644
+--- a/crypto/sm2.c
++++ b/crypto/sm2.c
+@@ -79,10 +79,17 @@ static int sm2_ec_ctx_init(struct mpi_ec_ctx *ec)
+ 		goto free;
+ 
+ 	rc = -ENOMEM;
++
++	ec->Q = mpi_point_new(0);
++	if (!ec->Q)
++		goto free;
++
+ 	/* mpi_ec_setup_elliptic_curve */
+ 	ec->G = mpi_point_new(0);
+-	if (!ec->G)
++	if (!ec->G) {
++		mpi_point_release(ec->Q);
+ 		goto free;
++	}
+ 
+ 	mpi_set(ec->G->x, x);
+ 	mpi_set(ec->G->y, y);
+@@ -91,6 +98,7 @@ static int sm2_ec_ctx_init(struct mpi_ec_ctx *ec)
+ 	rc = -EINVAL;
+ 	ec->n = mpi_scanval(ecp->n);
+ 	if (!ec->n) {
++		mpi_point_release(ec->Q);
+ 		mpi_point_release(ec->G);
+ 		goto free;
+ 	}
+@@ -119,12 +127,6 @@ static void sm2_ec_ctx_deinit(struct mpi_ec_ctx *ec)
+ 	memset(ec, 0, sizeof(*ec));
+ }
+ 
+-static int sm2_ec_ctx_reset(struct mpi_ec_ctx *ec)
+-{
+-	sm2_ec_ctx_deinit(ec);
+-	return sm2_ec_ctx_init(ec);
+-}
+-
+ /* RESULT must have been initialized and is set on success to the
+  * point given by VALUE.
+  */
+@@ -132,55 +134,48 @@ static int sm2_ecc_os2ec(MPI_POINT result, MPI value)
+ {
+ 	int rc;
+ 	size_t n;
+-	const unsigned char *buf;
+-	unsigned char *buf_memory;
++	unsigned char *buf;
+ 	MPI x, y;
+ 
+-	n = (mpi_get_nbits(value)+7)/8;
+-	buf_memory = kmalloc(n, GFP_KERNEL);
+-	rc = mpi_print(GCRYMPI_FMT_USG, buf_memory, n, &n, value);
+-	if (rc) {
+-		kfree(buf_memory);
+-		return rc;
+-	}
+-	buf = buf_memory;
++	n = MPI_NBYTES(value);
++	buf = kmalloc(n, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
+ 
+-	if (n < 1) {
+-		kfree(buf_memory);
+-		return -EINVAL;
+-	}
+-	if (*buf != 4) {
+-		kfree(buf_memory);
+-		return -EINVAL; /* No support for point compression.  */
+-	}
+-	if (((n-1)%2)) {
+-		kfree(buf_memory);
+-		return -EINVAL;
+-	}
+-	n = (n-1)/2;
++	rc = mpi_print(GCRYMPI_FMT_USG, buf, n, &n, value);
++	if (rc)
++		goto err_freebuf;
++
++	rc = -EINVAL;
++	if (n < 1 || ((n - 1) % 2))
++		goto err_freebuf;
++	/* No support for point compression */
++	if (*buf != 0x4)
++		goto err_freebuf;
++
++	rc = -ENOMEM;
++	n = (n - 1) / 2;
+ 	x = mpi_read_raw_data(buf + 1, n);
+-	if (!x) {
+-		kfree(buf_memory);
+-		return -ENOMEM;
+-	}
++	if (!x)
++		goto err_freebuf;
+ 	y = mpi_read_raw_data(buf + 1 + n, n);
+-	kfree(buf_memory);
+-	if (!y) {
+-		mpi_free(x);
+-		return -ENOMEM;
+-	}
++	if (!y)
++		goto err_freex;
+ 
+ 	mpi_normalize(x);
+ 	mpi_normalize(y);
+-
+ 	mpi_set(result->x, x);
+ 	mpi_set(result->y, y);
+ 	mpi_set_ui(result->z, 1);
+ 
+-	mpi_free(x);
+-	mpi_free(y);
++	rc = 0;
+ 
+-	return 0;
++	mpi_free(y);
++err_freex:
++	mpi_free(x);
++err_freebuf:
++	kfree(buf);
++	return rc;
+ }
+ 
+ struct sm2_signature_ctx {
+@@ -399,31 +394,15 @@ static int sm2_set_pub_key(struct crypto_akcipher *tfm,
+ 	MPI a;
+ 	int rc;
+ 
+-	rc = sm2_ec_ctx_reset(ec);
+-	if (rc)
+-		return rc;
+-
+-	ec->Q = mpi_point_new(0);
+-	if (!ec->Q)
+-		return -ENOMEM;
+-
+ 	/* include the uncompressed flag '0x04' */
+-	rc = -ENOMEM;
+ 	a = mpi_read_raw_data(key, keylen);
+ 	if (!a)
+-		goto error;
++		return -ENOMEM;
+ 
+ 	mpi_normalize(a);
+ 	rc = sm2_ecc_os2ec(ec->Q, a);
+ 	mpi_free(a);
+-	if (rc)
+-		goto error;
+ 
+-	return 0;
+-
+-error:
+-	mpi_point_release(ec->Q);
+-	ec->Q = NULL;
+ 	return rc;
+ }
+ 
+diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
+index 44e4125063178..4466156474eef 100644
+--- a/drivers/acpi/Makefile
++++ b/drivers/acpi/Makefile
+@@ -8,6 +8,11 @@ ccflags-$(CONFIG_ACPI_DEBUG)	+= -DACPI_DEBUG_OUTPUT
+ #
+ # ACPI Boot-Time Table Parsing
+ #
++ifeq ($(CONFIG_ACPI_CUSTOM_DSDT),y)
++tables.o: $(src)/../../include/$(subst $\",,$(CONFIG_ACPI_CUSTOM_DSDT_FILE)) ;
++
++endif
++
+ obj-$(CONFIG_ACPI)		+= tables.o
+ obj-$(CONFIG_X86)		+= blacklist.o
+ 
+diff --git a/drivers/acpi/acpi_pad.c b/drivers/acpi/acpi_pad.c
+index b8745ce48a47b..b84ab722feb44 100644
+--- a/drivers/acpi/acpi_pad.c
++++ b/drivers/acpi/acpi_pad.c
+@@ -261,7 +261,7 @@ static uint32_t acpi_pad_idle_cpus_num(void)
+ 	return ps_tsk_num;
+ }
+ 
+-static ssize_t acpi_pad_rrtime_store(struct device *dev,
++static ssize_t rrtime_store(struct device *dev,
+ 	struct device_attribute *attr, const char *buf, size_t count)
+ {
+ 	unsigned long num;
+@@ -275,16 +275,14 @@ static ssize_t acpi_pad_rrtime_store(struct device *dev,
+ 	return count;
+ }
+ 
+-static ssize_t acpi_pad_rrtime_show(struct device *dev,
++static ssize_t rrtime_show(struct device *dev,
+ 	struct device_attribute *attr, char *buf)
+ {
+ 	return scnprintf(buf, PAGE_SIZE, "%d\n", round_robin_time);
+ }
+-static DEVICE_ATTR(rrtime, S_IRUGO|S_IWUSR,
+-	acpi_pad_rrtime_show,
+-	acpi_pad_rrtime_store);
++static DEVICE_ATTR_RW(rrtime);
+ 
+-static ssize_t acpi_pad_idlepct_store(struct device *dev,
++static ssize_t idlepct_store(struct device *dev,
+ 	struct device_attribute *attr, const char *buf, size_t count)
+ {
+ 	unsigned long num;
+@@ -298,16 +296,14 @@ static ssize_t acpi_pad_idlepct_store(struct device *dev,
+ 	return count;
+ }
+ 
+-static ssize_t acpi_pad_idlepct_show(struct device *dev,
++static ssize_t idlepct_show(struct device *dev,
+ 	struct device_attribute *attr, char *buf)
+ {
+ 	return scnprintf(buf, PAGE_SIZE, "%d\n", idle_pct);
+ }
+-static DEVICE_ATTR(idlepct, S_IRUGO|S_IWUSR,
+-	acpi_pad_idlepct_show,
+-	acpi_pad_idlepct_store);
++static DEVICE_ATTR_RW(idlepct);
+ 
+-static ssize_t acpi_pad_idlecpus_store(struct device *dev,
++static ssize_t idlecpus_store(struct device *dev,
+ 	struct device_attribute *attr, const char *buf, size_t count)
+ {
+ 	unsigned long num;
+@@ -319,16 +315,14 @@ static ssize_t acpi_pad_idlecpus_store(struct device *dev,
+ 	return count;
+ }
+ 
+-static ssize_t acpi_pad_idlecpus_show(struct device *dev,
++static ssize_t idlecpus_show(struct device *dev,
+ 	struct device_attribute *attr, char *buf)
+ {
+ 	return cpumap_print_to_pagebuf(false, buf,
+ 				       to_cpumask(pad_busy_cpus_bits));
+ }
+ 
+-static DEVICE_ATTR(idlecpus, S_IRUGO|S_IWUSR,
+-	acpi_pad_idlecpus_show,
+-	acpi_pad_idlecpus_store);
++static DEVICE_ATTR_RW(idlecpus);
+ 
+ static int acpi_pad_add_sysfs(struct acpi_device *device)
+ {
+diff --git a/drivers/acpi/acpi_tad.c b/drivers/acpi/acpi_tad.c
+index 7d45cce0c3c18..e9b8e8305e23e 100644
+--- a/drivers/acpi/acpi_tad.c
++++ b/drivers/acpi/acpi_tad.c
+@@ -237,7 +237,7 @@ static ssize_t time_show(struct device *dev, struct device_attribute *attr,
+ 		       rt.tz, rt.daylight);
+ }
+ 
+-static DEVICE_ATTR(time, S_IRUSR | S_IWUSR, time_show, time_store);
++static DEVICE_ATTR_RW(time);
+ 
+ static struct attribute *acpi_tad_time_attrs[] = {
+ 	&dev_attr_time.attr,
+@@ -446,7 +446,7 @@ static ssize_t ac_alarm_show(struct device *dev, struct device_attribute *attr,
+ 	return acpi_tad_alarm_read(dev, buf, ACPI_TAD_AC_TIMER);
+ }
+ 
+-static DEVICE_ATTR(ac_alarm, S_IRUSR | S_IWUSR, ac_alarm_show, ac_alarm_store);
++static DEVICE_ATTR_RW(ac_alarm);
+ 
+ static ssize_t ac_policy_store(struct device *dev, struct device_attribute *attr,
+ 			       const char *buf, size_t count)
+@@ -462,7 +462,7 @@ static ssize_t ac_policy_show(struct device *dev, struct device_attribute *attr,
+ 	return acpi_tad_policy_read(dev, buf, ACPI_TAD_AC_TIMER);
+ }
+ 
+-static DEVICE_ATTR(ac_policy, S_IRUSR | S_IWUSR, ac_policy_show, ac_policy_store);
++static DEVICE_ATTR_RW(ac_policy);
+ 
+ static ssize_t ac_status_store(struct device *dev, struct device_attribute *attr,
+ 			       const char *buf, size_t count)
+@@ -478,7 +478,7 @@ static ssize_t ac_status_show(struct device *dev, struct device_attribute *attr,
+ 	return acpi_tad_status_read(dev, buf, ACPI_TAD_AC_TIMER);
+ }
+ 
+-static DEVICE_ATTR(ac_status, S_IRUSR | S_IWUSR, ac_status_show, ac_status_store);
++static DEVICE_ATTR_RW(ac_status);
+ 
+ static struct attribute *acpi_tad_attrs[] = {
+ 	&dev_attr_caps.attr,
+@@ -505,7 +505,7 @@ static ssize_t dc_alarm_show(struct device *dev, struct device_attribute *attr,
+ 	return acpi_tad_alarm_read(dev, buf, ACPI_TAD_DC_TIMER);
+ }
+ 
+-static DEVICE_ATTR(dc_alarm, S_IRUSR | S_IWUSR, dc_alarm_show, dc_alarm_store);
++static DEVICE_ATTR_RW(dc_alarm);
+ 
+ static ssize_t dc_policy_store(struct device *dev, struct device_attribute *attr,
+ 			       const char *buf, size_t count)
+@@ -521,7 +521,7 @@ static ssize_t dc_policy_show(struct device *dev, struct device_attribute *attr,
+ 	return acpi_tad_policy_read(dev, buf, ACPI_TAD_DC_TIMER);
+ }
+ 
+-static DEVICE_ATTR(dc_policy, S_IRUSR | S_IWUSR, dc_policy_show, dc_policy_store);
++static DEVICE_ATTR_RW(dc_policy);
+ 
+ static ssize_t dc_status_store(struct device *dev, struct device_attribute *attr,
+ 			       const char *buf, size_t count)
+@@ -537,7 +537,7 @@ static ssize_t dc_status_show(struct device *dev, struct device_attribute *attr,
+ 	return acpi_tad_status_read(dev, buf, ACPI_TAD_DC_TIMER);
+ }
+ 
+-static DEVICE_ATTR(dc_status, S_IRUSR | S_IWUSR, dc_status_show, dc_status_store);
++static DEVICE_ATTR_RW(dc_status);
+ 
+ static struct attribute *acpi_tad_dc_attrs[] = {
+ 	&dev_attr_dc_alarm.attr,
+diff --git a/drivers/acpi/acpica/nsrepair2.c b/drivers/acpi/acpica/nsrepair2.c
+index 125143c41bb81..8768594c79e58 100644
+--- a/drivers/acpi/acpica/nsrepair2.c
++++ b/drivers/acpi/acpica/nsrepair2.c
+@@ -375,6 +375,13 @@ acpi_ns_repair_CID(struct acpi_evaluate_info *info,
+ 
+ 			(*element_ptr)->common.reference_count =
+ 			    original_ref_count;
++
++			/*
++			 * The original_element holds a reference from the package object
++			 * that represents _HID. Since a new element was created by _HID,
++			 * remove the reference from the _CID package.
++			 */
++			acpi_ut_remove_reference(original_element);
+ 		}
+ 
+ 		element_ptr++;
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index fce7ade2aba92..0c8330ed1ffd5 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -441,28 +441,35 @@ static void ghes_kick_task_work(struct callback_head *head)
+ 	gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, node_len);
+ }
+ 
+-static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
+-				       int sev)
++static bool ghes_do_memory_failure(u64 physical_addr, int flags)
+ {
+ 	unsigned long pfn;
+-	int flags = -1;
+-	int sec_sev = ghes_severity(gdata->error_severity);
+-	struct cper_sec_mem_err *mem_err = acpi_hest_get_payload(gdata);
+ 
+ 	if (!IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
+ 		return false;
+ 
+-	if (!(mem_err->validation_bits & CPER_MEM_VALID_PA))
+-		return false;
+-
+-	pfn = mem_err->physical_addr >> PAGE_SHIFT;
++	pfn = PHYS_PFN(physical_addr);
+ 	if (!pfn_valid(pfn)) {
+ 		pr_warn_ratelimited(FW_WARN GHES_PFX
+ 		"Invalid address in generic error data: %#llx\n",
+-		mem_err->physical_addr);
++		physical_addr);
+ 		return false;
+ 	}
+ 
++	memory_failure_queue(pfn, flags);
++	return true;
++}
++
++static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
++				       int sev)
++{
++	int flags = -1;
++	int sec_sev = ghes_severity(gdata->error_severity);
++	struct cper_sec_mem_err *mem_err = acpi_hest_get_payload(gdata);
++
++	if (!(mem_err->validation_bits & CPER_MEM_VALID_PA))
++		return false;
++
+ 	/* iff following two events can be handled properly by now */
+ 	if (sec_sev == GHES_SEV_CORRECTED &&
+ 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
+@@ -470,14 +477,56 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
+ 	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
+ 		flags = 0;
+ 
+-	if (flags != -1) {
+-		memory_failure_queue(pfn, flags);
+-		return true;
+-	}
++	if (flags != -1)
++		return ghes_do_memory_failure(mem_err->physical_addr, flags);
+ 
+ 	return false;
+ }
+ 
++static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev)
++{
++	struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
++	bool queued = false;
++	int sec_sev, i;
++	char *p;
++
++	log_arm_hw_error(err);
++
++	sec_sev = ghes_severity(gdata->error_severity);
++	if (sev != GHES_SEV_RECOVERABLE || sec_sev != GHES_SEV_RECOVERABLE)
++		return false;
++
++	p = (char *)(err + 1);
++	for (i = 0; i < err->err_info_num; i++) {
++		struct cper_arm_err_info *err_info = (struct cper_arm_err_info *)p;
++		bool is_cache = (err_info->type == CPER_ARM_CACHE_ERROR);
++		bool has_pa = (err_info->validation_bits & CPER_ARM_INFO_VALID_PHYSICAL_ADDR);
++		const char *error_type = "unknown error";
++
++		/*
++		 * The field (err_info->error_info & BIT(26)) is fixed to set to
++		 * 1 in some old firmware of HiSilicon Kunpeng920. We assume that
++		 * firmware won't mix corrected errors in an uncorrected section,
++		 * and don't filter out 'corrected' error here.
++		 */
++		if (is_cache && has_pa) {
++			queued = ghes_do_memory_failure(err_info->physical_fault_addr, 0);
++			p += err_info->length;
++			continue;
++		}
++
++		if (err_info->type < ARRAY_SIZE(cper_proc_error_type_strs))
++			error_type = cper_proc_error_type_strs[err_info->type];
++
++		pr_warn_ratelimited(FW_WARN GHES_PFX
++				    "Unhandled processor error type: %s\n",
++				    error_type);
++		p += err_info->length;
++	}
++
++	return queued;
++}
++
+ /*
+  * PCIe AER errors need to be sent to the AER driver for reporting and
+  * recovery. The GHES severities map to the following AER severities and
+@@ -605,9 +654,7 @@ static bool ghes_do_proc(struct ghes *ghes,
+ 			ghes_handle_aer(gdata);
+ 		}
+ 		else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
+-			struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
+-
+-			log_arm_hw_error(err);
++			queued = ghes_handle_arm_hw_error(gdata, sev);
+ 		} else {
+ 			void *err = acpi_hest_get_payload(gdata);
+ 
+diff --git a/drivers/acpi/bgrt.c b/drivers/acpi/bgrt.c
+index 251f961c28cc4..e0d14017706ea 100644
+--- a/drivers/acpi/bgrt.c
++++ b/drivers/acpi/bgrt.c
+@@ -15,40 +15,19 @@
+ static void *bgrt_image;
+ static struct kobject *bgrt_kobj;
+ 
+-static ssize_t show_version(struct device *dev,
+-			    struct device_attribute *attr, char *buf)
+-{
+-	return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.version);
+-}
+-static DEVICE_ATTR(version, S_IRUGO, show_version, NULL);
+-
+-static ssize_t show_status(struct device *dev,
+-			   struct device_attribute *attr, char *buf)
+-{
+-	return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.status);
+-}
+-static DEVICE_ATTR(status, S_IRUGO, show_status, NULL);
+-
+-static ssize_t show_type(struct device *dev,
+-			 struct device_attribute *attr, char *buf)
+-{
+-	return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.image_type);
+-}
+-static DEVICE_ATTR(type, S_IRUGO, show_type, NULL);
+-
+-static ssize_t show_xoffset(struct device *dev,
+-			    struct device_attribute *attr, char *buf)
+-{
+-	return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.image_offset_x);
+-}
+-static DEVICE_ATTR(xoffset, S_IRUGO, show_xoffset, NULL);
+-
+-static ssize_t show_yoffset(struct device *dev,
+-			    struct device_attribute *attr, char *buf)
+-{
+-	return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.image_offset_y);
+-}
+-static DEVICE_ATTR(yoffset, S_IRUGO, show_yoffset, NULL);
++#define BGRT_SHOW(_name, _member) \
++	static ssize_t _name##_show(struct kobject *kobj,			\
++				    struct kobj_attribute *attr, char *buf)	\
++	{									\
++		return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab._member);	\
++	}									\
++	struct kobj_attribute bgrt_attr_##_name = __ATTR_RO(_name)
++
++BGRT_SHOW(version, version);
++BGRT_SHOW(status, status);
++BGRT_SHOW(type, image_type);
++BGRT_SHOW(xoffset, image_offset_x);
++BGRT_SHOW(yoffset, image_offset_y);
+ 
+ static ssize_t image_read(struct file *file, struct kobject *kobj,
+ 	       struct bin_attribute *attr, char *buf, loff_t off, size_t count)
+@@ -60,11 +39,11 @@ static ssize_t image_read(struct file *file, struct kobject *kobj,
+ static BIN_ATTR_RO(image, 0);	/* size gets filled in later */
+ 
+ static struct attribute *bgrt_attributes[] = {
+-	&dev_attr_version.attr,
+-	&dev_attr_status.attr,
+-	&dev_attr_type.attr,
+-	&dev_attr_xoffset.attr,
+-	&dev_attr_yoffset.attr,
++	&bgrt_attr_version.attr,
++	&bgrt_attr_status.attr,
++	&bgrt_attr_type.attr,
++	&bgrt_attr_xoffset.attr,
++	&bgrt_attr_yoffset.attr,
+ 	NULL,
+ };
+ 
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index 1682f8b454a2e..e317214aabec5 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -1245,6 +1245,7 @@ static int __init acpi_init(void)
+ 
+ 	result = acpi_bus_init();
+ 	if (result) {
++		kobject_put(acpi_kobj);
+ 		disable_acpi();
+ 		return result;
+ 	}
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index 48ff6821a83d4..ecd2ddc2215f5 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -18,6 +18,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/suspend.h>
+ 
++#include "fan.h"
+ #include "internal.h"
+ 
+ #define _COMPONENT	ACPI_POWER_COMPONENT
+@@ -1298,10 +1299,7 @@ int acpi_dev_pm_attach(struct device *dev, bool power_on)
+ 	 * with the generic ACPI PM domain.
+ 	 */
+ 	static const struct acpi_device_id special_pm_ids[] = {
+-		{"PNP0C0B", }, /* Generic ACPI fan */
+-		{"INT3404", }, /* Fan */
+-		{"INTC1044", }, /* Fan for Tiger Lake generation */
+-		{"INTC1048", }, /* Fan for Alder Lake generation */
++		ACPI_FAN_DEVICE_IDS,
+ 		{}
+ 	};
+ 	struct acpi_device *adev = ACPI_COMPANION(dev);
+diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c
+index bfca116482b8b..fe8c7e79f4726 100644
+--- a/drivers/acpi/device_sysfs.c
++++ b/drivers/acpi/device_sysfs.c
+@@ -325,11 +325,11 @@ int acpi_device_modalias(struct device *dev, char *buf, int size)
+ EXPORT_SYMBOL_GPL(acpi_device_modalias);
+ 
+ static ssize_t
+-acpi_device_modalias_show(struct device *dev, struct device_attribute *attr, char *buf)
++modalias_show(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+ 	return __acpi_device_modalias(to_acpi_device(dev), buf, 1024);
+ }
+-static DEVICE_ATTR(modalias, 0444, acpi_device_modalias_show, NULL);
++static DEVICE_ATTR_RO(modalias);
+ 
+ static ssize_t real_power_state_show(struct device *dev,
+ 				     struct device_attribute *attr, char *buf)
+@@ -358,8 +358,8 @@ static ssize_t power_state_show(struct device *dev,
+ static DEVICE_ATTR_RO(power_state);
+ 
+ static ssize_t
+-acpi_eject_store(struct device *d, struct device_attribute *attr,
+-		const char *buf, size_t count)
++eject_store(struct device *d, struct device_attribute *attr,
++	    const char *buf, size_t count)
+ {
+ 	struct acpi_device *acpi_device = to_acpi_device(d);
+ 	acpi_object_type not_used;
+@@ -387,28 +387,28 @@ acpi_eject_store(struct device *d, struct device_attribute *attr,
+ 	return status == AE_NO_MEMORY ? -ENOMEM : -EAGAIN;
+ }
+ 
+-static DEVICE_ATTR(eject, 0200, NULL, acpi_eject_store);
++static DEVICE_ATTR_WO(eject);
+ 
+ static ssize_t
+-acpi_device_hid_show(struct device *dev, struct device_attribute *attr, char *buf)
++hid_show(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+ 	struct acpi_device *acpi_dev = to_acpi_device(dev);
+ 
+ 	return sprintf(buf, "%s\n", acpi_device_hid(acpi_dev));
+ }
+-static DEVICE_ATTR(hid, 0444, acpi_device_hid_show, NULL);
++static DEVICE_ATTR_RO(hid);
+ 
+-static ssize_t acpi_device_uid_show(struct device *dev,
+-				    struct device_attribute *attr, char *buf)
++static ssize_t uid_show(struct device *dev,
++			struct device_attribute *attr, char *buf)
+ {
+ 	struct acpi_device *acpi_dev = to_acpi_device(dev);
+ 
+ 	return sprintf(buf, "%s\n", acpi_dev->pnp.unique_id);
+ }
+-static DEVICE_ATTR(uid, 0444, acpi_device_uid_show, NULL);
++static DEVICE_ATTR_RO(uid);
+ 
+-static ssize_t acpi_device_adr_show(struct device *dev,
+-				    struct device_attribute *attr, char *buf)
++static ssize_t adr_show(struct device *dev,
++			struct device_attribute *attr, char *buf)
+ {
+ 	struct acpi_device *acpi_dev = to_acpi_device(dev);
+ 
+@@ -417,16 +417,16 @@ static ssize_t acpi_device_adr_show(struct device *dev,
+ 	else
+ 		return sprintf(buf, "0x%08llx\n", acpi_dev->pnp.bus_address);
+ }
+-static DEVICE_ATTR(adr, 0444, acpi_device_adr_show, NULL);
++static DEVICE_ATTR_RO(adr);
+ 
+-static ssize_t acpi_device_path_show(struct device *dev,
+-				     struct device_attribute *attr, char *buf)
++static ssize_t path_show(struct device *dev,
++			 struct device_attribute *attr, char *buf)
+ {
+ 	struct acpi_device *acpi_dev = to_acpi_device(dev);
+ 
+ 	return acpi_object_path(acpi_dev->handle, buf);
+ }
+-static DEVICE_ATTR(path, 0444, acpi_device_path_show, NULL);
++static DEVICE_ATTR_RO(path);
+ 
+ /* sysfs file that shows description text from the ACPI _STR method */
+ static ssize_t description_show(struct device *dev,
+@@ -446,7 +446,7 @@ static ssize_t description_show(struct device *dev,
+ 		(wchar_t *)acpi_dev->pnp.str_obj->buffer.pointer,
+ 		acpi_dev->pnp.str_obj->buffer.length,
+ 		UTF16_LITTLE_ENDIAN, buf,
+-		PAGE_SIZE);
++		PAGE_SIZE - 1);
+ 
+ 	buf[result++] = '\n';
+ 
+@@ -455,8 +455,8 @@ static ssize_t description_show(struct device *dev,
+ static DEVICE_ATTR_RO(description);
+ 
+ static ssize_t
+-acpi_device_sun_show(struct device *dev, struct device_attribute *attr,
+-		     char *buf) {
++sun_show(struct device *dev, struct device_attribute *attr,
++	 char *buf) {
+ 	struct acpi_device *acpi_dev = to_acpi_device(dev);
+ 	acpi_status status;
+ 	unsigned long long sun;
+@@ -467,11 +467,11 @@ acpi_device_sun_show(struct device *dev, struct device_attribute *attr,
+ 
+ 	return sprintf(buf, "%llu\n", sun);
+ }
+-static DEVICE_ATTR(sun, 0444, acpi_device_sun_show, NULL);
++static DEVICE_ATTR_RO(sun);
+ 
+ static ssize_t
+-acpi_device_hrv_show(struct device *dev, struct device_attribute *attr,
+-		     char *buf) {
++hrv_show(struct device *dev, struct device_attribute *attr,
++	 char *buf) {
+ 	struct acpi_device *acpi_dev = to_acpi_device(dev);
+ 	acpi_status status;
+ 	unsigned long long hrv;
+@@ -482,7 +482,7 @@ acpi_device_hrv_show(struct device *dev, struct device_attribute *attr,
+ 
+ 	return sprintf(buf, "%llu\n", hrv);
+ }
+-static DEVICE_ATTR(hrv, 0444, acpi_device_hrv_show, NULL);
++static DEVICE_ATTR_RO(hrv);
+ 
+ static ssize_t status_show(struct device *dev, struct device_attribute *attr,
+ 				char *buf) {
+diff --git a/drivers/acpi/dock.c b/drivers/acpi/dock.c
+index 24e076f44d238..0937ceab052e8 100644
+--- a/drivers/acpi/dock.c
++++ b/drivers/acpi/dock.c
+@@ -484,7 +484,7 @@ int dock_notify(struct acpi_device *adev, u32 event)
+ /*
+  * show_docked - read method for "docked" file in sysfs
+  */
+-static ssize_t show_docked(struct device *dev,
++static ssize_t docked_show(struct device *dev,
+ 			   struct device_attribute *attr, char *buf)
+ {
+ 	struct dock_station *dock_station = dev->platform_data;
+@@ -493,25 +493,25 @@ static ssize_t show_docked(struct device *dev,
+ 	acpi_bus_get_device(dock_station->handle, &adev);
+ 	return snprintf(buf, PAGE_SIZE, "%u\n", acpi_device_enumerated(adev));
+ }
+-static DEVICE_ATTR(docked, S_IRUGO, show_docked, NULL);
++static DEVICE_ATTR_RO(docked);
+ 
+ /*
+  * show_flags - read method for flags file in sysfs
+  */
+-static ssize_t show_flags(struct device *dev,
++static ssize_t flags_show(struct device *dev,
+ 			  struct device_attribute *attr, char *buf)
+ {
+ 	struct dock_station *dock_station = dev->platform_data;
+ 	return snprintf(buf, PAGE_SIZE, "%d\n", dock_station->flags);
+ 
+ }
+-static DEVICE_ATTR(flags, S_IRUGO, show_flags, NULL);
++static DEVICE_ATTR_RO(flags);
+ 
+ /*
+  * write_undock - write method for "undock" file in sysfs
+  */
+-static ssize_t write_undock(struct device *dev, struct device_attribute *attr,
+-			   const char *buf, size_t count)
++static ssize_t undock_store(struct device *dev, struct device_attribute *attr,
++			    const char *buf, size_t count)
+ {
+ 	int ret;
+ 	struct dock_station *dock_station = dev->platform_data;
+@@ -525,13 +525,13 @@ static ssize_t write_undock(struct device *dev, struct device_attribute *attr,
+ 	acpi_scan_lock_release();
+ 	return ret ? ret: count;
+ }
+-static DEVICE_ATTR(undock, S_IWUSR, NULL, write_undock);
++static DEVICE_ATTR_WO(undock);
+ 
+ /*
+  * show_dock_uid - read method for "uid" file in sysfs
+  */
+-static ssize_t show_dock_uid(struct device *dev,
+-			     struct device_attribute *attr, char *buf)
++static ssize_t uid_show(struct device *dev,
++			struct device_attribute *attr, char *buf)
+ {
+ 	unsigned long long lbuf;
+ 	struct dock_station *dock_station = dev->platform_data;
+@@ -542,10 +542,10 @@ static ssize_t show_dock_uid(struct device *dev,
+ 
+ 	return snprintf(buf, PAGE_SIZE, "%llx\n", lbuf);
+ }
+-static DEVICE_ATTR(uid, S_IRUGO, show_dock_uid, NULL);
++static DEVICE_ATTR_RO(uid);
+ 
+-static ssize_t show_dock_type(struct device *dev,
+-		struct device_attribute *attr, char *buf)
++static ssize_t type_show(struct device *dev,
++			 struct device_attribute *attr, char *buf)
+ {
+ 	struct dock_station *dock_station = dev->platform_data;
+ 	char *type;
+@@ -561,7 +561,7 @@ static ssize_t show_dock_type(struct device *dev,
+ 
+ 	return snprintf(buf, PAGE_SIZE, "%s\n", type);
+ }
+-static DEVICE_ATTR(type, S_IRUGO, show_dock_type, NULL);
++static DEVICE_ATTR_RO(type);
+ 
+ static struct attribute *dock_attributes[] = {
+ 	&dev_attr_docked.attr,
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index e0cb1bcfffb29..be3e0921a6c00 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -183,6 +183,7 @@ static struct workqueue_struct *ec_query_wq;
+ 
+ static int EC_FLAGS_CORRECT_ECDT; /* Needs ECDT port address correction */
+ static int EC_FLAGS_IGNORE_DSDT_GPE; /* Needs ECDT GPE as correction setting */
++static int EC_FLAGS_TRUST_DSDT_GPE; /* Needs DSDT GPE as correction setting */
+ static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
+ 
+ /* --------------------------------------------------------------------------
+@@ -1606,7 +1607,8 @@ static int acpi_ec_add(struct acpi_device *device)
+ 		}
+ 
+ 		if (boot_ec && ec->command_addr == boot_ec->command_addr &&
+-		    ec->data_addr == boot_ec->data_addr) {
++		    ec->data_addr == boot_ec->data_addr &&
++		    !EC_FLAGS_TRUST_DSDT_GPE) {
+ 			/*
+ 			 * Trust PNP0C09 namespace location rather than
+ 			 * ECDT ID. But trust ECDT GPE rather than _GPE
+@@ -1829,6 +1831,18 @@ static int ec_correct_ecdt(const struct dmi_system_id *id)
+ 	return 0;
+ }
+ 
++/*
++ * Some ECDTs contain wrong GPE setting, but they share the same port addresses
++ * with DSDT EC, don't duplicate the DSDT EC with ECDT EC in this case.
++ * https://bugzilla.kernel.org/show_bug.cgi?id=209989
++ */
++static int ec_honor_dsdt_gpe(const struct dmi_system_id *id)
++{
++	pr_debug("Detected system needing DSDT GPE setting.\n");
++	EC_FLAGS_TRUST_DSDT_GPE = 1;
++	return 0;
++}
++
+ /*
+  * Some DSDTs contain wrong GPE setting.
+  * Asus FX502VD/VE, GL702VMK, X550VXK, X580VD
+@@ -1859,6 +1873,22 @@ static const struct dmi_system_id ec_dmi_table[] __initconst = {
+ 	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 	DMI_MATCH(DMI_PRODUCT_NAME, "GL702VMK"),}, NULL},
+ 	{
++	ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X505BA", {
++	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++	DMI_MATCH(DMI_PRODUCT_NAME, "X505BA"),}, NULL},
++	{
++	ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X505BP", {
++	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++	DMI_MATCH(DMI_PRODUCT_NAME, "X505BP"),}, NULL},
++	{
++	ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X542BA", {
++	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++	DMI_MATCH(DMI_PRODUCT_NAME, "X542BA"),}, NULL},
++	{
++	ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X542BP", {
++	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++	DMI_MATCH(DMI_PRODUCT_NAME, "X542BP"),}, NULL},
++	{
+ 	ec_honor_ecdt_gpe, "ASUS X550VXK", {
+ 	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 	DMI_MATCH(DMI_PRODUCT_NAME, "X550VXK"),}, NULL},
+@@ -1867,6 +1897,11 @@ static const struct dmi_system_id ec_dmi_table[] __initconst = {
+ 	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 	DMI_MATCH(DMI_PRODUCT_NAME, "X580VD"),}, NULL},
+ 	{
++	/* https://bugzilla.kernel.org/show_bug.cgi?id=209989 */
++	ec_honor_dsdt_gpe, "HP Pavilion Gaming Laptop 15-cx0xxx", {
++	DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++	DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion Gaming Laptop 15-cx0xxx"),}, NULL},
++	{
+ 	ec_clear_on_resume, "Samsung hardware", {
+ 	DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD.")}, NULL},
+ 	{},
+diff --git a/drivers/acpi/fan.c b/drivers/acpi/fan.c
+index 66c3983f0ccca..5cd0ceb50bc8a 100644
+--- a/drivers/acpi/fan.c
++++ b/drivers/acpi/fan.c
+@@ -16,6 +16,8 @@
+ #include <linux/platform_device.h>
+ #include <linux/sort.h>
+ 
++#include "fan.h"
++
+ MODULE_AUTHOR("Paul Diefenbaugh");
+ MODULE_DESCRIPTION("ACPI Fan Driver");
+ MODULE_LICENSE("GPL");
+@@ -24,10 +26,7 @@ static int acpi_fan_probe(struct platform_device *pdev);
+ static int acpi_fan_remove(struct platform_device *pdev);
+ 
+ static const struct acpi_device_id fan_device_ids[] = {
+-	{"PNP0C0B", 0},
+-	{"INT3404", 0},
+-	{"INTC1044", 0},
+-	{"INTC1048", 0},
++	ACPI_FAN_DEVICE_IDS,
+ 	{"", 0},
+ };
+ MODULE_DEVICE_TABLE(acpi, fan_device_ids);
+diff --git a/drivers/acpi/fan.h b/drivers/acpi/fan.h
+new file mode 100644
+index 0000000000000..dc9a6efa514b0
+--- /dev/null
++++ b/drivers/acpi/fan.h
+@@ -0,0 +1,13 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++/*
++ * ACPI fan device IDs are shared between the fan driver and the device power
++ * management code.
++ *
++ * Add new device IDs before the generic ACPI fan one.
++ */
++#define ACPI_FAN_DEVICE_IDS	\
++	{"INT3404", }, /* Fan */ \
++	{"INTC1044", }, /* Fan for Tiger Lake generation */ \
++	{"INTC1048", }, /* Fan for Alder Lake generation */ \
++	{"PNP0C0B", } /* Generic ACPI fan */
+diff --git a/drivers/acpi/power.c b/drivers/acpi/power.c
+index 8048da85b7e07..61115ed8b93fb 100644
+--- a/drivers/acpi/power.c
++++ b/drivers/acpi/power.c
+@@ -886,15 +886,16 @@ static void acpi_release_power_resource(struct device *dev)
+ 	kfree(resource);
+ }
+ 
+-static ssize_t acpi_power_in_use_show(struct device *dev,
+-				      struct device_attribute *attr,
+-				      char *buf) {
++static ssize_t resource_in_use_show(struct device *dev,
++				    struct device_attribute *attr,
++				    char *buf)
++{
+ 	struct acpi_power_resource *resource;
+ 
+ 	resource = to_power_resource(to_acpi_device(dev));
+ 	return sprintf(buf, "%u\n", !!resource->ref_count);
+ }
+-static DEVICE_ATTR(resource_in_use, 0444, acpi_power_in_use_show, NULL);
++static DEVICE_ATTR_RO(resource_in_use);
+ 
+ static void acpi_power_sysfs_remove(struct acpi_device *device)
+ {
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index fb161a21d0aec..8377c3ed10ffa 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -16,6 +16,7 @@
+ #include <linux/acpi.h>
+ #include <linux/dmi.h>
+ #include <linux/sched.h>       /* need_resched() */
++#include <linux/sort.h>
+ #include <linux/tick.h>
+ #include <linux/cpuidle.h>
+ #include <linux/cpu.h>
+@@ -389,10 +390,37 @@ static void acpi_processor_power_verify_c3(struct acpi_processor *pr,
+ 	return;
+ }
+ 
++static int acpi_cst_latency_cmp(const void *a, const void *b)
++{
++	const struct acpi_processor_cx *x = a, *y = b;
++
++	if (!(x->valid && y->valid))
++		return 0;
++	if (x->latency > y->latency)
++		return 1;
++	if (x->latency < y->latency)
++		return -1;
++	return 0;
++}
++static void acpi_cst_latency_swap(void *a, void *b, int n)
++{
++	struct acpi_processor_cx *x = a, *y = b;
++	u32 tmp;
++
++	if (!(x->valid && y->valid))
++		return;
++	tmp = x->latency;
++	x->latency = y->latency;
++	y->latency = tmp;
++}
++
+ static int acpi_processor_power_verify(struct acpi_processor *pr)
+ {
+ 	unsigned int i;
+ 	unsigned int working = 0;
++	unsigned int last_latency = 0;
++	unsigned int last_type = 0;
++	bool buggy_latency = false;
+ 
+ 	pr->power.timer_broadcast_on_state = INT_MAX;
+ 
+@@ -416,12 +444,24 @@ static int acpi_processor_power_verify(struct acpi_processor *pr)
+ 		}
+ 		if (!cx->valid)
+ 			continue;
++		if (cx->type >= last_type && cx->latency < last_latency)
++			buggy_latency = true;
++		last_latency = cx->latency;
++		last_type = cx->type;
+ 
+ 		lapic_timer_check_state(i, pr, cx);
+ 		tsc_check_state(cx->type);
+ 		working++;
+ 	}
+ 
++	if (buggy_latency) {
++		pr_notice("FW issue: working around C-state latencies out of order\n");
++		sort(&pr->power.states[1], max_cstate,
++		     sizeof(struct acpi_processor_cx),
++		     acpi_cst_latency_cmp,
++		     acpi_cst_latency_swap);
++	}
++
+ 	lapic_timer_propagate_broadcast(pr);
+ 
+ 	return (working);
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index f2f5f1dc7c61d..9d82440a1d75b 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -430,6 +430,13 @@ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
+ 	}
+ }
+ 
++static bool irq_is_legacy(struct acpi_resource_irq *irq)
++{
++	return irq->triggering == ACPI_EDGE_SENSITIVE &&
++		irq->polarity == ACPI_ACTIVE_HIGH &&
++		irq->shareable == ACPI_EXCLUSIVE;
++}
++
+ /**
+  * acpi_dev_resource_interrupt - Extract ACPI interrupt resource information.
+  * @ares: Input ACPI resource object.
+@@ -468,7 +475,7 @@ bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
+ 		}
+ 		acpi_dev_get_irqresource(res, irq->interrupts[index],
+ 					 irq->triggering, irq->polarity,
+-					 irq->shareable, true);
++					 irq->shareable, irq_is_legacy(irq));
+ 		break;
+ 	case ACPI_RESOURCE_TYPE_EXTENDED_IRQ:
+ 		ext_irq = &ares->data.extended_irq;
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 83cd4c95faf0d..33474fd969913 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -385,6 +385,30 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_BOARD_NAME, "BA51_MV"),
+ 		},
+ 	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "ASUSTeK COMPUTER INC. GA401",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "GA401"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "ASUSTeK COMPUTER INC. GA502",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "GA502"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "ASUSTeK COMPUTER INC. GA503",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "GA503"),
++		},
++	},
+ 
+ 	/*
+ 	 * Desktops which falsely report a backlight and which our heuristics
+diff --git a/drivers/ata/pata_ep93xx.c b/drivers/ata/pata_ep93xx.c
+index badab67088935..46208ececbb6a 100644
+--- a/drivers/ata/pata_ep93xx.c
++++ b/drivers/ata/pata_ep93xx.c
+@@ -928,7 +928,7 @@ static int ep93xx_pata_probe(struct platform_device *pdev)
+ 	/* INT[3] (IRQ_EP93XX_EXT3) line connected as pull down */
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0) {
+-		err = -ENXIO;
++		err = irq;
+ 		goto err_rel_gpio;
+ 	}
+ 
+diff --git a/drivers/ata/pata_octeon_cf.c b/drivers/ata/pata_octeon_cf.c
+index bd87476ab4813..b5a3f710d76de 100644
+--- a/drivers/ata/pata_octeon_cf.c
++++ b/drivers/ata/pata_octeon_cf.c
+@@ -898,10 +898,11 @@ static int octeon_cf_probe(struct platform_device *pdev)
+ 					return -EINVAL;
+ 				}
+ 
+-				irq_handler = octeon_cf_interrupt;
+ 				i = platform_get_irq(dma_dev, 0);
+-				if (i > 0)
++				if (i > 0) {
+ 					irq = i;
++					irq_handler = octeon_cf_interrupt;
++				}
+ 			}
+ 			of_node_put(dma_node);
+ 		}
+diff --git a/drivers/ata/pata_rb532_cf.c b/drivers/ata/pata_rb532_cf.c
+index 479c4b29b8562..303f8c375b3af 100644
+--- a/drivers/ata/pata_rb532_cf.c
++++ b/drivers/ata/pata_rb532_cf.c
+@@ -115,10 +115,12 @@ static int rb532_pata_driver_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0) {
++	if (irq < 0) {
+ 		dev_err(&pdev->dev, "no IRQ resource found\n");
+-		return -ENOENT;
++		return irq;
+ 	}
++	if (!irq)
++		return -EINVAL;
+ 
+ 	gpiod = devm_gpiod_get(&pdev->dev, NULL, GPIOD_IN);
+ 	if (IS_ERR(gpiod)) {
+diff --git a/drivers/ata/sata_highbank.c b/drivers/ata/sata_highbank.c
+index 64b2ef15ec191..8440203e835ed 100644
+--- a/drivers/ata/sata_highbank.c
++++ b/drivers/ata/sata_highbank.c
+@@ -469,10 +469,12 @@ static int ahci_highbank_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0) {
++	if (irq < 0) {
+ 		dev_err(dev, "no irq\n");
+-		return -EINVAL;
++		return irq;
+ 	}
++	if (!irq)
++		return -EINVAL;
+ 
+ 	hpriv = devm_kzalloc(dev, sizeof(*hpriv), GFP_KERNEL);
+ 	if (!hpriv) {
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index a58084c2ed7ce..06d44ae9701f1 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1161,6 +1161,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
+ 	blk_queue_physical_block_size(lo->lo_queue, bsize);
+ 	blk_queue_io_min(lo->lo_queue, bsize);
+ 
++	loop_config_discard(lo);
+ 	loop_update_rotational(lo);
+ 	loop_update_dio(lo);
+ 	loop_sysfs_init(lo);
+diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
+index ce9dcffdc5bfd..7551cac3fd7a9 100644
+--- a/drivers/bluetooth/btqca.c
++++ b/drivers/bluetooth/btqca.c
+@@ -143,7 +143,7 @@ int qca_send_pre_shutdown_cmd(struct hci_dev *hdev)
+ EXPORT_SYMBOL_GPL(qca_send_pre_shutdown_cmd);
+ 
+ static void qca_tlv_check_data(struct qca_fw_config *config,
+-		const struct firmware *fw, enum qca_btsoc_type soc_type)
++		u8 *fw_data, enum qca_btsoc_type soc_type)
+ {
+ 	const u8 *data;
+ 	u32 type_len;
+@@ -154,7 +154,7 @@ static void qca_tlv_check_data(struct qca_fw_config *config,
+ 	struct tlv_type_nvm *tlv_nvm;
+ 	uint8_t nvm_baud_rate = config->user_baud_rate;
+ 
+-	tlv = (struct tlv_type_hdr *)fw->data;
++	tlv = (struct tlv_type_hdr *)fw_data;
+ 
+ 	type_len = le32_to_cpu(tlv->type_len);
+ 	length = (type_len >> 8) & 0x00ffffff;
+@@ -350,8 +350,9 @@ static int qca_download_firmware(struct hci_dev *hdev,
+ 				 enum qca_btsoc_type soc_type)
+ {
+ 	const struct firmware *fw;
++	u8 *data;
+ 	const u8 *segment;
+-	int ret, remain, i = 0;
++	int ret, size, remain, i = 0;
+ 
+ 	bt_dev_info(hdev, "QCA Downloading %s", config->fwname);
+ 
+@@ -362,10 +363,22 @@ static int qca_download_firmware(struct hci_dev *hdev,
+ 		return ret;
+ 	}
+ 
+-	qca_tlv_check_data(config, fw, soc_type);
++	size = fw->size;
++	data = vmalloc(fw->size);
++	if (!data) {
++		bt_dev_err(hdev, "QCA Failed to allocate memory for file: %s",
++			   config->fwname);
++		release_firmware(fw);
++		return -ENOMEM;
++	}
++
++	memcpy(data, fw->data, size);
++	release_firmware(fw);
++
++	qca_tlv_check_data(config, data, soc_type);
+ 
+-	segment = fw->data;
+-	remain = fw->size;
++	segment = data;
++	remain = size;
+ 	while (remain > 0) {
+ 		int segsize = min(MAX_SIZE_PER_TLV_SEGMENT, remain);
+ 
+@@ -395,7 +408,7 @@ static int qca_download_firmware(struct hci_dev *hdev,
+ 		ret = qca_inject_cmd_complete_event(hdev);
+ 
+ out:
+-	release_firmware(fw);
++	vfree(data);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index ad47ff0d55c2e..4184faef9f169 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -1809,8 +1809,6 @@ static void qca_power_shutdown(struct hci_uart *hu)
+ 	unsigned long flags;
+ 	enum qca_btsoc_type soc_type = qca_soc_type(hu);
+ 
+-	qcadev = serdev_device_get_drvdata(hu->serdev);
+-
+ 	/* From this point we go into power off state. But serial port is
+ 	 * still open, stop queueing the IBS data and flush all the buffered
+ 	 * data in skb's.
+@@ -1826,6 +1824,8 @@ static void qca_power_shutdown(struct hci_uart *hu)
+ 	if (!hu->serdev)
+ 		return;
+ 
++	qcadev = serdev_device_get_drvdata(hu->serdev);
++
+ 	if (qca_is_wcn399x(soc_type)) {
+ 		host_set_baudrate(hu, 2400);
+ 		qca_send_power_pulse(hu, false);
+diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
+index aeb895c084607..044dcdd723a70 100644
+--- a/drivers/bus/mhi/core/pm.c
++++ b/drivers/bus/mhi/core/pm.c
+@@ -809,6 +809,7 @@ int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
+ 
+ 	ret = wait_event_timeout(mhi_cntrl->state_event,
+ 				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
++				 mhi_cntrl->dev_state == MHI_STATE_M2 ||
+ 				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+ 				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+ 
+diff --git a/drivers/char/hw_random/exynos-trng.c b/drivers/char/hw_random/exynos-trng.c
+index 8e1fe3f8dd2df..c8db62bc5ff72 100644
+--- a/drivers/char/hw_random/exynos-trng.c
++++ b/drivers/char/hw_random/exynos-trng.c
+@@ -132,7 +132,7 @@ static int exynos_trng_probe(struct platform_device *pdev)
+ 		return PTR_ERR(trng->mem);
+ 
+ 	pm_runtime_enable(&pdev->dev);
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "Could not get runtime PM.\n");
+ 		goto err_pm_get;
+@@ -165,7 +165,7 @@ err_register:
+ 	clk_disable_unprepare(trng->clk);
+ 
+ err_clock:
+-	pm_runtime_put_sync(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
+ 
+ err_pm_get:
+ 	pm_runtime_disable(&pdev->dev);
+diff --git a/drivers/char/pcmcia/cm4000_cs.c b/drivers/char/pcmcia/cm4000_cs.c
+index 89681f07bc787..9468e9520cee0 100644
+--- a/drivers/char/pcmcia/cm4000_cs.c
++++ b/drivers/char/pcmcia/cm4000_cs.c
+@@ -544,6 +544,10 @@ static int set_protocol(struct cm4000_dev *dev, struct ptsreq *ptsreq)
+ 		io_read_num_rec_bytes(iobase, &num_bytes_read);
+ 		if (num_bytes_read >= 4) {
+ 			DEBUGP(2, dev, "NumRecBytes = %i\n", num_bytes_read);
++			if (num_bytes_read > 4) {
++				rc = -EIO;
++				goto exit_setprotocol;
++			}
+ 			break;
+ 		}
+ 		usleep_range(10000, 11000);
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 55b9d3965ae1b..69579efb247b3 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -196,13 +196,24 @@ static u8 tpm_tis_status(struct tpm_chip *chip)
+ 		return 0;
+ 
+ 	if (unlikely((status & TPM_STS_READ_ZERO) != 0)) {
+-		/*
+-		 * If this trips, the chances are the read is
+-		 * returning 0xff because the locality hasn't been
+-		 * acquired.  Usually because tpm_try_get_ops() hasn't
+-		 * been called before doing a TPM operation.
+-		 */
+-		WARN_ONCE(1, "TPM returned invalid status\n");
++		if  (!test_and_set_bit(TPM_TIS_INVALID_STATUS, &priv->flags)) {
++			/*
++			 * If this trips, the chances are the read is
++			 * returning 0xff because the locality hasn't been
++			 * acquired.  Usually because tpm_try_get_ops() hasn't
++			 * been called before doing a TPM operation.
++			 */
++			dev_err(&chip->dev, "invalid TPM_STS.x 0x%02x, dumping stack for forensics\n",
++				status);
++
++			/*
++			 * Dump stack for forensics, as invalid TPM_STS.x could be
++			 * potentially triggered by impaired tpm_try_get_ops() or
++			 * tpm_find_get_ops().
++			 */
++			dump_stack();
++		}
++
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index 9b2d32a59f670..b2a3c6c72882d 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -83,6 +83,7 @@ enum tis_defaults {
+ 
+ enum tpm_tis_flags {
+ 	TPM_TIS_ITPM_WORKAROUND		= BIT(0),
++	TPM_TIS_INVALID_STATUS		= BIT(1),
+ };
+ 
+ struct tpm_tis_data {
+@@ -90,7 +91,7 @@ struct tpm_tis_data {
+ 	int locality;
+ 	int irq;
+ 	bool irq_tested;
+-	unsigned int flags;
++	unsigned long flags;
+ 	void __iomem *ilb_base_addr;
+ 	u16 clkrun_enabled;
+ 	wait_queue_head_t int_queue;
+diff --git a/drivers/char/tpm/tpm_tis_spi_main.c b/drivers/char/tpm/tpm_tis_spi_main.c
+index 3856f6ebcb34f..de4209003a448 100644
+--- a/drivers/char/tpm/tpm_tis_spi_main.c
++++ b/drivers/char/tpm/tpm_tis_spi_main.c
+@@ -260,6 +260,8 @@ static int tpm_tis_spi_remove(struct spi_device *dev)
+ }
+ 
+ static const struct spi_device_id tpm_tis_spi_id[] = {
++	{ "st33htpm-spi", (unsigned long)tpm_tis_spi_probe },
++	{ "slb9670", (unsigned long)tpm_tis_spi_probe },
+ 	{ "tpm_tis_spi", (unsigned long)tpm_tis_spi_probe },
+ 	{ "cr50", (unsigned long)cr50_spi_probe },
+ 	{}
+diff --git a/drivers/clk/actions/owl-s500.c b/drivers/clk/actions/owl-s500.c
+index 61bb224f63309..cbeb51c804eb5 100644
+--- a/drivers/clk/actions/owl-s500.c
++++ b/drivers/clk/actions/owl-s500.c
+@@ -127,8 +127,7 @@ static struct clk_factor_table sd_factor_table[] = {
+ 	{ 12, 1, 13 }, { 13, 1, 14 }, { 14, 1, 15 }, { 15, 1, 16 },
+ 	{ 16, 1, 17 }, { 17, 1, 18 }, { 18, 1, 19 }, { 19, 1, 20 },
+ 	{ 20, 1, 21 }, { 21, 1, 22 }, { 22, 1, 23 }, { 23, 1, 24 },
+-	{ 24, 1, 25 }, { 25, 1, 26 }, { 26, 1, 27 }, { 27, 1, 28 },
+-	{ 28, 1, 29 }, { 29, 1, 30 }, { 30, 1, 31 }, { 31, 1, 32 },
++	{ 24, 1, 25 },
+ 
+ 	/* bit8: /128 */
+ 	{ 256, 1, 1 * 128 }, { 257, 1, 2 * 128 }, { 258, 1, 3 * 128 }, { 259, 1, 4 * 128 },
+@@ -137,19 +136,20 @@ static struct clk_factor_table sd_factor_table[] = {
+ 	{ 268, 1, 13 * 128 }, { 269, 1, 14 * 128 }, { 270, 1, 15 * 128 }, { 271, 1, 16 * 128 },
+ 	{ 272, 1, 17 * 128 }, { 273, 1, 18 * 128 }, { 274, 1, 19 * 128 }, { 275, 1, 20 * 128 },
+ 	{ 276, 1, 21 * 128 }, { 277, 1, 22 * 128 }, { 278, 1, 23 * 128 }, { 279, 1, 24 * 128 },
+-	{ 280, 1, 25 * 128 }, { 281, 1, 26 * 128 }, { 282, 1, 27 * 128 }, { 283, 1, 28 * 128 },
+-	{ 284, 1, 29 * 128 }, { 285, 1, 30 * 128 }, { 286, 1, 31 * 128 }, { 287, 1, 32 * 128 },
++	{ 280, 1, 25 * 128 },
+ 	{ 0, 0, 0 },
+ };
+ 
+-static struct clk_factor_table bisp_factor_table[] = {
+-	{ 0, 1, 1 }, { 1, 1, 2 }, { 2, 1, 3 }, { 3, 1, 4 },
+-	{ 4, 1, 5 }, { 5, 1, 6 }, { 6, 1, 7 }, { 7, 1, 8 },
++static struct clk_factor_table de_factor_table[] = {
++	{ 0, 1, 1 }, { 1, 2, 3 }, { 2, 1, 2 }, { 3, 2, 5 },
++	{ 4, 1, 3 }, { 5, 1, 4 }, { 6, 1, 6 }, { 7, 1, 8 },
++	{ 8, 1, 12 },
+ 	{ 0, 0, 0 },
+ };
+ 
+-static struct clk_factor_table ahb_factor_table[] = {
+-	{ 1, 1, 2 }, { 2, 1, 3 },
++static struct clk_factor_table hde_factor_table[] = {
++	{ 0, 1, 1 }, { 1, 2, 3 }, { 2, 1, 2 }, { 3, 2, 5 },
++	{ 4, 1, 3 }, { 5, 1, 4 }, { 6, 1, 6 }, { 7, 1, 8 },
+ 	{ 0, 0, 0 },
+ };
+ 
+@@ -158,6 +158,13 @@ static struct clk_div_table rmii_ref_div_table[] = {
+ 	{ 0, 0 },
+ };
+ 
++static struct clk_div_table std12rate_div_table[] = {
++	{ 0, 1 }, { 1, 2 }, { 2, 3 }, { 3, 4 },
++	{ 4, 5 }, { 5, 6 }, { 6, 7 }, { 7, 8 },
++	{ 8, 9 }, { 9, 10 }, { 10, 11 }, { 11, 12 },
++	{ 0, 0 },
++};
++
+ static struct clk_div_table i2s_div_table[] = {
+ 	{ 0, 1 }, { 1, 2 }, { 2, 3 }, { 3, 4 },
+ 	{ 4, 6 }, { 5, 8 }, { 6, 12 }, { 7, 16 },
+@@ -174,7 +181,6 @@ static struct clk_div_table nand_div_table[] = {
+ 
+ /* mux clock */
+ static OWL_MUX(dev_clk, "dev_clk", dev_clk_mux_p, CMU_DEVPLL, 12, 1, CLK_SET_RATE_PARENT);
+-static OWL_MUX(ahbprediv_clk, "ahbprediv_clk", ahbprediv_clk_mux_p, CMU_BUSCLK1, 8, 3, CLK_SET_RATE_PARENT);
+ 
+ /* gate clocks */
+ static OWL_GATE(gpio_clk, "gpio_clk", "apb_clk", CMU_DEVCLKEN0, 18, 0, 0);
+@@ -187,45 +193,54 @@ static OWL_GATE(timer_clk, "timer_clk", "hosc", CMU_DEVCLKEN1, 27, 0, 0);
+ static OWL_GATE(hdmi_clk, "hdmi_clk", "hosc", CMU_DEVCLKEN1, 3, 0, 0);
+ 
+ /* divider clocks */
+-static OWL_DIVIDER(h_clk, "h_clk", "ahbprediv_clk", CMU_BUSCLK1, 12, 2, NULL, 0, 0);
++static OWL_DIVIDER(h_clk, "h_clk", "ahbprediv_clk", CMU_BUSCLK1, 2, 2, NULL, 0, 0);
+ static OWL_DIVIDER(apb_clk, "apb_clk", "ahb_clk", CMU_BUSCLK1, 14, 2, NULL, 0, 0);
+ static OWL_DIVIDER(rmii_ref_clk, "rmii_ref_clk", "ethernet_pll_clk", CMU_ETHERNETPLL, 1, 1, rmii_ref_div_table, 0, 0);
+ 
+ /* factor clocks */
+-static OWL_FACTOR(ahb_clk, "ahb_clk", "h_clk", CMU_BUSCLK1, 2, 2, ahb_factor_table, 0, 0);
+-static OWL_FACTOR(de1_clk, "de_clk1", "de_clk", CMU_DECLK, 0, 3, bisp_factor_table, 0, 0);
+-static OWL_FACTOR(de2_clk, "de_clk2", "de_clk", CMU_DECLK, 4, 3, bisp_factor_table, 0, 0);
++static OWL_FACTOR(de1_clk, "de_clk1", "de_clk", CMU_DECLK, 0, 4, de_factor_table, 0, 0);
++static OWL_FACTOR(de2_clk, "de_clk2", "de_clk", CMU_DECLK, 4, 4, de_factor_table, 0, 0);
+ 
+ /* composite clocks */
++static OWL_COMP_DIV(ahbprediv_clk, "ahbprediv_clk", ahbprediv_clk_mux_p,
++			OWL_MUX_HW(CMU_BUSCLK1, 8, 3),
++			{ 0 },
++			OWL_DIVIDER_HW(CMU_BUSCLK1, 12, 2, 0, NULL),
++			CLK_SET_RATE_PARENT);
++
++static OWL_COMP_FIXED_FACTOR(ahb_clk, "ahb_clk", "h_clk",
++			{ 0 },
++			1, 1, 0);
++
+ static OWL_COMP_FACTOR(vce_clk, "vce_clk", hde_clk_mux_p,
+ 			OWL_MUX_HW(CMU_VCECLK, 4, 2),
+ 			OWL_GATE_HW(CMU_DEVCLKEN0, 26, 0),
+-			OWL_FACTOR_HW(CMU_VCECLK, 0, 3, 0, bisp_factor_table),
++			OWL_FACTOR_HW(CMU_VCECLK, 0, 3, 0, hde_factor_table),
+ 			0);
+ 
+ static OWL_COMP_FACTOR(vde_clk, "vde_clk", hde_clk_mux_p,
+ 			OWL_MUX_HW(CMU_VDECLK, 4, 2),
+ 			OWL_GATE_HW(CMU_DEVCLKEN0, 25, 0),
+-			OWL_FACTOR_HW(CMU_VDECLK, 0, 3, 0, bisp_factor_table),
++			OWL_FACTOR_HW(CMU_VDECLK, 0, 3, 0, hde_factor_table),
+ 			0);
+ 
+-static OWL_COMP_FACTOR(bisp_clk, "bisp_clk", bisp_clk_mux_p,
++static OWL_COMP_DIV(bisp_clk, "bisp_clk", bisp_clk_mux_p,
+ 			OWL_MUX_HW(CMU_BISPCLK, 4, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN0, 14, 0),
+-			OWL_FACTOR_HW(CMU_BISPCLK, 0, 3, 0, bisp_factor_table),
++			OWL_DIVIDER_HW(CMU_BISPCLK, 0, 4, 0, std12rate_div_table),
+ 			0);
+ 
+-static OWL_COMP_FACTOR(sensor0_clk, "sensor0_clk", sensor_clk_mux_p,
++static OWL_COMP_DIV(sensor0_clk, "sensor0_clk", sensor_clk_mux_p,
+ 			OWL_MUX_HW(CMU_SENSORCLK, 4, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN0, 14, 0),
+-			OWL_FACTOR_HW(CMU_SENSORCLK, 0, 3, 0, bisp_factor_table),
+-			CLK_IGNORE_UNUSED);
++			OWL_DIVIDER_HW(CMU_SENSORCLK, 0, 4, 0, std12rate_div_table),
++			0);
+ 
+-static OWL_COMP_FACTOR(sensor1_clk, "sensor1_clk", sensor_clk_mux_p,
++static OWL_COMP_DIV(sensor1_clk, "sensor1_clk", sensor_clk_mux_p,
+ 			OWL_MUX_HW(CMU_SENSORCLK, 4, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN0, 14, 0),
+-			OWL_FACTOR_HW(CMU_SENSORCLK, 8, 3, 0, bisp_factor_table),
+-			CLK_IGNORE_UNUSED);
++			OWL_DIVIDER_HW(CMU_SENSORCLK, 8, 4, 0, std12rate_div_table),
++			0);
+ 
+ static OWL_COMP_FACTOR(sd0_clk, "sd0_clk", sd_clk_mux_p,
+ 			OWL_MUX_HW(CMU_SD0CLK, 9, 1),
+@@ -305,7 +320,7 @@ static OWL_COMP_FIXED_FACTOR(i2c3_clk, "i2c3_clk", "ethernet_pll_clk",
+ static OWL_COMP_DIV(uart0_clk, "uart0_clk", uart_clk_mux_p,
+ 			OWL_MUX_HW(CMU_UART0CLK, 16, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN1, 6, 0),
+-			OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
++			OWL_DIVIDER_HW(CMU_UART0CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
+ 			CLK_IGNORE_UNUSED);
+ 
+ static OWL_COMP_DIV(uart1_clk, "uart1_clk", uart_clk_mux_p,
+@@ -317,31 +332,31 @@ static OWL_COMP_DIV(uart1_clk, "uart1_clk", uart_clk_mux_p,
+ static OWL_COMP_DIV(uart2_clk, "uart2_clk", uart_clk_mux_p,
+ 			OWL_MUX_HW(CMU_UART2CLK, 16, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN1, 8, 0),
+-			OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
++			OWL_DIVIDER_HW(CMU_UART2CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
+ 			CLK_IGNORE_UNUSED);
+ 
+ static OWL_COMP_DIV(uart3_clk, "uart3_clk", uart_clk_mux_p,
+ 			OWL_MUX_HW(CMU_UART3CLK, 16, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN1, 19, 0),
+-			OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
++			OWL_DIVIDER_HW(CMU_UART3CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
+ 			CLK_IGNORE_UNUSED);
+ 
+ static OWL_COMP_DIV(uart4_clk, "uart4_clk", uart_clk_mux_p,
+ 			OWL_MUX_HW(CMU_UART4CLK, 16, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN1, 20, 0),
+-			OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
++			OWL_DIVIDER_HW(CMU_UART4CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
+ 			CLK_IGNORE_UNUSED);
+ 
+ static OWL_COMP_DIV(uart5_clk, "uart5_clk", uart_clk_mux_p,
+ 			OWL_MUX_HW(CMU_UART5CLK, 16, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN1, 21, 0),
+-			OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
++			OWL_DIVIDER_HW(CMU_UART5CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
+ 			CLK_IGNORE_UNUSED);
+ 
+ static OWL_COMP_DIV(uart6_clk, "uart6_clk", uart_clk_mux_p,
+ 			OWL_MUX_HW(CMU_UART6CLK, 16, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN1, 18, 0),
+-			OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
++			OWL_DIVIDER_HW(CMU_UART6CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
+ 			CLK_IGNORE_UNUSED);
+ 
+ static OWL_COMP_DIV(i2srx_clk, "i2srx_clk", i2s_clk_mux_p,
+diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
+index e0446e66fa645..eb22f4fdbc6b4 100644
+--- a/drivers/clk/clk-si5341.c
++++ b/drivers/clk/clk-si5341.c
+@@ -92,12 +92,22 @@ struct clk_si5341_output_config {
+ #define SI5341_PN_BASE		0x0002
+ #define SI5341_DEVICE_REV	0x0005
+ #define SI5341_STATUS		0x000C
++#define SI5341_LOS		0x000D
++#define SI5341_STATUS_STICKY	0x0011
++#define SI5341_LOS_STICKY	0x0012
+ #define SI5341_SOFT_RST		0x001C
+ #define SI5341_IN_SEL		0x0021
++#define SI5341_DEVICE_READY	0x00FE
+ #define SI5341_XAXB_CFG		0x090E
+ #define SI5341_IN_EN		0x0949
+ #define SI5341_INX_TO_PFD_EN	0x094A
+ 
++/* Status bits */
++#define SI5341_STATUS_SYSINCAL	BIT(0)
++#define SI5341_STATUS_LOSXAXB	BIT(1)
++#define SI5341_STATUS_LOSREF	BIT(2)
++#define SI5341_STATUS_LOL	BIT(3)
++
+ /* Input selection */
+ #define SI5341_IN_SEL_MASK	0x06
+ #define SI5341_IN_SEL_SHIFT	1
+@@ -340,6 +350,8 @@ static const struct si5341_reg_default si5341_reg_defaults[] = {
+ 	{ 0x094A, 0x00 }, /* INx_TO_PFD_EN (disabled) */
+ 	{ 0x0A02, 0x00 }, /* Not in datasheet */
+ 	{ 0x0B44, 0x0F }, /* PDIV_ENB (datasheet does not mention what it is) */
++	{ 0x0B57, 0x10 }, /* VCO_RESET_CALCODE (not described in datasheet) */
++	{ 0x0B58, 0x05 }, /* VCO_RESET_CALCODE (not described in datasheet) */
+ };
+ 
+ /* Read and interpret a 44-bit followed by a 32-bit value in the regmap */
+@@ -623,6 +635,9 @@ static unsigned long si5341_synth_clk_recalc_rate(struct clk_hw *hw,
+ 			SI5341_SYNTH_N_NUM(synth->index), &n_num, &n_den);
+ 	if (err < 0)
+ 		return err;
++	/* Check for bogus/uninitialized settings */
++	if (!n_num || !n_den)
++		return 0;
+ 
+ 	/*
+ 	 * n_num and n_den are shifted left as much as possible, so to prevent
+@@ -806,6 +821,9 @@ static long si5341_output_clk_round_rate(struct clk_hw *hw, unsigned long rate,
+ {
+ 	unsigned long r;
+ 
++	if (!rate)
++		return 0;
++
+ 	r = *parent_rate >> 1;
+ 
+ 	/* If rate is an even divisor, no changes to parent required */
+@@ -834,11 +852,16 @@ static int si5341_output_clk_set_rate(struct clk_hw *hw, unsigned long rate,
+ 		unsigned long parent_rate)
+ {
+ 	struct clk_si5341_output *output = to_clk_si5341_output(hw);
+-	/* Frequency divider is (r_div + 1) * 2 */
+-	u32 r_div = (parent_rate / rate) >> 1;
++	u32 r_div;
+ 	int err;
+ 	u8 r[3];
+ 
++	if (!rate)
++		return -EINVAL;
++
++	/* Frequency divider is (r_div + 1) * 2 */
++	r_div = (parent_rate / rate) >> 1;
++
+ 	if (r_div <= 1)
+ 		r_div = 0;
+ 	else if (r_div >= BIT(24))
+@@ -1083,7 +1106,7 @@ static const struct si5341_reg_default si5341_preamble[] = {
+ 	{ 0x0B25, 0x00 },
+ 	{ 0x0502, 0x01 },
+ 	{ 0x0505, 0x03 },
+-	{ 0x0957, 0x1F },
++	{ 0x0957, 0x17 },
+ 	{ 0x0B4E, 0x1A },
+ };
+ 
+@@ -1189,6 +1212,32 @@ static const struct regmap_range_cfg si5341_regmap_ranges[] = {
+ 	},
+ };
+ 
++static int si5341_wait_device_ready(struct i2c_client *client)
++{
++	int count;
++
++	/* Datasheet warns: Any attempt to read or write any register other
++	 * than DEVICE_READY before DEVICE_READY reads as 0x0F may corrupt the
++	 * NVM programming and may corrupt the register contents, as they are
++	 * read from NVM. Note that this includes accesses to the PAGE register.
++	 * Also: DEVICE_READY is available on every register page, so no page
++	 * change is needed to read it.
++	 * Do this outside regmap to avoid automatic PAGE register access.
++	 * May take up to 300ms to complete.
++	 */
++	for (count = 0; count < 15; ++count) {
++		s32 result = i2c_smbus_read_byte_data(client,
++						      SI5341_DEVICE_READY);
++		if (result < 0)
++			return result;
++		if (result == 0x0F)
++			return 0;
++		msleep(20);
++	}
++	dev_err(&client->dev, "timeout waiting for DEVICE_READY\n");
++	return -EIO;
++}
++
+ static const struct regmap_config si5341_regmap_config = {
+ 	.reg_bits = 8,
+ 	.val_bits = 8,
+@@ -1378,6 +1427,7 @@ static int si5341_probe(struct i2c_client *client,
+ 	unsigned int i;
+ 	struct clk_si5341_output_config config[SI5341_MAX_NUM_OUTPUTS];
+ 	bool initialization_required;
++	u32 status;
+ 
+ 	data = devm_kzalloc(&client->dev, sizeof(*data), GFP_KERNEL);
+ 	if (!data)
+@@ -1385,6 +1435,11 @@ static int si5341_probe(struct i2c_client *client,
+ 
+ 	data->i2c_client = client;
+ 
++	/* Must be done before otherwise touching hardware */
++	err = si5341_wait_device_ready(client);
++	if (err)
++		return err;
++
+ 	for (i = 0; i < SI5341_NUM_INPUTS; ++i) {
+ 		input = devm_clk_get(&client->dev, si5341_input_clock_names[i]);
+ 		if (IS_ERR(input)) {
+@@ -1540,6 +1595,22 @@ static int si5341_probe(struct i2c_client *client,
+ 			return err;
+ 	}
+ 
++	/* wait for device to report input clock present and PLL lock */
++	err = regmap_read_poll_timeout(data->regmap, SI5341_STATUS, status,
++		!(status & (SI5341_STATUS_LOSREF | SI5341_STATUS_LOL)),
++	       10000, 250000);
++	if (err) {
++		dev_err(&client->dev, "Error waiting for input clock or PLL lock\n");
++		return err;
++	}
++
++	/* clear sticky alarm bits from initialization */
++	err = regmap_write(data->regmap, SI5341_STATUS_STICKY, 0);
++	if (err) {
++		dev_err(&client->dev, "unable to clear sticky status\n");
++		return err;
++	}
++
+ 	/* Free the names, clk framework makes copies */
+ 	for (i = 0; i < data->num_synth; ++i)
+ 		 devm_kfree(&client->dev, (void *)synth_clock_names[i]);
+diff --git a/drivers/clk/clk-versaclock5.c b/drivers/clk/clk-versaclock5.c
+index 43db67337bc06..4e741f94baf02 100644
+--- a/drivers/clk/clk-versaclock5.c
++++ b/drivers/clk/clk-versaclock5.c
+@@ -69,7 +69,10 @@
+ #define VC5_FEEDBACK_FRAC_DIV(n)		(0x19 + (n))
+ #define VC5_RC_CONTROL0				0x1e
+ #define VC5_RC_CONTROL1				0x1f
+-/* Register 0x20 is factory reserved */
++
++/* These registers are named "Unused Factory Reserved Registers" */
++#define VC5_RESERVED_X0(idx)		(0x20 + ((idx) * 0x10))
++#define VC5_RESERVED_X0_BYPASS_SYNC	BIT(7) /* bypass_sync<idx> bit */
+ 
+ /* Output divider control for divider 1,2,3,4 */
+ #define VC5_OUT_DIV_CONTROL(idx)	(0x21 + ((idx) * 0x10))
+@@ -87,7 +90,6 @@
+ #define VC5_OUT_DIV_SKEW_INT(idx, n)	(0x2b + ((idx) * 0x10) + (n))
+ #define VC5_OUT_DIV_INT(idx, n)		(0x2d + ((idx) * 0x10) + (n))
+ #define VC5_OUT_DIV_SKEW_FRAC(idx)	(0x2f + ((idx) * 0x10))
+-/* Registers 0x30, 0x40, 0x50 are factory reserved */
+ 
+ /* Clock control register for clock 1,2 */
+ #define VC5_CLK_OUTPUT_CFG(idx, n)	(0x60 + ((idx) * 0x2) + (n))
+@@ -140,6 +142,8 @@
+ #define VC5_HAS_INTERNAL_XTAL	BIT(0)
+ /* chip has PFD requency doubler */
+ #define VC5_HAS_PFD_FREQ_DBL	BIT(1)
++/* chip has bits to disable FOD sync */
++#define VC5_HAS_BYPASS_SYNC_BIT	BIT(2)
+ 
+ /* Supported IDT VC5 models. */
+ enum vc5_model {
+@@ -581,6 +585,23 @@ static int vc5_clk_out_prepare(struct clk_hw *hw)
+ 	unsigned int src;
+ 	int ret;
+ 
++	/*
++	 * When enabling a FOD, all currently enabled FODs are briefly
++	 * stopped in order to synchronize all of them. This causes a clock
++	 * disruption to any unrelated chips that might be already using
++	 * other clock outputs. Bypass the sync feature to avoid the issue,
++	 * which is possible on the VersaClock 6E family via reserved
++	 * registers.
++	 */
++	if (vc5->chip_info->flags & VC5_HAS_BYPASS_SYNC_BIT) {
++		ret = regmap_update_bits(vc5->regmap,
++					 VC5_RESERVED_X0(hwdata->num),
++					 VC5_RESERVED_X0_BYPASS_SYNC,
++					 VC5_RESERVED_X0_BYPASS_SYNC);
++		if (ret)
++			return ret;
++	}
++
+ 	/*
+ 	 * If the input mux is disabled, enable it first and
+ 	 * select source from matching FOD.
+@@ -1102,7 +1123,7 @@ static const struct vc5_chip_info idt_5p49v6965_info = {
+ 	.model = IDT_VC6_5P49V6965,
+ 	.clk_fod_cnt = 4,
+ 	.clk_out_cnt = 5,
+-	.flags = 0,
++	.flags = VC5_HAS_BYPASS_SYNC_BIT,
+ };
+ 
+ static const struct i2c_device_id vc5_id[] = {
+diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c
+index 4e6c81a702214..aac6bcc65c20c 100644
+--- a/drivers/clk/imx/clk-imx8mq.c
++++ b/drivers/clk/imx/clk-imx8mq.c
+@@ -350,46 +350,26 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MQ_VIDEO2_PLL_OUT] = imx_clk_hw_sscg_pll("video2_pll_out", video2_pll_out_sels, ARRAY_SIZE(video2_pll_out_sels), 0, 0, 0, base + 0x54, 0);
+ 
+ 	/* SYS PLL1 fixed output */
+-	hws[IMX8MQ_SYS1_PLL_40M_CG] = imx_clk_hw_gate("sys1_pll_40m_cg", "sys1_pll_out", base + 0x30, 9);
+-	hws[IMX8MQ_SYS1_PLL_80M_CG] = imx_clk_hw_gate("sys1_pll_80m_cg", "sys1_pll_out", base + 0x30, 11);
+-	hws[IMX8MQ_SYS1_PLL_100M_CG] = imx_clk_hw_gate("sys1_pll_100m_cg", "sys1_pll_out", base + 0x30, 13);
+-	hws[IMX8MQ_SYS1_PLL_133M_CG] = imx_clk_hw_gate("sys1_pll_133m_cg", "sys1_pll_out", base + 0x30, 15);
+-	hws[IMX8MQ_SYS1_PLL_160M_CG] = imx_clk_hw_gate("sys1_pll_160m_cg", "sys1_pll_out", base + 0x30, 17);
+-	hws[IMX8MQ_SYS1_PLL_200M_CG] = imx_clk_hw_gate("sys1_pll_200m_cg", "sys1_pll_out", base + 0x30, 19);
+-	hws[IMX8MQ_SYS1_PLL_266M_CG] = imx_clk_hw_gate("sys1_pll_266m_cg", "sys1_pll_out", base + 0x30, 21);
+-	hws[IMX8MQ_SYS1_PLL_400M_CG] = imx_clk_hw_gate("sys1_pll_400m_cg", "sys1_pll_out", base + 0x30, 23);
+-	hws[IMX8MQ_SYS1_PLL_800M_CG] = imx_clk_hw_gate("sys1_pll_800m_cg", "sys1_pll_out", base + 0x30, 25);
+-
+-	hws[IMX8MQ_SYS1_PLL_40M] = imx_clk_hw_fixed_factor("sys1_pll_40m", "sys1_pll_40m_cg", 1, 20);
+-	hws[IMX8MQ_SYS1_PLL_80M] = imx_clk_hw_fixed_factor("sys1_pll_80m", "sys1_pll_80m_cg", 1, 10);
+-	hws[IMX8MQ_SYS1_PLL_100M] = imx_clk_hw_fixed_factor("sys1_pll_100m", "sys1_pll_100m_cg", 1, 8);
+-	hws[IMX8MQ_SYS1_PLL_133M] = imx_clk_hw_fixed_factor("sys1_pll_133m", "sys1_pll_133m_cg", 1, 6);
+-	hws[IMX8MQ_SYS1_PLL_160M] = imx_clk_hw_fixed_factor("sys1_pll_160m", "sys1_pll_160m_cg", 1, 5);
+-	hws[IMX8MQ_SYS1_PLL_200M] = imx_clk_hw_fixed_factor("sys1_pll_200m", "sys1_pll_200m_cg", 1, 4);
+-	hws[IMX8MQ_SYS1_PLL_266M] = imx_clk_hw_fixed_factor("sys1_pll_266m", "sys1_pll_266m_cg", 1, 3);
+-	hws[IMX8MQ_SYS1_PLL_400M] = imx_clk_hw_fixed_factor("sys1_pll_400m", "sys1_pll_400m_cg", 1, 2);
+-	hws[IMX8MQ_SYS1_PLL_800M] = imx_clk_hw_fixed_factor("sys1_pll_800m", "sys1_pll_800m_cg", 1, 1);
++	hws[IMX8MQ_SYS1_PLL_40M] = imx_clk_hw_fixed_factor("sys1_pll_40m", "sys1_pll_out", 1, 20);
++	hws[IMX8MQ_SYS1_PLL_80M] = imx_clk_hw_fixed_factor("sys1_pll_80m", "sys1_pll_out", 1, 10);
++	hws[IMX8MQ_SYS1_PLL_100M] = imx_clk_hw_fixed_factor("sys1_pll_100m", "sys1_pll_out", 1, 8);
++	hws[IMX8MQ_SYS1_PLL_133M] = imx_clk_hw_fixed_factor("sys1_pll_133m", "sys1_pll_out", 1, 6);
++	hws[IMX8MQ_SYS1_PLL_160M] = imx_clk_hw_fixed_factor("sys1_pll_160m", "sys1_pll_out", 1, 5);
++	hws[IMX8MQ_SYS1_PLL_200M] = imx_clk_hw_fixed_factor("sys1_pll_200m", "sys1_pll_out", 1, 4);
++	hws[IMX8MQ_SYS1_PLL_266M] = imx_clk_hw_fixed_factor("sys1_pll_266m", "sys1_pll_out", 1, 3);
++	hws[IMX8MQ_SYS1_PLL_400M] = imx_clk_hw_fixed_factor("sys1_pll_400m", "sys1_pll_out", 1, 2);
++	hws[IMX8MQ_SYS1_PLL_800M] = imx_clk_hw_fixed_factor("sys1_pll_800m", "sys1_pll_out", 1, 1);
+ 
+ 	/* SYS PLL2 fixed output */
+-	hws[IMX8MQ_SYS2_PLL_50M_CG] = imx_clk_hw_gate("sys2_pll_50m_cg", "sys2_pll_out", base + 0x3c, 9);
+-	hws[IMX8MQ_SYS2_PLL_100M_CG] = imx_clk_hw_gate("sys2_pll_100m_cg", "sys2_pll_out", base + 0x3c, 11);
+-	hws[IMX8MQ_SYS2_PLL_125M_CG] = imx_clk_hw_gate("sys2_pll_125m_cg", "sys2_pll_out", base + 0x3c, 13);
+-	hws[IMX8MQ_SYS2_PLL_166M_CG] = imx_clk_hw_gate("sys2_pll_166m_cg", "sys2_pll_out", base + 0x3c, 15);
+-	hws[IMX8MQ_SYS2_PLL_200M_CG] = imx_clk_hw_gate("sys2_pll_200m_cg", "sys2_pll_out", base + 0x3c, 17);
+-	hws[IMX8MQ_SYS2_PLL_250M_CG] = imx_clk_hw_gate("sys2_pll_250m_cg", "sys2_pll_out", base + 0x3c, 19);
+-	hws[IMX8MQ_SYS2_PLL_333M_CG] = imx_clk_hw_gate("sys2_pll_333m_cg", "sys2_pll_out", base + 0x3c, 21);
+-	hws[IMX8MQ_SYS2_PLL_500M_CG] = imx_clk_hw_gate("sys2_pll_500m_cg", "sys2_pll_out", base + 0x3c, 23);
+-	hws[IMX8MQ_SYS2_PLL_1000M_CG] = imx_clk_hw_gate("sys2_pll_1000m_cg", "sys2_pll_out", base + 0x3c, 25);
+-
+-	hws[IMX8MQ_SYS2_PLL_50M] = imx_clk_hw_fixed_factor("sys2_pll_50m", "sys2_pll_50m_cg", 1, 20);
+-	hws[IMX8MQ_SYS2_PLL_100M] = imx_clk_hw_fixed_factor("sys2_pll_100m", "sys2_pll_100m_cg", 1, 10);
+-	hws[IMX8MQ_SYS2_PLL_125M] = imx_clk_hw_fixed_factor("sys2_pll_125m", "sys2_pll_125m_cg", 1, 8);
+-	hws[IMX8MQ_SYS2_PLL_166M] = imx_clk_hw_fixed_factor("sys2_pll_166m", "sys2_pll_166m_cg", 1, 6);
+-	hws[IMX8MQ_SYS2_PLL_200M] = imx_clk_hw_fixed_factor("sys2_pll_200m", "sys2_pll_200m_cg", 1, 5);
+-	hws[IMX8MQ_SYS2_PLL_250M] = imx_clk_hw_fixed_factor("sys2_pll_250m", "sys2_pll_250m_cg", 1, 4);
+-	hws[IMX8MQ_SYS2_PLL_333M] = imx_clk_hw_fixed_factor("sys2_pll_333m", "sys2_pll_333m_cg", 1, 3);
+-	hws[IMX8MQ_SYS2_PLL_500M] = imx_clk_hw_fixed_factor("sys2_pll_500m", "sys2_pll_500m_cg", 1, 2);
+-	hws[IMX8MQ_SYS2_PLL_1000M] = imx_clk_hw_fixed_factor("sys2_pll_1000m", "sys2_pll_1000m_cg", 1, 1);
++	hws[IMX8MQ_SYS2_PLL_50M] = imx_clk_hw_fixed_factor("sys2_pll_50m", "sys2_pll_out", 1, 20);
++	hws[IMX8MQ_SYS2_PLL_100M] = imx_clk_hw_fixed_factor("sys2_pll_100m", "sys2_pll_out", 1, 10);
++	hws[IMX8MQ_SYS2_PLL_125M] = imx_clk_hw_fixed_factor("sys2_pll_125m", "sys2_pll_out", 1, 8);
++	hws[IMX8MQ_SYS2_PLL_166M] = imx_clk_hw_fixed_factor("sys2_pll_166m", "sys2_pll_out", 1, 6);
++	hws[IMX8MQ_SYS2_PLL_200M] = imx_clk_hw_fixed_factor("sys2_pll_200m", "sys2_pll_out", 1, 5);
++	hws[IMX8MQ_SYS2_PLL_250M] = imx_clk_hw_fixed_factor("sys2_pll_250m", "sys2_pll_out", 1, 4);
++	hws[IMX8MQ_SYS2_PLL_333M] = imx_clk_hw_fixed_factor("sys2_pll_333m", "sys2_pll_out", 1, 3);
++	hws[IMX8MQ_SYS2_PLL_500M] = imx_clk_hw_fixed_factor("sys2_pll_500m", "sys2_pll_out", 1, 2);
++	hws[IMX8MQ_SYS2_PLL_1000M] = imx_clk_hw_fixed_factor("sys2_pll_1000m", "sys2_pll_out", 1, 1);
+ 
+ 	np = dev->of_node;
+ 	base = devm_platform_ioremap_resource(pdev, 0);
+diff --git a/drivers/clk/meson/g12a.c b/drivers/clk/meson/g12a.c
+index b814d44917a5d..2876bb83d9d0e 100644
+--- a/drivers/clk/meson/g12a.c
++++ b/drivers/clk/meson/g12a.c
+@@ -1602,7 +1602,7 @@ static struct clk_regmap g12b_cpub_clk_trace = {
+ };
+ 
+ static const struct pll_mult_range g12a_gp0_pll_mult_range = {
+-	.min = 55,
++	.min = 125,
+ 	.max = 255,
+ };
+ 
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 564431130a760..1a571c04a76cb 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -1214,7 +1214,7 @@ static int alpha_pll_fabia_prepare(struct clk_hw *hw)
+ 		return -EINVAL;
+ 
+ 	/* Setup PLL for calibration frequency */
+-	regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL(pll), cal_l);
++	regmap_write(pll->clkr.regmap, PLL_CAL_L_VAL(pll), cal_l);
+ 
+ 	/* Bringup the PLL at calibration frequency */
+ 	ret = clk_alpha_pll_enable(hw);
+diff --git a/drivers/clk/socfpga/clk-agilex.c b/drivers/clk/socfpga/clk-agilex.c
+index bb3e80928ebe8..438075a50b9f2 100644
+--- a/drivers/clk/socfpga/clk-agilex.c
++++ b/drivers/clk/socfpga/clk-agilex.c
+@@ -186,6 +186,41 @@ static const struct clk_parent_data noc_mux[] = {
+ 	  .name = "boot_clk", },
+ };
+ 
++static const struct clk_parent_data sdmmc_mux[] = {
++	{ .fw_name = "sdmmc_free_clk",
++	  .name = "sdmmc_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data s2f_user1_mux[] = {
++	{ .fw_name = "s2f_user1_free_clk",
++	  .name = "s2f_user1_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data psi_mux[] = {
++	{ .fw_name = "psi_ref_free_clk",
++	  .name = "psi_ref_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data gpio_db_mux[] = {
++	{ .fw_name = "gpio_db_free_clk",
++	  .name = "gpio_db_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data emac_ptp_mux[] = {
++	{ .fw_name = "emac_ptp_free_clk",
++	  .name = "emac_ptp_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
+ /* clocks in AO (always on) controller */
+ static const struct stratix10_pll_clock agilex_pll_clks[] = {
+ 	{ AGILEX_BOOT_CLK, "boot_clk", boot_mux, ARRAY_SIZE(boot_mux), 0,
+@@ -211,11 +246,9 @@ static const struct stratix10_perip_cnt_clock agilex_main_perip_cnt_clks[] = {
+ 	{ AGILEX_MPU_FREE_CLK, "mpu_free_clk", NULL, mpu_free_mux, ARRAY_SIZE(mpu_free_mux),
+ 	   0, 0x3C, 0, 0, 0},
+ 	{ AGILEX_NOC_FREE_CLK, "noc_free_clk", NULL, noc_free_mux, ARRAY_SIZE(noc_free_mux),
+-	  0, 0x40, 0, 0, 1},
+-	{ AGILEX_L4_SYS_FREE_CLK, "l4_sys_free_clk", "noc_free_clk", NULL, 1, 0,
+-	  0, 4, 0, 0},
+-	{ AGILEX_NOC_CLK, "noc_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux),
+-	  0, 0, 0, 0x30, 1},
++	  0, 0x40, 0, 0, 0},
++	{ AGILEX_L4_SYS_FREE_CLK, "l4_sys_free_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0,
++	  0, 4, 0x30, 1},
+ 	{ AGILEX_EMAC_A_FREE_CLK, "emaca_free_clk", NULL, emaca_free_mux, ARRAY_SIZE(emaca_free_mux),
+ 	  0, 0xD4, 0, 0x88, 0},
+ 	{ AGILEX_EMAC_B_FREE_CLK, "emacb_free_clk", NULL, emacb_free_mux, ARRAY_SIZE(emacb_free_mux),
+@@ -225,7 +258,7 @@ static const struct stratix10_perip_cnt_clock agilex_main_perip_cnt_clks[] = {
+ 	{ AGILEX_GPIO_DB_FREE_CLK, "gpio_db_free_clk", NULL, gpio_db_free_mux,
+ 	  ARRAY_SIZE(gpio_db_free_mux), 0, 0xE0, 0, 0x88, 3},
+ 	{ AGILEX_SDMMC_FREE_CLK, "sdmmc_free_clk", NULL, sdmmc_free_mux,
+-	  ARRAY_SIZE(sdmmc_free_mux), 0, 0xE4, 0, 0x88, 4},
++	  ARRAY_SIZE(sdmmc_free_mux), 0, 0xE4, 0, 0, 0},
+ 	{ AGILEX_S2F_USER0_FREE_CLK, "s2f_user0_free_clk", NULL, s2f_usr0_free_mux,
+ 	  ARRAY_SIZE(s2f_usr0_free_mux), 0, 0xE8, 0, 0, 0},
+ 	{ AGILEX_S2F_USER1_FREE_CLK, "s2f_user1_free_clk", NULL, s2f_usr1_free_mux,
+@@ -241,24 +274,24 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
+ 	  0, 0, 0, 0, 0, 0, 4},
+ 	{ AGILEX_MPU_CCU_CLK, "mpu_ccu_clk", "mpu_clk", NULL, 1, 0, 0x24,
+ 	  0, 0, 0, 0, 0, 0, 2},
+-	{ AGILEX_L4_MAIN_CLK, "l4_main_clk", "noc_clk", NULL, 1, 0, 0x24,
+-	  1, 0x44, 0, 2, 0, 0, 0},
+-	{ AGILEX_L4_MP_CLK, "l4_mp_clk", "noc_clk", NULL, 1, 0, 0x24,
+-	  2, 0x44, 8, 2, 0, 0, 0},
++	{ AGILEX_L4_MAIN_CLK, "l4_main_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
++	  1, 0x44, 0, 2, 0x30, 1, 0},
++	{ AGILEX_L4_MP_CLK, "l4_mp_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
++	  2, 0x44, 8, 2, 0x30, 1, 0},
+ 	/*
+ 	 * The l4_sp_clk feeds a 100 MHz clock to various peripherals, one of them
+ 	 * being the SP timers, thus cannot get gated.
+ 	 */
+-	{ AGILEX_L4_SP_CLK, "l4_sp_clk", "noc_clk", NULL, 1, CLK_IS_CRITICAL, 0x24,
+-	  3, 0x44, 16, 2, 0, 0, 0},
+-	{ AGILEX_CS_AT_CLK, "cs_at_clk", "noc_clk", NULL, 1, 0, 0x24,
+-	  4, 0x44, 24, 2, 0, 0, 0},
+-	{ AGILEX_CS_TRACE_CLK, "cs_trace_clk", "noc_clk", NULL, 1, 0, 0x24,
+-	  4, 0x44, 26, 2, 0, 0, 0},
++	{ AGILEX_L4_SP_CLK, "l4_sp_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), CLK_IS_CRITICAL, 0x24,
++	  3, 0x44, 16, 2, 0x30, 1, 0},
++	{ AGILEX_CS_AT_CLK, "cs_at_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
++	  4, 0x44, 24, 2, 0x30, 1, 0},
++	{ AGILEX_CS_TRACE_CLK, "cs_trace_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
++	  4, 0x44, 26, 2, 0x30, 1, 0},
+ 	{ AGILEX_CS_PDBG_CLK, "cs_pdbg_clk", "cs_at_clk", NULL, 1, 0, 0x24,
+ 	  4, 0x44, 28, 1, 0, 0, 0},
+-	{ AGILEX_CS_TIMER_CLK, "cs_timer_clk", "noc_clk", NULL, 1, 0, 0x24,
+-	  5, 0, 0, 0, 0, 0, 0},
++	{ AGILEX_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
++	  5, 0, 0, 0, 0x30, 1, 0},
+ 	{ AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x24,
+ 	  6, 0, 0, 0, 0, 0, 0},
+ 	{ AGILEX_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
+@@ -267,16 +300,16 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
+ 	  1, 0, 0, 0, 0x94, 27, 0},
+ 	{ AGILEX_EMAC2_CLK, "emac2_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
+ 	  2, 0, 0, 0, 0x94, 28, 0},
+-	{ AGILEX_EMAC_PTP_CLK, "emac_ptp_clk", "emac_ptp_free_clk", NULL, 1, 0, 0x7C,
+-	  3, 0, 0, 0, 0, 0, 0},
+-	{ AGILEX_GPIO_DB_CLK, "gpio_db_clk", "gpio_db_free_clk", NULL, 1, 0, 0x7C,
+-	  4, 0x98, 0, 16, 0, 0, 0},
+-	{ AGILEX_SDMMC_CLK, "sdmmc_clk", "sdmmc_free_clk", NULL, 1, 0, 0x7C,
+-	  5, 0, 0, 0, 0, 0, 4},
+-	{ AGILEX_S2F_USER1_CLK, "s2f_user1_clk", "s2f_user1_free_clk", NULL, 1, 0, 0x7C,
+-	  6, 0, 0, 0, 0, 0, 0},
+-	{ AGILEX_PSI_REF_CLK, "psi_ref_clk", "psi_ref_free_clk", NULL, 1, 0, 0x7C,
+-	  7, 0, 0, 0, 0, 0, 0},
++	{ AGILEX_EMAC_PTP_CLK, "emac_ptp_clk", NULL, emac_ptp_mux, ARRAY_SIZE(emac_ptp_mux), 0, 0x7C,
++	  3, 0, 0, 0, 0x88, 2, 0},
++	{ AGILEX_GPIO_DB_CLK, "gpio_db_clk", NULL, gpio_db_mux, ARRAY_SIZE(gpio_db_mux), 0, 0x7C,
++	  4, 0x98, 0, 16, 0x88, 3, 0},
++	{ AGILEX_SDMMC_CLK, "sdmmc_clk", NULL, sdmmc_mux, ARRAY_SIZE(sdmmc_mux), 0, 0x7C,
++	  5, 0, 0, 0, 0x88, 4, 4},
++	{ AGILEX_S2F_USER1_CLK, "s2f_user1_clk", NULL, s2f_user1_mux, ARRAY_SIZE(s2f_user1_mux), 0, 0x7C,
++	  6, 0, 0, 0, 0x88, 5, 0},
++	{ AGILEX_PSI_REF_CLK, "psi_ref_clk", NULL, psi_mux, ARRAY_SIZE(psi_mux), 0, 0x7C,
++	  7, 0, 0, 0, 0x88, 6, 0},
+ 	{ AGILEX_USB_CLK, "usb_clk", "l4_mp_clk", NULL, 1, 0, 0x7C,
+ 	  8, 0, 0, 0, 0, 0, 0},
+ 	{ AGILEX_SPI_M_CLK, "spi_m_clk", "l4_mp_clk", NULL, 1, 0, 0x7C,
+diff --git a/drivers/clk/socfpga/clk-periph-s10.c b/drivers/clk/socfpga/clk-periph-s10.c
+index 397b77b89b166..bae595f17061f 100644
+--- a/drivers/clk/socfpga/clk-periph-s10.c
++++ b/drivers/clk/socfpga/clk-periph-s10.c
+@@ -49,16 +49,21 @@ static u8 clk_periclk_get_parent(struct clk_hw *hwclk)
+ {
+ 	struct socfpga_periph_clk *socfpgaclk = to_periph_clk(hwclk);
+ 	u32 clk_src, mask;
+-	u8 parent;
++	u8 parent = 0;
+ 
++	/* handle the bypass first */
+ 	if (socfpgaclk->bypass_reg) {
+ 		mask = (0x1 << socfpgaclk->bypass_shift);
+ 		parent = ((readl(socfpgaclk->bypass_reg) & mask) >>
+ 			   socfpgaclk->bypass_shift);
+-	} else {
++		if (parent)
++			return parent;
++	}
++
++	if (socfpgaclk->hw.reg) {
+ 		clk_src = readl(socfpgaclk->hw.reg);
+ 		parent = (clk_src >> CLK_MGR_FREE_SHIFT) &
+-			CLK_MGR_FREE_MASK;
++			  CLK_MGR_FREE_MASK;
+ 	}
+ 	return parent;
+ }
+diff --git a/drivers/clk/socfpga/clk-s10.c b/drivers/clk/socfpga/clk-s10.c
+index 661a8e9bfb9bd..aaf69058b1dca 100644
+--- a/drivers/clk/socfpga/clk-s10.c
++++ b/drivers/clk/socfpga/clk-s10.c
+@@ -144,6 +144,41 @@ static const struct clk_parent_data mpu_free_mux[] = {
+ 	  .name = "f2s-free-clk", },
+ };
+ 
++static const struct clk_parent_data sdmmc_mux[] = {
++	{ .fw_name = "sdmmc_free_clk",
++	  .name = "sdmmc_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data s2f_user1_mux[] = {
++	{ .fw_name = "s2f_user1_free_clk",
++	  .name = "s2f_user1_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data psi_mux[] = {
++	{ .fw_name = "psi_ref_free_clk",
++	  .name = "psi_ref_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data gpio_db_mux[] = {
++	{ .fw_name = "gpio_db_free_clk",
++	  .name = "gpio_db_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data emac_ptp_mux[] = {
++	{ .fw_name = "emac_ptp_free_clk",
++	  .name = "emac_ptp_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
+ /* clocks in AO (always on) controller */
+ static const struct stratix10_pll_clock s10_pll_clks[] = {
+ 	{ STRATIX10_BOOT_CLK, "boot_clk", boot_mux, ARRAY_SIZE(boot_mux), 0,
+@@ -167,7 +202,7 @@ static const struct stratix10_perip_cnt_clock s10_main_perip_cnt_clks[] = {
+ 	{ STRATIX10_MPU_FREE_CLK, "mpu_free_clk", NULL, mpu_free_mux, ARRAY_SIZE(mpu_free_mux),
+ 	   0, 0x48, 0, 0, 0},
+ 	{ STRATIX10_NOC_FREE_CLK, "noc_free_clk", NULL, noc_free_mux, ARRAY_SIZE(noc_free_mux),
+-	  0, 0x4C, 0, 0, 0},
++	  0, 0x4C, 0, 0x3C, 1},
+ 	{ STRATIX10_MAIN_EMACA_CLK, "main_emaca_clk", "main_noc_base_clk", NULL, 1, 0,
+ 	  0x50, 0, 0, 0},
+ 	{ STRATIX10_MAIN_EMACB_CLK, "main_emacb_clk", "main_noc_base_clk", NULL, 1, 0,
+@@ -200,10 +235,8 @@ static const struct stratix10_perip_cnt_clock s10_main_perip_cnt_clks[] = {
+ 	  0, 0xD4, 0, 0, 0},
+ 	{ STRATIX10_PERI_PSI_REF_CLK, "peri_psi_ref_clk", "peri_noc_base_clk", NULL, 1, 0,
+ 	  0xD8, 0, 0, 0},
+-	{ STRATIX10_L4_SYS_FREE_CLK, "l4_sys_free_clk", "noc_free_clk", NULL, 1, 0,
+-	  0, 4, 0, 0},
+-	{ STRATIX10_NOC_CLK, "noc_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux),
+-	  0, 0, 0, 0x3C, 1},
++	{ STRATIX10_L4_SYS_FREE_CLK, "l4_sys_free_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0,
++	  0, 4, 0x3C, 1},
+ 	{ STRATIX10_EMAC_A_FREE_CLK, "emaca_free_clk", NULL, emaca_free_mux, ARRAY_SIZE(emaca_free_mux),
+ 	  0, 0, 2, 0xB0, 0},
+ 	{ STRATIX10_EMAC_B_FREE_CLK, "emacb_free_clk", NULL, emacb_free_mux, ARRAY_SIZE(emacb_free_mux),
+@@ -227,20 +260,20 @@ static const struct stratix10_gate_clock s10_gate_clks[] = {
+ 	  0, 0, 0, 0, 0, 0, 4},
+ 	{ STRATIX10_MPU_L2RAM_CLK, "mpu_l2ram_clk", "mpu_clk", NULL, 1, 0, 0x30,
+ 	  0, 0, 0, 0, 0, 0, 2},
+-	{ STRATIX10_L4_MAIN_CLK, "l4_main_clk", "noc_clk", NULL, 1, 0, 0x30,
+-	  1, 0x70, 0, 2, 0, 0, 0},
+-	{ STRATIX10_L4_MP_CLK, "l4_mp_clk", "noc_clk", NULL, 1, 0, 0x30,
+-	  2, 0x70, 8, 2, 0, 0, 0},
+-	{ STRATIX10_L4_SP_CLK, "l4_sp_clk", "noc_clk", NULL, 1, CLK_IS_CRITICAL, 0x30,
+-	  3, 0x70, 16, 2, 0, 0, 0},
+-	{ STRATIX10_CS_AT_CLK, "cs_at_clk", "noc_clk", NULL, 1, 0, 0x30,
+-	  4, 0x70, 24, 2, 0, 0, 0},
+-	{ STRATIX10_CS_TRACE_CLK, "cs_trace_clk", "noc_clk", NULL, 1, 0, 0x30,
+-	  4, 0x70, 26, 2, 0, 0, 0},
++	{ STRATIX10_L4_MAIN_CLK, "l4_main_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
++	  1, 0x70, 0, 2, 0x3C, 1, 0},
++	{ STRATIX10_L4_MP_CLK, "l4_mp_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
++	  2, 0x70, 8, 2, 0x3C, 1, 0},
++	{ STRATIX10_L4_SP_CLK, "l4_sp_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), CLK_IS_CRITICAL, 0x30,
++	  3, 0x70, 16, 2, 0x3C, 1, 0},
++	{ STRATIX10_CS_AT_CLK, "cs_at_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
++	  4, 0x70, 24, 2, 0x3C, 1, 0},
++	{ STRATIX10_CS_TRACE_CLK, "cs_trace_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
++	  4, 0x70, 26, 2, 0x3C, 1, 0},
+ 	{ STRATIX10_CS_PDBG_CLK, "cs_pdbg_clk", "cs_at_clk", NULL, 1, 0, 0x30,
+ 	  4, 0x70, 28, 1, 0, 0, 0},
+-	{ STRATIX10_CS_TIMER_CLK, "cs_timer_clk", "noc_clk", NULL, 1, 0, 0x30,
+-	  5, 0, 0, 0, 0, 0, 0},
++	{ STRATIX10_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
++	  5, 0, 0, 0, 0x3C, 1, 0},
+ 	{ STRATIX10_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x30,
+ 	  6, 0, 0, 0, 0, 0, 0},
+ 	{ STRATIX10_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0xA4,
+@@ -249,16 +282,16 @@ static const struct stratix10_gate_clock s10_gate_clks[] = {
+ 	  1, 0, 0, 0, 0xDC, 27, 0},
+ 	{ STRATIX10_EMAC2_CLK, "emac2_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0xA4,
+ 	  2, 0, 0, 0, 0xDC, 28, 0},
+-	{ STRATIX10_EMAC_PTP_CLK, "emac_ptp_clk", "emac_ptp_free_clk", NULL, 1, 0, 0xA4,
+-	  3, 0, 0, 0, 0, 0, 0},
+-	{ STRATIX10_GPIO_DB_CLK, "gpio_db_clk", "gpio_db_free_clk", NULL, 1, 0, 0xA4,
+-	  4, 0xE0, 0, 16, 0, 0, 0},
+-	{ STRATIX10_SDMMC_CLK, "sdmmc_clk", "sdmmc_free_clk", NULL, 1, 0, 0xA4,
+-	  5, 0, 0, 0, 0, 0, 4},
+-	{ STRATIX10_S2F_USER1_CLK, "s2f_user1_clk", "s2f_user1_free_clk", NULL, 1, 0, 0xA4,
+-	  6, 0, 0, 0, 0, 0, 0},
+-	{ STRATIX10_PSI_REF_CLK, "psi_ref_clk", "psi_ref_free_clk", NULL, 1, 0, 0xA4,
+-	  7, 0, 0, 0, 0, 0, 0},
++	{ STRATIX10_EMAC_PTP_CLK, "emac_ptp_clk", NULL, emac_ptp_mux, ARRAY_SIZE(emac_ptp_mux), 0, 0xA4,
++	  3, 0, 0, 0, 0xB0, 2, 0},
++	{ STRATIX10_GPIO_DB_CLK, "gpio_db_clk", NULL, gpio_db_mux, ARRAY_SIZE(gpio_db_mux), 0, 0xA4,
++	  4, 0xE0, 0, 16, 0xB0, 3, 0},
++	{ STRATIX10_SDMMC_CLK, "sdmmc_clk", NULL, sdmmc_mux, ARRAY_SIZE(sdmmc_mux), 0, 0xA4,
++	  5, 0, 0, 0, 0xB0, 4, 4},
++	{ STRATIX10_S2F_USER1_CLK, "s2f_user1_clk", NULL, s2f_user1_mux, ARRAY_SIZE(s2f_user1_mux), 0, 0xA4,
++	  6, 0, 0, 0, 0xB0, 5, 0},
++	{ STRATIX10_PSI_REF_CLK, "psi_ref_clk", NULL, psi_mux, ARRAY_SIZE(psi_mux), 0, 0xA4,
++	  7, 0, 0, 0, 0xB0, 6, 0},
+ 	{ STRATIX10_USB_CLK, "usb_clk", "l4_mp_clk", NULL, 1, 0, 0xA4,
+ 	  8, 0, 0, 0, 0, 0, 0},
+ 	{ STRATIX10_SPI_M_CLK, "spi_m_clk", "l4_mp_clk", NULL, 1, 0, 0xA4,
+diff --git a/drivers/clk/tegra/clk-tegra30.c b/drivers/clk/tegra/clk-tegra30.c
+index 9cf249c344d9e..31e752318a104 100644
+--- a/drivers/clk/tegra/clk-tegra30.c
++++ b/drivers/clk/tegra/clk-tegra30.c
+@@ -1248,7 +1248,7 @@ static struct tegra_clk_init_table init_table[] __initdata = {
+ 	{ TEGRA30_CLK_GR3D, TEGRA30_CLK_PLL_C, 300000000, 0 },
+ 	{ TEGRA30_CLK_GR3D2, TEGRA30_CLK_PLL_C, 300000000, 0 },
+ 	{ TEGRA30_CLK_PLL_U, TEGRA30_CLK_CLK_MAX, 480000000, 0 },
+-	{ TEGRA30_CLK_VDE, TEGRA30_CLK_PLL_C, 600000000, 0 },
++	{ TEGRA30_CLK_VDE, TEGRA30_CLK_PLL_C, 300000000, 0 },
+ 	{ TEGRA30_CLK_SPDIF_IN_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
+ 	{ TEGRA30_CLK_I2S0_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
+ 	{ TEGRA30_CLK_I2S1_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
+diff --git a/drivers/clocksource/timer-ti-dm.c b/drivers/clocksource/timer-ti-dm.c
+index 33eeabf9c3d12..e5c631f1b5cbe 100644
+--- a/drivers/clocksource/timer-ti-dm.c
++++ b/drivers/clocksource/timer-ti-dm.c
+@@ -78,6 +78,9 @@ static void omap_dm_timer_write_reg(struct omap_dm_timer *timer, u32 reg,
+ 
+ static void omap_timer_restore_context(struct omap_dm_timer *timer)
+ {
++	__omap_dm_timer_write(timer, OMAP_TIMER_OCP_CFG_OFFSET,
++			      timer->context.ocp_cfg, 0);
++
+ 	omap_dm_timer_write_reg(timer, OMAP_TIMER_WAKEUP_EN_REG,
+ 				timer->context.twer);
+ 	omap_dm_timer_write_reg(timer, OMAP_TIMER_COUNTER_REG,
+@@ -95,6 +98,9 @@ static void omap_timer_restore_context(struct omap_dm_timer *timer)
+ 
+ static void omap_timer_save_context(struct omap_dm_timer *timer)
+ {
++	timer->context.ocp_cfg =
++		__omap_dm_timer_read(timer, OMAP_TIMER_OCP_CFG_OFFSET, 0);
++
+ 	timer->context.tclr =
+ 			omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
+ 	timer->context.twer =
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 1e7e3f2ff09f0..ebee0ad559fad 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1368,9 +1368,14 @@ static int cpufreq_online(unsigned int cpu)
+ 			goto out_free_policy;
+ 		}
+ 
++		/*
++		 * The initialization has succeeded and the policy is online.
++		 * If there is a problem with its frequency table, take it
++		 * offline and drop it.
++		 */
+ 		ret = cpufreq_table_validate_and_sort(policy);
+ 		if (ret)
+-			goto out_exit_policy;
++			goto out_offline_policy;
+ 
+ 		/* related_cpus should at least include policy->cpus. */
+ 		cpumask_copy(policy->related_cpus, policy->cpus);
+@@ -1513,6 +1518,10 @@ out_destroy_policy:
+ 
+ 	up_write(&policy->rwsem);
+ 
++out_offline_policy:
++	if (cpufreq_driver->offline)
++		cpufreq_driver->offline(policy);
++
+ out_exit_policy:
+ 	if (cpufreq_driver->exit)
+ 		cpufreq_driver->exit(policy);
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_isr.c b/drivers/crypto/cavium/nitrox/nitrox_isr.c
+index 3dec570a190ad..10e3408bf704c 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_isr.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_isr.c
+@@ -306,6 +306,10 @@ int nitrox_register_interrupts(struct nitrox_device *ndev)
+ 	 * Entry 192: NPS_CORE_INT_ACTIVE
+ 	 */
+ 	nr_vecs = pci_msix_vec_count(pdev);
++	if (nr_vecs < 0) {
++		dev_err(DEV(ndev), "Error in getting vec count %d\n", nr_vecs);
++		return nr_vecs;
++	}
+ 
+ 	/* Enable MSI-X */
+ 	ret = pci_alloc_irq_vectors(pdev, nr_vecs, nr_vecs, PCI_IRQ_MSIX);
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 21caed429cc52..d0018794e92e8 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -42,6 +42,10 @@ static int psp_probe_timeout = 5;
+ module_param(psp_probe_timeout, int, 0644);
+ MODULE_PARM_DESC(psp_probe_timeout, " default timeout value, in seconds, during PSP device probe");
+ 
++MODULE_FIRMWARE("amd/amd_sev_fam17h_model0xh.sbin"); /* 1st gen EPYC */
++MODULE_FIRMWARE("amd/amd_sev_fam17h_model3xh.sbin"); /* 2nd gen EPYC */
++MODULE_FIRMWARE("amd/amd_sev_fam19h_model0xh.sbin"); /* 3rd gen EPYC */
++
+ static bool psp_dead;
+ static int psp_timeout;
+ 
+diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
+index f471dbaef1fbc..7d346d842a39e 100644
+--- a/drivers/crypto/ccp/sp-pci.c
++++ b/drivers/crypto/ccp/sp-pci.c
+@@ -222,7 +222,7 @@ static int sp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		if (ret) {
+ 			dev_err(dev, "dma_set_mask_and_coherent failed (%d)\n",
+ 				ret);
+-			goto e_err;
++			goto free_irqs;
+ 		}
+ 	}
+ 
+@@ -230,10 +230,12 @@ static int sp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	ret = sp_init(sp);
+ 	if (ret)
+-		goto e_err;
++		goto free_irqs;
+ 
+ 	return 0;
+ 
++free_irqs:
++	sp_free_irqs(sp);
+ e_err:
+ 	dev_notice(dev, "initialization failed\n");
+ 	return ret;
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index 41f1fcacb2809..630dcb59ad569 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -1515,11 +1515,11 @@ static struct skcipher_alg sec_skciphers[] = {
+ 			 AES_BLOCK_SIZE, AES_BLOCK_SIZE)
+ 
+ 	SEC_SKCIPHER_ALG("ecb(des3_ede)", sec_setkey_3des_ecb,
+-			 SEC_DES3_2KEY_SIZE, SEC_DES3_3KEY_SIZE,
++			 SEC_DES3_3KEY_SIZE, SEC_DES3_3KEY_SIZE,
+ 			 DES3_EDE_BLOCK_SIZE, 0)
+ 
+ 	SEC_SKCIPHER_ALG("cbc(des3_ede)", sec_setkey_3des_cbc,
+-			 SEC_DES3_2KEY_SIZE, SEC_DES3_3KEY_SIZE,
++			 SEC_DES3_3KEY_SIZE, SEC_DES3_3KEY_SIZE,
+ 			 DES3_EDE_BLOCK_SIZE, DES3_EDE_BLOCK_SIZE)
+ 
+ 	SEC_SKCIPHER_ALG("xts(sm4)", sec_setkey_sm4_xts,
+diff --git a/drivers/crypto/ixp4xx_crypto.c b/drivers/crypto/ixp4xx_crypto.c
+index 276012e7c482f..5e474a7a1912c 100644
+--- a/drivers/crypto/ixp4xx_crypto.c
++++ b/drivers/crypto/ixp4xx_crypto.c
+@@ -149,6 +149,8 @@ struct crypt_ctl {
+ struct ablk_ctx {
+ 	struct buffer_desc *src;
+ 	struct buffer_desc *dst;
++	u8 iv[MAX_IVLEN];
++	bool encrypt;
+ };
+ 
+ struct aead_ctx {
+@@ -330,7 +332,7 @@ static void free_buf_chain(struct device *dev, struct buffer_desc *buf,
+ 
+ 		buf1 = buf->next;
+ 		phys1 = buf->phys_next;
+-		dma_unmap_single(dev, buf->phys_next, buf->buf_len, buf->dir);
++		dma_unmap_single(dev, buf->phys_addr, buf->buf_len, buf->dir);
+ 		dma_pool_free(buffer_pool, buf, phys);
+ 		buf = buf1;
+ 		phys = phys1;
+@@ -381,6 +383,20 @@ static void one_packet(dma_addr_t phys)
+ 	case CTL_FLAG_PERFORM_ABLK: {
+ 		struct skcipher_request *req = crypt->data.ablk_req;
+ 		struct ablk_ctx *req_ctx = skcipher_request_ctx(req);
++		struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
++		unsigned int ivsize = crypto_skcipher_ivsize(tfm);
++		unsigned int offset;
++
++		if (ivsize > 0) {
++			offset = req->cryptlen - ivsize;
++			if (req_ctx->encrypt) {
++				scatterwalk_map_and_copy(req->iv, req->dst,
++							 offset, ivsize, 0);
++			} else {
++				memcpy(req->iv, req_ctx->iv, ivsize);
++				memzero_explicit(req_ctx->iv, ivsize);
++			}
++		}
+ 
+ 		if (req_ctx->dst) {
+ 			free_buf_chain(dev, req_ctx->dst, crypt->dst_buf);
+@@ -876,6 +892,7 @@ static int ablk_perform(struct skcipher_request *req, int encrypt)
+ 	struct ablk_ctx *req_ctx = skcipher_request_ctx(req);
+ 	struct buffer_desc src_hook;
+ 	struct device *dev = &pdev->dev;
++	unsigned int offset;
+ 	gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
+ 				GFP_KERNEL : GFP_ATOMIC;
+ 
+@@ -885,6 +902,7 @@ static int ablk_perform(struct skcipher_request *req, int encrypt)
+ 		return -EAGAIN;
+ 
+ 	dir = encrypt ? &ctx->encrypt : &ctx->decrypt;
++	req_ctx->encrypt = encrypt;
+ 
+ 	crypt = get_crypt_desc();
+ 	if (!crypt)
+@@ -900,6 +918,10 @@ static int ablk_perform(struct skcipher_request *req, int encrypt)
+ 
+ 	BUG_ON(ivsize && !req->iv);
+ 	memcpy(crypt->iv, req->iv, ivsize);
++	if (ivsize > 0 && !encrypt) {
++		offset = req->cryptlen - ivsize;
++		scatterwalk_map_and_copy(req_ctx->iv, req->src, offset, ivsize, 0);
++	}
+ 	if (req->src != req->dst) {
+ 		struct buffer_desc dst_hook;
+ 		crypt->mode |= NPE_OP_NOT_IN_PLACE;
+diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c
+index 2de5e3672e423..c5ec50a28f30d 100644
+--- a/drivers/crypto/nx/nx-842-pseries.c
++++ b/drivers/crypto/nx/nx-842-pseries.c
+@@ -538,13 +538,15 @@ static int nx842_OF_set_defaults(struct nx842_devdata *devdata)
+  * The status field indicates if the device is enabled when the status
+  * is 'okay'.  Otherwise the device driver will be disabled.
+  *
+- * @prop - struct property point containing the maxsyncop for the update
++ * @devdata: struct nx842_devdata to use for dev_info
++ * @prop: struct property point containing the maxsyncop for the update
+  *
+  * Returns:
+  *  0 - Device is available
+  *  -ENODEV - Device is not available
+  */
+-static int nx842_OF_upd_status(struct property *prop)
++static int nx842_OF_upd_status(struct nx842_devdata *devdata,
++			       struct property *prop)
+ {
+ 	const char *status = (const char *)prop->value;
+ 
+@@ -758,7 +760,7 @@ static int nx842_OF_upd(struct property *new_prop)
+ 		goto out;
+ 
+ 	/* Perform property updates */
+-	ret = nx842_OF_upd_status(status);
++	ret = nx842_OF_upd_status(new_devdata, status);
+ 	if (ret)
+ 		goto error_out;
+ 
+@@ -1071,6 +1073,7 @@ static const struct vio_device_id nx842_vio_driver_ids[] = {
+ 	{"ibm,compression-v1", "ibm,compression"},
+ 	{"", ""},
+ };
++MODULE_DEVICE_TABLE(vio, nx842_vio_driver_ids);
+ 
+ static struct vio_driver nx842_vio_driver = {
+ 	.name = KBUILD_MODNAME,
+diff --git a/drivers/crypto/nx/nx-aes-ctr.c b/drivers/crypto/nx/nx-aes-ctr.c
+index 6d5ce1a66f1ee..02ad26012c665 100644
+--- a/drivers/crypto/nx/nx-aes-ctr.c
++++ b/drivers/crypto/nx/nx-aes-ctr.c
+@@ -118,7 +118,7 @@ static int ctr3686_aes_nx_crypt(struct skcipher_request *req)
+ 	struct nx_crypto_ctx *nx_ctx = crypto_skcipher_ctx(tfm);
+ 	u8 iv[16];
+ 
+-	memcpy(iv, nx_ctx->priv.ctr.nonce, CTR_RFC3686_IV_SIZE);
++	memcpy(iv, nx_ctx->priv.ctr.nonce, CTR_RFC3686_NONCE_SIZE);
+ 	memcpy(iv + CTR_RFC3686_NONCE_SIZE, req->iv, CTR_RFC3686_IV_SIZE);
+ 	iv[12] = iv[13] = iv[14] = 0;
+ 	iv[15] = 1;
+diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
+index a3b38d2c92e70..39d17ed1db2f2 100644
+--- a/drivers/crypto/omap-sham.c
++++ b/drivers/crypto/omap-sham.c
+@@ -371,7 +371,7 @@ static int omap_sham_hw_init(struct omap_sham_dev *dd)
+ {
+ 	int err;
+ 
+-	err = pm_runtime_get_sync(dd->dev);
++	err = pm_runtime_resume_and_get(dd->dev);
+ 	if (err < 0) {
+ 		dev_err(dd->dev, "failed to get sync: %d\n", err);
+ 		return err;
+@@ -2243,7 +2243,7 @@ static int omap_sham_suspend(struct device *dev)
+ 
+ static int omap_sham_resume(struct device *dev)
+ {
+-	int err = pm_runtime_get_sync(dev);
++	int err = pm_runtime_resume_and_get(dev);
+ 	if (err < 0) {
+ 		dev_err(dev, "failed to get sync: %d\n", err);
+ 		return err;
+diff --git a/drivers/crypto/qat/qat_common/qat_hal.c b/drivers/crypto/qat/qat_common/qat_hal.c
+index 52ef80efeddc6..b40e81e0088f0 100644
+--- a/drivers/crypto/qat/qat_common/qat_hal.c
++++ b/drivers/crypto/qat/qat_common/qat_hal.c
+@@ -1213,7 +1213,11 @@ static int qat_hal_put_rel_wr_xfer(struct icp_qat_fw_loader_handle *handle,
+ 		pr_err("QAT: bad xfrAddr=0x%x\n", xfr_addr);
+ 		return -EINVAL;
+ 	}
+-	qat_hal_rd_rel_reg(handle, ae, ctx, ICP_GPB_REL, gprnum, &gprval);
++	status = qat_hal_rd_rel_reg(handle, ae, ctx, ICP_GPB_REL, gprnum, &gprval);
++	if (status) {
++		pr_err("QAT: failed to read register");
++		return status;
++	}
+ 	gpr_addr = qat_hal_get_reg_addr(ICP_GPB_REL, gprnum);
+ 	data16low = 0xffff & data;
+ 	data16hi = 0xffff & (data >> 0x10);
+diff --git a/drivers/crypto/qat/qat_common/qat_uclo.c b/drivers/crypto/qat/qat_common/qat_uclo.c
+index 5d1f28cd66809..6adc91fedb083 100644
+--- a/drivers/crypto/qat/qat_common/qat_uclo.c
++++ b/drivers/crypto/qat/qat_common/qat_uclo.c
+@@ -342,7 +342,6 @@ static int qat_uclo_init_umem_seg(struct icp_qat_fw_loader_handle *handle,
+ 	return 0;
+ }
+ 
+-#define ICP_DH895XCC_PESRAM_BAR_SIZE 0x80000
+ static int qat_uclo_init_ae_memory(struct icp_qat_fw_loader_handle *handle,
+ 				   struct icp_qat_uof_initmem *init_mem)
+ {
+diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
+index a2d3da0ad95f3..d8053789c8828 100644
+--- a/drivers/crypto/qce/skcipher.c
++++ b/drivers/crypto/qce/skcipher.c
+@@ -71,7 +71,7 @@ qce_skcipher_async_req_handle(struct crypto_async_request *async_req)
+ 	struct scatterlist *sg;
+ 	bool diff_dst;
+ 	gfp_t gfp;
+-	int ret;
++	int dst_nents, src_nents, ret;
+ 
+ 	rctx->iv = req->iv;
+ 	rctx->ivsize = crypto_skcipher_ivsize(skcipher);
+@@ -122,21 +122,26 @@ qce_skcipher_async_req_handle(struct crypto_async_request *async_req)
+ 	sg_mark_end(sg);
+ 	rctx->dst_sg = rctx->dst_tbl.sgl;
+ 
+-	ret = dma_map_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst);
+-	if (ret < 0)
++	dst_nents = dma_map_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst);
++	if (dst_nents < 0) {
++		ret = dst_nents;
+ 		goto error_free;
++	}
+ 
+ 	if (diff_dst) {
+-		ret = dma_map_sg(qce->dev, req->src, rctx->src_nents, dir_src);
+-		if (ret < 0)
++		src_nents = dma_map_sg(qce->dev, req->src, rctx->src_nents, dir_src);
++		if (src_nents < 0) {
++			ret = src_nents;
+ 			goto error_unmap_dst;
++		}
+ 		rctx->src_sg = req->src;
+ 	} else {
+ 		rctx->src_sg = rctx->dst_sg;
++		src_nents = dst_nents - 1;
+ 	}
+ 
+-	ret = qce_dma_prep_sgs(&qce->dma, rctx->src_sg, rctx->src_nents,
+-			       rctx->dst_sg, rctx->dst_nents,
++	ret = qce_dma_prep_sgs(&qce->dma, rctx->src_sg, src_nents,
++			       rctx->dst_sg, dst_nents,
+ 			       qce_skcipher_done, async_req);
+ 	if (ret)
+ 		goto error_unmap_src;
+diff --git a/drivers/crypto/sa2ul.c b/drivers/crypto/sa2ul.c
+index 4640fe0c1f221..f15fc1fb37079 100644
+--- a/drivers/crypto/sa2ul.c
++++ b/drivers/crypto/sa2ul.c
+@@ -2270,9 +2270,9 @@ static int sa_dma_init(struct sa_crypto_data *dd)
+ 
+ 	dd->dma_rx2 = dma_request_chan(dd->dev, "rx2");
+ 	if (IS_ERR(dd->dma_rx2)) {
+-		dma_release_channel(dd->dma_rx1);
+-		return dev_err_probe(dd->dev, PTR_ERR(dd->dma_rx2),
+-				     "Unable to request rx2 DMA channel\n");
++		ret = dev_err_probe(dd->dev, PTR_ERR(dd->dma_rx2),
++				    "Unable to request rx2 DMA channel\n");
++		goto err_dma_rx2;
+ 	}
+ 
+ 	dd->dma_tx = dma_request_chan(dd->dev, "tx");
+@@ -2293,28 +2293,31 @@ static int sa_dma_init(struct sa_crypto_data *dd)
+ 	if (ret) {
+ 		dev_err(dd->dev, "can't configure IN dmaengine slave: %d\n",
+ 			ret);
+-		return ret;
++		goto err_dma_config;
+ 	}
+ 
+ 	ret = dmaengine_slave_config(dd->dma_rx2, &cfg);
+ 	if (ret) {
+ 		dev_err(dd->dev, "can't configure IN dmaengine slave: %d\n",
+ 			ret);
+-		return ret;
++		goto err_dma_config;
+ 	}
+ 
+ 	ret = dmaengine_slave_config(dd->dma_tx, &cfg);
+ 	if (ret) {
+ 		dev_err(dd->dev, "can't configure OUT dmaengine slave: %d\n",
+ 			ret);
+-		return ret;
++		goto err_dma_config;
+ 	}
+ 
+ 	return 0;
+ 
++err_dma_config:
++	dma_release_channel(dd->dma_tx);
+ err_dma_tx:
+-	dma_release_channel(dd->dma_rx1);
+ 	dma_release_channel(dd->dma_rx2);
++err_dma_rx2:
++	dma_release_channel(dd->dma_rx1);
+ 
+ 	return ret;
+ }
+@@ -2353,13 +2356,14 @@ static int sa_ul_probe(struct platform_device *pdev)
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "%s: failed to get sync: %d\n", __func__,
+ 			ret);
++		pm_runtime_disable(dev);
+ 		return ret;
+ 	}
+ 
+ 	sa_init_mem(dev_data);
+ 	ret = sa_dma_init(dev_data);
+ 	if (ret)
+-		goto disable_pm_runtime;
++		goto destroy_dma_pool;
+ 
+ 	spin_lock_init(&dev_data->scid_lock);
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+@@ -2389,9 +2393,9 @@ release_dma:
+ 	dma_release_channel(dev_data->dma_rx1);
+ 	dma_release_channel(dev_data->dma_tx);
+ 
++destroy_dma_pool:
+ 	dma_pool_destroy(dev_data->sc_pool);
+ 
+-disable_pm_runtime:
+ 	pm_runtime_put_sync(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 
+diff --git a/drivers/crypto/ux500/hash/hash_core.c b/drivers/crypto/ux500/hash/hash_core.c
+index 3d407eebb2bab..1e2daf4030327 100644
+--- a/drivers/crypto/ux500/hash/hash_core.c
++++ b/drivers/crypto/ux500/hash/hash_core.c
+@@ -1009,6 +1009,7 @@ static int hash_hw_final(struct ahash_request *req)
+ 			goto out;
+ 		}
+ 	} else if (req->nbytes == 0 && ctx->keylen > 0) {
++		ret = -EPERM;
+ 		dev_err(device_data->dev, "%s: Empty message with keylength > 0, NOT supported\n",
+ 			__func__);
+ 		goto out;
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 98f03a02d1122..829128c0cc68c 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -789,6 +789,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
+ 	if (devfreq->profile->timer < 0
+ 		|| devfreq->profile->timer >= DEVFREQ_TIMER_NUM) {
+ 		mutex_unlock(&devfreq->lock);
++		err = -EINVAL;
+ 		goto err_dev;
+ 	}
+ 
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index 7b52691c45d26..4912a7b883801 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -263,6 +263,9 @@ static int __init i10nm_init(void)
+ 	if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR)))
+ 		return -EBUSY;
+ 
++	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return -ENODEV;
++
+ 	id = x86_match_cpu(i10nm_cpuids);
+ 	if (!id)
+ 		return -ENODEV;
+diff --git a/drivers/edac/pnd2_edac.c b/drivers/edac/pnd2_edac.c
+index 928f63a374c78..c94ca1f790c43 100644
+--- a/drivers/edac/pnd2_edac.c
++++ b/drivers/edac/pnd2_edac.c
+@@ -1554,6 +1554,9 @@ static int __init pnd2_init(void)
+ 	if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR)))
+ 		return -EBUSY;
+ 
++	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return -ENODEV;
++
+ 	id = x86_match_cpu(pnd2_cpuids);
+ 	if (!id)
+ 		return -ENODEV;
+diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
+index 93daa4297f2e0..4c626fcd4dcbb 100644
+--- a/drivers/edac/sb_edac.c
++++ b/drivers/edac/sb_edac.c
+@@ -3510,6 +3510,9 @@ static int __init sbridge_init(void)
+ 	if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR)))
+ 		return -EBUSY;
+ 
++	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return -ENODEV;
++
+ 	id = x86_match_cpu(sbridge_cpuids);
+ 	if (!id)
+ 		return -ENODEV;
+diff --git a/drivers/edac/skx_base.c b/drivers/edac/skx_base.c
+index 2c7db95df3263..f887e31666510 100644
+--- a/drivers/edac/skx_base.c
++++ b/drivers/edac/skx_base.c
+@@ -656,6 +656,9 @@ static int __init skx_init(void)
+ 	if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR)))
+ 		return -EBUSY;
+ 
++	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return -ENODEV;
++
+ 	id = x86_match_cpu(skx_cpuids);
+ 	if (!id)
+ 		return -ENODEV;
+diff --git a/drivers/edac/ti_edac.c b/drivers/edac/ti_edac.c
+index e7eae20f83d1d..169f96e51c293 100644
+--- a/drivers/edac/ti_edac.c
++++ b/drivers/edac/ti_edac.c
+@@ -197,6 +197,7 @@ static const struct of_device_id ti_edac_of_match[] = {
+ 	{ .compatible = "ti,emif-dra7xx", .data = (void *)EMIF_TYPE_DRA7 },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, ti_edac_of_match);
+ 
+ static int _emif_get_id(struct device_node *node)
+ {
+diff --git a/drivers/extcon/extcon-max8997.c b/drivers/extcon/extcon-max8997.c
+index 337b0eea4e629..64008808675ef 100644
+--- a/drivers/extcon/extcon-max8997.c
++++ b/drivers/extcon/extcon-max8997.c
+@@ -729,7 +729,7 @@ static int max8997_muic_probe(struct platform_device *pdev)
+ 				2, info->status);
+ 	if (ret) {
+ 		dev_err(info->dev, "failed to read MUIC register\n");
+-		return ret;
++		goto err_irq;
+ 	}
+ 	cable_type = max8997_muic_get_cable_type(info,
+ 					   MAX8997_CABLE_GROUP_ADC, &attached);
+@@ -784,3 +784,4 @@ module_platform_driver(max8997_muic_driver);
+ MODULE_DESCRIPTION("Maxim MAX8997 Extcon driver");
+ MODULE_AUTHOR("Donggeun Kim <dg77.kim@samsung.com>");
+ MODULE_LICENSE("GPL");
++MODULE_ALIAS("platform:max8997-muic");
+diff --git a/drivers/extcon/extcon-sm5502.c b/drivers/extcon/extcon-sm5502.c
+index 106d4da647bd9..5e0718dee03bc 100644
+--- a/drivers/extcon/extcon-sm5502.c
++++ b/drivers/extcon/extcon-sm5502.c
+@@ -88,7 +88,6 @@ static struct reg_data sm5502_reg_data[] = {
+ 			| SM5502_REG_INTM2_MHL_MASK,
+ 		.invert = true,
+ 	},
+-	{ }
+ };
+ 
+ /* List of detectable cables */
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index 3aa489dba30a7..2a7687911c097 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -1034,24 +1034,32 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
+ 
+ 	/* add svc client device(s) */
+ 	svc = devm_kzalloc(dev, sizeof(*svc), GFP_KERNEL);
+-	if (!svc)
+-		return -ENOMEM;
++	if (!svc) {
++		ret = -ENOMEM;
++		goto err_free_kfifo;
++	}
+ 
+ 	svc->stratix10_svc_rsu = platform_device_alloc(STRATIX10_RSU, 0);
+ 	if (!svc->stratix10_svc_rsu) {
+ 		dev_err(dev, "failed to allocate %s device\n", STRATIX10_RSU);
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto err_free_kfifo;
+ 	}
+ 
+ 	ret = platform_device_add(svc->stratix10_svc_rsu);
+-	if (ret) {
+-		platform_device_put(svc->stratix10_svc_rsu);
+-		return ret;
+-	}
++	if (ret)
++		goto err_put_device;
++
+ 	dev_set_drvdata(dev, svc);
+ 
+ 	pr_info("Intel Service Layer Driver Initialized\n");
+ 
++	return 0;
++
++err_put_device:
++	platform_device_put(svc->stratix10_svc_rsu);
++err_free_kfifo:
++	kfifo_free(&controller->svc_fifo);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/fsi/fsi-core.c b/drivers/fsi/fsi-core.c
+index 4e60e84cd17a5..59ddc9fd5bca4 100644
+--- a/drivers/fsi/fsi-core.c
++++ b/drivers/fsi/fsi-core.c
+@@ -724,7 +724,7 @@ static ssize_t cfam_read(struct file *filep, char __user *buf, size_t count,
+ 	rc = count;
+  fail:
+ 	*offset = off;
+-	return count;
++	return rc;
+ }
+ 
+ static ssize_t cfam_write(struct file *filep, const char __user *buf,
+@@ -761,7 +761,7 @@ static ssize_t cfam_write(struct file *filep, const char __user *buf,
+ 	rc = count;
+  fail:
+ 	*offset = off;
+-	return count;
++	return rc;
+ }
+ 
+ static loff_t cfam_llseek(struct file *file, loff_t offset, int whence)
+diff --git a/drivers/fsi/fsi-occ.c b/drivers/fsi/fsi-occ.c
+index 9eeb856c8905e..a691f9732a13b 100644
+--- a/drivers/fsi/fsi-occ.c
++++ b/drivers/fsi/fsi-occ.c
+@@ -445,6 +445,7 @@ int fsi_occ_submit(struct device *dev, const void *request, size_t req_len,
+ 			goto done;
+ 
+ 		if (resp->return_status == OCC_RESP_CMD_IN_PRG ||
++		    resp->return_status == OCC_RESP_CRIT_INIT ||
+ 		    resp->seq_no != seq_no) {
+ 			rc = -ETIMEDOUT;
+ 
+diff --git a/drivers/fsi/fsi-sbefifo.c b/drivers/fsi/fsi-sbefifo.c
+index bfd5e5da80209..84cb965bfed5c 100644
+--- a/drivers/fsi/fsi-sbefifo.c
++++ b/drivers/fsi/fsi-sbefifo.c
+@@ -325,7 +325,8 @@ static int sbefifo_up_write(struct sbefifo *sbefifo, __be32 word)
+ static int sbefifo_request_reset(struct sbefifo *sbefifo)
+ {
+ 	struct device *dev = &sbefifo->fsi_dev->dev;
+-	u32 status, timeout;
++	unsigned long end_time;
++	u32 status;
+ 	int rc;
+ 
+ 	dev_dbg(dev, "Requesting FIFO reset\n");
+@@ -341,7 +342,8 @@ static int sbefifo_request_reset(struct sbefifo *sbefifo)
+ 	}
+ 
+ 	/* Wait for it to complete */
+-	for (timeout = 0; timeout < SBEFIFO_RESET_TIMEOUT; timeout++) {
++	end_time = jiffies + msecs_to_jiffies(SBEFIFO_RESET_TIMEOUT);
++	while (!time_after(jiffies, end_time)) {
+ 		rc = sbefifo_regr(sbefifo, SBEFIFO_UP | SBEFIFO_STS, &status);
+ 		if (rc) {
+ 			dev_err(dev, "Failed to read UP fifo status during reset"
+@@ -355,7 +357,7 @@ static int sbefifo_request_reset(struct sbefifo *sbefifo)
+ 			return 0;
+ 		}
+ 
+-		msleep(1);
++		cond_resched();
+ 	}
+ 	dev_err(dev, "FIFO reset timed out\n");
+ 
+@@ -400,7 +402,7 @@ static int sbefifo_cleanup_hw(struct sbefifo *sbefifo)
+ 	/* The FIFO already contains a reset request from the SBE ? */
+ 	if (down_status & SBEFIFO_STS_RESET_REQ) {
+ 		dev_info(dev, "Cleanup: FIFO reset request set, resetting\n");
+-		rc = sbefifo_regw(sbefifo, SBEFIFO_UP, SBEFIFO_PERFORM_RESET);
++		rc = sbefifo_regw(sbefifo, SBEFIFO_DOWN, SBEFIFO_PERFORM_RESET);
+ 		if (rc) {
+ 			sbefifo->broken = true;
+ 			dev_err(dev, "Cleanup: Reset reg write failed, rc=%d\n", rc);
+diff --git a/drivers/fsi/fsi-scom.c b/drivers/fsi/fsi-scom.c
+index b45bfab7b7f55..75d1389e2626d 100644
+--- a/drivers/fsi/fsi-scom.c
++++ b/drivers/fsi/fsi-scom.c
+@@ -38,9 +38,10 @@
+ #define SCOM_STATUS_PIB_RESP_MASK	0x00007000
+ #define SCOM_STATUS_PIB_RESP_SHIFT	12
+ 
+-#define SCOM_STATUS_ANY_ERR		(SCOM_STATUS_PROTECTION | \
+-					 SCOM_STATUS_PARITY |	  \
+-					 SCOM_STATUS_PIB_ABORT | \
++#define SCOM_STATUS_FSI2PIB_ERROR	(SCOM_STATUS_PROTECTION |	\
++					 SCOM_STATUS_PARITY |		\
++					 SCOM_STATUS_PIB_ABORT)
++#define SCOM_STATUS_ANY_ERR		(SCOM_STATUS_FSI2PIB_ERROR |	\
+ 					 SCOM_STATUS_PIB_RESP_MASK)
+ /* SCOM address encodings */
+ #define XSCOM_ADDR_IND_FLAG		BIT_ULL(63)
+@@ -240,13 +241,14 @@ static int handle_fsi2pib_status(struct scom_device *scom, uint32_t status)
+ {
+ 	uint32_t dummy = -1;
+ 
+-	if (status & SCOM_STATUS_PROTECTION)
+-		return -EPERM;
+-	if (status & SCOM_STATUS_PARITY) {
++	if (status & SCOM_STATUS_FSI2PIB_ERROR)
+ 		fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy,
+ 				 sizeof(uint32_t));
++
++	if (status & SCOM_STATUS_PROTECTION)
++		return -EPERM;
++	if (status & SCOM_STATUS_PARITY)
+ 		return -EIO;
+-	}
+ 	/* Return -EBUSY on PIB abort to force a retry */
+ 	if (status & SCOM_STATUS_PIB_ABORT)
+ 		return -EBUSY;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 1e448f1b39a18..955a055bd9800 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -268,6 +268,9 @@ dm_dp_mst_detect(struct drm_connector *connector,
+ 	struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
+ 	struct amdgpu_dm_connector *master = aconnector->mst_port;
+ 
++	if (drm_connector_is_unregistered(connector))
++		return connector_status_disconnected;
++
+ 	return drm_dp_mst_detect_port(connector, ctx, &master->mst_mgr,
+ 				      aconnector->port);
+ }
+diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
+index 77066bca87939..ee82b2ddf9325 100644
+--- a/drivers/gpu/drm/ast/ast_main.c
++++ b/drivers/gpu/drm/ast/ast_main.c
+@@ -409,7 +409,7 @@ struct ast_private *ast_device_create(struct drm_driver *drv,
+ 	dev->pdev = pdev;
+ 	pci_set_drvdata(pdev, dev);
+ 
+-	ast->regs = pci_iomap(dev->pdev, 1, 0);
++	ast->regs = pcim_iomap(pdev, 1, 0);
+ 	if (!ast->regs)
+ 		return ERR_PTR(-EIO);
+ 
+@@ -425,7 +425,7 @@ struct ast_private *ast_device_create(struct drm_driver *drv,
+ 
+ 	/* "map" IO regs if the above hasn't done so already */
+ 	if (!ast->ioregs) {
+-		ast->ioregs = pci_iomap(dev->pdev, 2, 0);
++		ast->ioregs = pcim_iomap(pdev, 2, 0);
+ 		if (!ast->ioregs)
+ 			return ERR_PTR(-EIO);
+ 	}
+diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig
+index e145cbb35baca..4e82647a621ef 100644
+--- a/drivers/gpu/drm/bridge/Kconfig
++++ b/drivers/gpu/drm/bridge/Kconfig
+@@ -130,7 +130,7 @@ config DRM_SIL_SII8620
+ 	tristate "Silicon Image SII8620 HDMI/MHL bridge"
+ 	depends on OF
+ 	select DRM_KMS_HELPER
+-	imply EXTCON
++	select EXTCON
+ 	depends on RC_CORE || !RC_CORE
+ 	help
+ 	  Silicon Image SII8620 HDMI/MHL bridge chip driver.
+diff --git a/drivers/gpu/drm/drm_bridge.c b/drivers/gpu/drm/drm_bridge.c
+index 64f0effb52ac1..044acd07c1538 100644
+--- a/drivers/gpu/drm/drm_bridge.c
++++ b/drivers/gpu/drm/drm_bridge.c
+@@ -522,6 +522,9 @@ void drm_bridge_chain_pre_enable(struct drm_bridge *bridge)
+ 	list_for_each_entry_reverse(iter, &encoder->bridge_chain, chain_node) {
+ 		if (iter->funcs->pre_enable)
+ 			iter->funcs->pre_enable(iter);
++
++		if (iter == bridge)
++			break;
+ 	}
+ }
+ EXPORT_SYMBOL(drm_bridge_chain_pre_enable);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+index 3416e9617ee9a..96f3908e4c5b9 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+@@ -222,7 +222,7 @@ int dpu_mdss_init(struct drm_device *dev)
+ 	struct msm_drm_private *priv = dev->dev_private;
+ 	struct dpu_mdss *dpu_mdss;
+ 	struct dss_module_power *mp;
+-	int ret = 0;
++	int ret;
+ 	int irq;
+ 
+ 	dpu_mdss = devm_kzalloc(dev->dev, sizeof(*dpu_mdss), GFP_KERNEL);
+@@ -250,8 +250,10 @@ int dpu_mdss_init(struct drm_device *dev)
+ 		goto irq_domain_error;
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq < 0)
++	if (irq < 0) {
++		ret = irq;
+ 		goto irq_error;
++	}
+ 
+ 	irq_set_chained_handler_and_data(irq, dpu_mdss_irq,
+ 					 dpu_mdss);
+@@ -260,7 +262,7 @@ int dpu_mdss_init(struct drm_device *dev)
+ 
+ 	pm_runtime_enable(dev->dev);
+ 
+-	return ret;
++	return 0;
+ 
+ irq_error:
+ 	_dpu_mdss_irq_domain_fini(dpu_mdss);
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 0aacc43faefa3..edee4c2a76ce4 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -505,6 +505,7 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv)
+ 		priv->event_thread[i].worker = kthread_create_worker(0,
+ 			"crtc_event:%d", priv->event_thread[i].crtc_id);
+ 		if (IS_ERR(priv->event_thread[i].worker)) {
++			ret = PTR_ERR(priv->event_thread[i].worker);
+ 			DRM_DEV_ERROR(dev, "failed to create crtc_event kthread\n");
+ 			goto err_msm_uninit;
+ 		}
+diff --git a/drivers/gpu/drm/pl111/Kconfig b/drivers/gpu/drm/pl111/Kconfig
+index 80f6748055e36..3aae387a96af2 100644
+--- a/drivers/gpu/drm/pl111/Kconfig
++++ b/drivers/gpu/drm/pl111/Kconfig
+@@ -3,6 +3,7 @@ config DRM_PL111
+ 	tristate "DRM Support for PL111 CLCD Controller"
+ 	depends on DRM
+ 	depends on ARM || ARM64 || COMPILE_TEST
++	depends on VEXPRESS_CONFIG || VEXPRESS_CONFIG=n
+ 	depends on COMMON_CLK
+ 	select DRM_KMS_HELPER
+ 	select DRM_KMS_CMA_HELPER
+diff --git a/drivers/gpu/drm/qxl/qxl_dumb.c b/drivers/gpu/drm/qxl/qxl_dumb.c
+index c04cd5a2553ce..e377bdbff90dd 100644
+--- a/drivers/gpu/drm/qxl/qxl_dumb.c
++++ b/drivers/gpu/drm/qxl/qxl_dumb.c
+@@ -58,6 +58,8 @@ int qxl_mode_dumb_create(struct drm_file *file_priv,
+ 	surf.height = args->height;
+ 	surf.stride = pitch;
+ 	surf.format = format;
++	surf.data = 0;
++
+ 	r = qxl_gem_object_create_with_handle(qdev, file_priv,
+ 					      QXL_GEM_DOMAIN_SURFACE,
+ 					      args->size, &surf, &qobj,
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+index a4a45daf93f2b..6802d9b65f828 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+@@ -73,6 +73,7 @@ static int cdn_dp_grf_write(struct cdn_dp_device *dp,
+ 	ret = regmap_write(dp->grf, reg, val);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dp->dev, "Could not write to GRF: %d\n", ret);
++		clk_disable_unprepare(dp->grf_clk);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-reg.c b/drivers/gpu/drm/rockchip/cdn-dp-reg.c
+index 9d2163ef4d6e2..33fb4d05c5065 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-reg.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-reg.c
+@@ -658,7 +658,7 @@ int cdn_dp_config_video(struct cdn_dp_device *dp)
+ 	 */
+ 	do {
+ 		tu_size_reg += 2;
+-		symbol = tu_size_reg * mode->clock * bit_per_pix;
++		symbol = (u64)tu_size_reg * mode->clock * bit_per_pix;
+ 		do_div(symbol, dp->max_lanes * link_rate * 8);
+ 		rem = do_div(symbol, 1000);
+ 		if (tu_size_reg > 64) {
+diff --git a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+index 542dcf7eddd66..75a76408cb29e 100644
+--- a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+@@ -692,13 +692,8 @@ static const struct dw_mipi_dsi_phy_ops dw_mipi_dsi_rockchip_phy_ops = {
+ 	.get_timing = dw_mipi_dsi_phy_get_timing,
+ };
+ 
+-static void dw_mipi_dsi_rockchip_config(struct dw_mipi_dsi_rockchip *dsi,
+-					int mux)
++static void dw_mipi_dsi_rockchip_config(struct dw_mipi_dsi_rockchip *dsi)
+ {
+-	if (dsi->cdata->lcdsel_grf_reg)
+-		regmap_write(dsi->grf_regmap, dsi->cdata->lcdsel_grf_reg,
+-			mux ? dsi->cdata->lcdsel_lit : dsi->cdata->lcdsel_big);
+-
+ 	if (dsi->cdata->lanecfg1_grf_reg)
+ 		regmap_write(dsi->grf_regmap, dsi->cdata->lanecfg1_grf_reg,
+ 					      dsi->cdata->lanecfg1);
+@@ -712,6 +707,13 @@ static void dw_mipi_dsi_rockchip_config(struct dw_mipi_dsi_rockchip *dsi,
+ 					      dsi->cdata->enable);
+ }
+ 
++static void dw_mipi_dsi_rockchip_set_lcdsel(struct dw_mipi_dsi_rockchip *dsi,
++					    int mux)
++{
++	regmap_write(dsi->grf_regmap, dsi->cdata->lcdsel_grf_reg,
++		mux ? dsi->cdata->lcdsel_lit : dsi->cdata->lcdsel_big);
++}
++
+ static int
+ dw_mipi_dsi_encoder_atomic_check(struct drm_encoder *encoder,
+ 				 struct drm_crtc_state *crtc_state,
+@@ -767,9 +769,9 @@ static void dw_mipi_dsi_encoder_enable(struct drm_encoder *encoder)
+ 		return;
+ 	}
+ 
+-	dw_mipi_dsi_rockchip_config(dsi, mux);
++	dw_mipi_dsi_rockchip_set_lcdsel(dsi, mux);
+ 	if (dsi->slave)
+-		dw_mipi_dsi_rockchip_config(dsi->slave, mux);
++		dw_mipi_dsi_rockchip_set_lcdsel(dsi->slave, mux);
+ 
+ 	clk_disable_unprepare(dsi->grf_clk);
+ }
+@@ -923,6 +925,24 @@ static int dw_mipi_dsi_rockchip_bind(struct device *dev,
+ 		return ret;
+ 	}
+ 
++	/*
++	 * With the GRF clock running, write lane and dual-mode configurations
++	 * that won't change immediately. If we waited until enable() to do
++	 * this, things like panel preparation would not be able to send
++	 * commands over DSI.
++	 */
++	ret = clk_prepare_enable(dsi->grf_clk);
++	if (ret) {
++		DRM_DEV_ERROR(dsi->dev, "Failed to enable grf_clk: %d\n", ret);
++		return ret;
++	}
++
++	dw_mipi_dsi_rockchip_config(dsi);
++	if (dsi->slave)
++		dw_mipi_dsi_rockchip_config(dsi->slave);
++
++	clk_disable_unprepare(dsi->grf_clk);
++
+ 	ret = rockchip_dsi_drm_create_encoder(dsi, drm_dev);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "Failed to create drm encoder\n");
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index c80f7d9fd13f8..0f23144491e40 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -1013,6 +1013,7 @@ static void vop_plane_atomic_update(struct drm_plane *plane,
+ 		VOP_WIN_SET(vop, win, alpha_en, 1);
+ 	} else {
+ 		VOP_WIN_SET(vop, win, src_alpha_ctl, SRC_ALPHA_EN(0));
++		VOP_WIN_SET(vop, win, alpha_en, 0);
+ 	}
+ 
+ 	VOP_WIN_SET(vop, win, enable, 1);
+diff --git a/drivers/gpu/drm/rockchip/rockchip_lvds.c b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+index 41edd0a421b25..7c20b4a24a7e2 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_lvds.c
++++ b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+@@ -499,11 +499,11 @@ static int px30_lvds_probe(struct platform_device *pdev,
+ 	if (IS_ERR(lvds->dphy))
+ 		return PTR_ERR(lvds->dphy);
+ 
+-	phy_init(lvds->dphy);
++	ret = phy_init(lvds->dphy);
+ 	if (ret)
+ 		return ret;
+ 
+-	phy_set_mode(lvds->dphy, PHY_MODE_LVDS);
++	ret = phy_set_mode(lvds->dphy, PHY_MODE_LVDS);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 88a8cb840cd54..25a09aaf58838 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1795,7 +1795,7 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ 							     &hpd_gpio_flags);
+ 		if (vc4_hdmi->hpd_gpio < 0) {
+ 			ret = vc4_hdmi->hpd_gpio;
+-			goto err_unprepare_hsm;
++			goto err_put_ddc;
+ 		}
+ 
+ 		vc4_hdmi->hpd_active_low = hpd_gpio_flags & OF_GPIO_ACTIVE_LOW;
+@@ -1836,8 +1836,8 @@ err_destroy_conn:
+ 	vc4_hdmi_connector_destroy(&vc4_hdmi->connector);
+ err_destroy_encoder:
+ 	drm_encoder_cleanup(encoder);
+-err_unprepare_hsm:
+ 	pm_runtime_disable(dev);
++err_put_ddc:
+ 	put_device(&vc4_hdmi->ddc->dev);
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/vmwgfx/device_include/svga3d_surfacedefs.h b/drivers/gpu/drm/vmwgfx/device_include/svga3d_surfacedefs.h
+index 4db25bd9fa22d..127eaf0a0a580 100644
+--- a/drivers/gpu/drm/vmwgfx/device_include/svga3d_surfacedefs.h
++++ b/drivers/gpu/drm/vmwgfx/device_include/svga3d_surfacedefs.h
+@@ -1467,6 +1467,7 @@ struct svga3dsurface_cache {
+ 
+ /**
+  * struct svga3dsurface_loc - Surface location
++ * @sheet: The multisample sheet.
+  * @sub_resource: Surface subresource. Defined as layer * num_mip_levels +
+  * mip_level.
+  * @x: X coordinate.
+@@ -1474,6 +1475,7 @@ struct svga3dsurface_cache {
+  * @z: Z coordinate.
+  */
+ struct svga3dsurface_loc {
++	u32 sheet;
+ 	u32 sub_resource;
+ 	u32 x, y, z;
+ };
+@@ -1566,8 +1568,8 @@ svga3dsurface_get_loc(const struct svga3dsurface_cache *cache,
+ 	u32 layer;
+ 	int i;
+ 
+-	if (offset >= cache->sheet_bytes)
+-		offset %= cache->sheet_bytes;
++	loc->sheet = offset / cache->sheet_bytes;
++	offset -= loc->sheet * cache->sheet_bytes;
+ 
+ 	layer = offset / cache->mip_chain_bytes;
+ 	offset -= layer * cache->mip_chain_bytes;
+@@ -1631,6 +1633,7 @@ svga3dsurface_min_loc(const struct svga3dsurface_cache *cache,
+ 		      u32 sub_resource,
+ 		      struct svga3dsurface_loc *loc)
+ {
++	loc->sheet = 0;
+ 	loc->sub_resource = sub_resource;
+ 	loc->x = loc->y = loc->z = 0;
+ }
+@@ -1652,6 +1655,7 @@ svga3dsurface_max_loc(const struct svga3dsurface_cache *cache,
+ 	const struct drm_vmw_size *size;
+ 	u32 mip;
+ 
++	loc->sheet = 0;
+ 	loc->sub_resource = sub_resource + 1;
+ 	mip = sub_resource % cache->num_mip_levels;
+ 	size = &cache->mip[mip].size;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index e67e2e8f6e6fa..83e1b54eb8647 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -2759,12 +2759,24 @@ static int vmw_cmd_dx_genmips(struct vmw_private *dev_priv,
+ {
+ 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdDXGenMips) =
+ 		container_of(header, typeof(*cmd), header);
+-	struct vmw_resource *ret;
++	struct vmw_resource *view;
++	struct vmw_res_cache_entry *rcache;
+ 
+-	ret = vmw_view_id_val_add(sw_context, vmw_view_sr,
+-				  cmd->body.shaderResourceViewId);
++	view = vmw_view_id_val_add(sw_context, vmw_view_sr,
++				   cmd->body.shaderResourceViewId);
++	if (IS_ERR(view))
++		return PTR_ERR(view);
+ 
+-	return PTR_ERR_OR_ZERO(ret);
++	/*
++	 * Normally the shader-resource view is not gpu-dirtying, but for
++	 * this particular command it is...
++	 * So mark the last looked-up surface, which is the surface
++	 * the view points to, gpu-dirty.
++	 */
++	rcache = &sw_context->res_cache[vmw_res_surface];
++	vmw_validation_res_set_dirty(sw_context->ctx, rcache->private,
++				     VMW_RES_DIRTY_SET);
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+index 3914bfee0533b..f493b20c7a38c 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+@@ -1802,6 +1802,19 @@ static void vmw_surface_tex_dirty_range_add(struct vmw_resource *res,
+ 	svga3dsurface_get_loc(cache, &loc2, end - 1);
+ 	svga3dsurface_inc_loc(cache, &loc2);
+ 
++	if (loc1.sheet != loc2.sheet) {
++		u32 sub_res;
++
++		/*
++		 * Multiple multisample sheets. To do this in an optimized
++		 * fashion, compute the dirty region for each sheet and the
++		 * resulting union. Since this is not a common case, just dirty
++		 * the whole surface.
++		 */
++		for (sub_res = 0; sub_res < dirty->num_subres; ++sub_res)
++			vmw_subres_dirty_full(dirty, sub_res);
++		return;
++	}
+ 	if (loc1.sub_resource + 1 == loc2.sub_resource) {
+ 		/* Dirty range covers a single sub-resource */
+ 		vmw_subres_dirty_add(dirty, &loc1, &loc2);
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 0f69f35f2957e..5550c943f9855 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -2306,12 +2306,8 @@ static int hid_device_remove(struct device *dev)
+ {
+ 	struct hid_device *hdev = to_hid_device(dev);
+ 	struct hid_driver *hdrv;
+-	int ret = 0;
+ 
+-	if (down_interruptible(&hdev->driver_input_lock)) {
+-		ret = -EINTR;
+-		goto end;
+-	}
++	down(&hdev->driver_input_lock);
+ 	hdev->io_started = false;
+ 
+ 	hdrv = hdev->driver;
+@@ -2326,8 +2322,8 @@ static int hid_device_remove(struct device *dev)
+ 
+ 	if (!hdev->io_started)
+ 		up(&hdev->driver_input_lock);
+-end:
+-	return ret;
++
++	return 0;
+ }
+ 
+ static ssize_t modalias_show(struct device *dev, struct device_attribute *a,
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index 195910dd2154e..e3835407e8d23 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -122,7 +122,7 @@
+ #define WACOM_HID_WD_TOUCHONOFF         (WACOM_HID_UP_WACOMDIGITIZER | 0x0454)
+ #define WACOM_HID_WD_BATTERY_LEVEL      (WACOM_HID_UP_WACOMDIGITIZER | 0x043b)
+ #define WACOM_HID_WD_EXPRESSKEY00       (WACOM_HID_UP_WACOMDIGITIZER | 0x0910)
+-#define WACOM_HID_WD_EXPRESSKEYCAP00    (WACOM_HID_UP_WACOMDIGITIZER | 0x0950)
++#define WACOM_HID_WD_EXPRESSKEYCAP00    (WACOM_HID_UP_WACOMDIGITIZER | 0x0940)
+ #define WACOM_HID_WD_MODE_CHANGE        (WACOM_HID_UP_WACOMDIGITIZER | 0x0980)
+ #define WACOM_HID_WD_MUTE_DEVICE        (WACOM_HID_UP_WACOMDIGITIZER | 0x0981)
+ #define WACOM_HID_WD_CONTROLPANEL       (WACOM_HID_UP_WACOMDIGITIZER | 0x0982)
+diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
+index 11170d9a2e1a5..bfd7f00a59ecf 100644
+--- a/drivers/hv/connection.c
++++ b/drivers/hv/connection.c
+@@ -229,8 +229,10 @@ int vmbus_connect(void)
+ 	 */
+ 
+ 	for (i = 0; ; i++) {
+-		if (i == ARRAY_SIZE(vmbus_versions))
++		if (i == ARRAY_SIZE(vmbus_versions)) {
++			ret = -EDOM;
+ 			goto cleanup;
++		}
+ 
+ 		version = vmbus_versions[i];
+ 		if (version > max_version)
+diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
+index 05566ecdbe4b4..1b914e418e41e 100644
+--- a/drivers/hv/hv_util.c
++++ b/drivers/hv/hv_util.c
+@@ -696,8 +696,8 @@ static int hv_timesync_init(struct hv_util_service *srv)
+ 	 */
+ 	hv_ptp_clock = ptp_clock_register(&ptp_hyperv_info, NULL);
+ 	if (IS_ERR_OR_NULL(hv_ptp_clock)) {
+-		pr_err("cannot register PTP clock: %ld\n",
+-		       PTR_ERR(hv_ptp_clock));
++		pr_err("cannot register PTP clock: %d\n",
++		       PTR_ERR_OR_ZERO(hv_ptp_clock));
+ 		hv_ptp_clock = NULL;
+ 	}
+ 
+diff --git a/drivers/hwmon/lm70.c b/drivers/hwmon/lm70.c
+index ae2b84263a445..6b884ea009877 100644
+--- a/drivers/hwmon/lm70.c
++++ b/drivers/hwmon/lm70.c
+@@ -22,10 +22,10 @@
+ #include <linux/hwmon.h>
+ #include <linux/mutex.h>
+ #include <linux/mod_devicetable.h>
++#include <linux/of.h>
++#include <linux/property.h>
+ #include <linux/spi/spi.h>
+ #include <linux/slab.h>
+-#include <linux/of_device.h>
+-#include <linux/acpi.h>
+ 
+ #define DRVNAME		"lm70"
+ 
+@@ -148,50 +148,17 @@ static const struct of_device_id lm70_of_ids[] = {
+ MODULE_DEVICE_TABLE(of, lm70_of_ids);
+ #endif
+ 
+-#ifdef CONFIG_ACPI
+-static const struct acpi_device_id lm70_acpi_ids[] = {
+-	{
+-		.id = "LM000070",
+-		.driver_data = LM70_CHIP_LM70,
+-	},
+-	{
+-		.id = "TMP00121",
+-		.driver_data = LM70_CHIP_TMP121,
+-	},
+-	{
+-		.id = "LM000071",
+-		.driver_data = LM70_CHIP_LM71,
+-	},
+-	{
+-		.id = "LM000074",
+-		.driver_data = LM70_CHIP_LM74,
+-	},
+-	{},
+-};
+-MODULE_DEVICE_TABLE(acpi, lm70_acpi_ids);
+-#endif
+-
+ static int lm70_probe(struct spi_device *spi)
+ {
+-	const struct of_device_id *of_match;
+ 	struct device *hwmon_dev;
+ 	struct lm70 *p_lm70;
+ 	int chip;
+ 
+-	of_match = of_match_device(lm70_of_ids, &spi->dev);
+-	if (of_match)
+-		chip = (int)(uintptr_t)of_match->data;
+-	else {
+-#ifdef CONFIG_ACPI
+-		const struct acpi_device_id *acpi_match;
+-
+-		acpi_match = acpi_match_device(lm70_acpi_ids, &spi->dev);
+-		if (acpi_match)
+-			chip = (int)(uintptr_t)acpi_match->driver_data;
+-		else
+-#endif
+-			chip = spi_get_device_id(spi)->driver_data;
+-	}
++	if (dev_fwnode(&spi->dev))
++		chip = (int)(uintptr_t)device_get_match_data(&spi->dev);
++	else
++		chip = spi_get_device_id(spi)->driver_data;
++
+ 
+ 	/* signaling is SPI_MODE_0 */
+ 	if (spi->mode & (SPI_CPOL | SPI_CPHA))
+@@ -227,7 +194,6 @@ static struct spi_driver lm70_driver = {
+ 	.driver = {
+ 		.name	= "lm70",
+ 		.of_match_table	= of_match_ptr(lm70_of_ids),
+-		.acpi_match_table = ACPI_PTR(lm70_acpi_ids),
+ 	},
+ 	.id_table = lm70_ids,
+ 	.probe	= lm70_probe,
+diff --git a/drivers/hwmon/max31722.c b/drivers/hwmon/max31722.c
+index 062eceb7be0db..613338cbcb170 100644
+--- a/drivers/hwmon/max31722.c
++++ b/drivers/hwmon/max31722.c
+@@ -6,7 +6,6 @@
+  * Copyright (c) 2016, Intel Corporation.
+  */
+ 
+-#include <linux/acpi.h>
+ #include <linux/hwmon.h>
+ #include <linux/hwmon-sysfs.h>
+ #include <linux/kernel.h>
+@@ -133,20 +132,12 @@ static const struct spi_device_id max31722_spi_id[] = {
+ 	{"max31723", 0},
+ 	{}
+ };
+-
+-static const struct acpi_device_id __maybe_unused max31722_acpi_id[] = {
+-	{"MAX31722", 0},
+-	{"MAX31723", 0},
+-	{}
+-};
+-
+ MODULE_DEVICE_TABLE(spi, max31722_spi_id);
+ 
+ static struct spi_driver max31722_driver = {
+ 	.driver = {
+ 		.name = "max31722",
+ 		.pm = &max31722_pm_ops,
+-		.acpi_match_table = ACPI_PTR(max31722_acpi_id),
+ 	},
+ 	.probe =            max31722_probe,
+ 	.remove =           max31722_remove,
+diff --git a/drivers/hwmon/max31790.c b/drivers/hwmon/max31790.c
+index 86e6c71db685c..67677c4377687 100644
+--- a/drivers/hwmon/max31790.c
++++ b/drivers/hwmon/max31790.c
+@@ -27,6 +27,7 @@
+ 
+ /* Fan Config register bits */
+ #define MAX31790_FAN_CFG_RPM_MODE	0x80
++#define MAX31790_FAN_CFG_CTRL_MON	0x10
+ #define MAX31790_FAN_CFG_TACH_INPUT_EN	0x08
+ #define MAX31790_FAN_CFG_TACH_INPUT	0x01
+ 
+@@ -104,7 +105,7 @@ static struct max31790_data *max31790_update_device(struct device *dev)
+ 				data->tach[NR_CHANNEL + i] = rv;
+ 			} else {
+ 				rv = i2c_smbus_read_word_swapped(client,
+-						MAX31790_REG_PWMOUT(i));
++						MAX31790_REG_PWM_DUTY_CYCLE(i));
+ 				if (rv < 0)
+ 					goto abort;
+ 				data->pwm[i] = rv;
+@@ -170,7 +171,7 @@ static int max31790_read_fan(struct device *dev, u32 attr, int channel,
+ 
+ 	switch (attr) {
+ 	case hwmon_fan_input:
+-		sr = get_tach_period(data->fan_dynamics[channel]);
++		sr = get_tach_period(data->fan_dynamics[channel % NR_CHANNEL]);
+ 		rpm = RPM_FROM_REG(data->tach[channel], sr);
+ 		*val = rpm;
+ 		return 0;
+@@ -271,12 +272,12 @@ static int max31790_read_pwm(struct device *dev, u32 attr, int channel,
+ 		*val = data->pwm[channel] >> 8;
+ 		return 0;
+ 	case hwmon_pwm_enable:
+-		if (fan_config & MAX31790_FAN_CFG_RPM_MODE)
++		if (fan_config & MAX31790_FAN_CFG_CTRL_MON)
++			*val = 0;
++		else if (fan_config & MAX31790_FAN_CFG_RPM_MODE)
+ 			*val = 2;
+-		else if (fan_config & MAX31790_FAN_CFG_TACH_INPUT_EN)
+-			*val = 1;
+ 		else
+-			*val = 0;
++			*val = 1;
+ 		return 0;
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -299,31 +300,41 @@ static int max31790_write_pwm(struct device *dev, u32 attr, int channel,
+ 			err = -EINVAL;
+ 			break;
+ 		}
+-		data->pwm[channel] = val << 8;
++		data->valid = false;
+ 		err = i2c_smbus_write_word_swapped(client,
+ 						   MAX31790_REG_PWMOUT(channel),
+-						   data->pwm[channel]);
++						   val << 8);
+ 		break;
+ 	case hwmon_pwm_enable:
+ 		fan_config = data->fan_config[channel];
+ 		if (val == 0) {
+-			fan_config &= ~(MAX31790_FAN_CFG_TACH_INPUT_EN |
+-					MAX31790_FAN_CFG_RPM_MODE);
++			fan_config |= MAX31790_FAN_CFG_CTRL_MON;
++			/*
++			 * Disable RPM mode; otherwise disabling fan speed
++			 * monitoring is not possible.
++			 */
++			fan_config &= ~MAX31790_FAN_CFG_RPM_MODE;
+ 		} else if (val == 1) {
+-			fan_config = (fan_config |
+-				      MAX31790_FAN_CFG_TACH_INPUT_EN) &
+-				     ~MAX31790_FAN_CFG_RPM_MODE;
++			fan_config &= ~(MAX31790_FAN_CFG_CTRL_MON | MAX31790_FAN_CFG_RPM_MODE);
+ 		} else if (val == 2) {
+-			fan_config |= MAX31790_FAN_CFG_TACH_INPUT_EN |
+-				      MAX31790_FAN_CFG_RPM_MODE;
++			fan_config &= ~MAX31790_FAN_CFG_CTRL_MON;
++			/*
++			 * The chip sets MAX31790_FAN_CFG_TACH_INPUT_EN on its
++			 * own if MAX31790_FAN_CFG_RPM_MODE is set.
++			 * Do it here as well to reflect the actual register
++			 * value in the cache.
++			 */
++			fan_config |= (MAX31790_FAN_CFG_RPM_MODE | MAX31790_FAN_CFG_TACH_INPUT_EN);
+ 		} else {
+ 			err = -EINVAL;
+ 			break;
+ 		}
+-		data->fan_config[channel] = fan_config;
+-		err = i2c_smbus_write_byte_data(client,
+-					MAX31790_REG_FAN_CONFIG(channel),
+-					fan_config);
++		if (fan_config != data->fan_config[channel]) {
++			err = i2c_smbus_write_byte_data(client, MAX31790_REG_FAN_CONFIG(channel),
++							fan_config);
++			if (!err)
++				data->fan_config[channel] = fan_config;
++		}
+ 		break;
+ 	default:
+ 		err = -EOPNOTSUPP;
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index cc9e8025c533c..b2088d2d386a4 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -581,7 +581,7 @@ static struct coresight_device *
+ coresight_find_enabled_sink(struct coresight_device *csdev)
+ {
+ 	int i;
+-	struct coresight_device *sink;
++	struct coresight_device *sink = NULL;
+ 
+ 	if ((csdev->type == CORESIGHT_DEV_TYPE_SINK ||
+ 	     csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) &&
+diff --git a/drivers/iio/accel/bma180.c b/drivers/iio/accel/bma180.c
+index 6b74c2b04c157..da56488182d07 100644
+--- a/drivers/iio/accel/bma180.c
++++ b/drivers/iio/accel/bma180.c
+@@ -55,7 +55,7 @@ struct bma180_part_info {
+ 
+ 	u8 int_reset_reg, int_reset_mask;
+ 	u8 sleep_reg, sleep_mask;
+-	u8 bw_reg, bw_mask;
++	u8 bw_reg, bw_mask, bw_offset;
+ 	u8 scale_reg, scale_mask;
+ 	u8 power_reg, power_mask, lowpower_val;
+ 	u8 int_enable_reg, int_enable_mask;
+@@ -127,6 +127,7 @@ struct bma180_part_info {
+ 
+ #define BMA250_RANGE_MASK	GENMASK(3, 0) /* Range of accel values */
+ #define BMA250_BW_MASK		GENMASK(4, 0) /* Accel bandwidth */
++#define BMA250_BW_OFFSET	8
+ #define BMA250_SUSPEND_MASK	BIT(7) /* chip will sleep */
+ #define BMA250_LOWPOWER_MASK	BIT(6)
+ #define BMA250_DATA_INTEN_MASK	BIT(4)
+@@ -143,6 +144,7 @@ struct bma180_part_info {
+ 
+ #define BMA254_RANGE_MASK	GENMASK(3, 0) /* Range of accel values */
+ #define BMA254_BW_MASK		GENMASK(4, 0) /* Accel bandwidth */
++#define BMA254_BW_OFFSET	8
+ #define BMA254_SUSPEND_MASK	BIT(7) /* chip will sleep */
+ #define BMA254_LOWPOWER_MASK	BIT(6)
+ #define BMA254_DATA_INTEN_MASK	BIT(4)
+@@ -162,7 +164,11 @@ struct bma180_data {
+ 	int scale;
+ 	int bw;
+ 	bool pmode;
+-	u8 buff[16]; /* 3x 16-bit + 8-bit + padding + timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		s16 chan[4];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ enum bma180_chan {
+@@ -283,7 +289,8 @@ static int bma180_set_bw(struct bma180_data *data, int val)
+ 	for (i = 0; i < data->part_info->num_bw; ++i) {
+ 		if (data->part_info->bw_table[i] == val) {
+ 			ret = bma180_set_bits(data, data->part_info->bw_reg,
+-				data->part_info->bw_mask, i);
++				data->part_info->bw_mask,
++				i + data->part_info->bw_offset);
+ 			if (ret) {
+ 				dev_err(&data->client->dev,
+ 					"failed to set bandwidth\n");
+@@ -876,6 +883,7 @@ static const struct bma180_part_info bma180_part_info[] = {
+ 		.sleep_mask = BMA250_SUSPEND_MASK,
+ 		.bw_reg = BMA250_BW_REG,
+ 		.bw_mask = BMA250_BW_MASK,
++		.bw_offset = BMA250_BW_OFFSET,
+ 		.scale_reg = BMA250_RANGE_REG,
+ 		.scale_mask = BMA250_RANGE_MASK,
+ 		.power_reg = BMA250_POWER_REG,
+@@ -905,6 +913,7 @@ static const struct bma180_part_info bma180_part_info[] = {
+ 		.sleep_mask = BMA254_SUSPEND_MASK,
+ 		.bw_reg = BMA254_BW_REG,
+ 		.bw_mask = BMA254_BW_MASK,
++		.bw_offset = BMA254_BW_OFFSET,
+ 		.scale_reg = BMA254_RANGE_REG,
+ 		.scale_mask = BMA254_RANGE_MASK,
+ 		.power_reg = BMA254_POWER_REG,
+@@ -938,12 +947,12 @@ static irqreturn_t bma180_trigger_handler(int irq, void *p)
+ 			mutex_unlock(&data->mutex);
+ 			goto err;
+ 		}
+-		((s16 *)data->buff)[i++] = ret;
++		data->scan.chan[i++] = ret;
+ 	}
+ 
+ 	mutex_unlock(&data->mutex);
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buff, time_ns);
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan, time_ns);
+ err:
+ 	iio_trigger_notify_done(indio_dev->trig);
+ 
+diff --git a/drivers/iio/accel/bma220_spi.c b/drivers/iio/accel/bma220_spi.c
+index 3c9b0c6954e60..e8a9db1a82ad8 100644
+--- a/drivers/iio/accel/bma220_spi.c
++++ b/drivers/iio/accel/bma220_spi.c
+@@ -63,7 +63,11 @@ static const int bma220_scale_table[][2] = {
+ struct bma220_data {
+ 	struct spi_device *spi_device;
+ 	struct mutex lock;
+-	s8 buffer[16]; /* 3x8-bit channels + 5x8 padding + 8x8 timestamp */
++	struct {
++		s8 chans[3];
++		/* Ensure timestamp is naturally aligned. */
++		s64 timestamp __aligned(8);
++	} scan;
+ 	u8 tx_buf[2] ____cacheline_aligned;
+ };
+ 
+@@ -94,12 +98,12 @@ static irqreturn_t bma220_trigger_handler(int irq, void *p)
+ 
+ 	mutex_lock(&data->lock);
+ 	data->tx_buf[0] = BMA220_REG_ACCEL_X | BMA220_READ_MASK;
+-	ret = spi_write_then_read(spi, data->tx_buf, 1, data->buffer,
++	ret = spi_write_then_read(spi, data->tx_buf, 1, &data->scan.chans,
+ 				  ARRAY_SIZE(bma220_channels) - 1);
+ 	if (ret < 0)
+ 		goto err;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ err:
+ 	mutex_unlock(&data->lock);
+diff --git a/drivers/iio/accel/hid-sensor-accel-3d.c b/drivers/iio/accel/hid-sensor-accel-3d.c
+index 4c5e594024f8c..f05840d17fb71 100644
+--- a/drivers/iio/accel/hid-sensor-accel-3d.c
++++ b/drivers/iio/accel/hid-sensor-accel-3d.c
+@@ -27,8 +27,11 @@ struct accel_3d_state {
+ 	struct hid_sensor_hub_callbacks callbacks;
+ 	struct hid_sensor_common common_attributes;
+ 	struct hid_sensor_hub_attribute_info accel[ACCEL_3D_CHANNEL_MAX];
+-	/* Reserve for 3 channels + padding + timestamp */
+-	u32 accel_val[ACCEL_3D_CHANNEL_MAX + 3];
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u32 accel_val[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ 	int scale_pre_decml;
+ 	int scale_post_decml;
+ 	int scale_precision;
+@@ -239,8 +242,8 @@ static int accel_3d_proc_event(struct hid_sensor_hub_device *hsdev,
+ 			accel_state->timestamp = iio_get_time_ns(indio_dev);
+ 
+ 		hid_sensor_push_data(indio_dev,
+-				     accel_state->accel_val,
+-				     sizeof(accel_state->accel_val),
++				     &accel_state->scan,
++				     sizeof(accel_state->scan),
+ 				     accel_state->timestamp);
+ 
+ 		accel_state->timestamp = 0;
+@@ -265,7 +268,7 @@ static int accel_3d_capture_sample(struct hid_sensor_hub_device *hsdev,
+ 	case HID_USAGE_SENSOR_ACCEL_Y_AXIS:
+ 	case HID_USAGE_SENSOR_ACCEL_Z_AXIS:
+ 		offset = usage_id - HID_USAGE_SENSOR_ACCEL_X_AXIS;
+-		accel_state->accel_val[CHANNEL_SCAN_INDEX_X + offset] =
++		accel_state->scan.accel_val[CHANNEL_SCAN_INDEX_X + offset] =
+ 						*(u32 *)raw_data;
+ 		ret = 0;
+ 	break;
+diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c
+index 560a3373ff20d..c99e90469a245 100644
+--- a/drivers/iio/accel/kxcjk-1013.c
++++ b/drivers/iio/accel/kxcjk-1013.c
+@@ -132,13 +132,24 @@ enum kx_acpi_type {
+ 	ACPI_KIOX010A,
+ };
+ 
++enum kxcjk1013_axis {
++	AXIS_X,
++	AXIS_Y,
++	AXIS_Z,
++	AXIS_MAX
++};
++
+ struct kxcjk1013_data {
+ 	struct i2c_client *client;
+ 	struct iio_trigger *dready_trig;
+ 	struct iio_trigger *motion_trig;
+ 	struct iio_mount_matrix orientation;
+ 	struct mutex mutex;
+-	s16 buffer[8];
++	/* Ensure timestamp naturally aligned */
++	struct {
++		s16 chans[AXIS_MAX];
++		s64 timestamp __aligned(8);
++	} scan;
+ 	u8 odr_bits;
+ 	u8 range;
+ 	int wake_thres;
+@@ -152,13 +163,6 @@ struct kxcjk1013_data {
+ 	enum kx_acpi_type acpi_type;
+ };
+ 
+-enum kxcjk1013_axis {
+-	AXIS_X,
+-	AXIS_Y,
+-	AXIS_Z,
+-	AXIS_MAX,
+-};
+-
+ enum kxcjk1013_mode {
+ 	STANDBY,
+ 	OPERATION,
+@@ -1092,12 +1096,12 @@ static irqreturn_t kxcjk1013_trigger_handler(int irq, void *p)
+ 	ret = i2c_smbus_read_i2c_block_data_or_emulated(data->client,
+ 							KXCJK1013_REG_XOUT_L,
+ 							AXIS_MAX * 2,
+-							(u8 *)data->buffer);
++							(u8 *)data->scan.chans);
+ 	mutex_unlock(&data->mutex);
+ 	if (ret < 0)
+ 		goto err;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   data->timestamp);
+ err:
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/accel/mxc4005.c b/drivers/iio/accel/mxc4005.c
+index f877263dc6efb..5a2b0ffbb145d 100644
+--- a/drivers/iio/accel/mxc4005.c
++++ b/drivers/iio/accel/mxc4005.c
+@@ -56,7 +56,11 @@ struct mxc4005_data {
+ 	struct mutex mutex;
+ 	struct regmap *regmap;
+ 	struct iio_trigger *dready_trig;
+-	__be16 buffer[8];
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		__be16 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ 	bool trigger_enabled;
+ };
+ 
+@@ -135,7 +139,7 @@ static int mxc4005_read_xyz(struct mxc4005_data *data)
+ 	int ret;
+ 
+ 	ret = regmap_bulk_read(data->regmap, MXC4005_REG_XOUT_UPPER,
+-			       data->buffer, sizeof(data->buffer));
++			       data->scan.chans, sizeof(data->scan.chans));
+ 	if (ret < 0) {
+ 		dev_err(data->dev, "failed to read axes\n");
+ 		return ret;
+@@ -301,7 +305,7 @@ static irqreturn_t mxc4005_trigger_handler(int irq, void *private)
+ 	if (ret < 0)
+ 		goto err;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ 
+ err:
+diff --git a/drivers/iio/accel/stk8312.c b/drivers/iio/accel/stk8312.c
+index 3b59887a8581b..7d24801e8aa7c 100644
+--- a/drivers/iio/accel/stk8312.c
++++ b/drivers/iio/accel/stk8312.c
+@@ -103,7 +103,11 @@ struct stk8312_data {
+ 	u8 mode;
+ 	struct iio_trigger *dready_trig;
+ 	bool dready_trigger_on;
+-	s8 buffer[16]; /* 3x8-bit channels + 5x8 padding + 64-bit timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		s8 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ static IIO_CONST_ATTR(in_accel_scale_available, STK8312_SCALE_AVAIL);
+@@ -438,7 +442,7 @@ static irqreturn_t stk8312_trigger_handler(int irq, void *p)
+ 		ret = i2c_smbus_read_i2c_block_data(data->client,
+ 						    STK8312_REG_XOUT,
+ 						    STK8312_ALL_CHANNEL_SIZE,
+-						    data->buffer);
++						    data->scan.chans);
+ 		if (ret < STK8312_ALL_CHANNEL_SIZE) {
+ 			dev_err(&data->client->dev, "register read failed\n");
+ 			mutex_unlock(&data->lock);
+@@ -452,12 +456,12 @@ static irqreturn_t stk8312_trigger_handler(int irq, void *p)
+ 				mutex_unlock(&data->lock);
+ 				goto err;
+ 			}
+-			data->buffer[i++] = ret;
++			data->scan.chans[i++] = ret;
+ 		}
+ 	}
+ 	mutex_unlock(&data->lock);
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ err:
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/accel/stk8ba50.c b/drivers/iio/accel/stk8ba50.c
+index 3ead378b02c9b..e8087d7ee49f9 100644
+--- a/drivers/iio/accel/stk8ba50.c
++++ b/drivers/iio/accel/stk8ba50.c
+@@ -91,12 +91,11 @@ struct stk8ba50_data {
+ 	u8 sample_rate_idx;
+ 	struct iio_trigger *dready_trig;
+ 	bool dready_trigger_on;
+-	/*
+-	 * 3 x 16-bit channels (10-bit data, 6-bit padding) +
+-	 * 1 x 16 padding +
+-	 * 4 x 16 64-bit timestamp
+-	 */
+-	s16 buffer[8];
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		s16 chans[3];
++		s64 timetamp __aligned(8);
++	} scan;
+ };
+ 
+ #define STK8BA50_ACCEL_CHANNEL(index, reg, axis) {			\
+@@ -324,7 +323,7 @@ static irqreturn_t stk8ba50_trigger_handler(int irq, void *p)
+ 		ret = i2c_smbus_read_i2c_block_data(data->client,
+ 						    STK8BA50_REG_XOUT,
+ 						    STK8BA50_ALL_CHANNEL_SIZE,
+-						    (u8 *)data->buffer);
++						    (u8 *)data->scan.chans);
+ 		if (ret < STK8BA50_ALL_CHANNEL_SIZE) {
+ 			dev_err(&data->client->dev, "register read failed\n");
+ 			goto err;
+@@ -337,10 +336,10 @@ static irqreturn_t stk8ba50_trigger_handler(int irq, void *p)
+ 			if (ret < 0)
+ 				goto err;
+ 
+-			data->buffer[i++] = ret;
++			data->scan.chans[i++] = ret;
+ 		}
+ 	}
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ err:
+ 	mutex_unlock(&data->lock);
+diff --git a/drivers/iio/adc/at91-sama5d2_adc.c b/drivers/iio/adc/at91-sama5d2_adc.c
+index b917a4714a9c9..b8139c435a4b0 100644
+--- a/drivers/iio/adc/at91-sama5d2_adc.c
++++ b/drivers/iio/adc/at91-sama5d2_adc.c
+@@ -403,7 +403,8 @@ struct at91_adc_state {
+ 	struct at91_adc_dma		dma_st;
+ 	struct at91_adc_touch		touch_st;
+ 	struct iio_dev			*indio_dev;
+-	u16				buffer[AT91_BUFFER_MAX_HWORDS];
++	/* Ensure naturally aligned timestamp */
++	u16				buffer[AT91_BUFFER_MAX_HWORDS] __aligned(8);
+ 	/*
+ 	 * lock to prevent concurrent 'single conversion' requests through
+ 	 * sysfs.
+diff --git a/drivers/iio/adc/hx711.c b/drivers/iio/adc/hx711.c
+index 6a173531d355b..f7ee856a6b8b6 100644
+--- a/drivers/iio/adc/hx711.c
++++ b/drivers/iio/adc/hx711.c
+@@ -86,9 +86,9 @@ struct hx711_data {
+ 	struct mutex		lock;
+ 	/*
+ 	 * triggered buffer
+-	 * 2x32-bit channel + 64-bit timestamp
++	 * 2x32-bit channel + 64-bit naturally aligned timestamp
+ 	 */
+-	u32			buffer[4];
++	u32			buffer[4] __aligned(8);
+ 	/*
+ 	 * delay after a rising edge on SCK until the data is ready DOUT
+ 	 * this is dependent on the hx711 where the datasheet tells a
+diff --git a/drivers/iio/adc/mxs-lradc-adc.c b/drivers/iio/adc/mxs-lradc-adc.c
+index 30e29f44ebd2e..c480cb489c1a3 100644
+--- a/drivers/iio/adc/mxs-lradc-adc.c
++++ b/drivers/iio/adc/mxs-lradc-adc.c
+@@ -115,7 +115,8 @@ struct mxs_lradc_adc {
+ 	struct device		*dev;
+ 
+ 	void __iomem		*base;
+-	u32			buffer[10];
++	/* Maximum of 8 channels + 8 byte ts */
++	u32			buffer[10] __aligned(8);
+ 	struct iio_trigger	*trig;
+ 	struct completion	completion;
+ 	spinlock_t		lock;
+diff --git a/drivers/iio/adc/ti-ads1015.c b/drivers/iio/adc/ti-ads1015.c
+index 9fef39bcf997b..5b828428be77c 100644
+--- a/drivers/iio/adc/ti-ads1015.c
++++ b/drivers/iio/adc/ti-ads1015.c
+@@ -395,10 +395,14 @@ static irqreturn_t ads1015_trigger_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct ads1015_data *data = iio_priv(indio_dev);
+-	s16 buf[8]; /* 1x s16 ADC val + 3x s16 padding +  4x s16 timestamp */
++	/* Ensure natural alignment of timestamp */
++	struct {
++		s16 chan;
++		s64 timestamp __aligned(8);
++	} scan;
+ 	int chan, ret, res;
+ 
+-	memset(buf, 0, sizeof(buf));
++	memset(&scan, 0, sizeof(scan));
+ 
+ 	mutex_lock(&data->lock);
+ 	chan = find_first_bit(indio_dev->active_scan_mask,
+@@ -409,10 +413,10 @@ static irqreturn_t ads1015_trigger_handler(int irq, void *p)
+ 		goto err;
+ 	}
+ 
+-	buf[0] = res;
++	scan.chan = res;
+ 	mutex_unlock(&data->lock);
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, buf,
++	iio_push_to_buffers_with_timestamp(indio_dev, &scan,
+ 					   iio_get_time_ns(indio_dev));
+ 
+ err:
+diff --git a/drivers/iio/adc/ti-ads8688.c b/drivers/iio/adc/ti-ads8688.c
+index 16bcb37eebb72..79c803537dc42 100644
+--- a/drivers/iio/adc/ti-ads8688.c
++++ b/drivers/iio/adc/ti-ads8688.c
+@@ -383,7 +383,8 @@ static irqreturn_t ads8688_trigger_handler(int irq, void *p)
+ {
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+-	u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)];
++	/* Ensure naturally aligned timestamp */
++	u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8);
+ 	int i, j = 0;
+ 
+ 	for (i = 0; i < indio_dev->masklength; i++) {
+diff --git a/drivers/iio/adc/vf610_adc.c b/drivers/iio/adc/vf610_adc.c
+index 1d794cf3e3f13..fd57fc43e8e5c 100644
+--- a/drivers/iio/adc/vf610_adc.c
++++ b/drivers/iio/adc/vf610_adc.c
+@@ -167,7 +167,11 @@ struct vf610_adc {
+ 	u32 sample_freq_avail[5];
+ 
+ 	struct completion completion;
+-	u16 buffer[8];
++	/* Ensure the timestamp is naturally aligned */
++	struct {
++		u16 chan;
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ static const u32 vf610_hw_avgs[] = { 1, 4, 8, 16, 32 };
+@@ -579,9 +583,9 @@ static irqreturn_t vf610_adc_isr(int irq, void *dev_id)
+ 	if (coco & VF610_ADC_HS_COCO0) {
+ 		info->value = vf610_adc_read_data(info);
+ 		if (iio_buffer_enabled(indio_dev)) {
+-			info->buffer[0] = info->value;
++			info->scan.chan = info->value;
+ 			iio_push_to_buffers_with_timestamp(indio_dev,
+-					info->buffer,
++					&info->scan,
+ 					iio_get_time_ns(indio_dev));
+ 			iio_trigger_notify_done(indio_dev->trig);
+ 		} else
+diff --git a/drivers/iio/chemical/atlas-sensor.c b/drivers/iio/chemical/atlas-sensor.c
+index cdab9d04dedd0..0c8a50de89408 100644
+--- a/drivers/iio/chemical/atlas-sensor.c
++++ b/drivers/iio/chemical/atlas-sensor.c
+@@ -91,8 +91,8 @@ struct atlas_data {
+ 	struct regmap *regmap;
+ 	struct irq_work work;
+ 	unsigned int interrupt_enabled;
+-
+-	__be32 buffer[6]; /* 96-bit data + 32-bit pad + 64-bit timestamp */
++	/* 96-bit data + 32-bit pad + 64-bit timestamp */
++	__be32 buffer[6] __aligned(8);
+ };
+ 
+ static const struct regmap_config atlas_regmap_config = {
+diff --git a/drivers/iio/frequency/adf4350.c b/drivers/iio/frequency/adf4350.c
+index 82c050a3899d9..8f885b0af38e5 100644
+--- a/drivers/iio/frequency/adf4350.c
++++ b/drivers/iio/frequency/adf4350.c
+@@ -563,8 +563,10 @@ static int adf4350_probe(struct spi_device *spi)
+ 
+ 	st->lock_detect_gpiod = devm_gpiod_get_optional(&spi->dev, NULL,
+ 							GPIOD_IN);
+-	if (IS_ERR(st->lock_detect_gpiod))
+-		return PTR_ERR(st->lock_detect_gpiod);
++	if (IS_ERR(st->lock_detect_gpiod)) {
++		ret = PTR_ERR(st->lock_detect_gpiod);
++		goto error_disable_reg;
++	}
+ 
+ 	if (pdata->power_up_frequency) {
+ 		ret = adf4350_set_freq(st, pdata->power_up_frequency);
+diff --git a/drivers/iio/gyro/bmg160_core.c b/drivers/iio/gyro/bmg160_core.c
+index 8ddda96455fcb..39fe0b1785920 100644
+--- a/drivers/iio/gyro/bmg160_core.c
++++ b/drivers/iio/gyro/bmg160_core.c
+@@ -96,7 +96,11 @@ struct bmg160_data {
+ 	struct iio_trigger *motion_trig;
+ 	struct iio_mount_matrix orientation;
+ 	struct mutex mutex;
+-	s16 buffer[8];
++	/* Ensure naturally aligned timestamp */
++	struct {
++		s16 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ 	u32 dps_range;
+ 	int ev_enable_state;
+ 	int slope_thres;
+@@ -880,12 +884,12 @@ static irqreturn_t bmg160_trigger_handler(int irq, void *p)
+ 
+ 	mutex_lock(&data->mutex);
+ 	ret = regmap_bulk_read(data->regmap, BMG160_REG_XOUT_L,
+-			       data->buffer, AXIS_MAX * 2);
++			       data->scan.chans, AXIS_MAX * 2);
+ 	mutex_unlock(&data->mutex);
+ 	if (ret < 0)
+ 		goto err;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ err:
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/humidity/am2315.c b/drivers/iio/humidity/am2315.c
+index 02ad1767c845e..3398fa413ec5c 100644
+--- a/drivers/iio/humidity/am2315.c
++++ b/drivers/iio/humidity/am2315.c
+@@ -33,7 +33,11 @@
+ struct am2315_data {
+ 	struct i2c_client *client;
+ 	struct mutex lock;
+-	s16 buffer[8]; /* 2x16-bit channels + 2x16 padding + 4x16 timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		s16 chans[2];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ struct am2315_sensor_data {
+@@ -167,20 +171,20 @@ static irqreturn_t am2315_trigger_handler(int irq, void *p)
+ 
+ 	mutex_lock(&data->lock);
+ 	if (*(indio_dev->active_scan_mask) == AM2315_ALL_CHANNEL_MASK) {
+-		data->buffer[0] = sensor_data.hum_data;
+-		data->buffer[1] = sensor_data.temp_data;
++		data->scan.chans[0] = sensor_data.hum_data;
++		data->scan.chans[1] = sensor_data.temp_data;
+ 	} else {
+ 		i = 0;
+ 		for_each_set_bit(bit, indio_dev->active_scan_mask,
+ 				 indio_dev->masklength) {
+-			data->buffer[i] = (bit ? sensor_data.temp_data :
+-						 sensor_data.hum_data);
++			data->scan.chans[i] = (bit ? sensor_data.temp_data :
++					       sensor_data.hum_data);
+ 			i++;
+ 		}
+ 	}
+ 	mutex_unlock(&data->lock);
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ err:
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/imu/adis16400.c b/drivers/iio/imu/adis16400.c
+index 785a4ce606d89..4aff16466da02 100644
+--- a/drivers/iio/imu/adis16400.c
++++ b/drivers/iio/imu/adis16400.c
+@@ -647,9 +647,6 @@ static irqreturn_t adis16400_trigger_handler(int irq, void *p)
+ 	void *buffer;
+ 	int ret;
+ 
+-	if (!adis->buffer)
+-		return -ENOMEM;
+-
+ 	if (!(st->variant->flags & ADIS16400_NO_BURST) &&
+ 		st->adis.spi->max_speed_hz > ADIS16400_SPI_BURST) {
+ 		st->adis.spi->max_speed_hz = ADIS16400_SPI_BURST;
+diff --git a/drivers/iio/imu/adis16475.c b/drivers/iio/imu/adis16475.c
+index 197d482409911..3c4e4deb87608 100644
+--- a/drivers/iio/imu/adis16475.c
++++ b/drivers/iio/imu/adis16475.c
+@@ -990,7 +990,7 @@ static irqreturn_t adis16475_trigger_handler(int irq, void *p)
+ 
+ 	ret = spi_sync(adis->spi, &adis->msg);
+ 	if (ret)
+-		return ret;
++		goto check_burst32;
+ 
+ 	adis->spi->max_speed_hz = cached_spi_speed_hz;
+ 	buffer = adis->buffer;
+diff --git a/drivers/iio/imu/adis_buffer.c b/drivers/iio/imu/adis_buffer.c
+index ac354321f63a3..175af154e4437 100644
+--- a/drivers/iio/imu/adis_buffer.c
++++ b/drivers/iio/imu/adis_buffer.c
+@@ -129,9 +129,6 @@ static irqreturn_t adis_trigger_handler(int irq, void *p)
+ 	struct adis *adis = iio_device_get_drvdata(indio_dev);
+ 	int ret;
+ 
+-	if (!adis->buffer)
+-		return -ENOMEM;
+-
+ 	if (adis->data->has_paging) {
+ 		mutex_lock(&adis->state_lock);
+ 		if (adis->current_page != 0) {
+diff --git a/drivers/iio/light/isl29125.c b/drivers/iio/light/isl29125.c
+index b93b85dbc3a6a..ba53b50d711a1 100644
+--- a/drivers/iio/light/isl29125.c
++++ b/drivers/iio/light/isl29125.c
+@@ -51,7 +51,11 @@
+ struct isl29125_data {
+ 	struct i2c_client *client;
+ 	u8 conf1;
+-	u16 buffer[8]; /* 3x 16-bit, padding, 8 bytes timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u16 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ #define ISL29125_CHANNEL(_color, _si) { \
+@@ -184,10 +188,10 @@ static irqreturn_t isl29125_trigger_handler(int irq, void *p)
+ 		if (ret < 0)
+ 			goto done;
+ 
+-		data->buffer[j++] = ret;
++		data->scan.chans[j++] = ret;
+ 	}
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 		iio_get_time_ns(indio_dev));
+ 
+ done:
+diff --git a/drivers/iio/light/ltr501.c b/drivers/iio/light/ltr501.c
+index b4323d2db0b19..74ed2d88a3ed3 100644
+--- a/drivers/iio/light/ltr501.c
++++ b/drivers/iio/light/ltr501.c
+@@ -32,9 +32,12 @@
+ #define LTR501_PART_ID 0x86
+ #define LTR501_MANUFAC_ID 0x87
+ #define LTR501_ALS_DATA1 0x88 /* 16-bit, little endian */
++#define LTR501_ALS_DATA1_UPPER 0x89 /* upper 8 bits of LTR501_ALS_DATA1 */
+ #define LTR501_ALS_DATA0 0x8a /* 16-bit, little endian */
++#define LTR501_ALS_DATA0_UPPER 0x8b /* upper 8 bits of LTR501_ALS_DATA0 */
+ #define LTR501_ALS_PS_STATUS 0x8c
+ #define LTR501_PS_DATA 0x8d /* 16-bit, little endian */
++#define LTR501_PS_DATA_UPPER 0x8e /* upper 8 bits of LTR501_PS_DATA */
+ #define LTR501_INTR 0x8f /* output mode, polarity, mode */
+ #define LTR501_PS_THRESH_UP 0x90 /* 11 bit, ps upper threshold */
+ #define LTR501_PS_THRESH_LOW 0x92 /* 11 bit, ps lower threshold */
+@@ -406,18 +409,19 @@ static int ltr501_read_als(const struct ltr501_data *data, __le16 buf[2])
+ 
+ static int ltr501_read_ps(const struct ltr501_data *data)
+ {
+-	int ret, status;
++	__le16 status;
++	int ret;
+ 
+ 	ret = ltr501_drdy(data, LTR501_STATUS_PS_RDY);
+ 	if (ret < 0)
+ 		return ret;
+ 
+ 	ret = regmap_bulk_read(data->regmap, LTR501_PS_DATA,
+-			       &status, 2);
++			       &status, sizeof(status));
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return status;
++	return le16_to_cpu(status);
+ }
+ 
+ static int ltr501_read_intr_prst(const struct ltr501_data *data,
+@@ -1205,7 +1209,7 @@ static struct ltr501_chip_info ltr501_chip_info_tbl[] = {
+ 		.als_gain_tbl_size = ARRAY_SIZE(ltr559_als_gain_tbl),
+ 		.ps_gain = ltr559_ps_gain_tbl,
+ 		.ps_gain_tbl_size = ARRAY_SIZE(ltr559_ps_gain_tbl),
+-		.als_mode_active = BIT(1),
++		.als_mode_active = BIT(0),
+ 		.als_gain_mask = BIT(2) | BIT(3) | BIT(4),
+ 		.als_gain_shift = 2,
+ 		.info = &ltr501_info,
+@@ -1354,9 +1358,12 @@ static bool ltr501_is_volatile_reg(struct device *dev, unsigned int reg)
+ {
+ 	switch (reg) {
+ 	case LTR501_ALS_DATA1:
++	case LTR501_ALS_DATA1_UPPER:
+ 	case LTR501_ALS_DATA0:
++	case LTR501_ALS_DATA0_UPPER:
+ 	case LTR501_ALS_PS_STATUS:
+ 	case LTR501_PS_DATA:
++	case LTR501_PS_DATA_UPPER:
+ 		return true;
+ 	default:
+ 		return false;
+diff --git a/drivers/iio/light/tcs3414.c b/drivers/iio/light/tcs3414.c
+index 6fe5d46f80d40..0593abd600ec2 100644
+--- a/drivers/iio/light/tcs3414.c
++++ b/drivers/iio/light/tcs3414.c
+@@ -53,7 +53,11 @@ struct tcs3414_data {
+ 	u8 control;
+ 	u8 gain;
+ 	u8 timing;
+-	u16 buffer[8]; /* 4x 16-bit + 8 bytes timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u16 chans[4];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ #define TCS3414_CHANNEL(_color, _si, _addr) { \
+@@ -209,10 +213,10 @@ static irqreturn_t tcs3414_trigger_handler(int irq, void *p)
+ 		if (ret < 0)
+ 			goto done;
+ 
+-		data->buffer[j++] = ret;
++		data->scan.chans[j++] = ret;
+ 	}
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 		iio_get_time_ns(indio_dev));
+ 
+ done:
+diff --git a/drivers/iio/light/tcs3472.c b/drivers/iio/light/tcs3472.c
+index a0dc447aeb68b..371c6a39a1654 100644
+--- a/drivers/iio/light/tcs3472.c
++++ b/drivers/iio/light/tcs3472.c
+@@ -64,7 +64,11 @@ struct tcs3472_data {
+ 	u8 control;
+ 	u8 atime;
+ 	u8 apers;
+-	u16 buffer[8]; /* 4 16-bit channels + 64-bit timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u16 chans[4];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ static const struct iio_event_spec tcs3472_events[] = {
+@@ -386,10 +390,10 @@ static irqreturn_t tcs3472_trigger_handler(int irq, void *p)
+ 		if (ret < 0)
+ 			goto done;
+ 
+-		data->buffer[j++] = ret;
++		data->scan.chans[j++] = ret;
+ 	}
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 		iio_get_time_ns(indio_dev));
+ 
+ done:
+@@ -531,7 +535,8 @@ static int tcs3472_probe(struct i2c_client *client,
+ 	return 0;
+ 
+ free_irq:
+-	free_irq(client->irq, indio_dev);
++	if (client->irq)
++		free_irq(client->irq, indio_dev);
+ buffer_cleanup:
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ 	return ret;
+@@ -559,7 +564,8 @@ static int tcs3472_remove(struct i2c_client *client)
+ 	struct iio_dev *indio_dev = i2c_get_clientdata(client);
+ 
+ 	iio_device_unregister(indio_dev);
+-	free_irq(client->irq, indio_dev);
++	if (client->irq)
++		free_irq(client->irq, indio_dev);
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ 	tcs3472_powerdown(iio_priv(indio_dev));
+ 
+diff --git a/drivers/iio/light/vcnl4000.c b/drivers/iio/light/vcnl4000.c
+index fff4b36b8b58d..f4feb44903b3f 100644
+--- a/drivers/iio/light/vcnl4000.c
++++ b/drivers/iio/light/vcnl4000.c
+@@ -910,7 +910,7 @@ static irqreturn_t vcnl4010_trigger_handler(int irq, void *p)
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct vcnl4000_data *data = iio_priv(indio_dev);
+ 	const unsigned long *active_scan_mask = indio_dev->active_scan_mask;
+-	u16 buffer[8] = {0}; /* 1x16-bit + ts */
++	u16 buffer[8] __aligned(8) = {0}; /* 1x16-bit + naturally aligned ts */
+ 	bool data_read = false;
+ 	unsigned long isr;
+ 	int val = 0;
+diff --git a/drivers/iio/light/vcnl4035.c b/drivers/iio/light/vcnl4035.c
+index 765c44adac574..1bd85e21fd114 100644
+--- a/drivers/iio/light/vcnl4035.c
++++ b/drivers/iio/light/vcnl4035.c
+@@ -102,7 +102,8 @@ static irqreturn_t vcnl4035_trigger_consumer_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct vcnl4035_data *data = iio_priv(indio_dev);
+-	u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)];
++	/* Ensure naturally aligned timestamp */
++	u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)]  __aligned(8);
+ 	int ret;
+ 
+ 	ret = regmap_read(data->regmap, VCNL4035_ALS_DATA, (int *)buffer);
+diff --git a/drivers/iio/magnetometer/bmc150_magn.c b/drivers/iio/magnetometer/bmc150_magn.c
+index fc6840f9c1fa6..8042175275d09 100644
+--- a/drivers/iio/magnetometer/bmc150_magn.c
++++ b/drivers/iio/magnetometer/bmc150_magn.c
+@@ -136,8 +136,11 @@ struct bmc150_magn_data {
+ 	struct mutex mutex;
+ 	struct regmap *regmap;
+ 	struct iio_mount_matrix orientation;
+-	/* 4 x 32 bits for x, y z, 4 bytes align, 64 bits timestamp */
+-	s32 buffer[6];
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		s32 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ 	struct iio_trigger *dready_trig;
+ 	bool dready_trigger_on;
+ 	int max_odr;
+@@ -673,11 +676,11 @@ static irqreturn_t bmc150_magn_trigger_handler(int irq, void *p)
+ 	int ret;
+ 
+ 	mutex_lock(&data->mutex);
+-	ret = bmc150_magn_read_xyz(data, data->buffer);
++	ret = bmc150_magn_read_xyz(data, data->scan.chans);
+ 	if (ret < 0)
+ 		goto err;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ 
+ err:
+diff --git a/drivers/iio/magnetometer/hmc5843.h b/drivers/iio/magnetometer/hmc5843.h
+index 3f6c0b6629415..242f742f2643a 100644
+--- a/drivers/iio/magnetometer/hmc5843.h
++++ b/drivers/iio/magnetometer/hmc5843.h
+@@ -33,7 +33,8 @@ enum hmc5843_ids {
+  * @lock:		update and read regmap data
+  * @regmap:		hardware access register maps
+  * @variant:		describe chip variants
+- * @buffer:		3x 16-bit channels + padding + 64-bit timestamp
++ * @scan:		buffer to pack data for passing to
++ *			iio_push_to_buffers_with_timestamp()
+  */
+ struct hmc5843_data {
+ 	struct device *dev;
+@@ -41,7 +42,10 @@ struct hmc5843_data {
+ 	struct regmap *regmap;
+ 	const struct hmc5843_chip_info *variant;
+ 	struct iio_mount_matrix orientation;
+-	__be16 buffer[8];
++	struct {
++		__be16 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ int hmc5843_common_probe(struct device *dev, struct regmap *regmap,
+diff --git a/drivers/iio/magnetometer/hmc5843_core.c b/drivers/iio/magnetometer/hmc5843_core.c
+index 780faea61d82e..221563e0c18fd 100644
+--- a/drivers/iio/magnetometer/hmc5843_core.c
++++ b/drivers/iio/magnetometer/hmc5843_core.c
+@@ -446,13 +446,13 @@ static irqreturn_t hmc5843_trigger_handler(int irq, void *p)
+ 	}
+ 
+ 	ret = regmap_bulk_read(data->regmap, HMC5843_DATA_OUT_MSB_REGS,
+-			       data->buffer, 3 * sizeof(__be16));
++			       data->scan.chans, sizeof(data->scan.chans));
+ 
+ 	mutex_unlock(&data->lock);
+ 	if (ret < 0)
+ 		goto done;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   iio_get_time_ns(indio_dev));
+ 
+ done:
+diff --git a/drivers/iio/magnetometer/rm3100-core.c b/drivers/iio/magnetometer/rm3100-core.c
+index 7242897a05e95..720234a91db11 100644
+--- a/drivers/iio/magnetometer/rm3100-core.c
++++ b/drivers/iio/magnetometer/rm3100-core.c
+@@ -78,7 +78,8 @@ struct rm3100_data {
+ 	bool use_interrupt;
+ 	int conversion_time;
+ 	int scale;
+-	u8 buffer[RM3100_SCAN_BYTES];
++	/* Ensure naturally aligned timestamp */
++	u8 buffer[RM3100_SCAN_BYTES] __aligned(8);
+ 	struct iio_trigger *drdy_trig;
+ 
+ 	/*
+diff --git a/drivers/iio/potentiostat/lmp91000.c b/drivers/iio/potentiostat/lmp91000.c
+index f34ca769dc20d..d7ff74a798ba3 100644
+--- a/drivers/iio/potentiostat/lmp91000.c
++++ b/drivers/iio/potentiostat/lmp91000.c
+@@ -71,8 +71,8 @@ struct lmp91000_data {
+ 
+ 	struct completion completion;
+ 	u8 chan_select;
+-
+-	u32 buffer[4]; /* 64-bit data + 64-bit timestamp */
++	/* 64-bit data + 64-bit naturally aligned timestamp */
++	u32 buffer[4] __aligned(8);
+ };
+ 
+ static const struct iio_chan_spec lmp91000_channels[] = {
+diff --git a/drivers/iio/proximity/as3935.c b/drivers/iio/proximity/as3935.c
+index b79ada839e012..98330e26ac3bd 100644
+--- a/drivers/iio/proximity/as3935.c
++++ b/drivers/iio/proximity/as3935.c
+@@ -59,7 +59,11 @@ struct as3935_state {
+ 	unsigned long noise_tripped;
+ 	u32 tune_cap;
+ 	u32 nflwdth_reg;
+-	u8 buffer[16]; /* 8-bit data + 56-bit padding + 64-bit timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u8 chan;
++		s64 timestamp __aligned(8);
++	} scan;
+ 	u8 buf[2] ____cacheline_aligned;
+ };
+ 
+@@ -225,8 +229,8 @@ static irqreturn_t as3935_trigger_handler(int irq, void *private)
+ 	if (ret)
+ 		goto err_read;
+ 
+-	st->buffer[0] = val & AS3935_DATA_MASK;
+-	iio_push_to_buffers_with_timestamp(indio_dev, &st->buffer,
++	st->scan.chan = val & AS3935_DATA_MASK;
++	iio_push_to_buffers_with_timestamp(indio_dev, &st->scan,
+ 					   iio_get_time_ns(indio_dev));
+ err_read:
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/proximity/isl29501.c b/drivers/iio/proximity/isl29501.c
+index 90e76451c972a..5b6ea783795d9 100644
+--- a/drivers/iio/proximity/isl29501.c
++++ b/drivers/iio/proximity/isl29501.c
+@@ -938,7 +938,7 @@ static irqreturn_t isl29501_trigger_handler(int irq, void *p)
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct isl29501_private *isl29501 = iio_priv(indio_dev);
+ 	const unsigned long *active_mask = indio_dev->active_scan_mask;
+-	u32 buffer[4] = {}; /* 1x16-bit + ts */
++	u32 buffer[4] __aligned(8) = {}; /* 1x16-bit + naturally aligned ts */
+ 
+ 	if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask))
+ 		isl29501_register_read(isl29501, REG_DISTANCE, buffer);
+diff --git a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
+index cc206bfa09c78..d854b8d5fbbaf 100644
+--- a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
++++ b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
+@@ -44,7 +44,11 @@ struct lidar_data {
+ 	int (*xfer)(struct lidar_data *data, u8 reg, u8 *val, int len);
+ 	int i2c_enabled;
+ 
+-	u16 buffer[8]; /* 2 byte distance + 8 byte timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u16 chan;
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ static const struct iio_chan_spec lidar_channels[] = {
+@@ -230,9 +234,9 @@ static irqreturn_t lidar_trigger_handler(int irq, void *private)
+ 	struct lidar_data *data = iio_priv(indio_dev);
+ 	int ret;
+ 
+-	ret = lidar_get_measurement(data, data->buffer);
++	ret = lidar_get_measurement(data, &data->scan.chan);
+ 	if (!ret) {
+-		iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++		iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 						   iio_get_time_ns(indio_dev));
+ 	} else if (ret != -EINVAL) {
+ 		dev_err(&data->client->dev, "cannot read LIDAR measurement");
+diff --git a/drivers/iio/proximity/srf08.c b/drivers/iio/proximity/srf08.c
+index 70beac5c9c1df..9b0886760f76d 100644
+--- a/drivers/iio/proximity/srf08.c
++++ b/drivers/iio/proximity/srf08.c
+@@ -63,11 +63,11 @@ struct srf08_data {
+ 	int			range_mm;
+ 	struct mutex		lock;
+ 
+-	/*
+-	 * triggered buffer
+-	 * 1x16-bit channel + 3x16 padding + 4x16 timestamp
+-	 */
+-	s16			buffer[8];
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		s16 chan;
++		s64 timestamp __aligned(8);
++	} scan;
+ 
+ 	/* Sensor-Type */
+ 	enum srf08_sensor_type	sensor_type;
+@@ -190,9 +190,9 @@ static irqreturn_t srf08_trigger_handler(int irq, void *p)
+ 
+ 	mutex_lock(&data->lock);
+ 
+-	data->buffer[0] = sensor_data;
++	data->scan.chan = sensor_data;
+ 	iio_push_to_buffers_with_timestamp(indio_dev,
+-						data->buffer, pf->timestamp);
++					   &data->scan, pf->timestamp);
+ 
+ 	mutex_unlock(&data->lock);
+ err:
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index d1e94147fb165..0c879e40bd18d 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1856,6 +1856,7 @@ static void _destroy_id(struct rdma_id_private *id_priv,
+ {
+ 	cma_cancel_operation(id_priv, state);
+ 
++	rdma_restrack_del(&id_priv->res);
+ 	if (id_priv->cma_dev) {
+ 		if (rdma_cap_ib_cm(id_priv->id.device, 1)) {
+ 			if (id_priv->cm_id.ib)
+@@ -1865,7 +1866,6 @@ static void _destroy_id(struct rdma_id_private *id_priv,
+ 				iw_destroy_cm_id(id_priv->cm_id.iw);
+ 		}
+ 		cma_leave_mc_groups(id_priv);
+-		rdma_restrack_del(&id_priv->res);
+ 		cma_release_dev(id_priv);
+ 	}
+ 
+@@ -2476,8 +2476,10 @@ static int cma_iw_listen(struct rdma_id_private *id_priv, int backlog)
+ 	if (IS_ERR(id))
+ 		return PTR_ERR(id);
+ 
++	mutex_lock(&id_priv->qp_mutex);
+ 	id->tos = id_priv->tos;
+ 	id->tos_set = id_priv->tos_set;
++	mutex_unlock(&id_priv->qp_mutex);
+ 	id_priv->cm_id.iw = id;
+ 
+ 	memcpy(&id_priv->cm_id.iw->local_addr, cma_src_addr(id_priv),
+@@ -2537,8 +2539,10 @@ static int cma_listen_on_dev(struct rdma_id_private *id_priv,
+ 	cma_id_get(id_priv);
+ 	dev_id_priv->internal_id = 1;
+ 	dev_id_priv->afonly = id_priv->afonly;
++	mutex_lock(&id_priv->qp_mutex);
+ 	dev_id_priv->tos_set = id_priv->tos_set;
+ 	dev_id_priv->tos = id_priv->tos;
++	mutex_unlock(&id_priv->qp_mutex);
+ 
+ 	ret = rdma_listen(&dev_id_priv->id, id_priv->backlog);
+ 	if (ret)
+@@ -2585,8 +2589,10 @@ void rdma_set_service_type(struct rdma_cm_id *id, int tos)
+ 	struct rdma_id_private *id_priv;
+ 
+ 	id_priv = container_of(id, struct rdma_id_private, id);
++	mutex_lock(&id_priv->qp_mutex);
+ 	id_priv->tos = (u8) tos;
+ 	id_priv->tos_set = true;
++	mutex_unlock(&id_priv->qp_mutex);
+ }
+ EXPORT_SYMBOL(rdma_set_service_type);
+ 
+@@ -2613,8 +2619,10 @@ int rdma_set_ack_timeout(struct rdma_cm_id *id, u8 timeout)
+ 		return -EINVAL;
+ 
+ 	id_priv = container_of(id, struct rdma_id_private, id);
++	mutex_lock(&id_priv->qp_mutex);
+ 	id_priv->timeout = timeout;
+ 	id_priv->timeout_set = true;
++	mutex_unlock(&id_priv->qp_mutex);
+ 
+ 	return 0;
+ }
+@@ -3000,8 +3008,11 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
+ 
+ 	u8 default_roce_tos = id_priv->cma_dev->default_roce_tos[id_priv->id.port_num -
+ 					rdma_start_port(id_priv->cma_dev->device)];
+-	u8 tos = id_priv->tos_set ? id_priv->tos : default_roce_tos;
++	u8 tos;
+ 
++	mutex_lock(&id_priv->qp_mutex);
++	tos = id_priv->tos_set ? id_priv->tos : default_roce_tos;
++	mutex_unlock(&id_priv->qp_mutex);
+ 
+ 	work = kzalloc(sizeof *work, GFP_KERNEL);
+ 	if (!work)
+@@ -3048,8 +3059,12 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
+ 	 * PacketLifeTime = local ACK timeout/2
+ 	 * as a reasonable approximation for RoCE networks.
+ 	 */
+-	route->path_rec->packet_life_time = id_priv->timeout_set ?
+-		id_priv->timeout - 1 : CMA_IBOE_PACKET_LIFETIME;
++	mutex_lock(&id_priv->qp_mutex);
++	if (id_priv->timeout_set && id_priv->timeout)
++		route->path_rec->packet_life_time = id_priv->timeout - 1;
++	else
++		route->path_rec->packet_life_time = CMA_IBOE_PACKET_LIFETIME;
++	mutex_unlock(&id_priv->qp_mutex);
+ 
+ 	if (!route->path_rec->mtu) {
+ 		ret = -EINVAL;
+@@ -4073,8 +4088,11 @@ static int cma_connect_iw(struct rdma_id_private *id_priv,
+ 	if (IS_ERR(cm_id))
+ 		return PTR_ERR(cm_id);
+ 
++	mutex_lock(&id_priv->qp_mutex);
+ 	cm_id->tos = id_priv->tos;
+ 	cm_id->tos_set = id_priv->tos_set;
++	mutex_unlock(&id_priv->qp_mutex);
++
+ 	id_priv->cm_id.iw = cm_id;
+ 
+ 	memcpy(&cm_id->local_addr, cma_src_addr(id_priv),
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 418d133a8fb08..466026825dd75 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -3000,12 +3000,29 @@ static int ib_uverbs_ex_modify_wq(struct uverbs_attr_bundle *attrs)
+ 	if (!wq)
+ 		return -EINVAL;
+ 
+-	wq_attr.curr_wq_state = cmd.curr_wq_state;
+-	wq_attr.wq_state = cmd.wq_state;
+ 	if (cmd.attr_mask & IB_WQ_FLAGS) {
+ 		wq_attr.flags = cmd.flags;
+ 		wq_attr.flags_mask = cmd.flags_mask;
+ 	}
++
++	if (cmd.attr_mask & IB_WQ_CUR_STATE) {
++		if (cmd.curr_wq_state > IB_WQS_ERR)
++			return -EINVAL;
++
++		wq_attr.curr_wq_state = cmd.curr_wq_state;
++	} else {
++		wq_attr.curr_wq_state = wq->state;
++	}
++
++	if (cmd.attr_mask & IB_WQ_STATE) {
++		if (cmd.wq_state > IB_WQS_ERR)
++			return -EINVAL;
++
++		wq_attr.wq_state = cmd.wq_state;
++	} else {
++		wq_attr.wq_state = wq_attr.curr_wq_state;
++	}
++
+ 	ret = wq->device->ops.modify_wq(wq, &wq_attr, cmd.attr_mask,
+ 					&attrs->driver_udata);
+ 	rdma_lookup_put_uobject(&wq->uobject->uevent.uobject,
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index 5cb8e602294ca..6bc0818f4b2c6 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -4244,13 +4244,8 @@ int mlx4_ib_modify_wq(struct ib_wq *ibwq, struct ib_wq_attr *wq_attr,
+ 	if (wq_attr_mask & IB_WQ_FLAGS)
+ 		return -EOPNOTSUPP;
+ 
+-	cur_state = wq_attr_mask & IB_WQ_CUR_STATE ? wq_attr->curr_wq_state :
+-						     ibwq->state;
+-	new_state = wq_attr_mask & IB_WQ_STATE ? wq_attr->wq_state : cur_state;
+-
+-	if (cur_state  < IB_WQS_RESET || cur_state  > IB_WQS_ERR ||
+-	    new_state < IB_WQS_RESET || new_state > IB_WQS_ERR)
+-		return -EINVAL;
++	cur_state = wq_attr->curr_wq_state;
++	new_state = wq_attr->wq_state;
+ 
+ 	if ((new_state == IB_WQS_RDY) && (cur_state == IB_WQS_ERR))
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index b19506707e45c..eb69bec77e5d4 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -3440,8 +3440,6 @@ static void mlx5_ib_unbind_slave_port(struct mlx5_ib_dev *ibdev,
+ 
+ 	port->mp.mpi = NULL;
+ 
+-	list_add_tail(&mpi->list, &mlx5_ib_unaffiliated_port_list);
+-
+ 	spin_unlock(&port->mp.mpi_lock);
+ 
+ 	err = mlx5_nic_vport_unaffiliate_multiport(mpi->mdev);
+@@ -3594,6 +3592,8 @@ static void mlx5_ib_cleanup_multiport_master(struct mlx5_ib_dev *dev)
+ 				dev->port[i].mp.mpi = NULL;
+ 			} else {
+ 				mlx5_ib_dbg(dev, "unbinding port_num: %d\n", i + 1);
++				list_add_tail(&dev->port[i].mp.mpi->list,
++					      &mlx5_ib_unaffiliated_port_list);
+ 				mlx5_ib_unbind_slave_port(dev, dev->port[i].mp.mpi);
+ 			}
+ 		}
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 6d2715f65d788..8beba002e5dd7 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -5236,10 +5236,8 @@ int mlx5_ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr,
+ 
+ 	rqc = MLX5_ADDR_OF(modify_rq_in, in, ctx);
+ 
+-	curr_wq_state = (wq_attr_mask & IB_WQ_CUR_STATE) ?
+-		wq_attr->curr_wq_state : wq->state;
+-	wq_state = (wq_attr_mask & IB_WQ_STATE) ?
+-		wq_attr->wq_state : curr_wq_state;
++	curr_wq_state = wq_attr->curr_wq_state;
++	wq_state = wq_attr->wq_state;
+ 	if (curr_wq_state == IB_WQS_ERR)
+ 		curr_wq_state = MLX5_RQC_STATE_ERR;
+ 	if (wq_state == IB_WQS_ERR)
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index bce44502ab0ed..c071d5b1b85a7 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -212,10 +212,8 @@ static struct socket *rxe_setup_udp_tunnel(struct net *net, __be16 port,
+ 
+ 	/* Create UDP socket */
+ 	err = udp_sock_create(net, &udp_cfg, &sock);
+-	if (err < 0) {
+-		pr_err("failed to create udp socket. err = %d\n", err);
++	if (err < 0)
+ 		return ERR_PTR(err);
+-	}
+ 
+ 	tnl_cfg.encap_type = 1;
+ 	tnl_cfg.encap_rcv = rxe_udp_encap_recv;
+@@ -616,6 +614,12 @@ static int rxe_net_ipv6_init(void)
+ 
+ 	recv_sockets.sk6 = rxe_setup_udp_tunnel(&init_net,
+ 						htons(ROCE_V2_UDP_DPORT), true);
++	if (PTR_ERR(recv_sockets.sk6) == -EAFNOSUPPORT) {
++		recv_sockets.sk6 = NULL;
++		pr_warn("IPv6 is not supported, can not create a UDPv6 socket\n");
++		return 0;
++	}
++
+ 	if (IS_ERR(recv_sockets.sk6)) {
+ 		recv_sockets.sk6 = NULL;
+ 		pr_err("Failed to create IPv6 UDP tunnel\n");
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 1e716fe7014cc..a1b79015e6f22 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -125,7 +125,6 @@ static void free_rd_atomic_resources(struct rxe_qp *qp)
+ void free_rd_atomic_resource(struct rxe_qp *qp, struct resp_res *res)
+ {
+ 	if (res->type == RXE_ATOMIC_MASK) {
+-		rxe_drop_ref(qp);
+ 		kfree_skb(res->atomic.skb);
+ 	} else if (res->type == RXE_READ_MASK) {
+ 		if (res->read.mr)
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index c7e3b6a4af38f..83c03212099a2 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -966,8 +966,6 @@ static int send_atomic_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
+ 		goto out;
+ 	}
+ 
+-	rxe_add_ref(qp);
+-
+ 	res = &qp->resp.resources[qp->resp.res_head];
+ 	free_rd_atomic_resource(qp, res);
+ 	rxe_advance_resp_resource(qp);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 7db550ba25d7f..46fad202a380e 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -811,6 +811,9 @@ static struct rtrs_clt_sess *get_next_path_min_inflight(struct path_it *it)
+ 	int inflight;
+ 
+ 	list_for_each_entry_rcu(sess, &clt->paths_list, s.entry) {
++		if (unlikely(READ_ONCE(sess->state) != RTRS_CLT_CONNECTED))
++			continue;
++
+ 		if (unlikely(!list_empty(raw_cpu_ptr(sess->mp_skip_entry))))
+ 			continue;
+ 
+@@ -1724,7 +1727,19 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con,
+ 				  queue_depth);
+ 			return -ECONNRESET;
+ 		}
+-		if (!sess->rbufs || sess->queue_depth < queue_depth) {
++		if (sess->queue_depth > 0 && queue_depth != sess->queue_depth) {
++			rtrs_err(clt, "Error: queue depth changed\n");
++
++			/*
++			 * Stop any more reconnection attempts
++			 */
++			sess->reconnect_attempts = -1;
++			rtrs_err(clt,
++				"Disabling auto-reconnect. Trigger a manual reconnect after issue is resolved\n");
++			return -ECONNRESET;
++		}
++
++		if (!sess->rbufs) {
+ 			kfree(sess->rbufs);
+ 			sess->rbufs = kcalloc(queue_depth, sizeof(*sess->rbufs),
+ 					      GFP_KERNEL);
+@@ -1738,7 +1753,7 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con,
+ 		sess->chunk_size = sess->max_io_size + sess->max_hdr_size;
+ 
+ 		/*
+-		 * Global queue depth and IO size is always a minimum.
++		 * Global IO size is always a minimum.
+ 		 * If while a reconnection server sends us a value a bit
+ 		 * higher - client does not care and uses cached minimum.
+ 		 *
+@@ -1746,8 +1761,7 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con,
+ 		 * connections in parallel, use lock.
+ 		 */
+ 		mutex_lock(&clt->paths_mutex);
+-		clt->queue_depth = min_not_zero(sess->queue_depth,
+-						clt->queue_depth);
++		clt->queue_depth = sess->queue_depth;
+ 		clt->max_io_size = min_not_zero(sess->max_io_size,
+ 						clt->max_io_size);
+ 		mutex_unlock(&clt->paths_mutex);
+@@ -2692,6 +2706,8 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
+ 		if (err) {
+ 			list_del_rcu(&sess->s.entry);
+ 			rtrs_clt_close_conns(sess, true);
++			free_percpu(sess->stats->pcpu_stats);
++			kfree(sess->stats);
+ 			free_sess(sess);
+ 			goto close_all_sess;
+ 		}
+@@ -2700,6 +2716,8 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
+ 		if (err) {
+ 			list_del_rcu(&sess->s.entry);
+ 			rtrs_clt_close_conns(sess, true);
++			free_percpu(sess->stats->pcpu_stats);
++			kfree(sess->stats);
+ 			free_sess(sess);
+ 			goto close_all_sess;
+ 		}
+@@ -2959,6 +2977,8 @@ int rtrs_clt_create_path_from_sysfs(struct rtrs_clt *clt,
+ close_sess:
+ 	rtrs_clt_remove_path_from_arr(sess);
+ 	rtrs_clt_close_conns(sess, true);
++	free_percpu(sess->stats->pcpu_stats);
++	kfree(sess->stats);
+ 	free_sess(sess);
+ 
+ 	return err;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c b/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
+index 39708ab4f26e5..7c75e14590173 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
+@@ -214,6 +214,7 @@ rtrs_srv_destroy_once_sysfs_root_folders(struct rtrs_srv_sess *sess)
+ 		device_del(&srv->dev);
+ 		put_device(&srv->dev);
+ 	} else {
++		put_device(&srv->dev);
+ 		mutex_unlock(&srv->paths_mutex);
+ 	}
+ }
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index 43806180f85ec..b033bfa9f3839 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -1490,6 +1490,7 @@ static void free_sess(struct rtrs_srv_sess *sess)
+ 		kobject_del(&sess->kobj);
+ 		kobject_put(&sess->kobj);
+ 	} else {
++		kfree(sess->stats);
+ 		kfree(sess);
+ 	}
+ }
+@@ -1613,7 +1614,7 @@ static int create_con(struct rtrs_srv_sess *sess,
+ 	struct rtrs_sess *s = &sess->s;
+ 	struct rtrs_srv_con *con;
+ 
+-	u32 cq_size, wr_queue_size;
++	u32 cq_size, max_send_wr, max_recv_wr, wr_limit;
+ 	int err, cq_vector;
+ 
+ 	con = kzalloc(sizeof(*con), GFP_KERNEL);
+@@ -1634,30 +1635,42 @@ static int create_con(struct rtrs_srv_sess *sess,
+ 		 * All receive and all send (each requiring invalidate)
+ 		 * + 2 for drain and heartbeat
+ 		 */
+-		wr_queue_size = SERVICE_CON_QUEUE_DEPTH * 3 + 2;
+-		cq_size = wr_queue_size;
++		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
++		max_recv_wr = SERVICE_CON_QUEUE_DEPTH + 2;
++		cq_size = max_send_wr + max_recv_wr;
+ 	} else {
+-		/*
+-		 * If we have all receive requests posted and
+-		 * all write requests posted and each read request
+-		 * requires an invalidate request + drain
+-		 * and qp gets into error state.
+-		 */
+-		cq_size = srv->queue_depth * 3 + 1;
+ 		/*
+ 		 * In theory we might have queue_depth * 32
+ 		 * outstanding requests if an unsafe global key is used
+ 		 * and we have queue_depth read requests each consisting
+ 		 * of 32 different addresses. div 3 for mlx5.
+ 		 */
+-		wr_queue_size = sess->s.dev->ib_dev->attrs.max_qp_wr / 3;
++		wr_limit = sess->s.dev->ib_dev->attrs.max_qp_wr / 3;
++		/* when always_invlaidate enalbed, we need linv+rinv+mr+imm */
++		if (always_invalidate)
++			max_send_wr =
++				min_t(int, wr_limit,
++				      srv->queue_depth * (1 + 4) + 1);
++		else
++			max_send_wr =
++				min_t(int, wr_limit,
++				      srv->queue_depth * (1 + 2) + 1);
++
++		max_recv_wr = srv->queue_depth + 1;
++		/*
++		 * If we have all receive requests posted and
++		 * all write requests posted and each read request
++		 * requires an invalidate request + drain
++		 * and qp gets into error state.
++		 */
++		cq_size = max_send_wr + max_recv_wr;
+ 	}
+-	atomic_set(&con->sq_wr_avail, wr_queue_size);
++	atomic_set(&con->sq_wr_avail, max_send_wr);
+ 	cq_vector = rtrs_srv_get_next_cq_vector(sess);
+ 
+ 	/* TODO: SOFTIRQ can be faster, but be careful with softirq context */
+ 	err = rtrs_cq_qp_create(&sess->s, &con->c, 1, cq_vector, cq_size,
+-				 wr_queue_size, wr_queue_size,
++				 max_send_wr, max_recv_wr,
+ 				 IB_POLL_WORKQUEUE);
+ 	if (err) {
+ 		rtrs_err(s, "rtrs_cq_qp_create(), err: %d\n", err);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
+index d13aff0aa8165..4629bb758126a 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs.c
+@@ -373,7 +373,6 @@ void rtrs_stop_hb(struct rtrs_sess *sess)
+ {
+ 	cancel_delayed_work_sync(&sess->hb_dwork);
+ 	sess->hb_missed_cnt = 0;
+-	sess->hb_missed_max = 0;
+ }
+ EXPORT_SYMBOL_GPL(rtrs_stop_hb);
+ 
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index a8f85993dab30..86d5c4c92b363 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -998,7 +998,6 @@ static int srp_alloc_req_data(struct srp_rdma_ch *ch)
+ 	struct srp_device *srp_dev = target->srp_host->srp_dev;
+ 	struct ib_device *ibdev = srp_dev->dev;
+ 	struct srp_request *req;
+-	void *mr_list;
+ 	dma_addr_t dma_addr;
+ 	int i, ret = -ENOMEM;
+ 
+@@ -1009,12 +1008,12 @@ static int srp_alloc_req_data(struct srp_rdma_ch *ch)
+ 
+ 	for (i = 0; i < target->req_ring_size; ++i) {
+ 		req = &ch->req_ring[i];
+-		mr_list = kmalloc_array(target->mr_per_cmd, sizeof(void *),
+-					GFP_KERNEL);
+-		if (!mr_list)
+-			goto out;
+-		if (srp_dev->use_fast_reg)
+-			req->fr_list = mr_list;
++		if (srp_dev->use_fast_reg) {
++			req->fr_list = kmalloc_array(target->mr_per_cmd,
++						sizeof(void *), GFP_KERNEL);
++			if (!req->fr_list)
++				goto out;
++		}
+ 		req->indirect_desc = kmalloc(target->indirect_size, GFP_KERNEL);
+ 		if (!req->indirect_desc)
+ 			goto out;
+diff --git a/drivers/input/joydev.c b/drivers/input/joydev.c
+index 430dc69750048..675fcd0952a2d 100644
+--- a/drivers/input/joydev.c
++++ b/drivers/input/joydev.c
+@@ -500,7 +500,7 @@ static int joydev_handle_JSIOCSBTNMAP(struct joydev *joydev,
+ 	memcpy(joydev->keypam, keypam, len);
+ 
+ 	for (i = 0; i < joydev->nkey; i++)
+-		joydev->keymap[keypam[i] - BTN_MISC] = i;
++		joydev->keymap[joydev->keypam[i] - BTN_MISC] = i;
+ 
+  out:
+ 	kfree(keypam);
+diff --git a/drivers/input/keyboard/Kconfig b/drivers/input/keyboard/Kconfig
+index 793ecbbda32ca..9f60f1559e499 100644
+--- a/drivers/input/keyboard/Kconfig
++++ b/drivers/input/keyboard/Kconfig
+@@ -67,9 +67,6 @@ config KEYBOARD_AMIGA
+ 	  To compile this driver as a module, choose M here: the
+ 	  module will be called amikbd.
+ 
+-config ATARI_KBD_CORE
+-	bool
+-
+ config KEYBOARD_APPLESPI
+ 	tristate "Apple SPI keyboard and trackpad"
+ 	depends on ACPI && EFI
+diff --git a/drivers/input/keyboard/hil_kbd.c b/drivers/input/keyboard/hil_kbd.c
+index bb29a7c9a1c0c..54afb38601b9f 100644
+--- a/drivers/input/keyboard/hil_kbd.c
++++ b/drivers/input/keyboard/hil_kbd.c
+@@ -512,6 +512,7 @@ static int hil_dev_connect(struct serio *serio, struct serio_driver *drv)
+ 		    HIL_IDD_NUM_AXES_PER_SET(*idd)) {
+ 			printk(KERN_INFO PREFIX
+ 				"combo devices are not supported.\n");
++			error = -EINVAL;
+ 			goto bail1;
+ 		}
+ 
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index 45113767db964..a06385c55af2a 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -178,51 +178,6 @@ static const unsigned long goodix_irq_flags[] = {
+ 	IRQ_TYPE_LEVEL_HIGH,
+ };
+ 
+-/*
+- * Those tablets have their coordinates origin at the bottom right
+- * of the tablet, as if rotated 180 degrees
+- */
+-static const struct dmi_system_id rotated_screen[] = {
+-#if defined(CONFIG_DMI) && defined(CONFIG_X86)
+-	{
+-		.ident = "Teclast X89",
+-		.matches = {
+-			/* tPAD is too generic, also match on bios date */
+-			DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
+-			DMI_MATCH(DMI_BOARD_NAME, "tPAD"),
+-			DMI_MATCH(DMI_BIOS_DATE, "12/19/2014"),
+-		},
+-	},
+-	{
+-		.ident = "Teclast X98 Pro",
+-		.matches = {
+-			/*
+-			 * Only match BIOS date, because the manufacturers
+-			 * BIOS does not report the board name at all
+-			 * (sometimes)...
+-			 */
+-			DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
+-			DMI_MATCH(DMI_BIOS_DATE, "10/28/2015"),
+-		},
+-	},
+-	{
+-		.ident = "WinBook TW100",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "WinBook"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TW100")
+-		}
+-	},
+-	{
+-		.ident = "WinBook TW700",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "WinBook"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TW700")
+-		},
+-	},
+-#endif
+-	{}
+-};
+-
+ static const struct dmi_system_id nine_bytes_report[] = {
+ #if defined(CONFIG_DMI) && defined(CONFIG_X86)
+ 	{
+@@ -1121,13 +1076,6 @@ static int goodix_configure_dev(struct goodix_ts_data *ts)
+ 				  ABS_MT_POSITION_Y, ts->prop.max_y);
+ 	}
+ 
+-	if (dmi_check_system(rotated_screen)) {
+-		ts->prop.invert_x = true;
+-		ts->prop.invert_y = true;
+-		dev_dbg(&ts->client->dev,
+-			"Applying '180 degrees rotated screen' quirk\n");
+-	}
+-
+ 	if (dmi_check_system(nine_bytes_report)) {
+ 		ts->contact_size = 9;
+ 
+diff --git a/drivers/input/touchscreen/usbtouchscreen.c b/drivers/input/touchscreen/usbtouchscreen.c
+index 397cb1d3f481b..544a8f40b81f1 100644
+--- a/drivers/input/touchscreen/usbtouchscreen.c
++++ b/drivers/input/touchscreen/usbtouchscreen.c
+@@ -251,7 +251,7 @@ static int e2i_init(struct usbtouch_usb *usbtouch)
+ 	int ret;
+ 	struct usb_device *udev = interface_to_usbdev(usbtouch->interface);
+ 
+-	ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
++	ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 	                      0x01, 0x02, 0x0000, 0x0081,
+ 	                      NULL, 0, USB_CTRL_SET_TIMEOUT);
+ 
+@@ -531,7 +531,7 @@ static int mtouch_init(struct usbtouch_usb *usbtouch)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
++	ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 	                      MTOUCHUSB_RESET,
+ 	                      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 	                      1, 0, NULL, 0, USB_CTRL_SET_TIMEOUT);
+@@ -543,7 +543,7 @@ static int mtouch_init(struct usbtouch_usb *usbtouch)
+ 	msleep(150);
+ 
+ 	for (i = 0; i < 3; i++) {
+-		ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
++		ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 				      MTOUCHUSB_ASYNC_REPORT,
+ 				      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				      1, 1, NULL, 0, USB_CTRL_SET_TIMEOUT);
+@@ -722,7 +722,7 @@ static int dmc_tsc10_init(struct usbtouch_usb *usbtouch)
+ 	}
+ 
+ 	/* start sending data */
+-	ret = usb_control_msg(dev, usb_rcvctrlpipe (dev, 0),
++	ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
+ 	                      TSC10_CMD_DATA1,
+ 	                      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 	                      0, 0, NULL, 0, USB_CTRL_SET_TIMEOUT);
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index cc9869cc48e41..fa57986c2309c 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -1914,8 +1914,8 @@ static void print_iommu_info(void)
+ 		pci_info(pdev, "Found IOMMU cap 0x%hx\n", iommu->cap_ptr);
+ 
+ 		if (iommu->cap & (1 << IOMMU_CAP_EFR)) {
+-			pci_info(pdev, "Extended features (%#llx):",
+-				 iommu->features);
++			pr_info("Extended features (%#llx):", iommu->features);
++
+ 			for (i = 0; i < ARRAY_SIZE(feat_str); ++i) {
+ 				if (iommu_feature(iommu, (1ULL << i)))
+ 					pr_cont(" %s", feat_str[i]);
+diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
+index 0cbcd3fc3e7e8..d1539b7399a96 100644
+--- a/drivers/iommu/dma-iommu.c
++++ b/drivers/iommu/dma-iommu.c
+@@ -216,9 +216,11 @@ resv_iova:
+ 			lo = iova_pfn(iovad, start);
+ 			hi = iova_pfn(iovad, end);
+ 			reserve_iova(iovad, lo, hi);
+-		} else {
++		} else if (end < start) {
+ 			/* dma_ranges list should be sorted */
+-			dev_err(&dev->dev, "Failed to reserve IOVA\n");
++			dev_err(&dev->dev,
++				"Failed to reserve IOVA [%pa-%pa]\n",
++				&start, &end);
+ 			return -EINVAL;
+ 		}
+ 
+diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig
+index 849d3c5f908e4..56e8198e13d10 100644
+--- a/drivers/leds/Kconfig
++++ b/drivers/leds/Kconfig
+@@ -199,6 +199,7 @@ config LEDS_LM3530
+ 
+ config LEDS_LM3532
+ 	tristate "LCD Backlight driver for LM3532"
++	select REGMAP_I2C
+ 	depends on LEDS_CLASS
+ 	depends on I2C
+ 	help
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index 131ca83f5fb38..4365c1cc4505f 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -286,10 +286,6 @@ struct led_classdev *__must_check devm_of_led_get(struct device *dev,
+ 	if (!dev)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	/* Not using device tree? */
+-	if (!IS_ENABLED(CONFIG_OF) || !dev->of_node)
+-		return ERR_PTR(-ENOTSUPP);
+-
+ 	led = of_led_get(dev->of_node, index);
+ 	if (IS_ERR(led))
+ 		return led;
+diff --git a/drivers/leds/leds-as3645a.c b/drivers/leds/leds-as3645a.c
+index e8922fa033796..80411d41e802d 100644
+--- a/drivers/leds/leds-as3645a.c
++++ b/drivers/leds/leds-as3645a.c
+@@ -545,6 +545,7 @@ static int as3645a_parse_node(struct as3645a *flash,
+ 	if (!flash->indicator_node) {
+ 		dev_warn(&flash->client->dev,
+ 			 "can't find indicator node\n");
++		rval = -ENODEV;
+ 		goto out_err;
+ 	}
+ 
+diff --git a/drivers/leds/leds-ktd2692.c b/drivers/leds/leds-ktd2692.c
+index 632f10db4b3ff..f341da1503a49 100644
+--- a/drivers/leds/leds-ktd2692.c
++++ b/drivers/leds/leds-ktd2692.c
+@@ -256,6 +256,17 @@ static void ktd2692_setup(struct ktd2692_context *led)
+ 				 | KTD2692_REG_FLASH_CURRENT_BASE);
+ }
+ 
++static void regulator_disable_action(void *_data)
++{
++	struct device *dev = _data;
++	struct ktd2692_context *led = dev_get_drvdata(dev);
++	int ret;
++
++	ret = regulator_disable(led->regulator);
++	if (ret)
++		dev_err(dev, "Failed to disable supply: %d\n", ret);
++}
++
+ static int ktd2692_parse_dt(struct ktd2692_context *led, struct device *dev,
+ 			    struct ktd2692_led_config_data *cfg)
+ {
+@@ -286,8 +297,14 @@ static int ktd2692_parse_dt(struct ktd2692_context *led, struct device *dev,
+ 
+ 	if (led->regulator) {
+ 		ret = regulator_enable(led->regulator);
+-		if (ret)
++		if (ret) {
+ 			dev_err(dev, "Failed to enable supply: %d\n", ret);
++		} else {
++			ret = devm_add_action_or_reset(dev,
++						regulator_disable_action, dev);
++			if (ret)
++				return ret;
++		}
+ 	}
+ 
+ 	child_node = of_get_next_available_child(np, NULL);
+@@ -377,17 +394,9 @@ static int ktd2692_probe(struct platform_device *pdev)
+ static int ktd2692_remove(struct platform_device *pdev)
+ {
+ 	struct ktd2692_context *led = platform_get_drvdata(pdev);
+-	int ret;
+ 
+ 	led_classdev_flash_unregister(&led->fled_cdev);
+ 
+-	if (led->regulator) {
+-		ret = regulator_disable(led->regulator);
+-		if (ret)
+-			dev_err(&pdev->dev,
+-				"Failed to disable supply: %d\n", ret);
+-	}
+-
+ 	mutex_destroy(&led->lock);
+ 
+ 	return 0;
+diff --git a/drivers/leds/leds-lm36274.c b/drivers/leds/leds-lm36274.c
+index aadb03468a40a..a23a9424c2f38 100644
+--- a/drivers/leds/leds-lm36274.c
++++ b/drivers/leds/leds-lm36274.c
+@@ -127,6 +127,7 @@ static int lm36274_probe(struct platform_device *pdev)
+ 
+ 	ret = lm36274_init(chip);
+ 	if (ret) {
++		fwnode_handle_put(init_data.fwnode);
+ 		dev_err(chip->dev, "Failed to init the device\n");
+ 		return ret;
+ 	}
+diff --git a/drivers/leds/leds-lm3692x.c b/drivers/leds/leds-lm3692x.c
+index e945de45388ca..55e6443997ec9 100644
+--- a/drivers/leds/leds-lm3692x.c
++++ b/drivers/leds/leds-lm3692x.c
+@@ -435,6 +435,7 @@ static int lm3692x_probe_dt(struct lm3692x_led *led)
+ 
+ 	ret = fwnode_property_read_u32(child, "reg", &led->led_enable);
+ 	if (ret) {
++		fwnode_handle_put(child);
+ 		dev_err(&led->client->dev, "reg DT property missing\n");
+ 		return ret;
+ 	}
+@@ -449,12 +450,11 @@ static int lm3692x_probe_dt(struct lm3692x_led *led)
+ 
+ 	ret = devm_led_classdev_register_ext(&led->client->dev, &led->led_dev,
+ 					     &init_data);
+-	if (ret) {
++	if (ret)
+ 		dev_err(&led->client->dev, "led register err: %d\n", ret);
+-		return ret;
+-	}
+ 
+-	return 0;
++	fwnode_handle_put(init_data.fwnode);
++	return ret;
+ }
+ 
+ static int lm3692x_probe(struct i2c_client *client,
+diff --git a/drivers/leds/leds-lm3697.c b/drivers/leds/leds-lm3697.c
+index 7d216cdb91a8a..912e8bb22a995 100644
+--- a/drivers/leds/leds-lm3697.c
++++ b/drivers/leds/leds-lm3697.c
+@@ -203,11 +203,9 @@ static int lm3697_probe_dt(struct lm3697 *priv)
+ 
+ 	priv->enable_gpio = devm_gpiod_get_optional(dev, "enable",
+ 						    GPIOD_OUT_LOW);
+-	if (IS_ERR(priv->enable_gpio)) {
+-		ret = PTR_ERR(priv->enable_gpio);
+-		dev_err(dev, "Failed to get enable gpio: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(priv->enable_gpio))
++		return dev_err_probe(dev, PTR_ERR(priv->enable_gpio),
++					  "Failed to get enable GPIO\n");
+ 
+ 	priv->regulator = devm_regulator_get(dev, "vled");
+ 	if (IS_ERR(priv->regulator))
+diff --git a/drivers/leds/leds-lp50xx.c b/drivers/leds/leds-lp50xx.c
+index f13117eed976d..d4529082935b8 100644
+--- a/drivers/leds/leds-lp50xx.c
++++ b/drivers/leds/leds-lp50xx.c
+@@ -496,6 +496,7 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
+ 			ret = fwnode_property_read_u32(led_node, "color",
+ 						       &color_id);
+ 			if (ret) {
++				fwnode_handle_put(led_node);
+ 				dev_err(priv->dev, "Cannot read color\n");
+ 				goto child_out;
+ 			}
+@@ -519,7 +520,6 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
+ 			goto child_out;
+ 		}
+ 		i++;
+-		fwnode_handle_put(child);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/mailbox/qcom-apcs-ipc-mailbox.c b/drivers/mailbox/qcom-apcs-ipc-mailbox.c
+index 077e5c6a9ef7d..3d100a004760f 100644
+--- a/drivers/mailbox/qcom-apcs-ipc-mailbox.c
++++ b/drivers/mailbox/qcom-apcs-ipc-mailbox.c
+@@ -128,7 +128,7 @@ static int qcom_apcs_ipc_probe(struct platform_device *pdev)
+ 	if (apcs_data->clk_name) {
+ 		apcs->clk = platform_device_register_data(&pdev->dev,
+ 							  apcs_data->clk_name,
+-							  PLATFORM_DEVID_NONE,
++							  PLATFORM_DEVID_AUTO,
+ 							  NULL, 0);
+ 		if (IS_ERR(apcs->clk))
+ 			dev_err(&pdev->dev, "failed to register APCS clk\n");
+diff --git a/drivers/mailbox/qcom-ipcc.c b/drivers/mailbox/qcom-ipcc.c
+index 2d13c72944c6f..584700cd15855 100644
+--- a/drivers/mailbox/qcom-ipcc.c
++++ b/drivers/mailbox/qcom-ipcc.c
+@@ -155,6 +155,11 @@ static int qcom_ipcc_mbox_send_data(struct mbox_chan *chan, void *data)
+ 	return 0;
+ }
+ 
++static void qcom_ipcc_mbox_shutdown(struct mbox_chan *chan)
++{
++	chan->con_priv = NULL;
++}
++
+ static struct mbox_chan *qcom_ipcc_mbox_xlate(struct mbox_controller *mbox,
+ 					const struct of_phandle_args *ph)
+ {
+@@ -184,6 +189,7 @@ static struct mbox_chan *qcom_ipcc_mbox_xlate(struct mbox_controller *mbox,
+ 
+ static const struct mbox_chan_ops ipcc_mbox_chan_ops = {
+ 	.send_data = qcom_ipcc_mbox_send_data,
++	.shutdown = qcom_ipcc_mbox_shutdown,
+ };
+ 
+ static int qcom_ipcc_setup_mbox(struct qcom_ipcc *ipcc)
+diff --git a/drivers/media/cec/platform/s5p/s5p_cec.c b/drivers/media/cec/platform/s5p/s5p_cec.c
+index 2a3e7ffefe0a2..028a09a7531ef 100644
+--- a/drivers/media/cec/platform/s5p/s5p_cec.c
++++ b/drivers/media/cec/platform/s5p/s5p_cec.c
+@@ -35,10 +35,13 @@ MODULE_PARM_DESC(debug, "debug level (0-2)");
+ 
+ static int s5p_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ {
++	int ret;
+ 	struct s5p_cec_dev *cec = cec_get_drvdata(adap);
+ 
+ 	if (enable) {
+-		pm_runtime_get_sync(cec->dev);
++		ret = pm_runtime_resume_and_get(cec->dev);
++		if (ret < 0)
++			return ret;
+ 
+ 		s5p_cec_reset(cec);
+ 
+@@ -51,7 +54,7 @@ static int s5p_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 	} else {
+ 		s5p_cec_mask_tx_interrupts(cec);
+ 		s5p_cec_mask_rx_interrupts(cec);
+-		pm_runtime_disable(cec->dev);
++		pm_runtime_put(cec->dev);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/media/common/siano/smscoreapi.c b/drivers/media/common/siano/smscoreapi.c
+index c1511094fdc7b..b735e23701373 100644
+--- a/drivers/media/common/siano/smscoreapi.c
++++ b/drivers/media/common/siano/smscoreapi.c
+@@ -908,7 +908,7 @@ static int smscore_load_firmware_family2(struct smscore_device_t *coredev,
+ 					 void *buffer, size_t size)
+ {
+ 	struct sms_firmware *firmware = (struct sms_firmware *) buffer;
+-	struct sms_msg_data4 *msg;
++	struct sms_msg_data5 *msg;
+ 	u32 mem_address,  calc_checksum = 0;
+ 	u32 i, *ptr;
+ 	u8 *payload = firmware->payload;
+@@ -989,24 +989,20 @@ static int smscore_load_firmware_family2(struct smscore_device_t *coredev,
+ 		goto exit_fw_download;
+ 
+ 	if (coredev->mode == DEVICE_MODE_NONE) {
+-		struct sms_msg_data *trigger_msg =
+-			(struct sms_msg_data *) msg;
+-
+ 		pr_debug("sending MSG_SMS_SWDOWNLOAD_TRIGGER_REQ\n");
+ 		SMS_INIT_MSG(&msg->x_msg_header,
+ 				MSG_SMS_SWDOWNLOAD_TRIGGER_REQ,
+-				sizeof(struct sms_msg_hdr) +
+-				sizeof(u32) * 5);
++				sizeof(*msg));
+ 
+-		trigger_msg->msg_data[0] = firmware->start_address;
++		msg->msg_data[0] = firmware->start_address;
+ 					/* Entry point */
+-		trigger_msg->msg_data[1] = 6; /* Priority */
+-		trigger_msg->msg_data[2] = 0x200; /* Stack size */
+-		trigger_msg->msg_data[3] = 0; /* Parameter */
+-		trigger_msg->msg_data[4] = 4; /* Task ID */
++		msg->msg_data[1] = 6; /* Priority */
++		msg->msg_data[2] = 0x200; /* Stack size */
++		msg->msg_data[3] = 0; /* Parameter */
++		msg->msg_data[4] = 4; /* Task ID */
+ 
+-		rc = smscore_sendrequest_and_wait(coredev, trigger_msg,
+-					trigger_msg->x_msg_header.msg_length,
++		rc = smscore_sendrequest_and_wait(coredev, msg,
++					msg->x_msg_header.msg_length,
+ 					&coredev->trigger_done);
+ 	} else {
+ 		SMS_INIT_MSG(&msg->x_msg_header, MSG_SW_RELOAD_EXEC_REQ,
+diff --git a/drivers/media/common/siano/smscoreapi.h b/drivers/media/common/siano/smscoreapi.h
+index b3b793b5caf35..16c45afabc530 100644
+--- a/drivers/media/common/siano/smscoreapi.h
++++ b/drivers/media/common/siano/smscoreapi.h
+@@ -629,9 +629,9 @@ struct sms_msg_data2 {
+ 	u32 msg_data[2];
+ };
+ 
+-struct sms_msg_data4 {
++struct sms_msg_data5 {
+ 	struct sms_msg_hdr x_msg_header;
+-	u32 msg_data[4];
++	u32 msg_data[5];
+ };
+ 
+ struct sms_data_download {
+diff --git a/drivers/media/common/siano/smsdvb-main.c b/drivers/media/common/siano/smsdvb-main.c
+index ae17407e477a4..7cc654bc52d37 100644
+--- a/drivers/media/common/siano/smsdvb-main.c
++++ b/drivers/media/common/siano/smsdvb-main.c
+@@ -1176,6 +1176,10 @@ static int smsdvb_hotplug(struct smscore_device_t *coredev,
+ 	return 0;
+ 
+ media_graph_error:
++	mutex_lock(&g_smsdvb_clientslock);
++	list_del(&client->entry);
++	mutex_unlock(&g_smsdvb_clientslock);
++
+ 	smsdvb_debugfs_release(client);
+ 
+ client_error:
+diff --git a/drivers/media/dvb-core/dvb_net.c b/drivers/media/dvb-core/dvb_net.c
+index 89620da983bab..dddebea644bb8 100644
+--- a/drivers/media/dvb-core/dvb_net.c
++++ b/drivers/media/dvb-core/dvb_net.c
+@@ -45,6 +45,7 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/netdevice.h>
++#include <linux/nospec.h>
+ #include <linux/etherdevice.h>
+ #include <linux/dvb/net.h>
+ #include <linux/uio.h>
+@@ -1462,14 +1463,20 @@ static int dvb_net_do_ioctl(struct file *file,
+ 		struct net_device *netdev;
+ 		struct dvb_net_priv *priv_data;
+ 		struct dvb_net_if *dvbnetif = parg;
++		int if_num = dvbnetif->if_num;
+ 
+-		if (dvbnetif->if_num >= DVB_NET_DEVICES_MAX ||
+-		    !dvbnet->state[dvbnetif->if_num]) {
++		if (if_num >= DVB_NET_DEVICES_MAX) {
+ 			ret = -EINVAL;
+ 			goto ioctl_error;
+ 		}
++		if_num = array_index_nospec(if_num, DVB_NET_DEVICES_MAX);
+ 
+-		netdev = dvbnet->device[dvbnetif->if_num];
++		if (!dvbnet->state[if_num]) {
++			ret = -EINVAL;
++			goto ioctl_error;
++		}
++
++		netdev = dvbnet->device[if_num];
+ 
+ 		priv_data = netdev_priv(netdev);
+ 		dvbnetif->pid=priv_data->pid;
+@@ -1522,14 +1529,20 @@ static int dvb_net_do_ioctl(struct file *file,
+ 		struct net_device *netdev;
+ 		struct dvb_net_priv *priv_data;
+ 		struct __dvb_net_if_old *dvbnetif = parg;
++		int if_num = dvbnetif->if_num;
++
++		if (if_num >= DVB_NET_DEVICES_MAX) {
++			ret = -EINVAL;
++			goto ioctl_error;
++		}
++		if_num = array_index_nospec(if_num, DVB_NET_DEVICES_MAX);
+ 
+-		if (dvbnetif->if_num >= DVB_NET_DEVICES_MAX ||
+-		    !dvbnet->state[dvbnetif->if_num]) {
++		if (!dvbnet->state[if_num]) {
+ 			ret = -EINVAL;
+ 			goto ioctl_error;
+ 		}
+ 
+-		netdev = dvbnet->device[dvbnetif->if_num];
++		netdev = dvbnet->device[if_num];
+ 
+ 		priv_data = netdev_priv(netdev);
+ 		dvbnetif->pid=priv_data->pid;
+diff --git a/drivers/media/i2c/ir-kbd-i2c.c b/drivers/media/i2c/ir-kbd-i2c.c
+index e8119ad0bc71d..92376592455ee 100644
+--- a/drivers/media/i2c/ir-kbd-i2c.c
++++ b/drivers/media/i2c/ir-kbd-i2c.c
+@@ -678,8 +678,8 @@ static int zilog_tx(struct rc_dev *rcdev, unsigned int *txbuf,
+ 		goto out_unlock;
+ 	}
+ 
+-	i = i2c_master_recv(ir->tx_c, buf, 1);
+-	if (i != 1) {
++	ret = i2c_master_recv(ir->tx_c, buf, 1);
++	if (ret != 1) {
+ 		dev_err(&ir->rc->dev, "i2c_master_recv failed with %d\n", ret);
+ 		ret = -EIO;
+ 		goto out_unlock;
+diff --git a/drivers/media/i2c/ov2659.c b/drivers/media/i2c/ov2659.c
+index 42f64175a6dff..fb78a1cedc03b 100644
+--- a/drivers/media/i2c/ov2659.c
++++ b/drivers/media/i2c/ov2659.c
+@@ -204,6 +204,7 @@ struct ov2659 {
+ 	struct i2c_client *client;
+ 	struct v4l2_ctrl_handler ctrls;
+ 	struct v4l2_ctrl *link_frequency;
++	struct clk *clk;
+ 	const struct ov2659_framesize *frame_size;
+ 	struct sensor_register *format_ctrl_regs;
+ 	struct ov2659_pll_ctrl pll;
+@@ -1270,6 +1271,8 @@ static int ov2659_power_off(struct device *dev)
+ 
+ 	gpiod_set_value(ov2659->pwdn_gpio, 1);
+ 
++	clk_disable_unprepare(ov2659->clk);
++
+ 	return 0;
+ }
+ 
+@@ -1278,9 +1281,17 @@ static int ov2659_power_on(struct device *dev)
+ 	struct i2c_client *client = to_i2c_client(dev);
+ 	struct v4l2_subdev *sd = i2c_get_clientdata(client);
+ 	struct ov2659 *ov2659 = to_ov2659(sd);
++	int ret;
+ 
+ 	dev_dbg(&client->dev, "%s:\n", __func__);
+ 
++	ret = clk_prepare_enable(ov2659->clk);
++	if (ret) {
++		dev_err(&client->dev, "%s: failed to enable clock\n",
++			__func__);
++		return ret;
++	}
++
+ 	gpiod_set_value(ov2659->pwdn_gpio, 0);
+ 
+ 	if (ov2659->resetb_gpio) {
+@@ -1425,7 +1436,6 @@ static int ov2659_probe(struct i2c_client *client)
+ 	const struct ov2659_platform_data *pdata = ov2659_get_pdata(client);
+ 	struct v4l2_subdev *sd;
+ 	struct ov2659 *ov2659;
+-	struct clk *clk;
+ 	int ret;
+ 
+ 	if (!pdata) {
+@@ -1440,11 +1450,11 @@ static int ov2659_probe(struct i2c_client *client)
+ 	ov2659->pdata = pdata;
+ 	ov2659->client = client;
+ 
+-	clk = devm_clk_get(&client->dev, "xvclk");
+-	if (IS_ERR(clk))
+-		return PTR_ERR(clk);
++	ov2659->clk = devm_clk_get(&client->dev, "xvclk");
++	if (IS_ERR(ov2659->clk))
++		return PTR_ERR(ov2659->clk);
+ 
+-	ov2659->xvclk_frequency = clk_get_rate(clk);
++	ov2659->xvclk_frequency = clk_get_rate(ov2659->clk);
+ 	if (ov2659->xvclk_frequency < 6000000 ||
+ 	    ov2659->xvclk_frequency > 27000000)
+ 		return -EINVAL;
+@@ -1506,7 +1516,9 @@ static int ov2659_probe(struct i2c_client *client)
+ 	ov2659->frame_size = &ov2659_framesizes[2];
+ 	ov2659->format_ctrl_regs = ov2659_formats[0].format_ctrl_regs;
+ 
+-	ov2659_power_on(&client->dev);
++	ret = ov2659_power_on(&client->dev);
++	if (ret < 0)
++		goto error;
+ 
+ 	ret = ov2659_detect(sd);
+ 	if (ret < 0)
+diff --git a/drivers/media/i2c/s5c73m3/s5c73m3-core.c b/drivers/media/i2c/s5c73m3/s5c73m3-core.c
+index 5b4c4a3547c93..71804a70bc6d7 100644
+--- a/drivers/media/i2c/s5c73m3/s5c73m3-core.c
++++ b/drivers/media/i2c/s5c73m3/s5c73m3-core.c
+@@ -1386,7 +1386,7 @@ static int __s5c73m3_power_on(struct s5c73m3 *state)
+ 	s5c73m3_gpio_deassert(state, STBY);
+ 	usleep_range(100, 200);
+ 
+-	s5c73m3_gpio_deassert(state, RST);
++	s5c73m3_gpio_deassert(state, RSET);
+ 	usleep_range(50, 100);
+ 
+ 	return 0;
+@@ -1401,7 +1401,7 @@ static int __s5c73m3_power_off(struct s5c73m3 *state)
+ {
+ 	int i, ret;
+ 
+-	if (s5c73m3_gpio_assert(state, RST))
++	if (s5c73m3_gpio_assert(state, RSET))
+ 		usleep_range(10, 50);
+ 
+ 	if (s5c73m3_gpio_assert(state, STBY))
+@@ -1606,7 +1606,7 @@ static int s5c73m3_get_platform_data(struct s5c73m3 *state)
+ 
+ 		state->mclk_frequency = pdata->mclk_frequency;
+ 		state->gpio[STBY] = pdata->gpio_stby;
+-		state->gpio[RST] = pdata->gpio_reset;
++		state->gpio[RSET] = pdata->gpio_reset;
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/media/i2c/s5c73m3/s5c73m3.h b/drivers/media/i2c/s5c73m3/s5c73m3.h
+index ef7e85b34263b..c3fcfdd3ea66d 100644
+--- a/drivers/media/i2c/s5c73m3/s5c73m3.h
++++ b/drivers/media/i2c/s5c73m3/s5c73m3.h
+@@ -353,7 +353,7 @@ struct s5c73m3_ctrls {
+ 
+ enum s5c73m3_gpio_id {
+ 	STBY,
+-	RST,
++	RSET,
+ 	GPIO_NUM,
+ };
+ 
+diff --git a/drivers/media/i2c/s5k4ecgx.c b/drivers/media/i2c/s5k4ecgx.c
+index b2d53417badf6..4e97309a67f41 100644
+--- a/drivers/media/i2c/s5k4ecgx.c
++++ b/drivers/media/i2c/s5k4ecgx.c
+@@ -173,7 +173,7 @@ static const char * const s5k4ecgx_supply_names[] = {
+ 
+ enum s5k4ecgx_gpio_id {
+ 	STBY,
+-	RST,
++	RSET,
+ 	GPIO_NUM,
+ };
+ 
+@@ -476,7 +476,7 @@ static int __s5k4ecgx_power_on(struct s5k4ecgx *priv)
+ 	if (s5k4ecgx_gpio_set_value(priv, STBY, priv->gpio[STBY].level))
+ 		usleep_range(30, 50);
+ 
+-	if (s5k4ecgx_gpio_set_value(priv, RST, priv->gpio[RST].level))
++	if (s5k4ecgx_gpio_set_value(priv, RSET, priv->gpio[RSET].level))
+ 		usleep_range(30, 50);
+ 
+ 	return 0;
+@@ -484,7 +484,7 @@ static int __s5k4ecgx_power_on(struct s5k4ecgx *priv)
+ 
+ static int __s5k4ecgx_power_off(struct s5k4ecgx *priv)
+ {
+-	if (s5k4ecgx_gpio_set_value(priv, RST, !priv->gpio[RST].level))
++	if (s5k4ecgx_gpio_set_value(priv, RSET, !priv->gpio[RSET].level))
+ 		usleep_range(30, 50);
+ 
+ 	if (s5k4ecgx_gpio_set_value(priv, STBY, !priv->gpio[STBY].level))
+@@ -872,7 +872,7 @@ static int s5k4ecgx_config_gpios(struct s5k4ecgx *priv,
+ 	int ret;
+ 
+ 	priv->gpio[STBY].gpio = -EINVAL;
+-	priv->gpio[RST].gpio  = -EINVAL;
++	priv->gpio[RSET].gpio  = -EINVAL;
+ 
+ 	ret = s5k4ecgx_config_gpio(gpio->gpio, gpio->level, "S5K4ECGX_STBY");
+ 
+@@ -891,7 +891,7 @@ static int s5k4ecgx_config_gpios(struct s5k4ecgx *priv,
+ 		s5k4ecgx_free_gpios(priv);
+ 		return ret;
+ 	}
+-	priv->gpio[RST] = *gpio;
++	priv->gpio[RSET] = *gpio;
+ 	if (gpio_is_valid(gpio->gpio))
+ 		gpio_set_value(gpio->gpio, 0);
+ 
+diff --git a/drivers/media/i2c/s5k5baf.c b/drivers/media/i2c/s5k5baf.c
+index ec6f22efe19ad..ec65a8e084c6a 100644
+--- a/drivers/media/i2c/s5k5baf.c
++++ b/drivers/media/i2c/s5k5baf.c
+@@ -235,7 +235,7 @@ struct s5k5baf_gpio {
+ 
+ enum s5k5baf_gpio_id {
+ 	STBY,
+-	RST,
++	RSET,
+ 	NUM_GPIOS,
+ };
+ 
+@@ -969,7 +969,7 @@ static int s5k5baf_power_on(struct s5k5baf *state)
+ 
+ 	s5k5baf_gpio_deassert(state, STBY);
+ 	usleep_range(50, 100);
+-	s5k5baf_gpio_deassert(state, RST);
++	s5k5baf_gpio_deassert(state, RSET);
+ 	return 0;
+ 
+ err_reg_dis:
+@@ -987,7 +987,7 @@ static int s5k5baf_power_off(struct s5k5baf *state)
+ 	state->apply_cfg = 0;
+ 	state->apply_crop = 0;
+ 
+-	s5k5baf_gpio_assert(state, RST);
++	s5k5baf_gpio_assert(state, RSET);
+ 	s5k5baf_gpio_assert(state, STBY);
+ 
+ 	if (!IS_ERR(state->clock))
+diff --git a/drivers/media/i2c/s5k6aa.c b/drivers/media/i2c/s5k6aa.c
+index 72439fae7968b..6516e205e9a3d 100644
+--- a/drivers/media/i2c/s5k6aa.c
++++ b/drivers/media/i2c/s5k6aa.c
+@@ -177,7 +177,7 @@ static const char * const s5k6aa_supply_names[] = {
+ 
+ enum s5k6aa_gpio_id {
+ 	STBY,
+-	RST,
++	RSET,
+ 	GPIO_NUM,
+ };
+ 
+@@ -841,7 +841,7 @@ static int __s5k6aa_power_on(struct s5k6aa *s5k6aa)
+ 		ret = s5k6aa->s_power(1);
+ 	usleep_range(4000, 5000);
+ 
+-	if (s5k6aa_gpio_deassert(s5k6aa, RST))
++	if (s5k6aa_gpio_deassert(s5k6aa, RSET))
+ 		msleep(20);
+ 
+ 	return ret;
+@@ -851,7 +851,7 @@ static int __s5k6aa_power_off(struct s5k6aa *s5k6aa)
+ {
+ 	int ret;
+ 
+-	if (s5k6aa_gpio_assert(s5k6aa, RST))
++	if (s5k6aa_gpio_assert(s5k6aa, RSET))
+ 		usleep_range(100, 150);
+ 
+ 	if (s5k6aa->s_power) {
+@@ -1510,7 +1510,7 @@ static int s5k6aa_configure_gpios(struct s5k6aa *s5k6aa,
+ 	int ret;
+ 
+ 	s5k6aa->gpio[STBY].gpio = -EINVAL;
+-	s5k6aa->gpio[RST].gpio  = -EINVAL;
++	s5k6aa->gpio[RSET].gpio  = -EINVAL;
+ 
+ 	gpio = &pdata->gpio_stby;
+ 	if (gpio_is_valid(gpio->gpio)) {
+@@ -1533,7 +1533,7 @@ static int s5k6aa_configure_gpios(struct s5k6aa *s5k6aa,
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		s5k6aa->gpio[RST] = *gpio;
++		s5k6aa->gpio[RSET] = *gpio;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 1b309bb743c7b..f21da11caf224 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -1974,6 +1974,7 @@ static int tc358743_probe_of(struct tc358743_state *state)
+ 	bps_pr_lane = 2 * endpoint.link_frequencies[0];
+ 	if (bps_pr_lane < 62500000U || bps_pr_lane > 1000000000U) {
+ 		dev_err(dev, "unsupported bps per lane: %u bps\n", bps_pr_lane);
++		ret = -EINVAL;
+ 		goto disable_clk;
+ 	}
+ 
+diff --git a/drivers/media/mc/Makefile b/drivers/media/mc/Makefile
+index 119037f0e686d..2b7af42ba59c1 100644
+--- a/drivers/media/mc/Makefile
++++ b/drivers/media/mc/Makefile
+@@ -3,7 +3,7 @@
+ mc-objs	:= mc-device.o mc-devnode.o mc-entity.o \
+ 	   mc-request.o
+ 
+-ifeq ($(CONFIG_USB),y)
++ifneq ($(CONFIG_USB),)
+ 	mc-objs += mc-dev-allocator.o
+ endif
+ 
+diff --git a/drivers/media/pci/bt8xx/bt878.c b/drivers/media/pci/bt8xx/bt878.c
+index 79ba15a9385a5..0705913972c66 100644
+--- a/drivers/media/pci/bt8xx/bt878.c
++++ b/drivers/media/pci/bt8xx/bt878.c
+@@ -300,7 +300,8 @@ static irqreturn_t bt878_irq(int irq, void *dev_id)
+ 		}
+ 		if (astat & BT878_ARISCI) {
+ 			bt->finished_block = (stat & BT878_ARISCS) >> 28;
+-			tasklet_schedule(&bt->tasklet);
++			if (bt->tasklet.callback)
++				tasklet_schedule(&bt->tasklet);
+ 			break;
+ 		}
+ 		count++;
+@@ -477,6 +478,9 @@ static int bt878_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
+ 	btwrite(0, BT878_AINT_MASK);
+ 	bt878_num++;
+ 
++	if (!bt->tasklet.func)
++		tasklet_disable(&bt->tasklet);
++
+ 	return 0;
+ 
+       fail2:
+diff --git a/drivers/media/pci/cobalt/cobalt-driver.c b/drivers/media/pci/cobalt/cobalt-driver.c
+index 0695078ef8125..1bd8bbe57a30e 100644
+--- a/drivers/media/pci/cobalt/cobalt-driver.c
++++ b/drivers/media/pci/cobalt/cobalt-driver.c
+@@ -667,6 +667,7 @@ static int cobalt_probe(struct pci_dev *pci_dev,
+ 		return -ENOMEM;
+ 	cobalt->pci_dev = pci_dev;
+ 	cobalt->instance = i;
++	mutex_init(&cobalt->pci_lock);
+ 
+ 	retval = v4l2_device_register(&pci_dev->dev, &cobalt->v4l2_dev);
+ 	if (retval) {
+diff --git a/drivers/media/pci/cobalt/cobalt-driver.h b/drivers/media/pci/cobalt/cobalt-driver.h
+index bca68572b3242..12c33e035904c 100644
+--- a/drivers/media/pci/cobalt/cobalt-driver.h
++++ b/drivers/media/pci/cobalt/cobalt-driver.h
+@@ -251,6 +251,8 @@ struct cobalt {
+ 	int instance;
+ 	struct pci_dev *pci_dev;
+ 	struct v4l2_device v4l2_dev;
++	/* serialize PCI access in cobalt_s_bit_sysctrl() */
++	struct mutex pci_lock;
+ 
+ 	void __iomem *bar0, *bar1;
+ 
+@@ -320,10 +322,13 @@ static inline u32 cobalt_g_sysctrl(struct cobalt *cobalt)
+ static inline void cobalt_s_bit_sysctrl(struct cobalt *cobalt,
+ 					int bit, int val)
+ {
+-	u32 ctrl = cobalt_read_bar1(cobalt, COBALT_SYS_CTRL_BASE);
++	u32 ctrl;
+ 
++	mutex_lock(&cobalt->pci_lock);
++	ctrl = cobalt_read_bar1(cobalt, COBALT_SYS_CTRL_BASE);
+ 	cobalt_write_bar1(cobalt, COBALT_SYS_CTRL_BASE,
+ 			(ctrl & ~(1UL << bit)) | (val << bit));
++	mutex_unlock(&cobalt->pci_lock);
+ }
+ 
+ static inline u32 cobalt_g_sysstat(struct cobalt *cobalt)
+diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2.c b/drivers/media/pci/intel/ipu3/ipu3-cio2.c
+index dcbfe8c9abc72..2fe4a0bd02844 100644
+--- a/drivers/media/pci/intel/ipu3/ipu3-cio2.c
++++ b/drivers/media/pci/intel/ipu3/ipu3-cio2.c
+@@ -1476,7 +1476,8 @@ static int cio2_parse_firmware(struct cio2_device *cio2)
+ 		struct v4l2_fwnode_endpoint vep = {
+ 			.bus_type = V4L2_MBUS_CSI2_DPHY
+ 		};
+-		struct sensor_async_subdev *s_asd = NULL;
++		struct sensor_async_subdev *s_asd;
++		struct v4l2_async_subdev *asd;
+ 		struct fwnode_handle *ep;
+ 
+ 		ep = fwnode_graph_get_endpoint_by_id(
+@@ -1490,27 +1491,23 @@ static int cio2_parse_firmware(struct cio2_device *cio2)
+ 		if (ret)
+ 			goto err_parse;
+ 
+-		s_asd = kzalloc(sizeof(*s_asd), GFP_KERNEL);
+-		if (!s_asd) {
+-			ret = -ENOMEM;
++		asd = v4l2_async_notifier_add_fwnode_remote_subdev(
++				&cio2->notifier, ep, sizeof(*s_asd));
++		if (IS_ERR(asd)) {
++			ret = PTR_ERR(asd);
+ 			goto err_parse;
+ 		}
+ 
++		s_asd = container_of(asd, struct sensor_async_subdev, asd);
+ 		s_asd->csi2.port = vep.base.port;
+ 		s_asd->csi2.lanes = vep.bus.mipi_csi2.num_data_lanes;
+ 
+-		ret = v4l2_async_notifier_add_fwnode_remote_subdev(
+-			&cio2->notifier, ep, &s_asd->asd);
+-		if (ret)
+-			goto err_parse;
+-
+ 		fwnode_handle_put(ep);
+ 
+ 		continue;
+ 
+ err_parse:
+ 		fwnode_handle_put(ep);
+-		kfree(s_asd);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/media/platform/am437x/am437x-vpfe.c b/drivers/media/platform/am437x/am437x-vpfe.c
+index 0fb9f9ba1219d..31cee69adbe1f 100644
+--- a/drivers/media/platform/am437x/am437x-vpfe.c
++++ b/drivers/media/platform/am437x/am437x-vpfe.c
+@@ -1021,7 +1021,9 @@ static int vpfe_initialize_device(struct vpfe_device *vpfe)
+ 	if (ret)
+ 		return ret;
+ 
+-	pm_runtime_get_sync(vpfe->pdev);
++	ret = pm_runtime_resume_and_get(vpfe->pdev);
++	if (ret < 0)
++		return ret;
+ 
+ 	vpfe_config_enable(&vpfe->ccdc, 1);
+ 
+@@ -2443,7 +2445,11 @@ static int vpfe_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(&pdev->dev);
+ 
+ 	/* for now just enable it here instead of waiting for the open */
+-	pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
++	if (ret < 0) {
++		vpfe_err(vpfe, "Unable to resume device.\n");
++		goto probe_out_v4l2_unregister;
++	}
+ 
+ 	vpfe_ccdc_config_defaults(ccdc);
+ 
+@@ -2530,6 +2536,11 @@ static int vpfe_suspend(struct device *dev)
+ 
+ 	/* only do full suspend if streaming has started */
+ 	if (vb2_start_streaming_called(&vpfe->buffer_queue)) {
++		/*
++		 * ignore RPM resume errors here, as it is already too late.
++		 * A check like that should happen earlier, either at
++		 * open() or just before start streaming.
++		 */
+ 		pm_runtime_get_sync(dev);
+ 		vpfe_config_enable(ccdc, 1);
+ 
+diff --git a/drivers/media/platform/exynos-gsc/gsc-m2m.c b/drivers/media/platform/exynos-gsc/gsc-m2m.c
+index 27a3c92c73bce..f1cf847d1cc2d 100644
+--- a/drivers/media/platform/exynos-gsc/gsc-m2m.c
++++ b/drivers/media/platform/exynos-gsc/gsc-m2m.c
+@@ -56,10 +56,8 @@ static void __gsc_m2m_job_abort(struct gsc_ctx *ctx)
+ static int gsc_m2m_start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+ 	struct gsc_ctx *ctx = q->drv_priv;
+-	int ret;
+ 
+-	ret = pm_runtime_get_sync(&ctx->gsc_dev->pdev->dev);
+-	return ret > 0 ? 0 : ret;
++	return pm_runtime_resume_and_get(&ctx->gsc_dev->pdev->dev);
+ }
+ 
+ static void __gsc_m2m_cleanup_queue(struct gsc_ctx *ctx)
+diff --git a/drivers/media/platform/exynos4-is/fimc-capture.c b/drivers/media/platform/exynos4-is/fimc-capture.c
+index 6000a4e789adb..808b490c1910f 100644
+--- a/drivers/media/platform/exynos4-is/fimc-capture.c
++++ b/drivers/media/platform/exynos4-is/fimc-capture.c
+@@ -478,11 +478,9 @@ static int fimc_capture_open(struct file *file)
+ 		goto unlock;
+ 
+ 	set_bit(ST_CAPT_BUSY, &fimc->state);
+-	ret = pm_runtime_get_sync(&fimc->pdev->dev);
+-	if (ret < 0) {
+-		pm_runtime_put_sync(&fimc->pdev->dev);
++	ret = pm_runtime_resume_and_get(&fimc->pdev->dev);
++	if (ret < 0)
+ 		goto unlock;
+-	}
+ 
+ 	ret = v4l2_fh_open(file);
+ 	if (ret) {
+diff --git a/drivers/media/platform/exynos4-is/fimc-is.c b/drivers/media/platform/exynos4-is/fimc-is.c
+index 32ab01e89196d..d26fa5967d821 100644
+--- a/drivers/media/platform/exynos4-is/fimc-is.c
++++ b/drivers/media/platform/exynos4-is/fimc-is.c
+@@ -828,9 +828,9 @@ static int fimc_is_probe(struct platform_device *pdev)
+ 			goto err_irq;
+ 	}
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+-		goto err_pm;
++		goto err_irq;
+ 
+ 	vb2_dma_contig_set_max_seg_size(dev, DMA_BIT_MASK(32));
+ 
+diff --git a/drivers/media/platform/exynos4-is/fimc-isp-video.c b/drivers/media/platform/exynos4-is/fimc-isp-video.c
+index 612b9872afc87..83688a7982f70 100644
+--- a/drivers/media/platform/exynos4-is/fimc-isp-video.c
++++ b/drivers/media/platform/exynos4-is/fimc-isp-video.c
+@@ -275,7 +275,7 @@ static int isp_video_open(struct file *file)
+ 	if (ret < 0)
+ 		goto unlock;
+ 
+-	ret = pm_runtime_get_sync(&isp->pdev->dev);
++	ret = pm_runtime_resume_and_get(&isp->pdev->dev);
+ 	if (ret < 0)
+ 		goto rel_fh;
+ 
+@@ -293,7 +293,6 @@ static int isp_video_open(struct file *file)
+ 	if (!ret)
+ 		goto unlock;
+ rel_fh:
+-	pm_runtime_put_noidle(&isp->pdev->dev);
+ 	v4l2_fh_release(file);
+ unlock:
+ 	mutex_unlock(&isp->video_lock);
+@@ -306,17 +305,20 @@ static int isp_video_release(struct file *file)
+ 	struct fimc_is_video *ivc = &isp->video_capture;
+ 	struct media_entity *entity = &ivc->ve.vdev.entity;
+ 	struct media_device *mdev = entity->graph_obj.mdev;
++	bool is_singular_file;
+ 
+ 	mutex_lock(&isp->video_lock);
+ 
+-	if (v4l2_fh_is_singular_file(file) && ivc->streaming) {
++	is_singular_file = v4l2_fh_is_singular_file(file);
++
++	if (is_singular_file && ivc->streaming) {
+ 		media_pipeline_stop(entity);
+ 		ivc->streaming = 0;
+ 	}
+ 
+ 	_vb2_fop_release(file, NULL);
+ 
+-	if (v4l2_fh_is_singular_file(file)) {
++	if (is_singular_file) {
+ 		fimc_pipeline_call(&ivc->ve, close);
+ 
+ 		mutex_lock(&mdev->graph_mutex);
+diff --git a/drivers/media/platform/exynos4-is/fimc-isp.c b/drivers/media/platform/exynos4-is/fimc-isp.c
+index a77c49b185115..74b49d30901ed 100644
+--- a/drivers/media/platform/exynos4-is/fimc-isp.c
++++ b/drivers/media/platform/exynos4-is/fimc-isp.c
+@@ -304,11 +304,10 @@ static int fimc_isp_subdev_s_power(struct v4l2_subdev *sd, int on)
+ 	pr_debug("on: %d\n", on);
+ 
+ 	if (on) {
+-		ret = pm_runtime_get_sync(&is->pdev->dev);
+-		if (ret < 0) {
+-			pm_runtime_put(&is->pdev->dev);
++		ret = pm_runtime_resume_and_get(&is->pdev->dev);
++		if (ret < 0)
+ 			return ret;
+-		}
++
+ 		set_bit(IS_ST_PWR_ON, &is->state);
+ 
+ 		ret = fimc_is_start_firmware(is);
+diff --git a/drivers/media/platform/exynos4-is/fimc-lite.c b/drivers/media/platform/exynos4-is/fimc-lite.c
+index fdd0d369b1925..d279f282d5921 100644
+--- a/drivers/media/platform/exynos4-is/fimc-lite.c
++++ b/drivers/media/platform/exynos4-is/fimc-lite.c
+@@ -469,9 +469,9 @@ static int fimc_lite_open(struct file *file)
+ 	}
+ 
+ 	set_bit(ST_FLITE_IN_USE, &fimc->state);
+-	ret = pm_runtime_get_sync(&fimc->pdev->dev);
++	ret = pm_runtime_resume_and_get(&fimc->pdev->dev);
+ 	if (ret < 0)
+-		goto err_pm;
++		goto err_in_use;
+ 
+ 	ret = v4l2_fh_open(file);
+ 	if (ret < 0)
+@@ -499,6 +499,7 @@ static int fimc_lite_open(struct file *file)
+ 	v4l2_fh_release(file);
+ err_pm:
+ 	pm_runtime_put_sync(&fimc->pdev->dev);
++err_in_use:
+ 	clear_bit(ST_FLITE_IN_USE, &fimc->state);
+ unlock:
+ 	mutex_unlock(&fimc->lock);
+diff --git a/drivers/media/platform/exynos4-is/fimc-m2m.c b/drivers/media/platform/exynos4-is/fimc-m2m.c
+index 4acb179556c41..24b1badd20807 100644
+--- a/drivers/media/platform/exynos4-is/fimc-m2m.c
++++ b/drivers/media/platform/exynos4-is/fimc-m2m.c
+@@ -73,17 +73,14 @@ static void fimc_m2m_shutdown(struct fimc_ctx *ctx)
+ static int start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+ 	struct fimc_ctx *ctx = q->drv_priv;
+-	int ret;
+ 
+-	ret = pm_runtime_get_sync(&ctx->fimc_dev->pdev->dev);
+-	return ret > 0 ? 0 : ret;
++	return pm_runtime_resume_and_get(&ctx->fimc_dev->pdev->dev);
+ }
+ 
+ static void stop_streaming(struct vb2_queue *q)
+ {
+ 	struct fimc_ctx *ctx = q->drv_priv;
+ 
+-
+ 	fimc_m2m_shutdown(ctx);
+ 	fimc_m2m_job_finish(ctx, VB2_BUF_STATE_ERROR);
+ 	pm_runtime_put(&ctx->fimc_dev->pdev->dev);
+diff --git a/drivers/media/platform/exynos4-is/media-dev.c b/drivers/media/platform/exynos4-is/media-dev.c
+index e636c33e847bd..a9a8f0433fb2c 100644
+--- a/drivers/media/platform/exynos4-is/media-dev.c
++++ b/drivers/media/platform/exynos4-is/media-dev.c
+@@ -508,11 +508,9 @@ static int fimc_md_register_sensor_entities(struct fimc_md *fmd)
+ 	if (!fmd->pmf)
+ 		return -ENXIO;
+ 
+-	ret = pm_runtime_get_sync(fmd->pmf);
+-	if (ret < 0) {
+-		pm_runtime_put(fmd->pmf);
++	ret = pm_runtime_resume_and_get(fmd->pmf);
++	if (ret < 0)
+ 		return ret;
+-	}
+ 
+ 	fmd->num_sensors = 0;
+ 
+@@ -1282,13 +1280,11 @@ static DEVICE_ATTR(subdev_conf_mode, S_IWUSR | S_IRUGO,
+ static int cam_clk_prepare(struct clk_hw *hw)
+ {
+ 	struct cam_clk *camclk = to_cam_clk(hw);
+-	int ret;
+ 
+ 	if (camclk->fmd->pmf == NULL)
+ 		return -ENODEV;
+ 
+-	ret = pm_runtime_get_sync(camclk->fmd->pmf);
+-	return ret < 0 ? ret : 0;
++	return pm_runtime_resume_and_get(camclk->fmd->pmf);
+ }
+ 
+ static void cam_clk_unprepare(struct clk_hw *hw)
+diff --git a/drivers/media/platform/exynos4-is/mipi-csis.c b/drivers/media/platform/exynos4-is/mipi-csis.c
+index 1aac167abb175..ebf39c8568943 100644
+--- a/drivers/media/platform/exynos4-is/mipi-csis.c
++++ b/drivers/media/platform/exynos4-is/mipi-csis.c
+@@ -494,7 +494,7 @@ static int s5pcsis_s_power(struct v4l2_subdev *sd, int on)
+ 	struct device *dev = &state->pdev->dev;
+ 
+ 	if (on)
+-		return pm_runtime_get_sync(dev);
++		return pm_runtime_resume_and_get(dev);
+ 
+ 	return pm_runtime_put_sync(dev);
+ }
+@@ -509,11 +509,9 @@ static int s5pcsis_s_stream(struct v4l2_subdev *sd, int enable)
+ 
+ 	if (enable) {
+ 		s5pcsis_clear_counters(state);
+-		ret = pm_runtime_get_sync(&state->pdev->dev);
+-		if (ret && ret != 1) {
+-			pm_runtime_put_noidle(&state->pdev->dev);
++		ret = pm_runtime_resume_and_get(&state->pdev->dev);
++		if (ret < 0)
+ 			return ret;
+-		}
+ 	}
+ 
+ 	mutex_lock(&state->lock);
+@@ -535,7 +533,7 @@ unlock:
+ 	if (!enable)
+ 		pm_runtime_put(&state->pdev->dev);
+ 
+-	return ret == 1 ? 0 : ret;
++	return ret;
+ }
+ 
+ static int s5pcsis_enum_mbus_code(struct v4l2_subdev *sd,
+diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c
+index 34266fba824f2..e56c5e56e824a 100644
+--- a/drivers/media/platform/marvell-ccic/mcam-core.c
++++ b/drivers/media/platform/marvell-ccic/mcam-core.c
+@@ -918,6 +918,7 @@ static int mclk_enable(struct clk_hw *hw)
+ 	struct mcam_camera *cam = container_of(hw, struct mcam_camera, mclk_hw);
+ 	int mclk_src;
+ 	int mclk_div;
++	int ret;
+ 
+ 	/*
+ 	 * Clock the sensor appropriately.  Controller clock should
+@@ -931,7 +932,9 @@ static int mclk_enable(struct clk_hw *hw)
+ 		mclk_div = 2;
+ 	}
+ 
+-	pm_runtime_get_sync(cam->dev);
++	ret = pm_runtime_resume_and_get(cam->dev);
++	if (ret < 0)
++		return ret;
+ 	clk_enable(cam->clk[0]);
+ 	mcam_reg_write(cam, REG_CLKCTRL, (mclk_src << 29) | mclk_div);
+ 	mcam_ctlr_power_up(cam);
+@@ -1611,7 +1614,9 @@ static int mcam_v4l_open(struct file *filp)
+ 		ret = sensor_call(cam, core, s_power, 1);
+ 		if (ret)
+ 			goto out;
+-		pm_runtime_get_sync(cam->dev);
++		ret = pm_runtime_resume_and_get(cam->dev);
++		if (ret < 0)
++			goto out;
+ 		__mcam_cam_reset(cam);
+ 		mcam_set_config_needed(cam, 1);
+ 	}
+diff --git a/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c b/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c
+index 724c7333b6e5a..45fc741c55411 100644
+--- a/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c
++++ b/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c
+@@ -394,12 +394,12 @@ static int mtk_mdp_m2m_start_streaming(struct vb2_queue *q, unsigned int count)
+ 	struct mtk_mdp_ctx *ctx = q->drv_priv;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(&ctx->mdp_dev->pdev->dev);
++	ret = pm_runtime_resume_and_get(&ctx->mdp_dev->pdev->dev);
+ 	if (ret < 0)
+-		mtk_mdp_dbg(1, "[%d] pm_runtime_get_sync failed:%d",
++		mtk_mdp_dbg(1, "[%d] pm_runtime_resume_and_get failed:%d",
+ 			    ctx->id, ret);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static void *mtk_mdp_m2m_buf_remove(struct mtk_mdp_ctx *ctx,
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
+index 145686d2c219c..f59ef8c8c9db4 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
+@@ -126,7 +126,9 @@ static int fops_vcodec_open(struct file *file)
+ 	mtk_vcodec_dec_set_default_params(ctx);
+ 
+ 	if (v4l2_fh_is_singular(&ctx->fh)) {
+-		mtk_vcodec_dec_pw_on(&dev->pm);
++		ret = mtk_vcodec_dec_pw_on(&dev->pm);
++		if (ret < 0)
++			goto err_load_fw;
+ 		/*
+ 		 * Does nothing if firmware was already loaded.
+ 		 */
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c
+index ddee7046ce422..6038db96f71c3 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c
+@@ -88,13 +88,15 @@ void mtk_vcodec_release_dec_pm(struct mtk_vcodec_dev *dev)
+ 	put_device(dev->pm.larbvdec);
+ }
+ 
+-void mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm)
++int mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm)
+ {
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(pm->dev);
++	ret = pm_runtime_resume_and_get(pm->dev);
+ 	if (ret)
+-		mtk_v4l2_err("pm_runtime_get_sync fail %d", ret);
++		mtk_v4l2_err("pm_runtime_resume_and_get fail %d", ret);
++
++	return ret;
+ }
+ 
+ void mtk_vcodec_dec_pw_off(struct mtk_vcodec_pm *pm)
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.h b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.h
+index 872d8bf8cfaf3..280aeaefdb651 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.h
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.h
+@@ -12,7 +12,7 @@
+ int mtk_vcodec_init_dec_pm(struct mtk_vcodec_dev *dev);
+ void mtk_vcodec_release_dec_pm(struct mtk_vcodec_dev *dev);
+ 
+-void mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm);
++int mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm);
+ void mtk_vcodec_dec_pw_off(struct mtk_vcodec_pm *pm);
+ void mtk_vcodec_dec_clock_on(struct mtk_vcodec_pm *pm);
+ void mtk_vcodec_dec_clock_off(struct mtk_vcodec_pm *pm);
+diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
+index b1fc4518e275d..1311b4996eceb 100644
+--- a/drivers/media/platform/omap3isp/isp.c
++++ b/drivers/media/platform/omap3isp/isp.c
+@@ -2126,21 +2126,6 @@ static void isp_parse_of_csi1_endpoint(struct device *dev,
+ 	buscfg->bus.ccp2.crc = 1;
+ }
+ 
+-static int isp_alloc_isd(struct isp_async_subdev **isd,
+-			 struct isp_bus_cfg **buscfg)
+-{
+-	struct isp_async_subdev *__isd;
+-
+-	__isd = kzalloc(sizeof(*__isd), GFP_KERNEL);
+-	if (!__isd)
+-		return -ENOMEM;
+-
+-	*isd = __isd;
+-	*buscfg = &__isd->bus;
+-
+-	return 0;
+-}
+-
+ static struct {
+ 	u32 phy;
+ 	u32 csi2_if;
+@@ -2156,7 +2141,7 @@ static int isp_parse_of_endpoints(struct isp_device *isp)
+ {
+ 	struct fwnode_handle *ep;
+ 	struct isp_async_subdev *isd = NULL;
+-	struct isp_bus_cfg *buscfg;
++	struct v4l2_async_subdev *asd;
+ 	unsigned int i;
+ 
+ 	ep = fwnode_graph_get_endpoint_by_id(
+@@ -2174,20 +2159,15 @@ static int isp_parse_of_endpoints(struct isp_device *isp)
+ 		ret = v4l2_fwnode_endpoint_parse(ep, &vep);
+ 
+ 		if (!ret) {
+-			ret = isp_alloc_isd(&isd, &buscfg);
+-			if (ret)
+-				return ret;
+-		}
+-
+-		if (!ret) {
+-			isp_parse_of_parallel_endpoint(isp->dev, &vep, buscfg);
+-			ret = v4l2_async_notifier_add_fwnode_remote_subdev(
+-				&isp->notifier, ep, &isd->asd);
++			asd = v4l2_async_notifier_add_fwnode_remote_subdev(
++				&isp->notifier, ep, sizeof(*isd));
++			if (!IS_ERR(asd)) {
++				isd = container_of(asd, struct isp_async_subdev, asd);
++				isp_parse_of_parallel_endpoint(isp->dev, &vep, &isd->bus);
++			}
+ 		}
+ 
+ 		fwnode_handle_put(ep);
+-		if (ret)
+-			kfree(isd);
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(isp_bus_interfaces); i++) {
+@@ -2206,15 +2186,8 @@ static int isp_parse_of_endpoints(struct isp_device *isp)
+ 		dev_dbg(isp->dev, "parsing serial interface %u, node %pOF\n", i,
+ 			to_of_node(ep));
+ 
+-		ret = isp_alloc_isd(&isd, &buscfg);
+-		if (ret)
+-			return ret;
+-
+ 		ret = v4l2_fwnode_endpoint_parse(ep, &vep);
+-		if (!ret) {
+-			buscfg->interface = isp_bus_interfaces[i].csi2_if;
+-			isp_parse_of_csi2_endpoint(isp->dev, &vep, buscfg);
+-		} else if (ret == -ENXIO) {
++		if (ret == -ENXIO) {
+ 			vep = (struct v4l2_fwnode_endpoint)
+ 				{ .bus_type = V4L2_MBUS_CSI1 };
+ 			ret = v4l2_fwnode_endpoint_parse(ep, &vep);
+@@ -2224,21 +2197,35 @@ static int isp_parse_of_endpoints(struct isp_device *isp)
+ 					{ .bus_type = V4L2_MBUS_CCP2 };
+ 				ret = v4l2_fwnode_endpoint_parse(ep, &vep);
+ 			}
+-			if (!ret) {
+-				buscfg->interface =
+-					isp_bus_interfaces[i].csi1_if;
+-				isp_parse_of_csi1_endpoint(isp->dev, &vep,
+-							   buscfg);
+-			}
+ 		}
+ 
+-		if (!ret)
+-			ret = v4l2_async_notifier_add_fwnode_remote_subdev(
+-				&isp->notifier, ep, &isd->asd);
++		if (!ret) {
++			asd = v4l2_async_notifier_add_fwnode_remote_subdev(
++				&isp->notifier, ep, sizeof(*isd));
++
++			if (!IS_ERR(asd)) {
++				isd = container_of(asd, struct isp_async_subdev, asd);
++
++				switch (vep.bus_type) {
++				case V4L2_MBUS_CSI2_DPHY:
++					isd->bus.interface =
++						isp_bus_interfaces[i].csi2_if;
++					isp_parse_of_csi2_endpoint(isp->dev, &vep, &isd->bus);
++					break;
++				case V4L2_MBUS_CSI1:
++				case V4L2_MBUS_CCP2:
++					isd->bus.interface =
++						isp_bus_interfaces[i].csi1_if;
++					isp_parse_of_csi1_endpoint(isp->dev, &vep,
++								   &isd->bus);
++					break;
++				default:
++					break;
++				}
++			}
++		}
+ 
+ 		fwnode_handle_put(ep);
+-		if (ret)
+-			kfree(isd);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index fd5993b3e6743..58ddebbb84468 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -48,52 +48,86 @@ static const struct hfi_core_ops venus_core_ops = {
+ 	.event_notify = venus_event_notify,
+ };
+ 
++#define RPM_WAIT_FOR_IDLE_MAX_ATTEMPTS 10
++
+ static void venus_sys_error_handler(struct work_struct *work)
+ {
+ 	struct venus_core *core =
+ 			container_of(work, struct venus_core, work.work);
+-	int ret = 0;
+-
+-	pm_runtime_get_sync(core->dev);
++	int ret, i, max_attempts = RPM_WAIT_FOR_IDLE_MAX_ATTEMPTS;
++	const char *err_msg = "";
++	bool failed = false;
++
++	ret = pm_runtime_get_sync(core->dev);
++	if (ret < 0) {
++		err_msg = "resume runtime PM";
++		max_attempts = 0;
++		failed = true;
++	}
+ 
+ 	hfi_core_deinit(core, true);
+ 
+-	dev_warn(core->dev, "system error has occurred, starting recovery!\n");
+-
+ 	mutex_lock(&core->lock);
+ 
+-	while (pm_runtime_active(core->dev_dec) || pm_runtime_active(core->dev_enc))
++	for (i = 0; i < max_attempts; i++) {
++		if (!pm_runtime_active(core->dev_dec) && !pm_runtime_active(core->dev_enc))
++			break;
+ 		msleep(10);
++	}
+ 
+ 	venus_shutdown(core);
+ 
+ 	pm_runtime_put_sync(core->dev);
+ 
+-	while (core->pmdomains[0] && pm_runtime_active(core->pmdomains[0]))
++	for (i = 0; i < max_attempts; i++) {
++		if (!core->pmdomains[0] || !pm_runtime_active(core->pmdomains[0]))
++			break;
+ 		usleep_range(1000, 1500);
++	}
+ 
+ 	hfi_reinit(core);
+ 
+-	pm_runtime_get_sync(core->dev);
++	ret = pm_runtime_get_sync(core->dev);
++	if (ret < 0) {
++		err_msg = "resume runtime PM";
++		failed = true;
++	}
++
++	ret = venus_boot(core);
++	if (ret && !failed) {
++		err_msg = "boot Venus";
++		failed = true;
++	}
+ 
+-	ret |= venus_boot(core);
+-	ret |= hfi_core_resume(core, true);
++	ret = hfi_core_resume(core, true);
++	if (ret && !failed) {
++		err_msg = "resume HFI";
++		failed = true;
++	}
+ 
+ 	enable_irq(core->irq);
+ 
+ 	mutex_unlock(&core->lock);
+ 
+-	ret |= hfi_core_init(core);
++	ret = hfi_core_init(core);
++	if (ret && !failed) {
++		err_msg = "init HFI";
++		failed = true;
++	}
+ 
+ 	pm_runtime_put_sync(core->dev);
+ 
+-	if (ret) {
++	if (failed) {
+ 		disable_irq_nosync(core->irq);
+-		dev_warn(core->dev, "recovery failed (%d)\n", ret);
++		dev_warn_ratelimited(core->dev,
++				     "System error has occurred, recovery failed to %s\n",
++				     err_msg);
+ 		schedule_delayed_work(&core->work, msecs_to_jiffies(10));
+ 		return;
+ 	}
+ 
++	dev_warn(core->dev, "system error has occurred (recovered)\n");
++
+ 	mutex_lock(&core->lock);
+ 	core->sys_error = false;
+ 	mutex_unlock(&core->lock);
+diff --git a/drivers/media/platform/s5p-g2d/g2d.c b/drivers/media/platform/s5p-g2d/g2d.c
+index 15bcb7f6e113c..1cb5eaabf340b 100644
+--- a/drivers/media/platform/s5p-g2d/g2d.c
++++ b/drivers/media/platform/s5p-g2d/g2d.c
+@@ -276,6 +276,9 @@ static int g2d_release(struct file *file)
+ 	struct g2d_dev *dev = video_drvdata(file);
+ 	struct g2d_ctx *ctx = fh2ctx(file->private_data);
+ 
++	mutex_lock(&dev->mutex);
++	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
++	mutex_unlock(&dev->mutex);
+ 	v4l2_ctrl_handler_free(&ctx->ctrl_handler);
+ 	v4l2_fh_del(&ctx->fh);
+ 	v4l2_fh_exit(&ctx->fh);
+diff --git a/drivers/media/platform/s5p-jpeg/jpeg-core.c b/drivers/media/platform/s5p-jpeg/jpeg-core.c
+index 9b22dd8e34f44..d515eb08c3ee4 100644
+--- a/drivers/media/platform/s5p-jpeg/jpeg-core.c
++++ b/drivers/media/platform/s5p-jpeg/jpeg-core.c
+@@ -2566,11 +2566,8 @@ static void s5p_jpeg_buf_queue(struct vb2_buffer *vb)
+ static int s5p_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+ 	struct s5p_jpeg_ctx *ctx = vb2_get_drv_priv(q);
+-	int ret;
+-
+-	ret = pm_runtime_get_sync(ctx->jpeg->dev);
+ 
+-	return ret > 0 ? 0 : ret;
++	return pm_runtime_resume_and_get(ctx->jpeg->dev);
+ }
+ 
+ static void s5p_jpeg_stop_streaming(struct vb2_queue *q)
+diff --git a/drivers/media/platform/sh_vou.c b/drivers/media/platform/sh_vou.c
+index b22dc1d725276..7d30e0c9447e8 100644
+--- a/drivers/media/platform/sh_vou.c
++++ b/drivers/media/platform/sh_vou.c
+@@ -1133,7 +1133,11 @@ static int sh_vou_open(struct file *file)
+ 	if (v4l2_fh_is_singular_file(file) &&
+ 	    vou_dev->status == SH_VOU_INITIALISING) {
+ 		/* First open */
+-		pm_runtime_get_sync(vou_dev->v4l2_dev.dev);
++		err = pm_runtime_resume_and_get(vou_dev->v4l2_dev.dev);
++		if (err < 0) {
++			v4l2_fh_release(file);
++			goto done_open;
++		}
+ 		err = sh_vou_hw_init(vou_dev);
+ 		if (err < 0) {
+ 			pm_runtime_put(vou_dev->v4l2_dev.dev);
+diff --git a/drivers/media/platform/sti/bdisp/Makefile b/drivers/media/platform/sti/bdisp/Makefile
+index caf7ccd193eaa..39ade0a347236 100644
+--- a/drivers/media/platform/sti/bdisp/Makefile
++++ b/drivers/media/platform/sti/bdisp/Makefile
+@@ -1,4 +1,4 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_VIDEO_STI_BDISP) := bdisp.o
++obj-$(CONFIG_VIDEO_STI_BDISP) += bdisp.o
+ 
+ bdisp-objs := bdisp-v4l2.o bdisp-hw.o bdisp-debug.o
+diff --git a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
+index 060ca85f64d5d..85288da9d2ae6 100644
+--- a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
++++ b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
+@@ -499,7 +499,7 @@ static int bdisp_start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+ 	struct bdisp_ctx *ctx = q->drv_priv;
+ 	struct vb2_v4l2_buffer *buf;
+-	int ret = pm_runtime_get_sync(ctx->bdisp_dev->dev);
++	int ret = pm_runtime_resume_and_get(ctx->bdisp_dev->dev);
+ 
+ 	if (ret < 0) {
+ 		dev_err(ctx->bdisp_dev->dev, "failed to set runtime PM\n");
+@@ -1364,10 +1364,10 @@ static int bdisp_probe(struct platform_device *pdev)
+ 
+ 	/* Power management */
+ 	pm_runtime_enable(dev);
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to set PM\n");
+-		goto err_pm;
++		goto err_remove;
+ 	}
+ 
+ 	/* Filters */
+@@ -1395,6 +1395,7 @@ err_filter:
+ 	bdisp_hw_free_filters(bdisp->dev);
+ err_pm:
+ 	pm_runtime_put(dev);
++err_remove:
+ 	bdisp_debugfs_remove(bdisp);
+ 	v4l2_device_unregister(&bdisp->v4l2_dev);
+ err_clk:
+diff --git a/drivers/media/platform/sti/delta/Makefile b/drivers/media/platform/sti/delta/Makefile
+index 92b37e216f004..32412fa4c6328 100644
+--- a/drivers/media/platform/sti/delta/Makefile
++++ b/drivers/media/platform/sti/delta/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_VIDEO_STI_DELTA_DRIVER) := st-delta.o
++obj-$(CONFIG_VIDEO_STI_DELTA_DRIVER) += st-delta.o
+ st-delta-y := delta-v4l2.o delta-mem.o delta-ipc.o delta-debug.o
+ 
+ # MJPEG support
+diff --git a/drivers/media/platform/sti/hva/Makefile b/drivers/media/platform/sti/hva/Makefile
+index 74b41ec52f976..b5a5478bdd016 100644
+--- a/drivers/media/platform/sti/hva/Makefile
++++ b/drivers/media/platform/sti/hva/Makefile
+@@ -1,4 +1,4 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_VIDEO_STI_HVA) := st-hva.o
++obj-$(CONFIG_VIDEO_STI_HVA) += st-hva.o
+ st-hva-y := hva-v4l2.o hva-hw.o hva-mem.o hva-h264.o
+ st-hva-$(CONFIG_VIDEO_STI_HVA_DEBUGFS) += hva-debugfs.o
+diff --git a/drivers/media/platform/sti/hva/hva-hw.c b/drivers/media/platform/sti/hva/hva-hw.c
+index 43f279e2a6a38..cf4c891bf619a 100644
+--- a/drivers/media/platform/sti/hva/hva-hw.c
++++ b/drivers/media/platform/sti/hva/hva-hw.c
+@@ -130,8 +130,7 @@ static irqreturn_t hva_hw_its_irq_thread(int irq, void *arg)
+ 	ctx_id = (hva->sts_reg & 0xFF00) >> 8;
+ 	if (ctx_id >= HVA_MAX_INSTANCES) {
+ 		dev_err(dev, "%s     %s: bad context identifier: %d\n",
+-			ctx->name, __func__, ctx_id);
+-		ctx->hw_err = true;
++			HVA_PREFIX, __func__, ctx_id);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.c b/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.c
+index eb15c8c725ca0..64f25921463e9 100644
+--- a/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.c
++++ b/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.c
+@@ -118,6 +118,7 @@ static int sun4i_csi_notifier_init(struct sun4i_csi *csi)
+ 	struct v4l2_fwnode_endpoint vep = {
+ 		.bus_type = V4L2_MBUS_PARALLEL,
+ 	};
++	struct v4l2_async_subdev *asd;
+ 	struct fwnode_handle *ep;
+ 	int ret;
+ 
+@@ -134,10 +135,12 @@ static int sun4i_csi_notifier_init(struct sun4i_csi *csi)
+ 
+ 	csi->bus = vep.bus.parallel;
+ 
+-	ret = v4l2_async_notifier_add_fwnode_remote_subdev(&csi->notifier,
+-							   ep, &csi->asd);
+-	if (ret)
++	asd = v4l2_async_notifier_add_fwnode_remote_subdev(&csi->notifier,
++							   ep, sizeof(*asd));
++	if (IS_ERR(asd)) {
++		ret = PTR_ERR(asd);
+ 		goto out;
++	}
+ 
+ 	csi->notifier.ops = &sun4i_csi_notify_ops;
+ 
+diff --git a/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.h b/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.h
+index 0f67ff652c2e1..a5f61ee0ec4df 100644
+--- a/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.h
++++ b/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.h
+@@ -139,7 +139,6 @@ struct sun4i_csi {
+ 	struct v4l2_mbus_framefmt	subdev_fmt;
+ 
+ 	/* V4L2 Async variables */
+-	struct v4l2_async_subdev	asd;
+ 	struct v4l2_async_notifier	notifier;
+ 	struct v4l2_subdev		*src_subdev;
+ 	int				src_pad;
+diff --git a/drivers/media/platform/sunxi/sun8i-rotate/sun8i_rotate.c b/drivers/media/platform/sunxi/sun8i-rotate/sun8i_rotate.c
+index 3f81dd17755cb..fbcca59a0517c 100644
+--- a/drivers/media/platform/sunxi/sun8i-rotate/sun8i_rotate.c
++++ b/drivers/media/platform/sunxi/sun8i-rotate/sun8i_rotate.c
+@@ -494,7 +494,7 @@ static int rotate_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 		struct device *dev = ctx->dev->dev;
+ 		int ret;
+ 
+-		ret = pm_runtime_get_sync(dev);
++		ret = pm_runtime_resume_and_get(dev);
+ 		if (ret < 0) {
+ 			dev_err(dev, "Failed to enable module\n");
+ 
+diff --git a/drivers/media/platform/video-mux.c b/drivers/media/platform/video-mux.c
+index 53570250a25d5..640ce76fe0d92 100644
+--- a/drivers/media/platform/video-mux.c
++++ b/drivers/media/platform/video-mux.c
+@@ -362,7 +362,7 @@ static int video_mux_async_register(struct video_mux *vmux,
+ 
+ 	for (i = 0; i < num_input_pads; i++) {
+ 		struct v4l2_async_subdev *asd;
+-		struct fwnode_handle *ep;
++		struct fwnode_handle *ep, *remote_ep;
+ 
+ 		ep = fwnode_graph_get_endpoint_by_id(
+ 			dev_fwnode(vmux->subdev.dev), i, 0,
+@@ -370,19 +370,21 @@ static int video_mux_async_register(struct video_mux *vmux,
+ 		if (!ep)
+ 			continue;
+ 
+-		asd = kzalloc(sizeof(*asd), GFP_KERNEL);
+-		if (!asd) {
++		/* Skip dangling endpoints for backwards compatibility */
++		remote_ep = fwnode_graph_get_remote_endpoint(ep);
++		if (!remote_ep) {
+ 			fwnode_handle_put(ep);
+-			return -ENOMEM;
++			continue;
+ 		}
++		fwnode_handle_put(remote_ep);
+ 
+-		ret = v4l2_async_notifier_add_fwnode_remote_subdev(
+-			&vmux->notifier, ep, asd);
++		asd = v4l2_async_notifier_add_fwnode_remote_subdev(
++			&vmux->notifier, ep, sizeof(*asd));
+ 
+ 		fwnode_handle_put(ep);
+ 
+-		if (ret) {
+-			kfree(asd);
++		if (IS_ERR(asd)) {
++			ret = PTR_ERR(asd);
+ 			/* OK if asd already exists */
+ 			if (ret != -EEXIST)
+ 				return ret;
+diff --git a/drivers/media/usb/au0828/au0828-core.c b/drivers/media/usb/au0828/au0828-core.c
+index a8a72d5fbd129..caefac07af927 100644
+--- a/drivers/media/usb/au0828/au0828-core.c
++++ b/drivers/media/usb/au0828/au0828-core.c
+@@ -199,8 +199,8 @@ static int au0828_media_device_init(struct au0828_dev *dev,
+ 	struct media_device *mdev;
+ 
+ 	mdev = media_device_usb_allocate(udev, KBUILD_MODNAME, THIS_MODULE);
+-	if (!mdev)
+-		return -ENOMEM;
++	if (IS_ERR(mdev))
++		return PTR_ERR(mdev);
+ 
+ 	dev->media_dev = mdev;
+ #endif
+diff --git a/drivers/media/usb/cpia2/cpia2.h b/drivers/media/usb/cpia2/cpia2.h
+index 50835f5f7512c..57b7f1ea68da5 100644
+--- a/drivers/media/usb/cpia2/cpia2.h
++++ b/drivers/media/usb/cpia2/cpia2.h
+@@ -429,6 +429,7 @@ int cpia2_send_command(struct camera_data *cam, struct cpia2_command *cmd);
+ int cpia2_do_command(struct camera_data *cam,
+ 		     unsigned int command,
+ 		     unsigned char direction, unsigned char param);
++void cpia2_deinit_camera_struct(struct camera_data *cam, struct usb_interface *intf);
+ struct camera_data *cpia2_init_camera_struct(struct usb_interface *intf);
+ int cpia2_init_camera(struct camera_data *cam);
+ int cpia2_allocate_buffers(struct camera_data *cam);
+diff --git a/drivers/media/usb/cpia2/cpia2_core.c b/drivers/media/usb/cpia2/cpia2_core.c
+index e747548ab2869..b5a2d06fb356b 100644
+--- a/drivers/media/usb/cpia2/cpia2_core.c
++++ b/drivers/media/usb/cpia2/cpia2_core.c
+@@ -2163,6 +2163,18 @@ static void reset_camera_struct(struct camera_data *cam)
+ 	cam->height = cam->params.roi.height;
+ }
+ 
++/******************************************************************************
++ *
++ *  cpia2_init_camera_struct
++ *
++ *  Deinitialize camera struct
++ *****************************************************************************/
++void cpia2_deinit_camera_struct(struct camera_data *cam, struct usb_interface *intf)
++{
++	v4l2_device_unregister(&cam->v4l2_dev);
++	kfree(cam);
++}
++
+ /******************************************************************************
+  *
+  *  cpia2_init_camera_struct
+diff --git a/drivers/media/usb/cpia2/cpia2_usb.c b/drivers/media/usb/cpia2/cpia2_usb.c
+index 3ab80a7b44985..76aac06f9fb8e 100644
+--- a/drivers/media/usb/cpia2/cpia2_usb.c
++++ b/drivers/media/usb/cpia2/cpia2_usb.c
+@@ -844,15 +844,13 @@ static int cpia2_usb_probe(struct usb_interface *intf,
+ 	ret = set_alternate(cam, USBIF_CMDONLY);
+ 	if (ret < 0) {
+ 		ERR("%s: usb_set_interface error (ret = %d)\n", __func__, ret);
+-		kfree(cam);
+-		return ret;
++		goto alt_err;
+ 	}
+ 
+ 
+ 	if((ret = cpia2_init_camera(cam)) < 0) {
+ 		ERR("%s: failed to initialize cpia2 camera (ret = %d)\n", __func__, ret);
+-		kfree(cam);
+-		return ret;
++		goto alt_err;
+ 	}
+ 	LOG("  CPiA Version: %d.%02d (%d.%d)\n",
+ 	       cam->params.version.firmware_revision_hi,
+@@ -872,11 +870,14 @@ static int cpia2_usb_probe(struct usb_interface *intf,
+ 	ret = cpia2_register_camera(cam);
+ 	if (ret < 0) {
+ 		ERR("%s: Failed to register cpia2 camera (ret = %d)\n", __func__, ret);
+-		kfree(cam);
+-		return ret;
++		goto alt_err;
+ 	}
+ 
+ 	return 0;
++
++alt_err:
++	cpia2_deinit_camera_struct(cam, intf);
++	return ret;
+ }
+ 
+ /******************************************************************************
+diff --git a/drivers/media/usb/dvb-usb/cinergyT2-core.c b/drivers/media/usb/dvb-usb/cinergyT2-core.c
+index 969a7ec71dff7..4116ba5c45fcb 100644
+--- a/drivers/media/usb/dvb-usb/cinergyT2-core.c
++++ b/drivers/media/usb/dvb-usb/cinergyT2-core.c
+@@ -78,6 +78,8 @@ static int cinergyt2_frontend_attach(struct dvb_usb_adapter *adap)
+ 
+ 	ret = dvb_usb_generic_rw(d, st->data, 1, st->data, 3, 0);
+ 	if (ret < 0) {
++		if (adap->fe_adap[0].fe)
++			adap->fe_adap[0].fe->ops.release(adap->fe_adap[0].fe);
+ 		deb_rc("cinergyt2_power_ctrl() Failed to retrieve sleep state info\n");
+ 	}
+ 	mutex_unlock(&d->data_mutex);
+diff --git a/drivers/media/usb/dvb-usb/cxusb.c b/drivers/media/usb/dvb-usb/cxusb.c
+index 761992ad05e2a..7707de7bae7ca 100644
+--- a/drivers/media/usb/dvb-usb/cxusb.c
++++ b/drivers/media/usb/dvb-usb/cxusb.c
+@@ -1947,7 +1947,7 @@ static struct dvb_usb_device_properties cxusb_bluebird_lgz201_properties = {
+ 
+ 	.size_of_priv     = sizeof(struct cxusb_state),
+ 
+-	.num_adapters = 2,
++	.num_adapters = 1,
+ 	.adapter = {
+ 		{
+ 		.num_frontends = 1,
+diff --git a/drivers/media/usb/em28xx/em28xx-input.c b/drivers/media/usb/em28xx/em28xx-input.c
+index 5aa15a7a49def..59529cbf9cd0b 100644
+--- a/drivers/media/usb/em28xx/em28xx-input.c
++++ b/drivers/media/usb/em28xx/em28xx-input.c
+@@ -720,7 +720,8 @@ static int em28xx_ir_init(struct em28xx *dev)
+ 			dev->board.has_ir_i2c = 0;
+ 			dev_warn(&dev->intf->dev,
+ 				 "No i2c IR remote control device found.\n");
+-			return -ENODEV;
++			err = -ENODEV;
++			goto ref_put;
+ 		}
+ 	}
+ 
+@@ -735,7 +736,7 @@ static int em28xx_ir_init(struct em28xx *dev)
+ 
+ 	ir = kzalloc(sizeof(*ir), GFP_KERNEL);
+ 	if (!ir)
+-		return -ENOMEM;
++		goto ref_put;
+ 	rc = rc_allocate_device(RC_DRIVER_SCANCODE);
+ 	if (!rc)
+ 		goto error;
+@@ -839,6 +840,9 @@ error:
+ 	dev->ir = NULL;
+ 	rc_free_device(rc);
+ 	kfree(ir);
++ref_put:
++	em28xx_shutdown_buttons(dev);
++	kref_put(&dev->ref, em28xx_free_device);
+ 	return err;
+ }
+ 
+diff --git a/drivers/media/usb/gspca/gl860/gl860.c b/drivers/media/usb/gspca/gl860/gl860.c
+index 2c05ea2598e76..ce4ee8bc75c85 100644
+--- a/drivers/media/usb/gspca/gl860/gl860.c
++++ b/drivers/media/usb/gspca/gl860/gl860.c
+@@ -561,8 +561,8 @@ int gl860_RTx(struct gspca_dev *gspca_dev,
+ 					len, 400 + 200 * (len > 1));
+ 			memcpy(pdata, gspca_dev->usb_buf, len);
+ 		} else {
+-			r = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+-					req, pref, val, index, NULL, len, 400);
++			gspca_err(gspca_dev, "zero-length read request\n");
++			r = -EINVAL;
+ 		}
+ 	}
+ 
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+index f4a727918e352..d38dee1792e41 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+@@ -2676,9 +2676,8 @@ void pvr2_hdw_destroy(struct pvr2_hdw *hdw)
+ 		pvr2_stream_destroy(hdw->vid_stream);
+ 		hdw->vid_stream = NULL;
+ 	}
+-	pvr2_i2c_core_done(hdw);
+ 	v4l2_device_unregister(&hdw->v4l2_dev);
+-	pvr2_hdw_remove_usb_stuff(hdw);
++	pvr2_hdw_disconnect(hdw);
+ 	mutex_lock(&pvr2_unit_mtx);
+ 	do {
+ 		if ((hdw->unit_number >= 0) &&
+@@ -2705,6 +2704,7 @@ void pvr2_hdw_disconnect(struct pvr2_hdw *hdw)
+ {
+ 	pvr2_trace(PVR2_TRACE_INIT,"pvr2_hdw_disconnect(hdw=%p)",hdw);
+ 	LOCK_TAKE(hdw->big_lock);
++	pvr2_i2c_core_done(hdw);
+ 	LOCK_TAKE(hdw->ctl_lock);
+ 	pvr2_hdw_remove_usb_stuff(hdw);
+ 	LOCK_GIVE(hdw->ctl_lock);
+diff --git a/drivers/media/v4l2-core/v4l2-async.c b/drivers/media/v4l2-core/v4l2-async.c
+index e3ab003a6c851..33babe6e8b3a2 100644
+--- a/drivers/media/v4l2-core/v4l2-async.c
++++ b/drivers/media/v4l2-core/v4l2-async.c
+@@ -673,26 +673,26 @@ v4l2_async_notifier_add_fwnode_subdev(struct v4l2_async_notifier *notifier,
+ }
+ EXPORT_SYMBOL_GPL(v4l2_async_notifier_add_fwnode_subdev);
+ 
+-int
++struct v4l2_async_subdev *
+ v4l2_async_notifier_add_fwnode_remote_subdev(struct v4l2_async_notifier *notif,
+ 					     struct fwnode_handle *endpoint,
+-					     struct v4l2_async_subdev *asd)
++					     unsigned int asd_struct_size)
+ {
++	struct v4l2_async_subdev *asd;
+ 	struct fwnode_handle *remote;
+-	int ret;
+ 
+ 	remote = fwnode_graph_get_remote_port_parent(endpoint);
+ 	if (!remote)
+-		return -ENOTCONN;
++		return ERR_PTR(-ENOTCONN);
+ 
+-	asd->match_type = V4L2_ASYNC_MATCH_FWNODE;
+-	asd->match.fwnode = remote;
+-
+-	ret = v4l2_async_notifier_add_subdev(notif, asd);
+-	if (ret)
+-		fwnode_handle_put(remote);
+-
+-	return ret;
++	asd = v4l2_async_notifier_add_fwnode_subdev(notif, remote,
++						    asd_struct_size);
++	/*
++	 * Calling v4l2_async_notifier_add_fwnode_subdev grabs a refcount,
++	 * so drop the one we got in fwnode_graph_get_remote_port_parent.
++	 */
++	fwnode_handle_put(remote);
++	return asd;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_async_notifier_add_fwnode_remote_subdev);
+ 
+diff --git a/drivers/media/v4l2-core/v4l2-fh.c b/drivers/media/v4l2-core/v4l2-fh.c
+index 684574f58e82d..90eec79ee995a 100644
+--- a/drivers/media/v4l2-core/v4l2-fh.c
++++ b/drivers/media/v4l2-core/v4l2-fh.c
+@@ -96,6 +96,7 @@ int v4l2_fh_release(struct file *filp)
+ 		v4l2_fh_del(fh);
+ 		v4l2_fh_exit(fh);
+ 		kfree(fh);
++		filp->private_data = NULL;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/media/v4l2-core/v4l2-subdev.c b/drivers/media/v4l2-core/v4l2-subdev.c
+index a7d508e74d6b3..fbf0dcb313c82 100644
+--- a/drivers/media/v4l2-core/v4l2-subdev.c
++++ b/drivers/media/v4l2-core/v4l2-subdev.c
+@@ -428,30 +428,6 @@ static long subdev_do_ioctl(struct file *file, unsigned int cmd, void *arg)
+ 
+ 		return v4l2_event_dequeue(vfh, arg, file->f_flags & O_NONBLOCK);
+ 
+-	case VIDIOC_DQEVENT_TIME32: {
+-		struct v4l2_event_time32 *ev32 = arg;
+-		struct v4l2_event ev = { };
+-
+-		if (!(sd->flags & V4L2_SUBDEV_FL_HAS_EVENTS))
+-			return -ENOIOCTLCMD;
+-
+-		rval = v4l2_event_dequeue(vfh, &ev, file->f_flags & O_NONBLOCK);
+-
+-		*ev32 = (struct v4l2_event_time32) {
+-			.type		= ev.type,
+-			.pending	= ev.pending,
+-			.sequence	= ev.sequence,
+-			.timestamp.tv_sec  = ev.timestamp.tv_sec,
+-			.timestamp.tv_nsec = ev.timestamp.tv_nsec,
+-			.id		= ev.id,
+-		};
+-
+-		memcpy(&ev32->u, &ev.u, sizeof(ev.u));
+-		memcpy(&ev32->reserved, &ev.reserved, sizeof(ev.reserved));
+-
+-		return rval;
+-	}
+-
+ 	case VIDIOC_SUBSCRIBE_EVENT:
+ 		return v4l2_subdev_call(sd, core, subscribe_event, vfh, arg);
+ 
+diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c
+index 102dbb8080da5..29271ad4728a2 100644
+--- a/drivers/memstick/host/rtsx_usb_ms.c
++++ b/drivers/memstick/host/rtsx_usb_ms.c
+@@ -799,9 +799,9 @@ static int rtsx_usb_ms_drv_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ err_out:
+-	memstick_free_host(msh);
+ 	pm_runtime_disable(ms_dev(host));
+ 	pm_runtime_put_noidle(ms_dev(host));
++	memstick_free_host(msh);
+ 	return err;
+ }
+ 
+@@ -828,9 +828,6 @@ static int rtsx_usb_ms_drv_remove(struct platform_device *pdev)
+ 	}
+ 	mutex_unlock(&host->host_mutex);
+ 
+-	memstick_remove_host(msh);
+-	memstick_free_host(msh);
+-
+ 	/* Balance possible unbalanced usage count
+ 	 * e.g. unconditional module removal
+ 	 */
+@@ -838,10 +835,11 @@ static int rtsx_usb_ms_drv_remove(struct platform_device *pdev)
+ 		pm_runtime_put(ms_dev(host));
+ 
+ 	pm_runtime_disable(ms_dev(host));
+-	platform_set_drvdata(pdev, NULL);
+-
++	memstick_remove_host(msh);
+ 	dev_dbg(ms_dev(host),
+ 		": Realtek USB Memstick controller has been removed\n");
++	memstick_free_host(msh);
++	platform_set_drvdata(pdev, NULL);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index 4789507f325b8..b8847ae04d938 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -465,6 +465,7 @@ config MFD_MP2629
+ 	tristate "Monolithic Power Systems MP2629 ADC and Battery charger"
+ 	depends on I2C
+ 	select REGMAP_I2C
++	select MFD_CORE
+ 	help
+ 	  Select this option to enable support for Monolithic Power Systems
+ 	  battery charger. This provides ADC, thermal and battery charger power
+diff --git a/drivers/mfd/rn5t618.c b/drivers/mfd/rn5t618.c
+index dc452df1f1bfe..652a5e60067f8 100644
+--- a/drivers/mfd/rn5t618.c
++++ b/drivers/mfd/rn5t618.c
+@@ -104,7 +104,7 @@ static int rn5t618_irq_init(struct rn5t618 *rn5t618)
+ 
+ 	ret = devm_regmap_add_irq_chip(rn5t618->dev, rn5t618->regmap,
+ 				       rn5t618->irq,
+-				       IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
++				       IRQF_TRIGGER_LOW | IRQF_ONESHOT,
+ 				       0, irq_chip, &rn5t618->irq_data);
+ 	if (ret)
+ 		dev_err(rn5t618->dev, "Failed to register IRQ chip\n");
+diff --git a/drivers/misc/eeprom/idt_89hpesx.c b/drivers/misc/eeprom/idt_89hpesx.c
+index 81c70e5bc168f..3e4a594c110b3 100644
+--- a/drivers/misc/eeprom/idt_89hpesx.c
++++ b/drivers/misc/eeprom/idt_89hpesx.c
+@@ -1126,11 +1126,10 @@ static void idt_get_fw_data(struct idt_89hpesx_dev *pdev)
+ 
+ 	device_for_each_child_node(dev, fwnode) {
+ 		ee_id = idt_ee_match_id(fwnode);
+-		if (!ee_id) {
+-			dev_warn(dev, "Skip unsupported EEPROM device");
+-			continue;
+-		} else
++		if (ee_id)
+ 			break;
++
++		dev_warn(dev, "Skip unsupported EEPROM device %pfw\n", fwnode);
+ 	}
+ 
+ 	/* If there is no fwnode EEPROM device, then set zero size */
+@@ -1161,6 +1160,7 @@ static void idt_get_fw_data(struct idt_89hpesx_dev *pdev)
+ 	else /* if (!fwnode_property_read_bool(node, "read-only")) */
+ 		pdev->eero = false;
+ 
++	fwnode_handle_put(fwnode);
+ 	dev_info(dev, "EEPROM of %d bytes found by 0x%x",
+ 		pdev->eesize, pdev->eeaddr);
+ }
+diff --git a/drivers/misc/habanalabs/common/habanalabs_drv.c b/drivers/misc/habanalabs/common/habanalabs_drv.c
+index 3bcef64a677ae..ded92b3cbdb27 100644
+--- a/drivers/misc/habanalabs/common/habanalabs_drv.c
++++ b/drivers/misc/habanalabs/common/habanalabs_drv.c
+@@ -421,6 +421,7 @@ static int hl_pci_probe(struct pci_dev *pdev,
+ 	return 0;
+ 
+ disable_device:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_set_drvdata(pdev, NULL);
+ 	destroy_hdev(hdev);
+ 
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 3246598e4d7e3..87bac99207023 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1003,6 +1003,12 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
+ 
+ 	switch (mq_rq->drv_op) {
+ 	case MMC_DRV_OP_IOCTL:
++		if (card->ext_csd.cmdq_en) {
++			ret = mmc_cmdq_disable(card);
++			if (ret)
++				break;
++		}
++		fallthrough;
+ 	case MMC_DRV_OP_IOCTL_RPMB:
+ 		idata = mq_rq->drv_op_data;
+ 		for (i = 0, ret = 0; i < mq_rq->ioc_count; i++) {
+@@ -1013,6 +1019,8 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
+ 		/* Always switch back to main area after RPMB access */
+ 		if (rpmb_ioctl)
+ 			mmc_blk_part_switch(card, 0);
++		else if (card->reenable_cmdq && !card->ext_csd.cmdq_en)
++			mmc_cmdq_enable(card);
+ 		break;
+ 	case MMC_DRV_OP_BOOT_WP:
+ 		ret = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_BOOT_WP,
+diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c
+index 19cbb6171b358..9cd8862e6cbd0 100644
+--- a/drivers/mmc/host/sdhci-sprd.c
++++ b/drivers/mmc/host/sdhci-sprd.c
+@@ -393,6 +393,7 @@ static void sdhci_sprd_request_done(struct sdhci_host *host,
+ static struct sdhci_ops sdhci_sprd_ops = {
+ 	.read_l = sdhci_sprd_readl,
+ 	.write_l = sdhci_sprd_writel,
++	.write_w = sdhci_sprd_writew,
+ 	.write_b = sdhci_sprd_writeb,
+ 	.set_clock = sdhci_sprd_set_clock,
+ 	.get_max_clock = sdhci_sprd_get_max_clock,
+diff --git a/drivers/mmc/host/usdhi6rol0.c b/drivers/mmc/host/usdhi6rol0.c
+index 615f3d008af1e..b9b79b1089a00 100644
+--- a/drivers/mmc/host/usdhi6rol0.c
++++ b/drivers/mmc/host/usdhi6rol0.c
+@@ -1801,6 +1801,7 @@ static int usdhi6_probe(struct platform_device *pdev)
+ 
+ 	version = usdhi6_read(host, USDHI6_VERSION);
+ 	if ((version & 0xfff) != 0xa0d) {
++		ret = -EPERM;
+ 		dev_err(dev, "Version not recognized %x\n", version);
+ 		goto e_clk_off;
+ 	}
+diff --git a/drivers/mmc/host/via-sdmmc.c b/drivers/mmc/host/via-sdmmc.c
+index 9b755ea0fa03c..f07c71db3cafe 100644
+--- a/drivers/mmc/host/via-sdmmc.c
++++ b/drivers/mmc/host/via-sdmmc.c
+@@ -857,6 +857,9 @@ static void via_sdc_data_isr(struct via_crdr_mmc_host *host, u16 intmask)
+ {
+ 	BUG_ON(intmask == 0);
+ 
++	if (!host->data)
++		return;
++
+ 	if (intmask & VIA_CRDR_SDSTS_DT)
+ 		host->data->error = -ETIMEDOUT;
+ 	else if (intmask & (VIA_CRDR_SDSTS_RC | VIA_CRDR_SDSTS_WC))
+diff --git a/drivers/mmc/host/vub300.c b/drivers/mmc/host/vub300.c
+index 739cf63ef6e2f..4950d10d3a191 100644
+--- a/drivers/mmc/host/vub300.c
++++ b/drivers/mmc/host/vub300.c
+@@ -2279,7 +2279,7 @@ static int vub300_probe(struct usb_interface *interface,
+ 	if (retval < 0)
+ 		goto error5;
+ 	retval =
+-		usb_control_msg(vub300->udev, usb_rcvctrlpipe(vub300->udev, 0),
++		usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
+ 				SET_ROM_WAIT_STATES,
+ 				USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				firmware_rom_wait_states, 0x0000, NULL, 0, HZ);
+diff --git a/drivers/mtd/nand/raw/arasan-nand-controller.c b/drivers/mtd/nand/raw/arasan-nand-controller.c
+index fbb4ea751be8e..0ee3192916d97 100644
+--- a/drivers/mtd/nand/raw/arasan-nand-controller.c
++++ b/drivers/mtd/nand/raw/arasan-nand-controller.c
+@@ -272,6 +272,37 @@ static int anfc_pkt_len_config(unsigned int len, unsigned int *steps,
+ 	return 0;
+ }
+ 
++static int anfc_select_target(struct nand_chip *chip, int target)
++{
++	struct anand *anand = to_anand(chip);
++	struct arasan_nfc *nfc = to_anfc(chip->controller);
++	int ret;
++
++	/* Update the controller timings and the potential ECC configuration */
++	writel_relaxed(anand->timings, nfc->base + DATA_INTERFACE_REG);
++
++	/* Update clock frequency */
++	if (nfc->cur_clk != anand->clk) {
++		clk_disable_unprepare(nfc->controller_clk);
++		ret = clk_set_rate(nfc->controller_clk, anand->clk);
++		if (ret) {
++			dev_err(nfc->dev, "Failed to change clock rate\n");
++			return ret;
++		}
++
++		ret = clk_prepare_enable(nfc->controller_clk);
++		if (ret) {
++			dev_err(nfc->dev,
++				"Failed to re-enable the controller clock\n");
++			return ret;
++		}
++
++		nfc->cur_clk = anand->clk;
++	}
++
++	return 0;
++}
++
+ /*
+  * When using the embedded hardware ECC engine, the controller is in charge of
+  * feeding the engine with, first, the ECC residue present in the data array.
+@@ -400,6 +431,18 @@ static int anfc_read_page_hw_ecc(struct nand_chip *chip, u8 *buf,
+ 	return 0;
+ }
+ 
++static int anfc_sel_read_page_hw_ecc(struct nand_chip *chip, u8 *buf,
++				     int oob_required, int page)
++{
++	int ret;
++
++	ret = anfc_select_target(chip, chip->cur_cs);
++	if (ret)
++		return ret;
++
++	return anfc_read_page_hw_ecc(chip, buf, oob_required, page);
++};
++
+ static int anfc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
+ 				  int oob_required, int page)
+ {
+@@ -460,6 +503,18 @@ static int anfc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
+ 	return ret;
+ }
+ 
++static int anfc_sel_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
++				      int oob_required, int page)
++{
++	int ret;
++
++	ret = anfc_select_target(chip, chip->cur_cs);
++	if (ret)
++		return ret;
++
++	return anfc_write_page_hw_ecc(chip, buf, oob_required, page);
++};
++
+ /* NAND framework ->exec_op() hooks and related helpers */
+ static int anfc_parse_instructions(struct nand_chip *chip,
+ 				   const struct nand_subop *subop,
+@@ -752,37 +807,6 @@ static const struct nand_op_parser anfc_op_parser = NAND_OP_PARSER(
+ 		NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)),
+ 	);
+ 
+-static int anfc_select_target(struct nand_chip *chip, int target)
+-{
+-	struct anand *anand = to_anand(chip);
+-	struct arasan_nfc *nfc = to_anfc(chip->controller);
+-	int ret;
+-
+-	/* Update the controller timings and the potential ECC configuration */
+-	writel_relaxed(anand->timings, nfc->base + DATA_INTERFACE_REG);
+-
+-	/* Update clock frequency */
+-	if (nfc->cur_clk != anand->clk) {
+-		clk_disable_unprepare(nfc->controller_clk);
+-		ret = clk_set_rate(nfc->controller_clk, anand->clk);
+-		if (ret) {
+-			dev_err(nfc->dev, "Failed to change clock rate\n");
+-			return ret;
+-		}
+-
+-		ret = clk_prepare_enable(nfc->controller_clk);
+-		if (ret) {
+-			dev_err(nfc->dev,
+-				"Failed to re-enable the controller clock\n");
+-			return ret;
+-		}
+-
+-		nfc->cur_clk = anand->clk;
+-	}
+-
+-	return 0;
+-}
+-
+ static int anfc_check_op(struct nand_chip *chip,
+ 			 const struct nand_operation *op)
+ {
+@@ -1006,8 +1030,8 @@ static int anfc_init_hw_ecc_controller(struct arasan_nfc *nfc,
+ 	if (!anand->bch)
+ 		return -EINVAL;
+ 
+-	ecc->read_page = anfc_read_page_hw_ecc;
+-	ecc->write_page = anfc_write_page_hw_ecc;
++	ecc->read_page = anfc_sel_read_page_hw_ecc;
++	ecc->write_page = anfc_sel_write_page_hw_ecc;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index f5ca2002d08e8..d00c916f133bd 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -3036,8 +3036,10 @@ static int __maybe_unused marvell_nfc_resume(struct device *dev)
+ 		return ret;
+ 
+ 	ret = clk_prepare_enable(nfc->reg_clk);
+-	if (ret < 0)
++	if (ret < 0) {
++		clk_disable_unprepare(nfc->core_clk);
+ 		return ret;
++	}
+ 
+ 	/*
+ 	 * Reset nfc->selected_chip so the next command will cause the timing
+diff --git a/drivers/mtd/parsers/redboot.c b/drivers/mtd/parsers/redboot.c
+index 91146bdc47132..3ccd6363ee8cb 100644
+--- a/drivers/mtd/parsers/redboot.c
++++ b/drivers/mtd/parsers/redboot.c
+@@ -45,6 +45,7 @@ static inline int redboot_checksum(struct fis_image_desc *img)
+ static void parse_redboot_of(struct mtd_info *master)
+ {
+ 	struct device_node *np;
++	struct device_node *npart;
+ 	u32 dirblock;
+ 	int ret;
+ 
+@@ -52,7 +53,11 @@ static void parse_redboot_of(struct mtd_info *master)
+ 	if (!np)
+ 		return;
+ 
+-	ret = of_property_read_u32(np, "fis-index-block", &dirblock);
++	npart = of_get_child_by_name(np, "partitions");
++	if (!npart)
++		return;
++
++	ret = of_property_read_u32(npart, "fis-index-block", &dirblock);
+ 	if (ret)
+ 		return;
+ 
+diff --git a/drivers/net/can/peak_canfd/peak_canfd.c b/drivers/net/can/peak_canfd/peak_canfd.c
+index 40c33b8a5fda3..ac5801a98680d 100644
+--- a/drivers/net/can/peak_canfd/peak_canfd.c
++++ b/drivers/net/can/peak_canfd/peak_canfd.c
+@@ -351,8 +351,8 @@ static int pucan_handle_status(struct peak_canfd_priv *priv,
+ 				return err;
+ 		}
+ 
+-		/* start network queue (echo_skb array is empty) */
+-		netif_start_queue(ndev);
++		/* wake network queue up (echo_skb array is empty) */
++		netif_wake_queue(ndev);
+ 
+ 		return 0;
+ 	}
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index 4f52810bebf89..db9f15f17610b 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -1053,7 +1053,6 @@ static void ems_usb_disconnect(struct usb_interface *intf)
+ 
+ 	if (dev) {
+ 		unregister_netdev(dev->netdev);
+-		free_candev(dev->netdev);
+ 
+ 		unlink_all_urbs(dev);
+ 
+@@ -1061,6 +1060,8 @@ static void ems_usb_disconnect(struct usb_interface *intf)
+ 
+ 		kfree(dev->intr_in_buffer);
+ 		kfree(dev->tx_msg_buffer);
++
++		free_candev(dev->netdev);
+ 	}
+ }
+ 
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index e273b2bd82ba7..82852c57cc0e4 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -1711,6 +1711,12 @@ static int sja1105_reload_cbs(struct sja1105_private *priv)
+ {
+ 	int rc = 0, i;
+ 
++	/* The credit based shapers are only allocated if
++	 * CONFIG_NET_SCH_CBS is enabled.
++	 */
++	if (!priv->cbs)
++		return 0;
++
+ 	for (i = 0; i < priv->info->num_cbs_shapers; i++) {
+ 		struct sja1105_cbs_entry *cbs = &priv->cbs[i];
+ 
+diff --git a/drivers/net/ethernet/aeroflex/greth.c b/drivers/net/ethernet/aeroflex/greth.c
+index 9c5891bbfe61a..f4f50b3a472e1 100644
+--- a/drivers/net/ethernet/aeroflex/greth.c
++++ b/drivers/net/ethernet/aeroflex/greth.c
+@@ -1539,10 +1539,11 @@ static int greth_of_remove(struct platform_device *of_dev)
+ 	mdiobus_unregister(greth->mdio);
+ 
+ 	unregister_netdev(ndev);
+-	free_netdev(ndev);
+ 
+ 	of_iounmap(&of_dev->resource[0], greth->regs, resource_size(&of_dev->resource[0]));
+ 
++	free_netdev(ndev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
+index f5fba8b8cdea9..a47e2710487ec 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
+@@ -91,7 +91,7 @@ struct aq_macsec_txsc {
+ 	u32 hw_sc_idx;
+ 	unsigned long tx_sa_idx_busy;
+ 	const struct macsec_secy *sw_secy;
+-	u8 tx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN];
++	u8 tx_sa_key[MACSEC_NUM_AN][MACSEC_MAX_KEY_LEN];
+ 	struct aq_macsec_tx_sc_stats stats;
+ 	struct aq_macsec_tx_sa_stats tx_sa_stats[MACSEC_NUM_AN];
+ };
+@@ -101,7 +101,7 @@ struct aq_macsec_rxsc {
+ 	unsigned long rx_sa_idx_busy;
+ 	const struct macsec_secy *sw_secy;
+ 	const struct macsec_rx_sc *sw_rxsc;
+-	u8 rx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN];
++	u8 rx_sa_key[MACSEC_NUM_AN][MACSEC_MAX_KEY_LEN];
+ 	struct aq_macsec_rx_sa_stats rx_sa_stats[MACSEC_NUM_AN];
+ };
+ 
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index fcca023f22e54..41f7f078cd27c 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -4296,3 +4296,4 @@ MODULE_AUTHOR("Broadcom Corporation");
+ MODULE_DESCRIPTION("Broadcom GENET Ethernet controller driver");
+ MODULE_ALIAS("platform:bcmgenet");
+ MODULE_LICENSE("GPL");
++MODULE_SOFTDEP("pre: mdio-bcm-unimac");
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index 701c12c9e0337..649c5c429bd7c 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -550,7 +550,7 @@ int be_process_mcc(struct be_adapter *adapter)
+ 	int num = 0, status = 0;
+ 	struct be_mcc_obj *mcc_obj = &adapter->mcc_obj;
+ 
+-	spin_lock_bh(&adapter->mcc_cq_lock);
++	spin_lock(&adapter->mcc_cq_lock);
+ 
+ 	while ((compl = be_mcc_compl_get(adapter))) {
+ 		if (compl->flags & CQE_FLAGS_ASYNC_MASK) {
+@@ -566,7 +566,7 @@ int be_process_mcc(struct be_adapter *adapter)
+ 	if (num)
+ 		be_cq_notify(adapter, mcc_obj->cq.id, mcc_obj->rearm_cq, num);
+ 
+-	spin_unlock_bh(&adapter->mcc_cq_lock);
++	spin_unlock(&adapter->mcc_cq_lock);
+ 	return status;
+ }
+ 
+@@ -581,7 +581,9 @@ static int be_mcc_wait_compl(struct be_adapter *adapter)
+ 		if (be_check_error(adapter, BE_ERROR_ANY))
+ 			return -EIO;
+ 
++		local_bh_disable();
+ 		status = be_process_mcc(adapter);
++		local_bh_enable();
+ 
+ 		if (atomic_read(&mcc_obj->q.used) == 0)
+ 			break;
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index cb1e1ad652d09..89697cb09d1c0 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -5509,7 +5509,9 @@ static void be_worker(struct work_struct *work)
+ 	 * mcc completions
+ 	 */
+ 	if (!netif_running(adapter->netdev)) {
++		local_bh_disable();
+ 		be_process_mcc(adapter);
++		local_bh_enable();
+ 		goto reschedule;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/ezchip/nps_enet.c b/drivers/net/ethernet/ezchip/nps_enet.c
+index 815fb62c4b02e..3d74401b4f102 100644
+--- a/drivers/net/ethernet/ezchip/nps_enet.c
++++ b/drivers/net/ethernet/ezchip/nps_enet.c
+@@ -610,7 +610,7 @@ static s32 nps_enet_probe(struct platform_device *pdev)
+ 
+ 	/* Get IRQ number */
+ 	priv->irq = platform_get_irq(pdev, 0);
+-	if (!priv->irq) {
++	if (priv->irq < 0) {
+ 		dev_err(dev, "failed to retrieve <irq Rx-Tx> value from device tree\n");
+ 		err = -ENODEV;
+ 		goto out_netdev;
+@@ -645,8 +645,8 @@ static s32 nps_enet_remove(struct platform_device *pdev)
+ 	struct nps_enet_priv *priv = netdev_priv(ndev);
+ 
+ 	unregister_netdev(ndev);
+-	free_netdev(ndev);
+ 	netif_napi_del(&priv->napi);
++	free_netdev(ndev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index c9c380c508791..5bc11d1bb9df8 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -1831,14 +1831,17 @@ static int ftgmac100_probe(struct platform_device *pdev)
+ 	if (np && of_get_property(np, "use-ncsi", NULL)) {
+ 		if (!IS_ENABLED(CONFIG_NET_NCSI)) {
+ 			dev_err(&pdev->dev, "NCSI stack not enabled\n");
++			err = -EINVAL;
+ 			goto err_ncsi_dev;
+ 		}
+ 
+ 		dev_info(&pdev->dev, "Using NCSI interface\n");
+ 		priv->use_ncsi = true;
+ 		priv->ndev = ncsi_register_dev(netdev, ftgmac100_ncsi_handler);
+-		if (!priv->ndev)
++		if (!priv->ndev) {
++			err = -EINVAL;
+ 			goto err_ncsi_dev;
++		}
+ 	} else if (np && of_get_property(np, "phy-handle", NULL)) {
+ 		struct phy_device *phy;
+ 
+@@ -1846,6 +1849,7 @@ static int ftgmac100_probe(struct platform_device *pdev)
+ 					     &ftgmac100_adjust_link);
+ 		if (!phy) {
+ 			dev_err(&pdev->dev, "Failed to connect to phy\n");
++			err = -EINVAL;
+ 			goto err_setup_mdio;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index d6e35421d8f7b..3a74e4645ce65 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -1286,8 +1286,8 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	gve_write_version(&reg_bar->driver_version);
+ 	/* Get max queues to alloc etherdev */
+-	max_rx_queues = ioread32be(&reg_bar->max_tx_queues);
+-	max_tx_queues = ioread32be(&reg_bar->max_rx_queues);
++	max_tx_queues = ioread32be(&reg_bar->max_tx_queues);
++	max_rx_queues = ioread32be(&reg_bar->max_rx_queues);
+ 	/* Alloc and setup the netdev and priv */
+ 	dev = alloc_etherdev_mqs(sizeof(*priv), max_tx_queues, max_rx_queues);
+ 	if (!dev) {
+diff --git a/drivers/net/ethernet/ibm/ehea/ehea_main.c b/drivers/net/ethernet/ibm/ehea/ehea_main.c
+index c2e7404757869..f630667364253 100644
+--- a/drivers/net/ethernet/ibm/ehea/ehea_main.c
++++ b/drivers/net/ethernet/ibm/ehea/ehea_main.c
+@@ -2617,10 +2617,8 @@ static int ehea_restart_qps(struct net_device *dev)
+ 	u16 dummy16 = 0;
+ 
+ 	cb0 = (void *)get_zeroed_page(GFP_KERNEL);
+-	if (!cb0) {
+-		ret = -ENOMEM;
+-		goto out;
+-	}
++	if (!cb0)
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < (port->num_def_qps); i++) {
+ 		struct ehea_port_res *pr =  &port->port_res[i];
+@@ -2640,6 +2638,7 @@ static int ehea_restart_qps(struct net_device *dev)
+ 					    cb0);
+ 		if (hret != H_SUCCESS) {
+ 			netdev_err(dev, "query_ehea_qp failed (1)\n");
++			ret = -EFAULT;
+ 			goto out;
+ 		}
+ 
+@@ -2652,6 +2651,7 @@ static int ehea_restart_qps(struct net_device *dev)
+ 					     &dummy64, &dummy16, &dummy16);
+ 		if (hret != H_SUCCESS) {
+ 			netdev_err(dev, "modify_ehea_qp failed (1)\n");
++			ret = -EFAULT;
+ 			goto out;
+ 		}
+ 
+@@ -2660,6 +2660,7 @@ static int ehea_restart_qps(struct net_device *dev)
+ 					    cb0);
+ 		if (hret != H_SUCCESS) {
+ 			netdev_err(dev, "query_ehea_qp failed (2)\n");
++			ret = -EFAULT;
+ 			goto out;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 8cc444684491a..3134c1988db36 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -212,12 +212,11 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
+ 	mutex_lock(&adapter->fw_lock);
+ 	adapter->fw_done_rc = 0;
+ 	reinit_completion(&adapter->fw_done);
+-	rc = send_request_map(adapter, ltb->addr,
+-			      ltb->size, ltb->map_id);
++
++	rc = send_request_map(adapter, ltb->addr, ltb->size, ltb->map_id);
+ 	if (rc) {
+-		dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
+-		mutex_unlock(&adapter->fw_lock);
+-		return rc;
++		dev_err(dev, "send_request_map failed, rc = %d\n", rc);
++		goto out;
+ 	}
+ 
+ 	rc = ibmvnic_wait_for_completion(adapter, &adapter->fw_done, 10000);
+@@ -225,20 +224,23 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
+ 		dev_err(dev,
+ 			"Long term map request aborted or timed out,rc = %d\n",
+ 			rc);
+-		dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
+-		mutex_unlock(&adapter->fw_lock);
+-		return rc;
++		goto out;
+ 	}
+ 
+ 	if (adapter->fw_done_rc) {
+ 		dev_err(dev, "Couldn't map long term buffer,rc = %d\n",
+ 			adapter->fw_done_rc);
++		rc = -1;
++		goto out;
++	}
++	rc = 0;
++out:
++	if (rc) {
+ 		dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
+-		mutex_unlock(&adapter->fw_lock);
+-		return -1;
++		ltb->buff = NULL;
+ 	}
+ 	mutex_unlock(&adapter->fw_lock);
+-	return 0;
++	return rc;
+ }
+ 
+ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
+@@ -258,6 +260,8 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
+ 	    adapter->reset_reason != VNIC_RESET_TIMEOUT)
+ 		send_request_unmap(adapter, ltb->map_id);
+ 	dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
++	ltb->buff = NULL;
++	ltb->map_id = 0;
+ }
+ 
+ static int reset_long_term_buff(struct ibmvnic_adapter *adapter,
+@@ -747,8 +751,11 @@ static int init_tx_pools(struct net_device *netdev)
+ 
+ 	adapter->tso_pool = kcalloc(tx_subcrqs,
+ 				    sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
+-	if (!adapter->tso_pool)
++	if (!adapter->tso_pool) {
++		kfree(adapter->tx_pool);
++		adapter->tx_pool = NULL;
+ 		return -1;
++	}
+ 
+ 	adapter->num_active_tx_pools = tx_subcrqs;
+ 
+@@ -1166,6 +1173,11 @@ static int __ibmvnic_open(struct net_device *netdev)
+ 
+ 	netif_tx_start_all_queues(netdev);
+ 
++	if (prev_state == VNIC_CLOSED) {
++		for (i = 0; i < adapter->req_rx_queues; i++)
++			napi_schedule(&adapter->napi[i]);
++	}
++
+ 	adapter->state = VNIC_OPEN;
+ 	return rc;
+ }
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index a0948002ddf85..b3ad95ac3d859 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -5222,18 +5222,20 @@ static void e1000_watchdog_task(struct work_struct *work)
+ 			pm_runtime_resume(netdev->dev.parent);
+ 
+ 			/* Checking if MAC is in DMoff state*/
+-			pcim_state = er32(STATUS);
+-			while (pcim_state & E1000_STATUS_PCIM_STATE) {
+-				if (tries++ == dmoff_exit_timeout) {
+-					e_dbg("Error in exiting dmoff\n");
+-					break;
+-				}
+-				usleep_range(10000, 20000);
++			if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID) {
+ 				pcim_state = er32(STATUS);
+-
+-				/* Checking if MAC exited DMoff state */
+-				if (!(pcim_state & E1000_STATUS_PCIM_STATE))
+-					e1000_phy_hw_reset(&adapter->hw);
++				while (pcim_state & E1000_STATUS_PCIM_STATE) {
++					if (tries++ == dmoff_exit_timeout) {
++						e_dbg("Error in exiting dmoff\n");
++						break;
++					}
++					usleep_range(10000, 20000);
++					pcim_state = er32(STATUS);
++
++					/* Checking if MAC exited DMoff state */
++					if (!(pcim_state & E1000_STATUS_PCIM_STATE))
++						e1000_phy_hw_reset(&adapter->hw);
++				}
+ 			}
+ 
+ 			/* update snapshot of PHY registers on LSC */
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 5d48bc0c3f6c4..874073f7f0248 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -1262,8 +1262,7 @@ static int i40e_set_link_ksettings(struct net_device *netdev,
+ 			if (ethtool_link_ksettings_test_link_mode(&safe_ks,
+ 								  supported,
+ 								  Autoneg) &&
+-			    hw->phy.link_info.phy_type !=
+-			    I40E_PHY_TYPE_10GBASE_T) {
++			    hw->phy.media_type != I40E_MEDIA_TYPE_BASET) {
+ 				netdev_info(netdev, "Autoneg cannot be disabled on this phy\n");
+ 				err = -EINVAL;
+ 				goto done;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index f0edea7cdbccc..52e31f712a545 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -31,7 +31,7 @@ static void i40e_vsi_reinit_locked(struct i40e_vsi *vsi);
+ static void i40e_handle_reset_warning(struct i40e_pf *pf, bool lock_acquired);
+ static int i40e_add_vsi(struct i40e_vsi *vsi);
+ static int i40e_add_veb(struct i40e_veb *veb, struct i40e_vsi *vsi);
+-static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit);
++static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit, bool lock_acquired);
+ static int i40e_setup_misc_vector(struct i40e_pf *pf);
+ static void i40e_determine_queue_usage(struct i40e_pf *pf);
+ static int i40e_setup_pf_filter_control(struct i40e_pf *pf);
+@@ -8347,6 +8347,8 @@ int i40e_vsi_open(struct i40e_vsi *vsi)
+ 			 dev_driver_string(&pf->pdev->dev),
+ 			 dev_name(&pf->pdev->dev));
+ 		err = i40e_vsi_request_irq(vsi, int_name);
++		if (err)
++			goto err_setup_rx;
+ 
+ 	} else {
+ 		err = -EINVAL;
+@@ -10112,7 +10114,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ 	/* do basic switch setup */
+ 	if (!lock_acquired)
+ 		rtnl_lock();
+-	ret = i40e_setup_pf_switch(pf, reinit);
++	ret = i40e_setup_pf_switch(pf, reinit, true);
+ 	if (ret)
+ 		goto end_unlock;
+ 
+@@ -14167,10 +14169,11 @@ int i40e_fetch_switch_configuration(struct i40e_pf *pf, bool printconfig)
+  * i40e_setup_pf_switch - Setup the HW switch on startup or after reset
+  * @pf: board private structure
+  * @reinit: if the Main VSI needs to re-initialized.
++ * @lock_acquired: indicates whether or not the lock has been acquired
+  *
+  * Returns 0 on success, negative value on failure
+  **/
+-static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
++static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ {
+ 	u16 flags = 0;
+ 	int ret;
+@@ -14272,9 +14275,15 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
+ 
+ 	i40e_ptp_init(pf);
+ 
++	if (!lock_acquired)
++		rtnl_lock();
++
+ 	/* repopulate tunnel port filters */
+ 	udp_tunnel_nic_reset_ntf(pf->vsi[pf->lan_vsi]->netdev);
+ 
++	if (!lock_acquired)
++		rtnl_unlock();
++
+ 	return ret;
+ }
+ 
+@@ -15046,7 +15055,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 			pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
+ 	}
+ #endif
+-	err = i40e_setup_pf_switch(pf, false);
++	err = i40e_setup_pf_switch(pf, false, false);
+ 	if (err) {
+ 		dev_info(&pdev->dev, "setup_pf_switch failed: %d\n", err);
+ 		goto err_vsis;
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 6aa13c9f9fc9c..a9f65d6677617 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -7045,6 +7045,8 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_port_probe:
++	fwnode_handle_put(port_fwnode);
++
+ 	i = 0;
+ 	fwnode_for_each_available_child_node(fwnode, port_fwnode) {
+ 		if (priv->port_list[i])
+diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+index ade8c44c01cd1..9a0870dc2f034 100644
+--- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
++++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+@@ -2536,9 +2536,13 @@ static int pch_gbe_probe(struct pci_dev *pdev,
+ 	adapter->pdev = pdev;
+ 	adapter->hw.back = adapter;
+ 	adapter->hw.reg = pcim_iomap_table(pdev)[PCH_GBE_PCI_BAR];
++
+ 	adapter->pdata = (struct pch_gbe_privdata *)pci_id->driver_data;
+-	if (adapter->pdata && adapter->pdata->platform_init)
+-		adapter->pdata->platform_init(pdev);
++	if (adapter->pdata && adapter->pdata->platform_init) {
++		ret = adapter->pdata->platform_init(pdev);
++		if (ret)
++			goto err_free_netdev;
++	}
+ 
+ 	adapter->ptp_pdev =
+ 		pci_get_domain_bus_and_slot(pci_domain_nr(adapter->pdev->bus),
+@@ -2633,7 +2637,7 @@ err_free_netdev:
+  */
+ static int pch_gbe_minnow_platform_init(struct pci_dev *pdev)
+ {
+-	unsigned long flags = GPIOF_DIR_OUT | GPIOF_INIT_HIGH | GPIOF_EXPORT;
++	unsigned long flags = GPIOF_OUT_INIT_HIGH;
+ 	unsigned gpio = MINNOW_PHY_RESET_GPIO;
+ 	int ret;
+ 
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 501d676fd88b9..0805edef56254 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -1433,12 +1433,12 @@ static void am65_cpsw_nuss_free_tx_chns(void *data)
+ 	for (i = 0; i < common->tx_ch_num; i++) {
+ 		struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
+ 
+-		if (!IS_ERR_OR_NULL(tx_chn->tx_chn))
+-			k3_udma_glue_release_tx_chn(tx_chn->tx_chn);
+-
+ 		if (!IS_ERR_OR_NULL(tx_chn->desc_pool))
+ 			k3_cppi_desc_pool_destroy(tx_chn->desc_pool);
+ 
++		if (!IS_ERR_OR_NULL(tx_chn->tx_chn))
++			k3_udma_glue_release_tx_chn(tx_chn->tx_chn);
++
+ 		memset(tx_chn, 0, sizeof(*tx_chn));
+ 	}
+ }
+@@ -1458,12 +1458,12 @@ void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common)
+ 
+ 		netif_napi_del(&tx_chn->napi_tx);
+ 
+-		if (!IS_ERR_OR_NULL(tx_chn->tx_chn))
+-			k3_udma_glue_release_tx_chn(tx_chn->tx_chn);
+-
+ 		if (!IS_ERR_OR_NULL(tx_chn->desc_pool))
+ 			k3_cppi_desc_pool_destroy(tx_chn->desc_pool);
+ 
++		if (!IS_ERR_OR_NULL(tx_chn->tx_chn))
++			k3_udma_glue_release_tx_chn(tx_chn->tx_chn);
++
+ 		memset(tx_chn, 0, sizeof(*tx_chn));
+ 	}
+ }
+@@ -1550,11 +1550,11 @@ static void am65_cpsw_nuss_free_rx_chns(void *data)
+ 
+ 	rx_chn = &common->rx_chns;
+ 
+-	if (!IS_ERR_OR_NULL(rx_chn->rx_chn))
+-		k3_udma_glue_release_rx_chn(rx_chn->rx_chn);
+-
+ 	if (!IS_ERR_OR_NULL(rx_chn->desc_pool))
+ 		k3_cppi_desc_pool_destroy(rx_chn->desc_pool);
++
++	if (!IS_ERR_OR_NULL(rx_chn->rx_chn))
++		k3_udma_glue_release_rx_chn(rx_chn->rx_chn);
+ }
+ 
+ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
+diff --git a/drivers/net/ieee802154/mac802154_hwsim.c b/drivers/net/ieee802154/mac802154_hwsim.c
+index c0bf7d78276e4..626e1ce817fcf 100644
+--- a/drivers/net/ieee802154/mac802154_hwsim.c
++++ b/drivers/net/ieee802154/mac802154_hwsim.c
+@@ -480,7 +480,7 @@ static int hwsim_del_edge_nl(struct sk_buff *msg, struct genl_info *info)
+ 	struct hwsim_edge *e;
+ 	u32 v0, v1;
+ 
+-	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] &&
++	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] ||
+ 	    !info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE])
+ 		return -EINVAL;
+ 
+@@ -715,6 +715,8 @@ static int hwsim_subscribe_all_others(struct hwsim_phy *phy)
+ 
+ 	return 0;
+ 
++sub_fail:
++	hwsim_edge_unsubscribe_me(phy);
+ me_fail:
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(e, &phy->edges, list) {
+@@ -722,8 +724,6 @@ me_fail:
+ 		hwsim_free_edge(e);
+ 	}
+ 	rcu_read_unlock();
+-sub_fail:
+-	hwsim_edge_unsubscribe_me(phy);
+ 	return -ENOMEM;
+ }
+ 
+@@ -824,12 +824,17 @@ err_pib:
+ static void hwsim_del(struct hwsim_phy *phy)
+ {
+ 	struct hwsim_pib *pib;
++	struct hwsim_edge *e;
+ 
+ 	hwsim_edge_unsubscribe_me(phy);
+ 
+ 	list_del(&phy->list);
+ 
+ 	rcu_read_lock();
++	list_for_each_entry_rcu(e, &phy->edges, list) {
++		list_del_rcu(&e->list);
++		hwsim_free_edge(e);
++	}
+ 	pib = rcu_dereference(phy->pib);
+ 	rcu_read_unlock();
+ 
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 11ca5fa902a16..c601d3df27220 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -1818,7 +1818,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
+ 		ctx.sa.rx_sa = rx_sa;
+ 		ctx.secy = secy;
+ 		memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
+-		       MACSEC_KEYID_LEN);
++		       secy->key_len);
+ 
+ 		err = macsec_offload(ops->mdo_add_rxsa, &ctx);
+ 		if (err)
+@@ -2060,7 +2060,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
+ 		ctx.sa.tx_sa = tx_sa;
+ 		ctx.secy = secy;
+ 		memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
+-		       MACSEC_KEYID_LEN);
++		       secy->key_len);
+ 
+ 		err = macsec_offload(ops->mdo_add_txsa, &ctx);
+ 		if (err)
+diff --git a/drivers/net/phy/mscc/mscc_macsec.c b/drivers/net/phy/mscc/mscc_macsec.c
+index 10be266e48e8b..b7b2521c73fb6 100644
+--- a/drivers/net/phy/mscc/mscc_macsec.c
++++ b/drivers/net/phy/mscc/mscc_macsec.c
+@@ -501,7 +501,7 @@ static u32 vsc8584_macsec_flow_context_id(struct macsec_flow *flow)
+ }
+ 
+ /* Derive the AES key to get a key for the hash autentication */
+-static int vsc8584_macsec_derive_key(const u8 key[MACSEC_KEYID_LEN],
++static int vsc8584_macsec_derive_key(const u8 key[MACSEC_MAX_KEY_LEN],
+ 				     u16 key_len, u8 hkey[16])
+ {
+ 	const u8 input[AES_BLOCK_SIZE] = {0};
+diff --git a/drivers/net/phy/mscc/mscc_macsec.h b/drivers/net/phy/mscc/mscc_macsec.h
+index 9c6d25e36de2a..453304bae7784 100644
+--- a/drivers/net/phy/mscc/mscc_macsec.h
++++ b/drivers/net/phy/mscc/mscc_macsec.h
+@@ -81,7 +81,7 @@ struct macsec_flow {
+ 	/* Highest takes precedence [0..15] */
+ 	u8 priority;
+ 
+-	u8 key[MACSEC_KEYID_LEN];
++	u8 key[MACSEC_MAX_KEY_LEN];
+ 
+ 	union {
+ 		struct macsec_rx_sa *rx_sa;
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index bc96ac0c5769c..2746f77745e4d 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1312,22 +1312,22 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ 	int orig_iif = skb->skb_iif;
+ 	bool need_strict = rt6_need_strict(&ipv6_hdr(skb)->daddr);
+ 	bool is_ndisc = ipv6_ndisc_frame(skb);
+-	bool is_ll_src;
+ 
+ 	/* loopback, multicast & non-ND link-local traffic; do not push through
+ 	 * packet taps again. Reset pkt_type for upper layers to process skb.
+-	 * for packets with lladdr src, however, skip so that the dst can be
+-	 * determine at input using original ifindex in the case that daddr
+-	 * needs strict
++	 * For strict packets with a source LLA, determine the dst using the
++	 * original ifindex.
+ 	 */
+-	is_ll_src = ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL;
+-	if (skb->pkt_type == PACKET_LOOPBACK ||
+-	    (need_strict && !is_ndisc && !is_ll_src)) {
++	if (skb->pkt_type == PACKET_LOOPBACK || (need_strict && !is_ndisc)) {
+ 		skb->dev = vrf_dev;
+ 		skb->skb_iif = vrf_dev->ifindex;
+ 		IP6CB(skb)->flags |= IP6SKB_L3SLAVE;
++
+ 		if (skb->pkt_type == PACKET_LOOPBACK)
+ 			skb->pkt_type = PACKET_HOST;
++		else if (ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)
++			vrf_ip6_input_dst(skb, vrf_dev, orig_iif);
++
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index d3b698d9e2e6a..48fbdce6a70e7 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -2163,6 +2163,7 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
+ 	struct neighbour *n;
+ 	struct nd_msg *msg;
+ 
++	rcu_read_lock();
+ 	in6_dev = __in6_dev_get(dev);
+ 	if (!in6_dev)
+ 		goto out;
+@@ -2214,6 +2215,7 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
+ 	}
+ 
+ out:
++	rcu_read_unlock();
+ 	consume_skb(skb);
+ 	return NETDEV_TX_OK;
+ }
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index f5c0f9bac8404..36183fdfb7f03 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -5482,6 +5482,7 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
+ 
+ 	if (arvif->nohwcrypt &&
+ 	    !test_bit(ATH10K_FLAG_RAW_MODE, &ar->dev_flags)) {
++		ret = -EINVAL;
+ 		ath10k_warn(ar, "cryptmode module param needed for sw crypto\n");
+ 		goto err;
+ 	}
+diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
+index 36426efdb2ea0..86f52bcb3e4db 100644
+--- a/drivers/net/wireless/ath/ath10k/pci.c
++++ b/drivers/net/wireless/ath/ath10k/pci.c
+@@ -3684,8 +3684,10 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
+ 			ath10k_pci_soc_read32(ar, SOC_CHIP_ID_ADDRESS);
+ 		if (bus_params.chip_id != 0xffffffff) {
+ 			if (!ath10k_pci_chip_is_supported(pdev->device,
+-							  bus_params.chip_id))
++							  bus_params.chip_id)) {
++				ret = -ENODEV;
+ 				goto err_unsupported;
++			}
+ 		}
+ 	}
+ 
+@@ -3696,11 +3698,15 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
+ 	}
+ 
+ 	bus_params.chip_id = ath10k_pci_soc_read32(ar, SOC_CHIP_ID_ADDRESS);
+-	if (bus_params.chip_id == 0xffffffff)
++	if (bus_params.chip_id == 0xffffffff) {
++		ret = -ENODEV;
+ 		goto err_unsupported;
++	}
+ 
+-	if (!ath10k_pci_chip_is_supported(pdev->device, bus_params.chip_id))
+-		goto err_free_irq;
++	if (!ath10k_pci_chip_is_supported(pdev->device, bus_params.chip_id)) {
++		ret = -ENODEV;
++		goto err_unsupported;
++	}
+ 
+ 	ret = ath10k_core_register(ar, &bus_params);
+ 	if (ret) {
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index a68fe3a45a744..28de2c7ae8991 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -329,7 +329,8 @@ static int ath11k_core_fetch_board_data_api_n(struct ath11k_base *ab,
+ 		if (len < ALIGN(ie_len, 4)) {
+ 			ath11k_err(ab, "invalid length for board ie_id %d ie_len %zu len %zu\n",
+ 				   ie_id, ie_len, len);
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto err;
+ 		}
+ 
+ 		switch (ie_id) {
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 0738c784616f1..cc0c30ceaa0d4 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -5123,11 +5123,6 @@ ath11k_mac_update_vif_chan(struct ath11k *ar,
+ 		if (WARN_ON(!arvif->is_up))
+ 			continue;
+ 
+-		ret = ath11k_mac_setup_bcn_tmpl(arvif);
+-		if (ret)
+-			ath11k_warn(ab, "failed to update bcn tmpl during csa: %d\n",
+-				    ret);
+-
+ 		ret = ath11k_mac_vdev_restart(arvif, &vifs[i].new_ctx->def);
+ 		if (ret) {
+ 			ath11k_warn(ab, "failed to restart vdev %d: %d\n",
+@@ -5135,6 +5130,11 @@ ath11k_mac_update_vif_chan(struct ath11k *ar,
+ 			continue;
+ 		}
+ 
++		ret = ath11k_mac_setup_bcn_tmpl(arvif);
++		if (ret)
++			ath11k_warn(ab, "failed to update bcn tmpl during csa: %d\n",
++				    ret);
++
+ 		ret = ath11k_wmi_vdev_up(arvif->ar, arvif->vdev_id, arvif->aid,
+ 					 arvif->bssid);
+ 		if (ret) {
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index 8dbf68b94228c..ac805f56627ab 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -307,6 +307,11 @@ static int ath_reset_internal(struct ath_softc *sc, struct ath9k_channel *hchan)
+ 		hchan = ah->curchan;
+ 	}
+ 
++	if (!hchan) {
++		fastcc = false;
++		hchan = ath9k_cmn_get_channel(sc->hw, ah, &sc->cur_chan->chandef);
++	}
++
+ 	if (!ath_prepare_reset(sc))
+ 		fastcc = false;
+ 
+diff --git a/drivers/net/wireless/ath/carl9170/Kconfig b/drivers/net/wireless/ath/carl9170/Kconfig
+index b2d760873992f..ba9bea79381c5 100644
+--- a/drivers/net/wireless/ath/carl9170/Kconfig
++++ b/drivers/net/wireless/ath/carl9170/Kconfig
+@@ -16,13 +16,11 @@ config CARL9170
+ 
+ config CARL9170_LEDS
+ 	bool "SoftLED Support"
+-	depends on CARL9170
+-	select MAC80211_LEDS
+-	select LEDS_CLASS
+-	select NEW_LEDS
+ 	default y
++	depends on CARL9170
++	depends on MAC80211_LEDS
+ 	help
+-	  This option is necessary, if you want your device' LEDs to blink
++	  This option is necessary, if you want your device's LEDs to blink.
+ 
+ 	  Say Y, unless you need the LEDs for firmware debugging.
+ 
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index 706728fba72d7..9f8e44210e89a 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -293,23 +293,16 @@ static int wcn36xx_start(struct ieee80211_hw *hw)
+ 		goto out_free_dxe_pool;
+ 	}
+ 
+-	wcn->hal_buf = kmalloc(WCN36XX_HAL_BUF_SIZE, GFP_KERNEL);
+-	if (!wcn->hal_buf) {
+-		wcn36xx_err("Failed to allocate smd buf\n");
+-		ret = -ENOMEM;
+-		goto out_free_dxe_ctl;
+-	}
+-
+ 	ret = wcn36xx_smd_load_nv(wcn);
+ 	if (ret) {
+ 		wcn36xx_err("Failed to push NV to chip\n");
+-		goto out_free_smd_buf;
++		goto out_free_dxe_ctl;
+ 	}
+ 
+ 	ret = wcn36xx_smd_start(wcn);
+ 	if (ret) {
+ 		wcn36xx_err("Failed to start chip\n");
+-		goto out_free_smd_buf;
++		goto out_free_dxe_ctl;
+ 	}
+ 
+ 	if (!wcn36xx_is_fw_version(wcn, 1, 2, 2, 24)) {
+@@ -336,8 +329,6 @@ static int wcn36xx_start(struct ieee80211_hw *hw)
+ 
+ out_smd_stop:
+ 	wcn36xx_smd_stop(wcn);
+-out_free_smd_buf:
+-	kfree(wcn->hal_buf);
+ out_free_dxe_ctl:
+ 	wcn36xx_dxe_free_ctl_blks(wcn);
+ out_free_dxe_pool:
+@@ -372,8 +363,6 @@ static void wcn36xx_stop(struct ieee80211_hw *hw)
+ 
+ 	wcn36xx_dxe_free_mem_pools(wcn);
+ 	wcn36xx_dxe_free_ctl_blks(wcn);
+-
+-	kfree(wcn->hal_buf);
+ }
+ 
+ static void wcn36xx_change_ps(struct wcn36xx *wcn, bool enable)
+@@ -1398,6 +1387,12 @@ static int wcn36xx_probe(struct platform_device *pdev)
+ 	mutex_init(&wcn->hal_mutex);
+ 	mutex_init(&wcn->scan_lock);
+ 
++	wcn->hal_buf = devm_kmalloc(wcn->dev, WCN36XX_HAL_BUF_SIZE, GFP_KERNEL);
++	if (!wcn->hal_buf) {
++		ret = -ENOMEM;
++		goto out_wq;
++	}
++
+ 	ret = dma_set_mask_and_coherent(wcn->dev, DMA_BIT_MASK(32));
+ 	if (ret < 0) {
+ 		wcn36xx_err("failed to set DMA mask: %d\n", ret);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 23e6422c2251b..c2b6e5c966d04 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -2767,8 +2767,9 @@ brcmf_cfg80211_get_station(struct wiphy *wiphy, struct net_device *ndev,
+ 	struct brcmf_sta_info_le sta_info_le;
+ 	u32 sta_flags;
+ 	u32 is_tdls_peer;
+-	s32 total_rssi;
+-	s32 count_rssi;
++	s32 total_rssi_avg = 0;
++	s32 total_rssi = 0;
++	s32 count_rssi = 0;
+ 	int rssi;
+ 	u32 i;
+ 
+@@ -2834,25 +2835,27 @@ brcmf_cfg80211_get_station(struct wiphy *wiphy, struct net_device *ndev,
+ 			sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_BYTES);
+ 			sinfo->rx_bytes = le64_to_cpu(sta_info_le.rx_tot_bytes);
+ 		}
+-		total_rssi = 0;
+-		count_rssi = 0;
+ 		for (i = 0; i < BRCMF_ANT_MAX; i++) {
+-			if (sta_info_le.rssi[i]) {
+-				sinfo->chain_signal_avg[count_rssi] =
+-					sta_info_le.rssi[i];
+-				sinfo->chain_signal[count_rssi] =
+-					sta_info_le.rssi[i];
+-				total_rssi += sta_info_le.rssi[i];
+-				count_rssi++;
+-			}
++			if (sta_info_le.rssi[i] == 0 ||
++			    sta_info_le.rx_lastpkt_rssi[i] == 0)
++				continue;
++			sinfo->chains |= BIT(count_rssi);
++			sinfo->chain_signal[count_rssi] =
++				sta_info_le.rx_lastpkt_rssi[i];
++			sinfo->chain_signal_avg[count_rssi] =
++				sta_info_le.rssi[i];
++			total_rssi += sta_info_le.rx_lastpkt_rssi[i];
++			total_rssi_avg += sta_info_le.rssi[i];
++			count_rssi++;
+ 		}
+ 		if (count_rssi) {
+-			sinfo->filled |= BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL);
+-			sinfo->chains = count_rssi;
+-
+ 			sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL);
+-			total_rssi /= count_rssi;
+-			sinfo->signal = total_rssi;
++			sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG);
++			sinfo->filled |= BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL);
++			sinfo->filled |=
++				BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL_AVG);
++			sinfo->signal = total_rssi / count_rssi;
++			sinfo->signal_avg = total_rssi_avg / count_rssi;
+ 		} else if (test_bit(BRCMF_VIF_STATUS_CONNECTED,
+ 			&ifp->vif->sme_state)) {
+ 			memset(&scb_val, 0, sizeof(scb_val));
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 59c2b2b6027da..6d5d5c39c6359 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -4157,7 +4157,6 @@ static int brcmf_sdio_bus_reset(struct device *dev)
+ 	if (ret) {
+ 		brcmf_err("Failed to probe after sdio device reset: ret %d\n",
+ 			  ret);
+-		brcmf_sdiod_remove(sdiodev);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
+index 818e523f6025d..fb76b4a69a059 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
+@@ -1221,6 +1221,7 @@ static int brcms_bcma_probe(struct bcma_device *pdev)
+ {
+ 	struct brcms_info *wl;
+ 	struct ieee80211_hw *hw;
++	int ret;
+ 
+ 	dev_info(&pdev->dev, "mfg %x core %x rev %d class %d irq %d\n",
+ 		 pdev->id.manuf, pdev->id.id, pdev->id.rev, pdev->id.class,
+@@ -1245,11 +1246,16 @@ static int brcms_bcma_probe(struct bcma_device *pdev)
+ 	wl = brcms_attach(pdev);
+ 	if (!wl) {
+ 		pr_err("%s: brcms_attach failed!\n", __func__);
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err_free_ieee80211;
+ 	}
+ 	brcms_led_register(wl);
+ 
+ 	return 0;
++
++err_free_ieee80211:
++	ieee80211_free_hw(hw);
++	return ret;
+ }
+ 
+ static int brcms_suspend(struct bcma_device *pdev)
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h
+index e4f91bce222d8..61d3d4e0b7d94 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /******************************************************************************
+  *
+- * Copyright(c) 2020 Intel Corporation
++ * Copyright(c) 2020-2021 Intel Corporation
+  *
+  *****************************************************************************/
+ 
+@@ -10,7 +10,7 @@
+ 
+ #include "fw/notif-wait.h"
+ 
+-#define MVM_UCODE_PNVM_TIMEOUT	(HZ / 10)
++#define MVM_UCODE_PNVM_TIMEOUT	(HZ / 4)
+ 
+ int iwl_pnvm_load(struct iwl_trans *trans,
+ 		  struct iwl_notif_wait_data *notif_wait);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 7626117c01fa3..7186e1dbbd6b5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1085,6 +1085,9 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 	if (WARN_ON_ONCE(mvmsta->sta_id == IWL_MVM_INVALID_STA))
+ 		return -1;
+ 
++	if (unlikely(ieee80211_is_any_nullfunc(fc)) && sta->he_cap.has_he)
++		return -1;
++
+ 	if (unlikely(ieee80211_is_probe_resp(fc)))
+ 		iwl_mvm_probe_resp_set_noa(mvm, skb);
+ 
+diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
+index 33cf952cc01d3..b2de8d03c5fac 100644
+--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
+@@ -1232,7 +1232,7 @@ static int mwifiex_pcie_delete_cmdrsp_buf(struct mwifiex_adapter *adapter)
+ static int mwifiex_pcie_alloc_sleep_cookie_buf(struct mwifiex_adapter *adapter)
+ {
+ 	struct pcie_service_card *card = adapter->card;
+-	u32 tmp;
++	u32 *cookie;
+ 
+ 	card->sleep_cookie_vbase = dma_alloc_coherent(&card->dev->dev,
+ 						      sizeof(u32),
+@@ -1243,13 +1243,11 @@ static int mwifiex_pcie_alloc_sleep_cookie_buf(struct mwifiex_adapter *adapter)
+ 			    "dma_alloc_coherent failed!\n");
+ 		return -ENOMEM;
+ 	}
++	cookie = (u32 *)card->sleep_cookie_vbase;
+ 	/* Init val of Sleep Cookie */
+-	tmp = FW_AWAKE_COOKIE;
+-	put_unaligned(tmp, card->sleep_cookie_vbase);
++	*cookie = FW_AWAKE_COOKIE;
+ 
+-	mwifiex_dbg(adapter, INFO,
+-		    "alloc_scook: sleep cookie=0x%x\n",
+-		    get_unaligned(card->sleep_cookie_vbase));
++	mwifiex_dbg(adapter, INFO, "alloc_scook: sleep cookie=0x%x\n", *cookie);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
+index 4cf7c5d343258..490d55651de39 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
+@@ -133,20 +133,21 @@ int mt7615_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ 			  struct mt76_tx_info *tx_info)
+ {
+ 	struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76);
+-	struct mt7615_sta *msta = container_of(wcid, struct mt7615_sta, wcid);
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx_info->skb);
+ 	struct ieee80211_key_conf *key = info->control.hw_key;
+ 	int pid, id;
+ 	u8 *txwi = (u8 *)txwi_ptr;
+ 	struct mt76_txwi_cache *t;
++	struct mt7615_sta *msta;
+ 	void *txp;
+ 
++	msta = wcid ? container_of(wcid, struct mt7615_sta, wcid) : NULL;
+ 	if (!wcid)
+ 		wcid = &dev->mt76.global_wcid;
+ 
+ 	pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
+ 
+-	if (info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) {
++	if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) && msta) {
+ 		struct mt7615_phy *phy = &dev->phy;
+ 
+ 		if ((info->hw_queue & MT_TX_HW_QUEUE_EXT_PHY) && mdev->phy2)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c b/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
+index 3b29a6d3dc641..18082b4ce7d3d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
+@@ -243,14 +243,15 @@ int mt7663_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ 				   struct ieee80211_sta *sta,
+ 				   struct mt76_tx_info *tx_info)
+ {
+-	struct mt7615_sta *msta = container_of(wcid, struct mt7615_sta, wcid);
+ 	struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76);
+ 	struct sk_buff *skb = tx_info->skb;
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
++	struct mt7615_sta *msta;
+ 	int pad;
+ 
++	msta = wcid ? container_of(wcid, struct mt7615_sta, wcid) : NULL;
+ 	if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) &&
+-	    !msta->rate_probe) {
++	    msta && !msta->rate_probe) {
+ 		/* request to configure sampling rate */
+ 		spin_lock_bh(&dev->mt76.lock);
+ 		mt7615_mac_set_rates(&dev->phy, msta, &info->control.rates[0],
+diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
+index 44ef4bc7a46e5..073c29eb2ed8f 100644
+--- a/drivers/net/wireless/mediatek/mt76/tx.c
++++ b/drivers/net/wireless/mediatek/mt76/tx.c
+@@ -278,7 +278,7 @@ mt76_tx(struct mt76_phy *phy, struct ieee80211_sta *sta,
+ 		skb_set_queue_mapping(skb, qid);
+ 	}
+ 
+-	if (!(wcid->tx_info & MT_WCID_TX_INFO_SET))
++	if (wcid && !(wcid->tx_info & MT_WCID_TX_INFO_SET))
+ 		ieee80211_get_tx_rates(info->control.vif, sta, skb,
+ 				       info->control.rates, 1);
+ 
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index b718f5d810be8..79ad6232dce83 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -3510,26 +3510,28 @@ static void rtw8822c_pwrtrack_set(struct rtw_dev *rtwdev, u8 rf_path)
+ 	}
+ }
+ 
+-static void rtw8822c_pwr_track_path(struct rtw_dev *rtwdev,
+-				    struct rtw_swing_table *swing_table,
+-				    u8 path)
++static void rtw8822c_pwr_track_stats(struct rtw_dev *rtwdev, u8 path)
+ {
+-	struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+-	u8 thermal_value, delta;
++	u8 thermal_value;
+ 
+ 	if (rtwdev->efuse.thermal_meter[path] == 0xff)
+ 		return;
+ 
+ 	thermal_value = rtw_read_rf(rtwdev, path, RF_T_METER, 0x7e);
+-
+ 	rtw_phy_pwrtrack_avg(rtwdev, thermal_value, path);
++}
+ 
+-	delta = rtw_phy_pwrtrack_get_delta(rtwdev, path);
++static void rtw8822c_pwr_track_path(struct rtw_dev *rtwdev,
++				    struct rtw_swing_table *swing_table,
++				    u8 path)
++{
++	struct rtw_dm_info *dm_info = &rtwdev->dm_info;
++	u8 delta;
+ 
++	delta = rtw_phy_pwrtrack_get_delta(rtwdev, path);
+ 	dm_info->delta_power_index[path] =
+ 		rtw_phy_pwrtrack_get_pwridx(rtwdev, swing_table, path, path,
+ 					    delta);
+-
+ 	rtw8822c_pwrtrack_set(rtwdev, path);
+ }
+ 
+@@ -3540,12 +3542,12 @@ static void __rtw8822c_pwr_track(struct rtw_dev *rtwdev)
+ 
+ 	rtw_phy_config_swing_table(rtwdev, &swing_table);
+ 
++	for (i = 0; i < rtwdev->hal.rf_path_num; i++)
++		rtw8822c_pwr_track_stats(rtwdev, i);
+ 	if (rtw_phy_pwrtrack_need_lck(rtwdev))
+ 		rtw8822c_do_lck(rtwdev);
+-
+ 	for (i = 0; i < rtwdev->hal.rf_path_num; i++)
+ 		rtw8822c_pwr_track_path(rtwdev, &swing_table, i);
+-
+ }
+ 
+ static void rtw8822c_pwr_track(struct rtw_dev *rtwdev)
+diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
+index ce9892152f4d4..99b21a2c83861 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
+@@ -203,7 +203,7 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
+ 		wh->frame_control |= cpu_to_le16(RSI_SET_PS_ENABLE);
+ 
+ 	if ((!(info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT)) &&
+-	    (common->secinfo.security_enable)) {
++	    info->control.hw_key) {
+ 		if (rsi_is_cipher_wep(common))
+ 			ieee80211_size += 4;
+ 		else
+@@ -470,9 +470,9 @@ int rsi_prepare_beacon(struct rsi_common *common, struct sk_buff *skb)
+ 	}
+ 
+ 	if (common->band == NL80211_BAND_2GHZ)
+-		bcn_frm->bbp_info |= cpu_to_le16(RSI_RATE_1);
++		bcn_frm->rate_info |= cpu_to_le16(RSI_RATE_1);
+ 	else
+-		bcn_frm->bbp_info |= cpu_to_le16(RSI_RATE_6);
++		bcn_frm->rate_info |= cpu_to_le16(RSI_RATE_6);
+ 
+ 	if (mac_bcn->data[tim_offset + 2] == 0)
+ 		bcn_frm->frame_info |= cpu_to_le16(RSI_DATA_DESC_DTIM_BEACON);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mac80211.c b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+index 16025300cddb3..57c9e3559dfd1 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mac80211.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+@@ -1028,7 +1028,6 @@ static int rsi_mac80211_set_key(struct ieee80211_hw *hw,
+ 	mutex_lock(&common->mutex);
+ 	switch (cmd) {
+ 	case SET_KEY:
+-		secinfo->security_enable = true;
+ 		status = rsi_hal_key_config(hw, vif, key, sta);
+ 		if (status) {
+ 			mutex_unlock(&common->mutex);
+@@ -1047,8 +1046,6 @@ static int rsi_mac80211_set_key(struct ieee80211_hw *hw,
+ 		break;
+ 
+ 	case DISABLE_KEY:
+-		if (vif->type == NL80211_IFTYPE_STATION)
+-			secinfo->security_enable = false;
+ 		rsi_dbg(ERR_ZONE, "%s: RSI del key\n", __func__);
+ 		memset(key, 0, sizeof(struct ieee80211_key_conf));
+ 		status = rsi_hal_key_config(hw, vif, key, sta);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mgmt.c b/drivers/net/wireless/rsi/rsi_91x_mgmt.c
+index 33c76d39a8e96..b6d050a2fbe7e 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mgmt.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mgmt.c
+@@ -1803,8 +1803,7 @@ int rsi_send_wowlan_request(struct rsi_common *common, u16 flags,
+ 			RSI_WIFI_MGMT_Q);
+ 	cmd_frame->desc.desc_dword0.frame_type = WOWLAN_CONFIG_PARAMS;
+ 	cmd_frame->host_sleep_status = sleep_status;
+-	if (common->secinfo.security_enable &&
+-	    common->secinfo.gtk_cipher)
++	if (common->secinfo.gtk_cipher)
+ 		flags |= RSI_WOW_GTK_REKEY;
+ 	if (sleep_status)
+ 		cmd_frame->wow_flags = flags;
+diff --git a/drivers/net/wireless/rsi/rsi_main.h b/drivers/net/wireless/rsi/rsi_main.h
+index 73a19e43106b1..b3e25bc28682c 100644
+--- a/drivers/net/wireless/rsi/rsi_main.h
++++ b/drivers/net/wireless/rsi/rsi_main.h
+@@ -151,7 +151,6 @@ enum edca_queue {
+ };
+ 
+ struct security_info {
+-	bool security_enable;
+ 	u32 ptk_cipher;
+ 	u32 gtk_cipher;
+ };
+diff --git a/drivers/net/wireless/st/cw1200/scan.c b/drivers/net/wireless/st/cw1200/scan.c
+index 988581cc134b7..1f856fbbc0ea4 100644
+--- a/drivers/net/wireless/st/cw1200/scan.c
++++ b/drivers/net/wireless/st/cw1200/scan.c
+@@ -75,30 +75,27 @@ int cw1200_hw_scan(struct ieee80211_hw *hw,
+ 	if (req->n_ssids > WSM_SCAN_MAX_NUM_OF_SSIDS)
+ 		return -EINVAL;
+ 
+-	/* will be unlocked in cw1200_scan_work() */
+-	down(&priv->scan.lock);
+-	mutex_lock(&priv->conf_mutex);
+-
+ 	frame.skb = ieee80211_probereq_get(hw, priv->vif->addr, NULL, 0,
+ 		req->ie_len);
+-	if (!frame.skb) {
+-		mutex_unlock(&priv->conf_mutex);
+-		up(&priv->scan.lock);
++	if (!frame.skb)
+ 		return -ENOMEM;
+-	}
+ 
+ 	if (req->ie_len)
+ 		skb_put_data(frame.skb, req->ie, req->ie_len);
+ 
++	/* will be unlocked in cw1200_scan_work() */
++	down(&priv->scan.lock);
++	mutex_lock(&priv->conf_mutex);
++
+ 	ret = wsm_set_template_frame(priv, &frame);
+ 	if (!ret) {
+ 		/* Host want to be the probe responder. */
+ 		ret = wsm_set_probe_responder(priv, true);
+ 	}
+ 	if (ret) {
+-		dev_kfree_skb(frame.skb);
+ 		mutex_unlock(&priv->conf_mutex);
+ 		up(&priv->scan.lock);
++		dev_kfree_skb(frame.skb);
+ 		return ret;
+ 	}
+ 
+@@ -120,8 +117,8 @@ int cw1200_hw_scan(struct ieee80211_hw *hw,
+ 		++priv->scan.n_ssids;
+ 	}
+ 
+-	dev_kfree_skb(frame.skb);
+ 	mutex_unlock(&priv->conf_mutex);
++	dev_kfree_skb(frame.skb);
+ 	queue_work(priv->workqueue, &priv->scan.work);
+ 	return 0;
+ }
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index c1f3446216c5c..3f05df98697d3 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1027,7 +1027,7 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
+ 
+ static inline void nvme_update_cq_head(struct nvme_queue *nvmeq)
+ {
+-	u16 tmp = nvmeq->cq_head + 1;
++	u32 tmp = nvmeq->cq_head + 1;
+ 
+ 	if (tmp == nvmeq->q_depth) {
+ 		nvmeq->cq_head = 0;
+@@ -2836,10 +2836,7 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev)
+ #ifdef CONFIG_ACPI
+ static bool nvme_acpi_storage_d3(struct pci_dev *dev)
+ {
+-	struct acpi_device *adev;
+-	struct pci_dev *root;
+-	acpi_handle handle;
+-	acpi_status status;
++	struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
+ 	u8 val;
+ 
+ 	/*
+@@ -2847,28 +2844,9 @@ static bool nvme_acpi_storage_d3(struct pci_dev *dev)
+ 	 * must use D3 to support deep platform power savings during
+ 	 * suspend-to-idle.
+ 	 */
+-	root = pcie_find_root_port(dev);
+-	if (!root)
+-		return false;
+ 
+-	adev = ACPI_COMPANION(&root->dev);
+ 	if (!adev)
+ 		return false;
+-
+-	/*
+-	 * The property is defined in the PXSX device for South complex ports
+-	 * and in the PEGP device for North complex ports.
+-	 */
+-	status = acpi_get_handle(adev->handle, "PXSX", &handle);
+-	if (ACPI_FAILURE(status)) {
+-		status = acpi_get_handle(adev->handle, "PEGP", &handle);
+-		if (ACPI_FAILURE(status))
+-			return false;
+-	}
+-
+-	if (acpi_bus_get_device(handle, &adev))
+-		return false;
+-
+ 	if (fwnode_property_read_u8(acpi_fwnode_handle(adev), "StorageD3Enable",
+ 			&val))
+ 		return false;
+diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
+index cd4e73aa98074..640031cbda7cc 100644
+--- a/drivers/nvme/target/fc.c
++++ b/drivers/nvme/target/fc.c
+@@ -2499,13 +2499,6 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
+ 	u32 xfrlen = be32_to_cpu(cmdiu->data_len);
+ 	int ret;
+ 
+-	/*
+-	 * if there is no nvmet mapping to the targetport there
+-	 * shouldn't be requests. just terminate them.
+-	 */
+-	if (!tgtport->pe)
+-		goto transport_error;
+-
+ 	/*
+ 	 * Fused commands are currently not supported in the linux
+ 	 * implementation.
+@@ -2533,7 +2526,8 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
+ 
+ 	fod->req.cmd = &fod->cmdiubuf.sqe;
+ 	fod->req.cqe = &fod->rspiubuf.cqe;
+-	fod->req.port = tgtport->pe->port;
++	if (tgtport->pe)
++		fod->req.port = tgtport->pe->port;
+ 
+ 	/* clear any response payload */
+ 	memset(&fod->rspiubuf, 0, sizeof(fod->rspiubuf));
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index f2e697000b96f..57ff31b6b1e47 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -501,11 +501,11 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
+ 
+ 		if (size &&
+ 		    early_init_dt_reserve_memory_arch(base, size, nomap) == 0)
+-			pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %ld MiB\n",
+-				uname, &base, (unsigned long)size / SZ_1M);
++			pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %lu MiB\n",
++				uname, &base, (unsigned long)(size / SZ_1M));
+ 		else
+-			pr_info("Reserved memory: failed to reserve memory for node '%s': base %pa, size %ld MiB\n",
+-				uname, &base, (unsigned long)size / SZ_1M);
++			pr_info("Reserved memory: failed to reserve memory for node '%s': base %pa, size %lu MiB\n",
++				uname, &base, (unsigned long)(size / SZ_1M));
+ 
+ 		len -= t_len;
+ 		if (first) {
+diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
+index a7fbc5e37e19e..6c95bbdf9265a 100644
+--- a/drivers/of/of_reserved_mem.c
++++ b/drivers/of/of_reserved_mem.c
+@@ -134,9 +134,9 @@ static int __init __reserved_mem_alloc_size(unsigned long node,
+ 			ret = early_init_dt_alloc_reserved_memory_arch(size,
+ 					align, start, end, nomap, &base);
+ 			if (ret == 0) {
+-				pr_debug("allocated memory for '%s' node: base %pa, size %ld MiB\n",
++				pr_debug("allocated memory for '%s' node: base %pa, size %lu MiB\n",
+ 					uname, &base,
+-					(unsigned long)size / SZ_1M);
++					(unsigned long)(size / SZ_1M));
+ 				break;
+ 			}
+ 			len -= t_len;
+@@ -146,8 +146,8 @@ static int __init __reserved_mem_alloc_size(unsigned long node,
+ 		ret = early_init_dt_alloc_reserved_memory_arch(size, align,
+ 							0, 0, nomap, &base);
+ 		if (ret == 0)
+-			pr_debug("allocated memory for '%s' node: base %pa, size %ld MiB\n",
+-				uname, &base, (unsigned long)size / SZ_1M);
++			pr_debug("allocated memory for '%s' node: base %pa, size %lu MiB\n",
++				uname, &base, (unsigned long)(size / SZ_1M));
+ 	}
+ 
+ 	if (base == 0) {
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 03ed5cb1c4b25..d57c538bbb2db 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -3480,6 +3480,9 @@ static void __exit exit_hv_pci_drv(void)
+ 
+ static int __init init_hv_pci_drv(void)
+ {
++	if (!hv_is_hyperv_initialized())
++		return -ENODEV;
++
+ 	/* Set the invalid domain number's bit, so it will not be used */
+ 	set_bit(HVPCI_DOM_INVALID, hvpci_dom_map);
+ 
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 46defb1dcf867..bb019e3839888 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -1212,7 +1212,7 @@ static int arm_cmn_init_irqs(struct arm_cmn *cmn)
+ 		irq = cmn->dtc[i].irq;
+ 		for (j = i; j--; ) {
+ 			if (cmn->dtc[j].irq == irq) {
+-				cmn->dtc[j].irq_friend = j - i;
++				cmn->dtc[j].irq_friend = i - j;
+ 				goto next;
+ 			}
+ 		}
+diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
+index 5274f7fe359eb..afa8efbdad8fa 100644
+--- a/drivers/perf/arm_smmuv3_pmu.c
++++ b/drivers/perf/arm_smmuv3_pmu.c
+@@ -275,7 +275,7 @@ static int smmu_pmu_apply_event_filter(struct smmu_pmu *smmu_pmu,
+ 				       struct perf_event *event, int idx)
+ {
+ 	u32 span, sid;
+-	unsigned int num_ctrs = smmu_pmu->num_counters;
++	unsigned int cur_idx, num_ctrs = smmu_pmu->num_counters;
+ 	bool filter_en = !!get_filter_enable(event);
+ 
+ 	span = filter_en ? get_filter_span(event) :
+@@ -283,17 +283,19 @@ static int smmu_pmu_apply_event_filter(struct smmu_pmu *smmu_pmu,
+ 	sid = filter_en ? get_filter_stream_id(event) :
+ 			   SMMU_PMCG_DEFAULT_FILTER_SID;
+ 
+-	/* Support individual filter settings */
+-	if (!smmu_pmu->global_filter) {
++	cur_idx = find_first_bit(smmu_pmu->used_counters, num_ctrs);
++	/*
++	 * Per-counter filtering, or scheduling the first globally-filtered
++	 * event into an empty PMU so idx == 0 and it works out equivalent.
++	 */
++	if (!smmu_pmu->global_filter || cur_idx == num_ctrs) {
+ 		smmu_pmu_set_event_filter(event, idx, span, sid);
+ 		return 0;
+ 	}
+ 
+-	/* Requested settings same as current global settings*/
+-	idx = find_first_bit(smmu_pmu->used_counters, num_ctrs);
+-	if (idx == num_ctrs ||
+-	    smmu_pmu_check_global_filter(smmu_pmu->events[idx], event)) {
+-		smmu_pmu_set_event_filter(event, 0, span, sid);
++	/* Otherwise, must match whatever's currently scheduled */
++	if (smmu_pmu_check_global_filter(smmu_pmu->events[cur_idx], event)) {
++		smmu_pmu_set_evtyper(smmu_pmu, idx, get_event(event));
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/perf/fsl_imx8_ddr_perf.c b/drivers/perf/fsl_imx8_ddr_perf.c
+index 397540a4b799c..7f7bc0993670f 100644
+--- a/drivers/perf/fsl_imx8_ddr_perf.c
++++ b/drivers/perf/fsl_imx8_ddr_perf.c
+@@ -623,8 +623,10 @@ static int ddr_perf_probe(struct platform_device *pdev)
+ 
+ 	name = devm_kasprintf(&pdev->dev, GFP_KERNEL, DDR_PERF_DEV_NAME "%d",
+ 			      num);
+-	if (!name)
+-		return -ENOMEM;
++	if (!name) {
++		ret = -ENOMEM;
++		goto cpuhp_state_err;
++	}
+ 
+ 	pmu->devtype_data = of_device_get_match_data(&pdev->dev);
+ 
+diff --git a/drivers/phy/socionext/phy-uniphier-pcie.c b/drivers/phy/socionext/phy-uniphier-pcie.c
+index e4adab375c737..6bdbd1f214dd4 100644
+--- a/drivers/phy/socionext/phy-uniphier-pcie.c
++++ b/drivers/phy/socionext/phy-uniphier-pcie.c
+@@ -24,11 +24,13 @@
+ #define PORT_SEL_1		FIELD_PREP(PORT_SEL_MASK, 1)
+ 
+ #define PCL_PHY_TEST_I		0x2000
+-#define PCL_PHY_TEST_O		0x2004
+ #define TESTI_DAT_MASK		GENMASK(13, 6)
+ #define TESTI_ADR_MASK		GENMASK(5, 1)
+ #define TESTI_WR_EN		BIT(0)
+ 
++#define PCL_PHY_TEST_O		0x2004
++#define TESTO_DAT_MASK		GENMASK(7, 0)
++
+ #define PCL_PHY_RESET		0x200c
+ #define PCL_PHY_RESET_N_MNMODE	BIT(8)	/* =1:manual */
+ #define PCL_PHY_RESET_N		BIT(0)	/* =1:deasssert */
+@@ -77,11 +79,12 @@ static void uniphier_pciephy_set_param(struct uniphier_pciephy_priv *priv,
+ 	val  = FIELD_PREP(TESTI_DAT_MASK, 1);
+ 	val |= FIELD_PREP(TESTI_ADR_MASK, reg);
+ 	uniphier_pciephy_testio_write(priv, val);
+-	val = readl(priv->base + PCL_PHY_TEST_O);
++	val = readl(priv->base + PCL_PHY_TEST_O) & TESTO_DAT_MASK;
+ 
+ 	/* update value */
+-	val &= ~FIELD_PREP(TESTI_DAT_MASK, mask);
+-	val  = FIELD_PREP(TESTI_DAT_MASK, mask & param);
++	val &= ~mask;
++	val |= mask & param;
++	val = FIELD_PREP(TESTI_DAT_MASK, val);
+ 	val |= FIELD_PREP(TESTI_ADR_MASK, reg);
+ 	uniphier_pciephy_testio_write(priv, val);
+ 	uniphier_pciephy_testio_write(priv, val | TESTI_WR_EN);
+diff --git a/drivers/phy/ti/phy-dm816x-usb.c b/drivers/phy/ti/phy-dm816x-usb.c
+index 57adc08a89b2d..9fe6ea6fdae55 100644
+--- a/drivers/phy/ti/phy-dm816x-usb.c
++++ b/drivers/phy/ti/phy-dm816x-usb.c
+@@ -242,19 +242,28 @@ static int dm816x_usb_phy_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(phy->dev);
+ 	generic_phy = devm_phy_create(phy->dev, NULL, &ops);
+-	if (IS_ERR(generic_phy))
+-		return PTR_ERR(generic_phy);
++	if (IS_ERR(generic_phy)) {
++		error = PTR_ERR(generic_phy);
++		goto clk_unprepare;
++	}
+ 
+ 	phy_set_drvdata(generic_phy, phy);
+ 
+ 	phy_provider = devm_of_phy_provider_register(phy->dev,
+ 						     of_phy_simple_xlate);
+-	if (IS_ERR(phy_provider))
+-		return PTR_ERR(phy_provider);
++	if (IS_ERR(phy_provider)) {
++		error = PTR_ERR(phy_provider);
++		goto clk_unprepare;
++	}
+ 
+ 	usb_add_phy_dev(&phy->phy);
+ 
+ 	return 0;
++
++clk_unprepare:
++	pm_runtime_disable(phy->dev);
++	clk_unprepare(phy->refclk);
++	return error;
+ }
+ 
+ static int dm816x_usb_phy_remove(struct platform_device *pdev)
+diff --git a/drivers/pinctrl/renesas/pfc-r8a7796.c b/drivers/pinctrl/renesas/pfc-r8a7796.c
+index 55f0344a3d3e9..3878d6b0db149 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a7796.c
++++ b/drivers/pinctrl/renesas/pfc-r8a7796.c
+@@ -68,6 +68,7 @@
+ 	PIN_NOGP_CFG(QSPI1_MOSI_IO0, "QSPI1_MOSI_IO0", fn, CFG_FLAGS),	\
+ 	PIN_NOGP_CFG(QSPI1_SPCLK, "QSPI1_SPCLK", fn, CFG_FLAGS),	\
+ 	PIN_NOGP_CFG(QSPI1_SSL, "QSPI1_SSL", fn, CFG_FLAGS),		\
++	PIN_NOGP_CFG(PRESET_N, "PRESET#", fn, SH_PFC_PIN_CFG_PULL_DOWN),\
+ 	PIN_NOGP_CFG(RPC_INT_N, "RPC_INT#", fn, CFG_FLAGS),		\
+ 	PIN_NOGP_CFG(RPC_RESET_N, "RPC_RESET#", fn, CFG_FLAGS),		\
+ 	PIN_NOGP_CFG(RPC_WP_N, "RPC_WP#", fn, CFG_FLAGS),		\
+@@ -6109,7 +6110,7 @@ static const struct pinmux_bias_reg pinmux_bias_regs[] = {
+ 		[ 4] = RCAR_GP_PIN(6, 29),	/* USB30_OVC */
+ 		[ 5] = RCAR_GP_PIN(6, 30),	/* GP6_30 */
+ 		[ 6] = RCAR_GP_PIN(6, 31),	/* GP6_31 */
+-		[ 7] = SH_PFC_PIN_NONE,
++		[ 7] = PIN_PRESET_N,		/* PRESET# */
+ 		[ 8] = SH_PFC_PIN_NONE,
+ 		[ 9] = SH_PFC_PIN_NONE,
+ 		[10] = SH_PFC_PIN_NONE,
+diff --git a/drivers/pinctrl/renesas/pfc-r8a77990.c b/drivers/pinctrl/renesas/pfc-r8a77990.c
+index aed04a4c61163..240aadc4611fb 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a77990.c
++++ b/drivers/pinctrl/renesas/pfc-r8a77990.c
+@@ -54,10 +54,10 @@
+ 	PIN_NOGP_CFG(FSCLKST_N, "FSCLKST_N", fn, CFG_FLAGS),		\
+ 	PIN_NOGP_CFG(MLB_REF, "MLB_REF", fn, CFG_FLAGS),		\
+ 	PIN_NOGP_CFG(PRESETOUT_N, "PRESETOUT_N", fn, CFG_FLAGS),	\
+-	PIN_NOGP_CFG(TCK, "TCK", fn, CFG_FLAGS),			\
+-	PIN_NOGP_CFG(TDI, "TDI", fn, CFG_FLAGS),			\
+-	PIN_NOGP_CFG(TMS, "TMS", fn, CFG_FLAGS),			\
+-	PIN_NOGP_CFG(TRST_N, "TRST_N", fn, CFG_FLAGS)
++	PIN_NOGP_CFG(TCK, "TCK", fn, SH_PFC_PIN_CFG_PULL_UP),		\
++	PIN_NOGP_CFG(TDI, "TDI", fn, SH_PFC_PIN_CFG_PULL_UP),		\
++	PIN_NOGP_CFG(TMS, "TMS", fn, SH_PFC_PIN_CFG_PULL_UP),		\
++	PIN_NOGP_CFG(TRST_N, "TRST_N", fn, SH_PFC_PIN_CFG_PULL_UP)
+ 
+ /*
+  * F_() : just information
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index 1d9fbabd02fb7..949ddeb673bc5 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -110,11 +110,6 @@ static struct quirk_entry quirk_asus_forceals = {
+ 	.wmi_force_als_set = true,
+ };
+ 
+-static struct quirk_entry quirk_asus_vendor_backlight = {
+-	.wmi_backlight_power = true,
+-	.wmi_backlight_set_devstate = true,
+-};
+-
+ static struct quirk_entry quirk_asus_use_kbd_dock_devid = {
+ 	.use_kbd_dock_devid = true,
+ };
+@@ -420,78 +415,6 @@ static const struct dmi_system_id asus_quirks[] = {
+ 		},
+ 		.driver_data = &quirk_asus_forceals,
+ 	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA401IH",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA401IH"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA401II",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA401II"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA401IU",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA401IU"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA401IV",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA401IV"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA401IVC",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA401IVC"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-		{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA502II",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA502II"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA502IU",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA502IU"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA502IV",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA502IV"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+ 	{
+ 		.callback = dmi_matched,
+ 		.ident = "Asus Transformer T100TA / T100HA / T100CHI",
+diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c
+index fa7232ad8c395..352508d304675 100644
+--- a/drivers/platform/x86/toshiba_acpi.c
++++ b/drivers/platform/x86/toshiba_acpi.c
+@@ -2831,6 +2831,7 @@ static int toshiba_acpi_setup_keyboard(struct toshiba_acpi_dev *dev)
+ 
+ 	if (!dev->info_supported && !dev->system_event_supported) {
+ 		pr_warn("No hotkey query interface found\n");
++		error = -EINVAL;
+ 		goto err_remove_filter;
+ 	}
+ 
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index 3743d895399e7..99260915122c0 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -299,6 +299,35 @@ static const struct ts_dmi_data estar_beauty_hd_data = {
+ 	.properties	= estar_beauty_hd_props,
+ };
+ 
++/* Generic props + data for upside-down mounted GDIX1001 touchscreens */
++static const struct property_entry gdix1001_upside_down_props[] = {
++	PROPERTY_ENTRY_BOOL("touchscreen-inverted-x"),
++	PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
++	{ }
++};
++
++static const struct ts_dmi_data gdix1001_00_upside_down_data = {
++	.acpi_name	= "GDIX1001:00",
++	.properties	= gdix1001_upside_down_props,
++};
++
++static const struct ts_dmi_data gdix1001_01_upside_down_data = {
++	.acpi_name	= "GDIX1001:01",
++	.properties	= gdix1001_upside_down_props,
++};
++
++static const struct property_entry glavey_tm800a550l_props[] = {
++	PROPERTY_ENTRY_STRING("firmware-name", "gt912-glavey-tm800a550l.fw"),
++	PROPERTY_ENTRY_STRING("goodix,config-name", "gt912-glavey-tm800a550l.cfg"),
++	PROPERTY_ENTRY_U32("goodix,main-clk", 54),
++	{ }
++};
++
++static const struct ts_dmi_data glavey_tm800a550l_data = {
++	.acpi_name	= "GDIX1001:00",
++	.properties	= glavey_tm800a550l_props,
++};
++
+ static const struct property_entry gp_electronic_t701_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-size-x", 960),
+ 	PROPERTY_ENTRY_U32("touchscreen-size-y", 640),
+@@ -995,6 +1024,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "eSTAR BEAUTY HD Intel Quad core"),
+ 		},
+ 	},
++	{	/* Glavey TM800A550L */
++		.driver_data = (void *)&glavey_tm800a550l_data,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
++			DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
++			/* Above strings are too generic, also match on BIOS version */
++			DMI_MATCH(DMI_BIOS_VERSION, "ZY-8-BI-PX4S70VTR400-X423B-005-D"),
++		},
++	},
+ 	{
+ 		/* GP-electronic T701 */
+ 		.driver_data = (void *)&gp_electronic_t701_data,
+@@ -1268,6 +1306,24 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "X3 Plus"),
+ 		},
+ 	},
++	{
++		/* Teclast X89 (Android version / BIOS) */
++		.driver_data = (void *)&gdix1001_00_upside_down_data,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "WISKY"),
++			DMI_MATCH(DMI_BOARD_NAME, "3G062i"),
++		},
++	},
++	{
++		/* Teclast X89 (Windows version / BIOS) */
++		.driver_data = (void *)&gdix1001_01_upside_down_data,
++		.matches = {
++			/* tPAD is too generic, also match on bios date */
++			DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
++			DMI_MATCH(DMI_BOARD_NAME, "tPAD"),
++			DMI_MATCH(DMI_BIOS_DATE, "12/19/2014"),
++		},
++	},
+ 	{
+ 		/* Teclast X98 Plus II */
+ 		.driver_data = (void *)&teclast_x98plus2_data,
+@@ -1276,6 +1332,19 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "X98 Plus II"),
+ 		},
+ 	},
++	{
++		/* Teclast X98 Pro */
++		.driver_data = (void *)&gdix1001_00_upside_down_data,
++		.matches = {
++			/*
++			 * Only match BIOS date, because the manufacturers
++			 * BIOS does not report the board name at all
++			 * (sometimes)...
++			 */
++			DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
++			DMI_MATCH(DMI_BIOS_DATE, "10/28/2015"),
++		},
++	},
+ 	{
+ 		/* Trekstor Primebook C11 */
+ 		.driver_data = (void *)&trekstor_primebook_c11_data,
+@@ -1351,6 +1420,22 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "VINGA Twizzle J116"),
+ 		},
+ 	},
++	{
++		/* "WinBook TW100" */
++		.driver_data = (void *)&gdix1001_00_upside_down_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "WinBook"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TW100")
++		}
++	},
++	{
++		/* WinBook TW700 */
++		.driver_data = (void *)&gdix1001_00_upside_down_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "WinBook"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TW700")
++		},
++	},
+ 	{
+ 		/* Yours Y8W81, same case and touchscreen as Chuwi Vi8 */
+ 		.driver_data = (void *)&chuwi_vi8_data,
+diff --git a/drivers/regulator/da9052-regulator.c b/drivers/regulator/da9052-regulator.c
+index e18d291c7f21c..23fa429ebe760 100644
+--- a/drivers/regulator/da9052-regulator.c
++++ b/drivers/regulator/da9052-regulator.c
+@@ -250,7 +250,8 @@ static int da9052_regulator_set_voltage_time_sel(struct regulator_dev *rdev,
+ 	case DA9052_ID_BUCK3:
+ 	case DA9052_ID_LDO2:
+ 	case DA9052_ID_LDO3:
+-		ret = (new_sel - old_sel) * info->step_uV / 6250;
++		ret = DIV_ROUND_UP(abs(new_sel - old_sel) * info->step_uV,
++				   6250);
+ 		break;
+ 	}
+ 
+diff --git a/drivers/regulator/fan53880.c b/drivers/regulator/fan53880.c
+index 1684faf82ed25..94f02f3099dd4 100644
+--- a/drivers/regulator/fan53880.c
++++ b/drivers/regulator/fan53880.c
+@@ -79,7 +79,7 @@ static const struct regulator_desc fan53880_regulators[] = {
+ 		.n_linear_ranges = 2,
+ 		.n_voltages =	   0xf8,
+ 		.vsel_reg =	   FAN53880_BUCKVOUT,
+-		.vsel_mask =	   0x7f,
++		.vsel_mask =	   0xff,
+ 		.enable_reg =	   FAN53880_ENABLE,
+ 		.enable_mask =	   0x10,
+ 		.enable_time =	   480,
+diff --git a/drivers/regulator/hi655x-regulator.c b/drivers/regulator/hi655x-regulator.c
+index ac2ee2030211a..b44f492a2b832 100644
+--- a/drivers/regulator/hi655x-regulator.c
++++ b/drivers/regulator/hi655x-regulator.c
+@@ -72,7 +72,7 @@ enum hi655x_regulator_id {
+ static int hi655x_is_enabled(struct regulator_dev *rdev)
+ {
+ 	unsigned int value = 0;
+-	struct hi655x_regulator *regulator = rdev_get_drvdata(rdev);
++	const struct hi655x_regulator *regulator = rdev_get_drvdata(rdev);
+ 
+ 	regmap_read(rdev->regmap, regulator->status_reg, &value);
+ 	return (value & rdev->desc->enable_mask);
+@@ -80,7 +80,7 @@ static int hi655x_is_enabled(struct regulator_dev *rdev)
+ 
+ static int hi655x_disable(struct regulator_dev *rdev)
+ {
+-	struct hi655x_regulator *regulator = rdev_get_drvdata(rdev);
++	const struct hi655x_regulator *regulator = rdev_get_drvdata(rdev);
+ 
+ 	return regmap_write(rdev->regmap, regulator->disable_reg,
+ 			    rdev->desc->enable_mask);
+@@ -169,7 +169,6 @@ static const struct hi655x_regulator regulators[] = {
+ static int hi655x_regulator_probe(struct platform_device *pdev)
+ {
+ 	unsigned int i;
+-	struct hi655x_regulator *regulator;
+ 	struct hi655x_pmic *pmic;
+ 	struct regulator_config config = { };
+ 	struct regulator_dev *rdev;
+@@ -180,22 +179,17 @@ static int hi655x_regulator_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	regulator = devm_kzalloc(&pdev->dev, sizeof(*regulator), GFP_KERNEL);
+-	if (!regulator)
+-		return -ENOMEM;
+-
+-	platform_set_drvdata(pdev, regulator);
+-
+ 	config.dev = pdev->dev.parent;
+ 	config.regmap = pmic->regmap;
+-	config.driver_data = regulator;
+ 	for (i = 0; i < ARRAY_SIZE(regulators); i++) {
++		config.driver_data = (void *) &regulators[i];
++
+ 		rdev = devm_regulator_register(&pdev->dev,
+ 					       &regulators[i].rdesc,
+ 					       &config);
+ 		if (IS_ERR(rdev)) {
+ 			dev_err(&pdev->dev, "failed to register regulator %s\n",
+-				regulator->rdesc.name);
++				regulators[i].rdesc.name);
+ 			return PTR_ERR(rdev);
+ 		}
+ 	}
+diff --git a/drivers/regulator/mt6358-regulator.c b/drivers/regulator/mt6358-regulator.c
+index 13cb6ac9a8929..1d4eb5dc4fac8 100644
+--- a/drivers/regulator/mt6358-regulator.c
++++ b/drivers/regulator/mt6358-regulator.c
+@@ -457,7 +457,7 @@ static struct mt6358_regulator_info mt6358_regulators[] = {
+ 	MT6358_REG_FIXED("ldo_vaud28", VAUD28,
+ 			 MT6358_LDO_VAUD28_CON0, 0, 2800000),
+ 	MT6358_LDO("ldo_vdram2", VDRAM2, vdram2_voltages, vdram2_idx,
+-		   MT6358_LDO_VDRAM2_CON0, 0, MT6358_LDO_VDRAM2_ELR0, 0x10, 0),
++		   MT6358_LDO_VDRAM2_CON0, 0, MT6358_LDO_VDRAM2_ELR0, 0xf, 0),
+ 	MT6358_LDO("ldo_vsim1", VSIM1, vsim_voltages, vsim_idx,
+ 		   MT6358_LDO_VSIM1_CON0, 0, MT6358_VSIM1_ANA_CON0, 0xf00, 8),
+ 	MT6358_LDO("ldo_vibr", VIBR, vibr_voltages, vibr_idx,
+diff --git a/drivers/regulator/uniphier-regulator.c b/drivers/regulator/uniphier-regulator.c
+index 2e02e26b516c4..e75b0973e3256 100644
+--- a/drivers/regulator/uniphier-regulator.c
++++ b/drivers/regulator/uniphier-regulator.c
+@@ -201,6 +201,7 @@ static const struct of_device_id uniphier_regulator_match[] = {
+ 	},
+ 	{ /* Sentinel */ },
+ };
++MODULE_DEVICE_TABLE(of, uniphier_regulator_match);
+ 
+ static struct platform_driver uniphier_regulator_driver = {
+ 	.probe = uniphier_regulator_probe,
+diff --git a/drivers/rtc/rtc-stm32.c b/drivers/rtc/rtc-stm32.c
+index d774aa18f57a5..d096b58cd06c1 100644
+--- a/drivers/rtc/rtc-stm32.c
++++ b/drivers/rtc/rtc-stm32.c
+@@ -754,7 +754,7 @@ static int stm32_rtc_probe(struct platform_device *pdev)
+ 
+ 	ret = clk_prepare_enable(rtc->rtc_ck);
+ 	if (ret)
+-		goto err;
++		goto err_no_rtc_ck;
+ 
+ 	if (rtc->data->need_dbp)
+ 		regmap_update_bits(rtc->dbp, rtc->dbp_reg,
+@@ -830,10 +830,12 @@ static int stm32_rtc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	return 0;
++
+ err:
++	clk_disable_unprepare(rtc->rtc_ck);
++err_no_rtc_ck:
+ 	if (rtc->data->has_pclk)
+ 		clk_disable_unprepare(rtc->pclk);
+-	clk_disable_unprepare(rtc->rtc_ck);
+ 
+ 	if (rtc->data->need_dbp)
+ 		regmap_update_bits(rtc->dbp, rtc->dbp_reg, rtc->dbp_mask, 0);
+diff --git a/drivers/s390/cio/chp.c b/drivers/s390/cio/chp.c
+index dfcbe54591fbd..93e22785a0e09 100644
+--- a/drivers/s390/cio/chp.c
++++ b/drivers/s390/cio/chp.c
+@@ -255,6 +255,9 @@ static ssize_t chp_status_write(struct device *dev,
+ 	if (!num_args)
+ 		return count;
+ 
++	/* Wait until previous actions have settled. */
++	css_wait_for_slow_path();
++
+ 	if (!strncasecmp(cmd, "on", 2) || !strcmp(cmd, "1")) {
+ 		mutex_lock(&cp->lock);
+ 		error = s390_vary_chpid(cp->chpid, 1);
+diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c
+index fc06a40021688..93aa7eabe8b1f 100644
+--- a/drivers/s390/cio/chsc.c
++++ b/drivers/s390/cio/chsc.c
+@@ -757,8 +757,6 @@ int chsc_chp_vary(struct chp_id chpid, int on)
+ {
+ 	struct channel_path *chp = chpid_to_chp(chpid);
+ 
+-	/* Wait until previous actions have settled. */
+-	css_wait_for_slow_path();
+ 	/*
+ 	 * Redo PathVerification on the devices the chpid connects to
+ 	 */
+diff --git a/drivers/scsi/FlashPoint.c b/drivers/scsi/FlashPoint.c
+index 24ace18240480..ec8a621d232d6 100644
+--- a/drivers/scsi/FlashPoint.c
++++ b/drivers/scsi/FlashPoint.c
+@@ -40,7 +40,7 @@ struct sccb_mgr_info {
+ 	u16 si_per_targ_ultra_nego;
+ 	u16 si_per_targ_no_disc;
+ 	u16 si_per_targ_wide_nego;
+-	u16 si_flags;
++	u16 si_mflags;
+ 	unsigned char si_card_family;
+ 	unsigned char si_bustype;
+ 	unsigned char si_card_model[3];
+@@ -1073,22 +1073,22 @@ static int FlashPoint_ProbeHostAdapter(struct sccb_mgr_info *pCardInfo)
+ 		ScamFlg =
+ 		    (unsigned char)FPT_utilEERead(ioport, SCAM_CONFIG / 2);
+ 
+-	pCardInfo->si_flags = 0x0000;
++	pCardInfo->si_mflags = 0x0000;
+ 
+ 	if (i & 0x01)
+-		pCardInfo->si_flags |= SCSI_PARITY_ENA;
++		pCardInfo->si_mflags |= SCSI_PARITY_ENA;
+ 
+ 	if (!(i & 0x02))
+-		pCardInfo->si_flags |= SOFT_RESET;
++		pCardInfo->si_mflags |= SOFT_RESET;
+ 
+ 	if (i & 0x10)
+-		pCardInfo->si_flags |= EXTENDED_TRANSLATION;
++		pCardInfo->si_mflags |= EXTENDED_TRANSLATION;
+ 
+ 	if (ScamFlg & SCAM_ENABLED)
+-		pCardInfo->si_flags |= FLAG_SCAM_ENABLED;
++		pCardInfo->si_mflags |= FLAG_SCAM_ENABLED;
+ 
+ 	if (ScamFlg & SCAM_LEVEL2)
+-		pCardInfo->si_flags |= FLAG_SCAM_LEVEL2;
++		pCardInfo->si_mflags |= FLAG_SCAM_LEVEL2;
+ 
+ 	j = (RD_HARPOON(ioport + hp_bm_ctrl) & ~SCSI_TERM_ENA_L);
+ 	if (i & 0x04) {
+@@ -1104,7 +1104,7 @@ static int FlashPoint_ProbeHostAdapter(struct sccb_mgr_info *pCardInfo)
+ 
+ 	if (!(RD_HARPOON(ioport + hp_page_ctrl) & NARROW_SCSI_CARD))
+ 
+-		pCardInfo->si_flags |= SUPPORT_16TAR_32LUN;
++		pCardInfo->si_mflags |= SUPPORT_16TAR_32LUN;
+ 
+ 	pCardInfo->si_card_family = HARPOON_FAMILY;
+ 	pCardInfo->si_bustype = BUSTYPE_PCI;
+@@ -1140,15 +1140,15 @@ static int FlashPoint_ProbeHostAdapter(struct sccb_mgr_info *pCardInfo)
+ 
+ 	if (pCardInfo->si_card_model[1] == '3') {
+ 		if (RD_HARPOON(ioport + hp_ee_ctrl) & BIT(7))
+-			pCardInfo->si_flags |= LOW_BYTE_TERM;
++			pCardInfo->si_mflags |= LOW_BYTE_TERM;
+ 	} else if (pCardInfo->si_card_model[2] == '0') {
+ 		temp = RD_HARPOON(ioport + hp_xfer_pad);
+ 		WR_HARPOON(ioport + hp_xfer_pad, (temp & ~BIT(4)));
+ 		if (RD_HARPOON(ioport + hp_ee_ctrl) & BIT(7))
+-			pCardInfo->si_flags |= LOW_BYTE_TERM;
++			pCardInfo->si_mflags |= LOW_BYTE_TERM;
+ 		WR_HARPOON(ioport + hp_xfer_pad, (temp | BIT(4)));
+ 		if (RD_HARPOON(ioport + hp_ee_ctrl) & BIT(7))
+-			pCardInfo->si_flags |= HIGH_BYTE_TERM;
++			pCardInfo->si_mflags |= HIGH_BYTE_TERM;
+ 		WR_HARPOON(ioport + hp_xfer_pad, temp);
+ 	} else {
+ 		temp = RD_HARPOON(ioport + hp_ee_ctrl);
+@@ -1166,9 +1166,9 @@ static int FlashPoint_ProbeHostAdapter(struct sccb_mgr_info *pCardInfo)
+ 		WR_HARPOON(ioport + hp_ee_ctrl, temp);
+ 		WR_HARPOON(ioport + hp_xfer_pad, temp2);
+ 		if (!(temp3 & BIT(7)))
+-			pCardInfo->si_flags |= LOW_BYTE_TERM;
++			pCardInfo->si_mflags |= LOW_BYTE_TERM;
+ 		if (!(temp3 & BIT(6)))
+-			pCardInfo->si_flags |= HIGH_BYTE_TERM;
++			pCardInfo->si_mflags |= HIGH_BYTE_TERM;
+ 	}
+ 
+ 	ARAM_ACCESS(ioport);
+@@ -1275,7 +1275,7 @@ static void *FlashPoint_HardwareResetHostAdapter(struct sccb_mgr_info
+ 	WR_HARPOON(ioport + hp_arb_id, pCardInfo->si_id);
+ 	CurrCard->ourId = pCardInfo->si_id;
+ 
+-	i = (unsigned char)pCardInfo->si_flags;
++	i = (unsigned char)pCardInfo->si_mflags;
+ 	if (i & SCSI_PARITY_ENA)
+ 		WR_HARPOON(ioport + hp_portctrl_1, (HOST_MODE8 | CHK_SCSI_P));
+ 
+@@ -1289,14 +1289,14 @@ static void *FlashPoint_HardwareResetHostAdapter(struct sccb_mgr_info
+ 		j |= SCSI_TERM_ENA_H;
+ 	WR_HARPOON(ioport + hp_ee_ctrl, j);
+ 
+-	if (!(pCardInfo->si_flags & SOFT_RESET)) {
++	if (!(pCardInfo->si_mflags & SOFT_RESET)) {
+ 
+ 		FPT_sresb(ioport, thisCard);
+ 
+ 		FPT_scini(thisCard, pCardInfo->si_id, 0);
+ 	}
+ 
+-	if (pCardInfo->si_flags & POST_ALL_UNDERRRUNS)
++	if (pCardInfo->si_mflags & POST_ALL_UNDERRRUNS)
+ 		CurrCard->globalFlags |= F_NO_FILTER;
+ 
+ 	if (pCurrNvRam) {
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 5f845d7094fcc..008f734698f71 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -6007,8 +6007,10 @@ _scsih_expander_add(struct MPT3SAS_ADAPTER *ioc, u16 handle)
+ 		 handle, parent_handle,
+ 		 (u64)sas_expander->sas_address, sas_expander->num_phys);
+ 
+-	if (!sas_expander->num_phys)
++	if (!sas_expander->num_phys) {
++		rc = -1;
+ 		goto out_fail;
++	}
+ 	sas_expander->phy = kcalloc(sas_expander->num_phys,
+ 	    sizeof(struct _sas_phy), GFP_KERNEL);
+ 	if (!sas_expander->phy) {
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 31d7a6ddc9db7..a045d00509d5c 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -760,6 +760,7 @@ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
+ 				case 0x07: /* operation in progress */
+ 				case 0x08: /* Long write in progress */
+ 				case 0x09: /* self test in progress */
++				case 0x11: /* notify (enable spinup) required */
+ 				case 0x14: /* space allocation in progress */
+ 				case 0x1a: /* start stop unit in progress */
+ 				case 0x1b: /* sanitize in progress */
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index c53c3f9fa526a..c520239082fc6 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -1979,6 +1979,8 @@ static void __iscsi_unblock_session(struct work_struct *work)
+  */
+ void iscsi_unblock_session(struct iscsi_cls_session *session)
+ {
++	flush_work(&session->block_work);
++
+ 	queue_work(iscsi_eh_timer_workq, &session->unblock_work);
+ 	/*
+ 	 * Blocking the session can be done from any context so we only
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index a418c3c7001c0..304ff2ee7d75a 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -422,7 +422,6 @@ static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus,
+ 	struct completion *port_ready;
+ 	struct sdw_dpn_prop *dpn_prop;
+ 	struct sdw_prepare_ch prep_ch;
+-	unsigned int time_left;
+ 	bool intr = false;
+ 	int ret = 0, val;
+ 	u32 addr;
+@@ -479,15 +478,15 @@ static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus,
+ 
+ 		/* Wait for completion on port ready */
+ 		port_ready = &s_rt->slave->port_ready[prep_ch.num];
+-		time_left = wait_for_completion_timeout(port_ready,
+-				msecs_to_jiffies(dpn_prop->ch_prep_timeout));
++		wait_for_completion_timeout(port_ready,
++			msecs_to_jiffies(dpn_prop->ch_prep_timeout));
+ 
+ 		val = sdw_read(s_rt->slave, SDW_DPN_PREPARESTATUS(p_rt->num));
+-		val &= p_rt->ch_mask;
+-		if (!time_left || val) {
++		if ((val < 0) || (val & p_rt->ch_mask)) {
++			ret = (val < 0) ? val : -ETIMEDOUT;
+ 			dev_err(&s_rt->slave->dev,
+-				"Chn prep failed for port:%d\n", prep_ch.num);
+-			return -ETIMEDOUT;
++				"Chn prep failed for port %d: %d\n", prep_ch.num, ret);
++			return ret;
+ 		}
+ 	}
+ 
+diff --git a/drivers/spi/spi-loopback-test.c b/drivers/spi/spi-loopback-test.c
+index df981e55c24c9..89b91cdfb2a54 100644
+--- a/drivers/spi/spi-loopback-test.c
++++ b/drivers/spi/spi-loopback-test.c
+@@ -874,7 +874,7 @@ static int spi_test_run_iter(struct spi_device *spi,
+ 		test.transfers[i].len = len;
+ 		if (test.transfers[i].tx_buf)
+ 			test.transfers[i].tx_buf += tx_off;
+-		if (test.transfers[i].tx_buf)
++		if (test.transfers[i].rx_buf)
+ 			test.transfers[i].rx_buf += rx_off;
+ 	}
+ 
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index ecba6b4a5d85d..b2c4621db34d7 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -725,7 +725,7 @@ static int meson_spicc_probe(struct platform_device *pdev)
+ 	ret = clk_prepare_enable(spicc->pclk);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "pclk clock enable failed\n");
+-		goto out_master;
++		goto out_core_clk;
+ 	}
+ 
+ 	device_reset_optional(&pdev->dev);
+@@ -752,7 +752,7 @@ static int meson_spicc_probe(struct platform_device *pdev)
+ 	ret = meson_spicc_clk_init(spicc);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "clock registration failed\n");
+-		goto out_master;
++		goto out_clk;
+ 	}
+ 
+ 	ret = devm_spi_register_master(&pdev->dev, master);
+@@ -764,9 +764,11 @@ static int meson_spicc_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ out_clk:
+-	clk_disable_unprepare(spicc->core);
+ 	clk_disable_unprepare(spicc->pclk);
+ 
++out_core_clk:
++	clk_disable_unprepare(spicc->core);
++
+ out_master:
+ 	spi_master_put(master);
+ 
+diff --git a/drivers/spi/spi-omap-100k.c b/drivers/spi/spi-omap-100k.c
+index ccd817ee4917b..0d0cd061d3563 100644
+--- a/drivers/spi/spi-omap-100k.c
++++ b/drivers/spi/spi-omap-100k.c
+@@ -241,7 +241,7 @@ static int omap1_spi100k_setup_transfer(struct spi_device *spi,
+ 	else
+ 		word_len = spi->bits_per_word;
+ 
+-	if (spi->bits_per_word > 32)
++	if (word_len > 32)
+ 		return -EINVAL;
+ 	cs->word_len = word_len;
+ 
+diff --git a/drivers/spi/spi-sun6i.c b/drivers/spi/spi-sun6i.c
+index 19238e1b76b44..803d92f8d0316 100644
+--- a/drivers/spi/spi-sun6i.c
++++ b/drivers/spi/spi-sun6i.c
+@@ -290,6 +290,10 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
+ 	}
+ 
+ 	sun6i_spi_write(sspi, SUN6I_CLK_CTL_REG, reg);
++	/* Finally enable the bus - doing so before might raise SCK to HIGH */
++	reg = sun6i_spi_read(sspi, SUN6I_GBL_CTL_REG);
++	reg |= SUN6I_GBL_CTL_BUS_ENABLE;
++	sun6i_spi_write(sspi, SUN6I_GBL_CTL_REG, reg);
+ 
+ 	/* Setup the transfer now... */
+ 	if (sspi->tx_buf)
+@@ -398,7 +402,7 @@ static int sun6i_spi_runtime_resume(struct device *dev)
+ 	}
+ 
+ 	sun6i_spi_write(sspi, SUN6I_GBL_CTL_REG,
+-			SUN6I_GBL_CTL_BUS_ENABLE | SUN6I_GBL_CTL_MASTER | SUN6I_GBL_CTL_TP);
++			SUN6I_GBL_CTL_MASTER | SUN6I_GBL_CTL_TP);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/spi/spi-topcliff-pch.c b/drivers/spi/spi-topcliff-pch.c
+index b459e369079f8..7fb020a1d66aa 100644
+--- a/drivers/spi/spi-topcliff-pch.c
++++ b/drivers/spi/spi-topcliff-pch.c
+@@ -580,8 +580,10 @@ static void pch_spi_set_tx(struct pch_spi_data *data, int *bpw)
+ 	data->pkt_tx_buff = kzalloc(size, GFP_KERNEL);
+ 	if (data->pkt_tx_buff != NULL) {
+ 		data->pkt_rx_buff = kzalloc(size, GFP_KERNEL);
+-		if (!data->pkt_rx_buff)
++		if (!data->pkt_rx_buff) {
+ 			kfree(data->pkt_tx_buff);
++			data->pkt_tx_buff = NULL;
++		}
+ 	}
+ 
+ 	if (!data->pkt_rx_buff) {
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 0cf67de741e78..8c261eac2cee5 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -2050,6 +2050,7 @@ of_register_spi_device(struct spi_controller *ctlr, struct device_node *nc)
+ 	/* Store a pointer to the node in the device structure */
+ 	of_node_get(nc);
+ 	spi->dev.of_node = nc;
++	spi->dev.fwnode = of_fwnode_handle(nc);
+ 
+ 	/* Register the new device */
+ 	rc = spi_add_device(spi);
+@@ -2613,9 +2614,10 @@ static int spi_get_gpio_descs(struct spi_controller *ctlr)
+ 		native_cs_mask |= BIT(i);
+ 	}
+ 
+-	ctlr->unused_native_cs = ffz(native_cs_mask);
+-	if (num_cs_gpios && ctlr->max_native_cs &&
+-	    ctlr->unused_native_cs >= ctlr->max_native_cs) {
++	ctlr->unused_native_cs = ffs(~native_cs_mask) - 1;
++
++	if ((ctlr->flags & SPI_MASTER_GPIO_SS) && num_cs_gpios &&
++	    ctlr->max_native_cs && ctlr->unused_native_cs >= ctlr->max_native_cs) {
+ 		dev_err(dev, "No unused native chip select available\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/ssb/scan.c b/drivers/ssb/scan.c
+index f49ab1aa2149a..4161e5d1f276e 100644
+--- a/drivers/ssb/scan.c
++++ b/drivers/ssb/scan.c
+@@ -325,6 +325,7 @@ int ssb_bus_scan(struct ssb_bus *bus,
+ 	if (bus->nr_devices > ARRAY_SIZE(bus->devices)) {
+ 		pr_err("More than %d ssb cores found (%d)\n",
+ 		       SSB_MAX_NR_CORES, bus->nr_devices);
++		err = -EINVAL;
+ 		goto err_unmap;
+ 	}
+ 	if (bus->bustype == SSB_BUSTYPE_SSB) {
+diff --git a/drivers/ssb/sdio.c b/drivers/ssb/sdio.c
+index 7fe0afb42234f..66c5c2169704b 100644
+--- a/drivers/ssb/sdio.c
++++ b/drivers/ssb/sdio.c
+@@ -411,7 +411,6 @@ static void ssb_sdio_block_write(struct ssb_device *dev, const void *buffer,
+ 	sdio_claim_host(bus->host_sdio);
+ 	if (unlikely(ssb_sdio_switch_core(bus, dev))) {
+ 		error = -EIO;
+-		memset((void *)buffer, 0xff, count);
+ 		goto err_out;
+ 	}
+ 	offset |= bus->sdio_sbaddr & 0xffff;
+diff --git a/drivers/staging/fbtft/fb_agm1264k-fl.c b/drivers/staging/fbtft/fb_agm1264k-fl.c
+index eeeeec97ad278..b545c2ca80a41 100644
+--- a/drivers/staging/fbtft/fb_agm1264k-fl.c
++++ b/drivers/staging/fbtft/fb_agm1264k-fl.c
+@@ -84,9 +84,9 @@ static void reset(struct fbtft_par *par)
+ 
+ 	dev_dbg(par->info->device, "%s()\n", __func__);
+ 
+-	gpiod_set_value(par->gpio.reset, 0);
+-	udelay(20);
+ 	gpiod_set_value(par->gpio.reset, 1);
++	udelay(20);
++	gpiod_set_value(par->gpio.reset, 0);
+ 	mdelay(120);
+ }
+ 
+@@ -194,12 +194,12 @@ static void write_reg8_bus8(struct fbtft_par *par, int len, ...)
+ 	/* select chip */
+ 	if (*buf) {
+ 		/* cs1 */
+-		gpiod_set_value(par->CS0, 1);
+-		gpiod_set_value(par->CS1, 0);
+-	} else {
+-		/* cs0 */
+ 		gpiod_set_value(par->CS0, 0);
+ 		gpiod_set_value(par->CS1, 1);
++	} else {
++		/* cs0 */
++		gpiod_set_value(par->CS0, 1);
++		gpiod_set_value(par->CS1, 0);
+ 	}
+ 
+ 	gpiod_set_value(par->RS, 0); /* RS->0 (command mode) */
+@@ -397,8 +397,8 @@ static int write_vmem(struct fbtft_par *par, size_t offset, size_t len)
+ 	}
+ 	kfree(convert_buf);
+ 
+-	gpiod_set_value(par->CS0, 1);
+-	gpiod_set_value(par->CS1, 1);
++	gpiod_set_value(par->CS0, 0);
++	gpiod_set_value(par->CS1, 0);
+ 
+ 	return ret;
+ }
+@@ -419,10 +419,10 @@ static int write(struct fbtft_par *par, void *buf, size_t len)
+ 		for (i = 0; i < 8; ++i)
+ 			gpiod_set_value(par->gpio.db[i], data & (1 << i));
+ 		/* set E */
+-		gpiod_set_value(par->EPIN, 1);
++		gpiod_set_value(par->EPIN, 0);
+ 		udelay(5);
+ 		/* unset E - write */
+-		gpiod_set_value(par->EPIN, 0);
++		gpiod_set_value(par->EPIN, 1);
+ 		udelay(1);
+ 	}
+ 
+diff --git a/drivers/staging/fbtft/fb_bd663474.c b/drivers/staging/fbtft/fb_bd663474.c
+index e2c7646588f8c..1629c2c440a97 100644
+--- a/drivers/staging/fbtft/fb_bd663474.c
++++ b/drivers/staging/fbtft/fb_bd663474.c
+@@ -12,7 +12,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ 
+ #include "fbtft.h"
+@@ -24,9 +23,6 @@
+ 
+ static int init_display(struct fbtft_par *par)
+ {
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+-
+ 	par->fbtftops.reset(par);
+ 
+ 	/* Initialization sequence from Lib_UTFT */
+diff --git a/drivers/staging/fbtft/fb_ili9163.c b/drivers/staging/fbtft/fb_ili9163.c
+index 05648c3ffe474..6582a2c90aafc 100644
+--- a/drivers/staging/fbtft/fb_ili9163.c
++++ b/drivers/staging/fbtft/fb_ili9163.c
+@@ -11,7 +11,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ #include <video/mipi_display.h>
+ 
+@@ -77,9 +76,6 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	par->fbtftops.reset(par);
+ 
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+-
+ 	write_reg(par, MIPI_DCS_SOFT_RESET); /* software reset */
+ 	mdelay(500);
+ 	write_reg(par, MIPI_DCS_EXIT_SLEEP_MODE); /* exit sleep */
+diff --git a/drivers/staging/fbtft/fb_ili9320.c b/drivers/staging/fbtft/fb_ili9320.c
+index f2e72d14431db..a8f4c618b754c 100644
+--- a/drivers/staging/fbtft/fb_ili9320.c
++++ b/drivers/staging/fbtft/fb_ili9320.c
+@@ -8,7 +8,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/spi/spi.h>
+ #include <linux/delay.h>
+ 
+diff --git a/drivers/staging/fbtft/fb_ili9325.c b/drivers/staging/fbtft/fb_ili9325.c
+index c9aa4cb431236..16d3b17ca2798 100644
+--- a/drivers/staging/fbtft/fb_ili9325.c
++++ b/drivers/staging/fbtft/fb_ili9325.c
+@@ -10,7 +10,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ 
+ #include "fbtft.h"
+@@ -85,9 +84,6 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	par->fbtftops.reset(par);
+ 
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+-
+ 	bt &= 0x07;
+ 	vc &= 0x07;
+ 	vrh &= 0x0f;
+diff --git a/drivers/staging/fbtft/fb_ili9340.c b/drivers/staging/fbtft/fb_ili9340.c
+index 415183c7054a8..704236bcaf3ff 100644
+--- a/drivers/staging/fbtft/fb_ili9340.c
++++ b/drivers/staging/fbtft/fb_ili9340.c
+@@ -8,7 +8,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ #include <video/mipi_display.h>
+ 
+diff --git a/drivers/staging/fbtft/fb_s6d1121.c b/drivers/staging/fbtft/fb_s6d1121.c
+index 8c7de32903434..62f27172f8449 100644
+--- a/drivers/staging/fbtft/fb_s6d1121.c
++++ b/drivers/staging/fbtft/fb_s6d1121.c
+@@ -12,7 +12,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ 
+ #include "fbtft.h"
+@@ -29,9 +28,6 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	par->fbtftops.reset(par);
+ 
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+-
+ 	/* Initialization sequence from Lib_UTFT */
+ 
+ 	write_reg(par, 0x0011, 0x2004);
+diff --git a/drivers/staging/fbtft/fb_sh1106.c b/drivers/staging/fbtft/fb_sh1106.c
+index 6f7249493ea3b..7b9ab39e1c1a8 100644
+--- a/drivers/staging/fbtft/fb_sh1106.c
++++ b/drivers/staging/fbtft/fb_sh1106.c
+@@ -9,7 +9,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ 
+ #include "fbtft.h"
+diff --git a/drivers/staging/fbtft/fb_ssd1289.c b/drivers/staging/fbtft/fb_ssd1289.c
+index 7a3fe022cc69d..f27bab38b3ec4 100644
+--- a/drivers/staging/fbtft/fb_ssd1289.c
++++ b/drivers/staging/fbtft/fb_ssd1289.c
+@@ -10,7 +10,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ 
+ #include "fbtft.h"
+ 
+@@ -28,9 +27,6 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	par->fbtftops.reset(par);
+ 
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+-
+ 	write_reg(par, 0x00, 0x0001);
+ 	write_reg(par, 0x03, 0xA8A4);
+ 	write_reg(par, 0x0C, 0x0000);
+diff --git a/drivers/staging/fbtft/fb_ssd1325.c b/drivers/staging/fbtft/fb_ssd1325.c
+index 8a3140d41d8bb..796a2ac3e1948 100644
+--- a/drivers/staging/fbtft/fb_ssd1325.c
++++ b/drivers/staging/fbtft/fb_ssd1325.c
+@@ -35,8 +35,6 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	par->fbtftops.reset(par);
+ 
+-	gpiod_set_value(par->gpio.cs, 0);
+-
+ 	write_reg(par, 0xb3);
+ 	write_reg(par, 0xf0);
+ 	write_reg(par, 0xae);
+diff --git a/drivers/staging/fbtft/fb_ssd1331.c b/drivers/staging/fbtft/fb_ssd1331.c
+index 37622c9462aa7..ec5eced7f8cbd 100644
+--- a/drivers/staging/fbtft/fb_ssd1331.c
++++ b/drivers/staging/fbtft/fb_ssd1331.c
+@@ -81,8 +81,7 @@ static void write_reg8_bus8(struct fbtft_par *par, int len, ...)
+ 	va_start(args, len);
+ 
+ 	*buf = (u8)va_arg(args, unsigned int);
+-	if (par->gpio.dc)
+-		gpiod_set_value(par->gpio.dc, 0);
++	gpiod_set_value(par->gpio.dc, 0);
+ 	ret = par->fbtftops.write(par, par->buf, sizeof(u8));
+ 	if (ret < 0) {
+ 		va_end(args);
+@@ -104,8 +103,7 @@ static void write_reg8_bus8(struct fbtft_par *par, int len, ...)
+ 			return;
+ 		}
+ 	}
+-	if (par->gpio.dc)
+-		gpiod_set_value(par->gpio.dc, 1);
++	gpiod_set_value(par->gpio.dc, 1);
+ 	va_end(args);
+ }
+ 
+diff --git a/drivers/staging/fbtft/fb_ssd1351.c b/drivers/staging/fbtft/fb_ssd1351.c
+index 900b28d826b28..cf263a58a1489 100644
+--- a/drivers/staging/fbtft/fb_ssd1351.c
++++ b/drivers/staging/fbtft/fb_ssd1351.c
+@@ -2,7 +2,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/spi/spi.h>
+ #include <linux/delay.h>
+ 
+diff --git a/drivers/staging/fbtft/fb_upd161704.c b/drivers/staging/fbtft/fb_upd161704.c
+index c77832ae5e5ba..c680160d63807 100644
+--- a/drivers/staging/fbtft/fb_upd161704.c
++++ b/drivers/staging/fbtft/fb_upd161704.c
+@@ -12,7 +12,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ 
+ #include "fbtft.h"
+@@ -26,9 +25,6 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	par->fbtftops.reset(par);
+ 
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+-
+ 	/* Initialization sequence from Lib_UTFT */
+ 
+ 	/* register reset */
+diff --git a/drivers/staging/fbtft/fb_watterott.c b/drivers/staging/fbtft/fb_watterott.c
+index 76b25df376b8f..a57e1f4feef35 100644
+--- a/drivers/staging/fbtft/fb_watterott.c
++++ b/drivers/staging/fbtft/fb_watterott.c
+@@ -8,7 +8,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ 
+ #include "fbtft.h"
+diff --git a/drivers/staging/fbtft/fbtft-bus.c b/drivers/staging/fbtft/fbtft-bus.c
+index 63c65dd67b175..3d422bc116411 100644
+--- a/drivers/staging/fbtft/fbtft-bus.c
++++ b/drivers/staging/fbtft/fbtft-bus.c
+@@ -135,8 +135,7 @@ int fbtft_write_vmem16_bus8(struct fbtft_par *par, size_t offset, size_t len)
+ 	remain = len / 2;
+ 	vmem16 = (u16 *)(par->info->screen_buffer + offset);
+ 
+-	if (par->gpio.dc)
+-		gpiod_set_value(par->gpio.dc, 1);
++	gpiod_set_value(par->gpio.dc, 1);
+ 
+ 	/* non buffered write */
+ 	if (!par->txbuf.buf)
+diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c
+index 4f362dad4436a..3723269890d5f 100644
+--- a/drivers/staging/fbtft/fbtft-core.c
++++ b/drivers/staging/fbtft/fbtft-core.c
+@@ -38,8 +38,7 @@ int fbtft_write_buf_dc(struct fbtft_par *par, void *buf, size_t len, int dc)
+ {
+ 	int ret;
+ 
+-	if (par->gpio.dc)
+-		gpiod_set_value(par->gpio.dc, dc);
++	gpiod_set_value(par->gpio.dc, dc);
+ 
+ 	ret = par->fbtftops.write(par, buf, len);
+ 	if (ret < 0)
+@@ -76,20 +75,16 @@ static int fbtft_request_one_gpio(struct fbtft_par *par,
+ 				  struct gpio_desc **gpiop)
+ {
+ 	struct device *dev = par->info->device;
+-	int ret = 0;
+ 
+ 	*gpiop = devm_gpiod_get_index_optional(dev, name, index,
+-					       GPIOD_OUT_HIGH);
+-	if (IS_ERR(*gpiop)) {
+-		ret = PTR_ERR(*gpiop);
+-		dev_err(dev,
+-			"Failed to request %s GPIO: %d\n", name, ret);
+-		return ret;
+-	}
++					       GPIOD_OUT_LOW);
++	if (IS_ERR(*gpiop))
++		return dev_err_probe(dev, PTR_ERR(*gpiop), "Failed to request %s GPIO\n", name);
++
+ 	fbtft_par_dbg(DEBUG_REQUEST_GPIOS, par, "%s: '%s' GPIO\n",
+ 		      __func__, name);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int fbtft_request_gpios(struct fbtft_par *par)
+@@ -226,11 +221,15 @@ static void fbtft_reset(struct fbtft_par *par)
+ {
+ 	if (!par->gpio.reset)
+ 		return;
++
+ 	fbtft_par_dbg(DEBUG_RESET, par, "%s()\n", __func__);
++
+ 	gpiod_set_value_cansleep(par->gpio.reset, 1);
+ 	usleep_range(20, 40);
+ 	gpiod_set_value_cansleep(par->gpio.reset, 0);
+ 	msleep(120);
++
++	gpiod_set_value_cansleep(par->gpio.cs, 1);  /* Activate chip */
+ }
+ 
+ static void fbtft_update_display(struct fbtft_par *par, unsigned int start_line,
+@@ -922,8 +921,6 @@ static int fbtft_init_display_from_property(struct fbtft_par *par)
+ 		goto out_free;
+ 
+ 	par->fbtftops.reset(par);
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+ 
+ 	index = -1;
+ 	val = values[++index];
+@@ -1018,8 +1015,6 @@ int fbtft_init_display(struct fbtft_par *par)
+ 	}
+ 
+ 	par->fbtftops.reset(par);
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+ 
+ 	i = 0;
+ 	while (i < FBTFT_MAX_INIT_SEQUENCE) {
+diff --git a/drivers/staging/fbtft/fbtft-io.c b/drivers/staging/fbtft/fbtft-io.c
+index 0863d257d7620..de1904a443c27 100644
+--- a/drivers/staging/fbtft/fbtft-io.c
++++ b/drivers/staging/fbtft/fbtft-io.c
+@@ -142,12 +142,12 @@ int fbtft_write_gpio8_wr(struct fbtft_par *par, void *buf, size_t len)
+ 		data = *(u8 *)buf;
+ 
+ 		/* Start writing by pulling down /WR */
+-		gpiod_set_value(par->gpio.wr, 0);
++		gpiod_set_value(par->gpio.wr, 1);
+ 
+ 		/* Set data */
+ #ifndef DO_NOT_OPTIMIZE_FBTFT_WRITE_GPIO
+ 		if (data == prev_data) {
+-			gpiod_set_value(par->gpio.wr, 0); /* used as delay */
++			gpiod_set_value(par->gpio.wr, 1); /* used as delay */
+ 		} else {
+ 			for (i = 0; i < 8; i++) {
+ 				if ((data & 1) != (prev_data & 1))
+@@ -165,7 +165,7 @@ int fbtft_write_gpio8_wr(struct fbtft_par *par, void *buf, size_t len)
+ #endif
+ 
+ 		/* Pullup /WR */
+-		gpiod_set_value(par->gpio.wr, 1);
++		gpiod_set_value(par->gpio.wr, 0);
+ 
+ #ifndef DO_NOT_OPTIMIZE_FBTFT_WRITE_GPIO
+ 		prev_data = *(u8 *)buf;
+@@ -192,12 +192,12 @@ int fbtft_write_gpio16_wr(struct fbtft_par *par, void *buf, size_t len)
+ 		data = *(u16 *)buf;
+ 
+ 		/* Start writing by pulling down /WR */
+-		gpiod_set_value(par->gpio.wr, 0);
++		gpiod_set_value(par->gpio.wr, 1);
+ 
+ 		/* Set data */
+ #ifndef DO_NOT_OPTIMIZE_FBTFT_WRITE_GPIO
+ 		if (data == prev_data) {
+-			gpiod_set_value(par->gpio.wr, 0); /* used as delay */
++			gpiod_set_value(par->gpio.wr, 1); /* used as delay */
+ 		} else {
+ 			for (i = 0; i < 16; i++) {
+ 				if ((data & 1) != (prev_data & 1))
+@@ -215,7 +215,7 @@ int fbtft_write_gpio16_wr(struct fbtft_par *par, void *buf, size_t len)
+ #endif
+ 
+ 		/* Pullup /WR */
+-		gpiod_set_value(par->gpio.wr, 1);
++		gpiod_set_value(par->gpio.wr, 0);
+ 
+ #ifndef DO_NOT_OPTIMIZE_FBTFT_WRITE_GPIO
+ 		prev_data = *(u16 *)buf;
+diff --git a/drivers/staging/gdm724x/gdm_lte.c b/drivers/staging/gdm724x/gdm_lte.c
+index 571f47d394843..bd5f874334043 100644
+--- a/drivers/staging/gdm724x/gdm_lte.c
++++ b/drivers/staging/gdm724x/gdm_lte.c
+@@ -611,10 +611,12 @@ static void gdm_lte_netif_rx(struct net_device *dev, char *buf,
+ 						  * bytes (99,130,83,99 dec)
+ 						  */
+ 			} __packed;
+-			void *addr = buf + sizeof(struct iphdr) +
+-				sizeof(struct udphdr) +
+-				offsetof(struct dhcp_packet, chaddr);
+-			ether_addr_copy(nic->dest_mac_addr, addr);
++			int offset = sizeof(struct iphdr) +
++				     sizeof(struct udphdr) +
++				     offsetof(struct dhcp_packet, chaddr);
++			if (offset + ETH_ALEN > len)
++				return;
++			ether_addr_copy(nic->dest_mac_addr, buf + offset);
+ 		}
+ 	}
+ 
+@@ -677,6 +679,7 @@ static void gdm_lte_multi_sdu_pkt(struct phy_dev *phy_dev, char *buf, int len)
+ 	struct sdu *sdu = NULL;
+ 	u8 endian = phy_dev->get_endian(phy_dev->priv_dev);
+ 	u8 *data = (u8 *)multi_sdu->data;
++	int copied;
+ 	u16 i = 0;
+ 	u16 num_packet;
+ 	u16 hci_len;
+@@ -688,6 +691,12 @@ static void gdm_lte_multi_sdu_pkt(struct phy_dev *phy_dev, char *buf, int len)
+ 	num_packet = gdm_dev16_to_cpu(endian, multi_sdu->num_packet);
+ 
+ 	for (i = 0; i < num_packet; i++) {
++		copied = data - multi_sdu->data;
++		if (len < copied + sizeof(*sdu)) {
++			pr_err("rx prevent buffer overflow");
++			return;
++		}
++
+ 		sdu = (struct sdu *)data;
+ 
+ 		cmd_evt  = gdm_dev16_to_cpu(endian, sdu->cmd_evt);
+@@ -698,7 +707,8 @@ static void gdm_lte_multi_sdu_pkt(struct phy_dev *phy_dev, char *buf, int len)
+ 			pr_err("rx sdu wrong hci %04x\n", cmd_evt);
+ 			return;
+ 		}
+-		if (hci_len < 12) {
++		if (hci_len < 12 ||
++		    len < copied + sizeof(*sdu) + (hci_len - 12)) {
+ 			pr_err("rx sdu invalid len %d\n", hci_len);
+ 			return;
+ 		}
+diff --git a/drivers/staging/media/hantro/hantro_drv.c b/drivers/staging/media/hantro/hantro_drv.c
+index 3cd00cc0a3641..7749ca9a8ebbf 100644
+--- a/drivers/staging/media/hantro/hantro_drv.c
++++ b/drivers/staging/media/hantro/hantro_drv.c
+@@ -56,16 +56,12 @@ dma_addr_t hantro_get_ref(struct hantro_ctx *ctx, u64 ts)
+ 	return hantro_get_dec_buf_addr(ctx, buf);
+ }
+ 
+-static void hantro_job_finish(struct hantro_dev *vpu,
+-			      struct hantro_ctx *ctx,
+-			      enum vb2_buffer_state result)
++static void hantro_job_finish_no_pm(struct hantro_dev *vpu,
++				    struct hantro_ctx *ctx,
++				    enum vb2_buffer_state result)
+ {
+ 	struct vb2_v4l2_buffer *src, *dst;
+ 
+-	pm_runtime_mark_last_busy(vpu->dev);
+-	pm_runtime_put_autosuspend(vpu->dev);
+-	clk_bulk_disable(vpu->variant->num_clocks, vpu->clocks);
+-
+ 	src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ 
+@@ -81,6 +77,18 @@ static void hantro_job_finish(struct hantro_dev *vpu,
+ 					 result);
+ }
+ 
++static void hantro_job_finish(struct hantro_dev *vpu,
++			      struct hantro_ctx *ctx,
++			      enum vb2_buffer_state result)
++{
++	pm_runtime_mark_last_busy(vpu->dev);
++	pm_runtime_put_autosuspend(vpu->dev);
++
++	clk_bulk_disable(vpu->variant->num_clocks, vpu->clocks);
++
++	hantro_job_finish_no_pm(vpu, ctx, result);
++}
++
+ void hantro_irq_done(struct hantro_dev *vpu,
+ 		     enum vb2_buffer_state result)
+ {
+@@ -152,12 +160,15 @@ static void device_run(void *priv)
+ 	src = hantro_get_src_buf(ctx);
+ 	dst = hantro_get_dst_buf(ctx);
+ 
++	ret = pm_runtime_get_sync(ctx->dev->dev);
++	if (ret < 0) {
++		pm_runtime_put_noidle(ctx->dev->dev);
++		goto err_cancel_job;
++	}
++
+ 	ret = clk_bulk_enable(ctx->dev->variant->num_clocks, ctx->dev->clocks);
+ 	if (ret)
+ 		goto err_cancel_job;
+-	ret = pm_runtime_get_sync(ctx->dev->dev);
+-	if (ret < 0)
+-		goto err_cancel_job;
+ 
+ 	v4l2_m2m_buf_copy_metadata(src, dst, true);
+ 
+@@ -165,7 +176,7 @@ static void device_run(void *priv)
+ 	return;
+ 
+ err_cancel_job:
+-	hantro_job_finish(ctx->dev, ctx, VB2_BUF_STATE_ERROR);
++	hantro_job_finish_no_pm(ctx->dev, ctx, VB2_BUF_STATE_ERROR);
+ }
+ 
+ static struct v4l2_m2m_ops vpu_m2m_ops = {
+diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c
+index f5fbdbc4ffdb1..5c2ca61add8e8 100644
+--- a/drivers/staging/media/hantro/hantro_v4l2.c
++++ b/drivers/staging/media/hantro/hantro_v4l2.c
+@@ -639,7 +639,14 @@ static int hantro_buf_prepare(struct vb2_buffer *vb)
+ 	ret = hantro_buf_plane_check(vb, pix_fmt);
+ 	if (ret)
+ 		return ret;
+-	vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
++	/*
++	 * Buffer's bytesused must be written by driver for CAPTURE buffers.
++	 * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets
++	 * it to buffer length).
++	 */
++	if (V4L2_TYPE_IS_CAPTURE(vq->type))
++		vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
+index 21ebf77696964..d9a8667b4bedf 100644
+--- a/drivers/staging/media/imx/imx-media-csi.c
++++ b/drivers/staging/media/imx/imx-media-csi.c
+@@ -753,9 +753,10 @@ static int csi_setup(struct csi_priv *priv)
+ 
+ static int csi_start(struct csi_priv *priv)
+ {
+-	struct v4l2_fract *output_fi;
++	struct v4l2_fract *input_fi, *output_fi;
+ 	int ret;
+ 
++	input_fi = &priv->frame_interval[CSI_SINK_PAD];
+ 	output_fi = &priv->frame_interval[priv->active_output_pad];
+ 
+ 	/* start upstream */
+@@ -764,6 +765,17 @@ static int csi_start(struct csi_priv *priv)
+ 	if (ret)
+ 		return ret;
+ 
++	/* Skip first few frames from a BT.656 source */
++	if (priv->upstream_ep.bus_type == V4L2_MBUS_BT656) {
++		u32 delay_usec, bad_frames = 20;
++
++		delay_usec = DIV_ROUND_UP_ULL((u64)USEC_PER_SEC *
++			input_fi->numerator * bad_frames,
++			input_fi->denominator);
++
++		usleep_range(delay_usec, delay_usec + 1000);
++	}
++
+ 	if (priv->dest == IPU_CSI_DEST_IDMAC) {
+ 		ret = csi_idmac_start(priv);
+ 		if (ret)
+@@ -1930,19 +1942,13 @@ static int imx_csi_async_register(struct csi_priv *priv)
+ 					     port, 0,
+ 					     FWNODE_GRAPH_ENDPOINT_NEXT);
+ 	if (ep) {
+-		asd = kzalloc(sizeof(*asd), GFP_KERNEL);
+-		if (!asd) {
+-			fwnode_handle_put(ep);
+-			return -ENOMEM;
+-		}
+-
+-		ret = v4l2_async_notifier_add_fwnode_remote_subdev(
+-			&priv->notifier, ep, asd);
++		asd = v4l2_async_notifier_add_fwnode_remote_subdev(
++			&priv->notifier, ep, sizeof(*asd));
+ 
+ 		fwnode_handle_put(ep);
+ 
+-		if (ret) {
+-			kfree(asd);
++		if (IS_ERR(asd)) {
++			ret = PTR_ERR(asd);
+ 			/* OK if asd already exists */
+ 			if (ret != -EEXIST)
+ 				return ret;
+diff --git a/drivers/staging/media/imx/imx6-mipi-csi2.c b/drivers/staging/media/imx/imx6-mipi-csi2.c
+index 94d87d27d3896..9457761b7c8ba 100644
+--- a/drivers/staging/media/imx/imx6-mipi-csi2.c
++++ b/drivers/staging/media/imx/imx6-mipi-csi2.c
+@@ -557,7 +557,7 @@ static int csi2_async_register(struct csi2_dev *csi2)
+ 	struct v4l2_fwnode_endpoint vep = {
+ 		.bus_type = V4L2_MBUS_CSI2_DPHY,
+ 	};
+-	struct v4l2_async_subdev *asd = NULL;
++	struct v4l2_async_subdev *asd;
+ 	struct fwnode_handle *ep;
+ 	int ret;
+ 
+@@ -577,19 +577,13 @@ static int csi2_async_register(struct csi2_dev *csi2)
+ 	dev_dbg(csi2->dev, "data lanes: %d\n", csi2->bus.num_data_lanes);
+ 	dev_dbg(csi2->dev, "flags: 0x%08x\n", csi2->bus.flags);
+ 
+-	asd = kzalloc(sizeof(*asd), GFP_KERNEL);
+-	if (!asd) {
+-		ret = -ENOMEM;
+-		goto err_parse;
+-	}
+-
+-	ret = v4l2_async_notifier_add_fwnode_remote_subdev(
+-		&csi2->notifier, ep, asd);
+-	if (ret)
+-		goto err_parse;
+-
++	asd = v4l2_async_notifier_add_fwnode_remote_subdev(
++		&csi2->notifier, ep, sizeof(*asd));
+ 	fwnode_handle_put(ep);
+ 
++	if (IS_ERR(asd))
++		return PTR_ERR(asd);
++
+ 	csi2->notifier.ops = &csi2_notify_ops;
+ 
+ 	ret = v4l2_async_subdev_notifier_register(&csi2->sd,
+@@ -601,7 +595,6 @@ static int csi2_async_register(struct csi2_dev *csi2)
+ 
+ err_parse:
+ 	fwnode_handle_put(ep);
+-	kfree(asd);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/staging/media/imx/imx7-media-csi.c b/drivers/staging/media/imx/imx7-media-csi.c
+index ac52b1daf9914..6c59485291ca3 100644
+--- a/drivers/staging/media/imx/imx7-media-csi.c
++++ b/drivers/staging/media/imx/imx7-media-csi.c
+@@ -1191,7 +1191,7 @@ static const struct v4l2_async_notifier_operations imx7_csi_notify_ops = {
+ 
+ static int imx7_csi_async_register(struct imx7_csi *csi)
+ {
+-	struct v4l2_async_subdev *asd = NULL;
++	struct v4l2_async_subdev *asd;
+ 	struct fwnode_handle *ep;
+ 	int ret;
+ 
+@@ -1200,19 +1200,13 @@ static int imx7_csi_async_register(struct imx7_csi *csi)
+ 	ep = fwnode_graph_get_endpoint_by_id(dev_fwnode(csi->dev), 0, 0,
+ 					     FWNODE_GRAPH_ENDPOINT_NEXT);
+ 	if (ep) {
+-		asd = kzalloc(sizeof(*asd), GFP_KERNEL);
+-		if (!asd) {
+-			fwnode_handle_put(ep);
+-			return -ENOMEM;
+-		}
+-
+-		ret = v4l2_async_notifier_add_fwnode_remote_subdev(
+-			&csi->notifier, ep, asd);
++		asd = v4l2_async_notifier_add_fwnode_remote_subdev(
++			&csi->notifier, ep, sizeof(*asd));
+ 
+ 		fwnode_handle_put(ep);
+ 
+-		if (ret) {
+-			kfree(asd);
++		if (IS_ERR(asd)) {
++			ret = PTR_ERR(asd);
+ 			/* OK if asd already exists */
+ 			if (ret != -EEXIST)
+ 				return ret;
+diff --git a/drivers/staging/media/imx/imx7-mipi-csis.c b/drivers/staging/media/imx/imx7-mipi-csis.c
+index 7612993cc1d68..a392f9012626b 100644
+--- a/drivers/staging/media/imx/imx7-mipi-csis.c
++++ b/drivers/staging/media/imx/imx7-mipi-csis.c
+@@ -597,13 +597,15 @@ static void mipi_csis_clear_counters(struct csi_state *state)
+ 
+ static void mipi_csis_log_counters(struct csi_state *state, bool non_errors)
+ {
+-	int i = non_errors ? MIPI_CSIS_NUM_EVENTS : MIPI_CSIS_NUM_EVENTS - 4;
++	unsigned int num_events = non_errors ? MIPI_CSIS_NUM_EVENTS
++				: MIPI_CSIS_NUM_EVENTS - 6;
+ 	struct device *dev = &state->pdev->dev;
+ 	unsigned long flags;
++	unsigned int i;
+ 
+ 	spin_lock_irqsave(&state->slock, flags);
+ 
+-	for (i--; i >= 0; i--) {
++	for (i = 0; i < num_events; ++i) {
+ 		if (state->events[i].counter > 0 || state->debug)
+ 			dev_info(dev, "%s events: %d\n", state->events[i].name,
+ 				 state->events[i].counter);
+@@ -1004,7 +1006,7 @@ static int mipi_csis_async_register(struct csi_state *state)
+ 	struct v4l2_fwnode_endpoint vep = {
+ 		.bus_type = V4L2_MBUS_CSI2_DPHY,
+ 	};
+-	struct v4l2_async_subdev *asd = NULL;
++	struct v4l2_async_subdev *asd;
+ 	struct fwnode_handle *ep;
+ 	int ret;
+ 
+@@ -1024,17 +1026,13 @@ static int mipi_csis_async_register(struct csi_state *state)
+ 	dev_dbg(state->dev, "data lanes: %d\n", state->bus.num_data_lanes);
+ 	dev_dbg(state->dev, "flags: 0x%08x\n", state->bus.flags);
+ 
+-	asd = kzalloc(sizeof(*asd), GFP_KERNEL);
+-	if (!asd) {
+-		ret = -ENOMEM;
++	asd = v4l2_async_notifier_add_fwnode_remote_subdev(
++		&state->notifier, ep, sizeof(*asd));
++	if (IS_ERR(asd)) {
++		ret = PTR_ERR(asd);
+ 		goto err_parse;
+ 	}
+ 
+-	ret = v4l2_async_notifier_add_fwnode_remote_subdev(
+-		&state->notifier, ep, asd);
+-	if (ret)
+-		goto err_parse;
+-
+ 	fwnode_handle_put(ep);
+ 
+ 	state->notifier.ops = &mipi_csis_notify_ops;
+@@ -1048,7 +1046,6 @@ static int mipi_csis_async_register(struct csi_state *state)
+ 
+ err_parse:
+ 	fwnode_handle_put(ep);
+-	kfree(asd);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/staging/media/rkisp1/rkisp1-dev.c b/drivers/staging/media/rkisp1/rkisp1-dev.c
+index 91584695804bb..06de5540c8af4 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-dev.c
++++ b/drivers/staging/media/rkisp1/rkisp1-dev.c
+@@ -252,6 +252,7 @@ static int rkisp1_subdev_notifier(struct rkisp1_device *rkisp1)
+ 			.bus_type = V4L2_MBUS_CSI2_DPHY
+ 		};
+ 		struct rkisp1_sensor_async *rk_asd = NULL;
++		struct v4l2_async_subdev *asd;
+ 		struct fwnode_handle *ep;
+ 
+ 		ep = fwnode_graph_get_endpoint_by_id(dev_fwnode(rkisp1->dev),
+@@ -263,21 +264,18 @@ static int rkisp1_subdev_notifier(struct rkisp1_device *rkisp1)
+ 		if (ret)
+ 			goto err_parse;
+ 
+-		rk_asd = kzalloc(sizeof(*rk_asd), GFP_KERNEL);
+-		if (!rk_asd) {
+-			ret = -ENOMEM;
++		asd = v4l2_async_notifier_add_fwnode_remote_subdev(ntf, ep,
++							sizeof(*rk_asd));
++		if (IS_ERR(asd)) {
++			ret = PTR_ERR(asd);
+ 			goto err_parse;
+ 		}
+ 
++		rk_asd = container_of(asd, struct rkisp1_sensor_async, asd);
+ 		rk_asd->mbus_type = vep.bus_type;
+ 		rk_asd->mbus_flags = vep.bus.mipi_csi2.flags;
+ 		rk_asd->lanes = vep.bus.mipi_csi2.num_data_lanes;
+ 
+-		ret = v4l2_async_notifier_add_fwnode_remote_subdev(ntf, ep,
+-								   &rk_asd->asd);
+-		if (ret)
+-			goto err_parse;
+-
+ 		dev_dbg(rkisp1->dev, "registered ep id %d with %d lanes\n",
+ 			vep.base.id, rk_asd->lanes);
+ 
+@@ -288,7 +286,6 @@ static int rkisp1_subdev_notifier(struct rkisp1_device *rkisp1)
+ 		continue;
+ err_parse:
+ 		fwnode_handle_put(ep);
+-		kfree(rk_asd);
+ 		v4l2_async_notifier_cleanup(ntf);
+ 		return ret;
+ 	}
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index 1263991de76f9..e68303e2b3907 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -471,7 +471,15 @@ static int rkvdec_buf_prepare(struct vb2_buffer *vb)
+ 		if (vb2_plane_size(vb, i) < sizeimage)
+ 			return -EINVAL;
+ 	}
+-	vb2_set_plane_payload(vb, 0, f->fmt.pix_mp.plane_fmt[0].sizeimage);
++
++	/*
++	 * Buffer's bytesused must be written by driver for CAPTURE buffers.
++	 * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets
++	 * it to buffer length).
++	 */
++	if (V4L2_TYPE_IS_CAPTURE(vq->type))
++		vb2_set_plane_payload(vb, 0, f->fmt.pix_mp.plane_fmt[0].sizeimage);
++
+ 	return 0;
+ }
+ 
+@@ -691,7 +699,7 @@ static void rkvdec_device_run(void *priv)
+ 	if (WARN_ON(!desc))
+ 		return;
+ 
+-	ret = pm_runtime_get_sync(rkvdec->dev);
++	ret = pm_runtime_resume_and_get(rkvdec->dev);
+ 	if (ret < 0) {
+ 		rkvdec_job_finish_no_pm(ctx, VB2_BUF_STATE_ERROR);
+ 		return;
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+index ce497d0197dfc..10744fab7ceaa 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+@@ -477,8 +477,8 @@ static void cedrus_h265_setup(struct cedrus_ctx *ctx,
+ 				slice_params->flags);
+ 
+ 	reg |= VE_DEC_H265_FLAG(VE_DEC_H265_DEC_SLICE_HDR_INFO0_FLAG_DEPENDENT_SLICE_SEGMENT,
+-				V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT,
+-				pps->flags);
++				V4L2_HEVC_SLICE_PARAMS_FLAG_DEPENDENT_SLICE_SEGMENT,
++				slice_params->flags);
+ 
+ 	/* FIXME: For multi-slice support. */
+ 	reg |= VE_DEC_H265_DEC_SLICE_HDR_INFO0_FLAG_FIRST_SLICE_SEGMENT_IN_PIC;
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_video.c b/drivers/staging/media/sunxi/cedrus/cedrus_video.c
+index 911f607d9b092..16327be904d1a 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_video.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_video.c
+@@ -449,7 +449,13 @@ static int cedrus_buf_prepare(struct vb2_buffer *vb)
+ 	if (vb2_plane_size(vb, 0) < pix_fmt->sizeimage)
+ 		return -EINVAL;
+ 
+-	vb2_set_plane_payload(vb, 0, pix_fmt->sizeimage);
++	/*
++	 * Buffer's bytesused must be written by driver for CAPTURE buffers.
++	 * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets
++	 * it to buffer length).
++	 */
++	if (V4L2_TYPE_IS_CAPTURE(vq->type))
++		vb2_set_plane_payload(vb, 0, pix_fmt->sizeimage);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
+index 82aa93634eda3..27222f7b246fd 100644
+--- a/drivers/staging/mt7621-dts/mt7621.dtsi
++++ b/drivers/staging/mt7621-dts/mt7621.dtsi
+@@ -519,7 +519,7 @@
+ 
+ 		bus-range = <0 255>;
+ 		ranges = <
+-			0x02000000 0 0x00000000 0x60000000 0 0x10000000 /* pci memory */
++			0x02000000 0 0x60000000 0x60000000 0 0x10000000 /* pci memory */
+ 			0x01000000 0 0x00000000 0x1e160000 0 0x00010000 /* io space */
+ 		>;
+ 
+diff --git a/drivers/staging/rtl8712/hal_init.c b/drivers/staging/rtl8712/hal_init.c
+index 715f1fe8b4726..22974277afa08 100644
+--- a/drivers/staging/rtl8712/hal_init.c
++++ b/drivers/staging/rtl8712/hal_init.c
+@@ -40,7 +40,10 @@ static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context)
+ 		dev_err(&udev->dev, "r8712u: Firmware request failed\n");
+ 		usb_put_dev(udev);
+ 		usb_set_intfdata(usb_intf, NULL);
++		r8712_free_drv_sw(adapter);
++		adapter->dvobj_deinit(adapter);
+ 		complete(&adapter->rtl8712_fw_ready);
++		free_netdev(adapter->pnetdev);
+ 		return;
+ 	}
+ 	adapter->fw = firmware;
+diff --git a/drivers/staging/rtl8712/os_intfs.c b/drivers/staging/rtl8712/os_intfs.c
+index 0c3ae8495afb7..2214aca097308 100644
+--- a/drivers/staging/rtl8712/os_intfs.c
++++ b/drivers/staging/rtl8712/os_intfs.c
+@@ -328,8 +328,6 @@ int r8712_init_drv_sw(struct _adapter *padapter)
+ 
+ void r8712_free_drv_sw(struct _adapter *padapter)
+ {
+-	struct net_device *pnetdev = padapter->pnetdev;
+-
+ 	r8712_free_cmd_priv(&padapter->cmdpriv);
+ 	r8712_free_evt_priv(&padapter->evtpriv);
+ 	r8712_DeInitSwLeds(padapter);
+@@ -339,8 +337,6 @@ void r8712_free_drv_sw(struct _adapter *padapter)
+ 	_r8712_free_sta_priv(&padapter->stapriv);
+ 	_r8712_free_recv_priv(&padapter->recvpriv);
+ 	mp871xdeinit(padapter);
+-	if (pnetdev)
+-		free_netdev(pnetdev);
+ }
+ 
+ static void enable_video_mode(struct _adapter *padapter, int cbw40_value)
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index dc21e7743349c..b760bc3559373 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -361,7 +361,7 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf,
+ 	/* step 1. */
+ 	pnetdev = r8712_init_netdev();
+ 	if (!pnetdev)
+-		goto error;
++		goto put_dev;
+ 	padapter = netdev_priv(pnetdev);
+ 	disable_ht_for_spec_devid(pdid, padapter);
+ 	pdvobjpriv = &padapter->dvobjpriv;
+@@ -381,16 +381,16 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf,
+ 	 * initialize the dvobj_priv
+ 	 */
+ 	if (!padapter->dvobj_init) {
+-		goto error;
++		goto put_dev;
+ 	} else {
+ 		status = padapter->dvobj_init(padapter);
+ 		if (status != _SUCCESS)
+-			goto error;
++			goto free_netdev;
+ 	}
+ 	/* step 4. */
+ 	status = r8712_init_drv_sw(padapter);
+ 	if (status)
+-		goto error;
++		goto dvobj_deinit;
+ 	/* step 5. read efuse/eeprom data and get mac_addr */
+ 	{
+ 		int i, offset;
+@@ -570,17 +570,20 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf,
+ 	}
+ 	/* step 6. Load the firmware asynchronously */
+ 	if (rtl871x_load_fw(padapter))
+-		goto error;
++		goto deinit_drv_sw;
+ 	spin_lock_init(&padapter->lock_rx_ff0_filter);
+ 	mutex_init(&padapter->mutex_start);
+ 	return 0;
+-error:
++
++deinit_drv_sw:
++	r8712_free_drv_sw(padapter);
++dvobj_deinit:
++	padapter->dvobj_deinit(padapter);
++free_netdev:
++	free_netdev(pnetdev);
++put_dev:
+ 	usb_put_dev(udev);
+ 	usb_set_intfdata(pusb_intf, NULL);
+-	if (padapter && padapter->dvobj_deinit)
+-		padapter->dvobj_deinit(padapter);
+-	if (pnetdev)
+-		free_netdev(pnetdev);
+ 	return -ENODEV;
+ }
+ 
+@@ -612,6 +615,7 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
+ 		r8712_stop_drv_timers(padapter);
+ 		r871x_dev_unload(padapter);
+ 		r8712_free_drv_sw(padapter);
++		free_netdev(pnetdev);
+ 
+ 		/* decrease the reference count of the usb device structure
+ 		 * when disconnect
+diff --git a/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c b/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
+index 9097bcbd67d82..d697ea55a0da1 100644
+--- a/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
++++ b/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
+@@ -1862,7 +1862,7 @@ int vchiq_mmal_init(struct vchiq_mmal_instance **out_instance)
+ 	int status;
+ 	int err = -ENODEV;
+ 	struct vchiq_mmal_instance *instance;
+-	static struct vchiq_instance *vchiq_instance;
++	struct vchiq_instance *vchiq_instance;
+ 	struct vchiq_service_params_kernel params = {
+ 		.version		= VC_MMAL_VER,
+ 		.version_min		= VC_MMAL_MIN_VER,
+diff --git a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
+index af35251232eb3..b044999ad002b 100644
+--- a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
++++ b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
+@@ -265,12 +265,13 @@ void cxgbit_unmap_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
+ 	struct cxgbit_cmd *ccmd = iscsit_priv_cmd(cmd);
+ 
+ 	if (ccmd->release) {
+-		struct cxgbi_task_tag_info *ttinfo = &ccmd->ttinfo;
+-
+-		if (ttinfo->sgl) {
++		if (cmd->se_cmd.se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
++			put_page(sg_page(&ccmd->sg));
++		} else {
+ 			struct cxgbit_sock *csk = conn->context;
+ 			struct cxgbit_device *cdev = csk->com.cdev;
+ 			struct cxgbi_ppm *ppm = cdev2ppm(cdev);
++			struct cxgbi_task_tag_info *ttinfo = &ccmd->ttinfo;
+ 
+ 			/* Abort the TCP conn if DDP is not complete to
+ 			 * avoid any possibility of DDP after freeing
+@@ -280,14 +281,14 @@ void cxgbit_unmap_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
+ 				     cmd->se_cmd.data_length))
+ 				cxgbit_abort_conn(csk);
+ 
++			if (unlikely(ttinfo->sgl)) {
++				dma_unmap_sg(&ppm->pdev->dev, ttinfo->sgl,
++					     ttinfo->nents, DMA_FROM_DEVICE);
++				ttinfo->nents = 0;
++				ttinfo->sgl = NULL;
++			}
+ 			cxgbi_ppm_ppod_release(ppm, ttinfo->idx);
+-
+-			dma_unmap_sg(&ppm->pdev->dev, ttinfo->sgl,
+-				     ttinfo->nents, DMA_FROM_DEVICE);
+-		} else {
+-			put_page(sg_page(&ccmd->sg));
+ 		}
+-
+ 		ccmd->release = false;
+ 	}
+ }
+diff --git a/drivers/target/iscsi/cxgbit/cxgbit_target.c b/drivers/target/iscsi/cxgbit/cxgbit_target.c
+index b926e1d6c7b8e..282297ffc4044 100644
+--- a/drivers/target/iscsi/cxgbit/cxgbit_target.c
++++ b/drivers/target/iscsi/cxgbit/cxgbit_target.c
+@@ -997,17 +997,18 @@ static int cxgbit_handle_iscsi_dataout(struct cxgbit_sock *csk)
+ 	struct scatterlist *sg_start;
+ 	struct iscsi_conn *conn = csk->conn;
+ 	struct iscsi_cmd *cmd = NULL;
++	struct cxgbit_cmd *ccmd;
++	struct cxgbi_task_tag_info *ttinfo;
+ 	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_rx_pdu_cb(csk->skb);
+ 	struct iscsi_data *hdr = (struct iscsi_data *)pdu_cb->hdr;
+ 	u32 data_offset = be32_to_cpu(hdr->offset);
+-	u32 data_len = pdu_cb->dlen;
++	u32 data_len = ntoh24(hdr->dlength);
+ 	int rc, sg_nents, sg_off;
+ 	bool dcrc_err = false;
+ 
+ 	if (pdu_cb->flags & PDUCBF_RX_DDP_CMP) {
+ 		u32 offset = be32_to_cpu(hdr->offset);
+ 		u32 ddp_data_len;
+-		u32 payload_length = ntoh24(hdr->dlength);
+ 		bool success = false;
+ 
+ 		cmd = iscsit_find_cmd_from_itt_or_dump(conn, hdr->itt, 0);
+@@ -1022,7 +1023,7 @@ static int cxgbit_handle_iscsi_dataout(struct cxgbit_sock *csk)
+ 		cmd->data_sn = be32_to_cpu(hdr->datasn);
+ 
+ 		rc = __iscsit_check_dataout_hdr(conn, (unsigned char *)hdr,
+-						cmd, payload_length, &success);
++						cmd, data_len, &success);
+ 		if (rc < 0)
+ 			return rc;
+ 		else if (!success)
+@@ -1060,6 +1061,20 @@ static int cxgbit_handle_iscsi_dataout(struct cxgbit_sock *csk)
+ 		cxgbit_skb_copy_to_sg(csk->skb, sg_start, sg_nents, skip);
+ 	}
+ 
++	ccmd = iscsit_priv_cmd(cmd);
++	ttinfo = &ccmd->ttinfo;
++
++	if (ccmd->release && ttinfo->sgl &&
++	    (cmd->se_cmd.data_length ==	(cmd->write_data_done + data_len))) {
++		struct cxgbit_device *cdev = csk->com.cdev;
++		struct cxgbi_ppm *ppm = cdev2ppm(cdev);
++
++		dma_unmap_sg(&ppm->pdev->dev, ttinfo->sgl, ttinfo->nents,
++			     DMA_FROM_DEVICE);
++		ttinfo->nents = 0;
++		ttinfo->sgl = NULL;
++	}
++
+ check_payload:
+ 
+ 	rc = iscsit_check_dataout_payload(cmd, hdr, dcrc_err);
+diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
+index 3f6a69ccc1737..6e1d6a31ee4fb 100644
+--- a/drivers/thermal/cpufreq_cooling.c
++++ b/drivers/thermal/cpufreq_cooling.c
+@@ -443,7 +443,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
+ 	ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
+ 	if (ret >= 0) {
+ 		cpufreq_cdev->cpufreq_state = state;
+-		cpus = cpufreq_cdev->policy->cpus;
++		cpus = cpufreq_cdev->policy->related_cpus;
+ 		max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
+ 		capacity = frequency * max_capacity;
+ 		capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;
+diff --git a/drivers/thunderbolt/test.c b/drivers/thunderbolt/test.c
+index 464c2d37b992e..e254f8c37cb73 100644
+--- a/drivers/thunderbolt/test.c
++++ b/drivers/thunderbolt/test.c
+@@ -259,14 +259,14 @@ static struct tb_switch *alloc_dev_default(struct kunit *test,
+ 	if (port->dual_link_port && upstream_port->dual_link_port) {
+ 		port->dual_link_port->remote = upstream_port->dual_link_port;
+ 		upstream_port->dual_link_port->remote = port->dual_link_port;
+-	}
+ 
+-	if (bonded) {
+-		/* Bonding is used */
+-		port->bonded = true;
+-		port->dual_link_port->bonded = true;
+-		upstream_port->bonded = true;
+-		upstream_port->dual_link_port->bonded = true;
++		if (bonded) {
++			/* Bonding is used */
++			port->bonded = true;
++			port->dual_link_port->bonded = true;
++			upstream_port->bonded = true;
++			upstream_port->dual_link_port->bonded = true;
++		}
+ 	}
+ 
+ 	return sw;
+diff --git a/drivers/tty/nozomi.c b/drivers/tty/nozomi.c
+index d42b854cb7df2..6890418a29a40 100644
+--- a/drivers/tty/nozomi.c
++++ b/drivers/tty/nozomi.c
+@@ -1394,7 +1394,7 @@ static int nozomi_card_init(struct pci_dev *pdev,
+ 			NOZOMI_NAME, dc);
+ 	if (unlikely(ret)) {
+ 		dev_err(&pdev->dev, "can't request irq %d\n", pdev->irq);
+-		goto err_free_kfifo;
++		goto err_free_all_kfifo;
+ 	}
+ 
+ 	DBG1("base_addr: %p", dc->base_addr);
+@@ -1432,12 +1432,15 @@ static int nozomi_card_init(struct pci_dev *pdev,
+ 	return 0;
+ 
+ err_free_tty:
+-	for (i = 0; i < MAX_PORT; ++i) {
++	for (i--; i >= 0; i--) {
+ 		tty_unregister_device(ntty_driver, dc->index_start + i);
+ 		tty_port_destroy(&dc->port[i].port);
+ 	}
++	free_irq(pdev->irq, dc);
++err_free_all_kfifo:
++	i = MAX_PORT;
+ err_free_kfifo:
+-	for (i = 0; i < MAX_PORT; i++)
++	for (i--; i >= PORT_MDM; i--)
+ 		kfifo_free(&dc->port[i].fifo_ul);
+ err_free_sbuf:
+ 	kfree(dc->send_buf);
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 0cc6d35a08156..95e2d6de4f213 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -27,6 +27,7 @@
+ #include <linux/pm_qos.h>
+ #include <linux/pm_wakeirq.h>
+ #include <linux/dma-mapping.h>
++#include <linux/sys_soc.h>
+ 
+ #include "8250.h"
+ 
+@@ -41,6 +42,8 @@
+  */
+ #define UART_ERRATA_CLOCK_DISABLE	(1 << 3)
+ #define	UART_HAS_EFR2			BIT(4)
++#define UART_HAS_RHR_IT_DIS		BIT(5)
++#define UART_RX_TIMEOUT_QUIRK		BIT(6)
+ 
+ #define OMAP_UART_FCR_RX_TRIG		6
+ #define OMAP_UART_FCR_TX_TRIG		4
+@@ -94,10 +97,17 @@
+ #define OMAP_UART_REV_52 0x0502
+ #define OMAP_UART_REV_63 0x0603
+ 
++/* Interrupt Enable Register 2 */
++#define UART_OMAP_IER2			0x1B
++#define UART_OMAP_IER2_RHR_IT_DIS	BIT(2)
++
+ /* Enhanced features register 2 */
+ #define UART_OMAP_EFR2			0x23
+ #define UART_OMAP_EFR2_TIMEOUT_BEHAVE	BIT(6)
+ 
++/* RX FIFO occupancy indicator */
++#define UART_OMAP_RX_LVL		0x64
++
+ struct omap8250_priv {
+ 	int line;
+ 	u8 habit;
+@@ -592,6 +602,7 @@ static int omap_8250_dma_handle_irq(struct uart_port *port);
+ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ {
+ 	struct uart_port *port = dev_id;
++	struct omap8250_priv *priv = port->private_data;
+ 	struct uart_8250_port *up = up_to_u8250p(port);
+ 	unsigned int iir;
+ 	int ret;
+@@ -606,6 +617,18 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 	serial8250_rpm_get(up);
+ 	iir = serial_port_in(port, UART_IIR);
+ 	ret = serial8250_handle_irq(port, iir);
++
++	/*
++	 * On K3 SoCs, it is observed that RX TIMEOUT is signalled after
++	 * FIFO has been drained, in which case a dummy read of RX FIFO
++	 * is required to clear RX TIMEOUT condition.
++	 */
++	if (priv->habit & UART_RX_TIMEOUT_QUIRK &&
++	    (iir & UART_IIR_RX_TIMEOUT) == UART_IIR_RX_TIMEOUT &&
++	    serial_port_in(port, UART_OMAP_RX_LVL) == 0) {
++		serial_port_in(port, UART_RX);
++	}
++
+ 	serial8250_rpm_put(up);
+ 
+ 	return IRQ_RETVAL(ret);
+@@ -756,17 +779,27 @@ static void __dma_rx_do_complete(struct uart_8250_port *p)
+ {
+ 	struct uart_8250_dma    *dma = p->dma;
+ 	struct tty_port         *tty_port = &p->port.state->port;
++	struct omap8250_priv	*priv = p->port.private_data;
+ 	struct dma_chan		*rxchan = dma->rxchan;
+ 	dma_cookie_t		cookie;
+ 	struct dma_tx_state     state;
+ 	int                     count;
+ 	int			ret;
++	u32			reg;
+ 
+ 	if (!dma->rx_running)
+ 		goto out;
+ 
+ 	cookie = dma->rx_cookie;
+ 	dma->rx_running = 0;
++
++	/* Re-enable RX FIFO interrupt now that transfer is complete */
++	if (priv->habit & UART_HAS_RHR_IT_DIS) {
++		reg = serial_in(p, UART_OMAP_IER2);
++		reg &= ~UART_OMAP_IER2_RHR_IT_DIS;
++		serial_out(p, UART_OMAP_IER2, UART_OMAP_IER2_RHR_IT_DIS);
++	}
++
+ 	dmaengine_tx_status(rxchan, cookie, &state);
+ 
+ 	count = dma->rx_size - state.residue + state.in_flight_bytes;
+@@ -784,7 +817,7 @@ static void __dma_rx_do_complete(struct uart_8250_port *p)
+ 			       poll_count--)
+ 				cpu_relax();
+ 
+-			if (!poll_count)
++			if (poll_count == -1)
+ 				dev_err(p->port.dev, "teardown incomplete\n");
+ 		}
+ 	}
+@@ -862,6 +895,7 @@ static int omap_8250_rx_dma(struct uart_8250_port *p)
+ 	int				err = 0;
+ 	struct dma_async_tx_descriptor  *desc;
+ 	unsigned long			flags;
++	u32				reg;
+ 
+ 	if (priv->rx_dma_broken)
+ 		return -EINVAL;
+@@ -897,6 +931,17 @@ static int omap_8250_rx_dma(struct uart_8250_port *p)
+ 
+ 	dma->rx_cookie = dmaengine_submit(desc);
+ 
++	/*
++	 * Disable RX FIFO interrupt while RX DMA is enabled, else
++	 * spurious interrupt may be raised when data is in the RX FIFO
++	 * but is yet to be drained by DMA.
++	 */
++	if (priv->habit & UART_HAS_RHR_IT_DIS) {
++		reg = serial_in(p, UART_OMAP_IER2);
++		reg |= UART_OMAP_IER2_RHR_IT_DIS;
++		serial_out(p, UART_OMAP_IER2, UART_OMAP_IER2_RHR_IT_DIS);
++	}
++
+ 	dma_async_issue_pending(dma->rxchan);
+ out:
+ 	spin_unlock_irqrestore(&priv->rx_dma_lock, flags);
+@@ -1163,6 +1208,11 @@ static int omap8250_no_handle_irq(struct uart_port *port)
+ 	return 0;
+ }
+ 
++static const struct soc_device_attribute k3_soc_devices[] = {
++	{ .family = "AM65X",  },
++	{ .family = "J721E", .revision = "SR1.0" },
++};
++
+ static struct omap8250_dma_params am654_dma = {
+ 	.rx_size = SZ_2K,
+ 	.rx_trigger = 1,
+@@ -1177,7 +1227,8 @@ static struct omap8250_dma_params am33xx_dma = {
+ 
+ static struct omap8250_platdata am654_platdata = {
+ 	.dma_params	= &am654_dma,
+-	.habit		= UART_HAS_EFR2,
++	.habit		= UART_HAS_EFR2 | UART_HAS_RHR_IT_DIS |
++			  UART_RX_TIMEOUT_QUIRK,
+ };
+ 
+ static struct omap8250_platdata am33xx_platdata = {
+@@ -1367,6 +1418,13 @@ static int omap8250_probe(struct platform_device *pdev)
+ 			up.dma->rxconf.src_maxburst = RX_TRIGGER;
+ 			up.dma->txconf.dst_maxburst = TX_TRIGGER;
+ 		}
++
++		/*
++		 * AM65x SR1.0, AM65x SR2.0 and J721e SR1.0 don't
++		 * don't have RHR_IT_DIS bit in IER2 register
++		 */
++		if (soc_device_match(k3_soc_devices))
++			priv->habit &= ~UART_HAS_RHR_IT_DIS;
+ 	}
+ #endif
+ 	ret = serial8250_register_8250_port(&up);
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 6e141429c9808..6d9c494bed7d2 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2635,6 +2635,21 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
+ 					     struct ktermios *old)
+ {
+ 	unsigned int tolerance = port->uartclk / 100;
++	unsigned int min;
++	unsigned int max;
++
++	/*
++	 * Handle magic divisors for baud rates above baud_base on SMSC
++	 * Super I/O chips.  Enable custom rates of clk/4 and clk/8, but
++	 * disable divisor values beyond 32767, which are unavailable.
++	 */
++	if (port->flags & UPF_MAGIC_MULTIPLIER) {
++		min = port->uartclk / 16 / UART_DIV_MAX >> 1;
++		max = (port->uartclk + tolerance) / 4;
++	} else {
++		min = port->uartclk / 16 / UART_DIV_MAX;
++		max = (port->uartclk + tolerance) / 16;
++	}
+ 
+ 	/*
+ 	 * Ask the core to calculate the divisor for us.
+@@ -2642,9 +2657,7 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
+ 	 * slower than nominal still match standard baud rates without
+ 	 * causing transmission errors.
+ 	 */
+-	return uart_get_baud_rate(port, termios, old,
+-				  port->uartclk / 16 / UART_DIV_MAX,
+-				  (port->uartclk + tolerance) / 16);
++	return uart_get_baud_rate(port, termios, old, min, max);
+ }
+ 
+ /*
+diff --git a/drivers/tty/serial/8250/serial_cs.c b/drivers/tty/serial/8250/serial_cs.c
+index e3d10794dbba3..1d3ec8503cef3 100644
+--- a/drivers/tty/serial/8250/serial_cs.c
++++ b/drivers/tty/serial/8250/serial_cs.c
+@@ -780,6 +780,7 @@ static const struct pcmcia_device_id serial_ids[] = {
+ 	PCMCIA_DEVICE_PROD_ID12("Multi-Tech", "MT2834LT", 0x5f73be51, 0x4cd7c09e),
+ 	PCMCIA_DEVICE_PROD_ID12("OEM      ", "C288MX     ", 0xb572d360, 0xd2385b7a),
+ 	PCMCIA_DEVICE_PROD_ID12("Option International", "V34bis GSM/PSTN Data/Fax Modem", 0x9d7cd6f5, 0x5cb8bf41),
++	PCMCIA_DEVICE_PROD_ID12("Option International", "GSM-Ready 56K/ISDN", 0x9d7cd6f5, 0xb23844aa),
+ 	PCMCIA_DEVICE_PROD_ID12("PCMCIA   ", "C336MX     ", 0x99bcafe9, 0xaa25bcab),
+ 	PCMCIA_DEVICE_PROD_ID12("Quatech Inc", "PCMCIA Dual RS-232 Serial Port Card", 0xc4420b35, 0x92abc92f),
+ 	PCMCIA_DEVICE_PROD_ID12("Quatech Inc", "Dual RS-232 Serial Port PC Card", 0xc4420b35, 0x031a380d),
+@@ -807,7 +808,6 @@ static const struct pcmcia_device_id serial_ids[] = {
+ 	PCMCIA_DEVICE_CIS_PROD_ID12("ADVANTECH", "COMpad-32/85B-4", 0x96913a85, 0xcec8f102, "cis/COMpad4.cis"),
+ 	PCMCIA_DEVICE_CIS_PROD_ID123("ADVANTECH", "COMpad-32/85", "1.0", 0x96913a85, 0x8fbe92ae, 0x0877b627, "cis/COMpad2.cis"),
+ 	PCMCIA_DEVICE_CIS_PROD_ID2("RS-COM 2P", 0xad20b156, "cis/RS-COM-2P.cis"),
+-	PCMCIA_DEVICE_CIS_MANF_CARD(0x0013, 0x0000, "cis/GLOBETROTTER.cis"),
+ 	PCMCIA_DEVICE_PROD_ID12("ELAN DIGITAL SYSTEMS LTD, c1997.", "SERIAL CARD: SL100  1.00.", 0x19ca78af, 0xf964f42b),
+ 	PCMCIA_DEVICE_PROD_ID12("ELAN DIGITAL SYSTEMS LTD, c1997.", "SERIAL CARD: SL100", 0x19ca78af, 0x71d98e83),
+ 	PCMCIA_DEVICE_PROD_ID12("ELAN DIGITAL SYSTEMS LTD, c1997.", "SERIAL CARD: SL232  1.00.", 0x19ca78af, 0x69fb7490),
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index bd047e1f9bea7..de5ee4aad9f34 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1408,17 +1408,7 @@ static unsigned int lpuart_get_mctrl(struct uart_port *port)
+ 
+ static unsigned int lpuart32_get_mctrl(struct uart_port *port)
+ {
+-	unsigned int temp = 0;
+-	unsigned long reg;
+-
+-	reg = lpuart32_read(port, UARTMODIR);
+-	if (reg & UARTMODIR_TXCTSE)
+-		temp |= TIOCM_CTS;
+-
+-	if (reg & UARTMODIR_RXRTSE)
+-		temp |= TIOCM_RTS;
+-
+-	return temp;
++	return 0;
+ }
+ 
+ static void lpuart_set_mctrl(struct uart_port *port, unsigned int mctrl)
+@@ -1625,7 +1615,7 @@ static void lpuart_rx_dma_startup(struct lpuart_port *sport)
+ 	sport->lpuart_dma_rx_use = true;
+ 	rx_dma_timer_init(sport);
+ 
+-	if (sport->port.has_sysrq) {
++	if (sport->port.has_sysrq && !lpuart_is_32(sport)) {
+ 		cr3 = readb(sport->port.membase + UARTCR3);
+ 		cr3 |= UARTCR3_FEIE;
+ 		writeb(cr3, sport->port.membase + UARTCR3);
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index 51b0ecabf2ec9..1e26220c78527 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -445,12 +445,11 @@ static void mvebu_uart_shutdown(struct uart_port *port)
+ 
+ static int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud)
+ {
+-	struct mvebu_uart *mvuart = to_mvuart(port);
+ 	unsigned int d_divisor, m_divisor;
+ 	u32 brdv, osamp;
+ 
+-	if (IS_ERR(mvuart->clk))
+-		return -PTR_ERR(mvuart->clk);
++	if (!port->uartclk)
++		return -EOPNOTSUPP;
+ 
+ 	/*
+ 	 * The baudrate is derived from the UART clock thanks to two divisors:
+@@ -463,7 +462,7 @@ static int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud)
+ 	 * makes use of D to configure the desired baudrate.
+ 	 */
+ 	m_divisor = OSAMP_DEFAULT_DIVISOR;
+-	d_divisor = DIV_ROUND_UP(port->uartclk, baud * m_divisor);
++	d_divisor = DIV_ROUND_CLOSEST(port->uartclk, baud * m_divisor);
+ 
+ 	brdv = readl(port->membase + UART_BRDV);
+ 	brdv &= ~BRDV_BAUD_MASK;
+@@ -482,7 +481,7 @@ static void mvebu_uart_set_termios(struct uart_port *port,
+ 				   struct ktermios *old)
+ {
+ 	unsigned long flags;
+-	unsigned int baud;
++	unsigned int baud, min_baud, max_baud;
+ 
+ 	spin_lock_irqsave(&port->lock, flags);
+ 
+@@ -501,16 +500,21 @@ static void mvebu_uart_set_termios(struct uart_port *port,
+ 		port->ignore_status_mask |= STAT_RX_RDY(port) | STAT_BRK_ERR;
+ 
+ 	/*
++	 * Maximal divisor is 1023 * 16 when using default (x16) scheme.
+ 	 * Maximum achievable frequency with simple baudrate divisor is 230400.
+ 	 * Since the error per bit frame would be of more than 15%, achieving
+ 	 * higher frequencies would require to implement the fractional divisor
+ 	 * feature.
+ 	 */
+-	baud = uart_get_baud_rate(port, termios, old, 0, 230400);
++	min_baud = DIV_ROUND_UP(port->uartclk, 1023 * 16);
++	max_baud = 230400;
++
++	baud = uart_get_baud_rate(port, termios, old, min_baud, max_baud);
+ 	if (mvebu_uart_baud_rate_set(port, baud)) {
+ 		/* No clock available, baudrate cannot be changed */
+ 		if (old)
+-			baud = uart_get_baud_rate(port, old, NULL, 0, 230400);
++			baud = uart_get_baud_rate(port, old, NULL,
++						  min_baud, max_baud);
+ 	} else {
+ 		tty_termios_encode_baud_rate(termios, baud, baud);
+ 		uart_update_timeout(port, termios->c_cflag, baud);
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 3b1aaa93d750e..70898a999a498 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -610,6 +610,14 @@ static void sci_stop_tx(struct uart_port *port)
+ 	ctrl &= ~SCSCR_TIE;
+ 
+ 	serial_port_out(port, SCSCR, ctrl);
++
++#ifdef CONFIG_SERIAL_SH_SCI_DMA
++	if (to_sci_port(port)->chan_tx &&
++	    !dma_submit_error(to_sci_port(port)->cookie_tx)) {
++		dmaengine_terminate_async(to_sci_port(port)->chan_tx);
++		to_sci_port(port)->cookie_tx = -EINVAL;
++	}
++#endif
+ }
+ 
+ static void sci_start_rx(struct uart_port *port)
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 6fbabf56dbb76..df5b2d1e214f1 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1948,6 +1948,11 @@ static const struct usb_device_id acm_ids[] = {
+ 	.driver_info = IGNORE_DEVICE,
+ 	},
+ 
++	/* Exclude Heimann Sensor GmbH USB appset demo */
++	{ USB_DEVICE(0x32a7, 0x0000),
++	.driver_info = IGNORE_DEVICE,
++	},
++
+ 	/* control interfaces without any protocol set */
+ 	{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
+ 		USB_CDC_PROTO_NONE) },
+diff --git a/drivers/usb/dwc2/core.c b/drivers/usb/dwc2/core.c
+index fec17a2d2447d..15911ac7582b4 100644
+--- a/drivers/usb/dwc2/core.c
++++ b/drivers/usb/dwc2/core.c
+@@ -1167,15 +1167,6 @@ static int dwc2_hs_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
+ 		usbcfg &= ~(GUSBCFG_ULPI_UTMI_SEL | GUSBCFG_PHYIF16);
+ 		if (hsotg->params.phy_utmi_width == 16)
+ 			usbcfg |= GUSBCFG_PHYIF16;
+-
+-		/* Set turnaround time */
+-		if (dwc2_is_device_mode(hsotg)) {
+-			usbcfg &= ~GUSBCFG_USBTRDTIM_MASK;
+-			if (hsotg->params.phy_utmi_width == 16)
+-				usbcfg |= 5 << GUSBCFG_USBTRDTIM_SHIFT;
+-			else
+-				usbcfg |= 9 << GUSBCFG_USBTRDTIM_SHIFT;
+-		}
+ 		break;
+ 	default:
+ 		dev_err(hsotg->dev, "FS PHY selected at HS!\n");
+@@ -1197,6 +1188,24 @@ static int dwc2_hs_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
+ 	return retval;
+ }
+ 
++static void dwc2_set_turnaround_time(struct dwc2_hsotg *hsotg)
++{
++	u32 usbcfg;
++
++	if (hsotg->params.phy_type != DWC2_PHY_TYPE_PARAM_UTMI)
++		return;
++
++	usbcfg = dwc2_readl(hsotg, GUSBCFG);
++
++	usbcfg &= ~GUSBCFG_USBTRDTIM_MASK;
++	if (hsotg->params.phy_utmi_width == 16)
++		usbcfg |= 5 << GUSBCFG_USBTRDTIM_SHIFT;
++	else
++		usbcfg |= 9 << GUSBCFG_USBTRDTIM_SHIFT;
++
++	dwc2_writel(hsotg, usbcfg, GUSBCFG);
++}
++
+ int dwc2_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
+ {
+ 	u32 usbcfg;
+@@ -1214,6 +1223,9 @@ int dwc2_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
+ 		retval = dwc2_hs_phy_init(hsotg, select_phy);
+ 		if (retval)
+ 			return retval;
++
++		if (dwc2_is_device_mode(hsotg))
++			dwc2_set_turnaround_time(hsotg);
+ 	}
+ 
+ 	if (hsotg->hw_params.hs_phy_type == GHWCFG2_HS_PHY_TYPE_ULPI &&
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 7537dd50ad533..bfb72902f3a68 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1590,17 +1590,18 @@ static int dwc3_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	dwc3_check_params(dwc);
++	dwc3_debugfs_init(dwc);
+ 
+ 	ret = dwc3_core_init_mode(dwc);
+ 	if (ret)
+ 		goto err5;
+ 
+-	dwc3_debugfs_init(dwc);
+ 	pm_runtime_put(dev);
+ 
+ 	return 0;
+ 
+ err5:
++	dwc3_debugfs_exit(dwc);
+ 	dwc3_event_buffers_cleanup(dwc);
+ 
+ 	usb_phy_shutdown(dwc->usb2_phy);
+diff --git a/drivers/usb/gadget/function/f_eem.c b/drivers/usb/gadget/function/f_eem.c
+index 2cd9942707b46..5d38f29bda720 100644
+--- a/drivers/usb/gadget/function/f_eem.c
++++ b/drivers/usb/gadget/function/f_eem.c
+@@ -30,6 +30,11 @@ struct f_eem {
+ 	u8				ctrl_id;
+ };
+ 
++struct in_context {
++	struct sk_buff	*skb;
++	struct usb_ep	*ep;
++};
++
+ static inline struct f_eem *func_to_eem(struct usb_function *f)
+ {
+ 	return container_of(f, struct f_eem, port.func);
+@@ -320,9 +325,12 @@ fail:
+ 
+ static void eem_cmd_complete(struct usb_ep *ep, struct usb_request *req)
+ {
+-	struct sk_buff *skb = (struct sk_buff *)req->context;
++	struct in_context *ctx = req->context;
+ 
+-	dev_kfree_skb_any(skb);
++	dev_kfree_skb_any(ctx->skb);
++	kfree(req->buf);
++	usb_ep_free_request(ctx->ep, req);
++	kfree(ctx);
+ }
+ 
+ /*
+@@ -410,7 +418,9 @@ static int eem_unwrap(struct gether *port,
+ 		 * b15:		bmType (0 == data, 1 == command)
+ 		 */
+ 		if (header & BIT(15)) {
+-			struct usb_request	*req = cdev->req;
++			struct usb_request	*req;
++			struct in_context	*ctx;
++			struct usb_ep		*ep;
+ 			u16			bmEEMCmd;
+ 
+ 			/* EEM command packet format:
+@@ -439,11 +449,36 @@ static int eem_unwrap(struct gether *port,
+ 				skb_trim(skb2, len);
+ 				put_unaligned_le16(BIT(15) | BIT(11) | len,
+ 							skb_push(skb2, 2));
++
++				ep = port->in_ep;
++				req = usb_ep_alloc_request(ep, GFP_ATOMIC);
++				if (!req) {
++					dev_kfree_skb_any(skb2);
++					goto next;
++				}
++
++				req->buf = kmalloc(skb2->len, GFP_KERNEL);
++				if (!req->buf) {
++					usb_ep_free_request(ep, req);
++					dev_kfree_skb_any(skb2);
++					goto next;
++				}
++
++				ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
++				if (!ctx) {
++					kfree(req->buf);
++					usb_ep_free_request(ep, req);
++					dev_kfree_skb_any(skb2);
++					goto next;
++				}
++				ctx->skb = skb2;
++				ctx->ep = ep;
++
+ 				skb_copy_bits(skb2, 0, req->buf, skb2->len);
+ 				req->length = skb2->len;
+ 				req->complete = eem_cmd_complete;
+ 				req->zero = 1;
+-				req->context = skb2;
++				req->context = ctx;
+ 				if (usb_ep_queue(port->in_ep, req, GFP_ATOMIC))
+ 					DBG(cdev, "echo response queue fail\n");
+ 				break;
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 7df180b110afc..725e35167837e 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -250,8 +250,8 @@ EXPORT_SYMBOL_GPL(ffs_lock);
+ static struct ffs_dev *_ffs_find_dev(const char *name);
+ static struct ffs_dev *_ffs_alloc_dev(void);
+ static void _ffs_free_dev(struct ffs_dev *dev);
+-static void *ffs_acquire_dev(const char *dev_name);
+-static void ffs_release_dev(struct ffs_data *ffs_data);
++static int ffs_acquire_dev(const char *dev_name, struct ffs_data *ffs_data);
++static void ffs_release_dev(struct ffs_dev *ffs_dev);
+ static int ffs_ready(struct ffs_data *ffs);
+ static void ffs_closed(struct ffs_data *ffs);
+ 
+@@ -1553,8 +1553,8 @@ unmapped_value:
+ static int ffs_fs_get_tree(struct fs_context *fc)
+ {
+ 	struct ffs_sb_fill_data *ctx = fc->fs_private;
+-	void *ffs_dev;
+ 	struct ffs_data	*ffs;
++	int ret;
+ 
+ 	ENTER();
+ 
+@@ -1573,13 +1573,12 @@ static int ffs_fs_get_tree(struct fs_context *fc)
+ 		return -ENOMEM;
+ 	}
+ 
+-	ffs_dev = ffs_acquire_dev(ffs->dev_name);
+-	if (IS_ERR(ffs_dev)) {
++	ret = ffs_acquire_dev(ffs->dev_name, ffs);
++	if (ret) {
+ 		ffs_data_put(ffs);
+-		return PTR_ERR(ffs_dev);
++		return ret;
+ 	}
+ 
+-	ffs->private_data = ffs_dev;
+ 	ctx->ffs_data = ffs;
+ 	return get_tree_nodev(fc, ffs_sb_fill);
+ }
+@@ -1590,7 +1589,6 @@ static void ffs_fs_free_fc(struct fs_context *fc)
+ 
+ 	if (ctx) {
+ 		if (ctx->ffs_data) {
+-			ffs_release_dev(ctx->ffs_data);
+ 			ffs_data_put(ctx->ffs_data);
+ 		}
+ 
+@@ -1629,10 +1627,8 @@ ffs_fs_kill_sb(struct super_block *sb)
+ 	ENTER();
+ 
+ 	kill_litter_super(sb);
+-	if (sb->s_fs_info) {
+-		ffs_release_dev(sb->s_fs_info);
++	if (sb->s_fs_info)
+ 		ffs_data_closed(sb->s_fs_info);
+-	}
+ }
+ 
+ static struct file_system_type ffs_fs_type = {
+@@ -1702,6 +1698,7 @@ static void ffs_data_put(struct ffs_data *ffs)
+ 	if (unlikely(refcount_dec_and_test(&ffs->ref))) {
+ 		pr_info("%s(): freeing\n", __func__);
+ 		ffs_data_clear(ffs);
++		ffs_release_dev(ffs->private_data);
+ 		BUG_ON(waitqueue_active(&ffs->ev.waitq) ||
+ 		       swait_active(&ffs->ep0req_completion.wait) ||
+ 		       waitqueue_active(&ffs->wait));
+@@ -3031,6 +3028,7 @@ static inline struct f_fs_opts *ffs_do_functionfs_bind(struct usb_function *f,
+ 	struct ffs_function *func = ffs_func_from_usb(f);
+ 	struct f_fs_opts *ffs_opts =
+ 		container_of(f->fi, struct f_fs_opts, func_inst);
++	struct ffs_data *ffs_data;
+ 	int ret;
+ 
+ 	ENTER();
+@@ -3045,12 +3043,13 @@ static inline struct f_fs_opts *ffs_do_functionfs_bind(struct usb_function *f,
+ 	if (!ffs_opts->no_configfs)
+ 		ffs_dev_lock();
+ 	ret = ffs_opts->dev->desc_ready ? 0 : -ENODEV;
+-	func->ffs = ffs_opts->dev->ffs_data;
++	ffs_data = ffs_opts->dev->ffs_data;
+ 	if (!ffs_opts->no_configfs)
+ 		ffs_dev_unlock();
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
++	func->ffs = ffs_data;
+ 	func->conf = c;
+ 	func->gadget = c->cdev->gadget;
+ 
+@@ -3505,6 +3504,7 @@ static void ffs_free_inst(struct usb_function_instance *f)
+ 	struct f_fs_opts *opts;
+ 
+ 	opts = to_f_fs_opts(f);
++	ffs_release_dev(opts->dev);
+ 	ffs_dev_lock();
+ 	_ffs_free_dev(opts->dev);
+ 	ffs_dev_unlock();
+@@ -3692,47 +3692,48 @@ static void _ffs_free_dev(struct ffs_dev *dev)
+ {
+ 	list_del(&dev->entry);
+ 
+-	/* Clear the private_data pointer to stop incorrect dev access */
+-	if (dev->ffs_data)
+-		dev->ffs_data->private_data = NULL;
+-
+ 	kfree(dev);
+ 	if (list_empty(&ffs_devices))
+ 		functionfs_cleanup();
+ }
+ 
+-static void *ffs_acquire_dev(const char *dev_name)
++static int ffs_acquire_dev(const char *dev_name, struct ffs_data *ffs_data)
+ {
++	int ret = 0;
+ 	struct ffs_dev *ffs_dev;
+ 
+ 	ENTER();
+ 	ffs_dev_lock();
+ 
+ 	ffs_dev = _ffs_find_dev(dev_name);
+-	if (!ffs_dev)
+-		ffs_dev = ERR_PTR(-ENOENT);
+-	else if (ffs_dev->mounted)
+-		ffs_dev = ERR_PTR(-EBUSY);
+-	else if (ffs_dev->ffs_acquire_dev_callback &&
+-	    ffs_dev->ffs_acquire_dev_callback(ffs_dev))
+-		ffs_dev = ERR_PTR(-ENOENT);
+-	else
++	if (!ffs_dev) {
++		ret = -ENOENT;
++	} else if (ffs_dev->mounted) {
++		ret = -EBUSY;
++	} else if (ffs_dev->ffs_acquire_dev_callback &&
++		   ffs_dev->ffs_acquire_dev_callback(ffs_dev)) {
++		ret = -ENOENT;
++	} else {
+ 		ffs_dev->mounted = true;
++		ffs_dev->ffs_data = ffs_data;
++		ffs_data->private_data = ffs_dev;
++	}
+ 
+ 	ffs_dev_unlock();
+-	return ffs_dev;
++	return ret;
+ }
+ 
+-static void ffs_release_dev(struct ffs_data *ffs_data)
++static void ffs_release_dev(struct ffs_dev *ffs_dev)
+ {
+-	struct ffs_dev *ffs_dev;
+-
+ 	ENTER();
+ 	ffs_dev_lock();
+ 
+-	ffs_dev = ffs_data->private_data;
+-	if (ffs_dev) {
++	if (ffs_dev && ffs_dev->mounted) {
+ 		ffs_dev->mounted = false;
++		if (ffs_dev->ffs_data) {
++			ffs_dev->ffs_data->private_data = NULL;
++			ffs_dev->ffs_data = NULL;
++		}
+ 
+ 		if (ffs_dev->ffs_release_dev_callback)
+ 			ffs_dev->ffs_release_dev_callback(ffs_dev);
+@@ -3760,7 +3761,6 @@ static int ffs_ready(struct ffs_data *ffs)
+ 	}
+ 
+ 	ffs_obj->desc_ready = true;
+-	ffs_obj->ffs_data = ffs;
+ 
+ 	if (ffs_obj->ffs_ready_callback) {
+ 		ret = ffs_obj->ffs_ready_callback(ffs);
+@@ -3788,7 +3788,6 @@ static void ffs_closed(struct ffs_data *ffs)
+ 		goto done;
+ 
+ 	ffs_obj->desc_ready = false;
+-	ffs_obj->ffs_data = NULL;
+ 
+ 	if (test_and_clear_bit(FFS_FL_CALL_CLOSED_CALLBACK, &ffs->flags) &&
+ 	    ffs_obj->ffs_closed_callback)
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 8ce043e6ed872..ed380ee58ab5d 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1938,6 +1938,7 @@ no_bw:
+ 	xhci->hw_ports = NULL;
+ 	xhci->rh_bw = NULL;
+ 	xhci->ext_caps = NULL;
++	xhci->port_caps = NULL;
+ 
+ 	xhci->page_size = 0;
+ 	xhci->page_shift = 0;
+diff --git a/drivers/usb/host/xhci-pci-renesas.c b/drivers/usb/host/xhci-pci-renesas.c
+index f97ac9f52bf4d..431213cdf9e0e 100644
+--- a/drivers/usb/host/xhci-pci-renesas.c
++++ b/drivers/usb/host/xhci-pci-renesas.c
+@@ -207,7 +207,8 @@ static int renesas_check_rom_state(struct pci_dev *pdev)
+ 			return 0;
+ 
+ 		case RENESAS_ROM_STATUS_NO_RESULT: /* No result yet */
+-			return 0;
++			dev_dbg(&pdev->dev, "Unknown ROM status ...\n");
++			break;
+ 
+ 		case RENESAS_ROM_STATUS_ERROR: /* Error State */
+ 		default: /* All other states are marked as "Reserved states" */
+@@ -224,13 +225,12 @@ static int renesas_fw_check_running(struct pci_dev *pdev)
+ 	u8 fw_state;
+ 	int err;
+ 
+-	/* Check if device has ROM and loaded, if so skip everything */
+-	err = renesas_check_rom(pdev);
+-	if (err) { /* we have rom */
+-		err = renesas_check_rom_state(pdev);
+-		if (!err)
+-			return err;
+-	}
++	/*
++	 * Only if device has ROM and loaded FW we can skip loading and
++	 * return success. Otherwise (even unknown state), attempt to load FW.
++	 */
++	if (renesas_check_rom(pdev) && !renesas_check_rom_state(pdev))
++		return 0;
+ 
+ 	/*
+ 	 * Test if the device is actually needing the firmware. As most
+diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
+index 35eec707cb512..c7d44daa05c4a 100644
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -446,8 +446,10 @@ typec_register_altmode(struct device *parent,
+ 	int ret;
+ 
+ 	alt = kzalloc(sizeof(*alt), GFP_KERNEL);
+-	if (!alt)
++	if (!alt) {
++		altmode_id_remove(parent, id);
+ 		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	alt->adev.svid = desc->svid;
+ 	alt->adev.mode = desc->mode;
+diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
+index 48b048edf1ee8..57ae8b46b8361 100644
+--- a/drivers/vfio/pci/vfio_pci.c
++++ b/drivers/vfio/pci/vfio_pci.c
+@@ -1614,6 +1614,7 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+ 	struct vfio_pci_device *vdev = vma->vm_private_data;
++	struct vfio_pci_mmap_vma *mmap_vma;
+ 	vm_fault_t ret = VM_FAULT_NOPAGE;
+ 
+ 	mutex_lock(&vdev->vma_lock);
+@@ -1621,24 +1622,36 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
+ 
+ 	if (!__vfio_pci_memory_enabled(vdev)) {
+ 		ret = VM_FAULT_SIGBUS;
+-		mutex_unlock(&vdev->vma_lock);
+ 		goto up_out;
+ 	}
+ 
+-	if (__vfio_pci_add_vma(vdev, vma)) {
+-		ret = VM_FAULT_OOM;
+-		mutex_unlock(&vdev->vma_lock);
+-		goto up_out;
++	/*
++	 * We populate the whole vma on fault, so we need to test whether
++	 * the vma has already been mapped, such as for concurrent faults
++	 * to the same vma.  io_remap_pfn_range() will trigger a BUG_ON if
++	 * we ask it to fill the same range again.
++	 */
++	list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next) {
++		if (mmap_vma->vma == vma)
++			goto up_out;
+ 	}
+ 
+-	mutex_unlock(&vdev->vma_lock);
+-
+ 	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+-			       vma->vm_end - vma->vm_start, vma->vm_page_prot))
++			       vma->vm_end - vma->vm_start,
++			       vma->vm_page_prot)) {
+ 		ret = VM_FAULT_SIGBUS;
++		zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
++		goto up_out;
++	}
++
++	if (__vfio_pci_add_vma(vdev, vma)) {
++		ret = VM_FAULT_OOM;
++		zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
++	}
+ 
+ up_out:
+ 	up_read(&vdev->memory_lock);
++	mutex_unlock(&vdev->vma_lock);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/video/backlight/lm3630a_bl.c b/drivers/video/backlight/lm3630a_bl.c
+index e88a2b0e59046..662029d6a3dc9 100644
+--- a/drivers/video/backlight/lm3630a_bl.c
++++ b/drivers/video/backlight/lm3630a_bl.c
+@@ -482,8 +482,10 @@ static int lm3630a_parse_node(struct lm3630a_chip *pchip,
+ 
+ 	device_for_each_child_node(pchip->dev, node) {
+ 		ret = lm3630a_parse_bank(pdata, node, &seen_led_sources);
+-		if (ret)
++		if (ret) {
++			fwnode_handle_put(node);
+ 			return ret;
++		}
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
+index 884b16efa7e8a..564bd0407ed81 100644
+--- a/drivers/video/fbdev/imxfb.c
++++ b/drivers/video/fbdev/imxfb.c
+@@ -992,7 +992,7 @@ static int imxfb_probe(struct platform_device *pdev)
+ 	info->screen_buffer = dma_alloc_wc(&pdev->dev, fbi->map_size,
+ 					   &fbi->map_dma, GFP_KERNEL);
+ 	if (!info->screen_buffer) {
+-		dev_err(&pdev->dev, "Failed to allocate video RAM: %d\n", ret);
++		dev_err(&pdev->dev, "Failed to allocate video RAM\n");
+ 		ret = -ENOMEM;
+ 		goto failed_map;
+ 	}
+diff --git a/drivers/visorbus/visorchipset.c b/drivers/visorbus/visorchipset.c
+index cb1eb7e05f871..5668cad86e374 100644
+--- a/drivers/visorbus/visorchipset.c
++++ b/drivers/visorbus/visorchipset.c
+@@ -1561,7 +1561,7 @@ schedule_out:
+ 
+ static int visorchipset_init(struct acpi_device *acpi_device)
+ {
+-	int err = -ENODEV;
++	int err = -ENOMEM;
+ 	struct visorchannel *controlvm_channel;
+ 
+ 	chipset_dev = kzalloc(sizeof(*chipset_dev), GFP_KERNEL);
+@@ -1584,8 +1584,10 @@ static int visorchipset_init(struct acpi_device *acpi_device)
+ 				 "controlvm",
+ 				 sizeof(struct visor_controlvm_channel),
+ 				 VISOR_CONTROLVM_CHANNEL_VERSIONID,
+-				 VISOR_CHANNEL_SIGNATURE))
++				 VISOR_CHANNEL_SIGNATURE)) {
++		err = -ENODEV;
+ 		goto error_delete_groups;
++	}
+ 	/* if booting in a crash kernel */
+ 	if (is_kdump_kernel())
+ 		INIT_DELAYED_WORK(&chipset_dev->periodic_controlvm_work,
+diff --git a/fs/btrfs/Kconfig b/fs/btrfs/Kconfig
+index 68b95ad82126e..520a0f6a7d9e9 100644
+--- a/fs/btrfs/Kconfig
++++ b/fs/btrfs/Kconfig
+@@ -18,6 +18,8 @@ config BTRFS_FS
+ 	select RAID6_PQ
+ 	select XOR_BLOCKS
+ 	select SRCU
++	depends on !PPC_256K_PAGES	# powerpc
++	depends on !PAGE_SIZE_256KB	# hexagon
+ 
+ 	help
+ 	  Btrfs is a general purpose copy-on-write filesystem with extents,
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index 4e2cce5ca7f6a..04422d929c232 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -1032,12 +1032,10 @@ static int __btrfs_update_delayed_inode(struct btrfs_trans_handle *trans,
+ 	nofs_flag = memalloc_nofs_save();
+ 	ret = btrfs_lookup_inode(trans, root, path, &key, mod);
+ 	memalloc_nofs_restore(nofs_flag);
+-	if (ret > 0) {
+-		btrfs_release_path(path);
+-		return -ENOENT;
+-	} else if (ret < 0) {
+-		return ret;
+-	}
++	if (ret > 0)
++		ret = -ENOENT;
++	if (ret < 0)
++		goto out;
+ 
+ 	leaf = path->nodes[0];
+ 	inode_item = btrfs_item_ptr(leaf, path->slots[0],
+@@ -1075,6 +1073,14 @@ err_out:
+ 	btrfs_delayed_inode_release_metadata(fs_info, node, (ret < 0));
+ 	btrfs_release_delayed_inode(node);
+ 
++	/*
++	 * If we fail to update the delayed inode we need to abort the
++	 * transaction, because we could leave the inode with the improper
++	 * counts behind.
++	 */
++	if (ret && ret != -ENOENT)
++		btrfs_abort_transaction(trans, ret);
++
+ 	return ret;
+ 
+ search:
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 4f26dae63b64a..4f21b8fbfd4bc 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -547,7 +547,7 @@ again:
+ 	 * inode has not been flagged as nocompress.  This flag can
+ 	 * change at any time if we discover bad compression ratios.
+ 	 */
+-	if (inode_need_compress(BTRFS_I(inode), start, end)) {
++	if (nr_pages > 1 && inode_need_compress(BTRFS_I(inode), start, end)) {
+ 		WARN_ON(pages);
+ 		pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
+ 		if (!pages) {
+@@ -8213,7 +8213,19 @@ static void btrfs_invalidatepage(struct page *page, unsigned int offset,
+ 	 */
+ 	wait_on_page_writeback(page);
+ 
+-	if (offset) {
++	/*
++	 * For subpage case, we have call sites like
++	 * btrfs_punch_hole_lock_range() which passes range not aligned to
++	 * sectorsize.
++	 * If the range doesn't cover the full page, we don't need to and
++	 * shouldn't clear page extent mapped, as page->private can still
++	 * record subpage dirty bits for other part of the range.
++	 *
++	 * For cases that can invalidate the full even the range doesn't
++	 * cover the full page, like invalidating the last page, we're
++	 * still safe to wait for ordered extent to finish.
++	 */
++	if (!(offset == 0 && length == PAGE_SIZE)) {
+ 		btrfs_releasepage(page, GFP_NOFS);
+ 		return;
+ 	}
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 9e5809118c34d..10f020ab1186f 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -4080,6 +4080,17 @@ static int process_recorded_refs(struct send_ctx *sctx, int *pending_move)
+ 				if (ret < 0)
+ 					goto out;
+ 			} else {
++				/*
++				 * If we previously orphanized a directory that
++				 * collided with a new reference that we already
++				 * processed, recompute the current path because
++				 * that directory may be part of the path.
++				 */
++				if (orphanized_dir) {
++					ret = refresh_ref_path(sctx, cur);
++					if (ret < 0)
++						goto out;
++				}
+ 				ret = send_unlink(sctx, cur->full_path);
+ 				if (ret < 0)
+ 					goto out;
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index 279d9262b676d..3bb6b688ece52 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -382,7 +382,7 @@ static ssize_t btrfs_discard_bitmap_bytes_show(struct kobject *kobj,
+ {
+ 	struct btrfs_fs_info *fs_info = discard_to_fs_info(kobj);
+ 
+-	return scnprintf(buf, PAGE_SIZE, "%lld\n",
++	return scnprintf(buf, PAGE_SIZE, "%llu\n",
+ 			fs_info->discard_ctl.discard_bitmap_bytes);
+ }
+ BTRFS_ATTR(discard, discard_bitmap_bytes, btrfs_discard_bitmap_bytes_show);
+@@ -404,7 +404,7 @@ static ssize_t btrfs_discard_extent_bytes_show(struct kobject *kobj,
+ {
+ 	struct btrfs_fs_info *fs_info = discard_to_fs_info(kobj);
+ 
+-	return scnprintf(buf, PAGE_SIZE, "%lld\n",
++	return scnprintf(buf, PAGE_SIZE, "%llu\n",
+ 			fs_info->discard_ctl.discard_extent_bytes);
+ }
+ BTRFS_ATTR(discard, discard_extent_bytes, btrfs_discard_extent_bytes_show);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index af2f2f8704d8b..8daa9e4eb1d2e 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1382,8 +1382,10 @@ int btrfs_defrag_root(struct btrfs_root *root)
+ 
+ 	while (1) {
+ 		trans = btrfs_start_transaction(root, 0);
+-		if (IS_ERR(trans))
+-			return PTR_ERR(trans);
++		if (IS_ERR(trans)) {
++			ret = PTR_ERR(trans);
++			break;
++		}
+ 
+ 		ret = btrfs_defrag_leaves(trans, root);
+ 
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 300951088a11c..4b913de2f24fb 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -6348,6 +6348,7 @@ next:
+ error:
+ 	if (wc.trans)
+ 		btrfs_end_transaction(wc.trans);
++	clear_bit(BTRFS_FS_LOG_RECOVERING, &fs_info->flags);
+ 	btrfs_free_path(path);
+ 	return ret;
+ }
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 248ee81e01516..6599069be690e 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -979,7 +979,7 @@ struct cifs_ses {
+ 	struct mutex session_mutex;
+ 	struct TCP_Server_Info *server;	/* pointer to server info */
+ 	int ses_count;		/* reference counter */
+-	enum statusEnum status;
++	enum statusEnum status;  /* updates protected by GlobalMid_Lock */
+ 	unsigned overrideSecFlg;  /* if non-zero override global sec flags */
+ 	char *serverOS;		/* name of operating system underlying server */
+ 	char *serverNOS;	/* name of network operating system of server */
+@@ -1863,6 +1863,7 @@ require use of the stronger protocol */
+  *	list operations on pending_mid_q and oplockQ
+  *      updates to XID counters, multiplex id  and SMB sequence numbers
+  *      list operations on global DnotifyReqList
++ *      updates to ses->status
+  *  tcp_ses_lock protects:
+  *	list operations on tcp and SMB session lists
+  *  tcon->open_file_lock protects the list of open files hanging off the tcon
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index aabaebd1535f0..fb7088d57e46f 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -2829,9 +2829,12 @@ void cifs_put_smb_ses(struct cifs_ses *ses)
+ 		spin_unlock(&cifs_tcp_ses_lock);
+ 		return;
+ 	}
++	spin_unlock(&cifs_tcp_ses_lock);
++
++	spin_lock(&GlobalMid_Lock);
+ 	if (ses->status == CifsGood)
+ 		ses->status = CifsExiting;
+-	spin_unlock(&cifs_tcp_ses_lock);
++	spin_unlock(&GlobalMid_Lock);
+ 
+ 	cifs_free_ipc(ses);
+ 
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index a9d1555301446..f6ceb79a995d0 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -3459,6 +3459,119 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+ 	return rc;
+ }
+ 
++static int smb3_simple_fallocate_write_range(unsigned int xid,
++					     struct cifs_tcon *tcon,
++					     struct cifsFileInfo *cfile,
++					     loff_t off, loff_t len,
++					     char *buf)
++{
++	struct cifs_io_parms io_parms = {0};
++	int nbytes;
++	struct kvec iov[2];
++
++	io_parms.netfid = cfile->fid.netfid;
++	io_parms.pid = current->tgid;
++	io_parms.tcon = tcon;
++	io_parms.persistent_fid = cfile->fid.persistent_fid;
++	io_parms.volatile_fid = cfile->fid.volatile_fid;
++	io_parms.offset = off;
++	io_parms.length = len;
++
++	/* iov[0] is reserved for smb header */
++	iov[1].iov_base = buf;
++	iov[1].iov_len = io_parms.length;
++	return SMB2_write(xid, &io_parms, &nbytes, iov, 1);
++}
++
++static int smb3_simple_fallocate_range(unsigned int xid,
++				       struct cifs_tcon *tcon,
++				       struct cifsFileInfo *cfile,
++				       loff_t off, loff_t len)
++{
++	struct file_allocated_range_buffer in_data, *out_data = NULL, *tmp_data;
++	u32 out_data_len;
++	char *buf = NULL;
++	loff_t l;
++	int rc;
++
++	in_data.file_offset = cpu_to_le64(off);
++	in_data.length = cpu_to_le64(len);
++	rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
++			cfile->fid.volatile_fid,
++			FSCTL_QUERY_ALLOCATED_RANGES, true,
++			(char *)&in_data, sizeof(in_data),
++			1024 * sizeof(struct file_allocated_range_buffer),
++			(char **)&out_data, &out_data_len);
++	if (rc)
++		goto out;
++	/*
++	 * It is already all allocated
++	 */
++	if (out_data_len == 0)
++		goto out;
++
++	buf = kzalloc(1024 * 1024, GFP_KERNEL);
++	if (buf == NULL) {
++		rc = -ENOMEM;
++		goto out;
++	}
++
++	tmp_data = out_data;
++	while (len) {
++		/*
++		 * The rest of the region is unmapped so write it all.
++		 */
++		if (out_data_len == 0) {
++			rc = smb3_simple_fallocate_write_range(xid, tcon,
++					       cfile, off, len, buf);
++			goto out;
++		}
++
++		if (out_data_len < sizeof(struct file_allocated_range_buffer)) {
++			rc = -EINVAL;
++			goto out;
++		}
++
++		if (off < le64_to_cpu(tmp_data->file_offset)) {
++			/*
++			 * We are at a hole. Write until the end of the region
++			 * or until the next allocated data,
++			 * whichever comes next.
++			 */
++			l = le64_to_cpu(tmp_data->file_offset) - off;
++			if (len < l)
++				l = len;
++			rc = smb3_simple_fallocate_write_range(xid, tcon,
++					       cfile, off, l, buf);
++			if (rc)
++				goto out;
++			off = off + l;
++			len = len - l;
++			if (len == 0)
++				goto out;
++		}
++		/*
++		 * We are at a section of allocated data, just skip forward
++		 * until the end of the data or the end of the region
++		 * we are supposed to fallocate, whichever comes first.
++		 */
++		l = le64_to_cpu(tmp_data->length);
++		if (len < l)
++			l = len;
++		off += l;
++		len -= l;
++
++		tmp_data = &tmp_data[1];
++		out_data_len -= sizeof(struct file_allocated_range_buffer);
++	}
++
++ out:
++	kfree(out_data);
++	kfree(buf);
++	return rc;
++}
++
++
+ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+ 			    loff_t off, loff_t len, bool keep_size)
+ {
+@@ -3519,6 +3632,26 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+ 	}
+ 
+ 	if ((keep_size == true) || (i_size_read(inode) >= off + len)) {
++		/*
++		 * At this point, we are trying to fallocate an internal
++		 * regions of a sparse file. Since smb2 does not have a
++		 * fallocate command we have two otions on how to emulate this.
++		 * We can either turn the entire file to become non-sparse
++		 * which we only do if the fallocate is for virtually
++		 * the whole file,  or we can overwrite the region with zeroes
++		 * using SMB2_write, which could be prohibitevly expensive
++		 * if len is large.
++		 */
++		/*
++		 * We are only trying to fallocate a small region so
++		 * just write it with zero.
++		 */
++		if (len <= 1024 * 1024) {
++			rc = smb3_simple_fallocate_range(xid, tcon, cfile,
++							 off, len);
++			goto out;
++		}
++
+ 		/*
+ 		 * Check if falloc starts within first few pages of file
+ 		 * and ends within a few pages of the end of file to
+diff --git a/fs/configfs/file.c b/fs/configfs/file.c
+index da8351d1e4552..4d0825213116a 100644
+--- a/fs/configfs/file.c
++++ b/fs/configfs/file.c
+@@ -482,13 +482,13 @@ static int configfs_release_bin_file(struct inode *inode, struct file *file)
+ 					buffer->bin_buffer_size);
+ 		}
+ 		up_read(&frag->frag_sem);
+-		/* vfree on NULL is safe */
+-		vfree(buffer->bin_buffer);
+-		buffer->bin_buffer = NULL;
+-		buffer->bin_buffer_size = 0;
+-		buffer->needs_read_fill = 1;
+ 	}
+ 
++	vfree(buffer->bin_buffer);
++	buffer->bin_buffer = NULL;
++	buffer->bin_buffer_size = 0;
++	buffer->needs_read_fill = 1;
++
+ 	configfs_release(inode, file);
+ 	return 0;
+ }
+diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c
+index 1fbe6c24d7052..9fa871e287ba3 100644
+--- a/fs/crypto/fname.c
++++ b/fs/crypto/fname.c
+@@ -344,13 +344,9 @@ int fscrypt_fname_disk_to_usr(const struct inode *inode,
+ 		     offsetof(struct fscrypt_nokey_name, sha256));
+ 	BUILD_BUG_ON(BASE64_CHARS(FSCRYPT_NOKEY_NAME_MAX) > NAME_MAX);
+ 
+-	if (hash) {
+-		nokey_name.dirhash[0] = hash;
+-		nokey_name.dirhash[1] = minor_hash;
+-	} else {
+-		nokey_name.dirhash[0] = 0;
+-		nokey_name.dirhash[1] = 0;
+-	}
++	nokey_name.dirhash[0] = hash;
++	nokey_name.dirhash[1] = minor_hash;
++
+ 	if (iname->len <= sizeof(nokey_name.bytes)) {
+ 		memcpy(nokey_name.bytes, iname->name, iname->len);
+ 		size = offsetof(struct fscrypt_nokey_name, bytes[iname->len]);
+diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
+index 31fb08d94f874..9a6f9a188efb9 100644
+--- a/fs/crypto/keysetup.c
++++ b/fs/crypto/keysetup.c
+@@ -210,15 +210,40 @@ out_unlock:
+ 	return err;
+ }
+ 
++/*
++ * Derive a SipHash key from the given fscrypt master key and the given
++ * application-specific information string.
++ *
++ * Note that the KDF produces a byte array, but the SipHash APIs expect the key
++ * as a pair of 64-bit words.  Therefore, on big endian CPUs we have to do an
++ * endianness swap in order to get the same results as on little endian CPUs.
++ */
++static int fscrypt_derive_siphash_key(const struct fscrypt_master_key *mk,
++				      u8 context, const u8 *info,
++				      unsigned int infolen, siphash_key_t *key)
++{
++	int err;
++
++	err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf, context, info, infolen,
++				  (u8 *)key, sizeof(*key));
++	if (err)
++		return err;
++
++	BUILD_BUG_ON(sizeof(*key) != 16);
++	BUILD_BUG_ON(ARRAY_SIZE(key->key) != 2);
++	le64_to_cpus(&key->key[0]);
++	le64_to_cpus(&key->key[1]);
++	return 0;
++}
++
+ int fscrypt_derive_dirhash_key(struct fscrypt_info *ci,
+ 			       const struct fscrypt_master_key *mk)
+ {
+ 	int err;
+ 
+-	err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf, HKDF_CONTEXT_DIRHASH_KEY,
+-				  ci->ci_nonce, FSCRYPT_FILE_NONCE_SIZE,
+-				  (u8 *)&ci->ci_dirhash_key,
+-				  sizeof(ci->ci_dirhash_key));
++	err = fscrypt_derive_siphash_key(mk, HKDF_CONTEXT_DIRHASH_KEY,
++					 ci->ci_nonce, FSCRYPT_FILE_NONCE_SIZE,
++					 &ci->ci_dirhash_key);
+ 	if (err)
+ 		return err;
+ 	ci->ci_dirhash_key_initialized = true;
+@@ -253,10 +278,9 @@ static int fscrypt_setup_iv_ino_lblk_32_key(struct fscrypt_info *ci,
+ 		if (mk->mk_ino_hash_key_initialized)
+ 			goto unlock;
+ 
+-		err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf,
+-					  HKDF_CONTEXT_INODE_HASH_KEY, NULL, 0,
+-					  (u8 *)&mk->mk_ino_hash_key,
+-					  sizeof(mk->mk_ino_hash_key));
++		err = fscrypt_derive_siphash_key(mk,
++						 HKDF_CONTEXT_INODE_HASH_KEY,
++						 NULL, 0, &mk->mk_ino_hash_key);
+ 		if (err)
+ 			goto unlock;
+ 		/* pairs with smp_load_acquire() above */
+diff --git a/fs/dax.c b/fs/dax.c
+index df5485b4bddf1..d5d7b9393bcaa 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -488,10 +488,11 @@ static void *grab_mapping_entry(struct xa_state *xas,
+ 		struct address_space *mapping, unsigned int order)
+ {
+ 	unsigned long index = xas->xa_index;
+-	bool pmd_downgrade = false; /* splitting PMD entry into PTE entries? */
++	bool pmd_downgrade;	/* splitting PMD entry into PTE entries? */
+ 	void *entry;
+ 
+ retry:
++	pmd_downgrade = false;
+ 	xas_lock_irq(xas);
+ 	entry = get_unlocked_entry(xas, order);
+ 
+diff --git a/fs/dlm/config.c b/fs/dlm/config.c
+index 73e6643903af5..18a8ffcea0aa4 100644
+--- a/fs/dlm/config.c
++++ b/fs/dlm/config.c
+@@ -79,6 +79,9 @@ struct dlm_cluster {
+ 	unsigned int cl_new_rsb_count;
+ 	unsigned int cl_recover_callbacks;
+ 	char cl_cluster_name[DLM_LOCKSPACE_LEN];
++
++	struct dlm_spaces *sps;
++	struct dlm_comms *cms;
+ };
+ 
+ static struct dlm_cluster *config_item_to_cluster(struct config_item *i)
+@@ -379,6 +382,9 @@ static struct config_group *make_cluster(struct config_group *g,
+ 	if (!cl || !sps || !cms)
+ 		goto fail;
+ 
++	cl->sps = sps;
++	cl->cms = cms;
++
+ 	config_group_init_type_name(&cl->group, name, &cluster_type);
+ 	config_group_init_type_name(&sps->ss_group, "spaces", &spaces_type);
+ 	config_group_init_type_name(&cms->cs_group, "comms", &comms_type);
+@@ -428,6 +434,9 @@ static void drop_cluster(struct config_group *g, struct config_item *i)
+ static void release_cluster(struct config_item *i)
+ {
+ 	struct dlm_cluster *cl = config_item_to_cluster(i);
++
++	kfree(cl->sps);
++	kfree(cl->cms);
+ 	kfree(cl);
+ }
+ 
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index 44e2716ac1580..0c78fdfb1f6fa 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -599,7 +599,7 @@ static void close_connection(struct connection *con, bool and_other,
+ 	}
+ 	if (con->othercon && and_other) {
+ 		/* Will only re-enter once. */
+-		close_connection(con->othercon, false, true, true);
++		close_connection(con->othercon, false, tx, rx);
+ 	}
+ 
+ 	con->rx_leftover = 0;
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index d5a6b9b888a56..f31a08d86be89 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -155,6 +155,7 @@ static int erofs_read_superblock(struct super_block *sb)
+ 			goto out;
+ 	}
+ 
++	ret = -EINVAL;
+ 	blkszbits = dsb->blkszbits;
+ 	/* 9(512 bytes) + LOG_SECTORS_PER_BLOCK == LOG_BLOCK_SIZE */
+ 	if (blkszbits != LOG_BLOCK_SIZE) {
+diff --git a/fs/exec.c b/fs/exec.c
+index ca89e0e3ef10f..c7a4ef8df3058 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1347,6 +1347,10 @@ int begin_new_exec(struct linux_binprm * bprm)
+ 	WRITE_ONCE(me->self_exec_id, me->self_exec_id + 1);
+ 	flush_signal_handlers(me, 0);
+ 
++	retval = set_cred_ucounts(bprm->cred);
++	if (retval < 0)
++		goto out_unlock;
++
+ 	/*
+ 	 * install the new credentials for this executable
+ 	 */
+diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
+index 916797077aad4..dedbc55cd48f5 100644
+--- a/fs/exfat/dir.c
++++ b/fs/exfat/dir.c
+@@ -62,7 +62,7 @@ static void exfat_get_uniname_from_ext_entry(struct super_block *sb,
+ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_entry *dir_entry)
+ {
+ 	int i, dentries_per_clu, dentries_per_clu_bits = 0, num_ext;
+-	unsigned int type, clu_offset;
++	unsigned int type, clu_offset, max_dentries;
+ 	sector_t sector;
+ 	struct exfat_chain dir, clu;
+ 	struct exfat_uni_name uni_name;
+@@ -85,6 +85,8 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
+ 
+ 	dentries_per_clu = sbi->dentries_per_clu;
+ 	dentries_per_clu_bits = ilog2(dentries_per_clu);
++	max_dentries = (unsigned int)min_t(u64, MAX_EXFAT_DENTRIES,
++					   (u64)sbi->num_clusters << dentries_per_clu_bits);
+ 
+ 	clu_offset = dentry >> dentries_per_clu_bits;
+ 	exfat_chain_dup(&clu, &dir);
+@@ -108,7 +110,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
+ 		}
+ 	}
+ 
+-	while (clu.dir != EXFAT_EOF_CLUSTER) {
++	while (clu.dir != EXFAT_EOF_CLUSTER && dentry < max_dentries) {
+ 		i = dentry & (dentries_per_clu - 1);
+ 
+ 		for ( ; i < dentries_per_clu; i++, dentry++) {
+@@ -244,7 +246,7 @@ static int exfat_iterate(struct file *filp, struct dir_context *ctx)
+ 	if (err)
+ 		goto unlock;
+ get_new:
+-	if (cpos >= i_size_read(inode))
++	if (ei->flags == ALLOC_NO_FAT_CHAIN && cpos >= i_size_read(inode))
+ 		goto end_of_dir;
+ 
+ 	err = exfat_readdir(inode, &cpos, &de);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index e6542ba264330..e00a35530a4e0 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -825,6 +825,7 @@ void ext4_ext_tree_init(handle_t *handle, struct inode *inode)
+ 	eh->eh_entries = 0;
+ 	eh->eh_magic = EXT4_EXT_MAGIC;
+ 	eh->eh_max = cpu_to_le16(ext4_ext_space_root(inode, 0));
++	eh->eh_generation = 0;
+ 	ext4_mark_inode_dirty(handle, inode);
+ }
+ 
+@@ -1090,6 +1091,7 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ 	neh->eh_max = cpu_to_le16(ext4_ext_space_block(inode, 0));
+ 	neh->eh_magic = EXT4_EXT_MAGIC;
+ 	neh->eh_depth = 0;
++	neh->eh_generation = 0;
+ 
+ 	/* move remainder of path[depth] to the new leaf */
+ 	if (unlikely(path[depth].p_hdr->eh_entries !=
+@@ -1167,6 +1169,7 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ 		neh->eh_magic = EXT4_EXT_MAGIC;
+ 		neh->eh_max = cpu_to_le16(ext4_ext_space_block_idx(inode, 0));
+ 		neh->eh_depth = cpu_to_le16(depth - i);
++		neh->eh_generation = 0;
+ 		fidx = EXT_FIRST_INDEX(neh);
+ 		fidx->ei_block = border;
+ 		ext4_idx_store_pblock(fidx, oldblock);
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index 0a729027322dd..9a3a8996aacf7 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -1574,11 +1574,9 @@ static unsigned long ext4_es_scan(struct shrinker *shrink,
+ 	ret = percpu_counter_read_positive(&sbi->s_es_stats.es_stats_shk_cnt);
+ 	trace_ext4_es_shrink_scan_enter(sbi->s_sb, nr_to_scan, ret);
+ 
+-	if (!nr_to_scan)
+-		return ret;
+-
+ 	nr_shrunk = __es_shrink(sbi, nr_to_scan, NULL);
+ 
++	ret = percpu_counter_read_positive(&sbi->s_es_stats.es_stats_shk_cnt);
+ 	trace_ext4_es_shrink_scan_exit(sbi->s_sb, nr_shrunk, ret);
+ 	return nr_shrunk;
+ }
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index b294ebcb4db4b..875af329c43ec 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -402,7 +402,7 @@ static void get_orlov_stats(struct super_block *sb, ext4_group_t g,
+  *
+  * We always try to spread first-level directories.
+  *
+- * If there are blockgroups with both free inodes and free blocks counts
++ * If there are blockgroups with both free inodes and free clusters counts
+  * not worse than average we return one with smallest directory count.
+  * Otherwise we simply return a random group.
+  *
+@@ -411,7 +411,7 @@ static void get_orlov_stats(struct super_block *sb, ext4_group_t g,
+  * It's OK to put directory into a group unless
+  * it has too many directories already (max_dirs) or
+  * it has too few free inodes left (min_inodes) or
+- * it has too few free blocks left (min_blocks) or
++ * it has too few free clusters left (min_clusters) or
+  * Parent's group is preferred, if it doesn't satisfy these
+  * conditions we search cyclically through the rest. If none
+  * of the groups look good we just look for a group with more
+@@ -427,7 +427,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
+ 	ext4_group_t real_ngroups = ext4_get_groups_count(sb);
+ 	int inodes_per_group = EXT4_INODES_PER_GROUP(sb);
+ 	unsigned int freei, avefreei, grp_free;
+-	ext4_fsblk_t freeb, avefreec;
++	ext4_fsblk_t freec, avefreec;
+ 	unsigned int ndirs;
+ 	int max_dirs, min_inodes;
+ 	ext4_grpblk_t min_clusters;
+@@ -446,9 +446,8 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
+ 
+ 	freei = percpu_counter_read_positive(&sbi->s_freeinodes_counter);
+ 	avefreei = freei / ngroups;
+-	freeb = EXT4_C2B(sbi,
+-		percpu_counter_read_positive(&sbi->s_freeclusters_counter));
+-	avefreec = freeb;
++	freec = percpu_counter_read_positive(&sbi->s_freeclusters_counter);
++	avefreec = freec;
+ 	do_div(avefreec, ngroups);
+ 	ndirs = percpu_counter_read_positive(&sbi->s_dirs_counter);
+ 
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 3f11c948feb02..18a5321b5ef37 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -3419,7 +3419,7 @@ retry:
+ 	 * i_disksize out to i_size. This could be beyond where direct I/O is
+ 	 * happening and thus expose allocated blocks to direct I/O reads.
+ 	 */
+-	else if ((map->m_lblk * (1 << blkbits)) >= i_size_read(inode))
++	else if (((loff_t)map->m_lblk << blkbits) >= i_size_read(inode))
+ 		m_flags = EXT4_GET_BLOCKS_CREATE;
+ 	else if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+ 		m_flags = EXT4_GET_BLOCKS_IO_CREATE_EXT;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 9c390c3d7fb15..d7cb7d719ee58 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -1597,10 +1597,11 @@ static int mb_find_extent(struct ext4_buddy *e4b, int block,
+ 	if (ex->fe_start + ex->fe_len > EXT4_CLUSTERS_PER_GROUP(e4b->bd_sb)) {
+ 		/* Should never happen! (but apparently sometimes does?!?) */
+ 		WARN_ON(1);
+-		ext4_error(e4b->bd_sb, "corruption or bug in mb_find_extent "
+-			   "block=%d, order=%d needed=%d ex=%u/%d/%d@%u",
+-			   block, order, needed, ex->fe_group, ex->fe_start,
+-			   ex->fe_len, ex->fe_logical);
++		ext4_grp_locked_error(e4b->bd_sb, e4b->bd_group, 0, 0,
++			"corruption or bug in mb_find_extent "
++			"block=%d, order=%d needed=%d ex=%u/%d/%d@%u",
++			block, order, needed, ex->fe_group, ex->fe_start,
++			ex->fe_len, ex->fe_logical);
+ 		ex->fe_len = 0;
+ 		ex->fe_start = 0;
+ 		ex->fe_group = 0;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 21c4ba2513ce5..4956917b7cc2b 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3099,8 +3099,15 @@ static void ext4_orphan_cleanup(struct super_block *sb,
+ 			inode_lock(inode);
+ 			truncate_inode_pages(inode->i_mapping, inode->i_size);
+ 			ret = ext4_truncate(inode);
+-			if (ret)
++			if (ret) {
++				/*
++				 * We need to clean up the in-core orphan list
++				 * manually if ext4_truncate() failed to get a
++				 * transaction handle.
++				 */
++				ext4_orphan_del(NULL, inode);
+ 				ext4_std_error(inode->i_sb, ret);
++			}
+ 			inode_unlock(inode);
+ 			nr_truncates++;
+ 		} else {
+@@ -5039,6 +5046,7 @@ no_journal:
+ 			ext4_msg(sb, KERN_ERR,
+ 			       "unable to initialize "
+ 			       "flex_bg meta info!");
++			ret = -ENOMEM;
+ 			goto failed_mount6;
+ 		}
+ 
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index bdc0f3b2d7abf..cfae2dddb0bae 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -4112,6 +4112,12 @@ static int f2fs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ 	if (f2fs_readonly(F2FS_I_SB(inode)->sb))
+ 		return -EROFS;
+ 
++	if (f2fs_lfs_mode(F2FS_I_SB(inode))) {
++		f2fs_err(F2FS_I_SB(inode),
++			"Swapfile not supported in LFS mode");
++		return -EINVAL;
++	}
++
+ 	ret = f2fs_convert_inline_inode(inode);
+ 	if (ret)
+ 		return ret;
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index 90dddb507e4af..a0869194ab739 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -505,12 +505,19 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
+ 	if (!isw)
+ 		return;
+ 
++	atomic_inc(&isw_nr_in_flight);
++
+ 	/* find and pin the new wb */
+ 	rcu_read_lock();
+ 	memcg_css = css_from_id(new_wb_id, &memory_cgrp_subsys);
+-	if (memcg_css)
+-		isw->new_wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
++	if (memcg_css && !css_tryget(memcg_css))
++		memcg_css = NULL;
+ 	rcu_read_unlock();
++	if (!memcg_css)
++		goto out_free;
++
++	isw->new_wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
++	css_put(memcg_css);
+ 	if (!isw->new_wb)
+ 		goto out_free;
+ 
+@@ -535,11 +542,10 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
+ 	 * Let's continue after I_WB_SWITCH is guaranteed to be visible.
+ 	 */
+ 	call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
+-
+-	atomic_inc(&isw_nr_in_flight);
+ 	return;
+ 
+ out_free:
++	atomic_dec(&isw_nr_in_flight);
+ 	if (isw->new_wb)
+ 		wb_put(isw->new_wb);
+ 	kfree(isw);
+@@ -2196,28 +2202,6 @@ int dirtytime_interval_handler(struct ctl_table *table, int write,
+ 	return ret;
+ }
+ 
+-static noinline void block_dump___mark_inode_dirty(struct inode *inode)
+-{
+-	if (inode->i_ino || strcmp(inode->i_sb->s_id, "bdev")) {
+-		struct dentry *dentry;
+-		const char *name = "?";
+-
+-		dentry = d_find_alias(inode);
+-		if (dentry) {
+-			spin_lock(&dentry->d_lock);
+-			name = (const char *) dentry->d_name.name;
+-		}
+-		printk(KERN_DEBUG
+-		       "%s(%d): dirtied inode %lu (%s) on %s\n",
+-		       current->comm, task_pid_nr(current), inode->i_ino,
+-		       name, inode->i_sb->s_id);
+-		if (dentry) {
+-			spin_unlock(&dentry->d_lock);
+-			dput(dentry);
+-		}
+-	}
+-}
+-
+ /**
+  * __mark_inode_dirty -	internal function
+  *
+@@ -2277,9 +2261,6 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ 	    (dirtytime && (inode->i_state & I_DIRTY_INODE)))
+ 		return;
+ 
+-	if (unlikely(block_dump))
+-		block_dump___mark_inode_dirty(inode);
+-
+ 	spin_lock(&inode->i_lock);
+ 	if (dirtytime && (inode->i_state & I_DIRTY_INODE))
+ 		goto out_unlock_inode;
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 588f8d1240aab..4140d5c3ab5a5 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -783,6 +783,7 @@ static int fuse_check_page(struct page *page)
+ 	       1 << PG_uptodate |
+ 	       1 << PG_lru |
+ 	       1 << PG_active |
++	       1 << PG_workingset |
+ 	       1 << PG_reclaim |
+ 	       1 << PG_waiters))) {
+ 		dump_page(page, "fuse: trying to steal weird page");
+@@ -1275,6 +1276,15 @@ static ssize_t fuse_dev_do_read(struct fuse_dev *fud, struct file *file,
+ 		goto restart;
+ 	}
+ 	spin_lock(&fpq->lock);
++	/*
++	 *  Must not put request on fpq->io queue after having been shut down by
++	 *  fuse_abort_conn()
++	 */
++	if (!fpq->connected) {
++		req->out.h.error = err = -ECONNABORTED;
++		goto out_end;
++
++	}
+ 	list_add(&req->list, &fpq->io);
+ 	spin_unlock(&fpq->lock);
+ 	cs->req = req;
+@@ -1861,7 +1871,7 @@ static ssize_t fuse_dev_do_write(struct fuse_dev *fud,
+ 	}
+ 
+ 	err = -EINVAL;
+-	if (oh.error <= -1000 || oh.error > 0)
++	if (oh.error <= -512 || oh.error > 0)
+ 		goto copy_finish;
+ 
+ 	spin_lock(&fpq->lock);
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index ffa031fe52933..756bbdd563e08 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -340,18 +340,33 @@ static struct vfsmount *fuse_dentry_automount(struct path *path)
+ 
+ 	/* Initialize superblock, making @mp_fi its root */
+ 	err = fuse_fill_super_submount(sb, mp_fi);
+-	if (err)
++	if (err) {
++		fuse_conn_put(fc);
++		kfree(fm);
++		sb->s_fs_info = NULL;
+ 		goto out_put_sb;
++	}
++
++	down_write(&fc->killsb);
++	list_add_tail(&fm->fc_entry, &fc->mounts);
++	up_write(&fc->killsb);
+ 
+ 	sb->s_flags |= SB_ACTIVE;
+ 	fsc->root = dget(sb->s_root);
++
++	/*
++	 * FIXME: setting SB_BORN requires a write barrier for
++	 *        super_cache_count(). We should actually come
++	 *        up with a proper ->get_tree() implementation
++	 *        for submounts and call vfs_get_tree() to take
++	 *        care of the write barrier.
++	 */
++	smp_wmb();
++	sb->s_flags |= SB_BORN;
++
+ 	/* We are done configuring the superblock, so unlock it */
+ 	up_write(&sb->s_umount);
+ 
+-	down_write(&fc->killsb);
+-	list_add_tail(&fm->fc_entry, &fc->mounts);
+-	up_write(&fc->killsb);
+-
+ 	/* Create the submount */
+ 	mnt = vfs_create_mount(fsc);
+ 	if (IS_ERR(mnt)) {
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index 16fb0184ce5e1..cfd9d03f604fe 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -474,8 +474,8 @@ static vm_fault_t gfs2_page_mkwrite(struct vm_fault *vmf)
+ 	file_update_time(vmf->vma->vm_file);
+ 
+ 	/* page is wholly or partially inside EOF */
+-	if (offset > size - PAGE_SIZE)
+-		length = offset_in_page(size);
++	if (size - offset < PAGE_SIZE)
++		length = size - offset;
+ 	else
+ 		length = PAGE_SIZE;
+ 
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index f2c6bbe5cdb81..ae9c5c1bdc508 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -670,6 +670,7 @@ static int init_statfs(struct gfs2_sbd *sdp)
+ 	}
+ 
+ 	iput(pn);
++	pn = NULL;
+ 	ip = GFS2_I(sdp->sd_sc_inode);
+ 	error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, 0,
+ 				   &sdp->sd_sc_gh);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index fdbaaf579cc60..0138aa7133172 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2770,7 +2770,7 @@ static bool io_file_supports_async(struct file *file, int rw)
+ 			return true;
+ 		return false;
+ 	}
+-	if (S_ISCHR(mode) || S_ISSOCK(mode))
++	if (S_ISSOCK(mode))
+ 		return true;
+ 	if (S_ISREG(mode)) {
+ 		if (io_bdev_nowait(file->f_inode->i_sb->s_bdev) &&
+diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c
+index e9d5c8e638b01..ea18e4a2a691d 100644
+--- a/fs/ntfs/inode.c
++++ b/fs/ntfs/inode.c
+@@ -477,7 +477,7 @@ err_corrupt_attr:
+ 		}
+ 		file_name_attr = (FILE_NAME_ATTR*)((u8*)attr +
+ 				le16_to_cpu(attr->data.resident.value_offset));
+-		p2 = (u8*)attr + le32_to_cpu(attr->data.resident.value_length);
++		p2 = (u8 *)file_name_attr + le32_to_cpu(attr->data.resident.value_length);
+ 		if (p2 < (u8*)attr || p2 > p)
+ 			goto err_corrupt_attr;
+ 		/* This attribute is ok, but is it in the $Extend directory? */
+diff --git a/fs/ocfs2/filecheck.c b/fs/ocfs2/filecheck.c
+index 50f11bfdc8c2d..82a3edc4aea4b 100644
+--- a/fs/ocfs2/filecheck.c
++++ b/fs/ocfs2/filecheck.c
+@@ -328,11 +328,7 @@ static ssize_t ocfs2_filecheck_attr_show(struct kobject *kobj,
+ 		ret = snprintf(buf + total, remain, "%lu\t\t%u\t%s\n",
+ 			       p->fe_ino, p->fe_done,
+ 			       ocfs2_filecheck_error(p->fe_status));
+-		if (ret < 0) {
+-			total = ret;
+-			break;
+-		}
+-		if (ret == remain) {
++		if (ret >= remain) {
+ 			/* snprintf() didn't fit */
+ 			total = -E2BIG;
+ 			break;
+diff --git a/fs/ocfs2/stackglue.c b/fs/ocfs2/stackglue.c
+index a191094694c61..03eacb249f379 100644
+--- a/fs/ocfs2/stackglue.c
++++ b/fs/ocfs2/stackglue.c
+@@ -502,11 +502,7 @@ static ssize_t ocfs2_loaded_cluster_plugins_show(struct kobject *kobj,
+ 	list_for_each_entry(p, &ocfs2_stack_list, sp_list) {
+ 		ret = snprintf(buf, remain, "%s\n",
+ 			       p->sp_name);
+-		if (ret < 0) {
+-			total = ret;
+-			break;
+-		}
+-		if (ret == remain) {
++		if (ret >= remain) {
+ 			/* snprintf() didn't fit */
+ 			total = -E2BIG;
+ 			break;
+@@ -533,7 +529,7 @@ static ssize_t ocfs2_active_cluster_plugin_show(struct kobject *kobj,
+ 	if (active_stack) {
+ 		ret = snprintf(buf, PAGE_SIZE, "%s\n",
+ 			       active_stack->sp_name);
+-		if (ret == PAGE_SIZE)
++		if (ret >= PAGE_SIZE)
+ 			ret = -E2BIG;
+ 	}
+ 	spin_unlock(&ocfs2_stack_lock);
+diff --git a/fs/open.c b/fs/open.c
+index 4d7537ae59df5..3aaaad47d9cac 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -993,12 +993,20 @@ inline struct open_how build_open_how(int flags, umode_t mode)
+ 
+ inline int build_open_flags(const struct open_how *how, struct open_flags *op)
+ {
+-	int flags = how->flags;
++	u64 flags = how->flags;
++	u64 strip = FMODE_NONOTIFY | O_CLOEXEC;
+ 	int lookup_flags = 0;
+ 	int acc_mode = ACC_MODE(flags);
+ 
+-	/* Must never be set by userspace */
+-	flags &= ~(FMODE_NONOTIFY | O_CLOEXEC);
++	BUILD_BUG_ON_MSG(upper_32_bits(VALID_OPEN_FLAGS),
++			 "struct open_flags doesn't yet handle flags > 32 bits");
++
++	/*
++	 * Strip flags that either shouldn't be set by userspace like
++	 * FMODE_NONOTIFY or that aren't relevant in determining struct
++	 * open_flags like O_CLOEXEC.
++	 */
++	flags &= ~strip;
+ 
+ 	/*
+ 	 * Older syscalls implicitly clear all of the invalid flags or argument
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 3cec6fbef725e..3931f60e421f7 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -829,7 +829,7 @@ static int show_smap(struct seq_file *m, void *v)
+ 	__show_smap(m, &mss, false);
+ 
+ 	seq_printf(m, "THPeligible:    %d\n",
+-		   transparent_hugepage_enabled(vma));
++		   transparent_hugepage_active(vma));
+ 
+ 	if (arch_pkeys_enabled())
+ 		seq_printf(m, "ProtectionKey:  %8u\n", vma_pkey(vma));
+diff --git a/fs/pstore/Kconfig b/fs/pstore/Kconfig
+index e16a49ebfe546..8efe60487b486 100644
+--- a/fs/pstore/Kconfig
++++ b/fs/pstore/Kconfig
+@@ -165,6 +165,7 @@ config PSTORE_BLK
+ 	tristate "Log panic/oops to a block device"
+ 	depends on PSTORE
+ 	depends on BLOCK
++	depends on BROKEN
+ 	select PSTORE_ZONE
+ 	default n
+ 	help
+diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
+index d683f5e6d7913..b4d43a4af5f79 100644
+--- a/include/asm-generic/preempt.h
++++ b/include/asm-generic/preempt.h
+@@ -29,7 +29,7 @@ static __always_inline void preempt_count_set(int pc)
+ } while (0)
+ 
+ #define init_idle_preempt_count(p, cpu) do { \
+-	task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
++	task_thread_info(p)->preempt_count = PREEMPT_DISABLED; \
+ } while (0)
+ 
+ static __always_inline void set_preempt_need_resched(void)
+diff --git a/include/clocksource/timer-ti-dm.h b/include/clocksource/timer-ti-dm.h
+index 4c61dade8835f..f6da8a1326398 100644
+--- a/include/clocksource/timer-ti-dm.h
++++ b/include/clocksource/timer-ti-dm.h
+@@ -74,6 +74,7 @@
+ #define OMAP_TIMER_ERRATA_I103_I767			0x80000000
+ 
+ struct timer_regs {
++	u32 ocp_cfg;
+ 	u32 tidr;
+ 	u32 tier;
+ 	u32 twer;
+diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
+index 0a288dddcf5be..25806141db591 100644
+--- a/include/crypto/internal/hash.h
++++ b/include/crypto/internal/hash.h
+@@ -75,13 +75,7 @@ void crypto_unregister_ahashes(struct ahash_alg *algs, int count);
+ int ahash_register_instance(struct crypto_template *tmpl,
+ 			    struct ahash_instance *inst);
+ 
+-int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
+-		    unsigned int keylen);
+-
+-static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
+-{
+-	return alg->setkey != shash_no_setkey;
+-}
++bool crypto_shash_alg_has_setkey(struct shash_alg *alg);
+ 
+ static inline bool crypto_shash_alg_needs_key(struct shash_alg *alg)
+ {
+diff --git a/include/dt-bindings/clock/imx8mq-clock.h b/include/dt-bindings/clock/imx8mq-clock.h
+index 9b8045d75b8b6..da62c9f61371b 100644
+--- a/include/dt-bindings/clock/imx8mq-clock.h
++++ b/include/dt-bindings/clock/imx8mq-clock.h
+@@ -405,25 +405,6 @@
+ 
+ #define IMX8MQ_VIDEO2_PLL1_REF_SEL		266
+ 
+-#define IMX8MQ_SYS1_PLL_40M_CG			267
+-#define IMX8MQ_SYS1_PLL_80M_CG			268
+-#define IMX8MQ_SYS1_PLL_100M_CG			269
+-#define IMX8MQ_SYS1_PLL_133M_CG			270
+-#define IMX8MQ_SYS1_PLL_160M_CG			271
+-#define IMX8MQ_SYS1_PLL_200M_CG			272
+-#define IMX8MQ_SYS1_PLL_266M_CG			273
+-#define IMX8MQ_SYS1_PLL_400M_CG			274
+-#define IMX8MQ_SYS1_PLL_800M_CG			275
+-#define IMX8MQ_SYS2_PLL_50M_CG			276
+-#define IMX8MQ_SYS2_PLL_100M_CG			277
+-#define IMX8MQ_SYS2_PLL_125M_CG			278
+-#define IMX8MQ_SYS2_PLL_166M_CG			279
+-#define IMX8MQ_SYS2_PLL_200M_CG			280
+-#define IMX8MQ_SYS2_PLL_250M_CG			281
+-#define IMX8MQ_SYS2_PLL_333M_CG			282
+-#define IMX8MQ_SYS2_PLL_500M_CG			283
+-#define IMX8MQ_SYS2_PLL_1000M_CG		284
+-
+ #define IMX8MQ_CLK_GPU_CORE			285
+ #define IMX8MQ_CLK_GPU_SHADER			286
+ #define IMX8MQ_CLK_M4_CORE			287
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index c6d7653829264..23b7a73cd7575 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -38,9 +38,6 @@
+ #define bio_offset(bio)		bio_iter_offset((bio), (bio)->bi_iter)
+ #define bio_iovec(bio)		bio_iter_iovec((bio), (bio)->bi_iter)
+ 
+-#define bio_multiple_segments(bio)				\
+-	((bio)->bi_iter.bi_size != bio_iovec(bio).bv_len)
+-
+ #define bvec_iter_sectors(iter)	((iter).bi_size >> 9)
+ #define bvec_iter_end_sector(iter) ((iter).bi_sector + bvec_iter_sectors((iter)))
+ 
+@@ -252,7 +249,7 @@ static inline void bio_clear_flag(struct bio *bio, unsigned int bit)
+ 
+ static inline void bio_get_first_bvec(struct bio *bio, struct bio_vec *bv)
+ {
+-	*bv = bio_iovec(bio);
++	*bv = mp_bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
+ }
+ 
+ static inline void bio_get_last_bvec(struct bio *bio, struct bio_vec *bv)
+@@ -260,10 +257,9 @@ static inline void bio_get_last_bvec(struct bio *bio, struct bio_vec *bv)
+ 	struct bvec_iter iter = bio->bi_iter;
+ 	int idx;
+ 
+-	if (unlikely(!bio_multiple_segments(bio))) {
+-		*bv = bio_iovec(bio);
+-		return;
+-	}
++	bio_get_first_bvec(bio, bv);
++	if (bv->bv_len == bio->bi_iter.bi_size)
++		return;		/* this bio only has a single bvec */
+ 
+ 	bio_advance_iter(bio, &iter, iter.bi_size);
+ 
+diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
+index 86d143db65231..83a3ebff74560 100644
+--- a/include/linux/clocksource.h
++++ b/include/linux/clocksource.h
+@@ -131,7 +131,7 @@ struct clocksource {
+ #define CLOCK_SOURCE_UNSTABLE			0x40
+ #define CLOCK_SOURCE_SUSPEND_NONSTOP		0x80
+ #define CLOCK_SOURCE_RESELECT			0x100
+-
++#define CLOCK_SOURCE_VERIFY_PERCPU		0x200
+ /* simplify initialization of mask field */
+ #define CLOCKSOURCE_MASK(bits) GENMASK_ULL((bits) - 1, 0)
+ 
+diff --git a/include/linux/cred.h b/include/linux/cred.h
+index 18639c069263f..ad160e5fe5c64 100644
+--- a/include/linux/cred.h
++++ b/include/linux/cred.h
+@@ -144,6 +144,7 @@ struct cred {
+ #endif
+ 	struct user_struct *user;	/* real user ID subscription */
+ 	struct user_namespace *user_ns; /* user_ns the caps and keyrings are relative to. */
++	struct ucounts *ucounts;
+ 	struct group_info *group_info;	/* supplementary groups for euid/fsgid */
+ 	/* RCU deletion */
+ 	union {
+@@ -170,6 +171,7 @@ extern int set_security_override_from_ctx(struct cred *, const char *);
+ extern int set_create_files_as(struct cred *, struct inode *);
+ extern int cred_fscmp(const struct cred *, const struct cred *);
+ extern void __init cred_init(void);
++extern int set_cred_ucounts(struct cred *);
+ 
+ /*
+  * check for validity of credentials
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index ff55be0117397..e72787731a5b2 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -7,43 +7,37 @@
+ 
+ #include <linux/fs.h> /* only for vma_is_dax() */
+ 
+-extern vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
+-extern int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+-			 pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
+-			 struct vm_area_struct *vma);
+-extern void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd);
+-extern int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+-			 pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
+-			 struct vm_area_struct *vma);
++vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
++int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
++		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
++		  struct vm_area_struct *vma);
++void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd);
++int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
++		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
++		  struct vm_area_struct *vma);
+ 
+ #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+-extern void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud);
++void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud);
+ #else
+ static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
+ {
+ }
+ #endif
+ 
+-extern vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd);
+-extern struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
+-					  unsigned long addr,
+-					  pmd_t *pmd,
+-					  unsigned int flags);
+-extern bool madvise_free_huge_pmd(struct mmu_gather *tlb,
+-			struct vm_area_struct *vma,
+-			pmd_t *pmd, unsigned long addr, unsigned long next);
+-extern int zap_huge_pmd(struct mmu_gather *tlb,
+-			struct vm_area_struct *vma,
+-			pmd_t *pmd, unsigned long addr);
+-extern int zap_huge_pud(struct mmu_gather *tlb,
+-			struct vm_area_struct *vma,
+-			pud_t *pud, unsigned long addr);
+-extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+-			 unsigned long new_addr,
+-			 pmd_t *old_pmd, pmd_t *new_pmd);
+-extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+-			unsigned long addr, pgprot_t newprot,
+-			unsigned long cp_flags);
++vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd);
++struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
++				   unsigned long addr, pmd_t *pmd,
++				   unsigned int flags);
++bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
++			   pmd_t *pmd, unsigned long addr, unsigned long next);
++int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, pmd_t *pmd,
++		 unsigned long addr);
++int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma, pud_t *pud,
++		 unsigned long addr);
++bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
++		   unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd);
++int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
++		    pgprot_t newprot, unsigned long cp_flags);
+ vm_fault_t vmf_insert_pfn_pmd_prot(struct vm_fault *vmf, pfn_t pfn,
+ 				   pgprot_t pgprot, bool write);
+ 
+@@ -84,6 +78,7 @@ static inline vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn,
+ }
+ 
+ enum transparent_hugepage_flag {
++	TRANSPARENT_HUGEPAGE_NEVER_DAX,
+ 	TRANSPARENT_HUGEPAGE_FLAG,
+ 	TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
+ 	TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
+@@ -100,13 +95,13 @@ enum transparent_hugepage_flag {
+ struct kobject;
+ struct kobj_attribute;
+ 
+-extern ssize_t single_hugepage_flag_store(struct kobject *kobj,
+-				 struct kobj_attribute *attr,
+-				 const char *buf, size_t count,
+-				 enum transparent_hugepage_flag flag);
+-extern ssize_t single_hugepage_flag_show(struct kobject *kobj,
+-				struct kobj_attribute *attr, char *buf,
+-				enum transparent_hugepage_flag flag);
++ssize_t single_hugepage_flag_store(struct kobject *kobj,
++				   struct kobj_attribute *attr,
++				   const char *buf, size_t count,
++				   enum transparent_hugepage_flag flag);
++ssize_t single_hugepage_flag_show(struct kobject *kobj,
++				  struct kobj_attribute *attr, char *buf,
++				  enum transparent_hugepage_flag flag);
+ extern struct kobj_attribute shmem_enabled_attr;
+ 
+ #define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
+@@ -123,29 +118,53 @@ extern struct kobj_attribute shmem_enabled_attr;
+ 
+ extern unsigned long transparent_hugepage_flags;
+ 
++static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
++		unsigned long haddr)
++{
++	/* Don't have to check pgoff for anonymous vma */
++	if (!vma_is_anonymous(vma)) {
++		if (!IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
++				HPAGE_PMD_NR))
++			return false;
++	}
++
++	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
++		return false;
++	return true;
++}
++
++static inline bool transhuge_vma_enabled(struct vm_area_struct *vma,
++					  unsigned long vm_flags)
++{
++	/* Explicitly disabled through madvise. */
++	if ((vm_flags & VM_NOHUGEPAGE) ||
++	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
++		return false;
++	return true;
++}
++
+ /*
+  * to be used on vmas which are known to support THP.
+- * Use transparent_hugepage_enabled otherwise
++ * Use transparent_hugepage_active otherwise
+  */
+ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
+ {
+-	if (vma->vm_flags & VM_NOHUGEPAGE)
++
++	/*
++	 * If the hardware/firmware marked hugepage support disabled.
++	 */
++	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_NEVER_DAX))
+ 		return false;
+ 
+-	if (vma_is_temporary_stack(vma))
++	if (!transhuge_vma_enabled(vma, vma->vm_flags))
+ 		return false;
+ 
+-	if (test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
++	if (vma_is_temporary_stack(vma))
+ 		return false;
+ 
+ 	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_FLAG))
+ 		return true;
+-	/*
+-	 * For dax vmas, try to always use hugepage mappings. If the kernel does
+-	 * not support hugepages, fsdax mappings will fallback to PAGE_SIZE
+-	 * mappings, and device-dax namespaces, that try to guarantee a given
+-	 * mapping size, will fail to enable
+-	 */
++
+ 	if (vma_is_dax(vma))
+ 		return true;
+ 
+@@ -156,35 +175,17 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
+ 	return false;
+ }
+ 
+-bool transparent_hugepage_enabled(struct vm_area_struct *vma);
+-
+-#define HPAGE_CACHE_INDEX_MASK (HPAGE_PMD_NR - 1)
+-
+-static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
+-		unsigned long haddr)
+-{
+-	/* Don't have to check pgoff for anonymous vma */
+-	if (!vma_is_anonymous(vma)) {
+-		if (((vma->vm_start >> PAGE_SHIFT) & HPAGE_CACHE_INDEX_MASK) !=
+-			(vma->vm_pgoff & HPAGE_CACHE_INDEX_MASK))
+-			return false;
+-	}
+-
+-	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
+-		return false;
+-	return true;
+-}
++bool transparent_hugepage_active(struct vm_area_struct *vma);
+ 
+ #define transparent_hugepage_use_zero_page()				\
+ 	(transparent_hugepage_flags &					\
+ 	 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
+ 
+-extern unsigned long thp_get_unmapped_area(struct file *filp,
+-		unsigned long addr, unsigned long len, unsigned long pgoff,
+-		unsigned long flags);
++unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
++		unsigned long len, unsigned long pgoff, unsigned long flags);
+ 
+-extern void prep_transhuge_page(struct page *page);
+-extern void free_transhuge_page(struct page *page);
++void prep_transhuge_page(struct page *page);
++void free_transhuge_page(struct page *page);
+ bool is_transparent_hugepage(struct page *page);
+ 
+ bool can_split_huge_page(struct page *page, int *pextra_pins);
+@@ -222,16 +223,12 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
+ 			__split_huge_pud(__vma, __pud, __address);	\
+ 	}  while (0)
+ 
+-extern int hugepage_madvise(struct vm_area_struct *vma,
+-			    unsigned long *vm_flags, int advice);
+-extern void vma_adjust_trans_huge(struct vm_area_struct *vma,
+-				    unsigned long start,
+-				    unsigned long end,
+-				    long adjust_next);
+-extern spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd,
+-		struct vm_area_struct *vma);
+-extern spinlock_t *__pud_trans_huge_lock(pud_t *pud,
+-		struct vm_area_struct *vma);
++int hugepage_madvise(struct vm_area_struct *vma, unsigned long *vm_flags,
++		     int advice);
++void vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start,
++			   unsigned long end, long adjust_next);
++spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma);
++spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma);
+ 
+ static inline int is_swap_pmd(pmd_t pmd)
+ {
+@@ -294,7 +291,7 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
+ struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
+ 		pud_t *pud, int flags, struct dev_pagemap **pgmap);
+ 
+-extern vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t orig_pmd);
++vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t orig_pmd);
+ 
+ extern struct page *huge_zero_page;
+ extern unsigned long huge_zero_pfn;
+@@ -365,7 +362,7 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
+ 	return false;
+ }
+ 
+-static inline bool transparent_hugepage_enabled(struct vm_area_struct *vma)
++static inline bool transparent_hugepage_active(struct vm_area_struct *vma)
+ {
+ 	return false;
+ }
+@@ -376,6 +373,12 @@ static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
+ 	return false;
+ }
+ 
++static inline bool transhuge_vma_enabled(struct vm_area_struct *vma,
++					  unsigned long vm_flags)
++{
++	return false;
++}
++
+ static inline void prep_transhuge_page(struct page *page) {}
+ 
+ static inline bool is_transparent_hugepage(struct page *page)
+diff --git a/include/linux/iio/common/cros_ec_sensors_core.h b/include/linux/iio/common/cros_ec_sensors_core.h
+index c9b80be82440f..f82857bd693fd 100644
+--- a/include/linux/iio/common/cros_ec_sensors_core.h
++++ b/include/linux/iio/common/cros_ec_sensors_core.h
+@@ -77,7 +77,7 @@ struct cros_ec_sensors_core_state {
+ 		u16 scale;
+ 	} calib[CROS_EC_SENSOR_MAX_AXIS];
+ 	s8 sign[CROS_EC_SENSOR_MAX_AXIS];
+-	u8 samples[CROS_EC_SAMPLE_SIZE];
++	u8 samples[CROS_EC_SAMPLE_SIZE] __aligned(8);
+ 
+ 	int (*read_ec_sensors_data)(struct iio_dev *indio_dev,
+ 				    unsigned long scan_mask, s16 *data);
+diff --git a/include/linux/prandom.h b/include/linux/prandom.h
+index bbf4b4ad61dfd..056d31317e499 100644
+--- a/include/linux/prandom.h
++++ b/include/linux/prandom.h
+@@ -111,7 +111,7 @@ static inline u32 __seed(u32 x, u32 m)
+  */
+ static inline void prandom_seed_state(struct rnd_state *state, u64 seed)
+ {
+-	u32 i = (seed >> 32) ^ (seed << 10) ^ seed;
++	u32 i = ((seed >> 32) ^ (seed << 10) ^ seed) & 0xffffffffUL;
+ 
+ 	state->s1 = __seed(i,   2U);
+ 	state->s2 = __seed(i,   8U);
+diff --git a/include/linux/swap.h b/include/linux/swap.h
+index fbc6805358da0..dfabf4660a670 100644
+--- a/include/linux/swap.h
++++ b/include/linux/swap.h
+@@ -503,6 +503,15 @@ static inline struct swap_info_struct *swp_swap_info(swp_entry_t entry)
+ 	return NULL;
+ }
+ 
++static inline struct swap_info_struct *get_swap_device(swp_entry_t entry)
++{
++	return NULL;
++}
++
++static inline void put_swap_device(struct swap_info_struct *si)
++{
++}
++
+ #define swap_address_space(entry)		(NULL)
+ #define get_nr_swap_pages()			0L
+ #define total_swap_pages			0L
+diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
+index 966ed89803274..e4c5df71f0e74 100644
+--- a/include/linux/tracepoint.h
++++ b/include/linux/tracepoint.h
+@@ -41,7 +41,17 @@ extern int
+ tracepoint_probe_register_prio(struct tracepoint *tp, void *probe, void *data,
+ 			       int prio);
+ extern int
++tracepoint_probe_register_prio_may_exist(struct tracepoint *tp, void *probe, void *data,
++					 int prio);
++extern int
+ tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data);
++static inline int
++tracepoint_probe_register_may_exist(struct tracepoint *tp, void *probe,
++				    void *data)
++{
++	return tracepoint_probe_register_prio_may_exist(tp, probe, data,
++							TRACEPOINT_DEFAULT_PRIO);
++}
+ extern void
+ for_each_kernel_tracepoint(void (*fct)(struct tracepoint *tp, void *priv),
+ 		void *priv);
+diff --git a/include/linux/user_namespace.h b/include/linux/user_namespace.h
+index 7616c7bf4b241..e1bd560da1cd4 100644
+--- a/include/linux/user_namespace.h
++++ b/include/linux/user_namespace.h
+@@ -101,11 +101,15 @@ struct ucounts {
+ };
+ 
+ extern struct user_namespace init_user_ns;
++extern struct ucounts init_ucounts;
+ 
+ bool setup_userns_sysctls(struct user_namespace *ns);
+ void retire_userns_sysctls(struct user_namespace *ns);
+ struct ucounts *inc_ucount(struct user_namespace *ns, kuid_t uid, enum ucount_type type);
+ void dec_ucount(struct ucounts *ucounts, enum ucount_type type);
++struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid);
++struct ucounts *get_ucounts(struct ucounts *ucounts);
++void put_ucounts(struct ucounts *ucounts);
+ 
+ #ifdef CONFIG_USER_NS
+ 
+diff --git a/include/media/hevc-ctrls.h b/include/media/hevc-ctrls.h
+index 1009cf0891cc6..a3b650ab00f66 100644
+--- a/include/media/hevc-ctrls.h
++++ b/include/media/hevc-ctrls.h
+@@ -81,7 +81,7 @@ struct v4l2_ctrl_hevc_sps {
+ 	__u64	flags;
+ };
+ 
+-#define V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT		(1ULL << 0)
++#define V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT_ENABLED	(1ULL << 0)
+ #define V4L2_HEVC_PPS_FLAG_OUTPUT_FLAG_PRESENT			(1ULL << 1)
+ #define V4L2_HEVC_PPS_FLAG_SIGN_DATA_HIDING_ENABLED		(1ULL << 2)
+ #define V4L2_HEVC_PPS_FLAG_CABAC_INIT_PRESENT			(1ULL << 3)
+@@ -160,6 +160,7 @@ struct v4l2_hevc_pred_weight_table {
+ #define V4L2_HEVC_SLICE_PARAMS_FLAG_USE_INTEGER_MV		(1ULL << 6)
+ #define V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_DEBLOCKING_FILTER_DISABLED (1ULL << 7)
+ #define V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_LOOP_FILTER_ACROSS_SLICES_ENABLED (1ULL << 8)
++#define V4L2_HEVC_SLICE_PARAMS_FLAG_DEPENDENT_SLICE_SEGMENT	(1ULL << 9)
+ 
+ struct v4l2_ctrl_hevc_slice_params {
+ 	__u32	bit_size;
+diff --git a/include/media/media-dev-allocator.h b/include/media/media-dev-allocator.h
+index b35ea6062596b..2ab54d426c644 100644
+--- a/include/media/media-dev-allocator.h
++++ b/include/media/media-dev-allocator.h
+@@ -19,7 +19,7 @@
+ 
+ struct usb_device;
+ 
+-#if defined(CONFIG_MEDIA_CONTROLLER) && defined(CONFIG_USB)
++#if defined(CONFIG_MEDIA_CONTROLLER) && IS_ENABLED(CONFIG_USB)
+ /**
+  * media_device_usb_allocate() - Allocate and return struct &media device
+  *
+diff --git a/include/media/v4l2-async.h b/include/media/v4l2-async.h
+index d6e31234826f3..92cd9f038fed8 100644
+--- a/include/media/v4l2-async.h
++++ b/include/media/v4l2-async.h
+@@ -189,9 +189,11 @@ v4l2_async_notifier_add_fwnode_subdev(struct v4l2_async_notifier *notifier,
+  *
+  * @notif: pointer to &struct v4l2_async_notifier
+  * @endpoint: local endpoint pointing to the remote sub-device to be matched
+- * @asd: Async sub-device struct allocated by the caller. The &struct
+- *	 v4l2_async_subdev shall be the first member of the driver's async
+- *	 sub-device struct, i.e. both begin at the same memory address.
++ * @asd_struct_size: size of the driver's async sub-device struct, including
++ *		     sizeof(struct v4l2_async_subdev). The &struct
++ *		     v4l2_async_subdev shall be the first member of
++ *		     the driver's async sub-device struct, i.e. both
++ *		     begin at the same memory address.
+  *
+  * Gets the remote endpoint of a given local endpoint, set it up for fwnode
+  * matching and adds the async sub-device to the notifier's @asd_list. The
+@@ -199,13 +201,12 @@ v4l2_async_notifier_add_fwnode_subdev(struct v4l2_async_notifier *notifier,
+  * notifier cleanup time.
+  *
+  * This is just like @v4l2_async_notifier_add_fwnode_subdev, but with the
+- * exception that the fwnode refers to a local endpoint, not the remote one, and
+- * the function relies on the caller to allocate the async sub-device struct.
++ * exception that the fwnode refers to a local endpoint, not the remote one.
+  */
+-int
++struct v4l2_async_subdev *
+ v4l2_async_notifier_add_fwnode_remote_subdev(struct v4l2_async_notifier *notif,
+ 					     struct fwnode_handle *endpoint,
+-					     struct v4l2_async_subdev *asd);
++					     unsigned int asd_struct_size);
+ 
+ /**
+  * v4l2_async_notifier_add_i2c_subdev - Allocate and add an i2c async
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 6da4b3c5dd55d..243de74e118e7 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -1773,13 +1773,15 @@ struct hci_cp_ext_adv_set {
+ 	__u8  max_events;
+ } __packed;
+ 
++#define HCI_MAX_EXT_AD_LENGTH	251
++
+ #define HCI_OP_LE_SET_EXT_ADV_DATA		0x2037
+ struct hci_cp_le_set_ext_adv_data {
+ 	__u8  handle;
+ 	__u8  operation;
+ 	__u8  frag_pref;
+ 	__u8  length;
+-	__u8  data[HCI_MAX_AD_LENGTH];
++	__u8  data[];
+ } __packed;
+ 
+ #define HCI_OP_LE_SET_EXT_SCAN_RSP_DATA		0x2038
+@@ -1788,7 +1790,7 @@ struct hci_cp_le_set_ext_scan_rsp_data {
+ 	__u8  operation;
+ 	__u8  frag_pref;
+ 	__u8  length;
+-	__u8  data[HCI_MAX_AD_LENGTH];
++	__u8  data[];
+ } __packed;
+ 
+ #define LE_SET_ADV_DATA_OP_COMPLETE	0x03
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index df611c8b6b595..e534dff2874e1 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -226,9 +226,9 @@ struct adv_info {
+ 	__u16	remaining_time;
+ 	__u16	duration;
+ 	__u16	adv_data_len;
+-	__u8	adv_data[HCI_MAX_AD_LENGTH];
++	__u8	adv_data[HCI_MAX_EXT_AD_LENGTH];
+ 	__u16	scan_rsp_len;
+-	__u8	scan_rsp_data[HCI_MAX_AD_LENGTH];
++	__u8	scan_rsp_data[HCI_MAX_EXT_AD_LENGTH];
+ 	__s8	tx_power;
+ 	bdaddr_t	random_addr;
+ 	bool 		rpa_expired;
+@@ -523,9 +523,9 @@ struct hci_dev {
+ 	DECLARE_BITMAP(dev_flags, __HCI_NUM_FLAGS);
+ 
+ 	__s8			adv_tx_power;
+-	__u8			adv_data[HCI_MAX_AD_LENGTH];
++	__u8			adv_data[HCI_MAX_EXT_AD_LENGTH];
+ 	__u8			adv_data_len;
+-	__u8			scan_rsp_data[HCI_MAX_AD_LENGTH];
++	__u8			scan_rsp_data[HCI_MAX_EXT_AD_LENGTH];
+ 	__u8			scan_rsp_data_len;
+ 
+ 	struct list_head	adv_instances;
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 2d6b985d11cca..5538e54d4620c 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -31,6 +31,7 @@
+ #include <net/flow.h>
+ #include <net/flow_dissector.h>
+ #include <net/netns/hash.h>
++#include <net/lwtunnel.h>
+ 
+ #define IPV4_MAX_PMTU		65535U		/* RFC 2675, Section 5.1 */
+ #define IPV4_MIN_MTU		68			/* RFC 791 */
+@@ -445,22 +446,25 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
+ 
+ 	/* 'forwarding = true' case should always honour route mtu */
+ 	mtu = dst_metric_raw(dst, RTAX_MTU);
+-	if (mtu)
+-		return mtu;
++	if (!mtu)
++		mtu = min(READ_ONCE(dst->dev->mtu), IP_MAX_MTU);
+ 
+-	return min(READ_ONCE(dst->dev->mtu), IP_MAX_MTU);
++	return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
+ }
+ 
+ static inline unsigned int ip_skb_dst_mtu(struct sock *sk,
+ 					  const struct sk_buff *skb)
+ {
++	unsigned int mtu;
++
+ 	if (!sk || !sk_fullsock(sk) || ip_sk_use_pmtu(sk)) {
+ 		bool forwarding = IPCB(skb)->flags & IPSKB_FORWARDED;
+ 
+ 		return ip_dst_mtu_maybe_forward(skb_dst(skb), forwarding);
+ 	}
+ 
+-	return min(READ_ONCE(skb_dst(skb)->dev->mtu), IP_MAX_MTU);
++	mtu = min(READ_ONCE(skb_dst(skb)->dev->mtu), IP_MAX_MTU);
++	return mtu - lwtunnel_headroom(skb_dst(skb)->lwtstate, mtu);
+ }
+ 
+ struct dst_metrics *ip_fib_metrics_init(struct net *net, struct nlattr *fc_mx,
+diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h
+index 2a5277758379e..37a7fb1969d6c 100644
+--- a/include/net/ip6_route.h
++++ b/include/net/ip6_route.h
+@@ -264,11 +264,18 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 
+ static inline int ip6_skb_dst_mtu(struct sk_buff *skb)
+ {
++	int mtu;
++
+ 	struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ?
+ 				inet6_sk(skb->sk) : NULL;
+ 
+-	return (np && np->pmtudisc >= IPV6_PMTUDISC_PROBE) ?
+-	       skb_dst(skb)->dev->mtu : dst_mtu(skb_dst(skb));
++	if (np && np->pmtudisc >= IPV6_PMTUDISC_PROBE) {
++		mtu = READ_ONCE(skb_dst(skb)->dev->mtu);
++		mtu -= lwtunnel_headroom(skb_dst(skb)->lwtstate, mtu);
++	} else
++		mtu = dst_mtu(skb_dst(skb));
++
++	return mtu;
+ }
+ 
+ static inline bool ip6_sk_accept_pmtu(const struct sock *sk)
+@@ -316,7 +323,7 @@ static inline unsigned int ip6_dst_mtu_forward(const struct dst_entry *dst)
+ 	if (dst_metric_locked(dst, RTAX_MTU)) {
+ 		mtu = dst_metric_raw(dst, RTAX_MTU);
+ 		if (mtu)
+-			return mtu;
++			goto out;
+ 	}
+ 
+ 	mtu = IPV6_MIN_MTU;
+@@ -326,7 +333,8 @@ static inline unsigned int ip6_dst_mtu_forward(const struct dst_entry *dst)
+ 		mtu = idev->cnf.mtu6;
+ 	rcu_read_unlock();
+ 
+-	return mtu;
++out:
++	return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
+ }
+ 
+ u32 ip6_mtu_from_fib6(const struct fib6_result *res,
+diff --git a/include/net/macsec.h b/include/net/macsec.h
+index 52874cdfe2260..d6fa6b97f6efa 100644
+--- a/include/net/macsec.h
++++ b/include/net/macsec.h
+@@ -241,7 +241,7 @@ struct macsec_context {
+ 	struct macsec_rx_sc *rx_sc;
+ 	struct {
+ 		unsigned char assoc_num;
+-		u8 key[MACSEC_KEYID_LEN];
++		u8 key[MACSEC_MAX_KEY_LEN];
+ 		union {
+ 			struct macsec_rx_sa *rx_sa;
+ 			struct macsec_tx_sa *tx_sa;
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 4dd2c9e34976e..f8631ad3c8686 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -163,6 +163,12 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
+ 		if (spin_trylock(&qdisc->seqlock))
+ 			goto nolock_empty;
+ 
++		/* Paired with smp_mb__after_atomic() to make sure
++		 * STATE_MISSED checking is synchronized with clearing
++		 * in pfifo_fast_dequeue().
++		 */
++		smp_mb__before_atomic();
++
+ 		/* If the MISSED flag is set, it means other thread has
+ 		 * set the MISSED flag before second spin_trylock(), so
+ 		 * we can return false here to avoid multi cpus doing
+@@ -180,6 +186,12 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
+ 		 */
+ 		set_bit(__QDISC_STATE_MISSED, &qdisc->state);
+ 
++		/* spin_trylock() only has load-acquire semantic, so use
++		 * smp_mb__after_atomic() to ensure STATE_MISSED is set
++		 * before doing the second spin_trylock().
++		 */
++		smp_mb__after_atomic();
++
+ 		/* Retry again in case other CPU may not see the new flag
+ 		 * after it releases the lock at the end of qdisc_run_end().
+ 		 */
+diff --git a/include/net/tc_act/tc_vlan.h b/include/net/tc_act/tc_vlan.h
+index f051046ba0344..f94b8bc26f9ec 100644
+--- a/include/net/tc_act/tc_vlan.h
++++ b/include/net/tc_act/tc_vlan.h
+@@ -16,6 +16,7 @@ struct tcf_vlan_params {
+ 	u16               tcfv_push_vid;
+ 	__be16            tcfv_push_proto;
+ 	u8                tcfv_push_prio;
++	bool              tcfv_push_prio_exists;
+ 	struct rcu_head   rcu;
+ };
+ 
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index c58a6d4eb6103..6232a5f048bde 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1546,6 +1546,7 @@ void xfrm_sad_getinfo(struct net *net, struct xfrmk_sadinfo *si);
+ void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si);
+ u32 xfrm_replay_seqhi(struct xfrm_state *x, __be32 net_seq);
+ int xfrm_init_replay(struct xfrm_state *x);
++u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu);
+ u32 xfrm_state_mtu(struct xfrm_state *x, int mtu);
+ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload);
+ int xfrm_init_state(struct xfrm_state *x);
+diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
+index eaa8386dbc630..7a9a23e7a604a 100644
+--- a/include/net/xsk_buff_pool.h
++++ b/include/net/xsk_buff_pool.h
+@@ -147,11 +147,16 @@ static inline bool xp_desc_crosses_non_contig_pg(struct xsk_buff_pool *pool,
+ {
+ 	bool cross_pg = (addr & (PAGE_SIZE - 1)) + len > PAGE_SIZE;
+ 
+-	if (pool->dma_pages_cnt && cross_pg) {
++	if (likely(!cross_pg))
++		return false;
++
++	if (pool->dma_pages_cnt) {
+ 		return !(pool->dma_pages[addr >> PAGE_SHIFT] &
+ 			 XSK_NEXT_PG_CONTIG_MASK);
+ 	}
+-	return false;
++
++	/* skb path */
++	return addr + len > pool->addrs_cnt;
+ }
+ 
+ static inline u64 xp_aligned_extract_addr(struct xsk_buff_pool *pool, u64 addr)
+diff --git a/include/scsi/fc/fc_ms.h b/include/scsi/fc/fc_ms.h
+index 9e273fed0a85f..800d53dc94705 100644
+--- a/include/scsi/fc/fc_ms.h
++++ b/include/scsi/fc/fc_ms.h
+@@ -63,8 +63,8 @@ enum fc_fdmi_hba_attr_type {
+  * HBA Attribute Length
+  */
+ #define FC_FDMI_HBA_ATTR_NODENAME_LEN		8
+-#define FC_FDMI_HBA_ATTR_MANUFACTURER_LEN	80
+-#define FC_FDMI_HBA_ATTR_SERIALNUMBER_LEN	80
++#define FC_FDMI_HBA_ATTR_MANUFACTURER_LEN	64
++#define FC_FDMI_HBA_ATTR_SERIALNUMBER_LEN	64
+ #define FC_FDMI_HBA_ATTR_MODEL_LEN		256
+ #define FC_FDMI_HBA_ATTR_MODELDESCR_LEN		256
+ #define FC_FDMI_HBA_ATTR_HARDWAREVERSION_LEN	256
+diff --git a/init/main.c b/init/main.c
+index b4449544390ca..dd26a42e80a87 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -914,11 +914,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 	 * time - but meanwhile we still have a functioning scheduler.
+ 	 */
+ 	sched_init();
+-	/*
+-	 * Disable preemption - early bootup scheduling is extremely
+-	 * fragile until we cpu_idle() for the first time.
+-	 */
+-	preempt_disable();
++
+ 	if (WARN(!irqs_disabled(),
+ 		 "Interrupts were enabled *very* early, fixing it\n"))
+ 		local_irq_disable();
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index e97724e36dfb5..bf6798fb23319 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -10532,7 +10532,7 @@ static void adjust_subprog_starts(struct bpf_verifier_env *env, u32 off, u32 len
+ 	}
+ }
+ 
+-static void adjust_poke_descs(struct bpf_prog *prog, u32 len)
++static void adjust_poke_descs(struct bpf_prog *prog, u32 off, u32 len)
+ {
+ 	struct bpf_jit_poke_descriptor *tab = prog->aux->poke_tab;
+ 	int i, sz = prog->aux->size_poke_tab;
+@@ -10540,6 +10540,8 @@ static void adjust_poke_descs(struct bpf_prog *prog, u32 len)
+ 
+ 	for (i = 0; i < sz; i++) {
+ 		desc = &tab[i];
++		if (desc->insn_idx <= off)
++			continue;
+ 		desc->insn_idx += len - 1;
+ 	}
+ }
+@@ -10560,7 +10562,7 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
+ 	if (adjust_insn_aux_data(env, new_prog, off, len))
+ 		return NULL;
+ 	adjust_subprog_starts(env, off, len);
+-	adjust_poke_descs(new_prog, len);
++	adjust_poke_descs(new_prog, off, len);
+ 	return new_prog;
+ }
+ 
+diff --git a/kernel/cred.c b/kernel/cred.c
+index 421b1149c6516..098213d4a39c3 100644
+--- a/kernel/cred.c
++++ b/kernel/cred.c
+@@ -60,6 +60,7 @@ struct cred init_cred = {
+ 	.user			= INIT_USER,
+ 	.user_ns		= &init_user_ns,
+ 	.group_info		= &init_groups,
++	.ucounts		= &init_ucounts,
+ };
+ 
+ static inline void set_cred_subscribers(struct cred *cred, int n)
+@@ -119,6 +120,8 @@ static void put_cred_rcu(struct rcu_head *rcu)
+ 	if (cred->group_info)
+ 		put_group_info(cred->group_info);
+ 	free_uid(cred->user);
++	if (cred->ucounts)
++		put_ucounts(cred->ucounts);
+ 	put_user_ns(cred->user_ns);
+ 	kmem_cache_free(cred_jar, cred);
+ }
+@@ -222,6 +225,7 @@ struct cred *cred_alloc_blank(void)
+ #ifdef CONFIG_DEBUG_CREDENTIALS
+ 	new->magic = CRED_MAGIC;
+ #endif
++	new->ucounts = get_ucounts(&init_ucounts);
+ 
+ 	if (security_cred_alloc_blank(new, GFP_KERNEL_ACCOUNT) < 0)
+ 		goto error;
+@@ -284,6 +288,11 @@ struct cred *prepare_creds(void)
+ 
+ 	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
+ 		goto error;
++
++	new->ucounts = get_ucounts(new->ucounts);
++	if (!new->ucounts)
++		goto error;
++
+ 	validate_creds(new);
+ 	return new;
+ 
+@@ -363,6 +372,9 @@ int copy_creds(struct task_struct *p, unsigned long clone_flags)
+ 		ret = create_user_ns(new);
+ 		if (ret < 0)
+ 			goto error_put;
++		ret = set_cred_ucounts(new);
++		if (ret < 0)
++			goto error_put;
+ 	}
+ 
+ #ifdef CONFIG_KEYS
+@@ -653,6 +665,31 @@ int cred_fscmp(const struct cred *a, const struct cred *b)
+ }
+ EXPORT_SYMBOL(cred_fscmp);
+ 
++int set_cred_ucounts(struct cred *new)
++{
++	struct task_struct *task = current;
++	const struct cred *old = task->real_cred;
++	struct ucounts *old_ucounts = new->ucounts;
++
++	if (new->user == old->user && new->user_ns == old->user_ns)
++		return 0;
++
++	/*
++	 * This optimization is needed because alloc_ucounts() uses locks
++	 * for table lookups.
++	 */
++	if (old_ucounts && old_ucounts->ns == new->user_ns && uid_eq(old_ucounts->uid, new->euid))
++		return 0;
++
++	if (!(new->ucounts = alloc_ucounts(new->user_ns, new->euid)))
++		return -EAGAIN;
++
++	if (old_ucounts)
++		put_ucounts(old_ucounts);
++
++	return 0;
++}
++
+ /*
+  * initialise the credentials stuff
+  */
+@@ -719,6 +756,10 @@ struct cred *prepare_kernel_cred(struct task_struct *daemon)
+ 	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
+ 		goto error;
+ 
++	new->ucounts = get_ucounts(new->ucounts);
++	if (!new->ucounts)
++		goto error;
++
+ 	put_cred(old);
+ 	validate_creds(new);
+ 	return new;
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 7c044d377926c..096945ef49ad7 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2392,7 +2392,7 @@ static inline void init_idle_pids(struct task_struct *idle)
+ 	}
+ }
+ 
+-struct task_struct *fork_idle(int cpu)
++struct task_struct * __init fork_idle(int cpu)
+ {
+ 	struct task_struct *task;
+ 	struct kernel_clone_args args = {
+@@ -2960,6 +2960,12 @@ int ksys_unshare(unsigned long unshare_flags)
+ 	if (err)
+ 		goto bad_unshare_cleanup_cred;
+ 
++	if (new_cred) {
++		err = set_cred_ucounts(new_cred);
++		if (err)
++			goto bad_unshare_cleanup_cred;
++	}
++
+ 	if (new_fs || new_fd || do_sysvsem || new_cred || new_nsproxy) {
+ 		if (do_sysvsem) {
+ 			/*
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 36be4364b313a..9825cf89c614d 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -1107,14 +1107,14 @@ static bool __kthread_cancel_work(struct kthread_work *work)
+  * modify @dwork's timer so that it expires after @delay. If @delay is zero,
+  * @work is guaranteed to be queued immediately.
+  *
+- * Return: %true if @dwork was pending and its timer was modified,
+- * %false otherwise.
++ * Return: %false if @dwork was idle and queued, %true otherwise.
+  *
+  * A special case is when the work is being canceled in parallel.
+  * It might be caused either by the real kthread_cancel_delayed_work_sync()
+  * or yet another kthread_mod_delayed_work() call. We let the other command
+- * win and return %false here. The caller is supposed to synchronize these
+- * operations a reasonable way.
++ * win and return %true here. The return value can be used for reference
++ * counting and the number of queued works stays the same. Anyway, the caller
++ * is supposed to synchronize these operations a reasonable way.
+  *
+  * This function is safe to call from any context including IRQ handler.
+  * See __kthread_cancel_work() and kthread_delayed_work_timer_fn()
+@@ -1126,13 +1126,15 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
+ {
+ 	struct kthread_work *work = &dwork->work;
+ 	unsigned long flags;
+-	int ret = false;
++	int ret;
+ 
+ 	raw_spin_lock_irqsave(&worker->lock, flags);
+ 
+ 	/* Do not bother with canceling when never queued. */
+-	if (!work->worker)
++	if (!work->worker) {
++		ret = false;
+ 		goto fast_queue;
++	}
+ 
+ 	/* Work must not be used with >1 worker, see kthread_queue_work() */
+ 	WARN_ON_ONCE(work->worker != worker);
+@@ -1150,8 +1152,11 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
+ 	 * be used for reference counting.
+ 	 */
+ 	kthread_cancel_delayed_work_timer(work, &flags);
+-	if (work->canceling)
++	if (work->canceling) {
++		/* The number of works in the queue does not change. */
++		ret = true;
+ 		goto out;
++	}
+ 	ret = __kthread_cancel_work(work);
+ 
+ fast_queue:
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index cdca007551e71..8ae9d7abebc08 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -2297,7 +2297,56 @@ static void print_lock_class_header(struct lock_class *class, int depth)
+ }
+ 
+ /*
+- * printk the shortest lock dependencies from @start to @end in reverse order:
++ * Dependency path printing:
++ *
++ * After BFS we get a lock dependency path (linked via ->parent of lock_list),
++ * printing out each lock in the dependency path will help on understanding how
++ * the deadlock could happen. Here are some details about dependency path
++ * printing:
++ *
++ * 1)	A lock_list can be either forwards or backwards for a lock dependency,
++ * 	for a lock dependency A -> B, there are two lock_lists:
++ *
++ * 	a)	lock_list in the ->locks_after list of A, whose ->class is B and
++ * 		->links_to is A. In this case, we can say the lock_list is
++ * 		"A -> B" (forwards case).
++ *
++ * 	b)	lock_list in the ->locks_before list of B, whose ->class is A
++ * 		and ->links_to is B. In this case, we can say the lock_list is
++ * 		"B <- A" (bacwards case).
++ *
++ * 	The ->trace of both a) and b) point to the call trace where B was
++ * 	acquired with A held.
++ *
++ * 2)	A "helper" lock_list is introduced during BFS, this lock_list doesn't
++ * 	represent a certain lock dependency, it only provides an initial entry
++ * 	for BFS. For example, BFS may introduce a "helper" lock_list whose
++ * 	->class is A, as a result BFS will search all dependencies starting with
++ * 	A, e.g. A -> B or A -> C.
++ *
++ * 	The notation of a forwards helper lock_list is like "-> A", which means
++ * 	we should search the forwards dependencies starting with "A", e.g A -> B
++ * 	or A -> C.
++ *
++ * 	The notation of a bacwards helper lock_list is like "<- B", which means
++ * 	we should search the backwards dependencies ending with "B", e.g.
++ * 	B <- A or B <- C.
++ */
++
++/*
++ * printk the shortest lock dependencies from @root to @leaf in reverse order.
++ *
++ * We have a lock dependency path as follow:
++ *
++ *    @root                                                                 @leaf
++ *      |                                                                     |
++ *      V                                                                     V
++ *	          ->parent                                   ->parent
++ * | lock_list | <--------- | lock_list | ... | lock_list  | <--------- | lock_list |
++ * |    -> L1  |            | L1 -> L2  | ... |Ln-2 -> Ln-1|            | Ln-1 -> Ln|
++ *
++ * , so it's natural that we start from @leaf and print every ->class and
++ * ->trace until we reach the @root.
+  */
+ static void __used
+ print_shortest_lock_dependencies(struct lock_list *leaf,
+@@ -2325,6 +2374,61 @@ print_shortest_lock_dependencies(struct lock_list *leaf,
+ 	} while (entry && (depth >= 0));
+ }
+ 
++/*
++ * printk the shortest lock dependencies from @leaf to @root.
++ *
++ * We have a lock dependency path (from a backwards search) as follow:
++ *
++ *    @leaf                                                                 @root
++ *      |                                                                     |
++ *      V                                                                     V
++ *	          ->parent                                   ->parent
++ * | lock_list | ---------> | lock_list | ... | lock_list  | ---------> | lock_list |
++ * | L2 <- L1  |            | L3 <- L2  | ... | Ln <- Ln-1 |            |    <- Ln  |
++ *
++ * , so when we iterate from @leaf to @root, we actually print the lock
++ * dependency path L1 -> L2 -> .. -> Ln in the non-reverse order.
++ *
++ * Another thing to notice here is that ->class of L2 <- L1 is L1, while the
++ * ->trace of L2 <- L1 is the call trace of L2, in fact we don't have the call
++ * trace of L1 in the dependency path, which is alright, because most of the
++ * time we can figure out where L1 is held from the call trace of L2.
++ */
++static void __used
++print_shortest_lock_dependencies_backwards(struct lock_list *leaf,
++					   struct lock_list *root)
++{
++	struct lock_list *entry = leaf;
++	const struct lock_trace *trace = NULL;
++	int depth;
++
++	/*compute depth from generated tree by BFS*/
++	depth = get_lock_depth(leaf);
++
++	do {
++		print_lock_class_header(entry->class, depth);
++		if (trace) {
++			printk("%*s ... acquired at:\n", depth, "");
++			print_lock_trace(trace, 2);
++			printk("\n");
++		}
++
++		/*
++		 * Record the pointer to the trace for the next lock_list
++		 * entry, see the comments for the function.
++		 */
++		trace = entry->trace;
++
++		if (depth == 0 && (entry != root)) {
++			printk("lockdep:%s bad path found in chain graph\n", __func__);
++			break;
++		}
++
++		entry = get_lock_parent(entry);
++		depth--;
++	} while (entry && (depth >= 0));
++}
++
+ static void
+ print_irq_lock_scenario(struct lock_list *safe_entry,
+ 			struct lock_list *unsafe_entry,
+@@ -2442,7 +2546,7 @@ print_bad_irq_dependency(struct task_struct *curr,
+ 	prev_root->trace = save_trace();
+ 	if (!prev_root->trace)
+ 		return;
+-	print_shortest_lock_dependencies(backwards_entry, prev_root);
++	print_shortest_lock_dependencies_backwards(backwards_entry, prev_root);
+ 
+ 	pr_warn("\nthe dependencies between the lock to be acquired");
+ 	pr_warn(" and %s-irq-unsafe lock:\n", irqclass);
+@@ -2660,8 +2764,18 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
+ 	 * Step 3: we found a bad match! Now retrieve a lock from the backward
+ 	 * list whose usage mask matches the exclusive usage mask from the
+ 	 * lock found on the forward list.
++	 *
++	 * Note, we should only keep the LOCKF_ENABLED_IRQ_ALL bits, considering
++	 * the follow case:
++	 *
++	 * When trying to add A -> B to the graph, we find that there is a
++	 * hardirq-safe L, that L -> ... -> A, and another hardirq-unsafe M,
++	 * that B -> ... -> M. However M is **softirq-safe**, if we use exact
++	 * invert bits of M's usage_mask, we will find another lock N that is
++	 * **softirq-unsafe** and N -> ... -> A, however N -> .. -> M will not
++	 * cause a inversion deadlock.
+ 	 */
+-	backward_mask = original_mask(target_entry1->class->usage_mask);
++	backward_mask = original_mask(target_entry1->class->usage_mask & LOCKF_ENABLED_IRQ_ALL);
+ 
+ 	ret = find_usage_backwards(&this, backward_mask, &target_entry);
+ 	if (bfs_error(ret)) {
+@@ -4512,7 +4626,7 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
+ 	short curr_inner;
+ 	int depth;
+ 
+-	if (!curr->lockdep_depth || !next_inner || next->trylock)
++	if (!next_inner || next->trylock)
+ 		return 0;
+ 
+ 	if (!next_outer)
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 61e250cdd7c9c..45b60e9974610 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -2837,7 +2837,6 @@ static int __init rcu_spawn_core_kthreads(void)
+ 		  "%s: Could not start rcuc kthread, OOM is now expected behavior\n", __func__);
+ 	return 0;
+ }
+-early_initcall(rcu_spawn_core_kthreads);
+ 
+ /*
+  * Handle any core-RCU processing required by a call_rcu() invocation.
+@@ -4273,6 +4272,7 @@ static int __init rcu_spawn_gp_kthread(void)
+ 	wake_up_process(t);
+ 	rcu_spawn_nocb_kthreads();
+ 	rcu_spawn_boost_kthreads();
++	rcu_spawn_core_kthreads();
+ 	return 0;
+ }
+ early_initcall(rcu_spawn_gp_kthread);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 57b2362518849..679562d2f55d1 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1063,9 +1063,10 @@ static void uclamp_sync_util_min_rt_default(void)
+ static inline struct uclamp_se
+ uclamp_tg_restrict(struct task_struct *p, enum uclamp_id clamp_id)
+ {
++	/* Copy by value as we could modify it */
+ 	struct uclamp_se uc_req = p->uclamp_req[clamp_id];
+ #ifdef CONFIG_UCLAMP_TASK_GROUP
+-	struct uclamp_se uc_max;
++	unsigned int tg_min, tg_max, value;
+ 
+ 	/*
+ 	 * Tasks in autogroups or root task group will be
+@@ -1076,9 +1077,11 @@ uclamp_tg_restrict(struct task_struct *p, enum uclamp_id clamp_id)
+ 	if (task_group(p) == &root_task_group)
+ 		return uc_req;
+ 
+-	uc_max = task_group(p)->uclamp[clamp_id];
+-	if (uc_req.value > uc_max.value || !uc_req.user_defined)
+-		return uc_max;
++	tg_min = task_group(p)->uclamp[UCLAMP_MIN].value;
++	tg_max = task_group(p)->uclamp[UCLAMP_MAX].value;
++	value = uc_req.value;
++	value = clamp(value, tg_min, tg_max);
++	uclamp_se_set(&uc_req, value, false);
+ #endif
+ 
+ 	return uc_req;
+@@ -1277,8 +1280,9 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
+ }
+ 
+ static inline void
+-uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
++uclamp_update_active(struct task_struct *p)
+ {
++	enum uclamp_id clamp_id;
+ 	struct rq_flags rf;
+ 	struct rq *rq;
+ 
+@@ -1298,9 +1302,11 @@ uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
+ 	 * affecting a valid clamp bucket, the next time it's enqueued,
+ 	 * it will already see the updated clamp bucket value.
+ 	 */
+-	if (p->uclamp[clamp_id].active) {
+-		uclamp_rq_dec_id(rq, p, clamp_id);
+-		uclamp_rq_inc_id(rq, p, clamp_id);
++	for_each_clamp_id(clamp_id) {
++		if (p->uclamp[clamp_id].active) {
++			uclamp_rq_dec_id(rq, p, clamp_id);
++			uclamp_rq_inc_id(rq, p, clamp_id);
++		}
+ 	}
+ 
+ 	task_rq_unlock(rq, p, &rf);
+@@ -1308,20 +1314,14 @@ uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
+ 
+ #ifdef CONFIG_UCLAMP_TASK_GROUP
+ static inline void
+-uclamp_update_active_tasks(struct cgroup_subsys_state *css,
+-			   unsigned int clamps)
++uclamp_update_active_tasks(struct cgroup_subsys_state *css)
+ {
+-	enum uclamp_id clamp_id;
+ 	struct css_task_iter it;
+ 	struct task_struct *p;
+ 
+ 	css_task_iter_start(css, 0, &it);
+-	while ((p = css_task_iter_next(&it))) {
+-		for_each_clamp_id(clamp_id) {
+-			if ((0x1 << clamp_id) & clamps)
+-				uclamp_update_active(p, clamp_id);
+-		}
+-	}
++	while ((p = css_task_iter_next(&it)))
++		uclamp_update_active(p);
+ 	css_task_iter_end(&it);
+ }
+ 
+@@ -6512,7 +6512,7 @@ void show_state_filter(unsigned long state_filter)
+  * NOTE: this function does not set the idle thread's NEED_RESCHED
+  * flag, to make booting more robust.
+  */
+-void init_idle(struct task_struct *idle, int cpu)
++void __init init_idle(struct task_struct *idle, int cpu)
+ {
+ 	struct rq *rq = cpu_rq(cpu);
+ 	unsigned long flags;
+@@ -7607,7 +7607,11 @@ static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
+ 
+ #ifdef CONFIG_UCLAMP_TASK_GROUP
+ 	/* Propagate the effective uclamp value for the new group */
++	mutex_lock(&uclamp_mutex);
++	rcu_read_lock();
+ 	cpu_util_update_eff(css);
++	rcu_read_unlock();
++	mutex_unlock(&uclamp_mutex);
+ #endif
+ 
+ 	return 0;
+@@ -7697,6 +7701,9 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
+ 	enum uclamp_id clamp_id;
+ 	unsigned int clamps;
+ 
++	lockdep_assert_held(&uclamp_mutex);
++	SCHED_WARN_ON(!rcu_read_lock_held());
++
+ 	css_for_each_descendant_pre(css, top_css) {
+ 		uc_parent = css_tg(css)->parent
+ 			? css_tg(css)->parent->uclamp : NULL;
+@@ -7729,7 +7736,7 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
+ 		}
+ 
+ 		/* Immediately update descendants RUNNABLE tasks */
+-		uclamp_update_active_tasks(css, clamps);
++		uclamp_update_active_tasks(css);
+ 	}
+ }
+ 
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 8d06d1f4e2f7b..6b98c1fe6e7f8 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2470,6 +2470,8 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
+ 			check_preempt_curr_dl(rq, p, 0);
+ 		else
+ 			resched_curr(rq);
++	} else {
++		update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
+ 	}
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index d6e1c90de570a..3d92de7909bf4 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3141,7 +3141,7 @@ void reweight_task(struct task_struct *p, int prio)
+  *
+  *                     tg->weight * grq->load.weight
+  *   ge->load.weight = -----------------------------               (1)
+- *			  \Sum grq->load.weight
++ *                       \Sum grq->load.weight
+  *
+  * Now, because computing that sum is prohibitively expensive to compute (been
+  * there, done that) we approximate it with this average stuff. The average
+@@ -3155,7 +3155,7 @@ void reweight_task(struct task_struct *p, int prio)
+  *
+  *                     tg->weight * grq->avg.load_avg
+  *   ge->load.weight = ------------------------------              (3)
+- *				tg->load_avg
++ *                             tg->load_avg
+  *
+  * Where: tg->load_avg ~= \Sum grq->avg.load_avg
+  *
+@@ -3171,7 +3171,7 @@ void reweight_task(struct task_struct *p, int prio)
+  *
+  *                     tg->weight * grq->load.weight
+  *   ge->load.weight = ----------------------------- = tg->weight   (4)
+- *			    grp->load.weight
++ *                         grp->load.weight
+  *
+  * That is, the sum collapses because all other CPUs are idle; the UP scenario.
+  *
+@@ -3190,7 +3190,7 @@ void reweight_task(struct task_struct *p, int prio)
+  *
+  *                     tg->weight * grq->load.weight
+  *   ge->load.weight = -----------------------------		   (6)
+- *				tg_load_avg'
++ *                             tg_load_avg'
+  *
+  * Where:
+  *
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index 651218ded9817..d50a31ecedeec 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -179,6 +179,8 @@ struct psi_group psi_system = {
+ 
+ static void psi_avgs_work(struct work_struct *work);
+ 
++static void poll_timer_fn(struct timer_list *t);
++
+ static void group_init(struct psi_group *group)
+ {
+ 	int cpu;
+@@ -198,6 +200,8 @@ static void group_init(struct psi_group *group)
+ 	memset(group->polling_total, 0, sizeof(group->polling_total));
+ 	group->polling_next_update = ULLONG_MAX;
+ 	group->polling_until = 0;
++	init_waitqueue_head(&group->poll_wait);
++	timer_setup(&group->poll_timer, poll_timer_fn, 0);
+ 	rcu_assign_pointer(group->poll_task, NULL);
+ }
+ 
+@@ -1126,9 +1130,7 @@ struct psi_trigger *psi_trigger_create(struct psi_group *group,
+ 			return ERR_CAST(task);
+ 		}
+ 		atomic_set(&group->poll_wakeup, 0);
+-		init_waitqueue_head(&group->poll_wait);
+ 		wake_up_process(task);
+-		timer_setup(&group->poll_timer, poll_timer_fn, 0);
+ 		rcu_assign_pointer(group->poll_task, task);
+ 	}
+ 
+@@ -1180,6 +1182,7 @@ static void psi_trigger_destroy(struct kref *ref)
+ 					group->poll_task,
+ 					lockdep_is_held(&group->trigger_lock));
+ 			rcu_assign_pointer(group->poll_task, NULL);
++			del_timer(&group->poll_timer);
+ 		}
+ 	}
+ 
+@@ -1192,17 +1195,14 @@ static void psi_trigger_destroy(struct kref *ref)
+ 	 */
+ 	synchronize_rcu();
+ 	/*
+-	 * Destroy the kworker after releasing trigger_lock to prevent a
++	 * Stop kthread 'psimon' after releasing trigger_lock to prevent a
+ 	 * deadlock while waiting for psi_poll_work to acquire trigger_lock
+ 	 */
+ 	if (task_to_destroy) {
+ 		/*
+ 		 * After the RCU grace period has expired, the worker
+ 		 * can no longer be found through group->poll_task.
+-		 * But it might have been already scheduled before
+-		 * that - deschedule it cleanly before destroying it.
+ 		 */
+-		del_timer_sync(&group->poll_timer);
+ 		kthread_stop(task_to_destroy);
+ 	}
+ 	kfree(t);
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 49ec096a8aa1f..b5cf418e2e3fe 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2291,13 +2291,20 @@ void __init init_sched_rt_class(void)
+ static void switched_to_rt(struct rq *rq, struct task_struct *p)
+ {
+ 	/*
+-	 * If we are already running, then there's nothing
+-	 * that needs to be done. But if we are not running
+-	 * we may need to preempt the current running task.
+-	 * If that current running task is also an RT task
++	 * If we are running, update the avg_rt tracking, as the running time
++	 * will now on be accounted into the latter.
++	 */
++	if (task_current(rq, p)) {
++		update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
++		return;
++	}
++
++	/*
++	 * If we are not running we may need to preempt the current
++	 * running task. If that current running task is also an RT task
+ 	 * then see if we can move to another run queue.
+ 	 */
+-	if (task_on_rq_queued(p) && rq->curr != p) {
++	if (task_on_rq_queued(p)) {
+ #ifdef CONFIG_SMP
+ 		if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
+ 			rt_queue_push_tasks(rq);
+diff --git a/kernel/smpboot.c b/kernel/smpboot.c
+index f25208e8df836..e4163042c4d66 100644
+--- a/kernel/smpboot.c
++++ b/kernel/smpboot.c
+@@ -33,7 +33,6 @@ struct task_struct *idle_thread_get(unsigned int cpu)
+ 
+ 	if (!tsk)
+ 		return ERR_PTR(-ENOMEM);
+-	init_idle(tsk, cpu);
+ 	return tsk;
+ }
+ 
+diff --git a/kernel/sys.c b/kernel/sys.c
+index a730c03ee607c..0670e824e0197 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -552,6 +552,10 @@ long __sys_setreuid(uid_t ruid, uid_t euid)
+ 	if (retval < 0)
+ 		goto error;
+ 
++	retval = set_cred_ucounts(new);
++	if (retval < 0)
++		goto error;
++
+ 	return commit_creds(new);
+ 
+ error:
+@@ -610,6 +614,10 @@ long __sys_setuid(uid_t uid)
+ 	if (retval < 0)
+ 		goto error;
+ 
++	retval = set_cred_ucounts(new);
++	if (retval < 0)
++		goto error;
++
+ 	return commit_creds(new);
+ 
+ error:
+@@ -685,6 +693,10 @@ long __sys_setresuid(uid_t ruid, uid_t euid, uid_t suid)
+ 	if (retval < 0)
+ 		goto error;
+ 
++	retval = set_cred_ucounts(new);
++	if (retval < 0)
++		goto error;
++
+ 	return commit_creds(new);
+ 
+ error:
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 02441ead3c3bb..74492f08660c4 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -124,6 +124,13 @@ static void __clocksource_change_rating(struct clocksource *cs, int rating);
+ #define WATCHDOG_INTERVAL (HZ >> 1)
+ #define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 4)
+ 
++/*
++ * Maximum permissible delay between two readouts of the watchdog
++ * clocksource surrounding a read of the clocksource being validated.
++ * This delay could be due to SMIs, NMIs, or to VCPU preemptions.
++ */
++#define WATCHDOG_MAX_SKEW (100 * NSEC_PER_USEC)
++
+ static void clocksource_watchdog_work(struct work_struct *work)
+ {
+ 	/*
+@@ -184,12 +191,99 @@ void clocksource_mark_unstable(struct clocksource *cs)
+ 	spin_unlock_irqrestore(&watchdog_lock, flags);
+ }
+ 
++static ulong max_cswd_read_retries = 3;
++module_param(max_cswd_read_retries, ulong, 0644);
++
++static bool cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
++{
++	unsigned int nretries;
++	u64 wd_end, wd_delta;
++	int64_t wd_delay;
++
++	for (nretries = 0; nretries <= max_cswd_read_retries; nretries++) {
++		local_irq_disable();
++		*wdnow = watchdog->read(watchdog);
++		*csnow = cs->read(cs);
++		wd_end = watchdog->read(watchdog);
++		local_irq_enable();
++
++		wd_delta = clocksource_delta(wd_end, *wdnow, watchdog->mask);
++		wd_delay = clocksource_cyc2ns(wd_delta, watchdog->mult,
++					      watchdog->shift);
++		if (wd_delay <= WATCHDOG_MAX_SKEW) {
++			if (nretries > 1 || nretries >= max_cswd_read_retries) {
++				pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
++					smp_processor_id(), watchdog->name, nretries);
++			}
++			return true;
++		}
++	}
++
++	pr_warn("timekeeping watchdog on CPU%d: %s read-back delay of %lldns, attempt %d, marking unstable\n",
++		smp_processor_id(), watchdog->name, wd_delay, nretries);
++	return false;
++}
++
++static u64 csnow_mid;
++static cpumask_t cpus_ahead;
++static cpumask_t cpus_behind;
++
++static void clocksource_verify_one_cpu(void *csin)
++{
++	struct clocksource *cs = (struct clocksource *)csin;
++
++	csnow_mid = cs->read(cs);
++}
++
++static void clocksource_verify_percpu(struct clocksource *cs)
++{
++	int64_t cs_nsec, cs_nsec_max = 0, cs_nsec_min = LLONG_MAX;
++	u64 csnow_begin, csnow_end;
++	int cpu, testcpu;
++	s64 delta;
++
++	cpumask_clear(&cpus_ahead);
++	cpumask_clear(&cpus_behind);
++	preempt_disable();
++	testcpu = smp_processor_id();
++	pr_warn("Checking clocksource %s synchronization from CPU %d.\n", cs->name, testcpu);
++	for_each_online_cpu(cpu) {
++		if (cpu == testcpu)
++			continue;
++		csnow_begin = cs->read(cs);
++		smp_call_function_single(cpu, clocksource_verify_one_cpu, cs, 1);
++		csnow_end = cs->read(cs);
++		delta = (s64)((csnow_mid - csnow_begin) & cs->mask);
++		if (delta < 0)
++			cpumask_set_cpu(cpu, &cpus_behind);
++		delta = (csnow_end - csnow_mid) & cs->mask;
++		if (delta < 0)
++			cpumask_set_cpu(cpu, &cpus_ahead);
++		delta = clocksource_delta(csnow_end, csnow_begin, cs->mask);
++		cs_nsec = clocksource_cyc2ns(delta, cs->mult, cs->shift);
++		if (cs_nsec > cs_nsec_max)
++			cs_nsec_max = cs_nsec;
++		if (cs_nsec < cs_nsec_min)
++			cs_nsec_min = cs_nsec;
++	}
++	preempt_enable();
++	if (!cpumask_empty(&cpus_ahead))
++		pr_warn("        CPUs %*pbl ahead of CPU %d for clocksource %s.\n",
++			cpumask_pr_args(&cpus_ahead), testcpu, cs->name);
++	if (!cpumask_empty(&cpus_behind))
++		pr_warn("        CPUs %*pbl behind CPU %d for clocksource %s.\n",
++			cpumask_pr_args(&cpus_behind), testcpu, cs->name);
++	if (!cpumask_empty(&cpus_ahead) || !cpumask_empty(&cpus_behind))
++		pr_warn("        CPU %d check durations %lldns - %lldns for clocksource %s.\n",
++			testcpu, cs_nsec_min, cs_nsec_max, cs->name);
++}
++
+ static void clocksource_watchdog(struct timer_list *unused)
+ {
+-	struct clocksource *cs;
+ 	u64 csnow, wdnow, cslast, wdlast, delta;
+-	int64_t wd_nsec, cs_nsec;
+ 	int next_cpu, reset_pending;
++	int64_t wd_nsec, cs_nsec;
++	struct clocksource *cs;
+ 
+ 	spin_lock(&watchdog_lock);
+ 	if (!watchdog_running)
+@@ -206,10 +300,11 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 			continue;
+ 		}
+ 
+-		local_irq_disable();
+-		csnow = cs->read(cs);
+-		wdnow = watchdog->read(watchdog);
+-		local_irq_enable();
++		if (!cs_watchdog_read(cs, &csnow, &wdnow)) {
++			/* Clock readout unreliable, so give it up. */
++			__clocksource_unstable(cs);
++			continue;
++		}
+ 
+ 		/* Clocksource initialized ? */
+ 		if (!(cs->flags & CLOCK_SOURCE_WATCHDOG) ||
+@@ -407,6 +502,12 @@ static int __clocksource_watchdog_kthread(void)
+ 	unsigned long flags;
+ 	int select = 0;
+ 
++	/* Do any required per-CPU skew verification. */
++	if (curr_clocksource &&
++	    curr_clocksource->flags & CLOCK_SOURCE_UNSTABLE &&
++	    curr_clocksource->flags & CLOCK_SOURCE_VERIFY_PERCPU)
++		clocksource_verify_percpu(curr_clocksource);
++
+ 	spin_lock_irqsave(&watchdog_lock, flags);
+ 	list_for_each_entry_safe(cs, tmp, &watchdog_list, wd_list) {
+ 		if (cs->flags & CLOCK_SOURCE_UNSTABLE) {
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 01710831fd02f..216329c23f18a 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -2106,7 +2106,8 @@ static int __bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *
+ 	if (prog->aux->max_tp_access > btp->writable_size)
+ 		return -EINVAL;
+ 
+-	return tracepoint_probe_register(tp, (void *)btp->bpf_func, prog);
++	return tracepoint_probe_register_may_exist(tp, (void *)btp->bpf_func,
++						   prog);
+ }
+ 
+ int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *prog)
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 96c3f86b81c5f..0b24938cbe92e 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1539,6 +1539,13 @@ static int contains_operator(char *str)
+ 
+ 	switch (*op) {
+ 	case '-':
++		/*
++		 * Unfortunately, the modifier ".sym-offset"
++		 * can confuse things.
++		 */
++		if (op - str >= 4 && !strncmp(op - 4, ".sym-offset", 11))
++			return FIELD_OP_NONE;
++
+ 		if (*str == '-')
+ 			field_op = FIELD_OP_UNARY_MINUS;
+ 		else
+diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
+index 3e261482296cf..f8b161edca5ea 100644
+--- a/kernel/tracepoint.c
++++ b/kernel/tracepoint.c
+@@ -294,7 +294,8 @@ static void tracepoint_update_call(struct tracepoint *tp, struct tracepoint_func
+  * Add the probe function to a tracepoint.
+  */
+ static int tracepoint_add_func(struct tracepoint *tp,
+-			       struct tracepoint_func *func, int prio)
++			       struct tracepoint_func *func, int prio,
++			       bool warn)
+ {
+ 	struct tracepoint_func *old, *tp_funcs;
+ 	int ret;
+@@ -309,7 +310,7 @@ static int tracepoint_add_func(struct tracepoint *tp,
+ 			lockdep_is_held(&tracepoints_mutex));
+ 	old = func_add(&tp_funcs, func, prio);
+ 	if (IS_ERR(old)) {
+-		WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM);
++		WARN_ON_ONCE(warn && PTR_ERR(old) != -ENOMEM);
+ 		return PTR_ERR(old);
+ 	}
+ 
+@@ -364,6 +365,32 @@ static int tracepoint_remove_func(struct tracepoint *tp,
+ 	return 0;
+ }
+ 
++/**
++ * tracepoint_probe_register_prio_may_exist -  Connect a probe to a tracepoint with priority
++ * @tp: tracepoint
++ * @probe: probe handler
++ * @data: tracepoint data
++ * @prio: priority of this function over other registered functions
++ *
++ * Same as tracepoint_probe_register_prio() except that it will not warn
++ * if the tracepoint is already registered.
++ */
++int tracepoint_probe_register_prio_may_exist(struct tracepoint *tp, void *probe,
++					     void *data, int prio)
++{
++	struct tracepoint_func tp_func;
++	int ret;
++
++	mutex_lock(&tracepoints_mutex);
++	tp_func.func = probe;
++	tp_func.data = data;
++	tp_func.prio = prio;
++	ret = tracepoint_add_func(tp, &tp_func, prio, false);
++	mutex_unlock(&tracepoints_mutex);
++	return ret;
++}
++EXPORT_SYMBOL_GPL(tracepoint_probe_register_prio_may_exist);
++
+ /**
+  * tracepoint_probe_register_prio -  Connect a probe to a tracepoint with priority
+  * @tp: tracepoint
+@@ -387,7 +414,7 @@ int tracepoint_probe_register_prio(struct tracepoint *tp, void *probe,
+ 	tp_func.func = probe;
+ 	tp_func.data = data;
+ 	tp_func.prio = prio;
+-	ret = tracepoint_add_func(tp, &tp_func, prio);
++	ret = tracepoint_add_func(tp, &tp_func, prio, true);
+ 	mutex_unlock(&tracepoints_mutex);
+ 	return ret;
+ }
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index 11b1596e2542a..9894795043c42 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -8,6 +8,12 @@
+ #include <linux/kmemleak.h>
+ #include <linux/user_namespace.h>
+ 
++struct ucounts init_ucounts = {
++	.ns    = &init_user_ns,
++	.uid   = GLOBAL_ROOT_UID,
++	.count = 1,
++};
++
+ #define UCOUNTS_HASHTABLE_BITS 10
+ static struct hlist_head ucounts_hashtable[(1 << UCOUNTS_HASHTABLE_BITS)];
+ static DEFINE_SPINLOCK(ucounts_lock);
+@@ -125,7 +131,15 @@ static struct ucounts *find_ucounts(struct user_namespace *ns, kuid_t uid, struc
+ 	return NULL;
+ }
+ 
+-static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
++static void hlist_add_ucounts(struct ucounts *ucounts)
++{
++	struct hlist_head *hashent = ucounts_hashentry(ucounts->ns, ucounts->uid);
++	spin_lock_irq(&ucounts_lock);
++	hlist_add_head(&ucounts->node, hashent);
++	spin_unlock_irq(&ucounts_lock);
++}
++
++struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
+ {
+ 	struct hlist_head *hashent = ucounts_hashentry(ns, uid);
+ 	struct ucounts *ucounts, *new;
+@@ -160,7 +174,26 @@ static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
+ 	return ucounts;
+ }
+ 
+-static void put_ucounts(struct ucounts *ucounts)
++struct ucounts *get_ucounts(struct ucounts *ucounts)
++{
++	unsigned long flags;
++
++	if (!ucounts)
++		return NULL;
++
++	spin_lock_irqsave(&ucounts_lock, flags);
++	if (ucounts->count == INT_MAX) {
++		WARN_ONCE(1, "ucounts: counter has reached its maximum value");
++		ucounts = NULL;
++	} else {
++		ucounts->count += 1;
++	}
++	spin_unlock_irqrestore(&ucounts_lock, flags);
++
++	return ucounts;
++}
++
++void put_ucounts(struct ucounts *ucounts)
+ {
+ 	unsigned long flags;
+ 
+@@ -194,7 +227,7 @@ struct ucounts *inc_ucount(struct user_namespace *ns, kuid_t uid,
+ {
+ 	struct ucounts *ucounts, *iter, *bad;
+ 	struct user_namespace *tns;
+-	ucounts = get_ucounts(ns, uid);
++	ucounts = alloc_ucounts(ns, uid);
+ 	for (iter = ucounts; iter; iter = tns->ucounts) {
+ 		int max;
+ 		tns = iter->ns;
+@@ -237,6 +270,7 @@ static __init int user_namespace_sysctl_init(void)
+ 	BUG_ON(!user_header);
+ 	BUG_ON(!setup_userns_sysctls(&init_user_ns));
+ #endif
++	hlist_add_ucounts(&init_ucounts);
+ 	return 0;
+ }
+ subsys_initcall(user_namespace_sysctl_init);
+diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
+index ce396ea4de608..8206a13c81ebc 100644
+--- a/kernel/user_namespace.c
++++ b/kernel/user_namespace.c
+@@ -1340,6 +1340,9 @@ static int userns_install(struct nsset *nsset, struct ns_common *ns)
+ 	put_user_ns(cred->user_ns);
+ 	set_cred_user_ns(cred, get_user_ns(user_ns));
+ 
++	if (set_cred_ucounts(cred) < 0)
++		return -EINVAL;
++
+ 	return 0;
+ }
+ 
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index dcf4a9028e165..5b7f88a2876db 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1302,7 +1302,6 @@ config LOCKDEP
+ 	bool
+ 	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
+ 	select STACKTRACE
+-	depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
+ 	select KALLSYMS
+ 	select KALLSYMS_ALL
+ 
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index f0b2ccb1bb018..537bfdc8cd095 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -434,7 +434,7 @@ int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes)
+ 	int err;
+ 	struct iovec v;
+ 
+-	if (!(i->type & (ITER_BVEC|ITER_KVEC))) {
++	if (iter_is_iovec(i)) {
+ 		iterate_iovec(i, bytes, v, iov, skip, ({
+ 			err = fault_in_pages_readable(v.iov_base, v.iov_len);
+ 			if (unlikely(err))
+@@ -922,9 +922,12 @@ size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
+ 		size_t wanted = copy_to_iter(kaddr + offset, bytes, i);
+ 		kunmap_atomic(kaddr);
+ 		return wanted;
+-	} else if (unlikely(iov_iter_is_discard(i)))
++	} else if (unlikely(iov_iter_is_discard(i))) {
++		if (unlikely(i->count < bytes))
++			bytes = i->count;
++		i->count -= bytes;
+ 		return bytes;
+-	else if (likely(!iov_iter_is_pipe(i)))
++	} else if (likely(!iov_iter_is_pipe(i)))
+ 		return copy_page_to_iter_iovec(page, offset, bytes, i);
+ 	else
+ 		return copy_page_to_iter_pipe(page, offset, bytes, i);
+diff --git a/lib/kstrtox.c b/lib/kstrtox.c
+index a14ccf9050552..8504526541c13 100644
+--- a/lib/kstrtox.c
++++ b/lib/kstrtox.c
+@@ -39,20 +39,22 @@ const char *_parse_integer_fixup_radix(const char *s, unsigned int *base)
+ 
+ /*
+  * Convert non-negative integer string representation in explicitly given radix
+- * to an integer.
++ * to an integer. A maximum of max_chars characters will be converted.
++ *
+  * Return number of characters consumed maybe or-ed with overflow bit.
+  * If overflow occurs, result integer (incorrect) is still returned.
+  *
+  * Don't you dare use this function.
+  */
+-unsigned int _parse_integer(const char *s, unsigned int base, unsigned long long *p)
++unsigned int _parse_integer_limit(const char *s, unsigned int base, unsigned long long *p,
++				  size_t max_chars)
+ {
+ 	unsigned long long res;
+ 	unsigned int rv;
+ 
+ 	res = 0;
+ 	rv = 0;
+-	while (1) {
++	while (max_chars--) {
+ 		unsigned int c = *s;
+ 		unsigned int lc = c | 0x20; /* don't tolower() this line */
+ 		unsigned int val;
+@@ -82,6 +84,11 @@ unsigned int _parse_integer(const char *s, unsigned int base, unsigned long long
+ 	return rv;
+ }
+ 
++unsigned int _parse_integer(const char *s, unsigned int base, unsigned long long *p)
++{
++	return _parse_integer_limit(s, base, p, INT_MAX);
++}
++
+ static int _kstrtoull(const char *s, unsigned int base, unsigned long long *res)
+ {
+ 	unsigned long long _res;
+diff --git a/lib/kstrtox.h b/lib/kstrtox.h
+index 3b4637bcd2540..158c400ca8658 100644
+--- a/lib/kstrtox.h
++++ b/lib/kstrtox.h
+@@ -4,6 +4,8 @@
+ 
+ #define KSTRTOX_OVERFLOW	(1U << 31)
+ const char *_parse_integer_fixup_radix(const char *s, unsigned int *base);
++unsigned int _parse_integer_limit(const char *s, unsigned int base, unsigned long long *res,
++				  size_t max_chars);
+ unsigned int _parse_integer(const char *s, unsigned int base, unsigned long long *res);
+ 
+ #endif
+diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
+index a899b3f0e2e53..76c52b0b76d38 100644
+--- a/lib/locking-selftest.c
++++ b/lib/locking-selftest.c
+@@ -186,6 +186,7 @@ static void init_shared_classes(void)
+ #define HARDIRQ_ENTER()				\
+ 	local_irq_disable();			\
+ 	__irq_enter();				\
++	lockdep_hardirq_threaded();		\
+ 	WARN_ON(!in_irq());
+ 
+ #define HARDIRQ_EXIT()				\
+diff --git a/lib/math/rational.c b/lib/math/rational.c
+index 9781d521963d1..c0ab51d8fbb98 100644
+--- a/lib/math/rational.c
++++ b/lib/math/rational.c
+@@ -12,6 +12,7 @@
+ #include <linux/compiler.h>
+ #include <linux/export.h>
+ #include <linux/minmax.h>
++#include <linux/limits.h>
+ 
+ /*
+  * calculate best rational approximation for a given fraction
+@@ -78,13 +79,18 @@ void rational_best_approximation(
+ 		 * found below as 't'.
+ 		 */
+ 		if ((n2 > max_numerator) || (d2 > max_denominator)) {
+-			unsigned long t = min((max_numerator - n0) / n1,
+-					      (max_denominator - d0) / d1);
++			unsigned long t = ULONG_MAX;
+ 
+-			/* This tests if the semi-convergent is closer
+-			 * than the previous convergent.
++			if (d1)
++				t = (max_denominator - d0) / d1;
++			if (n1)
++				t = min(t, (max_numerator - n0) / n1);
++
++			/* This tests if the semi-convergent is closer than the previous
++			 * convergent.  If d1 is zero there is no previous convergent as this
++			 * is the 1st iteration, so always choose the semi-convergent.
+ 			 */
+-			if (2u * t > a || (2u * t == a && d0 * dp > d1 * d)) {
++			if (!d1 || 2u * t > a || (2u * t == a && d0 * dp > d1 * d)) {
+ 				n1 = n0 + t * n1;
+ 				d1 = d0 + t * d1;
+ 			}
+diff --git a/lib/seq_buf.c b/lib/seq_buf.c
+index 707453f5d58ee..89c26c393bdba 100644
+--- a/lib/seq_buf.c
++++ b/lib/seq_buf.c
+@@ -243,12 +243,14 @@ int seq_buf_putmem_hex(struct seq_buf *s, const void *mem,
+ 			break;
+ 
+ 		/* j increments twice per loop */
+-		len -= j / 2;
+ 		hex[j++] = ' ';
+ 
+ 		seq_buf_putmem(s, hex, j);
+ 		if (seq_buf_has_overflowed(s))
+ 			return -1;
++
++		len -= start_len;
++		data += start_len;
+ 	}
+ 	return 0;
+ }
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index fd0fde639ec91..8ade1a86d8187 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -53,6 +53,31 @@
+ #include <linux/string_helpers.h>
+ #include "kstrtox.h"
+ 
++static unsigned long long simple_strntoull(const char *startp, size_t max_chars,
++					   char **endp, unsigned int base)
++{
++	const char *cp;
++	unsigned long long result = 0ULL;
++	size_t prefix_chars;
++	unsigned int rv;
++
++	cp = _parse_integer_fixup_radix(startp, &base);
++	prefix_chars = cp - startp;
++	if (prefix_chars < max_chars) {
++		rv = _parse_integer_limit(cp, base, &result, max_chars - prefix_chars);
++		/* FIXME */
++		cp += (rv & ~KSTRTOX_OVERFLOW);
++	} else {
++		/* Field too short for prefix + digit, skip over without converting */
++		cp = startp + max_chars;
++	}
++
++	if (endp)
++		*endp = (char *)cp;
++
++	return result;
++}
++
+ /**
+  * simple_strtoull - convert a string to an unsigned long long
+  * @cp: The start of the string
+@@ -63,18 +88,7 @@
+  */
+ unsigned long long simple_strtoull(const char *cp, char **endp, unsigned int base)
+ {
+-	unsigned long long result;
+-	unsigned int rv;
+-
+-	cp = _parse_integer_fixup_radix(cp, &base);
+-	rv = _parse_integer(cp, base, &result);
+-	/* FIXME */
+-	cp += (rv & ~KSTRTOX_OVERFLOW);
+-
+-	if (endp)
+-		*endp = (char *)cp;
+-
+-	return result;
++	return simple_strntoull(cp, INT_MAX, endp, base);
+ }
+ EXPORT_SYMBOL(simple_strtoull);
+ 
+@@ -109,6 +123,21 @@ long simple_strtol(const char *cp, char **endp, unsigned int base)
+ }
+ EXPORT_SYMBOL(simple_strtol);
+ 
++static long long simple_strntoll(const char *cp, size_t max_chars, char **endp,
++				 unsigned int base)
++{
++	/*
++	 * simple_strntoull() safely handles receiving max_chars==0 in the
++	 * case cp[0] == '-' && max_chars == 1.
++	 * If max_chars == 0 we can drop through and pass it to simple_strntoull()
++	 * and the content of *cp is irrelevant.
++	 */
++	if (*cp == '-' && max_chars > 0)
++		return -simple_strntoull(cp + 1, max_chars - 1, endp, base);
++
++	return simple_strntoull(cp, max_chars, endp, base);
++}
++
+ /**
+  * simple_strtoll - convert a string to a signed long long
+  * @cp: The start of the string
+@@ -119,10 +148,7 @@ EXPORT_SYMBOL(simple_strtol);
+  */
+ long long simple_strtoll(const char *cp, char **endp, unsigned int base)
+ {
+-	if (*cp == '-')
+-		return -simple_strtoull(cp + 1, endp, base);
+-
+-	return simple_strtoull(cp, endp, base);
++	return simple_strntoll(cp, INT_MAX, endp, base);
+ }
+ EXPORT_SYMBOL(simple_strtoll);
+ 
+@@ -3442,25 +3468,13 @@ int vsscanf(const char *buf, const char *fmt, va_list args)
+ 			break;
+ 
+ 		if (is_sign)
+-			val.s = qualifier != 'L' ?
+-				simple_strtol(str, &next, base) :
+-				simple_strtoll(str, &next, base);
++			val.s = simple_strntoll(str,
++						field_width >= 0 ? field_width : INT_MAX,
++						&next, base);
+ 		else
+-			val.u = qualifier != 'L' ?
+-				simple_strtoul(str, &next, base) :
+-				simple_strtoull(str, &next, base);
+-
+-		if (field_width > 0 && next - str > field_width) {
+-			if (base == 0)
+-				_parse_integer_fixup_radix(str, &base);
+-			while (next - str > field_width) {
+-				if (is_sign)
+-					val.s = div_s64(val.s, base);
+-				else
+-					val.u = div_u64(val.u, base);
+-				--next;
+-			}
+-		}
++			val.u = simple_strntoull(str,
++						 field_width >= 0 ? field_width : INT_MAX,
++						 &next, base);
+ 
+ 		switch (qualifier) {
+ 		case 'H':	/* that's 'hh' in format */
+diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
+index 750bfef26be37..12ebc97e8b435 100644
+--- a/mm/debug_vm_pgtable.c
++++ b/mm/debug_vm_pgtable.c
+@@ -58,11 +58,23 @@
+ #define RANDOM_ORVALUE (GENMASK(BITS_PER_LONG - 1, 0) & ~ARCH_SKIP_MASK)
+ #define RANDOM_NZVALUE	GENMASK(7, 0)
+ 
+-static void __init pte_basic_tests(unsigned long pfn, pgprot_t prot)
++static void __init pte_basic_tests(unsigned long pfn, int idx)
+ {
++	pgprot_t prot = protection_map[idx];
+ 	pte_t pte = pfn_pte(pfn, prot);
++	unsigned long val = idx, *ptr = &val;
++
++	pr_debug("Validating PTE basic (%pGv)\n", ptr);
++
++	/*
++	 * This test needs to be executed after the given page table entry
++	 * is created with pfn_pte() to make sure that protection_map[idx]
++	 * does not have the dirty bit enabled from the beginning. This is
++	 * important for platforms like arm64 where (!PTE_RDONLY) indicate
++	 * dirty bit being set.
++	 */
++	WARN_ON(pte_dirty(pte_wrprotect(pte)));
+ 
+-	pr_debug("Validating PTE basic\n");
+ 	WARN_ON(!pte_same(pte, pte));
+ 	WARN_ON(!pte_young(pte_mkyoung(pte_mkold(pte))));
+ 	WARN_ON(!pte_dirty(pte_mkdirty(pte_mkclean(pte))));
+@@ -70,6 +82,8 @@ static void __init pte_basic_tests(unsigned long pfn, pgprot_t prot)
+ 	WARN_ON(pte_young(pte_mkold(pte_mkyoung(pte))));
+ 	WARN_ON(pte_dirty(pte_mkclean(pte_mkdirty(pte))));
+ 	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte))));
++	WARN_ON(pte_dirty(pte_wrprotect(pte_mkclean(pte))));
++	WARN_ON(!pte_dirty(pte_wrprotect(pte_mkdirty(pte))));
+ }
+ 
+ static void __init pte_advanced_tests(struct mm_struct *mm,
+@@ -129,14 +143,28 @@ static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
+ }
+ 
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
++static void __init pmd_basic_tests(unsigned long pfn, int idx)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pgprot_t prot = protection_map[idx];
++	unsigned long val = idx, *ptr = &val;
++	pmd_t pmd;
+ 
+ 	if (!has_transparent_hugepage())
+ 		return;
+ 
+-	pr_debug("Validating PMD basic\n");
++	pr_debug("Validating PMD basic (%pGv)\n", ptr);
++	pmd = pfn_pmd(pfn, prot);
++
++	/*
++	 * This test needs to be executed after the given page table entry
++	 * is created with pfn_pmd() to make sure that protection_map[idx]
++	 * does not have the dirty bit enabled from the beginning. This is
++	 * important for platforms like arm64 where (!PTE_RDONLY) indicate
++	 * dirty bit being set.
++	 */
++	WARN_ON(pmd_dirty(pmd_wrprotect(pmd)));
++
++
+ 	WARN_ON(!pmd_same(pmd, pmd));
+ 	WARN_ON(!pmd_young(pmd_mkyoung(pmd_mkold(pmd))));
+ 	WARN_ON(!pmd_dirty(pmd_mkdirty(pmd_mkclean(pmd))));
+@@ -144,6 +172,8 @@ static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
+ 	WARN_ON(pmd_young(pmd_mkold(pmd_mkyoung(pmd))));
+ 	WARN_ON(pmd_dirty(pmd_mkclean(pmd_mkdirty(pmd))));
+ 	WARN_ON(pmd_write(pmd_wrprotect(pmd_mkwrite(pmd))));
++	WARN_ON(pmd_dirty(pmd_wrprotect(pmd_mkclean(pmd))));
++	WARN_ON(!pmd_dirty(pmd_wrprotect(pmd_mkdirty(pmd))));
+ 	/*
+ 	 * A huge page does not point to next level page table
+ 	 * entry. Hence this must qualify as pmd_bad().
+@@ -156,7 +186,7 @@ static void __init pmd_advanced_tests(struct mm_struct *mm,
+ 				      unsigned long pfn, unsigned long vaddr,
+ 				      pgprot_t prot, pgtable_t pgtable)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pmd_t pmd;
+ 
+ 	if (!has_transparent_hugepage())
+ 		return;
+@@ -203,9 +233,14 @@ static void __init pmd_advanced_tests(struct mm_struct *mm,
+ 
+ static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pmd_t pmd;
++
++	if (!has_transparent_hugepage())
++		return;
+ 
+ 	pr_debug("Validating PMD leaf\n");
++	pmd = pfn_pmd(pfn, prot);
++
+ 	/*
+ 	 * PMD based THP is a leaf entry.
+ 	 */
+@@ -238,30 +273,51 @@ static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
+ 
+ static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pmd_t pmd;
+ 
+ 	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
+ 		return;
+ 
++	if (!has_transparent_hugepage())
++		return;
++
+ 	pr_debug("Validating PMD saved write\n");
++	pmd = pfn_pmd(pfn, prot);
+ 	WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
+ 	WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
+ }
+ 
+ #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+-static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
++static void __init pud_basic_tests(struct mm_struct *mm, unsigned long pfn, int idx)
+ {
+-	pud_t pud = pfn_pud(pfn, prot);
++	pgprot_t prot = protection_map[idx];
++	unsigned long val = idx, *ptr = &val;
++	pud_t pud;
+ 
+ 	if (!has_transparent_hugepage())
+ 		return;
+ 
+-	pr_debug("Validating PUD basic\n");
++	pr_debug("Validating PUD basic (%pGv)\n", ptr);
++	pud = pfn_pud(pfn, prot);
++
++	/*
++	 * This test needs to be executed after the given page table entry
++	 * is created with pfn_pud() to make sure that protection_map[idx]
++	 * does not have the dirty bit enabled from the beginning. This is
++	 * important for platforms like arm64 where (!PTE_RDONLY) indicate
++	 * dirty bit being set.
++	 */
++	WARN_ON(pud_dirty(pud_wrprotect(pud)));
++
+ 	WARN_ON(!pud_same(pud, pud));
+ 	WARN_ON(!pud_young(pud_mkyoung(pud_mkold(pud))));
++	WARN_ON(!pud_dirty(pud_mkdirty(pud_mkclean(pud))));
++	WARN_ON(pud_dirty(pud_mkclean(pud_mkdirty(pud))));
+ 	WARN_ON(!pud_write(pud_mkwrite(pud_wrprotect(pud))));
+ 	WARN_ON(pud_write(pud_wrprotect(pud_mkwrite(pud))));
+ 	WARN_ON(pud_young(pud_mkold(pud_mkyoung(pud))));
++	WARN_ON(pud_dirty(pud_wrprotect(pud_mkclean(pud))));
++	WARN_ON(!pud_dirty(pud_wrprotect(pud_mkdirty(pud))));
+ 
+ 	if (mm_pmd_folded(mm))
+ 		return;
+@@ -278,7 +334,7 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
+ 				      unsigned long pfn, unsigned long vaddr,
+ 				      pgprot_t prot)
+ {
+-	pud_t pud = pfn_pud(pfn, prot);
++	pud_t pud;
+ 
+ 	if (!has_transparent_hugepage())
+ 		return;
+@@ -287,6 +343,7 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
+ 	/* Align the address wrt HPAGE_PUD_SIZE */
+ 	vaddr &= HPAGE_PUD_MASK;
+ 
++	pud = pfn_pud(pfn, prot);
+ 	set_pud_at(mm, vaddr, pudp, pud);
+ 	pudp_set_wrprotect(mm, vaddr, pudp);
+ 	pud = READ_ONCE(*pudp);
+@@ -325,9 +382,13 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
+ 
+ static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pud_t pud = pfn_pud(pfn, prot);
++	pud_t pud;
++
++	if (!has_transparent_hugepage())
++		return;
+ 
+ 	pr_debug("Validating PUD leaf\n");
++	pud = pfn_pud(pfn, prot);
+ 	/*
+ 	 * PUD based THP is a leaf entry.
+ 	 */
+@@ -359,7 +420,7 @@ static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
+ #endif /* !CONFIG_HAVE_ARCH_HUGE_VMAP */
+ 
+ #else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+-static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
++static void __init pud_basic_tests(struct mm_struct *mm, unsigned long pfn, int idx) { }
+ static void __init pud_advanced_tests(struct mm_struct *mm,
+ 				      struct vm_area_struct *vma, pud_t *pudp,
+ 				      unsigned long pfn, unsigned long vaddr,
+@@ -372,8 +433,8 @@ static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
+ }
+ #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+ #else  /* !CONFIG_TRANSPARENT_HUGEPAGE */
+-static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot) { }
+-static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
++static void __init pmd_basic_tests(unsigned long pfn, int idx) { }
++static void __init pud_basic_tests(struct mm_struct *mm, unsigned long pfn, int idx) { }
+ static void __init pmd_advanced_tests(struct mm_struct *mm,
+ 				      struct vm_area_struct *vma, pmd_t *pmdp,
+ 				      unsigned long pfn, unsigned long vaddr,
+@@ -609,12 +670,16 @@ static void __init pte_protnone_tests(unsigned long pfn, pgprot_t prot)
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static void __init pmd_protnone_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pmd_t pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
++	pmd_t pmd;
+ 
+ 	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
+ 		return;
+ 
++	if (!has_transparent_hugepage())
++		return;
++
+ 	pr_debug("Validating PMD protnone\n");
++	pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
+ 	WARN_ON(!pmd_protnone(pmd));
+ 	WARN_ON(!pmd_present(pmd));
+ }
+@@ -634,18 +699,26 @@ static void __init pte_devmap_tests(unsigned long pfn, pgprot_t prot)
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static void __init pmd_devmap_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pmd_t pmd;
++
++	if (!has_transparent_hugepage())
++		return;
+ 
+ 	pr_debug("Validating PMD devmap\n");
++	pmd = pfn_pmd(pfn, prot);
+ 	WARN_ON(!pmd_devmap(pmd_mkdevmap(pmd)));
+ }
+ 
+ #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+ static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pud_t pud = pfn_pud(pfn, prot);
++	pud_t pud;
++
++	if (!has_transparent_hugepage())
++		return;
+ 
+ 	pr_debug("Validating PUD devmap\n");
++	pud = pfn_pud(pfn, prot);
+ 	WARN_ON(!pud_devmap(pud_mkdevmap(pud)));
+ }
+ #else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+@@ -688,25 +761,33 @@ static void __init pte_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static void __init pmd_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pmd_t pmd;
+ 
+ 	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
+ 		return;
+ 
++	if (!has_transparent_hugepage())
++		return;
++
+ 	pr_debug("Validating PMD soft dirty\n");
++	pmd = pfn_pmd(pfn, prot);
+ 	WARN_ON(!pmd_soft_dirty(pmd_mksoft_dirty(pmd)));
+ 	WARN_ON(pmd_soft_dirty(pmd_clear_soft_dirty(pmd)));
+ }
+ 
+ static void __init pmd_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pmd_t pmd;
+ 
+ 	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) ||
+ 		!IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION))
+ 		return;
+ 
++	if (!has_transparent_hugepage())
++		return;
++
+ 	pr_debug("Validating PMD swap soft dirty\n");
++	pmd = pfn_pmd(pfn, prot);
+ 	WARN_ON(!pmd_swp_soft_dirty(pmd_swp_mksoft_dirty(pmd)));
+ 	WARN_ON(pmd_swp_soft_dirty(pmd_swp_clear_soft_dirty(pmd)));
+ }
+@@ -735,6 +816,9 @@ static void __init pmd_swap_tests(unsigned long pfn, pgprot_t prot)
+ 	swp_entry_t swp;
+ 	pmd_t pmd;
+ 
++	if (!has_transparent_hugepage())
++		return;
++
+ 	pr_debug("Validating PMD swap\n");
+ 	pmd = pfn_pmd(pfn, prot);
+ 	swp = __pmd_to_swp_entry(pmd);
+@@ -899,6 +983,7 @@ static int __init debug_vm_pgtable(void)
+ 	unsigned long vaddr, pte_aligned, pmd_aligned;
+ 	unsigned long pud_aligned, p4d_aligned, pgd_aligned;
+ 	spinlock_t *ptl = NULL;
++	int idx;
+ 
+ 	pr_info("Validating architecture page table helpers\n");
+ 	prot = vm_get_page_prot(VMFLAGS);
+@@ -963,9 +1048,25 @@ static int __init debug_vm_pgtable(void)
+ 	saved_pmdp = pmd_offset(pudp, 0UL);
+ 	saved_ptep = pmd_pgtable(pmd);
+ 
+-	pte_basic_tests(pte_aligned, prot);
+-	pmd_basic_tests(pmd_aligned, prot);
+-	pud_basic_tests(pud_aligned, prot);
++	/*
++	 * Iterate over the protection_map[] to make sure that all
++	 * the basic page table transformation validations just hold
++	 * true irrespective of the starting protection value for a
++	 * given page table entry.
++	 */
++	for (idx = 0; idx < ARRAY_SIZE(protection_map); idx++) {
++		pte_basic_tests(pte_aligned, idx);
++		pmd_basic_tests(pmd_aligned, idx);
++		pud_basic_tests(mm, pud_aligned, idx);
++	}
++
++	/*
++	 * Both P4D and PGD level tests are very basic which do not
++	 * involve creating page table entries from the protection
++	 * value and the given pfn. Hence just keep them out from
++	 * the above iteration for now to save some test execution
++	 * time.
++	 */
+ 	p4d_basic_tests(p4d_aligned, prot);
+ 	pgd_basic_tests(pgd_aligned, prot);
+ 
+diff --git a/mm/gup.c b/mm/gup.c
+index c2826f3afe722..6cb7d8ae56f66 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -44,6 +44,23 @@ static void hpage_pincount_sub(struct page *page, int refs)
+ 	atomic_sub(refs, compound_pincount_ptr(page));
+ }
+ 
++/* Equivalent to calling put_page() @refs times. */
++static void put_page_refs(struct page *page, int refs)
++{
++#ifdef CONFIG_DEBUG_VM
++	if (VM_WARN_ON_ONCE_PAGE(page_ref_count(page) < refs, page))
++		return;
++#endif
++
++	/*
++	 * Calling put_page() for each ref is unnecessarily slow. Only the last
++	 * ref needs a put_page().
++	 */
++	if (refs > 1)
++		page_ref_sub(page, refs - 1);
++	put_page(page);
++}
++
+ /*
+  * Return the compound head page with ref appropriately incremented,
+  * or NULL if that failed.
+@@ -56,6 +73,21 @@ static inline struct page *try_get_compound_head(struct page *page, int refs)
+ 		return NULL;
+ 	if (unlikely(!page_cache_add_speculative(head, refs)))
+ 		return NULL;
++
++	/*
++	 * At this point we have a stable reference to the head page; but it
++	 * could be that between the compound_head() lookup and the refcount
++	 * increment, the compound page was split, in which case we'd end up
++	 * holding a reference on a page that has nothing to do with the page
++	 * we were given anymore.
++	 * So now that the head page is stable, recheck that the pages still
++	 * belong together.
++	 */
++	if (unlikely(compound_head(page) != head)) {
++		put_page_refs(head, refs);
++		return NULL;
++	}
++
+ 	return head;
+ }
+ 
+@@ -95,6 +127,14 @@ static __maybe_unused struct page *try_grab_compound_head(struct page *page,
+ 				is_migrate_cma_page(page))
+ 			return NULL;
+ 
++		/*
++		 * CAUTION: Don't use compound_head() on the page before this
++		 * point, the result won't be stable.
++		 */
++		page = try_get_compound_head(page, refs);
++		if (!page)
++			return NULL;
++
+ 		/*
+ 		 * When pinning a compound page of order > 1 (which is what
+ 		 * hpage_pincount_available() checks for), use an exact count to
+@@ -103,15 +143,10 @@ static __maybe_unused struct page *try_grab_compound_head(struct page *page,
+ 		 * However, be sure to *also* increment the normal page refcount
+ 		 * field at least once, so that the page really is pinned.
+ 		 */
+-		if (!hpage_pincount_available(page))
+-			refs *= GUP_PIN_COUNTING_BIAS;
+-
+-		page = try_get_compound_head(page, refs);
+-		if (!page)
+-			return NULL;
+-
+ 		if (hpage_pincount_available(page))
+ 			hpage_pincount_add(page, refs);
++		else
++			page_ref_add(page, refs * (GUP_PIN_COUNTING_BIAS - 1));
+ 
+ 		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_ACQUIRED,
+ 				    orig_refs);
+@@ -135,14 +170,7 @@ static void put_compound_head(struct page *page, int refs, unsigned int flags)
+ 			refs *= GUP_PIN_COUNTING_BIAS;
+ 	}
+ 
+-	VM_BUG_ON_PAGE(page_ref_count(page) < refs, page);
+-	/*
+-	 * Calling put_page() for each ref is unnecessarily slow. Only the last
+-	 * ref needs a put_page().
+-	 */
+-	if (refs > 1)
+-		page_ref_sub(page, refs - 1);
+-	put_page(page);
++	put_page_refs(page, refs);
+ }
+ 
+ /**
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 6301ecc1f679a..9fe622ff2fc4a 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -63,7 +63,14 @@ static atomic_t huge_zero_refcount;
+ struct page *huge_zero_page __read_mostly;
+ unsigned long huge_zero_pfn __read_mostly = ~0UL;
+ 
+-bool transparent_hugepage_enabled(struct vm_area_struct *vma)
++static inline bool file_thp_enabled(struct vm_area_struct *vma)
++{
++	return transhuge_vma_enabled(vma, vma->vm_flags) && vma->vm_file &&
++	       !inode_is_open_for_write(vma->vm_file->f_inode) &&
++	       (vma->vm_flags & VM_EXEC);
++}
++
++bool transparent_hugepage_active(struct vm_area_struct *vma)
+ {
+ 	/* The addr is used to check if the vma size fits */
+ 	unsigned long addr = (vma->vm_end & HPAGE_PMD_MASK) - HPAGE_PMD_SIZE;
+@@ -74,6 +81,8 @@ bool transparent_hugepage_enabled(struct vm_area_struct *vma)
+ 		return __transparent_hugepage_enabled(vma);
+ 	if (vma_is_shmem(vma))
+ 		return shmem_huge_enabled(vma);
++	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS))
++		return file_thp_enabled(vma);
+ 
+ 	return false;
+ }
+@@ -375,7 +384,11 @@ static int __init hugepage_init(void)
+ 	struct kobject *hugepage_kobj;
+ 
+ 	if (!has_transparent_hugepage()) {
+-		transparent_hugepage_flags = 0;
++		/*
++		 * Hardware doesn't support hugepages, hence disable
++		 * DAX PMD support.
++		 */
++		transparent_hugepage_flags = 1 << TRANSPARENT_HUGEPAGE_NEVER_DAX;
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1591,7 +1604,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 	 * If other processes are mapping this page, we couldn't discard
+ 	 * the page unless they all do MADV_FREE so let's skip the page.
+ 	 */
+-	if (page_mapcount(page) != 1)
++	if (total_mapcount(page) != 1)
+ 		goto out;
+ 
+ 	if (!trylock_page(page))
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index d4f89c2f95446..fa6b0ac6c280d 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1252,8 +1252,7 @@ static void destroy_compound_gigantic_page(struct page *page,
+ 	struct page *p = page + 1;
+ 
+ 	atomic_set(compound_mapcount_ptr(page), 0);
+-	if (hpage_pincount_available(page))
+-		atomic_set(compound_pincount_ptr(page), 0);
++	atomic_set(compound_pincount_ptr(page), 0);
+ 
+ 	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
+ 		clear_compound_head(p);
+@@ -1316,8 +1315,6 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+ 	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
+ }
+ 
+-static void prep_new_huge_page(struct hstate *h, struct page *page, int nid);
+-static void prep_compound_gigantic_page(struct page *page, unsigned int order);
+ #else /* !CONFIG_CONTIG_ALLOC */
+ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+ 					int nid, nodemask_t *nodemask)
+@@ -1583,9 +1580,7 @@ static void prep_compound_gigantic_page(struct page *page, unsigned int order)
+ 		set_compound_head(p, page);
+ 	}
+ 	atomic_set(compound_mapcount_ptr(page), -1);
+-
+-	if (hpage_pincount_available(page))
+-		atomic_set(compound_pincount_ptr(page), 0);
++	atomic_set(compound_pincount_ptr(page), 0);
+ }
+ 
+ /*
+@@ -2481,16 +2476,10 @@ found:
+ 	return 1;
+ }
+ 
+-static void __init prep_compound_huge_page(struct page *page,
+-		unsigned int order)
+-{
+-	if (unlikely(order > (MAX_ORDER - 1)))
+-		prep_compound_gigantic_page(page, order);
+-	else
+-		prep_compound_page(page, order);
+-}
+-
+-/* Put bootmem huge pages into the standard lists after mem_map is up */
++/*
++ * Put bootmem huge pages into the standard lists after mem_map is up.
++ * Note: This only applies to gigantic (order > MAX_ORDER) pages.
++ */
+ static void __init gather_bootmem_prealloc(void)
+ {
+ 	struct huge_bootmem_page *m;
+@@ -2499,20 +2488,19 @@ static void __init gather_bootmem_prealloc(void)
+ 		struct page *page = virt_to_page(m);
+ 		struct hstate *h = m->hstate;
+ 
++		VM_BUG_ON(!hstate_is_gigantic(h));
+ 		WARN_ON(page_count(page) != 1);
+-		prep_compound_huge_page(page, h->order);
++		prep_compound_gigantic_page(page, huge_page_order(h));
+ 		WARN_ON(PageReserved(page));
+ 		prep_new_huge_page(h, page, page_to_nid(page));
+ 		put_page(page); /* free it into the hugepage allocator */
+ 
+ 		/*
+-		 * If we had gigantic hugepages allocated at boot time, we need
+-		 * to restore the 'stolen' pages to totalram_pages in order to
+-		 * fix confusing memory reports from free(1) and another
+-		 * side-effects, like CommitLimit going negative.
++		 * We need to restore the 'stolen' pages to totalram_pages
++		 * in order to fix confusing memory reports from free(1) and
++		 * other side-effects, like CommitLimit going negative.
+ 		 */
+-		if (hstate_is_gigantic(h))
+-			adjust_managed_page_count(page, 1 << h->order);
++		adjust_managed_page_count(page, pages_per_huge_page(h));
+ 		cond_resched();
+ 	}
+ }
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index a6238118ac4c7..ee88125785638 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -440,9 +440,7 @@ static inline int khugepaged_test_exit(struct mm_struct *mm)
+ static bool hugepage_vma_check(struct vm_area_struct *vma,
+ 			       unsigned long vm_flags)
+ {
+-	/* Explicitly disabled through madvise. */
+-	if ((vm_flags & VM_NOHUGEPAGE) ||
+-	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
++	if (!transhuge_vma_enabled(vma, vm_flags))
+ 		return false;
+ 
+ 	/* Enabled via shmem mount options or sysfs settings. */
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 8d9f5fa4c6d39..92bf987d0a410 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -2898,12 +2898,20 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg)
+ }
+ 
+ #ifdef CONFIG_MEMCG_KMEM
++/*
++ * The allocated objcg pointers array is not accounted directly.
++ * Moreover, it should not come from DMA buffer and is not readily
++ * reclaimable. So those GFP bits should be masked off.
++ */
++#define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT)
++
+ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
+ 				 gfp_t gfp)
+ {
+ 	unsigned int objects = objs_per_slab_page(s, page);
+ 	void *vec;
+ 
++	gfp &= ~OBJCGS_CLEAR_MASK;
+ 	vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
+ 			   page_to_nid(page));
+ 	if (!vec)
+diff --git a/mm/memory.c b/mm/memory.c
+index eb31b3e4ef93b..0a905e0a7e672 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3302,6 +3302,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+ 	struct page *page = NULL, *swapcache;
++	struct swap_info_struct *si = NULL;
+ 	swp_entry_t entry;
+ 	pte_t pte;
+ 	int locked;
+@@ -3329,14 +3330,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ 		goto out;
+ 	}
+ 
++	/* Prevent swapoff from happening to us. */
++	si = get_swap_device(entry);
++	if (unlikely(!si))
++		goto out;
+ 
+ 	delayacct_set_flag(DELAYACCT_PF_SWAPIN);
+ 	page = lookup_swap_cache(entry, vma, vmf->address);
+ 	swapcache = page;
+ 
+ 	if (!page) {
+-		struct swap_info_struct *si = swp_swap_info(entry);
+-
+ 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
+ 		    __swap_count(entry) == 1) {
+ 			/* skip swapcache */
+@@ -3507,6 +3510,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ unlock:
+ 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+ out:
++	if (si)
++		put_swap_device(si);
+ 	return ret;
+ out_nomap:
+ 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+@@ -3518,6 +3523,8 @@ out_release:
+ 		unlock_page(swapcache);
+ 		put_page(swapcache);
+ 	}
++	if (si)
++		put_swap_device(si);
+ 	return ret;
+ }
+ 
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 81cc7fdc9c8fd..e30d88efd7fbb 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -7788,31 +7788,24 @@ static void calculate_totalreserve_pages(void)
+ static void setup_per_zone_lowmem_reserve(void)
+ {
+ 	struct pglist_data *pgdat;
+-	enum zone_type j, idx;
++	enum zone_type i, j;
+ 
+ 	for_each_online_pgdat(pgdat) {
+-		for (j = 0; j < MAX_NR_ZONES; j++) {
+-			struct zone *zone = pgdat->node_zones + j;
+-			unsigned long managed_pages = zone_managed_pages(zone);
+-
+-			zone->lowmem_reserve[j] = 0;
++		for (i = 0; i < MAX_NR_ZONES - 1; i++) {
++			struct zone *zone = &pgdat->node_zones[i];
++			int ratio = sysctl_lowmem_reserve_ratio[i];
++			bool clear = !ratio || !zone_managed_pages(zone);
++			unsigned long managed_pages = 0;
+ 
+-			idx = j;
+-			while (idx) {
+-				struct zone *lower_zone;
++			for (j = i + 1; j < MAX_NR_ZONES; j++) {
++				struct zone *upper_zone = &pgdat->node_zones[j];
+ 
+-				idx--;
+-				lower_zone = pgdat->node_zones + idx;
++				managed_pages += zone_managed_pages(upper_zone);
+ 
+-				if (!sysctl_lowmem_reserve_ratio[idx] ||
+-				    !zone_managed_pages(lower_zone)) {
+-					lower_zone->lowmem_reserve[j] = 0;
+-					continue;
+-				} else {
+-					lower_zone->lowmem_reserve[j] =
+-						managed_pages / sysctl_lowmem_reserve_ratio[idx];
+-				}
+-				managed_pages += zone_managed_pages(lower_zone);
++				if (clear)
++					zone->lowmem_reserve[j] = 0;
++				else
++					zone->lowmem_reserve[j] = managed_pages / ratio;
+ 			}
+ 		}
+ 	}
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 6e487bf555f9e..96df61c8af653 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1698,7 +1698,8 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
+ 	struct address_space *mapping = inode->i_mapping;
+ 	struct shmem_inode_info *info = SHMEM_I(inode);
+ 	struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
+-	struct page *page;
++	struct swap_info_struct *si;
++	struct page *page = NULL;
+ 	swp_entry_t swap;
+ 	int error;
+ 
+@@ -1706,6 +1707,12 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
+ 	swap = radix_to_swp_entry(*pagep);
+ 	*pagep = NULL;
+ 
++	/* Prevent swapoff from happening to us. */
++	si = get_swap_device(swap);
++	if (!si) {
++		error = EINVAL;
++		goto failed;
++	}
+ 	/* Look it up and read it in.. */
+ 	page = lookup_swap_cache(swap, NULL, 0);
+ 	if (!page) {
+@@ -1767,6 +1774,8 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
+ 	swap_free(swap);
+ 
+ 	*pagep = page;
++	if (si)
++		put_swap_device(si);
+ 	return 0;
+ failed:
+ 	if (!shmem_confirm_swap(mapping, index, swap))
+@@ -1777,6 +1786,9 @@ unlock:
+ 		put_page(page);
+ 	}
+ 
++	if (si)
++		put_swap_device(si);
++
+ 	return error;
+ }
+ 
+@@ -4080,8 +4092,7 @@ bool shmem_huge_enabled(struct vm_area_struct *vma)
+ 	loff_t i_size;
+ 	pgoff_t off;
+ 
+-	if ((vma->vm_flags & VM_NOHUGEPAGE) ||
+-	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
++	if (!transhuge_vma_enabled(vma, vma->vm_flags))
+ 		return false;
+ 	if (shmem_huge == SHMEM_HUGE_FORCE)
+ 		return true;
+diff --git a/mm/slab.h b/mm/slab.h
+index e258ffcfb0ef2..944e8b2040ae2 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -326,7 +326,6 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
+ 	if (!memcg_kmem_enabled() || !objcg)
+ 		return;
+ 
+-	flags &= ~__GFP_ACCOUNT;
+ 	for (i = 0; i < size; i++) {
+ 		if (likely(p[i])) {
+ 			page = virt_to_head_page(p[i]);
+diff --git a/mm/z3fold.c b/mm/z3fold.c
+index 8ae944eeb8e20..912ac9a64a155 100644
+--- a/mm/z3fold.c
++++ b/mm/z3fold.c
+@@ -1063,6 +1063,7 @@ static void z3fold_destroy_pool(struct z3fold_pool *pool)
+ 	destroy_workqueue(pool->compact_wq);
+ 	destroy_workqueue(pool->release_wq);
+ 	z3fold_unregister_migration(pool);
++	free_percpu(pool->unbuddied);
+ 	kfree(pool);
+ }
+ 
+@@ -1386,7 +1387,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
+ 			if (zhdr->foreign_handles ||
+ 			    test_and_set_bit(PAGE_CLAIMED, &page->private)) {
+ 				if (kref_put(&zhdr->refcount,
+-						release_z3fold_page))
++						release_z3fold_page_locked))
+ 					atomic64_dec(&pool->pages_nr);
+ 				else
+ 					z3fold_page_unlock(zhdr);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 4676e4b0be2bf..d62ac4b737099 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5256,8 +5256,19 @@ static void hci_le_ext_adv_term_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+ 
+-	if (ev->status)
++	if (ev->status) {
++		struct adv_info *adv;
++
++		adv = hci_find_adv_instance(hdev, ev->handle);
++		if (!adv)
++			return;
++
++		/* Remove advertising as it has been terminated */
++		hci_remove_adv_instance(hdev, ev->handle);
++		mgmt_advertising_removed(NULL, hdev, ev->handle);
++
+ 		return;
++	}
+ 
+ 	conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->conn_handle));
+ 	if (conn) {
+@@ -5401,7 +5412,7 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ 	struct hci_conn *conn;
+ 	bool match;
+ 	u32 flags;
+-	u8 *ptr, real_len;
++	u8 *ptr;
+ 
+ 	switch (type) {
+ 	case LE_ADV_IND:
+@@ -5432,14 +5443,10 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ 			break;
+ 	}
+ 
+-	real_len = ptr - data;
+-
+-	/* Adjust for actual length */
+-	if (len != real_len) {
+-		bt_dev_err_ratelimited(hdev, "advertising data len corrected %u -> %u",
+-				       len, real_len);
+-		len = real_len;
+-	}
++	/* Adjust for actual length. This handles the case when remote
++	 * device is advertising with incorrect data length.
++	 */
++	len = ptr - data;
+ 
+ 	/* If the direct address is present, then this report is from
+ 	 * a LE Direct Advertising Report event. In that case it is
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index 161ea93a53828..1a94ed2f8a4f8 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -1060,9 +1060,10 @@ static u8 get_adv_instance_scan_rsp_len(struct hci_dev *hdev, u8 instance)
+ 	if (!adv_instance)
+ 		return 0;
+ 
+-	/* TODO: Take into account the "appearance" and "local-name" flags here.
+-	 * These are currently being ignored as they are not supported.
+-	 */
++	if (adv_instance->flags & MGMT_ADV_FLAG_APPEARANCE ||
++	    adv_instance->flags & MGMT_ADV_FLAG_LOCAL_NAME)
++		return 1;
++
+ 	return adv_instance->scan_rsp_len;
+ }
+ 
+@@ -1595,33 +1596,33 @@ void __hci_req_update_scan_rsp_data(struct hci_request *req, u8 instance)
+ 		return;
+ 
+ 	if (ext_adv_capable(hdev)) {
+-		struct hci_cp_le_set_ext_scan_rsp_data cp;
++		struct {
++			struct hci_cp_le_set_ext_scan_rsp_data cp;
++			u8 data[HCI_MAX_EXT_AD_LENGTH];
++		} pdu;
+ 
+-		memset(&cp, 0, sizeof(cp));
++		memset(&pdu, 0, sizeof(pdu));
+ 
+-		/* Extended scan response data doesn't allow a response to be
+-		 * set if the instance isn't scannable.
+-		 */
+-		if (get_adv_instance_scan_rsp_len(hdev, instance))
++		if (instance)
+ 			len = create_instance_scan_rsp_data(hdev, instance,
+-							    cp.data);
++							    pdu.data);
+ 		else
+-			len = 0;
++			len = create_default_scan_rsp_data(hdev, pdu.data);
+ 
+ 		if (hdev->scan_rsp_data_len == len &&
+-		    !memcmp(cp.data, hdev->scan_rsp_data, len))
++		    !memcmp(pdu.data, hdev->scan_rsp_data, len))
+ 			return;
+ 
+-		memcpy(hdev->scan_rsp_data, cp.data, sizeof(cp.data));
++		memcpy(hdev->scan_rsp_data, pdu.data, len);
+ 		hdev->scan_rsp_data_len = len;
+ 
+-		cp.handle = instance;
+-		cp.length = len;
+-		cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
+-		cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
++		pdu.cp.handle = instance;
++		pdu.cp.length = len;
++		pdu.cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
++		pdu.cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
+ 
+-		hci_req_add(req, HCI_OP_LE_SET_EXT_SCAN_RSP_DATA, sizeof(cp),
+-			    &cp);
++		hci_req_add(req, HCI_OP_LE_SET_EXT_SCAN_RSP_DATA,
++			    sizeof(pdu.cp) + len, &pdu.cp);
+ 	} else {
+ 		struct hci_cp_le_set_scan_rsp_data cp;
+ 
+@@ -1744,26 +1745,30 @@ void __hci_req_update_adv_data(struct hci_request *req, u8 instance)
+ 		return;
+ 
+ 	if (ext_adv_capable(hdev)) {
+-		struct hci_cp_le_set_ext_adv_data cp;
++		struct {
++			struct hci_cp_le_set_ext_adv_data cp;
++			u8 data[HCI_MAX_EXT_AD_LENGTH];
++		} pdu;
+ 
+-		memset(&cp, 0, sizeof(cp));
++		memset(&pdu, 0, sizeof(pdu));
+ 
+-		len = create_instance_adv_data(hdev, instance, cp.data);
++		len = create_instance_adv_data(hdev, instance, pdu.data);
+ 
+ 		/* There's nothing to do if the data hasn't changed */
+ 		if (hdev->adv_data_len == len &&
+-		    memcmp(cp.data, hdev->adv_data, len) == 0)
++		    memcmp(pdu.data, hdev->adv_data, len) == 0)
+ 			return;
+ 
+-		memcpy(hdev->adv_data, cp.data, sizeof(cp.data));
++		memcpy(hdev->adv_data, pdu.data, len);
+ 		hdev->adv_data_len = len;
+ 
+-		cp.length = len;
+-		cp.handle = instance;
+-		cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
+-		cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
++		pdu.cp.length = len;
++		pdu.cp.handle = instance;
++		pdu.cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
++		pdu.cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
+ 
+-		hci_req_add(req, HCI_OP_LE_SET_EXT_ADV_DATA, sizeof(cp), &cp);
++		hci_req_add(req, HCI_OP_LE_SET_EXT_ADV_DATA,
++			    sizeof(pdu.cp) + len, &pdu.cp);
+ 	} else {
+ 		struct hci_cp_le_set_adv_data cp;
+ 
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 12d7b368b428e..13520c7b4f2fb 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -7350,6 +7350,9 @@ static bool tlv_data_is_valid(struct hci_dev *hdev, u32 adv_flags, u8 *data,
+ 	for (i = 0, cur_len = 0; i < len; i += (cur_len + 1)) {
+ 		cur_len = data[i];
+ 
++		if (!cur_len)
++			continue;
++
+ 		if (data[i + 1] == EIR_FLAGS &&
+ 		    (!is_adv_data || flags_managed(adv_flags)))
+ 			return false;
+diff --git a/net/bpfilter/main.c b/net/bpfilter/main.c
+index 05e1cfc1e5cd1..291a925462463 100644
+--- a/net/bpfilter/main.c
++++ b/net/bpfilter/main.c
+@@ -57,7 +57,7 @@ int main(void)
+ {
+ 	debug_f = fopen("/dev/kmsg", "w");
+ 	setvbuf(debug_f, 0, _IOLBF, 0);
+-	fprintf(debug_f, "Started bpfilter\n");
++	fprintf(debug_f, "<5>Started bpfilter\n");
+ 	loop();
+ 	fclose(debug_f);
+ 	return 0;
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index f3e4d9528fa38..0928a39c4423b 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -785,6 +785,7 @@ static int bcm_delete_rx_op(struct list_head *ops, struct bcm_msg_head *mh,
+ 						  bcm_rx_handler, op);
+ 
+ 			list_del(&op->list);
++			synchronize_rcu();
+ 			bcm_remove_op(op);
+ 			return 1; /* done */
+ 		}
+@@ -1533,9 +1534,13 @@ static int bcm_release(struct socket *sock)
+ 					  REGMASK(op->can_id),
+ 					  bcm_rx_handler, op);
+ 
+-		bcm_remove_op(op);
+ 	}
+ 
++	synchronize_rcu();
++
++	list_for_each_entry_safe(op, next, &bo->rx_ops, list)
++		bcm_remove_op(op);
++
+ #if IS_ENABLED(CONFIG_PROC_FS)
+ 	/* remove procfs entry */
+ 	if (net->can.bcmproc_dir && bo->bcm_proc_read)
+diff --git a/net/can/gw.c b/net/can/gw.c
+index 6b790b6ff8d26..cbb46d3aa9634 100644
+--- a/net/can/gw.c
++++ b/net/can/gw.c
+@@ -534,6 +534,7 @@ static int cgw_notifier(struct notifier_block *nb,
+ 			if (gwj->src.dev == dev || gwj->dst.dev == dev) {
+ 				hlist_del(&gwj->list);
+ 				cgw_unregister_filter(net, gwj);
++				synchronize_rcu();
+ 				kmem_cache_free(cgw_cache, gwj);
+ 			}
+ 		}
+@@ -1092,6 +1093,7 @@ static void cgw_remove_all_jobs(struct net *net)
+ 	hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) {
+ 		hlist_del(&gwj->list);
+ 		cgw_unregister_filter(net, gwj);
++		synchronize_rcu();
+ 		kmem_cache_free(cgw_cache, gwj);
+ 	}
+ }
+@@ -1160,6 +1162,7 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 		hlist_del(&gwj->list);
+ 		cgw_unregister_filter(net, gwj);
++		synchronize_rcu();
+ 		kmem_cache_free(cgw_cache, gwj);
+ 		err = 0;
+ 		break;
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 1adefb14527d8..5fc28f190677b 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1023,9 +1023,6 @@ static int isotp_release(struct socket *sock)
+ 
+ 	lock_sock(sk);
+ 
+-	hrtimer_cancel(&so->txtimer);
+-	hrtimer_cancel(&so->rxtimer);
+-
+ 	/* remove current filters & unregister */
+ 	if (so->bound) {
+ 		if (so->ifindex) {
+@@ -1037,10 +1034,14 @@ static int isotp_release(struct socket *sock)
+ 						  SINGLE_MASK(so->rxid),
+ 						  isotp_rcv, sk);
+ 				dev_put(dev);
++				synchronize_rcu();
+ 			}
+ 		}
+ 	}
+ 
++	hrtimer_cancel(&so->txtimer);
++	hrtimer_cancel(&so->rxtimer);
++
+ 	so->ifindex = 0;
+ 	so->bound = 0;
+ 
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index e52330f628c9f..6884d18f919c7 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -193,6 +193,10 @@ static void j1939_can_rx_unregister(struct j1939_priv *priv)
+ 	can_rx_unregister(dev_net(ndev), ndev, J1939_CAN_ID, J1939_CAN_MASK,
+ 			  j1939_can_recv, priv);
+ 
++	/* The last reference of priv is dropped by the RCU deferred
++	 * j1939_sk_sock_destruct() of the last socket, so we can
++	 * safely drop this reference here.
++	 */
+ 	j1939_priv_put(priv);
+ }
+ 
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 56aa66147d5ac..e1a399821238f 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -398,6 +398,9 @@ static int j1939_sk_init(struct sock *sk)
+ 	atomic_set(&jsk->skb_pending, 0);
+ 	spin_lock_init(&jsk->sk_session_queue_lock);
+ 	INIT_LIST_HEAD(&jsk->sk_session_queue);
++
++	/* j1939_sk_sock_destruct() depends on SOCK_RCU_FREE flag */
++	sock_set_flag(sk, SOCK_RCU_FREE);
+ 	sk->sk_destruct = j1939_sk_sock_destruct;
+ 	sk->sk_protocol = CAN_J1939;
+ 
+@@ -673,7 +676,7 @@ static int j1939_sk_setsockopt(struct socket *sock, int level, int optname,
+ 
+ 	switch (optname) {
+ 	case SO_J1939_FILTER:
+-		if (!sockptr_is_null(optval)) {
++		if (!sockptr_is_null(optval) && optlen != 0) {
+ 			struct j1939_filter *f;
+ 			int c;
+ 
+diff --git a/net/core/filter.c b/net/core/filter.c
+index ef6bdbb63ecbb..7ea752af7894d 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -3266,8 +3266,6 @@ static int bpf_skb_proto_4_to_6(struct sk_buff *skb)
+ 			shinfo->gso_type |=  SKB_GSO_TCPV6;
+ 		}
+ 
+-		/* Due to IPv6 header, MSS needs to be downgraded. */
+-		skb_decrease_gso_size(shinfo, len_diff);
+ 		/* Header must be checked, and gso_segs recomputed. */
+ 		shinfo->gso_type |= SKB_GSO_DODGY;
+ 		shinfo->gso_segs = 0;
+@@ -3307,8 +3305,6 @@ static int bpf_skb_proto_6_to_4(struct sk_buff *skb)
+ 			shinfo->gso_type |=  SKB_GSO_TCPV4;
+ 		}
+ 
+-		/* Due to IPv4 header, MSS can be upgraded. */
+-		skb_increase_gso_size(shinfo, len_diff);
+ 		/* Header must be checked, and gso_segs recomputed. */
+ 		shinfo->gso_type |= SKB_GSO_DODGY;
+ 		shinfo->gso_segs = 0;
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 4b834bbf95e07..ed9857b2875dc 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -673,7 +673,7 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
+ 		struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
+ 		u32 padto;
+ 
+-		padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached));
++		padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
+ 		if (skb->len < padto)
+ 			esp.tfclen = padto - skb->len;
+ 	}
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 84bb707bd88d8..647bceab56c2d 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -371,6 +371,8 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
+ 		fl4.flowi4_proto = 0;
+ 		fl4.fl4_sport = 0;
+ 		fl4.fl4_dport = 0;
++	} else {
++		swap(fl4.fl4_sport, fl4.fl4_dport);
+ 	}
+ 
+ 	if (fib_lookup(net, &fl4, &res, 0))
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index e968bb47d5bd8..e15c1d8b7c8de 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1327,7 +1327,7 @@ static unsigned int ipv4_mtu(const struct dst_entry *dst)
+ 		mtu = dst_metric_raw(dst, RTAX_MTU);
+ 
+ 	if (mtu)
+-		return mtu;
++		goto out;
+ 
+ 	mtu = READ_ONCE(dst->dev->mtu);
+ 
+@@ -1336,6 +1336,7 @@ static unsigned int ipv4_mtu(const struct dst_entry *dst)
+ 			mtu = 576;
+ 	}
+ 
++out:
+ 	mtu = min_t(unsigned int, mtu, IP_MAX_MTU);
+ 
+ 	return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 4071cb7c7a154..8d001f665fb15 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -708,7 +708,7 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
+ 		struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
+ 		u32 padto;
+ 
+-		padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached));
++		padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
+ 		if (skb->len < padto)
+ 			esp.tfclen = padto - skb->len;
+ 	}
+diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c
+index 374105e4394f8..4932dea9820ba 100644
+--- a/net/ipv6/exthdrs.c
++++ b/net/ipv6/exthdrs.c
+@@ -135,18 +135,23 @@ static bool ip6_parse_tlv(const struct tlvtype_proc *procs,
+ 	len -= 2;
+ 
+ 	while (len > 0) {
+-		int optlen = nh[off + 1] + 2;
+-		int i;
++		int optlen, i;
+ 
+-		switch (nh[off]) {
+-		case IPV6_TLV_PAD1:
+-			optlen = 1;
++		if (nh[off] == IPV6_TLV_PAD1) {
+ 			padlen++;
+ 			if (padlen > 7)
+ 				goto bad;
+-			break;
++			off++;
++			len--;
++			continue;
++		}
++		if (len < 2)
++			goto bad;
++		optlen = nh[off + 1] + 2;
++		if (optlen > len)
++			goto bad;
+ 
+-		case IPV6_TLV_PADN:
++		if (nh[off] == IPV6_TLV_PADN) {
+ 			/* RFC 2460 states that the purpose of PadN is
+ 			 * to align the containing header to multiples
+ 			 * of 8. 7 is therefore the highest valid value.
+@@ -163,12 +168,7 @@ static bool ip6_parse_tlv(const struct tlvtype_proc *procs,
+ 				if (nh[off + i] != 0)
+ 					goto bad;
+ 			}
+-			break;
+-
+-		default: /* Other TLV code so scan list */
+-			if (optlen > len)
+-				goto bad;
+-
++		} else {
+ 			tlv_count++;
+ 			if (tlv_count > max_count)
+ 				goto bad;
+@@ -188,7 +188,6 @@ static bool ip6_parse_tlv(const struct tlvtype_proc *procs,
+ 				return false;
+ 
+ 			padlen = 0;
+-			break;
+ 		}
+ 		off += optlen;
+ 		len -= optlen;
+@@ -306,7 +305,7 @@ fail_and_free:
+ #endif
+ 
+ 	if (ip6_parse_tlv(tlvprocdestopt_lst, skb,
+-			  init_net.ipv6.sysctl.max_dst_opts_cnt)) {
++			  net->ipv6.sysctl.max_dst_opts_cnt)) {
+ 		skb->transport_header += extlen;
+ 		opt = IP6CB(skb);
+ #if IS_ENABLED(CONFIG_IPV6_MIP6)
+@@ -1041,7 +1040,7 @@ fail_and_free:
+ 
+ 	opt->flags |= IP6SKB_HOPBYHOP;
+ 	if (ip6_parse_tlv(tlvprochopopt_lst, skb,
+-			  init_net.ipv6.sysctl.max_hbh_opts_cnt)) {
++			  net->ipv6.sysctl.max_hbh_opts_cnt)) {
+ 		skb->transport_header += extlen;
+ 		opt = IP6CB(skb);
+ 		opt->nhoff = sizeof(struct ipv6hdr);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 42ca2d05c480d..08441f06afd48 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1270,8 +1270,6 @@ route_lookup:
+ 	if (max_headroom > dev->needed_headroom)
+ 		dev->needed_headroom = max_headroom;
+ 
+-	skb_set_inner_ipproto(skb, proto);
+-
+ 	err = ip6_tnl_encap(skb, t, &proto, fl6);
+ 	if (err)
+ 		return err;
+@@ -1408,6 +1406,8 @@ ipxip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev,
+ 	if (iptunnel_handle_offloads(skb, SKB_GSO_IPXIP6))
+ 		return -1;
+ 
++	skb_set_inner_ipproto(skb, protocol);
++
+ 	err = ip6_tnl_xmit(skb, dev, dsfield, &fl6, encap_limit, &mtu,
+ 			   protocol);
+ 	if (err != 0) {
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index fbe26e912300d..142bb28199c48 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1094,11 +1094,6 @@ void ieee80211_send_nullfunc(struct ieee80211_local *local,
+ 	struct ieee80211_hdr_3addr *nullfunc;
+ 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+ 
+-	/* Don't send NDPs when STA is connected HE */
+-	if (sdata->vif.type == NL80211_IFTYPE_STATION &&
+-	    !(ifmgd->flags & IEEE80211_STA_DISABLE_HE))
+-		return;
+-
+ 	skb = ieee80211_nullfunc_get(&local->hw, &sdata->vif,
+ 		!ieee80211_hw_check(&local->hw, DOESNT_SUPPORT_QOS_NDP));
+ 	if (!skb)
+@@ -1130,10 +1125,6 @@ static void ieee80211_send_4addr_nullfunc(struct ieee80211_local *local,
+ 	if (WARN_ON(sdata->vif.type != NL80211_IFTYPE_STATION))
+ 		return;
+ 
+-	/* Don't send NDPs when connected HE */
+-	if (!(sdata->u.mgd.flags & IEEE80211_STA_DISABLE_HE))
+-		return;
+-
+ 	skb = dev_alloc_skb(local->hw.extra_tx_headroom + 30);
+ 	if (!skb)
+ 		return;
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index f2fb69da9b6e1..13250cadb4202 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -1398,11 +1398,6 @@ static void ieee80211_send_null_response(struct sta_info *sta, int tid,
+ 	struct ieee80211_tx_info *info;
+ 	struct ieee80211_chanctx_conf *chanctx_conf;
+ 
+-	/* Don't send NDPs when STA is connected HE */
+-	if (sdata->vif.type == NL80211_IFTYPE_STATION &&
+-	    !(sdata->u.mgd.flags & IEEE80211_STA_DISABLE_HE))
+-		return;
+-
+ 	if (qos) {
+ 		fc = cpu_to_le16(IEEE80211_FTYPE_DATA |
+ 				 IEEE80211_STYPE_QOS_NULLFUNC |
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 851fb3d8c791d..bba5696fee36d 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -338,15 +338,15 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 			goto do_reset;
+ 		}
+ 
++		if (!mptcp_finish_join(sk))
++			goto do_reset;
++
+ 		subflow_generate_hmac(subflow->local_key, subflow->remote_key,
+ 				      subflow->local_nonce,
+ 				      subflow->remote_nonce,
+ 				      hmac);
+ 		memcpy(subflow->hmac, hmac, MPTCPOPT_HMAC_LEN);
+ 
+-		if (!mptcp_finish_join(sk))
+-			goto do_reset;
+-
+ 		subflow->mp_join = 1;
+ 		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKRX);
+ 	} else if (mptcp_check_fallback(sk)) {
+diff --git a/net/mptcp/token.c b/net/mptcp/token.c
+index feb4b9ffd4625..0691a4883f3ab 100644
+--- a/net/mptcp/token.c
++++ b/net/mptcp/token.c
+@@ -156,9 +156,6 @@ int mptcp_token_new_connect(struct sock *sk)
+ 	int retries = TOKEN_MAX_RETRIES;
+ 	struct token_bucket *bucket;
+ 
+-	pr_debug("ssk=%p, local_key=%llu, token=%u, idsn=%llu\n",
+-		 sk, subflow->local_key, subflow->token, subflow->idsn);
+-
+ again:
+ 	mptcp_crypto_key_gen_sha(&subflow->local_key, &subflow->token,
+ 				 &subflow->idsn);
+@@ -172,6 +169,9 @@ again:
+ 		goto again;
+ 	}
+ 
++	pr_debug("ssk=%p, local_key=%llu, token=%u, idsn=%llu\n",
++		 sk, subflow->local_key, subflow->token, subflow->idsn);
++
+ 	WRITE_ONCE(msk->token, subflow->token);
+ 	__sk_nulls_add_node_rcu((struct sock *)msk, &bucket->msk_chain);
+ 	bucket->chain_len++;
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index 2b00f7f47693b..9ce776175214c 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -54,15 +54,10 @@ static void nft_flow_rule_transfer_vlan(struct nft_offload_ctx *ctx,
+ 					struct nft_flow_rule *flow)
+ {
+ 	struct nft_flow_match *match = &flow->match;
+-	struct nft_offload_ethertype ethertype;
+-
+-	if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_CONTROL) &&
+-	    match->key.basic.n_proto != htons(ETH_P_8021Q) &&
+-	    match->key.basic.n_proto != htons(ETH_P_8021AD))
+-		return;
+-
+-	ethertype.value = match->key.basic.n_proto;
+-	ethertype.mask = match->mask.basic.n_proto;
++	struct nft_offload_ethertype ethertype = {
++		.value	= match->key.basic.n_proto,
++		.mask	= match->mask.basic.n_proto,
++	};
+ 
+ 	if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_VLAN) &&
+ 	    (match->key.vlan.vlan_tpid == htons(ETH_P_8021Q) ||
+@@ -76,7 +71,9 @@ static void nft_flow_rule_transfer_vlan(struct nft_offload_ctx *ctx,
+ 		match->dissector.offset[FLOW_DISSECTOR_KEY_CVLAN] =
+ 			offsetof(struct nft_flow_key, cvlan);
+ 		match->dissector.used_keys |= BIT(FLOW_DISSECTOR_KEY_CVLAN);
+-	} else {
++	} else if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_BASIC) &&
++		   (match->key.basic.n_proto == htons(ETH_P_8021Q) ||
++		    match->key.basic.n_proto == htons(ETH_P_8021AD))) {
+ 		match->key.basic.n_proto = match->key.vlan.vlan_tpid;
+ 		match->mask.basic.n_proto = match->mask.vlan.vlan_tpid;
+ 		match->key.vlan.vlan_tpid = ethertype.value;
+diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c
+index 3c48cdc8935df..faa0844c01fb8 100644
+--- a/net/netfilter/nft_exthdr.c
++++ b/net/netfilter/nft_exthdr.c
+@@ -42,6 +42,9 @@ static void nft_exthdr_ipv6_eval(const struct nft_expr *expr,
+ 	unsigned int offset = 0;
+ 	int err;
+ 
++	if (pkt->skb->protocol != htons(ETH_P_IPV6))
++		goto err;
++
+ 	err = ipv6_find_hdr(pkt->skb, &offset, priv->type, NULL, NULL);
+ 	if (priv->flags & NFT_EXTHDR_F_PRESENT) {
+ 		nft_reg_store8(dest, err >= 0);
+diff --git a/net/netfilter/nft_osf.c b/net/netfilter/nft_osf.c
+index c261d57a666ab..2c957629ea660 100644
+--- a/net/netfilter/nft_osf.c
++++ b/net/netfilter/nft_osf.c
+@@ -28,6 +28,11 @@ static void nft_osf_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	struct nf_osf_data data;
+ 	struct tcphdr _tcph;
+ 
++	if (pkt->tprot != IPPROTO_TCP) {
++		regs->verdict.code = NFT_BREAK;
++		return;
++	}
++
+ 	tcp = skb_header_pointer(skb, ip_hdrlen(skb),
+ 				 sizeof(struct tcphdr), &_tcph);
+ 	if (!tcp) {
+diff --git a/net/netfilter/nft_tproxy.c b/net/netfilter/nft_tproxy.c
+index d67f83a0958d3..242222dc52c3c 100644
+--- a/net/netfilter/nft_tproxy.c
++++ b/net/netfilter/nft_tproxy.c
+@@ -30,6 +30,12 @@ static void nft_tproxy_eval_v4(const struct nft_expr *expr,
+ 	__be16 tport = 0;
+ 	struct sock *sk;
+ 
++	if (pkt->tprot != IPPROTO_TCP &&
++	    pkt->tprot != IPPROTO_UDP) {
++		regs->verdict.code = NFT_BREAK;
++		return;
++	}
++
+ 	hp = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_hdr), &_hdr);
+ 	if (!hp) {
+ 		regs->verdict.code = NFT_BREAK;
+@@ -91,7 +97,8 @@ static void nft_tproxy_eval_v6(const struct nft_expr *expr,
+ 
+ 	memset(&taddr, 0, sizeof(taddr));
+ 
+-	if (!pkt->tprot_set) {
++	if (pkt->tprot != IPPROTO_TCP &&
++	    pkt->tprot != IPPROTO_UDP) {
+ 		regs->verdict.code = NFT_BREAK;
+ 		return;
+ 	}
+diff --git a/net/netlabel/netlabel_mgmt.c b/net/netlabel/netlabel_mgmt.c
+index eb1d66d20afbb..02a97bca1a1a2 100644
+--- a/net/netlabel/netlabel_mgmt.c
++++ b/net/netlabel/netlabel_mgmt.c
+@@ -76,6 +76,7 @@ static const struct nla_policy netlbl_mgmt_genl_policy[NLBL_MGMT_A_MAX + 1] = {
+ static int netlbl_mgmt_add_common(struct genl_info *info,
+ 				  struct netlbl_audit *audit_info)
+ {
++	void *pmap = NULL;
+ 	int ret_val = -EINVAL;
+ 	struct netlbl_domaddr_map *addrmap = NULL;
+ 	struct cipso_v4_doi *cipsov4 = NULL;
+@@ -175,6 +176,7 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
+ 			ret_val = -ENOMEM;
+ 			goto add_free_addrmap;
+ 		}
++		pmap = map;
+ 		map->list.addr = addr->s_addr & mask->s_addr;
+ 		map->list.mask = mask->s_addr;
+ 		map->list.valid = 1;
+@@ -183,10 +185,8 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
+ 			map->def.cipso = cipsov4;
+ 
+ 		ret_val = netlbl_af4list_add(&map->list, &addrmap->list4);
+-		if (ret_val != 0) {
+-			kfree(map);
+-			goto add_free_addrmap;
+-		}
++		if (ret_val != 0)
++			goto add_free_map;
+ 
+ 		entry->family = AF_INET;
+ 		entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
+@@ -223,6 +223,7 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
+ 			ret_val = -ENOMEM;
+ 			goto add_free_addrmap;
+ 		}
++		pmap = map;
+ 		map->list.addr = *addr;
+ 		map->list.addr.s6_addr32[0] &= mask->s6_addr32[0];
+ 		map->list.addr.s6_addr32[1] &= mask->s6_addr32[1];
+@@ -235,10 +236,8 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
+ 			map->def.calipso = calipso;
+ 
+ 		ret_val = netlbl_af6list_add(&map->list, &addrmap->list6);
+-		if (ret_val != 0) {
+-			kfree(map);
+-			goto add_free_addrmap;
+-		}
++		if (ret_val != 0)
++			goto add_free_map;
+ 
+ 		entry->family = AF_INET6;
+ 		entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
+@@ -248,10 +247,12 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
+ 
+ 	ret_val = netlbl_domhsh_add(entry, audit_info);
+ 	if (ret_val != 0)
+-		goto add_free_addrmap;
++		goto add_free_map;
+ 
+ 	return 0;
+ 
++add_free_map:
++	kfree(pmap);
+ add_free_addrmap:
+ 	kfree(addrmap);
+ add_doi_put_def:
+diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c
+index b8559c8824318..e760d4a38fafd 100644
+--- a/net/qrtr/ns.c
++++ b/net/qrtr/ns.c
+@@ -783,8 +783,10 @@ void qrtr_ns_init(void)
+ 	}
+ 
+ 	qrtr_ns.workqueue = alloc_workqueue("qrtr_ns_handler", WQ_UNBOUND, 1);
+-	if (!qrtr_ns.workqueue)
++	if (!qrtr_ns.workqueue) {
++		ret = -ENOMEM;
+ 		goto err_sock;
++	}
+ 
+ 	qrtr_ns.sock->sk->sk_data_ready = qrtr_ns_data_ready;
+ 
+diff --git a/net/sched/act_vlan.c b/net/sched/act_vlan.c
+index 1cac3c6fbb49c..a108469c664f7 100644
+--- a/net/sched/act_vlan.c
++++ b/net/sched/act_vlan.c
+@@ -70,7 +70,7 @@ static int tcf_vlan_act(struct sk_buff *skb, const struct tc_action *a,
+ 		/* replace the vid */
+ 		tci = (tci & ~VLAN_VID_MASK) | p->tcfv_push_vid;
+ 		/* replace prio bits, if tcfv_push_prio specified */
+-		if (p->tcfv_push_prio) {
++		if (p->tcfv_push_prio_exists) {
+ 			tci &= ~VLAN_PRIO_MASK;
+ 			tci |= p->tcfv_push_prio << VLAN_PRIO_SHIFT;
+ 		}
+@@ -121,6 +121,7 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla,
+ 	struct tc_action_net *tn = net_generic(net, vlan_net_id);
+ 	struct nlattr *tb[TCA_VLAN_MAX + 1];
+ 	struct tcf_chain *goto_ch = NULL;
++	bool push_prio_exists = false;
+ 	struct tcf_vlan_params *p;
+ 	struct tc_vlan *parm;
+ 	struct tcf_vlan *v;
+@@ -189,7 +190,8 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla,
+ 			push_proto = htons(ETH_P_8021Q);
+ 		}
+ 
+-		if (tb[TCA_VLAN_PUSH_VLAN_PRIORITY])
++		push_prio_exists = !!tb[TCA_VLAN_PUSH_VLAN_PRIORITY];
++		if (push_prio_exists)
+ 			push_prio = nla_get_u8(tb[TCA_VLAN_PUSH_VLAN_PRIORITY]);
+ 		break;
+ 	case TCA_VLAN_ACT_POP_ETH:
+@@ -241,6 +243,7 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla,
+ 	p->tcfv_action = action;
+ 	p->tcfv_push_vid = push_vid;
+ 	p->tcfv_push_prio = push_prio;
++	p->tcfv_push_prio_exists = push_prio_exists || action == TCA_VLAN_ACT_PUSH;
+ 	p->tcfv_push_proto = push_proto;
+ 
+ 	if (action == TCA_VLAN_ACT_PUSH_ETH) {
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index c4007b9cd16d6..5b274534264c2 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -304,7 +304,7 @@ static int tcindex_alloc_perfect_hash(struct net *net, struct tcindex_data *cp)
+ 	int i, err = 0;
+ 
+ 	cp->perfect = kcalloc(cp->hash, sizeof(struct tcindex_filter_result),
+-			      GFP_KERNEL);
++			      GFP_KERNEL | __GFP_NOWARN);
+ 	if (!cp->perfect)
+ 		return -ENOMEM;
+ 
+diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
+index 6335230a971e2..ade2d6ddc9148 100644
+--- a/net/sched/sch_qfq.c
++++ b/net/sched/sch_qfq.c
+@@ -485,11 +485,6 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ 
+ 	if (cl->qdisc != &noop_qdisc)
+ 		qdisc_hash_add(cl->qdisc, true);
+-	sch_tree_lock(sch);
+-	qdisc_class_hash_insert(&q->clhash, &cl->common);
+-	sch_tree_unlock(sch);
+-
+-	qdisc_class_hash_grow(sch, &q->clhash);
+ 
+ set_change_agg:
+ 	sch_tree_lock(sch);
+@@ -507,8 +502,11 @@ set_change_agg:
+ 	}
+ 	if (existing)
+ 		qfq_deact_rm_from_agg(q, cl);
++	else
++		qdisc_class_hash_insert(&q->clhash, &cl->common);
+ 	qfq_add_to_agg(q, new_agg, cl);
+ 	sch_tree_unlock(sch);
++	qdisc_class_hash_grow(sch, &q->clhash);
+ 
+ 	*arg = (unsigned long)cl;
+ 	return 0;
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index 39ed0e0afe6d9..c045f63d11fa6 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -591,11 +591,21 @@ static struct rpc_task *__rpc_find_next_queued_priority(struct rpc_wait_queue *q
+ 	struct list_head *q;
+ 	struct rpc_task *task;
+ 
++	/*
++	 * Service the privileged queue.
++	 */
++	q = &queue->tasks[RPC_NR_PRIORITY - 1];
++	if (queue->maxpriority > RPC_PRIORITY_PRIVILEGED && !list_empty(q)) {
++		task = list_first_entry(q, struct rpc_task, u.tk_wait.list);
++		goto out;
++	}
++
+ 	/*
+ 	 * Service a batch of tasks from a single owner.
+ 	 */
+ 	q = &queue->tasks[queue->priority];
+-	if (!list_empty(q) && --queue->nr) {
++	if (!list_empty(q) && queue->nr) {
++		queue->nr--;
+ 		task = list_first_entry(q, struct rpc_task, u.tk_wait.list);
+ 		goto out;
+ 	}
+diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
+index d4beca895992d..593846d252143 100644
+--- a/net/tipc/bcast.c
++++ b/net/tipc/bcast.c
+@@ -699,7 +699,7 @@ int tipc_bcast_init(struct net *net)
+ 	spin_lock_init(&tipc_net(net)->bclock);
+ 
+ 	if (!tipc_link_bc_create(net, 0, 0, NULL,
+-				 FB_MTU,
++				 one_page_mtu,
+ 				 BCLINK_WIN_DEFAULT,
+ 				 BCLINK_WIN_DEFAULT,
+ 				 0,
+diff --git a/net/tipc/msg.c b/net/tipc/msg.c
+index 88a3ed80094cd..91dcf648d32bb 100644
+--- a/net/tipc/msg.c
++++ b/net/tipc/msg.c
+@@ -44,12 +44,15 @@
+ #define MAX_FORWARD_SIZE 1024
+ #ifdef CONFIG_TIPC_CRYPTO
+ #define BUF_HEADROOM ALIGN(((LL_MAX_HEADER + 48) + EHDR_MAX_SIZE), 16)
+-#define BUF_TAILROOM (TIPC_AES_GCM_TAG_SIZE)
++#define BUF_OVERHEAD (BUF_HEADROOM + TIPC_AES_GCM_TAG_SIZE)
+ #else
+ #define BUF_HEADROOM (LL_MAX_HEADER + 48)
+-#define BUF_TAILROOM 16
++#define BUF_OVERHEAD BUF_HEADROOM
+ #endif
+ 
++const int one_page_mtu = PAGE_SIZE - SKB_DATA_ALIGN(BUF_OVERHEAD) -
++			 SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
++
+ static unsigned int align(unsigned int i)
+ {
+ 	return (i + 3) & ~3u;
+@@ -67,13 +70,8 @@ static unsigned int align(unsigned int i)
+ struct sk_buff *tipc_buf_acquire(u32 size, gfp_t gfp)
+ {
+ 	struct sk_buff *skb;
+-#ifdef CONFIG_TIPC_CRYPTO
+-	unsigned int buf_size = (BUF_HEADROOM + size + BUF_TAILROOM + 3) & ~3u;
+-#else
+-	unsigned int buf_size = (BUF_HEADROOM + size + 3) & ~3u;
+-#endif
+ 
+-	skb = alloc_skb_fclone(buf_size, gfp);
++	skb = alloc_skb_fclone(BUF_OVERHEAD + size, gfp);
+ 	if (skb) {
+ 		skb_reserve(skb, BUF_HEADROOM);
+ 		skb_put(skb, size);
+@@ -395,7 +393,8 @@ int tipc_msg_build(struct tipc_msg *mhdr, struct msghdr *m, int offset,
+ 		if (unlikely(!skb)) {
+ 			if (pktmax != MAX_MSG_SIZE)
+ 				return -ENOMEM;
+-			rc = tipc_msg_build(mhdr, m, offset, dsz, FB_MTU, list);
++			rc = tipc_msg_build(mhdr, m, offset, dsz,
++					    one_page_mtu, list);
+ 			if (rc != dsz)
+ 				return rc;
+ 			if (tipc_msg_assemble(list))
+diff --git a/net/tipc/msg.h b/net/tipc/msg.h
+index 5d64596ba9877..64ae4c4c44f8c 100644
+--- a/net/tipc/msg.h
++++ b/net/tipc/msg.h
+@@ -99,9 +99,10 @@ struct plist;
+ #define MAX_H_SIZE                60	/* Largest possible TIPC header size */
+ 
+ #define MAX_MSG_SIZE (MAX_H_SIZE + TIPC_MAX_USER_MSG_SIZE)
+-#define FB_MTU                  3744
+ #define TIPC_MEDIA_INFO_OFFSET	5
+ 
++extern const int one_page_mtu;
++
+ struct tipc_skb_cb {
+ 	union {
+ 		struct {
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 3abe5257f7577..15395683b8e2a 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1154,7 +1154,7 @@ static int tls_sw_do_sendpage(struct sock *sk, struct page *page,
+ 	int ret = 0;
+ 	bool eor;
+ 
+-	eor = !(flags & (MSG_MORE | MSG_SENDPAGE_NOTLAST));
++	eor = !(flags & MSG_SENDPAGE_NOTLAST);
+ 	sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
+ 
+ 	/* Call the sk_stream functions to manage the sndbuf mem. */
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index be9fd5a720117..3c7ce60fe9a5a 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -126,12 +126,15 @@ static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
+ static inline bool xp_aligned_validate_desc(struct xsk_buff_pool *pool,
+ 					    struct xdp_desc *desc)
+ {
+-	u64 chunk;
+-
+-	if (desc->len > pool->chunk_size)
+-		return false;
++	u64 chunk, chunk_end;
+ 
+ 	chunk = xp_aligned_extract_addr(pool, desc->addr);
++	if (likely(desc->len)) {
++		chunk_end = xp_aligned_extract_addr(pool, desc->addr + desc->len - 1);
++		if (chunk != chunk_end)
++			return false;
++	}
++
+ 	if (chunk >= pool->addrs_cnt)
+ 		return false;
+ 
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index 6d6917b68856f..e843b0d9e2a61 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -268,6 +268,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ 		xso->num_exthdrs = 0;
+ 		xso->flags = 0;
+ 		xso->dev = NULL;
++		xso->real_dev = NULL;
+ 		dev_put(dev);
+ 
+ 		if (err != -EOPNOTSUPP)
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index e4cb0ff4dcf41..ac907b9d32d1e 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -711,15 +711,8 @@ out:
+ static int xfrm6_extract_output(struct xfrm_state *x, struct sk_buff *skb)
+ {
+ #if IS_ENABLED(CONFIG_IPV6)
+-	unsigned int ptr = 0;
+ 	int err;
+ 
+-	if (x->outer_mode.encap == XFRM_MODE_BEET &&
+-	    ipv6_find_hdr(skb, &ptr, NEXTHDR_FRAGMENT, NULL, NULL) >= 0) {
+-		net_warn_ratelimited("BEET mode doesn't support inner IPv6 fragments\n");
+-		return -EAFNOSUPPORT;
+-	}
+-
+ 	err = xfrm6_tunnel_check_size(skb);
+ 	if (err)
+ 		return err;
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 77499abd9f992..c158e70e8ae10 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -2516,7 +2516,7 @@ void xfrm_state_delete_tunnel(struct xfrm_state *x)
+ }
+ EXPORT_SYMBOL(xfrm_state_delete_tunnel);
+ 
+-u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
++u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu)
+ {
+ 	const struct xfrm_type *type = READ_ONCE(x->type);
+ 	struct crypto_aead *aead;
+@@ -2547,7 +2547,17 @@ u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
+ 	return ((mtu - x->props.header_len - crypto_aead_authsize(aead) -
+ 		 net_adj) & ~(blksize - 1)) + net_adj - 2;
+ }
+-EXPORT_SYMBOL_GPL(xfrm_state_mtu);
++EXPORT_SYMBOL_GPL(__xfrm_state_mtu);
++
++u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
++{
++	mtu = __xfrm_state_mtu(x, mtu);
++
++	if (x->props.family == AF_INET6 && mtu < IPV6_MIN_MTU)
++		return IPV6_MIN_MTU;
++
++	return mtu;
++}
+ 
+ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload)
+ {
+diff --git a/samples/bpf/xdp_redirect_user.c b/samples/bpf/xdp_redirect_user.c
+index 9ca2bf457cdae..3c92adc2a7bd0 100644
+--- a/samples/bpf/xdp_redirect_user.c
++++ b/samples/bpf/xdp_redirect_user.c
+@@ -131,7 +131,7 @@ int main(int argc, char **argv)
+ 	if (!(xdp_flags & XDP_FLAGS_SKB_MODE))
+ 		xdp_flags |= XDP_FLAGS_DRV_MODE;
+ 
+-	if (optind == argc) {
++	if (optind + 2 != argc) {
+ 		printf("usage: %s <IFNAME|IFINDEX>_IN <IFNAME|IFINDEX>_OUT\n", argv[0]);
+ 		return 1;
+ 	}
+@@ -219,5 +219,5 @@ int main(int argc, char **argv)
+ 	poll_stats(2, ifindex_out);
+ 
+ out:
+-	return 0;
++	return ret;
+ }
+diff --git a/scripts/Makefile.build b/scripts/Makefile.build
+index 4c058f12dd73c..8bd4e673383f3 100644
+--- a/scripts/Makefile.build
++++ b/scripts/Makefile.build
+@@ -275,7 +275,8 @@ define rule_as_o_S
+ endef
+ 
+ # Built-in and composite module parts
+-$(obj)/%.o: $(src)/%.c $(recordmcount_source) $(objtool_dep) FORCE
++.SECONDEXPANSION:
++$(obj)/%.o: $(src)/%.c $(recordmcount_source) $$(objtool_dep) FORCE
+ 	$(call if_changed_rule,cc_o_c)
+ 	$(call cmd,force_checksrc)
+ 
+@@ -356,7 +357,7 @@ cmd_modversions_S =								\
+ 	fi
+ endif
+ 
+-$(obj)/%.o: $(src)/%.S $(objtool_dep) FORCE
++$(obj)/%.o: $(src)/%.S $$(objtool_dep) FORCE
+ 	$(call if_changed_rule,as_o_S)
+ 
+ targets += $(filter-out $(subdir-builtin), $(real-obj-y))
+diff --git a/scripts/tools-support-relr.sh b/scripts/tools-support-relr.sh
+index 45e8aa360b457..cb55878bd5b81 100755
+--- a/scripts/tools-support-relr.sh
++++ b/scripts/tools-support-relr.sh
+@@ -7,7 +7,8 @@ trap "rm -f $tmp_file.o $tmp_file $tmp_file.bin" EXIT
+ cat << "END" | $CC -c -x c - -o $tmp_file.o >/dev/null 2>&1
+ void *p = &p;
+ END
+-$LD $tmp_file.o -shared -Bsymbolic --pack-dyn-relocs=relr -o $tmp_file
++$LD $tmp_file.o -shared -Bsymbolic --pack-dyn-relocs=relr \
++  --use-android-relr-tags -o $tmp_file
+ 
+ # Despite printing an error message, GNU nm still exits with exit code 0 if it
+ # sees a relr section. So we need to check that nothing is printed to stderr.
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index 76d19146d74bc..f1ca3cac9b861 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -521,7 +521,7 @@ void evm_inode_post_setattr(struct dentry *dentry, int ia_valid)
+ }
+ 
+ /*
+- * evm_inode_init_security - initializes security.evm
++ * evm_inode_init_security - initializes security.evm HMAC value
+  */
+ int evm_inode_init_security(struct inode *inode,
+ 				 const struct xattr *lsm_xattr,
+@@ -530,7 +530,8 @@ int evm_inode_init_security(struct inode *inode,
+ 	struct evm_xattr *xattr_data;
+ 	int rc;
+ 
+-	if (!evm_key_loaded() || !evm_protected_xattr(lsm_xattr->name))
++	if (!(evm_initialized & EVM_INIT_HMAC) ||
++	    !evm_protected_xattr(lsm_xattr->name))
+ 		return 0;
+ 
+ 	xattr_data = kzalloc(sizeof(*xattr_data), GFP_NOFS);
+diff --git a/security/integrity/evm/evm_secfs.c b/security/integrity/evm/evm_secfs.c
+index cfc3075769bb0..bc10c945f3ed5 100644
+--- a/security/integrity/evm/evm_secfs.c
++++ b/security/integrity/evm/evm_secfs.c
+@@ -66,12 +66,13 @@ static ssize_t evm_read_key(struct file *filp, char __user *buf,
+ static ssize_t evm_write_key(struct file *file, const char __user *buf,
+ 			     size_t count, loff_t *ppos)
+ {
+-	int i, ret;
++	unsigned int i;
++	int ret;
+ 
+ 	if (!capable(CAP_SYS_ADMIN) || (evm_initialized & EVM_SETUP_COMPLETE))
+ 		return -EPERM;
+ 
+-	ret = kstrtoint_from_user(buf, count, 0, &i);
++	ret = kstrtouint_from_user(buf, count, 0, &i);
+ 
+ 	if (ret)
+ 		return ret;
+@@ -80,12 +81,12 @@ static ssize_t evm_write_key(struct file *file, const char __user *buf,
+ 	if (!i || (i & ~EVM_INIT_MASK) != 0)
+ 		return -EINVAL;
+ 
+-	/* Don't allow a request to freshly enable metadata writes if
+-	 * keys are loaded.
++	/*
++	 * Don't allow a request to enable metadata writes if
++	 * an HMAC key is loaded.
+ 	 */
+ 	if ((i & EVM_ALLOW_METADATA_WRITES) &&
+-	    ((evm_initialized & EVM_KEY_MASK) != 0) &&
+-	    !(evm_initialized & EVM_ALLOW_METADATA_WRITES))
++	    (evm_initialized & EVM_INIT_HMAC) != 0)
+ 		return -EPERM;
+ 
+ 	if (i & EVM_INIT_HMAC) {
+diff --git a/sound/firewire/amdtp-stream.c b/sound/firewire/amdtp-stream.c
+index 5805c5de39fbf..7a282d8e71485 100644
+--- a/sound/firewire/amdtp-stream.c
++++ b/sound/firewire/amdtp-stream.c
+@@ -1404,14 +1404,17 @@ int amdtp_domain_start(struct amdtp_domain *d, unsigned int ir_delay_cycle)
+ 	unsigned int queue_size;
+ 	struct amdtp_stream *s;
+ 	int cycle;
++	bool found = false;
+ 	int err;
+ 
+ 	// Select an IT context as IRQ target.
+ 	list_for_each_entry(s, &d->streams, list) {
+-		if (s->direction == AMDTP_OUT_STREAM)
++		if (s->direction == AMDTP_OUT_STREAM) {
++			found = true;
+ 			break;
++		}
+ 	}
+-	if (!s)
++	if (!found)
+ 		return -ENXIO;
+ 	d->irq_target = s;
+ 
+diff --git a/sound/firewire/motu/motu-protocol-v2.c b/sound/firewire/motu/motu-protocol-v2.c
+index e59e69ab1538b..784073aa10265 100644
+--- a/sound/firewire/motu/motu-protocol-v2.c
++++ b/sound/firewire/motu/motu-protocol-v2.c
+@@ -353,6 +353,7 @@ const struct snd_motu_spec snd_motu_spec_8pre = {
+ 	.protocol_version = SND_MOTU_PROTOCOL_V2,
+ 	.flags = SND_MOTU_SPEC_RX_MIDI_2ND_Q |
+ 		 SND_MOTU_SPEC_TX_MIDI_2ND_Q,
+-	.tx_fixed_pcm_chunks = {10, 6, 0},
+-	.rx_fixed_pcm_chunks = {10, 6, 0},
++	// Two dummy chunks always in the end of data block.
++	.tx_fixed_pcm_chunks = {10, 10, 0},
++	.rx_fixed_pcm_chunks = {6, 6, 0},
+ };
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index e46e43dac6bfd..1cc83344c2ecf 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -385,6 +385,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ 		alc_update_coef_idx(codec, 0x67, 0xf000, 0x3000);
+ 		fallthrough;
+ 	case 0x10ec0215:
++	case 0x10ec0230:
+ 	case 0x10ec0233:
+ 	case 0x10ec0235:
+ 	case 0x10ec0236:
+@@ -3153,6 +3154,7 @@ static void alc_disable_headset_jack_key(struct hda_codec *codec)
+ 		alc_update_coef_idx(codec, 0x49, 0x0045, 0x0);
+ 		alc_update_coef_idx(codec, 0x44, 0x0045 << 8, 0x0);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_write_coef_idx(codec, 0x48, 0x0);
+@@ -3180,6 +3182,7 @@ static void alc_enable_headset_jack_key(struct hda_codec *codec)
+ 		alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045);
+ 		alc_update_coef_idx(codec, 0x44, 0x007f << 8, 0x0045 << 8);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_write_coef_idx(codec, 0x48, 0xd011);
+@@ -4737,6 +4740,7 @@ static void alc_headset_mode_unplugged(struct hda_codec *codec)
+ 	case 0x10ec0255:
+ 		alc_process_coef_fw(codec, coef0255);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_process_coef_fw(codec, coef0256);
+@@ -4851,6 +4855,7 @@ static void alc_headset_mode_mic_in(struct hda_codec *codec, hda_nid_t hp_pin,
+ 		alc_process_coef_fw(codec, coef0255);
+ 		snd_hda_set_pin_ctl_cache(codec, mic_pin, PIN_VREF50);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_write_coef_idx(codec, 0x45, 0xc489);
+@@ -5000,6 +5005,7 @@ static void alc_headset_mode_default(struct hda_codec *codec)
+ 	case 0x10ec0255:
+ 		alc_process_coef_fw(codec, coef0255);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_write_coef_idx(codec, 0x1b, 0x0e4b);
+@@ -5098,6 +5104,7 @@ static void alc_headset_mode_ctia(struct hda_codec *codec)
+ 	case 0x10ec0255:
+ 		alc_process_coef_fw(codec, coef0255);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_process_coef_fw(codec, coef0256);
+@@ -5211,6 +5218,7 @@ static void alc_headset_mode_omtp(struct hda_codec *codec)
+ 	case 0x10ec0255:
+ 		alc_process_coef_fw(codec, coef0255);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_process_coef_fw(codec, coef0256);
+@@ -5311,6 +5319,7 @@ static void alc_determine_headset_type(struct hda_codec *codec)
+ 		val = alc_read_coef_idx(codec, 0x46);
+ 		is_ctia = (val & 0x0070) == 0x0070;
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_write_coef_idx(codec, 0x1b, 0x0e4b);
+@@ -5604,6 +5613,7 @@ static void alc255_set_default_jack_type(struct hda_codec *codec)
+ 	case 0x10ec0255:
+ 		alc_process_coef_fw(codec, alc255fw);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_process_coef_fw(codec, alc256fw);
+@@ -6204,6 +6214,7 @@ static void alc_combo_jack_hp_jd_restart(struct hda_codec *codec)
+ 		alc_update_coef_idx(codec, 0x4a, 0x8000, 1 << 15); /* Reset HP JD */
+ 		alc_update_coef_idx(codec, 0x4a, 0x8000, 0 << 15);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0235:
+ 	case 0x10ec0236:
+ 	case 0x10ec0255:
+@@ -6336,6 +6347,24 @@ static void alc_fixup_no_int_mic(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc285_fixup_hp_spectre_x360(struct hda_codec *codec,
++					  const struct hda_fixup *fix, int action)
++{
++	static const hda_nid_t conn[] = { 0x02 };
++	static const struct hda_pintbl pincfgs[] = {
++		{ 0x14, 0x90170110 },  /* rear speaker */
++		{ }
++	};
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		snd_hda_apply_pincfgs(codec, pincfgs);
++		/* force front speaker to DAC1 */
++		snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn);
++		break;
++	}
++}
++
+ /* for hda_fixup_thinkpad_acpi() */
+ #include "thinkpad_helper.c"
+ 
+@@ -7802,6 +7831,8 @@ static const struct hda_fixup alc269_fixups[] = {
+ 			{ 0x20, AC_VERB_SET_PROC_COEF, 0x4e4b },
+ 			{ }
+ 		},
++		.chained = true,
++		.chain_id = ALC289_FIXUP_ASUS_GA401,
+ 	},
+ 	[ALC285_FIXUP_HP_GPIO_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -8113,13 +8144,8 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chain_id = ALC269_FIXUP_HP_LINE1_MIC1_LED,
+ 	},
+ 	[ALC285_FIXUP_HP_SPECTRE_X360] = {
+-		.type = HDA_FIXUP_PINS,
+-		.v.pins = (const struct hda_pintbl[]) {
+-			{ 0x14, 0x90170110 }, /* enable top speaker */
+-			{}
+-		},
+-		.chained = true,
+-		.chain_id = ALC285_FIXUP_SPEAKER2_TO_DAC1,
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_hp_spectre_x360,
+ 	},
+ 	[ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -8305,6 +8331,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
++	SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
+ 	SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+@@ -8322,13 +8349,19 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x87e7, "HP ProBook 450 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x87f1, "HP ProBook 630 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f2, "HP ProBook 640 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
++	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884c, "HP EliteBook 840 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8862, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8863, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+@@ -9326,6 +9359,7 @@ static int patch_alc269(struct hda_codec *codec)
+ 		spec->shutup = alc256_shutup;
+ 		spec->init_hook = alc256_init;
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		spec->codec_variant = ALC269_TYPE_ALC256;
+@@ -10617,6 +10651,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = {
+ 	HDA_CODEC_ENTRY(0x10ec0221, "ALC221", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0222, "ALC222", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0225, "ALC225", patch_alc269),
++	HDA_CODEC_ENTRY(0x10ec0230, "ALC236", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0231, "ALC231", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0233, "ALC233", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0234, "ALC234", patch_alc269),
+diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
+index 6fb6f36d0d377..3707dc27324d2 100644
+--- a/sound/pci/intel8x0.c
++++ b/sound/pci/intel8x0.c
+@@ -715,7 +715,7 @@ static inline void snd_intel8x0_update(struct intel8x0 *chip, struct ichdev *ich
+ 	int status, civ, i, step;
+ 	int ack = 0;
+ 
+-	if (!ichdev->prepared || ichdev->suspended)
++	if (!(ichdev->prepared || chip->in_measurement) || ichdev->suspended)
+ 		return;
+ 
+ 	spin_lock_irqsave(&chip->reg_lock, flags);
+diff --git a/sound/soc/atmel/atmel-i2s.c b/sound/soc/atmel/atmel-i2s.c
+index bbe2b638abb58..d870f56c44cfc 100644
+--- a/sound/soc/atmel/atmel-i2s.c
++++ b/sound/soc/atmel/atmel-i2s.c
+@@ -200,6 +200,7 @@ struct atmel_i2s_dev {
+ 	unsigned int				fmt;
+ 	const struct atmel_i2s_gck_param	*gck_param;
+ 	const struct atmel_i2s_caps		*caps;
++	int					clk_use_no;
+ };
+ 
+ static irqreturn_t atmel_i2s_interrupt(int irq, void *dev_id)
+@@ -321,9 +322,16 @@ static int atmel_i2s_hw_params(struct snd_pcm_substream *substream,
+ {
+ 	struct atmel_i2s_dev *dev = snd_soc_dai_get_drvdata(dai);
+ 	bool is_playback = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK);
+-	unsigned int mr = 0;
++	unsigned int mr = 0, mr_mask;
+ 	int ret;
+ 
++	mr_mask = ATMEL_I2SC_MR_FORMAT_MASK | ATMEL_I2SC_MR_MODE_MASK |
++		ATMEL_I2SC_MR_DATALENGTH_MASK;
++	if (is_playback)
++		mr_mask |= ATMEL_I2SC_MR_TXMONO;
++	else
++		mr_mask |= ATMEL_I2SC_MR_RXMONO;
++
+ 	switch (dev->fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ 	case SND_SOC_DAIFMT_I2S:
+ 		mr |= ATMEL_I2SC_MR_FORMAT_I2S;
+@@ -402,7 +410,7 @@ static int atmel_i2s_hw_params(struct snd_pcm_substream *substream,
+ 		return -EINVAL;
+ 	}
+ 
+-	return regmap_write(dev->regmap, ATMEL_I2SC_MR, mr);
++	return regmap_update_bits(dev->regmap, ATMEL_I2SC_MR, mr_mask, mr);
+ }
+ 
+ static int atmel_i2s_switch_mck_generator(struct atmel_i2s_dev *dev,
+@@ -495,18 +503,28 @@ static int atmel_i2s_trigger(struct snd_pcm_substream *substream, int cmd,
+ 	is_master = (mr & ATMEL_I2SC_MR_MODE_MASK) == ATMEL_I2SC_MR_MODE_MASTER;
+ 
+ 	/* If master starts, enable the audio clock. */
+-	if (is_master && mck_enabled)
+-		err = atmel_i2s_switch_mck_generator(dev, true);
+-	if (err)
+-		return err;
++	if (is_master && mck_enabled) {
++		if (!dev->clk_use_no) {
++			err = atmel_i2s_switch_mck_generator(dev, true);
++			if (err)
++				return err;
++		}
++		dev->clk_use_no++;
++	}
+ 
+ 	err = regmap_write(dev->regmap, ATMEL_I2SC_CR, cr);
+ 	if (err)
+ 		return err;
+ 
+ 	/* If master stops, disable the audio clock. */
+-	if (is_master && !mck_enabled)
+-		err = atmel_i2s_switch_mck_generator(dev, false);
++	if (is_master && !mck_enabled) {
++		if (dev->clk_use_no == 1) {
++			err = atmel_i2s_switch_mck_generator(dev, false);
++			if (err)
++				return err;
++		}
++		dev->clk_use_no--;
++	}
+ 
+ 	return err;
+ }
+diff --git a/sound/soc/codecs/cs42l42.h b/sound/soc/codecs/cs42l42.h
+index 866d7c873e3c9..ca2019732013e 100644
+--- a/sound/soc/codecs/cs42l42.h
++++ b/sound/soc/codecs/cs42l42.h
+@@ -77,7 +77,7 @@
+ #define CS42L42_HP_PDN_SHIFT		3
+ #define CS42L42_HP_PDN_MASK		(1 << CS42L42_HP_PDN_SHIFT)
+ #define CS42L42_ADC_PDN_SHIFT		2
+-#define CS42L42_ADC_PDN_MASK		(1 << CS42L42_HP_PDN_SHIFT)
++#define CS42L42_ADC_PDN_MASK		(1 << CS42L42_ADC_PDN_SHIFT)
+ #define CS42L42_PDN_ALL_SHIFT		0
+ #define CS42L42_PDN_ALL_MASK		(1 << CS42L42_PDN_ALL_SHIFT)
+ 
+diff --git a/sound/soc/codecs/max98373-sdw.c b/sound/soc/codecs/max98373-sdw.c
+index 14fd2f9a0bf3a..39afa011f0e27 100644
+--- a/sound/soc/codecs/max98373-sdw.c
++++ b/sound/soc/codecs/max98373-sdw.c
+@@ -258,7 +258,7 @@ static __maybe_unused int max98373_resume(struct device *dev)
+ 	struct max98373_priv *max98373 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!max98373->hw_init)
++	if (!max98373->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+@@ -349,7 +349,7 @@ static int max98373_io_init(struct sdw_slave *slave)
+ 	struct device *dev = &slave->dev;
+ 	struct max98373_priv *max98373 = dev_get_drvdata(dev);
+ 
+-	if (max98373->pm_init_once) {
++	if (max98373->first_hw_init) {
+ 		regcache_cache_only(max98373->regmap, false);
+ 		regcache_cache_bypass(max98373->regmap, true);
+ 	}
+@@ -357,7 +357,7 @@ static int max98373_io_init(struct sdw_slave *slave)
+ 	/*
+ 	 * PM runtime is only enabled when a Slave reports as Attached
+ 	 */
+-	if (!max98373->pm_init_once) {
++	if (!max98373->first_hw_init) {
+ 		/* set autosuspend parameters */
+ 		pm_runtime_set_autosuspend_delay(dev, 3000);
+ 		pm_runtime_use_autosuspend(dev);
+@@ -449,12 +449,12 @@ static int max98373_io_init(struct sdw_slave *slave)
+ 	regmap_write(max98373->regmap, MAX98373_R20B5_BDE_EN, 1);
+ 	regmap_write(max98373->regmap, MAX98373_R20E2_LIMITER_EN, 1);
+ 
+-	if (max98373->pm_init_once) {
++	if (max98373->first_hw_init) {
+ 		regcache_cache_bypass(max98373->regmap, false);
+ 		regcache_mark_dirty(max98373->regmap);
+ 	}
+ 
+-	max98373->pm_init_once = true;
++	max98373->first_hw_init = true;
+ 	max98373->hw_init = true;
+ 
+ 	pm_runtime_mark_last_busy(dev);
+@@ -773,7 +773,7 @@ static int max98373_init(struct sdw_slave *slave, struct regmap *regmap)
+ 	max98373_slot_config(dev, max98373);
+ 
+ 	max98373->hw_init = false;
+-	max98373->pm_init_once = false;
++	max98373->first_hw_init = false;
+ 
+ 	/* codec registration  */
+ 	ret = devm_snd_soc_register_component(dev, &soc_codec_dev_max98373_sdw,
+diff --git a/sound/soc/codecs/max98373.h b/sound/soc/codecs/max98373.h
+index 4ab29b9d51c74..010f6bb21e9a1 100644
+--- a/sound/soc/codecs/max98373.h
++++ b/sound/soc/codecs/max98373.h
+@@ -215,7 +215,7 @@ struct max98373_priv {
+ 	/* variables to support soundwire */
+ 	struct sdw_slave *slave;
+ 	bool hw_init;
+-	bool pm_init_once;
++	bool first_hw_init;
+ 	int slot;
+ 	unsigned int rx_mask;
+ };
+diff --git a/sound/soc/codecs/rk3328_codec.c b/sound/soc/codecs/rk3328_codec.c
+index 940a2fa933edb..aed18cbb9f68e 100644
+--- a/sound/soc/codecs/rk3328_codec.c
++++ b/sound/soc/codecs/rk3328_codec.c
+@@ -474,7 +474,8 @@ static int rk3328_platform_probe(struct platform_device *pdev)
+ 	rk3328->pclk = devm_clk_get(&pdev->dev, "pclk");
+ 	if (IS_ERR(rk3328->pclk)) {
+ 		dev_err(&pdev->dev, "can't get acodec pclk\n");
+-		return PTR_ERR(rk3328->pclk);
++		ret = PTR_ERR(rk3328->pclk);
++		goto err_unprepare_mclk;
+ 	}
+ 
+ 	ret = clk_prepare_enable(rk3328->pclk);
+@@ -484,19 +485,34 @@ static int rk3328_platform_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	base = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(base))
+-		return PTR_ERR(base);
++	if (IS_ERR(base)) {
++		ret = PTR_ERR(base);
++		goto err_unprepare_pclk;
++	}
+ 
+ 	rk3328->regmap = devm_regmap_init_mmio(&pdev->dev, base,
+ 					       &rk3328_codec_regmap_config);
+-	if (IS_ERR(rk3328->regmap))
+-		return PTR_ERR(rk3328->regmap);
++	if (IS_ERR(rk3328->regmap)) {
++		ret = PTR_ERR(rk3328->regmap);
++		goto err_unprepare_pclk;
++	}
+ 
+ 	platform_set_drvdata(pdev, rk3328);
+ 
+-	return devm_snd_soc_register_component(&pdev->dev, &soc_codec_rk3328,
++	ret = devm_snd_soc_register_component(&pdev->dev, &soc_codec_rk3328,
+ 					       rk3328_dai,
+ 					       ARRAY_SIZE(rk3328_dai));
++	if (ret)
++		goto err_unprepare_pclk;
++
++	return 0;
++
++err_unprepare_pclk:
++	clk_disable_unprepare(rk3328->pclk);
++
++err_unprepare_mclk:
++	clk_disable_unprepare(rk3328->mclk);
++	return ret;
+ }
+ 
+ static const struct of_device_id rk3328_codec_of_match[] = {
+diff --git a/sound/soc/codecs/rt1308-sdw.c b/sound/soc/codecs/rt1308-sdw.c
+index c2621b0afe6c1..31daa749c3db4 100644
+--- a/sound/soc/codecs/rt1308-sdw.c
++++ b/sound/soc/codecs/rt1308-sdw.c
+@@ -709,7 +709,7 @@ static int __maybe_unused rt1308_dev_resume(struct device *dev)
+ 	struct rt1308_sdw_priv *rt1308 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt1308->hw_init)
++	if (!rt1308->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/codecs/rt5682-i2c.c b/sound/soc/codecs/rt5682-i2c.c
+index 7e652843c57d9..547445d1e3c69 100644
+--- a/sound/soc/codecs/rt5682-i2c.c
++++ b/sound/soc/codecs/rt5682-i2c.c
+@@ -268,6 +268,7 @@ static void rt5682_i2c_shutdown(struct i2c_client *client)
+ {
+ 	struct rt5682_priv *rt5682 = i2c_get_clientdata(client);
+ 
++	disable_irq(client->irq);
+ 	cancel_delayed_work_sync(&rt5682->jack_detect_work);
+ 	cancel_delayed_work_sync(&rt5682->jd_check_work);
+ 
+diff --git a/sound/soc/codecs/rt5682-sdw.c b/sound/soc/codecs/rt5682-sdw.c
+index aa6c325faeab2..c9868dd096fcd 100644
+--- a/sound/soc/codecs/rt5682-sdw.c
++++ b/sound/soc/codecs/rt5682-sdw.c
+@@ -375,18 +375,12 @@ static int rt5682_sdw_init(struct device *dev, struct regmap *regmap,
+ static int rt5682_io_init(struct device *dev, struct sdw_slave *slave)
+ {
+ 	struct rt5682_priv *rt5682 = dev_get_drvdata(dev);
+-	int ret = 0;
++	int ret = 0, loop = 10;
+ 	unsigned int val;
+ 
+ 	if (rt5682->hw_init)
+ 		return 0;
+ 
+-	regmap_read(rt5682->regmap, RT5682_DEVICE_ID, &val);
+-	if (val != DEVICE_ID) {
+-		dev_err(dev, "Device with ID register %x is not rt5682\n", val);
+-		return -ENODEV;
+-	}
+-
+ 	/*
+ 	 * PM runtime is only enabled when a Slave reports as Attached
+ 	 */
+@@ -411,6 +405,19 @@ static int rt5682_io_init(struct device *dev, struct sdw_slave *slave)
+ 		regcache_cache_bypass(rt5682->regmap, true);
+ 	}
+ 
++	while (loop > 0) {
++		regmap_read(rt5682->regmap, RT5682_DEVICE_ID, &val);
++		if (val == DEVICE_ID)
++			break;
++		dev_warn(dev, "Device with ID register %x is not rt5682\n", val);
++		usleep_range(30000, 30005);
++		loop--;
++	}
++	if (val != DEVICE_ID) {
++		dev_err(dev, "Device with ID register %x is not rt5682\n", val);
++		return -ENODEV;
++	}
++
+ 	rt5682_calibrate(rt5682);
+ 
+ 	if (rt5682->first_hw_init) {
+@@ -734,7 +741,7 @@ static int __maybe_unused rt5682_dev_resume(struct device *dev)
+ 	struct rt5682_priv *rt5682 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt5682->hw_init)
++	if (!rt5682->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/codecs/rt700-sdw.c b/sound/soc/codecs/rt700-sdw.c
+index fb77e77a4ebd5..3a1db79030d71 100644
+--- a/sound/soc/codecs/rt700-sdw.c
++++ b/sound/soc/codecs/rt700-sdw.c
+@@ -498,7 +498,7 @@ static int __maybe_unused rt700_dev_resume(struct device *dev)
+ 	struct rt700_priv *rt700 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt700->hw_init)
++	if (!rt700->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/codecs/rt711-sdw.c b/sound/soc/codecs/rt711-sdw.c
+index f0a0691bd31cc..eb54e90c1c604 100644
+--- a/sound/soc/codecs/rt711-sdw.c
++++ b/sound/soc/codecs/rt711-sdw.c
+@@ -500,7 +500,7 @@ static int __maybe_unused rt711_dev_resume(struct device *dev)
+ 	struct rt711_priv *rt711 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt711->hw_init)
++	if (!rt711->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/codecs/rt715-sdw.c b/sound/soc/codecs/rt715-sdw.c
+index 8f0aa1e8a2737..361a90ae594cd 100644
+--- a/sound/soc/codecs/rt715-sdw.c
++++ b/sound/soc/codecs/rt715-sdw.c
+@@ -541,7 +541,7 @@ static int __maybe_unused rt715_dev_resume(struct device *dev)
+ 	struct rt715_priv *rt715 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt715->hw_init)
++	if (!rt715->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/fsl/fsl_spdif.c b/sound/soc/fsl/fsl_spdif.c
+index b0f643fefe1e8..15bcb0f38ec9e 100644
+--- a/sound/soc/fsl/fsl_spdif.c
++++ b/sound/soc/fsl/fsl_spdif.c
+@@ -1358,14 +1358,27 @@ static int fsl_spdif_probe(struct platform_device *pdev)
+ 					      &spdif_priv->cpu_dai_drv, 1);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to register DAI: %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	ret = imx_pcm_dma_init(pdev, IMX_SPDIF_DMABUF_SIZE);
+-	if (ret && ret != -EPROBE_DEFER)
+-		dev_err(&pdev->dev, "imx_pcm_dma_init failed: %d\n", ret);
++	if (ret) {
++		dev_err_probe(&pdev->dev, ret, "imx_pcm_dma_init failed\n");
++		goto err_pm_disable;
++	}
+ 
+ 	return ret;
++
++err_pm_disable:
++	pm_runtime_disable(&pdev->dev);
++	return ret;
++}
++
++static int fsl_spdif_remove(struct platform_device *pdev)
++{
++	pm_runtime_disable(&pdev->dev);
++
++	return 0;
+ }
+ 
+ #ifdef CONFIG_PM
+@@ -1374,6 +1387,9 @@ static int fsl_spdif_runtime_suspend(struct device *dev)
+ 	struct fsl_spdif_priv *spdif_priv = dev_get_drvdata(dev);
+ 	int i;
+ 
++	/* Disable all the interrupts */
++	regmap_update_bits(spdif_priv->regmap, REG_SPDIF_SIE, 0xffffff, 0);
++
+ 	regmap_read(spdif_priv->regmap, REG_SPDIF_SRPC,
+ 			&spdif_priv->regcache_srpc);
+ 	regcache_cache_only(spdif_priv->regmap, true);
+@@ -1469,6 +1485,7 @@ static struct platform_driver fsl_spdif_driver = {
+ 		.pm = &fsl_spdif_pm,
+ 	},
+ 	.probe = fsl_spdif_probe,
++	.remove = fsl_spdif_remove,
+ };
+ 
+ module_platform_driver(fsl_spdif_driver);
+diff --git a/sound/soc/hisilicon/hi6210-i2s.c b/sound/soc/hisilicon/hi6210-i2s.c
+index 907f5f1f7b445..ff05b9779e4be 100644
+--- a/sound/soc/hisilicon/hi6210-i2s.c
++++ b/sound/soc/hisilicon/hi6210-i2s.c
+@@ -102,18 +102,15 @@ static int hi6210_i2s_startup(struct snd_pcm_substream *substream,
+ 
+ 	for (n = 0; n < i2s->clocks; n++) {
+ 		ret = clk_prepare_enable(i2s->clk[n]);
+-		if (ret) {
+-			while (n--)
+-				clk_disable_unprepare(i2s->clk[n]);
+-			return ret;
+-		}
++		if (ret)
++			goto err_unprepare_clk;
+ 	}
+ 
+ 	ret = clk_set_rate(i2s->clk[CLK_I2S_BASE], 49152000);
+ 	if (ret) {
+ 		dev_err(i2s->dev, "%s: setting 49.152MHz base rate failed %d\n",
+ 			__func__, ret);
+-		return ret;
++		goto err_unprepare_clk;
+ 	}
+ 
+ 	/* enable clock before frequency division */
+@@ -165,6 +162,11 @@ static int hi6210_i2s_startup(struct snd_pcm_substream *substream,
+ 	hi6210_write_reg(i2s, HII2S_SW_RST_N, val);
+ 
+ 	return 0;
++
++err_unprepare_clk:
++	while (n--)
++		clk_disable_unprepare(i2s->clk[n]);
++	return ret;
+ }
+ 
+ static void hi6210_i2s_shutdown(struct snd_pcm_substream *substream,
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 9dc982c2c7760..75a0bfedb4493 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -196,6 +196,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(SOF_RT711_JD_SRC_JD1 |
+ 					SOF_SDW_TGL_HDMI |
++					SOF_RT715_DAI_ID_FIX |
+ 					SOF_SDW_PCH_DMIC),
+ 	},
+ 	{}
+diff --git a/sound/soc/mediatek/common/mtk-btcvsd.c b/sound/soc/mediatek/common/mtk-btcvsd.c
+index 668fef3e319a0..86e982e3209ed 100644
+--- a/sound/soc/mediatek/common/mtk-btcvsd.c
++++ b/sound/soc/mediatek/common/mtk-btcvsd.c
+@@ -1281,7 +1281,7 @@ static const struct snd_soc_component_driver mtk_btcvsd_snd_platform = {
+ 
+ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
+ {
+-	int ret = 0;
++	int ret;
+ 	int irq_id;
+ 	u32 offset[5] = {0, 0, 0, 0, 0};
+ 	struct mtk_btcvsd_snd *btcvsd;
+@@ -1337,7 +1337,8 @@ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
+ 	btcvsd->bt_sram_bank2_base = of_iomap(dev->of_node, 1);
+ 	if (!btcvsd->bt_sram_bank2_base) {
+ 		dev_err(dev, "iomap bt_sram_bank2_base fail\n");
+-		return -EIO;
++		ret = -EIO;
++		goto unmap_pkv_err;
+ 	}
+ 
+ 	btcvsd->infra = syscon_regmap_lookup_by_phandle(dev->of_node,
+@@ -1345,7 +1346,8 @@ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
+ 	if (IS_ERR(btcvsd->infra)) {
+ 		dev_err(dev, "cannot find infra controller: %ld\n",
+ 			PTR_ERR(btcvsd->infra));
+-		return PTR_ERR(btcvsd->infra);
++		ret = PTR_ERR(btcvsd->infra);
++		goto unmap_bank2_err;
+ 	}
+ 
+ 	/* get offset */
+@@ -1354,7 +1356,7 @@ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
+ 					 ARRAY_SIZE(offset));
+ 	if (ret) {
+ 		dev_warn(dev, "%s(), get offset fail, ret %d\n", __func__, ret);
+-		return ret;
++		goto unmap_bank2_err;
+ 	}
+ 	btcvsd->infra_misc_offset = offset[0];
+ 	btcvsd->conn_bt_cvsd_mask = offset[1];
+@@ -1373,8 +1375,18 @@ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
+ 	mtk_btcvsd_snd_set_state(btcvsd, btcvsd->tx, BT_SCO_STATE_IDLE);
+ 	mtk_btcvsd_snd_set_state(btcvsd, btcvsd->rx, BT_SCO_STATE_IDLE);
+ 
+-	return devm_snd_soc_register_component(dev, &mtk_btcvsd_snd_platform,
+-					       NULL, 0);
++	ret = devm_snd_soc_register_component(dev, &mtk_btcvsd_snd_platform,
++					      NULL, 0);
++	if (ret)
++		goto unmap_bank2_err;
++
++	return 0;
++
++unmap_bank2_err:
++	iounmap(btcvsd->bt_sram_bank2_base);
++unmap_pkv_err:
++	iounmap(btcvsd->bt_pkv_base);
++	return ret;
+ }
+ 
+ static int mtk_btcvsd_snd_remove(struct platform_device *pdev)
+diff --git a/sound/soc/sh/rcar/adg.c b/sound/soc/sh/rcar/adg.c
+index b9aacf3d3b29c..7532ab27a48df 100644
+--- a/sound/soc/sh/rcar/adg.c
++++ b/sound/soc/sh/rcar/adg.c
+@@ -289,7 +289,6 @@ static void rsnd_adg_set_ssi_clk(struct rsnd_mod *ssi_mod, u32 val)
+ int rsnd_adg_clk_query(struct rsnd_priv *priv, unsigned int rate)
+ {
+ 	struct rsnd_adg *adg = rsnd_priv_to_adg(priv);
+-	struct clk *clk;
+ 	int i;
+ 	int sel_table[] = {
+ 		[CLKA] = 0x1,
+@@ -302,10 +301,9 @@ int rsnd_adg_clk_query(struct rsnd_priv *priv, unsigned int rate)
+ 	 * find suitable clock from
+ 	 * AUDIO_CLKA/AUDIO_CLKB/AUDIO_CLKC/AUDIO_CLKI.
+ 	 */
+-	for_each_rsnd_clk(clk, adg, i) {
++	for (i = 0; i < CLKMAX; i++)
+ 		if (rate == adg->clk_rate[i])
+ 			return sel_table[i];
+-	}
+ 
+ 	/*
+ 	 * find divided clock from BRGA/BRGB
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 91f0ed4a2e7eb..5c5b76c611480 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -208,9 +208,11 @@ static int parse_audio_format_rates_v1(struct snd_usb_audio *chip, struct audiof
+ 				continue;
+ 			/* C-Media CM6501 mislabels its 96 kHz altsetting */
+ 			/* Terratec Aureon 7.1 USB C-Media 6206, too */
++			/* Ozone Z90 USB C-Media, too */
+ 			if (rate == 48000 && nr_rates == 1 &&
+ 			    (chip->usb_id == USB_ID(0x0d8c, 0x0201) ||
+ 			     chip->usb_id == USB_ID(0x0d8c, 0x0102) ||
++			     chip->usb_id == USB_ID(0x0d8c, 0x0078) ||
+ 			     chip->usb_id == USB_ID(0x0ccd, 0x00b1)) &&
+ 			    fp->altsetting == 5 && fp->maxpacksize == 392)
+ 				rate = 96000;
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 375cfb9c9ab7e..8e11582fbae98 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -3273,8 +3273,9 @@ static void snd_usb_mixer_dump_cval(struct snd_info_buffer *buffer,
+ 				    struct usb_mixer_elem_list *list)
+ {
+ 	struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list);
+-	static const char * const val_types[] = {"BOOLEAN", "INV_BOOLEAN",
+-				    "S8", "U8", "S16", "U16"};
++	static const char * const val_types[] = {
++		"BOOLEAN", "INV_BOOLEAN", "S8", "U8", "S16", "U16", "S32", "U32",
++	};
+ 	snd_iprintf(buffer, "    Info: id=%i, control=%i, cmask=0x%x, "
+ 			    "channels=%i, type=\"%s\"\n", cval->head.id,
+ 			    cval->control, cval->cmask, cval->channels,
+@@ -3630,6 +3631,9 @@ static int restore_mixer_value(struct usb_mixer_elem_list *list)
+ 	struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list);
+ 	int c, err, idx;
+ 
++	if (cval->val_type == USB_MIXER_BESPOKEN)
++		return 0;
++
+ 	if (cval->cmask) {
+ 		idx = 0;
+ 		for (c = 0; c < MAX_CHANNELS; c++) {
+diff --git a/sound/usb/mixer.h b/sound/usb/mixer.h
+index c29e27ac43a7a..6d20ba7ee88fd 100644
+--- a/sound/usb/mixer.h
++++ b/sound/usb/mixer.h
+@@ -55,6 +55,7 @@ enum {
+ 	USB_MIXER_U16,
+ 	USB_MIXER_S32,
+ 	USB_MIXER_U32,
++	USB_MIXER_BESPOKEN,	/* non-standard type */
+ };
+ 
+ typedef void (*usb_mixer_elem_dump_func_t)(struct snd_info_buffer *buffer,
+diff --git a/sound/usb/mixer_scarlett_gen2.c b/sound/usb/mixer_scarlett_gen2.c
+index 9a98b0c048e33..97e72b3e06c26 100644
+--- a/sound/usb/mixer_scarlett_gen2.c
++++ b/sound/usb/mixer_scarlett_gen2.c
+@@ -949,10 +949,15 @@ static int scarlett2_add_new_ctl(struct usb_mixer_interface *mixer,
+ 	if (!elem)
+ 		return -ENOMEM;
+ 
++	/* We set USB_MIXER_BESPOKEN type, so that the core USB mixer code
++	 * ignores them for resume and other operations.
++	 * Also, the head.id field is set to 0, as we don't use this field.
++	 */
+ 	elem->head.mixer = mixer;
+ 	elem->control = index;
+-	elem->head.id = index;
++	elem->head.id = 0;
+ 	elem->channels = channels;
++	elem->val_type = USB_MIXER_BESPOKEN;
+ 
+ 	kctl = snd_ctl_new1(ncontrol, elem);
+ 	if (!kctl) {
+diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
+index 33068d6ed5d6c..c58a135dc355e 100644
+--- a/tools/bpf/bpftool/main.c
++++ b/tools/bpf/bpftool/main.c
+@@ -338,8 +338,10 @@ static int do_batch(int argc, char **argv)
+ 		n_argc = make_args(buf, n_argv, BATCH_ARG_NB_MAX, lines);
+ 		if (!n_argc)
+ 			continue;
+-		if (n_argc < 0)
++		if (n_argc < 0) {
++			err = n_argc;
+ 			goto err_close;
++		}
+ 
+ 		if (json_output) {
+ 			jsonw_start_object(json_wtr);
+diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
+index d636643ddd358..f32c059fbfb4f 100644
+--- a/tools/bpf/resolve_btfids/main.c
++++ b/tools/bpf/resolve_btfids/main.c
+@@ -649,6 +649,9 @@ static int symbols_patch(struct object *obj)
+ 	if (sets_patch(obj))
+ 		return -1;
+ 
++	/* Set type to ensure endian translation occurs. */
++	obj->efile.idlist->d_type = ELF_T_WORD;
++
+ 	elf_flagdata(obj->efile.idlist, ELF_C_SET, ELF_F_DIRTY);
+ 
+ 	err = elf_update(obj->efile.elf, ELF_C_WRITE);
+diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c
+index dbdffb6673feb..0bf6b4d4c90a7 100644
+--- a/tools/perf/util/llvm-utils.c
++++ b/tools/perf/util/llvm-utils.c
+@@ -504,6 +504,7 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
+ 			goto errout;
+ 		}
+ 
++		err = -ENOMEM;
+ 		if (asprintf(&pipe_template, "%s -emit-llvm | %s -march=bpf %s -filetype=obj -o -",
+ 			      template, llc_path, opts) < 0) {
+ 			pr_err("ERROR:\tnot enough memory to setup command line\n");
+@@ -524,6 +525,7 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
+ 
+ 	pr_debug("llvm compiling command template: %s\n", template);
+ 
++	err = -ENOMEM;
+ 	if (asprintf(&command_echo, "echo -n \"%s\"", template) < 0)
+ 		goto errout;
+ 
+diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
+index c83c2c6564e01..23dc5014e7119 100644
+--- a/tools/perf/util/scripting-engines/trace-event-python.c
++++ b/tools/perf/util/scripting-engines/trace-event-python.c
+@@ -934,7 +934,7 @@ static PyObject *tuple_new(unsigned int sz)
+ 	return t;
+ }
+ 
+-static int tuple_set_u64(PyObject *t, unsigned int pos, u64 val)
++static int tuple_set_s64(PyObject *t, unsigned int pos, s64 val)
+ {
+ #if BITS_PER_LONG == 64
+ 	return PyTuple_SetItem(t, pos, _PyLong_FromLong(val));
+@@ -944,6 +944,22 @@ static int tuple_set_u64(PyObject *t, unsigned int pos, u64 val)
+ #endif
+ }
+ 
++/*
++ * Databases support only signed 64-bit numbers, so even though we are
++ * exporting a u64, it must be as s64.
++ */
++#define tuple_set_d64 tuple_set_s64
++
++static int tuple_set_u64(PyObject *t, unsigned int pos, u64 val)
++{
++#if BITS_PER_LONG == 64
++	return PyTuple_SetItem(t, pos, PyLong_FromUnsignedLong(val));
++#endif
++#if BITS_PER_LONG == 32
++	return PyTuple_SetItem(t, pos, PyLong_FromUnsignedLongLong(val));
++#endif
++}
++
+ static int tuple_set_s32(PyObject *t, unsigned int pos, s32 val)
+ {
+ 	return PyTuple_SetItem(t, pos, _PyLong_FromLong(val));
+@@ -967,7 +983,7 @@ static int python_export_evsel(struct db_export *dbe, struct evsel *evsel)
+ 
+ 	t = tuple_new(2);
+ 
+-	tuple_set_u64(t, 0, evsel->db_id);
++	tuple_set_d64(t, 0, evsel->db_id);
+ 	tuple_set_string(t, 1, evsel__name(evsel));
+ 
+ 	call_object(tables->evsel_handler, t, "evsel_table");
+@@ -985,7 +1001,7 @@ static int python_export_machine(struct db_export *dbe,
+ 
+ 	t = tuple_new(3);
+ 
+-	tuple_set_u64(t, 0, machine->db_id);
++	tuple_set_d64(t, 0, machine->db_id);
+ 	tuple_set_s32(t, 1, machine->pid);
+ 	tuple_set_string(t, 2, machine->root_dir ? machine->root_dir : "");
+ 
+@@ -1004,9 +1020,9 @@ static int python_export_thread(struct db_export *dbe, struct thread *thread,
+ 
+ 	t = tuple_new(5);
+ 
+-	tuple_set_u64(t, 0, thread->db_id);
+-	tuple_set_u64(t, 1, machine->db_id);
+-	tuple_set_u64(t, 2, main_thread_db_id);
++	tuple_set_d64(t, 0, thread->db_id);
++	tuple_set_d64(t, 1, machine->db_id);
++	tuple_set_d64(t, 2, main_thread_db_id);
+ 	tuple_set_s32(t, 3, thread->pid_);
+ 	tuple_set_s32(t, 4, thread->tid);
+ 
+@@ -1025,10 +1041,10 @@ static int python_export_comm(struct db_export *dbe, struct comm *comm,
+ 
+ 	t = tuple_new(5);
+ 
+-	tuple_set_u64(t, 0, comm->db_id);
++	tuple_set_d64(t, 0, comm->db_id);
+ 	tuple_set_string(t, 1, comm__str(comm));
+-	tuple_set_u64(t, 2, thread->db_id);
+-	tuple_set_u64(t, 3, comm->start);
++	tuple_set_d64(t, 2, thread->db_id);
++	tuple_set_d64(t, 3, comm->start);
+ 	tuple_set_s32(t, 4, comm->exec);
+ 
+ 	call_object(tables->comm_handler, t, "comm_table");
+@@ -1046,9 +1062,9 @@ static int python_export_comm_thread(struct db_export *dbe, u64 db_id,
+ 
+ 	t = tuple_new(3);
+ 
+-	tuple_set_u64(t, 0, db_id);
+-	tuple_set_u64(t, 1, comm->db_id);
+-	tuple_set_u64(t, 2, thread->db_id);
++	tuple_set_d64(t, 0, db_id);
++	tuple_set_d64(t, 1, comm->db_id);
++	tuple_set_d64(t, 2, thread->db_id);
+ 
+ 	call_object(tables->comm_thread_handler, t, "comm_thread_table");
+ 
+@@ -1068,8 +1084,8 @@ static int python_export_dso(struct db_export *dbe, struct dso *dso,
+ 
+ 	t = tuple_new(5);
+ 
+-	tuple_set_u64(t, 0, dso->db_id);
+-	tuple_set_u64(t, 1, machine->db_id);
++	tuple_set_d64(t, 0, dso->db_id);
++	tuple_set_d64(t, 1, machine->db_id);
+ 	tuple_set_string(t, 2, dso->short_name);
+ 	tuple_set_string(t, 3, dso->long_name);
+ 	tuple_set_string(t, 4, sbuild_id);
+@@ -1090,10 +1106,10 @@ static int python_export_symbol(struct db_export *dbe, struct symbol *sym,
+ 
+ 	t = tuple_new(6);
+ 
+-	tuple_set_u64(t, 0, *sym_db_id);
+-	tuple_set_u64(t, 1, dso->db_id);
+-	tuple_set_u64(t, 2, sym->start);
+-	tuple_set_u64(t, 3, sym->end);
++	tuple_set_d64(t, 0, *sym_db_id);
++	tuple_set_d64(t, 1, dso->db_id);
++	tuple_set_d64(t, 2, sym->start);
++	tuple_set_d64(t, 3, sym->end);
+ 	tuple_set_s32(t, 4, sym->binding);
+ 	tuple_set_string(t, 5, sym->name);
+ 
+@@ -1130,30 +1146,30 @@ static void python_export_sample_table(struct db_export *dbe,
+ 
+ 	t = tuple_new(24);
+ 
+-	tuple_set_u64(t, 0, es->db_id);
+-	tuple_set_u64(t, 1, es->evsel->db_id);
+-	tuple_set_u64(t, 2, es->al->maps->machine->db_id);
+-	tuple_set_u64(t, 3, es->al->thread->db_id);
+-	tuple_set_u64(t, 4, es->comm_db_id);
+-	tuple_set_u64(t, 5, es->dso_db_id);
+-	tuple_set_u64(t, 6, es->sym_db_id);
+-	tuple_set_u64(t, 7, es->offset);
+-	tuple_set_u64(t, 8, es->sample->ip);
+-	tuple_set_u64(t, 9, es->sample->time);
++	tuple_set_d64(t, 0, es->db_id);
++	tuple_set_d64(t, 1, es->evsel->db_id);
++	tuple_set_d64(t, 2, es->al->maps->machine->db_id);
++	tuple_set_d64(t, 3, es->al->thread->db_id);
++	tuple_set_d64(t, 4, es->comm_db_id);
++	tuple_set_d64(t, 5, es->dso_db_id);
++	tuple_set_d64(t, 6, es->sym_db_id);
++	tuple_set_d64(t, 7, es->offset);
++	tuple_set_d64(t, 8, es->sample->ip);
++	tuple_set_d64(t, 9, es->sample->time);
+ 	tuple_set_s32(t, 10, es->sample->cpu);
+-	tuple_set_u64(t, 11, es->addr_dso_db_id);
+-	tuple_set_u64(t, 12, es->addr_sym_db_id);
+-	tuple_set_u64(t, 13, es->addr_offset);
+-	tuple_set_u64(t, 14, es->sample->addr);
+-	tuple_set_u64(t, 15, es->sample->period);
+-	tuple_set_u64(t, 16, es->sample->weight);
+-	tuple_set_u64(t, 17, es->sample->transaction);
+-	tuple_set_u64(t, 18, es->sample->data_src);
++	tuple_set_d64(t, 11, es->addr_dso_db_id);
++	tuple_set_d64(t, 12, es->addr_sym_db_id);
++	tuple_set_d64(t, 13, es->addr_offset);
++	tuple_set_d64(t, 14, es->sample->addr);
++	tuple_set_d64(t, 15, es->sample->period);
++	tuple_set_d64(t, 16, es->sample->weight);
++	tuple_set_d64(t, 17, es->sample->transaction);
++	tuple_set_d64(t, 18, es->sample->data_src);
+ 	tuple_set_s32(t, 19, es->sample->flags & PERF_BRANCH_MASK);
+ 	tuple_set_s32(t, 20, !!(es->sample->flags & PERF_IP_FLAG_IN_TX));
+-	tuple_set_u64(t, 21, es->call_path_id);
+-	tuple_set_u64(t, 22, es->sample->insn_cnt);
+-	tuple_set_u64(t, 23, es->sample->cyc_cnt);
++	tuple_set_d64(t, 21, es->call_path_id);
++	tuple_set_d64(t, 22, es->sample->insn_cnt);
++	tuple_set_d64(t, 23, es->sample->cyc_cnt);
+ 
+ 	call_object(tables->sample_handler, t, "sample_table");
+ 
+@@ -1167,8 +1183,8 @@ static void python_export_synth(struct db_export *dbe, struct export_sample *es)
+ 
+ 	t = tuple_new(3);
+ 
+-	tuple_set_u64(t, 0, es->db_id);
+-	tuple_set_u64(t, 1, es->evsel->core.attr.config);
++	tuple_set_d64(t, 0, es->db_id);
++	tuple_set_d64(t, 1, es->evsel->core.attr.config);
+ 	tuple_set_bytes(t, 2, es->sample->raw_data, es->sample->raw_size);
+ 
+ 	call_object(tables->synth_handler, t, "synth_data");
+@@ -1200,10 +1216,10 @@ static int python_export_call_path(struct db_export *dbe, struct call_path *cp)
+ 
+ 	t = tuple_new(4);
+ 
+-	tuple_set_u64(t, 0, cp->db_id);
+-	tuple_set_u64(t, 1, parent_db_id);
+-	tuple_set_u64(t, 2, sym_db_id);
+-	tuple_set_u64(t, 3, cp->ip);
++	tuple_set_d64(t, 0, cp->db_id);
++	tuple_set_d64(t, 1, parent_db_id);
++	tuple_set_d64(t, 2, sym_db_id);
++	tuple_set_d64(t, 3, cp->ip);
+ 
+ 	call_object(tables->call_path_handler, t, "call_path_table");
+ 
+@@ -1221,20 +1237,20 @@ static int python_export_call_return(struct db_export *dbe,
+ 
+ 	t = tuple_new(14);
+ 
+-	tuple_set_u64(t, 0, cr->db_id);
+-	tuple_set_u64(t, 1, cr->thread->db_id);
+-	tuple_set_u64(t, 2, comm_db_id);
+-	tuple_set_u64(t, 3, cr->cp->db_id);
+-	tuple_set_u64(t, 4, cr->call_time);
+-	tuple_set_u64(t, 5, cr->return_time);
+-	tuple_set_u64(t, 6, cr->branch_count);
+-	tuple_set_u64(t, 7, cr->call_ref);
+-	tuple_set_u64(t, 8, cr->return_ref);
+-	tuple_set_u64(t, 9, cr->cp->parent->db_id);
++	tuple_set_d64(t, 0, cr->db_id);
++	tuple_set_d64(t, 1, cr->thread->db_id);
++	tuple_set_d64(t, 2, comm_db_id);
++	tuple_set_d64(t, 3, cr->cp->db_id);
++	tuple_set_d64(t, 4, cr->call_time);
++	tuple_set_d64(t, 5, cr->return_time);
++	tuple_set_d64(t, 6, cr->branch_count);
++	tuple_set_d64(t, 7, cr->call_ref);
++	tuple_set_d64(t, 8, cr->return_ref);
++	tuple_set_d64(t, 9, cr->cp->parent->db_id);
+ 	tuple_set_s32(t, 10, cr->flags);
+-	tuple_set_u64(t, 11, cr->parent_db_id);
+-	tuple_set_u64(t, 12, cr->insn_count);
+-	tuple_set_u64(t, 13, cr->cyc_count);
++	tuple_set_d64(t, 11, cr->parent_db_id);
++	tuple_set_d64(t, 12, cr->insn_count);
++	tuple_set_d64(t, 13, cr->cyc_count);
+ 
+ 	call_object(tables->call_return_handler, t, "call_return_table");
+ 
+@@ -1254,14 +1270,14 @@ static int python_export_context_switch(struct db_export *dbe, u64 db_id,
+ 
+ 	t = tuple_new(9);
+ 
+-	tuple_set_u64(t, 0, db_id);
+-	tuple_set_u64(t, 1, machine->db_id);
+-	tuple_set_u64(t, 2, sample->time);
++	tuple_set_d64(t, 0, db_id);
++	tuple_set_d64(t, 1, machine->db_id);
++	tuple_set_d64(t, 2, sample->time);
+ 	tuple_set_s32(t, 3, sample->cpu);
+-	tuple_set_u64(t, 4, th_out_id);
+-	tuple_set_u64(t, 5, comm_out_id);
+-	tuple_set_u64(t, 6, th_in_id);
+-	tuple_set_u64(t, 7, comm_in_id);
++	tuple_set_d64(t, 4, th_out_id);
++	tuple_set_d64(t, 5, comm_out_id);
++	tuple_set_d64(t, 6, th_in_id);
++	tuple_set_d64(t, 7, comm_in_id);
+ 	tuple_set_s32(t, 8, flags);
+ 
+ 	call_object(tables->context_switch_handler, t, "context_switch");
+diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
+index 3ab1200e172fa..b1b37dcade9f2 100644
+--- a/tools/testing/selftests/bpf/.gitignore
++++ b/tools/testing/selftests/bpf/.gitignore
+@@ -9,6 +9,7 @@ fixdep
+ test_dev_cgroup
+ /test_progs*
+ test_tcpbpf_user
++!test_progs.h
+ test_verifier_log
+ feature
+ test_sock
+diff --git a/tools/testing/selftests/ftrace/test.d/event/event-no-pid.tc b/tools/testing/selftests/ftrace/test.d/event/event-no-pid.tc
+index e6eb78f0b9545..9933ed24f9012 100644
+--- a/tools/testing/selftests/ftrace/test.d/event/event-no-pid.tc
++++ b/tools/testing/selftests/ftrace/test.d/event/event-no-pid.tc
+@@ -57,6 +57,10 @@ enable_events() {
+     echo 1 > tracing_on
+ }
+ 
++other_task() {
++    sleep .001 || usleep 1 || sleep 1
++}
++
+ echo 0 > options/event-fork
+ 
+ do_reset
+@@ -94,6 +98,9 @@ child=$!
+ echo "child = $child"
+ wait $child
+ 
++# Be sure some other events will happen for small systems (e.g. 1 core)
++other_task
++
+ echo 0 > tracing_on
+ 
+ cnt=`count_pid $mypid`
+diff --git a/tools/testing/selftests/lkdtm/run.sh b/tools/testing/selftests/lkdtm/run.sh
+index bb7a1775307b8..e95e79bd31268 100755
+--- a/tools/testing/selftests/lkdtm/run.sh
++++ b/tools/testing/selftests/lkdtm/run.sh
+@@ -76,10 +76,14 @@ fi
+ # Save existing dmesg so we can detect new content below
+ dmesg > "$DMESG"
+ 
+-# Most shells yell about signals and we're expecting the "cat" process
+-# to usually be killed by the kernel. So we have to run it in a sub-shell
+-# and silence errors.
+-($SHELL -c 'cat <(echo '"$test"') >'"$TRIGGER" 2>/dev/null) || true
++# Since the kernel is likely killing the process writing to the trigger
++# file, it must not be the script's shell itself. i.e. we cannot do:
++#     echo "$test" >"$TRIGGER"
++# Instead, use "cat" to take the signal. Since the shell will yell about
++# the signal that killed the subprocess, we must ignore the failure and
++# continue. However we don't silence stderr since there might be other
++# useful details reported there in the case of other unexpected conditions.
++echo "$test" | cat >"$TRIGGER" || true
+ 
+ # Record and dump the results
+ dmesg | comm --nocheck-order -13 "$DMESG" - > "$LOG" || true
+diff --git a/tools/testing/selftests/splice/short_splice_read.sh b/tools/testing/selftests/splice/short_splice_read.sh
+index 7810d3589d9ab..22b6c8910b182 100755
+--- a/tools/testing/selftests/splice/short_splice_read.sh
++++ b/tools/testing/selftests/splice/short_splice_read.sh
+@@ -1,21 +1,87 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
++#
++# Test for mishandling of splice() on pseudofilesystems, which should catch
++# bugs like 11990a5bd7e5 ("module: Correctly truncate sysfs sections output")
++#
++# Since splice fallback was removed as part of the set_fs() rework, many of these
++# tests expect to fail now. See https://lore.kernel.org/lkml/202009181443.C2179FB@keescook/
+ set -e
+ 
++DIR=$(dirname "$0")
++
+ ret=0
+ 
++expect_success()
++{
++	title="$1"
++	shift
++
++	echo "" >&2
++	echo "$title ..." >&2
++
++	set +e
++	"$@"
++	rc=$?
++	set -e
++
++	case "$rc" in
++	0)
++		echo "ok: $title succeeded" >&2
++		;;
++	1)
++		echo "FAIL: $title should work" >&2
++		ret=$(( ret + 1 ))
++		;;
++	*)
++		echo "FAIL: something else went wrong" >&2
++		ret=$(( ret + 1 ))
++		;;
++	esac
++}
++
++expect_failure()
++{
++	title="$1"
++	shift
++
++	echo "" >&2
++	echo "$title ..." >&2
++
++	set +e
++	"$@"
++	rc=$?
++	set -e
++
++	case "$rc" in
++	0)
++		echo "FAIL: $title unexpectedly worked" >&2
++		ret=$(( ret + 1 ))
++		;;
++	1)
++		echo "ok: $title correctly failed" >&2
++		;;
++	*)
++		echo "FAIL: something else went wrong" >&2
++		ret=$(( ret + 1 ))
++		;;
++	esac
++}
++
+ do_splice()
+ {
+ 	filename="$1"
+ 	bytes="$2"
+ 	expected="$3"
++	report="$4"
+ 
+-	out=$(./splice_read "$filename" "$bytes" | cat)
++	out=$("$DIR"/splice_read "$filename" "$bytes" | cat)
+ 	if [ "$out" = "$expected" ] ; then
+-		echo "ok: $filename $bytes"
++		echo "      matched $report" >&2
++		return 0
+ 	else
+-		echo "FAIL: $filename $bytes"
+-		ret=1
++		echo "      no match: '$out' vs $report" >&2
++		return 1
+ 	fi
+ }
+ 
+@@ -23,34 +89,45 @@ test_splice()
+ {
+ 	filename="$1"
+ 
++	echo "  checking $filename ..." >&2
++
+ 	full=$(cat "$filename")
++	rc=$?
++	if [ $rc -ne 0 ] ; then
++		return 2
++	fi
++
+ 	two=$(echo "$full" | grep -m1 . | cut -c-2)
+ 
+ 	# Make sure full splice has the same contents as a standard read.
+-	do_splice "$filename" 4096 "$full"
++	echo "    splicing 4096 bytes ..." >&2
++	if ! do_splice "$filename" 4096 "$full" "full read" ; then
++		return 1
++	fi
+ 
+ 	# Make sure a partial splice see the first two characters.
+-	do_splice "$filename" 2 "$two"
++	echo "    splicing 2 bytes ..." >&2
++	if ! do_splice "$filename" 2 "$two" "'$two'" ; then
++		return 1
++	fi
++
++	return 0
+ }
+ 
+-# proc_single_open(), seq_read()
+-test_splice /proc/$$/limits
+-# special open, seq_read()
+-test_splice /proc/$$/comm
++### /proc/$pid/ has no splice interface; these should all fail.
++expect_failure "proc_single_open(), seq_read() splice" test_splice /proc/$$/limits
++expect_failure "special open(), seq_read() splice" test_splice /proc/$$/comm
+ 
+-# proc_handler, proc_dointvec_minmax
+-test_splice /proc/sys/fs/nr_open
+-# proc_handler, proc_dostring
+-test_splice /proc/sys/kernel/modprobe
+-# proc_handler, special read
+-test_splice /proc/sys/kernel/version
++### /proc/sys/ has a splice interface; these should all succeed.
++expect_success "proc_handler: proc_dointvec_minmax() splice" test_splice /proc/sys/fs/nr_open
++expect_success "proc_handler: proc_dostring() splice" test_splice /proc/sys/kernel/modprobe
++expect_success "proc_handler: special read splice" test_splice /proc/sys/kernel/version
+ 
++### /sys/ has no splice interface; these should all fail.
+ if ! [ -d /sys/module/test_module/sections ] ; then
+-	modprobe test_module
++	expect_success "test_module kernel module load" modprobe test_module
+ fi
+-# kernfs, attr
+-test_splice /sys/module/test_module/coresize
+-# kernfs, binattr
+-test_splice /sys/module/test_module/sections/.init.text
++expect_failure "kernfs attr splice" test_splice /sys/module/test_module/coresize
++expect_failure "kernfs binattr splice" test_splice /sys/module/test_module/sections/.init.text
+ 
+ exit $ret
+diff --git a/tools/testing/selftests/tc-testing/plugin-lib/scapyPlugin.py b/tools/testing/selftests/tc-testing/plugin-lib/scapyPlugin.py
+index 229ee185b27e1..a7b21658af9b4 100644
+--- a/tools/testing/selftests/tc-testing/plugin-lib/scapyPlugin.py
++++ b/tools/testing/selftests/tc-testing/plugin-lib/scapyPlugin.py
+@@ -36,7 +36,7 @@ class SubPlugin(TdcPlugin):
+         for k in scapy_keys:
+             if k not in scapyinfo:
+                 keyfail = True
+-                missing_keys.add(k)
++                missing_keys.append(k)
+         if keyfail:
+             print('{}: Scapy block present in the test, but is missing info:'
+                 .format(self.sub_class))
+diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c
+index fdbb602ecf325..87eecd5ba577b 100644
+--- a/tools/testing/selftests/vm/protection_keys.c
++++ b/tools/testing/selftests/vm/protection_keys.c
+@@ -510,7 +510,7 @@ int alloc_pkey(void)
+ 			" shadow: 0x%016llx\n",
+ 			__func__, __LINE__, ret, __read_pkey_reg(),
+ 			shadow_pkey_reg);
+-	if (ret) {
++	if (ret > 0) {
+ 		/* clear both the bits: */
+ 		shadow_pkey_reg = set_pkey_bits(shadow_pkey_reg, ret,
+ 						~PKEY_MASK);
+@@ -561,7 +561,6 @@ int alloc_random_pkey(void)
+ 	int nr_alloced = 0;
+ 	int random_index;
+ 	memset(alloced_pkeys, 0, sizeof(alloced_pkeys));
+-	srand((unsigned int)time(NULL));
+ 
+ 	/* allocate every possible key and make a note of which ones we got */
+ 	max_nr_pkey_allocs = NR_PKEYS;
+@@ -1449,6 +1448,13 @@ void test_implicit_mprotect_exec_only_memory(int *ptr, u16 pkey)
+ 	ret = mprotect(p1, PAGE_SIZE, PROT_EXEC);
+ 	pkey_assert(!ret);
+ 
++	/*
++	 * Reset the shadow, assuming that the above mprotect()
++	 * correctly changed PKRU, but to an unknown value since
++	 * the actual alllocated pkey is unknown.
++	 */
++	shadow_pkey_reg = __read_pkey_reg();
++
+ 	dprintf2("pkey_reg: %016llx\n", read_pkey_reg());
+ 
+ 	/* Make sure this is an *instruction* fault */
+@@ -1552,6 +1558,8 @@ int main(void)
+ 	int nr_iterations = 22;
+ 	int pkeys_supported = is_pkeys_supported();
+ 
++	srand((unsigned int)time(NULL));
++
+ 	setup_handlers();
+ 
+ 	printf("has pkeys: %d\n", pkeys_supported);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-14 16:31 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-14 16:31 UTC (permalink / raw
  To: gentoo-commits

commit:     21d8468bad975398075cf6fba790c8d416f45a32
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 14 16:30:45 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 14 16:30:45 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=21d8468b

Remove redundant patch

Removed:
2400_revert-ibmvnic-remove-duplicate-napi_schedule-call-i.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 --
 ...nic-remove-duplicate-napi_schedule-call-i.patch | 46 ----------------------
 2 files changed, 50 deletions(-)

diff --git a/0000_README b/0000_README
index e9246be..e40d4fd 100644
--- a/0000_README
+++ b/0000_README
@@ -255,10 +255,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2400_revert-ibmvnic-remove-duplicate-napi_schedule-call-i.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/plain/queue-5.10/revert-ibmvnic-remove-duplicate-napi_schedule-call-i.patch
-Desc:   Revert ibmvnic: remove duplicate napi_schedule call in open function
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2400_revert-ibmvnic-remove-duplicate-napi_schedule-call-i.patch b/2400_revert-ibmvnic-remove-duplicate-napi_schedule-call-i.patch
deleted file mode 100644
index 6acd432..0000000
--- a/2400_revert-ibmvnic-remove-duplicate-napi_schedule-call-i.patch
+++ /dev/null
@@ -1,46 +0,0 @@
-From a795f695bbc648e27d56f99c38fc1afbe832b088 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Wed, 23 Jun 2021 21:13:11 -0700
-Subject: Revert "ibmvnic: remove duplicate napi_schedule call in open
- function"
-
-From: Dany Madden <drt@linux.ibm.com>
-
-[ Upstream commit 2ca220f92878470c6ba03f9946e412323093cc94 ]
-
-This reverts commit 7c451f3ef676c805a4b77a743a01a5c21a250a73.
-
-When a vnic interface is taken down and then up, connectivity is not
-restored. We bisected it to this commit. Reverting this commit until
-we can fully investigate the issue/benefit of the change.
-
-Fixes: 7c451f3ef676 ("ibmvnic: remove duplicate napi_schedule call in open function")
-Reported-by: Cristobal Forno <cforno12@linux.ibm.com>
-Reported-by: Abdul Haleem <abdhalee@in.ibm.com>
-Signed-off-by: Dany Madden <drt@linux.ibm.com>
-Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- drivers/net/ethernet/ibm/ibmvnic.c | 5 +++++
- 1 file changed, 5 insertions(+)
-
-diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
-index 8cc444684491..765b38c8b252 100644
---- a/drivers/net/ethernet/ibm/ibmvnic.c
-+++ b/drivers/net/ethernet/ibm/ibmvnic.c
-@@ -1166,6 +1166,11 @@ static int __ibmvnic_open(struct net_device *netdev)
- 
- 	netif_tx_start_all_queues(netdev);
- 
-+	if (prev_state == VNIC_CLOSED) {
-+		for (i = 0; i < adapter->req_rx_queues; i++)
-+			napi_schedule(&adapter->napi[i]);
-+	}
-+
- 	adapter->state = VNIC_OPEN;
- 	return rc;
- }
--- 
-2.30.2
-


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-19 11:17 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-19 11:17 UTC (permalink / raw
  To: gentoo-commits

commit:     07344f962d2f029e1055344508f35055acdef5fd
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jul 19 11:16:46 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jul 19 11:16:46 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=07344f96

Linux patch 5.10.51

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1050_linux-5.10.51.patch | 7267 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7271 insertions(+)

diff --git a/0000_README b/0000_README
index e40d4fd..8176468 100644
--- a/0000_README
+++ b/0000_README
@@ -243,6 +243,10 @@ Patch:  1049_linux-5.10.50.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.50
 
+Patch:  1050_linux-5.10.51.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.51
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1050_linux-5.10.51.patch b/1050_linux-5.10.51.patch
new file mode 100644
index 0000000..94fab6b
--- /dev/null
+++ b/1050_linux-5.10.51.patch
@@ -0,0 +1,7267 @@
+diff --git a/Makefile b/Makefile
+index 695f8e739a91b..d0fad1e774931 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 50
++SUBLEVEL = 51
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
+index 86cfb5c50a949..95ab6928cfd40 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
+@@ -384,6 +384,11 @@
+ 	status = "okay";
+ };
+ 
++&usbdrd3 {
++	dr_mode = "host";
++	status = "okay";
++};
++
+ &usb_host0_ehci {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index 93c734d8a46c2..de1e5e8a0e885 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -984,6 +984,25 @@
+ 		status = "disabled";
+ 	};
+ 
++	usbdrd3: usb@ff600000 {
++		compatible = "rockchip,rk3328-dwc3", "snps,dwc3";
++		reg = <0x0 0xff600000 0x0 0x100000>;
++		interrupts = <GIC_SPI 67 IRQ_TYPE_LEVEL_HIGH>;
++		clocks = <&cru SCLK_USB3OTG_REF>, <&cru SCLK_USB3OTG_SUSPEND>,
++			 <&cru ACLK_USB3OTG>;
++		clock-names = "ref_clk", "suspend_clk",
++			      "bus_clk";
++		dr_mode = "otg";
++		phy_type = "utmi_wide";
++		snps,dis-del-phy-power-chg-quirk;
++		snps,dis_enblslpm_quirk;
++		snps,dis-tx-ipgap-linecheck-quirk;
++		snps,dis-u2-freeclk-exists-quirk;
++		snps,dis_u2_susphy_quirk;
++		snps,dis_u3_susphy_quirk;
++		status = "disabled";
++	};
++
+ 	gic: interrupt-controller@ff811000 {
+ 		compatible = "arm,gic-400";
+ 		#interrupt-cells = <3>;
+diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
+index 61c97d3b58c70..c995d1f4594f6 100644
+--- a/arch/arm64/include/asm/tlb.h
++++ b/arch/arm64/include/asm/tlb.h
+@@ -28,6 +28,10 @@ static void tlb_flush(struct mmu_gather *tlb);
+  */
+ static inline int tlb_get_level(struct mmu_gather *tlb)
+ {
++	/* The TTL field is only valid for the leaf entry. */
++	if (tlb->freed_tables)
++		return 0;
++
+ 	if (tlb->cleared_ptes && !(tlb->cleared_pmds ||
+ 				   tlb->cleared_puds ||
+ 				   tlb->cleared_p4ds))
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 1917ccd392564..1a63f592034eb 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -418,6 +418,8 @@ config MACH_INGENIC_SOC
+ 	select MIPS_GENERIC
+ 	select MACH_INGENIC
+ 	select SYS_SUPPORTS_ZBOOT_UART16550
++	select CPU_SUPPORTS_CPUFREQ
++	select MIPS_EXTERNAL_TIMER
+ 
+ config LANTIQ
+ 	bool "Lantiq based platforms"
+diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
+index f2e216eef7da5..8294eaa6f902d 100644
+--- a/arch/mips/include/asm/cpu-features.h
++++ b/arch/mips/include/asm/cpu-features.h
+@@ -64,6 +64,8 @@
+ 	((MIPS_ISA_REV >= (ge)) && (MIPS_ISA_REV < (lt)))
+ #define __isa_range_or_flag(ge, lt, flag) \
+ 	(__isa_range(ge, lt) || ((MIPS_ISA_REV < (lt)) && __isa(flag)))
++#define __isa_range_and_ase(ge, lt, ase) \
++	(__isa_range(ge, lt) && __ase(ase))
+ 
+ /*
+  * SMP assumption: Options of CPU 0 are a superset of all processors.
+@@ -423,7 +425,7 @@
+ #endif
+ 
+ #ifndef cpu_has_mipsmt
+-#define cpu_has_mipsmt		__isa_lt_and_ase(6, MIPS_ASE_MIPSMT)
++#define cpu_has_mipsmt		__isa_range_and_ase(2, 6, MIPS_ASE_MIPSMT)
+ #endif
+ 
+ #ifndef cpu_has_vp
+diff --git a/arch/mips/include/asm/hugetlb.h b/arch/mips/include/asm/hugetlb.h
+index 10e3be870df78..c2144409c0c40 100644
+--- a/arch/mips/include/asm/hugetlb.h
++++ b/arch/mips/include/asm/hugetlb.h
+@@ -46,7 +46,13 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
+ 					 unsigned long addr, pte_t *ptep)
+ {
+-	flush_tlb_page(vma, addr & huge_page_mask(hstate_vma(vma)));
++	/*
++	 * clear the huge pte entry firstly, so that the other smp threads will
++	 * not get old pte entry after finishing flush_tlb_page and before
++	 * setting new huge pte entry
++	 */
++	huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
++	flush_tlb_page(vma, addr);
+ }
+ 
+ #define __HAVE_ARCH_HUGE_PTE_NONE
+diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
+index a0e8ae5497b61..7a7467d3f7f05 100644
+--- a/arch/mips/include/asm/mipsregs.h
++++ b/arch/mips/include/asm/mipsregs.h
+@@ -2073,7 +2073,7 @@ _ASM_MACRO_0(tlbginvf, _ASM_INSN_IF_MIPS(0x4200000c)
+ ({ int __res;								\
+ 	__asm__ __volatile__(						\
+ 		".set\tpush\n\t"					\
+-		".set\tmips32r2\n\t"					\
++		".set\tmips32r5\n\t"					\
+ 		_ASM_SET_VIRT						\
+ 		"mfgc0\t%0, " #source ", %1\n\t"			\
+ 		".set\tpop"						\
+@@ -2086,7 +2086,7 @@ _ASM_MACRO_0(tlbginvf, _ASM_INSN_IF_MIPS(0x4200000c)
+ ({ unsigned long long __res;						\
+ 	__asm__ __volatile__(						\
+ 		".set\tpush\n\t"					\
+-		".set\tmips64r2\n\t"					\
++		".set\tmips64r5\n\t"					\
+ 		_ASM_SET_VIRT						\
+ 		"dmfgc0\t%0, " #source ", %1\n\t"			\
+ 		".set\tpop"						\
+@@ -2099,7 +2099,7 @@ _ASM_MACRO_0(tlbginvf, _ASM_INSN_IF_MIPS(0x4200000c)
+ do {									\
+ 	__asm__ __volatile__(						\
+ 		".set\tpush\n\t"					\
+-		".set\tmips32r2\n\t"					\
++		".set\tmips32r5\n\t"					\
+ 		_ASM_SET_VIRT						\
+ 		"mtgc0\t%z0, " #register ", %1\n\t"			\
+ 		".set\tpop"						\
+@@ -2111,7 +2111,7 @@ do {									\
+ do {									\
+ 	__asm__ __volatile__(						\
+ 		".set\tpush\n\t"					\
+-		".set\tmips64r2\n\t"					\
++		".set\tmips64r5\n\t"					\
+ 		_ASM_SET_VIRT						\
+ 		"dmtgc0\t%z0, " #register ", %1\n\t"			\
+ 		".set\tpop"						\
+diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
+index 8b18424b31208..d0cf997b4ba84 100644
+--- a/arch/mips/include/asm/pgalloc.h
++++ b/arch/mips/include/asm/pgalloc.h
+@@ -59,11 +59,15 @@ do {							\
+ 
+ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
+ {
+-	pmd_t *pmd;
++	pmd_t *pmd = NULL;
++	struct page *pg;
+ 
+-	pmd = (pmd_t *) __get_free_pages(GFP_KERNEL, PMD_ORDER);
+-	if (pmd)
++	pg = alloc_pages(GFP_KERNEL | __GFP_ACCOUNT, PMD_ORDER);
++	if (pg) {
++		pgtable_pmd_page_ctor(pg);
++		pmd = (pmd_t *)page_address(pg);
+ 		pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
++	}
+ 	return pmd;
+ }
+ 
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index e6ae2bcdbeda0..067cb3eb16141 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1827,6 +1827,11 @@ static inline void cpu_probe_ingenic(struct cpuinfo_mips *c, unsigned int cpu)
+ 		 */
+ 		case PRID_COMP_INGENIC_D0:
+ 			c->isa_level &= ~MIPS_CPU_ISA_M32R2;
++
++			/* FPU is not properly detected on JZ4760(B). */
++			if (c->processor_id == 0x2ed0024f)
++				c->options |= MIPS_CPU_FPU;
++
+ 			fallthrough;
+ 
+ 		/*
+diff --git a/arch/mips/loongson64/numa.c b/arch/mips/loongson64/numa.c
+index cf9459f79f9b4..e4c461df3ee64 100644
+--- a/arch/mips/loongson64/numa.c
++++ b/arch/mips/loongson64/numa.c
+@@ -182,6 +182,9 @@ static void __init node_mem_init(unsigned int node)
+ 		if (node_end_pfn(0) >= (0xffffffff >> PAGE_SHIFT))
+ 			memblock_reserve((node_addrspace_offset | 0xfe000000),
+ 					 32 << 20);
++
++		/* Reserve pfn range 0~node[0]->node_start_pfn */
++		memblock_reserve(0, PAGE_SIZE * start_pfn);
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
+index f53c423808327..8c14f84a8770b 100644
+--- a/arch/powerpc/include/asm/barrier.h
++++ b/arch/powerpc/include/asm/barrier.h
+@@ -46,6 +46,8 @@
+ #    define SMPWMB      eieio
+ #endif
+ 
++/* clang defines this macro for a builtin, which will not work with runtime patching */
++#undef __lwsync
+ #define __lwsync()	__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
+ #define dma_rmb()	__lwsync()
+ #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index 72e1b51beb10c..55fdba8ba21ca 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -198,9 +198,7 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
+ {
+ 	int is_exec = TRAP(regs) == 0x400;
+ 
+-	/* NX faults set DSISR_PROTFAULT on the 8xx, DSISR_NOEXEC_OR_G on others */
+-	if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT |
+-				      DSISR_PROTFAULT))) {
++	if (is_exec) {
+ 		pr_crit_ratelimited("kernel tried to execute %s page (%lx) - exploit attempt? (uid: %d)\n",
+ 				    address >= TASK_SIZE ? "exec-protected" : "user",
+ 				    address,
+diff --git a/arch/powerpc/platforms/powernv/vas-window.c b/arch/powerpc/platforms/powernv/vas-window.c
+index 5f5fe63a3d1ce..7ba0840fc3b55 100644
+--- a/arch/powerpc/platforms/powernv/vas-window.c
++++ b/arch/powerpc/platforms/powernv/vas-window.c
+@@ -1093,9 +1093,9 @@ struct vas_window *vas_tx_win_open(int vasid, enum vas_cop_type cop,
+ 		/*
+ 		 * Process closes window during exit. In the case of
+ 		 * multithread application, the child thread can open
+-		 * window and can exit without closing it. Expects parent
+-		 * thread to use and close the window. So do not need
+-		 * to take pid reference for parent thread.
++		 * window and can exit without closing it. so takes tgid
++		 * reference until window closed to make sure tgid is not
++		 * reused.
+ 		 */
+ 		txwin->tgid = find_get_pid(task_tgid_vnr(current));
+ 		/*
+@@ -1339,8 +1339,9 @@ int vas_win_close(struct vas_window *window)
+ 	/* if send window, drop reference to matching receive window */
+ 	if (window->tx_win) {
+ 		if (window->user_win) {
+-			/* Drop references to pid and mm */
++			/* Drop references to pid. tgid and mm */
+ 			put_pid(window->pid);
++			put_pid(window->tgid);
+ 			if (window->mm) {
+ 				mm_context_remove_vas_window(window->mm);
+ 				mmdrop(window->mm);
+diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
+index 656460636ad34..e83af7bc75919 100644
+--- a/block/blk-rq-qos.c
++++ b/block/blk-rq-qos.c
+@@ -266,8 +266,8 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
+ 	if (!has_sleeper && acquire_inflight_cb(rqw, private_data))
+ 		return;
+ 
+-	prepare_to_wait_exclusive(&rqw->wait, &data.wq, TASK_UNINTERRUPTIBLE);
+-	has_sleeper = !wq_has_single_sleeper(&rqw->wait);
++	has_sleeper = !prepare_to_wait_exclusive(&rqw->wait, &data.wq,
++						 TASK_UNINTERRUPTIBLE);
+ 	do {
+ 		/* The memory barrier in set_task_state saves us here. */
+ 		if (data.got_token)
+diff --git a/drivers/ata/ahci_sunxi.c b/drivers/ata/ahci_sunxi.c
+index cb69b737cb499..56b695136977a 100644
+--- a/drivers/ata/ahci_sunxi.c
++++ b/drivers/ata/ahci_sunxi.c
+@@ -200,7 +200,7 @@ static void ahci_sunxi_start_engine(struct ata_port *ap)
+ }
+ 
+ static const struct ata_port_info ahci_sunxi_port_info = {
+-	.flags		= AHCI_FLAG_COMMON | ATA_FLAG_NCQ,
++	.flags		= AHCI_FLAG_COMMON | ATA_FLAG_NCQ | ATA_FLAG_NO_DIPM,
+ 	.pio_mask	= ATA_PIO4,
+ 	.udma_mask	= ATA_UDMA6,
+ 	.port_ops	= &ahci_platform_ops,
+diff --git a/drivers/atm/iphase.c b/drivers/atm/iphase.c
+index eef637fd90b32..a59554e5b8b0f 100644
+--- a/drivers/atm/iphase.c
++++ b/drivers/atm/iphase.c
+@@ -3279,7 +3279,7 @@ static void __exit ia_module_exit(void)
+ {
+ 	pci_unregister_driver(&ia_driver);
+ 
+-        del_timer(&ia_timer);
++	del_timer_sync(&ia_timer);
+ }
+ 
+ module_init(ia_module_init);
+diff --git a/drivers/atm/nicstar.c b/drivers/atm/nicstar.c
+index 09ad73361879e..6eb4ed256a7e2 100644
+--- a/drivers/atm/nicstar.c
++++ b/drivers/atm/nicstar.c
+@@ -297,7 +297,7 @@ static void __exit nicstar_cleanup(void)
+ {
+ 	XPRINTK("nicstar: nicstar_cleanup() called.\n");
+ 
+-	del_timer(&ns_timer);
++	del_timer_sync(&ns_timer);
+ 
+ 	pci_unregister_driver(&nicstar_driver);
+ 
+@@ -525,6 +525,15 @@ static int ns_init_card(int i, struct pci_dev *pcidev)
+ 	/* Set the VPI/VCI MSb mask to zero so we can receive OAM cells */
+ 	writel(0x00000000, card->membase + VPM);
+ 
++	card->intcnt = 0;
++	if (request_irq
++	    (pcidev->irq, &ns_irq_handler, IRQF_SHARED, "nicstar", card) != 0) {
++		pr_err("nicstar%d: can't allocate IRQ %d.\n", i, pcidev->irq);
++		error = 9;
++		ns_init_card_error(card, error);
++		return error;
++	}
++
+ 	/* Initialize TSQ */
+ 	card->tsq.org = dma_alloc_coherent(&card->pcidev->dev,
+ 					   NS_TSQSIZE + NS_TSQ_ALIGNMENT,
+@@ -751,15 +760,6 @@ static int ns_init_card(int i, struct pci_dev *pcidev)
+ 
+ 	card->efbie = 1;
+ 
+-	card->intcnt = 0;
+-	if (request_irq
+-	    (pcidev->irq, &ns_irq_handler, IRQF_SHARED, "nicstar", card) != 0) {
+-		printk("nicstar%d: can't allocate IRQ %d.\n", i, pcidev->irq);
+-		error = 9;
+-		ns_init_card_error(card, error);
+-		return error;
+-	}
+-
+ 	/* Register device */
+ 	card->atmdev = atm_dev_register("nicstar", &card->pcidev->dev, &atm_ops,
+ 					-1, NULL);
+@@ -837,10 +837,12 @@ static void ns_init_card_error(ns_dev *card, int error)
+ 			dev_kfree_skb_any(hb);
+ 	}
+ 	if (error >= 12) {
+-		kfree(card->rsq.org);
++		dma_free_coherent(&card->pcidev->dev, NS_RSQSIZE + NS_RSQ_ALIGNMENT,
++				card->rsq.org, card->rsq.dma);
+ 	}
+ 	if (error >= 11) {
+-		kfree(card->tsq.org);
++		dma_free_coherent(&card->pcidev->dev, NS_TSQSIZE + NS_TSQ_ALIGNMENT,
++				card->tsq.org, card->tsq.dma);
+ 	}
+ 	if (error >= 10) {
+ 		free_irq(card->pcidev->irq, card);
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 06d44ae9701f1..f0fa0c8e7ec60 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1224,6 +1224,9 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
+ 		goto out_unlock;
+ 	}
+ 
++	if (test_bit(QUEUE_FLAG_WC, &lo->lo_queue->queue_flags))
++		blk_queue_write_cache(lo->lo_queue, false, false);
++
+ 	/* freeze request queue during the transition */
+ 	blk_mq_freeze_queue(lo->lo_queue);
+ 
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index b1f0b13cc8bc6..afd2b1f12d49d 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -269,6 +269,8 @@ static const struct usb_device_id blacklist_table[] = {
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0cf3, 0xe360), .driver_info = BTUSB_QCA_ROME |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0cf3, 0xe500), .driver_info = BTUSB_QCA_ROME |
++						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0489, 0xe092), .driver_info = BTUSB_QCA_ROME |
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0489, 0xe09f), .driver_info = BTUSB_QCA_ROME |
+@@ -1719,6 +1721,13 @@ static void btusb_work(struct work_struct *work)
+ 			 * which work with WBS at all.
+ 			 */
+ 			new_alts = btusb_find_altsetting(data, 6) ? 6 : 1;
++			/* Because mSBC frames do not need to be aligned to the
++			 * SCO packet boundary. If support the Alt 3, use the
++			 * Alt 3 for HCI payload >= 60 Bytes let air packet
++			 * data satisfy 60 bytes.
++			 */
++			if (new_alts == 1 && btusb_find_altsetting(data, 3))
++				new_alts = 3;
+ 		}
+ 
+ 		if (btusb_switch_alt_setting(hdev, new_alts) < 0)
+@@ -2963,11 +2972,6 @@ static int btusb_mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 	struct btmtk_wmt_hdr *hdr;
+ 	int err;
+ 
+-	/* Submit control IN URB on demand to process the WMT event */
+-	err = btusb_mtk_submit_wmt_recv_urb(hdev);
+-	if (err < 0)
+-		return err;
+-
+ 	/* Send the WMT command and wait until the WMT event returns */
+ 	hlen = sizeof(*hdr) + wmt_params->dlen;
+ 	if (hlen > 255)
+@@ -2989,6 +2993,11 @@ static int btusb_mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 		return err;
+ 	}
+ 
++	/* Submit control IN URB on demand to process the WMT event */
++	err = btusb_mtk_submit_wmt_recv_urb(hdev);
++	if (err < 0)
++		return err;
++
+ 	/* The vendor specific WMT commands are all answered by a vendor
+ 	 * specific event and will have the Command Status or Command
+ 	 * Complete as with usual HCI command flow control.
+@@ -3549,6 +3558,11 @@ static int btusb_setup_qca_download_fw(struct hci_dev *hdev,
+ 	sent += size;
+ 	count -= size;
+ 
++	/* ep2 need time to switch from function acl to function dfu,
++	 * so we add 20ms delay here.
++	 */
++	msleep(20);
++
+ 	while (count) {
+ 		size = min_t(size_t, count, QCA_DFU_PACKET_LEN);
+ 
+diff --git a/drivers/char/ipmi/ipmi_watchdog.c b/drivers/char/ipmi/ipmi_watchdog.c
+index f78156d93c3f2..6384510c48d6b 100644
+--- a/drivers/char/ipmi/ipmi_watchdog.c
++++ b/drivers/char/ipmi/ipmi_watchdog.c
+@@ -371,16 +371,18 @@ static int __ipmi_set_timeout(struct ipmi_smi_msg  *smi_msg,
+ 	data[0] = 0;
+ 	WDOG_SET_TIMER_USE(data[0], WDOG_TIMER_USE_SMS_OS);
+ 
+-	if ((ipmi_version_major > 1)
+-	    || ((ipmi_version_major == 1) && (ipmi_version_minor >= 5))) {
+-		/* This is an IPMI 1.5-only feature. */
+-		data[0] |= WDOG_DONT_STOP_ON_SET;
+-	} else if (ipmi_watchdog_state != WDOG_TIMEOUT_NONE) {
+-		/*
+-		 * In ipmi 1.0, setting the timer stops the watchdog, we
+-		 * need to start it back up again.
+-		 */
+-		hbnow = 1;
++	if (ipmi_watchdog_state != WDOG_TIMEOUT_NONE) {
++		if ((ipmi_version_major > 1) ||
++		    ((ipmi_version_major == 1) && (ipmi_version_minor >= 5))) {
++			/* This is an IPMI 1.5-only feature. */
++			data[0] |= WDOG_DONT_STOP_ON_SET;
++		} else {
++			/*
++			 * In ipmi 1.0, setting the timer stops the watchdog, we
++			 * need to start it back up again.
++			 */
++			hbnow = 1;
++		}
+ 	}
+ 
+ 	data[1] = 0;
+diff --git a/drivers/clk/renesas/r8a77995-cpg-mssr.c b/drivers/clk/renesas/r8a77995-cpg-mssr.c
+index 5b4691117b470..026e2612c33ca 100644
+--- a/drivers/clk/renesas/r8a77995-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a77995-cpg-mssr.c
+@@ -75,6 +75,7 @@ static const struct cpg_core_clk r8a77995_core_clks[] __initconst = {
+ 	DEF_RATE(".oco",       CLK_OCO,            8 * 1000 * 1000),
+ 
+ 	/* Core Clock Outputs */
++	DEF_FIXED("za2",       R8A77995_CLK_ZA2,   CLK_PLL0D3,     2, 1),
+ 	DEF_FIXED("z2",        R8A77995_CLK_Z2,    CLK_PLL0D3,     1, 1),
+ 	DEF_FIXED("ztr",       R8A77995_CLK_ZTR,   CLK_PLL1,       6, 1),
+ 	DEF_FIXED("zt",        R8A77995_CLK_ZT,    CLK_PLL1,       4, 1),
+diff --git a/drivers/clk/renesas/rcar-usb2-clock-sel.c b/drivers/clk/renesas/rcar-usb2-clock-sel.c
+index d4c02986c34e9..0ccc6e709a385 100644
+--- a/drivers/clk/renesas/rcar-usb2-clock-sel.c
++++ b/drivers/clk/renesas/rcar-usb2-clock-sel.c
+@@ -128,10 +128,8 @@ static int rcar_usb2_clock_sel_resume(struct device *dev)
+ static int rcar_usb2_clock_sel_remove(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+-	struct usb2_clock_sel_priv *priv = platform_get_drvdata(pdev);
+ 
+ 	of_clk_del_provider(dev->of_node);
+-	clk_hw_unregister(&priv->hw);
+ 	pm_runtime_put(dev);
+ 	pm_runtime_disable(dev);
+ 
+@@ -164,9 +162,6 @@ static int rcar_usb2_clock_sel_probe(struct platform_device *pdev)
+ 	if (IS_ERR(priv->rsts))
+ 		return PTR_ERR(priv->rsts);
+ 
+-	pm_runtime_enable(dev);
+-	pm_runtime_get_sync(dev);
+-
+ 	clk = devm_clk_get(dev, "usb_extal");
+ 	if (!IS_ERR(clk) && !clk_prepare_enable(clk)) {
+ 		priv->extal = !!clk_get_rate(clk);
+@@ -183,6 +178,8 @@ static int rcar_usb2_clock_sel_probe(struct platform_device *pdev)
+ 		return -ENOENT;
+ 	}
+ 
++	pm_runtime_enable(dev);
++	pm_runtime_get_sync(dev);
+ 	platform_set_drvdata(pdev, priv);
+ 	dev_set_drvdata(dev, priv);
+ 
+@@ -193,11 +190,20 @@ static int rcar_usb2_clock_sel_probe(struct platform_device *pdev)
+ 	init.num_parents = 0;
+ 	priv->hw.init = &init;
+ 
+-	clk = clk_register(NULL, &priv->hw);
+-	if (IS_ERR(clk))
+-		return PTR_ERR(clk);
++	ret = devm_clk_hw_register(NULL, &priv->hw);
++	if (ret)
++		goto pm_put;
++
++	ret = of_clk_add_hw_provider(np, of_clk_hw_simple_get, &priv->hw);
++	if (ret)
++		goto pm_put;
++
++	return 0;
+ 
+-	return of_clk_add_hw_provider(np, of_clk_hw_simple_get, &priv->hw);
++pm_put:
++	pm_runtime_put(dev);
++	pm_runtime_disable(dev);
++	return ret;
+ }
+ 
+ static const struct dev_pm_ops rcar_usb2_clock_sel_pm_ops = {
+diff --git a/drivers/clk/tegra/clk-periph-gate.c b/drivers/clk/tegra/clk-periph-gate.c
+index 4b31beefc9fc2..dc3f92678407b 100644
+--- a/drivers/clk/tegra/clk-periph-gate.c
++++ b/drivers/clk/tegra/clk-periph-gate.c
+@@ -48,18 +48,9 @@ static int clk_periph_is_enabled(struct clk_hw *hw)
+ 	return state;
+ }
+ 
+-static int clk_periph_enable(struct clk_hw *hw)
++static void clk_periph_enable_locked(struct clk_hw *hw)
+ {
+ 	struct tegra_clk_periph_gate *gate = to_clk_periph_gate(hw);
+-	unsigned long flags = 0;
+-
+-	spin_lock_irqsave(&periph_ref_lock, flags);
+-
+-	gate->enable_refcnt[gate->clk_num]++;
+-	if (gate->enable_refcnt[gate->clk_num] > 1) {
+-		spin_unlock_irqrestore(&periph_ref_lock, flags);
+-		return 0;
+-	}
+ 
+ 	write_enb_set(periph_clk_to_bit(gate), gate);
+ 	udelay(2);
+@@ -78,6 +69,32 @@ static int clk_periph_enable(struct clk_hw *hw)
+ 		udelay(1);
+ 		writel_relaxed(0, gate->clk_base + LVL2_CLK_GATE_OVRE);
+ 	}
++}
++
++static void clk_periph_disable_locked(struct clk_hw *hw)
++{
++	struct tegra_clk_periph_gate *gate = to_clk_periph_gate(hw);
++
++	/*
++	 * If peripheral is in the APB bus then read the APB bus to
++	 * flush the write operation in apb bus. This will avoid the
++	 * peripheral access after disabling clock
++	 */
++	if (gate->flags & TEGRA_PERIPH_ON_APB)
++		tegra_read_chipid();
++
++	write_enb_clr(periph_clk_to_bit(gate), gate);
++}
++
++static int clk_periph_enable(struct clk_hw *hw)
++{
++	struct tegra_clk_periph_gate *gate = to_clk_periph_gate(hw);
++	unsigned long flags = 0;
++
++	spin_lock_irqsave(&periph_ref_lock, flags);
++
++	if (!gate->enable_refcnt[gate->clk_num]++)
++		clk_periph_enable_locked(hw);
+ 
+ 	spin_unlock_irqrestore(&periph_ref_lock, flags);
+ 
+@@ -91,21 +108,28 @@ static void clk_periph_disable(struct clk_hw *hw)
+ 
+ 	spin_lock_irqsave(&periph_ref_lock, flags);
+ 
+-	gate->enable_refcnt[gate->clk_num]--;
+-	if (gate->enable_refcnt[gate->clk_num] > 0) {
+-		spin_unlock_irqrestore(&periph_ref_lock, flags);
+-		return;
+-	}
++	WARN_ON(!gate->enable_refcnt[gate->clk_num]);
++
++	if (--gate->enable_refcnt[gate->clk_num] == 0)
++		clk_periph_disable_locked(hw);
++
++	spin_unlock_irqrestore(&periph_ref_lock, flags);
++}
++
++static void clk_periph_disable_unused(struct clk_hw *hw)
++{
++	struct tegra_clk_periph_gate *gate = to_clk_periph_gate(hw);
++	unsigned long flags = 0;
++
++	spin_lock_irqsave(&periph_ref_lock, flags);
+ 
+ 	/*
+-	 * If peripheral is in the APB bus then read the APB bus to
+-	 * flush the write operation in apb bus. This will avoid the
+-	 * peripheral access after disabling clock
++	 * Some clocks are duplicated and some of them are marked as critical,
++	 * like fuse and fuse_burn for example, thus the enable_refcnt will
++	 * be non-zero here if the "unused" duplicate is disabled by CCF.
+ 	 */
+-	if (gate->flags & TEGRA_PERIPH_ON_APB)
+-		tegra_read_chipid();
+-
+-	write_enb_clr(periph_clk_to_bit(gate), gate);
++	if (!gate->enable_refcnt[gate->clk_num])
++		clk_periph_disable_locked(hw);
+ 
+ 	spin_unlock_irqrestore(&periph_ref_lock, flags);
+ }
+@@ -114,6 +138,7 @@ const struct clk_ops tegra_clk_periph_gate_ops = {
+ 	.is_enabled = clk_periph_is_enabled,
+ 	.enable = clk_periph_enable,
+ 	.disable = clk_periph_disable,
++	.disable_unused = clk_periph_disable_unused,
+ };
+ 
+ struct clk *tegra_clk_register_periph_gate(const char *name,
+@@ -148,9 +173,6 @@ struct clk *tegra_clk_register_periph_gate(const char *name,
+ 	gate->enable_refcnt = enable_refcnt;
+ 	gate->regs = pregs;
+ 
+-	if (read_enb(gate) & periph_clk_to_bit(gate))
+-		enable_refcnt[clk_num]++;
+-
+ 	/* Data in .init is copied by clk_register(), so stack variable OK */
+ 	gate->hw.init = &init;
+ 
+diff --git a/drivers/clk/tegra/clk-periph.c b/drivers/clk/tegra/clk-periph.c
+index 67620c7ecd9ee..79ca3aa072b70 100644
+--- a/drivers/clk/tegra/clk-periph.c
++++ b/drivers/clk/tegra/clk-periph.c
+@@ -100,6 +100,15 @@ static void clk_periph_disable(struct clk_hw *hw)
+ 	gate_ops->disable(gate_hw);
+ }
+ 
++static void clk_periph_disable_unused(struct clk_hw *hw)
++{
++	struct tegra_clk_periph *periph = to_clk_periph(hw);
++	const struct clk_ops *gate_ops = periph->gate_ops;
++	struct clk_hw *gate_hw = &periph->gate.hw;
++
++	gate_ops->disable_unused(gate_hw);
++}
++
+ static void clk_periph_restore_context(struct clk_hw *hw)
+ {
+ 	struct tegra_clk_periph *periph = to_clk_periph(hw);
+@@ -126,6 +135,7 @@ const struct clk_ops tegra_clk_periph_ops = {
+ 	.is_enabled = clk_periph_is_enabled,
+ 	.enable = clk_periph_enable,
+ 	.disable = clk_periph_disable,
++	.disable_unused = clk_periph_disable_unused,
+ 	.restore_context = clk_periph_restore_context,
+ };
+ 
+@@ -135,6 +145,7 @@ static const struct clk_ops tegra_clk_periph_nodiv_ops = {
+ 	.is_enabled = clk_periph_is_enabled,
+ 	.enable = clk_periph_enable,
+ 	.disable = clk_periph_disable,
++	.disable_unused = clk_periph_disable_unused,
+ 	.restore_context = clk_periph_restore_context,
+ };
+ 
+diff --git a/drivers/clk/tegra/clk-pll.c b/drivers/clk/tegra/clk-pll.c
+index c5cc0a2dac6ff..d709ecb7d8d7e 100644
+--- a/drivers/clk/tegra/clk-pll.c
++++ b/drivers/clk/tegra/clk-pll.c
+@@ -1131,7 +1131,8 @@ static int clk_pllu_enable(struct clk_hw *hw)
+ 	if (pll->lock)
+ 		spin_lock_irqsave(pll->lock, flags);
+ 
+-	_clk_pll_enable(hw);
++	if (!clk_pll_is_enabled(hw))
++		_clk_pll_enable(hw);
+ 
+ 	ret = clk_pll_wait_for_lock(pll);
+ 	if (ret < 0)
+@@ -1748,15 +1749,13 @@ static int clk_pllu_tegra114_enable(struct clk_hw *hw)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (clk_pll_is_enabled(hw))
+-		return 0;
+-
+ 	input_rate = clk_hw_get_rate(__clk_get_hw(osc));
+ 
+ 	if (pll->lock)
+ 		spin_lock_irqsave(pll->lock, flags);
+ 
+-	_clk_pll_enable(hw);
++	if (!clk_pll_is_enabled(hw))
++		_clk_pll_enable(hw);
+ 
+ 	ret = clk_pll_wait_for_lock(pll);
+ 	if (ret < 0)
+diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
+index d0177824c518b..f4881764bf8f4 100644
+--- a/drivers/clocksource/arm_arch_timer.c
++++ b/drivers/clocksource/arm_arch_timer.c
+@@ -352,7 +352,7 @@ static u64 notrace arm64_858921_read_cntvct_el0(void)
+ 	do {								\
+ 		_val = read_sysreg(reg);				\
+ 		_retries--;						\
+-	} while (((_val + 1) & GENMASK(9, 0)) <= 1 && _retries);	\
++	} while (((_val + 1) & GENMASK(8, 0)) <= 1 && _retries);	\
+ 									\
+ 	WARN_ON_ONCE(!_retries);					\
+ 	_val;								\
+diff --git a/drivers/extcon/extcon-intel-mrfld.c b/drivers/extcon/extcon-intel-mrfld.c
+index f47016fb28a84..cd1a5f230077c 100644
+--- a/drivers/extcon/extcon-intel-mrfld.c
++++ b/drivers/extcon/extcon-intel-mrfld.c
+@@ -197,6 +197,7 @@ static int mrfld_extcon_probe(struct platform_device *pdev)
+ 	struct intel_soc_pmic *pmic = dev_get_drvdata(dev->parent);
+ 	struct regmap *regmap = pmic->regmap;
+ 	struct mrfld_extcon_data *data;
++	unsigned int status;
+ 	unsigned int id;
+ 	int irq, ret;
+ 
+@@ -244,6 +245,14 @@ static int mrfld_extcon_probe(struct platform_device *pdev)
+ 	/* Get initial state */
+ 	mrfld_extcon_role_detect(data);
+ 
++	/*
++	 * Cached status value is used for cable detection, see comments
++	 * in mrfld_extcon_cable_detect(), we need to sync cached value
++	 * with a real state of the hardware.
++	 */
++	regmap_read(regmap, BCOVE_SCHGRIRQ1, &status);
++	data->status = status;
++
+ 	mrfld_extcon_clear(data, BCOVE_MIRQLVL1, BCOVE_LVL1_CHGR);
+ 	mrfld_extcon_clear(data, BCOVE_MCHGRIRQ1, BCOVE_CHGRIRQ_ALL);
+ 
+diff --git a/drivers/firmware/qemu_fw_cfg.c b/drivers/firmware/qemu_fw_cfg.c
+index 0078260fbabea..172c751a4f6c2 100644
+--- a/drivers/firmware/qemu_fw_cfg.c
++++ b/drivers/firmware/qemu_fw_cfg.c
+@@ -299,15 +299,13 @@ static int fw_cfg_do_platform_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static ssize_t fw_cfg_showrev(struct kobject *k, struct attribute *a, char *buf)
++static ssize_t fw_cfg_showrev(struct kobject *k, struct kobj_attribute *a,
++			      char *buf)
+ {
+ 	return sprintf(buf, "%u\n", fw_cfg_rev);
+ }
+ 
+-static const struct {
+-	struct attribute attr;
+-	ssize_t (*show)(struct kobject *k, struct attribute *a, char *buf);
+-} fw_cfg_rev_attr = {
++static const struct kobj_attribute fw_cfg_rev_attr = {
+ 	.attr = { .name = "rev", .mode = S_IRUSR },
+ 	.show = fw_cfg_showrev,
+ };
+diff --git a/drivers/fpga/stratix10-soc.c b/drivers/fpga/stratix10-soc.c
+index 657a70c5fc996..9e34bbbce26e2 100644
+--- a/drivers/fpga/stratix10-soc.c
++++ b/drivers/fpga/stratix10-soc.c
+@@ -454,6 +454,7 @@ static int s10_remove(struct platform_device *pdev)
+ 	struct s10_priv *priv = mgr->priv;
+ 
+ 	fpga_mgr_unregister(mgr);
++	fpga_mgr_free(mgr);
+ 	stratix10_svc_free_channel(priv->chan);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index 5da487b64a668..26f8a21383774 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -48,12 +48,6 @@ static struct {
+ 	spinlock_t mem_limit_lock;
+ } kfd_mem_limit;
+ 
+-/* Struct used for amdgpu_amdkfd_bo_validate */
+-struct amdgpu_vm_parser {
+-	uint32_t        domain;
+-	bool            wait;
+-};
+-
+ static const char * const domain_bit_to_string[] = {
+ 		"CPU",
+ 		"GTT",
+@@ -337,11 +331,9 @@ validate_fail:
+ 	return ret;
+ }
+ 
+-static int amdgpu_amdkfd_validate(void *param, struct amdgpu_bo *bo)
++static int amdgpu_amdkfd_validate_vm_bo(void *_unused, struct amdgpu_bo *bo)
+ {
+-	struct amdgpu_vm_parser *p = param;
+-
+-	return amdgpu_amdkfd_bo_validate(bo, p->domain, p->wait);
++	return amdgpu_amdkfd_bo_validate(bo, bo->allowed_domains, false);
+ }
+ 
+ /* vm_validate_pt_pd_bos - Validate page table and directory BOs
+@@ -355,20 +347,15 @@ static int vm_validate_pt_pd_bos(struct amdgpu_vm *vm)
+ {
+ 	struct amdgpu_bo *pd = vm->root.base.bo;
+ 	struct amdgpu_device *adev = amdgpu_ttm_adev(pd->tbo.bdev);
+-	struct amdgpu_vm_parser param;
+ 	int ret;
+ 
+-	param.domain = AMDGPU_GEM_DOMAIN_VRAM;
+-	param.wait = false;
+-
+-	ret = amdgpu_vm_validate_pt_bos(adev, vm, amdgpu_amdkfd_validate,
+-					&param);
++	ret = amdgpu_vm_validate_pt_bos(adev, vm, amdgpu_amdkfd_validate_vm_bo, NULL);
+ 	if (ret) {
+ 		pr_err("failed to validate PT BOs\n");
+ 		return ret;
+ 	}
+ 
+-	ret = amdgpu_amdkfd_validate(&param, pd);
++	ret = amdgpu_amdkfd_validate_vm_bo(NULL, pd);
+ 	if (ret) {
+ 		pr_err("failed to validate PD\n");
+ 		return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 87c7c45f1bb73..6948ab3c0d998 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2760,7 +2760,7 @@ static int amdgpu_device_ip_reinit_early_sriov(struct amdgpu_device *adev)
+ 		AMD_IP_BLOCK_TYPE_IH,
+ 	};
+ 
+-	for (i = 0; i < ARRAY_SIZE(ip_order); i++) {
++	for (i = 0; i < adev->num_ip_blocks; i++) {
+ 		int j;
+ 		struct amdgpu_ip_block *block;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index 28f20f0b722f5..163188ce02bd4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -128,7 +128,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 	struct amdgpu_device *adev = ring->adev;
+ 	struct amdgpu_ib *ib = &ibs[0];
+ 	struct dma_fence *tmp = NULL;
+-	bool skip_preamble, need_ctx_switch;
++	bool need_ctx_switch;
+ 	unsigned patch_offset = ~0;
+ 	struct amdgpu_vm *vm;
+ 	uint64_t fence_ctx;
+@@ -221,7 +221,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 	if (need_ctx_switch)
+ 		status |= AMDGPU_HAVE_CTX_SWITCH;
+ 
+-	skip_preamble = ring->current_ctx == fence_ctx;
+ 	if (job && ring->funcs->emit_cntxcntl) {
+ 		status |= job->preamble_status;
+ 		status |= job->preemption_status;
+@@ -239,14 +238,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 	for (i = 0; i < num_ibs; ++i) {
+ 		ib = &ibs[i];
+ 
+-		/* drop preamble IBs if we don't have a context switch */
+-		if ((ib->flags & AMDGPU_IB_FLAG_PREAMBLE) &&
+-		    skip_preamble &&
+-		    !(status & AMDGPU_PREAMBLE_IB_PRESENT_FIRST) &&
+-		    !amdgpu_mcbp &&
+-		    !amdgpu_sriov_vf(adev)) /* for SRIOV preemption, Preamble CE ib must be inserted anyway */
+-			continue;
+-
+ 		if (job && ring->funcs->emit_frame_cntl) {
+ 			if (secure != !!(ib->flags & AMDGPU_IB_FLAGS_SECURE)) {
+ 				amdgpu_ring_emit_frame_cntl(ring, false, secure);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.h
+index 1838144936580..bda4438c39256 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.h
+@@ -21,6 +21,11 @@
+ #ifndef __AMDGPU_UMC_H__
+ #define __AMDGPU_UMC_H__
+ 
++/*
++ * (addr / 256) * 4096, the higher 26 bits in ErrorAddr
++ * is the index of 4KB block
++ */
++#define ADDR_OF_4KB_BLOCK(addr)			(((addr) & ~0xffULL) << 4)
+ /*
+  * (addr / 256) * 8192, the higher 26 bits in ErrorAddr
+  * is the index of 8KB block
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index e82f49f62f6e6..1f2e2460e121e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -143,7 +143,7 @@ static const struct soc15_reg_golden golden_settings_sdma_4_1[] = {
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_IB_CNTL, 0x800f0111, 0x00000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+-	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003e0),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000)
+ };
+ 
+@@ -269,7 +269,7 @@ static const struct soc15_reg_golden golden_settings_sdma_4_3[] = {
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_POWER_CNTL, 0x003fff07, 0x40000051),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+-	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003e0),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x03fbe1fe)
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/umc_v8_7.c b/drivers/gpu/drm/amd/amdgpu/umc_v8_7.c
+index 5665c77a9d587..afbbe9f05d5ea 100644
+--- a/drivers/gpu/drm/amd/amdgpu/umc_v8_7.c
++++ b/drivers/gpu/drm/amd/amdgpu/umc_v8_7.c
+@@ -233,7 +233,7 @@ static void umc_v8_7_query_error_address(struct amdgpu_device *adev,
+ 		err_addr &= ~((0x1ULL << lsb) - 1);
+ 
+ 		/* translate umc channel address to soc pa, 3 parts are included */
+-		retired_page = ADDR_OF_8KB_BLOCK(err_addr) |
++		retired_page = ADDR_OF_4KB_BLOCK(err_addr) |
+ 				ADDR_OF_256B_BLOCK(channel_index) |
+ 				OFFSET_IN_256B_BLOCK(err_addr);
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index 6ea8a4b6efde3..352a32dc609b2 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -486,9 +486,6 @@ static int destroy_queue_nocpsch_locked(struct device_queue_manager *dqm,
+ 	if (retval == -ETIME)
+ 		qpd->reset_wavefronts = true;
+ 
+-
+-	mqd_mgr->free_mqd(mqd_mgr, q->mqd, q->mqd_mem_obj);
+-
+ 	list_del(&q->list);
+ 	if (list_empty(&qpd->queues_list)) {
+ 		if (qpd->reset_wavefronts) {
+@@ -523,6 +520,8 @@ static int destroy_queue_nocpsch(struct device_queue_manager *dqm,
+ 	int retval;
+ 	uint64_t sdma_val = 0;
+ 	struct kfd_process_device *pdd = qpd_to_pdd(qpd);
++	struct mqd_manager *mqd_mgr =
++		dqm->mqd_mgrs[get_mqd_type_from_queue_type(q->properties.type)];
+ 
+ 	/* Get the SDMA queue stats */
+ 	if ((q->properties.type == KFD_QUEUE_TYPE_SDMA) ||
+@@ -540,6 +539,8 @@ static int destroy_queue_nocpsch(struct device_queue_manager *dqm,
+ 		pdd->sdma_past_activity_counter += sdma_val;
+ 	dqm_unlock(dqm);
+ 
++	mqd_mgr->free_mqd(mqd_mgr, q->mqd, q->mqd_mem_obj);
++
+ 	return retval;
+ }
+ 
+@@ -1632,7 +1633,7 @@ static int set_trap_handler(struct device_queue_manager *dqm,
+ static int process_termination_nocpsch(struct device_queue_manager *dqm,
+ 		struct qcm_process_device *qpd)
+ {
+-	struct queue *q, *next;
++	struct queue *q;
+ 	struct device_process_node *cur, *next_dpn;
+ 	int retval = 0;
+ 	bool found = false;
+@@ -1640,12 +1641,19 @@ static int process_termination_nocpsch(struct device_queue_manager *dqm,
+ 	dqm_lock(dqm);
+ 
+ 	/* Clear all user mode queues */
+-	list_for_each_entry_safe(q, next, &qpd->queues_list, list) {
++	while (!list_empty(&qpd->queues_list)) {
++		struct mqd_manager *mqd_mgr;
+ 		int ret;
+ 
++		q = list_first_entry(&qpd->queues_list, struct queue, list);
++		mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
++				q->properties.type)];
+ 		ret = destroy_queue_nocpsch_locked(dqm, qpd, q);
+ 		if (ret)
+ 			retval = ret;
++		dqm_unlock(dqm);
++		mqd_mgr->free_mqd(mqd_mgr, q->mqd, q->mqd_mem_obj);
++		dqm_lock(dqm);
+ 	}
+ 
+ 	/* Unregister process */
+@@ -1677,36 +1685,34 @@ static int get_wave_state(struct device_queue_manager *dqm,
+ 			  u32 *save_area_used_size)
+ {
+ 	struct mqd_manager *mqd_mgr;
+-	int r;
+ 
+ 	dqm_lock(dqm);
+ 
+-	if (q->properties.type != KFD_QUEUE_TYPE_COMPUTE ||
+-	    q->properties.is_active || !q->device->cwsr_enabled) {
+-		r = -EINVAL;
+-		goto dqm_unlock;
+-	}
+-
+ 	mqd_mgr = dqm->mqd_mgrs[KFD_MQD_TYPE_CP];
+ 
+-	if (!mqd_mgr->get_wave_state) {
+-		r = -EINVAL;
+-		goto dqm_unlock;
++	if (q->properties.type != KFD_QUEUE_TYPE_COMPUTE ||
++	    q->properties.is_active || !q->device->cwsr_enabled ||
++	    !mqd_mgr->get_wave_state) {
++		dqm_unlock(dqm);
++		return -EINVAL;
+ 	}
+ 
+-	r = mqd_mgr->get_wave_state(mqd_mgr, q->mqd, ctl_stack,
+-			ctl_stack_used_size, save_area_used_size);
+-
+-dqm_unlock:
+ 	dqm_unlock(dqm);
+-	return r;
++
++	/*
++	 * get_wave_state is outside the dqm lock to prevent circular locking
++	 * and the queue should be protected against destruction by the process
++	 * lock.
++	 */
++	return mqd_mgr->get_wave_state(mqd_mgr, q->mqd, ctl_stack,
++			ctl_stack_used_size, save_area_used_size);
+ }
+ 
+ static int process_termination_cpsch(struct device_queue_manager *dqm,
+ 		struct qcm_process_device *qpd)
+ {
+ 	int retval;
+-	struct queue *q, *next;
++	struct queue *q;
+ 	struct kernel_queue *kq, *kq_next;
+ 	struct mqd_manager *mqd_mgr;
+ 	struct device_process_node *cur, *next_dpn;
+@@ -1763,24 +1769,26 @@ static int process_termination_cpsch(struct device_queue_manager *dqm,
+ 		qpd->reset_wavefronts = false;
+ 	}
+ 
+-	dqm_unlock(dqm);
+-
+-	/* Outside the DQM lock because under the DQM lock we can't do
+-	 * reclaim or take other locks that others hold while reclaiming.
+-	 */
+-	if (found)
+-		kfd_dec_compute_active(dqm->dev);
+-
+ 	/* Lastly, free mqd resources.
+ 	 * Do free_mqd() after dqm_unlock to avoid circular locking.
+ 	 */
+-	list_for_each_entry_safe(q, next, &qpd->queues_list, list) {
++	while (!list_empty(&qpd->queues_list)) {
++		q = list_first_entry(&qpd->queues_list, struct queue, list);
+ 		mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
+ 				q->properties.type)];
+ 		list_del(&q->list);
+ 		qpd->queue_count--;
++		dqm_unlock(dqm);
+ 		mqd_mgr->free_mqd(mqd_mgr, q->mqd, q->mqd_mem_obj);
++		dqm_lock(dqm);
+ 	}
++	dqm_unlock(dqm);
++
++	/* Outside the DQM lock because under the DQM lock we can't do
++	 * reclaim or take other locks that others hold while reclaiming.
++	 */
++	if (found)
++		kfd_dec_compute_active(dqm->dev);
+ 
+ 	return retval;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index df26c07cb9120..6eb308670f487 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3685,6 +3685,23 @@ static int fill_dc_scaling_info(const struct drm_plane_state *state,
+ 	scaling_info->src_rect.x = state->src_x >> 16;
+ 	scaling_info->src_rect.y = state->src_y >> 16;
+ 
++	/*
++	 * For reasons we don't (yet) fully understand a non-zero
++	 * src_y coordinate into an NV12 buffer can cause a
++	 * system hang. To avoid hangs (and maybe be overly cautious)
++	 * let's reject both non-zero src_x and src_y.
++	 *
++	 * We currently know of only one use-case to reproduce a
++	 * scenario with non-zero src_x and src_y for NV12, which
++	 * is to gesture the YouTube Android app into full screen
++	 * on ChromeOS.
++	 */
++	if (state->fb &&
++	    state->fb->format->format == DRM_FORMAT_NV12 &&
++	    (scaling_info->src_rect.x != 0 ||
++	     scaling_info->src_rect.y != 0))
++		return -EINVAL;
++
+ 	/*
+ 	 * For reasons we don't (yet) fully understand a non-zero
+ 	 * src_y coordinate into an NV12 buffer can cause a
+@@ -8291,7 +8308,8 @@ skip_modeset:
+ 	BUG_ON(dm_new_crtc_state->stream == NULL);
+ 
+ 	/* Scaling or underscan settings */
+-	if (is_scaling_state_different(dm_old_conn_state, dm_new_conn_state))
++	if (is_scaling_state_different(dm_old_conn_state, dm_new_conn_state) ||
++				drm_atomic_crtc_needs_modeset(new_crtc_state))
+ 		update_stream_scaling_settings(
+ 			&new_crtc_state->mode, dm_new_conn_state, dm_new_crtc_state->stream);
+ 
+@@ -8744,6 +8762,10 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 		    old_crtc_state->vrr_enabled == new_crtc_state->vrr_enabled)
+ 			continue;
+ 
++		ret = amdgpu_dm_verify_lut_sizes(new_crtc_state);
++		if (ret)
++			goto fail;
++
+ 		if (!new_crtc_state->enable)
+ 			continue;
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index 1df7f1b180496..6c7235bb2f41b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -498,6 +498,7 @@ void amdgpu_dm_trigger_timing_sync(struct drm_device *dev);
+ #define MAX_COLOR_LEGACY_LUT_ENTRIES 256
+ 
+ void amdgpu_dm_init_color_mod(void);
++int amdgpu_dm_verify_lut_sizes(const struct drm_crtc_state *crtc_state);
+ int amdgpu_dm_update_crtc_color_mgmt(struct dm_crtc_state *crtc);
+ int amdgpu_dm_update_plane_color_mgmt(struct dm_crtc_state *crtc,
+ 				      struct dc_plane_state *dc_plane_state);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c
+index 5df05f0d18bc9..179ff4b42f200 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c
+@@ -284,6 +284,37 @@ static int __set_input_tf(struct dc_transfer_func *func,
+ 	return res ? 0 : -ENOMEM;
+ }
+ 
++/**
++ * Verifies that the Degamma and Gamma LUTs attached to the |crtc_state| are of
++ * the expected size.
++ * Returns 0 on success.
++ */
++int amdgpu_dm_verify_lut_sizes(const struct drm_crtc_state *crtc_state)
++{
++	const struct drm_color_lut *lut = NULL;
++	uint32_t size = 0;
++
++	lut = __extract_blob_lut(crtc_state->degamma_lut, &size);
++	if (lut && size != MAX_COLOR_LUT_ENTRIES) {
++		DRM_DEBUG_DRIVER(
++			"Invalid Degamma LUT size. Should be %u but got %u.\n",
++			MAX_COLOR_LUT_ENTRIES, size);
++		return -EINVAL;
++	}
++
++	lut = __extract_blob_lut(crtc_state->gamma_lut, &size);
++	if (lut && size != MAX_COLOR_LUT_ENTRIES &&
++	    size != MAX_COLOR_LEGACY_LUT_ENTRIES) {
++		DRM_DEBUG_DRIVER(
++			"Invalid Gamma LUT size. Should be %u (or %u for legacy) but got %u.\n",
++			MAX_COLOR_LUT_ENTRIES, MAX_COLOR_LEGACY_LUT_ENTRIES,
++			size);
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ /**
+  * amdgpu_dm_update_crtc_color_mgmt: Maps DRM color management to DC stream.
+  * @crtc: amdgpu_dm crtc state
+@@ -317,14 +348,12 @@ int amdgpu_dm_update_crtc_color_mgmt(struct dm_crtc_state *crtc)
+ 	bool is_legacy;
+ 	int r;
+ 
+-	degamma_lut = __extract_blob_lut(crtc->base.degamma_lut, &degamma_size);
+-	if (degamma_lut && degamma_size != MAX_COLOR_LUT_ENTRIES)
+-		return -EINVAL;
++	r = amdgpu_dm_verify_lut_sizes(&crtc->base);
++	if (r)
++		return r;
+ 
++	degamma_lut = __extract_blob_lut(crtc->base.degamma_lut, &degamma_size);
+ 	regamma_lut = __extract_blob_lut(crtc->base.gamma_lut, &regamma_size);
+-	if (regamma_lut && regamma_size != MAX_COLOR_LUT_ENTRIES &&
+-	    regamma_size != MAX_COLOR_LEGACY_LUT_ENTRIES)
+-		return -EINVAL;
+ 
+ 	has_degamma =
+ 		degamma_lut && !__is_lut_linear(degamma_lut, degamma_size);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 32b73ea866737..a7f8caf1086b9 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -1704,6 +1704,8 @@ static void set_dp_mst_mode(struct dc_link *link, bool mst_enable)
+ 		link->type = dc_connection_single;
+ 		link->local_sink = link->remote_sinks[0];
+ 		link->local_sink->sink_signal = SIGNAL_TYPE_DISPLAY_PORT;
++		dc_sink_retain(link->local_sink);
++		dm_helpers_dp_mst_stop_top_mgr(link->ctx, link);
+ 	} else if (mst_enable == true &&
+ 			link->type == dc_connection_single &&
+ 			link->remote_sinks[0] != NULL) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
+index fce37c527a0b9..8bb5912d837d4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
+@@ -482,10 +482,13 @@ static enum lb_memory_config dpp1_dscl_find_lb_memory_config(struct dcn10_dpp *d
+ 	int vtaps_c = scl_data->taps.v_taps_c;
+ 	int ceil_vratio = dc_fixpt_ceil(scl_data->ratios.vert);
+ 	int ceil_vratio_c = dc_fixpt_ceil(scl_data->ratios.vert_c);
+-	enum lb_memory_config mem_cfg = LB_MEMORY_CONFIG_0;
+ 
+-	if (dpp->base.ctx->dc->debug.use_max_lb)
+-		return mem_cfg;
++	if (dpp->base.ctx->dc->debug.use_max_lb) {
++		if (scl_data->format == PIXEL_FORMAT_420BPP8
++				|| scl_data->format == PIXEL_FORMAT_420BPP10)
++			return LB_MEMORY_CONFIG_3;
++		return LB_MEMORY_CONFIG_0;
++	}
+ 
+ 	dpp->base.caps->dscl_calc_lb_num_partitions(
+ 			scl_data, LB_MEMORY_CONFIG_1, &num_part_y, &num_part_c);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index f1e9b3b06b924..9d3ccdd355825 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -243,7 +243,7 @@ void dcn20_dccg_init(struct dce_hwseq *hws)
+ 	REG_WRITE(MILLISECOND_TIME_BASE_DIV, 0x1186a0);
+ 
+ 	/* This value is dependent on the hardware pipeline delay so set once per SOC */
+-	REG_WRITE(DISPCLK_FREQ_CHANGE_CNTL, 0x801003c);
++	REG_WRITE(DISPCLK_FREQ_CHANGE_CNTL, 0xe01003c);
+ }
+ 
+ void dcn20_disable_vga(
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+index 9e0ae18e71fac..2663f1b318420 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+@@ -64,6 +64,7 @@ typedef struct {
+ #define BPP_INVALID 0
+ #define BPP_BLENDED_PIPE 0xffffffff
+ #define DCN30_MAX_DSC_IMAGE_WIDTH 5184
++#define DCN30_MAX_FMT_420_BUFFER_WIDTH 4096
+ 
+ static void DisplayPipeConfiguration(struct display_mode_lib *mode_lib);
+ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation(
+@@ -2052,7 +2053,7 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
+ 			v->DISPCLKWithoutRamping,
+ 			v->DISPCLKDPPCLKVCOSpeed);
+ 	v->MaxDispclkRoundedToDFSGranularity = RoundToDFSGranularityDown(
+-			v->soc.clock_limits[mode_lib->soc.num_states].dispclk_mhz,
++			v->soc.clock_limits[mode_lib->soc.num_states - 1].dispclk_mhz,
+ 			v->DISPCLKDPPCLKVCOSpeed);
+ 	if (v->DISPCLKWithoutRampingRoundedToDFSGranularity
+ 			> v->MaxDispclkRoundedToDFSGranularity) {
+@@ -3957,20 +3958,20 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 			for (k = 0; k <= v->NumberOfActivePlanes - 1; k++) {
+ 				v->PlaneRequiredDISPCLKWithoutODMCombine = v->PixelClock[k] * (1.0 + v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0)
+ 						* (1.0 + v->DISPCLKRampingMargin / 100.0);
+-				if ((v->PlaneRequiredDISPCLKWithoutODMCombine >= v->MaxDispclk[i] && v->MaxDispclk[i] == v->MaxDispclk[mode_lib->soc.num_states]
+-						&& v->MaxDppclk[i] == v->MaxDppclk[mode_lib->soc.num_states])) {
++				if ((v->PlaneRequiredDISPCLKWithoutODMCombine >= v->MaxDispclk[i] && v->MaxDispclk[i] == v->MaxDispclk[mode_lib->soc.num_states - 1]
++						&& v->MaxDppclk[i] == v->MaxDppclk[mode_lib->soc.num_states - 1])) {
+ 					v->PlaneRequiredDISPCLKWithoutODMCombine = v->PixelClock[k] * (1 + v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
+ 				}
+ 				v->PlaneRequiredDISPCLKWithODMCombine2To1 = v->PixelClock[k] / 2 * (1 + v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0)
+ 						* (1 + v->DISPCLKRampingMargin / 100.0);
+-				if ((v->PlaneRequiredDISPCLKWithODMCombine2To1 >= v->MaxDispclk[i] && v->MaxDispclk[i] == v->MaxDispclk[mode_lib->soc.num_states]
+-						&& v->MaxDppclk[i] == v->MaxDppclk[mode_lib->soc.num_states])) {
++				if ((v->PlaneRequiredDISPCLKWithODMCombine2To1 >= v->MaxDispclk[i] && v->MaxDispclk[i] == v->MaxDispclk[mode_lib->soc.num_states - 1]
++						&& v->MaxDppclk[i] == v->MaxDppclk[mode_lib->soc.num_states - 1])) {
+ 					v->PlaneRequiredDISPCLKWithODMCombine2To1 = v->PixelClock[k] / 2 * (1 + v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
+ 				}
+ 				v->PlaneRequiredDISPCLKWithODMCombine4To1 = v->PixelClock[k] / 4 * (1 + v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0)
+ 						* (1 + v->DISPCLKRampingMargin / 100.0);
+-				if ((v->PlaneRequiredDISPCLKWithODMCombine4To1 >= v->MaxDispclk[i] && v->MaxDispclk[i] == v->MaxDispclk[mode_lib->soc.num_states]
+-						&& v->MaxDppclk[i] == v->MaxDppclk[mode_lib->soc.num_states])) {
++				if ((v->PlaneRequiredDISPCLKWithODMCombine4To1 >= v->MaxDispclk[i] && v->MaxDispclk[i] == v->MaxDispclk[mode_lib->soc.num_states - 1]
++						&& v->MaxDppclk[i] == v->MaxDppclk[mode_lib->soc.num_states - 1])) {
+ 					v->PlaneRequiredDISPCLKWithODMCombine4To1 = v->PixelClock[k] / 4 * (1 + v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
+ 				}
+ 
+@@ -3987,19 +3988,30 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 				} else if (v->PlaneRequiredDISPCLKWithoutODMCombine > v->MaxDispclkRoundedDownToDFSGranularity) {
+ 					v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ 					v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine2To1;
+-				} else if (v->DSCEnabled[k] && (v->HActive[k] > DCN30_MAX_DSC_IMAGE_WIDTH)) {
+-					v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+-					v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine2To1;
+ 				} else {
+ 					v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ 					v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithoutODMCombine;
+-					/*420 format workaround*/
+-					if (v->HActive[k] > 4096 && v->OutputFormat[k] == dm_420) {
++				}
++				if (v->DSCEnabled[k] && v->HActive[k] > DCN30_MAX_DSC_IMAGE_WIDTH
++						&& v->ODMCombineEnablePerState[i][k] != dm_odm_combine_mode_4to1) {
++					if (v->HActive[k] / 2 > DCN30_MAX_DSC_IMAGE_WIDTH) {
++						v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_4to1;
++						v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine4To1;
++					} else {
++						v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
++						v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine2To1;
++					}
++				}
++				if (v->OutputFormat[k] == dm_420 && v->HActive[k] > DCN30_MAX_FMT_420_BUFFER_WIDTH
++						&& v->ODMCombineEnablePerState[i][k] != dm_odm_combine_mode_4to1) {
++					if (v->HActive[k] / 2 > DCN30_MAX_FMT_420_BUFFER_WIDTH) {
++						v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_4to1;
++						v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine4To1;
++					} else {
+ 						v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ 						v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine2To1;
+ 					}
+ 				}
+-
+ 				if (v->ODMCombineEnablePerState[i][k] == dm_odm_combine_mode_4to1) {
+ 					v->MPCCombine[i][j][k] = false;
+ 					v->NoOfDPP[i][j][k] = 4;
+@@ -4281,42 +4293,8 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 		}
+ 	}
+ 
+-	for (i = 0; i < v->soc.num_states; i++) {
+-		v->DSCCLKRequiredMoreThanSupported[i] = false;
+-		for (k = 0; k <= v->NumberOfActivePlanes - 1; k++) {
+-			if (v->BlendingAndTiming[k] == k) {
+-				if (v->Output[k] == dm_dp || v->Output[k] == dm_edp) {
+-					if (v->OutputFormat[k] == dm_420) {
+-						v->DSCFormatFactor = 2;
+-					} else if (v->OutputFormat[k] == dm_444) {
+-						v->DSCFormatFactor = 1;
+-					} else if (v->OutputFormat[k] == dm_n422) {
+-						v->DSCFormatFactor = 2;
+-					} else {
+-						v->DSCFormatFactor = 1;
+-					}
+-					if (v->RequiresDSC[i][k] == true) {
+-						if (v->ODMCombineEnablePerState[i][k] == dm_odm_combine_mode_4to1) {
+-							if (v->PixelClockBackEnd[k] / 12.0 / v->DSCFormatFactor
+-									> (1.0 - v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0) * v->MaxDSCCLK[i]) {
+-								v->DSCCLKRequiredMoreThanSupported[i] = true;
+-							}
+-						} else if (v->ODMCombineEnablePerState[i][k] == dm_odm_combine_mode_2to1) {
+-							if (v->PixelClockBackEnd[k] / 6.0 / v->DSCFormatFactor
+-									> (1.0 - v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0) * v->MaxDSCCLK[i]) {
+-								v->DSCCLKRequiredMoreThanSupported[i] = true;
+-							}
+-						} else {
+-							if (v->PixelClockBackEnd[k] / 3.0 / v->DSCFormatFactor
+-									> (1.0 - v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0) * v->MaxDSCCLK[i]) {
+-								v->DSCCLKRequiredMoreThanSupported[i] = true;
+-							}
+-						}
+-					}
+-				}
+-			}
+-		}
+-	}
++	/* Skip dscclk validation: as long as dispclk is supported, dscclk is also implicitly supported */
++
+ 	for (i = 0; i < v->soc.num_states; i++) {
+ 		v->NotEnoughDSCUnits[i] = false;
+ 		v->TotalDSCUnitsRequired = 0.0;
+@@ -5319,7 +5297,7 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 		for (j = 0; j < 2; j++) {
+ 			if (v->ScaleRatioAndTapsSupport == 1 && v->SourceFormatPixelAndScanSupport == 1 && v->ViewportSizeSupport[i][j] == 1
+ 					&& v->DIOSupport[i] == 1 && v->ODMCombine4To1SupportCheckOK[i] == 1
+-					&& v->NotEnoughDSCUnits[i] == 0 && v->DSCCLKRequiredMoreThanSupported[i] == 0
++					&& v->NotEnoughDSCUnits[i] == 0
+ 					&& v->DTBCLKRequiredMoreThanSupported[i] == 0
+ 					&& v->ROBSupport[i][j] == 1 && v->DISPCLK_DPPCLK_Support[i][j] == 1 && v->TotalAvailablePipesSupport[i][j] == 1
+ 					&& EnoughWritebackUnits == 1 && WritebackModeSupport == 1
+diff --git a/drivers/gpu/drm/amd/display/dc/irq_types.h b/drivers/gpu/drm/amd/display/dc/irq_types.h
+index d0ccd81ad5b4d..ad3e5621a1744 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq_types.h
++++ b/drivers/gpu/drm/amd/display/dc/irq_types.h
+@@ -163,7 +163,7 @@ enum irq_type
+ };
+ 
+ #define DAL_VALID_IRQ_SRC_NUM(src) \
+-	((src) <= DAL_IRQ_SOURCES_NUMBER && (src) > DC_IRQ_SOURCE_INVALID)
++	((src) < DAL_IRQ_SOURCES_NUMBER && (src) > DC_IRQ_SOURCE_INVALID)
+ 
+ /* Number of Page Flip IRQ Sources. */
+ #define DAL_PFLIP_IRQ_SRC_NUM \
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c
+index 20e554e771d16..fa8aeec304ef4 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c
+@@ -260,7 +260,6 @@ enum mod_hdcp_status mod_hdcp_setup(struct mod_hdcp *hdcp,
+ 	struct mod_hdcp_output output;
+ 	enum mod_hdcp_status status = MOD_HDCP_STATUS_SUCCESS;
+ 
+-	memset(hdcp, 0, sizeof(struct mod_hdcp));
+ 	memset(&output, 0, sizeof(output));
+ 	hdcp->config = *config;
+ 	HDCP_TOP_INTERFACE_TRACE(hdcp);
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_execution.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_execution.c
+index f244b72e74e06..53eab2b8e2c8a 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_execution.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_execution.c
+@@ -29,8 +29,10 @@ static inline enum mod_hdcp_status validate_bksv(struct mod_hdcp *hdcp)
+ {
+ 	uint64_t n = 0;
+ 	uint8_t count = 0;
++	u8 bksv[sizeof(n)] = { };
+ 
+-	memcpy(&n, hdcp->auth.msg.hdcp1.bksv, sizeof(uint64_t));
++	memcpy(bksv, hdcp->auth.msg.hdcp1.bksv, sizeof(hdcp->auth.msg.hdcp1.bksv));
++	n = *(uint64_t *)bksv;
+ 
+ 	while (n) {
+ 		count++;
+diff --git a/drivers/gpu/drm/amd/include/navi10_enum.h b/drivers/gpu/drm/amd/include/navi10_enum.h
+index d5ead9680c6ed..84bcb96f76ea4 100644
+--- a/drivers/gpu/drm/amd/include/navi10_enum.h
++++ b/drivers/gpu/drm/amd/include/navi10_enum.h
+@@ -430,7 +430,7 @@ ARRAY_2D_DEPTH                           = 0x00000001,
+  */
+ 
+ typedef enum ENUM_NUM_SIMD_PER_CU {
+-NUM_SIMD_PER_CU                          = 0x00000004,
++NUM_SIMD_PER_CU                          = 0x00000002,
+ } ENUM_NUM_SIMD_PER_CU;
+ 
+ /*
+diff --git a/drivers/gpu/drm/arm/malidp_planes.c b/drivers/gpu/drm/arm/malidp_planes.c
+index 351a85088d0ec..f1e8bc39b16d3 100644
+--- a/drivers/gpu/drm/arm/malidp_planes.c
++++ b/drivers/gpu/drm/arm/malidp_planes.c
+@@ -922,6 +922,11 @@ static const struct drm_plane_helper_funcs malidp_de_plane_helper_funcs = {
+ 	.atomic_disable = malidp_de_plane_disable,
+ };
+ 
++static const uint64_t linear_only_modifiers[] = {
++	DRM_FORMAT_MOD_LINEAR,
++	DRM_FORMAT_MOD_INVALID
++};
++
+ int malidp_de_planes_init(struct drm_device *drm)
+ {
+ 	struct malidp_drm *malidp = drm->dev_private;
+@@ -985,8 +990,8 @@ int malidp_de_planes_init(struct drm_device *drm)
+ 		 */
+ 		ret = drm_universal_plane_init(drm, &plane->base, crtcs,
+ 				&malidp_de_plane_funcs, formats, n,
+-				(id == DE_SMART) ? NULL : modifiers, plane_type,
+-				NULL);
++				(id == DE_SMART) ? linear_only_modifiers : modifiers,
++				plane_type, NULL);
+ 
+ 		if (ret < 0)
+ 			goto cleanup;
+diff --git a/drivers/gpu/drm/ast/ast_dp501.c b/drivers/gpu/drm/ast/ast_dp501.c
+index 88121c0e0d05c..cd93c44f26627 100644
+--- a/drivers/gpu/drm/ast/ast_dp501.c
++++ b/drivers/gpu/drm/ast/ast_dp501.c
+@@ -189,6 +189,9 @@ bool ast_backup_fw(struct drm_device *dev, u8 *addr, u32 size)
+ 	u32 i, data;
+ 	u32 boot_address;
+ 
++	if (ast->config_mode != ast_use_p2a)
++		return false;
++
+ 	data = ast_mindwm(ast, 0x1e6e2100) & 0x01;
+ 	if (data) {
+ 		boot_address = get_fw_base(ast);
+@@ -207,6 +210,9 @@ static bool ast_launch_m68k(struct drm_device *dev)
+ 	u8 *fw_addr = NULL;
+ 	u8 jreg;
+ 
++	if (ast->config_mode != ast_use_p2a)
++		return false;
++
+ 	data = ast_mindwm(ast, 0x1e6e2100) & 0x01;
+ 	if (!data) {
+ 
+@@ -271,25 +277,55 @@ u8 ast_get_dp501_max_clk(struct drm_device *dev)
+ 	struct ast_private *ast = to_ast_private(dev);
+ 	u32 boot_address, offset, data;
+ 	u8 linkcap[4], linkrate, linklanes, maxclk = 0xff;
++	u32 *plinkcap;
+ 
+-	boot_address = get_fw_base(ast);
+-
+-	/* validate FW version */
+-	offset = 0xf000;
+-	data = ast_mindwm(ast, boot_address + offset);
+-	if ((data & 0xf0) != 0x10) /* version: 1x */
+-		return maxclk;
+-
+-	/* Read Link Capability */
+-	offset  = 0xf014;
+-	*(u32 *)linkcap = ast_mindwm(ast, boot_address + offset);
+-	if (linkcap[2] == 0) {
+-		linkrate = linkcap[0];
+-		linklanes = linkcap[1];
+-		data = (linkrate == 0x0a) ? (90 * linklanes) : (54 * linklanes);
+-		if (data > 0xff)
+-			data = 0xff;
+-		maxclk = (u8)data;
++	if (ast->config_mode == ast_use_p2a) {
++		boot_address = get_fw_base(ast);
++
++		/* validate FW version */
++		offset = AST_DP501_GBL_VERSION;
++		data = ast_mindwm(ast, boot_address + offset);
++		if ((data & AST_DP501_FW_VERSION_MASK) != AST_DP501_FW_VERSION_1) /* version: 1x */
++			return maxclk;
++
++		/* Read Link Capability */
++		offset  = AST_DP501_LINKRATE;
++		plinkcap = (u32 *)linkcap;
++		*plinkcap  = ast_mindwm(ast, boot_address + offset);
++		if (linkcap[2] == 0) {
++			linkrate = linkcap[0];
++			linklanes = linkcap[1];
++			data = (linkrate == 0x0a) ? (90 * linklanes) : (54 * linklanes);
++			if (data > 0xff)
++				data = 0xff;
++			maxclk = (u8)data;
++		}
++	} else {
++		if (!ast->dp501_fw_buf)
++			return AST_DP501_DEFAULT_DCLK;	/* 1024x768 as default */
++
++		/* dummy read */
++		offset = 0x0000;
++		data = readl(ast->dp501_fw_buf + offset);
++
++		/* validate FW version */
++		offset = AST_DP501_GBL_VERSION;
++		data = readl(ast->dp501_fw_buf + offset);
++		if ((data & AST_DP501_FW_VERSION_MASK) != AST_DP501_FW_VERSION_1) /* version: 1x */
++			return maxclk;
++
++		/* Read Link Capability */
++		offset = AST_DP501_LINKRATE;
++		plinkcap = (u32 *)linkcap;
++		*plinkcap = readl(ast->dp501_fw_buf + offset);
++		if (linkcap[2] == 0) {
++			linkrate = linkcap[0];
++			linklanes = linkcap[1];
++			data = (linkrate == 0x0a) ? (90 * linklanes) : (54 * linklanes);
++			if (data > 0xff)
++				data = 0xff;
++			maxclk = (u8)data;
++		}
+ 	}
+ 	return maxclk;
+ }
+@@ -298,26 +334,57 @@ bool ast_dp501_read_edid(struct drm_device *dev, u8 *ediddata)
+ {
+ 	struct ast_private *ast = to_ast_private(dev);
+ 	u32 i, boot_address, offset, data;
++	u32 *pEDIDidx;
+ 
+-	boot_address = get_fw_base(ast);
+-
+-	/* validate FW version */
+-	offset = 0xf000;
+-	data = ast_mindwm(ast, boot_address + offset);
+-	if ((data & 0xf0) != 0x10)
+-		return false;
+-
+-	/* validate PnP Monitor */
+-	offset = 0xf010;
+-	data = ast_mindwm(ast, boot_address + offset);
+-	if (!(data & 0x01))
+-		return false;
++	if (ast->config_mode == ast_use_p2a) {
++		boot_address = get_fw_base(ast);
+ 
+-	/* Read EDID */
+-	offset = 0xf020;
+-	for (i = 0; i < 128; i += 4) {
+-		data = ast_mindwm(ast, boot_address + offset + i);
+-		*(u32 *)(ediddata + i) = data;
++		/* validate FW version */
++		offset = AST_DP501_GBL_VERSION;
++		data = ast_mindwm(ast, boot_address + offset);
++		if ((data & AST_DP501_FW_VERSION_MASK) != AST_DP501_FW_VERSION_1)
++			return false;
++
++		/* validate PnP Monitor */
++		offset = AST_DP501_PNPMONITOR;
++		data = ast_mindwm(ast, boot_address + offset);
++		if (!(data & AST_DP501_PNP_CONNECTED))
++			return false;
++
++		/* Read EDID */
++		offset = AST_DP501_EDID_DATA;
++		for (i = 0; i < 128; i += 4) {
++			data = ast_mindwm(ast, boot_address + offset + i);
++			pEDIDidx = (u32 *)(ediddata + i);
++			*pEDIDidx = data;
++		}
++	} else {
++		if (!ast->dp501_fw_buf)
++			return false;
++
++		/* dummy read */
++		offset = 0x0000;
++		data = readl(ast->dp501_fw_buf + offset);
++
++		/* validate FW version */
++		offset = AST_DP501_GBL_VERSION;
++		data = readl(ast->dp501_fw_buf + offset);
++		if ((data & AST_DP501_FW_VERSION_MASK) != AST_DP501_FW_VERSION_1)
++			return false;
++
++		/* validate PnP Monitor */
++		offset = AST_DP501_PNPMONITOR;
++		data = readl(ast->dp501_fw_buf + offset);
++		if (!(data & AST_DP501_PNP_CONNECTED))
++			return false;
++
++		/* Read EDID */
++		offset = AST_DP501_EDID_DATA;
++		for (i = 0; i < 128; i += 4) {
++			data = readl(ast->dp501_fw_buf + offset + i);
++			pEDIDidx = (u32 *)(ediddata + i);
++			*pEDIDidx = data;
++		}
+ 	}
+ 
+ 	return true;
+diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
+index 467049ca8430a..b68b1ddfecb7a 100644
+--- a/drivers/gpu/drm/ast/ast_drv.h
++++ b/drivers/gpu/drm/ast/ast_drv.h
+@@ -120,6 +120,7 @@ struct ast_private {
+ 
+ 	void __iomem *regs;
+ 	void __iomem *ioregs;
++	void __iomem *dp501_fw_buf;
+ 
+ 	enum ast_chip chip;
+ 	bool vga2_clone;
+@@ -298,6 +299,17 @@ int ast_mode_config_init(struct ast_private *ast);
+ #define AST_MM_ALIGN_SHIFT 4
+ #define AST_MM_ALIGN_MASK ((1 << AST_MM_ALIGN_SHIFT) - 1)
+ 
++#define AST_DP501_FW_VERSION_MASK	GENMASK(7, 4)
++#define AST_DP501_FW_VERSION_1		BIT(4)
++#define AST_DP501_PNP_CONNECTED		BIT(1)
++
++#define AST_DP501_DEFAULT_DCLK	65
++
++#define AST_DP501_GBL_VERSION	0xf000
++#define AST_DP501_PNPMONITOR	0xf010
++#define AST_DP501_LINKRATE	0xf014
++#define AST_DP501_EDID_DATA	0xf020
++
+ int ast_mm_init(struct ast_private *ast);
+ 
+ /* ast post */
+diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
+index ee82b2ddf9325..cf08215a9f21b 100644
+--- a/drivers/gpu/drm/ast/ast_main.c
++++ b/drivers/gpu/drm/ast/ast_main.c
+@@ -98,7 +98,7 @@ static void ast_detect_config_mode(struct drm_device *dev, u32 *scu_rev)
+ 	if (!(jregd0 & 0x80) || !(jregd1 & 0x10)) {
+ 		/* Double check it's actually working */
+ 		data = ast_read32(ast, 0xf004);
+-		if (data != 0xFFFFFFFF) {
++		if ((data != 0xFFFFFFFF) && (data != 0x00)) {
+ 			/* P2A works, grab silicon revision */
+ 			ast->config_mode = ast_use_p2a;
+ 
+@@ -406,7 +406,6 @@ struct ast_private *ast_device_create(struct drm_driver *drv,
+ 		return ast;
+ 	dev = &ast->base;
+ 
+-	dev->pdev = pdev;
+ 	pci_set_drvdata(pdev, dev);
+ 
+ 	ast->regs = pcim_iomap(pdev, 1, 0);
+@@ -446,6 +445,14 @@ struct ast_private *ast_device_create(struct drm_driver *drv,
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
++	/* map reserved buffer */
++	ast->dp501_fw_buf = NULL;
++	if (dev->vram_mm->vram_size < pci_resource_len(pdev, 0)) {
++		ast->dp501_fw_buf = pci_iomap_range(pdev, 0, dev->vram_mm->vram_size, 0);
++		if (!ast->dp501_fw_buf)
++			drm_info(dev, "failed to map reserved buffer!\n");
++	}
++
+ 	ret = ast_mode_config_init(ast);
+ 	if (ret)
+ 		return ERR_PTR(ret);
+diff --git a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+index d0c65610ebb5c..f56ff97c98990 100644
+--- a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
++++ b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+@@ -2369,9 +2369,9 @@ static int cdns_mhdp_probe(struct platform_device *pdev)
+ 	clk_prepare_enable(clk);
+ 
+ 	pm_runtime_enable(dev);
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0) {
+-		dev_err(dev, "pm_runtime_get_sync failed\n");
++		dev_err(dev, "pm_runtime_resume_and_get failed\n");
+ 		pm_runtime_disable(dev);
+ 		goto clk_disable;
+ 	}
+diff --git a/drivers/gpu/drm/bridge/cdns-dsi.c b/drivers/gpu/drm/bridge/cdns-dsi.c
+index 76373e31df92d..b31281f76117c 100644
+--- a/drivers/gpu/drm/bridge/cdns-dsi.c
++++ b/drivers/gpu/drm/bridge/cdns-dsi.c
+@@ -1028,7 +1028,7 @@ static ssize_t cdns_dsi_transfer(struct mipi_dsi_host *host,
+ 	struct mipi_dsi_packet packet;
+ 	int ret, i, tx_len, rx_len;
+ 
+-	ret = pm_runtime_get_sync(host->dev);
++	ret = pm_runtime_resume_and_get(host->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611.c b/drivers/gpu/drm/bridge/lontium-lt9611.c
+index d734d9402c350..c1926154eda84 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611.c
+@@ -1209,6 +1209,7 @@ static struct i2c_device_id lt9611_id[] = {
+ 	{ "lontium,lt9611", 0 },
+ 	{}
+ };
++MODULE_DEVICE_TABLE(i2c, lt9611_id);
+ 
+ static const struct of_device_id lt9611_match_table[] = {
+ 	{ .compatible = "lontium,lt9611" },
+diff --git a/drivers/gpu/drm/bridge/nwl-dsi.c b/drivers/gpu/drm/bridge/nwl-dsi.c
+index 66b67402f1acd..c65ca860712d2 100644
+--- a/drivers/gpu/drm/bridge/nwl-dsi.c
++++ b/drivers/gpu/drm/bridge/nwl-dsi.c
+@@ -21,6 +21,7 @@
+ #include <linux/sys_soc.h>
+ #include <linux/time64.h>
+ 
++#include <drm/drm_atomic_state_helper.h>
+ #include <drm/drm_bridge.h>
+ #include <drm/drm_mipi_dsi.h>
+ #include <drm/drm_of.h>
+@@ -742,7 +743,9 @@ static int nwl_dsi_disable(struct nwl_dsi *dsi)
+ 	return 0;
+ }
+ 
+-static void nwl_dsi_bridge_disable(struct drm_bridge *bridge)
++static void
++nwl_dsi_bridge_atomic_disable(struct drm_bridge *bridge,
++			      struct drm_bridge_state *old_bridge_state)
+ {
+ 	struct nwl_dsi *dsi = bridge_to_dsi(bridge);
+ 	int ret;
+@@ -803,17 +806,6 @@ static int nwl_dsi_get_dphy_params(struct nwl_dsi *dsi,
+ 	return 0;
+ }
+ 
+-static bool nwl_dsi_bridge_mode_fixup(struct drm_bridge *bridge,
+-				      const struct drm_display_mode *mode,
+-				      struct drm_display_mode *adjusted_mode)
+-{
+-	/* At least LCDIF + NWL needs active high sync */
+-	adjusted_mode->flags |= (DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC);
+-	adjusted_mode->flags &= ~(DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC);
+-
+-	return true;
+-}
+-
+ static enum drm_mode_status
+ nwl_dsi_bridge_mode_valid(struct drm_bridge *bridge,
+ 			  const struct drm_display_info *info,
+@@ -831,6 +823,24 @@ nwl_dsi_bridge_mode_valid(struct drm_bridge *bridge,
+ 	return MODE_OK;
+ }
+ 
++static int nwl_dsi_bridge_atomic_check(struct drm_bridge *bridge,
++				       struct drm_bridge_state *bridge_state,
++				       struct drm_crtc_state *crtc_state,
++				       struct drm_connector_state *conn_state)
++{
++	struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode;
++
++	/* At least LCDIF + NWL needs active high sync */
++	adjusted_mode->flags |= (DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC);
++	adjusted_mode->flags &= ~(DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC);
++
++	/* Do a full modeset if crtc_state->active is changed to be true. */
++	if (crtc_state->active_changed && crtc_state->active)
++		crtc_state->mode_changed = true;
++
++	return 0;
++}
++
+ static void
+ nwl_dsi_bridge_mode_set(struct drm_bridge *bridge,
+ 			const struct drm_display_mode *mode,
+@@ -862,7 +872,9 @@ nwl_dsi_bridge_mode_set(struct drm_bridge *bridge,
+ 	drm_mode_debug_printmodeline(adjusted_mode);
+ }
+ 
+-static void nwl_dsi_bridge_pre_enable(struct drm_bridge *bridge)
++static void
++nwl_dsi_bridge_atomic_pre_enable(struct drm_bridge *bridge,
++				 struct drm_bridge_state *old_bridge_state)
+ {
+ 	struct nwl_dsi *dsi = bridge_to_dsi(bridge);
+ 	int ret;
+@@ -897,7 +909,9 @@ static void nwl_dsi_bridge_pre_enable(struct drm_bridge *bridge)
+ 	}
+ }
+ 
+-static void nwl_dsi_bridge_enable(struct drm_bridge *bridge)
++static void
++nwl_dsi_bridge_atomic_enable(struct drm_bridge *bridge,
++			     struct drm_bridge_state *old_bridge_state)
+ {
+ 	struct nwl_dsi *dsi = bridge_to_dsi(bridge);
+ 	int ret;
+@@ -942,14 +956,17 @@ static void nwl_dsi_bridge_detach(struct drm_bridge *bridge)
+ }
+ 
+ static const struct drm_bridge_funcs nwl_dsi_bridge_funcs = {
+-	.pre_enable = nwl_dsi_bridge_pre_enable,
+-	.enable     = nwl_dsi_bridge_enable,
+-	.disable    = nwl_dsi_bridge_disable,
+-	.mode_fixup = nwl_dsi_bridge_mode_fixup,
+-	.mode_set   = nwl_dsi_bridge_mode_set,
+-	.mode_valid = nwl_dsi_bridge_mode_valid,
+-	.attach	    = nwl_dsi_bridge_attach,
+-	.detach	    = nwl_dsi_bridge_detach,
++	.atomic_duplicate_state	= drm_atomic_helper_bridge_duplicate_state,
++	.atomic_destroy_state	= drm_atomic_helper_bridge_destroy_state,
++	.atomic_reset		= drm_atomic_helper_bridge_reset,
++	.atomic_check		= nwl_dsi_bridge_atomic_check,
++	.atomic_pre_enable	= nwl_dsi_bridge_atomic_pre_enable,
++	.atomic_enable		= nwl_dsi_bridge_atomic_enable,
++	.atomic_disable		= nwl_dsi_bridge_atomic_disable,
++	.mode_set		= nwl_dsi_bridge_mode_set,
++	.mode_valid		= nwl_dsi_bridge_mode_valid,
++	.attach			= nwl_dsi_bridge_attach,
++	.detach			= nwl_dsi_bridge_detach,
+ };
+ 
+ static int nwl_dsi_parse_dt(struct nwl_dsi *dsi)
+diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
+index deeed73f4ed69..3c55753bab161 100644
+--- a/drivers/gpu/drm/drm_dp_helper.c
++++ b/drivers/gpu/drm/drm_dp_helper.c
+@@ -602,7 +602,14 @@ int drm_dp_read_downstream_info(struct drm_dp_aux *aux,
+ 	    !(dpcd[DP_DOWNSTREAMPORT_PRESENT] & DP_DWN_STRM_PORT_PRESENT))
+ 		return 0;
+ 
++	/* Some branches advertise having 0 downstream ports, despite also advertising they have a
++	 * downstream port present. The DP spec isn't clear on if this is allowed or not, but since
++	 * some branches do it we need to handle it regardless.
++	 */
+ 	len = drm_dp_downstream_port_count(dpcd);
++	if (!len)
++		return 0;
++
+ 	if (dpcd[DP_DOWNSTREAMPORT_PRESENT] & DP_DETAILED_CAP_INFO_AVAILABLE)
+ 		len *= 4;
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index eb02ecb6e345a..65d73eb5e155c 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -5080,7 +5080,7 @@ static int intel_dp_vsc_sdp_unpack(struct drm_dp_vsc_sdp *vsc,
+ 	if (size < sizeof(struct dp_sdp))
+ 		return -EINVAL;
+ 
+-	memset(vsc, 0, size);
++	memset(vsc, 0, sizeof(*vsc));
+ 
+ 	if (sdp->sdp_header.HB0 != 0)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index ac038572164d3..dfd5ed15a7f4a 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -274,7 +274,7 @@ static int mtk_crtc_ddp_hw_init(struct mtk_drm_crtc *mtk_crtc)
+ 		drm_connector_list_iter_end(&conn_iter);
+ 	}
+ 
+-	ret = pm_runtime_get_sync(crtc->dev->dev);
++	ret = pm_runtime_resume_and_get(crtc->dev->dev);
+ 	if (ret < 0) {
+ 		DRM_ERROR("Failed to enable power domain: %d\n", ret);
+ 		return ret;
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+index dbf8d429223e4..2f75e39052022 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+@@ -88,8 +88,6 @@ static int mdp4_hw_init(struct msm_kms *kms)
+ 	if (mdp4_kms->rev > 1)
+ 		mdp4_write(mdp4_kms, REG_MDP4_RESET_STATUS, 1);
+ 
+-	dev->mode_config.allow_fb_modifiers = true;
+-
+ out:
+ 	pm_runtime_put_sync(dev->dev);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
+index da3cc1d8c3312..ee1dbb2b84af4 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
+@@ -347,6 +347,12 @@ enum mdp4_pipe mdp4_plane_pipe(struct drm_plane *plane)
+ 	return mdp4_plane->pipe;
+ }
+ 
++static const uint64_t supported_format_modifiers[] = {
++	DRM_FORMAT_MOD_SAMSUNG_64_32_TILE,
++	DRM_FORMAT_MOD_LINEAR,
++	DRM_FORMAT_MOD_INVALID
++};
++
+ /* initialize plane */
+ struct drm_plane *mdp4_plane_init(struct drm_device *dev,
+ 		enum mdp4_pipe pipe_id, bool private_plane)
+@@ -375,7 +381,7 @@ struct drm_plane *mdp4_plane_init(struct drm_device *dev,
+ 	type = private_plane ? DRM_PLANE_TYPE_PRIMARY : DRM_PLANE_TYPE_OVERLAY;
+ 	ret = drm_universal_plane_init(dev, plane, 0xff, &mdp4_plane_funcs,
+ 				 mdp4_plane->formats, mdp4_plane->nformats,
+-				 NULL, type, NULL);
++				 supported_format_modifiers, type, NULL);
+ 	if (ret)
+ 		goto fail;
+ 
+diff --git a/drivers/gpu/drm/mxsfb/Kconfig b/drivers/gpu/drm/mxsfb/Kconfig
+index 0143d539f8f82..ee22cd25d3e3d 100644
+--- a/drivers/gpu/drm/mxsfb/Kconfig
++++ b/drivers/gpu/drm/mxsfb/Kconfig
+@@ -10,7 +10,6 @@ config DRM_MXSFB
+ 	depends on COMMON_CLK
+ 	select DRM_MXS
+ 	select DRM_KMS_HELPER
+-	select DRM_KMS_FB_HELPER
+ 	select DRM_KMS_CMA_HELPER
+ 	select DRM_PANEL
+ 	select DRM_PANEL_BRIDGE
+diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
+index bceb48a2dfca6..f2ad6f49fb72e 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_display.c
++++ b/drivers/gpu/drm/nouveau/nouveau_display.c
+@@ -700,7 +700,6 @@ nouveau_display_create(struct drm_device *dev)
+ 
+ 	dev->mode_config.preferred_depth = 24;
+ 	dev->mode_config.prefer_shadow = 1;
+-	dev->mode_config.allow_fb_modifiers = true;
+ 
+ 	if (drm->client.device.info.chipset < 0x11)
+ 		dev->mode_config.async_page_flip = false;
+diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
+index e0ae911ef427d..71bdafac9210d 100644
+--- a/drivers/gpu/drm/radeon/radeon_display.c
++++ b/drivers/gpu/drm/radeon/radeon_display.c
+@@ -1334,6 +1334,7 @@ radeon_user_framebuffer_create(struct drm_device *dev,
+ 	/* Handle is imported dma-buf, so cannot be migrated to VRAM for scanout */
+ 	if (obj->import_attach) {
+ 		DRM_DEBUG_KMS("Cannot create framebuffer from imported dma_buf\n");
++		drm_gem_object_put(obj);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
+index 4cd30613fa1dd..e9df5e8ef0a5b 100644
+--- a/drivers/gpu/drm/radeon/radeon_drv.c
++++ b/drivers/gpu/drm/radeon/radeon_drv.c
+@@ -416,13 +416,13 @@ radeon_pci_shutdown(struct pci_dev *pdev)
+ 	if (radeon_device_is_virtual())
+ 		radeon_pci_remove(pdev);
+ 
+-#ifdef CONFIG_PPC64
++#if defined(CONFIG_PPC64) || defined(CONFIG_MACH_LOONGSON64)
+ 	/*
+ 	 * Some adapters need to be suspended before a
+ 	 * shutdown occurs in order to prevent an error
+-	 * during kexec.
+-	 * Make this power specific becauase it breaks
+-	 * some non-power boards.
++	 * during kexec, shutdown or reboot.
++	 * Make this power and Loongson specific because
++	 * it breaks some other boards.
+ 	 */
+ 	radeon_suspend_kms(pci_get_drvdata(pdev), true, true, false);
+ #endif
+diff --git a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+index 75a76408cb29e..d0c9610ad2202 100644
+--- a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+@@ -243,7 +243,6 @@ struct dw_mipi_dsi_rockchip {
+ 	struct dw_mipi_dsi *dmd;
+ 	const struct rockchip_dw_dsi_chip_data *cdata;
+ 	struct dw_mipi_dsi_plat_data pdata;
+-	int devcnt;
+ };
+ 
+ struct dphy_pll_parameter_map {
+@@ -1141,9 +1140,6 @@ static int dw_mipi_dsi_rockchip_remove(struct platform_device *pdev)
+ {
+ 	struct dw_mipi_dsi_rockchip *dsi = platform_get_drvdata(pdev);
+ 
+-	if (dsi->devcnt == 0)
+-		component_del(dsi->dev, &dw_mipi_dsi_rockchip_ops);
+-
+ 	dw_mipi_dsi_remove(dsi->dmd);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/rockchip/rockchip_vop_reg.c b/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
+index 80053d91a301f..a6fe03c3748aa 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
++++ b/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
+@@ -349,8 +349,8 @@ static const struct vop_win_phy rk3066_win0_data = {
+ 	.nformats = ARRAY_SIZE(formats_win_full),
+ 	.format_modifiers = format_modifiers_win_full,
+ 	.enable = VOP_REG(RK3066_SYS_CTRL1, 0x1, 0),
+-	.format = VOP_REG(RK3066_SYS_CTRL0, 0x7, 4),
+-	.rb_swap = VOP_REG(RK3066_SYS_CTRL0, 0x1, 19),
++	.format = VOP_REG(RK3066_SYS_CTRL1, 0x7, 4),
++	.rb_swap = VOP_REG(RK3066_SYS_CTRL1, 0x1, 19),
+ 	.act_info = VOP_REG(RK3066_WIN0_ACT_INFO, 0x1fff1fff, 0),
+ 	.dsp_info = VOP_REG(RK3066_WIN0_DSP_INFO, 0x0fff0fff, 0),
+ 	.dsp_st = VOP_REG(RK3066_WIN0_DSP_ST, 0x1fff1fff, 0),
+@@ -361,13 +361,12 @@ static const struct vop_win_phy rk3066_win0_data = {
+ };
+ 
+ static const struct vop_win_phy rk3066_win1_data = {
+-	.scl = &rk3066_win_scl,
+ 	.data_formats = formats_win_full,
+ 	.nformats = ARRAY_SIZE(formats_win_full),
+ 	.format_modifiers = format_modifiers_win_full,
+ 	.enable = VOP_REG(RK3066_SYS_CTRL1, 0x1, 1),
+-	.format = VOP_REG(RK3066_SYS_CTRL0, 0x7, 7),
+-	.rb_swap = VOP_REG(RK3066_SYS_CTRL0, 0x1, 23),
++	.format = VOP_REG(RK3066_SYS_CTRL1, 0x7, 7),
++	.rb_swap = VOP_REG(RK3066_SYS_CTRL1, 0x1, 23),
+ 	.act_info = VOP_REG(RK3066_WIN1_ACT_INFO, 0x1fff1fff, 0),
+ 	.dsp_info = VOP_REG(RK3066_WIN1_DSP_INFO, 0x0fff0fff, 0),
+ 	.dsp_st = VOP_REG(RK3066_WIN1_DSP_ST, 0x1fff1fff, 0),
+@@ -382,8 +381,8 @@ static const struct vop_win_phy rk3066_win2_data = {
+ 	.nformats = ARRAY_SIZE(formats_win_lite),
+ 	.format_modifiers = format_modifiers_win_lite,
+ 	.enable = VOP_REG(RK3066_SYS_CTRL1, 0x1, 2),
+-	.format = VOP_REG(RK3066_SYS_CTRL0, 0x7, 10),
+-	.rb_swap = VOP_REG(RK3066_SYS_CTRL0, 0x1, 27),
++	.format = VOP_REG(RK3066_SYS_CTRL1, 0x7, 10),
++	.rb_swap = VOP_REG(RK3066_SYS_CTRL1, 0x1, 27),
+ 	.dsp_info = VOP_REG(RK3066_WIN2_DSP_INFO, 0x0fff0fff, 0),
+ 	.dsp_st = VOP_REG(RK3066_WIN2_DSP_ST, 0x1fff1fff, 0),
+ 	.yrgb_mst = VOP_REG(RK3066_WIN2_MST, 0xffffffff, 0),
+@@ -408,6 +407,9 @@ static const struct vop_common rk3066_common = {
+ 	.dither_down_en = VOP_REG(RK3066_DSP_CTRL0, 0x1, 11),
+ 	.dither_down_mode = VOP_REG(RK3066_DSP_CTRL0, 0x1, 10),
+ 	.dsp_blank = VOP_REG(RK3066_DSP_CTRL1, 0x1, 24),
++	.dither_up = VOP_REG(RK3066_DSP_CTRL0, 0x1, 9),
++	.dsp_lut_en = VOP_REG(RK3066_SYS_CTRL1, 0x1, 31),
++	.data_blank = VOP_REG(RK3066_DSP_CTRL1, 0x1, 25),
+ };
+ 
+ static const struct vop_win_data rk3066_vop_win_data[] = {
+@@ -505,7 +507,10 @@ static const struct vop_common rk3188_common = {
+ 	.dither_down_sel = VOP_REG(RK3188_DSP_CTRL0, 0x1, 27),
+ 	.dither_down_en = VOP_REG(RK3188_DSP_CTRL0, 0x1, 11),
+ 	.dither_down_mode = VOP_REG(RK3188_DSP_CTRL0, 0x1, 10),
+-	.dsp_blank = VOP_REG(RK3188_DSP_CTRL1, 0x3, 24),
++	.dsp_blank = VOP_REG(RK3188_DSP_CTRL1, 0x1, 24),
++	.dither_up = VOP_REG(RK3188_DSP_CTRL0, 0x1, 9),
++	.dsp_lut_en = VOP_REG(RK3188_SYS_CTRL, 0x1, 28),
++	.data_blank = VOP_REG(RK3188_DSP_CTRL1, 0x1, 25),
+ };
+ 
+ static const struct vop_win_data rk3188_vop_win_data[] = {
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index 1463801189624..3f7f761df4cd2 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -113,7 +113,8 @@ static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity)
+ 	rmb(); /* for list_empty to work without lock */
+ 
+ 	if (list_empty(&entity->list) ||
+-	    spsc_queue_count(&entity->job_queue) == 0)
++	    spsc_queue_count(&entity->job_queue) == 0 ||
++	    entity->stopped)
+ 		return true;
+ 
+ 	return false;
+@@ -218,11 +219,16 @@ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
+ static void drm_sched_entity_kill_jobs(struct drm_sched_entity *entity)
+ {
+ 	struct drm_sched_job *job;
++	struct dma_fence *f;
+ 	int r;
+ 
+ 	while ((job = to_drm_sched_job(spsc_queue_pop(&entity->job_queue)))) {
+ 		struct drm_sched_fence *s_fence = job->s_fence;
+ 
++		/* Wait for all dependencies to avoid data corruptions */
++		while ((f = job->sched->ops->dependency(job, entity)))
++			dma_fence_wait(f, false);
++
+ 		drm_sched_fence_scheduled(s_fence);
+ 		dma_fence_set_error(&s_fence->finished, -ESRCH);
+ 
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index 7111e0f527b0b..b6c2757c3d83f 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -887,9 +887,33 @@ EXPORT_SYMBOL(drm_sched_init);
+  */
+ void drm_sched_fini(struct drm_gpu_scheduler *sched)
+ {
++	struct drm_sched_entity *s_entity;
++	int i;
++
+ 	if (sched->thread)
+ 		kthread_stop(sched->thread);
+ 
++	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
++		struct drm_sched_rq *rq = &sched->sched_rq[i];
++
++		if (!rq)
++			continue;
++
++		spin_lock(&rq->lock);
++		list_for_each_entry(s_entity, &rq->entities, list)
++			/*
++			 * Prevents reinsertion and marks job_queue as idle,
++			 * it will removed from rq in drm_sched_entity_fini
++			 * eventually
++			 */
++			s_entity->stopped = true;
++		spin_unlock(&rq->lock);
++
++	}
++
++	/* Wakeup everyone stuck in drm_sched_entity_flush for this scheduler */
++	wake_up_all(&sched->job_scheduled);
++
+ 	/* Confirm no work left behind accessing device structures */
+ 	cancel_delayed_work_sync(&sched->work_tdr);
+ 
+diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
+index 3aa9a74060854..ceb86338c0039 100644
+--- a/drivers/gpu/drm/tegra/dc.c
++++ b/drivers/gpu/drm/tegra/dc.c
+@@ -947,6 +947,11 @@ static const struct drm_plane_helper_funcs tegra_cursor_plane_helper_funcs = {
+ 	.atomic_disable = tegra_cursor_atomic_disable,
+ };
+ 
++static const uint64_t linear_modifiers[] = {
++	DRM_FORMAT_MOD_LINEAR,
++	DRM_FORMAT_MOD_INVALID
++};
++
+ static struct drm_plane *tegra_dc_cursor_plane_create(struct drm_device *drm,
+ 						      struct tegra_dc *dc)
+ {
+@@ -975,7 +980,7 @@ static struct drm_plane *tegra_dc_cursor_plane_create(struct drm_device *drm,
+ 
+ 	err = drm_universal_plane_init(drm, &plane->base, possible_crtcs,
+ 				       &tegra_plane_funcs, formats,
+-				       num_formats, NULL,
++				       num_formats, linear_modifiers,
+ 				       DRM_PLANE_TYPE_CURSOR, NULL);
+ 	if (err < 0) {
+ 		kfree(plane);
+@@ -1094,7 +1099,8 @@ static struct drm_plane *tegra_dc_overlay_plane_create(struct drm_device *drm,
+ 
+ 	err = drm_universal_plane_init(drm, &plane->base, possible_crtcs,
+ 				       &tegra_plane_funcs, formats,
+-				       num_formats, NULL, type, NULL);
++				       num_formats, linear_modifiers,
++				       type, NULL);
+ 	if (err < 0) {
+ 		kfree(plane);
+ 		return ERR_PTR(err);
+diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
+index e4baf07992a4d..2c6ebc328b24f 100644
+--- a/drivers/gpu/drm/tegra/drm.c
++++ b/drivers/gpu/drm/tegra/drm.c
+@@ -1127,8 +1127,6 @@ static int host1x_drm_probe(struct host1x_device *dev)
+ 	drm->mode_config.max_width = 4096;
+ 	drm->mode_config.max_height = 4096;
+ 
+-	drm->mode_config.allow_fb_modifiers = true;
+-
+ 	drm->mode_config.normalize_zpos = true;
+ 
+ 	drm->mode_config.funcs = &tegra_drm_mode_config_funcs;
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 1d2416d466a36..f4ccca922e44a 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -1001,7 +1001,7 @@ static const struct vc4_pv_data bcm2711_pv3_data = {
+ 	.fifo_depth = 64,
+ 	.pixels_per_clock = 1,
+ 	.encoder_types = {
+-		[0] = VC4_ENCODER_TYPE_VEC,
++		[PV_CONTROL_CLK_SELECT_VEC] = VC4_ENCODER_TYPE_VEC,
+ 	},
+ };
+ 
+@@ -1042,6 +1042,9 @@ static void vc4_set_crtc_possible_masks(struct drm_device *drm,
+ 		struct vc4_encoder *vc4_encoder;
+ 		int i;
+ 
++		if (encoder->encoder_type == DRM_MODE_ENCODER_VIRTUAL)
++			continue;
++
+ 		vc4_encoder = to_vc4_encoder(encoder);
+ 		for (i = 0; i < ARRAY_SIZE(pv_data->encoder_types); i++) {
+ 			if (vc4_encoder->type == encoder_types[i]) {
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
+index c5f2944d5bc60..9809c3a856c67 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -837,7 +837,7 @@ void vc4_crtc_destroy_state(struct drm_crtc *crtc,
+ void vc4_crtc_reset(struct drm_crtc *crtc);
+ void vc4_crtc_handle_vblank(struct vc4_crtc *crtc);
+ void vc4_crtc_get_margins(struct drm_crtc_state *state,
+-			  unsigned int *right, unsigned int *left,
++			  unsigned int *left, unsigned int *right,
+ 			  unsigned int *top, unsigned int *bottom);
+ 
+ /* vc4_debugfs.c */
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 25a09aaf58838..c58b8840090ab 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -627,7 +627,7 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder)
+ 	unsigned long pixel_rate, hsm_rate;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(&vc4_hdmi->pdev->dev);
++	ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
+ 	if (ret < 0) {
+ 		DRM_ERROR("Failed to retain power domain: %d\n", ret);
+ 		return;
+@@ -1807,6 +1807,14 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ 	if (vc4_hdmi->variant->reset)
+ 		vc4_hdmi->variant->reset(vc4_hdmi);
+ 
++	if ((of_device_is_compatible(dev->of_node, "brcm,bcm2711-hdmi0") ||
++	     of_device_is_compatible(dev->of_node, "brcm,bcm2711-hdmi1")) &&
++	    HDMI_READ(HDMI_VID_CTL) & VC4_HD_VID_CTL_ENABLE) {
++		clk_prepare_enable(vc4_hdmi->pixel_clock);
++		clk_prepare_enable(vc4_hdmi->hsm_clock);
++		clk_prepare_enable(vc4_hdmi->pixel_bvb_clock);
++	}
++
+ 	pm_runtime_enable(dev);
+ 
+ 	drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS);
+diff --git a/drivers/gpu/drm/vc4/vc4_txp.c b/drivers/gpu/drm/vc4/vc4_txp.c
+index 849dcafbfff17..d13502ae973dd 100644
+--- a/drivers/gpu/drm/vc4/vc4_txp.c
++++ b/drivers/gpu/drm/vc4/vc4_txp.c
+@@ -503,7 +503,7 @@ static int vc4_txp_bind(struct device *dev, struct device *master, void *data)
+ 		return ret;
+ 
+ 	encoder = &txp->connector.encoder;
+-	encoder->possible_crtcs |= drm_crtc_mask(crtc);
++	encoder->possible_crtcs = drm_crtc_mask(crtc);
+ 
+ 	ret = devm_request_irq(dev, irq, vc4_txp_interrupt, 0,
+ 			       dev_name(dev), txp);
+diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
+index eed57a9313098..a28b01f927933 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
++++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
+@@ -209,6 +209,7 @@ err_scanouts:
+ err_vbufs:
+ 	vgdev->vdev->config->del_vqs(vgdev->vdev);
+ err_vqs:
++	dev->dev_private = NULL;
+ 	kfree(vgdev);
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/zte/Kconfig b/drivers/gpu/drm/zte/Kconfig
+index 90ebaedc11fdf..aa8594190b509 100644
+--- a/drivers/gpu/drm/zte/Kconfig
++++ b/drivers/gpu/drm/zte/Kconfig
+@@ -3,7 +3,6 @@ config DRM_ZTE
+ 	tristate "DRM Support for ZTE SoCs"
+ 	depends on DRM && ARCH_ZX
+ 	select DRM_KMS_CMA_HELPER
+-	select DRM_KMS_FB_HELPER
+ 	select DRM_KMS_HELPER
+ 	select SND_SOC_HDMI_CODEC if SND_SOC
+ 	select VIDEOMODE_HELPERS
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index b2088d2d386a4..424c296845dbb 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -1347,7 +1347,7 @@ static int coresight_fixup_device_conns(struct coresight_device *csdev)
+ 		}
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int coresight_remove_match(struct device *dev, void *data)
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+index 989d965f3d901..8978f3410bee5 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+@@ -528,7 +528,7 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
+ 		buf_ptr = buf->data_pages[cur] + offset;
+ 		*buf_ptr = readl_relaxed(drvdata->base + TMC_RRD);
+ 
+-		if (lost && *barrier) {
++		if (lost && i < CORESIGHT_BARRIER_PKT_SIZE) {
+ 			*buf_ptr = *barrier;
+ 			barrier++;
+ 		}
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 0c879e40bd18d..34b94e5253905 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -2793,7 +2793,8 @@ static int cma_resolve_ib_route(struct rdma_id_private *id_priv,
+ 
+ 	cma_init_resolve_route_work(work, id_priv);
+ 
+-	route->path_rec = kmalloc(sizeof *route->path_rec, GFP_KERNEL);
++	if (!route->path_rec)
++		route->path_rec = kmalloc(sizeof *route->path_rec, GFP_KERNEL);
+ 	if (!route->path_rec) {
+ 		ret = -ENOMEM;
+ 		goto err1;
+diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
+index 5df4bb52bb10f..861e19fdfeb46 100644
+--- a/drivers/infiniband/hw/cxgb4/qp.c
++++ b/drivers/infiniband/hw/cxgb4/qp.c
+@@ -295,6 +295,7 @@ static int create_qp(struct c4iw_rdev *rdev, struct t4_wq *wq,
+ 	if (user && (!wq->sq.bar2_pa || (need_rq && !wq->rq.bar2_pa))) {
+ 		pr_warn("%s: sqid %u or rqid %u not in BAR2 range\n",
+ 			pci_name(rdev->lldi.pdev), wq->sq.qid, wq->rq.qid);
++		ret = -EINVAL;
+ 		goto free_dma;
+ 	}
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
+index d2ce852447c13..026285f7f36ac 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mr.c
++++ b/drivers/infiniband/sw/rxe/rxe_mr.c
+@@ -139,7 +139,7 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
+ 	if (IS_ERR(umem)) {
+ 		pr_warn("err %d from rxe_umem_get\n",
+ 			(int)PTR_ERR(umem));
+-		err = -EINVAL;
++		err = PTR_ERR(umem);
+ 		goto err1;
+ 	}
+ 
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index e653c83f8a356..edea37da8a5bd 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -35,10 +35,10 @@ static const struct kernel_param_ops sg_tablesize_ops = {
+ 	.get = param_get_int,
+ };
+ 
+-static int isert_sg_tablesize = ISCSI_ISER_DEF_SG_TABLESIZE;
++static int isert_sg_tablesize = ISCSI_ISER_MIN_SG_TABLESIZE;
+ module_param_cb(sg_tablesize, &sg_tablesize_ops, &isert_sg_tablesize, 0644);
+ MODULE_PARM_DESC(sg_tablesize,
+-		 "Number of gather/scatter entries in a single scsi command, should >= 128 (default: 256, max: 4096)");
++		 "Number of gather/scatter entries in a single scsi command, should >= 128 (default: 128, max: 4096)");
+ 
+ static DEFINE_MUTEX(device_list_mutex);
+ static LIST_HEAD(device_list);
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.h b/drivers/infiniband/ulp/isert/ib_isert.h
+index 6c5af13db4e0d..ca8cfebe26ca7 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.h
++++ b/drivers/infiniband/ulp/isert/ib_isert.h
+@@ -65,9 +65,6 @@
+  */
+ #define ISER_RX_SIZE		(ISCSI_DEF_MAX_RECV_SEG_LEN + 1024)
+ 
+-/* Default I/O size is 1MB */
+-#define ISCSI_ISER_DEF_SG_TABLESIZE 256
+-
+ /* Minimum I/O size is 512KB */
+ #define ISCSI_ISER_MIN_SG_TABLESIZE 128
+ 
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-pri.h b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+index 8caad0a2322bf..51c60f5428761 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-pri.h
++++ b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+@@ -47,12 +47,15 @@ enum {
+ 	MAX_PATHS_NUM = 128,
+ 
+ 	/*
+-	 * With the size of struct rtrs_permit allocated on the client, 4K
+-	 * is the maximum number of rtrs_permits we can allocate. This number is
+-	 * also used on the client to allocate the IU for the user connection
+-	 * to receive the RDMA addresses from the server.
++	 * Max IB immediate data size is 2^28 (MAX_IMM_PAYL_BITS)
++	 * and the minimum chunk size is 4096 (2^12).
++	 * So the maximum sess_queue_depth is 65536 (2^16) in theory.
++	 * But mempool_create, create_qp and ib_post_send fail with
++	 * "cannot allocate memory" error if sess_queue_depth is too big.
++	 * Therefore the pratical max value of sess_queue_depth is
++	 * somewhere between 1 and 65536 and it depends on the system.
+ 	 */
+-	MAX_SESS_QUEUE_DEPTH = 4096,
++	MAX_SESS_QUEUE_DEPTH = 65536,
+ 
+ 	RTRS_HB_INTERVAL_MS = 5000,
+ 	RTRS_HB_MISSED_MAX = 5,
+diff --git a/drivers/ipack/carriers/tpci200.c b/drivers/ipack/carriers/tpci200.c
+index ec71063fff76a..e1822e87ec3d2 100644
+--- a/drivers/ipack/carriers/tpci200.c
++++ b/drivers/ipack/carriers/tpci200.c
+@@ -596,8 +596,11 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 
+ out_err_bus_register:
+ 	tpci200_uninstall(tpci200);
++	/* tpci200->info->cfg_regs is unmapped in tpci200_uninstall */
++	tpci200->info->cfg_regs = NULL;
+ out_err_install:
+-	iounmap(tpci200->info->cfg_regs);
++	if (tpci200->info->cfg_regs)
++		iounmap(tpci200->info->cfg_regs);
+ out_err_ioremap:
+ 	pci_release_region(pdev, TPCI200_CFG_MEM_BAR);
+ out_err_pci_request:
+diff --git a/drivers/isdn/hardware/mISDN/hfcpci.c b/drivers/isdn/hardware/mISDN/hfcpci.c
+index 56bd2e9db6ed6..e501cb03f211d 100644
+--- a/drivers/isdn/hardware/mISDN/hfcpci.c
++++ b/drivers/isdn/hardware/mISDN/hfcpci.c
+@@ -2342,7 +2342,7 @@ static void __exit
+ HFC_cleanup(void)
+ {
+ 	if (timer_pending(&hfc_tl))
+-		del_timer(&hfc_tl);
++		del_timer_sync(&hfc_tl);
+ 
+ 	pci_unregister_driver(&hfc_driver);
+ }
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index 8628c4aa2e854..9d6ae3e64285b 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -532,7 +532,7 @@ static void ssd_commit_superblock(struct dm_writecache *wc)
+ 
+ 	region.bdev = wc->ssd_dev->bdev;
+ 	region.sector = 0;
+-	region.count = PAGE_SIZE >> SECTOR_SHIFT;
++	region.count = max(4096U, wc->block_size) >> SECTOR_SHIFT;
+ 
+ 	if (unlikely(region.sector + region.count > wc->metadata_sectors))
+ 		region.count = wc->metadata_sectors - region.sector;
+@@ -1301,8 +1301,12 @@ static int writecache_map(struct dm_target *ti, struct bio *bio)
+ 			writecache_flush(wc);
+ 			if (writecache_has_error(wc))
+ 				goto unlock_error;
++			if (unlikely(wc->cleaner))
++				goto unlock_remap_origin;
+ 			goto unlock_submit;
+ 		} else {
++			if (dm_bio_get_target_bio_nr(bio))
++				goto unlock_remap_origin;
+ 			writecache_offload_bio(wc, bio);
+ 			goto unlock_return;
+ 		}
+@@ -1360,14 +1364,18 @@ read_next_block:
+ 	} else {
+ 		do {
+ 			bool found_entry = false;
++			bool search_used = false;
+ 			if (writecache_has_error(wc))
+ 				goto unlock_error;
+ 			e = writecache_find_entry(wc, bio->bi_iter.bi_sector, 0);
+ 			if (e) {
+-				if (!writecache_entry_is_committed(wc, e))
++				if (!writecache_entry_is_committed(wc, e)) {
++					search_used = true;
+ 					goto bio_copy;
++				}
+ 				if (!WC_MODE_PMEM(wc) && !e->write_in_progress) {
+ 					wc->overwrote_committed = true;
++					search_used = true;
+ 					goto bio_copy;
+ 				}
+ 				found_entry = true;
+@@ -1377,7 +1385,7 @@ read_next_block:
+ 			}
+ 			e = writecache_pop_from_freelist(wc, (sector_t)-1);
+ 			if (unlikely(!e)) {
+-				if (!found_entry) {
++				if (!WC_MODE_PMEM(wc) && !found_entry) {
+ direct_write:
+ 					e = writecache_find_entry(wc, bio->bi_iter.bi_sector, WFE_RETURN_FOLLOWING);
+ 					if (e) {
+@@ -1404,13 +1412,31 @@ bio_copy:
+ 				sector_t current_cache_sec = start_cache_sec + (bio_size >> SECTOR_SHIFT);
+ 
+ 				while (bio_size < bio->bi_iter.bi_size) {
+-					struct wc_entry *f = writecache_pop_from_freelist(wc, current_cache_sec);
+-					if (!f)
+-						break;
+-					write_original_sector_seq_count(wc, f, bio->bi_iter.bi_sector +
+-									(bio_size >> SECTOR_SHIFT), wc->seq_count);
+-					writecache_insert_entry(wc, f);
+-					wc->uncommitted_blocks++;
++					if (!search_used) {
++						struct wc_entry *f = writecache_pop_from_freelist(wc, current_cache_sec);
++						if (!f)
++							break;
++						write_original_sector_seq_count(wc, f, bio->bi_iter.bi_sector +
++										(bio_size >> SECTOR_SHIFT), wc->seq_count);
++						writecache_insert_entry(wc, f);
++						wc->uncommitted_blocks++;
++					} else {
++						struct wc_entry *f;
++						struct rb_node *next = rb_next(&e->rb_node);
++						if (!next)
++							break;
++						f = container_of(next, struct wc_entry, rb_node);
++						if (f != e + 1)
++							break;
++						if (read_original_sector(wc, f) !=
++						    read_original_sector(wc, e) + (wc->block_size >> SECTOR_SHIFT))
++							break;
++						if (unlikely(f->write_in_progress))
++							break;
++						if (writecache_entry_is_committed(wc, f))
++							wc->overwrote_committed = true;
++						e = f;
++					}
+ 					bio_size += wc->block_size;
+ 					current_cache_sec += wc->block_size >> SECTOR_SHIFT;
+ 				}
+@@ -2465,7 +2491,7 @@ overflow:
+ 		goto bad;
+ 	}
+ 
+-	ti->num_flush_bios = 1;
++	ti->num_flush_bios = WC_MODE_PMEM(wc) ? 1 : 2;
+ 	ti->flush_supported = true;
+ 	ti->num_discard_bios = 1;
+ 
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index b298fefb022eb..5100907974612 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -1390,6 +1390,13 @@ static int dmz_init_zone(struct blk_zone *blkz, unsigned int num, void *data)
+ 		return -ENXIO;
+ 	}
+ 
++	/*
++	 * Devices that have zones with a capacity smaller than the zone size
++	 * (e.g. NVMe zoned namespaces) are not supported.
++	 */
++	if (blkz->capacity != blkz->len)
++		return -ENXIO;
++
+ 	switch (blkz->type) {
+ 	case BLK_ZONE_TYPE_CONVENTIONAL:
+ 		set_bit(DMZ_RND, &zone->flags);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 638c04f9e832c..19a70f434029b 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1230,8 +1230,8 @@ static int dm_dax_zero_page_range(struct dax_device *dax_dev, pgoff_t pgoff,
+ 
+ /*
+  * A target may call dm_accept_partial_bio only from the map routine.  It is
+- * allowed for all bio types except REQ_PREFLUSH, REQ_OP_ZONE_RESET,
+- * REQ_OP_ZONE_OPEN, REQ_OP_ZONE_CLOSE and REQ_OP_ZONE_FINISH.
++ * allowed for all bio types except REQ_PREFLUSH, REQ_OP_ZONE_* zone management
++ * operations and REQ_OP_ZONE_APPEND (zone append writes).
+  *
+  * dm_accept_partial_bio informs the dm that the target only wants to process
+  * additional n_sectors sectors of the bio and the rest of the data should be
+@@ -1261,9 +1261,13 @@ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors)
+ {
+ 	struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
+ 	unsigned bi_size = bio->bi_iter.bi_size >> SECTOR_SHIFT;
++
+ 	BUG_ON(bio->bi_opf & REQ_PREFLUSH);
++	BUG_ON(op_is_zone_mgmt(bio_op(bio)));
++	BUG_ON(bio_op(bio) == REQ_OP_ZONE_APPEND);
+ 	BUG_ON(bi_size > *tio->len_ptr);
+ 	BUG_ON(n_sectors > bi_size);
++
+ 	*tio->len_ptr -= bi_size - n_sectors;
+ 	bio->bi_iter.bi_size = n_sectors << SECTOR_SHIFT;
+ }
+diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c
+index eff04fa23dfad..9e4d1212f4c16 100644
+--- a/drivers/md/persistent-data/dm-btree-remove.c
++++ b/drivers/md/persistent-data/dm-btree-remove.c
+@@ -549,7 +549,8 @@ int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
+ 		delete_at(n, index);
+ 	}
+ 
+-	*new_root = shadow_root(&spine);
++	if (!r)
++		*new_root = shadow_root(&spine);
+ 	exit_shadow_spine(&spine);
+ 
+ 	return r;
+diff --git a/drivers/md/persistent-data/dm-space-map-disk.c b/drivers/md/persistent-data/dm-space-map-disk.c
+index bf4c5e2ccb6ff..e0acae7a3815d 100644
+--- a/drivers/md/persistent-data/dm-space-map-disk.c
++++ b/drivers/md/persistent-data/dm-space-map-disk.c
+@@ -171,6 +171,14 @@ static int sm_disk_new_block(struct dm_space_map *sm, dm_block_t *b)
+ 	 * Any block we allocate has to be free in both the old and current ll.
+ 	 */
+ 	r = sm_ll_find_common_free_block(&smd->old_ll, &smd->ll, smd->begin, smd->ll.nr_blocks, b);
++	if (r == -ENOSPC) {
++		/*
++		 * There's no free block between smd->begin and the end of the metadata device.
++		 * We search before smd->begin in case something has been freed.
++		 */
++		r = sm_ll_find_common_free_block(&smd->old_ll, &smd->ll, 0, smd->begin, b);
++	}
++
+ 	if (r)
+ 		return r;
+ 
+@@ -199,7 +207,6 @@ static int sm_disk_commit(struct dm_space_map *sm)
+ 		return r;
+ 
+ 	memcpy(&smd->old_ll, &smd->ll, sizeof(smd->old_ll));
+-	smd->begin = 0;
+ 	smd->nr_allocated_this_transaction = 0;
+ 
+ 	r = sm_disk_get_nr_free(sm, &nr_free);
+diff --git a/drivers/md/persistent-data/dm-space-map-metadata.c b/drivers/md/persistent-data/dm-space-map-metadata.c
+index 9e3c64ec2026f..da439ac857963 100644
+--- a/drivers/md/persistent-data/dm-space-map-metadata.c
++++ b/drivers/md/persistent-data/dm-space-map-metadata.c
+@@ -452,6 +452,14 @@ static int sm_metadata_new_block_(struct dm_space_map *sm, dm_block_t *b)
+ 	 * Any block we allocate has to be free in both the old and current ll.
+ 	 */
+ 	r = sm_ll_find_common_free_block(&smm->old_ll, &smm->ll, smm->begin, smm->ll.nr_blocks, b);
++	if (r == -ENOSPC) {
++		/*
++		 * There's no free block between smm->begin and the end of the metadata device.
++		 * We search before smm->begin in case something has been freed.
++		 */
++		r = sm_ll_find_common_free_block(&smm->old_ll, &smm->ll, 0, smm->begin, b);
++	}
++
+ 	if (r)
+ 		return r;
+ 
+@@ -503,7 +511,6 @@ static int sm_metadata_commit(struct dm_space_map *sm)
+ 		return r;
+ 
+ 	memcpy(&smm->old_ll, &smm->ll, sizeof(smm->old_ll));
+-	smm->begin = 0;
+ 	smm->allocated_this_transaction = 0;
+ 
+ 	return 0;
+diff --git a/drivers/media/i2c/saa6588.c b/drivers/media/i2c/saa6588.c
+index ecb491d5f2ab8..d1e0716bdfffd 100644
+--- a/drivers/media/i2c/saa6588.c
++++ b/drivers/media/i2c/saa6588.c
+@@ -380,7 +380,7 @@ static void saa6588_configure(struct saa6588 *s)
+ 
+ /* ---------------------------------------------------------------------- */
+ 
+-static long saa6588_ioctl(struct v4l2_subdev *sd, unsigned int cmd, void *arg)
++static long saa6588_command(struct v4l2_subdev *sd, unsigned int cmd, void *arg)
+ {
+ 	struct saa6588 *s = to_saa6588(sd);
+ 	struct saa6588_command *a = arg;
+@@ -433,7 +433,7 @@ static int saa6588_s_tuner(struct v4l2_subdev *sd, const struct v4l2_tuner *vt)
+ /* ----------------------------------------------------------------------- */
+ 
+ static const struct v4l2_subdev_core_ops saa6588_core_ops = {
+-	.ioctl = saa6588_ioctl,
++	.command = saa6588_command,
+ };
+ 
+ static const struct v4l2_subdev_tuner_ops saa6588_tuner_ops = {
+diff --git a/drivers/media/pci/bt8xx/bttv-driver.c b/drivers/media/pci/bt8xx/bttv-driver.c
+index 8824dd0fb331e..35a51e9b539da 100644
+--- a/drivers/media/pci/bt8xx/bttv-driver.c
++++ b/drivers/media/pci/bt8xx/bttv-driver.c
+@@ -3187,7 +3187,7 @@ static int radio_release(struct file *file)
+ 
+ 	btv->radio_user--;
+ 
+-	bttv_call_all(btv, core, ioctl, SAA6588_CMD_CLOSE, &cmd);
++	bttv_call_all(btv, core, command, SAA6588_CMD_CLOSE, &cmd);
+ 
+ 	if (btv->radio_user == 0)
+ 		btv->has_radio_tuner = 0;
+@@ -3268,7 +3268,7 @@ static ssize_t radio_read(struct file *file, char __user *data,
+ 	cmd.result = -ENODEV;
+ 	radio_enable(btv);
+ 
+-	bttv_call_all(btv, core, ioctl, SAA6588_CMD_READ, &cmd);
++	bttv_call_all(btv, core, command, SAA6588_CMD_READ, &cmd);
+ 
+ 	return cmd.result;
+ }
+@@ -3289,7 +3289,7 @@ static __poll_t radio_poll(struct file *file, poll_table *wait)
+ 	cmd.instance = file;
+ 	cmd.event_list = wait;
+ 	cmd.poll_mask = res;
+-	bttv_call_all(btv, core, ioctl, SAA6588_CMD_POLL, &cmd);
++	bttv_call_all(btv, core, command, SAA6588_CMD_POLL, &cmd);
+ 
+ 	return cmd.poll_mask;
+ }
+diff --git a/drivers/media/pci/saa7134/saa7134-video.c b/drivers/media/pci/saa7134/saa7134-video.c
+index 9a6a6b68f8e3e..85d082baaadc5 100644
+--- a/drivers/media/pci/saa7134/saa7134-video.c
++++ b/drivers/media/pci/saa7134/saa7134-video.c
+@@ -1178,7 +1178,7 @@ static int video_release(struct file *file)
+ 
+ 	saa_call_all(dev, tuner, standby);
+ 	if (vdev->vfl_type == VFL_TYPE_RADIO)
+-		saa_call_all(dev, core, ioctl, SAA6588_CMD_CLOSE, &cmd);
++		saa_call_all(dev, core, command, SAA6588_CMD_CLOSE, &cmd);
+ 	mutex_unlock(&dev->lock);
+ 
+ 	return 0;
+@@ -1197,7 +1197,7 @@ static ssize_t radio_read(struct file *file, char __user *data,
+ 	cmd.result = -ENODEV;
+ 
+ 	mutex_lock(&dev->lock);
+-	saa_call_all(dev, core, ioctl, SAA6588_CMD_READ, &cmd);
++	saa_call_all(dev, core, command, SAA6588_CMD_READ, &cmd);
+ 	mutex_unlock(&dev->lock);
+ 
+ 	return cmd.result;
+@@ -1213,7 +1213,7 @@ static __poll_t radio_poll(struct file *file, poll_table *wait)
+ 	cmd.event_list = wait;
+ 	cmd.poll_mask = 0;
+ 	mutex_lock(&dev->lock);
+-	saa_call_all(dev, core, ioctl, SAA6588_CMD_POLL, &cmd);
++	saa_call_all(dev, core, command, SAA6588_CMD_POLL, &cmd);
+ 	mutex_unlock(&dev->lock);
+ 
+ 	return rc | cmd.poll_mask;
+diff --git a/drivers/media/platform/davinci/vpbe_display.c b/drivers/media/platform/davinci/vpbe_display.c
+index d19bad997f30c..bf3c3e76b9213 100644
+--- a/drivers/media/platform/davinci/vpbe_display.c
++++ b/drivers/media/platform/davinci/vpbe_display.c
+@@ -47,7 +47,7 @@ static int venc_is_second_field(struct vpbe_display *disp_dev)
+ 
+ 	ret = v4l2_subdev_call(vpbe_dev->venc,
+ 			       core,
+-			       ioctl,
++			       command,
+ 			       VENC_GET_FLD,
+ 			       &val);
+ 	if (ret < 0) {
+diff --git a/drivers/media/platform/davinci/vpbe_venc.c b/drivers/media/platform/davinci/vpbe_venc.c
+index 8caa084e57046..bde241c26d795 100644
+--- a/drivers/media/platform/davinci/vpbe_venc.c
++++ b/drivers/media/platform/davinci/vpbe_venc.c
+@@ -521,9 +521,7 @@ static int venc_s_routing(struct v4l2_subdev *sd, u32 input, u32 output,
+ 	return ret;
+ }
+ 
+-static long venc_ioctl(struct v4l2_subdev *sd,
+-			unsigned int cmd,
+-			void *arg)
++static long venc_command(struct v4l2_subdev *sd, unsigned int cmd, void *arg)
+ {
+ 	u32 val;
+ 
+@@ -542,7 +540,7 @@ static long venc_ioctl(struct v4l2_subdev *sd,
+ }
+ 
+ static const struct v4l2_subdev_core_ops venc_core_ops = {
+-	.ioctl      = venc_ioctl,
++	.command      = venc_command,
+ };
+ 
+ static const struct v4l2_subdev_video_ops venc_video_ops = {
+diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c
+index 3fe3edd808765..afae0afe3f810 100644
+--- a/drivers/media/rc/bpf-lirc.c
++++ b/drivers/media/rc/bpf-lirc.c
+@@ -326,7 +326,8 @@ int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
+ 	}
+ 
+ 	if (attr->query.prog_cnt != 0 && prog_ids && cnt)
+-		ret = bpf_prog_array_copy_to_user(progs, prog_ids, cnt);
++		ret = bpf_prog_array_copy_to_user(progs, prog_ids,
++						  attr->query.prog_cnt);
+ 
+ unlock:
+ 	mutex_unlock(&ir_raw_handler_lock);
+diff --git a/drivers/media/usb/dvb-usb/dtv5100.c b/drivers/media/usb/dvb-usb/dtv5100.c
+index fba06932a9e0e..1c13e493322cc 100644
+--- a/drivers/media/usb/dvb-usb/dtv5100.c
++++ b/drivers/media/usb/dvb-usb/dtv5100.c
+@@ -26,6 +26,7 @@ static int dtv5100_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 			   u8 *wbuf, u16 wlen, u8 *rbuf, u16 rlen)
+ {
+ 	struct dtv5100_state *st = d->priv;
++	unsigned int pipe;
+ 	u8 request;
+ 	u8 type;
+ 	u16 value;
+@@ -34,6 +35,7 @@ static int dtv5100_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 	switch (wlen) {
+ 	case 1:
+ 		/* write { reg }, read { value } */
++		pipe = usb_rcvctrlpipe(d->udev, 0);
+ 		request = (addr == DTV5100_DEMOD_ADDR ? DTV5100_DEMOD_READ :
+ 							DTV5100_TUNER_READ);
+ 		type = USB_TYPE_VENDOR | USB_DIR_IN;
+@@ -41,6 +43,7 @@ static int dtv5100_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 		break;
+ 	case 2:
+ 		/* write { reg, value } */
++		pipe = usb_sndctrlpipe(d->udev, 0);
+ 		request = (addr == DTV5100_DEMOD_ADDR ? DTV5100_DEMOD_WRITE :
+ 							DTV5100_TUNER_WRITE);
+ 		type = USB_TYPE_VENDOR | USB_DIR_OUT;
+@@ -54,7 +57,7 @@ static int dtv5100_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 
+ 	memcpy(st->data, rbuf, rlen);
+ 	msleep(1); /* avoid I2C errors */
+-	return usb_control_msg(d->udev, usb_rcvctrlpipe(d->udev, 0), request,
++	return usb_control_msg(d->udev, pipe, request,
+ 			       type, value, index, st->data, rlen,
+ 			       DTV5100_USB_TIMEOUT);
+ }
+@@ -141,7 +144,7 @@ static int dtv5100_probe(struct usb_interface *intf,
+ 
+ 	/* initialize non qt1010/zl10353 part? */
+ 	for (i = 0; dtv5100_init[i].request; i++) {
+-		ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
++		ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 				      dtv5100_init[i].request,
+ 				      USB_TYPE_VENDOR | USB_DIR_OUT,
+ 				      dtv5100_init[i].value,
+diff --git a/drivers/media/usb/gspca/sq905.c b/drivers/media/usb/gspca/sq905.c
+index 9491110709718..32504ebcfd4de 100644
+--- a/drivers/media/usb/gspca/sq905.c
++++ b/drivers/media/usb/gspca/sq905.c
+@@ -116,7 +116,7 @@ static int sq905_command(struct gspca_dev *gspca_dev, u16 index)
+ 	}
+ 
+ 	ret = usb_control_msg(gspca_dev->dev,
+-			      usb_sndctrlpipe(gspca_dev->dev, 0),
++			      usb_rcvctrlpipe(gspca_dev->dev, 0),
+ 			      USB_REQ_SYNCH_FRAME,                /* request */
+ 			      USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 			      SQ905_PING, 0, gspca_dev->usb_buf, 1,
+diff --git a/drivers/media/usb/gspca/sunplus.c b/drivers/media/usb/gspca/sunplus.c
+index ace3da40006e7..971dee0a56dae 100644
+--- a/drivers/media/usb/gspca/sunplus.c
++++ b/drivers/media/usb/gspca/sunplus.c
+@@ -242,6 +242,10 @@ static void reg_r(struct gspca_dev *gspca_dev,
+ 		gspca_err(gspca_dev, "reg_r: buffer overflow\n");
+ 		return;
+ 	}
++	if (len == 0) {
++		gspca_err(gspca_dev, "reg_r: zero-length read\n");
++		return;
++	}
+ 	if (gspca_dev->usb_err < 0)
+ 		return;
+ 	ret = usb_control_msg(gspca_dev->dev,
+@@ -250,7 +254,7 @@ static void reg_r(struct gspca_dev *gspca_dev,
+ 			USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 			0,		/* value */
+ 			index,
+-			len ? gspca_dev->usb_buf : NULL, len,
++			gspca_dev->usb_buf, len,
+ 			500);
+ 	if (ret < 0) {
+ 		pr_err("reg_r err %d\n", ret);
+@@ -727,7 +731,7 @@ static int sd_start(struct gspca_dev *gspca_dev)
+ 		case MegaImageVI:
+ 			reg_w_riv(gspca_dev, 0xf0, 0, 0);
+ 			spca504B_WaitCmdStatus(gspca_dev);
+-			reg_r(gspca_dev, 0xf0, 4, 0);
++			reg_w_riv(gspca_dev, 0xf0, 4, 0);
+ 			spca504B_WaitCmdStatus(gspca_dev);
+ 			break;
+ 		default:
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index a6a441d92b948..5878c78334862 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -124,10 +124,37 @@ int uvc_query_ctrl(struct uvc_device *dev, u8 query, u8 unit,
+ static void uvc_fixup_video_ctrl(struct uvc_streaming *stream,
+ 	struct uvc_streaming_control *ctrl)
+ {
++	static const struct usb_device_id elgato_cam_link_4k = {
++		USB_DEVICE(0x0fd9, 0x0066)
++	};
+ 	struct uvc_format *format = NULL;
+ 	struct uvc_frame *frame = NULL;
+ 	unsigned int i;
+ 
++	/*
++	 * The response of the Elgato Cam Link 4K is incorrect: The second byte
++	 * contains bFormatIndex (instead of being the second byte of bmHint).
++	 * The first byte is always zero. The third byte is always 1.
++	 *
++	 * The UVC 1.5 class specification defines the first five bits in the
++	 * bmHint bitfield. The remaining bits are reserved and should be zero.
++	 * Therefore a valid bmHint will be less than 32.
++	 *
++	 * Latest Elgato Cam Link 4K firmware as of 2021-03-23 needs this fix.
++	 * MCU: 20.02.19, FPGA: 67
++	 */
++	if (usb_match_one_id(stream->dev->intf, &elgato_cam_link_4k) &&
++	    ctrl->bmHint > 255) {
++		u8 corrected_format_index = ctrl->bmHint >> 8;
++
++		/* uvc_dbg(stream->dev, VIDEO,
++			"Correct USB video probe response from {bmHint: 0x%04x, bFormatIndex: %u} to {bmHint: 0x%04x, bFormatIndex: %u}\n",
++			ctrl->bmHint, ctrl->bFormatIndex,
++			1, corrected_format_index); */
++		ctrl->bmHint = 1;
++		ctrl->bFormatIndex = corrected_format_index;
++	}
++
+ 	for (i = 0; i < stream->nformats; ++i) {
+ 		if (stream->format[i].index == ctrl->bFormatIndex) {
+ 			format = &stream->format[i];
+diff --git a/drivers/media/usb/zr364xx/zr364xx.c b/drivers/media/usb/zr364xx/zr364xx.c
+index 8c670934d9207..1b79053b2a052 100644
+--- a/drivers/media/usb/zr364xx/zr364xx.c
++++ b/drivers/media/usb/zr364xx/zr364xx.c
+@@ -1034,6 +1034,7 @@ static int zr364xx_start_readpipe(struct zr364xx_camera *cam)
+ 	DBG("submitting URB %p\n", pipe_info->stream_urb);
+ 	retval = usb_submit_urb(pipe_info->stream_urb, GFP_KERNEL);
+ 	if (retval) {
++		usb_free_urb(pipe_info->stream_urb);
+ 		printk(KERN_ERR KBUILD_MODNAME ": start read pipe failed\n");
+ 		return retval;
+ 	}
+diff --git a/drivers/mfd/syscon.c b/drivers/mfd/syscon.c
+index ca465794ea9c8..df5cebb372a59 100644
+--- a/drivers/mfd/syscon.c
++++ b/drivers/mfd/syscon.c
+@@ -108,6 +108,7 @@ static struct syscon *of_syscon_register(struct device_node *np, bool check_clk)
+ 	syscon_config.max_register = resource_size(&res) - reg_io_width;
+ 
+ 	regmap = regmap_init_mmio(NULL, base, &syscon_config);
++	kfree(syscon_config.name);
+ 	if (IS_ERR(regmap)) {
+ 		pr_err("regmap init failed\n");
+ 		ret = PTR_ERR(regmap);
+@@ -144,7 +145,6 @@ err_clk:
+ 	regmap_exit(regmap);
+ err_regmap:
+ 	iounmap(base);
+-	kfree(syscon_config.name);
+ err_map:
+ 	kfree(syscon);
+ 	return ERR_PTR(ret);
+diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
+index a0675d4154d2f..a337f97b30e28 100644
+--- a/drivers/misc/lkdtm/bugs.c
++++ b/drivers/misc/lkdtm/bugs.c
+@@ -144,6 +144,9 @@ void lkdtm_UNALIGNED_LOAD_STORE_WRITE(void)
+ 	if (*p == 0)
+ 		val = 0x87654321;
+ 	*p = val;
++
++	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
++		pr_err("XFAIL: arch has CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS\n");
+ }
+ 
+ void lkdtm_SOFTLOCKUP(void)
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index eaf4810fe656d..b5f3f160c8420 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -936,11 +936,14 @@ int mmc_execute_tuning(struct mmc_card *card)
+ 
+ 	err = host->ops->execute_tuning(host, opcode);
+ 
+-	if (err)
++	if (err) {
+ 		pr_err("%s: tuning execution failed: %d\n",
+ 			mmc_hostname(host), err);
+-	else
++	} else {
++		host->retune_now = 0;
++		host->need_retune = 0;
+ 		mmc_retune_enable(host);
++	}
+ 
+ 	return err;
+ }
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 636d4e3aa0e35..bac343a8d569a 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -847,11 +847,13 @@ try_again:
+ 		return err;
+ 
+ 	/*
+-	 * In case CCS and S18A in the response is set, start Signal Voltage
+-	 * Switch procedure. SPI mode doesn't support CMD11.
++	 * In case the S18A bit is set in the response, let's start the signal
++	 * voltage switch procedure. SPI mode doesn't support CMD11.
++	 * Note that, according to the spec, the S18A bit is not valid unless
++	 * the CCS bit is set as well. We deliberately deviate from the spec in
++	 * regards to this, which allows UHS-I to be supported for SDSC cards.
+ 	 */
+-	if (!mmc_host_is_spi(host) && rocr &&
+-	   ((*rocr & 0x41000000) == 0x41000000)) {
++	if (!mmc_host_is_spi(host) && rocr && (*rocr & 0x01000000)) {
+ 		err = mmc_set_uhs_voltage(host, pocr);
+ 		if (err == -EAGAIN) {
+ 			retries--;
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index 54205e3be9e87..a2cdb37fcbbec 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -788,6 +788,17 @@ static const struct dmi_system_id sdhci_acpi_quirks[] = {
+ 		},
+ 		.driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT,
+ 	},
++	{
++		/*
++		 * The Toshiba WT8-B's microSD slot always reports the card being
++		 * write-protected.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TOSHIBA ENCORE 2 WT8-B"),
++		},
++		.driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT,
++	},
+ 	{} /* Terminating entry */
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 58c977d581e7c..6cdadbb3accd5 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1813,6 +1813,10 @@ static u16 sdhci_get_preset_value(struct sdhci_host *host)
+ 	u16 preset = 0;
+ 
+ 	switch (host->timing) {
++	case MMC_TIMING_MMC_HS:
++	case MMC_TIMING_SD_HS:
++		preset = sdhci_readw(host, SDHCI_PRESET_FOR_HIGH_SPEED);
++		break;
+ 	case MMC_TIMING_UHS_SDR12:
+ 		preset = sdhci_readw(host, SDHCI_PRESET_FOR_SDR12);
+ 		break;
+diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
+index 0770c036e2ff5..960fed78529e1 100644
+--- a/drivers/mmc/host/sdhci.h
++++ b/drivers/mmc/host/sdhci.h
+@@ -253,6 +253,7 @@
+ 
+ /* 60-FB reserved */
+ 
++#define SDHCI_PRESET_FOR_HIGH_SPEED	0x64
+ #define SDHCI_PRESET_FOR_SDR12 0x66
+ #define SDHCI_PRESET_FOR_SDR25 0x68
+ #define SDHCI_PRESET_FOR_SDR50 0x6A
+diff --git a/drivers/net/dsa/ocelot/seville_vsc9953.c b/drivers/net/dsa/ocelot/seville_vsc9953.c
+index ebbaf6817ec86..7026523f886c8 100644
+--- a/drivers/net/dsa/ocelot/seville_vsc9953.c
++++ b/drivers/net/dsa/ocelot/seville_vsc9953.c
+@@ -1214,6 +1214,11 @@ static int seville_probe(struct platform_device *pdev)
+ 	felix->info = &seville_info_vsc9953;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res) {
++		err = -EINVAL;
++		dev_err(&pdev->dev, "Invalid resource\n");
++		goto err_alloc_felix;
++	}
+ 	felix->switch_base = res->start;
+ 
+ 	ds = kzalloc(sizeof(struct dsa_switch), GFP_KERNEL);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 6fb6c35562854..f9e91304d2327 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -423,6 +423,10 @@ static int bcmgenet_mii_register(struct bcmgenet_priv *priv)
+ 	int id, ret;
+ 
+ 	pres = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!pres) {
++		dev_err(&pdev->dev, "Invalid resource\n");
++		return -EINVAL;
++	}
+ 	memset(&res, 0, sizeof(res));
+ 	memset(&ppd, 0, sizeof(ppd));
+ 
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 960def41cc555..2cb73e850a327 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -75,6 +75,8 @@ static void fec_enet_itr_coal_init(struct net_device *ndev);
+ 
+ #define DRIVER_NAME	"fec"
+ 
++static const u16 fec_enet_vlan_pri_to_queue[8] = {0, 0, 1, 1, 1, 2, 2, 2};
++
+ /* Pause frame feild and FIFO threshold */
+ #define FEC_ENET_FCE	(1 << 5)
+ #define FEC_ENET_RSEM_V	0x84
+@@ -3222,10 +3224,40 @@ static int fec_set_features(struct net_device *netdev,
+ 	return 0;
+ }
+ 
++static u16 fec_enet_get_raw_vlan_tci(struct sk_buff *skb)
++{
++	struct vlan_ethhdr *vhdr;
++	unsigned short vlan_TCI = 0;
++
++	if (skb->protocol == htons(ETH_P_ALL)) {
++		vhdr = (struct vlan_ethhdr *)(skb->data);
++		vlan_TCI = ntohs(vhdr->h_vlan_TCI);
++	}
++
++	return vlan_TCI;
++}
++
++static u16 fec_enet_select_queue(struct net_device *ndev, struct sk_buff *skb,
++				 struct net_device *sb_dev)
++{
++	struct fec_enet_private *fep = netdev_priv(ndev);
++	u16 vlan_tag;
++
++	if (!(fep->quirks & FEC_QUIRK_HAS_AVB))
++		return netdev_pick_tx(ndev, skb, NULL);
++
++	vlan_tag = fec_enet_get_raw_vlan_tci(skb);
++	if (!vlan_tag)
++		return vlan_tag;
++
++	return fec_enet_vlan_pri_to_queue[vlan_tag >> 13];
++}
++
+ static const struct net_device_ops fec_netdev_ops = {
+ 	.ndo_open		= fec_enet_open,
+ 	.ndo_stop		= fec_enet_close,
+ 	.ndo_start_xmit		= fec_enet_start_xmit,
++	.ndo_select_queue       = fec_enet_select_queue,
+ 	.ndo_set_rx_mode	= set_multicast_list,
+ 	.ndo_validate_addr	= eth_validate_addr,
+ 	.ndo_tx_timeout		= fec_timeout,
+diff --git a/drivers/net/ethernet/intel/e100.c b/drivers/net/ethernet/intel/e100.c
+index 8cc651d37a7fd..609e47b8287d1 100644
+--- a/drivers/net/ethernet/intel/e100.c
++++ b/drivers/net/ethernet/intel/e100.c
+@@ -1395,7 +1395,7 @@ static int e100_phy_check_without_mii(struct nic *nic)
+ 	u8 phy_type;
+ 	int without_mii;
+ 
+-	phy_type = (nic->eeprom[eeprom_phy_iface] >> 8) & 0x0f;
++	phy_type = (le16_to_cpu(nic->eeprom[eeprom_phy_iface]) >> 8) & 0x0f;
+ 
+ 	switch (phy_type) {
+ 	case NoSuchPhy: /* Non-MII PHY; UNTESTED! */
+@@ -1515,7 +1515,7 @@ static int e100_phy_init(struct nic *nic)
+ 		mdio_write(netdev, nic->mii.phy_id, MII_BMCR, bmcr);
+ 	} else if ((nic->mac >= mac_82550_D102) || ((nic->flags & ich) &&
+ 	   (mdio_read(netdev, nic->mii.phy_id, MII_TPISTATUS) & 0x8000) &&
+-		(nic->eeprom[eeprom_cnfg_mdix] & eeprom_mdix_enabled))) {
++	   (le16_to_cpu(nic->eeprom[eeprom_cnfg_mdix]) & eeprom_mdix_enabled))) {
+ 		/* enable/disable MDI/MDI-X auto-switching. */
+ 		mdio_write(netdev, nic->mii.phy_id, MII_NCONFIG,
+ 				nic->mii.force_media ? 0 : NCONFIG_AUTO_SWITCH);
+@@ -2263,9 +2263,9 @@ static int e100_asf(struct nic *nic)
+ {
+ 	/* ASF can be enabled from eeprom */
+ 	return (nic->pdev->device >= 0x1050) && (nic->pdev->device <= 0x1057) &&
+-	   (nic->eeprom[eeprom_config_asf] & eeprom_asf) &&
+-	   !(nic->eeprom[eeprom_config_asf] & eeprom_gcl) &&
+-	   ((nic->eeprom[eeprom_smbus_addr] & 0xFF) != 0xFE);
++	   (le16_to_cpu(nic->eeprom[eeprom_config_asf]) & eeprom_asf) &&
++	   !(le16_to_cpu(nic->eeprom[eeprom_config_asf]) & eeprom_gcl) &&
++	   ((le16_to_cpu(nic->eeprom[eeprom_smbus_addr]) & 0xFF) != 0xFE);
+ }
+ 
+ static int e100_up(struct nic *nic)
+@@ -2920,7 +2920,7 @@ static int e100_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	/* Wol magic packet can be enabled from eeprom */
+ 	if ((nic->mac >= mac_82558_D101_A4) &&
+-	   (nic->eeprom[eeprom_id] & eeprom_id_wol)) {
++	   (le16_to_cpu(nic->eeprom[eeprom_id]) & eeprom_id_wol)) {
+ 		nic->flags |= wol_magic;
+ 		device_set_wakeup_enable(&pdev->dev, true);
+ 	}
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ptp.c b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+index 7a879614ca55e..1dad6c93ac9c8 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ptp.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+@@ -11,13 +11,14 @@
+  * operate with the nanosecond field directly without fear of overflow.
+  *
+  * Much like the 82599, the update period is dependent upon the link speed:
+- * At 40Gb link or no link, the period is 1.6ns.
+- * At 10Gb link, the period is multiplied by 2. (3.2ns)
++ * At 40Gb, 25Gb, or no link, the period is 1.6ns.
++ * At 10Gb or 5Gb link, the period is multiplied by 2. (3.2ns)
+  * At 1Gb link, the period is multiplied by 20. (32ns)
+  * 1588 functionality is not supported at 100Mbps.
+  */
+ #define I40E_PTP_40GB_INCVAL		0x0199999999ULL
+ #define I40E_PTP_10GB_INCVAL_MULT	2
++#define I40E_PTP_5GB_INCVAL_MULT	2
+ #define I40E_PTP_1GB_INCVAL_MULT	20
+ 
+ #define I40E_PRTTSYN_CTL1_TSYNTYPE_V1  BIT(I40E_PRTTSYN_CTL1_TSYNTYPE_SHIFT)
+@@ -465,6 +466,9 @@ void i40e_ptp_set_increment(struct i40e_pf *pf)
+ 	case I40E_LINK_SPEED_10GB:
+ 		mult = I40E_PTP_10GB_INCVAL_MULT;
+ 		break;
++	case I40E_LINK_SPEED_5GB:
++		mult = I40E_PTP_5GB_INCVAL_MULT;
++		break;
+ 	case I40E_LINK_SPEED_1GB:
+ 		mult = I40E_PTP_1GB_INCVAL_MULT;
+ 		break;
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index a7975afecf70f..14eba9bc174d8 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -3492,13 +3492,9 @@ static int
+ ice_get_rc_coalesce(struct ethtool_coalesce *ec, enum ice_container_type c_type,
+ 		    struct ice_ring_container *rc)
+ {
+-	struct ice_pf *pf;
+-
+ 	if (!rc->ring)
+ 		return -EINVAL;
+ 
+-	pf = rc->ring->vsi->back;
+-
+ 	switch (c_type) {
+ 	case ICE_RX_CONTAINER:
+ 		ec->use_adaptive_rx_coalesce = ITR_IS_DYNAMIC(rc->itr_setting);
+@@ -3510,7 +3506,7 @@ ice_get_rc_coalesce(struct ethtool_coalesce *ec, enum ice_container_type c_type,
+ 		ec->tx_coalesce_usecs = rc->itr_setting & ~ICE_ITR_DYNAMIC;
+ 		break;
+ 	default:
+-		dev_dbg(ice_pf_to_dev(pf), "Invalid c_type %d\n", c_type);
++		dev_dbg(ice_pf_to_dev(rc->ring->vsi->back), "Invalid c_type %d\n", c_type);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+index 4ec24c3e813fe..c0ee0541e53fc 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
++++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+@@ -608,7 +608,7 @@ static const struct ice_rx_ptype_decoded ice_ptype_lkup[] = {
+ 	/* L2 Packet types */
+ 	ICE_PTT_UNUSED_ENTRY(0),
+ 	ICE_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+-	ICE_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
++	ICE_PTT_UNUSED_ENTRY(2),
+ 	ICE_PTT_UNUSED_ENTRY(3),
+ 	ICE_PTT_UNUSED_ENTRY(4),
+ 	ICE_PTT_UNUSED_ENTRY(5),
+@@ -722,7 +722,7 @@ static const struct ice_rx_ptype_decoded ice_ptype_lkup[] = {
+ 	/* Non Tunneled IPv6 */
+ 	ICE_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+ 	ICE_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+-	ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY3),
++	ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
+ 	ICE_PTT_UNUSED_ENTRY(91),
+ 	ICE_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+ 	ICE_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
+index 1bed183d96a0d..ee3497d254646 100644
+--- a/drivers/net/ethernet/intel/ice/ice_type.h
++++ b/drivers/net/ethernet/intel/ice/ice_type.h
+@@ -63,7 +63,7 @@ enum ice_aq_res_ids {
+ /* FW update timeout definitions are in milliseconds */
+ #define ICE_NVM_TIMEOUT			180000
+ #define ICE_CHANGE_LOCK_TIMEOUT		1000
+-#define ICE_GLOBAL_CFG_LOCK_TIMEOUT	3000
++#define ICE_GLOBAL_CFG_LOCK_TIMEOUT	5000
+ 
+ enum ice_aq_res_access_type {
+ 	ICE_RES_READ = 1,
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 5c87c0a7ce3d7..4b9b5148c916b 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -2643,7 +2643,8 @@ static int igb_parse_cls_flower(struct igb_adapter *adapter,
+ 			}
+ 
+ 			input->filter.match_flags |= IGB_FILTER_FLAG_VLAN_TCI;
+-			input->filter.vlan_tci = match.key->vlan_priority;
++			input->filter.vlan_tci =
++				(__force __be16)match.key->vlan_priority;
+ 		}
+ 	}
+ 
+@@ -6288,12 +6289,12 @@ int igb_xmit_xdp_ring(struct igb_adapter *adapter,
+ 	cmd_type |= len | IGB_TXD_DCMD;
+ 	tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
+ 
+-	olinfo_status = cpu_to_le32(len << E1000_ADVTXD_PAYLEN_SHIFT);
++	olinfo_status = len << E1000_ADVTXD_PAYLEN_SHIFT;
+ 	/* 82575 requires a unique index per ring */
+ 	if (test_bit(IGB_RING_FLAG_TX_CTX_IDX, &tx_ring->flags))
+ 		olinfo_status |= tx_ring->reg_idx << 4;
+ 
+-	tx_desc->read.olinfo_status = olinfo_status;
++	tx_desc->read.olinfo_status = cpu_to_le32(olinfo_status);
+ 
+ 	netdev_tx_sent_queue(txring_txq(tx_ring), tx_buffer->bytecount);
+ 
+@@ -8617,7 +8618,7 @@ static void igb_process_skb_fields(struct igb_ring *rx_ring,
+ 
+ 		if (igb_test_staterr(rx_desc, E1000_RXDEXT_STATERR_LB) &&
+ 		    test_bit(IGB_RING_FLAG_RX_LB_VLAN_BSWAP, &rx_ring->flags))
+-			vid = be16_to_cpu(rx_desc->wb.upper.vlan);
++			vid = be16_to_cpu((__force __be16)rx_desc->wb.upper.vlan);
+ 		else
+ 			vid = le16_to_cpu(rx_desc->wb.upper.vlan);
+ 
+diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c
+index ee9f8c1dca836..07c9e9e0546f5 100644
+--- a/drivers/net/ethernet/intel/igbvf/netdev.c
++++ b/drivers/net/ethernet/intel/igbvf/netdev.c
+@@ -83,14 +83,14 @@ static int igbvf_desc_unused(struct igbvf_ring *ring)
+ static void igbvf_receive_skb(struct igbvf_adapter *adapter,
+ 			      struct net_device *netdev,
+ 			      struct sk_buff *skb,
+-			      u32 status, u16 vlan)
++			      u32 status, __le16 vlan)
+ {
+ 	u16 vid;
+ 
+ 	if (status & E1000_RXD_STAT_VP) {
+ 		if ((adapter->flags & IGBVF_FLAG_RX_LB_VLAN_BSWAP) &&
+ 		    (status & E1000_RXDEXT_STATERR_LB))
+-			vid = be16_to_cpu(vlan) & E1000_RXD_SPC_VLAN_MASK;
++			vid = be16_to_cpu((__force __be16)vlan) & E1000_RXD_SPC_VLAN_MASK;
+ 		else
+ 			vid = le16_to_cpu(vlan) & E1000_RXD_SPC_VLAN_MASK;
+ 		if (test_bit(vid, adapter->active_vlans))
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index a9f65d6677617..ec9b6c564300e 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -6871,6 +6871,10 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 			return PTR_ERR(priv->lms_base);
+ 	} else {
+ 		res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
++		if (!res) {
++			dev_err(&pdev->dev, "Invalid resource\n");
++			return -EINVAL;
++		}
+ 		if (has_acpi_companion(&pdev->dev)) {
+ 			/* In case the MDIO memory region is declared in
+ 			 * the ACPI, it can already appear as 'in-use'
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 7e1f8660dfec0..f327b78261ec4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1318,7 +1318,8 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+ 	if (rep->vlan && skb_vlan_tag_present(skb))
+ 		skb_vlan_pop(skb);
+ 
+-	if (!mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv)) {
++	if (unlikely(!mlx5_ipsec_is_rx_flow(cqe) &&
++		     !mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv))) {
+ 		dev_kfree_skb_any(skb);
+ 		goto free_wqe;
+ 	}
+@@ -1375,7 +1376,8 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
+ 
+ 	mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+ 
+-	if (!mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv)) {
++	if (unlikely(!mlx5_ipsec_is_rx_flow(cqe) &&
++		     !mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv))) {
+ 		dev_kfree_skb_any(skb);
+ 		goto mpwrq_cqe_out;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+index 9025e5f38bb65..fe5476a76464f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+@@ -118,17 +118,24 @@ static bool __mlx5_lag_is_sriov(struct mlx5_lag *ldev)
+ static void mlx5_infer_tx_affinity_mapping(struct lag_tracker *tracker,
+ 					   u8 *port1, u8 *port2)
+ {
++	bool p1en;
++	bool p2en;
++
++	p1en = tracker->netdev_state[MLX5_LAG_P1].tx_enabled &&
++	       tracker->netdev_state[MLX5_LAG_P1].link_up;
++
++	p2en = tracker->netdev_state[MLX5_LAG_P2].tx_enabled &&
++	       tracker->netdev_state[MLX5_LAG_P2].link_up;
++
+ 	*port1 = 1;
+ 	*port2 = 2;
+-	if (!tracker->netdev_state[MLX5_LAG_P1].tx_enabled ||
+-	    !tracker->netdev_state[MLX5_LAG_P1].link_up) {
+-		*port1 = 2;
++	if ((!p1en && !p2en) || (p1en && p2en))
+ 		return;
+-	}
+ 
+-	if (!tracker->netdev_state[MLX5_LAG_P2].tx_enabled ||
+-	    !tracker->netdev_state[MLX5_LAG_P2].link_up)
++	if (p1en)
+ 		*port2 = 1;
++	else
++		*port1 = 2;
+ }
+ 
+ void mlx5_modify_lag(struct mlx5_lag *ldev,
+diff --git a/drivers/net/ethernet/micrel/ks8842.c b/drivers/net/ethernet/micrel/ks8842.c
+index caa251d0e3815..b27713906d3a6 100644
+--- a/drivers/net/ethernet/micrel/ks8842.c
++++ b/drivers/net/ethernet/micrel/ks8842.c
+@@ -1135,6 +1135,10 @@ static int ks8842_probe(struct platform_device *pdev)
+ 	unsigned i;
+ 
+ 	iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!iomem) {
++		dev_err(&pdev->dev, "Invalid resource\n");
++		return -EINVAL;
++	}
+ 	if (!request_mem_region(iomem->start, resource_size(iomem), DRV_NAME))
+ 		goto err_mem_region;
+ 
+diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
+index 49fd843c4c8a0..a4380c45f6689 100644
+--- a/drivers/net/ethernet/moxa/moxart_ether.c
++++ b/drivers/net/ethernet/moxa/moxart_ether.c
+@@ -481,14 +481,13 @@ static int moxart_mac_probe(struct platform_device *pdev)
+ 	priv->ndev = ndev;
+ 	priv->pdev = pdev;
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	ndev->base_addr = res->start;
+-	priv->base = devm_ioremap_resource(p_dev, res);
++	priv->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ 	if (IS_ERR(priv->base)) {
+ 		dev_err(p_dev, "devm_ioremap_resource failed\n");
+ 		ret = PTR_ERR(priv->base);
+ 		goto init_fail;
+ 	}
++	ndev->base_addr = res->start;
+ 
+ 	spin_lock_init(&priv->txlock);
+ 
+diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+index 9a0870dc2f034..2942102efd488 100644
+--- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
++++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+@@ -107,7 +107,7 @@ static int pch_ptp_match(struct sk_buff *skb, u16 uid_hi, u32 uid_lo, u16 seqid)
+ {
+ 	u8 *data = skb->data;
+ 	unsigned int offset;
+-	u16 *hi, *id;
++	u16 hi, id;
+ 	u32 lo;
+ 
+ 	if (ptp_classify_raw(skb) == PTP_CLASS_NONE)
+@@ -118,14 +118,11 @@ static int pch_ptp_match(struct sk_buff *skb, u16 uid_hi, u32 uid_lo, u16 seqid)
+ 	if (skb->len < offset + OFF_PTP_SEQUENCE_ID + sizeof(seqid))
+ 		return 0;
+ 
+-	hi = (u16 *)(data + offset + OFF_PTP_SOURCE_UUID);
+-	id = (u16 *)(data + offset + OFF_PTP_SEQUENCE_ID);
++	hi = get_unaligned_be16(data + offset + OFF_PTP_SOURCE_UUID + 0);
++	lo = get_unaligned_be32(data + offset + OFF_PTP_SOURCE_UUID + 2);
++	id = get_unaligned_be16(data + offset + OFF_PTP_SEQUENCE_ID);
+ 
+-	memcpy(&lo, &hi[1], sizeof(lo));
+-
+-	return (uid_hi == *hi &&
+-		uid_lo == lo &&
+-		seqid  == *id);
++	return (uid_hi == hi && uid_lo == lo && seqid == id);
+ }
+ 
+ static void
+@@ -135,7 +132,6 @@ pch_rx_timestamp(struct pch_gbe_adapter *adapter, struct sk_buff *skb)
+ 	struct pci_dev *pdev;
+ 	u64 ns;
+ 	u32 hi, lo, val;
+-	u16 uid, seq;
+ 
+ 	if (!adapter->hwts_rx_en)
+ 		return;
+@@ -151,10 +147,7 @@ pch_rx_timestamp(struct pch_gbe_adapter *adapter, struct sk_buff *skb)
+ 	lo = pch_src_uuid_lo_read(pdev);
+ 	hi = pch_src_uuid_hi_read(pdev);
+ 
+-	uid = hi & 0xffff;
+-	seq = (hi >> 16) & 0xffff;
+-
+-	if (!pch_ptp_match(skb, htons(uid), htonl(lo), htons(seq)))
++	if (!pch_ptp_match(skb, hi, lo, hi >> 16))
+ 		goto out;
+ 
+ 	ns = pch_rx_snap_read(pdev);
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index a6bf80b529679..9010aabd97826 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -3547,7 +3547,6 @@ static void rtl_hw_start_8106(struct rtl8169_private *tp)
+ 	rtl_eri_write(tp, 0x1b0, ERIAR_MASK_0011, 0x0000);
+ 
+ 	rtl_pcie_state_l2l3_disable(tp);
+-	rtl_hw_aspm_clkreq_enable(tp, true);
+ }
+ 
+ DECLARE_RTL_COND(rtl_mac_ocp_e00e_cond)
+diff --git a/drivers/net/ethernet/sfc/ef10_sriov.c b/drivers/net/ethernet/sfc/ef10_sriov.c
+index 21fa6c0e88734..84041cd587d78 100644
+--- a/drivers/net/ethernet/sfc/ef10_sriov.c
++++ b/drivers/net/ethernet/sfc/ef10_sriov.c
+@@ -402,12 +402,17 @@ fail1:
+ 	return rc;
+ }
+ 
++/* Disable SRIOV and remove VFs
++ * If some VFs are attached to a guest (using Xen, only) nothing is
++ * done if force=false, and vports are freed if force=true (for the non
++ * attachedc ones, only) but SRIOV is not disabled and VFs are not
++ * removed in either case.
++ */
+ static int efx_ef10_pci_sriov_disable(struct efx_nic *efx, bool force)
+ {
+ 	struct pci_dev *dev = efx->pci_dev;
+-	unsigned int vfs_assigned = 0;
+-
+-	vfs_assigned = pci_vfs_assigned(dev);
++	unsigned int vfs_assigned = pci_vfs_assigned(dev);
++	int rc = 0;
+ 
+ 	if (vfs_assigned && !force) {
+ 		netif_info(efx, drv, efx->net_dev, "VFs are assigned to guests; "
+@@ -417,10 +422,12 @@ static int efx_ef10_pci_sriov_disable(struct efx_nic *efx, bool force)
+ 
+ 	if (!vfs_assigned)
+ 		pci_disable_sriov(dev);
++	else
++		rc = -EBUSY;
+ 
+ 	efx_ef10_sriov_free_vf_vswitching(efx);
+ 	efx->vf_count = 0;
+-	return 0;
++	return rc;
+ }
+ 
+ int efx_ef10_sriov_configure(struct efx_nic *efx, int num_vfs)
+@@ -439,7 +446,6 @@ int efx_ef10_sriov_init(struct efx_nic *efx)
+ void efx_ef10_sriov_fini(struct efx_nic *efx)
+ {
+ 	struct efx_ef10_nic_data *nic_data = efx->nic_data;
+-	unsigned int i;
+ 	int rc;
+ 
+ 	if (!nic_data->vf) {
+@@ -449,14 +455,7 @@ void efx_ef10_sriov_fini(struct efx_nic *efx)
+ 		return;
+ 	}
+ 
+-	/* Remove any VFs in the host */
+-	for (i = 0; i < efx->vf_count; ++i) {
+-		struct efx_nic *vf_efx = nic_data->vf[i].efx;
+-
+-		if (vf_efx)
+-			vf_efx->pci_dev->driver->remove(vf_efx->pci_dev);
+-	}
+-
++	/* Disable SRIOV and remove any VFs in the host */
+ 	rc = efx_ef10_pci_sriov_disable(efx, true);
+ 	if (rc)
+ 		netif_dbg(efx, drv, efx->net_dev,
+diff --git a/drivers/net/ethernet/sgi/ioc3-eth.c b/drivers/net/ethernet/sgi/ioc3-eth.c
+index 6eef0f45b133b..2b29fd4cbdf44 100644
+--- a/drivers/net/ethernet/sgi/ioc3-eth.c
++++ b/drivers/net/ethernet/sgi/ioc3-eth.c
+@@ -835,6 +835,10 @@ static int ioc3eth_probe(struct platform_device *pdev)
+ 	int err;
+ 
+ 	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!regs) {
++		dev_err(&pdev->dev, "Invalid resource\n");
++		return -EINVAL;
++	}
+ 	/* get mac addr from one wire prom */
+ 	if (ioc3eth_get_mac_addr(regs, mac_addr))
+ 		return -EPROBE_DEFER; /* not available yet */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+index b2a707e2ef439..678726c62a8af 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+@@ -441,6 +441,12 @@ int stmmac_mdio_register(struct net_device *ndev)
+ 		found = 1;
+ 	}
+ 
++	if (!found && !mdio_node) {
++		dev_warn(dev, "No PHY found\n");
++		err = -ENODEV;
++		goto no_phy_found;
++	}
++
+ 	/* Try to probe the XPCS by scanning all addresses. */
+ 	if (priv->hw->xpcs) {
+ 		struct mdio_xpcs_args *xpcs = &priv->hw->xpcs_args;
+@@ -449,6 +455,7 @@ int stmmac_mdio_register(struct net_device *ndev)
+ 
+ 		xpcs->bus = new_bus;
+ 
++		found = 0;
+ 		for (addr = 0; addr < max_addr; addr++) {
+ 			xpcs->addr = addr;
+ 
+@@ -458,13 +465,12 @@ int stmmac_mdio_register(struct net_device *ndev)
+ 				break;
+ 			}
+ 		}
+-	}
+ 
+-	if (!found && !mdio_node) {
+-		dev_warn(dev, "No PHY found\n");
+-		mdiobus_unregister(new_bus);
+-		mdiobus_free(new_bus);
+-		return -ENODEV;
++		if (!found && !mdio_node) {
++			dev_warn(dev, "No XPCS found\n");
++			err = -ENODEV;
++			goto no_xpcs_found;
++		}
+ 	}
+ 
+ bus_register_done:
+@@ -472,6 +478,9 @@ bus_register_done:
+ 
+ 	return 0;
+ 
++no_xpcs_found:
++no_phy_found:
++	mdiobus_unregister(new_bus);
+ bus_register_fail:
+ 	mdiobus_free(new_bus);
+ 	return err;
+diff --git a/drivers/net/fjes/fjes_main.c b/drivers/net/fjes/fjes_main.c
+index 466622664424d..e449d94661225 100644
+--- a/drivers/net/fjes/fjes_main.c
++++ b/drivers/net/fjes/fjes_main.c
+@@ -1262,6 +1262,10 @@ static int fjes_probe(struct platform_device *plat_dev)
+ 	adapter->interrupt_watch_enable = false;
+ 
+ 	res = platform_get_resource(plat_dev, IORESOURCE_MEM, 0);
++	if (!res) {
++		err = -EINVAL;
++		goto err_free_control_wq;
++	}
+ 	hw->hw_res.start = res->start;
+ 	hw->hw_res.size = resource_size(res);
+ 	hw->hw_res.irq = platform_get_irq(plat_dev, 0);
+diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
+index cd4d993b0bbb2..4162a608a3bf9 100644
+--- a/drivers/net/ipa/ipa_main.c
++++ b/drivers/net/ipa/ipa_main.c
+@@ -589,6 +589,7 @@ static int ipa_firmware_load(struct device *dev)
+ 	}
+ 
+ 	ret = of_address_to_resource(node, 0, &res);
++	of_node_put(node);
+ 	if (ret) {
+ 		dev_err(dev, "error %d getting \"memory-region\" resource\n",
+ 			ret);
+diff --git a/drivers/net/mdio/mdio-ipq8064.c b/drivers/net/mdio/mdio-ipq8064.c
+index 1bd18857e1c5e..f0a6bfa61645e 100644
+--- a/drivers/net/mdio/mdio-ipq8064.c
++++ b/drivers/net/mdio/mdio-ipq8064.c
+@@ -10,7 +10,7 @@
+ #include <linux/module.h>
+ #include <linux/regmap.h>
+ #include <linux/of_mdio.h>
+-#include <linux/phy.h>
++#include <linux/of_address.h>
+ #include <linux/platform_device.h>
+ #include <linux/mfd/syscon.h>
+ 
+@@ -96,14 +96,34 @@ ipq8064_mdio_write(struct mii_bus *bus, int phy_addr, int reg_offset, u16 data)
+ 	return ipq8064_mdio_wait_busy(priv);
+ }
+ 
++static const struct regmap_config ipq8064_mdio_regmap_config = {
++	.reg_bits = 32,
++	.reg_stride = 4,
++	.val_bits = 32,
++	.can_multi_write = false,
++	/* the mdio lock is used by any user of this mdio driver */
++	.disable_locking = true,
++
++	.cache_type = REGCACHE_NONE,
++};
++
+ static int
+ ipq8064_mdio_probe(struct platform_device *pdev)
+ {
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct ipq8064_mdio *priv;
++	struct resource res;
+ 	struct mii_bus *bus;
++	void __iomem *base;
+ 	int ret;
+ 
++	if (of_address_to_resource(np, 0, &res))
++		return -ENOMEM;
++
++	base = ioremap(res.start, resource_size(&res));
++	if (!base)
++		return -ENOMEM;
++
+ 	bus = devm_mdiobus_alloc_size(&pdev->dev, sizeof(*priv));
+ 	if (!bus)
+ 		return -ENOMEM;
+@@ -115,15 +135,10 @@ ipq8064_mdio_probe(struct platform_device *pdev)
+ 	bus->parent = &pdev->dev;
+ 
+ 	priv = bus->priv;
+-	priv->base = device_node_to_regmap(np);
+-	if (IS_ERR(priv->base)) {
+-		if (priv->base == ERR_PTR(-EPROBE_DEFER))
+-			return -EPROBE_DEFER;
+-
+-		dev_err(&pdev->dev, "error getting device regmap, error=%pe\n",
+-			priv->base);
++	priv->base = devm_regmap_init_mmio(&pdev->dev, base,
++					   &ipq8064_mdio_regmap_config);
++	if (IS_ERR(priv->base))
+ 		return PTR_ERR(priv->base);
+-	}
+ 
+ 	ret = of_mdiobus_register(bus, np);
+ 	if (ret)
+diff --git a/drivers/net/phy/realtek.c b/drivers/net/phy/realtek.c
+index 575580d3ffe04..b4879306bb8ae 100644
+--- a/drivers/net/phy/realtek.c
++++ b/drivers/net/phy/realtek.c
+@@ -246,6 +246,19 @@ static int rtl8211f_config_init(struct phy_device *phydev)
+ 	return 0;
+ }
+ 
++static int rtl821x_resume(struct phy_device *phydev)
++{
++	int ret;
++
++	ret = genphy_resume(phydev);
++	if (ret < 0)
++		return ret;
++
++	msleep(20);
++
++	return 0;
++}
++
+ static int rtl8211e_config_init(struct phy_device *phydev)
+ {
+ 	int ret = 0, oldpage;
+@@ -624,7 +637,7 @@ static struct phy_driver realtek_drvs[] = {
+ 		.ack_interrupt	= &rtl8211f_ack_interrupt,
+ 		.config_intr	= &rtl8211f_config_intr,
+ 		.suspend	= genphy_suspend,
+-		.resume		= genphy_resume,
++		.resume		= rtl821x_resume,
+ 		.read_page	= rtl821x_read_page,
+ 		.write_page	= rtl821x_write_page,
+ 	}, {
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 286f836a53bfe..91e0e6254a01d 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -660,6 +660,12 @@ static struct sk_buff *receive_small(struct net_device *dev,
+ 	len -= vi->hdr_len;
+ 	stats->bytes += len;
+ 
++	if (unlikely(len > GOOD_PACKET_LEN)) {
++		pr_debug("%s: rx error: len %u exceeds max size %d\n",
++			 dev->name, len, GOOD_PACKET_LEN);
++		dev->stats.rx_length_errors++;
++		goto err_len;
++	}
+ 	rcu_read_lock();
+ 	xdp_prog = rcu_dereference(rq->xdp_prog);
+ 	if (xdp_prog) {
+@@ -763,6 +769,7 @@ err:
+ err_xdp:
+ 	rcu_read_unlock();
+ 	stats->xdp_drops++;
++err_len:
+ 	stats->drops++;
+ 	put_page(page);
+ xdp_xmit:
+@@ -816,6 +823,12 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ 	head_skb = NULL;
+ 	stats->bytes += len - vi->hdr_len;
+ 
++	if (unlikely(len > truesize)) {
++		pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
++			 dev->name, len, (unsigned long)ctx);
++		dev->stats.rx_length_errors++;
++		goto err_skb;
++	}
+ 	rcu_read_lock();
+ 	xdp_prog = rcu_dereference(rq->xdp_prog);
+ 	if (xdp_prog) {
+@@ -943,13 +956,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ 	}
+ 	rcu_read_unlock();
+ 
+-	if (unlikely(len > truesize)) {
+-		pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
+-			 dev->name, len, (unsigned long)ctx);
+-		dev->stats.rx_length_errors++;
+-		goto err_skb;
+-	}
+-
+ 	head_skb = page_to_skb(vi, rq, page, offset, len, truesize, !xdp_prog,
+ 			       metasize);
+ 	curr_skb = head_skb;
+@@ -1557,7 +1563,7 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
+ 	if (virtio_net_hdr_from_skb(skb, &hdr->hdr,
+ 				    virtio_is_little_endian(vi->vdev), false,
+ 				    0))
+-		BUG();
++		return -EPROTO;
+ 
+ 	if (vi->mergeable_rx_bufs)
+ 		hdr->num_buffers = 0;
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index cc0c30ceaa0d4..63d70aecbd0f1 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -4603,13 +4603,13 @@ err_peer_del:
+ 		if (ret) {
+ 			ath11k_warn(ar->ab, "failed to delete peer vdev_id %d addr %pM\n",
+ 				    arvif->vdev_id, vif->addr);
+-			return ret;
++			goto err;
+ 		}
+ 
+ 		ret = ath11k_wait_for_peer_delete_done(ar, arvif->vdev_id,
+ 						       vif->addr);
+ 		if (ret)
+-			return ret;
++			goto err;
+ 
+ 		ar->num_peers--;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index d42165559df6e..8cba923b1ec6c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -3794,6 +3794,7 @@ static int iwl_mvm_roc(struct ieee80211_hw *hw,
+ 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ 	struct cfg80211_chan_def chandef;
+ 	struct iwl_mvm_phy_ctxt *phy_ctxt;
++	bool band_change_removal;
+ 	int ret, i;
+ 
+ 	IWL_DEBUG_MAC80211(mvm, "enter (%d, %d, %d)\n", channel->hw_value,
+@@ -3874,19 +3875,30 @@ static int iwl_mvm_roc(struct ieee80211_hw *hw,
+ 	cfg80211_chandef_create(&chandef, channel, NL80211_CHAN_NO_HT);
+ 
+ 	/*
+-	 * Change the PHY context configuration as it is currently referenced
+-	 * only by the P2P Device MAC
++	 * Check if the remain-on-channel is on a different band and that
++	 * requires context removal, see iwl_mvm_phy_ctxt_changed(). If
++	 * so, we'll need to release and then re-configure here, since we
++	 * must not remove a PHY context that's part of a binding.
+ 	 */
+-	if (mvmvif->phy_ctxt->ref == 1) {
++	band_change_removal =
++		fw_has_capa(&mvm->fw->ucode_capa,
++			    IWL_UCODE_TLV_CAPA_BINDING_CDB_SUPPORT) &&
++		mvmvif->phy_ctxt->channel->band != chandef.chan->band;
++
++	if (mvmvif->phy_ctxt->ref == 1 && !band_change_removal) {
++		/*
++		 * Change the PHY context configuration as it is currently
++		 * referenced only by the P2P Device MAC (and we can modify it)
++		 */
+ 		ret = iwl_mvm_phy_ctxt_changed(mvm, mvmvif->phy_ctxt,
+ 					       &chandef, 1, 1);
+ 		if (ret)
+ 			goto out_unlock;
+ 	} else {
+ 		/*
+-		 * The PHY context is shared with other MACs. Need to remove the
+-		 * P2P Device from the binding, allocate an new PHY context and
+-		 * create a new binding
++		 * The PHY context is shared with other MACs (or we're trying to
++		 * switch bands), so remove the P2P Device from the binding,
++		 * allocate an new PHY context and create a new binding.
+ 		 */
+ 		phy_ctxt = iwl_mvm_get_free_phy_ctxt(mvm);
+ 		if (!phy_ctxt) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index 3939eccd3d5ac..394598b14a173 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -345,6 +345,8 @@ static void iwl_mvm_te_handle_notif(struct iwl_mvm *mvm,
+ 			 * and know the dtim period.
+ 			 */
+ 			iwl_mvm_te_check_disconnect(mvm, te_data->vif,
++				!te_data->vif->bss_conf.assoc ?
++				"Not associated and the time event is over already..." :
+ 				"No beacon heard and the time event is over already...");
+ 			break;
+ 		default:
+@@ -843,6 +845,8 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
+ 			 * and know the dtim period.
+ 			 */
+ 			iwl_mvm_te_check_disconnect(mvm, vif,
++						    !vif->bss_conf.assoc ?
++						    "Not associated and the session protection is over already..." :
+ 						    "No beacon heard and the session protection is over already...");
+ 			spin_lock_bh(&mvm->time_event_lock);
+ 			iwl_mvm_te_clear_data(mvm, te_data);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+index ec1d6025081de..56f63f5f5dd34 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+@@ -126,7 +126,6 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 	struct iwl_prph_scratch *prph_scratch;
+ 	struct iwl_prph_scratch_ctrl_cfg *prph_sc_ctrl;
+ 	struct iwl_prph_info *prph_info;
+-	void *iml_img;
+ 	u32 control_flags = 0;
+ 	int ret;
+ 	int cmdq_size = max_t(u32, IWL_CMD_QUEUE_SIZE,
+@@ -234,14 +233,15 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 	trans_pcie->prph_scratch = prph_scratch;
+ 
+ 	/* Allocate IML */
+-	iml_img = dma_alloc_coherent(trans->dev, trans->iml_len,
+-				     &trans_pcie->iml_dma_addr, GFP_KERNEL);
+-	if (!iml_img) {
++	trans_pcie->iml = dma_alloc_coherent(trans->dev, trans->iml_len,
++					     &trans_pcie->iml_dma_addr,
++					     GFP_KERNEL);
++	if (!trans_pcie->iml) {
+ 		ret = -ENOMEM;
+ 		goto err_free_ctxt_info;
+ 	}
+ 
+-	memcpy(iml_img, trans->iml, trans->iml_len);
++	memcpy(trans_pcie->iml, trans->iml, trans->iml_len);
+ 
+ 	iwl_enable_fw_load_int_ctx_info(trans);
+ 
+@@ -290,6 +290,11 @@ void iwl_pcie_ctxt_info_gen3_free(struct iwl_trans *trans)
+ 	trans_pcie->ctxt_info_dma_addr = 0;
+ 	trans_pcie->ctxt_info_gen3 = NULL;
+ 
++	dma_free_coherent(trans->dev, trans->iml_len, trans_pcie->iml,
++			  trans_pcie->iml_dma_addr);
++	trans_pcie->iml_dma_addr = 0;
++	trans_pcie->iml = NULL;
++
+ 	iwl_pcie_ctxt_info_free_fw_img(trans);
+ 
+ 	dma_free_coherent(trans->dev, sizeof(*trans_pcie->prph_scratch),
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+index ff542d2f0054b..f05025e8d11d5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+@@ -336,6 +336,8 @@ struct cont_rec {
+  *	Context information addresses will be taken from here.
+  *	This is driver's local copy for keeping track of size and
+  *	count for allocating and freeing the memory.
++ * @iml: image loader image virtual address
++ * @iml_dma_addr: image loader image DMA address
+  * @trans: pointer to the generic transport area
+  * @scd_base_addr: scheduler sram base address in SRAM
+  * @kw: keep warm address
+@@ -388,6 +390,7 @@ struct iwl_trans_pcie {
+ 	};
+ 	struct iwl_prph_info *prph_info;
+ 	struct iwl_prph_scratch *prph_scratch;
++	void *iml;
+ 	dma_addr_t ctxt_info_dma_addr;
+ 	dma_addr_t prph_info_dma_addr;
+ 	dma_addr_t prph_scratch_dma_addr;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+index 4c3ca2a376964..b031e9304983c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+@@ -269,7 +269,8 @@ void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans, u32 scd_addr)
+ 	/* now that we got alive we can free the fw image & the context info.
+ 	 * paging memory cannot be freed included since FW will still use it
+ 	 */
+-	iwl_pcie_ctxt_info_free(trans);
++	if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
++		iwl_pcie_ctxt_info_free(trans);
+ 
+ 	/*
+ 	 * Re-enable all the interrupts, including the RF-Kill one, now that
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index f147d4feedb91..4ca0b06d09add 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -557,6 +557,7 @@ struct mac80211_hwsim_data {
+ 	u32 ciphers[ARRAY_SIZE(hwsim_ciphers)];
+ 
+ 	struct mac_address addresses[2];
++	struct ieee80211_chanctx_conf *chanctx;
+ 	int channels, idx;
+ 	bool use_chanctx;
+ 	bool destroy_on_close;
+@@ -1187,7 +1188,8 @@ static inline u16 trans_tx_rate_flags_ieee2hwsim(struct ieee80211_tx_rate *rate)
+ 
+ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw,
+ 				       struct sk_buff *my_skb,
+-				       int dst_portid)
++				       int dst_portid,
++				       struct ieee80211_channel *channel)
+ {
+ 	struct sk_buff *skb;
+ 	struct mac80211_hwsim_data *data = hw->priv;
+@@ -1242,7 +1244,7 @@ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw,
+ 	if (nla_put_u32(skb, HWSIM_ATTR_FLAGS, hwsim_flags))
+ 		goto nla_put_failure;
+ 
+-	if (nla_put_u32(skb, HWSIM_ATTR_FREQ, data->channel->center_freq))
++	if (nla_put_u32(skb, HWSIM_ATTR_FREQ, channel->center_freq))
+ 		goto nla_put_failure;
+ 
+ 	/* We get the tx control (rate and retries) info*/
+@@ -1589,7 +1591,7 @@ static void mac80211_hwsim_tx(struct ieee80211_hw *hw,
+ 	_portid = READ_ONCE(data->wmediumd);
+ 
+ 	if (_portid || hwsim_virtio_enabled)
+-		return mac80211_hwsim_tx_frame_nl(hw, skb, _portid);
++		return mac80211_hwsim_tx_frame_nl(hw, skb, _portid, channel);
+ 
+ 	/* NO wmediumd detected, perfect medium simulation */
+ 	data->tx_pkts++;
+@@ -1705,7 +1707,7 @@ static void mac80211_hwsim_tx_frame(struct ieee80211_hw *hw,
+ 	mac80211_hwsim_monitor_rx(hw, skb, chan);
+ 
+ 	if (_pid || hwsim_virtio_enabled)
+-		return mac80211_hwsim_tx_frame_nl(hw, skb, _pid);
++		return mac80211_hwsim_tx_frame_nl(hw, skb, _pid, chan);
+ 
+ 	mac80211_hwsim_tx_frame_no_nl(hw, skb, chan);
+ 	dev_kfree_skb(skb);
+@@ -2444,6 +2446,11 @@ static int mac80211_hwsim_croc(struct ieee80211_hw *hw,
+ static int mac80211_hwsim_add_chanctx(struct ieee80211_hw *hw,
+ 				      struct ieee80211_chanctx_conf *ctx)
+ {
++	struct mac80211_hwsim_data *hwsim = hw->priv;
++
++	mutex_lock(&hwsim->mutex);
++	hwsim->chanctx = ctx;
++	mutex_unlock(&hwsim->mutex);
+ 	hwsim_set_chanctx_magic(ctx);
+ 	wiphy_dbg(hw->wiphy,
+ 		  "add channel context control: %d MHz/width: %d/cfreqs:%d/%d MHz\n",
+@@ -2455,6 +2462,11 @@ static int mac80211_hwsim_add_chanctx(struct ieee80211_hw *hw,
+ static void mac80211_hwsim_remove_chanctx(struct ieee80211_hw *hw,
+ 					  struct ieee80211_chanctx_conf *ctx)
+ {
++	struct mac80211_hwsim_data *hwsim = hw->priv;
++
++	mutex_lock(&hwsim->mutex);
++	hwsim->chanctx = NULL;
++	mutex_unlock(&hwsim->mutex);
+ 	wiphy_dbg(hw->wiphy,
+ 		  "remove channel context control: %d MHz/width: %d/cfreqs:%d/%d MHz\n",
+ 		  ctx->def.chan->center_freq, ctx->def.width,
+@@ -2467,6 +2479,11 @@ static void mac80211_hwsim_change_chanctx(struct ieee80211_hw *hw,
+ 					  struct ieee80211_chanctx_conf *ctx,
+ 					  u32 changed)
+ {
++	struct mac80211_hwsim_data *hwsim = hw->priv;
++
++	mutex_lock(&hwsim->mutex);
++	hwsim->chanctx = ctx;
++	mutex_unlock(&hwsim->mutex);
+ 	hwsim_check_chanctx_magic(ctx);
+ 	wiphy_dbg(hw->wiphy,
+ 		  "change channel context control: %d MHz/width: %d/cfreqs:%d/%d MHz\n",
+@@ -3059,6 +3076,7 @@ static int mac80211_hwsim_new_radio(struct genl_info *info,
+ 		hw->wiphy->max_remain_on_channel_duration = 1000;
+ 		data->if_combination.radar_detect_widths = 0;
+ 		data->if_combination.num_different_channels = data->channels;
++		data->chanctx = NULL;
+ 	} else {
+ 		data->if_combination.num_different_channels = 1;
+ 		data->if_combination.radar_detect_widths =
+@@ -3566,6 +3584,7 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
+ 	int frame_data_len;
+ 	void *frame_data;
+ 	struct sk_buff *skb = NULL;
++	struct ieee80211_channel *channel = NULL;
+ 
+ 	if (!info->attrs[HWSIM_ATTR_ADDR_RECEIVER] ||
+ 	    !info->attrs[HWSIM_ATTR_FRAME] ||
+@@ -3592,6 +3611,17 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
+ 	if (!data2)
+ 		goto out;
+ 
++	if (data2->use_chanctx) {
++		if (data2->tmp_chan)
++			channel = data2->tmp_chan;
++		else if (data2->chanctx)
++			channel = data2->chanctx->def.chan;
++	} else {
++		channel = data2->channel;
++	}
++	if (!channel)
++		goto out;
++
+ 	if (!hwsim_virtio_enabled) {
+ 		if (hwsim_net_get_netgroup(genl_info_net(info)) !=
+ 		    data2->netgroup)
+@@ -3603,7 +3633,7 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
+ 
+ 	/* check if radio is configured properly */
+ 
+-	if (data2->idle || !data2->started)
++	if ((data2->idle && !data2->tmp_chan) || !data2->started)
+ 		goto out;
+ 
+ 	/* A frame is received from user space */
+@@ -3616,18 +3646,16 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
+ 		mutex_lock(&data2->mutex);
+ 		rx_status.freq = nla_get_u32(info->attrs[HWSIM_ATTR_FREQ]);
+ 
+-		if (rx_status.freq != data2->channel->center_freq &&
+-		    (!data2->tmp_chan ||
+-		     rx_status.freq != data2->tmp_chan->center_freq)) {
++		if (rx_status.freq != channel->center_freq) {
+ 			mutex_unlock(&data2->mutex);
+ 			goto out;
+ 		}
+ 		mutex_unlock(&data2->mutex);
+ 	} else {
+-		rx_status.freq = data2->channel->center_freq;
++		rx_status.freq = channel->center_freq;
+ 	}
+ 
+-	rx_status.band = data2->channel->band;
++	rx_status.band = channel->band;
+ 	rx_status.rate_idx = nla_get_u32(info->attrs[HWSIM_ATTR_RX_RATE]);
+ 	rx_status.signal = nla_get_u32(info->attrs[HWSIM_ATTR_SIGNAL]);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index 5795e44f8a529..f44f478bb970e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -1177,22 +1177,20 @@ static bool mt7615_fill_txs(struct mt7615_dev *dev, struct mt7615_sta *sta,
+ 	int first_idx = 0, last_idx;
+ 	int i, idx, count;
+ 	bool fixed_rate, ack_timeout;
+-	bool probe, ampdu, cck = false;
++	bool ampdu, cck = false;
+ 	bool rs_idx;
+ 	u32 rate_set_tsf;
+ 	u32 final_rate, final_rate_flags, final_nss, txs;
+ 
+-	fixed_rate = info->status.rates[0].count;
+-	probe = !!(info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE);
+-
+ 	txs = le32_to_cpu(txs_data[1]);
+-	ampdu = !fixed_rate && (txs & MT_TXS1_AMPDU);
++	ampdu = txs & MT_TXS1_AMPDU;
+ 
+ 	txs = le32_to_cpu(txs_data[3]);
+ 	count = FIELD_GET(MT_TXS3_TX_COUNT, txs);
+ 	last_idx = FIELD_GET(MT_TXS3_LAST_TX_RATE, txs);
+ 
+ 	txs = le32_to_cpu(txs_data[0]);
++	fixed_rate = txs & MT_TXS0_FIXED_RATE;
+ 	final_rate = FIELD_GET(MT_TXS0_TX_RATE, txs);
+ 	ack_timeout = txs & MT_TXS0_ACK_TIMEOUT;
+ 
+@@ -1214,7 +1212,7 @@ static bool mt7615_fill_txs(struct mt7615_dev *dev, struct mt7615_sta *sta,
+ 
+ 	first_idx = max_t(int, 0, last_idx - (count - 1) / MT7615_RATE_RETRY);
+ 
+-	if (fixed_rate && !probe) {
++	if (fixed_rate) {
+ 		info->status.rates[0].count = count;
+ 		i = 0;
+ 		goto out;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index 0232b66acb4f9..8f01ca1694bca 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -335,6 +335,9 @@ mt7915_set_stream_he_txbf_caps(struct ieee80211_sta_he_cap *he_cap,
+ 	if (nss < 2)
+ 		return;
+ 
++	/* the maximum cap is 4 x 3, (Nr, Nc) = (3, 2) */
++	elem->phy_cap_info[7] |= min_t(int, nss - 1, 2) << 3;
++
+ 	if (vif != NL80211_IFTYPE_AP)
+ 		return;
+ 
+@@ -348,9 +351,6 @@ mt7915_set_stream_he_txbf_caps(struct ieee80211_sta_he_cap *he_cap,
+ 	c = IEEE80211_HE_PHY_CAP6_TRIG_SU_BEAMFORMER_FB |
+ 	    IEEE80211_HE_PHY_CAP6_TRIG_MU_BEAMFORMER_FB;
+ 	elem->phy_cap_info[6] |= c;
+-
+-	/* the maximum cap is 4 x 3, (Nr, Nc) = (3, 2) */
+-	elem->phy_cap_info[7] |= min_t(int, nss - 1, 2) << 3;
+ }
+ 
+ static void
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+index d6d1be4169e5f..acb6b0cd36672 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+@@ -853,15 +853,10 @@ struct rtl8192eu_efuse {
+ 	u8 usb_optional_function;
+ 	u8 res9[2];
+ 	u8 mac_addr[ETH_ALEN];		/* 0xd7 */
+-	u8 res10[2];
+-	u8 vendor_name[7];
+-	u8 res11[2];
+-	u8 device_name[0x0b];		/* 0xe8 */
+-	u8 res12[2];
+-	u8 serial[0x0b];		/* 0xf5 */
+-	u8 res13[0x30];
++	u8 device_info[80];
++	u8 res11[3];
+ 	u8 unknown[0x0d];		/* 0x130 */
+-	u8 res14[0xc3];
++	u8 res12[0xc3];
+ };
+ 
+ struct rtl8xxxu_reg8val {
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+index 9f1f93d04145d..199e7e031d7d9 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+@@ -554,9 +554,43 @@ rtl8192e_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
+ 	}
+ }
+ 
++static void rtl8192eu_log_next_device_info(struct rtl8xxxu_priv *priv,
++					   char *record_name,
++					   char *device_info,
++					   unsigned int *record_offset)
++{
++	char *record = device_info + *record_offset;
++
++	/* A record is [ total length | 0x03 | value ] */
++	unsigned char l = record[0];
++
++	/*
++	 * The whole device info section seems to be 80 characters, make sure
++	 * we don't read further.
++	 */
++	if (*record_offset + l > 80) {
++		dev_warn(&priv->udev->dev,
++			 "invalid record length %d while parsing \"%s\" at offset %u.\n",
++			 l, record_name, *record_offset);
++		return;
++	}
++
++	if (l >= 2) {
++		char value[80];
++
++		memcpy(value, &record[2], l - 2);
++		value[l - 2] = '\0';
++		dev_info(&priv->udev->dev, "%s: %s\n", record_name, value);
++		*record_offset = *record_offset + l;
++	} else {
++		dev_info(&priv->udev->dev, "%s not available.\n", record_name);
++	}
++}
++
+ static int rtl8192eu_parse_efuse(struct rtl8xxxu_priv *priv)
+ {
+ 	struct rtl8192eu_efuse *efuse = &priv->efuse_wifi.efuse8192eu;
++	unsigned int record_offset;
+ 	int i;
+ 
+ 	if (efuse->rtl_id != cpu_to_le16(0x8129))
+@@ -604,12 +638,25 @@ static int rtl8192eu_parse_efuse(struct rtl8xxxu_priv *priv)
+ 	priv->has_xtalk = 1;
+ 	priv->xtalk = priv->efuse_wifi.efuse8192eu.xtal_k & 0x3f;
+ 
+-	dev_info(&priv->udev->dev, "Vendor: %.7s\n", efuse->vendor_name);
+-	dev_info(&priv->udev->dev, "Product: %.11s\n", efuse->device_name);
+-	if (memchr_inv(efuse->serial, 0xff, 11))
+-		dev_info(&priv->udev->dev, "Serial: %.11s\n", efuse->serial);
+-	else
+-		dev_info(&priv->udev->dev, "Serial not available.\n");
++	/*
++	 * device_info section seems to be laid out as records
++	 * [ total length | 0x03 | value ] so:
++	 * - vendor length + 2
++	 * - 0x03
++	 * - vendor string (not null terminated)
++	 * - product length + 2
++	 * - 0x03
++	 * - product string (not null terminated)
++	 * Then there is one or 2 0x00 on all the 4 devices I own or found
++	 * dumped online.
++	 * As previous version of the code handled an optional serial
++	 * string, I now assume there may be a third record if the
++	 * length is not 0.
++	 */
++	record_offset = 0;
++	rtl8192eu_log_next_device_info(priv, "Vendor", efuse->device_info, &record_offset);
++	rtl8192eu_log_next_device_info(priv, "Product", efuse->device_info, &record_offset);
++	rtl8192eu_log_next_device_info(priv, "Serial", efuse->device_info, &record_offset);
+ 
+ 	if (rtl8xxxu_debug & RTL8XXXU_DEBUG_EFUSE) {
+ 		unsigned char *raw = priv->efuse_wifi.raw;
+diff --git a/drivers/net/wireless/st/cw1200/cw1200_sdio.c b/drivers/net/wireless/st/cw1200/cw1200_sdio.c
+index b65ec14136c7e..4c30b5772ce0f 100644
+--- a/drivers/net/wireless/st/cw1200/cw1200_sdio.c
++++ b/drivers/net/wireless/st/cw1200/cw1200_sdio.c
+@@ -53,6 +53,7 @@ static const struct sdio_device_id cw1200_sdio_ids[] = {
+ 	{ SDIO_DEVICE(SDIO_VENDOR_ID_STE, SDIO_DEVICE_ID_STE_CW1200) },
+ 	{ /* end: all zeroes */			},
+ };
++MODULE_DEVICE_TABLE(sdio, cw1200_sdio_ids);
+ 
+ /* hwbus_ops implemetation */
+ 
+diff --git a/drivers/net/wireless/ti/wl1251/cmd.c b/drivers/net/wireless/ti/wl1251/cmd.c
+index 9547aea01b0fb..ea0215246c5c8 100644
+--- a/drivers/net/wireless/ti/wl1251/cmd.c
++++ b/drivers/net/wireless/ti/wl1251/cmd.c
+@@ -466,9 +466,12 @@ int wl1251_cmd_scan(struct wl1251 *wl, u8 *ssid, size_t ssid_len,
+ 		cmd->channels[i].channel = channels[i]->hw_value;
+ 	}
+ 
+-	cmd->params.ssid_len = ssid_len;
+-	if (ssid)
+-		memcpy(cmd->params.ssid, ssid, ssid_len);
++	if (ssid) {
++		int len = clamp_val(ssid_len, 0, IEEE80211_MAX_SSID_LEN);
++
++		cmd->params.ssid_len = len;
++		memcpy(cmd->params.ssid, ssid, len);
++	}
+ 
+ 	ret = wl1251_cmd_send(wl, CMD_SCAN, cmd, sizeof(*cmd));
+ 	if (ret < 0) {
+diff --git a/drivers/net/wireless/ti/wl12xx/main.c b/drivers/net/wireless/ti/wl12xx/main.c
+index 9d7dbfe7fe0c3..c6da0cfb4afbe 100644
+--- a/drivers/net/wireless/ti/wl12xx/main.c
++++ b/drivers/net/wireless/ti/wl12xx/main.c
+@@ -1503,6 +1503,13 @@ static int wl12xx_get_fuse_mac(struct wl1271 *wl)
+ 	u32 mac1, mac2;
+ 	int ret;
+ 
++	/* Device may be in ELP from the bootloader or kexec */
++	ret = wlcore_write32(wl, WL12XX_WELP_ARM_COMMAND, WELP_ARM_COMMAND_VAL);
++	if (ret < 0)
++		goto out;
++
++	usleep_range(500000, 700000);
++
+ 	ret = wlcore_set_partition(wl, &wl->ptable[PART_DRPW]);
+ 	if (ret < 0)
+ 		goto out;
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 9b6ab83956c3b..bb8fb2b3711d4 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -549,15 +549,17 @@ static int nvmem_add_cells_from_of(struct nvmem_device *nvmem)
+ 			continue;
+ 		if (len < 2 * sizeof(u32)) {
+ 			dev_err(dev, "nvmem: invalid reg on %pOF\n", child);
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+ 		cell = kzalloc(sizeof(*cell), GFP_KERNEL);
+-		if (!cell)
++		if (!cell) {
++			of_node_put(child);
+ 			return -ENOMEM;
++		}
+ 
+ 		cell->nvmem = nvmem;
+-		cell->np = of_node_get(child);
+ 		cell->offset = be32_to_cpup(addr++);
+ 		cell->bytes = be32_to_cpup(addr);
+ 		cell->name = kasprintf(GFP_KERNEL, "%pOFn", child);
+@@ -578,11 +580,12 @@ static int nvmem_add_cells_from_of(struct nvmem_device *nvmem)
+ 				cell->name, nvmem->stride);
+ 			/* Cells already added will be freed later. */
+ 			kfree_const(cell->name);
+-			of_node_put(cell->np);
+ 			kfree(cell);
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
++		cell->np = of_node_get(child);
+ 		nvmem_cell_add(cell);
+ 	}
+ 
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 41be72c74e3a4..b1b41b61e0bd0 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -56,7 +56,7 @@
+ #define   PIO_COMPLETION_STATUS_UR		1
+ #define   PIO_COMPLETION_STATUS_CRS		2
+ #define   PIO_COMPLETION_STATUS_CA		4
+-#define   PIO_NON_POSTED_REQ			BIT(0)
++#define   PIO_NON_POSTED_REQ			BIT(10)
+ #define PIO_ADDR_LS				(PIO_BASE_ADDR + 0x8)
+ #define PIO_ADDR_MS				(PIO_BASE_ADDR + 0xc)
+ #define PIO_WR_DATA				(PIO_BASE_ADDR + 0x10)
+@@ -124,6 +124,7 @@
+ #define     LTSSM_MASK				0x3f
+ #define     LTSSM_L0				0x10
+ #define     RC_BAR_CONFIG			0x300
++#define VENDOR_ID_REG				(LMI_BASE_ADDR + 0x44)
+ 
+ /* PCIe core controller registers */
+ #define CTRL_CORE_BASE_ADDR			0x18000
+@@ -385,6 +386,16 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	reg |= (IS_RC_MSK << IS_RC_SHIFT);
+ 	advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+ 
++	/*
++	 * Replace incorrect PCI vendor id value 0x1b4b by correct value 0x11ab.
++	 * VENDOR_ID_REG contains vendor id in low 16 bits and subsystem vendor
++	 * id in high 16 bits. Updating this register changes readback value of
++	 * read-only vendor id bits in PCIE_CORE_DEV_ID_REG register. Workaround
++	 * for erratum 4.1: "The value of device and vendor ID is incorrect".
++	 */
++	reg = (PCI_VENDOR_ID_MARVELL << 16) | PCI_VENDOR_ID_MARVELL;
++	advk_writel(pcie, reg, VENDOR_ID_REG);
++
+ 	/* Set Advanced Error Capabilities and Control PF0 register */
+ 	reg = PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX |
+ 		PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN |
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 16fb3d7714d51..41bcdfac03d80 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -27,6 +27,7 @@
+ #include <linux/nvme.h>
+ #include <linux/platform_data/x86/apple.h>
+ #include <linux/pm_runtime.h>
++#include <linux/suspend.h>
+ #include <linux/switchtec.h>
+ #include <asm/dma.h>	/* isa_dma_bridge_buggy */
+ #include "pci.h"
+@@ -3667,6 +3668,16 @@ static void quirk_apple_poweroff_thunderbolt(struct pci_dev *dev)
+ 		return;
+ 	if (pci_pcie_type(dev) != PCI_EXP_TYPE_UPSTREAM)
+ 		return;
++
++	/*
++	 * SXIO/SXFP/SXLF turns off power to the Thunderbolt controller.
++	 * We don't know how to turn it back on again, but firmware does,
++	 * so we can only use SXIO/SXFP/SXLF if we're suspending via
++	 * firmware.
++	 */
++	if (!pm_suspend_via_firmware())
++		return;
++
+ 	bridge = ACPI_HANDLE(&dev->dev);
+ 	if (!bridge)
+ 		return;
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 899c16c17b6da..ef49402c0623d 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -952,6 +952,7 @@ static int amd_gpio_remove(struct platform_device *pdev)
+ static const struct acpi_device_id amd_gpio_acpi_match[] = {
+ 	{ "AMD0030", 0 },
+ 	{ "AMDI0030", 0},
++	{ "AMDI0031", 0},
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(acpi, amd_gpio_acpi_match);
+diff --git a/drivers/pinctrl/pinctrl-equilibrium.c b/drivers/pinctrl/pinctrl-equilibrium.c
+index 067271b7d35a3..ac1c47f542c11 100644
+--- a/drivers/pinctrl/pinctrl-equilibrium.c
++++ b/drivers/pinctrl/pinctrl-equilibrium.c
+@@ -929,6 +929,7 @@ static const struct of_device_id eqbr_pinctrl_dt_match[] = {
+ 	{ .compatible = "intel,lgm-io" },
+ 	{}
+ };
++MODULE_DEVICE_TABLE(of, eqbr_pinctrl_dt_match);
+ 
+ static struct platform_driver eqbr_pinctrl_driver = {
+ 	.probe	= eqbr_pinctrl_probe,
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c
+index ce2d8014b7e0b..d0259577934e9 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08.c
+@@ -351,6 +351,11 @@ static irqreturn_t mcp23s08_irq(int irq, void *data)
+ 	if (mcp_read(mcp, MCP_INTF, &intf))
+ 		goto unlock;
+ 
++	if (intf == 0) {
++		/* There is no interrupt pending */
++		goto unlock;
++	}
++
+ 	if (mcp_read(mcp, MCP_INTCAP, &intcap))
+ 		goto unlock;
+ 
+@@ -368,11 +373,6 @@ static irqreturn_t mcp23s08_irq(int irq, void *data)
+ 	mcp->cached_gpio = gpio;
+ 	mutex_unlock(&mcp->lock);
+ 
+-	if (intf == 0) {
+-		/* There is no interrupt pending */
+-		return IRQ_HANDLED;
+-	}
+-
+ 	dev_dbg(mcp->chip.parent,
+ 		"intcap 0x%04X intf 0x%04X gpio_orig 0x%04X gpio 0x%04X\n",
+ 		intcap, intf, gpio_orig, gpio);
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 41b8192d207d0..41023fc4bf2dc 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -3089,9 +3089,10 @@ fail_mgmt_tasks(struct iscsi_session *session, struct iscsi_conn *conn)
+ 	}
+ }
+ 
+-static void iscsi_start_session_recovery(struct iscsi_session *session,
+-					 struct iscsi_conn *conn, int flag)
++void iscsi_conn_stop(struct iscsi_cls_conn *cls_conn, int flag)
+ {
++	struct iscsi_conn *conn = cls_conn->dd_data;
++	struct iscsi_session *session = conn->session;
+ 	int old_stop_stage;
+ 
+ 	mutex_lock(&session->eh_mutex);
+@@ -3149,27 +3150,6 @@ static void iscsi_start_session_recovery(struct iscsi_session *session,
+ 	spin_unlock_bh(&session->frwd_lock);
+ 	mutex_unlock(&session->eh_mutex);
+ }
+-
+-void iscsi_conn_stop(struct iscsi_cls_conn *cls_conn, int flag)
+-{
+-	struct iscsi_conn *conn = cls_conn->dd_data;
+-	struct iscsi_session *session = conn->session;
+-
+-	switch (flag) {
+-	case STOP_CONN_RECOVER:
+-		cls_conn->state = ISCSI_CONN_FAILED;
+-		break;
+-	case STOP_CONN_TERM:
+-		cls_conn->state = ISCSI_CONN_DOWN;
+-		break;
+-	default:
+-		iscsi_conn_printk(KERN_ERR, conn,
+-				  "invalid stop flag %d\n", flag);
+-		return;
+-	}
+-
+-	iscsi_start_session_recovery(session, conn, flag);
+-}
+ EXPORT_SYMBOL_GPL(iscsi_conn_stop);
+ 
+ int iscsi_conn_bind(struct iscsi_cls_session *cls_session,
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index c520239082fc6..2735178f15c73 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2479,9 +2479,22 @@ static void iscsi_if_stop_conn(struct iscsi_cls_conn *conn, int flag)
+ 	 * it works.
+ 	 */
+ 	mutex_lock(&conn_mutex);
++	switch (flag) {
++	case STOP_CONN_RECOVER:
++		conn->state = ISCSI_CONN_FAILED;
++		break;
++	case STOP_CONN_TERM:
++		conn->state = ISCSI_CONN_DOWN;
++		break;
++	default:
++		iscsi_cls_conn_printk(KERN_ERR, conn,
++				      "invalid stop flag %d\n", flag);
++		goto unlock;
++	}
++
+ 	conn->transport->stop_conn(conn, flag);
++unlock:
+ 	mutex_unlock(&conn_mutex);
+-
+ }
+ 
+ static void stop_conn_work_fn(struct work_struct *work)
+@@ -2906,6 +2919,13 @@ iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ 	default:
+ 		err = transport->set_param(conn, ev->u.set_param.param,
+ 					   data, ev->u.set_param.len);
++		if ((conn->state == ISCSI_CONN_BOUND) ||
++			(conn->state == ISCSI_CONN_UP)) {
++			err = transport->set_param(conn, ev->u.set_param.param,
++					data, ev->u.set_param.len);
++		} else {
++			return -ENOTCONN;
++		}
+ 	}
+ 
+ 	return err;
+@@ -2965,6 +2985,7 @@ static int iscsi_if_ep_disconnect(struct iscsi_transport *transport,
+ 		mutex_lock(&conn->ep_mutex);
+ 		conn->ep = NULL;
+ 		mutex_unlock(&conn->ep_mutex);
++		conn->state = ISCSI_CONN_FAILED;
+ 	}
+ 
+ 	transport->ep_disconnect(ep);
+@@ -3732,6 +3753,8 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 		ev->r.retcode =	transport->bind_conn(session, conn,
+ 						ev->u.b_conn.transport_eph,
+ 						ev->u.b_conn.is_leading);
++		if (!ev->r.retcode)
++			conn->state = ISCSI_CONN_BOUND;
+ 		mutex_unlock(&conn_mutex);
+ 
+ 		if (ev->r.retcode || !transport->ep_connect)
+@@ -3971,7 +3994,8 @@ iscsi_conn_attr(local_ipaddr, ISCSI_PARAM_LOCAL_IPADDR);
+ static const char *const connection_state_names[] = {
+ 	[ISCSI_CONN_UP] = "up",
+ 	[ISCSI_CONN_DOWN] = "down",
+-	[ISCSI_CONN_FAILED] = "failed"
++	[ISCSI_CONN_FAILED] = "failed",
++	[ISCSI_CONN_BOUND] = "bound"
+ };
+ 
+ static ssize_t show_conn_state(struct device *dev,
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+index 81e8b15ef405d..74158fa660ddf 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+@@ -156,24 +156,27 @@ static ssize_t tcc_offset_degree_celsius_show(struct device *dev,
+ 	if (err)
+ 		return err;
+ 
+-	val = (val >> 24) & 0xff;
++	val = (val >> 24) & 0x3f;
+ 	return sprintf(buf, "%d\n", (int)val);
+ }
+ 
+-static int tcc_offset_update(int tcc)
++static int tcc_offset_update(unsigned int tcc)
+ {
+ 	u64 val;
+ 	int err;
+ 
+-	if (!tcc)
++	if (tcc > 63)
+ 		return -EINVAL;
+ 
+ 	err = rdmsrl_safe(MSR_IA32_TEMPERATURE_TARGET, &val);
+ 	if (err)
+ 		return err;
+ 
+-	val &= ~GENMASK_ULL(31, 24);
+-	val |= (tcc & 0xff) << 24;
++	if (val & BIT(31))
++		return -EPERM;
++
++	val &= ~GENMASK_ULL(29, 24);
++	val |= (tcc & 0x3f) << 24;
+ 
+ 	err = wrmsrl_safe(MSR_IA32_TEMPERATURE_TARGET, val);
+ 	if (err)
+@@ -182,14 +185,15 @@ static int tcc_offset_update(int tcc)
+ 	return 0;
+ }
+ 
+-static int tcc_offset_save;
++static unsigned int tcc_offset_save;
+ 
+ static ssize_t tcc_offset_degree_celsius_store(struct device *dev,
+ 				struct device_attribute *attr, const char *buf,
+ 				size_t count)
+ {
++	unsigned int tcc;
+ 	u64 val;
+-	int tcc, err;
++	int err;
+ 
+ 	err = rdmsrl_safe(MSR_PLATFORM_INFO, &val);
+ 	if (err)
+@@ -198,7 +202,7 @@ static ssize_t tcc_offset_degree_celsius_store(struct device *dev,
+ 	if (!(val & BIT(30)))
+ 		return -EACCES;
+ 
+-	if (kstrtoint(buf, 0, &tcc))
++	if (kstrtouint(buf, 0, &tcc))
+ 		return -EINVAL;
+ 
+ 	err = tcc_offset_update(tcc);
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 7cae226b000f7..115a77b96e5e1 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1480,6 +1480,7 @@ struct ext4_sb_info {
+ 	struct kobject s_kobj;
+ 	struct completion s_kobj_unregister;
+ 	struct super_block *s_sb;
++	struct buffer_head *s_mmp_bh;
+ 
+ 	/* Journaling */
+ 	struct journal_s *s_journal;
+@@ -3624,6 +3625,9 @@ extern struct ext4_io_end_vec *ext4_last_io_end_vec(ext4_io_end_t *io_end);
+ /* mmp.c */
+ extern int ext4_multi_mount_protect(struct super_block *, ext4_fsblk_t);
+ 
++/* mmp.c */
++extern void ext4_stop_mmpd(struct ext4_sb_info *sbi);
++
+ /* verity.c */
+ extern const struct fsverity_operations ext4_verityops;
+ 
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index 68fbeedd627bc..6cb598b549ca1 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -127,9 +127,9 @@ void __dump_mmp_msg(struct super_block *sb, struct mmp_struct *mmp,
+  */
+ static int kmmpd(void *data)
+ {
+-	struct super_block *sb = ((struct mmpd_data *) data)->sb;
+-	struct buffer_head *bh = ((struct mmpd_data *) data)->bh;
++	struct super_block *sb = (struct super_block *) data;
+ 	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
++	struct buffer_head *bh = EXT4_SB(sb)->s_mmp_bh;
+ 	struct mmp_struct *mmp;
+ 	ext4_fsblk_t mmp_block;
+ 	u32 seq = 0;
+@@ -245,12 +245,18 @@ static int kmmpd(void *data)
+ 	retval = write_mmp_block(sb, bh);
+ 
+ exit_thread:
+-	EXT4_SB(sb)->s_mmp_tsk = NULL;
+-	kfree(data);
+-	brelse(bh);
+ 	return retval;
+ }
+ 
++void ext4_stop_mmpd(struct ext4_sb_info *sbi)
++{
++	if (sbi->s_mmp_tsk) {
++		kthread_stop(sbi->s_mmp_tsk);
++		brelse(sbi->s_mmp_bh);
++		sbi->s_mmp_tsk = NULL;
++	}
++}
++
+ /*
+  * Get a random new sequence number but make sure it is not greater than
+  * EXT4_MMP_SEQ_MAX.
+@@ -275,7 +281,6 @@ int ext4_multi_mount_protect(struct super_block *sb,
+ 	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
+ 	struct buffer_head *bh = NULL;
+ 	struct mmp_struct *mmp = NULL;
+-	struct mmpd_data *mmpd_data;
+ 	u32 seq;
+ 	unsigned int mmp_check_interval = le16_to_cpu(es->s_mmp_update_interval);
+ 	unsigned int wait_time = 0;
+@@ -364,24 +369,17 @@ skip:
+ 		goto failed;
+ 	}
+ 
+-	mmpd_data = kmalloc(sizeof(*mmpd_data), GFP_KERNEL);
+-	if (!mmpd_data) {
+-		ext4_warning(sb, "not enough memory for mmpd_data");
+-		goto failed;
+-	}
+-	mmpd_data->sb = sb;
+-	mmpd_data->bh = bh;
++	EXT4_SB(sb)->s_mmp_bh = bh;
+ 
+ 	/*
+ 	 * Start a kernel thread to update the MMP block periodically.
+ 	 */
+-	EXT4_SB(sb)->s_mmp_tsk = kthread_run(kmmpd, mmpd_data, "kmmpd-%.*s",
++	EXT4_SB(sb)->s_mmp_tsk = kthread_run(kmmpd, sb, "kmmpd-%.*s",
+ 					     (int)sizeof(mmp->mmp_bdevname),
+ 					     bdevname(bh->b_bdev,
+ 						      mmp->mmp_bdevname));
+ 	if (IS_ERR(EXT4_SB(sb)->s_mmp_tsk)) {
+ 		EXT4_SB(sb)->s_mmp_tsk = NULL;
+-		kfree(mmpd_data);
+ 		ext4_warning(sb, "Unable to create kmmpd thread for %s.",
+ 			     sb->s_id);
+ 		goto failed;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 4956917b7cc2b..099e4afa41e52 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1260,8 +1260,8 @@ static void ext4_put_super(struct super_block *sb)
+ 	ext4_xattr_destroy_cache(sbi->s_ea_block_cache);
+ 	sbi->s_ea_block_cache = NULL;
+ 
+-	if (sbi->s_mmp_tsk)
+-		kthread_stop(sbi->s_mmp_tsk);
++	ext4_stop_mmpd(sbi);
++
+ 	brelse(sbi->s_sbh);
+ 	sb->s_fs_info = NULL;
+ 	/*
+@@ -5173,8 +5173,7 @@ failed_mount3a:
+ 	ext4_es_unregister_shrinker(sbi);
+ failed_mount3:
+ 	del_timer_sync(&sbi->s_err_report);
+-	if (sbi->s_mmp_tsk)
+-		kthread_stop(sbi->s_mmp_tsk);
++	ext4_stop_mmpd(sbi);
+ failed_mount2:
+ 	rcu_read_lock();
+ 	group_desc = rcu_dereference(sbi->s_group_desc);
+@@ -5927,8 +5926,7 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 				 */
+ 				ext4_mark_recovery_complete(sb, es);
+ 			}
+-			if (sbi->s_mmp_tsk)
+-				kthread_stop(sbi->s_mmp_tsk);
++			ext4_stop_mmpd(sbi);
+ 		} else {
+ 			/* Make sure we can mount this feature set readwrite */
+ 			if (ext4_has_feature_readonly(sb) ||
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 69a390c6064c6..2d7799bd30b10 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3462,6 +3462,8 @@ void f2fs_destroy_garbage_collection_cache(void);
+  */
+ int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only);
+ bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi);
++int __init f2fs_create_recovery_cache(void);
++void f2fs_destroy_recovery_cache(void);
+ 
+ /*
+  * debug.c
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index 4f12ade6410a1..72ce131116791 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -777,13 +777,6 @@ int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
+ 	quota_enabled = f2fs_enable_quota_files(sbi, s_flags & SB_RDONLY);
+ #endif
+ 
+-	fsync_entry_slab = f2fs_kmem_cache_create("f2fs_fsync_inode_entry",
+-			sizeof(struct fsync_inode_entry));
+-	if (!fsync_entry_slab) {
+-		err = -ENOMEM;
+-		goto out;
+-	}
+-
+ 	INIT_LIST_HEAD(&inode_list);
+ 	INIT_LIST_HEAD(&tmp_inode_list);
+ 	INIT_LIST_HEAD(&dir_list);
+@@ -856,8 +849,6 @@ skip:
+ 		}
+ 	}
+ 
+-	kmem_cache_destroy(fsync_entry_slab);
+-out:
+ #ifdef CONFIG_QUOTA
+ 	/* Turn quotas off */
+ 	if (quota_enabled)
+@@ -867,3 +858,17 @@ out:
+ 
+ 	return ret ? ret: err;
+ }
++
++int __init f2fs_create_recovery_cache(void)
++{
++	fsync_entry_slab = f2fs_kmem_cache_create("f2fs_fsync_inode_entry",
++					sizeof(struct fsync_inode_entry));
++	if (!fsync_entry_slab)
++		return -ENOMEM;
++	return 0;
++}
++
++void f2fs_destroy_recovery_cache(void)
++{
++	kmem_cache_destroy(fsync_entry_slab);
++}
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index abc469dd9aea8..4af02719bb14a 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -4027,9 +4027,12 @@ static int __init init_f2fs_fs(void)
+ 	err = f2fs_create_checkpoint_caches();
+ 	if (err)
+ 		goto free_segment_manager_caches;
+-	err = f2fs_create_extent_cache();
++	err = f2fs_create_recovery_cache();
+ 	if (err)
+ 		goto free_checkpoint_caches;
++	err = f2fs_create_extent_cache();
++	if (err)
++		goto free_recovery_cache;
+ 	err = f2fs_create_garbage_collection_cache();
+ 	if (err)
+ 		goto free_extent_cache;
+@@ -4078,6 +4081,8 @@ free_garbage_collection_cache:
+ 	f2fs_destroy_garbage_collection_cache();
+ free_extent_cache:
+ 	f2fs_destroy_extent_cache();
++free_recovery_cache:
++	f2fs_destroy_recovery_cache();
+ free_checkpoint_caches:
+ 	f2fs_destroy_checkpoint_caches();
+ free_segment_manager_caches:
+@@ -4103,6 +4108,7 @@ static void __exit exit_f2fs_fs(void)
+ 	f2fs_exit_sysfs();
+ 	f2fs_destroy_garbage_collection_cache();
+ 	f2fs_destroy_extent_cache();
++	f2fs_destroy_recovery_cache();
+ 	f2fs_destroy_checkpoint_caches();
+ 	f2fs_destroy_segment_manager_caches();
+ 	f2fs_destroy_node_manager_caches();
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index f72d53848dcbc..8bb17b6d4de3c 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -299,7 +299,8 @@ static void io_wqe_wake_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
+ 	 * Most likely an attempt to queue unbounded work on an io_wq that
+ 	 * wasn't setup with any unbounded workers.
+ 	 */
+-	WARN_ON_ONCE(!acct->max_workers);
++	if (unlikely(!acct->max_workers))
++		pr_warn_once("io-wq is not configured for unbound workers");
+ 
+ 	rcu_read_lock();
+ 	ret = io_wqe_activate_free_worker(wqe);
+@@ -1085,6 +1086,8 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
+ 
+ 	if (WARN_ON_ONCE(!data->free_work || !data->do_work))
+ 		return ERR_PTR(-EINVAL);
++	if (WARN_ON_ONCE(!bounded))
++		return ERR_PTR(-EINVAL);
+ 
+ 	wq = kzalloc(sizeof(*wq), GFP_KERNEL);
+ 	if (!wq)
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 0138aa7133172..42153106b7bc9 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -344,9 +344,10 @@ struct io_ring_ctx {
+ 	struct socket		*ring_sock;
+ #endif
+ 
+-	struct idr		io_buffer_idr;
++	struct xarray		io_buffers;
+ 
+-	struct idr		personality_idr;
++	struct xarray		personalities;
++	u32			pers_next;
+ 
+ 	struct {
+ 		unsigned		cached_cq_tail;
+@@ -1211,8 +1212,8 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+ 	INIT_LIST_HEAD(&ctx->cq_overflow_list);
+ 	init_completion(&ctx->ref_comp);
+ 	init_completion(&ctx->sq_thread_comp);
+-	idr_init(&ctx->io_buffer_idr);
+-	idr_init(&ctx->personality_idr);
++	xa_init_flags(&ctx->io_buffers, XA_FLAGS_ALLOC1);
++	xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
+ 	mutex_init(&ctx->uring_lock);
+ 	init_waitqueue_head(&ctx->wait);
+ 	spin_lock_init(&ctx->completion_lock);
+@@ -2086,7 +2087,6 @@ static void __io_req_task_submit(struct io_kiocb *req)
+ 		__io_req_task_cancel(req, -EFAULT);
+ 	mutex_unlock(&ctx->uring_lock);
+ 
+-	ctx->flags &= ~IORING_SETUP_R_DISABLED;
+ 	if (ctx->flags & IORING_SETUP_SQPOLL)
+ 		io_sq_thread_drop_mm();
+ }
+@@ -2989,7 +2989,7 @@ static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
+ 
+ 	lockdep_assert_held(&req->ctx->uring_lock);
+ 
+-	head = idr_find(&req->ctx->io_buffer_idr, bgid);
++	head = xa_load(&req->ctx->io_buffers, bgid);
+ 	if (head) {
+ 		if (!list_empty(&head->list)) {
+ 			kbuf = list_last_entry(&head->list, struct io_buffer,
+@@ -2997,7 +2997,7 @@ static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
+ 			list_del(&kbuf->list);
+ 		} else {
+ 			kbuf = head;
+-			idr_remove(&req->ctx->io_buffer_idr, bgid);
++			xa_erase(&req->ctx->io_buffers, bgid);
+ 		}
+ 		if (*len > kbuf->len)
+ 			*len = kbuf->len;
+@@ -3959,7 +3959,7 @@ static int __io_remove_buffers(struct io_ring_ctx *ctx, struct io_buffer *buf,
+ 	}
+ 	i++;
+ 	kfree(buf);
+-	idr_remove(&ctx->io_buffer_idr, bgid);
++	xa_erase(&ctx->io_buffers, bgid);
+ 
+ 	return i;
+ }
+@@ -3977,7 +3977,7 @@ static int io_remove_buffers(struct io_kiocb *req, bool force_nonblock,
+ 	lockdep_assert_held(&ctx->uring_lock);
+ 
+ 	ret = -ENOENT;
+-	head = idr_find(&ctx->io_buffer_idr, p->bgid);
++	head = xa_load(&ctx->io_buffers, p->bgid);
+ 	if (head)
+ 		ret = __io_remove_buffers(ctx, head, p->bgid, p->nbufs);
+ 	if (ret < 0)
+@@ -4068,21 +4068,14 @@ static int io_provide_buffers(struct io_kiocb *req, bool force_nonblock,
+ 
+ 	lockdep_assert_held(&ctx->uring_lock);
+ 
+-	list = head = idr_find(&ctx->io_buffer_idr, p->bgid);
++	list = head = xa_load(&ctx->io_buffers, p->bgid);
+ 
+ 	ret = io_add_buffers(p, &head);
+-	if (ret < 0)
+-		goto out;
+-
+-	if (!list) {
+-		ret = idr_alloc(&ctx->io_buffer_idr, head, p->bgid, p->bgid + 1,
+-					GFP_KERNEL);
+-		if (ret < 0) {
++	if (ret >= 0 && !list) {
++		ret = xa_insert(&ctx->io_buffers, p->bgid, head, GFP_KERNEL);
++		if (ret < 0)
+ 			__io_remove_buffers(ctx, head, p->bgid, -1U);
+-			goto out;
+-		}
+ 	}
+-out:
+ 	if (ret < 0)
+ 		req_set_fail_links(req);
+ 
+@@ -6629,7 +6622,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ 	if (id) {
+ 		struct io_identity *iod;
+ 
+-		iod = idr_find(&ctx->personality_idr, id);
++		iod = xa_load(&ctx->personalities, id);
+ 		if (unlikely(!iod))
+ 			return -EINVAL;
+ 		refcount_inc(&iod->count);
+@@ -7998,6 +7991,7 @@ static void io_sq_offload_start(struct io_ring_ctx *ctx)
+ {
+ 	struct io_sq_data *sqd = ctx->sq_data;
+ 
++	ctx->flags &= ~IORING_SETUP_R_DISABLED;
+ 	if ((ctx->flags & IORING_SETUP_SQPOLL) && sqd->thread)
+ 		wake_up_process(sqd->thread);
+ }
+@@ -8410,19 +8404,13 @@ static int io_eventfd_unregister(struct io_ring_ctx *ctx)
+ 	return -ENXIO;
+ }
+ 
+-static int __io_destroy_buffers(int id, void *p, void *data)
+-{
+-	struct io_ring_ctx *ctx = data;
+-	struct io_buffer *buf = p;
+-
+-	__io_remove_buffers(ctx, buf, id, -1U);
+-	return 0;
+-}
+-
+ static void io_destroy_buffers(struct io_ring_ctx *ctx)
+ {
+-	idr_for_each(&ctx->io_buffer_idr, __io_destroy_buffers, ctx);
+-	idr_destroy(&ctx->io_buffer_idr);
++	struct io_buffer *buf;
++	unsigned long index;
++
++	xa_for_each(&ctx->io_buffers, index, buf)
++		__io_remove_buffers(ctx, buf, index, -1U);
+ }
+ 
+ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+@@ -8445,7 +8433,6 @@ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ 	io_sqe_files_unregister(ctx);
+ 	io_eventfd_unregister(ctx);
+ 	io_destroy_buffers(ctx);
+-	idr_destroy(&ctx->personality_idr);
+ 
+ #if defined(CONFIG_UNIX)
+ 	if (ctx->ring_sock) {
+@@ -8505,18 +8492,19 @@ static int io_uring_fasync(int fd, struct file *file, int on)
+ 	return fasync_helper(fd, file, on, &ctx->cq_fasync);
+ }
+ 
+-static int io_remove_personalities(int id, void *p, void *data)
++static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
+ {
+-	struct io_ring_ctx *ctx = data;
+ 	struct io_identity *iod;
+ 
+-	iod = idr_remove(&ctx->personality_idr, id);
++	iod = xa_erase(&ctx->personalities, id);
+ 	if (iod) {
+ 		put_cred(iod->creds);
+ 		if (refcount_dec_and_test(&iod->count))
+ 			kfree(iod);
++		return 0;
+ 	}
+-	return 0;
++
++	return -EINVAL;
+ }
+ 
+ static void io_ring_exit_work(struct work_struct *work)
+@@ -8545,6 +8533,9 @@ static bool io_cancel_ctx_cb(struct io_wq_work *work, void *data)
+ 
+ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ {
++	unsigned long index;
++	struct io_identify *iod;
++
+ 	mutex_lock(&ctx->uring_lock);
+ 	percpu_ref_kill(&ctx->refs);
+ 	/* if force is set, the ring is going away. always drop after that */
+@@ -8565,7 +8556,8 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ 
+ 	/* if we failed setting up the ctx, we might not have any rings */
+ 	io_iopoll_try_reap_events(ctx);
+-	idr_for_each(&ctx->personality_idr, io_remove_personalities, ctx);
++	xa_for_each(&ctx->personalities, index, iod)
++		 io_unregister_personality(ctx, index);
+ 
+ 	/*
+ 	 * Do this upfront, so we won't have a grace period where the ring
+@@ -9128,11 +9120,10 @@ out_fput:
+ }
+ 
+ #ifdef CONFIG_PROC_FS
+-static int io_uring_show_cred(int id, void *p, void *data)
++static int io_uring_show_cred(struct seq_file *m, unsigned int id,
++		const struct io_identity *iod)
+ {
+-	struct io_identity *iod = p;
+ 	const struct cred *cred = iod->creds;
+-	struct seq_file *m = data;
+ 	struct user_namespace *uns = seq_user_ns(m);
+ 	struct group_info *gi;
+ 	kernel_cap_t cap;
+@@ -9200,9 +9191,13 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
+ 		seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->ubuf,
+ 						(unsigned int) buf->len);
+ 	}
+-	if (has_lock && !idr_is_empty(&ctx->personality_idr)) {
++	if (has_lock && !xa_empty(&ctx->personalities)) {
++		unsigned long index;
++		const struct io_identity *iod;
++
+ 		seq_printf(m, "Personalities:\n");
+-		idr_for_each(&ctx->personality_idr, io_uring_show_cred, m);
++		xa_for_each(&ctx->personalities, index, iod)
++			io_uring_show_cred(m, index, iod);
+ 	}
+ 	seq_printf(m, "PollList:\n");
+ 	spin_lock_irq(&ctx->completion_lock);
+@@ -9588,39 +9583,26 @@ out:
+ 
+ static int io_register_personality(struct io_ring_ctx *ctx)
+ {
+-	struct io_identity *id;
++	struct io_identity *iod;
++	u32 id;
+ 	int ret;
+ 
+-	id = kmalloc(sizeof(*id), GFP_KERNEL);
+-	if (unlikely(!id))
++	iod = kmalloc(sizeof(*iod), GFP_KERNEL);
++	if (unlikely(!iod))
+ 		return -ENOMEM;
+ 
+-	io_init_identity(id);
+-	id->creds = get_current_cred();
++	io_init_identity(iod);
++	iod->creds = get_current_cred();
+ 
+-	ret = idr_alloc_cyclic(&ctx->personality_idr, id, 1, USHRT_MAX, GFP_KERNEL);
+-	if (ret < 0) {
+-		put_cred(id->creds);
+-		kfree(id);
+-	}
++	ret = xa_alloc_cyclic(&ctx->personalities, &id, (void *)iod,
++			XA_LIMIT(0, USHRT_MAX), &ctx->pers_next, GFP_KERNEL);
++	if (!ret)
++		return id;
++	put_cred(iod->creds);
++	kfree(iod);
+ 	return ret;
+ }
+ 
+-static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
+-{
+-	struct io_identity *iod;
+-
+-	iod = idr_remove(&ctx->personality_idr, id);
+-	if (iod) {
+-		put_cred(iod->creds);
+-		if (refcount_dec_and_test(&iod->count))
+-			kfree(iod);
+-		return 0;
+-	}
+-
+-	return -EINVAL;
+-}
+-
+ static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
+ 				    unsigned int nr_args)
+ {
+diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c
+index 6f65bfa9f18d5..b0eb9c85eea0c 100644
+--- a/fs/jfs/inode.c
++++ b/fs/jfs/inode.c
+@@ -151,7 +151,8 @@ void jfs_evict_inode(struct inode *inode)
+ 			if (test_cflag(COMMIT_Freewmap, inode))
+ 				jfs_free_zero_link(inode);
+ 
+-			diFree(inode);
++			if (JFS_SBI(inode->i_sb)->ipimap)
++				diFree(inode);
+ 
+ 			/*
+ 			 * Free the inode from the quota allocation.
+diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
+index e98f99338f8f8..df5fc12a6ceed 100644
+--- a/fs/reiserfs/journal.c
++++ b/fs/reiserfs/journal.c
+@@ -2760,6 +2760,20 @@ int journal_init(struct super_block *sb, const char *j_dev_name,
+ 		goto free_and_return;
+ 	}
+ 
++	/*
++	 * Sanity check to see if journal first block is correct.
++	 * If journal first block is invalid it can cause
++	 * zeroing important superblock members.
++	 */
++	if (!SB_ONDISK_JOURNAL_DEVICE(sb) &&
++	    SB_ONDISK_JOURNAL_1st_BLOCK(sb) < SB_JOURNAL_1st_RESERVED_BLOCK(sb)) {
++		reiserfs_warning(sb, "journal-1393",
++				 "journal 1st super block is invalid: 1st reserved block %d, but actual 1st block is %d",
++				 SB_JOURNAL_1st_RESERVED_BLOCK(sb),
++				 SB_ONDISK_JOURNAL_1st_BLOCK(sb));
++		goto free_and_return;
++	}
++
+ 	if (journal_init_dev(sb, journal, j_dev_name) != 0) {
+ 		reiserfs_warning(sb, "sh-462",
+ 				 "unable to initialize journal device");
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index dacbb999ae34d..cfd46753a6856 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -275,6 +275,7 @@ static struct inode *ubifs_alloc_inode(struct super_block *sb)
+ 	memset((void *)ui + sizeof(struct inode), 0,
+ 	       sizeof(struct ubifs_inode) - sizeof(struct inode));
+ 	mutex_init(&ui->ui_mutex);
++	init_rwsem(&ui->xattr_sem);
+ 	spin_lock_init(&ui->ui_lock);
+ 	return &ui->vfs_inode;
+ };
+diff --git a/fs/ubifs/ubifs.h b/fs/ubifs/ubifs.h
+index 4ffd832e3b937..e7e48f3b179ab 100644
+--- a/fs/ubifs/ubifs.h
++++ b/fs/ubifs/ubifs.h
+@@ -356,6 +356,7 @@ struct ubifs_gced_idx_leb {
+  * @ui_mutex: serializes inode write-back with the rest of VFS operations,
+  *            serializes "clean <-> dirty" state changes, serializes bulk-read,
+  *            protects @dirty, @bulk_read, @ui_size, and @xattr_size
++ * @xattr_sem: serilizes write operations (remove|set|create) on xattr
+  * @ui_lock: protects @synced_i_size
+  * @synced_i_size: synchronized size of inode, i.e. the value of inode size
+  *                 currently stored on the flash; used only for regular file
+@@ -409,6 +410,7 @@ struct ubifs_inode {
+ 	unsigned int bulk_read:1;
+ 	unsigned int compr_type:2;
+ 	struct mutex ui_mutex;
++	struct rw_semaphore xattr_sem;
+ 	spinlock_t ui_lock;
+ 	loff_t synced_i_size;
+ 	loff_t ui_size;
+diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
+index a0b9b349efe65..09280796fc610 100644
+--- a/fs/ubifs/xattr.c
++++ b/fs/ubifs/xattr.c
+@@ -285,6 +285,7 @@ int ubifs_xattr_set(struct inode *host, const char *name, const void *value,
+ 	if (!xent)
+ 		return -ENOMEM;
+ 
++	down_write(&ubifs_inode(host)->xattr_sem);
+ 	/*
+ 	 * The extended attribute entries are stored in LNC, so multiple
+ 	 * look-ups do not involve reading the flash.
+@@ -319,6 +320,7 @@ int ubifs_xattr_set(struct inode *host, const char *name, const void *value,
+ 	iput(inode);
+ 
+ out_free:
++	up_write(&ubifs_inode(host)->xattr_sem);
+ 	kfree(xent);
+ 	return err;
+ }
+@@ -341,18 +343,19 @@ ssize_t ubifs_xattr_get(struct inode *host, const char *name, void *buf,
+ 	if (!xent)
+ 		return -ENOMEM;
+ 
++	down_read(&ubifs_inode(host)->xattr_sem);
+ 	xent_key_init(c, &key, host->i_ino, &nm);
+ 	err = ubifs_tnc_lookup_nm(c, &key, xent, &nm);
+ 	if (err) {
+ 		if (err == -ENOENT)
+ 			err = -ENODATA;
+-		goto out_unlock;
++		goto out_cleanup;
+ 	}
+ 
+ 	inode = iget_xattr(c, le64_to_cpu(xent->inum));
+ 	if (IS_ERR(inode)) {
+ 		err = PTR_ERR(inode);
+-		goto out_unlock;
++		goto out_cleanup;
+ 	}
+ 
+ 	ui = ubifs_inode(inode);
+@@ -374,7 +377,8 @@ ssize_t ubifs_xattr_get(struct inode *host, const char *name, void *buf,
+ out_iput:
+ 	mutex_unlock(&ui->ui_mutex);
+ 	iput(inode);
+-out_unlock:
++out_cleanup:
++	up_read(&ubifs_inode(host)->xattr_sem);
+ 	kfree(xent);
+ 	return err;
+ }
+@@ -406,16 +410,21 @@ ssize_t ubifs_listxattr(struct dentry *dentry, char *buffer, size_t size)
+ 	dbg_gen("ino %lu ('%pd'), buffer size %zd", host->i_ino,
+ 		dentry, size);
+ 
++	down_read(&host_ui->xattr_sem);
+ 	len = host_ui->xattr_names + host_ui->xattr_cnt;
+-	if (!buffer)
++	if (!buffer) {
+ 		/*
+ 		 * We should return the minimum buffer size which will fit a
+ 		 * null-terminated list of all the extended attribute names.
+ 		 */
+-		return len;
++		err = len;
++		goto out_err;
++	}
+ 
+-	if (len > size)
+-		return -ERANGE;
++	if (len > size) {
++		err = -ERANGE;
++		goto out_err;
++	}
+ 
+ 	lowest_xent_key(c, &key, host->i_ino);
+ 	while (1) {
+@@ -437,8 +446,9 @@ ssize_t ubifs_listxattr(struct dentry *dentry, char *buffer, size_t size)
+ 		pxent = xent;
+ 		key_read(c, &xent->key, &key);
+ 	}
+-
+ 	kfree(pxent);
++	up_read(&host_ui->xattr_sem);
++
+ 	if (err != -ENOENT) {
+ 		ubifs_err(c, "cannot find next direntry, error %d", err);
+ 		return err;
+@@ -446,6 +456,10 @@ ssize_t ubifs_listxattr(struct dentry *dentry, char *buffer, size_t size)
+ 
+ 	ubifs_assert(c, written <= size);
+ 	return written;
++
++out_err:
++	up_read(&host_ui->xattr_sem);
++	return err;
+ }
+ 
+ static int remove_xattr(struct ubifs_info *c, struct inode *host,
+@@ -504,6 +518,7 @@ int ubifs_purge_xattrs(struct inode *host)
+ 	ubifs_warn(c, "inode %lu has too many xattrs, doing a non-atomic deletion",
+ 		   host->i_ino);
+ 
++	down_write(&ubifs_inode(host)->xattr_sem);
+ 	lowest_xent_key(c, &key, host->i_ino);
+ 	while (1) {
+ 		xent = ubifs_tnc_next_ent(c, &key, &nm);
+@@ -523,7 +538,7 @@ int ubifs_purge_xattrs(struct inode *host)
+ 			ubifs_ro_mode(c, err);
+ 			kfree(pxent);
+ 			kfree(xent);
+-			return err;
++			goto out_err;
+ 		}
+ 
+ 		ubifs_assert(c, ubifs_inode(xino)->xattr);
+@@ -535,7 +550,7 @@ int ubifs_purge_xattrs(struct inode *host)
+ 			kfree(xent);
+ 			iput(xino);
+ 			ubifs_err(c, "cannot remove xattr, error %d", err);
+-			return err;
++			goto out_err;
+ 		}
+ 
+ 		iput(xino);
+@@ -544,14 +559,19 @@ int ubifs_purge_xattrs(struct inode *host)
+ 		pxent = xent;
+ 		key_read(c, &xent->key, &key);
+ 	}
+-
+ 	kfree(pxent);
++	up_write(&ubifs_inode(host)->xattr_sem);
++
+ 	if (err != -ENOENT) {
+ 		ubifs_err(c, "cannot find next direntry, error %d", err);
+ 		return err;
+ 	}
+ 
+ 	return 0;
++
++out_err:
++	up_write(&ubifs_inode(host)->xattr_sem);
++	return err;
+ }
+ 
+ /**
+@@ -594,6 +614,7 @@ static int ubifs_xattr_remove(struct inode *host, const char *name)
+ 	if (!xent)
+ 		return -ENOMEM;
+ 
++	down_write(&ubifs_inode(host)->xattr_sem);
+ 	xent_key_init(c, &key, host->i_ino, &nm);
+ 	err = ubifs_tnc_lookup_nm(c, &key, xent, &nm);
+ 	if (err) {
+@@ -618,6 +639,7 @@ static int ubifs_xattr_remove(struct inode *host, const char *name)
+ 	iput(inode);
+ 
+ out_free:
++	up_write(&ubifs_inode(host)->xattr_sem);
+ 	kfree(xent);
+ 	return err;
+ }
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index e169d8fe35b54..f4a72ff8cf959 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -932,6 +932,10 @@ static int udf_symlink(struct inode *dir, struct dentry *dentry,
+ 				iinfo->i_location.partitionReferenceNum,
+ 				0);
+ 		epos.bh = udf_tgetblk(sb, block);
++		if (unlikely(!epos.bh)) {
++			err = -ENOMEM;
++			goto out_no_entry;
++		}
+ 		lock_buffer(epos.bh);
+ 		memset(epos.bh->b_data, 0x00, bsize);
+ 		set_buffer_uptodate(epos.bh);
+diff --git a/include/linux/mfd/abx500/ux500_chargalg.h b/include/linux/mfd/abx500/ux500_chargalg.h
+index 9b97d284d0ce8..bc3819dc33e12 100644
+--- a/include/linux/mfd/abx500/ux500_chargalg.h
++++ b/include/linux/mfd/abx500/ux500_chargalg.h
+@@ -15,7 +15,7 @@
+  * - POWER_SUPPLY_TYPE_USB,
+  * because only them store as drv_data pointer to struct ux500_charger.
+  */
+-#define psy_to_ux500_charger(x) power_supply_get_drvdata(psy)
++#define psy_to_ux500_charger(x) power_supply_get_drvdata(x)
+ 
+ /* Forward declaration */
+ struct ux500_charger;
+diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h
+index 0b17c4322b097..f96b7f8d82e52 100644
+--- a/include/linux/netdev_features.h
++++ b/include/linux/netdev_features.h
+@@ -87,7 +87,7 @@ enum {
+ 
+ 	/*
+ 	 * Add your fresh new feature above and remember to update
+-	 * netdev_features_strings[] in net/core/ethtool.c and maybe
++	 * netdev_features_strings[] in net/ethtool/common.c and maybe
+ 	 * some feature mask #defines below. Please also describe it
+ 	 * in Documentation/networking/netdev-features.rst.
+ 	 */
+diff --git a/include/linux/of_mdio.h b/include/linux/of_mdio.h
+index cfe8c607a628d..f56c6a9230ac8 100644
+--- a/include/linux/of_mdio.h
++++ b/include/linux/of_mdio.h
+@@ -75,6 +75,13 @@ static inline int of_mdiobus_register(struct mii_bus *mdio, struct device_node *
+ 	return mdiobus_register(mdio);
+ }
+ 
++static inline int devm_of_mdiobus_register(struct device *dev,
++					   struct mii_bus *mdio,
++					   struct device_node *np)
++{
++	return devm_mdiobus_register(dev, mdio);
++}
++
+ static inline struct mdio_device *of_mdio_find_device(struct device_node *np)
+ {
+ 	return NULL;
+diff --git a/include/linux/wait.h b/include/linux/wait.h
+index 27fb99cfeb026..f8b0704968a1e 100644
+--- a/include/linux/wait.h
++++ b/include/linux/wait.h
+@@ -1126,7 +1126,7 @@ do {										\
+  * Waitqueues which are removed from the waitqueue_head at wakeup time
+  */
+ void prepare_to_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
+-void prepare_to_wait_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
++bool prepare_to_wait_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
+ long prepare_to_wait_event(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
+ void finish_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
+ long wait_woken(struct wait_queue_entry *wq_entry, unsigned mode, long timeout);
+diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
+index 1de960bfcab9c..73150520c02d4 100644
+--- a/include/media/v4l2-subdev.h
++++ b/include/media/v4l2-subdev.h
+@@ -162,6 +162,9 @@ struct v4l2_subdev_io_pin_config {
+  * @s_gpio: set GPIO pins. Very simple right now, might need to be extended with
+  *	a direction argument if needed.
+  *
++ * @command: called by in-kernel drivers in order to call functions internal
++ *	   to subdev drivers driver that have a separate callback.
++ *
+  * @ioctl: called at the end of ioctl() syscall handler at the V4L2 core.
+  *	   used to provide support for private ioctls used on the driver.
+  *
+@@ -193,6 +196,7 @@ struct v4l2_subdev_core_ops {
+ 	int (*load_fw)(struct v4l2_subdev *sd);
+ 	int (*reset)(struct v4l2_subdev *sd, u32 val);
+ 	int (*s_gpio)(struct v4l2_subdev *sd, u32 val);
++	long (*command)(struct v4l2_subdev *sd, unsigned int cmd, void *arg);
+ 	long (*ioctl)(struct v4l2_subdev *sd, unsigned int cmd, void *arg);
+ #ifdef CONFIG_COMPAT
+ 	long (*compat_ioctl32)(struct v4l2_subdev *sd, unsigned int cmd,
+diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
+index 123b1e9ea304a..161b909790389 100644
+--- a/include/net/flow_offload.h
++++ b/include/net/flow_offload.h
+@@ -312,12 +312,14 @@ flow_action_mixed_hw_stats_check(const struct flow_action *action,
+ 	if (flow_offload_has_one_action(action))
+ 		return true;
+ 
+-	flow_action_for_each(i, action_entry, action) {
+-		if (i && action_entry->hw_stats != last_hw_stats) {
+-			NL_SET_ERR_MSG_MOD(extack, "Mixing HW stats types for actions is not supported");
+-			return false;
++	if (action) {
++		flow_action_for_each(i, action_entry, action) {
++			if (i && action_entry->hw_stats != last_hw_stats) {
++				NL_SET_ERR_MSG_MOD(extack, "Mixing HW stats types for actions is not supported");
++				return false;
++			}
++			last_hw_stats = action_entry->hw_stats;
+ 		}
+-		last_hw_stats = action_entry->hw_stats;
+ 	}
+ 	return true;
+ }
+diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
+index 0bdff38eb4bb7..51d698f2656fc 100644
+--- a/include/net/sctp/structs.h
++++ b/include/net/sctp/structs.h
+@@ -458,7 +458,7 @@ struct sctp_af {
+ 					 int saddr);
+ 	void		(*from_sk)	(union sctp_addr *,
+ 					 struct sock *sk);
+-	void		(*from_addr_param) (union sctp_addr *,
++	bool		(*from_addr_param) (union sctp_addr *,
+ 					    union sctp_addr_param *,
+ 					    __be16 port, int iif);
+ 	int		(*to_addr_param) (const union sctp_addr *,
+diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
+index 8a26a2ffa9523..fc5a39839b4b0 100644
+--- a/include/scsi/scsi_transport_iscsi.h
++++ b/include/scsi/scsi_transport_iscsi.h
+@@ -193,6 +193,7 @@ enum iscsi_connection_state {
+ 	ISCSI_CONN_UP = 0,
+ 	ISCSI_CONN_DOWN,
+ 	ISCSI_CONN_FAILED,
++	ISCSI_CONN_BOUND,
+ };
+ 
+ struct iscsi_cls_conn {
+diff --git a/include/uapi/linux/ethtool.h b/include/uapi/linux/ethtool.h
+index cde753bb20935..13772f039c8dc 100644
+--- a/include/uapi/linux/ethtool.h
++++ b/include/uapi/linux/ethtool.h
+@@ -223,7 +223,7 @@ enum tunable_id {
+ 	ETHTOOL_PFC_PREVENTION_TOUT, /* timeout in msecs */
+ 	/*
+ 	 * Add your fresh new tunable attribute above and remember to update
+-	 * tunable_strings[] in net/core/ethtool.c
++	 * tunable_strings[] in net/ethtool/common.c
+ 	 */
+ 	__ETHTOOL_TUNABLE_COUNT,
+ };
+@@ -287,7 +287,7 @@ enum phy_tunable_id {
+ 	ETHTOOL_PHY_EDPD,
+ 	/*
+ 	 * Add your fresh new phy tunable attribute above and remember to update
+-	 * phy_tunable_strings[] in net/core/ethtool.c
++	 * phy_tunable_strings[] in net/ethtool/common.c
+ 	 */
+ 	__ETHTOOL_PHY_TUNABLE_COUNT,
+ };
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 182e162f8fd0b..239c6b3b59934 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -1395,29 +1395,54 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
+ select_insn:
+ 	goto *jumptable[insn->code];
+ 
+-	/* ALU */
+-#define ALU(OPCODE, OP)			\
+-	ALU64_##OPCODE##_X:		\
+-		DST = DST OP SRC;	\
+-		CONT;			\
+-	ALU_##OPCODE##_X:		\
+-		DST = (u32) DST OP (u32) SRC;	\
+-		CONT;			\
+-	ALU64_##OPCODE##_K:		\
+-		DST = DST OP IMM;		\
+-		CONT;			\
+-	ALU_##OPCODE##_K:		\
+-		DST = (u32) DST OP (u32) IMM;	\
++	/* Explicitly mask the register-based shift amounts with 63 or 31
++	 * to avoid undefined behavior. Normally this won't affect the
++	 * generated code, for example, in case of native 64 bit archs such
++	 * as x86-64 or arm64, the compiler is optimizing the AND away for
++	 * the interpreter. In case of JITs, each of the JIT backends compiles
++	 * the BPF shift operations to machine instructions which produce
++	 * implementation-defined results in such a case; the resulting
++	 * contents of the register may be arbitrary, but program behaviour
++	 * as a whole remains defined. In other words, in case of JIT backends,
++	 * the AND must /not/ be added to the emitted LSH/RSH/ARSH translation.
++	 */
++	/* ALU (shifts) */
++#define SHT(OPCODE, OP)					\
++	ALU64_##OPCODE##_X:				\
++		DST = DST OP (SRC & 63);		\
++		CONT;					\
++	ALU_##OPCODE##_X:				\
++		DST = (u32) DST OP ((u32) SRC & 31);	\
++		CONT;					\
++	ALU64_##OPCODE##_K:				\
++		DST = DST OP IMM;			\
++		CONT;					\
++	ALU_##OPCODE##_K:				\
++		DST = (u32) DST OP (u32) IMM;		\
++		CONT;
++	/* ALU (rest) */
++#define ALU(OPCODE, OP)					\
++	ALU64_##OPCODE##_X:				\
++		DST = DST OP SRC;			\
++		CONT;					\
++	ALU_##OPCODE##_X:				\
++		DST = (u32) DST OP (u32) SRC;		\
++		CONT;					\
++	ALU64_##OPCODE##_K:				\
++		DST = DST OP IMM;			\
++		CONT;					\
++	ALU_##OPCODE##_K:				\
++		DST = (u32) DST OP (u32) IMM;		\
+ 		CONT;
+-
+ 	ALU(ADD,  +)
+ 	ALU(SUB,  -)
+ 	ALU(AND,  &)
+ 	ALU(OR,   |)
+-	ALU(LSH, <<)
+-	ALU(RSH, >>)
+ 	ALU(XOR,  ^)
+ 	ALU(MUL,  *)
++	SHT(LSH, <<)
++	SHT(RSH, >>)
++#undef SHT
+ #undef ALU
+ 	ALU_NEG:
+ 		DST = (u32) -DST;
+@@ -1442,13 +1467,13 @@ select_insn:
+ 		insn++;
+ 		CONT;
+ 	ALU_ARSH_X:
+-		DST = (u64) (u32) (((s32) DST) >> SRC);
++		DST = (u64) (u32) (((s32) DST) >> (SRC & 31));
+ 		CONT;
+ 	ALU_ARSH_K:
+ 		DST = (u64) (u32) (((s32) DST) >> IMM);
+ 		CONT;
+ 	ALU64_ARSH_X:
+-		(*(s64 *) &DST) >>= SRC;
++		(*(s64 *) &DST) >>= (SRC & 63);
+ 		CONT;
+ 	ALU64_ARSH_K:
+ 		(*(s64 *) &DST) >>= IMM;
+diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
+index add0b34f2b340..f9913bc65ef8d 100644
+--- a/kernel/bpf/ringbuf.c
++++ b/kernel/bpf/ringbuf.c
+@@ -8,6 +8,7 @@
+ #include <linux/vmalloc.h>
+ #include <linux/wait.h>
+ #include <linux/poll.h>
++#include <linux/kmemleak.h>
+ #include <uapi/linux/btf.h>
+ 
+ #define RINGBUF_CREATE_FLAG_MASK (BPF_F_NUMA_NODE)
+@@ -109,6 +110,7 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
+ 	rb = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
+ 		  VM_ALLOC | VM_USERMAP, PAGE_KERNEL);
+ 	if (rb) {
++		kmemleak_not_leak(pages);
+ 		rb->pages = pages;
+ 		rb->nr_pages = nr_pages;
+ 		return rb;
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 2b8d7a5db3837..67c22941b5f27 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -32,6 +32,7 @@
+ #include <linux/relay.h>
+ #include <linux/slab.h>
+ #include <linux/percpu-rwsem.h>
++#include <linux/cpuset.h>
+ 
+ #include <trace/events/power.h>
+ #define CREATE_TRACE_POINTS
+@@ -814,6 +815,52 @@ void __init cpuhp_threads_init(void)
+ 	kthread_unpark(this_cpu_read(cpuhp_state.thread));
+ }
+ 
++/*
++ *
++ * Serialize hotplug trainwrecks outside of the cpu_hotplug_lock
++ * protected region.
++ *
++ * The operation is still serialized against concurrent CPU hotplug via
++ * cpu_add_remove_lock, i.e. CPU map protection.  But it is _not_
++ * serialized against other hotplug related activity like adding or
++ * removing of state callbacks and state instances, which invoke either the
++ * startup or the teardown callback of the affected state.
++ *
++ * This is required for subsystems which are unfixable vs. CPU hotplug and
++ * evade lock inversion problems by scheduling work which has to be
++ * completed _before_ cpu_up()/_cpu_down() returns.
++ *
++ * Don't even think about adding anything to this for any new code or even
++ * drivers. It's only purpose is to keep existing lock order trainwrecks
++ * working.
++ *
++ * For cpu_down() there might be valid reasons to finish cleanups which are
++ * not required to be done under cpu_hotplug_lock, but that's a different
++ * story and would be not invoked via this.
++ */
++static void cpu_up_down_serialize_trainwrecks(bool tasks_frozen)
++{
++	/*
++	 * cpusets delegate hotplug operations to a worker to "solve" the
++	 * lock order problems. Wait for the worker, but only if tasks are
++	 * _not_ frozen (suspend, hibernate) as that would wait forever.
++	 *
++	 * The wait is required because otherwise the hotplug operation
++	 * returns with inconsistent state, which could even be observed in
++	 * user space when a new CPU is brought up. The CPU plug uevent
++	 * would be delivered and user space reacting on it would fail to
++	 * move tasks to the newly plugged CPU up to the point where the
++	 * work has finished because up to that point the newly plugged CPU
++	 * is not assignable in cpusets/cgroups. On unplug that's not
++	 * necessarily a visible issue, but it is still inconsistent state,
++	 * which is the real problem which needs to be "fixed". This can't
++	 * prevent the transient state between scheduling the work and
++	 * returning from waiting for it.
++	 */
++	if (!tasks_frozen)
++		cpuset_wait_for_hotplug();
++}
++
+ #ifdef CONFIG_HOTPLUG_CPU
+ #ifndef arch_clear_mm_cpumask_cpu
+ #define arch_clear_mm_cpumask_cpu(cpu, mm) cpumask_clear_cpu(cpu, mm_cpumask(mm))
+@@ -1051,6 +1098,7 @@ out:
+ 	 */
+ 	lockup_detector_cleanup();
+ 	arch_smt_update();
++	cpu_up_down_serialize_trainwrecks(tasks_frozen);
+ 	return ret;
+ }
+ 
+@@ -1247,6 +1295,7 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen, enum cpuhp_state target)
+ out:
+ 	cpus_write_unlock();
+ 	arch_smt_update();
++	cpu_up_down_serialize_trainwrecks(tasks_frozen);
+ 	return ret;
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 3d92de7909bf4..32c0905bca849 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3672,15 +3672,15 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ 
+ 		r = removed_load;
+ 		sub_positive(&sa->load_avg, r);
+-		sub_positive(&sa->load_sum, r * divider);
++		sa->load_sum = sa->load_avg * divider;
+ 
+ 		r = removed_util;
+ 		sub_positive(&sa->util_avg, r);
+-		sub_positive(&sa->util_sum, r * divider);
++		sa->util_sum = sa->util_avg * divider;
+ 
+ 		r = removed_runnable;
+ 		sub_positive(&sa->runnable_avg, r);
+-		sub_positive(&sa->runnable_sum, r * divider);
++		sa->runnable_sum = sa->runnable_avg * divider;
+ 
+ 		/*
+ 		 * removed_runnable is the unweighted version of removed_load so we
+diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
+index 01f5d3020589d..21005b980a6b7 100644
+--- a/kernel/sched/wait.c
++++ b/kernel/sched/wait.c
+@@ -249,17 +249,22 @@ prepare_to_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_ent
+ }
+ EXPORT_SYMBOL(prepare_to_wait);
+ 
+-void
++/* Returns true if we are the first waiter in the queue, false otherwise. */
++bool
+ prepare_to_wait_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state)
+ {
+ 	unsigned long flags;
++	bool was_empty = false;
+ 
+ 	wq_entry->flags |= WQ_FLAG_EXCLUSIVE;
+ 	spin_lock_irqsave(&wq_head->lock, flags);
+-	if (list_empty(&wq_entry->entry))
++	if (list_empty(&wq_entry->entry)) {
++		was_empty = list_empty(&wq_head->head);
+ 		__add_wait_queue_entry_tail(wq_head, wq_entry);
++	}
+ 	set_current_state(state);
+ 	spin_unlock_irqrestore(&wq_head->lock, flags);
++	return was_empty;
+ }
+ EXPORT_SYMBOL(prepare_to_wait_exclusive);
+ 
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index b09c598065019..ee84891bacfac 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2181,8 +2181,15 @@ void tracing_reset_all_online_cpus(void)
+ 	}
+ }
+ 
++/*
++ * The tgid_map array maps from pid to tgid; i.e. the value stored at index i
++ * is the tgid last observed corresponding to pid=i.
++ */
+ static int *tgid_map;
+ 
++/* The maximum valid index into tgid_map. */
++static size_t tgid_map_max;
++
+ #define SAVED_CMDLINES_DEFAULT 128
+ #define NO_CMDLINE_MAP UINT_MAX
+ static arch_spinlock_t trace_cmdline_lock = __ARCH_SPIN_LOCK_UNLOCKED;
+@@ -2455,24 +2462,41 @@ void trace_find_cmdline(int pid, char comm[])
+ 	preempt_enable();
+ }
+ 
++static int *trace_find_tgid_ptr(int pid)
++{
++	/*
++	 * Pairs with the smp_store_release in set_tracer_flag() to ensure that
++	 * if we observe a non-NULL tgid_map then we also observe the correct
++	 * tgid_map_max.
++	 */
++	int *map = smp_load_acquire(&tgid_map);
++
++	if (unlikely(!map || pid > tgid_map_max))
++		return NULL;
++
++	return &map[pid];
++}
++
+ int trace_find_tgid(int pid)
+ {
+-	if (unlikely(!tgid_map || !pid || pid > PID_MAX_DEFAULT))
+-		return 0;
++	int *ptr = trace_find_tgid_ptr(pid);
+ 
+-	return tgid_map[pid];
++	return ptr ? *ptr : 0;
+ }
+ 
+ static int trace_save_tgid(struct task_struct *tsk)
+ {
++	int *ptr;
++
+ 	/* treat recording of idle task as a success */
+ 	if (!tsk->pid)
+ 		return 1;
+ 
+-	if (unlikely(!tgid_map || tsk->pid > PID_MAX_DEFAULT))
++	ptr = trace_find_tgid_ptr(tsk->pid);
++	if (!ptr)
+ 		return 0;
+ 
+-	tgid_map[tsk->pid] = tsk->tgid;
++	*ptr = tsk->tgid;
+ 	return 1;
+ }
+ 
+@@ -4847,6 +4871,8 @@ int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set)
+ 
+ int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled)
+ {
++	int *map;
++
+ 	if ((mask == TRACE_ITER_RECORD_TGID) ||
+ 	    (mask == TRACE_ITER_RECORD_CMD))
+ 		lockdep_assert_held(&event_mutex);
+@@ -4869,10 +4895,19 @@ int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled)
+ 		trace_event_enable_cmd_record(enabled);
+ 
+ 	if (mask == TRACE_ITER_RECORD_TGID) {
+-		if (!tgid_map)
+-			tgid_map = kvcalloc(PID_MAX_DEFAULT + 1,
+-					   sizeof(*tgid_map),
+-					   GFP_KERNEL);
++		if (!tgid_map) {
++			tgid_map_max = pid_max;
++			map = kvcalloc(tgid_map_max + 1, sizeof(*tgid_map),
++				       GFP_KERNEL);
++
++			/*
++			 * Pairs with smp_load_acquire() in
++			 * trace_find_tgid_ptr() to ensure that if it observes
++			 * the tgid_map we just allocated then it also observes
++			 * the corresponding tgid_map_max value.
++			 */
++			smp_store_release(&tgid_map, map);
++		}
+ 		if (!tgid_map) {
+ 			tr->trace_flags &= ~TRACE_ITER_RECORD_TGID;
+ 			return -ENOMEM;
+@@ -5284,37 +5319,16 @@ static const struct file_operations tracing_readme_fops = {
+ 
+ static void *saved_tgids_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+-	int *ptr = v;
++	int pid = ++(*pos);
+ 
+-	if (*pos || m->count)
+-		ptr++;
+-
+-	(*pos)++;
+-
+-	for (; ptr <= &tgid_map[PID_MAX_DEFAULT]; ptr++) {
+-		if (trace_find_tgid(*ptr))
+-			return ptr;
+-	}
+-
+-	return NULL;
++	return trace_find_tgid_ptr(pid);
+ }
+ 
+ static void *saved_tgids_start(struct seq_file *m, loff_t *pos)
+ {
+-	void *v;
+-	loff_t l = 0;
++	int pid = *pos;
+ 
+-	if (!tgid_map)
+-		return NULL;
+-
+-	v = &tgid_map[0];
+-	while (l <= *pos) {
+-		v = saved_tgids_next(m, v, &l);
+-		if (!v)
+-			return NULL;
+-	}
+-
+-	return v;
++	return trace_find_tgid_ptr(pid);
+ }
+ 
+ static void saved_tgids_stop(struct seq_file *m, void *v)
+@@ -5323,9 +5337,14 @@ static void saved_tgids_stop(struct seq_file *m, void *v)
+ 
+ static int saved_tgids_show(struct seq_file *m, void *v)
+ {
+-	int pid = (int *)v - tgid_map;
++	int *entry = (int *)v;
++	int pid = entry - tgid_map;
++	int tgid = *entry;
++
++	if (tgid == 0)
++		return SEQ_SKIP;
+ 
+-	seq_printf(m, "%d %d\n", pid, trace_find_tgid(pid));
++	seq_printf(m, "%d %d\n", pid, tgid);
+ 	return 0;
+ }
+ 
+diff --git a/lib/seq_buf.c b/lib/seq_buf.c
+index 89c26c393bdba..6dafde8513337 100644
+--- a/lib/seq_buf.c
++++ b/lib/seq_buf.c
+@@ -229,8 +229,10 @@ int seq_buf_putmem_hex(struct seq_buf *s, const void *mem,
+ 
+ 	WARN_ON(s->size == 0);
+ 
++	BUILD_BUG_ON(MAX_MEMHEX_BYTES * 2 >= HEX_CHARS);
++
+ 	while (len) {
+-		start_len = min(len, HEX_CHARS - 1);
++		start_len = min(len, MAX_MEMHEX_BYTES);
+ #ifdef __BIG_ENDIAN
+ 		for (i = 0, j = 0; i < start_len; i++) {
+ #else
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 25fb82320e3d5..01445ddff58d8 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1856,11 +1856,11 @@ static int __soft_offline_page(struct page *page)
+ 			pr_info("soft offline: %#lx: %s migration failed %d, type %lx (%pGp)\n",
+ 				pfn, msg_page[huge], ret, page->flags, &page->flags);
+ 			if (ret > 0)
+-				ret = -EIO;
++				ret = -EBUSY;
+ 		}
+ 	} else {
+-		pr_info("soft offline: %#lx: %s isolation failed: %d, page count %d, type %lx (%pGp)\n",
+-			pfn, msg_page[huge], ret, page_count(page), page->flags, &page->flags);
++		pr_info("soft offline: %#lx: %s isolation failed, page count %d, type %lx (%pGp)\n",
++			pfn, msg_page[huge], page_count(page), page->flags, &page->flags);
+ 		ret = -EBUSY;
+ 	}
+ 	return ret;
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 86ebfc6ae6986..0854f1b35683c 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1713,14 +1713,6 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 
+ 	BT_DBG("%s %p", hdev->name, hdev);
+ 
+-	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
+-	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
+-	    test_bit(HCI_UP, &hdev->flags)) {
+-		/* Execute vendor specific shutdown routine */
+-		if (hdev->shutdown)
+-			hdev->shutdown(hdev);
+-	}
+-
+ 	cancel_delayed_work(&hdev->power_off);
+ 
+ 	hci_request_cancel_all(hdev);
+@@ -1796,6 +1788,14 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 		clear_bit(HCI_INIT, &hdev->flags);
+ 	}
+ 
++	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
++	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
++	    test_bit(HCI_UP, &hdev->flags)) {
++		/* Execute vendor specific shutdown routine */
++		if (hdev->shutdown)
++			hdev->shutdown(hdev);
++	}
++
+ 	/* flush cmd  work */
+ 	flush_work(&hdev->cmd_work);
+ 
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index d62ac4b737099..e59ae24a8f17f 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -4360,12 +4360,12 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev,
+ 
+ 	bt_dev_dbg(hdev, "SCO connected with air mode: %02x", ev->air_mode);
+ 
+-	switch (conn->setting & SCO_AIRMODE_MASK) {
+-	case SCO_AIRMODE_CVSD:
++	switch (ev->air_mode) {
++	case 0x02:
+ 		if (hdev->notify)
+ 			hdev->notify(hdev, HCI_NOTIFY_ENABLE_SCO_CVSD);
+ 		break;
+-	case SCO_AIRMODE_TRANSP:
++	case 0x03:
+ 		if (hdev->notify)
+ 			hdev->notify(hdev, HCI_NOTIFY_ENABLE_SCO_TRANSP);
+ 		break;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index cdc3863371739..0ddbc415ce156 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -6055,7 +6055,7 @@ static inline int l2cap_ecred_conn_rsp(struct l2cap_conn *conn,
+ 	struct l2cap_ecred_conn_rsp *rsp = (void *) data;
+ 	struct hci_conn *hcon = conn->hcon;
+ 	u16 mtu, mps, credits, result;
+-	struct l2cap_chan *chan;
++	struct l2cap_chan *chan, *tmp;
+ 	int err = 0, sec_level;
+ 	int i = 0;
+ 
+@@ -6074,7 +6074,7 @@ static inline int l2cap_ecred_conn_rsp(struct l2cap_conn *conn,
+ 
+ 	cmd_len -= sizeof(*rsp);
+ 
+-	list_for_each_entry(chan, &conn->chan_l, list) {
++	list_for_each_entry_safe(chan, tmp, &conn->chan_l, list) {
+ 		u16 dcid;
+ 
+ 		if (chan->ident != cmd->ident ||
+@@ -6237,7 +6237,7 @@ static inline int l2cap_ecred_reconf_rsp(struct l2cap_conn *conn,
+ 					 struct l2cap_cmd_hdr *cmd, u16 cmd_len,
+ 					 u8 *data)
+ {
+-	struct l2cap_chan *chan;
++	struct l2cap_chan *chan, *tmp;
+ 	struct l2cap_ecred_conn_rsp *rsp = (void *) data;
+ 	u16 result;
+ 
+@@ -6251,7 +6251,7 @@ static inline int l2cap_ecred_reconf_rsp(struct l2cap_conn *conn,
+ 	if (!result)
+ 		return 0;
+ 
+-	list_for_each_entry(chan, &conn->chan_l, list) {
++	list_for_each_entry_safe(chan, tmp, &conn->chan_l, list) {
+ 		if (chan->ident != cmd->ident)
+ 			continue;
+ 
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 13520c7b4f2fb..31a585fe0c7c6 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -247,12 +247,15 @@ static const u8 mgmt_status_table[] = {
+ 	MGMT_STATUS_TIMEOUT,		/* Instant Passed */
+ 	MGMT_STATUS_NOT_SUPPORTED,	/* Pairing Not Supported */
+ 	MGMT_STATUS_FAILED,		/* Transaction Collision */
++	MGMT_STATUS_FAILED,		/* Reserved for future use */
+ 	MGMT_STATUS_INVALID_PARAMS,	/* Unacceptable Parameter */
+ 	MGMT_STATUS_REJECTED,		/* QoS Rejected */
+ 	MGMT_STATUS_NOT_SUPPORTED,	/* Classification Not Supported */
+ 	MGMT_STATUS_REJECTED,		/* Insufficient Security */
+ 	MGMT_STATUS_INVALID_PARAMS,	/* Parameter Out Of Range */
++	MGMT_STATUS_FAILED,		/* Reserved for future use */
+ 	MGMT_STATUS_BUSY,		/* Role Switch Pending */
++	MGMT_STATUS_FAILED,		/* Reserved for future use */
+ 	MGMT_STATUS_FAILED,		/* Slot Violation */
+ 	MGMT_STATUS_FAILED,		/* Role Switch Failed */
+ 	MGMT_STATUS_INVALID_PARAMS,	/* EIR Too Large */
+@@ -4035,6 +4038,8 @@ static int get_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
++	memset(&rp, 0, sizeof(rp));
++
+ 	if (cp->addr.type == BDADDR_BREDR) {
+ 		br_params = hci_bdaddr_list_lookup_with_flags(&hdev->whitelist,
+ 							      &cp->addr.bdaddr,
+diff --git a/net/bridge/br_mrp.c b/net/bridge/br_mrp.c
+index d1336a7ad7ff2..3259f5480127a 100644
+--- a/net/bridge/br_mrp.c
++++ b/net/bridge/br_mrp.c
+@@ -607,8 +607,7 @@ int br_mrp_set_ring_state(struct net_bridge *br,
+ 	if (!mrp)
+ 		return -EINVAL;
+ 
+-	if (mrp->ring_state == BR_MRP_RING_STATE_CLOSED &&
+-	    state->ring_state != BR_MRP_RING_STATE_CLOSED)
++	if (mrp->ring_state != state->ring_state)
+ 		mrp->ring_transitions++;
+ 
+ 	mrp->ring_state = state->ring_state;
+@@ -690,8 +689,7 @@ int br_mrp_set_in_state(struct net_bridge *br, struct br_mrp_in_state *state)
+ 	if (!mrp)
+ 		return -EINVAL;
+ 
+-	if (mrp->in_state == BR_MRP_IN_STATE_CLOSED &&
+-	    state->in_state != BR_MRP_IN_STATE_CLOSED)
++	if (mrp->in_state != state->in_state)
+ 		mrp->in_transitions++;
+ 
+ 	mrp->in_state = state->in_state;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 0c9ce36afc8cf..2fdf30eefc596 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -6433,11 +6433,18 @@ EXPORT_SYMBOL(napi_schedule_prep);
+  * __napi_schedule_irqoff - schedule for receive
+  * @n: entry to schedule
+  *
+- * Variant of __napi_schedule() assuming hard irqs are masked
++ * Variant of __napi_schedule() assuming hard irqs are masked.
++ *
++ * On PREEMPT_RT enabled kernels this maps to __napi_schedule()
++ * because the interrupt disabled assumption might not be true
++ * due to force-threaded interrupts and spinlock substitution.
+  */
+ void __napi_schedule_irqoff(struct napi_struct *n)
+ {
+-	____napi_schedule(this_cpu_ptr(&softnet_data), n);
++	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
++		____napi_schedule(this_cpu_ptr(&softnet_data), n);
++	else
++		__napi_schedule(n);
+ }
+ EXPORT_SYMBOL(__napi_schedule_irqoff);
+ 
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 97975bed491ad..560d5dc435629 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1053,7 +1053,7 @@ static int __ip_append_data(struct sock *sk,
+ 			unsigned int datalen;
+ 			unsigned int fraglen;
+ 			unsigned int fraggap;
+-			unsigned int alloclen;
++			unsigned int alloclen, alloc_extra;
+ 			unsigned int pagedlen;
+ 			struct sk_buff *skb_prev;
+ alloc_new_skb:
+@@ -1073,35 +1073,39 @@ alloc_new_skb:
+ 			fraglen = datalen + fragheaderlen;
+ 			pagedlen = 0;
+ 
++			alloc_extra = hh_len + 15;
++			alloc_extra += exthdrlen;
++
++			/* The last fragment gets additional space at tail.
++			 * Note, with MSG_MORE we overallocate on fragments,
++			 * because we have no idea what fragment will be
++			 * the last.
++			 */
++			if (datalen == length + fraggap)
++				alloc_extra += rt->dst.trailer_len;
++
+ 			if ((flags & MSG_MORE) &&
+ 			    !(rt->dst.dev->features&NETIF_F_SG))
+ 				alloclen = mtu;
+-			else if (!paged)
++			else if (!paged &&
++				 (fraglen + alloc_extra < SKB_MAX_ALLOC ||
++				  !(rt->dst.dev->features & NETIF_F_SG)))
+ 				alloclen = fraglen;
+ 			else {
+ 				alloclen = min_t(int, fraglen, MAX_HEADER);
+ 				pagedlen = fraglen - alloclen;
+ 			}
+ 
+-			alloclen += exthdrlen;
+-
+-			/* The last fragment gets additional space at tail.
+-			 * Note, with MSG_MORE we overallocate on fragments,
+-			 * because we have no idea what fragment will be
+-			 * the last.
+-			 */
+-			if (datalen == length + fraggap)
+-				alloclen += rt->dst.trailer_len;
++			alloclen += alloc_extra;
+ 
+ 			if (transhdrlen) {
+-				skb = sock_alloc_send_skb(sk,
+-						alloclen + hh_len + 15,
++				skb = sock_alloc_send_skb(sk, alloclen,
+ 						(flags & MSG_DONTWAIT), &err);
+ 			} else {
+ 				skb = NULL;
+ 				if (refcount_read(&sk->sk_wmem_alloc) + wmem_alloc_delta <=
+ 				    2 * sk->sk_sndbuf)
+-					skb = alloc_skb(alloclen + hh_len + 15,
++					skb = alloc_skb(alloclen,
+ 							sk->sk_allocation);
+ 				if (unlikely(!skb))
+ 					err = -ENOBUFS;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index fac5c1469ceee..4d4b641c204d4 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2802,8 +2802,17 @@ static void tcp_process_loss(struct sock *sk, int flag, int num_dupack,
+ 	*rexmit = REXMIT_LOST;
+ }
+ 
++static bool tcp_force_fast_retransmit(struct sock *sk)
++{
++	struct tcp_sock *tp = tcp_sk(sk);
++
++	return after(tcp_highest_sack_seq(tp),
++		     tp->snd_una + tp->reordering * tp->mss_cache);
++}
++
+ /* Undo during fast recovery after partial ACK. */
+-static bool tcp_try_undo_partial(struct sock *sk, u32 prior_snd_una)
++static bool tcp_try_undo_partial(struct sock *sk, u32 prior_snd_una,
++				 bool *do_lost)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 
+@@ -2828,7 +2837,9 @@ static bool tcp_try_undo_partial(struct sock *sk, u32 prior_snd_una)
+ 		tcp_undo_cwnd_reduction(sk, true);
+ 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPPARTIALUNDO);
+ 		tcp_try_keep_open(sk);
+-		return true;
++	} else {
++		/* Partial ACK arrived. Force fast retransmit. */
++		*do_lost = tcp_force_fast_retransmit(sk);
+ 	}
+ 	return false;
+ }
+@@ -2852,14 +2863,6 @@ static void tcp_identify_packet_loss(struct sock *sk, int *ack_flag)
+ 	}
+ }
+ 
+-static bool tcp_force_fast_retransmit(struct sock *sk)
+-{
+-	struct tcp_sock *tp = tcp_sk(sk);
+-
+-	return after(tcp_highest_sack_seq(tp),
+-		     tp->snd_una + tp->reordering * tp->mss_cache);
+-}
+-
+ /* Process an event, which can update packets-in-flight not trivially.
+  * Main goal of this function is to calculate new estimate for left_out,
+  * taking into account both packets sitting in receiver's buffer and
+@@ -2929,17 +2932,21 @@ static void tcp_fastretrans_alert(struct sock *sk, const u32 prior_snd_una,
+ 		if (!(flag & FLAG_SND_UNA_ADVANCED)) {
+ 			if (tcp_is_reno(tp))
+ 				tcp_add_reno_sack(sk, num_dupack, ece_ack);
+-		} else {
+-			if (tcp_try_undo_partial(sk, prior_snd_una))
+-				return;
+-			/* Partial ACK arrived. Force fast retransmit. */
+-			do_lost = tcp_force_fast_retransmit(sk);
+-		}
+-		if (tcp_try_undo_dsack(sk)) {
+-			tcp_try_keep_open(sk);
++		} else if (tcp_try_undo_partial(sk, prior_snd_una, &do_lost))
+ 			return;
+-		}
++
++		if (tcp_try_undo_dsack(sk))
++			tcp_try_keep_open(sk);
++
+ 		tcp_identify_packet_loss(sk, ack_flag);
++		if (icsk->icsk_ca_state != TCP_CA_Recovery) {
++			if (!tcp_time_to_recover(sk, flag))
++				return;
++			/* Undo reverts the recovery state. If loss is evident,
++			 * starts a new recovery (e.g. reordering then loss);
++			 */
++			tcp_enter_recovery(sk, ece_ack);
++		}
+ 		break;
+ 	case TCP_CA_Loss:
+ 		tcp_process_loss(sk, flag, num_dupack, rexmit);
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 077d43af8226b..e889655ca0e20 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1554,7 +1554,7 @@ emsgsize:
+ 			unsigned int datalen;
+ 			unsigned int fraglen;
+ 			unsigned int fraggap;
+-			unsigned int alloclen;
++			unsigned int alloclen, alloc_extra;
+ 			unsigned int pagedlen;
+ alloc_new_skb:
+ 			/* There's no room in the current skb */
+@@ -1581,17 +1581,28 @@ alloc_new_skb:
+ 			fraglen = datalen + fragheaderlen;
+ 			pagedlen = 0;
+ 
++			alloc_extra = hh_len;
++			alloc_extra += dst_exthdrlen;
++			alloc_extra += rt->dst.trailer_len;
++
++			/* We just reserve space for fragment header.
++			 * Note: this may be overallocation if the message
++			 * (without MSG_MORE) fits into the MTU.
++			 */
++			alloc_extra += sizeof(struct frag_hdr);
++
+ 			if ((flags & MSG_MORE) &&
+ 			    !(rt->dst.dev->features&NETIF_F_SG))
+ 				alloclen = mtu;
+-			else if (!paged)
++			else if (!paged &&
++				 (fraglen + alloc_extra < SKB_MAX_ALLOC ||
++				  !(rt->dst.dev->features & NETIF_F_SG)))
+ 				alloclen = fraglen;
+ 			else {
+ 				alloclen = min_t(int, fraglen, MAX_HEADER);
+ 				pagedlen = fraglen - alloclen;
+ 			}
+-
+-			alloclen += dst_exthdrlen;
++			alloclen += alloc_extra;
+ 
+ 			if (datalen != length + fraggap) {
+ 				/*
+@@ -1601,30 +1612,21 @@ alloc_new_skb:
+ 				datalen += rt->dst.trailer_len;
+ 			}
+ 
+-			alloclen += rt->dst.trailer_len;
+ 			fraglen = datalen + fragheaderlen;
+ 
+-			/*
+-			 * We just reserve space for fragment header.
+-			 * Note: this may be overallocation if the message
+-			 * (without MSG_MORE) fits into the MTU.
+-			 */
+-			alloclen += sizeof(struct frag_hdr);
+-
+ 			copy = datalen - transhdrlen - fraggap - pagedlen;
+ 			if (copy < 0) {
+ 				err = -EINVAL;
+ 				goto error;
+ 			}
+ 			if (transhdrlen) {
+-				skb = sock_alloc_send_skb(sk,
+-						alloclen + hh_len,
++				skb = sock_alloc_send_skb(sk, alloclen,
+ 						(flags & MSG_DONTWAIT), &err);
+ 			} else {
+ 				skb = NULL;
+ 				if (refcount_read(&sk->sk_wmem_alloc) + wmem_alloc_delta <=
+ 				    2 * sk->sk_sndbuf)
+-					skb = alloc_skb(alloclen + hh_len,
++					skb = alloc_skb(alloclen,
+ 							sk->sk_allocation);
+ 				if (unlikely(!skb))
+ 					err = -ENOBUFS;
+diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c
+index af36acc1a6448..2880dc7d9a491 100644
+--- a/net/ipv6/output_core.c
++++ b/net/ipv6/output_core.c
+@@ -15,29 +15,11 @@ static u32 __ipv6_select_ident(struct net *net,
+ 			       const struct in6_addr *dst,
+ 			       const struct in6_addr *src)
+ {
+-	const struct {
+-		struct in6_addr dst;
+-		struct in6_addr src;
+-	} __aligned(SIPHASH_ALIGNMENT) combined = {
+-		.dst = *dst,
+-		.src = *src,
+-	};
+-	u32 hash, id;
+-
+-	/* Note the following code is not safe, but this is okay. */
+-	if (unlikely(siphash_key_is_zero(&net->ipv4.ip_id_key)))
+-		get_random_bytes(&net->ipv4.ip_id_key,
+-				 sizeof(net->ipv4.ip_id_key));
+-
+-	hash = siphash(&combined, sizeof(combined), &net->ipv4.ip_id_key);
+-
+-	/* Treat id of 0 as unset and if we get 0 back from ip_idents_reserve,
+-	 * set the hight order instead thus minimizing possible future
+-	 * collisions.
+-	 */
+-	id = ip_idents_reserve(hash, 1);
+-	if (unlikely(!id))
+-		id = 1 << 31;
++	u32 id;
++
++	do {
++		id = prandom_u32();
++	} while (!id);
+ 
+ 	return id;
+ }
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index 13250cadb4202..e18c3855f6161 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -2088,10 +2088,9 @@ static struct ieee80211_sta_rx_stats *
+ sta_get_last_rx_stats(struct sta_info *sta)
+ {
+ 	struct ieee80211_sta_rx_stats *stats = &sta->rx_stats;
+-	struct ieee80211_local *local = sta->local;
+ 	int cpu;
+ 
+-	if (!ieee80211_hw_check(&local->hw, USES_RSS))
++	if (!sta->pcpu_rx_stats)
+ 		return stats;
+ 
+ 	for_each_possible_cpu(cpu) {
+@@ -2191,9 +2190,7 @@ static void sta_set_tidstats(struct sta_info *sta,
+ 	int cpu;
+ 
+ 	if (!(tidstats->filled & BIT(NL80211_TID_STATS_RX_MSDU))) {
+-		if (!ieee80211_hw_check(&local->hw, USES_RSS))
+-			tidstats->rx_msdu +=
+-				sta_get_tidstats_msdu(&sta->rx_stats, tid);
++		tidstats->rx_msdu += sta_get_tidstats_msdu(&sta->rx_stats, tid);
+ 
+ 		if (sta->pcpu_rx_stats) {
+ 			for_each_possible_cpu(cpu) {
+@@ -2272,7 +2269,6 @@ void sta_set_sinfo(struct sta_info *sta, struct station_info *sinfo,
+ 		sinfo->rx_beacon = sdata->u.mgd.count_beacon_signal;
+ 
+ 	drv_sta_statistics(local, sdata, &sta->sta, sinfo);
+-
+ 	sinfo->filled |= BIT_ULL(NL80211_STA_INFO_INACTIVE_TIME) |
+ 			 BIT_ULL(NL80211_STA_INFO_STA_FLAGS) |
+ 			 BIT_ULL(NL80211_STA_INFO_BSS_PARAM) |
+@@ -2307,8 +2303,7 @@ void sta_set_sinfo(struct sta_info *sta, struct station_info *sinfo,
+ 
+ 	if (!(sinfo->filled & (BIT_ULL(NL80211_STA_INFO_RX_BYTES64) |
+ 			       BIT_ULL(NL80211_STA_INFO_RX_BYTES)))) {
+-		if (!ieee80211_hw_check(&local->hw, USES_RSS))
+-			sinfo->rx_bytes += sta_get_stats_bytes(&sta->rx_stats);
++		sinfo->rx_bytes += sta_get_stats_bytes(&sta->rx_stats);
+ 
+ 		if (sta->pcpu_rx_stats) {
+ 			for_each_possible_cpu(cpu) {
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index 88e14cfeb5d52..f613299ca7f0a 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -333,7 +333,8 @@ static int tcf_del_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb,
+ 	}
+ 	mutex_unlock(&idrinfo->lock);
+ 
+-	if (nla_put_u32(skb, TCA_FCNT, n_i))
++	ret = nla_put_u32(skb, TCA_FCNT, n_i);
++	if (ret)
+ 		goto nla_put_failure;
+ 	nla_nest_end(skb, nest);
+ 
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index a281da07bb1d2..30090794b7912 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1532,7 +1532,7 @@ static inline int __tcf_classify(struct sk_buff *skb,
+ 				 u32 *last_executed_chain)
+ {
+ #ifdef CONFIG_NET_CLS_ACT
+-	const int max_reclassify_loop = 4;
++	const int max_reclassify_loop = 16;
+ 	const struct tcf_proto *first_tp;
+ 	int limit = 0;
+ 
+diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
+index 53e5ed79f63f3..59e653b528b1f 100644
+--- a/net/sctp/bind_addr.c
++++ b/net/sctp/bind_addr.c
+@@ -270,22 +270,19 @@ int sctp_raw_to_bind_addrs(struct sctp_bind_addr *bp, __u8 *raw_addr_list,
+ 		rawaddr = (union sctp_addr_param *)raw_addr_list;
+ 
+ 		af = sctp_get_af_specific(param_type2af(param->type));
+-		if (unlikely(!af)) {
++		if (unlikely(!af) ||
++		    !af->from_addr_param(&addr, rawaddr, htons(port), 0)) {
+ 			retval = -EINVAL;
+-			sctp_bind_addr_clean(bp);
+-			break;
++			goto out_err;
+ 		}
+ 
+-		af->from_addr_param(&addr, rawaddr, htons(port), 0);
+ 		if (sctp_bind_addr_state(bp, &addr) != -1)
+ 			goto next;
+ 		retval = sctp_add_bind_addr(bp, &addr, sizeof(addr),
+ 					    SCTP_ADDR_SRC, gfp);
+-		if (retval) {
++		if (retval)
+ 			/* Can't finish building the list, clean up. */
+-			sctp_bind_addr_clean(bp);
+-			break;
+-		}
++			goto out_err;
+ 
+ next:
+ 		len = ntohs(param->length);
+@@ -294,6 +291,12 @@ next:
+ 	}
+ 
+ 	return retval;
++
++out_err:
++	if (retval)
++		sctp_bind_addr_clean(bp);
++
++	return retval;
+ }
+ 
+ /********************************************************************
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index d508f6f3dd08a..f72bff93745c4 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -1131,7 +1131,8 @@ static struct sctp_association *__sctp_rcv_init_lookup(struct net *net,
+ 		if (!af)
+ 			continue;
+ 
+-		af->from_addr_param(paddr, params.addr, sh->source, 0);
++		if (!af->from_addr_param(paddr, params.addr, sh->source, 0))
++			continue;
+ 
+ 		asoc = __sctp_lookup_association(net, laddr, paddr, transportp);
+ 		if (asoc)
+@@ -1174,7 +1175,8 @@ static struct sctp_association *__sctp_rcv_asconf_lookup(
+ 	if (unlikely(!af))
+ 		return NULL;
+ 
+-	af->from_addr_param(&paddr, param, peer_port, 0);
++	if (af->from_addr_param(&paddr, param, peer_port, 0))
++		return NULL;
+ 
+ 	return __sctp_lookup_association(net, laddr, &paddr, transportp);
+ }
+@@ -1245,7 +1247,7 @@ static struct sctp_association *__sctp_rcv_walk_lookup(struct net *net,
+ 
+ 		ch = (struct sctp_chunkhdr *)ch_end;
+ 		chunk_num++;
+-	} while (ch_end < skb_tail_pointer(skb));
++	} while (ch_end + sizeof(*ch) < skb_tail_pointer(skb));
+ 
+ 	return asoc;
+ }
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index c8074f435d3ef..d594b949ae82f 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -530,15 +530,20 @@ static void sctp_v6_to_sk_daddr(union sctp_addr *addr, struct sock *sk)
+ }
+ 
+ /* Initialize a sctp_addr from an address parameter. */
+-static void sctp_v6_from_addr_param(union sctp_addr *addr,
++static bool sctp_v6_from_addr_param(union sctp_addr *addr,
+ 				    union sctp_addr_param *param,
+ 				    __be16 port, int iif)
+ {
++	if (ntohs(param->v6.param_hdr.length) < sizeof(struct sctp_ipv6addr_param))
++		return false;
++
+ 	addr->v6.sin6_family = AF_INET6;
+ 	addr->v6.sin6_port = port;
+ 	addr->v6.sin6_flowinfo = 0; /* BUG */
+ 	addr->v6.sin6_addr = param->v6.addr;
+ 	addr->v6.sin6_scope_id = iif;
++
++	return true;
+ }
+ 
+ /* Initialize an address parameter from a sctp_addr and return the length
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 25833238fe93c..47fb87ce489fc 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -253,14 +253,19 @@ static void sctp_v4_to_sk_daddr(union sctp_addr *addr, struct sock *sk)
+ }
+ 
+ /* Initialize a sctp_addr from an address parameter. */
+-static void sctp_v4_from_addr_param(union sctp_addr *addr,
++static bool sctp_v4_from_addr_param(union sctp_addr *addr,
+ 				    union sctp_addr_param *param,
+ 				    __be16 port, int iif)
+ {
++	if (ntohs(param->v4.param_hdr.length) < sizeof(struct sctp_ipv4addr_param))
++		return false;
++
+ 	addr->v4.sin_family = AF_INET;
+ 	addr->v4.sin_port = port;
+ 	addr->v4.sin_addr.s_addr = param->v4.addr.s_addr;
+ 	memset(addr->v4.sin_zero, 0, sizeof(addr->v4.sin_zero));
++
++	return true;
+ }
+ 
+ /* Initialize an address parameter from a sctp_addr and return the length
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index b9d6babe28702..7411fa4428214 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -2329,11 +2329,13 @@ int sctp_process_init(struct sctp_association *asoc, struct sctp_chunk *chunk,
+ 
+ 	/* Process the initialization parameters.  */
+ 	sctp_walk_params(param, peer_init, init_hdr.params) {
+-		if (!src_match && (param.p->type == SCTP_PARAM_IPV4_ADDRESS ||
+-		    param.p->type == SCTP_PARAM_IPV6_ADDRESS)) {
++		if (!src_match &&
++		    (param.p->type == SCTP_PARAM_IPV4_ADDRESS ||
++		     param.p->type == SCTP_PARAM_IPV6_ADDRESS)) {
+ 			af = sctp_get_af_specific(param_type2af(param.p->type));
+-			af->from_addr_param(&addr, param.addr,
+-					    chunk->sctp_hdr->source, 0);
++			if (!af->from_addr_param(&addr, param.addr,
++						 chunk->sctp_hdr->source, 0))
++				continue;
+ 			if (sctp_cmp_addr_exact(sctp_source(chunk), &addr))
+ 				src_match = 1;
+ 		}
+@@ -2514,7 +2516,8 @@ static int sctp_process_param(struct sctp_association *asoc,
+ 			break;
+ do_addr_param:
+ 		af = sctp_get_af_specific(param_type2af(param.p->type));
+-		af->from_addr_param(&addr, param.addr, htons(asoc->peer.port), 0);
++		if (!af->from_addr_param(&addr, param.addr, htons(asoc->peer.port), 0))
++			break;
+ 		scope = sctp_scope(peer_addr);
+ 		if (sctp_in_scope(net, &addr, scope))
+ 			if (!sctp_assoc_add_peer(asoc, &addr, gfp, SCTP_UNCONFIRMED))
+@@ -2615,15 +2618,13 @@ do_addr_param:
+ 		addr_param = param.v + sizeof(struct sctp_addip_param);
+ 
+ 		af = sctp_get_af_specific(param_type2af(addr_param->p.type));
+-		if (af == NULL)
++		if (!af)
+ 			break;
+ 
+-		af->from_addr_param(&addr, addr_param,
+-				    htons(asoc->peer.port), 0);
++		if (!af->from_addr_param(&addr, addr_param,
++					 htons(asoc->peer.port), 0))
++			break;
+ 
+-		/* if the address is invalid, we can't process it.
+-		 * XXX: see spec for what to do.
+-		 */
+ 		if (!af->addr_valid(&addr, NULL, NULL))
+ 			break;
+ 
+@@ -3037,7 +3038,8 @@ static __be16 sctp_process_asconf_param(struct sctp_association *asoc,
+ 	if (unlikely(!af))
+ 		return SCTP_ERROR_DNS_FAILED;
+ 
+-	af->from_addr_param(&addr, addr_param, htons(asoc->peer.port), 0);
++	if (!af->from_addr_param(&addr, addr_param, htons(asoc->peer.port), 0))
++		return SCTP_ERROR_DNS_FAILED;
+ 
+ 	/* ADDIP 4.2.1  This parameter MUST NOT contain a broadcast
+ 	 * or multicast address.
+@@ -3314,7 +3316,8 @@ static void sctp_asconf_param_success(struct sctp_association *asoc,
+ 
+ 	/* We have checked the packet before, so we do not check again.	*/
+ 	af = sctp_get_af_specific(param_type2af(addr_param->p.type));
+-	af->from_addr_param(&addr, addr_param, htons(bp->port), 0);
++	if (!af->from_addr_param(&addr, addr_param, htons(bp->port), 0))
++		return;
+ 
+ 	switch (asconf_param->param_hdr.type) {
+ 	case SCTP_PARAM_ADD_IP:
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index cf86c1376b1a4..326250513570e 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1352,7 +1352,7 @@ static int vsock_stream_connect(struct socket *sock, struct sockaddr *addr,
+ 
+ 		if (signal_pending(current)) {
+ 			err = sock_intr_errno(timeout);
+-			sk->sk_state = TCP_CLOSE;
++			sk->sk_state = sk->sk_state == TCP_ESTABLISHED ? TCP_CLOSING : TCP_CLOSE;
+ 			sock->state = SS_UNCONNECTED;
+ 			vsock_transport_cancel_pkt(vsk);
+ 			goto out_wait;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index daf3f29c7f0cc..8fb0478888fb2 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -4625,11 +4625,10 @@ static int nl80211_parse_tx_bitrate_mask(struct genl_info *info,
+ 		       sband->ht_cap.mcs.rx_mask,
+ 		       sizeof(mask->control[i].ht_mcs));
+ 
+-		if (!sband->vht_cap.vht_supported)
+-			continue;
+-
+-		vht_tx_mcs_map = le16_to_cpu(sband->vht_cap.vht_mcs.tx_mcs_map);
+-		vht_build_mcs_mask(vht_tx_mcs_map, mask->control[i].vht_mcs);
++		if (sband->vht_cap.vht_supported) {
++			vht_tx_mcs_map = le16_to_cpu(sband->vht_cap.vht_mcs.tx_mcs_map);
++			vht_build_mcs_mask(vht_tx_mcs_map, mask->control[i].vht_mcs);
++		}
+ 
+ 		he_cap = ieee80211_get_he_iftype_cap(sband, wdev->iftype);
+ 		if (!he_cap)
+diff --git a/net/wireless/wext-spy.c b/net/wireless/wext-spy.c
+index 33bef22e44e95..b379a03716539 100644
+--- a/net/wireless/wext-spy.c
++++ b/net/wireless/wext-spy.c
+@@ -120,8 +120,8 @@ int iw_handler_set_thrspy(struct net_device *	dev,
+ 		return -EOPNOTSUPP;
+ 
+ 	/* Just do it */
+-	memcpy(&(spydata->spy_thr_low), &(threshold->low),
+-	       2 * sizeof(struct iw_quality));
++	spydata->spy_thr_low = threshold->low;
++	spydata->spy_thr_high = threshold->high;
+ 
+ 	/* Clear flag */
+ 	memset(spydata->spy_thr_under, '\0', sizeof(spydata->spy_thr_under));
+@@ -147,8 +147,8 @@ int iw_handler_get_thrspy(struct net_device *	dev,
+ 		return -EOPNOTSUPP;
+ 
+ 	/* Just do it */
+-	memcpy(&(threshold->low), &(spydata->spy_thr_low),
+-	       2 * sizeof(struct iw_quality));
++	threshold->low = spydata->spy_thr_low;
++	threshold->high = spydata->spy_thr_high;
+ 
+ 	return 0;
+ }
+@@ -173,10 +173,10 @@ static void iw_send_thrspy_event(struct net_device *	dev,
+ 	memcpy(threshold.addr.sa_data, address, ETH_ALEN);
+ 	threshold.addr.sa_family = ARPHRD_ETHER;
+ 	/* Copy stats */
+-	memcpy(&(threshold.qual), wstats, sizeof(struct iw_quality));
++	threshold.qual = *wstats;
+ 	/* Copy also thresholds */
+-	memcpy(&(threshold.low), &(spydata->spy_thr_low),
+-	       2 * sizeof(struct iw_quality));
++	threshold.low = spydata->spy_thr_low;
++	threshold.high = spydata->spy_thr_high;
+ 
+ 	/* Send event to user space */
+ 	wireless_send_event(dev, SIOCGIWTHRSPY, &wrqu, (char *) &threshold);
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index d0c32a8fcc4a9..45f86a97eaf26 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -580,6 +580,20 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
+ 
+ 	copy_from_user_state(x, p);
+ 
++	if (attrs[XFRMA_ENCAP]) {
++		x->encap = kmemdup(nla_data(attrs[XFRMA_ENCAP]),
++				   sizeof(*x->encap), GFP_KERNEL);
++		if (x->encap == NULL)
++			goto error;
++	}
++
++	if (attrs[XFRMA_COADDR]) {
++		x->coaddr = kmemdup(nla_data(attrs[XFRMA_COADDR]),
++				    sizeof(*x->coaddr), GFP_KERNEL);
++		if (x->coaddr == NULL)
++			goto error;
++	}
++
+ 	if (attrs[XFRMA_SA_EXTRA_FLAGS])
+ 		x->props.extra_flags = nla_get_u32(attrs[XFRMA_SA_EXTRA_FLAGS]);
+ 
+@@ -600,23 +614,9 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
+ 				   attrs[XFRMA_ALG_COMP])))
+ 		goto error;
+ 
+-	if (attrs[XFRMA_ENCAP]) {
+-		x->encap = kmemdup(nla_data(attrs[XFRMA_ENCAP]),
+-				   sizeof(*x->encap), GFP_KERNEL);
+-		if (x->encap == NULL)
+-			goto error;
+-	}
+-
+ 	if (attrs[XFRMA_TFCPAD])
+ 		x->tfcpad = nla_get_u32(attrs[XFRMA_TFCPAD]);
+ 
+-	if (attrs[XFRMA_COADDR]) {
+-		x->coaddr = kmemdup(nla_data(attrs[XFRMA_COADDR]),
+-				    sizeof(*x->coaddr), GFP_KERNEL);
+-		if (x->coaddr == NULL)
+-			goto error;
+-	}
+-
+ 	xfrm_mark_get(attrs, &x->mark);
+ 
+ 	xfrm_smark_init(attrs, &x->props.smark);
+diff --git a/security/selinux/avc.c b/security/selinux/avc.c
+index 3c05827608b6a..884a014ce2b85 100644
+--- a/security/selinux/avc.c
++++ b/security/selinux/avc.c
+@@ -297,26 +297,27 @@ static struct avc_xperms_decision_node
+ 	struct avc_xperms_decision_node *xpd_node;
+ 	struct extended_perms_decision *xpd;
+ 
+-	xpd_node = kmem_cache_zalloc(avc_xperms_decision_cachep, GFP_NOWAIT);
++	xpd_node = kmem_cache_zalloc(avc_xperms_decision_cachep,
++				     GFP_NOWAIT | __GFP_NOWARN);
+ 	if (!xpd_node)
+ 		return NULL;
+ 
+ 	xpd = &xpd_node->xpd;
+ 	if (which & XPERMS_ALLOWED) {
+ 		xpd->allowed = kmem_cache_zalloc(avc_xperms_data_cachep,
+-						GFP_NOWAIT);
++						GFP_NOWAIT | __GFP_NOWARN);
+ 		if (!xpd->allowed)
+ 			goto error;
+ 	}
+ 	if (which & XPERMS_AUDITALLOW) {
+ 		xpd->auditallow = kmem_cache_zalloc(avc_xperms_data_cachep,
+-						GFP_NOWAIT);
++						GFP_NOWAIT | __GFP_NOWARN);
+ 		if (!xpd->auditallow)
+ 			goto error;
+ 	}
+ 	if (which & XPERMS_DONTAUDIT) {
+ 		xpd->dontaudit = kmem_cache_zalloc(avc_xperms_data_cachep,
+-						GFP_NOWAIT);
++						GFP_NOWAIT | __GFP_NOWARN);
+ 		if (!xpd->dontaudit)
+ 			goto error;
+ 	}
+@@ -344,7 +345,7 @@ static struct avc_xperms_node *avc_xperms_alloc(void)
+ {
+ 	struct avc_xperms_node *xp_node;
+ 
+-	xp_node = kmem_cache_zalloc(avc_xperms_cachep, GFP_NOWAIT);
++	xp_node = kmem_cache_zalloc(avc_xperms_cachep, GFP_NOWAIT | __GFP_NOWARN);
+ 	if (!xp_node)
+ 		return xp_node;
+ 	INIT_LIST_HEAD(&xp_node->xpd_head);
+@@ -500,7 +501,7 @@ static struct avc_node *avc_alloc_node(struct selinux_avc *avc)
+ {
+ 	struct avc_node *node;
+ 
+-	node = kmem_cache_zalloc(avc_node_cachep, GFP_NOWAIT);
++	node = kmem_cache_zalloc(avc_node_cachep, GFP_NOWAIT | __GFP_NOWARN);
+ 	if (!node)
+ 		goto out;
+ 
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index 334299357e715..b88c1a9538334 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -855,6 +855,8 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ 	if (format == SMK_FIXED24_FMT &&
+ 	    (count < SMK_CIPSOMIN || count > SMK_CIPSOMAX))
+ 		return -EINVAL;
++	if (count > PAGE_SIZE)
++		return -EINVAL;
+ 
+ 	data = memdup_user_nul(buf, count);
+ 	if (IS_ERR(data))
+diff --git a/sound/soc/tegra/tegra_alc5632.c b/sound/soc/tegra/tegra_alc5632.c
+index 8661877bf4c6f..13a956c6077b8 100644
+--- a/sound/soc/tegra/tegra_alc5632.c
++++ b/sound/soc/tegra/tegra_alc5632.c
+@@ -139,6 +139,7 @@ static struct snd_soc_dai_link tegra_alc5632_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_alc5632 = {
+ 	.name = "tegra-alc5632",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_alc5632_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_max98090.c b/sound/soc/tegra/tegra_max98090.c
+index 9d8e16473ab99..2fdf46bad2bff 100644
+--- a/sound/soc/tegra/tegra_max98090.c
++++ b/sound/soc/tegra/tegra_max98090.c
+@@ -182,6 +182,7 @@ static struct snd_soc_dai_link tegra_max98090_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_max98090 = {
+ 	.name = "tegra-max98090",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_max98090_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_rt5640.c b/sound/soc/tegra/tegra_rt5640.c
+index c73bd23b3d679..6c2689f5da224 100644
+--- a/sound/soc/tegra/tegra_rt5640.c
++++ b/sound/soc/tegra/tegra_rt5640.c
+@@ -132,6 +132,7 @@ static struct snd_soc_dai_link tegra_rt5640_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_rt5640 = {
+ 	.name = "tegra-rt5640",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_rt5640_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_rt5677.c b/sound/soc/tegra/tegra_rt5677.c
+index 7504507dd8b85..0588889d081a5 100644
+--- a/sound/soc/tegra/tegra_rt5677.c
++++ b/sound/soc/tegra/tegra_rt5677.c
+@@ -175,6 +175,7 @@ static struct snd_soc_dai_link tegra_rt5677_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_rt5677 = {
+ 	.name = "tegra-rt5677",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_rt5677_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_sgtl5000.c b/sound/soc/tegra/tegra_sgtl5000.c
+index e1dc8e7d337a2..3d35a57d8c0c4 100644
+--- a/sound/soc/tegra/tegra_sgtl5000.c
++++ b/sound/soc/tegra/tegra_sgtl5000.c
+@@ -97,6 +97,7 @@ static struct snd_soc_dai_link tegra_sgtl5000_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_sgtl5000 = {
+ 	.name = "tegra-sgtl5000",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_sgtl5000_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_wm8753.c b/sound/soc/tegra/tegra_wm8753.c
+index fa41fa366dafa..bdddda4eb9139 100644
+--- a/sound/soc/tegra/tegra_wm8753.c
++++ b/sound/soc/tegra/tegra_wm8753.c
+@@ -101,6 +101,7 @@ static struct snd_soc_dai_link tegra_wm8753_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_wm8753 = {
+ 	.name = "tegra-wm8753",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_wm8753_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_wm8903.c b/sound/soc/tegra/tegra_wm8903.c
+index ef6652aaac9b6..98adf93fb898d 100644
+--- a/sound/soc/tegra/tegra_wm8903.c
++++ b/sound/soc/tegra/tegra_wm8903.c
+@@ -235,6 +235,7 @@ static struct snd_soc_dai_link tegra_wm8903_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_wm8903 = {
+ 	.name = "tegra-wm8903",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_wm8903_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_wm9712.c b/sound/soc/tegra/tegra_wm9712.c
+index 726edfa21a29d..df7662258bc6d 100644
+--- a/sound/soc/tegra/tegra_wm9712.c
++++ b/sound/soc/tegra/tegra_wm9712.c
+@@ -54,6 +54,7 @@ static struct snd_soc_dai_link tegra_wm9712_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_wm9712 = {
+ 	.name = "tegra-wm9712",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_wm9712_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/trimslice.c b/sound/soc/tegra/trimslice.c
+index baae4cce7fc66..d8fbb22482d5f 100644
+--- a/sound/soc/tegra/trimslice.c
++++ b/sound/soc/tegra/trimslice.c
+@@ -94,6 +94,7 @@ static struct snd_soc_dai_link trimslice_tlv320aic23_dai = {
+ 
+ static struct snd_soc_card snd_soc_trimslice = {
+ 	.name = "tegra-trimslice",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &trimslice_tlv320aic23_dai,
+ 	.num_links = 1,
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_drops.sh b/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_drops.sh
+index f5abb1ebd3923..269b2680611b4 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_drops.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_drops.sh
+@@ -108,6 +108,9 @@ router_destroy()
+ 	__addr_add_del $rp1 del 192.0.2.2/24 2001:db8:1::2/64
+ 
+ 	tc qdisc del dev $rp2 clsact
++
++	ip link set dev $rp2 down
++	ip link set dev $rp1 down
+ }
+ 
+ setup_prepare()
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_exceptions.sh b/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_exceptions.sh
+index 1fedfc9da434f..1d157b1bd838a 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_exceptions.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_exceptions.sh
+@@ -111,6 +111,9 @@ router_destroy()
+ 	__addr_add_del $rp1 del 192.0.2.2/24 2001:db8:1::2/64
+ 
+ 	tc qdisc del dev $rp2 clsact
++
++	ip link set dev $rp2 down
++	ip link set dev $rp1 down
+ }
+ 
+ setup_prepare()
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_bridge.sh b/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_bridge.sh
+index 5cbff8038f84c..28a570006d4d9 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_bridge.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_bridge.sh
+@@ -93,7 +93,9 @@ switch_destroy()
+ 	lldptool -T -i $swp1 -V APP -d $(dscp_map 10) >/dev/null
+ 	lldpad_app_wait_del
+ 
++	ip link set dev $swp2 down
+ 	ip link set dev $swp2 nomaster
++	ip link set dev $swp1 down
+ 	ip link set dev $swp1 nomaster
+ 	ip link del dev br1
+ }
+diff --git a/tools/testing/selftests/lkdtm/tests.txt b/tools/testing/selftests/lkdtm/tests.txt
+index 74a8d329a72c8..9b84cba9e9114 100644
+--- a/tools/testing/selftests/lkdtm/tests.txt
++++ b/tools/testing/selftests/lkdtm/tests.txt
+@@ -11,7 +11,7 @@ CORRUPT_LIST_ADD list_add corruption
+ CORRUPT_LIST_DEL list_del corruption
+ STACK_GUARD_PAGE_LEADING
+ STACK_GUARD_PAGE_TRAILING
+-UNSET_SMEP CR4 bits went missing
++UNSET_SMEP pinned CR4 bits changed:
+ DOUBLE_FAULT
+ CORRUPT_PAC
+ UNALIGNED_LOAD_STORE_WRITE
+diff --git a/tools/testing/selftests/net/forwarding/pedit_dsfield.sh b/tools/testing/selftests/net/forwarding/pedit_dsfield.sh
+index 55eeacf592411..64fbd211d907b 100755
+--- a/tools/testing/selftests/net/forwarding/pedit_dsfield.sh
++++ b/tools/testing/selftests/net/forwarding/pedit_dsfield.sh
+@@ -75,7 +75,9 @@ switch_destroy()
+ 	tc qdisc del dev $swp2 clsact
+ 	tc qdisc del dev $swp1 clsact
+ 
++	ip link set dev $swp2 down
+ 	ip link set dev $swp2 nomaster
++	ip link set dev $swp1 down
+ 	ip link set dev $swp1 nomaster
+ 	ip link del dev br1
+ }
+diff --git a/tools/testing/selftests/net/forwarding/pedit_l4port.sh b/tools/testing/selftests/net/forwarding/pedit_l4port.sh
+index 5f20d289ee43c..10e594c551175 100755
+--- a/tools/testing/selftests/net/forwarding/pedit_l4port.sh
++++ b/tools/testing/selftests/net/forwarding/pedit_l4port.sh
+@@ -71,7 +71,9 @@ switch_destroy()
+ 	tc qdisc del dev $swp2 clsact
+ 	tc qdisc del dev $swp1 clsact
+ 
++	ip link set dev $swp2 down
+ 	ip link set dev $swp2 nomaster
++	ip link set dev $swp1 down
+ 	ip link set dev $swp1 nomaster
+ 	ip link del dev br1
+ }
+diff --git a/tools/testing/selftests/net/forwarding/skbedit_priority.sh b/tools/testing/selftests/net/forwarding/skbedit_priority.sh
+index e3bd8a6bb8b40..bde11dc27873c 100755
+--- a/tools/testing/selftests/net/forwarding/skbedit_priority.sh
++++ b/tools/testing/selftests/net/forwarding/skbedit_priority.sh
+@@ -72,7 +72,9 @@ switch_destroy()
+ 	tc qdisc del dev $swp2 clsact
+ 	tc qdisc del dev $swp1 clsact
+ 
++	ip link set dev $swp2 down
+ 	ip link set dev $swp2 nomaster
++	ip link set dev $swp1 down
+ 	ip link set dev $swp1 nomaster
+ 	ip link del dev br1
+ }
+diff --git a/tools/testing/selftests/resctrl/README b/tools/testing/selftests/resctrl/README
+index 6e5a0ffa18e8b..20502cb4791fc 100644
+--- a/tools/testing/selftests/resctrl/README
++++ b/tools/testing/selftests/resctrl/README
+@@ -47,7 +47,7 @@ Parameter '-h' shows usage information.
+ 
+ usage: resctrl_tests [-h] [-b "benchmark_cmd [options]"] [-t test list] [-n no_of_bits]
+         -b benchmark_cmd [options]: run specified benchmark for MBM, MBA and CQM default benchmark is builtin fill_buf
+-        -t test list: run tests specified in the test list, e.g. -t mbm, mba, cqm, cat
++        -t test list: run tests specified in the test list, e.g. -t mbm,mba,cqm,cat
+         -n no_of_bits: run cache tests using specified no of bits in cache bit mask
+         -p cpu_no: specify CPU number to run the test. 1 is default
+         -h: help
+diff --git a/tools/testing/selftests/resctrl/resctrl_tests.c b/tools/testing/selftests/resctrl/resctrl_tests.c
+index ac2269610aa9d..bd98746c6f858 100644
+--- a/tools/testing/selftests/resctrl/resctrl_tests.c
++++ b/tools/testing/selftests/resctrl/resctrl_tests.c
+@@ -40,7 +40,7 @@ static void cmd_help(void)
+ 	printf("\t-b benchmark_cmd [options]: run specified benchmark for MBM, MBA and CQM");
+ 	printf("\t default benchmark is builtin fill_buf\n");
+ 	printf("\t-t test list: run tests specified in the test list, ");
+-	printf("e.g. -t mbm, mba, cqm, cat\n");
++	printf("e.g. -t mbm,mba,cqm,cat\n");
+ 	printf("\t-n no_of_bits: run cache tests using specified no of bits in cache bit mask\n");
+ 	printf("\t-p cpu_no: specify CPU number to run the test. 1 is default\n");
+ 	printf("\t-h: help\n");
+@@ -98,7 +98,7 @@ int main(int argc, char **argv)
+ 
+ 					return -1;
+ 				}
+-				token = strtok(NULL, ":\t");
++				token = strtok(NULL, ",");
+ 			}
+ 			break;
+ 		case 'p':


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-20 15:44 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-07-20 15:44 UTC (permalink / raw
  To: gentoo-commits

commit:     a7657e6d2116620eaf018ac783e5d0e55e74bd98
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 20 15:43:58 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Tue Jul 20 15:44:14 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a7657e6d

Linux patch 5.10.52

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |     4 +
 1051_linux-5.10.52.patch | 11027 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11031 insertions(+)

diff --git a/0000_README b/0000_README
index 8176468..51e9403 100644
--- a/0000_README
+++ b/0000_README
@@ -247,6 +247,10 @@ Patch:  1050_linux-5.10.51.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.51
 
+Patch:  1051_linux-5.10.52.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.52
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1051_linux-5.10.52.patch b/1051_linux-5.10.52.patch
new file mode 100644
index 0000000..727b9d0
--- /dev/null
+++ b/1051_linux-5.10.52.patch
@@ -0,0 +1,11027 @@
+diff --git a/Documentation/devicetree/bindings/i2c/i2c-at91.txt b/Documentation/devicetree/bindings/i2c/i2c-at91.txt
+index 96c914e048f59..2015f50aed0f9 100644
+--- a/Documentation/devicetree/bindings/i2c/i2c-at91.txt
++++ b/Documentation/devicetree/bindings/i2c/i2c-at91.txt
+@@ -73,7 +73,7 @@ i2c0: i2c@f8034600 {
+ 	pinctrl-0 = <&pinctrl_i2c0>;
+ 	pinctrl-1 = <&pinctrl_i2c0_gpio>;
+ 	sda-gpios = <&pioA 30 GPIO_ACTIVE_HIGH>;
+-	scl-gpios = <&pioA 31 GPIO_ACTIVE_HIGH>;
++	scl-gpios = <&pioA 31 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 
+ 	wm8731: wm8731@1a {
+ 		compatible = "wm8731";
+diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
+index b8ee761c9922a..8c0fbdd8ce6fb 100644
+--- a/Documentation/filesystems/f2fs.rst
++++ b/Documentation/filesystems/f2fs.rst
+@@ -688,10 +688,10 @@ users.
+ ===================== ======================== ===================
+ User                  F2FS                     Block
+ ===================== ======================== ===================
+-                      META                     WRITE_LIFE_NOT_SET
+-                      HOT_NODE                 "
+-                      WARM_NODE                "
+-                      COLD_NODE                "
++N/A                   META                     WRITE_LIFE_NOT_SET
++N/A                   HOT_NODE                 "
++N/A                   WARM_NODE                "
++N/A                   COLD_NODE                "
+ ioctl(COLD)           COLD_DATA                WRITE_LIFE_EXTREME
+ extension list        "                        "
+ 
+@@ -717,10 +717,10 @@ WRITE_LIFE_LONG       "                        WRITE_LIFE_LONG
+ ===================== ======================== ===================
+ User                  F2FS                     Block
+ ===================== ======================== ===================
+-                      META                     WRITE_LIFE_MEDIUM;
+-                      HOT_NODE                 WRITE_LIFE_NOT_SET
+-                      WARM_NODE                "
+-                      COLD_NODE                WRITE_LIFE_NONE
++N/A                   META                     WRITE_LIFE_MEDIUM;
++N/A                   HOT_NODE                 WRITE_LIFE_NOT_SET
++N/A                   WARM_NODE                "
++N/A                   COLD_NODE                WRITE_LIFE_NONE
+ ioctl(COLD)           COLD_DATA                WRITE_LIFE_EXTREME
+ extension list        "                        "
+ 
+diff --git a/Makefile b/Makefile
+index d0fad1e774931..23d656936d405 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 51
++SUBLEVEL = 52
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/am335x-cm-t335.dts b/arch/arm/boot/dts/am335x-cm-t335.dts
+index c6fe9db660e2b..08c89f1768456 100644
+--- a/arch/arm/boot/dts/am335x-cm-t335.dts
++++ b/arch/arm/boot/dts/am335x-cm-t335.dts
+@@ -496,7 +496,7 @@ status = "okay";
+ 	status = "okay";
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&spi0_pins>;
+-	ti,pindir-d0-out-d1-in = <1>;
++	ti,pindir-d0-out-d1-in;
+ 	/* WLS1271 WiFi */
+ 	wlcore: wlcore@1 {
+ 		compatible = "ti,wl1271";
+diff --git a/arch/arm/boot/dts/am43x-epos-evm.dts b/arch/arm/boot/dts/am43x-epos-evm.dts
+index f517d1e843cf4..8b696107eef8c 100644
+--- a/arch/arm/boot/dts/am43x-epos-evm.dts
++++ b/arch/arm/boot/dts/am43x-epos-evm.dts
+@@ -860,7 +860,7 @@
+ 	pinctrl-names = "default", "sleep";
+ 	pinctrl-0 = <&spi0_pins_default>;
+ 	pinctrl-1 = <&spi0_pins_sleep>;
+-	ti,pindir-d0-out-d1-in = <1>;
++	ti,pindir-d0-out-d1-in;
+ };
+ 
+ &spi1 {
+@@ -868,7 +868,7 @@
+ 	pinctrl-names = "default", "sleep";
+ 	pinctrl-0 = <&spi1_pins_default>;
+ 	pinctrl-1 = <&spi1_pins_sleep>;
+-	ti,pindir-d0-out-d1-in = <1>;
++	ti,pindir-d0-out-d1-in;
+ };
+ 
+ &usb2_phy1 {
+diff --git a/arch/arm/boot/dts/am5718.dtsi b/arch/arm/boot/dts/am5718.dtsi
+index ebf4d3cc1cfbe..6d7530a48c73f 100644
+--- a/arch/arm/boot/dts/am5718.dtsi
++++ b/arch/arm/boot/dts/am5718.dtsi
+@@ -17,17 +17,13 @@
+  * VCP1, VCP2
+  * MLB
+  * ISS
+- * USB3, USB4
++ * USB3
+  */
+ 
+ &usb3_tm {
+ 	status = "disabled";
+ };
+ 
+-&usb4_tm {
+-	status = "disabled";
+-};
+-
+ &atl_tm {
+ 	status = "disabled";
+ };
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index ac3a99cf20793..72b0df6910bd5 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -515,27 +515,27 @@
+ 		      <0x1811b408 0x004>,
+ 		      <0x180293a0 0x01c>;
+ 		reg-names = "mspi", "bspi", "intr_regs", "intr_status_reg";
+-		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,
++		interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>,
+-			     <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>,
+-			     <GIC_SPI 77 IRQ_TYPE_LEVEL_HIGH>,
+-			     <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>;
+-		interrupt-names = "spi_lr_fullness_reached",
++			     <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>;
++		interrupt-names = "mspi_done",
++				  "mspi_halted",
++				  "spi_lr_fullness_reached",
+ 				  "spi_lr_session_aborted",
+ 				  "spi_lr_impatient",
+ 				  "spi_lr_session_done",
+-				  "spi_lr_overhead",
+-				  "mspi_done",
+-				  "mspi_halted";
++				  "spi_lr_overread";
+ 		clocks = <&iprocmed>;
+ 		clock-names = "iprocmed";
+ 		num-cs = <2>;
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		spi_nor: spi-nor@0 {
++		spi_nor: flash@0 {
+ 			compatible = "jedec,spi-nor";
+ 			reg = <0>;
+ 			spi-max-frequency = <20000000>;
+diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
+index a294a02f2d232..1dafce92fc767 100644
+--- a/arch/arm/boot/dts/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/dra7-l4.dtsi
+@@ -4095,28 +4095,6 @@
+ 			};
+ 		};
+ 
+-		usb4_tm: target-module@140000 {		/* 0x48940000, ap 75 3c.0 */
+-			compatible = "ti,sysc-omap4", "ti,sysc";
+-			reg = <0x140000 0x4>,
+-			      <0x140010 0x4>;
+-			reg-names = "rev", "sysc";
+-			ti,sysc-mask = <SYSC_OMAP4_DMADISABLE>;
+-			ti,sysc-midle = <SYSC_IDLE_FORCE>,
+-					<SYSC_IDLE_NO>,
+-					<SYSC_IDLE_SMART>,
+-					<SYSC_IDLE_SMART_WKUP>;
+-			ti,sysc-sidle = <SYSC_IDLE_FORCE>,
+-					<SYSC_IDLE_NO>,
+-					<SYSC_IDLE_SMART>,
+-					<SYSC_IDLE_SMART_WKUP>;
+-			/* Domains (P, C): l3init_pwrdm, l3init_clkdm */
+-			clocks = <&l3init_clkctrl DRA7_L3INIT_USB_OTG_SS4_CLKCTRL 0>;
+-			clock-names = "fck";
+-			#address-cells = <1>;
+-			#size-cells = <1>;
+-			ranges = <0x0 0x140000 0x20000>;
+-		};
+-
+ 		target-module@170000 {			/* 0x48970000, ap 21 0a.0 */
+ 			compatible = "ti,sysc-omap4", "ti,sysc";
+ 			reg = <0x170010 0x4>;
+diff --git a/arch/arm/boot/dts/dra71x.dtsi b/arch/arm/boot/dts/dra71x.dtsi
+index cad0e4a2bd8df..9c270d8f75d5b 100644
+--- a/arch/arm/boot/dts/dra71x.dtsi
++++ b/arch/arm/boot/dts/dra71x.dtsi
+@@ -11,7 +11,3 @@
+ &rtctarget {
+ 	status = "disabled";
+ };
+-
+-&usb4_tm {
+-	status = "disabled";
+-};
+diff --git a/arch/arm/boot/dts/dra72x.dtsi b/arch/arm/boot/dts/dra72x.dtsi
+index d403acc754b68..f3e934ef7d3e2 100644
+--- a/arch/arm/boot/dts/dra72x.dtsi
++++ b/arch/arm/boot/dts/dra72x.dtsi
+@@ -108,7 +108,3 @@
+ &pcie2_rc {
+ 	compatible = "ti,dra726-pcie-rc", "ti,dra7-pcie";
+ };
+-
+-&usb4_tm {
+-	status = "disabled";
+-};
+diff --git a/arch/arm/boot/dts/dra74x.dtsi b/arch/arm/boot/dts/dra74x.dtsi
+index e1850d6c841a7..b4e07d99ffde1 100644
+--- a/arch/arm/boot/dts/dra74x.dtsi
++++ b/arch/arm/boot/dts/dra74x.dtsi
+@@ -49,49 +49,6 @@
+ 			reg = <0x41500000 0x100>;
+ 		};
+ 
+-		target-module@48940000 {
+-			compatible = "ti,sysc-omap4", "ti,sysc";
+-			reg = <0x48940000 0x4>,
+-			      <0x48940010 0x4>;
+-			reg-names = "rev", "sysc";
+-			ti,sysc-mask = <SYSC_OMAP4_DMADISABLE>;
+-			ti,sysc-midle = <SYSC_IDLE_FORCE>,
+-					<SYSC_IDLE_NO>,
+-					<SYSC_IDLE_SMART>,
+-					<SYSC_IDLE_SMART_WKUP>;
+-			ti,sysc-sidle = <SYSC_IDLE_FORCE>,
+-					<SYSC_IDLE_NO>,
+-					<SYSC_IDLE_SMART>,
+-					<SYSC_IDLE_SMART_WKUP>;
+-			clocks = <&l3init_clkctrl DRA7_L3INIT_USB_OTG_SS4_CLKCTRL 0>;
+-			clock-names = "fck";
+-			#address-cells = <1>;
+-			#size-cells = <1>;
+-			ranges = <0x0 0x48940000 0x20000>;
+-
+-			omap_dwc3_4: omap_dwc3_4@0 {
+-				compatible = "ti,dwc3";
+-				reg = <0 0x10000>;
+-				interrupts = <GIC_SPI 346 IRQ_TYPE_LEVEL_HIGH>;
+-				#address-cells = <1>;
+-				#size-cells = <1>;
+-				utmi-mode = <2>;
+-				ranges;
+-				status = "disabled";
+-				usb4: usb@10000 {
+-					compatible = "snps,dwc3";
+-					reg = <0x10000 0x17000>;
+-					interrupts = <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
+-						     <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
+-						     <GIC_SPI 346 IRQ_TYPE_LEVEL_HIGH>;
+-					interrupt-names = "peripheral",
+-							  "host",
+-							  "otg";
+-					maximum-speed = "high-speed";
+-					dr_mode = "otg";
+-				};
+-			};
+-		};
+ 
+ 		target-module@41501000 {
+ 			compatible = "ti,sysc-omap2", "ti,sysc";
+@@ -224,3 +181,52 @@
+ &pcie2_rc {
+ 	compatible = "ti,dra746-pcie-rc", "ti,dra7-pcie";
+ };
++
++&l4_per3 {
++	segment@0 {
++		usb4_tm: target-module@140000 {         /* 0x48940000, ap 75 3c.0 */
++			compatible = "ti,sysc-omap4", "ti,sysc";
++			reg = <0x140000 0x4>,
++			      <0x140010 0x4>;
++			reg-names = "rev", "sysc";
++			ti,sysc-mask = <SYSC_OMAP4_DMADISABLE>;
++			ti,sysc-midle = <SYSC_IDLE_FORCE>,
++					<SYSC_IDLE_NO>,
++					<SYSC_IDLE_SMART>,
++					<SYSC_IDLE_SMART_WKUP>;
++			ti,sysc-sidle = <SYSC_IDLE_FORCE>,
++					<SYSC_IDLE_NO>,
++					<SYSC_IDLE_SMART>,
++					<SYSC_IDLE_SMART_WKUP>;
++			/* Domains (P, C): l3init_pwrdm, l3init_clkdm */
++			clocks = <&l3init_clkctrl DRA7_L3INIT_USB_OTG_SS4_CLKCTRL 0>;
++			clock-names = "fck";
++			#address-cells = <1>;
++			#size-cells = <1>;
++			ranges = <0x0 0x140000 0x20000>;
++
++			omap_dwc3_4: omap_dwc3_4@0 {
++				compatible = "ti,dwc3";
++				reg = <0 0x10000>;
++				interrupts = <GIC_SPI 346 IRQ_TYPE_LEVEL_HIGH>;
++				#address-cells = <1>;
++				#size-cells = <1>;
++				utmi-mode = <2>;
++				ranges;
++				status = "disabled";
++				usb4: usb@10000 {
++					compatible = "snps,dwc3";
++					reg = <0x10000 0x17000>;
++					interrupts = <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
++						     <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
++						     <GIC_SPI 346 IRQ_TYPE_LEVEL_HIGH>;
++					interrupt-names = "peripheral",
++							  "host",
++							  "otg";
++					maximum-speed = "high-speed";
++					dr_mode = "otg";
++				};
++			};
++		};
++	};
++};
+diff --git a/arch/arm/boot/dts/exynos5422-odroidhc1.dts b/arch/arm/boot/dts/exynos5422-odroidhc1.dts
+index 8126592602786..88f5c150a30a1 100644
+--- a/arch/arm/boot/dts/exynos5422-odroidhc1.dts
++++ b/arch/arm/boot/dts/exynos5422-odroidhc1.dts
+@@ -22,7 +22,7 @@
+ 			label = "blue:heartbeat";
+ 			pwms = <&pwm 2 2000000 0>;
+ 			pwm-names = "pwm2";
+-			max_brightness = <255>;
++			max-brightness = <255>;
+ 			linux,default-trigger = "heartbeat";
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/exynos5422-odroidxu4.dts b/arch/arm/boot/dts/exynos5422-odroidxu4.dts
+index ddd55d3bcadd0..4ef0dbc84b0ca 100644
+--- a/arch/arm/boot/dts/exynos5422-odroidxu4.dts
++++ b/arch/arm/boot/dts/exynos5422-odroidxu4.dts
+@@ -24,7 +24,7 @@
+ 			label = "blue:heartbeat";
+ 			pwms = <&pwm 2 2000000 0>;
+ 			pwm-names = "pwm2";
+-			max_brightness = <255>;
++			max-brightness = <255>;
+ 			linux,default-trigger = "heartbeat";
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/exynos54xx-odroidxu-leds.dtsi b/arch/arm/boot/dts/exynos54xx-odroidxu-leds.dtsi
+index 56acd832f0b3c..16e1087ec7172 100644
+--- a/arch/arm/boot/dts/exynos54xx-odroidxu-leds.dtsi
++++ b/arch/arm/boot/dts/exynos54xx-odroidxu-leds.dtsi
+@@ -22,7 +22,7 @@
+ 			 * Green LED is much brighter than the others
+ 			 * so limit its max brightness
+ 			 */
+-			max_brightness = <127>;
++			max-brightness = <127>;
+ 			linux,default-trigger = "mmc0";
+ 		};
+ 
+@@ -30,7 +30,7 @@
+ 			label = "blue:heartbeat";
+ 			pwms = <&pwm 2 2000000 0>;
+ 			pwm-names = "pwm2";
+-			max_brightness = <255>;
++			max-brightness = <255>;
+ 			linux,default-trigger = "heartbeat";
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/gemini-rut1xx.dts b/arch/arm/boot/dts/gemini-rut1xx.dts
+index 9611ddf067927..08091d2a64e15 100644
+--- a/arch/arm/boot/dts/gemini-rut1xx.dts
++++ b/arch/arm/boot/dts/gemini-rut1xx.dts
+@@ -125,18 +125,6 @@
+ 			};
+ 		};
+ 
+-		ethernet@60000000 {
+-			status = "okay";
+-
+-			ethernet-port@0 {
+-				phy-mode = "rgmii";
+-				phy-handle = <&phy0>;
+-			};
+-			ethernet-port@1 {
+-				/* Not used in this platform */
+-			};
+-		};
+-
+ 		usb@68000000 {
+ 			status = "okay";
+ 		};
+diff --git a/arch/arm/boot/dts/imx6q-dhcom-som.dtsi b/arch/arm/boot/dts/imx6q-dhcom-som.dtsi
+index d0768ae429faa..e3de2b487cf49 100644
+--- a/arch/arm/boot/dts/imx6q-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/imx6q-dhcom-som.dtsi
+@@ -96,30 +96,40 @@
+ 			reg = <0>;
+ 			max-speed = <100>;
+ 			reset-gpios = <&gpio5 0 GPIO_ACTIVE_LOW>;
+-			reset-delay-us = <1000>;
+-			reset-post-delay-us = <1000>;
++			reset-assert-us = <1000>;
++			reset-deassert-us = <1000>;
++			smsc,disable-energy-detect; /* Make plugin detection reliable */
+ 		};
+ 	};
+ };
+ 
+ &i2c1 {
+ 	clock-frequency = <100000>;
+-	pinctrl-names = "default";
++	pinctrl-names = "default", "gpio";
+ 	pinctrl-0 = <&pinctrl_i2c1>;
++	pinctrl-1 = <&pinctrl_i2c1_gpio>;
++	scl-gpios = <&gpio3 21 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
++	sda-gpios = <&gpio3 28 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 	status = "okay";
+ };
+ 
+ &i2c2 {
+ 	clock-frequency = <100000>;
+-	pinctrl-names = "default";
++	pinctrl-names = "default", "gpio";
+ 	pinctrl-0 = <&pinctrl_i2c2>;
++	pinctrl-1 = <&pinctrl_i2c2_gpio>;
++	scl-gpios = <&gpio4 12 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
++	sda-gpios = <&gpio4 13 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 	status = "okay";
+ };
+ 
+ &i2c3 {
+ 	clock-frequency = <100000>;
+-	pinctrl-names = "default";
++	pinctrl-names = "default", "gpio";
+ 	pinctrl-0 = <&pinctrl_i2c3>;
++	pinctrl-1 = <&pinctrl_i2c3_gpio>;
++	scl-gpios = <&gpio1 3 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
++	sda-gpios = <&gpio1 6 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 	status = "okay";
+ 
+ 	ltc3676: pmic@3c {
+@@ -285,6 +295,13 @@
+ 		>;
+ 	};
+ 
++	pinctrl_i2c1_gpio: i2c1-gpio-grp {
++		fsl,pins = <
++			MX6QDL_PAD_EIM_D21__GPIO3_IO21		0x4001b8b1
++			MX6QDL_PAD_EIM_D28__GPIO3_IO28		0x4001b8b1
++		>;
++	};
++
+ 	pinctrl_i2c2: i2c2-grp {
+ 		fsl,pins = <
+ 			MX6QDL_PAD_KEY_COL3__I2C2_SCL		0x4001b8b1
+@@ -292,6 +309,13 @@
+ 		>;
+ 	};
+ 
++	pinctrl_i2c2_gpio: i2c2-gpio-grp {
++		fsl,pins = <
++			MX6QDL_PAD_KEY_COL3__GPIO4_IO12		0x4001b8b1
++			MX6QDL_PAD_KEY_ROW3__GPIO4_IO13		0x4001b8b1
++		>;
++	};
++
+ 	pinctrl_i2c3: i2c3-grp {
+ 		fsl,pins = <
+ 			MX6QDL_PAD_GPIO_3__I2C3_SCL		0x4001b8b1
+@@ -299,6 +323,13 @@
+ 		>;
+ 	};
+ 
++	pinctrl_i2c3_gpio: i2c3-gpio-grp {
++		fsl,pins = <
++			MX6QDL_PAD_GPIO_3__GPIO1_IO03		0x4001b8b1
++			MX6QDL_PAD_GPIO_6__GPIO1_IO06		0x4001b8b1
++		>;
++	};
++
+ 	pinctrl_pmic_hw300: pmic-hw300-grp {
+ 		fsl,pins = <
+ 			MX6QDL_PAD_EIM_A25__GPIO5_IO02		0x1B0B0
+diff --git a/arch/arm/boot/dts/r8a7779-marzen.dts b/arch/arm/boot/dts/r8a7779-marzen.dts
+index d2240b89ee529..4658453234959 100644
+--- a/arch/arm/boot/dts/r8a7779-marzen.dts
++++ b/arch/arm/boot/dts/r8a7779-marzen.dts
+@@ -145,7 +145,7 @@
+ 	status = "okay";
+ 
+ 	clocks = <&mstp1_clks R8A7779_CLK_DU>, <&x3_clk>;
+-	clock-names = "du", "dclkin.0";
++	clock-names = "du.0", "dclkin.0";
+ 
+ 	ports {
+ 		port@0 {
+diff --git a/arch/arm/boot/dts/r8a7779.dtsi b/arch/arm/boot/dts/r8a7779.dtsi
+index 74d7e9084eabe..3c5fcdfe16b87 100644
+--- a/arch/arm/boot/dts/r8a7779.dtsi
++++ b/arch/arm/boot/dts/r8a7779.dtsi
+@@ -463,6 +463,7 @@
+ 		reg = <0xfff80000 0x40000>;
+ 		interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&mstp1_clks R8A7779_CLK_DU>;
++		clock-names = "du.0";
+ 		power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
+ 		status = "disabled";
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+index 2d027dafb7bce..27f19575fada6 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+@@ -118,7 +118,6 @@
+ 	max-speed = <100>;
+ 	phy-handle = <&phy0>;
+ 	st,eth-ref-clk-sel;
+-	phy-reset-gpios = <&gpioh 3 GPIO_ACTIVE_LOW>;
+ 
+ 	mdio0 {
+ 		#address-cells = <1>;
+@@ -127,6 +126,15 @@
+ 
+ 		phy0: ethernet-phy@1 {
+ 			reg = <1>;
++			/* LAN8710Ai */
++			compatible = "ethernet-phy-id0007.c0f0",
++				     "ethernet-phy-ieee802.3-c22";
++			clocks = <&rcc ETHCK_K>;
++			reset-gpios = <&gpioh 3 GPIO_ACTIVE_LOW>;
++			reset-assert-us = <500>;
++			reset-deassert-us = <500>;
++			interrupt-parent = <&gpioi>;
++			interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts b/arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts
+index 97f497854e05d..d05fa679dcd30 100644
+--- a/arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts
++++ b/arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts
+@@ -85,7 +85,7 @@
+ 	pinctrl-0 = <&emac_rgmii_pins>;
+ 	phy-supply = <&reg_gmac_3v3>;
+ 	phy-handle = <&ext_rgmii_phy>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 
+ 	status = "okay";
+ };
+diff --git a/arch/arm/mach-exynos/exynos.c b/arch/arm/mach-exynos/exynos.c
+index 700763e07083d..83d1d1327f96e 100644
+--- a/arch/arm/mach-exynos/exynos.c
++++ b/arch/arm/mach-exynos/exynos.c
+@@ -55,6 +55,7 @@ void __init exynos_sysram_init(void)
+ 		sysram_base_addr = of_iomap(node, 0);
+ 		sysram_base_phys = of_translate_address(node,
+ 					   of_get_address(node, 0, NULL, NULL));
++		of_node_put(node);
+ 		break;
+ 	}
+ 
+@@ -62,6 +63,7 @@ void __init exynos_sysram_init(void)
+ 		if (!of_device_is_available(node))
+ 			continue;
+ 		sysram_ns_base_addr = of_iomap(node, 0);
++		of_node_put(node);
+ 		break;
+ 	}
+ }
+diff --git a/arch/arm/probes/kprobes/test-thumb.c b/arch/arm/probes/kprobes/test-thumb.c
+index 456c181a7bfe8..4e11f0b760f89 100644
+--- a/arch/arm/probes/kprobes/test-thumb.c
++++ b/arch/arm/probes/kprobes/test-thumb.c
+@@ -441,21 +441,21 @@ void kprobe_thumb32_test_cases(void)
+ 		"3:	mvn	r0, r0	\n\t"
+ 		"2:	nop		\n\t")
+ 
+-	TEST_RX("tbh	[pc, r",7, (9f-(1f+4))>>1,"]",
++	TEST_RX("tbh	[pc, r",7, (9f-(1f+4))>>1,", lsl #1]",
+ 		"9:			\n\t"
+ 		".short	(2f-1b-4)>>1	\n\t"
+ 		".short	(3f-1b-4)>>1	\n\t"
+ 		"3:	mvn	r0, r0	\n\t"
+ 		"2:	nop		\n\t")
+ 
+-	TEST_RX("tbh	[pc, r",12, ((9f-(1f+4))>>1)+1,"]",
++	TEST_RX("tbh	[pc, r",12, ((9f-(1f+4))>>1)+1,", lsl #1]",
+ 		"9:			\n\t"
+ 		".short	(2f-1b-4)>>1	\n\t"
+ 		".short	(3f-1b-4)>>1	\n\t"
+ 		"3:	mvn	r0, r0	\n\t"
+ 		"2:	nop		\n\t")
+ 
+-	TEST_RRX("tbh	[r",1,9f, ", r",14,1,"]",
++	TEST_RRX("tbh	[r",1,9f, ", r",14,1,", lsl #1]",
+ 		"9:			\n\t"
+ 		".short	(2f-1b-4)>>1	\n\t"
+ 		".short	(3f-1b-4)>>1	\n\t"
+@@ -468,10 +468,10 @@ void kprobe_thumb32_test_cases(void)
+ 
+ 	TEST_UNSUPPORTED("strexb	r0, r1, [r2]")
+ 	TEST_UNSUPPORTED("strexh	r0, r1, [r2]")
+-	TEST_UNSUPPORTED("strexd	r0, r1, [r2]")
++	TEST_UNSUPPORTED("strexd	r0, r1, r2, [r2]")
+ 	TEST_UNSUPPORTED("ldrexb	r0, [r1]")
+ 	TEST_UNSUPPORTED("ldrexh	r0, [r1]")
+-	TEST_UNSUPPORTED("ldrexd	r0, [r1]")
++	TEST_UNSUPPORTED("ldrexd	r0, r1, [r1]")
+ 
+ 	TEST_GROUP("Data-processing (shifted register) and (modified immediate)")
+ 
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
+index d4069749d7216..068cbd955bfc2 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
+@@ -79,7 +79,7 @@
+ &emac {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&rgmii_pins>;
+-	phy-mode = "rgmii-id";
++	phy-mode = "rgmii-txid";
+ 	phy-handle = <&ext_rgmii_phy>;
+ 	phy-supply = <&reg_dc1sw>;
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+index bf875589d3640..5b2a616c6257b 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+@@ -622,6 +622,8 @@ edp_brij_i2c: &i2c2 {
+ 		clocks = <&rpmhcc RPMH_LN_BB_CLK3>;
+ 		clock-names = "refclk";
+ 
++		no-hpd;
++
+ 		ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+index c15f1c571eb0d..db091fa751151 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+@@ -76,6 +76,7 @@
+ 			opp-hz = /bits/ 64 <1500000000>;
+ 			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
++			opp-suspend;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/renesas/r8a77960.dtsi b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+index f379c8d1511d9..fa9567ed55e4d 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77960.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+@@ -63,18 +63,19 @@
+ 
+ 		opp-500000000 {
+ 			opp-hz = /bits/ 64 <500000000>;
+-			opp-microvolt = <820000>;
++			opp-microvolt = <830000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+-			opp-microvolt = <820000>;
++			opp-microvolt = <830000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1500000000 {
+ 			opp-hz = /bits/ 64 <1500000000>;
+-			opp-microvolt = <820000>;
++			opp-microvolt = <830000>;
+ 			clock-latency-ns = <300000>;
++			opp-suspend;
+ 		};
+ 		opp-1600000000 {
+ 			opp-hz = /bits/ 64 <1600000000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77961.dtsi b/arch/arm64/boot/dts/renesas/r8a77961.dtsi
+index 1ba30313c8b82..b23f49b89cadc 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77961.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77961.dtsi
+@@ -52,18 +52,19 @@
+ 
+ 		opp-500000000 {
+ 			opp-hz = /bits/ 64 <500000000>;
+-			opp-microvolt = <820000>;
++			opp-microvolt = <830000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+-			opp-microvolt = <820000>;
++			opp-microvolt = <830000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1500000000 {
+ 			opp-hz = /bits/ 64 <1500000000>;
+-			opp-microvolt = <820000>;
++			opp-microvolt = <830000>;
+ 			clock-latency-ns = <300000>;
++			opp-suspend;
+ 		};
+ 		opp-1600000000 {
+ 			opp-hz = /bits/ 64 <1600000000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts b/arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts
+index 668a1ece9af00..0c66cc0a13674 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts
++++ b/arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts
+@@ -59,7 +59,7 @@
+ 	memory@48000000 {
+ 		device_type = "memory";
+ 		/* first 128MB is reserved for secure area. */
+-		reg = <0x0 0x48000000 0x0 0x38000000>;
++		reg = <0x0 0x48000000 0x0 0x78000000>;
+ 	};
+ 
+ 	osc5_clk: osc5-clock {
+diff --git a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+index 86ec32a919d29..bfbb53bf53757 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+@@ -111,7 +111,6 @@
+ 			      <0x0 0xf1060000 0 0x110000>;
+ 			interrupts = <GIC_PPI 9
+ 				      (GIC_CPU_MASK_SIMPLE(1) | IRQ_TYPE_LEVEL_HIGH)>;
+-			power-domains = <&sysc R8A779A0_PD_ALWAYS_ON>;
+ 		};
+ 
+ 		prr: chipid@fff00044 {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-roc-pc.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-roc-pc.dtsi
+index 20309076dbac0..35b7ab3bf10c6 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-roc-pc.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-roc-pc.dtsi
+@@ -384,6 +384,7 @@
+ 
+ 			vcc_sdio: LDO_REG4 {
+ 				regulator-name = "vcc_sdio";
++				regulator-always-on;
+ 				regulator-boot-on;
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <3000000>;
+@@ -488,6 +489,8 @@
+ 		regulator-min-microvolt = <712500>;
+ 		regulator-max-microvolt = <1500000>;
+ 		regulator-ramp-delay = <1000>;
++		regulator-always-on;
++		regulator-boot-on;
+ 		vin-supply = <&vcc3v3_sys>;
+ 
+ 		regulator-state-mem {
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index 689538244392c..5832ad830ed14 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -446,6 +446,7 @@
+ 					  "otg";
+ 			maximum-speed = "super-speed";
+ 			dr_mode = "otg";
++			cdns,phyrst-a-enable;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
+index 52e121155563a..7cd31ac67f880 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
+@@ -560,6 +560,10 @@
+ 	status = "okay";
+ };
+ 
++&cmn_refclk1 {
++	clock-frequency = <100000000>;
++};
++
+ &serdes0 {
+ 	serdes0_pcie_link: link@0 {
+ 		reg = <0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+index c66ded9079be4..6ffdebd601223 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+@@ -8,6 +8,20 @@
+ #include <dt-bindings/mux/mux.h>
+ #include <dt-bindings/mux/ti-serdes.h>
+ 
++/ {
++	cmn_refclk: clock-cmnrefclk {
++		#clock-cells = <0>;
++		compatible = "fixed-clock";
++		clock-frequency = <0>;
++	};
++
++	cmn_refclk1: clock-cmnrefclk1 {
++		#clock-cells = <0>;
++		compatible = "fixed-clock";
++		clock-frequency = <0>;
++	};
++};
++
+ &cbass_main {
+ 	msmc_ram: sram@70000000 {
+ 		compatible = "mmio-sram";
+@@ -369,24 +383,12 @@
+ 		pinctrl-single,function-mask = <0xffffffff>;
+ 	};
+ 
+-	dummy_cmn_refclk: dummy-cmn-refclk {
+-		#clock-cells = <0>;
+-		compatible = "fixed-clock";
+-		clock-frequency = <100000000>;
+-	};
+-
+-	dummy_cmn_refclk1: dummy-cmn-refclk1 {
+-		#clock-cells = <0>;
+-		compatible = "fixed-clock";
+-		clock-frequency = <100000000>;
+-	};
+-
+ 	serdes_wiz0: wiz@5000000 {
+ 		compatible = "ti,j721e-wiz-16g";
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		power-domains = <&k3_pds 292 TI_SCI_PD_EXCLUSIVE>;
+-		clocks = <&k3_clks 292 5>, <&k3_clks 292 11>, <&dummy_cmn_refclk>;
++		clocks = <&k3_clks 292 5>, <&k3_clks 292 11>, <&cmn_refclk>;
+ 		clock-names = "fck", "core_ref_clk", "ext_ref_clk";
+ 		assigned-clocks = <&k3_clks 292 11>, <&k3_clks 292 0>;
+ 		assigned-clock-parents = <&k3_clks 292 15>, <&k3_clks 292 4>;
+@@ -395,21 +397,21 @@
+ 		ranges = <0x5000000 0x0 0x5000000 0x10000>;
+ 
+ 		wiz0_pll0_refclk: pll0-refclk {
+-			clocks = <&k3_clks 292 11>, <&dummy_cmn_refclk>;
++			clocks = <&k3_clks 292 11>, <&cmn_refclk>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz0_pll0_refclk>;
+ 			assigned-clock-parents = <&k3_clks 292 11>;
+ 		};
+ 
+ 		wiz0_pll1_refclk: pll1-refclk {
+-			clocks = <&k3_clks 292 0>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 292 0>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz0_pll1_refclk>;
+ 			assigned-clock-parents = <&k3_clks 292 0>;
+ 		};
+ 
+ 		wiz0_refclk_dig: refclk-dig {
+-			clocks = <&k3_clks 292 11>, <&k3_clks 292 0>, <&dummy_cmn_refclk>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 292 11>, <&k3_clks 292 0>, <&cmn_refclk>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz0_refclk_dig>;
+ 			assigned-clock-parents = <&k3_clks 292 11>;
+@@ -443,7 +445,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		power-domains = <&k3_pds 293 TI_SCI_PD_EXCLUSIVE>;
+-		clocks = <&k3_clks 293 5>, <&k3_clks 293 13>, <&dummy_cmn_refclk>;
++		clocks = <&k3_clks 293 5>, <&k3_clks 293 13>, <&cmn_refclk>;
+ 		clock-names = "fck", "core_ref_clk", "ext_ref_clk";
+ 		assigned-clocks = <&k3_clks 293 13>, <&k3_clks 293 0>;
+ 		assigned-clock-parents = <&k3_clks 293 17>, <&k3_clks 293 4>;
+@@ -452,21 +454,21 @@
+ 		ranges = <0x5010000 0x0 0x5010000 0x10000>;
+ 
+ 		wiz1_pll0_refclk: pll0-refclk {
+-			clocks = <&k3_clks 293 13>, <&dummy_cmn_refclk>;
++			clocks = <&k3_clks 293 13>, <&cmn_refclk>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz1_pll0_refclk>;
+ 			assigned-clock-parents = <&k3_clks 293 13>;
+ 		};
+ 
+ 		wiz1_pll1_refclk: pll1-refclk {
+-			clocks = <&k3_clks 293 0>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 293 0>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz1_pll1_refclk>;
+ 			assigned-clock-parents = <&k3_clks 293 0>;
+ 		};
+ 
+ 		wiz1_refclk_dig: refclk-dig {
+-			clocks = <&k3_clks 293 13>, <&k3_clks 293 0>, <&dummy_cmn_refclk>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 293 13>, <&k3_clks 293 0>, <&cmn_refclk>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz1_refclk_dig>;
+ 			assigned-clock-parents = <&k3_clks 293 13>;
+@@ -500,7 +502,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		power-domains = <&k3_pds 294 TI_SCI_PD_EXCLUSIVE>;
+-		clocks = <&k3_clks 294 5>, <&k3_clks 294 11>, <&dummy_cmn_refclk>;
++		clocks = <&k3_clks 294 5>, <&k3_clks 294 11>, <&cmn_refclk>;
+ 		clock-names = "fck", "core_ref_clk", "ext_ref_clk";
+ 		assigned-clocks = <&k3_clks 294 11>, <&k3_clks 294 0>;
+ 		assigned-clock-parents = <&k3_clks 294 15>, <&k3_clks 294 4>;
+@@ -509,21 +511,21 @@
+ 		ranges = <0x5020000 0x0 0x5020000 0x10000>;
+ 
+ 		wiz2_pll0_refclk: pll0-refclk {
+-			clocks = <&k3_clks 294 11>, <&dummy_cmn_refclk>;
++			clocks = <&k3_clks 294 11>, <&cmn_refclk>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz2_pll0_refclk>;
+ 			assigned-clock-parents = <&k3_clks 294 11>;
+ 		};
+ 
+ 		wiz2_pll1_refclk: pll1-refclk {
+-			clocks = <&k3_clks 294 0>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 294 0>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz2_pll1_refclk>;
+ 			assigned-clock-parents = <&k3_clks 294 0>;
+ 		};
+ 
+ 		wiz2_refclk_dig: refclk-dig {
+-			clocks = <&k3_clks 294 11>, <&k3_clks 294 0>, <&dummy_cmn_refclk>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 294 11>, <&k3_clks 294 0>, <&cmn_refclk>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz2_refclk_dig>;
+ 			assigned-clock-parents = <&k3_clks 294 11>;
+@@ -557,7 +559,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		power-domains = <&k3_pds 295 TI_SCI_PD_EXCLUSIVE>;
+-		clocks = <&k3_clks 295 5>, <&k3_clks 295 9>, <&dummy_cmn_refclk>;
++		clocks = <&k3_clks 295 5>, <&k3_clks 295 9>, <&cmn_refclk>;
+ 		clock-names = "fck", "core_ref_clk", "ext_ref_clk";
+ 		assigned-clocks = <&k3_clks 295 9>, <&k3_clks 295 0>;
+ 		assigned-clock-parents = <&k3_clks 295 13>, <&k3_clks 295 4>;
+@@ -566,21 +568,21 @@
+ 		ranges = <0x5030000 0x0 0x5030000 0x10000>;
+ 
+ 		wiz3_pll0_refclk: pll0-refclk {
+-			clocks = <&k3_clks 295 9>, <&dummy_cmn_refclk>;
++			clocks = <&k3_clks 295 9>, <&cmn_refclk>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz3_pll0_refclk>;
+ 			assigned-clock-parents = <&k3_clks 295 9>;
+ 		};
+ 
+ 		wiz3_pll1_refclk: pll1-refclk {
+-			clocks = <&k3_clks 295 0>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 295 0>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz3_pll1_refclk>;
+ 			assigned-clock-parents = <&k3_clks 295 0>;
+ 		};
+ 
+ 		wiz3_refclk_dig: refclk-dig {
+-			clocks = <&k3_clks 295 9>, <&k3_clks 295 0>, <&dummy_cmn_refclk>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 295 9>, <&k3_clks 295 0>, <&cmn_refclk>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz3_refclk_dig>;
+ 			assigned-clock-parents = <&k3_clks 295 9>;
+diff --git a/arch/hexagon/kernel/vmlinux.lds.S b/arch/hexagon/kernel/vmlinux.lds.S
+index 35b18e55eae80..57465bff1fe49 100644
+--- a/arch/hexagon/kernel/vmlinux.lds.S
++++ b/arch/hexagon/kernel/vmlinux.lds.S
+@@ -38,6 +38,8 @@ SECTIONS
+ 	.text : AT(ADDR(.text)) {
+ 		_text = .;
+ 		TEXT_TEXT
++		IRQENTRY_TEXT
++		SOFTIRQENTRY_TEXT
+ 		SCHED_TEXT
+ 		CPUIDLE_TEXT
+ 		LOCK_TEXT
+@@ -59,14 +61,9 @@ SECTIONS
+ 
+ 	_end = .;
+ 
+-	/DISCARD/ : {
+-		EXIT_TEXT
+-		EXIT_DATA
+-		EXIT_CALL
+-	}
+-
+ 	STABS_DEBUG
+ 	DWARF_DEBUG
+ 	ELF_DETAILS
+ 
++	DISCARDS
+ }
+diff --git a/arch/mips/boot/compressed/Makefile b/arch/mips/boot/compressed/Makefile
+index 337ab1d18cc1f..eae0fad30f27e 100644
+--- a/arch/mips/boot/compressed/Makefile
++++ b/arch/mips/boot/compressed/Makefile
+@@ -39,7 +39,7 @@ KCOV_INSTRUMENT		:= n
+ UBSAN_SANITIZE := n
+ 
+ # decompressor objects (linked with vmlinuz)
+-vmlinuzobjs-y := $(obj)/head.o $(obj)/decompress.o $(obj)/string.o
++vmlinuzobjs-y := $(obj)/head.o $(obj)/decompress.o $(obj)/string.o $(obj)/bswapsi.o
+ 
+ ifdef CONFIG_DEBUG_ZBOOT
+ vmlinuzobjs-$(CONFIG_DEBUG_ZBOOT)		   += $(obj)/dbg.o
+@@ -53,7 +53,7 @@ extra-y += uart-ath79.c
+ $(obj)/uart-ath79.c: $(srctree)/arch/mips/ath79/early_printk.c
+ 	$(call cmd,shipped)
+ 
+-vmlinuzobjs-$(CONFIG_KERNEL_XZ) += $(obj)/ashldi3.o $(obj)/bswapsi.o
++vmlinuzobjs-$(CONFIG_KERNEL_XZ) += $(obj)/ashldi3.o
+ 
+ extra-y += ashldi3.c
+ $(obj)/ashldi3.c: $(obj)/%.c: $(srctree)/lib/%.c FORCE
+diff --git a/arch/mips/boot/compressed/decompress.c b/arch/mips/boot/compressed/decompress.c
+index e3946b06e840a..1e91155bed803 100644
+--- a/arch/mips/boot/compressed/decompress.c
++++ b/arch/mips/boot/compressed/decompress.c
+@@ -7,6 +7,8 @@
+  * Author: Wu Zhangjin <wuzhangjin@gmail.com>
+  */
+ 
++#define DISABLE_BRANCH_PROFILING
++
+ #include <linux/types.h>
+ #include <linux/kernel.h>
+ #include <linux/string.h>
+diff --git a/arch/mips/include/asm/vdso/vdso.h b/arch/mips/include/asm/vdso/vdso.h
+index 737ddfc3411cb..a327ca21270ec 100644
+--- a/arch/mips/include/asm/vdso/vdso.h
++++ b/arch/mips/include/asm/vdso/vdso.h
+@@ -67,7 +67,7 @@ static inline const struct vdso_data *get_vdso_data(void)
+ 
+ static inline void __iomem *get_gic(const struct vdso_data *data)
+ {
+-	return (void __iomem *)data - PAGE_SIZE;
++	return (void __iomem *)((unsigned long)data & PAGE_MASK) - PAGE_SIZE;
+ }
+ 
+ #endif /* CONFIG_CLKSRC_MIPS_GIC */
+diff --git a/arch/powerpc/boot/devtree.c b/arch/powerpc/boot/devtree.c
+index 5d91036ad626d..58fbcfcc98c9e 100644
+--- a/arch/powerpc/boot/devtree.c
++++ b/arch/powerpc/boot/devtree.c
+@@ -13,6 +13,7 @@
+ #include "string.h"
+ #include "stdio.h"
+ #include "ops.h"
++#include "of.h"
+ 
+ void dt_fixup_memory(u64 start, u64 size)
+ {
+@@ -23,21 +24,25 @@ void dt_fixup_memory(u64 start, u64 size)
+ 	root = finddevice("/");
+ 	if (getprop(root, "#address-cells", &naddr, sizeof(naddr)) < 0)
+ 		naddr = 2;
++	else
++		naddr = be32_to_cpu(naddr);
+ 	if (naddr < 1 || naddr > 2)
+ 		fatal("Can't cope with #address-cells == %d in /\n\r", naddr);
+ 
+ 	if (getprop(root, "#size-cells", &nsize, sizeof(nsize)) < 0)
+ 		nsize = 1;
++	else
++		nsize = be32_to_cpu(nsize);
+ 	if (nsize < 1 || nsize > 2)
+ 		fatal("Can't cope with #size-cells == %d in /\n\r", nsize);
+ 
+ 	i = 0;
+ 	if (naddr == 2)
+-		memreg[i++] = start >> 32;
+-	memreg[i++] = start & 0xffffffff;
++		memreg[i++] = cpu_to_be32(start >> 32);
++	memreg[i++] = cpu_to_be32(start & 0xffffffff);
+ 	if (nsize == 2)
+-		memreg[i++] = size >> 32;
+-	memreg[i++] = size & 0xffffffff;
++		memreg[i++] = cpu_to_be32(size >> 32);
++	memreg[i++] = cpu_to_be32(size & 0xffffffff);
+ 
+ 	memory = finddevice("/memory");
+ 	if (! memory) {
+@@ -45,9 +50,9 @@ void dt_fixup_memory(u64 start, u64 size)
+ 		setprop_str(memory, "device_type", "memory");
+ 	}
+ 
+-	printf("Memory <- <0x%x", memreg[0]);
++	printf("Memory <- <0x%x", be32_to_cpu(memreg[0]));
+ 	for (i = 1; i < (naddr + nsize); i++)
+-		printf(" 0x%x", memreg[i]);
++		printf(" 0x%x", be32_to_cpu(memreg[i]));
+ 	printf("> (%ldMB)\n\r", (unsigned long)(size >> 20));
+ 
+ 	setprop(memory, "reg", memreg, (naddr + nsize)*sizeof(u32));
+@@ -65,10 +70,10 @@ void dt_fixup_cpu_clocks(u32 cpu, u32 tb, u32 bus)
+ 		printf("CPU bus-frequency <- 0x%x (%dMHz)\n\r", bus, MHZ(bus));
+ 
+ 	while ((devp = find_node_by_devtype(devp, "cpu"))) {
+-		setprop_val(devp, "clock-frequency", cpu);
+-		setprop_val(devp, "timebase-frequency", tb);
++		setprop_val(devp, "clock-frequency", cpu_to_be32(cpu));
++		setprop_val(devp, "timebase-frequency", cpu_to_be32(tb));
+ 		if (bus > 0)
+-			setprop_val(devp, "bus-frequency", bus);
++			setprop_val(devp, "bus-frequency", cpu_to_be32(bus));
+ 	}
+ 
+ 	timebase_period_ns = 1000000000 / tb;
+@@ -80,7 +85,7 @@ void dt_fixup_clock(const char *path, u32 freq)
+ 
+ 	if (devp) {
+ 		printf("%s: clock-frequency <- %x (%dMHz)\n\r", path, freq, MHZ(freq));
+-		setprop_val(devp, "clock-frequency", freq);
++		setprop_val(devp, "clock-frequency", cpu_to_be32(freq));
+ 	}
+ }
+ 
+@@ -133,8 +138,12 @@ void dt_get_reg_format(void *node, u32 *naddr, u32 *nsize)
+ {
+ 	if (getprop(node, "#address-cells", naddr, 4) != 4)
+ 		*naddr = 2;
++	else
++		*naddr = be32_to_cpu(*naddr);
+ 	if (getprop(node, "#size-cells", nsize, 4) != 4)
+ 		*nsize = 1;
++	else
++		*nsize = be32_to_cpu(*nsize);
+ }
+ 
+ static void copy_val(u32 *dest, u32 *src, int naddr)
+@@ -163,9 +172,9 @@ static int add_reg(u32 *reg, u32 *add, int naddr)
+ 	int i, carry = 0;
+ 
+ 	for (i = MAX_ADDR_CELLS - 1; i >= MAX_ADDR_CELLS - naddr; i--) {
+-		u64 tmp = (u64)reg[i] + add[i] + carry;
++		u64 tmp = (u64)be32_to_cpu(reg[i]) + be32_to_cpu(add[i]) + carry;
+ 		carry = tmp >> 32;
+-		reg[i] = (u32)tmp;
++		reg[i] = cpu_to_be32((u32)tmp);
+ 	}
+ 
+ 	return !carry;
+@@ -180,18 +189,18 @@ static int compare_reg(u32 *reg, u32 *range, u32 *rangesize)
+ 	u32 end;
+ 
+ 	for (i = 0; i < MAX_ADDR_CELLS; i++) {
+-		if (reg[i] < range[i])
++		if (be32_to_cpu(reg[i]) < be32_to_cpu(range[i]))
+ 			return 0;
+-		if (reg[i] > range[i])
++		if (be32_to_cpu(reg[i]) > be32_to_cpu(range[i]))
+ 			break;
+ 	}
+ 
+ 	for (i = 0; i < MAX_ADDR_CELLS; i++) {
+-		end = range[i] + rangesize[i];
++		end = be32_to_cpu(range[i]) + be32_to_cpu(rangesize[i]);
+ 
+-		if (reg[i] < end)
++		if (be32_to_cpu(reg[i]) < end)
+ 			break;
+-		if (reg[i] > end)
++		if (be32_to_cpu(reg[i]) > end)
+ 			return 0;
+ 	}
+ 
+@@ -240,7 +249,6 @@ static int dt_xlate(void *node, int res, int reglen, unsigned long *addr,
+ 		return 0;
+ 
+ 	dt_get_reg_format(parent, &naddr, &nsize);
+-
+ 	if (nsize > 2)
+ 		return 0;
+ 
+@@ -252,10 +260,10 @@ static int dt_xlate(void *node, int res, int reglen, unsigned long *addr,
+ 
+ 	copy_val(last_addr, prop_buf + offset, naddr);
+ 
+-	ret_size = prop_buf[offset + naddr];
++	ret_size = be32_to_cpu(prop_buf[offset + naddr]);
+ 	if (nsize == 2) {
+ 		ret_size <<= 32;
+-		ret_size |= prop_buf[offset + naddr + 1];
++		ret_size |= be32_to_cpu(prop_buf[offset + naddr + 1]);
+ 	}
+ 
+ 	for (;;) {
+@@ -278,7 +286,6 @@ static int dt_xlate(void *node, int res, int reglen, unsigned long *addr,
+ 
+ 		offset = find_range(last_addr, prop_buf, prev_naddr,
+ 		                    naddr, prev_nsize, buflen / 4);
+-
+ 		if (offset < 0)
+ 			return 0;
+ 
+@@ -296,8 +303,7 @@ static int dt_xlate(void *node, int res, int reglen, unsigned long *addr,
+ 	if (naddr > 2)
+ 		return 0;
+ 
+-	ret_addr = ((u64)last_addr[2] << 32) | last_addr[3];
+-
++	ret_addr = ((u64)be32_to_cpu(last_addr[2]) << 32) | be32_to_cpu(last_addr[3]);
+ 	if (sizeof(void *) == 4 &&
+ 	    (ret_addr >= 0x100000000ULL || ret_size > 0x100000000ULL ||
+ 	     ret_addr + ret_size > 0x100000000ULL))
+@@ -350,11 +356,14 @@ int dt_is_compatible(void *node, const char *compat)
+ int dt_get_virtual_reg(void *node, void **addr, int nres)
+ {
+ 	unsigned long xaddr;
+-	int n;
++	int n, i;
+ 
+ 	n = getprop(node, "virtual-reg", addr, nres * 4);
+-	if (n > 0)
++	if (n > 0) {
++		for (i = 0; i < n/4; i ++)
++			((u32 *)addr)[i] = be32_to_cpu(((u32 *)addr)[i]);
+ 		return n / 4;
++	}
+ 
+ 	for (n = 0; n < nres; n++) {
+ 		if (!dt_xlate_reg(node, n, &xaddr, NULL))
+diff --git a/arch/powerpc/boot/ns16550.c b/arch/powerpc/boot/ns16550.c
+index b0da4466d4198..f16d2be1d0f31 100644
+--- a/arch/powerpc/boot/ns16550.c
++++ b/arch/powerpc/boot/ns16550.c
+@@ -15,6 +15,7 @@
+ #include "stdio.h"
+ #include "io.h"
+ #include "ops.h"
++#include "of.h"
+ 
+ #define UART_DLL	0	/* Out: Divisor Latch Low */
+ #define UART_DLM	1	/* Out: Divisor Latch High */
+@@ -58,16 +59,20 @@ int ns16550_console_init(void *devp, struct serial_console_data *scdp)
+ 	int n;
+ 	u32 reg_offset;
+ 
+-	if (dt_get_virtual_reg(devp, (void **)&reg_base, 1) < 1)
++	if (dt_get_virtual_reg(devp, (void **)&reg_base, 1) < 1) {
++		printf("virt reg parse fail...\r\n");
+ 		return -1;
++	}
+ 
+ 	n = getprop(devp, "reg-offset", &reg_offset, sizeof(reg_offset));
+ 	if (n == sizeof(reg_offset))
+-		reg_base += reg_offset;
++		reg_base += be32_to_cpu(reg_offset);
+ 
+ 	n = getprop(devp, "reg-shift", &reg_shift, sizeof(reg_shift));
+ 	if (n != sizeof(reg_shift))
+ 		reg_shift = 0;
++	else
++		reg_shift = be32_to_cpu(reg_shift);
+ 
+ 	scdp->open = ns16550_open;
+ 	scdp->putc = ns16550_putc;
+diff --git a/arch/powerpc/include/asm/ps3.h b/arch/powerpc/include/asm/ps3.h
+index cb89e4bf55cef..964063765662b 100644
+--- a/arch/powerpc/include/asm/ps3.h
++++ b/arch/powerpc/include/asm/ps3.h
+@@ -71,6 +71,7 @@ struct ps3_dma_region_ops;
+  * @bus_addr: The 'translated' bus address of the region.
+  * @len: The length in bytes of the region.
+  * @offset: The offset from the start of memory of the region.
++ * @dma_mask: Device dma_mask.
+  * @ioid: The IOID of the device who owns this region
+  * @chunk_list: Opaque variable used by the ioc page manager.
+  * @region_ops: struct ps3_dma_region_ops - dma region operations
+@@ -85,6 +86,7 @@ struct ps3_dma_region {
+ 	enum ps3_dma_region_type region_type;
+ 	unsigned long len;
+ 	unsigned long offset;
++	u64 dma_mask;
+ 
+ 	/* driver variables  (set by ps3_dma_region_create) */
+ 	unsigned long bus_addr;
+diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
+index b487b489d4b68..4c2f75916a7ea 100644
+--- a/arch/powerpc/mm/book3s64/radix_tlb.c
++++ b/arch/powerpc/mm/book3s64/radix_tlb.c
+@@ -282,22 +282,30 @@ static inline void fixup_tlbie_lpid(unsigned long lpid)
+ /*
+  * We use 128 set in radix mode and 256 set in hpt mode.
+  */
+-static __always_inline void _tlbiel_pid(unsigned long pid, unsigned long ric)
++static inline void _tlbiel_pid(unsigned long pid, unsigned long ric)
+ {
+ 	int set;
+ 
+ 	asm volatile("ptesync": : :"memory");
+ 
+-	/*
+-	 * Flush the first set of the TLB, and if we're doing a RIC_FLUSH_ALL,
+-	 * also flush the entire Page Walk Cache.
+-	 */
+-	__tlbiel_pid(pid, 0, ric);
++	switch (ric) {
++	case RIC_FLUSH_PWC:
+ 
+-	/* For PWC, only one flush is needed */
+-	if (ric == RIC_FLUSH_PWC) {
++		/* For PWC, only one flush is needed */
++		__tlbiel_pid(pid, 0, RIC_FLUSH_PWC);
+ 		ppc_after_tlbiel_barrier();
+ 		return;
++	case RIC_FLUSH_TLB:
++		__tlbiel_pid(pid, 0, RIC_FLUSH_TLB);
++		break;
++	case RIC_FLUSH_ALL:
++	default:
++		/*
++		 * Flush the first set of the TLB, and if
++		 * we're doing a RIC_FLUSH_ALL, also flush
++		 * the entire Page Walk Cache.
++		 */
++		__tlbiel_pid(pid, 0, RIC_FLUSH_ALL);
+ 	}
+ 
+ 	/* For the remaining sets, just flush the TLB */
+@@ -1068,7 +1076,7 @@ void radix__tlb_flush(struct mmu_gather *tlb)
+ 	}
+ }
+ 
+-static __always_inline void __radix__flush_tlb_range_psize(struct mm_struct *mm,
++static void __radix__flush_tlb_range_psize(struct mm_struct *mm,
+ 				unsigned long start, unsigned long end,
+ 				int psize, bool also_pwc)
+ {
+diff --git a/arch/powerpc/platforms/ps3/mm.c b/arch/powerpc/platforms/ps3/mm.c
+index d094321964fb0..a81eac35d9009 100644
+--- a/arch/powerpc/platforms/ps3/mm.c
++++ b/arch/powerpc/platforms/ps3/mm.c
+@@ -6,6 +6,7 @@
+  *  Copyright 2006 Sony Corp.
+  */
+ 
++#include <linux/dma-mapping.h>
+ #include <linux/kernel.h>
+ #include <linux/export.h>
+ #include <linux/memblock.h>
+@@ -1118,6 +1119,7 @@ int ps3_dma_region_init(struct ps3_system_bus_device *dev,
+ 	enum ps3_dma_region_type region_type, void *addr, unsigned long len)
+ {
+ 	unsigned long lpar_addr;
++	int result;
+ 
+ 	lpar_addr = addr ? ps3_mm_phys_to_lpar(__pa(addr)) : 0;
+ 
+@@ -1129,6 +1131,16 @@ int ps3_dma_region_init(struct ps3_system_bus_device *dev,
+ 		r->offset -= map.r1.offset;
+ 	r->len = len ? len : ALIGN(map.total, 1 << r->page_size);
+ 
++	dev->core.dma_mask = &r->dma_mask;
++
++	result = dma_set_mask_and_coherent(&dev->core, DMA_BIT_MASK(32));
++
++	if (result < 0) {
++		dev_err(&dev->core, "%s:%d: dma_set_mask_and_coherent failed: %d\n",
++			__func__, __LINE__, result);
++		return result;
++	}
++
+ 	switch (dev->dev_type) {
+ 	case PS3_DEVICE_TYPE_SB:
+ 		r->region_ops =  (USE_DYNAMIC_DMA)
+diff --git a/arch/s390/Makefile b/arch/s390/Makefile
+index ba94b03c8b2f4..92506918da633 100644
+--- a/arch/s390/Makefile
++++ b/arch/s390/Makefile
+@@ -28,6 +28,7 @@ KBUILD_CFLAGS_DECOMPRESSOR += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY
+ KBUILD_CFLAGS_DECOMPRESSOR += -fno-delete-null-pointer-checks -msoft-float
+ KBUILD_CFLAGS_DECOMPRESSOR += -fno-asynchronous-unwind-tables
+ KBUILD_CFLAGS_DECOMPRESSOR += -ffreestanding
++KBUILD_CFLAGS_DECOMPRESSOR += -fno-stack-protector
+ KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-disable-warning, address-of-packed-member)
+ KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),-g)
+ KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO_DWARF4), $(call cc-option, -gdwarf-4,))
+diff --git a/arch/s390/boot/ipl_parm.c b/arch/s390/boot/ipl_parm.c
+index f94b91d72620e..c56bbf58a945e 100644
+--- a/arch/s390/boot/ipl_parm.c
++++ b/arch/s390/boot/ipl_parm.c
+@@ -28,22 +28,25 @@ static inline int __diag308(unsigned long subcode, void *addr)
+ 	register unsigned long _addr asm("0") = (unsigned long)addr;
+ 	register unsigned long _rc asm("1") = 0;
+ 	unsigned long reg1, reg2;
+-	psw_t old = S390_lowcore.program_new_psw;
++	psw_t old;
+ 
+ 	asm volatile(
++		"	mvc	0(16,%[psw_old]),0(%[psw_pgm])\n"
+ 		"	epsw	%0,%1\n"
+-		"	st	%0,%[psw_pgm]\n"
+-		"	st	%1,%[psw_pgm]+4\n"
++		"	st	%0,0(%[psw_pgm])\n"
++		"	st	%1,4(%[psw_pgm])\n"
+ 		"	larl	%0,1f\n"
+-		"	stg	%0,%[psw_pgm]+8\n"
++		"	stg	%0,8(%[psw_pgm])\n"
+ 		"	diag	%[addr],%[subcode],0x308\n"
+-		"1:	nopr	%%r7\n"
++		"1:	mvc	0(16,%[psw_pgm]),0(%[psw_old])\n"
+ 		: "=&d" (reg1), "=&a" (reg2),
+-		  [psw_pgm] "=Q" (S390_lowcore.program_new_psw),
++		  "+Q" (S390_lowcore.program_new_psw),
++		  "=Q" (old),
+ 		  [addr] "+d" (_addr), "+d" (_rc)
+-		: [subcode] "d" (subcode)
++		: [subcode] "d" (subcode),
++		  [psw_old] "a" (&old),
++		  [psw_pgm] "a" (&S390_lowcore.program_new_psw)
+ 		: "cc", "memory");
+-	S390_lowcore.program_new_psw = old;
+ 	return _rc;
+ }
+ 
+diff --git a/arch/s390/boot/mem_detect.c b/arch/s390/boot/mem_detect.c
+index 62e7c13ce85c7..85049541c191e 100644
+--- a/arch/s390/boot/mem_detect.c
++++ b/arch/s390/boot/mem_detect.c
+@@ -70,24 +70,27 @@ static int __diag260(unsigned long rx1, unsigned long rx2)
+ 	register unsigned long _ry asm("4") = 0x10; /* storage configuration */
+ 	int rc = -1;				    /* fail */
+ 	unsigned long reg1, reg2;
+-	psw_t old = S390_lowcore.program_new_psw;
++	psw_t old;
+ 
+ 	asm volatile(
++		"	mvc	0(16,%[psw_old]),0(%[psw_pgm])\n"
+ 		"	epsw	%0,%1\n"
+-		"	st	%0,%[psw_pgm]\n"
+-		"	st	%1,%[psw_pgm]+4\n"
++		"	st	%0,0(%[psw_pgm])\n"
++		"	st	%1,4(%[psw_pgm])\n"
+ 		"	larl	%0,1f\n"
+-		"	stg	%0,%[psw_pgm]+8\n"
++		"	stg	%0,8(%[psw_pgm])\n"
+ 		"	diag	%[rx],%[ry],0x260\n"
+ 		"	ipm	%[rc]\n"
+ 		"	srl	%[rc],28\n"
+-		"1:\n"
++		"1:	mvc	0(16,%[psw_pgm]),0(%[psw_old])\n"
+ 		: "=&d" (reg1), "=&a" (reg2),
+-		  [psw_pgm] "=Q" (S390_lowcore.program_new_psw),
++		  "+Q" (S390_lowcore.program_new_psw),
++		  "=Q" (old),
+ 		  [rc] "+&d" (rc), [ry] "+d" (_ry)
+-		: [rx] "d" (_rx1), "d" (_rx2)
++		: [rx] "d" (_rx1), "d" (_rx2),
++		  [psw_old] "a" (&old),
++		  [psw_pgm] "a" (&S390_lowcore.program_new_psw)
+ 		: "cc", "memory");
+-	S390_lowcore.program_new_psw = old;
+ 	return rc == 0 ? _ry : -1;
+ }
+ 
+@@ -112,24 +115,30 @@ static int diag260(void)
+ 
+ static int tprot(unsigned long addr)
+ {
+-	unsigned long pgm_addr;
++	unsigned long reg1, reg2;
+ 	int rc = -EFAULT;
+-	psw_t old = S390_lowcore.program_new_psw;
++	psw_t old;
+ 
+-	S390_lowcore.program_new_psw.mask = __extract_psw();
+ 	asm volatile(
+-		"	larl	%[pgm_addr],1f\n"
+-		"	stg	%[pgm_addr],%[psw_pgm_addr]\n"
++		"	mvc	0(16,%[psw_old]),0(%[psw_pgm])\n"
++		"	epsw	%[reg1],%[reg2]\n"
++		"	st	%[reg1],0(%[psw_pgm])\n"
++		"	st	%[reg2],4(%[psw_pgm])\n"
++		"	larl	%[reg1],1f\n"
++		"	stg	%[reg1],8(%[psw_pgm])\n"
+ 		"	tprot	0(%[addr]),0\n"
+ 		"	ipm	%[rc]\n"
+ 		"	srl	%[rc],28\n"
+-		"1:\n"
+-		: [pgm_addr] "=&d"(pgm_addr),
+-		  [psw_pgm_addr] "=Q"(S390_lowcore.program_new_psw.addr),
+-		  [rc] "+&d"(rc)
+-		: [addr] "a"(addr)
++		"1:	mvc	0(16,%[psw_pgm]),0(%[psw_old])\n"
++		: [reg1] "=&d" (reg1),
++		  [reg2] "=&a" (reg2),
++		  [rc] "+&d" (rc),
++		  "=Q" (S390_lowcore.program_new_psw.addr),
++		  "=Q" (old)
++		: [psw_old] "a" (&old),
++		  [psw_pgm] "a" (&S390_lowcore.program_new_psw),
++		  [addr] "a" (addr)
+ 		: "cc", "memory");
+-	S390_lowcore.program_new_psw = old;
+ 	return rc;
+ }
+ 
+diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
+index 962da04234af4..0987c3fc45f58 100644
+--- a/arch/s390/include/asm/processor.h
++++ b/arch/s390/include/asm/processor.h
+@@ -211,7 +211,7 @@ static __always_inline unsigned long current_stack_pointer(void)
+ 	return sp;
+ }
+ 
+-static __no_kasan_or_inline unsigned short stap(void)
++static __always_inline unsigned short stap(void)
+ {
+ 	unsigned short cpu_address;
+ 
+@@ -250,7 +250,7 @@ static inline void __load_psw(psw_t psw)
+  * Set PSW mask to specified value, while leaving the
+  * PSW addr pointing to the next instruction.
+  */
+-static __no_kasan_or_inline void __load_psw_mask(unsigned long mask)
++static __always_inline void __load_psw_mask(unsigned long mask)
+ {
+ 	unsigned long addr;
+ 	psw_t psw;
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 83a3f346e5bd9..5cd9d20af31e9 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -166,7 +166,7 @@ static void __init set_preferred_console(void)
+ 	else if (CONSOLE_IS_3270)
+ 		add_preferred_console("tty3270", 0, NULL);
+ 	else if (CONSOLE_IS_VT220)
+-		add_preferred_console("ttyS", 1, NULL);
++		add_preferred_console("ttysclp", 0, NULL);
+ 	else if (CONSOLE_IS_HVC)
+ 		add_preferred_console("hvc", 0, NULL);
+ }
+diff --git a/arch/s390/purgatory/Makefile b/arch/s390/purgatory/Makefile
+index c57f8c40e9926..21c4ebe29b9a2 100644
+--- a/arch/s390/purgatory/Makefile
++++ b/arch/s390/purgatory/Makefile
+@@ -24,6 +24,7 @@ KBUILD_CFLAGS := -fno-strict-aliasing -Wall -Wstrict-prototypes
+ KBUILD_CFLAGS += -Wno-pointer-sign -Wno-sign-compare
+ KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding
+ KBUILD_CFLAGS += -c -MD -Os -m64 -msoft-float -fno-common
++KBUILD_CFLAGS += -fno-stack-protector
+ KBUILD_CFLAGS += $(CLANG_FLAGS)
+ KBUILD_CFLAGS += $(call cc-option,-fno-PIE)
+ KBUILD_AFLAGS := $(filter-out -DCC_USING_EXPOLINE,$(KBUILD_AFLAGS))
+diff --git a/arch/um/drivers/chan_user.c b/arch/um/drivers/chan_user.c
+index d8845d4aac6a7..6040817c036f3 100644
+--- a/arch/um/drivers/chan_user.c
++++ b/arch/um/drivers/chan_user.c
+@@ -256,7 +256,8 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
+ 		goto out_close;
+ 	}
+ 
+-	if (os_set_fd_block(*fd_out, 0)) {
++	err = os_set_fd_block(*fd_out, 0);
++	if (err) {
+ 		printk(UM_KERN_ERR "winch_tramp: failed to set thread_fd "
+ 		       "non-blocking.\n");
+ 		goto out_close;
+diff --git a/arch/um/drivers/slip_user.c b/arch/um/drivers/slip_user.c
+index 482a19c5105c5..7334019c9e60a 100644
+--- a/arch/um/drivers/slip_user.c
++++ b/arch/um/drivers/slip_user.c
+@@ -145,7 +145,8 @@ static int slip_open(void *data)
+ 	}
+ 	sfd = err;
+ 
+-	if (set_up_tty(sfd))
++	err = set_up_tty(sfd);
++	if (err)
+ 		goto out_close2;
+ 
+ 	pri->slave = sfd;
+diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
+index 16bf4d4a8159e..4e5af2b00d89b 100644
+--- a/arch/x86/include/asm/fpu/internal.h
++++ b/arch/x86/include/asm/fpu/internal.h
+@@ -103,6 +103,7 @@ static inline void fpstate_init_fxstate(struct fxregs_state *fx)
+ }
+ extern void fpstate_sanitize_xstate(struct fpu *fpu);
+ 
++/* Returns 0 or the negated trap number, which results in -EFAULT for #PF */
+ #define user_insn(insn, output, input...)				\
+ ({									\
+ 	int err;							\
+@@ -110,14 +111,14 @@ extern void fpstate_sanitize_xstate(struct fpu *fpu);
+ 	might_fault();							\
+ 									\
+ 	asm volatile(ASM_STAC "\n"					\
+-		     "1:" #insn "\n\t"					\
++		     "1: " #insn "\n"					\
+ 		     "2: " ASM_CLAC "\n"				\
+ 		     ".section .fixup,\"ax\"\n"				\
+-		     "3:  movl $-1,%[err]\n"				\
++		     "3:  negl %%eax\n"					\
+ 		     "    jmp  2b\n"					\
+ 		     ".previous\n"					\
+-		     _ASM_EXTABLE(1b, 3b)				\
+-		     : [err] "=r" (err), output				\
++		     _ASM_EXTABLE_FAULT(1b, 3b)				\
++		     : [err] "=a" (err), output				\
+ 		     : "0"(0), input);					\
+ 	err;								\
+ })
+@@ -219,16 +220,20 @@ static inline void fxsave(struct fxregs_state *fx)
+ #define XRSTOR		".byte " REX_PREFIX "0x0f,0xae,0x2f"
+ #define XRSTORS		".byte " REX_PREFIX "0x0f,0xc7,0x1f"
+ 
++/*
++ * After this @err contains 0 on success or the negated trap number when
++ * the operation raises an exception. For faults this results in -EFAULT.
++ */
+ #define XSTATE_OP(op, st, lmask, hmask, err)				\
+ 	asm volatile("1:" op "\n\t"					\
+ 		     "xor %[err], %[err]\n"				\
+ 		     "2:\n\t"						\
+ 		     ".pushsection .fixup,\"ax\"\n\t"			\
+-		     "3: movl $-2,%[err]\n\t"				\
++		     "3: negl %%eax\n\t"				\
+ 		     "jmp 2b\n\t"					\
+ 		     ".popsection\n\t"					\
+-		     _ASM_EXTABLE(1b, 3b)				\
+-		     : [err] "=r" (err)					\
++		     _ASM_EXTABLE_FAULT(1b, 3b)				\
++		     : [err] "=a" (err)					\
+ 		     : "D" (st), "m" (*st), "a" (lmask), "d" (hmask)	\
+ 		     : "memory")
+ 
+diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c
+index c413756ba89fa..6bb874441de8b 100644
+--- a/arch/x86/kernel/fpu/regset.c
++++ b/arch/x86/kernel/fpu/regset.c
+@@ -117,7 +117,7 @@ int xstateregs_set(struct task_struct *target, const struct user_regset *regset,
+ 	/*
+ 	 * A whole standard-format XSAVE buffer is needed:
+ 	 */
+-	if ((pos != 0) || (count < fpu_user_xstate_size))
++	if (pos != 0 || count != fpu_user_xstate_size)
+ 		return -EFAULT;
+ 
+ 	xsave = &fpu->state.xsave;
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index 80dcf0417f30b..80836b94189e7 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -1084,20 +1084,10 @@ static inline bool xfeatures_mxcsr_quirk(u64 xfeatures)
+ 	return true;
+ }
+ 
+-static void fill_gap(struct membuf *to, unsigned *last, unsigned offset)
++static void copy_feature(bool from_xstate, struct membuf *to, void *xstate,
++			 void *init_xstate, unsigned int size)
+ {
+-	if (*last >= offset)
+-		return;
+-	membuf_write(to, (void *)&init_fpstate.xsave + *last, offset - *last);
+-	*last = offset;
+-}
+-
+-static void copy_part(struct membuf *to, unsigned *last, unsigned offset,
+-		      unsigned size, void *from)
+-{
+-	fill_gap(to, last, offset);
+-	membuf_write(to, from, size);
+-	*last = offset + size;
++	membuf_write(to, from_xstate ? xstate : init_xstate, size);
+ }
+ 
+ /*
+@@ -1109,10 +1099,10 @@ static void copy_part(struct membuf *to, unsigned *last, unsigned offset,
+  */
+ void copy_xstate_to_kernel(struct membuf to, struct xregs_state *xsave)
+ {
++	const unsigned int off_mxcsr = offsetof(struct fxregs_state, mxcsr);
++	struct xregs_state *xinit = &init_fpstate.xsave;
+ 	struct xstate_header header;
+-	const unsigned off_mxcsr = offsetof(struct fxregs_state, mxcsr);
+-	unsigned size = to.left;
+-	unsigned last = 0;
++	unsigned int zerofrom;
+ 	int i;
+ 
+ 	/*
+@@ -1122,41 +1112,68 @@ void copy_xstate_to_kernel(struct membuf to, struct xregs_state *xsave)
+ 	header.xfeatures = xsave->header.xfeatures;
+ 	header.xfeatures &= xfeatures_mask_user();
+ 
+-	if (header.xfeatures & XFEATURE_MASK_FP)
+-		copy_part(&to, &last, 0, off_mxcsr, &xsave->i387);
+-	if (header.xfeatures & (XFEATURE_MASK_SSE | XFEATURE_MASK_YMM))
+-		copy_part(&to, &last, off_mxcsr,
+-			  MXCSR_AND_FLAGS_SIZE, &xsave->i387.mxcsr);
+-	if (header.xfeatures & XFEATURE_MASK_FP)
+-		copy_part(&to, &last, offsetof(struct fxregs_state, st_space),
+-			  128, &xsave->i387.st_space);
+-	if (header.xfeatures & XFEATURE_MASK_SSE)
+-		copy_part(&to, &last, xstate_offsets[XFEATURE_SSE],
+-			  256, &xsave->i387.xmm_space);
+-	/*
+-	 * Fill xsave->i387.sw_reserved value for ptrace frame:
+-	 */
+-	copy_part(&to, &last, offsetof(struct fxregs_state, sw_reserved),
+-		  48, xstate_fx_sw_bytes);
+-	/*
+-	 * Copy xregs_state->header:
+-	 */
+-	copy_part(&to, &last, offsetof(struct xregs_state, header),
+-		  sizeof(header), &header);
++	/* Copy FP state up to MXCSR */
++	copy_feature(header.xfeatures & XFEATURE_MASK_FP, &to, &xsave->i387,
++		     &xinit->i387, off_mxcsr);
++
++	/* Copy MXCSR when SSE or YMM are set in the feature mask */
++	copy_feature(header.xfeatures & (XFEATURE_MASK_SSE | XFEATURE_MASK_YMM),
++		     &to, &xsave->i387.mxcsr, &xinit->i387.mxcsr,
++		     MXCSR_AND_FLAGS_SIZE);
++
++	/* Copy the remaining FP state */
++	copy_feature(header.xfeatures & XFEATURE_MASK_FP,
++		     &to, &xsave->i387.st_space, &xinit->i387.st_space,
++		     sizeof(xsave->i387.st_space));
++
++	/* Copy the SSE state - shared with YMM, but independently managed */
++	copy_feature(header.xfeatures & XFEATURE_MASK_SSE,
++		     &to, &xsave->i387.xmm_space, &xinit->i387.xmm_space,
++		     sizeof(xsave->i387.xmm_space));
++
++	/* Zero the padding area */
++	membuf_zero(&to, sizeof(xsave->i387.padding));
++
++	/* Copy xsave->i387.sw_reserved */
++	membuf_write(&to, xstate_fx_sw_bytes, sizeof(xsave->i387.sw_reserved));
++
++	/* Copy the user space relevant state of @xsave->header */
++	membuf_write(&to, &header, sizeof(header));
++
++	zerofrom = offsetof(struct xregs_state, extended_state_area);
+ 
+ 	for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) {
+ 		/*
+-		 * Copy only in-use xstates:
++		 * The ptrace buffer is in non-compacted XSAVE format.
++		 * In non-compacted format disabled features still occupy
++		 * state space, but there is no state to copy from in the
++		 * compacted init_fpstate. The gap tracking will zero this
++		 * later.
+ 		 */
+-		if ((header.xfeatures >> i) & 1) {
+-			void *src = __raw_xsave_addr(xsave, i);
++		if (!(xfeatures_mask_user() & BIT_ULL(i)))
++			continue;
+ 
+-			copy_part(&to, &last, xstate_offsets[i],
+-				  xstate_sizes[i], src);
+-		}
++		/*
++		 * If there was a feature or alignment gap, zero the space
++		 * in the destination buffer.
++		 */
++		if (zerofrom < xstate_offsets[i])
++			membuf_zero(&to, xstate_offsets[i] - zerofrom);
++
++		copy_feature(header.xfeatures & BIT_ULL(i), &to,
++			     __raw_xsave_addr(xsave, i),
++			     __raw_xsave_addr(xinit, i),
++			     xstate_sizes[i]);
+ 
++		/*
++		 * Keep track of the last copied state in the non-compacted
++		 * target buffer for gap zeroing.
++		 */
++		zerofrom = xstate_offsets[i] + xstate_sizes[i];
+ 	}
+-	fill_gap(&to, &last, size);
++
++	if (to.left)
++		membuf_zero(&to, to.left);
+ }
+ 
+ /*
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index f51cab3e983d8..b001ba811cabb 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -234,10 +234,11 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size,
+ 	     void __user **fpstate)
+ {
+ 	/* Default to using normal stack */
++	bool nested_altstack = on_sig_stack(regs->sp);
++	bool entering_altstack = false;
+ 	unsigned long math_size = 0;
+ 	unsigned long sp = regs->sp;
+ 	unsigned long buf_fx = 0;
+-	int onsigstack = on_sig_stack(sp);
+ 	int ret;
+ 
+ 	/* redzone */
+@@ -246,15 +247,23 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size,
+ 
+ 	/* This is the X/Open sanctioned signal stack switching.  */
+ 	if (ka->sa.sa_flags & SA_ONSTACK) {
+-		if (sas_ss_flags(sp) == 0)
++		/*
++		 * This checks nested_altstack via sas_ss_flags(). Sensible
++		 * programs use SS_AUTODISARM, which disables that check, and
++		 * programs that don't use SS_AUTODISARM get compatible.
++		 */
++		if (sas_ss_flags(sp) == 0) {
+ 			sp = current->sas_ss_sp + current->sas_ss_size;
++			entering_altstack = true;
++		}
+ 	} else if (IS_ENABLED(CONFIG_X86_32) &&
+-		   !onsigstack &&
++		   !nested_altstack &&
+ 		   regs->ss != __USER_DS &&
+ 		   !(ka->sa.sa_flags & SA_RESTORER) &&
+ 		   ka->sa.sa_restorer) {
+ 		/* This is the legacy signal stack switching. */
+ 		sp = (unsigned long) ka->sa.sa_restorer;
++		entering_altstack = true;
+ 	}
+ 
+ 	sp = fpu__alloc_mathframe(sp, IS_ENABLED(CONFIG_X86_32),
+@@ -267,8 +276,15 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size,
+ 	 * If we are on the alternate signal stack and would overflow it, don't.
+ 	 * Return an always-bogus address instead so we will die with SIGSEGV.
+ 	 */
+-	if (onsigstack && !likely(on_sig_stack(sp)))
++	if (unlikely((nested_altstack || entering_altstack) &&
++		     !__on_sig_stack(sp))) {
++
++		if (show_unhandled_signals && printk_ratelimit())
++			pr_info("%s[%d] overflowed sigaltstack\n",
++				current->comm, task_pid_nr(current));
++
+ 		return (void __user *)-1L;
++	}
+ 
+ 	/* save i387 and extended state */
+ 	ret = copy_fpstate_to_sigframe(*fpstate, (void __user *)buf_fx, math_size);
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 56a62d555e924..7a3fbf3b796e6 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -827,8 +827,14 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 		unsigned virt_as = max((entry->eax >> 8) & 0xff, 48U);
+ 		unsigned phys_as = entry->eax & 0xff;
+ 
+-		if (!g_phys_as)
++		/*
++		 * Use bare metal's MAXPHADDR if the CPU doesn't report guest
++		 * MAXPHYADDR separately, or if TDP (NPT) is disabled, as the
++		 * guest version "applies only to guests using nested paging".
++		 */
++		if (!g_phys_as || !tdp_enabled)
+ 			g_phys_as = phys_as;
++
+ 		entry->eax = g_phys_as | (virt_as << 8);
+ 		entry->edx = 0;
+ 		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 7e6dc454ea28d..7ca2da9028298 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -52,6 +52,8 @@
+ #include <asm/kvm_page_track.h>
+ #include "trace.h"
+ 
++#include "paging.h"
++
+ extern bool itlb_multihit_kvm_mitigation;
+ 
+ static int __read_mostly nx_huge_pages = -1;
+diff --git a/arch/x86/kvm/mmu/paging.h b/arch/x86/kvm/mmu/paging.h
+new file mode 100644
+index 0000000000000..de8ab323bb707
+--- /dev/null
++++ b/arch/x86/kvm/mmu/paging.h
+@@ -0,0 +1,14 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/* Shadow paging constants/helpers that don't need to be #undef'd. */
++#ifndef __KVM_X86_PAGING_H
++#define __KVM_X86_PAGING_H
++
++#define GUEST_PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
++#define PT64_LVL_ADDR_MASK(level) \
++	(GUEST_PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + (((level) - 1) \
++						* PT64_LEVEL_BITS))) - 1))
++#define PT64_LVL_OFFSET_MASK(level) \
++	(GUEST_PT64_BASE_ADDR_MASK & ((1ULL << (PAGE_SHIFT + (((level) - 1) \
++						* PT64_LEVEL_BITS))) - 1))
++#endif /* __KVM_X86_PAGING_H */
++
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index d6cd702e85b68..f8829134bf341 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -24,7 +24,7 @@
+ 	#define pt_element_t u64
+ 	#define guest_walker guest_walker64
+ 	#define FNAME(name) paging##64_##name
+-	#define PT_BASE_ADDR_MASK PT64_BASE_ADDR_MASK
++	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
+ 	#define PT_LVL_ADDR_MASK(lvl) PT64_LVL_ADDR_MASK(lvl)
+ 	#define PT_LVL_OFFSET_MASK(lvl) PT64_LVL_OFFSET_MASK(lvl)
+ 	#define PT_INDEX(addr, level) PT64_INDEX(addr, level)
+@@ -57,7 +57,7 @@
+ 	#define pt_element_t u64
+ 	#define guest_walker guest_walkerEPT
+ 	#define FNAME(name) ept_##name
+-	#define PT_BASE_ADDR_MASK PT64_BASE_ADDR_MASK
++	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
+ 	#define PT_LVL_ADDR_MASK(lvl) PT64_LVL_ADDR_MASK(lvl)
+ 	#define PT_LVL_OFFSET_MASK(lvl) PT64_LVL_OFFSET_MASK(lvl)
+ 	#define PT_INDEX(addr, level) PT64_INDEX(addr, level)
+diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
+index 2b3a30bd38b07..667f207d3d099 100644
+--- a/arch/x86/kvm/mmu/spte.h
++++ b/arch/x86/kvm/mmu/spte.h
+@@ -23,12 +23,6 @@
+ #else
+ #define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
+ #endif
+-#define PT64_LVL_ADDR_MASK(level) \
+-	(PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + (((level) - 1) \
+-						* PT64_LEVEL_BITS))) - 1))
+-#define PT64_LVL_OFFSET_MASK(level) \
+-	(PT64_BASE_ADDR_MASK & ((1ULL << (PAGE_SHIFT + (((level) - 1) \
+-						* PT64_LEVEL_BITS))) - 1))
+ 
+ #define PT64_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | shadow_user_mask \
+ 			| shadow_x_mask | shadow_nx_mask | shadow_me_mask)
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 41d44fb5f753d..1c9226cd6cdec 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -2745,7 +2745,16 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ 			svm_disable_lbrv(vcpu);
+ 		break;
+ 	case MSR_VM_HSAVE_PA:
+-		svm->nested.hsave_msr = data;
++		/*
++		 * Old kernels did not validate the value written to
++		 * MSR_VM_HSAVE_PA.  Allow KVM_SET_MSR to set an invalid
++		 * value to allow live migrating buggy or malicious guests
++		 * originating from those kernels.
++		 */
++		if (!msr->host_initiated && !page_address_valid(vcpu, data))
++			return 1;
++
++		svm->nested.hsave_msr = data & PAGE_MASK;
+ 		break;
+ 	case MSR_VM_CR:
+ 		return svm_set_vm_cr(vcpu, data);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 7bf88e6cbd0e9..800914e9e12b9 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -9020,6 +9020,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		set_debugreg(vcpu->arch.eff_db[3], 3);
+ 		set_debugreg(vcpu->arch.dr6, 6);
+ 		vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_RELOAD;
++	} else if (unlikely(hw_breakpoint_active())) {
++		set_debugreg(0, 7);
+ 	}
+ 
+ 	exit_fastpath = kvm_x86_ops.run(vcpu);
+diff --git a/block/partitions/ldm.c b/block/partitions/ldm.c
+index d333786b5c7eb..cc86534c80ad9 100644
+--- a/block/partitions/ldm.c
++++ b/block/partitions/ldm.c
+@@ -510,7 +510,7 @@ static bool ldm_validate_partition_table(struct parsed_partitions *state)
+ 
+ 	p = (struct msdos_partition *)(data + 0x01BE);
+ 	for (i = 0; i < 4; i++, p++)
+-		if (SYS_IND (p) == LDM_PARTITION) {
++		if (p->sys_ind == LDM_PARTITION) {
+ 			result = true;
+ 			break;
+ 		}
+diff --git a/block/partitions/ldm.h b/block/partitions/ldm.h
+index d8d6beaa72c4d..8693704dcf5e9 100644
+--- a/block/partitions/ldm.h
++++ b/block/partitions/ldm.h
+@@ -84,9 +84,6 @@ struct parsed_partitions;
+ #define TOC_BITMAP1		"config"	/* Names of the two defined */
+ #define TOC_BITMAP2		"log"		/* bitmaps in the TOCBLOCK. */
+ 
+-/* Borrowed from msdos.c */
+-#define SYS_IND(p)		(get_unaligned(&(p)->sys_ind))
+-
+ struct frag {				/* VBLK Fragment handling */
+ 	struct list_head list;
+ 	u32		group;
+diff --git a/block/partitions/msdos.c b/block/partitions/msdos.c
+index 8f2fcc0802642..c94de377c5025 100644
+--- a/block/partitions/msdos.c
++++ b/block/partitions/msdos.c
+@@ -38,8 +38,6 @@
+  */
+ #include <asm/unaligned.h>
+ 
+-#define SYS_IND(p)	get_unaligned(&p->sys_ind)
+-
+ static inline sector_t nr_sects(struct msdos_partition *p)
+ {
+ 	return (sector_t)get_unaligned_le32(&p->nr_sects);
+@@ -52,9 +50,9 @@ static inline sector_t start_sect(struct msdos_partition *p)
+ 
+ static inline int is_extended_partition(struct msdos_partition *p)
+ {
+-	return (SYS_IND(p) == DOS_EXTENDED_PARTITION ||
+-		SYS_IND(p) == WIN98_EXTENDED_PARTITION ||
+-		SYS_IND(p) == LINUX_EXTENDED_PARTITION);
++	return (p->sys_ind == DOS_EXTENDED_PARTITION ||
++		p->sys_ind == WIN98_EXTENDED_PARTITION ||
++		p->sys_ind == LINUX_EXTENDED_PARTITION);
+ }
+ 
+ #define MSDOS_LABEL_MAGIC1	0x55
+@@ -193,7 +191,7 @@ static void parse_extended(struct parsed_partitions *state,
+ 
+ 			put_partition(state, state->next, next, size);
+ 			set_info(state, state->next, disksig);
+-			if (SYS_IND(p) == LINUX_RAID_PARTITION)
++			if (p->sys_ind == LINUX_RAID_PARTITION)
+ 				state->parts[state->next].flags = ADDPART_FLAG_RAID;
+ 			loopct = 0;
+ 			if (++state->next == state->limit)
+@@ -546,7 +544,7 @@ static void parse_minix(struct parsed_partitions *state,
+ 	 * a secondary MBR describing its subpartitions, or
+ 	 * the normal boot sector. */
+ 	if (msdos_magic_present(data + 510) &&
+-	    SYS_IND(p) == MINIX_PARTITION) { /* subpartition table present */
++	    p->sys_ind == MINIX_PARTITION) { /* subpartition table present */
+ 		char tmp[1 + BDEVNAME_SIZE + 10 + 9 + 1];
+ 
+ 		snprintf(tmp, sizeof(tmp), " %s%d: <minix:", state->name, origin);
+@@ -555,7 +553,7 @@ static void parse_minix(struct parsed_partitions *state,
+ 			if (state->next == state->limit)
+ 				break;
+ 			/* add each partition in use */
+-			if (SYS_IND(p) == MINIX_PARTITION)
++			if (p->sys_ind == MINIX_PARTITION)
+ 				put_partition(state, state->next++,
+ 					      start_sect(p), nr_sects(p));
+ 		}
+@@ -643,7 +641,7 @@ int msdos_partition(struct parsed_partitions *state)
+ 	p = (struct msdos_partition *) (data + 0x1be);
+ 	for (slot = 1 ; slot <= 4 ; slot++, p++) {
+ 		/* If this is an EFI GPT disk, msdos should ignore it. */
+-		if (SYS_IND(p) == EFI_PMBR_OSTYPE_EFI_GPT) {
++		if (p->sys_ind == EFI_PMBR_OSTYPE_EFI_GPT) {
+ 			put_dev_sector(sect);
+ 			return 0;
+ 		}
+@@ -685,11 +683,11 @@ int msdos_partition(struct parsed_partitions *state)
+ 		}
+ 		put_partition(state, slot, start, size);
+ 		set_info(state, slot, disksig);
+-		if (SYS_IND(p) == LINUX_RAID_PARTITION)
++		if (p->sys_ind == LINUX_RAID_PARTITION)
+ 			state->parts[slot].flags = ADDPART_FLAG_RAID;
+-		if (SYS_IND(p) == DM6_PARTITION)
++		if (p->sys_ind == DM6_PARTITION)
+ 			strlcat(state->pp_buf, "[DM]", PAGE_SIZE);
+-		if (SYS_IND(p) == EZD_PARTITION)
++		if (p->sys_ind == EZD_PARTITION)
+ 			strlcat(state->pp_buf, "[EZD]", PAGE_SIZE);
+ 	}
+ 
+@@ -698,7 +696,7 @@ int msdos_partition(struct parsed_partitions *state)
+ 	/* second pass - output for each on a separate line */
+ 	p = (struct msdos_partition *) (0x1be + data);
+ 	for (slot = 1 ; slot <= 4 ; slot++, p++) {
+-		unsigned char id = SYS_IND(p);
++		unsigned char id = p->sys_ind;
+ 		int n;
+ 
+ 		if (!nr_sects(p))
+diff --git a/certs/.gitignore b/certs/.gitignore
+index 2a24839906863..6cbd1f1a5837b 100644
+--- a/certs/.gitignore
++++ b/certs/.gitignore
+@@ -1,2 +1,3 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ x509_certificate_list
++x509_revocation_list
+diff --git a/drivers/acpi/acpi_amba.c b/drivers/acpi/acpi_amba.c
+index 49b781a9cd979..ab8a4e0191b19 100644
+--- a/drivers/acpi/acpi_amba.c
++++ b/drivers/acpi/acpi_amba.c
+@@ -76,6 +76,7 @@ static int amba_handler_attach(struct acpi_device *adev,
+ 		case IORESOURCE_MEM:
+ 			if (!address_found) {
+ 				dev->res = *rentry->res;
++				dev->res.name = dev_name(&dev->dev);
+ 				address_found = true;
+ 			}
+ 			break;
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index a322a7bd286ba..eb04b2f828eef 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -543,6 +543,15 @@ static const struct dmi_system_id video_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V131"),
+ 		},
+ 	},
++	{
++	 .callback = video_set_report_key_events,
++	 .driver_data = (void *)((uintptr_t)REPORT_BRIGHTNESS_KEY_EVENTS),
++	 .ident = "Dell Vostro 3350",
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3350"),
++		},
++	},
+ 	/*
+ 	 * Some machines change the brightness themselves when a brightness
+ 	 * hotkey gets pressed, despite us telling them not to. In this case
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index a314b9382442b..42acf9587ef38 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -946,6 +946,8 @@ static int virtblk_freeze(struct virtio_device *vdev)
+ 	blk_mq_quiesce_queue(vblk->disk->queue);
+ 
+ 	vdev->config->del_vqs(vdev);
++	kfree(vblk->vqs);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index 1836cc56e357b..673522874cec4 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -475,7 +475,7 @@ static struct port_buffer *get_inbuf(struct port *port)
+ 
+ 	buf = virtqueue_get_buf(port->in_vq, &len);
+ 	if (buf) {
+-		buf->len = len;
++		buf->len = min_t(size_t, len, buf->size);
+ 		buf->offset = 0;
+ 		port->stats.bytes_received += len;
+ 	}
+@@ -1712,7 +1712,7 @@ static void control_work_handler(struct work_struct *work)
+ 	while ((buf = virtqueue_get_buf(vq, &len))) {
+ 		spin_unlock(&portdev->c_ivq_lock);
+ 
+-		buf->len = len;
++		buf->len = min_t(size_t, len, buf->size);
+ 		buf->offset = 0;
+ 
+ 		handle_control_message(vq->vdev, portdev, buf);
+diff --git a/drivers/dma/fsl-qdma.c b/drivers/dma/fsl-qdma.c
+index ed2ab46b15e7f..045ead46ec8fc 100644
+--- a/drivers/dma/fsl-qdma.c
++++ b/drivers/dma/fsl-qdma.c
+@@ -1235,7 +1235,11 @@ static int fsl_qdma_probe(struct platform_device *pdev)
+ 	fsl_qdma->dma_dev.device_synchronize = fsl_qdma_synchronize;
+ 	fsl_qdma->dma_dev.device_terminate_all = fsl_qdma_terminate_all;
+ 
+-	dma_set_mask(&pdev->dev, DMA_BIT_MASK(40));
++	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(40));
++	if (ret) {
++		dev_err(&pdev->dev, "dma_set_mask failure.\n");
++		return ret;
++	}
+ 
+ 	platform_set_drvdata(pdev, fsl_qdma);
+ 
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index 6b2ce3f28f7b9..f9901fadb3a43 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -268,6 +268,10 @@ static void scmi_handle_response(struct scmi_chan_info *cinfo,
+ 		return;
+ 	}
+ 
++	/* rx.len could be shrunk in the sync do_xfer, so reset to maxsz */
++	if (msg_type == MSG_TYPE_DELAYED_RESP)
++		xfer->rx.len = info->desc->max_msg_size;
++
+ 	scmi_dump_header_dbg(dev, &xfer->hdr);
+ 
+ 	info->desc->ops->fetch_response(cinfo, xfer);
+diff --git a/drivers/firmware/tegra/bpmp-tegra210.c b/drivers/firmware/tegra/bpmp-tegra210.c
+index ae15940a078e3..c32754055c60b 100644
+--- a/drivers/firmware/tegra/bpmp-tegra210.c
++++ b/drivers/firmware/tegra/bpmp-tegra210.c
+@@ -210,7 +210,7 @@ static int tegra210_bpmp_init(struct tegra_bpmp *bpmp)
+ 	priv->tx_irq_data = irq_get_irq_data(err);
+ 	if (!priv->tx_irq_data) {
+ 		dev_err(&pdev->dev, "failed to get IRQ data for TX IRQ\n");
+-		return err;
++		return -ENOENT;
+ 	}
+ 
+ 	err = platform_get_irq_byname(pdev, "rx");
+diff --git a/drivers/firmware/turris-mox-rwtm.c b/drivers/firmware/turris-mox-rwtm.c
+index 50bb2a6d6ccf7..03f1eac9ad69b 100644
+--- a/drivers/firmware/turris-mox-rwtm.c
++++ b/drivers/firmware/turris-mox-rwtm.c
+@@ -147,11 +147,14 @@ MOX_ATTR_RO(pubkey, "%s\n", pubkey);
+ 
+ static int mox_get_status(enum mbox_cmd cmd, u32 retval)
+ {
+-	if (MBOX_STS_CMD(retval) != cmd ||
+-	    MBOX_STS_ERROR(retval) != MBOX_STS_SUCCESS)
++	if (MBOX_STS_CMD(retval) != cmd)
+ 		return -EIO;
+ 	else if (MBOX_STS_ERROR(retval) == MBOX_STS_FAIL)
+ 		return -(int)MBOX_STS_VALUE(retval);
++	else if (MBOX_STS_ERROR(retval) == MBOX_STS_BADCMD)
++		return -ENOSYS;
++	else if (MBOX_STS_ERROR(retval) != MBOX_STS_SUCCESS)
++		return -EIO;
+ 	else
+ 		return MBOX_STS_VALUE(retval);
+ }
+@@ -201,11 +204,14 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
+ 		return ret;
+ 
+ 	ret = mox_get_status(MBOX_CMD_BOARD_INFO, reply->retval);
+-	if (ret < 0 && ret != -ENODATA) {
+-		return ret;
+-	} else if (ret == -ENODATA) {
++	if (ret == -ENODATA) {
+ 		dev_warn(rwtm->dev,
+ 			 "Board does not have manufacturing information burned!\n");
++	} else if (ret == -ENOSYS) {
++		dev_notice(rwtm->dev,
++			   "Firmware does not support the BOARD_INFO command\n");
++	} else if (ret < 0) {
++		return ret;
+ 	} else {
+ 		rwtm->serial_number = reply->status[1];
+ 		rwtm->serial_number <<= 32;
+@@ -234,10 +240,13 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
+ 		return ret;
+ 
+ 	ret = mox_get_status(MBOX_CMD_ECDSA_PUB_KEY, reply->retval);
+-	if (ret < 0 && ret != -ENODATA) {
+-		return ret;
+-	} else if (ret == -ENODATA) {
++	if (ret == -ENODATA) {
+ 		dev_warn(rwtm->dev, "Board has no public key burned!\n");
++	} else if (ret == -ENOSYS) {
++		dev_notice(rwtm->dev,
++			   "Firmware does not support the ECDSA_PUB_KEY command\n");
++	} else if (ret < 0) {
++		return ret;
+ 	} else {
+ 		u32 *s = reply->status;
+ 
+@@ -251,6 +260,27 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
+ 	return 0;
+ }
+ 
++static int check_get_random_support(struct mox_rwtm *rwtm)
++{
++	struct armada_37xx_rwtm_tx_msg msg;
++	int ret;
++
++	msg.command = MBOX_CMD_GET_RANDOM;
++	msg.args[0] = 1;
++	msg.args[1] = rwtm->buf_phys;
++	msg.args[2] = 4;
++
++	ret = mbox_send_message(rwtm->mbox, &msg);
++	if (ret < 0)
++		return ret;
++
++	ret = wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2);
++	if (ret < 0)
++		return ret;
++
++	return mox_get_status(MBOX_CMD_GET_RANDOM, rwtm->reply.retval);
++}
++
+ static int mox_hwrng_read(struct hwrng *rng, void *data, size_t max, bool wait)
+ {
+ 	struct mox_rwtm *rwtm = (struct mox_rwtm *) rng->priv;
+@@ -488,6 +518,13 @@ static int turris_mox_rwtm_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		dev_warn(dev, "Cannot read board information: %i\n", ret);
+ 
++	ret = check_get_random_support(rwtm);
++	if (ret < 0) {
++		dev_notice(dev,
++			   "Firmware does not support the GET_RANDOM command\n");
++		goto free_channel;
++	}
++
+ 	rwtm->hwrng.name = DRIVER_NAME "_hwrng";
+ 	rwtm->hwrng.read = mox_hwrng_read;
+ 	rwtm->hwrng.priv = (unsigned long) rwtm;
+@@ -505,6 +542,8 @@ static int turris_mox_rwtm_probe(struct platform_device *pdev)
+ 		goto free_channel;
+ 	}
+ 
++	dev_info(dev, "HWRNG successfully registered\n");
++
+ 	return 0;
+ 
+ free_channel:
+diff --git a/drivers/fsi/fsi-master-aspeed.c b/drivers/fsi/fsi-master-aspeed.c
+index 90dbe58ca1edc..dbad73162c833 100644
+--- a/drivers/fsi/fsi-master-aspeed.c
++++ b/drivers/fsi/fsi-master-aspeed.c
+@@ -645,6 +645,7 @@ static const struct of_device_id fsi_master_aspeed_match[] = {
+ 	{ .compatible = "aspeed,ast2600-fsi-master" },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, fsi_master_aspeed_match);
+ 
+ static struct platform_driver fsi_master_aspeed_driver = {
+ 	.driver = {
+diff --git a/drivers/fsi/fsi-master-ast-cf.c b/drivers/fsi/fsi-master-ast-cf.c
+index 57a779a89b073..70c03e304d6c8 100644
+--- a/drivers/fsi/fsi-master-ast-cf.c
++++ b/drivers/fsi/fsi-master-ast-cf.c
+@@ -1427,6 +1427,7 @@ static const struct of_device_id fsi_master_acf_match[] = {
+ 	{ .compatible = "aspeed,ast2500-cf-fsi-master" },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, fsi_master_acf_match);
+ 
+ static struct platform_driver fsi_master_acf = {
+ 	.driver = {
+diff --git a/drivers/fsi/fsi-master-gpio.c b/drivers/fsi/fsi-master-gpio.c
+index aa97c4a250cb4..7d5f29b4b595d 100644
+--- a/drivers/fsi/fsi-master-gpio.c
++++ b/drivers/fsi/fsi-master-gpio.c
+@@ -882,6 +882,7 @@ static const struct of_device_id fsi_master_gpio_match[] = {
+ 	{ .compatible = "fsi-master-gpio" },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, fsi_master_gpio_match);
+ 
+ static struct platform_driver fsi_master_gpio_driver = {
+ 	.driver = {
+diff --git a/drivers/fsi/fsi-occ.c b/drivers/fsi/fsi-occ.c
+index a691f9732a13b..a9beef2ae5a09 100644
+--- a/drivers/fsi/fsi-occ.c
++++ b/drivers/fsi/fsi-occ.c
+@@ -579,6 +579,7 @@ static const struct of_device_id occ_match[] = {
+ 	{ .compatible = "ibm,p9-occ" },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, occ_match);
+ 
+ static struct platform_driver occ_driver = {
+ 	.driver = {
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 6898c27f71f85..7cc7d137133aa 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -1239,6 +1239,7 @@ static const struct of_device_id pca953x_dt_ids[] = {
+ 
+ 	{ .compatible = "onnn,cat9554", .data = OF_953X( 8, PCA_INT), },
+ 	{ .compatible = "onnn,pca9654", .data = OF_953X( 8, PCA_INT), },
++	{ .compatible = "onnn,pca9655", .data = OF_953X(16, PCA_INT), },
+ 
+ 	{ .compatible = "exar,xra1202", .data = OF_953X( 8, 0), },
+ 	{ }
+diff --git a/drivers/gpio/gpio-zynq.c b/drivers/gpio/gpio-zynq.c
+index 3521c1dc3ac00..c288a7502de25 100644
+--- a/drivers/gpio/gpio-zynq.c
++++ b/drivers/gpio/gpio-zynq.c
+@@ -736,6 +736,11 @@ static int __maybe_unused zynq_gpio_suspend(struct device *dev)
+ 	struct zynq_gpio *gpio = dev_get_drvdata(dev);
+ 	struct irq_data *data = irq_get_irq_data(gpio->irq);
+ 
++	if (!data) {
++		dev_err(dev, "irq_get_irq_data() failed\n");
++		return -EINVAL;
++	}
++
+ 	if (!device_may_wakeup(dev))
+ 		disable_irq(gpio->irq);
+ 
+@@ -753,6 +758,11 @@ static int __maybe_unused zynq_gpio_resume(struct device *dev)
+ 	struct irq_data *data = irq_get_irq_data(gpio->irq);
+ 	int ret;
+ 
++	if (!data) {
++		dev_err(dev, "irq_get_irq_data() failed\n");
++		return -EINVAL;
++	}
++
+ 	if (!device_may_wakeup(dev))
+ 		enable_irq(gpio->irq);
+ 
+@@ -1001,8 +1011,11 @@ err_pm_dis:
+ static int zynq_gpio_remove(struct platform_device *pdev)
+ {
+ 	struct zynq_gpio *gpio = platform_get_drvdata(pdev);
++	int ret;
+ 
+-	pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_get_sync(&pdev->dev);
++	if (ret < 0)
++		dev_warn(&pdev->dev, "pm_runtime_get_sync() Failed\n");
+ 	gpiochip_remove(&gpio->chip);
+ 	clk_disable_unprepare(gpio->clk);
+ 	device_set_wakeup_capable(&pdev->dev, 0);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 65803e153a223..d243e60c6eef7 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -452,13 +452,9 @@ static const struct sysfs_ops procfs_stats_ops = {
+ 	.show = kfd_procfs_stats_show,
+ };
+ 
+-static struct attribute *procfs_stats_attrs[] = {
+-	NULL
+-};
+-
+ static struct kobj_type procfs_stats_type = {
+ 	.sysfs_ops = &procfs_stats_ops,
+-	.default_attrs = procfs_stats_attrs,
++	.release = kfd_procfs_kobj_release,
+ };
+ 
+ int kfd_procfs_add_queue(struct queue *q)
+@@ -973,9 +969,11 @@ static void kfd_process_wq_release(struct work_struct *work)
+ 		list_for_each_entry(pdd, &p->per_device_data, per_device_list) {
+ 			sysfs_remove_file(p->kobj, &pdd->attr_vram);
+ 			sysfs_remove_file(p->kobj, &pdd->attr_sdma);
+-			sysfs_remove_file(p->kobj, &pdd->attr_evict);
+-			if (pdd->dev->kfd2kgd->get_cu_occupancy != NULL)
+-				sysfs_remove_file(p->kobj, &pdd->attr_cu_occupancy);
++
++			sysfs_remove_file(pdd->kobj_stats, &pdd->attr_evict);
++			if (pdd->dev->kfd2kgd->get_cu_occupancy)
++				sysfs_remove_file(pdd->kobj_stats,
++						  &pdd->attr_cu_occupancy);
+ 			kobject_del(pdd->kobj_stats);
+ 			kobject_put(pdd->kobj_stats);
+ 			pdd->kobj_stats = NULL;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index eb1635ac89887..43c07ac2c6fce 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -153,6 +153,7 @@ void pqm_uninit(struct process_queue_manager *pqm)
+ 		if (pqn->q && pqn->q->gws)
+ 			amdgpu_amdkfd_remove_gws_from_process(pqm->process->kgd_process_info,
+ 				pqn->q->gws);
++		kfd_procfs_del_queue(pqn->q);
+ 		uninit_queue(pqn->q);
+ 		list_del(&pqn->process_queue_list);
+ 		kfree(pqn);
+diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
+index cf08215a9f21b..0d163511564e7 100644
+--- a/drivers/gpu/drm/ast/ast_main.c
++++ b/drivers/gpu/drm/ast/ast_main.c
+@@ -406,6 +406,7 @@ struct ast_private *ast_device_create(struct drm_driver *drv,
+ 		return ast;
+ 	dev = &ast->base;
+ 
++	dev->pdev = pdev;
+ 	pci_set_drvdata(pdev, dev);
+ 
+ 	ast->regs = pcim_iomap(pdev, 1, 0);
+@@ -447,8 +448,8 @@ struct ast_private *ast_device_create(struct drm_driver *drv,
+ 
+ 	/* map reserved buffer */
+ 	ast->dp501_fw_buf = NULL;
+-	if (dev->vram_mm->vram_size < pci_resource_len(pdev, 0)) {
+-		ast->dp501_fw_buf = pci_iomap_range(pdev, 0, dev->vram_mm->vram_size, 0);
++	if (dev->vram_mm->vram_size < pci_resource_len(dev->pdev, 0)) {
++		ast->dp501_fw_buf = pci_iomap_range(dev->pdev, 0, dev->vram_mm->vram_size, 0);
+ 		if (!ast->dp501_fw_buf)
+ 			drm_info(dev, "failed to map reserved buffer!\n");
+ 	}
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index a08cc6b53bc2f..861f16dfd1a3d 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -94,6 +94,9 @@ static int drm_dp_mst_register_i2c_bus(struct drm_dp_mst_port *port);
+ static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_mst_port *port);
+ static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr);
+ 
++static bool drm_dp_mst_port_downstream_of_branch(struct drm_dp_mst_port *port,
++						 struct drm_dp_mst_branch *branch);
++
+ #define DBG_PREFIX "[dp_mst]"
+ 
+ #define DP_STR(x) [DP_ ## x] = #x
+@@ -2499,7 +2502,7 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
+ {
+ 	struct drm_dp_mst_topology_mgr *mgr = mstb->mgr;
+ 	struct drm_dp_mst_port *port;
+-	int old_ddps, old_input, ret, i;
++	int old_ddps, ret;
+ 	u8 new_pdt;
+ 	bool new_mcs;
+ 	bool dowork = false, create_connector = false;
+@@ -2531,7 +2534,6 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
+ 	}
+ 
+ 	old_ddps = port->ddps;
+-	old_input = port->input;
+ 	port->input = conn_stat->input_port;
+ 	port->ldps = conn_stat->legacy_device_plug_status;
+ 	port->ddps = conn_stat->displayport_device_plug_status;
+@@ -2554,28 +2556,6 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
+ 		dowork = false;
+ 	}
+ 
+-	if (!old_input && old_ddps != port->ddps && !port->ddps) {
+-		for (i = 0; i < mgr->max_payloads; i++) {
+-			struct drm_dp_vcpi *vcpi = mgr->proposed_vcpis[i];
+-			struct drm_dp_mst_port *port_validated;
+-
+-			if (!vcpi)
+-				continue;
+-
+-			port_validated =
+-				container_of(vcpi, struct drm_dp_mst_port, vcpi);
+-			port_validated =
+-				drm_dp_mst_topology_get_port_validated(mgr, port_validated);
+-			if (!port_validated) {
+-				mutex_lock(&mgr->payload_lock);
+-				vcpi->num_slots = 0;
+-				mutex_unlock(&mgr->payload_lock);
+-			} else {
+-				drm_dp_mst_topology_put_port(port_validated);
+-			}
+-		}
+-	}
+-
+ 	if (port->connector)
+ 		drm_modeset_unlock(&mgr->base.lock);
+ 	else if (create_connector)
+@@ -3385,6 +3365,7 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
+ 	struct drm_dp_mst_port *port;
+ 	int i, j;
+ 	int cur_slots = 1;
++	bool skip;
+ 
+ 	mutex_lock(&mgr->payload_lock);
+ 	for (i = 0; i < mgr->max_payloads; i++) {
+@@ -3399,6 +3380,16 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
+ 			port = container_of(vcpi, struct drm_dp_mst_port,
+ 					    vcpi);
+ 
++			mutex_lock(&mgr->lock);
++			skip = !drm_dp_mst_port_downstream_of_branch(port, mgr->mst_primary);
++			mutex_unlock(&mgr->lock);
++
++			if (skip) {
++				drm_dbg_kms(mgr->dev,
++					    "Virtual channel %d is not in current topology\n",
++					    i);
++				continue;
++			}
+ 			/* Validated ports don't matter if we're releasing
+ 			 * VCPI
+ 			 */
+@@ -3406,8 +3397,16 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
+ 				port = drm_dp_mst_topology_get_port_validated(
+ 				    mgr, port);
+ 				if (!port) {
+-					mutex_unlock(&mgr->payload_lock);
+-					return -EINVAL;
++					if (vcpi->num_slots == payload->num_slots) {
++						cur_slots += vcpi->num_slots;
++						payload->start_slot = req_payload.start_slot;
++						continue;
++					} else {
++						drm_dbg_kms(mgr->dev,
++							    "Fail:set payload to invalid sink");
++						mutex_unlock(&mgr->payload_lock);
++						return -EINVAL;
++					}
+ 				}
+ 				put_port = true;
+ 			}
+@@ -3491,6 +3490,7 @@ int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr)
+ 	struct drm_dp_mst_port *port;
+ 	int i;
+ 	int ret = 0;
++	bool skip;
+ 
+ 	mutex_lock(&mgr->payload_lock);
+ 	for (i = 0; i < mgr->max_payloads; i++) {
+@@ -3500,6 +3500,13 @@ int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr)
+ 
+ 		port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi);
+ 
++		mutex_lock(&mgr->lock);
++		skip = !drm_dp_mst_port_downstream_of_branch(port, mgr->mst_primary);
++		mutex_unlock(&mgr->lock);
++
++		if (skip)
++			continue;
++
+ 		DRM_DEBUG_KMS("payload %d %d\n", i, mgr->payloads[i].payload_state);
+ 		if (mgr->payloads[i].payload_state == DP_PAYLOAD_LOCAL) {
+ 			ret = drm_dp_create_payload_step2(mgr, port, mgr->proposed_vcpis[i]->vcpi, &mgr->payloads[i]);
+@@ -4581,9 +4588,18 @@ EXPORT_SYMBOL(drm_dp_mst_reset_vcpi_slots);
+ void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
+ 				struct drm_dp_mst_port *port)
+ {
++	bool skip;
++
+ 	if (!port->vcpi.vcpi)
+ 		return;
+ 
++	mutex_lock(&mgr->lock);
++	skip = !drm_dp_mst_port_downstream_of_branch(port, mgr->mst_primary);
++	mutex_unlock(&mgr->lock);
++
++	if (skip)
++		return;
++
+ 	drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi);
+ 	port->vcpi.num_slots = 0;
+ 	port->vcpi.pbn = 0;
+diff --git a/drivers/gpu/drm/gma500/framebuffer.c b/drivers/gpu/drm/gma500/framebuffer.c
+index 54d9876b5305a..6ef4ea07d1bb8 100644
+--- a/drivers/gpu/drm/gma500/framebuffer.c
++++ b/drivers/gpu/drm/gma500/framebuffer.c
+@@ -435,6 +435,7 @@ static struct drm_framebuffer *psb_user_framebuffer_create
+ 			 const struct drm_mode_fb_cmd2 *cmd)
+ {
+ 	struct drm_gem_object *obj;
++	struct drm_framebuffer *fb;
+ 
+ 	/*
+ 	 *	Find the GEM object and thus the gtt range object that is
+@@ -445,7 +446,11 @@ static struct drm_framebuffer *psb_user_framebuffer_create
+ 		return ERR_PTR(-ENOENT);
+ 
+ 	/* Let the core code do all the work */
+-	return psb_framebuffer_create(dev, cmd, obj);
++	fb = psb_framebuffer_create(dev, cmd, obj);
++	if (IS_ERR(fb))
++		drm_gem_object_put(obj);
++
++	return fb;
+ }
+ 
+ static int psbfb_probe(struct drm_fb_helper *fb_helper,
+diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+index f08e25e95746e..be27f9889431e 100644
+--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
++++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+@@ -299,10 +299,7 @@ static void __gen8_ppgtt_alloc(struct i915_address_space * const vm,
+ 			__i915_gem_object_pin_pages(pt->base);
+ 			i915_gem_object_make_unshrinkable(pt->base);
+ 
+-			if (lvl ||
+-			    gen8_pt_count(*start, end) < I915_PDES ||
+-			    intel_vgpu_active(vm->i915))
+-				fill_px(pt, vm->scratch[lvl]->encode);
++			fill_px(pt, vm->scratch[lvl]->encode);
+ 
+ 			spin_lock(&pd->lock);
+ 			if (likely(!pd->entry[idx])) {
+diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+index b5937b39145a4..cd71631bef0ca 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
++++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+@@ -348,7 +348,7 @@ static struct i915_fence_reg *fence_find(struct i915_ggtt *ggtt)
+ 	if (intel_has_pending_fb_unpin(ggtt->vm.i915))
+ 		return ERR_PTR(-EAGAIN);
+ 
+-	return ERR_PTR(-EDEADLK);
++	return ERR_PTR(-ENOBUFS);
+ }
+ 
+ int __i915_vma_pin_fence(struct i915_vma *vma)
+diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+index a3d1617d7c67e..b6bb5fc7d183e 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
++++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+@@ -347,7 +347,7 @@ static void ingenic_drm_plane_enable(struct ingenic_drm *priv,
+ 	unsigned int en_bit;
+ 
+ 	if (priv->soc_info->has_osd) {
+-		if (plane->type == DRM_PLANE_TYPE_PRIMARY)
++		if (plane != &priv->f0)
+ 			en_bit = JZ_LCD_OSDC_F1EN;
+ 		else
+ 			en_bit = JZ_LCD_OSDC_F0EN;
+@@ -362,7 +362,7 @@ void ingenic_drm_plane_disable(struct device *dev, struct drm_plane *plane)
+ 	unsigned int en_bit;
+ 
+ 	if (priv->soc_info->has_osd) {
+-		if (plane->type == DRM_PLANE_TYPE_PRIMARY)
++		if (plane != &priv->f0)
+ 			en_bit = JZ_LCD_OSDC_F1EN;
+ 		else
+ 			en_bit = JZ_LCD_OSDC_F0EN;
+@@ -389,8 +389,7 @@ void ingenic_drm_plane_config(struct device *dev,
+ 
+ 	ingenic_drm_plane_enable(priv, plane);
+ 
+-	if (priv->soc_info->has_osd &&
+-	    plane->type == DRM_PLANE_TYPE_PRIMARY) {
++	if (priv->soc_info->has_osd && plane != &priv->f0) {
+ 		switch (fourcc) {
+ 		case DRM_FORMAT_XRGB1555:
+ 			ctrl |= JZ_LCD_OSDCTRL_RGB555;
+@@ -423,7 +422,7 @@ void ingenic_drm_plane_config(struct device *dev,
+ 	}
+ 
+ 	if (priv->soc_info->has_osd) {
+-		if (plane->type == DRM_PLANE_TYPE_PRIMARY) {
++		if (plane != &priv->f0) {
+ 			xy_reg = JZ_REG_LCD_XYP1;
+ 			size_reg = JZ_REG_LCD_SIZE1;
+ 		} else {
+@@ -455,7 +454,7 @@ static void ingenic_drm_plane_atomic_update(struct drm_plane *plane,
+ 		height = state->src_h >> 16;
+ 		cpp = state->fb->format->cpp[0];
+ 
+-		if (priv->soc_info->has_osd && plane->type == DRM_PLANE_TYPE_OVERLAY)
++		if (!priv->soc_info->has_osd || plane == &priv->f0)
+ 			hwdesc = priv->dma_hwdesc_f0;
+ 		else
+ 			hwdesc = priv->dma_hwdesc_f1;
+@@ -692,6 +691,7 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
+ 	const struct jz_soc_info *soc_info;
+ 	struct ingenic_drm *priv;
+ 	struct clk *parent_clk;
++	struct drm_plane *primary;
+ 	struct drm_bridge *bridge;
+ 	struct drm_panel *panel;
+ 	struct drm_encoder *encoder;
+@@ -784,9 +784,11 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
+ 	if (soc_info->has_osd)
+ 		priv->ipu_plane = drm_plane_from_index(drm, 0);
+ 
+-	drm_plane_helper_add(&priv->f1, &ingenic_drm_plane_helper_funcs);
++	primary = priv->soc_info->has_osd ? &priv->f1 : &priv->f0;
++
++	drm_plane_helper_add(primary, &ingenic_drm_plane_helper_funcs);
+ 
+-	ret = drm_universal_plane_init(drm, &priv->f1, 1,
++	ret = drm_universal_plane_init(drm, primary, 1,
+ 				       &ingenic_drm_primary_plane_funcs,
+ 				       ingenic_drm_primary_formats,
+ 				       ARRAY_SIZE(ingenic_drm_primary_formats),
+@@ -798,7 +800,7 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
+ 
+ 	drm_crtc_helper_add(&priv->crtc, &ingenic_drm_crtc_helper_funcs);
+ 
+-	ret = drm_crtc_init_with_planes(drm, &priv->crtc, &priv->f1,
++	ret = drm_crtc_init_with_planes(drm, &priv->crtc, primary,
+ 					NULL, &ingenic_drm_crtc_funcs, NULL);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to init CRTC: %i\n", ret);
+diff --git a/drivers/gpu/drm/ingenic/ingenic-ipu.c b/drivers/gpu/drm/ingenic/ingenic-ipu.c
+index fc8c6e970ee31..06fd118b1444d 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-ipu.c
++++ b/drivers/gpu/drm/ingenic/ingenic-ipu.c
+@@ -753,7 +753,7 @@ static int ingenic_ipu_bind(struct device *dev, struct device *master, void *d)
+ 
+ 	err = drm_universal_plane_init(drm, plane, 1, &ingenic_ipu_plane_funcs,
+ 				       soc_info->formats, soc_info->num_formats,
+-				       NULL, DRM_PLANE_TYPE_PRIMARY, NULL);
++				       NULL, DRM_PLANE_TYPE_OVERLAY, NULL);
+ 	if (err) {
+ 		dev_err(dev, "Failed to init plane: %i\n", err);
+ 		return err;
+diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c
+index c9ac3dc651135..9cb8c7d13d46b 100644
+--- a/drivers/hwtracing/intel_th/core.c
++++ b/drivers/hwtracing/intel_th/core.c
+@@ -215,6 +215,22 @@ static ssize_t port_show(struct device *dev, struct device_attribute *attr,
+ 
+ static DEVICE_ATTR_RO(port);
+ 
++static void intel_th_trace_prepare(struct intel_th_device *thdev)
++{
++	struct intel_th_device *hub = to_intel_th_hub(thdev);
++	struct intel_th_driver *hubdrv = to_intel_th_driver(hub->dev.driver);
++
++	if (hub->type != INTEL_TH_SWITCH)
++		return;
++
++	if (thdev->type != INTEL_TH_OUTPUT)
++		return;
++
++	pm_runtime_get_sync(&thdev->dev);
++	hubdrv->prepare(hub, &thdev->output);
++	pm_runtime_put(&thdev->dev);
++}
++
+ static int intel_th_output_activate(struct intel_th_device *thdev)
+ {
+ 	struct intel_th_driver *thdrv =
+@@ -235,6 +251,7 @@ static int intel_th_output_activate(struct intel_th_device *thdev)
+ 	if (ret)
+ 		goto fail_put;
+ 
++	intel_th_trace_prepare(thdev);
+ 	if (thdrv->activate)
+ 		ret = thdrv->activate(thdev);
+ 	else
+diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c
+index 28509b02a0b56..b3308934a687d 100644
+--- a/drivers/hwtracing/intel_th/gth.c
++++ b/drivers/hwtracing/intel_th/gth.c
+@@ -564,6 +564,21 @@ static void gth_tscu_resync(struct gth_device *gth)
+ 	iowrite32(reg, gth->base + REG_TSCU_TSUCTRL);
+ }
+ 
++static void intel_th_gth_prepare(struct intel_th_device *thdev,
++				 struct intel_th_output *output)
++{
++	struct gth_device *gth = dev_get_drvdata(&thdev->dev);
++	int count;
++
++	/*
++	 * Wait until the output port is in reset before we start
++	 * programming it.
++	 */
++	for (count = GTH_PLE_WAITLOOP_DEPTH;
++	     count && !(gth_output_get(gth, output->port) & BIT(5)); count--)
++		cpu_relax();
++}
++
+ /**
+  * intel_th_gth_enable() - enable tracing to an output device
+  * @thdev:	GTH device
+@@ -815,6 +830,7 @@ static struct intel_th_driver intel_th_gth_driver = {
+ 	.assign		= intel_th_gth_assign,
+ 	.unassign	= intel_th_gth_unassign,
+ 	.set_output	= intel_th_gth_set_output,
++	.prepare	= intel_th_gth_prepare,
+ 	.enable		= intel_th_gth_enable,
+ 	.trig_switch	= intel_th_gth_switch,
+ 	.disable	= intel_th_gth_disable,
+diff --git a/drivers/hwtracing/intel_th/intel_th.h b/drivers/hwtracing/intel_th/intel_th.h
+index 5fe694708b7a3..595615b791086 100644
+--- a/drivers/hwtracing/intel_th/intel_th.h
++++ b/drivers/hwtracing/intel_th/intel_th.h
+@@ -143,6 +143,7 @@ intel_th_output_assigned(struct intel_th_device *thdev)
+  * @remove:	remove method
+  * @assign:	match a given output type device against available outputs
+  * @unassign:	deassociate an output type device from an output port
++ * @prepare:	prepare output port for tracing
+  * @enable:	enable tracing for a given output device
+  * @disable:	disable tracing for a given output device
+  * @irq:	interrupt callback
+@@ -164,6 +165,8 @@ struct intel_th_driver {
+ 					  struct intel_th_device *othdev);
+ 	void			(*unassign)(struct intel_th_device *thdev,
+ 					    struct intel_th_device *othdev);
++	void			(*prepare)(struct intel_th_device *thdev,
++					   struct intel_th_output *output);
+ 	void			(*enable)(struct intel_th_device *thdev,
+ 					  struct intel_th_output *output);
+ 	void			(*trig_switch)(struct intel_th_device *thdev,
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index c13e7f107dd36..bdce6d3e53273 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -24,6 +24,7 @@
+ #include <linux/i2c-smbus.h>
+ #include <linux/idr.h>
+ #include <linux/init.h>
++#include <linux/interrupt.h>
+ #include <linux/irqflags.h>
+ #include <linux/jump_label.h>
+ #include <linux/kernel.h>
+@@ -585,6 +586,8 @@ static void i2c_device_shutdown(struct device *dev)
+ 	driver = to_i2c_driver(dev->driver);
+ 	if (driver->shutdown)
+ 		driver->shutdown(client);
++	else if (client->irq > 0)
++		disable_irq(client->irq);
+ }
+ 
+ static void i2c_client_dev_release(struct device *dev)
+diff --git a/drivers/iio/gyro/fxas21002c_core.c b/drivers/iio/gyro/fxas21002c_core.c
+index b7523357d8eba..ec6bd15bd2d4c 100644
+--- a/drivers/iio/gyro/fxas21002c_core.c
++++ b/drivers/iio/gyro/fxas21002c_core.c
+@@ -366,14 +366,7 @@ out_unlock:
+ 
+ static int  fxas21002c_pm_get(struct fxas21002c_data *data)
+ {
+-	struct device *dev = regmap_get_device(data->regmap);
+-	int ret;
+-
+-	ret = pm_runtime_get_sync(dev);
+-	if (ret < 0)
+-		pm_runtime_put_noidle(dev);
+-
+-	return ret;
++	return pm_runtime_resume_and_get(regmap_get_device(data->regmap));
+ }
+ 
+ static int  fxas21002c_pm_put(struct fxas21002c_data *data)
+@@ -1005,7 +998,6 @@ int fxas21002c_core_probe(struct device *dev, struct regmap *regmap, int irq,
+ pm_disable:
+ 	pm_runtime_disable(dev);
+ 	pm_runtime_set_suspended(dev);
+-	pm_runtime_put_noidle(dev);
+ 
+ 	return ret;
+ }
+@@ -1019,7 +1011,6 @@ void fxas21002c_core_remove(struct device *dev)
+ 
+ 	pm_runtime_disable(dev);
+ 	pm_runtime_set_suspended(dev);
+-	pm_runtime_put_noidle(dev);
+ }
+ EXPORT_SYMBOL_GPL(fxas21002c_core_remove);
+ 
+diff --git a/drivers/iio/magnetometer/bmc150_magn.c b/drivers/iio/magnetometer/bmc150_magn.c
+index 8042175275d09..8eacfaf584cfd 100644
+--- a/drivers/iio/magnetometer/bmc150_magn.c
++++ b/drivers/iio/magnetometer/bmc150_magn.c
+@@ -263,7 +263,7 @@ static int bmc150_magn_set_power_state(struct bmc150_magn_data *data, bool on)
+ 	int ret;
+ 
+ 	if (on) {
+-		ret = pm_runtime_get_sync(data->dev);
++		ret = pm_runtime_resume_and_get(data->dev);
+ 	} else {
+ 		pm_runtime_mark_last_busy(data->dev);
+ 		ret = pm_runtime_put_autosuspend(data->dev);
+@@ -272,9 +272,6 @@ static int bmc150_magn_set_power_state(struct bmc150_magn_data *data, bool on)
+ 	if (ret < 0) {
+ 		dev_err(data->dev,
+ 			"failed to change power state to %d\n", on);
+-		if (on)
+-			pm_runtime_put_noidle(data->dev);
+-
+ 		return ret;
+ 	}
+ #endif
+@@ -944,12 +941,14 @@ int bmc150_magn_probe(struct device *dev, struct regmap *regmap,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "unable to register iio device\n");
+-		goto err_buffer_cleanup;
++		goto err_disable_runtime_pm;
+ 	}
+ 
+ 	dev_dbg(dev, "Registered device %s\n", name);
+ 	return 0;
+ 
++err_disable_runtime_pm:
++	pm_runtime_disable(dev);
+ err_buffer_cleanup:
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ err_free_irq:
+@@ -973,7 +972,6 @@ int bmc150_magn_remove(struct device *dev)
+ 
+ 	pm_runtime_disable(dev);
+ 	pm_runtime_set_suspended(dev);
+-	pm_runtime_put_noidle(dev);
+ 
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ 
+diff --git a/drivers/input/touchscreen/hideep.c b/drivers/input/touchscreen/hideep.c
+index ddad4a82a5e55..e9547ee297564 100644
+--- a/drivers/input/touchscreen/hideep.c
++++ b/drivers/input/touchscreen/hideep.c
+@@ -361,13 +361,16 @@ static int hideep_enter_pgm(struct hideep_ts *ts)
+ 	return -EIO;
+ }
+ 
+-static void hideep_nvm_unlock(struct hideep_ts *ts)
++static int hideep_nvm_unlock(struct hideep_ts *ts)
+ {
+ 	u32 unmask_code;
++	int error;
+ 
+ 	hideep_pgm_w_reg(ts, HIDEEP_FLASH_CFG, HIDEEP_NVM_SFR_RPAGE);
+-	hideep_pgm_r_reg(ts, 0x0000000C, &unmask_code);
++	error = hideep_pgm_r_reg(ts, 0x0000000C, &unmask_code);
+ 	hideep_pgm_w_reg(ts, HIDEEP_FLASH_CFG, HIDEEP_NVM_DEFAULT_PAGE);
++	if (error)
++		return error;
+ 
+ 	/* make it unprotected code */
+ 	unmask_code &= ~HIDEEP_PROT_MODE;
+@@ -384,6 +387,8 @@ static void hideep_nvm_unlock(struct hideep_ts *ts)
+ 	NVM_W_SFR(HIDEEP_NVM_MASK_OFS, ts->nvm_mask);
+ 	SET_FLASH_HWCONTROL();
+ 	hideep_pgm_w_reg(ts, HIDEEP_FLASH_CFG, HIDEEP_NVM_DEFAULT_PAGE);
++
++	return 0;
+ }
+ 
+ static int hideep_check_status(struct hideep_ts *ts)
+@@ -462,7 +467,9 @@ static int hideep_program_nvm(struct hideep_ts *ts,
+ 	u32 addr = 0;
+ 	int error;
+ 
+-	hideep_nvm_unlock(ts);
++       error = hideep_nvm_unlock(ts);
++       if (error)
++               return error;
+ 
+ 	while (ucode_len > 0) {
+ 		xfer_len = min_t(size_t, ucode_len, HIDEEP_NVM_PAGE_SIZE);
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+index bcbacf22331d6..df24bbe3ea4f1 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+@@ -74,7 +74,7 @@ static bool using_legacy_binding, using_generic_binding;
+ static inline int arm_smmu_rpm_get(struct arm_smmu_device *smmu)
+ {
+ 	if (pm_runtime_enabled(smmu->dev))
+-		return pm_runtime_get_sync(smmu->dev);
++		return pm_runtime_resume_and_get(smmu->dev);
+ 
+ 	return 0;
+ }
+@@ -1284,6 +1284,7 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
+ 	u64 phys;
+ 	unsigned long va, flags;
+ 	int ret, idx = cfg->cbndx;
++	phys_addr_t addr = 0;
+ 
+ 	ret = arm_smmu_rpm_get(smmu);
+ 	if (ret < 0)
+@@ -1303,6 +1304,7 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
+ 		dev_err(dev,
+ 			"iova to phys timed out on %pad. Falling back to software table walk.\n",
+ 			&iova);
++		arm_smmu_rpm_put(smmu);
+ 		return ops->iova_to_phys(ops, iova);
+ 	}
+ 
+@@ -1311,12 +1313,14 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
+ 	if (phys & ARM_SMMU_CB_PAR_F) {
+ 		dev_err(dev, "translation fault!\n");
+ 		dev_err(dev, "PAR = 0x%llx\n", phys);
+-		return 0;
++		goto out;
+ 	}
+ 
++	addr = (phys & GENMASK_ULL(39, 12)) | (iova & 0xfff);
++out:
+ 	arm_smmu_rpm_put(smmu);
+ 
+-	return (phys & GENMASK_ULL(39, 12)) | (iova & 0xfff);
++	return addr;
+ }
+ 
+ static phys_addr_t arm_smmu_iova_to_phys(struct iommu_domain *domain,
+diff --git a/drivers/leds/leds-tlc591xx.c b/drivers/leds/leds-tlc591xx.c
+index 5b9dfdf743ecd..cb7bd1353f9f0 100644
+--- a/drivers/leds/leds-tlc591xx.c
++++ b/drivers/leds/leds-tlc591xx.c
+@@ -148,16 +148,20 @@ static int
+ tlc591xx_probe(struct i2c_client *client,
+ 	       const struct i2c_device_id *id)
+ {
+-	struct device_node *np = dev_of_node(&client->dev), *child;
++	struct device_node *np, *child;
+ 	struct device *dev = &client->dev;
+ 	const struct tlc591xx *tlc591xx;
+ 	struct tlc591xx_priv *priv;
+ 	int err, count, reg;
+ 
+-	tlc591xx = device_get_match_data(dev);
++	np = dev_of_node(dev);
+ 	if (!np)
+ 		return -ENODEV;
+ 
++	tlc591xx = device_get_match_data(dev);
++	if (!tlc591xx)
++		return -ENODEV;
++
+ 	count = of_get_available_child_count(np);
+ 	if (!count || count > tlc591xx->max_leds)
+ 		return -EINVAL;
+diff --git a/drivers/leds/leds-turris-omnia.c b/drivers/leds/leds-turris-omnia.c
+index 880fc8def5309..ec87a958f1512 100644
+--- a/drivers/leds/leds-turris-omnia.c
++++ b/drivers/leds/leds-turris-omnia.c
+@@ -277,6 +277,7 @@ static const struct i2c_device_id omnia_id[] = {
+ 	{ "omnia", 0 },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(i2c, omnia_id);
+ 
+ static struct i2c_driver omnia_leds_driver = {
+ 	.probe		= omnia_leds_probe,
+diff --git a/drivers/memory/atmel-ebi.c b/drivers/memory/atmel-ebi.c
+index 14386d0b5f578..c267283b01fda 100644
+--- a/drivers/memory/atmel-ebi.c
++++ b/drivers/memory/atmel-ebi.c
+@@ -600,8 +600,10 @@ static int atmel_ebi_probe(struct platform_device *pdev)
+ 				child);
+ 
+ 			ret = atmel_ebi_dev_disable(ebi, child);
+-			if (ret)
++			if (ret) {
++				of_node_put(child);
+ 				return ret;
++			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/memory/fsl_ifc.c b/drivers/memory/fsl_ifc.c
+index 89f99b5b64504..d062c2f8250f4 100644
+--- a/drivers/memory/fsl_ifc.c
++++ b/drivers/memory/fsl_ifc.c
+@@ -97,7 +97,6 @@ static int fsl_ifc_ctrl_remove(struct platform_device *dev)
+ 	iounmap(ctrl->gregs);
+ 
+ 	dev_set_drvdata(&dev->dev, NULL);
+-	kfree(ctrl);
+ 
+ 	return 0;
+ }
+@@ -209,7 +208,8 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
+ 
+ 	dev_info(&dev->dev, "Freescale Integrated Flash Controller\n");
+ 
+-	fsl_ifc_ctrl_dev = kzalloc(sizeof(*fsl_ifc_ctrl_dev), GFP_KERNEL);
++	fsl_ifc_ctrl_dev = devm_kzalloc(&dev->dev, sizeof(*fsl_ifc_ctrl_dev),
++					GFP_KERNEL);
+ 	if (!fsl_ifc_ctrl_dev)
+ 		return -ENOMEM;
+ 
+@@ -219,8 +219,7 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
+ 	fsl_ifc_ctrl_dev->gregs = of_iomap(dev->dev.of_node, 0);
+ 	if (!fsl_ifc_ctrl_dev->gregs) {
+ 		dev_err(&dev->dev, "failed to get memory region\n");
+-		ret = -ENODEV;
+-		goto err;
++		return -ENODEV;
+ 	}
+ 
+ 	if (of_property_read_bool(dev->dev.of_node, "little-endian")) {
+@@ -295,6 +294,7 @@ err_irq:
+ 	free_irq(fsl_ifc_ctrl_dev->irq, fsl_ifc_ctrl_dev);
+ 	irq_dispose_mapping(fsl_ifc_ctrl_dev->irq);
+ err:
++	iounmap(fsl_ifc_ctrl_dev->gregs);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/memory/pl353-smc.c b/drivers/memory/pl353-smc.c
+index b42804b1801e6..cc01979780d87 100644
+--- a/drivers/memory/pl353-smc.c
++++ b/drivers/memory/pl353-smc.c
+@@ -407,6 +407,7 @@ static int pl353_smc_probe(struct amba_device *adev, const struct amba_id *id)
+ 		break;
+ 	}
+ 	if (!match) {
++		err = -ENODEV;
+ 		dev_err(&adev->dev, "no matching children\n");
+ 		goto out_clk_disable;
+ 	}
+diff --git a/drivers/memory/stm32-fmc2-ebi.c b/drivers/memory/stm32-fmc2-ebi.c
+index 4d5758c419c55..ffec26a99313b 100644
+--- a/drivers/memory/stm32-fmc2-ebi.c
++++ b/drivers/memory/stm32-fmc2-ebi.c
+@@ -1048,16 +1048,19 @@ static int stm32_fmc2_ebi_parse_dt(struct stm32_fmc2_ebi *ebi)
+ 		if (ret) {
+ 			dev_err(dev, "could not retrieve reg property: %d\n",
+ 				ret);
++			of_node_put(child);
+ 			return ret;
+ 		}
+ 
+ 		if (bank >= FMC2_MAX_BANKS) {
+ 			dev_err(dev, "invalid reg value: %d\n", bank);
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+ 		if (ebi->bank_assigned & BIT(bank)) {
+ 			dev_err(dev, "bank already assigned: %d\n", bank);
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -1066,6 +1069,7 @@ static int stm32_fmc2_ebi_parse_dt(struct stm32_fmc2_ebi *ebi)
+ 			if (ret) {
+ 				dev_err(dev, "setup chip select %d failed: %d\n",
+ 					bank, ret);
++				of_node_put(child);
+ 				return ret;
+ 			}
+ 		}
+diff --git a/drivers/mfd/da9052-i2c.c b/drivers/mfd/da9052-i2c.c
+index 47556d2d9abe2..8ebfc7bbe4e01 100644
+--- a/drivers/mfd/da9052-i2c.c
++++ b/drivers/mfd/da9052-i2c.c
+@@ -113,6 +113,7 @@ static const struct i2c_device_id da9052_i2c_id[] = {
+ 	{"da9053-bc", DA9053_BC},
+ 	{}
+ };
++MODULE_DEVICE_TABLE(i2c, da9052_i2c_id);
+ 
+ #ifdef CONFIG_OF
+ static const struct of_device_id dialog_dt_ids[] = {
+diff --git a/drivers/mfd/motorola-cpcap.c b/drivers/mfd/motorola-cpcap.c
+index 30d82bfe5b02f..6fb206da27298 100644
+--- a/drivers/mfd/motorola-cpcap.c
++++ b/drivers/mfd/motorola-cpcap.c
+@@ -327,6 +327,10 @@ static int cpcap_probe(struct spi_device *spi)
+ 	if (ret)
+ 		return ret;
+ 
++	/* Parent SPI controller uses DMA, CPCAP and child devices do not */
++	spi->dev.coherent_dma_mask = 0;
++	spi->dev.dma_mask = &spi->dev.coherent_dma_mask;
++
+ 	return devm_mfd_add_devices(&spi->dev, 0, cpcap_mfd_devices,
+ 				    ARRAY_SIZE(cpcap_mfd_devices), NULL, 0, NULL);
+ }
+diff --git a/drivers/mfd/stmpe-i2c.c b/drivers/mfd/stmpe-i2c.c
+index 61aa020199f57..cd2f45257dc16 100644
+--- a/drivers/mfd/stmpe-i2c.c
++++ b/drivers/mfd/stmpe-i2c.c
+@@ -109,7 +109,7 @@ static const struct i2c_device_id stmpe_i2c_id[] = {
+ 	{ "stmpe2403", STMPE2403 },
+ 	{ }
+ };
+-MODULE_DEVICE_TABLE(i2c, stmpe_id);
++MODULE_DEVICE_TABLE(i2c, stmpe_i2c_id);
+ 
+ static struct i2c_driver stmpe_i2c_driver = {
+ 	.driver = {
+diff --git a/drivers/misc/cardreader/alcor_pci.c b/drivers/misc/cardreader/alcor_pci.c
+index cd402c89189ea..de6d44a158bba 100644
+--- a/drivers/misc/cardreader/alcor_pci.c
++++ b/drivers/misc/cardreader/alcor_pci.c
+@@ -139,7 +139,13 @@ static void alcor_pci_init_check_aspm(struct alcor_pci_priv *priv)
+ 	u32 val32;
+ 
+ 	priv->pdev_cap_off    = alcor_pci_find_cap_offset(priv, priv->pdev);
+-	priv->parent_cap_off = alcor_pci_find_cap_offset(priv,
++	/*
++	 * A device might be attached to root complex directly and
++	 * priv->parent_pdev will be NULL. In this case we don't check its
++	 * capability and disable ASPM completely.
++	 */
++	if (priv->parent_pdev)
++		priv->parent_cap_off = alcor_pci_find_cap_offset(priv,
+ 							 priv->parent_pdev);
+ 
+ 	if ((priv->pdev_cap_off == 0) || (priv->parent_cap_off == 0)) {
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
+index 68f661aca3ff2..37edd663603f6 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi.c
+@@ -2164,7 +2164,7 @@ static void gaudi_init_mme_qman(struct hl_device *hdev, u32 mme_offset,
+ 
+ 		/* Configure RAZWI IRQ */
+ 		mme_id = mme_offset /
+-				(mmMME1_QM_GLBL_CFG0 - mmMME0_QM_GLBL_CFG0);
++				(mmMME1_QM_GLBL_CFG0 - mmMME0_QM_GLBL_CFG0) / 2;
+ 
+ 		mme_qm_err_cfg = MME_QMAN_GLBL_ERR_CFG_MSG_EN_MASK;
+ 		if (hdev->stop_on_err) {
+@@ -3708,6 +3708,7 @@ already_pinned:
+ 	return 0;
+ 
+ unpin_memory:
++	list_del(&userptr->job_node);
+ 	hl_unpin_host_memory(hdev, userptr);
+ free_userptr:
+ 	kfree(userptr);
+diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c
+index 986ed3c072088..5b5d6275c2495 100644
+--- a/drivers/misc/habanalabs/goya/goya.c
++++ b/drivers/misc/habanalabs/goya/goya.c
+@@ -3190,6 +3190,7 @@ already_pinned:
+ 	return 0;
+ 
+ unpin_memory:
++	list_del(&userptr->job_node);
+ 	hl_unpin_host_memory(hdev, userptr);
+ free_userptr:
+ 	kfree(userptr);
+diff --git a/drivers/misc/ibmasm/module.c b/drivers/misc/ibmasm/module.c
+index 4edad6c445d37..dc8a06c06c637 100644
+--- a/drivers/misc/ibmasm/module.c
++++ b/drivers/misc/ibmasm/module.c
+@@ -111,7 +111,7 @@ static int ibmasm_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	result = ibmasm_init_remote_input_dev(sp);
+ 	if (result) {
+ 		dev_err(sp->dev, "Failed to initialize remote queue\n");
+-		goto error_send_message;
++		goto error_init_remote;
+ 	}
+ 
+ 	result = ibmasm_send_driver_vpd(sp);
+@@ -131,8 +131,9 @@ static int ibmasm_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	return 0;
+ 
+ error_send_message:
+-	disable_sp_interrupts(sp->base_address);
+ 	ibmasm_free_remote_input_dev(sp);
++error_init_remote:
++	disable_sp_interrupts(sp->base_address);
+ 	free_irq(sp->irq, (void *)sp);
+ error_request_irq:
+ 	iounmap(sp->base_address);
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 91e0e6254a01d..7d1f609306f94 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -1519,6 +1519,8 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
+ 	struct virtnet_info *vi = sq->vq->vdev->priv;
+ 	unsigned int index = vq2txq(sq->vq);
+ 	struct netdev_queue *txq;
++	int opaque;
++	bool done;
+ 
+ 	if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
+ 		/* We don't need to enable cb for XDP */
+@@ -1528,10 +1530,28 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
+ 
+ 	txq = netdev_get_tx_queue(vi->dev, index);
+ 	__netif_tx_lock(txq, raw_smp_processor_id());
++	virtqueue_disable_cb(sq->vq);
+ 	free_old_xmit_skbs(sq, true);
++
++	opaque = virtqueue_enable_cb_prepare(sq->vq);
++
++	done = napi_complete_done(napi, 0);
++
++	if (!done)
++		virtqueue_disable_cb(sq->vq);
++
+ 	__netif_tx_unlock(txq);
+ 
+-	virtqueue_napi_complete(napi, sq->vq, 0);
++	if (done) {
++		if (unlikely(virtqueue_poll(sq->vq, opaque))) {
++			if (napi_schedule_prep(napi)) {
++				__netif_tx_lock(txq, raw_smp_processor_id());
++				virtqueue_disable_cb(sq->vq);
++				__netif_tx_unlock(txq);
++				__napi_schedule(napi);
++			}
++		}
++	}
+ 
+ 	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
+ 		netif_tx_wake_queue(txq);
+@@ -3234,8 +3254,11 @@ static __maybe_unused int virtnet_restore(struct virtio_device *vdev)
+ 	virtnet_set_queues(vi, vi->curr_queue_pairs);
+ 
+ 	err = virtnet_cpu_notif_add(vi);
+-	if (err)
++	if (err) {
++		virtnet_freeze_down(vdev);
++		remove_vq_common(vi);
+ 		return err;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 4df4f37e6b895..dedcb7aaf0d82 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1467,7 +1467,6 @@ static void nvmet_tcp_state_change(struct sock *sk)
+ 	case TCP_CLOSE_WAIT:
+ 	case TCP_CLOSE:
+ 		/* FALLTHRU */
+-		sk->sk_user_data = NULL;
+ 		nvmet_tcp_schedule_release_queue(queue);
+ 		break;
+ 	default:
+diff --git a/drivers/pci/controller/dwc/pcie-intel-gw.c b/drivers/pci/controller/dwc/pcie-intel-gw.c
+index 5650cb78acbad..5e1a284fdc538 100644
+--- a/drivers/pci/controller/dwc/pcie-intel-gw.c
++++ b/drivers/pci/controller/dwc/pcie-intel-gw.c
+@@ -39,6 +39,10 @@
+ #define PCIE_APP_IRN_PM_TO_ACK		BIT(9)
+ #define PCIE_APP_IRN_LINK_AUTO_BW_STAT	BIT(11)
+ #define PCIE_APP_IRN_BW_MGT		BIT(12)
++#define PCIE_APP_IRN_INTA		BIT(13)
++#define PCIE_APP_IRN_INTB		BIT(14)
++#define PCIE_APP_IRN_INTC		BIT(15)
++#define PCIE_APP_IRN_INTD		BIT(16)
+ #define PCIE_APP_IRN_MSG_LTR		BIT(18)
+ #define PCIE_APP_IRN_SYS_ERR_RC		BIT(29)
+ #define PCIE_APP_INTX_OFST		12
+@@ -48,10 +52,8 @@
+ 	PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
+ 	PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
+ 	PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
+-	(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
+-	(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
+-	(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
+-	(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
++	PCIE_APP_IRN_INTA | PCIE_APP_IRN_INTB | \
++	PCIE_APP_IRN_INTC | PCIE_APP_IRN_INTD)
+ 
+ #define BUS_IATU_OFFSET			SZ_256M
+ #define RESET_INTERVAL_MS		100
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index d788f4d7f9aa3..506f6a294eac3 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -1841,7 +1841,7 @@ static int tegra_pcie_ep_raise_msi_irq(struct tegra_pcie_dw *pcie, u16 irq)
+ 	if (unlikely(irq > 31))
+ 		return -EINVAL;
+ 
+-	appl_writel(pcie, (1 << irq), APPL_MSI_CTRL_1);
++	appl_writel(pcie, BIT(irq), APPL_MSI_CTRL_1);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/pci/controller/pci-ftpci100.c b/drivers/pci/controller/pci-ftpci100.c
+index da3cd216da007..aefef1986201a 100644
+--- a/drivers/pci/controller/pci-ftpci100.c
++++ b/drivers/pci/controller/pci-ftpci100.c
+@@ -34,12 +34,12 @@
+  * Special configuration registers directly in the first few words
+  * in I/O space.
+  */
+-#define PCI_IOSIZE	0x00
+-#define PCI_PROT	0x04 /* AHB protection */
+-#define PCI_CTRL	0x08 /* PCI control signal */
+-#define PCI_SOFTRST	0x10 /* Soft reset counter and response error enable */
+-#define PCI_CONFIG	0x28 /* PCI configuration command register */
+-#define PCI_DATA	0x2C
++#define FTPCI_IOSIZE	0x00
++#define FTPCI_PROT	0x04 /* AHB protection */
++#define FTPCI_CTRL	0x08 /* PCI control signal */
++#define FTPCI_SOFTRST	0x10 /* Soft reset counter and response error enable */
++#define FTPCI_CONFIG	0x28 /* PCI configuration command register */
++#define FTPCI_DATA	0x2C
+ 
+ #define FARADAY_PCI_STATUS_CMD		0x04 /* Status and command */
+ #define FARADAY_PCI_PMC			0x40 /* Power management control */
+@@ -195,9 +195,9 @@ static int faraday_raw_pci_read_config(struct faraday_pci *p, int bus_number,
+ 			PCI_CONF_FUNCTION(PCI_FUNC(fn)) |
+ 			PCI_CONF_WHERE(config) |
+ 			PCI_CONF_ENABLE,
+-			p->base + PCI_CONFIG);
++			p->base + FTPCI_CONFIG);
+ 
+-	*value = readl(p->base + PCI_DATA);
++	*value = readl(p->base + FTPCI_DATA);
+ 
+ 	if (size == 1)
+ 		*value = (*value >> (8 * (config & 3))) & 0xFF;
+@@ -230,17 +230,17 @@ static int faraday_raw_pci_write_config(struct faraday_pci *p, int bus_number,
+ 			PCI_CONF_FUNCTION(PCI_FUNC(fn)) |
+ 			PCI_CONF_WHERE(config) |
+ 			PCI_CONF_ENABLE,
+-			p->base + PCI_CONFIG);
++			p->base + FTPCI_CONFIG);
+ 
+ 	switch (size) {
+ 	case 4:
+-		writel(value, p->base + PCI_DATA);
++		writel(value, p->base + FTPCI_DATA);
+ 		break;
+ 	case 2:
+-		writew(value, p->base + PCI_DATA + (config & 3));
++		writew(value, p->base + FTPCI_DATA + (config & 3));
+ 		break;
+ 	case 1:
+-		writeb(value, p->base + PCI_DATA + (config & 3));
++		writeb(value, p->base + FTPCI_DATA + (config & 3));
+ 		break;
+ 	default:
+ 		ret = PCIBIOS_BAD_REGISTER_NUMBER;
+@@ -469,7 +469,7 @@ static int faraday_pci_probe(struct platform_device *pdev)
+ 		if (!faraday_res_to_memcfg(io->start - win->offset,
+ 					   resource_size(io), &val)) {
+ 			/* setup I/O space size */
+-			writel(val, p->base + PCI_IOSIZE);
++			writel(val, p->base + FTPCI_IOSIZE);
+ 		} else {
+ 			dev_err(dev, "illegal IO mem size\n");
+ 			return -EINVAL;
+@@ -477,11 +477,11 @@ static int faraday_pci_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* Setup hostbridge */
+-	val = readl(p->base + PCI_CTRL);
++	val = readl(p->base + FTPCI_CTRL);
+ 	val |= PCI_COMMAND_IO;
+ 	val |= PCI_COMMAND_MEMORY;
+ 	val |= PCI_COMMAND_MASTER;
+-	writel(val, p->base + PCI_CTRL);
++	writel(val, p->base + FTPCI_CTRL);
+ 	/* Mask and clear all interrupts */
+ 	faraday_raw_pci_write_config(p, 0, 0, FARADAY_PCI_CTRL2 + 2, 2, 0xF000);
+ 	if (variant->cascaded_irq) {
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index d57c538bbb2db..44e15f0e3a2ed 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -444,7 +444,6 @@ enum hv_pcibus_state {
+ 	hv_pcibus_probed,
+ 	hv_pcibus_installed,
+ 	hv_pcibus_removing,
+-	hv_pcibus_removed,
+ 	hv_pcibus_maximum
+ };
+ 
+@@ -3247,8 +3246,9 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs)
+ 		struct pci_packet teardown_packet;
+ 		u8 buffer[sizeof(struct pci_message)];
+ 	} pkt;
+-	struct hv_dr_state *dr;
+ 	struct hv_pci_compl comp_pkt;
++	struct hv_pci_dev *hpdev, *tmp;
++	unsigned long flags;
+ 	int ret;
+ 
+ 	/*
+@@ -3260,9 +3260,16 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs)
+ 
+ 	if (!keep_devs) {
+ 		/* Delete any children which might still exist. */
+-		dr = kzalloc(sizeof(*dr), GFP_KERNEL);
+-		if (dr && hv_pci_start_relations_work(hbus, dr))
+-			kfree(dr);
++		spin_lock_irqsave(&hbus->device_list_lock, flags);
++		list_for_each_entry_safe(hpdev, tmp, &hbus->children, list_entry) {
++			list_del(&hpdev->list_entry);
++			if (hpdev->pci_slot)
++				pci_destroy_slot(hpdev->pci_slot);
++			/* For the two refs got in new_pcichild_device() */
++			put_pcichild(hpdev);
++			put_pcichild(hpdev);
++		}
++		spin_unlock_irqrestore(&hbus->device_list_lock, flags);
+ 	}
+ 
+ 	ret = hv_send_resources_released(hdev);
+@@ -3305,13 +3312,23 @@ static int hv_pci_remove(struct hv_device *hdev)
+ 
+ 	hbus = hv_get_drvdata(hdev);
+ 	if (hbus->state == hv_pcibus_installed) {
++		tasklet_disable(&hdev->channel->callback_event);
++		hbus->state = hv_pcibus_removing;
++		tasklet_enable(&hdev->channel->callback_event);
++		destroy_workqueue(hbus->wq);
++		hbus->wq = NULL;
++		/*
++		 * At this point, no work is running or can be scheduled
++		 * on hbus-wq. We can't race with hv_pci_devices_present()
++		 * or hv_pci_eject_device(), it's safe to proceed.
++		 */
++
+ 		/* Remove the bus from PCI's point of view. */
+ 		pci_lock_rescan_remove();
+ 		pci_stop_root_bus(hbus->pci_bus);
+ 		hv_pci_remove_slots(hbus);
+ 		pci_remove_root_bus(hbus->pci_bus);
+ 		pci_unlock_rescan_remove();
+-		hbus->state = hv_pcibus_removed;
+ 	}
+ 
+ 	ret = hv_pci_bus_exit(hdev, false);
+@@ -3326,7 +3343,6 @@ static int hv_pci_remove(struct hv_device *hdev)
+ 	irq_domain_free_fwnode(hbus->sysdata.fwnode);
+ 	put_hvpcibus(hbus);
+ 	wait_for_completion(&hbus->remove_event);
+-	destroy_workqueue(hbus->wq);
+ 
+ 	hv_put_dom_num(hbus->sysdata.domain);
+ 
+diff --git a/drivers/pci/controller/pci-tegra.c b/drivers/pci/controller/pci-tegra.c
+index 8fcabed7c6a67..1a2af963599ca 100644
+--- a/drivers/pci/controller/pci-tegra.c
++++ b/drivers/pci/controller/pci-tegra.c
+@@ -2506,6 +2506,7 @@ static const struct of_device_id tegra_pcie_of_match[] = {
+ 	{ .compatible = "nvidia,tegra20-pcie", .data = &tegra20_pcie },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, tegra_pcie_of_match);
+ 
+ static void *tegra_pcie_ports_seq_start(struct seq_file *s, loff_t *pos)
+ {
+diff --git a/drivers/pci/controller/pcie-iproc-msi.c b/drivers/pci/controller/pcie-iproc-msi.c
+index eede4e8f3f75a..81b4effeb1309 100644
+--- a/drivers/pci/controller/pcie-iproc-msi.c
++++ b/drivers/pci/controller/pcie-iproc-msi.c
+@@ -171,7 +171,7 @@ static struct irq_chip iproc_msi_irq_chip = {
+ 
+ static struct msi_domain_info iproc_msi_domain_info = {
+ 	.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+-		MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX,
++		MSI_FLAG_PCI_MSIX,
+ 	.chip = &iproc_msi_irq_chip,
+ };
+ 
+@@ -250,20 +250,23 @@ static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
+ 	struct iproc_msi *msi = domain->host_data;
+ 	int hwirq, i;
+ 
++	if (msi->nr_cpus > 1 && nr_irqs > 1)
++		return -EINVAL;
++
+ 	mutex_lock(&msi->bitmap_lock);
+ 
+-	/* Allocate 'nr_cpus' number of MSI vectors each time */
+-	hwirq = bitmap_find_next_zero_area(msi->bitmap, msi->nr_msi_vecs, 0,
+-					   msi->nr_cpus, 0);
+-	if (hwirq < msi->nr_msi_vecs) {
+-		bitmap_set(msi->bitmap, hwirq, msi->nr_cpus);
+-	} else {
+-		mutex_unlock(&msi->bitmap_lock);
+-		return -ENOSPC;
+-	}
++	/*
++	 * Allocate 'nr_irqs' multiplied by 'nr_cpus' number of MSI vectors
++	 * each time
++	 */
++	hwirq = bitmap_find_free_region(msi->bitmap, msi->nr_msi_vecs,
++					order_base_2(msi->nr_cpus * nr_irqs));
+ 
+ 	mutex_unlock(&msi->bitmap_lock);
+ 
++	if (hwirq < 0)
++		return -ENOSPC;
++
+ 	for (i = 0; i < nr_irqs; i++) {
+ 		irq_domain_set_info(domain, virq + i, hwirq + i,
+ 				    &iproc_msi_bottom_irq_chip,
+@@ -284,7 +287,8 @@ static void iproc_msi_irq_domain_free(struct irq_domain *domain,
+ 	mutex_lock(&msi->bitmap_lock);
+ 
+ 	hwirq = hwirq_to_canonical_hwirq(msi, data->hwirq);
+-	bitmap_clear(msi->bitmap, hwirq, msi->nr_cpus);
++	bitmap_release_region(msi->bitmap, hwirq,
++			      order_base_2(msi->nr_cpus * nr_irqs));
+ 
+ 	mutex_unlock(&msi->bitmap_lock);
+ 
+@@ -539,6 +543,9 @@ int iproc_msi_init(struct iproc_pcie *pcie, struct device_node *node)
+ 	mutex_init(&msi->bitmap_lock);
+ 	msi->nr_cpus = num_possible_cpus();
+ 
++	if (msi->nr_cpus == 1)
++		iproc_msi_domain_info.flags |=  MSI_FLAG_MULTI_PCI_MSI;
++
+ 	msi->nr_irqs = of_irq_count(node);
+ 	if (!msi->nr_irqs) {
+ 		dev_err(pcie->dev, "found no MSI GIC interrupt\n");
+diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c
+index 9705059523a6e..0d6df73bb9181 100644
+--- a/drivers/pci/controller/pcie-rockchip-host.c
++++ b/drivers/pci/controller/pcie-rockchip-host.c
+@@ -593,10 +593,6 @@ static int rockchip_pcie_parse_host_dt(struct rockchip_pcie *rockchip)
+ 	if (err)
+ 		return err;
+ 
+-	err = rockchip_pcie_setup_irq(rockchip);
+-	if (err)
+-		return err;
+-
+ 	rockchip->vpcie12v = devm_regulator_get_optional(dev, "vpcie12v");
+ 	if (IS_ERR(rockchip->vpcie12v)) {
+ 		if (PTR_ERR(rockchip->vpcie12v) != -ENODEV)
+@@ -974,8 +970,6 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
+ 	if (err)
+ 		goto err_vpcie;
+ 
+-	rockchip_pcie_enable_interrupts(rockchip);
+-
+ 	err = rockchip_pcie_init_irq_domain(rockchip);
+ 	if (err < 0)
+ 		goto err_deinit_port;
+@@ -993,6 +987,12 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
+ 	bridge->sysdata = rockchip;
+ 	bridge->ops = &rockchip_pcie_ops;
+ 
++	err = rockchip_pcie_setup_irq(rockchip);
++	if (err)
++		goto err_remove_irq_domain;
++
++	rockchip_pcie_enable_interrupts(rockchip);
++
+ 	err = pci_host_probe(bridge);
+ 	if (err < 0)
+ 		goto err_remove_irq_domain;
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index fb3840e222add..9d06939736c0f 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -563,6 +563,32 @@ void pciehp_power_off_slot(struct controller *ctrl)
+ 		 PCI_EXP_SLTCTL_PWR_OFF);
+ }
+ 
++static void pciehp_ignore_dpc_link_change(struct controller *ctrl,
++					  struct pci_dev *pdev, int irq)
++{
++	/*
++	 * Ignore link changes which occurred while waiting for DPC recovery.
++	 * Could be several if DPC triggered multiple times consecutively.
++	 */
++	synchronize_hardirq(irq);
++	atomic_and(~PCI_EXP_SLTSTA_DLLSC, &ctrl->pending_events);
++	if (pciehp_poll_mode)
++		pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
++					   PCI_EXP_SLTSTA_DLLSC);
++	ctrl_info(ctrl, "Slot(%s): Link Down/Up ignored (recovered by DPC)\n",
++		  slot_name(ctrl));
++
++	/*
++	 * If the link is unexpectedly down after successful recovery,
++	 * the corresponding link change may have been ignored above.
++	 * Synthesize it to ensure that it is acted on.
++	 */
++	down_read(&ctrl->reset_lock);
++	if (!pciehp_check_link_active(ctrl))
++		pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC);
++	up_read(&ctrl->reset_lock);
++}
++
+ static irqreturn_t pciehp_isr(int irq, void *dev_id)
+ {
+ 	struct controller *ctrl = (struct controller *)dev_id;
+@@ -706,6 +732,16 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
+ 				      PCI_EXP_SLTCTL_ATTN_IND_ON);
+ 	}
+ 
++	/*
++	 * Ignore Link Down/Up events caused by Downstream Port Containment
++	 * if recovery from the error succeeded.
++	 */
++	if ((events & PCI_EXP_SLTSTA_DLLSC) && pci_dpc_recovered(pdev) &&
++	    ctrl->state == ON_STATE) {
++		events &= ~PCI_EXP_SLTSTA_DLLSC;
++		pciehp_ignore_dpc_link_change(ctrl, pdev, irq);
++	}
++
+ 	/*
+ 	 * Disable requests have higher priority than Presence Detect Changed
+ 	 * or Data Link Layer State Changed events.
+diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
+index de1c331dbed43..f07c5dbc94e10 100644
+--- a/drivers/pci/p2pdma.c
++++ b/drivers/pci/p2pdma.c
+@@ -308,10 +308,41 @@ static const struct pci_p2pdma_whitelist_entry {
+ 	{}
+ };
+ 
++/*
++ * This lookup function tries to find the PCI device corresponding to a given
++ * host bridge.
++ *
++ * It assumes the host bridge device is the first PCI device in the
++ * bus->devices list and that the devfn is 00.0. These assumptions should hold
++ * for all the devices in the whitelist above.
++ *
++ * This function is equivalent to pci_get_slot(host->bus, 0), however it does
++ * not take the pci_bus_sem lock seeing __host_bridge_whitelist() must not
++ * sleep.
++ *
++ * For this to be safe, the caller should hold a reference to a device on the
++ * bridge, which should ensure the host_bridge device will not be freed
++ * or removed from the head of the devices list.
++ */
++static struct pci_dev *pci_host_bridge_dev(struct pci_host_bridge *host)
++{
++	struct pci_dev *root;
++
++	root = list_first_entry_or_null(&host->bus->devices,
++					struct pci_dev, bus_list);
++
++	if (!root)
++		return NULL;
++	if (root->devfn != PCI_DEVFN(0, 0))
++		return NULL;
++
++	return root;
++}
++
+ static bool __host_bridge_whitelist(struct pci_host_bridge *host,
+ 				    bool same_host_bridge)
+ {
+-	struct pci_dev *root = pci_get_slot(host->bus, PCI_DEVFN(0, 0));
++	struct pci_dev *root = pci_host_bridge_dev(host);
+ 	const struct pci_p2pdma_whitelist_entry *entry;
+ 	unsigned short vendor, device;
+ 
+@@ -320,7 +351,6 @@ static bool __host_bridge_whitelist(struct pci_host_bridge *host,
+ 
+ 	vendor = root->vendor;
+ 	device = root->device;
+-	pci_dev_put(root);
+ 
+ 	for (entry = pci_p2pdma_whitelist; entry->vendor; entry++) {
+ 		if (vendor != entry->vendor || device != entry->device)
+diff --git a/drivers/pci/pci-label.c b/drivers/pci/pci-label.c
+index 781e45cf60d1c..cd84cf52a92e1 100644
+--- a/drivers/pci/pci-label.c
++++ b/drivers/pci/pci-label.c
+@@ -162,7 +162,7 @@ static void dsm_label_utf16s_to_utf8s(union acpi_object *obj, char *buf)
+ 	len = utf16s_to_utf8s((const wchar_t *)obj->buffer.pointer,
+ 			      obj->buffer.length,
+ 			      UTF16_LITTLE_ENDIAN,
+-			      buf, PAGE_SIZE);
++			      buf, PAGE_SIZE - 1);
+ 	buf[len] = '\n';
+ }
+ 
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 09ebc134d0d74..a96dc6f530760 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -409,6 +409,8 @@ static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
+ 
+ /* pci_dev priv_flags */
+ #define PCI_DEV_ADDED 0
++#define PCI_DPC_RECOVERED 1
++#define PCI_DPC_RECOVERING 2
+ 
+ static inline void pci_dev_assign_added(struct pci_dev *dev, bool added)
+ {
+@@ -454,10 +456,12 @@ void pci_restore_dpc_state(struct pci_dev *dev);
+ void pci_dpc_init(struct pci_dev *pdev);
+ void dpc_process_error(struct pci_dev *pdev);
+ pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
++bool pci_dpc_recovered(struct pci_dev *pdev);
+ #else
+ static inline void pci_save_dpc_state(struct pci_dev *dev) {}
+ static inline void pci_restore_dpc_state(struct pci_dev *dev) {}
+ static inline void pci_dpc_init(struct pci_dev *pdev) {}
++static inline bool pci_dpc_recovered(struct pci_dev *pdev) { return false; }
+ #endif
+ 
+ #ifdef CONFIG_PCI_ATS
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index e05aba86a3179..c556e7beafe38 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -71,6 +71,58 @@ void pci_restore_dpc_state(struct pci_dev *dev)
+ 	pci_write_config_word(dev, dev->dpc_cap + PCI_EXP_DPC_CTL, *cap);
+ }
+ 
++static DECLARE_WAIT_QUEUE_HEAD(dpc_completed_waitqueue);
++
++#ifdef CONFIG_HOTPLUG_PCI_PCIE
++static bool dpc_completed(struct pci_dev *pdev)
++{
++	u16 status;
++
++	pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_STATUS, &status);
++	if ((status != 0xffff) && (status & PCI_EXP_DPC_STATUS_TRIGGER))
++		return false;
++
++	if (test_bit(PCI_DPC_RECOVERING, &pdev->priv_flags))
++		return false;
++
++	return true;
++}
++
++/**
++ * pci_dpc_recovered - whether DPC triggered and has recovered successfully
++ * @pdev: PCI device
++ *
++ * Return true if DPC was triggered for @pdev and has recovered successfully.
++ * Wait for recovery if it hasn't completed yet.  Called from the PCIe hotplug
++ * driver to recognize and ignore Link Down/Up events caused by DPC.
++ */
++bool pci_dpc_recovered(struct pci_dev *pdev)
++{
++	struct pci_host_bridge *host;
++
++	if (!pdev->dpc_cap)
++		return false;
++
++	/*
++	 * Synchronization between hotplug and DPC is not supported
++	 * if DPC is owned by firmware and EDR is not enabled.
++	 */
++	host = pci_find_host_bridge(pdev->bus);
++	if (!host->native_dpc && !IS_ENABLED(CONFIG_PCIE_EDR))
++		return false;
++
++	/*
++	 * Need a timeout in case DPC never completes due to failure of
++	 * dpc_wait_rp_inactive().  The spec doesn't mandate a time limit,
++	 * but reports indicate that DPC completes within 4 seconds.
++	 */
++	wait_event_timeout(dpc_completed_waitqueue, dpc_completed(pdev),
++			   msecs_to_jiffies(4000));
++
++	return test_and_clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
++}
++#endif /* CONFIG_HOTPLUG_PCI_PCIE */
++
+ static int dpc_wait_rp_inactive(struct pci_dev *pdev)
+ {
+ 	unsigned long timeout = jiffies + HZ;
+@@ -91,8 +143,11 @@ static int dpc_wait_rp_inactive(struct pci_dev *pdev)
+ 
+ pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
+ {
++	pci_ers_result_t ret;
+ 	u16 cap;
+ 
++	set_bit(PCI_DPC_RECOVERING, &pdev->priv_flags);
++
+ 	/*
+ 	 * DPC disables the Link automatically in hardware, so it has
+ 	 * already been reset by the time we get here.
+@@ -106,18 +161,27 @@ pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
+ 	if (!pcie_wait_for_link(pdev, false))
+ 		pci_info(pdev, "Data Link Layer Link Active not cleared in 1000 msec\n");
+ 
+-	if (pdev->dpc_rp_extensions && dpc_wait_rp_inactive(pdev))
+-		return PCI_ERS_RESULT_DISCONNECT;
++	if (pdev->dpc_rp_extensions && dpc_wait_rp_inactive(pdev)) {
++		clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
++		ret = PCI_ERS_RESULT_DISCONNECT;
++		goto out;
++	}
+ 
+ 	pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
+ 			      PCI_EXP_DPC_STATUS_TRIGGER);
+ 
+ 	if (!pcie_wait_for_link(pdev, true)) {
+ 		pci_info(pdev, "Data Link Layer Link Active not set in 1000 msec\n");
+-		return PCI_ERS_RESULT_DISCONNECT;
++		clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
++		ret = PCI_ERS_RESULT_DISCONNECT;
++	} else {
++		set_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
++		ret = PCI_ERS_RESULT_RECOVERED;
+ 	}
+-
+-	return PCI_ERS_RESULT_RECOVERED;
++out:
++	clear_bit(PCI_DPC_RECOVERING, &pdev->priv_flags);
++	wake_up_all(&dpc_completed_waitqueue);
++	return ret;
+ }
+ 
+ static void dpc_process_rp_pio_error(struct pci_dev *pdev)
+diff --git a/drivers/phy/intel/phy-intel-keembay-emmc.c b/drivers/phy/intel/phy-intel-keembay-emmc.c
+index eb7c635ed89ae..0eb11ac7c2e2e 100644
+--- a/drivers/phy/intel/phy-intel-keembay-emmc.c
++++ b/drivers/phy/intel/phy-intel-keembay-emmc.c
+@@ -95,7 +95,8 @@ static int keembay_emmc_phy_power(struct phy *phy, bool on_off)
+ 	else
+ 		freqsel = 0x0;
+ 
+-	if (mhz < 50 || mhz > 200)
++	/* Check for EMMC clock rate*/
++	if (mhz > 175)
+ 		dev_warn(&phy->dev, "Unsupported rate: %d MHz\n", mhz);
+ 
+ 	/*
+diff --git a/drivers/power/reset/gpio-poweroff.c b/drivers/power/reset/gpio-poweroff.c
+index c5067eb753706..1c5af2fef1423 100644
+--- a/drivers/power/reset/gpio-poweroff.c
++++ b/drivers/power/reset/gpio-poweroff.c
+@@ -90,6 +90,7 @@ static const struct of_device_id of_gpio_poweroff_match[] = {
+ 	{ .compatible = "gpio-poweroff", },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, of_gpio_poweroff_match);
+ 
+ static struct platform_driver gpio_poweroff_driver = {
+ 	.probe = gpio_poweroff_probe,
+diff --git a/drivers/power/supply/Kconfig b/drivers/power/supply/Kconfig
+index 1699b9269a78e..0aa46b4510177 100644
+--- a/drivers/power/supply/Kconfig
++++ b/drivers/power/supply/Kconfig
+@@ -692,7 +692,8 @@ config BATTERY_GOLDFISH
+ 
+ config BATTERY_RT5033
+ 	tristate "RT5033 fuel gauge support"
+-	depends on MFD_RT5033
++	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  This adds support for battery fuel gauge in Richtek RT5033 PMIC.
+ 	  The fuelgauge calculates and determines the battery state of charge
+diff --git a/drivers/power/supply/ab8500_btemp.c b/drivers/power/supply/ab8500_btemp.c
+index 909f0242bacbc..4417d64c31f97 100644
+--- a/drivers/power/supply/ab8500_btemp.c
++++ b/drivers/power/supply/ab8500_btemp.c
+@@ -1142,6 +1142,7 @@ static const struct of_device_id ab8500_btemp_match[] = {
+ 	{ .compatible = "stericsson,ab8500-btemp", },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, ab8500_btemp_match);
+ 
+ static struct platform_driver ab8500_btemp_driver = {
+ 	.probe = ab8500_btemp_probe,
+diff --git a/drivers/power/supply/ab8500_charger.c b/drivers/power/supply/ab8500_charger.c
+index db65be0269206..3d627768ad7b2 100644
+--- a/drivers/power/supply/ab8500_charger.c
++++ b/drivers/power/supply/ab8500_charger.c
+@@ -413,6 +413,14 @@ disable_otp:
+ static void ab8500_power_supply_changed(struct ab8500_charger *di,
+ 					struct power_supply *psy)
+ {
++	/*
++	 * This happens if we get notifications or interrupts and
++	 * the platform has been configured not to support one or
++	 * other type of charging.
++	 */
++	if (!psy)
++		return;
++
+ 	if (di->autopower_cfg) {
+ 		if (!di->usb.charger_connected &&
+ 		    !di->ac.charger_connected &&
+@@ -439,7 +447,15 @@ static void ab8500_charger_set_usb_connected(struct ab8500_charger *di,
+ 		if (!connected)
+ 			di->flags.vbus_drop_end = false;
+ 
+-		sysfs_notify(&di->usb_chg.psy->dev.kobj, NULL, "present");
++		/*
++		 * Sometimes the platform is configured not to support
++		 * USB charging and no psy has been created, but we still
++		 * will get these notifications.
++		 */
++		if (di->usb_chg.psy) {
++			sysfs_notify(&di->usb_chg.psy->dev.kobj, NULL,
++				     "present");
++		}
+ 
+ 		if (connected) {
+ 			mutex_lock(&di->charger_attached_mutex);
+@@ -3663,6 +3679,7 @@ static const struct of_device_id ab8500_charger_match[] = {
+ 	{ .compatible = "stericsson,ab8500-charger", },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, ab8500_charger_match);
+ 
+ static struct platform_driver ab8500_charger_driver = {
+ 	.probe = ab8500_charger_probe,
+diff --git a/drivers/power/supply/ab8500_fg.c b/drivers/power/supply/ab8500_fg.c
+index 592a73d4dde68..f1da757c939f8 100644
+--- a/drivers/power/supply/ab8500_fg.c
++++ b/drivers/power/supply/ab8500_fg.c
+@@ -3249,6 +3249,7 @@ static const struct of_device_id ab8500_fg_match[] = {
+ 	{ .compatible = "stericsson,ab8500-fg", },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, ab8500_fg_match);
+ 
+ static struct platform_driver ab8500_fg_driver = {
+ 	.probe = ab8500_fg_probe,
+diff --git a/drivers/power/supply/charger-manager.c b/drivers/power/supply/charger-manager.c
+index 6fcebe4415522..333349275b964 100644
+--- a/drivers/power/supply/charger-manager.c
++++ b/drivers/power/supply/charger-manager.c
+@@ -1279,6 +1279,7 @@ static const struct of_device_id charger_manager_match[] = {
+ 	},
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, charger_manager_match);
+ 
+ static struct charger_desc *of_cm_parse_desc(struct device *dev)
+ {
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index 2e9672fe4df1f..794caf03658d7 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -1094,7 +1094,7 @@ static int max17042_probe(struct i2c_client *client,
+ 	}
+ 
+ 	if (client->irq) {
+-		unsigned int flags = IRQF_TRIGGER_FALLING | IRQF_ONESHOT;
++		unsigned int flags = IRQF_ONESHOT;
+ 
+ 		/*
+ 		 * On ACPI systems the IRQ may be handled by ACPI-event code,
+diff --git a/drivers/power/supply/rt5033_battery.c b/drivers/power/supply/rt5033_battery.c
+index f330452341f02..9ad0afe83d1b7 100644
+--- a/drivers/power/supply/rt5033_battery.c
++++ b/drivers/power/supply/rt5033_battery.c
+@@ -164,9 +164,16 @@ static const struct i2c_device_id rt5033_battery_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, rt5033_battery_id);
+ 
++static const struct of_device_id rt5033_battery_of_match[] = {
++	{ .compatible = "richtek,rt5033-battery", },
++	{ }
++};
++MODULE_DEVICE_TABLE(of, rt5033_battery_of_match);
++
+ static struct i2c_driver rt5033_battery_driver = {
+ 	.driver = {
+ 		.name = "rt5033-battery",
++		.of_match_table = rt5033_battery_of_match,
+ 	},
+ 	.probe = rt5033_battery_probe,
+ 	.remove = rt5033_battery_remove,
+diff --git a/drivers/power/supply/sc2731_charger.c b/drivers/power/supply/sc2731_charger.c
+index 335cb857ef307..288b79836c139 100644
+--- a/drivers/power/supply/sc2731_charger.c
++++ b/drivers/power/supply/sc2731_charger.c
+@@ -524,6 +524,7 @@ static const struct of_device_id sc2731_charger_of_match[] = {
+ 	{ .compatible = "sprd,sc2731-charger", },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, sc2731_charger_of_match);
+ 
+ static struct platform_driver sc2731_charger_driver = {
+ 	.driver = {
+diff --git a/drivers/power/supply/sc27xx_fuel_gauge.c b/drivers/power/supply/sc27xx_fuel_gauge.c
+index 9c627618c2249..1ae8374e1cebe 100644
+--- a/drivers/power/supply/sc27xx_fuel_gauge.c
++++ b/drivers/power/supply/sc27xx_fuel_gauge.c
+@@ -1342,6 +1342,7 @@ static const struct of_device_id sc27xx_fgu_of_match[] = {
+ 	{ .compatible = "sprd,sc2731-fgu", },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, sc27xx_fgu_of_match);
+ 
+ static struct platform_driver sc27xx_fgu_driver = {
+ 	.probe = sc27xx_fgu_probe,
+diff --git a/drivers/pwm/pwm-img.c b/drivers/pwm/pwm-img.c
+index a34d95ed70b20..22c002e685b34 100644
+--- a/drivers/pwm/pwm-img.c
++++ b/drivers/pwm/pwm-img.c
+@@ -156,7 +156,7 @@ static int img_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	struct img_pwm_chip *pwm_chip = to_img_pwm_chip(chip);
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(chip->dev);
++	ret = pm_runtime_resume_and_get(chip->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/pwm/pwm-imx1.c b/drivers/pwm/pwm-imx1.c
+index f8b2c2e001a7a..c17652c40e142 100644
+--- a/drivers/pwm/pwm-imx1.c
++++ b/drivers/pwm/pwm-imx1.c
+@@ -180,8 +180,6 @@ static int pwm_imx1_remove(struct platform_device *pdev)
+ {
+ 	struct pwm_imx1_chip *imx = platform_get_drvdata(pdev);
+ 
+-	pwm_imx1_clk_disable_unprepare(&imx->chip);
+-
+ 	return pwmchip_remove(&imx->chip);
+ }
+ 
+diff --git a/drivers/pwm/pwm-spear.c b/drivers/pwm/pwm-spear.c
+index 6c6b44fd3f438..2d11ac277de8d 100644
+--- a/drivers/pwm/pwm-spear.c
++++ b/drivers/pwm/pwm-spear.c
+@@ -231,10 +231,6 @@ static int spear_pwm_probe(struct platform_device *pdev)
+ static int spear_pwm_remove(struct platform_device *pdev)
+ {
+ 	struct spear_pwm_chip *pc = platform_get_drvdata(pdev);
+-	int i;
+-
+-	for (i = 0; i < NUM_PWM; i++)
+-		pwm_disable(&pc->chip.pwms[i]);
+ 
+ 	/* clk was prepared in probe, hence unprepare it here */
+ 	clk_unprepare(pc->clk);
+diff --git a/drivers/pwm/pwm-tegra.c b/drivers/pwm/pwm-tegra.c
+index 1daf591025c00..8c4e6657b61e7 100644
+--- a/drivers/pwm/pwm-tegra.c
++++ b/drivers/pwm/pwm-tegra.c
+@@ -303,7 +303,6 @@ static int tegra_pwm_probe(struct platform_device *pdev)
+ static int tegra_pwm_remove(struct platform_device *pdev)
+ {
+ 	struct tegra_pwm_chip *pc = platform_get_drvdata(pdev);
+-	unsigned int i;
+ 	int err;
+ 
+ 	if (WARN_ON(!pc))
+@@ -313,18 +312,6 @@ static int tegra_pwm_remove(struct platform_device *pdev)
+ 	if (err < 0)
+ 		return err;
+ 
+-	for (i = 0; i < pc->chip.npwm; i++) {
+-		struct pwm_device *pwm = &pc->chip.pwms[i];
+-
+-		if (!pwm_is_enabled(pwm))
+-			if (clk_prepare_enable(pc->clk) < 0)
+-				continue;
+-
+-		pwm_writel(pc, i, 0);
+-
+-		clk_disable_unprepare(pc->clk);
+-	}
+-
+ 	reset_control_assert(pc->rst);
+ 	clk_disable_unprepare(pc->clk);
+ 
+diff --git a/drivers/remoteproc/remoteproc_cdev.c b/drivers/remoteproc/remoteproc_cdev.c
+index b19ea3057bde4..ff92ed25d8b0a 100644
+--- a/drivers/remoteproc/remoteproc_cdev.c
++++ b/drivers/remoteproc/remoteproc_cdev.c
+@@ -111,7 +111,7 @@ int rproc_char_device_add(struct rproc *rproc)
+ 
+ void rproc_char_device_remove(struct rproc *rproc)
+ {
+-	__unregister_chrdev(MAJOR(rproc->dev.devt), rproc->index, 1, "remoteproc");
++	cdev_del(&rproc->cdev);
+ }
+ 
+ void __init rproc_init_cdev(void)
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index dab2c0f5caf0e..47924d5ed4f56 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -2290,7 +2290,6 @@ int rproc_del(struct rproc *rproc)
+ 	mutex_unlock(&rproc->lock);
+ 
+ 	rproc_delete_debug_dir(rproc);
+-	rproc_char_device_remove(rproc);
+ 
+ 	/* the rproc is downref'ed as soon as it's removed from the klist */
+ 	mutex_lock(&rproc_list_mutex);
+@@ -2301,6 +2300,7 @@ int rproc_del(struct rproc *rproc)
+ 	synchronize_rcu();
+ 
+ 	device_del(&rproc->dev);
++	rproc_char_device_remove(rproc);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+index d9307935441df..afeb9d6e4313d 100644
+--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+@@ -1202,9 +1202,9 @@ static int k3_r5_core_of_init(struct platform_device *pdev)
+ 
+ 	core->tsp = k3_r5_core_of_get_tsp(dev, core->ti_sci);
+ 	if (IS_ERR(core->tsp)) {
++		ret = PTR_ERR(core->tsp);
+ 		dev_err(dev, "failed to construct ti-sci proc control, ret = %d\n",
+ 			ret);
+-		ret = PTR_ERR(core->tsp);
+ 		goto err;
+ 	}
+ 
+diff --git a/drivers/reset/Kconfig b/drivers/reset/Kconfig
+index 07d162b179fce..147543ad303f2 100644
+--- a/drivers/reset/Kconfig
++++ b/drivers/reset/Kconfig
+@@ -52,7 +52,8 @@ config RESET_BRCMSTB
+ config RESET_BRCMSTB_RESCAL
+ 	bool "Broadcom STB RESCAL reset controller"
+ 	depends on HAS_IOMEM
+-	default ARCH_BRCMSTB || COMPILE_TEST
++	depends on ARCH_BRCMSTB || COMPILE_TEST
++	default ARCH_BRCMSTB
+ 	help
+ 	  This enables the RESCAL reset controller for SATA, PCIe0, or PCIe1 on
+ 	  BCM7216.
+@@ -75,6 +76,7 @@ config RESET_IMX7
+ 
+ config RESET_INTEL_GW
+ 	bool "Intel Reset Controller Driver"
++	depends on X86 || COMPILE_TEST
+ 	depends on OF && HAS_IOMEM
+ 	select REGMAP_MMIO
+ 	help
+diff --git a/drivers/reset/core.c b/drivers/reset/core.c
+index a2df88e900118..f93388b9a4a1f 100644
+--- a/drivers/reset/core.c
++++ b/drivers/reset/core.c
+@@ -567,7 +567,10 @@ static struct reset_control *__reset_control_get_internal(
+ 	if (!rstc)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	try_module_get(rcdev->owner);
++	if (!try_module_get(rcdev->owner)) {
++		kfree(rstc);
++		return ERR_PTR(-ENODEV);
++	}
+ 
+ 	rstc->rcdev = rcdev;
+ 	list_add(&rstc->list, &rcdev->reset_control_head);
+diff --git a/drivers/reset/reset-a10sr.c b/drivers/reset/reset-a10sr.c
+index 7eacc89382f8a..99b3bc8382f35 100644
+--- a/drivers/reset/reset-a10sr.c
++++ b/drivers/reset/reset-a10sr.c
+@@ -118,6 +118,7 @@ static struct platform_driver a10sr_reset_driver = {
+ 	.probe	= a10sr_reset_probe,
+ 	.driver = {
+ 		.name		= "altr_a10sr_reset",
++		.of_match_table	= a10sr_reset_of_match,
+ 	},
+ };
+ module_platform_driver(a10sr_reset_driver);
+diff --git a/drivers/reset/reset-brcmstb.c b/drivers/reset/reset-brcmstb.c
+index f213264c8567b..42c9d5241c530 100644
+--- a/drivers/reset/reset-brcmstb.c
++++ b/drivers/reset/reset-brcmstb.c
+@@ -111,6 +111,7 @@ static const struct of_device_id brcmstb_reset_of_match[] = {
+ 	{ .compatible = "brcm,brcmstb-reset" },
+ 	{ /* sentinel */ }
+ };
++MODULE_DEVICE_TABLE(of, brcmstb_reset_of_match);
+ 
+ static struct platform_driver brcmstb_reset_driver = {
+ 	.probe	= brcmstb_reset_probe,
+diff --git a/drivers/rtc/proc.c b/drivers/rtc/proc.c
+index 73344598fc1be..cbcdbb19d848e 100644
+--- a/drivers/rtc/proc.c
++++ b/drivers/rtc/proc.c
+@@ -23,8 +23,8 @@ static bool is_rtc_hctosys(struct rtc_device *rtc)
+ 	int size;
+ 	char name[NAME_SIZE];
+ 
+-	size = scnprintf(name, NAME_SIZE, "rtc%d", rtc->id);
+-	if (size > NAME_SIZE)
++	size = snprintf(name, NAME_SIZE, "rtc%d", rtc->id);
++	if (size >= NAME_SIZE)
+ 		return false;
+ 
+ 	return !strncmp(name, CONFIG_RTC_HCTOSYS_DEVICE, NAME_SIZE);
+diff --git a/drivers/s390/char/sclp_vt220.c b/drivers/s390/char/sclp_vt220.c
+index 3f9a6ef650fac..3c2ed6d013873 100644
+--- a/drivers/s390/char/sclp_vt220.c
++++ b/drivers/s390/char/sclp_vt220.c
+@@ -35,8 +35,8 @@
+ #define SCLP_VT220_MINOR		65
+ #define SCLP_VT220_DRIVER_NAME		"sclp_vt220"
+ #define SCLP_VT220_DEVICE_NAME		"ttysclp"
+-#define SCLP_VT220_CONSOLE_NAME		"ttyS"
+-#define SCLP_VT220_CONSOLE_INDEX	1	/* console=ttyS1 */
++#define SCLP_VT220_CONSOLE_NAME		"ttysclp"
++#define SCLP_VT220_CONSOLE_INDEX	0	/* console=ttysclp0 */
+ 
+ /* Representation of a single write request */
+ struct sclp_vt220_request {
+diff --git a/drivers/s390/scsi/zfcp_sysfs.c b/drivers/s390/scsi/zfcp_sysfs.c
+index 8d9662e8b7179..3c7f5ecf5511d 100644
+--- a/drivers/s390/scsi/zfcp_sysfs.c
++++ b/drivers/s390/scsi/zfcp_sysfs.c
+@@ -487,6 +487,7 @@ static ssize_t zfcp_sysfs_port_fc_security_show(struct device *dev,
+ 	if (0 == (status & ZFCP_STATUS_COMMON_OPEN) ||
+ 	    0 == (status & ZFCP_STATUS_COMMON_UNBLOCKED) ||
+ 	    0 == (status & ZFCP_STATUS_PORT_PHYS_OPEN) ||
++	    0 != (status & ZFCP_STATUS_PORT_LINK_TEST) ||
+ 	    0 != (status & ZFCP_STATUS_COMMON_ERP_FAILED) ||
+ 	    0 != (status & ZFCP_STATUS_COMMON_ACCESS_BOXED))
+ 		i = sprintf(buf, "unknown\n");
+diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c
+index e4fdb473b9906..9294a2c677b3e 100644
+--- a/drivers/scsi/arcmsr/arcmsr_hba.c
++++ b/drivers/scsi/arcmsr/arcmsr_hba.c
+@@ -1928,8 +1928,12 @@ static void arcmsr_post_ccb(struct AdapterControlBlock *acb, struct CommandContr
+ 
+ 		if (ccb->arc_cdb_size <= 0x300)
+ 			arc_cdb_size = (ccb->arc_cdb_size - 1) >> 6 | 1;
+-		else
+-			arc_cdb_size = (((ccb->arc_cdb_size + 0xff) >> 8) + 2) << 1 | 1;
++		else {
++			arc_cdb_size = ((ccb->arc_cdb_size + 0xff) >> 8) + 2;
++			if (arc_cdb_size > 0xF)
++				arc_cdb_size = 0xF;
++			arc_cdb_size = (arc_cdb_size << 1) | 1;
++		}
+ 		ccb_post_stamp = (ccb->smid | arc_cdb_size);
+ 		writel(0, &pmu->inbound_queueport_high);
+ 		writel(ccb_post_stamp, &pmu->inbound_queueport_low);
+@@ -2420,10 +2424,17 @@ static void arcmsr_hbaD_doorbell_isr(struct AdapterControlBlock *pACB)
+ 
+ static void arcmsr_hbaE_doorbell_isr(struct AdapterControlBlock *pACB)
+ {
+-	uint32_t outbound_doorbell, in_doorbell, tmp;
++	uint32_t outbound_doorbell, in_doorbell, tmp, i;
+ 	struct MessageUnit_E __iomem *reg = pACB->pmuE;
+ 
+-	in_doorbell = readl(&reg->iobound_doorbell);
++	if (pACB->adapter_type == ACB_ADAPTER_TYPE_F) {
++		for (i = 0; i < 5; i++) {
++			in_doorbell = readl(&reg->iobound_doorbell);
++			if (in_doorbell != 0)
++				break;
++		}
++	} else
++		in_doorbell = readl(&reg->iobound_doorbell);
+ 	outbound_doorbell = in_doorbell ^ pACB->in_doorbell;
+ 	do {
+ 		writel(0, &reg->host_int_status); /* clear interrupt */
+diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
+index 5c3513a4b450e..987dc8135a9b4 100644
+--- a/drivers/scsi/be2iscsi/be_main.c
++++ b/drivers/scsi/be2iscsi/be_main.c
+@@ -416,7 +416,7 @@ static struct beiscsi_hba *beiscsi_hba_alloc(struct pci_dev *pcidev)
+ 			"beiscsi_hba_alloc - iscsi_host_alloc failed\n");
+ 		return NULL;
+ 	}
+-	shost->max_id = BE2_MAX_SESSIONS;
++	shost->max_id = BE2_MAX_SESSIONS - 1;
+ 	shost->max_channel = 0;
+ 	shost->max_cmd_len = BEISCSI_MAX_CMD_LEN;
+ 	shost->max_lun = BEISCSI_NUM_MAX_LUN;
+@@ -5318,7 +5318,7 @@ static int beiscsi_enable_port(struct beiscsi_hba *phba)
+ 	/* Re-enable UER. If different TPE occurs then it is recoverable. */
+ 	beiscsi_set_uer_feature(phba);
+ 
+-	phba->shost->max_id = phba->params.cxns_per_ctrl;
++	phba->shost->max_id = phba->params.cxns_per_ctrl - 1;
+ 	phba->shost->can_queue = phba->params.ios_per_ctrl;
+ 	ret = beiscsi_init_port(phba);
+ 	if (ret < 0) {
+@@ -5745,6 +5745,7 @@ free_hba:
+ 	pci_disable_msix(phba->pcidev);
+ 	pci_dev_put(phba->pcidev);
+ 	iscsi_host_free(phba->shost);
++	pci_disable_pcie_error_reporting(pcidev);
+ 	pci_set_drvdata(pcidev, NULL);
+ disable_pci:
+ 	pci_release_regions(pcidev);
+diff --git a/drivers/scsi/bnx2i/bnx2i_iscsi.c b/drivers/scsi/bnx2i/bnx2i_iscsi.c
+index fdd446765311a..21efc73b87bee 100644
+--- a/drivers/scsi/bnx2i/bnx2i_iscsi.c
++++ b/drivers/scsi/bnx2i/bnx2i_iscsi.c
+@@ -791,7 +791,7 @@ struct bnx2i_hba *bnx2i_alloc_hba(struct cnic_dev *cnic)
+ 		return NULL;
+ 	shost->dma_boundary = cnic->pcidev->dma_mask;
+ 	shost->transportt = bnx2i_scsi_xport_template;
+-	shost->max_id = ISCSI_MAX_CONNS_PER_HBA;
++	shost->max_id = ISCSI_MAX_CONNS_PER_HBA - 1;
+ 	shost->max_channel = 0;
+ 	shost->max_lun = 512;
+ 	shost->max_cmd_len = 16;
+diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
+index f078b3c4e083f..ecb134b4699f2 100644
+--- a/drivers/scsi/cxgbi/libcxgbi.c
++++ b/drivers/scsi/cxgbi/libcxgbi.c
+@@ -337,7 +337,7 @@ void cxgbi_hbas_remove(struct cxgbi_device *cdev)
+ EXPORT_SYMBOL_GPL(cxgbi_hbas_remove);
+ 
+ int cxgbi_hbas_add(struct cxgbi_device *cdev, u64 max_lun,
+-		unsigned int max_id, struct scsi_host_template *sht,
++		unsigned int max_conns, struct scsi_host_template *sht,
+ 		struct scsi_transport_template *stt)
+ {
+ 	struct cxgbi_hba *chba;
+@@ -357,7 +357,7 @@ int cxgbi_hbas_add(struct cxgbi_device *cdev, u64 max_lun,
+ 
+ 		shost->transportt = stt;
+ 		shost->max_lun = max_lun;
+-		shost->max_id = max_id;
++		shost->max_id = max_conns - 1;
+ 		shost->max_channel = 0;
+ 		shost->max_cmd_len = SCSI_MAX_VARLEN_CDB_SIZE;
+ 
+diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
+index df5a3bbeba5eb..fe8a5e5c0df84 100644
+--- a/drivers/scsi/device_handler/scsi_dh_alua.c
++++ b/drivers/scsi/device_handler/scsi_dh_alua.c
+@@ -508,7 +508,8 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ 	struct alua_port_group *tmp_pg;
+ 	int len, k, off, bufflen = ALUA_RTPG_SIZE;
+ 	unsigned char *desc, *buff;
+-	unsigned err, retval;
++	unsigned err;
++	int retval;
+ 	unsigned int tpg_desc_tbl_off;
+ 	unsigned char orig_transition_tmo;
+ 	unsigned long flags;
+@@ -548,12 +549,12 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ 			kfree(buff);
+ 			return SCSI_DH_OK;
+ 		}
+-		if (!scsi_sense_valid(&sense_hdr)) {
++		if (retval < 0 || !scsi_sense_valid(&sense_hdr)) {
+ 			sdev_printk(KERN_INFO, sdev,
+ 				    "%s: rtpg failed, result %d\n",
+ 				    ALUA_DH_NAME, retval);
+ 			kfree(buff);
+-			if (driver_byte(retval) == DRIVER_ERROR)
++			if (retval < 0)
+ 				return SCSI_DH_DEV_TEMP_BUSY;
+ 			return SCSI_DH_IO;
+ 		}
+@@ -775,11 +776,11 @@ static unsigned alua_stpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ 	retval = submit_stpg(sdev, pg->group_id, &sense_hdr);
+ 
+ 	if (retval) {
+-		if (!scsi_sense_valid(&sense_hdr)) {
++		if (retval < 0 || !scsi_sense_valid(&sense_hdr)) {
+ 			sdev_printk(KERN_INFO, sdev,
+ 				    "%s: stpg failed, result %d",
+ 				    ALUA_DH_NAME, retval);
+-			if (driver_byte(retval) == DRIVER_ERROR)
++			if (retval < 0)
+ 				return SCSI_DH_DEV_TEMP_BUSY;
+ 		} else {
+ 			sdev_printk(KERN_INFO, sdev, "%s: stpg failed\n",
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+index 6c2a97f80b120..2e529d67de730 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+@@ -1647,7 +1647,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 			if (irq < 0) {
+ 				dev_err(dev, "irq init: fail map phy interrupt %d\n",
+ 					idx);
+-				return -ENOENT;
++				return irq;
+ 			}
+ 
+ 			rc = devm_request_irq(dev, irq, phy_interrupts[j], 0,
+@@ -1655,7 +1655,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 			if (rc) {
+ 				dev_err(dev, "irq init: could not request phy interrupt %d, rc=%d\n",
+ 					irq, rc);
+-				return -ENOENT;
++				return rc;
+ 			}
+ 		}
+ 	}
+@@ -1666,7 +1666,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 		if (irq < 0) {
+ 			dev_err(dev, "irq init: could not map cq interrupt %d\n",
+ 				idx);
+-			return -ENOENT;
++			return irq;
+ 		}
+ 
+ 		rc = devm_request_irq(dev, irq, cq_interrupt_v1_hw, 0,
+@@ -1674,7 +1674,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 		if (rc) {
+ 			dev_err(dev, "irq init: could not request cq interrupt %d, rc=%d\n",
+ 				irq, rc);
+-			return -ENOENT;
++			return rc;
+ 		}
+ 	}
+ 
+@@ -1684,7 +1684,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 		if (irq < 0) {
+ 			dev_err(dev, "irq init: could not map fatal interrupt %d\n",
+ 				idx);
+-			return -ENOENT;
++			return irq;
+ 		}
+ 
+ 		rc = devm_request_irq(dev, irq, fatal_interrupts[i], 0,
+@@ -1692,7 +1692,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 		if (rc) {
+ 			dev_err(dev, "irq init: could not request fatal interrupt %d, rc=%d\n",
+ 				irq, rc);
+-			return -ENOENT;
++			return rc;
+ 		}
+ 	}
+ 
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index b93dd8ef4ac82..da3920a19d53d 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -220,6 +220,9 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+ 		goto fail;
+ 	}
+ 
++	shost->cmd_per_lun = min_t(short, shost->cmd_per_lun,
++				   shost->can_queue);
++
+ 	error = scsi_init_sense_cache(shost);
+ 	if (error)
+ 		goto fail;
+@@ -490,6 +493,7 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
+ 		shost_printk(KERN_WARNING, shost,
+ 			"error handler thread failed to spawn, error = %ld\n",
+ 			PTR_ERR(shost->ehandler));
++		shost->ehandler = NULL;
+ 		goto fail;
+ 	}
+ 
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 41023fc4bf2dc..30d27b6706746 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -230,11 +230,11 @@ static int iscsi_prep_ecdb_ahs(struct iscsi_task *task)
+  */
+ static int iscsi_check_tmf_restrictions(struct iscsi_task *task, int opcode)
+ {
+-	struct iscsi_conn *conn = task->conn;
+-	struct iscsi_tm *tmf = &conn->tmhdr;
++	struct iscsi_session *session = task->conn->session;
++	struct iscsi_tm *tmf = &session->tmhdr;
+ 	u64 hdr_lun;
+ 
+-	if (conn->tmf_state == TMF_INITIAL)
++	if (session->tmf_state == TMF_INITIAL)
+ 		return 0;
+ 
+ 	if ((tmf->opcode & ISCSI_OPCODE_MASK) != ISCSI_OP_SCSI_TMFUNC)
+@@ -254,24 +254,19 @@ static int iscsi_check_tmf_restrictions(struct iscsi_task *task, int opcode)
+ 		 * Fail all SCSI cmd PDUs
+ 		 */
+ 		if (opcode != ISCSI_OP_SCSI_DATA_OUT) {
+-			iscsi_conn_printk(KERN_INFO, conn,
+-					  "task [op %x itt "
+-					  "0x%x/0x%x] "
+-					  "rejected.\n",
+-					  opcode, task->itt,
+-					  task->hdr_itt);
++			iscsi_session_printk(KERN_INFO, session,
++					     "task [op %x itt 0x%x/0x%x] rejected.\n",
++					     opcode, task->itt, task->hdr_itt);
+ 			return -EACCES;
+ 		}
+ 		/*
+ 		 * And also all data-out PDUs in response to R2T
+ 		 * if fast_abort is set.
+ 		 */
+-		if (conn->session->fast_abort) {
+-			iscsi_conn_printk(KERN_INFO, conn,
+-					  "task [op %x itt "
+-					  "0x%x/0x%x] fast abort.\n",
+-					  opcode, task->itt,
+-					  task->hdr_itt);
++		if (session->fast_abort) {
++			iscsi_session_printk(KERN_INFO, session,
++					     "task [op %x itt 0x%x/0x%x] fast abort.\n",
++					     opcode, task->itt, task->hdr_itt);
+ 			return -EACCES;
+ 		}
+ 		break;
+@@ -284,7 +279,7 @@ static int iscsi_check_tmf_restrictions(struct iscsi_task *task, int opcode)
+ 		 */
+ 		if (opcode == ISCSI_OP_SCSI_DATA_OUT &&
+ 		    task->hdr_itt == tmf->rtt) {
+-			ISCSI_DBG_SESSION(conn->session,
++			ISCSI_DBG_SESSION(session,
+ 					  "Preventing task %x/%x from sending "
+ 					  "data-out due to abort task in "
+ 					  "progress\n", task->itt,
+@@ -923,20 +918,21 @@ iscsi_data_in_rsp(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
+ static void iscsi_tmf_rsp(struct iscsi_conn *conn, struct iscsi_hdr *hdr)
+ {
+ 	struct iscsi_tm_rsp *tmf = (struct iscsi_tm_rsp *)hdr;
++	struct iscsi_session *session = conn->session;
+ 
+ 	conn->exp_statsn = be32_to_cpu(hdr->statsn) + 1;
+ 	conn->tmfrsp_pdus_cnt++;
+ 
+-	if (conn->tmf_state != TMF_QUEUED)
++	if (session->tmf_state != TMF_QUEUED)
+ 		return;
+ 
+ 	if (tmf->response == ISCSI_TMF_RSP_COMPLETE)
+-		conn->tmf_state = TMF_SUCCESS;
++		session->tmf_state = TMF_SUCCESS;
+ 	else if (tmf->response == ISCSI_TMF_RSP_NO_TASK)
+-		conn->tmf_state = TMF_NOT_FOUND;
++		session->tmf_state = TMF_NOT_FOUND;
+ 	else
+-		conn->tmf_state = TMF_FAILED;
+-	wake_up(&conn->ehwait);
++		session->tmf_state = TMF_FAILED;
++	wake_up(&session->ehwait);
+ }
+ 
+ static int iscsi_send_nopout(struct iscsi_conn *conn, struct iscsi_nopin *rhdr)
+@@ -1348,7 +1344,6 @@ void iscsi_session_failure(struct iscsi_session *session,
+ 			   enum iscsi_err err)
+ {
+ 	struct iscsi_conn *conn;
+-	struct device *dev;
+ 
+ 	spin_lock_bh(&session->frwd_lock);
+ 	conn = session->leadconn;
+@@ -1357,10 +1352,8 @@ void iscsi_session_failure(struct iscsi_session *session,
+ 		return;
+ 	}
+ 
+-	dev = get_device(&conn->cls_conn->dev);
++	iscsi_get_conn(conn->cls_conn);
+ 	spin_unlock_bh(&session->frwd_lock);
+-	if (!dev)
+-	        return;
+ 	/*
+ 	 * if the host is being removed bypass the connection
+ 	 * recovery initialization because we are going to kill
+@@ -1370,7 +1363,7 @@ void iscsi_session_failure(struct iscsi_session *session,
+ 		iscsi_conn_error_event(conn->cls_conn, err);
+ 	else
+ 		iscsi_conn_failure(conn, err);
+-	put_device(dev);
++	iscsi_put_conn(conn->cls_conn);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_session_failure);
+ 
+@@ -1787,15 +1780,14 @@ EXPORT_SYMBOL_GPL(iscsi_target_alloc);
+ 
+ static void iscsi_tmf_timedout(struct timer_list *t)
+ {
+-	struct iscsi_conn *conn = from_timer(conn, t, tmf_timer);
+-	struct iscsi_session *session = conn->session;
++	struct iscsi_session *session = from_timer(session, t, tmf_timer);
+ 
+ 	spin_lock(&session->frwd_lock);
+-	if (conn->tmf_state == TMF_QUEUED) {
+-		conn->tmf_state = TMF_TIMEDOUT;
++	if (session->tmf_state == TMF_QUEUED) {
++		session->tmf_state = TMF_TIMEDOUT;
+ 		ISCSI_DBG_EH(session, "tmf timedout\n");
+ 		/* unblock eh_abort() */
+-		wake_up(&conn->ehwait);
++		wake_up(&session->ehwait);
+ 	}
+ 	spin_unlock(&session->frwd_lock);
+ }
+@@ -1818,8 +1810,8 @@ static int iscsi_exec_task_mgmt_fn(struct iscsi_conn *conn,
+ 		return -EPERM;
+ 	}
+ 	conn->tmfcmd_pdus_cnt++;
+-	conn->tmf_timer.expires = timeout * HZ + jiffies;
+-	add_timer(&conn->tmf_timer);
++	session->tmf_timer.expires = timeout * HZ + jiffies;
++	add_timer(&session->tmf_timer);
+ 	ISCSI_DBG_EH(session, "tmf set timeout\n");
+ 
+ 	spin_unlock_bh(&session->frwd_lock);
+@@ -1833,12 +1825,12 @@ static int iscsi_exec_task_mgmt_fn(struct iscsi_conn *conn,
+ 	 * 3) session is terminated or restarted or userspace has
+ 	 * given up on recovery
+ 	 */
+-	wait_event_interruptible(conn->ehwait, age != session->age ||
++	wait_event_interruptible(session->ehwait, age != session->age ||
+ 				 session->state != ISCSI_STATE_LOGGED_IN ||
+-				 conn->tmf_state != TMF_QUEUED);
++				 session->tmf_state != TMF_QUEUED);
+ 	if (signal_pending(current))
+ 		flush_signals(current);
+-	del_timer_sync(&conn->tmf_timer);
++	del_timer_sync(&session->tmf_timer);
+ 
+ 	mutex_lock(&session->eh_mutex);
+ 	spin_lock_bh(&session->frwd_lock);
+@@ -2198,17 +2190,17 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ 	}
+ 
+ 	/* only have one tmf outstanding at a time */
+-	if (conn->tmf_state != TMF_INITIAL)
++	if (session->tmf_state != TMF_INITIAL)
+ 		goto failed;
+-	conn->tmf_state = TMF_QUEUED;
++	session->tmf_state = TMF_QUEUED;
+ 
+-	hdr = &conn->tmhdr;
++	hdr = &session->tmhdr;
+ 	iscsi_prep_abort_task_pdu(task, hdr);
+ 
+ 	if (iscsi_exec_task_mgmt_fn(conn, hdr, age, session->abort_timeout))
+ 		goto failed;
+ 
+-	switch (conn->tmf_state) {
++	switch (session->tmf_state) {
+ 	case TMF_SUCCESS:
+ 		spin_unlock_bh(&session->frwd_lock);
+ 		/*
+@@ -2223,7 +2215,7 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ 		 */
+ 		spin_lock_bh(&session->frwd_lock);
+ 		fail_scsi_task(task, DID_ABORT);
+-		conn->tmf_state = TMF_INITIAL;
++		session->tmf_state = TMF_INITIAL;
+ 		memset(hdr, 0, sizeof(*hdr));
+ 		spin_unlock_bh(&session->frwd_lock);
+ 		iscsi_start_tx(conn);
+@@ -2234,7 +2226,7 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ 		goto failed_unlocked;
+ 	case TMF_NOT_FOUND:
+ 		if (!sc->SCp.ptr) {
+-			conn->tmf_state = TMF_INITIAL;
++			session->tmf_state = TMF_INITIAL;
+ 			memset(hdr, 0, sizeof(*hdr));
+ 			/* task completed before tmf abort response */
+ 			ISCSI_DBG_EH(session, "sc completed while abort	in "
+@@ -2243,7 +2235,7 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ 		}
+ 		fallthrough;
+ 	default:
+-		conn->tmf_state = TMF_INITIAL;
++		session->tmf_state = TMF_INITIAL;
+ 		goto failed;
+ 	}
+ 
+@@ -2300,11 +2292,11 @@ int iscsi_eh_device_reset(struct scsi_cmnd *sc)
+ 	conn = session->leadconn;
+ 
+ 	/* only have one tmf outstanding at a time */
+-	if (conn->tmf_state != TMF_INITIAL)
++	if (session->tmf_state != TMF_INITIAL)
+ 		goto unlock;
+-	conn->tmf_state = TMF_QUEUED;
++	session->tmf_state = TMF_QUEUED;
+ 
+-	hdr = &conn->tmhdr;
++	hdr = &session->tmhdr;
+ 	iscsi_prep_lun_reset_pdu(sc, hdr);
+ 
+ 	if (iscsi_exec_task_mgmt_fn(conn, hdr, session->age,
+@@ -2313,7 +2305,7 @@ int iscsi_eh_device_reset(struct scsi_cmnd *sc)
+ 		goto unlock;
+ 	}
+ 
+-	switch (conn->tmf_state) {
++	switch (session->tmf_state) {
+ 	case TMF_SUCCESS:
+ 		break;
+ 	case TMF_TIMEDOUT:
+@@ -2321,7 +2313,7 @@ int iscsi_eh_device_reset(struct scsi_cmnd *sc)
+ 		iscsi_conn_failure(conn, ISCSI_ERR_SCSI_EH_SESSION_RST);
+ 		goto done;
+ 	default:
+-		conn->tmf_state = TMF_INITIAL;
++		session->tmf_state = TMF_INITIAL;
+ 		goto unlock;
+ 	}
+ 
+@@ -2333,7 +2325,7 @@ int iscsi_eh_device_reset(struct scsi_cmnd *sc)
+ 	spin_lock_bh(&session->frwd_lock);
+ 	memset(hdr, 0, sizeof(*hdr));
+ 	fail_scsi_tasks(conn, sc->device->lun, DID_ERROR);
+-	conn->tmf_state = TMF_INITIAL;
++	session->tmf_state = TMF_INITIAL;
+ 	spin_unlock_bh(&session->frwd_lock);
+ 
+ 	iscsi_start_tx(conn);
+@@ -2356,8 +2348,7 @@ void iscsi_session_recovery_timedout(struct iscsi_cls_session *cls_session)
+ 	spin_lock_bh(&session->frwd_lock);
+ 	if (session->state != ISCSI_STATE_LOGGED_IN) {
+ 		session->state = ISCSI_STATE_RECOVERY_FAILED;
+-		if (session->leadconn)
+-			wake_up(&session->leadconn->ehwait);
++		wake_up(&session->ehwait);
+ 	}
+ 	spin_unlock_bh(&session->frwd_lock);
+ }
+@@ -2402,7 +2393,7 @@ failed:
+ 	iscsi_conn_failure(conn, ISCSI_ERR_SCSI_EH_SESSION_RST);
+ 
+ 	ISCSI_DBG_EH(session, "wait for relogin\n");
+-	wait_event_interruptible(conn->ehwait,
++	wait_event_interruptible(session->ehwait,
+ 				 session->state == ISCSI_STATE_TERMINATE ||
+ 				 session->state == ISCSI_STATE_LOGGED_IN ||
+ 				 session->state == ISCSI_STATE_RECOVERY_FAILED);
+@@ -2463,11 +2454,11 @@ static int iscsi_eh_target_reset(struct scsi_cmnd *sc)
+ 	conn = session->leadconn;
+ 
+ 	/* only have one tmf outstanding at a time */
+-	if (conn->tmf_state != TMF_INITIAL)
++	if (session->tmf_state != TMF_INITIAL)
+ 		goto unlock;
+-	conn->tmf_state = TMF_QUEUED;
++	session->tmf_state = TMF_QUEUED;
+ 
+-	hdr = &conn->tmhdr;
++	hdr = &session->tmhdr;
+ 	iscsi_prep_tgt_reset_pdu(sc, hdr);
+ 
+ 	if (iscsi_exec_task_mgmt_fn(conn, hdr, session->age,
+@@ -2476,7 +2467,7 @@ static int iscsi_eh_target_reset(struct scsi_cmnd *sc)
+ 		goto unlock;
+ 	}
+ 
+-	switch (conn->tmf_state) {
++	switch (session->tmf_state) {
+ 	case TMF_SUCCESS:
+ 		break;
+ 	case TMF_TIMEDOUT:
+@@ -2484,7 +2475,7 @@ static int iscsi_eh_target_reset(struct scsi_cmnd *sc)
+ 		iscsi_conn_failure(conn, ISCSI_ERR_SCSI_EH_SESSION_RST);
+ 		goto done;
+ 	default:
+-		conn->tmf_state = TMF_INITIAL;
++		session->tmf_state = TMF_INITIAL;
+ 		goto unlock;
+ 	}
+ 
+@@ -2496,7 +2487,7 @@ static int iscsi_eh_target_reset(struct scsi_cmnd *sc)
+ 	spin_lock_bh(&session->frwd_lock);
+ 	memset(hdr, 0, sizeof(*hdr));
+ 	fail_scsi_tasks(conn, -1, DID_ERROR);
+-	conn->tmf_state = TMF_INITIAL;
++	session->tmf_state = TMF_INITIAL;
+ 	spin_unlock_bh(&session->frwd_lock);
+ 
+ 	iscsi_start_tx(conn);
+@@ -2803,7 +2794,10 @@ iscsi_session_setup(struct iscsi_transport *iscsit, struct Scsi_Host *shost,
+ 	session->tt = iscsit;
+ 	session->dd_data = cls_session->dd_data + sizeof(*session);
+ 
++	session->tmf_state = TMF_INITIAL;
++	timer_setup(&session->tmf_timer, iscsi_tmf_timedout, 0);
+ 	mutex_init(&session->eh_mutex);
++
+ 	spin_lock_init(&session->frwd_lock);
+ 	spin_lock_init(&session->back_lock);
+ 
+@@ -2907,7 +2901,6 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, int dd_size,
+ 	conn->c_stage = ISCSI_CONN_INITIAL_STAGE;
+ 	conn->id = conn_idx;
+ 	conn->exp_statsn = 0;
+-	conn->tmf_state = TMF_INITIAL;
+ 
+ 	timer_setup(&conn->transport_timer, iscsi_check_transport_timeouts, 0);
+ 
+@@ -2933,8 +2926,7 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, int dd_size,
+ 		goto login_task_data_alloc_fail;
+ 	conn->login_task->data = conn->data = data;
+ 
+-	timer_setup(&conn->tmf_timer, iscsi_tmf_timedout, 0);
+-	init_waitqueue_head(&conn->ehwait);
++	init_waitqueue_head(&session->ehwait);
+ 
+ 	return cls_conn;
+ 
+@@ -2969,7 +2961,7 @@ void iscsi_conn_teardown(struct iscsi_cls_conn *cls_conn)
+ 		 * leading connection? then give up on recovery.
+ 		 */
+ 		session->state = ISCSI_STATE_TERMINATE;
+-		wake_up(&conn->ehwait);
++		wake_up(&session->ehwait);
+ 	}
+ 	spin_unlock_bh(&session->frwd_lock);
+ 
+@@ -3044,7 +3036,7 @@ int iscsi_conn_start(struct iscsi_cls_conn *cls_conn)
+ 		 * commands after successful recovery
+ 		 */
+ 		conn->stop_stage = 0;
+-		conn->tmf_state = TMF_INITIAL;
++		session->tmf_state = TMF_INITIAL;
+ 		session->age++;
+ 		if (session->age == 16)
+ 			session->age = 0;
+@@ -3058,7 +3050,7 @@ int iscsi_conn_start(struct iscsi_cls_conn *cls_conn)
+ 	spin_unlock_bh(&session->frwd_lock);
+ 
+ 	iscsi_unblock_session(session->cls_session);
+-	wake_up(&conn->ehwait);
++	wake_up(&session->ehwait);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(iscsi_conn_start);
+@@ -3146,7 +3138,7 @@ void iscsi_conn_stop(struct iscsi_cls_conn *cls_conn, int flag)
+ 	spin_lock_bh(&session->frwd_lock);
+ 	fail_scsi_tasks(conn, -1, DID_TRANSPORT_DISRUPTED);
+ 	fail_mgmt_tasks(session, conn);
+-	memset(&conn->tmhdr, 0, sizeof(conn->tmhdr));
++	memset(&session->tmhdr, 0, sizeof(session->tmhdr));
+ 	spin_unlock_bh(&session->frwd_lock);
+ 	mutex_unlock(&session->eh_mutex);
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index b60945182db80..3d9889b3d5c8a 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -1179,6 +1179,15 @@ stop_rr_fcf_flogi:
+ 			phba->fcf.fcf_redisc_attempted = 0; /* reset */
+ 			goto out;
+ 		}
++	} else if (vport->port_state > LPFC_FLOGI &&
++		   vport->fc_flag & FC_PT2PT) {
++		/*
++		 * In a p2p topology, it is possible that discovery has
++		 * already progressed, and this completion can be ignored.
++		 * Recheck the indicated topology.
++		 */
++		if (!sp->cmn.fPort)
++			goto out;
+ 	}
+ 
+ flogifail:
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index bf171ef61abd5..990b700de6892 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -7825,7 +7825,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
+ 				"0393 Error %d during rpi post operation\n",
+ 				rc);
+ 		rc = -ENODEV;
+-		goto out_destroy_queue;
++		goto out_free_iocblist;
+ 	}
+ 	lpfc_sli4_node_prep(phba);
+ 
+@@ -7991,8 +7991,9 @@ out_io_buff_free:
+ out_unset_queue:
+ 	/* Unset all the queues set up in this routine when error out */
+ 	lpfc_sli4_queue_unset(phba);
+-out_destroy_queue:
++out_free_iocblist:
+ 	lpfc_free_iocb_list(phba);
++out_destroy_queue:
+ 	lpfc_sli4_queue_destroy(phba);
+ out_stop_timers:
+ 	lpfc_stop_hba_timers(phba);
+diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
+index 5e4137f10e0e9..6b8ec57e8bdfa 100644
+--- a/drivers/scsi/megaraid/megaraid_sas.h
++++ b/drivers/scsi/megaraid/megaraid_sas.h
+@@ -2259,6 +2259,15 @@ enum MR_PERF_MODE {
+ 		 (mode) == MR_LATENCY_PERF_MODE ? "Latency" : \
+ 		 "Unknown")
+ 
++enum MEGASAS_LD_TARGET_ID_STATUS {
++	LD_TARGET_ID_INITIAL,
++	LD_TARGET_ID_ACTIVE,
++	LD_TARGET_ID_DELETED,
++};
++
++#define MEGASAS_TARGET_ID(sdev)						\
++	(((sdev->channel % 2) * MEGASAS_MAX_DEV_PER_CHANNEL) + sdev->id)
++
+ struct megasas_instance {
+ 
+ 	unsigned int *reply_map;
+@@ -2323,6 +2332,9 @@ struct megasas_instance {
+ 	struct megasas_pd_list          pd_list[MEGASAS_MAX_PD];
+ 	struct megasas_pd_list          local_pd_list[MEGASAS_MAX_PD];
+ 	u8 ld_ids[MEGASAS_MAX_LD_IDS];
++	u8 ld_tgtid_status[MEGASAS_MAX_LD_IDS];
++	u8 ld_ids_prev[MEGASAS_MAX_LD_IDS];
++	u8 ld_ids_from_raidmap[MEGASAS_MAX_LD_IDS];
+ 	s8 init_id;
+ 
+ 	u16 max_num_sge;
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index cc45cdac13844..1a70cc995c28c 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -127,6 +127,8 @@ static int megasas_register_aen(struct megasas_instance *instance,
+ 				u32 seq_num, u32 class_locale_word);
+ static void megasas_get_pd_info(struct megasas_instance *instance,
+ 				struct scsi_device *sdev);
++static void
++megasas_set_ld_removed_by_fw(struct megasas_instance *instance);
+ 
+ /*
+  * PCI ID table for all supported controllers
+@@ -421,6 +423,12 @@ megasas_decode_evt(struct megasas_instance *instance)
+ 			(class_locale.members.locale),
+ 			format_class(class_locale.members.class),
+ 			evt_detail->description);
++
++	if (megasas_dbg_lvl & LD_PD_DEBUG)
++		dev_info(&instance->pdev->dev,
++			 "evt_detail.args.ld.target_id/index %d/%d\n",
++			 evt_detail->args.ld.target_id, evt_detail->args.ld.ld_index);
++
+ }
+ 
+ /*
+@@ -1764,6 +1772,7 @@ megasas_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+ {
+ 	struct megasas_instance *instance;
+ 	struct MR_PRIV_DEVICE *mr_device_priv_data;
++	u32 ld_tgt_id;
+ 
+ 	instance = (struct megasas_instance *)
+ 	    scmd->device->host->hostdata;
+@@ -1790,17 +1799,21 @@ megasas_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+ 		}
+ 	}
+ 
+-	if (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR) {
++	mr_device_priv_data = scmd->device->hostdata;
++	if (!mr_device_priv_data ||
++	    (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR)) {
+ 		scmd->result = DID_NO_CONNECT << 16;
+ 		scmd->scsi_done(scmd);
+ 		return 0;
+ 	}
+ 
+-	mr_device_priv_data = scmd->device->hostdata;
+-	if (!mr_device_priv_data) {
+-		scmd->result = DID_NO_CONNECT << 16;
+-		scmd->scsi_done(scmd);
+-		return 0;
++	if (MEGASAS_IS_LOGICAL(scmd->device)) {
++		ld_tgt_id = MEGASAS_TARGET_ID(scmd->device);
++		if (instance->ld_tgtid_status[ld_tgt_id] == LD_TARGET_ID_DELETED) {
++			scmd->result = DID_NO_CONNECT << 16;
++			scmd->scsi_done(scmd);
++			return 0;
++		}
+ 	}
+ 
+ 	if (atomic_read(&instance->adprecovery) != MEGASAS_HBA_OPERATIONAL)
+@@ -2080,7 +2093,7 @@ static int megasas_slave_configure(struct scsi_device *sdev)
+ 
+ static int megasas_slave_alloc(struct scsi_device *sdev)
+ {
+-	u16 pd_index = 0;
++	u16 pd_index = 0, ld_tgt_id;
+ 	struct megasas_instance *instance ;
+ 	struct MR_PRIV_DEVICE *mr_device_priv_data;
+ 
+@@ -2105,6 +2118,14 @@ scan_target:
+ 					GFP_KERNEL);
+ 	if (!mr_device_priv_data)
+ 		return -ENOMEM;
++
++	if (MEGASAS_IS_LOGICAL(sdev)) {
++		ld_tgt_id = MEGASAS_TARGET_ID(sdev);
++		instance->ld_tgtid_status[ld_tgt_id] = LD_TARGET_ID_ACTIVE;
++		if (megasas_dbg_lvl & LD_PD_DEBUG)
++			sdev_printk(KERN_INFO, sdev, "LD target ID %d created.\n", ld_tgt_id);
++	}
++
+ 	sdev->hostdata = mr_device_priv_data;
+ 
+ 	atomic_set(&mr_device_priv_data->r1_ldio_hint,
+@@ -2114,6 +2135,19 @@ scan_target:
+ 
+ static void megasas_slave_destroy(struct scsi_device *sdev)
+ {
++	u16 ld_tgt_id;
++	struct megasas_instance *instance;
++
++	instance = megasas_lookup_instance(sdev->host->host_no);
++
++	if (MEGASAS_IS_LOGICAL(sdev)) {
++		ld_tgt_id = MEGASAS_TARGET_ID(sdev);
++		instance->ld_tgtid_status[ld_tgt_id] = LD_TARGET_ID_DELETED;
++		if (megasas_dbg_lvl & LD_PD_DEBUG)
++			sdev_printk(KERN_INFO, sdev,
++				    "LD target ID %d removed from OS stack\n", ld_tgt_id);
++	}
++
+ 	kfree(sdev->hostdata);
+ 	sdev->hostdata = NULL;
+ }
+@@ -3472,6 +3506,22 @@ megasas_complete_abort(struct megasas_instance *instance,
+ 	}
+ }
+ 
++static void
++megasas_set_ld_removed_by_fw(struct megasas_instance *instance)
++{
++	uint i;
++
++	for (i = 0; (i < MEGASAS_MAX_LD_IDS); i++) {
++		if (instance->ld_ids_prev[i] != 0xff &&
++		    instance->ld_ids_from_raidmap[i] == 0xff) {
++			if (megasas_dbg_lvl & LD_PD_DEBUG)
++				dev_info(&instance->pdev->dev,
++					 "LD target ID %d removed from RAID map\n", i);
++			instance->ld_tgtid_status[i] = LD_TARGET_ID_DELETED;
++		}
++	}
++}
++
+ /**
+  * megasas_complete_cmd -	Completes a command
+  * @instance:			Adapter soft state
+@@ -3634,9 +3684,13 @@ megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd,
+ 				fusion->fast_path_io = 0;
+ 			}
+ 
++			if (instance->adapter_type >= INVADER_SERIES)
++				megasas_set_ld_removed_by_fw(instance);
++
+ 			megasas_sync_map_info(instance);
+ 			spin_unlock_irqrestore(instance->host->host_lock,
+ 					       flags);
++
+ 			break;
+ 		}
+ 		if (opcode == MR_DCMD_CTRL_EVENT_GET_INFO ||
+@@ -7439,11 +7493,16 @@ static int megasas_probe_one(struct pci_dev *pdev,
+ 	return 0;
+ 
+ fail_start_aen:
++	instance->unload = 1;
++	scsi_remove_host(instance->host);
+ fail_io_attach:
+ 	megasas_mgmt_info.count--;
+ 	megasas_mgmt_info.max_index--;
+ 	megasas_mgmt_info.instance[megasas_mgmt_info.max_index] = NULL;
+ 
++	if (instance->requestorId && !instance->skip_heartbeat_timer_del)
++		del_timer_sync(&instance->sriov_heartbeat_timer);
++
+ 	instance->instancet->disable_intr(instance);
+ 	megasas_destroy_irqs(instance);
+ 
+@@ -7451,8 +7510,16 @@ fail_io_attach:
+ 		megasas_release_fusion(instance);
+ 	else
+ 		megasas_release_mfi(instance);
++
+ 	if (instance->msix_vectors)
+ 		pci_free_irq_vectors(instance->pdev);
++	instance->msix_vectors = 0;
++
++	if (instance->fw_crash_state != UNAVAILABLE)
++		megasas_free_host_crash_buffer(instance);
++
++	if (instance->adapter_type != MFI_SERIES)
++		megasas_fusion_stop_watchdog(instance);
+ fail_init_mfi:
+ 	scsi_host_put(host);
+ fail_alloc_instance:
+@@ -8764,8 +8831,10 @@ megasas_aen_polling(struct work_struct *work)
+ 	union megasas_evt_class_locale class_locale;
+ 	int event_type = 0;
+ 	u32 seq_num;
++	u16 ld_target_id;
+ 	int error;
+ 	u8  dcmd_ret = DCMD_SUCCESS;
++	struct scsi_device *sdev1;
+ 
+ 	if (!instance) {
+ 		printk(KERN_ERR "invalid instance!\n");
+@@ -8788,12 +8857,23 @@ megasas_aen_polling(struct work_struct *work)
+ 			break;
+ 
+ 		case MR_EVT_LD_OFFLINE:
+-		case MR_EVT_CFG_CLEARED:
+ 		case MR_EVT_LD_DELETED:
++			ld_target_id = instance->evt_detail->args.ld.target_id;
++			sdev1 = scsi_device_lookup(instance->host,
++						   MEGASAS_MAX_PD_CHANNELS +
++						   (ld_target_id / MEGASAS_MAX_DEV_PER_CHANNEL),
++						   (ld_target_id - MEGASAS_MAX_DEV_PER_CHANNEL),
++						   0);
++			if (sdev1)
++				megasas_remove_scsi_device(sdev1);
++
++			event_type = SCAN_VD_CHANNEL;
++			break;
+ 		case MR_EVT_LD_CREATED:
+ 			event_type = SCAN_VD_CHANNEL;
+ 			break;
+ 
++		case MR_EVT_CFG_CLEARED:
+ 		case MR_EVT_CTRL_HOST_BUS_SCAN_REQUESTED:
+ 		case MR_EVT_FOREIGN_CFG_IMPORTED:
+ 		case MR_EVT_LD_STATE_CHANGE:
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fp.c b/drivers/scsi/megaraid/megaraid_sas_fp.c
+index b6c08d6200335..83f69c33b01a9 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fp.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fp.c
+@@ -349,6 +349,10 @@ u8 MR_ValidateMapInfo(struct megasas_instance *instance, u64 map_id)
+ 
+ 	num_lds = le16_to_cpu(drv_map->raidMap.ldCount);
+ 
++	memcpy(instance->ld_ids_prev,
++	       instance->ld_ids_from_raidmap,
++	       sizeof(instance->ld_ids_from_raidmap));
++	memset(instance->ld_ids_from_raidmap, 0xff, MEGASAS_MAX_LD_IDS);
+ 	/*Convert Raid capability values to CPU arch */
+ 	for (i = 0; (num_lds > 0) && (i < MAX_LOGICAL_DRIVES_EXT); i++) {
+ 		ld = MR_TargetIdToLdGet(i, drv_map);
+@@ -359,7 +363,7 @@ u8 MR_ValidateMapInfo(struct megasas_instance *instance, u64 map_id)
+ 
+ 		raid = MR_LdRaidGet(ld, drv_map);
+ 		le32_to_cpus((u32 *)&raid->capability);
+-
++		instance->ld_ids_from_raidmap[i] = i;
+ 		num_lds--;
+ 	}
+ 
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index b0c01cf0428f2..13022a42fd6f4 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -3667,6 +3667,7 @@ static void megasas_sync_irqs(unsigned long instance_addr)
+ 		if (irq_ctx->irq_poll_scheduled) {
+ 			irq_ctx->irq_poll_scheduled = false;
+ 			enable_irq(irq_ctx->os_irq);
++			complete_cmd_fusion(instance, irq_ctx->MSIxIndex, irq_ctx);
+ 		}
+ 	}
+ }
+@@ -3698,6 +3699,7 @@ int megasas_irqpoll(struct irq_poll *irqpoll, int budget)
+ 		irq_poll_complete(irqpoll);
+ 		irq_ctx->irq_poll_scheduled = false;
+ 		enable_irq(irq_ctx->os_irq);
++		complete_cmd_fusion(instance, irq_ctx->MSIxIndex, irq_ctx);
+ 	}
+ 
+ 	return num_entries;
+@@ -3714,6 +3716,7 @@ megasas_complete_cmd_dpc_fusion(unsigned long instance_addr)
+ {
+ 	struct megasas_instance *instance =
+ 		(struct megasas_instance *)instance_addr;
++	struct megasas_irq_context *irq_ctx = NULL;
+ 	u32 count, MSIxIndex;
+ 
+ 	count = instance->msix_vectors > 0 ? instance->msix_vectors : 1;
+@@ -3722,8 +3725,10 @@ megasas_complete_cmd_dpc_fusion(unsigned long instance_addr)
+ 	if (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR)
+ 		return;
+ 
+-	for (MSIxIndex = 0 ; MSIxIndex < count; MSIxIndex++)
+-		complete_cmd_fusion(instance, MSIxIndex, NULL);
++	for (MSIxIndex = 0 ; MSIxIndex < count; MSIxIndex++) {
++		irq_ctx = &instance->irq_context[MSIxIndex];
++		complete_cmd_fusion(instance, MSIxIndex, irq_ctx);
++	}
+ }
+ 
+ /**
+@@ -5193,6 +5198,7 @@ megasas_alloc_fusion_context(struct megasas_instance *instance)
+ 		if (!fusion->log_to_span) {
+ 			dev_err(&instance->pdev->dev, "Failed from %s %d\n",
+ 				__func__, __LINE__);
++			kfree(instance->ctrl_context);
+ 			return -ENOMEM;
+ 		}
+ 	}
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 008f734698f71..31c384108bc9c 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -3526,6 +3526,28 @@ _scsih_fw_event_cleanup_queue(struct MPT3SAS_ADAPTER *ioc)
+ 	ioc->fw_events_cleanup = 1;
+ 	while ((fw_event = dequeue_next_fw_event(ioc)) ||
+ 	     (fw_event = ioc->current_event)) {
++
++		/*
++		 * Don't call cancel_work_sync() for current_event
++		 * other than MPT3SAS_REMOVE_UNRESPONDING_DEVICES;
++		 * otherwise we may observe deadlock if current
++		 * hard reset issued as part of processing the current_event.
++		 *
++		 * Orginal logic of cleaning the current_event is added
++		 * for handling the back to back host reset issued by the user.
++		 * i.e. during back to back host reset, driver use to process
++		 * the two instances of MPT3SAS_REMOVE_UNRESPONDING_DEVICES
++		 * event back to back and this made the drives to unregister
++		 * the devices from SML.
++		 */
++
++		if (fw_event == ioc->current_event &&
++		    ioc->current_event->event !=
++		    MPT3SAS_REMOVE_UNRESPONDING_DEVICES) {
++			ioc->current_event = NULL;
++			continue;
++		}
++
+ 		/*
+ 		 * Wait on the fw_event to complete. If this returns 1, then
+ 		 * the event was never executed, and we need a put for the
+diff --git a/drivers/scsi/qedi/qedi.h b/drivers/scsi/qedi/qedi.h
+index c342defc3f522..ce199a7a16b80 100644
+--- a/drivers/scsi/qedi/qedi.h
++++ b/drivers/scsi/qedi/qedi.h
+@@ -284,6 +284,7 @@ struct qedi_ctx {
+ #define QEDI_IN_RECOVERY	5
+ #define QEDI_IN_OFFLINE		6
+ #define QEDI_IN_SHUTDOWN	7
++#define QEDI_BLOCK_IO		8
+ 
+ 	u8 mac[ETH_ALEN];
+ 	u32 src_ip[4];
+diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c
+index 440ddd2309f1d..4c87640e6a911 100644
+--- a/drivers/scsi/qedi/qedi_fw.c
++++ b/drivers/scsi/qedi/qedi_fw.c
+@@ -73,7 +73,6 @@ static void qedi_process_logout_resp(struct qedi_ctx *qedi,
+ 	spin_unlock(&qedi_conn->list_lock);
+ 
+ 	cmd->state = RESPONSE_RECEIVED;
+-	qedi_clear_task_idx(qedi, cmd->task_id);
+ 	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr, NULL, 0);
+ 
+ 	spin_unlock(&session->back_lock);
+@@ -138,7 +137,6 @@ static void qedi_process_text_resp(struct qedi_ctx *qedi,
+ 	spin_unlock(&qedi_conn->list_lock);
+ 
+ 	cmd->state = RESPONSE_RECEIVED;
+-	qedi_clear_task_idx(qedi, cmd->task_id);
+ 
+ 	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr,
+ 			     qedi_conn->gen_pdu.resp_buf,
+@@ -161,16 +159,9 @@ static void qedi_tmf_resp_work(struct work_struct *work)
+ 	set_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+ 	resp_hdr_ptr =  (struct iscsi_tm_rsp *)qedi_cmd->tmf_resp_buf;
+ 
+-	iscsi_block_session(session->cls_session);
+ 	rval = qedi_cleanup_all_io(qedi, qedi_conn, qedi_cmd->task, true);
+-	if (rval) {
+-		qedi_clear_task_idx(qedi, qedi_cmd->task_id);
+-		iscsi_unblock_session(session->cls_session);
++	if (rval)
+ 		goto exit_tmf_resp;
+-	}
+-
+-	iscsi_unblock_session(session->cls_session);
+-	qedi_clear_task_idx(qedi, qedi_cmd->task_id);
+ 
+ 	spin_lock(&session->back_lock);
+ 	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr, NULL, 0);
+@@ -245,8 +236,6 @@ static void qedi_process_tmf_resp(struct qedi_ctx *qedi,
+ 		goto unblock_sess;
+ 	}
+ 
+-	qedi_clear_task_idx(qedi, qedi_cmd->task_id);
+-
+ 	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr, NULL, 0);
+ 	kfree(resp_hdr_ptr);
+ 
+@@ -314,7 +303,6 @@ static void qedi_process_login_resp(struct qedi_ctx *qedi,
+ 		  "Freeing tid=0x%x for cid=0x%x\n",
+ 		  cmd->task_id, qedi_conn->iscsi_conn_id);
+ 	cmd->state = RESPONSE_RECEIVED;
+-	qedi_clear_task_idx(qedi, cmd->task_id);
+ }
+ 
+ static void qedi_get_rq_bdq_buf(struct qedi_ctx *qedi,
+@@ -468,7 +456,6 @@ static int qedi_process_nopin_mesg(struct qedi_ctx *qedi,
+ 		}
+ 
+ 		spin_unlock(&qedi_conn->list_lock);
+-		qedi_clear_task_idx(qedi, cmd->task_id);
+ 	}
+ 
+ done:
+@@ -673,7 +660,6 @@ static void qedi_scsi_completion(struct qedi_ctx *qedi,
+ 	if (qedi_io_tracing)
+ 		qedi_trace_io(qedi, task, cmd->task_id, QEDI_IO_TRACE_RSP);
+ 
+-	qedi_clear_task_idx(qedi, cmd->task_id);
+ 	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr,
+ 			     conn->data, datalen);
+ error:
+@@ -730,7 +716,6 @@ static void qedi_process_nopin_local_cmpl(struct qedi_ctx *qedi,
+ 		  cqe->itid, cmd->task_id);
+ 
+ 	cmd->state = RESPONSE_RECEIVED;
+-	qedi_clear_task_idx(qedi, cmd->task_id);
+ 
+ 	spin_lock_bh(&session->back_lock);
+ 	__iscsi_put_task(task);
+@@ -748,7 +733,6 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+ 	itt_t protoitt = 0;
+ 	int found = 0;
+ 	struct qedi_cmd *qedi_cmd = NULL;
+-	u32 rtid = 0;
+ 	u32 iscsi_cid;
+ 	struct qedi_conn *qedi_conn;
+ 	struct qedi_cmd *dbg_cmd;
+@@ -779,7 +763,6 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+ 			found = 1;
+ 			mtask = qedi_cmd->task;
+ 			tmf_hdr = (struct iscsi_tm *)mtask->hdr;
+-			rtid = work->rtid;
+ 
+ 			list_del_init(&work->list);
+ 			kfree(work);
+@@ -821,8 +804,6 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+ 			if (qedi_cmd->state == CLEANUP_WAIT_FAILED)
+ 				qedi_cmd->state = CLEANUP_RECV;
+ 
+-			qedi_clear_task_idx(qedi_conn->qedi, rtid);
+-
+ 			spin_lock(&qedi_conn->list_lock);
+ 			if (likely(dbg_cmd->io_cmd_in_list)) {
+ 				dbg_cmd->io_cmd_in_list = false;
+@@ -856,7 +837,6 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+ 		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+ 			  "Freeing tid=0x%x for cid=0x%x\n",
+ 			  cqe->itid, qedi_conn->iscsi_conn_id);
+-		qedi_clear_task_idx(qedi_conn->qedi, cqe->itid);
+ 
+ 	} else {
+ 		qedi_get_proto_itt(qedi, cqe->itid, &ptmp_itt);
+@@ -1453,7 +1433,7 @@ abort_ret:
+ 
+ ldel_exit:
+ 	spin_lock_bh(&qedi_conn->tmf_work_lock);
+-	if (!qedi_cmd->list_tmf_work) {
++	if (qedi_cmd->list_tmf_work) {
+ 		list_del_init(&list_work->list);
+ 		qedi_cmd->list_tmf_work = NULL;
+ 		kfree(list_work);
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index 08c05403cd720..f51723e2d5227 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -330,12 +330,22 @@ free_conn:
+ 
+ void qedi_mark_device_missing(struct iscsi_cls_session *cls_session)
+ {
+-	iscsi_block_session(cls_session);
++	struct iscsi_session *session = cls_session->dd_data;
++	struct qedi_conn *qedi_conn = session->leadconn->dd_data;
++
++	spin_lock_bh(&session->frwd_lock);
++	set_bit(QEDI_BLOCK_IO, &qedi_conn->qedi->flags);
++	spin_unlock_bh(&session->frwd_lock);
+ }
+ 
+ void qedi_mark_device_available(struct iscsi_cls_session *cls_session)
+ {
+-	iscsi_unblock_session(cls_session);
++	struct iscsi_session *session = cls_session->dd_data;
++	struct qedi_conn *qedi_conn = session->leadconn->dd_data;
++
++	spin_lock_bh(&session->frwd_lock);
++	clear_bit(QEDI_BLOCK_IO, &qedi_conn->qedi->flags);
++	spin_unlock_bh(&session->frwd_lock);
+ }
+ 
+ static int qedi_bind_conn_to_iscsi_cid(struct qedi_ctx *qedi,
+@@ -772,7 +782,6 @@ static int qedi_mtask_xmit(struct iscsi_conn *conn, struct iscsi_task *task)
+ 	}
+ 
+ 	cmd->conn = conn->dd_data;
+-	cmd->scsi_cmd = NULL;
+ 	return qedi_iscsi_send_generic_request(task);
+ }
+ 
+@@ -783,9 +792,16 @@ static int qedi_task_xmit(struct iscsi_task *task)
+ 	struct qedi_cmd *cmd = task->dd_data;
+ 	struct scsi_cmnd *sc = task->sc;
+ 
++	/* Clear now so in cleanup_task we know it didn't make it */
++	cmd->scsi_cmd = NULL;
++	cmd->task_id = U16_MAX;
++
+ 	if (test_bit(QEDI_IN_SHUTDOWN, &qedi_conn->qedi->flags))
+ 		return -ENODEV;
+ 
++	if (test_bit(QEDI_BLOCK_IO, &qedi_conn->qedi->flags))
++		return -EACCES;
++
+ 	cmd->state = 0;
+ 	cmd->task = NULL;
+ 	cmd->use_slowpath = false;
+@@ -1383,13 +1399,24 @@ static umode_t qedi_attr_is_visible(int param_type, int param)
+ 
+ static void qedi_cleanup_task(struct iscsi_task *task)
+ {
+-	if (!task->sc || task->state == ISCSI_TASK_PENDING) {
++	struct qedi_cmd *cmd;
++
++	if (task->state == ISCSI_TASK_PENDING) {
+ 		QEDI_INFO(NULL, QEDI_LOG_IO, "Returning ref_cnt=%d\n",
+ 			  refcount_read(&task->refcount));
+ 		return;
+ 	}
+ 
+-	qedi_iscsi_unmap_sg_list(task->dd_data);
++	if (task->sc)
++		qedi_iscsi_unmap_sg_list(task->dd_data);
++
++	cmd = task->dd_data;
++	if (cmd->task_id != U16_MAX)
++		qedi_clear_task_idx(iscsi_host_priv(task->conn->session->host),
++				    cmd->task_id);
++
++	cmd->task_id = U16_MAX;
++	cmd->scsi_cmd = NULL;
+ }
+ 
+ struct iscsi_transport qedi_iscsi_transport = {
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 69c5b5ee2169b..b33eff9ea80ba 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -642,7 +642,7 @@ static struct qedi_ctx *qedi_host_alloc(struct pci_dev *pdev)
+ 		goto exit_setup_shost;
+ 	}
+ 
+-	shost->max_id = QEDI_MAX_ISCSI_CONNS_PER_HBA;
++	shost->max_id = QEDI_MAX_ISCSI_CONNS_PER_HBA - 1;
+ 	shost->max_channel = 0;
+ 	shost->max_lun = ~0;
+ 	shost->max_cmd_len = 16;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index a045d00509d5c..d89db29fa829c 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -2072,9 +2072,7 @@ EXPORT_SYMBOL_GPL(scsi_mode_select);
+  *	@sshdr: place to put sense data (or NULL if no sense to be collected).
+  *		must be SCSI_SENSE_BUFFERSIZE big.
+  *
+- *	Returns zero if unsuccessful, or the header offset (either 4
+- *	or 8 depending on whether a six or ten byte command was
+- *	issued) if successful.
++ *	Returns zero if successful, or a negative error number on failure
+  */
+ int
+ scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage,
+@@ -2121,6 +2119,8 @@ scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage,
+ 
+ 	result = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buffer, len,
+ 				  sshdr, timeout, retries, NULL);
++	if (result < 0)
++		return result;
+ 
+ 	/* This code looks awful: what it's doing is making sure an
+ 	 * ILLEGAL REQUEST sense return identifies the actual command
+@@ -2165,13 +2165,15 @@ scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage,
+ 			data->block_descriptor_length = buffer[3];
+ 		}
+ 		data->header_length = header_length;
++		result = 0;
+ 	} else if ((status_byte(result) == CHECK_CONDITION) &&
+ 		   scsi_sense_valid(sshdr) &&
+ 		   sshdr->sense_key == UNIT_ATTENTION && retry_count) {
+ 		retry_count--;
+ 		goto retry;
+ 	}
+-
++	if (result > 0)
++		result = -EIO;
+ 	return result;
+ }
+ EXPORT_SYMBOL(scsi_mode_sense);
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 2735178f15c73..2171dab3e5dc8 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2353,6 +2353,18 @@ int iscsi_destroy_conn(struct iscsi_cls_conn *conn)
+ }
+ EXPORT_SYMBOL_GPL(iscsi_destroy_conn);
+ 
++void iscsi_put_conn(struct iscsi_cls_conn *conn)
++{
++	put_device(&conn->dev);
++}
++EXPORT_SYMBOL_GPL(iscsi_put_conn);
++
++void iscsi_get_conn(struct iscsi_cls_conn *conn)
++{
++	get_device(&conn->dev);
++}
++EXPORT_SYMBOL_GPL(iscsi_get_conn);
++
+ /*
+  * iscsi interface functions
+  */
+diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
+index c9abed8429c9a..4a96fb05731d2 100644
+--- a/drivers/scsi/scsi_transport_sas.c
++++ b/drivers/scsi/scsi_transport_sas.c
+@@ -1229,16 +1229,15 @@ int sas_read_port_mode_page(struct scsi_device *sdev)
+ 	char *buffer = kzalloc(BUF_SIZE, GFP_KERNEL), *msdata;
+ 	struct sas_end_device *rdev = sas_sdev_to_rdev(sdev);
+ 	struct scsi_mode_data mode_data;
+-	int res, error;
++	int error;
+ 
+ 	if (!buffer)
+ 		return -ENOMEM;
+ 
+-	res = scsi_mode_sense(sdev, 1, 0x19, buffer, BUF_SIZE, 30*HZ, 3,
+-			      &mode_data, NULL);
++	error = scsi_mode_sense(sdev, 1, 0x19, buffer, BUF_SIZE, 30*HZ, 3,
++				&mode_data, NULL);
+ 
+-	error = -EINVAL;
+-	if (!scsi_status_is_good(res))
++	if (error)
+ 		goto out;
+ 
+ 	msdata = buffer +  mode_data.header_length +
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 01f87bcab3dd1..f0c0935d79099 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -2687,18 +2687,18 @@ sd_read_write_protect_flag(struct scsi_disk *sdkp, unsigned char *buffer)
+ 		 * 5: Illegal Request, Sense Code 24: Invalid field in
+ 		 * CDB.
+ 		 */
+-		if (!scsi_status_is_good(res))
++		if (res < 0)
+ 			res = sd_do_mode_sense(sdkp, 0, 0, buffer, 4, &data, NULL);
+ 
+ 		/*
+ 		 * Third attempt: ask 255 bytes, as we did earlier.
+ 		 */
+-		if (!scsi_status_is_good(res))
++		if (res < 0)
+ 			res = sd_do_mode_sense(sdkp, 0, 0x3F, buffer, 255,
+ 					       &data, NULL);
+ 	}
+ 
+-	if (!scsi_status_is_good(res)) {
++	if (res < 0) {
+ 		sd_first_printk(KERN_WARNING, sdkp,
+ 			  "Test WP failed, assume Write Enabled\n");
+ 	} else {
+@@ -2759,7 +2759,7 @@ sd_read_cache_type(struct scsi_disk *sdkp, unsigned char *buffer)
+ 	res = sd_do_mode_sense(sdkp, dbd, modepage, buffer, first_len,
+ 			&data, &sshdr);
+ 
+-	if (!scsi_status_is_good(res))
++	if (res < 0)
+ 		goto bad_sense;
+ 
+ 	if (!data.header_length) {
+@@ -2791,7 +2791,7 @@ sd_read_cache_type(struct scsi_disk *sdkp, unsigned char *buffer)
+ 		res = sd_do_mode_sense(sdkp, dbd, modepage, buffer, len,
+ 				&data, &sshdr);
+ 
+-	if (scsi_status_is_good(res)) {
++	if (!res) {
+ 		int offset = data.header_length + data.block_descriptor_length;
+ 
+ 		while (offset < len) {
+@@ -2909,7 +2909,7 @@ static void sd_read_app_tag_own(struct scsi_disk *sdkp, unsigned char *buffer)
+ 	res = scsi_mode_sense(sdp, 1, 0x0a, buffer, 36, SD_TIMEOUT,
+ 			      sdkp->max_retries, &data, &sshdr);
+ 
+-	if (!scsi_status_is_good(res) || !data.header_length ||
++	if (res < 0 || !data.header_length ||
+ 	    data.length < 6) {
+ 		sd_first_printk(KERN_WARNING, sdkp,
+ 			  "getting Control mode page failed, assume no ATO\n");
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index 77961f0583674..726b7048a7674 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -930,7 +930,7 @@ static void get_capabilities(struct scsi_cd *cd)
+ 	rc = scsi_mode_sense(cd->device, 0, 0x2a, buffer, ms_len,
+ 			     SR_TIMEOUT, 3, &data, NULL);
+ 
+-	if (!scsi_status_is_good(rc) || data.length > ms_len ||
++	if (rc < 0 || data.length > ms_len ||
+ 	    data.header_length + data.block_descriptor_length > data.length) {
+ 		/* failed, drive doesn't have capabilities mode page */
+ 		cd->cdi.speed = 1;
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index ded00a89bfc4e..0ee0b80006e05 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -994,17 +994,40 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb,
+ 	struct storvsc_scan_work *wrk;
+ 	void (*process_err_fn)(struct work_struct *work);
+ 	struct hv_host_device *host_dev = shost_priv(host);
+-	bool do_work = false;
+ 
+-	switch (SRB_STATUS(vm_srb->srb_status)) {
+-	case SRB_STATUS_ERROR:
++	/*
++	 * In some situations, Hyper-V sets multiple bits in the
++	 * srb_status, such as ABORTED and ERROR. So process them
++	 * individually, with the most specific bits first.
++	 */
++
++	if (vm_srb->srb_status & SRB_STATUS_INVALID_LUN) {
++		set_host_byte(scmnd, DID_NO_CONNECT);
++		process_err_fn = storvsc_remove_lun;
++		goto do_work;
++	}
++
++	if (vm_srb->srb_status & SRB_STATUS_ABORTED) {
++		if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID &&
++		    /* Capacity data has changed */
++		    (asc == 0x2a) && (ascq == 0x9)) {
++			process_err_fn = storvsc_device_scan;
++			/*
++			 * Retry the I/O that triggered this.
++			 */
++			set_host_byte(scmnd, DID_REQUEUE);
++			goto do_work;
++		}
++	}
++
++	if (vm_srb->srb_status & SRB_STATUS_ERROR) {
+ 		/*
+ 		 * Let upper layer deal with error when
+ 		 * sense message is present.
+ 		 */
+-
+ 		if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID)
+-			break;
++			return;
++
+ 		/*
+ 		 * If there is an error; offline the device since all
+ 		 * error recovery strategies would have already been
+@@ -1017,37 +1040,19 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb,
+ 			set_host_byte(scmnd, DID_PASSTHROUGH);
+ 			break;
+ 		/*
+-		 * On Some Windows hosts TEST_UNIT_READY command can return
+-		 * SRB_STATUS_ERROR, let the upper level code deal with it
+-		 * based on the sense information.
++		 * On some Hyper-V hosts TEST_UNIT_READY command can
++		 * return SRB_STATUS_ERROR. Let the upper level code
++		 * deal with it based on the sense information.
+ 		 */
+ 		case TEST_UNIT_READY:
+ 			break;
+ 		default:
+ 			set_host_byte(scmnd, DID_ERROR);
+ 		}
+-		break;
+-	case SRB_STATUS_INVALID_LUN:
+-		set_host_byte(scmnd, DID_NO_CONNECT);
+-		do_work = true;
+-		process_err_fn = storvsc_remove_lun;
+-		break;
+-	case SRB_STATUS_ABORTED:
+-		if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID &&
+-		    (asc == 0x2a) && (ascq == 0x9)) {
+-			do_work = true;
+-			process_err_fn = storvsc_device_scan;
+-			/*
+-			 * Retry the I/O that triggered this.
+-			 */
+-			set_host_byte(scmnd, DID_REQUEUE);
+-		}
+-		break;
+ 	}
++	return;
+ 
+-	if (!do_work)
+-		return;
+-
++do_work:
+ 	/*
+ 	 * We need to schedule work to process this error; schedule it.
+ 	 */
+diff --git a/drivers/staging/rtl8723bs/hal/odm.h b/drivers/staging/rtl8723bs/hal/odm.h
+index 16e8f66a31714..a8d232245227b 100644
+--- a/drivers/staging/rtl8723bs/hal/odm.h
++++ b/drivers/staging/rtl8723bs/hal/odm.h
+@@ -197,10 +197,7 @@ typedef struct _ODM_RATE_ADAPTIVE {
+ 
+ #define AVG_THERMAL_NUM		8
+ #define IQK_Matrix_REG_NUM	8
+-#define IQK_Matrix_Settings_NUM	(14 + 24 + 21) /*   Channels_2_4G_NUM
+-						* + Channels_5G_20M_NUM
+-						* + Channels_5G
+-						*/
++#define IQK_Matrix_Settings_NUM	14 /* Channels_2_4G_NUM */
+ 
+ #define		DM_Type_ByFW			0
+ #define		DM_Type_ByDriver		1
+diff --git a/drivers/thermal/rcar_gen3_thermal.c b/drivers/thermal/rcar_gen3_thermal.c
+index 0dd47dca3e771..8d724d92d57f4 100644
+--- a/drivers/thermal/rcar_gen3_thermal.c
++++ b/drivers/thermal/rcar_gen3_thermal.c
+@@ -141,7 +141,7 @@ static void rcar_gen3_thermal_calc_coefs(struct rcar_gen3_thermal_tsc *tsc,
+ 	 * Division is not scaled in BSP and if scaled it might overflow
+ 	 * the dividend (4095 * 4095 << 14 > INT_MAX) so keep it unscaled
+ 	 */
+-	tsc->tj_t = (FIXPT_INT((ptat[1] - ptat[2]) * 157)
++	tsc->tj_t = (FIXPT_INT((ptat[1] - ptat[2]) * (ths_tj_1 - TJ_3))
+ 		     / (ptat[0] - ptat[2])) + FIXPT_INT(TJ_3);
+ 
+ 	tsc->coef.a1 = FIXPT_DIV(FIXPT_INT(thcode[1] - thcode[2]),
+diff --git a/drivers/thermal/sprd_thermal.c b/drivers/thermal/sprd_thermal.c
+index 3682edb2f4669..fe06cccf14b38 100644
+--- a/drivers/thermal/sprd_thermal.c
++++ b/drivers/thermal/sprd_thermal.c
+@@ -532,6 +532,7 @@ static const struct of_device_id sprd_thermal_of_match[] = {
+ 	{ .compatible = "sprd,ums512-thermal", .data = &ums512_data },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, sprd_thermal_of_match);
+ 
+ static const struct dev_pm_ops sprd_thermal_pm_ops = {
+ 	SET_SYSTEM_SLEEP_PM_OPS(sprd_thm_suspend, sprd_thm_resume)
+diff --git a/drivers/tty/serial/8250/serial_cs.c b/drivers/tty/serial/8250/serial_cs.c
+index 1d3ec8503cef3..7c3ea68e533e2 100644
+--- a/drivers/tty/serial/8250/serial_cs.c
++++ b/drivers/tty/serial/8250/serial_cs.c
+@@ -306,6 +306,7 @@ static int serial_resume(struct pcmcia_device *link)
+ static int serial_probe(struct pcmcia_device *link)
+ {
+ 	struct serial_info *info;
++	int ret;
+ 
+ 	dev_dbg(&link->dev, "serial_attach()\n");
+ 
+@@ -320,7 +321,15 @@ static int serial_probe(struct pcmcia_device *link)
+ 	if (do_sound)
+ 		link->config_flags |= CONF_ENABLE_SPKR;
+ 
+-	return serial_config(link);
++	ret = serial_config(link);
++	if (ret)
++		goto free_info;
++
++	return 0;
++
++free_info:
++	kfree(info);
++	return ret;
+ }
+ 
+ static void serial_detach(struct pcmcia_device *link)
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index de5ee4aad9f34..2e74c88808db6 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1571,6 +1571,9 @@ static void lpuart_tx_dma_startup(struct lpuart_port *sport)
+ 	u32 uartbaud;
+ 	int ret;
+ 
++	if (uart_console(&sport->port))
++		goto err;
++
+ 	if (!sport->dma_tx_chan)
+ 		goto err;
+ 
+@@ -1600,6 +1603,9 @@ static void lpuart_rx_dma_startup(struct lpuart_port *sport)
+ 	int ret;
+ 	unsigned char cr3;
+ 
++	if (uart_console(&sport->port))
++		goto err;
++
+ 	if (!sport->dma_rx_chan)
+ 		goto err;
+ 
+@@ -2404,6 +2410,9 @@ lpuart32_console_get_options(struct lpuart_port *sport, int *baud,
+ 
+ 	bd = lpuart32_read(&sport->port, UARTBAUD);
+ 	bd &= UARTBAUD_SBR_MASK;
++	if (!bd)
++		return;
++
+ 	sbr = bd;
+ 	uartclk = lpuart_get_baud_clk_rate(sport);
+ 	/*
+diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c
+index 09379db613d8b..7081ab322b402 100644
+--- a/drivers/tty/serial/uartlite.c
++++ b/drivers/tty/serial/uartlite.c
+@@ -505,21 +505,23 @@ static void ulite_console_write(struct console *co, const char *s,
+ 
+ static int ulite_console_setup(struct console *co, char *options)
+ {
+-	struct uart_port *port;
++	struct uart_port *port = NULL;
+ 	int baud = 9600;
+ 	int bits = 8;
+ 	int parity = 'n';
+ 	int flow = 'n';
+ 
+-
+-	port = console_port;
++	if (co->index >= 0 && co->index < ULITE_NR_UARTS)
++		port = ulite_ports + co->index;
+ 
+ 	/* Has the device been initialized yet? */
+-	if (!port->mapbase) {
++	if (!port || !port->mapbase) {
+ 		pr_debug("console on ttyUL%i not present\n", co->index);
+ 		return -ENODEV;
+ 	}
+ 
++	console_port = port;
++
+ 	/* not initialized yet? */
+ 	if (!port->membase) {
+ 		if (ulite_request_port(port))
+@@ -655,17 +657,6 @@ static int ulite_assign(struct device *dev, int id, u32 base, int irq,
+ 
+ 	dev_set_drvdata(dev, port);
+ 
+-#ifdef CONFIG_SERIAL_UARTLITE_CONSOLE
+-	/*
+-	 * If console hasn't been found yet try to assign this port
+-	 * because it is required to be assigned for console setup function.
+-	 * If register_console() don't assign value, then console_port pointer
+-	 * is cleanup.
+-	 */
+-	if (ulite_uart_driver.cons->index == -1)
+-		console_port = port;
+-#endif
+-
+ 	/* Register the port */
+ 	rc = uart_add_one_port(&ulite_uart_driver, port);
+ 	if (rc) {
+@@ -675,12 +666,6 @@ static int ulite_assign(struct device *dev, int id, u32 base, int irq,
+ 		return rc;
+ 	}
+ 
+-#ifdef CONFIG_SERIAL_UARTLITE_CONSOLE
+-	/* This is not port which is used for console that's why clean it up */
+-	if (ulite_uart_driver.cons->index == -1)
+-		console_port = NULL;
+-#endif
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/common/usb-conn-gpio.c b/drivers/usb/common/usb-conn-gpio.c
+index 6c4e3a19f42cb..c9545a4eff664 100644
+--- a/drivers/usb/common/usb-conn-gpio.c
++++ b/drivers/usb/common/usb-conn-gpio.c
+@@ -149,14 +149,32 @@ static int usb_charger_get_property(struct power_supply *psy,
+ 	return 0;
+ }
+ 
+-static int usb_conn_probe(struct platform_device *pdev)
++static int usb_conn_psy_register(struct usb_conn_info *info)
+ {
+-	struct device *dev = &pdev->dev;
+-	struct power_supply_desc *desc;
+-	struct usb_conn_info *info;
++	struct device *dev = info->dev;
++	struct power_supply_desc *desc = &info->desc;
+ 	struct power_supply_config cfg = {
+ 		.of_node = dev->of_node,
+ 	};
++
++	desc->name = "usb-charger";
++	desc->properties = usb_charger_properties;
++	desc->num_properties = ARRAY_SIZE(usb_charger_properties);
++	desc->get_property = usb_charger_get_property;
++	desc->type = POWER_SUPPLY_TYPE_USB;
++	cfg.drv_data = info;
++
++	info->charger = devm_power_supply_register(dev, desc, &cfg);
++	if (IS_ERR(info->charger))
++		dev_err(dev, "Unable to register charger\n");
++
++	return PTR_ERR_OR_ZERO(info->charger);
++}
++
++static int usb_conn_probe(struct platform_device *pdev)
++{
++	struct device *dev = &pdev->dev;
++	struct usb_conn_info *info;
+ 	bool need_vbus = true;
+ 	int ret = 0;
+ 
+@@ -218,6 +236,10 @@ static int usb_conn_probe(struct platform_device *pdev)
+ 		return PTR_ERR(info->role_sw);
+ 	}
+ 
++	ret = usb_conn_psy_register(info);
++	if (ret)
++		goto put_role_sw;
++
+ 	if (info->id_gpiod) {
+ 		info->id_irq = gpiod_to_irq(info->id_gpiod);
+ 		if (info->id_irq < 0) {
+@@ -252,20 +274,6 @@ static int usb_conn_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	desc = &info->desc;
+-	desc->name = "usb-charger";
+-	desc->properties = usb_charger_properties;
+-	desc->num_properties = ARRAY_SIZE(usb_charger_properties);
+-	desc->get_property = usb_charger_get_property;
+-	desc->type = POWER_SUPPLY_TYPE_USB;
+-	cfg.drv_data = info;
+-
+-	info->charger = devm_power_supply_register(dev, desc, &cfg);
+-	if (IS_ERR(info->charger)) {
+-		dev_err(dev, "Unable to register charger\n");
+-		return PTR_ERR(info->charger);
+-	}
+-
+ 	platform_set_drvdata(pdev, info);
+ 
+ 	/* Perform initial detection */
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index e556993081170..a82b3de1a54be 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -88,7 +88,7 @@ static struct usb_interface_descriptor hidg_interface_desc = {
+ static struct hid_descriptor hidg_desc = {
+ 	.bLength			= sizeof hidg_desc,
+ 	.bDescriptorType		= HID_DT_HID,
+-	.bcdHID				= 0x0101,
++	.bcdHID				= cpu_to_le16(0x0101),
+ 	.bCountryCode			= 0x00,
+ 	.bNumDescriptors		= 0x1,
+ 	/*.desc[0].bDescriptorType	= DYNAMIC */
+diff --git a/drivers/usb/gadget/legacy/hid.c b/drivers/usb/gadget/legacy/hid.c
+index c4eda7fe7ab45..5b27d289443fe 100644
+--- a/drivers/usb/gadget/legacy/hid.c
++++ b/drivers/usb/gadget/legacy/hid.c
+@@ -171,8 +171,10 @@ static int hid_bind(struct usb_composite_dev *cdev)
+ 		struct usb_descriptor_header *usb_desc;
+ 
+ 		usb_desc = usb_otg_descriptor_alloc(gadget);
+-		if (!usb_desc)
++		if (!usb_desc) {
++			status = -ENOMEM;
+ 			goto put;
++		}
+ 		usb_otg_descriptor_init(gadget, usb_desc);
+ 		otg_desc[0] = usb_desc;
+ 		otg_desc[1] = NULL;
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 8af30d07f6880..fe7ed3212473d 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -596,8 +596,8 @@ static void cq_destroy(struct mlx5_vdpa_net *ndev, u16 idx)
+ 	mlx5_db_free(ndev->mvdev.mdev, &vcq->db);
+ }
+ 
+-static int umem_size(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq, int num,
+-		     struct mlx5_vdpa_umem **umemp)
++static void set_umem_size(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq, int num,
++			  struct mlx5_vdpa_umem **umemp)
+ {
+ 	struct mlx5_core_dev *mdev = ndev->mvdev.mdev;
+ 	int p_a;
+@@ -620,7 +620,7 @@ static int umem_size(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq
+ 		*umemp = &mvq->umem3;
+ 		break;
+ 	}
+-	return p_a * mvq->num_ent + p_b;
++	(*umemp)->size = p_a * mvq->num_ent + p_b;
+ }
+ 
+ static void umem_frag_buf_free(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_umem *umem)
+@@ -636,15 +636,10 @@ static int create_umem(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
+ 	void *in;
+ 	int err;
+ 	__be64 *pas;
+-	int size;
+ 	struct mlx5_vdpa_umem *umem;
+ 
+-	size = umem_size(ndev, mvq, num, &umem);
+-	if (size < 0)
+-		return size;
+-
+-	umem->size = size;
+-	err = umem_frag_buf_alloc(ndev, umem, size);
++	set_umem_size(ndev, mvq, num, &umem);
++	err = umem_frag_buf_alloc(ndev, umem, umem->size);
+ 	if (err)
+ 		return err;
+ 
+@@ -814,9 +809,9 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
+ 	MLX5_SET(virtio_q, vq_ctx, umem_1_id, mvq->umem1.id);
+ 	MLX5_SET(virtio_q, vq_ctx, umem_1_size, mvq->umem1.size);
+ 	MLX5_SET(virtio_q, vq_ctx, umem_2_id, mvq->umem2.id);
+-	MLX5_SET(virtio_q, vq_ctx, umem_2_size, mvq->umem1.size);
++	MLX5_SET(virtio_q, vq_ctx, umem_2_size, mvq->umem2.size);
+ 	MLX5_SET(virtio_q, vq_ctx, umem_3_id, mvq->umem3.id);
+-	MLX5_SET(virtio_q, vq_ctx, umem_3_size, mvq->umem1.size);
++	MLX5_SET(virtio_q, vq_ctx, umem_3_size, mvq->umem3.size);
+ 	MLX5_SET(virtio_q, vq_ctx, pd, ndev->mvdev.res.pdn);
+ 	if (MLX5_CAP_DEV_VDPA_EMULATION(ndev->mvdev.mdev, eth_frame_offload_type))
+ 		MLX5_SET(virtio_q, vq_ctx, virtio_version_1_0, 1);
+@@ -1757,6 +1752,14 @@ out:
+ 	mutex_unlock(&ndev->reslock);
+ }
+ 
++static void clear_vqs_ready(struct mlx5_vdpa_net *ndev)
++{
++	int i;
++
++	for (i = 0; i < ndev->mvdev.max_vqs; i++)
++		ndev->vqs[i].ready = false;
++}
++
+ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
+ {
+ 	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
+@@ -1767,6 +1770,7 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
+ 	if (!status) {
+ 		mlx5_vdpa_info(mvdev, "performing device reset\n");
+ 		teardown_driver(ndev);
++		clear_vqs_ready(ndev);
+ 		mlx5_vdpa_destroy_mr(&ndev->mvdev);
+ 		ndev->mvdev.status = 0;
+ 		ndev->mvdev.mlx_features = 0;
+diff --git a/drivers/video/backlight/lm3630a_bl.c b/drivers/video/backlight/lm3630a_bl.c
+index 662029d6a3dc9..419b0334cf087 100644
+--- a/drivers/video/backlight/lm3630a_bl.c
++++ b/drivers/video/backlight/lm3630a_bl.c
+@@ -190,7 +190,7 @@ static int lm3630a_bank_a_update_status(struct backlight_device *bl)
+ 	if ((pwm_ctrl & LM3630A_PWM_BANK_A) != 0) {
+ 		lm3630a_pwm_ctrl(pchip, bl->props.brightness,
+ 				 bl->props.max_brightness);
+-		return bl->props.brightness;
++		return 0;
+ 	}
+ 
+ 	/* disable sleep */
+@@ -210,8 +210,8 @@ static int lm3630a_bank_a_update_status(struct backlight_device *bl)
+ 	return 0;
+ 
+ out_i2c_err:
+-	dev_err(pchip->dev, "i2c failed to access\n");
+-	return bl->props.brightness;
++	dev_err(pchip->dev, "i2c failed to access (%pe)\n", ERR_PTR(ret));
++	return ret;
+ }
+ 
+ static int lm3630a_bank_a_get_brightness(struct backlight_device *bl)
+@@ -267,7 +267,7 @@ static int lm3630a_bank_b_update_status(struct backlight_device *bl)
+ 	if ((pwm_ctrl & LM3630A_PWM_BANK_B) != 0) {
+ 		lm3630a_pwm_ctrl(pchip, bl->props.brightness,
+ 				 bl->props.max_brightness);
+-		return bl->props.brightness;
++		return 0;
+ 	}
+ 
+ 	/* disable sleep */
+@@ -287,8 +287,8 @@ static int lm3630a_bank_b_update_status(struct backlight_device *bl)
+ 	return 0;
+ 
+ out_i2c_err:
+-	dev_err(pchip->dev, "i2c failed to access REG_CTRL\n");
+-	return bl->props.brightness;
++	dev_err(pchip->dev, "i2c failed to access (%pe)\n", ERR_PTR(ret));
++	return ret;
+ }
+ 
+ static int lm3630a_bank_b_get_brightness(struct backlight_device *bl)
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 8268bbee8cae1..98030d75833b8 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -970,13 +970,11 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ 		fb_var_to_videomode(&mode2, &info->var);
+ 		/* make sure we don't delete the videomode of current var */
+ 		ret = fb_mode_is_equal(&mode1, &mode2);
+-
+-		if (!ret)
+-			fbcon_mode_deleted(info, &mode1);
+-
+-		if (!ret)
+-			fb_delete_videomode(&mode1, &info->modelist);
+-
++		if (!ret) {
++			ret = fbcon_mode_deleted(info, &mode1);
++			if (!ret)
++				fb_delete_videomode(&mode1, &info->modelist);
++		}
+ 
+ 		return ret ? -EINVAL : 0;
+ 	}
+diff --git a/drivers/w1/slaves/w1_ds2438.c b/drivers/w1/slaves/w1_ds2438.c
+index 5cfb0ae23e916..5698566b0ee01 100644
+--- a/drivers/w1/slaves/w1_ds2438.c
++++ b/drivers/w1/slaves/w1_ds2438.c
+@@ -62,13 +62,13 @@ static int w1_ds2438_get_page(struct w1_slave *sl, int pageno, u8 *buf)
+ 		if (w1_reset_select_slave(sl))
+ 			continue;
+ 		w1_buf[0] = W1_DS2438_RECALL_MEMORY;
+-		w1_buf[1] = 0x00;
++		w1_buf[1] = (u8)pageno;
+ 		w1_write_block(sl->master, w1_buf, 2);
+ 
+ 		if (w1_reset_select_slave(sl))
+ 			continue;
+ 		w1_buf[0] = W1_DS2438_READ_SCRATCH;
+-		w1_buf[1] = 0x00;
++		w1_buf[1] = (u8)pageno;
+ 		w1_write_block(sl->master, w1_buf, 2);
+ 
+ 		count = w1_read_block(sl->master, buf, DS2438_PAGE_SIZE + 1);
+diff --git a/drivers/watchdog/aspeed_wdt.c b/drivers/watchdog/aspeed_wdt.c
+index 7e00960651fa2..507fd815d7679 100644
+--- a/drivers/watchdog/aspeed_wdt.c
++++ b/drivers/watchdog/aspeed_wdt.c
+@@ -147,7 +147,7 @@ static int aspeed_wdt_set_timeout(struct watchdog_device *wdd,
+ 
+ 	wdd->timeout = timeout;
+ 
+-	actual = min(timeout, wdd->max_hw_heartbeat_ms * 1000);
++	actual = min(timeout, wdd->max_hw_heartbeat_ms / 1000);
+ 
+ 	writel(actual * WDT_RATE_1MHZ, wdt->base + WDT_RELOAD_VALUE);
+ 	writel(WDT_RESTART_MAGIC, wdt->base + WDT_RESTART);
+diff --git a/drivers/watchdog/iTCO_wdt.c b/drivers/watchdog/iTCO_wdt.c
+index a370a185a41c4..519a539eeb9e8 100644
+--- a/drivers/watchdog/iTCO_wdt.c
++++ b/drivers/watchdog/iTCO_wdt.c
+@@ -73,6 +73,8 @@
+ #define TCOBASE(p)	((p)->tco_res->start)
+ /* SMI Control and Enable Register */
+ #define SMI_EN(p)	((p)->smi_res->start)
++#define TCO_EN		(1 << 13)
++#define GBL_SMI_EN	(1 << 0)
+ 
+ #define TCO_RLD(p)	(TCOBASE(p) + 0x00) /* TCO Timer Reload/Curr. Value */
+ #define TCOv1_TMR(p)	(TCOBASE(p) + 0x01) /* TCOv1 Timer Initial Value*/
+@@ -357,8 +359,12 @@ static int iTCO_wdt_set_timeout(struct watchdog_device *wd_dev, unsigned int t)
+ 
+ 	tmrval = seconds_to_ticks(p, t);
+ 
+-	/* For TCO v1 the timer counts down twice before rebooting */
+-	if (p->iTCO_version == 1)
++	/*
++	 * If TCO SMIs are off, the timer counts down twice before rebooting.
++	 * Otherwise, the BIOS generally reboots when the SMI triggers.
++	 */
++	if (p->smi_res &&
++	    (SMI_EN(p) & (TCO_EN | GBL_SMI_EN)) != (TCO_EN | GBL_SMI_EN))
+ 		tmrval /= 2;
+ 
+ 	/* from the specs: */
+@@ -523,7 +529,7 @@ static int iTCO_wdt_probe(struct platform_device *pdev)
+ 		 * Disables TCO logic generating an SMI#
+ 		 */
+ 		val32 = inl(SMI_EN(p));
+-		val32 &= 0xffffdfff;	/* Turn off SMI clearing watchdog */
++		val32 &= ~TCO_EN;	/* Turn off SMI clearing watchdog */
+ 		outl(val32, SMI_EN(p));
+ 	}
+ 
+diff --git a/drivers/watchdog/imx_sc_wdt.c b/drivers/watchdog/imx_sc_wdt.c
+index e9ee22a7cb456..8ac021748d160 100644
+--- a/drivers/watchdog/imx_sc_wdt.c
++++ b/drivers/watchdog/imx_sc_wdt.c
+@@ -183,16 +183,12 @@ static int imx_sc_wdt_probe(struct platform_device *pdev)
+ 	watchdog_stop_on_reboot(wdog);
+ 	watchdog_stop_on_unregister(wdog);
+ 
+-	ret = devm_watchdog_register_device(dev, wdog);
+-	if (ret)
+-		return ret;
+-
+ 	ret = imx_scu_irq_group_enable(SC_IRQ_GROUP_WDOG,
+ 				       SC_IRQ_WDOG,
+ 				       true);
+ 	if (ret) {
+ 		dev_warn(dev, "Enable irq failed, pretimeout NOT supported\n");
+-		return 0;
++		goto register_device;
+ 	}
+ 
+ 	imx_sc_wdd->wdt_notifier.notifier_call = imx_sc_wdt_notify;
+@@ -203,7 +199,7 @@ static int imx_sc_wdt_probe(struct platform_device *pdev)
+ 					 false);
+ 		dev_warn(dev,
+ 			 "Register irq notifier failed, pretimeout NOT supported\n");
+-		return 0;
++		goto register_device;
+ 	}
+ 
+ 	ret = devm_add_action_or_reset(dev, imx_sc_wdt_action,
+@@ -213,7 +209,8 @@ static int imx_sc_wdt_probe(struct platform_device *pdev)
+ 	else
+ 		dev_warn(dev, "Add action failed, pretimeout NOT supported\n");
+ 
+-	return 0;
++register_device:
++	return devm_watchdog_register_device(dev, wdog);
+ }
+ 
+ static int __maybe_unused imx_sc_wdt_suspend(struct device *dev)
+diff --git a/drivers/watchdog/jz4740_wdt.c b/drivers/watchdog/jz4740_wdt.c
+index bdf9564efa29e..395bde79e2920 100644
+--- a/drivers/watchdog/jz4740_wdt.c
++++ b/drivers/watchdog/jz4740_wdt.c
+@@ -176,9 +176,9 @@ static int jz4740_wdt_probe(struct platform_device *pdev)
+ 	watchdog_set_drvdata(jz4740_wdt, drvdata);
+ 
+ 	drvdata->map = device_node_to_regmap(dev->parent->of_node);
+-	if (!drvdata->map) {
++	if (IS_ERR(drvdata->map)) {
+ 		dev_err(dev, "regmap not found\n");
+-		return -EINVAL;
++		return PTR_ERR(drvdata->map);
+ 	}
+ 
+ 	return devm_watchdog_register_device(dev, &drvdata->wdt);
+diff --git a/drivers/watchdog/lpc18xx_wdt.c b/drivers/watchdog/lpc18xx_wdt.c
+index 78cf11c949416..60b6d74f267dd 100644
+--- a/drivers/watchdog/lpc18xx_wdt.c
++++ b/drivers/watchdog/lpc18xx_wdt.c
+@@ -292,7 +292,7 @@ static int lpc18xx_wdt_remove(struct platform_device *pdev)
+ 	struct lpc18xx_wdt_dev *lpc18xx_wdt = platform_get_drvdata(pdev);
+ 
+ 	dev_warn(&pdev->dev, "I quit now, hardware will probably reboot!\n");
+-	del_timer(&lpc18xx_wdt->timer);
++	del_timer_sync(&lpc18xx_wdt->timer);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/watchdog/sbc60xxwdt.c b/drivers/watchdog/sbc60xxwdt.c
+index a947a63fb44ae..7b974802dfc7c 100644
+--- a/drivers/watchdog/sbc60xxwdt.c
++++ b/drivers/watchdog/sbc60xxwdt.c
+@@ -146,7 +146,7 @@ static void wdt_startup(void)
+ static void wdt_turnoff(void)
+ {
+ 	/* Stop the timer */
+-	del_timer(&timer);
++	del_timer_sync(&timer);
+ 	inb_p(wdt_stop);
+ 	pr_info("Watchdog timer is now disabled...\n");
+ }
+diff --git a/drivers/watchdog/sc520_wdt.c b/drivers/watchdog/sc520_wdt.c
+index e66e6b905964b..ca65468f4b9ce 100644
+--- a/drivers/watchdog/sc520_wdt.c
++++ b/drivers/watchdog/sc520_wdt.c
+@@ -186,7 +186,7 @@ static int wdt_startup(void)
+ static int wdt_turnoff(void)
+ {
+ 	/* Stop the timer */
+-	del_timer(&timer);
++	del_timer_sync(&timer);
+ 
+ 	/* Stop the watchdog */
+ 	wdt_config(0);
+diff --git a/drivers/watchdog/w83877f_wdt.c b/drivers/watchdog/w83877f_wdt.c
+index 5772cc5d37804..f2650863fd027 100644
+--- a/drivers/watchdog/w83877f_wdt.c
++++ b/drivers/watchdog/w83877f_wdt.c
+@@ -166,7 +166,7 @@ static void wdt_startup(void)
+ static void wdt_turnoff(void)
+ {
+ 	/* Stop the timer */
+-	del_timer(&timer);
++	del_timer_sync(&timer);
+ 
+ 	wdt_change(WDT_DISABLE);
+ 
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index 8b0507f69c156..3465ff95cb89f 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -78,10 +78,6 @@ static int ceph_set_page_dirty(struct page *page)
+ 	struct inode *inode;
+ 	struct ceph_inode_info *ci;
+ 	struct ceph_snap_context *snapc;
+-	int ret;
+-
+-	if (unlikely(!mapping))
+-		return !TestSetPageDirty(page);
+ 
+ 	if (PageDirty(page)) {
+ 		dout("%p set_page_dirty %p idx %lu -- already dirty\n",
+@@ -127,11 +123,7 @@ static int ceph_set_page_dirty(struct page *page)
+ 	page->private = (unsigned long)snapc;
+ 	SetPagePrivate(page);
+ 
+-	ret = __set_page_dirty_nobuffers(page);
+-	WARN_ON(!PageLocked(page));
+-	WARN_ON(!page->mapping);
+-
+-	return ret;
++	return __set_page_dirty_nobuffers(page);
+ }
+ 
+ /*
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index fb7088d57e46f..8ffe8063e42c1 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -5352,7 +5352,8 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 	if (!tree)
+ 		return -ENOMEM;
+ 
+-	if (!tcon->dfs_path) {
++	/* If it is not dfs or there was no cached dfs referral, then reconnect to same share */
++	if (!tcon->dfs_path || dfs_cache_noreq_find(tcon->dfs_path + 1, &ref, &tl)) {
+ 		if (tcon->ipc) {
+ 			scnprintf(tree, MAX_TREE_SIZE, "\\\\%s\\IPC$", server->hostname);
+ 			rc = ops->tree_connect(xid, tcon->ses, tree, tcon, nlsc);
+@@ -5362,9 +5363,6 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 		goto out;
+ 	}
+ 
+-	rc = dfs_cache_noreq_find(tcon->dfs_path + 1, &ref, &tl);
+-	if (rc)
+-		goto out;
+ 	isroot = ref.server_type == DFS_TYPE_ROOT;
+ 	free_dfs_info_param(&ref);
+ 
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 9b38cef4d50fe..e02affb5c0e79 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1801,6 +1801,7 @@ static void init_atgc_management(struct f2fs_sb_info *sbi)
+ 	am->candidate_ratio = DEF_GC_THREAD_CANDIDATE_RATIO;
+ 	am->max_candidate_count = DEF_GC_THREAD_MAX_CANDIDATE_COUNT;
+ 	am->age_weight = DEF_GC_THREAD_AGE_WEIGHT;
++	am->age_threshold = DEF_GC_THREAD_AGE_THRESHOLD;
+ }
+ 
+ void f2fs_build_gc_manager(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 5f7ab4f113224..17d0e5f4efec8 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -153,7 +153,8 @@ fail_drop:
+ 	return ERR_PTR(err);
+ }
+ 
+-static inline int is_extension_exist(const unsigned char *s, const char *sub)
++static inline int is_extension_exist(const unsigned char *s, const char *sub,
++						bool tmp_ext)
+ {
+ 	size_t slen = strlen(s);
+ 	size_t sublen = strlen(sub);
+@@ -169,6 +170,13 @@ static inline int is_extension_exist(const unsigned char *s, const char *sub)
+ 	if (slen < sublen + 2)
+ 		return 0;
+ 
++	if (!tmp_ext) {
++		/* file has no temp extension */
++		if (s[slen - sublen - 1] != '.')
++			return 0;
++		return !strncasecmp(s + slen - sublen, sub, sublen);
++	}
++
+ 	for (i = 1; i < slen - sublen; i++) {
+ 		if (s[i] != '.')
+ 			continue;
+@@ -194,7 +202,7 @@ static inline void set_file_temperature(struct f2fs_sb_info *sbi, struct inode *
+ 	hot_count = sbi->raw_super->hot_ext_count;
+ 
+ 	for (i = 0; i < cold_count + hot_count; i++) {
+-		if (is_extension_exist(name, extlist[i]))
++		if (is_extension_exist(name, extlist[i], true))
+ 			break;
+ 	}
+ 
+@@ -295,7 +303,7 @@ static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode,
+ 	hot_count = sbi->raw_super->hot_ext_count;
+ 
+ 	for (i = cold_count; i < cold_count + hot_count; i++) {
+-		if (is_extension_exist(name, extlist[i])) {
++		if (is_extension_exist(name, extlist[i], false)) {
+ 			up_read(&sbi->sb_lock);
+ 			return;
+ 		}
+@@ -306,7 +314,7 @@ static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode,
+ 	ext = F2FS_OPTION(sbi).extensions;
+ 
+ 	for (i = 0; i < ext_cnt; i++) {
+-		if (!is_extension_exist(name, ext[i]))
++		if (!is_extension_exist(name, ext[i], false))
+ 			continue;
+ 
+ 		set_compress_context(inode);
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 4af02719bb14a..c529880678878 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -4122,4 +4122,5 @@ module_exit(exit_f2fs_fs)
+ MODULE_AUTHOR("Samsung Electronics's Praesto Team");
+ MODULE_DESCRIPTION("Flash Friendly File System");
+ MODULE_LICENSE("GPL");
++MODULE_SOFTDEP("pre: crc32");
+ 
+diff --git a/fs/jfs/jfs_logmgr.c b/fs/jfs/jfs_logmgr.c
+index 9330eff210e0c..78fd136ac13b9 100644
+--- a/fs/jfs/jfs_logmgr.c
++++ b/fs/jfs/jfs_logmgr.c
+@@ -1324,6 +1324,7 @@ int lmLogInit(struct jfs_log * log)
+ 		} else {
+ 			if (!uuid_equal(&logsuper->uuid, &log->uuid)) {
+ 				jfs_warn("wrong uuid on JFS log device");
++				rc = -EINVAL;
+ 				goto errout20;
+ 			}
+ 			log->size = le32_to_cpu(logsuper->size);
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 04bf8066980c1..d6ac2c4f88b6b 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -75,6 +75,13 @@ void nfs_mark_delegation_referenced(struct nfs_delegation *delegation)
+ 	set_bit(NFS_DELEGATION_REFERENCED, &delegation->flags);
+ }
+ 
++static void nfs_mark_return_delegation(struct nfs_server *server,
++				       struct nfs_delegation *delegation)
++{
++	set_bit(NFS_DELEGATION_RETURN, &delegation->flags);
++	set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state);
++}
++
+ static bool
+ nfs4_is_valid_delegation(const struct nfs_delegation *delegation,
+ 		fmode_t flags)
+@@ -293,6 +300,7 @@ nfs_start_delegation_return_locked(struct nfs_inode *nfsi)
+ 		goto out;
+ 	spin_lock(&delegation->lock);
+ 	if (!test_and_set_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) {
++		clear_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags);
+ 		/* Refcount matched in nfs_end_delegation_return() */
+ 		ret = nfs_get_delegation(delegation);
+ 	}
+@@ -314,16 +322,17 @@ nfs_start_delegation_return(struct nfs_inode *nfsi)
+ 	return delegation;
+ }
+ 
+-static void
+-nfs_abort_delegation_return(struct nfs_delegation *delegation,
+-		struct nfs_client *clp)
++static void nfs_abort_delegation_return(struct nfs_delegation *delegation,
++					struct nfs_client *clp, int err)
+ {
+ 
+ 	spin_lock(&delegation->lock);
+ 	clear_bit(NFS_DELEGATION_RETURNING, &delegation->flags);
+-	set_bit(NFS_DELEGATION_RETURN, &delegation->flags);
++	if (err == -EAGAIN) {
++		set_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags);
++		set_bit(NFS4CLNT_DELEGRETURN_DELAYED, &clp->cl_state);
++	}
+ 	spin_unlock(&delegation->lock);
+-	set_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state);
+ }
+ 
+ static struct nfs_delegation *
+@@ -528,7 +537,7 @@ static int nfs_end_delegation_return(struct inode *inode, struct nfs_delegation
+ 	} while (err == 0);
+ 
+ 	if (err) {
+-		nfs_abort_delegation_return(delegation, clp);
++		nfs_abort_delegation_return(delegation, clp, err);
+ 		goto out;
+ 	}
+ 
+@@ -557,6 +566,7 @@ static bool nfs_delegation_need_return(struct nfs_delegation *delegation)
+ 	if (ret)
+ 		clear_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags);
+ 	if (test_bit(NFS_DELEGATION_RETURNING, &delegation->flags) ||
++	    test_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags) ||
+ 	    test_bit(NFS_DELEGATION_REVOKED, &delegation->flags))
+ 		ret = false;
+ 
+@@ -636,6 +646,38 @@ out:
+ 	return err;
+ }
+ 
++static bool nfs_server_clear_delayed_delegations(struct nfs_server *server)
++{
++	struct nfs_delegation *d;
++	bool ret = false;
++
++	list_for_each_entry_rcu (d, &server->delegations, super_list) {
++		if (!test_bit(NFS_DELEGATION_RETURN_DELAYED, &d->flags))
++			continue;
++		nfs_mark_return_delegation(server, d);
++		clear_bit(NFS_DELEGATION_RETURN_DELAYED, &d->flags);
++		ret = true;
++	}
++	return ret;
++}
++
++static bool nfs_client_clear_delayed_delegations(struct nfs_client *clp)
++{
++	struct nfs_server *server;
++	bool ret = false;
++
++	if (!test_and_clear_bit(NFS4CLNT_DELEGRETURN_DELAYED, &clp->cl_state))
++		goto out;
++	rcu_read_lock();
++	list_for_each_entry_rcu (server, &clp->cl_superblocks, client_link) {
++		if (nfs_server_clear_delayed_delegations(server))
++			ret = true;
++	}
++	rcu_read_unlock();
++out:
++	return ret;
++}
++
+ /**
+  * nfs_client_return_marked_delegations - return previously marked delegations
+  * @clp: nfs_client to process
+@@ -648,8 +690,14 @@ out:
+  */
+ int nfs_client_return_marked_delegations(struct nfs_client *clp)
+ {
+-	return nfs_client_for_each_server(clp,
+-			nfs_server_return_marked_delegations, NULL);
++	int err = nfs_client_for_each_server(
++		clp, nfs_server_return_marked_delegations, NULL);
++	if (err)
++		return err;
++	/* If a return was delayed, sleep to prevent hard looping */
++	if (nfs_client_clear_delayed_delegations(clp))
++		ssleep(1);
++	return 0;
+ }
+ 
+ /**
+@@ -764,13 +812,6 @@ static void nfs_mark_return_if_closed_delegation(struct nfs_server *server,
+ 	set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state);
+ }
+ 
+-static void nfs_mark_return_delegation(struct nfs_server *server,
+-		struct nfs_delegation *delegation)
+-{
+-	set_bit(NFS_DELEGATION_RETURN, &delegation->flags);
+-	set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state);
+-}
+-
+ static bool nfs_server_mark_return_all_delegations(struct nfs_server *server)
+ {
+ 	struct nfs_delegation *delegation;
+diff --git a/fs/nfs/delegation.h b/fs/nfs/delegation.h
+index 9b00a0b7f8321..26f57a99da84e 100644
+--- a/fs/nfs/delegation.h
++++ b/fs/nfs/delegation.h
+@@ -36,6 +36,7 @@ enum {
+ 	NFS_DELEGATION_REVOKED,
+ 	NFS_DELEGATION_TEST_EXPIRED,
+ 	NFS_DELEGATION_INODE_FREEING,
++	NFS_DELEGATION_RETURN_DELAYED,
+ };
+ 
+ int nfs_inode_set_delegation(struct inode *inode, const struct cred *cred,
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 2d30a4da49fa0..2e894fec036b0 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -700,8 +700,8 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ {
+ 	struct nfs_direct_req *dreq = hdr->dreq;
+ 	struct nfs_commit_info cinfo;
+-	bool request_commit = false;
+ 	struct nfs_page *req = nfs_list_entry(hdr->pages.next);
++	int flags = NFS_ODIRECT_DONE;
+ 
+ 	nfs_init_cinfo_from_dreq(&cinfo, dreq);
+ 
+@@ -713,15 +713,9 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ 
+ 	nfs_direct_count_bytes(dreq, hdr);
+ 	if (hdr->good_bytes != 0 && nfs_write_need_commit(hdr)) {
+-		switch (dreq->flags) {
+-		case 0:
++		if (!dreq->flags)
+ 			dreq->flags = NFS_ODIRECT_DO_COMMIT;
+-			request_commit = true;
+-			break;
+-		case NFS_ODIRECT_RESCHED_WRITES:
+-		case NFS_ODIRECT_DO_COMMIT:
+-			request_commit = true;
+-		}
++		flags = dreq->flags;
+ 	}
+ 	spin_unlock(&dreq->lock);
+ 
+@@ -729,12 +723,15 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ 
+ 		req = nfs_list_entry(hdr->pages.next);
+ 		nfs_list_remove_request(req);
+-		if (request_commit) {
++		if (flags == NFS_ODIRECT_DO_COMMIT) {
+ 			kref_get(&req->wb_kref);
+ 			memcpy(&req->wb_verf, &hdr->verf.verifier,
+ 			       sizeof(req->wb_verf));
+ 			nfs_mark_request_commit(req, hdr->lseg, &cinfo,
+ 				hdr->ds_commit_idx);
++		} else if (flags == NFS_ODIRECT_RESCHED_WRITES) {
++			kref_get(&req->wb_kref);
++			nfs_mark_request_commit(req, NULL, &cinfo, 0);
+ 		}
+ 		nfs_unlock_and_release_request(req);
+ 	}
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index dc2cbca98fb0d..9811880470a07 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1064,6 +1064,7 @@ EXPORT_SYMBOL_GPL(nfs_inode_attach_open_context);
+ void nfs_file_set_open_context(struct file *filp, struct nfs_open_context *ctx)
+ {
+ 	filp->private_data = get_nfs_open_context(ctx);
++	set_bit(NFS_CONTEXT_FILE_OPEN, &ctx->flags);
+ 	if (list_empty(&ctx->list))
+ 		nfs_inode_attach_open_context(ctx);
+ }
+@@ -1083,6 +1084,8 @@ struct nfs_open_context *nfs_find_open_context(struct inode *inode, const struct
+ 			continue;
+ 		if ((pos->mode & (FMODE_READ|FMODE_WRITE)) != mode)
+ 			continue;
++		if (!test_bit(NFS_CONTEXT_FILE_OPEN, &pos->flags))
++			continue;
+ 		ctx = get_nfs_open_context(pos);
+ 		if (ctx)
+ 			break;
+@@ -1098,6 +1101,7 @@ void nfs_file_clear_open_context(struct file *filp)
+ 	if (ctx) {
+ 		struct inode *inode = d_inode(ctx->dentry);
+ 
++		clear_bit(NFS_CONTEXT_FILE_OPEN, &ctx->flags);
+ 		/*
+ 		 * We fatal error on write before. Try to writeback
+ 		 * every page again.
+diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c
+index 2397ceedba8af..e1491def7124f 100644
+--- a/fs/nfs/nfs3proc.c
++++ b/fs/nfs/nfs3proc.c
+@@ -360,7 +360,7 @@ nfs3_proc_create(struct inode *dir, struct dentry *dentry, struct iattr *sattr,
+ 				break;
+ 
+ 			case NFS3_CREATE_UNCHECKED:
+-				goto out;
++				goto out_release_acls;
+ 		}
+ 		nfs_fattr_init(data->res.dir_attr);
+ 		nfs_fattr_init(data->res.fattr);
+@@ -727,7 +727,7 @@ nfs3_proc_mknod(struct inode *dir, struct dentry *dentry, struct iattr *sattr,
+ 		break;
+ 	default:
+ 		status = -EINVAL;
+-		goto out;
++		goto out_release_acls;
+ 	}
+ 
+ 	d_alias = nfs3_do_create(dir, dentry, data);
+diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
+index 543d916f79abb..3e344bec3647b 100644
+--- a/fs/nfs/nfs4_fs.h
++++ b/fs/nfs/nfs4_fs.h
+@@ -45,6 +45,7 @@ enum nfs4_client_state {
+ 	NFS4CLNT_RECALL_RUNNING,
+ 	NFS4CLNT_RECALL_ANY_LAYOUT_READ,
+ 	NFS4CLNT_RECALL_ANY_LAYOUT_RW,
++	NFS4CLNT_DELEGRETURN_DELAYED,
+ };
+ 
+ #define NFS4_RENEW_TIMEOUT		0x01
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index 7491323a58207..6d74f2e2de461 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -197,8 +197,11 @@ void nfs40_shutdown_client(struct nfs_client *clp)
+ 
+ struct nfs_client *nfs4_alloc_client(const struct nfs_client_initdata *cl_init)
+ {
+-	int err;
++	char buf[INET6_ADDRSTRLEN + 1];
++	const char *ip_addr = cl_init->ip_addr;
+ 	struct nfs_client *clp = nfs_alloc_client(cl_init);
++	int err;
++
+ 	if (IS_ERR(clp))
+ 		return clp;
+ 
+@@ -222,6 +225,44 @@ struct nfs_client *nfs4_alloc_client(const struct nfs_client_initdata *cl_init)
+ 	init_waitqueue_head(&clp->cl_lock_waitq);
+ #endif
+ 	INIT_LIST_HEAD(&clp->pending_cb_stateids);
++
++	if (cl_init->minorversion != 0)
++		__set_bit(NFS_CS_INFINITE_SLOTS, &clp->cl_flags);
++	__set_bit(NFS_CS_DISCRTRY, &clp->cl_flags);
++	__set_bit(NFS_CS_NO_RETRANS_TIMEOUT, &clp->cl_flags);
++
++	/*
++	 * Set up the connection to the server before we add add to the
++	 * global list.
++	 */
++	err = nfs_create_rpc_client(clp, cl_init, RPC_AUTH_GSS_KRB5I);
++	if (err == -EINVAL)
++		err = nfs_create_rpc_client(clp, cl_init, RPC_AUTH_UNIX);
++	if (err < 0)
++		goto error;
++
++	/* If no clientaddr= option was specified, find a usable cb address */
++	if (ip_addr == NULL) {
++		struct sockaddr_storage cb_addr;
++		struct sockaddr *sap = (struct sockaddr *)&cb_addr;
++
++		err = rpc_localaddr(clp->cl_rpcclient, sap, sizeof(cb_addr));
++		if (err < 0)
++			goto error;
++		err = rpc_ntop(sap, buf, sizeof(buf));
++		if (err < 0)
++			goto error;
++		ip_addr = (const char *)buf;
++	}
++	strlcpy(clp->cl_ipaddr, ip_addr, sizeof(clp->cl_ipaddr));
++
++	err = nfs_idmap_new(clp);
++	if (err < 0) {
++		dprintk("%s: failed to create idmapper. Error = %d\n",
++			__func__, err);
++		goto error;
++	}
++	__set_bit(NFS_CS_IDMAP, &clp->cl_res_state);
+ 	return clp;
+ 
+ error:
+@@ -372,8 +413,6 @@ static int nfs4_init_client_minor_version(struct nfs_client *clp)
+ struct nfs_client *nfs4_init_client(struct nfs_client *clp,
+ 				    const struct nfs_client_initdata *cl_init)
+ {
+-	char buf[INET6_ADDRSTRLEN + 1];
+-	const char *ip_addr = cl_init->ip_addr;
+ 	struct nfs_client *old;
+ 	int error;
+ 
+@@ -381,43 +420,6 @@ struct nfs_client *nfs4_init_client(struct nfs_client *clp,
+ 		/* the client is initialised already */
+ 		return clp;
+ 
+-	/* Check NFS protocol revision and initialize RPC op vector */
+-	clp->rpc_ops = &nfs_v4_clientops;
+-
+-	if (clp->cl_minorversion != 0)
+-		__set_bit(NFS_CS_INFINITE_SLOTS, &clp->cl_flags);
+-	__set_bit(NFS_CS_DISCRTRY, &clp->cl_flags);
+-	__set_bit(NFS_CS_NO_RETRANS_TIMEOUT, &clp->cl_flags);
+-
+-	error = nfs_create_rpc_client(clp, cl_init, RPC_AUTH_GSS_KRB5I);
+-	if (error == -EINVAL)
+-		error = nfs_create_rpc_client(clp, cl_init, RPC_AUTH_UNIX);
+-	if (error < 0)
+-		goto error;
+-
+-	/* If no clientaddr= option was specified, find a usable cb address */
+-	if (ip_addr == NULL) {
+-		struct sockaddr_storage cb_addr;
+-		struct sockaddr *sap = (struct sockaddr *)&cb_addr;
+-
+-		error = rpc_localaddr(clp->cl_rpcclient, sap, sizeof(cb_addr));
+-		if (error < 0)
+-			goto error;
+-		error = rpc_ntop(sap, buf, sizeof(buf));
+-		if (error < 0)
+-			goto error;
+-		ip_addr = (const char *)buf;
+-	}
+-	strlcpy(clp->cl_ipaddr, ip_addr, sizeof(clp->cl_ipaddr));
+-
+-	error = nfs_idmap_new(clp);
+-	if (error < 0) {
+-		dprintk("%s: failed to create idmapper. Error = %d\n",
+-			__func__, error);
+-		goto error;
+-	}
+-	__set_bit(NFS_CS_IDMAP, &clp->cl_res_state);
+-
+ 	error = nfs4_init_client_minor_version(clp);
+ 	if (error < 0)
+ 		goto error;
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 4d20125e982a0..371665e0c154c 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -966,10 +966,8 @@ void
+ pnfs_set_layout_stateid(struct pnfs_layout_hdr *lo, const nfs4_stateid *new,
+ 			const struct cred *cred, bool update_barrier)
+ {
+-	u32 oldseq, newseq, new_barrier = 0;
+-
+-	oldseq = be32_to_cpu(lo->plh_stateid.seqid);
+-	newseq = be32_to_cpu(new->seqid);
++	u32 oldseq = be32_to_cpu(lo->plh_stateid.seqid);
++	u32 newseq = be32_to_cpu(new->seqid);
+ 
+ 	if (!pnfs_layout_is_valid(lo)) {
+ 		pnfs_set_layout_cred(lo, cred);
+@@ -979,19 +977,21 @@ pnfs_set_layout_stateid(struct pnfs_layout_hdr *lo, const nfs4_stateid *new,
+ 		clear_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags);
+ 		return;
+ 	}
+-	if (pnfs_seqid_is_newer(newseq, oldseq)) {
++
++	if (pnfs_seqid_is_newer(newseq, oldseq))
+ 		nfs4_stateid_copy(&lo->plh_stateid, new);
+-		/*
+-		 * Because of wraparound, we want to keep the barrier
+-		 * "close" to the current seqids.
+-		 */
+-		new_barrier = newseq - atomic_read(&lo->plh_outstanding);
+-	}
+-	if (update_barrier)
+-		new_barrier = be32_to_cpu(new->seqid);
+-	else if (new_barrier == 0)
++
++	if (update_barrier) {
++		pnfs_barrier_update(lo, newseq);
+ 		return;
+-	pnfs_barrier_update(lo, new_barrier);
++	}
++	/*
++	 * Because of wraparound, we want to keep the barrier
++	 * "close" to the current seqids. We really only want to
++	 * get here from a layoutget call.
++	 */
++	if (atomic_read(&lo->plh_outstanding) == 1)
++		 pnfs_barrier_update(lo, be32_to_cpu(lo->plh_stateid.seqid));
+ }
+ 
+ static bool
+@@ -2015,7 +2015,7 @@ lookup_again:
+ 	 * If the layout segment list is empty, but there are outstanding
+ 	 * layoutget calls, then they might be subject to a layoutrecall.
+ 	 */
+-	if (list_empty(&lo->plh_segs) &&
++	if ((list_empty(&lo->plh_segs) || !pnfs_layout_is_valid(lo)) &&
+ 	    atomic_read(&lo->plh_outstanding) != 0) {
+ 		spin_unlock(&ino->i_lock);
+ 		lseg = ERR_PTR(wait_var_event_killable(&lo->plh_outstanding,
+@@ -2391,11 +2391,13 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
+ 		goto out_forget;
+ 	}
+ 
++	if (!pnfs_layout_is_valid(lo) && !pnfs_is_first_layoutget(lo))
++		goto out_forget;
++
+ 	if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
+ 		/* existing state ID, make sure the sequence number matches. */
+ 		if (pnfs_layout_stateid_blocked(lo, &res->stateid)) {
+-			if (!pnfs_layout_is_valid(lo) &&
+-			    pnfs_is_first_layoutget(lo))
++			if (!pnfs_layout_is_valid(lo))
+ 				lo->plh_barrier = 0;
+ 			dprintk("%s forget reply due to sequence\n", __func__);
+ 			goto out_forget;
+@@ -2416,8 +2418,6 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
+ 		goto out_forget;
+ 	} else {
+ 		/* We have a completely new layout */
+-		if (!pnfs_is_first_layoutget(lo))
+-			goto out_forget;
+ 		pnfs_set_layout_stateid(lo, &res->stateid, lgp->cred, true);
+ 	}
+ 
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index e3b25822e0bb1..251c4a3aef9a6 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -791,19 +791,16 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(nfs4_pnfs_ds_add);
+ 
+-static void nfs4_wait_ds_connect(struct nfs4_pnfs_ds *ds)
++static int nfs4_wait_ds_connect(struct nfs4_pnfs_ds *ds)
+ {
+ 	might_sleep();
+-	wait_on_bit(&ds->ds_state, NFS4DS_CONNECTING,
+-			TASK_KILLABLE);
++	return wait_on_bit(&ds->ds_state, NFS4DS_CONNECTING, TASK_KILLABLE);
+ }
+ 
+ static void nfs4_clear_ds_conn_bit(struct nfs4_pnfs_ds *ds)
+ {
+ 	smp_mb__before_atomic();
+-	clear_bit(NFS4DS_CONNECTING, &ds->ds_state);
+-	smp_mb__after_atomic();
+-	wake_up_bit(&ds->ds_state, NFS4DS_CONNECTING);
++	clear_and_wake_up_bit(NFS4DS_CONNECTING, &ds->ds_state);
+ }
+ 
+ static struct nfs_client *(*get_v3_ds_connect)(
+@@ -969,30 +966,33 @@ int nfs4_pnfs_ds_connect(struct nfs_server *mds_srv, struct nfs4_pnfs_ds *ds,
+ {
+ 	int err;
+ 
+-again:
+-	err = 0;
+-	if (test_and_set_bit(NFS4DS_CONNECTING, &ds->ds_state) == 0) {
+-		if (version == 3) {
+-			err = _nfs4_pnfs_v3_ds_connect(mds_srv, ds, timeo,
+-						       retrans);
+-		} else if (version == 4) {
+-			err = _nfs4_pnfs_v4_ds_connect(mds_srv, ds, timeo,
+-						       retrans, minor_version);
+-		} else {
+-			dprintk("%s: unsupported DS version %d\n", __func__,
+-				version);
+-			err = -EPROTONOSUPPORT;
+-		}
++	do {
++		err = nfs4_wait_ds_connect(ds);
++		if (err || ds->ds_clp)
++			goto out;
++		if (nfs4_test_deviceid_unavailable(devid))
++			return -ENODEV;
++	} while (test_and_set_bit(NFS4DS_CONNECTING, &ds->ds_state) != 0);
+ 
+-		nfs4_clear_ds_conn_bit(ds);
+-	} else {
+-		nfs4_wait_ds_connect(ds);
++	if (ds->ds_clp)
++		goto connect_done;
+ 
+-		/* what was waited on didn't connect AND didn't mark unavail */
+-		if (!ds->ds_clp && !nfs4_test_deviceid_unavailable(devid))
+-			goto again;
++	switch (version) {
++	case 3:
++		err = _nfs4_pnfs_v3_ds_connect(mds_srv, ds, timeo, retrans);
++		break;
++	case 4:
++		err = _nfs4_pnfs_v4_ds_connect(mds_srv, ds, timeo, retrans,
++					       minor_version);
++		break;
++	default:
++		dprintk("%s: unsupported DS version %d\n", __func__, version);
++		err = -EPROTONOSUPPORT;
+ 	}
+ 
++connect_done:
++	nfs4_clear_ds_conn_bit(ds);
++out:
+ 	/*
+ 	 * At this point the ds->ds_clp should be ready, but it might have
+ 	 * hit an error.
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index ac20f79bbedd6..80e394a2e3fd7 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -7158,7 +7158,6 @@ nfs4_client_to_reclaim(struct xdr_netobj name, struct xdr_netobj princhash,
+ 	unsigned int strhashval;
+ 	struct nfs4_client_reclaim *crp;
+ 
+-	trace_nfsd_clid_reclaim(nn, name.len, name.data);
+ 	crp = alloc_reclaim();
+ 	if (crp) {
+ 		strhashval = clientstr_hashval(name);
+@@ -7208,8 +7207,6 @@ nfsd4_find_reclaim_client(struct xdr_netobj name, struct nfsd_net *nn)
+ 	unsigned int strhashval;
+ 	struct nfs4_client_reclaim *crp = NULL;
+ 
+-	trace_nfsd_clid_find(nn, name.len, name.data);
+-
+ 	strhashval = clientstr_hashval(name);
+ 	list_for_each_entry(crp, &nn->reclaim_str_hashtbl[strhashval], cr_strhash) {
+ 		if (compare_blob(&crp->cr_name, &name) == 0) {
+diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h
+index 99bf07800cd09..c8ca73d69ad04 100644
+--- a/fs/nfsd/trace.h
++++ b/fs/nfsd/trace.h
+@@ -368,35 +368,6 @@ DEFINE_EVENT(nfsd_net_class, nfsd_##name, \
+ DEFINE_NET_EVENT(grace_start);
+ DEFINE_NET_EVENT(grace_complete);
+ 
+-DECLARE_EVENT_CLASS(nfsd_clid_class,
+-	TP_PROTO(const struct nfsd_net *nn,
+-		 unsigned int namelen,
+-		 const unsigned char *namedata),
+-	TP_ARGS(nn, namelen, namedata),
+-	TP_STRUCT__entry(
+-		__field(unsigned long long, boot_time)
+-		__field(unsigned int, namelen)
+-		__dynamic_array(unsigned char,  name, namelen)
+-	),
+-	TP_fast_assign(
+-		__entry->boot_time = nn->boot_time;
+-		__entry->namelen = namelen;
+-		memcpy(__get_dynamic_array(name), namedata, namelen);
+-	),
+-	TP_printk("boot_time=%16llx nfs4_clientid=%.*s",
+-		__entry->boot_time, __entry->namelen, __get_str(name))
+-)
+-
+-#define DEFINE_CLID_EVENT(name) \
+-DEFINE_EVENT(nfsd_clid_class, nfsd_clid_##name, \
+-	TP_PROTO(const struct nfsd_net *nn, \
+-		 unsigned int namelen, \
+-		 const unsigned char *namedata), \
+-	TP_ARGS(nn, namelen, namedata))
+-
+-DEFINE_CLID_EVENT(find);
+-DEFINE_CLID_EVENT(reclaim);
+-
+ TRACE_EVENT(nfsd_clid_inuse_err,
+ 	TP_PROTO(const struct nfs4_client *clp),
+ 	TP_ARGS(clp),
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 1ecaceebee133..011cd570b50df 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -1113,6 +1113,19 @@ out:
+ }
+ 
+ #ifdef CONFIG_NFSD_V3
++static int
++nfsd_filemap_write_and_wait_range(struct nfsd_file *nf, loff_t offset,
++				  loff_t end)
++{
++	struct address_space *mapping = nf->nf_file->f_mapping;
++	int ret = filemap_fdatawrite_range(mapping, offset, end);
++
++	if (ret)
++		return ret;
++	filemap_fdatawait_range_keep_errors(mapping, offset, end);
++	return 0;
++}
++
+ /*
+  * Commit all pending writes to stable storage.
+  *
+@@ -1143,10 +1156,11 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 	if (err)
+ 		goto out;
+ 	if (EX_ISSYNC(fhp->fh_export)) {
+-		int err2;
++		int err2 = nfsd_filemap_write_and_wait_range(nf, offset, end);
+ 
+ 		down_write(&nf->nf_rwsem);
+-		err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
++		if (!err2)
++			err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
+ 		switch (err2) {
+ 		case 0:
+ 			nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net,
+diff --git a/fs/orangefs/super.c b/fs/orangefs/super.c
+index ee5efdc35cc1e..2f2e430461b21 100644
+--- a/fs/orangefs/super.c
++++ b/fs/orangefs/super.c
+@@ -209,7 +209,7 @@ static int orangefs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	buf->f_bavail = (sector_t) new_op->downcall.resp.statfs.blocks_avail;
+ 	buf->f_files = (sector_t) new_op->downcall.resp.statfs.files_total;
+ 	buf->f_ffree = (sector_t) new_op->downcall.resp.statfs.files_avail;
+-	buf->f_frsize = sb->s_blocksize;
++	buf->f_frsize = 0;
+ 
+ out_op_release:
+ 	op_release(new_op);
+diff --git a/fs/seq_file.c b/fs/seq_file.c
+index 03a369ccd28c3..472714716be69 100644
+--- a/fs/seq_file.c
++++ b/fs/seq_file.c
+@@ -32,6 +32,9 @@ static void seq_set_overflow(struct seq_file *m)
+ 
+ static void *seq_buf_alloc(unsigned long size)
+ {
++	if (unlikely(size > MAX_RW_COUNT))
++		return NULL;
++
+ 	return kvmalloc(size, GFP_KERNEL_ACCOUNT);
+ }
+ 
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 08fde777c3247..ad90a3a64293e 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -1335,7 +1335,10 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 			goto out_release;
+ 		}
+ 
++		spin_lock(&whiteout->i_lock);
+ 		whiteout->i_state |= I_LINKABLE;
++		spin_unlock(&whiteout->i_lock);
++
+ 		whiteout_ui = ubifs_inode(whiteout);
+ 		whiteout_ui->data = dev;
+ 		whiteout_ui->data_len = ubifs_encode_dev(dev, MKDEV(0, 0));
+@@ -1428,7 +1431,11 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 
+ 		inc_nlink(whiteout);
+ 		mark_inode_dirty(whiteout);
++
++		spin_lock(&whiteout->i_lock);
+ 		whiteout->i_state &= ~I_LINKABLE;
++		spin_unlock(&whiteout->i_lock);
++
+ 		iput(whiteout);
+ 	}
+ 
+diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
+index 091c2ad8f2111..7274bd23881b2 100644
+--- a/fs/ubifs/journal.c
++++ b/fs/ubifs/journal.c
+@@ -881,7 +881,8 @@ int ubifs_jnl_write_inode(struct ubifs_info *c, const struct inode *inode)
+ 		struct inode *xino;
+ 		struct ubifs_dent_node *xent, *pxent = NULL;
+ 
+-		if (ui->xattr_cnt >= ubifs_xattr_max_cnt(c)) {
++		if (ui->xattr_cnt > ubifs_xattr_max_cnt(c)) {
++			err = -EPERM;
+ 			ubifs_err(c, "Cannot delete inode, it has too much xattrs!");
+ 			goto out_release;
+ 		}
+diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
+index 09280796fc610..17745f5462f02 100644
+--- a/fs/ubifs/xattr.c
++++ b/fs/ubifs/xattr.c
+@@ -512,7 +512,7 @@ int ubifs_purge_xattrs(struct inode *host)
+ 	struct fscrypt_name nm = {0};
+ 	int err;
+ 
+-	if (ubifs_inode(host)->xattr_cnt < ubifs_xattr_max_cnt(c))
++	if (ubifs_inode(host)->xattr_cnt <= ubifs_xattr_max_cnt(c))
+ 		return 0;
+ 
+ 	ubifs_warn(c, "inode %lu has too many xattrs, doing a non-atomic deletion",
+diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
+index 189149de77a9d..9ba951e3a6c22 100644
+--- a/include/linux/compiler-clang.h
++++ b/include/linux/compiler-clang.h
+@@ -23,6 +23,12 @@
+ /* all clang versions usable with the kernel support KASAN ABI version 5 */
+ #define KASAN_ABI_VERSION 5
+ 
++/*
++ * Note: Checking __has_feature(*_sanitizer) is only true if the feature is
++ * enabled. Therefore it is not required to additionally check defined(CONFIG_*)
++ * to avoid adding redundant attributes in other configurations.
++ */
++
+ #if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer)
+ /* Emulate GCC's __SANITIZE_ADDRESS__ flag */
+ #define __SANITIZE_ADDRESS__
+@@ -55,6 +61,17 @@
+ #define __no_sanitize_undefined
+ #endif
+ 
++/*
++ * Support for __has_feature(coverage_sanitizer) was added in Clang 13 together
++ * with no_sanitize("coverage"). Prior versions of Clang support coverage
++ * instrumentation, but cannot be queried for support by the preprocessor.
++ */
++#if __has_feature(coverage_sanitizer)
++#define __no_sanitize_coverage __attribute__((no_sanitize("coverage")))
++#else
++#define __no_sanitize_coverage
++#endif
++
+ /*
+  * Not all versions of clang implement the type-generic versions
+  * of the builtin overflow checkers. Fortunately, clang implements
+diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
+index 555ab0fddbef7..4cf524ccab430 100644
+--- a/include/linux/compiler-gcc.h
++++ b/include/linux/compiler-gcc.h
+@@ -137,6 +137,12 @@
+ #define __no_sanitize_undefined
+ #endif
+ 
++#if defined(CONFIG_KCOV) && __has_attribute(__no_sanitize_coverage__)
++#define __no_sanitize_coverage __attribute__((no_sanitize_coverage))
++#else
++#define __no_sanitize_coverage
++#endif
++
+ #if GCC_VERSION >= 50100
+ #define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1
+ #endif
+diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
+index ac3fa37a84f94..2a1c202baa1fe 100644
+--- a/include/linux/compiler_types.h
++++ b/include/linux/compiler_types.h
+@@ -205,7 +205,7 @@ struct ftrace_likely_data {
+ /* Section for code which can't be instrumented at all */
+ #define noinstr								\
+ 	noinline notrace __attribute((__section__(".noinstr.text")))	\
+-	__no_kcsan __no_sanitize_address
++	__no_kcsan __no_sanitize_address __no_sanitize_coverage
+ 
+ #endif /* __KERNEL__ */
+ 
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index a2c6455ea3fae..91a6525a98cb7 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -79,6 +79,7 @@ struct nfs_open_context {
+ #define NFS_CONTEXT_RESEND_WRITES	(1)
+ #define NFS_CONTEXT_BAD			(2)
+ #define NFS_CONTEXT_UNLOCK	(3)
++#define NFS_CONTEXT_FILE_OPEN		(4)
+ 	int error;
+ 
+ 	struct list_head list;
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index c5adba5e79e7e..7d12c76e8fa45 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -308,7 +308,7 @@ static inline int rcu_read_lock_any_held(void)
+ #define RCU_LOCKDEP_WARN(c, s)						\
+ 	do {								\
+ 		static bool __section(".data.unlikely") __warned;	\
+-		if (debug_lockdep_rcu_enabled() && !__warned && (c)) {	\
++		if ((c) && debug_lockdep_rcu_enabled() && !__warned) {	\
+ 			__warned = true;				\
+ 			lockdep_rcu_suspicious(__FILE__, __LINE__, s);	\
+ 		}							\
+diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
+index 4b6a8234d7fc2..657640015b335 100644
+--- a/include/linux/sched/signal.h
++++ b/include/linux/sched/signal.h
+@@ -525,6 +525,17 @@ static inline int kill_cad_pid(int sig, int priv)
+ #define SEND_SIG_NOINFO ((struct kernel_siginfo *) 0)
+ #define SEND_SIG_PRIV	((struct kernel_siginfo *) 1)
+ 
++static inline int __on_sig_stack(unsigned long sp)
++{
++#ifdef CONFIG_STACK_GROWSUP
++	return sp >= current->sas_ss_sp &&
++		sp - current->sas_ss_sp < current->sas_ss_size;
++#else
++	return sp > current->sas_ss_sp &&
++		sp - current->sas_ss_sp <= current->sas_ss_size;
++#endif
++}
++
+ /*
+  * True if we are on the alternate signal stack.
+  */
+@@ -542,13 +553,7 @@ static inline int on_sig_stack(unsigned long sp)
+ 	if (current->sas_ss_flags & SS_AUTODISARM)
+ 		return 0;
+ 
+-#ifdef CONFIG_STACK_GROWSUP
+-	return sp >= current->sas_ss_sp &&
+-		sp - current->sas_ss_sp < current->sas_ss_size;
+-#else
+-	return sp > current->sas_ss_sp &&
+-		sp - current->sas_ss_sp <= current->sas_ss_size;
+-#endif
++	return __on_sig_stack(sp);
+ }
+ 
+ static inline int sas_ss_flags(unsigned long sp)
+diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
+index b3bbd10eb3f07..2b5f97224f693 100644
+--- a/include/scsi/libiscsi.h
++++ b/include/scsi/libiscsi.h
+@@ -195,12 +195,6 @@ struct iscsi_conn {
+ 	unsigned long		suspend_tx;	/* suspend Tx */
+ 	unsigned long		suspend_rx;	/* suspend Rx */
+ 
+-	/* abort */
+-	wait_queue_head_t	ehwait;		/* used in eh_abort() */
+-	struct iscsi_tm		tmhdr;
+-	struct timer_list	tmf_timer;
+-	int			tmf_state;	/* see TMF_INITIAL, etc.*/
+-
+ 	/* negotiated params */
+ 	unsigned		max_recv_dlength; /* initiator_max_recv_dsl*/
+ 	unsigned		max_xmit_dlength; /* target_max_recv_dsl */
+@@ -270,6 +264,11 @@ struct iscsi_session {
+ 	 * and recv lock.
+ 	 */
+ 	struct mutex		eh_mutex;
++	/* abort */
++	wait_queue_head_t	ehwait;		/* used in eh_abort() */
++	struct iscsi_tm		tmhdr;
++	struct timer_list	tmf_timer;
++	int			tmf_state;	/* see TMF_INITIAL, etc.*/
+ 
+ 	/* iSCSI session-wide sequencing */
+ 	uint32_t		cmdsn;
+diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
+index fc5a39839b4b0..f28bb20d62713 100644
+--- a/include/scsi/scsi_transport_iscsi.h
++++ b/include/scsi/scsi_transport_iscsi.h
+@@ -434,6 +434,8 @@ extern void iscsi_remove_session(struct iscsi_cls_session *session);
+ extern void iscsi_free_session(struct iscsi_cls_session *session);
+ extern struct iscsi_cls_conn *iscsi_create_conn(struct iscsi_cls_session *sess,
+ 						int dd_size, uint32_t cid);
++extern void iscsi_put_conn(struct iscsi_cls_conn *conn);
++extern void iscsi_get_conn(struct iscsi_cls_conn *conn);
+ extern int iscsi_destroy_conn(struct iscsi_cls_conn *conn);
+ extern void iscsi_unblock_session(struct iscsi_cls_session *session);
+ extern void iscsi_block_session(struct iscsi_cls_session *session);
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index f6dddb3a8f4a2..04eb28f7735fb 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -912,6 +912,8 @@ int cgroup1_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 	opt = fs_parse(fc, cgroup1_fs_parameters, param, &result);
+ 	if (opt == -ENOPARAM) {
+ 		if (strcmp(param->key, "source") == 0) {
++			if (param->type != fs_value_is_string)
++				return invalf(fc, "Non-string source");
+ 			if (fc->source)
+ 				return invalf(fc, "Multiple sources not supported");
+ 			fc->source = param->string;
+diff --git a/kernel/jump_label.c b/kernel/jump_label.c
+index a0c325664190b..4ae693ce71a41 100644
+--- a/kernel/jump_label.c
++++ b/kernel/jump_label.c
+@@ -316,14 +316,16 @@ static int addr_conflict(struct jump_entry *entry, void *start, void *end)
+ }
+ 
+ static int __jump_label_text_reserved(struct jump_entry *iter_start,
+-		struct jump_entry *iter_stop, void *start, void *end)
++		struct jump_entry *iter_stop, void *start, void *end, bool init)
+ {
+ 	struct jump_entry *iter;
+ 
+ 	iter = iter_start;
+ 	while (iter < iter_stop) {
+-		if (addr_conflict(iter, start, end))
+-			return 1;
++		if (init || !jump_entry_is_init(iter)) {
++			if (addr_conflict(iter, start, end))
++				return 1;
++		}
+ 		iter++;
+ 	}
+ 
+@@ -561,7 +563,7 @@ static int __jump_label_mod_text_reserved(void *start, void *end)
+ 
+ 	ret = __jump_label_text_reserved(mod->jump_entries,
+ 				mod->jump_entries + mod->num_jump_entries,
+-				start, end);
++				start, end, mod->state == MODULE_STATE_COMING);
+ 
+ 	module_put(mod);
+ 
+@@ -786,8 +788,9 @@ early_initcall(jump_label_init_module);
+  */
+ int jump_label_text_reserved(void *start, void *end)
+ {
++	bool init = system_state < SYSTEM_RUNNING;
+ 	int ret = __jump_label_text_reserved(__start___jump_table,
+-			__stop___jump_table, start, end);
++			__stop___jump_table, start, end, init);
+ 
+ 	if (ret)
+ 		return ret;
+diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
+index e01cba5e4b529..fcf95d1eec69a 100644
+--- a/kernel/rcu/rcu.h
++++ b/kernel/rcu/rcu.h
+@@ -308,6 +308,8 @@ static inline void rcu_init_levelspread(int *levelspread, const int *levelcnt)
+ 	}
+ }
+ 
++extern void rcu_init_geometry(void);
++
+ /* Returns a pointer to the first leaf rcu_node structure. */
+ #define rcu_first_leaf_node() (rcu_state.level[rcu_num_lvls - 1])
+ 
+diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
+index c13348ee80a5a..68ceac3878445 100644
+--- a/kernel/rcu/srcutree.c
++++ b/kernel/rcu/srcutree.c
+@@ -90,6 +90,9 @@ static void init_srcu_struct_nodes(struct srcu_struct *ssp, bool is_static)
+ 	struct srcu_node *snp;
+ 	struct srcu_node *snp_first;
+ 
++	/* Initialize geometry if it has not already been initialized. */
++	rcu_init_geometry();
++
+ 	/* Work out the overall tree geometry. */
+ 	ssp->level[0] = &ssp->node[0];
+ 	for (i = 1; i < rcu_num_lvls; i++)
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 45b60e9974610..8c3ba0185082d 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -4383,11 +4383,25 @@ static void __init rcu_init_one(void)
+  * replace the definitions in tree.h because those are needed to size
+  * the ->node array in the rcu_state structure.
+  */
+-static void __init rcu_init_geometry(void)
++void rcu_init_geometry(void)
+ {
+ 	ulong d;
+ 	int i;
++	static unsigned long old_nr_cpu_ids;
+ 	int rcu_capacity[RCU_NUM_LVLS];
++	static bool initialized;
++
++	if (initialized) {
++		/*
++		 * Warn if setup_nr_cpu_ids() had not yet been invoked,
++		 * unless nr_cpus_ids == NR_CPUS, in which case who cares?
++		 */
++		WARN_ON_ONCE(old_nr_cpu_ids != nr_cpu_ids);
++		return;
++	}
++
++	old_nr_cpu_ids = nr_cpu_ids;
++	initialized = true;
+ 
+ 	/*
+ 	 * Initialize any unspecified boot parameters.
+diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
+index 39334d2d2b379..849f0aa99333b 100644
+--- a/kernel/rcu/update.c
++++ b/kernel/rcu/update.c
+@@ -275,7 +275,7 @@ EXPORT_SYMBOL_GPL(rcu_callback_map);
+ 
+ noinstr int notrace debug_lockdep_rcu_enabled(void)
+ {
+-	return rcu_scheduler_active != RCU_SCHEDULER_INACTIVE && debug_locks &&
++	return rcu_scheduler_active != RCU_SCHEDULER_INACTIVE && READ_ONCE(debug_locks) &&
+ 	       current->lockdep_recursion == 0;
+ }
+ EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled);
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index fdebfcbdfca94..39112ac7ab347 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -2422,20 +2422,27 @@ static __always_inline
+ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+ 				  struct task_struct *p)
+ {
+-	unsigned long min_util;
+-	unsigned long max_util;
++	unsigned long min_util = 0;
++	unsigned long max_util = 0;
+ 
+ 	if (!static_branch_likely(&sched_uclamp_used))
+ 		return util;
+ 
+-	min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value);
+-	max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);
+-
+ 	if (p) {
+-		min_util = max(min_util, uclamp_eff_value(p, UCLAMP_MIN));
+-		max_util = max(max_util, uclamp_eff_value(p, UCLAMP_MAX));
++		min_util = uclamp_eff_value(p, UCLAMP_MIN);
++		max_util = uclamp_eff_value(p, UCLAMP_MAX);
++
++		/*
++		 * Ignore last runnable task's max clamp, as this task will
++		 * reset it. Similarly, no need to read the rq's min clamp.
++		 */
++		if (rq->uclamp_flags & UCLAMP_FLAG_IDLE)
++			goto out;
+ 	}
+ 
++	min_util = max_t(unsigned long, min_util, READ_ONCE(rq->uclamp[UCLAMP_MIN].value));
++	max_util = max_t(unsigned long, max_util, READ_ONCE(rq->uclamp[UCLAMP_MAX].value));
++out:
+ 	/*
+ 	 * Since CPU's {min,max}_util clamps are MAX aggregated considering
+ 	 * RUNNABLE tasks with _different_ clamps, we can end up with an
+diff --git a/kernel/static_call.c b/kernel/static_call.c
+index f59089a122319..b62a0c41c9050 100644
+--- a/kernel/static_call.c
++++ b/kernel/static_call.c
+@@ -292,13 +292,15 @@ static int addr_conflict(struct static_call_site *site, void *start, void *end)
+ 
+ static int __static_call_text_reserved(struct static_call_site *iter_start,
+ 				       struct static_call_site *iter_stop,
+-				       void *start, void *end)
++				       void *start, void *end, bool init)
+ {
+ 	struct static_call_site *iter = iter_start;
+ 
+ 	while (iter < iter_stop) {
+-		if (addr_conflict(iter, start, end))
+-			return 1;
++		if (init || !static_call_is_init(iter)) {
++			if (addr_conflict(iter, start, end))
++				return 1;
++		}
+ 		iter++;
+ 	}
+ 
+@@ -324,7 +326,7 @@ static int __static_call_mod_text_reserved(void *start, void *end)
+ 
+ 	ret = __static_call_text_reserved(mod->static_call_sites,
+ 			mod->static_call_sites + mod->num_static_call_sites,
+-			start, end);
++			start, end, mod->state == MODULE_STATE_COMING);
+ 
+ 	module_put(mod);
+ 
+@@ -459,8 +461,9 @@ static inline int __static_call_mod_text_reserved(void *start, void *end)
+ 
+ int static_call_text_reserved(void *start, void *end)
+ {
++	bool init = system_state < SYSTEM_RUNNING;
+ 	int ret = __static_call_text_reserved(__start_static_call_sites,
+-			__stop_static_call_sites, start, end);
++			__stop_static_call_sites, start, end, init);
+ 
+ 	if (ret)
+ 		return ret;
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 0b24938cbe92e..49d886b328dc1 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1673,7 +1673,9 @@ static struct hist_field *create_hist_field(struct hist_trigger_data *hist_data,
+ 	if (WARN_ON_ONCE(!field))
+ 		goto out;
+ 
+-	if (is_string_field(field)) {
++	/* Pointers to strings are just pointers and dangerous to dereference */
++	if (is_string_field(field) &&
++	    (field->filter_type != FILTER_PTR_STRING)) {
+ 		flags |= HIST_FIELD_FL_STRING;
+ 
+ 		hist_field->size = MAX_FILTER_STR_VAL;
+@@ -4469,8 +4471,6 @@ static inline void add_to_key(char *compound_key, void *key,
+ 		field = key_field->field;
+ 		if (field->filter_type == FILTER_DYN_STRING)
+ 			size = *(u32 *)(rec + field->offset) >> 16;
+-		else if (field->filter_type == FILTER_PTR_STRING)
+-			size = strlen(key);
+ 		else if (field->filter_type == FILTER_STATIC_STRING)
+ 			size = field->size;
+ 
+diff --git a/lib/decompress_unlz4.c b/lib/decompress_unlz4.c
+index c0cfcfd486be0..e6327391b6b66 100644
+--- a/lib/decompress_unlz4.c
++++ b/lib/decompress_unlz4.c
+@@ -112,6 +112,9 @@ STATIC inline int INIT unlz4(u8 *input, long in_len,
+ 				error("data corrupted");
+ 				goto exit_2;
+ 			}
++		} else if (size < 4) {
++			/* empty or end-of-file */
++			goto exit_3;
+ 		}
+ 
+ 		chunksize = get_unaligned_le32(inp);
+@@ -125,6 +128,10 @@ STATIC inline int INIT unlz4(u8 *input, long in_len,
+ 			continue;
+ 		}
+ 
++		if (!fill && chunksize == 0) {
++			/* empty or end-of-file */
++			goto exit_3;
++		}
+ 
+ 		if (posp)
+ 			*posp += 4;
+@@ -184,6 +191,7 @@ STATIC inline int INIT unlz4(u8 *input, long in_len,
+ 		}
+ 	}
+ 
++exit_3:
+ 	ret = 0;
+ exit_2:
+ 	if (!input)
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 5015ece7adf7a..e5328a2777ecd 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -2998,7 +2998,9 @@ static void br_multicast_pim(struct net_bridge *br,
+ 	    pim_hdr_type(pimhdr) != PIM_TYPE_HELLO)
+ 		return;
+ 
++	spin_lock(&br->multicast_lock);
+ 	br_multicast_mark_router(br, port);
++	spin_unlock(&br->multicast_lock);
+ }
+ 
+ static int br_ip4_multicast_mrd_rcv(struct net_bridge *br,
+@@ -3009,7 +3011,9 @@ static int br_ip4_multicast_mrd_rcv(struct net_bridge *br,
+ 	    igmp_hdr(skb)->type != IGMP_MRDISC_ADV)
+ 		return -ENOMSG;
+ 
++	spin_lock(&br->multicast_lock);
+ 	br_multicast_mark_router(br, port);
++	spin_unlock(&br->multicast_lock);
+ 
+ 	return 0;
+ }
+@@ -3077,7 +3081,9 @@ static void br_ip6_multicast_mrd_rcv(struct net_bridge *br,
+ 	if (icmp6_hdr(skb)->icmp6_type != ICMPV6_MRDISC_ADV)
+ 		return;
+ 
++	spin_lock(&br->multicast_lock);
+ 	br_multicast_mark_router(br, port);
++	spin_unlock(&br->multicast_lock);
+ }
+ 
+ static int br_multicast_ipv6_rcv(struct net_bridge *br,
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index c56a66cdf4ac8..9c0f71e82d978 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1680,7 +1680,8 @@ static int xs_bind(struct sock_xprt *transport, struct socket *sock)
+ 		err = kernel_bind(sock, (struct sockaddr *)&myaddr,
+ 				transport->xprt.addrlen);
+ 		if (err == 0) {
+-			transport->srcport = port;
++			if (transport->xprt.reuseport)
++				transport->srcport = port;
+ 			break;
+ 		}
+ 		last = port;
+diff --git a/sound/ac97/bus.c b/sound/ac97/bus.c
+index 7985dd8198b6c..99e1728b52ae4 100644
+--- a/sound/ac97/bus.c
++++ b/sound/ac97/bus.c
+@@ -520,7 +520,7 @@ static int ac97_bus_remove(struct device *dev)
+ 	struct ac97_codec_driver *adrv = to_ac97_driver(dev->driver);
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/sound/firewire/Kconfig b/sound/firewire/Kconfig
+index 9897bd26a4388..12664c3a14141 100644
+--- a/sound/firewire/Kconfig
++++ b/sound/firewire/Kconfig
+@@ -38,7 +38,7 @@ config SND_OXFW
+ 	   * Mackie(Loud) Onyx 1640i (former model)
+ 	   * Mackie(Loud) Onyx Satellite
+ 	   * Mackie(Loud) Tapco Link.Firewire
+-	   * Mackie(Loud) d.4 pro
++	   * Mackie(Loud) d.2 pro/d.4 pro (built-in FireWire card with OXFW971 ASIC)
+ 	   * Mackie(Loud) U.420/U.420d
+ 	   * TASCAM FireOne
+ 	   * Stanton Controllers & Systems 1 Deck/Mixer
+@@ -84,7 +84,7 @@ config SND_BEBOB
+ 	  * PreSonus FIREBOX/FIREPOD/FP10/Inspire1394
+ 	  * BridgeCo RDAudio1/Audio5
+ 	  * Mackie Onyx 1220/1620/1640 (FireWire I/O Card)
+-	  * Mackie d.2 (FireWire Option) and d.2 Pro
++	  * Mackie d.2 (optional FireWire card with DM1000 ASIC)
+ 	  * Stanton FinalScratch 2 (ScratchAmp)
+ 	  * Tascam IF-FW/DM
+ 	  * Behringer XENIX UFX 1204/1604
+@@ -110,6 +110,7 @@ config SND_BEBOB
+ 	  * M-Audio Ozonic/NRV10/ProfireLightBridge
+ 	  * M-Audio FireWire 1814/ProjectMix IO
+ 	  * Digidesign Mbox 2 Pro
++	  * ToneWeal FW66
+ 
+ 	  To compile this driver as a module, choose M here: the module
+ 	  will be called snd-bebob.
+diff --git a/sound/firewire/bebob/bebob.c b/sound/firewire/bebob/bebob.c
+index daeecfa8b9aac..67fa0f2178b01 100644
+--- a/sound/firewire/bebob/bebob.c
++++ b/sound/firewire/bebob/bebob.c
+@@ -59,6 +59,7 @@ static DECLARE_BITMAP(devices_used, SNDRV_CARDS);
+ #define VEN_MAUDIO1	0x00000d6c
+ #define VEN_MAUDIO2	0x000007f5
+ #define VEN_DIGIDESIGN	0x00a07e
++#define OUI_SHOUYO	0x002327
+ 
+ #define MODEL_FOCUSRITE_SAFFIRE_BOTH	0x00000000
+ #define MODEL_MAUDIO_AUDIOPHILE_BOTH	0x00010060
+@@ -387,7 +388,7 @@ static const struct ieee1394_device_id bebob_id_table[] = {
+ 	SND_BEBOB_DEV_ENTRY(VEN_BRIDGECO, 0x00010049, &spec_normal),
+ 	/* Mackie, Onyx 1220/1620/1640 (Firewire I/O Card) */
+ 	SND_BEBOB_DEV_ENTRY(VEN_MACKIE2, 0x00010065, &spec_normal),
+-	// Mackie, d.2 (Firewire option card) and d.2 Pro (the card is built-in).
++	// Mackie, d.2 (optional Firewire card with DM1000).
+ 	SND_BEBOB_DEV_ENTRY(VEN_MACKIE1, 0x00010067, &spec_normal),
+ 	/* Stanton, ScratchAmp */
+ 	SND_BEBOB_DEV_ENTRY(VEN_STANTON, 0x00000001, &spec_normal),
+@@ -486,6 +487,8 @@ static const struct ieee1394_device_id bebob_id_table[] = {
+ 			    &maudio_special_spec),
+ 	/* Digidesign Mbox 2 Pro */
+ 	SND_BEBOB_DEV_ENTRY(VEN_DIGIDESIGN, 0x0000a9, &spec_normal),
++	// Toneweal FW66.
++	SND_BEBOB_DEV_ENTRY(OUI_SHOUYO, 0x020002, &spec_normal),
+ 	/* IDs are unknown but able to be supported */
+ 	/*  Apogee, Mini-ME Firewire */
+ 	/*  Apogee, Mini-DAC Firewire */
+diff --git a/sound/firewire/motu/motu-protocol-v2.c b/sound/firewire/motu/motu-protocol-v2.c
+index 784073aa10265..f0a0ecad4d74a 100644
+--- a/sound/firewire/motu/motu-protocol-v2.c
++++ b/sound/firewire/motu/motu-protocol-v2.c
+@@ -86,24 +86,23 @@ static int detect_clock_source_optical_model(struct snd_motu *motu, u32 data,
+ 		*src = SND_MOTU_CLOCK_SOURCE_INTERNAL;
+ 		break;
+ 	case 1:
++		*src = SND_MOTU_CLOCK_SOURCE_ADAT_ON_OPT;
++		break;
++	case 2:
+ 	{
+ 		__be32 reg;
+ 
+ 		// To check the configuration of optical interface.
+-		int err = snd_motu_transaction_read(motu, V2_IN_OUT_CONF_OFFSET,
+-						    &reg, sizeof(reg));
++		int err = snd_motu_transaction_read(motu, V2_IN_OUT_CONF_OFFSET, &reg, sizeof(reg));
+ 		if (err < 0)
+ 			return err;
+ 
+-		if (be32_to_cpu(reg) & 0x00000200)
++		if (((data & V2_OPT_IN_IFACE_MASK) >> V2_OPT_IN_IFACE_SHIFT) == V2_OPT_IFACE_MODE_SPDIF)
+ 			*src = SND_MOTU_CLOCK_SOURCE_SPDIF_ON_OPT;
+ 		else
+-			*src = SND_MOTU_CLOCK_SOURCE_ADAT_ON_OPT;
++			*src = SND_MOTU_CLOCK_SOURCE_SPDIF_ON_COAX;
+ 		break;
+ 	}
+-	case 2:
+-		*src = SND_MOTU_CLOCK_SOURCE_SPDIF_ON_COAX;
+-		break;
+ 	case 3:
+ 		*src = SND_MOTU_CLOCK_SOURCE_SPH;
+ 		break;
+diff --git a/sound/firewire/oxfw/oxfw.c b/sound/firewire/oxfw/oxfw.c
+index 9eea25c46dc7e..5490637d278a4 100644
+--- a/sound/firewire/oxfw/oxfw.c
++++ b/sound/firewire/oxfw/oxfw.c
+@@ -355,7 +355,7 @@ static const struct ieee1394_device_id oxfw_id_table[] = {
+ 	 *  Onyx-i series (former models):	0x081216
+ 	 *  Mackie Onyx Satellite:		0x00200f
+ 	 *  Tapco LINK.firewire 4x6:		0x000460
+-	 *  d.4 pro:				Unknown
++	 *  d.2 pro/d.4 pro (built-in card):	Unknown
+ 	 *  U.420:				Unknown
+ 	 *  U.420d:				Unknown
+ 	 */
+diff --git a/sound/isa/cmi8330.c b/sound/isa/cmi8330.c
+index 4669eb0cc8ce2..5434cc90db1db 100644
+--- a/sound/isa/cmi8330.c
++++ b/sound/isa/cmi8330.c
+@@ -548,7 +548,7 @@ static int snd_cmi8330_probe(struct snd_card *card, int dev)
+ 	}
+ 	if (acard->sb->hardware != SB_HW_16) {
+ 		snd_printk(KERN_ERR PFX "SB16 not found during probe\n");
+-		return err;
++		return -ENODEV;
+ 	}
+ 
+ 	snd_wss_out(acard->wss, CS4231_MISC_INFO, 0x40); /* switch on MODE2 */
+diff --git a/sound/isa/sb/sb16_csp.c b/sound/isa/sb/sb16_csp.c
+index 1528e04a4d28e..dbcd9ab2c2b76 100644
+--- a/sound/isa/sb/sb16_csp.c
++++ b/sound/isa/sb/sb16_csp.c
+@@ -1072,10 +1072,14 @@ static void snd_sb_qsound_destroy(struct snd_sb_csp * p)
+ 	card = p->chip->card;	
+ 	
+ 	down_write(&card->controls_rwsem);
+-	if (p->qsound_switch)
++	if (p->qsound_switch) {
+ 		snd_ctl_remove(card, p->qsound_switch);
+-	if (p->qsound_space)
++		p->qsound_switch = NULL;
++	}
++	if (p->qsound_space) {
+ 		snd_ctl_remove(card, p->qsound_space);
++		p->qsound_space = NULL;
++	}
+ 	up_write(&card->controls_rwsem);
+ 
+ 	/* cancel pending transfer of QSound parameters */
+diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c
+index 361cf2041911a..07787698b9738 100644
+--- a/sound/pci/hda/hda_tegra.c
++++ b/sound/pci/hda/hda_tegra.c
+@@ -302,6 +302,9 @@ static int hda_tegra_first_init(struct azx *chip, struct platform_device *pdev)
+ 	const char *sname, *drv_name = "tegra-hda";
+ 	struct device_node *np = pdev->dev.of_node;
+ 
++	if (irq_id < 0)
++		return irq_id;
++
+ 	err = hda_tegra_init_chip(chip, pdev);
+ 	if (err)
+ 		return err;
+diff --git a/sound/ppc/powermac.c b/sound/ppc/powermac.c
+index 96ef55082bf9a..b135d114ce893 100644
+--- a/sound/ppc/powermac.c
++++ b/sound/ppc/powermac.c
+@@ -77,7 +77,11 @@ static int snd_pmac_probe(struct platform_device *devptr)
+ 		sprintf(card->shortname, "PowerMac %s", name_ext);
+ 		sprintf(card->longname, "%s (Dev %d) Sub-frame %d",
+ 			card->shortname, chip->device_id, chip->subframe);
+-		if ( snd_pmac_tumbler_init(chip) < 0 || snd_pmac_tumbler_post_init() < 0)
++		err = snd_pmac_tumbler_init(chip);
++		if (err < 0)
++			goto __error;
++		err = snd_pmac_tumbler_post_init();
++		if (err < 0)
+ 			goto __error;
+ 		break;
+ 	case PMAC_AWACS:
+diff --git a/sound/soc/img/img-i2s-in.c b/sound/soc/img/img-i2s-in.c
+index 0843235d73c91..fd3432a1d6ab8 100644
+--- a/sound/soc/img/img-i2s-in.c
++++ b/sound/soc/img/img-i2s-in.c
+@@ -464,7 +464,7 @@ static int img_i2s_in_probe(struct platform_device *pdev)
+ 		if (ret)
+ 			goto err_pm_disable;
+ 	}
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0)
+ 		goto err_suspend;
+ 
+diff --git a/sound/soc/intel/boards/kbl_da7219_max98357a.c b/sound/soc/intel/boards/kbl_da7219_max98357a.c
+index dc3d897ad2802..36f1f49e0b76b 100644
+--- a/sound/soc/intel/boards/kbl_da7219_max98357a.c
++++ b/sound/soc/intel/boards/kbl_da7219_max98357a.c
+@@ -594,7 +594,7 @@ static int kabylake_audio_probe(struct platform_device *pdev)
+ 
+ static const struct platform_device_id kbl_board_ids[] = {
+ 	{
+-		.name = "kbl_da7219_max98357a",
++		.name = "kbl_da7219_mx98357a",
+ 		.driver_data =
+ 			(kernel_ulong_t)&kabylake_audio_card_da7219_m98357a,
+ 	},
+@@ -616,4 +616,4 @@ module_platform_driver(kabylake_audio)
+ MODULE_DESCRIPTION("Audio Machine driver-DA7219 & MAX98357A in I2S mode");
+ MODULE_AUTHOR("Naveen Manohar <naveen.m@intel.com>");
+ MODULE_LICENSE("GPL v2");
+-MODULE_ALIAS("platform:kbl_da7219_max98357a");
++MODULE_ALIAS("platform:kbl_da7219_mx98357a");
+diff --git a/sound/soc/intel/boards/sof_da7219_max98373.c b/sound/soc/intel/boards/sof_da7219_max98373.c
+index f3cb0773e70ee..8d1ad892e86b6 100644
+--- a/sound/soc/intel/boards/sof_da7219_max98373.c
++++ b/sound/soc/intel/boards/sof_da7219_max98373.c
+@@ -440,6 +440,7 @@ static const struct platform_device_id board_ids[] = {
+ 	},
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(platform, board_ids);
+ 
+ static struct platform_driver audio = {
+ 	.probe = audio_probe,
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index ddbb9fe7cc06b..1f94fa5a15db6 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -877,6 +877,7 @@ static const struct platform_device_id board_ids[] = {
+ 	},
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(platform, board_ids);
+ 
+ static struct platform_driver sof_audio = {
+ 	.probe = sof_audio_probe,
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 75a0bfedb4493..2770e8179983a 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -354,6 +354,7 @@ static struct sof_sdw_codec_info codec_info_list[] = {
+ 		.part_id = 0x714,
+ 		.version_id = 3,
+ 		.direction = {false, true},
++		.ignore_pch_dmic = true,
+ 		.dai_name = "rt715-aif2",
+ 		.init = sof_sdw_rt715_sdca_init,
+ 	},
+@@ -361,6 +362,7 @@ static struct sof_sdw_codec_info codec_info_list[] = {
+ 		.part_id = 0x715,
+ 		.version_id = 3,
+ 		.direction = {false, true},
++		.ignore_pch_dmic = true,
+ 		.dai_name = "rt715-aif2",
+ 		.init = sof_sdw_rt715_sdca_init,
+ 	},
+@@ -368,6 +370,7 @@ static struct sof_sdw_codec_info codec_info_list[] = {
+ 		.part_id = 0x714,
+ 		.version_id = 2,
+ 		.direction = {false, true},
++		.ignore_pch_dmic = true,
+ 		.dai_name = "rt715-aif2",
+ 		.init = sof_sdw_rt715_init,
+ 	},
+@@ -375,6 +378,7 @@ static struct sof_sdw_codec_info codec_info_list[] = {
+ 		.part_id = 0x715,
+ 		.version_id = 2,
+ 		.direction = {false, true},
++		.ignore_pch_dmic = true,
+ 		.dai_name = "rt715-aif2",
+ 		.init = sof_sdw_rt715_init,
+ 	},
+@@ -731,7 +735,8 @@ static int create_sdw_dailink(struct device *dev, int *be_index,
+ 			      int *cpu_id, bool *group_generated,
+ 			      struct snd_soc_codec_conf *codec_conf,
+ 			      int codec_count,
+-			      int *codec_conf_index)
++			      int *codec_conf_index,
++			      bool *ignore_pch_dmic)
+ {
+ 	const struct snd_soc_acpi_link_adr *link_next;
+ 	struct snd_soc_dai_link_component *codecs;
+@@ -784,6 +789,9 @@ static int create_sdw_dailink(struct device *dev, int *be_index,
+ 	if (codec_index < 0)
+ 		return codec_index;
+ 
++	if (codec_info_list[codec_index].ignore_pch_dmic)
++		*ignore_pch_dmic = true;
++
+ 	cpu_dai_index = *cpu_id;
+ 	for_each_pcm_streams(stream) {
+ 		char *name, *cpu_name;
+@@ -915,6 +923,7 @@ static int sof_card_dai_links_create(struct device *dev,
+ 	const struct snd_soc_acpi_link_adr *adr_link;
+ 	struct snd_soc_dai_link_component *cpus;
+ 	struct snd_soc_codec_conf *codec_conf;
++	bool ignore_pch_dmic = false;
+ 	int codec_conf_count;
+ 	int codec_conf_index = 0;
+ 	bool group_generated[SDW_MAX_GROUPS];
+@@ -1021,7 +1030,8 @@ static int sof_card_dai_links_create(struct device *dev,
+ 					 sdw_cpu_dai_num, cpus, adr_link,
+ 					 &cpu_id, group_generated,
+ 					 codec_conf, codec_conf_count,
+-					 &codec_conf_index);
++					 &codec_conf_index,
++					 &ignore_pch_dmic);
+ 		if (ret < 0) {
+ 			dev_err(dev, "failed to create dai link %d", be_id);
+ 			return -ENOMEM;
+@@ -1089,6 +1099,10 @@ SSP:
+ DMIC:
+ 	/* dmic */
+ 	if (dmic_num > 0) {
++		if (ignore_pch_dmic) {
++			dev_warn(dev, "Ignoring PCH DMIC\n");
++			goto HDMI;
++		}
+ 		cpus[cpu_id].dai_name = "DMIC01 Pin";
+ 		init_dai_link(links + link_id, be_id, "dmic01",
+ 			      0, 1, // DMIC only supports capture
+@@ -1107,6 +1121,7 @@ DMIC:
+ 		INC_ID(be_id, cpu_id, link_id);
+ 	}
+ 
++HDMI:
+ 	/* HDMI */
+ 	if (hdmi_num > 0) {
+ 		idisp_components = devm_kcalloc(dev, hdmi_num,
+diff --git a/sound/soc/intel/boards/sof_sdw_common.h b/sound/soc/intel/boards/sof_sdw_common.h
+index f3cb6796363e7..ea60e8ed215c5 100644
+--- a/sound/soc/intel/boards/sof_sdw_common.h
++++ b/sound/soc/intel/boards/sof_sdw_common.h
+@@ -56,6 +56,7 @@ struct sof_sdw_codec_info {
+ 	int amp_num;
+ 	const u8 acpi_id[ACPI_ID_LEN];
+ 	const bool direction[2]; // playback & capture support
++	const bool ignore_pch_dmic;
+ 	const char *dai_name;
+ 	const struct snd_soc_ops *ops;
+ 
+diff --git a/sound/soc/intel/common/soc-acpi-intel-kbl-match.c b/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
+index a4fbe6707ca76..4ed1349affc4d 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
+@@ -113,7 +113,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_kbl_machines[] = {
+ 	},
+ 	{
+ 		.id = "DLGS7219",
+-		.drv_name = "kbl_da7219_max98373",
++		.drv_name = "kbl_da7219_mx98373",
+ 		.fw_filename = "intel/dsp_fw_kbl.bin",
+ 		.machine_quirk = snd_soc_acpi_codec_list,
+ 		.quirk_data = &kbl_7219_98373_codecs,
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index b22674e3a89c9..e677422c10585 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -2804,7 +2804,7 @@ int snd_soc_of_parse_audio_routing(struct snd_soc_card *card,
+ 	if (!routes) {
+ 		dev_err(card->dev,
+ 			"ASoC: Could not allocate DAPM route table\n");
+-		return -EINVAL;
++		return -ENOMEM;
+ 	}
+ 
+ 	for (i = 0; i < num_routes; i++) {
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 91bf339581590..8b8a9aca2912f 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1738,7 +1738,7 @@ static int dpcm_apply_symmetry(struct snd_pcm_substream *fe_substream,
+ 	struct snd_soc_dpcm *dpcm;
+ 	struct snd_soc_pcm_runtime *fe = asoc_substream_to_rtd(fe_substream);
+ 	struct snd_soc_dai *fe_cpu_dai;
+-	int err;
++	int err = 0;
+ 	int i;
+ 
+ 	/* apply symmetry for FE */
+diff --git a/sound/usb/mixer_scarlett_gen2.c b/sound/usb/mixer_scarlett_gen2.c
+index 97e72b3e06c26..1b7c7b754c385 100644
+--- a/sound/usb/mixer_scarlett_gen2.c
++++ b/sound/usb/mixer_scarlett_gen2.c
+@@ -254,10 +254,10 @@ static const struct scarlett2_device_info s6i6_gen2_info = {
+ 	.pad_input_count = 2,
+ 
+ 	.line_out_descrs = {
+-		"Monitor L",
+-		"Monitor R",
+-		"Headphones L",
+-		"Headphones R",
++		"Headphones 1 L",
++		"Headphones 1 R",
++		"Headphones 2 L",
++		"Headphones 2 R",
+ 	},
+ 
+ 	.ports = {
+@@ -356,7 +356,7 @@ static const struct scarlett2_device_info s18i8_gen2_info = {
+ 		},
+ 		[SCARLETT2_PORT_TYPE_PCM] = {
+ 			.id = 0x600,
+-			.num = { 20, 18, 18, 14, 10 },
++			.num = { 8, 18, 18, 14, 10 },
+ 			.src_descr = "PCM %d",
+ 			.src_num_offset = 1,
+ 			.dst_descr = "PCM %02d Capture"
+@@ -1033,11 +1033,10 @@ static int scarlett2_master_volume_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_mixer_data *private = mixer->private_data;
+ 
+-	if (private->vol_updated) {
+-		mutex_lock(&private->data_mutex);
++	mutex_lock(&private->data_mutex);
++	if (private->vol_updated)
+ 		scarlett2_update_volumes(mixer);
+-		mutex_unlock(&private->data_mutex);
+-	}
++	mutex_unlock(&private->data_mutex);
+ 
+ 	ucontrol->value.integer.value[0] = private->master_vol;
+ 	return 0;
+@@ -1051,11 +1050,10 @@ static int scarlett2_volume_ctl_get(struct snd_kcontrol *kctl,
+ 	struct scarlett2_mixer_data *private = mixer->private_data;
+ 	int index = elem->control;
+ 
+-	if (private->vol_updated) {
+-		mutex_lock(&private->data_mutex);
++	mutex_lock(&private->data_mutex);
++	if (private->vol_updated)
+ 		scarlett2_update_volumes(mixer);
+-		mutex_unlock(&private->data_mutex);
+-	}
++	mutex_unlock(&private->data_mutex);
+ 
+ 	ucontrol->value.integer.value[0] = private->vol[index];
+ 	return 0;
+@@ -1186,6 +1184,8 @@ static int scarlett2_sw_hw_enum_ctl_put(struct snd_kcontrol *kctl,
+ 	/* Send SW/HW switch change to the device */
+ 	err = scarlett2_usb_set_config(mixer, SCARLETT2_CONFIG_SW_HW_SWITCH,
+ 				       index, val);
++	if (err == 0)
++		err = 1;
+ 
+ unlock:
+ 	mutex_unlock(&private->data_mutex);
+@@ -1246,6 +1246,8 @@ static int scarlett2_level_enum_ctl_put(struct snd_kcontrol *kctl,
+ 	/* Send switch change to the device */
+ 	err = scarlett2_usb_set_config(mixer, SCARLETT2_CONFIG_LEVEL_SWITCH,
+ 				       index, val);
++	if (err == 0)
++		err = 1;
+ 
+ unlock:
+ 	mutex_unlock(&private->data_mutex);
+@@ -1296,6 +1298,8 @@ static int scarlett2_pad_ctl_put(struct snd_kcontrol *kctl,
+ 	/* Send switch change to the device */
+ 	err = scarlett2_usb_set_config(mixer, SCARLETT2_CONFIG_PAD_SWITCH,
+ 				       index, val);
++	if (err == 0)
++		err = 1;
+ 
+ unlock:
+ 	mutex_unlock(&private->data_mutex);
+@@ -1319,11 +1323,10 @@ static int scarlett2_button_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_mixer_data *private = mixer->private_data;
+ 
+-	if (private->vol_updated) {
+-		mutex_lock(&private->data_mutex);
++	mutex_lock(&private->data_mutex);
++	if (private->vol_updated)
+ 		scarlett2_update_volumes(mixer);
+-		mutex_unlock(&private->data_mutex);
+-	}
++	mutex_unlock(&private->data_mutex);
+ 
+ 	ucontrol->value.enumerated.item[0] = private->buttons[elem->control];
+ 	return 0;
+@@ -1352,6 +1355,8 @@ static int scarlett2_button_ctl_put(struct snd_kcontrol *kctl,
+ 	/* Send switch change to the device */
+ 	err = scarlett2_usb_set_config(mixer, SCARLETT2_CONFIG_BUTTONS,
+ 				       index, val);
++	if (err == 0)
++		err = 1;
+ 
+ unlock:
+ 	mutex_unlock(&private->data_mutex);
+diff --git a/sound/usb/usx2y/usX2Yhwdep.c b/sound/usb/usx2y/usX2Yhwdep.c
+index 22412cd69e985..10868c3fb6561 100644
+--- a/sound/usb/usx2y/usX2Yhwdep.c
++++ b/sound/usb/usx2y/usX2Yhwdep.c
+@@ -29,7 +29,7 @@ static vm_fault_t snd_us428ctls_vm_fault(struct vm_fault *vmf)
+ 		   vmf->pgoff);
+ 	
+ 	offset = vmf->pgoff << PAGE_SHIFT;
+-	vaddr = (char *)((struct usX2Ydev *)vmf->vma->vm_private_data)->us428ctls_sharedmem + offset;
++	vaddr = (char *)((struct usx2ydev *)vmf->vma->vm_private_data)->us428ctls_sharedmem + offset;
+ 	page = virt_to_page(vaddr);
+ 	get_page(page);
+ 	vmf->page = page;
+@@ -47,7 +47,7 @@ static const struct vm_operations_struct us428ctls_vm_ops = {
+ static int snd_us428ctls_mmap(struct snd_hwdep * hw, struct file *filp, struct vm_area_struct *area)
+ {
+ 	unsigned long	size = (unsigned long)(area->vm_end - area->vm_start);
+-	struct usX2Ydev	*us428 = hw->private_data;
++	struct usx2ydev	*us428 = hw->private_data;
+ 
+ 	// FIXME this hwdep interface is used twice: fpga download and mmap for controlling Lights etc. Maybe better using 2 hwdep devs?
+ 	// so as long as the device isn't fully initialised yet we return -EBUSY here.
+@@ -66,7 +66,7 @@ static int snd_us428ctls_mmap(struct snd_hwdep * hw, struct file *filp, struct v
+ 		if (!us428->us428ctls_sharedmem)
+ 			return -ENOMEM;
+ 		memset(us428->us428ctls_sharedmem, -1, sizeof(struct us428ctls_sharedmem));
+-		us428->us428ctls_sharedmem->CtlSnapShotLast = -2;
++		us428->us428ctls_sharedmem->ctl_snapshot_last = -2;
+ 	}
+ 	area->vm_ops = &us428ctls_vm_ops;
+ 	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+@@ -77,21 +77,21 @@ static int snd_us428ctls_mmap(struct snd_hwdep * hw, struct file *filp, struct v
+ static __poll_t snd_us428ctls_poll(struct snd_hwdep *hw, struct file *file, poll_table *wait)
+ {
+ 	__poll_t	mask = 0;
+-	struct usX2Ydev	*us428 = hw->private_data;
++	struct usx2ydev	*us428 = hw->private_data;
+ 	struct us428ctls_sharedmem *shm = us428->us428ctls_sharedmem;
+ 	if (us428->chip_status & USX2Y_STAT_CHIP_HUP)
+ 		return EPOLLHUP;
+ 
+ 	poll_wait(file, &us428->us428ctls_wait_queue_head, wait);
+ 
+-	if (shm != NULL && shm->CtlSnapShotLast != shm->CtlSnapShotRed)
++	if (shm != NULL && shm->ctl_snapshot_last != shm->ctl_snapshot_red)
+ 		mask |= EPOLLIN;
+ 
+ 	return mask;
+ }
+ 
+ 
+-static int snd_usX2Y_hwdep_dsp_status(struct snd_hwdep *hw,
++static int snd_usx2y_hwdep_dsp_status(struct snd_hwdep *hw,
+ 				      struct snd_hwdep_dsp_status *info)
+ {
+ 	static const char * const type_ids[USX2Y_TYPE_NUMS] = {
+@@ -99,7 +99,7 @@ static int snd_usX2Y_hwdep_dsp_status(struct snd_hwdep *hw,
+ 		[USX2Y_TYPE_224] = "us224",
+ 		[USX2Y_TYPE_428] = "us428",
+ 	};
+-	struct usX2Ydev	*us428 = hw->private_data;
++	struct usx2ydev	*us428 = hw->private_data;
+ 	int id = -1;
+ 
+ 	switch (le16_to_cpu(us428->dev->descriptor.idProduct)) {
+@@ -124,7 +124,7 @@ static int snd_usX2Y_hwdep_dsp_status(struct snd_hwdep *hw,
+ }
+ 
+ 
+-static int usX2Y_create_usbmidi(struct snd_card *card)
++static int usx2y_create_usbmidi(struct snd_card *card)
+ {
+ 	static const struct snd_usb_midi_endpoint_info quirk_data_1 = {
+ 		.out_ep = 0x06,
+@@ -152,28 +152,28 @@ static int usX2Y_create_usbmidi(struct snd_card *card)
+        		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+ 		.data = &quirk_data_2
+ 	};
+-	struct usb_device *dev = usX2Y(card)->dev;
++	struct usb_device *dev = usx2y(card)->dev;
+ 	struct usb_interface *iface = usb_ifnum_to_if(dev, 0);
+ 	const struct snd_usb_audio_quirk *quirk =
+ 		le16_to_cpu(dev->descriptor.idProduct) == USB_ID_US428 ?
+ 		&quirk_2 : &quirk_1;
+ 
+-	snd_printdd("usX2Y_create_usbmidi \n");
+-	return snd_usbmidi_create(card, iface, &usX2Y(card)->midi_list, quirk);
++	snd_printdd("usx2y_create_usbmidi \n");
++	return snd_usbmidi_create(card, iface, &usx2y(card)->midi_list, quirk);
+ }
+ 
+-static int usX2Y_create_alsa_devices(struct snd_card *card)
++static int usx2y_create_alsa_devices(struct snd_card *card)
+ {
+ 	int err;
+ 
+ 	do {
+-		if ((err = usX2Y_create_usbmidi(card)) < 0) {
+-			snd_printk(KERN_ERR "usX2Y_create_alsa_devices: usX2Y_create_usbmidi error %i \n", err);
++		if ((err = usx2y_create_usbmidi(card)) < 0) {
++			snd_printk(KERN_ERR "usx2y_create_alsa_devices: usx2y_create_usbmidi error %i \n", err);
+ 			break;
+ 		}
+-		if ((err = usX2Y_audio_create(card)) < 0) 
++		if ((err = usx2y_audio_create(card)) < 0) 
+ 			break;
+-		if ((err = usX2Y_hwdep_pcm_new(card)) < 0)
++		if ((err = usx2y_hwdep_pcm_new(card)) < 0)
+ 			break;
+ 		if ((err = snd_card_register(card)) < 0)
+ 			break;
+@@ -182,10 +182,10 @@ static int usX2Y_create_alsa_devices(struct snd_card *card)
+ 	return err;
+ } 
+ 
+-static int snd_usX2Y_hwdep_dsp_load(struct snd_hwdep *hw,
++static int snd_usx2y_hwdep_dsp_load(struct snd_hwdep *hw,
+ 				    struct snd_hwdep_dsp_image *dsp)
+ {
+-	struct usX2Ydev *priv = hw->private_data;
++	struct usx2ydev *priv = hw->private_data;
+ 	struct usb_device* dev = priv->dev;
+ 	int lret, err;
+ 	char *buf;
+@@ -206,19 +206,19 @@ static int snd_usX2Y_hwdep_dsp_load(struct snd_hwdep *hw,
+ 		return err;
+ 	if (dsp->index == 1) {
+ 		msleep(250);				// give the device some time
+-		err = usX2Y_AsyncSeq04_init(priv);
++		err = usx2y_async_seq04_init(priv);
+ 		if (err) {
+-			snd_printk(KERN_ERR "usX2Y_AsyncSeq04_init error \n");
++			snd_printk(KERN_ERR "usx2y_async_seq04_init error \n");
+ 			return err;
+ 		}
+-		err = usX2Y_In04_init(priv);
++		err = usx2y_in04_init(priv);
+ 		if (err) {
+-			snd_printk(KERN_ERR "usX2Y_In04_init error \n");
++			snd_printk(KERN_ERR "usx2y_in04_init error \n");
+ 			return err;
+ 		}
+-		err = usX2Y_create_alsa_devices(hw->card);
++		err = usx2y_create_alsa_devices(hw->card);
+ 		if (err) {
+-			snd_printk(KERN_ERR "usX2Y_create_alsa_devices error %i \n", err);
++			snd_printk(KERN_ERR "usx2y_create_alsa_devices error %i \n", err);
+ 			snd_card_free(hw->card);
+ 			return err;
+ 		}
+@@ -229,7 +229,7 @@ static int snd_usX2Y_hwdep_dsp_load(struct snd_hwdep *hw,
+ }
+ 
+ 
+-int usX2Y_hwdep_new(struct snd_card *card, struct usb_device* device)
++int usx2y_hwdep_new(struct snd_card *card, struct usb_device* device)
+ {
+ 	int err;
+ 	struct snd_hwdep *hw;
+@@ -238,9 +238,9 @@ int usX2Y_hwdep_new(struct snd_card *card, struct usb_device* device)
+ 		return err;
+ 
+ 	hw->iface = SNDRV_HWDEP_IFACE_USX2Y;
+-	hw->private_data = usX2Y(card);
+-	hw->ops.dsp_status = snd_usX2Y_hwdep_dsp_status;
+-	hw->ops.dsp_load = snd_usX2Y_hwdep_dsp_load;
++	hw->private_data = usx2y(card);
++	hw->ops.dsp_status = snd_usx2y_hwdep_dsp_status;
++	hw->ops.dsp_load = snd_usx2y_hwdep_dsp_load;
+ 	hw->ops.mmap = snd_us428ctls_mmap;
+ 	hw->ops.poll = snd_us428ctls_poll;
+ 	hw->exclusive = 1;
+diff --git a/sound/usb/usx2y/usX2Yhwdep.h b/sound/usb/usx2y/usX2Yhwdep.h
+index 457199b5ed03b..34cef625712c6 100644
+--- a/sound/usb/usx2y/usX2Yhwdep.h
++++ b/sound/usb/usx2y/usX2Yhwdep.h
+@@ -2,6 +2,6 @@
+ #ifndef USX2YHWDEP_H
+ #define USX2YHWDEP_H
+ 
+-int usX2Y_hwdep_new(struct snd_card *card, struct usb_device* device);
++int usx2y_hwdep_new(struct snd_card *card, struct usb_device* device);
+ 
+ #endif
+diff --git a/sound/usb/usx2y/usb_stream.c b/sound/usb/usx2y/usb_stream.c
+index 091c071b270af..cff684942c4f0 100644
+--- a/sound/usb/usx2y/usb_stream.c
++++ b/sound/usb/usx2y/usb_stream.c
+@@ -142,8 +142,11 @@ void usb_stream_free(struct usb_stream_kernel *sk)
+ 	if (!s)
+ 		return;
+ 
+-	free_pages_exact(sk->write_page, s->write_size);
+-	sk->write_page = NULL;
++	if (sk->write_page) {
++		free_pages_exact(sk->write_page, s->write_size);
++		sk->write_page = NULL;
++	}
++
+ 	free_pages_exact(s, s->read_size);
+ 	sk->s = NULL;
+ }
+diff --git a/sound/usb/usx2y/usbus428ctldefs.h b/sound/usb/usx2y/usbus428ctldefs.h
+index 5a7518ea3aeb4..7366a940ffbba 100644
+--- a/sound/usb/usx2y/usbus428ctldefs.h
++++ b/sound/usb/usx2y/usbus428ctldefs.h
+@@ -4,28 +4,28 @@
+  * Copyright (c) 2003 by Karsten Wiese <annabellesgarden@yahoo.de>
+  */
+ 
+-enum E_In84{
+-	eFader0 = 0,
+-	eFader1,
+-	eFader2,
+-	eFader3,
+-	eFader4,
+-	eFader5,
+-	eFader6,
+-	eFader7,
+-	eFaderM,
+-	eTransport,
+-	eModifier = 10,
+-	eFilterSelect,
+-	eSelect,
+-	eMute,
++enum E_IN84 {
++	E_FADER_0 = 0,
++	E_FADER_1,
++	E_FADER_2,
++	E_FADER_3,
++	E_FADER_4,
++	E_FADER_5,
++	E_FADER_6,
++	E_FADER_7,
++	E_FADER_M,
++	E_TRANSPORT,
++	E_MODIFIER = 10,
++	E_FILTER_SELECT,
++	E_SELECT,
++	E_MUTE,
+ 
+-	eSwitch   = 15,
+-	eWheelGain,
+-	eWheelFreq,
+-	eWheelQ,
+-	eWheelPan,
+-	eWheel    = 20
++	E_SWITCH   = 15,
++	E_WHEEL_GAIN,
++	E_WHEEL_FREQ,
++	E_WHEEL_Q,
++	E_WHEEL_PAN,
++	E_WHEEL    = 20
+ };
+ 
+ #define T_RECORD   1
+@@ -39,53 +39,53 @@ enum E_In84{
+ 
+ 
+ struct us428_ctls {
+-	unsigned char   Fader[9];
+-	unsigned char 	Transport;
+-	unsigned char 	Modifier;
+-	unsigned char 	FilterSelect;
+-	unsigned char 	Select;
+-	unsigned char   Mute;
+-	unsigned char   UNKNOWN;
+-	unsigned char   Switch;	     
+-	unsigned char   Wheel[5];
++	unsigned char   fader[9];
++	unsigned char 	transport;
++	unsigned char 	modifier;
++	unsigned char 	filters_elect;
++	unsigned char 	select;
++	unsigned char   mute;
++	unsigned char   unknown;
++	unsigned char   wswitch;	     
++	unsigned char   wheel[5];
+ };
+ 
+-struct us428_setByte {
+-	unsigned char Offset,
+-		Value;
++struct us428_set_byte {
++	unsigned char offset,
++		value;
+ };
+ 
+ enum {
+-	eLT_Volume = 0,
+-	eLT_Light
++	ELT_VOLUME = 0,
++	ELT_LIGHT
+ };
+ 
+-struct usX2Y_volume {
+-	unsigned char Channel,
+-		LH,
+-		LL,
+-		RH,
+-		RL;
++struct usx2y_volume {
++	unsigned char channel,
++		lh,
++		ll,
++		rh,
++		rl;
+ };
+ 
+ struct us428_lights {
+-	struct us428_setByte Light[7];
++	struct us428_set_byte light[7];
+ };
+ 
+ struct us428_p4out {
+ 	char type;
+ 	union {
+-		struct usX2Y_volume vol;
++		struct usx2y_volume vol;
+ 		struct us428_lights lights;
+ 	} val;
+ };
+ 
+-#define N_us428_ctl_BUFS 16
+-#define N_us428_p4out_BUFS 16
+-struct us428ctls_sharedmem{
+-	struct us428_ctls	CtlSnapShot[N_us428_ctl_BUFS];
+-	int			CtlSnapShotDiffersAt[N_us428_ctl_BUFS];
+-	int			CtlSnapShotLast, CtlSnapShotRed;
+-	struct us428_p4out	p4out[N_us428_p4out_BUFS];
+-	int			p4outLast, p4outSent;
++#define N_US428_CTL_BUFS 16
++#define N_US428_P4OUT_BUFS 16
++struct us428ctls_sharedmem {
++	struct us428_ctls	ctl_snapshot[N_US428_CTL_BUFS];
++	int			ctl_snapshot_differs_at[N_US428_CTL_BUFS];
++	int			ctl_snapshot_last, ctl_snapshot_red;
++	struct us428_p4out	p4out[N_US428_P4OUT_BUFS];
++	int			p4out_last, p4out_sent;
+ };
+diff --git a/sound/usb/usx2y/usbusx2y.c b/sound/usb/usx2y/usbusx2y.c
+index c54158146917b..6d910f23da0d0 100644
+--- a/sound/usb/usx2y/usbusx2y.c
++++ b/sound/usb/usx2y/usbusx2y.c
+@@ -17,7 +17,7 @@
+ 
+ 2004-10-26 Karsten Wiese
+ 	Version 0.8.6:
+-	wake_up() process waiting in usX2Y_urbs_start() on error.
++	wake_up() process waiting in usx2y_urbs_start() on error.
+ 
+ 2004-10-21 Karsten Wiese
+ 	Version 0.8.5:
+@@ -48,7 +48,7 @@
+ 2004-06-12 Karsten Wiese
+ 	Version 0.6.3:
+ 	Made it thus the following rule is enforced:
+-	"All pcm substreams of one usX2Y have to operate at the same rate & format."
++	"All pcm substreams of one usx2y have to operate at the same rate & format."
+ 
+ 2004-04-06 Karsten Wiese
+ 	Version 0.6.0:
+@@ -151,161 +151,161 @@ module_param_array(enable, bool, NULL, 0444);
+ MODULE_PARM_DESC(enable, "Enable "NAME_ALLCAPS".");
+ 
+ 
+-static int snd_usX2Y_card_used[SNDRV_CARDS];
++static int snd_usx2y_card_used[SNDRV_CARDS];
+ 
+-static void usX2Y_usb_disconnect(struct usb_device* usb_device, void* ptr);
+-static void snd_usX2Y_card_private_free(struct snd_card *card);
++static void usx2y_usb_disconnect(struct usb_device* usb_device, void* ptr);
++static void snd_usx2y_card_private_free(struct snd_card *card);
+ 
+ /* 
+  * pipe 4 is used for switching the lamps, setting samplerate, volumes ....   
+  */
+-static void i_usX2Y_Out04Int(struct urb *urb)
++static void i_usx2y_out04_int(struct urb *urb)
+ {
+ #ifdef CONFIG_SND_DEBUG
+ 	if (urb->status) {
+ 		int 		i;
+-		struct usX2Ydev *usX2Y = urb->context;
+-		for (i = 0; i < 10 && usX2Y->AS04.urb[i] != urb; i++);
+-		snd_printdd("i_usX2Y_Out04Int() urb %i status=%i\n", i, urb->status);
++		struct usx2ydev *usx2y = urb->context;
++		for (i = 0; i < 10 && usx2y->as04.urb[i] != urb; i++);
++		snd_printdd("i_usx2y_out04_int() urb %i status=%i\n", i, urb->status);
+ 	}
+ #endif
+ }
+ 
+-static void i_usX2Y_In04Int(struct urb *urb)
++static void i_usx2y_in04_int(struct urb *urb)
+ {
+ 	int			err = 0;
+-	struct usX2Ydev		*usX2Y = urb->context;
+-	struct us428ctls_sharedmem	*us428ctls = usX2Y->us428ctls_sharedmem;
++	struct usx2ydev		*usx2y = urb->context;
++	struct us428ctls_sharedmem	*us428ctls = usx2y->us428ctls_sharedmem;
+ 
+-	usX2Y->In04IntCalls++;
++	usx2y->in04_int_calls++;
+ 
+ 	if (urb->status) {
+ 		snd_printdd("Interrupt Pipe 4 came back with status=%i\n", urb->status);
+ 		return;
+ 	}
+ 
+-	//	printk("%i:0x%02X ", 8, (int)((unsigned char*)usX2Y->In04Buf)[8]); Master volume shows 0 here if fader is at max during boot ?!?
++	//	printk("%i:0x%02X ", 8, (int)((unsigned char*)usx2y->in04_buf)[8]); Master volume shows 0 here if fader is at max during boot ?!?
+ 	if (us428ctls) {
+ 		int diff = -1;
+-		if (-2 == us428ctls->CtlSnapShotLast) {
++		if (-2 == us428ctls->ctl_snapshot_last) {
+ 			diff = 0;
+-			memcpy(usX2Y->In04Last, usX2Y->In04Buf, sizeof(usX2Y->In04Last));
+-			us428ctls->CtlSnapShotLast = -1;
++			memcpy(usx2y->in04_last, usx2y->in04_buf, sizeof(usx2y->in04_last));
++			us428ctls->ctl_snapshot_last = -1;
+ 		} else {
+ 			int i;
+ 			for (i = 0; i < 21; i++) {
+-				if (usX2Y->In04Last[i] != ((char*)usX2Y->In04Buf)[i]) {
++				if (usx2y->in04_last[i] != ((char*)usx2y->in04_buf)[i]) {
+ 					if (diff < 0)
+ 						diff = i;
+-					usX2Y->In04Last[i] = ((char*)usX2Y->In04Buf)[i];
++					usx2y->in04_last[i] = ((char*)usx2y->in04_buf)[i];
+ 				}
+ 			}
+ 		}
+ 		if (0 <= diff) {
+-			int n = us428ctls->CtlSnapShotLast + 1;
+-			if (n >= N_us428_ctl_BUFS  ||  n < 0)
++			int n = us428ctls->ctl_snapshot_last + 1;
++			if (n >= N_US428_CTL_BUFS  ||  n < 0)
+ 				n = 0;
+-			memcpy(us428ctls->CtlSnapShot + n, usX2Y->In04Buf, sizeof(us428ctls->CtlSnapShot[0]));
+-			us428ctls->CtlSnapShotDiffersAt[n] = diff;
+-			us428ctls->CtlSnapShotLast = n;
+-			wake_up(&usX2Y->us428ctls_wait_queue_head);
++			memcpy(us428ctls->ctl_snapshot + n, usx2y->in04_buf, sizeof(us428ctls->ctl_snapshot[0]));
++			us428ctls->ctl_snapshot_differs_at[n] = diff;
++			us428ctls->ctl_snapshot_last = n;
++			wake_up(&usx2y->us428ctls_wait_queue_head);
+ 		}
+ 	}
+ 	
+ 	
+-	if (usX2Y->US04) {
+-		if (0 == usX2Y->US04->submitted)
++	if (usx2y->us04) {
++		if (0 == usx2y->us04->submitted)
+ 			do {
+-				err = usb_submit_urb(usX2Y->US04->urb[usX2Y->US04->submitted++], GFP_ATOMIC);
+-			} while (!err && usX2Y->US04->submitted < usX2Y->US04->len);
++				err = usb_submit_urb(usx2y->us04->urb[usx2y->us04->submitted++], GFP_ATOMIC);
++			} while (!err && usx2y->us04->submitted < usx2y->us04->len);
+ 	} else
+-		if (us428ctls && us428ctls->p4outLast >= 0 && us428ctls->p4outLast < N_us428_p4out_BUFS) {
+-			if (us428ctls->p4outLast != us428ctls->p4outSent) {
+-				int j, send = us428ctls->p4outSent + 1;
+-				if (send >= N_us428_p4out_BUFS)
++		if (us428ctls && us428ctls->p4out_last >= 0 && us428ctls->p4out_last < N_US428_P4OUT_BUFS) {
++			if (us428ctls->p4out_last != us428ctls->p4out_sent) {
++				int j, send = us428ctls->p4out_sent + 1;
++				if (send >= N_US428_P4OUT_BUFS)
+ 					send = 0;
+-				for (j = 0; j < URBS_AsyncSeq  &&  !err; ++j)
+-					if (0 == usX2Y->AS04.urb[j]->status) {
++				for (j = 0; j < URBS_ASYNC_SEQ  &&  !err; ++j)
++					if (0 == usx2y->as04.urb[j]->status) {
+ 						struct us428_p4out *p4out = us428ctls->p4out + send;	// FIXME if more than 1 p4out is new, 1 gets lost.
+-						usb_fill_bulk_urb(usX2Y->AS04.urb[j], usX2Y->dev,
+-								  usb_sndbulkpipe(usX2Y->dev, 0x04), &p4out->val.vol,
+-								  p4out->type == eLT_Light ? sizeof(struct us428_lights) : 5,
+-								  i_usX2Y_Out04Int, usX2Y);
+-						err = usb_submit_urb(usX2Y->AS04.urb[j], GFP_ATOMIC);
+-						us428ctls->p4outSent = send;
++						usb_fill_bulk_urb(usx2y->as04.urb[j], usx2y->dev,
++								  usb_sndbulkpipe(usx2y->dev, 0x04), &p4out->val.vol,
++								  p4out->type == ELT_LIGHT ? sizeof(struct us428_lights) : 5,
++								  i_usx2y_out04_int, usx2y);
++						err = usb_submit_urb(usx2y->as04.urb[j], GFP_ATOMIC);
++						us428ctls->p4out_sent = send;
+ 						break;
+ 					}
+ 			}
+ 		}
+ 
+ 	if (err)
+-		snd_printk(KERN_ERR "In04Int() usb_submit_urb err=%i\n", err);
++		snd_printk(KERN_ERR "in04_int() usb_submit_urb err=%i\n", err);
+ 
+-	urb->dev = usX2Y->dev;
++	urb->dev = usx2y->dev;
+ 	usb_submit_urb(urb, GFP_ATOMIC);
+ }
+ 
+ /*
+  * Prepare some urbs
+  */
+-int usX2Y_AsyncSeq04_init(struct usX2Ydev *usX2Y)
++int usx2y_async_seq04_init(struct usx2ydev *usx2y)
+ {
+ 	int	err = 0,
+ 		i;
+ 
+-	usX2Y->AS04.buffer = kmalloc_array(URBS_AsyncSeq,
+-					   URB_DataLen_AsyncSeq, GFP_KERNEL);
+-	if (NULL == usX2Y->AS04.buffer) {
++	usx2y->as04.buffer = kmalloc_array(URBS_ASYNC_SEQ,
++					   URB_DATA_LEN_ASYNC_SEQ, GFP_KERNEL);
++	if (NULL == usx2y->as04.buffer) {
+ 		err = -ENOMEM;
+ 	} else
+-		for (i = 0; i < URBS_AsyncSeq; ++i) {
+-			if (NULL == (usX2Y->AS04.urb[i] = usb_alloc_urb(0, GFP_KERNEL))) {
++		for (i = 0; i < URBS_ASYNC_SEQ; ++i) {
++			if (NULL == (usx2y->as04.urb[i] = usb_alloc_urb(0, GFP_KERNEL))) {
+ 				err = -ENOMEM;
+ 				break;
+ 			}
+-			usb_fill_bulk_urb(	usX2Y->AS04.urb[i], usX2Y->dev,
+-						usb_sndbulkpipe(usX2Y->dev, 0x04),
+-						usX2Y->AS04.buffer + URB_DataLen_AsyncSeq*i, 0,
+-						i_usX2Y_Out04Int, usX2Y
++			usb_fill_bulk_urb(	usx2y->as04.urb[i], usx2y->dev,
++						usb_sndbulkpipe(usx2y->dev, 0x04),
++						usx2y->as04.buffer + URB_DATA_LEN_ASYNC_SEQ*i, 0,
++						i_usx2y_out04_int, usx2y
+ 				);
+-			err = usb_urb_ep_type_check(usX2Y->AS04.urb[i]);
++			err = usb_urb_ep_type_check(usx2y->as04.urb[i]);
+ 			if (err < 0)
+ 				break;
+ 		}
+ 	return err;
+ }
+ 
+-int usX2Y_In04_init(struct usX2Ydev *usX2Y)
++int usx2y_in04_init(struct usx2ydev *usx2y)
+ {
+-	if (! (usX2Y->In04urb = usb_alloc_urb(0, GFP_KERNEL)))
++	if (! (usx2y->in04_urb = usb_alloc_urb(0, GFP_KERNEL)))
+ 		return -ENOMEM;
+ 
+-	if (! (usX2Y->In04Buf = kmalloc(21, GFP_KERNEL)))
++	if (! (usx2y->in04_buf = kmalloc(21, GFP_KERNEL)))
+ 		return -ENOMEM;
+ 	 
+-	init_waitqueue_head(&usX2Y->In04WaitQueue);
+-	usb_fill_int_urb(usX2Y->In04urb, usX2Y->dev, usb_rcvintpipe(usX2Y->dev, 0x4),
+-			 usX2Y->In04Buf, 21,
+-			 i_usX2Y_In04Int, usX2Y,
++	init_waitqueue_head(&usx2y->in04_wait_queue);
++	usb_fill_int_urb(usx2y->in04_urb, usx2y->dev, usb_rcvintpipe(usx2y->dev, 0x4),
++			 usx2y->in04_buf, 21,
++			 i_usx2y_in04_int, usx2y,
+ 			 10);
+-	if (usb_urb_ep_type_check(usX2Y->In04urb))
++	if (usb_urb_ep_type_check(usx2y->in04_urb))
+ 		return -EINVAL;
+-	return usb_submit_urb(usX2Y->In04urb, GFP_KERNEL);
++	return usb_submit_urb(usx2y->in04_urb, GFP_KERNEL);
+ }
+ 
+-static void usX2Y_unlinkSeq(struct snd_usX2Y_AsyncSeq *S)
++static void usx2y_unlinkseq(struct snd_usx2y_async_seq *s)
+ {
+ 	int	i;
+-	for (i = 0; i < URBS_AsyncSeq; ++i) {
+-		usb_kill_urb(S->urb[i]);
+-		usb_free_urb(S->urb[i]);
+-		S->urb[i] = NULL;
++	for (i = 0; i < URBS_ASYNC_SEQ; ++i) {
++		usb_kill_urb(s->urb[i]);
++		usb_free_urb(s->urb[i]);
++		s->urb[i] = NULL;
+ 	}
+-	kfree(S->buffer);
++	kfree(s->buffer);
+ }
+ 
+ 
+-static const struct usb_device_id snd_usX2Y_usb_id_table[] = {
++static const struct usb_device_id snd_usx2y_usb_id_table[] = {
+ 	{
+ 		.match_flags =	USB_DEVICE_ID_MATCH_DEVICE,
+ 		.idVendor =	0x1604,
+@@ -324,7 +324,7 @@ static const struct usb_device_id snd_usX2Y_usb_id_table[] = {
+ 	{ /* terminator */ }
+ };
+ 
+-static int usX2Y_create_card(struct usb_device *device,
++static int usx2y_create_card(struct usb_device *device,
+ 			     struct usb_interface *intf,
+ 			     struct snd_card **cardp)
+ {
+@@ -333,20 +333,20 @@ static int usX2Y_create_card(struct usb_device *device,
+ 	int err;
+ 
+ 	for (dev = 0; dev < SNDRV_CARDS; ++dev)
+-		if (enable[dev] && !snd_usX2Y_card_used[dev])
++		if (enable[dev] && !snd_usx2y_card_used[dev])
+ 			break;
+ 	if (dev >= SNDRV_CARDS)
+ 		return -ENODEV;
+ 	err = snd_card_new(&intf->dev, index[dev], id[dev], THIS_MODULE,
+-			   sizeof(struct usX2Ydev), &card);
++			   sizeof(struct usx2ydev), &card);
+ 	if (err < 0)
+ 		return err;
+-	snd_usX2Y_card_used[usX2Y(card)->card_index = dev] = 1;
+-	card->private_free = snd_usX2Y_card_private_free;
+-	usX2Y(card)->dev = device;
+-	init_waitqueue_head(&usX2Y(card)->prepare_wait_queue);
+-	mutex_init(&usX2Y(card)->pcm_mutex);
+-	INIT_LIST_HEAD(&usX2Y(card)->midi_list);
++	snd_usx2y_card_used[usx2y(card)->card_index = dev] = 1;
++	card->private_free = snd_usx2y_card_private_free;
++	usx2y(card)->dev = device;
++	init_waitqueue_head(&usx2y(card)->prepare_wait_queue);
++	mutex_init(&usx2y(card)->pcm_mutex);
++	INIT_LIST_HEAD(&usx2y(card)->midi_list);
+ 	strcpy(card->driver, "USB "NAME_ALLCAPS"");
+ 	sprintf(card->shortname, "TASCAM "NAME_ALLCAPS"");
+ 	sprintf(card->longname, "%s (%x:%x if %d at %03d/%03d)",
+@@ -354,14 +354,14 @@ static int usX2Y_create_card(struct usb_device *device,
+ 		le16_to_cpu(device->descriptor.idVendor),
+ 		le16_to_cpu(device->descriptor.idProduct),
+ 		0,//us428(card)->usbmidi.ifnum,
+-		usX2Y(card)->dev->bus->busnum, usX2Y(card)->dev->devnum
++		usx2y(card)->dev->bus->busnum, usx2y(card)->dev->devnum
+ 		);
+ 	*cardp = card;
+ 	return 0;
+ }
+ 
+ 
+-static int usX2Y_usb_probe(struct usb_device *device,
++static int usx2y_usb_probe(struct usb_device *device,
+ 			   struct usb_interface *intf,
+ 			   const struct usb_device_id *device_id,
+ 			   struct snd_card **cardp)
+@@ -376,10 +376,10 @@ static int usX2Y_usb_probe(struct usb_device *device,
+ 	     le16_to_cpu(device->descriptor.idProduct) != USB_ID_US428))
+ 		return -EINVAL;
+ 
+-	err = usX2Y_create_card(device, intf, &card);
++	err = usx2y_create_card(device, intf, &card);
+ 	if (err < 0)
+ 		return err;
+-	if ((err = usX2Y_hwdep_new(card, device)) < 0  ||
++	if ((err = usx2y_hwdep_new(card, device)) < 0  ||
+ 	    (err = snd_card_register(card)) < 0) {
+ 		snd_card_free(card);
+ 		return err;
+@@ -391,64 +391,64 @@ static int usX2Y_usb_probe(struct usb_device *device,
+ /*
+  * new 2.5 USB kernel API
+  */
+-static int snd_usX2Y_probe(struct usb_interface *intf, const struct usb_device_id *id)
++static int snd_usx2y_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ {
+ 	struct snd_card *card;
+ 	int err;
+ 
+-	err = usX2Y_usb_probe(interface_to_usbdev(intf), intf, id, &card);
++	err = usx2y_usb_probe(interface_to_usbdev(intf), intf, id, &card);
+ 	if (err < 0)
+ 		return err;
+ 	dev_set_drvdata(&intf->dev, card);
+ 	return 0;
+ }
+ 
+-static void snd_usX2Y_disconnect(struct usb_interface *intf)
++static void snd_usx2y_disconnect(struct usb_interface *intf)
+ {
+-	usX2Y_usb_disconnect(interface_to_usbdev(intf),
++	usx2y_usb_disconnect(interface_to_usbdev(intf),
+ 				 usb_get_intfdata(intf));
+ }
+ 
+-MODULE_DEVICE_TABLE(usb, snd_usX2Y_usb_id_table);
+-static struct usb_driver snd_usX2Y_usb_driver = {
++MODULE_DEVICE_TABLE(usb, snd_usx2y_usb_id_table);
++static struct usb_driver snd_usx2y_usb_driver = {
+ 	.name =		"snd-usb-usx2y",
+-	.probe =	snd_usX2Y_probe,
+-	.disconnect =	snd_usX2Y_disconnect,
+-	.id_table =	snd_usX2Y_usb_id_table,
++	.probe =	snd_usx2y_probe,
++	.disconnect =	snd_usx2y_disconnect,
++	.id_table =	snd_usx2y_usb_id_table,
+ };
+ 
+-static void snd_usX2Y_card_private_free(struct snd_card *card)
++static void snd_usx2y_card_private_free(struct snd_card *card)
+ {
+-	kfree(usX2Y(card)->In04Buf);
+-	usb_free_urb(usX2Y(card)->In04urb);
+-	if (usX2Y(card)->us428ctls_sharedmem)
+-		free_pages_exact(usX2Y(card)->us428ctls_sharedmem,
+-				 sizeof(*usX2Y(card)->us428ctls_sharedmem));
+-	if (usX2Y(card)->card_index >= 0  &&  usX2Y(card)->card_index < SNDRV_CARDS)
+-		snd_usX2Y_card_used[usX2Y(card)->card_index] = 0;
++	kfree(usx2y(card)->in04_buf);
++	usb_free_urb(usx2y(card)->in04_urb);
++	if (usx2y(card)->us428ctls_sharedmem)
++		free_pages_exact(usx2y(card)->us428ctls_sharedmem,
++				 sizeof(*usx2y(card)->us428ctls_sharedmem));
++	if (usx2y(card)->card_index >= 0  &&  usx2y(card)->card_index < SNDRV_CARDS)
++		snd_usx2y_card_used[usx2y(card)->card_index] = 0;
+ }
+ 
+ /*
+  * Frees the device.
+  */
+-static void usX2Y_usb_disconnect(struct usb_device *device, void* ptr)
++static void usx2y_usb_disconnect(struct usb_device *device, void* ptr)
+ {
+ 	if (ptr) {
+ 		struct snd_card *card = ptr;
+-		struct usX2Ydev *usX2Y = usX2Y(card);
++		struct usx2ydev *usx2y = usx2y(card);
+ 		struct list_head *p;
+-		usX2Y->chip_status = USX2Y_STAT_CHIP_HUP;
+-		usX2Y_unlinkSeq(&usX2Y->AS04);
+-		usb_kill_urb(usX2Y->In04urb);
++		usx2y->chip_status = USX2Y_STAT_CHIP_HUP;
++		usx2y_unlinkseq(&usx2y->as04);
++		usb_kill_urb(usx2y->in04_urb);
+ 		snd_card_disconnect(card);
+ 		/* release the midi resources */
+-		list_for_each(p, &usX2Y->midi_list) {
++		list_for_each(p, &usx2y->midi_list) {
+ 			snd_usbmidi_disconnect(p);
+ 		}
+-		if (usX2Y->us428ctls_sharedmem) 
+-			wake_up(&usX2Y->us428ctls_wait_queue_head);
++		if (usx2y->us428ctls_sharedmem) 
++			wake_up(&usx2y->us428ctls_wait_queue_head);
+ 		snd_card_free(card);
+ 	}
+ }
+ 
+-module_usb_driver(snd_usX2Y_usb_driver);
++module_usb_driver(snd_usx2y_usb_driver);
+diff --git a/sound/usb/usx2y/usbusx2y.h b/sound/usb/usx2y/usbusx2y.h
+index 144b85f57bd2a..c330af628bccd 100644
+--- a/sound/usb/usx2y/usbusx2y.h
++++ b/sound/usb/usx2y/usbusx2y.h
+@@ -8,14 +8,14 @@
+ #define NRURBS	        2	
+ 
+ 
+-#define URBS_AsyncSeq 10
+-#define URB_DataLen_AsyncSeq 32
+-struct snd_usX2Y_AsyncSeq {
+-	struct urb	*urb[URBS_AsyncSeq];
++#define URBS_ASYNC_SEQ 10
++#define URB_DATA_LEN_ASYNC_SEQ 32
++struct snd_usx2y_async_seq {
++	struct urb	*urb[URBS_ASYNC_SEQ];
+ 	char		*buffer;
+ };
+ 
+-struct snd_usX2Y_urbSeq {
++struct snd_usx2y_urb_seq {
+ 	int	submitted;
+ 	int	len;
+ 	struct urb	*urb[];
+@@ -23,17 +23,17 @@ struct snd_usX2Y_urbSeq {
+ 
+ #include "usx2yhwdeppcm.h"
+ 
+-struct usX2Ydev {
++struct usx2ydev {
+ 	struct usb_device	*dev;
+ 	int			card_index;
+ 	int			stride;
+-	struct urb		*In04urb;
+-	void			*In04Buf;
+-	char			In04Last[24];
+-	unsigned		In04IntCalls;
+-	struct snd_usX2Y_urbSeq	*US04;
+-	wait_queue_head_t	In04WaitQueue;
+-	struct snd_usX2Y_AsyncSeq	AS04;
++	struct urb		*in04_urb;
++	void			*in04_buf;
++	char			in04_last[24];
++	unsigned		in04_int_calls;
++	struct snd_usx2y_urb_seq	*us04;
++	wait_queue_head_t	in04_wait_queue;
++	struct snd_usx2y_async_seq	as04;
+ 	unsigned int		rate,
+ 				format;
+ 	int			chip_status;
+@@ -41,9 +41,9 @@ struct usX2Ydev {
+ 	struct us428ctls_sharedmem	*us428ctls_sharedmem;
+ 	int			wait_iso_frame;
+ 	wait_queue_head_t	us428ctls_wait_queue_head;
+-	struct snd_usX2Y_hwdep_pcm_shm	*hwdep_pcm_shm;
+-	struct snd_usX2Y_substream	*subs[4];
+-	struct snd_usX2Y_substream	* volatile  prepare_subs;
++	struct snd_usx2y_hwdep_pcm_shm	*hwdep_pcm_shm;
++	struct snd_usx2y_substream	*subs[4];
++	struct snd_usx2y_substream	* volatile  prepare_subs;
+ 	wait_queue_head_t	prepare_wait_queue;
+ 	struct list_head	midi_list;
+ 	struct list_head	pcm_list;
+@@ -51,21 +51,21 @@ struct usX2Ydev {
+ };
+ 
+ 
+-struct snd_usX2Y_substream {
+-	struct usX2Ydev	*usX2Y;
++struct snd_usx2y_substream {
++	struct usx2ydev	*usx2y;
+ 	struct snd_pcm_substream *pcm_substream;
+ 
+ 	int			endpoint;		
+ 	unsigned int		maxpacksize;		/* max packet size in bytes */
+ 
+ 	atomic_t		state;
+-#define state_STOPPED	0
+-#define state_STARTING1 1
+-#define state_STARTING2 2
+-#define state_STARTING3 3
+-#define state_PREPARED	4
+-#define state_PRERUNNING  6
+-#define state_RUNNING	8
++#define STATE_STOPPED	0
++#define STATE_STARTING1 1
++#define STATE_STARTING2 2
++#define STATE_STARTING3 3
++#define STATE_PREPARED	4
++#define STATE_PRERUNNING  6
++#define STATE_RUNNING	8
+ 
+ 	int			hwptr;			/* free frame position in the buffer (only for playback) */
+ 	int			hwptr_done;		/* processed frame position in the buffer */
+@@ -77,12 +77,12 @@ struct snd_usX2Y_substream {
+ };
+ 
+ 
+-#define usX2Y(c) ((struct usX2Ydev *)(c)->private_data)
++#define usx2y(c) ((struct usx2ydev *)(c)->private_data)
+ 
+-int usX2Y_audio_create(struct snd_card *card);
++int usx2y_audio_create(struct snd_card *card);
+ 
+-int usX2Y_AsyncSeq04_init(struct usX2Ydev *usX2Y);
+-int usX2Y_In04_init(struct usX2Ydev *usX2Y);
++int usx2y_async_seq04_init(struct usx2ydev *usx2y);
++int usx2y_in04_init(struct usx2ydev *usx2y);
+ 
+ #define NAME_ALLCAPS "US-X2Y"
+ 
+diff --git a/sound/usb/usx2y/usbusx2yaudio.c b/sound/usb/usx2y/usbusx2yaudio.c
+index ecaf41265dcd0..8033bb7255d5c 100644
+--- a/sound/usb/usx2y/usbusx2yaudio.c
++++ b/sound/usb/usx2y/usbusx2yaudio.c
+@@ -54,13 +54,13 @@
+ #endif
+ 
+ 
+-static int usX2Y_urb_capt_retire(struct snd_usX2Y_substream *subs)
++static int usx2y_urb_capt_retire(struct snd_usx2y_substream *subs)
+ {
+ 	struct urb	*urb = subs->completed_urb;
+ 	struct snd_pcm_runtime *runtime = subs->pcm_substream->runtime;
+ 	unsigned char	*cp;
+ 	int 		i, len, lens = 0, hwptr_done = subs->hwptr_done;
+-	struct usX2Ydev	*usX2Y = subs->usX2Y;
++	struct usx2ydev	*usx2y = subs->usx2y;
+ 
+ 	for (i = 0; i < nr_of_packs(); i++) {
+ 		cp = (unsigned char*)urb->transfer_buffer + urb->iso_frame_desc[i].offset;
+@@ -70,7 +70,7 @@ static int usX2Y_urb_capt_retire(struct snd_usX2Y_substream *subs)
+ 				   urb->iso_frame_desc[i].status);
+ 			return urb->iso_frame_desc[i].status;
+ 		}
+-		len = urb->iso_frame_desc[i].actual_length / usX2Y->stride;
++		len = urb->iso_frame_desc[i].actual_length / usx2y->stride;
+ 		if (! len) {
+ 			snd_printd("0 == len ERROR!\n");
+ 			continue;
+@@ -79,12 +79,12 @@ static int usX2Y_urb_capt_retire(struct snd_usX2Y_substream *subs)
+ 		/* copy a data chunk */
+ 		if ((hwptr_done + len) > runtime->buffer_size) {
+ 			int cnt = runtime->buffer_size - hwptr_done;
+-			int blen = cnt * usX2Y->stride;
+-			memcpy(runtime->dma_area + hwptr_done * usX2Y->stride, cp, blen);
+-			memcpy(runtime->dma_area, cp + blen, len * usX2Y->stride - blen);
++			int blen = cnt * usx2y->stride;
++			memcpy(runtime->dma_area + hwptr_done * usx2y->stride, cp, blen);
++			memcpy(runtime->dma_area, cp + blen, len * usx2y->stride - blen);
+ 		} else {
+-			memcpy(runtime->dma_area + hwptr_done * usX2Y->stride, cp,
+-			       len * usX2Y->stride);
++			memcpy(runtime->dma_area + hwptr_done * usx2y->stride, cp,
++			       len * usx2y->stride);
+ 		}
+ 		lens += len;
+ 		if ((hwptr_done += len) >= runtime->buffer_size)
+@@ -110,18 +110,18 @@ static int usX2Y_urb_capt_retire(struct snd_usX2Y_substream *subs)
+  * it directly from the buffer.  thus the data is once copied to
+  * a temporary buffer and urb points to that.
+  */
+-static int usX2Y_urb_play_prepare(struct snd_usX2Y_substream *subs,
++static int usx2y_urb_play_prepare(struct snd_usx2y_substream *subs,
+ 				  struct urb *cap_urb,
+ 				  struct urb *urb)
+ {
+ 	int count, counts, pack;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
++	struct usx2ydev *usx2y = subs->usx2y;
+ 	struct snd_pcm_runtime *runtime = subs->pcm_substream->runtime;
+ 
+ 	count = 0;
+ 	for (pack = 0; pack <  nr_of_packs(); pack++) {
+ 		/* calculate the size of a packet */
+-		counts = cap_urb->iso_frame_desc[pack].actual_length / usX2Y->stride;
++		counts = cap_urb->iso_frame_desc[pack].actual_length / usx2y->stride;
+ 		count += counts;
+ 		if (counts < 43 || counts > 50) {
+ 			snd_printk(KERN_ERR "should not be here with counts=%i\n", counts);
+@@ -134,7 +134,7 @@ static int usX2Y_urb_play_prepare(struct snd_usX2Y_substream *subs,
+ 			0;
+ 		urb->iso_frame_desc[pack].length = cap_urb->iso_frame_desc[pack].actual_length;
+ 	}
+-	if (atomic_read(&subs->state) >= state_PRERUNNING)
++	if (atomic_read(&subs->state) >= STATE_PRERUNNING)
+ 		if (subs->hwptr + count > runtime->buffer_size) {
+ 			/* err, the transferred area goes over buffer boundary.
+ 			 * copy the data to the temp buffer.
+@@ -143,20 +143,20 @@ static int usX2Y_urb_play_prepare(struct snd_usX2Y_substream *subs,
+ 			len = runtime->buffer_size - subs->hwptr;
+ 			urb->transfer_buffer = subs->tmpbuf;
+ 			memcpy(subs->tmpbuf, runtime->dma_area +
+-			       subs->hwptr * usX2Y->stride, len * usX2Y->stride);
+-			memcpy(subs->tmpbuf + len * usX2Y->stride,
+-			       runtime->dma_area, (count - len) * usX2Y->stride);
++			       subs->hwptr * usx2y->stride, len * usx2y->stride);
++			memcpy(subs->tmpbuf + len * usx2y->stride,
++			       runtime->dma_area, (count - len) * usx2y->stride);
+ 			subs->hwptr += count;
+ 			subs->hwptr -= runtime->buffer_size;
+ 		} else {
+ 			/* set the buffer pointer */
+-			urb->transfer_buffer = runtime->dma_area + subs->hwptr * usX2Y->stride;
++			urb->transfer_buffer = runtime->dma_area + subs->hwptr * usx2y->stride;
+ 			if ((subs->hwptr += count) >= runtime->buffer_size)
+ 				subs->hwptr -= runtime->buffer_size;
+ 		}
+ 	else
+ 		urb->transfer_buffer = subs->tmpbuf;
+-	urb->transfer_buffer_length = count * usX2Y->stride;
++	urb->transfer_buffer_length = count * usx2y->stride;
+ 	return 0;
+ }
+ 
+@@ -165,10 +165,10 @@ static int usX2Y_urb_play_prepare(struct snd_usX2Y_substream *subs,
+  *
+  * update the current position and call callback if a period is processed.
+  */
+-static void usX2Y_urb_play_retire(struct snd_usX2Y_substream *subs, struct urb *urb)
++static void usx2y_urb_play_retire(struct snd_usx2y_substream *subs, struct urb *urb)
+ {
+ 	struct snd_pcm_runtime *runtime = subs->pcm_substream->runtime;
+-	int		len = urb->actual_length / subs->usX2Y->stride;
++	int		len = urb->actual_length / subs->usx2y->stride;
+ 
+ 	subs->transfer_done += len;
+ 	subs->hwptr_done +=  len;
+@@ -180,14 +180,14 @@ static void usX2Y_urb_play_retire(struct snd_usX2Y_substream *subs, struct urb *
+ 	}
+ }
+ 
+-static int usX2Y_urb_submit(struct snd_usX2Y_substream *subs, struct urb *urb, int frame)
++static int usx2y_urb_submit(struct snd_usx2y_substream *subs, struct urb *urb, int frame)
+ {
+ 	int err;
+ 	if (!urb)
+ 		return -ENODEV;
+ 	urb->start_frame = (frame + NRURBS * nr_of_packs());  // let hcd do rollover sanity checks
+ 	urb->hcpriv = NULL;
+-	urb->dev = subs->usX2Y->dev; /* we need to set this at each time */
++	urb->dev = subs->usx2y->dev; /* we need to set this at each time */
+ 	if ((err = usb_submit_urb(urb, GFP_ATOMIC)) < 0) {
+ 		snd_printk(KERN_ERR "usb_submit_urb() returned %i\n", err);
+ 		return err;
+@@ -195,8 +195,8 @@ static int usX2Y_urb_submit(struct snd_usX2Y_substream *subs, struct urb *urb, i
+ 	return 0;
+ }
+ 
+-static inline int usX2Y_usbframe_complete(struct snd_usX2Y_substream *capsubs,
+-					  struct snd_usX2Y_substream *playbacksubs,
++static inline int usx2y_usbframe_complete(struct snd_usx2y_substream *capsubs,
++					  struct snd_usx2y_substream *playbacksubs,
+ 					  int frame)
+ {
+ 	int err, state;
+@@ -204,25 +204,25 @@ static inline int usX2Y_usbframe_complete(struct snd_usX2Y_substream *capsubs,
+ 
+ 	state = atomic_read(&playbacksubs->state);
+ 	if (NULL != urb) {
+-		if (state == state_RUNNING)
+-			usX2Y_urb_play_retire(playbacksubs, urb);
+-		else if (state >= state_PRERUNNING)
++		if (state == STATE_RUNNING)
++			usx2y_urb_play_retire(playbacksubs, urb);
++		else if (state >= STATE_PRERUNNING)
+ 			atomic_inc(&playbacksubs->state);
+ 	} else {
+ 		switch (state) {
+-		case state_STARTING1:
++		case STATE_STARTING1:
+ 			urb = playbacksubs->urb[0];
+ 			atomic_inc(&playbacksubs->state);
+ 			break;
+-		case state_STARTING2:
++		case STATE_STARTING2:
+ 			urb = playbacksubs->urb[1];
+ 			atomic_inc(&playbacksubs->state);
+ 			break;
+ 		}
+ 	}
+ 	if (urb) {
+-		if ((err = usX2Y_urb_play_prepare(playbacksubs, capsubs->completed_urb, urb)) ||
+-		    (err = usX2Y_urb_submit(playbacksubs, urb, frame))) {
++		if ((err = usx2y_urb_play_prepare(playbacksubs, capsubs->completed_urb, urb)) ||
++		    (err = usx2y_urb_submit(playbacksubs, urb, frame))) {
+ 			return err;
+ 		}
+ 	}
+@@ -230,13 +230,13 @@ static inline int usX2Y_usbframe_complete(struct snd_usX2Y_substream *capsubs,
+ 	playbacksubs->completed_urb = NULL;
+ 
+ 	state = atomic_read(&capsubs->state);
+-	if (state >= state_PREPARED) {
+-		if (state == state_RUNNING) {
+-			if ((err = usX2Y_urb_capt_retire(capsubs)))
++	if (state >= STATE_PREPARED) {
++		if (state == STATE_RUNNING) {
++			if ((err = usx2y_urb_capt_retire(capsubs)))
+ 				return err;
+-		} else if (state >= state_PRERUNNING)
++		} else if (state >= STATE_PRERUNNING)
+ 			atomic_inc(&capsubs->state);
+-		if ((err = usX2Y_urb_submit(capsubs, capsubs->completed_urb, frame)))
++		if ((err = usx2y_urb_submit(capsubs, capsubs->completed_urb, frame)))
+ 			return err;
+ 	}
+ 	capsubs->completed_urb = NULL;
+@@ -244,21 +244,21 @@ static inline int usX2Y_usbframe_complete(struct snd_usX2Y_substream *capsubs,
+ }
+ 
+ 
+-static void usX2Y_clients_stop(struct usX2Ydev *usX2Y)
++static void usx2y_clients_stop(struct usx2ydev *usx2y)
+ {
+ 	int s, u;
+ 
+ 	for (s = 0; s < 4; s++) {
+-		struct snd_usX2Y_substream *subs = usX2Y->subs[s];
++		struct snd_usx2y_substream *subs = usx2y->subs[s];
+ 		if (subs) {
+ 			snd_printdd("%i %p state=%i\n", s, subs, atomic_read(&subs->state));
+-			atomic_set(&subs->state, state_STOPPED);
++			atomic_set(&subs->state, STATE_STOPPED);
+ 		}
+ 	}
+ 	for (s = 0; s < 4; s++) {
+-		struct snd_usX2Y_substream *subs = usX2Y->subs[s];
++		struct snd_usx2y_substream *subs = usx2y->subs[s];
+ 		if (subs) {
+-			if (atomic_read(&subs->state) >= state_PRERUNNING)
++			if (atomic_read(&subs->state) >= STATE_PRERUNNING)
+ 				snd_pcm_stop_xrun(subs->pcm_substream);
+ 			for (u = 0; u < NRURBS; u++) {
+ 				struct urb *urb = subs->urb[u];
+@@ -268,60 +268,60 @@ static void usX2Y_clients_stop(struct usX2Ydev *usX2Y)
+ 			}
+ 		}
+ 	}
+-	usX2Y->prepare_subs = NULL;
+-	wake_up(&usX2Y->prepare_wait_queue);
++	usx2y->prepare_subs = NULL;
++	wake_up(&usx2y->prepare_wait_queue);
+ }
+ 
+-static void usX2Y_error_urb_status(struct usX2Ydev *usX2Y,
+-				   struct snd_usX2Y_substream *subs, struct urb *urb)
++static void usx2y_error_urb_status(struct usx2ydev *usx2y,
++				   struct snd_usx2y_substream *subs, struct urb *urb)
+ {
+ 	snd_printk(KERN_ERR "ep=%i stalled with status=%i\n", subs->endpoint, urb->status);
+ 	urb->status = 0;
+-	usX2Y_clients_stop(usX2Y);
++	usx2y_clients_stop(usx2y);
+ }
+ 
+-static void i_usX2Y_urb_complete(struct urb *urb)
++static void i_usx2y_urb_complete(struct urb *urb)
+ {
+-	struct snd_usX2Y_substream *subs = urb->context;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
++	struct snd_usx2y_substream *subs = urb->context;
++	struct usx2ydev *usx2y = subs->usx2y;
+ 
+-	if (unlikely(atomic_read(&subs->state) < state_PREPARED)) {
++	if (unlikely(atomic_read(&subs->state) < STATE_PREPARED)) {
+ 		snd_printdd("hcd_frame=%i ep=%i%s status=%i start_frame=%i\n",
+-			    usb_get_current_frame_number(usX2Y->dev),
++			    usb_get_current_frame_number(usx2y->dev),
+ 			    subs->endpoint, usb_pipein(urb->pipe) ? "in" : "out",
+ 			    urb->status, urb->start_frame);
+ 		return;
+ 	}
+ 	if (unlikely(urb->status)) {
+-		usX2Y_error_urb_status(usX2Y, subs, urb);
++		usx2y_error_urb_status(usx2y, subs, urb);
+ 		return;
+ 	}
+ 
+ 	subs->completed_urb = urb;
+ 
+ 	{
+-		struct snd_usX2Y_substream *capsubs = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE],
+-			*playbacksubs = usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
++		struct snd_usx2y_substream *capsubs = usx2y->subs[SNDRV_PCM_STREAM_CAPTURE],
++			*playbacksubs = usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+ 		if (capsubs->completed_urb &&
+-		    atomic_read(&capsubs->state) >= state_PREPARED &&
++		    atomic_read(&capsubs->state) >= STATE_PREPARED &&
+ 		    (playbacksubs->completed_urb ||
+-		     atomic_read(&playbacksubs->state) < state_PREPARED)) {
+-			if (!usX2Y_usbframe_complete(capsubs, playbacksubs, urb->start_frame))
+-				usX2Y->wait_iso_frame += nr_of_packs();
++		     atomic_read(&playbacksubs->state) < STATE_PREPARED)) {
++			if (!usx2y_usbframe_complete(capsubs, playbacksubs, urb->start_frame))
++				usx2y->wait_iso_frame += nr_of_packs();
+ 			else {
+ 				snd_printdd("\n");
+-				usX2Y_clients_stop(usX2Y);
++				usx2y_clients_stop(usx2y);
+ 			}
+ 		}
+ 	}
+ }
+ 
+-static void usX2Y_urbs_set_complete(struct usX2Ydev * usX2Y,
++static void usx2y_urbs_set_complete(struct usx2ydev * usx2y,
+ 				    void (*complete)(struct urb *))
+ {
+ 	int s, u;
+ 	for (s = 0; s < 4; s++) {
+-		struct snd_usX2Y_substream *subs = usX2Y->subs[s];
++		struct snd_usx2y_substream *subs = usx2y->subs[s];
+ 		if (NULL != subs)
+ 			for (u = 0; u < NRURBS; u++) {
+ 				struct urb * urb = subs->urb[u];
+@@ -331,30 +331,30 @@ static void usX2Y_urbs_set_complete(struct usX2Ydev * usX2Y,
+ 	}
+ }
+ 
+-static void usX2Y_subs_startup_finish(struct usX2Ydev * usX2Y)
++static void usx2y_subs_startup_finish(struct usx2ydev * usx2y)
+ {
+-	usX2Y_urbs_set_complete(usX2Y, i_usX2Y_urb_complete);
+-	usX2Y->prepare_subs = NULL;
++	usx2y_urbs_set_complete(usx2y, i_usx2y_urb_complete);
++	usx2y->prepare_subs = NULL;
+ }
+ 
+-static void i_usX2Y_subs_startup(struct urb *urb)
++static void i_usx2y_subs_startup(struct urb *urb)
+ {
+-	struct snd_usX2Y_substream *subs = urb->context;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	struct snd_usX2Y_substream *prepare_subs = usX2Y->prepare_subs;
++	struct snd_usx2y_substream *subs = urb->context;
++	struct usx2ydev *usx2y = subs->usx2y;
++	struct snd_usx2y_substream *prepare_subs = usx2y->prepare_subs;
+ 	if (NULL != prepare_subs)
+ 		if (urb->start_frame == prepare_subs->urb[0]->start_frame) {
+-			usX2Y_subs_startup_finish(usX2Y);
++			usx2y_subs_startup_finish(usx2y);
+ 			atomic_inc(&prepare_subs->state);
+-			wake_up(&usX2Y->prepare_wait_queue);
++			wake_up(&usx2y->prepare_wait_queue);
+ 		}
+ 
+-	i_usX2Y_urb_complete(urb);
++	i_usx2y_urb_complete(urb);
+ }
+ 
+-static void usX2Y_subs_prepare(struct snd_usX2Y_substream *subs)
++static void usx2y_subs_prepare(struct snd_usx2y_substream *subs)
+ {
+-	snd_printdd("usX2Y_substream_prepare(%p) ep=%i urb0=%p urb1=%p\n",
++	snd_printdd("usx2y_substream_prepare(%p) ep=%i urb0=%p urb1=%p\n",
+ 		    subs, subs->endpoint, subs->urb[0], subs->urb[1]);
+ 	/* reset the pointer */
+ 	subs->hwptr = 0;
+@@ -363,7 +363,7 @@ static void usX2Y_subs_prepare(struct snd_usX2Y_substream *subs)
+ }
+ 
+ 
+-static void usX2Y_urb_release(struct urb **urb, int free_tb)
++static void usx2y_urb_release(struct urb **urb, int free_tb)
+ {
+ 	if (*urb) {
+ 		usb_kill_urb(*urb);
+@@ -376,13 +376,13 @@ static void usX2Y_urb_release(struct urb **urb, int free_tb)
+ /*
+  * release a substreams urbs
+  */
+-static void usX2Y_urbs_release(struct snd_usX2Y_substream *subs)
++static void usx2y_urbs_release(struct snd_usx2y_substream *subs)
+ {
+ 	int i;
+-	snd_printdd("usX2Y_urbs_release() %i\n", subs->endpoint);
++	snd_printdd("usx2y_urbs_release() %i\n", subs->endpoint);
+ 	for (i = 0; i < NRURBS; i++)
+-		usX2Y_urb_release(subs->urb + i,
+-				  subs != subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK]);
++		usx2y_urb_release(subs->urb + i,
++				  subs != subs->usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK]);
+ 
+ 	kfree(subs->tmpbuf);
+ 	subs->tmpbuf = NULL;
+@@ -390,12 +390,12 @@ static void usX2Y_urbs_release(struct snd_usX2Y_substream *subs)
+ /*
+  * initialize a substream's urbs
+  */
+-static int usX2Y_urbs_allocate(struct snd_usX2Y_substream *subs)
++static int usx2y_urbs_allocate(struct snd_usx2y_substream *subs)
+ {
+ 	int i;
+ 	unsigned int pipe;
+-	int is_playback = subs == subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+-	struct usb_device *dev = subs->usX2Y->dev;
++	int is_playback = subs == subs->usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK];
++	struct usb_device *dev = subs->usx2y->dev;
+ 
+ 	pipe = is_playback ? usb_sndisocpipe(dev, subs->endpoint) :
+ 			usb_rcvisocpipe(dev, subs->endpoint);
+@@ -417,7 +417,7 @@ static int usX2Y_urbs_allocate(struct snd_usX2Y_substream *subs)
+ 		}
+ 		*purb = usb_alloc_urb(nr_of_packs(), GFP_KERNEL);
+ 		if (NULL == *purb) {
+-			usX2Y_urbs_release(subs);
++			usx2y_urbs_release(subs);
+ 			return -ENOMEM;
+ 		}
+ 		if (!is_playback && !(*purb)->transfer_buffer) {
+@@ -426,7 +426,7 @@ static int usX2Y_urbs_allocate(struct snd_usX2Y_substream *subs)
+ 				kmalloc_array(subs->maxpacksize,
+ 					      nr_of_packs(), GFP_KERNEL);
+ 			if (NULL == (*purb)->transfer_buffer) {
+-				usX2Y_urbs_release(subs);
++				usx2y_urbs_release(subs);
+ 				return -ENOMEM;
+ 			}
+ 		}
+@@ -435,43 +435,43 @@ static int usX2Y_urbs_allocate(struct snd_usX2Y_substream *subs)
+ 		(*purb)->number_of_packets = nr_of_packs();
+ 		(*purb)->context = subs;
+ 		(*purb)->interval = 1;
+-		(*purb)->complete = i_usX2Y_subs_startup;
++		(*purb)->complete = i_usx2y_subs_startup;
+ 	}
+ 	return 0;
+ }
+ 
+-static void usX2Y_subs_startup(struct snd_usX2Y_substream *subs)
++static void usx2y_subs_startup(struct snd_usx2y_substream *subs)
+ {
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	usX2Y->prepare_subs = subs;
++	struct usx2ydev *usx2y = subs->usx2y;
++	usx2y->prepare_subs = subs;
+ 	subs->urb[0]->start_frame = -1;
+ 	wmb();
+-	usX2Y_urbs_set_complete(usX2Y, i_usX2Y_subs_startup);
++	usx2y_urbs_set_complete(usx2y, i_usx2y_subs_startup);
+ }
+ 
+-static int usX2Y_urbs_start(struct snd_usX2Y_substream *subs)
++static int usx2y_urbs_start(struct snd_usx2y_substream *subs)
+ {
+ 	int i, err;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
++	struct usx2ydev *usx2y = subs->usx2y;
+ 
+-	if ((err = usX2Y_urbs_allocate(subs)) < 0)
++	if ((err = usx2y_urbs_allocate(subs)) < 0)
+ 		return err;
+ 	subs->completed_urb = NULL;
+ 	for (i = 0; i < 4; i++) {
+-		struct snd_usX2Y_substream *subs = usX2Y->subs[i];
+-		if (subs != NULL && atomic_read(&subs->state) >= state_PREPARED)
++		struct snd_usx2y_substream *subs = usx2y->subs[i];
++		if (subs != NULL && atomic_read(&subs->state) >= STATE_PREPARED)
+ 			goto start;
+ 	}
+ 
+  start:
+-	usX2Y_subs_startup(subs);
++	usx2y_subs_startup(subs);
+ 	for (i = 0; i < NRURBS; i++) {
+ 		struct urb *urb = subs->urb[i];
+ 		if (usb_pipein(urb->pipe)) {
+ 			unsigned long pack;
+ 			if (0 == i)
+-				atomic_set(&subs->state, state_STARTING3);
+-			urb->dev = usX2Y->dev;
++				atomic_set(&subs->state, STATE_STARTING3);
++			urb->dev = usx2y->dev;
+ 			for (pack = 0; pack < nr_of_packs(); pack++) {
+ 				urb->iso_frame_desc[pack].offset = subs->maxpacksize * pack;
+ 				urb->iso_frame_desc[pack].length = subs->maxpacksize;
+@@ -483,22 +483,22 @@ static int usX2Y_urbs_start(struct snd_usX2Y_substream *subs)
+ 				goto cleanup;
+ 			} else
+ 				if (i == 0)
+-					usX2Y->wait_iso_frame = urb->start_frame;
++					usx2y->wait_iso_frame = urb->start_frame;
+ 			urb->transfer_flags = 0;
+ 		} else {
+-			atomic_set(&subs->state, state_STARTING1);
++			atomic_set(&subs->state, STATE_STARTING1);
+ 			break;
+ 		}
+ 	}
+ 	err = 0;
+-	wait_event(usX2Y->prepare_wait_queue, NULL == usX2Y->prepare_subs);
+-	if (atomic_read(&subs->state) != state_PREPARED)
++	wait_event(usx2y->prepare_wait_queue, NULL == usx2y->prepare_subs);
++	if (atomic_read(&subs->state) != STATE_PREPARED)
+ 		err = -EPIPE;
+ 
+  cleanup:
+ 	if (err) {
+-		usX2Y_subs_startup_finish(usX2Y);
+-		usX2Y_clients_stop(usX2Y);		// something is completely wroong > stop evrything
++		usx2y_subs_startup_finish(usx2y);
++		usx2y_clients_stop(usx2y);		// something is completely wroong > stop evrything
+ 	}
+ 	return err;
+ }
+@@ -506,33 +506,33 @@ static int usX2Y_urbs_start(struct snd_usX2Y_substream *subs)
+ /*
+  * return the current pcm pointer.  just return the hwptr_done value.
+  */
+-static snd_pcm_uframes_t snd_usX2Y_pcm_pointer(struct snd_pcm_substream *substream)
++static snd_pcm_uframes_t snd_usx2y_pcm_pointer(struct snd_pcm_substream *substream)
+ {
+-	struct snd_usX2Y_substream *subs = substream->runtime->private_data;
++	struct snd_usx2y_substream *subs = substream->runtime->private_data;
+ 	return subs->hwptr_done;
+ }
+ /*
+  * start/stop substream
+  */
+-static int snd_usX2Y_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
++static int snd_usx2y_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
+ {
+-	struct snd_usX2Y_substream *subs = substream->runtime->private_data;
++	struct snd_usx2y_substream *subs = substream->runtime->private_data;
+ 
+ 	switch (cmd) {
+ 	case SNDRV_PCM_TRIGGER_START:
+-		snd_printdd("snd_usX2Y_pcm_trigger(START)\n");
+-		if (atomic_read(&subs->state) == state_PREPARED &&
+-		    atomic_read(&subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE]->state) >= state_PREPARED) {
+-			atomic_set(&subs->state, state_PRERUNNING);
++		snd_printdd("snd_usx2y_pcm_trigger(START)\n");
++		if (atomic_read(&subs->state) == STATE_PREPARED &&
++		    atomic_read(&subs->usx2y->subs[SNDRV_PCM_STREAM_CAPTURE]->state) >= STATE_PREPARED) {
++			atomic_set(&subs->state, STATE_PRERUNNING);
+ 		} else {
+ 			snd_printdd("\n");
+ 			return -EPIPE;
+ 		}
+ 		break;
+ 	case SNDRV_PCM_TRIGGER_STOP:
+-		snd_printdd("snd_usX2Y_pcm_trigger(STOP)\n");
+-		if (atomic_read(&subs->state) >= state_PRERUNNING)
+-			atomic_set(&subs->state, state_PREPARED);
++		snd_printdd("snd_usx2y_pcm_trigger(STOP)\n");
++		if (atomic_read(&subs->state) >= STATE_PRERUNNING)
++			atomic_set(&subs->state, STATE_PREPARED);
+ 		break;
+ 	default:
+ 		return -EINVAL;
+@@ -553,7 +553,7 @@ static const struct s_c2
+ {
+ 	char c1, c2;
+ }
+-	SetRate44100[] =
++	setrate_44100[] =
+ {
+ 	{ 0x14, 0x08},	// this line sets 44100, well actually a little less
+ 	{ 0x18, 0x40},	// only tascam / frontier design knows the further lines .......
+@@ -589,7 +589,7 @@ static const struct s_c2
+ 	{ 0x18, 0x7C},
+ 	{ 0x18, 0x7E}
+ };
+-static const struct s_c2 SetRate48000[] =
++static const struct s_c2 setrate_48000[] =
+ {
+ 	{ 0x14, 0x09},	// this line sets 48000, well actually a little less
+ 	{ 0x18, 0x40},	// only tascam / frontier design knows the further lines .......
+@@ -625,26 +625,26 @@ static const struct s_c2 SetRate48000[] =
+ 	{ 0x18, 0x7C},
+ 	{ 0x18, 0x7E}
+ };
+-#define NOOF_SETRATE_URBS ARRAY_SIZE(SetRate48000)
++#define NOOF_SETRATE_URBS ARRAY_SIZE(setrate_48000)
+ 
+-static void i_usX2Y_04Int(struct urb *urb)
++static void i_usx2y_04int(struct urb *urb)
+ {
+-	struct usX2Ydev *usX2Y = urb->context;
++	struct usx2ydev *usx2y = urb->context;
+ 	
+ 	if (urb->status)
+-		snd_printk(KERN_ERR "snd_usX2Y_04Int() urb->status=%i\n", urb->status);
+-	if (0 == --usX2Y->US04->len)
+-		wake_up(&usX2Y->In04WaitQueue);
++		snd_printk(KERN_ERR "snd_usx2y_04int() urb->status=%i\n", urb->status);
++	if (0 == --usx2y->us04->len)
++		wake_up(&usx2y->in04_wait_queue);
+ }
+ 
+-static int usX2Y_rate_set(struct usX2Ydev *usX2Y, int rate)
++static int usx2y_rate_set(struct usx2ydev *usx2y, int rate)
+ {
+ 	int			err = 0, i;
+-	struct snd_usX2Y_urbSeq	*us = NULL;
++	struct snd_usx2y_urb_seq	*us = NULL;
+ 	int			*usbdata = NULL;
+-	const struct s_c2	*ra = rate == 48000 ? SetRate48000 : SetRate44100;
++	const struct s_c2	*ra = rate == 48000 ? setrate_48000 : setrate_44100;
+ 
+-	if (usX2Y->rate != rate) {
++	if (usx2y->rate != rate) {
+ 		us = kzalloc(sizeof(*us) + sizeof(struct urb*) * NOOF_SETRATE_URBS, GFP_KERNEL);
+ 		if (NULL == us) {
+ 			err = -ENOMEM;
+@@ -663,17 +663,17 @@ static int usX2Y_rate_set(struct usX2Ydev *usX2Y, int rate)
+ 			}
+ 			((char*)(usbdata + i))[0] = ra[i].c1;
+ 			((char*)(usbdata + i))[1] = ra[i].c2;
+-			usb_fill_bulk_urb(us->urb[i], usX2Y->dev, usb_sndbulkpipe(usX2Y->dev, 4),
+-					  usbdata + i, 2, i_usX2Y_04Int, usX2Y);
++			usb_fill_bulk_urb(us->urb[i], usx2y->dev, usb_sndbulkpipe(usx2y->dev, 4),
++					  usbdata + i, 2, i_usx2y_04int, usx2y);
+ 		}
+ 		err = usb_urb_ep_type_check(us->urb[0]);
+ 		if (err < 0)
+ 			goto cleanup;
+ 		us->submitted =	0;
+ 		us->len =	NOOF_SETRATE_URBS;
+-		usX2Y->US04 =	us;
+-		wait_event_timeout(usX2Y->In04WaitQueue, 0 == us->len, HZ);
+-		usX2Y->US04 =	NULL;
++		usx2y->us04 =	us;
++		wait_event_timeout(usx2y->in04_wait_queue, 0 == us->len, HZ);
++		usx2y->us04 =	NULL;
+ 		if (us->len)
+ 			err = -ENODEV;
+ 	cleanup:
+@@ -690,11 +690,11 @@ static int usX2Y_rate_set(struct usX2Ydev *usX2Y, int rate)
+ 				}
+ 				usb_free_urb(urb);
+ 			}
+-			usX2Y->US04 = NULL;
++			usx2y->us04 = NULL;
+ 			kfree(usbdata);
+ 			kfree(us);
+ 			if (!err)
+-				usX2Y->rate = rate;
++				usx2y->rate = rate;
+ 		}
+ 	}
+ 
+@@ -702,53 +702,53 @@ static int usX2Y_rate_set(struct usX2Ydev *usX2Y, int rate)
+ }
+ 
+ 
+-static int usX2Y_format_set(struct usX2Ydev *usX2Y, snd_pcm_format_t format)
++static int usx2y_format_set(struct usx2ydev *usx2y, snd_pcm_format_t format)
+ {
+ 	int alternate, err;
+ 	struct list_head* p;
+ 	if (format == SNDRV_PCM_FORMAT_S24_3LE) {
+ 		alternate = 2;
+-		usX2Y->stride = 6;
++		usx2y->stride = 6;
+ 	} else {
+ 		alternate = 1;
+-		usX2Y->stride = 4;
++		usx2y->stride = 4;
+ 	}
+-	list_for_each(p, &usX2Y->midi_list) {
++	list_for_each(p, &usx2y->midi_list) {
+ 		snd_usbmidi_input_stop(p);
+ 	}
+-	usb_kill_urb(usX2Y->In04urb);
+-	if ((err = usb_set_interface(usX2Y->dev, 0, alternate))) {
++	usb_kill_urb(usx2y->in04_urb);
++	if ((err = usb_set_interface(usx2y->dev, 0, alternate))) {
+ 		snd_printk(KERN_ERR "usb_set_interface error \n");
+ 		return err;
+ 	}
+-	usX2Y->In04urb->dev = usX2Y->dev;
+-	err = usb_submit_urb(usX2Y->In04urb, GFP_KERNEL);
+-	list_for_each(p, &usX2Y->midi_list) {
++	usx2y->in04_urb->dev = usx2y->dev;
++	err = usb_submit_urb(usx2y->in04_urb, GFP_KERNEL);
++	list_for_each(p, &usx2y->midi_list) {
+ 		snd_usbmidi_input_start(p);
+ 	}
+-	usX2Y->format = format;
+-	usX2Y->rate = 0;
++	usx2y->format = format;
++	usx2y->rate = 0;
+ 	return err;
+ }
+ 
+ 
+-static int snd_usX2Y_pcm_hw_params(struct snd_pcm_substream *substream,
++static int snd_usx2y_pcm_hw_params(struct snd_pcm_substream *substream,
+ 				   struct snd_pcm_hw_params *hw_params)
+ {
+ 	int			err = 0;
+ 	unsigned int		rate = params_rate(hw_params);
+ 	snd_pcm_format_t	format = params_format(hw_params);
+ 	struct snd_card *card = substream->pstr->pcm->card;
+-	struct usX2Ydev	*dev = usX2Y(card);
++	struct usx2ydev	*dev = usx2y(card);
+ 	int i;
+ 
+-	mutex_lock(&usX2Y(card)->pcm_mutex);
+-	snd_printdd("snd_usX2Y_hw_params(%p, %p)\n", substream, hw_params);
+-	/* all pcm substreams off one usX2Y have to operate at the same
++	mutex_lock(&usx2y(card)->pcm_mutex);
++	snd_printdd("snd_usx2y_hw_params(%p, %p)\n", substream, hw_params);
++	/* all pcm substreams off one usx2y have to operate at the same
+ 	 * rate & format
+ 	 */
+ 	for (i = 0; i < dev->pcm_devs * 2; i++) {
+-		struct snd_usX2Y_substream *subs = dev->subs[i];
++		struct snd_usx2y_substream *subs = dev->subs[i];
+ 		struct snd_pcm_substream *test_substream;
+ 
+ 		if (!subs)
+@@ -767,39 +767,39 @@ static int snd_usX2Y_pcm_hw_params(struct snd_pcm_substream *substream,
+ 	}
+ 
+  error:
+-	mutex_unlock(&usX2Y(card)->pcm_mutex);
++	mutex_unlock(&usx2y(card)->pcm_mutex);
+ 	return err;
+ }
+ 
+ /*
+  * free the buffer
+  */
+-static int snd_usX2Y_pcm_hw_free(struct snd_pcm_substream *substream)
++static int snd_usx2y_pcm_hw_free(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_usX2Y_substream *subs = runtime->private_data;
+-	mutex_lock(&subs->usX2Y->pcm_mutex);
+-	snd_printdd("snd_usX2Y_hw_free(%p)\n", substream);
++	struct snd_usx2y_substream *subs = runtime->private_data;
++	mutex_lock(&subs->usx2y->pcm_mutex);
++	snd_printdd("snd_usx2y_hw_free(%p)\n", substream);
+ 
+ 	if (SNDRV_PCM_STREAM_PLAYBACK == substream->stream) {
+-		struct snd_usX2Y_substream *cap_subs = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
+-		atomic_set(&subs->state, state_STOPPED);
+-		usX2Y_urbs_release(subs);
++		struct snd_usx2y_substream *cap_subs = subs->usx2y->subs[SNDRV_PCM_STREAM_CAPTURE];
++		atomic_set(&subs->state, STATE_STOPPED);
++		usx2y_urbs_release(subs);
+ 		if (!cap_subs->pcm_substream ||
+ 		    !cap_subs->pcm_substream->runtime ||
+ 		    !cap_subs->pcm_substream->runtime->status ||
+ 		    cap_subs->pcm_substream->runtime->status->state < SNDRV_PCM_STATE_PREPARED) {
+-			atomic_set(&cap_subs->state, state_STOPPED);
+-			usX2Y_urbs_release(cap_subs);
++			atomic_set(&cap_subs->state, STATE_STOPPED);
++			usx2y_urbs_release(cap_subs);
+ 		}
+ 	} else {
+-		struct snd_usX2Y_substream *playback_subs = subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+-		if (atomic_read(&playback_subs->state) < state_PREPARED) {
+-			atomic_set(&subs->state, state_STOPPED);
+-			usX2Y_urbs_release(subs);
++		struct snd_usx2y_substream *playback_subs = subs->usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK];
++		if (atomic_read(&playback_subs->state) < STATE_PREPARED) {
++			atomic_set(&subs->state, STATE_STOPPED);
++			usx2y_urbs_release(subs);
+ 		}
+ 	}
+-	mutex_unlock(&subs->usX2Y->pcm_mutex);
++	mutex_unlock(&subs->usx2y->pcm_mutex);
+ 	return 0;
+ }
+ /*
+@@ -807,40 +807,40 @@ static int snd_usX2Y_pcm_hw_free(struct snd_pcm_substream *substream)
+  *
+  * set format and initialize urbs
+  */
+-static int snd_usX2Y_pcm_prepare(struct snd_pcm_substream *substream)
++static int snd_usx2y_pcm_prepare(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_usX2Y_substream *subs = runtime->private_data;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	struct snd_usX2Y_substream *capsubs = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
++	struct snd_usx2y_substream *subs = runtime->private_data;
++	struct usx2ydev *usx2y = subs->usx2y;
++	struct snd_usx2y_substream *capsubs = subs->usx2y->subs[SNDRV_PCM_STREAM_CAPTURE];
+ 	int err = 0;
+-	snd_printdd("snd_usX2Y_pcm_prepare(%p)\n", substream);
++	snd_printdd("snd_usx2y_pcm_prepare(%p)\n", substream);
+ 
+-	mutex_lock(&usX2Y->pcm_mutex);
+-	usX2Y_subs_prepare(subs);
++	mutex_lock(&usx2y->pcm_mutex);
++	usx2y_subs_prepare(subs);
+ // Start hardware streams
+ // SyncStream first....
+-	if (atomic_read(&capsubs->state) < state_PREPARED) {
+-		if (usX2Y->format != runtime->format)
+-			if ((err = usX2Y_format_set(usX2Y, runtime->format)) < 0)
++	if (atomic_read(&capsubs->state) < STATE_PREPARED) {
++		if (usx2y->format != runtime->format)
++			if ((err = usx2y_format_set(usx2y, runtime->format)) < 0)
+ 				goto up_prepare_mutex;
+-		if (usX2Y->rate != runtime->rate)
+-			if ((err = usX2Y_rate_set(usX2Y, runtime->rate)) < 0)
++		if (usx2y->rate != runtime->rate)
++			if ((err = usx2y_rate_set(usx2y, runtime->rate)) < 0)
+ 				goto up_prepare_mutex;
+ 		snd_printdd("starting capture pipe for %s\n", subs == capsubs ? "self" : "playpipe");
+-		if (0 > (err = usX2Y_urbs_start(capsubs)))
++		if (0 > (err = usx2y_urbs_start(capsubs)))
+ 			goto up_prepare_mutex;
+ 	}
+ 
+-	if (subs != capsubs && atomic_read(&subs->state) < state_PREPARED)
+-		err = usX2Y_urbs_start(subs);
++	if (subs != capsubs && atomic_read(&subs->state) < STATE_PREPARED)
++		err = usx2y_urbs_start(subs);
+ 
+  up_prepare_mutex:
+-	mutex_unlock(&usX2Y->pcm_mutex);
++	mutex_unlock(&usx2y->pcm_mutex);
+ 	return err;
+ }
+ 
+-static const struct snd_pcm_hardware snd_usX2Y_2c =
++static const struct snd_pcm_hardware snd_usx2y_2c =
+ {
+ 	.info =			(SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_INTERLEAVED |
+ 				 SNDRV_PCM_INFO_BLOCK_TRANSFER |
+@@ -862,16 +862,16 @@ static const struct snd_pcm_hardware snd_usX2Y_2c =
+ 
+ 
+ 
+-static int snd_usX2Y_pcm_open(struct snd_pcm_substream *substream)
++static int snd_usx2y_pcm_open(struct snd_pcm_substream *substream)
+ {
+-	struct snd_usX2Y_substream	*subs = ((struct snd_usX2Y_substream **)
++	struct snd_usx2y_substream	*subs = ((struct snd_usx2y_substream **)
+ 					 snd_pcm_substream_chip(substream))[substream->stream];
+ 	struct snd_pcm_runtime	*runtime = substream->runtime;
+ 
+-	if (subs->usX2Y->chip_status & USX2Y_STAT_CHIP_MMAP_PCM_URBS)
++	if (subs->usx2y->chip_status & USX2Y_STAT_CHIP_MMAP_PCM_URBS)
+ 		return -EBUSY;
+ 
+-	runtime->hw = snd_usX2Y_2c;
++	runtime->hw = snd_usx2y_2c;
+ 	runtime->private_data = subs;
+ 	subs->pcm_substream = substream;
+ 	snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_PERIOD_TIME, 1000, 200000);
+@@ -880,10 +880,10 @@ static int snd_usX2Y_pcm_open(struct snd_pcm_substream *substream)
+ 
+ 
+ 
+-static int snd_usX2Y_pcm_close(struct snd_pcm_substream *substream)
++static int snd_usx2y_pcm_close(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_usX2Y_substream *subs = runtime->private_data;
++	struct snd_usx2y_substream *subs = runtime->private_data;
+ 
+ 	subs->pcm_substream = NULL;
+ 
+@@ -891,75 +891,75 @@ static int snd_usX2Y_pcm_close(struct snd_pcm_substream *substream)
+ }
+ 
+ 
+-static const struct snd_pcm_ops snd_usX2Y_pcm_ops =
++static const struct snd_pcm_ops snd_usx2y_pcm_ops =
+ {
+-	.open =		snd_usX2Y_pcm_open,
+-	.close =	snd_usX2Y_pcm_close,
+-	.hw_params =	snd_usX2Y_pcm_hw_params,
+-	.hw_free =	snd_usX2Y_pcm_hw_free,
+-	.prepare =	snd_usX2Y_pcm_prepare,
+-	.trigger =	snd_usX2Y_pcm_trigger,
+-	.pointer =	snd_usX2Y_pcm_pointer,
++	.open =		snd_usx2y_pcm_open,
++	.close =	snd_usx2y_pcm_close,
++	.hw_params =	snd_usx2y_pcm_hw_params,
++	.hw_free =	snd_usx2y_pcm_hw_free,
++	.prepare =	snd_usx2y_pcm_prepare,
++	.trigger =	snd_usx2y_pcm_trigger,
++	.pointer =	snd_usx2y_pcm_pointer,
+ };
+ 
+ 
+ /*
+  * free a usb stream instance
+  */
+-static void usX2Y_audio_stream_free(struct snd_usX2Y_substream **usX2Y_substream)
++static void usx2y_audio_stream_free(struct snd_usx2y_substream **usx2y_substream)
+ {
+ 	int stream;
+ 
+ 	for_each_pcm_streams(stream) {
+-		kfree(usX2Y_substream[stream]);
+-		usX2Y_substream[stream] = NULL;
++		kfree(usx2y_substream[stream]);
++		usx2y_substream[stream] = NULL;
+ 	}
+ }
+ 
+-static void snd_usX2Y_pcm_private_free(struct snd_pcm *pcm)
++static void snd_usx2y_pcm_private_free(struct snd_pcm *pcm)
+ {
+-	struct snd_usX2Y_substream **usX2Y_stream = pcm->private_data;
+-	if (usX2Y_stream)
+-		usX2Y_audio_stream_free(usX2Y_stream);
++	struct snd_usx2y_substream **usx2y_stream = pcm->private_data;
++	if (usx2y_stream)
++		usx2y_audio_stream_free(usx2y_stream);
+ }
+ 
+-static int usX2Y_audio_stream_new(struct snd_card *card, int playback_endpoint, int capture_endpoint)
++static int usx2y_audio_stream_new(struct snd_card *card, int playback_endpoint, int capture_endpoint)
+ {
+ 	struct snd_pcm *pcm;
+ 	int err, i;
+-	struct snd_usX2Y_substream **usX2Y_substream =
+-		usX2Y(card)->subs + 2 * usX2Y(card)->pcm_devs;
++	struct snd_usx2y_substream **usx2y_substream =
++		usx2y(card)->subs + 2 * usx2y(card)->pcm_devs;
+ 
+ 	for (i = playback_endpoint ? SNDRV_PCM_STREAM_PLAYBACK : SNDRV_PCM_STREAM_CAPTURE;
+ 	     i <= SNDRV_PCM_STREAM_CAPTURE; ++i) {
+-		usX2Y_substream[i] = kzalloc(sizeof(struct snd_usX2Y_substream), GFP_KERNEL);
+-		if (!usX2Y_substream[i])
++		usx2y_substream[i] = kzalloc(sizeof(struct snd_usx2y_substream), GFP_KERNEL);
++		if (!usx2y_substream[i])
+ 			return -ENOMEM;
+ 
+-		usX2Y_substream[i]->usX2Y = usX2Y(card);
++		usx2y_substream[i]->usx2y = usx2y(card);
+ 	}
+ 
+ 	if (playback_endpoint)
+-		usX2Y_substream[SNDRV_PCM_STREAM_PLAYBACK]->endpoint = playback_endpoint;
+-	usX2Y_substream[SNDRV_PCM_STREAM_CAPTURE]->endpoint = capture_endpoint;
++		usx2y_substream[SNDRV_PCM_STREAM_PLAYBACK]->endpoint = playback_endpoint;
++	usx2y_substream[SNDRV_PCM_STREAM_CAPTURE]->endpoint = capture_endpoint;
+ 
+-	err = snd_pcm_new(card, NAME_ALLCAPS" Audio", usX2Y(card)->pcm_devs,
++	err = snd_pcm_new(card, NAME_ALLCAPS" Audio", usx2y(card)->pcm_devs,
+ 			  playback_endpoint ? 1 : 0, 1,
+ 			  &pcm);
+ 	if (err < 0) {
+-		usX2Y_audio_stream_free(usX2Y_substream);
++		usx2y_audio_stream_free(usx2y_substream);
+ 		return err;
+ 	}
+ 
+ 	if (playback_endpoint)
+-		snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_usX2Y_pcm_ops);
+-	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_usX2Y_pcm_ops);
++		snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_usx2y_pcm_ops);
++	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_usx2y_pcm_ops);
+ 
+-	pcm->private_data = usX2Y_substream;
+-	pcm->private_free = snd_usX2Y_pcm_private_free;
++	pcm->private_data = usx2y_substream;
++	pcm->private_free = snd_usx2y_pcm_private_free;
+ 	pcm->info_flags = 0;
+ 
+-	sprintf(pcm->name, NAME_ALLCAPS" Audio #%d", usX2Y(card)->pcm_devs);
++	sprintf(pcm->name, NAME_ALLCAPS" Audio #%d", usx2y(card)->pcm_devs);
+ 
+ 	if (playback_endpoint) {
+ 		snd_pcm_set_managed_buffer(pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream,
+@@ -972,7 +972,7 @@ static int usX2Y_audio_stream_new(struct snd_card *card, int playback_endpoint,
+ 				   SNDRV_DMA_TYPE_CONTINUOUS,
+ 				   NULL,
+ 				   64*1024, 128*1024);
+-	usX2Y(card)->pcm_devs++;
++	usx2y(card)->pcm_devs++;
+ 
+ 	return 0;
+ }
+@@ -980,18 +980,18 @@ static int usX2Y_audio_stream_new(struct snd_card *card, int playback_endpoint,
+ /*
+  * create a chip instance and set its names.
+  */
+-int usX2Y_audio_create(struct snd_card *card)
++int usx2y_audio_create(struct snd_card *card)
+ {
+ 	int err = 0;
+ 	
+-	INIT_LIST_HEAD(&usX2Y(card)->pcm_list);
++	INIT_LIST_HEAD(&usx2y(card)->pcm_list);
+ 
+-	if (0 > (err = usX2Y_audio_stream_new(card, 0xA, 0x8)))
++	if (0 > (err = usx2y_audio_stream_new(card, 0xA, 0x8)))
+ 		return err;
+-	if (le16_to_cpu(usX2Y(card)->dev->descriptor.idProduct) == USB_ID_US428)
+-	     if (0 > (err = usX2Y_audio_stream_new(card, 0, 0xA)))
++	if (le16_to_cpu(usx2y(card)->dev->descriptor.idProduct) == USB_ID_US428)
++	     if (0 > (err = usx2y_audio_stream_new(card, 0, 0xA)))
+ 		     return err;
+-	if (le16_to_cpu(usX2Y(card)->dev->descriptor.idProduct) != USB_ID_US122)
+-		err = usX2Y_rate_set(usX2Y(card), 44100);	// Lets us428 recognize output-volume settings, disturbs us122.
++	if (le16_to_cpu(usx2y(card)->dev->descriptor.idProduct) != USB_ID_US122)
++		err = usx2y_rate_set(usx2y(card), 44100);	// Lets us428 recognize output-volume settings, disturbs us122.
+ 	return err;
+ }
+diff --git a/sound/usb/usx2y/usx2yhwdeppcm.c b/sound/usb/usx2y/usx2yhwdeppcm.c
+index 8253669c6a7d7..399470e51c411 100644
+--- a/sound/usb/usx2y/usx2yhwdeppcm.c
++++ b/sound/usb/usx2y/usx2yhwdeppcm.c
+@@ -47,17 +47,17 @@
+ #include <sound/hwdep.h>
+ 
+ 
+-static int usX2Y_usbpcm_urb_capt_retire(struct snd_usX2Y_substream *subs)
++static int usx2y_usbpcm_urb_capt_retire(struct snd_usx2y_substream *subs)
+ {
+ 	struct urb	*urb = subs->completed_urb;
+ 	struct snd_pcm_runtime *runtime = subs->pcm_substream->runtime;
+ 	int 		i, lens = 0, hwptr_done = subs->hwptr_done;
+-	struct usX2Ydev	*usX2Y = subs->usX2Y;
+-	if (0 > usX2Y->hwdep_pcm_shm->capture_iso_start) { //FIXME
+-		int head = usX2Y->hwdep_pcm_shm->captured_iso_head + 1;
+-		if (head >= ARRAY_SIZE(usX2Y->hwdep_pcm_shm->captured_iso))
++	struct usx2ydev	*usx2y = subs->usx2y;
++	if (0 > usx2y->hwdep_pcm_shm->capture_iso_start) { //FIXME
++		int head = usx2y->hwdep_pcm_shm->captured_iso_head + 1;
++		if (head >= ARRAY_SIZE(usx2y->hwdep_pcm_shm->captured_iso))
+ 			head = 0;
+-		usX2Y->hwdep_pcm_shm->capture_iso_start = head;
++		usx2y->hwdep_pcm_shm->capture_iso_start = head;
+ 		snd_printdd("cap start %i\n", head);
+ 	}
+ 	for (i = 0; i < nr_of_packs(); i++) {
+@@ -65,7 +65,7 @@ static int usX2Y_usbpcm_urb_capt_retire(struct snd_usX2Y_substream *subs)
+ 			snd_printk(KERN_ERR "active frame status %i. Most probably some hardware problem.\n", urb->iso_frame_desc[i].status);
+ 			return urb->iso_frame_desc[i].status;
+ 		}
+-		lens += urb->iso_frame_desc[i].actual_length / usX2Y->stride;
++		lens += urb->iso_frame_desc[i].actual_length / usx2y->stride;
+ 	}
+ 	if ((hwptr_done += lens) >= runtime->buffer_size)
+ 		hwptr_done -= runtime->buffer_size;
+@@ -79,10 +79,10 @@ static int usX2Y_usbpcm_urb_capt_retire(struct snd_usX2Y_substream *subs)
+ 	return 0;
+ }
+ 
+-static inline int usX2Y_iso_frames_per_buffer(struct snd_pcm_runtime *runtime,
+-					      struct usX2Ydev * usX2Y)
++static inline int usx2y_iso_frames_per_buffer(struct snd_pcm_runtime *runtime,
++					      struct usx2ydev * usx2y)
+ {
+-	return (runtime->buffer_size * 1000) / usX2Y->rate + 1;	//FIXME: so far only correct period_size == 2^x ?
++	return (runtime->buffer_size * 1000) / usx2y->rate + 1;	//FIXME: so far only correct period_size == 2^x ?
+ }
+ 
+ /*
+@@ -95,17 +95,17 @@ static inline int usX2Y_iso_frames_per_buffer(struct snd_pcm_runtime *runtime,
+  * it directly from the buffer.  thus the data is once copied to
+  * a temporary buffer and urb points to that.
+  */
+-static int usX2Y_hwdep_urb_play_prepare(struct snd_usX2Y_substream *subs,
++static int usx2y_hwdep_urb_play_prepare(struct snd_usx2y_substream *subs,
+ 					struct urb *urb)
+ {
+ 	int count, counts, pack;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	struct snd_usX2Y_hwdep_pcm_shm *shm = usX2Y->hwdep_pcm_shm;
++	struct usx2ydev *usx2y = subs->usx2y;
++	struct snd_usx2y_hwdep_pcm_shm *shm = usx2y->hwdep_pcm_shm;
+ 	struct snd_pcm_runtime *runtime = subs->pcm_substream->runtime;
+ 
+ 	if (0 > shm->playback_iso_start) {
+ 		shm->playback_iso_start = shm->captured_iso_head -
+-			usX2Y_iso_frames_per_buffer(runtime, usX2Y);
++			usx2y_iso_frames_per_buffer(runtime, usx2y);
+ 		if (0 > shm->playback_iso_start)
+ 			shm->playback_iso_start += ARRAY_SIZE(shm->captured_iso);
+ 		shm->playback_iso_head = shm->playback_iso_start;
+@@ -114,7 +114,7 @@ static int usX2Y_hwdep_urb_play_prepare(struct snd_usX2Y_substream *subs,
+ 	count = 0;
+ 	for (pack = 0; pack < nr_of_packs(); pack++) {
+ 		/* calculate the size of a packet */
+-		counts = shm->captured_iso[shm->playback_iso_head].length / usX2Y->stride;
++		counts = shm->captured_iso[shm->playback_iso_head].length / usx2y->stride;
+ 		if (counts < 43 || counts > 50) {
+ 			snd_printk(KERN_ERR "should not be here with counts=%i\n", counts);
+ 			return -EPIPE;
+@@ -122,26 +122,26 @@ static int usX2Y_hwdep_urb_play_prepare(struct snd_usX2Y_substream *subs,
+ 		/* set up descriptor */
+ 		urb->iso_frame_desc[pack].offset = shm->captured_iso[shm->playback_iso_head].offset;
+ 		urb->iso_frame_desc[pack].length = shm->captured_iso[shm->playback_iso_head].length;
+-		if (atomic_read(&subs->state) != state_RUNNING)
++		if (atomic_read(&subs->state) != STATE_RUNNING)
+ 			memset((char *)urb->transfer_buffer + urb->iso_frame_desc[pack].offset, 0,
+ 			       urb->iso_frame_desc[pack].length);
+ 		if (++shm->playback_iso_head >= ARRAY_SIZE(shm->captured_iso))
+ 			shm->playback_iso_head = 0;
+ 		count += counts;
+ 	}
+-	urb->transfer_buffer_length = count * usX2Y->stride;
++	urb->transfer_buffer_length = count * usx2y->stride;
+ 	return 0;
+ }
+ 
+ 
+-static inline void usX2Y_usbpcm_urb_capt_iso_advance(struct snd_usX2Y_substream *subs,
++static inline void usx2y_usbpcm_urb_capt_iso_advance(struct snd_usx2y_substream *subs,
+ 						     struct urb *urb)
+ {
+ 	int pack;
+ 	for (pack = 0; pack < nr_of_packs(); ++pack) {
+ 		struct usb_iso_packet_descriptor *desc = urb->iso_frame_desc + pack;
+ 		if (NULL != subs) {
+-			struct snd_usX2Y_hwdep_pcm_shm *shm = subs->usX2Y->hwdep_pcm_shm;
++			struct snd_usx2y_hwdep_pcm_shm *shm = subs->usx2y->hwdep_pcm_shm;
+ 			int head = shm->captured_iso_head + 1;
+ 			if (head >= ARRAY_SIZE(shm->captured_iso))
+ 				head = 0;
+@@ -157,9 +157,9 @@ static inline void usX2Y_usbpcm_urb_capt_iso_advance(struct snd_usX2Y_substream
+ 	}
+ }
+ 
+-static inline int usX2Y_usbpcm_usbframe_complete(struct snd_usX2Y_substream *capsubs,
+-						 struct snd_usX2Y_substream *capsubs2,
+-						 struct snd_usX2Y_substream *playbacksubs,
++static inline int usx2y_usbpcm_usbframe_complete(struct snd_usx2y_substream *capsubs,
++						 struct snd_usx2y_substream *capsubs2,
++						 struct snd_usx2y_substream *playbacksubs,
+ 						 int frame)
+ {
+ 	int err, state;
+@@ -167,25 +167,25 @@ static inline int usX2Y_usbpcm_usbframe_complete(struct snd_usX2Y_substream *cap
+ 
+ 	state = atomic_read(&playbacksubs->state);
+ 	if (NULL != urb) {
+-		if (state == state_RUNNING)
+-			usX2Y_urb_play_retire(playbacksubs, urb);
+-		else if (state >= state_PRERUNNING)
++		if (state == STATE_RUNNING)
++			usx2y_urb_play_retire(playbacksubs, urb);
++		else if (state >= STATE_PRERUNNING)
+ 			atomic_inc(&playbacksubs->state);
+ 	} else {
+ 		switch (state) {
+-		case state_STARTING1:
++		case STATE_STARTING1:
+ 			urb = playbacksubs->urb[0];
+ 			atomic_inc(&playbacksubs->state);
+ 			break;
+-		case state_STARTING2:
++		case STATE_STARTING2:
+ 			urb = playbacksubs->urb[1];
+ 			atomic_inc(&playbacksubs->state);
+ 			break;
+ 		}
+ 	}
+ 	if (urb) {
+-		if ((err = usX2Y_hwdep_urb_play_prepare(playbacksubs, urb)) ||
+-		    (err = usX2Y_urb_submit(playbacksubs, urb, frame))) {
++		if ((err = usx2y_hwdep_urb_play_prepare(playbacksubs, urb)) ||
++		    (err = usx2y_urb_submit(playbacksubs, urb, frame))) {
+ 			return err;
+ 		}
+ 	}
+@@ -193,19 +193,19 @@ static inline int usX2Y_usbpcm_usbframe_complete(struct snd_usX2Y_substream *cap
+ 	playbacksubs->completed_urb = NULL;
+ 
+ 	state = atomic_read(&capsubs->state);
+-	if (state >= state_PREPARED) {
+-		if (state == state_RUNNING) {
+-			if ((err = usX2Y_usbpcm_urb_capt_retire(capsubs)))
++	if (state >= STATE_PREPARED) {
++		if (state == STATE_RUNNING) {
++			if ((err = usx2y_usbpcm_urb_capt_retire(capsubs)))
+ 				return err;
+-		} else if (state >= state_PRERUNNING)
++		} else if (state >= STATE_PRERUNNING)
+ 			atomic_inc(&capsubs->state);
+-		usX2Y_usbpcm_urb_capt_iso_advance(capsubs, capsubs->completed_urb);
++		usx2y_usbpcm_urb_capt_iso_advance(capsubs, capsubs->completed_urb);
+ 		if (NULL != capsubs2)
+-			usX2Y_usbpcm_urb_capt_iso_advance(NULL, capsubs2->completed_urb);
+-		if ((err = usX2Y_urb_submit(capsubs, capsubs->completed_urb, frame)))
++			usx2y_usbpcm_urb_capt_iso_advance(NULL, capsubs2->completed_urb);
++		if ((err = usx2y_urb_submit(capsubs, capsubs->completed_urb, frame)))
+ 			return err;
+ 		if (NULL != capsubs2)
+-			if ((err = usX2Y_urb_submit(capsubs2, capsubs2->completed_urb, frame)))
++			if ((err = usx2y_urb_submit(capsubs2, capsubs2->completed_urb, frame)))
+ 				return err;
+ 	}
+ 	capsubs->completed_urb = NULL;
+@@ -215,42 +215,42 @@ static inline int usX2Y_usbpcm_usbframe_complete(struct snd_usX2Y_substream *cap
+ }
+ 
+ 
+-static void i_usX2Y_usbpcm_urb_complete(struct urb *urb)
++static void i_usx2y_usbpcm_urb_complete(struct urb *urb)
+ {
+-	struct snd_usX2Y_substream *subs = urb->context;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	struct snd_usX2Y_substream *capsubs, *capsubs2, *playbacksubs;
++	struct snd_usx2y_substream *subs = urb->context;
++	struct usx2ydev *usx2y = subs->usx2y;
++	struct snd_usx2y_substream *capsubs, *capsubs2, *playbacksubs;
+ 
+-	if (unlikely(atomic_read(&subs->state) < state_PREPARED)) {
++	if (unlikely(atomic_read(&subs->state) < STATE_PREPARED)) {
+ 		snd_printdd("hcd_frame=%i ep=%i%s status=%i start_frame=%i\n",
+-			    usb_get_current_frame_number(usX2Y->dev),
++			    usb_get_current_frame_number(usx2y->dev),
+ 			    subs->endpoint, usb_pipein(urb->pipe) ? "in" : "out",
+ 			    urb->status, urb->start_frame);
+ 		return;
+ 	}
+ 	if (unlikely(urb->status)) {
+-		usX2Y_error_urb_status(usX2Y, subs, urb);
++		usx2y_error_urb_status(usx2y, subs, urb);
+ 		return;
+ 	}
+ 
+ 	subs->completed_urb = urb;
+-	capsubs = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
+-	capsubs2 = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
+-	playbacksubs = usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+-	if (capsubs->completed_urb && atomic_read(&capsubs->state) >= state_PREPARED &&
++	capsubs = usx2y->subs[SNDRV_PCM_STREAM_CAPTURE];
++	capsubs2 = usx2y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
++	playbacksubs = usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK];
++	if (capsubs->completed_urb && atomic_read(&capsubs->state) >= STATE_PREPARED &&
+ 	    (NULL == capsubs2 || capsubs2->completed_urb) &&
+-	    (playbacksubs->completed_urb || atomic_read(&playbacksubs->state) < state_PREPARED)) {
+-		if (!usX2Y_usbpcm_usbframe_complete(capsubs, capsubs2, playbacksubs, urb->start_frame))
+-			usX2Y->wait_iso_frame += nr_of_packs();
++	    (playbacksubs->completed_urb || atomic_read(&playbacksubs->state) < STATE_PREPARED)) {
++		if (!usx2y_usbpcm_usbframe_complete(capsubs, capsubs2, playbacksubs, urb->start_frame))
++			usx2y->wait_iso_frame += nr_of_packs();
+ 		else {
+ 			snd_printdd("\n");
+-			usX2Y_clients_stop(usX2Y);
++			usx2y_clients_stop(usx2y);
+ 		}
+ 	}
+ }
+ 
+ 
+-static void usX2Y_hwdep_urb_release(struct urb **urb)
++static void usx2y_hwdep_urb_release(struct urb **urb)
+ {
+ 	usb_kill_urb(*urb);
+ 	usb_free_urb(*urb);
+@@ -260,49 +260,49 @@ static void usX2Y_hwdep_urb_release(struct urb **urb)
+ /*
+  * release a substream
+  */
+-static void usX2Y_usbpcm_urbs_release(struct snd_usX2Y_substream *subs)
++static void usx2y_usbpcm_urbs_release(struct snd_usx2y_substream *subs)
+ {
+ 	int i;
+-	snd_printdd("snd_usX2Y_urbs_release() %i\n", subs->endpoint);
++	snd_printdd("snd_usx2y_urbs_release() %i\n", subs->endpoint);
+ 	for (i = 0; i < NRURBS; i++)
+-		usX2Y_hwdep_urb_release(subs->urb + i);
++		usx2y_hwdep_urb_release(subs->urb + i);
+ }
+ 
+-static void usX2Y_usbpcm_subs_startup_finish(struct usX2Ydev * usX2Y)
++static void usx2y_usbpcm_subs_startup_finish(struct usx2ydev * usx2y)
+ {
+-	usX2Y_urbs_set_complete(usX2Y, i_usX2Y_usbpcm_urb_complete);
+-	usX2Y->prepare_subs = NULL;
++	usx2y_urbs_set_complete(usx2y, i_usx2y_usbpcm_urb_complete);
++	usx2y->prepare_subs = NULL;
+ }
+ 
+-static void i_usX2Y_usbpcm_subs_startup(struct urb *urb)
++static void i_usx2y_usbpcm_subs_startup(struct urb *urb)
+ {
+-	struct snd_usX2Y_substream *subs = urb->context;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	struct snd_usX2Y_substream *prepare_subs = usX2Y->prepare_subs;
++	struct snd_usx2y_substream *subs = urb->context;
++	struct usx2ydev *usx2y = subs->usx2y;
++	struct snd_usx2y_substream *prepare_subs = usx2y->prepare_subs;
+ 	if (NULL != prepare_subs &&
+ 	    urb->start_frame == prepare_subs->urb[0]->start_frame) {
+ 		atomic_inc(&prepare_subs->state);
+-		if (prepare_subs == usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE]) {
+-			struct snd_usX2Y_substream *cap_subs2 = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
++		if (prepare_subs == usx2y->subs[SNDRV_PCM_STREAM_CAPTURE]) {
++			struct snd_usx2y_substream *cap_subs2 = usx2y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
+ 			if (cap_subs2 != NULL)
+ 				atomic_inc(&cap_subs2->state);
+ 		}
+-		usX2Y_usbpcm_subs_startup_finish(usX2Y);
+-		wake_up(&usX2Y->prepare_wait_queue);
++		usx2y_usbpcm_subs_startup_finish(usx2y);
++		wake_up(&usx2y->prepare_wait_queue);
+ 	}
+ 
+-	i_usX2Y_usbpcm_urb_complete(urb);
++	i_usx2y_usbpcm_urb_complete(urb);
+ }
+ 
+ /*
+  * initialize a substream's urbs
+  */
+-static int usX2Y_usbpcm_urbs_allocate(struct snd_usX2Y_substream *subs)
++static int usx2y_usbpcm_urbs_allocate(struct snd_usx2y_substream *subs)
+ {
+ 	int i;
+ 	unsigned int pipe;
+-	int is_playback = subs == subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+-	struct usb_device *dev = subs->usX2Y->dev;
++	int is_playback = subs == subs->usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK];
++	struct usb_device *dev = subs->usx2y->dev;
+ 
+ 	pipe = is_playback ? usb_sndisocpipe(dev, subs->endpoint) :
+ 			usb_rcvisocpipe(dev, subs->endpoint);
+@@ -319,21 +319,21 @@ static int usX2Y_usbpcm_urbs_allocate(struct snd_usX2Y_substream *subs)
+ 		}
+ 		*purb = usb_alloc_urb(nr_of_packs(), GFP_KERNEL);
+ 		if (NULL == *purb) {
+-			usX2Y_usbpcm_urbs_release(subs);
++			usx2y_usbpcm_urbs_release(subs);
+ 			return -ENOMEM;
+ 		}
+ 		(*purb)->transfer_buffer = is_playback ?
+-			subs->usX2Y->hwdep_pcm_shm->playback : (
++			subs->usx2y->hwdep_pcm_shm->playback : (
+ 				subs->endpoint == 0x8 ?
+-				subs->usX2Y->hwdep_pcm_shm->capture0x8 :
+-				subs->usX2Y->hwdep_pcm_shm->capture0xA);
++				subs->usx2y->hwdep_pcm_shm->capture0x8 :
++				subs->usx2y->hwdep_pcm_shm->capture0xA);
+ 
+ 		(*purb)->dev = dev;
+ 		(*purb)->pipe = pipe;
+ 		(*purb)->number_of_packets = nr_of_packs();
+ 		(*purb)->context = subs;
+ 		(*purb)->interval = 1;
+-		(*purb)->complete = i_usX2Y_usbpcm_subs_startup;
++		(*purb)->complete = i_usx2y_usbpcm_subs_startup;
+ 	}
+ 	return 0;
+ }
+@@ -341,91 +341,91 @@ static int usX2Y_usbpcm_urbs_allocate(struct snd_usX2Y_substream *subs)
+ /*
+  * free the buffer
+  */
+-static int snd_usX2Y_usbpcm_hw_free(struct snd_pcm_substream *substream)
++static int snd_usx2y_usbpcm_hw_free(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_usX2Y_substream *subs = runtime->private_data,
+-		*cap_subs2 = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
+-	mutex_lock(&subs->usX2Y->pcm_mutex);
+-	snd_printdd("snd_usX2Y_usbpcm_hw_free(%p)\n", substream);
++	struct snd_usx2y_substream *subs = runtime->private_data,
++		*cap_subs2 = subs->usx2y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
++	mutex_lock(&subs->usx2y->pcm_mutex);
++	snd_printdd("snd_usx2y_usbpcm_hw_free(%p)\n", substream);
+ 
+ 	if (SNDRV_PCM_STREAM_PLAYBACK == substream->stream) {
+-		struct snd_usX2Y_substream *cap_subs = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
+-		atomic_set(&subs->state, state_STOPPED);
+-		usX2Y_usbpcm_urbs_release(subs);
++		struct snd_usx2y_substream *cap_subs = subs->usx2y->subs[SNDRV_PCM_STREAM_CAPTURE];
++		atomic_set(&subs->state, STATE_STOPPED);
++		usx2y_usbpcm_urbs_release(subs);
+ 		if (!cap_subs->pcm_substream ||
+ 		    !cap_subs->pcm_substream->runtime ||
+ 		    !cap_subs->pcm_substream->runtime->status ||
+ 		    cap_subs->pcm_substream->runtime->status->state < SNDRV_PCM_STATE_PREPARED) {
+-			atomic_set(&cap_subs->state, state_STOPPED);
++			atomic_set(&cap_subs->state, STATE_STOPPED);
+ 			if (NULL != cap_subs2)
+-				atomic_set(&cap_subs2->state, state_STOPPED);
+-			usX2Y_usbpcm_urbs_release(cap_subs);
++				atomic_set(&cap_subs2->state, STATE_STOPPED);
++			usx2y_usbpcm_urbs_release(cap_subs);
+ 			if (NULL != cap_subs2)
+-				usX2Y_usbpcm_urbs_release(cap_subs2);
++				usx2y_usbpcm_urbs_release(cap_subs2);
+ 		}
+ 	} else {
+-		struct snd_usX2Y_substream *playback_subs = subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+-		if (atomic_read(&playback_subs->state) < state_PREPARED) {
+-			atomic_set(&subs->state, state_STOPPED);
++		struct snd_usx2y_substream *playback_subs = subs->usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK];
++		if (atomic_read(&playback_subs->state) < STATE_PREPARED) {
++			atomic_set(&subs->state, STATE_STOPPED);
+ 			if (NULL != cap_subs2)
+-				atomic_set(&cap_subs2->state, state_STOPPED);
+-			usX2Y_usbpcm_urbs_release(subs);
++				atomic_set(&cap_subs2->state, STATE_STOPPED);
++			usx2y_usbpcm_urbs_release(subs);
+ 			if (NULL != cap_subs2)
+-				usX2Y_usbpcm_urbs_release(cap_subs2);
++				usx2y_usbpcm_urbs_release(cap_subs2);
+ 		}
+ 	}
+-	mutex_unlock(&subs->usX2Y->pcm_mutex);
++	mutex_unlock(&subs->usx2y->pcm_mutex);
+ 	return 0;
+ }
+ 
+-static void usX2Y_usbpcm_subs_startup(struct snd_usX2Y_substream *subs)
++static void usx2y_usbpcm_subs_startup(struct snd_usx2y_substream *subs)
+ {
+-	struct usX2Ydev * usX2Y = subs->usX2Y;
+-	usX2Y->prepare_subs = subs;
++	struct usx2ydev * usx2y = subs->usx2y;
++	usx2y->prepare_subs = subs;
+ 	subs->urb[0]->start_frame = -1;
+-	smp_wmb();	// Make sure above modifications are seen by i_usX2Y_subs_startup()
+-	usX2Y_urbs_set_complete(usX2Y, i_usX2Y_usbpcm_subs_startup);
++	smp_wmb();	// Make sure above modifications are seen by i_usx2y_subs_startup()
++	usx2y_urbs_set_complete(usx2y, i_usx2y_usbpcm_subs_startup);
+ }
+ 
+-static int usX2Y_usbpcm_urbs_start(struct snd_usX2Y_substream *subs)
++static int usx2y_usbpcm_urbs_start(struct snd_usx2y_substream *subs)
+ {
+ 	int	p, u, err,
+ 		stream = subs->pcm_substream->stream;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
++	struct usx2ydev *usx2y = subs->usx2y;
+ 
+ 	if (SNDRV_PCM_STREAM_CAPTURE == stream) {
+-		usX2Y->hwdep_pcm_shm->captured_iso_head = -1;
+-		usX2Y->hwdep_pcm_shm->captured_iso_frames = 0;
++		usx2y->hwdep_pcm_shm->captured_iso_head = -1;
++		usx2y->hwdep_pcm_shm->captured_iso_frames = 0;
+ 	}
+ 
+ 	for (p = 0; 3 >= (stream + p); p += 2) {
+-		struct snd_usX2Y_substream *subs = usX2Y->subs[stream + p];
++		struct snd_usx2y_substream *subs = usx2y->subs[stream + p];
+ 		if (subs != NULL) {
+-			if ((err = usX2Y_usbpcm_urbs_allocate(subs)) < 0)
++			if ((err = usx2y_usbpcm_urbs_allocate(subs)) < 0)
+ 				return err;
+ 			subs->completed_urb = NULL;
+ 		}
+ 	}
+ 
+ 	for (p = 0; p < 4; p++) {
+-		struct snd_usX2Y_substream *subs = usX2Y->subs[p];
+-		if (subs != NULL && atomic_read(&subs->state) >= state_PREPARED)
++		struct snd_usx2y_substream *subs = usx2y->subs[p];
++		if (subs != NULL && atomic_read(&subs->state) >= STATE_PREPARED)
+ 			goto start;
+ 	}
+ 
+  start:
+-	usX2Y_usbpcm_subs_startup(subs);
++	usx2y_usbpcm_subs_startup(subs);
+ 	for (u = 0; u < NRURBS; u++) {
+ 		for (p = 0; 3 >= (stream + p); p += 2) {
+-			struct snd_usX2Y_substream *subs = usX2Y->subs[stream + p];
++			struct snd_usx2y_substream *subs = usx2y->subs[stream + p];
+ 			if (subs != NULL) {
+ 				struct urb *urb = subs->urb[u];
+ 				if (usb_pipein(urb->pipe)) {
+ 					unsigned long pack;
+ 					if (0 == u)
+-						atomic_set(&subs->state, state_STARTING3);
+-					urb->dev = usX2Y->dev;
++						atomic_set(&subs->state, STATE_STARTING3);
++					urb->dev = usx2y->dev;
+ 					for (pack = 0; pack < nr_of_packs(); pack++) {
+ 						urb->iso_frame_desc[pack].offset = subs->maxpacksize * (pack + u * nr_of_packs());
+ 						urb->iso_frame_desc[pack].length = subs->maxpacksize;
+@@ -438,25 +438,25 @@ static int usX2Y_usbpcm_urbs_start(struct snd_usX2Y_substream *subs)
+ 					}  else {
+ 						snd_printdd("%i\n", urb->start_frame);
+ 						if (u == 0)
+-							usX2Y->wait_iso_frame = urb->start_frame;
++							usx2y->wait_iso_frame = urb->start_frame;
+ 					}
+ 					urb->transfer_flags = 0;
+ 				} else {
+-					atomic_set(&subs->state, state_STARTING1);
++					atomic_set(&subs->state, STATE_STARTING1);
+ 					break;
+ 				}			
+ 			}
+ 		}
+ 	}
+ 	err = 0;
+-	wait_event(usX2Y->prepare_wait_queue, NULL == usX2Y->prepare_subs);
+-	if (atomic_read(&subs->state) != state_PREPARED)
++	wait_event(usx2y->prepare_wait_queue, NULL == usx2y->prepare_subs);
++	if (atomic_read(&subs->state) != STATE_PREPARED)
+ 		err = -EPIPE;
+ 		
+  cleanup:
+ 	if (err) {
+-		usX2Y_subs_startup_finish(usX2Y);	// Call it now
+-		usX2Y_clients_stop(usX2Y);		// something is completely wroong > stop evrything			
++		usx2y_subs_startup_finish(usx2y);	// Call it now
++		usx2y_clients_stop(usx2y);		// something is completely wroong > stop evrything			
+ 	}
+ 	return err;
+ }
+@@ -466,69 +466,69 @@ static int usX2Y_usbpcm_urbs_start(struct snd_usX2Y_substream *subs)
+  *
+  * set format and initialize urbs
+  */
+-static int snd_usX2Y_usbpcm_prepare(struct snd_pcm_substream *substream)
++static int snd_usx2y_usbpcm_prepare(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_usX2Y_substream *subs = runtime->private_data;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	struct snd_usX2Y_substream *capsubs = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
++	struct snd_usx2y_substream *subs = runtime->private_data;
++	struct usx2ydev *usx2y = subs->usx2y;
++	struct snd_usx2y_substream *capsubs = subs->usx2y->subs[SNDRV_PCM_STREAM_CAPTURE];
+ 	int err = 0;
+-	snd_printdd("snd_usX2Y_pcm_prepare(%p)\n", substream);
++	snd_printdd("snd_usx2y_pcm_prepare(%p)\n", substream);
+ 
+-	if (NULL == usX2Y->hwdep_pcm_shm) {
+-		usX2Y->hwdep_pcm_shm = alloc_pages_exact(sizeof(struct snd_usX2Y_hwdep_pcm_shm),
++	if (NULL == usx2y->hwdep_pcm_shm) {
++		usx2y->hwdep_pcm_shm = alloc_pages_exact(sizeof(struct snd_usx2y_hwdep_pcm_shm),
+ 							 GFP_KERNEL);
+-		if (!usX2Y->hwdep_pcm_shm)
++		if (!usx2y->hwdep_pcm_shm)
+ 			return -ENOMEM;
+-		memset(usX2Y->hwdep_pcm_shm, 0, sizeof(struct snd_usX2Y_hwdep_pcm_shm));
++		memset(usx2y->hwdep_pcm_shm, 0, sizeof(struct snd_usx2y_hwdep_pcm_shm));
+ 	}
+ 
+-	mutex_lock(&usX2Y->pcm_mutex);
+-	usX2Y_subs_prepare(subs);
++	mutex_lock(&usx2y->pcm_mutex);
++	usx2y_subs_prepare(subs);
+ // Start hardware streams
+ // SyncStream first....
+-	if (atomic_read(&capsubs->state) < state_PREPARED) {
+-		if (usX2Y->format != runtime->format)
+-			if ((err = usX2Y_format_set(usX2Y, runtime->format)) < 0)
++	if (atomic_read(&capsubs->state) < STATE_PREPARED) {
++		if (usx2y->format != runtime->format)
++			if ((err = usx2y_format_set(usx2y, runtime->format)) < 0)
+ 				goto up_prepare_mutex;
+-		if (usX2Y->rate != runtime->rate)
+-			if ((err = usX2Y_rate_set(usX2Y, runtime->rate)) < 0)
++		if (usx2y->rate != runtime->rate)
++			if ((err = usx2y_rate_set(usx2y, runtime->rate)) < 0)
+ 				goto up_prepare_mutex;
+ 		snd_printdd("starting capture pipe for %s\n", subs == capsubs ?
+ 			    "self" : "playpipe");
+-		if (0 > (err = usX2Y_usbpcm_urbs_start(capsubs)))
++		if (0 > (err = usx2y_usbpcm_urbs_start(capsubs)))
+ 			goto up_prepare_mutex;
+ 	}
+ 
+ 	if (subs != capsubs) {
+-		usX2Y->hwdep_pcm_shm->playback_iso_start = -1;
+-		if (atomic_read(&subs->state) < state_PREPARED) {
+-			while (usX2Y_iso_frames_per_buffer(runtime, usX2Y) >
+-			       usX2Y->hwdep_pcm_shm->captured_iso_frames) {
++		usx2y->hwdep_pcm_shm->playback_iso_start = -1;
++		if (atomic_read(&subs->state) < STATE_PREPARED) {
++			while (usx2y_iso_frames_per_buffer(runtime, usx2y) >
++			       usx2y->hwdep_pcm_shm->captured_iso_frames) {
+ 				snd_printdd("Wait: iso_frames_per_buffer=%i,"
+ 					    "captured_iso_frames=%i\n",
+-					    usX2Y_iso_frames_per_buffer(runtime, usX2Y),
+-					    usX2Y->hwdep_pcm_shm->captured_iso_frames);
++					    usx2y_iso_frames_per_buffer(runtime, usx2y),
++					    usx2y->hwdep_pcm_shm->captured_iso_frames);
+ 				if (msleep_interruptible(10)) {
+ 					err = -ERESTARTSYS;
+ 					goto up_prepare_mutex;
+ 				}
+ 			} 
+-			if (0 > (err = usX2Y_usbpcm_urbs_start(subs)))
++			if (0 > (err = usx2y_usbpcm_urbs_start(subs)))
+ 				goto up_prepare_mutex;
+ 		}
+ 		snd_printdd("Ready: iso_frames_per_buffer=%i,captured_iso_frames=%i\n",
+-			    usX2Y_iso_frames_per_buffer(runtime, usX2Y),
+-			    usX2Y->hwdep_pcm_shm->captured_iso_frames);
++			    usx2y_iso_frames_per_buffer(runtime, usx2y),
++			    usx2y->hwdep_pcm_shm->captured_iso_frames);
+ 	} else
+-		usX2Y->hwdep_pcm_shm->capture_iso_start = -1;
++		usx2y->hwdep_pcm_shm->capture_iso_start = -1;
+ 
+  up_prepare_mutex:
+-	mutex_unlock(&usX2Y->pcm_mutex);
++	mutex_unlock(&usx2y->pcm_mutex);
+ 	return err;
+ }
+ 
+-static const struct snd_pcm_hardware snd_usX2Y_4c =
++static const struct snd_pcm_hardware snd_usx2y_4c =
+ {
+ 	.info =			(SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_INTERLEAVED |
+ 				 SNDRV_PCM_INFO_BLOCK_TRANSFER |
+@@ -549,17 +549,17 @@ static const struct snd_pcm_hardware snd_usX2Y_4c =
+ 
+ 
+ 
+-static int snd_usX2Y_usbpcm_open(struct snd_pcm_substream *substream)
++static int snd_usx2y_usbpcm_open(struct snd_pcm_substream *substream)
+ {
+-	struct snd_usX2Y_substream	*subs = ((struct snd_usX2Y_substream **)
++	struct snd_usx2y_substream	*subs = ((struct snd_usx2y_substream **)
+ 					 snd_pcm_substream_chip(substream))[substream->stream];
+ 	struct snd_pcm_runtime	*runtime = substream->runtime;
+ 
+-	if (!(subs->usX2Y->chip_status & USX2Y_STAT_CHIP_MMAP_PCM_URBS))
++	if (!(subs->usx2y->chip_status & USX2Y_STAT_CHIP_MMAP_PCM_URBS))
+ 		return -EBUSY;
+ 
+-	runtime->hw = SNDRV_PCM_STREAM_PLAYBACK == substream->stream ? snd_usX2Y_2c :
+-		(subs->usX2Y->subs[3] ? snd_usX2Y_4c : snd_usX2Y_2c);
++	runtime->hw = SNDRV_PCM_STREAM_PLAYBACK == substream->stream ? snd_usx2y_2c :
++		(subs->usx2y->subs[3] ? snd_usx2y_4c : snd_usx2y_2c);
+ 	runtime->private_data = subs;
+ 	subs->pcm_substream = substream;
+ 	snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_PERIOD_TIME, 1000, 200000);
+@@ -567,35 +567,35 @@ static int snd_usX2Y_usbpcm_open(struct snd_pcm_substream *substream)
+ }
+ 
+ 
+-static int snd_usX2Y_usbpcm_close(struct snd_pcm_substream *substream)
++static int snd_usx2y_usbpcm_close(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_usX2Y_substream *subs = runtime->private_data;
++	struct snd_usx2y_substream *subs = runtime->private_data;
+ 
+ 	subs->pcm_substream = NULL;
+ 	return 0;
+ }
+ 
+ 
+-static const struct snd_pcm_ops snd_usX2Y_usbpcm_ops =
++static const struct snd_pcm_ops snd_usx2y_usbpcm_ops =
+ {
+-	.open =		snd_usX2Y_usbpcm_open,
+-	.close =	snd_usX2Y_usbpcm_close,
+-	.hw_params =	snd_usX2Y_pcm_hw_params,
+-	.hw_free =	snd_usX2Y_usbpcm_hw_free,
+-	.prepare =	snd_usX2Y_usbpcm_prepare,
+-	.trigger =	snd_usX2Y_pcm_trigger,
+-	.pointer =	snd_usX2Y_pcm_pointer,
++	.open =		snd_usx2y_usbpcm_open,
++	.close =	snd_usx2y_usbpcm_close,
++	.hw_params =	snd_usx2y_pcm_hw_params,
++	.hw_free =	snd_usx2y_usbpcm_hw_free,
++	.prepare =	snd_usx2y_usbpcm_prepare,
++	.trigger =	snd_usx2y_pcm_trigger,
++	.pointer =	snd_usx2y_pcm_pointer,
+ };
+ 
+ 
+-static int usX2Y_pcms_busy_check(struct snd_card *card)
++static int usx2y_pcms_busy_check(struct snd_card *card)
+ {
+-	struct usX2Ydev	*dev = usX2Y(card);
++	struct usx2ydev	*dev = usx2y(card);
+ 	int i;
+ 
+ 	for (i = 0; i < dev->pcm_devs * 2; i++) {
+-		struct snd_usX2Y_substream *subs = dev->subs[i];
++		struct snd_usx2y_substream *subs = dev->subs[i];
+ 		if (subs && subs->pcm_substream &&
+ 		    SUBSTREAM_BUSY(subs->pcm_substream))
+ 			return -EBUSY;
+@@ -603,102 +603,102 @@ static int usX2Y_pcms_busy_check(struct snd_card *card)
+ 	return 0;
+ }
+ 
+-static int snd_usX2Y_hwdep_pcm_open(struct snd_hwdep *hw, struct file *file)
++static int snd_usx2y_hwdep_pcm_open(struct snd_hwdep *hw, struct file *file)
+ {
+ 	struct snd_card *card = hw->card;
+ 	int err;
+ 
+-	mutex_lock(&usX2Y(card)->pcm_mutex);
+-	err = usX2Y_pcms_busy_check(card);
++	mutex_lock(&usx2y(card)->pcm_mutex);
++	err = usx2y_pcms_busy_check(card);
+ 	if (!err)
+-		usX2Y(card)->chip_status |= USX2Y_STAT_CHIP_MMAP_PCM_URBS;
+-	mutex_unlock(&usX2Y(card)->pcm_mutex);
++		usx2y(card)->chip_status |= USX2Y_STAT_CHIP_MMAP_PCM_URBS;
++	mutex_unlock(&usx2y(card)->pcm_mutex);
+ 	return err;
+ }
+ 
+ 
+-static int snd_usX2Y_hwdep_pcm_release(struct snd_hwdep *hw, struct file *file)
++static int snd_usx2y_hwdep_pcm_release(struct snd_hwdep *hw, struct file *file)
+ {
+ 	struct snd_card *card = hw->card;
+ 	int err;
+ 
+-	mutex_lock(&usX2Y(card)->pcm_mutex);
+-	err = usX2Y_pcms_busy_check(card);
++	mutex_lock(&usx2y(card)->pcm_mutex);
++	err = usx2y_pcms_busy_check(card);
+ 	if (!err)
+-		usX2Y(hw->card)->chip_status &= ~USX2Y_STAT_CHIP_MMAP_PCM_URBS;
+-	mutex_unlock(&usX2Y(card)->pcm_mutex);
++		usx2y(hw->card)->chip_status &= ~USX2Y_STAT_CHIP_MMAP_PCM_URBS;
++	mutex_unlock(&usx2y(card)->pcm_mutex);
+ 	return err;
+ }
+ 
+ 
+-static void snd_usX2Y_hwdep_pcm_vm_open(struct vm_area_struct *area)
++static void snd_usx2y_hwdep_pcm_vm_open(struct vm_area_struct *area)
+ {
+ }
+ 
+ 
+-static void snd_usX2Y_hwdep_pcm_vm_close(struct vm_area_struct *area)
++static void snd_usx2y_hwdep_pcm_vm_close(struct vm_area_struct *area)
+ {
+ }
+ 
+ 
+-static vm_fault_t snd_usX2Y_hwdep_pcm_vm_fault(struct vm_fault *vmf)
++static vm_fault_t snd_usx2y_hwdep_pcm_vm_fault(struct vm_fault *vmf)
+ {
+ 	unsigned long offset;
+ 	void *vaddr;
+ 
+ 	offset = vmf->pgoff << PAGE_SHIFT;
+-	vaddr = (char *)((struct usX2Ydev *)vmf->vma->vm_private_data)->hwdep_pcm_shm + offset;
++	vaddr = (char *)((struct usx2ydev *)vmf->vma->vm_private_data)->hwdep_pcm_shm + offset;
+ 	vmf->page = virt_to_page(vaddr);
+ 	get_page(vmf->page);
+ 	return 0;
+ }
+ 
+ 
+-static const struct vm_operations_struct snd_usX2Y_hwdep_pcm_vm_ops = {
+-	.open = snd_usX2Y_hwdep_pcm_vm_open,
+-	.close = snd_usX2Y_hwdep_pcm_vm_close,
+-	.fault = snd_usX2Y_hwdep_pcm_vm_fault,
++static const struct vm_operations_struct snd_usx2y_hwdep_pcm_vm_ops = {
++	.open = snd_usx2y_hwdep_pcm_vm_open,
++	.close = snd_usx2y_hwdep_pcm_vm_close,
++	.fault = snd_usx2y_hwdep_pcm_vm_fault,
+ };
+ 
+ 
+-static int snd_usX2Y_hwdep_pcm_mmap(struct snd_hwdep * hw, struct file *filp, struct vm_area_struct *area)
++static int snd_usx2y_hwdep_pcm_mmap(struct snd_hwdep * hw, struct file *filp, struct vm_area_struct *area)
+ {
+ 	unsigned long	size = (unsigned long)(area->vm_end - area->vm_start);
+-	struct usX2Ydev	*usX2Y = hw->private_data;
++	struct usx2ydev	*usx2y = hw->private_data;
+ 
+-	if (!(usX2Y->chip_status & USX2Y_STAT_CHIP_INIT))
++	if (!(usx2y->chip_status & USX2Y_STAT_CHIP_INIT))
+ 		return -EBUSY;
+ 
+ 	/* if userspace tries to mmap beyond end of our buffer, fail */ 
+-	if (size > PAGE_ALIGN(sizeof(struct snd_usX2Y_hwdep_pcm_shm))) {
+-		snd_printd("%lu > %lu\n", size, (unsigned long)sizeof(struct snd_usX2Y_hwdep_pcm_shm)); 
++	if (size > PAGE_ALIGN(sizeof(struct snd_usx2y_hwdep_pcm_shm))) {
++		snd_printd("%lu > %lu\n", size, (unsigned long)sizeof(struct snd_usx2y_hwdep_pcm_shm)); 
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!usX2Y->hwdep_pcm_shm) {
++	if (!usx2y->hwdep_pcm_shm) {
+ 		return -ENODEV;
+ 	}
+-	area->vm_ops = &snd_usX2Y_hwdep_pcm_vm_ops;
++	area->vm_ops = &snd_usx2y_hwdep_pcm_vm_ops;
+ 	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+ 	area->vm_private_data = hw->private_data;
+ 	return 0;
+ }
+ 
+ 
+-static void snd_usX2Y_hwdep_pcm_private_free(struct snd_hwdep *hwdep)
++static void snd_usx2y_hwdep_pcm_private_free(struct snd_hwdep *hwdep)
+ {
+-	struct usX2Ydev *usX2Y = hwdep->private_data;
+-	if (NULL != usX2Y->hwdep_pcm_shm)
+-		free_pages_exact(usX2Y->hwdep_pcm_shm, sizeof(struct snd_usX2Y_hwdep_pcm_shm));
++	struct usx2ydev *usx2y = hwdep->private_data;
++	if (NULL != usx2y->hwdep_pcm_shm)
++		free_pages_exact(usx2y->hwdep_pcm_shm, sizeof(struct snd_usx2y_hwdep_pcm_shm));
+ }
+ 
+ 
+-int usX2Y_hwdep_pcm_new(struct snd_card *card)
++int usx2y_hwdep_pcm_new(struct snd_card *card)
+ {
+ 	int err;
+ 	struct snd_hwdep *hw;
+ 	struct snd_pcm *pcm;
+-	struct usb_device *dev = usX2Y(card)->dev;
++	struct usb_device *dev = usx2y(card)->dev;
+ 	if (1 != nr_of_packs())
+ 		return 0;
+ 
+@@ -706,11 +706,11 @@ int usX2Y_hwdep_pcm_new(struct snd_card *card)
+ 		return err;
+ 
+ 	hw->iface = SNDRV_HWDEP_IFACE_USX2Y_PCM;
+-	hw->private_data = usX2Y(card);
+-	hw->private_free = snd_usX2Y_hwdep_pcm_private_free;
+-	hw->ops.open = snd_usX2Y_hwdep_pcm_open;
+-	hw->ops.release = snd_usX2Y_hwdep_pcm_release;
+-	hw->ops.mmap = snd_usX2Y_hwdep_pcm_mmap;
++	hw->private_data = usx2y(card);
++	hw->private_free = snd_usx2y_hwdep_pcm_private_free;
++	hw->ops.open = snd_usx2y_hwdep_pcm_open;
++	hw->ops.release = snd_usx2y_hwdep_pcm_release;
++	hw->ops.mmap = snd_usx2y_hwdep_pcm_mmap;
+ 	hw->exclusive = 1;
+ 	sprintf(hw->name, "/dev/bus/usb/%03d/%03d/hwdeppcm", dev->bus->busnum, dev->devnum);
+ 
+@@ -718,10 +718,10 @@ int usX2Y_hwdep_pcm_new(struct snd_card *card)
+ 	if (err < 0) {
+ 		return err;
+ 	}
+-	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_usX2Y_usbpcm_ops);
+-	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_usX2Y_usbpcm_ops);
++	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_usx2y_usbpcm_ops);
++	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_usx2y_usbpcm_ops);
+ 
+-	pcm->private_data = usX2Y(card)->subs;
++	pcm->private_data = usx2y(card)->subs;
+ 	pcm->info_flags = 0;
+ 
+ 	sprintf(pcm->name, NAME_ALLCAPS" hwdep Audio");
+@@ -739,7 +739,7 @@ int usX2Y_hwdep_pcm_new(struct snd_card *card)
+ 
+ #else
+ 
+-int usX2Y_hwdep_pcm_new(struct snd_card *card)
++int usx2y_hwdep_pcm_new(struct snd_card *card)
+ {
+ 	return 0;
+ }
+diff --git a/sound/usb/usx2y/usx2yhwdeppcm.h b/sound/usb/usx2y/usx2yhwdeppcm.h
+index eb5a46466f0e6..731b1c5a34741 100644
+--- a/sound/usb/usx2y/usx2yhwdeppcm.h
++++ b/sound/usb/usx2y/usx2yhwdeppcm.h
+@@ -4,7 +4,7 @@
+ #define MAXSTRIDE 3
+ 
+ #define SSS (((MAXPACK*MAXBUFFERMS*MAXSTRIDE + 4096) / 4096) * 4096)
+-struct snd_usX2Y_hwdep_pcm_shm {
++struct snd_usx2y_hwdep_pcm_shm {
+ 	char playback[SSS];
+ 	char capture0x8[SSS];
+ 	char capture0xA[SSS];
+@@ -20,4 +20,4 @@ struct snd_usX2Y_hwdep_pcm_shm {
+ 	int capture_iso_start;
+ };
+ 
+-int usX2Y_hwdep_pcm_new(struct snd_card *card);
++int usx2y_hwdep_pcm_new(struct snd_card *card);
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/no_handler_test.c b/tools/testing/selftests/powerpc/pmu/ebb/no_handler_test.c
+index fc5bf4870d8e6..01e827c31169d 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/no_handler_test.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/no_handler_test.c
+@@ -50,8 +50,6 @@ static int no_handler_test(void)
+ 
+ 	event_close(&event);
+ 
+-	dump_ebb_state();
+-
+ 	/* The real test is that we never took an EBB at 0x0 */
+ 
+ 	return 0;
+diff --git a/tools/testing/selftests/timers/rtcpie.c b/tools/testing/selftests/timers/rtcpie.c
+index 47b5bad1b3933..4ef2184f15588 100644
+--- a/tools/testing/selftests/timers/rtcpie.c
++++ b/tools/testing/selftests/timers/rtcpie.c
+@@ -18,6 +18,8 @@
+ #include <stdlib.h>
+ #include <errno.h>
+ 
++#include "../kselftest.h"
++
+ /*
+  * This expects the new RTC class driver framework, working with
+  * clocks that will often not be clones of what the PC-AT had.
+@@ -35,8 +37,14 @@ int main(int argc, char **argv)
+ 	switch (argc) {
+ 	case 2:
+ 		rtc = argv[1];
+-		/* FALLTHROUGH */
++		break;
+ 	case 1:
++		fd = open(default_rtc, O_RDONLY);
++		if (fd == -1) {
++			printf("Default RTC %s does not exist. Test Skipped!\n", default_rtc);
++			exit(KSFT_SKIP);
++		}
++		close(fd);
+ 		break;
+ 	default:
+ 		fprintf(stderr, "usage:  rtctest [rtcdev] [d]\n");
+diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
+index 6edfcf1f3bd66..d5bebb37238c0 100644
+--- a/virt/kvm/coalesced_mmio.c
++++ b/virt/kvm/coalesced_mmio.c
+@@ -186,7 +186,6 @@ int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
+ 		    coalesced_mmio_in_range(dev, zone->addr, zone->size)) {
+ 			r = kvm_io_bus_unregister_dev(kvm,
+ 				zone->pio ? KVM_PIO_BUS : KVM_MMIO_BUS, &dev->dev);
+-			kvm_iodevice_destructor(&dev->dev);
+ 
+ 			/*
+ 			 * On failure, unregister destroys all devices on the
+@@ -196,6 +195,7 @@ int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
+ 			 */
+ 			if (r)
+ 				break;
++			kvm_iodevice_destructor(&dev->dev);
+ 		}
+ 	}
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-25 17:26 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-25 17:26 UTC (permalink / raw
  To: gentoo-commits

commit:     3adf50790158ef4c31f719ffb8e40755f22e829c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 25 17:26:28 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jul 25 17:26:28 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3adf5079

Linux patch 5.10.53

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 1052_linux-5.10.53.patch | 4919 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 4919 insertions(+)

diff --git a/1052_linux-5.10.53.patch b/1052_linux-5.10.53.patch
new file mode 100644
index 0000000..470e5b2
--- /dev/null
+++ b/1052_linux-5.10.53.patch
@@ -0,0 +1,4919 @@
+diff --git a/Makefile b/Makefile
+index 23d656936d405..aab88519cb0ed 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 52
++SUBLEVEL = 53
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -704,11 +704,12 @@ $(KCONFIG_CONFIG):
+ # This exploits the 'multi-target pattern rule' trick.
+ # The syncconfig should be executed only once to make all the targets.
+ # (Note: use the grouped target '&:' when we bump to GNU Make 4.3)
+-quiet_cmd_syncconfig = SYNC    $@
+-      cmd_syncconfig = $(MAKE) -f $(srctree)/Makefile syncconfig
+-
++#
++# Do not use $(call cmd,...) here. That would suppress prompts from syncconfig,
++# so you cannot notice that Kconfig is waiting for the user input.
+ %/config/auto.conf %/config/auto.conf.cmd %/generated/autoconf.h: $(KCONFIG_CONFIG)
+-	+$(call cmd,syncconfig)
++	$(Q)$(kecho) "  SYNC    $@"
++	$(Q)$(MAKE) -f $(srctree)/Makefile syncconfig
+ else # !may-sync-config
+ # External modules and some install targets need include/generated/autoconf.h
+ # and include/config/auto.conf but do not care if they are up-to-date.
+diff --git a/arch/arm/boot/dts/am335x-baltos.dtsi b/arch/arm/boot/dts/am335x-baltos.dtsi
+index b7f64c7ba83d3..77e23e7368541 100644
+--- a/arch/arm/boot/dts/am335x-baltos.dtsi
++++ b/arch/arm/boot/dts/am335x-baltos.dtsi
+@@ -393,10 +393,10 @@
+ 	status = "okay";
+ };
+ 
+-&gpio0 {
++&gpio0_target {
+ 	ti,no-reset-on-init;
+ };
+ 
+-&gpio3 {
++&gpio3_target {
+ 	ti,no-reset-on-init;
+ };
+diff --git a/arch/arm/boot/dts/am335x-evmsk.dts b/arch/arm/boot/dts/am335x-evmsk.dts
+index b43b94122d3c5..bf05b68274c25 100644
+--- a/arch/arm/boot/dts/am335x-evmsk.dts
++++ b/arch/arm/boot/dts/am335x-evmsk.dts
+@@ -648,7 +648,7 @@
+ 	status = "okay";
+ };
+ 
+-&gpio0 {
++&gpio0_target {
+ 	ti,no-reset-on-init;
+ };
+ 
+diff --git a/arch/arm/boot/dts/am335x-moxa-uc-2100-common.dtsi b/arch/arm/boot/dts/am335x-moxa-uc-2100-common.dtsi
+index 4e90f9c23d2e5..8121a199607cc 100644
+--- a/arch/arm/boot/dts/am335x-moxa-uc-2100-common.dtsi
++++ b/arch/arm/boot/dts/am335x-moxa-uc-2100-common.dtsi
+@@ -150,7 +150,7 @@
+ 	status = "okay";
+ };
+ 
+-&gpio0 {
++&gpio0_target {
+ 	ti,no-reset-on-init;
+ };
+ 
+diff --git a/arch/arm/boot/dts/am335x-moxa-uc-8100-common.dtsi b/arch/arm/boot/dts/am335x-moxa-uc-8100-common.dtsi
+index 98d8ed4ad9677..39e5d2ce600a1 100644
+--- a/arch/arm/boot/dts/am335x-moxa-uc-8100-common.dtsi
++++ b/arch/arm/boot/dts/am335x-moxa-uc-8100-common.dtsi
+@@ -353,7 +353,7 @@
+ 	status = "okay";
+ };
+ 
+-&gpio0 {
++&gpio0_target {
+ 	ti,no-reset-on-init;
+ };
+ 
+diff --git a/arch/arm/boot/dts/am33xx-l4.dtsi b/arch/arm/boot/dts/am33xx-l4.dtsi
+index ea20e4bdf040e..29fafb67cfaad 100644
+--- a/arch/arm/boot/dts/am33xx-l4.dtsi
++++ b/arch/arm/boot/dts/am33xx-l4.dtsi
+@@ -1723,7 +1723,7 @@
+ 			};
+ 		};
+ 
+-		target-module@ae000 {			/* 0x481ae000, ap 56 3a.0 */
++		gpio3_target: target-module@ae000 {		/* 0x481ae000, ap 56 3a.0 */
+ 			compatible = "ti,sysc-omap2", "ti,sysc";
+ 			reg = <0xae000 0x4>,
+ 			      <0xae010 0x4>,
+diff --git a/arch/arm/boot/dts/am437x-gp-evm.dts b/arch/arm/boot/dts/am437x-gp-evm.dts
+index 6e4d05d649e98..033b984ff637d 100644
+--- a/arch/arm/boot/dts/am437x-gp-evm.dts
++++ b/arch/arm/boot/dts/am437x-gp-evm.dts
+@@ -813,11 +813,14 @@
+ 	status = "okay";
+ };
+ 
++&gpio5_target {
++	ti,no-reset-on-init;
++};
++
+ &gpio5 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&display_mux_pins>;
+ 	status = "okay";
+-	ti,no-reset-on-init;
+ 
+ 	p8 {
+ 		/*
+diff --git a/arch/arm/boot/dts/am437x-l4.dtsi b/arch/arm/boot/dts/am437x-l4.dtsi
+index 243e35f7a56c7..370c4e64676f6 100644
+--- a/arch/arm/boot/dts/am437x-l4.dtsi
++++ b/arch/arm/boot/dts/am437x-l4.dtsi
+@@ -2033,7 +2033,7 @@
+ 			};
+ 		};
+ 
+-		target-module@22000 {			/* 0x48322000, ap 116 64.0 */
++		gpio5_target: target-module@22000 {		/* 0x48322000, ap 116 64.0 */
+ 			compatible = "ti,sysc-omap2", "ti,sysc";
+ 			reg = <0x22000 0x4>,
+ 			      <0x22010 0x4>,
+diff --git a/arch/arm/boot/dts/am57xx-cl-som-am57x.dts b/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
+index 0d5fe2bfb683a..aed81568a297d 100644
+--- a/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
++++ b/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
+@@ -454,20 +454,20 @@
+ 
+ &mailbox5 {
+ 	status = "okay";
+-	mbox_ipu1_ipc3x: mbox_ipu1_ipc3x {
++	mbox_ipu1_ipc3x: mbox-ipu1-ipc3x {
+ 		status = "okay";
+ 	};
+-	mbox_dsp1_ipc3x: mbox_dsp1_ipc3x {
++	mbox_dsp1_ipc3x: mbox-dsp1-ipc3x {
+ 		status = "okay";
+ 	};
+ };
+ 
+ &mailbox6 {
+ 	status = "okay";
+-	mbox_ipu2_ipc3x: mbox_ipu2_ipc3x {
++	mbox_ipu2_ipc3x: mbox-ipu2-ipc3x {
+ 		status = "okay";
+ 	};
+-	mbox_dsp2_ipc3x: mbox_dsp2_ipc3x {
++	mbox_dsp2_ipc3x: mbox-dsp2-ipc3x {
+ 		status = "okay";
+ 	};
+ };
+@@ -610,12 +610,11 @@
+ 	>;
+ };
+ 
+-&gpio3 {
+-	status = "okay";
++&gpio3_target {
+ 	ti,no-reset-on-init;
+ };
+ 
+-&gpio2 {
++&gpio2_target {
+ 	status = "okay";
+ 	ti,no-reset-on-init;
+ };
+diff --git a/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts b/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
+index c76b0046b4028..5a4e137fec210 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
+@@ -156,10 +156,7 @@
+ 	/*W0-W7*/	"","","","","","","","",
+ 	/*X0-X7*/	"","","","","","","","",
+ 	/*Y0-Y7*/	"","","","","","","","",
+-	/*Z0-Z7*/	"","","","","","","","",
+-	/*AA0-AA7*/	"","","","","","","","",
+-	/*AB0-AB7*/	"","","","","","","","",
+-	/*AC0-AC7*/	"","","","","","","","";
++	/*Z0-Z7*/	"","","","","","","","";
+ 
+ 	pin_mclr_vpp {
+ 		gpio-hog;
+diff --git a/arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts b/arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts
+index e86c22ce6d123..0e8851cec979e 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts
+@@ -127,10 +127,7 @@
+ 	/*W0-W7*/	"","","","","","","","",
+ 	/*X0-X7*/	"","","","","","","","",
+ 	/*Y0-Y7*/	"","","","","","","","",
+-	/*Z0-Z7*/	"","","","","","","","",
+-	/*AA0-AA7*/	"","","","","","","","",
+-	/*AB0-AB7*/	"","","","","","","","",
+-	/*AC0-AC7*/	"","","","","","","","";
++	/*Z0-Z7*/	"","","","","","","","";
+ };
+ 
+ &fmc {
+@@ -180,6 +177,7 @@
+ 
+ &emmc {
+ 	status = "okay";
++	clk-phase-mmc-hs200 = <36>, <270>;
+ };
+ 
+ &fsim0 {
+diff --git a/arch/arm/boot/dts/bcm-cygnus.dtsi b/arch/arm/boot/dts/bcm-cygnus.dtsi
+index dacaef2c14caa..ea3dc128b764b 100644
+--- a/arch/arm/boot/dts/bcm-cygnus.dtsi
++++ b/arch/arm/boot/dts/bcm-cygnus.dtsi
+@@ -460,7 +460,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		nand: nand@18046000 {
++		nand_controller: nand-controller@18046000 {
+ 			compatible = "brcm,nand-iproc", "brcm,brcmnand-v6.1";
+ 			reg = <0x18046000 0x600>, <0xf8105408 0x600>,
+ 			      <0x18046f00 0x20>;
+diff --git a/arch/arm/boot/dts/bcm-hr2.dtsi b/arch/arm/boot/dts/bcm-hr2.dtsi
+index e8df458aad392..84cda16f68a23 100644
+--- a/arch/arm/boot/dts/bcm-hr2.dtsi
++++ b/arch/arm/boot/dts/bcm-hr2.dtsi
+@@ -179,7 +179,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		nand: nand@26000 {
++		nand_controller: nand-controller@26000 {
+ 			compatible = "brcm,nand-iproc", "brcm,brcmnand-v6.1";
+ 			reg = <0x26000 0x600>,
+ 			      <0x11b408 0x600>,
+diff --git a/arch/arm/boot/dts/bcm-nsp.dtsi b/arch/arm/boot/dts/bcm-nsp.dtsi
+index e895f7cb8c9f2..605b6d2f4a569 100644
+--- a/arch/arm/boot/dts/bcm-nsp.dtsi
++++ b/arch/arm/boot/dts/bcm-nsp.dtsi
+@@ -269,7 +269,7 @@
+ 			dma-coherent;
+ 		};
+ 
+-		nand: nand@26000 {
++		nand_controller: nand-controller@26000 {
+ 			compatible = "brcm,nand-iproc", "brcm,brcmnand-v6.1";
+ 			reg = <0x026000 0x600>,
+ 			      <0x11b408 0x600>,
+diff --git a/arch/arm/boot/dts/bcm2711-rpi-4-b.dts b/arch/arm/boot/dts/bcm2711-rpi-4-b.dts
+index 09a1182c29363..5395e8c2484e0 100644
+--- a/arch/arm/boot/dts/bcm2711-rpi-4-b.dts
++++ b/arch/arm/boot/dts/bcm2711-rpi-4-b.dts
+@@ -28,11 +28,11 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 42 GPIO_ACTIVE_HIGH>;
+ 		};
+ 
+-		pwr {
++		led-pwr {
+ 			label = "PWR";
+ 			gpios = <&expgpio 2 GPIO_ACTIVE_LOW>;
+ 			default-state = "keep";
+diff --git a/arch/arm/boot/dts/bcm2711.dtsi b/arch/arm/boot/dts/bcm2711.dtsi
+index 4847dd305317a..3d040f6e2a20f 100644
+--- a/arch/arm/boot/dts/bcm2711.dtsi
++++ b/arch/arm/boot/dts/bcm2711.dtsi
+@@ -395,7 +395,7 @@
+ 		ranges = <0x0 0x7e000000  0x0 0xfe000000  0x01800000>;
+ 		dma-ranges = <0x0 0xc0000000  0x0 0x00000000  0x40000000>;
+ 
+-		emmc2: emmc2@7e340000 {
++		emmc2: mmc@7e340000 {
+ 			compatible = "brcm,bcm2711-emmc2";
+ 			reg = <0x0 0x7e340000 0x100>;
+ 			interrupts = <GIC_SPI 126 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts b/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts
+index 6c8ce39833bf6..40b9405f1a8e4 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts
+@@ -14,11 +14,11 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 47 GPIO_ACTIVE_HIGH>;
+ 		};
+ 
+-		pwr {
++		led-pwr {
+ 			label = "PWR";
+ 			gpios = <&gpio 35 GPIO_ACTIVE_HIGH>;
+ 			default-state = "keep";
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-a.dts b/arch/arm/boot/dts/bcm2835-rpi-a.dts
+index 17fdd48346ffb..11edb581dbaf0 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-a.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-a.dts
+@@ -14,7 +14,7 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 16 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts b/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts
+index b0355c229cdc2..1b435c64bd9c1 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts
+@@ -15,11 +15,11 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 47 GPIO_ACTIVE_HIGH>;
+ 		};
+ 
+-		pwr {
++		led-pwr {
+ 			label = "PWR";
+ 			gpios = <&gpio 35 GPIO_ACTIVE_HIGH>;
+ 			default-state = "keep";
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts b/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
+index 33b3b5c025219..a23c25c00eea7 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
+@@ -15,7 +15,7 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 16 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-b.dts b/arch/arm/boot/dts/bcm2835-rpi-b.dts
+index 2b69957e0113e..1b63d6b19750b 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-b.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-b.dts
+@@ -15,7 +15,7 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 16 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-cm1.dtsi b/arch/arm/boot/dts/bcm2835-rpi-cm1.dtsi
+index 58059c2ce1294..e4e6b6abbfc13 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-cm1.dtsi
++++ b/arch/arm/boot/dts/bcm2835-rpi-cm1.dtsi
+@@ -5,7 +5,7 @@
+ 
+ / {
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 47 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
+index f65448c01e317..33b2b77aa47db 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
+@@ -23,7 +23,7 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 47 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-zero.dts b/arch/arm/boot/dts/bcm2835-rpi-zero.dts
+index 6dd93c6f49666..6f9b3a908f287 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-zero.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-zero.dts
+@@ -18,7 +18,7 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 47 GPIO_ACTIVE_HIGH>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-rpi.dtsi b/arch/arm/boot/dts/bcm2835-rpi.dtsi
+index d94357b21f7e9..87ddcad760834 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi.dtsi
++++ b/arch/arm/boot/dts/bcm2835-rpi.dtsi
+@@ -4,7 +4,7 @@
+ 	leds {
+ 		compatible = "gpio-leds";
+ 
+-		act {
++		led-act {
+ 			label = "ACT";
+ 			default-state = "keep";
+ 			linux,default-trigger = "heartbeat";
+diff --git a/arch/arm/boot/dts/bcm2836-rpi-2-b.dts b/arch/arm/boot/dts/bcm2836-rpi-2-b.dts
+index 0455a680394a2..d8af8eeac7b6f 100644
+--- a/arch/arm/boot/dts/bcm2836-rpi-2-b.dts
++++ b/arch/arm/boot/dts/bcm2836-rpi-2-b.dts
+@@ -15,11 +15,11 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 47 GPIO_ACTIVE_HIGH>;
+ 		};
+ 
+-		pwr {
++		led-pwr {
+ 			label = "PWR";
+ 			gpios = <&gpio 35 GPIO_ACTIVE_HIGH>;
+ 			default-state = "keep";
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-3-a-plus.dts b/arch/arm/boot/dts/bcm2837-rpi-3-a-plus.dts
+index 28be0332c1c81..77099a7871b03 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-3-a-plus.dts
++++ b/arch/arm/boot/dts/bcm2837-rpi-3-a-plus.dts
+@@ -19,11 +19,11 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 29 GPIO_ACTIVE_HIGH>;
+ 		};
+ 
+-		pwr {
++		led-pwr {
+ 			label = "PWR";
+ 			gpios = <&expgpio 2 GPIO_ACTIVE_LOW>;
+ 			default-state = "keep";
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
+index 37343148643db..61010266ca9a3 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
++++ b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
+@@ -20,11 +20,11 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 29 GPIO_ACTIVE_HIGH>;
+ 		};
+ 
+-		pwr {
++		led-pwr {
+ 			label = "PWR";
+ 			gpios = <&expgpio 2 GPIO_ACTIVE_LOW>;
+ 			default-state = "keep";
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-3-b.dts b/arch/arm/boot/dts/bcm2837-rpi-3-b.dts
+index 054ecaa355c9a..dd4a486040971 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-3-b.dts
++++ b/arch/arm/boot/dts/bcm2837-rpi-3-b.dts
+@@ -20,7 +20,7 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&expgpio 2 GPIO_ACTIVE_HIGH>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi b/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi
+index 925cb37c22f06..828a20561b969 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi
++++ b/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi
+@@ -14,7 +14,7 @@
+ 		 * Since there is no upstream GPIO driver yet,
+ 		 * remove the incomplete node.
+ 		 */
+-		/delete-node/ act;
++		/delete-node/ led-act;
+ 	};
+ 
+ 	reg_3v3: fixed-regulator {
+diff --git a/arch/arm/boot/dts/bcm283x.dtsi b/arch/arm/boot/dts/bcm283x.dtsi
+index b83a864e2e8ba..0f3be55201a5b 100644
+--- a/arch/arm/boot/dts/bcm283x.dtsi
++++ b/arch/arm/boot/dts/bcm283x.dtsi
+@@ -420,7 +420,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		sdhci: sdhci@7e300000 {
++		sdhci: mmc@7e300000 {
+ 			compatible = "brcm,bcm2835-sdhci";
+ 			reg = <0x7e300000 0x100>;
+ 			interrupts = <2 30>;
+diff --git a/arch/arm/boot/dts/bcm63138.dtsi b/arch/arm/boot/dts/bcm63138.dtsi
+index 9c0325cf9e22e..cca49a2e2d623 100644
+--- a/arch/arm/boot/dts/bcm63138.dtsi
++++ b/arch/arm/boot/dts/bcm63138.dtsi
+@@ -203,7 +203,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		nand: nand@2000 {
++		nand_controller: nand-controller@2000 {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			compatible = "brcm,nand-bcm63138", "brcm,brcmnand-v7.0", "brcm,brcmnand";
+diff --git a/arch/arm/boot/dts/bcm7445-bcm97445svmb.dts b/arch/arm/boot/dts/bcm7445-bcm97445svmb.dts
+index 8313b7cad5427..f92d2cf859726 100644
+--- a/arch/arm/boot/dts/bcm7445-bcm97445svmb.dts
++++ b/arch/arm/boot/dts/bcm7445-bcm97445svmb.dts
+@@ -14,10 +14,10 @@
+ 	};
+ };
+ 
+-&nand {
++&nand_controller {
+ 	status = "okay";
+ 
+-	nandcs@1 {
++	nand@1 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <1>;
+ 		nand-ecc-step-size = <512>;
+diff --git a/arch/arm/boot/dts/bcm7445.dtsi b/arch/arm/boot/dts/bcm7445.dtsi
+index 58f67c9b830b8..5ac2042515b8f 100644
+--- a/arch/arm/boot/dts/bcm7445.dtsi
++++ b/arch/arm/boot/dts/bcm7445.dtsi
+@@ -148,7 +148,7 @@
+ 			reg-names = "aon-ctrl", "aon-sram";
+ 		};
+ 
+-		nand: nand@3e2800 {
++		nand_controller: nand-controller@3e2800 {
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm/boot/dts/bcm911360_entphn.dts b/arch/arm/boot/dts/bcm911360_entphn.dts
+index b2d323f4a5aba..a76c74b44bbaf 100644
+--- a/arch/arm/boot/dts/bcm911360_entphn.dts
++++ b/arch/arm/boot/dts/bcm911360_entphn.dts
+@@ -82,8 +82,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@1 {
++&nand_controller {
++	nand@1 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958300k.dts b/arch/arm/boot/dts/bcm958300k.dts
+index b4a1392bd5a6c..dda3e11b711f6 100644
+--- a/arch/arm/boot/dts/bcm958300k.dts
++++ b/arch/arm/boot/dts/bcm958300k.dts
+@@ -60,8 +60,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@1 {
++&nand_controller {
++	nand@1 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958305k.dts b/arch/arm/boot/dts/bcm958305k.dts
+index 3378683321d3c..ea3c6b88b313b 100644
+--- a/arch/arm/boot/dts/bcm958305k.dts
++++ b/arch/arm/boot/dts/bcm958305k.dts
+@@ -68,8 +68,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@1 {
++&nand_controller {
++	nand@1 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958522er.dts b/arch/arm/boot/dts/bcm958522er.dts
+index 7be4c4e628e02..5dd1001255c64 100644
+--- a/arch/arm/boot/dts/bcm958522er.dts
++++ b/arch/arm/boot/dts/bcm958522er.dts
+@@ -74,8 +74,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958525er.dts b/arch/arm/boot/dts/bcm958525er.dts
+index e58ed7e953460..0e624c57b5854 100644
+--- a/arch/arm/boot/dts/bcm958525er.dts
++++ b/arch/arm/boot/dts/bcm958525er.dts
+@@ -74,8 +74,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958525xmc.dts b/arch/arm/boot/dts/bcm958525xmc.dts
+index 21f922dc60196..2ac6d61ad297e 100644
+--- a/arch/arm/boot/dts/bcm958525xmc.dts
++++ b/arch/arm/boot/dts/bcm958525xmc.dts
+@@ -90,8 +90,8 @@
+ 	};
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958622hr.dts b/arch/arm/boot/dts/bcm958622hr.dts
+index a49c2fd21f4a8..e072fa83537a1 100644
+--- a/arch/arm/boot/dts/bcm958622hr.dts
++++ b/arch/arm/boot/dts/bcm958622hr.dts
+@@ -78,8 +78,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958623hr.dts b/arch/arm/boot/dts/bcm958623hr.dts
+index dd6dff6452b87..835f4765efbc1 100644
+--- a/arch/arm/boot/dts/bcm958623hr.dts
++++ b/arch/arm/boot/dts/bcm958623hr.dts
+@@ -78,8 +78,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958625hr.dts b/arch/arm/boot/dts/bcm958625hr.dts
+index a71371b4065ed..57974b8fab86f 100644
+--- a/arch/arm/boot/dts/bcm958625hr.dts
++++ b/arch/arm/boot/dts/bcm958625hr.dts
+@@ -89,8 +89,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958625k.dts b/arch/arm/boot/dts/bcm958625k.dts
+index 7782b61c51a16..45a043ea77ce5 100644
+--- a/arch/arm/boot/dts/bcm958625k.dts
++++ b/arch/arm/boot/dts/bcm958625k.dts
+@@ -68,8 +68,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm963138dvt.dts b/arch/arm/boot/dts/bcm963138dvt.dts
+index 5b177274f1826..df5c8ab906273 100644
+--- a/arch/arm/boot/dts/bcm963138dvt.dts
++++ b/arch/arm/boot/dts/bcm963138dvt.dts
+@@ -31,10 +31,10 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
++&nand_controller {
+ 	status = "okay";
+ 
+-	nandcs@0 {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-ecc-strength = <4>;
+diff --git a/arch/arm/boot/dts/bcm988312hr.dts b/arch/arm/boot/dts/bcm988312hr.dts
+index edd0f630e0251..16b212cc8a2a0 100644
+--- a/arch/arm/boot/dts/bcm988312hr.dts
++++ b/arch/arm/boot/dts/bcm988312hr.dts
+@@ -74,8 +74,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/dm816x.dtsi b/arch/arm/boot/dts/dm816x.dtsi
+index 3551a64963f8d..1825d912b8ab4 100644
+--- a/arch/arm/boot/dts/dm816x.dtsi
++++ b/arch/arm/boot/dts/dm816x.dtsi
+@@ -351,7 +351,7 @@
+ 			#mbox-cells = <1>;
+ 			ti,mbox-num-users = <4>;
+ 			ti,mbox-num-fifos = <12>;
+-			mbox_dsp: mbox_dsp {
++			mbox_dsp: mbox-dsp {
+ 				ti,mbox-tx = <3 0 0>;
+ 				ti,mbox-rx = <0 0 0>;
+ 			};
+diff --git a/arch/arm/boot/dts/dra7-ipu-dsp-common.dtsi b/arch/arm/boot/dts/dra7-ipu-dsp-common.dtsi
+index a25749a1c3659..a5bdc6431d8d0 100644
+--- a/arch/arm/boot/dts/dra7-ipu-dsp-common.dtsi
++++ b/arch/arm/boot/dts/dra7-ipu-dsp-common.dtsi
+@@ -5,17 +5,17 @@
+ 
+ &mailbox5 {
+ 	status = "okay";
+-	mbox_ipu1_ipc3x: mbox_ipu1_ipc3x {
++	mbox_ipu1_ipc3x: mbox-ipu1-ipc3x {
+ 		status = "okay";
+ 	};
+-	mbox_dsp1_ipc3x: mbox_dsp1_ipc3x {
++	mbox_dsp1_ipc3x: mbox-dsp1-ipc3x {
+ 		status = "okay";
+ 	};
+ };
+ 
+ &mailbox6 {
+ 	status = "okay";
+-	mbox_ipu2_ipc3x: mbox_ipu2_ipc3x {
++	mbox_ipu2_ipc3x: mbox-ipu2-ipc3x {
+ 		status = "okay";
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
+index 1dafce92fc767..30b72f4318501 100644
+--- a/arch/arm/boot/dts/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/dra7-l4.dtsi
+@@ -1315,7 +1315,7 @@
+ 			};
+ 		};
+ 
+-		target-module@55000 {			/* 0x48055000, ap 13 0e.0 */
++		gpio2_target: target-module@55000 {		/* 0x48055000, ap 13 0e.0 */
+ 			compatible = "ti,sysc-omap2", "ti,sysc";
+ 			reg = <0x55000 0x4>,
+ 			      <0x55010 0x4>,
+@@ -1348,7 +1348,7 @@
+ 			};
+ 		};
+ 
+-		target-module@57000 {			/* 0x48057000, ap 15 06.0 */
++		gpio3_target: target-module@57000 {		/* 0x48057000, ap 15 06.0 */
+ 			compatible = "ti,sysc-omap2", "ti,sysc";
+ 			reg = <0x57000 0x4>,
+ 			      <0x57010 0x4>,
+diff --git a/arch/arm/boot/dts/dra72x.dtsi b/arch/arm/boot/dts/dra72x.dtsi
+index f3e934ef7d3e2..90617261373cf 100644
+--- a/arch/arm/boot/dts/dra72x.dtsi
++++ b/arch/arm/boot/dts/dra72x.dtsi
+@@ -77,12 +77,12 @@
+ };
+ 
+ &mailbox5 {
+-	mbox_ipu1_ipc3x: mbox_ipu1_ipc3x {
++	mbox_ipu1_ipc3x: mbox-ipu1-ipc3x {
+ 		ti,mbox-tx = <6 2 2>;
+ 		ti,mbox-rx = <4 2 2>;
+ 		status = "disabled";
+ 	};
+-	mbox_dsp1_ipc3x: mbox_dsp1_ipc3x {
++	mbox_dsp1_ipc3x: mbox-dsp1-ipc3x {
+ 		ti,mbox-tx = <5 2 2>;
+ 		ti,mbox-rx = <1 2 2>;
+ 		status = "disabled";
+@@ -90,7 +90,7 @@
+ };
+ 
+ &mailbox6 {
+-	mbox_ipu2_ipc3x: mbox_ipu2_ipc3x {
++	mbox_ipu2_ipc3x: mbox-ipu2-ipc3x {
+ 		ti,mbox-tx = <6 2 2>;
+ 		ti,mbox-rx = <4 2 2>;
+ 		status = "disabled";
+diff --git a/arch/arm/boot/dts/dra74-ipu-dsp-common.dtsi b/arch/arm/boot/dts/dra74-ipu-dsp-common.dtsi
+index b1147a4b77f9d..3256631510c56 100644
+--- a/arch/arm/boot/dts/dra74-ipu-dsp-common.dtsi
++++ b/arch/arm/boot/dts/dra74-ipu-dsp-common.dtsi
+@@ -6,7 +6,7 @@
+ #include "dra7-ipu-dsp-common.dtsi"
+ 
+ &mailbox6 {
+-	mbox_dsp2_ipc3x: mbox_dsp2_ipc3x {
++	mbox_dsp2_ipc3x: mbox-dsp2-ipc3x {
+ 		status = "okay";
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/dra74x.dtsi b/arch/arm/boot/dts/dra74x.dtsi
+index b4e07d99ffde1..cfb39dde49300 100644
+--- a/arch/arm/boot/dts/dra74x.dtsi
++++ b/arch/arm/boot/dts/dra74x.dtsi
+@@ -145,12 +145,12 @@
+ };
+ 
+ &mailbox5 {
+-	mbox_ipu1_ipc3x: mbox_ipu1_ipc3x {
++	mbox_ipu1_ipc3x: mbox-ipu1-ipc3x {
+ 		ti,mbox-tx = <6 2 2>;
+ 		ti,mbox-rx = <4 2 2>;
+ 		status = "disabled";
+ 	};
+-	mbox_dsp1_ipc3x: mbox_dsp1_ipc3x {
++	mbox_dsp1_ipc3x: mbox-dsp1-ipc3x {
+ 		ti,mbox-tx = <5 2 2>;
+ 		ti,mbox-rx = <1 2 2>;
+ 		status = "disabled";
+@@ -158,12 +158,12 @@
+ };
+ 
+ &mailbox6 {
+-	mbox_ipu2_ipc3x: mbox_ipu2_ipc3x {
++	mbox_ipu2_ipc3x: mbox-ipu2-ipc3x {
+ 		ti,mbox-tx = <6 2 2>;
+ 		ti,mbox-rx = <4 2 2>;
+ 		status = "disabled";
+ 	};
+-	mbox_dsp2_ipc3x: mbox_dsp2_ipc3x {
++	mbox_dsp2_ipc3x: mbox-dsp2-ipc3x {
+ 		ti,mbox-tx = <5 2 2>;
+ 		ti,mbox-rx = <1 2 2>;
+ 		status = "disabled";
+diff --git a/arch/arm/boot/dts/gemini-dlink-dns-313.dts b/arch/arm/boot/dts/gemini-dlink-dns-313.dts
+index c6f3d90e3e90c..b8acc6eaaa6d7 100644
+--- a/arch/arm/boot/dts/gemini-dlink-dns-313.dts
++++ b/arch/arm/boot/dts/gemini-dlink-dns-313.dts
+@@ -140,7 +140,7 @@
+ 		};
+ 	};
+ 
+-	mdio0: ethernet-phy {
++	mdio0: mdio {
+ 		compatible = "virtual,mdio-gpio";
+ 		/* Uses MDC and MDIO */
+ 		gpios = <&gpio0 22 GPIO_ACTIVE_HIGH>, /* MDC */
+diff --git a/arch/arm/boot/dts/gemini-nas4220b.dts b/arch/arm/boot/dts/gemini-nas4220b.dts
+index 43c45f7e1e0a3..13112a8a5dd88 100644
+--- a/arch/arm/boot/dts/gemini-nas4220b.dts
++++ b/arch/arm/boot/dts/gemini-nas4220b.dts
+@@ -62,7 +62,7 @@
+ 		};
+ 	};
+ 
+-	mdio0: ethernet-phy {
++	mdio0: mdio {
+ 		compatible = "virtual,mdio-gpio";
+ 		gpios = <&gpio0 22 GPIO_ACTIVE_HIGH>, /* MDC */
+ 			<&gpio0 21 GPIO_ACTIVE_HIGH>; /* MDIO */
+diff --git a/arch/arm/boot/dts/gemini-rut1xx.dts b/arch/arm/boot/dts/gemini-rut1xx.dts
+index 08091d2a64e15..0ebda4efd9d0f 100644
+--- a/arch/arm/boot/dts/gemini-rut1xx.dts
++++ b/arch/arm/boot/dts/gemini-rut1xx.dts
+@@ -56,7 +56,7 @@
+ 		};
+ 	};
+ 
+-	mdio0: ethernet-phy {
++	mdio0: mdio {
+ 		compatible = "virtual,mdio-gpio";
+ 		gpios = <&gpio0 22 GPIO_ACTIVE_HIGH>, /* MDC */
+ 			<&gpio0 21 GPIO_ACTIVE_HIGH>; /* MDIO */
+diff --git a/arch/arm/boot/dts/gemini-wbd111.dts b/arch/arm/boot/dts/gemini-wbd111.dts
+index 3a2761dd460f9..5602ba8f30f2f 100644
+--- a/arch/arm/boot/dts/gemini-wbd111.dts
++++ b/arch/arm/boot/dts/gemini-wbd111.dts
+@@ -68,7 +68,7 @@
+ 		};
+ 	};
+ 
+-	mdio0: ethernet-phy {
++	mdio0: mdio {
+ 		compatible = "virtual,mdio-gpio";
+ 		gpios = <&gpio0 22 GPIO_ACTIVE_HIGH>, /* MDC */
+ 			<&gpio0 21 GPIO_ACTIVE_HIGH>; /* MDIO */
+diff --git a/arch/arm/boot/dts/gemini-wbd222.dts b/arch/arm/boot/dts/gemini-wbd222.dts
+index 52b4dbc0c0723..a4a260c36d752 100644
+--- a/arch/arm/boot/dts/gemini-wbd222.dts
++++ b/arch/arm/boot/dts/gemini-wbd222.dts
+@@ -67,7 +67,7 @@
+ 		};
+ 	};
+ 
+-	mdio0: ethernet-phy {
++	mdio0: mdio {
+ 		compatible = "virtual,mdio-gpio";
+ 		gpios = <&gpio0 22 GPIO_ACTIVE_HIGH>, /* MDC */
+ 			<&gpio0 21 GPIO_ACTIVE_HIGH>; /* MDIO */
+diff --git a/arch/arm/boot/dts/gemini.dtsi b/arch/arm/boot/dts/gemini.dtsi
+index 065ed10a79fa7..07448c03dac9e 100644
+--- a/arch/arm/boot/dts/gemini.dtsi
++++ b/arch/arm/boot/dts/gemini.dtsi
+@@ -286,6 +286,7 @@
+ 			clock-names = "PCLK", "PCICLK";
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pci_default_pins>;
++			device_type = "pci";
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 			#interrupt-cells = <1>;
+diff --git a/arch/arm/boot/dts/imx6dl-riotboard.dts b/arch/arm/boot/dts/imx6dl-riotboard.dts
+index 065d3ab0f50a7..e7d9bfbfd0e4d 100644
+--- a/arch/arm/boot/dts/imx6dl-riotboard.dts
++++ b/arch/arm/boot/dts/imx6dl-riotboard.dts
+@@ -106,6 +106,8 @@
+ 			reset-gpios = <&gpio3 31 GPIO_ACTIVE_LOW>;
+ 			reset-assert-us = <10000>;
+ 			reset-deassert-us = <1000>;
++			qca,smarteee-tw-us-1g = <24>;
++			qca,clk-out-frequency = <125000000>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+index 5f84e9f2b5767..33a70f436aa3b 100644
+--- a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+@@ -315,8 +315,8 @@
+ 			fsl,pins = <
+ 				MX6QDL_PAD_EIM_D24__UART3_TX_DATA	0x1b0b1
+ 				MX6QDL_PAD_EIM_D25__UART3_RX_DATA	0x1b0b1
+-				MX6QDL_PAD_EIM_D30__UART3_RTS_B		0x1b0b1
+-				MX6QDL_PAD_EIM_D31__UART3_CTS_B		0x1b0b1
++				MX6QDL_PAD_EIM_D31__UART3_RTS_B		0x1b0b1
++				MX6QDL_PAD_EIM_D30__UART3_CTS_B		0x1b0b1
+ 			>;
+ 		};
+ 
+@@ -403,6 +403,7 @@
+ &uart3 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_uart3>;
++	uart-has-rtscts;
+ 	status = "disabled";
+ };
+ 
+diff --git a/arch/arm/boot/dts/omap4-l4.dtsi b/arch/arm/boot/dts/omap4-l4.dtsi
+index de742bf84efb0..5015df4d876c2 100644
+--- a/arch/arm/boot/dts/omap4-l4.dtsi
++++ b/arch/arm/boot/dts/omap4-l4.dtsi
+@@ -597,11 +597,11 @@
+ 				#mbox-cells = <1>;
+ 				ti,mbox-num-users = <3>;
+ 				ti,mbox-num-fifos = <8>;
+-				mbox_ipu: mbox_ipu {
++				mbox_ipu: mbox-ipu {
+ 					ti,mbox-tx = <0 0 0>;
+ 					ti,mbox-rx = <1 0 0>;
+ 				};
+-				mbox_dsp: mbox_dsp {
++				mbox_dsp: mbox-dsp {
+ 					ti,mbox-tx = <3 0 0>;
+ 					ti,mbox-rx = <2 0 0>;
+ 				};
+diff --git a/arch/arm/boot/dts/omap5-l4.dtsi b/arch/arm/boot/dts/omap5-l4.dtsi
+index f3d3a16b7c64e..c67c8698cc30d 100644
+--- a/arch/arm/boot/dts/omap5-l4.dtsi
++++ b/arch/arm/boot/dts/omap5-l4.dtsi
+@@ -613,11 +613,11 @@
+ 				#mbox-cells = <1>;
+ 				ti,mbox-num-users = <3>;
+ 				ti,mbox-num-fifos = <8>;
+-				mbox_ipu: mbox_ipu {
++				mbox_ipu: mbox-ipu {
+ 					ti,mbox-tx = <0 0 0>;
+ 					ti,mbox-rx = <1 0 0>;
+ 				};
+-				mbox_dsp: mbox_dsp {
++				mbox_dsp: mbox-dsp {
+ 					ti,mbox-tx = <3 0 0>;
+ 					ti,mbox-rx = <2 0 0>;
+ 				};
+diff --git a/arch/arm/boot/dts/rk3036-kylin.dts b/arch/arm/boot/dts/rk3036-kylin.dts
+index 7154b827ea2f0..e817eba8c622b 100644
+--- a/arch/arm/boot/dts/rk3036-kylin.dts
++++ b/arch/arm/boot/dts/rk3036-kylin.dts
+@@ -390,7 +390,7 @@
+ 		};
+ 	};
+ 
+-	sleep {
++	suspend {
+ 		global_pwroff: global-pwroff {
+ 			rockchip,pins = <2 RK_PA7 1 &pcfg_pull_none>;
+ 		};
+diff --git a/arch/arm/boot/dts/rk3066a.dtsi b/arch/arm/boot/dts/rk3066a.dtsi
+index 252750c97f97f..bbc3bff508560 100644
+--- a/arch/arm/boot/dts/rk3066a.dtsi
++++ b/arch/arm/boot/dts/rk3066a.dtsi
+@@ -755,7 +755,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		pd_vio@RK3066_PD_VIO {
++		power-domain@RK3066_PD_VIO {
+ 			reg = <RK3066_PD_VIO>;
+ 			clocks = <&cru ACLK_LCDC0>,
+ 				 <&cru ACLK_LCDC1>,
+@@ -782,7 +782,7 @@
+ 				 <&qos_rga>;
+ 		};
+ 
+-		pd_video@RK3066_PD_VIDEO {
++		power-domain@RK3066_PD_VIDEO {
+ 			reg = <RK3066_PD_VIDEO>;
+ 			clocks = <&cru ACLK_VDPU>,
+ 				 <&cru ACLK_VEPU>,
+@@ -791,7 +791,7 @@
+ 			pm_qos = <&qos_vpu>;
+ 		};
+ 
+-		pd_gpu@RK3066_PD_GPU {
++		power-domain@RK3066_PD_GPU {
+ 			reg = <RK3066_PD_GPU>;
+ 			clocks = <&cru ACLK_GPU>;
+ 			pm_qos = <&qos_gpu>;
+diff --git a/arch/arm/boot/dts/rk3188.dtsi b/arch/arm/boot/dts/rk3188.dtsi
+index 2298a8d840ba3..b6bde9d12c2be 100644
+--- a/arch/arm/boot/dts/rk3188.dtsi
++++ b/arch/arm/boot/dts/rk3188.dtsi
+@@ -150,16 +150,16 @@
+ 		compatible = "rockchip,rk3188-timer", "rockchip,rk3288-timer";
+ 		reg = <0x2000e000 0x20>;
+ 		interrupts = <GIC_SPI 46 IRQ_TYPE_LEVEL_HIGH>;
+-		clocks = <&cru SCLK_TIMER3>, <&cru PCLK_TIMER3>;
+-		clock-names = "timer", "pclk";
++		clocks = <&cru PCLK_TIMER3>, <&cru SCLK_TIMER3>;
++		clock-names = "pclk", "timer";
+ 	};
+ 
+ 	timer6: timer@200380a0 {
+ 		compatible = "rockchip,rk3188-timer", "rockchip,rk3288-timer";
+ 		reg = <0x200380a0 0x20>;
+ 		interrupts = <GIC_SPI 64 IRQ_TYPE_LEVEL_HIGH>;
+-		clocks = <&cru SCLK_TIMER6>, <&cru PCLK_TIMER0>;
+-		clock-names = "timer", "pclk";
++		clocks = <&cru PCLK_TIMER0>, <&cru SCLK_TIMER6>;
++		clock-names = "pclk", "timer";
+ 	};
+ 
+ 	i2s0: i2s@1011a000 {
+@@ -699,7 +699,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		pd_vio@RK3188_PD_VIO {
++		power-domain@RK3188_PD_VIO {
+ 			reg = <RK3188_PD_VIO>;
+ 			clocks = <&cru ACLK_LCDC0>,
+ 				 <&cru ACLK_LCDC1>,
+@@ -721,7 +721,7 @@
+ 				 <&qos_rga>;
+ 		};
+ 
+-		pd_video@RK3188_PD_VIDEO {
++		power-domain@RK3188_PD_VIDEO {
+ 			reg = <RK3188_PD_VIDEO>;
+ 			clocks = <&cru ACLK_VDPU>,
+ 				 <&cru ACLK_VEPU>,
+@@ -730,7 +730,7 @@
+ 			pm_qos = <&qos_vpu>;
+ 		};
+ 
+-		pd_gpu@RK3188_PD_GPU {
++		power-domain@RK3188_PD_GPU {
+ 			reg = <RK3188_PD_GPU>;
+ 			clocks = <&cru ACLK_GPU>;
+ 			pm_qos = <&qos_gpu>;
+diff --git a/arch/arm/boot/dts/rk322x.dtsi b/arch/arm/boot/dts/rk322x.dtsi
+index 48e6e8d44a1a5..7de8b006ca13a 100644
+--- a/arch/arm/boot/dts/rk322x.dtsi
++++ b/arch/arm/boot/dts/rk322x.dtsi
+@@ -524,7 +524,7 @@
+ 		pinctrl-0 = <&otp_pin>;
+ 		pinctrl-1 = <&otp_out>;
+ 		pinctrl-2 = <&otp_pin>;
+-		#thermal-sensor-cells = <0>;
++		#thermal-sensor-cells = <1>;
+ 		rockchip,hw-tshut-temp = <95000>;
+ 		status = "disabled";
+ 	};
+@@ -565,10 +565,9 @@
+ 		compatible = "rockchip,iommu";
+ 		reg = <0x20020800 0x100>;
+ 		interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
+-		interrupt-names = "vpu_mmu";
+ 		clocks = <&cru ACLK_VPU>, <&cru HCLK_VPU>;
+ 		clock-names = "aclk", "iface";
+-		iommu-cells = <0>;
++		#iommu-cells = <0>;
+ 		status = "disabled";
+ 	};
+ 
+@@ -576,10 +575,9 @@
+ 		compatible = "rockchip,iommu";
+ 		reg = <0x20030480 0x40>, <0x200304c0 0x40>;
+ 		interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>;
+-		interrupt-names = "vdec_mmu";
+ 		clocks = <&cru ACLK_RKVDEC>, <&cru HCLK_RKVDEC>;
+ 		clock-names = "aclk", "iface";
+-		iommu-cells = <0>;
++		#iommu-cells = <0>;
+ 		status = "disabled";
+ 	};
+ 
+@@ -609,7 +607,6 @@
+ 		compatible = "rockchip,iommu";
+ 		reg = <0x20053f00 0x100>;
+ 		interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
+-		interrupt-names = "vop_mmu";
+ 		clocks = <&cru ACLK_VOP>, <&cru HCLK_VOP>;
+ 		clock-names = "aclk", "iface";
+ 		#iommu-cells = <0>;
+@@ -630,10 +627,9 @@
+ 		compatible = "rockchip,iommu";
+ 		reg = <0x20070800 0x100>;
+ 		interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
+-		interrupt-names = "iep_mmu";
+ 		clocks = <&cru ACLK_IEP>, <&cru HCLK_IEP>;
+ 		clock-names = "aclk", "iface";
+-		iommu-cells = <0>;
++		#iommu-cells = <0>;
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/rk3288-rock2-som.dtsi b/arch/arm/boot/dts/rk3288-rock2-som.dtsi
+index 44bb5e6f83b12..76363b8afcb9b 100644
+--- a/arch/arm/boot/dts/rk3288-rock2-som.dtsi
++++ b/arch/arm/boot/dts/rk3288-rock2-som.dtsi
+@@ -218,7 +218,7 @@
+ 	flash0-supply = <&vcc_flash>;
+ 	flash1-supply = <&vccio_pmu>;
+ 	gpio30-supply = <&vccio_pmu>;
+-	gpio1830 = <&vcc_io>;
++	gpio1830-supply = <&vcc_io>;
+ 	lcdc-supply = <&vcc_io>;
+ 	sdcard-supply = <&vccio_sd>;
+ 	wifi-supply = <&vcc_18>;
+diff --git a/arch/arm/boot/dts/rk3288-vyasa.dts b/arch/arm/boot/dts/rk3288-vyasa.dts
+index aa50f8ed4ca0a..b156a83eb7d78 100644
+--- a/arch/arm/boot/dts/rk3288-vyasa.dts
++++ b/arch/arm/boot/dts/rk3288-vyasa.dts
+@@ -379,10 +379,10 @@
+ 	audio-supply = <&vcc_18>;
+ 	bb-supply = <&vcc_io>;
+ 	dvp-supply = <&vcc_io>;
+-	flash0-suuply = <&vcc_18>;
++	flash0-supply = <&vcc_18>;
+ 	flash1-supply = <&vcc_lan>;
+ 	gpio30-supply = <&vcc_io>;
+-	gpio1830 = <&vcc_io>;
++	gpio1830-supply = <&vcc_io>;
+ 	lcdc-supply = <&vcc_io>;
+ 	sdcard-supply = <&vccio_sd>;
+ 	wifi-supply = <&vcc_18>;
+diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
+index 68d5a58cfe889..0d89ad274268b 100644
+--- a/arch/arm/boot/dts/rk3288.dtsi
++++ b/arch/arm/boot/dts/rk3288.dtsi
+@@ -240,8 +240,8 @@
+ 		compatible = "rockchip,rk3288-timer";
+ 		reg = <0x0 0xff810000 0x0 0x20>;
+ 		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>;
+-		clocks = <&xin24m>, <&cru PCLK_TIMER>;
+-		clock-names = "timer", "pclk";
++		clocks = <&cru PCLK_TIMER>, <&xin24m>;
++		clock-names = "pclk", "timer";
+ 	};
+ 
+ 	display-subsystem {
+@@ -788,7 +788,7 @@
+ 			 *	*_HDMI		HDMI
+ 			 *	*_MIPI_*	MIPI
+ 			 */
+-			pd_vio@RK3288_PD_VIO {
++			power-domain@RK3288_PD_VIO {
+ 				reg = <RK3288_PD_VIO>;
+ 				clocks = <&cru ACLK_IEP>,
+ 					 <&cru ACLK_ISP>,
+@@ -830,7 +830,7 @@
+ 			 * Note: The following 3 are HEVC(H.265) clocks,
+ 			 * and on the ACLK_HEVC_NIU (NOC).
+ 			 */
+-			pd_hevc@RK3288_PD_HEVC {
++			power-domain@RK3288_PD_HEVC {
+ 				reg = <RK3288_PD_HEVC>;
+ 				clocks = <&cru ACLK_HEVC>,
+ 					 <&cru SCLK_HEVC_CABAC>,
+@@ -844,7 +844,7 @@
+ 			 * (video endecoder & decoder) clocks that on the
+ 			 * ACLK_VCODEC_NIU and HCLK_VCODEC_NIU (NOC).
+ 			 */
+-			pd_video@RK3288_PD_VIDEO {
++			power-domain@RK3288_PD_VIDEO {
+ 				reg = <RK3288_PD_VIDEO>;
+ 				clocks = <&cru ACLK_VCODEC>,
+ 					 <&cru HCLK_VCODEC>;
+@@ -855,7 +855,7 @@
+ 			 * Note: ACLK_GPU is the GPU clock,
+ 			 * and on the ACLK_GPU_NIU (NOC).
+ 			 */
+-			pd_gpu@RK3288_PD_GPU {
++			power-domain@RK3288_PD_GPU {
+ 				reg = <RK3288_PD_GPU>;
+ 				clocks = <&cru ACLK_GPU>;
+ 				pm_qos = <&qos_gpu_r>,
+@@ -1593,7 +1593,7 @@
+ 			drive-strength = <12>;
+ 		};
+ 
+-		sleep {
++		suspend {
+ 			global_pwroff: global-pwroff {
+ 				rockchip,pins = <0 RK_PA0 1 &pcfg_pull_none>;
+ 			};
+diff --git a/arch/arm/boot/dts/ste-ab8500.dtsi b/arch/arm/boot/dts/ste-ab8500.dtsi
+index aab5719cc1a93..16d50369226af 100644
+--- a/arch/arm/boot/dts/ste-ab8500.dtsi
++++ b/arch/arm/boot/dts/ste-ab8500.dtsi
+@@ -34,7 +34,7 @@
+ 					#clock-cells = <1>;
+ 				};
+ 
+-				ab8500_gpio: ab8500-gpio {
++				ab8500_gpio: ab8500-gpiocontroller {
+ 					compatible = "stericsson,ab8500-gpio";
+ 					gpio-controller;
+ 					#gpio-cells = <2>;
+@@ -42,15 +42,15 @@
+ 
+ 				ab8500-rtc {
+ 					compatible = "stericsson,ab8500-rtc";
+-					interrupts = <17 IRQ_TYPE_LEVEL_HIGH
+-						      18 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <17 IRQ_TYPE_LEVEL_HIGH>,
++						     <18 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "60S", "ALARM";
+ 				};
+ 
+ 				gpadc: ab8500-gpadc {
+ 					compatible = "stericsson,ab8500-gpadc";
+-					interrupts = <32 IRQ_TYPE_LEVEL_HIGH
+-						      39 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <32 IRQ_TYPE_LEVEL_HIGH>,
++						     <39 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "HW_CONV_END", "SW_CONV_END";
+ 					vddadc-supply = <&ab8500_ldo_tvout_reg>;
+ 					#address-cells = <1>;
+@@ -169,13 +169,13 @@
+ 
+ 				ab8500_usb {
+ 					compatible = "stericsson,ab8500-usb";
+-					interrupts = < 90 IRQ_TYPE_LEVEL_HIGH
+-						       96 IRQ_TYPE_LEVEL_HIGH
+-						       14 IRQ_TYPE_LEVEL_HIGH
+-						       15 IRQ_TYPE_LEVEL_HIGH
+-						       79 IRQ_TYPE_LEVEL_HIGH
+-						       74 IRQ_TYPE_LEVEL_HIGH
+-						       75 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <90 IRQ_TYPE_LEVEL_HIGH>,
++						     <96 IRQ_TYPE_LEVEL_HIGH>,
++						     <14 IRQ_TYPE_LEVEL_HIGH>,
++						     <15 IRQ_TYPE_LEVEL_HIGH>,
++						     <79 IRQ_TYPE_LEVEL_HIGH>,
++						     <74 IRQ_TYPE_LEVEL_HIGH>,
++						     <75 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "ID_WAKEUP_R",
+ 							  "ID_WAKEUP_F",
+ 							  "VBUS_DET_F",
+@@ -192,8 +192,8 @@
+ 
+ 				ab8500-ponkey {
+ 					compatible = "stericsson,ab8500-poweron-key";
+-					interrupts = <6 IRQ_TYPE_LEVEL_HIGH
+-						      7 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <6 IRQ_TYPE_LEVEL_HIGH>,
++						     <7 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "ONKEY_DBF", "ONKEY_DBR";
+ 				};
+ 
+diff --git a/arch/arm/boot/dts/ste-ab8505.dtsi b/arch/arm/boot/dts/ste-ab8505.dtsi
+index 67bc69e67b330..8d2cda0b4d622 100644
+--- a/arch/arm/boot/dts/ste-ab8505.dtsi
++++ b/arch/arm/boot/dts/ste-ab8505.dtsi
+@@ -30,7 +30,7 @@
+ 					#clock-cells = <1>;
+ 				};
+ 
+-				ab8505_gpio: ab8505-gpio {
++				ab8505_gpio: ab8505-gpiocontroller {
+ 					compatible = "stericsson,ab8505-gpio";
+ 					gpio-controller;
+ 					#gpio-cells = <2>;
+@@ -38,8 +38,8 @@
+ 
+ 				ab8500-rtc {
+ 					compatible = "stericsson,ab8500-rtc";
+-					interrupts = <17 IRQ_TYPE_LEVEL_HIGH
+-						      18 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <17 IRQ_TYPE_LEVEL_HIGH>,
++						     <18 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "60S", "ALARM";
+ 				};
+ 
+@@ -131,13 +131,13 @@
+ 
+ 				ab8500_usb: ab8500_usb {
+ 					compatible = "stericsson,ab8500-usb";
+-					interrupts = < 90 IRQ_TYPE_LEVEL_HIGH
+-						       96 IRQ_TYPE_LEVEL_HIGH
+-						       14 IRQ_TYPE_LEVEL_HIGH
+-						       15 IRQ_TYPE_LEVEL_HIGH
+-						       79 IRQ_TYPE_LEVEL_HIGH
+-						       74 IRQ_TYPE_LEVEL_HIGH
+-						       75 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <90 IRQ_TYPE_LEVEL_HIGH>,
++						     <96 IRQ_TYPE_LEVEL_HIGH>,
++						     <14 IRQ_TYPE_LEVEL_HIGH>,
++						     <15 IRQ_TYPE_LEVEL_HIGH>,
++						     <79 IRQ_TYPE_LEVEL_HIGH>,
++						     <74 IRQ_TYPE_LEVEL_HIGH>,
++						     <75 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "ID_WAKEUP_R",
+ 							  "ID_WAKEUP_F",
+ 							  "VBUS_DET_F",
+@@ -154,8 +154,8 @@
+ 
+ 				ab8500-ponkey {
+ 					compatible = "stericsson,ab8500-poweron-key";
+-					interrupts = <6 IRQ_TYPE_LEVEL_HIGH
+-						      7 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <6 IRQ_TYPE_LEVEL_HIGH>,
++						     <7 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "ONKEY_DBF", "ONKEY_DBR";
+ 				};
+ 
+diff --git a/arch/arm/boot/dts/ste-href-ab8500.dtsi b/arch/arm/boot/dts/ste-href-ab8500.dtsi
+index 4946743de7b9b..3ccb7b5c71625 100644
+--- a/arch/arm/boot/dts/ste-href-ab8500.dtsi
++++ b/arch/arm/boot/dts/ste-href-ab8500.dtsi
+@@ -9,7 +9,7 @@
+ 	soc {
+ 		prcmu@80157000 {
+ 			ab8500 {
+-				ab8500-gpio {
++				ab8500-gpiocontroller {
+ 					/* Hog a few default settings */
+ 					pinctrl-names = "default";
+ 					pinctrl-0 = <&gpio2_default_mode>,
+diff --git a/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi b/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi
+index c0de1337bdaad..457bddabc32c2 100644
+--- a/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi
++++ b/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi
+@@ -19,6 +19,9 @@
+ 					     <19 IRQ_TYPE_EDGE_RISING>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&accel_tvk_mode>;
++				mount-matrix = "0", "-1", "0",
++					       "-1", "0", "0",
++					       "0", "0", "-1";
+ 			};
+ 			magnetometer@1e {
+ 				compatible = "st,lsm303dlm-magn";
+diff --git a/arch/arm/boot/dts/ste-href.dtsi b/arch/arm/boot/dts/ste-href.dtsi
+index 359c1219b0bab..3586b5d7876a4 100644
+--- a/arch/arm/boot/dts/ste-href.dtsi
++++ b/arch/arm/boot/dts/ste-href.dtsi
+@@ -224,7 +224,7 @@
+ 
+ 		prcmu@80157000 {
+ 			ab8500 {
+-				ab8500-gpio {
++				ab8500-gpiocontroller {
+ 				};
+ 
+ 				ab8500_usb {
+diff --git a/arch/arm/boot/dts/ste-snowball.dts b/arch/arm/boot/dts/ste-snowball.dts
+index 27d8a07718a00..f6e0d71f6c09f 100644
+--- a/arch/arm/boot/dts/ste-snowball.dts
++++ b/arch/arm/boot/dts/ste-snowball.dts
+@@ -376,7 +376,7 @@
+ 
+ 		prcmu@80157000 {
+ 			ab8500 {
+-				ab8500-gpio {
++				ab8500-gpiocontroller {
+ 					/*
+ 					 * AB8500 GPIOs are numbered starting from 1, so the first
+ 					 * index 0 is what in the datasheet is called "GPIO1", and
+diff --git a/arch/arm/boot/dts/stm32429i-eval.dts b/arch/arm/boot/dts/stm32429i-eval.dts
+index 67e7648de41ef..8b0ead46ef9be 100644
+--- a/arch/arm/boot/dts/stm32429i-eval.dts
++++ b/arch/arm/boot/dts/stm32429i-eval.dts
+@@ -119,17 +119,15 @@
+ 		};
+ 	};
+ 
+-	gpio_keys {
++	gpio-keys {
+ 		compatible = "gpio-keys";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		autorepeat;
+-		button@0 {
++		button-0 {
+ 			label = "Wake up";
+ 			linux,code = <KEY_WAKEUP>;
+ 			gpios = <&gpioa 0 0>;
+ 		};
+-		button@1 {
++		button-1 {
+ 			label = "Tamper";
+ 			linux,code = <KEY_RESTART>;
+ 			gpios = <&gpioc 13 0>;
+diff --git a/arch/arm/boot/dts/stm32746g-eval.dts b/arch/arm/boot/dts/stm32746g-eval.dts
+index ca8c192449ee9..327613fd9666c 100644
+--- a/arch/arm/boot/dts/stm32746g-eval.dts
++++ b/arch/arm/boot/dts/stm32746g-eval.dts
+@@ -81,12 +81,10 @@
+ 		};
+ 	};
+ 
+-	gpio_keys {
++	gpio-keys {
+ 		compatible = "gpio-keys";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		autorepeat;
+-		button@0 {
++		button-0 {
+ 			label = "Wake up";
+ 			linux,code = <KEY_WAKEUP>;
+ 			gpios = <&gpioc 13 0>;
+diff --git a/arch/arm/boot/dts/stm32f429-disco.dts b/arch/arm/boot/dts/stm32f429-disco.dts
+index 3dc068b91ca15..075ac57d0bf4a 100644
+--- a/arch/arm/boot/dts/stm32f429-disco.dts
++++ b/arch/arm/boot/dts/stm32f429-disco.dts
+@@ -81,12 +81,10 @@
+ 		};
+ 	};
+ 
+-	gpio_keys {
++	gpio-keys {
+ 		compatible = "gpio-keys";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		autorepeat;
+-		button@0 {
++		button-0 {
+ 			label = "User";
+ 			linux,code = <KEY_HOME>;
+ 			gpios = <&gpioa 0 0>;
+diff --git a/arch/arm/boot/dts/stm32f429.dtsi b/arch/arm/boot/dts/stm32f429.dtsi
+index ad715a0e1c9a9..0dc5fa94dbdf8 100644
+--- a/arch/arm/boot/dts/stm32f429.dtsi
++++ b/arch/arm/boot/dts/stm32f429.dtsi
+@@ -283,8 +283,6 @@
+ 		};
+ 
+ 		timers13: timers@40001c00 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40001C00 0x400>;
+ 			clocks = <&rcc 0 STM32F4_APB1_CLOCK(TIM13)>;
+@@ -299,8 +297,6 @@
+ 		};
+ 
+ 		timers14: timers@40002000 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40002000 0x400>;
+ 			clocks = <&rcc 0 STM32F4_APB1_CLOCK(TIM14)>;
+@@ -633,8 +629,6 @@
+ 		};
+ 
+ 		timers10: timers@40014400 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40014400 0x400>;
+ 			clocks = <&rcc 0 STM32F4_APB2_CLOCK(TIM10)>;
+@@ -649,8 +643,6 @@
+ 		};
+ 
+ 		timers11: timers@40014800 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40014800 0x400>;
+ 			clocks = <&rcc 0 STM32F4_APB2_CLOCK(TIM11)>;
+@@ -709,7 +701,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		rcc: rcc@40023810 {
++		rcc: rcc@40023800 {
+ 			#reset-cells = <1>;
+ 			#clock-cells = <2>;
+ 			compatible = "st,stm32f42xx-rcc", "st,stm32-rcc";
+diff --git a/arch/arm/boot/dts/stm32f469-disco.dts b/arch/arm/boot/dts/stm32f469-disco.dts
+index 2e1b3bbbe4b5b..8c982ae79f432 100644
+--- a/arch/arm/boot/dts/stm32f469-disco.dts
++++ b/arch/arm/boot/dts/stm32f469-disco.dts
+@@ -104,12 +104,10 @@
+ 		};
+ 	};
+ 
+-	gpio_keys {
++	gpio-keys {
+ 		compatible = "gpio-keys";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		autorepeat;
+-		button@0 {
++		button-0 {
+ 			label = "User";
+ 			linux,code = <KEY_WAKEUP>;
+ 			gpios = <&gpioa 0 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/stm32f746.dtsi b/arch/arm/boot/dts/stm32f746.dtsi
+index 640ff54ed00ca..d49e481b3aa62 100644
+--- a/arch/arm/boot/dts/stm32f746.dtsi
++++ b/arch/arm/boot/dts/stm32f746.dtsi
+@@ -265,8 +265,6 @@
+ 		};
+ 
+ 		timers13: timers@40001c00 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40001C00 0x400>;
+ 			clocks = <&rcc 0 STM32F7_APB1_CLOCK(TIM13)>;
+@@ -281,8 +279,6 @@
+ 		};
+ 
+ 		timers14: timers@40002000 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40002000 0x400>;
+ 			clocks = <&rcc 0 STM32F7_APB1_CLOCK(TIM14)>;
+@@ -364,9 +360,9 @@
+ 			status = "disabled";
+ 		};
+ 
+-		i2c3: i2c@40005C00 {
++		i2c3: i2c@40005c00 {
+ 			compatible = "st,stm32f7-i2c";
+-			reg = <0x40005C00 0x400>;
++			reg = <0x40005c00 0x400>;
+ 			interrupts = <72>,
+ 				     <73>;
+ 			resets = <&rcc STM32F7_APB1_RESET(I2C3)>;
+@@ -531,8 +527,6 @@
+ 		};
+ 
+ 		timers10: timers@40014400 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40014400 0x400>;
+ 			clocks = <&rcc 0 STM32F7_APB2_CLOCK(TIM10)>;
+@@ -547,8 +541,6 @@
+ 		};
+ 
+ 		timers11: timers@40014800 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40014800 0x400>;
+ 			clocks = <&rcc 0 STM32F7_APB2_CLOCK(TIM11)>;
+diff --git a/arch/arm/boot/dts/stm32f769-disco.dts b/arch/arm/boot/dts/stm32f769-disco.dts
+index 0ce7fbc20fa4d..be943b7019806 100644
+--- a/arch/arm/boot/dts/stm32f769-disco.dts
++++ b/arch/arm/boot/dts/stm32f769-disco.dts
+@@ -75,12 +75,10 @@
+ 		};
+ 	};
+ 
+-	gpio_keys {
++	gpio-keys {
+ 		compatible = "gpio-keys";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		autorepeat;
+-		button@0 {
++		button-0 {
+ 			label = "User";
+ 			linux,code = <KEY_HOME>;
+ 			gpios = <&gpioa 0 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/stm32h743.dtsi b/arch/arm/boot/dts/stm32h743.dtsi
+index 7febe19e780d2..1579707ea566d 100644
+--- a/arch/arm/boot/dts/stm32h743.dtsi
++++ b/arch/arm/boot/dts/stm32h743.dtsi
+@@ -454,8 +454,6 @@
+ 		};
+ 
+ 		lptimer4: timer@58002c00 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-lptimer";
+ 			reg = <0x58002c00 0x400>;
+ 			clocks = <&rcc LPTIM4_CK>;
+@@ -470,8 +468,6 @@
+ 		};
+ 
+ 		lptimer5: timer@58003000 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-lptimer";
+ 			reg = <0x58003000 0x400>;
+ 			clocks = <&rcc LPTIM5_CK>;
+diff --git a/arch/arm/boot/dts/stm32mp151.dtsi b/arch/arm/boot/dts/stm32mp151.dtsi
+index 84757901cd8df..b479016fef008 100644
+--- a/arch/arm/boot/dts/stm32mp151.dtsi
++++ b/arch/arm/boot/dts/stm32mp151.dtsi
+@@ -1395,12 +1395,6 @@
+ 			status = "disabled";
+ 		};
+ 
+-		stmmac_axi_config_0: stmmac-axi-config {
+-			snps,wr_osr_lmt = <0x7>;
+-			snps,rd_osr_lmt = <0x7>;
+-			snps,blen = <0 0 0 0 16 8 4>;
+-		};
+-
+ 		ethernet0: ethernet@5800a000 {
+ 			compatible = "st,stm32mp1-dwmac", "snps,dwmac-4.20a";
+ 			reg = <0x5800a000 0x2000>;
+@@ -1424,6 +1418,12 @@
+ 			snps,axi-config = <&stmmac_axi_config_0>;
+ 			snps,tso;
+ 			status = "disabled";
++
++			stmmac_axi_config_0: stmmac-axi-config {
++				snps,wr_osr_lmt = <0x7>;
++				snps,rd_osr_lmt = <0x7>;
++				snps,blen = <0 0 0 0 16 8 4>;
++			};
+ 		};
+ 
+ 		usbh_ohci: usbh-ohci@5800c000 {
+diff --git a/arch/arm/boot/dts/stm32mp157a-stinger96.dtsi b/arch/arm/boot/dts/stm32mp157a-stinger96.dtsi
+index 58275bcf9e26e..4dd138d691c7c 100644
+--- a/arch/arm/boot/dts/stm32mp157a-stinger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp157a-stinger96.dtsi
+@@ -184,8 +184,6 @@
+ 
+ 			vdd_usb: ldo4 {
+ 				regulator-name = "vdd_usb";
+-				regulator-min-microvolt = <3300000>;
+-				regulator-max-microvolt = <3300000>;
+ 				interrupts = <IT_CURLIM_LDO4 0>;
+ 			};
+ 
+@@ -208,7 +206,6 @@
+ 			vref_ddr: vref_ddr {
+ 				regulator-name = "vref_ddr";
+ 				regulator-always-on;
+-				regulator-over-current-protection;
+ 			};
+ 
+ 			bst_out: boost {
+@@ -219,13 +216,13 @@
+ 			vbus_otg: pwr_sw1 {
+ 				regulator-name = "vbus_otg";
+ 				interrupts = <IT_OCP_OTG 0>;
+-				regulator-active-discharge;
++				regulator-active-discharge = <1>;
+ 			};
+ 
+ 			vbus_sw: pwr_sw2 {
+ 				regulator-name = "vbus_sw";
+ 				interrupts = <IT_OCP_SWOUT 0>;
+-				regulator-active-discharge;
++				regulator-active-discharge = <1>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/stm32mp157c-odyssey-som.dtsi b/arch/arm/boot/dts/stm32mp157c-odyssey-som.dtsi
+index 6cf49a0a9e694..2d9461006810c 100644
+--- a/arch/arm/boot/dts/stm32mp157c-odyssey-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp157c-odyssey-som.dtsi
+@@ -173,8 +173,6 @@
+ 
+ 			vdd_usb: ldo4 {
+ 				regulator-name = "vdd_usb";
+-				regulator-min-microvolt = <3300000>;
+-				regulator-max-microvolt = <3300000>;
+ 				interrupts = <IT_CURLIM_LDO4 0>;
+ 			};
+ 
+@@ -197,7 +195,6 @@
+ 			vref_ddr: vref_ddr {
+ 				regulator-name = "vref_ddr";
+ 				regulator-always-on;
+-				regulator-over-current-protection;
+ 			};
+ 
+ 			 bst_out: boost {
+@@ -213,7 +210,7 @@
+ 			 vbus_sw: pwr_sw2 {
+ 				regulator-name = "vbus_sw";
+ 				interrupts = <IT_OCP_SWOUT 0>;
+-				regulator-active-discharge;
++				regulator-active-discharge = <1>;
+ 			 };
+ 		};
+ 
+@@ -269,7 +266,7 @@
+ 	st,neg-edge;
+ 	bus-width = <8>;
+ 	vmmc-supply = <&v3v3>;
+-	vqmmc-supply = <&v3v3>;
++	vqmmc-supply = <&vdd>;
+ 	mmc-ddr-3_3v;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/stm32mp157c-odyssey.dts b/arch/arm/boot/dts/stm32mp157c-odyssey.dts
+index a7ffec8f15164..be1dd5e9e7443 100644
+--- a/arch/arm/boot/dts/stm32mp157c-odyssey.dts
++++ b/arch/arm/boot/dts/stm32mp157c-odyssey.dts
+@@ -64,7 +64,7 @@
+ 	pinctrl-0 = <&sdmmc1_b4_pins_a>;
+ 	pinctrl-1 = <&sdmmc1_b4_od_pins_a>;
+ 	pinctrl-2 = <&sdmmc1_b4_sleep_pins_a>;
+-	cd-gpios = <&gpiob 7 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>;
++	cd-gpios = <&gpioi 3 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>;
+ 	disable-wp;
+ 	st,neg-edge;
+ 	bus-width = <4>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+index 8456f172d4b1b..59b3239bcd763 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -34,7 +34,6 @@
+ 
+ 	gpio-keys-polled {
+ 		compatible = "gpio-keys-polled";
+-		#size-cells = <0>;
+ 		poll-interval = <20>;
+ 
+ 		/*
+@@ -60,7 +59,6 @@
+ 
+ 	gpio-keys {
+ 		compatible = "gpio-keys";
+-		#size-cells = <0>;
+ 
+ 		button-1 {
+ 			label = "TA2-GPIO-B";
+@@ -184,12 +182,11 @@
+ 
+ 	};
+ 
+-	polytouch@38 {
+-		compatible = "edt,edt-ft5x06";
++	touchscreen@38 {
++		compatible = "edt,edt-ft5406";
+ 		reg = <0x38>;
+ 		interrupt-parent = <&gpiog>;
+ 		interrupts = <2 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */
+-		linux,wakeup;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+index 27f19575fada6..8221bf69fefeb 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+@@ -234,8 +234,6 @@
+ 
+ 			vdd_usb: ldo4 {
+ 				regulator-name = "vdd_usb";
+-				regulator-min-microvolt = <3300000>;
+-				regulator-max-microvolt = <3300000>;
+ 				interrupts = <IT_CURLIM_LDO4 0>;
+ 			};
+ 
+@@ -257,7 +255,6 @@
+ 			vref_ddr: vref_ddr {
+ 				regulator-name = "vref_ddr";
+ 				regulator-always-on;
+-				regulator-over-current-protection;
+ 			};
+ 
+ 			bst_out: boost {
+@@ -273,7 +270,7 @@
+ 			vbus_sw: pwr_sw2 {
+ 				regulator-name = "vbus_sw";
+ 				interrupts = <IT_OCP_SWOUT 0>;
+-				regulator-active-discharge;
++				regulator-active-discharge = <1>;
+ 			};
+ 		};
+ 
+@@ -338,7 +335,7 @@
+ 	#size-cells = <0>;
+ 	status = "okay";
+ 
+-	flash0: mx66l51235l@0 {
++	flash0: flash@0 {
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-rx-bus-width = <4>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+index 803eb8bc9c85c..a9eb82b2f1704 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+@@ -194,7 +194,7 @@
+ 	#size-cells = <0>;
+ 	status = "okay";
+ 
+-	flash0: spi-flash@0 {
++	flash0: flash@0 {
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-rx-bus-width = <4>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-osd32.dtsi b/arch/arm/boot/dts/stm32mp15xx-osd32.dtsi
+index 713485a95795a..6706d8311a665 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-osd32.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-osd32.dtsi
+@@ -146,8 +146,6 @@
+ 
+ 			vdd_usb: ldo4 {
+ 				regulator-name = "vdd_usb";
+-				regulator-min-microvolt = <3300000>;
+-				regulator-max-microvolt = <3300000>;
+ 				interrupts = <IT_CURLIM_LDO4 0>;
+ 			};
+ 
+@@ -171,7 +169,6 @@
+ 			vref_ddr: vref_ddr {
+ 				regulator-name = "vref_ddr";
+ 				regulator-always-on;
+-				regulator-over-current-protection;
+ 			};
+ 
+ 			bst_out: boost {
+@@ -182,13 +179,13 @@
+ 			vbus_otg: pwr_sw1 {
+ 				regulator-name = "vbus_otg";
+ 				interrupts = <IT_OCP_OTG 0>;
+-				regulator-active-discharge;
++				regulator-active-discharge = <1>;
+ 			};
+ 
+ 			vbus_sw: pwr_sw2 {
+ 				regulator-name = "vbus_sw";
+ 				interrupts = <IT_OCP_SWOUT 0>;
+-				regulator-active-discharge;
++				regulator-active-discharge = <1>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+index 068aabcffb13b..5d0f0fbba1d2e 100644
+--- a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
++++ b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+@@ -1009,7 +1009,7 @@
+ 		nvidia,audio-codec = <&wm8903>;
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+-		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_HIGH>;
++		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_LOW>;
+ 		nvidia,int-mic-en-gpios = <&wm8903 1 GPIO_ACTIVE_HIGH>;
+ 		nvidia,headset;
+ 
+diff --git a/arch/arm/boot/dts/tegra20-harmony.dts b/arch/arm/boot/dts/tegra20-harmony.dts
+index 86494cb4d5a1d..ae4312eedcbd5 100644
+--- a/arch/arm/boot/dts/tegra20-harmony.dts
++++ b/arch/arm/boot/dts/tegra20-harmony.dts
+@@ -748,7 +748,7 @@
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+ 		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2)
+-			GPIO_ACTIVE_HIGH>;
++			GPIO_ACTIVE_LOW>;
+ 		nvidia,int-mic-en-gpios = <&gpio TEGRA_GPIO(X, 0)
+ 			GPIO_ACTIVE_HIGH>;
+ 		nvidia,ext-mic-en-gpios = <&gpio TEGRA_GPIO(X, 1)
+diff --git a/arch/arm/boot/dts/tegra20-medcom-wide.dts b/arch/arm/boot/dts/tegra20-medcom-wide.dts
+index a348ca30e522b..b31c9bca16e6a 100644
+--- a/arch/arm/boot/dts/tegra20-medcom-wide.dts
++++ b/arch/arm/boot/dts/tegra20-medcom-wide.dts
+@@ -84,7 +84,7 @@
+ 		nvidia,audio-codec = <&wm8903>;
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+-		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_HIGH>;
++		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_LOW>;
+ 
+ 		clocks = <&tegra_car TEGRA20_CLK_PLL_A>,
+ 			 <&tegra_car TEGRA20_CLK_PLL_A_OUT0>,
+diff --git a/arch/arm/boot/dts/tegra20-plutux.dts b/arch/arm/boot/dts/tegra20-plutux.dts
+index 378f23b2958b1..5811b7006a9bf 100644
+--- a/arch/arm/boot/dts/tegra20-plutux.dts
++++ b/arch/arm/boot/dts/tegra20-plutux.dts
+@@ -52,7 +52,7 @@
+ 		nvidia,audio-codec = <&wm8903>;
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+-		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_HIGH>;
++		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_LOW>;
+ 
+ 		clocks = <&tegra_car TEGRA20_CLK_PLL_A>,
+ 			 <&tegra_car TEGRA20_CLK_PLL_A_OUT0>,
+diff --git a/arch/arm/boot/dts/tegra20-seaboard.dts b/arch/arm/boot/dts/tegra20-seaboard.dts
+index c24d4a37613e9..92d494b8c3d25 100644
+--- a/arch/arm/boot/dts/tegra20-seaboard.dts
++++ b/arch/arm/boot/dts/tegra20-seaboard.dts
+@@ -911,7 +911,7 @@
+ 		nvidia,audio-codec = <&wm8903>;
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+-		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(X, 1) GPIO_ACTIVE_HIGH>;
++		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(X, 1) GPIO_ACTIVE_LOW>;
+ 
+ 		clocks = <&tegra_car TEGRA20_CLK_PLL_A>,
+ 			 <&tegra_car TEGRA20_CLK_PLL_A_OUT0>,
+diff --git a/arch/arm/boot/dts/tegra20-tec.dts b/arch/arm/boot/dts/tegra20-tec.dts
+index 44ced60315de1..10ff09d86efa7 100644
+--- a/arch/arm/boot/dts/tegra20-tec.dts
++++ b/arch/arm/boot/dts/tegra20-tec.dts
+@@ -61,7 +61,7 @@
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+ 		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2)
+-			GPIO_ACTIVE_HIGH>;
++			GPIO_ACTIVE_LOW>;
+ 
+ 		clocks = <&tegra_car TEGRA20_CLK_PLL_A>,
+ 			 <&tegra_car TEGRA20_CLK_PLL_A_OUT0>,
+diff --git a/arch/arm/boot/dts/tegra20-ventana.dts b/arch/arm/boot/dts/tegra20-ventana.dts
+index 055334ae3d288..fe400fb84f022 100644
+--- a/arch/arm/boot/dts/tegra20-ventana.dts
++++ b/arch/arm/boot/dts/tegra20-ventana.dts
+@@ -686,7 +686,7 @@
+ 		nvidia,audio-codec = <&wm8903>;
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+-		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_HIGH>;
++		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_LOW>;
+ 		nvidia,int-mic-en-gpios = <&gpio TEGRA_GPIO(X, 0)
+ 			GPIO_ACTIVE_HIGH>;
+ 		nvidia,ext-mic-en-gpios = <&gpio TEGRA_GPIO(X, 1)
+diff --git a/arch/arm/boot/dts/tegra30-asus-nexus7-grouper-ti-pmic.dtsi b/arch/arm/boot/dts/tegra30-asus-nexus7-grouper-ti-pmic.dtsi
+index bfc06b988781d..215e497652d8e 100644
+--- a/arch/arm/boot/dts/tegra30-asus-nexus7-grouper-ti-pmic.dtsi
++++ b/arch/arm/boot/dts/tegra30-asus-nexus7-grouper-ti-pmic.dtsi
+@@ -143,7 +143,7 @@
+ 	};
+ 
+ 	vdd_3v3_sys: regulator@1 {
+-		gpio = <&pmic 7 GPIO_ACTIVE_HIGH>;
++		gpio = <&pmic 6 GPIO_ACTIVE_HIGH>;
+ 		enable-active-high;
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/tegra30-cardhu.dtsi b/arch/arm/boot/dts/tegra30-cardhu.dtsi
+index dab9989fa7605..57aead7d81627 100644
+--- a/arch/arm/boot/dts/tegra30-cardhu.dtsi
++++ b/arch/arm/boot/dts/tegra30-cardhu.dtsi
+@@ -589,7 +589,7 @@
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+ 		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2)
+-			GPIO_ACTIVE_HIGH>;
++			GPIO_ACTIVE_LOW>;
+ 
+ 		clocks = <&tegra_car TEGRA30_CLK_PLL_A>,
+ 			 <&tegra_car TEGRA30_CLK_PLL_A_OUT0>,
+diff --git a/arch/arm/mach-imx/suspend-imx53.S b/arch/arm/mach-imx/suspend-imx53.S
+index 41b8aad653634..46570ec2fbcfe 100644
+--- a/arch/arm/mach-imx/suspend-imx53.S
++++ b/arch/arm/mach-imx/suspend-imx53.S
+@@ -28,11 +28,11 @@
+  *                              ^
+  *                              ^
+  *                      imx53_suspend code
+- *              PM_INFO structure(imx53_suspend_info)
++ *              PM_INFO structure(imx5_cpu_suspend_info)
+  * ======================== low address =======================
+  */
+ 
+-/* Offsets of members of struct imx53_suspend_info */
++/* Offsets of members of struct imx5_cpu_suspend_info */
+ #define SUSPEND_INFO_MX53_M4IF_V_OFFSET		0x0
+ #define SUSPEND_INFO_MX53_IOMUXC_V_OFFSET	0x4
+ #define SUSPEND_INFO_MX53_IO_COUNT_OFFSET	0x8
+diff --git a/arch/arm/mach-omap2/pm33xx-core.c b/arch/arm/mach-omap2/pm33xx-core.c
+index 56f2c0bcae5a3..bf0d25fd2cea4 100644
+--- a/arch/arm/mach-omap2/pm33xx-core.c
++++ b/arch/arm/mach-omap2/pm33xx-core.c
+@@ -8,6 +8,7 @@
+ 
+ #include <linux/cpuidle.h>
+ #include <linux/platform_data/pm33xx.h>
++#include <linux/suspend.h>
+ #include <asm/cpuidle.h>
+ #include <asm/smp_scu.h>
+ #include <asm/suspend.h>
+@@ -324,6 +325,44 @@ static struct am33xx_pm_platform_data *am33xx_pm_get_pdata(void)
+ 		return NULL;
+ }
+ 
++#ifdef CONFIG_SUSPEND
++/*
++ * Block system suspend initially. Later on pm33xx sets up it's own
++ * platform_suspend_ops after probe. That depends also on loaded
++ * wkup_m3_ipc and booted am335x-pm-firmware.elf.
++ */
++static int amx3_suspend_block(suspend_state_t state)
++{
++	pr_warn("PM not initialized for pm33xx, wkup_m3_ipc, or am335x-pm-firmware.elf\n");
++
++	return -EINVAL;
++}
++
++static int amx3_pm_valid(suspend_state_t state)
++{
++	switch (state) {
++	case PM_SUSPEND_STANDBY:
++		return 1;
++	default:
++		return 0;
++	}
++}
++
++static const struct platform_suspend_ops amx3_blocked_pm_ops = {
++	.begin = amx3_suspend_block,
++	.valid = amx3_pm_valid,
++};
++
++static void __init amx3_block_suspend(void)
++{
++	suspend_set_ops(&amx3_blocked_pm_ops);
++}
++#else
++static inline void amx3_block_suspend(void)
++{
++}
++#endif	/* CONFIG_SUSPEND */
++
+ int __init amx3_common_pm_init(void)
+ {
+ 	struct am33xx_pm_platform_data *pdata;
+@@ -337,6 +376,7 @@ int __init amx3_common_pm_init(void)
+ 	devinfo.size_data = sizeof(*pdata);
+ 	devinfo.id = -1;
+ 	platform_device_register_full(&devinfo);
++	amx3_block_suspend();
+ 
+ 	return 0;
+ }
+diff --git a/arch/arm64/boot/dts/arm/juno-base.dtsi b/arch/arm64/boot/dts/arm/juno-base.dtsi
+index f6c55877fbd94..2c0161125ece4 100644
+--- a/arch/arm64/boot/dts/arm/juno-base.dtsi
++++ b/arch/arm64/boot/dts/arm/juno-base.dtsi
+@@ -564,13 +564,13 @@
+ 		clocks {
+ 			compatible = "arm,scpi-clocks";
+ 
+-			scpi_dvfs: scpi-dvfs {
++			scpi_dvfs: clocks-0 {
+ 				compatible = "arm,scpi-dvfs-clocks";
+ 				#clock-cells = <1>;
+ 				clock-indices = <0>, <1>, <2>;
+ 				clock-output-names = "atlclk", "aplclk","gpuclk";
+ 			};
+-			scpi_clk: scpi-clk {
++			scpi_clk: clocks-1 {
+ 				compatible = "arm,scpi-variable-clocks";
+ 				#clock-cells = <1>;
+ 				clock-indices = <3>;
+@@ -578,7 +578,7 @@
+ 			};
+ 		};
+ 
+-		scpi_devpd: scpi-power-domains {
++		scpi_devpd: power-controller {
+ 			compatible = "arm,scpi-power-domains";
+ 			num-domains = <2>;
+ 			#power-domain-cells = <1>;
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
+index e7abb74bd8168..4d34d82b898a4 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
+@@ -625,7 +625,6 @@
+ 			clocks = <&clockgen 4 3>;
+ 			clock-names = "dspi";
+ 			spi-num-chipselects = <5>;
+-			bus-num = <0>;
+ 		};
+ 
+ 		esdhc: esdhc@2140000 {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+index 5e0e7d0f1bc4e..c86cf786f4061 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+@@ -1258,6 +1258,14 @@
+ 			         <&src IMX8MQ_RESET_PCIE_CTRL_APPS_EN>,
+ 			         <&src IMX8MQ_RESET_PCIE_CTRL_APPS_TURNOFF>;
+ 			reset-names = "pciephy", "apps", "turnoff";
++			assigned-clocks = <&clk IMX8MQ_CLK_PCIE1_CTRL>,
++			                  <&clk IMX8MQ_CLK_PCIE1_PHY>,
++			                  <&clk IMX8MQ_CLK_PCIE1_AUX>;
++			assigned-clock-parents = <&clk IMX8MQ_SYS2_PLL_250M>,
++			                         <&clk IMX8MQ_SYS2_PLL_100M>,
++			                         <&clk IMX8MQ_SYS1_PLL_80M>;
++			assigned-clock-rates = <250000000>, <100000000>,
++			                       <10000000>;
+ 			status = "disabled";
+ 		};
+ 
+@@ -1287,6 +1295,14 @@
+ 			         <&src IMX8MQ_RESET_PCIE2_CTRL_APPS_EN>,
+ 			         <&src IMX8MQ_RESET_PCIE2_CTRL_APPS_TURNOFF>;
+ 			reset-names = "pciephy", "apps", "turnoff";
++			assigned-clocks = <&clk IMX8MQ_CLK_PCIE2_CTRL>,
++			                  <&clk IMX8MQ_CLK_PCIE2_PHY>,
++			                  <&clk IMX8MQ_CLK_PCIE2_AUX>;
++			assigned-clock-parents = <&clk IMX8MQ_SYS2_PLL_250M>,
++			                         <&clk IMX8MQ_SYS2_PLL_100M>,
++			                         <&clk IMX8MQ_SYS1_PLL_80M>;
++			assigned-clock-rates = <250000000>, <100000000>,
++			                       <10000000>;
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index cca143e4b6bf8..389aebdb35f17 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -108,10 +108,8 @@
+ 	};
+ 
+ 	firmware {
+-		turris-mox-rwtm {
+-			compatible = "cznic,turris-mox-rwtm";
+-			mboxes = <&rwtm 0>;
+-			status = "okay";
++		armada-3700-rwtm {
++			compatible = "marvell,armada-3700-rwtm-firmware", "cznic,turris-mox-rwtm";
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index 879115dfdf828..83d2d83f7692b 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -502,4 +502,12 @@
+ 			};
+ 		};
+ 	};
++
++	firmware {
++		armada-3700-rwtm {
++			compatible = "marvell,armada-3700-rwtm-firmware";
++			mboxes = <&rwtm 0>;
++			status = "okay";
++		};
++	};
+ };
+diff --git a/arch/arm64/boot/dts/marvell/cn9130-db.dts b/arch/arm64/boot/dts/marvell/cn9130-db.dts
+index ce49a70d88a05..d242948884000 100644
+--- a/arch/arm64/boot/dts/marvell/cn9130-db.dts
++++ b/arch/arm64/boot/dts/marvell/cn9130-db.dts
+@@ -258,7 +258,7 @@
+ 			};
+ 			partition@200000 {
+ 				label = "Linux";
+-				reg = <0x200000 0xd00000>;
++				reg = <0x200000 0xe00000>;
+ 			};
+ 			partition@1000000 {
+ 				label = "Filesystem";
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-idp.dts b/arch/arm64/boot/dts/qcom/sc7180-idp.dts
+index e77a7926034a7..afe0f9c258164 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-idp.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-idp.dts
+@@ -45,7 +45,7 @@
+ 
+ /* Increase the size from 2MB to 8MB */
+ &rmtfs_mem {
+-	reg = <0x0 0x84400000 0x0 0x800000>;
++	reg = <0x0 0x94600000 0x0 0x800000>;
+ };
+ 
+ / {
+diff --git a/arch/arm64/boot/dts/rockchip/px30.dtsi b/arch/arm64/boot/dts/rockchip/px30.dtsi
+index 64193292d26c3..0d6761074b11a 100644
+--- a/arch/arm64/boot/dts/rockchip/px30.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30.dtsi
+@@ -244,20 +244,20 @@
+ 			#size-cells = <0>;
+ 
+ 			/* These power domains are grouped by VD_LOGIC */
+-			pd_usb@PX30_PD_USB {
++			power-domain@PX30_PD_USB {
+ 				reg = <PX30_PD_USB>;
+ 				clocks = <&cru HCLK_HOST>,
+ 					 <&cru HCLK_OTG>,
+ 					 <&cru SCLK_OTG_ADP>;
+ 				pm_qos = <&qos_usb_host>, <&qos_usb_otg>;
+ 			};
+-			pd_sdcard@PX30_PD_SDCARD {
++			power-domain@PX30_PD_SDCARD {
+ 				reg = <PX30_PD_SDCARD>;
+ 				clocks = <&cru HCLK_SDMMC>,
+ 					 <&cru SCLK_SDMMC>;
+ 				pm_qos = <&qos_sdmmc>;
+ 			};
+-			pd_gmac@PX30_PD_GMAC {
++			power-domain@PX30_PD_GMAC {
+ 				reg = <PX30_PD_GMAC>;
+ 				clocks = <&cru ACLK_GMAC>,
+ 					 <&cru PCLK_GMAC>,
+@@ -265,7 +265,7 @@
+ 					 <&cru SCLK_GMAC_RX_TX>;
+ 				pm_qos = <&qos_gmac>;
+ 			};
+-			pd_mmc_nand@PX30_PD_MMC_NAND {
++			power-domain@PX30_PD_MMC_NAND {
+ 				reg = <PX30_PD_MMC_NAND>;
+ 				clocks =  <&cru HCLK_NANDC>,
+ 					  <&cru HCLK_EMMC>,
+@@ -278,14 +278,14 @@
+ 				pm_qos = <&qos_emmc>, <&qos_nand>,
+ 					 <&qos_sdio>, <&qos_sfc>;
+ 			};
+-			pd_vpu@PX30_PD_VPU {
++			power-domain@PX30_PD_VPU {
+ 				reg = <PX30_PD_VPU>;
+ 				clocks = <&cru ACLK_VPU>,
+ 					 <&cru HCLK_VPU>,
+ 					 <&cru SCLK_CORE_VPU>;
+ 				pm_qos = <&qos_vpu>, <&qos_vpu_r128>;
+ 			};
+-			pd_vo@PX30_PD_VO {
++			power-domain@PX30_PD_VO {
+ 				reg = <PX30_PD_VO>;
+ 				clocks = <&cru ACLK_RGA>,
+ 					 <&cru ACLK_VOPB>,
+@@ -301,7 +301,7 @@
+ 				pm_qos = <&qos_rga_rd>, <&qos_rga_wr>,
+ 					 <&qos_vop_m0>, <&qos_vop_m1>;
+ 			};
+-			pd_vi@PX30_PD_VI {
++			power-domain@PX30_PD_VI {
+ 				reg = <PX30_PD_VI>;
+ 				clocks = <&cru ACLK_CIF>,
+ 					 <&cru ACLK_ISP>,
+@@ -312,7 +312,7 @@
+ 					 <&qos_isp_wr>, <&qos_isp_m1>,
+ 					 <&qos_vip>;
+ 			};
+-			pd_gpu@PX30_PD_GPU {
++			power-domain@PX30_PD_GPU {
+ 				reg = <PX30_PD_GPU>;
+ 				clocks = <&cru SCLK_GPU>;
+ 				pm_qos = <&qos_gpu>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+index 7a96be10eaf0d..bce6f8b7db436 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+@@ -78,8 +78,8 @@
+ 		regulator-min-microvolt = <1800000>;
+ 		regulator-max-microvolt = <3300000>;
+ 		gpios = <&gpio0 RK_PA7 GPIO_ACTIVE_HIGH>;
+-		states = <1800000 0x0
+-			  3300000 0x1>;
++		states = <1800000 0x0>,
++			 <3300000 0x1>;
+ 		vin-supply = <&vcc5v0_sys>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts b/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts
+index 1eecad724f04c..83a0bdbe00d6f 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts
+@@ -71,8 +71,8 @@
+ 		regulator-settling-time-us = <5000>;
+ 		regulator-type = "voltage";
+ 		startup-delay-us = <2000>;
+-		states = <1800000 0x1
+-			  3300000 0x0>;
++		states = <1800000 0x1>,
++			 <3300000 0x0>;
+ 		vin-supply = <&vcc_io_33>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
+index b76282e704de1..daa9a0c601a9f 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
+@@ -45,8 +45,8 @@
+ 	vcc_sdio: sdmmcio-regulator {
+ 		compatible = "regulator-gpio";
+ 		gpios = <&grf_gpio 0 GPIO_ACTIVE_HIGH>;
+-		states = <1800000 0x1
+-			  3300000 0x0>;
++		states = <1800000 0x1>,
++			 <3300000 0x0>;
+ 		regulator-name = "vcc_sdio";
+ 		regulator-type = "voltage";
+ 		regulator-min-microvolt = <1800000>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index de1e5e8a0e885..e546c9d1d6463 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -318,13 +318,13 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			pd_hevc@RK3328_PD_HEVC {
++			power-domain@RK3328_PD_HEVC {
+ 				reg = <RK3328_PD_HEVC>;
+ 			};
+-			pd_video@RK3328_PD_VIDEO {
++			power-domain@RK3328_PD_VIDEO {
+ 				reg = <RK3328_PD_VIDEO>;
+ 			};
+-			pd_vpu@RK3328_PD_VPU {
++			power-domain@RK3328_PD_VPU {
+ 				reg = <RK3328_PD_VPU>;
+ 				clocks = <&cru ACLK_VPU>, <&cru HCLK_VPU>;
+ 			};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-gru-scarlet.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-gru-scarlet.dtsi
+index 60cd1c18cd4e0..e9ecffc409c0b 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-gru-scarlet.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-gru-scarlet.dtsi
+@@ -245,7 +245,7 @@ pp1800_pcie: &pp1800_s0 {
+ };
+ 
+ &ppvar_sd_card_io {
+-	states = <1800000 0x0 3300000 0x1>;
++	states = <1800000 0x0>, <3300000 0x1>;
+ 	regulator-max-microvolt = <3300000>;
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
+index 32dcaf2100855..765b24a2bcbf0 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
+@@ -247,8 +247,8 @@
+ 		enable-active-high;
+ 		enable-gpio = <&gpio2 2 GPIO_ACTIVE_HIGH>;
+ 		gpios = <&gpio2 28 GPIO_ACTIVE_HIGH>;
+-		states = <1800000 0x1
+-			  3000000 0x0>;
++		states = <1800000 0x1>,
++			 <3000000 0x0>;
+ 
+ 		regulator-min-microvolt = <1800000>;
+ 		regulator-max-microvolt = <3000000>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index 7e69603fb41c0..4b6065dbba55e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -1000,26 +1000,26 @@
+ 			#size-cells = <0>;
+ 
+ 			/* These power domains are grouped by VD_CENTER */
+-			pd_iep@RK3399_PD_IEP {
++			power-domain@RK3399_PD_IEP {
+ 				reg = <RK3399_PD_IEP>;
+ 				clocks = <&cru ACLK_IEP>,
+ 					 <&cru HCLK_IEP>;
+ 				pm_qos = <&qos_iep>;
+ 			};
+-			pd_rga@RK3399_PD_RGA {
++			power-domain@RK3399_PD_RGA {
+ 				reg = <RK3399_PD_RGA>;
+ 				clocks = <&cru ACLK_RGA>,
+ 					 <&cru HCLK_RGA>;
+ 				pm_qos = <&qos_rga_r>,
+ 					 <&qos_rga_w>;
+ 			};
+-			pd_vcodec@RK3399_PD_VCODEC {
++			power-domain@RK3399_PD_VCODEC {
+ 				reg = <RK3399_PD_VCODEC>;
+ 				clocks = <&cru ACLK_VCODEC>,
+ 					 <&cru HCLK_VCODEC>;
+ 				pm_qos = <&qos_video_m0>;
+ 			};
+-			pd_vdu@RK3399_PD_VDU {
++			power-domain@RK3399_PD_VDU {
+ 				reg = <RK3399_PD_VDU>;
+ 				clocks = <&cru ACLK_VDU>,
+ 					 <&cru HCLK_VDU>;
+@@ -1028,94 +1028,94 @@
+ 			};
+ 
+ 			/* These power domains are grouped by VD_GPU */
+-			pd_gpu@RK3399_PD_GPU {
++			power-domain@RK3399_PD_GPU {
+ 				reg = <RK3399_PD_GPU>;
+ 				clocks = <&cru ACLK_GPU>;
+ 				pm_qos = <&qos_gpu>;
+ 			};
+ 
+ 			/* These power domains are grouped by VD_LOGIC */
+-			pd_edp@RK3399_PD_EDP {
++			power-domain@RK3399_PD_EDP {
+ 				reg = <RK3399_PD_EDP>;
+ 				clocks = <&cru PCLK_EDP_CTRL>;
+ 			};
+-			pd_emmc@RK3399_PD_EMMC {
++			power-domain@RK3399_PD_EMMC {
+ 				reg = <RK3399_PD_EMMC>;
+ 				clocks = <&cru ACLK_EMMC>;
+ 				pm_qos = <&qos_emmc>;
+ 			};
+-			pd_gmac@RK3399_PD_GMAC {
++			power-domain@RK3399_PD_GMAC {
+ 				reg = <RK3399_PD_GMAC>;
+ 				clocks = <&cru ACLK_GMAC>,
+ 					 <&cru PCLK_GMAC>;
+ 				pm_qos = <&qos_gmac>;
+ 			};
+-			pd_sd@RK3399_PD_SD {
++			power-domain@RK3399_PD_SD {
+ 				reg = <RK3399_PD_SD>;
+ 				clocks = <&cru HCLK_SDMMC>,
+ 					 <&cru SCLK_SDMMC>;
+ 				pm_qos = <&qos_sd>;
+ 			};
+-			pd_sdioaudio@RK3399_PD_SDIOAUDIO {
++			power-domain@RK3399_PD_SDIOAUDIO {
+ 				reg = <RK3399_PD_SDIOAUDIO>;
+ 				clocks = <&cru HCLK_SDIO>;
+ 				pm_qos = <&qos_sdioaudio>;
+ 			};
+-			pd_tcpc0@RK3399_PD_TCPD0 {
++			power-domain@RK3399_PD_TCPD0 {
+ 				reg = <RK3399_PD_TCPD0>;
+ 				clocks = <&cru SCLK_UPHY0_TCPDCORE>,
+ 					 <&cru SCLK_UPHY0_TCPDPHY_REF>;
+ 			};
+-			pd_tcpc1@RK3399_PD_TCPD1 {
++			power-domain@RK3399_PD_TCPD1 {
+ 				reg = <RK3399_PD_TCPD1>;
+ 				clocks = <&cru SCLK_UPHY1_TCPDCORE>,
+ 					 <&cru SCLK_UPHY1_TCPDPHY_REF>;
+ 			};
+-			pd_usb3@RK3399_PD_USB3 {
++			power-domain@RK3399_PD_USB3 {
+ 				reg = <RK3399_PD_USB3>;
+ 				clocks = <&cru ACLK_USB3>;
+ 				pm_qos = <&qos_usb_otg0>,
+ 					 <&qos_usb_otg1>;
+ 			};
+-			pd_vio@RK3399_PD_VIO {
++			power-domain@RK3399_PD_VIO {
+ 				reg = <RK3399_PD_VIO>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+ 
+-				pd_hdcp@RK3399_PD_HDCP {
++				power-domain@RK3399_PD_HDCP {
+ 					reg = <RK3399_PD_HDCP>;
+ 					clocks = <&cru ACLK_HDCP>,
+ 						 <&cru HCLK_HDCP>,
+ 						 <&cru PCLK_HDCP>;
+ 					pm_qos = <&qos_hdcp>;
+ 				};
+-				pd_isp0@RK3399_PD_ISP0 {
++				power-domain@RK3399_PD_ISP0 {
+ 					reg = <RK3399_PD_ISP0>;
+ 					clocks = <&cru ACLK_ISP0>,
+ 						 <&cru HCLK_ISP0>;
+ 					pm_qos = <&qos_isp0_m0>,
+ 						 <&qos_isp0_m1>;
+ 				};
+-				pd_isp1@RK3399_PD_ISP1 {
++				power-domain@RK3399_PD_ISP1 {
+ 					reg = <RK3399_PD_ISP1>;
+ 					clocks = <&cru ACLK_ISP1>,
+ 						 <&cru HCLK_ISP1>;
+ 					pm_qos = <&qos_isp1_m0>,
+ 						 <&qos_isp1_m1>;
+ 				};
+-				pd_vo@RK3399_PD_VO {
++				power-domain@RK3399_PD_VO {
+ 					reg = <RK3399_PD_VO>;
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+ 
+-					pd_vopb@RK3399_PD_VOPB {
++					power-domain@RK3399_PD_VOPB {
+ 						reg = <RK3399_PD_VOPB>;
+ 						clocks = <&cru ACLK_VOP0>,
+ 							 <&cru HCLK_VOP0>;
+ 						pm_qos = <&qos_vop_big_r>,
+ 							 <&qos_vop_big_w>;
+ 					};
+-					pd_vopl@RK3399_PD_VOPL {
++					power-domain@RK3399_PD_VOPL {
+ 						reg = <RK3399_PD_VOPL>;
+ 						clocks = <&cru ACLK_VOP1>,
+ 							 <&cru HCLK_VOP1>;
+@@ -2342,7 +2342,7 @@
+ 			};
+ 		};
+ 
+-		sleep {
++		suspend {
+ 			ap_pwroff: ap-pwroff {
+ 				rockchip,pins = <1 RK_PA5 1 &pcfg_pull_none>;
+ 			};
+diff --git a/arch/arm64/boot/dts/ti/k3-am654-base-board.dts b/arch/arm64/boot/dts/ti/k3-am654-base-board.dts
+index d12dd89f3405a..937dd7280c7ae 100644
+--- a/arch/arm64/boot/dts/ti/k3-am654-base-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-am654-base-board.dts
+@@ -111,7 +111,7 @@
+ 			AM65X_WKUP_IOPAD(0x007c, PIN_INPUT, 0) /* (L5) MCU_RGMII1_RD2 */
+ 			AM65X_WKUP_IOPAD(0x0080, PIN_INPUT, 0) /* (M6) MCU_RGMII1_RD1 */
+ 			AM65X_WKUP_IOPAD(0x0084, PIN_INPUT, 0) /* (L6) MCU_RGMII1_RD0 */
+-			AM65X_WKUP_IOPAD(0x0070, PIN_INPUT, 0) /* (N1) MCU_RGMII1_TXC */
++			AM65X_WKUP_IOPAD(0x0070, PIN_OUTPUT, 0) /* (N1) MCU_RGMII1_TXC */
+ 			AM65X_WKUP_IOPAD(0x0074, PIN_INPUT, 0) /* (M1) MCU_RGMII1_RXC */
+ 		>;
+ 	};
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+index ef03e7636b66a..e8a4143e1c241 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+@@ -29,7 +29,7 @@
+ 			J721E_WKUP_IOPAD(0x008c, PIN_INPUT, 0) /* MCU_RGMII1_RD2 */
+ 			J721E_WKUP_IOPAD(0x0090, PIN_INPUT, 0) /* MCU_RGMII1_RD1 */
+ 			J721E_WKUP_IOPAD(0x0094, PIN_INPUT, 0) /* MCU_RGMII1_RD0 */
+-			J721E_WKUP_IOPAD(0x0080, PIN_INPUT, 0) /* MCU_RGMII1_TXC */
++			J721E_WKUP_IOPAD(0x0080, PIN_OUTPUT, 0) /* MCU_RGMII1_TXC */
+ 			J721E_WKUP_IOPAD(0x0084, PIN_INPUT, 0) /* MCU_RGMII1_RXC */
+ 		>;
+ 	};
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
+index 7cd31ac67f880..479abff9cb8ed 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
+@@ -206,7 +206,7 @@
+ 			J721E_WKUP_IOPAD(0x007c, PIN_INPUT, 0) /* MCU_RGMII1_RD2 */
+ 			J721E_WKUP_IOPAD(0x0080, PIN_INPUT, 0) /* MCU_RGMII1_RD1 */
+ 			J721E_WKUP_IOPAD(0x0084, PIN_INPUT, 0) /* MCU_RGMII1_RD0 */
+-			J721E_WKUP_IOPAD(0x0070, PIN_INPUT, 0) /* MCU_RGMII1_TXC */
++			J721E_WKUP_IOPAD(0x0070, PIN_OUTPUT, 0) /* MCU_RGMII1_TXC */
+ 			J721E_WKUP_IOPAD(0x0074, PIN_INPUT, 0) /* MCU_RGMII1_RXC */
+ 		>;
+ 	};
+diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
+index 779b6972aa84b..9f64fdfbf2750 100644
+--- a/arch/ia64/include/asm/pgtable.h
++++ b/arch/ia64/include/asm/pgtable.h
+@@ -520,8 +520,9 @@ extern struct page *zero_page_memmap_ptr;
+ 
+ #  ifdef CONFIG_VIRTUAL_MEM_MAP
+   /* arch mem_map init routine is needed due to holes in a virtual mem_map */
+-    extern void memmap_init (unsigned long size, int nid, unsigned long zone,
+-			     unsigned long start_pfn);
++void memmap_init(void);
++void arch_memmap_init(unsigned long size, int nid, unsigned long zone,
++		      unsigned long start_pfn);
+ #  endif /* CONFIG_VIRTUAL_MEM_MAP */
+ # endif /* !__ASSEMBLY__ */
+ 
+diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
+index 27ca549ff47ed..f316a833b7033 100644
+--- a/arch/ia64/mm/init.c
++++ b/arch/ia64/mm/init.c
+@@ -542,7 +542,7 @@ virtual_memmap_init(u64 start, u64 end, void *arg)
+ }
+ 
+ void __meminit
+-memmap_init (unsigned long size, int nid, unsigned long zone,
++arch_memmap_init (unsigned long size, int nid, unsigned long zone,
+ 	     unsigned long start_pfn)
+ {
+ 	if (!vmem_map) {
+@@ -562,6 +562,10 @@ memmap_init (unsigned long size, int nid, unsigned long zone,
+ 	}
+ }
+ 
++void __init memmap_init(void)
++{
++}
++
+ int
+ ia64_pfn_valid (unsigned long pfn)
+ {
+diff --git a/arch/s390/include/asm/stacktrace.h b/arch/s390/include/asm/stacktrace.h
+index ee582896b6a3f..90488b0c26f60 100644
+--- a/arch/s390/include/asm/stacktrace.h
++++ b/arch/s390/include/asm/stacktrace.h
+@@ -128,6 +128,103 @@ struct stack_frame {
+ 	r2;								\
+ })
+ 
++#define CALL_LARGS_0(...)						\
++	long dummy = 0
++#define CALL_LARGS_1(t1, a1)						\
++	long arg1  = (long)(t1)(a1)
++#define CALL_LARGS_2(t1, a1, t2, a2)					\
++	CALL_LARGS_1(t1, a1);						\
++	long arg2 = (long)(t2)(a2)
++#define CALL_LARGS_3(t1, a1, t2, a2, t3, a3)				\
++	CALL_LARGS_2(t1, a1, t2, a2);					\
++	long arg3 = (long)(t3)(a3)
++#define CALL_LARGS_4(t1, a1, t2, a2, t3, a3, t4, a4)			\
++	CALL_LARGS_3(t1, a1, t2, a2, t3, a3);				\
++	long arg4  = (long)(t4)(a4)
++#define CALL_LARGS_5(t1, a1, t2, a2, t3, a3, t4, a4, t5, a5)		\
++	CALL_LARGS_4(t1, a1, t2, a2, t3, a3, t4, a4);			\
++	long arg5 = (long)(t5)(a5)
++
++#define CALL_REGS_0							\
++	register long r2 asm("2") = dummy
++#define CALL_REGS_1							\
++	register long r2 asm("2") = arg1
++#define CALL_REGS_2							\
++	CALL_REGS_1;							\
++	register long r3 asm("3") = arg2
++#define CALL_REGS_3							\
++	CALL_REGS_2;							\
++	register long r4 asm("4") = arg3
++#define CALL_REGS_4							\
++	CALL_REGS_3;							\
++	register long r5 asm("5") = arg4
++#define CALL_REGS_5							\
++	CALL_REGS_4;							\
++	register long r6 asm("6") = arg5
++
++#define CALL_TYPECHECK_0(...)
++#define CALL_TYPECHECK_1(t, a, ...)					\
++	typecheck(t, a)
++#define CALL_TYPECHECK_2(t, a, ...)					\
++	CALL_TYPECHECK_1(__VA_ARGS__);					\
++	typecheck(t, a)
++#define CALL_TYPECHECK_3(t, a, ...)					\
++	CALL_TYPECHECK_2(__VA_ARGS__);					\
++	typecheck(t, a)
++#define CALL_TYPECHECK_4(t, a, ...)					\
++	CALL_TYPECHECK_3(__VA_ARGS__);					\
++	typecheck(t, a)
++#define CALL_TYPECHECK_5(t, a, ...)					\
++	CALL_TYPECHECK_4(__VA_ARGS__);					\
++	typecheck(t, a)
++
++#define CALL_PARM_0(...) void
++#define CALL_PARM_1(t, a, ...) t
++#define CALL_PARM_2(t, a, ...) t, CALL_PARM_1(__VA_ARGS__)
++#define CALL_PARM_3(t, a, ...) t, CALL_PARM_2(__VA_ARGS__)
++#define CALL_PARM_4(t, a, ...) t, CALL_PARM_3(__VA_ARGS__)
++#define CALL_PARM_5(t, a, ...) t, CALL_PARM_4(__VA_ARGS__)
++#define CALL_PARM_6(t, a, ...) t, CALL_PARM_5(__VA_ARGS__)
++
++/*
++ * Use call_on_stack() to call a function switching to a specified
++ * stack. Proper sign and zero extension of function arguments is
++ * done. Usage:
++ *
++ * rc = call_on_stack(nr, stack, rettype, fn, t1, a1, t2, a2, ...)
++ *
++ * - nr specifies the number of function arguments of fn.
++ * - stack specifies the stack to be used.
++ * - fn is the function to be called.
++ * - rettype is the return type of fn.
++ * - t1, a1, ... are pairs, where t1 must match the type of the first
++ *   argument of fn, t2 the second, etc. a1 is the corresponding
++ *   first function argument (not name), etc.
++ */
++#define call_on_stack(nr, stack, rettype, fn, ...)			\
++({									\
++	rettype (*__fn)(CALL_PARM_##nr(__VA_ARGS__)) = fn;		\
++	unsigned long frame = current_frame_address();			\
++	unsigned long __stack = stack;					\
++	unsigned long prev;						\
++	CALL_LARGS_##nr(__VA_ARGS__);					\
++	CALL_REGS_##nr;							\
++									\
++	CALL_TYPECHECK_##nr(__VA_ARGS__);				\
++	asm volatile(							\
++		"	lgr	%[_prev],15\n"				\
++		"	lg	15,%[_stack]\n"				\
++		"	stg	%[_frame],%[_bc](15)\n"			\
++		"	brasl	14,%[_fn]\n"				\
++		"	lgr	15,%[_prev]\n"				\
++		: [_prev] "=&d" (prev), CALL_FMT_##nr			\
++		: [_stack] "R" (__stack),				\
++		  [_bc] "i" (offsetof(struct stack_frame, back_chain)),	\
++		  [_frame] "d" (frame),					\
++		  [_fn] "X" (__fn) : CALL_CLOBBER_##nr);		\
++	(rettype)r2;							\
++})
++
+ #define CALL_ON_STACK_NORETURN(fn, stack)				\
+ ({									\
+ 	asm volatile(							\
+diff --git a/arch/s390/kernel/traps.c b/arch/s390/kernel/traps.c
+index 8d1e8a1a97dfd..16934fa19069b 100644
+--- a/arch/s390/kernel/traps.c
++++ b/arch/s390/kernel/traps.c
+@@ -272,6 +272,8 @@ static void __init test_monitor_call(void)
+ {
+ 	int val = 1;
+ 
++	if (!IS_ENABLED(CONFIG_BUG))
++		return;
+ 	asm volatile(
+ 		"	mc	0,0\n"
+ 		"0:	xgr	%0,%0\n"
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 16159950fcf5b..9c936d06fb61a 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -3752,11 +3752,11 @@ static int skx_iio_set_mapping(struct intel_uncore_type *type)
+ 	/* One more for NULL. */
+ 	attrs = kcalloc((uncore_max_dies() + 1), sizeof(*attrs), GFP_KERNEL);
+ 	if (!attrs)
+-		goto err;
++		goto clear_topology;
+ 
+ 	eas = kcalloc(uncore_max_dies(), sizeof(*eas), GFP_KERNEL);
+ 	if (!eas)
+-		goto err;
++		goto clear_attrs;
+ 
+ 	for (die = 0; die < uncore_max_dies(); die++) {
+ 		sprintf(buf, "die%ld", die);
+@@ -3777,7 +3777,9 @@ err:
+ 	for (; die >= 0; die--)
+ 		kfree(eas[die].attr.attr.name);
+ 	kfree(eas);
++clear_attrs:
+ 	kfree(attrs);
++clear_topology:
+ 	kfree(type->topology);
+ clear_attr_update:
+ 	type->attr_update = NULL;
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index a11796bbb9cee..d5fa772560584 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -564,6 +564,9 @@ static void bpf_tail_call_direct_fixup(struct bpf_prog *prog)
+ 
+ 	for (i = 0; i < prog->aux->size_poke_tab; i++) {
+ 		poke = &prog->aux->poke_tab[i];
++		if (poke->aux && poke->aux != prog->aux)
++			continue;
++
+ 		WARN_ON_ONCE(READ_ONCE(poke->tailcall_target_stable));
+ 
+ 		if (poke->reason != BPF_POKE_REASON_TAIL_CALL)
+diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
+index 5a5a1da01a00f..f0c822952c201 100644
+--- a/drivers/dma-buf/sync_file.c
++++ b/drivers/dma-buf/sync_file.c
+@@ -211,8 +211,8 @@ static struct sync_file *sync_file_merge(const char *name, struct sync_file *a,
+ 					 struct sync_file *b)
+ {
+ 	struct sync_file *sync_file;
+-	struct dma_fence **fences, **nfences, **a_fences, **b_fences;
+-	int i, i_a, i_b, num_fences, a_num_fences, b_num_fences;
++	struct dma_fence **fences = NULL, **nfences, **a_fences, **b_fences;
++	int i = 0, i_a, i_b, num_fences, a_num_fences, b_num_fences;
+ 
+ 	sync_file = sync_file_alloc();
+ 	if (!sync_file)
+@@ -236,7 +236,7 @@ static struct sync_file *sync_file_merge(const char *name, struct sync_file *a,
+ 	 * If a sync_file can only be created with sync_file_merge
+ 	 * and sync_file_create, this is a reasonable assumption.
+ 	 */
+-	for (i = i_a = i_b = 0; i_a < a_num_fences && i_b < b_num_fences; ) {
++	for (i_a = i_b = 0; i_a < a_num_fences && i_b < b_num_fences; ) {
+ 		struct dma_fence *pt_a = a_fences[i_a];
+ 		struct dma_fence *pt_b = b_fences[i_b];
+ 
+@@ -278,15 +278,16 @@ static struct sync_file *sync_file_merge(const char *name, struct sync_file *a,
+ 		fences = nfences;
+ 	}
+ 
+-	if (sync_file_set_fence(sync_file, fences, i) < 0) {
+-		kfree(fences);
++	if (sync_file_set_fence(sync_file, fences, i) < 0)
+ 		goto err;
+-	}
+ 
+ 	strlcpy(sync_file->user_name, name, sizeof(sync_file->user_name));
+ 	return sync_file;
+ 
+ err:
++	while (i)
++		dma_fence_put(fences[--i]);
++	kfree(fences);
+ 	fput(sync_file->file);
+ 	return NULL;
+ 
+diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
+index 5fa6b3ca0a385..c08968c5ddf8c 100644
+--- a/drivers/firmware/Kconfig
++++ b/drivers/firmware/Kconfig
+@@ -9,7 +9,7 @@ menu "Firmware Drivers"
+ config ARM_SCMI_PROTOCOL
+ 	tristate "ARM System Control and Management Interface (SCMI) Message Protocol"
+ 	depends on ARM || ARM64 || COMPILE_TEST
+-	depends on MAILBOX
++	depends on MAILBOX || HAVE_ARM_SMCCC_DISCOVERY
+ 	help
+ 	  ARM System Control and Management Interface (SCMI) protocol is a
+ 	  set of operating system-independent software interfaces that are
+diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
+index 65063fa948d41..34b7ae7980a3e 100644
+--- a/drivers/firmware/arm_scmi/common.h
++++ b/drivers/firmware/arm_scmi/common.h
+@@ -243,7 +243,7 @@ struct scmi_desc {
+ };
+ 
+ extern const struct scmi_desc scmi_mailbox_desc;
+-#ifdef CONFIG_HAVE_ARM_SMCCC
++#ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
+ extern const struct scmi_desc scmi_smc_desc;
+ #endif
+ 
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index f9901fadb3a43..af4560dab6b40 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -922,7 +922,9 @@ ATTRIBUTE_GROUPS(versions);
+ 
+ /* Each compatible listed below must have descriptor associated with it */
+ static const struct of_device_id scmi_of_match[] = {
++#ifdef CONFIG_MAILBOX
+ 	{ .compatible = "arm,scmi", .data = &scmi_mailbox_desc },
++#endif
+ #ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
+ 	{ .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
+ #endif
+diff --git a/drivers/firmware/tegra/Makefile b/drivers/firmware/tegra/Makefile
+index 49c87e00fafb3..620cf3fdd6074 100644
+--- a/drivers/firmware/tegra/Makefile
++++ b/drivers/firmware/tegra/Makefile
+@@ -3,6 +3,7 @@ tegra-bpmp-y			= bpmp.o
+ tegra-bpmp-$(CONFIG_ARCH_TEGRA_210_SOC)	+= bpmp-tegra210.o
+ tegra-bpmp-$(CONFIG_ARCH_TEGRA_186_SOC)	+= bpmp-tegra186.o
+ tegra-bpmp-$(CONFIG_ARCH_TEGRA_194_SOC)	+= bpmp-tegra186.o
++tegra-bpmp-$(CONFIG_ARCH_TEGRA_234_SOC)	+= bpmp-tegra186.o
+ tegra-bpmp-$(CONFIG_DEBUG_FS)	+= bpmp-debugfs.o
+ obj-$(CONFIG_TEGRA_BPMP)	+= tegra-bpmp.o
+ obj-$(CONFIG_TEGRA_IVC)		+= ivc.o
+diff --git a/drivers/firmware/tegra/bpmp-private.h b/drivers/firmware/tegra/bpmp-private.h
+index 54d560c48398e..182bfe3965161 100644
+--- a/drivers/firmware/tegra/bpmp-private.h
++++ b/drivers/firmware/tegra/bpmp-private.h
+@@ -24,7 +24,8 @@ struct tegra_bpmp_ops {
+ };
+ 
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) || \
+-    IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC)
++    IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC) || \
++    IS_ENABLED(CONFIG_ARCH_TEGRA_234_SOC)
+ extern const struct tegra_bpmp_ops tegra186_bpmp_ops;
+ #endif
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+diff --git a/drivers/firmware/tegra/bpmp.c b/drivers/firmware/tegra/bpmp.c
+index 0742a90cb844e..5654c5e9862b1 100644
+--- a/drivers/firmware/tegra/bpmp.c
++++ b/drivers/firmware/tegra/bpmp.c
+@@ -809,7 +809,8 @@ static const struct dev_pm_ops tegra_bpmp_pm_ops = {
+ };
+ 
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) || \
+-    IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC)
++    IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC) || \
++    IS_ENABLED(CONFIG_ARCH_TEGRA_234_SOC)
+ static const struct tegra_bpmp_soc tegra186_soc = {
+ 	.channels = {
+ 		.cpu_tx = {
+diff --git a/drivers/firmware/turris-mox-rwtm.c b/drivers/firmware/turris-mox-rwtm.c
+index 03f1eac9ad69b..0bef988580ada 100644
+--- a/drivers/firmware/turris-mox-rwtm.c
++++ b/drivers/firmware/turris-mox-rwtm.c
+@@ -569,6 +569,7 @@ static int turris_mox_rwtm_remove(struct platform_device *pdev)
+ 
+ static const struct of_device_id turris_mox_rwtm_match[] = {
+ 	{ .compatible = "cznic,turris-mox-rwtm", },
++	{ .compatible = "marvell,armada-3700-rwtm-firmware", },
+ 	{ },
+ };
+ 
+diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35510.c b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+index ef70140c5b09d..873cbd38e6d3a 100644
+--- a/drivers/gpu/drm/panel/panel-novatek-nt35510.c
++++ b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+@@ -706,9 +706,7 @@ static int nt35510_power_on(struct nt35510 *nt)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = nt35510_read_id(nt);
+-	if (ret)
+-		return ret;
++	nt35510_read_id(nt);
+ 
+ 	/* Set up stuff in  manufacturer control, page 1 */
+ 	ret = nt35510_send_long(nt, dsi, MCS_CMD_MAUCCTR,
+diff --git a/drivers/memory/tegra/tegra124-emc.c b/drivers/memory/tegra/tegra124-emc.c
+index 76ace42a688a7..dae816e840a96 100644
+--- a/drivers/memory/tegra/tegra124-emc.c
++++ b/drivers/memory/tegra/tegra124-emc.c
+@@ -265,8 +265,8 @@
+ #define EMC_PUTERM_ADJ				0x574
+ 
+ #define DRAM_DEV_SEL_ALL			0
+-#define DRAM_DEV_SEL_0				(2 << 30)
+-#define DRAM_DEV_SEL_1				(1 << 30)
++#define DRAM_DEV_SEL_0				BIT(31)
++#define DRAM_DEV_SEL_1				BIT(30)
+ 
+ #define EMC_CFG_POWER_FEATURES_MASK		\
+ 	(EMC_CFG_DYN_SREF | EMC_CFG_DRAM_ACPD | EMC_CFG_DRAM_CLKSTOP_SR | \
+diff --git a/drivers/memory/tegra/tegra30-emc.c b/drivers/memory/tegra/tegra30-emc.c
+index 055af0e08a2e5..1bd6d3d827aa9 100644
+--- a/drivers/memory/tegra/tegra30-emc.c
++++ b/drivers/memory/tegra/tegra30-emc.c
+@@ -145,8 +145,8 @@
+ #define EMC_SELF_REF_CMD_ENABLED		BIT(0)
+ 
+ #define DRAM_DEV_SEL_ALL			(0 << 30)
+-#define DRAM_DEV_SEL_0				(2 << 30)
+-#define DRAM_DEV_SEL_1				(1 << 30)
++#define DRAM_DEV_SEL_0				BIT(31)
++#define DRAM_DEV_SEL_1				BIT(30)
+ #define DRAM_BROADCAST(num) \
+ 	((num) > 1 ? DRAM_DEV_SEL_ALL : DRAM_DEV_SEL_0)
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 70ec17f3c3007..429ce8638c2b3 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3407,7 +3407,7 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
+ 	.port_set_cmode = mv88e6341_port_set_cmode,
+ 	.port_setup_message_port = mv88e6xxx_setup_message_port,
+ 	.stats_snapshot = mv88e6390_g1_stats_snapshot,
+-	.stats_set_histogram = mv88e6095_g1_stats_set_histogram,
++	.stats_set_histogram = mv88e6390_g1_stats_set_histogram,
+ 	.stats_get_sset_count = mv88e6320_stats_get_sset_count,
+ 	.stats_get_strings = mv88e6320_stats_get_strings,
+ 	.stats_get_stats = mv88e6390_stats_get_stats,
+@@ -3417,6 +3417,9 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
+ 	.mgmt_rsvd2cpu =  mv88e6390_g1_mgmt_rsvd2cpu,
+ 	.pot_clear = mv88e6xxx_g2_pot_clear,
+ 	.reset = mv88e6352_g1_reset,
++	.rmu_disable = mv88e6390_g1_rmu_disable,
++	.atu_get_hash = mv88e6165_g1_atu_get_hash,
++	.atu_set_hash = mv88e6165_g1_atu_set_hash,
+ 	.vtu_getnext = mv88e6352_g1_vtu_getnext,
+ 	.vtu_loadpurge = mv88e6352_g1_vtu_loadpurge,
+ 	.serdes_power = mv88e6390_serdes_power,
+@@ -3613,6 +3616,7 @@ static const struct mv88e6xxx_ops mv88e6175_ops = {
+ 	.port_set_rgmii_delay = mv88e6352_port_set_rgmii_delay,
+ 	.port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
+ 	.port_tag_remap = mv88e6095_port_tag_remap,
++	.port_set_policy = mv88e6352_port_set_policy,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+ 	.port_set_ether_type = mv88e6351_port_set_ether_type,
+@@ -4173,7 +4177,7 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
+ 	.port_set_cmode = mv88e6341_port_set_cmode,
+ 	.port_setup_message_port = mv88e6xxx_setup_message_port,
+ 	.stats_snapshot = mv88e6390_g1_stats_snapshot,
+-	.stats_set_histogram = mv88e6095_g1_stats_set_histogram,
++	.stats_set_histogram = mv88e6390_g1_stats_set_histogram,
+ 	.stats_get_sset_count = mv88e6320_stats_get_sset_count,
+ 	.stats_get_strings = mv88e6320_stats_get_strings,
+ 	.stats_get_stats = mv88e6390_stats_get_stats,
+@@ -4183,6 +4187,9 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
+ 	.mgmt_rsvd2cpu =  mv88e6390_g1_mgmt_rsvd2cpu,
+ 	.pot_clear = mv88e6xxx_g2_pot_clear,
+ 	.reset = mv88e6352_g1_reset,
++	.rmu_disable = mv88e6390_g1_rmu_disable,
++	.atu_get_hash = mv88e6165_g1_atu_get_hash,
++	.atu_set_hash = mv88e6165_g1_atu_set_hash,
+ 	.vtu_getnext = mv88e6352_g1_vtu_getnext,
+ 	.vtu_loadpurge = mv88e6352_g1_vtu_loadpurge,
+ 	.serdes_power = mv88e6390_serdes_power,
+@@ -4253,6 +4260,7 @@ static const struct mv88e6xxx_ops mv88e6351_ops = {
+ 	.port_set_rgmii_delay = mv88e6352_port_set_rgmii_delay,
+ 	.port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
+ 	.port_tag_remap = mv88e6095_port_tag_remap,
++	.port_set_policy = mv88e6352_port_set_policy,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+ 	.port_set_ether_type = mv88e6351_port_set_ether_type,
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 41f7f078cd27c..db74241935ab4 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -1640,7 +1640,8 @@ static void bcmgenet_power_up(struct bcmgenet_priv *priv,
+ 
+ 	switch (mode) {
+ 	case GENET_POWER_PASSIVE:
+-		reg &= ~(EXT_PWR_DOWN_DLL | EXT_PWR_DOWN_BIAS);
++		reg &= ~(EXT_PWR_DOWN_DLL | EXT_PWR_DOWN_BIAS |
++			 EXT_ENERGY_DET_MASK);
+ 		if (GENET_IS_V5(priv)) {
+ 			reg &= ~(EXT_PWR_DOWN_PHY_EN |
+ 				 EXT_PWR_DOWN_PHY_RD |
+@@ -3237,15 +3238,21 @@ static void bcmgenet_get_hw_addr(struct bcmgenet_priv *priv,
+ /* Returns a reusable dma control register value */
+ static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
+ {
++	unsigned int i;
+ 	u32 reg;
+ 	u32 dma_ctrl;
+ 
+ 	/* disable DMA */
+ 	dma_ctrl = 1 << (DESC_INDEX + DMA_RING_BUF_EN_SHIFT) | DMA_EN;
++	for (i = 0; i < priv->hw_params->tx_queues; i++)
++		dma_ctrl |= (1 << (i + DMA_RING_BUF_EN_SHIFT));
+ 	reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
+ 	reg &= ~dma_ctrl;
+ 	bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
+ 
++	dma_ctrl = 1 << (DESC_INDEX + DMA_RING_BUF_EN_SHIFT) | DMA_EN;
++	for (i = 0; i < priv->hw_params->rx_queues; i++)
++		dma_ctrl |= (1 << (i + DMA_RING_BUF_EN_SHIFT));
+ 	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
+ 	reg &= ~dma_ctrl;
+ 	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
+@@ -3292,7 +3299,6 @@ static int bcmgenet_open(struct net_device *dev)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 	unsigned long dma_ctrl;
+-	u32 reg;
+ 	int ret;
+ 
+ 	netif_dbg(priv, ifup, dev, "bcmgenet_open\n");
+@@ -3318,12 +3324,6 @@ static int bcmgenet_open(struct net_device *dev)
+ 
+ 	bcmgenet_set_hw_addr(priv, dev->dev_addr);
+ 
+-	if (priv->internal_phy) {
+-		reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+-		reg |= EXT_ENERGY_DET_MASK;
+-		bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+-	}
+-
+ 	/* Disable RX/TX DMA and flush TX queues */
+ 	dma_ctrl = bcmgenet_dma_disable(priv);
+ 
+@@ -4139,7 +4139,6 @@ static int bcmgenet_resume(struct device *d)
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 	struct bcmgenet_rxnfc_rule *rule;
+ 	unsigned long dma_ctrl;
+-	u32 reg;
+ 	int ret;
+ 
+ 	if (!netif_running(dev))
+@@ -4176,12 +4175,6 @@ static int bcmgenet_resume(struct device *d)
+ 		if (rule->state != BCMGENET_RXNFC_STATE_UNUSED)
+ 			bcmgenet_hfb_create_rxnfc_filter(priv, rule);
+ 
+-	if (priv->internal_phy) {
+-		reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+-		reg |= EXT_ENERGY_DET_MASK;
+-		bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+-	}
+-
+ 	/* Disable RX/TX DMA and flush TX queues */
+ 	dma_ctrl = bcmgenet_dma_disable(priv);
+ 
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+index 1c86eddb1b510..e84ad587fb214 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+@@ -187,12 +187,6 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv,
+ 	reg |= CMD_RX_EN;
+ 	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+ 
+-	if (priv->hw_params->flags & GENET_HAS_EXT) {
+-		reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+-		reg &= ~EXT_ENERGY_DET_MASK;
+-		bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+-	}
+-
+ 	reg = UMAC_IRQ_MPD_R;
+ 	if (hfb_enable)
+ 		reg |=  UMAC_IRQ_HFB_SM | UMAC_IRQ_HFB_MM;
+diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
+index a4380c45f6689..a0e1ccca505b7 100644
+--- a/drivers/net/ethernet/moxa/moxart_ether.c
++++ b/drivers/net/ethernet/moxa/moxart_ether.c
+@@ -541,10 +541,8 @@ static int moxart_mac_probe(struct platform_device *pdev)
+ 	SET_NETDEV_DEV(ndev, &pdev->dev);
+ 
+ 	ret = register_netdev(ndev);
+-	if (ret) {
+-		free_netdev(ndev);
++	if (ret)
+ 		goto init_fail;
+-	}
+ 
+ 	netdev_dbg(ndev, "%s: IRQ=%d address=%pM\n",
+ 		   __func__, ndev->irq, ndev->dev_addr);
+diff --git a/drivers/net/ethernet/qualcomm/emac/emac.c b/drivers/net/ethernet/qualcomm/emac/emac.c
+index 8543bf3c34840..ad655f0a4965c 100644
+--- a/drivers/net/ethernet/qualcomm/emac/emac.c
++++ b/drivers/net/ethernet/qualcomm/emac/emac.c
+@@ -735,12 +735,13 @@ static int emac_remove(struct platform_device *pdev)
+ 
+ 	put_device(&adpt->phydev->mdio.dev);
+ 	mdiobus_unregister(adpt->mii_bus);
+-	free_netdev(netdev);
+ 
+ 	if (adpt->phy.digital)
+ 		iounmap(adpt->phy.digital);
+ 	iounmap(adpt->phy.base);
+ 
++	free_netdev(netdev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/ti/tlan.c b/drivers/net/ethernet/ti/tlan.c
+index 267c080ee084b..0072ffa6e19e3 100644
+--- a/drivers/net/ethernet/ti/tlan.c
++++ b/drivers/net/ethernet/ti/tlan.c
+@@ -313,9 +313,8 @@ static void tlan_remove_one(struct pci_dev *pdev)
+ 	pci_release_regions(pdev);
+ #endif
+ 
+-	free_netdev(dev);
+-
+ 	cancel_work_sync(&priv->tlan_tqueue);
++	free_netdev(dev);
+ }
+ 
+ static void tlan_start(struct net_device *dev)
+diff --git a/drivers/net/fddi/defza.c b/drivers/net/fddi/defza.c
+index eaf85db53a5ef..2e7d6cc16b2d1 100644
+--- a/drivers/net/fddi/defza.c
++++ b/drivers/net/fddi/defza.c
+@@ -1504,9 +1504,8 @@ err_out_resource:
+ 	release_mem_region(start, len);
+ 
+ err_out_kfree:
+-	free_netdev(dev);
+-
+ 	pr_err("%s: initialization failure, aborting!\n", fp->name);
++	free_netdev(dev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
+index 3811f1bde84e7..b80ed2ffd45eb 100644
+--- a/drivers/net/netdevsim/ipsec.c
++++ b/drivers/net/netdevsim/ipsec.c
+@@ -85,7 +85,7 @@ static int nsim_ipsec_parse_proto_keys(struct xfrm_state *xs,
+ 				       u32 *mykey, u32 *mysalt)
+ {
+ 	const char aes_gcm_name[] = "rfc4106(gcm(aes))";
+-	struct net_device *dev = xs->xso.dev;
++	struct net_device *dev = xs->xso.real_dev;
+ 	unsigned char *key_data;
+ 	char *alg_name = NULL;
+ 	int key_len;
+@@ -134,7 +134,7 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs)
+ 	u16 sa_idx;
+ 	int ret;
+ 
+-	dev = xs->xso.dev;
++	dev = xs->xso.real_dev;
+ 	ns = netdev_priv(dev);
+ 	ipsec = &ns->ipsec;
+ 
+@@ -194,7 +194,7 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs)
+ 
+ static void nsim_ipsec_del_sa(struct xfrm_state *xs)
+ {
+-	struct netdevsim *ns = netdev_priv(xs->xso.dev);
++	struct netdevsim *ns = netdev_priv(xs->xso.real_dev);
+ 	struct nsim_ipsec *ipsec = &ns->ipsec;
+ 	u16 sa_idx;
+ 
+@@ -211,7 +211,7 @@ static void nsim_ipsec_del_sa(struct xfrm_state *xs)
+ 
+ static bool nsim_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *xs)
+ {
+-	struct netdevsim *ns = netdev_priv(xs->xso.dev);
++	struct netdevsim *ns = netdev_priv(xs->xso.real_dev);
+ 	struct nsim_ipsec *ipsec = &ns->ipsec;
+ 
+ 	ipsec->ok++;
+diff --git a/drivers/net/vmxnet3/vmxnet3_ethtool.c b/drivers/net/vmxnet3/vmxnet3_ethtool.c
+index 7ec8652f2c269..66590e95c2929 100644
+--- a/drivers/net/vmxnet3/vmxnet3_ethtool.c
++++ b/drivers/net/vmxnet3/vmxnet3_ethtool.c
+@@ -1,7 +1,7 @@
+ /*
+  * Linux driver for VMware's vmxnet3 ethernet NIC.
+  *
+- * Copyright (C) 2008-2020, VMware, Inc. All Rights Reserved.
++ * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved.
+  *
+  * This program is free software; you can redistribute it and/or modify it
+  * under the terms of the GNU General Public License as published by the
+@@ -26,6 +26,10 @@
+ 
+ 
+ #include "vmxnet3_int.h"
++#include <net/vxlan.h>
++#include <net/geneve.h>
++
++#define VXLAN_UDP_PORT 8472
+ 
+ struct vmxnet3_stat_desc {
+ 	char desc[ETH_GSTRING_LEN];
+@@ -277,6 +281,8 @@ netdev_features_t vmxnet3_features_check(struct sk_buff *skb,
+ 	if (VMXNET3_VERSION_GE_4(adapter) &&
+ 	    skb->encapsulation && skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		u8 l4_proto = 0;
++		u16 port;
++		struct udphdr *udph;
+ 
+ 		switch (vlan_get_protocol(skb)) {
+ 		case htons(ETH_P_IP):
+@@ -289,8 +295,20 @@ netdev_features_t vmxnet3_features_check(struct sk_buff *skb,
+ 			return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
+ 		}
+ 
+-		if (l4_proto != IPPROTO_UDP)
++		switch (l4_proto) {
++		case IPPROTO_UDP:
++			udph = udp_hdr(skb);
++			port = be16_to_cpu(udph->dest);
++			/* Check if offloaded port is supported */
++			if (port != GENEVE_UDP_PORT &&
++			    port != IANA_VXLAN_UDP_PORT &&
++			    port != VXLAN_UDP_PORT) {
++				return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
++			}
++			break;
++		default:
+ 			return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
++		}
+ 	}
+ 	return features;
+ }
+diff --git a/drivers/reset/reset-ti-syscon.c b/drivers/reset/reset-ti-syscon.c
+index ef97c4dbbb4ee..742056a2a7561 100644
+--- a/drivers/reset/reset-ti-syscon.c
++++ b/drivers/reset/reset-ti-syscon.c
+@@ -58,8 +58,8 @@ struct ti_syscon_reset_data {
+ 	unsigned int nr_controls;
+ };
+ 
+-#define to_ti_syscon_reset_data(rcdev)	\
+-	container_of(rcdev, struct ti_syscon_reset_data, rcdev)
++#define to_ti_syscon_reset_data(_rcdev)	\
++	container_of(_rcdev, struct ti_syscon_reset_data, rcdev)
+ 
+ /**
+  * ti_syscon_reset_assert() - assert device reset
+diff --git a/drivers/rtc/rtc-max77686.c b/drivers/rtc/rtc-max77686.c
+index d51cc12114cbe..eae7cb9faf1eb 100644
+--- a/drivers/rtc/rtc-max77686.c
++++ b/drivers/rtc/rtc-max77686.c
+@@ -717,8 +717,8 @@ static int max77686_init_rtc_regmap(struct max77686_rtc_info *info)
+ 
+ add_rtc_irq:
+ 	ret = regmap_add_irq_chip(info->rtc_regmap, info->rtc_irq,
+-				  IRQF_TRIGGER_FALLING | IRQF_ONESHOT |
+-				  IRQF_SHARED, 0, info->drv_data->rtc_irq_chip,
++				  IRQF_ONESHOT | IRQF_SHARED,
++				  0, info->drv_data->rtc_irq_chip,
+ 				  &info->rtc_irq_data);
+ 	if (ret < 0) {
+ 		dev_err(info->dev, "Failed to add RTC irq chip: %d\n", ret);
+diff --git a/drivers/rtc/rtc-mxc_v2.c b/drivers/rtc/rtc-mxc_v2.c
+index 91534560fe2a2..d349cef09cb7c 100644
+--- a/drivers/rtc/rtc-mxc_v2.c
++++ b/drivers/rtc/rtc-mxc_v2.c
+@@ -373,6 +373,7 @@ static const struct of_device_id mxc_ids[] = {
+ 	{ .compatible = "fsl,imx53-rtc", },
+ 	{}
+ };
++MODULE_DEVICE_TABLE(of, mxc_ids);
+ 
+ static struct platform_driver mxc_rtc_driver = {
+ 	.driver = {
+diff --git a/drivers/scsi/aic7xxx/aic7xxx_core.c b/drivers/scsi/aic7xxx/aic7xxx_core.c
+index 725bb7f58054b..12fed15dec66f 100644
+--- a/drivers/scsi/aic7xxx/aic7xxx_core.c
++++ b/drivers/scsi/aic7xxx/aic7xxx_core.c
+@@ -493,7 +493,7 @@ ahc_inq(struct ahc_softc *ahc, u_int port)
+ 	return ((ahc_inb(ahc, port))
+ 	      | (ahc_inb(ahc, port+1) << 8)
+ 	      | (ahc_inb(ahc, port+2) << 16)
+-	      | (ahc_inb(ahc, port+3) << 24)
++	      | (((uint64_t)ahc_inb(ahc, port+3)) << 24)
+ 	      | (((uint64_t)ahc_inb(ahc, port+4)) << 32)
+ 	      | (((uint64_t)ahc_inb(ahc, port+5)) << 40)
+ 	      | (((uint64_t)ahc_inb(ahc, port+6)) << 48)
+diff --git a/drivers/scsi/aic94xx/aic94xx_init.c b/drivers/scsi/aic94xx/aic94xx_init.c
+index a195bfe9eccc0..7a78606598c4b 100644
+--- a/drivers/scsi/aic94xx/aic94xx_init.c
++++ b/drivers/scsi/aic94xx/aic94xx_init.c
+@@ -53,6 +53,7 @@ static struct scsi_host_template aic94xx_sht = {
+ 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
+ 	.eh_device_reset_handler	= sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler	= sas_eh_target_reset_handler,
++	.slave_alloc		= sas_slave_alloc,
+ 	.target_destroy		= sas_target_destroy,
+ 	.ioctl			= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+index 2e529d67de730..2c1028183b242 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+@@ -1769,6 +1769,7 @@ static struct scsi_host_template sht_v1_hw = {
+ 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
+ 	.eh_device_reset_handler = sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler = sas_eh_target_reset_handler,
++	.slave_alloc		= sas_slave_alloc,
+ 	.target_destroy		= sas_target_destroy,
+ 	.ioctl			= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+index 6ef8730c61a6e..b75d54339e40c 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+@@ -3545,6 +3545,7 @@ static struct scsi_host_template sht_v2_hw = {
+ 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
+ 	.eh_device_reset_handler = sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler = sas_eh_target_reset_handler,
++	.slave_alloc		= sas_slave_alloc,
+ 	.target_destroy		= sas_target_destroy,
+ 	.ioctl			= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index e9a82a390672c..50a1c3478a6e0 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -3126,6 +3126,7 @@ static struct scsi_host_template sht_v3_hw = {
+ 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
+ 	.eh_device_reset_handler = sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler = sas_eh_target_reset_handler,
++	.slave_alloc		= sas_slave_alloc,
+ 	.target_destroy		= sas_target_destroy,
+ 	.ioctl			= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/isci/init.c b/drivers/scsi/isci/init.c
+index 93bc9019667f1..9d7cc62ace2e6 100644
+--- a/drivers/scsi/isci/init.c
++++ b/drivers/scsi/isci/init.c
+@@ -167,6 +167,7 @@ static struct scsi_host_template isci_sht = {
+ 	.eh_abort_handler		= sas_eh_abort_handler,
+ 	.eh_device_reset_handler        = sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler        = sas_eh_target_reset_handler,
++	.slave_alloc			= sas_slave_alloc,
+ 	.target_destroy			= sas_target_destroy,
+ 	.ioctl				= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/libfc/fc_rport.c b/drivers/scsi/libfc/fc_rport.c
+index a60b228d13f16..f40edb0dab704 100644
+--- a/drivers/scsi/libfc/fc_rport.c
++++ b/drivers/scsi/libfc/fc_rport.c
+@@ -1162,6 +1162,7 @@ static void fc_rport_prli_resp(struct fc_seq *sp, struct fc_frame *fp,
+ 		resp_code = (pp->spp.spp_flags & FC_SPP_RESP_MASK);
+ 		FC_RPORT_DBG(rdata, "PRLI spp_flags = 0x%x spp_type 0x%x\n",
+ 			     pp->spp.spp_flags, pp->spp.spp_type);
++
+ 		rdata->spp_type = pp->spp.spp_type;
+ 		if (resp_code != FC_SPP_RESP_ACK) {
+ 			if (resp_code == FC_SPP_RESP_CONF)
+@@ -1184,11 +1185,13 @@ static void fc_rport_prli_resp(struct fc_seq *sp, struct fc_frame *fp,
+ 		/*
+ 		 * Call prli provider if we should act as a target
+ 		 */
+-		prov = fc_passive_prov[rdata->spp_type];
+-		if (prov) {
+-			memset(&temp_spp, 0, sizeof(temp_spp));
+-			prov->prli(rdata, pp->prli.prli_spp_len,
+-				   &pp->spp, &temp_spp);
++		if (rdata->spp_type < FC_FC4_PROV_SIZE) {
++			prov = fc_passive_prov[rdata->spp_type];
++			if (prov) {
++				memset(&temp_spp, 0, sizeof(temp_spp));
++				prov->prli(rdata, pp->prli.prli_spp_len,
++					   &pp->spp, &temp_spp);
++			}
+ 		}
+ 		/*
+ 		 * Check if the image pair could be established
+diff --git a/drivers/scsi/libsas/sas_scsi_host.c b/drivers/scsi/libsas/sas_scsi_host.c
+index 1bf939818c981..ee44a0d7730b4 100644
+--- a/drivers/scsi/libsas/sas_scsi_host.c
++++ b/drivers/scsi/libsas/sas_scsi_host.c
+@@ -911,6 +911,14 @@ void sas_task_abort(struct sas_task *task)
+ 		blk_abort_request(sc->request);
+ }
+ 
++int sas_slave_alloc(struct scsi_device *sdev)
++{
++	if (dev_is_sata(sdev_to_domain_dev(sdev)) && sdev->lun)
++		return -ENXIO;
++
++	return 0;
++}
++
+ void sas_target_destroy(struct scsi_target *starget)
+ {
+ 	struct domain_device *found_dev = starget->hostdata;
+@@ -957,5 +965,6 @@ EXPORT_SYMBOL_GPL(sas_task_abort);
+ EXPORT_SYMBOL_GPL(sas_phy_reset);
+ EXPORT_SYMBOL_GPL(sas_eh_device_reset_handler);
+ EXPORT_SYMBOL_GPL(sas_eh_target_reset_handler);
++EXPORT_SYMBOL_GPL(sas_slave_alloc);
+ EXPORT_SYMBOL_GPL(sas_target_destroy);
+ EXPORT_SYMBOL_GPL(sas_ioctl);
+diff --git a/drivers/scsi/mvsas/mv_init.c b/drivers/scsi/mvsas/mv_init.c
+index 6aa2697c4a15d..b03c0f35d7b04 100644
+--- a/drivers/scsi/mvsas/mv_init.c
++++ b/drivers/scsi/mvsas/mv_init.c
+@@ -46,6 +46,7 @@ static struct scsi_host_template mvs_sht = {
+ 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
+ 	.eh_device_reset_handler = sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler = sas_eh_target_reset_handler,
++	.slave_alloc		= sas_slave_alloc,
+ 	.target_destroy		= sas_target_destroy,
+ 	.ioctl			= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
+index 0c0c886c7371d..13b8ddec61894 100644
+--- a/drivers/scsi/pm8001/pm8001_init.c
++++ b/drivers/scsi/pm8001/pm8001_init.c
+@@ -101,6 +101,7 @@ static struct scsi_host_template pm8001_sht = {
+ 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
+ 	.eh_device_reset_handler = sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler = sas_eh_target_reset_handler,
++	.slave_alloc		= sas_slave_alloc,
+ 	.target_destroy		= sas_target_destroy,
+ 	.ioctl			= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/qedf/qedf_io.c b/drivers/scsi/qedf/qedf_io.c
+index 4869ef813dc4f..63f99f4eeed97 100644
+--- a/drivers/scsi/qedf/qedf_io.c
++++ b/drivers/scsi/qedf/qedf_io.c
+@@ -1520,9 +1520,19 @@ void qedf_process_error_detect(struct qedf_ctx *qedf, struct fcoe_cqe *cqe,
+ {
+ 	int rval;
+ 
++	if (io_req == NULL) {
++		QEDF_INFO(NULL, QEDF_LOG_IO, "io_req is NULL.\n");
++		return;
++	}
++
++	if (io_req->fcport == NULL) {
++		QEDF_INFO(NULL, QEDF_LOG_IO, "fcport is NULL.\n");
++		return;
++	}
++
+ 	if (!cqe) {
+ 		QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_IO,
+-			  "cqe is NULL for io_req %p\n", io_req);
++			"cqe is NULL for io_req %p\n", io_req);
+ 		return;
+ 	}
+ 
+@@ -1538,6 +1548,16 @@ void qedf_process_error_detect(struct qedf_ctx *qedf, struct fcoe_cqe *cqe,
+ 		  le32_to_cpu(cqe->cqe_info.err_info.rx_buf_off),
+ 		  le32_to_cpu(cqe->cqe_info.err_info.rx_id));
+ 
++	/* When flush is active, let the cmds be flushed out from the cleanup context */
++	if (test_bit(QEDF_RPORT_IN_TARGET_RESET, &io_req->fcport->flags) ||
++		(test_bit(QEDF_RPORT_IN_LUN_RESET, &io_req->fcport->flags) &&
++		 io_req->sc_cmd->device->lun == (u64)io_req->fcport->lun_reset_lun)) {
++		QEDF_ERR(&qedf->dbg_ctx,
++			"Dropping EQE for xid=0x%x as fcport is flushing",
++			io_req->xid);
++		return;
++	}
++
+ 	if (qedf->stop_io_on_error) {
+ 		qedf_stop_all_io(qedf);
+ 		return;
+diff --git a/drivers/soc/tegra/fuse/fuse-tegra30.c b/drivers/soc/tegra/fuse/fuse-tegra30.c
+index 9ea7f01684573..c1aa7815bd6ec 100644
+--- a/drivers/soc/tegra/fuse/fuse-tegra30.c
++++ b/drivers/soc/tegra/fuse/fuse-tegra30.c
+@@ -37,7 +37,8 @@
+     defined(CONFIG_ARCH_TEGRA_132_SOC) || \
+     defined(CONFIG_ARCH_TEGRA_210_SOC) || \
+     defined(CONFIG_ARCH_TEGRA_186_SOC) || \
+-    defined(CONFIG_ARCH_TEGRA_194_SOC)
++    defined(CONFIG_ARCH_TEGRA_194_SOC) || \
++    defined(CONFIG_ARCH_TEGRA_234_SOC)
+ static u32 tegra30_fuse_read_early(struct tegra_fuse *fuse, unsigned int offset)
+ {
+ 	if (WARN_ON(!fuse->base))
+diff --git a/drivers/thermal/imx_sc_thermal.c b/drivers/thermal/imx_sc_thermal.c
+index b01d28eca7eec..8d76dbfde6a9f 100644
+--- a/drivers/thermal/imx_sc_thermal.c
++++ b/drivers/thermal/imx_sc_thermal.c
+@@ -93,6 +93,7 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ 	for_each_available_child_of_node(np, child) {
+ 		sensor = devm_kzalloc(&pdev->dev, sizeof(*sensor), GFP_KERNEL);
+ 		if (!sensor) {
++			of_node_put(child);
+ 			of_node_put(sensor_np);
+ 			return -ENOMEM;
+ 		}
+@@ -104,6 +105,7 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ 			dev_err(&pdev->dev,
+ 				"failed to get valid sensor resource id: %d\n",
+ 				ret);
++			of_node_put(child);
+ 			break;
+ 		}
+ 
+@@ -114,6 +116,7 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ 		if (IS_ERR(sensor->tzd)) {
+ 			dev_err(&pdev->dev, "failed to register thermal zone\n");
+ 			ret = PTR_ERR(sensor->tzd);
++			of_node_put(child);
+ 			break;
+ 		}
+ 
+diff --git a/drivers/thermal/rcar_gen3_thermal.c b/drivers/thermal/rcar_gen3_thermal.c
+index 8d724d92d57f4..c4f28752d1488 100644
+--- a/drivers/thermal/rcar_gen3_thermal.c
++++ b/drivers/thermal/rcar_gen3_thermal.c
+@@ -366,7 +366,7 @@ static int rcar_gen3_thermal_probe(struct platform_device *pdev)
+ {
+ 	struct rcar_gen3_thermal_priv *priv;
+ 	struct device *dev = &pdev->dev;
+-	const int *rcar_gen3_ths_tj_1 = of_device_get_match_data(dev);
++	const int *ths_tj_1 = of_device_get_match_data(dev);
+ 	struct resource *res;
+ 	struct thermal_zone_device *zone;
+ 	int ret, irq, i;
+@@ -434,8 +434,7 @@ static int rcar_gen3_thermal_probe(struct platform_device *pdev)
+ 		priv->tscs[i] = tsc;
+ 
+ 		priv->thermal_init(tsc);
+-		rcar_gen3_thermal_calc_coefs(tsc, ptat, thcodes[i],
+-					     *rcar_gen3_ths_tj_1);
++		rcar_gen3_thermal_calc_coefs(tsc, ptat, thcodes[i], *ths_tj_1);
+ 
+ 		zone = devm_thermal_zone_of_sensor_register(dev, i, tsc,
+ 							    &rcar_gen3_tz_of_ops);
+diff --git a/drivers/thermal/sprd_thermal.c b/drivers/thermal/sprd_thermal.c
+index fe06cccf14b38..fff80fc180028 100644
+--- a/drivers/thermal/sprd_thermal.c
++++ b/drivers/thermal/sprd_thermal.c
+@@ -388,7 +388,7 @@ static int sprd_thm_probe(struct platform_device *pdev)
+ 		sen = devm_kzalloc(&pdev->dev, sizeof(*sen), GFP_KERNEL);
+ 		if (!sen) {
+ 			ret = -ENOMEM;
+-			goto disable_clk;
++			goto of_put;
+ 		}
+ 
+ 		sen->data = thm;
+@@ -397,13 +397,13 @@ static int sprd_thm_probe(struct platform_device *pdev)
+ 		ret = of_property_read_u32(sen_child, "reg", &sen->id);
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "get sensor reg failed");
+-			goto disable_clk;
++			goto of_put;
+ 		}
+ 
+ 		ret = sprd_thm_sensor_calibration(sen_child, thm, sen);
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "efuse cal analysis failed");
+-			goto disable_clk;
++			goto of_put;
+ 		}
+ 
+ 		sprd_thm_sensor_init(thm, sen);
+@@ -416,19 +416,20 @@ static int sprd_thm_probe(struct platform_device *pdev)
+ 			dev_err(&pdev->dev, "register thermal zone failed %d\n",
+ 				sen->id);
+ 			ret = PTR_ERR(sen->tzd);
+-			goto disable_clk;
++			goto of_put;
+ 		}
+ 
+ 		thm->sensor[sen->id] = sen;
+ 	}
++	/* sen_child set to NULL at this point */
+ 
+ 	ret = sprd_thm_set_ready(thm);
+ 	if (ret)
+-		goto disable_clk;
++		goto of_put;
+ 
+ 	ret = sprd_thm_wait_temp_ready(thm);
+ 	if (ret)
+-		goto disable_clk;
++		goto of_put;
+ 
+ 	for (i = 0; i < thm->nr_sensors; i++)
+ 		sprd_thm_toggle_sensor(thm->sensor[i], true);
+@@ -436,6 +437,8 @@ static int sprd_thm_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, thm);
+ 	return 0;
+ 
++of_put:
++	of_node_put(sen_child);
+ disable_clk:
+ 	clk_disable_unprepare(thm->clk);
+ 	return ret;
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index c6d74bc1c90bb..e669f83faa3c5 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -1509,7 +1509,7 @@ free_tz:
+ EXPORT_SYMBOL_GPL(thermal_zone_device_register);
+ 
+ /**
+- * thermal_device_unregister - removes the registered thermal zone device
++ * thermal_zone_device_unregister - removes the registered thermal zone device
+  * @tz: the thermal zone device to remove
+  */
+ void thermal_zone_device_unregister(struct thermal_zone_device *tz)
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index 5b76f9a1280d5..6379f26a335f6 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -559,6 +559,9 @@ void thermal_zone_of_sensor_unregister(struct device *dev,
+ 	if (!tz)
+ 		return;
+ 
++	/* stop temperature polling */
++	thermal_zone_device_disable(tzd);
++
+ 	mutex_lock(&tzd->lock);
+ 	tzd->ops->get_temp = NULL;
+ 	tzd->ops->get_trend = NULL;
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index c24c0e3440e39..9d38f864cb68c 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -2006,7 +2006,7 @@ static void cdns3_configure_dmult(struct cdns3_device *priv_dev,
+ 		else
+ 			mask = BIT(priv_ep->num);
+ 
+-		if (priv_ep->type != USB_ENDPOINT_XFER_ISOC) {
++		if (priv_ep->type != USB_ENDPOINT_XFER_ISOC  && !priv_ep->dir) {
+ 			cdns3_set_register_bit(&regs->tdl_from_trb, mask);
+ 			cdns3_set_register_bit(&regs->tdl_beh, mask);
+ 			cdns3_set_register_bit(&regs->tdl_beh2, mask);
+@@ -2045,15 +2045,13 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 	case USB_ENDPOINT_XFER_INT:
+ 		ep_cfg = EP_CFG_EPTYPE(USB_ENDPOINT_XFER_INT);
+ 
+-		if ((priv_dev->dev_ver == DEV_VER_V2 && !priv_ep->dir) ||
+-		    priv_dev->dev_ver > DEV_VER_V2)
++		if (priv_dev->dev_ver >= DEV_VER_V2 && !priv_ep->dir)
+ 			ep_cfg |= EP_CFG_TDL_CHK;
+ 		break;
+ 	case USB_ENDPOINT_XFER_BULK:
+ 		ep_cfg = EP_CFG_EPTYPE(USB_ENDPOINT_XFER_BULK);
+ 
+-		if ((priv_dev->dev_ver == DEV_VER_V2  && !priv_ep->dir) ||
+-		    priv_dev->dev_ver > DEV_VER_V2)
++		if (priv_dev->dev_ver >= DEV_VER_V2 && !priv_ep->dir)
+ 			ep_cfg |= EP_CFG_TDL_CHK;
+ 		break;
+ 	default:
+diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c
+index cc3ada12848d9..42125601ebb10 100644
+--- a/fs/cifs/cifs_dfs_ref.c
++++ b/fs/cifs/cifs_dfs_ref.c
+@@ -151,6 +151,9 @@ char *cifs_compose_mount_options(const char *sb_mountdata,
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	if (ref) {
++		if (WARN_ON_ONCE(!ref->node_name || ref->path_consumed < 0))
++			return ERR_PTR(-EINVAL);
++
+ 		if (strlen(fullpath) - ref->path_consumed) {
+ 			prepath = fullpath + ref->path_consumed;
+ 			/* skip initial delimiter */
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index ec77ccfea923d..b8850c81068a0 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -612,7 +612,9 @@ F2FS_FEATURE_RO_ATTR(lost_found, FEAT_LOST_FOUND);
+ F2FS_FEATURE_RO_ATTR(verity, FEAT_VERITY);
+ #endif
+ F2FS_FEATURE_RO_ATTR(sb_checksum, FEAT_SB_CHECKSUM);
++#ifdef CONFIG_UNICODE
+ F2FS_FEATURE_RO_ATTR(casefold, FEAT_CASEFOLD);
++#endif
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+ F2FS_FEATURE_RO_ATTR(compression, FEAT_COMPRESSION);
+ #endif
+@@ -700,7 +702,9 @@ static struct attribute *f2fs_feat_attrs[] = {
+ 	ATTR_LIST(verity),
+ #endif
+ 	ATTR_LIST(sb_checksum),
++#ifdef CONFIG_UNICODE
+ 	ATTR_LIST(casefold),
++#endif
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+ 	ATTR_LIST(compression),
+ #endif
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 8ad819132dde3..c3ccb242d1993 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -752,6 +752,7 @@ struct bpf_jit_poke_descriptor {
+ 	void *tailcall_target;
+ 	void *tailcall_bypass;
+ 	void *bypass_addr;
++	void *aux;
+ 	union {
+ 		struct {
+ 			struct bpf_map *map;
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index e72787731a5b2..176457145bcf2 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -10,7 +10,7 @@
+ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
+ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
+-		  struct vm_area_struct *vma);
++		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
+ void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd);
+ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
+diff --git a/include/linux/swap.h b/include/linux/swap.h
+index dfabf4660a670..fbc6805358da0 100644
+--- a/include/linux/swap.h
++++ b/include/linux/swap.h
+@@ -503,15 +503,6 @@ static inline struct swap_info_struct *swp_swap_info(swp_entry_t entry)
+ 	return NULL;
+ }
+ 
+-static inline struct swap_info_struct *get_swap_device(swp_entry_t entry)
+-{
+-	return NULL;
+-}
+-
+-static inline void put_swap_device(struct swap_info_struct *si)
+-{
+-}
+-
+ #define swap_address_space(entry)		(NULL)
+ #define get_nr_swap_pages()			0L
+ #define total_swap_pages			0L
+diff --git a/include/linux/swapops.h b/include/linux/swapops.h
+index 6430a94c69818..0d429a102d417 100644
+--- a/include/linux/swapops.h
++++ b/include/linux/swapops.h
+@@ -265,6 +265,8 @@ static inline swp_entry_t pmd_to_swp_entry(pmd_t pmd)
+ 
+ 	if (pmd_swp_soft_dirty(pmd))
+ 		pmd = pmd_swp_clear_soft_dirty(pmd);
++	if (pmd_swp_uffd_wp(pmd))
++		pmd = pmd_swp_clear_uffd_wp(pmd);
+ 	arch_entry = __pmd_to_swp_entry(pmd);
+ 	return swp_entry(__swp_type(arch_entry), __swp_offset(arch_entry));
+ }
+diff --git a/include/net/dst_metadata.h b/include/net/dst_metadata.h
+index 56cb3c38569a7..14efa0ded75dd 100644
+--- a/include/net/dst_metadata.h
++++ b/include/net/dst_metadata.h
+@@ -45,7 +45,9 @@ skb_tunnel_info(const struct sk_buff *skb)
+ 		return &md_dst->u.tun_info;
+ 
+ 	dst = skb_dst(skb);
+-	if (dst && dst->lwtstate)
++	if (dst && dst->lwtstate &&
++	    (dst->lwtstate->type == LWTUNNEL_ENCAP_IP ||
++	     dst->lwtstate->type == LWTUNNEL_ENCAP_IP6))
+ 		return lwt_tun_info(dst->lwtstate);
+ 
+ 	return NULL;
+diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h
+index 37a7fb1969d6c..42fe4e1b6a8c7 100644
+--- a/include/net/ip6_route.h
++++ b/include/net/ip6_route.h
+@@ -262,7 +262,7 @@ static inline bool ipv6_anycast_destination(const struct dst_entry *dst,
+ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 		 int (*output)(struct net *, struct sock *, struct sk_buff *));
+ 
+-static inline int ip6_skb_dst_mtu(struct sk_buff *skb)
++static inline unsigned int ip6_skb_dst_mtu(struct sk_buff *skb)
+ {
+ 	int mtu;
+ 
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 7d66c61d22c7d..eff611da5780b 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -676,6 +676,10 @@ static inline u32 __tcp_set_rto(const struct tcp_sock *tp)
+ 
+ static inline void __tcp_fast_path_on(struct tcp_sock *tp, u32 snd_wnd)
+ {
++	/* mptcp hooks are only on the slow path */
++	if (sk_is_mptcp((struct sock *)tp))
++		return;
++
+ 	tp->pred_flags = htonl((tp->tcp_header_len << 26) |
+ 			       ntohl(TCP_FLAG_ACK) |
+ 			       snd_wnd);
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 239c6b3b59934..75c2d184018a5 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -2167,8 +2167,14 @@ static void bpf_prog_free_deferred(struct work_struct *work)
+ #endif
+ 	if (aux->dst_trampoline)
+ 		bpf_trampoline_put(aux->dst_trampoline);
+-	for (i = 0; i < aux->func_cnt; i++)
++	for (i = 0; i < aux->func_cnt; i++) {
++		/* We can just unlink the subprog poke descriptor table as
++		 * it was originally linked to the main program and is also
++		 * released along with it.
++		 */
++		aux->func[i]->aux->poke_tab = NULL;
+ 		bpf_jit_free(aux->func[i]);
++	}
+ 	if (aux->func_cnt) {
+ 		kfree(aux->func);
+ 		bpf_prog_unlock_free(aux->prog);
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index bf6798fb23319..1f8bf2b39d506 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -11157,33 +11157,19 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ 			goto out_free;
+ 		func[i]->is_func = 1;
+ 		func[i]->aux->func_idx = i;
+-		/* the btf and func_info will be freed only at prog->aux */
++		/* Below members will be freed only at prog->aux */
+ 		func[i]->aux->btf = prog->aux->btf;
+ 		func[i]->aux->func_info = prog->aux->func_info;
++		func[i]->aux->poke_tab = prog->aux->poke_tab;
++		func[i]->aux->size_poke_tab = prog->aux->size_poke_tab;
+ 
+ 		for (j = 0; j < prog->aux->size_poke_tab; j++) {
+-			u32 insn_idx = prog->aux->poke_tab[j].insn_idx;
+-			int ret;
++			struct bpf_jit_poke_descriptor *poke;
+ 
+-			if (!(insn_idx >= subprog_start &&
+-			      insn_idx <= subprog_end))
+-				continue;
+-
+-			ret = bpf_jit_add_poke_descriptor(func[i],
+-							  &prog->aux->poke_tab[j]);
+-			if (ret < 0) {
+-				verbose(env, "adding tail call poke descriptor failed\n");
+-				goto out_free;
+-			}
+-
+-			func[i]->insnsi[insn_idx - subprog_start].imm = ret + 1;
+-
+-			map_ptr = func[i]->aux->poke_tab[ret].tail_call.map;
+-			ret = map_ptr->ops->map_poke_track(map_ptr, func[i]->aux);
+-			if (ret < 0) {
+-				verbose(env, "tracking tail call prog failed\n");
+-				goto out_free;
+-			}
++			poke = &prog->aux->poke_tab[j];
++			if (poke->insn_idx < subprog_end &&
++			    poke->insn_idx >= subprog_start)
++				poke->aux = func[i]->aux;
+ 		}
+ 
+ 		/* Use bpf_prog_F_tag to indicate functions in stack traces.
+@@ -11213,18 +11199,6 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ 		cond_resched();
+ 	}
+ 
+-	/* Untrack main program's aux structs so that during map_poke_run()
+-	 * we will not stumble upon the unfilled poke descriptors; each
+-	 * of the main program's poke descs got distributed across subprogs
+-	 * and got tracked onto map, so we are sure that none of them will
+-	 * be missed after the operation below
+-	 */
+-	for (i = 0; i < prog->aux->size_poke_tab; i++) {
+-		map_ptr = prog->aux->poke_tab[i].tail_call.map;
+-
+-		map_ptr->ops->map_poke_untrack(map_ptr, prog->aux);
+-	}
+-
+ 	/* at this point all bpf functions were successfully JITed
+ 	 * now populate all bpf_calls with correct addresses and
+ 	 * run last pass of JIT
+@@ -11293,14 +11267,22 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ 	bpf_prog_free_unused_jited_linfo(prog);
+ 	return 0;
+ out_free:
++	/* We failed JIT'ing, so at this point we need to unregister poke
++	 * descriptors from subprogs, so that kernel is not attempting to
++	 * patch it anymore as we're freeing the subprog JIT memory.
++	 */
++	for (i = 0; i < prog->aux->size_poke_tab; i++) {
++		map_ptr = prog->aux->poke_tab[i].tail_call.map;
++		map_ptr->ops->map_poke_untrack(map_ptr, prog->aux);
++	}
++	/* At this point we're guaranteed that poke descriptors are not
++	 * live anymore. We can just unlink its descriptor table as it's
++	 * released with the main prog.
++	 */
+ 	for (i = 0; i < env->subprog_cnt; i++) {
+ 		if (!func[i])
+ 			continue;
+-
+-		for (j = 0; j < func[i]->aux->size_poke_tab; j++) {
+-			map_ptr = func[i]->aux->poke_tab[j].tail_call.map;
+-			map_ptr->ops->map_poke_untrack(map_ptr, func[i]->aux);
+-		}
++		func[i]->aux->poke_tab = NULL;
+ 		bpf_jit_free(func[i]);
+ 	}
+ 	kfree(func);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 32c0905bca849..262b02d750076 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -5047,7 +5047,7 @@ static const u64 cfs_bandwidth_slack_period = 5 * NSEC_PER_MSEC;
+ static int runtime_refresh_within(struct cfs_bandwidth *cfs_b, u64 min_expire)
+ {
+ 	struct hrtimer *refresh_timer = &cfs_b->period_timer;
+-	u64 remaining;
++	s64 remaining;
+ 
+ 	/* if the call-back is running a quota refresh is already occurring */
+ 	if (hrtimer_callback_running(refresh_timer))
+@@ -5055,7 +5055,7 @@ static int runtime_refresh_within(struct cfs_bandwidth *cfs_b, u64 min_expire)
+ 
+ 	/* is a quota refresh about to occur? */
+ 	remaining = ktime_to_ns(hrtimer_expires_remaining(refresh_timer));
+-	if (remaining < min_expire)
++	if (remaining < (s64)min_expire)
+ 		return 1;
+ 
+ 	return 0;
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 9fe622ff2fc4a..594368f6134f1 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1012,7 +1012,7 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
+ 
+ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
+-		  struct vm_area_struct *vma)
++		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ {
+ 	spinlock_t *dst_ptl, *src_ptl;
+ 	struct page *src_page;
+@@ -1021,7 +1021,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 	int ret = -ENOMEM;
+ 
+ 	/* Skip if can be re-fill on fault */
+-	if (!vma_is_anonymous(vma))
++	if (!vma_is_anonymous(dst_vma))
+ 		return 0;
+ 
+ 	pgtable = pte_alloc_one(dst_mm);
+@@ -1035,14 +1035,6 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 	ret = -EAGAIN;
+ 	pmd = *src_pmd;
+ 
+-	/*
+-	 * Make sure the _PAGE_UFFD_WP bit is cleared if the new VMA
+-	 * does not have the VM_UFFD_WP, which means that the uffd
+-	 * fork event is not enabled.
+-	 */
+-	if (!(vma->vm_flags & VM_UFFD_WP))
+-		pmd = pmd_clear_uffd_wp(pmd);
+-
+ #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+ 	if (unlikely(is_swap_pmd(pmd))) {
+ 		swp_entry_t entry = pmd_to_swp_entry(pmd);
+@@ -1053,11 +1045,15 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 			pmd = swp_entry_to_pmd(entry);
+ 			if (pmd_swp_soft_dirty(*src_pmd))
+ 				pmd = pmd_swp_mksoft_dirty(pmd);
++			if (pmd_swp_uffd_wp(*src_pmd))
++				pmd = pmd_swp_mkuffd_wp(pmd);
+ 			set_pmd_at(src_mm, addr, src_pmd, pmd);
+ 		}
+ 		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ 		mm_inc_nr_ptes(dst_mm);
+ 		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
++		if (!userfaultfd_wp(dst_vma))
++			pmd = pmd_swp_clear_uffd_wp(pmd);
+ 		set_pmd_at(dst_mm, addr, dst_pmd, pmd);
+ 		ret = 0;
+ 		goto out_unlock;
+@@ -1074,17 +1070,13 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 	 * a page table.
+ 	 */
+ 	if (is_huge_zero_pmd(pmd)) {
+-		struct page *zero_page;
+ 		/*
+ 		 * get_huge_zero_page() will never allocate a new page here,
+ 		 * since we already have a zero page to copy. It just takes a
+ 		 * reference.
+ 		 */
+-		zero_page = mm_get_huge_zero_page(dst_mm);
+-		set_huge_zero_page(pgtable, dst_mm, vma, addr, dst_pmd,
+-				zero_page);
+-		ret = 0;
+-		goto out_unlock;
++		mm_get_huge_zero_page(dst_mm);
++		goto out_zero_page;
+ 	}
+ 
+ 	src_page = pmd_page(pmd);
+@@ -1097,23 +1089,25 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 	 * best effort that the pinned pages won't be replaced by another
+ 	 * random page during the coming copy-on-write.
+ 	 */
+-	if (unlikely(is_cow_mapping(vma->vm_flags) &&
++	if (unlikely(is_cow_mapping(src_vma->vm_flags) &&
+ 		     atomic_read(&src_mm->has_pinned) &&
+ 		     page_maybe_dma_pinned(src_page))) {
+ 		pte_free(dst_mm, pgtable);
+ 		spin_unlock(src_ptl);
+ 		spin_unlock(dst_ptl);
+-		__split_huge_pmd(vma, src_pmd, addr, false, NULL);
++		__split_huge_pmd(src_vma, src_pmd, addr, false, NULL);
+ 		return -EAGAIN;
+ 	}
+ 
+ 	get_page(src_page);
+ 	page_dup_rmap(src_page, true);
+ 	add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
++out_zero_page:
+ 	mm_inc_nr_ptes(dst_mm);
+ 	pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+-
+ 	pmdp_set_wrprotect(src_mm, addr, src_pmd);
++	if (!userfaultfd_wp(dst_vma))
++		pmd = pmd_clear_uffd_wp(pmd);
+ 	pmd = pmd_mkold(pmd_wrprotect(pmd));
+ 	set_pmd_at(dst_mm, addr, dst_pmd, pmd);
+ 
+@@ -1832,6 +1826,8 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ 			newpmd = swp_entry_to_pmd(entry);
+ 			if (pmd_swp_soft_dirty(*pmd))
+ 				newpmd = pmd_swp_mksoft_dirty(newpmd);
++			if (pmd_swp_uffd_wp(*pmd))
++				newpmd = pmd_swp_mkuffd_wp(newpmd);
+ 			set_pmd_at(mm, addr, pmd, newpmd);
+ 		}
+ 		goto unlock;
+@@ -2998,6 +2994,8 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
+ 		pmde = pmd_mksoft_dirty(pmde);
+ 	if (is_write_migration_entry(entry))
+ 		pmde = maybe_pmd_mkwrite(pmde, vma);
++	if (pmd_swp_uffd_wp(*pvmw->pmd))
++		pmde = pmd_wrprotect(pmd_mkuffd_wp(pmde));
+ 
+ 	flush_cache_range(vma, mmun_start, mmun_start + HPAGE_PMD_SIZE);
+ 	if (PageAnon(new))
+diff --git a/mm/memory.c b/mm/memory.c
+index 0a905e0a7e672..4fe24cd865a79 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -696,10 +696,10 @@ out:
+ 
+ static unsigned long
+ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+-		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
+-		unsigned long addr, int *rss)
++		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *dst_vma,
++		struct vm_area_struct *src_vma, unsigned long addr, int *rss)
+ {
+-	unsigned long vm_flags = vma->vm_flags;
++	unsigned long vm_flags = dst_vma->vm_flags;
+ 	pte_t pte = *src_pte;
+ 	struct page *page;
+ 	swp_entry_t entry = pte_to_swp_entry(pte);
+@@ -768,6 +768,8 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 			set_pte_at(src_mm, addr, src_pte, pte);
+ 		}
+ 	}
++	if (!userfaultfd_wp(dst_vma))
++		pte = pte_swp_clear_uffd_wp(pte);
+ 	set_pte_at(dst_mm, addr, dst_pte, pte);
+ 	return 0;
+ }
+@@ -839,6 +841,9 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
+ 	/* All done, just insert the new page copy in the child */
+ 	pte = mk_pte(new_page, dst_vma->vm_page_prot);
+ 	pte = maybe_mkwrite(pte_mkdirty(pte), dst_vma);
++	if (userfaultfd_pte_wp(dst_vma, *src_pte))
++		/* Uffd-wp needs to be delivered to dest pte as well */
++		pte = pte_wrprotect(pte_mkuffd_wp(pte));
+ 	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
+ 	return 0;
+ }
+@@ -888,12 +893,7 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+ 		pte = pte_mkclean(pte);
+ 	pte = pte_mkold(pte);
+ 
+-	/*
+-	 * Make sure the _PAGE_UFFD_WP bit is cleared if the new VMA
+-	 * does not have the VM_UFFD_WP, which means that the uffd
+-	 * fork event is not enabled.
+-	 */
+-	if (!(vm_flags & VM_UFFD_WP))
++	if (!userfaultfd_wp(dst_vma))
+ 		pte = pte_clear_uffd_wp(pte);
+ 
+ 	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
+@@ -968,7 +968,8 @@ again:
+ 		if (unlikely(!pte_present(*src_pte))) {
+ 			entry.val = copy_nonpresent_pte(dst_mm, src_mm,
+ 							dst_pte, src_pte,
+-							src_vma, addr, rss);
++							dst_vma, src_vma,
++							addr, rss);
+ 			if (entry.val)
+ 				break;
+ 			progress += 8;
+@@ -1045,8 +1046,8 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+ 			|| pmd_devmap(*src_pmd)) {
+ 			int err;
+ 			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
+-			err = copy_huge_pmd(dst_mm, src_mm,
+-					    dst_pmd, src_pmd, addr, src_vma);
++			err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,
++					    addr, dst_vma, src_vma);
+ 			if (err == -ENOMEM)
+ 				return -ENOMEM;
+ 			if (!err)
+@@ -3302,7 +3303,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+ 	struct page *page = NULL, *swapcache;
+-	struct swap_info_struct *si = NULL;
+ 	swp_entry_t entry;
+ 	pte_t pte;
+ 	int locked;
+@@ -3330,16 +3330,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ 		goto out;
+ 	}
+ 
+-	/* Prevent swapoff from happening to us. */
+-	si = get_swap_device(entry);
+-	if (unlikely(!si))
+-		goto out;
+ 
+ 	delayacct_set_flag(DELAYACCT_PF_SWAPIN);
+ 	page = lookup_swap_cache(entry, vma, vmf->address);
+ 	swapcache = page;
+ 
+ 	if (!page) {
++		struct swap_info_struct *si = swp_swap_info(entry);
++
+ 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
+ 		    __swap_count(entry) == 1) {
+ 			/* skip swapcache */
+@@ -3510,8 +3508,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ unlock:
+ 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+ out:
+-	if (si)
+-		put_swap_device(si);
+ 	return ret;
+ out_nomap:
+ 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+@@ -3523,8 +3519,6 @@ out_release:
+ 		unlock_page(swapcache);
+ 		put_page(swapcache);
+ 	}
+-	if (si)
+-		put_swap_device(si);
+ 	return ret;
+ }
+ 
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index e30d88efd7fbb..0166558d3d647 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6129,7 +6129,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
+ 		return;
+ 
+ 	/*
+-	 * The call to memmap_init_zone should have already taken care
++	 * The call to memmap_init should have already taken care
+ 	 * of the pages reserved for the memmap, so we can just jump to
+ 	 * the end of that region and start processing the device pages.
+ 	 */
+@@ -6194,7 +6194,7 @@ static void __meminit zone_init_free_lists(struct zone *zone)
+ /*
+  * Only struct pages that correspond to ranges defined by memblock.memory
+  * are zeroed and initialized by going through __init_single_page() during
+- * memmap_init_zone().
++ * memmap_init_zone_range().
+  *
+  * But, there could be struct pages that correspond to holes in
+  * memblock.memory. This can happen because of the following reasons:
+@@ -6213,9 +6213,9 @@ static void __meminit zone_init_free_lists(struct zone *zone)
+  *   zone/node above the hole except for the trailing pages in the last
+  *   section that will be appended to the zone/node below.
+  */
+-static u64 __meminit init_unavailable_range(unsigned long spfn,
+-					    unsigned long epfn,
+-					    int zone, int node)
++static void __init init_unavailable_range(unsigned long spfn,
++					  unsigned long epfn,
++					  int zone, int node)
+ {
+ 	unsigned long pfn;
+ 	u64 pgcnt = 0;
+@@ -6231,58 +6231,84 @@ static u64 __meminit init_unavailable_range(unsigned long spfn,
+ 		pgcnt++;
+ 	}
+ 
+-	return pgcnt;
++	if (pgcnt)
++		pr_info("On node %d, zone %s: %lld pages in unavailable ranges",
++			node, zone_names[zone], pgcnt);
+ }
+ #else
+-static inline u64 init_unavailable_range(unsigned long spfn, unsigned long epfn,
+-					 int zone, int node)
++static inline void init_unavailable_range(unsigned long spfn,
++					  unsigned long epfn,
++					  int zone, int node)
+ {
+-	return 0;
+ }
+ #endif
+ 
+-void __meminit __weak memmap_init(unsigned long size, int nid,
+-				  unsigned long zone,
+-				  unsigned long range_start_pfn)
++static void __init memmap_init_zone_range(struct zone *zone,
++					  unsigned long start_pfn,
++					  unsigned long end_pfn,
++					  unsigned long *hole_pfn)
++{
++	unsigned long zone_start_pfn = zone->zone_start_pfn;
++	unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
++	int nid = zone_to_nid(zone), zone_id = zone_idx(zone);
++
++	start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
++	end_pfn = clamp(end_pfn, zone_start_pfn, zone_end_pfn);
++
++	if (start_pfn >= end_pfn)
++		return;
++
++	memmap_init_zone(end_pfn - start_pfn, nid, zone_id, start_pfn,
++			  zone_end_pfn, MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
++
++	if (*hole_pfn < start_pfn)
++		init_unavailable_range(*hole_pfn, start_pfn, zone_id, nid);
++
++	*hole_pfn = end_pfn;
++}
++
++void __init __weak memmap_init(void)
+ {
+-	static unsigned long hole_pfn;
+ 	unsigned long start_pfn, end_pfn;
+-	unsigned long range_end_pfn = range_start_pfn + size;
+-	int i;
+-	u64 pgcnt = 0;
++	unsigned long hole_pfn = 0;
++	int i, j, zone_id, nid;
+ 
+-	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
+-		start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn);
+-		end_pfn = clamp(end_pfn, range_start_pfn, range_end_pfn);
++	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
++		struct pglist_data *node = NODE_DATA(nid);
+ 
+-		if (end_pfn > start_pfn) {
+-			size = end_pfn - start_pfn;
+-			memmap_init_zone(size, nid, zone, start_pfn, range_end_pfn,
+-					 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
+-		}
++		for (j = 0; j < MAX_NR_ZONES; j++) {
++			struct zone *zone = node->node_zones + j;
++
++			if (!populated_zone(zone))
++				continue;
+ 
+-		if (hole_pfn < start_pfn)
+-			pgcnt += init_unavailable_range(hole_pfn, start_pfn,
+-							zone, nid);
+-		hole_pfn = end_pfn;
++			memmap_init_zone_range(zone, start_pfn, end_pfn,
++					       &hole_pfn);
++			zone_id = j;
++		}
+ 	}
+ 
+ #ifdef CONFIG_SPARSEMEM
+ 	/*
+-	 * Initialize the hole in the range [zone_end_pfn, section_end].
+-	 * If zone boundary falls in the middle of a section, this hole
+-	 * will be re-initialized during the call to this function for the
+-	 * higher zone.
++	 * Initialize the memory map for hole in the range [memory_end,
++	 * section_end].
++	 * Append the pages in this hole to the highest zone in the last
++	 * node.
++	 * The call to init_unavailable_range() is outside the ifdef to
++	 * silence the compiler warining about zone_id set but not used;
++	 * for FLATMEM it is a nop anyway
+ 	 */
+-	end_pfn = round_up(range_end_pfn, PAGES_PER_SECTION);
++	end_pfn = round_up(end_pfn, PAGES_PER_SECTION);
+ 	if (hole_pfn < end_pfn)
+-		pgcnt += init_unavailable_range(hole_pfn, end_pfn,
+-						zone, nid);
+ #endif
++		init_unavailable_range(hole_pfn, end_pfn, zone_id, nid);
++}
+ 
+-	if (pgcnt)
+-		pr_info("  %s zone: %llu pages in unavailable ranges\n",
+-			zone_names[zone], pgcnt);
++/* A stub for backwards compatibility with custom implementatin on IA-64 */
++void __meminit __weak arch_memmap_init(unsigned long size, int nid,
++				       unsigned long zone,
++				       unsigned long range_start_pfn)
++{
+ }
+ 
+ static int zone_batchsize(struct zone *zone)
+@@ -6981,7 +7007,7 @@ static void __init free_area_init_core(struct pglist_data *pgdat)
+ 		set_pageblock_order();
+ 		setup_usemap(pgdat, zone, zone_start_pfn, size);
+ 		init_currently_empty_zone(zone, zone_start_pfn, size);
+-		memmap_init(size, nid, j, zone_start_pfn);
++		arch_memmap_init(size, nid, j, zone_start_pfn);
+ 	}
+ }
+ 
+@@ -7507,6 +7533,8 @@ void __init free_area_init(unsigned long *max_zone_pfn)
+ 			node_set_state(nid, N_MEMORY);
+ 		check_for_memory(pgdat, nid);
+ 	}
++
++	memmap_init();
+ }
+ 
+ static int __init cmdline_parse_core(char *p, unsigned long *core,
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 96df61c8af653..ae8adca3b56d1 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1698,8 +1698,7 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
+ 	struct address_space *mapping = inode->i_mapping;
+ 	struct shmem_inode_info *info = SHMEM_I(inode);
+ 	struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
+-	struct swap_info_struct *si;
+-	struct page *page = NULL;
++	struct page *page;
+ 	swp_entry_t swap;
+ 	int error;
+ 
+@@ -1707,12 +1706,6 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
+ 	swap = radix_to_swp_entry(*pagep);
+ 	*pagep = NULL;
+ 
+-	/* Prevent swapoff from happening to us. */
+-	si = get_swap_device(swap);
+-	if (!si) {
+-		error = EINVAL;
+-		goto failed;
+-	}
+ 	/* Look it up and read it in.. */
+ 	page = lookup_swap_cache(swap, NULL, 0);
+ 	if (!page) {
+@@ -1774,8 +1767,6 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
+ 	swap_free(swap);
+ 
+ 	*pagep = page;
+-	if (si)
+-		put_swap_device(si);
+ 	return 0;
+ failed:
+ 	if (!shmem_confirm_swap(mapping, index, swap))
+@@ -1786,9 +1777,6 @@ unlock:
+ 		put_page(page);
+ 	}
+ 
+-	if (si)
+-		put_swap_device(si);
+-
+ 	return error;
+ }
+ 
+diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
+index a0e9a7937412a..857a2c512ca39 100644
+--- a/net/bridge/br_if.c
++++ b/net/bridge/br_if.c
+@@ -561,7 +561,7 @@ int br_add_if(struct net_bridge *br, struct net_device *dev,
+ 	struct net_bridge_port *p;
+ 	int err = 0;
+ 	unsigned br_hr, dev_hr;
+-	bool changed_addr;
++	bool changed_addr, fdb_synced = false;
+ 
+ 	/* Don't allow bridging non-ethernet like devices. */
+ 	if ((dev->flags & IFF_LOOPBACK) ||
+@@ -651,6 +651,19 @@ int br_add_if(struct net_bridge *br, struct net_device *dev,
+ 	list_add_rcu(&p->list, &br->port_list);
+ 
+ 	nbp_update_port_count(br);
++	if (!br_promisc_port(p) && (p->dev->priv_flags & IFF_UNICAST_FLT)) {
++		/* When updating the port count we also update all ports'
++		 * promiscuous mode.
++		 * A port leaving promiscuous mode normally gets the bridge's
++		 * fdb synced to the unicast filter (if supported), however,
++		 * `br_port_clear_promisc` does not distinguish between
++		 * non-promiscuous ports and *new* ports, so we need to
++		 * sync explicitly here.
++		 */
++		fdb_synced = br_fdb_sync_static(br, p) == 0;
++		if (!fdb_synced)
++			netdev_err(dev, "failed to sync bridge static fdb addresses to this port\n");
++	}
+ 
+ 	netdev_update_features(br->dev);
+ 
+@@ -700,6 +713,8 @@ int br_add_if(struct net_bridge *br, struct net_device *dev,
+ 	return 0;
+ 
+ err7:
++	if (fdb_synced)
++		br_fdb_unsync_static(br, p);
+ 	list_del_rcu(&p->list);
+ 	br_fdb_delete_by_port(br, p, 0, 1);
+ 	nbp_update_port_count(br);
+diff --git a/net/dsa/switch.c b/net/dsa/switch.c
+index 3fb362b6874e3..a44035872cff4 100644
+--- a/net/dsa/switch.c
++++ b/net/dsa/switch.c
+@@ -112,11 +112,11 @@ static int dsa_switch_bridge_leave(struct dsa_switch *ds,
+ 	int err, i;
+ 
+ 	if (dst->index == info->tree_index && ds->index == info->sw_index &&
+-	    ds->ops->port_bridge_join)
++	    ds->ops->port_bridge_leave)
+ 		ds->ops->port_bridge_leave(ds, info->port, info->br);
+ 
+ 	if ((dst->index != info->tree_index || ds->index != info->sw_index) &&
+-	    ds->ops->crosschip_bridge_join)
++	    ds->ops->crosschip_bridge_leave)
+ 		ds->ops->crosschip_bridge_leave(ds, info->tree_index,
+ 						info->sw_index, info->port,
+ 						info->br);
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index f6cc26de5ed30..0dca00745ac3c 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -317,7 +317,7 @@ static int ip_tunnel_bind_dev(struct net_device *dev)
+ 	}
+ 
+ 	dev->needed_headroom = t_hlen + hlen;
+-	mtu -= t_hlen;
++	mtu -= t_hlen + (dev->type == ARPHRD_ETHER ? dev->hard_header_len : 0);
+ 
+ 	if (mtu < IPV4_MIN_MTU)
+ 		mtu = IPV4_MIN_MTU;
+@@ -348,6 +348,9 @@ static struct ip_tunnel *ip_tunnel_create(struct net *net,
+ 	t_hlen = nt->hlen + sizeof(struct iphdr);
+ 	dev->min_mtu = ETH_MIN_MTU;
+ 	dev->max_mtu = IP_MAX_MTU - t_hlen;
++	if (dev->type == ARPHRD_ETHER)
++		dev->max_mtu -= dev->hard_header_len;
++
+ 	ip_tunnel_add(itn, nt);
+ 	return nt;
+ 
+@@ -489,11 +492,14 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
+ 
+ 	tunnel_hlen = md ? tunnel_hlen : tunnel->hlen;
+ 	pkt_size = skb->len - tunnel_hlen;
++	pkt_size -= dev->type == ARPHRD_ETHER ? dev->hard_header_len : 0;
+ 
+-	if (df)
++	if (df) {
+ 		mtu = dst_mtu(&rt->dst) - (sizeof(struct iphdr) + tunnel_hlen);
+-	else
++		mtu -= dev->type == ARPHRD_ETHER ? dev->hard_header_len : 0;
++	} else {
+ 		mtu = skb_valid_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
++	}
+ 
+ 	if (skb_valid_dst(skb))
+ 		skb_dst_update_pmtu_no_confirm(skb, mtu);
+@@ -972,6 +978,9 @@ int __ip_tunnel_change_mtu(struct net_device *dev, int new_mtu, bool strict)
+ 	int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+ 	int max_mtu = IP_MAX_MTU - t_hlen;
+ 
++	if (dev->type == ARPHRD_ETHER)
++		max_mtu -= dev->hard_header_len;
++
+ 	if (new_mtu < ETH_MIN_MTU)
+ 		return -EINVAL;
+ 
+@@ -1149,6 +1158,9 @@ int ip_tunnel_newlink(struct net_device *dev, struct nlattr *tb[],
+ 	if (tb[IFLA_MTU]) {
+ 		unsigned int max = IP_MAX_MTU - (nt->hlen + sizeof(struct iphdr));
+ 
++		if (dev->type == ARPHRD_ETHER)
++			max -= dev->hard_header_len;
++
+ 		mtu = clamp(dev->mtu, (unsigned int)ETH_MIN_MTU, max);
+ 	}
+ 
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 2384ac048bead..54230852e5f95 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1361,6 +1361,9 @@ new_segment:
+ 			}
+ 			pfrag->offset += copy;
+ 		} else {
++			if (!sk_wmem_schedule(sk, copy))
++				goto wait_for_space;
++
+ 			err = skb_zerocopy_iter_stream(sk, skb, msg, copy, uarg);
+ 			if (err == -EMSGSIZE || err == -EEXIST) {
+ 				tcp_mark_push(tp, skb);
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 4d4b641c204d4..ac8d38e044002 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -5911,8 +5911,8 @@ void tcp_init_transfer(struct sock *sk, int bpf_op, struct sk_buff *skb)
+ 		tp->snd_cwnd = tcp_init_cwnd(tp, __sk_dst_get(sk));
+ 	tp->snd_cwnd_stamp = tcp_jiffies32;
+ 
+-	icsk->icsk_ca_initialized = 0;
+ 	bpf_skops_established(sk, bpf_op, skb);
++	/* Initialize congestion control unless BPF initialized it already: */
+ 	if (!icsk->icsk_ca_initialized)
+ 		tcp_init_congestion_control(sk);
+ 	tcp_init_buffer_space(sk);
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index ab8ed0fc47697..5212db9ea157e 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -342,7 +342,7 @@ void tcp_v4_mtu_reduced(struct sock *sk)
+ 
+ 	if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE))
+ 		return;
+-	mtu = tcp_sk(sk)->mtu_info;
++	mtu = READ_ONCE(tcp_sk(sk)->mtu_info);
+ 	dst = inet_csk_update_pmtu(sk, mtu);
+ 	if (!dst)
+ 		return;
+@@ -546,7 +546,7 @@ int tcp_v4_err(struct sk_buff *skb, u32 info)
+ 			if (sk->sk_state == TCP_LISTEN)
+ 				goto out;
+ 
+-			tp->mtu_info = info;
++			WRITE_ONCE(tp->mtu_info, info);
+ 			if (!sock_owned_by_user(sk)) {
+ 				tcp_v4_mtu_reduced(sk);
+ 			} else {
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index f99494637ff47..19ef4577b70d6 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1730,6 +1730,7 @@ int tcp_mtu_to_mss(struct sock *sk, int pmtu)
+ 	return __tcp_mtu_to_mss(sk, pmtu) -
+ 	       (tcp_sk(sk)->tcp_header_len - sizeof(struct tcphdr));
+ }
++EXPORT_SYMBOL(tcp_mtu_to_mss);
+ 
+ /* Inverse of above */
+ int tcp_mss_to_mtu(struct sock *sk, int mss)
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index fbb9a11fe4a37..e73312546c5a1 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1097,7 +1097,7 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	}
+ 
+ 	ipcm_init_sk(&ipc, inet);
+-	ipc.gso_size = up->gso_size;
++	ipc.gso_size = READ_ONCE(up->gso_size);
+ 
+ 	if (msg->msg_controllen) {
+ 		err = udp_cmsg_send(sk, msg, &ipc.gso_size);
+@@ -2655,7 +2655,7 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
+ 	case UDP_SEGMENT:
+ 		if (val < 0 || val > USHRT_MAX)
+ 			return -EINVAL;
+-		up->gso_size = val;
++		WRITE_ONCE(up->gso_size, val);
+ 		break;
+ 
+ 	case UDP_GRO:
+@@ -2750,7 +2750,7 @@ int udp_lib_getsockopt(struct sock *sk, int level, int optname,
+ 		break;
+ 
+ 	case UDP_SEGMENT:
+-		val = up->gso_size;
++		val = READ_ONCE(up->gso_size);
+ 		break;
+ 
+ 	case UDP_GRO:
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 3f9bb6dd1f986..df33145b876c6 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -348,11 +348,20 @@ failure:
+ static void tcp_v6_mtu_reduced(struct sock *sk)
+ {
+ 	struct dst_entry *dst;
++	u32 mtu;
+ 
+ 	if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE))
+ 		return;
+ 
+-	dst = inet6_csk_update_pmtu(sk, tcp_sk(sk)->mtu_info);
++	mtu = READ_ONCE(tcp_sk(sk)->mtu_info);
++
++	/* Drop requests trying to increase our current mss.
++	 * Check done in __ip6_rt_update_pmtu() is too late.
++	 */
++	if (tcp_mtu_to_mss(sk, mtu) >= tcp_sk(sk)->mss_cache)
++		return;
++
++	dst = inet6_csk_update_pmtu(sk, mtu);
+ 	if (!dst)
+ 		return;
+ 
+@@ -433,6 +442,8 @@ static int tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 	}
+ 
+ 	if (type == ICMPV6_PKT_TOOBIG) {
++		u32 mtu = ntohl(info);
++
+ 		/* We are not interested in TCP_LISTEN and open_requests
+ 		 * (SYN-ACKs send out by Linux are always <576bytes so
+ 		 * they should go through unfragmented).
+@@ -443,7 +454,11 @@ static int tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 		if (!ip6_sk_accept_pmtu(sk))
+ 			goto out;
+ 
+-		tp->mtu_info = ntohl(info);
++		if (mtu < IPV6_MIN_MTU)
++			goto out;
++
++		WRITE_ONCE(tp->mtu_info, mtu);
++
+ 		if (!sock_owned_by_user(sk))
+ 			tcp_v6_mtu_reduced(sk);
+ 		else if (!test_and_set_bit(TCP_MTU_REDUCED_DEFERRED,
+@@ -540,7 +555,7 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst,
+ 		opt = ireq->ipv6_opt;
+ 		if (!opt)
+ 			opt = rcu_dereference(np->opt);
+-		err = ip6_xmit(sk, skb, fl6, sk->sk_mark, opt,
++		err = ip6_xmit(sk, skb, fl6, skb->mark ? : sk->sk_mark, opt,
+ 			       tclass, sk->sk_priority);
+ 		rcu_read_unlock();
+ 		err = net_xmit_eval(err);
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index e2de58d6cdce2..a448b6cd47273 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1294,7 +1294,7 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	int (*getfrag)(void *, char *, int, int, int, struct sk_buff *);
+ 
+ 	ipcm6_init(&ipc6);
+-	ipc6.gso_size = up->gso_size;
++	ipc6.gso_size = READ_ONCE(up->gso_size);
+ 	ipc6.sockc.tsflags = sk->sk_tsflags;
+ 	ipc6.sockc.mark = sk->sk_mark;
+ 
+diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
+index 8b84d534b19d8..6abb45a671994 100644
+--- a/net/ipv6/xfrm6_output.c
++++ b/net/ipv6/xfrm6_output.c
+@@ -56,7 +56,7 @@ static int __xfrm6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct dst_entry *dst = skb_dst(skb);
+ 	struct xfrm_state *x = dst->xfrm;
+-	int mtu;
++	unsigned int mtu;
+ 	bool toobig;
+ 
+ #ifdef CONFIG_NETFILTER
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index c1bfd8181341a..cb4cfa4f61a8d 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -213,6 +213,7 @@ static int ctnetlink_dump_helpinfo(struct sk_buff *skb,
+ 	if (!help)
+ 		return 0;
+ 
++	rcu_read_lock();
+ 	helper = rcu_dereference(help->helper);
+ 	if (!helper)
+ 		goto out;
+@@ -228,9 +229,11 @@ static int ctnetlink_dump_helpinfo(struct sk_buff *skb,
+ 
+ 	nla_nest_end(skb, nest_helper);
+ out:
++	rcu_read_unlock();
+ 	return 0;
+ 
+ nla_put_failure:
++	rcu_read_unlock();
+ 	return -1;
+ }
+ 
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 7ef074c6dd160..812c3c70a53a0 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -320,11 +320,22 @@ err_alloc:
+ 
+ static void tcf_ct_flow_table_cleanup_work(struct work_struct *work)
+ {
++	struct flow_block_cb *block_cb, *tmp_cb;
+ 	struct tcf_ct_flow_table *ct_ft;
++	struct flow_block *block;
+ 
+ 	ct_ft = container_of(to_rcu_work(work), struct tcf_ct_flow_table,
+ 			     rwork);
+ 	nf_flow_table_free(&ct_ft->nf_ft);
++
++	/* Remove any remaining callbacks before cleanup */
++	block = &ct_ft->nf_ft.flow_block;
++	down_write(&ct_ft->nf_ft.flow_block_lock);
++	list_for_each_entry_safe(block_cb, tmp_cb, &block->cb_list, list) {
++		list_del(&block_cb->list);
++		flow_block_cb_free(block_cb);
++	}
++	up_write(&ct_ft->nf_ft.flow_block_lock);
+ 	kfree(ct_ft);
+ 
+ 	module_put(THIS_MODULE);
+@@ -1023,7 +1034,8 @@ do_nat:
+ 		/* This will take care of sending queued events
+ 		 * even if the connection is already confirmed.
+ 		 */
+-		nf_conntrack_confirm(skb);
++		if (nf_conntrack_confirm(skb) != NF_ACCEPT)
++			goto drop;
+ 	}
+ 
+ 	if (!skip_add)
+diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
+index 08e011175b4c8..0d6e118207913 100644
+--- a/scripts/Kbuild.include
++++ b/scripts/Kbuild.include
+@@ -174,8 +174,13 @@ clean := -f $(srctree)/scripts/Makefile.clean obj
+ echo-cmd = $(if $($(quiet)cmd_$(1)),\
+ 	echo '  $(call escsq,$($(quiet)cmd_$(1)))$(echo-why)';)
+ 
++# sink stdout for 'make -s'
++       redirect :=
++ quiet_redirect :=
++silent_redirect := exec >/dev/null;
++
+ # printing commands
+-cmd = @set -e; $(echo-cmd) $(cmd_$(1))
++cmd = @set -e; $(echo-cmd) $($(quiet)redirect) $(cmd_$(1))
+ 
+ ###
+ # if_changed      - execute command if any prerequisite is newer than
+diff --git a/scripts/mkcompile_h b/scripts/mkcompile_h
+index 4ae735039daf2..a72b154de7b01 100755
+--- a/scripts/mkcompile_h
++++ b/scripts/mkcompile_h
+@@ -70,15 +70,23 @@ UTS_VERSION="$(echo $UTS_VERSION $CONFIG_FLAGS $TIMESTAMP | cut -b -$UTS_LEN)"
+ # Only replace the real compile.h if the new one is different,
+ # in order to preserve the timestamp and avoid unnecessary
+ # recompilations.
+-# We don't consider the file changed if only the date/time changed.
++# We don't consider the file changed if only the date/time changed,
++# unless KBUILD_BUILD_TIMESTAMP was explicitly set (e.g. for
++# reproducible builds with that value referring to a commit timestamp).
+ # A kernel config change will increase the generation number, thus
+ # causing compile.h to be updated (including date/time) due to the
+ # changed comment in the
+ # first line.
+ 
++if [ -z "$KBUILD_BUILD_TIMESTAMP" ]; then
++   IGNORE_PATTERN="UTS_VERSION"
++else
++   IGNORE_PATTERN="NOT_A_PATTERN_TO_BE_MATCHED"
++fi
++
+ if [ -r $TARGET ] && \
+-      grep -v 'UTS_VERSION' $TARGET > .tmpver.1 && \
+-      grep -v 'UTS_VERSION' .tmpcompile > .tmpver.2 && \
++      grep -v $IGNORE_PATTERN $TARGET > .tmpver.1 && \
++      grep -v $IGNORE_PATTERN .tmpcompile > .tmpver.2 && \
+       cmp -s .tmpver.1 .tmpver.2; then
+    rm -f .tmpcompile
+ else
+diff --git a/tools/bpf/Makefile b/tools/bpf/Makefile
+index 39bb322707b4b..b11cfc86a3d02 100644
+--- a/tools/bpf/Makefile
++++ b/tools/bpf/Makefile
+@@ -97,7 +97,7 @@ clean: bpftool_clean runqslower_clean resolve_btfids_clean
+ 	$(Q)$(RM) -- $(OUTPUT)FEATURE-DUMP.bpf
+ 	$(Q)$(RM) -r -- $(OUTPUT)feature
+ 
+-install: $(PROGS) bpftool_install runqslower_install
++install: $(PROGS) bpftool_install
+ 	$(call QUIET_INSTALL, bpf_jit_disasm)
+ 	$(Q)$(INSTALL) -m 0755 -d $(DESTDIR)$(prefix)/bin
+ 	$(Q)$(INSTALL) $(OUTPUT)bpf_jit_disasm $(DESTDIR)$(prefix)/bin/bpf_jit_disasm
+@@ -118,9 +118,6 @@ bpftool_clean:
+ runqslower:
+ 	$(call descend,runqslower)
+ 
+-runqslower_install:
+-	$(call descend,runqslower,install)
+-
+ runqslower_clean:
+ 	$(call descend,runqslower,clean)
+ 
+@@ -131,5 +128,5 @@ resolve_btfids_clean:
+ 	$(call descend,resolve_btfids,clean)
+ 
+ .PHONY: all install clean bpftool bpftool_install bpftool_clean \
+-	runqslower runqslower_install runqslower_clean \
++	runqslower runqslower_clean \
+ 	resolve_btfids resolve_btfids_clean
+diff --git a/tools/bpf/bpftool/jit_disasm.c b/tools/bpf/bpftool/jit_disasm.c
+index e7e7eee9f1725..24734f2249d6e 100644
+--- a/tools/bpf/bpftool/jit_disasm.c
++++ b/tools/bpf/bpftool/jit_disasm.c
+@@ -43,11 +43,13 @@ static int fprintf_json(void *out, const char *fmt, ...)
+ {
+ 	va_list ap;
+ 	char *s;
++	int err;
+ 
+ 	va_start(ap, fmt);
+-	if (vasprintf(&s, fmt, ap) < 0)
+-		return -1;
++	err = vasprintf(&s, fmt, ap);
+ 	va_end(ap);
++	if (err < 0)
++		return -1;
+ 
+ 	if (!oper_count) {
+ 		int i;
+diff --git a/tools/perf/tests/bpf.c b/tools/perf/tests/bpf.c
+index cd77e334e5777..8345ff4acedf2 100644
+--- a/tools/perf/tests/bpf.c
++++ b/tools/perf/tests/bpf.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <errno.h>
+ #include <stdio.h>
++#include <stdlib.h>
+ #include <sys/epoll.h>
+ #include <sys/types.h>
+ #include <sys/stat.h>
+@@ -283,6 +284,7 @@ static int __test__bpf(int idx)
+ 	}
+ 
+ out:
++	free(obj_buf);
+ 	bpf__clear();
+ 	return ret;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-25 17:28 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-25 17:28 UTC (permalink / raw
  To: gentoo-commits

commit:     59f4c7daba109c8f5e59a74f44297962e8694c4f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 25 17:27:59 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jul 25 17:27:59 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=59f4c7da

Update README

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/0000_README b/0000_README
index 51e9403..8050056 100644
--- a/0000_README
+++ b/0000_README
@@ -251,6 +251,10 @@ Patch:  1051_linux-5.10.52.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.52
 
+Patch:  1052_linux-5.10.53.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.53
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-28 13:22 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-07-28 13:22 UTC (permalink / raw
  To: gentoo-commits

commit:     36da15eed6bc90fe949bf01fc3f8851b8d8e2ff8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 28 13:22:45 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 28 13:22:45 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=36da15ee

Linux patch 5.10.54

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1053_linux-5.10.54.patch | 5411 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5415 insertions(+)

diff --git a/0000_README b/0000_README
index 8050056..e70652a 100644
--- a/0000_README
+++ b/0000_README
@@ -255,6 +255,10 @@ Patch:  1052_linux-5.10.53.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.53
 
+Patch:  1053_linux-5.10.54.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.54
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1053_linux-5.10.54.patch b/1053_linux-5.10.54.patch
new file mode 100644
index 0000000..94dca6f
--- /dev/null
+++ b/1053_linux-5.10.54.patch
@@ -0,0 +1,5411 @@
+diff --git a/Documentation/arm64/tagged-address-abi.rst b/Documentation/arm64/tagged-address-abi.rst
+index 4a9d9c794ee5d..7d255249094d0 100644
+--- a/Documentation/arm64/tagged-address-abi.rst
++++ b/Documentation/arm64/tagged-address-abi.rst
+@@ -45,14 +45,24 @@ how the user addresses are used by the kernel:
+ 
+ 1. User addresses not accessed by the kernel but used for address space
+    management (e.g. ``mprotect()``, ``madvise()``). The use of valid
+-   tagged pointers in this context is allowed with the exception of
+-   ``brk()``, ``mmap()`` and the ``new_address`` argument to
+-   ``mremap()`` as these have the potential to alias with existing
+-   user addresses.
+-
+-   NOTE: This behaviour changed in v5.6 and so some earlier kernels may
+-   incorrectly accept valid tagged pointers for the ``brk()``,
+-   ``mmap()`` and ``mremap()`` system calls.
++   tagged pointers in this context is allowed with these exceptions:
++
++   - ``brk()``, ``mmap()`` and the ``new_address`` argument to
++     ``mremap()`` as these have the potential to alias with existing
++      user addresses.
++
++     NOTE: This behaviour changed in v5.6 and so some earlier kernels may
++     incorrectly accept valid tagged pointers for the ``brk()``,
++     ``mmap()`` and ``mremap()`` system calls.
++
++   - The ``range.start``, ``start`` and ``dst`` arguments to the
++     ``UFFDIO_*`` ``ioctl()``s used on a file descriptor obtained from
++     ``userfaultfd()``, as fault addresses subsequently obtained by reading
++     the file descriptor will be untagged, which may otherwise confuse
++     tag-unaware programs.
++
++     NOTE: This behaviour changed in v5.14 and so some earlier kernels may
++     incorrectly accept valid tagged pointers for this system call.
+ 
+ 2. User addresses accessed by the kernel (e.g. ``write()``). This ABI
+    relaxation is disabled by default and the application thread needs to
+diff --git a/Documentation/driver-api/early-userspace/early_userspace_support.rst b/Documentation/driver-api/early-userspace/early_userspace_support.rst
+index 8a58c61932ff5..61bdeac1bae54 100644
+--- a/Documentation/driver-api/early-userspace/early_userspace_support.rst
++++ b/Documentation/driver-api/early-userspace/early_userspace_support.rst
+@@ -69,17 +69,17 @@ early userspace image can be built by an unprivileged user.
+ 
+ As a technical note, when directories and files are specified, the
+ entire CONFIG_INITRAMFS_SOURCE is passed to
+-usr/gen_initramfs_list.sh.  This means that CONFIG_INITRAMFS_SOURCE
++usr/gen_initramfs.sh.  This means that CONFIG_INITRAMFS_SOURCE
+ can really be interpreted as any legal argument to
+-gen_initramfs_list.sh.  If a directory is specified as an argument then
++gen_initramfs.sh.  If a directory is specified as an argument then
+ the contents are scanned, uid/gid translation is performed, and
+ usr/gen_init_cpio file directives are output.  If a directory is
+-specified as an argument to usr/gen_initramfs_list.sh then the
++specified as an argument to usr/gen_initramfs.sh then the
+ contents of the file are simply copied to the output.  All of the output
+ directives from directory scanning and file contents copying are
+ processed by usr/gen_init_cpio.
+ 
+-See also 'usr/gen_initramfs_list.sh -h'.
++See also 'usr/gen_initramfs.sh -h'.
+ 
+ Where's this all leading?
+ =========================
+diff --git a/Documentation/filesystems/ramfs-rootfs-initramfs.rst b/Documentation/filesystems/ramfs-rootfs-initramfs.rst
+index 4598b0d90b607..164960631925d 100644
+--- a/Documentation/filesystems/ramfs-rootfs-initramfs.rst
++++ b/Documentation/filesystems/ramfs-rootfs-initramfs.rst
+@@ -170,7 +170,7 @@ Documentation/driver-api/early-userspace/early_userspace_support.rst for more de
+ The kernel does not depend on external cpio tools.  If you specify a
+ directory instead of a configuration file, the kernel's build infrastructure
+ creates a configuration file from that directory (usr/Makefile calls
+-usr/gen_initramfs_list.sh), and proceeds to package up that directory
++usr/gen_initramfs.sh), and proceeds to package up that directory
+ using the config file (by feeding it to usr/gen_init_cpio, which is created
+ from usr/gen_init_cpio.c).  The kernel's build-time cpio creation code is
+ entirely self-contained, and the kernel's boot-time extractor is also
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index 4abcfff15e384..4822a058a81d7 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -751,7 +751,7 @@ tcp_fastopen_blackhole_timeout_sec - INTEGER
+ 	initial value when the blackhole issue goes away.
+ 	0 to disable the blackhole detection.
+ 
+-	By default, it is set to 1hr.
++	By default, it is set to 0 (feature is disabled).
+ 
+ tcp_fastopen_key - list of comma separated 32-digit hexadecimal INTEGERs
+ 	The list consists of a primary key and an optional backup key. The
+diff --git a/Documentation/trace/histogram.rst b/Documentation/trace/histogram.rst
+index b71e09f745c3d..f99be8062bc82 100644
+--- a/Documentation/trace/histogram.rst
++++ b/Documentation/trace/histogram.rst
+@@ -191,7 +191,7 @@ Documentation written by Tom Zanussi
+                                 with the event, in nanoseconds.  May be
+ 			        modified by .usecs to have timestamps
+ 			        interpreted as microseconds.
+-    cpu                    int  the cpu on which the event occurred.
++    common_cpu             int  the cpu on which the event occurred.
+     ====================== ==== =======================================
+ 
+ Extended error information
+diff --git a/Makefile b/Makefile
+index aab88519cb0ed..eb01d3028b020 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 53
++SUBLEVEL = 54
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/nds32/mm/mmap.c b/arch/nds32/mm/mmap.c
+index c206b31ce07ac..1bdf5e7d1b438 100644
+--- a/arch/nds32/mm/mmap.c
++++ b/arch/nds32/mm/mmap.c
+@@ -59,7 +59,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
+ 
+ 		vma = find_vma(mm, addr);
+ 		if (TASK_SIZE - len >= addr &&
+-		    (!vma || addr + len <= vma->vm_start))
++		    (!vma || addr + len <= vm_start_gap(vma)))
+ 			return addr;
+ 	}
+ 
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 2325b7a6e95f8..bd7350a608d4b 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -2366,8 +2366,10 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
+ 		HFSCR_DSCR | HFSCR_VECVSX | HFSCR_FP | HFSCR_PREFIX;
+ 	if (cpu_has_feature(CPU_FTR_HVMODE)) {
+ 		vcpu->arch.hfscr &= mfspr(SPRN_HFSCR);
++#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+ 		if (cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ 			vcpu->arch.hfscr |= HFSCR_TM;
++#endif
+ 	}
+ 	if (cpu_has_feature(CPU_FTR_TM_COMP))
+ 		vcpu->arch.hfscr |= HFSCR_TM;
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index 065738819db9b..a5f1ae892ba68 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -232,6 +232,9 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
+ 	if (vcpu->kvm->arch.l1_ptcr == 0)
+ 		return H_NOT_AVAILABLE;
+ 
++	if (MSR_TM_TRANSACTIONAL(vcpu->arch.shregs.msr))
++		return H_BAD_MODE;
++
+ 	/* copy parameters in */
+ 	hv_ptr = kvmppc_get_gpr(vcpu, 4);
+ 	regs_ptr = kvmppc_get_gpr(vcpu, 5);
+@@ -254,6 +257,23 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
+ 	if (l2_hv.vcpu_token >= NR_CPUS)
+ 		return H_PARAMETER;
+ 
++	/*
++	 * L1 must have set up a suspended state to enter the L2 in a
++	 * transactional state, and only in that case. These have to be
++	 * filtered out here to prevent causing a TM Bad Thing in the
++	 * host HRFID. We could synthesize a TM Bad Thing back to the L1
++	 * here but there doesn't seem like much point.
++	 */
++	if (MSR_TM_SUSPENDED(vcpu->arch.shregs.msr)) {
++		if (!MSR_TM_ACTIVE(l2_regs.msr))
++			return H_BAD_MODE;
++	} else {
++		if (l2_regs.msr & MSR_TS_MASK)
++			return H_BAD_MODE;
++		if (WARN_ON_ONCE(vcpu->arch.shregs.msr & MSR_TS_MASK))
++			return H_BAD_MODE;
++	}
++
+ 	/* translate lpid */
+ 	l2 = kvmhv_get_nested(vcpu->kvm, l2_hv.lpid, true);
+ 	if (!l2)
+diff --git a/arch/powerpc/kvm/book3s_rtas.c b/arch/powerpc/kvm/book3s_rtas.c
+index c5e677508d3b2..0f847f1e5ddd0 100644
+--- a/arch/powerpc/kvm/book3s_rtas.c
++++ b/arch/powerpc/kvm/book3s_rtas.c
+@@ -242,6 +242,17 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
+ 	 * value so we can restore it on the way out.
+ 	 */
+ 	orig_rets = args.rets;
++	if (be32_to_cpu(args.nargs) >= ARRAY_SIZE(args.args)) {
++		/*
++		 * Don't overflow our args array: ensure there is room for
++		 * at least rets[0] (even if the call specifies 0 nret).
++		 *
++		 * Each handler must then check for the correct nargs and nret
++		 * values, but they may always return failure in rets[0].
++		 */
++		rc = -EINVAL;
++		goto fail;
++	}
+ 	args.rets = &args.args[be32_to_cpu(args.nargs)];
+ 
+ 	mutex_lock(&vcpu->kvm->arch.rtas_token_lock);
+@@ -269,9 +280,17 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
+ fail:
+ 	/*
+ 	 * We only get here if the guest has called RTAS with a bogus
+-	 * args pointer. That means we can't get to the args, and so we
+-	 * can't fail the RTAS call. So fail right out to userspace,
+-	 * which should kill the guest.
++	 * args pointer or nargs/nret values that would overflow the
++	 * array. That means we can't get to the args, and so we can't
++	 * fail the RTAS call. So fail right out to userspace, which
++	 * should kill the guest.
++	 *
++	 * SLOF should actually pass the hcall return value from the
++	 * rtas handler call in r3, so enter_rtas could be modified to
++	 * return a failure indication in r3 and we could return such
++	 * errors to the guest rather than failing to host userspace.
++	 * However old guests that don't test for failure could then
++	 * continue silently after errors, so for now we won't do this.
+ 	 */
+ 	return rc;
+ }
+diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
+index 32fa0fa3d4ff5..543db9157f3b1 100644
+--- a/arch/powerpc/kvm/powerpc.c
++++ b/arch/powerpc/kvm/powerpc.c
+@@ -2041,9 +2041,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ 	{
+ 		struct kvm_enable_cap cap;
+ 		r = -EFAULT;
+-		vcpu_load(vcpu);
+ 		if (copy_from_user(&cap, argp, sizeof(cap)))
+ 			goto out;
++		vcpu_load(vcpu);
+ 		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
+ 		vcpu_put(vcpu);
+ 		break;
+@@ -2067,9 +2067,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ 	case KVM_DIRTY_TLB: {
+ 		struct kvm_dirty_tlb dirty;
+ 		r = -EFAULT;
+-		vcpu_load(vcpu);
+ 		if (copy_from_user(&dirty, argp, sizeof(dirty)))
+ 			goto out;
++		vcpu_load(vcpu);
+ 		r = kvm_vcpu_ioctl_dirty_tlb(vcpu, &dirty);
+ 		vcpu_put(vcpu);
+ 		break;
+diff --git a/arch/s390/boot/text_dma.S b/arch/s390/boot/text_dma.S
+index f7c77cd518f2b..5ff5fee028016 100644
+--- a/arch/s390/boot/text_dma.S
++++ b/arch/s390/boot/text_dma.S
+@@ -9,16 +9,6 @@
+ #include <asm/errno.h>
+ #include <asm/sigp.h>
+ 
+-#ifdef CC_USING_EXPOLINE
+-	.pushsection .dma.text.__s390_indirect_jump_r14,"axG"
+-__dma__s390_indirect_jump_r14:
+-	larl	%r1,0f
+-	ex	0,0(%r1)
+-	j	.
+-0:	br	%r14
+-	.popsection
+-#endif
+-
+ 	.section .dma.text,"ax"
+ /*
+  * Simplified version of expoline thunk. The normal thunks can not be used here,
+@@ -27,11 +17,10 @@ __dma__s390_indirect_jump_r14:
+  * affects a few functions that are not performance-relevant.
+  */
+ 	.macro BR_EX_DMA_r14
+-#ifdef CC_USING_EXPOLINE
+-	jg	__dma__s390_indirect_jump_r14
+-#else
+-	br	%r14
+-#endif
++	larl	%r1,0f
++	ex	0,0(%r1)
++	j	.
++0:	br	%r14
+ 	.endm
+ 
+ /*
+diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h
+index 68d362f8d6c17..c72c179a5aae9 100644
+--- a/arch/s390/include/asm/ftrace.h
++++ b/arch/s390/include/asm/ftrace.h
+@@ -27,6 +27,7 @@ void ftrace_caller(void);
+ 
+ extern char ftrace_graph_caller_end;
+ extern unsigned long ftrace_plt;
++extern void *ftrace_func;
+ 
+ struct dyn_arch_ftrace { };
+ 
+diff --git a/arch/s390/kernel/ftrace.c b/arch/s390/kernel/ftrace.c
+index b388e87a08bf9..923ecccae3061 100644
+--- a/arch/s390/kernel/ftrace.c
++++ b/arch/s390/kernel/ftrace.c
+@@ -57,6 +57,7 @@
+  * >	brasl	%r0,ftrace_caller	# offset 0
+  */
+ 
++void *ftrace_func __read_mostly = ftrace_stub;
+ unsigned long ftrace_plt;
+ 
+ static inline void ftrace_generate_orig_insn(struct ftrace_insn *insn)
+@@ -120,6 +121,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+ 
+ int ftrace_update_ftrace_func(ftrace_func_t func)
+ {
++	ftrace_func = func;
+ 	return 0;
+ }
+ 
+diff --git a/arch/s390/kernel/mcount.S b/arch/s390/kernel/mcount.S
+index 7458dcfd64642..be9e85e7558ae 100644
+--- a/arch/s390/kernel/mcount.S
++++ b/arch/s390/kernel/mcount.S
+@@ -67,13 +67,13 @@ ENTRY(ftrace_caller)
+ #ifdef CONFIG_HAVE_MARCH_Z196_FEATURES
+ 	aghik	%r2,%r0,-MCOUNT_INSN_SIZE
+ 	lgrl	%r4,function_trace_op
+-	lgrl	%r1,ftrace_trace_function
++	lgrl	%r1,ftrace_func
+ #else
+ 	lgr	%r2,%r0
+ 	aghi	%r2,-MCOUNT_INSN_SIZE
+ 	larl	%r4,function_trace_op
+ 	lg	%r4,0(%r4)
+-	larl	%r1,ftrace_trace_function
++	larl	%r1,ftrace_func
+ 	lg	%r1,0(%r1)
+ #endif
+ 	lgr	%r3,%r14
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 0a41827928769..fc44dce59536e 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -112,7 +112,7 @@ static inline void reg_set_seen(struct bpf_jit *jit, u32 b1)
+ {
+ 	u32 r1 = reg2hex[b1];
+ 
+-	if (!jit->seen_reg[r1] && r1 >= 6 && r1 <= 15)
++	if (r1 >= 6 && r1 <= 15 && !jit->seen_reg[r1])
+ 		jit->seen_reg[r1] = 1;
+ }
+ 
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 7a3fbf3b796e6..41b0dc37720e0 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -684,7 +684,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 
+ 		edx.split.num_counters_fixed = min(cap.num_counters_fixed, MAX_FIXED_COUNTERS);
+ 		edx.split.bit_width_fixed = cap.bit_width_fixed;
+-		edx.split.anythread_deprecated = 1;
++		if (cap.version)
++			edx.split.anythread_deprecated = 1;
+ 		edx.split.reserved1 = 0;
+ 		edx.split.reserved2 = 0;
+ 
+diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
+index edf1558c11052..b5ea34c340ccc 100644
+--- a/drivers/acpi/Kconfig
++++ b/drivers/acpi/Kconfig
+@@ -359,7 +359,7 @@ config ACPI_TABLE_UPGRADE
+ config ACPI_TABLE_OVERRIDE_VIA_BUILTIN_INITRD
+ 	bool "Override ACPI tables from built-in initrd"
+ 	depends on ACPI_TABLE_UPGRADE
+-	depends on INITRAMFS_SOURCE!="" && INITRAMFS_COMPRESSION=""
++	depends on INITRAMFS_SOURCE!="" && INITRAMFS_COMPRESSION_NONE
+ 	help
+ 	  This option provides functionality to override arbitrary ACPI tables
+ 	  from built-in uncompressed initrd.
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 8b3c3fcf35a96..1157f9aea9c04 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -449,8 +449,10 @@ static void devlink_remove_symlinks(struct device *dev,
+ 		return;
+ 	}
+ 
+-	snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
+-	sysfs_remove_link(&con->kobj, buf);
++	if (device_is_registered(con)) {
++		snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
++		sysfs_remove_link(&con->kobj, buf);
++	}
+ 	snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
+ 	sysfs_remove_link(&sup->kobj, buf);
+ 	kfree(buf);
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index f84128abade31..340b1df365f72 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -4147,8 +4147,6 @@ again:
+ 
+ static bool rbd_quiesce_lock(struct rbd_device *rbd_dev)
+ {
+-	bool need_wait;
+-
+ 	dout("%s rbd_dev %p\n", __func__, rbd_dev);
+ 	lockdep_assert_held_write(&rbd_dev->lock_rwsem);
+ 
+@@ -4160,11 +4158,11 @@ static bool rbd_quiesce_lock(struct rbd_device *rbd_dev)
+ 	 */
+ 	rbd_dev->lock_state = RBD_LOCK_STATE_RELEASING;
+ 	rbd_assert(!completion_done(&rbd_dev->releasing_wait));
+-	need_wait = !list_empty(&rbd_dev->running_list);
+-	downgrade_write(&rbd_dev->lock_rwsem);
+-	if (need_wait)
+-		wait_for_completion(&rbd_dev->releasing_wait);
+-	up_read(&rbd_dev->lock_rwsem);
++	if (list_empty(&rbd_dev->running_list))
++		return true;
++
++	up_write(&rbd_dev->lock_rwsem);
++	wait_for_completion(&rbd_dev->releasing_wait);
+ 
+ 	down_write(&rbd_dev->lock_rwsem);
+ 	if (rbd_dev->lock_state != RBD_LOCK_STATE_RELEASING)
+@@ -4250,15 +4248,11 @@ static void rbd_handle_acquired_lock(struct rbd_device *rbd_dev, u8 struct_v,
+ 	if (!rbd_cid_equal(&cid, &rbd_empty_cid)) {
+ 		down_write(&rbd_dev->lock_rwsem);
+ 		if (rbd_cid_equal(&cid, &rbd_dev->owner_cid)) {
+-			/*
+-			 * we already know that the remote client is
+-			 * the owner
+-			 */
+-			up_write(&rbd_dev->lock_rwsem);
+-			return;
++			dout("%s rbd_dev %p cid %llu-%llu == owner_cid\n",
++			     __func__, rbd_dev, cid.gid, cid.handle);
++		} else {
++			rbd_set_owner_cid(rbd_dev, &cid);
+ 		}
+-
+-		rbd_set_owner_cid(rbd_dev, &cid);
+ 		downgrade_write(&rbd_dev->lock_rwsem);
+ 	} else {
+ 		down_read(&rbd_dev->lock_rwsem);
+@@ -4283,14 +4277,12 @@ static void rbd_handle_released_lock(struct rbd_device *rbd_dev, u8 struct_v,
+ 	if (!rbd_cid_equal(&cid, &rbd_empty_cid)) {
+ 		down_write(&rbd_dev->lock_rwsem);
+ 		if (!rbd_cid_equal(&cid, &rbd_dev->owner_cid)) {
+-			dout("%s rbd_dev %p unexpected owner, cid %llu-%llu != owner_cid %llu-%llu\n",
++			dout("%s rbd_dev %p cid %llu-%llu != owner_cid %llu-%llu\n",
+ 			     __func__, rbd_dev, cid.gid, cid.handle,
+ 			     rbd_dev->owner_cid.gid, rbd_dev->owner_cid.handle);
+-			up_write(&rbd_dev->lock_rwsem);
+-			return;
++		} else {
++			rbd_set_owner_cid(rbd_dev, &rbd_empty_cid);
+ 		}
+-
+-		rbd_set_owner_cid(rbd_dev, &rbd_empty_cid);
+ 		downgrade_write(&rbd_dev->lock_rwsem);
+ 	} else {
+ 		down_read(&rbd_dev->lock_rwsem);
+diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
+index d86ce1a06b75c..614dd287cb4ff 100644
+--- a/drivers/bus/mhi/core/main.c
++++ b/drivers/bus/mhi/core/main.c
+@@ -706,11 +706,18 @@ static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
+ 	cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
+ 
+ 	chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
+-	mhi_chan = &mhi_cntrl->mhi_chan[chan];
+-	write_lock_bh(&mhi_chan->lock);
+-	mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
+-	complete(&mhi_chan->completion);
+-	write_unlock_bh(&mhi_chan->lock);
++
++	if (chan < mhi_cntrl->max_chan &&
++	    mhi_cntrl->mhi_chan[chan].configured) {
++		mhi_chan = &mhi_cntrl->mhi_chan[chan];
++		write_lock_bh(&mhi_chan->lock);
++		mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
++		complete(&mhi_chan->completion);
++		write_unlock_bh(&mhi_chan->lock);
++	} else {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Completion packet for invalid channel ID: %d\n", chan);
++	}
+ 
+ 	mhi_del_ring_element(mhi_cntrl, mhi_ring);
+ }
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 4b7ee3fa9224f..847f33ffc4aed 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -896,6 +896,7 @@ static int __init efi_memreserve_map_root(void)
+ static int efi_mem_reserve_iomem(phys_addr_t addr, u64 size)
+ {
+ 	struct resource *res, *parent;
++	int ret;
+ 
+ 	res = kzalloc(sizeof(struct resource), GFP_ATOMIC);
+ 	if (!res)
+@@ -908,7 +909,17 @@ static int efi_mem_reserve_iomem(phys_addr_t addr, u64 size)
+ 
+ 	/* we expect a conflict with a 'System RAM' region */
+ 	parent = request_resource_conflict(&iomem_resource, res);
+-	return parent ? request_resource(parent, res) : 0;
++	ret = parent ? request_resource(parent, res) : 0;
++
++	/*
++	 * Given that efi_mem_reserve_iomem() can be called at any
++	 * time, only call memblock_reserve() if the architecture
++	 * keeps the infrastructure around.
++	 */
++	if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK) && !ret)
++		memblock_reserve(addr, size);
++
++	return ret;
+ }
+ 
+ int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
+diff --git a/drivers/firmware/efi/tpm.c b/drivers/firmware/efi/tpm.c
+index c1955d320fecd..8f665678e9e39 100644
+--- a/drivers/firmware/efi/tpm.c
++++ b/drivers/firmware/efi/tpm.c
+@@ -62,9 +62,11 @@ int __init efi_tpm_eventlog_init(void)
+ 	tbl_size = sizeof(*log_tbl) + log_tbl->size;
+ 	memblock_reserve(efi.tpm_log, tbl_size);
+ 
+-	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR ||
+-	    log_tbl->version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) {
+-		pr_warn(FW_BUG "TPM Final Events table missing or invalid\n");
++	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR) {
++		pr_info("TPM Final Events table not present\n");
++		goto out;
++	} else if (log_tbl->version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) {
++		pr_warn(FW_BUG "TPM Final Events table invalid\n");
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index fc8da5fed779b..0e3ff5c3766ed 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -3137,6 +3137,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_3[] =
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER7_SELECT, 0xf0f001ff, 0x00000000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER8_SELECT, 0xf0f001ff, 0x00000000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER9_SELECT, 0xf0f001ff, 0x00000000),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSX_DEBUG_1, 0x00010000, 0x00010020),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmTA_CNTL_AUX, 0xfff7ffff, 0x01030000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0xffbfffff, 0x00a00000)
+ };
+diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
+index ae647be4a49fb..4606cc938b36d 100644
+--- a/drivers/gpu/drm/drm_ioctl.c
++++ b/drivers/gpu/drm/drm_ioctl.c
+@@ -827,6 +827,9 @@ long drm_ioctl(struct file *filp,
+ 	if (drm_dev_is_unplugged(dev))
+ 		return -ENODEV;
+ 
++       if (DRM_IOCTL_TYPE(cmd) != DRM_IOCTL_BASE)
++               return -ENOTTY;
++
+ 	is_driver_ioctl = nr >= DRM_COMMAND_BASE && nr < DRM_COMMAND_END;
+ 
+ 	if (is_driver_ioctl) {
+diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
+index eb342a7599438..0b1ea29dcffac 100644
+--- a/drivers/gpu/drm/i915/gvt/handlers.c
++++ b/drivers/gpu/drm/i915/gvt/handlers.c
+@@ -1728,6 +1728,21 @@ static int elsp_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
+ 	if (drm_WARN_ON(&i915->drm, !engine))
+ 		return -EINVAL;
+ 
++	/*
++	 * Due to d3_entered is used to indicate skipping PPGTT invalidation on
++	 * vGPU reset, it's set on D0->D3 on PCI config write, and cleared after
++	 * vGPU reset if in resuming.
++	 * In S0ix exit, the device power state also transite from D3 to D0 as
++	 * S3 resume, but no vGPU reset (triggered by QEMU devic model). After
++	 * S0ix exit, all engines continue to work. However the d3_entered
++	 * remains set which will break next vGPU reset logic (miss the expected
++	 * PPGTT invalidation).
++	 * Engines can only work in D0. Thus the 1st elsp write gives GVT a
++	 * chance to clear d3_entered.
++	 */
++	if (vgpu->d3_entered)
++		vgpu->d3_entered = false;
++
+ 	execlist = &vgpu->submission.execlist[engine->id];
+ 
+ 	execlist->elsp_dwords.data[3 - execlist->elsp_dwords.index] = data;
+diff --git a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
+index 5e9ccefb88f62..bbdd086be7f59 100644
+--- a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
++++ b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
+@@ -447,7 +447,6 @@ static int rpi_touchscreen_remove(struct i2c_client *i2c)
+ 	drm_panel_remove(&ts->base);
+ 
+ 	mipi_dsi_device_unregister(ts->dsi);
+-	kfree(ts->dsi);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/pci/ngene/ngene-core.c b/drivers/media/pci/ngene/ngene-core.c
+index f9f94f47d76b6..e1a8c611d01b4 100644
+--- a/drivers/media/pci/ngene/ngene-core.c
++++ b/drivers/media/pci/ngene/ngene-core.c
+@@ -385,7 +385,7 @@ static int ngene_command_config_free_buf(struct ngene *dev, u8 *config)
+ 
+ 	com.cmd.hdr.Opcode = CMD_CONFIGURE_FREE_BUFFER;
+ 	com.cmd.hdr.Length = 6;
+-	memcpy(&com.cmd.ConfigureBuffers.config, config, 6);
++	memcpy(&com.cmd.ConfigureFreeBuffers.config, config, 6);
+ 	com.in_len = 6;
+ 	com.out_len = 0;
+ 
+diff --git a/drivers/media/pci/ngene/ngene.h b/drivers/media/pci/ngene/ngene.h
+index 84f04e0e0cb9a..3d296f1998a1a 100644
+--- a/drivers/media/pci/ngene/ngene.h
++++ b/drivers/media/pci/ngene/ngene.h
+@@ -407,12 +407,14 @@ enum _BUFFER_CONFIGS {
+ 
+ struct FW_CONFIGURE_FREE_BUFFERS {
+ 	struct FW_HEADER hdr;
+-	u8   UVI1_BufferLength;
+-	u8   UVI2_BufferLength;
+-	u8   TVO_BufferLength;
+-	u8   AUD1_BufferLength;
+-	u8   AUD2_BufferLength;
+-	u8   TVA_BufferLength;
++	struct {
++		u8   UVI1_BufferLength;
++		u8   UVI2_BufferLength;
++		u8   TVO_BufferLength;
++		u8   AUD1_BufferLength;
++		u8   AUD2_BufferLength;
++		u8   TVA_BufferLength;
++	} __packed config;
+ } __attribute__ ((__packed__));
+ 
+ struct FW_CONFIGURE_UART {
+diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
+index 7a6f01ace78ac..305ffad131a29 100644
+--- a/drivers/misc/eeprom/at24.c
++++ b/drivers/misc/eeprom/at24.c
+@@ -714,23 +714,20 @@ static int at24_probe(struct i2c_client *client)
+ 	}
+ 
+ 	/*
+-	 * If the 'label' property is not present for the AT24 EEPROM,
+-	 * then nvmem_config.id is initialised to NVMEM_DEVID_AUTO,
+-	 * and this will append the 'devid' to the name of the NVMEM
+-	 * device. This is purely legacy and the AT24 driver has always
+-	 * defaulted to this. However, if the 'label' property is
+-	 * present then this means that the name is specified by the
+-	 * firmware and this name should be used verbatim and so it is
+-	 * not necessary to append the 'devid'.
++	 * We initialize nvmem_config.id to NVMEM_DEVID_AUTO even if the
++	 * label property is set as some platform can have multiple eeproms
++	 * with same label and we can not register each of those with same
++	 * label. Failing to register those eeproms trigger cascade failure
++	 * on such platform.
+ 	 */
++	nvmem_config.id = NVMEM_DEVID_AUTO;
++
+ 	if (device_property_present(dev, "label")) {
+-		nvmem_config.id = NVMEM_DEVID_NONE;
+ 		err = device_property_read_string(dev, "label",
+ 						  &nvmem_config.name);
+ 		if (err)
+ 			return err;
+ 	} else {
+-		nvmem_config.id = NVMEM_DEVID_AUTO;
+ 		nvmem_config.name = dev_name(dev);
+ 	}
+ 
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index fa59e6f4801c1..58112999a69ab 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -74,7 +74,8 @@ static void mmc_host_classdev_release(struct device *dev)
+ {
+ 	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+ 	wakeup_source_unregister(host->ws);
+-	ida_simple_remove(&mmc_host_ida, host->index);
++	if (of_alias_get_id(host->parent->of_node, "mmc") < 0)
++		ida_simple_remove(&mmc_host_ida, host->index);
+ 	kfree(host);
+ }
+ 
+@@ -436,7 +437,7 @@ static int mmc_first_nonreserved_index(void)
+  */
+ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
+ {
+-	int err;
++	int index;
+ 	struct mmc_host *host;
+ 	int alias_id, min_idx, max_idx;
+ 
+@@ -449,20 +450,19 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
+ 
+ 	alias_id = of_alias_get_id(dev->of_node, "mmc");
+ 	if (alias_id >= 0) {
+-		min_idx = alias_id;
+-		max_idx = alias_id + 1;
++		index = alias_id;
+ 	} else {
+ 		min_idx = mmc_first_nonreserved_index();
+ 		max_idx = 0;
+-	}
+ 
+-	err = ida_simple_get(&mmc_host_ida, min_idx, max_idx, GFP_KERNEL);
+-	if (err < 0) {
+-		kfree(host);
+-		return NULL;
++		index = ida_simple_get(&mmc_host_ida, min_idx, max_idx, GFP_KERNEL);
++		if (index < 0) {
++			kfree(host);
++			return NULL;
++		}
+ 	}
+ 
+-	host->index = err;
++	host->index = index;
+ 
+ 	dev_set_name(&host->class_dev, "mmc%d", host->index);
+ 	host->ws = wakeup_source_register(NULL, dev_name(&host->class_dev));
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 345a3f61c723c..018af1e38eb9b 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -385,24 +385,85 @@ static int bond_vlan_rx_kill_vid(struct net_device *bond_dev,
+ static int bond_ipsec_add_sa(struct xfrm_state *xs)
+ {
+ 	struct net_device *bond_dev = xs->xso.dev;
++	struct bond_ipsec *ipsec;
+ 	struct bonding *bond;
+ 	struct slave *slave;
++	int err;
+ 
+ 	if (!bond_dev)
+ 		return -EINVAL;
+ 
++	rcu_read_lock();
+ 	bond = netdev_priv(bond_dev);
+ 	slave = rcu_dereference(bond->curr_active_slave);
+-	xs->xso.real_dev = slave->dev;
+-	bond->xs = xs;
++	if (!slave) {
++		rcu_read_unlock();
++		return -ENODEV;
++	}
+ 
+-	if (!(slave->dev->xfrmdev_ops
+-	      && slave->dev->xfrmdev_ops->xdo_dev_state_add)) {
++	if (!slave->dev->xfrmdev_ops ||
++	    !slave->dev->xfrmdev_ops->xdo_dev_state_add ||
++	    netif_is_bond_master(slave->dev)) {
+ 		slave_warn(bond_dev, slave->dev, "Slave does not support ipsec offload\n");
++		rcu_read_unlock();
+ 		return -EINVAL;
+ 	}
+ 
+-	return slave->dev->xfrmdev_ops->xdo_dev_state_add(xs);
++	ipsec = kmalloc(sizeof(*ipsec), GFP_ATOMIC);
++	if (!ipsec) {
++		rcu_read_unlock();
++		return -ENOMEM;
++	}
++	xs->xso.real_dev = slave->dev;
++
++	err = slave->dev->xfrmdev_ops->xdo_dev_state_add(xs);
++	if (!err) {
++		ipsec->xs = xs;
++		INIT_LIST_HEAD(&ipsec->list);
++		spin_lock_bh(&bond->ipsec_lock);
++		list_add(&ipsec->list, &bond->ipsec_list);
++		spin_unlock_bh(&bond->ipsec_lock);
++	} else {
++		kfree(ipsec);
++	}
++	rcu_read_unlock();
++	return err;
++}
++
++static void bond_ipsec_add_sa_all(struct bonding *bond)
++{
++	struct net_device *bond_dev = bond->dev;
++	struct bond_ipsec *ipsec;
++	struct slave *slave;
++
++	rcu_read_lock();
++	slave = rcu_dereference(bond->curr_active_slave);
++	if (!slave)
++		goto out;
++
++	if (!slave->dev->xfrmdev_ops ||
++	    !slave->dev->xfrmdev_ops->xdo_dev_state_add ||
++	    netif_is_bond_master(slave->dev)) {
++		spin_lock_bh(&bond->ipsec_lock);
++		if (!list_empty(&bond->ipsec_list))
++			slave_warn(bond_dev, slave->dev,
++				   "%s: no slave xdo_dev_state_add\n",
++				   __func__);
++		spin_unlock_bh(&bond->ipsec_lock);
++		goto out;
++	}
++
++	spin_lock_bh(&bond->ipsec_lock);
++	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
++		ipsec->xs->xso.real_dev = slave->dev;
++		if (slave->dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs)) {
++			slave_warn(bond_dev, slave->dev, "%s: failed to add SA\n", __func__);
++			ipsec->xs->xso.real_dev = NULL;
++		}
++	}
++	spin_unlock_bh(&bond->ipsec_lock);
++out:
++	rcu_read_unlock();
+ }
+ 
+ /**
+@@ -412,27 +473,77 @@ static int bond_ipsec_add_sa(struct xfrm_state *xs)
+ static void bond_ipsec_del_sa(struct xfrm_state *xs)
+ {
+ 	struct net_device *bond_dev = xs->xso.dev;
++	struct bond_ipsec *ipsec;
+ 	struct bonding *bond;
+ 	struct slave *slave;
+ 
+ 	if (!bond_dev)
+ 		return;
+ 
++	rcu_read_lock();
+ 	bond = netdev_priv(bond_dev);
+ 	slave = rcu_dereference(bond->curr_active_slave);
+ 
+ 	if (!slave)
+-		return;
++		goto out;
+ 
+-	xs->xso.real_dev = slave->dev;
++	if (!xs->xso.real_dev)
++		goto out;
++
++	WARN_ON(xs->xso.real_dev != slave->dev);
+ 
+-	if (!(slave->dev->xfrmdev_ops
+-	      && slave->dev->xfrmdev_ops->xdo_dev_state_delete)) {
++	if (!slave->dev->xfrmdev_ops ||
++	    !slave->dev->xfrmdev_ops->xdo_dev_state_delete ||
++	    netif_is_bond_master(slave->dev)) {
+ 		slave_warn(bond_dev, slave->dev, "%s: no slave xdo_dev_state_delete\n", __func__);
+-		return;
++		goto out;
+ 	}
+ 
+ 	slave->dev->xfrmdev_ops->xdo_dev_state_delete(xs);
++out:
++	spin_lock_bh(&bond->ipsec_lock);
++	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
++		if (ipsec->xs == xs) {
++			list_del(&ipsec->list);
++			kfree(ipsec);
++			break;
++		}
++	}
++	spin_unlock_bh(&bond->ipsec_lock);
++	rcu_read_unlock();
++}
++
++static void bond_ipsec_del_sa_all(struct bonding *bond)
++{
++	struct net_device *bond_dev = bond->dev;
++	struct bond_ipsec *ipsec;
++	struct slave *slave;
++
++	rcu_read_lock();
++	slave = rcu_dereference(bond->curr_active_slave);
++	if (!slave) {
++		rcu_read_unlock();
++		return;
++	}
++
++	spin_lock_bh(&bond->ipsec_lock);
++	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
++		if (!ipsec->xs->xso.real_dev)
++			continue;
++
++		if (!slave->dev->xfrmdev_ops ||
++		    !slave->dev->xfrmdev_ops->xdo_dev_state_delete ||
++		    netif_is_bond_master(slave->dev)) {
++			slave_warn(bond_dev, slave->dev,
++				   "%s: no slave xdo_dev_state_delete\n",
++				   __func__);
++		} else {
++			slave->dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
++		}
++		ipsec->xs->xso.real_dev = NULL;
++	}
++	spin_unlock_bh(&bond->ipsec_lock);
++	rcu_read_unlock();
+ }
+ 
+ /**
+@@ -443,21 +554,37 @@ static void bond_ipsec_del_sa(struct xfrm_state *xs)
+ static bool bond_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *xs)
+ {
+ 	struct net_device *bond_dev = xs->xso.dev;
+-	struct bonding *bond = netdev_priv(bond_dev);
+-	struct slave *curr_active = rcu_dereference(bond->curr_active_slave);
+-	struct net_device *slave_dev = curr_active->dev;
++	struct net_device *real_dev;
++	struct slave *curr_active;
++	struct bonding *bond;
++	int err;
+ 
+-	if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP)
+-		return true;
++	bond = netdev_priv(bond_dev);
++	rcu_read_lock();
++	curr_active = rcu_dereference(bond->curr_active_slave);
++	real_dev = curr_active->dev;
+ 
+-	if (!(slave_dev->xfrmdev_ops
+-	      && slave_dev->xfrmdev_ops->xdo_dev_offload_ok)) {
+-		slave_warn(bond_dev, slave_dev, "%s: no slave xdo_dev_offload_ok\n", __func__);
+-		return false;
++	if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP) {
++		err = false;
++		goto out;
++	}
++
++	if (!xs->xso.real_dev) {
++		err = false;
++		goto out;
++	}
++
++	if (!real_dev->xfrmdev_ops ||
++	    !real_dev->xfrmdev_ops->xdo_dev_offload_ok ||
++	    netif_is_bond_master(real_dev)) {
++		err = false;
++		goto out;
+ 	}
+ 
+-	xs->xso.real_dev = slave_dev;
+-	return slave_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs);
++	err = real_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs);
++out:
++	rcu_read_unlock();
++	return err;
+ }
+ 
+ static const struct xfrmdev_ops bond_xfrmdev_ops = {
+@@ -974,8 +1101,7 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active)
+ 		return;
+ 
+ #ifdef CONFIG_XFRM_OFFLOAD
+-	if (old_active && bond->xs)
+-		bond_ipsec_del_sa(bond->xs);
++	bond_ipsec_del_sa_all(bond);
+ #endif /* CONFIG_XFRM_OFFLOAD */
+ 
+ 	if (new_active) {
+@@ -1051,10 +1177,7 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active)
+ 	}
+ 
+ #ifdef CONFIG_XFRM_OFFLOAD
+-	if (new_active && bond->xs) {
+-		xfrm_dev_state_flush(dev_net(bond->dev), bond->dev, true);
+-		bond_ipsec_add_sa(bond->xs);
+-	}
++	bond_ipsec_add_sa_all(bond);
+ #endif /* CONFIG_XFRM_OFFLOAD */
+ 
+ 	/* resend IGMP joins since active slave has changed or
+@@ -3293,6 +3416,9 @@ static int bond_master_netdev_event(unsigned long event,
+ 		return bond_event_changename(event_bond);
+ 	case NETDEV_UNREGISTER:
+ 		bond_remove_proc_entry(event_bond);
++#ifdef CONFIG_XFRM_OFFLOAD
++		xfrm_dev_state_flush(dev_net(bond_dev), bond_dev, true);
++#endif /* CONFIG_XFRM_OFFLOAD */
+ 		break;
+ 	case NETDEV_REGISTER:
+ 		bond_create_proc_entry(event_bond);
+@@ -4726,7 +4852,8 @@ void bond_setup(struct net_device *bond_dev)
+ #ifdef CONFIG_XFRM_OFFLOAD
+ 	/* set up xfrm device ops (only supported in active-backup right now) */
+ 	bond_dev->xfrmdev_ops = &bond_xfrmdev_ops;
+-	bond->xs = NULL;
++	INIT_LIST_HEAD(&bond->ipsec_list);
++	spin_lock_init(&bond->ipsec_lock);
+ #endif /* CONFIG_XFRM_OFFLOAD */
+ 
+ 	/* don't acquire bond device's netif_tx_lock when transmitting */
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 429ce8638c2b3..184cbc93328c2 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3433,6 +3433,11 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
+ 	.serdes_irq_enable = mv88e6390_serdes_irq_enable,
+ 	.serdes_irq_status = mv88e6390_serdes_irq_status,
+ 	.gpio_ops = &mv88e6352_gpio_ops,
++	.serdes_get_sset_count = mv88e6390_serdes_get_sset_count,
++	.serdes_get_strings = mv88e6390_serdes_get_strings,
++	.serdes_get_stats = mv88e6390_serdes_get_stats,
++	.serdes_get_regs_len = mv88e6390_serdes_get_regs_len,
++	.serdes_get_regs = mv88e6390_serdes_get_regs,
+ 	.phylink_validate = mv88e6341_phylink_validate,
+ };
+ 
+@@ -4205,6 +4210,11 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
+ 	.gpio_ops = &mv88e6352_gpio_ops,
+ 	.avb_ops = &mv88e6390_avb_ops,
+ 	.ptp_ops = &mv88e6352_ptp_ops,
++	.serdes_get_sset_count = mv88e6390_serdes_get_sset_count,
++	.serdes_get_strings = mv88e6390_serdes_get_strings,
++	.serdes_get_stats = mv88e6390_serdes_get_stats,
++	.serdes_get_regs_len = mv88e6390_serdes_get_regs_len,
++	.serdes_get_regs = mv88e6390_serdes_get_regs,
+ 	.phylink_validate = mv88e6341_phylink_validate,
+ };
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/serdes.c b/drivers/net/dsa/mv88e6xxx/serdes.c
+index 9c07b4f3d3454..6920e62c864df 100644
+--- a/drivers/net/dsa/mv88e6xxx/serdes.c
++++ b/drivers/net/dsa/mv88e6xxx/serdes.c
+@@ -590,7 +590,7 @@ static struct mv88e6390_serdes_hw_stat mv88e6390_serdes_hw_stats[] = {
+ 
+ int mv88e6390_serdes_get_sset_count(struct mv88e6xxx_chip *chip, int port)
+ {
+-	if (mv88e6390_serdes_get_lane(chip, port) == 0)
++	if (mv88e6xxx_serdes_get_lane(chip, port) == 0)
+ 		return 0;
+ 
+ 	return ARRAY_SIZE(mv88e6390_serdes_hw_stats);
+@@ -602,7 +602,7 @@ int mv88e6390_serdes_get_strings(struct mv88e6xxx_chip *chip,
+ 	struct mv88e6390_serdes_hw_stat *stat;
+ 	int i;
+ 
+-	if (mv88e6390_serdes_get_lane(chip, port) == 0)
++	if (mv88e6xxx_serdes_get_lane(chip, port) == 0)
+ 		return 0;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(mv88e6390_serdes_hw_stats); i++) {
+@@ -638,7 +638,7 @@ int mv88e6390_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
+ 	int lane;
+ 	int i;
+ 
+-	lane = mv88e6390_serdes_get_lane(chip, port);
++	lane = mv88e6xxx_serdes_get_lane(chip, port);
+ 	if (lane == 0)
+ 		return 0;
+ 
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index 82852c57cc0e4..82b918d361173 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -350,6 +350,12 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
+ 		if (dsa_is_cpu_port(ds, port))
+ 			v->pvid = true;
+ 		list_add(&v->list, &priv->dsa_8021q_vlans);
++
++		v = kmemdup(v, sizeof(*v), GFP_KERNEL);
++		if (!v)
++			return -ENOMEM;
++
++		list_add(&v->list, &priv->bridge_vlans);
+ 	}
+ 
+ 	((struct sja1105_vlan_lookup_entry *)table->entries)[0] = pvid;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index db1b89f570794..8f169508a90a9 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1633,11 +1633,16 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
+ 
+ 	if ((tpa_info->flags2 & RX_CMP_FLAGS2_META_FORMAT_VLAN) &&
+ 	    (skb->dev->features & BNXT_HW_FEATURE_VLAN_ALL_RX)) {
+-		u16 vlan_proto = tpa_info->metadata >>
+-			RX_CMP_FLAGS2_METADATA_TPID_SFT;
++		__be16 vlan_proto = htons(tpa_info->metadata >>
++					  RX_CMP_FLAGS2_METADATA_TPID_SFT);
+ 		u16 vtag = tpa_info->metadata & RX_CMP_FLAGS2_METADATA_TCI_MASK;
+ 
+-		__vlan_hwaccel_put_tag(skb, htons(vlan_proto), vtag);
++		if (eth_type_vlan(vlan_proto)) {
++			__vlan_hwaccel_put_tag(skb, vlan_proto, vtag);
++		} else {
++			dev_kfree_skb(skb);
++			return NULL;
++		}
+ 	}
+ 
+ 	skb_checksum_none_assert(skb);
+@@ -1858,9 +1863,15 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 	    (skb->dev->features & BNXT_HW_FEATURE_VLAN_ALL_RX)) {
+ 		u32 meta_data = le32_to_cpu(rxcmp1->rx_cmp_meta_data);
+ 		u16 vtag = meta_data & RX_CMP_FLAGS2_METADATA_TCI_MASK;
+-		u16 vlan_proto = meta_data >> RX_CMP_FLAGS2_METADATA_TPID_SFT;
++		__be16 vlan_proto = htons(meta_data >>
++					  RX_CMP_FLAGS2_METADATA_TPID_SFT);
+ 
+-		__vlan_hwaccel_put_tag(skb, htons(vlan_proto), vtag);
++		if (eth_type_vlan(vlan_proto)) {
++			__vlan_hwaccel_put_tag(skb, vlan_proto, vtag);
++		} else {
++			dev_kfree_skb(skb);
++			goto next_rx;
++		}
+ 	}
+ 
+ 	skb_checksum_none_assert(skb);
+@@ -9830,6 +9841,12 @@ int bnxt_half_open_nic(struct bnxt *bp)
+ {
+ 	int rc = 0;
+ 
++	if (test_bit(BNXT_STATE_ABORT_ERR, &bp->state)) {
++		netdev_err(bp->dev, "A previous firmware reset has not completed, aborting half open\n");
++		rc = -ENODEV;
++		goto half_open_err;
++	}
++
+ 	rc = bnxt_alloc_mem(bp, false);
+ 	if (rc) {
+ 		netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc);
+@@ -11480,6 +11497,10 @@ static void bnxt_fw_reset_task(struct work_struct *work)
+ 		}
+ 		bp->fw_reset_timestamp = jiffies;
+ 		rtnl_lock();
++		if (test_bit(BNXT_STATE_ABORT_ERR, &bp->state)) {
++			rtnl_unlock();
++			goto fw_reset_abort;
++		}
+ 		bnxt_fw_reset_close(bp);
+ 		if (bp->fw_cap & BNXT_FW_CAP_ERR_RECOVER_RELOAD) {
+ 			bp->fw_reset_state = BNXT_FW_RESET_STATE_POLL_FW_DOWN;
+@@ -12901,7 +12922,8 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev,
+ 	if (netif_running(netdev))
+ 		bnxt_close(netdev);
+ 
+-	pci_disable_device(pdev);
++	if (pci_is_enabled(pdev))
++		pci_disable_device(pdev);
+ 	bnxt_free_ctx_mem(bp);
+ 	kfree(bp->ctx);
+ 	bp->ctx = NULL;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index 64dbbb04b0434..abf169001bf33 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -479,15 +479,16 @@ struct bnxt_en_dev *bnxt_ulp_probe(struct net_device *dev)
+ 		if (!edev)
+ 			return ERR_PTR(-ENOMEM);
+ 		edev->en_ops = &bnxt_en_ops_tbl;
+-		if (bp->flags & BNXT_FLAG_ROCEV1_CAP)
+-			edev->flags |= BNXT_EN_FLAG_ROCEV1_CAP;
+-		if (bp->flags & BNXT_FLAG_ROCEV2_CAP)
+-			edev->flags |= BNXT_EN_FLAG_ROCEV2_CAP;
+ 		edev->net = dev;
+ 		edev->pdev = bp->pdev;
+ 		edev->l2_db_size = bp->db_size;
+ 		edev->l2_db_size_nc = bp->db_size;
+ 		bp->edev = edev;
+ 	}
++	edev->flags &= ~BNXT_EN_FLAG_ROCE_CAP;
++	if (bp->flags & BNXT_FLAG_ROCEV1_CAP)
++		edev->flags |= BNXT_EN_FLAG_ROCEV1_CAP;
++	if (bp->flags & BNXT_FLAG_ROCEV2_CAP)
++		edev->flags |= BNXT_EN_FLAG_ROCEV2_CAP;
+ 	return bp->edev;
+ }
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+index 4cddd628d41b2..9ed3d1ab2ca58 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
++++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+@@ -420,7 +420,7 @@ static int cn23xx_pf_setup_global_input_regs(struct octeon_device *oct)
+ 	 * bits 32:47 indicate the PVF num.
+ 	 */
+ 	for (q_no = 0; q_no < ern; q_no++) {
+-		reg_val = oct->pcie_port << CN23XX_PKT_INPUT_CTL_MAC_NUM_POS;
++		reg_val = (u64)oct->pcie_port << CN23XX_PKT_INPUT_CTL_MAC_NUM_POS;
+ 
+ 		/* for VF assigned queues. */
+ 		if (q_no < oct->sriov_info.pf_srn) {
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 8be525c5e2e4a..6698afad43796 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -2643,6 +2643,9 @@ static void detach_ulds(struct adapter *adap)
+ {
+ 	unsigned int i;
+ 
++	if (!is_uld(adap))
++		return;
++
+ 	mutex_lock(&uld_mutex);
+ 	list_del(&adap->list_node);
+ 
+@@ -7145,10 +7148,13 @@ static void remove_one(struct pci_dev *pdev)
+ 		 */
+ 		destroy_workqueue(adapter->workq);
+ 
+-		if (is_uld(adapter)) {
+-			detach_ulds(adapter);
+-			t4_uld_clean_up(adapter);
+-		}
++		detach_ulds(adapter);
++
++		for_each_port(adapter, i)
++			if (adapter->port[i]->reg_state == NETREG_REGISTERED)
++				unregister_netdev(adapter->port[i]);
++
++		t4_uld_clean_up(adapter);
+ 
+ 		adap_free_hma_mem(adapter);
+ 
+@@ -7156,10 +7162,6 @@ static void remove_one(struct pci_dev *pdev)
+ 
+ 		cxgb4_free_mps_ref_entries(adapter);
+ 
+-		for_each_port(adapter, i)
+-			if (adapter->port[i]->reg_state == NETREG_REGISTERED)
+-				unregister_netdev(adapter->port[i]);
+-
+ 		debugfs_remove_recursive(adapter->debugfs_root);
+ 
+ 		if (!is_t4(adapter->params.chip))
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
+index 743af9e654aa7..17faac715882d 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
+@@ -581,6 +581,9 @@ void t4_uld_clean_up(struct adapter *adap)
+ {
+ 	unsigned int i;
+ 
++	if (!is_uld(adap))
++		return;
++
+ 	mutex_lock(&uld_mutex);
+ 	for (i = 0; i < CXGB4_ULD_MAX; i++) {
+ 		if (!adap->uld[i].handle)
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 3a74e4645ce65..0b714b606ba19 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -1340,13 +1340,16 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	err = register_netdev(dev);
+ 	if (err)
+-		goto abort_with_wq;
++		goto abort_with_gve_init;
+ 
+ 	dev_info(&pdev->dev, "GVE version %s\n", gve_version_str);
+ 	gve_clear_probe_in_progress(priv);
+ 	queue_work(priv->gve_wq, &priv->service_task);
+ 	return 0;
+ 
++abort_with_gve_init:
++	gve_teardown_priv_resources(priv);
++
+ abort_with_wq:
+ 	destroy_workqueue(priv->gve_wq);
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c
+index 12f6c2442a7ad..e53512f6878af 100644
+--- a/drivers/net/ethernet/hisilicon/hip04_eth.c
++++ b/drivers/net/ethernet/hisilicon/hip04_eth.c
+@@ -131,7 +131,7 @@
+ /* buf unit size is cache_line_size, which is 64, so the shift is 6 */
+ #define PPE_BUF_SIZE_SHIFT		6
+ #define PPE_TX_BUF_HOLD			BIT(31)
+-#define CACHE_LINE_MASK			0x3F
++#define SOC_CACHE_LINE_MASK		0x3F
+ #else
+ #define PPE_CFG_QOS_VMID_GRP_SHIFT	8
+ #define PPE_CFG_RX_CTRL_ALIGN_SHIFT	11
+@@ -531,8 +531,8 @@ hip04_mac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ #if defined(CONFIG_HI13X1_GMAC)
+ 	desc->cfg = (__force u32)cpu_to_be32(TX_CLEAR_WB | TX_FINISH_CACHE_INV
+ 		| TX_RELEASE_TO_PPE | priv->port << TX_POOL_SHIFT);
+-	desc->data_offset = (__force u32)cpu_to_be32(phys & CACHE_LINE_MASK);
+-	desc->send_addr =  (__force u32)cpu_to_be32(phys & ~CACHE_LINE_MASK);
++	desc->data_offset = (__force u32)cpu_to_be32(phys & SOC_CACHE_LINE_MASK);
++	desc->send_addr =  (__force u32)cpu_to_be32(phys & ~SOC_CACHE_LINE_MASK);
+ #else
+ 	desc->cfg = (__force u32)cpu_to_be32(TX_CLEAR_WB | TX_FINISH_CACHE_INV);
+ 	desc->send_addr = (__force u32)cpu_to_be32(phys);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
+index 98a9f5e3fe864..98f55fbe6c3d6 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
+@@ -134,7 +134,8 @@ struct hclge_mbx_vf_to_pf_cmd {
+ 	u8 mbx_need_resp;
+ 	u8 rsv1[1];
+ 	u8 msg_len;
+-	u8 rsv2[3];
++	u8 rsv2;
++	u16 match_id;
+ 	struct hclge_vf_to_pf_msg msg;
+ };
+ 
+@@ -144,7 +145,8 @@ struct hclge_mbx_pf_to_vf_cmd {
+ 	u8 dest_vfid;
+ 	u8 rsv[3];
+ 	u8 msg_len;
+-	u8 rsv1[3];
++	u8 rsv1;
++	u16 match_id;
+ 	struct hclge_pf_to_vf_msg msg;
+ };
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index 2c2d53f5c56e1..61f6f0287cbe1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -47,6 +47,7 @@ static int hclge_gen_resp_to_vf(struct hclge_vport *vport,
+ 
+ 	resp_pf_to_vf->dest_vfid = vf_to_pf_req->mbx_src_vfid;
+ 	resp_pf_to_vf->msg_len = vf_to_pf_req->msg_len;
++	resp_pf_to_vf->match_id = vf_to_pf_req->match_id;
+ 
+ 	resp_pf_to_vf->msg.code = HCLGE_MBX_PF_VF_RESP;
+ 	resp_pf_to_vf->msg.vf_mbx_msg_code = vf_to_pf_req->msg.code;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index ac6980acb6f02..d3010d5ab3665 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2518,6 +2518,16 @@ static int hclgevf_rss_init_hw(struct hclgevf_dev *hdev)
+ 
+ static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev)
+ {
++	struct hnae3_handle *nic = &hdev->nic;
++	int ret;
++
++	ret = hclgevf_en_hw_strip_rxvtag(nic, true);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"failed to enable rx vlan offload, ret = %d\n", ret);
++		return ret;
++	}
++
+ 	return hclgevf_set_vlan_filter(&hdev->nic, htons(ETH_P_8021Q), 0,
+ 				       false);
+ }
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index b3ad95ac3d859..361b8d0bd78d7 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -7657,6 +7657,7 @@ err_flashmap:
+ err_ioremap:
+ 	free_netdev(netdev);
+ err_alloc_etherdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_mem_regions(pdev);
+ err_pci_reg:
+ err_dma:
+diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
+index 9e3103fae723c..caedf24c24c1f 100644
+--- a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
++++ b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
+@@ -2227,6 +2227,7 @@ err_sw_init:
+ err_ioremap:
+ 	free_netdev(netdev);
+ err_alloc_netdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_mem_regions(pdev);
+ err_pci_reg:
+ err_dma:
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index ebd08543791bd..f3caf5eab8d4a 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -3759,6 +3759,7 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ err_ioremap:
+ 	free_netdev(netdev);
+ err_alloc_etherdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_regions(pdev);
+ err_pci_reg:
+ err_dma:
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 4b9b5148c916b..e24fb122c03a2 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -931,6 +931,7 @@ static void igb_configure_msix(struct igb_adapter *adapter)
+  **/
+ static int igb_request_msix(struct igb_adapter *adapter)
+ {
++	unsigned int num_q_vectors = adapter->num_q_vectors;
+ 	struct net_device *netdev = adapter->netdev;
+ 	int i, err = 0, vector = 0, free_vector = 0;
+ 
+@@ -939,7 +940,13 @@ static int igb_request_msix(struct igb_adapter *adapter)
+ 	if (err)
+ 		goto err_out;
+ 
+-	for (i = 0; i < adapter->num_q_vectors; i++) {
++	if (num_q_vectors > MAX_Q_VECTORS) {
++		num_q_vectors = MAX_Q_VECTORS;
++		dev_warn(&adapter->pdev->dev,
++			 "The number of queue vectors (%d) is higher than max allowed (%d)\n",
++			 adapter->num_q_vectors, MAX_Q_VECTORS);
++	}
++	for (i = 0; i < num_q_vectors; i++) {
+ 		struct igb_q_vector *q_vector = adapter->q_vector[i];
+ 
+ 		vector++;
+@@ -1678,14 +1685,15 @@ static bool is_any_txtime_enabled(struct igb_adapter *adapter)
+  **/
+ static void igb_config_tx_modes(struct igb_adapter *adapter, int queue)
+ {
+-	struct igb_ring *ring = adapter->tx_ring[queue];
+ 	struct net_device *netdev = adapter->netdev;
+ 	struct e1000_hw *hw = &adapter->hw;
++	struct igb_ring *ring;
+ 	u32 tqavcc, tqavctrl;
+ 	u16 value;
+ 
+ 	WARN_ON(hw->mac.type != e1000_i210);
+ 	WARN_ON(queue < 0 || queue > 1);
++	ring = adapter->tx_ring[queue];
+ 
+ 	/* If any of the Qav features is enabled, configure queues as SR and
+ 	 * with HIGH PRIO. If none is, then configure them with LOW PRIO and
+@@ -3616,6 +3624,7 @@ err_sw_init:
+ err_ioremap:
+ 	free_netdev(netdev);
+ err_alloc_etherdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_mem_regions(pdev);
+ err_pci_reg:
+ err_dma:
+@@ -4836,6 +4845,8 @@ static void igb_clean_tx_ring(struct igb_ring *tx_ring)
+ 					       DMA_TO_DEVICE);
+ 		}
+ 
++		tx_buffer->next_to_watch = NULL;
++
+ 		/* move us one more past the eop_desc for start of next pkt */
+ 		tx_buffer++;
+ 		i++;
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index 6dca67d9c25d8..a97bf7a5f1d67 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -532,7 +532,7 @@ static inline s32 igc_read_phy_reg(struct igc_hw *hw, u32 offset, u16 *data)
+ 	if (hw->phy.ops.read_reg)
+ 		return hw->phy.ops.read_reg(hw, offset, data);
+ 
+-	return 0;
++	return -EOPNOTSUPP;
+ }
+ 
+ void igc_reinit_locked(struct igc_adapter *);
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 7b822cdcc6c58..b9fe2785f5735 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -207,6 +207,8 @@ static void igc_clean_tx_ring(struct igc_ring *tx_ring)
+ 					       DMA_TO_DEVICE);
+ 		}
+ 
++		tx_buffer->next_to_watch = NULL;
++
+ 		/* move us one more past the eop_desc for start of next pkt */
+ 		tx_buffer++;
+ 		i++;
+@@ -5221,6 +5223,7 @@ err_sw_init:
+ err_ioremap:
+ 	free_netdev(netdev);
+ err_alloc_etherdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_mem_regions(pdev);
+ err_pci_reg:
+ err_dma:
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 1bfba87f1ff60..37439b76fcb5e 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -1825,7 +1825,8 @@ static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
+ 				struct sk_buff *skb)
+ {
+ 	if (ring_uses_build_skb(rx_ring)) {
+-		unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
++		unsigned long mask = (unsigned long)ixgbe_rx_pg_size(rx_ring) - 1;
++		unsigned long offset = (unsigned long)(skb->data) & mask;
+ 
+ 		dma_sync_single_range_for_cpu(rx_ring->dev,
+ 					      IXGBE_CB(skb)->dma,
+@@ -11081,6 +11082,7 @@ err_ioremap:
+ 	disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
+ 	free_netdev(netdev);
+ err_alloc_etherdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_mem_regions(pdev);
+ err_pci_reg:
+ err_dma:
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+index caaea2c920a6e..e3e4676af9e45 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+@@ -211,7 +211,7 @@ struct xfrm_state *ixgbevf_ipsec_find_rx_state(struct ixgbevf_ipsec *ipsec,
+ static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs,
+ 					  u32 *mykey, u32 *mysalt)
+ {
+-	struct net_device *dev = xs->xso.dev;
++	struct net_device *dev = xs->xso.real_dev;
+ 	unsigned char *key_data;
+ 	char *alg_name = NULL;
+ 	int key_len;
+@@ -260,12 +260,15 @@ static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs,
+  **/
+ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
+ {
+-	struct net_device *dev = xs->xso.dev;
+-	struct ixgbevf_adapter *adapter = netdev_priv(dev);
+-	struct ixgbevf_ipsec *ipsec = adapter->ipsec;
++	struct net_device *dev = xs->xso.real_dev;
++	struct ixgbevf_adapter *adapter;
++	struct ixgbevf_ipsec *ipsec;
+ 	u16 sa_idx;
+ 	int ret;
+ 
++	adapter = netdev_priv(dev);
++	ipsec = adapter->ipsec;
++
+ 	if (xs->id.proto != IPPROTO_ESP && xs->id.proto != IPPROTO_AH) {
+ 		netdev_err(dev, "Unsupported protocol 0x%04x for IPsec offload\n",
+ 			   xs->id.proto);
+@@ -383,11 +386,14 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
+  **/
+ static void ixgbevf_ipsec_del_sa(struct xfrm_state *xs)
+ {
+-	struct net_device *dev = xs->xso.dev;
+-	struct ixgbevf_adapter *adapter = netdev_priv(dev);
+-	struct ixgbevf_ipsec *ipsec = adapter->ipsec;
++	struct net_device *dev = xs->xso.real_dev;
++	struct ixgbevf_adapter *adapter;
++	struct ixgbevf_ipsec *ipsec;
+ 	u16 sa_idx;
+ 
++	adapter = netdev_priv(dev);
++	ipsec = adapter->ipsec;
++
+ 	if (xs->xso.flags & XFRM_OFFLOAD_INBOUND) {
+ 		sa_idx = xs->xso.offload_handle - IXGBE_IPSEC_BASE_RX_INDEX;
+ 
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 9010aabd97826..e690a1b09e98b 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -5160,7 +5160,8 @@ static int r8169_mdio_register(struct rtl8169_private *tp)
+ 	new_bus->priv = tp;
+ 	new_bus->parent = &pdev->dev;
+ 	new_bus->irq[0] = PHY_IGNORE_INTERRUPT;
+-	snprintf(new_bus->id, MII_BUS_ID_SIZE, "r8169-%x", pci_dev_id(pdev));
++	snprintf(new_bus->id, MII_BUS_ID_SIZE, "r8169-%x-%x",
++		 pci_domain_nr(pdev->bus), pci_dev_id(pdev));
+ 
+ 	new_bus->read = r8169_mdio_read_reg;
+ 	new_bus->write = r8169_mdio_write_reg;
+diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
+index a4a626e9cd9a1..0a8799a208cf4 100644
+--- a/drivers/net/ethernet/sfc/efx_channels.c
++++ b/drivers/net/ethernet/sfc/efx_channels.c
+@@ -889,18 +889,20 @@ int efx_set_channels(struct efx_nic *efx)
+ 			if (efx_channel_is_xdp_tx(channel)) {
+ 				efx_for_each_channel_tx_queue(tx_queue, channel) {
+ 					tx_queue->queue = next_queue++;
+-					netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n",
+-						  channel->channel, tx_queue->label,
+-						  xdp_queue_number, tx_queue->queue);
++
+ 					/* We may have a few left-over XDP TX
+ 					 * queues owing to xdp_tx_queue_count
+ 					 * not dividing evenly by EFX_MAX_TXQ_PER_CHANNEL.
+ 					 * We still allocate and probe those
+ 					 * TXQs, but never use them.
+ 					 */
+-					if (xdp_queue_number < efx->xdp_tx_queue_count)
++					if (xdp_queue_number < efx->xdp_tx_queue_count) {
++						netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n",
++							  channel->channel, tx_queue->label,
++							  xdp_queue_number, tx_queue->queue);
+ 						efx->xdp_tx_queues[xdp_queue_number] = tx_queue;
+-					xdp_queue_number++;
++						xdp_queue_number++;
++					}
+ 				}
+ 			} else {
+ 				efx_for_each_channel_tx_queue(tx_queue, channel) {
+@@ -912,6 +914,7 @@ int efx_set_channels(struct efx_nic *efx)
+ 			}
+ 		}
+ 	}
++	WARN_ON(xdp_queue_number != efx->xdp_tx_queue_count);
+ 
+ 	rc = netif_set_real_num_tx_queues(efx->net_dev, efx->n_tx_channels);
+ 	if (rc)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index ff95400594fc1..53be8fc1d125f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -399,6 +399,7 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct plat_stmmacenet_data *plat;
+ 	struct stmmac_dma_cfg *dma_cfg;
++	int phy_mode;
+ 	int rc;
+ 
+ 	plat = devm_kzalloc(&pdev->dev, sizeof(*plat), GFP_KERNEL);
+@@ -413,10 +414,11 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
+ 		*mac = NULL;
+ 	}
+ 
+-	plat->phy_interface = device_get_phy_mode(&pdev->dev);
+-	if (plat->phy_interface < 0)
+-		return ERR_PTR(plat->phy_interface);
++	phy_mode = device_get_phy_mode(&pdev->dev);
++	if (phy_mode < 0)
++		return ERR_PTR(phy_mode);
+ 
++	plat->phy_interface = phy_mode;
+ 	plat->interface = stmmac_of_get_mac_mode(np);
+ 	if (plat->interface < 0)
+ 		plat->interface = plat->phy_interface;
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index fbfcbd0dcfcbc..5b3aff2c279f7 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -2496,7 +2496,7 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 			   hso_net_init);
+ 	if (!net) {
+ 		dev_err(&interface->dev, "Unable to create ethernet device\n");
+-		goto exit;
++		goto err_hso_dev;
+ 	}
+ 
+ 	hso_net = netdev_priv(net);
+@@ -2509,13 +2509,13 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 				      USB_DIR_IN);
+ 	if (!hso_net->in_endp) {
+ 		dev_err(&interface->dev, "Can't find BULK IN endpoint\n");
+-		goto exit;
++		goto err_net;
+ 	}
+ 	hso_net->out_endp = hso_get_ep(interface, USB_ENDPOINT_XFER_BULK,
+ 				       USB_DIR_OUT);
+ 	if (!hso_net->out_endp) {
+ 		dev_err(&interface->dev, "Can't find BULK OUT endpoint\n");
+-		goto exit;
++		goto err_net;
+ 	}
+ 	SET_NETDEV_DEV(net, &interface->dev);
+ 	SET_NETDEV_DEVTYPE(net, &hso_type);
+@@ -2524,18 +2524,18 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 	for (i = 0; i < MUX_BULK_RX_BUF_COUNT; i++) {
+ 		hso_net->mux_bulk_rx_urb_pool[i] = usb_alloc_urb(0, GFP_KERNEL);
+ 		if (!hso_net->mux_bulk_rx_urb_pool[i])
+-			goto exit;
++			goto err_mux_bulk_rx;
+ 		hso_net->mux_bulk_rx_buf_pool[i] = kzalloc(MUX_BULK_RX_BUF_SIZE,
+ 							   GFP_KERNEL);
+ 		if (!hso_net->mux_bulk_rx_buf_pool[i])
+-			goto exit;
++			goto err_mux_bulk_rx;
+ 	}
+ 	hso_net->mux_bulk_tx_urb = usb_alloc_urb(0, GFP_KERNEL);
+ 	if (!hso_net->mux_bulk_tx_urb)
+-		goto exit;
++		goto err_mux_bulk_rx;
+ 	hso_net->mux_bulk_tx_buf = kzalloc(MUX_BULK_TX_BUF_SIZE, GFP_KERNEL);
+ 	if (!hso_net->mux_bulk_tx_buf)
+-		goto exit;
++		goto err_free_tx_urb;
+ 
+ 	add_net_device(hso_dev);
+ 
+@@ -2543,7 +2543,7 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 	result = register_netdev(net);
+ 	if (result) {
+ 		dev_err(&interface->dev, "Failed to register device\n");
+-		goto exit;
++		goto err_free_tx_buf;
+ 	}
+ 
+ 	hso_log_port(hso_dev);
+@@ -2551,8 +2551,21 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 	hso_create_rfkill(hso_dev, interface);
+ 
+ 	return hso_dev;
+-exit:
+-	hso_free_net_device(hso_dev, true);
++
++err_free_tx_buf:
++	remove_net_device(hso_dev);
++	kfree(hso_net->mux_bulk_tx_buf);
++err_free_tx_urb:
++	usb_free_urb(hso_net->mux_bulk_tx_urb);
++err_mux_bulk_rx:
++	for (i = 0; i < MUX_BULK_RX_BUF_COUNT; i++) {
++		usb_free_urb(hso_net->mux_bulk_rx_urb_pool[i]);
++		kfree(hso_net->mux_bulk_rx_buf_pool[i]);
++	}
++err_net:
++	free_netdev(net);
++err_hso_dev:
++	kfree(hso_dev);
+ 	return NULL;
+ }
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index f520a71a361fc..ff5a16b17133d 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -751,7 +751,10 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
+ 		cpu_to_le64(nvme_sect_to_lba(ns, blk_rq_pos(req)));
+ 	cmnd->write_zeroes.length =
+ 		cpu_to_le16((blk_rq_bytes(req) >> ns->lba_shift) - 1);
+-	cmnd->write_zeroes.control = 0;
++	if (nvme_ns_has_pi(ns))
++		cmnd->write_zeroes.control = cpu_to_le16(NVME_RW_PRINFO_PRACT);
++	else
++		cmnd->write_zeroes.control = 0;
+ 	return BLK_STS_OK;
+ }
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 3f05df98697d3..fb48a88d1acb5 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2596,7 +2596,9 @@ static void nvme_reset_work(struct work_struct *work)
+ 	bool was_suspend = !!(dev->ctrl.ctrl_config & NVME_CC_SHN_NORMAL);
+ 	int result;
+ 
+-	if (WARN_ON(dev->ctrl.state != NVME_CTRL_RESETTING)) {
++	if (dev->ctrl.state != NVME_CTRL_RESETTING) {
++		dev_warn(dev->ctrl.device, "ctrl state %d is not RESETTING\n",
++			 dev->ctrl.state);
+ 		result = -ENODEV;
+ 		goto out;
+ 	}
+@@ -3003,7 +3005,6 @@ static void nvme_remove(struct pci_dev *pdev)
+ 	if (!pci_device_is_present(pdev)) {
+ 		nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DEAD);
+ 		nvme_dev_disable(dev, true);
+-		nvme_dev_remove_admin(dev);
+ 	}
+ 
+ 	flush_work(&dev->ctrl.reset_work);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 41bcdfac03d80..bb1122e257dd4 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5264,7 +5264,8 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0422, quirk_no_ext_tags);
+ static void quirk_amd_harvest_no_ats(struct pci_dev *pdev)
+ {
+ 	if ((pdev->device == 0x7312 && pdev->revision != 0x00) ||
+-	    (pdev->device == 0x7340 && pdev->revision != 0xc5))
++	    (pdev->device == 0x7340 && pdev->revision != 0xc5) ||
++	    (pdev->device == 0x7341 && pdev->revision != 0x00))
+ 		return;
+ 
+ 	pci_info(pdev, "disabling ATS\n");
+@@ -5279,6 +5280,7 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6900, quirk_amd_harvest_no_ats);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7312, quirk_amd_harvest_no_ats);
+ /* AMD Navi14 dGPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7340, quirk_amd_harvest_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7341, quirk_amd_harvest_no_ats);
+ #endif /* CONFIG_PCI_ATS */
+ 
+ /* Freescale PCIe doesn't support MSI in RC mode */
+diff --git a/drivers/pwm/pwm-sprd.c b/drivers/pwm/pwm-sprd.c
+index 5123d948efd6f..9eeb59cb81b68 100644
+--- a/drivers/pwm/pwm-sprd.c
++++ b/drivers/pwm/pwm-sprd.c
+@@ -180,13 +180,10 @@ static int sprd_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 			}
+ 		}
+ 
+-		if (state->period != cstate->period ||
+-		    state->duty_cycle != cstate->duty_cycle) {
+-			ret = sprd_pwm_config(spc, pwm, state->duty_cycle,
+-					      state->period);
+-			if (ret)
+-				return ret;
+-		}
++		ret = sprd_pwm_config(spc, pwm, state->duty_cycle,
++				      state->period);
++		if (ret)
++			return ret;
+ 
+ 		sprd_pwm_write(spc, pwm->hwpwm, SPRD_PWM_ENABLE, 1);
+ 	} else if (cstate->enabled) {
+diff --git a/drivers/regulator/hi6421-regulator.c b/drivers/regulator/hi6421-regulator.c
+index dc631c1a46b4c..d144a4bdb76da 100644
+--- a/drivers/regulator/hi6421-regulator.c
++++ b/drivers/regulator/hi6421-regulator.c
+@@ -366,9 +366,8 @@ static struct hi6421_regulator_info
+ 
+ static int hi6421_regulator_enable(struct regulator_dev *rdev)
+ {
+-	struct hi6421_regulator_pdata *pdata;
++	struct hi6421_regulator_pdata *pdata = rdev_get_drvdata(rdev);
+ 
+-	pdata = dev_get_drvdata(rdev->dev.parent);
+ 	/* hi6421 spec requires regulator enablement must be serialized:
+ 	 *  - Because when BUCK, LDO switching from off to on, it will have
+ 	 *    a huge instantaneous current; so you can not turn on two or
+@@ -385,9 +384,10 @@ static int hi6421_regulator_enable(struct regulator_dev *rdev)
+ 
+ static unsigned int hi6421_regulator_ldo_get_mode(struct regulator_dev *rdev)
+ {
+-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
+-	u32 reg_val;
++	struct hi6421_regulator_info *info;
++	unsigned int reg_val;
+ 
++	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
+ 	regmap_read(rdev->regmap, rdev->desc->enable_reg, &reg_val);
+ 	if (reg_val & info->mode_mask)
+ 		return REGULATOR_MODE_IDLE;
+@@ -397,9 +397,10 @@ static unsigned int hi6421_regulator_ldo_get_mode(struct regulator_dev *rdev)
+ 
+ static unsigned int hi6421_regulator_buck_get_mode(struct regulator_dev *rdev)
+ {
+-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
+-	u32 reg_val;
++	struct hi6421_regulator_info *info;
++	unsigned int reg_val;
+ 
++	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
+ 	regmap_read(rdev->regmap, rdev->desc->enable_reg, &reg_val);
+ 	if (reg_val & info->mode_mask)
+ 		return REGULATOR_MODE_STANDBY;
+@@ -410,9 +411,10 @@ static unsigned int hi6421_regulator_buck_get_mode(struct regulator_dev *rdev)
+ static int hi6421_regulator_ldo_set_mode(struct regulator_dev *rdev,
+ 						unsigned int mode)
+ {
+-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
+-	u32 new_mode;
++	struct hi6421_regulator_info *info;
++	unsigned int new_mode;
+ 
++	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
+ 	switch (mode) {
+ 	case REGULATOR_MODE_NORMAL:
+ 		new_mode = 0;
+@@ -434,9 +436,10 @@ static int hi6421_regulator_ldo_set_mode(struct regulator_dev *rdev,
+ static int hi6421_regulator_buck_set_mode(struct regulator_dev *rdev,
+ 						unsigned int mode)
+ {
+-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
+-	u32 new_mode;
++	struct hi6421_regulator_info *info;
++	unsigned int new_mode;
+ 
++	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
+ 	switch (mode) {
+ 	case REGULATOR_MODE_NORMAL:
+ 		new_mode = 0;
+@@ -459,7 +462,9 @@ static unsigned int
+ hi6421_regulator_ldo_get_optimum_mode(struct regulator_dev *rdev,
+ 			int input_uV, int output_uV, int load_uA)
+ {
+-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
++	struct hi6421_regulator_info *info;
++
++	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
+ 
+ 	if (load_uA > info->eco_microamp)
+ 		return REGULATOR_MODE_NORMAL;
+@@ -543,14 +548,13 @@ static int hi6421_regulator_probe(struct platform_device *pdev)
+ 	if (!pdata)
+ 		return -ENOMEM;
+ 	mutex_init(&pdata->lock);
+-	platform_set_drvdata(pdev, pdata);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(hi6421_regulator_info); i++) {
+ 		/* assign per-regulator data */
+ 		info = &hi6421_regulator_info[i];
+ 
+ 		config.dev = pdev->dev.parent;
+-		config.driver_data = info;
++		config.driver_data = pdata;
+ 		config.regmap = pmic->regmap;
+ 
+ 		rdev = devm_regulator_register(&pdev->dev, &info->desc,
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 2171dab3e5dc8..ac07a9ef35780 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -440,39 +440,10 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
+ 	struct device *dev = container_of(kobj, struct device, kobj);
+ 	struct iscsi_iface *iface = iscsi_dev_to_iface(dev);
+ 	struct iscsi_transport *t = iface->transport;
+-	int param;
+-	int param_type;
++	int param = -1;
+ 
+ 	if (attr == &dev_attr_iface_enabled.attr)
+ 		param = ISCSI_NET_PARAM_IFACE_ENABLE;
+-	else if (attr == &dev_attr_iface_vlan_id.attr)
+-		param = ISCSI_NET_PARAM_VLAN_ID;
+-	else if (attr == &dev_attr_iface_vlan_priority.attr)
+-		param = ISCSI_NET_PARAM_VLAN_PRIORITY;
+-	else if (attr == &dev_attr_iface_vlan_enabled.attr)
+-		param = ISCSI_NET_PARAM_VLAN_ENABLED;
+-	else if (attr == &dev_attr_iface_mtu.attr)
+-		param = ISCSI_NET_PARAM_MTU;
+-	else if (attr == &dev_attr_iface_port.attr)
+-		param = ISCSI_NET_PARAM_PORT;
+-	else if (attr == &dev_attr_iface_ipaddress_state.attr)
+-		param = ISCSI_NET_PARAM_IPADDR_STATE;
+-	else if (attr == &dev_attr_iface_delayed_ack_en.attr)
+-		param = ISCSI_NET_PARAM_DELAYED_ACK_EN;
+-	else if (attr == &dev_attr_iface_tcp_nagle_disable.attr)
+-		param = ISCSI_NET_PARAM_TCP_NAGLE_DISABLE;
+-	else if (attr == &dev_attr_iface_tcp_wsf_disable.attr)
+-		param = ISCSI_NET_PARAM_TCP_WSF_DISABLE;
+-	else if (attr == &dev_attr_iface_tcp_wsf.attr)
+-		param = ISCSI_NET_PARAM_TCP_WSF;
+-	else if (attr == &dev_attr_iface_tcp_timer_scale.attr)
+-		param = ISCSI_NET_PARAM_TCP_TIMER_SCALE;
+-	else if (attr == &dev_attr_iface_tcp_timestamp_en.attr)
+-		param = ISCSI_NET_PARAM_TCP_TIMESTAMP_EN;
+-	else if (attr == &dev_attr_iface_cache_id.attr)
+-		param = ISCSI_NET_PARAM_CACHE_ID;
+-	else if (attr == &dev_attr_iface_redirect_en.attr)
+-		param = ISCSI_NET_PARAM_REDIRECT_EN;
+ 	else if (attr == &dev_attr_iface_def_taskmgmt_tmo.attr)
+ 		param = ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO;
+ 	else if (attr == &dev_attr_iface_header_digest.attr)
+@@ -509,6 +480,38 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
+ 		param = ISCSI_IFACE_PARAM_STRICT_LOGIN_COMP_EN;
+ 	else if (attr == &dev_attr_iface_initiator_name.attr)
+ 		param = ISCSI_IFACE_PARAM_INITIATOR_NAME;
++
++	if (param != -1)
++		return t->attr_is_visible(ISCSI_IFACE_PARAM, param);
++
++	if (attr == &dev_attr_iface_vlan_id.attr)
++		param = ISCSI_NET_PARAM_VLAN_ID;
++	else if (attr == &dev_attr_iface_vlan_priority.attr)
++		param = ISCSI_NET_PARAM_VLAN_PRIORITY;
++	else if (attr == &dev_attr_iface_vlan_enabled.attr)
++		param = ISCSI_NET_PARAM_VLAN_ENABLED;
++	else if (attr == &dev_attr_iface_mtu.attr)
++		param = ISCSI_NET_PARAM_MTU;
++	else if (attr == &dev_attr_iface_port.attr)
++		param = ISCSI_NET_PARAM_PORT;
++	else if (attr == &dev_attr_iface_ipaddress_state.attr)
++		param = ISCSI_NET_PARAM_IPADDR_STATE;
++	else if (attr == &dev_attr_iface_delayed_ack_en.attr)
++		param = ISCSI_NET_PARAM_DELAYED_ACK_EN;
++	else if (attr == &dev_attr_iface_tcp_nagle_disable.attr)
++		param = ISCSI_NET_PARAM_TCP_NAGLE_DISABLE;
++	else if (attr == &dev_attr_iface_tcp_wsf_disable.attr)
++		param = ISCSI_NET_PARAM_TCP_WSF_DISABLE;
++	else if (attr == &dev_attr_iface_tcp_wsf.attr)
++		param = ISCSI_NET_PARAM_TCP_WSF;
++	else if (attr == &dev_attr_iface_tcp_timer_scale.attr)
++		param = ISCSI_NET_PARAM_TCP_TIMER_SCALE;
++	else if (attr == &dev_attr_iface_tcp_timestamp_en.attr)
++		param = ISCSI_NET_PARAM_TCP_TIMESTAMP_EN;
++	else if (attr == &dev_attr_iface_cache_id.attr)
++		param = ISCSI_NET_PARAM_CACHE_ID;
++	else if (attr == &dev_attr_iface_redirect_en.attr)
++		param = ISCSI_NET_PARAM_REDIRECT_EN;
+ 	else if (iface->iface_type == ISCSI_IFACE_TYPE_IPV4) {
+ 		if (attr == &dev_attr_ipv4_iface_ipaddress.attr)
+ 			param = ISCSI_NET_PARAM_IPV4_ADDR;
+@@ -599,32 +602,7 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
+ 		return 0;
+ 	}
+ 
+-	switch (param) {
+-	case ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO:
+-	case ISCSI_IFACE_PARAM_HDRDGST_EN:
+-	case ISCSI_IFACE_PARAM_DATADGST_EN:
+-	case ISCSI_IFACE_PARAM_IMM_DATA_EN:
+-	case ISCSI_IFACE_PARAM_INITIAL_R2T_EN:
+-	case ISCSI_IFACE_PARAM_DATASEQ_INORDER_EN:
+-	case ISCSI_IFACE_PARAM_PDU_INORDER_EN:
+-	case ISCSI_IFACE_PARAM_ERL:
+-	case ISCSI_IFACE_PARAM_MAX_RECV_DLENGTH:
+-	case ISCSI_IFACE_PARAM_FIRST_BURST:
+-	case ISCSI_IFACE_PARAM_MAX_R2T:
+-	case ISCSI_IFACE_PARAM_MAX_BURST:
+-	case ISCSI_IFACE_PARAM_CHAP_AUTH_EN:
+-	case ISCSI_IFACE_PARAM_BIDI_CHAP_EN:
+-	case ISCSI_IFACE_PARAM_DISCOVERY_AUTH_OPTIONAL:
+-	case ISCSI_IFACE_PARAM_DISCOVERY_LOGOUT_EN:
+-	case ISCSI_IFACE_PARAM_STRICT_LOGIN_COMP_EN:
+-	case ISCSI_IFACE_PARAM_INITIATOR_NAME:
+-		param_type = ISCSI_IFACE_PARAM;
+-		break;
+-	default:
+-		param_type = ISCSI_NET_PARAM;
+-	}
+-
+-	return t->attr_is_visible(param_type, param);
++	return t->attr_is_visible(ISCSI_NET_PARAM, param);
+ }
+ 
+ static struct attribute *iscsi_iface_attrs[] = {
+diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c
+index 29ee555a42f90..33c32e9317675 100644
+--- a/drivers/spi/spi-bcm2835.c
++++ b/drivers/spi/spi-bcm2835.c
+@@ -84,6 +84,7 @@ MODULE_PARM_DESC(polling_limit_us,
+  * struct bcm2835_spi - BCM2835 SPI controller
+  * @regs: base address of register map
+  * @clk: core clock, divided to calculate serial clock
++ * @clk_hz: core clock cached speed
+  * @irq: interrupt, signals TX FIFO empty or RX FIFO ¾ full
+  * @tfr: SPI transfer currently processed
+  * @ctlr: SPI controller reverse lookup
+@@ -124,6 +125,7 @@ MODULE_PARM_DESC(polling_limit_us,
+ struct bcm2835_spi {
+ 	void __iomem *regs;
+ 	struct clk *clk;
++	unsigned long clk_hz;
+ 	int irq;
+ 	struct spi_transfer *tfr;
+ 	struct spi_controller *ctlr;
+@@ -1082,19 +1084,18 @@ static int bcm2835_spi_transfer_one(struct spi_controller *ctlr,
+ 				    struct spi_transfer *tfr)
+ {
+ 	struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr);
+-	unsigned long spi_hz, clk_hz, cdiv;
++	unsigned long spi_hz, cdiv;
+ 	unsigned long hz_per_byte, byte_limit;
+ 	u32 cs = bs->prepare_cs[spi->chip_select];
+ 
+ 	/* set clock */
+ 	spi_hz = tfr->speed_hz;
+-	clk_hz = clk_get_rate(bs->clk);
+ 
+-	if (spi_hz >= clk_hz / 2) {
++	if (spi_hz >= bs->clk_hz / 2) {
+ 		cdiv = 2; /* clk_hz/2 is the fastest we can go */
+ 	} else if (spi_hz) {
+ 		/* CDIV must be a multiple of two */
+-		cdiv = DIV_ROUND_UP(clk_hz, spi_hz);
++		cdiv = DIV_ROUND_UP(bs->clk_hz, spi_hz);
+ 		cdiv += (cdiv % 2);
+ 
+ 		if (cdiv >= 65536)
+@@ -1102,7 +1103,7 @@ static int bcm2835_spi_transfer_one(struct spi_controller *ctlr,
+ 	} else {
+ 		cdiv = 0; /* 0 is the slowest we can go */
+ 	}
+-	tfr->effective_speed_hz = cdiv ? (clk_hz / cdiv) : (clk_hz / 65536);
++	tfr->effective_speed_hz = cdiv ? (bs->clk_hz / cdiv) : (bs->clk_hz / 65536);
+ 	bcm2835_wr(bs, BCM2835_SPI_CLK, cdiv);
+ 
+ 	/* handle all the 3-wire mode */
+@@ -1318,6 +1319,7 @@ static int bcm2835_spi_probe(struct platform_device *pdev)
+ 		return bs->irq ? bs->irq : -ENODEV;
+ 
+ 	clk_prepare_enable(bs->clk);
++	bs->clk_hz = clk_get_rate(bs->clk);
+ 
+ 	err = bcm2835_dma_init(ctlr, &pdev->dev, bs);
+ 	if (err)
+diff --git a/drivers/spi/spi-cadence.c b/drivers/spi/spi-cadence.c
+index a3afd1b9ac567..ceb16e70d235a 100644
+--- a/drivers/spi/spi-cadence.c
++++ b/drivers/spi/spi-cadence.c
+@@ -517,6 +517,12 @@ static int cdns_spi_probe(struct platform_device *pdev)
+ 		goto clk_dis_apb;
+ 	}
+ 
++	pm_runtime_use_autosuspend(&pdev->dev);
++	pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT);
++	pm_runtime_get_noresume(&pdev->dev);
++	pm_runtime_set_active(&pdev->dev);
++	pm_runtime_enable(&pdev->dev);
++
+ 	ret = of_property_read_u32(pdev->dev.of_node, "num-cs", &num_cs);
+ 	if (ret < 0)
+ 		master->num_chipselect = CDNS_SPI_DEFAULT_NUM_CS;
+@@ -531,11 +537,6 @@ static int cdns_spi_probe(struct platform_device *pdev)
+ 	/* SPI controller initializations */
+ 	cdns_spi_init_hw(xspi);
+ 
+-	pm_runtime_set_active(&pdev->dev);
+-	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_use_autosuspend(&pdev->dev);
+-	pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT);
+-
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq <= 0) {
+ 		ret = -ENXIO;
+@@ -566,6 +567,9 @@ static int cdns_spi_probe(struct platform_device *pdev)
+ 
+ 	master->bits_per_word_mask = SPI_BPW_MASK(8);
+ 
++	pm_runtime_mark_last_busy(&pdev->dev);
++	pm_runtime_put_autosuspend(&pdev->dev);
++
+ 	ret = spi_register_master(master);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "spi_register_master failed\n");
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 831a38920fa98..c8b750d8ac35f 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -66,8 +66,7 @@ struct spi_imx_data;
+ struct spi_imx_devtype_data {
+ 	void (*intctrl)(struct spi_imx_data *, int);
+ 	int (*prepare_message)(struct spi_imx_data *, struct spi_message *);
+-	int (*prepare_transfer)(struct spi_imx_data *, struct spi_device *,
+-				struct spi_transfer *);
++	int (*prepare_transfer)(struct spi_imx_data *, struct spi_device *);
+ 	void (*trigger)(struct spi_imx_data *);
+ 	int (*rx_available)(struct spi_imx_data *);
+ 	void (*reset)(struct spi_imx_data *);
+@@ -572,11 +571,10 @@ static int mx51_ecspi_prepare_message(struct spi_imx_data *spi_imx,
+ }
+ 
+ static int mx51_ecspi_prepare_transfer(struct spi_imx_data *spi_imx,
+-				       struct spi_device *spi,
+-				       struct spi_transfer *t)
++				       struct spi_device *spi)
+ {
+ 	u32 ctrl = readl(spi_imx->base + MX51_ECSPI_CTRL);
+-	u32 clk = t->speed_hz, delay;
++	u32 clk, delay;
+ 
+ 	/* Clear BL field and set the right value */
+ 	ctrl &= ~MX51_ECSPI_CTRL_BL_MASK;
+@@ -590,7 +588,7 @@ static int mx51_ecspi_prepare_transfer(struct spi_imx_data *spi_imx,
+ 	/* set clock speed */
+ 	ctrl &= ~(0xf << MX51_ECSPI_CTRL_POSTDIV_OFFSET |
+ 		  0xf << MX51_ECSPI_CTRL_PREDIV_OFFSET);
+-	ctrl |= mx51_ecspi_clkdiv(spi_imx, t->speed_hz, &clk);
++	ctrl |= mx51_ecspi_clkdiv(spi_imx, spi_imx->spi_bus_clk, &clk);
+ 	spi_imx->spi_bus_clk = clk;
+ 
+ 	if (spi_imx->usedma)
+@@ -702,13 +700,12 @@ static int mx31_prepare_message(struct spi_imx_data *spi_imx,
+ }
+ 
+ static int mx31_prepare_transfer(struct spi_imx_data *spi_imx,
+-				 struct spi_device *spi,
+-				 struct spi_transfer *t)
++				 struct spi_device *spi)
+ {
+ 	unsigned int reg = MX31_CSPICTRL_ENABLE | MX31_CSPICTRL_MASTER;
+ 	unsigned int clk;
+ 
+-	reg |= spi_imx_clkdiv_2(spi_imx->spi_clk, t->speed_hz, &clk) <<
++	reg |= spi_imx_clkdiv_2(spi_imx->spi_clk, spi_imx->spi_bus_clk, &clk) <<
+ 		MX31_CSPICTRL_DR_SHIFT;
+ 	spi_imx->spi_bus_clk = clk;
+ 
+@@ -807,14 +804,13 @@ static int mx21_prepare_message(struct spi_imx_data *spi_imx,
+ }
+ 
+ static int mx21_prepare_transfer(struct spi_imx_data *spi_imx,
+-				 struct spi_device *spi,
+-				 struct spi_transfer *t)
++				 struct spi_device *spi)
+ {
+ 	unsigned int reg = MX21_CSPICTRL_ENABLE | MX21_CSPICTRL_MASTER;
+ 	unsigned int max = is_imx27_cspi(spi_imx) ? 16 : 18;
+ 	unsigned int clk;
+ 
+-	reg |= spi_imx_clkdiv_1(spi_imx->spi_clk, t->speed_hz, max, &clk)
++	reg |= spi_imx_clkdiv_1(spi_imx->spi_clk, spi_imx->spi_bus_clk, max, &clk)
+ 		<< MX21_CSPICTRL_DR_SHIFT;
+ 	spi_imx->spi_bus_clk = clk;
+ 
+@@ -883,13 +879,12 @@ static int mx1_prepare_message(struct spi_imx_data *spi_imx,
+ }
+ 
+ static int mx1_prepare_transfer(struct spi_imx_data *spi_imx,
+-				struct spi_device *spi,
+-				struct spi_transfer *t)
++				struct spi_device *spi)
+ {
+ 	unsigned int reg = MX1_CSPICTRL_ENABLE | MX1_CSPICTRL_MASTER;
+ 	unsigned int clk;
+ 
+-	reg |= spi_imx_clkdiv_2(spi_imx->spi_clk, t->speed_hz, &clk) <<
++	reg |= spi_imx_clkdiv_2(spi_imx->spi_clk, spi_imx->spi_bus_clk, &clk) <<
+ 		MX1_CSPICTRL_DR_SHIFT;
+ 	spi_imx->spi_bus_clk = clk;
+ 
+@@ -1195,6 +1190,16 @@ static int spi_imx_setupxfer(struct spi_device *spi,
+ 	if (!t)
+ 		return 0;
+ 
++	if (!t->speed_hz) {
++		if (!spi->max_speed_hz) {
++			dev_err(&spi->dev, "no speed_hz provided!\n");
++			return -EINVAL;
++		}
++		dev_dbg(&spi->dev, "using spi->max_speed_hz!\n");
++		spi_imx->spi_bus_clk = spi->max_speed_hz;
++	} else
++		spi_imx->spi_bus_clk = t->speed_hz;
++
+ 	spi_imx->bits_per_word = t->bits_per_word;
+ 
+ 	/*
+@@ -1236,7 +1241,7 @@ static int spi_imx_setupxfer(struct spi_device *spi,
+ 		spi_imx->slave_burst = t->len;
+ 	}
+ 
+-	spi_imx->devtype_data->prepare_transfer(spi_imx, spi, t);
++	spi_imx->devtype_data->prepare_transfer(spi_imx, spi);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 5d643051bf3de..8f2d112f0b5d4 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -434,13 +434,23 @@ static int mtk_spi_fifo_transfer(struct spi_master *master,
+ 	mtk_spi_setup_packet(master);
+ 
+ 	cnt = xfer->len / 4;
+-	iowrite32_rep(mdata->base + SPI_TX_DATA_REG, xfer->tx_buf, cnt);
++	if (xfer->tx_buf)
++		iowrite32_rep(mdata->base + SPI_TX_DATA_REG, xfer->tx_buf, cnt);
++
++	if (xfer->rx_buf)
++		ioread32_rep(mdata->base + SPI_RX_DATA_REG, xfer->rx_buf, cnt);
+ 
+ 	remainder = xfer->len % 4;
+ 	if (remainder > 0) {
+ 		reg_val = 0;
+-		memcpy(&reg_val, xfer->tx_buf + (cnt * 4), remainder);
+-		writel(reg_val, mdata->base + SPI_TX_DATA_REG);
++		if (xfer->tx_buf) {
++			memcpy(&reg_val, xfer->tx_buf + (cnt * 4), remainder);
++			writel(reg_val, mdata->base + SPI_TX_DATA_REG);
++		}
++		if (xfer->rx_buf) {
++			reg_val = readl(mdata->base + SPI_RX_DATA_REG);
++			memcpy(xfer->rx_buf + (cnt * 4), &reg_val, remainder);
++		}
+ 	}
+ 
+ 	mtk_spi_enable_transfer(master);
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 0318f02d62123..8f91f8705eeea 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -1946,6 +1946,7 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 		master->can_dma = stm32_spi_can_dma;
+ 
+ 	pm_runtime_set_active(&pdev->dev);
++	pm_runtime_get_noresume(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
+ 
+ 	ret = spi_register_master(master);
+@@ -1967,6 +1968,8 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 
+ err_pm_disable:
+ 	pm_runtime_disable(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
+ err_dma_release:
+ 	if (spi->dma_tx)
+ 		dma_release_channel(spi->dma_tx);
+@@ -1983,9 +1986,14 @@ static int stm32_spi_remove(struct platform_device *pdev)
+ 	struct spi_master *master = platform_get_drvdata(pdev);
+ 	struct stm32_spi *spi = spi_master_get_devdata(master);
+ 
++	pm_runtime_get_sync(&pdev->dev);
++
+ 	spi_unregister_master(master);
+ 	spi->cfg->disable(spi);
+ 
++	pm_runtime_disable(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
+ 	if (master->dma_tx)
+ 		dma_release_channel(master->dma_tx);
+ 	if (master->dma_rx)
+@@ -1993,7 +2001,6 @@ static int stm32_spi_remove(struct platform_device *pdev)
+ 
+ 	clk_disable_unprepare(spi->clk);
+ 
+-	pm_runtime_disable(&pdev->dev);
+ 
+ 	pinctrl_pm_select_sleep_state(&pdev->dev);
+ 
+diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
+index 6e8b8d30938f6..eaf8551ebc612 100644
+--- a/drivers/target/target_core_sbc.c
++++ b/drivers/target/target_core_sbc.c
+@@ -25,7 +25,7 @@
+ #include "target_core_alua.h"
+ 
+ static sense_reason_t
+-sbc_check_prot(struct se_device *, struct se_cmd *, unsigned char *, u32, bool);
++sbc_check_prot(struct se_device *, struct se_cmd *, unsigned char, u32, bool);
+ static sense_reason_t sbc_execute_unmap(struct se_cmd *cmd);
+ 
+ static sense_reason_t
+@@ -279,14 +279,14 @@ static inline unsigned long long transport_lba_64_ext(unsigned char *cdb)
+ }
+ 
+ static sense_reason_t
+-sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *ops)
++sbc_setup_write_same(struct se_cmd *cmd, unsigned char flags, struct sbc_ops *ops)
+ {
+ 	struct se_device *dev = cmd->se_dev;
+ 	sector_t end_lba = dev->transport->get_blocks(dev) + 1;
+ 	unsigned int sectors = sbc_get_write_same_sectors(cmd);
+ 	sense_reason_t ret;
+ 
+-	if ((flags[0] & 0x04) || (flags[0] & 0x02)) {
++	if ((flags & 0x04) || (flags & 0x02)) {
+ 		pr_err("WRITE_SAME PBDATA and LBDATA"
+ 			" bits not supported for Block Discard"
+ 			" Emulation\n");
+@@ -308,7 +308,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
+ 	}
+ 
+ 	/* We always have ANC_SUP == 0 so setting ANCHOR is always an error */
+-	if (flags[0] & 0x10) {
++	if (flags & 0x10) {
+ 		pr_warn("WRITE SAME with ANCHOR not supported\n");
+ 		return TCM_INVALID_CDB_FIELD;
+ 	}
+@@ -316,7 +316,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
+ 	 * Special case for WRITE_SAME w/ UNMAP=1 that ends up getting
+ 	 * translated into block discard requests within backend code.
+ 	 */
+-	if (flags[0] & 0x08) {
++	if (flags & 0x08) {
+ 		if (!ops->execute_unmap)
+ 			return TCM_UNSUPPORTED_SCSI_OPCODE;
+ 
+@@ -331,7 +331,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
+ 	if (!ops->execute_write_same)
+ 		return TCM_UNSUPPORTED_SCSI_OPCODE;
+ 
+-	ret = sbc_check_prot(dev, cmd, &cmd->t_task_cdb[0], sectors, true);
++	ret = sbc_check_prot(dev, cmd, flags >> 5, sectors, true);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -686,10 +686,9 @@ sbc_set_prot_op_checks(u8 protect, bool fabric_prot, enum target_prot_type prot_
+ }
+ 
+ static sense_reason_t
+-sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char *cdb,
++sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char protect,
+ 	       u32 sectors, bool is_write)
+ {
+-	u8 protect = cdb[1] >> 5;
+ 	int sp_ops = cmd->se_sess->sup_prot_ops;
+ 	int pi_prot_type = dev->dev_attrib.pi_prot_type;
+ 	bool fabric_prot = false;
+@@ -737,7 +736,7 @@ sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char *cdb,
+ 		fallthrough;
+ 	default:
+ 		pr_err("Unable to determine pi_prot_type for CDB: 0x%02x "
+-		       "PROTECT: 0x%02x\n", cdb[0], protect);
++		       "PROTECT: 0x%02x\n", cmd->t_task_cdb[0], protect);
+ 		return TCM_INVALID_CDB_FIELD;
+ 	}
+ 
+@@ -812,7 +811,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		if (sbc_check_dpofua(dev, cmd, cdb))
+ 			return TCM_INVALID_CDB_FIELD;
+ 
+-		ret = sbc_check_prot(dev, cmd, cdb, sectors, false);
++		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, false);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -826,7 +825,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		if (sbc_check_dpofua(dev, cmd, cdb))
+ 			return TCM_INVALID_CDB_FIELD;
+ 
+-		ret = sbc_check_prot(dev, cmd, cdb, sectors, false);
++		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, false);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -840,7 +839,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		if (sbc_check_dpofua(dev, cmd, cdb))
+ 			return TCM_INVALID_CDB_FIELD;
+ 
+-		ret = sbc_check_prot(dev, cmd, cdb, sectors, false);
++		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, false);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -861,7 +860,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		if (sbc_check_dpofua(dev, cmd, cdb))
+ 			return TCM_INVALID_CDB_FIELD;
+ 
+-		ret = sbc_check_prot(dev, cmd, cdb, sectors, true);
++		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, true);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -875,7 +874,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		if (sbc_check_dpofua(dev, cmd, cdb))
+ 			return TCM_INVALID_CDB_FIELD;
+ 
+-		ret = sbc_check_prot(dev, cmd, cdb, sectors, true);
++		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, true);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -890,7 +889,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		if (sbc_check_dpofua(dev, cmd, cdb))
+ 			return TCM_INVALID_CDB_FIELD;
+ 
+-		ret = sbc_check_prot(dev, cmd, cdb, sectors, true);
++		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, true);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -949,7 +948,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 			size = sbc_get_size(cmd, 1);
+ 			cmd->t_task_lba = get_unaligned_be64(&cdb[12]);
+ 
+-			ret = sbc_setup_write_same(cmd, &cdb[10], ops);
++			ret = sbc_setup_write_same(cmd, cdb[10], ops);
+ 			if (ret)
+ 				return ret;
+ 			break;
+@@ -1048,7 +1047,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		size = sbc_get_size(cmd, 1);
+ 		cmd->t_task_lba = get_unaligned_be64(&cdb[2]);
+ 
+-		ret = sbc_setup_write_same(cmd, &cdb[1], ops);
++		ret = sbc_setup_write_same(cmd, cdb[1], ops);
+ 		if (ret)
+ 			return ret;
+ 		break;
+@@ -1066,7 +1065,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		 * Follow sbcr26 with WRITE_SAME (10) and check for the existence
+ 		 * of byte 1 bit 3 UNMAP instead of original reserved field
+ 		 */
+-		ret = sbc_setup_write_same(cmd, &cdb[1], ops);
++		ret = sbc_setup_write_same(cmd, cdb[1], ops);
+ 		if (ret)
+ 			return ret;
+ 		break;
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 357730e8f52f2..95a9bae72f135 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -47,6 +47,7 @@
+ 
+ #define USB_TP_TRANSMISSION_DELAY	40	/* ns */
+ #define USB_TP_TRANSMISSION_DELAY_MAX	65535	/* ns */
++#define USB_PING_RESPONSE_TIME		400	/* ns */
+ 
+ /* Protect struct usb_device->state and ->children members
+  * Note: Both are also protected by ->dev.sem, except that ->state can
+@@ -181,8 +182,9 @@ int usb_device_supports_lpm(struct usb_device *udev)
+ }
+ 
+ /*
+- * Set the Maximum Exit Latency (MEL) for the host to initiate a transition from
+- * either U1 or U2.
++ * Set the Maximum Exit Latency (MEL) for the host to wakup up the path from
++ * U1/U2, send a PING to the device and receive a PING_RESPONSE.
++ * See USB 3.1 section C.1.5.2
+  */
+ static void usb_set_lpm_mel(struct usb_device *udev,
+ 		struct usb3_lpm_parameters *udev_lpm_params,
+@@ -192,35 +194,37 @@ static void usb_set_lpm_mel(struct usb_device *udev,
+ 		unsigned int hub_exit_latency)
+ {
+ 	unsigned int total_mel;
+-	unsigned int device_mel;
+-	unsigned int hub_mel;
+ 
+ 	/*
+-	 * Calculate the time it takes to transition all links from the roothub
+-	 * to the parent hub into U0.  The parent hub must then decode the
+-	 * packet (hub header decode latency) to figure out which port it was
+-	 * bound for.
+-	 *
+-	 * The Hub Header decode latency is expressed in 0.1us intervals (0x1
+-	 * means 0.1us).  Multiply that by 100 to get nanoseconds.
++	 * tMEL1. time to transition path from host to device into U0.
++	 * MEL for parent already contains the delay up to parent, so only add
++	 * the exit latency for the last link (pick the slower exit latency),
++	 * and the hub header decode latency. See USB 3.1 section C 2.2.1
++	 * Store MEL in nanoseconds
+ 	 */
+ 	total_mel = hub_lpm_params->mel +
+-		(hub->descriptor->u.ss.bHubHdrDecLat * 100);
++		max(udev_exit_latency, hub_exit_latency) * 1000 +
++		hub->descriptor->u.ss.bHubHdrDecLat * 100;
+ 
+ 	/*
+-	 * How long will it take to transition the downstream hub's port into
+-	 * U0?  The greater of either the hub exit latency or the device exit
+-	 * latency.
+-	 *
+-	 * The BOS U1/U2 exit latencies are expressed in 1us intervals.
+-	 * Multiply that by 1000 to get nanoseconds.
++	 * tMEL2. Time to submit PING packet. Sum of tTPTransmissionDelay for
++	 * each link + wHubDelay for each hub. Add only for last link.
++	 * tMEL4, the time for PING_RESPONSE to traverse upstream is similar.
++	 * Multiply by 2 to include it as well.
+ 	 */
+-	device_mel = udev_exit_latency * 1000;
+-	hub_mel = hub_exit_latency * 1000;
+-	if (device_mel > hub_mel)
+-		total_mel += device_mel;
+-	else
+-		total_mel += hub_mel;
++	total_mel += (__le16_to_cpu(hub->descriptor->u.ss.wHubDelay) +
++		      USB_TP_TRANSMISSION_DELAY) * 2;
++
++	/*
++	 * tMEL3, tPingResponse. Time taken by device to generate PING_RESPONSE
++	 * after receiving PING. Also add 2100ns as stated in USB 3.1 C 1.5.2.4
++	 * to cover the delay if the PING_RESPONSE is queued behind a Max Packet
++	 * Size DP.
++	 * Note these delays should be added only once for the entire path, so
++	 * add them to the MEL of the device connected to the roothub.
++	 */
++	if (!hub->hdev->parent)
++		total_mel += USB_PING_RESPONSE_TIME + 2100;
+ 
+ 	udev_lpm_params->mel = total_mel;
+ }
+@@ -4040,6 +4044,47 @@ static int usb_set_lpm_timeout(struct usb_device *udev,
+ 	return 0;
+ }
+ 
++/*
++ * Don't allow device intiated U1/U2 if the system exit latency + one bus
++ * interval is greater than the minimum service interval of any active
++ * periodic endpoint. See USB 3.2 section 9.4.9
++ */
++static bool usb_device_may_initiate_lpm(struct usb_device *udev,
++					enum usb3_link_state state)
++{
++	unsigned int sel;		/* us */
++	int i, j;
++
++	if (state == USB3_LPM_U1)
++		sel = DIV_ROUND_UP(udev->u1_params.sel, 1000);
++	else if (state == USB3_LPM_U2)
++		sel = DIV_ROUND_UP(udev->u2_params.sel, 1000);
++	else
++		return false;
++
++	for (i = 0; i < udev->actconfig->desc.bNumInterfaces; i++) {
++		struct usb_interface *intf;
++		struct usb_endpoint_descriptor *desc;
++		unsigned int interval;
++
++		intf = udev->actconfig->interface[i];
++		if (!intf)
++			continue;
++
++		for (j = 0; j < intf->cur_altsetting->desc.bNumEndpoints; j++) {
++			desc = &intf->cur_altsetting->endpoint[j].desc;
++
++			if (usb_endpoint_xfer_int(desc) ||
++			    usb_endpoint_xfer_isoc(desc)) {
++				interval = (1 << (desc->bInterval - 1)) * 125;
++				if (sel + 125 > interval)
++					return false;
++			}
++		}
++	}
++	return true;
++}
++
+ /*
+  * Enable the hub-initiated U1/U2 idle timeouts, and enable device-initiated
+  * U1/U2 entry.
+@@ -4112,20 +4157,23 @@ static void usb_enable_link_state(struct usb_hcd *hcd, struct usb_device *udev,
+ 	 * U1/U2_ENABLE
+ 	 */
+ 	if (udev->actconfig &&
+-	    usb_set_device_initiated_lpm(udev, state, true) == 0) {
+-		if (state == USB3_LPM_U1)
+-			udev->usb3_lpm_u1_enabled = 1;
+-		else if (state == USB3_LPM_U2)
+-			udev->usb3_lpm_u2_enabled = 1;
+-	} else {
+-		/* Don't request U1/U2 entry if the device
+-		 * cannot transition to U1/U2.
+-		 */
+-		usb_set_lpm_timeout(udev, state, 0);
+-		hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state);
++	    usb_device_may_initiate_lpm(udev, state)) {
++		if (usb_set_device_initiated_lpm(udev, state, true)) {
++			/*
++			 * Request to enable device initiated U1/U2 failed,
++			 * better to turn off lpm in this case.
++			 */
++			usb_set_lpm_timeout(udev, state, 0);
++			hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state);
++			return;
++		}
+ 	}
+-}
+ 
++	if (state == USB3_LPM_U1)
++		udev->usb3_lpm_u1_enabled = 1;
++	else if (state == USB3_LPM_U2)
++		udev->usb3_lpm_u2_enabled = 1;
++}
+ /*
+  * Disable the hub-initiated U1/U2 idle timeouts, and disable device-initiated
+  * U1/U2 entry.
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 21e7522655ac9..a54a735b63843 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -502,10 +502,6 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* DJI CineSSD */
+ 	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+ 
+-	/* Fibocom L850-GL LTE Modem */
+-	{ USB_DEVICE(0x2cb7, 0x0007), .driver_info =
+-			USB_QUIRK_IGNORE_REMOTE_WAKEUP },
+-
+ 	/* INTEL VALUE SSD */
+ 	{ USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index d2f623d83bf78..b06286f132c6f 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -2749,12 +2749,14 @@ static void dwc2_hsotg_complete_in(struct dwc2_hsotg *hsotg,
+ 		return;
+ 	}
+ 
+-	/* Zlp for all endpoints, for ep0 only in DATA IN stage */
++	/* Zlp for all endpoints in non DDMA, for ep0 only in DATA IN stage */
+ 	if (hs_ep->send_zlp) {
+-		dwc2_hsotg_program_zlp(hsotg, hs_ep);
+ 		hs_ep->send_zlp = 0;
+-		/* transfer will be completed on next complete interrupt */
+-		return;
++		if (!using_desc_dma(hsotg)) {
++			dwc2_hsotg_program_zlp(hsotg, hs_ep);
++			/* transfer will be completed on next complete interrupt */
++			return;
++		}
+ 	}
+ 
+ 	if (hs_ep->index == 0 && hsotg->ep0_state == DWC2_EP0_DATA_IN) {
+@@ -3900,9 +3902,27 @@ static void dwc2_hsotg_ep_stop_xfr(struct dwc2_hsotg *hsotg,
+ 					 __func__);
+ 		}
+ 	} else {
++		/* Mask GINTSTS_GOUTNAKEFF interrupt */
++		dwc2_hsotg_disable_gsint(hsotg, GINTSTS_GOUTNAKEFF);
++
+ 		if (!(dwc2_readl(hsotg, GINTSTS) & GINTSTS_GOUTNAKEFF))
+ 			dwc2_set_bit(hsotg, DCTL, DCTL_SGOUTNAK);
+ 
++		if (!using_dma(hsotg)) {
++			/* Wait for GINTSTS_RXFLVL interrupt */
++			if (dwc2_hsotg_wait_bit_set(hsotg, GINTSTS,
++						    GINTSTS_RXFLVL, 100)) {
++				dev_warn(hsotg->dev, "%s: timeout GINTSTS.RXFLVL\n",
++					 __func__);
++			} else {
++				/*
++				 * Pop GLOBAL OUT NAK status packet from RxFIFO
++				 * to assert GOUTNAKEFF interrupt
++				 */
++				dwc2_readl(hsotg, GRXSTSP);
++			}
++		}
++
+ 		/* Wait for global nak to take effect */
+ 		if (dwc2_hsotg_wait_bit_set(hsotg, GINTSTS,
+ 					    GINTSTS_GOUTNAKEFF, 100))
+@@ -4348,6 +4368,9 @@ static int dwc2_hsotg_ep_sethalt(struct usb_ep *ep, int value, bool now)
+ 		epctl = dwc2_readl(hs, epreg);
+ 
+ 		if (value) {
++			/* Unmask GOUTNAKEFF interrupt */
++			dwc2_hsotg_en_gsint(hs, GINTSTS_GOUTNAKEFF);
++
+ 			if (!(dwc2_readl(hs, GINTSTS) & GINTSTS_GOUTNAKEFF))
+ 				dwc2_set_bit(hs, DCTL, DCTL_SGOUTNAK);
+ 			// STALL bit will be set in GOUTNAKEFF interrupt handler
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index 2319c9737c2bd..f3f112b08c9b1 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -3861,6 +3861,7 @@ static int tegra_xudc_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ free_eps:
++	pm_runtime_disable(&pdev->dev);
+ 	tegra_xudc_free_eps(xudc);
+ free_event_ring:
+ 	tegra_xudc_free_event_ring(xudc);
+diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
+index b5db2b2d0901a..6793fd99c1cb4 100644
+--- a/drivers/usb/host/ehci-hcd.c
++++ b/drivers/usb/host/ehci-hcd.c
+@@ -703,7 +703,8 @@ EXPORT_SYMBOL_GPL(ehci_setup);
+ static irqreturn_t ehci_irq (struct usb_hcd *hcd)
+ {
+ 	struct ehci_hcd		*ehci = hcd_to_ehci (hcd);
+-	u32			status, masked_status, pcd_status = 0, cmd;
++	u32			status, current_status, masked_status, pcd_status = 0;
++	u32			cmd;
+ 	int			bh;
+ 	unsigned long		flags;
+ 
+@@ -715,19 +716,22 @@ static irqreturn_t ehci_irq (struct usb_hcd *hcd)
+ 	 */
+ 	spin_lock_irqsave(&ehci->lock, flags);
+ 
+-	status = ehci_readl(ehci, &ehci->regs->status);
++	status = 0;
++	current_status = ehci_readl(ehci, &ehci->regs->status);
++restart:
+ 
+ 	/* e.g. cardbus physical eject */
+-	if (status == ~(u32) 0) {
++	if (current_status == ~(u32) 0) {
+ 		ehci_dbg (ehci, "device removed\n");
+ 		goto dead;
+ 	}
++	status |= current_status;
+ 
+ 	/*
+ 	 * We don't use STS_FLR, but some controllers don't like it to
+ 	 * remain on, so mask it out along with the other status bits.
+ 	 */
+-	masked_status = status & (INTR_MASK | STS_FLR);
++	masked_status = current_status & (INTR_MASK | STS_FLR);
+ 
+ 	/* Shared IRQ? */
+ 	if (!masked_status || unlikely(ehci->rh_state == EHCI_RH_HALTED)) {
+@@ -737,6 +741,12 @@ static irqreturn_t ehci_irq (struct usb_hcd *hcd)
+ 
+ 	/* clear (just) interrupts */
+ 	ehci_writel(ehci, masked_status, &ehci->regs->status);
++
++	/* For edge interrupts, don't race with an interrupt bit being raised */
++	current_status = ehci_readl(ehci, &ehci->regs->status);
++	if (current_status & INTR_MASK)
++		goto restart;
++
+ 	cmd = ehci_readl(ehci, &ehci->regs->command);
+ 	bh = 0;
+ 
+diff --git a/drivers/usb/host/max3421-hcd.c b/drivers/usb/host/max3421-hcd.c
+index ebb8180b52ab1..c86d413226eb9 100644
+--- a/drivers/usb/host/max3421-hcd.c
++++ b/drivers/usb/host/max3421-hcd.c
+@@ -153,8 +153,6 @@ struct max3421_hcd {
+ 	 */
+ 	struct urb *curr_urb;
+ 	enum scheduling_pass sched_pass;
+-	struct usb_device *loaded_dev;	/* dev that's loaded into the chip */
+-	int loaded_epnum;		/* epnum whose toggles are loaded */
+ 	int urb_done;			/* > 0 -> no errors, < 0: errno */
+ 	size_t curr_len;
+ 	u8 hien;
+@@ -492,39 +490,17 @@ max3421_set_speed(struct usb_hcd *hcd, struct usb_device *dev)
+  * Caller must NOT hold HCD spinlock.
+  */
+ static void
+-max3421_set_address(struct usb_hcd *hcd, struct usb_device *dev, int epnum,
+-		    int force_toggles)
++max3421_set_address(struct usb_hcd *hcd, struct usb_device *dev, int epnum)
+ {
+-	struct max3421_hcd *max3421_hcd = hcd_to_max3421(hcd);
+-	int old_epnum, same_ep, rcvtog, sndtog;
+-	struct usb_device *old_dev;
++	int rcvtog, sndtog;
+ 	u8 hctl;
+ 
+-	old_dev = max3421_hcd->loaded_dev;
+-	old_epnum = max3421_hcd->loaded_epnum;
+-
+-	same_ep = (dev == old_dev && epnum == old_epnum);
+-	if (same_ep && !force_toggles)
+-		return;
+-
+-	if (old_dev && !same_ep) {
+-		/* save the old end-points toggles: */
+-		u8 hrsl = spi_rd8(hcd, MAX3421_REG_HRSL);
+-
+-		rcvtog = (hrsl >> MAX3421_HRSL_RCVTOGRD_BIT) & 1;
+-		sndtog = (hrsl >> MAX3421_HRSL_SNDTOGRD_BIT) & 1;
+-
+-		/* no locking: HCD (i.e., we) own toggles, don't we? */
+-		usb_settoggle(old_dev, old_epnum, 0, rcvtog);
+-		usb_settoggle(old_dev, old_epnum, 1, sndtog);
+-	}
+ 	/* setup new endpoint's toggle bits: */
+ 	rcvtog = usb_gettoggle(dev, epnum, 0);
+ 	sndtog = usb_gettoggle(dev, epnum, 1);
+ 	hctl = (BIT(rcvtog + MAX3421_HCTL_RCVTOG0_BIT) |
+ 		BIT(sndtog + MAX3421_HCTL_SNDTOG0_BIT));
+ 
+-	max3421_hcd->loaded_epnum = epnum;
+ 	spi_wr8(hcd, MAX3421_REG_HCTL, hctl);
+ 
+ 	/*
+@@ -532,7 +508,6 @@ max3421_set_address(struct usb_hcd *hcd, struct usb_device *dev, int epnum,
+ 	 * address-assignment so it's best to just always load the
+ 	 * address whenever the end-point changed/was forced.
+ 	 */
+-	max3421_hcd->loaded_dev = dev;
+ 	spi_wr8(hcd, MAX3421_REG_PERADDR, dev->devnum);
+ }
+ 
+@@ -667,7 +642,7 @@ max3421_select_and_start_urb(struct usb_hcd *hcd)
+ 	struct max3421_hcd *max3421_hcd = hcd_to_max3421(hcd);
+ 	struct urb *urb, *curr_urb = NULL;
+ 	struct max3421_ep *max3421_ep;
+-	int epnum, force_toggles = 0;
++	int epnum;
+ 	struct usb_host_endpoint *ep;
+ 	struct list_head *pos;
+ 	unsigned long flags;
+@@ -777,7 +752,6 @@ done:
+ 			usb_settoggle(urb->dev, epnum, 0, 1);
+ 			usb_settoggle(urb->dev, epnum, 1, 1);
+ 			max3421_ep->pkt_state = PKT_STATE_SETUP;
+-			force_toggles = 1;
+ 		} else
+ 			max3421_ep->pkt_state = PKT_STATE_TRANSFER;
+ 	}
+@@ -785,7 +759,7 @@ done:
+ 	spin_unlock_irqrestore(&max3421_hcd->lock, flags);
+ 
+ 	max3421_ep->last_active = max3421_hcd->frame_number;
+-	max3421_set_address(hcd, urb->dev, epnum, force_toggles);
++	max3421_set_address(hcd, urb->dev, epnum);
+ 	max3421_set_speed(hcd, urb->dev);
+ 	max3421_next_transfer(hcd, 0);
+ 	return 1;
+@@ -1380,6 +1354,16 @@ max3421_urb_done(struct usb_hcd *hcd)
+ 		status = 0;
+ 	urb = max3421_hcd->curr_urb;
+ 	if (urb) {
++		/* save the old end-points toggles: */
++		u8 hrsl = spi_rd8(hcd, MAX3421_REG_HRSL);
++		int rcvtog = (hrsl >> MAX3421_HRSL_RCVTOGRD_BIT) & 1;
++		int sndtog = (hrsl >> MAX3421_HRSL_SNDTOGRD_BIT) & 1;
++		int epnum = usb_endpoint_num(&urb->ep->desc);
++
++		/* no locking: HCD (i.e., we) own toggles, don't we? */
++		usb_settoggle(urb->dev, epnum, 0, rcvtog);
++		usb_settoggle(urb->dev, epnum, 1, sndtog);
++
+ 		max3421_hcd->curr_urb = NULL;
+ 		spin_lock_irqsave(&max3421_hcd->lock, flags);
+ 		usb_hcd_unlink_urb_from_ep(hcd, urb);
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 74c497fd34762..8466527eb2462 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -1552,11 +1552,12 @@ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf)
+ 	 * Inform the usbcore about resume-in-progress by returning
+ 	 * a non-zero value even if there are no status changes.
+ 	 */
++	spin_lock_irqsave(&xhci->lock, flags);
++
+ 	status = bus_state->resuming_ports;
+ 
+ 	mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC | PORT_CEC;
+ 
+-	spin_lock_irqsave(&xhci->lock, flags);
+ 	/* For each port, did anything change?  If so, set that bit in buf. */
+ 	for (i = 0; i < max_ports; i++) {
+ 		temp = readl(ports[i]->addr);
+diff --git a/drivers/usb/host/xhci-pci-renesas.c b/drivers/usb/host/xhci-pci-renesas.c
+index 431213cdf9e0e..f97ac9f52bf4d 100644
+--- a/drivers/usb/host/xhci-pci-renesas.c
++++ b/drivers/usb/host/xhci-pci-renesas.c
+@@ -207,8 +207,7 @@ static int renesas_check_rom_state(struct pci_dev *pdev)
+ 			return 0;
+ 
+ 		case RENESAS_ROM_STATUS_NO_RESULT: /* No result yet */
+-			dev_dbg(&pdev->dev, "Unknown ROM status ...\n");
+-			break;
++			return 0;
+ 
+ 		case RENESAS_ROM_STATUS_ERROR: /* Error State */
+ 		default: /* All other states are marked as "Reserved states" */
+@@ -225,12 +224,13 @@ static int renesas_fw_check_running(struct pci_dev *pdev)
+ 	u8 fw_state;
+ 	int err;
+ 
+-	/*
+-	 * Only if device has ROM and loaded FW we can skip loading and
+-	 * return success. Otherwise (even unknown state), attempt to load FW.
+-	 */
+-	if (renesas_check_rom(pdev) && !renesas_check_rom_state(pdev))
+-		return 0;
++	/* Check if device has ROM and loaded, if so skip everything */
++	err = renesas_check_rom(pdev);
++	if (err) { /* we have rom */
++		err = renesas_check_rom_state(pdev);
++		if (!err)
++			return err;
++	}
+ 
+ 	/*
+ 	 * Test if the device is actually needing the firmware. As most
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 7bc18cf8042cc..119d1a8fbb194 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -631,7 +631,14 @@ static const struct pci_device_id pci_ids[] = {
+ 	{ /* end: all zeroes */ }
+ };
+ MODULE_DEVICE_TABLE(pci, pci_ids);
++
++/*
++ * Without CONFIG_USB_XHCI_PCI_RENESAS renesas_xhci_check_request_fw() won't
++ * load firmware, so don't encumber the xhci-pci driver with it.
++ */
++#if IS_ENABLED(CONFIG_USB_XHCI_PCI_RENESAS)
+ MODULE_FIRMWARE("renesas_usb_fw.mem");
++#endif
+ 
+ /* pci driver glue; this is a "new style" PCI driver module */
+ static struct pci_driver xhci_pci_driver = {
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 054840a69eb4a..53059ee957ad5 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -446,6 +446,26 @@ void xhci_ring_doorbell_for_active_rings(struct xhci_hcd *xhci,
+ 	ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
+ }
+ 
++static struct xhci_virt_ep *xhci_get_virt_ep(struct xhci_hcd *xhci,
++					     unsigned int slot_id,
++					     unsigned int ep_index)
++{
++	if (slot_id == 0 || slot_id >= MAX_HC_SLOTS) {
++		xhci_warn(xhci, "Invalid slot_id %u\n", slot_id);
++		return NULL;
++	}
++	if (ep_index >= EP_CTX_PER_DEV) {
++		xhci_warn(xhci, "Invalid endpoint index %u\n", ep_index);
++		return NULL;
++	}
++	if (!xhci->devs[slot_id]) {
++		xhci_warn(xhci, "No xhci virt device for slot_id %u\n", slot_id);
++		return NULL;
++	}
++
++	return &xhci->devs[slot_id]->eps[ep_index];
++}
++
+ /* Get the right ring for the given slot_id, ep_index and stream_id.
+  * If the endpoint supports streams, boundary check the URB's stream ID.
+  * If the endpoint doesn't support streams, return the singular endpoint ring.
+@@ -456,7 +476,10 @@ struct xhci_ring *xhci_triad_to_transfer_ring(struct xhci_hcd *xhci,
+ {
+ 	struct xhci_virt_ep *ep;
+ 
+-	ep = &xhci->devs[slot_id]->eps[ep_index];
++	ep = xhci_get_virt_ep(xhci, slot_id, ep_index);
++	if (!ep)
++		return NULL;
++
+ 	/* Common case: no streams */
+ 	if (!(ep->ep_state & EP_HAS_STREAMS))
+ 		return ep->ring;
+@@ -747,11 +770,14 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id,
+ 	memset(&deq_state, 0, sizeof(deq_state));
+ 	ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3]));
+ 
++	ep = xhci_get_virt_ep(xhci, slot_id, ep_index);
++	if (!ep)
++		return;
++
+ 	vdev = xhci->devs[slot_id];
+ 	ep_ctx = xhci_get_ep_ctx(xhci, vdev->out_ctx, ep_index);
+ 	trace_xhci_handle_cmd_stop_ep(ep_ctx);
+ 
+-	ep = &xhci->devs[slot_id]->eps[ep_index];
+ 	last_unlinked_td = list_last_entry(&ep->cancelled_td_list,
+ 			struct xhci_td, cancelled_td_list);
+ 
+@@ -1076,9 +1102,11 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ 
+ 	ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3]));
+ 	stream_id = TRB_TO_STREAM_ID(le32_to_cpu(trb->generic.field[2]));
+-	dev = xhci->devs[slot_id];
+-	ep = &dev->eps[ep_index];
++	ep = xhci_get_virt_ep(xhci, slot_id, ep_index);
++	if (!ep)
++		return;
+ 
++	dev = xhci->devs[slot_id];
+ 	ep_ring = xhci_stream_id_to_ring(dev, ep_index, stream_id);
+ 	if (!ep_ring) {
+ 		xhci_warn(xhci, "WARN Set TR deq ptr command for freed stream ID %u\n",
+@@ -1151,9 +1179,9 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ 	}
+ 
+ cleanup:
+-	dev->eps[ep_index].ep_state &= ~SET_DEQ_PENDING;
+-	dev->eps[ep_index].queued_deq_seg = NULL;
+-	dev->eps[ep_index].queued_deq_ptr = NULL;
++	ep->ep_state &= ~SET_DEQ_PENDING;
++	ep->queued_deq_seg = NULL;
++	ep->queued_deq_ptr = NULL;
+ 	/* Restart any rings with pending URBs */
+ 	ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
+ }
+@@ -1162,10 +1190,15 @@ static void xhci_handle_cmd_reset_ep(struct xhci_hcd *xhci, int slot_id,
+ 		union xhci_trb *trb, u32 cmd_comp_code)
+ {
+ 	struct xhci_virt_device *vdev;
++	struct xhci_virt_ep *ep;
+ 	struct xhci_ep_ctx *ep_ctx;
+ 	unsigned int ep_index;
+ 
+ 	ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3]));
++	ep = xhci_get_virt_ep(xhci, slot_id, ep_index);
++	if (!ep)
++		return;
++
+ 	vdev = xhci->devs[slot_id];
+ 	ep_ctx = xhci_get_ep_ctx(xhci, vdev->out_ctx, ep_index);
+ 	trace_xhci_handle_cmd_reset_ep(ep_ctx);
+@@ -1195,7 +1228,7 @@ static void xhci_handle_cmd_reset_ep(struct xhci_hcd *xhci, int slot_id,
+ 		xhci_ring_cmd_db(xhci);
+ 	} else {
+ 		/* Clear our internal halted state */
+-		xhci->devs[slot_id]->eps[ep_index].ep_state &= ~EP_HALTED;
++		ep->ep_state &= ~EP_HALTED;
+ 	}
+ 
+ 	/* if this was a soft reset, then restart */
+@@ -2364,14 +2397,13 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ 	trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
+ 	ep_trb_dma = le64_to_cpu(event->buffer);
+ 
+-	xdev = xhci->devs[slot_id];
+-	if (!xdev) {
+-		xhci_err(xhci, "ERROR Transfer event pointed to bad slot %u\n",
+-			 slot_id);
++	ep = xhci_get_virt_ep(xhci, slot_id, ep_index);
++	if (!ep) {
++		xhci_err(xhci, "ERROR Invalid Transfer event\n");
+ 		goto err_out;
+ 	}
+ 
+-	ep = &xdev->eps[ep_index];
++	xdev = xhci->devs[slot_id];
+ 	ep_ring = xhci_dma_to_transfer_ring(ep, ep_trb_dma);
+ 	ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index);
+ 
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index d01241f1daf3b..c1865a121100c 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -993,6 +993,7 @@ struct xhci_interval_bw_table {
+ 	unsigned int		ss_bw_out;
+ };
+ 
++#define EP_CTX_PER_DEV		31
+ 
+ struct xhci_virt_device {
+ 	struct usb_device		*udev;
+@@ -1007,7 +1008,7 @@ struct xhci_virt_device {
+ 	struct xhci_container_ctx       *out_ctx;
+ 	/* Used for addressing devices and configuration changes */
+ 	struct xhci_container_ctx       *in_ctx;
+-	struct xhci_virt_ep		eps[31];
++	struct xhci_virt_ep		eps[EP_CTX_PER_DEV];
+ 	u8				fake_port;
+ 	u8				real_port;
+ 	struct xhci_interval_bw_table	*bw_table;
+diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c
+index e6fa137018082..02f49d7c869ee 100644
+--- a/drivers/usb/renesas_usbhs/fifo.c
++++ b/drivers/usb/renesas_usbhs/fifo.c
+@@ -101,6 +101,8 @@ static struct dma_chan *usbhsf_dma_chan_get(struct usbhs_fifo *fifo,
+ #define usbhsf_dma_map(p)	__usbhsf_dma_map_ctrl(p, 1)
+ #define usbhsf_dma_unmap(p)	__usbhsf_dma_map_ctrl(p, 0)
+ static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map);
++static void usbhsf_tx_irq_ctrl(struct usbhs_pipe *pipe, int enable);
++static void usbhsf_rx_irq_ctrl(struct usbhs_pipe *pipe, int enable);
+ struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt)
+ {
+ 	struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
+@@ -123,6 +125,11 @@ struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt)
+ 		if (chan) {
+ 			dmaengine_terminate_all(chan);
+ 			usbhsf_dma_unmap(pkt);
++		} else {
++			if (usbhs_pipe_is_dir_in(pipe))
++				usbhsf_rx_irq_ctrl(pipe, 0);
++			else
++				usbhsf_tx_irq_ctrl(pipe, 0);
+ 		}
+ 
+ 		usbhs_pipe_clear_without_sequence(pipe, 0, 0);
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 28a728f883bc5..329fc25f78a44 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -159,6 +159,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x89A4) }, /* CESINEL FTBC Flexible Thyristor Bridge Controller */
+ 	{ USB_DEVICE(0x10C4, 0x89FB) }, /* Qivicon ZigBee USB Radio Stick */
+ 	{ USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */
++	{ USB_DEVICE(0x10C4, 0x8A5B) }, /* CEL EM3588 ZigBee USB Stick */
+ 	{ USB_DEVICE(0x10C4, 0x8A5E) }, /* CEL EM3588 ZigBee USB Stick Long Range */
+ 	{ USB_DEVICE(0x10C4, 0x8B34) }, /* Qivicon ZigBee USB Radio Stick */
+ 	{ USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
+@@ -206,8 +207,8 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x1901, 0x0194) },	/* GE Healthcare Remote Alarm Box */
+ 	{ USB_DEVICE(0x1901, 0x0195) },	/* GE B850/B650/B450 CP2104 DP UART interface */
+ 	{ USB_DEVICE(0x1901, 0x0196) },	/* GE B850 CP2105 DP UART interface */
+-	{ USB_DEVICE(0x1901, 0x0197) }, /* GE CS1000 Display serial interface */
+-	{ USB_DEVICE(0x1901, 0x0198) }, /* GE CS1000 M.2 Key E serial interface */
++	{ USB_DEVICE(0x1901, 0x0197) }, /* GE CS1000 M.2 Key E serial interface */
++	{ USB_DEVICE(0x1901, 0x0198) }, /* GE CS1000 Display serial interface */
+ 	{ USB_DEVICE(0x199B, 0xBA30) }, /* LORD WSDA-200-USB */
+ 	{ USB_DEVICE(0x19CF, 0x3000) }, /* Parrot NMEA GPS Flight Recorder */
+ 	{ USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 61d94641ddc08..39333f2cba04e 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -238,6 +238,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_UC15			0x9090
+ /* These u-blox products use Qualcomm's vendor ID */
+ #define UBLOX_PRODUCT_R410M			0x90b2
++#define UBLOX_PRODUCT_R6XX			0x90fa
+ /* These Yuga products use Qualcomm's vendor ID */
+ #define YUGA_PRODUCT_CLM920_NC5			0x9625
+ 
+@@ -1101,6 +1102,8 @@ static const struct usb_device_id option_ids[] = {
+ 	/* u-blox products using Qualcomm vendor ID */
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R410M),
+ 	  .driver_info = RSVD(1) | RSVD(3) },
++	{ USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R6XX),
++	  .driver_info = RSVD(3) },
+ 	/* Quectel products using Quectel vendor ID */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21, 0xff, 0xff, 0xff),
+ 	  .driver_info = NUMEP2 },
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index f9677a5ec31b2..c35a6db993f1b 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -45,6 +45,13 @@ UNUSUAL_DEV(0x059f, 0x105f, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME),
+ 
++/* Reported-by: Julian Sikorski <belegdol@gmail.com> */
++UNUSUAL_DEV(0x059f, 0x1061, 0x0000, 0x9999,
++		"LaCie",
++		"Rugged USB3-FW",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_IGNORE_UAS),
++
+ /*
+  * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
+  * commands in UAS mode.  Observed with the 1.28 firmware; are there others?
+diff --git a/drivers/usb/typec/stusb160x.c b/drivers/usb/typec/stusb160x.c
+index 6eaeba9b096e1..3d3848e7c2c2f 100644
+--- a/drivers/usb/typec/stusb160x.c
++++ b/drivers/usb/typec/stusb160x.c
+@@ -739,10 +739,6 @@ static int stusb160x_probe(struct i2c_client *client)
+ 	typec_set_pwr_opmode(chip->port, chip->pwr_opmode);
+ 
+ 	if (client->irq) {
+-		ret = stusb160x_irq_init(chip, client->irq);
+-		if (ret)
+-			goto port_unregister;
+-
+ 		chip->role_sw = fwnode_usb_role_switch_get(fwnode);
+ 		if (IS_ERR(chip->role_sw)) {
+ 			ret = PTR_ERR(chip->role_sw);
+@@ -752,6 +748,10 @@ static int stusb160x_probe(struct i2c_client *client)
+ 					ret);
+ 			goto port_unregister;
+ 		}
++
++		ret = stusb160x_irq_init(chip, client->irq);
++		if (ret)
++			goto role_sw_put;
+ 	} else {
+ 		/*
+ 		 * If Source or Dual power role, need to enable VDD supply
+@@ -775,6 +775,9 @@ static int stusb160x_probe(struct i2c_client *client)
+ 
+ 	return 0;
+ 
++role_sw_put:
++	if (chip->role_sw)
++		usb_role_switch_put(chip->role_sw);
+ port_unregister:
+ 	typec_unregister_port(chip->port);
+ all_reg_disable:
+diff --git a/fs/afs/cmservice.c b/fs/afs/cmservice.c
+index a4e9e6e07e939..2a528b70478c6 100644
+--- a/fs/afs/cmservice.c
++++ b/fs/afs/cmservice.c
+@@ -29,16 +29,11 @@ static void SRXAFSCB_TellMeAboutYourself(struct work_struct *);
+ 
+ static int afs_deliver_yfs_cb_callback(struct afs_call *);
+ 
+-#define CM_NAME(name) \
+-	char afs_SRXCB##name##_name[] __tracepoint_string =	\
+-		"CB." #name
+-
+ /*
+  * CB.CallBack operation type
+  */
+-static CM_NAME(CallBack);
+ static const struct afs_call_type afs_SRXCBCallBack = {
+-	.name		= afs_SRXCBCallBack_name,
++	.name		= "CB.CallBack",
+ 	.deliver	= afs_deliver_cb_callback,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_CallBack,
+@@ -47,9 +42,8 @@ static const struct afs_call_type afs_SRXCBCallBack = {
+ /*
+  * CB.InitCallBackState operation type
+  */
+-static CM_NAME(InitCallBackState);
+ static const struct afs_call_type afs_SRXCBInitCallBackState = {
+-	.name		= afs_SRXCBInitCallBackState_name,
++	.name		= "CB.InitCallBackState",
+ 	.deliver	= afs_deliver_cb_init_call_back_state,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_InitCallBackState,
+@@ -58,9 +52,8 @@ static const struct afs_call_type afs_SRXCBInitCallBackState = {
+ /*
+  * CB.InitCallBackState3 operation type
+  */
+-static CM_NAME(InitCallBackState3);
+ static const struct afs_call_type afs_SRXCBInitCallBackState3 = {
+-	.name		= afs_SRXCBInitCallBackState3_name,
++	.name		= "CB.InitCallBackState3",
+ 	.deliver	= afs_deliver_cb_init_call_back_state3,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_InitCallBackState,
+@@ -69,9 +62,8 @@ static const struct afs_call_type afs_SRXCBInitCallBackState3 = {
+ /*
+  * CB.Probe operation type
+  */
+-static CM_NAME(Probe);
+ static const struct afs_call_type afs_SRXCBProbe = {
+-	.name		= afs_SRXCBProbe_name,
++	.name		= "CB.Probe",
+ 	.deliver	= afs_deliver_cb_probe,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_Probe,
+@@ -80,9 +72,8 @@ static const struct afs_call_type afs_SRXCBProbe = {
+ /*
+  * CB.ProbeUuid operation type
+  */
+-static CM_NAME(ProbeUuid);
+ static const struct afs_call_type afs_SRXCBProbeUuid = {
+-	.name		= afs_SRXCBProbeUuid_name,
++	.name		= "CB.ProbeUuid",
+ 	.deliver	= afs_deliver_cb_probe_uuid,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_ProbeUuid,
+@@ -91,9 +82,8 @@ static const struct afs_call_type afs_SRXCBProbeUuid = {
+ /*
+  * CB.TellMeAboutYourself operation type
+  */
+-static CM_NAME(TellMeAboutYourself);
+ static const struct afs_call_type afs_SRXCBTellMeAboutYourself = {
+-	.name		= afs_SRXCBTellMeAboutYourself_name,
++	.name		= "CB.TellMeAboutYourself",
+ 	.deliver	= afs_deliver_cb_tell_me_about_yourself,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_TellMeAboutYourself,
+@@ -102,9 +92,8 @@ static const struct afs_call_type afs_SRXCBTellMeAboutYourself = {
+ /*
+  * YFS CB.CallBack operation type
+  */
+-static CM_NAME(YFS_CallBack);
+ static const struct afs_call_type afs_SRXYFSCB_CallBack = {
+-	.name		= afs_SRXCBYFS_CallBack_name,
++	.name		= "YFSCB.CallBack",
+ 	.deliver	= afs_deliver_yfs_cb_callback,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_CallBack,
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 73ebe0c5fdbc9..c346c46020e5e 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -5883,6 +5883,9 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+ 	devices = &fs_info->fs_devices->devices;
+ 	list_for_each_entry(device, devices, dev_list) {
++		if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state))
++			continue;
++
+ 		ret = btrfs_trim_free_extents(device, &group_trimmed);
+ 		if (ret) {
+ 			dev_failed++;
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index d560752b764d8..6b00f1d7c8e77 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -4401,7 +4401,7 @@ bool check_session_state(struct ceph_mds_session *s)
+ 		break;
+ 	case CEPH_MDS_SESSION_CLOSING:
+ 		/* Should never reach this when we're unmounting */
+-		WARN_ON_ONCE(true);
++		WARN_ON_ONCE(s->s_ttl);
+ 		fallthrough;
+ 	case CEPH_MDS_SESSION_NEW:
+ 	case CEPH_MDS_SESSION_RESTARTING:
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index f6ceb79a995d0..b0b06eb86edfb 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -3466,7 +3466,7 @@ static int smb3_simple_fallocate_write_range(unsigned int xid,
+ 					     char *buf)
+ {
+ 	struct cifs_io_parms io_parms = {0};
+-	int nbytes;
++	int rc, nbytes;
+ 	struct kvec iov[2];
+ 
+ 	io_parms.netfid = cfile->fid.netfid;
+@@ -3474,13 +3474,25 @@ static int smb3_simple_fallocate_write_range(unsigned int xid,
+ 	io_parms.tcon = tcon;
+ 	io_parms.persistent_fid = cfile->fid.persistent_fid;
+ 	io_parms.volatile_fid = cfile->fid.volatile_fid;
+-	io_parms.offset = off;
+-	io_parms.length = len;
+ 
+-	/* iov[0] is reserved for smb header */
+-	iov[1].iov_base = buf;
+-	iov[1].iov_len = io_parms.length;
+-	return SMB2_write(xid, &io_parms, &nbytes, iov, 1);
++	while (len) {
++		io_parms.offset = off;
++		io_parms.length = len;
++		if (io_parms.length > SMB2_MAX_BUFFER_SIZE)
++			io_parms.length = SMB2_MAX_BUFFER_SIZE;
++		/* iov[0] is reserved for smb header */
++		iov[1].iov_base = buf;
++		iov[1].iov_len = io_parms.length;
++		rc = SMB2_write(xid, &io_parms, &nbytes, iov, 1);
++		if (rc)
++			break;
++		if (nbytes > len)
++			return -EINVAL;
++		buf += nbytes;
++		off += nbytes;
++		len -= nbytes;
++	}
++	return rc;
+ }
+ 
+ static int smb3_simple_fallocate_range(unsigned int xid,
+@@ -3504,11 +3516,6 @@ static int smb3_simple_fallocate_range(unsigned int xid,
+ 			(char **)&out_data, &out_data_len);
+ 	if (rc)
+ 		goto out;
+-	/*
+-	 * It is already all allocated
+-	 */
+-	if (out_data_len == 0)
+-		goto out;
+ 
+ 	buf = kzalloc(1024 * 1024, GFP_KERNEL);
+ 	if (buf == NULL) {
+@@ -3631,6 +3638,24 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+ 		goto out;
+ 	}
+ 
++	if (keep_size == true) {
++		/*
++		 * We can not preallocate pages beyond the end of the file
++		 * in SMB2
++		 */
++		if (off >= i_size_read(inode)) {
++			rc = 0;
++			goto out;
++		}
++		/*
++		 * For fallocates that are partially beyond the end of file,
++		 * clamp len so we only fallocate up to the end of file.
++		 */
++		if (off + len > i_size_read(inode)) {
++			len = i_size_read(inode) - off;
++		}
++	}
++
+ 	if ((keep_size == true) || (i_size_read(inode) >= off + len)) {
+ 		/*
+ 		 * At this point, we are trying to fallocate an internal
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index b7c24d152604d..5fc9ccab907c3 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -77,7 +77,7 @@ enum hugetlb_param {
+ static const struct fs_parameter_spec hugetlb_fs_parameters[] = {
+ 	fsparam_u32   ("gid",		Opt_gid),
+ 	fsparam_string("min_size",	Opt_min_size),
+-	fsparam_u32   ("mode",		Opt_mode),
++	fsparam_u32oct("mode",		Opt_mode),
+ 	fsparam_string("nr_inodes",	Opt_nr_inodes),
+ 	fsparam_string("pagesize",	Opt_pagesize),
+ 	fsparam_string("size",		Opt_size),
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 42153106b7bc9..07f08c424d17b 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4916,6 +4916,7 @@ static int io_connect(struct io_kiocb *req, bool force_nonblock,
+ struct io_poll_table {
+ 	struct poll_table_struct pt;
+ 	struct io_kiocb *req;
++	int nr_entries;
+ 	int error;
+ };
+ 
+@@ -5098,11 +5099,11 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+ 	struct io_kiocb *req = pt->req;
+ 
+ 	/*
+-	 * If poll->head is already set, it's because the file being polled
+-	 * uses multiple waitqueues for poll handling (eg one for read, one
+-	 * for write). Setup a separate io_poll_iocb if this happens.
++	 * The file being polled uses multiple waitqueues for poll handling
++	 * (e.g. one for read, one for write). Setup a separate io_poll_iocb
++	 * if this happens.
+ 	 */
+-	if (unlikely(poll->head)) {
++	if (unlikely(pt->nr_entries)) {
+ 		struct io_poll_iocb *poll_one = poll;
+ 
+ 		/* already have a 2nd entry, fail a third attempt */
+@@ -5124,7 +5125,7 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+ 		*poll_ptr = poll;
+ 	}
+ 
+-	pt->error = 0;
++	pt->nr_entries++;
+ 	poll->head = head;
+ 
+ 	if (poll->events & EPOLLEXCLUSIVE)
+@@ -5210,11 +5211,16 @@ static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
+ 
+ 	ipt->pt._key = mask;
+ 	ipt->req = req;
+-	ipt->error = -EINVAL;
++	ipt->error = 0;
++	ipt->nr_entries = 0;
+ 
+ 	mask = vfs_poll(req->file, &ipt->pt) & poll->events;
++	if (unlikely(!ipt->nr_entries) && !ipt->error)
++		ipt->error = -EINVAL;
+ 
+ 	spin_lock_irq(&ctx->completion_lock);
++	if (ipt->error)
++		io_poll_remove_double(req);
+ 	if (likely(poll->head)) {
+ 		spin_lock(&poll->head->lock);
+ 		if (unlikely(list_empty(&poll->wait.entry))) {
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index df9b17dd92cb3..5d52aea8d7e7d 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -855,7 +855,7 @@ static ssize_t mem_rw(struct file *file, char __user *buf,
+ 	flags = FOLL_FORCE | (write ? FOLL_WRITE : 0);
+ 
+ 	while (count > 0) {
+-		int this_len = min_t(int, count, PAGE_SIZE);
++		size_t this_len = min_t(size_t, count, PAGE_SIZE);
+ 
+ 		if (write && copy_from_user(page, buf, this_len)) {
+ 			copied = -EFAULT;
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 000b457ad0876..3d181b1a6d567 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -1228,23 +1228,21 @@ static __always_inline void wake_userfault(struct userfaultfd_ctx *ctx,
+ }
+ 
+ static __always_inline int validate_range(struct mm_struct *mm,
+-					  __u64 *start, __u64 len)
++					  __u64 start, __u64 len)
+ {
+ 	__u64 task_size = mm->task_size;
+ 
+-	*start = untagged_addr(*start);
+-
+-	if (*start & ~PAGE_MASK)
++	if (start & ~PAGE_MASK)
+ 		return -EINVAL;
+ 	if (len & ~PAGE_MASK)
+ 		return -EINVAL;
+ 	if (!len)
+ 		return -EINVAL;
+-	if (*start < mmap_min_addr)
++	if (start < mmap_min_addr)
+ 		return -EINVAL;
+-	if (*start >= task_size)
++	if (start >= task_size)
+ 		return -EINVAL;
+-	if (len > task_size - *start)
++	if (len > task_size - start)
+ 		return -EINVAL;
+ 	return 0;
+ }
+@@ -1290,7 +1288,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
+ 	if (uffdio_register.mode & UFFDIO_REGISTER_MODE_WP)
+ 		vm_flags |= VM_UFFD_WP;
+ 
+-	ret = validate_range(mm, &uffdio_register.range.start,
++	ret = validate_range(mm, uffdio_register.range.start,
+ 			     uffdio_register.range.len);
+ 	if (ret)
+ 		goto out;
+@@ -1490,7 +1488,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
+ 	if (copy_from_user(&uffdio_unregister, buf, sizeof(uffdio_unregister)))
+ 		goto out;
+ 
+-	ret = validate_range(mm, &uffdio_unregister.start,
++	ret = validate_range(mm, uffdio_unregister.start,
+ 			     uffdio_unregister.len);
+ 	if (ret)
+ 		goto out;
+@@ -1639,7 +1637,7 @@ static int userfaultfd_wake(struct userfaultfd_ctx *ctx,
+ 	if (copy_from_user(&uffdio_wake, buf, sizeof(uffdio_wake)))
+ 		goto out;
+ 
+-	ret = validate_range(ctx->mm, &uffdio_wake.start, uffdio_wake.len);
++	ret = validate_range(ctx->mm, uffdio_wake.start, uffdio_wake.len);
+ 	if (ret)
+ 		goto out;
+ 
+@@ -1679,7 +1677,7 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
+ 			   sizeof(uffdio_copy)-sizeof(__s64)))
+ 		goto out;
+ 
+-	ret = validate_range(ctx->mm, &uffdio_copy.dst, uffdio_copy.len);
++	ret = validate_range(ctx->mm, uffdio_copy.dst, uffdio_copy.len);
+ 	if (ret)
+ 		goto out;
+ 	/*
+@@ -1736,7 +1734,7 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
+ 			   sizeof(uffdio_zeropage)-sizeof(__s64)))
+ 		goto out;
+ 
+-	ret = validate_range(ctx->mm, &uffdio_zeropage.range.start,
++	ret = validate_range(ctx->mm, uffdio_zeropage.range.start,
+ 			     uffdio_zeropage.range.len);
+ 	if (ret)
+ 		goto out;
+@@ -1786,7 +1784,7 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx,
+ 			   sizeof(struct uffdio_writeprotect)))
+ 		return -EFAULT;
+ 
+-	ret = validate_range(ctx->mm, &uffdio_wp.range.start,
++	ret = validate_range(ctx->mm, uffdio_wp.range.start,
+ 			     uffdio_wp.range.len);
+ 	if (ret)
+ 		return ret;
+diff --git a/include/drm/drm_ioctl.h b/include/drm/drm_ioctl.h
+index 10100a4bbe2ad..afb27cb6a7bd8 100644
+--- a/include/drm/drm_ioctl.h
++++ b/include/drm/drm_ioctl.h
+@@ -68,6 +68,7 @@ typedef int drm_ioctl_compat_t(struct file *filp, unsigned int cmd,
+ 			       unsigned long arg);
+ 
+ #define DRM_IOCTL_NR(n)                _IOC_NR(n)
++#define DRM_IOCTL_TYPE(n)              _IOC_TYPE(n)
+ #define DRM_MAJOR       226
+ 
+ /**
+diff --git a/include/linux/memblock.h b/include/linux/memblock.h
+index ef131255cedce..1a8d25f2e0412 100644
+--- a/include/linux/memblock.h
++++ b/include/linux/memblock.h
+@@ -207,7 +207,7 @@ static inline void __next_physmem_range(u64 *idx, struct memblock_type *type,
+  */
+ #define for_each_mem_range(i, p_start, p_end) \
+ 	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,	\
+-			     MEMBLOCK_NONE, p_start, p_end, NULL)
++			     MEMBLOCK_HOTPLUG, p_start, p_end, NULL)
+ 
+ /**
+  * for_each_mem_range_rev - reverse iterate through memblock areas from
+@@ -218,7 +218,7 @@ static inline void __next_physmem_range(u64 *idx, struct memblock_type *type,
+  */
+ #define for_each_mem_range_rev(i, p_start, p_end)			\
+ 	__for_each_mem_range_rev(i, &memblock.memory, NULL, NUMA_NO_NODE, \
+-				 MEMBLOCK_NONE, p_start, p_end, NULL)
++				 MEMBLOCK_HOTPLUG, p_start, p_end, NULL)
+ 
+ /**
+  * for_each_reserved_mem_range - iterate over all reserved memblock areas
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index a828cf99c5215..2d01b2bbb7465 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -4150,6 +4150,9 @@ enum skb_ext_id {
+ #endif
+ #if IS_ENABLED(CONFIG_MPTCP)
+ 	SKB_EXT_MPTCP,
++#endif
++#if IS_ENABLED(CONFIG_KCOV)
++	SKB_EXT_KCOV_HANDLE,
+ #endif
+ 	SKB_EXT_NUM, /* must be last */
+ };
+@@ -4605,5 +4608,35 @@ static inline void skb_reset_redirect(struct sk_buff *skb)
+ #endif
+ }
+ 
++#ifdef CONFIG_KCOV
++static inline void skb_set_kcov_handle(struct sk_buff *skb,
++				       const u64 kcov_handle)
++{
++	/* Do not allocate skb extensions only to set kcov_handle to zero
++	 * (as it is zero by default). However, if the extensions are
++	 * already allocated, update kcov_handle anyway since
++	 * skb_set_kcov_handle can be called to zero a previously set
++	 * value.
++	 */
++	if (skb_has_extensions(skb) || kcov_handle) {
++		u64 *kcov_handle_ptr = skb_ext_add(skb, SKB_EXT_KCOV_HANDLE);
++
++		if (kcov_handle_ptr)
++			*kcov_handle_ptr = kcov_handle;
++	}
++}
++
++static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
++{
++	u64 *kcov_handle = skb_ext_find(skb, SKB_EXT_KCOV_HANDLE);
++
++	return kcov_handle ? *kcov_handle : 0;
++}
++#else
++static inline void skb_set_kcov_handle(struct sk_buff *skb,
++				       const u64 kcov_handle) { }
++static inline u64 skb_get_kcov_handle(struct sk_buff *skb) { return 0; }
++#endif /* CONFIG_KCOV */
++
+ #endif	/* __KERNEL__ */
+ #endif	/* _LINUX_SKBUFF_H */
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index adc3da7769700..67d676059aa0d 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -199,6 +199,11 @@ struct bond_up_slave {
+  */
+ #define BOND_LINK_NOCHANGE -1
+ 
++struct bond_ipsec {
++	struct list_head list;
++	struct xfrm_state *xs;
++};
++
+ /*
+  * Here are the locking policies for the two bonding locks:
+  * Get rcu_read_lock when reading or RTNL when writing slave list.
+@@ -247,7 +252,9 @@ struct bonding {
+ #endif /* CONFIG_DEBUG_FS */
+ 	struct rtnl_link_stats64 bond_stats;
+ #ifdef CONFIG_XFRM_OFFLOAD
+-	struct xfrm_state *xs;
++	struct list_head ipsec_list;
++	/* protecting ipsec_list */
++	spinlock_t ipsec_lock;
+ #endif /* CONFIG_XFRM_OFFLOAD */
+ };
+ 
+diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
+index 4eef374d4413e..5deb9f490f6f6 100644
+--- a/include/trace/events/afs.h
++++ b/include/trace/events/afs.h
+@@ -174,6 +174,34 @@ enum afs_vl_operation {
+ 	afs_VL_GetCapabilities	= 65537,	/* AFS Get VL server capabilities */
+ };
+ 
++enum afs_cm_operation {
++	afs_CB_CallBack			= 204,	/* AFS break callback promises */
++	afs_CB_InitCallBackState	= 205,	/* AFS initialise callback state */
++	afs_CB_Probe			= 206,	/* AFS probe client */
++	afs_CB_GetLock			= 207,	/* AFS get contents of CM lock table */
++	afs_CB_GetCE			= 208,	/* AFS get cache file description */
++	afs_CB_GetXStatsVersion		= 209,	/* AFS get version of extended statistics */
++	afs_CB_GetXStats		= 210,	/* AFS get contents of extended statistics data */
++	afs_CB_InitCallBackState3	= 213,	/* AFS initialise callback state, version 3 */
++	afs_CB_ProbeUuid		= 214,	/* AFS check the client hasn't rebooted */
++};
++
++enum yfs_cm_operation {
++	yfs_CB_Probe			= 206,	/* YFS probe client */
++	yfs_CB_GetLock			= 207,	/* YFS get contents of CM lock table */
++	yfs_CB_XStatsVersion		= 209,	/* YFS get version of extended statistics */
++	yfs_CB_GetXStats		= 210,	/* YFS get contents of extended statistics data */
++	yfs_CB_InitCallBackState3	= 213,	/* YFS initialise callback state, version 3 */
++	yfs_CB_ProbeUuid		= 214,	/* YFS check the client hasn't rebooted */
++	yfs_CB_GetServerPrefs		= 215,
++	yfs_CB_GetCellServDV		= 216,
++	yfs_CB_GetLocalCell		= 217,
++	yfs_CB_GetCacheConfig		= 218,
++	yfs_CB_GetCellByNum		= 65537,
++	yfs_CB_TellMeAboutYourself	= 65538, /* get client capabilities */
++	yfs_CB_CallBack			= 64204,
++};
++
+ enum afs_edit_dir_op {
+ 	afs_edit_dir_create,
+ 	afs_edit_dir_create_error,
+@@ -435,6 +463,32 @@ enum afs_cb_break_reason {
+ 	EM(afs_YFSVL_GetCellName,		"YFSVL.GetCellName") \
+ 	E_(afs_VL_GetCapabilities,		"VL.GetCapabilities")
+ 
++#define afs_cm_operations \
++	EM(afs_CB_CallBack,			"CB.CallBack") \
++	EM(afs_CB_InitCallBackState,		"CB.InitCallBackState") \
++	EM(afs_CB_Probe,			"CB.Probe") \
++	EM(afs_CB_GetLock,			"CB.GetLock") \
++	EM(afs_CB_GetCE,			"CB.GetCE") \
++	EM(afs_CB_GetXStatsVersion,		"CB.GetXStatsVersion") \
++	EM(afs_CB_GetXStats,			"CB.GetXStats") \
++	EM(afs_CB_InitCallBackState3,		"CB.InitCallBackState3") \
++	E_(afs_CB_ProbeUuid,			"CB.ProbeUuid")
++
++#define yfs_cm_operations \
++	EM(yfs_CB_Probe,			"YFSCB.Probe") \
++	EM(yfs_CB_GetLock,			"YFSCB.GetLock") \
++	EM(yfs_CB_XStatsVersion,		"YFSCB.XStatsVersion") \
++	EM(yfs_CB_GetXStats,			"YFSCB.GetXStats") \
++	EM(yfs_CB_InitCallBackState3,		"YFSCB.InitCallBackState3") \
++	EM(yfs_CB_ProbeUuid,			"YFSCB.ProbeUuid") \
++	EM(yfs_CB_GetServerPrefs,		"YFSCB.GetServerPrefs") \
++	EM(yfs_CB_GetCellServDV,		"YFSCB.GetCellServDV") \
++	EM(yfs_CB_GetLocalCell,			"YFSCB.GetLocalCell") \
++	EM(yfs_CB_GetCacheConfig,		"YFSCB.GetCacheConfig") \
++	EM(yfs_CB_GetCellByNum,			"YFSCB.GetCellByNum") \
++	EM(yfs_CB_TellMeAboutYourself,		"YFSCB.TellMeAboutYourself") \
++	E_(yfs_CB_CallBack,			"YFSCB.CallBack")
++
+ #define afs_edit_dir_ops				  \
+ 	EM(afs_edit_dir_create,			"create") \
+ 	EM(afs_edit_dir_create_error,		"c_fail") \
+@@ -567,6 +621,8 @@ afs_server_traces;
+ afs_cell_traces;
+ afs_fs_operations;
+ afs_vl_operations;
++afs_cm_operations;
++yfs_cm_operations;
+ afs_edit_dir_ops;
+ afs_edit_dir_reasons;
+ afs_eproto_causes;
+@@ -647,20 +703,21 @@ TRACE_EVENT(afs_cb_call,
+ 
+ 	    TP_STRUCT__entry(
+ 		    __field(unsigned int,		call		)
+-		    __field(const char *,		name		)
+ 		    __field(u32,			op		)
++		    __field(u16,			service_id	)
+ 			     ),
+ 
+ 	    TP_fast_assign(
+ 		    __entry->call	= call->debug_id;
+-		    __entry->name	= call->type->name;
+ 		    __entry->op		= call->operation_ID;
++		    __entry->service_id	= call->service_id;
+ 			   ),
+ 
+-	    TP_printk("c=%08x %s o=%u",
++	    TP_printk("c=%08x %s",
+ 		      __entry->call,
+-		      __entry->name,
+-		      __entry->op)
++		      __entry->service_id == 2501 ?
++		      __print_symbolic(__entry->op, yfs_cm_operations) :
++		      __print_symbolic(__entry->op, afs_cm_operations))
+ 	    );
+ 
+ TRACE_EVENT(afs_call,
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 1f8bf2b39d506..36bc34fce623c 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3356,6 +3356,8 @@ continue_func:
+ 	if (tail_call_reachable)
+ 		for (j = 0; j < frame; j++)
+ 			subprog[ret_prog[j]].tail_call_reachable = true;
++	if (subprog[0].tail_call_reachable)
++		env->prog->aux->tail_call_reachable = true;
+ 
+ 	/* end of for() loop means the last insn of the 'subprog'
+ 	 * was reached. Doesn't matter whether it was JA or EXIT
+diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
+index 910ae69cae777..af4a6ef48ce04 100644
+--- a/kernel/dma/ops_helpers.c
++++ b/kernel/dma/ops_helpers.c
+@@ -5,6 +5,13 @@
+  */
+ #include <linux/dma-map-ops.h>
+ 
++static struct page *dma_common_vaddr_to_page(void *cpu_addr)
++{
++	if (is_vmalloc_addr(cpu_addr))
++		return vmalloc_to_page(cpu_addr);
++	return virt_to_page(cpu_addr);
++}
++
+ /*
+  * Create scatter-list for the already allocated DMA buffer.
+  */
+@@ -12,7 +19,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
+ 		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
+ 		 unsigned long attrs)
+ {
+-	struct page *page = virt_to_page(cpu_addr);
++	struct page *page = dma_common_vaddr_to_page(cpu_addr);
+ 	int ret;
+ 
+ 	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+@@ -32,6 +39,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+ 	unsigned long user_count = vma_pages(vma);
+ 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ 	unsigned long off = vma->vm_pgoff;
++	struct page *page = dma_common_vaddr_to_page(cpu_addr);
+ 	int ret = -ENXIO;
+ 
+ 	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
+@@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+ 		return -ENXIO;
+ 
+ 	return remap_pfn_range(vma, vma->vm_start,
+-			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
++			page_to_pfn(page) + vma->vm_pgoff,
+ 			user_count << PAGE_SHIFT, vma->vm_page_prot);
+ #else
+ 	return -ENXIO;
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 9abe15255bc4e..08c033b802569 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -991,6 +991,11 @@ static void posix_cpu_timer_rearm(struct k_itimer *timer)
+ 	if (!p)
+ 		goto out;
+ 
++	/* Protect timer list r/w in arm_timer() */
++	sighand = lock_task_sighand(p, &flags);
++	if (unlikely(sighand == NULL))
++		goto out;
++
+ 	/*
+ 	 * Fetch the current sample and update the timer's expiry time.
+ 	 */
+@@ -1001,11 +1006,6 @@ static void posix_cpu_timer_rearm(struct k_itimer *timer)
+ 
+ 	bump_cpu_timer(timer, now);
+ 
+-	/* Protect timer list r/w in arm_timer() */
+-	sighand = lock_task_sighand(p, &flags);
+-	if (unlikely(sighand == NULL))
+-		goto out;
+-
+ 	/*
+ 	 * Now re-arm for the new expiry time.
+ 	 */
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index c3ad64fb9d8bd..aa96b8a4e508f 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -207,6 +207,7 @@ struct timer_base {
+ 	unsigned int		cpu;
+ 	bool			next_expiry_recalc;
+ 	bool			is_idle;
++	bool			timers_pending;
+ 	DECLARE_BITMAP(pending_map, WHEEL_SIZE);
+ 	struct hlist_head	vectors[WHEEL_SIZE];
+ } ____cacheline_aligned;
+@@ -595,6 +596,7 @@ static void enqueue_timer(struct timer_base *base, struct timer_list *timer,
+ 		 * can reevaluate the wheel:
+ 		 */
+ 		base->next_expiry = bucket_expiry;
++		base->timers_pending = true;
+ 		base->next_expiry_recalc = false;
+ 		trigger_dyntick_cpu(base, timer);
+ 	}
+@@ -1575,6 +1577,7 @@ static unsigned long __next_timer_interrupt(struct timer_base *base)
+ 	}
+ 
+ 	base->next_expiry_recalc = false;
++	base->timers_pending = !(next == base->clk + NEXT_TIMER_MAX_DELTA);
+ 
+ 	return next;
+ }
+@@ -1626,7 +1629,6 @@ u64 get_next_timer_interrupt(unsigned long basej, u64 basem)
+ 	struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
+ 	u64 expires = KTIME_MAX;
+ 	unsigned long nextevt;
+-	bool is_max_delta;
+ 
+ 	/*
+ 	 * Pretend that there is no timer pending if the cpu is offline.
+@@ -1639,7 +1641,6 @@ u64 get_next_timer_interrupt(unsigned long basej, u64 basem)
+ 	if (base->next_expiry_recalc)
+ 		base->next_expiry = __next_timer_interrupt(base);
+ 	nextevt = base->next_expiry;
+-	is_max_delta = (nextevt == base->clk + NEXT_TIMER_MAX_DELTA);
+ 
+ 	/*
+ 	 * We have a fresh next event. Check whether we can forward the
+@@ -1657,7 +1658,7 @@ u64 get_next_timer_interrupt(unsigned long basej, u64 basem)
+ 		expires = basem;
+ 		base->is_idle = false;
+ 	} else {
+-		if (!is_max_delta)
++		if (base->timers_pending)
+ 			expires = basem + (u64)(nextevt - basej) * TICK_NSEC;
+ 		/*
+ 		 * If we expect to sleep more than a tick, mark the base idle.
+@@ -1940,6 +1941,7 @@ int timers_prepare_cpu(unsigned int cpu)
+ 		base = per_cpu_ptr(&timer_bases[b], cpu);
+ 		base->clk = jiffies;
+ 		base->next_expiry = base->clk + NEXT_TIMER_MAX_DELTA;
++		base->timers_pending = false;
+ 		base->is_idle = false;
+ 	}
+ 	return 0;
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index b12e5f4721ca7..39d4d9b25d1a1 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -3649,10 +3649,30 @@ static bool rb_per_cpu_empty(struct ring_buffer_per_cpu *cpu_buffer)
+ 	if (unlikely(!head))
+ 		return true;
+ 
+-	return reader->read == rb_page_commit(reader) &&
+-		(commit == reader ||
+-		 (commit == head &&
+-		  head->read == rb_page_commit(commit)));
++	/* Reader should exhaust content in reader page */
++	if (reader->read != rb_page_commit(reader))
++		return false;
++
++	/*
++	 * If writers are committing on the reader page, knowing all
++	 * committed content has been read, the ring buffer is empty.
++	 */
++	if (commit == reader)
++		return true;
++
++	/*
++	 * If writers are committing on a page other than reader page
++	 * and head page, there should always be content to read.
++	 */
++	if (commit != head)
++		return false;
++
++	/*
++	 * Writers are committing on the head page, we just need
++	 * to care about there're committed data, and the reader will
++	 * swap reader page with head page when it is to read data.
++	 */
++	return rb_page_commit(commit) == 0;
+ }
+ 
+ /**
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index ee84891bacfac..625034c44d5f3 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -5241,6 +5241,10 @@ static const char readme_msg[] =
+ 	"\t            [:name=histname1]\n"
+ 	"\t            [:<handler>.<action>]\n"
+ 	"\t            [if <filter>]\n\n"
++	"\t    Note, special fields can be used as well:\n"
++	"\t            common_timestamp - to record current timestamp\n"
++	"\t            common_cpu - to record the CPU the event happened on\n"
++	"\n"
+ 	"\t    When a matching event is hit, an entry is added to a hash\n"
+ 	"\t    table using the key(s) and value(s) named, and the value of a\n"
+ 	"\t    sum called 'hitcount' is incremented.  Keys and values\n"
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 49d886b328dc1..379eade0c0837 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1095,7 +1095,7 @@ static const char *hist_field_name(struct hist_field *field,
+ 		 field->flags & HIST_FIELD_FL_ALIAS)
+ 		field_name = hist_field_name(field->operands[0], ++level);
+ 	else if (field->flags & HIST_FIELD_FL_CPU)
+-		field_name = "cpu";
++		field_name = "common_cpu";
+ 	else if (field->flags & HIST_FIELD_FL_EXPR ||
+ 		 field->flags & HIST_FIELD_FL_VAR_REF) {
+ 		if (field->system) {
+@@ -1975,14 +1975,24 @@ parse_field(struct hist_trigger_data *hist_data, struct trace_event_file *file,
+ 		hist_data->enable_timestamps = true;
+ 		if (*flags & HIST_FIELD_FL_TIMESTAMP_USECS)
+ 			hist_data->attrs->ts_in_usecs = true;
+-	} else if (strcmp(field_name, "cpu") == 0)
++	} else if (strcmp(field_name, "common_cpu") == 0)
+ 		*flags |= HIST_FIELD_FL_CPU;
+ 	else {
+ 		field = trace_find_event_field(file->event_call, field_name);
+ 		if (!field || !field->size) {
+-			hist_err(tr, HIST_ERR_FIELD_NOT_FOUND, errpos(field_name));
+-			field = ERR_PTR(-EINVAL);
+-			goto out;
++			/*
++			 * For backward compatibility, if field_name
++			 * was "cpu", then we treat this the same as
++			 * common_cpu.
++			 */
++			if (strcmp(field_name, "cpu") == 0) {
++				*flags |= HIST_FIELD_FL_CPU;
++			} else {
++				hist_err(tr, HIST_ERR_FIELD_NOT_FOUND,
++					 errpos(field_name));
++				field = ERR_PTR(-EINVAL);
++				goto out;
++			}
+ 		}
+ 	}
+  out:
+@@ -5057,7 +5067,7 @@ static void hist_field_print(struct seq_file *m, struct hist_field *hist_field)
+ 		seq_printf(m, "%s=", hist_field->var.name);
+ 
+ 	if (hist_field->flags & HIST_FIELD_FL_CPU)
+-		seq_puts(m, "cpu");
++		seq_puts(m, "common_cpu");
+ 	else if (field_name) {
+ 		if (hist_field->flags & HIST_FIELD_FL_VAR_REF ||
+ 		    hist_field->flags & HIST_FIELD_FL_ALIAS)
+diff --git a/kernel/trace/trace_synth.h b/kernel/trace/trace_synth.h
+index 6e146b959dcd0..4007fe95cf42c 100644
+--- a/kernel/trace/trace_synth.h
++++ b/kernel/trace/trace_synth.h
+@@ -14,10 +14,10 @@ struct synth_field {
+ 	char *name;
+ 	size_t size;
+ 	unsigned int offset;
++	unsigned int field_pos;
+ 	bool is_signed;
+ 	bool is_string;
+ 	bool is_dynamic;
+-	bool field_pos;
+ };
+ 
+ struct synth_event {
+diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
+index f8b161edca5ea..1f490638b2dcc 100644
+--- a/kernel/tracepoint.c
++++ b/kernel/tracepoint.c
+@@ -320,8 +320,8 @@ static int tracepoint_add_func(struct tracepoint *tp,
+ 	 * a pointer to it.  This array is referenced by __DO_TRACE from
+ 	 * include/linux/tracepoint.h using rcu_dereference_sched().
+ 	 */
+-	rcu_assign_pointer(tp->funcs, tp_funcs);
+ 	tracepoint_update_call(tp, tp_funcs, false);
++	rcu_assign_pointer(tp->funcs, tp_funcs);
+ 	static_key_enable(&tp->key);
+ 
+ 	release_probes(old);
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 5b7f88a2876db..ffccc13d685bd 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1869,6 +1869,7 @@ config KCOV
+ 	depends on CC_HAS_SANCOV_TRACE_PC || GCC_PLUGINS
+ 	select DEBUG_FS
+ 	select GCC_PLUGIN_SANCOV if !CC_HAS_SANCOV_TRACE_PC
++	select SKB_EXTENSIONS
+ 	help
+ 	  KCOV exposes kernel code coverage information in a form suitable
+ 	  for coverage-guided fuzzing (randomized testing).
+diff --git a/mm/memblock.c b/mm/memblock.c
+index 10bd7d1ef0f49..c337df03b6a17 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -940,7 +940,8 @@ static bool should_skip_region(struct memblock_type *type,
+ 		return true;
+ 
+ 	/* skip hotpluggable memory regions if needed */
+-	if (movable_node_is_enabled() && memblock_is_hotpluggable(m))
++	if (movable_node_is_enabled() && memblock_is_hotpluggable(m) &&
++	    !(flags & MEMBLOCK_HOTPLUG))
+ 		return true;
+ 
+ 	/* if we want mirror memory skip non-mirror memory regions */
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index 8b796c499cbb2..e7cbd1b4a5e51 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -627,6 +627,9 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
+ 	void *data;
+ 	int ret;
+ 
++	if (prog->expected_attach_type == BPF_XDP_DEVMAP ||
++	    prog->expected_attach_type == BPF_XDP_CPUMAP)
++		return -EINVAL;
+ 	if (kattr->test.ctx_in || kattr->test.ctx_out)
+ 		return -EINVAL;
+ 
+diff --git a/net/caif/caif_socket.c b/net/caif/caif_socket.c
+index 3ad0a1df67128..9d26c5e9da058 100644
+--- a/net/caif/caif_socket.c
++++ b/net/caif/caif_socket.c
+@@ -539,7 +539,8 @@ static int caif_seqpkt_sendmsg(struct socket *sock, struct msghdr *msg,
+ 		goto err;
+ 
+ 	ret = -EINVAL;
+-	if (unlikely(msg->msg_iter.iov->iov_base == NULL))
++	if (unlikely(msg->msg_iter.nr_segs == 0) ||
++	    unlikely(msg->msg_iter.iov->iov_base == NULL))
+ 		goto err;
+ 	noblock = msg->msg_flags & MSG_DONTWAIT;
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2fdf30eefc596..b9d19fbb15890 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5870,6 +5870,19 @@ static struct list_head *gro_list_prepare(struct napi_struct *napi,
+ 			diffs = memcmp(skb_mac_header(p),
+ 				       skb_mac_header(skb),
+ 				       maclen);
++
++		diffs |= skb_get_nfct(p) ^ skb_get_nfct(skb);
++#if IS_ENABLED(CONFIG_SKB_EXTENSIONS) && IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
++		if (!diffs) {
++			struct tc_skb_ext *skb_ext = skb_ext_find(skb, TC_SKB_EXT);
++			struct tc_skb_ext *p_ext = skb_ext_find(p, TC_SKB_EXT);
++
++			diffs |= (!!p_ext) ^ (!!skb_ext);
++			if (!diffs && unlikely(skb_ext))
++				diffs |= p_ext->chain ^ skb_ext->chain;
++		}
++#endif
++
+ 		NAPI_GRO_CB(p)->same_flow = !diffs;
+ 	}
+ 
+@@ -6149,6 +6162,7 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
+ 	skb_shinfo(skb)->gso_type = 0;
+ 	skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
+ 	skb_ext_reset(skb);
++	nf_reset_ct(skb);
+ 
+ 	napi->skb = skb;
+ }
+@@ -9351,14 +9365,17 @@ int bpf_xdp_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+ 	struct net_device *dev;
+ 	int err, fd;
+ 
++	rtnl_lock();
+ 	dev = dev_get_by_index(net, attr->link_create.target_ifindex);
+-	if (!dev)
++	if (!dev) {
++		rtnl_unlock();
+ 		return -EINVAL;
++	}
+ 
+ 	link = kzalloc(sizeof(*link), GFP_USER);
+ 	if (!link) {
+ 		err = -ENOMEM;
+-		goto out_put_dev;
++		goto unlock;
+ 	}
+ 
+ 	bpf_link_init(&link->link, BPF_LINK_TYPE_XDP, &bpf_xdp_link_lops, prog);
+@@ -9368,14 +9385,14 @@ int bpf_xdp_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+ 	err = bpf_link_prime(&link->link, &link_primer);
+ 	if (err) {
+ 		kfree(link);
+-		goto out_put_dev;
++		goto unlock;
+ 	}
+ 
+-	rtnl_lock();
+ 	err = dev_xdp_attach_link(dev, NULL, link);
+ 	rtnl_unlock();
+ 
+ 	if (err) {
++		link->dev = NULL;
+ 		bpf_link_cleanup(&link_primer);
+ 		goto out_put_dev;
+ 	}
+@@ -9385,6 +9402,9 @@ int bpf_xdp_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+ 	dev_put(dev);
+ 	return fd;
+ 
++unlock:
++	rtnl_unlock();
++
+ out_put_dev:
+ 	dev_put(dev);
+ 	return err;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 1301ea694b940..2d27aae6d36ff 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -249,6 +249,9 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
+ 
+ 		fclones->skb2.fclone = SKB_FCLONE_CLONE;
+ 	}
++
++	skb_set_kcov_handle(skb, kcov_common_handle());
++
+ out:
+ 	return skb;
+ nodata:
+@@ -282,6 +285,8 @@ static struct sk_buff *__build_skb_around(struct sk_buff *skb,
+ 	memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
+ 	atomic_set(&shinfo->dataref, 1);
+ 
++	skb_set_kcov_handle(skb, kcov_common_handle());
++
+ 	return skb;
+ }
+ 
+@@ -654,6 +659,7 @@ fastpath:
+ 
+ void skb_release_head_state(struct sk_buff *skb)
+ {
++	nf_reset_ct(skb);
+ 	skb_dst_drop(skb);
+ 	if (skb->destructor) {
+ 		WARN_ON(in_irq());
+@@ -4248,6 +4254,9 @@ static const u8 skb_ext_type_len[] = {
+ #if IS_ENABLED(CONFIG_MPTCP)
+ 	[SKB_EXT_MPTCP] = SKB_EXT_CHUNKSIZEOF(struct mptcp_ext),
+ #endif
++#if IS_ENABLED(CONFIG_KCOV)
++	[SKB_EXT_KCOV_HANDLE] = SKB_EXT_CHUNKSIZEOF(u64),
++#endif
+ };
+ 
+ static __always_inline unsigned int skb_ext_total_length(void)
+@@ -4264,6 +4273,9 @@ static __always_inline unsigned int skb_ext_total_length(void)
+ #endif
+ #if IS_ENABLED(CONFIG_MPTCP)
+ 		skb_ext_type_len[SKB_EXT_MPTCP] +
++#endif
++#if IS_ENABLED(CONFIG_KCOV)
++		skb_ext_type_len[SKB_EXT_KCOV_HANDLE] +
+ #endif
+ 		0;
+ }
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 923a1d0f84ca3..c4c224a5b9de7 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -433,10 +433,8 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
+ 	if (skb_linearize(skb))
+ 		return -EAGAIN;
+ 	num_sge = skb_to_sgvec(skb, msg->sg.data, 0, skb->len);
+-	if (unlikely(num_sge < 0)) {
+-		kfree(msg);
++	if (unlikely(num_sge < 0))
+ 		return num_sge;
+-	}
+ 
+ 	copied = skb->len;
+ 	msg->sg.start = 0;
+@@ -455,6 +453,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb)
+ {
+ 	struct sock *sk = psock->sk;
+ 	struct sk_msg *msg;
++	int err;
+ 
+ 	/* If we are receiving on the same sock skb->sk is already assigned,
+ 	 * skip memory accounting and owner transition seeing it already set
+@@ -473,7 +472,10 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb)
+ 	 * into user buffers.
+ 	 */
+ 	skb_set_owner_r(skb, sk);
+-	return sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
++	err = sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
++	if (err < 0)
++		kfree(msg);
++	return err;
+ }
+ 
+ /* Puts an skb on the ingress queue of the socket already assigned to the
+@@ -484,12 +486,16 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
+ {
+ 	struct sk_msg *msg = kzalloc(sizeof(*msg), __GFP_NOWARN | GFP_ATOMIC);
+ 	struct sock *sk = psock->sk;
++	int err;
+ 
+ 	if (unlikely(!msg))
+ 		return -EAGAIN;
+ 	sk_msg_init(msg);
+ 	skb_set_owner_r(skb, sk);
+-	return sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
++	err = sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
++	if (err < 0)
++		kfree(msg);
++	return err;
+ }
+ 
+ static int sk_psock_handle_skb(struct sk_psock *psock, struct sk_buff *skb,
+diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c
+index 5dbd45dc35ad3..dc92a67baea39 100644
+--- a/net/decnet/af_decnet.c
++++ b/net/decnet/af_decnet.c
+@@ -816,7 +816,7 @@ static int dn_auto_bind(struct socket *sock)
+ static int dn_confirm_accept(struct sock *sk, long *timeo, gfp_t allocation)
+ {
+ 	struct dn_scp *scp = DN_SK(sk);
+-	DEFINE_WAIT(wait);
++	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ 	int err;
+ 
+ 	if (scp->state != DN_CR)
+@@ -826,11 +826,11 @@ static int dn_confirm_accept(struct sock *sk, long *timeo, gfp_t allocation)
+ 	scp->segsize_loc = dst_metric_advmss(__sk_dst_get(sk));
+ 	dn_send_conn_conf(sk, allocation);
+ 
+-	prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
++	add_wait_queue(sk_sleep(sk), &wait);
+ 	for(;;) {
+ 		release_sock(sk);
+ 		if (scp->state == DN_CC)
+-			*timeo = schedule_timeout(*timeo);
++			*timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, *timeo);
+ 		lock_sock(sk);
+ 		err = 0;
+ 		if (scp->state == DN_RUN)
+@@ -844,9 +844,8 @@ static int dn_confirm_accept(struct sock *sk, long *timeo, gfp_t allocation)
+ 		err = -EAGAIN;
+ 		if (!*timeo)
+ 			break;
+-		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
+ 	}
+-	finish_wait(sk_sleep(sk), &wait);
++	remove_wait_queue(sk_sleep(sk), &wait);
+ 	if (err == 0) {
+ 		sk->sk_socket->state = SS_CONNECTED;
+ 	} else if (scp->state != DN_CC) {
+@@ -858,7 +857,7 @@ static int dn_confirm_accept(struct sock *sk, long *timeo, gfp_t allocation)
+ static int dn_wait_run(struct sock *sk, long *timeo)
+ {
+ 	struct dn_scp *scp = DN_SK(sk);
+-	DEFINE_WAIT(wait);
++	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ 	int err = 0;
+ 
+ 	if (scp->state == DN_RUN)
+@@ -867,11 +866,11 @@ static int dn_wait_run(struct sock *sk, long *timeo)
+ 	if (!*timeo)
+ 		return -EALREADY;
+ 
+-	prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
++	add_wait_queue(sk_sleep(sk), &wait);
+ 	for(;;) {
+ 		release_sock(sk);
+ 		if (scp->state == DN_CI || scp->state == DN_CC)
+-			*timeo = schedule_timeout(*timeo);
++			*timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, *timeo);
+ 		lock_sock(sk);
+ 		err = 0;
+ 		if (scp->state == DN_RUN)
+@@ -885,9 +884,8 @@ static int dn_wait_run(struct sock *sk, long *timeo)
+ 		err = -ETIMEDOUT;
+ 		if (!*timeo)
+ 			break;
+-		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
+ 	}
+-	finish_wait(sk_sleep(sk), &wait);
++	remove_wait_queue(sk_sleep(sk), &wait);
+ out:
+ 	if (err == 0) {
+ 		sk->sk_socket->state = SS_CONNECTED;
+@@ -1032,16 +1030,16 @@ static void dn_user_copy(struct sk_buff *skb, struct optdata_dn *opt)
+ 
+ static struct sk_buff *dn_wait_for_connect(struct sock *sk, long *timeo)
+ {
+-	DEFINE_WAIT(wait);
++	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ 	struct sk_buff *skb = NULL;
+ 	int err = 0;
+ 
+-	prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
++	add_wait_queue(sk_sleep(sk), &wait);
+ 	for(;;) {
+ 		release_sock(sk);
+ 		skb = skb_dequeue(&sk->sk_receive_queue);
+ 		if (skb == NULL) {
+-			*timeo = schedule_timeout(*timeo);
++			*timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, *timeo);
+ 			skb = skb_dequeue(&sk->sk_receive_queue);
+ 		}
+ 		lock_sock(sk);
+@@ -1056,9 +1054,8 @@ static struct sk_buff *dn_wait_for_connect(struct sock *sk, long *timeo)
+ 		err = -EAGAIN;
+ 		if (!*timeo)
+ 			break;
+-		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
+ 	}
+-	finish_wait(sk_sleep(sk), &wait);
++	remove_wait_queue(sk_sleep(sk), &wait);
+ 
+ 	return skb == NULL ? ERR_PTR(err) : skb;
+ }
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index bc7d2a586e183..f91ae827d47f5 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -588,7 +588,7 @@ static int __init tcp_bpf_v4_build_proto(void)
+ 	tcp_bpf_rebuild_protos(tcp_bpf_prots[TCP_BPF_IPV4], &tcp_prot);
+ 	return 0;
+ }
+-core_initcall(tcp_bpf_v4_build_proto);
++late_initcall(tcp_bpf_v4_build_proto);
+ 
+ static int tcp_bpf_assert_proto_ops(struct proto *ops)
+ {
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index af2814c9342af..d49709ba8e165 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -507,8 +507,18 @@ void tcp_fastopen_active_disable(struct sock *sk)
+ {
+ 	struct net *net = sock_net(sk);
+ 
++	if (!sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout)
++		return;
++
++	/* Paired with READ_ONCE() in tcp_fastopen_active_should_disable() */
++	WRITE_ONCE(net->ipv4.tfo_active_disable_stamp, jiffies);
++
++	/* Paired with smp_rmb() in tcp_fastopen_active_should_disable().
++	 * We want net->ipv4.tfo_active_disable_stamp to be updated first.
++	 */
++	smp_mb__before_atomic();
+ 	atomic_inc(&net->ipv4.tfo_active_disable_times);
+-	net->ipv4.tfo_active_disable_stamp = jiffies;
++
+ 	NET_INC_STATS(net, LINUX_MIB_TCPFASTOPENBLACKHOLE);
+ }
+ 
+@@ -519,17 +529,27 @@ void tcp_fastopen_active_disable(struct sock *sk)
+ bool tcp_fastopen_active_should_disable(struct sock *sk)
+ {
+ 	unsigned int tfo_bh_timeout = sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout;
+-	int tfo_da_times = atomic_read(&sock_net(sk)->ipv4.tfo_active_disable_times);
+ 	unsigned long timeout;
++	int tfo_da_times;
+ 	int multiplier;
+ 
++	if (!tfo_bh_timeout)
++		return false;
++
++	tfo_da_times = atomic_read(&sock_net(sk)->ipv4.tfo_active_disable_times);
+ 	if (!tfo_da_times)
+ 		return false;
+ 
++	/* Paired with smp_mb__before_atomic() in tcp_fastopen_active_disable() */
++	smp_rmb();
++
+ 	/* Limit timout to max: 2^6 * initial timeout */
+ 	multiplier = 1 << min(tfo_da_times - 1, 6);
+-	timeout = multiplier * tfo_bh_timeout * HZ;
+-	if (time_before(jiffies, sock_net(sk)->ipv4.tfo_active_disable_stamp + timeout))
++
++	/* Paired with the WRITE_ONCE() in tcp_fastopen_active_disable(). */
++	timeout = READ_ONCE(sock_net(sk)->ipv4.tfo_active_disable_stamp) +
++		  multiplier * tfo_bh_timeout * HZ;
++	if (time_before(jiffies, timeout))
+ 		return true;
+ 
+ 	/* Mark check bit so we can check for successful active TFO
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 5212db9ea157e..04e259a04443c 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2913,7 +2913,7 @@ static int __net_init tcp_sk_init(struct net *net)
+ 	net->ipv4.sysctl_tcp_comp_sack_nr = 44;
+ 	net->ipv4.sysctl_tcp_fastopen = TFO_CLIENT_ENABLE;
+ 	spin_lock_init(&net->ipv4.tcp_fastopen_ctx_lock);
+-	net->ipv4.sysctl_tcp_fastopen_blackhole_timeout = 60 * 60;
++	net->ipv4.sysctl_tcp_fastopen_blackhole_timeout = 0;
+ 	atomic_set(&net->ipv4.tfo_active_disable_times, 0);
+ 
+ 	/* Reno is always built in */
+diff --git a/net/ipv4/udp_bpf.c b/net/ipv4/udp_bpf.c
+index 7a94791efc1ab..69c9663f9ee7d 100644
+--- a/net/ipv4/udp_bpf.c
++++ b/net/ipv4/udp_bpf.c
+@@ -39,7 +39,7 @@ static int __init udp_bpf_v4_build_proto(void)
+ 	udp_bpf_rebuild_protos(&udp_bpf_prots[UDP_BPF_IPV4], &udp_prot);
+ 	return 0;
+ }
+-core_initcall(udp_bpf_v4_build_proto);
++late_initcall(udp_bpf_v4_build_proto);
+ 
+ struct proto *udp_bpf_get_proto(struct sock *sk, struct sk_psock *psock)
+ {
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index e889655ca0e20..341d0c7acc8bf 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -478,7 +478,9 @@ int ip6_forward(struct sk_buff *skb)
+ 	if (skb_warn_if_lro(skb))
+ 		goto drop;
+ 
+-	if (!xfrm6_policy_check(NULL, XFRM_POLICY_FWD, skb)) {
++	if (!net->ipv6.devconf_all->disable_policy &&
++	    !idev->cnf.disable_policy &&
++	    !xfrm6_policy_check(NULL, XFRM_POLICY_FWD, skb)) {
+ 		__IP6_INC_STATS(net, idev, IPSTATS_MIB_INDISCARDS);
+ 		goto drop;
+ 	}
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index ccff4738313c1..62db3c98424bd 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3640,7 +3640,7 @@ static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
+ 		err = PTR_ERR(rt->fib6_metrics);
+ 		/* Do not leave garbage there. */
+ 		rt->fib6_metrics = (struct dst_metrics *)&dst_default_metrics;
+-		goto out;
++		goto out_free;
+ 	}
+ 
+ 	if (cfg->fc_flags & RTF_ADDRCONF)
+diff --git a/net/mptcp/syncookies.c b/net/mptcp/syncookies.c
+index abe0fd0997467..37127781aee98 100644
+--- a/net/mptcp/syncookies.c
++++ b/net/mptcp/syncookies.c
+@@ -37,7 +37,21 @@ static spinlock_t join_entry_locks[COOKIE_JOIN_SLOTS] __cacheline_aligned_in_smp
+ 
+ static u32 mptcp_join_entry_hash(struct sk_buff *skb, struct net *net)
+ {
+-	u32 i = skb_get_hash(skb) ^ net_hash_mix(net);
++	static u32 mptcp_join_hash_secret __read_mostly;
++	struct tcphdr *th = tcp_hdr(skb);
++	u32 seq, i;
++
++	net_get_random_once(&mptcp_join_hash_secret,
++			    sizeof(mptcp_join_hash_secret));
++
++	if (th->syn)
++		seq = TCP_SKB_CB(skb)->seq;
++	else
++		seq = TCP_SKB_CB(skb)->seq - 1;
++
++	i = jhash_3words(seq, net_hash_mix(net),
++			 (__force __u32)th->source << 16 | (__force __u32)th->dest,
++			 mptcp_join_hash_secret);
+ 
+ 	return i % ARRAY_SIZE(join_entries);
+ }
+diff --git a/net/netrom/nr_timer.c b/net/netrom/nr_timer.c
+index 9115f8a7dd45b..a8da88db7893f 100644
+--- a/net/netrom/nr_timer.c
++++ b/net/netrom/nr_timer.c
+@@ -121,11 +121,9 @@ static void nr_heartbeat_expiry(struct timer_list *t)
+ 		   is accepted() it isn't 'dead' so doesn't get removed. */
+ 		if (sock_flag(sk, SOCK_DESTROY) ||
+ 		    (sk->sk_state == TCP_LISTEN && sock_flag(sk, SOCK_DEAD))) {
+-			sock_hold(sk);
+ 			bh_unlock_sock(sk);
+ 			nr_destroy_socket(sk);
+-			sock_put(sk);
+-			return;
++			goto out;
+ 		}
+ 		break;
+ 
+@@ -146,6 +144,8 @@ static void nr_heartbeat_expiry(struct timer_list *t)
+ 
+ 	nr_start_heartbeat(sk);
+ 	bh_unlock_sock(sk);
++out:
++	sock_put(sk);
+ }
+ 
+ static void nr_t2timer_expiry(struct timer_list *t)
+@@ -159,6 +159,7 @@ static void nr_t2timer_expiry(struct timer_list *t)
+ 		nr_enquiry_response(sk);
+ 	}
+ 	bh_unlock_sock(sk);
++	sock_put(sk);
+ }
+ 
+ static void nr_t4timer_expiry(struct timer_list *t)
+@@ -169,6 +170,7 @@ static void nr_t4timer_expiry(struct timer_list *t)
+ 	bh_lock_sock(sk);
+ 	nr_sk(sk)->condition &= ~NR_COND_PEER_RX_BUSY;
+ 	bh_unlock_sock(sk);
++	sock_put(sk);
+ }
+ 
+ static void nr_idletimer_expiry(struct timer_list *t)
+@@ -197,6 +199,7 @@ static void nr_idletimer_expiry(struct timer_list *t)
+ 		sock_set_flag(sk, SOCK_DEAD);
+ 	}
+ 	bh_unlock_sock(sk);
++	sock_put(sk);
+ }
+ 
+ static void nr_t1timer_expiry(struct timer_list *t)
+@@ -209,8 +212,7 @@ static void nr_t1timer_expiry(struct timer_list *t)
+ 	case NR_STATE_1:
+ 		if (nr->n2count == nr->n2) {
+ 			nr_disconnect(sk, ETIMEDOUT);
+-			bh_unlock_sock(sk);
+-			return;
++			goto out;
+ 		} else {
+ 			nr->n2count++;
+ 			nr_write_internal(sk, NR_CONNREQ);
+@@ -220,8 +222,7 @@ static void nr_t1timer_expiry(struct timer_list *t)
+ 	case NR_STATE_2:
+ 		if (nr->n2count == nr->n2) {
+ 			nr_disconnect(sk, ETIMEDOUT);
+-			bh_unlock_sock(sk);
+-			return;
++			goto out;
+ 		} else {
+ 			nr->n2count++;
+ 			nr_write_internal(sk, NR_DISCREQ);
+@@ -231,8 +232,7 @@ static void nr_t1timer_expiry(struct timer_list *t)
+ 	case NR_STATE_3:
+ 		if (nr->n2count == nr->n2) {
+ 			nr_disconnect(sk, ETIMEDOUT);
+-			bh_unlock_sock(sk);
+-			return;
++			goto out;
+ 		} else {
+ 			nr->n2count++;
+ 			nr_requeue_frames(sk);
+@@ -241,5 +241,7 @@ static void nr_t1timer_expiry(struct timer_list *t)
+ 	}
+ 
+ 	nr_start_t1timer(sk);
++out:
+ 	bh_unlock_sock(sk);
++	sock_put(sk);
+ }
+diff --git a/net/sched/act_skbmod.c b/net/sched/act_skbmod.c
+index 81a1c67335be6..8d17a543cc9fe 100644
+--- a/net/sched/act_skbmod.c
++++ b/net/sched/act_skbmod.c
+@@ -6,6 +6,7 @@
+ */
+ 
+ #include <linux/module.h>
++#include <linux/if_arp.h>
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/skbuff.h>
+@@ -33,6 +34,13 @@ static int tcf_skbmod_act(struct sk_buff *skb, const struct tc_action *a,
+ 	tcf_lastuse_update(&d->tcf_tm);
+ 	bstats_cpu_update(this_cpu_ptr(d->common.cpu_bstats), skb);
+ 
++	action = READ_ONCE(d->tcf_action);
++	if (unlikely(action == TC_ACT_SHOT))
++		goto drop;
++
++	if (!skb->dev || skb->dev->type != ARPHRD_ETHER)
++		return action;
++
+ 	/* XXX: if you are going to edit more fields beyond ethernet header
+ 	 * (example when you add IP header replacement or vlan swap)
+ 	 * then MAX_EDIT_LEN needs to change appropriately
+@@ -41,10 +49,6 @@ static int tcf_skbmod_act(struct sk_buff *skb, const struct tc_action *a,
+ 	if (unlikely(err)) /* best policy is to drop on the floor */
+ 		goto drop;
+ 
+-	action = READ_ONCE(d->tcf_action);
+-	if (unlikely(action == TC_ACT_SHOT))
+-		goto drop;
+-
+ 	p = rcu_dereference_bh(d->skbmod_p);
+ 	flags = p->flags;
+ 	if (flags & SKBMOD_F_DMAC)
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 30090794b7912..31ac76a9189ee 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -2905,7 +2905,7 @@ replay:
+ 		break;
+ 	case RTM_GETCHAIN:
+ 		err = tc_chain_notify(chain, skb, n->nlmsg_seq,
+-				      n->nlmsg_seq, n->nlmsg_type, true);
++				      n->nlmsg_flags, n->nlmsg_type, true);
+ 		if (err < 0)
+ 			NL_SET_ERR_MSG(extack, "Failed to send chain notify message");
+ 		break;
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index 5b274534264c2..e9a8a2c86bbdd 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -278,6 +278,8 @@ static int tcindex_filter_result_init(struct tcindex_filter_result *r,
+ 			     TCA_TCINDEX_POLICE);
+ }
+ 
++static void tcindex_free_perfect_hash(struct tcindex_data *cp);
++
+ static void tcindex_partial_destroy_work(struct work_struct *work)
+ {
+ 	struct tcindex_data *p = container_of(to_rcu_work(work),
+@@ -285,7 +287,8 @@ static void tcindex_partial_destroy_work(struct work_struct *work)
+ 					      rwork);
+ 
+ 	rtnl_lock();
+-	kfree(p->perfect);
++	if (p->perfect)
++		tcindex_free_perfect_hash(p);
+ 	kfree(p);
+ 	rtnl_unlock();
+ }
+diff --git a/net/sctp/auth.c b/net/sctp/auth.c
+index 6f8319b828b0d..fe74c5f956303 100644
+--- a/net/sctp/auth.c
++++ b/net/sctp/auth.c
+@@ -860,6 +860,8 @@ int sctp_auth_set_key(struct sctp_endpoint *ep,
+ 	if (replace) {
+ 		list_del_init(&shkey->key_list);
+ 		sctp_auth_shkey_release(shkey);
++		if (asoc && asoc->active_key_id == auth_key->sca_keynumber)
++			sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
+ 	}
+ 	list_add(&cur_key->key_list, sh_keys);
+ 
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 3ac6b21ecf2c1..e872bc50bbe61 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -4471,6 +4471,10 @@ static int sctp_setsockopt(struct sock *sk, int level, int optname,
+ 	}
+ 
+ 	if (optlen > 0) {
++		/* Trim it to the biggest size sctp sockopt may need if necessary */
++		optlen = min_t(unsigned int, optlen,
++			       PAGE_ALIGN(USHRT_MAX +
++					  sizeof(__u16) * sizeof(struct sctp_reset_streams)));
+ 		kopt = memdup_sockptr(optval, optlen);
+ 		if (IS_ERR(kopt))
+ 			return PTR_ERR(kopt);
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 50c14cc861e69..6d1759b9ccb2f 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -246,12 +246,18 @@ static bool hw_support_mmap(struct snd_pcm_substream *substream)
+ 	if (!(substream->runtime->hw.info & SNDRV_PCM_INFO_MMAP))
+ 		return false;
+ 
+-	if (substream->ops->mmap ||
+-	    (substream->dma_buffer.dev.type != SNDRV_DMA_TYPE_DEV &&
+-	     substream->dma_buffer.dev.type != SNDRV_DMA_TYPE_DEV_UC))
++	if (substream->ops->mmap)
+ 		return true;
+ 
+-	return dma_can_mmap(substream->dma_buffer.dev.dev);
++	switch (substream->dma_buffer.dev.type) {
++	case SNDRV_DMA_TYPE_UNKNOWN:
++		return false;
++	case SNDRV_DMA_TYPE_CONTINUOUS:
++	case SNDRV_DMA_TYPE_VMALLOC:
++		return true;
++	default:
++		return dma_can_mmap(substream->dma_buffer.dev.dev);
++	}
+ }
+ 
+ static int constrain_mask_params(struct snd_pcm_substream *substream,
+@@ -3062,9 +3068,14 @@ static int snd_pcm_ioctl_sync_ptr_compat(struct snd_pcm_substream *substream,
+ 		boundary = 0x7fffffff;
+ 	snd_pcm_stream_lock_irq(substream);
+ 	/* FIXME: we should consider the boundary for the sync from app */
+-	if (!(sflags & SNDRV_PCM_SYNC_PTR_APPL))
+-		control->appl_ptr = scontrol.appl_ptr;
+-	else
++	if (!(sflags & SNDRV_PCM_SYNC_PTR_APPL)) {
++		err = pcm_lib_apply_appl_ptr(substream,
++				scontrol.appl_ptr);
++		if (err < 0) {
++			snd_pcm_stream_unlock_irq(substream);
++			return err;
++		}
++	} else
+ 		scontrol.appl_ptr = control->appl_ptr % boundary;
+ 	if (!(sflags & SNDRV_PCM_SYNC_PTR_AVAIL_MIN))
+ 		control->avail_min = scontrol.avail_min;
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index fe49e9a97f0ec..61e1de6d7be0a 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -318,6 +318,10 @@ static const struct config_entry config_table[] = {
+ 		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC,
+ 		.device = 0x4b55,
+ 	},
++	{
++		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC,
++		.device = 0x4b58,
++	},
+ #endif
+ 
+ };
+diff --git a/sound/isa/sb/sb16_csp.c b/sound/isa/sb/sb16_csp.c
+index dbcd9ab2c2b76..6a4051bce3a3b 100644
+--- a/sound/isa/sb/sb16_csp.c
++++ b/sound/isa/sb/sb16_csp.c
+@@ -814,6 +814,7 @@ static int snd_sb_csp_start(struct snd_sb_csp * p, int sample_width, int channel
+ 	mixR = snd_sbmixer_read(p->chip, SB_DSP4_PCM_DEV + 1);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV, mixL & 0x7);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV + 1, mixR & 0x7);
++	spin_unlock_irqrestore(&p->chip->mixer_lock, flags);
+ 
+ 	spin_lock(&p->chip->reg_lock);
+ 	set_mode_register(p->chip, 0xc0);	/* c0 = STOP */
+@@ -853,6 +854,7 @@ static int snd_sb_csp_start(struct snd_sb_csp * p, int sample_width, int channel
+ 	spin_unlock(&p->chip->reg_lock);
+ 
+ 	/* restore PCM volume */
++	spin_lock_irqsave(&p->chip->mixer_lock, flags);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV, mixL);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV + 1, mixR);
+ 	spin_unlock_irqrestore(&p->chip->mixer_lock, flags);
+@@ -878,6 +880,7 @@ static int snd_sb_csp_stop(struct snd_sb_csp * p)
+ 	mixR = snd_sbmixer_read(p->chip, SB_DSP4_PCM_DEV + 1);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV, mixL & 0x7);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV + 1, mixR & 0x7);
++	spin_unlock_irqrestore(&p->chip->mixer_lock, flags);
+ 
+ 	spin_lock(&p->chip->reg_lock);
+ 	if (p->running & SNDRV_SB_CSP_ST_QSOUND) {
+@@ -892,6 +895,7 @@ static int snd_sb_csp_stop(struct snd_sb_csp * p)
+ 	spin_unlock(&p->chip->reg_lock);
+ 
+ 	/* restore PCM volume */
++	spin_lock_irqsave(&p->chip->mixer_lock, flags);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV, mixL);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV + 1, mixR);
+ 	spin_unlock_irqrestore(&p->chip->mixer_lock, flags);
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 6d2a4dfcfe436..c65144715af78 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1939,6 +1939,7 @@ static int hdmi_add_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
+ static const struct snd_pci_quirk force_connect_list[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x870f, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x871a, "HP", 1),
++	SND_PCI_QUIRK(0x1462, 0xec94, "MS-7C94", 1),
+ 	{}
+ };
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 1cc83344c2ecf..bedc2a316adf9 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8550,6 +8550,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3151, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
+ 	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ 	SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP),
+diff --git a/sound/soc/codecs/rt5631.c b/sound/soc/codecs/rt5631.c
+index 653da3eaf3559..86d58d0df0571 100644
+--- a/sound/soc/codecs/rt5631.c
++++ b/sound/soc/codecs/rt5631.c
+@@ -1695,6 +1695,8 @@ static const struct regmap_config rt5631_regmap_config = {
+ 	.reg_defaults = rt5631_reg,
+ 	.num_reg_defaults = ARRAY_SIZE(rt5631_reg),
+ 	.cache_type = REGCACHE_RBTREE,
++	.use_single_read = true,
++	.use_single_write = true,
+ };
+ 
+ static int rt5631_i2c_probe(struct i2c_client *i2c,
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 985b2dcecf138..51d95437e0fdf 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -1221,7 +1221,7 @@ static int wm_coeff_tlv_get(struct snd_kcontrol *kctl,
+ 
+ 	mutex_lock(&ctl->dsp->pwr_lock);
+ 
+-	ret = wm_coeff_read_ctrl_raw(ctl, ctl->cache, size);
++	ret = wm_coeff_read_ctrl(ctl, ctl->cache, size);
+ 
+ 	if (!ret && copy_to_user(bytes, ctl->cache, size))
+ 		ret = -EFAULT;
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 8e11582fbae98..b598f8f0d06ec 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -3274,7 +3274,15 @@ static void snd_usb_mixer_dump_cval(struct snd_info_buffer *buffer,
+ {
+ 	struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list);
+ 	static const char * const val_types[] = {
+-		"BOOLEAN", "INV_BOOLEAN", "S8", "U8", "S16", "U16", "S32", "U32",
++		[USB_MIXER_BOOLEAN] = "BOOLEAN",
++		[USB_MIXER_INV_BOOLEAN] = "INV_BOOLEAN",
++		[USB_MIXER_S8] = "S8",
++		[USB_MIXER_U8] = "U8",
++		[USB_MIXER_S16] = "S16",
++		[USB_MIXER_U16] = "U16",
++		[USB_MIXER_S32] = "S32",
++		[USB_MIXER_U32] = "U32",
++		[USB_MIXER_BESPOKEN] = "BESPOKEN",
+ 	};
+ 	snd_iprintf(buffer, "    Info: id=%i, control=%i, cmask=0x%x, "
+ 			    "channels=%i, type=\"%s\"\n", cval->head.id,
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index bddef8ad57783..7af97448d09b3 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1895,6 +1895,9 @@ static const struct registration_quirk registration_quirks[] = {
+ 	REG_QUIRK_ENTRY(0x0951, 0x16d8, 2),	/* Kingston HyperX AMP */
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ed, 2),	/* Kingston HyperX Cloud Alpha S */
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ea, 2),	/* Kingston HyperX Cloud Flight S */
++	REG_QUIRK_ENTRY(0x0ecb, 0x1f46, 2),	/* JBL Quantum 600 */
++	REG_QUIRK_ENTRY(0x0ecb, 0x2039, 2),	/* JBL Quantum 400 */
++	REG_QUIRK_ENTRY(0x0ecb, 0x203e, 2),	/* JBL Quantum 800 */
+ 	{ 0 }					/* terminator */
+ };
+ 
+diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
+index 65303664417e4..6ebf2b215ef49 100644
+--- a/tools/bpf/bpftool/common.c
++++ b/tools/bpf/bpftool/common.c
+@@ -221,6 +221,11 @@ int mount_bpffs_for_pin(const char *name)
+ 	int err = 0;
+ 
+ 	file = malloc(strlen(name) + 1);
++	if (!file) {
++		p_err("mem alloc failed");
++		return -1;
++	}
++
+ 	strcpy(file, name);
+ 	dir = dirname(file);
+ 
+diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
+index 5320ac1b1285c..5378a14e38368 100644
+--- a/tools/perf/builtin-inject.c
++++ b/tools/perf/builtin-inject.c
+@@ -358,9 +358,10 @@ static struct dso *findnew_dso(int pid, int tid, const char *filename,
+ 		dso = machine__findnew_dso_id(machine, filename, id);
+ 	}
+ 
+-	if (dso)
++	if (dso) {
++		nsinfo__put(dso->nsinfo);
+ 		dso->nsinfo = nsi;
+-	else
++	} else
+ 		nsinfo__put(nsi);
+ 
+ 	thread__put(thread);
+@@ -905,8 +906,10 @@ int cmd_inject(int argc, const char **argv)
+ 
+ 	data.path = inject.input_name;
+ 	inject.session = perf_session__new(&data, inject.output.is_pipe, &inject.tool);
+-	if (IS_ERR(inject.session))
+-		return PTR_ERR(inject.session);
++	if (IS_ERR(inject.session)) {
++		ret = PTR_ERR(inject.session);
++		goto out_close_output;
++	}
+ 
+ 	if (zstd_init(&(inject.session->zstd_data), 0) < 0)
+ 		pr_warning("Decompression initialization failed.\n");
+@@ -948,5 +951,7 @@ int cmd_inject(int argc, const char **argv)
+ out_delete:
+ 	zstd_fini(&(inject.session->zstd_data));
+ 	perf_session__delete(inject.session);
++out_close_output:
++	perf_data__close(&inject.output);
+ 	return ret;
+ }
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 3c74c9c0f3c38..5824aa24acfcc 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -1143,6 +1143,8 @@ int cmd_report(int argc, const char **argv)
+ 		.socket_filter		 = -1,
+ 		.annotation_opts	 = annotation__default_options,
+ 	};
++	char *sort_order_help = sort_help("sort by key(s):");
++	char *field_order_help = sort_help("output field(s): overhead period sample ");
+ 	const struct option options[] = {
+ 	OPT_STRING('i', "input", &input_name, "file",
+ 		    "input file name"),
+@@ -1177,9 +1179,9 @@ int cmd_report(int argc, const char **argv)
+ 	OPT_BOOLEAN(0, "header-only", &report.header_only,
+ 		    "Show only data header."),
+ 	OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
+-		   sort_help("sort by key(s):")),
++		   sort_order_help),
+ 	OPT_STRING('F', "fields", &field_order, "key[,keys...]",
+-		   sort_help("output field(s): overhead period sample ")),
++		   field_order_help),
+ 	OPT_BOOLEAN(0, "show-cpu-utilization", &symbol_conf.show_cpu_utilization,
+ 		    "Show sample percentage for different cpu modes"),
+ 	OPT_BOOLEAN_FLAG(0, "showcpuutilization", &symbol_conf.show_cpu_utilization,
+@@ -1308,11 +1310,11 @@ int cmd_report(int argc, const char **argv)
+ 	char sort_tmp[128];
+ 
+ 	if (ret < 0)
+-		return ret;
++		goto exit;
+ 
+ 	ret = perf_config(report__config, &report);
+ 	if (ret)
+-		return ret;
++		goto exit;
+ 
+ 	argc = parse_options(argc, argv, options, report_usage, 0);
+ 	if (argc) {
+@@ -1326,8 +1328,10 @@ int cmd_report(int argc, const char **argv)
+ 		report.symbol_filter_str = argv[0];
+ 	}
+ 
+-	if (annotate_check_args(&report.annotation_opts) < 0)
+-		return -EINVAL;
++	if (annotate_check_args(&report.annotation_opts) < 0) {
++		ret = -EINVAL;
++		goto exit;
++	}
+ 
+ 	if (report.mmaps_mode)
+ 		report.tasks_mode = true;
+@@ -1341,12 +1345,14 @@ int cmd_report(int argc, const char **argv)
+ 	if (symbol_conf.vmlinux_name &&
+ 	    access(symbol_conf.vmlinux_name, R_OK)) {
+ 		pr_err("Invalid file: %s\n", symbol_conf.vmlinux_name);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto exit;
+ 	}
+ 	if (symbol_conf.kallsyms_name &&
+ 	    access(symbol_conf.kallsyms_name, R_OK)) {
+ 		pr_err("Invalid file: %s\n", symbol_conf.kallsyms_name);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto exit;
+ 	}
+ 
+ 	if (report.inverted_callchain)
+@@ -1370,12 +1376,14 @@ int cmd_report(int argc, const char **argv)
+ 
+ repeat:
+ 	session = perf_session__new(&data, false, &report.tool);
+-	if (IS_ERR(session))
+-		return PTR_ERR(session);
++	if (IS_ERR(session)) {
++		ret = PTR_ERR(session);
++		goto exit;
++	}
+ 
+ 	ret = evswitch__init(&report.evswitch, session->evlist, stderr);
+ 	if (ret)
+-		return ret;
++		goto exit;
+ 
+ 	if (zstd_init(&(session->zstd_data), 0) < 0)
+ 		pr_warning("Decompression initialization failed. Reported data may be incomplete.\n");
+@@ -1603,5 +1611,8 @@ error:
+ 
+ 	zstd_fini(&(session->zstd_data));
+ 	perf_session__delete(session);
++exit:
++	free(sort_order_help);
++	free(field_order_help);
+ 	return ret;
+ }
+diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
+index 0e16f9d5a9471..d3b5f5faf8c14 100644
+--- a/tools/perf/builtin-sched.c
++++ b/tools/perf/builtin-sched.c
+@@ -3337,6 +3337,16 @@ static void setup_sorting(struct perf_sched *sched, const struct option *options
+ 	sort_dimension__add("pid", &sched->cmp_pid);
+ }
+ 
++static bool schedstat_events_exposed(void)
++{
++	/*
++	 * Select "sched:sched_stat_wait" event to check
++	 * whether schedstat tracepoints are exposed.
++	 */
++	return IS_ERR(trace_event__tp_format("sched", "sched_stat_wait")) ?
++		false : true;
++}
++
+ static int __cmd_record(int argc, const char **argv)
+ {
+ 	unsigned int rec_argc, i, j;
+@@ -3348,21 +3358,33 @@ static int __cmd_record(int argc, const char **argv)
+ 		"-m", "1024",
+ 		"-c", "1",
+ 		"-e", "sched:sched_switch",
+-		"-e", "sched:sched_stat_wait",
+-		"-e", "sched:sched_stat_sleep",
+-		"-e", "sched:sched_stat_iowait",
+ 		"-e", "sched:sched_stat_runtime",
+ 		"-e", "sched:sched_process_fork",
+ 		"-e", "sched:sched_wakeup_new",
+ 		"-e", "sched:sched_migrate_task",
+ 	};
++
++	/*
++	 * The tracepoints trace_sched_stat_{wait, sleep, iowait}
++	 * are not exposed to user if CONFIG_SCHEDSTATS is not set,
++	 * to prevent "perf sched record" execution failure, determine
++	 * whether to record schedstat events according to actual situation.
++	 */
++	const char * const schedstat_args[] = {
++		"-e", "sched:sched_stat_wait",
++		"-e", "sched:sched_stat_sleep",
++		"-e", "sched:sched_stat_iowait",
++	};
++	unsigned int schedstat_argc = schedstat_events_exposed() ?
++		ARRAY_SIZE(schedstat_args) : 0;
++
+ 	struct tep_event *waking_event;
+ 
+ 	/*
+ 	 * +2 for either "-e", "sched:sched_wakeup" or
+ 	 * "-e", "sched:sched_waking"
+ 	 */
+-	rec_argc = ARRAY_SIZE(record_args) + 2 + argc - 1;
++	rec_argc = ARRAY_SIZE(record_args) + 2 + schedstat_argc + argc - 1;
+ 	rec_argv = calloc(rec_argc + 1, sizeof(char *));
+ 
+ 	if (rec_argv == NULL)
+@@ -3378,6 +3400,9 @@ static int __cmd_record(int argc, const char **argv)
+ 	else
+ 		rec_argv[i++] = strdup("sched:sched_wakeup");
+ 
++	for (j = 0; j < schedstat_argc; j++)
++		rec_argv[i++] = strdup(schedstat_args[j]);
++
+ 	for (j = 1; j < (unsigned int)argc; j++, i++)
+ 		rec_argv[i] = argv[j];
+ 
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index 48588ccf902eb..2bb159c105035 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -2483,6 +2483,12 @@ static void perf_script__exit_per_event_dump_stats(struct perf_script *script)
+ 	}
+ }
+ 
++static void perf_script__exit(struct perf_script *script)
++{
++	perf_thread_map__put(script->threads);
++	perf_cpu_map__put(script->cpus);
++}
++
+ static int __cmd_script(struct perf_script *script)
+ {
+ 	int ret;
+@@ -3937,6 +3943,7 @@ out_delete:
+ 
+ 	perf_evlist__free_stats(session->evlist);
+ 	perf_session__delete(session);
++	perf_script__exit(&script);
+ 
+ 	if (script_started)
+ 		cleanup_scripting();
+diff --git a/tools/perf/tests/event_update.c b/tools/perf/tests/event_update.c
+index bdcf032f85162..1c9a6138fba13 100644
+--- a/tools/perf/tests/event_update.c
++++ b/tools/perf/tests/event_update.c
+@@ -119,6 +119,6 @@ int test__event_update(struct test *test __maybe_unused, int subtest __maybe_unu
+ 	TEST_ASSERT_VAL("failed to synthesize attr update cpus",
+ 			!perf_event__synthesize_event_update_cpus(&tmp.tool, evsel, process_event_cpus));
+ 
+-	perf_cpu_map__put(evsel->core.own_cpus);
++	evlist__delete(evlist);
+ 	return 0;
+ }
+diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
+index edcbc70ff9d66..1ac72919fa358 100644
+--- a/tools/perf/tests/maps.c
++++ b/tools/perf/tests/maps.c
+@@ -116,5 +116,7 @@ int test__maps__merge_in(struct test *t __maybe_unused, int subtest __maybe_unus
+ 
+ 	ret = check_maps(merged3, ARRAY_SIZE(merged3), &maps);
+ 	TEST_ASSERT_VAL("merge check failed", !ret);
++
++	maps__exit(&maps);
+ 	return TEST_OK;
+ }
+diff --git a/tools/perf/tests/topology.c b/tools/perf/tests/topology.c
+index 22daf2bdf5faf..f4a2c0df09549 100644
+--- a/tools/perf/tests/topology.c
++++ b/tools/perf/tests/topology.c
+@@ -52,6 +52,7 @@ static int session_write_header(char *path)
+ 	TEST_ASSERT_VAL("failed to write header",
+ 			!perf_session__write_header(session, session->evlist, data.file.fd, true));
+ 
++	evlist__delete(session->evlist);
+ 	perf_session__delete(session);
+ 
+ 	return 0;
+diff --git a/tools/perf/util/data.c b/tools/perf/util/data.c
+index 5d97b3e45fbb1..bcb494dc816a0 100644
+--- a/tools/perf/util/data.c
++++ b/tools/perf/util/data.c
+@@ -20,7 +20,7 @@
+ 
+ static void close_dir(struct perf_data_file *files, int nr)
+ {
+-	while (--nr >= 1) {
++	while (--nr >= 0) {
+ 		close(files[nr].fd);
+ 		zfree(&files[nr].path);
+ 	}
+diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
+index 55c11e854fe46..b1ff0c9f32daf 100644
+--- a/tools/perf/util/dso.c
++++ b/tools/perf/util/dso.c
+@@ -1141,8 +1141,10 @@ struct map *dso__new_map(const char *name)
+ 	struct map *map = NULL;
+ 	struct dso *dso = dso__new(name);
+ 
+-	if (dso)
++	if (dso) {
+ 		map = map__new2(0, dso);
++		dso__put(dso);
++	}
+ 
+ 	return map;
+ }
+diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
+index fadc59708ece0..03bc843b1cf87 100644
+--- a/tools/perf/util/env.c
++++ b/tools/perf/util/env.c
+@@ -178,10 +178,12 @@ void perf_env__exit(struct perf_env *env)
+ 	zfree(&env->cpuid);
+ 	zfree(&env->cmdline);
+ 	zfree(&env->cmdline_argv);
++	zfree(&env->sibling_dies);
+ 	zfree(&env->sibling_cores);
+ 	zfree(&env->sibling_threads);
+ 	zfree(&env->pmu_mappings);
+ 	zfree(&env->cpu);
++	zfree(&env->cpu_pmu_caps);
+ 	zfree(&env->numa_map);
+ 
+ 	for (i = 0; i < env->nr_numa_nodes; i++)
+diff --git a/tools/perf/util/lzma.c b/tools/perf/util/lzma.c
+index 39062df026291..51424cdc3b682 100644
+--- a/tools/perf/util/lzma.c
++++ b/tools/perf/util/lzma.c
+@@ -69,7 +69,7 @@ int lzma_decompress_to_file(const char *input, int output_fd)
+ 
+ 			if (ferror(infile)) {
+ 				pr_err("lzma: read error: %s\n", strerror(errno));
+-				goto err_fclose;
++				goto err_lzma_end;
+ 			}
+ 
+ 			if (feof(infile))
+@@ -83,7 +83,7 @@ int lzma_decompress_to_file(const char *input, int output_fd)
+ 
+ 			if (writen(output_fd, buf_out, write_size) != write_size) {
+ 				pr_err("lzma: write error: %s\n", strerror(errno));
+-				goto err_fclose;
++				goto err_lzma_end;
+ 			}
+ 
+ 			strm.next_out  = buf_out;
+@@ -95,11 +95,13 @@ int lzma_decompress_to_file(const char *input, int output_fd)
+ 				break;
+ 
+ 			pr_err("lzma: failed %s\n", lzma_strerror(ret));
+-			goto err_fclose;
++			goto err_lzma_end;
+ 		}
+ 	}
+ 
+ 	err = 0;
++err_lzma_end:
++	lzma_end(&strm);
+ err_fclose:
+ 	fclose(infile);
+ 	return err;
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index f4d44f75ba152..6688f6b253a72 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -192,6 +192,8 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
+ 			if (!(prot & PROT_EXEC))
+ 				dso__set_loaded(dso);
+ 		}
++
++		nsinfo__put(dso->nsinfo);
+ 		dso->nsinfo = nsi;
+ 		dso__put(dso);
+ 	}
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index 8eae2afff71a9..07db6cfad65b9 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -180,8 +180,10 @@ struct map *get_target_map(const char *target, struct nsinfo *nsi, bool user)
+ 		struct map *map;
+ 
+ 		map = dso__new_map(target);
+-		if (map && map->dso)
++		if (map && map->dso) {
++			nsinfo__put(map->dso->nsinfo);
+ 			map->dso->nsinfo = nsinfo__get(nsi);
++		}
+ 		return map;
+ 	} else {
+ 		return kernel_get_module_map(target);
+diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c
+index bbecb449ea944..d2b98d64438e5 100644
+--- a/tools/perf/util/probe-file.c
++++ b/tools/perf/util/probe-file.c
+@@ -342,11 +342,11 @@ int probe_file__del_events(int fd, struct strfilter *filter)
+ 
+ 	ret = probe_file__get_events(fd, filter, namelist);
+ 	if (ret < 0)
+-		return ret;
++		goto out;
+ 
+ 	ret = probe_file__del_strlist(fd, namelist);
++out:
+ 	strlist__delete(namelist);
+-
+ 	return ret;
+ }
+ 
+diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
+index 8a3b7d5a47376..5e9e96452b9e6 100644
+--- a/tools/perf/util/sort.c
++++ b/tools/perf/util/sort.c
+@@ -3177,7 +3177,7 @@ static void add_hpp_sort_string(struct strbuf *sb, struct hpp_dimension *s, int
+ 		add_key(sb, s[i].name, llen);
+ }
+ 
+-const char *sort_help(const char *prefix)
++char *sort_help(const char *prefix)
+ {
+ 	struct strbuf sb;
+ 	char *s;
+diff --git a/tools/perf/util/sort.h b/tools/perf/util/sort.h
+index 66d39c4cfe2b3..fc94dcd67abc3 100644
+--- a/tools/perf/util/sort.h
++++ b/tools/perf/util/sort.h
+@@ -293,7 +293,7 @@ void reset_output_field(void);
+ void sort__setup_elide(FILE *fp);
+ void perf_hpp__set_elide(int idx, bool elide);
+ 
+-const char *sort_help(const char *prefix);
++char *sort_help(const char *prefix);
+ 
+ int report_parse_ignore_callees_opt(const struct option *opt, const char *arg, int unset);
+ 
+diff --git a/tools/testing/selftests/net/icmp_redirect.sh b/tools/testing/selftests/net/icmp_redirect.sh
+index bf361f30d6ef9..104a7a5f13b1e 100755
+--- a/tools/testing/selftests/net/icmp_redirect.sh
++++ b/tools/testing/selftests/net/icmp_redirect.sh
+@@ -309,9 +309,10 @@ check_exception()
+ 	fi
+ 	log_test $? 0 "IPv4: ${desc}"
+ 
+-	if [ "$with_redirect" = "yes" ]; then
++	# No PMTU info for test "redirect" and "mtu exception plus redirect"
++	if [ "$with_redirect" = "yes" ] && [ "$desc" != "redirect exception plus mtu" ]; then
+ 		ip -netns h1 -6 ro get ${H1_VRF_ARG} ${H2_N2_IP6} | \
+-		grep -q "${H2_N2_IP6} from :: via ${R2_LLADDR} dev br0.*${mtu}"
++		grep -v "mtu" | grep -q "${H2_N2_IP6} .*via ${R2_LLADDR} dev br0"
+ 	elif [ -n "${mtu}" ]; then
+ 		ip -netns h1 -6 ro get ${H1_VRF_ARG} ${H2_N2_IP6} | \
+ 		grep -q "${mtu}"
+diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
+index c4425597769a0..b1be8df806119 100644
+--- a/tools/testing/selftests/vm/userfaultfd.c
++++ b/tools/testing/selftests/vm/userfaultfd.c
+@@ -180,8 +180,10 @@ static int anon_release_pages(char *rel_area)
+ 
+ static void anon_allocate_area(void **alloc_area)
+ {
+-	if (posix_memalign(alloc_area, page_size, nr_pages * page_size)) {
+-		fprintf(stderr, "out of memory\n");
++	*alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
++			   MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
++	if (*alloc_area == MAP_FAILED)
++		fprintf(stderr, "mmap of anonymous memory failed");
+ 		*alloc_area = NULL;
+ 	}
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-07-31 10:30 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-07-31 10:30 UTC (permalink / raw
  To: gentoo-commits

commit:     fffb3b021d5afffa2a7eba74d2e5fc5d114650ac
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Jul 31 10:29:25 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Jul 31 10:30:15 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fffb3b02

Linux patch 5.10.55

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |   4 +
 1054_linux-5.10.55.patch | 730 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 734 insertions(+)

diff --git a/0000_README b/0000_README
index e70652a..f1cdfa7 100644
--- a/0000_README
+++ b/0000_README
@@ -259,6 +259,10 @@ Patch:  1053_linux-5.10.54.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.54
 
+Patch:  1054_linux-5.10.55.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.55
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1054_linux-5.10.55.patch b/1054_linux-5.10.55.patch
new file mode 100644
index 0000000..25c76f1
--- /dev/null
+++ b/1054_linux-5.10.55.patch
@@ -0,0 +1,730 @@
+diff --git a/Makefile b/Makefile
+index eb01d3028b020..7fb6405f3b60f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 54
++SUBLEVEL = 55
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/versatile-ab.dts b/arch/arm/boot/dts/versatile-ab.dts
+index 37bd41ff8dffa..151c0220047dd 100644
+--- a/arch/arm/boot/dts/versatile-ab.dts
++++ b/arch/arm/boot/dts/versatile-ab.dts
+@@ -195,16 +195,15 @@
+ 		#size-cells = <1>;
+ 		ranges;
+ 
+-		vic: intc@10140000 {
++		vic: interrupt-controller@10140000 {
+ 			compatible = "arm,versatile-vic";
+ 			interrupt-controller;
+ 			#interrupt-cells = <1>;
+ 			reg = <0x10140000 0x1000>;
+-			clear-mask = <0xffffffff>;
+ 			valid-mask = <0xffffffff>;
+ 		};
+ 
+-		sic: intc@10003000 {
++		sic: interrupt-controller@10003000 {
+ 			compatible = "arm,versatile-sic";
+ 			interrupt-controller;
+ 			#interrupt-cells = <1>;
+diff --git a/arch/arm/boot/dts/versatile-pb.dts b/arch/arm/boot/dts/versatile-pb.dts
+index 06a0fdf24026c..e7e751a858d81 100644
+--- a/arch/arm/boot/dts/versatile-pb.dts
++++ b/arch/arm/boot/dts/versatile-pb.dts
+@@ -7,7 +7,7 @@
+ 
+ 	amba {
+ 		/* The Versatile PB is using more SIC IRQ lines than the AB */
+-		sic: intc@10003000 {
++		sic: interrupt-controller@10003000 {
+ 			clear-mask = <0xffffffff>;
+ 			/*
+ 			 * Valid interrupt lines mask according to
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 800914e9e12b9..3ad6f77ea1c45 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -541,8 +541,6 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
+ 
+ 	if (!vcpu->arch.exception.pending && !vcpu->arch.exception.injected) {
+ 	queue:
+-		if (has_error && !is_protmode(vcpu))
+-			has_error = false;
+ 		if (reinject) {
+ 			/*
+ 			 * On vmentry, vcpu->arch.exception.pending is only
+@@ -8265,6 +8263,13 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu)
+ 	kvm_x86_ops.update_cr8_intercept(vcpu, tpr, max_irr);
+ }
+ 
++static void kvm_inject_exception(struct kvm_vcpu *vcpu)
++{
++	if (vcpu->arch.exception.error_code && !is_protmode(vcpu))
++		vcpu->arch.exception.error_code = false;
++	kvm_x86_ops.queue_exception(vcpu);
++}
++
+ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
+ {
+ 	int r;
+@@ -8273,7 +8278,7 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
+ 	/* try to reinject previous events if any */
+ 
+ 	if (vcpu->arch.exception.injected) {
+-		kvm_x86_ops.queue_exception(vcpu);
++		kvm_inject_exception(vcpu);
+ 		can_inject = false;
+ 	}
+ 	/*
+@@ -8336,7 +8341,7 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
+ 			}
+ 		}
+ 
+-		kvm_x86_ops.queue_exception(vcpu);
++		kvm_inject_exception(vcpu);
+ 		can_inject = false;
+ 	}
+ 
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index af4560dab6b40..8c9663258d5d4 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -43,7 +43,6 @@ enum scmi_error_codes {
+ 	SCMI_ERR_GENERIC = -8,	/* Generic Error */
+ 	SCMI_ERR_HARDWARE = -9,	/* Hardware Error */
+ 	SCMI_ERR_PROTOCOL = -10,/* Protocol Error */
+-	SCMI_ERR_MAX
+ };
+ 
+ /* List of all SCMI devices active in system */
+@@ -118,8 +117,10 @@ static const int scmi_linux_errmap[] = {
+ 
+ static inline int scmi_to_linux_errno(int errno)
+ {
+-	if (errno < SCMI_SUCCESS && errno > SCMI_ERR_MAX)
+-		return scmi_linux_errmap[-errno];
++	int err_idx = -errno;
++
++	if (err_idx >= SCMI_SUCCESS && err_idx < ARRAY_SIZE(scmi_linux_errmap))
++		return scmi_linux_errmap[err_idx];
+ 	return -EIO;
+ }
+ 
+@@ -614,8 +615,9 @@ static int __scmi_xfer_info_init(struct scmi_info *sinfo,
+ 	const struct scmi_desc *desc = sinfo->desc;
+ 
+ 	/* Pre-allocated messages, no more than what hdr.seq can support */
+-	if (WARN_ON(desc->max_msg >= MSG_TOKEN_MAX)) {
+-		dev_err(dev, "Maximum message of %d exceeds supported %ld\n",
++	if (WARN_ON(!desc->max_msg || desc->max_msg > MSG_TOKEN_MAX)) {
++		dev_err(dev,
++			"Invalid maximum messages %d, not in range [1 - %lu]\n",
+ 			desc->max_msg, MSG_TOKEN_MAX);
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/gpu/drm/ttm/ttm_range_manager.c b/drivers/gpu/drm/ttm/ttm_range_manager.c
+index 1da0e277c5111..ce9d127edbb5d 100644
+--- a/drivers/gpu/drm/ttm/ttm_range_manager.c
++++ b/drivers/gpu/drm/ttm/ttm_range_manager.c
+@@ -147,6 +147,9 @@ int ttm_range_man_fini(struct ttm_bo_device *bdev,
+ 	struct drm_mm *mm = &rman->mm;
+ 	int ret;
+ 
++	if (!man)
++		return 0;
++
+ 	ttm_resource_manager_set_used(man, false);
+ 
+ 	ret = ttm_resource_manager_force_list_clean(bdev, man);
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index b0b06eb86edfb..81e0877237770 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -497,8 +497,8 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+ 	p = buf;
+ 	while (bytes_left >= sizeof(*p)) {
+ 		info->speed = le64_to_cpu(p->LinkSpeed);
+-		info->rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE);
+-		info->rss_capable = le32_to_cpu(p->Capability & RSS_CAPABLE);
++		info->rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE) ? 1 : 0;
++		info->rss_capable = le32_to_cpu(p->Capability & RSS_CAPABLE) ? 1 : 0;
+ 
+ 		cifs_dbg(FYI, "%s: adding iface %zu\n", __func__, *iface_count);
+ 		cifs_dbg(FYI, "%s: speed %zu bps\n", __func__, info->speed);
+diff --git a/fs/hfs/bfind.c b/fs/hfs/bfind.c
+index 4af318fbda774..ef9498a6e88ac 100644
+--- a/fs/hfs/bfind.c
++++ b/fs/hfs/bfind.c
+@@ -25,7 +25,19 @@ int hfs_find_init(struct hfs_btree *tree, struct hfs_find_data *fd)
+ 	fd->key = ptr + tree->max_key_len + 2;
+ 	hfs_dbg(BNODE_REFS, "find_init: %d (%p)\n",
+ 		tree->cnid, __builtin_return_address(0));
+-	mutex_lock(&tree->tree_lock);
++	switch (tree->cnid) {
++	case HFS_CAT_CNID:
++		mutex_lock_nested(&tree->tree_lock, CATALOG_BTREE_MUTEX);
++		break;
++	case HFS_EXT_CNID:
++		mutex_lock_nested(&tree->tree_lock, EXTENTS_BTREE_MUTEX);
++		break;
++	case HFS_ATTR_CNID:
++		mutex_lock_nested(&tree->tree_lock, ATTR_BTREE_MUTEX);
++		break;
++	default:
++		return -EINVAL;
++	}
+ 	return 0;
+ }
+ 
+diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
+index b63a4df7327b6..c0a73a6ffb28b 100644
+--- a/fs/hfs/bnode.c
++++ b/fs/hfs/bnode.c
+@@ -15,16 +15,31 @@
+ 
+ #include "btree.h"
+ 
+-void hfs_bnode_read(struct hfs_bnode *node, void *buf,
+-		int off, int len)
++void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
+ {
+ 	struct page *page;
++	int pagenum;
++	int bytes_read;
++	int bytes_to_read;
++	void *vaddr;
+ 
+ 	off += node->page_offset;
+-	page = node->page[0];
++	pagenum = off >> PAGE_SHIFT;
++	off &= ~PAGE_MASK; /* compute page offset for the first page */
+ 
+-	memcpy(buf, kmap(page) + off, len);
+-	kunmap(page);
++	for (bytes_read = 0; bytes_read < len; bytes_read += bytes_to_read) {
++		if (pagenum >= node->tree->pages_per_bnode)
++			break;
++		page = node->page[pagenum];
++		bytes_to_read = min_t(int, len - bytes_read, PAGE_SIZE - off);
++
++		vaddr = kmap_atomic(page);
++		memcpy(buf + bytes_read, vaddr + off, bytes_to_read);
++		kunmap_atomic(vaddr);
++
++		pagenum++;
++		off = 0; /* page offset only applies to the first page */
++	}
+ }
+ 
+ u16 hfs_bnode_read_u16(struct hfs_bnode *node, int off)
+diff --git a/fs/hfs/btree.h b/fs/hfs/btree.h
+index 4ba45caf59392..0e6baee932453 100644
+--- a/fs/hfs/btree.h
++++ b/fs/hfs/btree.h
+@@ -13,6 +13,13 @@ typedef int (*btree_keycmp)(const btree_key *, const btree_key *);
+ 
+ #define NODE_HASH_SIZE  256
+ 
++/* B-tree mutex nested subclasses */
++enum hfs_btree_mutex_classes {
++	CATALOG_BTREE_MUTEX,
++	EXTENTS_BTREE_MUTEX,
++	ATTR_BTREE_MUTEX,
++};
++
+ /* A HFS BTree held in memory */
+ struct hfs_btree {
+ 	struct super_block *sb;
+diff --git a/fs/hfs/super.c b/fs/hfs/super.c
+index 44d07c9e3a7f0..12d9bae393631 100644
+--- a/fs/hfs/super.c
++++ b/fs/hfs/super.c
+@@ -420,14 +420,12 @@ static int hfs_fill_super(struct super_block *sb, void *data, int silent)
+ 	if (!res) {
+ 		if (fd.entrylength > sizeof(rec) || fd.entrylength < 0) {
+ 			res =  -EIO;
+-			goto bail;
++			goto bail_hfs_find;
+ 		}
+ 		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset, fd.entrylength);
+ 	}
+-	if (res) {
+-		hfs_find_exit(&fd);
+-		goto bail_no_root;
+-	}
++	if (res)
++		goto bail_hfs_find;
+ 	res = -EINVAL;
+ 	root_inode = hfs_iget(sb, &fd.search_key->cat, &rec);
+ 	hfs_find_exit(&fd);
+@@ -443,6 +441,8 @@ static int hfs_fill_super(struct super_block *sb, void *data, int silent)
+ 	/* everything's okay */
+ 	return 0;
+ 
++bail_hfs_find:
++	hfs_find_exit(&fd);
+ bail_no_root:
+ 	pr_err("get root inode failed\n");
+ bail:
+diff --git a/fs/internal.h b/fs/internal.h
+index a7cd0f64faa4a..5155f6ce95c79 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -64,7 +64,6 @@ extern void __init chrdev_init(void);
+  */
+ extern const struct fs_context_operations legacy_fs_context_ops;
+ extern int parse_monolithic_mount_data(struct fs_context *, void *);
+-extern void fc_drop_locked(struct fs_context *);
+ extern void vfs_clean_context(struct fs_context *fc);
+ extern int finish_clean_context(struct fs_context *fc);
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 07f08c424d17b..525b44140d7a3 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -6266,7 +6266,6 @@ static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
+ 	if (prev) {
+ 		io_async_find_and_cancel(ctx, req, prev->user_data, -ETIME);
+ 		io_put_req_deferred(prev, 1);
+-		io_put_req_deferred(req, 1);
+ 	} else {
+ 		io_cqring_add_event(req, -ETIME, 0);
+ 		io_put_req_deferred(req, 1);
+diff --git a/fs/iomap/seek.c b/fs/iomap/seek.c
+index 107ee80c35683..220c306167f79 100644
+--- a/fs/iomap/seek.c
++++ b/fs/iomap/seek.c
+@@ -140,23 +140,20 @@ loff_t
+ iomap_seek_hole(struct inode *inode, loff_t offset, const struct iomap_ops *ops)
+ {
+ 	loff_t size = i_size_read(inode);
+-	loff_t length = size - offset;
+ 	loff_t ret;
+ 
+ 	/* Nothing to be found before or beyond the end of the file. */
+ 	if (offset < 0 || offset >= size)
+ 		return -ENXIO;
+ 
+-	while (length > 0) {
+-		ret = iomap_apply(inode, offset, length, IOMAP_REPORT, ops,
+-				  &offset, iomap_seek_hole_actor);
++	while (offset < size) {
++		ret = iomap_apply(inode, offset, size - offset, IOMAP_REPORT,
++				  ops, &offset, iomap_seek_hole_actor);
+ 		if (ret < 0)
+ 			return ret;
+ 		if (ret == 0)
+ 			break;
+-
+ 		offset += ret;
+-		length -= ret;
+ 	}
+ 
+ 	return offset;
+@@ -186,27 +183,23 @@ loff_t
+ iomap_seek_data(struct inode *inode, loff_t offset, const struct iomap_ops *ops)
+ {
+ 	loff_t size = i_size_read(inode);
+-	loff_t length = size - offset;
+ 	loff_t ret;
+ 
+ 	/* Nothing to be found before or beyond the end of the file. */
+ 	if (offset < 0 || offset >= size)
+ 		return -ENXIO;
+ 
+-	while (length > 0) {
+-		ret = iomap_apply(inode, offset, length, IOMAP_REPORT, ops,
+-				  &offset, iomap_seek_data_actor);
++	while (offset < size) {
++		ret = iomap_apply(inode, offset, size - offset, IOMAP_REPORT,
++				  ops, &offset, iomap_seek_data_actor);
+ 		if (ret < 0)
+ 			return ret;
+ 		if (ret == 0)
+-			break;
+-
++			return offset;
+ 		offset += ret;
+-		length -= ret;
+ 	}
+ 
+-	if (length <= 0)
+-		return -ENXIO;
+-	return offset;
++	/* We've reached the end of the file without finding data */
++	return -ENXIO;
+ }
+ EXPORT_SYMBOL_GPL(iomap_seek_data);
+diff --git a/include/linux/fs_context.h b/include/linux/fs_context.h
+index 37e1e8f7f08da..5b44b0195a28a 100644
+--- a/include/linux/fs_context.h
++++ b/include/linux/fs_context.h
+@@ -139,6 +139,7 @@ extern int vfs_parse_fs_string(struct fs_context *fc, const char *key,
+ extern int generic_parse_monolithic(struct fs_context *fc, void *data);
+ extern int vfs_get_tree(struct fs_context *fc);
+ extern void put_fs_context(struct fs_context *fc);
++extern void fc_drop_locked(struct fs_context *fc);
+ 
+ /*
+  * sget() wrappers to be called from the ->get_tree() op.
+diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
+index b001fa91c14ea..716b7c5f6fdd9 100644
+--- a/include/net/busy_poll.h
++++ b/include/net/busy_poll.h
+@@ -36,7 +36,7 @@ static inline bool net_busy_loop_on(void)
+ 
+ static inline bool sk_can_busy_loop(const struct sock *sk)
+ {
+-	return sk->sk_ll_usec && !signal_pending(current);
++	return READ_ONCE(sk->sk_ll_usec) && !signal_pending(current);
+ }
+ 
+ bool sk_busy_loop_end(void *p, unsigned long start_time);
+diff --git a/include/net/sctp/constants.h b/include/net/sctp/constants.h
+index 122d9e2d8dfde..1ad049ac2add4 100644
+--- a/include/net/sctp/constants.h
++++ b/include/net/sctp/constants.h
+@@ -340,8 +340,7 @@ enum {
+ #define SCTP_SCOPE_POLICY_MAX	SCTP_SCOPE_POLICY_LINK
+ 
+ /* Based on IPv4 scoping <draft-stewart-tsvwg-sctp-ipv4-00.txt>,
+- * SCTP IPv4 unusable addresses: 0.0.0.0/8, 224.0.0.0/4, 198.18.0.0/24,
+- * 192.88.99.0/24.
++ * SCTP IPv4 unusable addresses: 0.0.0.0/8, 224.0.0.0/4, 192.88.99.0/24.
+  * Also, RFC 8.4, non-unicast addresses are not considered valid SCTP
+  * addresses.
+  */
+@@ -349,7 +348,6 @@ enum {
+ 	((htonl(INADDR_BROADCAST) == a) ||  \
+ 	 ipv4_is_multicast(a) ||	    \
+ 	 ipv4_is_zeronet(a) ||		    \
+-	 ipv4_is_test_198(a) ||		    \
+ 	 ipv4_is_anycast_6to4(a))
+ 
+ /* Flags used for the bind address copy functions.  */
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index 04eb28f7735fb..7f71b54c06c5f 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -1225,9 +1225,7 @@ int cgroup1_get_tree(struct fs_context *fc)
+ 		ret = cgroup_do_get_tree(fc);
+ 
+ 	if (!ret && percpu_ref_is_dying(&ctx->root->cgrp.self.refcnt)) {
+-		struct super_block *sb = fc->root->d_sb;
+-		dput(fc->root);
+-		deactivate_locked_super(sb);
++		fc_drop_locked(fc);
+ 		ret = 1;
+ 	}
+ 
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 73bbe792fe1e8..b338f514ee5aa 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -879,10 +879,9 @@ static bool trc_inspect_reader(struct task_struct *t, void *arg)
+ 		in_qs = likely(!t->trc_reader_nesting);
+ 	}
+ 
+-	// Mark as checked.  Because this is called from the grace-period
+-	// kthread, also remove the task from the holdout list.
++	// Mark as checked so that the grace-period kthread will
++	// remove it from the holdout list.
+ 	t->trc_reader_checked = true;
+-	trc_del_holdout(t);
+ 
+ 	if (in_qs)
+ 		return true;  // Already in quiescent state, done!!!
+@@ -909,7 +908,6 @@ static void trc_wait_for_one_reader(struct task_struct *t,
+ 	// The current task had better be in a quiescent state.
+ 	if (t == current) {
+ 		t->trc_reader_checked = true;
+-		trc_del_holdout(t);
+ 		WARN_ON_ONCE(t->trc_reader_nesting);
+ 		return;
+ 	}
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index b23f7d1044be7..51d19fc71e616 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3670,15 +3670,21 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
+ 						  unbound_release_work);
+ 	struct workqueue_struct *wq = pwq->wq;
+ 	struct worker_pool *pool = pwq->pool;
+-	bool is_last;
++	bool is_last = false;
+ 
+-	if (WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND)))
+-		return;
++	/*
++	 * when @pwq is not linked, it doesn't hold any reference to the
++	 * @wq, and @wq is invalid to access.
++	 */
++	if (!list_empty(&pwq->pwqs_node)) {
++		if (WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND)))
++			return;
+ 
+-	mutex_lock(&wq->mutex);
+-	list_del_rcu(&pwq->pwqs_node);
+-	is_last = list_empty(&wq->pwqs);
+-	mutex_unlock(&wq->mutex);
++		mutex_lock(&wq->mutex);
++		list_del_rcu(&pwq->pwqs_node);
++		is_last = list_empty(&wq->pwqs);
++		mutex_unlock(&wq->mutex);
++	}
+ 
+ 	mutex_lock(&wq_pool_mutex);
+ 	put_unbound_pool(pool);
+diff --git a/net/802/garp.c b/net/802/garp.c
+index 400bd857e5f57..f6012f8e59f00 100644
+--- a/net/802/garp.c
++++ b/net/802/garp.c
+@@ -203,6 +203,19 @@ static void garp_attr_destroy(struct garp_applicant *app, struct garp_attr *attr
+ 	kfree(attr);
+ }
+ 
++static void garp_attr_destroy_all(struct garp_applicant *app)
++{
++	struct rb_node *node, *next;
++	struct garp_attr *attr;
++
++	for (node = rb_first(&app->gid);
++	     next = node ? rb_next(node) : NULL, node != NULL;
++	     node = next) {
++		attr = rb_entry(node, struct garp_attr, node);
++		garp_attr_destroy(app, attr);
++	}
++}
++
+ static int garp_pdu_init(struct garp_applicant *app)
+ {
+ 	struct sk_buff *skb;
+@@ -609,6 +622,7 @@ void garp_uninit_applicant(struct net_device *dev, struct garp_application *appl
+ 
+ 	spin_lock_bh(&app->lock);
+ 	garp_gid_event(app, GARP_EVENT_TRANSMIT_PDU);
++	garp_attr_destroy_all(app);
+ 	garp_pdu_queue(app);
+ 	spin_unlock_bh(&app->lock);
+ 
+diff --git a/net/802/mrp.c b/net/802/mrp.c
+index bea6e43d45a0d..35e04cc5390c4 100644
+--- a/net/802/mrp.c
++++ b/net/802/mrp.c
+@@ -292,6 +292,19 @@ static void mrp_attr_destroy(struct mrp_applicant *app, struct mrp_attr *attr)
+ 	kfree(attr);
+ }
+ 
++static void mrp_attr_destroy_all(struct mrp_applicant *app)
++{
++	struct rb_node *node, *next;
++	struct mrp_attr *attr;
++
++	for (node = rb_first(&app->mad);
++	     next = node ? rb_next(node) : NULL, node != NULL;
++	     node = next) {
++		attr = rb_entry(node, struct mrp_attr, node);
++		mrp_attr_destroy(app, attr);
++	}
++}
++
+ static int mrp_pdu_init(struct mrp_applicant *app)
+ {
+ 	struct sk_buff *skb;
+@@ -895,6 +908,7 @@ void mrp_uninit_applicant(struct net_device *dev, struct mrp_application *appl)
+ 
+ 	spin_lock_bh(&app->lock);
+ 	mrp_mad_event(app, MRP_EVENT_TX);
++	mrp_attr_destroy_all(app);
+ 	mrp_pdu_queue(app);
+ 	spin_unlock_bh(&app->lock);
+ 
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 7de51ea15cdfc..d638c5361ed29 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1164,7 +1164,7 @@ set_sndbuf:
+ 			if (val < 0)
+ 				ret = -EINVAL;
+ 			else
+-				sk->sk_ll_usec = val;
++				WRITE_ONCE(sk->sk_ll_usec, val);
+ 		}
+ 		break;
+ #endif
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 341d0c7acc8bf..72a673a43a754 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -60,10 +60,38 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
+ {
+ 	struct dst_entry *dst = skb_dst(skb);
+ 	struct net_device *dev = dst->dev;
++	unsigned int hh_len = LL_RESERVED_SPACE(dev);
++	int delta = hh_len - skb_headroom(skb);
+ 	const struct in6_addr *nexthop;
+ 	struct neighbour *neigh;
+ 	int ret;
+ 
++	/* Be paranoid, rather than too clever. */
++	if (unlikely(delta > 0) && dev->header_ops) {
++		/* pskb_expand_head() might crash, if skb is shared */
++		if (skb_shared(skb)) {
++			struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC);
++
++			if (likely(nskb)) {
++				if (skb->sk)
++					skb_set_owner_w(nskb, skb->sk);
++				consume_skb(skb);
++			} else {
++				kfree_skb(skb);
++			}
++			skb = nskb;
++		}
++		if (skb &&
++		    pskb_expand_head(skb, SKB_DATA_ALIGN(delta), 0, GFP_ATOMIC)) {
++			kfree_skb(skb);
++			skb = NULL;
++		}
++		if (!skb) {
++			IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTDISCARDS);
++			return -ENOMEM;
++		}
++	}
++
+ 	if (ipv6_addr_is_multicast(&ipv6_hdr(skb)->daddr)) {
+ 		struct inet6_dev *idev = ip6_dst_idev(skb_dst(skb));
+ 
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 47fb87ce489fc..940f1e257a90a 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -397,7 +397,8 @@ static enum sctp_scope sctp_v4_scope(union sctp_addr *addr)
+ 		retval = SCTP_SCOPE_LINK;
+ 	} else if (ipv4_is_private_10(addr->v4.sin_addr.s_addr) ||
+ 		   ipv4_is_private_172(addr->v4.sin_addr.s_addr) ||
+-		   ipv4_is_private_192(addr->v4.sin_addr.s_addr)) {
++		   ipv4_is_private_192(addr->v4.sin_addr.s_addr) ||
++		   ipv4_is_test_198(addr->v4.sin_addr.s_addr)) {
+ 		retval = SCTP_SCOPE_PRIVATE;
+ 	} else {
+ 		retval = SCTP_SCOPE_GLOBAL;
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 39be4b52329b5..37ffa7725cee2 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -1521,6 +1521,53 @@ out:
+ 	return err;
+ }
+ 
++static void unix_peek_fds(struct scm_cookie *scm, struct sk_buff *skb)
++{
++	scm->fp = scm_fp_dup(UNIXCB(skb).fp);
++
++	/*
++	 * Garbage collection of unix sockets starts by selecting a set of
++	 * candidate sockets which have reference only from being in flight
++	 * (total_refs == inflight_refs).  This condition is checked once during
++	 * the candidate collection phase, and candidates are marked as such, so
++	 * that non-candidates can later be ignored.  While inflight_refs is
++	 * protected by unix_gc_lock, total_refs (file count) is not, hence this
++	 * is an instantaneous decision.
++	 *
++	 * Once a candidate, however, the socket must not be reinstalled into a
++	 * file descriptor while the garbage collection is in progress.
++	 *
++	 * If the above conditions are met, then the directed graph of
++	 * candidates (*) does not change while unix_gc_lock is held.
++	 *
++	 * Any operations that changes the file count through file descriptors
++	 * (dup, close, sendmsg) does not change the graph since candidates are
++	 * not installed in fds.
++	 *
++	 * Dequeing a candidate via recvmsg would install it into an fd, but
++	 * that takes unix_gc_lock to decrement the inflight count, so it's
++	 * serialized with garbage collection.
++	 *
++	 * MSG_PEEK is special in that it does not change the inflight count,
++	 * yet does install the socket into an fd.  The following lock/unlock
++	 * pair is to ensure serialization with garbage collection.  It must be
++	 * done between incrementing the file count and installing the file into
++	 * an fd.
++	 *
++	 * If garbage collection starts after the barrier provided by the
++	 * lock/unlock, then it will see the elevated refcount and not mark this
++	 * as a candidate.  If a garbage collection is already in progress
++	 * before the file count was incremented, then the lock/unlock pair will
++	 * ensure that garbage collection is finished before progressing to
++	 * installing the fd.
++	 *
++	 * (*) A -> B where B is on the queue of A or B is on the queue of C
++	 * which is on the queue of listening socket A.
++	 */
++	spin_lock(&unix_gc_lock);
++	spin_unlock(&unix_gc_lock);
++}
++
+ static int unix_scm_to_skb(struct scm_cookie *scm, struct sk_buff *skb, bool send_fds)
+ {
+ 	int err = 0;
+@@ -2170,7 +2217,7 @@ static int unix_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
+ 		sk_peek_offset_fwd(sk, size);
+ 
+ 		if (UNIXCB(skb).fp)
+-			scm.fp = scm_fp_dup(UNIXCB(skb).fp);
++			unix_peek_fds(&scm, skb);
+ 	}
+ 	err = (flags & MSG_TRUNC) ? skb->len - skip : size;
+ 
+@@ -2413,7 +2460,7 @@ unlock:
+ 			/* It is questionable, see note in unix_dgram_recvmsg.
+ 			 */
+ 			if (UNIXCB(skb).fp)
+-				scm.fp = scm_fp_dup(UNIXCB(skb).fp);
++				unix_peek_fds(&scm, skb);
+ 
+ 			sk_peek_offset_fwd(sk, chunk);
+ 
+diff --git a/tools/scripts/Makefile.include b/tools/scripts/Makefile.include
+index 1358e89cdf7d6..a3f99ef6b11ba 100644
+--- a/tools/scripts/Makefile.include
++++ b/tools/scripts/Makefile.include
+@@ -39,8 +39,6 @@ EXTRA_WARNINGS += -Wundef
+ EXTRA_WARNINGS += -Wwrite-strings
+ EXTRA_WARNINGS += -Wformat
+ 
+-CC_NO_CLANG := $(shell $(CC) -dM -E -x c /dev/null | grep -Fq "__clang__"; echo $$?)
+-
+ # Makefiles suck: This macro sets a default value of $(2) for the
+ # variable named by $(1), unless the variable has been set by
+ # environment or command line. This is necessary for CC and AR
+@@ -52,12 +50,22 @@ define allow-override
+     $(eval $(1) = $(2)))
+ endef
+ 
++ifneq ($(LLVM),)
++$(call allow-override,CC,clang)
++$(call allow-override,AR,llvm-ar)
++$(call allow-override,LD,ld.lld)
++$(call allow-override,CXX,clang++)
++$(call allow-override,STRIP,llvm-strip)
++else
+ # Allow setting various cross-compile vars or setting CROSS_COMPILE as a prefix.
+ $(call allow-override,CC,$(CROSS_COMPILE)gcc)
+ $(call allow-override,AR,$(CROSS_COMPILE)ar)
+ $(call allow-override,LD,$(CROSS_COMPILE)ld)
+ $(call allow-override,CXX,$(CROSS_COMPILE)g++)
+ $(call allow-override,STRIP,$(CROSS_COMPILE)strip)
++endif
++
++CC_NO_CLANG := $(shell $(CC) -dM -E -x c /dev/null | grep -Fq "__clang__"; echo $$?)
+ 
+ ifneq ($(LLVM),)
+ HOSTAR  ?= llvm-ar


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-02 22:35 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-02 22:35 UTC (permalink / raw
  To: gentoo-commits

commit:     e110b1538735fe21fd40eac1a27cbfe4fbea30dc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug  2 22:27:34 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Aug  2 22:35:41 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e110b153

Select SECCOMP options only if supported

Thanks to Matt Turner for this patch

Some architectures (e.g., alpha, sparc) do not support SECCOMP.
Without this kernel builds will show:

WARNING: unmet direct dependencies detected for SECCOMP
  Depends on [n]: HAVE_ARCH_SECCOMP [=n]
  Selected by [y]:
  - GENTOO_LINUX_INIT_SYSTEMD [=y] && GENTOO_LINUX [=y] && GENTOO_LINUX_UDEV [=y]

WARNING: unmet direct dependencies detected for SECCOMP_FILTER
  Depends on [n]: HAVE_ARCH_SECCOMP_FILTER [=n] && SECCOMP [=y] && NET [=y]
  Selected by [y]:
  - GENTOO_LINUX_INIT_SYSTEMD [=y] && GENTOO_LINUX [=y] && GENTOO_LINUX_UDEV [=y]

Signed-off-by: Matt Turner <mattst88 <AT> gentoo.org>
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index c063c6d..f875dba 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -138,8 +138,8 @@
 +	select NET
 +	select NET_NS
 +	select PROC_FS
-+	select SECCOMP
-+	select SECCOMP_FILTER
++	select SECCOMP if HAVE_ARCH_SECCOMP
++	select SECCOMP_FILTER HAVE_ARCH_SECCOMP_FILTER
 +	select SIGNALFD
 +	select SYSFS
 +	select TIMERFD
@@ -188,8 +188,8 @@
 +	select DEBUG_SG
 +	select BUG_ON_DATA_CORRUPTION
 +	select SCHED_STACK_END_CHECK
-+	select SECCOMP
-+	select SECCOMP_FILTER
++	select SECCOMP if HAVE_ARCH_SECCOMP
++	select SECCOMP_FILTER HAVE_ARCH_SECCOMP_FILTER
 +	select SECURITY_YAMA
 +	select SLAB_FREELIST_RANDOM
 +	select SLAB_FREELIST_HARDENED


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-03 11:03 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-03 11:03 UTC (permalink / raw
  To: gentoo-commits

commit:     0a51dd78934b182a78327edc2c0e6a0b847f2f01
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug  3 11:00:25 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug  3 11:03:47 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0a51dd78

Fix SECCOMP Patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index f875dba..fa005e6 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -139,7 +139,7 @@
 +	select NET_NS
 +	select PROC_FS
 +	select SECCOMP if HAVE_ARCH_SECCOMP
-+	select SECCOMP_FILTER HAVE_ARCH_SECCOMP_FILTER
++	select SECCOMP_FILTER if HAVE_ARCH_SECCOMP_FILTER
 +	select SIGNALFD
 +	select SYSFS
 +	select TIMERFD
@@ -189,7 +189,7 @@
 +	select BUG_ON_DATA_CORRUPTION
 +	select SCHED_STACK_END_CHECK
 +	select SECCOMP if HAVE_ARCH_SECCOMP
-+	select SECCOMP_FILTER HAVE_ARCH_SECCOMP_FILTER
++	select SECCOMP_FILTER if HAVE_ARCH_SECCOMP_FILTER
 +	select SECURITY_YAMA
 +	select SLAB_FREELIST_RANDOM
 +	select SLAB_FREELIST_HARDENED


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-04 11:52 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-04 11:52 UTC (permalink / raw
  To: gentoo-commits

commit:     db2679f60f87e86e11bf86afe9885781fc7f0505
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug  4 11:52:05 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug  4 11:52:05 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=db2679f6

Linux patch 5.10.56

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1055_linux-5.10.56.patch | 2796 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2800 insertions(+)

diff --git a/0000_README b/0000_README
index f1cdfa7..f79bdd7 100644
--- a/0000_README
+++ b/0000_README
@@ -263,6 +263,10 @@ Patch:  1054_linux-5.10.55.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.55
 
+Patch:  1055_linux-5.10.56.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.56
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1055_linux-5.10.56.patch b/1055_linux-5.10.56.patch
new file mode 100644
index 0000000..c83aa61
--- /dev/null
+++ b/1055_linux-5.10.56.patch
@@ -0,0 +1,2796 @@
+diff --git a/Makefile b/Makefile
+index 7fb6405f3b60f..0090f53846e9c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 55
++SUBLEVEL = 56
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/alpha/kernel/setup.c b/arch/alpha/kernel/setup.c
+index 916e42d74a869..7eea28d6f6c88 100644
+--- a/arch/alpha/kernel/setup.c
++++ b/arch/alpha/kernel/setup.c
+@@ -325,18 +325,19 @@ setup_memory(void *kernel_end)
+ 		       i, cluster->usage, cluster->start_pfn,
+ 		       cluster->start_pfn + cluster->numpages);
+ 
+-		/* Bit 0 is console/PALcode reserved.  Bit 1 is
+-		   non-volatile memory -- we might want to mark
+-		   this for later.  */
+-		if (cluster->usage & 3)
+-			continue;
+-
+ 		end = cluster->start_pfn + cluster->numpages;
+ 		if (end > max_low_pfn)
+ 			max_low_pfn = end;
+ 
+ 		memblock_add(PFN_PHYS(cluster->start_pfn),
+ 			     cluster->numpages << PAGE_SHIFT);
++
++		/* Bit 0 is console/PALcode reserved.  Bit 1 is
++		   non-volatile memory -- we might want to mark
++		   this for later.  */
++		if (cluster->usage & 3)
++			memblock_reserve(PFN_PHYS(cluster->start_pfn),
++				         cluster->numpages << PAGE_SHIFT);
+ 	}
+ 
+ 	/*
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index 0207b6ea6e8a0..ce8b043263521 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -1602,6 +1602,9 @@ exit:
+ 		rn = arm_bpf_get_reg32(src_lo, tmp2[1], ctx);
+ 		emit_ldx_r(dst, rn, off, ctx, BPF_SIZE(code));
+ 		break;
++	/* speculation barrier */
++	case BPF_ST | BPF_NOSPEC:
++		break;
+ 	/* ST: *(size *)(dst + off) = imm */
+ 	case BPF_ST | BPF_MEM | BPF_W:
+ 	case BPF_ST | BPF_MEM | BPF_H:
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index ef9f1d5e989d0..345066b8e9fc8 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -829,6 +829,19 @@ emit_cond_jmp:
+ 			return ret;
+ 		break;
+ 
++	/* speculation barrier */
++	case BPF_ST | BPF_NOSPEC:
++		/*
++		 * Nothing required here.
++		 *
++		 * In case of arm64, we rely on the firmware mitigation of
++		 * Speculative Store Bypass as controlled via the ssbd kernel
++		 * parameter. Whenever the mitigation is enabled, it works
++		 * for all of the kernel code with no need to provide any
++		 * additional instructions.
++		 */
++		break;
++
+ 	/* ST: *(size *)(dst + off) = imm */
+ 	case BPF_ST | BPF_MEM | BPF_W:
+ 	case BPF_ST | BPF_MEM | BPF_H:
+diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
+index 561154cbcc401..b31b91e57c341 100644
+--- a/arch/mips/net/ebpf_jit.c
++++ b/arch/mips/net/ebpf_jit.c
+@@ -1355,6 +1355,9 @@ jeq_common:
+ 		}
+ 		break;
+ 
++	case BPF_ST | BPF_NOSPEC: /* speculation barrier */
++		break;
++
+ 	case BPF_ST | BPF_B | BPF_MEM:
+ 	case BPF_ST | BPF_H | BPF_MEM:
+ 	case BPF_ST | BPF_W | BPF_MEM:
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 022103c6a201a..658ca2bab13cc 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -646,6 +646,12 @@ emit_clear:
+ 			}
+ 			break;
+ 
++		/*
++		 * BPF_ST NOSPEC (speculation barrier)
++		 */
++		case BPF_ST | BPF_NOSPEC:
++			break;
++
+ 		/*
+ 		 * BPF_ST(X)
+ 		 */
+diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
+index 090c13f6c8815..5f0d446a2325e 100644
+--- a/arch/powerpc/platforms/pseries/setup.c
++++ b/arch/powerpc/platforms/pseries/setup.c
+@@ -76,7 +76,7 @@
+ #include "../../../../drivers/pci/pci.h"
+ 
+ DEFINE_STATIC_KEY_FALSE(shared_processor);
+-EXPORT_SYMBOL_GPL(shared_processor);
++EXPORT_SYMBOL(shared_processor);
+ 
+ int CMO_PrPSP = -1;
+ int CMO_SecPSP = -1;
+diff --git a/arch/riscv/net/bpf_jit_comp32.c b/arch/riscv/net/bpf_jit_comp32.c
+index 579575f9cdae0..f300f93ba6456 100644
+--- a/arch/riscv/net/bpf_jit_comp32.c
++++ b/arch/riscv/net/bpf_jit_comp32.c
+@@ -1251,6 +1251,10 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
+ 			return -1;
+ 		break;
+ 
++	/* speculation barrier */
++	case BPF_ST | BPF_NOSPEC:
++		break;
++
+ 	case BPF_ST | BPF_MEM | BPF_B:
+ 	case BPF_ST | BPF_MEM | BPF_H:
+ 	case BPF_ST | BPF_MEM | BPF_W:
+diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
+index 8a56b52931170..c113ae818b14e 100644
+--- a/arch/riscv/net/bpf_jit_comp64.c
++++ b/arch/riscv/net/bpf_jit_comp64.c
+@@ -939,6 +939,10 @@ out_be:
+ 		emit_ld(rd, 0, RV_REG_T1, ctx);
+ 		break;
+ 
++	/* speculation barrier */
++	case BPF_ST | BPF_NOSPEC:
++		break;
++
+ 	/* ST: *(size *)(dst + off) = imm */
+ 	case BPF_ST | BPF_MEM | BPF_B:
+ 		emit_imm(RV_REG_T1, imm, ctx);
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index fc44dce59536e..dee01d3b23a40 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -1153,6 +1153,11 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 			break;
+ 		}
+ 		break;
++	/*
++	 * BPF_NOSPEC (speculation barrier)
++	 */
++	case BPF_ST | BPF_NOSPEC:
++		break;
+ 	/*
+ 	 * BPF_ST(X)
+ 	 */
+diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
+index 3364e2a009899..fef734473c0f3 100644
+--- a/arch/sparc/net/bpf_jit_comp_64.c
++++ b/arch/sparc/net/bpf_jit_comp_64.c
+@@ -1287,6 +1287,9 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+ 			return 1;
+ 		break;
+ 	}
++	/* speculation barrier */
++	case BPF_ST | BPF_NOSPEC:
++		break;
+ 	/* ST: *(size *)(dst + off) = imm */
+ 	case BPF_ST | BPF_MEM | BPF_W:
+ 	case BPF_ST | BPF_MEM | BPF_H:
+diff --git a/arch/x86/include/asm/proto.h b/arch/x86/include/asm/proto.h
+index b6a9d51d1d791..8c5d1910a848f 100644
+--- a/arch/x86/include/asm/proto.h
++++ b/arch/x86/include/asm/proto.h
+@@ -4,6 +4,8 @@
+ 
+ #include <asm/ldt.h>
+ 
++struct task_struct;
++
+ /* misc architecture specific prototypes */
+ 
+ void syscall_init(void);
+diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
+index 698969e18fe35..ff005fe738a4c 100644
+--- a/arch/x86/kvm/ioapic.c
++++ b/arch/x86/kvm/ioapic.c
+@@ -96,7 +96,7 @@ static unsigned long ioapic_read_indirect(struct kvm_ioapic *ioapic,
+ static void rtc_irq_eoi_tracking_reset(struct kvm_ioapic *ioapic)
+ {
+ 	ioapic->rtc_status.pending_eoi = 0;
+-	bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID);
++	bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID + 1);
+ }
+ 
+ static void kvm_rtc_eoi_tracking_restore_all(struct kvm_ioapic *ioapic);
+diff --git a/arch/x86/kvm/ioapic.h b/arch/x86/kvm/ioapic.h
+index 660401700075d..11e4065e16176 100644
+--- a/arch/x86/kvm/ioapic.h
++++ b/arch/x86/kvm/ioapic.h
+@@ -43,13 +43,13 @@ struct kvm_vcpu;
+ 
+ struct dest_map {
+ 	/* vcpu bitmap where IRQ has been sent */
+-	DECLARE_BITMAP(map, KVM_MAX_VCPU_ID);
++	DECLARE_BITMAP(map, KVM_MAX_VCPU_ID + 1);
+ 
+ 	/*
+ 	 * Vector sent to a given vcpu, only valid when
+ 	 * the vcpu's bit in map is set
+ 	 */
+-	u8 vectors[KVM_MAX_VCPU_ID];
++	u8 vectors[KVM_MAX_VCPU_ID + 1];
+ };
+ 
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 3ad6f77ea1c45..27faa00fff71c 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3205,7 +3205,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			return 1;
+ 		break;
+ 	case MSR_KVM_ASYNC_PF_ACK:
+-		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF))
++		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF_INT))
+ 			return 1;
+ 		if (data & 0x1) {
+ 			vcpu->arch.apf.pageready_pending = false;
+@@ -3534,7 +3534,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		msr_info->data = vcpu->arch.apf.msr_int_val;
+ 		break;
+ 	case MSR_KVM_ASYNC_PF_ACK:
+-		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF))
++		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF_INT))
+ 			return 1;
+ 
+ 		msr_info->data = 0;
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index d5fa772560584..0a962cd6bac18 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -1141,6 +1141,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 			}
+ 			break;
+ 
++			/* speculation barrier */
++		case BPF_ST | BPF_NOSPEC:
++			if (boot_cpu_has(X86_FEATURE_XMM2))
++				/* Emit 'lfence' */
++				EMIT3(0x0F, 0xAE, 0xE8);
++			break;
++
+ 			/* ST: *(u8*)(dst_reg + off) = imm */
+ 		case BPF_ST | BPF_MEM | BPF_B:
+ 			if (is_ereg(dst_reg))
+diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
+index 2cf4d217840d8..4bd0f98df700c 100644
+--- a/arch/x86/net/bpf_jit_comp32.c
++++ b/arch/x86/net/bpf_jit_comp32.c
+@@ -1705,6 +1705,12 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 			i++;
+ 			break;
+ 		}
++		/* speculation barrier */
++		case BPF_ST | BPF_NOSPEC:
++			if (boot_cpu_has(X86_FEATURE_XMM2))
++				/* Emit 'lfence' */
++				EMIT3(0x0F, 0xAE, 0xE8);
++			break;
+ 		/* ST: *(u8*)(dst_reg + off) = imm */
+ 		case BPF_ST | BPF_MEM | BPF_H:
+ 		case BPF_ST | BPF_MEM | BPF_B:
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index aaae531135aac..b7d8a954d99c3 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1394,16 +1394,17 @@ static int iocg_wake_fn(struct wait_queue_entry *wq_entry, unsigned mode,
+ 		return -1;
+ 
+ 	iocg_commit_bio(ctx->iocg, wait->bio, wait->abs_cost, cost);
++	wait->committed = true;
+ 
+ 	/*
+ 	 * autoremove_wake_function() removes the wait entry only when it
+-	 * actually changed the task state.  We want the wait always
+-	 * removed.  Remove explicitly and use default_wake_function().
++	 * actually changed the task state. We want the wait always removed.
++	 * Remove explicitly and use default_wake_function(). Note that the
++	 * order of operations is important as finish_wait() tests whether
++	 * @wq_entry is removed without grabbing the lock.
+ 	 */
+-	list_del_init(&wq_entry->entry);
+-	wait->committed = true;
+-
+ 	default_wake_function(wq_entry, mode, flags, key);
++	list_del_init_careful(&wq_entry->entry);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/acpi/dptf/dptf_pch_fivr.c b/drivers/acpi/dptf/dptf_pch_fivr.c
+index 5fca18296bf68..550b9081fcbc2 100644
+--- a/drivers/acpi/dptf/dptf_pch_fivr.c
++++ b/drivers/acpi/dptf/dptf_pch_fivr.c
+@@ -9,6 +9,42 @@
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
+ 
++struct pch_fivr_resp {
++	u64 status;
++	u64 result;
++};
++
++static int pch_fivr_read(acpi_handle handle, char *method, struct pch_fivr_resp *fivr_resp)
++{
++	struct acpi_buffer resp = { sizeof(struct pch_fivr_resp), fivr_resp};
++	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
++	struct acpi_buffer format = { sizeof("NN"), "NN" };
++	union acpi_object *obj;
++	acpi_status status;
++	int ret = -EFAULT;
++
++	status = acpi_evaluate_object(handle, method, NULL, &buffer);
++	if (ACPI_FAILURE(status))
++		return ret;
++
++	obj = buffer.pointer;
++	if (!obj || obj->type != ACPI_TYPE_PACKAGE)
++		goto release_buffer;
++
++	status = acpi_extract_package(obj, &format, &resp);
++	if (ACPI_FAILURE(status))
++		goto release_buffer;
++
++	if (fivr_resp->status)
++		goto release_buffer;
++
++	ret = 0;
++
++release_buffer:
++	kfree(buffer.pointer);
++	return ret;
++}
++
+ /*
+  * Presentation of attributes which are defined for INT1045
+  * They are:
+@@ -23,15 +59,14 @@ static ssize_t name##_show(struct device *dev,\
+ 			   char *buf)\
+ {\
+ 	struct acpi_device *acpi_dev = dev_get_drvdata(dev);\
+-	unsigned long long val;\
+-	acpi_status status;\
++	struct pch_fivr_resp fivr_resp;\
++	int status;\
+ \
+-	status = acpi_evaluate_integer(acpi_dev->handle, #method,\
+-				       NULL, &val);\
+-	if (ACPI_SUCCESS(status))\
+-		return sprintf(buf, "%d\n", (int)val);\
+-	else\
+-		return -EINVAL;\
++	status = pch_fivr_read(acpi_dev->handle, #method, &fivr_resp);\
++	if (status)\
++		return status;\
++\
++	return sprintf(buf, "%llu\n", fivr_resp.result);\
+ }
+ 
+ #define PCH_FIVR_STORE(name, method) \
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 9d82440a1d75b..f2f5f1dc7c61d 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -430,13 +430,6 @@ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
+ 	}
+ }
+ 
+-static bool irq_is_legacy(struct acpi_resource_irq *irq)
+-{
+-	return irq->triggering == ACPI_EDGE_SENSITIVE &&
+-		irq->polarity == ACPI_ACTIVE_HIGH &&
+-		irq->shareable == ACPI_EXCLUSIVE;
+-}
+-
+ /**
+  * acpi_dev_resource_interrupt - Extract ACPI interrupt resource information.
+  * @ares: Input ACPI resource object.
+@@ -475,7 +468,7 @@ bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
+ 		}
+ 		acpi_dev_get_irqresource(res, irq->interrupts[index],
+ 					 irq->triggering, irq->polarity,
+-					 irq->shareable, irq_is_legacy(irq));
++					 irq->shareable, true);
+ 		break;
+ 	case ACPI_RESOURCE_TYPE_EXTENDED_IRQ:
+ 		ext_irq = &ares->data.extended_irq;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 6948ab3c0d998..ffd310279a69c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3322,13 +3322,13 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ 	r = amdgpu_device_get_job_timeout_settings(adev);
+ 	if (r) {
+ 		dev_err(adev->dev, "invalid lockup_timeout parameter syntax\n");
+-		goto failed_unmap;
++		return r;
+ 	}
+ 
+ 	/* early init functions */
+ 	r = amdgpu_device_ip_early_init(adev);
+ 	if (r)
+-		goto failed_unmap;
++		return r;
+ 
+ 	/* doorbell bar mapping and doorbell index init*/
+ 	amdgpu_device_doorbell_init(adev);
+@@ -3532,10 +3532,6 @@ failed:
+ 	if (boco)
+ 		vga_switcheroo_fini_domain_pm_ops(adev->dev);
+ 
+-failed_unmap:
+-	iounmap(adev->rmmio);
+-	adev->rmmio = NULL;
+-
+ 	return r;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+index c4828bd3264bc..b0ee77ee80b90 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+@@ -67,7 +67,7 @@ static int psp_v12_0_init_microcode(struct psp_context *psp)
+ 
+ 	err = psp_init_asd_microcode(psp, chip_name);
+ 	if (err)
+-		goto out;
++		return err;
+ 
+ 	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_ta.bin", chip_name);
+ 	err = request_firmware(&adev->psp.ta_fw, fw_name, adev->dev);
+@@ -80,7 +80,7 @@ static int psp_v12_0_init_microcode(struct psp_context *psp)
+ 	} else {
+ 		err = amdgpu_ucode_validate(adev->psp.ta_fw);
+ 		if (err)
+-			goto out2;
++			goto out;
+ 
+ 		ta_hdr = (const struct ta_firmware_header_v1_0 *)
+ 				 adev->psp.ta_fw->data;
+@@ -105,10 +105,9 @@ static int psp_v12_0_init_microcode(struct psp_context *psp)
+ 
+ 	return 0;
+ 
+-out2:
++out:
+ 	release_firmware(adev->psp.ta_fw);
+ 	adev->psp.ta_fw = NULL;
+-out:
+ 	if (err) {
+ 		dev_err(adev->dev,
+ 			"psp v12.0: Failed to load firmware \"%s\"\n",
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
+index f2114bc910bf0..6b744edcd75fd 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
+@@ -135,7 +135,7 @@ void dcn20_update_clocks_update_dentist(struct clk_mgr_internal *clk_mgr)
+ 
+ 	REG_UPDATE(DENTIST_DISPCLK_CNTL,
+ 			DENTIST_DISPCLK_WDIVIDER, dispclk_wdivider);
+-//	REG_WAIT(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_CHG_DONE, 1, 5, 100);
++	REG_WAIT(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_CHG_DONE, 1, 50, 1000);
+ 	REG_UPDATE(DENTIST_DISPCLK_CNTL,
+ 			DENTIST_DPPCLK_WDIVIDER, dppclk_wdivider);
+ 	REG_WAIT(DENTIST_DISPCLK_CNTL, DENTIST_DPPCLK_CHG_DONE, 1, 5, 100);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index 60b304b72b7c3..b39980b9db1db 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -168,7 +168,7 @@ static const struct dpu_mdp_cfg sc7180_mdp[] = {
+ static const struct dpu_mdp_cfg sm8250_mdp[] = {
+ 	{
+ 	.name = "top_0", .id = MDP_TOP,
+-	.base = 0x0, .len = 0x45C,
++	.base = 0x0, .len = 0x494,
+ 	.features = 0,
+ 	.highest_bank_bit = 0x3, /* TODO: 2 for LP_DDR4 */
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c
+index 4963bfe6a4726..aeca8b2ac5c6b 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
+@@ -740,6 +740,7 @@ int dp_catalog_panel_timing_cfg(struct dp_catalog *dp_catalog)
+ 	dp_write_link(catalog, REG_DP_HSYNC_VSYNC_WIDTH_POLARITY,
+ 				dp_catalog->width_blanking);
+ 	dp_write_link(catalog, REG_DP_ACTIVE_HOR_VER, dp_catalog->dp_active);
++	dp_write_p0(catalog, MMSS_DP_INTF_CONFIG, 0);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 2d70dc4bea654..4228ddc3df0e6 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -3829,7 +3829,7 @@ int wacom_setup_touch_input_capabilities(struct input_dev *input_dev,
+ 		    wacom_wac->shared->touch->product == 0xF6) {
+ 			input_dev->evbit[0] |= BIT_MASK(EV_SW);
+ 			__set_bit(SW_MUTE_DEVICE, input_dev->swbit);
+-			wacom_wac->shared->has_mute_touch_switch = true;
++			wacom_wac->has_mute_touch_switch = true;
+ 		}
+ 		fallthrough;
+ 
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 04621ba8fa761..1fadca8af71a1 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -119,6 +119,7 @@ static int bnxt_re_setup_chip_ctx(struct bnxt_re_dev *rdev, u8 wqe_mode)
+ 	if (!chip_ctx)
+ 		return -ENOMEM;
+ 	chip_ctx->chip_num = bp->chip_num;
++	chip_ctx->hw_stats_size = bp->hw_ring_stats_size;
+ 
+ 	rdev->chip_ctx = chip_ctx;
+ 	/* rest members to follow eventually */
+@@ -507,6 +508,7 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
+ 				       dma_addr_t dma_map,
+ 				       u32 *fw_stats_ctx_id)
+ {
++	struct bnxt_qplib_chip_ctx *chip_ctx = rdev->chip_ctx;
+ 	struct hwrm_stat_ctx_alloc_output resp = {0};
+ 	struct hwrm_stat_ctx_alloc_input req = {0};
+ 	struct bnxt_en_dev *en_dev = rdev->en_dev;
+@@ -523,7 +525,7 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_ALLOC, -1, -1);
+ 	req.update_period_ms = cpu_to_le32(1000);
+ 	req.stats_dma_addr = cpu_to_le64(dma_map);
+-	req.stats_dma_length = cpu_to_le16(sizeof(struct ctx_hw_stats_ext));
++	req.stats_dma_length = cpu_to_le16(chip_ctx->hw_stats_size);
+ 	req.stat_ctx_flags = STAT_CTX_ALLOC_REQ_STAT_CTX_FLAGS_ROCE;
+ 	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
+ 			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+index 3ca47004b7527..754dcebeb4ca1 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+@@ -56,6 +56,7 @@
+ static void bnxt_qplib_free_stats_ctx(struct pci_dev *pdev,
+ 				      struct bnxt_qplib_stats *stats);
+ static int bnxt_qplib_alloc_stats_ctx(struct pci_dev *pdev,
++				      struct bnxt_qplib_chip_ctx *cctx,
+ 				      struct bnxt_qplib_stats *stats);
+ 
+ /* PBL */
+@@ -559,7 +560,7 @@ int bnxt_qplib_alloc_ctx(struct bnxt_qplib_res *res,
+ 		goto fail;
+ stats_alloc:
+ 	/* Stats */
+-	rc = bnxt_qplib_alloc_stats_ctx(res->pdev, &ctx->stats);
++	rc = bnxt_qplib_alloc_stats_ctx(res->pdev, res->cctx, &ctx->stats);
+ 	if (rc)
+ 		goto fail;
+ 
+@@ -889,15 +890,12 @@ static void bnxt_qplib_free_stats_ctx(struct pci_dev *pdev,
+ }
+ 
+ static int bnxt_qplib_alloc_stats_ctx(struct pci_dev *pdev,
++				      struct bnxt_qplib_chip_ctx *cctx,
+ 				      struct bnxt_qplib_stats *stats)
+ {
+ 	memset(stats, 0, sizeof(*stats));
+ 	stats->fw_id = -1;
+-	/* 128 byte aligned context memory is required only for 57500.
+-	 * However making this unconditional, it does not harm previous
+-	 * generation.
+-	 */
+-	stats->size = ALIGN(sizeof(struct ctx_hw_stats), 128);
++	stats->size = cctx->hw_stats_size;
+ 	stats->dma = dma_alloc_coherent(&pdev->dev, stats->size,
+ 					&stats->dma_map, GFP_KERNEL);
+ 	if (!stats->dma) {
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.h b/drivers/infiniband/hw/bnxt_re/qplib_res.h
+index 7a1ab38b95da1..58bad6f784567 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.h
+@@ -60,6 +60,7 @@ struct bnxt_qplib_chip_ctx {
+ 	u16	chip_num;
+ 	u8	chip_rev;
+ 	u8	chip_metal;
++	u16	hw_stats_size;
+ 	struct bnxt_qplib_drv_modes modes;
+ };
+ 
+diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c
+index 73d48c3b8ded3..7d2315c8cacb1 100644
+--- a/drivers/net/can/spi/hi311x.c
++++ b/drivers/net/can/spi/hi311x.c
+@@ -218,7 +218,7 @@ static int hi3110_spi_trans(struct spi_device *spi, int len)
+ 	return ret;
+ }
+ 
+-static u8 hi3110_cmd(struct spi_device *spi, u8 command)
++static int hi3110_cmd(struct spi_device *spi, u8 command)
+ {
+ 	struct hi3110_priv *priv = spi_get_drvdata(spi);
+ 
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index db9f15f17610b..249d2fba28c7f 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -255,6 +255,8 @@ struct ems_usb {
+ 	unsigned int free_slots; /* remember number of available slots */
+ 
+ 	struct ems_cpc_msg active_params; /* active controller parameters */
++	void *rxbuf[MAX_RX_URBS];
++	dma_addr_t rxbuf_dma[MAX_RX_URBS];
+ };
+ 
+ static void ems_usb_read_interrupt_callback(struct urb *urb)
+@@ -587,6 +589,7 @@ static int ems_usb_start(struct ems_usb *dev)
+ 	for (i = 0; i < MAX_RX_URBS; i++) {
+ 		struct urb *urb = NULL;
+ 		u8 *buf = NULL;
++		dma_addr_t buf_dma;
+ 
+ 		/* create a URB, and a buffer for it */
+ 		urb = usb_alloc_urb(0, GFP_KERNEL);
+@@ -596,7 +599,7 @@ static int ems_usb_start(struct ems_usb *dev)
+ 		}
+ 
+ 		buf = usb_alloc_coherent(dev->udev, RX_BUFFER_SIZE, GFP_KERNEL,
+-					 &urb->transfer_dma);
++					 &buf_dma);
+ 		if (!buf) {
+ 			netdev_err(netdev, "No memory left for USB buffer\n");
+ 			usb_free_urb(urb);
+@@ -604,6 +607,8 @@ static int ems_usb_start(struct ems_usb *dev)
+ 			break;
+ 		}
+ 
++		urb->transfer_dma = buf_dma;
++
+ 		usb_fill_bulk_urb(urb, dev->udev, usb_rcvbulkpipe(dev->udev, 2),
+ 				  buf, RX_BUFFER_SIZE,
+ 				  ems_usb_read_bulk_callback, dev);
+@@ -619,6 +624,9 @@ static int ems_usb_start(struct ems_usb *dev)
+ 			break;
+ 		}
+ 
++		dev->rxbuf[i] = buf;
++		dev->rxbuf_dma[i] = buf_dma;
++
+ 		/* Drop reference, USB core will take care of freeing it */
+ 		usb_free_urb(urb);
+ 	}
+@@ -684,6 +692,10 @@ static void unlink_all_urbs(struct ems_usb *dev)
+ 
+ 	usb_kill_anchored_urbs(&dev->rx_submitted);
+ 
++	for (i = 0; i < MAX_RX_URBS; ++i)
++		usb_free_coherent(dev->udev, RX_BUFFER_SIZE,
++				  dev->rxbuf[i], dev->rxbuf_dma[i]);
++
+ 	usb_kill_anchored_urbs(&dev->tx_submitted);
+ 	atomic_set(&dev->active_tx_urbs, 0);
+ 
+diff --git a/drivers/net/can/usb/esd_usb2.c b/drivers/net/can/usb/esd_usb2.c
+index b5d7ed21d7d9e..485e20e0dec2c 100644
+--- a/drivers/net/can/usb/esd_usb2.c
++++ b/drivers/net/can/usb/esd_usb2.c
+@@ -195,6 +195,8 @@ struct esd_usb2 {
+ 	int net_count;
+ 	u32 version;
+ 	int rxinitdone;
++	void *rxbuf[MAX_RX_URBS];
++	dma_addr_t rxbuf_dma[MAX_RX_URBS];
+ };
+ 
+ struct esd_usb2_net_priv {
+@@ -544,6 +546,7 @@ static int esd_usb2_setup_rx_urbs(struct esd_usb2 *dev)
+ 	for (i = 0; i < MAX_RX_URBS; i++) {
+ 		struct urb *urb = NULL;
+ 		u8 *buf = NULL;
++		dma_addr_t buf_dma;
+ 
+ 		/* create a URB, and a buffer for it */
+ 		urb = usb_alloc_urb(0, GFP_KERNEL);
+@@ -553,7 +556,7 @@ static int esd_usb2_setup_rx_urbs(struct esd_usb2 *dev)
+ 		}
+ 
+ 		buf = usb_alloc_coherent(dev->udev, RX_BUFFER_SIZE, GFP_KERNEL,
+-					 &urb->transfer_dma);
++					 &buf_dma);
+ 		if (!buf) {
+ 			dev_warn(dev->udev->dev.parent,
+ 				 "No memory left for USB buffer\n");
+@@ -561,6 +564,8 @@ static int esd_usb2_setup_rx_urbs(struct esd_usb2 *dev)
+ 			goto freeurb;
+ 		}
+ 
++		urb->transfer_dma = buf_dma;
++
+ 		usb_fill_bulk_urb(urb, dev->udev,
+ 				  usb_rcvbulkpipe(dev->udev, 1),
+ 				  buf, RX_BUFFER_SIZE,
+@@ -573,8 +578,12 @@ static int esd_usb2_setup_rx_urbs(struct esd_usb2 *dev)
+ 			usb_unanchor_urb(urb);
+ 			usb_free_coherent(dev->udev, RX_BUFFER_SIZE, buf,
+ 					  urb->transfer_dma);
++			goto freeurb;
+ 		}
+ 
++		dev->rxbuf[i] = buf;
++		dev->rxbuf_dma[i] = buf_dma;
++
+ freeurb:
+ 		/* Drop reference, USB core will take care of freeing it */
+ 		usb_free_urb(urb);
+@@ -662,6 +671,11 @@ static void unlink_all_urbs(struct esd_usb2 *dev)
+ 	int i, j;
+ 
+ 	usb_kill_anchored_urbs(&dev->rx_submitted);
++
++	for (i = 0; i < MAX_RX_URBS; ++i)
++		usb_free_coherent(dev->udev, RX_BUFFER_SIZE,
++				  dev->rxbuf[i], dev->rxbuf_dma[i]);
++
+ 	for (i = 0; i < dev->net_count; i++) {
+ 		priv = dev->nets[i];
+ 		if (priv) {
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index 6d03f1d6c4d38..912160fd2ca02 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -653,6 +653,8 @@ static int mcba_usb_start(struct mcba_priv *priv)
+ 			break;
+ 		}
+ 
++		urb->transfer_dma = buf_dma;
++
+ 		usb_fill_bulk_urb(urb, priv->udev,
+ 				  usb_rcvbulkpipe(priv->udev, MCBA_USB_EP_IN),
+ 				  buf, MCBA_USB_RX_BUFF_SIZE,
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb.c b/drivers/net/can/usb/peak_usb/pcan_usb.c
+index 63bd2ed966978..ce2d0761c4e5a 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb.c
+@@ -117,7 +117,8 @@ MODULE_SUPPORTED_DEVICE("PEAK-System PCAN-USB adapter");
+ #define PCAN_USB_BERR_MASK	(PCAN_USB_ERR_RXERR | PCAN_USB_ERR_TXERR)
+ 
+ /* identify bus event packets with rx/tx error counters */
+-#define PCAN_USB_ERR_CNT		0x80
++#define PCAN_USB_ERR_CNT_DEC		0x00	/* counters are decreasing */
++#define PCAN_USB_ERR_CNT_INC		0x80	/* counters are increasing */
+ 
+ /* private to PCAN-USB adapter */
+ struct pcan_usb {
+@@ -611,11 +612,12 @@ static int pcan_usb_handle_bus_evt(struct pcan_usb_msg_context *mc, u8 ir)
+ 
+ 	/* acccording to the content of the packet */
+ 	switch (ir) {
+-	case PCAN_USB_ERR_CNT:
++	case PCAN_USB_ERR_CNT_DEC:
++	case PCAN_USB_ERR_CNT_INC:
+ 
+ 		/* save rx/tx error counters from in the device context */
+-		pdev->bec.rxerr = mc->ptr[0];
+-		pdev->bec.txerr = mc->ptr[1];
++		pdev->bec.rxerr = mc->ptr[1];
++		pdev->bec.txerr = mc->ptr[2];
+ 		break;
+ 
+ 	default:
+diff --git a/drivers/net/can/usb/usb_8dev.c b/drivers/net/can/usb/usb_8dev.c
+index 62749c67c9592..ca7c55d6a41db 100644
+--- a/drivers/net/can/usb/usb_8dev.c
++++ b/drivers/net/can/usb/usb_8dev.c
+@@ -137,7 +137,8 @@ struct usb_8dev_priv {
+ 	u8 *cmd_msg_buffer;
+ 
+ 	struct mutex usb_8dev_cmd_lock;
+-
++	void *rxbuf[MAX_RX_URBS];
++	dma_addr_t rxbuf_dma[MAX_RX_URBS];
+ };
+ 
+ /* tx frame */
+@@ -733,6 +734,7 @@ static int usb_8dev_start(struct usb_8dev_priv *priv)
+ 	for (i = 0; i < MAX_RX_URBS; i++) {
+ 		struct urb *urb = NULL;
+ 		u8 *buf;
++		dma_addr_t buf_dma;
+ 
+ 		/* create a URB, and a buffer for it */
+ 		urb = usb_alloc_urb(0, GFP_KERNEL);
+@@ -742,7 +744,7 @@ static int usb_8dev_start(struct usb_8dev_priv *priv)
+ 		}
+ 
+ 		buf = usb_alloc_coherent(priv->udev, RX_BUFFER_SIZE, GFP_KERNEL,
+-					 &urb->transfer_dma);
++					 &buf_dma);
+ 		if (!buf) {
+ 			netdev_err(netdev, "No memory left for USB buffer\n");
+ 			usb_free_urb(urb);
+@@ -750,6 +752,8 @@ static int usb_8dev_start(struct usb_8dev_priv *priv)
+ 			break;
+ 		}
+ 
++		urb->transfer_dma = buf_dma;
++
+ 		usb_fill_bulk_urb(urb, priv->udev,
+ 				  usb_rcvbulkpipe(priv->udev,
+ 						  USB_8DEV_ENDP_DATA_RX),
+@@ -767,6 +771,9 @@ static int usb_8dev_start(struct usb_8dev_priv *priv)
+ 			break;
+ 		}
+ 
++		priv->rxbuf[i] = buf;
++		priv->rxbuf_dma[i] = buf_dma;
++
+ 		/* Drop reference, USB core will take care of freeing it */
+ 		usb_free_urb(urb);
+ 	}
+@@ -836,6 +843,10 @@ static void unlink_all_urbs(struct usb_8dev_priv *priv)
+ 
+ 	usb_kill_anchored_urbs(&priv->rx_submitted);
+ 
++	for (i = 0; i < MAX_RX_URBS; ++i)
++		usb_free_coherent(priv->udev, RX_BUFFER_SIZE,
++				  priv->rxbuf[i], priv->rxbuf_dma[i]);
++
+ 	usb_kill_anchored_urbs(&priv->tx_submitted);
+ 	atomic_set(&priv->active_tx_urbs, 0);
+ 
+diff --git a/drivers/net/ethernet/dec/tulip/winbond-840.c b/drivers/net/ethernet/dec/tulip/winbond-840.c
+index 89cbdc1f48574..6161e1c604c0e 100644
+--- a/drivers/net/ethernet/dec/tulip/winbond-840.c
++++ b/drivers/net/ethernet/dec/tulip/winbond-840.c
+@@ -357,7 +357,7 @@ static int w840_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	int i, option = find_cnt < MAX_UNITS ? options[find_cnt] : 0;
+ 	void __iomem *ioaddr;
+ 
+-	i = pci_enable_device(pdev);
++	i = pcim_enable_device(pdev);
+ 	if (i) return i;
+ 
+ 	pci_set_master(pdev);
+@@ -379,7 +379,7 @@ static int w840_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	ioaddr = pci_iomap(pdev, TULIP_BAR, netdev_res_size);
+ 	if (!ioaddr)
+-		goto err_out_free_res;
++		goto err_out_netdev;
+ 
+ 	for (i = 0; i < 3; i++)
+ 		((__le16 *)dev->dev_addr)[i] = cpu_to_le16(eeprom_read(ioaddr, i));
+@@ -458,8 +458,6 @@ static int w840_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ err_out_cleardev:
+ 	pci_iounmap(pdev, ioaddr);
+-err_out_free_res:
+-	pci_release_regions(pdev);
+ err_out_netdev:
+ 	free_netdev (dev);
+ 	return -ENODEV;
+@@ -1526,7 +1524,6 @@ static void w840_remove1(struct pci_dev *pdev)
+ 	if (dev) {
+ 		struct netdev_private *np = netdev_priv(dev);
+ 		unregister_netdev(dev);
+-		pci_release_regions(pdev);
+ 		pci_iounmap(pdev, np->base_addr);
+ 		free_netdev(dev);
+ 	}
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 874073f7f0248..a2bdb2906519e 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -980,7 +980,7 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw,
+ 	default:
+ 		/* if we got here and link is up something bad is afoot */
+ 		netdev_info(netdev,
+-			    "WARNING: Link is up but PHY type 0x%x is not recognized.\n",
++			    "WARNING: Link is up but PHY type 0x%x is not recognized, or incorrect cable is in use\n",
+ 			    hw_link_info->phy_type);
+ 	}
+ 
+@@ -5106,6 +5106,10 @@ flags_complete:
+ 					dev_warn(&pf->pdev->dev,
+ 						 "Device configuration forbids SW from starting the LLDP agent.\n");
+ 					return -EINVAL;
++				case I40E_AQ_RC_EAGAIN:
++					dev_warn(&pf->pdev->dev,
++						 "Stop FW LLDP agent command is still being processed, please try again in a second.\n");
++					return -EBUSY;
+ 				default:
+ 					dev_warn(&pf->pdev->dev,
+ 						 "Starting FW LLDP agent failed: error: %s, %s\n",
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 52e31f712a545..bc648ce0743c7 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -4425,11 +4425,10 @@ int i40e_control_wait_tx_q(int seid, struct i40e_pf *pf, int pf_q,
+ }
+ 
+ /**
+- * i40e_vsi_control_tx - Start or stop a VSI's rings
++ * i40e_vsi_enable_tx - Start a VSI's rings
+  * @vsi: the VSI being configured
+- * @enable: start or stop the rings
+  **/
+-static int i40e_vsi_control_tx(struct i40e_vsi *vsi, bool enable)
++static int i40e_vsi_enable_tx(struct i40e_vsi *vsi)
+ {
+ 	struct i40e_pf *pf = vsi->back;
+ 	int i, pf_q, ret = 0;
+@@ -4438,7 +4437,7 @@ static int i40e_vsi_control_tx(struct i40e_vsi *vsi, bool enable)
+ 	for (i = 0; i < vsi->num_queue_pairs; i++, pf_q++) {
+ 		ret = i40e_control_wait_tx_q(vsi->seid, pf,
+ 					     pf_q,
+-					     false /*is xdp*/, enable);
++					     false /*is xdp*/, true);
+ 		if (ret)
+ 			break;
+ 
+@@ -4447,7 +4446,7 @@ static int i40e_vsi_control_tx(struct i40e_vsi *vsi, bool enable)
+ 
+ 		ret = i40e_control_wait_tx_q(vsi->seid, pf,
+ 					     pf_q + vsi->alloc_queue_pairs,
+-					     true /*is xdp*/, enable);
++					     true /*is xdp*/, true);
+ 		if (ret)
+ 			break;
+ 	}
+@@ -4545,32 +4544,25 @@ int i40e_control_wait_rx_q(struct i40e_pf *pf, int pf_q, bool enable)
+ }
+ 
+ /**
+- * i40e_vsi_control_rx - Start or stop a VSI's rings
++ * i40e_vsi_enable_rx - Start a VSI's rings
+  * @vsi: the VSI being configured
+- * @enable: start or stop the rings
+  **/
+-static int i40e_vsi_control_rx(struct i40e_vsi *vsi, bool enable)
++static int i40e_vsi_enable_rx(struct i40e_vsi *vsi)
+ {
+ 	struct i40e_pf *pf = vsi->back;
+ 	int i, pf_q, ret = 0;
+ 
+ 	pf_q = vsi->base_queue;
+ 	for (i = 0; i < vsi->num_queue_pairs; i++, pf_q++) {
+-		ret = i40e_control_wait_rx_q(pf, pf_q, enable);
++		ret = i40e_control_wait_rx_q(pf, pf_q, true);
+ 		if (ret) {
+ 			dev_info(&pf->pdev->dev,
+-				 "VSI seid %d Rx ring %d %sable timeout\n",
+-				 vsi->seid, pf_q, (enable ? "en" : "dis"));
++				 "VSI seid %d Rx ring %d enable timeout\n",
++				 vsi->seid, pf_q);
+ 			break;
+ 		}
+ 	}
+ 
+-	/* Due to HW errata, on Rx disable only, the register can indicate done
+-	 * before it really is. Needs 50ms to be sure
+-	 */
+-	if (!enable)
+-		mdelay(50);
+-
+ 	return ret;
+ }
+ 
+@@ -4583,29 +4575,47 @@ int i40e_vsi_start_rings(struct i40e_vsi *vsi)
+ 	int ret = 0;
+ 
+ 	/* do rx first for enable and last for disable */
+-	ret = i40e_vsi_control_rx(vsi, true);
++	ret = i40e_vsi_enable_rx(vsi);
+ 	if (ret)
+ 		return ret;
+-	ret = i40e_vsi_control_tx(vsi, true);
++	ret = i40e_vsi_enable_tx(vsi);
+ 
+ 	return ret;
+ }
+ 
++#define I40E_DISABLE_TX_GAP_MSEC	50
++
+ /**
+  * i40e_vsi_stop_rings - Stop a VSI's rings
+  * @vsi: the VSI being configured
+  **/
+ void i40e_vsi_stop_rings(struct i40e_vsi *vsi)
+ {
++	struct i40e_pf *pf = vsi->back;
++	int pf_q, err, q_end;
++
+ 	/* When port TX is suspended, don't wait */
+ 	if (test_bit(__I40E_PORT_SUSPENDED, vsi->back->state))
+ 		return i40e_vsi_stop_rings_no_wait(vsi);
+ 
+-	/* do rx first for enable and last for disable
+-	 * Ignore return value, we need to shutdown whatever we can
+-	 */
+-	i40e_vsi_control_tx(vsi, false);
+-	i40e_vsi_control_rx(vsi, false);
++	q_end = vsi->base_queue + vsi->num_queue_pairs;
++	for (pf_q = vsi->base_queue; pf_q < q_end; pf_q++)
++		i40e_pre_tx_queue_cfg(&pf->hw, (u32)pf_q, false);
++
++	for (pf_q = vsi->base_queue; pf_q < q_end; pf_q++) {
++		err = i40e_control_wait_rx_q(pf, pf_q, false);
++		if (err)
++			dev_info(&pf->pdev->dev,
++				 "VSI seid %d Rx ring %d dissable timeout\n",
++				 vsi->seid, pf_q);
++	}
++
++	msleep(I40E_DISABLE_TX_GAP_MSEC);
++	pf_q = vsi->base_queue;
++	for (pf_q = vsi->base_queue; pf_q < q_end; pf_q++)
++		wr32(&pf->hw, I40E_QTX_ENA(pf_q), 0);
++
++	i40e_vsi_wait_queues_disabled(vsi);
+ }
+ 
+ /**
+@@ -6923,6 +6933,8 @@ static int i40e_validate_mqprio_qopt(struct i40e_vsi *vsi,
+ 	}
+ 	if (vsi->num_queue_pairs <
+ 	    (mqprio_qopt->qopt.offset[i] + mqprio_qopt->qopt.count[i])) {
++		dev_err(&vsi->back->pdev->dev,
++			"Failed to create traffic channel, insufficient number of queues.\n");
+ 		return -EINVAL;
+ 	}
+ 	if (sum_max_rate > i40e_get_link_speed(vsi)) {
+@@ -12799,6 +12811,7 @@ static const struct net_device_ops i40e_netdev_ops = {
+ 	.ndo_poll_controller	= i40e_netpoll,
+ #endif
+ 	.ndo_setup_tc		= __i40e_setup_tc,
++	.ndo_select_queue	= i40e_lan_select_queue,
+ 	.ndo_set_features	= i40e_set_features,
+ 	.ndo_set_vf_mac		= i40e_ndo_set_vf_mac,
+ 	.ndo_set_vf_vlan	= i40e_ndo_set_vf_port_vlan,
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index c40ac82db863e..615802b07521a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -3524,6 +3524,56 @@ dma_error:
+ 	return -1;
+ }
+ 
++static u16 i40e_swdcb_skb_tx_hash(struct net_device *dev,
++				  const struct sk_buff *skb,
++				  u16 num_tx_queues)
++{
++	u32 jhash_initval_salt = 0xd631614b;
++	u32 hash;
++
++	if (skb->sk && skb->sk->sk_hash)
++		hash = skb->sk->sk_hash;
++	else
++		hash = (__force u16)skb->protocol ^ skb->hash;
++
++	hash = jhash_1word(hash, jhash_initval_salt);
++
++	return (u16)(((u64)hash * num_tx_queues) >> 32);
++}
++
++u16 i40e_lan_select_queue(struct net_device *netdev,
++			  struct sk_buff *skb,
++			  struct net_device __always_unused *sb_dev)
++{
++	struct i40e_netdev_priv *np = netdev_priv(netdev);
++	struct i40e_vsi *vsi = np->vsi;
++	struct i40e_hw *hw;
++	u16 qoffset;
++	u16 qcount;
++	u8 tclass;
++	u16 hash;
++	u8 prio;
++
++	/* is DCB enabled at all? */
++	if (vsi->tc_config.numtc == 1)
++		return i40e_swdcb_skb_tx_hash(netdev, skb,
++					      netdev->real_num_tx_queues);
++
++	prio = skb->priority;
++	hw = &vsi->back->hw;
++	tclass = hw->local_dcbx_config.etscfg.prioritytable[prio];
++	/* sanity check */
++	if (unlikely(!(vsi->tc_config.enabled_tc & BIT(tclass))))
++		tclass = 0;
++
++	/* select a queue assigned for the given TC */
++	qcount = vsi->tc_config.tc_info[tclass].qcount;
++	hash = i40e_swdcb_skb_tx_hash(netdev, skb, qcount);
++
++	qoffset = vsi->tc_config.tc_info[tclass].qoffset;
++	return qoffset + hash;
++}
++
+ /**
+  * i40e_xmit_xdp_ring - transmits an XDP buffer to an XDP Tx ring
+  * @xdpf: data to transmit
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+index 2feed920ef8ad..93ac201f68b8e 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+@@ -449,6 +449,8 @@ static inline unsigned int i40e_rx_pg_order(struct i40e_ring *ring)
+ 
+ bool i40e_alloc_rx_buffers(struct i40e_ring *rxr, u16 cleaned_count);
+ netdev_tx_t i40e_lan_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
++u16 i40e_lan_select_queue(struct net_device *netdev, struct sk_buff *skb,
++			  struct net_device *sb_dev);
+ void i40e_clean_tx_ring(struct i40e_ring *tx_ring);
+ void i40e_clean_rx_ring(struct i40e_ring *rx_ring);
+ int i40e_setup_tx_descriptors(struct i40e_ring *tx_ring);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+index 662fb80dbb9d5..c6d408de06050 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+@@ -230,15 +230,14 @@ static int otx2_set_channels(struct net_device *dev,
+ 	err = otx2_set_real_num_queues(dev, channel->tx_count,
+ 				       channel->rx_count);
+ 	if (err)
+-		goto fail;
++		return err;
+ 
+ 	pfvf->hw.rx_queues = channel->rx_count;
+ 	pfvf->hw.tx_queues = channel->tx_count;
+ 	pfvf->qset.cq_cnt = pfvf->hw.tx_queues +  pfvf->hw.rx_queues;
+ 
+-fail:
+ 	if (if_up)
+-		dev->netdev_ops->ndo_open(dev);
++		err = dev->netdev_ops->ndo_open(dev);
+ 
+ 	netdev_info(dev, "Setting num Tx rings to %d, Rx rings to %d success\n",
+ 		    pfvf->hw.tx_queues, pfvf->hw.rx_queues);
+@@ -342,7 +341,7 @@ static int otx2_set_ringparam(struct net_device *netdev,
+ 	qs->rqe_cnt = rx_count;
+ 
+ 	if (if_up)
+-		netdev->netdev_ops->ndo_open(netdev);
++		return netdev->netdev_ops->ndo_open(netdev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 9fef9be015e5e..044a5b1196acb 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -1592,6 +1592,7 @@ int otx2_open(struct net_device *netdev)
+ err_tx_stop_queues:
+ 	netif_tx_stop_all_queues(netdev);
+ 	netif_carrier_off(netdev);
++	pf->flags |= OTX2_FLAG_INTF_DOWN;
+ err_free_cints:
+ 	otx2_free_cints(pf, qidx);
+ 	vec = pci_irq_vector(pf->pdev,
+@@ -1619,6 +1620,10 @@ int otx2_stop(struct net_device *netdev)
+ 	struct otx2_rss_info *rss;
+ 	int qidx, vec, wrk;
+ 
++	/* If the DOWN flag is set resources are already freed */
++	if (pf->flags & OTX2_FLAG_INTF_DOWN)
++		return 0;
++
+ 	netif_carrier_off(netdev);
+ 	netif_tx_stop_all_queues(netdev);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
+index 00c84656b2e7e..28ac4693da3cf 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/main.c
++++ b/drivers/net/ethernet/mellanox/mlx4/main.c
+@@ -3535,6 +3535,7 @@ slave_start:
+ 
+ 		if (!SRIOV_VALID_STATE(dev->flags)) {
+ 			mlx4_err(dev, "Invalid SRIOV state\n");
++			err = -EINVAL;
+ 			goto err_close;
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 59837af959d06..1ad1692a5b2d7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -481,12 +481,32 @@ static void mlx5e_detach_mod_hdr(struct mlx5e_priv *priv,
+ static
+ struct mlx5_core_dev *mlx5e_hairpin_get_mdev(struct net *net, int ifindex)
+ {
++	struct mlx5_core_dev *mdev;
+ 	struct net_device *netdev;
+ 	struct mlx5e_priv *priv;
+ 
+-	netdev = __dev_get_by_index(net, ifindex);
++	netdev = dev_get_by_index(net, ifindex);
++	if (!netdev)
++		return ERR_PTR(-ENODEV);
++
+ 	priv = netdev_priv(netdev);
+-	return priv->mdev;
++	mdev = priv->mdev;
++	dev_put(netdev);
++
++	/* Mirred tc action holds a refcount on the ifindex net_device (see
++	 * net/sched/act_mirred.c:tcf_mirred_get_dev). So, it's okay to continue using mdev
++	 * after dev_put(netdev), while we're in the context of adding a tc flow.
++	 *
++	 * The mdev pointer corresponds to the peer/out net_device of a hairpin. It is then
++	 * stored in a hairpin object, which exists until all flows, that refer to it, get
++	 * removed.
++	 *
++	 * On the other hand, after a hairpin object has been created, the peer net_device may
++	 * be removed/unbound while there are still some hairpin flows that are using it. This
++	 * case is handled by mlx5e_tc_hairpin_update_dead_peer, which is hooked to
++	 * NETDEV_UNREGISTER event of the peer net_device.
++	 */
++	return mdev;
+ }
+ 
+ static int mlx5e_hairpin_create_transport(struct mlx5e_hairpin *hp)
+@@ -685,6 +705,10 @@ mlx5e_hairpin_create(struct mlx5e_priv *priv, struct mlx5_hairpin_params *params
+ 
+ 	func_mdev = priv->mdev;
+ 	peer_mdev = mlx5e_hairpin_get_mdev(dev_net(priv->netdev), peer_ifindex);
++	if (IS_ERR(peer_mdev)) {
++		err = PTR_ERR(peer_mdev);
++		goto create_pair_err;
++	}
+ 
+ 	pair = mlx5_core_hairpin_create(func_mdev, peer_mdev, params);
+ 	if (IS_ERR(pair)) {
+@@ -823,6 +847,11 @@ static int mlx5e_hairpin_flow_add(struct mlx5e_priv *priv,
+ 	int err;
+ 
+ 	peer_mdev = mlx5e_hairpin_get_mdev(dev_net(priv->netdev), peer_ifindex);
++	if (IS_ERR(peer_mdev)) {
++		NL_SET_ERR_MSG_MOD(extack, "invalid ifindex of mirred device");
++		return PTR_ERR(peer_mdev);
++	}
++
+ 	if (!MLX5_CAP_GEN(priv->mdev, hairpin) || !MLX5_CAP_GEN(peer_mdev, hairpin)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "hairpin is not supported");
+ 		return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 79fc5755735fa..1d4b4e6f6fb41 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1024,17 +1024,19 @@ static int connect_fwd_rules(struct mlx5_core_dev *dev,
+ static int connect_flow_table(struct mlx5_core_dev *dev, struct mlx5_flow_table *ft,
+ 			      struct fs_prio *prio)
+ {
+-	struct mlx5_flow_table *next_ft;
++	struct mlx5_flow_table *next_ft, *first_ft;
+ 	int err = 0;
+ 
+ 	/* Connect_prev_fts and update_root_ft_create are mutually exclusive */
+ 
+-	if (list_empty(&prio->node.children)) {
++	first_ft = list_first_entry_or_null(&prio->node.children,
++					    struct mlx5_flow_table, node.list);
++	if (!first_ft || first_ft->level > ft->level) {
+ 		err = connect_prev_fts(dev, ft, prio);
+ 		if (err)
+ 			return err;
+ 
+-		next_ft = find_next_chained_ft(prio);
++		next_ft = first_ft ? first_ft : find_next_chained_ft(prio);
+ 		err = connect_fwd_rules(dev, ft, next_ft);
+ 		if (err)
+ 			return err;
+@@ -2113,7 +2115,7 @@ static int disconnect_flow_table(struct mlx5_flow_table *ft)
+ 				node.list) == ft))
+ 		return 0;
+ 
+-	next_ft = find_next_chained_ft(prio);
++	next_ft = find_next_ft(ft);
+ 	err = connect_fwd_rules(dev, next_ft, ft);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index d0ae1cf43592d..6dc7ce6494488 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -52,7 +52,19 @@ static void ionic_dim_work(struct work_struct *work)
+ 	cur_moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
+ 	qcq = container_of(dim, struct ionic_qcq, dim);
+ 	new_coal = ionic_coal_usec_to_hw(qcq->q.lif->ionic, cur_moder.usec);
+-	qcq->intr.dim_coal_hw = new_coal ? new_coal : 1;
++	new_coal = new_coal ? new_coal : 1;
++
++	if (qcq->intr.dim_coal_hw != new_coal) {
++		unsigned int qi = qcq->cq.bound_q->index;
++		struct ionic_lif *lif = qcq->q.lif;
++
++		qcq->intr.dim_coal_hw = new_coal;
++
++		ionic_intr_coal_init(lif->ionic->idev.intr_ctrl,
++				     lif->rxqcqs[qi]->intr.index,
++				     qcq->intr.dim_coal_hw);
++	}
++
+ 	dim->state = DIM_START_MEASURE;
+ }
+ 
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+index 909eca14f647f..46dbb49f837c8 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+@@ -197,12 +197,11 @@ static void ionic_rx_clean(struct ionic_queue *q,
+ 		}
+ 	}
+ 
+-	if (likely(netdev->features & NETIF_F_RXCSUM)) {
+-		if (comp->csum_flags & IONIC_RXQ_COMP_CSUM_F_CALC) {
+-			skb->ip_summed = CHECKSUM_COMPLETE;
+-			skb->csum = (__force __wsum)le16_to_cpu(comp->csum);
+-			stats->csum_complete++;
+-		}
++	if (likely(netdev->features & NETIF_F_RXCSUM) &&
++	    (comp->csum_flags & IONIC_RXQ_COMP_CSUM_F_CALC)) {
++		skb->ip_summed = CHECKSUM_COMPLETE;
++		skb->csum = (__force __wsum)le16_to_cpu(comp->csum);
++		stats->csum_complete++;
+ 	} else {
+ 		stats->csum_none++;
+ 	}
+@@ -417,11 +416,12 @@ void ionic_rx_empty(struct ionic_queue *q)
+ 	}
+ }
+ 
+-static void ionic_dim_update(struct ionic_qcq *qcq)
++static void ionic_dim_update(struct ionic_qcq *qcq, int napi_mode)
+ {
+ 	struct dim_sample dim_sample;
+ 	struct ionic_lif *lif;
+ 	unsigned int qi;
++	u64 pkts, bytes;
+ 
+ 	if (!qcq->intr.dim_coal_hw)
+ 		return;
+@@ -429,14 +429,23 @@ static void ionic_dim_update(struct ionic_qcq *qcq)
+ 	lif = qcq->q.lif;
+ 	qi = qcq->cq.bound_q->index;
+ 
+-	ionic_intr_coal_init(lif->ionic->idev.intr_ctrl,
+-			     lif->rxqcqs[qi]->intr.index,
+-			     qcq->intr.dim_coal_hw);
++	switch (napi_mode) {
++	case IONIC_LIF_F_TX_DIM_INTR:
++		pkts = lif->txqstats[qi].pkts;
++		bytes = lif->txqstats[qi].bytes;
++		break;
++	case IONIC_LIF_F_RX_DIM_INTR:
++		pkts = lif->rxqstats[qi].pkts;
++		bytes = lif->rxqstats[qi].bytes;
++		break;
++	default:
++		pkts = lif->txqstats[qi].pkts + lif->rxqstats[qi].pkts;
++		bytes = lif->txqstats[qi].bytes + lif->rxqstats[qi].bytes;
++		break;
++	}
+ 
+ 	dim_update_sample(qcq->cq.bound_intr->rearm_count,
+-			  lif->txqstats[qi].pkts,
+-			  lif->txqstats[qi].bytes,
+-			  &dim_sample);
++			  pkts, bytes, &dim_sample);
+ 
+ 	net_dim(&qcq->dim, dim_sample);
+ }
+@@ -457,7 +466,7 @@ int ionic_tx_napi(struct napi_struct *napi, int budget)
+ 				     ionic_tx_service, NULL, NULL);
+ 
+ 	if (work_done < budget && napi_complete_done(napi, work_done)) {
+-		ionic_dim_update(qcq);
++		ionic_dim_update(qcq, IONIC_LIF_F_TX_DIM_INTR);
+ 		flags |= IONIC_INTR_CRED_UNMASK;
+ 		cq->bound_intr->rearm_count++;
+ 	}
+@@ -493,7 +502,7 @@ int ionic_rx_napi(struct napi_struct *napi, int budget)
+ 		ionic_rx_fill(cq->bound_q);
+ 
+ 	if (work_done < budget && napi_complete_done(napi, work_done)) {
+-		ionic_dim_update(qcq);
++		ionic_dim_update(qcq, IONIC_LIF_F_RX_DIM_INTR);
+ 		flags |= IONIC_INTR_CRED_UNMASK;
+ 		cq->bound_intr->rearm_count++;
+ 	}
+@@ -535,7 +544,7 @@ int ionic_txrx_napi(struct napi_struct *napi, int budget)
+ 		ionic_rx_fill_cb(rxcq->bound_q);
+ 
+ 	if (rx_work_done < budget && napi_complete_done(napi, rx_work_done)) {
+-		ionic_dim_update(qcq);
++		ionic_dim_update(qcq, 0);
+ 		flags |= IONIC_INTR_CRED_UNMASK;
+ 		rxcq->bound_intr->rearm_count++;
+ 	}
+diff --git a/drivers/net/ethernet/sis/sis900.c b/drivers/net/ethernet/sis/sis900.c
+index 620c26f71be89..e267b7ce3a45e 100644
+--- a/drivers/net/ethernet/sis/sis900.c
++++ b/drivers/net/ethernet/sis/sis900.c
+@@ -443,7 +443,7 @@ static int sis900_probe(struct pci_dev *pci_dev,
+ #endif
+ 
+ 	/* setup various bits in PCI command register */
+-	ret = pci_enable_device(pci_dev);
++	ret = pcim_enable_device(pci_dev);
+ 	if(ret) return ret;
+ 
+ 	i = dma_set_mask(&pci_dev->dev, DMA_BIT_MASK(32));
+@@ -469,7 +469,7 @@ static int sis900_probe(struct pci_dev *pci_dev,
+ 	ioaddr = pci_iomap(pci_dev, 0, 0);
+ 	if (!ioaddr) {
+ 		ret = -ENOMEM;
+-		goto err_out_cleardev;
++		goto err_out;
+ 	}
+ 
+ 	sis_priv = netdev_priv(net_dev);
+@@ -581,8 +581,6 @@ err_unmap_tx:
+ 			  sis_priv->tx_ring_dma);
+ err_out_unmap:
+ 	pci_iounmap(pci_dev, ioaddr);
+-err_out_cleardev:
+-	pci_release_regions(pci_dev);
+  err_out:
+ 	free_netdev(net_dev);
+ 	return ret;
+@@ -2499,7 +2497,6 @@ static void sis900_remove(struct pci_dev *pci_dev)
+ 			  sis_priv->tx_ring_dma);
+ 	pci_iounmap(pci_dev, sis_priv->ioaddr);
+ 	free_netdev(net_dev);
+-	pci_release_regions(pci_dev);
+ }
+ 
+ static int __maybe_unused sis900_suspend(struct device *dev)
+diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
+index 74e748662ec01..860644d182ab0 100644
+--- a/drivers/net/ethernet/sun/niu.c
++++ b/drivers/net/ethernet/sun/niu.c
+@@ -8191,8 +8191,9 @@ static int niu_pci_vpd_fetch(struct niu *np, u32 start)
+ 		err = niu_pci_vpd_scan_props(np, here, end);
+ 		if (err < 0)
+ 			return err;
++		/* ret == 1 is not an error */
+ 		if (err == 1)
+-			return -EINVAL;
++			return 0;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/nfc/nfcsim.c b/drivers/nfc/nfcsim.c
+index a9864fcdfba6b..dd27c85190d34 100644
+--- a/drivers/nfc/nfcsim.c
++++ b/drivers/nfc/nfcsim.c
+@@ -192,8 +192,7 @@ static void nfcsim_recv_wq(struct work_struct *work)
+ 
+ 		if (!IS_ERR(skb))
+ 			dev_kfree_skb(skb);
+-
+-		skb = ERR_PTR(-ENODEV);
++		return;
+ 	}
+ 
+ 	dev->cb(dev->nfc_digital_dev, dev->arg, skb);
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 7bdc86b74f152..646416b5940e9 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -340,7 +340,7 @@ static void end_compressed_bio_write(struct bio *bio)
+ 	cb->compressed_pages[0]->mapping = cb->inode->i_mapping;
+ 	btrfs_writepage_endio_finish_ordered(cb->compressed_pages[0],
+ 			cb->start, cb->start + cb->len - 1,
+-			bio->bi_status == BLK_STS_OK);
++			!cb->errors);
+ 	cb->compressed_pages[0]->mapping = NULL;
+ 
+ 	end_compressed_writeback(inode, cb);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index f9ae3850526c6..920c84fae7102 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1077,6 +1077,7 @@ static void __btrfs_free_extra_devids(struct btrfs_fs_devices *fs_devices,
+ 		if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) {
+ 			list_del_init(&device->dev_alloc_list);
+ 			clear_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state);
++			fs_devices->rw_devices--;
+ 		}
+ 		list_del_init(&device->dev_list);
+ 		fs_devices->num_devices--;
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index da057570bb93d..f46904a4ead31 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -4550,7 +4550,7 @@ read_complete:
+ 
+ static int cifs_readpage(struct file *file, struct page *page)
+ {
+-	loff_t offset = (loff_t)page->index << PAGE_SHIFT;
++	loff_t offset = page_file_offset(page);
+ 	int rc = -EACCES;
+ 	unsigned int xid;
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 525b44140d7a3..ed641dca79573 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -7997,7 +7997,7 @@ static void io_sq_offload_start(struct io_ring_ctx *ctx)
+ 	struct io_sq_data *sqd = ctx->sq_data;
+ 
+ 	ctx->flags &= ~IORING_SETUP_R_DISABLED;
+-	if ((ctx->flags & IORING_SETUP_SQPOLL) && sqd->thread)
++	if ((ctx->flags & IORING_SETUP_SQPOLL) && sqd && sqd->thread)
+ 		wake_up_process(sqd->thread);
+ }
+ 
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index 2b296d720c9fa..919e552abf637 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -1529,6 +1529,45 @@ static void ocfs2_truncate_cluster_pages(struct inode *inode, u64 byte_start,
+ 	}
+ }
+ 
++/*
++ * zero out partial blocks of one cluster.
++ *
++ * start: file offset where zero starts, will be made upper block aligned.
++ * len: it will be trimmed to the end of current cluster if "start + len"
++ *      is bigger than it.
++ */
++static int ocfs2_zeroout_partial_cluster(struct inode *inode,
++					u64 start, u64 len)
++{
++	int ret;
++	u64 start_block, end_block, nr_blocks;
++	u64 p_block, offset;
++	u32 cluster, p_cluster, nr_clusters;
++	struct super_block *sb = inode->i_sb;
++	u64 end = ocfs2_align_bytes_to_clusters(sb, start);
++
++	if (start + len < end)
++		end = start + len;
++
++	start_block = ocfs2_blocks_for_bytes(sb, start);
++	end_block = ocfs2_blocks_for_bytes(sb, end);
++	nr_blocks = end_block - start_block;
++	if (!nr_blocks)
++		return 0;
++
++	cluster = ocfs2_bytes_to_clusters(sb, start);
++	ret = ocfs2_get_clusters(inode, cluster, &p_cluster,
++				&nr_clusters, NULL);
++	if (ret)
++		return ret;
++	if (!p_cluster)
++		return 0;
++
++	offset = start_block - ocfs2_clusters_to_blocks(sb, cluster);
++	p_block = ocfs2_clusters_to_blocks(sb, p_cluster) + offset;
++	return sb_issue_zeroout(sb, p_block, nr_blocks, GFP_NOFS);
++}
++
+ static int ocfs2_zero_partial_clusters(struct inode *inode,
+ 				       u64 start, u64 len)
+ {
+@@ -1538,6 +1577,7 @@ static int ocfs2_zero_partial_clusters(struct inode *inode,
+ 	struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+ 	unsigned int csize = osb->s_clustersize;
+ 	handle_t *handle;
++	loff_t isize = i_size_read(inode);
+ 
+ 	/*
+ 	 * The "start" and "end" values are NOT necessarily part of
+@@ -1558,6 +1598,26 @@ static int ocfs2_zero_partial_clusters(struct inode *inode,
+ 	if ((start & (csize - 1)) == 0 && (end & (csize - 1)) == 0)
+ 		goto out;
+ 
++	/* No page cache for EOF blocks, issue zero out to disk. */
++	if (end > isize) {
++		/*
++		 * zeroout eof blocks in last cluster starting from
++		 * "isize" even "start" > "isize" because it is
++		 * complicated to zeroout just at "start" as "start"
++		 * may be not aligned with block size, buffer write
++		 * would be required to do that, but out of eof buffer
++		 * write is not supported.
++		 */
++		ret = ocfs2_zeroout_partial_cluster(inode, isize,
++					end - isize);
++		if (ret) {
++			mlog_errno(ret);
++			goto out;
++		}
++		if (start >= isize)
++			goto out;
++		end = isize;
++	}
+ 	handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS);
+ 	if (IS_ERR(handle)) {
+ 		ret = PTR_ERR(handle);
+@@ -1855,45 +1915,6 @@ out:
+ 	return ret;
+ }
+ 
+-/*
+- * zero out partial blocks of one cluster.
+- *
+- * start: file offset where zero starts, will be made upper block aligned.
+- * len: it will be trimmed to the end of current cluster if "start + len"
+- *      is bigger than it.
+- */
+-static int ocfs2_zeroout_partial_cluster(struct inode *inode,
+-					u64 start, u64 len)
+-{
+-	int ret;
+-	u64 start_block, end_block, nr_blocks;
+-	u64 p_block, offset;
+-	u32 cluster, p_cluster, nr_clusters;
+-	struct super_block *sb = inode->i_sb;
+-	u64 end = ocfs2_align_bytes_to_clusters(sb, start);
+-
+-	if (start + len < end)
+-		end = start + len;
+-
+-	start_block = ocfs2_blocks_for_bytes(sb, start);
+-	end_block = ocfs2_blocks_for_bytes(sb, end);
+-	nr_blocks = end_block - start_block;
+-	if (!nr_blocks)
+-		return 0;
+-
+-	cluster = ocfs2_bytes_to_clusters(sb, start);
+-	ret = ocfs2_get_clusters(inode, cluster, &p_cluster,
+-				&nr_clusters, NULL);
+-	if (ret)
+-		return ret;
+-	if (!p_cluster)
+-		return 0;
+-
+-	offset = start_block - ocfs2_clusters_to_blocks(sb, cluster);
+-	p_block = ocfs2_clusters_to_blocks(sb, p_cluster) + offset;
+-	return sb_issue_zeroout(sb, p_block, nr_blocks, GFP_NOFS);
+-}
+-
+ /*
+  * Parts of this function taken from xfs_change_file_space()
+  */
+@@ -1935,7 +1956,6 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 		goto out_inode_unlock;
+ 	}
+ 
+-	orig_isize = i_size_read(inode);
+ 	switch (sr->l_whence) {
+ 	case 0: /*SEEK_SET*/
+ 		break;
+@@ -1943,7 +1963,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 		sr->l_start += f_pos;
+ 		break;
+ 	case 2: /*SEEK_END*/
+-		sr->l_start += orig_isize;
++		sr->l_start += i_size_read(inode);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+@@ -1998,6 +2018,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 		ret = -EINVAL;
+ 	}
+ 
++	orig_isize = i_size_read(inode);
+ 	/* zeroout eof blocks in the cluster. */
+ 	if (!ret && change_size && orig_isize < size) {
+ 		ret = ocfs2_zeroout_partial_cluster(inode, orig_isize,
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 412b3b618994c..6f5f97c0fdee9 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -429,20 +429,20 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
+ #endif
+ 
+ 	/*
+-	 * Only wake up if the pipe started out empty, since
+-	 * otherwise there should be no readers waiting.
++	 * Epoll nonsensically wants a wakeup whether the pipe
++	 * was already empty or not.
+ 	 *
+ 	 * If it wasn't empty we try to merge new data into
+ 	 * the last buffer.
+ 	 *
+ 	 * That naturally merges small writes, but it also
+-	 * page-aligs the rest of the writes for large writes
++	 * page-aligns the rest of the writes for large writes
+ 	 * spanning multiple pages.
+ 	 */
+ 	head = pipe->head;
+-	was_empty = pipe_empty(head, pipe->tail);
++	was_empty = true;
+ 	chars = total_len & (PAGE_SIZE-1);
+-	if (chars && !was_empty) {
++	if (chars && !pipe_empty(head, pipe->tail)) {
+ 		unsigned int mask = pipe->ring_size - 1;
+ 		struct pipe_buffer *buf = &pipe->bufs[(head - 1) & mask];
+ 		int offset = buf->offset + buf->len;
+diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
+index 2e6f568377f10..a8137bb6dd3c2 100644
+--- a/include/linux/bpf_types.h
++++ b/include/linux/bpf_types.h
+@@ -133,4 +133,5 @@ BPF_LINK_TYPE(BPF_LINK_TYPE_CGROUP, cgroup)
+ BPF_LINK_TYPE(BPF_LINK_TYPE_ITER, iter)
+ #ifdef CONFIG_NET
+ BPF_LINK_TYPE(BPF_LINK_TYPE_NETNS, netns)
++BPF_LINK_TYPE(BPF_LINK_TYPE_XDP, xdp)
+ #endif
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 2739a6431b9ee..6e330ff2f28df 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -204,6 +204,13 @@ struct bpf_idx_pair {
+ 	u32 idx;
+ };
+ 
++struct bpf_id_pair {
++	u32 old;
++	u32 cur;
++};
++
++/* Maximum number of register states that can exist at once */
++#define BPF_ID_MAP_SIZE (MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE)
+ #define MAX_CALL_FRAMES 8
+ struct bpf_verifier_state {
+ 	/* call stack tracking */
+@@ -319,8 +326,8 @@ struct bpf_insn_aux_data {
+ 	};
+ 	u64 map_key_state; /* constant (32 bit) key tracking for maps */
+ 	int ctx_field_size; /* the ctx field size for load insn, maybe 0 */
+-	int sanitize_stack_off; /* stack slot to be cleared */
+ 	u32 seen; /* this insn was processed by the verifier at env->pass_cnt */
++	bool sanitize_stack_spill; /* subject to Spectre v4 sanitation */
+ 	bool zext_dst; /* this insn zero extends dst reg */
+ 	u8 alu_state; /* used in combination with alu_limit */
+ 
+@@ -390,6 +397,7 @@ struct bpf_verifier_env {
+ 	struct bpf_map *used_maps[MAX_USED_MAPS]; /* array of map's used by eBPF program */
+ 	u32 used_map_cnt;		/* number of used maps */
+ 	u32 id_gen;			/* used to generate unique reg IDs */
++	bool explore_alu_limits;
+ 	bool allow_ptr_leaks;
+ 	bool allow_uninit_stack;
+ 	bool allow_ptr_to_map_access;
+@@ -401,6 +409,7 @@ struct bpf_verifier_env {
+ 	const struct bpf_line_info *prev_linfo;
+ 	struct bpf_verifier_log log;
+ 	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
++	struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
+ 	struct {
+ 		int *insn_state;
+ 		int *insn_stack;
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index e2ffa02f9067a..822b701c803d1 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -72,6 +72,11 @@ struct ctl_table_header;
+ /* unused opcode to mark call to interpreter with arguments */
+ #define BPF_CALL_ARGS	0xe0
+ 
++/* unused opcode to mark speculation barrier for mitigating
++ * Speculative Store Bypass
++ */
++#define BPF_NOSPEC	0xc0
++
+ /* As per nm, we expose JITed images as text (code) section for
+  * kallsyms. That way, tools like perf can find it to match
+  * addresses.
+@@ -372,6 +377,16 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
+ 		.off   = 0,					\
+ 		.imm   = 0 })
+ 
++/* Speculation barrier */
++
++#define BPF_ST_NOSPEC()						\
++	((struct bpf_insn) {					\
++		.code  = BPF_ST | BPF_NOSPEC,			\
++		.dst_reg = 0,					\
++		.src_reg = 0,					\
++		.off   = 0,					\
++		.imm   = 0 })
++
+ /* Internal classic blocks for direct assignment */
+ 
+ #define __BPF_STMT(CODE, K)					\
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 82126d5297986..822c048934e3f 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -395,7 +395,6 @@ static inline struct sk_psock *sk_psock_get(struct sock *sk)
+ }
+ 
+ void sk_psock_stop(struct sock *sk, struct sk_psock *psock);
+-void sk_psock_destroy(struct rcu_head *rcu);
+ void sk_psock_drop(struct sock *sk, struct sk_psock *psock);
+ 
+ static inline void sk_psock_put(struct sock *sk, struct sk_psock *psock)
+diff --git a/include/net/llc_pdu.h b/include/net/llc_pdu.h
+index c0f0a13ed8183..49aa79c7b278a 100644
+--- a/include/net/llc_pdu.h
++++ b/include/net/llc_pdu.h
+@@ -15,9 +15,11 @@
+ #include <linux/if_ether.h>
+ 
+ /* Lengths of frame formats */
+-#define LLC_PDU_LEN_I	4       /* header and 2 control bytes */
+-#define LLC_PDU_LEN_S	4
+-#define LLC_PDU_LEN_U	3       /* header and 1 control byte */
++#define LLC_PDU_LEN_I		4       /* header and 2 control bytes */
++#define LLC_PDU_LEN_S		4
++#define LLC_PDU_LEN_U		3       /* header and 1 control byte */
++/* header and 1 control byte and XID info */
++#define LLC_PDU_LEN_U_XID	(LLC_PDU_LEN_U + sizeof(struct llc_xid_info))
+ /* Known SAP addresses */
+ #define LLC_GLOBAL_SAP	0xFF
+ #define LLC_NULL_SAP	0x00	/* not network-layer visible */
+@@ -50,9 +52,10 @@
+ #define LLC_PDU_TYPE_U_MASK    0x03	/* 8-bit control field */
+ #define LLC_PDU_TYPE_MASK      0x03
+ 
+-#define LLC_PDU_TYPE_I	0	/* first bit */
+-#define LLC_PDU_TYPE_S	1	/* first two bits */
+-#define LLC_PDU_TYPE_U	3	/* first two bits */
++#define LLC_PDU_TYPE_I		0	/* first bit */
++#define LLC_PDU_TYPE_S		1	/* first two bits */
++#define LLC_PDU_TYPE_U		3	/* first two bits */
++#define LLC_PDU_TYPE_U_XID	4	/* private type for detecting XID commands */
+ 
+ #define LLC_PDU_TYPE_IS_I(pdu) \
+ 	((!(pdu->ctrl_1 & LLC_PDU_TYPE_I_MASK)) ? 1 : 0)
+@@ -230,9 +233,18 @@ static inline struct llc_pdu_un *llc_pdu_un_hdr(struct sk_buff *skb)
+ static inline void llc_pdu_header_init(struct sk_buff *skb, u8 type,
+ 				       u8 ssap, u8 dsap, u8 cr)
+ {
+-	const int hlen = type == LLC_PDU_TYPE_U ? 3 : 4;
++	int hlen = 4; /* default value for I and S types */
+ 	struct llc_pdu_un *pdu;
+ 
++	switch (type) {
++	case LLC_PDU_TYPE_U:
++		hlen = 3;
++		break;
++	case LLC_PDU_TYPE_U_XID:
++		hlen = 6;
++		break;
++	}
++
+ 	skb_push(skb, hlen);
+ 	skb_reset_network_header(skb);
+ 	pdu = llc_pdu_un_hdr(skb);
+@@ -374,7 +386,10 @@ static inline void llc_pdu_init_as_xid_cmd(struct sk_buff *skb,
+ 	xid_info->fmt_id = LLC_XID_FMT_ID;	/* 0x81 */
+ 	xid_info->type	 = svcs_supported;
+ 	xid_info->rw	 = rx_window << 1;	/* size of receive window */
+-	skb_put(skb, sizeof(struct llc_xid_info));
++
++	/* no need to push/put since llc_pdu_header_init() has already
++	 * pushed 3 + 3 bytes
++	 */
+ }
+ 
+ /**
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 75c2d184018a5..d12efb2550d35 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -32,6 +32,8 @@
+ #include <linux/perf_event.h>
+ #include <linux/extable.h>
+ #include <linux/log2.h>
++
++#include <asm/barrier.h>
+ #include <asm/unaligned.h>
+ 
+ /* Registers */
+@@ -1380,6 +1382,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
+ 		/* Non-UAPI available opcodes. */
+ 		[BPF_JMP | BPF_CALL_ARGS] = &&JMP_CALL_ARGS,
+ 		[BPF_JMP | BPF_TAIL_CALL] = &&JMP_TAIL_CALL,
++		[BPF_ST  | BPF_NOSPEC] = &&ST_NOSPEC,
+ 		[BPF_LDX | BPF_PROBE_MEM | BPF_B] = &&LDX_PROBE_MEM_B,
+ 		[BPF_LDX | BPF_PROBE_MEM | BPF_H] = &&LDX_PROBE_MEM_H,
+ 		[BPF_LDX | BPF_PROBE_MEM | BPF_W] = &&LDX_PROBE_MEM_W,
+@@ -1624,7 +1627,21 @@ out:
+ 	COND_JMP(s, JSGE, >=)
+ 	COND_JMP(s, JSLE, <=)
+ #undef COND_JMP
+-	/* STX and ST and LDX*/
++	/* ST, STX and LDX*/
++	ST_NOSPEC:
++		/* Speculation barrier for mitigating Speculative Store Bypass.
++		 * In case of arm64, we rely on the firmware mitigation as
++		 * controlled via the ssbd kernel parameter. Whenever the
++		 * mitigation is enabled, it works for all of the kernel code
++		 * with no need to provide any additional instructions here.
++		 * In case of x86, we use 'lfence' insn for mitigation. We
++		 * reuse preexisting logic from Spectre v1 mitigation that
++		 * happens to produce the required code on x86 for v4 as well.
++		 */
++#ifdef CONFIG_X86
++		barrier_nospec();
++#endif
++		CONT;
+ #define LDST(SIZEOP, SIZE)						\
+ 	STX_MEM_##SIZEOP:						\
+ 		*(SIZE *)(unsigned long) (DST + insn->off) = SRC;	\
+diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
+index b44d8c447afd1..ff1dd7d45b58a 100644
+--- a/kernel/bpf/disasm.c
++++ b/kernel/bpf/disasm.c
+@@ -162,15 +162,17 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
+ 		else
+ 			verbose(cbs->private_data, "BUG_%02x\n", insn->code);
+ 	} else if (class == BPF_ST) {
+-		if (BPF_MODE(insn->code) != BPF_MEM) {
++		if (BPF_MODE(insn->code) == BPF_MEM) {
++			verbose(cbs->private_data, "(%02x) *(%s *)(r%d %+d) = %d\n",
++				insn->code,
++				bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
++				insn->dst_reg,
++				insn->off, insn->imm);
++		} else if (BPF_MODE(insn->code) == 0xc0 /* BPF_NOSPEC, no UAPI */) {
++			verbose(cbs->private_data, "(%02x) nospec\n", insn->code);
++		} else {
+ 			verbose(cbs->private_data, "BUG_st_%02x\n", insn->code);
+-			return;
+ 		}
+-		verbose(cbs->private_data, "(%02x) *(%s *)(r%d %+d) = %d\n",
+-			insn->code,
+-			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+-			insn->dst_reg,
+-			insn->off, insn->imm);
+ 	} else if (class == BPF_LDX) {
+ 		if (BPF_MODE(insn->code) != BPF_MEM) {
+ 			verbose(cbs->private_data, "BUG_ldx_%02x\n", insn->code);
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 36bc34fce623c..ce1e9193365f8 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2297,6 +2297,19 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 	cur = env->cur_state->frame[env->cur_state->curframe];
+ 	if (value_regno >= 0)
+ 		reg = &cur->regs[value_regno];
++	if (!env->bypass_spec_v4) {
++		bool sanitize = reg && is_spillable_regtype(reg->type);
++
++		for (i = 0; i < size; i++) {
++			if (state->stack[spi].slot_type[i] == STACK_INVALID) {
++				sanitize = true;
++				break;
++			}
++		}
++
++		if (sanitize)
++			env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
++	}
+ 
+ 	if (reg && size == BPF_REG_SIZE && register_is_bounded(reg) &&
+ 	    !register_is_null(reg) && env->bpf_capable) {
+@@ -2319,47 +2332,10 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 			verbose(env, "invalid size of register spill\n");
+ 			return -EACCES;
+ 		}
+-
+ 		if (state != cur && reg->type == PTR_TO_STACK) {
+ 			verbose(env, "cannot spill pointers to stack into stack frame of the caller\n");
+ 			return -EINVAL;
+ 		}
+-
+-		if (!env->bypass_spec_v4) {
+-			bool sanitize = false;
+-
+-			if (state->stack[spi].slot_type[0] == STACK_SPILL &&
+-			    register_is_const(&state->stack[spi].spilled_ptr))
+-				sanitize = true;
+-			for (i = 0; i < BPF_REG_SIZE; i++)
+-				if (state->stack[spi].slot_type[i] == STACK_MISC) {
+-					sanitize = true;
+-					break;
+-				}
+-			if (sanitize) {
+-				int *poff = &env->insn_aux_data[insn_idx].sanitize_stack_off;
+-				int soff = (-spi - 1) * BPF_REG_SIZE;
+-
+-				/* detected reuse of integer stack slot with a pointer
+-				 * which means either llvm is reusing stack slot or
+-				 * an attacker is trying to exploit CVE-2018-3639
+-				 * (speculative store bypass)
+-				 * Have to sanitize that slot with preemptive
+-				 * store of zero.
+-				 */
+-				if (*poff && *poff != soff) {
+-					/* disallow programs where single insn stores
+-					 * into two different stack slots, since verifier
+-					 * cannot sanitize them
+-					 */
+-					verbose(env,
+-						"insn %d cannot access two stack slots fp%d and fp%d",
+-						insn_idx, *poff, soff);
+-					return -EINVAL;
+-				}
+-				*poff = soff;
+-			}
+-		}
+ 		save_register_state(state, spi, reg);
+ 	} else {
+ 		u8 type = STACK_MISC;
+@@ -5816,6 +5792,12 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 		alu_state |= off_is_imm ? BPF_ALU_IMMEDIATE : 0;
+ 		alu_state |= ptr_is_dst_reg ?
+ 			     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
++
++		/* Limit pruning on unknown scalars to enable deep search for
++		 * potential masking differences from other program paths.
++		 */
++		if (!off_is_imm)
++			env->explore_alu_limits = true;
+ 	}
+ 
+ 	err = update_alu_sanitation_state(aux, alu_state, alu_limit);
+@@ -8986,13 +8968,6 @@ static bool range_within(struct bpf_reg_state *old,
+ 	       old->s32_max_value >= cur->s32_max_value;
+ }
+ 
+-/* Maximum number of register states that can exist at once */
+-#define ID_MAP_SIZE	(MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE)
+-struct idpair {
+-	u32 old;
+-	u32 cur;
+-};
+-
+ /* If in the old state two registers had the same id, then they need to have
+  * the same id in the new state as well.  But that id could be different from
+  * the old state, so we need to track the mapping from old to new ids.
+@@ -9003,11 +8978,11 @@ struct idpair {
+  * So we look through our idmap to see if this old id has been seen before.  If
+  * so, we require the new id to match; otherwise, we add the id pair to the map.
+  */
+-static bool check_ids(u32 old_id, u32 cur_id, struct idpair *idmap)
++static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
+ {
+ 	unsigned int i;
+ 
+-	for (i = 0; i < ID_MAP_SIZE; i++) {
++	for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
+ 		if (!idmap[i].old) {
+ 			/* Reached an empty slot; haven't seen this id before */
+ 			idmap[i].old = old_id;
+@@ -9119,8 +9094,8 @@ next:
+ }
+ 
+ /* Returns true if (rold safe implies rcur safe) */
+-static bool regsafe(struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
+-		    struct idpair *idmap)
++static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
++		    struct bpf_reg_state *rcur, struct bpf_id_pair *idmap)
+ {
+ 	bool equal;
+ 
+@@ -9146,6 +9121,8 @@ static bool regsafe(struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
+ 		return false;
+ 	switch (rold->type) {
+ 	case SCALAR_VALUE:
++		if (env->explore_alu_limits)
++			return false;
+ 		if (rcur->type == SCALAR_VALUE) {
+ 			if (!rold->precise && !rcur->precise)
+ 				return true;
+@@ -9235,9 +9212,8 @@ static bool regsafe(struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
+ 	return false;
+ }
+ 
+-static bool stacksafe(struct bpf_func_state *old,
+-		      struct bpf_func_state *cur,
+-		      struct idpair *idmap)
++static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
++		      struct bpf_func_state *cur, struct bpf_id_pair *idmap)
+ {
+ 	int i, spi;
+ 
+@@ -9282,9 +9258,8 @@ static bool stacksafe(struct bpf_func_state *old,
+ 			continue;
+ 		if (old->stack[spi].slot_type[0] != STACK_SPILL)
+ 			continue;
+-		if (!regsafe(&old->stack[spi].spilled_ptr,
+-			     &cur->stack[spi].spilled_ptr,
+-			     idmap))
++		if (!regsafe(env, &old->stack[spi].spilled_ptr,
++			     &cur->stack[spi].spilled_ptr, idmap))
+ 			/* when explored and current stack slot are both storing
+ 			 * spilled registers, check that stored pointers types
+ 			 * are the same as well.
+@@ -9334,32 +9309,24 @@ static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur)
+  * whereas register type in current state is meaningful, it means that
+  * the current state will reach 'bpf_exit' instruction safely
+  */
+-static bool func_states_equal(struct bpf_func_state *old,
++static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_state *old,
+ 			      struct bpf_func_state *cur)
+ {
+-	struct idpair *idmap;
+-	bool ret = false;
+ 	int i;
+ 
+-	idmap = kcalloc(ID_MAP_SIZE, sizeof(struct idpair), GFP_KERNEL);
+-	/* If we failed to allocate the idmap, just say it's not safe */
+-	if (!idmap)
+-		return false;
+-
+-	for (i = 0; i < MAX_BPF_REG; i++) {
+-		if (!regsafe(&old->regs[i], &cur->regs[i], idmap))
+-			goto out_free;
+-	}
++	memset(env->idmap_scratch, 0, sizeof(env->idmap_scratch));
++	for (i = 0; i < MAX_BPF_REG; i++)
++		if (!regsafe(env, &old->regs[i], &cur->regs[i],
++			     env->idmap_scratch))
++			return false;
+ 
+-	if (!stacksafe(old, cur, idmap))
+-		goto out_free;
++	if (!stacksafe(env, old, cur, env->idmap_scratch))
++		return false;
+ 
+ 	if (!refsafe(old, cur))
+-		goto out_free;
+-	ret = true;
+-out_free:
+-	kfree(idmap);
+-	return ret;
++		return false;
++
++	return true;
+ }
+ 
+ static bool states_equal(struct bpf_verifier_env *env,
+@@ -9386,7 +9353,7 @@ static bool states_equal(struct bpf_verifier_env *env,
+ 	for (i = 0; i <= old->curframe; i++) {
+ 		if (old->frame[i]->callsite != cur->frame[i]->callsite)
+ 			return false;
+-		if (!func_states_equal(old->frame[i], cur->frame[i]))
++		if (!func_states_equal(env, old->frame[i], cur->frame[i]))
+ 			return false;
+ 	}
+ 	return true;
+@@ -10947,35 +10914,33 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ 
+ 	for (i = 0; i < insn_cnt; i++, insn++) {
+ 		bpf_convert_ctx_access_t convert_ctx_access;
++		bool ctx_access;
+ 
+ 		if (insn->code == (BPF_LDX | BPF_MEM | BPF_B) ||
+ 		    insn->code == (BPF_LDX | BPF_MEM | BPF_H) ||
+ 		    insn->code == (BPF_LDX | BPF_MEM | BPF_W) ||
+-		    insn->code == (BPF_LDX | BPF_MEM | BPF_DW))
++		    insn->code == (BPF_LDX | BPF_MEM | BPF_DW)) {
+ 			type = BPF_READ;
+-		else if (insn->code == (BPF_STX | BPF_MEM | BPF_B) ||
+-			 insn->code == (BPF_STX | BPF_MEM | BPF_H) ||
+-			 insn->code == (BPF_STX | BPF_MEM | BPF_W) ||
+-			 insn->code == (BPF_STX | BPF_MEM | BPF_DW))
++			ctx_access = true;
++		} else if (insn->code == (BPF_STX | BPF_MEM | BPF_B) ||
++			   insn->code == (BPF_STX | BPF_MEM | BPF_H) ||
++			   insn->code == (BPF_STX | BPF_MEM | BPF_W) ||
++			   insn->code == (BPF_STX | BPF_MEM | BPF_DW) ||
++			   insn->code == (BPF_ST | BPF_MEM | BPF_B) ||
++			   insn->code == (BPF_ST | BPF_MEM | BPF_H) ||
++			   insn->code == (BPF_ST | BPF_MEM | BPF_W) ||
++			   insn->code == (BPF_ST | BPF_MEM | BPF_DW)) {
+ 			type = BPF_WRITE;
+-		else
++			ctx_access = BPF_CLASS(insn->code) == BPF_STX;
++		} else {
+ 			continue;
++		}
+ 
+ 		if (type == BPF_WRITE &&
+-		    env->insn_aux_data[i + delta].sanitize_stack_off) {
++		    env->insn_aux_data[i + delta].sanitize_stack_spill) {
+ 			struct bpf_insn patch[] = {
+-				/* Sanitize suspicious stack slot with zero.
+-				 * There are no memory dependencies for this store,
+-				 * since it's only using frame pointer and immediate
+-				 * constant of zero
+-				 */
+-				BPF_ST_MEM(BPF_DW, BPF_REG_FP,
+-					   env->insn_aux_data[i + delta].sanitize_stack_off,
+-					   0),
+-				/* the original STX instruction will immediately
+-				 * overwrite the same stack slot with appropriate value
+-				 */
+ 				*insn,
++				BPF_ST_NOSPEC(),
+ 			};
+ 
+ 			cnt = ARRAY_SIZE(patch);
+@@ -10989,6 +10954,9 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ 			continue;
+ 		}
+ 
++		if (!ctx_access)
++			continue;
++
+ 		switch (env->insn_aux_data[i + delta].ptr_type) {
+ 		case PTR_TO_CTX:
+ 			if (!ops->convert_ctx_access)
+@@ -11730,37 +11698,6 @@ static void free_states(struct bpf_verifier_env *env)
+ 	}
+ }
+ 
+-/* The verifier is using insn_aux_data[] to store temporary data during
+- * verification and to store information for passes that run after the
+- * verification like dead code sanitization. do_check_common() for subprogram N
+- * may analyze many other subprograms. sanitize_insn_aux_data() clears all
+- * temporary data after do_check_common() finds that subprogram N cannot be
+- * verified independently. pass_cnt counts the number of times
+- * do_check_common() was run and insn->aux->seen tells the pass number
+- * insn_aux_data was touched. These variables are compared to clear temporary
+- * data from failed pass. For testing and experiments do_check_common() can be
+- * run multiple times even when prior attempt to verify is unsuccessful.
+- *
+- * Note that special handling is needed on !env->bypass_spec_v1 if this is
+- * ever called outside of error path with subsequent program rejection.
+- */
+-static void sanitize_insn_aux_data(struct bpf_verifier_env *env)
+-{
+-	struct bpf_insn *insn = env->prog->insnsi;
+-	struct bpf_insn_aux_data *aux;
+-	int i, class;
+-
+-	for (i = 0; i < env->prog->len; i++) {
+-		class = BPF_CLASS(insn[i].code);
+-		if (class != BPF_LDX && class != BPF_STX)
+-			continue;
+-		aux = &env->insn_aux_data[i];
+-		if (aux->seen != env->pass_cnt)
+-			continue;
+-		memset(aux, 0, offsetof(typeof(*aux), orig_idx));
+-	}
+-}
+-
+ static int do_check_common(struct bpf_verifier_env *env, int subprog)
+ {
+ 	bool pop_log = !(env->log.level & BPF_LOG_LEVEL2);
+@@ -11830,9 +11767,6 @@ out:
+ 	if (!ret && pop_log)
+ 		bpf_vlog_reset(&env->log, 0);
+ 	free_states(env);
+-	if (ret)
+-		/* clean aux data in case subprog was rejected */
+-		sanitize_insn_aux_data(env);
+ 	return ret;
+ }
+ 
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index c3946c3558826..bdc95bd7a851f 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -1075,11 +1075,16 @@ static bool j1939_session_deactivate_locked(struct j1939_session *session)
+ 
+ static bool j1939_session_deactivate(struct j1939_session *session)
+ {
++	struct j1939_priv *priv = session->priv;
+ 	bool active;
+ 
+-	j1939_session_list_lock(session->priv);
++	j1939_session_list_lock(priv);
++	/* This function should be called with a session ref-count of at
++	 * least 2.
++	 */
++	WARN_ON_ONCE(kref_read(&session->kref) < 2);
+ 	active = j1939_session_deactivate_locked(session);
+-	j1939_session_list_unlock(session->priv);
++	j1939_session_list_unlock(priv);
+ 
+ 	return active;
+ }
+@@ -1869,7 +1874,7 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ 		if (!session->transmission)
+ 			j1939_tp_schedule_txtimer(session, 0);
+ 	} else {
+-		j1939_tp_set_rxtimeout(session, 250);
++		j1939_tp_set_rxtimeout(session, 750);
+ 	}
+ 	session->last_cmd = 0xff;
+ 	consume_skb(se_skb);
+diff --git a/net/can/raw.c b/net/can/raw.c
+index 4a7c063deb6ce..069657f681afa 100644
+--- a/net/can/raw.c
++++ b/net/can/raw.c
+@@ -546,10 +546,18 @@ static int raw_setsockopt(struct socket *sock, int level, int optname,
+ 				return -EFAULT;
+ 		}
+ 
++		rtnl_lock();
+ 		lock_sock(sk);
+ 
+-		if (ro->bound && ro->ifindex)
++		if (ro->bound && ro->ifindex) {
+ 			dev = dev_get_by_index(sock_net(sk), ro->ifindex);
++			if (!dev) {
++				if (count > 1)
++					kfree(filter);
++				err = -ENODEV;
++				goto out_fil;
++			}
++		}
+ 
+ 		if (ro->bound) {
+ 			/* (try to) register the new filters */
+@@ -588,6 +596,7 @@ static int raw_setsockopt(struct socket *sock, int level, int optname,
+ 			dev_put(dev);
+ 
+ 		release_sock(sk);
++		rtnl_unlock();
+ 
+ 		break;
+ 
+@@ -600,10 +609,16 @@ static int raw_setsockopt(struct socket *sock, int level, int optname,
+ 
+ 		err_mask &= CAN_ERR_MASK;
+ 
++		rtnl_lock();
+ 		lock_sock(sk);
+ 
+-		if (ro->bound && ro->ifindex)
++		if (ro->bound && ro->ifindex) {
+ 			dev = dev_get_by_index(sock_net(sk), ro->ifindex);
++			if (!dev) {
++				err = -ENODEV;
++				goto out_err;
++			}
++		}
+ 
+ 		/* remove current error mask */
+ 		if (ro->bound) {
+@@ -627,6 +642,7 @@ static int raw_setsockopt(struct socket *sock, int level, int optname,
+ 			dev_put(dev);
+ 
+ 		release_sock(sk);
++		rtnl_unlock();
+ 
+ 		break;
+ 
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index c4c224a5b9de7..5dd5569f89bf5 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -676,14 +676,13 @@ static void sk_psock_destroy_deferred(struct work_struct *gc)
+ 	kfree(psock);
+ }
+ 
+-void sk_psock_destroy(struct rcu_head *rcu)
++static void sk_psock_destroy(struct rcu_head *rcu)
+ {
+ 	struct sk_psock *psock = container_of(rcu, struct sk_psock, rcu);
+ 
+ 	INIT_WORK(&psock->gc, sk_psock_destroy_deferred);
+ 	schedule_work(&psock->gc);
+ }
+-EXPORT_SYMBOL_GPL(sk_psock_destroy);
+ 
+ void sk_psock_drop(struct sock *sk, struct sk_psock *psock)
+ {
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 0dca00745ac3c..be75b409445c2 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -390,7 +390,7 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
+ 		tunnel->i_seqno = ntohl(tpi->seq) + 1;
+ 	}
+ 
+-	skb_reset_network_header(skb);
++	skb_set_network_header(skb, (tunnel->dev->type == ARPHRD_ETHER) ? ETH_HLEN : 0);
+ 
+ 	err = IP_ECN_decapsulate(iph, skb);
+ 	if (unlikely(err)) {
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 7180979114e49..ac5cadd02cfa8 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -98,8 +98,16 @@ static inline u8 llc_ui_header_len(struct sock *sk, struct sockaddr_llc *addr)
+ {
+ 	u8 rc = LLC_PDU_LEN_U;
+ 
+-	if (addr->sllc_test || addr->sllc_xid)
++	if (addr->sllc_test)
+ 		rc = LLC_PDU_LEN_U;
++	else if (addr->sllc_xid)
++		/* We need to expand header to sizeof(struct llc_xid_info)
++		 * since llc_pdu_init_as_xid_cmd() sets 4,5,6 bytes of LLC header
++		 * as XID PDU. In llc_ui_sendmsg() we reserved header size and then
++		 * filled all other space with user data. If we won't reserve this
++		 * bytes, llc_pdu_init_as_xid_cmd() will overwrite user data
++		 */
++		rc = LLC_PDU_LEN_U_XID;
+ 	else if (sk->sk_type == SOCK_STREAM)
+ 		rc = LLC_PDU_LEN_I;
+ 	return rc;
+diff --git a/net/llc/llc_s_ac.c b/net/llc/llc_s_ac.c
+index 7ae4cc684d3ab..9fa3342c7a829 100644
+--- a/net/llc/llc_s_ac.c
++++ b/net/llc/llc_s_ac.c
+@@ -79,7 +79,7 @@ int llc_sap_action_send_xid_c(struct llc_sap *sap, struct sk_buff *skb)
+ 	struct llc_sap_state_ev *ev = llc_sap_ev(skb);
+ 	int rc;
+ 
+-	llc_pdu_header_init(skb, LLC_PDU_TYPE_U, ev->saddr.lsap,
++	llc_pdu_header_init(skb, LLC_PDU_TYPE_U_XID, ev->saddr.lsap,
+ 			    ev->daddr.lsap, LLC_PDU_CMD);
+ 	llc_pdu_init_as_xid_cmd(skb, LLC_XID_NULL_CLASS_2, 0);
+ 	rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac);
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 6a96deded7632..e429dbb10df71 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -152,6 +152,8 @@ static int ieee80211_change_iface(struct wiphy *wiphy,
+ 				  struct vif_params *params)
+ {
+ 	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
++	struct ieee80211_local *local = sdata->local;
++	struct sta_info *sta;
+ 	int ret;
+ 
+ 	ret = ieee80211_if_change_type(sdata, type);
+@@ -162,7 +164,24 @@ static int ieee80211_change_iface(struct wiphy *wiphy,
+ 		RCU_INIT_POINTER(sdata->u.vlan.sta, NULL);
+ 		ieee80211_check_fast_rx_iface(sdata);
+ 	} else if (type == NL80211_IFTYPE_STATION && params->use_4addr >= 0) {
++		struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
++
++		if (params->use_4addr == ifmgd->use_4addr)
++			return 0;
++
+ 		sdata->u.mgd.use_4addr = params->use_4addr;
++		if (!ifmgd->associated)
++			return 0;
++
++		mutex_lock(&local->sta_mtx);
++		sta = sta_info_get(sdata, ifmgd->bssid);
++		if (sta)
++			drv_sta_set_4addr(local, sdata, &sta->sta,
++					  params->use_4addr);
++		mutex_unlock(&local->sta_mtx);
++
++		if (params->use_4addr)
++			ieee80211_send_4addr_nullfunc(local, sdata);
+ 	}
+ 
+ 	if (sdata->vif.type == NL80211_IFTYPE_MONITOR) {
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index a83f0c2fcdf77..7f2be08b72a56 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -2051,6 +2051,8 @@ void ieee80211_dynamic_ps_timer(struct timer_list *t);
+ void ieee80211_send_nullfunc(struct ieee80211_local *local,
+ 			     struct ieee80211_sub_if_data *sdata,
+ 			     bool powersave);
++void ieee80211_send_4addr_nullfunc(struct ieee80211_local *local,
++				   struct ieee80211_sub_if_data *sdata);
+ void ieee80211_sta_tx_notify(struct ieee80211_sub_if_data *sdata,
+ 			     struct ieee80211_hdr *hdr, bool ack, u16 tx_time);
+ 
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 142bb28199c48..32bc30ec50ec9 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1115,8 +1115,8 @@ void ieee80211_send_nullfunc(struct ieee80211_local *local,
+ 	ieee80211_tx_skb(sdata, skb);
+ }
+ 
+-static void ieee80211_send_4addr_nullfunc(struct ieee80211_local *local,
+-					  struct ieee80211_sub_if_data *sdata)
++void ieee80211_send_4addr_nullfunc(struct ieee80211_local *local,
++				   struct ieee80211_sub_if_data *sdata)
+ {
+ 	struct sk_buff *skb;
+ 	struct ieee80211_hdr *nullfunc;
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index ff0168736f6ea..f9f2af26ccb37 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -661,8 +661,13 @@ bool nf_ct_delete(struct nf_conn *ct, u32 portid, int report)
+ 		return false;
+ 
+ 	tstamp = nf_conn_tstamp_find(ct);
+-	if (tstamp && tstamp->stop == 0)
++	if (tstamp) {
++		s32 timeout = ct->timeout - nfct_time_stamp;
++
+ 		tstamp->stop = ktime_get_real_ns();
++		if (timeout < 0)
++			tstamp->stop -= jiffies_to_nsecs(-timeout);
++	}
+ 
+ 	if (nf_conntrack_event_report(IPCT_DESTROY, ct,
+ 				    portid, report) < 0) {
+diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c
+index 4bcf33b049c47..ea53fd999f465 100644
+--- a/net/netfilter/nft_nat.c
++++ b/net/netfilter/nft_nat.c
+@@ -201,7 +201,9 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 		alen = sizeof_field(struct nf_nat_range, min_addr.ip6);
+ 		break;
+ 	default:
+-		return -EAFNOSUPPORT;
++		if (tb[NFTA_NAT_REG_ADDR_MIN])
++			return -EAFNOSUPPORT;
++		break;
+ 	}
+ 	priv->family = family;
+ 
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 0d9baddb9cd49..6826558483f97 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -504,8 +504,10 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ 		if (!ipc)
+ 			goto err;
+ 
+-		if (sock_queue_rcv_skb(&ipc->sk, skb))
++		if (sock_queue_rcv_skb(&ipc->sk, skb)) {
++			qrtr_port_put(ipc);
+ 			goto err;
++		}
+ 
+ 		qrtr_port_put(ipc);
+ 	}
+@@ -830,6 +832,8 @@ static int qrtr_local_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+ 
+ 	ipc = qrtr_port_lookup(to->sq_port);
+ 	if (!ipc || &ipc->sk == skb->sk) { /* do not send to self */
++		if (ipc)
++			qrtr_port_put(ipc);
+ 		kfree_skb(skb);
+ 		return -ENODEV;
+ 	}
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index f72bff93745c4..ddb5b5c2550ef 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -1175,7 +1175,7 @@ static struct sctp_association *__sctp_rcv_asconf_lookup(
+ 	if (unlikely(!af))
+ 		return NULL;
+ 
+-	if (af->from_addr_param(&paddr, param, peer_port, 0))
++	if (!af->from_addr_param(&paddr, param, peer_port, 0))
+ 		return NULL;
+ 
+ 	return __sctp_lookup_association(net, laddr, &paddr, transportp);
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 2301b66280def..f8e73c4a00933 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -891,16 +891,10 @@ static int tipc_aead_decrypt(struct net *net, struct tipc_aead *aead,
+ 	if (unlikely(!aead))
+ 		return -ENOKEY;
+ 
+-	/* Cow skb data if needed */
+-	if (likely(!skb_cloned(skb) &&
+-		   (!skb_is_nonlinear(skb) || !skb_has_frag_list(skb)))) {
+-		nsg = 1 + skb_shinfo(skb)->nr_frags;
+-	} else {
+-		nsg = skb_cow_data(skb, 0, &unused);
+-		if (unlikely(nsg < 0)) {
+-			pr_err("RX: skb_cow_data() returned %d\n", nsg);
+-			return nsg;
+-		}
++	nsg = skb_cow_data(skb, 0, &unused);
++	if (unlikely(nsg < 0)) {
++		pr_err("RX: skb_cow_data() returned %d\n", nsg);
++		return nsg;
+ 	}
+ 
+ 	/* Allocate memory for the AEAD operation */
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 9f7cc9e1e4ef3..4f9bd95b4eeed 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -148,6 +148,7 @@ static void tipc_sk_remove(struct tipc_sock *tsk);
+ static int __tipc_sendstream(struct socket *sock, struct msghdr *m, size_t dsz);
+ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dsz);
+ static void tipc_sk_push_backlog(struct tipc_sock *tsk, bool nagle_ack);
++static int tipc_wait_for_connect(struct socket *sock, long *timeo_p);
+ 
+ static const struct proto_ops packet_ops;
+ static const struct proto_ops stream_ops;
+@@ -1508,8 +1509,13 @@ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dlen)
+ 		rc = 0;
+ 	}
+ 
+-	if (unlikely(syn && !rc))
++	if (unlikely(syn && !rc)) {
+ 		tipc_set_sk_state(sk, TIPC_CONNECTING);
++		if (timeout) {
++			timeout = msecs_to_jiffies(timeout);
++			tipc_wait_for_connect(sock, &timeout);
++		}
++	}
+ 
+ 	return rc ? rc : dlen;
+ }
+@@ -1557,7 +1563,7 @@ static int __tipc_sendstream(struct socket *sock, struct msghdr *m, size_t dlen)
+ 		return -EMSGSIZE;
+ 
+ 	/* Handle implicit connection setup */
+-	if (unlikely(dest)) {
++	if (unlikely(dest && sk->sk_state == TIPC_OPEN)) {
+ 		rc = __tipc_sendmsg(sock, m, dlen);
+ 		if (dlen && dlen == rc) {
+ 			tsk->peer_caps = tipc_node_get_capabilities(net, dnode);
+@@ -2644,7 +2650,7 @@ static int tipc_listen(struct socket *sock, int len)
+ static int tipc_wait_for_accept(struct socket *sock, long timeo)
+ {
+ 	struct sock *sk = sock->sk;
+-	DEFINE_WAIT(wait);
++	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ 	int err;
+ 
+ 	/* True wake-one mechanism for incoming connections: only
+@@ -2653,12 +2659,12 @@ static int tipc_wait_for_accept(struct socket *sock, long timeo)
+ 	 * anymore, the common case will execute the loop only once.
+ 	*/
+ 	for (;;) {
+-		prepare_to_wait_exclusive(sk_sleep(sk), &wait,
+-					  TASK_INTERRUPTIBLE);
+ 		if (timeo && skb_queue_empty(&sk->sk_receive_queue)) {
++			add_wait_queue(sk_sleep(sk), &wait);
+ 			release_sock(sk);
+-			timeo = schedule_timeout(timeo);
++			timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
+ 			lock_sock(sk);
++			remove_wait_queue(sk_sleep(sk), &wait);
+ 		}
+ 		err = 0;
+ 		if (!skb_queue_empty(&sk->sk_receive_queue))
+@@ -2670,7 +2676,6 @@ static int tipc_wait_for_accept(struct socket *sock, long timeo)
+ 		if (signal_pending(current))
+ 			break;
+ 	}
+-	finish_wait(sk_sleep(sk), &wait);
+ 	return err;
+ }
+ 
+@@ -2686,9 +2691,10 @@ static int tipc_accept(struct socket *sock, struct socket *new_sock, int flags,
+ 		       bool kern)
+ {
+ 	struct sock *new_sk, *sk = sock->sk;
+-	struct sk_buff *buf;
+ 	struct tipc_sock *new_tsock;
++	struct msghdr m = {NULL,};
+ 	struct tipc_msg *msg;
++	struct sk_buff *buf;
+ 	long timeo;
+ 	int res;
+ 
+@@ -2733,19 +2739,17 @@ static int tipc_accept(struct socket *sock, struct socket *new_sock, int flags,
+ 	}
+ 
+ 	/*
+-	 * Respond to 'SYN-' by discarding it & returning 'ACK'-.
+-	 * Respond to 'SYN+' by queuing it on new socket.
++	 * Respond to 'SYN-' by discarding it & returning 'ACK'.
++	 * Respond to 'SYN+' by queuing it on new socket & returning 'ACK'.
+ 	 */
+ 	if (!msg_data_sz(msg)) {
+-		struct msghdr m = {NULL,};
+-
+ 		tsk_advance_rx_queue(sk);
+-		__tipc_sendstream(new_sock, &m, 0);
+ 	} else {
+ 		__skb_dequeue(&sk->sk_receive_queue);
+ 		__skb_queue_head(&new_sk->sk_receive_queue, buf);
+ 		skb_set_owner_r(buf, new_sk);
+ 	}
++	__tipc_sendstream(new_sock, &m, 0);
+ 	release_sock(new_sk);
+ exit:
+ 	release_sock(sk);
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 87fc56bc4f1e7..fab1f0d504036 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1746,16 +1746,14 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 			 * be grouped with this beacon for updates ...
+ 			 */
+ 			if (!cfg80211_combine_bsses(rdev, new)) {
+-				kfree(new);
++				bss_ref_put(rdev, new);
+ 				goto drop;
+ 			}
+ 		}
+ 
+ 		if (rdev->bss_entries >= bss_entries_limit &&
+ 		    !cfg80211_bss_expire_oldest(rdev)) {
+-			if (!list_empty(&new->hidden_list))
+-				list_del(&new->hidden_list);
+-			kfree(new);
++			bss_ref_put(rdev, new);
+ 			goto drop;
+ 		}
+ 
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index 6688f6b253a72..f4d44f75ba152 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -192,8 +192,6 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
+ 			if (!(prot & PROT_EXEC))
+ 				dso__set_loaded(dso);
+ 		}
+-
+-		nsinfo__put(dso->nsinfo);
+ 		dso->nsinfo = nsi;
+ 		dso__put(dso);
+ 	}
+diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
+index b1be8df806119..d418ca5f90399 100644
+--- a/tools/testing/selftests/vm/userfaultfd.c
++++ b/tools/testing/selftests/vm/userfaultfd.c
+@@ -182,7 +182,7 @@ static void anon_allocate_area(void **alloc_area)
+ {
+ 	*alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
+ 			   MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+-	if (*alloc_area == MAP_FAILED)
++	if (*alloc_area == MAP_FAILED) {
+ 		fprintf(stderr, "mmap of anonymous memory failed");
+ 		*alloc_area = NULL;
+ 	}
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 1353439691cf7..2809a69418a65 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -3896,6 +3896,16 @@ struct compat_kvm_dirty_log {
+ 	};
+ };
+ 
++struct compat_kvm_clear_dirty_log {
++	__u32 slot;
++	__u32 num_pages;
++	__u64 first_page;
++	union {
++		compat_uptr_t dirty_bitmap; /* one bit per page */
++		__u64 padding2;
++	};
++};
++
+ static long kvm_vm_compat_ioctl(struct file *filp,
+ 			   unsigned int ioctl, unsigned long arg)
+ {
+@@ -3905,6 +3915,24 @@ static long kvm_vm_compat_ioctl(struct file *filp,
+ 	if (kvm->mm != current->mm)
+ 		return -EIO;
+ 	switch (ioctl) {
++#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
++	case KVM_CLEAR_DIRTY_LOG: {
++		struct compat_kvm_clear_dirty_log compat_log;
++		struct kvm_clear_dirty_log log;
++
++		if (copy_from_user(&compat_log, (void __user *)arg,
++				   sizeof(compat_log)))
++			return -EFAULT;
++		log.slot	 = compat_log.slot;
++		log.num_pages	 = compat_log.num_pages;
++		log.first_page	 = compat_log.first_page;
++		log.padding2	 = compat_log.padding2;
++		log.dirty_bitmap = compat_ptr(compat_log.dirty_bitmap);
++
++		r = kvm_vm_ioctl_clear_dirty_log(kvm, &log);
++		break;
++	}
++#endif
+ 	case KVM_GET_DIRTY_LOG: {
+ 		struct compat_kvm_dirty_log compat_log;
+ 		struct kvm_dirty_log log;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-08 13:36 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-08 13:36 UTC (permalink / raw
  To: gentoo-commits

commit:     d4630fe751cd5a8ff485433d08677e44f878f704
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug  8 13:36:39 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Aug  8 13:36:39 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d4630fe7

Linux patch 5.10.57

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1056_linux-5.10.57.patch | 1905 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1909 insertions(+)

diff --git a/0000_README b/0000_README
index f79bdd7..5f127c2 100644
--- a/0000_README
+++ b/0000_README
@@ -267,6 +267,10 @@ Patch:  1055_linux-5.10.56.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.56
 
+Patch:  1056_linux-5.10.57.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.57
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1056_linux-5.10.57.patch b/1056_linux-5.10.57.patch
new file mode 100644
index 0000000..d23eda4
--- /dev/null
+++ b/1056_linux-5.10.57.patch
@@ -0,0 +1,1905 @@
+diff --git a/Makefile b/Makefile
+index 0090f53846e9c..e9621a90e752f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 56
++SUBLEVEL = 57
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/drivers/firmware/arm_scmi/bus.c b/drivers/firmware/arm_scmi/bus.c
+index 1377ec76a45db..def8a84d1611b 100644
+--- a/drivers/firmware/arm_scmi/bus.c
++++ b/drivers/firmware/arm_scmi/bus.c
+@@ -113,6 +113,9 @@ int scmi_driver_register(struct scmi_driver *driver, struct module *owner,
+ {
+ 	int retval;
+ 
++	if (!driver->probe)
++		return -EINVAL;
++
+ 	driver->driver.bus = &scmi_bus_type;
+ 	driver->driver.name = driver->name;
+ 	driver->driver.owner = owner;
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index 8c9663258d5d4..7632232486645 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -436,8 +436,12 @@ int scmi_do_xfer_with_response(const struct scmi_handle *handle,
+ 	xfer->async_done = &async_response;
+ 
+ 	ret = scmi_do_xfer(handle, xfer);
+-	if (!ret && !wait_for_completion_timeout(xfer->async_done, timeout))
+-		ret = -ETIMEDOUT;
++	if (!ret) {
++		if (!wait_for_completion_timeout(xfer->async_done, timeout))
++			ret = -ETIMEDOUT;
++		else if (xfer->hdr.status)
++			ret = scmi_to_linux_errno(xfer->hdr.status);
++	}
+ 
+ 	xfer->async_done = NULL;
+ 	return ret;
+diff --git a/drivers/firmware/efi/mokvar-table.c b/drivers/firmware/efi/mokvar-table.c
+index d8bc013406861..38722d2009e20 100644
+--- a/drivers/firmware/efi/mokvar-table.c
++++ b/drivers/firmware/efi/mokvar-table.c
+@@ -180,7 +180,10 @@ void __init efi_mokvar_table_init(void)
+ 		pr_err("EFI MOKvar config table is not valid\n");
+ 		return;
+ 	}
+-	efi_mem_reserve(efi.mokvar_table, map_size_needed);
++
++	if (md.type == EFI_BOOT_SERVICES_DATA)
++		efi_mem_reserve(efi.mokvar_table, map_size_needed);
++
+ 	efi_mokvar_table_size = map_size_needed;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 1812ec7ee11bb..cfe85ba1018e8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -2077,8 +2077,10 @@ int dcn20_populate_dml_pipes_from_context(
+ 				- timing->v_border_bottom;
+ 		pipes[pipe_cnt].pipe.dest.htotal = timing->h_total;
+ 		pipes[pipe_cnt].pipe.dest.vtotal = v_total;
+-		pipes[pipe_cnt].pipe.dest.hactive = timing->h_addressable;
+-		pipes[pipe_cnt].pipe.dest.vactive = timing->v_addressable;
++		pipes[pipe_cnt].pipe.dest.hactive =
++			timing->h_addressable + timing->h_border_left + timing->h_border_right;
++		pipes[pipe_cnt].pipe.dest.vactive =
++			timing->v_addressable + timing->v_border_top + timing->v_border_bottom;
+ 		pipes[pipe_cnt].pipe.dest.interlaced = timing->flags.INTERLACE;
+ 		pipes[pipe_cnt].pipe.dest.pixel_rate_mhz = timing->pix_clk_100hz/10000.0;
+ 		if (timing->timing_3d_format == TIMING_3D_FORMAT_HW_FRAME_PACKING)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+index 367c82b5ab4c1..c09bca3350687 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+@@ -4888,7 +4888,7 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 				}
+ 			} while ((locals->PrefetchSupported[i][j] != true || locals->VRatioInPrefetchSupported[i][j] != true)
+ 					&& (mode_lib->vba.NextMaxVStartup != mode_lib->vba.MaxMaxVStartup[0][0]
+-						|| mode_lib->vba.NextPrefetchMode < mode_lib->vba.MaxPrefetchMode));
++						|| mode_lib->vba.NextPrefetchMode <= mode_lib->vba.MaxPrefetchMode));
+ 
+ 			if (locals->PrefetchSupported[i][j] == true && locals->VRatioInPrefetchSupported[i][j] == true) {
+ 				mode_lib->vba.BandwidthAvailableForImmediateFlip = locals->ReturnBWPerState[i][0];
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+index bd3046e5a9348..e5ac0936a5871 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+@@ -24,7 +24,6 @@
+ #include "i915_gem_clflush.h"
+ #include "i915_gem_context.h"
+ #include "i915_gem_ioctls.h"
+-#include "i915_sw_fence_work.h"
+ #include "i915_trace.h"
+ #include "i915_user_extensions.h"
+ 
+@@ -1401,6 +1400,10 @@ static u32 *reloc_gpu(struct i915_execbuffer *eb,
+ 		int err;
+ 		struct intel_engine_cs *engine = eb->engine;
+ 
++		/* If we need to copy for the cmdparser, we will stall anyway */
++		if (eb_use_cmdparser(eb))
++			return ERR_PTR(-EWOULDBLOCK);
++
+ 		if (!reloc_can_use_engine(engine)) {
+ 			engine = engine->gt->engine_class[COPY_ENGINE_CLASS][0];
+ 			if (!engine)
+@@ -2267,152 +2270,6 @@ shadow_batch_pin(struct i915_execbuffer *eb,
+ 	return vma;
+ }
+ 
+-struct eb_parse_work {
+-	struct dma_fence_work base;
+-	struct intel_engine_cs *engine;
+-	struct i915_vma *batch;
+-	struct i915_vma *shadow;
+-	struct i915_vma *trampoline;
+-	unsigned long batch_offset;
+-	unsigned long batch_length;
+-};
+-
+-static int __eb_parse(struct dma_fence_work *work)
+-{
+-	struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
+-
+-	return intel_engine_cmd_parser(pw->engine,
+-				       pw->batch,
+-				       pw->batch_offset,
+-				       pw->batch_length,
+-				       pw->shadow,
+-				       pw->trampoline);
+-}
+-
+-static void __eb_parse_release(struct dma_fence_work *work)
+-{
+-	struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
+-
+-	if (pw->trampoline)
+-		i915_active_release(&pw->trampoline->active);
+-	i915_active_release(&pw->shadow->active);
+-	i915_active_release(&pw->batch->active);
+-}
+-
+-static const struct dma_fence_work_ops eb_parse_ops = {
+-	.name = "eb_parse",
+-	.work = __eb_parse,
+-	.release = __eb_parse_release,
+-};
+-
+-static inline int
+-__parser_mark_active(struct i915_vma *vma,
+-		     struct intel_timeline *tl,
+-		     struct dma_fence *fence)
+-{
+-	struct intel_gt_buffer_pool_node *node = vma->private;
+-
+-	return i915_active_ref(&node->active, tl->fence_context, fence);
+-}
+-
+-static int
+-parser_mark_active(struct eb_parse_work *pw, struct intel_timeline *tl)
+-{
+-	int err;
+-
+-	mutex_lock(&tl->mutex);
+-
+-	err = __parser_mark_active(pw->shadow, tl, &pw->base.dma);
+-	if (err)
+-		goto unlock;
+-
+-	if (pw->trampoline) {
+-		err = __parser_mark_active(pw->trampoline, tl, &pw->base.dma);
+-		if (err)
+-			goto unlock;
+-	}
+-
+-unlock:
+-	mutex_unlock(&tl->mutex);
+-	return err;
+-}
+-
+-static int eb_parse_pipeline(struct i915_execbuffer *eb,
+-			     struct i915_vma *shadow,
+-			     struct i915_vma *trampoline)
+-{
+-	struct eb_parse_work *pw;
+-	int err;
+-
+-	GEM_BUG_ON(overflows_type(eb->batch_start_offset, pw->batch_offset));
+-	GEM_BUG_ON(overflows_type(eb->batch_len, pw->batch_length));
+-
+-	pw = kzalloc(sizeof(*pw), GFP_KERNEL);
+-	if (!pw)
+-		return -ENOMEM;
+-
+-	err = i915_active_acquire(&eb->batch->vma->active);
+-	if (err)
+-		goto err_free;
+-
+-	err = i915_active_acquire(&shadow->active);
+-	if (err)
+-		goto err_batch;
+-
+-	if (trampoline) {
+-		err = i915_active_acquire(&trampoline->active);
+-		if (err)
+-			goto err_shadow;
+-	}
+-
+-	dma_fence_work_init(&pw->base, &eb_parse_ops);
+-
+-	pw->engine = eb->engine;
+-	pw->batch = eb->batch->vma;
+-	pw->batch_offset = eb->batch_start_offset;
+-	pw->batch_length = eb->batch_len;
+-	pw->shadow = shadow;
+-	pw->trampoline = trampoline;
+-
+-	/* Mark active refs early for this worker, in case we get interrupted */
+-	err = parser_mark_active(pw, eb->context->timeline);
+-	if (err)
+-		goto err_commit;
+-
+-	err = dma_resv_reserve_shared(pw->batch->resv, 1);
+-	if (err)
+-		goto err_commit;
+-
+-	/* Wait for all writes (and relocs) into the batch to complete */
+-	err = i915_sw_fence_await_reservation(&pw->base.chain,
+-					      pw->batch->resv, NULL, false,
+-					      0, I915_FENCE_GFP);
+-	if (err < 0)
+-		goto err_commit;
+-
+-	/* Keep the batch alive and unwritten as we parse */
+-	dma_resv_add_shared_fence(pw->batch->resv, &pw->base.dma);
+-
+-	/* Force execution to wait for completion of the parser */
+-	dma_resv_add_excl_fence(shadow->resv, &pw->base.dma);
+-
+-	dma_fence_work_commit_imm(&pw->base);
+-	return 0;
+-
+-err_commit:
+-	i915_sw_fence_set_error_once(&pw->base.chain, err);
+-	dma_fence_work_commit_imm(&pw->base);
+-	return err;
+-
+-err_shadow:
+-	i915_active_release(&shadow->active);
+-err_batch:
+-	i915_active_release(&eb->batch->vma->active);
+-err_free:
+-	kfree(pw);
+-	return err;
+-}
+-
+ static struct i915_vma *eb_dispatch_secure(struct i915_execbuffer *eb, struct i915_vma *vma)
+ {
+ 	/*
+@@ -2494,13 +2351,11 @@ static int eb_parse(struct i915_execbuffer *eb)
+ 		eb->batch_flags |= I915_DISPATCH_SECURE;
+ 	}
+ 
+-	batch = eb_dispatch_secure(eb, shadow);
+-	if (IS_ERR(batch)) {
+-		err = PTR_ERR(batch);
+-		goto err_trampoline;
+-	}
+-
+-	err = eb_parse_pipeline(eb, shadow, trampoline);
++	err = intel_engine_cmd_parser(eb->engine,
++				      eb->batch->vma,
++				      eb->batch_start_offset,
++				      eb->batch_len,
++				      shadow, trampoline);
+ 	if (err)
+ 		goto err_unpin_batch;
+ 
+@@ -2522,7 +2377,6 @@ secure_batch:
+ err_unpin_batch:
+ 	if (batch)
+ 		i915_vma_unpin(batch);
+-err_trampoline:
+ 	if (trampoline)
+ 		i915_vma_unpin(trampoline);
+ err_shadow:
+diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
+index 9ce174950340b..635aae9145cb2 100644
+--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
++++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
+@@ -1143,27 +1143,30 @@ find_reg(const struct intel_engine_cs *engine, u32 addr)
+ /* Returns a vmap'd pointer to dst_obj, which the caller must unmap */
+ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
+ 		       struct drm_i915_gem_object *src_obj,
+-		       unsigned long offset, unsigned long length)
++		       u32 offset, u32 length)
+ {
+-	bool needs_clflush;
++	unsigned int src_needs_clflush;
++	unsigned int dst_needs_clflush;
+ 	void *dst, *src;
+ 	int ret;
+ 
++	ret = i915_gem_object_prepare_write(dst_obj, &dst_needs_clflush);
++	if (ret)
++		return ERR_PTR(ret);
++
+ 	dst = i915_gem_object_pin_map(dst_obj, I915_MAP_FORCE_WB);
++	i915_gem_object_finish_access(dst_obj);
+ 	if (IS_ERR(dst))
+ 		return dst;
+ 
+-	ret = i915_gem_object_pin_pages(src_obj);
++	ret = i915_gem_object_prepare_read(src_obj, &src_needs_clflush);
+ 	if (ret) {
+ 		i915_gem_object_unpin_map(dst_obj);
+ 		return ERR_PTR(ret);
+ 	}
+ 
+-	needs_clflush =
+-		!(src_obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ);
+-
+ 	src = ERR_PTR(-ENODEV);
+-	if (needs_clflush && i915_has_memcpy_from_wc()) {
++	if (src_needs_clflush && i915_has_memcpy_from_wc()) {
+ 		src = i915_gem_object_pin_map(src_obj, I915_MAP_WC);
+ 		if (!IS_ERR(src)) {
+ 			i915_unaligned_memcpy_from_wc(dst,
+@@ -1185,7 +1188,7 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
+ 		 * validate up to the end of the batch.
+ 		 */
+ 		remain = length;
+-		if (!(dst_obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ))
++		if (dst_needs_clflush & CLFLUSH_BEFORE)
+ 			remain = round_up(remain,
+ 					  boot_cpu_data.x86_clflush_size);
+ 
+@@ -1195,7 +1198,7 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
+ 			int len = min(remain, PAGE_SIZE - x);
+ 
+ 			src = kmap_atomic(i915_gem_object_get_page(src_obj, n));
+-			if (needs_clflush)
++			if (src_needs_clflush)
+ 				drm_clflush_virt_range(src + x, len);
+ 			memcpy(ptr, src + x, len);
+ 			kunmap_atomic(src);
+@@ -1206,11 +1209,10 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
+ 		}
+ 	}
+ 
+-	i915_gem_object_unpin_pages(src_obj);
++	i915_gem_object_finish_access(src_obj);
+ 
+ 	memset32(dst + length, 0, (dst_obj->base.size - length) / sizeof(u32));
+ 
+-	/* dst_obj is returned with vmap pinned */
+ 	return dst;
+ }
+ 
+@@ -1417,6 +1419,7 @@ static unsigned long *alloc_whitelist(u32 batch_length)
+  * Return: non-zero if the parser finds violations or otherwise fails; -EACCES
+  * if the batch appears legal but should use hardware parsing
+  */
++
+ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
+ 			    struct i915_vma *batch,
+ 			    unsigned long batch_offset,
+@@ -1437,7 +1440,8 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
+ 				     batch->size));
+ 	GEM_BUG_ON(!batch_length);
+ 
+-	cmd = copy_batch(shadow->obj, batch->obj, batch_offset, batch_length);
++	cmd = copy_batch(shadow->obj, batch->obj,
++			 batch_offset, batch_length);
+ 	if (IS_ERR(cmd)) {
+ 		DRM_DEBUG("CMD: Failed to copy batch\n");
+ 		return PTR_ERR(cmd);
+diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
+index 0e813819b041b..d8fef42ca38e1 100644
+--- a/drivers/gpu/drm/i915/i915_request.c
++++ b/drivers/gpu/drm/i915/i915_request.c
+@@ -1285,10 +1285,8 @@ i915_request_await_execution(struct i915_request *rq,
+ 
+ 	do {
+ 		fence = *child++;
+-		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+-			i915_sw_fence_set_error_once(&rq->submit, fence->error);
++		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+ 			continue;
+-		}
+ 
+ 		if (fence->context == rq->fence.context)
+ 			continue;
+@@ -1386,10 +1384,8 @@ i915_request_await_dma_fence(struct i915_request *rq, struct dma_fence *fence)
+ 
+ 	do {
+ 		fence = *child++;
+-		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+-			i915_sw_fence_set_error_once(&rq->submit, fence->error);
++		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+ 			continue;
+-		}
+ 
+ 		/*
+ 		 * Requests on the same timeline are explicitly ordered, along
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+index cd882c4533942..caeef25c89bb1 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+@@ -474,14 +474,18 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 
+ 		spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+-		if (!qed_mcp_has_pending_cmd(p_hwfn))
++		if (!qed_mcp_has_pending_cmd(p_hwfn)) {
++			spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 			break;
++		}
+ 
+ 		rc = qed_mcp_update_pending_cmd(p_hwfn, p_ptt);
+-		if (!rc)
++		if (!rc) {
++			spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 			break;
+-		else if (rc != -EAGAIN)
++		} else if (rc != -EAGAIN) {
+ 			goto err;
++		}
+ 
+ 		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+@@ -498,6 +502,8 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		return -EAGAIN;
+ 	}
+ 
++	spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
++
+ 	/* Send the mailbox command */
+ 	qed_mcp_reread_offsets(p_hwfn, p_ptt);
+ 	seq_num = ++p_hwfn->mcp_info->drv_mb_seq;
+@@ -524,14 +530,18 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 
+ 		spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+-		if (p_cmd_elem->b_is_completed)
++		if (p_cmd_elem->b_is_completed) {
++			spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 			break;
++		}
+ 
+ 		rc = qed_mcp_update_pending_cmd(p_hwfn, p_ptt);
+-		if (!rc)
++		if (!rc) {
++			spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 			break;
+-		else if (rc != -EAGAIN)
++		} else if (rc != -EAGAIN) {
+ 			goto err;
++		}
+ 
+ 		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 	} while (++cnt < max_retries);
+@@ -554,6 +564,7 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		return -EAGAIN;
+ 	}
+ 
++	spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 	qed_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
+ 	spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 95e27fb7d2c10..105622e1defab 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -5282,9 +5282,10 @@ static int rtl8152_close(struct net_device *netdev)
+ 		tp->rtl_ops.down(tp);
+ 
+ 		mutex_unlock(&tp->control);
++	}
+ 
++	if (!res)
+ 		usb_autopm_put_interface(tp->intf);
+-	}
+ 
+ 	free_all_mem(tp);
+ 
+diff --git a/drivers/nvme/host/trace.h b/drivers/nvme/host/trace.h
+index daaf700eae799..35bac7a254227 100644
+--- a/drivers/nvme/host/trace.h
++++ b/drivers/nvme/host/trace.h
+@@ -56,7 +56,7 @@ TRACE_EVENT(nvme_setup_cmd,
+ 		__field(u8, fctype)
+ 		__field(u16, cid)
+ 		__field(u32, nsid)
+-		__field(u64, metadata)
++		__field(bool, metadata)
+ 		__array(u8, cdw10, 24)
+ 	    ),
+ 	    TP_fast_assign(
+@@ -66,13 +66,13 @@ TRACE_EVENT(nvme_setup_cmd,
+ 		__entry->flags = cmd->common.flags;
+ 		__entry->cid = cmd->common.command_id;
+ 		__entry->nsid = le32_to_cpu(cmd->common.nsid);
+-		__entry->metadata = le64_to_cpu(cmd->common.metadata);
++		__entry->metadata = !!blk_integrity_rq(req);
+ 		__entry->fctype = cmd->fabrics.fctype;
+ 		__assign_disk_name(__entry->disk, req->rq_disk);
+ 		memcpy(__entry->cdw10, &cmd->common.cdw10,
+ 			sizeof(__entry->cdw10));
+ 	    ),
+-	    TP_printk("nvme%d: %sqid=%d, cmdid=%u, nsid=%u, flags=0x%x, meta=0x%llx, cmd=(%s %s)",
++	    TP_printk("nvme%d: %sqid=%d, cmdid=%u, nsid=%u, flags=0x%x, meta=0x%x, cmd=(%s %s)",
+ 		      __entry->ctrl_id, __print_disk_name(__entry->disk),
+ 		      __entry->qid, __entry->cid, __entry->nsid,
+ 		      __entry->flags, __entry->metadata,
+diff --git a/drivers/regulator/rtmv20-regulator.c b/drivers/regulator/rtmv20-regulator.c
+index 4bca64de0f672..2ee334174e2b0 100644
+--- a/drivers/regulator/rtmv20-regulator.c
++++ b/drivers/regulator/rtmv20-regulator.c
+@@ -37,7 +37,7 @@
+ #define RTMV20_WIDTH2_MASK	GENMASK(7, 0)
+ #define RTMV20_LBPLVL_MASK	GENMASK(3, 0)
+ #define RTMV20_LBPEN_MASK	BIT(7)
+-#define RTMV20_STROBEPOL_MASK	BIT(1)
++#define RTMV20_STROBEPOL_MASK	BIT(0)
+ #define RTMV20_VSYNPOL_MASK	BIT(1)
+ #define RTMV20_FSINEN_MASK	BIT(7)
+ #define RTMV20_ESEN_MASK	BIT(6)
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 8f2d112f0b5d4..83e56ee62649d 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -433,24 +433,15 @@ static int mtk_spi_fifo_transfer(struct spi_master *master,
+ 	mtk_spi_prepare_transfer(master, xfer);
+ 	mtk_spi_setup_packet(master);
+ 
+-	cnt = xfer->len / 4;
+-	if (xfer->tx_buf)
++	if (xfer->tx_buf) {
++		cnt = xfer->len / 4;
+ 		iowrite32_rep(mdata->base + SPI_TX_DATA_REG, xfer->tx_buf, cnt);
+-
+-	if (xfer->rx_buf)
+-		ioread32_rep(mdata->base + SPI_RX_DATA_REG, xfer->rx_buf, cnt);
+-
+-	remainder = xfer->len % 4;
+-	if (remainder > 0) {
+-		reg_val = 0;
+-		if (xfer->tx_buf) {
++		remainder = xfer->len % 4;
++		if (remainder > 0) {
++			reg_val = 0;
+ 			memcpy(&reg_val, xfer->tx_buf + (cnt * 4), remainder);
+ 			writel(reg_val, mdata->base + SPI_TX_DATA_REG);
+ 		}
+-		if (xfer->rx_buf) {
+-			reg_val = readl(mdata->base + SPI_RX_DATA_REG);
+-			memcpy(xfer->rx_buf + (cnt * 4), &reg_val, remainder);
+-		}
+ 	}
+ 
+ 	mtk_spi_enable_transfer(master);
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 8f91f8705eeea..a6dfc8fef20cd 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -917,15 +917,18 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
+ 	ier = readl_relaxed(spi->base + STM32H7_SPI_IER);
+ 
+ 	mask = ier;
+-	/* EOTIE is triggered on EOT, SUSP and TXC events. */
++	/*
++	 * EOTIE enables irq from EOT, SUSP and TXC events. We need to set
++	 * SUSP to acknowledge it later. TXC is automatically cleared
++	 */
++
+ 	mask |= STM32H7_SPI_SR_SUSP;
+ 	/*
+-	 * When TXTF is set, DXPIE and TXPIE are cleared. So in case of
+-	 * Full-Duplex, need to poll RXP event to know if there are remaining
+-	 * data, before disabling SPI.
++	 * DXPIE is set in Full-Duplex, one IT will be raised if TXP and RXP
++	 * are set. So in case of Full-Duplex, need to poll TXP and RXP event.
+ 	 */
+-	if (spi->rx_buf && !spi->cur_usedma)
+-		mask |= STM32H7_SPI_SR_RXP;
++	if ((spi->cur_comm == SPI_FULL_DUPLEX) && !spi->cur_usedma)
++		mask |= STM32H7_SPI_SR_TXP | STM32H7_SPI_SR_RXP;
+ 
+ 	if (!(sr & mask)) {
+ 		dev_warn(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
+diff --git a/drivers/watchdog/iTCO_wdt.c b/drivers/watchdog/iTCO_wdt.c
+index 519a539eeb9e8..a370a185a41c4 100644
+--- a/drivers/watchdog/iTCO_wdt.c
++++ b/drivers/watchdog/iTCO_wdt.c
+@@ -73,8 +73,6 @@
+ #define TCOBASE(p)	((p)->tco_res->start)
+ /* SMI Control and Enable Register */
+ #define SMI_EN(p)	((p)->smi_res->start)
+-#define TCO_EN		(1 << 13)
+-#define GBL_SMI_EN	(1 << 0)
+ 
+ #define TCO_RLD(p)	(TCOBASE(p) + 0x00) /* TCO Timer Reload/Curr. Value */
+ #define TCOv1_TMR(p)	(TCOBASE(p) + 0x01) /* TCOv1 Timer Initial Value*/
+@@ -359,12 +357,8 @@ static int iTCO_wdt_set_timeout(struct watchdog_device *wd_dev, unsigned int t)
+ 
+ 	tmrval = seconds_to_ticks(p, t);
+ 
+-	/*
+-	 * If TCO SMIs are off, the timer counts down twice before rebooting.
+-	 * Otherwise, the BIOS generally reboots when the SMI triggers.
+-	 */
+-	if (p->smi_res &&
+-	    (SMI_EN(p) & (TCO_EN | GBL_SMI_EN)) != (TCO_EN | GBL_SMI_EN))
++	/* For TCO v1 the timer counts down twice before rebooting */
++	if (p->iTCO_version == 1)
+ 		tmrval /= 2;
+ 
+ 	/* from the specs: */
+@@ -529,7 +523,7 @@ static int iTCO_wdt_probe(struct platform_device *pdev)
+ 		 * Disables TCO logic generating an SMI#
+ 		 */
+ 		val32 = inl(SMI_EN(p));
+-		val32 &= ~TCO_EN;	/* Turn off SMI clearing watchdog */
++		val32 &= 0xffffdfff;	/* Turn off SMI clearing watchdog */
+ 		outl(val32, SMI_EN(p));
+ 	}
+ 
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 4b913de2f24fb..f36928efcf92d 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -6443,7 +6443,6 @@ void btrfs_log_new_name(struct btrfs_trans_handle *trans,
+ 			struct btrfs_inode *inode, struct btrfs_inode *old_dir,
+ 			struct dentry *parent)
+ {
+-	struct btrfs_fs_info *fs_info = trans->fs_info;
+ 	struct btrfs_log_ctx ctx;
+ 
+ 	/*
+@@ -6457,8 +6456,8 @@ void btrfs_log_new_name(struct btrfs_trans_handle *trans,
+ 	 * if this inode hasn't been logged and directory we're renaming it
+ 	 * from hasn't been logged, we don't need to log it
+ 	 */
+-	if (inode->logged_trans <= fs_info->last_trans_committed &&
+-	    (!old_dir || old_dir->logged_trans <= fs_info->last_trans_committed))
++	if (!inode_logged(trans, inode) &&
++	    (!old_dir || !inode_logged(trans, old_dir)))
+ 		return;
+ 
+ 	btrfs_init_log_ctx(&ctx, &inode->vfs_inode);
+diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
+index 37dac195adbb4..6ad3b89a8a2e0 100644
+--- a/include/acpi/acpi_bus.h
++++ b/include/acpi/acpi_bus.h
+@@ -689,7 +689,8 @@ acpi_dev_get_first_match_dev(const char *hid, const char *uid, s64 hrv);
+ 
+ static inline void acpi_dev_put(struct acpi_device *adev)
+ {
+-	put_device(&adev->dev);
++	if (adev)
++		put_device(&adev->dev);
+ }
+ #else	/* CONFIG_ACPI */
+ 
+diff --git a/include/linux/mfd/rt5033-private.h b/include/linux/mfd/rt5033-private.h
+index f812105c538c8..f2271bfb3273f 100644
+--- a/include/linux/mfd/rt5033-private.h
++++ b/include/linux/mfd/rt5033-private.h
+@@ -200,13 +200,13 @@ enum rt5033_reg {
+ #define RT5033_REGULATOR_BUCK_VOLTAGE_MIN		1000000U
+ #define RT5033_REGULATOR_BUCK_VOLTAGE_MAX		3000000U
+ #define RT5033_REGULATOR_BUCK_VOLTAGE_STEP		100000U
+-#define RT5033_REGULATOR_BUCK_VOLTAGE_STEP_NUM		32
++#define RT5033_REGULATOR_BUCK_VOLTAGE_STEP_NUM		21
+ 
+ /* RT5033 regulator LDO output voltage uV */
+ #define RT5033_REGULATOR_LDO_VOLTAGE_MIN		1200000U
+ #define RT5033_REGULATOR_LDO_VOLTAGE_MAX		3000000U
+ #define RT5033_REGULATOR_LDO_VOLTAGE_STEP		100000U
+-#define RT5033_REGULATOR_LDO_VOLTAGE_STEP_NUM		32
++#define RT5033_REGULATOR_LDO_VOLTAGE_STEP_NUM		19
+ 
+ /* RT5033 regulator SAFE LDO output voltage uV */
+ #define RT5033_REGULATOR_SAFE_LDO_VOLTAGE		4900000U
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 0854f1b35683c..86ebfc6ae6986 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1713,6 +1713,14 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 
+ 	BT_DBG("%s %p", hdev->name, hdev);
+ 
++	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
++	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
++	    test_bit(HCI_UP, &hdev->flags)) {
++		/* Execute vendor specific shutdown routine */
++		if (hdev->shutdown)
++			hdev->shutdown(hdev);
++	}
++
+ 	cancel_delayed_work(&hdev->power_off);
+ 
+ 	hci_request_cancel_all(hdev);
+@@ -1788,14 +1796,6 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 		clear_bit(HCI_INIT, &hdev->flags);
+ 	}
+ 
+-	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
+-	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
+-	    test_bit(HCI_UP, &hdev->flags)) {
+-		/* Execute vendor specific shutdown routine */
+-		if (hdev->shutdown)
+-			hdev->shutdown(hdev);
+-	}
+-
+ 	/* flush cmd  work */
+ 	flush_work(&hdev->cmd_work);
+ 
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 2d27aae6d36ff..825e6b9880030 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -2922,8 +2922,11 @@ skb_zerocopy_headlen(const struct sk_buff *from)
+ 
+ 	if (!from->head_frag ||
+ 	    skb_headlen(from) < L1_CACHE_BYTES ||
+-	    skb_shinfo(from)->nr_frags >= MAX_SKB_FRAGS)
++	    skb_shinfo(from)->nr_frags >= MAX_SKB_FRAGS) {
+ 		hlen = skb_headlen(from);
++		if (!hlen)
++			hlen = from->len;
++	}
+ 
+ 	if (skb_has_frag_list(from))
+ 		hlen = from->len;
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index d9878173ff898..2e41b8c169e5b 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -971,10 +971,14 @@ int rt5682_headset_detect(struct snd_soc_component *component, int jack_insert)
+ 		rt5682_enable_push_button_irq(component, false);
+ 		snd_soc_component_update_bits(component, RT5682_CBJ_CTRL_1,
+ 			RT5682_TRIG_JD_MASK, RT5682_TRIG_JD_LOW);
+-		if (!snd_soc_dapm_get_pin_status(dapm, "MICBIAS"))
++		if (!snd_soc_dapm_get_pin_status(dapm, "MICBIAS") &&
++			!snd_soc_dapm_get_pin_status(dapm, "PLL1") &&
++			!snd_soc_dapm_get_pin_status(dapm, "PLL2B"))
+ 			snd_soc_component_update_bits(component,
+ 				RT5682_PWR_ANLG_1, RT5682_PWR_MB, 0);
+-		if (!snd_soc_dapm_get_pin_status(dapm, "Vref2"))
++		if (!snd_soc_dapm_get_pin_status(dapm, "Vref2") &&
++			!snd_soc_dapm_get_pin_status(dapm, "PLL1") &&
++			!snd_soc_dapm_get_pin_status(dapm, "PLL2B"))
+ 			snd_soc_component_update_bits(component,
+ 				RT5682_PWR_ANLG_1, RT5682_PWR_VREF2, 0);
+ 		snd_soc_component_update_bits(component, RT5682_PWR_ANLG_3,
+diff --git a/sound/soc/codecs/tlv320aic31xx.h b/sound/soc/codecs/tlv320aic31xx.h
+index 81952984613d2..2513922a02923 100644
+--- a/sound/soc/codecs/tlv320aic31xx.h
++++ b/sound/soc/codecs/tlv320aic31xx.h
+@@ -151,8 +151,8 @@ struct aic31xx_pdata {
+ #define AIC31XX_WORD_LEN_24BITS		0x02
+ #define AIC31XX_WORD_LEN_32BITS		0x03
+ #define AIC31XX_IFACE1_MASTER_MASK	GENMASK(3, 2)
+-#define AIC31XX_BCLK_MASTER		BIT(2)
+-#define AIC31XX_WCLK_MASTER		BIT(3)
++#define AIC31XX_BCLK_MASTER		BIT(3)
++#define AIC31XX_WCLK_MASTER		BIT(2)
+ 
+ /* AIC31XX_DATA_OFFSET */
+ #define AIC31XX_DATA_OFFSET_MASK	GENMASK(7, 0)
+diff --git a/sound/soc/ti/j721e-evm.c b/sound/soc/ti/j721e-evm.c
+index a7c0484d44ec7..265bbc5a2f96a 100644
+--- a/sound/soc/ti/j721e-evm.c
++++ b/sound/soc/ti/j721e-evm.c
+@@ -197,7 +197,7 @@ static int j721e_configure_refclk(struct j721e_priv *priv,
+ 		return ret;
+ 	}
+ 
+-	if (priv->hsdiv_rates[domain->parent_clk_id] != scki) {
++	if (domain->parent_clk_id == -1 || priv->hsdiv_rates[domain->parent_clk_id] != scki) {
+ 		dev_dbg(priv->dev,
+ 			"%s configuration for %u Hz: %s, %dxFS (SCKI: %u Hz)\n",
+ 			audio_domain == J721E_AUDIO_DOMAIN_CPB ? "CPB" : "IVI",
+@@ -278,23 +278,29 @@ static int j721e_audio_startup(struct snd_pcm_substream *substream)
+ 					  j721e_rule_rate, &priv->rate_range,
+ 					  SNDRV_PCM_HW_PARAM_RATE, -1);
+ 
+-	mutex_unlock(&priv->mutex);
+ 
+ 	if (ret)
+-		return ret;
++		goto out;
+ 
+ 	/* Reset TDM slots to 32 */
+ 	ret = snd_soc_dai_set_tdm_slot(cpu_dai, 0x3, 0x3, 2, 32);
+ 	if (ret && ret != -ENOTSUPP)
+-		return ret;
++		goto out;
+ 
+ 	for_each_rtd_codec_dais(rtd, i, codec_dai) {
+ 		ret = snd_soc_dai_set_tdm_slot(codec_dai, 0x3, 0x3, 2, 32);
+ 		if (ret && ret != -ENOTSUPP)
+-			return ret;
++			goto out;
+ 	}
+ 
+-	return 0;
++	if (ret == -ENOTSUPP)
++		ret = 0;
++out:
++	if (ret)
++		domain->active--;
++	mutex_unlock(&priv->mutex);
++
++	return ret;
+ }
+ 
+ static int j721e_audio_hw_params(struct snd_pcm_substream *substream,
+diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_task.c b/tools/testing/selftests/bpf/progs/bpf_iter_task.c
+index 4983087852a09..b7f32c160f4e2 100644
+--- a/tools/testing/selftests/bpf/progs/bpf_iter_task.c
++++ b/tools/testing/selftests/bpf/progs/bpf_iter_task.c
+@@ -11,9 +11,10 @@ int dump_task(struct bpf_iter__task *ctx)
+ {
+ 	struct seq_file *seq = ctx->meta->seq;
+ 	struct task_struct *task = ctx->task;
++	static char info[] = "    === END ===";
+ 
+ 	if (task == (void *)0) {
+-		BPF_SEQ_PRINTF(seq, "    === END ===\n");
++		BPF_SEQ_PRINTF(seq, "%s\n", info);
+ 		return 0;
+ 	}
+ 
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index 9be395d9dc648..a4c55fcb0e7b1 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -1036,7 +1036,7 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
+ 		}
+ 	}
+ 
+-	if (test->insn_processed) {
++	if (!unpriv && test->insn_processed) {
+ 		uint32_t insn_processed;
+ 		char *proc;
+ 
+diff --git a/tools/testing/selftests/bpf/verifier/and.c b/tools/testing/selftests/bpf/verifier/and.c
+index ca8fdb1b3f015..7d7ebee5cc7a8 100644
+--- a/tools/testing/selftests/bpf/verifier/and.c
++++ b/tools/testing/selftests/bpf/verifier/and.c
+@@ -61,6 +61,8 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R1 !read_ok",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 0
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/basic_stack.c b/tools/testing/selftests/bpf/verifier/basic_stack.c
+index b56f8117c09d2..f995777dddb3f 100644
+--- a/tools/testing/selftests/bpf/verifier/basic_stack.c
++++ b/tools/testing/selftests/bpf/verifier/basic_stack.c
+@@ -4,7 +4,7 @@
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, 8, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid stack",
++	.errstr = "invalid write to stack",
+ 	.result = REJECT,
+ },
+ {
+diff --git a/tools/testing/selftests/bpf/verifier/bounds.c b/tools/testing/selftests/bpf/verifier/bounds.c
+index 57ed67b860746..e061e8799ce23 100644
+--- a/tools/testing/selftests/bpf/verifier/bounds.c
++++ b/tools/testing/selftests/bpf/verifier/bounds.c
+@@ -261,8 +261,6 @@
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+ 	/* not actually fully unbounded, but the bound is very high */
+-	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root",
+-	.result_unpriv = REJECT,
+ 	.errstr = "value -4294967168 makes map_value pointer be out of bounds",
+ 	.result = REJECT,
+ },
+@@ -298,9 +296,6 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+-	/* not actually fully unbounded, but the bound is very high */
+-	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root",
+-	.result_unpriv = REJECT,
+ 	.errstr = "value -4294967168 makes map_value pointer be out of bounds",
+ 	.result = REJECT,
+ },
+@@ -513,6 +508,8 @@
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 invalid mem access 'inv'",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT
+ },
+ {
+@@ -533,6 +530,8 @@
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 invalid mem access 'inv'",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT
+ },
+ {
+@@ -574,6 +573,8 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 min value is outside of the allowed memory range",
++	.result_unpriv = REJECT,
+ 	.fixup_map_hash_8b = { 3 },
+ 	.result = ACCEPT,
+ },
+@@ -594,6 +595,8 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 min value is outside of the allowed memory range",
++	.result_unpriv = REJECT,
+ 	.fixup_map_hash_8b = { 3 },
+ 	.result = ACCEPT,
+ },
+@@ -614,6 +617,8 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 min value is outside of the allowed memory range",
++	.result_unpriv = REJECT,
+ 	.fixup_map_hash_8b = { 3 },
+ 	.result = ACCEPT,
+ },
+@@ -679,6 +684,8 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 min value is outside of the allowed memory range",
++	.result_unpriv = REJECT,
+ 	.fixup_map_hash_8b = { 3 },
+ 	.result = ACCEPT,
+ },
+@@ -700,6 +707,8 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 min value is outside of the allowed memory range",
++	.result_unpriv = REJECT,
+ 	.fixup_map_hash_8b = { 3 },
+ 	.result = ACCEPT,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/bounds_deduction.c b/tools/testing/selftests/bpf/verifier/bounds_deduction.c
+index c162498a64fc6..91869aea6d641 100644
+--- a/tools/testing/selftests/bpf/verifier/bounds_deduction.c
++++ b/tools/testing/selftests/bpf/verifier/bounds_deduction.c
+@@ -6,7 +6,7 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R1 has pointer with unsupported alu operation",
+ 	.errstr = "R0 tried to subtract pointer from scalar",
+ 	.result = REJECT,
+ },
+@@ -21,7 +21,7 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R1 has pointer with unsupported alu operation",
+ 	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 1,
+@@ -34,22 +34,23 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R1 has pointer with unsupported alu operation",
+ 	.errstr = "R0 tried to subtract pointer from scalar",
+ 	.result = REJECT,
+ },
+ {
+ 	"check deducing bounds from const, 4",
+ 	.insns = {
++		BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+ 		BPF_MOV64_IMM(BPF_REG_0, 0),
+ 		BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 1),
+ 		BPF_EXIT_INSN(),
+ 		BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
+ 		BPF_EXIT_INSN(),
+-		BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
++		BPF_ALU64_REG(BPF_SUB, BPF_REG_6, BPF_REG_0),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R6 has pointer with unsupported alu operation",
+ 	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ },
+@@ -61,7 +62,7 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R1 has pointer with unsupported alu operation",
+ 	.errstr = "R0 tried to subtract pointer from scalar",
+ 	.result = REJECT,
+ },
+@@ -74,7 +75,7 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R1 has pointer with unsupported alu operation",
+ 	.errstr = "R0 tried to subtract pointer from scalar",
+ 	.result = REJECT,
+ },
+@@ -88,7 +89,7 @@
+ 			    offsetof(struct __sk_buff, mark)),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R1 has pointer with unsupported alu operation",
+ 	.errstr = "dereference of modified ctx ptr",
+ 	.result = REJECT,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+@@ -103,7 +104,7 @@
+ 			    offsetof(struct __sk_buff, mark)),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R1 has pointer with unsupported alu operation",
+ 	.errstr = "dereference of modified ctx ptr",
+ 	.result = REJECT,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+@@ -116,7 +117,7 @@
+ 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R1 has pointer with unsupported alu operation",
+ 	.errstr = "R0 tried to subtract pointer from scalar",
+ 	.result = REJECT,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c b/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
+index 9baca7a75c42a..c2aa6f26738b4 100644
+--- a/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
++++ b/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
+@@ -19,7 +19,6 @@
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+ 	.errstr = "unbounded min value",
+-	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
+ 	.result = REJECT,
+ },
+ {
+@@ -43,7 +42,6 @@
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+ 	.errstr = "unbounded min value",
+-	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
+ 	.result = REJECT,
+ },
+ {
+@@ -69,7 +67,6 @@
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+ 	.errstr = "unbounded min value",
+-	.errstr_unpriv = "R8 has unknown scalar with mixed signed bounds",
+ 	.result = REJECT,
+ },
+ {
+@@ -94,7 +91,6 @@
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+ 	.errstr = "unbounded min value",
+-	.errstr_unpriv = "R8 has unknown scalar with mixed signed bounds",
+ 	.result = REJECT,
+ },
+ {
+@@ -141,7 +137,6 @@
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+ 	.errstr = "unbounded min value",
+-	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
+ 	.result = REJECT,
+ },
+ {
+@@ -210,7 +205,6 @@
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+ 	.errstr = "unbounded min value",
+-	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
+ 	.result = REJECT,
+ },
+ {
+@@ -260,7 +254,6 @@
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+ 	.errstr = "unbounded min value",
+-	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
+ 	.result = REJECT,
+ },
+ {
+@@ -287,7 +280,6 @@
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+ 	.errstr = "unbounded min value",
+-	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
+ 	.result = REJECT,
+ },
+ {
+@@ -313,7 +305,6 @@
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+ 	.errstr = "unbounded min value",
+-	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
+ 	.result = REJECT,
+ },
+ {
+@@ -342,7 +333,6 @@
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+ 	.errstr = "unbounded min value",
+-	.errstr_unpriv = "R7 has unknown scalar with mixed signed bounds",
+ 	.result = REJECT,
+ },
+ {
+@@ -372,7 +362,6 @@
+ 	},
+ 	.fixup_map_hash_8b = { 4 },
+ 	.errstr = "unbounded min value",
+-	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
+ 	.result = REJECT,
+ },
+ {
+@@ -400,7 +389,5 @@
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+ 	.errstr = "unbounded min value",
+-	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
+ 	.result = REJECT,
+-	.result_unpriv = REJECT,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
+index c4f5d909e58a7..eb888c8479c32 100644
+--- a/tools/testing/selftests/bpf/verifier/calls.c
++++ b/tools/testing/selftests/bpf/verifier/calls.c
+@@ -1228,7 +1228,7 @@
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.fixup_map_hash_8b = { 23 },
+ 	.result = REJECT,
+-	.errstr = "invalid read from stack off -16+0 size 8",
++	.errstr = "invalid read from stack R7 off=-16 size=8",
+ },
+ {
+ 	"calls: two calls that receive map_value via arg=ptr_stack_of_caller. test1",
+@@ -1958,7 +1958,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_48b = { 6 },
+-	.errstr = "invalid indirect read from stack off -8+0 size 8",
++	.errstr = "invalid indirect read from stack R2 off -8+0 size 8",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/const_or.c b/tools/testing/selftests/bpf/verifier/const_or.c
+index 6c214c58e8d4a..0719b0ddec040 100644
+--- a/tools/testing/selftests/bpf/verifier/const_or.c
++++ b/tools/testing/selftests/bpf/verifier/const_or.c
+@@ -23,7 +23,7 @@
+ 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid stack type R1 off=-48 access_size=58",
++	.errstr = "invalid indirect access to stack R1 off=-48 size=58",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+@@ -54,7 +54,7 @@
+ 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid stack type R1 off=-48 access_size=58",
++	.errstr = "invalid indirect access to stack R1 off=-48 size=58",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/dead_code.c b/tools/testing/selftests/bpf/verifier/dead_code.c
+index 5cf361d8eb1cc..721ec9391be5a 100644
+--- a/tools/testing/selftests/bpf/verifier/dead_code.c
++++ b/tools/testing/selftests/bpf/verifier/dead_code.c
+@@ -8,6 +8,8 @@
+ 	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, -4),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R9 !read_ok",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 7,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
+index 87c4e79000833..0ab7f1dfc97ac 100644
+--- a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
++++ b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
+@@ -39,7 +39,7 @@
+ 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid indirect read from stack off -64+0 size 64",
++	.errstr = "invalid indirect read from stack R1 off -64+0 size 64",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+@@ -59,7 +59,7 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid stack type R1 off=-64 access_size=65",
++	.errstr = "invalid indirect access to stack R1 off=-64 size=65",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+@@ -136,7 +136,7 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid stack type R1 off=-64 access_size=65",
++	.errstr = "invalid indirect access to stack R1 off=-64 size=65",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+@@ -156,7 +156,7 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid stack type R1 off=-64 access_size=65",
++	.errstr = "invalid indirect access to stack R1 off=-64 size=65",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+@@ -194,7 +194,7 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid indirect read from stack off -64+0 size 64",
++	.errstr = "invalid indirect read from stack R1 off -64+0 size 64",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+@@ -584,7 +584,7 @@
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid indirect read from stack off -64+32 size 64",
++	.errstr = "invalid indirect read from stack R1 off -64+32 size 64",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/int_ptr.c b/tools/testing/selftests/bpf/verifier/int_ptr.c
+index ca3b4729df66c..070893fb29007 100644
+--- a/tools/testing/selftests/bpf/verifier/int_ptr.c
++++ b/tools/testing/selftests/bpf/verifier/int_ptr.c
+@@ -27,7 +27,7 @@
+ 	},
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_CGROUP_SYSCTL,
+-	.errstr = "invalid indirect read from stack off -16+0 size 8",
++	.errstr = "invalid indirect read from stack R4 off -16+0 size 8",
+ },
+ {
+ 	"ARG_PTR_TO_LONG half-uninitialized",
+@@ -59,7 +59,7 @@
+ 	},
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_CGROUP_SYSCTL,
+-	.errstr = "invalid indirect read from stack off -16+4 size 8",
++	.errstr = "invalid indirect read from stack R4 off -16+4 size 8",
+ },
+ {
+ 	"ARG_PTR_TO_LONG misaligned",
+@@ -125,7 +125,7 @@
+ 	},
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_CGROUP_SYSCTL,
+-	.errstr = "invalid stack type R4 off=-4 access_size=8",
++	.errstr = "invalid indirect access to stack R4 off=-4 size=8",
+ },
+ {
+ 	"ARG_PTR_TO_LONG initialized",
+diff --git a/tools/testing/selftests/bpf/verifier/jmp32.c b/tools/testing/selftests/bpf/verifier/jmp32.c
+index bd5cae4a7f733..1c857b2fbdf0a 100644
+--- a/tools/testing/selftests/bpf/verifier/jmp32.c
++++ b/tools/testing/selftests/bpf/verifier/jmp32.c
+@@ -87,6 +87,8 @@
+ 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R9 !read_ok",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ },
+ {
+@@ -150,6 +152,8 @@
+ 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R9 !read_ok",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ },
+ {
+@@ -213,6 +217,8 @@
+ 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R9 !read_ok",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ },
+ {
+@@ -280,6 +286,8 @@
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 invalid mem access 'inv'",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 2,
+ },
+@@ -348,6 +356,8 @@
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 invalid mem access 'inv'",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 2,
+ },
+@@ -416,6 +426,8 @@
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 invalid mem access 'inv'",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 2,
+ },
+@@ -484,6 +496,8 @@
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 invalid mem access 'inv'",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 2,
+ },
+@@ -552,6 +566,8 @@
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 invalid mem access 'inv'",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 2,
+ },
+@@ -620,6 +636,8 @@
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 invalid mem access 'inv'",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 2,
+ },
+@@ -688,6 +706,8 @@
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 invalid mem access 'inv'",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 2,
+ },
+@@ -756,6 +776,8 @@
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R0 invalid mem access 'inv'",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 2,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/jset.c b/tools/testing/selftests/bpf/verifier/jset.c
+index 8dcd4e0383d57..11fc68da735ea 100644
+--- a/tools/testing/selftests/bpf/verifier/jset.c
++++ b/tools/testing/selftests/bpf/verifier/jset.c
+@@ -82,8 +82,8 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
+-	.retval_unpriv = 1,
+-	.result_unpriv = ACCEPT,
++	.errstr_unpriv = "R9 !read_ok",
++	.result_unpriv = REJECT,
+ 	.retval = 1,
+ 	.result = ACCEPT,
+ },
+@@ -141,7 +141,8 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
+-	.result_unpriv = ACCEPT,
++	.errstr_unpriv = "R9 !read_ok",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ },
+ {
+@@ -162,6 +163,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
+-	.result_unpriv = ACCEPT,
++	.errstr_unpriv = "R9 !read_ok",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/map_ptr.c b/tools/testing/selftests/bpf/verifier/map_ptr.c
+index 92a1dc8e17462..2f551cb24cf7c 100644
+--- a/tools/testing/selftests/bpf/verifier/map_ptr.c
++++ b/tools/testing/selftests/bpf/verifier/map_ptr.c
+@@ -75,7 +75,7 @@
+ 	},
+ 	.fixup_map_hash_16b = { 4 },
+ 	.result_unpriv = REJECT,
+-	.errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R1 has pointer with unsupported alu operation",
+ 	.result = ACCEPT,
+ },
+ {
+@@ -93,6 +93,6 @@
+ 	},
+ 	.fixup_map_hash_16b = { 4 },
+ 	.result_unpriv = REJECT,
+-	.errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R0 has pointer with unsupported alu operation",
+ 	.result = ACCEPT,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/raw_stack.c b/tools/testing/selftests/bpf/verifier/raw_stack.c
+index 193d9e87d5a90..cc8e8c3cdc03d 100644
+--- a/tools/testing/selftests/bpf/verifier/raw_stack.c
++++ b/tools/testing/selftests/bpf/verifier/raw_stack.c
+@@ -11,7 +11,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.result = REJECT,
+-	.errstr = "invalid read from stack off -8+0 size 8",
++	.errstr = "invalid read from stack R6 off=-8 size=8",
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ },
+ {
+@@ -59,7 +59,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.result = REJECT,
+-	.errstr = "invalid stack type R3",
++	.errstr = "invalid zero-sized read",
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ },
+ {
+@@ -205,7 +205,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.result = REJECT,
+-	.errstr = "invalid stack type R3 off=-513 access_size=8",
++	.errstr = "invalid indirect access to stack R3 off=-513 size=8",
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ },
+ {
+@@ -221,7 +221,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.result = REJECT,
+-	.errstr = "invalid stack type R3 off=-1 access_size=8",
++	.errstr = "invalid indirect access to stack R3 off=-1 size=8",
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ },
+ {
+@@ -285,7 +285,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.result = REJECT,
+-	.errstr = "invalid stack type R3 off=-512 access_size=0",
++	.errstr = "invalid zero-sized read",
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ },
+ {
+diff --git a/tools/testing/selftests/bpf/verifier/stack_ptr.c b/tools/testing/selftests/bpf/verifier/stack_ptr.c
+index 8bfeb77c60bd3..8ab94d65f3d54 100644
+--- a/tools/testing/selftests/bpf/verifier/stack_ptr.c
++++ b/tools/testing/selftests/bpf/verifier/stack_ptr.c
+@@ -44,7 +44,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.result = REJECT,
+-	.errstr = "invalid stack off=-79992 size=8",
++	.errstr = "invalid write to stack R1 off=-79992 size=8",
+ 	.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
+ },
+ {
+@@ -57,7 +57,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.result = REJECT,
+-	.errstr = "invalid stack off=0 size=8",
++	.errstr = "invalid write to stack R1 off=0 size=8",
+ },
+ {
+ 	"PTR_TO_STACK check high 1",
+@@ -106,7 +106,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
+-	.errstr = "invalid stack off=0 size=1",
++	.errstr = "invalid write to stack R1 off=0 size=1",
+ 	.result = REJECT,
+ },
+ {
+@@ -119,7 +119,8 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.result = REJECT,
+-	.errstr = "invalid stack off",
++	.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
++	.errstr = "invalid write to stack R1",
+ },
+ {
+ 	"PTR_TO_STACK check high 6",
+@@ -131,7 +132,8 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.result = REJECT,
+-	.errstr = "invalid stack off",
++	.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
++	.errstr = "invalid write to stack",
+ },
+ {
+ 	"PTR_TO_STACK check high 7",
+@@ -183,7 +185,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
+-	.errstr = "invalid stack off=-513 size=1",
++	.errstr = "invalid write to stack R1 off=-513 size=1",
+ 	.result = REJECT,
+ },
+ {
+@@ -208,7 +210,8 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.result = REJECT,
+-	.errstr = "invalid stack off",
++	.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
++	.errstr = "invalid write to stack",
+ },
+ {
+ 	"PTR_TO_STACK check low 6",
+@@ -220,7 +223,8 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.result = REJECT,
+-	.errstr = "invalid stack off",
++	.errstr = "invalid write to stack",
++	.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
+ },
+ {
+ 	"PTR_TO_STACK check low 7",
+@@ -291,8 +295,6 @@
+ 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.result_unpriv = REJECT,
+-	.errstr_unpriv = "invalid stack off=0 size=1",
+ 	.result = ACCEPT,
+ 	.retval = 42,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/unpriv.c b/tools/testing/selftests/bpf/verifier/unpriv.c
+index 0d621c841db14..9dfb68c8c78d9 100644
+--- a/tools/testing/selftests/bpf/verifier/unpriv.c
++++ b/tools/testing/selftests/bpf/verifier/unpriv.c
+@@ -108,8 +108,9 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+-	.errstr = "invalid indirect read from stack off -8+0 size 8",
+-	.result = REJECT,
++	.errstr_unpriv = "invalid indirect read from stack R2 off -8+0 size 8",
++	.result_unpriv = REJECT,
++	.result = ACCEPT,
+ },
+ {
+ 	"unpriv: mangle pointer on stack 1",
+@@ -418,6 +419,8 @@
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
++	.errstr_unpriv = "R7 invalid mem access 'inv'",
++	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.retval = 0,
+ },
+@@ -503,7 +506,7 @@
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
+ 	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+index feb91266db39a..a3e593ddfafc9 100644
+--- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
++++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+@@ -21,8 +21,6 @@
+ 	.fixup_map_hash_16b = { 5 },
+ 	.fixup_map_array_48b = { 8 },
+ 	.result = ACCEPT,
+-	.result_unpriv = REJECT,
+-	.errstr_unpriv = "R1 tried to add from different maps",
+ 	.retval = 1,
+ },
+ {
+@@ -122,7 +120,7 @@
+ 	.fixup_map_array_48b = { 1 },
+ 	.result = ACCEPT,
+ 	.result_unpriv = REJECT,
+-	.errstr_unpriv = "R2 tried to add from different pointers or scalars",
++	.errstr_unpriv = "R2 pointer comparison prohibited",
+ 	.retval = 0,
+ },
+ {
+@@ -161,7 +159,8 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	// fake-dead code; targeted from branch A to
+-	// prevent dead code sanitization
++	// prevent dead code sanitization, rejected
++	// via branch B however
+ 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+@@ -169,7 +168,7 @@
+ 	.fixup_map_array_48b = { 1 },
+ 	.result = ACCEPT,
+ 	.result_unpriv = REJECT,
+-	.errstr_unpriv = "R2 tried to add from different maps, paths, or prohibited types",
++	.errstr_unpriv = "R0 invalid mem access 'inv'",
+ 	.retval = 0,
+ },
+ {
+@@ -302,8 +301,6 @@
+ 	},
+ 	.fixup_map_array_48b = { 3 },
+ 	.result = ACCEPT,
+-	.result_unpriv = REJECT,
+-	.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
+ 	.retval = 1,
+ },
+ {
+@@ -373,8 +370,6 @@
+ 	},
+ 	.fixup_map_array_48b = { 3 },
+ 	.result = ACCEPT,
+-	.result_unpriv = REJECT,
+-	.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
+ 	.retval = 1,
+ },
+ {
+@@ -474,8 +469,6 @@
+ 	},
+ 	.fixup_map_array_48b = { 3 },
+ 	.result = ACCEPT,
+-	.result_unpriv = REJECT,
+-	.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
+ 	.retval = 1,
+ },
+ {
+@@ -768,8 +761,6 @@
+ 	},
+ 	.fixup_map_array_48b = { 3 },
+ 	.result = ACCEPT,
+-	.result_unpriv = REJECT,
+-	.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
+ 	.retval = 1,
+ },
+ {
+diff --git a/tools/testing/selftests/bpf/verifier/var_off.c b/tools/testing/selftests/bpf/verifier/var_off.c
+index 8504ac9378098..eab1f7f56e2f0 100644
+--- a/tools/testing/selftests/bpf/verifier/var_off.c
++++ b/tools/testing/selftests/bpf/verifier/var_off.c
+@@ -18,7 +18,7 @@
+ 	.prog_type = BPF_PROG_TYPE_LWT_IN,
+ },
+ {
+-	"variable-offset stack access",
++	"variable-offset stack read, priv vs unpriv",
+ 	.insns = {
+ 	/* Fill the top 8 bytes of the stack */
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+@@ -31,14 +31,109 @@
+ 	 * we don't know which
+ 	 */
+ 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10),
+-	/* dereference it */
++	/* dereference it for a stack read */
++	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.result_unpriv = REJECT,
++	.errstr_unpriv = "R2 variable stack access prohibited for !root",
++	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
++},
++{
++	"variable-offset stack read, uninitialized",
++	.insns = {
++	/* Get an unknown value */
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
++	/* Make it small and 4-byte aligned */
++	BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4),
++	BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 8),
++	/* add it to fp.  We now have either fp-4 or fp-8, but
++	 * we don't know which
++	 */
++	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10),
++	/* dereference it for a stack read */
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "variable stack access var_off=(0xfffffffffffffff8; 0x4)",
+ 	.result = REJECT,
++	.errstr = "invalid variable-offset read from stack R2",
+ 	.prog_type = BPF_PROG_TYPE_LWT_IN,
+ },
++{
++	"variable-offset stack write, priv vs unpriv",
++	.insns = {
++	/* Get an unknown value */
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
++	/* Make it small and 8-byte aligned */
++	BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 8),
++	BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 16),
++	/* Add it to fp.  We now have either fp-8 or fp-16, but
++	 * we don't know which
++	 */
++	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10),
++	/* Dereference it for a stack write */
++	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
++	/* Now read from the address we just wrote. This shows
++	 * that, after a variable-offset write, a priviledged
++	 * program can read the slots that were in the range of
++	 * that write (even if the verifier doesn't actually know
++	 * if the slot being read was really written to or not.
++	 */
++	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_2, 0),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	/* Variable stack access is rejected for unprivileged.
++	 */
++	.errstr_unpriv = "R2 variable stack access prohibited for !root",
++	.result_unpriv = REJECT,
++	.result = ACCEPT,
++},
++{
++	"variable-offset stack write clobbers spilled regs",
++	.insns = {
++	/* Dummy instruction; needed because we need to patch the next one
++	 * and we can't patch the first instruction.
++	 */
++	BPF_MOV64_IMM(BPF_REG_6, 0),
++	/* Make R0 a map ptr */
++	BPF_LD_MAP_FD(BPF_REG_0, 0),
++	/* Get an unknown value */
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
++	/* Make it small and 8-byte aligned */
++	BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 8),
++	BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 16),
++	/* Add it to fp. We now have either fp-8 or fp-16, but
++	 * we don't know which.
++	 */
++	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10),
++	/* Spill R0(map ptr) into stack */
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
++	/* Dereference the unknown value for a stack write */
++	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
++	/* Fill the register back into R2 */
++	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8),
++	/* Try to dereference R2 for a memory load */
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 8),
++	BPF_EXIT_INSN(),
++	},
++	.fixup_map_hash_8b = { 1 },
++	/* The unpriviledged case is not too interesting; variable
++	 * stack access is rejected.
++	 */
++	.errstr_unpriv = "R2 variable stack access prohibited for !root",
++	.result_unpriv = REJECT,
++	/* In the priviledged case, dereferencing a spilled-and-then-filled
++	 * register is rejected because the previous variable offset stack
++	 * write might have overwritten the spilled pointer (i.e. we lose track
++	 * of the spilled register when we analyze the write).
++	 */
++	.errstr = "R2 invalid mem access 'inv'",
++	.result = REJECT,
++},
+ {
+ 	"indirect variable-offset stack access, unbounded",
+ 	.insns = {
+@@ -63,7 +158,7 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "R4 unbounded indirect variable offset stack access",
++	.errstr = "invalid unbounded variable-offset indirect access to stack R4",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_SOCK_OPS,
+ },
+@@ -88,7 +183,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_8b = { 5 },
+-	.errstr = "R2 max value is outside of stack bound",
++	.errstr = "invalid variable-offset indirect access to stack R2",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_LWT_IN,
+ },
+@@ -113,7 +208,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_8b = { 5 },
+-	.errstr = "R2 min value is outside of stack bound",
++	.errstr = "invalid variable-offset indirect access to stack R2",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_LWT_IN,
+ },
+@@ -138,7 +233,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_8b = { 5 },
+-	.errstr = "invalid indirect read from stack var_off",
++	.errstr = "invalid indirect read from stack R2 var_off",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_LWT_IN,
+ },
+@@ -163,7 +258,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_8b = { 5 },
+-	.errstr = "invalid indirect read from stack var_off",
++	.errstr = "invalid indirect read from stack R2 var_off",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_LWT_IN,
+ },
+@@ -189,7 +284,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_8b = { 6 },
+-	.errstr_unpriv = "R2 stack pointer arithmetic goes out of range, prohibited for !root",
++	.errstr_unpriv = "R2 variable stack access prohibited for !root",
+ 	.result_unpriv = REJECT,
+ 	.result = ACCEPT,
+ 	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+@@ -217,7 +312,7 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid indirect read from stack var_off",
++	.errstr = "invalid indirect read from stack R4 var_off",
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_SOCK_OPS,
+ },


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-10 11:49 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-10 11:49 UTC (permalink / raw
  To: gentoo-commits

commit:     3a5b30d38a0dcc2a21213fcffb3cb4a3dfe454d8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug  3 22:49:56 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug 10 11:48:54 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3a5b30d3

Add CONFIG_RELOCATABLE when selecting RANDOMIZE_BASE

Redo menu's to make more user-friendly

Bug: https://bugs.gentoo.org/806300

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 51 ++++++++++++++++++++++------------------
 1 file changed, 28 insertions(+), 23 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index fa005e6..429e9d4 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-07-04 10:53:51.006624416 -0400
-+++ b/distro/Kconfig	2021-07-04 11:07:33.534248860 -0400
-@@ -0,0 +1,263 @@
+--- /dev/null	2021-08-03 06:44:27.767516067 -0400
++++ b/distro/Kconfig	2021-08-03 18:43:33.303563865 -0400
+@@ -0,0 +1,268 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -166,11 +166,22 @@
 +
 +endmenu
 +
-+menu "Enable Kernel Self Protection Project Recommendations"
-+	visible if GENTOO_LINUX
++menuconfig GENTOO_KERNEL_SELF_PROTECTION
++	bool "Kernel Self Protection Project"
++	depends on GENTOO_LINUX
++	help
++  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
++		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
++		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
++		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_COMMON and search for 
++		GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for dependency information on your 
++		specific architecture.
++		Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
++		for X86_64
 +
-+config GENTOO_KERNEL_SELF_PROTECTION
-+	bool "Architecture Independant Kernel Self Protection Project Recommendations"
++if GENTOO_KERNEL_SELF_PROTECTION
++config GENTOO_KERNEL_SELF_PROTECTION_COMMON
++	bool "Enable Kernel Self Protection Project Recommendations"
 +
 +	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL
 +
@@ -214,26 +225,21 @@
 +	select GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
 +
 +	help
-+  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
-+		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
-+		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
-+		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for 
-+		dependency information on your specific architecture.
-+		Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
-+		for X86_64
-+
-+menu "Architecture Specific Self Protection Project Recommendations"
++		Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for dependency 
++		information on your specific architecture.  Note 2: Please see the URL above for 
++		numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 for X86_64
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_X86_64
-+	bool "X86_64 KSPP Settings"
++	bool "X86_64 KSPP Settings" if GENTOO_KERNEL_SELF_PROTECTION_COMMON
 +
-+	depends on !X86_MSR && X86_64
++	depends on !X86_MSR && X86_64 && GENTOO_KERNEL_SELF_PROTECTION
 +	default n
 +	
 +	select RANDOMIZE_BASE
 +	select RANDOMIZE_MEMORY
++	select RELOCATABLE
 +	select LEGACY_VSYSCALL_NONE
-+ select PAGE_TABLE_ISOLATION
++ 	select PAGE_TABLE_ISOLATION
 +
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_ARM64
@@ -243,6 +249,7 @@
 +	default n
 +
 +	select RANDOMIZE_BASE
++	select RELOCATABLE
 +	select ARM64_SW_TTBR0_PAN
 +	select CONFIG_UNMAP_KERNEL_AT_EL0
 +
@@ -255,6 +262,7 @@
 +	select HIGHMEM64G
 +	select X86_PAE
 +	select RANDOMIZE_BASE
++	select RELOCATABLE
 +	select PAGE_TABLE_ISOLATION
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_ARM
@@ -267,10 +275,7 @@
 +	select STRICT_MEMORY_RWX
 +	select CPU_SW_DOMAIN_PAN
 +
-+endmenu
-+
-+endmenu
-+
++endif
 +endmenu
 diff --git a/security/Kconfig b/security/Kconfig
 index 7561f6f99..01f0bf73f 100644


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-10 11:49 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-10 11:49 UTC (permalink / raw
  To: gentoo-commits

commit:     28db68741312acc0eb9935bd4183084892f7cf1b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug  9 23:18:23 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug 10 11:49:24 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=28db6874

Fix GCC_PLUGINS depends

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 429e9d4..864f86a 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-08-03 06:44:27.767516067 -0400
-+++ b/distro/Kconfig	2021-08-03 18:43:33.303563865 -0400
-@@ -0,0 +1,268 @@
+--- /dev/null	2021-08-09 07:18:54.945580285 -0400
++++ b/distro/Kconfig	2021-08-09 19:15:34.418191114 -0400
+@@ -0,0 +1,267 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -170,7 +170,7 @@
 +	bool "Kernel Self Protection Project"
 +	depends on GENTOO_LINUX
 +	help
-+  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
++		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
 +		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
 +		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
 +		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_COMMON and search for 
@@ -183,7 +183,7 @@
 +config GENTOO_KERNEL_SELF_PROTECTION_COMMON
 +	bool "Enable Kernel Self Protection Project Recommendations"
 +
-+	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL
++	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL && GCC_PLUGINS
 +
 +	select BUG
 +	select STRICT_KERNEL_RWX
@@ -216,7 +216,6 @@
 +	select FORTIFY_SOURCE
 +	select SECURITY_DMESG_RESTRICT
 +	select PANIC_ON_OOPS
-+	select CONFIG_GCC_PLUGINS
 +	select GCC_PLUGIN_LATENT_ENTROPY
 +	select GCC_PLUGIN_STRUCTLEAK
 +	select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-12 11:53 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-12 11:53 UTC (permalink / raw
  To: gentoo-commits

commit:     f20fa456a51eda89450ac7fba83967c78b91fa02
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 12 11:53:41 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 12 11:53:41 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f20fa456

Linux patch 5.10.58

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1057_linux-5.10.58.patch | 4330 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4334 insertions(+)

diff --git a/0000_README b/0000_README
index 5f127c2..2552ce8 100644
--- a/0000_README
+++ b/0000_README
@@ -271,6 +271,10 @@ Patch:  1056_linux-5.10.57.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.57
 
+Patch:  1057_linux-5.10.58.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.58
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1057_linux-5.10.58.patch b/1057_linux-5.10.58.patch
new file mode 100644
index 0000000..b313099
--- /dev/null
+++ b/1057_linux-5.10.58.patch
@@ -0,0 +1,4330 @@
+diff --git a/Makefile b/Makefile
+index e9621a90e752f..232dee1140c11 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 57
++SUBLEVEL = 58
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
+index 4b2575f936d46..cb64e4797d2a8 100644
+--- a/arch/alpha/kernel/smp.c
++++ b/arch/alpha/kernel/smp.c
+@@ -582,7 +582,7 @@ void
+ smp_send_stop(void)
+ {
+ 	cpumask_t to_whom;
+-	cpumask_copy(&to_whom, cpu_possible_mask);
++	cpumask_copy(&to_whom, cpu_online_mask);
+ 	cpumask_clear_cpu(smp_processor_id(), &to_whom);
+ #ifdef DEBUG_IPI_MSG
+ 	if (hard_smp_processor_id() != boot_cpu_id)
+diff --git a/arch/arm/boot/dts/am437x-l4.dtsi b/arch/arm/boot/dts/am437x-l4.dtsi
+index 370c4e64676f6..86bf668f3848e 100644
+--- a/arch/arm/boot/dts/am437x-l4.dtsi
++++ b/arch/arm/boot/dts/am437x-l4.dtsi
+@@ -1576,7 +1576,7 @@
+ 				compatible = "ti,am4372-d_can", "ti,am3352-d_can";
+ 				reg = <0x0 0x2000>;
+ 				clocks = <&dcan1_fck>;
+-				clock-name = "fck";
++				clock-names = "fck";
+ 				syscon-raminit = <&scm_conf 0x644 1>;
+ 				interrupts = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>;
+ 				status = "disabled";
+diff --git a/arch/arm/boot/dts/imx53-m53menlo.dts b/arch/arm/boot/dts/imx53-m53menlo.dts
+index f98691ae4415b..d3082b9774e40 100644
+--- a/arch/arm/boot/dts/imx53-m53menlo.dts
++++ b/arch/arm/boot/dts/imx53-m53menlo.dts
+@@ -388,13 +388,13 @@
+ 
+ 		pinctrl_power_button: powerbutgrp {
+ 			fsl,pins = <
+-				MX53_PAD_SD2_DATA2__GPIO1_13		0x1e4
++				MX53_PAD_SD2_DATA0__GPIO1_15		0x1e4
+ 			>;
+ 		};
+ 
+ 		pinctrl_power_out: poweroutgrp {
+ 			fsl,pins = <
+-				MX53_PAD_SD2_DATA0__GPIO1_15		0x1e4
++				MX53_PAD_SD2_DATA2__GPIO1_13		0x1e4
+ 			>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-sr-som.dtsi b/arch/arm/boot/dts/imx6qdl-sr-som.dtsi
+index 7e4e5fd0143a1..c56337b63c3b4 100644
+--- a/arch/arm/boot/dts/imx6qdl-sr-som.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-sr-som.dtsi
+@@ -54,7 +54,13 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_microsom_enet_ar8035>;
+ 	phy-mode = "rgmii-id";
+-	phy-reset-duration = <2>;
++
++	/*
++	 * The PHY seems to require a long-enough reset duration to avoid
++	 * some rare issues where the PHY gets stuck in an inconsistent and
++	 * non-functional state at boot-up. 10ms proved to be fine .
++	 */
++	phy-reset-duration = <10>;
+ 	phy-reset-gpios = <&gpio4 15 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ 
+diff --git a/arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi b/arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi
+index a0545431b3dc3..9f1e38282bee7 100644
+--- a/arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi
++++ b/arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi
+@@ -43,6 +43,7 @@
+ 	assigned-clock-rates = <0>, <198000000>;
+ 	cap-power-off-card;
+ 	keep-power-in-suspend;
++	max-frequency = <25000000>;
+ 	mmc-pwrseq = <&wifi_pwrseq>;
+ 	no-1-8-v;
+ 	non-removable;
+diff --git a/arch/arm/boot/dts/omap5-board-common.dtsi b/arch/arm/boot/dts/omap5-board-common.dtsi
+index d8f13626cfd1b..3a8f102314758 100644
+--- a/arch/arm/boot/dts/omap5-board-common.dtsi
++++ b/arch/arm/boot/dts/omap5-board-common.dtsi
+@@ -30,14 +30,6 @@
+ 		regulator-max-microvolt = <5000000>;
+ 	};
+ 
+-	vdds_1v8_main: fixedregulator-vdds_1v8_main {
+-		compatible = "regulator-fixed";
+-		regulator-name = "vdds_1v8_main";
+-		vin-supply = <&smps7_reg>;
+-		regulator-min-microvolt = <1800000>;
+-		regulator-max-microvolt = <1800000>;
+-	};
+-
+ 	vmmcsd_fixed: fixedregulator-mmcsd {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vmmcsd_fixed";
+@@ -487,6 +479,7 @@
+ 					regulator-boot-on;
+ 				};
+ 
++				vdds_1v8_main:
+ 				smps7_reg: smps7 {
+ 					/* VDDS_1v8_OMAP over VDDS_1v8_MAIN */
+ 					regulator-name = "smps7";
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+index 59b3239bcd763..633079245601b 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -37,7 +37,7 @@
+ 		poll-interval = <20>;
+ 
+ 		/*
+-		 * The EXTi IRQ line 3 is shared with touchscreen and ethernet,
++		 * The EXTi IRQ line 3 is shared with ethernet,
+ 		 * so mark this as polled GPIO key.
+ 		 */
+ 		button-0 {
+@@ -46,6 +46,16 @@
+ 			gpios = <&gpiof 3 GPIO_ACTIVE_LOW>;
+ 		};
+ 
++		/*
++		 * The EXTi IRQ line 6 is shared with touchscreen,
++		 * so mark this as polled GPIO key.
++		 */
++		button-1 {
++			label = "TA2-GPIO-B";
++			linux,code = <KEY_B>;
++			gpios = <&gpiod 6 GPIO_ACTIVE_LOW>;
++		};
++
+ 		/*
+ 		 * The EXTi IRQ line 0 is shared with PMIC,
+ 		 * so mark this as polled GPIO key.
+@@ -60,13 +70,6 @@
+ 	gpio-keys {
+ 		compatible = "gpio-keys";
+ 
+-		button-1 {
+-			label = "TA2-GPIO-B";
+-			linux,code = <KEY_B>;
+-			gpios = <&gpiod 6 GPIO_ACTIVE_LOW>;
+-			wakeup-source;
+-		};
+-
+ 		button-3 {
+ 			label = "TA4-GPIO-D";
+ 			linux,code = <KEY_D>;
+@@ -82,6 +85,7 @@
+ 			label = "green:led5";
+ 			gpios = <&gpioc 6 GPIO_ACTIVE_HIGH>;
+ 			default-state = "off";
++			status = "disabled";
+ 		};
+ 
+ 		led-1 {
+@@ -185,8 +189,8 @@
+ 	touchscreen@38 {
+ 		compatible = "edt,edt-ft5406";
+ 		reg = <0x38>;
+-		interrupt-parent = <&gpiog>;
+-		interrupts = <2 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */
++		interrupt-parent = <&gpioc>;
++		interrupts = <6 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+index 8221bf69fefeb..000af71777017 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+@@ -133,6 +133,7 @@
+ 			reset-gpios = <&gpioh 3 GPIO_ACTIVE_LOW>;
+ 			reset-assert-us = <500>;
+ 			reset-deassert-us = <500>;
++			smsc,disable-energy-detect;
+ 			interrupt-parent = <&gpioi>;
+ 			interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
+ 		};
+diff --git a/arch/arm/mach-imx/mmdc.c b/arch/arm/mach-imx/mmdc.c
+index 0dfd0ae7a63dd..af12668d0bf51 100644
+--- a/arch/arm/mach-imx/mmdc.c
++++ b/arch/arm/mach-imx/mmdc.c
+@@ -103,6 +103,7 @@ struct mmdc_pmu {
+ 	struct perf_event *mmdc_events[MMDC_NUM_COUNTERS];
+ 	struct hlist_node node;
+ 	struct fsl_mmdc_devtype_data *devtype_data;
++	struct clk *mmdc_ipg_clk;
+ };
+ 
+ /*
+@@ -462,11 +463,14 @@ static int imx_mmdc_remove(struct platform_device *pdev)
+ 
+ 	cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
+ 	perf_pmu_unregister(&pmu_mmdc->pmu);
++	iounmap(pmu_mmdc->mmdc_base);
++	clk_disable_unprepare(pmu_mmdc->mmdc_ipg_clk);
+ 	kfree(pmu_mmdc);
+ 	return 0;
+ }
+ 
+-static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_base)
++static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_base,
++			      struct clk *mmdc_ipg_clk)
+ {
+ 	struct mmdc_pmu *pmu_mmdc;
+ 	char *name;
+@@ -494,6 +498,7 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
+ 	}
+ 
+ 	mmdc_num = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev);
++	pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
+ 	if (mmdc_num == 0)
+ 		name = "mmdc";
+ 	else
+@@ -529,7 +534,7 @@ pmu_free:
+ 
+ #else
+ #define imx_mmdc_remove NULL
+-#define imx_mmdc_perf_init(pdev, mmdc_base) 0
++#define imx_mmdc_perf_init(pdev, mmdc_base, mmdc_ipg_clk) 0
+ #endif
+ 
+ static int imx_mmdc_probe(struct platform_device *pdev)
+@@ -567,7 +572,13 @@ static int imx_mmdc_probe(struct platform_device *pdev)
+ 	val &= ~(1 << BP_MMDC_MAPSR_PSD);
+ 	writel_relaxed(val, reg);
+ 
+-	return imx_mmdc_perf_init(pdev, mmdc_base);
++	err = imx_mmdc_perf_init(pdev, mmdc_base, mmdc_ipg_clk);
++	if (err) {
++		iounmap(mmdc_base);
++		clk_disable_unprepare(mmdc_ipg_clk);
++	}
++
++	return err;
+ }
+ 
+ int imx_mmdc_get_ddr_type(void)
+diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
+index 15b29a179c8ad..83d595ebcf1f6 100644
+--- a/arch/arm/mach-omap2/omap_hwmod.c
++++ b/arch/arm/mach-omap2/omap_hwmod.c
+@@ -3777,6 +3777,7 @@ struct powerdomain *omap_hwmod_get_pwrdm(struct omap_hwmod *oh)
+ 	struct omap_hwmod_ocp_if *oi;
+ 	struct clockdomain *clkdm;
+ 	struct clk_hw_omap *clk;
++	struct clk_hw *hw;
+ 
+ 	if (!oh)
+ 		return NULL;
+@@ -3793,7 +3794,14 @@ struct powerdomain *omap_hwmod_get_pwrdm(struct omap_hwmod *oh)
+ 		c = oi->_clk;
+ 	}
+ 
+-	clk = to_clk_hw_omap(__clk_get_hw(c));
++	hw = __clk_get_hw(c);
++	if (!hw)
++		return NULL;
++
++	clk = to_clk_hw_omap(hw);
++	if (!clk)
++		return NULL;
++
+ 	clkdm = clk->clkdm;
+ 	if (!clkdm)
+ 		return NULL;
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts b/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts
+index dd764b720fb0a..f6a79c8080d14 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts
+@@ -54,6 +54,7 @@
+ 
+ &mscc_felix_port0 {
+ 	label = "swp0";
++	managed = "in-band-status";
+ 	phy-handle = <&phy0>;
+ 	phy-mode = "sgmii";
+ 	status = "okay";
+@@ -61,6 +62,7 @@
+ 
+ &mscc_felix_port1 {
+ 	label = "swp1";
++	managed = "in-band-status";
+ 	phy-handle = <&phy1>;
+ 	phy-mode = "sgmii";
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+index f3b58bb9b8408..5f42904d53ab6 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+@@ -69,7 +69,7 @@
+ 		};
+ 	};
+ 
+-	sysclk: clock-sysclk {
++	sysclk: sysclk {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <100000000>;
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index 389aebdb35f17..bbd34ae12a53b 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -19,6 +19,8 @@
+ 	aliases {
+ 		spi0 = &spi0;
+ 		ethernet1 = &eth1;
++		mmc0 = &sdhci0;
++		mmc1 = &sdhci1;
+ 	};
+ 
+ 	chosen {
+@@ -118,6 +120,7 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&i2c1_pins>;
+ 	clock-frequency = <100000>;
++	/delete-property/ mrvl,i2c-fast-mode;
+ 	status = "okay";
+ 
+ 	rtc@6f {
+diff --git a/arch/arm64/include/asm/arch_timer.h b/arch/arm64/include/asm/arch_timer.h
+index 9f0ec21d6327f..88d20f04c64a5 100644
+--- a/arch/arm64/include/asm/arch_timer.h
++++ b/arch/arm64/include/asm/arch_timer.h
+@@ -165,25 +165,6 @@ static inline void arch_timer_set_cntkctl(u32 cntkctl)
+ 	isb();
+ }
+ 
+-/*
+- * Ensure that reads of the counter are treated the same as memory reads
+- * for the purposes of ordering by subsequent memory barriers.
+- *
+- * This insanity brought to you by speculative system register reads,
+- * out-of-order memory accesses, sequence locks and Thomas Gleixner.
+- *
+- * http://lists.infradead.org/pipermail/linux-arm-kernel/2019-February/631195.html
+- */
+-#define arch_counter_enforce_ordering(val) do {				\
+-	u64 tmp, _val = (val);						\
+-									\
+-	asm volatile(							\
+-	"	eor	%0, %1, %1\n"					\
+-	"	add	%0, sp, %0\n"					\
+-	"	ldr	xzr, [%0]"					\
+-	: "=r" (tmp) : "r" (_val));					\
+-} while (0)
+-
+ static __always_inline u64 __arch_counter_get_cntpct_stable(void)
+ {
+ 	u64 cnt;
+@@ -224,8 +205,6 @@ static __always_inline u64 __arch_counter_get_cntvct(void)
+ 	return cnt;
+ }
+ 
+-#undef arch_counter_enforce_ordering
+-
+ static inline int arch_timer_arch_init(void)
+ {
+ 	return 0;
+diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
+index c3009b0e52393..37d891af8ea53 100644
+--- a/arch/arm64/include/asm/barrier.h
++++ b/arch/arm64/include/asm/barrier.h
+@@ -70,6 +70,25 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,
+ 	return mask;
+ }
+ 
++/*
++ * Ensure that reads of the counter are treated the same as memory reads
++ * for the purposes of ordering by subsequent memory barriers.
++ *
++ * This insanity brought to you by speculative system register reads,
++ * out-of-order memory accesses, sequence locks and Thomas Gleixner.
++ *
++ * http://lists.infradead.org/pipermail/linux-arm-kernel/2019-February/631195.html
++ */
++#define arch_counter_enforce_ordering(val) do {				\
++	u64 tmp, _val = (val);						\
++									\
++	asm volatile(							\
++	"	eor	%0, %1, %1\n"					\
++	"	add	%0, sp, %0\n"					\
++	"	ldr	xzr, [%0]"					\
++	: "=r" (tmp) : "r" (_val));					\
++} while (0)
++
+ #define __smp_mb()	dmb(ish)
+ #define __smp_rmb()	dmb(ishld)
+ #define __smp_wmb()	dmb(ishst)
+diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
+index 28c85b87b8cd4..d3106f5e121f9 100644
+--- a/arch/arm64/include/asm/ptrace.h
++++ b/arch/arm64/include/asm/ptrace.h
+@@ -316,7 +316,17 @@ static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
+ 
+ static inline unsigned long regs_return_value(struct pt_regs *regs)
+ {
+-	return regs->regs[0];
++	unsigned long val = regs->regs[0];
++
++	/*
++	 * Audit currently uses regs_return_value() instead of
++	 * syscall_get_return_value(). Apply the same sign-extension here until
++	 * audit is updated to use syscall_get_return_value().
++	 */
++	if (compat_user_mode(regs))
++		val = sign_extend64(val, 31);
++
++	return val;
+ }
+ 
+ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
+diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
+index cfc0672013f67..03e20895453a7 100644
+--- a/arch/arm64/include/asm/syscall.h
++++ b/arch/arm64/include/asm/syscall.h
+@@ -29,22 +29,23 @@ static inline void syscall_rollback(struct task_struct *task,
+ 	regs->regs[0] = regs->orig_x0;
+ }
+ 
+-
+-static inline long syscall_get_error(struct task_struct *task,
+-				     struct pt_regs *regs)
++static inline long syscall_get_return_value(struct task_struct *task,
++					    struct pt_regs *regs)
+ {
+-	unsigned long error = regs->regs[0];
++	unsigned long val = regs->regs[0];
+ 
+ 	if (is_compat_thread(task_thread_info(task)))
+-		error = sign_extend64(error, 31);
++		val = sign_extend64(val, 31);
+ 
+-	return IS_ERR_VALUE(error) ? error : 0;
++	return val;
+ }
+ 
+-static inline long syscall_get_return_value(struct task_struct *task,
+-					    struct pt_regs *regs)
++static inline long syscall_get_error(struct task_struct *task,
++				     struct pt_regs *regs)
+ {
+-	return regs->regs[0];
++	unsigned long error = syscall_get_return_value(task, regs);
++
++	return IS_ERR_VALUE(error) ? error : 0;
+ }
+ 
+ static inline void syscall_set_return_value(struct task_struct *task,
+diff --git a/arch/arm64/include/asm/vdso/gettimeofday.h b/arch/arm64/include/asm/vdso/gettimeofday.h
+index 631ab12816335..4b4c0dac0e149 100644
+--- a/arch/arm64/include/asm/vdso/gettimeofday.h
++++ b/arch/arm64/include/asm/vdso/gettimeofday.h
+@@ -83,11 +83,7 @@ static __always_inline u64 __arch_get_hw_counter(s32 clock_mode,
+ 	 */
+ 	isb();
+ 	asm volatile("mrs %0, cntvct_el0" : "=r" (res) :: "memory");
+-	/*
+-	 * This isb() is required to prevent that the seq lock is
+-	 * speculated.#
+-	 */
+-	isb();
++	arch_counter_enforce_ordering(res);
+ 
+ 	return res;
+ }
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index 66256603bd596..2817e39881fee 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -1823,7 +1823,7 @@ void syscall_trace_exit(struct pt_regs *regs)
+ 	audit_syscall_exit(regs);
+ 
+ 	if (flags & _TIF_SYSCALL_TRACEPOINT)
+-		trace_sys_exit(regs, regs_return_value(regs));
++		trace_sys_exit(regs, syscall_get_return_value(current, regs));
+ 
+ 	if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
+ 		tracehook_report_syscall(regs, PTRACE_SYSCALL_EXIT);
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index 50852992752b0..e62005317ce29 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -29,6 +29,7 @@
+ #include <asm/unistd.h>
+ #include <asm/fpsimd.h>
+ #include <asm/ptrace.h>
++#include <asm/syscall.h>
+ #include <asm/signal32.h>
+ #include <asm/traps.h>
+ #include <asm/vdso.h>
+@@ -890,7 +891,7 @@ static void do_signal(struct pt_regs *regs)
+ 		     retval == -ERESTART_RESTARTBLOCK ||
+ 		     (retval == -ERESTARTSYS &&
+ 		      !(ksig.ka.sa.sa_flags & SA_RESTART)))) {
+-			regs->regs[0] = -EINTR;
++			syscall_set_return_value(current, regs, -EINTR, 0);
+ 			regs->pc = continue_addr;
+ 		}
+ 
+diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
+index dbce0dcf4cc06..c445828ecc3aa 100644
+--- a/arch/arm64/kernel/stacktrace.c
++++ b/arch/arm64/kernel/stacktrace.c
+@@ -199,7 +199,7 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
+ 
+ #ifdef CONFIG_STACKTRACE
+ 
+-noinline void arch_stack_walk(stack_trace_consume_fn consume_entry,
++noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
+ 			      void *cookie, struct task_struct *task,
+ 			      struct pt_regs *regs)
+ {
+diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
+index 6fa8cfb8232aa..befde0eaa5e79 100644
+--- a/arch/arm64/kernel/syscall.c
++++ b/arch/arm64/kernel/syscall.c
+@@ -50,10 +50,7 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
+ 		ret = do_ni_syscall(regs, scno);
+ 	}
+ 
+-	if (is_compat_task())
+-		ret = lower_32_bits(ret);
+-
+-	regs->regs[0] = ret;
++	syscall_set_return_value(current, regs, 0, ret);
+ }
+ 
+ static inline bool has_syscall_work(unsigned long flags)
+@@ -128,7 +125,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+ 		 * syscall. do_notify_resume() will send a signal to userspace
+ 		 * before the syscall is restarted.
+ 		 */
+-		regs->regs[0] = -ERESTARTNOINTR;
++		syscall_set_return_value(current, regs, -ERESTARTNOINTR, 0);
+ 		return;
+ 	}
+ 
+@@ -149,7 +146,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+ 		 * anyway.
+ 		 */
+ 		if (scno == NO_SYSCALL)
+-			regs->regs[0] = -ENOSYS;
++			syscall_set_return_value(current, regs, -ENOSYS, 0);
+ 		scno = syscall_trace_enter(regs);
+ 		if (scno == NO_SYSCALL)
+ 			goto trace_exit;
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index 686990fcc5f0f..acab8018ab440 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -320,7 +320,7 @@ KBUILD_LDFLAGS		+= -m $(ld-emul)
+ 
+ ifdef CONFIG_MIPS
+ CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
+-	egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \
++	egrep -vw '__GNUC_(MINOR_|PATCHLEVEL_)?_' | \
+ 	sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g')
+ endif
+ 
+diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
+index d0cf997b4ba84..139b4050259fa 100644
+--- a/arch/mips/include/asm/pgalloc.h
++++ b/arch/mips/include/asm/pgalloc.h
+@@ -59,15 +59,20 @@ do {							\
+ 
+ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
+ {
+-	pmd_t *pmd = NULL;
++	pmd_t *pmd;
+ 	struct page *pg;
+ 
+-	pg = alloc_pages(GFP_KERNEL | __GFP_ACCOUNT, PMD_ORDER);
+-	if (pg) {
+-		pgtable_pmd_page_ctor(pg);
+-		pmd = (pmd_t *)page_address(pg);
+-		pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
++	pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_ORDER);
++	if (!pg)
++		return NULL;
++
++	if (!pgtable_pmd_page_ctor(pg)) {
++		__free_pages(pg, PMD_ORDER);
++		return NULL;
+ 	}
++
++	pmd = (pmd_t *)page_address(pg);
++	pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
+ 	return pmd;
+ }
+ 
+diff --git a/arch/mips/mti-malta/malta-platform.c b/arch/mips/mti-malta/malta-platform.c
+index 11e9527c6e441..62ffac500eb52 100644
+--- a/arch/mips/mti-malta/malta-platform.c
++++ b/arch/mips/mti-malta/malta-platform.c
+@@ -47,7 +47,8 @@ static struct plat_serial8250_port uart8250_data[] = {
+ 		.mapbase	= 0x1f000900,	/* The CBUS UART */
+ 		.irq		= MIPS_CPU_IRQ_BASE + MIPSCPU_INT_MB2,
+ 		.uartclk	= 3686400,	/* Twice the usual clk! */
+-		.iotype		= UPIO_MEM32,
++		.iotype		= IS_ENABLED(CONFIG_CPU_BIG_ENDIAN) ?
++				  UPIO_MEM32BE : UPIO_MEM32,
+ 		.flags		= CBUS_UART_FLAGS,
+ 		.regshift	= 3,
+ 	},
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index f07d77cffb3c6..0c3b8fa7e5322 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -1009,9 +1009,10 @@ void x86_pmu_stop(struct perf_event *event, int flags);
+ 
+ static inline void x86_pmu_disable_event(struct perf_event *event)
+ {
++	u64 disable_mask = __this_cpu_read(cpu_hw_events.perf_ctr_virt_mask);
+ 	struct hw_perf_event *hwc = &event->hw;
+ 
+-	wrmsrl(hwc->config_base, hwc->config);
++	wrmsrl(hwc->config_base, hwc->config & ~disable_mask);
+ 
+ 	if (is_counter_pair(hwc))
+ 		wrmsrl(x86_pmu_config_addr(hwc->idx + 1), 0);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 7ca2da9028298..5e25d93ec7d08 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -1621,7 +1621,7 @@ static int is_empty_shadow_page(u64 *spt)
+  * aggregate version in order to make the slab shrinker
+  * faster
+  */
+-static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, unsigned long nr)
++static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
+ {
+ 	kvm->arch.n_used_mmu_pages += nr;
+ 	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 27faa00fff71c..6ab42cdcb8a44 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4100,8 +4100,17 @@ static int kvm_cpu_accept_dm_intr(struct kvm_vcpu *vcpu)
+ 
+ static int kvm_vcpu_ready_for_interrupt_injection(struct kvm_vcpu *vcpu)
+ {
+-	return kvm_arch_interrupt_allowed(vcpu) &&
+-		kvm_cpu_accept_dm_intr(vcpu);
++	/*
++	 * Do not cause an interrupt window exit if an exception
++	 * is pending or an event needs reinjection; userspace
++	 * might want to inject the interrupt manually using KVM_SET_REGS
++	 * or KVM_SET_SREGS.  For that to work, we must be at an
++	 * instruction boundary and with no events half-injected.
++	 */
++	return (kvm_arch_interrupt_allowed(vcpu) &&
++		kvm_cpu_accept_dm_intr(vcpu) &&
++		!kvm_event_needs_reinjection(vcpu) &&
++		!vcpu->arch.exception.pending);
+ }
+ 
+ static int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
+diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
+index 81be0096411da..d8b0d8bd132bc 100644
+--- a/block/blk-iolatency.c
++++ b/block/blk-iolatency.c
+@@ -833,7 +833,11 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
+ 
+ 	enable = iolatency_set_min_lat_nsec(blkg, lat_val);
+ 	if (enable) {
+-		WARN_ON_ONCE(!blk_get_queue(blkg->q));
++		if (!blk_get_queue(blkg->q)) {
++			ret = -ENODEV;
++			goto out;
++		}
++
+ 		blkg_get(blkg);
+ 	}
+ 
+diff --git a/drivers/acpi/acpica/nsrepair2.c b/drivers/acpi/acpica/nsrepair2.c
+index 8768594c79e58..125143c41bb81 100644
+--- a/drivers/acpi/acpica/nsrepair2.c
++++ b/drivers/acpi/acpica/nsrepair2.c
+@@ -375,13 +375,6 @@ acpi_ns_repair_CID(struct acpi_evaluate_info *info,
+ 
+ 			(*element_ptr)->common.reference_count =
+ 			    original_ref_count;
+-
+-			/*
+-			 * The original_element holds a reference from the package object
+-			 * that represents _HID. Since a new element was created by _HID,
+-			 * remove the reference from the _CID package.
+-			 */
+-			acpi_ut_remove_reference(original_element);
+ 		}
+ 
+ 		element_ptr++;
+diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c
+index ae7189d1a5682..b71ea4a680b01 100644
+--- a/drivers/ata/libata-sff.c
++++ b/drivers/ata/libata-sff.c
+@@ -637,6 +637,20 @@ unsigned int ata_sff_data_xfer32(struct ata_queued_cmd *qc, unsigned char *buf,
+ }
+ EXPORT_SYMBOL_GPL(ata_sff_data_xfer32);
+ 
++static void ata_pio_xfer(struct ata_queued_cmd *qc, struct page *page,
++		unsigned int offset, size_t xfer_size)
++{
++	bool do_write = (qc->tf.flags & ATA_TFLAG_WRITE);
++	unsigned char *buf;
++
++	buf = kmap_atomic(page);
++	qc->ap->ops->sff_data_xfer(qc, buf + offset, xfer_size, do_write);
++	kunmap_atomic(buf);
++
++	if (!do_write && !PageSlab(page))
++		flush_dcache_page(page);
++}
++
+ /**
+  *	ata_pio_sector - Transfer a sector of data.
+  *	@qc: Command on going
+@@ -648,11 +662,9 @@ EXPORT_SYMBOL_GPL(ata_sff_data_xfer32);
+  */
+ static void ata_pio_sector(struct ata_queued_cmd *qc)
+ {
+-	int do_write = (qc->tf.flags & ATA_TFLAG_WRITE);
+ 	struct ata_port *ap = qc->ap;
+ 	struct page *page;
+ 	unsigned int offset;
+-	unsigned char *buf;
+ 
+ 	if (!qc->cursg) {
+ 		qc->curbytes = qc->nbytes;
+@@ -670,13 +682,20 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
+ 
+ 	DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read");
+ 
+-	/* do the actual data transfer */
+-	buf = kmap_atomic(page);
+-	ap->ops->sff_data_xfer(qc, buf + offset, qc->sect_size, do_write);
+-	kunmap_atomic(buf);
++	/*
++	 * Split the transfer when it splits a page boundary.  Note that the
++	 * split still has to be dword aligned like all ATA data transfers.
++	 */
++	WARN_ON_ONCE(offset % 4);
++	if (offset + qc->sect_size > PAGE_SIZE) {
++		unsigned int split_len = PAGE_SIZE - offset;
+ 
+-	if (!do_write && !PageSlab(page))
+-		flush_dcache_page(page);
++		ata_pio_xfer(qc, page, offset, split_len);
++		ata_pio_xfer(qc, nth_page(page, 1), 0,
++			     qc->sect_size - split_len);
++	} else {
++		ata_pio_xfer(qc, page, offset, qc->sect_size);
++	}
+ 
+ 	qc->curbytes += qc->sect_size;
+ 	qc->cursg_ofs += qc->sect_size;
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 8d7f94ef0cfe0..85bb8742f0906 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -617,8 +617,6 @@ dev_groups_failed:
+ 	else if (drv->remove)
+ 		drv->remove(dev);
+ probe_failed:
+-	kfree(dev->dma_range_map);
+-	dev->dma_range_map = NULL;
+ 	if (dev->bus)
+ 		blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
+ 					     BUS_NOTIFY_DRIVER_NOT_BOUND, dev);
+@@ -626,6 +624,8 @@ pinctrl_bind_failed:
+ 	device_links_no_driver(dev);
+ 	devres_release_all(dev);
+ 	arch_teardown_dma_ops(dev);
++	kfree(dev->dma_range_map);
++	dev->dma_range_map = NULL;
+ 	driver_sysfs_remove(dev);
+ 	dev->driver = NULL;
+ 	dev_set_drvdata(dev, NULL);
+diff --git a/drivers/base/firmware_loader/fallback.c b/drivers/base/firmware_loader/fallback.c
+index 4dec4b79ae067..0fdd18ce2c52e 100644
+--- a/drivers/base/firmware_loader/fallback.c
++++ b/drivers/base/firmware_loader/fallback.c
+@@ -89,12 +89,11 @@ static void __fw_load_abort(struct fw_priv *fw_priv)
+ {
+ 	/*
+ 	 * There is a small window in which user can write to 'loading'
+-	 * between loading done and disappearance of 'loading'
++	 * between loading done/aborted and disappearance of 'loading'
+ 	 */
+-	if (fw_sysfs_done(fw_priv))
++	if (fw_state_is_aborted(fw_priv) || fw_sysfs_done(fw_priv))
+ 		return;
+ 
+-	list_del_init(&fw_priv->pending_list);
+ 	fw_state_aborted(fw_priv);
+ }
+ 
+@@ -280,7 +279,6 @@ static ssize_t firmware_loading_store(struct device *dev,
+ 			 * Same logic as fw_load_abort, only the DONE bit
+ 			 * is ignored and we set ABORT only on failure.
+ 			 */
+-			list_del_init(&fw_priv->pending_list);
+ 			if (rc) {
+ 				fw_state_aborted(fw_priv);
+ 				written = rc;
+@@ -513,6 +511,11 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, long timeout)
+ 	}
+ 
+ 	mutex_lock(&fw_lock);
++	if (fw_state_is_aborted(fw_priv)) {
++		mutex_unlock(&fw_lock);
++		retval = -EINTR;
++		goto out;
++	}
+ 	list_add(&fw_priv->pending_list, &pending_fw_head);
+ 	mutex_unlock(&fw_lock);
+ 
+@@ -535,11 +538,10 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, long timeout)
+ 	if (fw_state_is_aborted(fw_priv)) {
+ 		if (retval == -ERESTARTSYS)
+ 			retval = -EINTR;
+-		else
+-			retval = -EAGAIN;
+ 	} else if (fw_priv->is_paged_buf && !fw_priv->data)
+ 		retval = -ENOMEM;
+ 
++out:
+ 	device_del(f_dev);
+ err_put_dev:
+ 	put_device(f_dev);
+diff --git a/drivers/base/firmware_loader/firmware.h b/drivers/base/firmware_loader/firmware.h
+index 63bd29fdcb9c5..a3014e9e2c852 100644
+--- a/drivers/base/firmware_loader/firmware.h
++++ b/drivers/base/firmware_loader/firmware.h
+@@ -117,8 +117,16 @@ static inline void __fw_state_set(struct fw_priv *fw_priv,
+ 
+ 	WRITE_ONCE(fw_st->status, status);
+ 
+-	if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED)
++	if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED) {
++#ifdef CONFIG_FW_LOADER_USER_HELPER
++		/*
++		 * Doing this here ensures that the fw_priv is deleted from
++		 * the pending list in all abort/done paths.
++		 */
++		list_del_init(&fw_priv->pending_list);
++#endif
+ 		complete_all(&fw_st->completion);
++	}
+ }
+ 
+ static inline void fw_state_aborted(struct fw_priv *fw_priv)
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 78355095e00d6..a529235e6bfe9 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -781,8 +781,10 @@ static void fw_abort_batch_reqs(struct firmware *fw)
+ 		return;
+ 
+ 	fw_priv = fw->priv;
++	mutex_lock(&fw_lock);
+ 	if (!fw_state_is_aborted(fw_priv))
+ 		fw_state_aborted(fw_priv);
++	mutex_unlock(&fw_lock);
+ }
+ 
+ /* called from request_firmware() and request_firmware_work_func() */
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 818dc7f54f038..c3d8d44f28d75 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -100,6 +100,7 @@ static const char * const clock_names[SYSC_MAX_CLOCKS] = {
+  * @cookie: data used by legacy platform callbacks
+  * @name: name if available
+  * @revision: interconnect target module revision
++ * @reserved: target module is reserved and already in use
+  * @enabled: sysc runtime enabled status
+  * @needs_resume: runtime resume needed on resume from suspend
+  * @child_needs_resume: runtime resume needed for child on resume from suspend
+@@ -130,6 +131,7 @@ struct sysc {
+ 	struct ti_sysc_cookie cookie;
+ 	const char *name;
+ 	u32 revision;
++	unsigned int reserved:1;
+ 	unsigned int enabled:1;
+ 	unsigned int needs_resume:1;
+ 	unsigned int child_needs_resume:1;
+@@ -2918,6 +2920,8 @@ static int sysc_init_soc(struct sysc *ddata)
+ 		case SOC_3430 ... SOC_3630:
+ 			sysc_add_disabled(0x48304000);	/* timer12 */
+ 			break;
++		case SOC_AM3:
++			sysc_add_disabled(0x48310000);  /* rng */
+ 		default:
+ 			break;
+ 		};
+@@ -3057,8 +3061,8 @@ static int sysc_probe(struct platform_device *pdev)
+ 		return error;
+ 
+ 	error = sysc_check_active_timer(ddata);
+-	if (error)
+-		return error;
++	if (error == -EBUSY)
++		ddata->reserved = true;
+ 
+ 	error = sysc_get_clocks(ddata);
+ 	if (error)
+@@ -3094,11 +3098,15 @@ static int sysc_probe(struct platform_device *pdev)
+ 	sysc_show_registers(ddata);
+ 
+ 	ddata->dev->type = &sysc_device_type;
+-	error = of_platform_populate(ddata->dev->of_node, sysc_match_table,
+-				     pdata ? pdata->auxdata : NULL,
+-				     ddata->dev);
+-	if (error)
+-		goto err;
++
++	if (!ddata->reserved) {
++		error = of_platform_populate(ddata->dev->of_node,
++					     sysc_match_table,
++					     pdata ? pdata->auxdata : NULL,
++					     ddata->dev);
++		if (error)
++			goto err;
++	}
+ 
+ 	INIT_DELAYED_WORK(&ddata->idle_work, ti_sysc_idle);
+ 
+diff --git a/drivers/char/tpm/tpm_ftpm_tee.c b/drivers/char/tpm/tpm_ftpm_tee.c
+index 2ccdf8ac69948..6e3235565a4d8 100644
+--- a/drivers/char/tpm/tpm_ftpm_tee.c
++++ b/drivers/char/tpm/tpm_ftpm_tee.c
+@@ -254,11 +254,11 @@ static int ftpm_tee_probe(struct device *dev)
+ 	pvt_data->session = sess_arg.session;
+ 
+ 	/* Allocate dynamic shared memory with fTPM TA */
+-	pvt_data->shm = tee_shm_alloc(pvt_data->ctx,
+-				      MAX_COMMAND_SIZE + MAX_RESPONSE_SIZE,
+-				      TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
++	pvt_data->shm = tee_shm_alloc_kernel_buf(pvt_data->ctx,
++						 MAX_COMMAND_SIZE +
++						 MAX_RESPONSE_SIZE);
+ 	if (IS_ERR(pvt_data->shm)) {
+-		dev_err(dev, "%s: tee_shm_alloc failed\n", __func__);
++		dev_err(dev, "%s: tee_shm_alloc_kernel_buf failed\n", __func__);
+ 		rc = -ENOMEM;
+ 		goto out_shm_alloc;
+ 	}
+diff --git a/drivers/clk/clk-devres.c b/drivers/clk/clk-devres.c
+index be160764911bf..f9d5b73343417 100644
+--- a/drivers/clk/clk-devres.c
++++ b/drivers/clk/clk-devres.c
+@@ -92,13 +92,20 @@ int __must_check devm_clk_bulk_get_optional(struct device *dev, int num_clks,
+ }
+ EXPORT_SYMBOL_GPL(devm_clk_bulk_get_optional);
+ 
++static void devm_clk_bulk_release_all(struct device *dev, void *res)
++{
++	struct clk_bulk_devres *devres = res;
++
++	clk_bulk_put_all(devres->num_clks, devres->clks);
++}
++
+ int __must_check devm_clk_bulk_get_all(struct device *dev,
+ 				       struct clk_bulk_data **clks)
+ {
+ 	struct clk_bulk_devres *devres;
+ 	int ret;
+ 
+-	devres = devres_alloc(devm_clk_bulk_release,
++	devres = devres_alloc(devm_clk_bulk_release_all,
+ 			      sizeof(*devres), GFP_KERNEL);
+ 	if (!devres)
+ 		return -ENOMEM;
+diff --git a/drivers/clk/clk-stm32f4.c b/drivers/clk/clk-stm32f4.c
+index 18117ce5ff85f..5c75e3d906c20 100644
+--- a/drivers/clk/clk-stm32f4.c
++++ b/drivers/clk/clk-stm32f4.c
+@@ -526,7 +526,7 @@ struct stm32f4_pll {
+ 
+ struct stm32f4_pll_post_div_data {
+ 	int idx;
+-	u8 pll_num;
++	int pll_idx;
+ 	const char *name;
+ 	const char *parent;
+ 	u8 flag;
+@@ -557,13 +557,13 @@ static const struct clk_div_table post_divr_table[] = {
+ 
+ #define MAX_POST_DIV 3
+ static const struct stm32f4_pll_post_div_data  post_div_data[MAX_POST_DIV] = {
+-	{ CLK_I2SQ_PDIV, PLL_I2S, "plli2s-q-div", "plli2s-q",
++	{ CLK_I2SQ_PDIV, PLL_VCO_I2S, "plli2s-q-div", "plli2s-q",
+ 		CLK_SET_RATE_PARENT, STM32F4_RCC_DCKCFGR, 0, 5, 0, NULL},
+ 
+-	{ CLK_SAIQ_PDIV, PLL_SAI, "pllsai-q-div", "pllsai-q",
++	{ CLK_SAIQ_PDIV, PLL_VCO_SAI, "pllsai-q-div", "pllsai-q",
+ 		CLK_SET_RATE_PARENT, STM32F4_RCC_DCKCFGR, 8, 5, 0, NULL },
+ 
+-	{ NO_IDX, PLL_SAI, "pllsai-r-div", "pllsai-r", CLK_SET_RATE_PARENT,
++	{ NO_IDX, PLL_VCO_SAI, "pllsai-r-div", "pllsai-r", CLK_SET_RATE_PARENT,
+ 		STM32F4_RCC_DCKCFGR, 16, 2, 0, post_divr_table },
+ };
+ 
+@@ -1774,7 +1774,7 @@ static void __init stm32f4_rcc_init(struct device_node *np)
+ 				post_div->width,
+ 				post_div->flag_div,
+ 				post_div->div_table,
+-				clks[post_div->pll_num],
++				clks[post_div->pll_idx],
+ 				&stm32f4_clk_lock);
+ 
+ 		if (post_div->idx != NO_IDX)
+diff --git a/drivers/clk/tegra/clk-sdmmc-mux.c b/drivers/clk/tegra/clk-sdmmc-mux.c
+index 316912d3b1a4f..4f2c3309eea4d 100644
+--- a/drivers/clk/tegra/clk-sdmmc-mux.c
++++ b/drivers/clk/tegra/clk-sdmmc-mux.c
+@@ -194,6 +194,15 @@ static void clk_sdmmc_mux_disable(struct clk_hw *hw)
+ 	gate_ops->disable(gate_hw);
+ }
+ 
++static void clk_sdmmc_mux_disable_unused(struct clk_hw *hw)
++{
++	struct tegra_sdmmc_mux *sdmmc_mux = to_clk_sdmmc_mux(hw);
++	const struct clk_ops *gate_ops = sdmmc_mux->gate_ops;
++	struct clk_hw *gate_hw = &sdmmc_mux->gate.hw;
++
++	gate_ops->disable_unused(gate_hw);
++}
++
+ static void clk_sdmmc_mux_restore_context(struct clk_hw *hw)
+ {
+ 	struct clk_hw *parent = clk_hw_get_parent(hw);
+@@ -218,6 +227,7 @@ static const struct clk_ops tegra_clk_sdmmc_mux_ops = {
+ 	.is_enabled = clk_sdmmc_mux_is_enabled,
+ 	.enable = clk_sdmmc_mux_enable,
+ 	.disable = clk_sdmmc_mux_disable,
++	.disable_unused = clk_sdmmc_mux_disable_unused,
+ 	.restore_context = clk_sdmmc_mux_restore_context,
+ };
+ 
+diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c
+index 670db04b07571..52e361ed43e3b 100644
+--- a/drivers/dma/imx-dma.c
++++ b/drivers/dma/imx-dma.c
+@@ -831,6 +831,8 @@ static struct dma_async_tx_descriptor *imxdma_prep_slave_sg(
+ 		dma_length += sg_dma_len(sg);
+ 	}
+ 
++	imxdma_config_write(chan, &imxdmac->config, direction);
++
+ 	switch (imxdmac->word_size) {
+ 	case DMA_SLAVE_BUSWIDTH_4_BYTES:
+ 		if (sg_dma_len(sgl) & 3 || sgl->dma_address & 3)
+diff --git a/drivers/dma/stm32-dma.c b/drivers/dma/stm32-dma.c
+index d0055d2f0b9a4..1150aa90eab64 100644
+--- a/drivers/dma/stm32-dma.c
++++ b/drivers/dma/stm32-dma.c
+@@ -1187,7 +1187,7 @@ static int stm32_dma_alloc_chan_resources(struct dma_chan *c)
+ 
+ 	chan->config_init = false;
+ 
+-	ret = pm_runtime_get_sync(dmadev->ddev.dev);
++	ret = pm_runtime_resume_and_get(dmadev->ddev.dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1455,7 +1455,7 @@ static int stm32_dma_suspend(struct device *dev)
+ 	struct stm32_dma_device *dmadev = dev_get_drvdata(dev);
+ 	int id, ret, scr;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/dma/stm32-dmamux.c b/drivers/dma/stm32-dmamux.c
+index a10ccd964376f..bddd3b23f33fc 100644
+--- a/drivers/dma/stm32-dmamux.c
++++ b/drivers/dma/stm32-dmamux.c
+@@ -137,7 +137,7 @@ static void *stm32_dmamux_route_allocate(struct of_phandle_args *dma_spec,
+ 
+ 	/* Set dma request */
+ 	spin_lock_irqsave(&dmamux->lock, flags);
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0) {
+ 		spin_unlock_irqrestore(&dmamux->lock, flags);
+ 		goto error;
+@@ -336,7 +336,7 @@ static int stm32_dmamux_suspend(struct device *dev)
+ 	struct stm32_dmamux_data *stm32_dmamux = platform_get_drvdata(pdev);
+ 	int i, ret;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -361,7 +361,7 @@ static int stm32_dmamux_resume(struct device *dev)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/dma/uniphier-xdmac.c b/drivers/dma/uniphier-xdmac.c
+index 16b19654873df..d6b8a202474f4 100644
+--- a/drivers/dma/uniphier-xdmac.c
++++ b/drivers/dma/uniphier-xdmac.c
+@@ -209,8 +209,8 @@ static int uniphier_xdmac_chan_stop(struct uniphier_xdmac_chan *xc)
+ 	writel(0, xc->reg_ch_base + XDMAC_TSS);
+ 
+ 	/* wait until transfer is stopped */
+-	return readl_poll_timeout(xc->reg_ch_base + XDMAC_STAT, val,
+-				  !(val & XDMAC_STAT_TENF), 100, 1000);
++	return readl_poll_timeout_atomic(xc->reg_ch_base + XDMAC_STAT, val,
++					 !(val & XDMAC_STAT_TENF), 100, 1000);
+ }
+ 
+ /* xc->vc.lock must be held by caller */
+diff --git a/drivers/fpga/dfl-fme-perf.c b/drivers/fpga/dfl-fme-perf.c
+index 531266287eeee..329b03244fd2e 100644
+--- a/drivers/fpga/dfl-fme-perf.c
++++ b/drivers/fpga/dfl-fme-perf.c
+@@ -953,6 +953,8 @@ static int fme_perf_offline_cpu(unsigned int cpu, struct hlist_node *node)
+ 		return 0;
+ 
+ 	priv->cpu = target;
++	perf_pmu_migrate_context(&priv->pmu, cpu, target);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpio/gpio-mpc8xxx.c b/drivers/gpio/gpio-mpc8xxx.c
+index 6dfca83bcd908..3c2fa44d9279b 100644
+--- a/drivers/gpio/gpio-mpc8xxx.c
++++ b/drivers/gpio/gpio-mpc8xxx.c
+@@ -396,7 +396,7 @@ static int mpc8xxx_probe(struct platform_device *pdev)
+ 
+ 	ret = devm_request_irq(&pdev->dev, mpc8xxx_gc->irqn,
+ 			       mpc8xxx_gpio_irq_cascade,
+-			       IRQF_SHARED, "gpio-cascade",
++			       IRQF_NO_THREAD | IRQF_SHARED, "gpio-cascade",
+ 			       mpc8xxx_gc);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "%s: failed to devm_request_irq(%d), ret = %d\n",
+diff --git a/drivers/gpio/gpio-tqmx86.c b/drivers/gpio/gpio-tqmx86.c
+index 5022e0ad0faee..0f5d17f343f1e 100644
+--- a/drivers/gpio/gpio-tqmx86.c
++++ b/drivers/gpio/gpio-tqmx86.c
+@@ -238,8 +238,8 @@ static int tqmx86_gpio_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	int ret, irq;
+ 
+-	irq = platform_get_irq(pdev, 0);
+-	if (irq < 0)
++	irq = platform_get_irq_optional(pdev, 0);
++	if (irq < 0 && irq != -ENXIO)
+ 		return irq;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_IO, 0);
+@@ -278,7 +278,7 @@ static int tqmx86_gpio_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(&pdev->dev);
+ 
+-	if (irq) {
++	if (irq > 0) {
+ 		struct irq_chip *irq_chip = &gpio->irq_chip;
+ 		u8 irq_status;
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 6eb308670f487..bc9df3f216f56 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1301,6 +1301,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
+ 	}
+ 
+ 	hdr = (const struct dmcub_firmware_header_v1_0 *)adev->dm.dmub_fw->data;
++	adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version);
+ 
+ 	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+ 		adev->firmware.ucode[AMDGPU_UCODE_ID_DMCUB].ucode_id =
+@@ -1314,7 +1315,6 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
+ 			 adev->dm.dmcub_fw_version);
+ 	}
+ 
+-	adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version);
+ 
+ 	adev->dm.dmub_srv = kzalloc(sizeof(*adev->dm.dmub_srv), GFP_KERNEL);
+ 	dmub_srv = adev->dm.dmub_srv;
+@@ -2162,9 +2162,9 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
+ 	max_cll = conn_base->hdr_sink_metadata.hdmi_type1.max_cll;
+ 	min_cll = conn_base->hdr_sink_metadata.hdmi_type1.min_cll;
+ 
+-	if (caps->ext_caps->bits.oled == 1 ||
++	if (caps->ext_caps->bits.oled == 1 /*||
+ 	    caps->ext_caps->bits.sdr_aux_backlight_control == 1 ||
+-	    caps->ext_caps->bits.hdr_aux_backlight_control == 1)
++	    caps->ext_caps->bits.hdr_aux_backlight_control == 1*/)
+ 		caps->aux_support = true;
+ 
+ 	if (amdgpu_backlight == 0)
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+index e5ac0936a5871..0c083af5a59d5 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+@@ -2351,6 +2351,12 @@ static int eb_parse(struct i915_execbuffer *eb)
+ 		eb->batch_flags |= I915_DISPATCH_SECURE;
+ 	}
+ 
++	batch = eb_dispatch_secure(eb, shadow);
++	if (IS_ERR(batch)) {
++		err = PTR_ERR(batch);
++		goto err_trampoline;
++	}
++
+ 	err = intel_engine_cmd_parser(eb->engine,
+ 				      eb->batch->vma,
+ 				      eb->batch_start_offset,
+@@ -2377,6 +2383,7 @@ secure_batch:
+ err_unpin_batch:
+ 	if (batch)
+ 		i915_vma_unpin(batch);
++err_trampoline:
+ 	if (trampoline)
+ 		i915_vma_unpin(trampoline);
+ err_shadow:
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 5cd83eac940c3..ce8c91c5fdd3b 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -413,7 +413,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
+ #define GEN11_VECS_SFC_USAGE(engine)		_MMIO((engine)->mmio_base + 0x2014)
+ #define   GEN11_VECS_SFC_USAGE_BIT		(1 << 0)
+ 
+-#define GEN12_SFC_DONE(n)		_MMIO(0x1cc00 + (n) * 0x100)
++#define GEN12_SFC_DONE(n)		_MMIO(0x1cc000 + (n) * 0x1000)
+ #define GEN12_SFC_DONE_MAX		4
+ 
+ #define RING_PP_DIR_BASE(base)		_MMIO((base) + 0x228)
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 971694e781b65..19346693c1da4 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -526,8 +526,8 @@ static void __cache_work_func(struct mlx5_cache_ent *ent)
+ 		 */
+ 		spin_unlock_irq(&ent->lock);
+ 		need_delay = need_resched() || someone_adding(cache) ||
+-			     time_after(jiffies,
+-					READ_ONCE(cache->last_add) + 300 * HZ);
++			     !time_after(jiffies,
++					 READ_ONCE(cache->last_add) + 300 * HZ);
+ 		spin_lock_irq(&ent->lock);
+ 		if (ent->disabled)
+ 			goto out;
+diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
+index 8a1e70e008764..7887941730dbb 100644
+--- a/drivers/interconnect/core.c
++++ b/drivers/interconnect/core.c
+@@ -403,7 +403,7 @@ struct icc_path *devm_of_icc_get(struct device *dev, const char *name)
+ {
+ 	struct icc_path **ptr, *path;
+ 
+-	ptr = devres_alloc(devm_icc_release, sizeof(**ptr), GFP_KERNEL);
++	ptr = devres_alloc(devm_icc_release, sizeof(*ptr), GFP_KERNEL);
+ 	if (!ptr)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -973,9 +973,14 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider)
+ 	}
+ 	node->avg_bw = node->init_avg;
+ 	node->peak_bw = node->init_peak;
++
++	if (provider->pre_aggregate)
++		provider->pre_aggregate(node);
++
+ 	if (provider->aggregate)
+ 		provider->aggregate(node, 0, node->init_avg, node->init_peak,
+ 				    &node->avg_bw, &node->peak_bw);
++
+ 	provider->set(node, node);
+ 	node->avg_bw = 0;
+ 	node->peak_bw = 0;
+@@ -1106,6 +1111,8 @@ void icc_sync_state(struct device *dev)
+ 		dev_dbg(p->dev, "interconnect provider is in synced state\n");
+ 		list_for_each_entry(n, &p->nodes, node_list) {
+ 			if (n->init_avg || n->init_peak) {
++				n->init_avg = 0;
++				n->init_peak = 0;
+ 				aggregate_requests(n);
+ 				p->set(n, n);
+ 			}
+diff --git a/drivers/interconnect/qcom/icc-rpmh.c b/drivers/interconnect/qcom/icc-rpmh.c
+index bf01d09dba6c4..f6fae64861ce8 100644
+--- a/drivers/interconnect/qcom/icc-rpmh.c
++++ b/drivers/interconnect/qcom/icc-rpmh.c
+@@ -57,6 +57,11 @@ int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+ 			qn->sum_avg[i] += avg_bw;
+ 			qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw);
+ 		}
++
++		if (node->init_avg || node->init_peak) {
++			qn->sum_avg[i] = max_t(u64, qn->sum_avg[i], node->init_avg);
++			qn->max_peak[i] = max_t(u64, qn->max_peak[i], node->init_peak);
++		}
+ 	}
+ 
+ 	*agg_avg += avg_bw;
+@@ -79,7 +84,6 @@ EXPORT_SYMBOL_GPL(qcom_icc_aggregate);
+ int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
+ {
+ 	struct qcom_icc_provider *qp;
+-	struct qcom_icc_node *qn;
+ 	struct icc_node *node;
+ 
+ 	if (!src)
+@@ -88,12 +92,6 @@ int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
+ 		node = src;
+ 
+ 	qp = to_qcom_provider(node->provider);
+-	qn = node->data;
+-
+-	qn->sum_avg[QCOM_ICC_BUCKET_AMC] = max_t(u64, qn->sum_avg[QCOM_ICC_BUCKET_AMC],
+-						 node->avg_bw);
+-	qn->max_peak[QCOM_ICC_BUCKET_AMC] = max_t(u64, qn->max_peak[QCOM_ICC_BUCKET_AMC],
+-						  node->peak_bw);
+ 
+ 	qcom_icc_bcm_voter_commit(qp->voter);
+ 
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index a6480568c7ebe..fb31e5dd54a6e 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -472,8 +472,6 @@ static void raid1_end_write_request(struct bio *bio)
+ 		/*
+ 		 * When the device is faulty, it is not necessary to
+ 		 * handle write error.
+-		 * For failfast, this is the only remaining device,
+-		 * We need to retry the write without FailFast.
+ 		 */
+ 		if (!test_bit(Faulty, &rdev->flags))
+ 			set_bit(R1BIO_WriteError, &r1_bio->state);
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 9f9d8b67b5dd1..70dccc3c9631d 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -470,12 +470,12 @@ static void raid10_end_write_request(struct bio *bio)
+ 			/*
+ 			 * When the device is faulty, it is not necessary to
+ 			 * handle write error.
+-			 * For failfast, this is the only remaining device,
+-			 * We need to retry the write without FailFast.
+ 			 */
+ 			if (!test_bit(Faulty, &rdev->flags))
+ 				set_bit(R10BIO_WriteError, &r10_bio->state);
+ 			else {
++				/* Fail the request */
++				set_bit(R10BIO_Degraded, &r10_bio->state);
+ 				r10_bio->devs[slot].bio = NULL;
+ 				to_put = bio;
+ 				dec_rdev = 1;
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index 89e38392509c1..72350343a56a6 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -1573,6 +1573,7 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb,
+ 		  struct media_request *req)
+ {
+ 	struct vb2_buffer *vb;
++	enum vb2_buffer_state orig_state;
+ 	int ret;
+ 
+ 	if (q->error) {
+@@ -1673,6 +1674,7 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb,
+ 	 * Add to the queued buffers list, a buffer will stay on it until
+ 	 * dequeued in dqbuf.
+ 	 */
++	orig_state = vb->state;
+ 	list_add_tail(&vb->queued_entry, &q->queued_list);
+ 	q->queued_count++;
+ 	q->waiting_for_buffers = false;
+@@ -1703,8 +1705,17 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb,
+ 	if (q->streaming && !q->start_streaming_called &&
+ 	    q->queued_count >= q->min_buffers_needed) {
+ 		ret = vb2_start_streaming(q);
+-		if (ret)
++		if (ret) {
++			/*
++			 * Since vb2_core_qbuf will return with an error,
++			 * we should return it to state DEQUEUED since
++			 * the error indicates that the buffer wasn't queued.
++			 */
++			list_del(&vb->queued_entry);
++			q->queued_count--;
++			vb->state = orig_state;
+ 			return ret;
++		}
+ 	}
+ 
+ 	dprintk(q, 2, "qbuf of buffer %d succeeded\n", vb->index);
+diff --git a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
+index 91460e4d0c301..c278b9b0f1024 100644
+--- a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
++++ b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
+@@ -37,7 +37,16 @@ static int rtl28xxu_ctrl_msg(struct dvb_usb_device *d, struct rtl28xxu_req *req)
+ 	} else {
+ 		/* read */
+ 		requesttype = (USB_TYPE_VENDOR | USB_DIR_IN);
+-		pipe = usb_rcvctrlpipe(d->udev, 0);
++
++		/*
++		 * Zero-length transfers must use usb_sndctrlpipe() and
++		 * rtl28xxu_identify_state() uses a zero-length i2c read
++		 * command to determine the chip type.
++		 */
++		if (req->size)
++			pipe = usb_rcvctrlpipe(d->udev, 0);
++		else
++			pipe = usb_sndctrlpipe(d->udev, 0);
+ 	}
+ 
+ 	ret = usb_control_msg(d->udev, pipe, 0, requesttype, req->value,
+diff --git a/drivers/net/dsa/qca/ar9331.c b/drivers/net/dsa/qca/ar9331.c
+index 4d49c5f2b7905..661745932a539 100644
+--- a/drivers/net/dsa/qca/ar9331.c
++++ b/drivers/net/dsa/qca/ar9331.c
+@@ -689,16 +689,24 @@ static int ar9331_mdio_write(void *ctx, u32 reg, u32 val)
+ 		return 0;
+ 	}
+ 
+-	ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg, val);
++	/* In case of this switch we work with 32bit registers on top of 16bit
++	 * bus. Some registers (for example access to forwarding database) have
++	 * trigger bit on the first 16bit half of request, the result and
++	 * configuration of request in the second half.
++	 * To make it work properly, we should do the second part of transfer
++	 * before the first one is done.
++	 */
++	ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg + 2,
++				  val >> 16);
+ 	if (ret < 0)
+ 		goto error;
+ 
+-	ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg + 2,
+-				  val >> 16);
++	ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg, val);
+ 	if (ret < 0)
+ 		goto error;
+ 
+ 	return 0;
++
+ error:
+ 	dev_err_ratelimited(&sbus->dev, "Bus error. Failed to write register.\n");
+ 	return ret;
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index 82b918d361173..855371fcbf85c 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -1260,10 +1260,11 @@ static int sja1105et_is_fdb_entry_in_bin(struct sja1105_private *priv, int bin,
+ int sja1105et_fdb_add(struct dsa_switch *ds, int port,
+ 		      const unsigned char *addr, u16 vid)
+ {
+-	struct sja1105_l2_lookup_entry l2_lookup = {0};
++	struct sja1105_l2_lookup_entry l2_lookup = {0}, tmp;
+ 	struct sja1105_private *priv = ds->priv;
+ 	struct device *dev = ds->dev;
+ 	int last_unused = -1;
++	int start, end, i;
+ 	int bin, way, rc;
+ 
+ 	bin = sja1105et_fdb_hash(priv, addr, vid);
+@@ -1275,7 +1276,7 @@ int sja1105et_fdb_add(struct dsa_switch *ds, int port,
+ 		 * mask? If yes, we need to do nothing. If not, we need
+ 		 * to rewrite the entry by adding this port to it.
+ 		 */
+-		if (l2_lookup.destports & BIT(port))
++		if ((l2_lookup.destports & BIT(port)) && l2_lookup.lockeds)
+ 			return 0;
+ 		l2_lookup.destports |= BIT(port);
+ 	} else {
+@@ -1306,6 +1307,7 @@ int sja1105et_fdb_add(struct dsa_switch *ds, int port,
+ 						     index, NULL, false);
+ 		}
+ 	}
++	l2_lookup.lockeds = true;
+ 	l2_lookup.index = sja1105et_fdb_index(bin, way);
+ 
+ 	rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
+@@ -1314,6 +1316,29 @@ int sja1105et_fdb_add(struct dsa_switch *ds, int port,
+ 	if (rc < 0)
+ 		return rc;
+ 
++	/* Invalidate a dynamically learned entry if that exists */
++	start = sja1105et_fdb_index(bin, 0);
++	end = sja1105et_fdb_index(bin, way);
++
++	for (i = start; i < end; i++) {
++		rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
++						 i, &tmp);
++		if (rc == -ENOENT)
++			continue;
++		if (rc)
++			return rc;
++
++		if (tmp.macaddr != ether_addr_to_u64(addr) || tmp.vlanid != vid)
++			continue;
++
++		rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
++						  i, NULL, false);
++		if (rc)
++			return rc;
++
++		break;
++	}
++
+ 	return sja1105_static_fdb_change(priv, port, &l2_lookup, true);
+ }
+ 
+@@ -1355,31 +1380,24 @@ int sja1105et_fdb_del(struct dsa_switch *ds, int port,
+ int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port,
+ 			const unsigned char *addr, u16 vid)
+ {
+-	struct sja1105_l2_lookup_entry l2_lookup = {0};
++	struct sja1105_l2_lookup_entry l2_lookup = {0}, tmp;
+ 	struct sja1105_private *priv = ds->priv;
+ 	int rc, i;
+ 
+ 	/* Search for an existing entry in the FDB table */
+ 	l2_lookup.macaddr = ether_addr_to_u64(addr);
+ 	l2_lookup.vlanid = vid;
+-	l2_lookup.iotag = SJA1105_S_TAG;
+ 	l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
+-	if (priv->vlan_state != SJA1105_VLAN_UNAWARE) {
+-		l2_lookup.mask_vlanid = VLAN_VID_MASK;
+-		l2_lookup.mask_iotag = BIT(0);
+-	} else {
+-		l2_lookup.mask_vlanid = 0;
+-		l2_lookup.mask_iotag = 0;
+-	}
++	l2_lookup.mask_vlanid = VLAN_VID_MASK;
+ 	l2_lookup.destports = BIT(port);
+ 
+ 	rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
+ 					 SJA1105_SEARCH, &l2_lookup);
+ 	if (rc == 0) {
+-		/* Found and this port is already in the entry's
++		/* Found a static entry and this port is already in the entry's
+ 		 * port mask => job done
+ 		 */
+-		if (l2_lookup.destports & BIT(port))
++		if ((l2_lookup.destports & BIT(port)) && l2_lookup.lockeds)
+ 			return 0;
+ 		/* l2_lookup.index is populated by the switch in case it
+ 		 * found something.
+@@ -1402,16 +1420,46 @@ int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port,
+ 		dev_err(ds->dev, "FDB is full, cannot add entry.\n");
+ 		return -EINVAL;
+ 	}
+-	l2_lookup.lockeds = true;
+ 	l2_lookup.index = i;
+ 
+ skip_finding_an_index:
++	l2_lookup.lockeds = true;
++
+ 	rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
+ 					  l2_lookup.index, &l2_lookup,
+ 					  true);
+ 	if (rc < 0)
+ 		return rc;
+ 
++	/* The switch learns dynamic entries and looks up the FDB left to
++	 * right. It is possible that our addition was concurrent with the
++	 * dynamic learning of the same address, so now that the static entry
++	 * has been installed, we are certain that address learning for this
++	 * particular address has been turned off, so the dynamic entry either
++	 * is in the FDB at an index smaller than the static one, or isn't (it
++	 * can also be at a larger index, but in that case it is inactive
++	 * because the static FDB entry will match first, and the dynamic one
++	 * will eventually age out). Search for a dynamically learned address
++	 * prior to our static one and invalidate it.
++	 */
++	tmp = l2_lookup;
++
++	rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
++					 SJA1105_SEARCH, &tmp);
++	if (rc < 0) {
++		dev_err(ds->dev,
++			"port %d failed to read back entry for %pM vid %d: %pe\n",
++			port, addr, vid, ERR_PTR(rc));
++		return rc;
++	}
++
++	if (tmp.index < l2_lookup.index) {
++		rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
++						  tmp.index, NULL, false);
++		if (rc < 0)
++			return rc;
++	}
++
+ 	return sja1105_static_fdb_change(priv, port, &l2_lookup, true);
+ }
+ 
+@@ -1425,15 +1473,8 @@ int sja1105pqrs_fdb_del(struct dsa_switch *ds, int port,
+ 
+ 	l2_lookup.macaddr = ether_addr_to_u64(addr);
+ 	l2_lookup.vlanid = vid;
+-	l2_lookup.iotag = SJA1105_S_TAG;
+ 	l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
+-	if (priv->vlan_state != SJA1105_VLAN_UNAWARE) {
+-		l2_lookup.mask_vlanid = VLAN_VID_MASK;
+-		l2_lookup.mask_iotag = BIT(0);
+-	} else {
+-		l2_lookup.mask_vlanid = 0;
+-		l2_lookup.mask_iotag = 0;
+-	}
++	l2_lookup.mask_vlanid = VLAN_VID_MASK;
+ 	l2_lookup.destports = BIT(port);
+ 
+ 	rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index 1a6ec1a12d531..b5d954cb409ae 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -2669,7 +2669,8 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
+ 	}
+ 
+ 	/* Allocated memory for FW statistics  */
+-	if (bnx2x_alloc_fw_stats_mem(bp))
++	rc = bnx2x_alloc_fw_stats_mem(bp);
++	if (rc)
+ 		LOAD_ERROR_EXIT(bp, load_error0);
+ 
+ 	/* request pf to initialize status blocks */
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 2cb73e850a327..94eb838a01760 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3822,13 +3822,13 @@ fec_drv_remove(struct platform_device *pdev)
+ 	if (of_phy_is_fixed_link(np))
+ 		of_phy_deregister_fixed_link(np);
+ 	of_node_put(fep->phy_node);
+-	free_netdev(ndev);
+ 
+ 	clk_disable_unprepare(fep->clk_ahb);
+ 	clk_disable_unprepare(fep->clk_ipg);
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 
++	free_netdev(ndev);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/natsemi/natsemi.c b/drivers/net/ethernet/natsemi/natsemi.c
+index b81e1487945c8..14a17ad730f03 100644
+--- a/drivers/net/ethernet/natsemi/natsemi.c
++++ b/drivers/net/ethernet/natsemi/natsemi.c
+@@ -819,7 +819,7 @@ static int natsemi_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		printk(version);
+ #endif
+ 
+-	i = pci_enable_device(pdev);
++	i = pcim_enable_device(pdev);
+ 	if (i) return i;
+ 
+ 	/* natsemi has a non-standard PM control register
+@@ -852,7 +852,7 @@ static int natsemi_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	ioaddr = ioremap(iostart, iosize);
+ 	if (!ioaddr) {
+ 		i = -ENOMEM;
+-		goto err_ioremap;
++		goto err_pci_request_regions;
+ 	}
+ 
+ 	/* Work around the dropped serial bit. */
+@@ -974,9 +974,6 @@ static int natsemi_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
+  err_register_netdev:
+ 	iounmap(ioaddr);
+ 
+- err_ioremap:
+-	pci_release_regions(pdev);
+-
+  err_pci_request_regions:
+ 	free_netdev(dev);
+ 	return i;
+@@ -3241,7 +3238,6 @@ static void natsemi_remove1(struct pci_dev *pdev)
+ 
+ 	NATSEMI_REMOVE_FILE(pdev, dspcfg_workaround);
+ 	unregister_netdev (dev);
+-	pci_release_regions (pdev);
+ 	iounmap(ioaddr);
+ 	free_netdev (dev);
+ }
+diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.c b/drivers/net/ethernet/neterion/vxge/vxge-main.c
+index 87892bd992b18..56556373548c4 100644
+--- a/drivers/net/ethernet/neterion/vxge/vxge-main.c
++++ b/drivers/net/ethernet/neterion/vxge/vxge-main.c
+@@ -3527,13 +3527,13 @@ static void vxge_device_unregister(struct __vxge_hw_device *hldev)
+ 
+ 	kfree(vdev->vpaths);
+ 
+-	/* we are safe to free it now */
+-	free_netdev(dev);
+-
+ 	vxge_debug_init(vdev->level_trace, "%s: ethernet device unregistered",
+ 			buf);
+ 	vxge_debug_entryexit(vdev->level_trace,	"%s: %s:%d  Exiting...", buf,
+ 			     __func__, __LINE__);
++
++	/* we are safe to free it now */
++	free_netdev(dev);
+ }
+ 
+ /*
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index 9c9ae33d84ce9..c036a1d0f8de6 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -286,6 +286,8 @@ nfp_net_get_link_ksettings(struct net_device *netdev,
+ 
+ 	/* Init to unknowns */
+ 	ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
++	ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
++	ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
+ 	cmd->base.port = PORT_OTHER;
+ 	cmd->base.speed = SPEED_UNKNOWN;
+ 	cmd->base.duplex = DUPLEX_UNKNOWN;
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c
+index c59b72c902932..a2e4dfb5cb44e 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c
+@@ -831,7 +831,7 @@ int qede_configure_vlan_filters(struct qede_dev *edev)
+ int qede_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid)
+ {
+ 	struct qede_dev *edev = netdev_priv(dev);
+-	struct qede_vlan *vlan = NULL;
++	struct qede_vlan *vlan;
+ 	int rc = 0;
+ 
+ 	DP_VERBOSE(edev, NETIF_MSG_IFDOWN, "Removing vlan 0x%04x\n", vid);
+@@ -842,7 +842,7 @@ int qede_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid)
+ 		if (vlan->vid == vid)
+ 			break;
+ 
+-	if (!vlan || (vlan->vid != vid)) {
++	if (list_entry_is_head(vlan, &edev->vlan_list, list)) {
+ 		DP_VERBOSE(edev, (NETIF_MSG_IFUP | NETIF_MSG_IFDOWN),
+ 			   "Vlan isn't configured\n");
+ 		goto out;
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index a83b3d69a6565..c7923e22a4c42 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -154,7 +154,7 @@ static int ql_wait_for_drvr_lock(struct ql3_adapter *qdev)
+ 				      "driver lock acquired\n");
+ 			return 1;
+ 		}
+-		ssleep(1);
++		mdelay(1000);
+ 	} while (++i < 10);
+ 
+ 	netdev_err(qdev->ndev, "Timed out waiting for driver lock...\n");
+@@ -3290,7 +3290,7 @@ static int ql_adapter_reset(struct ql3_adapter *qdev)
+ 		if ((value & ISP_CONTROL_SR) == 0)
+ 			break;
+ 
+-		ssleep(1);
++		mdelay(1000);
+ 	} while ((--max_wait_time));
+ 
+ 	/*
+@@ -3326,7 +3326,7 @@ static int ql_adapter_reset(struct ql3_adapter *qdev)
+ 						   ispControlStatus);
+ 			if ((value & ISP_CONTROL_FSR) == 0)
+ 				break;
+-			ssleep(1);
++			mdelay(1000);
+ 		} while ((--max_wait_time));
+ 	}
+ 	if (max_wait_time == 0)
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 9b0bc8b74bc01..9a566c5b36a6a 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -349,11 +349,11 @@ static int ksz8041_config_aneg(struct phy_device *phydev)
+ }
+ 
+ static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev,
+-					    const u32 ksz_phy_id)
++					    const bool ksz_8051)
+ {
+ 	int ret;
+ 
+-	if ((phydev->phy_id & MICREL_PHY_ID_MASK) != ksz_phy_id)
++	if ((phydev->phy_id & MICREL_PHY_ID_MASK) != PHY_ID_KSZ8051)
+ 		return 0;
+ 
+ 	ret = phy_read(phydev, MII_BMSR);
+@@ -366,7 +366,7 @@ static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev,
+ 	 * the switch does not.
+ 	 */
+ 	ret &= BMSR_ERCAP;
+-	if (ksz_phy_id == PHY_ID_KSZ8051)
++	if (ksz_8051)
+ 		return ret;
+ 	else
+ 		return !ret;
+@@ -374,7 +374,7 @@ static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev,
+ 
+ static int ksz8051_match_phy_device(struct phy_device *phydev)
+ {
+-	return ksz8051_ksz8795_match_phy_device(phydev, PHY_ID_KSZ8051);
++	return ksz8051_ksz8795_match_phy_device(phydev, true);
+ }
+ 
+ static int ksz8081_config_init(struct phy_device *phydev)
+@@ -402,7 +402,7 @@ static int ksz8061_config_init(struct phy_device *phydev)
+ 
+ static int ksz8795_match_phy_device(struct phy_device *phydev)
+ {
+-	return ksz8051_ksz8795_match_phy_device(phydev, PHY_ID_KSZ87XX);
++	return ksz8051_ksz8795_match_phy_device(phydev, false);
+ }
+ 
+ static int ksz9021_load_values_from_of(struct phy_device *phydev,
+diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c
+index 32e1335c94ad0..0d7935924e580 100644
+--- a/drivers/net/usb/pegasus.c
++++ b/drivers/net/usb/pegasus.c
+@@ -736,12 +736,16 @@ static inline void disable_net_traffic(pegasus_t *pegasus)
+ 	set_registers(pegasus, EthCtrl0, sizeof(tmp), &tmp);
+ }
+ 
+-static inline void get_interrupt_interval(pegasus_t *pegasus)
++static inline int get_interrupt_interval(pegasus_t *pegasus)
+ {
+ 	u16 data;
+ 	u8 interval;
++	int ret;
++
++	ret = read_eprom_word(pegasus, 4, &data);
++	if (ret < 0)
++		return ret;
+ 
+-	read_eprom_word(pegasus, 4, &data);
+ 	interval = data >> 8;
+ 	if (pegasus->usb->speed != USB_SPEED_HIGH) {
+ 		if (interval < 0x80) {
+@@ -756,6 +760,8 @@ static inline void get_interrupt_interval(pegasus_t *pegasus)
+ 		}
+ 	}
+ 	pegasus->intr_interval = interval;
++
++	return 0;
+ }
+ 
+ static void set_carrier(struct net_device *net)
+@@ -1150,7 +1156,9 @@ static int pegasus_probe(struct usb_interface *intf,
+ 				| NETIF_MSG_PROBE | NETIF_MSG_LINK);
+ 
+ 	pegasus->features = usb_dev_id[dev_index].private;
+-	get_interrupt_interval(pegasus);
++	res = get_interrupt_interval(pegasus);
++	if (res)
++		goto out2;
+ 	if (reset_mac(pegasus)) {
+ 		dev_err(&intf->dev, "can't reset MAC\n");
+ 		res = -EIO;
+diff --git a/drivers/net/wireless/virt_wifi.c b/drivers/net/wireless/virt_wifi.c
+index 1df959532c7d3..514f2c1124b61 100644
+--- a/drivers/net/wireless/virt_wifi.c
++++ b/drivers/net/wireless/virt_wifi.c
+@@ -136,6 +136,29 @@ static struct ieee80211_supported_band band_5ghz = {
+ /* Assigned at module init. Guaranteed locally-administered and unicast. */
+ static u8 fake_router_bssid[ETH_ALEN] __ro_after_init = {};
+ 
++static void virt_wifi_inform_bss(struct wiphy *wiphy)
++{
++	u64 tsf = div_u64(ktime_get_boottime_ns(), 1000);
++	struct cfg80211_bss *informed_bss;
++	static const struct {
++		u8 tag;
++		u8 len;
++		u8 ssid[8];
++	} __packed ssid = {
++		.tag = WLAN_EID_SSID,
++		.len = 8,
++		.ssid = "VirtWifi",
++	};
++
++	informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz,
++					   CFG80211_BSS_FTYPE_PRESP,
++					   fake_router_bssid, tsf,
++					   WLAN_CAPABILITY_ESS, 0,
++					   (void *)&ssid, sizeof(ssid),
++					   DBM_TO_MBM(-50), GFP_KERNEL);
++	cfg80211_put_bss(wiphy, informed_bss);
++}
++
+ /* Called with the rtnl lock held. */
+ static int virt_wifi_scan(struct wiphy *wiphy,
+ 			  struct cfg80211_scan_request *request)
+@@ -156,28 +179,13 @@ static int virt_wifi_scan(struct wiphy *wiphy,
+ /* Acquires and releases the rdev BSS lock. */
+ static void virt_wifi_scan_result(struct work_struct *work)
+ {
+-	struct {
+-		u8 tag;
+-		u8 len;
+-		u8 ssid[8];
+-	} __packed ssid = {
+-		.tag = WLAN_EID_SSID, .len = 8, .ssid = "VirtWifi",
+-	};
+-	struct cfg80211_bss *informed_bss;
+ 	struct virt_wifi_wiphy_priv *priv =
+ 		container_of(work, struct virt_wifi_wiphy_priv,
+ 			     scan_result.work);
+ 	struct wiphy *wiphy = priv_to_wiphy(priv);
+ 	struct cfg80211_scan_info scan_info = { .aborted = false };
+-	u64 tsf = div_u64(ktime_get_boottime_ns(), 1000);
+ 
+-	informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz,
+-					   CFG80211_BSS_FTYPE_PRESP,
+-					   fake_router_bssid, tsf,
+-					   WLAN_CAPABILITY_ESS, 0,
+-					   (void *)&ssid, sizeof(ssid),
+-					   DBM_TO_MBM(-50), GFP_KERNEL);
+-	cfg80211_put_bss(wiphy, informed_bss);
++	virt_wifi_inform_bss(wiphy);
+ 
+ 	/* Schedules work which acquires and releases the rtnl lock. */
+ 	cfg80211_scan_done(priv->scan_request, &scan_info);
+@@ -225,10 +233,12 @@ static int virt_wifi_connect(struct wiphy *wiphy, struct net_device *netdev,
+ 	if (!could_schedule)
+ 		return -EBUSY;
+ 
+-	if (sme->bssid)
++	if (sme->bssid) {
+ 		ether_addr_copy(priv->connect_requested_bss, sme->bssid);
+-	else
++	} else {
++		virt_wifi_inform_bss(wiphy);
+ 		eth_zero_addr(priv->connect_requested_bss);
++	}
+ 
+ 	wiphy_debug(wiphy, "connect\n");
+ 
+@@ -241,11 +251,13 @@ static void virt_wifi_connect_complete(struct work_struct *work)
+ 	struct virt_wifi_netdev_priv *priv =
+ 		container_of(work, struct virt_wifi_netdev_priv, connect.work);
+ 	u8 *requested_bss = priv->connect_requested_bss;
+-	bool has_addr = !is_zero_ether_addr(requested_bss);
+ 	bool right_addr = ether_addr_equal(requested_bss, fake_router_bssid);
+ 	u16 status = WLAN_STATUS_SUCCESS;
+ 
+-	if (!priv->is_up || (has_addr && !right_addr))
++	if (is_zero_ether_addr(requested_bss))
++		requested_bss = NULL;
++
++	if (!priv->is_up || (requested_bss && !right_addr))
+ 		status = WLAN_STATUS_UNSPECIFIED_FAILURE;
+ 	else
+ 		priv->is_connected = true;
+diff --git a/drivers/pcmcia/i82092.c b/drivers/pcmcia/i82092.c
+index 85887d885b5f3..192c9049d654f 100644
+--- a/drivers/pcmcia/i82092.c
++++ b/drivers/pcmcia/i82092.c
+@@ -112,6 +112,7 @@ static int i82092aa_pci_probe(struct pci_dev *dev,
+ 	for (i = 0; i < socket_count; i++) {
+ 		sockets[i].card_state = 1; /* 1 = present but empty */
+ 		sockets[i].io_base = pci_resource_start(dev, 0);
++		sockets[i].dev = dev;
+ 		sockets[i].socket.features |= SS_CAP_PCCARD;
+ 		sockets[i].socket.map_size = 0x1000;
+ 		sockets[i].socket.irq_mask = 0;
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index 726b7048a7674..4cb4ab9c6137e 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -221,7 +221,7 @@ static unsigned int sr_get_events(struct scsi_device *sdev)
+ 	else if (med->media_event_code == 2)
+ 		return DISK_EVENT_MEDIA_CHANGE;
+ 	else if (med->media_event_code == 3)
+-		return DISK_EVENT_EJECT_REQUEST;
++		return DISK_EVENT_MEDIA_CHANGE;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/soc/ixp4xx/ixp4xx-npe.c b/drivers/soc/ixp4xx/ixp4xx-npe.c
+index ec90b44fa0cd3..6065aaab67403 100644
+--- a/drivers/soc/ixp4xx/ixp4xx-npe.c
++++ b/drivers/soc/ixp4xx/ixp4xx-npe.c
+@@ -690,8 +690,8 @@ static int ixp4xx_npe_probe(struct platform_device *pdev)
+ 
+ 		if (!(ixp4xx_read_feature_bits() &
+ 		      (IXP4XX_FEATURE_RESET_NPEA << i))) {
+-			dev_info(dev, "NPE%d at 0x%08x-0x%08x not available\n",
+-				 i, res->start, res->end);
++			dev_info(dev, "NPE%d at %pR not available\n",
++				 i, res);
+ 			continue; /* NPE already disabled or not present */
+ 		}
+ 		npe->regs = devm_ioremap_resource(dev, res);
+@@ -699,13 +699,12 @@ static int ixp4xx_npe_probe(struct platform_device *pdev)
+ 			return PTR_ERR(npe->regs);
+ 
+ 		if (npe_reset(npe)) {
+-			dev_info(dev, "NPE%d at 0x%08x-0x%08x does not reset\n",
+-				 i, res->start, res->end);
++			dev_info(dev, "NPE%d at %pR does not reset\n",
++				 i, res);
+ 			continue;
+ 		}
+ 		npe->valid = 1;
+-		dev_info(dev, "NPE%d at 0x%08x-0x%08x registered\n",
+-			 i, res->start, res->end);
++		dev_info(dev, "NPE%d at %pR registered\n", i, res);
+ 		found++;
+ 	}
+ 
+diff --git a/drivers/soc/ixp4xx/ixp4xx-qmgr.c b/drivers/soc/ixp4xx/ixp4xx-qmgr.c
+index 8c968382cea76..065a800717bd5 100644
+--- a/drivers/soc/ixp4xx/ixp4xx-qmgr.c
++++ b/drivers/soc/ixp4xx/ixp4xx-qmgr.c
+@@ -145,12 +145,12 @@ static irqreturn_t qmgr_irq1_a0(int irq, void *pdev)
+ 	/* ACK - it may clear any bits so don't rely on it */
+ 	__raw_writel(0xFFFFFFFF, &qmgr_regs->irqstat[0]);
+ 
+-	en_bitmap = qmgr_regs->irqen[0];
++	en_bitmap = __raw_readl(&qmgr_regs->irqen[0]);
+ 	while (en_bitmap) {
+ 		i = __fls(en_bitmap); /* number of the last "low" queue */
+ 		en_bitmap &= ~BIT(i);
+-		src = qmgr_regs->irqsrc[i >> 3];
+-		stat = qmgr_regs->stat1[i >> 3];
++		src = __raw_readl(&qmgr_regs->irqsrc[i >> 3]);
++		stat = __raw_readl(&qmgr_regs->stat1[i >> 3]);
+ 		if (src & 4) /* the IRQ condition is inverted */
+ 			stat = ~stat;
+ 		if (stat & BIT(src & 3)) {
+@@ -170,7 +170,8 @@ static irqreturn_t qmgr_irq2_a0(int irq, void *pdev)
+ 	/* ACK - it may clear any bits so don't rely on it */
+ 	__raw_writel(0xFFFFFFFF, &qmgr_regs->irqstat[1]);
+ 
+-	req_bitmap = qmgr_regs->irqen[1] & qmgr_regs->statne_h;
++	req_bitmap = __raw_readl(&qmgr_regs->irqen[1]) &
++		     __raw_readl(&qmgr_regs->statne_h);
+ 	while (req_bitmap) {
+ 		i = __fls(req_bitmap); /* number of the last "high" queue */
+ 		req_bitmap &= ~BIT(i);
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index c8b750d8ac35f..0e3bc0b0a5265 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -505,8 +505,10 @@ static int mx51_ecspi_prepare_message(struct spi_imx_data *spi_imx,
+ 				      struct spi_message *msg)
+ {
+ 	struct spi_device *spi = msg->spi;
++	struct spi_transfer *xfer;
+ 	u32 ctrl = MX51_ECSPI_CTRL_ENABLE;
+-	u32 testreg;
++	u32 min_speed_hz = ~0U;
++	u32 testreg, delay;
+ 	u32 cfg = readl(spi_imx->base + MX51_ECSPI_CONFIG);
+ 
+ 	/* set Master or Slave mode */
+@@ -567,6 +569,35 @@ static int mx51_ecspi_prepare_message(struct spi_imx_data *spi_imx,
+ 
+ 	writel(cfg, spi_imx->base + MX51_ECSPI_CONFIG);
+ 
++	/*
++	 * Wait until the changes in the configuration register CONFIGREG
++	 * propagate into the hardware. It takes exactly one tick of the
++	 * SCLK clock, but we will wait two SCLK clock just to be sure. The
++	 * effect of the delay it takes for the hardware to apply changes
++	 * is noticable if the SCLK clock run very slow. In such a case, if
++	 * the polarity of SCLK should be inverted, the GPIO ChipSelect might
++	 * be asserted before the SCLK polarity changes, which would disrupt
++	 * the SPI communication as the device on the other end would consider
++	 * the change of SCLK polarity as a clock tick already.
++	 *
++	 * Because spi_imx->spi_bus_clk is only set in bitbang prepare_message
++	 * callback, iterate over all the transfers in spi_message, find the
++	 * one with lowest bus frequency, and use that bus frequency for the
++	 * delay calculation. In case all transfers have speed_hz == 0, then
++	 * min_speed_hz is ~0 and the resulting delay is zero.
++	 */
++	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
++		if (!xfer->speed_hz)
++			continue;
++		min_speed_hz = min(xfer->speed_hz, min_speed_hz);
++	}
++
++	delay = (2 * 1000000) / min_speed_hz;
++	if (likely(delay < 10))	/* SCLK is faster than 100 kHz */
++		udelay(delay);
++	else			/* SCLK is _very_ slow */
++		usleep_range(delay, delay + 10);
++
+ 	return 0;
+ }
+ 
+@@ -574,7 +605,7 @@ static int mx51_ecspi_prepare_transfer(struct spi_imx_data *spi_imx,
+ 				       struct spi_device *spi)
+ {
+ 	u32 ctrl = readl(spi_imx->base + MX51_ECSPI_CTRL);
+-	u32 clk, delay;
++	u32 clk;
+ 
+ 	/* Clear BL field and set the right value */
+ 	ctrl &= ~MX51_ECSPI_CTRL_BL_MASK;
+@@ -596,23 +627,6 @@ static int mx51_ecspi_prepare_transfer(struct spi_imx_data *spi_imx,
+ 
+ 	writel(ctrl, spi_imx->base + MX51_ECSPI_CTRL);
+ 
+-	/*
+-	 * Wait until the changes in the configuration register CONFIGREG
+-	 * propagate into the hardware. It takes exactly one tick of the
+-	 * SCLK clock, but we will wait two SCLK clock just to be sure. The
+-	 * effect of the delay it takes for the hardware to apply changes
+-	 * is noticable if the SCLK clock run very slow. In such a case, if
+-	 * the polarity of SCLK should be inverted, the GPIO ChipSelect might
+-	 * be asserted before the SCLK polarity changes, which would disrupt
+-	 * the SPI communication as the device on the other end would consider
+-	 * the change of SCLK polarity as a clock tick already.
+-	 */
+-	delay = (2 * 1000000) / clk;
+-	if (likely(delay < 10))	/* SCLK is faster than 100 kHz */
+-		udelay(delay);
+-	else			/* SCLK is _very_ slow */
+-		usleep_range(delay, delay + 10);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index b2c4621db34d7..c208efeadd184 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -785,6 +785,8 @@ static int meson_spicc_remove(struct platform_device *pdev)
+ 	clk_disable_unprepare(spicc->core);
+ 	clk_disable_unprepare(spicc->pclk);
+ 
++	spi_master_put(spicc->master);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/staging/rtl8712/hal_init.c b/drivers/staging/rtl8712/hal_init.c
+index 22974277afa08..4eff3fdecdb8a 100644
+--- a/drivers/staging/rtl8712/hal_init.c
++++ b/drivers/staging/rtl8712/hal_init.c
+@@ -29,21 +29,31 @@
+ #define FWBUFF_ALIGN_SZ 512
+ #define MAX_DUMP_FWSZ (48 * 1024)
+ 
++static void rtl871x_load_fw_fail(struct _adapter *adapter)
++{
++	struct usb_device *udev = adapter->dvobjpriv.pusbdev;
++	struct device *dev = &udev->dev;
++	struct device *parent = dev->parent;
++
++	complete(&adapter->rtl8712_fw_ready);
++
++	dev_err(&udev->dev, "r8712u: Firmware request failed\n");
++
++	if (parent)
++		device_lock(parent);
++
++	device_release_driver(dev);
++
++	if (parent)
++		device_unlock(parent);
++}
++
+ static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context)
+ {
+ 	struct _adapter *adapter = context;
+ 
+ 	if (!firmware) {
+-		struct usb_device *udev = adapter->dvobjpriv.pusbdev;
+-		struct usb_interface *usb_intf = adapter->pusb_intf;
+-
+-		dev_err(&udev->dev, "r8712u: Firmware request failed\n");
+-		usb_put_dev(udev);
+-		usb_set_intfdata(usb_intf, NULL);
+-		r8712_free_drv_sw(adapter);
+-		adapter->dvobj_deinit(adapter);
+-		complete(&adapter->rtl8712_fw_ready);
+-		free_netdev(adapter->pnetdev);
++		rtl871x_load_fw_fail(adapter);
+ 		return;
+ 	}
+ 	adapter->fw = firmware;
+diff --git a/drivers/staging/rtl8712/rtl8712_led.c b/drivers/staging/rtl8712/rtl8712_led.c
+index 5901026949f25..d5fc9026b036e 100644
+--- a/drivers/staging/rtl8712/rtl8712_led.c
++++ b/drivers/staging/rtl8712/rtl8712_led.c
+@@ -1820,3 +1820,11 @@ void LedControl871x(struct _adapter *padapter, enum LED_CTL_MODE LedAction)
+ 		break;
+ 	}
+ }
++
++void r8712_flush_led_works(struct _adapter *padapter)
++{
++	struct led_priv *pledpriv = &padapter->ledpriv;
++
++	flush_work(&pledpriv->SwLed0.BlinkWorkItem);
++	flush_work(&pledpriv->SwLed1.BlinkWorkItem);
++}
+diff --git a/drivers/staging/rtl8712/rtl871x_led.h b/drivers/staging/rtl8712/rtl871x_led.h
+index ee19c873cf010..2f0768132ad8f 100644
+--- a/drivers/staging/rtl8712/rtl871x_led.h
++++ b/drivers/staging/rtl8712/rtl871x_led.h
+@@ -112,6 +112,7 @@ struct led_priv {
+ void r8712_InitSwLeds(struct _adapter *padapter);
+ void r8712_DeInitSwLeds(struct _adapter *padapter);
+ void LedControl871x(struct _adapter *padapter, enum LED_CTL_MODE LedAction);
++void r8712_flush_led_works(struct _adapter *padapter);
+ 
+ #endif
+ 
+diff --git a/drivers/staging/rtl8712/rtl871x_pwrctrl.c b/drivers/staging/rtl8712/rtl871x_pwrctrl.c
+index 23cff43437e21..cd6d9ff0bebca 100644
+--- a/drivers/staging/rtl8712/rtl871x_pwrctrl.c
++++ b/drivers/staging/rtl8712/rtl871x_pwrctrl.c
+@@ -224,3 +224,11 @@ void r8712_unregister_cmd_alive(struct _adapter *padapter)
+ 	}
+ 	mutex_unlock(&pwrctrl->mutex_lock);
+ }
++
++void r8712_flush_rwctrl_works(struct _adapter *padapter)
++{
++	struct pwrctrl_priv *pwrctrl = &padapter->pwrctrlpriv;
++
++	flush_work(&pwrctrl->SetPSModeWorkItem);
++	flush_work(&pwrctrl->rpwm_workitem);
++}
+diff --git a/drivers/staging/rtl8712/rtl871x_pwrctrl.h b/drivers/staging/rtl8712/rtl871x_pwrctrl.h
+index dd5a79f90b1a6..6eee6f1bdba4d 100644
+--- a/drivers/staging/rtl8712/rtl871x_pwrctrl.h
++++ b/drivers/staging/rtl8712/rtl871x_pwrctrl.h
+@@ -111,5 +111,6 @@ void r8712_cpwm_int_hdl(struct _adapter *padapter,
+ void r8712_set_ps_mode(struct _adapter *padapter, uint ps_mode,
+ 			uint smart_ps);
+ void r8712_set_rpwm(struct _adapter *padapter, u8 val8);
++void r8712_flush_rwctrl_works(struct _adapter *padapter);
+ 
+ #endif  /* __RTL871X_PWRCTRL_H_ */
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index b760bc3559373..17d28af0d0867 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -594,35 +594,30 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
+ {
+ 	struct net_device *pnetdev = usb_get_intfdata(pusb_intf);
+ 	struct usb_device *udev = interface_to_usbdev(pusb_intf);
++	struct _adapter *padapter = netdev_priv(pnetdev);
++
++	/* never exit with a firmware callback pending */
++	wait_for_completion(&padapter->rtl8712_fw_ready);
++	usb_set_intfdata(pusb_intf, NULL);
++	release_firmware(padapter->fw);
++	if (drvpriv.drv_registered)
++		padapter->surprise_removed = true;
++	if (pnetdev->reg_state != NETREG_UNINITIALIZED)
++		unregister_netdev(pnetdev); /* will call netdev_close() */
++	r8712_flush_rwctrl_works(padapter);
++	r8712_flush_led_works(padapter);
++	udelay(1);
++	/* Stop driver mlme relation timer */
++	r8712_stop_drv_timers(padapter);
++	r871x_dev_unload(padapter);
++	r8712_free_drv_sw(padapter);
++	free_netdev(pnetdev);
++
++	/* decrease the reference count of the usb device structure
++	 * when disconnect
++	 */
++	usb_put_dev(udev);
+ 
+-	if (pnetdev) {
+-		struct _adapter *padapter = netdev_priv(pnetdev);
+-
+-		/* never exit with a firmware callback pending */
+-		wait_for_completion(&padapter->rtl8712_fw_ready);
+-		pnetdev = usb_get_intfdata(pusb_intf);
+-		usb_set_intfdata(pusb_intf, NULL);
+-		if (!pnetdev)
+-			goto firmware_load_fail;
+-		release_firmware(padapter->fw);
+-		if (drvpriv.drv_registered)
+-			padapter->surprise_removed = true;
+-		if (pnetdev->reg_state != NETREG_UNINITIALIZED)
+-			unregister_netdev(pnetdev); /* will call netdev_close() */
+-		flush_scheduled_work();
+-		udelay(1);
+-		/* Stop driver mlme relation timer */
+-		r8712_stop_drv_timers(padapter);
+-		r871x_dev_unload(padapter);
+-		r8712_free_drv_sw(padapter);
+-		free_netdev(pnetdev);
+-
+-		/* decrease the reference count of the usb device structure
+-		 * when disconnect
+-		 */
+-		usb_put_dev(udev);
+-	}
+-firmware_load_fail:
+ 	/* If we didn't unplug usb dongle and remove/insert module, driver
+ 	 * fails on sitesurvey for the first time when device is up.
+ 	 * Reset usb port for sitesurvey fail issue.
+diff --git a/drivers/staging/rtl8723bs/hal/sdio_ops.c b/drivers/staging/rtl8723bs/hal/sdio_ops.c
+index 369f55d115192..31cbcdeac14a1 100644
+--- a/drivers/staging/rtl8723bs/hal/sdio_ops.c
++++ b/drivers/staging/rtl8723bs/hal/sdio_ops.c
+@@ -989,6 +989,8 @@ void sd_int_dpc(struct adapter *adapter)
+ 				} else {
+ 					rtw_c2h_wk_cmd(adapter, (u8 *)c2h_evt);
+ 				}
++			} else {
++				kfree(c2h_evt);
+ 			}
+ 		} else {
+ 			/* Error handling for malloc fail */
+diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c
+index 0790de29f0ca2..1231ce56e7123 100644
+--- a/drivers/tee/optee/call.c
++++ b/drivers/tee/optee/call.c
+@@ -413,11 +413,13 @@ void optee_enable_shm_cache(struct optee *optee)
+ }
+ 
+ /**
+- * optee_disable_shm_cache() - Disables caching of some shared memory allocation
+- *			      in OP-TEE
++ * __optee_disable_shm_cache() - Disables caching of some shared memory
++ *                               allocation in OP-TEE
+  * @optee:	main service struct
++ * @is_mapped:	true if the cached shared memory addresses were mapped by this
++ *		kernel, are safe to dereference, and should be freed
+  */
+-void optee_disable_shm_cache(struct optee *optee)
++static void __optee_disable_shm_cache(struct optee *optee, bool is_mapped)
+ {
+ 	struct optee_call_waiter w;
+ 
+@@ -436,6 +438,13 @@ void optee_disable_shm_cache(struct optee *optee)
+ 		if (res.result.status == OPTEE_SMC_RETURN_OK) {
+ 			struct tee_shm *shm;
+ 
++			/*
++			 * Shared memory references that were not mapped by
++			 * this kernel must be ignored to prevent a crash.
++			 */
++			if (!is_mapped)
++				continue;
++
+ 			shm = reg_pair_to_ptr(res.result.shm_upper32,
+ 					      res.result.shm_lower32);
+ 			tee_shm_free(shm);
+@@ -446,6 +455,27 @@ void optee_disable_shm_cache(struct optee *optee)
+ 	optee_cq_wait_final(&optee->call_queue, &w);
+ }
+ 
++/**
++ * optee_disable_shm_cache() - Disables caching of mapped shared memory
++ *                             allocations in OP-TEE
++ * @optee:	main service struct
++ */
++void optee_disable_shm_cache(struct optee *optee)
++{
++	return __optee_disable_shm_cache(optee, true);
++}
++
++/**
++ * optee_disable_unmapped_shm_cache() - Disables caching of shared memory
++ *                                      allocations in OP-TEE which are not
++ *                                      currently mapped
++ * @optee:	main service struct
++ */
++void optee_disable_unmapped_shm_cache(struct optee *optee)
++{
++	return __optee_disable_shm_cache(optee, false);
++}
++
+ #define PAGELIST_ENTRIES_PER_PAGE				\
+ 	((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1)
+ 
+diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c
+index 63542c1cc2914..7b17248f1527c 100644
+--- a/drivers/tee/optee/core.c
++++ b/drivers/tee/optee/core.c
+@@ -6,6 +6,7 @@
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+ 
+ #include <linux/arm-smccc.h>
++#include <linux/crash_dump.h>
+ #include <linux/errno.h>
+ #include <linux/io.h>
+ #include <linux/module.h>
+@@ -572,6 +573,13 @@ static optee_invoke_fn *get_invoke_func(struct device *dev)
+ 	return ERR_PTR(-EINVAL);
+ }
+ 
++/* optee_remove - Device Removal Routine
++ * @pdev: platform device information struct
++ *
++ * optee_remove is called by platform subsystem to alert the driver
++ * that it should release the device
++ */
++
+ static int optee_remove(struct platform_device *pdev)
+ {
+ 	struct optee *optee = platform_get_drvdata(pdev);
+@@ -602,6 +610,18 @@ static int optee_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++/* optee_shutdown - Device Removal Routine
++ * @pdev: platform device information struct
++ *
++ * platform_shutdown is called by the platform subsystem to alert
++ * the driver that a shutdown, reboot, or kexec is happening and
++ * device must be disabled.
++ */
++static void optee_shutdown(struct platform_device *pdev)
++{
++	optee_disable_shm_cache(platform_get_drvdata(pdev));
++}
++
+ static int optee_probe(struct platform_device *pdev)
+ {
+ 	optee_invoke_fn *invoke_fn;
+@@ -612,6 +632,16 @@ static int optee_probe(struct platform_device *pdev)
+ 	u32 sec_caps;
+ 	int rc;
+ 
++	/*
++	 * The kernel may have crashed at the same time that all available
++	 * secure world threads were suspended and we cannot reschedule the
++	 * suspended threads without access to the crashed kernel's wait_queue.
++	 * Therefore, we cannot reliably initialize the OP-TEE driver in the
++	 * kdump kernel.
++	 */
++	if (is_kdump_kernel())
++		return -ENODEV;
++
+ 	invoke_fn = get_invoke_func(&pdev->dev);
+ 	if (IS_ERR(invoke_fn))
+ 		return PTR_ERR(invoke_fn);
+@@ -686,6 +716,15 @@ static int optee_probe(struct platform_device *pdev)
+ 	optee->memremaped_shm = memremaped_shm;
+ 	optee->pool = pool;
+ 
++	/*
++	 * Ensure that there are no pre-existing shm objects before enabling
++	 * the shm cache so that there's no chance of receiving an invalid
++	 * address during shutdown. This could occur, for example, if we're
++	 * kexec booting from an older kernel that did not properly cleanup the
++	 * shm cache.
++	 */
++	optee_disable_unmapped_shm_cache(optee);
++
+ 	optee_enable_shm_cache(optee);
+ 
+ 	if (optee->sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
+@@ -728,6 +767,7 @@ MODULE_DEVICE_TABLE(of, optee_dt_match);
+ static struct platform_driver optee_driver = {
+ 	.probe  = optee_probe,
+ 	.remove = optee_remove,
++	.shutdown = optee_shutdown,
+ 	.driver = {
+ 		.name = "optee",
+ 		.of_match_table = optee_dt_match,
+diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h
+index e25b216a14ef8..dbdd367be1568 100644
+--- a/drivers/tee/optee/optee_private.h
++++ b/drivers/tee/optee/optee_private.h
+@@ -159,6 +159,7 @@ int optee_cancel_req(struct tee_context *ctx, u32 cancel_id, u32 session);
+ 
+ void optee_enable_shm_cache(struct optee *optee);
+ void optee_disable_shm_cache(struct optee *optee);
++void optee_disable_unmapped_shm_cache(struct optee *optee);
+ 
+ int optee_shm_register(struct tee_context *ctx, struct tee_shm *shm,
+ 		       struct page **pages, size_t num_pages,
+diff --git a/drivers/tee/optee/shm_pool.c b/drivers/tee/optee/shm_pool.c
+index d767eebf30bdd..da06ce9b9313e 100644
+--- a/drivers/tee/optee/shm_pool.c
++++ b/drivers/tee/optee/shm_pool.c
+@@ -32,8 +32,10 @@ static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
+ 		struct page **pages;
+ 
+ 		pages = kcalloc(nr_pages, sizeof(pages), GFP_KERNEL);
+-		if (!pages)
+-			return -ENOMEM;
++		if (!pages) {
++			rc = -ENOMEM;
++			goto err;
++		}
+ 
+ 		for (i = 0; i < nr_pages; i++) {
+ 			pages[i] = page;
+@@ -44,8 +46,14 @@ static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
+ 		rc = optee_shm_register(shm->ctx, shm, pages, nr_pages,
+ 					(unsigned long)shm->kaddr);
+ 		kfree(pages);
++		if (rc)
++			goto err;
+ 	}
+ 
++	return 0;
++
++err:
++	__free_pages(page, order);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
+index 00472f5ce22e4..c65e44707cd69 100644
+--- a/drivers/tee/tee_shm.c
++++ b/drivers/tee/tee_shm.c
+@@ -193,6 +193,24 @@ err_dev_put:
+ }
+ EXPORT_SYMBOL_GPL(tee_shm_alloc);
+ 
++/**
++ * tee_shm_alloc_kernel_buf() - Allocate shared memory for kernel buffer
++ * @ctx:	Context that allocates the shared memory
++ * @size:	Requested size of shared memory
++ *
++ * The returned memory registered in secure world and is suitable to be
++ * passed as a memory buffer in parameter argument to
++ * tee_client_invoke_func(). The memory allocated is later freed with a
++ * call to tee_shm_free().
++ *
++ * @returns a pointer to 'struct tee_shm'
++ */
++struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
++{
++	return tee_shm_alloc(ctx, size, TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
++}
++EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf);
++
+ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
+ 				 size_t length, u32 flags)
+ {
+diff --git a/drivers/tty/serial/8250/8250_mtk.c b/drivers/tty/serial/8250/8250_mtk.c
+index f7d3023f860f0..fb65dc601b237 100644
+--- a/drivers/tty/serial/8250/8250_mtk.c
++++ b/drivers/tty/serial/8250/8250_mtk.c
+@@ -93,10 +93,13 @@ static void mtk8250_dma_rx_complete(void *param)
+ 	struct dma_tx_state state;
+ 	int copied, total, cnt;
+ 	unsigned char *ptr;
++	unsigned long flags;
+ 
+ 	if (data->rx_status == DMA_RX_SHUTDOWN)
+ 		return;
+ 
++	spin_lock_irqsave(&up->port.lock, flags);
++
+ 	dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
+ 	total = dma->rx_size - state.residue;
+ 	cnt = total;
+@@ -120,6 +123,8 @@ static void mtk8250_dma_rx_complete(void *param)
+ 	tty_flip_buffer_push(tty_port);
+ 
+ 	mtk8250_rx_dma(up);
++
++	spin_unlock_irqrestore(&up->port.lock, flags);
+ }
+ 
+ static void mtk8250_rx_dma(struct uart_8250_port *up)
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 13929ab64dceb..39f9ea24e3169 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -3804,6 +3804,12 @@ static const struct pci_device_id blacklist[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x0f0c), },
+ 	{ PCI_VDEVICE(INTEL, 0x228a), },
+ 	{ PCI_VDEVICE(INTEL, 0x228c), },
++	{ PCI_VDEVICE(INTEL, 0x4b96), },
++	{ PCI_VDEVICE(INTEL, 0x4b97), },
++	{ PCI_VDEVICE(INTEL, 0x4b98), },
++	{ PCI_VDEVICE(INTEL, 0x4b99), },
++	{ PCI_VDEVICE(INTEL, 0x4b9a), },
++	{ PCI_VDEVICE(INTEL, 0x4b9b), },
+ 	{ PCI_VDEVICE(INTEL, 0x9ce3), },
+ 	{ PCI_VDEVICE(INTEL, 0x9ce4), },
+ 
+@@ -3964,6 +3970,7 @@ pciserial_init_ports(struct pci_dev *dev, const struct pciserial_board *board)
+ 		if (pci_match_id(pci_use_msi, dev)) {
+ 			dev_dbg(&dev->dev, "Using MSI(-X) interrupts\n");
+ 			pci_set_master(dev);
++			uart.port.flags &= ~UPF_SHARE_IRQ;
+ 			rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES);
+ 		} else {
+ 			dev_dbg(&dev->dev, "Using legacy interrupts\n");
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 6d9c494bed7d2..3de0a16e055a3 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -311,7 +311,11 @@ static const struct serial8250_config uart_config[] = {
+ /* Uart divisor latch read */
+ static int default_serial_dl_read(struct uart_8250_port *up)
+ {
+-	return serial_in(up, UART_DLL) | serial_in(up, UART_DLM) << 8;
++	/* Assign these in pieces to truncate any bits above 7.  */
++	unsigned char dll = serial_in(up, UART_DLL);
++	unsigned char dlm = serial_in(up, UART_DLM);
++
++	return dll | dlm << 8;
+ }
+ 
+ /* Uart divisor latch write */
+@@ -1297,9 +1301,11 @@ static void autoconfig(struct uart_8250_port *up)
+ 	serial_out(up, UART_LCR, 0);
+ 
+ 	serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+-	scratch = serial_in(up, UART_IIR) >> 6;
+ 
+-	switch (scratch) {
++	/* Assign this as it is to truncate any bits above 7.  */
++	scratch = serial_in(up, UART_IIR);
++
++	switch (scratch >> 6) {
+ 	case 0:
+ 		autoconfig_8250(up);
+ 		break;
+diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
+index fdce1a7995920..26fa69609ee5b 100644
+--- a/drivers/tty/serial/serial-tegra.c
++++ b/drivers/tty/serial/serial-tegra.c
+@@ -1040,9 +1040,11 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup)
+ 
+ 	if (tup->cdata->fifo_mode_enable_status) {
+ 		ret = tegra_uart_wait_fifo_mode_enabled(tup);
+-		dev_err(tup->uport.dev, "FIFO mode not enabled\n");
+-		if (ret < 0)
++		if (ret < 0) {
++			dev_err(tup->uport.dev,
++				"Failed to enable FIFO mode: %d\n", ret);
+ 			return ret;
++		}
+ 	} else {
+ 		/*
+ 		 * For all tegra devices (up to t210), there is a hardware
+diff --git a/drivers/usb/cdns3/ep0.c b/drivers/usb/cdns3/ep0.c
+index d3121a32cc68c..30d3516c7f988 100644
+--- a/drivers/usb/cdns3/ep0.c
++++ b/drivers/usb/cdns3/ep0.c
+@@ -731,6 +731,7 @@ static int cdns3_gadget_ep0_queue(struct usb_ep *ep,
+ 		request->actual = 0;
+ 		priv_dev->status_completion_no_call = true;
+ 		priv_dev->pending_status_request = request;
++		usb_gadget_set_state(&priv_dev->gadget, USB_STATE_CONFIGURED);
+ 		spin_unlock_irqrestore(&priv_dev->lock, flags);
+ 
+ 		/*
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index b222b777e6a43..58274c5073531 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -2283,17 +2283,10 @@ static void usbtmc_interrupt(struct urb *urb)
+ 		dev_err(dev, "overflow with length %d, actual length is %d\n",
+ 			data->iin_wMaxPacketSize, urb->actual_length);
+ 		fallthrough;
+-	case -ECONNRESET:
+-	case -ENOENT:
+-	case -ESHUTDOWN:
+-	case -EILSEQ:
+-	case -ETIME:
+-	case -EPIPE:
++	default:
+ 		/* urb terminated, clean up */
+ 		dev_dbg(dev, "urb terminated, status: %d\n", status);
+ 		return;
+-	default:
+-		dev_err(dev, "unknown status received: %d\n", status);
+ 	}
+ exit:
+ 	rv = usb_submit_urb(urb, GFP_ATOMIC);
+diff --git a/drivers/usb/common/usb-otg-fsm.c b/drivers/usb/common/usb-otg-fsm.c
+index 3740cf95560e9..0697fde51d00f 100644
+--- a/drivers/usb/common/usb-otg-fsm.c
++++ b/drivers/usb/common/usb-otg-fsm.c
+@@ -193,7 +193,11 @@ static void otg_start_hnp_polling(struct otg_fsm *fsm)
+ 	if (!fsm->host_req_flag)
+ 		return;
+ 
+-	INIT_DELAYED_WORK(&fsm->hnp_polling_work, otg_hnp_polling_work);
++	if (!fsm->hnp_work_inited) {
++		INIT_DELAYED_WORK(&fsm->hnp_polling_work, otg_hnp_polling_work);
++		fsm->hnp_work_inited = true;
++	}
++
+ 	schedule_delayed_work(&fsm->hnp_polling_work,
+ 					msecs_to_jiffies(T_HOST_REQ_POLL));
+ }
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 14a7c05abfe8f..756839e0e91d5 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2131,6 +2131,17 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 		}
+ 	}
+ 
++	/*
++	 * Avoid issuing a runtime resume if the device is already in the
++	 * suspended state during gadget disconnect.  DWC3 gadget was already
++	 * halted/stopped during runtime suspend.
++	 */
++	if (!is_on) {
++		pm_runtime_barrier(dwc->dev);
++		if (pm_runtime_suspended(dwc->dev))
++			return 0;
++	}
++
+ 	/*
+ 	 * Check the return value for successful resume, or error.  For a
+ 	 * successful resume, the DWC3 runtime PM resume routine will handle
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index a82b3de1a54be..6742271cd6e6a 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -41,6 +41,7 @@ struct f_hidg {
+ 	unsigned char			bInterfaceSubClass;
+ 	unsigned char			bInterfaceProtocol;
+ 	unsigned char			protocol;
++	unsigned char			idle;
+ 	unsigned short			report_desc_length;
+ 	char				*report_desc;
+ 	unsigned short			report_length;
+@@ -338,6 +339,11 @@ static ssize_t f_hidg_write(struct file *file, const char __user *buffer,
+ 
+ 	spin_lock_irqsave(&hidg->write_spinlock, flags);
+ 
++	if (!hidg->req) {
++		spin_unlock_irqrestore(&hidg->write_spinlock, flags);
++		return -ESHUTDOWN;
++	}
++
+ #define WRITE_COND (!hidg->write_pending)
+ try_again:
+ 	/* write queue */
+@@ -358,8 +364,14 @@ try_again:
+ 	count  = min_t(unsigned, count, hidg->report_length);
+ 
+ 	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+-	status = copy_from_user(req->buf, buffer, count);
+ 
++	if (!req) {
++		ERROR(hidg->func.config->cdev, "hidg->req is NULL\n");
++		status = -ESHUTDOWN;
++		goto release_write_pending;
++	}
++
++	status = copy_from_user(req->buf, buffer, count);
+ 	if (status != 0) {
+ 		ERROR(hidg->func.config->cdev,
+ 			"copy_from_user error\n");
+@@ -387,14 +399,17 @@ try_again:
+ 
+ 	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+ 
++	if (!hidg->in_ep->enabled) {
++		ERROR(hidg->func.config->cdev, "in_ep is disabled\n");
++		status = -ESHUTDOWN;
++		goto release_write_pending;
++	}
++
+ 	status = usb_ep_queue(hidg->in_ep, req, GFP_ATOMIC);
+-	if (status < 0) {
+-		ERROR(hidg->func.config->cdev,
+-			"usb_ep_queue error on int endpoint %zd\n", status);
++	if (status < 0)
+ 		goto release_write_pending;
+-	} else {
++	else
+ 		status = count;
+-	}
+ 
+ 	return status;
+ release_write_pending:
+@@ -523,6 +538,14 @@ static int hidg_setup(struct usb_function *f,
+ 		goto respond;
+ 		break;
+ 
++	case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
++		  | HID_REQ_GET_IDLE):
++		VDBG(cdev, "get_idle\n");
++		length = min_t(unsigned int, length, 1);
++		((u8 *) req->buf)[0] = hidg->idle;
++		goto respond;
++		break;
++
+ 	case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
+ 		  | HID_REQ_SET_REPORT):
+ 		VDBG(cdev, "set_report | wLength=%d\n", ctrl->wLength);
+@@ -546,6 +569,14 @@ static int hidg_setup(struct usb_function *f,
+ 		goto stall;
+ 		break;
+ 
++	case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
++		  | HID_REQ_SET_IDLE):
++		VDBG(cdev, "set_idle\n");
++		length = 0;
++		hidg->idle = value >> 8;
++		goto respond;
++		break;
++
+ 	case ((USB_DIR_IN | USB_TYPE_STANDARD | USB_RECIP_INTERFACE) << 8
+ 		  | USB_REQ_GET_DESCRIPTOR):
+ 		switch (value >> 8) {
+@@ -773,6 +804,7 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 	hidg_interface_desc.bInterfaceSubClass = hidg->bInterfaceSubClass;
+ 	hidg_interface_desc.bInterfaceProtocol = hidg->bInterfaceProtocol;
+ 	hidg->protocol = HID_REPORT_PROTOCOL;
++	hidg->idle = 1;
+ 	hidg_ss_in_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length);
+ 	hidg_ss_in_comp_desc.wBytesPerInterval =
+ 				cpu_to_le16(hidg->report_length);
+diff --git a/drivers/usb/gadget/udc/max3420_udc.c b/drivers/usb/gadget/udc/max3420_udc.c
+index 35179543c3272..91c9e9057cff3 100644
+--- a/drivers/usb/gadget/udc/max3420_udc.c
++++ b/drivers/usb/gadget/udc/max3420_udc.c
+@@ -1260,12 +1260,14 @@ static int max3420_probe(struct spi_device *spi)
+ 	err = devm_request_irq(&spi->dev, irq, max3420_irq_handler, 0,
+ 			       "max3420", udc);
+ 	if (err < 0)
+-		return err;
++		goto del_gadget;
+ 
+ 	udc->thread_task = kthread_create(max3420_thread, udc,
+ 					  "max3420-thread");
+-	if (IS_ERR(udc->thread_task))
+-		return PTR_ERR(udc->thread_task);
++	if (IS_ERR(udc->thread_task)) {
++		err = PTR_ERR(udc->thread_task);
++		goto del_gadget;
++	}
+ 
+ 	irq = of_irq_get_byname(spi->dev.of_node, "vbus");
+ 	if (irq <= 0) { /* no vbus irq implies self-powered design */
+@@ -1285,10 +1287,14 @@ static int max3420_probe(struct spi_device *spi)
+ 		err = devm_request_irq(&spi->dev, irq,
+ 				       max3420_vbus_handler, 0, "vbus", udc);
+ 		if (err < 0)
+-			return err;
++			goto del_gadget;
+ 	}
+ 
+ 	return 0;
++
++del_gadget:
++	usb_del_gadget_udc(&udc->gadget);
++	return err;
+ }
+ 
+ static int max3420_remove(struct spi_device *spi)
+diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c
+index 0487a4b0501ec..99e994fd3d1df 100644
+--- a/drivers/usb/host/ohci-at91.c
++++ b/drivers/usb/host/ohci-at91.c
+@@ -606,8 +606,6 @@ ohci_hcd_at91_drv_suspend(struct device *dev)
+ 	if (ohci_at91->wakeup)
+ 		enable_irq_wake(hcd->irq);
+ 
+-	ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1);
+-
+ 	ret = ohci_suspend(hcd, ohci_at91->wakeup);
+ 	if (ret) {
+ 		if (ohci_at91->wakeup)
+@@ -627,7 +625,10 @@ ohci_hcd_at91_drv_suspend(struct device *dev)
+ 		/* flush the writes */
+ 		(void) ohci_readl (ohci, &ohci->regs->control);
+ 		msleep(1);
++		ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1);
+ 		at91_stop_clock(ohci_at91);
++	} else {
++		ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1);
+ 	}
+ 
+ 	return ret;
+@@ -639,6 +640,8 @@ ohci_hcd_at91_drv_resume(struct device *dev)
+ 	struct usb_hcd	*hcd = dev_get_drvdata(dev);
+ 	struct ohci_at91_priv *ohci_at91 = hcd_to_ohci_at91_priv(hcd);
+ 
++	ohci_at91_port_suspend(ohci_at91->sfr_regmap, 0);
++
+ 	if (ohci_at91->wakeup)
+ 		disable_irq_wake(hcd->irq);
+ 	else
+@@ -646,8 +649,6 @@ ohci_hcd_at91_drv_resume(struct device *dev)
+ 
+ 	ohci_resume(hcd, false);
+ 
+-	ohci_at91_port_suspend(ohci_at91->sfr_regmap, 0);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index f26861246f653..119b7b6e1adac 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -853,6 +853,7 @@ static struct usb_serial_driver ch341_device = {
+ 		.owner	= THIS_MODULE,
+ 		.name	= "ch341-uart",
+ 	},
++	.bulk_in_size      = 512,
+ 	.id_table          = id_table,
+ 	.num_ports         = 1,
+ 	.open              = ch341_open,
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 1aef9b1e1c4eb..dfcf79bdfddce 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -219,6 +219,7 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(FTDI_VID, FTDI_MTXORB_6_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_R2000KU_TRUE_RNG) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_VARDAAN_PID) },
++	{ USB_DEVICE(FTDI_VID, FTDI_AUTO_M3_OP_COM_V2_PID) },
+ 	{ USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0100_PID) },
+ 	{ USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0101_PID) },
+ 	{ USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0102_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index add602bebd820..755858ca20bac 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -159,6 +159,9 @@
+ /* Vardaan Enterprises Serial Interface VEUSB422R3 */
+ #define FTDI_VARDAAN_PID	0xF070
+ 
++/* Auto-M3 Ltd. - OP-COM USB V2 - OBD interface Adapter */
++#define FTDI_AUTO_M3_OP_COM_V2_PID	0x4f50
++
+ /*
+  * Xsens Technologies BV products (http://www.xsens.com).
+  */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 39333f2cba04e..2b85e0e9bffdb 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1203,6 +1203,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1055, 0xff),	/* Telit FN980 (PCIe) */
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1056, 0xff),	/* Telit FD980 */
++	  .driver_info = NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 61929d37d7fc4..0b08dd3b19eb0 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -4314,7 +4314,7 @@ EXPORT_SYMBOL_GPL(tcpm_pd_hard_reset);
+ void tcpm_sink_frs(struct tcpm_port *port)
+ {
+ 	spin_lock(&port->pd_event_lock);
+-	port->pd_events = TCPM_FRS_EVENT;
++	port->pd_events |= TCPM_FRS_EVENT;
+ 	spin_unlock(&port->pd_event_lock);
+ 	kthread_queue_work(port->wq, &port->event_work);
+ }
+@@ -4323,7 +4323,7 @@ EXPORT_SYMBOL_GPL(tcpm_sink_frs);
+ void tcpm_sourcing_vbus(struct tcpm_port *port)
+ {
+ 	spin_lock(&port->pd_event_lock);
+-	port->pd_events = TCPM_SOURCING_VBUS;
++	port->pd_events |= TCPM_SOURCING_VBUS;
+ 	spin_unlock(&port->pd_event_lock);
+ 	kthread_queue_work(port->wq, &port->event_work);
+ }
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 81e0877237770..fdb1d660bd136 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -3466,7 +3466,8 @@ static int smb3_simple_fallocate_write_range(unsigned int xid,
+ 					     char *buf)
+ {
+ 	struct cifs_io_parms io_parms = {0};
+-	int rc, nbytes;
++	int nbytes;
++	int rc = 0;
+ 	struct kvec iov[2];
+ 
+ 	io_parms.netfid = cfile->fid.netfid;
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index ab7baf5299176..f71de6c1ecf40 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2412,7 +2412,7 @@ again:
+ 				goto journal_error;
+ 			err = ext4_handle_dirty_dx_node(handle, dir,
+ 							frame->bh);
+-			if (err)
++			if (restart || err)
+ 				goto journal_error;
+ 		} else {
+ 			struct dx_root *dxroot;
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 6f5f97c0fdee9..28b2e973f10ed 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -31,6 +31,21 @@
+ 
+ #include "internal.h"
+ 
++/*
++ * New pipe buffers will be restricted to this size while the user is exceeding
++ * their pipe buffer quota. The general pipe use case needs at least two
++ * buffers: one for data yet to be read, and one for new data. If this is less
++ * than two, then a write to a non-empty pipe may block even if the pipe is not
++ * full. This can occur with GNU make jobserver or similar uses of pipes as
++ * semaphores: multiple processes may be waiting to write tokens back to the
++ * pipe before reading tokens: https://lore.kernel.org/lkml/1628086770.5rn8p04n6j.none@localhost/.
++ *
++ * Users can reduce their pipe buffers with F_SETPIPE_SZ below this at their
++ * own risk, namely: pipe writes to non-full pipes may block until the pipe is
++ * emptied.
++ */
++#define PIPE_MIN_DEF_BUFFERS 2
++
+ /*
+  * The max size that a non-root user is allowed to grow the pipe. Can
+  * be set by root in /proc/sys/fs/pipe-max-size
+@@ -781,8 +796,8 @@ struct pipe_inode_info *alloc_pipe_info(void)
+ 	user_bufs = account_pipe_buffers(user, 0, pipe_bufs);
+ 
+ 	if (too_many_pipe_buffers_soft(user_bufs) && pipe_is_unprivileged_user()) {
+-		user_bufs = account_pipe_buffers(user, pipe_bufs, 1);
+-		pipe_bufs = 1;
++		user_bufs = account_pipe_buffers(user, pipe_bufs, PIPE_MIN_DEF_BUFFERS);
++		pipe_bufs = PIPE_MIN_DEF_BUFFERS;
+ 	}
+ 
+ 	if (too_many_pipe_buffers_hard(user_bufs) && pipe_is_unprivileged_user())
+diff --git a/fs/reiserfs/stree.c b/fs/reiserfs/stree.c
+index 476a7ff494822..ef42729216d1f 100644
+--- a/fs/reiserfs/stree.c
++++ b/fs/reiserfs/stree.c
+@@ -387,6 +387,24 @@ void pathrelse(struct treepath *search_path)
+ 	search_path->path_length = ILLEGAL_PATH_ELEMENT_OFFSET;
+ }
+ 
++static int has_valid_deh_location(struct buffer_head *bh, struct item_head *ih)
++{
++	struct reiserfs_de_head *deh;
++	int i;
++
++	deh = B_I_DEH(bh, ih);
++	for (i = 0; i < ih_entry_count(ih); i++) {
++		if (deh_location(&deh[i]) > ih_item_len(ih)) {
++			reiserfs_warning(NULL, "reiserfs-5094",
++					 "directory entry location seems wrong %h",
++					 &deh[i]);
++			return 0;
++		}
++	}
++
++	return 1;
++}
++
+ static int is_leaf(char *buf, int blocksize, struct buffer_head *bh)
+ {
+ 	struct block_head *blkh;
+@@ -454,11 +472,14 @@ static int is_leaf(char *buf, int blocksize, struct buffer_head *bh)
+ 					 "(second one): %h", ih);
+ 			return 0;
+ 		}
+-		if (is_direntry_le_ih(ih) && (ih_item_len(ih) < (ih_entry_count(ih) * IH_SIZE))) {
+-			reiserfs_warning(NULL, "reiserfs-5093",
+-					 "item entry count seems wrong %h",
+-					 ih);
+-			return 0;
++		if (is_direntry_le_ih(ih)) {
++			if (ih_item_len(ih) < (ih_entry_count(ih) * IH_SIZE)) {
++				reiserfs_warning(NULL, "reiserfs-5093",
++						 "item entry count seems wrong %h",
++						 ih);
++				return 0;
++			}
++			return has_valid_deh_location(bh, ih);
+ 		}
+ 		prev_location = ih_location(ih);
+ 	}
+diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
+index 1b9c7a387dc71..913f5af9bf248 100644
+--- a/fs/reiserfs/super.c
++++ b/fs/reiserfs/super.c
+@@ -2082,6 +2082,14 @@ static int reiserfs_fill_super(struct super_block *s, void *data, int silent)
+ 		unlock_new_inode(root_inode);
+ 	}
+ 
++	if (!S_ISDIR(root_inode->i_mode) || !inode_get_bytes(root_inode) ||
++	    !root_inode->i_size) {
++		SWARN(silent, s, "", "corrupt root inode, run fsck");
++		iput(root_inode);
++		errval = -EUCLEAN;
++		goto error;
++	}
++
+ 	s->s_root = d_make_root(root_inode);
+ 	if (!s->s_root)
+ 		goto error;
+diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
+index cdd049a724b10..9b24cc3d3024f 100644
+--- a/include/linux/tee_drv.h
++++ b/include/linux/tee_drv.h
+@@ -332,6 +332,7 @@ void *tee_get_drvdata(struct tee_device *teedev);
+  * @returns a pointer to 'struct tee_shm'
+  */
+ struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags);
++struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size);
+ 
+ /**
+  * tee_shm_register() - Register shared memory buffer
+diff --git a/include/linux/usb/otg-fsm.h b/include/linux/usb/otg-fsm.h
+index e78eb577d0fa1..8ef7d148c1493 100644
+--- a/include/linux/usb/otg-fsm.h
++++ b/include/linux/usb/otg-fsm.h
+@@ -196,6 +196,7 @@ struct otg_fsm {
+ 	struct mutex lock;
+ 	u8 *host_req_flag;
+ 	struct delayed_work hnp_polling_work;
++	bool hnp_work_inited;
+ 	bool state_changed;
+ };
+ 
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index e534dff2874e1..a592a826e2fb5 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1189,6 +1189,7 @@ struct hci_dev *hci_alloc_dev(void);
+ void hci_free_dev(struct hci_dev *hdev);
+ int hci_register_dev(struct hci_dev *hdev);
+ void hci_unregister_dev(struct hci_dev *hdev);
++void hci_cleanup_dev(struct hci_dev *hdev);
+ int hci_suspend_dev(struct hci_dev *hdev);
+ int hci_resume_dev(struct hci_dev *hdev);
+ int hci_reset_dev(struct hci_dev *hdev);
+diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h
+index 42fe4e1b6a8c7..44969d03debf8 100644
+--- a/include/net/ip6_route.h
++++ b/include/net/ip6_route.h
+@@ -264,7 +264,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 
+ static inline unsigned int ip6_skb_dst_mtu(struct sk_buff *skb)
+ {
+-	int mtu;
++	unsigned int mtu;
+ 
+ 	struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ?
+ 				inet6_sk(skb->sk) : NULL;
+diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h
+index b59d73d529ba7..22e1bc72b979c 100644
+--- a/include/net/netns/xfrm.h
++++ b/include/net/netns/xfrm.h
+@@ -74,6 +74,7 @@ struct netns_xfrm {
+ #endif
+ 	spinlock_t		xfrm_state_lock;
+ 	seqcount_t		xfrm_state_hash_generation;
++	seqcount_spinlock_t	xfrm_policy_hash_generation;
+ 
+ 	spinlock_t xfrm_policy_lock;
+ 	struct mutex xfrm_cfg_mutex;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 679562d2f55d1..84c105902027c 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1598,12 +1598,18 @@ void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
+ 	dequeue_task(rq, p, flags);
+ }
+ 
+-/*
+- * __normal_prio - return the priority that is based on the static prio
+- */
+-static inline int __normal_prio(struct task_struct *p)
++static inline int __normal_prio(int policy, int rt_prio, int nice)
+ {
+-	return p->static_prio;
++	int prio;
++
++	if (dl_policy(policy))
++		prio = MAX_DL_PRIO - 1;
++	else if (rt_policy(policy))
++		prio = MAX_RT_PRIO - 1 - rt_prio;
++	else
++		prio = NICE_TO_PRIO(nice);
++
++	return prio;
+ }
+ 
+ /*
+@@ -1615,15 +1621,7 @@ static inline int __normal_prio(struct task_struct *p)
+  */
+ static inline int normal_prio(struct task_struct *p)
+ {
+-	int prio;
+-
+-	if (task_has_dl_policy(p))
+-		prio = MAX_DL_PRIO-1;
+-	else if (task_has_rt_policy(p))
+-		prio = MAX_RT_PRIO-1 - p->rt_priority;
+-	else
+-		prio = __normal_prio(p);
+-	return prio;
++	return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio));
+ }
+ 
+ /*
+@@ -3248,7 +3246,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ 		} else if (PRIO_TO_NICE(p->static_prio) < 0)
+ 			p->static_prio = NICE_TO_PRIO(0);
+ 
+-		p->prio = p->normal_prio = __normal_prio(p);
++		p->prio = p->normal_prio = p->static_prio;
+ 		set_load_weight(p, false);
+ 
+ 		/*
+@@ -4799,6 +4797,18 @@ int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flag
+ }
+ EXPORT_SYMBOL(default_wake_function);
+ 
++static void __setscheduler_prio(struct task_struct *p, int prio)
++{
++	if (dl_prio(prio))
++		p->sched_class = &dl_sched_class;
++	else if (rt_prio(prio))
++		p->sched_class = &rt_sched_class;
++	else
++		p->sched_class = &fair_sched_class;
++
++	p->prio = prio;
++}
++
+ #ifdef CONFIG_RT_MUTEXES
+ 
+ static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
+@@ -4914,22 +4924,19 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
+ 		} else {
+ 			p->dl.pi_se = &p->dl;
+ 		}
+-		p->sched_class = &dl_sched_class;
+ 	} else if (rt_prio(prio)) {
+ 		if (dl_prio(oldprio))
+ 			p->dl.pi_se = &p->dl;
+ 		if (oldprio < prio)
+ 			queue_flag |= ENQUEUE_HEAD;
+-		p->sched_class = &rt_sched_class;
+ 	} else {
+ 		if (dl_prio(oldprio))
+ 			p->dl.pi_se = &p->dl;
+ 		if (rt_prio(oldprio))
+ 			p->rt.timeout = 0;
+-		p->sched_class = &fair_sched_class;
+ 	}
+ 
+-	p->prio = prio;
++	__setscheduler_prio(p, prio);
+ 
+ 	if (queued)
+ 		enqueue_task(rq, p, queue_flag);
+@@ -5162,35 +5169,6 @@ static void __setscheduler_params(struct task_struct *p,
+ 	set_load_weight(p, true);
+ }
+ 
+-/* Actually do priority change: must hold pi & rq lock. */
+-static void __setscheduler(struct rq *rq, struct task_struct *p,
+-			   const struct sched_attr *attr, bool keep_boost)
+-{
+-	/*
+-	 * If params can't change scheduling class changes aren't allowed
+-	 * either.
+-	 */
+-	if (attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)
+-		return;
+-
+-	__setscheduler_params(p, attr);
+-
+-	/*
+-	 * Keep a potential priority boosting if called from
+-	 * sched_setscheduler().
+-	 */
+-	p->prio = normal_prio(p);
+-	if (keep_boost)
+-		p->prio = rt_effective_prio(p, p->prio);
+-
+-	if (dl_prio(p->prio))
+-		p->sched_class = &dl_sched_class;
+-	else if (rt_prio(p->prio))
+-		p->sched_class = &rt_sched_class;
+-	else
+-		p->sched_class = &fair_sched_class;
+-}
+-
+ /*
+  * Check the target process has a UID that matches the current process's:
+  */
+@@ -5211,10 +5189,8 @@ static int __sched_setscheduler(struct task_struct *p,
+ 				const struct sched_attr *attr,
+ 				bool user, bool pi)
+ {
+-	int newprio = dl_policy(attr->sched_policy) ? MAX_DL_PRIO - 1 :
+-		      MAX_RT_PRIO - 1 - attr->sched_priority;
+-	int retval, oldprio, oldpolicy = -1, queued, running;
+-	int new_effective_prio, policy = attr->sched_policy;
++	int oldpolicy = -1, policy = attr->sched_policy;
++	int retval, oldprio, newprio, queued, running;
+ 	const struct sched_class *prev_class;
+ 	struct rq_flags rf;
+ 	int reset_on_fork;
+@@ -5412,6 +5388,7 @@ change:
+ 	p->sched_reset_on_fork = reset_on_fork;
+ 	oldprio = p->prio;
+ 
++	newprio = __normal_prio(policy, attr->sched_priority, attr->sched_nice);
+ 	if (pi) {
+ 		/*
+ 		 * Take priority boosted tasks into account. If the new
+@@ -5420,8 +5397,8 @@ change:
+ 		 * the runqueue. This will be done when the task deboost
+ 		 * itself.
+ 		 */
+-		new_effective_prio = rt_effective_prio(p, newprio);
+-		if (new_effective_prio == oldprio)
++		newprio = rt_effective_prio(p, newprio);
++		if (newprio == oldprio)
+ 			queue_flags &= ~DEQUEUE_MOVE;
+ 	}
+ 
+@@ -5434,7 +5411,10 @@ change:
+ 
+ 	prev_class = p->sched_class;
+ 
+-	__setscheduler(rq, p, attr, pi);
++	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++		__setscheduler_params(p, attr);
++		__setscheduler_prio(p, newprio);
++	}
+ 	__setscheduler_uclamp(p, attr);
+ 
+ 	if (queued) {
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index aa96b8a4e508f..a3ec21be3b140 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -1265,8 +1265,10 @@ static inline void timer_base_unlock_expiry(struct timer_base *base)
+ static void timer_sync_wait_running(struct timer_base *base)
+ {
+ 	if (atomic_read(&base->timer_waiters)) {
++		raw_spin_unlock_irq(&base->lock);
+ 		spin_unlock(&base->expiry_lock);
+ 		spin_lock(&base->expiry_lock);
++		raw_spin_lock_irq(&base->lock);
+ 	}
+ }
+ 
+@@ -1450,14 +1452,14 @@ static void expire_timers(struct timer_base *base, struct hlist_head *head)
+ 		if (timer->flags & TIMER_IRQSAFE) {
+ 			raw_spin_unlock(&base->lock);
+ 			call_timer_fn(timer, fn, baseclk);
+-			base->running_timer = NULL;
+ 			raw_spin_lock(&base->lock);
++			base->running_timer = NULL;
+ 		} else {
+ 			raw_spin_unlock_irq(&base->lock);
+ 			call_timer_fn(timer, fn, baseclk);
++			raw_spin_lock_irq(&base->lock);
+ 			base->running_timer = NULL;
+ 			timer_sync_wait_running(base);
+-			raw_spin_lock_irq(&base->lock);
+ 		}
+ 	}
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 625034c44d5f3..e4f154119e52c 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -8683,8 +8683,10 @@ static int trace_array_create_dir(struct trace_array *tr)
+ 		return -EINVAL;
+ 
+ 	ret = event_trace_add_tracer(tr->dir, tr);
+-	if (ret)
++	if (ret) {
+ 		tracefs_remove(tr->dir);
++		return ret;
++	}
+ 
+ 	init_tracer_tracefs(tr, tr->dir);
+ 	__update_tracer_options(tr);
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 379eade0c0837..75529b3117692 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -65,7 +65,8 @@
+ 	C(INVALID_SORT_MODIFIER,"Invalid sort modifier"),		\
+ 	C(EMPTY_SORT_FIELD,	"Empty sort field"),			\
+ 	C(TOO_MANY_SORT_FIELDS,	"Too many sort fields (Max = 2)"),	\
+-	C(INVALID_SORT_FIELD,	"Sort field must be a key or a val"),
++	C(INVALID_SORT_FIELD,	"Sort field must be a key or a val"),	\
++	C(INVALID_STR_OPERAND,	"String type can not be an operand in expression"),
+ 
+ #undef C
+ #define C(a, b)		HIST_ERR_##a
+@@ -2140,6 +2141,13 @@ static struct hist_field *parse_unary(struct hist_trigger_data *hist_data,
+ 		ret = PTR_ERR(operand1);
+ 		goto free;
+ 	}
++	if (operand1->flags & HIST_FIELD_FL_STRING) {
++		/* String type can not be the operand of unary operator. */
++		hist_err(file->tr, HIST_ERR_INVALID_STR_OPERAND, errpos(str));
++		destroy_hist_field(operand1, 0);
++		ret = -EINVAL;
++		goto free;
++	}
+ 
+ 	expr->flags |= operand1->flags &
+ 		(HIST_FIELD_FL_TIMESTAMP | HIST_FIELD_FL_TIMESTAMP_USECS);
+@@ -2241,6 +2249,11 @@ static struct hist_field *parse_expr(struct hist_trigger_data *hist_data,
+ 		operand1 = NULL;
+ 		goto free;
+ 	}
++	if (operand1->flags & HIST_FIELD_FL_STRING) {
++		hist_err(file->tr, HIST_ERR_INVALID_STR_OPERAND, errpos(operand1_str));
++		ret = -EINVAL;
++		goto free;
++	}
+ 
+ 	/* rest of string could be another expression e.g. b+c in a+b+c */
+ 	operand_flags = 0;
+@@ -2250,6 +2263,11 @@ static struct hist_field *parse_expr(struct hist_trigger_data *hist_data,
+ 		operand2 = NULL;
+ 		goto free;
+ 	}
++	if (operand2->flags & HIST_FIELD_FL_STRING) {
++		hist_err(file->tr, HIST_ERR_INVALID_STR_OPERAND, errpos(str));
++		ret = -EINVAL;
++		goto free;
++	}
+ 
+ 	ret = check_expr_operands(file->tr, operand1, operand2);
+ 	if (ret)
+@@ -2271,6 +2289,10 @@ static struct hist_field *parse_expr(struct hist_trigger_data *hist_data,
+ 
+ 	expr->operands[0] = operand1;
+ 	expr->operands[1] = operand2;
++
++	/* The operand sizes should be the same, so just pick one */
++	expr->size = operand1->size;
++
+ 	expr->operator = field_op;
+ 	expr->name = expr_str(expr, 0);
+ 	expr->type = kstrdup(operand1->type, GFP_KERNEL);
+diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
+index 1f490638b2dcc..d7260f6614a6a 100644
+--- a/kernel/tracepoint.c
++++ b/kernel/tracepoint.c
+@@ -15,6 +15,13 @@
+ #include <linux/sched/task.h>
+ #include <linux/static_key.h>
+ 
++enum tp_func_state {
++	TP_FUNC_0,
++	TP_FUNC_1,
++	TP_FUNC_2,
++	TP_FUNC_N,
++};
++
+ extern tracepoint_ptr_t __start___tracepoints_ptrs[];
+ extern tracepoint_ptr_t __stop___tracepoints_ptrs[];
+ 
+@@ -267,26 +274,29 @@ static void *func_remove(struct tracepoint_func **funcs,
+ 	return old;
+ }
+ 
+-static void tracepoint_update_call(struct tracepoint *tp, struct tracepoint_func *tp_funcs, bool sync)
++/*
++ * Count the number of functions (enum tp_func_state) in a tp_funcs array.
++ */
++static enum tp_func_state nr_func_state(const struct tracepoint_func *tp_funcs)
++{
++	if (!tp_funcs)
++		return TP_FUNC_0;
++	if (!tp_funcs[1].func)
++		return TP_FUNC_1;
++	if (!tp_funcs[2].func)
++		return TP_FUNC_2;
++	return TP_FUNC_N;	/* 3 or more */
++}
++
++static void tracepoint_update_call(struct tracepoint *tp, struct tracepoint_func *tp_funcs)
+ {
+ 	void *func = tp->iterator;
+ 
+ 	/* Synthetic events do not have static call sites */
+ 	if (!tp->static_call_key)
+ 		return;
+-
+-	if (!tp_funcs[1].func) {
++	if (nr_func_state(tp_funcs) == TP_FUNC_1)
+ 		func = tp_funcs[0].func;
+-		/*
+-		 * If going from the iterator back to a single caller,
+-		 * we need to synchronize with __DO_TRACE to make sure
+-		 * that the data passed to the callback is the one that
+-		 * belongs to that callback.
+-		 */
+-		if (sync)
+-			tracepoint_synchronize_unregister();
+-	}
+-
+ 	__static_call_update(tp->static_call_key, tp->static_call_tramp, func);
+ }
+ 
+@@ -320,9 +330,31 @@ static int tracepoint_add_func(struct tracepoint *tp,
+ 	 * a pointer to it.  This array is referenced by __DO_TRACE from
+ 	 * include/linux/tracepoint.h using rcu_dereference_sched().
+ 	 */
+-	tracepoint_update_call(tp, tp_funcs, false);
+-	rcu_assign_pointer(tp->funcs, tp_funcs);
+-	static_key_enable(&tp->key);
++	switch (nr_func_state(tp_funcs)) {
++	case TP_FUNC_1:		/* 0->1 */
++		/* Set static call to first function */
++		tracepoint_update_call(tp, tp_funcs);
++		/* Both iterator and static call handle NULL tp->funcs */
++		rcu_assign_pointer(tp->funcs, tp_funcs);
++		static_key_enable(&tp->key);
++		break;
++	case TP_FUNC_2:		/* 1->2 */
++		/* Set iterator static call */
++		tracepoint_update_call(tp, tp_funcs);
++		/*
++		 * Iterator callback installed before updating tp->funcs.
++		 * Requires ordering between RCU assign/dereference and
++		 * static call update/call.
++		 */
++		rcu_assign_pointer(tp->funcs, tp_funcs);
++		break;
++	case TP_FUNC_N:		/* N->N+1 (N>1) */
++		rcu_assign_pointer(tp->funcs, tp_funcs);
++		break;
++	default:
++		WARN_ON_ONCE(1);
++		break;
++	}
+ 
+ 	release_probes(old);
+ 	return 0;
+@@ -349,17 +381,47 @@ static int tracepoint_remove_func(struct tracepoint *tp,
+ 		/* Failed allocating new tp_funcs, replaced func with stub */
+ 		return 0;
+ 
+-	if (!tp_funcs) {
++	switch (nr_func_state(tp_funcs)) {
++	case TP_FUNC_0:		/* 1->0 */
+ 		/* Removed last function */
+ 		if (tp->unregfunc && static_key_enabled(&tp->key))
+ 			tp->unregfunc();
+ 
+ 		static_key_disable(&tp->key);
++		/* Set iterator static call */
++		tracepoint_update_call(tp, tp_funcs);
++		/* Both iterator and static call handle NULL tp->funcs */
++		rcu_assign_pointer(tp->funcs, NULL);
++		/*
++		 * Make sure new func never uses old data after a 1->0->1
++		 * transition sequence.
++		 * Considering that transition 0->1 is the common case
++		 * and don't have rcu-sync, issue rcu-sync after
++		 * transition 1->0 to break that sequence by waiting for
++		 * readers to be quiescent.
++		 */
++		tracepoint_synchronize_unregister();
++		break;
++	case TP_FUNC_1:		/* 2->1 */
+ 		rcu_assign_pointer(tp->funcs, tp_funcs);
+-	} else {
++		/*
++		 * On 2->1 transition, RCU sync is needed before setting
++		 * static call to first callback, because the observer
++		 * may have loaded any prior tp->funcs after the last one
++		 * associated with an rcu-sync.
++		 */
++		tracepoint_synchronize_unregister();
++		/* Set static call to first function */
++		tracepoint_update_call(tp, tp_funcs);
++		break;
++	case TP_FUNC_2:		/* N->N-1 (N>2) */
++		fallthrough;
++	case TP_FUNC_N:
+ 		rcu_assign_pointer(tp->funcs, tp_funcs);
+-		tracepoint_update_call(tp, tp_funcs,
+-				       tp_funcs[0].func != old[0].func);
++		break;
++	default:
++		WARN_ON_ONCE(1);
++		break;
+ 	}
+ 	release_probes(old);
+ 	return 0;
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 86ebfc6ae6986..65d3f54099637 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3816,14 +3816,10 @@ EXPORT_SYMBOL(hci_register_dev);
+ /* Unregister HCI device */
+ void hci_unregister_dev(struct hci_dev *hdev)
+ {
+-	int id;
+-
+ 	BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus);
+ 
+ 	hci_dev_set_flag(hdev, HCI_UNREGISTER);
+ 
+-	id = hdev->id;
+-
+ 	write_lock(&hci_dev_list_lock);
+ 	list_del(&hdev->list);
+ 	write_unlock(&hci_dev_list_lock);
+@@ -3858,7 +3854,14 @@ void hci_unregister_dev(struct hci_dev *hdev)
+ 	}
+ 
+ 	device_del(&hdev->dev);
++	/* Actual cleanup is deferred until hci_cleanup_dev(). */
++	hci_dev_put(hdev);
++}
++EXPORT_SYMBOL(hci_unregister_dev);
+ 
++/* Cleanup HCI device */
++void hci_cleanup_dev(struct hci_dev *hdev)
++{
+ 	debugfs_remove_recursive(hdev->debugfs);
+ 	kfree_const(hdev->hw_info);
+ 	kfree_const(hdev->fw_info);
+@@ -3883,11 +3886,8 @@ void hci_unregister_dev(struct hci_dev *hdev)
+ 	hci_blocked_keys_clear(hdev);
+ 	hci_dev_unlock(hdev);
+ 
+-	hci_dev_put(hdev);
+-
+-	ida_simple_remove(&hci_index_ida, id);
++	ida_simple_remove(&hci_index_ida, hdev->id);
+ }
+-EXPORT_SYMBOL(hci_unregister_dev);
+ 
+ /* Suspend HCI device */
+ int hci_suspend_dev(struct hci_dev *hdev)
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index eed0dd066e12c..53f85d7c5f9e5 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -59,6 +59,17 @@ struct hci_pinfo {
+ 	char              comm[TASK_COMM_LEN];
+ };
+ 
++static struct hci_dev *hci_hdev_from_sock(struct sock *sk)
++{
++	struct hci_dev *hdev = hci_pi(sk)->hdev;
++
++	if (!hdev)
++		return ERR_PTR(-EBADFD);
++	if (hci_dev_test_flag(hdev, HCI_UNREGISTER))
++		return ERR_PTR(-EPIPE);
++	return hdev;
++}
++
+ void hci_sock_set_flag(struct sock *sk, int nr)
+ {
+ 	set_bit(nr, &hci_pi(sk)->flags);
+@@ -759,19 +770,13 @@ void hci_sock_dev_event(struct hci_dev *hdev, int event)
+ 	if (event == HCI_DEV_UNREG) {
+ 		struct sock *sk;
+ 
+-		/* Detach sockets from device */
++		/* Wake up sockets using this dead device */
+ 		read_lock(&hci_sk_list.lock);
+ 		sk_for_each(sk, &hci_sk_list.head) {
+-			lock_sock(sk);
+ 			if (hci_pi(sk)->hdev == hdev) {
+-				hci_pi(sk)->hdev = NULL;
+ 				sk->sk_err = EPIPE;
+-				sk->sk_state = BT_OPEN;
+ 				sk->sk_state_change(sk);
+-
+-				hci_dev_put(hdev);
+ 			}
+-			release_sock(sk);
+ 		}
+ 		read_unlock(&hci_sk_list.lock);
+ 	}
+@@ -930,10 +935,10 @@ static int hci_sock_blacklist_del(struct hci_dev *hdev, void __user *arg)
+ static int hci_sock_bound_ioctl(struct sock *sk, unsigned int cmd,
+ 				unsigned long arg)
+ {
+-	struct hci_dev *hdev = hci_pi(sk)->hdev;
++	struct hci_dev *hdev = hci_hdev_from_sock(sk);
+ 
+-	if (!hdev)
+-		return -EBADFD;
++	if (IS_ERR(hdev))
++		return PTR_ERR(hdev);
+ 
+ 	if (hci_dev_test_flag(hdev, HCI_USER_CHANNEL))
+ 		return -EBUSY;
+@@ -1103,6 +1108,18 @@ static int hci_sock_bind(struct socket *sock, struct sockaddr *addr,
+ 
+ 	lock_sock(sk);
+ 
++	/* Allow detaching from dead device and attaching to alive device, if
++	 * the caller wants to re-bind (instead of close) this socket in
++	 * response to hci_sock_dev_event(HCI_DEV_UNREG) notification.
++	 */
++	hdev = hci_pi(sk)->hdev;
++	if (hdev && hci_dev_test_flag(hdev, HCI_UNREGISTER)) {
++		hci_pi(sk)->hdev = NULL;
++		sk->sk_state = BT_OPEN;
++		hci_dev_put(hdev);
++	}
++	hdev = NULL;
++
+ 	if (sk->sk_state == BT_BOUND) {
+ 		err = -EALREADY;
+ 		goto done;
+@@ -1379,9 +1396,9 @@ static int hci_sock_getname(struct socket *sock, struct sockaddr *addr,
+ 
+ 	lock_sock(sk);
+ 
+-	hdev = hci_pi(sk)->hdev;
+-	if (!hdev) {
+-		err = -EBADFD;
++	hdev = hci_hdev_from_sock(sk);
++	if (IS_ERR(hdev)) {
++		err = PTR_ERR(hdev);
+ 		goto done;
+ 	}
+ 
+@@ -1743,9 +1760,9 @@ static int hci_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 		goto done;
+ 	}
+ 
+-	hdev = hci_pi(sk)->hdev;
+-	if (!hdev) {
+-		err = -EBADFD;
++	hdev = hci_hdev_from_sock(sk);
++	if (IS_ERR(hdev)) {
++		err = PTR_ERR(hdev);
+ 		goto done;
+ 	}
+ 
+diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c
+index 9874844a95a98..b69d88b88d2e4 100644
+--- a/net/bluetooth/hci_sysfs.c
++++ b/net/bluetooth/hci_sysfs.c
+@@ -83,6 +83,9 @@ void hci_conn_del_sysfs(struct hci_conn *conn)
+ static void bt_host_release(struct device *dev)
+ {
+ 	struct hci_dev *hdev = to_hci_dev(dev);
++
++	if (hci_dev_test_flag(hdev, HCI_UNREGISTER))
++		hci_cleanup_dev(hdev);
+ 	kfree(hdev);
+ 	module_put(THIS_MODULE);
+ }
+diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
+index e09147ac9a990..fc61cd3fea652 100644
+--- a/net/ipv4/tcp_offload.c
++++ b/net/ipv4/tcp_offload.c
+@@ -298,6 +298,9 @@ int tcp_gro_complete(struct sk_buff *skb)
+ 	if (th->cwr)
+ 		skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;
+ 
++	if (skb->encapsulation)
++		skb->inner_transport_header = skb->transport_header;
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(tcp_gro_complete);
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 6e2b02cf78418..f4b8e56068e06 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -615,6 +615,10 @@ static int udp_gro_complete_segment(struct sk_buff *skb)
+ 
+ 	skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
+ 	skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_L4;
++
++	if (skb->encapsulation)
++		skb->inner_transport_header = skb->transport_header;
++
+ 	return 0;
+ }
+ 
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 854d2b38db856..05aa2571a4095 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -886,7 +886,7 @@ struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue,
+ 
+ 	/* seqlock has the same scope of busylock, for NOLOCK qdisc */
+ 	spin_lock_init(&sch->seqlock);
+-	lockdep_set_class(&sch->busylock,
++	lockdep_set_class(&sch->seqlock,
+ 			  dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
+ 
+ 	seqcount_init(&sch->running);
+diff --git a/net/sctp/auth.c b/net/sctp/auth.c
+index fe74c5f956303..db6b7373d16c3 100644
+--- a/net/sctp/auth.c
++++ b/net/sctp/auth.c
+@@ -857,14 +857,18 @@ int sctp_auth_set_key(struct sctp_endpoint *ep,
+ 	memcpy(key->data, &auth_key->sca_key[0], auth_key->sca_keylength);
+ 	cur_key->key = key;
+ 
+-	if (replace) {
+-		list_del_init(&shkey->key_list);
+-		sctp_auth_shkey_release(shkey);
+-		if (asoc && asoc->active_key_id == auth_key->sca_keynumber)
+-			sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
++	if (!replace) {
++		list_add(&cur_key->key_list, sh_keys);
++		return 0;
+ 	}
++
++	list_del_init(&shkey->key_list);
++	sctp_auth_shkey_release(shkey);
+ 	list_add(&cur_key->key_list, sh_keys);
+ 
++	if (asoc && asoc->active_key_id == auth_key->sca_keynumber)
++		sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
++
+ 	return 0;
+ }
+ 
+diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c
+index a20aec9d73933..2bf2693901631 100644
+--- a/net/xfrm/xfrm_compat.c
++++ b/net/xfrm/xfrm_compat.c
+@@ -298,8 +298,16 @@ static int xfrm_xlate64(struct sk_buff *dst, const struct nlmsghdr *nlh_src)
+ 	len = nlmsg_attrlen(nlh_src, xfrm_msg_min[type]);
+ 
+ 	nla_for_each_attr(nla, attrs, len, remaining) {
+-		int err = xfrm_xlate64_attr(dst, nla);
++		int err;
+ 
++		switch (type) {
++		case XFRM_MSG_NEWSPDINFO:
++			err = xfrm_nla_cpy(dst, nla, nla_len(nla));
++			break;
++		default:
++			err = xfrm_xlate64_attr(dst, nla);
++			break;
++		}
+ 		if (err)
+ 			return err;
+ 	}
+@@ -341,7 +349,8 @@ static int xfrm_alloc_compat(struct sk_buff *skb, const struct nlmsghdr *nlh_src
+ 
+ /* Calculates len of translated 64-bit message. */
+ static size_t xfrm_user_rcv_calculate_len64(const struct nlmsghdr *src,
+-					    struct nlattr *attrs[XFRMA_MAX+1])
++					    struct nlattr *attrs[XFRMA_MAX + 1],
++					    int maxtype)
+ {
+ 	size_t len = nlmsg_len(src);
+ 
+@@ -358,10 +367,20 @@ static size_t xfrm_user_rcv_calculate_len64(const struct nlmsghdr *src,
+ 	case XFRM_MSG_POLEXPIRE:
+ 		len += 8;
+ 		break;
++	case XFRM_MSG_NEWSPDINFO:
++		/* attirbutes are xfrm_spdattr_type_t, not xfrm_attr_type_t */
++		return len;
+ 	default:
+ 		break;
+ 	}
+ 
++	/* Unexpected for anything, but XFRM_MSG_NEWSPDINFO, please
++	 * correct both 64=>32-bit and 32=>64-bit translators to copy
++	 * new attributes.
++	 */
++	if (WARN_ON_ONCE(maxtype))
++		return len;
++
+ 	if (attrs[XFRMA_SA])
+ 		len += 4;
+ 	if (attrs[XFRMA_POLICY])
+@@ -440,7 +459,8 @@ static int xfrm_xlate32_attr(void *dst, const struct nlattr *nla,
+ 
+ static int xfrm_xlate32(struct nlmsghdr *dst, const struct nlmsghdr *src,
+ 			struct nlattr *attrs[XFRMA_MAX+1],
+-			size_t size, u8 type, struct netlink_ext_ack *extack)
++			size_t size, u8 type, int maxtype,
++			struct netlink_ext_ack *extack)
+ {
+ 	size_t pos;
+ 	int i;
+@@ -520,6 +540,25 @@ static int xfrm_xlate32(struct nlmsghdr *dst, const struct nlmsghdr *src,
+ 	}
+ 	pos = dst->nlmsg_len;
+ 
++	if (maxtype) {
++		/* attirbutes are xfrm_spdattr_type_t, not xfrm_attr_type_t */
++		WARN_ON_ONCE(src->nlmsg_type != XFRM_MSG_NEWSPDINFO);
++
++		for (i = 1; i <= maxtype; i++) {
++			int err;
++
++			if (!attrs[i])
++				continue;
++
++			/* just copy - no need for translation */
++			err = xfrm_attr_cpy32(dst, &pos, attrs[i], size,
++					nla_len(attrs[i]), nla_len(attrs[i]));
++			if (err)
++				return err;
++		}
++		return 0;
++	}
++
+ 	for (i = 1; i < XFRMA_MAX + 1; i++) {
+ 		int err;
+ 
+@@ -564,7 +603,7 @@ static struct nlmsghdr *xfrm_user_rcv_msg_compat(const struct nlmsghdr *h32,
+ 	if (err < 0)
+ 		return ERR_PTR(err);
+ 
+-	len = xfrm_user_rcv_calculate_len64(h32, attrs);
++	len = xfrm_user_rcv_calculate_len64(h32, attrs, maxtype);
+ 	/* The message doesn't need translation */
+ 	if (len == nlmsg_len(h32))
+ 		return NULL;
+@@ -574,7 +613,7 @@ static struct nlmsghdr *xfrm_user_rcv_msg_compat(const struct nlmsghdr *h32,
+ 	if (!h64)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	err = xfrm_xlate32(h64, h32, attrs, len, type, extack);
++	err = xfrm_xlate32(h64, h32, attrs, len, type, maxtype, extack);
+ 	if (err < 0) {
+ 		kvfree(h64);
+ 		return ERR_PTR(err);
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index b74f28cabe24f..3a9831c05ec71 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -155,7 +155,6 @@ static struct xfrm_policy_afinfo const __rcu *xfrm_policy_afinfo[AF_INET6 + 1]
+ 						__read_mostly;
+ 
+ static struct kmem_cache *xfrm_dst_cache __ro_after_init;
+-static __read_mostly seqcount_mutex_t xfrm_policy_hash_generation;
+ 
+ static struct rhashtable xfrm_policy_inexact_table;
+ static const struct rhashtable_params xfrm_pol_inexact_params;
+@@ -585,7 +584,7 @@ static void xfrm_bydst_resize(struct net *net, int dir)
+ 		return;
+ 
+ 	spin_lock_bh(&net->xfrm.xfrm_policy_lock);
+-	write_seqcount_begin(&xfrm_policy_hash_generation);
++	write_seqcount_begin(&net->xfrm.xfrm_policy_hash_generation);
+ 
+ 	odst = rcu_dereference_protected(net->xfrm.policy_bydst[dir].table,
+ 				lockdep_is_held(&net->xfrm.xfrm_policy_lock));
+@@ -596,7 +595,7 @@ static void xfrm_bydst_resize(struct net *net, int dir)
+ 	rcu_assign_pointer(net->xfrm.policy_bydst[dir].table, ndst);
+ 	net->xfrm.policy_bydst[dir].hmask = nhashmask;
+ 
+-	write_seqcount_end(&xfrm_policy_hash_generation);
++	write_seqcount_end(&net->xfrm.xfrm_policy_hash_generation);
+ 	spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+ 
+ 	synchronize_rcu();
+@@ -1245,7 +1244,7 @@ static void xfrm_hash_rebuild(struct work_struct *work)
+ 	} while (read_seqretry(&net->xfrm.policy_hthresh.lock, seq));
+ 
+ 	spin_lock_bh(&net->xfrm.xfrm_policy_lock);
+-	write_seqcount_begin(&xfrm_policy_hash_generation);
++	write_seqcount_begin(&net->xfrm.xfrm_policy_hash_generation);
+ 
+ 	/* make sure that we can insert the indirect policies again before
+ 	 * we start with destructive action.
+@@ -1354,7 +1353,7 @@ static void xfrm_hash_rebuild(struct work_struct *work)
+ 
+ out_unlock:
+ 	__xfrm_policy_inexact_flush(net);
+-	write_seqcount_end(&xfrm_policy_hash_generation);
++	write_seqcount_end(&net->xfrm.xfrm_policy_hash_generation);
+ 	spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+ 
+ 	mutex_unlock(&hash_resize_mutex);
+@@ -2095,9 +2094,9 @@ static struct xfrm_policy *xfrm_policy_lookup_bytype(struct net *net, u8 type,
+ 	rcu_read_lock();
+  retry:
+ 	do {
+-		sequence = read_seqcount_begin(&xfrm_policy_hash_generation);
++		sequence = read_seqcount_begin(&net->xfrm.xfrm_policy_hash_generation);
+ 		chain = policy_hash_direct(net, daddr, saddr, family, dir);
+-	} while (read_seqcount_retry(&xfrm_policy_hash_generation, sequence));
++	} while (read_seqcount_retry(&net->xfrm.xfrm_policy_hash_generation, sequence));
+ 
+ 	ret = NULL;
+ 	hlist_for_each_entry_rcu(pol, chain, bydst) {
+@@ -2128,7 +2127,7 @@ static struct xfrm_policy *xfrm_policy_lookup_bytype(struct net *net, u8 type,
+ 	}
+ 
+ skip_inexact:
+-	if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence))
++	if (read_seqcount_retry(&net->xfrm.xfrm_policy_hash_generation, sequence))
+ 		goto retry;
+ 
+ 	if (ret && !xfrm_pol_hold_rcu(ret))
+@@ -4123,6 +4122,7 @@ static int __net_init xfrm_net_init(struct net *net)
+ 	/* Initialize the per-net locks here */
+ 	spin_lock_init(&net->xfrm.xfrm_state_lock);
+ 	spin_lock_init(&net->xfrm.xfrm_policy_lock);
++	seqcount_spinlock_init(&net->xfrm.xfrm_policy_hash_generation, &net->xfrm.xfrm_policy_lock);
+ 	mutex_init(&net->xfrm.xfrm_cfg_mutex);
+ 
+ 	rv = xfrm_statistics_init(net);
+@@ -4167,7 +4167,6 @@ void __init xfrm_init(void)
+ {
+ 	register_pernet_subsys(&xfrm_net_ops);
+ 	xfrm_dev_init();
+-	seqcount_mutex_init(&xfrm_policy_hash_generation, &hash_resize_mutex);
+ 	xfrm_input_init();
+ 
+ #ifdef CONFIG_XFRM_ESPINTCP
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 45f86a97eaf26..6f97665b632ed 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -2751,6 +2751,16 @@ static int xfrm_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 	err = link->doit(skb, nlh, attrs);
+ 
++	/* We need to free skb allocated in xfrm_alloc_compat() before
++	 * returning from this function, because consume_skb() won't take
++	 * care of frag_list since netlink destructor sets
++	 * sbk->head to NULL. (see netlink_skb_destructor())
++	 */
++	if (skb_has_frag_list(skb)) {
++		kfree_skb(skb_shinfo(skb)->frag_list);
++		skb_shinfo(skb)->frag_list = NULL;
++	}
++
+ err:
+ 	kvfree(nlh64);
+ 	return err;
+diff --git a/scripts/tracing/draw_functrace.py b/scripts/tracing/draw_functrace.py
+index 74f8aadfd4cbc..7011fbe003ff2 100755
+--- a/scripts/tracing/draw_functrace.py
++++ b/scripts/tracing/draw_functrace.py
+@@ -17,7 +17,7 @@ Usage:
+ 	$ cat /sys/kernel/debug/tracing/trace_pipe > ~/raw_trace_func
+ 	Wait some times but not too much, the script is a bit slow.
+ 	Break the pipe (Ctrl + Z)
+-	$ scripts/draw_functrace.py < raw_trace_func > draw_functrace
++	$ scripts/tracing/draw_functrace.py < ~/raw_trace_func > draw_functrace
+ 	Then you have your drawn trace in draw_functrace
+ """
+ 
+@@ -103,10 +103,10 @@ def parseLine(line):
+ 	line = line.strip()
+ 	if line.startswith("#"):
+ 		raise CommentLineException
+-	m = re.match("[^]]+?\\] +([0-9.]+): (\\w+) <-(\\w+)", line)
++	m = re.match("[^]]+?\\] +([a-z.]+) +([0-9.]+): (\\w+) <-(\\w+)", line)
+ 	if m is None:
+ 		raise BrokenLineException
+-	return (m.group(1), m.group(2), m.group(3))
++	return (m.group(2), m.group(3), m.group(4))
+ 
+ 
+ def main():
+diff --git a/security/selinux/ss/policydb.c b/security/selinux/ss/policydb.c
+index 9fccf417006b0..6a04de21343f8 100644
+--- a/security/selinux/ss/policydb.c
++++ b/security/selinux/ss/policydb.c
+@@ -874,7 +874,7 @@ int policydb_load_isids(struct policydb *p, struct sidtab *s)
+ 	rc = sidtab_init(s);
+ 	if (rc) {
+ 		pr_err("SELinux:  out of memory on SID table init\n");
+-		goto out;
++		return rc;
+ 	}
+ 
+ 	head = p->ocontexts[OCON_ISID];
+@@ -885,7 +885,7 @@ int policydb_load_isids(struct policydb *p, struct sidtab *s)
+ 		if (sid == SECSID_NULL) {
+ 			pr_err("SELinux:  SID 0 was assigned a context.\n");
+ 			sidtab_destroy(s);
+-			goto out;
++			return -EINVAL;
+ 		}
+ 
+ 		/* Ignore initial SIDs unused by this kernel. */
+@@ -897,12 +897,10 @@ int policydb_load_isids(struct policydb *p, struct sidtab *s)
+ 			pr_err("SELinux:  unable to load initial SID %s.\n",
+ 			       name);
+ 			sidtab_destroy(s);
+-			goto out;
++			return rc;
+ 		}
+ 	}
+-	rc = 0;
+-out:
+-	return rc;
++	return 0;
+ }
+ 
+ int policydb_class_isvalid(struct policydb *p, unsigned int class)
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 6d1759b9ccb2f..b36c8d94d4dab 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -246,7 +246,7 @@ static bool hw_support_mmap(struct snd_pcm_substream *substream)
+ 	if (!(substream->runtime->hw.info & SNDRV_PCM_INFO_MMAP))
+ 		return false;
+ 
+-	if (substream->ops->mmap)
++	if (substream->ops->mmap || substream->ops->page)
+ 		return true;
+ 
+ 	switch (substream->dma_buffer.dev.type) {
+diff --git a/sound/core/seq/seq_ports.c b/sound/core/seq/seq_ports.c
+index 83be6b982a87c..97e8eb38b0961 100644
+--- a/sound/core/seq/seq_ports.c
++++ b/sound/core/seq/seq_ports.c
+@@ -514,10 +514,11 @@ static int check_and_subscribe_port(struct snd_seq_client *client,
+ 	return err;
+ }
+ 
+-static void delete_and_unsubscribe_port(struct snd_seq_client *client,
+-					struct snd_seq_client_port *port,
+-					struct snd_seq_subscribers *subs,
+-					bool is_src, bool ack)
++/* called with grp->list_mutex held */
++static void __delete_and_unsubscribe_port(struct snd_seq_client *client,
++					  struct snd_seq_client_port *port,
++					  struct snd_seq_subscribers *subs,
++					  bool is_src, bool ack)
+ {
+ 	struct snd_seq_port_subs_info *grp;
+ 	struct list_head *list;
+@@ -525,7 +526,6 @@ static void delete_and_unsubscribe_port(struct snd_seq_client *client,
+ 
+ 	grp = is_src ? &port->c_src : &port->c_dest;
+ 	list = is_src ? &subs->src_list : &subs->dest_list;
+-	down_write(&grp->list_mutex);
+ 	write_lock_irq(&grp->list_lock);
+ 	empty = list_empty(list);
+ 	if (!empty)
+@@ -535,6 +535,18 @@ static void delete_and_unsubscribe_port(struct snd_seq_client *client,
+ 
+ 	if (!empty)
+ 		unsubscribe_port(client, port, grp, &subs->info, ack);
++}
++
++static void delete_and_unsubscribe_port(struct snd_seq_client *client,
++					struct snd_seq_client_port *port,
++					struct snd_seq_subscribers *subs,
++					bool is_src, bool ack)
++{
++	struct snd_seq_port_subs_info *grp;
++
++	grp = is_src ? &port->c_src : &port->c_dest;
++	down_write(&grp->list_mutex);
++	__delete_and_unsubscribe_port(client, port, subs, is_src, ack);
+ 	up_write(&grp->list_mutex);
+ }
+ 
+@@ -590,27 +602,30 @@ int snd_seq_port_disconnect(struct snd_seq_client *connector,
+ 			    struct snd_seq_client_port *dest_port,
+ 			    struct snd_seq_port_subscribe *info)
+ {
+-	struct snd_seq_port_subs_info *src = &src_port->c_src;
++	struct snd_seq_port_subs_info *dest = &dest_port->c_dest;
+ 	struct snd_seq_subscribers *subs;
+ 	int err = -ENOENT;
+ 
+-	down_write(&src->list_mutex);
++	/* always start from deleting the dest port for avoiding concurrent
++	 * deletions
++	 */
++	down_write(&dest->list_mutex);
+ 	/* look for the connection */
+-	list_for_each_entry(subs, &src->list_head, src_list) {
++	list_for_each_entry(subs, &dest->list_head, dest_list) {
+ 		if (match_subs_info(info, &subs->info)) {
+-			atomic_dec(&subs->ref_count); /* mark as not ready */
++			__delete_and_unsubscribe_port(dest_client, dest_port,
++						      subs, false,
++						      connector->number != dest_client->number);
+ 			err = 0;
+ 			break;
+ 		}
+ 	}
+-	up_write(&src->list_mutex);
++	up_write(&dest->list_mutex);
+ 	if (err < 0)
+ 		return err;
+ 
+ 	delete_and_unsubscribe_port(src_client, src_port, subs, true,
+ 				    connector->number != src_client->number);
+-	delete_and_unsubscribe_port(dest_client, dest_port, subs, false,
+-				    connector->number != dest_client->number);
+ 	kfree(subs);
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index bedc2a316adf9..4556ce9a4b158 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8200,9 +8200,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x129c, "Acer SWIFT SF314-55", ALC256_FIXUP_ACER_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x1300, "Acer SWIFT SF314-56", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x142b, "Acer Swift SF314-42", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 258b81b399177..45fc217e4e97b 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -907,7 +907,7 @@ static void usb_audio_disconnect(struct usb_interface *intf)
+ 		}
+ 	}
+ 
+-	if (chip->quirk_type & QUIRK_SETUP_DISABLE_AUTOSUSPEND)
++	if (chip->quirk_type == QUIRK_SETUP_DISABLE_AUTOSUSPEND)
+ 		usb_enable_autosuspend(interface_to_usbdev(intf));
+ 
+ 	chip->num_interfaces--;
+diff --git a/sound/usb/clock.c b/sound/usb/clock.c
+index e3d97e5112fde..514d18a3e07a6 100644
+--- a/sound/usb/clock.c
++++ b/sound/usb/clock.c
+@@ -319,6 +319,12 @@ static int __uac_clock_find_source(struct snd_usb_audio *chip,
+ 					      selector->baCSourceID[ret - 1],
+ 					      visited, validate);
+ 		if (ret > 0) {
++			/*
++			 * For Samsung USBC Headset (AKG), setting clock selector again
++			 * will result in incorrect default clock setting problems
++			 */
++			if (chip->usb_id == USB_ID(0x04e8, 0xa051))
++				return ret;
+ 			err = uac_clock_selector_set_val(chip, entity_id, cur);
+ 			if (err < 0)
+ 				return err;
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 7af97448d09b3..33d185b62a767 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1897,6 +1897,7 @@ static const struct registration_quirk registration_quirks[] = {
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ea, 2),	/* Kingston HyperX Cloud Flight S */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x1f46, 2),	/* JBL Quantum 600 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x2039, 2),	/* JBL Quantum 400 */
++	REG_QUIRK_ENTRY(0x0ecb, 0x203c, 2),	/* JBL Quantum 600 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x203e, 2),	/* JBL Quantum 800 */
+ 	{ 0 }					/* terminator */
+ };
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 2809a69418a65..0e4310c415a8d 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -685,6 +685,8 @@ static void kvm_destroy_vm_debugfs(struct kvm *kvm)
+ 
+ static int kvm_create_vm_debugfs(struct kvm *kvm, int fd)
+ {
++	static DEFINE_MUTEX(kvm_debugfs_lock);
++	struct dentry *dent;
+ 	char dir_name[ITOA_MAX_LEN * 2];
+ 	struct kvm_stat_data *stat_data;
+ 	struct kvm_stats_debugfs_item *p;
+@@ -693,8 +695,20 @@ static int kvm_create_vm_debugfs(struct kvm *kvm, int fd)
+ 		return 0;
+ 
+ 	snprintf(dir_name, sizeof(dir_name), "%d-%d", task_pid_nr(current), fd);
+-	kvm->debugfs_dentry = debugfs_create_dir(dir_name, kvm_debugfs_dir);
++	mutex_lock(&kvm_debugfs_lock);
++	dent = debugfs_lookup(dir_name, kvm_debugfs_dir);
++	if (dent) {
++		pr_warn_ratelimited("KVM: debugfs: duplicate directory %s\n", dir_name);
++		dput(dent);
++		mutex_unlock(&kvm_debugfs_lock);
++		return 0;
++	}
++	dent = debugfs_create_dir(dir_name, kvm_debugfs_dir);
++	mutex_unlock(&kvm_debugfs_lock);
++	if (IS_ERR(dent))
++		return 0;
+ 
++	kvm->debugfs_dentry = dent;
+ 	kvm->debugfs_stat_data = kcalloc(kvm_debugfs_num_entries,
+ 					 sizeof(*kvm->debugfs_stat_data),
+ 					 GFP_KERNEL_ACCOUNT);
+@@ -4698,7 +4712,7 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm)
+ 	}
+ 	add_uevent_var(env, "PID=%d", kvm->userspace_pid);
+ 
+-	if (!IS_ERR_OR_NULL(kvm->debugfs_dentry)) {
++	if (kvm->debugfs_dentry) {
+ 		char *tmp, *p = kmalloc(PATH_MAX, GFP_KERNEL_ACCOUNT);
+ 
+ 		if (p) {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-15 20:05 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-15 20:05 UTC (permalink / raw
  To: gentoo-commits

commit:     70dca92c5946e33e43279219d6b567a7093187b9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 15 20:05:14 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Aug 15 20:05:14 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=70dca92c

Linux patch 5.10.59

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1058_linux-5.10.59.patch | 711 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 715 insertions(+)

diff --git a/0000_README b/0000_README
index 2552ce8..4078503 100644
--- a/0000_README
+++ b/0000_README
@@ -275,6 +275,10 @@ Patch:  1057_linux-5.10.58.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.58
 
+Patch:  1058_linux-5.10.59.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.59
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1058_linux-5.10.59.patch b/1058_linux-5.10.59.patch
new file mode 100644
index 0000000..7187305
--- /dev/null
+++ b/1058_linux-5.10.59.patch
@@ -0,0 +1,711 @@
+diff --git a/Makefile b/Makefile
+index 232dee1140c11..df86b39267cee 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 58
++SUBLEVEL = 59
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+index 597388f871272..bc4bb5dd8bae9 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+@@ -271,12 +271,12 @@
+ &ehci0 {
+ 	dr_mode = "otg";
+ 	status = "okay";
+-	clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>;
++	clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>, <&usb2_clksel>, <&versaclock5 3>;
+ };
+ 
+ &ehci1 {
+ 	status = "okay";
+-	clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>;
++	clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>, <&usb2_clksel>, <&versaclock5 3>;
+ };
+ 
+ &hdmi0 {
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
+index 289cf711307d6..e3773b05c403b 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
+@@ -295,8 +295,10 @@
+ 	status = "okay";
+ };
+ 
+-&usb_extal_clk {
+-	clock-frequency = <50000000>;
++&usb2_clksel {
++	clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>,
++		  <&versaclock5 3>, <&usb3s0_clk>;
++	status = "okay";
+ };
+ 
+ &usb3s0_clk {
+diff --git a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+index db091fa751151..c58a0846db502 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+@@ -836,6 +836,21 @@
+ 			status = "disabled";
+ 		};
+ 
++		usb2_clksel: clock-controller@e6590630 {
++			compatible = "renesas,r8a774a1-rcar-usb2-clock-sel",
++				     "renesas,rcar-gen3-usb2-clock-sel";
++			reg = <0 0xe6590630 0 0x02>;
++			clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>,
++				 <&usb_extal_clk>, <&usb3s0_clk>;
++			clock-names = "ehci_ohci", "hs-usb-if",
++				      "usb_extal", "usb_xtal";
++			#clock-cells = <0>;
++			power-domains = <&sysc R8A774A1_PD_ALWAYS_ON>;
++			resets = <&cpg 703>, <&cpg 704>;
++			reset-names = "ehci_ohci", "hs-usb-if";
++			status = "disabled";
++		};
++
+ 		usb_dmac0: dma-controller@e65a0000 {
+ 			compatible = "renesas,r8a774a1-usb-dmac",
+ 				     "renesas,usb-dmac";
+diff --git a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+index 39a1a26ffb546..9ebf6e58ba31c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+@@ -709,6 +709,21 @@
+ 			status = "disabled";
+ 		};
+ 
++		usb2_clksel: clock-controller@e6590630 {
++			compatible = "renesas,r8a774b1-rcar-usb2-clock-sel",
++				     "renesas,rcar-gen3-usb2-clock-sel";
++			reg = <0 0xe6590630 0 0x02>;
++			clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>,
++				 <&usb_extal_clk>, <&usb3s0_clk>;
++			clock-names = "ehci_ohci", "hs-usb-if",
++				      "usb_extal", "usb_xtal";
++			#clock-cells = <0>;
++			power-domains = <&sysc R8A774B1_PD_ALWAYS_ON>;
++			resets = <&cpg 703>, <&cpg 704>;
++			reset-names = "ehci_ohci", "hs-usb-if";
++			status = "disabled";
++		};
++
+ 		usb_dmac0: dma-controller@e65a0000 {
+ 			compatible = "renesas,r8a774b1-usb-dmac",
+ 				     "renesas,usb-dmac";
+diff --git a/arch/arm64/boot/dts/renesas/r8a774e1.dtsi b/arch/arm64/boot/dts/renesas/r8a774e1.dtsi
+index c29643442e91f..708258696b4f4 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774e1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774e1.dtsi
+@@ -890,6 +890,21 @@
+ 			status = "disabled";
+ 		};
+ 
++		usb2_clksel: clock-controller@e6590630 {
++			compatible = "renesas,r8a774e1-rcar-usb2-clock-sel",
++				     "renesas,rcar-gen3-usb2-clock-sel";
++			reg = <0 0xe6590630 0 0x02>;
++			clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>,
++				 <&usb_extal_clk>, <&usb3s0_clk>;
++			clock-names = "ehci_ohci", "hs-usb-if",
++				      "usb_extal", "usb_xtal";
++			#clock-cells = <0>;
++			power-domains = <&sysc R8A774E1_PD_ALWAYS_ON>;
++			resets = <&cpg 703>, <&cpg 704>;
++			reset-names = "ehci_ohci", "hs-usb-if";
++			status = "disabled";
++		};
++
+ 		usb_dmac0: dma-controller@e65a0000 {
+ 			compatible = "renesas,r8a774e1-usb-dmac",
+ 				     "renesas,usb-dmac";
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 01547bdbfb061..6c82ef22985d9 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -124,7 +124,7 @@ static void sev_asid_free(int asid)
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		sd = per_cpu(svm_data, cpu);
+-		sd->sev_vmcbs[pos] = NULL;
++		sd->sev_vmcbs[asid] = NULL;
+ 	}
+ 
+ 	mutex_unlock(&sev_bitmap_lock);
+diff --git a/drivers/firmware/broadcom/tee_bnxt_fw.c b/drivers/firmware/broadcom/tee_bnxt_fw.c
+index ed10da5313e86..a5bf4c3f6dc74 100644
+--- a/drivers/firmware/broadcom/tee_bnxt_fw.c
++++ b/drivers/firmware/broadcom/tee_bnxt_fw.c
+@@ -212,10 +212,9 @@ static int tee_bnxt_fw_probe(struct device *dev)
+ 
+ 	pvt_data.dev = dev;
+ 
+-	fw_shm_pool = tee_shm_alloc(pvt_data.ctx, MAX_SHM_MEM_SZ,
+-				    TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
++	fw_shm_pool = tee_shm_alloc_kernel_buf(pvt_data.ctx, MAX_SHM_MEM_SZ);
+ 	if (IS_ERR(fw_shm_pool)) {
+-		dev_err(pvt_data.dev, "tee_shm_alloc failed\n");
++		dev_err(pvt_data.dev, "tee_shm_alloc_kernel_buf failed\n");
+ 		err = PTR_ERR(fw_shm_pool);
+ 		goto out_sess;
+ 	}
+@@ -242,6 +241,14 @@ static int tee_bnxt_fw_remove(struct device *dev)
+ 	return 0;
+ }
+ 
++static void tee_bnxt_fw_shutdown(struct device *dev)
++{
++	tee_shm_free(pvt_data.fw_shm_pool);
++	tee_client_close_session(pvt_data.ctx, pvt_data.session_id);
++	tee_client_close_context(pvt_data.ctx);
++	pvt_data.ctx = NULL;
++}
++
+ static const struct tee_client_device_id tee_bnxt_fw_id_table[] = {
+ 	{UUID_INIT(0x6272636D, 0x2019, 0x0716,
+ 		    0x42, 0x43, 0x4D, 0x5F, 0x53, 0x43, 0x48, 0x49)},
+@@ -257,6 +264,7 @@ static struct tee_client_driver tee_bnxt_fw_driver = {
+ 		.bus		= &tee_bus_type,
+ 		.probe		= tee_bnxt_fw_probe,
+ 		.remove		= tee_bnxt_fw_remove,
++		.shutdown	= tee_bnxt_fw_shutdown,
+ 	},
+ };
+ 
+diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+index 0c26f5bcc523a..962831cdde4db 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+@@ -1191,9 +1191,8 @@ static int xemaclite_of_probe(struct platform_device *ofdev)
+ 	}
+ 
+ 	dev_info(dev,
+-		 "Xilinx EmacLite at 0x%08X mapped to 0x%08X, irq=%d\n",
+-		 (unsigned int __force)ndev->mem_start,
+-		 (unsigned int __force)lp->base_addr, ndev->irq);
++		 "Xilinx EmacLite at 0x%08X mapped to 0x%p, irq=%d\n",
++		 (unsigned int __force)ndev->mem_start, lp->base_addr, ndev->irq);
+ 	return 0;
+ 
+ error:
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index 7d005896a0f93..f7a13529e4add 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -283,7 +283,7 @@ static struct channel *ppp_find_channel(struct ppp_net *pn, int unit);
+ static int ppp_connect_channel(struct channel *pch, int unit);
+ static int ppp_disconnect_channel(struct channel *pch);
+ static void ppp_destroy_channel(struct channel *pch);
+-static int unit_get(struct idr *p, void *ptr);
++static int unit_get(struct idr *p, void *ptr, int min);
+ static int unit_set(struct idr *p, void *ptr, int n);
+ static void unit_put(struct idr *p, int n);
+ static void *unit_find(struct idr *p, int n);
+@@ -1045,9 +1045,20 @@ static int ppp_unit_register(struct ppp *ppp, int unit, bool ifname_is_set)
+ 	mutex_lock(&pn->all_ppp_mutex);
+ 
+ 	if (unit < 0) {
+-		ret = unit_get(&pn->units_idr, ppp);
++		ret = unit_get(&pn->units_idr, ppp, 0);
+ 		if (ret < 0)
+ 			goto err;
++		if (!ifname_is_set) {
++			while (1) {
++				snprintf(ppp->dev->name, IFNAMSIZ, "ppp%i", ret);
++				if (!__dev_get_by_name(ppp->ppp_net, ppp->dev->name))
++					break;
++				unit_put(&pn->units_idr, ret);
++				ret = unit_get(&pn->units_idr, ppp, ret + 1);
++				if (ret < 0)
++					goto err;
++			}
++		}
+ 	} else {
+ 		/* Caller asked for a specific unit number. Fail with -EEXIST
+ 		 * if unavailable. For backward compatibility, return -EEXIST
+@@ -3378,9 +3389,9 @@ static int unit_set(struct idr *p, void *ptr, int n)
+ }
+ 
+ /* get new free unit number and associate pointer with it */
+-static int unit_get(struct idr *p, void *ptr)
++static int unit_get(struct idr *p, void *ptr, int min)
+ {
+-	return idr_alloc(p, ptr, 0, 0, GFP_KERNEL);
++	return idr_alloc(p, ptr, min, 0, GFP_KERNEL);
+ }
+ 
+ /* put unit number back to a pool */
+diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c
+index 1231ce56e7123..f8f1594bea435 100644
+--- a/drivers/tee/optee/call.c
++++ b/drivers/tee/optee/call.c
+@@ -181,7 +181,7 @@ static struct tee_shm *get_msg_arg(struct tee_context *ctx, size_t num_params,
+ 	struct optee_msg_arg *ma;
+ 
+ 	shm = tee_shm_alloc(ctx, OPTEE_MSG_GET_ARG_SIZE(num_params),
+-			    TEE_SHM_MAPPED);
++			    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 	if (IS_ERR(shm))
+ 		return shm;
+ 
+diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c
+index 7b17248f1527c..823a81d8ff0ed 100644
+--- a/drivers/tee/optee/core.c
++++ b/drivers/tee/optee/core.c
+@@ -278,7 +278,8 @@ static void optee_release(struct tee_context *ctx)
+ 	if (!ctxdata)
+ 		return;
+ 
+-	shm = tee_shm_alloc(ctx, sizeof(struct optee_msg_arg), TEE_SHM_MAPPED);
++	shm = tee_shm_alloc(ctx, sizeof(struct optee_msg_arg),
++			    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 	if (!IS_ERR(shm)) {
+ 		arg = tee_shm_get_va(shm, 0);
+ 		/*
+diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c
+index 6cbb3643c6c48..9dbdd783d6f2d 100644
+--- a/drivers/tee/optee/rpc.c
++++ b/drivers/tee/optee/rpc.c
+@@ -313,7 +313,7 @@ static void handle_rpc_func_cmd_shm_alloc(struct tee_context *ctx,
+ 		shm = cmd_alloc_suppl(ctx, sz);
+ 		break;
+ 	case OPTEE_MSG_RPC_SHM_TYPE_KERNEL:
+-		shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED);
++		shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 		break;
+ 	default:
+ 		arg->ret = TEEC_ERROR_BAD_PARAMETERS;
+@@ -501,7 +501,8 @@ void optee_handle_rpc(struct tee_context *ctx, struct optee_rpc_param *param,
+ 
+ 	switch (OPTEE_SMC_RETURN_GET_RPC_FUNC(param->a0)) {
+ 	case OPTEE_SMC_RPC_FUNC_ALLOC:
+-		shm = tee_shm_alloc(ctx, param->a1, TEE_SHM_MAPPED);
++		shm = tee_shm_alloc(ctx, param->a1,
++				    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 		if (!IS_ERR(shm) && !tee_shm_get_pa(shm, 0, &pa)) {
+ 			reg_pair_from_64(&param->a1, &param->a2, pa);
+ 			reg_pair_from_64(&param->a4, &param->a5,
+diff --git a/drivers/tee/optee/shm_pool.c b/drivers/tee/optee/shm_pool.c
+index da06ce9b9313e..c41a9a501a6e9 100644
+--- a/drivers/tee/optee/shm_pool.c
++++ b/drivers/tee/optee/shm_pool.c
+@@ -27,7 +27,11 @@ static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
+ 	shm->paddr = page_to_phys(page);
+ 	shm->size = PAGE_SIZE << order;
+ 
+-	if (shm->flags & TEE_SHM_DMA_BUF) {
++	/*
++	 * Shared memory private to the OP-TEE driver doesn't need
++	 * to be registered with OP-TEE.
++	 */
++	if (!(shm->flags & TEE_SHM_PRIV)) {
+ 		unsigned int nr_pages = 1 << order, i;
+ 		struct page **pages;
+ 
+@@ -60,7 +64,7 @@ err:
+ static void pool_op_free(struct tee_shm_pool_mgr *poolm,
+ 			 struct tee_shm *shm)
+ {
+-	if (shm->flags & TEE_SHM_DMA_BUF)
++	if (!(shm->flags & TEE_SHM_PRIV))
+ 		optee_shm_unregister(shm->ctx, shm);
+ 
+ 	free_pages((unsigned long)shm->kaddr, get_order(shm->size));
+diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
+index c65e44707cd69..8a9384a64f3e2 100644
+--- a/drivers/tee/tee_shm.c
++++ b/drivers/tee/tee_shm.c
+@@ -117,7 +117,7 @@ struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+-	if ((flags & ~(TEE_SHM_MAPPED | TEE_SHM_DMA_BUF))) {
++	if ((flags & ~(TEE_SHM_MAPPED | TEE_SHM_DMA_BUF | TEE_SHM_PRIV))) {
+ 		dev_err(teedev->dev.parent, "invalid shm flags 0x%x", flags);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+@@ -207,7 +207,7 @@ EXPORT_SYMBOL_GPL(tee_shm_alloc);
+  */
+ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
+ {
+-	return tee_shm_alloc(ctx, size, TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
++	return tee_shm_alloc(ctx, size, TEE_SHM_MAPPED);
+ }
+ EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf);
+ 
+diff --git a/drivers/usb/host/ehci-pci.c b/drivers/usb/host/ehci-pci.c
+index 71ec3025686fe..e87cf3a00fa4b 100644
+--- a/drivers/usb/host/ehci-pci.c
++++ b/drivers/usb/host/ehci-pci.c
+@@ -297,6 +297,9 @@ static int ehci_pci_setup(struct usb_hcd *hcd)
+ 	if (pdev->vendor == PCI_VENDOR_ID_STMICRO
+ 	    && pdev->device == PCI_DEVICE_ID_STMICRO_USB_HOST)
+ 		;	/* ConneXT has no sbrn register */
++	else if (pdev->vendor == PCI_VENDOR_ID_HUAWEI
++			 && pdev->device == 0xa239)
++		;	/* HUAWEI Kunpeng920 USB EHCI has no sbrn register */
+ 	else
+ 		pci_read_config_byte(pdev, 0x60, &ehci->sbrn);
+ 
+diff --git a/fs/namespace.c b/fs/namespace.c
+index c7fbb50a5aaa5..175312428cdf6 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1919,6 +1919,20 @@ void drop_collected_mounts(struct vfsmount *mnt)
+ 	namespace_unlock();
+ }
+ 
++static bool has_locked_children(struct mount *mnt, struct dentry *dentry)
++{
++	struct mount *child;
++
++	list_for_each_entry(child, &mnt->mnt_mounts, mnt_child) {
++		if (!is_subdir(child->mnt_mountpoint, dentry))
++			continue;
++
++		if (child->mnt.mnt_flags & MNT_LOCKED)
++			return true;
++	}
++	return false;
++}
++
+ /**
+  * clone_private_mount - create a private clone of a path
+  *
+@@ -1933,10 +1947,19 @@ struct vfsmount *clone_private_mount(const struct path *path)
+ 	struct mount *old_mnt = real_mount(path->mnt);
+ 	struct mount *new_mnt;
+ 
++	down_read(&namespace_sem);
+ 	if (IS_MNT_UNBINDABLE(old_mnt))
+-		return ERR_PTR(-EINVAL);
++		goto invalid;
++
++	if (!check_mnt(old_mnt))
++		goto invalid;
++
++	if (has_locked_children(old_mnt, path->dentry))
++		goto invalid;
+ 
+ 	new_mnt = clone_mnt(old_mnt, path->dentry, CL_PRIVATE);
++	up_read(&namespace_sem);
++
+ 	if (IS_ERR(new_mnt))
+ 		return ERR_CAST(new_mnt);
+ 
+@@ -1944,6 +1967,10 @@ struct vfsmount *clone_private_mount(const struct path *path)
+ 	new_mnt->mnt_ns = MNT_NS_INTERNAL;
+ 
+ 	return &new_mnt->mnt;
++
++invalid:
++	up_read(&namespace_sem);
++	return ERR_PTR(-EINVAL);
+ }
+ EXPORT_SYMBOL_GPL(clone_private_mount);
+ 
+@@ -2295,19 +2322,6 @@ static int do_change_type(struct path *path, int ms_flags)
+ 	return err;
+ }
+ 
+-static bool has_locked_children(struct mount *mnt, struct dentry *dentry)
+-{
+-	struct mount *child;
+-	list_for_each_entry(child, &mnt->mnt_mounts, mnt_child) {
+-		if (!is_subdir(child->mnt_mountpoint, dentry))
+-			continue;
+-
+-		if (child->mnt.mnt_flags & MNT_LOCKED)
+-			return true;
+-	}
+-	return false;
+-}
+-
+ static struct mount *__do_loopback(struct path *old_path, int recurse)
+ {
+ 	struct mount *mnt = ERR_PTR(-EINVAL), *old = real_mount(old_path->mnt);
+diff --git a/fs/vboxsf/dir.c b/fs/vboxsf/dir.c
+index 4d569f14a8d80..0664787f2b74f 100644
+--- a/fs/vboxsf/dir.c
++++ b/fs/vboxsf/dir.c
+@@ -253,7 +253,7 @@ static int vboxsf_dir_instantiate(struct inode *parent, struct dentry *dentry,
+ }
+ 
+ static int vboxsf_dir_create(struct inode *parent, struct dentry *dentry,
+-			     umode_t mode, int is_dir)
++			     umode_t mode, bool is_dir, bool excl, u64 *handle_ret)
+ {
+ 	struct vboxsf_inode *sf_parent_i = VBOXSF_I(parent);
+ 	struct vboxsf_sbi *sbi = VBOXSF_SBI(parent->i_sb);
+@@ -261,10 +261,12 @@ static int vboxsf_dir_create(struct inode *parent, struct dentry *dentry,
+ 	int err;
+ 
+ 	params.handle = SHFL_HANDLE_NIL;
+-	params.create_flags = SHFL_CF_ACT_CREATE_IF_NEW |
+-			      SHFL_CF_ACT_FAIL_IF_EXISTS |
+-			      SHFL_CF_ACCESS_READWRITE |
+-			      (is_dir ? SHFL_CF_DIRECTORY : 0);
++	params.create_flags = SHFL_CF_ACT_CREATE_IF_NEW | SHFL_CF_ACCESS_READWRITE;
++	if (is_dir)
++		params.create_flags |= SHFL_CF_DIRECTORY;
++	if (excl)
++		params.create_flags |= SHFL_CF_ACT_FAIL_IF_EXISTS;
++
+ 	params.info.attr.mode = (mode & 0777) |
+ 				(is_dir ? SHFL_TYPE_DIRECTORY : SHFL_TYPE_FILE);
+ 	params.info.attr.additional = SHFLFSOBJATTRADD_NOTHING;
+@@ -276,28 +278,32 @@ static int vboxsf_dir_create(struct inode *parent, struct dentry *dentry,
+ 	if (params.result != SHFL_FILE_CREATED)
+ 		return -EPERM;
+ 
+-	vboxsf_close(sbi->root, params.handle);
+-
+ 	err = vboxsf_dir_instantiate(parent, dentry, &params.info);
+ 	if (err)
+-		return err;
++		goto out;
+ 
+ 	/* parent directory access/change time changed */
+ 	sf_parent_i->force_restat = 1;
+ 
+-	return 0;
++out:
++	if (err == 0 && handle_ret)
++		*handle_ret = params.handle;
++	else
++		vboxsf_close(sbi->root, params.handle);
++
++	return err;
+ }
+ 
+ static int vboxsf_dir_mkfile(struct inode *parent, struct dentry *dentry,
+ 			     umode_t mode, bool excl)
+ {
+-	return vboxsf_dir_create(parent, dentry, mode, 0);
++	return vboxsf_dir_create(parent, dentry, mode, false, excl, NULL);
+ }
+ 
+ static int vboxsf_dir_mkdir(struct inode *parent, struct dentry *dentry,
+ 			    umode_t mode)
+ {
+-	return vboxsf_dir_create(parent, dentry, mode, 1);
++	return vboxsf_dir_create(parent, dentry, mode, true, true, NULL);
+ }
+ 
+ static int vboxsf_dir_unlink(struct inode *parent, struct dentry *dentry)
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index 9d0c454d23cd6..63b550403317a 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -445,7 +445,7 @@ struct zone {
+ 	 */
+ 	long lowmem_reserve[MAX_NR_ZONES];
+ 
+-#ifdef CONFIG_NUMA
++#ifdef CONFIG_NEED_MULTIPLE_NODES
+ 	int node;
+ #endif
+ 	struct pglist_data	*zone_pgdat;
+@@ -896,7 +896,7 @@ static inline bool populated_zone(struct zone *zone)
+ 	return zone->present_pages;
+ }
+ 
+-#ifdef CONFIG_NUMA
++#ifdef CONFIG_NEED_MULTIPLE_NODES
+ static inline int zone_to_nid(struct zone *zone)
+ {
+ 	return zone->node;
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 39642626a707c..7ef74d01b8e74 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -120,6 +120,7 @@ enum lockdown_reason {
+ 	LOCKDOWN_MMIOTRACE,
+ 	LOCKDOWN_DEBUGFS,
+ 	LOCKDOWN_XMON_WR,
++	LOCKDOWN_BPF_WRITE_USER,
+ 	LOCKDOWN_INTEGRITY_MAX,
+ 	LOCKDOWN_KCORE,
+ 	LOCKDOWN_KPROBES,
+diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
+index 9b24cc3d3024f..459e9a76d7e6f 100644
+--- a/include/linux/tee_drv.h
++++ b/include/linux/tee_drv.h
+@@ -27,6 +27,7 @@
+ #define TEE_SHM_USER_MAPPED	BIT(4)  /* Memory mapped in user space */
+ #define TEE_SHM_POOL		BIT(5)  /* Memory allocated from pool */
+ #define TEE_SHM_KERNEL_MAPPED	BIT(6)  /* Memory mapped in kernel space */
++#define TEE_SHM_PRIV		BIT(7)  /* Memory private to TEE driver */
+ 
+ struct device;
+ struct tee_device;
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 216329c23f18a..ba644760f5076 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1272,12 +1272,13 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+ 		return &bpf_get_numa_node_id_proto;
+ 	case BPF_FUNC_perf_event_read:
+ 		return &bpf_perf_event_read_proto;
+-	case BPF_FUNC_probe_write_user:
+-		return bpf_get_probe_write_proto();
+ 	case BPF_FUNC_current_task_under_cgroup:
+ 		return &bpf_current_task_under_cgroup_proto;
+ 	case BPF_FUNC_get_prandom_u32:
+ 		return &bpf_get_prandom_u32_proto;
++	case BPF_FUNC_probe_write_user:
++		return security_locked_down(LOCKDOWN_BPF_WRITE_USER) < 0 ?
++		       NULL : bpf_get_probe_write_proto();
+ 	case BPF_FUNC_probe_read_user:
+ 		return &bpf_probe_read_user_proto;
+ 	case BPF_FUNC_probe_read_kernel:
+diff --git a/security/security.c b/security/security.c
+index a28045dc9e7f6..1c696bce8e1c9 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -58,6 +58,7 @@ const char *const lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX+1] = {
+ 	[LOCKDOWN_MMIOTRACE] = "unsafe mmio",
+ 	[LOCKDOWN_DEBUGFS] = "debugfs access",
+ 	[LOCKDOWN_XMON_WR] = "xmon write access",
++	[LOCKDOWN_BPF_WRITE_USER] = "use of bpf to write user RAM",
+ 	[LOCKDOWN_INTEGRITY_MAX] = "integrity",
+ 	[LOCKDOWN_KCORE] = "/proc/kcore access",
+ 	[LOCKDOWN_KPROBES] = "use of kprobes",
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index b36c8d94d4dab..c5ef5182fcf19 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -251,7 +251,10 @@ static bool hw_support_mmap(struct snd_pcm_substream *substream)
+ 
+ 	switch (substream->dma_buffer.dev.type) {
+ 	case SNDRV_DMA_TYPE_UNKNOWN:
+-		return false;
++		/* we can't know the device, so just assume that the driver does
++		 * everything right
++		 */
++		return true;
+ 	case SNDRV_DMA_TYPE_CONTINUOUS:
+ 	case SNDRV_DMA_TYPE_VMALLOC:
+ 		return true;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 4556ce9a4b158..beb5fb03e3884 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8357,6 +8357,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
++	SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+@@ -8389,6 +8390,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS),
+ 	SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
++	SND_PCI_QUIRK(0x1043, 0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
+diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
+index 9dcc96e1ad3d7..36da6136af968 100644
+--- a/tools/testing/selftests/resctrl/resctrl.h
++++ b/tools/testing/selftests/resctrl/resctrl.h
+@@ -28,10 +28,6 @@
+ #define RESCTRL_PATH		"/sys/fs/resctrl"
+ #define PHYS_ID_PATH		"/sys/devices/system/cpu/cpu"
+ #define CBM_MASK_PATH		"/sys/fs/resctrl/info"
+-#define L3_PATH			"/sys/fs/resctrl/info/L3"
+-#define MB_PATH			"/sys/fs/resctrl/info/MB"
+-#define L3_MON_PATH		"/sys/fs/resctrl/info/L3_MON"
+-#define L3_MON_FEATURES_PATH	"/sys/fs/resctrl/info/L3_MON/mon_features"
+ 
+ #define PARENT_EXIT(err_msg)			\
+ 	do {					\
+@@ -83,7 +79,7 @@ int remount_resctrlfs(bool mum_resctrlfs);
+ int get_resource_id(int cpu_no, int *resource_id);
+ int umount_resctrlfs(void);
+ int validate_bw_report_request(char *bw_report);
+-bool validate_resctrl_feature_request(const char *resctrl_val);
++bool validate_resctrl_feature_request(char *resctrl_val);
+ char *fgrep(FILE *inf, const char *str);
+ int taskset_benchmark(pid_t bm_pid, int cpu_no);
+ void run_benchmark(int signum, siginfo_t *info, void *ucontext);
+diff --git a/tools/testing/selftests/resctrl/resctrlfs.c b/tools/testing/selftests/resctrl/resctrlfs.c
+index b57170f53861d..4174e48e06d1e 100644
+--- a/tools/testing/selftests/resctrl/resctrlfs.c
++++ b/tools/testing/selftests/resctrl/resctrlfs.c
+@@ -616,56 +616,26 @@ char *fgrep(FILE *inf, const char *str)
+  * validate_resctrl_feature_request - Check if requested feature is valid.
+  * @resctrl_val:	Requested feature
+  *
+- * Return: True if the feature is supported, else false
++ * Return: 0 on success, non-zero on failure
+  */
+-bool validate_resctrl_feature_request(const char *resctrl_val)
++bool validate_resctrl_feature_request(char *resctrl_val)
+ {
+-	struct stat statbuf;
++	FILE *inf = fopen("/proc/cpuinfo", "r");
+ 	bool found = false;
+ 	char *res;
+-	FILE *inf;
+ 
+-	if (!resctrl_val)
++	if (!inf)
+ 		return false;
+ 
+-	if (remount_resctrlfs(false))
+-		return false;
++	res = fgrep(inf, "flags");
+ 
+-	if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR))) {
+-		if (!stat(L3_PATH, &statbuf))
+-			return true;
+-	} else if (!strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR))) {
+-		if (!stat(MB_PATH, &statbuf))
+-			return true;
+-	} else if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)) ||
+-		   !strncmp(resctrl_val, CMT_STR, sizeof(CMT_STR))) {
+-		if (!stat(L3_MON_PATH, &statbuf)) {
+-			inf = fopen(L3_MON_FEATURES_PATH, "r");
+-			if (!inf)
+-				return false;
+-
+-			if (!strncmp(resctrl_val, CMT_STR, sizeof(CMT_STR))) {
+-				res = fgrep(inf, "llc_occupancy");
+-				if (res) {
+-					found = true;
+-					free(res);
+-				}
+-			}
+-
+-			if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR))) {
+-				res = fgrep(inf, "mbm_total_bytes");
+-				if (res) {
+-					free(res);
+-					res = fgrep(inf, "mbm_local_bytes");
+-					if (res) {
+-						found = true;
+-						free(res);
+-					}
+-				}
+-			}
+-			fclose(inf);
+-		}
++	if (res) {
++		char *s = strchr(res, ':');
++
++		found = s && !strstr(s, resctrl_val);
++		free(res);
+ 	}
++	fclose(inf);
+ 
+ 	return found;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-18 12:46 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-18 12:46 UTC (permalink / raw
  To: gentoo-commits

commit:     fe1524b63f9d280cafc9779e0bff480987e27ffa
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 18 12:46:04 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 18 12:46:04 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fe1524b6

Linux patch 5.10.60

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1059_linux-5.10.60.patch | 3494 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3498 insertions(+)

diff --git a/0000_README b/0000_README
index 4078503..c48c49d 100644
--- a/0000_README
+++ b/0000_README
@@ -279,6 +279,10 @@ Patch:  1058_linux-5.10.59.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.59
 
+Patch:  1059_linux-5.10.60.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.60
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1059_linux-5.10.60.patch b/1059_linux-5.10.60.patch
new file mode 100644
index 0000000..bd1ff05
--- /dev/null
+++ b/1059_linux-5.10.60.patch
@@ -0,0 +1,3494 @@
+diff --git a/Makefile b/Makefile
+index df86b39267cee..7f25cfee84ece 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 59
++SUBLEVEL = 60
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arc/kernel/fpu.c b/arch/arc/kernel/fpu.c
+index c67c0f0f5f778..ec640219d989f 100644
+--- a/arch/arc/kernel/fpu.c
++++ b/arch/arc/kernel/fpu.c
+@@ -57,23 +57,26 @@ void fpu_save_restore(struct task_struct *prev, struct task_struct *next)
+ 
+ void fpu_init_task(struct pt_regs *regs)
+ {
++	const unsigned int fwe = 0x80000000;
++
+ 	/* default rounding mode */
+ 	write_aux_reg(ARC_REG_FPU_CTRL, 0x100);
+ 
+-	/* set "Write enable" to allow explicit write to exception flags */
+-	write_aux_reg(ARC_REG_FPU_STATUS, 0x80000000);
++	/* Initialize to zero: setting requires FWE be set */
++	write_aux_reg(ARC_REG_FPU_STATUS, fwe);
+ }
+ 
+ void fpu_save_restore(struct task_struct *prev, struct task_struct *next)
+ {
+ 	struct arc_fpu *save = &prev->thread.fpu;
+ 	struct arc_fpu *restore = &next->thread.fpu;
++	const unsigned int fwe = 0x80000000;
+ 
+ 	save->ctrl = read_aux_reg(ARC_REG_FPU_CTRL);
+ 	save->status = read_aux_reg(ARC_REG_FPU_STATUS);
+ 
+ 	write_aux_reg(ARC_REG_FPU_CTRL, restore->ctrl);
+-	write_aux_reg(ARC_REG_FPU_STATUS, restore->status);
++	write_aux_reg(ARC_REG_FPU_STATUS, (fwe | restore->status));
+ }
+ 
+ #endif
+diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
+index e8c2a6373157d..00fafc8b249eb 100644
+--- a/arch/powerpc/kernel/kprobes.c
++++ b/arch/powerpc/kernel/kprobes.c
+@@ -276,7 +276,8 @@ int kprobe_handler(struct pt_regs *regs)
+ 	if (user_mode(regs))
+ 		return 0;
+ 
+-	if (!(regs->msr & MSR_IR) || !(regs->msr & MSR_DR))
++	if (!IS_ENABLED(CONFIG_BOOKE) &&
++	    (!(regs->msr & MSR_IR) || !(regs->msr & MSR_DR)))
+ 		return 0;
+ 
+ 	/*
+diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
+index 2e08640bb3b4b..d36e71ba002c1 100644
+--- a/arch/powerpc/kernel/sysfs.c
++++ b/arch/powerpc/kernel/sysfs.c
+@@ -1167,7 +1167,7 @@ static int __init topology_init(void)
+ 		 * CPU.  For instance, the boot cpu might never be valid
+ 		 * for hotplugging.
+ 		 */
+-		if (smp_ops->cpu_offline_self)
++		if (smp_ops && smp_ops->cpu_offline_self)
+ 			c->hotpluggable = 1;
+ #endif
+ 
+diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
+index 71d630bb5e086..f8fad50502ad6 100644
+--- a/arch/x86/include/asm/svm.h
++++ b/arch/x86/include/asm/svm.h
+@@ -166,6 +166,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
+ #define V_IGN_TPR_SHIFT 20
+ #define V_IGN_TPR_MASK (1 << V_IGN_TPR_SHIFT)
+ 
++#define V_IRQ_INJECTION_BITS_MASK (V_IRQ_MASK | V_INTR_PRIO_MASK | V_IGN_TPR_MASK)
++
+ #define V_INTR_MASKING_SHIFT 24
+ #define V_INTR_MASKING_MASK (1 << V_INTR_MASKING_SHIFT)
+ 
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 0d4818eab0da8..25b1d5c6af969 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -1948,7 +1948,8 @@ static struct irq_chip ioapic_chip __read_mostly = {
+ 	.irq_set_affinity	= ioapic_set_affinity,
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+ 	.irq_get_irqchip_state	= ioapic_irq_get_chip_state,
+-	.flags			= IRQCHIP_SKIP_SET_WAKE,
++	.flags			= IRQCHIP_SKIP_SET_WAKE |
++				  IRQCHIP_AFFINITY_PRE_STARTUP,
+ };
+ 
+ static struct irq_chip ioapic_ir_chip __read_mostly = {
+@@ -1961,7 +1962,8 @@ static struct irq_chip ioapic_ir_chip __read_mostly = {
+ 	.irq_set_affinity	= ioapic_set_affinity,
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+ 	.irq_get_irqchip_state	= ioapic_irq_get_chip_state,
+-	.flags			= IRQCHIP_SKIP_SET_WAKE,
++	.flags			= IRQCHIP_SKIP_SET_WAKE |
++				  IRQCHIP_AFFINITY_PRE_STARTUP,
+ };
+ 
+ static inline void init_IO_APIC_traps(void)
+diff --git a/arch/x86/kernel/apic/msi.c b/arch/x86/kernel/apic/msi.c
+index 6313f0a05db7a..6bd98a20fc90d 100644
+--- a/arch/x86/kernel/apic/msi.c
++++ b/arch/x86/kernel/apic/msi.c
+@@ -86,11 +86,13 @@ msi_set_affinity(struct irq_data *irqd, const struct cpumask *mask, bool force)
+ 	 *   The quirk bit is not set in this case.
+ 	 * - The new vector is the same as the old vector
+ 	 * - The old vector is MANAGED_IRQ_SHUTDOWN_VECTOR (interrupt starts up)
++	 * - The interrupt is not yet started up
+ 	 * - The new destination CPU is the same as the old destination CPU
+ 	 */
+ 	if (!irqd_msi_nomask_quirk(irqd) ||
+ 	    cfg->vector == old_cfg.vector ||
+ 	    old_cfg.vector == MANAGED_IRQ_SHUTDOWN_VECTOR ||
++	    !irqd_is_started(irqd) ||
+ 	    cfg->dest_apicid == old_cfg.dest_apicid) {
+ 		irq_msi_update_msg(irqd, cfg);
+ 		return ret;
+@@ -178,7 +180,8 @@ static struct irq_chip pci_msi_controller = {
+ 	.irq_ack		= irq_chip_ack_parent,
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+ 	.irq_set_affinity	= msi_set_affinity,
+-	.flags			= IRQCHIP_SKIP_SET_WAKE,
++	.flags			= IRQCHIP_SKIP_SET_WAKE |
++				  IRQCHIP_AFFINITY_PRE_STARTUP,
+ };
+ 
+ int pci_msi_prepare(struct irq_domain *domain, struct device *dev, int nvec,
+@@ -247,7 +250,8 @@ static struct irq_chip pci_msi_ir_controller = {
+ 	.irq_mask		= pci_msi_mask_irq,
+ 	.irq_ack		= irq_chip_ack_parent,
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+-	.flags			= IRQCHIP_SKIP_SET_WAKE,
++	.flags			= IRQCHIP_SKIP_SET_WAKE |
++				  IRQCHIP_AFFINITY_PRE_STARTUP,
+ };
+ 
+ static struct msi_domain_info pci_msi_ir_domain_info = {
+@@ -289,7 +293,8 @@ static struct irq_chip dmar_msi_controller = {
+ 	.irq_set_affinity	= msi_domain_set_affinity,
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+ 	.irq_write_msi_msg	= dmar_msi_write_msg,
+-	.flags			= IRQCHIP_SKIP_SET_WAKE,
++	.flags			= IRQCHIP_SKIP_SET_WAKE |
++				  IRQCHIP_AFFINITY_PRE_STARTUP,
+ };
+ 
+ static int dmar_msi_init(struct irq_domain *domain,
+@@ -381,7 +386,7 @@ static struct irq_chip hpet_msi_controller __ro_after_init = {
+ 	.irq_set_affinity = msi_domain_set_affinity,
+ 	.irq_retrigger = irq_chip_retrigger_hierarchy,
+ 	.irq_write_msi_msg = hpet_msi_write_msg,
+-	.flags = IRQCHIP_SKIP_SET_WAKE,
++	.flags = IRQCHIP_SKIP_SET_WAKE | IRQCHIP_AFFINITY_PRE_STARTUP,
+ };
+ 
+ static int hpet_msi_init(struct irq_domain *domain,
+diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
+index a98519a3a2e63..3075624723b27 100644
+--- a/arch/x86/kernel/cpu/resctrl/monitor.c
++++ b/arch/x86/kernel/cpu/resctrl/monitor.c
+@@ -222,15 +222,14 @@ static u64 mbm_overflow_count(u64 prev_msr, u64 cur_msr, unsigned int width)
+ 	return chunks >>= shift;
+ }
+ 
+-static int __mon_event_count(u32 rmid, struct rmid_read *rr)
++static u64 __mon_event_count(u32 rmid, struct rmid_read *rr)
+ {
+ 	struct mbm_state *m;
+ 	u64 chunks, tval;
+ 
+ 	tval = __rmid_read(rmid, rr->evtid);
+ 	if (tval & (RMID_VAL_ERROR | RMID_VAL_UNAVAIL)) {
+-		rr->val = tval;
+-		return -EINVAL;
++		return tval;
+ 	}
+ 	switch (rr->evtid) {
+ 	case QOS_L3_OCCUP_EVENT_ID:
+@@ -242,12 +241,6 @@ static int __mon_event_count(u32 rmid, struct rmid_read *rr)
+ 	case QOS_L3_MBM_LOCAL_EVENT_ID:
+ 		m = &rr->d->mbm_local[rmid];
+ 		break;
+-	default:
+-		/*
+-		 * Code would never reach here because
+-		 * an invalid event id would fail the __rmid_read.
+-		 */
+-		return -EINVAL;
+ 	}
+ 
+ 	if (rr->first) {
+@@ -297,23 +290,29 @@ void mon_event_count(void *info)
+ 	struct rdtgroup *rdtgrp, *entry;
+ 	struct rmid_read *rr = info;
+ 	struct list_head *head;
++	u64 ret_val;
+ 
+ 	rdtgrp = rr->rgrp;
+ 
+-	if (__mon_event_count(rdtgrp->mon.rmid, rr))
+-		return;
++	ret_val = __mon_event_count(rdtgrp->mon.rmid, rr);
+ 
+ 	/*
+-	 * For Ctrl groups read data from child monitor groups.
++	 * For Ctrl groups read data from child monitor groups and
++	 * add them together. Count events which are read successfully.
++	 * Discard the rmid_read's reporting errors.
+ 	 */
+ 	head = &rdtgrp->mon.crdtgrp_list;
+ 
+ 	if (rdtgrp->type == RDTCTRL_GROUP) {
+ 		list_for_each_entry(entry, head, mon.crdtgrp_list) {
+-			if (__mon_event_count(entry->mon.rmid, rr))
+-				return;
++			if (__mon_event_count(entry->mon.rmid, rr) == 0)
++				ret_val = 0;
+ 		}
+ 	}
++
++	/* Report error if none of rmid_reads are successful */
++	if (ret_val)
++		rr->val = ret_val;
+ }
+ 
+ /*
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index da6ce73c10bb7..df17146e841fb 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -147,6 +147,9 @@ void recalc_intercepts(struct vcpu_svm *svm)
+ 
+ 	for (i = 0; i < MAX_INTERCEPT; i++)
+ 		c->intercepts[i] |= g->intercepts[i];
++
++	vmcb_set_intercept(c, INTERCEPT_VMLOAD);
++	vmcb_set_intercept(c, INTERCEPT_VMSAVE);
+ }
+ 
+ static void copy_vmcb_control_area(struct vmcb_control_area *dst,
+@@ -429,7 +432,10 @@ static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
+ 
+ static void nested_prepare_vmcb_control(struct vcpu_svm *svm)
+ {
+-	const u32 mask = V_INTR_MASKING_MASK | V_GIF_ENABLE_MASK | V_GIF_MASK;
++	const u32 int_ctl_vmcb01_bits =
++		V_INTR_MASKING_MASK | V_GIF_MASK | V_GIF_ENABLE_MASK;
++
++	const u32 int_ctl_vmcb12_bits = V_TPR_MASK | V_IRQ_INJECTION_BITS_MASK;
+ 
+ 	if (nested_npt_enabled(svm))
+ 		nested_svm_init_mmu_context(&svm->vcpu);
+@@ -437,9 +443,9 @@ static void nested_prepare_vmcb_control(struct vcpu_svm *svm)
+ 	svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset =
+ 		svm->vcpu.arch.l1_tsc_offset + svm->nested.ctl.tsc_offset;
+ 
+-	svm->vmcb->control.int_ctl             =
+-		(svm->nested.ctl.int_ctl & ~mask) |
+-		(svm->nested.hsave->control.int_ctl & mask);
++	svm->vmcb->control.int_ctl =
++		(svm->nested.ctl.int_ctl & int_ctl_vmcb12_bits) |
++		(svm->nested.hsave->control.int_ctl & int_ctl_vmcb01_bits);
+ 
+ 	svm->vmcb->control.virt_ext            = svm->nested.ctl.virt_ext;
+ 	svm->vmcb->control.int_vector          = svm->nested.ctl.int_vector;
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 1c9226cd6cdec..1c23aee3778c3 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1486,17 +1486,17 @@ static void svm_set_vintr(struct vcpu_svm *svm)
+ 
+ static void svm_clear_vintr(struct vcpu_svm *svm)
+ {
+-	const u32 mask = V_TPR_MASK | V_GIF_ENABLE_MASK | V_GIF_MASK | V_INTR_MASKING_MASK;
+ 	svm_clr_intercept(svm, INTERCEPT_VINTR);
+ 
+ 	/* Drop int_ctl fields related to VINTR injection.  */
+-	svm->vmcb->control.int_ctl &= mask;
++	svm->vmcb->control.int_ctl &= ~V_IRQ_INJECTION_BITS_MASK;
+ 	if (is_guest_mode(&svm->vcpu)) {
+-		svm->nested.hsave->control.int_ctl &= mask;
++		svm->nested.hsave->control.int_ctl &= ~V_IRQ_INJECTION_BITS_MASK;
+ 
+ 		WARN_ON((svm->vmcb->control.int_ctl & V_TPR_MASK) !=
+ 			(svm->nested.ctl.int_ctl & V_TPR_MASK));
+-		svm->vmcb->control.int_ctl |= svm->nested.ctl.int_ctl & ~mask;
++		svm->vmcb->control.int_ctl |= svm->nested.ctl.int_ctl &
++			V_IRQ_INJECTION_BITS_MASK;
+ 	}
+ 
+ 	vmcb_mark_dirty(svm->vmcb, VMCB_INTR);
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 67554bc7adb26..e0c7910207c0f 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -5779,7 +5779,8 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
+ 		if (is_nmi(intr_info))
+ 			return true;
+ 		else if (is_page_fault(intr_info))
+-			return vcpu->arch.apf.host_apf_flags || !enable_ept;
++			return vcpu->arch.apf.host_apf_flags ||
++			       vmx_need_pf_intercept(vcpu);
+ 		else if (is_debug(intr_info) &&
+ 			 vcpu->guest_debug &
+ 			 (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP))
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 73d87d44b6578..5ff24537393e2 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -488,7 +488,7 @@ static inline void decache_tsc_multiplier(struct vcpu_vmx *vmx)
+ 
+ static inline bool vmx_has_waitpkg(struct vcpu_vmx *vmx)
+ {
+-	return vmx->secondary_exec_control &
++	return secondary_exec_controls_get(vmx) &
+ 		SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE;
+ }
+ 
+diff --git a/arch/x86/tools/chkobjdump.awk b/arch/x86/tools/chkobjdump.awk
+index fd1ab80be0dec..a4cf678cf5c80 100644
+--- a/arch/x86/tools/chkobjdump.awk
++++ b/arch/x86/tools/chkobjdump.awk
+@@ -10,6 +10,7 @@ BEGIN {
+ 
+ /^GNU objdump/ {
+ 	verstr = ""
++	gsub(/\(.*\)/, "");
+ 	for (i = 3; i <= NF; i++)
+ 		if (match($(i), "^[0-9]")) {
+ 			verstr = $(i);
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 4c97b0f44fce2..cb18cb5c51b17 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -3031,6 +3031,9 @@ static int acpi_nfit_register_region(struct acpi_nfit_desc *acpi_desc,
+ 		struct acpi_nfit_memory_map *memdev = nfit_memdev->memdev;
+ 		struct nd_mapping_desc *mapping;
+ 
++		/* range index 0 == unmapped in SPA or invalid-SPA */
++		if (memdev->range_index == 0 || spa->range_index == 0)
++			continue;
+ 		if (memdev->range_index != spa->range_index)
+ 			continue;
+ 		if (count >= ND_MAX_MAPPINGS) {
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 1157f9aea9c04..a364fe565007c 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -2452,6 +2452,7 @@ void device_initialize(struct device *dev)
+ 	device_pm_init(dev);
+ 	set_dev_node(dev, -1);
+ #ifdef CONFIG_GENERIC_MSI_IRQ
++	raw_spin_lock_init(&dev->msi_lock);
+ 	INIT_LIST_HEAD(&dev->msi_list);
+ #endif
+ 	INIT_LIST_HEAD(&dev->links.consumers);
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 9a70eab7edbf7..59c452fff8352 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -812,6 +812,10 @@ static bool nbd_clear_req(struct request *req, void *data, bool reserved)
+ {
+ 	struct nbd_cmd *cmd = blk_mq_rq_to_pdu(req);
+ 
++	/* don't abort one completed request */
++	if (blk_mq_request_completed(req))
++		return true;
++
+ 	mutex_lock(&cmd->lock);
+ 	cmd->status = BLK_STS_IOERR;
+ 	mutex_unlock(&cmd->lock);
+@@ -2024,15 +2028,19 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd)
+ {
+ 	mutex_lock(&nbd->config_lock);
+ 	nbd_disconnect(nbd);
+-	nbd_clear_sock(nbd);
+-	mutex_unlock(&nbd->config_lock);
++	sock_shutdown(nbd);
+ 	/*
+ 	 * Make sure recv thread has finished, so it does not drop the last
+ 	 * config ref and try to destroy the workqueue from inside the work
+-	 * queue.
++	 * queue. And this also ensure that we can safely call nbd_clear_que()
++	 * to cancel the inflight I/Os.
+ 	 */
+ 	if (nbd->recv_workq)
+ 		flush_workqueue(nbd->recv_workq);
++	nbd_clear_que(nbd);
++	nbd->task_setup = NULL;
++	mutex_unlock(&nbd->config_lock);
++
+ 	if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF,
+ 			       &nbd->config->runtime_flags))
+ 		nbd_config_put(nbd);
+diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
+index 22ece1ad68a8f..c1b57dfb12776 100644
+--- a/drivers/firmware/efi/libstub/arm64-stub.c
++++ b/drivers/firmware/efi/libstub/arm64-stub.c
+@@ -35,15 +35,48 @@ efi_status_t check_platform_features(void)
+ }
+ 
+ /*
+- * Although relocatable kernels can fix up the misalignment with respect to
+- * MIN_KIMG_ALIGN, the resulting virtual text addresses are subtly out of
+- * sync with those recorded in the vmlinux when kaslr is disabled but the
+- * image required relocation anyway. Therefore retain 2M alignment unless
+- * KASLR is in use.
++ * Distro versions of GRUB may ignore the BSS allocation entirely (i.e., fail
++ * to provide space, and fail to zero it). Check for this condition by double
++ * checking that the first and the last byte of the image are covered by the
++ * same EFI memory map entry.
+  */
+-static u64 min_kimg_align(void)
++static bool check_image_region(u64 base, u64 size)
+ {
+-	return efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
++	unsigned long map_size, desc_size, buff_size;
++	efi_memory_desc_t *memory_map;
++	struct efi_boot_memmap map;
++	efi_status_t status;
++	bool ret = false;
++	int map_offset;
++
++	map.map =	&memory_map;
++	map.map_size =	&map_size;
++	map.desc_size =	&desc_size;
++	map.desc_ver =	NULL;
++	map.key_ptr =	NULL;
++	map.buff_size =	&buff_size;
++
++	status = efi_get_memory_map(&map);
++	if (status != EFI_SUCCESS)
++		return false;
++
++	for (map_offset = 0; map_offset < map_size; map_offset += desc_size) {
++		efi_memory_desc_t *md = (void *)memory_map + map_offset;
++		u64 end = md->phys_addr + md->num_pages * EFI_PAGE_SIZE;
++
++		/*
++		 * Find the region that covers base, and return whether
++		 * it covers base+size bytes.
++		 */
++		if (base >= md->phys_addr && base < end) {
++			ret = (base + size) <= end;
++			break;
++		}
++	}
++
++	efi_bs_call(free_pool, memory_map);
++
++	return ret;
+ }
+ 
+ efi_status_t handle_kernel_image(unsigned long *image_addr,
+@@ -56,6 +89,16 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
+ 	unsigned long kernel_size, kernel_memsize = 0;
+ 	u32 phys_seed = 0;
+ 
++	/*
++	 * Although relocatable kernels can fix up the misalignment with
++	 * respect to MIN_KIMG_ALIGN, the resulting virtual text addresses are
++	 * subtly out of sync with those recorded in the vmlinux when kaslr is
++	 * disabled but the image required relocation anyway. Therefore retain
++	 * 2M alignment if KASLR was explicitly disabled, even if it was not
++	 * going to be activated to begin with.
++	 */
++	u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
++
+ 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+ 		if (!efi_nokaslr) {
+ 			status = efi_get_random_bytes(sizeof(phys_seed),
+@@ -76,6 +119,10 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
+ 	if (image->image_base != _text)
+ 		efi_err("FIRMWARE BUG: efi_loaded_image_t::image_base has bogus value\n");
+ 
++	if (!IS_ALIGNED((u64)_text, EFI_KIMG_ALIGN))
++		efi_err("FIRMWARE BUG: kernel image not aligned on %ldk boundary\n",
++			EFI_KIMG_ALIGN >> 10);
++
+ 	kernel_size = _edata - _text;
+ 	kernel_memsize = kernel_size + (_end - _edata);
+ 	*reserve_size = kernel_memsize;
+@@ -85,14 +132,16 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
+ 		 * If KASLR is enabled, and we have some randomness available,
+ 		 * locate the kernel at a randomized offset in physical memory.
+ 		 */
+-		status = efi_random_alloc(*reserve_size, min_kimg_align(),
++		status = efi_random_alloc(*reserve_size, min_kimg_align,
+ 					  reserve_addr, phys_seed);
+ 	} else {
+ 		status = EFI_OUT_OF_RESOURCES;
+ 	}
+ 
+ 	if (status != EFI_SUCCESS) {
+-		if (IS_ALIGNED((u64)_text, min_kimg_align())) {
++		if (!check_image_region((u64)_text, kernel_memsize)) {
++			efi_err("FIRMWARE BUG: Image BSS overlaps adjacent EFI memory region\n");
++		} else if (IS_ALIGNED((u64)_text, min_kimg_align)) {
+ 			/*
+ 			 * Just execute from wherever we were loaded by the
+ 			 * UEFI PE/COFF loader if the alignment is suitable.
+@@ -103,7 +152,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
+ 		}
+ 
+ 		status = efi_allocate_pages_aligned(*reserve_size, reserve_addr,
+-						    ULONG_MAX, min_kimg_align());
++						    ULONG_MAX, min_kimg_align);
+ 
+ 		if (status != EFI_SUCCESS) {
+ 			efi_err("Failed to relocate kernel\n");
+diff --git a/drivers/firmware/efi/libstub/randomalloc.c b/drivers/firmware/efi/libstub/randomalloc.c
+index a408df474d837..724155b9e10dc 100644
+--- a/drivers/firmware/efi/libstub/randomalloc.c
++++ b/drivers/firmware/efi/libstub/randomalloc.c
+@@ -30,6 +30,8 @@ static unsigned long get_entry_num_slots(efi_memory_desc_t *md,
+ 
+ 	region_end = min(md->phys_addr + md->num_pages * EFI_PAGE_SIZE - 1,
+ 			 (u64)ULONG_MAX);
++	if (region_end < size)
++		return 0;
+ 
+ 	first_slot = round_up(md->phys_addr, align);
+ 	last_slot = round_down(region_end - size + 1, align);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index a2425f7ca7597..ed13a2f76884c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1344,6 +1344,8 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
+ 			pci_set_power_state(pdev, PCI_D3cold);
+ 		}
+ 		drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
++	} else if (amdgpu_device_supports_boco(drm_dev)) {
++		/* nothing to do */
+ 	} else if (amdgpu_device_supports_baco(drm_dev)) {
+ 		amdgpu_device_baco_enter(drm_dev);
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+index 281b274e2b9b2..80b448ae90d29 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+@@ -531,7 +531,7 @@ static void amdgpu_dm_irq_schedule_work(struct amdgpu_device *adev,
+ 		handler_data = container_of(handler_list->next, struct amdgpu_dm_irq_handler_data, list);
+ 
+ 		/*allocate a new amdgpu_dm_irq_handler_data*/
+-		handler_data_add = kzalloc(sizeof(*handler_data), GFP_KERNEL);
++		handler_data_add = kzalloc(sizeof(*handler_data), GFP_ATOMIC);
+ 		if (!handler_data_add) {
+ 			DRM_ERROR("DM_IRQ: failed to allocate irq handler!\n");
+ 			return;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+index 8465cae180da7..e5f4f93317cf3 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+@@ -1875,7 +1875,6 @@ static bool dcn30_split_stream_for_mpc_or_odm(
+ 		}
+ 		pri_pipe->next_odm_pipe = sec_pipe;
+ 		sec_pipe->prev_odm_pipe = pri_pipe;
+-		ASSERT(sec_pipe->top_pipe == NULL);
+ 
+ 		sec_pipe->stream_res.opp = pool->opps[pipe_idx];
+ 		if (sec_pipe->stream->timing.flags.DSC == 1) {
+diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
+index cf6e47adfde6f..9ce8f043ad7f8 100644
+--- a/drivers/gpu/drm/i915/i915_gpu_error.c
++++ b/drivers/gpu/drm/i915/i915_gpu_error.c
+@@ -727,9 +727,18 @@ static void err_print_gt(struct drm_i915_error_state_buf *m,
+ 	if (INTEL_GEN(m->i915) >= 12) {
+ 		int i;
+ 
+-		for (i = 0; i < GEN12_SFC_DONE_MAX; i++)
++		for (i = 0; i < GEN12_SFC_DONE_MAX; i++) {
++			/*
++			 * SFC_DONE resides in the VD forcewake domain, so it
++			 * only exists if the corresponding VCS engine is
++			 * present.
++			 */
++			if (!HAS_ENGINE(gt->_gt, _VCS(i * 2)))
++				continue;
++
+ 			err_printf(m, "  SFC_DONE[%d]: 0x%08x\n", i,
+ 				   gt->sfc_done[i]);
++		}
+ 
+ 		err_printf(m, "  GAM_DONE: 0x%08x\n", gt->gam_done);
+ 	}
+@@ -1594,6 +1603,14 @@ static void gt_record_regs(struct intel_gt_coredump *gt)
+ 
+ 	if (INTEL_GEN(i915) >= 12) {
+ 		for (i = 0; i < GEN12_SFC_DONE_MAX; i++) {
++			/*
++			 * SFC_DONE resides in the VD forcewake domain, so it
++			 * only exists if the corresponding VCS engine is
++			 * present.
++			 */
++			if (!HAS_ENGINE(gt->_gt, _VCS(i * 2)))
++				continue;
++
+ 			gt->sfc_done[i] =
+ 				intel_uncore_read(uncore, GEN12_SFC_DONE(i));
+ 		}
+diff --git a/drivers/gpu/drm/meson/meson_registers.h b/drivers/gpu/drm/meson/meson_registers.h
+index 446e7961da486..0f3cafab88600 100644
+--- a/drivers/gpu/drm/meson/meson_registers.h
++++ b/drivers/gpu/drm/meson/meson_registers.h
+@@ -634,6 +634,11 @@
+ #define VPP_WRAP_OSD3_MATRIX_PRE_OFFSET2 0x3dbc
+ #define VPP_WRAP_OSD3_MATRIX_EN_CTRL 0x3dbd
+ 
++/* osd1 HDR */
++#define OSD1_HDR2_CTRL 0x38a0
++#define OSD1_HDR2_CTRL_VDIN0_HDR2_TOP_EN       BIT(13)
++#define OSD1_HDR2_CTRL_REG_ONLY_MAT            BIT(16)
++
+ /* osd2 scaler */
+ #define OSD2_VSC_PHASE_STEP 0x3d00
+ #define OSD2_VSC_INI_PHASE 0x3d01
+diff --git a/drivers/gpu/drm/meson/meson_viu.c b/drivers/gpu/drm/meson/meson_viu.c
+index aede0c67a57f0..259f3e6bec90a 100644
+--- a/drivers/gpu/drm/meson/meson_viu.c
++++ b/drivers/gpu/drm/meson/meson_viu.c
+@@ -425,9 +425,14 @@ void meson_viu_init(struct meson_drm *priv)
+ 	if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXM) ||
+ 	    meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXL))
+ 		meson_viu_load_matrix(priv);
+-	else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A))
++	else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
+ 		meson_viu_set_g12a_osd1_matrix(priv, RGB709_to_YUV709l_coeff,
+ 					       true);
++		/* fix green/pink color distortion from vendor u-boot */
++		writel_bits_relaxed(OSD1_HDR2_CTRL_REG_ONLY_MAT |
++				OSD1_HDR2_CTRL_VDIN0_HDR2_TOP_EN, 0,
++				priv->io_base + _REG(OSD1_HDR2_CTRL));
++	}
+ 
+ 	/* Initialize OSD1 fifo control register */
+ 	reg = VIU_OSD_DDR_PRIORITY_URGENT |
+diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
+index 6ef38a8ee95cb..f358120d59b38 100644
+--- a/drivers/i2c/i2c-dev.c
++++ b/drivers/i2c/i2c-dev.c
+@@ -141,7 +141,7 @@ static ssize_t i2cdev_read(struct file *file, char __user *buf, size_t count,
+ 	if (count > 8192)
+ 		count = 8192;
+ 
+-	tmp = kmalloc(count, GFP_KERNEL);
++	tmp = kzalloc(count, GFP_KERNEL);
+ 	if (tmp == NULL)
+ 		return -ENOMEM;
+ 
+@@ -150,7 +150,8 @@ static ssize_t i2cdev_read(struct file *file, char __user *buf, size_t count,
+ 
+ 	ret = i2c_master_recv(client, tmp, count);
+ 	if (ret >= 0)
+-		ret = copy_to_user(buf, tmp, count) ? -EFAULT : ret;
++		if (copy_to_user(buf, tmp, ret))
++			ret = -EFAULT;
+ 	kfree(tmp);
+ 	return ret;
+ }
+diff --git a/drivers/iio/adc/palmas_gpadc.c b/drivers/iio/adc/palmas_gpadc.c
+index 889b88768b630..f4756671cddb6 100644
+--- a/drivers/iio/adc/palmas_gpadc.c
++++ b/drivers/iio/adc/palmas_gpadc.c
+@@ -654,8 +654,8 @@ static int palmas_adc_wakeup_configure(struct palmas_gpadc *adc)
+ 
+ 	adc_period = adc->auto_conversion_period;
+ 	for (i = 0; i < 16; ++i) {
+-		if (((1000 * (1 << i)) / 32) < adc_period)
+-			continue;
++		if (((1000 * (1 << i)) / 32) >= adc_period)
++			break;
+ 	}
+ 	if (i > 0)
+ 		i--;
+diff --git a/drivers/iio/adc/ti-ads7950.c b/drivers/iio/adc/ti-ads7950.c
+index 2383eacada87d..a2b83f0bd5260 100644
+--- a/drivers/iio/adc/ti-ads7950.c
++++ b/drivers/iio/adc/ti-ads7950.c
+@@ -568,7 +568,6 @@ static int ti_ads7950_probe(struct spi_device *spi)
+ 	st->ring_xfer.tx_buf = &st->tx_buf[0];
+ 	st->ring_xfer.rx_buf = &st->rx_buf[0];
+ 	/* len will be set later */
+-	st->ring_xfer.cs_change = true;
+ 
+ 	spi_message_add_tail(&st->ring_xfer, &st->ring_msg);
+ 
+diff --git a/drivers/iio/humidity/hdc100x.c b/drivers/iio/humidity/hdc100x.c
+index 2a957f19048ee..9e0fce917ce4c 100644
+--- a/drivers/iio/humidity/hdc100x.c
++++ b/drivers/iio/humidity/hdc100x.c
+@@ -25,6 +25,8 @@
+ #include <linux/iio/trigger_consumer.h>
+ #include <linux/iio/triggered_buffer.h>
+ 
++#include <linux/time.h>
++
+ #define HDC100X_REG_TEMP			0x00
+ #define HDC100X_REG_HUMIDITY			0x01
+ 
+@@ -166,7 +168,7 @@ static int hdc100x_get_measurement(struct hdc100x_data *data,
+ 				   struct iio_chan_spec const *chan)
+ {
+ 	struct i2c_client *client = data->client;
+-	int delay = data->adc_int_us[chan->address];
++	int delay = data->adc_int_us[chan->address] + 1*USEC_PER_MSEC;
+ 	int ret;
+ 	__be16 val;
+ 
+@@ -316,7 +318,7 @@ static irqreturn_t hdc100x_trigger_handler(int irq, void *p)
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct hdc100x_data *data = iio_priv(indio_dev);
+ 	struct i2c_client *client = data->client;
+-	int delay = data->adc_int_us[0] + data->adc_int_us[1];
++	int delay = data->adc_int_us[0] + data->adc_int_us[1] + 2*USEC_PER_MSEC;
+ 	int ret;
+ 
+ 	/* dual read starts at temp register */
+diff --git a/drivers/iio/imu/adis.c b/drivers/iio/imu/adis.c
+index 319b64b2fd887..f8b7837d8b8f6 100644
+--- a/drivers/iio/imu/adis.c
++++ b/drivers/iio/imu/adis.c
+@@ -415,12 +415,11 @@ int __adis_initial_startup(struct adis *adis)
+ 	int ret;
+ 
+ 	/* check if the device has rst pin low */
+-	gpio = devm_gpiod_get_optional(&adis->spi->dev, "reset", GPIOD_ASIS);
++	gpio = devm_gpiod_get_optional(&adis->spi->dev, "reset", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(gpio))
+ 		return PTR_ERR(gpio);
+ 
+ 	if (gpio) {
+-		gpiod_set_value_cansleep(gpio, 1);
+ 		msleep(10);
+ 		/* bring device out of reset */
+ 		gpiod_set_value_cansleep(gpio, 0);
+diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
+index 372adb7ceb74e..74644b6ea0ff1 100644
+--- a/drivers/infiniband/hw/mlx5/cq.c
++++ b/drivers/infiniband/hw/mlx5/cq.c
+@@ -930,7 +930,6 @@ int mlx5_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ 	u32 *cqb = NULL;
+ 	void *cqc;
+ 	int cqe_size;
+-	unsigned int irqn;
+ 	int eqn;
+ 	int err;
+ 
+@@ -969,7 +968,7 @@ int mlx5_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ 		INIT_WORK(&cq->notify_work, notify_soft_wc_handler);
+ 	}
+ 
+-	err = mlx5_vector2eqn(dev->mdev, vector, &eqn, &irqn);
++	err = mlx5_vector2eqn(dev->mdev, vector, &eqn);
+ 	if (err)
+ 		goto err_cqb;
+ 
+@@ -992,7 +991,6 @@ int mlx5_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ 		goto err_cqb;
+ 
+ 	mlx5_ib_dbg(dev, "cqn 0x%x\n", cq->mcq.cqn);
+-	cq->mcq.irqn = irqn;
+ 	if (udata)
+ 		cq->mcq.tasklet_ctx.comp = mlx5_ib_cq_comp;
+ 	else
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 06a8732576193..343e6709d9fc3 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -904,7 +904,6 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_EQN)(
+ 	struct mlx5_ib_dev *dev;
+ 	int user_vector;
+ 	int dev_eqn;
+-	unsigned int irqn;
+ 	int err;
+ 
+ 	if (uverbs_copy_from(&user_vector, attrs,
+@@ -916,7 +915,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_EQN)(
+ 		return PTR_ERR(c);
+ 	dev = to_mdev(c->ibucontext.device);
+ 
+-	err = mlx5_vector2eqn(dev->mdev, user_vector, &dev_eqn, &irqn);
++	err = mlx5_vector2eqn(dev->mdev, user_vector, &dev_eqn);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index 59c1724bcd0ed..39b128205f255 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -71,12 +71,18 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 		family = AF_INET6;
+ 
+ 	if (bareudp->ethertype == htons(ETH_P_IP)) {
+-		struct iphdr *iphdr;
++		__u8 ipversion;
+ 
+-		iphdr = (struct iphdr *)(skb->data + BAREUDP_BASE_HLEN);
+-		if (iphdr->version == 4) {
+-			proto = bareudp->ethertype;
+-		} else if (bareudp->multi_proto_mode && (iphdr->version == 6)) {
++		if (skb_copy_bits(skb, BAREUDP_BASE_HLEN, &ipversion,
++				  sizeof(ipversion))) {
++			bareudp->dev->stats.rx_dropped++;
++			goto drop;
++		}
++		ipversion >>= 4;
++
++		if (ipversion == 4) {
++			proto = htons(ETH_P_IP);
++		} else if (ipversion == 6 && bareudp->multi_proto_mode) {
+ 			proto = htons(ETH_P_IPV6);
+ 		} else {
+ 			bareudp->dev->stats.rx_dropped++;
+diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
+index aa1142d6a9f54..dcf1fc89451f2 100644
+--- a/drivers/net/dsa/lan9303-core.c
++++ b/drivers/net/dsa/lan9303-core.c
+@@ -557,12 +557,12 @@ static int lan9303_alr_make_entry_raw(struct lan9303 *chip, u32 dat0, u32 dat1)
+ 	return 0;
+ }
+ 
+-typedef void alr_loop_cb_t(struct lan9303 *chip, u32 dat0, u32 dat1,
+-			   int portmap, void *ctx);
++typedef int alr_loop_cb_t(struct lan9303 *chip, u32 dat0, u32 dat1,
++			  int portmap, void *ctx);
+ 
+-static void lan9303_alr_loop(struct lan9303 *chip, alr_loop_cb_t *cb, void *ctx)
++static int lan9303_alr_loop(struct lan9303 *chip, alr_loop_cb_t *cb, void *ctx)
+ {
+-	int i;
++	int ret = 0, i;
+ 
+ 	mutex_lock(&chip->alr_mutex);
+ 	lan9303_write_switch_reg(chip, LAN9303_SWE_ALR_CMD,
+@@ -582,13 +582,17 @@ static void lan9303_alr_loop(struct lan9303 *chip, alr_loop_cb_t *cb, void *ctx)
+ 						LAN9303_ALR_DAT1_PORT_BITOFFS;
+ 		portmap = alrport_2_portmap[alrport];
+ 
+-		cb(chip, dat0, dat1, portmap, ctx);
++		ret = cb(chip, dat0, dat1, portmap, ctx);
++		if (ret)
++			break;
+ 
+ 		lan9303_write_switch_reg(chip, LAN9303_SWE_ALR_CMD,
+ 					 LAN9303_ALR_CMD_GET_NEXT);
+ 		lan9303_write_switch_reg(chip, LAN9303_SWE_ALR_CMD, 0);
+ 	}
+ 	mutex_unlock(&chip->alr_mutex);
++
++	return ret;
+ }
+ 
+ static void alr_reg_to_mac(u32 dat0, u32 dat1, u8 mac[6])
+@@ -606,18 +610,20 @@ struct del_port_learned_ctx {
+ };
+ 
+ /* Clear learned (non-static) entry on given port */
+-static void alr_loop_cb_del_port_learned(struct lan9303 *chip, u32 dat0,
+-					 u32 dat1, int portmap, void *ctx)
++static int alr_loop_cb_del_port_learned(struct lan9303 *chip, u32 dat0,
++					u32 dat1, int portmap, void *ctx)
+ {
+ 	struct del_port_learned_ctx *del_ctx = ctx;
+ 	int port = del_ctx->port;
+ 
+ 	if (((BIT(port) & portmap) == 0) || (dat1 & LAN9303_ALR_DAT1_STATIC))
+-		return;
++		return 0;
+ 
+ 	/* learned entries has only one port, we can just delete */
+ 	dat1 &= ~LAN9303_ALR_DAT1_VALID; /* delete entry */
+ 	lan9303_alr_make_entry_raw(chip, dat0, dat1);
++
++	return 0;
+ }
+ 
+ struct port_fdb_dump_ctx {
+@@ -626,19 +632,19 @@ struct port_fdb_dump_ctx {
+ 	dsa_fdb_dump_cb_t *cb;
+ };
+ 
+-static void alr_loop_cb_fdb_port_dump(struct lan9303 *chip, u32 dat0,
+-				      u32 dat1, int portmap, void *ctx)
++static int alr_loop_cb_fdb_port_dump(struct lan9303 *chip, u32 dat0,
++				     u32 dat1, int portmap, void *ctx)
+ {
+ 	struct port_fdb_dump_ctx *dump_ctx = ctx;
+ 	u8 mac[ETH_ALEN];
+ 	bool is_static;
+ 
+ 	if ((BIT(dump_ctx->port) & portmap) == 0)
+-		return;
++		return 0;
+ 
+ 	alr_reg_to_mac(dat0, dat1, mac);
+ 	is_static = !!(dat1 & LAN9303_ALR_DAT1_STATIC);
+-	dump_ctx->cb(mac, 0, is_static, dump_ctx->data);
++	return dump_ctx->cb(mac, 0, is_static, dump_ctx->data);
+ }
+ 
+ /* Set a static ALR entry. Delete entry if port_map is zero */
+@@ -1210,9 +1216,7 @@ static int lan9303_port_fdb_dump(struct dsa_switch *ds, int port,
+ 	};
+ 
+ 	dev_dbg(chip->dev, "%s(%d)\n", __func__, port);
+-	lan9303_alr_loop(chip, alr_loop_cb_fdb_port_dump, &dump_ctx);
+-
+-	return 0;
++	return lan9303_alr_loop(chip, alr_loop_cb_fdb_port_dump, &dump_ctx);
+ }
+ 
+ static int lan9303_port_mdb_prepare(struct dsa_switch *ds, int port,
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 93c7fa1fd4cb6..a455534740cdf 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1416,11 +1416,17 @@ static int gswip_port_fdb_dump(struct dsa_switch *ds, int port,
+ 		addr[1] = mac_bridge.key[2] & 0xff;
+ 		addr[0] = (mac_bridge.key[2] >> 8) & 0xff;
+ 		if (mac_bridge.val[1] & GSWIP_TABLE_MAC_BRIDGE_STATIC) {
+-			if (mac_bridge.val[0] & BIT(port))
+-				cb(addr, 0, true, data);
++			if (mac_bridge.val[0] & BIT(port)) {
++				err = cb(addr, 0, true, data);
++				if (err)
++					return err;
++			}
+ 		} else {
+-			if (((mac_bridge.val[0] & GENMASK(7, 4)) >> 4) == port)
+-				cb(addr, 0, false, data);
++			if (((mac_bridge.val[0] & GENMASK(7, 4)) >> 4) == port) {
++				err = cb(addr, 0, false, data);
++				if (err)
++					return err;
++			}
+ 		}
+ 	}
+ 	return 0;
+diff --git a/drivers/net/dsa/microchip/ksz8795.c b/drivers/net/dsa/microchip/ksz8795.c
+index 1e101ab56cea1..ada0533b81fae 100644
+--- a/drivers/net/dsa/microchip/ksz8795.c
++++ b/drivers/net/dsa/microchip/ksz8795.c
+@@ -790,20 +790,79 @@ static int ksz8795_port_vlan_filtering(struct dsa_switch *ds, int port,
+ 	if (switchdev_trans_ph_prepare(trans))
+ 		return 0;
+ 
++	/* Discard packets with VID not enabled on the switch */
+ 	ksz_cfg(dev, S_MIRROR_CTRL, SW_VLAN_ENABLE, flag);
+ 
++	/* Discard packets with VID not enabled on the ingress port */
++	for (port = 0; port < dev->phy_port_cnt; ++port)
++		ksz_port_cfg(dev, port, REG_PORT_CTRL_2, PORT_INGRESS_FILTER,
++			     flag);
++
+ 	return 0;
+ }
+ 
++static bool ksz8795_port_vlan_changes_remove_tag(
++	struct dsa_switch *ds, int port,
++	const struct switchdev_obj_port_vlan *vlan)
++{
++	bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
++	struct ksz_device *dev = ds->priv;
++	struct ksz_port *p = &dev->ports[port];
++
++	/* If a VLAN is added with untagged flag different from the
++	 * port's Remove Tag flag, we need to change the latter.
++	 * Ignore VID 0, which is always untagged.
++	 * Ignore CPU port, which will always be tagged.
++	 */
++	return untagged != p->remove_tag &&
++		!(vlan->vid_begin == 0 && vlan->vid_end == 0) &&
++		port != dev->cpu_port;
++}
++
++int ksz8795_port_vlan_prepare(struct dsa_switch *ds, int port,
++			      const struct switchdev_obj_port_vlan *vlan)
++{
++	struct ksz_device *dev = ds->priv;
++
++	/* Reject attempts to add a VLAN that requires the Remove Tag
++	 * flag to be changed, unless there are no other VLANs
++	 * currently configured.
++	 */
++	if (ksz8795_port_vlan_changes_remove_tag(ds, port, vlan)) {
++		unsigned int vid;
++
++		for (vid = 1; vid < dev->num_vlans; ++vid) {
++			u8 fid, member, valid;
++
++			/* Skip the VIDs we are going to add or reconfigure */
++			if (vid == vlan->vid_begin) {
++				vid = vlan->vid_end;
++				continue;
++			}
++
++			ksz8795_from_vlan(dev->vlan_cache[vid].table[0],
++					  &fid, &member, &valid);
++			if (valid && (member & BIT(port)))
++				return -EINVAL;
++		}
++	}
++
++	return ksz_port_vlan_prepare(ds, port, vlan);
++}
++
+ static void ksz8795_port_vlan_add(struct dsa_switch *ds, int port,
+ 				  const struct switchdev_obj_port_vlan *vlan)
+ {
+ 	bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
+ 	struct ksz_device *dev = ds->priv;
++	struct ksz_port *p = &dev->ports[port];
+ 	u16 data, vid, new_pvid = 0;
+ 	u8 fid, member, valid;
+ 
+-	ksz_port_cfg(dev, port, P_TAG_CTRL, PORT_REMOVE_TAG, untagged);
++	if (ksz8795_port_vlan_changes_remove_tag(ds, port, vlan)) {
++		ksz_port_cfg(dev, port, P_TAG_CTRL, PORT_REMOVE_TAG, untagged);
++		p->remove_tag = untagged;
++	}
+ 
+ 	for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
+ 		ksz8795_r_vlan_table(dev, vid, &data);
+@@ -827,25 +886,25 @@ static void ksz8795_port_vlan_add(struct dsa_switch *ds, int port,
+ 
+ 	if (new_pvid) {
+ 		ksz_pread16(dev, port, REG_PORT_CTRL_VID, &vid);
+-		vid &= 0xfff;
++		vid &= ~VLAN_VID_MASK;
+ 		vid |= new_pvid;
+ 		ksz_pwrite16(dev, port, REG_PORT_CTRL_VID, vid);
++
++		ksz_pwrite8(dev, port, REG_PORT_CTRL_12, 0x0f);
+ 	}
+ }
+ 
+ static int ksz8795_port_vlan_del(struct dsa_switch *ds, int port,
+ 				 const struct switchdev_obj_port_vlan *vlan)
+ {
+-	bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
+ 	struct ksz_device *dev = ds->priv;
+-	u16 data, vid, pvid, new_pvid = 0;
++	u16 data, vid, pvid;
+ 	u8 fid, member, valid;
++	bool del_pvid = false;
+ 
+ 	ksz_pread16(dev, port, REG_PORT_CTRL_VID, &pvid);
+ 	pvid = pvid & 0xFFF;
+ 
+-	ksz_port_cfg(dev, port, P_TAG_CTRL, PORT_REMOVE_TAG, untagged);
+-
+ 	for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
+ 		ksz8795_r_vlan_table(dev, vid, &data);
+ 		ksz8795_from_vlan(data, &fid, &member, &valid);
+@@ -859,14 +918,14 @@ static int ksz8795_port_vlan_del(struct dsa_switch *ds, int port,
+ 		}
+ 
+ 		if (pvid == vid)
+-			new_pvid = 1;
++			del_pvid = true;
+ 
+ 		ksz8795_to_vlan(fid, member, valid, &data);
+ 		ksz8795_w_vlan_table(dev, vid, data);
+ 	}
+ 
+-	if (new_pvid != pvid)
+-		ksz_pwrite16(dev, port, REG_PORT_CTRL_VID, pvid);
++	if (del_pvid)
++		ksz_pwrite8(dev, port, REG_PORT_CTRL_12, 0x00);
+ 
+ 	return 0;
+ }
+@@ -1079,6 +1138,8 @@ static int ksz8795_setup(struct dsa_switch *ds)
+ 
+ 	ksz_cfg(dev, S_MIRROR_CTRL, SW_MIRROR_RX_TX, false);
+ 
++	ksz_cfg(dev, REG_SW_CTRL_19, SW_INS_TAG_ENABLE, true);
++
+ 	/* set broadcast storm protection 10% rate */
+ 	regmap_update_bits(dev->regmap[1], S_REPLACE_VID_CTRL,
+ 			   BROADCAST_STORM_RATE,
+@@ -1117,7 +1178,7 @@ static const struct dsa_switch_ops ksz8795_switch_ops = {
+ 	.port_stp_state_set	= ksz8795_port_stp_state_set,
+ 	.port_fast_age		= ksz_port_fast_age,
+ 	.port_vlan_filtering	= ksz8795_port_vlan_filtering,
+-	.port_vlan_prepare	= ksz_port_vlan_prepare,
++	.port_vlan_prepare	= ksz8795_port_vlan_prepare,
+ 	.port_vlan_add		= ksz8795_port_vlan_add,
+ 	.port_vlan_del		= ksz8795_port_vlan_del,
+ 	.port_fdb_dump		= ksz_port_fdb_dump,
+@@ -1266,6 +1327,16 @@ static int ksz8795_switch_init(struct ksz_device *dev)
+ 	/* set the real number of ports */
+ 	dev->ds->num_ports = dev->port_cnt + 1;
+ 
++	/* We rely on software untagging on the CPU port, so that we
++	 * can support both tagged and untagged VLANs
++	 */
++	dev->ds->untag_bridge_pvid = true;
++
++	/* VLAN filtering is partly controlled by the global VLAN
++	 * Enable flag
++	 */
++	dev->ds->vlan_filtering_is_global = true;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index d4a64dbde3157..88fa0779e0bc9 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -432,7 +432,7 @@ int ksz_switch_register(struct ksz_device *dev,
+ 				if (of_property_read_u32(port, "reg",
+ 							 &port_num))
+ 					continue;
+-				if (port_num >= dev->port_cnt)
++				if (port_num >= dev->mib_port_cnt)
+ 					return -EINVAL;
+ 				of_get_phy_mode(port,
+ 						&dev->ports[port_num].interface);
+diff --git a/drivers/net/dsa/microchip/ksz_common.h b/drivers/net/dsa/microchip/ksz_common.h
+index cf866e48ff664..309ad4a72d78e 100644
+--- a/drivers/net/dsa/microchip/ksz_common.h
++++ b/drivers/net/dsa/microchip/ksz_common.h
+@@ -27,6 +27,7 @@ struct ksz_port_mib {
+ struct ksz_port {
+ 	u16 member;
+ 	u16 vid_member;
++	bool remove_tag;		/* Remove Tag flag set, for ksz8795 only */
+ 	int stp_state;
+ 	struct phy_device phydev;
+ 
+@@ -210,12 +211,8 @@ static inline int ksz_read64(struct ksz_device *dev, u32 reg, u64 *val)
+ 	int ret;
+ 
+ 	ret = regmap_bulk_read(dev->regmap[2], reg, value, 2);
+-	if (!ret) {
+-		/* Ick! ToDo: Add 64bit R/W to regmap on 32bit systems */
+-		value[0] = swab32(value[0]);
+-		value[1] = swab32(value[1]);
+-		*val = swab64((u64)*value);
+-	}
++	if (!ret)
++		*val = (u64)value[0] << 32 | value[1];
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 190025a0a98ed..3fa2f81c8b47d 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -45,6 +45,7 @@ static const struct mt7530_mib_desc mt7530_mib[] = {
+ 	MIB_DESC(2, 0x48, "TxBytes"),
+ 	MIB_DESC(1, 0x60, "RxDrop"),
+ 	MIB_DESC(1, 0x64, "RxFiltering"),
++	MIB_DESC(1, 0x68, "RxUnicast"),
+ 	MIB_DESC(1, 0x6c, "RxMulticast"),
+ 	MIB_DESC(1, 0x70, "RxBroadcast"),
+ 	MIB_DESC(1, 0x74, "RxAlignErr"),
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index 855371fcbf85c..c03d76c108686 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -1566,7 +1566,9 @@ static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
+ 		/* We need to hide the dsa_8021q VLANs from the user. */
+ 		if (priv->vlan_state == SJA1105_VLAN_UNAWARE)
+ 			l2_lookup.vlanid = 0;
+-		cb(macaddr, l2_lookup.vlanid, l2_lookup.lockeds, data);
++		rc = cb(macaddr, l2_lookup.vlanid, l2_lookup.lockeds, data);
++		if (rc)
++			return rc;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index f3caf5eab8d4a..c4ec9a91c7c52 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1489,11 +1489,6 @@ static int iavf_reinit_interrupt_scheme(struct iavf_adapter *adapter)
+ 	set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+ 
+ 	iavf_map_rings_to_vectors(adapter);
+-
+-	if (RSS_AQ(adapter))
+-		adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_RSS;
+-	else
+-		err = iavf_init_rss(adapter);
+ err:
+ 	return err;
+ }
+@@ -2167,6 +2162,14 @@ continue_reset:
+ 			goto reset_err;
+ 	}
+ 
++	if (RSS_AQ(adapter)) {
++		adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_RSS;
++	} else {
++		err = iavf_init_rss(adapter);
++		if (err)
++			goto reset_err;
++	}
++
+ 	adapter->aq_required |= IAVF_FLAG_AQ_GET_CONFIG;
+ 	adapter->aq_required |= IAVF_FLAG_AQ_MAP_VECTORS;
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 1567ddd4c5b87..a46780570cd95 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -189,6 +189,14 @@ static int ice_add_mac_to_unsync_list(struct net_device *netdev, const u8 *addr)
+ 	struct ice_netdev_priv *np = netdev_priv(netdev);
+ 	struct ice_vsi *vsi = np->vsi;
+ 
++	/* Under some circumstances, we might receive a request to delete our
++	 * own device address from our uc list. Because we store the device
++	 * address in the VSI's MAC filter list, we need to ignore such
++	 * requests and not delete our device address from this list.
++	 */
++	if (ether_addr_equal(addr, netdev->dev_addr))
++		return 0;
++
+ 	if (ice_fltr_add_mac_to_list(vsi, &vsi->tmp_unsync_list, addr,
+ 				     ICE_FWD_TO_VSI))
+ 		return -EINVAL;
+@@ -3991,6 +3999,11 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
+ 	struct ice_hw *hw;
+ 	int i, err;
+ 
++	if (pdev->is_virtfn) {
++		dev_err(dev, "can't probe a virtual function\n");
++		return -EINVAL;
++	}
++
+ 	/* this driver uses devres, see
+ 	 * Documentation/driver-api/driver-model/devres.rst
+ 	 */
+@@ -4876,7 +4889,7 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 		return -EADDRNOTAVAIL;
+ 
+ 	if (ether_addr_equal(netdev->dev_addr, mac)) {
+-		netdev_warn(netdev, "already using mac %pM\n", mac);
++		netdev_dbg(netdev, "already using mac %pM\n", mac);
+ 		return 0;
+ 	}
+ 
+@@ -4887,6 +4900,7 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 		return -EBUSY;
+ 	}
+ 
++	netif_addr_lock_bh(netdev);
+ 	/* Clean up old MAC filter. Not an error if old filter doesn't exist */
+ 	status = ice_fltr_remove_mac(vsi, netdev->dev_addr, ICE_FWD_TO_VSI);
+ 	if (status && status != ICE_ERR_DOES_NOT_EXIST) {
+@@ -4896,30 +4910,28 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 
+ 	/* Add filter for new MAC. If filter exists, return success */
+ 	status = ice_fltr_add_mac(vsi, mac, ICE_FWD_TO_VSI);
+-	if (status == ICE_ERR_ALREADY_EXISTS) {
++	if (status == ICE_ERR_ALREADY_EXISTS)
+ 		/* Although this MAC filter is already present in hardware it's
+ 		 * possible in some cases (e.g. bonding) that dev_addr was
+ 		 * modified outside of the driver and needs to be restored back
+ 		 * to this value.
+ 		 */
+-		memcpy(netdev->dev_addr, mac, netdev->addr_len);
+ 		netdev_dbg(netdev, "filter for MAC %pM already exists\n", mac);
+-		return 0;
+-	}
+-
+-	/* error if the new filter addition failed */
+-	if (status)
++	else if (status)
++		/* error if the new filter addition failed */
+ 		err = -EADDRNOTAVAIL;
+ 
+ err_update_filters:
+ 	if (err) {
+ 		netdev_err(netdev, "can't set MAC %pM. filter update failed\n",
+ 			   mac);
++		netif_addr_unlock_bh(netdev);
+ 		return err;
+ 	}
+ 
+ 	/* change the netdev's MAC address */
+ 	memcpy(netdev->dev_addr, mac, netdev->addr_len);
++	netif_addr_unlock_bh(netdev);
+ 	netdev_dbg(vsi->netdev, "updated MAC address to %pM\n",
+ 		   netdev->dev_addr);
+ 
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+index a1aefce55e655..d825eb021b22e 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+@@ -854,7 +854,7 @@ enum mvpp22_ptp_packet_format {
+ #define MVPP2_BM_COOKIE_POOL_OFFS	8
+ #define MVPP2_BM_COOKIE_CPU_OFFS	24
+ 
+-#define MVPP2_BM_SHORT_FRAME_SIZE	704	/* frame size 128 */
++#define MVPP2_BM_SHORT_FRAME_SIZE	736	/* frame size 128 */
+ #define MVPP2_BM_LONG_FRAME_SIZE	2240	/* frame size 1664 */
+ #define MVPP2_BM_JUMBO_FRAME_SIZE	10432	/* frame size 9856 */
+ /* BM short pool packet size
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cq.c b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+index df3e4938ecdd9..360e093874d4f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+@@ -134,6 +134,7 @@ int mlx5_core_create_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq,
+ 			      cq->cqn);
+ 
+ 	cq->uar = dev->priv.uar;
++	cq->irqn = eq->core.irqn;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index 2eb022ad7fd09..3dfcb20e97c6f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -1019,12 +1019,19 @@ int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer)
+ 	MLX5_NB_INIT(&tracer->nb, fw_tracer_event, DEVICE_TRACER);
+ 	mlx5_eq_notifier_register(dev, &tracer->nb);
+ 
+-	mlx5_fw_tracer_start(tracer);
+-
++	err = mlx5_fw_tracer_start(tracer);
++	if (err) {
++		mlx5_core_warn(dev, "FWTracer: Failed to start tracer %d\n", err);
++		goto err_notifier_unregister;
++	}
+ 	return 0;
+ 
++err_notifier_unregister:
++	mlx5_eq_notifier_unregister(dev, &tracer->nb);
++	mlx5_core_destroy_mkey(dev, &tracer->buff.mkey);
+ err_dealloc_pd:
+ 	mlx5_core_dealloc_pd(dev, tracer->buff.pdn);
++	cancel_work_sync(&tracer->read_fw_strings_work);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index d81fa8e561991..6b4a3d90c9f7f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -1547,15 +1547,9 @@ static int mlx5e_alloc_cq_common(struct mlx5_core_dev *mdev,
+ 				 struct mlx5e_cq *cq)
+ {
+ 	struct mlx5_core_cq *mcq = &cq->mcq;
+-	int eqn_not_used;
+-	unsigned int irqn;
+ 	int err;
+ 	u32 i;
+ 
+-	err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn_not_used, &irqn);
+-	if (err)
+-		return err;
+-
+ 	err = mlx5_cqwq_create(mdev, &param->wq, param->cqc, &cq->wq,
+ 			       &cq->wq_ctrl);
+ 	if (err)
+@@ -1569,7 +1563,6 @@ static int mlx5e_alloc_cq_common(struct mlx5_core_dev *mdev,
+ 	mcq->vector     = param->eq_ix;
+ 	mcq->comp       = mlx5e_completion_event;
+ 	mcq->event      = mlx5e_cq_error_event;
+-	mcq->irqn       = irqn;
+ 
+ 	for (i = 0; i < mlx5_cqwq_get_size(&cq->wq); i++) {
+ 		struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(&cq->wq, i);
+@@ -1615,11 +1608,10 @@ static int mlx5e_create_cq(struct mlx5e_cq *cq, struct mlx5e_cq_param *param)
+ 	void *in;
+ 	void *cqc;
+ 	int inlen;
+-	unsigned int irqn_not_used;
+ 	int eqn;
+ 	int err;
+ 
+-	err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn, &irqn_not_used);
++	err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn);
+ 	if (err)
+ 		return err;
+ 
+@@ -1977,9 +1969,8 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+ 	struct mlx5e_channel *c;
+ 	unsigned int irq;
+ 	int err;
+-	int eqn;
+ 
+-	err = mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq);
++	err = mlx5_vector2irqn(priv->mdev, ix, &irq);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+index ccd53a7a2b801..4f4f79ca37a81 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+@@ -859,8 +859,8 @@ clean:
+ 	return err;
+ }
+ 
+-int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn,
+-		    unsigned int *irqn)
++static int vector2eqnirqn(struct mlx5_core_dev *dev, int vector, int *eqn,
++			  unsigned int *irqn)
+ {
+ 	struct mlx5_eq_table *table = dev->priv.eq_table;
+ 	struct mlx5_eq_comp *eq, *n;
+@@ -869,8 +869,10 @@ int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn,
+ 
+ 	list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) {
+ 		if (i++ == vector) {
+-			*eqn = eq->core.eqn;
+-			*irqn = eq->core.irqn;
++			if (irqn)
++				*irqn = eq->core.irqn;
++			if (eqn)
++				*eqn = eq->core.eqn;
+ 			err = 0;
+ 			break;
+ 		}
+@@ -878,8 +880,18 @@ int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn,
+ 
+ 	return err;
+ }
++
++int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn)
++{
++	return vector2eqnirqn(dev, vector, eqn, NULL);
++}
+ EXPORT_SYMBOL(mlx5_vector2eqn);
+ 
++int mlx5_vector2irqn(struct mlx5_core_dev *dev, int vector, unsigned int *irqn)
++{
++	return vector2eqnirqn(dev, vector, NULL, irqn);
++}
++
+ unsigned int mlx5_comp_vectors_count(struct mlx5_core_dev *dev)
+ {
+ 	return dev->priv.eq_table->num_comp_eqs;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
+index 80da50e129153..a42bd493293a0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
+@@ -417,7 +417,6 @@ static int mlx5_fpga_conn_create_cq(struct mlx5_fpga_conn *conn, int cq_size)
+ 	struct mlx5_wq_param wqp;
+ 	struct mlx5_cqe64 *cqe;
+ 	int inlen, err, eqn;
+-	unsigned int irqn;
+ 	void *cqc, *in;
+ 	__be64 *pas;
+ 	u32 i;
+@@ -446,7 +445,7 @@ static int mlx5_fpga_conn_create_cq(struct mlx5_fpga_conn *conn, int cq_size)
+ 		goto err_cqwq;
+ 	}
+ 
+-	err = mlx5_vector2eqn(mdev, smp_processor_id(), &eqn, &irqn);
++	err = mlx5_vector2eqn(mdev, smp_processor_id(), &eqn);
+ 	if (err) {
+ 		kvfree(in);
+ 		goto err_cqwq;
+@@ -476,7 +475,6 @@ static int mlx5_fpga_conn_create_cq(struct mlx5_fpga_conn *conn, int cq_size)
+ 	*conn->cq.mcq.arm_db    = 0;
+ 	conn->cq.mcq.vector     = 0;
+ 	conn->cq.mcq.comp       = mlx5_fpga_conn_cq_complete;
+-	conn->cq.mcq.irqn       = irqn;
+ 	conn->cq.mcq.uar        = fdev->conn_res.uar;
+ 	tasklet_setup(&conn->cq.tasklet, mlx5_fpga_conn_cq_tasklet);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
+index 81f2cc4ca1da9..fa79e6e6a98a0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
+@@ -98,4 +98,6 @@ void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev);
+ struct cpu_rmap *mlx5_eq_table_get_rmap(struct mlx5_core_dev *dev);
+ #endif
+ 
++int mlx5_vector2irqn(struct mlx5_core_dev *dev, int vector, unsigned int *irqn);
++
+ #endif
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+index 24dede1b0a209..ea3c6cf27db42 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+@@ -711,7 +711,6 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ 	struct mlx5_cqe64 *cqe;
+ 	struct mlx5dr_cq *cq;
+ 	int inlen, err, eqn;
+-	unsigned int irqn;
+ 	void *cqc, *in;
+ 	__be64 *pas;
+ 	int vector;
+@@ -744,7 +743,7 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ 		goto err_cqwq;
+ 
+ 	vector = raw_smp_processor_id() % mlx5_comp_vectors_count(mdev);
+-	err = mlx5_vector2eqn(mdev, vector, &eqn, &irqn);
++	err = mlx5_vector2eqn(mdev, vector, &eqn);
+ 	if (err) {
+ 		kvfree(in);
+ 		goto err_cqwq;
+@@ -780,7 +779,6 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ 	*cq->mcq.arm_db = cpu_to_be32(2 << 28);
+ 
+ 	cq->mcq.vector = 0;
+-	cq->mcq.irqn = irqn;
+ 	cq->mcq.uar = uar;
+ 
+ 	return cq;
+diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
+index 2f5e0ad23ad7c..31cefd6ef6806 100644
+--- a/drivers/net/ethernet/ti/cpsw_new.c
++++ b/drivers/net/ethernet/ti/cpsw_new.c
+@@ -928,7 +928,7 @@ static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff *skb,
+ 	struct cpdma_chan *txch;
+ 	int ret, q_idx;
+ 
+-	if (skb_padto(skb, CPSW_MIN_PACKET_SIZE)) {
++	if (skb_put_padto(skb, READ_ONCE(priv->tx_packet_min))) {
+ 		cpsw_err(priv, tx_err, "packet pad failed\n");
+ 		ndev->stats.tx_dropped++;
+ 		return NET_XMIT_DROP;
+@@ -1108,7 +1108,7 @@ static int cpsw_ndo_xdp_xmit(struct net_device *ndev, int n,
+ 
+ 	for (i = 0; i < n; i++) {
+ 		xdpf = frames[i];
+-		if (xdpf->len < CPSW_MIN_PACKET_SIZE) {
++		if (xdpf->len < READ_ONCE(priv->tx_packet_min)) {
+ 			xdp_return_frame_rx_napi(xdpf);
+ 			drops++;
+ 			continue;
+@@ -1402,6 +1402,7 @@ static int cpsw_create_ports(struct cpsw_common *cpsw)
+ 		priv->dev  = dev;
+ 		priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG);
+ 		priv->emac_port = i + 1;
++		priv->tx_packet_min = CPSW_MIN_PACKET_SIZE;
+ 
+ 		if (is_valid_ether_addr(slave_data->mac_addr)) {
+ 			ether_addr_copy(priv->mac_addr, slave_data->mac_addr);
+@@ -1699,6 +1700,7 @@ static int cpsw_dl_switch_mode_set(struct devlink *dl, u32 id,
+ 
+ 			priv = netdev_priv(sl_ndev);
+ 			slave->port_vlan = vlan;
++			WRITE_ONCE(priv->tx_packet_min, CPSW_MIN_PACKET_SIZE_VLAN);
+ 			if (netif_running(sl_ndev))
+ 				cpsw_port_add_switch_def_ale_entries(priv,
+ 								     slave);
+@@ -1727,6 +1729,7 @@ static int cpsw_dl_switch_mode_set(struct devlink *dl, u32 id,
+ 
+ 			priv = netdev_priv(slave->ndev);
+ 			slave->port_vlan = slave->data->dual_emac_res_vlan;
++			WRITE_ONCE(priv->tx_packet_min, CPSW_MIN_PACKET_SIZE);
+ 			cpsw_port_add_dual_emac_def_ale_entries(priv, slave);
+ 		}
+ 
+diff --git a/drivers/net/ethernet/ti/cpsw_priv.h b/drivers/net/ethernet/ti/cpsw_priv.h
+index 7b7f3596b20da..a100c93edee87 100644
+--- a/drivers/net/ethernet/ti/cpsw_priv.h
++++ b/drivers/net/ethernet/ti/cpsw_priv.h
+@@ -89,7 +89,8 @@ do {								\
+ 
+ #define CPSW_POLL_WEIGHT	64
+ #define CPSW_RX_VLAN_ENCAP_HDR_SIZE		4
+-#define CPSW_MIN_PACKET_SIZE	(VLAN_ETH_ZLEN)
++#define CPSW_MIN_PACKET_SIZE_VLAN	(VLAN_ETH_ZLEN)
++#define CPSW_MIN_PACKET_SIZE	(ETH_ZLEN)
+ #define CPSW_MAX_PACKET_SIZE	(VLAN_ETH_FRAME_LEN +\
+ 				 ETH_FCS_LEN +\
+ 				 CPSW_RX_VLAN_ENCAP_HDR_SIZE)
+@@ -380,6 +381,7 @@ struct cpsw_priv {
+ 	u32 emac_port;
+ 	struct cpsw_common *cpsw;
+ 	int offload_fwd_mark;
++	u32 tx_packet_min;
+ };
+ 
+ #define ndev_to_cpsw(ndev) (((struct cpsw_priv *)netdev_priv(ndev))->cpsw)
+diff --git a/drivers/net/ieee802154/mac802154_hwsim.c b/drivers/net/ieee802154/mac802154_hwsim.c
+index 626e1ce817fcf..080b15fc00601 100644
+--- a/drivers/net/ieee802154/mac802154_hwsim.c
++++ b/drivers/net/ieee802154/mac802154_hwsim.c
+@@ -418,7 +418,7 @@ static int hwsim_new_edge_nl(struct sk_buff *msg, struct genl_info *info)
+ 	struct hwsim_edge *e;
+ 	u32 v0, v1;
+ 
+-	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] &&
++	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] ||
+ 	    !info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE])
+ 		return -EINVAL;
+ 
+@@ -528,14 +528,14 @@ static int hwsim_set_edge_lqi(struct sk_buff *msg, struct genl_info *info)
+ 	u32 v0, v1;
+ 	u8 lqi;
+ 
+-	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] &&
++	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] ||
+ 	    !info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE])
+ 		return -EINVAL;
+ 
+ 	if (nla_parse_nested_deprecated(edge_attrs, MAC802154_HWSIM_EDGE_ATTR_MAX, info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE], hwsim_edge_policy, NULL))
+ 		return -EINVAL;
+ 
+-	if (!edge_attrs[MAC802154_HWSIM_EDGE_ATTR_ENDPOINT_ID] &&
++	if (!edge_attrs[MAC802154_HWSIM_EDGE_ATTR_ENDPOINT_ID] ||
+ 	    !edge_attrs[MAC802154_HWSIM_EDGE_ATTR_LQI])
+ 		return -EINVAL;
+ 
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 9a566c5b36a6a..69b20a466c61c 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -1374,8 +1374,6 @@ static struct phy_driver ksphy_driver[] = {
+ 	.name		= "Micrel KSZ87XX Switch",
+ 	/* PHY_BASIC_FEATURES */
+ 	.config_init	= kszphy_config_init,
+-	.config_aneg	= ksz8873mll_config_aneg,
+-	.read_status	= ksz8873mll_read_status,
+ 	.match_phy_device = ksz8795_match_phy_device,
+ 	.suspend	= genphy_suspend,
+ 	.resume		= genphy_resume,
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index f7a13529e4add..33b2e0fb68bbb 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -1207,7 +1207,7 @@ static int ppp_nl_newlink(struct net *src_net, struct net_device *dev,
+ 	 * the PPP unit identifer as suffix (i.e. ppp<unit_id>). This allows
+ 	 * userspace to infer the device name using to the PPPIOCGUNIT ioctl.
+ 	 */
+-	if (!tb[IFLA_IFNAME])
++	if (!tb[IFLA_IFNAME] || !nla_len(tb[IFLA_IFNAME]) || !*(char *)nla_data(tb[IFLA_IFNAME]))
+ 		conf.ifname_is_set = false;
+ 
+ 	err = ppp_dev_configure(src_net, dev, &conf);
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index 2403b71b601e9..745478213ff21 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -2527,7 +2527,7 @@ static void deactivate_labels(void *region)
+ 
+ static int init_active_labels(struct nd_region *nd_region)
+ {
+-	int i;
++	int i, rc = 0;
+ 
+ 	for (i = 0; i < nd_region->ndr_mappings; i++) {
+ 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+@@ -2546,13 +2546,14 @@ static int init_active_labels(struct nd_region *nd_region)
+ 			else if (test_bit(NDD_LABELING, &nvdimm->flags))
+ 				/* fail, labels needed to disambiguate dpa */;
+ 			else
+-				return 0;
++				continue;
+ 
+ 			dev_err(&nd_region->dev, "%s: is %s, failing probe\n",
+ 					dev_name(&nd_mapping->nvdimm->dev),
+ 					test_bit(NDD_LOCKED, &nvdimm->flags)
+ 					? "locked" : "disabled");
+-			return -ENXIO;
++			rc = -ENXIO;
++			goto out;
+ 		}
+ 		nd_mapping->ndd = ndd;
+ 		atomic_inc(&nvdimm->busy);
+@@ -2586,13 +2587,17 @@ static int init_active_labels(struct nd_region *nd_region)
+ 			break;
+ 	}
+ 
+-	if (i < nd_region->ndr_mappings) {
++	if (i < nd_region->ndr_mappings)
++		rc = -ENOMEM;
++
++out:
++	if (rc) {
+ 		deactivate_labels(nd_region);
+-		return -ENOMEM;
++		return rc;
+ 	}
+ 
+ 	return devm_add_action_or_reset(&nd_region->dev, deactivate_labels,
+-			nd_region);
++					nd_region);
+ }
+ 
+ int nd_region_register_namespaces(struct nd_region *nd_region, int *err)
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index d52d118979a6d..2548c64194ca9 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -171,24 +171,25 @@ static inline __attribute_const__ u32 msi_mask(unsigned x)
+  * reliably as devices without an INTx disable bit will then generate a
+  * level IRQ which will never be cleared.
+  */
+-u32 __pci_msi_desc_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
++void __pci_msi_desc_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
+ {
+-	u32 mask_bits = desc->masked;
++	raw_spinlock_t *lock = &desc->dev->msi_lock;
++	unsigned long flags;
+ 
+ 	if (pci_msi_ignore_mask || !desc->msi_attrib.maskbit)
+-		return 0;
++		return;
+ 
+-	mask_bits &= ~mask;
+-	mask_bits |= flag;
++	raw_spin_lock_irqsave(lock, flags);
++	desc->masked &= ~mask;
++	desc->masked |= flag;
+ 	pci_write_config_dword(msi_desc_to_pci_dev(desc), desc->mask_pos,
+-			       mask_bits);
+-
+-	return mask_bits;
++			       desc->masked);
++	raw_spin_unlock_irqrestore(lock, flags);
+ }
+ 
+ static void msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
+ {
+-	desc->masked = __pci_msi_desc_mask_irq(desc, mask, flag);
++	__pci_msi_desc_mask_irq(desc, mask, flag);
+ }
+ 
+ static void __iomem *pci_msix_desc_addr(struct msi_desc *desc)
+@@ -317,13 +318,31 @@ void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
+ 		/* Don't touch the hardware now */
+ 	} else if (entry->msi_attrib.is_msix) {
+ 		void __iomem *base = pci_msix_desc_addr(entry);
++		bool unmasked = !(entry->masked & PCI_MSIX_ENTRY_CTRL_MASKBIT);
+ 
+ 		if (!base)
+ 			goto skip;
+ 
++		/*
++		 * The specification mandates that the entry is masked
++		 * when the message is modified:
++		 *
++		 * "If software changes the Address or Data value of an
++		 * entry while the entry is unmasked, the result is
++		 * undefined."
++		 */
++		if (unmasked)
++			__pci_msix_desc_mask_irq(entry, PCI_MSIX_ENTRY_CTRL_MASKBIT);
++
+ 		writel(msg->address_lo, base + PCI_MSIX_ENTRY_LOWER_ADDR);
+ 		writel(msg->address_hi, base + PCI_MSIX_ENTRY_UPPER_ADDR);
+ 		writel(msg->data, base + PCI_MSIX_ENTRY_DATA);
++
++		if (unmasked)
++			__pci_msix_desc_mask_irq(entry, 0);
++
++		/* Ensure that the writes are visible in the device */
++		readl(base + PCI_MSIX_ENTRY_DATA);
+ 	} else {
+ 		int pos = dev->msi_cap;
+ 		u16 msgctl;
+@@ -344,6 +363,8 @@ void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
+ 			pci_write_config_word(dev, pos + PCI_MSI_DATA_32,
+ 					      msg->data);
+ 		}
++		/* Ensure that the writes are visible in the device */
++		pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl);
+ 	}
+ 
+ skip:
+@@ -643,21 +664,21 @@ static int msi_capability_init(struct pci_dev *dev, int nvec,
+ 	/* Configure MSI capability structure */
+ 	ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSI);
+ 	if (ret) {
+-		msi_mask_irq(entry, mask, ~mask);
++		msi_mask_irq(entry, mask, 0);
+ 		free_msi_irqs(dev);
+ 		return ret;
+ 	}
+ 
+ 	ret = msi_verify_entries(dev);
+ 	if (ret) {
+-		msi_mask_irq(entry, mask, ~mask);
++		msi_mask_irq(entry, mask, 0);
+ 		free_msi_irqs(dev);
+ 		return ret;
+ 	}
+ 
+ 	ret = populate_msi_sysfs(dev);
+ 	if (ret) {
+-		msi_mask_irq(entry, mask, ~mask);
++		msi_mask_irq(entry, mask, 0);
+ 		free_msi_irqs(dev);
+ 		return ret;
+ 	}
+@@ -698,6 +719,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
+ {
+ 	struct irq_affinity_desc *curmsk, *masks = NULL;
+ 	struct msi_desc *entry;
++	void __iomem *addr;
+ 	int ret, i;
+ 	int vec_count = pci_msix_vec_count(dev);
+ 
+@@ -718,6 +740,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
+ 
+ 		entry->msi_attrib.is_msix	= 1;
+ 		entry->msi_attrib.is_64		= 1;
++
+ 		if (entries)
+ 			entry->msi_attrib.entry_nr = entries[i].entry;
+ 		else
+@@ -729,6 +752,10 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
+ 		entry->msi_attrib.default_irq	= dev->irq;
+ 		entry->mask_base		= base;
+ 
++		addr = pci_msix_desc_addr(entry);
++		if (addr)
++			entry->masked = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
++
+ 		list_add_tail(&entry->list, dev_to_msi_list(&dev->dev));
+ 		if (masks)
+ 			curmsk++;
+@@ -739,26 +766,25 @@ out:
+ 	return ret;
+ }
+ 
+-static void msix_program_entries(struct pci_dev *dev,
+-				 struct msix_entry *entries)
++static void msix_update_entries(struct pci_dev *dev, struct msix_entry *entries)
+ {
+ 	struct msi_desc *entry;
+-	int i = 0;
+-	void __iomem *desc_addr;
+ 
+ 	for_each_pci_msi_entry(entry, dev) {
+-		if (entries)
+-			entries[i++].vector = entry->irq;
++		if (entries) {
++			entries->vector = entry->irq;
++			entries++;
++		}
++	}
++}
+ 
+-		desc_addr = pci_msix_desc_addr(entry);
+-		if (desc_addr)
+-			entry->masked = readl(desc_addr +
+-					      PCI_MSIX_ENTRY_VECTOR_CTRL);
+-		else
+-			entry->masked = 0;
++static void msix_mask_all(void __iomem *base, int tsize)
++{
++	u32 ctrl = PCI_MSIX_ENTRY_CTRL_MASKBIT;
++	int i;
+ 
+-		msix_mask_irq(entry, 1);
+-	}
++	for (i = 0; i < tsize; i++, base += PCI_MSIX_ENTRY_SIZE)
++		writel(ctrl, base + PCI_MSIX_ENTRY_VECTOR_CTRL);
+ }
+ 
+ /**
+@@ -775,22 +801,33 @@ static void msix_program_entries(struct pci_dev *dev,
+ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
+ 				int nvec, struct irq_affinity *affd)
+ {
+-	int ret;
+-	u16 control;
+ 	void __iomem *base;
++	int ret, tsize;
++	u16 control;
+ 
+-	/* Ensure MSI-X is disabled while it is set up */
+-	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
++	/*
++	 * Some devices require MSI-X to be enabled before the MSI-X
++	 * registers can be accessed.  Mask all the vectors to prevent
++	 * interrupts coming in before they're fully set up.
++	 */
++	pci_msix_clear_and_set_ctrl(dev, 0, PCI_MSIX_FLAGS_MASKALL |
++				    PCI_MSIX_FLAGS_ENABLE);
+ 
+ 	pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control);
+ 	/* Request & Map MSI-X table region */
+-	base = msix_map_region(dev, msix_table_size(control));
+-	if (!base)
+-		return -ENOMEM;
++	tsize = msix_table_size(control);
++	base = msix_map_region(dev, tsize);
++	if (!base) {
++		ret = -ENOMEM;
++		goto out_disable;
++	}
++
++	/* Ensure that all table entries are masked. */
++	msix_mask_all(base, tsize);
+ 
+ 	ret = msix_setup_entries(dev, base, entries, nvec, affd);
+ 	if (ret)
+-		return ret;
++		goto out_disable;
+ 
+ 	ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX);
+ 	if (ret)
+@@ -801,15 +838,7 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
+ 	if (ret)
+ 		goto out_free;
+ 
+-	/*
+-	 * Some devices require MSI-X to be enabled before we can touch the
+-	 * MSI-X registers.  We need to mask all the vectors to prevent
+-	 * interrupts coming in before they're fully set up.
+-	 */
+-	pci_msix_clear_and_set_ctrl(dev, 0,
+-				PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE);
+-
+-	msix_program_entries(dev, entries);
++	msix_update_entries(dev, entries);
+ 
+ 	ret = populate_msi_sysfs(dev);
+ 	if (ret)
+@@ -843,6 +872,9 @@ out_avail:
+ out_free:
+ 	free_msi_irqs(dev);
+ 
++out_disable:
++	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
++
+ 	return ret;
+ }
+ 
+@@ -930,8 +962,7 @@ static void pci_msi_shutdown(struct pci_dev *dev)
+ 
+ 	/* Return the device with MSI unmasked as initial states */
+ 	mask = msi_mask(desc->msi_attrib.multi_cap);
+-	/* Keep cached state to be restored */
+-	__pci_msi_desc_mask_irq(desc, mask, ~mask);
++	msi_mask_irq(desc, mask, 0);
+ 
+ 	/* Restore dev->irq to its default pin-assertion IRQ */
+ 	dev->irq = desc->msi_attrib.default_irq;
+@@ -1016,10 +1047,8 @@ static void pci_msix_shutdown(struct pci_dev *dev)
+ 	}
+ 
+ 	/* Return the device with MSI-X masked as initial states */
+-	for_each_pci_msi_entry(entry, dev) {
+-		/* Keep cached states to be restored */
++	for_each_pci_msi_entry(entry, dev)
+ 		__pci_msix_desc_mask_irq(entry, 1);
+-	}
+ 
+ 	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
+ 	pci_intx_for_msi(dev, 1);
+diff --git a/drivers/pinctrl/intel/pinctrl-tigerlake.c b/drivers/pinctrl/intel/pinctrl-tigerlake.c
+index 3e354e02f4084..bed769d99b8be 100644
+--- a/drivers/pinctrl/intel/pinctrl-tigerlake.c
++++ b/drivers/pinctrl/intel/pinctrl-tigerlake.c
+@@ -701,32 +701,32 @@ static const struct pinctrl_pin_desc tglh_pins[] = {
+ 
+ static const struct intel_padgroup tglh_community0_gpps[] = {
+ 	TGL_GPP(0, 0, 24, 0),				/* GPP_A */
+-	TGL_GPP(1, 25, 44, 128),			/* GPP_R */
+-	TGL_GPP(2, 45, 70, 32),				/* GPP_B */
+-	TGL_GPP(3, 71, 78, INTEL_GPIO_BASE_NOMAP),	/* vGPIO_0 */
++	TGL_GPP(1, 25, 44, 32),				/* GPP_R */
++	TGL_GPP(2, 45, 70, 64),				/* GPP_B */
++	TGL_GPP(3, 71, 78, 96),				/* vGPIO_0 */
+ };
+ 
+ static const struct intel_padgroup tglh_community1_gpps[] = {
+-	TGL_GPP(0, 79, 104, 96),			/* GPP_D */
+-	TGL_GPP(1, 105, 128, 64),			/* GPP_C */
+-	TGL_GPP(2, 129, 136, 160),			/* GPP_S */
+-	TGL_GPP(3, 137, 153, 192),			/* GPP_G */
+-	TGL_GPP(4, 154, 180, 224),			/* vGPIO */
++	TGL_GPP(0, 79, 104, 128),			/* GPP_D */
++	TGL_GPP(1, 105, 128, 160),			/* GPP_C */
++	TGL_GPP(2, 129, 136, 192),			/* GPP_S */
++	TGL_GPP(3, 137, 153, 224),			/* GPP_G */
++	TGL_GPP(4, 154, 180, 256),			/* vGPIO */
+ };
+ 
+ static const struct intel_padgroup tglh_community3_gpps[] = {
+-	TGL_GPP(0, 181, 193, 256),			/* GPP_E */
+-	TGL_GPP(1, 194, 217, 288),			/* GPP_F */
++	TGL_GPP(0, 181, 193, 288),			/* GPP_E */
++	TGL_GPP(1, 194, 217, 320),			/* GPP_F */
+ };
+ 
+ static const struct intel_padgroup tglh_community4_gpps[] = {
+-	TGL_GPP(0, 218, 241, 320),			/* GPP_H */
++	TGL_GPP(0, 218, 241, 352),			/* GPP_H */
+ 	TGL_GPP(1, 242, 251, 384),			/* GPP_J */
+-	TGL_GPP(2, 252, 266, 352),			/* GPP_K */
++	TGL_GPP(2, 252, 266, 416),			/* GPP_K */
+ };
+ 
+ static const struct intel_padgroup tglh_community5_gpps[] = {
+-	TGL_GPP(0, 267, 281, 416),			/* GPP_I */
++	TGL_GPP(0, 267, 281, 448),			/* GPP_I */
+ 	TGL_GPP(1, 282, 290, INTEL_GPIO_BASE_NOMAP),	/* JTAG */
+ };
+ 
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+index 7815426e7aeaa..10002b8497fea 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+@@ -926,12 +926,10 @@ int mtk_pinconf_adv_pull_set(struct mtk_pinctrl *hw,
+ 			err = hw->soc->bias_set(hw, desc, pullup);
+ 			if (err)
+ 				return err;
+-		} else if (hw->soc->bias_set_combo) {
+-			err = hw->soc->bias_set_combo(hw, desc, pullup, arg);
+-			if (err)
+-				return err;
+ 		} else {
+-			return -ENOTSUPP;
++			err = mtk_pinconf_bias_set_rev1(hw, desc, pullup);
++			if (err)
++				err = mtk_pinconf_bias_set(hw, desc, pullup);
+ 		}
+ 	}
+ 
+diff --git a/drivers/platform/x86/pcengines-apuv2.c b/drivers/platform/x86/pcengines-apuv2.c
+index c37349f97bb80..d063d91db9bcb 100644
+--- a/drivers/platform/x86/pcengines-apuv2.c
++++ b/drivers/platform/x86/pcengines-apuv2.c
+@@ -94,6 +94,7 @@ static struct gpiod_lookup_table gpios_led_table = {
+ 				NULL, 1, GPIO_ACTIVE_LOW),
+ 		GPIO_LOOKUP_IDX(AMD_FCH_GPIO_DRIVER_NAME, APU2_GPIO_LINE_LED3,
+ 				NULL, 2, GPIO_ACTIVE_LOW),
++		{} /* Terminating entry */
+ 	}
+ };
+ 
+@@ -123,6 +124,7 @@ static struct gpiod_lookup_table gpios_key_table = {
+ 	.table = {
+ 		GPIO_LOOKUP_IDX(AMD_FCH_GPIO_DRIVER_NAME, APU2_GPIO_LINE_MODESW,
+ 				NULL, 0, GPIO_ACTIVE_LOW),
++		{} /* Terminating entry */
+ 	}
+ };
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 2dde5ddc687de..37612299a34a1 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -13080,6 +13080,8 @@ lpfc_pci_probe_one_s4(struct pci_dev *pdev, const struct pci_device_id *pid)
+ 	if (!phba)
+ 		return -ENOMEM;
+ 
++	INIT_LIST_HEAD(&phba->poll_list);
++
+ 	/* Perform generic PCI device enabling operation */
+ 	error = lpfc_enable_pci_dev(phba);
+ 	if (error)
+@@ -13214,7 +13216,6 @@ lpfc_pci_probe_one_s4(struct pci_dev *pdev, const struct pci_device_id *pid)
+ 	/* Enable RAS FW log support */
+ 	lpfc_sli4_ras_setup(phba);
+ 
+-	INIT_LIST_HEAD(&phba->poll_list);
+ 	timer_setup(&phba->cpuhp_poll_timer, lpfc_sli4_poll_hbtimer, 0);
+ 	cpuhp_state_add_instance_nocalls(lpfc_cpuhp_state, &phba->cpuhp);
+ 
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index fe7ed3212473d..fbdc9468818d3 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -511,7 +511,6 @@ static int cq_create(struct mlx5_vdpa_net *ndev, u16 idx, u32 num_ent)
+ 	void __iomem *uar_page = ndev->mvdev.res.uar->map;
+ 	u32 out[MLX5_ST_SZ_DW(create_cq_out)];
+ 	struct mlx5_vdpa_cq *vcq = &mvq->cq;
+-	unsigned int irqn;
+ 	__be64 *pas;
+ 	int inlen;
+ 	void *cqc;
+@@ -551,7 +550,7 @@ static int cq_create(struct mlx5_vdpa_net *ndev, u16 idx, u32 num_ent)
+ 	/* Use vector 0 by default. Consider adding code to choose least used
+ 	 * vector.
+ 	 */
+-	err = mlx5_vector2eqn(mdev, 0, &eqn, &irqn);
++	err = mlx5_vector2eqn(mdev, 0, &eqn);
+ 	if (err)
+ 		goto err_vec;
+ 
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index af0f6ad32522c..fba78daee449a 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -192,12 +192,12 @@ static void disable_dynirq(struct irq_data *data);
+ 
+ static DEFINE_PER_CPU(unsigned int, irq_epoch);
+ 
+-static void clear_evtchn_to_irq_row(unsigned row)
++static void clear_evtchn_to_irq_row(int *evtchn_row)
+ {
+ 	unsigned col;
+ 
+ 	for (col = 0; col < EVTCHN_PER_ROW; col++)
+-		WRITE_ONCE(evtchn_to_irq[row][col], -1);
++		WRITE_ONCE(evtchn_row[col], -1);
+ }
+ 
+ static void clear_evtchn_to_irq_all(void)
+@@ -207,7 +207,7 @@ static void clear_evtchn_to_irq_all(void)
+ 	for (row = 0; row < EVTCHN_ROW(xen_evtchn_max_channels()); row++) {
+ 		if (evtchn_to_irq[row] == NULL)
+ 			continue;
+-		clear_evtchn_to_irq_row(row);
++		clear_evtchn_to_irq_row(evtchn_to_irq[row]);
+ 	}
+ }
+ 
+@@ -215,6 +215,7 @@ static int set_evtchn_to_irq(evtchn_port_t evtchn, unsigned int irq)
+ {
+ 	unsigned row;
+ 	unsigned col;
++	int *evtchn_row;
+ 
+ 	if (evtchn >= xen_evtchn_max_channels())
+ 		return -EINVAL;
+@@ -227,11 +228,18 @@ static int set_evtchn_to_irq(evtchn_port_t evtchn, unsigned int irq)
+ 		if (irq == -1)
+ 			return 0;
+ 
+-		evtchn_to_irq[row] = (int *)get_zeroed_page(GFP_KERNEL);
+-		if (evtchn_to_irq[row] == NULL)
++		evtchn_row = (int *) __get_free_pages(GFP_KERNEL, 0);
++		if (evtchn_row == NULL)
+ 			return -ENOMEM;
+ 
+-		clear_evtchn_to_irq_row(row);
++		clear_evtchn_to_irq_row(evtchn_row);
++
++		/*
++		 * We've prepared an empty row for the mapping. If a different
++		 * thread was faster inserting it, we can drop ours.
++		 */
++		if (cmpxchg(&evtchn_to_irq[row], NULL, evtchn_row) != NULL)
++			free_page((unsigned long) evtchn_row);
+ 	}
+ 
+ 	WRITE_ONCE(evtchn_to_irq[row][col], irq);
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index e4fc99afa25a9..45093a765a9b5 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -4202,11 +4202,19 @@ bad:
+ 
+ /*
+  * Delayed work handler to process end of delayed cap release LRU list.
++ *
++ * If new caps are added to the list while processing it, these won't get
++ * processed in this run.  In this case, the ci->i_hold_caps_max will be
++ * returned so that the work can be scheduled accordingly.
+  */
+-void ceph_check_delayed_caps(struct ceph_mds_client *mdsc)
++unsigned long ceph_check_delayed_caps(struct ceph_mds_client *mdsc)
+ {
+ 	struct inode *inode;
+ 	struct ceph_inode_info *ci;
++	struct ceph_mount_options *opt = mdsc->fsc->mount_options;
++	unsigned long delay_max = opt->caps_wanted_delay_max * HZ;
++	unsigned long loop_start = jiffies;
++	unsigned long delay = 0;
+ 
+ 	dout("check_delayed_caps\n");
+ 	spin_lock(&mdsc->cap_delay_lock);
+@@ -4214,6 +4222,11 @@ void ceph_check_delayed_caps(struct ceph_mds_client *mdsc)
+ 		ci = list_first_entry(&mdsc->cap_delay_list,
+ 				      struct ceph_inode_info,
+ 				      i_cap_delay_list);
++		if (time_before(loop_start, ci->i_hold_caps_max - delay_max)) {
++			dout("%s caps added recently.  Exiting loop", __func__);
++			delay = ci->i_hold_caps_max;
++			break;
++		}
+ 		if ((ci->i_ceph_flags & CEPH_I_FLUSH) == 0 &&
+ 		    time_before(jiffies, ci->i_hold_caps_max))
+ 			break;
+@@ -4230,6 +4243,8 @@ void ceph_check_delayed_caps(struct ceph_mds_client *mdsc)
+ 		}
+ 	}
+ 	spin_unlock(&mdsc->cap_delay_lock);
++
++	return delay;
+ }
+ 
+ /*
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 6b00f1d7c8e77..1701902415c4b 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -4435,22 +4435,29 @@ void inc_session_sequence(struct ceph_mds_session *s)
+ }
+ 
+ /*
+- * delayed work -- periodically trim expired leases, renew caps with mds
++ * delayed work -- periodically trim expired leases, renew caps with mds.  If
++ * the @delay parameter is set to 0 or if it's more than 5 secs, the default
++ * workqueue delay value of 5 secs will be used.
+  */
+-static void schedule_delayed(struct ceph_mds_client *mdsc)
++static void schedule_delayed(struct ceph_mds_client *mdsc, unsigned long delay)
+ {
+-	int delay = 5;
+-	unsigned hz = round_jiffies_relative(HZ * delay);
+-	schedule_delayed_work(&mdsc->delayed_work, hz);
++	unsigned long max_delay = HZ * 5;
++
++	/* 5 secs default delay */
++	if (!delay || (delay > max_delay))
++		delay = max_delay;
++	schedule_delayed_work(&mdsc->delayed_work,
++			      round_jiffies_relative(delay));
+ }
+ 
+ static void delayed_work(struct work_struct *work)
+ {
+-	int i;
+ 	struct ceph_mds_client *mdsc =
+ 		container_of(work, struct ceph_mds_client, delayed_work.work);
++	unsigned long delay;
+ 	int renew_interval;
+ 	int renew_caps;
++	int i;
+ 
+ 	dout("mdsc delayed_work\n");
+ 
+@@ -4490,7 +4497,7 @@ static void delayed_work(struct work_struct *work)
+ 	}
+ 	mutex_unlock(&mdsc->mutex);
+ 
+-	ceph_check_delayed_caps(mdsc);
++	delay = ceph_check_delayed_caps(mdsc);
+ 
+ 	ceph_queue_cap_reclaim_work(mdsc);
+ 
+@@ -4498,7 +4505,7 @@ static void delayed_work(struct work_struct *work)
+ 
+ 	maybe_recover_session(mdsc);
+ 
+-	schedule_delayed(mdsc);
++	schedule_delayed(mdsc, delay);
+ }
+ 
+ int ceph_mdsc_init(struct ceph_fs_client *fsc)
+@@ -4984,7 +4991,7 @@ void ceph_mdsc_handle_mdsmap(struct ceph_mds_client *mdsc, struct ceph_msg *msg)
+ 			  mdsc->mdsmap->m_epoch);
+ 
+ 	mutex_unlock(&mdsc->mutex);
+-	schedule_delayed(mdsc);
++	schedule_delayed(mdsc, 0);
+ 	return;
+ 
+ bad_unlock:
+diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
+index b611f829cb611..803b60a967023 100644
+--- a/fs/ceph/snap.c
++++ b/fs/ceph/snap.c
+@@ -60,24 +60,26 @@
+ /*
+  * increase ref count for the realm
+  *
+- * caller must hold snap_rwsem for write.
++ * caller must hold snap_rwsem.
+  */
+ void ceph_get_snap_realm(struct ceph_mds_client *mdsc,
+ 			 struct ceph_snap_realm *realm)
+ {
+-	dout("get_realm %p %d -> %d\n", realm,
+-	     atomic_read(&realm->nref), atomic_read(&realm->nref)+1);
++	lockdep_assert_held(&mdsc->snap_rwsem);
++
+ 	/*
+-	 * since we _only_ increment realm refs or empty the empty
+-	 * list with snap_rwsem held, adjusting the empty list here is
+-	 * safe.  we do need to protect against concurrent empty list
+-	 * additions, however.
++	 * The 0->1 and 1->0 transitions must take the snap_empty_lock
++	 * atomically with the refcount change. Go ahead and bump the
++	 * nref here, unless it's 0, in which case we take the spinlock
++	 * and then do the increment and remove it from the list.
+ 	 */
+-	if (atomic_inc_return(&realm->nref) == 1) {
+-		spin_lock(&mdsc->snap_empty_lock);
++	if (atomic_inc_not_zero(&realm->nref))
++		return;
++
++	spin_lock(&mdsc->snap_empty_lock);
++	if (atomic_inc_return(&realm->nref) == 1)
+ 		list_del_init(&realm->empty_item);
+-		spin_unlock(&mdsc->snap_empty_lock);
+-	}
++	spin_unlock(&mdsc->snap_empty_lock);
+ }
+ 
+ static void __insert_snap_realm(struct rb_root *root,
+@@ -113,6 +115,8 @@ static struct ceph_snap_realm *ceph_create_snap_realm(
+ {
+ 	struct ceph_snap_realm *realm;
+ 
++	lockdep_assert_held_write(&mdsc->snap_rwsem);
++
+ 	realm = kzalloc(sizeof(*realm), GFP_NOFS);
+ 	if (!realm)
+ 		return ERR_PTR(-ENOMEM);
+@@ -135,7 +139,7 @@ static struct ceph_snap_realm *ceph_create_snap_realm(
+ /*
+  * lookup the realm rooted at @ino.
+  *
+- * caller must hold snap_rwsem for write.
++ * caller must hold snap_rwsem.
+  */
+ static struct ceph_snap_realm *__lookup_snap_realm(struct ceph_mds_client *mdsc,
+ 						   u64 ino)
+@@ -143,6 +147,8 @@ static struct ceph_snap_realm *__lookup_snap_realm(struct ceph_mds_client *mdsc,
+ 	struct rb_node *n = mdsc->snap_realms.rb_node;
+ 	struct ceph_snap_realm *r;
+ 
++	lockdep_assert_held(&mdsc->snap_rwsem);
++
+ 	while (n) {
+ 		r = rb_entry(n, struct ceph_snap_realm, node);
+ 		if (ino < r->ino)
+@@ -176,6 +182,8 @@ static void __put_snap_realm(struct ceph_mds_client *mdsc,
+ static void __destroy_snap_realm(struct ceph_mds_client *mdsc,
+ 				 struct ceph_snap_realm *realm)
+ {
++	lockdep_assert_held_write(&mdsc->snap_rwsem);
++
+ 	dout("__destroy_snap_realm %p %llx\n", realm, realm->ino);
+ 
+ 	rb_erase(&realm->node, &mdsc->snap_realms);
+@@ -198,28 +206,30 @@ static void __destroy_snap_realm(struct ceph_mds_client *mdsc,
+ static void __put_snap_realm(struct ceph_mds_client *mdsc,
+ 			     struct ceph_snap_realm *realm)
+ {
+-	dout("__put_snap_realm %llx %p %d -> %d\n", realm->ino, realm,
+-	     atomic_read(&realm->nref), atomic_read(&realm->nref)-1);
++	lockdep_assert_held_write(&mdsc->snap_rwsem);
++
++	/*
++	 * We do not require the snap_empty_lock here, as any caller that
++	 * increments the value must hold the snap_rwsem.
++	 */
+ 	if (atomic_dec_and_test(&realm->nref))
+ 		__destroy_snap_realm(mdsc, realm);
+ }
+ 
+ /*
+- * caller needn't hold any locks
++ * See comments in ceph_get_snap_realm. Caller needn't hold any locks.
+  */
+ void ceph_put_snap_realm(struct ceph_mds_client *mdsc,
+ 			 struct ceph_snap_realm *realm)
+ {
+-	dout("put_snap_realm %llx %p %d -> %d\n", realm->ino, realm,
+-	     atomic_read(&realm->nref), atomic_read(&realm->nref)-1);
+-	if (!atomic_dec_and_test(&realm->nref))
++	if (!atomic_dec_and_lock(&realm->nref, &mdsc->snap_empty_lock))
+ 		return;
+ 
+ 	if (down_write_trylock(&mdsc->snap_rwsem)) {
++		spin_unlock(&mdsc->snap_empty_lock);
+ 		__destroy_snap_realm(mdsc, realm);
+ 		up_write(&mdsc->snap_rwsem);
+ 	} else {
+-		spin_lock(&mdsc->snap_empty_lock);
+ 		list_add(&realm->empty_item, &mdsc->snap_empty);
+ 		spin_unlock(&mdsc->snap_empty_lock);
+ 	}
+@@ -236,6 +246,8 @@ static void __cleanup_empty_realms(struct ceph_mds_client *mdsc)
+ {
+ 	struct ceph_snap_realm *realm;
+ 
++	lockdep_assert_held_write(&mdsc->snap_rwsem);
++
+ 	spin_lock(&mdsc->snap_empty_lock);
+ 	while (!list_empty(&mdsc->snap_empty)) {
+ 		realm = list_first_entry(&mdsc->snap_empty,
+@@ -269,6 +281,8 @@ static int adjust_snap_realm_parent(struct ceph_mds_client *mdsc,
+ {
+ 	struct ceph_snap_realm *parent;
+ 
++	lockdep_assert_held_write(&mdsc->snap_rwsem);
++
+ 	if (realm->parent_ino == parentino)
+ 		return 0;
+ 
+@@ -686,6 +700,8 @@ int ceph_update_snap_trace(struct ceph_mds_client *mdsc,
+ 	int err = -ENOMEM;
+ 	LIST_HEAD(dirty_realms);
+ 
++	lockdep_assert_held_write(&mdsc->snap_rwsem);
++
+ 	dout("update_snap_trace deletion=%d\n", deletion);
+ more:
+ 	ceph_decode_need(&p, e, sizeof(*ri), bad);
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index c33f744a8e11c..6712509ae1d64 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -1138,7 +1138,7 @@ extern void ceph_flush_snaps(struct ceph_inode_info *ci,
+ extern bool __ceph_should_report_size(struct ceph_inode_info *ci);
+ extern void ceph_check_caps(struct ceph_inode_info *ci, int flags,
+ 			    struct ceph_mds_session *session);
+-extern void ceph_check_delayed_caps(struct ceph_mds_client *mdsc);
++extern unsigned long ceph_check_delayed_caps(struct ceph_mds_client *mdsc);
+ extern void ceph_flush_dirty_caps(struct ceph_mds_client *mdsc);
+ extern int  ceph_drop_caps_for_unlink(struct inode *inode);
+ extern int ceph_encode_inode_release(void **p, struct inode *inode,
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index ab509965656e3..ca5102773b72b 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2367,7 +2367,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
+ 	memcpy(aclptr, &acl, sizeof(struct cifs_acl));
+ 
+ 	buf->ccontext.DataLength = cpu_to_le32(ptr - (__u8 *)&buf->sd);
+-	*len = ptr - (__u8 *)buf;
++	*len = roundup(ptr - (__u8 *)buf, 8);
+ 
+ 	return buf;
+ }
+diff --git a/fs/vboxsf/dir.c b/fs/vboxsf/dir.c
+index 0664787f2b74f..0d85959be0d55 100644
+--- a/fs/vboxsf/dir.c
++++ b/fs/vboxsf/dir.c
+@@ -306,6 +306,53 @@ static int vboxsf_dir_mkdir(struct inode *parent, struct dentry *dentry,
+ 	return vboxsf_dir_create(parent, dentry, mode, true, true, NULL);
+ }
+ 
++static int vboxsf_dir_atomic_open(struct inode *parent, struct dentry *dentry,
++				  struct file *file, unsigned int flags, umode_t mode)
++{
++	struct vboxsf_sbi *sbi = VBOXSF_SBI(parent->i_sb);
++	struct vboxsf_handle *sf_handle;
++	struct dentry *res = NULL;
++	u64 handle;
++	int err;
++
++	if (d_in_lookup(dentry)) {
++		res = vboxsf_dir_lookup(parent, dentry, 0);
++		if (IS_ERR(res))
++			return PTR_ERR(res);
++
++		if (res)
++			dentry = res;
++	}
++
++	/* Only creates */
++	if (!(flags & O_CREAT) || d_really_is_positive(dentry))
++		return finish_no_open(file, res);
++
++	err = vboxsf_dir_create(parent, dentry, mode, false, flags & O_EXCL, &handle);
++	if (err)
++		goto out;
++
++	sf_handle = vboxsf_create_sf_handle(d_inode(dentry), handle, SHFL_CF_ACCESS_READWRITE);
++	if (IS_ERR(sf_handle)) {
++		vboxsf_close(sbi->root, handle);
++		err = PTR_ERR(sf_handle);
++		goto out;
++	}
++
++	err = finish_open(file, dentry, generic_file_open);
++	if (err) {
++		/* This also closes the handle passed to vboxsf_create_sf_handle() */
++		vboxsf_release_sf_handle(d_inode(dentry), sf_handle);
++		goto out;
++	}
++
++	file->private_data = sf_handle;
++	file->f_mode |= FMODE_CREATED;
++out:
++	dput(res);
++	return err;
++}
++
+ static int vboxsf_dir_unlink(struct inode *parent, struct dentry *dentry)
+ {
+ 	struct vboxsf_sbi *sbi = VBOXSF_SBI(parent->i_sb);
+@@ -424,6 +471,7 @@ const struct inode_operations vboxsf_dir_iops = {
+ 	.lookup  = vboxsf_dir_lookup,
+ 	.create  = vboxsf_dir_mkfile,
+ 	.mkdir   = vboxsf_dir_mkdir,
++	.atomic_open = vboxsf_dir_atomic_open,
+ 	.rmdir   = vboxsf_dir_unlink,
+ 	.unlink  = vboxsf_dir_unlink,
+ 	.rename  = vboxsf_dir_rename,
+diff --git a/fs/vboxsf/file.c b/fs/vboxsf/file.c
+index c4ab5996d97a8..864c2fad23beb 100644
+--- a/fs/vboxsf/file.c
++++ b/fs/vboxsf/file.c
+@@ -20,17 +20,39 @@ struct vboxsf_handle {
+ 	struct list_head head;
+ };
+ 
+-static int vboxsf_file_open(struct inode *inode, struct file *file)
++struct vboxsf_handle *vboxsf_create_sf_handle(struct inode *inode,
++					      u64 handle, u32 access_flags)
+ {
+ 	struct vboxsf_inode *sf_i = VBOXSF_I(inode);
+-	struct shfl_createparms params = {};
+ 	struct vboxsf_handle *sf_handle;
+-	u32 access_flags = 0;
+-	int err;
+ 
+ 	sf_handle = kmalloc(sizeof(*sf_handle), GFP_KERNEL);
+ 	if (!sf_handle)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
++
++	/* the host may have given us different attr then requested */
++	sf_i->force_restat = 1;
++
++	/* init our handle struct and add it to the inode's handles list */
++	sf_handle->handle = handle;
++	sf_handle->root = VBOXSF_SBI(inode->i_sb)->root;
++	sf_handle->access_flags = access_flags;
++	kref_init(&sf_handle->refcount);
++
++	mutex_lock(&sf_i->handle_list_mutex);
++	list_add(&sf_handle->head, &sf_i->handle_list);
++	mutex_unlock(&sf_i->handle_list_mutex);
++
++	return sf_handle;
++}
++
++static int vboxsf_file_open(struct inode *inode, struct file *file)
++{
++	struct vboxsf_sbi *sbi = VBOXSF_SBI(inode->i_sb);
++	struct shfl_createparms params = {};
++	struct vboxsf_handle *sf_handle;
++	u32 access_flags = 0;
++	int err;
+ 
+ 	/*
+ 	 * We check the value of params.handle afterwards to find out if
+@@ -83,23 +105,14 @@ static int vboxsf_file_open(struct inode *inode, struct file *file)
+ 	err = vboxsf_create_at_dentry(file_dentry(file), &params);
+ 	if (err == 0 && params.handle == SHFL_HANDLE_NIL)
+ 		err = (params.result == SHFL_FILE_EXISTS) ? -EEXIST : -ENOENT;
+-	if (err) {
+-		kfree(sf_handle);
++	if (err)
+ 		return err;
+-	}
+-
+-	/* the host may have given us different attr then requested */
+-	sf_i->force_restat = 1;
+ 
+-	/* init our handle struct and add it to the inode's handles list */
+-	sf_handle->handle = params.handle;
+-	sf_handle->root = VBOXSF_SBI(inode->i_sb)->root;
+-	sf_handle->access_flags = access_flags;
+-	kref_init(&sf_handle->refcount);
+-
+-	mutex_lock(&sf_i->handle_list_mutex);
+-	list_add(&sf_handle->head, &sf_i->handle_list);
+-	mutex_unlock(&sf_i->handle_list_mutex);
++	sf_handle = vboxsf_create_sf_handle(inode, params.handle, access_flags);
++	if (IS_ERR(sf_handle)) {
++		vboxsf_close(sbi->root, params.handle);
++		return PTR_ERR(sf_handle);
++	}
+ 
+ 	file->private_data = sf_handle;
+ 	return 0;
+@@ -114,22 +127,26 @@ static void vboxsf_handle_release(struct kref *refcount)
+ 	kfree(sf_handle);
+ }
+ 
+-static int vboxsf_file_release(struct inode *inode, struct file *file)
++void vboxsf_release_sf_handle(struct inode *inode, struct vboxsf_handle *sf_handle)
+ {
+ 	struct vboxsf_inode *sf_i = VBOXSF_I(inode);
+-	struct vboxsf_handle *sf_handle = file->private_data;
+ 
++	mutex_lock(&sf_i->handle_list_mutex);
++	list_del(&sf_handle->head);
++	mutex_unlock(&sf_i->handle_list_mutex);
++
++	kref_put(&sf_handle->refcount, vboxsf_handle_release);
++}
++
++static int vboxsf_file_release(struct inode *inode, struct file *file)
++{
+ 	/*
+ 	 * When a file is closed on our (the guest) side, we want any subsequent
+ 	 * accesses done on the host side to see all changes done from our side.
+ 	 */
+ 	filemap_write_and_wait(inode->i_mapping);
+ 
+-	mutex_lock(&sf_i->handle_list_mutex);
+-	list_del(&sf_handle->head);
+-	mutex_unlock(&sf_i->handle_list_mutex);
+-
+-	kref_put(&sf_handle->refcount, vboxsf_handle_release);
++	vboxsf_release_sf_handle(inode, file->private_data);
+ 	return 0;
+ }
+ 
+diff --git a/fs/vboxsf/vfsmod.h b/fs/vboxsf/vfsmod.h
+index 18f95b00fc334..a4050b166c999 100644
+--- a/fs/vboxsf/vfsmod.h
++++ b/fs/vboxsf/vfsmod.h
+@@ -18,6 +18,8 @@
+ #define VBOXSF_SBI(sb)	((struct vboxsf_sbi *)(sb)->s_fs_info)
+ #define VBOXSF_I(i)	container_of(i, struct vboxsf_inode, vfs_inode)
+ 
++struct vboxsf_handle;
++
+ struct vboxsf_options {
+ 	unsigned long ttl;
+ 	kuid_t uid;
+@@ -80,6 +82,11 @@ extern const struct file_operations vboxsf_reg_fops;
+ extern const struct address_space_operations vboxsf_reg_aops;
+ extern const struct dentry_operations vboxsf_dentry_ops;
+ 
++/* from file.c */
++struct vboxsf_handle *vboxsf_create_sf_handle(struct inode *inode,
++					      u64 handle, u32 access_flags);
++void vboxsf_release_sf_handle(struct inode *inode, struct vboxsf_handle *sf_handle);
++
+ /* from utils.c */
+ struct inode *vboxsf_new_inode(struct super_block *sb);
+ void vboxsf_init_inode(struct vboxsf_sbi *sbi, struct inode *inode,
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 18468b46c4506..a774361f28d40 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -599,6 +599,7 @@
+ 		NOINSTR_TEXT						\
+ 		*(.text..refcount)					\
+ 		*(.ref.text)						\
++		*(.text.asan.* .text.tsan.*)				\
+ 	MEM_KEEP(init.text*)						\
+ 	MEM_KEEP(exit.text*)						\
+ 
+diff --git a/include/linux/device.h b/include/linux/device.h
+index 8d97871631d02..5dc0f81e4f9d4 100644
+--- a/include/linux/device.h
++++ b/include/linux/device.h
+@@ -497,6 +497,7 @@ struct device {
+ 	struct dev_pin_info	*pins;
+ #endif
+ #ifdef CONFIG_GENERIC_MSI_IRQ
++	raw_spinlock_t		msi_lock;
+ 	struct list_head	msi_list;
+ #endif
+ #ifdef CONFIG_DMA_OPS
+diff --git a/include/linux/inetdevice.h b/include/linux/inetdevice.h
+index 3515ca64e638a..b68fca08be27c 100644
+--- a/include/linux/inetdevice.h
++++ b/include/linux/inetdevice.h
+@@ -41,7 +41,7 @@ struct in_device {
+ 	unsigned long		mr_qri;		/* Query Response Interval */
+ 	unsigned char		mr_qrv;		/* Query Robustness Variable */
+ 	unsigned char		mr_gq_running;
+-	unsigned char		mr_ifc_count;
++	u32			mr_ifc_count;
+ 	struct timer_list	mr_gq_timer;	/* general query timer */
+ 	struct timer_list	mr_ifc_timer;	/* interface change timer */
+ 
+diff --git a/include/linux/irq.h b/include/linux/irq.h
+index a36d35c259963..607bee9271bd7 100644
+--- a/include/linux/irq.h
++++ b/include/linux/irq.h
+@@ -567,6 +567,7 @@ struct irq_chip {
+  * IRQCHIP_SUPPORTS_NMI:              Chip can deliver NMIs, only for root irqchips
+  * IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND:  Invokes __enable_irq()/__disable_irq() for wake irqs
+  *                                    in the suspend path if they are in disabled state
++ * IRQCHIP_AFFINITY_PRE_STARTUP:      Default affinity update before startup
+  */
+ enum {
+ 	IRQCHIP_SET_TYPE_MASKED			= (1 <<  0),
+@@ -579,6 +580,7 @@ enum {
+ 	IRQCHIP_SUPPORTS_LEVEL_MSI		= (1 <<  7),
+ 	IRQCHIP_SUPPORTS_NMI			= (1 <<  8),
+ 	IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND	= (1 <<  9),
++	IRQCHIP_AFFINITY_PRE_STARTUP		= (1 << 10),
+ };
+ 
+ #include <linux/irqdesc.h>
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index add85094f9a58..41fbb4793394d 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -981,8 +981,7 @@ void mlx5_unregister_debugfs(void);
+ void mlx5_fill_page_array(struct mlx5_frag_buf *buf, __be64 *pas);
+ void mlx5_fill_page_frag_array_perm(struct mlx5_frag_buf *buf, __be64 *pas, u8 perm);
+ void mlx5_fill_page_frag_array(struct mlx5_frag_buf *frag_buf, __be64 *pas);
+-int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn,
+-		    unsigned int *irqn);
++int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn);
+ int mlx5_core_attach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn);
+ int mlx5_core_detach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn);
+ 
+diff --git a/include/linux/msi.h b/include/linux/msi.h
+index 2a3e997751cea..70c910b23e131 100644
+--- a/include/linux/msi.h
++++ b/include/linux/msi.h
+@@ -194,7 +194,7 @@ void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
+ void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
+ 
+ u32 __pci_msix_desc_mask_irq(struct msi_desc *desc, u32 flag);
+-u32 __pci_msi_desc_mask_irq(struct msi_desc *desc, u32 mask, u32 flag);
++void __pci_msi_desc_mask_irq(struct msi_desc *desc, u32 mask, u32 flag);
+ void pci_msi_mask_irq(struct irq_data *data);
+ void pci_msi_unmask_irq(struct irq_data *data);
+ 
+diff --git a/include/net/psample.h b/include/net/psample.h
+index 68ae16bb0a4a8..20a17551f790f 100644
+--- a/include/net/psample.h
++++ b/include/net/psample.h
+@@ -18,6 +18,8 @@ struct psample_group *psample_group_get(struct net *net, u32 group_num);
+ void psample_group_take(struct psample_group *group);
+ void psample_group_put(struct psample_group *group);
+ 
++struct sk_buff;
++
+ #if IS_ENABLED(CONFIG_PSAMPLE)
+ 
+ void psample_sample_packet(struct psample_group *group, struct sk_buff *skb,
+diff --git a/include/uapi/linux/neighbour.h b/include/uapi/linux/neighbour.h
+index dc8b72201f6c5..00a60695fa538 100644
+--- a/include/uapi/linux/neighbour.h
++++ b/include/uapi/linux/neighbour.h
+@@ -66,8 +66,11 @@ enum {
+ #define NUD_NONE	0x00
+ 
+ /* NUD_NOARP & NUD_PERMANENT are pseudostates, they never change
+-   and make no address resolution or NUD.
+-   NUD_PERMANENT also cannot be deleted by garbage collectors.
++ * and make no address resolution or NUD.
++ * NUD_PERMANENT also cannot be deleted by garbage collectors.
++ * When NTF_EXT_LEARNED is set for a bridge fdb entry the different cache entry
++ * states don't make sense and thus are ignored. Such entries don't age and
++ * can roam.
+  */
+ 
+ struct nda_cacheinfo {
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 1fccba6e88c4e..6c444e815406b 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -1425,8 +1425,8 @@ alloc:
+ 	/* We cannot do copy_from_user or copy_to_user inside
+ 	 * the rcu_read_lock. Allocate enough space here.
+ 	 */
+-	keys = kvmalloc(key_size * bucket_size, GFP_USER | __GFP_NOWARN);
+-	values = kvmalloc(value_size * bucket_size, GFP_USER | __GFP_NOWARN);
++	keys = kvmalloc_array(key_size, bucket_size, GFP_USER | __GFP_NOWARN);
++	values = kvmalloc_array(value_size, bucket_size, GFP_USER | __GFP_NOWARN);
+ 	if (!keys || !values) {
+ 		ret = -ENOMEM;
+ 		goto after_loop;
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index b9b9618e1aca9..0b70811fd9561 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -265,8 +265,11 @@ int irq_startup(struct irq_desc *desc, bool resend, bool force)
+ 	} else {
+ 		switch (__irq_startup_managed(desc, aff, force)) {
+ 		case IRQ_STARTUP_NORMAL:
++			if (d->chip->flags & IRQCHIP_AFFINITY_PRE_STARTUP)
++				irq_setup_affinity(desc);
+ 			ret = __irq_startup(desc);
+-			irq_setup_affinity(desc);
++			if (!(d->chip->flags & IRQCHIP_AFFINITY_PRE_STARTUP))
++				irq_setup_affinity(desc);
+ 			break;
+ 		case IRQ_STARTUP_MANAGED:
+ 			irq_do_set_affinity(d, aff, false);
+diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
+index d924676c8781b..d217acc9f71b6 100644
+--- a/kernel/irq/msi.c
++++ b/kernel/irq/msi.c
+@@ -476,11 +476,6 @@ skip_activate:
+ 	return 0;
+ 
+ cleanup:
+-	for_each_msi_vector(desc, i, dev) {
+-		irq_data = irq_domain_get_irq_data(domain, i);
+-		if (irqd_is_activated(irq_data))
+-			irq_domain_deactivate_irq(irq_data);
+-	}
+ 	msi_domain_free_irqs(domain, dev);
+ 	return ret;
+ }
+@@ -505,7 +500,15 @@ int msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
+ 
+ void __msi_domain_free_irqs(struct irq_domain *domain, struct device *dev)
+ {
++	struct irq_data *irq_data;
+ 	struct msi_desc *desc;
++	int i;
++
++	for_each_msi_vector(desc, i, dev) {
++		irq_data = irq_domain_get_irq_data(domain, i);
++		if (irqd_is_activated(irq_data))
++			irq_domain_deactivate_irq(irq_data);
++	}
+ 
+ 	for_each_msi_entry(desc, dev) {
+ 		/*
+diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
+index 773b6105c4aea..6990490fa67be 100644
+--- a/kernel/irq/timings.c
++++ b/kernel/irq/timings.c
+@@ -453,6 +453,11 @@ static __always_inline void __irq_timings_store(int irq, struct irqt_stat *irqs,
+ 	 */
+ 	index = irq_timings_interval_index(interval);
+ 
++	if (index > PREDICTION_BUFFER_SIZE - 1) {
++		irqs->count = 0;
++		return;
++	}
++
+ 	/*
+ 	 * Store the index as an element of the pattern in another
+ 	 * circular array.
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index 0aabfcaf269a9..305f0eca163ed 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -511,7 +511,7 @@ static inline void seccomp_sync_threads(unsigned long flags)
+ 		smp_store_release(&thread->seccomp.filter,
+ 				  caller->seccomp.filter);
+ 		atomic_set(&thread->seccomp.filter_count,
+-			   atomic_read(&thread->seccomp.filter_count));
++			   atomic_read(&caller->seccomp.filter_count));
+ 
+ 		/*
+ 		 * Don't let an unprivileged task work around
+diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
+index 32ac8343b0ba1..8a6470a217024 100644
+--- a/net/bridge/br_fdb.c
++++ b/net/bridge/br_fdb.c
+@@ -950,7 +950,8 @@ static int fdb_add_entry(struct net_bridge *br, struct net_bridge_port *source,
+ 
+ static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br,
+ 			struct net_bridge_port *p, const unsigned char *addr,
+-			u16 nlh_flags, u16 vid, struct nlattr *nfea_tb[])
++			u16 nlh_flags, u16 vid, struct nlattr *nfea_tb[],
++			struct netlink_ext_ack *extack)
+ {
+ 	int err = 0;
+ 
+@@ -969,6 +970,11 @@ static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br,
+ 		rcu_read_unlock();
+ 		local_bh_enable();
+ 	} else if (ndm->ndm_flags & NTF_EXT_LEARNED) {
++		if (!p && !(ndm->ndm_state & NUD_PERMANENT)) {
++			NL_SET_ERR_MSG_MOD(extack,
++					   "FDB entry towards bridge must be permanent");
++			return -EINVAL;
++		}
+ 		err = br_fdb_external_learn_add(br, p, addr, vid, true);
+ 	} else {
+ 		spin_lock_bh(&br->hash_lock);
+@@ -1041,9 +1047,11 @@ int br_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
+ 		}
+ 
+ 		/* VID was specified, so use it. */
+-		err = __br_fdb_add(ndm, br, p, addr, nlh_flags, vid, nfea_tb);
++		err = __br_fdb_add(ndm, br, p, addr, nlh_flags, vid, nfea_tb,
++				   extack);
+ 	} else {
+-		err = __br_fdb_add(ndm, br, p, addr, nlh_flags, 0, nfea_tb);
++		err = __br_fdb_add(ndm, br, p, addr, nlh_flags, 0, nfea_tb,
++				   extack);
+ 		if (err || !vg || !vg->num_vlans)
+ 			goto out;
+ 
+@@ -1055,7 +1063,7 @@ int br_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
+ 			if (!br_vlan_should_use(v))
+ 				continue;
+ 			err = __br_fdb_add(ndm, br, p, addr, nlh_flags, v->vid,
+-					   nfea_tb);
++					   nfea_tb, extack);
+ 			if (err)
+ 				goto out;
+ 		}
+@@ -1212,6 +1220,10 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+ 
+ 		if (swdev_notify)
+ 			flags |= BIT(BR_FDB_ADDED_BY_USER);
++
++		if (!p)
++			flags |= BIT(BR_FDB_LOCAL);
++
+ 		fdb = fdb_create(br, p, addr, vid, flags);
+ 		if (!fdb) {
+ 			err = -ENOMEM;
+@@ -1238,6 +1250,9 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+ 		if (swdev_notify)
+ 			set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags);
+ 
++		if (!p)
++			set_bit(BR_FDB_LOCAL, &fdb->flags);
++
+ 		if (modified)
+ 			fdb_notify(br, fdb, RTM_NEWNEIGH, swdev_notify);
+ 	}
+diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
+index 857a2c512ca39..1d87bf51f3840 100644
+--- a/net/bridge/br_if.c
++++ b/net/bridge/br_if.c
+@@ -615,6 +615,7 @@ int br_add_if(struct net_bridge *br, struct net_device *dev,
+ 
+ 	err = dev_set_allmulti(dev, 1);
+ 	if (err) {
++		br_multicast_del_port(p);
+ 		kfree(p);	/* kobject not yet init'd, manually free */
+ 		goto err1;
+ 	}
+@@ -728,6 +729,7 @@ err4:
+ err3:
+ 	sysfs_remove_link(br->ifobj, p->dev->name);
+ err2:
++	br_multicast_del_port(p);
+ 	kobject_put(&p->kobj);
+ 	dev_set_allmulti(dev, -1);
+ err1:
+diff --git a/net/bridge/netfilter/nf_conntrack_bridge.c b/net/bridge/netfilter/nf_conntrack_bridge.c
+index 8d033a75a766e..fdbed31585553 100644
+--- a/net/bridge/netfilter/nf_conntrack_bridge.c
++++ b/net/bridge/netfilter/nf_conntrack_bridge.c
+@@ -88,6 +88,12 @@ static int nf_br_ip_fragment(struct net *net, struct sock *sk,
+ 
+ 			skb = ip_fraglist_next(&iter);
+ 		}
++
++		if (!err)
++			return 0;
++
++		kfree_skb_list(iter.frag);
++
+ 		return err;
+ 	}
+ slow_path:
+diff --git a/net/core/link_watch.c b/net/core/link_watch.c
+index 75431ca9300fb..1a455847da54f 100644
+--- a/net/core/link_watch.c
++++ b/net/core/link_watch.c
+@@ -158,7 +158,7 @@ static void linkwatch_do_dev(struct net_device *dev)
+ 	clear_bit(__LINK_STATE_LINKWATCH_PENDING, &dev->state);
+ 
+ 	rfc2863_policy(dev);
+-	if (dev->flags & IFF_UP && netif_device_present(dev)) {
++	if (dev->flags & IFF_UP) {
+ 		if (netif_carrier_ok(dev))
+ 			dev_activate(dev);
+ 		else
+@@ -204,7 +204,8 @@ static void __linkwatch_run_queue(int urgent_only)
+ 		dev = list_first_entry(&wrk, struct net_device, link_watch_list);
+ 		list_del_init(&dev->link_watch_list);
+ 
+-		if (urgent_only && !linkwatch_urgent_event(dev)) {
++		if (!netif_device_present(dev) ||
++		    (urgent_only && !linkwatch_urgent_event(dev))) {
+ 			list_add_tail(&dev->link_watch_list, &lweventlist);
+ 			continue;
+ 		}
+diff --git a/net/ieee802154/socket.c b/net/ieee802154/socket.c
+index a45a0401adc50..c25f7617770c8 100644
+--- a/net/ieee802154/socket.c
++++ b/net/ieee802154/socket.c
+@@ -984,6 +984,11 @@ static const struct proto_ops ieee802154_dgram_ops = {
+ 	.sendpage	   = sock_no_sendpage,
+ };
+ 
++static void ieee802154_sock_destruct(struct sock *sk)
++{
++	skb_queue_purge(&sk->sk_receive_queue);
++}
++
+ /* Create a socket. Initialise the socket, blank the addresses
+  * set the state.
+  */
+@@ -1024,7 +1029,7 @@ static int ieee802154_create(struct net *net, struct socket *sock,
+ 	sock->ops = ops;
+ 
+ 	sock_init_data(sock, sk);
+-	/* FIXME: sk->sk_destruct */
++	sk->sk_destruct = ieee802154_sock_destruct;
+ 	sk->sk_family = PF_IEEE802154;
+ 
+ 	/* Checksums on by default */
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 6b3c558a4f232..00576bae183d3 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -803,10 +803,17 @@ static void igmp_gq_timer_expire(struct timer_list *t)
+ static void igmp_ifc_timer_expire(struct timer_list *t)
+ {
+ 	struct in_device *in_dev = from_timer(in_dev, t, mr_ifc_timer);
++	u32 mr_ifc_count;
+ 
+ 	igmpv3_send_cr(in_dev);
+-	if (in_dev->mr_ifc_count) {
+-		in_dev->mr_ifc_count--;
++restart:
++	mr_ifc_count = READ_ONCE(in_dev->mr_ifc_count);
++
++	if (mr_ifc_count) {
++		if (cmpxchg(&in_dev->mr_ifc_count,
++			    mr_ifc_count,
++			    mr_ifc_count - 1) != mr_ifc_count)
++			goto restart;
+ 		igmp_ifc_start_timer(in_dev,
+ 				     unsolicited_report_interval(in_dev));
+ 	}
+@@ -818,7 +825,7 @@ static void igmp_ifc_event(struct in_device *in_dev)
+ 	struct net *net = dev_net(in_dev->dev);
+ 	if (IGMP_V1_SEEN(in_dev) || IGMP_V2_SEEN(in_dev))
+ 		return;
+-	in_dev->mr_ifc_count = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
++	WRITE_ONCE(in_dev->mr_ifc_count, in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv);
+ 	igmp_ifc_start_timer(in_dev, 1);
+ }
+ 
+@@ -957,7 +964,7 @@ static bool igmp_heard_query(struct in_device *in_dev, struct sk_buff *skb,
+ 				in_dev->mr_qri;
+ 		}
+ 		/* cancel the interface change timer */
+-		in_dev->mr_ifc_count = 0;
++		WRITE_ONCE(in_dev->mr_ifc_count, 0);
+ 		if (del_timer(&in_dev->mr_ifc_timer))
+ 			__in_dev_put(in_dev);
+ 		/* clear deleted report items */
+@@ -1724,7 +1731,7 @@ void ip_mc_down(struct in_device *in_dev)
+ 		igmp_group_dropped(pmc);
+ 
+ #ifdef CONFIG_IP_MULTICAST
+-	in_dev->mr_ifc_count = 0;
++	WRITE_ONCE(in_dev->mr_ifc_count, 0);
+ 	if (del_timer(&in_dev->mr_ifc_timer))
+ 		__in_dev_put(in_dev);
+ 	in_dev->mr_gq_running = 0;
+@@ -1941,7 +1948,7 @@ static int ip_mc_del_src(struct in_device *in_dev, __be32 *pmca, int sfmode,
+ 		pmc->sfmode = MCAST_INCLUDE;
+ #ifdef CONFIG_IP_MULTICAST
+ 		pmc->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
+-		in_dev->mr_ifc_count = pmc->crcount;
++		WRITE_ONCE(in_dev->mr_ifc_count, pmc->crcount);
+ 		for (psf = pmc->sources; psf; psf = psf->sf_next)
+ 			psf->sf_crcount = 0;
+ 		igmp_ifc_event(pmc->interface);
+@@ -2120,7 +2127,7 @@ static int ip_mc_add_src(struct in_device *in_dev, __be32 *pmca, int sfmode,
+ 		/* else no filters; keep old mode for reports */
+ 
+ 		pmc->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
+-		in_dev->mr_ifc_count = pmc->crcount;
++		WRITE_ONCE(in_dev->mr_ifc_count, pmc->crcount);
+ 		for (psf = pmc->sources; psf; psf = psf->sf_next)
+ 			psf->sf_crcount = 0;
+ 		igmp_ifc_event(in_dev);
+diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
+index 6ea3dc2e42194..6274462b86b4b 100644
+--- a/net/ipv4/tcp_bbr.c
++++ b/net/ipv4/tcp_bbr.c
+@@ -1041,7 +1041,7 @@ static void bbr_init(struct sock *sk)
+ 	bbr->prior_cwnd = 0;
+ 	tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
+ 	bbr->rtt_cnt = 0;
+-	bbr->next_rtt_delivered = 0;
++	bbr->next_rtt_delivered = tp->delivered;
+ 	bbr->prev_ca_state = TCP_CA_Open;
+ 	bbr->packet_conservation = 0;
+ 
+diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
+index e24b7e2331cdd..0b0eb18919c09 100644
+--- a/net/sched/act_mirred.c
++++ b/net/sched/act_mirred.c
+@@ -261,6 +261,9 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
+ 			goto out;
+ 	}
+ 
++	/* All mirred/redirected skbs should clear previous ct info */
++	nf_reset_ct(skb2);
++
+ 	want_ingress = tcf_mirred_act_wants_ingress(m_eaction);
+ 
+ 	expects_nh = want_ingress || !m_mac_header_xmit;
+diff --git a/net/smc/smc_core.h b/net/smc/smc_core.h
+index f1e867ce2e630..4745a9a5a28f5 100644
+--- a/net/smc/smc_core.h
++++ b/net/smc/smc_core.h
+@@ -94,6 +94,7 @@ struct smc_link {
+ 	unsigned long		*wr_tx_mask;	/* bit mask of used indexes */
+ 	u32			wr_tx_cnt;	/* number of WR send buffers */
+ 	wait_queue_head_t	wr_tx_wait;	/* wait for free WR send buf */
++	atomic_t		wr_tx_refcnt;	/* tx refs to link */
+ 
+ 	struct smc_wr_buf	*wr_rx_bufs;	/* WR recv payload buffers */
+ 	struct ib_recv_wr	*wr_rx_ibs;	/* WR recv meta data */
+@@ -106,6 +107,7 @@ struct smc_link {
+ 
+ 	struct ib_reg_wr	wr_reg;		/* WR register memory region */
+ 	wait_queue_head_t	wr_reg_wait;	/* wait for wr_reg result */
++	atomic_t		wr_reg_refcnt;	/* reg refs to link */
+ 	enum smc_wr_reg_state	wr_reg_state;	/* state of wr_reg request */
+ 
+ 	u8			gid[SMC_GID_SIZE];/* gid matching used vlan id*/
+diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
+index 273eaf1bfe49a..2e7560eba9812 100644
+--- a/net/smc/smc_llc.c
++++ b/net/smc/smc_llc.c
+@@ -888,6 +888,7 @@ int smc_llc_cli_add_link(struct smc_link *link, struct smc_llc_qentry *qentry)
+ 	if (!rc)
+ 		goto out;
+ out_clear_lnk:
++	lnk_new->state = SMC_LNK_INACTIVE;
+ 	smcr_link_clear(lnk_new, false);
+ out_reject:
+ 	smc_llc_cli_add_link_reject(qentry);
+@@ -1184,6 +1185,7 @@ int smc_llc_srv_add_link(struct smc_link *link)
+ 		goto out_err;
+ 	return 0;
+ out_err:
++	link_new->state = SMC_LNK_INACTIVE;
+ 	smcr_link_clear(link_new, false);
+ 	return rc;
+ }
+@@ -1286,10 +1288,8 @@ static void smc_llc_process_cli_delete_link(struct smc_link_group *lgr)
+ 	del_llc->reason = 0;
+ 	smc_llc_send_message(lnk, &qentry->msg); /* response */
+ 
+-	if (smc_link_downing(&lnk_del->state)) {
+-		if (smc_switch_conns(lgr, lnk_del, false))
+-			smc_wr_tx_wait_no_pending_sends(lnk_del);
+-	}
++	if (smc_link_downing(&lnk_del->state))
++		smc_switch_conns(lgr, lnk_del, false);
+ 	smcr_link_clear(lnk_del, true);
+ 
+ 	active_links = smc_llc_active_link_count(lgr);
+@@ -1805,8 +1805,6 @@ void smc_llc_link_clear(struct smc_link *link, bool log)
+ 				    link->smcibdev->ibdev->name, link->ibport);
+ 	complete(&link->llc_testlink_resp);
+ 	cancel_delayed_work_sync(&link->llc_testlink_wrk);
+-	smc_wr_wakeup_reg_wait(link);
+-	smc_wr_wakeup_tx_wait(link);
+ }
+ 
+ /* register a new rtoken at the remote peer (for all links) */
+diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
+index 4532c16bf85ec..ff02952b3d03e 100644
+--- a/net/smc/smc_tx.c
++++ b/net/smc/smc_tx.c
+@@ -479,7 +479,7 @@ static int smc_tx_rdma_writes(struct smc_connection *conn,
+ /* Wakeup sndbuf consumers from any context (IRQ or process)
+  * since there is more data to transmit; usable snd_wnd as max transmit
+  */
+-static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
++static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
+ {
+ 	struct smc_cdc_producer_flags *pflags = &conn->local_tx_ctrl.prod_flags;
+ 	struct smc_link *link = conn->lnk;
+@@ -533,6 +533,22 @@ out_unlock:
+ 	return rc;
+ }
+ 
++static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
++{
++	struct smc_link *link = conn->lnk;
++	int rc = -ENOLINK;
++
++	if (!link)
++		return rc;
++
++	atomic_inc(&link->wr_tx_refcnt);
++	if (smc_link_usable(link))
++		rc = _smcr_tx_sndbuf_nonempty(conn);
++	if (atomic_dec_and_test(&link->wr_tx_refcnt))
++		wake_up_all(&link->wr_tx_wait);
++	return rc;
++}
++
+ static int smcd_tx_sndbuf_nonempty(struct smc_connection *conn)
+ {
+ 	struct smc_cdc_producer_flags *pflags = &conn->local_tx_ctrl.prod_flags;
+diff --git a/net/smc/smc_wr.c b/net/smc/smc_wr.c
+index 1e23cdd41eb1e..9dbe4804853e0 100644
+--- a/net/smc/smc_wr.c
++++ b/net/smc/smc_wr.c
+@@ -322,9 +322,12 @@ int smc_wr_reg_send(struct smc_link *link, struct ib_mr *mr)
+ 	if (rc)
+ 		return rc;
+ 
++	atomic_inc(&link->wr_reg_refcnt);
+ 	rc = wait_event_interruptible_timeout(link->wr_reg_wait,
+ 					      (link->wr_reg_state != POSTED),
+ 					      SMC_WR_REG_MR_WAIT_TIME);
++	if (atomic_dec_and_test(&link->wr_reg_refcnt))
++		wake_up_all(&link->wr_reg_wait);
+ 	if (!rc) {
+ 		/* timeout - terminate link */
+ 		smcr_link_down_cond_sched(link);
+@@ -566,10 +569,15 @@ void smc_wr_free_link(struct smc_link *lnk)
+ 		return;
+ 	ibdev = lnk->smcibdev->ibdev;
+ 
++	smc_wr_wakeup_reg_wait(lnk);
++	smc_wr_wakeup_tx_wait(lnk);
++
+ 	if (smc_wr_tx_wait_no_pending_sends(lnk))
+ 		memset(lnk->wr_tx_mask, 0,
+ 		       BITS_TO_LONGS(SMC_WR_BUF_CNT) *
+ 						sizeof(*lnk->wr_tx_mask));
++	wait_event(lnk->wr_reg_wait, (!atomic_read(&lnk->wr_reg_refcnt)));
++	wait_event(lnk->wr_tx_wait, (!atomic_read(&lnk->wr_tx_refcnt)));
+ 
+ 	if (lnk->wr_rx_dma_addr) {
+ 		ib_dma_unmap_single(ibdev, lnk->wr_rx_dma_addr,
+@@ -730,7 +738,9 @@ int smc_wr_create_link(struct smc_link *lnk)
+ 	memset(lnk->wr_tx_mask, 0,
+ 	       BITS_TO_LONGS(SMC_WR_BUF_CNT) * sizeof(*lnk->wr_tx_mask));
+ 	init_waitqueue_head(&lnk->wr_tx_wait);
++	atomic_set(&lnk->wr_tx_refcnt, 0);
+ 	init_waitqueue_head(&lnk->wr_reg_wait);
++	atomic_set(&lnk->wr_reg_refcnt, 0);
+ 	return rc;
+ 
+ dma_unmap:
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index 2700a63ab095e..3a056f8affd1d 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -356,11 +356,14 @@ static void virtio_vsock_event_fill(struct virtio_vsock *vsock)
+ 
+ static void virtio_vsock_reset_sock(struct sock *sk)
+ {
+-	lock_sock(sk);
++	/* vmci_transport.c doesn't take sk_lock here either.  At least we're
++	 * under vsock_table_lock so the sock cannot disappear while we're
++	 * executing.
++	 */
++
+ 	sk->sk_state = TCP_CLOSE;
+ 	sk->sk_err = ECONNRESET;
+ 	sk->sk_error_report(sk);
+-	release_sock(sk);
+ }
+ 
+ static void virtio_vsock_update_guest_cid(struct virtio_vsock *vsock)
+diff --git a/sound/soc/amd/acp-pcm-dma.c b/sound/soc/amd/acp-pcm-dma.c
+index 143155a840aca..cc1ce6f22caad 100644
+--- a/sound/soc/amd/acp-pcm-dma.c
++++ b/sound/soc/amd/acp-pcm-dma.c
+@@ -969,7 +969,7 @@ static int acp_dma_hw_params(struct snd_soc_component *component,
+ 
+ 	acp_set_sram_bank_state(rtd->acp_mmio, 0, true);
+ 	/* Save for runtime private data */
+-	rtd->dma_addr = substream->dma_buffer.addr;
++	rtd->dma_addr = runtime->dma_addr;
+ 	rtd->order = get_order(size);
+ 
+ 	/* Fill the page table entries in ACP SRAM */
+diff --git a/sound/soc/amd/raven/acp3x-pcm-dma.c b/sound/soc/amd/raven/acp3x-pcm-dma.c
+index 2447a1e6e913f..01b2834830604 100644
+--- a/sound/soc/amd/raven/acp3x-pcm-dma.c
++++ b/sound/soc/amd/raven/acp3x-pcm-dma.c
+@@ -288,7 +288,7 @@ static int acp3x_dma_hw_params(struct snd_soc_component *component,
+ 		pr_err("pinfo failed\n");
+ 	}
+ 	size = params_buffer_bytes(params);
+-	rtd->dma_addr = substream->dma_buffer.addr;
++	rtd->dma_addr = substream->runtime->dma_addr;
+ 	rtd->num_pages = (PAGE_ALIGN(size) >> PAGE_SHIFT);
+ 	config_acp3x_dma(rtd, substream->stream);
+ 	return 0;
+diff --git a/sound/soc/amd/renoir/acp3x-pdm-dma.c b/sound/soc/amd/renoir/acp3x-pdm-dma.c
+index 7b14d9a81b97a..7dcca3674295e 100644
+--- a/sound/soc/amd/renoir/acp3x-pdm-dma.c
++++ b/sound/soc/amd/renoir/acp3x-pdm-dma.c
+@@ -248,7 +248,7 @@ static int acp_pdm_dma_hw_params(struct snd_soc_component *component,
+ 		return -EINVAL;
+ 	size = params_buffer_bytes(params);
+ 	period_bytes = params_period_bytes(params);
+-	rtd->dma_addr = substream->dma_buffer.addr;
++	rtd->dma_addr = substream->runtime->dma_addr;
+ 	rtd->num_pages = (PAGE_ALIGN(size) >> PAGE_SHIFT);
+ 	config_acp_dma(rtd, substream->stream);
+ 	init_pdm_ring_buffer(MEM_WINDOW_START, size, period_bytes,
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index 7c6b10bc0b8c5..828dc78202e8b 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -403,7 +403,7 @@ static const struct regmap_config cs42l42_regmap = {
+ 	.use_single_write = true,
+ };
+ 
+-static DECLARE_TLV_DB_SCALE(adc_tlv, -9600, 100, false);
++static DECLARE_TLV_DB_SCALE(adc_tlv, -9700, 100, true);
+ static DECLARE_TLV_DB_SCALE(mixer_tlv, -6300, 100, true);
+ 
+ static const char * const cs42l42_hpf_freq_text[] = {
+@@ -423,34 +423,23 @@ static SOC_ENUM_SINGLE_DECL(cs42l42_wnf3_freq_enum, CS42L42_ADC_WNF_HPF_CTL,
+ 			    CS42L42_ADC_WNF_CF_SHIFT,
+ 			    cs42l42_wnf3_freq_text);
+ 
+-static const char * const cs42l42_wnf05_freq_text[] = {
+-	"280Hz", "315Hz", "350Hz", "385Hz",
+-	"420Hz", "455Hz", "490Hz", "525Hz"
+-};
+-
+-static SOC_ENUM_SINGLE_DECL(cs42l42_wnf05_freq_enum, CS42L42_ADC_WNF_HPF_CTL,
+-			    CS42L42_ADC_WNF_CF_SHIFT,
+-			    cs42l42_wnf05_freq_text);
+-
+ static const struct snd_kcontrol_new cs42l42_snd_controls[] = {
+ 	/* ADC Volume and Filter Controls */
+ 	SOC_SINGLE("ADC Notch Switch", CS42L42_ADC_CTL,
+-				CS42L42_ADC_NOTCH_DIS_SHIFT, true, false),
++				CS42L42_ADC_NOTCH_DIS_SHIFT, true, true),
+ 	SOC_SINGLE("ADC Weak Force Switch", CS42L42_ADC_CTL,
+ 				CS42L42_ADC_FORCE_WEAK_VCM_SHIFT, true, false),
+ 	SOC_SINGLE("ADC Invert Switch", CS42L42_ADC_CTL,
+ 				CS42L42_ADC_INV_SHIFT, true, false),
+ 	SOC_SINGLE("ADC Boost Switch", CS42L42_ADC_CTL,
+ 				CS42L42_ADC_DIG_BOOST_SHIFT, true, false),
+-	SOC_SINGLE_SX_TLV("ADC Volume", CS42L42_ADC_VOLUME,
+-				CS42L42_ADC_VOL_SHIFT, 0xA0, 0x6C, adc_tlv),
++	SOC_SINGLE_S8_TLV("ADC Volume", CS42L42_ADC_VOLUME, -97, 12, adc_tlv),
+ 	SOC_SINGLE("ADC WNF Switch", CS42L42_ADC_WNF_HPF_CTL,
+ 				CS42L42_ADC_WNF_EN_SHIFT, true, false),
+ 	SOC_SINGLE("ADC HPF Switch", CS42L42_ADC_WNF_HPF_CTL,
+ 				CS42L42_ADC_HPF_EN_SHIFT, true, false),
+ 	SOC_ENUM("HPF Corner Freq", cs42l42_hpf_freq_enum),
+ 	SOC_ENUM("WNF 3dB Freq", cs42l42_wnf3_freq_enum),
+-	SOC_ENUM("WNF 05dB Freq", cs42l42_wnf05_freq_enum),
+ 
+ 	/* DAC Volume and Filter Controls */
+ 	SOC_SINGLE("DACA Invert Switch", CS42L42_DAC_CTL1,
+@@ -669,15 +658,6 @@ static int cs42l42_pll_config(struct snd_soc_component *component)
+ 					CS42L42_FSYNC_PULSE_WIDTH_MASK,
+ 					CS42L42_FRAC1_VAL(fsync - 1) <<
+ 					CS42L42_FSYNC_PULSE_WIDTH_SHIFT);
+-			snd_soc_component_update_bits(component,
+-					CS42L42_ASP_FRM_CFG,
+-					CS42L42_ASP_5050_MASK,
+-					CS42L42_ASP_5050_MASK);
+-			/* Set the frame delay to 1.0 SCLK clocks */
+-			snd_soc_component_update_bits(component, CS42L42_ASP_FRM_CFG,
+-					CS42L42_ASP_FSD_MASK,
+-					CS42L42_ASP_FSD_1_0 <<
+-					CS42L42_ASP_FSD_SHIFT);
+ 			/* Set the sample rates (96k or lower) */
+ 			snd_soc_component_update_bits(component, CS42L42_FS_RATE_EN,
+ 					CS42L42_FS_EN_MASK,
+@@ -773,7 +753,18 @@ static int cs42l42_set_dai_fmt(struct snd_soc_dai *codec_dai, unsigned int fmt)
+ 	/* interface format */
+ 	switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ 	case SND_SOC_DAIFMT_I2S:
+-	case SND_SOC_DAIFMT_LEFT_J:
++		/*
++		 * 5050 mode, frame starts on falling edge of LRCLK,
++		 * frame delayed by 1.0 SCLKs
++		 */
++		snd_soc_component_update_bits(component,
++					      CS42L42_ASP_FRM_CFG,
++					      CS42L42_ASP_STP_MASK |
++					      CS42L42_ASP_5050_MASK |
++					      CS42L42_ASP_FSD_MASK,
++					      CS42L42_ASP_5050_MASK |
++					      (CS42L42_ASP_FSD_1_0 <<
++						CS42L42_ASP_FSD_SHIFT));
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/sound/soc/codecs/tlv320aic31xx.c b/sound/soc/codecs/tlv320aic31xx.c
+index 5ac7ce2644311..9e57e071bb8f6 100644
+--- a/sound/soc/codecs/tlv320aic31xx.c
++++ b/sound/soc/codecs/tlv320aic31xx.c
+@@ -35,6 +35,9 @@
+ 
+ #include "tlv320aic31xx.h"
+ 
++static int aic31xx_set_jack(struct snd_soc_component *component,
++                            struct snd_soc_jack *jack, void *data);
++
+ static const struct reg_default aic31xx_reg_defaults[] = {
+ 	{ AIC31XX_CLKMUX, 0x00 },
+ 	{ AIC31XX_PLLPR, 0x11 },
+@@ -1256,6 +1259,13 @@ static int aic31xx_power_on(struct snd_soc_component *component)
+ 		return ret;
+ 	}
+ 
++	/*
++	 * The jack detection configuration is in the same register
++	 * that is used to report jack detect status so is volatile
++	 * and not covered by the cache sync, restore it separately.
++	 */
++	aic31xx_set_jack(component, aic31xx->jack, NULL);
++
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+index aa5dd590ddd52..2784611196f06 100644
+--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
++++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+@@ -127,7 +127,7 @@ static void sst_fill_alloc_params(struct snd_pcm_substream *substream,
+ 	snd_pcm_uframes_t period_size;
+ 	ssize_t periodbytes;
+ 	ssize_t buffer_bytes = snd_pcm_lib_buffer_bytes(substream);
+-	u32 buffer_addr = virt_to_phys(substream->dma_buffer.area);
++	u32 buffer_addr = substream->runtime->dma_addr;
+ 
+ 	channels = substream->runtime->channels;
+ 	period_size = substream->runtime->period_size;
+@@ -233,7 +233,6 @@ static int sst_platform_alloc_stream(struct snd_pcm_substream *substream,
+ 	/* set codec params and inform SST driver the same */
+ 	sst_fill_pcm_params(substream, &param);
+ 	sst_fill_alloc_params(substream, &alloc_params);
+-	substream->runtime->dma_area = substream->dma_buffer.area;
+ 	str_params.sparams = param;
+ 	str_params.aparams = alloc_params;
+ 	str_params.codec = SST_CODEC_TYPE_PCM;
+diff --git a/sound/soc/sof/intel/hda-ipc.c b/sound/soc/sof/intel/hda-ipc.c
+index c91aa951df226..acfeca42604cd 100644
+--- a/sound/soc/sof/intel/hda-ipc.c
++++ b/sound/soc/sof/intel/hda-ipc.c
+@@ -107,8 +107,8 @@ void hda_dsp_ipc_get_reply(struct snd_sof_dev *sdev)
+ 	} else {
+ 		/* reply correct size ? */
+ 		if (reply.hdr.size != msg->reply_size &&
+-			/* getter payload is never known upfront */
+-			!(reply.hdr.cmd & SOF_IPC_GLB_PROBE)) {
++		    /* getter payload is never known upfront */
++		    ((reply.hdr.cmd & SOF_GLB_TYPE_MASK) != SOF_IPC_GLB_PROBE)) {
+ 			dev_err(sdev->dev, "error: reply expected %zu got %u bytes\n",
+ 				msg->reply_size, reply.hdr.size);
+ 			ret = -EINVAL;
+diff --git a/sound/soc/uniphier/aio-dma.c b/sound/soc/uniphier/aio-dma.c
+index 3c1628a3a1acd..3d9736e7381f8 100644
+--- a/sound/soc/uniphier/aio-dma.c
++++ b/sound/soc/uniphier/aio-dma.c
+@@ -198,7 +198,7 @@ static int uniphier_aiodma_mmap(struct snd_soc_component *component,
+ 	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+ 
+ 	return remap_pfn_range(vma, vma->vm_start,
+-			       substream->dma_buffer.addr >> PAGE_SHIFT,
++			       substream->runtime->dma_addr >> PAGE_SHIFT,
+ 			       vma->vm_end - vma->vm_start, vma->vm_page_prot);
+ }
+ 
+diff --git a/sound/soc/xilinx/xlnx_formatter_pcm.c b/sound/soc/xilinx/xlnx_formatter_pcm.c
+index 1d59fb668c77a..91afea9d5de67 100644
+--- a/sound/soc/xilinx/xlnx_formatter_pcm.c
++++ b/sound/soc/xilinx/xlnx_formatter_pcm.c
+@@ -452,8 +452,8 @@ static int xlnx_formatter_pcm_hw_params(struct snd_soc_component *component,
+ 
+ 	stream_data->buffer_size = size;
+ 
+-	low = lower_32_bits(substream->dma_buffer.addr);
+-	high = upper_32_bits(substream->dma_buffer.addr);
++	low = lower_32_bits(runtime->dma_addr);
++	high = upper_32_bits(runtime->dma_addr);
+ 	writel(low, stream_data->mmio + XLNX_AUD_BUFF_ADDR_LSB);
+ 	writel(high, stream_data->mmio + XLNX_AUD_BUFF_ADDR_MSB);
+ 
+diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
+index 5482a9b7ae2d3..d38284a3aaf0b 100644
+--- a/tools/lib/bpf/libbpf_probes.c
++++ b/tools/lib/bpf/libbpf_probes.c
+@@ -75,6 +75,9 @@ probe_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns,
+ 	case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
+ 		xattr.expected_attach_type = BPF_CGROUP_INET4_CONNECT;
+ 		break;
++	case BPF_PROG_TYPE_CGROUP_SOCKOPT:
++		xattr.expected_attach_type = BPF_CGROUP_GETSOCKOPT;
++		break;
+ 	case BPF_PROG_TYPE_SK_LOOKUP:
+ 		xattr.expected_attach_type = BPF_SK_LOOKUP;
+ 		break;
+@@ -104,7 +107,6 @@ probe_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns,
+ 	case BPF_PROG_TYPE_SK_REUSEPORT:
+ 	case BPF_PROG_TYPE_FLOW_DISSECTOR:
+ 	case BPF_PROG_TYPE_CGROUP_SYSCTL:
+-	case BPF_PROG_TYPE_CGROUP_SOCKOPT:
+ 	case BPF_PROG_TYPE_TRACING:
+ 	case BPF_PROG_TYPE_STRUCT_OPS:
+ 	case BPF_PROG_TYPE_EXT:


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-19 11:56 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-19 11:56 UTC (permalink / raw
  To: gentoo-commits

commit:     110ddc77a9146172044d99c17076ebd8cea59c84
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 19 11:56:06 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 19 11:56:06 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=110ddc77

Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                            |  4 ++
 2700_Bluetooth-usb-alt-3-for-WBS.patch | 84 ++++++++++++++++++++++++++++++++++
 2 files changed, 88 insertions(+)

diff --git a/0000_README b/0000_README
index c48c49d..3c66718 100644
--- a/0000_README
+++ b/0000_README
@@ -295,6 +295,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2700_Bluetooth-usb-alt-3-for-WBS.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git/commit/?id=55981d3541812234e687062926ff199c83f79a39
+Desc:   Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2700_Bluetooth-usb-alt-3-for-WBS.patch b/2700_Bluetooth-usb-alt-3-for-WBS.patch
new file mode 100644
index 0000000..e0a67ea
--- /dev/null
+++ b/2700_Bluetooth-usb-alt-3-for-WBS.patch
@@ -0,0 +1,84 @@
+From 55981d3541812234e687062926ff199c83f79a39 Mon Sep 17 00:00:00 2001
+From: Pauli Virtanen <pav@iki.fi>
+Date: Mon, 26 Jul 2021 21:02:06 +0300
+Subject: Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Some USB BT adapters don't satisfy the MTU requirement mentioned in
+commit e848dbd364ac ("Bluetooth: btusb: Add support USB ALT 3 for WBS")
+and have ALT 3 setting that produces no/garbled audio. Some adapters
+with larger MTU were also reported to have problems with ALT 3.
+
+Add a flag and check it and MTU before selecting ALT 3, falling back to
+ALT 1. Enable the flag for Realtek, restoring the previous behavior for
+non-Realtek devices.
+
+Tested with USB adapters (mtu<72, no/garbled sound with ALT3, ALT1
+works) BCM20702A1 0b05:17cb, CSR8510A10 0a12:0001, and (mtu>=72, ALT3
+works) RTL8761BU 0bda:8771, Intel AX200 8087:0029 (after disabling
+ALT6). Also got reports for (mtu>=72, ALT 3 reported to produce bad
+audio) Intel 8087:0a2b.
+
+Signed-off-by: Pauli Virtanen <pav@iki.fi>
+Fixes: e848dbd364ac ("Bluetooth: btusb: Add support USB ALT 3 for WBS")
+Tested-by: Michał Kępień <kernel@kempniu.pl>
+Tested-by: Jonathan Lampérth <jon@h4n.dev>
+Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
+---
+ drivers/bluetooth/btusb.c | 22 ++++++++++++++--------
+ 1 file changed, 14 insertions(+), 8 deletions(-)
+
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 488f110e17e27..2336f731dbc7e 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -528,6 +528,7 @@ static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
+ #define BTUSB_HW_RESET_ACTIVE	12
+ #define BTUSB_TX_WAIT_VND_EVT	13
+ #define BTUSB_WAKEUP_DISABLE	14
++#define BTUSB_USE_ALT3_FOR_WBS	15
+ 
+ struct btusb_data {
+ 	struct hci_dev       *hdev;
+@@ -1761,16 +1762,20 @@ static void btusb_work(struct work_struct *work)
+ 			/* Bluetooth USB spec recommends alt 6 (63 bytes), but
+ 			 * many adapters do not support it.  Alt 1 appears to
+ 			 * work for all adapters that do not have alt 6, and
+-			 * which work with WBS at all.
++			 * which work with WBS at all.  Some devices prefer
++			 * alt 3 (HCI payload >= 60 Bytes let air packet
++			 * data satisfy 60 bytes), requiring
++			 * MTU >= 3 (packets) * 25 (size) - 3 (headers) = 72
++			 * see also Core spec 5, vol 4, B 2.1.1 & Table 2.1.
+ 			 */
+-			new_alts = btusb_find_altsetting(data, 6) ? 6 : 1;
+-			/* Because mSBC frames do not need to be aligned to the
+-			 * SCO packet boundary. If support the Alt 3, use the
+-			 * Alt 3 for HCI payload >= 60 Bytes let air packet
+-			 * data satisfy 60 bytes.
+-			 */
+-			if (new_alts == 1 && btusb_find_altsetting(data, 3))
++			if (btusb_find_altsetting(data, 6))
++				new_alts = 6;
++			else if (btusb_find_altsetting(data, 3) &&
++				 hdev->sco_mtu >= 72 &&
++				 test_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags))
+ 				new_alts = 3;
++			else
++				new_alts = 1;
+ 		}
+ 
+ 		if (btusb_switch_alt_setting(hdev, new_alts) < 0)
+@@ -3882,6 +3887,7 @@ static int btusb_probe(struct usb_interface *intf,
+ 		 * (DEVICE_REMOTE_WAKEUP)
+ 		 */
+ 		set_bit(BTUSB_WAKEUP_DISABLE, &data->flags);
++		set_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags);
+ 	}
+ 
+ 	if (!reset)
+-- 
+cgit 1.2.3-1.el7
+


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-21 14:17 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-21 14:17 UTC (permalink / raw
  To: gentoo-commits

commit:     d5f1e03220d15cd5875e2577cb56875b6a96e815
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 21 14:17:04 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Aug 21 14:17:04 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d5f1e032

Update to CPU Optimization patch (USE=experiental)

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5010_enable-cpu-optimizations-universal.patch | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index c45d13b..e37528f 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -219,7 +219,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MZEN3
 +	bool "AMD Zen 3"
-+	depends on ( CC_IS_GCC && GCC_VERSION >= 100300 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION >= 100300) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	help
 +	  Select this for AMD Family 19h Zen 3 processors.
 +
@@ -378,7 +378,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MCOOPERLAKE
 +	bool "Intel Cooper Lake"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 100100 ) || ( CC_IS_CLANG && CLANG_VERSION >= 100000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
 +	select X86_P6_NOP
 +	help
 +
@@ -388,7 +388,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MTIGERLAKE
 +	bool "Intel Tiger Lake"
-+	depends on  ( CC_IS_GCC && GCC_VERSION > 100100 ) || ( CC_IS_CLANG && CLANG_VERSION >= 100000 )
++	depends on  (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
 +	select X86_P6_NOP
 +	help
 +
@@ -398,7 +398,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MSAPPHIRERAPIDS
 +	bool "Intel Sapphire Rapids"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	select X86_P6_NOP
 +	help
 +
@@ -408,7 +408,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MROCKETLAKE
 +	bool "Intel Rocket Lake"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	select X86_P6_NOP
 +	help
 +
@@ -418,7 +418,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MALDERLAKE
 +	bool "Intel Alder Lake"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	select X86_P6_NOP
 +	help
 +
@@ -435,7 +435,7 @@ index 814fe0d349b0..8acf6519d279 100644
  
 +config GENERIC_CPU2
 +	bool "Generic-x86-64-v2"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && LANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	depends on X86_64
 +	help
 +	  Generic x86-64 CPU.
@@ -443,7 +443,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config GENERIC_CPU3
 +	bool "Generic-x86-64-v3"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && LANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	depends on X86_64
 +	help
 +	  Generic x86-64-v3 CPU with v3 instructions.
@@ -451,7 +451,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config GENERIC_CPU4
 +	bool "Generic-x86-64-v4"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && LANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	depends on X86_64
 +	help
 +	  Generic x86-64 CPU with v4 instructions.


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-24 21:32 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-24 21:32 UTC (permalink / raw
  To: gentoo-commits

commit:     f9a9acd1053ef9c54a3d22aa6f280b237e9452f4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug 24 19:53:28 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug 24 21:31:58 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f9a9acd1

Add CONFIG option to print firmware info

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 864f86a..fd8f955 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-08-09 07:18:54.945580285 -0400
-+++ b/distro/Kconfig	2021-08-09 19:15:34.418191114 -0400
-@@ -0,0 +1,267 @@
+--- /dev/null	2021-08-24 15:34:24.700702871 -0400
++++ b/distro/Kconfig	2021-08-24 15:49:16.965525424 -0400
+@@ -0,0 +1,281 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -275,6 +275,20 @@
 +	select CPU_SW_DOMAIN_PAN
 +
 +endif
++
++config GENTOO_PRINT_FIRMWARE_INFO
++	bool "Print firmware information that the kernel attempts to load"
++
++	depends on GENTOO_LINUX
++	default n 
++
++	help
++		Enable this option to print information about firmware that the kernel
++		is attempting to load.  This information can be accessible via the
++		dmesg command-line utility
++
++		See the settings that become available for more details and fine-tuning.
++
 +endmenu
 diff --git a/security/Kconfig b/security/Kconfig
 index 7561f6f99..01f0bf73f 100644


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-24 21:33 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-24 21:33 UTC (permalink / raw
  To: gentoo-commits

commit:     9b9138860ef32edd768318afe37f6ef014e0c21c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug 24 21:32:51 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug 24 21:32:51 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9b913886

Add suport for printing firmware info on load attempt

This requires the kernel config parameter:
CONFIG_GENTOO_PRINT_FIRMWARE_INFO
located under the GENTOO_LINUX distro config options

Thanks to Georgy Yakovlev

Bug: https://bugs.gentoo.org/732852

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                               |  4 ++++
 3000_Support-printing-firmware-info.patch | 14 ++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/0000_README b/0000_README
index 3c66718..303d4fa 100644
--- a/0000_README
+++ b/0000_README
@@ -307,6 +307,10 @@ Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL
 
+Patch:  3000_Support-printing-firmware-info.patch
+From:   https://bugs.gentoo.org/732852
+Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.

diff --git a/3000_Support-printing-firmware-info.patch b/3000_Support-printing-firmware-info.patch
new file mode 100644
index 0000000..a630cfb
--- /dev/null
+++ b/3000_Support-printing-firmware-info.patch
@@ -0,0 +1,14 @@
+--- a/drivers/base/firmware_loader/main.c	2021-08-24 15:42:07.025482085 -0400
++++ b/drivers/base/firmware_loader/main.c	2021-08-24 15:44:40.782975313 -0400
+@@ -809,6 +809,11 @@ _request_firmware(const struct firmware
+ 
+ 	ret = _request_firmware_prepare(&fw, name, device, buf, size,
+ 					offset, opt_flags);
++
++#ifdef CONFIG_GENTOO_PRINT_FIRMWARE_INFO
++        printk(KERN_NOTICE "Loading firmware: %s\n", name);
++#endif
++
+ 	if (ret <= 0) /* error or already assigned */
+ 		goto out;
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-25 16:23 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-25 16:23 UTC (permalink / raw
  To: gentoo-commits

commit:     0edf32ac2f1a2938f11af7b67096a9dc1e7da277
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 25 16:20:53 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 25 16:23:37 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0edf32ac

Change CONFIG_GENTOO_PRINT_FIRMWARE_INFO to y

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index fd8f955..d2175f0 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -280,7 +280,7 @@
 +	bool "Print firmware information that the kernel attempts to load"
 +
 +	depends on GENTOO_LINUX
-+	default n 
++	default y
 +
 +	help
 +		Enable this option to print information about firmware that the kernel


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-08-26 14:34 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-08-26 14:34 UTC (permalink / raw
  To: gentoo-commits

commit:     98ebfc29be0ae2eb5d885132485e9bbb277f4fa1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 26 14:34:21 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 26 14:34:21 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=98ebfc29

Linux patch 5.10.61

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1060_linux-5.10.61.patch | 3599 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3603 insertions(+)

diff --git a/0000_README b/0000_README
index 303d4fa..812aa2d 100644
--- a/0000_README
+++ b/0000_README
@@ -283,6 +283,10 @@ Patch:  1059_linux-5.10.60.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.60
 
+Patch:  1060_linux-5.10.61.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.61
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1060_linux-5.10.61.patch b/1060_linux-5.10.61.patch
new file mode 100644
index 0000000..564e5c8
--- /dev/null
+++ b/1060_linux-5.10.61.patch
@@ -0,0 +1,3599 @@
+diff --git a/Makefile b/Makefile
+index 7f25cfee84ece..a6ab3263f81df 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 60
++SUBLEVEL = 61
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/am43x-epos-evm.dts b/arch/arm/boot/dts/am43x-epos-evm.dts
+index 8b696107eef8c..d2aebdbc7e0f9 100644
+--- a/arch/arm/boot/dts/am43x-epos-evm.dts
++++ b/arch/arm/boot/dts/am43x-epos-evm.dts
+@@ -582,7 +582,7 @@
+ 	status = "okay";
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&i2c0_pins>;
+-	clock-frequency = <400000>;
++	clock-frequency = <100000>;
+ 
+ 	tps65218: tps65218@24 {
+ 		reg = <0x24>;
+diff --git a/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi b/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
+index 4f38aeecadb3a..42abea3ea92cf 100644
+--- a/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
++++ b/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
+@@ -755,14 +755,14 @@
+ 			status = "disabled";
+ 		};
+ 
+-		vica: intc@10140000 {
++		vica: interrupt-controller@10140000 {
+ 			compatible = "arm,versatile-vic";
+ 			interrupt-controller;
+ 			#interrupt-cells = <1>;
+ 			reg = <0x10140000 0x20>;
+ 		};
+ 
+-		vicb: intc@10140020 {
++		vicb: interrupt-controller@10140020 {
+ 			compatible = "arm,versatile-vic";
+ 			interrupt-controller;
+ 			#interrupt-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts b/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts
+index 5969b5cfdc85a..cb82864a90ef3 100644
+--- a/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts
++++ b/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /* Copyright (c) 2015, LGE Inc. All rights reserved.
+  * Copyright (c) 2016, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2021, Petr Vorel <petr.vorel@gmail.com>
+  */
+ 
+ /dts-v1/;
+@@ -15,6 +16,9 @@
+ 	qcom,board-id = <0xb64 0>;
+ 	qcom,pmic-id = <0x10009 0x1000A 0x0 0x0>;
+ 
++	/* Bullhead firmware doesn't support PSCI */
++	/delete-node/ psci;
++
+ 	aliases {
+ 		serial0 = &blsp1_uart2;
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index 888dc23a530e6..ad6561843ba28 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -564,7 +564,7 @@
+ 		left_spkr: wsa8810-left{
+ 			compatible = "sdw10217211000";
+ 			reg = <0 3>;
+-			powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_HIGH>;
++			powerdown-gpios = <&wcdgpio 1 GPIO_ACTIVE_HIGH>;
+ 			#thermal-sensor-cells = <0>;
+ 			sound-name-prefix = "SpkrLeft";
+ 			#sound-dai-cells = <0>;
+@@ -572,7 +572,7 @@
+ 
+ 		right_spkr: wsa8810-right{
+ 			compatible = "sdw10217211000";
+-			powerdown-gpios = <&wcdgpio 3 GPIO_ACTIVE_HIGH>;
++			powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_HIGH>;
+ 			reg = <0 4>;
+ 			#thermal-sensor-cells = <0>;
+ 			sound-name-prefix = "SpkrRight";
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 1ae7a76ae97b7..ca1a105e3b5d4 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -558,9 +558,12 @@ static void zpci_cleanup_bus_resources(struct zpci_dev *zdev)
+ 
+ int pcibios_add_device(struct pci_dev *pdev)
+ {
++	struct zpci_dev *zdev = to_zpci(pdev);
+ 	struct resource *res;
+ 	int i;
+ 
++	/* The pdev has a reference to the zdev via its bus */
++	zpci_zdev_get(zdev);
+ 	if (pdev->is_physfn)
+ 		pdev->no_vf_scan = 1;
+ 
+@@ -580,7 +583,10 @@ int pcibios_add_device(struct pci_dev *pdev)
+ 
+ void pcibios_release_device(struct pci_dev *pdev)
+ {
++	struct zpci_dev *zdev = to_zpci(pdev);
++
+ 	zpci_unmap_resources(pdev);
++	zpci_zdev_put(zdev);
+ }
+ 
+ int pcibios_enable_device(struct pci_dev *pdev, int mask)
+diff --git a/arch/s390/pci/pci_bus.h b/arch/s390/pci/pci_bus.h
+index f8dfac0b5b713..55c9488e504cc 100644
+--- a/arch/s390/pci/pci_bus.h
++++ b/arch/s390/pci/pci_bus.h
+@@ -16,6 +16,11 @@ static inline void zpci_zdev_put(struct zpci_dev *zdev)
+ 	kref_put(&zdev->kref, zpci_release_device);
+ }
+ 
++static inline void zpci_zdev_get(struct zpci_dev *zdev)
++{
++	kref_get(&zdev->kref);
++}
++
+ int zpci_alloc_domain(int domain);
+ void zpci_free_domain(int domain);
+ int zpci_setup_bus_resources(struct zpci_dev *zdev,
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 6ab42cdcb8a44..812585986bb82 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -7023,6 +7023,11 @@ static void init_emulate_ctxt(struct kvm_vcpu *vcpu)
+ 	BUILD_BUG_ON(HF_SMM_MASK != X86EMUL_SMM_MASK);
+ 	BUILD_BUG_ON(HF_SMM_INSIDE_NMI_MASK != X86EMUL_SMM_INSIDE_NMI_MASK);
+ 
++	ctxt->interruptibility = 0;
++	ctxt->have_exception = false;
++	ctxt->exception.vector = -1;
++	ctxt->perm_ok = false;
++
+ 	init_decode_cache(ctxt);
+ 	vcpu->arch.emulate_regs_need_sync_from_vcpu = false;
+ }
+@@ -7338,6 +7343,37 @@ static bool is_vmware_backdoor_opcode(struct x86_emulate_ctxt *ctxt)
+ 	return false;
+ }
+ 
++/*
++ * Decode to be emulated instruction. Return EMULATION_OK if success.
++ */
++int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type,
++				    void *insn, int insn_len)
++{
++	int r = EMULATION_OK;
++	struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt;
++
++	init_emulate_ctxt(vcpu);
++
++	/*
++	 * We will reenter on the same instruction since we do not set
++	 * complete_userspace_io. This does not handle watchpoints yet,
++	 * those would be handled in the emulate_ops.
++	 */
++	if (!(emulation_type & EMULTYPE_SKIP) &&
++	    kvm_vcpu_check_breakpoint(vcpu, &r))
++		return r;
++
++	ctxt->ud = emulation_type & EMULTYPE_TRAP_UD;
++
++	r = x86_decode_insn(ctxt, insn, insn_len);
++
++	trace_kvm_emulate_insn_start(vcpu);
++	++vcpu->stat.insn_emulation;
++
++	return r;
++}
++EXPORT_SYMBOL_GPL(x86_decode_emulated_instruction);
++
+ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+ 			    int emulation_type, void *insn, int insn_len)
+ {
+@@ -7357,32 +7393,12 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+ 	 */
+ 	write_fault_to_spt = vcpu->arch.write_fault_to_shadow_pgtable;
+ 	vcpu->arch.write_fault_to_shadow_pgtable = false;
+-	kvm_clear_exception_queue(vcpu);
+ 
+ 	if (!(emulation_type & EMULTYPE_NO_DECODE)) {
+-		init_emulate_ctxt(vcpu);
+-
+-		/*
+-		 * We will reenter on the same instruction since
+-		 * we do not set complete_userspace_io.  This does not
+-		 * handle watchpoints yet, those would be handled in
+-		 * the emulate_ops.
+-		 */
+-		if (!(emulation_type & EMULTYPE_SKIP) &&
+-		    kvm_vcpu_check_breakpoint(vcpu, &r))
+-			return r;
+-
+-		ctxt->interruptibility = 0;
+-		ctxt->have_exception = false;
+-		ctxt->exception.vector = -1;
+-		ctxt->perm_ok = false;
+-
+-		ctxt->ud = emulation_type & EMULTYPE_TRAP_UD;
+-
+-		r = x86_decode_insn(ctxt, insn, insn_len);
++		kvm_clear_exception_queue(vcpu);
+ 
+-		trace_kvm_emulate_insn_start(vcpu);
+-		++vcpu->stat.insn_emulation;
++		r = x86_decode_emulated_instruction(vcpu, emulation_type,
++						    insn, insn_len);
+ 		if (r != EMULATION_OK)  {
+ 			if ((emulation_type & EMULTYPE_TRAP_UD) ||
+ 			    (emulation_type & EMULTYPE_TRAP_UD_FORCED)) {
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index 2249a7d7ca27f..2bff44f1efec8 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -272,6 +272,8 @@ bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn,
+ 					  int page_num);
+ bool kvm_vector_hashing_enabled(void);
+ void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_code);
++int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type,
++				    void *insn, int insn_len);
+ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+ 			    int emulation_type, void *insn, int insn_len);
+ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index c3d8d44f28d75..159b57c6dc4df 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -3061,8 +3061,10 @@ static int sysc_probe(struct platform_device *pdev)
+ 		return error;
+ 
+ 	error = sysc_check_active_timer(ddata);
+-	if (error == -EBUSY)
++	if (error == -ENXIO)
+ 		ddata->reserved = true;
++	else if (error)
++		return error;
+ 
+ 	error = sysc_get_clocks(ddata);
+ 	if (error)
+diff --git a/drivers/clk/imx/clk-imx6q.c b/drivers/clk/imx/clk-imx6q.c
+index f444bbe8244c2..7d07dd92a7b44 100644
+--- a/drivers/clk/imx/clk-imx6q.c
++++ b/drivers/clk/imx/clk-imx6q.c
+@@ -974,6 +974,6 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
+ 			       hws[IMX6QDL_CLK_PLL3_USB_OTG]->clk);
+ 	}
+ 
+-	imx_register_uart_clocks(1);
++	imx_register_uart_clocks(2);
+ }
+ CLK_OF_DECLARE(imx6q, "fsl,imx6q-ccm", imx6q_clocks_init);
+diff --git a/drivers/clk/qcom/gdsc.c b/drivers/clk/qcom/gdsc.c
+index 51ed640e527b4..4ece326ea233e 100644
+--- a/drivers/clk/qcom/gdsc.c
++++ b/drivers/clk/qcom/gdsc.c
+@@ -357,27 +357,43 @@ static int gdsc_init(struct gdsc *sc)
+ 	if (on < 0)
+ 		return on;
+ 
+-	/*
+-	 * Votable GDSCs can be ON due to Vote from other masters.
+-	 * If a Votable GDSC is ON, make sure we have a Vote.
+-	 */
+-	if ((sc->flags & VOTABLE) && on)
+-		gdsc_enable(&sc->pd);
++	if (on) {
++		/* The regulator must be on, sync the kernel state */
++		if (sc->rsupply) {
++			ret = regulator_enable(sc->rsupply);
++			if (ret < 0)
++				return ret;
++		}
+ 
+-	/*
+-	 * Make sure the retain bit is set if the GDSC is already on, otherwise
+-	 * we end up turning off the GDSC and destroying all the register
+-	 * contents that we thought we were saving.
+-	 */
+-	if ((sc->flags & RETAIN_FF_ENABLE) && on)
+-		gdsc_retain_ff_on(sc);
++		/*
++		 * Votable GDSCs can be ON due to Vote from other masters.
++		 * If a Votable GDSC is ON, make sure we have a Vote.
++		 */
++		if (sc->flags & VOTABLE) {
++			ret = regmap_update_bits(sc->regmap, sc->gdscr,
++						 SW_COLLAPSE_MASK, val);
++			if (ret)
++				return ret;
++		}
++
++		/* Turn on HW trigger mode if supported */
++		if (sc->flags & HW_CTRL) {
++			ret = gdsc_hwctrl(sc, true);
++			if (ret < 0)
++				return ret;
++		}
+ 
+-	/* If ALWAYS_ON GDSCs are not ON, turn them ON */
+-	if (sc->flags & ALWAYS_ON) {
+-		if (!on)
+-			gdsc_enable(&sc->pd);
++		/*
++		 * Make sure the retain bit is set if the GDSC is already on,
++		 * otherwise we end up turning off the GDSC and destroying all
++		 * the register contents that we thought we were saving.
++		 */
++		if (sc->flags & RETAIN_FF_ENABLE)
++			gdsc_retain_ff_on(sc);
++	} else if (sc->flags & ALWAYS_ON) {
++		/* If ALWAYS_ON GDSCs are not ON, turn them ON */
++		gdsc_enable(&sc->pd);
+ 		on = true;
+-		sc->pd.flags |= GENPD_FLAG_ALWAYS_ON;
+ 	}
+ 
+ 	if (on || (sc->pwrsts & PWRSTS_RET))
+@@ -385,6 +401,8 @@ static int gdsc_init(struct gdsc *sc)
+ 	else
+ 		gdsc_clear_mem_on(sc);
+ 
++	if (sc->flags & ALWAYS_ON)
++		sc->pd.flags |= GENPD_FLAG_ALWAYS_ON;
+ 	if (!sc->pd.power_off)
+ 		sc->pd.power_off = gdsc_disable;
+ 	if (!sc->pd.power_on)
+diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
+index e4782f562e7a9..2de7fd18f66a1 100644
+--- a/drivers/cpufreq/armada-37xx-cpufreq.c
++++ b/drivers/cpufreq/armada-37xx-cpufreq.c
+@@ -102,7 +102,11 @@ struct armada_37xx_dvfs {
+ };
+ 
+ static struct armada_37xx_dvfs armada_37xx_dvfs[] = {
+-	{.cpu_freq_max = 1200*1000*1000, .divider = {1, 2, 4, 6} },
++	/*
++	 * The cpufreq scaling for 1.2 GHz variant of the SOC is currently
++	 * unstable because we do not know how to configure it properly.
++	 */
++	/* {.cpu_freq_max = 1200*1000*1000, .divider = {1, 2, 4, 6} }, */
+ 	{.cpu_freq_max = 1000*1000*1000, .divider = {1, 2, 4, 5} },
+ 	{.cpu_freq_max = 800*1000*1000,  .divider = {1, 2, 3, 4} },
+ 	{.cpu_freq_max = 600*1000*1000,  .divider = {2, 4, 5, 6} },
+diff --git a/drivers/dma/of-dma.c b/drivers/dma/of-dma.c
+index 8a4f608904b98..4be433482053b 100644
+--- a/drivers/dma/of-dma.c
++++ b/drivers/dma/of-dma.c
+@@ -67,8 +67,12 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
+ 		return NULL;
+ 
+ 	ofdma_target = of_dma_find_controller(&dma_spec_target);
+-	if (!ofdma_target)
+-		return NULL;
++	if (!ofdma_target) {
++		ofdma->dma_router->route_free(ofdma->dma_router->dev,
++					      route_data);
++		chan = ERR_PTR(-EPROBE_DEFER);
++		goto err;
++	}
+ 
+ 	chan = ofdma_target->of_dma_xlate(&dma_spec_target, ofdma_target);
+ 	if (IS_ERR_OR_NULL(chan)) {
+@@ -79,6 +83,7 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
+ 		chan->route_data = route_data;
+ 	}
+ 
++err:
+ 	/*
+ 	 * Need to put the node back since the ofdma->of_dma_route_allocate
+ 	 * has taken it for generating the new, translated dma_spec
+diff --git a/drivers/dma/sh/usb-dmac.c b/drivers/dma/sh/usb-dmac.c
+index 8f7ceb698226c..1cc06900153e4 100644
+--- a/drivers/dma/sh/usb-dmac.c
++++ b/drivers/dma/sh/usb-dmac.c
+@@ -855,8 +855,8 @@ static int usb_dmac_probe(struct platform_device *pdev)
+ 
+ error:
+ 	of_dma_controller_free(pdev->dev.of_node);
+-	pm_runtime_put(&pdev->dev);
+ error_pm:
++	pm_runtime_put(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	return ret;
+ }
+diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
+index 79777550a6ffc..9ffdbeec436bd 100644
+--- a/drivers/dma/xilinx/xilinx_dma.c
++++ b/drivers/dma/xilinx/xilinx_dma.c
+@@ -394,6 +394,7 @@ struct xilinx_dma_tx_descriptor {
+  * @genlock: Support genlock mode
+  * @err: Channel has errors
+  * @idle: Check for channel idle
++ * @terminating: Check for channel being synchronized by user
+  * @tasklet: Cleanup work after irq
+  * @config: Device configuration info
+  * @flush_on_fsync: Flush on Frame sync
+@@ -431,6 +432,7 @@ struct xilinx_dma_chan {
+ 	bool genlock;
+ 	bool err;
+ 	bool idle;
++	bool terminating;
+ 	struct tasklet_struct tasklet;
+ 	struct xilinx_vdma_config config;
+ 	bool flush_on_fsync;
+@@ -1049,6 +1051,13 @@ static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
+ 		/* Run any dependencies, then free the descriptor */
+ 		dma_run_dependencies(&desc->async_tx);
+ 		xilinx_dma_free_tx_descriptor(chan, desc);
++
++		/*
++		 * While we ran a callback the user called a terminate function,
++		 * which takes care of cleaning up any remaining descriptors
++		 */
++		if (chan->terminating)
++			break;
+ 	}
+ 
+ 	spin_unlock_irqrestore(&chan->lock, flags);
+@@ -1965,6 +1974,8 @@ static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+ 	if (desc->cyclic)
+ 		chan->cyclic = true;
+ 
++	chan->terminating = false;
++
+ 	spin_unlock_irqrestore(&chan->lock, flags);
+ 
+ 	return cookie;
+@@ -2436,6 +2447,7 @@ static int xilinx_dma_terminate_all(struct dma_chan *dchan)
+ 
+ 	xilinx_dma_chan_reset(chan);
+ 	/* Remove and free all of the descriptors in the lists */
++	chan->terminating = true;
+ 	xilinx_dma_free_descriptors(chan);
+ 	chan->idle = true;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index fb15e8b5af32f..ac3a88197b2fc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1271,6 +1271,16 @@ static bool is_raven_kicker(struct amdgpu_device *adev)
+ 		return false;
+ }
+ 
++static bool check_if_enlarge_doorbell_range(struct amdgpu_device *adev)
++{
++	if ((adev->asic_type == CHIP_RENOIR) &&
++	    (adev->gfx.me_fw_version >= 0x000000a5) &&
++	    (adev->gfx.me_feature_version >= 52))
++		return true;
++	else
++		return false;
++}
++
+ static void gfx_v9_0_check_if_need_gfxoff(struct amdgpu_device *adev)
+ {
+ 	if (gfx_v9_0_should_disable_gfxoff(adev->pdev))
+@@ -3619,7 +3629,16 @@ static int gfx_v9_0_kiq_init_register(struct amdgpu_ring *ring)
+ 	if (ring->use_doorbell) {
+ 		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER,
+ 					(adev->doorbell_index.kiq * 2) << 2);
+-		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
++		/* If GC has entered CGPG, ringing doorbell > first page
++		 * doesn't wakeup GC. Enlarge CP_MEC_DOORBELL_RANGE_UPPER to
++		 * workaround this issue. And this change has to align with firmware
++		 * update.
++		 */
++		if (check_if_enlarge_doorbell_range(adev))
++			WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
++					(adev->doorbell.size - 4));
++		else
++			WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
+ 					(adev->doorbell_index.userqueue_end * 2) << 2);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+index 1c6e401dd4cce..0eba391e597fd 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+@@ -66,9 +66,11 @@ int rn_get_active_display_cnt_wa(
+ 	for (i = 0; i < context->stream_count; i++) {
+ 		const struct dc_stream_state *stream = context->streams[i];
+ 
++		/* Extend the WA to DP for Linux*/
+ 		if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A ||
+ 				stream->signal == SIGNAL_TYPE_DVI_SINGLE_LINK ||
+-				stream->signal == SIGNAL_TYPE_DVI_DUAL_LINK)
++				stream->signal == SIGNAL_TYPE_DVI_DUAL_LINK ||
++				stream->signal == SIGNAL_TYPE_DISPLAY_PORT)
+ 			tmds_present = true;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c
+index d8b18c515d067..e3cfb442a0620 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c
+@@ -357,7 +357,7 @@ void optc2_lock_doublebuffer_enable(struct timing_generator *optc)
+ 
+ 	REG_UPDATE_2(OTG_GLOBAL_CONTROL1,
+ 			MASTER_UPDATE_LOCK_DB_X,
+-			h_blank_start - 200 - 1,
++			(h_blank_start - 200 - 1) / optc1->opp_count,
+ 			MASTER_UPDATE_LOCK_DB_Y,
+ 			v_blank_start - 1);
+ }
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+index 3064eac1a7507..e747ff7ba3dce 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+@@ -34,6 +34,7 @@
+ 
+ #define DISP_AAL_EN				0x0000
+ #define DISP_AAL_SIZE				0x0030
++#define DISP_AAL_OUTPUT_SIZE			0x04d8
+ 
+ #define DISP_CCORR_EN				0x0000
+ #define CCORR_EN				BIT(0)
+@@ -180,7 +181,8 @@ static void mtk_aal_config(struct mtk_ddp_comp *comp, unsigned int w,
+ 			   unsigned int h, unsigned int vrefresh,
+ 			   unsigned int bpc, struct cmdq_pkt *cmdq_pkt)
+ {
+-	mtk_ddp_write(cmdq_pkt, h << 16 | w, comp, DISP_AAL_SIZE);
++	mtk_ddp_write(cmdq_pkt, w << 16 | h, comp, DISP_AAL_SIZE);
++	mtk_ddp_write(cmdq_pkt, w << 16 | h, comp, DISP_AAL_OUTPUT_SIZE);
+ }
+ 
+ static void mtk_aal_start(struct mtk_ddp_comp *comp)
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
+index 1d9e00b694625..5aa52b7afeecb 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
++++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
+@@ -7,6 +7,7 @@
+ #define MTK_DRM_DDP_COMP_H
+ 
+ #include <linux/io.h>
++#include <linux/soc/mediatek/mtk-mmsys.h>
+ 
+ struct device;
+ struct device_node;
+@@ -35,39 +36,6 @@ enum mtk_ddp_comp_type {
+ 	MTK_DDP_COMP_TYPE_MAX,
+ };
+ 
+-enum mtk_ddp_comp_id {
+-	DDP_COMPONENT_AAL0,
+-	DDP_COMPONENT_AAL1,
+-	DDP_COMPONENT_BLS,
+-	DDP_COMPONENT_CCORR,
+-	DDP_COMPONENT_COLOR0,
+-	DDP_COMPONENT_COLOR1,
+-	DDP_COMPONENT_DITHER,
+-	DDP_COMPONENT_DPI0,
+-	DDP_COMPONENT_DPI1,
+-	DDP_COMPONENT_DSI0,
+-	DDP_COMPONENT_DSI1,
+-	DDP_COMPONENT_DSI2,
+-	DDP_COMPONENT_DSI3,
+-	DDP_COMPONENT_GAMMA,
+-	DDP_COMPONENT_OD0,
+-	DDP_COMPONENT_OD1,
+-	DDP_COMPONENT_OVL0,
+-	DDP_COMPONENT_OVL_2L0,
+-	DDP_COMPONENT_OVL_2L1,
+-	DDP_COMPONENT_OVL1,
+-	DDP_COMPONENT_PWM0,
+-	DDP_COMPONENT_PWM1,
+-	DDP_COMPONENT_PWM2,
+-	DDP_COMPONENT_RDMA0,
+-	DDP_COMPONENT_RDMA1,
+-	DDP_COMPONENT_RDMA2,
+-	DDP_COMPONENT_UFOE,
+-	DDP_COMPONENT_WDMA0,
+-	DDP_COMPONENT_WDMA1,
+-	DDP_COMPONENT_ID_MAX,
+-};
+-
+ struct mtk_ddp_comp;
+ struct cmdq_pkt;
+ struct mtk_ddp_comp_funcs {
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index 1e7c17989084e..fb911b6c418f2 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -466,20 +466,6 @@ pasid_cache_invalidation_with_pasid(struct intel_iommu *iommu,
+ 	qi_submit_sync(iommu, &desc, 1, 0);
+ }
+ 
+-static void
+-iotlb_invalidation_with_pasid(struct intel_iommu *iommu, u16 did, u32 pasid)
+-{
+-	struct qi_desc desc;
+-
+-	desc.qw0 = QI_EIOTLB_PASID(pasid) | QI_EIOTLB_DID(did) |
+-			QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) | QI_EIOTLB_TYPE;
+-	desc.qw1 = 0;
+-	desc.qw2 = 0;
+-	desc.qw3 = 0;
+-
+-	qi_submit_sync(iommu, &desc, 1, 0);
+-}
+-
+ static void
+ devtlb_invalidation_with_pasid(struct intel_iommu *iommu,
+ 			       struct device *dev, u32 pasid)
+@@ -511,20 +497,26 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
+ 				 u32 pasid, bool fault_ignore)
+ {
+ 	struct pasid_entry *pte;
+-	u16 did;
++	u16 did, pgtt;
+ 
+ 	pte = intel_pasid_get_entry(dev, pasid);
+ 	if (WARN_ON(!pte))
+ 		return;
+ 
+ 	did = pasid_get_domain_id(pte);
++	pgtt = pasid_pte_get_pgtt(pte);
++
+ 	intel_pasid_clear_entry(dev, pasid, fault_ignore);
+ 
+ 	if (!ecap_coherent(iommu->ecap))
+ 		clflush_cache_range(pte, sizeof(*pte));
+ 
+ 	pasid_cache_invalidation_with_pasid(iommu, did, pasid);
+-	iotlb_invalidation_with_pasid(iommu, did, pasid);
++
++	if (pgtt == PASID_ENTRY_PGTT_PT || pgtt == PASID_ENTRY_PGTT_FL_ONLY)
++		qi_flush_piotlb(iommu, did, pasid, 0, -1, 0);
++	else
++		iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
+ 
+ 	/* Device IOTLB doesn't need to be flushed in caching mode. */
+ 	if (!cap_caching_mode(iommu->cap))
+@@ -540,7 +532,7 @@ static void pasid_flush_caches(struct intel_iommu *iommu,
+ 
+ 	if (cap_caching_mode(iommu->cap)) {
+ 		pasid_cache_invalidation_with_pasid(iommu, did, pasid);
+-		iotlb_invalidation_with_pasid(iommu, did, pasid);
++		qi_flush_piotlb(iommu, did, pasid, 0, -1, 0);
+ 	} else {
+ 		iommu_flush_write_buffer(iommu);
+ 	}
+diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
+index 086ebd6973199..30cb30046b15e 100644
+--- a/drivers/iommu/intel/pasid.h
++++ b/drivers/iommu/intel/pasid.h
+@@ -99,6 +99,12 @@ static inline bool pasid_pte_is_present(struct pasid_entry *pte)
+ 	return READ_ONCE(pte->val[0]) & PASID_PTE_PRESENT;
+ }
+ 
++/* Get PGTT field of a PASID table entry */
++static inline u16 pasid_pte_get_pgtt(struct pasid_entry *pte)
++{
++	return (u16)((READ_ONCE(pte->val[0]) >> 6) & 0x7);
++}
++
+ extern unsigned int intel_pasid_max_id;
+ int intel_pasid_alloc_id(void *ptr, int start, int end, gfp_t gfp);
+ void intel_pasid_free_id(u32 pasid);
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index 6168dec7cb40d..aabf56272b86d 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -123,53 +123,16 @@ static void __flush_svm_range_dev(struct intel_svm *svm,
+ 				  unsigned long address,
+ 				  unsigned long pages, int ih)
+ {
+-	struct qi_desc desc;
++	struct device_domain_info *info = get_domain_info(sdev->dev);
+ 
+-	if (pages == -1) {
+-		desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
+-			QI_EIOTLB_DID(sdev->did) |
+-			QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
+-			QI_EIOTLB_TYPE;
+-		desc.qw1 = 0;
+-	} else {
+-		int mask = ilog2(__roundup_pow_of_two(pages));
+-
+-		desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
+-				QI_EIOTLB_DID(sdev->did) |
+-				QI_EIOTLB_GRAN(QI_GRAN_PSI_PASID) |
+-				QI_EIOTLB_TYPE;
+-		desc.qw1 = QI_EIOTLB_ADDR(address) |
+-				QI_EIOTLB_IH(ih) |
+-				QI_EIOTLB_AM(mask);
+-	}
+-	desc.qw2 = 0;
+-	desc.qw3 = 0;
+-	qi_submit_sync(sdev->iommu, &desc, 1, 0);
+-
+-	if (sdev->dev_iotlb) {
+-		desc.qw0 = QI_DEV_EIOTLB_PASID(svm->pasid) |
+-				QI_DEV_EIOTLB_SID(sdev->sid) |
+-				QI_DEV_EIOTLB_QDEP(sdev->qdep) |
+-				QI_DEIOTLB_TYPE;
+-		if (pages == -1) {
+-			desc.qw1 = QI_DEV_EIOTLB_ADDR(-1ULL >> 1) |
+-					QI_DEV_EIOTLB_SIZE;
+-		} else if (pages > 1) {
+-			/* The least significant zero bit indicates the size. So,
+-			 * for example, an "address" value of 0x12345f000 will
+-			 * flush from 0x123440000 to 0x12347ffff (256KiB). */
+-			unsigned long last = address + ((unsigned long)(pages - 1) << VTD_PAGE_SHIFT);
+-			unsigned long mask = __rounddown_pow_of_two(address ^ last);
+-
+-			desc.qw1 = QI_DEV_EIOTLB_ADDR((address & ~mask) |
+-					(mask - 1)) | QI_DEV_EIOTLB_SIZE;
+-		} else {
+-			desc.qw1 = QI_DEV_EIOTLB_ADDR(address);
+-		}
+-		desc.qw2 = 0;
+-		desc.qw3 = 0;
+-		qi_submit_sync(sdev->iommu, &desc, 1, 0);
+-	}
++	if (WARN_ON(!pages))
++		return;
++
++	qi_flush_piotlb(sdev->iommu, sdev->did, svm->pasid, address, pages, ih);
++	if (info->ats_enabled)
++		qi_flush_dev_iotlb_pasid(sdev->iommu, sdev->sid, info->pfsid,
++					 svm->pasid, sdev->qdep, address,
++					 order_base_2(pages));
+ }
+ 
+ static void intel_flush_svm_range_dev(struct intel_svm *svm,
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 9b8664d388af0..bcf060b5cf85b 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -879,6 +879,9 @@ void iommu_group_remove_device(struct device *dev)
+ 	struct iommu_group *group = dev->iommu_group;
+ 	struct group_device *tmp_device, *device = NULL;
+ 
++	if (!group)
++		return;
++
+ 	dev_info(dev, "Removing from iommu group %d\n", group->id);
+ 
+ 	/* Pre-notify listeners that a device is being removed. */
+diff --git a/drivers/ipack/carriers/tpci200.c b/drivers/ipack/carriers/tpci200.c
+index e1822e87ec3d2..c1098f40e03f8 100644
+--- a/drivers/ipack/carriers/tpci200.c
++++ b/drivers/ipack/carriers/tpci200.c
+@@ -91,16 +91,13 @@ static void tpci200_unregister(struct tpci200_board *tpci200)
+ 	free_irq(tpci200->info->pdev->irq, (void *) tpci200);
+ 
+ 	pci_iounmap(tpci200->info->pdev, tpci200->info->interface_regs);
+-	pci_iounmap(tpci200->info->pdev, tpci200->info->cfg_regs);
+ 
+ 	pci_release_region(tpci200->info->pdev, TPCI200_IP_INTERFACE_BAR);
+ 	pci_release_region(tpci200->info->pdev, TPCI200_IO_ID_INT_SPACES_BAR);
+ 	pci_release_region(tpci200->info->pdev, TPCI200_MEM16_SPACE_BAR);
+ 	pci_release_region(tpci200->info->pdev, TPCI200_MEM8_SPACE_BAR);
+-	pci_release_region(tpci200->info->pdev, TPCI200_CFG_MEM_BAR);
+ 
+ 	pci_disable_device(tpci200->info->pdev);
+-	pci_dev_put(tpci200->info->pdev);
+ }
+ 
+ static void tpci200_enable_irq(struct tpci200_board *tpci200,
+@@ -259,7 +256,7 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			"(bn 0x%X, sn 0x%X) failed to allocate PCI resource for BAR 2 !",
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
+-		goto out_disable_pci;
++		goto err_disable_device;
+ 	}
+ 
+ 	/* Request IO ID INT space (Bar 3) */
+@@ -271,7 +268,7 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			"(bn 0x%X, sn 0x%X) failed to allocate PCI resource for BAR 3 !",
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
+-		goto out_release_ip_space;
++		goto err_ip_interface_bar;
+ 	}
+ 
+ 	/* Request MEM8 space (Bar 5) */
+@@ -282,7 +279,7 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			"(bn 0x%X, sn 0x%X) failed to allocate PCI resource for BAR 5!",
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
+-		goto out_release_ioid_int_space;
++		goto err_io_id_int_spaces_bar;
+ 	}
+ 
+ 	/* Request MEM16 space (Bar 4) */
+@@ -293,7 +290,7 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			"(bn 0x%X, sn 0x%X) failed to allocate PCI resource for BAR 4!",
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
+-		goto out_release_mem8_space;
++		goto err_mem8_space_bar;
+ 	}
+ 
+ 	/* Map internal tpci200 driver user space */
+@@ -307,7 +304,7 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
+ 		res = -ENOMEM;
+-		goto out_release_mem8_space;
++		goto err_mem16_space_bar;
+ 	}
+ 
+ 	/* Initialize lock that protects interface_regs */
+@@ -346,18 +343,22 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			"(bn 0x%X, sn 0x%X) unable to register IRQ !",
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
+-		goto out_release_ioid_int_space;
++		goto err_interface_regs;
+ 	}
+ 
+ 	return 0;
+ 
+-out_release_mem8_space:
++err_interface_regs:
++	pci_iounmap(tpci200->info->pdev, tpci200->info->interface_regs);
++err_mem16_space_bar:
++	pci_release_region(tpci200->info->pdev, TPCI200_MEM16_SPACE_BAR);
++err_mem8_space_bar:
+ 	pci_release_region(tpci200->info->pdev, TPCI200_MEM8_SPACE_BAR);
+-out_release_ioid_int_space:
++err_io_id_int_spaces_bar:
+ 	pci_release_region(tpci200->info->pdev, TPCI200_IO_ID_INT_SPACES_BAR);
+-out_release_ip_space:
++err_ip_interface_bar:
+ 	pci_release_region(tpci200->info->pdev, TPCI200_IP_INTERFACE_BAR);
+-out_disable_pci:
++err_disable_device:
+ 	pci_disable_device(tpci200->info->pdev);
+ 	return res;
+ }
+@@ -529,7 +530,7 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 	tpci200->info = kzalloc(sizeof(struct tpci200_infos), GFP_KERNEL);
+ 	if (!tpci200->info) {
+ 		ret = -ENOMEM;
+-		goto out_err_info;
++		goto err_tpci200;
+ 	}
+ 
+ 	pci_dev_get(pdev);
+@@ -540,7 +541,7 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to allocate PCI Configuration Memory");
+ 		ret = -EBUSY;
+-		goto out_err_pci_request;
++		goto err_tpci200_info;
+ 	}
+ 	tpci200->info->cfg_regs = ioremap(
+ 			pci_resource_start(pdev, TPCI200_CFG_MEM_BAR),
+@@ -548,7 +549,7 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 	if (!tpci200->info->cfg_regs) {
+ 		dev_err(&pdev->dev, "Failed to map PCI Configuration Memory");
+ 		ret = -EFAULT;
+-		goto out_err_ioremap;
++		goto err_request_region;
+ 	}
+ 
+ 	/* Disable byte swapping for 16 bit IP module access. This will ensure
+@@ -571,7 +572,7 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "error during tpci200 install\n");
+ 		ret = -ENODEV;
+-		goto out_err_install;
++		goto err_cfg_regs;
+ 	}
+ 
+ 	/* Register the carrier in the industry pack bus driver */
+@@ -583,7 +584,7 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 		dev_err(&pdev->dev,
+ 			"error registering the carrier on ipack driver\n");
+ 		ret = -EFAULT;
+-		goto out_err_bus_register;
++		goto err_tpci200_install;
+ 	}
+ 
+ 	/* save the bus number given by ipack to logging purpose */
+@@ -594,19 +595,16 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 		tpci200_create_device(tpci200, i);
+ 	return 0;
+ 
+-out_err_bus_register:
++err_tpci200_install:
+ 	tpci200_uninstall(tpci200);
+-	/* tpci200->info->cfg_regs is unmapped in tpci200_uninstall */
+-	tpci200->info->cfg_regs = NULL;
+-out_err_install:
+-	if (tpci200->info->cfg_regs)
+-		iounmap(tpci200->info->cfg_regs);
+-out_err_ioremap:
++err_cfg_regs:
++	pci_iounmap(tpci200->info->pdev, tpci200->info->cfg_regs);
++err_request_region:
+ 	pci_release_region(pdev, TPCI200_CFG_MEM_BAR);
+-out_err_pci_request:
+-	pci_dev_put(pdev);
++err_tpci200_info:
+ 	kfree(tpci200->info);
+-out_err_info:
++	pci_dev_put(pdev);
++err_tpci200:
+ 	kfree(tpci200);
+ 	return ret;
+ }
+@@ -616,6 +614,12 @@ static void __tpci200_pci_remove(struct tpci200_board *tpci200)
+ 	ipack_bus_unregister(tpci200->info->ipack_bus);
+ 	tpci200_uninstall(tpci200);
+ 
++	pci_iounmap(tpci200->info->pdev, tpci200->info->cfg_regs);
++
++	pci_release_region(tpci200->info->pdev, TPCI200_CFG_MEM_BAR);
++
++	pci_dev_put(tpci200->info->pdev);
++
+ 	kfree(tpci200->info);
+ 	kfree(tpci200);
+ }
+diff --git a/drivers/media/usb/zr364xx/zr364xx.c b/drivers/media/usb/zr364xx/zr364xx.c
+index 1b79053b2a052..08b86b22e5e80 100644
+--- a/drivers/media/usb/zr364xx/zr364xx.c
++++ b/drivers/media/usb/zr364xx/zr364xx.c
+@@ -1184,15 +1184,11 @@ out:
+ 	return err;
+ }
+ 
+-static void zr364xx_release(struct v4l2_device *v4l2_dev)
++static void zr364xx_board_uninit(struct zr364xx_camera *cam)
+ {
+-	struct zr364xx_camera *cam =
+-		container_of(v4l2_dev, struct zr364xx_camera, v4l2_dev);
+ 	unsigned long i;
+ 
+-	v4l2_device_unregister(&cam->v4l2_dev);
+-
+-	videobuf_mmap_free(&cam->vb_vidq);
++	zr364xx_stop_readpipe(cam);
+ 
+ 	/* release sys buffers */
+ 	for (i = 0; i < FRAMES; i++) {
+@@ -1203,9 +1199,19 @@ static void zr364xx_release(struct v4l2_device *v4l2_dev)
+ 		cam->buffer.frame[i].lpvbits = NULL;
+ 	}
+ 
+-	v4l2_ctrl_handler_free(&cam->ctrl_handler);
+ 	/* release transfer buffer */
+ 	kfree(cam->pipe->transfer_buffer);
++}
++
++static void zr364xx_release(struct v4l2_device *v4l2_dev)
++{
++	struct zr364xx_camera *cam =
++		container_of(v4l2_dev, struct zr364xx_camera, v4l2_dev);
++
++	videobuf_mmap_free(&cam->vb_vidq);
++	v4l2_ctrl_handler_free(&cam->ctrl_handler);
++	zr364xx_board_uninit(cam);
++	v4l2_device_unregister(&cam->v4l2_dev);
+ 	kfree(cam);
+ }
+ 
+@@ -1328,6 +1334,7 @@ static int zr364xx_board_init(struct zr364xx_camera *cam)
+ {
+ 	struct zr364xx_pipeinfo *pipe = cam->pipe;
+ 	unsigned long i;
++	int err;
+ 
+ 	DBG("board init: %p\n", cam);
+ 	memset(pipe, 0, sizeof(*pipe));
+@@ -1360,9 +1367,8 @@ static int zr364xx_board_init(struct zr364xx_camera *cam)
+ 
+ 	if (i == 0) {
+ 		printk(KERN_INFO KBUILD_MODNAME ": out of memory. Aborting\n");
+-		kfree(cam->pipe->transfer_buffer);
+-		cam->pipe->transfer_buffer = NULL;
+-		return -ENOMEM;
++		err = -ENOMEM;
++		goto err_free;
+ 	} else
+ 		cam->buffer.dwFrames = i;
+ 
+@@ -1377,9 +1383,20 @@ static int zr364xx_board_init(struct zr364xx_camera *cam)
+ 	/*** end create system buffers ***/
+ 
+ 	/* start read pipe */
+-	zr364xx_start_readpipe(cam);
++	err = zr364xx_start_readpipe(cam);
++	if (err)
++		goto err_free_frames;
++
+ 	DBG(": board initialized\n");
+ 	return 0;
++
++err_free_frames:
++	for (i = 0; i < FRAMES; i++)
++		vfree(cam->buffer.frame[i].lpvbits);
++err_free:
++	kfree(cam->pipe->transfer_buffer);
++	cam->pipe->transfer_buffer = NULL;
++	return err;
+ }
+ 
+ static int zr364xx_probe(struct usb_interface *intf,
+@@ -1404,12 +1421,10 @@ static int zr364xx_probe(struct usb_interface *intf,
+ 	if (!cam)
+ 		return -ENOMEM;
+ 
+-	cam->v4l2_dev.release = zr364xx_release;
+ 	err = v4l2_device_register(&intf->dev, &cam->v4l2_dev);
+ 	if (err < 0) {
+ 		dev_err(&udev->dev, "couldn't register v4l2_device\n");
+-		kfree(cam);
+-		return err;
++		goto free_cam;
+ 	}
+ 	hdl = &cam->ctrl_handler;
+ 	v4l2_ctrl_handler_init(hdl, 1);
+@@ -1418,7 +1433,7 @@ static int zr364xx_probe(struct usb_interface *intf,
+ 	if (hdl->error) {
+ 		err = hdl->error;
+ 		dev_err(&udev->dev, "couldn't register control\n");
+-		goto fail;
++		goto free_hdlr_and_unreg_dev;
+ 	}
+ 	/* save the init method used by this camera */
+ 	cam->method = id->driver_info;
+@@ -1491,7 +1506,7 @@ static int zr364xx_probe(struct usb_interface *intf,
+ 	if (!cam->read_endpoint) {
+ 		err = -ENOMEM;
+ 		dev_err(&intf->dev, "Could not find bulk-in endpoint\n");
+-		goto fail;
++		goto free_hdlr_and_unreg_dev;
+ 	}
+ 
+ 	/* v4l */
+@@ -1502,10 +1517,11 @@ static int zr364xx_probe(struct usb_interface *intf,
+ 
+ 	/* load zr364xx board specific */
+ 	err = zr364xx_board_init(cam);
+-	if (!err)
+-		err = v4l2_ctrl_handler_setup(hdl);
+ 	if (err)
+-		goto fail;
++		goto free_hdlr_and_unreg_dev;
++	err = v4l2_ctrl_handler_setup(hdl);
++	if (err)
++		goto board_uninit;
+ 
+ 	spin_lock_init(&cam->slock);
+ 
+@@ -1520,16 +1536,20 @@ static int zr364xx_probe(struct usb_interface *intf,
+ 	err = video_register_device(&cam->vdev, VFL_TYPE_VIDEO, -1);
+ 	if (err) {
+ 		dev_err(&udev->dev, "video_register_device failed\n");
+-		goto fail;
++		goto board_uninit;
+ 	}
++	cam->v4l2_dev.release = zr364xx_release;
+ 
+ 	dev_info(&udev->dev, DRIVER_DESC " controlling device %s\n",
+ 		 video_device_node_name(&cam->vdev));
+ 	return 0;
+ 
+-fail:
++board_uninit:
++	zr364xx_board_uninit(cam);
++free_hdlr_and_unreg_dev:
+ 	v4l2_ctrl_handler_free(hdl);
+ 	v4l2_device_unregister(&cam->v4l2_dev);
++free_cam:
+ 	kfree(cam);
+ 	return err;
+ }
+@@ -1576,10 +1596,19 @@ static int zr364xx_resume(struct usb_interface *intf)
+ 	if (!cam->was_streaming)
+ 		return 0;
+ 
+-	zr364xx_start_readpipe(cam);
++	res = zr364xx_start_readpipe(cam);
++	if (res)
++		return res;
++
+ 	res = zr364xx_prepare(cam);
+-	if (!res)
+-		zr364xx_start_acquire(cam);
++	if (res)
++		goto err_prepare;
++
++	zr364xx_start_acquire(cam);
++	return 0;
++
++err_prepare:
++	zr364xx_stop_readpipe(cam);
+ 	return res;
+ }
+ #endif
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 43c5795691fb2..8b5d542e20f30 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -2018,8 +2018,8 @@ static void dw_mci_tasklet_func(unsigned long priv)
+ 					continue;
+ 				}
+ 
+-				dw_mci_stop_dma(host);
+ 				send_stop_abort(host, data);
++				dw_mci_stop_dma(host);
+ 				state = STATE_SENDING_STOP;
+ 				break;
+ 			}
+@@ -2043,10 +2043,10 @@ static void dw_mci_tasklet_func(unsigned long priv)
+ 			 */
+ 			if (test_and_clear_bit(EVENT_DATA_ERROR,
+ 					       &host->pending_events)) {
+-				dw_mci_stop_dma(host);
+ 				if (!(host->data_status & (SDMMC_INT_DRTO |
+ 							   SDMMC_INT_EBE)))
+ 					send_stop_abort(host, data);
++				dw_mci_stop_dma(host);
+ 				state = STATE_DATA_ERROR;
+ 				break;
+ 			}
+@@ -2079,10 +2079,10 @@ static void dw_mci_tasklet_func(unsigned long priv)
+ 			 */
+ 			if (test_and_clear_bit(EVENT_DATA_ERROR,
+ 					       &host->pending_events)) {
+-				dw_mci_stop_dma(host);
+ 				if (!(host->data_status & (SDMMC_INT_DRTO |
+ 							   SDMMC_INT_EBE)))
+ 					send_stop_abort(host, data);
++				dw_mci_stop_dma(host);
+ 				state = STATE_DATA_ERROR;
+ 				break;
+ 			}
+diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
+index 51db30acf4dca..fdaa11f92fe6f 100644
+--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
++++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
+@@ -479,8 +479,9 @@ static int sdmmc_post_sig_volt_switch(struct mmci_host *host,
+ 	u32 status;
+ 	int ret = 0;
+ 
+-	if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_180) {
+-		spin_lock_irqsave(&host->lock, flags);
++	spin_lock_irqsave(&host->lock, flags);
++	if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_180 &&
++	    host->pwr_reg & MCI_STM32_VSWITCHEN) {
+ 		mmci_write_pwrreg(host, host->pwr_reg | MCI_STM32_VSWITCH);
+ 		spin_unlock_irqrestore(&host->lock, flags);
+ 
+@@ -492,9 +493,11 @@ static int sdmmc_post_sig_volt_switch(struct mmci_host *host,
+ 
+ 		writel_relaxed(MCI_STM32_VSWENDC | MCI_STM32_CKSTOPC,
+ 			       host->base + MMCICLEAR);
++		spin_lock_irqsave(&host->lock, flags);
+ 		mmci_write_pwrreg(host, host->pwr_reg &
+ 				  ~(MCI_STM32_VSWITCHEN | MCI_STM32_VSWITCH));
+ 	}
++	spin_unlock_irqrestore(&host->lock, flags);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/mmc/host/sdhci-iproc.c b/drivers/mmc/host/sdhci-iproc.c
+index ddeaf8e1f72f9..9f0eef97ebddd 100644
+--- a/drivers/mmc/host/sdhci-iproc.c
++++ b/drivers/mmc/host/sdhci-iproc.c
+@@ -173,6 +173,23 @@ static unsigned int sdhci_iproc_get_max_clock(struct sdhci_host *host)
+ 		return pltfm_host->clock;
+ }
+ 
++/*
++ * There is a known bug on BCM2711's SDHCI core integration where the
++ * controller will hang when the difference between the core clock and the bus
++ * clock is too great. Specifically this can be reproduced under the following
++ * conditions:
++ *
++ *  - No SD card plugged in, polling thread is running, probing cards at
++ *    100 kHz.
++ *  - BCM2711's core clock configured at 500MHz or more
++ *
++ * So we set 200kHz as the minimum clock frequency available for that SoC.
++ */
++static unsigned int sdhci_iproc_bcm2711_get_min_clock(struct sdhci_host *host)
++{
++	return 200000;
++}
++
+ static const struct sdhci_ops sdhci_iproc_ops = {
+ 	.set_clock = sdhci_set_clock,
+ 	.get_max_clock = sdhci_iproc_get_max_clock,
+@@ -271,13 +288,15 @@ static const struct sdhci_ops sdhci_iproc_bcm2711_ops = {
+ 	.set_clock = sdhci_set_clock,
+ 	.set_power = sdhci_set_power_and_bus_voltage,
+ 	.get_max_clock = sdhci_iproc_get_max_clock,
++	.get_min_clock = sdhci_iproc_bcm2711_get_min_clock,
+ 	.set_bus_width = sdhci_set_bus_width,
+ 	.reset = sdhci_reset,
+ 	.set_uhs_signaling = sdhci_set_uhs_signaling,
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_bcm2711_pltfm_data = {
+-	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
++	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12 |
++		  SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
+ 	.ops = &sdhci_iproc_bcm2711_ops,
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index 3451eb3255135..588b9a5641179 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -1834,6 +1834,23 @@ static void sdhci_msm_cqe_disable(struct mmc_host *mmc, bool recovery)
+ 	sdhci_cqe_disable(mmc, recovery);
+ }
+ 
++static void sdhci_msm_set_timeout(struct sdhci_host *host, struct mmc_command *cmd)
++{
++	u32 count, start = 15;
++
++	__sdhci_set_timeout(host, cmd);
++	count = sdhci_readb(host, SDHCI_TIMEOUT_CONTROL);
++	/*
++	 * Update software timeout value if its value is less than hardware data
++	 * timeout value. Qcom SoC hardware data timeout value was calculated
++	 * using 4 * MCLK * 2^(count + 13). where MCLK = 1 / host->clock.
++	 */
++	if (cmd && cmd->data && host->clock > 400000 &&
++	    host->clock <= 50000000 &&
++	    ((1 << (count + start)) > (10 * host->clock)))
++		host->data_timeout = 22LL * NSEC_PER_SEC;
++}
++
+ static const struct cqhci_host_ops sdhci_msm_cqhci_ops = {
+ 	.enable		= sdhci_cqe_enable,
+ 	.disable	= sdhci_msm_cqe_disable,
+@@ -2184,6 +2201,7 @@ static const struct sdhci_ops sdhci_msm_ops = {
+ 	.irq	= sdhci_msm_cqe_irq,
+ 	.dump_vendor_regs = sdhci_msm_dump_vendor_regs,
+ 	.set_power = sdhci_set_power_noreg,
++	.set_timeout = sdhci_msm_set_timeout,
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_msm_pdata = {
+diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c
+index a1f3e1031c3d2..96a27e06401fd 100644
+--- a/drivers/mtd/chips/cfi_cmdset_0002.c
++++ b/drivers/mtd/chips/cfi_cmdset_0002.c
+@@ -119,7 +119,7 @@ static int cfi_use_status_reg(struct cfi_private *cfi)
+ 	struct cfi_pri_amdstd *extp = cfi->cmdset_priv;
+ 	u8 poll_mask = CFI_POLL_STATUS_REG | CFI_POLL_DQ;
+ 
+-	return extp->MinorVersion >= '5' &&
++	return extp && extp->MinorVersion >= '5' &&
+ 		(extp->SoftwareFeatures & poll_mask) == CFI_POLL_STATUS_REG;
+ }
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 8f169508a90a9..849ae99a955a3 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -69,7 +69,8 @@
+ #include "bnxt_debugfs.h"
+ 
+ #define BNXT_TX_TIMEOUT		(5 * HZ)
+-#define BNXT_DEF_MSG_ENABLE	(NETIF_MSG_DRV | NETIF_MSG_HW)
++#define BNXT_DEF_MSG_ENABLE	(NETIF_MSG_DRV | NETIF_MSG_HW | \
++				 NETIF_MSG_TX_ERR)
+ 
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Broadcom BCM573xx network driver");
+@@ -360,6 +361,33 @@ static u16 bnxt_xmit_get_cfa_action(struct sk_buff *skb)
+ 	return md_dst->u.port_info.port_id;
+ }
+ 
++static void bnxt_txr_db_kick(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
++			     u16 prod)
++{
++	bnxt_db_write(bp, &txr->tx_db, prod);
++	txr->kick_pending = 0;
++}
++
++static bool bnxt_txr_netif_try_stop_queue(struct bnxt *bp,
++					  struct bnxt_tx_ring_info *txr,
++					  struct netdev_queue *txq)
++{
++	netif_tx_stop_queue(txq);
++
++	/* netif_tx_stop_queue() must be done before checking
++	 * tx index in bnxt_tx_avail() below, because in
++	 * bnxt_tx_int(), we update tx index before checking for
++	 * netif_tx_queue_stopped().
++	 */
++	smp_mb();
++	if (bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh) {
++		netif_tx_wake_queue(txq);
++		return false;
++	}
++
++	return true;
++}
++
+ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct bnxt *bp = netdev_priv(dev);
+@@ -378,6 +406,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	i = skb_get_queue_mapping(skb);
+ 	if (unlikely(i >= bp->tx_nr_rings)) {
+ 		dev_kfree_skb_any(skb);
++		atomic_long_inc(&dev->tx_dropped);
+ 		return NETDEV_TX_OK;
+ 	}
+ 
+@@ -387,8 +416,12 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	free_size = bnxt_tx_avail(bp, txr);
+ 	if (unlikely(free_size < skb_shinfo(skb)->nr_frags + 2)) {
+-		netif_tx_stop_queue(txq);
+-		return NETDEV_TX_BUSY;
++		/* We must have raced with NAPI cleanup */
++		if (net_ratelimit() && txr->kick_pending)
++			netif_warn(bp, tx_err, dev,
++				   "bnxt: ring busy w/ flush pending!\n");
++		if (bnxt_txr_netif_try_stop_queue(bp, txr, txq))
++			return NETDEV_TX_BUSY;
+ 	}
+ 
+ 	length = skb->len;
+@@ -490,21 +523,16 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ normal_tx:
+ 	if (length < BNXT_MIN_PKT_SIZE) {
+ 		pad = BNXT_MIN_PKT_SIZE - length;
+-		if (skb_pad(skb, pad)) {
++		if (skb_pad(skb, pad))
+ 			/* SKB already freed. */
+-			tx_buf->skb = NULL;
+-			return NETDEV_TX_OK;
+-		}
++			goto tx_kick_pending;
+ 		length = BNXT_MIN_PKT_SIZE;
+ 	}
+ 
+ 	mapping = dma_map_single(&pdev->dev, skb->data, len, DMA_TO_DEVICE);
+ 
+-	if (unlikely(dma_mapping_error(&pdev->dev, mapping))) {
+-		dev_kfree_skb_any(skb);
+-		tx_buf->skb = NULL;
+-		return NETDEV_TX_OK;
+-	}
++	if (unlikely(dma_mapping_error(&pdev->dev, mapping)))
++		goto tx_free;
+ 
+ 	dma_unmap_addr_set(tx_buf, mapping, mapping);
+ 	flags = (len << TX_BD_LEN_SHIFT) | TX_BD_TYPE_LONG_TX_BD |
+@@ -589,24 +617,17 @@ normal_tx:
+ 	txr->tx_prod = prod;
+ 
+ 	if (!netdev_xmit_more() || netif_xmit_stopped(txq))
+-		bnxt_db_write(bp, &txr->tx_db, prod);
++		bnxt_txr_db_kick(bp, txr, prod);
++	else
++		txr->kick_pending = 1;
+ 
+ tx_done:
+ 
+ 	if (unlikely(bnxt_tx_avail(bp, txr) <= MAX_SKB_FRAGS + 1)) {
+ 		if (netdev_xmit_more() && !tx_buf->is_push)
+-			bnxt_db_write(bp, &txr->tx_db, prod);
+-
+-		netif_tx_stop_queue(txq);
++			bnxt_txr_db_kick(bp, txr, prod);
+ 
+-		/* netif_tx_stop_queue() must be done before checking
+-		 * tx index in bnxt_tx_avail() below, because in
+-		 * bnxt_tx_int(), we update tx index before checking for
+-		 * netif_tx_queue_stopped().
+-		 */
+-		smp_mb();
+-		if (bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh)
+-			netif_tx_wake_queue(txq);
++		bnxt_txr_netif_try_stop_queue(bp, txr, txq);
+ 	}
+ 	return NETDEV_TX_OK;
+ 
+@@ -616,7 +637,6 @@ tx_dma_error:
+ 	/* start back at beginning and unmap skb */
+ 	prod = txr->tx_prod;
+ 	tx_buf = &txr->tx_buf_ring[prod];
+-	tx_buf->skb = NULL;
+ 	dma_unmap_single(&pdev->dev, dma_unmap_addr(tx_buf, mapping),
+ 			 skb_headlen(skb), PCI_DMA_TODEVICE);
+ 	prod = NEXT_TX(prod);
+@@ -630,7 +650,13 @@ tx_dma_error:
+ 			       PCI_DMA_TODEVICE);
+ 	}
+ 
++tx_free:
+ 	dev_kfree_skb_any(skb);
++tx_kick_pending:
++	if (txr->kick_pending)
++		bnxt_txr_db_kick(bp, txr, txr->tx_prod);
++	txr->tx_buf_ring[txr->tx_prod].skb = NULL;
++	atomic_long_inc(&dev->tx_dropped);
+ 	return NETDEV_TX_OK;
+ }
+ 
+@@ -690,14 +716,9 @@ next_tx_int:
+ 	smp_mb();
+ 
+ 	if (unlikely(netif_tx_queue_stopped(txq)) &&
+-	    (bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh)) {
+-		__netif_tx_lock(txq, smp_processor_id());
+-		if (netif_tx_queue_stopped(txq) &&
+-		    bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh &&
+-		    txr->dev_state != BNXT_DEV_STATE_CLOSING)
+-			netif_tx_wake_queue(txq);
+-		__netif_tx_unlock(txq);
+-	}
++	    bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh &&
++	    READ_ONCE(txr->dev_state) != BNXT_DEV_STATE_CLOSING)
++		netif_tx_wake_queue(txq);
+ }
+ 
+ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
+@@ -1726,6 +1747,10 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 	if (!RX_CMP_VALID(rxcmp1, tmp_raw_cons))
+ 		return -EBUSY;
+ 
++	/* The valid test of the entry must be done first before
++	 * reading any further.
++	 */
++	dma_rmb();
+ 	prod = rxr->rx_prod;
+ 
+ 	if (cmp_type == CMP_TYPE_RX_L2_TPA_START_CMP) {
+@@ -1929,6 +1954,10 @@ static int bnxt_force_rx_discard(struct bnxt *bp,
+ 	if (!RX_CMP_VALID(rxcmp1, tmp_raw_cons))
+ 		return -EBUSY;
+ 
++	/* The valid test of the entry must be done first before
++	 * reading any further.
++	 */
++	dma_rmb();
+ 	cmp_type = RX_CMP_TYPE(rxcmp);
+ 	if (cmp_type == CMP_TYPE_RX_L2_CMP) {
+ 		rxcmp1->rx_cmp_cfa_code_errors_v2 |=
+@@ -2373,6 +2402,10 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget)
+ 		if (!TX_CMP_VALID(txcmp, raw_cons))
+ 			break;
+ 
++		/* The valid test of the entry must be done first before
++		 * reading any further.
++		 */
++		dma_rmb();
+ 		if ((TX_CMP_TYPE(txcmp) & 0x30) == 0x10) {
+ 			tmp_raw_cons = NEXT_RAW_CMP(raw_cons);
+ 			cp_cons = RING_CMP(tmp_raw_cons);
+@@ -8849,10 +8882,9 @@ static void bnxt_disable_napi(struct bnxt *bp)
+ 	for (i = 0; i < bp->cp_nr_rings; i++) {
+ 		struct bnxt_cp_ring_info *cpr = &bp->bnapi[i]->cp_ring;
+ 
++		napi_disable(&bp->bnapi[i]->napi);
+ 		if (bp->bnapi[i]->rx_ring)
+ 			cancel_work_sync(&cpr->dim.work);
+-
+-		napi_disable(&bp->bnapi[i]->napi);
+ 	}
+ }
+ 
+@@ -8885,9 +8917,11 @@ void bnxt_tx_disable(struct bnxt *bp)
+ 	if (bp->tx_ring) {
+ 		for (i = 0; i < bp->tx_nr_rings; i++) {
+ 			txr = &bp->tx_ring[i];
+-			txr->dev_state = BNXT_DEV_STATE_CLOSING;
++			WRITE_ONCE(txr->dev_state, BNXT_DEV_STATE_CLOSING);
+ 		}
+ 	}
++	/* Make sure napi polls see @dev_state change */
++	synchronize_net();
+ 	/* Drop carrier first to prevent TX timeout */
+ 	netif_carrier_off(bp->dev);
+ 	/* Stop all TX queues */
+@@ -8901,8 +8935,10 @@ void bnxt_tx_enable(struct bnxt *bp)
+ 
+ 	for (i = 0; i < bp->tx_nr_rings; i++) {
+ 		txr = &bp->tx_ring[i];
+-		txr->dev_state = 0;
++		WRITE_ONCE(txr->dev_state, 0);
+ 	}
++	/* Make sure napi polls see @dev_state change */
++	synchronize_net();
+ 	netif_tx_wake_all_queues(bp->dev);
+ 	if (bp->link_info.link_up)
+ 		netif_carrier_on(bp->dev);
+@@ -10372,6 +10408,9 @@ static bool bnxt_rfs_supported(struct bnxt *bp)
+ 			return true;
+ 		return false;
+ 	}
++	/* 212 firmware is broken for aRFS */
++	if (BNXT_FW_MAJ(bp) == 212)
++		return false;
+ 	if (BNXT_PF(bp) && !BNXT_CHIP_TYPE_NITRO_A0(bp))
+ 		return true;
+ 	if (bp->flags & BNXT_FLAG_NEW_RSS_CAP)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index a95c5afa2f018..95d10e7bbb041 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -770,6 +770,7 @@ struct bnxt_tx_ring_info {
+ 	u16			tx_prod;
+ 	u16			tx_cons;
+ 	u16			txq_index;
++	u8			kick_pending;
+ 	struct bnxt_db_info	tx_db;
+ 
+ 	struct tx_bd		*tx_desc_ring[MAX_TX_PAGES];
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 615802b07521a..5ad28129fab2a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -3556,8 +3556,7 @@ u16 i40e_lan_select_queue(struct net_device *netdev,
+ 
+ 	/* is DCB enabled at all? */
+ 	if (vsi->tc_config.numtc == 1)
+-		return i40e_swdcb_skb_tx_hash(netdev, skb,
+-					      netdev->real_num_tx_queues);
++		return netdev_pick_tx(netdev, skb, sb_dev);
+ 
+ 	prio = skb->priority;
+ 	hw = &vsi->back->hw;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index 8a65525a7c0d2..6766446a33f49 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -134,6 +134,7 @@ struct iavf_q_vector {
+ struct iavf_mac_filter {
+ 	struct list_head list;
+ 	u8 macaddr[ETH_ALEN];
++	bool is_new_mac;	/* filter is new, wait for PF decision */
+ 	bool remove;		/* filter needs to be removed */
+ 	bool add;		/* filter needs to be added */
+ };
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index c4ec9a91c7c52..7023aa147043f 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -751,6 +751,7 @@ struct iavf_mac_filter *iavf_add_filter(struct iavf_adapter *adapter,
+ 
+ 		list_add_tail(&f->list, &adapter->mac_filter_list);
+ 		f->add = true;
++		f->is_new_mac = true;
+ 		adapter->aq_required |= IAVF_FLAG_AQ_ADD_MAC_FILTER;
+ 	} else {
+ 		f->remove = false;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index ed08ace4f05a8..8be3151f2c62b 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -537,6 +537,47 @@ void iavf_del_ether_addrs(struct iavf_adapter *adapter)
+ 	kfree(veal);
+ }
+ 
++/**
++ * iavf_mac_add_ok
++ * @adapter: adapter structure
++ *
++ * Submit list of filters based on PF response.
++ **/
++static void iavf_mac_add_ok(struct iavf_adapter *adapter)
++{
++	struct iavf_mac_filter *f, *ftmp;
++
++	spin_lock_bh(&adapter->mac_vlan_list_lock);
++	list_for_each_entry_safe(f, ftmp, &adapter->mac_filter_list, list) {
++		f->is_new_mac = false;
++	}
++	spin_unlock_bh(&adapter->mac_vlan_list_lock);
++}
++
++/**
++ * iavf_mac_add_reject
++ * @adapter: adapter structure
++ *
++ * Remove filters from list based on PF response.
++ **/
++static void iavf_mac_add_reject(struct iavf_adapter *adapter)
++{
++	struct net_device *netdev = adapter->netdev;
++	struct iavf_mac_filter *f, *ftmp;
++
++	spin_lock_bh(&adapter->mac_vlan_list_lock);
++	list_for_each_entry_safe(f, ftmp, &adapter->mac_filter_list, list) {
++		if (f->remove && ether_addr_equal(f->macaddr, netdev->dev_addr))
++			f->remove = false;
++
++		if (f->is_new_mac) {
++			list_del(&f->list);
++			kfree(f);
++		}
++	}
++	spin_unlock_bh(&adapter->mac_vlan_list_lock);
++}
++
+ /**
+  * iavf_add_vlans
+  * @adapter: adapter structure
+@@ -1295,6 +1336,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 		case VIRTCHNL_OP_ADD_ETH_ADDR:
+ 			dev_err(&adapter->pdev->dev, "Failed to add MAC filter, error %s\n",
+ 				iavf_stat_str(&adapter->hw, v_retval));
++			iavf_mac_add_reject(adapter);
+ 			/* restore administratively set MAC address */
+ 			ether_addr_copy(adapter->hw.mac.addr, netdev->dev_addr);
+ 			break;
+@@ -1364,10 +1406,11 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 		}
+ 	}
+ 	switch (v_opcode) {
+-	case VIRTCHNL_OP_ADD_ETH_ADDR: {
++	case VIRTCHNL_OP_ADD_ETH_ADDR:
++		if (!v_retval)
++			iavf_mac_add_ok(adapter);
+ 		if (!ether_addr_equal(netdev->dev_addr, adapter->hw.mac.addr))
+ 			ether_addr_copy(netdev->dev_addr, adapter->hw.mac.addr);
+-		}
+ 		break;
+ 	case VIRTCHNL_OP_GET_STATS: {
+ 		struct iavf_eth_stats *stats =
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+index f72d2978263b9..d60da7a89092e 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+@@ -52,8 +52,11 @@ static int ixgbe_xsk_pool_enable(struct ixgbe_adapter *adapter,
+ 
+ 		/* Kick start the NAPI context so that receiving will start */
+ 		err = ixgbe_xsk_wakeup(adapter->netdev, qid, XDP_WAKEUP_RX);
+-		if (err)
++		if (err) {
++			clear_bit(qid, adapter->af_xdp_zc_qps);
++			xsk_pool_dma_unmap(pool, IXGBE_RX_DMA_ATTR);
+ 			return err;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h
+index 3efc5899f6563..f313fd7303316 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede.h
++++ b/drivers/net/ethernet/qlogic/qede/qede.h
+@@ -494,6 +494,7 @@ struct qede_fastpath {
+ #define QEDE_SP_HW_ERR                  4
+ #define QEDE_SP_ARFS_CONFIG             5
+ #define QEDE_SP_AER			7
++#define QEDE_SP_DISABLE			8
+ 
+ #ifdef CONFIG_RFS_ACCEL
+ int qede_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index 05e3a3b60269e..d9a3c811ac8b1 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -1006,6 +1006,13 @@ static void qede_sp_task(struct work_struct *work)
+ 	struct qede_dev *edev = container_of(work, struct qede_dev,
+ 					     sp_task.work);
+ 
++	/* Disable execution of this deferred work once
++	 * qede removal is in progress, this stop any future
++	 * scheduling of sp_task.
++	 */
++	if (test_bit(QEDE_SP_DISABLE, &edev->sp_flags))
++		return;
++
+ 	/* The locking scheme depends on the specific flag:
+ 	 * In case of QEDE_SP_RECOVERY, acquiring the RTNL lock is required to
+ 	 * ensure that ongoing flows are ended and new ones are not started.
+@@ -1297,6 +1304,7 @@ static void __qede_remove(struct pci_dev *pdev, enum qede_remove_mode mode)
+ 	qede_rdma_dev_remove(edev, (mode == QEDE_REMOVE_RECOVERY));
+ 
+ 	if (mode != QEDE_REMOVE_RECOVERY) {
++		set_bit(QEDE_SP_DISABLE, &edev->sp_flags);
+ 		unregister_netdev(ndev);
+ 
+ 		cancel_delayed_work_sync(&edev->sp_task);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+index d8882d0b6b498..d51bac7ba5afa 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+@@ -3156,8 +3156,10 @@ int qlcnic_83xx_flash_read32(struct qlcnic_adapter *adapter, u32 flash_addr,
+ 
+ 		indirect_addr = QLC_83XX_FLASH_DIRECT_DATA(addr);
+ 		ret = QLCRD32(adapter, indirect_addr, &err);
+-		if (err == -EIO)
++		if (err == -EIO) {
++			qlcnic_83xx_unlock_flash(adapter);
+ 			return err;
++		}
+ 
+ 		word = ret;
+ 		*(u32 *)p_data  = word;
+diff --git a/drivers/net/hamradio/6pack.c b/drivers/net/hamradio/6pack.c
+index 71d6629e65c97..da13683d52d1a 100644
+--- a/drivers/net/hamradio/6pack.c
++++ b/drivers/net/hamradio/6pack.c
+@@ -839,6 +839,12 @@ static void decode_data(struct sixpack *sp, unsigned char inbyte)
+ 		return;
+ 	}
+ 
++	if (sp->rx_count_cooked + 2 >= sizeof(sp->cooked_buf)) {
++		pr_err("6pack: cooked buffer overrun, data loss\n");
++		sp->rx_count = 0;
++		return;
++	}
++
+ 	buf = sp->raw_buf;
+ 	sp->cooked_buf[sp->rx_count_cooked++] =
+ 		buf[0] | ((buf[1] << 2) & 0xc0);
+diff --git a/drivers/net/mdio/mdio-mux.c b/drivers/net/mdio/mdio-mux.c
+index 6a1d3540210bd..ccb3ee704eb1c 100644
+--- a/drivers/net/mdio/mdio-mux.c
++++ b/drivers/net/mdio/mdio-mux.c
+@@ -82,6 +82,17 @@ out:
+ 
+ static int parent_count;
+ 
++static void mdio_mux_uninit_children(struct mdio_mux_parent_bus *pb)
++{
++	struct mdio_mux_child_bus *cb = pb->children;
++
++	while (cb) {
++		mdiobus_unregister(cb->mii_bus);
++		mdiobus_free(cb->mii_bus);
++		cb = cb->next;
++	}
++}
++
+ int mdio_mux_init(struct device *dev,
+ 		  struct device_node *mux_node,
+ 		  int (*switch_fn)(int cur, int desired, void *data),
+@@ -144,7 +155,7 @@ int mdio_mux_init(struct device *dev,
+ 		cb = devm_kzalloc(dev, sizeof(*cb), GFP_KERNEL);
+ 		if (!cb) {
+ 			ret_val = -ENOMEM;
+-			continue;
++			goto err_loop;
+ 		}
+ 		cb->bus_number = v;
+ 		cb->parent = pb;
+@@ -152,8 +163,7 @@ int mdio_mux_init(struct device *dev,
+ 		cb->mii_bus = mdiobus_alloc();
+ 		if (!cb->mii_bus) {
+ 			ret_val = -ENOMEM;
+-			devm_kfree(dev, cb);
+-			continue;
++			goto err_loop;
+ 		}
+ 		cb->mii_bus->priv = cb;
+ 
+@@ -165,11 +175,15 @@ int mdio_mux_init(struct device *dev,
+ 		cb->mii_bus->write = mdio_mux_write;
+ 		r = of_mdiobus_register(cb->mii_bus, child_bus_node);
+ 		if (r) {
++			mdiobus_free(cb->mii_bus);
++			if (r == -EPROBE_DEFER) {
++				ret_val = r;
++				goto err_loop;
++			}
++			devm_kfree(dev, cb);
+ 			dev_err(dev,
+ 				"Error: Failed to register MDIO bus for child %pOF\n",
+ 				child_bus_node);
+-			mdiobus_free(cb->mii_bus);
+-			devm_kfree(dev, cb);
+ 		} else {
+ 			cb->next = pb->children;
+ 			pb->children = cb;
+@@ -182,6 +196,10 @@ int mdio_mux_init(struct device *dev,
+ 
+ 	dev_err(dev, "Error: No acceptable child buses found\n");
+ 	devm_kfree(dev, pb);
++
++err_loop:
++	mdio_mux_uninit_children(pb);
++	of_node_put(child_bus_node);
+ err_pb_kz:
+ 	put_device(&parent_bus->dev);
+ err_parent_bus:
+@@ -193,14 +211,8 @@ EXPORT_SYMBOL_GPL(mdio_mux_init);
+ void mdio_mux_uninit(void *mux_handle)
+ {
+ 	struct mdio_mux_parent_bus *pb = mux_handle;
+-	struct mdio_mux_child_bus *cb = pb->children;
+-
+-	while (cb) {
+-		mdiobus_unregister(cb->mii_bus);
+-		mdiobus_free(cb->mii_bus);
+-		cb = cb->next;
+-	}
+ 
++	mdio_mux_uninit_children(pb);
+ 	put_device(&pb->mii_bus->dev);
+ }
+ EXPORT_SYMBOL_GPL(mdio_mux_uninit);
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 65b315bc60abd..a5cd42bae9621 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1159,7 +1159,7 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ {
+ 	struct phy_device *phydev = dev->net->phydev;
+ 	struct ethtool_link_ksettings ecmd;
+-	int ladv, radv, ret;
++	int ladv, radv, ret, link;
+ 	u32 buf;
+ 
+ 	/* clear LAN78xx interrupt status */
+@@ -1167,9 +1167,12 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ 	if (unlikely(ret < 0))
+ 		return -EIO;
+ 
++	mutex_lock(&phydev->lock);
+ 	phy_read_status(phydev);
++	link = phydev->link;
++	mutex_unlock(&phydev->lock);
+ 
+-	if (!phydev->link && dev->link_on) {
++	if (!link && dev->link_on) {
+ 		dev->link_on = false;
+ 
+ 		/* reset MAC */
+@@ -1182,7 +1185,7 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ 			return -EIO;
+ 
+ 		del_timer(&dev->stat_monitor);
+-	} else if (phydev->link && !dev->link_on) {
++	} else if (link && !dev->link_on) {
+ 		dev->link_on = true;
+ 
+ 		phy_ethtool_ksettings_get(phydev, &ecmd);
+@@ -1471,9 +1474,14 @@ static int lan78xx_set_eee(struct net_device *net, struct ethtool_eee *edata)
+ 
+ static u32 lan78xx_get_link(struct net_device *net)
+ {
++	u32 link;
++
++	mutex_lock(&net->phydev->lock);
+ 	phy_read_status(net->phydev);
++	link = net->phydev->link;
++	mutex_unlock(&net->phydev->lock);
+ 
+-	return net->phydev->link;
++	return link;
+ }
+ 
+ static void lan78xx_get_drvinfo(struct net_device *net,
+diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c
+index 0d7935924e580..fb1a8c4486ddf 100644
+--- a/drivers/net/usb/pegasus.c
++++ b/drivers/net/usb/pegasus.c
+@@ -132,9 +132,15 @@ static int get_registers(pegasus_t *pegasus, __u16 indx, __u16 size, void *data)
+ static int set_registers(pegasus_t *pegasus, __u16 indx, __u16 size,
+ 			 const void *data)
+ {
+-	return usb_control_msg_send(pegasus->usb, 0, PEGASUS_REQ_SET_REGS,
++	int ret;
++
++	ret = usb_control_msg_send(pegasus->usb, 0, PEGASUS_REQ_SET_REGS,
+ 				    PEGASUS_REQT_WRITE, 0, indx, data, size,
+ 				    1000, GFP_NOIO);
++	if (ret < 0)
++		netif_dbg(pegasus, drv, pegasus->net, "%s failed with %d\n", __func__, ret);
++
++	return ret;
+ }
+ 
+ /*
+@@ -145,10 +151,15 @@ static int set_registers(pegasus_t *pegasus, __u16 indx, __u16 size,
+ static int set_register(pegasus_t *pegasus, __u16 indx, __u8 data)
+ {
+ 	void *buf = &data;
++	int ret;
+ 
+-	return usb_control_msg_send(pegasus->usb, 0, PEGASUS_REQ_SET_REG,
++	ret = usb_control_msg_send(pegasus->usb, 0, PEGASUS_REQ_SET_REG,
+ 				    PEGASUS_REQT_WRITE, data, indx, buf, 1,
+ 				    1000, GFP_NOIO);
++	if (ret < 0)
++		netif_dbg(pegasus, drv, pegasus->net, "%s failed with %d\n", __func__, ret);
++
++	return ret;
+ }
+ 
+ static int update_eth_regs_async(pegasus_t *pegasus)
+@@ -188,10 +199,9 @@ static int update_eth_regs_async(pegasus_t *pegasus)
+ 
+ static int __mii_op(pegasus_t *p, __u8 phy, __u8 indx, __u16 *regd, __u8 cmd)
+ {
+-	int i;
+-	__u8 data[4] = { phy, 0, 0, indx };
++	int i, ret;
+ 	__le16 regdi;
+-	int ret = -ETIMEDOUT;
++	__u8 data[4] = { phy, 0, 0, indx };
+ 
+ 	if (cmd & PHY_WRITE) {
+ 		__le16 *t = (__le16 *) & data[1];
+@@ -207,12 +217,15 @@ static int __mii_op(pegasus_t *p, __u8 phy, __u8 indx, __u16 *regd, __u8 cmd)
+ 		if (data[0] & PHY_DONE)
+ 			break;
+ 	}
+-	if (i >= REG_TIMEOUT)
++	if (i >= REG_TIMEOUT) {
++		ret = -ETIMEDOUT;
+ 		goto fail;
++	}
+ 	if (cmd & PHY_READ) {
+ 		ret = get_registers(p, PhyData, 2, &regdi);
++		if (ret < 0)
++			goto fail;
+ 		*regd = le16_to_cpu(regdi);
+-		return ret;
+ 	}
+ 	return 0;
+ fail:
+@@ -235,9 +248,13 @@ static int write_mii_word(pegasus_t *pegasus, __u8 phy, __u8 indx, __u16 *regd)
+ static int mdio_read(struct net_device *dev, int phy_id, int loc)
+ {
+ 	pegasus_t *pegasus = netdev_priv(dev);
++	int ret;
+ 	u16 res;
+ 
+-	read_mii_word(pegasus, phy_id, loc, &res);
++	ret = read_mii_word(pegasus, phy_id, loc, &res);
++	if (ret < 0)
++		return ret;
++
+ 	return (int)res;
+ }
+ 
+@@ -251,10 +268,9 @@ static void mdio_write(struct net_device *dev, int phy_id, int loc, int val)
+ 
+ static int read_eprom_word(pegasus_t *pegasus, __u8 index, __u16 *retdata)
+ {
+-	int i;
+-	__u8 tmp = 0;
++	int ret, i;
+ 	__le16 retdatai;
+-	int ret;
++	__u8 tmp = 0;
+ 
+ 	set_register(pegasus, EpromCtrl, 0);
+ 	set_register(pegasus, EpromOffset, index);
+@@ -262,21 +278,25 @@ static int read_eprom_word(pegasus_t *pegasus, __u8 index, __u16 *retdata)
+ 
+ 	for (i = 0; i < REG_TIMEOUT; i++) {
+ 		ret = get_registers(pegasus, EpromCtrl, 1, &tmp);
++		if (ret < 0)
++			goto fail;
+ 		if (tmp & EPROM_DONE)
+ 			break;
+-		if (ret == -ESHUTDOWN)
+-			goto fail;
+ 	}
+-	if (i >= REG_TIMEOUT)
++	if (i >= REG_TIMEOUT) {
++		ret = -ETIMEDOUT;
+ 		goto fail;
++	}
+ 
+ 	ret = get_registers(pegasus, EpromData, 2, &retdatai);
++	if (ret < 0)
++		goto fail;
+ 	*retdata = le16_to_cpu(retdatai);
+ 	return ret;
+ 
+ fail:
+-	netif_warn(pegasus, drv, pegasus->net, "%s failed\n", __func__);
+-	return -ETIMEDOUT;
++	netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
++	return ret;
+ }
+ 
+ #ifdef	PEGASUS_WRITE_EEPROM
+@@ -324,10 +344,10 @@ static int write_eprom_word(pegasus_t *pegasus, __u8 index, __u16 data)
+ 	return ret;
+ 
+ fail:
+-	netif_warn(pegasus, drv, pegasus->net, "%s failed\n", __func__);
++	netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
+ 	return -ETIMEDOUT;
+ }
+-#endif				/* PEGASUS_WRITE_EEPROM */
++#endif	/* PEGASUS_WRITE_EEPROM */
+ 
+ static inline int get_node_id(pegasus_t *pegasus, u8 *id)
+ {
+@@ -367,19 +387,21 @@ static void set_ethernet_addr(pegasus_t *pegasus)
+ 	return;
+ err:
+ 	eth_hw_addr_random(pegasus->net);
+-	dev_info(&pegasus->intf->dev, "software assigned MAC address.\n");
++	netif_dbg(pegasus, drv, pegasus->net, "software assigned MAC address.\n");
+ 
+ 	return;
+ }
+ 
+ static inline int reset_mac(pegasus_t *pegasus)
+ {
++	int ret, i;
+ 	__u8 data = 0x8;
+-	int i;
+ 
+ 	set_register(pegasus, EthCtrl1, data);
+ 	for (i = 0; i < REG_TIMEOUT; i++) {
+-		get_registers(pegasus, EthCtrl1, 1, &data);
++		ret = get_registers(pegasus, EthCtrl1, 1, &data);
++		if (ret < 0)
++			goto fail;
+ 		if (~data & 0x08) {
+ 			if (loopback)
+ 				break;
+@@ -402,22 +424,29 @@ static inline int reset_mac(pegasus_t *pegasus)
+ 	}
+ 	if (usb_dev_id[pegasus->dev_index].vendor == VENDOR_ELCON) {
+ 		__u16 auxmode;
+-		read_mii_word(pegasus, 3, 0x1b, &auxmode);
++		ret = read_mii_word(pegasus, 3, 0x1b, &auxmode);
++		if (ret < 0)
++			goto fail;
+ 		auxmode |= 4;
+ 		write_mii_word(pegasus, 3, 0x1b, &auxmode);
+ 	}
+ 
+ 	return 0;
++fail:
++	netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
++	return ret;
+ }
+ 
+ static int enable_net_traffic(struct net_device *dev, struct usb_device *usb)
+ {
+-	__u16 linkpart;
+-	__u8 data[4];
+ 	pegasus_t *pegasus = netdev_priv(dev);
+ 	int ret;
++	__u16 linkpart;
++	__u8 data[4];
+ 
+-	read_mii_word(pegasus, pegasus->phy, MII_LPA, &linkpart);
++	ret = read_mii_word(pegasus, pegasus->phy, MII_LPA, &linkpart);
++	if (ret < 0)
++		goto fail;
+ 	data[0] = 0xc8; /* TX & RX enable, append status, no CRC */
+ 	data[1] = 0;
+ 	if (linkpart & (ADVERTISE_100FULL | ADVERTISE_10FULL))
+@@ -435,11 +464,16 @@ static int enable_net_traffic(struct net_device *dev, struct usb_device *usb)
+ 	    usb_dev_id[pegasus->dev_index].vendor == VENDOR_LINKSYS2 ||
+ 	    usb_dev_id[pegasus->dev_index].vendor == VENDOR_DLINK) {
+ 		u16 auxmode;
+-		read_mii_word(pegasus, 0, 0x1b, &auxmode);
++		ret = read_mii_word(pegasus, 0, 0x1b, &auxmode);
++		if (ret < 0)
++			goto fail;
+ 		auxmode |= 4;
+ 		write_mii_word(pegasus, 0, 0x1b, &auxmode);
+ 	}
+ 
++	return 0;
++fail:
++	netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
+ 	return ret;
+ }
+ 
+@@ -447,9 +481,9 @@ static void read_bulk_callback(struct urb *urb)
+ {
+ 	pegasus_t *pegasus = urb->context;
+ 	struct net_device *net;
++	u8 *buf = urb->transfer_buffer;
+ 	int rx_status, count = urb->actual_length;
+ 	int status = urb->status;
+-	u8 *buf = urb->transfer_buffer;
+ 	__u16 pkt_len;
+ 
+ 	if (!pegasus)
+@@ -1005,8 +1039,7 @@ static int pegasus_ioctl(struct net_device *net, struct ifreq *rq, int cmd)
+ 		data[0] = pegasus->phy;
+ 		fallthrough;
+ 	case SIOCDEVPRIVATE + 1:
+-		read_mii_word(pegasus, data[0], data[1] & 0x1f, &data[3]);
+-		res = 0;
++		res = read_mii_word(pegasus, data[0], data[1] & 0x1f, &data[3]);
+ 		break;
+ 	case SIOCDEVPRIVATE + 2:
+ 		if (!capable(CAP_NET_ADMIN))
+@@ -1040,22 +1073,25 @@ static void pegasus_set_multicast(struct net_device *net)
+ 
+ static __u8 mii_phy_probe(pegasus_t *pegasus)
+ {
+-	int i;
++	int i, ret;
+ 	__u16 tmp;
+ 
+ 	for (i = 0; i < 32; i++) {
+-		read_mii_word(pegasus, i, MII_BMSR, &tmp);
++		ret = read_mii_word(pegasus, i, MII_BMSR, &tmp);
++		if (ret < 0)
++			goto fail;
+ 		if (tmp == 0 || tmp == 0xffff || (tmp & BMSR_MEDIA) == 0)
+ 			continue;
+ 		else
+ 			return i;
+ 	}
+-
++fail:
+ 	return 0xff;
+ }
+ 
+ static inline void setup_pegasus_II(pegasus_t *pegasus)
+ {
++	int ret;
+ 	__u8 data = 0xa5;
+ 
+ 	set_register(pegasus, Reg1d, 0);
+@@ -1067,7 +1103,9 @@ static inline void setup_pegasus_II(pegasus_t *pegasus)
+ 		set_register(pegasus, Reg7b, 2);
+ 
+ 	set_register(pegasus, 0x83, data);
+-	get_registers(pegasus, 0x83, 1, &data);
++	ret = get_registers(pegasus, 0x83, 1, &data);
++	if (ret < 0)
++		goto fail;
+ 
+ 	if (data == 0xa5)
+ 		pegasus->chip = 0x8513;
+@@ -1082,6 +1120,10 @@ static inline void setup_pegasus_II(pegasus_t *pegasus)
+ 		set_register(pegasus, Reg81, 6);
+ 	else
+ 		set_register(pegasus, Reg81, 2);
++
++	return;
++fail:
++	netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
+ }
+ 
+ static void check_carrier(struct work_struct *work)
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 105622e1defab..0bb5b1c786546 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -3432,7 +3432,7 @@ static void rtl_clear_bp(struct r8152 *tp, u16 type)
+ 	case RTL_VER_09:
+ 	default:
+ 		if (type == MCU_TYPE_USB) {
+-			ocp_write_byte(tp, MCU_TYPE_USB, USB_BP2_EN, 0);
++			ocp_write_word(tp, MCU_TYPE_USB, USB_BP2_EN, 0);
+ 
+ 			ocp_write_word(tp, MCU_TYPE_USB, USB_BP_8, 0);
+ 			ocp_write_word(tp, MCU_TYPE_USB, USB_BP_9, 0);
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 7d1f609306f94..cbe47eed7cc3c 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -63,7 +63,7 @@ static const unsigned long guest_offloads[] = {
+ 	VIRTIO_NET_F_GUEST_CSUM
+ };
+ 
+-#define GUEST_OFFLOAD_LRO_MASK ((1ULL << VIRTIO_NET_F_GUEST_TSO4) | \
++#define GUEST_OFFLOAD_GRO_HW_MASK ((1ULL << VIRTIO_NET_F_GUEST_TSO4) | \
+ 				(1ULL << VIRTIO_NET_F_GUEST_TSO6) | \
+ 				(1ULL << VIRTIO_NET_F_GUEST_ECN)  | \
+ 				(1ULL << VIRTIO_NET_F_GUEST_UFO))
+@@ -195,6 +195,9 @@ struct virtnet_info {
+ 	/* # of XDP queue pairs currently used by the driver */
+ 	u16 xdp_queue_pairs;
+ 
++	/* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
++	bool xdp_enabled;
++
+ 	/* I like... big packets and I cannot lie! */
+ 	bool big_packets;
+ 
+@@ -485,12 +488,41 @@ static int __virtnet_xdp_xmit_one(struct virtnet_info *vi,
+ 	return 0;
+ }
+ 
+-static struct send_queue *virtnet_xdp_sq(struct virtnet_info *vi)
+-{
+-	unsigned int qp;
+-
+-	qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();
+-	return &vi->sq[qp];
++/* when vi->curr_queue_pairs > nr_cpu_ids, the txq/sq is only used for xdp tx on
++ * the current cpu, so it does not need to be locked.
++ *
++ * Here we use marco instead of inline functions because we have to deal with
++ * three issues at the same time: 1. the choice of sq. 2. judge and execute the
++ * lock/unlock of txq 3. make sparse happy. It is difficult for two inline
++ * functions to perfectly solve these three problems at the same time.
++ */
++#define virtnet_xdp_get_sq(vi) ({                                       \
++	struct netdev_queue *txq;                                       \
++	typeof(vi) v = (vi);                                            \
++	unsigned int qp;                                                \
++									\
++	if (v->curr_queue_pairs > nr_cpu_ids) {                         \
++		qp = v->curr_queue_pairs - v->xdp_queue_pairs;          \
++		qp += smp_processor_id();                               \
++		txq = netdev_get_tx_queue(v->dev, qp);                  \
++		__netif_tx_acquire(txq);                                \
++	} else {                                                        \
++		qp = smp_processor_id() % v->curr_queue_pairs;          \
++		txq = netdev_get_tx_queue(v->dev, qp);                  \
++		__netif_tx_lock(txq, raw_smp_processor_id());           \
++	}                                                               \
++	v->sq + qp;                                                     \
++})
++
++#define virtnet_xdp_put_sq(vi, q) {                                     \
++	struct netdev_queue *txq;                                       \
++	typeof(vi) v = (vi);                                            \
++									\
++	txq = netdev_get_tx_queue(v->dev, (q) - v->sq);                 \
++	if (v->curr_queue_pairs > nr_cpu_ids)                           \
++		__netif_tx_release(txq);                                \
++	else                                                            \
++		__netif_tx_unlock(txq);                                 \
+ }
+ 
+ static int virtnet_xdp_xmit(struct net_device *dev,
+@@ -516,7 +548,7 @@ static int virtnet_xdp_xmit(struct net_device *dev,
+ 	if (!xdp_prog)
+ 		return -ENXIO;
+ 
+-	sq = virtnet_xdp_sq(vi);
++	sq = virtnet_xdp_get_sq(vi);
+ 
+ 	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) {
+ 		ret = -EINVAL;
+@@ -564,12 +596,13 @@ out:
+ 	sq->stats.kicks += kicks;
+ 	u64_stats_update_end(&sq->stats.syncp);
+ 
++	virtnet_xdp_put_sq(vi, sq);
+ 	return ret;
+ }
+ 
+ static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
+ {
+-	return vi->xdp_queue_pairs ? VIRTIO_XDP_HEADROOM : 0;
++	return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
+ }
+ 
+ /* We copy the packet for XDP in the following cases:
+@@ -1473,12 +1506,13 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
+ 		xdp_do_flush();
+ 
+ 	if (xdp_xmit & VIRTIO_XDP_TX) {
+-		sq = virtnet_xdp_sq(vi);
++		sq = virtnet_xdp_get_sq(vi);
+ 		if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq)) {
+ 			u64_stats_update_begin(&sq->stats.syncp);
+ 			sq->stats.kicks++;
+ 			u64_stats_update_end(&sq->stats.syncp);
+ 		}
++		virtnet_xdp_put_sq(vi, sq);
+ 	}
+ 
+ 	return received;
+@@ -2432,7 +2466,7 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ 	        virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_ECN) ||
+ 		virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO) ||
+ 		virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM))) {
+-		NL_SET_ERR_MSG_MOD(extack, "Can't set XDP while host is implementing LRO/CSUM, disable LRO/CSUM first");
++		NL_SET_ERR_MSG_MOD(extack, "Can't set XDP while host is implementing GRO_HW/CSUM, disable GRO_HW/CSUM first");
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+@@ -2453,10 +2487,9 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ 
+ 	/* XDP requires extra queues for XDP_TX */
+ 	if (curr_qp + xdp_qp > vi->max_queue_pairs) {
+-		NL_SET_ERR_MSG_MOD(extack, "Too few free TX rings available");
+-		netdev_warn(dev, "request %i queues but max is %i\n",
++		netdev_warn(dev, "XDP request %i queues but max is %i. XDP_TX and XDP_REDIRECT will operate in a slower locked tx mode.\n",
+ 			    curr_qp + xdp_qp, vi->max_queue_pairs);
+-		return -ENOMEM;
++		xdp_qp = 0;
+ 	}
+ 
+ 	old_prog = rtnl_dereference(vi->rq[0].xdp_prog);
+@@ -2490,11 +2523,14 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ 	vi->xdp_queue_pairs = xdp_qp;
+ 
+ 	if (prog) {
++		vi->xdp_enabled = true;
+ 		for (i = 0; i < vi->max_queue_pairs; i++) {
+ 			rcu_assign_pointer(vi->rq[i].xdp_prog, prog);
+ 			if (i == 0 && !old_prog)
+ 				virtnet_clear_guest_offloads(vi);
+ 		}
++	} else {
++		vi->xdp_enabled = false;
+ 	}
+ 
+ 	for (i = 0; i < vi->max_queue_pairs; i++) {
+@@ -2561,15 +2597,15 @@ static int virtnet_set_features(struct net_device *dev,
+ 	u64 offloads;
+ 	int err;
+ 
+-	if ((dev->features ^ features) & NETIF_F_LRO) {
+-		if (vi->xdp_queue_pairs)
++	if ((dev->features ^ features) & NETIF_F_GRO_HW) {
++		if (vi->xdp_enabled)
+ 			return -EBUSY;
+ 
+-		if (features & NETIF_F_LRO)
++		if (features & NETIF_F_GRO_HW)
+ 			offloads = vi->guest_offloads_capable;
+ 		else
+ 			offloads = vi->guest_offloads_capable &
+-				   ~GUEST_OFFLOAD_LRO_MASK;
++				   ~GUEST_OFFLOAD_GRO_HW_MASK;
+ 
+ 		err = virtnet_set_guest_offloads(vi, offloads);
+ 		if (err)
+@@ -3044,9 +3080,9 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 		dev->features |= NETIF_F_RXCSUM;
+ 	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) ||
+ 	    virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6))
+-		dev->features |= NETIF_F_LRO;
++		dev->features |= NETIF_F_GRO_HW;
+ 	if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS))
+-		dev->hw_features |= NETIF_F_LRO;
++		dev->hw_features |= NETIF_F_GRO_HW;
+ 
+ 	dev->vlan_features = dev->features;
+ 
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 2746f77745e4d..d406da82b4fb8 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1313,6 +1313,8 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ 	bool need_strict = rt6_need_strict(&ipv6_hdr(skb)->daddr);
+ 	bool is_ndisc = ipv6_ndisc_frame(skb);
+ 
++	nf_reset_ct(skb);
++
+ 	/* loopback, multicast & non-ND link-local traffic; do not push through
+ 	 * packet taps again. Reset pkt_type for upper layers to process skb.
+ 	 * For strict packets with a source LLA, determine the dst using the
+@@ -1369,6 +1371,8 @@ static struct sk_buff *vrf_ip_rcv(struct net_device *vrf_dev,
+ 	skb->skb_iif = vrf_dev->ifindex;
+ 	IPCB(skb)->flags |= IPSKB_L3SLAVE;
+ 
++	nf_reset_ct(skb);
++
+ 	if (ipv4_is_multicast(ip_hdr(skb)->daddr))
+ 		goto out;
+ 
+diff --git a/drivers/net/wireless/ath/ath.h b/drivers/net/wireless/ath/ath.h
+index 7a364eca46d64..f083fb9038c36 100644
+--- a/drivers/net/wireless/ath/ath.h
++++ b/drivers/net/wireless/ath/ath.h
+@@ -197,12 +197,13 @@ struct sk_buff *ath_rxbuf_alloc(struct ath_common *common,
+ bool ath_is_mybeacon(struct ath_common *common, struct ieee80211_hdr *hdr);
+ 
+ void ath_hw_setbssidmask(struct ath_common *common);
+-void ath_key_delete(struct ath_common *common, struct ieee80211_key_conf *key);
++void ath_key_delete(struct ath_common *common, u8 hw_key_idx);
+ int ath_key_config(struct ath_common *common,
+ 			  struct ieee80211_vif *vif,
+ 			  struct ieee80211_sta *sta,
+ 			  struct ieee80211_key_conf *key);
+ bool ath_hw_keyreset(struct ath_common *common, u16 entry);
++bool ath_hw_keysetmac(struct ath_common *common, u16 entry, const u8 *mac);
+ void ath_hw_cycle_counters_update(struct ath_common *common);
+ int32_t ath_hw_get_listen_time(struct ath_common *common);
+ 
+diff --git a/drivers/net/wireless/ath/ath5k/mac80211-ops.c b/drivers/net/wireless/ath/ath5k/mac80211-ops.c
+index 5e866a193ed04..d065600791c11 100644
+--- a/drivers/net/wireless/ath/ath5k/mac80211-ops.c
++++ b/drivers/net/wireless/ath/ath5k/mac80211-ops.c
+@@ -521,7 +521,7 @@ ath5k_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 		}
+ 		break;
+ 	case DISABLE_KEY:
+-		ath_key_delete(common, key);
++		ath_key_delete(common, key->hw_key_idx);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_main.c b/drivers/net/wireless/ath/ath9k/htc_drv_main.c
+index 2b7832b1c8008..72ef319feeda7 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_main.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_main.c
+@@ -1461,7 +1461,7 @@ static int ath9k_htc_set_key(struct ieee80211_hw *hw,
+ 		}
+ 		break;
+ 	case DISABLE_KEY:
+-		ath_key_delete(common, key);
++		ath_key_delete(common, key->hw_key_idx);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+diff --git a/drivers/net/wireless/ath/ath9k/hw.h b/drivers/net/wireless/ath/ath9k/hw.h
+index 023599e10dd51..b7b65b1c90e8f 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.h
++++ b/drivers/net/wireless/ath/ath9k/hw.h
+@@ -820,6 +820,7 @@ struct ath_hw {
+ 	struct ath9k_pacal_info pacal_info;
+ 	struct ar5416Stats stats;
+ 	struct ath9k_tx_queue_info txq[ATH9K_NUM_TX_QUEUES];
++	DECLARE_BITMAP(pending_del_keymap, ATH_KEYMAX);
+ 
+ 	enum ath9k_int imask;
+ 	u32 imrs2_reg;
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index ac805f56627ab..5739c1dbf1661 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -826,12 +826,80 @@ exit:
+ 	ieee80211_free_txskb(hw, skb);
+ }
+ 
++static bool ath9k_txq_list_has_key(struct list_head *txq_list, u32 keyix)
++{
++	struct ath_buf *bf;
++	struct ieee80211_tx_info *txinfo;
++	struct ath_frame_info *fi;
++
++	list_for_each_entry(bf, txq_list, list) {
++		if (bf->bf_state.stale || !bf->bf_mpdu)
++			continue;
++
++		txinfo = IEEE80211_SKB_CB(bf->bf_mpdu);
++		fi = (struct ath_frame_info *)&txinfo->rate_driver_data[0];
++		if (fi->keyix == keyix)
++			return true;
++	}
++
++	return false;
++}
++
++static bool ath9k_txq_has_key(struct ath_softc *sc, u32 keyix)
++{
++	struct ath_hw *ah = sc->sc_ah;
++	int i;
++	struct ath_txq *txq;
++	bool key_in_use = false;
++
++	for (i = 0; !key_in_use && i < ATH9K_NUM_TX_QUEUES; i++) {
++		if (!ATH_TXQ_SETUP(sc, i))
++			continue;
++		txq = &sc->tx.txq[i];
++		if (!txq->axq_depth)
++			continue;
++		if (!ath9k_hw_numtxpending(ah, txq->axq_qnum))
++			continue;
++
++		ath_txq_lock(sc, txq);
++		key_in_use = ath9k_txq_list_has_key(&txq->axq_q, keyix);
++		if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) {
++			int idx = txq->txq_tailidx;
++
++			while (!key_in_use &&
++			       !list_empty(&txq->txq_fifo[idx])) {
++				key_in_use = ath9k_txq_list_has_key(
++					&txq->txq_fifo[idx], keyix);
++				INCR(idx, ATH_TXFIFO_DEPTH);
++			}
++		}
++		ath_txq_unlock(sc, txq);
++	}
++
++	return key_in_use;
++}
++
++static void ath9k_pending_key_del(struct ath_softc *sc, u8 keyix)
++{
++	struct ath_hw *ah = sc->sc_ah;
++	struct ath_common *common = ath9k_hw_common(ah);
++
++	if (!test_bit(keyix, ah->pending_del_keymap) ||
++	    ath9k_txq_has_key(sc, keyix))
++		return;
++
++	/* No more TXQ frames point to this key cache entry, so delete it. */
++	clear_bit(keyix, ah->pending_del_keymap);
++	ath_key_delete(common, keyix);
++}
++
+ static void ath9k_stop(struct ieee80211_hw *hw)
+ {
+ 	struct ath_softc *sc = hw->priv;
+ 	struct ath_hw *ah = sc->sc_ah;
+ 	struct ath_common *common = ath9k_hw_common(ah);
+ 	bool prev_idle;
++	int i;
+ 
+ 	ath9k_deinit_channel_context(sc);
+ 
+@@ -899,6 +967,14 @@ static void ath9k_stop(struct ieee80211_hw *hw)
+ 
+ 	spin_unlock_bh(&sc->sc_pcu_lock);
+ 
++	for (i = 0; i < ATH_KEYMAX; i++)
++		ath9k_pending_key_del(sc, i);
++
++	/* Clear key cache entries explicitly to get rid of any potentially
++	 * remaining keys.
++	 */
++	ath9k_cmn_init_crypto(sc->sc_ah);
++
+ 	ath9k_ps_restore(sc);
+ 
+ 	sc->ps_idle = prev_idle;
+@@ -1548,12 +1624,11 @@ static void ath9k_del_ps_key(struct ath_softc *sc,
+ {
+ 	struct ath_common *common = ath9k_hw_common(sc->sc_ah);
+ 	struct ath_node *an = (struct ath_node *) sta->drv_priv;
+-	struct ieee80211_key_conf ps_key = { .hw_key_idx = an->ps_key };
+ 
+ 	if (!an->ps_key)
+ 	    return;
+ 
+-	ath_key_delete(common, &ps_key);
++	ath_key_delete(common, an->ps_key);
+ 	an->ps_key = 0;
+ 	an->key_idx[0] = 0;
+ }
+@@ -1724,6 +1799,12 @@ static int ath9k_set_key(struct ieee80211_hw *hw,
+ 	if (sta)
+ 		an = (struct ath_node *)sta->drv_priv;
+ 
++	/* Delete pending key cache entries if no more frames are pointing to
++	 * them in TXQs.
++	 */
++	for (i = 0; i < ATH_KEYMAX; i++)
++		ath9k_pending_key_del(sc, i);
++
+ 	switch (cmd) {
+ 	case SET_KEY:
+ 		if (sta)
+@@ -1753,7 +1834,15 @@ static int ath9k_set_key(struct ieee80211_hw *hw,
+ 		}
+ 		break;
+ 	case DISABLE_KEY:
+-		ath_key_delete(common, key);
++		if (ath9k_txq_has_key(sc, key->hw_key_idx)) {
++			/* Delay key cache entry deletion until there are no
++			 * remaining TXQ frames pointing to this entry.
++			 */
++			set_bit(key->hw_key_idx, sc->sc_ah->pending_del_keymap);
++			ath_hw_keysetmac(common, key->hw_key_idx, NULL);
++		} else {
++			ath_key_delete(common, key->hw_key_idx);
++		}
+ 		if (an) {
+ 			for (i = 0; i < ARRAY_SIZE(an->key_idx); i++) {
+ 				if (an->key_idx[i] != key->hw_key_idx)
+diff --git a/drivers/net/wireless/ath/key.c b/drivers/net/wireless/ath/key.c
+index 1816b4e7dc264..61b59a804e308 100644
+--- a/drivers/net/wireless/ath/key.c
++++ b/drivers/net/wireless/ath/key.c
+@@ -84,8 +84,7 @@ bool ath_hw_keyreset(struct ath_common *common, u16 entry)
+ }
+ EXPORT_SYMBOL(ath_hw_keyreset);
+ 
+-static bool ath_hw_keysetmac(struct ath_common *common,
+-			     u16 entry, const u8 *mac)
++bool ath_hw_keysetmac(struct ath_common *common, u16 entry, const u8 *mac)
+ {
+ 	u32 macHi, macLo;
+ 	u32 unicast_flag = AR_KEYTABLE_VALID;
+@@ -125,6 +124,7 @@ static bool ath_hw_keysetmac(struct ath_common *common,
+ 
+ 	return true;
+ }
++EXPORT_SYMBOL(ath_hw_keysetmac);
+ 
+ static bool ath_hw_set_keycache_entry(struct ath_common *common, u16 entry,
+ 				      const struct ath_keyval *k,
+@@ -581,29 +581,38 @@ EXPORT_SYMBOL(ath_key_config);
+ /*
+  * Delete Key.
+  */
+-void ath_key_delete(struct ath_common *common, struct ieee80211_key_conf *key)
++void ath_key_delete(struct ath_common *common, u8 hw_key_idx)
+ {
+-	ath_hw_keyreset(common, key->hw_key_idx);
+-	if (key->hw_key_idx < IEEE80211_WEP_NKID)
++	/* Leave CCMP and TKIP (main key) configured to avoid disabling
++	 * encryption for potentially pending frames already in a TXQ with the
++	 * keyix pointing to this key entry. Instead, only clear the MAC address
++	 * to prevent RX processing from using this key cache entry.
++	 */
++	if (test_bit(hw_key_idx, common->ccmp_keymap) ||
++	    test_bit(hw_key_idx, common->tkip_keymap))
++		ath_hw_keysetmac(common, hw_key_idx, NULL);
++	else
++		ath_hw_keyreset(common, hw_key_idx);
++	if (hw_key_idx < IEEE80211_WEP_NKID)
+ 		return;
+ 
+-	clear_bit(key->hw_key_idx, common->keymap);
+-	clear_bit(key->hw_key_idx, common->ccmp_keymap);
+-	if (key->cipher != WLAN_CIPHER_SUITE_TKIP)
++	clear_bit(hw_key_idx, common->keymap);
++	clear_bit(hw_key_idx, common->ccmp_keymap);
++	if (!test_bit(hw_key_idx, common->tkip_keymap))
+ 		return;
+ 
+-	clear_bit(key->hw_key_idx + 64, common->keymap);
++	clear_bit(hw_key_idx + 64, common->keymap);
+ 
+-	clear_bit(key->hw_key_idx, common->tkip_keymap);
+-	clear_bit(key->hw_key_idx + 64, common->tkip_keymap);
++	clear_bit(hw_key_idx, common->tkip_keymap);
++	clear_bit(hw_key_idx + 64, common->tkip_keymap);
+ 
+ 	if (!(common->crypt_caps & ATH_CRYPT_CAP_MIC_COMBINED)) {
+-		ath_hw_keyreset(common, key->hw_key_idx + 32);
+-		clear_bit(key->hw_key_idx + 32, common->keymap);
+-		clear_bit(key->hw_key_idx + 64 + 32, common->keymap);
++		ath_hw_keyreset(common, hw_key_idx + 32);
++		clear_bit(hw_key_idx + 32, common->keymap);
++		clear_bit(hw_key_idx + 64 + 32, common->keymap);
+ 
+-		clear_bit(key->hw_key_idx + 32, common->tkip_keymap);
+-		clear_bit(key->hw_key_idx + 64 + 32, common->tkip_keymap);
++		clear_bit(hw_key_idx + 32, common->tkip_keymap);
++		clear_bit(hw_key_idx + 64 + 32, common->tkip_keymap);
+ 	}
+ }
+ EXPORT_SYMBOL(ath_key_delete);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index bb1122e257dd4..cd2401d4764f2 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -1905,6 +1905,7 @@ static void quirk_ryzen_xhci_d3hot(struct pci_dev *dev)
+ }
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x15e0, quirk_ryzen_xhci_d3hot);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x15e1, quirk_ryzen_xhci_d3hot);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x1639, quirk_ryzen_xhci_d3hot);
+ 
+ #ifdef CONFIG_X86_IO_APIC
+ static int dmi_disable_ioapicreroute(const struct dmi_system_id *d)
+diff --git a/drivers/ptp/Kconfig b/drivers/ptp/Kconfig
+index deb429a3dff1d..3e377f3c69e5d 100644
+--- a/drivers/ptp/Kconfig
++++ b/drivers/ptp/Kconfig
+@@ -90,7 +90,8 @@ config PTP_1588_CLOCK_INES
+ config PTP_1588_CLOCK_PCH
+ 	tristate "Intel PCH EG20T as PTP clock"
+ 	depends on X86_32 || COMPILE_TEST
+-	depends on HAS_IOMEM && NET
++	depends on HAS_IOMEM && PCI
++	depends on NET
+ 	imply PTP_1588_CLOCK
+ 	help
+ 	  This driver adds support for using the PCH EG20T as a PTP
+diff --git a/drivers/scsi/device_handler/scsi_dh_rdac.c b/drivers/scsi/device_handler/scsi_dh_rdac.c
+index 5efc959493ecd..85a71bafaea76 100644
+--- a/drivers/scsi/device_handler/scsi_dh_rdac.c
++++ b/drivers/scsi/device_handler/scsi_dh_rdac.c
+@@ -453,8 +453,8 @@ static int initialize_controller(struct scsi_device *sdev,
+ 		if (!h->ctlr)
+ 			err = SCSI_DH_RES_TEMP_UNAVAIL;
+ 		else {
+-			list_add_rcu(&h->node, &h->ctlr->dh_list);
+ 			h->sdev = sdev;
++			list_add_rcu(&h->node, &h->ctlr->dh_list);
+ 		}
+ 		spin_unlock(&list_lock);
+ 		err = SCSI_DH_OK;
+@@ -778,11 +778,11 @@ static void rdac_bus_detach( struct scsi_device *sdev )
+ 	spin_lock(&list_lock);
+ 	if (h->ctlr) {
+ 		list_del_rcu(&h->node);
+-		h->sdev = NULL;
+ 		kref_put(&h->ctlr->kref, release_controller);
+ 	}
+ 	spin_unlock(&list_lock);
+ 	sdev->handler_data = NULL;
++	synchronize_rcu();
+ 	kfree(h);
+ }
+ 
+diff --git a/drivers/scsi/megaraid/megaraid_mm.c b/drivers/scsi/megaraid/megaraid_mm.c
+index 8df53446641ac..422b726e2ac10 100644
+--- a/drivers/scsi/megaraid/megaraid_mm.c
++++ b/drivers/scsi/megaraid/megaraid_mm.c
+@@ -238,7 +238,7 @@ mraid_mm_get_adapter(mimd_t __user *umimd, int *rval)
+ 	mimd_t		mimd;
+ 	uint32_t	adapno;
+ 	int		iterator;
+-
++	bool		is_found;
+ 
+ 	if (copy_from_user(&mimd, umimd, sizeof(mimd_t))) {
+ 		*rval = -EFAULT;
+@@ -254,12 +254,16 @@ mraid_mm_get_adapter(mimd_t __user *umimd, int *rval)
+ 
+ 	adapter = NULL;
+ 	iterator = 0;
++	is_found = false;
+ 
+ 	list_for_each_entry(adapter, &adapters_list_g, list) {
+-		if (iterator++ == adapno) break;
++		if (iterator++ == adapno) {
++			is_found = true;
++			break;
++		}
+ 	}
+ 
+-	if (!adapter) {
++	if (!is_found) {
+ 		*rval = -ENODEV;
+ 		return NULL;
+ 	}
+@@ -725,6 +729,7 @@ ioctl_done(uioc_t *kioc)
+ 	uint32_t	adapno;
+ 	int		iterator;
+ 	mraid_mmadp_t*	adapter;
++	bool		is_found;
+ 
+ 	/*
+ 	 * When the kioc returns from driver, make sure it still doesn't
+@@ -747,19 +752,23 @@ ioctl_done(uioc_t *kioc)
+ 		iterator	= 0;
+ 		adapter		= NULL;
+ 		adapno		= kioc->adapno;
++		is_found	= false;
+ 
+ 		con_log(CL_ANN, ( KERN_WARNING "megaraid cmm: completed "
+ 					"ioctl that was timedout before\n"));
+ 
+ 		list_for_each_entry(adapter, &adapters_list_g, list) {
+-			if (iterator++ == adapno) break;
++			if (iterator++ == adapno) {
++				is_found = true;
++				break;
++			}
+ 		}
+ 
+ 		kioc->timedout = 0;
+ 
+-		if (adapter) {
++		if (is_found)
+ 			mraid_mm_dealloc_kioc( adapter, kioc );
+-		}
++
+ 	}
+ 	else {
+ 		wake_up(&wait_q);
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index 39de9a9360d3e..c3bb58885033b 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -684,8 +684,7 @@ int pm8001_dev_found(struct domain_device *dev)
+ 
+ void pm8001_task_done(struct sas_task *task)
+ {
+-	if (!del_timer(&task->slow_task->timer))
+-		return;
++	del_timer(&task->slow_task->timer);
+ 	complete(&task->slow_task->completion);
+ }
+ 
+@@ -693,9 +692,14 @@ static void pm8001_tmf_timedout(struct timer_list *t)
+ {
+ 	struct sas_task_slow *slow = from_timer(slow, t, timer);
+ 	struct sas_task *task = slow->task;
++	unsigned long flags;
+ 
+-	task->task_state_flags |= SAS_TASK_STATE_ABORTED;
+-	complete(&task->slow_task->completion);
++	spin_lock_irqsave(&task->task_state_lock, flags);
++	if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) {
++		task->task_state_flags |= SAS_TASK_STATE_ABORTED;
++		complete(&task->slow_task->completion);
++	}
++	spin_unlock_irqrestore(&task->task_state_lock, flags);
+ }
+ 
+ #define PM8001_TASK_TIMEOUT 20
+@@ -748,13 +752,10 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
+ 		}
+ 		res = -TMF_RESP_FUNC_FAILED;
+ 		/* Even TMF timed out, return direct. */
+-		if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) {
+-			if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) {
+-				pm8001_dbg(pm8001_ha, FAIL,
+-					   "TMF task[%x]timeout.\n",
+-					   tmf->tmf);
+-				goto ex_err;
+-			}
++		if (task->task_state_flags & SAS_TASK_STATE_ABORTED) {
++			pm8001_dbg(pm8001_ha, FAIL, "TMF task[%x]timeout.\n",
++				   tmf->tmf);
++			goto ex_err;
+ 		}
+ 
+ 		if (task->task_status.resp == SAS_TASK_COMPLETE &&
+@@ -834,12 +835,9 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
+ 		wait_for_completion(&task->slow_task->completion);
+ 		res = TMF_RESP_FUNC_FAILED;
+ 		/* Even TMF timed out, return direct. */
+-		if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) {
+-			if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) {
+-				pm8001_dbg(pm8001_ha, FAIL,
+-					   "TMF task timeout.\n");
+-				goto ex_err;
+-			}
++		if (task->task_state_flags & SAS_TASK_STATE_ABORTED) {
++			pm8001_dbg(pm8001_ha, FAIL, "TMF task timeout.\n");
++			goto ex_err;
+ 		}
+ 
+ 		if (task->task_status.resp == SAS_TASK_COMPLETE &&
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 9af50e6f94c4c..8e474b1452495 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -453,7 +453,8 @@ static struct scsi_target *scsi_alloc_target(struct device *parent,
+ 		error = shost->hostt->target_alloc(starget);
+ 
+ 		if(error) {
+-			dev_printk(KERN_ERR, dev, "target allocation failed, error %d\n", error);
++			if (error != -ENXIO)
++				dev_err(dev, "target allocation failed, error %d\n", error);
+ 			/* don't want scsi_target_reap to do the final
+ 			 * put because it will be under the host lock */
+ 			scsi_target_destroy(starget);
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index d6e344fa33ad9..4dcced95c8b47 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -807,11 +807,14 @@ store_state_field(struct device *dev, struct device_attribute *attr,
+ 	mutex_lock(&sdev->state_mutex);
+ 	ret = scsi_device_set_state(sdev, state);
+ 	/*
+-	 * If the device state changes to SDEV_RUNNING, we need to run
+-	 * the queue to avoid I/O hang.
++	 * If the device state changes to SDEV_RUNNING, we need to
++	 * rescan the device to revalidate it, and run the queue to
++	 * avoid I/O hang.
+ 	 */
+-	if (ret == 0 && state == SDEV_RUNNING)
++	if (ret == 0 && state == SDEV_RUNNING) {
++		scsi_rescan_device(dev);
+ 		blk_mq_run_hw_queues(sdev->request_queue, true);
++	}
+ 	mutex_unlock(&sdev->state_mutex);
+ 
+ 	return ret == 0 ? count : -EINVAL;
+diff --git a/drivers/slimbus/messaging.c b/drivers/slimbus/messaging.c
+index d5879142dbef1..ddf0371ad52b2 100644
+--- a/drivers/slimbus/messaging.c
++++ b/drivers/slimbus/messaging.c
+@@ -66,7 +66,7 @@ int slim_alloc_txn_tid(struct slim_controller *ctrl, struct slim_msg_txn *txn)
+ 	int ret = 0;
+ 
+ 	spin_lock_irqsave(&ctrl->txn_lock, flags);
+-	ret = idr_alloc_cyclic(&ctrl->tid_idr, txn, 0,
++	ret = idr_alloc_cyclic(&ctrl->tid_idr, txn, 1,
+ 				SLIM_MAX_TIDS, GFP_ATOMIC);
+ 	if (ret < 0) {
+ 		spin_unlock_irqrestore(&ctrl->txn_lock, flags);
+@@ -131,7 +131,8 @@ int slim_do_transfer(struct slim_controller *ctrl, struct slim_msg_txn *txn)
+ 			goto slim_xfer_err;
+ 		}
+ 	}
+-
++	/* Initialize tid to invalid value */
++	txn->tid = 0;
+ 	need_tid = slim_tid_txn(txn->mt, txn->mc);
+ 
+ 	if (need_tid) {
+@@ -163,7 +164,7 @@ int slim_do_transfer(struct slim_controller *ctrl, struct slim_msg_txn *txn)
+ 			txn->mt, txn->mc, txn->la, ret);
+ 
+ slim_xfer_err:
+-	if (!clk_pause_msg && (!need_tid  || ret == -ETIMEDOUT)) {
++	if (!clk_pause_msg && (txn->tid == 0  || ret == -ETIMEDOUT)) {
+ 		/*
+ 		 * remove runtime-pm vote if this was TX only, or
+ 		 * if there was error during this transaction
+diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c
+index 50cfd67c2871e..d0540376221c0 100644
+--- a/drivers/slimbus/qcom-ngd-ctrl.c
++++ b/drivers/slimbus/qcom-ngd-ctrl.c
+@@ -1065,7 +1065,8 @@ static void qcom_slim_ngd_setup(struct qcom_slim_ngd_ctrl *ctrl)
+ {
+ 	u32 cfg = readl_relaxed(ctrl->ngd->base);
+ 
+-	if (ctrl->state == QCOM_SLIM_NGD_CTRL_DOWN)
++	if (ctrl->state == QCOM_SLIM_NGD_CTRL_DOWN ||
++		ctrl->state == QCOM_SLIM_NGD_CTRL_ASLEEP)
+ 		qcom_slim_ngd_init_dma(ctrl);
+ 
+ 	/* By default enable message queues */
+@@ -1116,6 +1117,7 @@ static int qcom_slim_ngd_power_up(struct qcom_slim_ngd_ctrl *ctrl)
+ 			dev_info(ctrl->dev, "Subsys restart: ADSP active framer\n");
+ 			return 0;
+ 		}
++		qcom_slim_ngd_setup(ctrl);
+ 		return 0;
+ 	}
+ 
+@@ -1506,6 +1508,7 @@ static int __maybe_unused qcom_slim_ngd_runtime_suspend(struct device *dev)
+ 	struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(dev);
+ 	int ret = 0;
+ 
++	qcom_slim_ngd_exit_dma(ctrl);
+ 	if (!ctrl->qmi.handle)
+ 		return 0;
+ 
+diff --git a/drivers/soc/mediatek/mtk-mmsys.c b/drivers/soc/mediatek/mtk-mmsys.c
+index a55f255111730..36ad66bb221b4 100644
+--- a/drivers/soc/mediatek/mtk-mmsys.c
++++ b/drivers/soc/mediatek/mtk-mmsys.c
+@@ -5,13 +5,11 @@
+  */
+ 
+ #include <linux/device.h>
++#include <linux/io.h>
+ #include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/soc/mediatek/mtk-mmsys.h>
+ 
+-#include "../../gpu/drm/mediatek/mtk_drm_ddp.h"
+-#include "../../gpu/drm/mediatek/mtk_drm_ddp_comp.h"
+-
+ #define DISP_REG_CONFIG_DISP_OVL0_MOUT_EN	0x040
+ #define DISP_REG_CONFIG_DISP_OVL1_MOUT_EN	0x044
+ #define DISP_REG_CONFIG_DISP_OD_MOUT_EN		0x048
+diff --git a/drivers/spi/spi-mux.c b/drivers/spi/spi-mux.c
+index 37dfc6e828042..9708b7827ff70 100644
+--- a/drivers/spi/spi-mux.c
++++ b/drivers/spi/spi-mux.c
+@@ -167,10 +167,17 @@ err_put_ctlr:
+ 	return ret;
+ }
+ 
++static const struct spi_device_id spi_mux_id[] = {
++	{ "spi-mux" },
++	{ }
++};
++MODULE_DEVICE_TABLE(spi, spi_mux_id);
++
+ static const struct of_device_id spi_mux_of_match[] = {
+ 	{ .compatible = "spi-mux" },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, spi_mux_of_match);
+ 
+ static struct spi_driver spi_mux_driver = {
+ 	.probe  = spi_mux_probe,
+@@ -178,6 +185,7 @@ static struct spi_driver spi_mux_driver = {
+ 		.name   = "spi-mux",
+ 		.of_match_table = spi_mux_of_match,
+ 	},
++	.id_table = spi_mux_id,
+ };
+ 
+ module_spi_driver(spi_mux_driver);
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index 2218941d35a3f..73b60f013b205 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -1133,7 +1133,7 @@ static int do_proc_control(struct usb_dev_state *ps,
+ 		"wIndex=%04x wLength=%04x\n",
+ 		ctrl->bRequestType, ctrl->bRequest, ctrl->wValue,
+ 		ctrl->wIndex, ctrl->wLength);
+-	if (ctrl->bRequestType & 0x80) {
++	if ((ctrl->bRequestType & USB_DIR_IN) && ctrl->wLength) {
+ 		pipe = usb_rcvctrlpipe(dev, 0);
+ 		snoop_urb(dev, NULL, pipe, ctrl->wLength, tmo, SUBMIT, NULL, 0);
+ 
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index 19ebb542befcb..dba2baca486e7 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -785,6 +785,9 @@ int usb_get_descriptor(struct usb_device *dev, unsigned char type,
+ 	int i;
+ 	int result;
+ 
++	if (size <= 0)		/* No point in asking for no data */
++		return -EINVAL;
++
+ 	memset(buf, 0, size);	/* Make sure we parse really received data */
+ 
+ 	for (i = 0; i < 3; ++i) {
+@@ -833,6 +836,9 @@ static int usb_get_string(struct usb_device *dev, unsigned short langid,
+ 	int i;
+ 	int result;
+ 
++	if (size <= 0)		/* No point in asking for no data */
++		return -EINVAL;
++
+ 	for (i = 0; i < 3; ++i) {
+ 		/* retry on length 0 or stall; some devices are flakey */
+ 		result = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index aa656f57bf5b7..32c9925de4736 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -454,11 +454,6 @@ out:
+ 	mutex_unlock(&mr->mkey_mtx);
+ }
+ 
+-static bool map_empty(struct vhost_iotlb *iotlb)
+-{
+-	return !vhost_iotlb_itree_first(iotlb, 0, U64_MAX);
+-}
+-
+ int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
+ 			     bool *change_map)
+ {
+@@ -466,10 +461,6 @@ int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *io
+ 	int err = 0;
+ 
+ 	*change_map = false;
+-	if (map_empty(iotlb)) {
+-		mlx5_vdpa_destroy_mr(mvdev);
+-		return 0;
+-	}
+ 	mutex_lock(&mr->mkey_mtx);
+ 	if (mr->initialized) {
+ 		mlx5_vdpa_info(mvdev, "memory map update\n");
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index 80184153ac7d9..c4d53ff06bf85 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -623,7 +623,8 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
+ 	long pinned;
+ 	int ret = 0;
+ 
+-	if (msg->iova < v->range.first ||
++	if (msg->iova < v->range.first || !msg->size ||
++	    msg->iova > U64_MAX - msg->size + 1 ||
+ 	    msg->iova + msg->size - 1 > v->range.last)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 5ccb0705beae1..f41463ab4031d 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -735,10 +735,16 @@ static bool log_access_ok(void __user *log_base, u64 addr, unsigned long sz)
+ 			 (sz + VHOST_PAGE_SIZE * 8 - 1) / VHOST_PAGE_SIZE / 8);
+ }
+ 
++/* Make sure 64 bit math will not overflow. */
+ static bool vhost_overflow(u64 uaddr, u64 size)
+ {
+-	/* Make sure 64 bit math will not overflow. */
+-	return uaddr > ULONG_MAX || size > ULONG_MAX || uaddr > ULONG_MAX - size;
++	if (uaddr > ULONG_MAX || size > ULONG_MAX)
++		return true;
++
++	if (!size)
++		return false;
++
++	return uaddr > ULONG_MAX - size + 1;
+ }
+ 
+ /* Caller should have vq mutex and device mutex. */
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index 42e09cc1b8ac5..84b5dec5d29cd 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -357,6 +357,7 @@ int register_virtio_device(struct virtio_device *dev)
+ 	virtio_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE);
+ 
+ 	INIT_LIST_HEAD(&dev->vqs);
++	spin_lock_init(&dev->vqs_list_lock);
+ 
+ 	/*
+ 	 * device_add() causes the bus infrastructure to look for a matching
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 71e16b53e9c18..6b7aa26c53844 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1668,7 +1668,9 @@ static struct virtqueue *vring_create_virtqueue_packed(
+ 			cpu_to_le16(vq->packed.event_flags_shadow);
+ 	}
+ 
++	spin_lock(&vdev->vqs_list_lock);
+ 	list_add_tail(&vq->vq.list, &vdev->vqs);
++	spin_unlock(&vdev->vqs_list_lock);
+ 	return &vq->vq;
+ 
+ err_desc_extra:
+@@ -2126,7 +2128,9 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
+ 	memset(vq->split.desc_state, 0, vring.num *
+ 			sizeof(struct vring_desc_state_split));
+ 
++	spin_lock(&vdev->vqs_list_lock);
+ 	list_add_tail(&vq->vq.list, &vdev->vqs);
++	spin_unlock(&vdev->vqs_list_lock);
+ 	return &vq->vq;
+ }
+ EXPORT_SYMBOL_GPL(__vring_new_virtqueue);
+@@ -2210,7 +2214,9 @@ void vring_del_virtqueue(struct virtqueue *_vq)
+ 	}
+ 	if (!vq->packed_ring)
+ 		kfree(vq->split.desc_state);
++	spin_lock(&vq->vq.vdev->vqs_list_lock);
+ 	list_del(&_vq->list);
++	spin_unlock(&vq->vq.vdev->vqs_list_lock);
+ 	kfree(vq);
+ }
+ EXPORT_SYMBOL_GPL(vring_del_virtqueue);
+@@ -2274,10 +2280,12 @@ void virtio_break_device(struct virtio_device *dev)
+ {
+ 	struct virtqueue *_vq;
+ 
++	spin_lock(&dev->vqs_list_lock);
+ 	list_for_each_entry(_vq, &dev->vqs, list) {
+ 		struct vring_virtqueue *vq = to_vvq(_vq);
+ 		vq->broken = true;
+ 	}
++	spin_unlock(&dev->vqs_list_lock);
+ }
+ EXPORT_SYMBOL_GPL(virtio_break_device);
+ 
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 4f21b8fbfd4bc..fc4311415fc67 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -8904,8 +8904,14 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ 	bool dest_log_pinned = false;
+ 	bool need_abort = false;
+ 
+-	/* we only allow rename subvolume link between subvolumes */
+-	if (old_ino != BTRFS_FIRST_FREE_OBJECTID && root != dest)
++	/*
++	 * For non-subvolumes allow exchange only within one subvolume, in the
++	 * same inode namespace. Two subvolumes (represented as directory) can
++	 * be exchanged as they're a logical link and have a fixed inode number.
++	 */
++	if (root != dest &&
++	    (old_ino != BTRFS_FIRST_FREE_OBJECTID ||
++	     new_ino != BTRFS_FIRST_FREE_OBJECTID))
+ 		return -EXDEV;
+ 
+ 	/* close the race window with snapshot create/destroy ioctl */
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index ed641dca79573..108b0ed31c11a 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -9078,9 +9078,10 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+ 	if (ctx->flags & IORING_SETUP_SQPOLL) {
+ 		io_cqring_overflow_flush(ctx, false, NULL, NULL);
+ 
+-		ret = -EOWNERDEAD;
+-		if (unlikely(ctx->sqo_dead))
++		if (unlikely(ctx->sqo_dead)) {
++			ret = -EOWNERDEAD;
+ 			goto out;
++		}
+ 		if (flags & IORING_ENTER_SQ_WAKEUP)
+ 			wake_up(&ctx->sq_data->wait);
+ 		if (flags & IORING_ENTER_SQ_WAIT) {
+@@ -9601,11 +9602,12 @@ static int io_register_personality(struct io_ring_ctx *ctx)
+ 
+ 	ret = xa_alloc_cyclic(&ctx->personalities, &id, (void *)iod,
+ 			XA_LIMIT(0, USHRT_MAX), &ctx->pers_next, GFP_KERNEL);
+-	if (!ret)
+-		return id;
+-	put_cred(iod->creds);
+-	kfree(iod);
+-	return ret;
++	if (ret < 0) {
++		put_cred(iod->creds);
++		kfree(iod);
++		return ret;
++	}
++	return id;
+ }
+ 
+ static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 175312428cdf6..046b084136c51 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1697,8 +1697,12 @@ static inline bool may_mount(void)
+ }
+ 
+ #ifdef	CONFIG_MANDATORY_FILE_LOCKING
+-static inline bool may_mandlock(void)
++static bool may_mandlock(void)
+ {
++	pr_warn_once("======================================================\n"
++		     "WARNING: the mand mount option is being deprecated and\n"
++		     "         will be removed in v5.15!\n"
++		     "======================================================\n");
+ 	return capable(CAP_SYS_ADMIN);
+ }
+ #else
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index c691b1ac95f88..4b975111b5361 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -360,12 +360,15 @@ static inline bool mem_cgroup_disabled(void)
+ 	return !cgroup_subsys_enabled(memory_cgrp_subsys);
+ }
+ 
+-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *root,
+-						  struct mem_cgroup *memcg,
+-						  bool in_low_reclaim)
++static inline void mem_cgroup_protection(struct mem_cgroup *root,
++					 struct mem_cgroup *memcg,
++					 unsigned long *min,
++					 unsigned long *low)
+ {
++	*min = *low = 0;
++
+ 	if (mem_cgroup_disabled())
+-		return 0;
++		return;
+ 
+ 	/*
+ 	 * There is no reclaim protection applied to a targeted reclaim.
+@@ -401,13 +404,10 @@ static inline unsigned long mem_cgroup_protection(struct mem_cgroup *root,
+ 	 *
+ 	 */
+ 	if (root == memcg)
+-		return 0;
+-
+-	if (in_low_reclaim)
+-		return READ_ONCE(memcg->memory.emin);
++		return;
+ 
+-	return max(READ_ONCE(memcg->memory.emin),
+-		   READ_ONCE(memcg->memory.elow));
++	*min = READ_ONCE(memcg->memory.emin);
++	*low = READ_ONCE(memcg->memory.elow);
+ }
+ 
+ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
+@@ -966,11 +966,12 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
+ {
+ }
+ 
+-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *root,
+-						  struct mem_cgroup *memcg,
+-						  bool in_low_reclaim)
++static inline void mem_cgroup_protection(struct mem_cgroup *root,
++					 struct mem_cgroup *memcg,
++					 unsigned long *min,
++					 unsigned long *low)
+ {
+-	return 0;
++	*min = *low = 0;
+ }
+ 
+ static inline void mem_cgroup_calculate_protection(struct mem_cgroup *root,
+diff --git a/include/linux/soc/mediatek/mtk-mmsys.h b/include/linux/soc/mediatek/mtk-mmsys.h
+index 7bab5d9a3d31b..2228bf6133da2 100644
+--- a/include/linux/soc/mediatek/mtk-mmsys.h
++++ b/include/linux/soc/mediatek/mtk-mmsys.h
+@@ -9,6 +9,39 @@
+ enum mtk_ddp_comp_id;
+ struct device;
+ 
++enum mtk_ddp_comp_id {
++	DDP_COMPONENT_AAL0,
++	DDP_COMPONENT_AAL1,
++	DDP_COMPONENT_BLS,
++	DDP_COMPONENT_CCORR,
++	DDP_COMPONENT_COLOR0,
++	DDP_COMPONENT_COLOR1,
++	DDP_COMPONENT_DITHER,
++	DDP_COMPONENT_DPI0,
++	DDP_COMPONENT_DPI1,
++	DDP_COMPONENT_DSI0,
++	DDP_COMPONENT_DSI1,
++	DDP_COMPONENT_DSI2,
++	DDP_COMPONENT_DSI3,
++	DDP_COMPONENT_GAMMA,
++	DDP_COMPONENT_OD0,
++	DDP_COMPONENT_OD1,
++	DDP_COMPONENT_OVL0,
++	DDP_COMPONENT_OVL_2L0,
++	DDP_COMPONENT_OVL_2L1,
++	DDP_COMPONENT_OVL1,
++	DDP_COMPONENT_PWM0,
++	DDP_COMPONENT_PWM1,
++	DDP_COMPONENT_PWM2,
++	DDP_COMPONENT_RDMA0,
++	DDP_COMPONENT_RDMA1,
++	DDP_COMPONENT_RDMA2,
++	DDP_COMPONENT_UFOE,
++	DDP_COMPONENT_WDMA0,
++	DDP_COMPONENT_WDMA1,
++	DDP_COMPONENT_ID_MAX,
++};
++
+ void mtk_mmsys_ddp_connect(struct device *dev,
+ 			   enum mtk_ddp_comp_id cur,
+ 			   enum mtk_ddp_comp_id next);
+diff --git a/include/linux/virtio.h b/include/linux/virtio.h
+index 55ea329fe72a4..8ecc2e208d613 100644
+--- a/include/linux/virtio.h
++++ b/include/linux/virtio.h
+@@ -110,6 +110,7 @@ struct virtio_device {
+ 	bool config_enabled;
+ 	bool config_change_pending;
+ 	spinlock_t config_lock;
++	spinlock_t vqs_list_lock; /* Protects VQs list access */
+ 	struct device dev;
+ 	struct virtio_device_id id;
+ 	const struct virtio_config_ops *config;
+diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
+index 161b909790389..123b1e9ea304a 100644
+--- a/include/net/flow_offload.h
++++ b/include/net/flow_offload.h
+@@ -312,14 +312,12 @@ flow_action_mixed_hw_stats_check(const struct flow_action *action,
+ 	if (flow_offload_has_one_action(action))
+ 		return true;
+ 
+-	if (action) {
+-		flow_action_for_each(i, action_entry, action) {
+-			if (i && action_entry->hw_stats != last_hw_stats) {
+-				NL_SET_ERR_MSG_MOD(extack, "Mixing HW stats types for actions is not supported");
+-				return false;
+-			}
+-			last_hw_stats = action_entry->hw_stats;
++	flow_action_for_each(i, action_entry, action) {
++		if (i && action_entry->hw_stats != last_hw_stats) {
++			NL_SET_ERR_MSG_MOD(extack, "Mixing HW stats types for actions is not supported");
++			return false;
+ 		}
++		last_hw_stats = action_entry->hw_stats;
+ 	}
+ 	return true;
+ }
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index ce1e9193365f8..1410f128c4042 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -10705,6 +10705,7 @@ static void sanitize_dead_code(struct bpf_verifier_env *env)
+ 		if (aux_data[i].seen)
+ 			continue;
+ 		memcpy(insn + i, &trap, sizeof(trap));
++		aux_data[i].zext_dst = false;
+ 	}
+ }
+ 
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 75529b3117692..1b7f90e00eb05 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -3405,6 +3405,8 @@ trace_action_create_field_var(struct hist_trigger_data *hist_data,
+ 			event = data->match_data.event;
+ 		}
+ 
++		if (!event)
++			goto free;
+ 		/*
+ 		 * At this point, we're looking at a field on another
+ 		 * event.  Because we can't modify a hist trigger on
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 67d38334052ef..7fb9af001ed5c 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -102,9 +102,12 @@ struct scan_control {
+ 	unsigned int may_swap:1;
+ 
+ 	/*
+-	 * Cgroups are not reclaimed below their configured memory.low,
+-	 * unless we threaten to OOM. If any cgroups are skipped due to
+-	 * memory.low and nothing was reclaimed, go back for memory.low.
++	 * Cgroup memory below memory.low is protected as long as we
++	 * don't threaten to OOM. If any cgroup is reclaimed at
++	 * reduced force or passed over entirely due to its memory.low
++	 * setting (memcg_low_skipped), and nothing is reclaimed as a
++	 * result, then go back for one more cycle that reclaims the protected
++	 * memory (memcg_low_reclaim) to avert OOM.
+ 	 */
+ 	unsigned int memcg_low_reclaim:1;
+ 	unsigned int memcg_low_skipped:1;
+@@ -2323,15 +2326,14 @@ out:
+ 	for_each_evictable_lru(lru) {
+ 		int file = is_file_lru(lru);
+ 		unsigned long lruvec_size;
++		unsigned long low, min;
+ 		unsigned long scan;
+-		unsigned long protection;
+ 
+ 		lruvec_size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
+-		protection = mem_cgroup_protection(sc->target_mem_cgroup,
+-						   memcg,
+-						   sc->memcg_low_reclaim);
++		mem_cgroup_protection(sc->target_mem_cgroup, memcg,
++				      &min, &low);
+ 
+-		if (protection) {
++		if (min || low) {
+ 			/*
+ 			 * Scale a cgroup's reclaim pressure by proportioning
+ 			 * its current usage to its memory.low or memory.min
+@@ -2362,6 +2364,15 @@ out:
+ 			 * hard protection.
+ 			 */
+ 			unsigned long cgroup_size = mem_cgroup_size(memcg);
++			unsigned long protection;
++
++			/* memory.low scaling, make sure we retry before OOM */
++			if (!sc->memcg_low_reclaim && low > min) {
++				protection = low;
++				sc->memcg_low_skipped = 1;
++			} else {
++				protection = min;
++			}
+ 
+ 			/* Avoid TOCTOU with earlier protection check */
+ 			cgroup_size = max(cgroup_size, protection);
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index 3b4fa27a44e64..0db48c8126623 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -1290,7 +1290,7 @@ static int hidp_session_thread(void *arg)
+ 
+ 	/* cleanup runtime environment */
+ 	remove_wait_queue(sk_sleep(session->intr_sock->sk), &intr_wait);
+-	remove_wait_queue(sk_sleep(session->intr_sock->sk), &ctrl_wait);
++	remove_wait_queue(sk_sleep(session->ctrl_sock->sk), &ctrl_wait);
+ 	wake_up_interruptible(&session->report_queue);
+ 	hidp_del_timer(session);
+ 
+diff --git a/net/dccp/dccp.h b/net/dccp/dccp.h
+index 9cc9d1ee6cdb9..c5c1d2b8045e8 100644
+--- a/net/dccp/dccp.h
++++ b/net/dccp/dccp.h
+@@ -41,9 +41,9 @@ extern bool dccp_debug;
+ #define dccp_pr_debug_cat(format, a...)   DCCP_PRINTK(dccp_debug, format, ##a)
+ #define dccp_debug(fmt, a...)		  dccp_pr_debug_cat(KERN_DEBUG fmt, ##a)
+ #else
+-#define dccp_pr_debug(format, a...)
+-#define dccp_pr_debug_cat(format, a...)
+-#define dccp_debug(format, a...)
++#define dccp_pr_debug(format, a...)	  do {} while (0)
++#define dccp_pr_debug_cat(format, a...)	  do {} while (0)
++#define dccp_debug(format, a...)	  do {} while (0)
+ #endif
+ 
+ extern struct inet_hashinfo dccp_hashinfo;
+diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c
+index 82d801f063b70..1c05d4bef3313 100644
+--- a/net/openvswitch/vport.c
++++ b/net/openvswitch/vport.c
+@@ -503,6 +503,7 @@ void ovs_vport_send(struct vport *vport, struct sk_buff *skb, u8 mac_proto)
+ 	}
+ 
+ 	skb->dev = vport->dev;
++	skb->tstamp = 0;
+ 	vport->ops->send(skb);
+ 	return;
+ 
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index 5c15968b5155b..c2c37ffd94f22 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -720,7 +720,7 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ skip_hash:
+ 	if (flow_override)
+ 		flow_hash = flow_override - 1;
+-	else if (use_skbhash)
++	else if (use_skbhash && (flow_mode & CAKE_FLOW_FLOWS))
+ 		flow_hash = skb->hash;
+ 	if (host_override) {
+ 		dsthost_hash = host_override - 1;
+diff --git a/net/xfrm/xfrm_ipcomp.c b/net/xfrm/xfrm_ipcomp.c
+index 4d422447aadc3..0814320472f18 100644
+--- a/net/xfrm/xfrm_ipcomp.c
++++ b/net/xfrm/xfrm_ipcomp.c
+@@ -250,7 +250,7 @@ static void ipcomp_free_tfms(struct crypto_comp * __percpu *tfms)
+ 			break;
+ 	}
+ 
+-	WARN_ON(!pos);
++	WARN_ON(list_entry_is_head(pos, &ipcomp_tfms_list, list));
+ 
+ 	if (--pos->users)
+ 		return;
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index 7c49a7e92dd21..323df011b94a3 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -3458,7 +3458,7 @@ static int cap_put_caller(struct snd_kcontrol *kcontrol,
+ 	struct hda_gen_spec *spec = codec->spec;
+ 	const struct hda_input_mux *imux;
+ 	struct nid_path *path;
+-	int i, adc_idx, err = 0;
++	int i, adc_idx, ret, err = 0;
+ 
+ 	imux = &spec->input_mux;
+ 	adc_idx = kcontrol->id.index;
+@@ -3468,9 +3468,13 @@ static int cap_put_caller(struct snd_kcontrol *kcontrol,
+ 		if (!path || !path->ctls[type])
+ 			continue;
+ 		kcontrol->private_value = path->ctls[type];
+-		err = func(kcontrol, ucontrol);
+-		if (err < 0)
++		ret = func(kcontrol, ucontrol);
++		if (ret < 0) {
++			err = ret;
+ 			break;
++		}
++		if (ret > 0)
++			err = 1;
+ 	}
+ 	mutex_unlock(&codec->control_mutex);
+ 	if (err >= 0 && spec->cap_sync_hook)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index beb5fb03e3884..6219d0311c9a0 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6590,6 +6590,7 @@ enum {
+ 	ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP,
+ 	ALC623_FIXUP_LENOVO_THINKSTATION_P340,
+ 	ALC255_FIXUP_ACER_HEADPHONE_AND_MIC,
++	ALC236_FIXUP_HP_LIMIT_INT_MIC_BOOST,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8168,6 +8169,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC255_FIXUP_XIAOMI_HEADSET_MIC
+ 	},
++	[ALC236_FIXUP_HP_LIMIT_INT_MIC_BOOST] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269_fixup_limit_int_mic_boost,
++		.chained = true,
++		.chain_id = ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8258,6 +8265,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0a2e, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0a30, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0a58, "Dell", ALC255_FIXUP_DELL_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1028, 0x0a61, "Dell XPS 15 9510", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -8363,8 +8371,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884c, "HP EliteBook 840 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+-	SND_PCI_QUIRK(0x103c, 0x8862, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+-	SND_PCI_QUIRK(0x103c, 0x8863, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8862, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x103c, 0x8863, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
+index a5c1a2c4eae4e..773a136161f11 100644
+--- a/sound/pci/hda/patch_via.c
++++ b/sound/pci/hda/patch_via.c
+@@ -1041,6 +1041,7 @@ static const struct hda_fixup via_fixups[] = {
+ };
+ 
+ static const struct snd_pci_quirk vt2002p_fixups[] = {
++	SND_PCI_QUIRK(0x1043, 0x13f7, "Asus B23E", VIA_FIXUP_POWER_SAVE),
+ 	SND_PCI_QUIRK(0x1043, 0x1487, "Asus G75", VIA_FIXUP_ASUS_G75),
+ 	SND_PCI_QUIRK(0x1043, 0x8532, "Asus X202E", VIA_FIXUP_INTMIC_BOOST),
+ 	SND_PCI_QUIRK_VENDOR(0x1558, "Clevo", VIA_FIXUP_POWER_SAVE),
+diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+index 2784611196f06..255b4d528a66c 100644
+--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
++++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+@@ -127,7 +127,7 @@ static void sst_fill_alloc_params(struct snd_pcm_substream *substream,
+ 	snd_pcm_uframes_t period_size;
+ 	ssize_t periodbytes;
+ 	ssize_t buffer_bytes = snd_pcm_lib_buffer_bytes(substream);
+-	u32 buffer_addr = substream->runtime->dma_addr;
++	u32 buffer_addr = virt_to_phys(substream->runtime->dma_area);
+ 
+ 	channels = substream->runtime->channels;
+ 	period_size = substream->runtime->period_size;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-03 11:20 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-09-03 11:20 UTC (permalink / raw
  To: gentoo-commits

commit:     58c8486b95871d1487eb5de573389bda44f0fb32
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep  3 11:20:02 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep  3 11:20:02 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=58c8486b

Linux patch 5.10.62

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1061_linux-5.10.62.patch | 4421 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4425 insertions(+)

diff --git a/0000_README b/0000_README
index 812aa2d..eb58622 100644
--- a/0000_README
+++ b/0000_README
@@ -287,6 +287,10 @@ Patch:  1060_linux-5.10.61.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.61
 
+Patch:  1061_linux-5.10.62.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.62
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1061_linux-5.10.62.patch b/1061_linux-5.10.62.patch
new file mode 100644
index 0000000..877906a
--- /dev/null
+++ b/1061_linux-5.10.62.patch
@@ -0,0 +1,4421 @@
+diff --git a/Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml b/Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml
+index efc0198eeb74d..5444be7667b60 100644
+--- a/Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml
++++ b/Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml
+@@ -24,9 +24,9 @@ allOf:
+ select:
+   properties:
+     compatible:
+-      items:
+-        - enum:
+-            - sifive,fu540-c000-ccache
++      contains:
++        enum:
++          - sifive,fu540-c000-ccache
+ 
+   required:
+     - compatible
+diff --git a/Makefile b/Makefile
+index a6ab3263f81df..90c0cb3e4d3c2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 61
++SUBLEVEL = 62
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arc/kernel/vmlinux.lds.S b/arch/arc/kernel/vmlinux.lds.S
+index 33ce59d914619..f67e4ad7b3ce2 100644
+--- a/arch/arc/kernel/vmlinux.lds.S
++++ b/arch/arc/kernel/vmlinux.lds.S
+@@ -88,6 +88,8 @@ SECTIONS
+ 		CPUIDLE_TEXT
+ 		LOCK_TEXT
+ 		KPROBES_TEXT
++		IRQENTRY_TEXT
++		SOFTIRQENTRY_TEXT
+ 		*(.fixup)
+ 		*(.gnu.warning)
+ 	}
+diff --git a/arch/arm64/boot/dts/qcom/msm8994-angler-rev-101.dts b/arch/arm64/boot/dts/qcom/msm8994-angler-rev-101.dts
+index baa55643b40ff..ffe1a9bd8f705 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994-angler-rev-101.dts
++++ b/arch/arm64/boot/dts/qcom/msm8994-angler-rev-101.dts
+@@ -32,3 +32,7 @@
+ 		};
+ 	};
+ };
++
++&tlmm {
++	gpio-reserved-ranges = <85 4>;
++};
+diff --git a/arch/parisc/include/asm/string.h b/arch/parisc/include/asm/string.h
+index 4a0c9dbd62fd0..f6e1132f4e352 100644
+--- a/arch/parisc/include/asm/string.h
++++ b/arch/parisc/include/asm/string.h
+@@ -8,19 +8,4 @@ extern void * memset(void *, int, size_t);
+ #define __HAVE_ARCH_MEMCPY
+ void * memcpy(void * dest,const void *src,size_t count);
+ 
+-#define __HAVE_ARCH_STRLEN
+-extern size_t strlen(const char *s);
+-
+-#define __HAVE_ARCH_STRCPY
+-extern char *strcpy(char *dest, const char *src);
+-
+-#define __HAVE_ARCH_STRNCPY
+-extern char *strncpy(char *dest, const char *src, size_t count);
+-
+-#define __HAVE_ARCH_STRCAT
+-extern char *strcat(char *dest, const char *src);
+-
+-#define __HAVE_ARCH_MEMSET
+-extern void *memset(void *, int, size_t);
+-
+ #endif
+diff --git a/arch/parisc/kernel/parisc_ksyms.c b/arch/parisc/kernel/parisc_ksyms.c
+index 8ed409ecec933..e8a6a751dfd8e 100644
+--- a/arch/parisc/kernel/parisc_ksyms.c
++++ b/arch/parisc/kernel/parisc_ksyms.c
+@@ -17,10 +17,6 @@
+ 
+ #include <linux/string.h>
+ EXPORT_SYMBOL(memset);
+-EXPORT_SYMBOL(strlen);
+-EXPORT_SYMBOL(strcpy);
+-EXPORT_SYMBOL(strncpy);
+-EXPORT_SYMBOL(strcat);
+ 
+ #include <linux/atomic.h>
+ EXPORT_SYMBOL(__xchg8);
+diff --git a/arch/parisc/lib/Makefile b/arch/parisc/lib/Makefile
+index 2d7a9974dbaef..7b197667faf6c 100644
+--- a/arch/parisc/lib/Makefile
++++ b/arch/parisc/lib/Makefile
+@@ -3,7 +3,7 @@
+ # Makefile for parisc-specific library files
+ #
+ 
+-lib-y	:= lusercopy.o bitops.o checksum.o io.o memcpy.o \
+-	   ucmpdi2.o delay.o string.o
++lib-y	:= lusercopy.o bitops.o checksum.o io.o memset.o memcpy.o \
++	   ucmpdi2.o delay.o
+ 
+ obj-y	:= iomap.o
+diff --git a/arch/parisc/lib/memset.c b/arch/parisc/lib/memset.c
+new file mode 100644
+index 0000000000000..133e4809859a3
+--- /dev/null
++++ b/arch/parisc/lib/memset.c
+@@ -0,0 +1,72 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++#include <linux/types.h>
++#include <asm/string.h>
++
++#define OPSIZ (BITS_PER_LONG/8)
++typedef unsigned long op_t;
++
++void *
++memset (void *dstpp, int sc, size_t len)
++{
++  unsigned int c = sc;
++  long int dstp = (long int) dstpp;
++
++  if (len >= 8)
++    {
++      size_t xlen;
++      op_t cccc;
++
++      cccc = (unsigned char) c;
++      cccc |= cccc << 8;
++      cccc |= cccc << 16;
++      if (OPSIZ > 4)
++	/* Do the shift in two steps to avoid warning if long has 32 bits.  */
++	cccc |= (cccc << 16) << 16;
++
++      /* There are at least some bytes to set.
++	 No need to test for LEN == 0 in this alignment loop.  */
++      while (dstp % OPSIZ != 0)
++	{
++	  ((unsigned char *) dstp)[0] = c;
++	  dstp += 1;
++	  len -= 1;
++	}
++
++      /* Write 8 `op_t' per iteration until less than 8 `op_t' remain.  */
++      xlen = len / (OPSIZ * 8);
++      while (xlen > 0)
++	{
++	  ((op_t *) dstp)[0] = cccc;
++	  ((op_t *) dstp)[1] = cccc;
++	  ((op_t *) dstp)[2] = cccc;
++	  ((op_t *) dstp)[3] = cccc;
++	  ((op_t *) dstp)[4] = cccc;
++	  ((op_t *) dstp)[5] = cccc;
++	  ((op_t *) dstp)[6] = cccc;
++	  ((op_t *) dstp)[7] = cccc;
++	  dstp += 8 * OPSIZ;
++	  xlen -= 1;
++	}
++      len %= OPSIZ * 8;
++
++      /* Write 1 `op_t' per iteration until less than OPSIZ bytes remain.  */
++      xlen = len / OPSIZ;
++      while (xlen > 0)
++	{
++	  ((op_t *) dstp)[0] = cccc;
++	  dstp += OPSIZ;
++	  xlen -= 1;
++	}
++      len %= OPSIZ;
++    }
++
++  /* Write the last few bytes.  */
++  while (len > 0)
++    {
++      ((unsigned char *) dstp)[0] = c;
++      dstp += 1;
++      len -= 1;
++    }
++
++  return dstpp;
++}
+diff --git a/arch/parisc/lib/string.S b/arch/parisc/lib/string.S
+deleted file mode 100644
+index 4a64264427a63..0000000000000
+--- a/arch/parisc/lib/string.S
++++ /dev/null
+@@ -1,136 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- *    PA-RISC assembly string functions
+- *
+- *    Copyright (C) 2019 Helge Deller <deller@gmx.de>
+- */
+-
+-#include <asm/assembly.h>
+-#include <linux/linkage.h>
+-
+-	.section .text.hot
+-	.level PA_ASM_LEVEL
+-
+-	t0 = r20
+-	t1 = r21
+-	t2 = r22
+-
+-ENTRY_CFI(strlen, frame=0,no_calls)
+-	or,COND(<>) arg0,r0,ret0
+-	b,l,n	.Lstrlen_null_ptr,r0
+-	depwi	0,31,2,ret0
+-	cmpb,COND(<>) arg0,ret0,.Lstrlen_not_aligned
+-	ldw,ma	4(ret0),t0
+-	cmpib,tr 0,r0,.Lstrlen_loop
+-	uxor,nbz r0,t0,r0
+-.Lstrlen_not_aligned:
+-	uaddcm	arg0,ret0,t1
+-	shladd	t1,3,r0,t1
+-	mtsar	t1
+-	depwi	-1,%sar,32,t0
+-	uxor,nbz r0,t0,r0
+-.Lstrlen_loop:
+-	b,l,n	.Lstrlen_end_loop,r0
+-	ldw,ma	4(ret0),t0
+-	cmpib,tr 0,r0,.Lstrlen_loop
+-	uxor,nbz r0,t0,r0
+-.Lstrlen_end_loop:
+-	extrw,u,<> t0,7,8,r0
+-	addib,tr,n -3,ret0,.Lstrlen_out
+-	extrw,u,<> t0,15,8,r0
+-	addib,tr,n -2,ret0,.Lstrlen_out
+-	extrw,u,<> t0,23,8,r0
+-	addi	-1,ret0,ret0
+-.Lstrlen_out:
+-	bv r0(rp)
+-	uaddcm ret0,arg0,ret0
+-.Lstrlen_null_ptr:
+-	bv,n r0(rp)
+-ENDPROC_CFI(strlen)
+-
+-
+-ENTRY_CFI(strcpy, frame=0,no_calls)
+-	ldb	0(arg1),t0
+-	stb	t0,0(arg0)
+-	ldo	0(arg0),ret0
+-	ldo	1(arg1),t1
+-	cmpb,=	r0,t0,2f
+-	ldo	1(arg0),t2
+-1:	ldb	0(t1),arg1
+-	stb	arg1,0(t2)
+-	ldo	1(t1),t1
+-	cmpb,<> r0,arg1,1b
+-	ldo	1(t2),t2
+-2:	bv,n	r0(rp)
+-ENDPROC_CFI(strcpy)
+-
+-
+-ENTRY_CFI(strncpy, frame=0,no_calls)
+-	ldb	0(arg1),t0
+-	stb	t0,0(arg0)
+-	ldo	1(arg1),t1
+-	ldo	0(arg0),ret0
+-	cmpb,=	r0,t0,2f
+-	ldo	1(arg0),arg1
+-1:	ldo	-1(arg2),arg2
+-	cmpb,COND(=),n r0,arg2,2f
+-	ldb	0(t1),arg0
+-	stb	arg0,0(arg1)
+-	ldo	1(t1),t1
+-	cmpb,<> r0,arg0,1b
+-	ldo	1(arg1),arg1
+-2:	bv,n	r0(rp)
+-ENDPROC_CFI(strncpy)
+-
+-
+-ENTRY_CFI(strcat, frame=0,no_calls)
+-	ldb	0(arg0),t0
+-	cmpb,=	t0,r0,2f
+-	ldo	0(arg0),ret0
+-	ldo	1(arg0),arg0
+-1:	ldb	0(arg0),t1
+-	cmpb,<>,n r0,t1,1b
+-	ldo	1(arg0),arg0
+-2:	ldb	0(arg1),t2
+-	stb	t2,0(arg0)
+-	ldo	1(arg0),arg0
+-	ldb	0(arg1),t0
+-	cmpb,<>	r0,t0,2b
+-	ldo	1(arg1),arg1
+-	bv,n	r0(rp)
+-ENDPROC_CFI(strcat)
+-
+-
+-ENTRY_CFI(memset, frame=0,no_calls)
+-	copy	arg0,ret0
+-	cmpb,COND(=) r0,arg0,4f
+-	copy	arg0,t2
+-	cmpb,COND(=) r0,arg2,4f
+-	ldo	-1(arg2),arg3
+-	subi	-1,arg3,t0
+-	subi	0,t0,t1
+-	cmpiclr,COND(>=) 0,t1,arg2
+-	ldo	-1(t1),arg2
+-	extru arg2,31,2,arg0
+-2:	stb	arg1,0(t2)
+-	ldo	1(t2),t2
+-	addib,>= -1,arg0,2b
+-	ldo	-1(arg3),arg3
+-	cmpiclr,COND(<=) 4,arg2,r0
+-	b,l,n	4f,r0
+-#ifdef CONFIG_64BIT
+-	depd,*	r0,63,2,arg2
+-#else
+-	depw	r0,31,2,arg2
+-#endif
+-	ldo	1(t2),t2
+-3:	stb	arg1,-1(t2)
+-	stb	arg1,0(t2)
+-	stb	arg1,1(t2)
+-	stb	arg1,2(t2)
+-	addib,COND(>) -4,arg2,3b
+-	ldo	4(t2),t2
+-4:	bv,n	r0(rp)
+-ENDPROC_CFI(memset)
+-
+-	.end
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index ded4a3efd3f06..91452313489f1 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -1884,7 +1884,7 @@ static bool is_event_blacklisted(u64 ev)
+ static int power_pmu_event_init(struct perf_event *event)
+ {
+ 	u64 ev;
+-	unsigned long flags;
++	unsigned long flags, irq_flags;
+ 	struct perf_event *ctrs[MAX_HWEVENTS];
+ 	u64 events[MAX_HWEVENTS];
+ 	unsigned int cflags[MAX_HWEVENTS];
+@@ -1992,7 +1992,9 @@ static int power_pmu_event_init(struct perf_event *event)
+ 	if (check_excludes(ctrs, cflags, n, 1))
+ 		return -EINVAL;
+ 
+-	cpuhw = &get_cpu_var(cpu_hw_events);
++	local_irq_save(irq_flags);
++	cpuhw = this_cpu_ptr(&cpu_hw_events);
++
+ 	err = power_check_constraints(cpuhw, events, cflags, n + 1);
+ 
+ 	if (has_branch_stack(event)) {
+@@ -2003,13 +2005,13 @@ static int power_pmu_event_init(struct perf_event *event)
+ 					event->attr.branch_sample_type);
+ 
+ 		if (bhrb_filter == -1) {
+-			put_cpu_var(cpu_hw_events);
++			local_irq_restore(irq_flags);
+ 			return -EOPNOTSUPP;
+ 		}
+ 		cpuhw->bhrb_filter = bhrb_filter;
+ 	}
+ 
+-	put_cpu_var(cpu_hw_events);
++	local_irq_restore(irq_flags);
+ 	if (err)
+ 		return -EINVAL;
+ 
+diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
+index fa896c5f7ccb7..62de075fc60c0 100644
+--- a/arch/riscv/kernel/Makefile
++++ b/arch/riscv/kernel/Makefile
+@@ -4,8 +4,9 @@
+ #
+ 
+ ifdef CONFIG_FTRACE
+-CFLAGS_REMOVE_ftrace.o	= -pg
+-CFLAGS_REMOVE_patch.o	= -pg
++CFLAGS_REMOVE_ftrace.o	= $(CC_FLAGS_FTRACE)
++CFLAGS_REMOVE_patch.o	= $(CC_FLAGS_FTRACE)
++CFLAGS_REMOVE_sbi.o	= $(CC_FLAGS_FTRACE)
+ endif
+ 
+ extra-y += head.o
+diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
+index 2d6395f5ad54f..69678ab6457d7 100644
+--- a/arch/riscv/kernel/ptrace.c
++++ b/arch/riscv/kernel/ptrace.c
+@@ -10,6 +10,7 @@
+ #include <asm/ptrace.h>
+ #include <asm/syscall.h>
+ #include <asm/thread_info.h>
++#include <asm/switch_to.h>
+ #include <linux/audit.h>
+ #include <linux/ptrace.h>
+ #include <linux/elf.h>
+@@ -56,6 +57,9 @@ static int riscv_fpr_get(struct task_struct *target,
+ {
+ 	struct __riscv_d_ext_state *fstate = &target->thread.fstate;
+ 
++	if (target == current)
++		fstate_save(current, task_pt_regs(current));
++
+ 	membuf_write(&to, fstate, offsetof(struct __riscv_d_ext_state, fcsr));
+ 	membuf_store(&to, fstate->fcsr);
+ 	return membuf_zero(&to, 4);	// explicitly pad
+diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
+index c0185e556ca51..7ebaef10ea1b6 100644
+--- a/arch/riscv/mm/Makefile
++++ b/arch/riscv/mm/Makefile
+@@ -2,7 +2,8 @@
+ 
+ CFLAGS_init.o := -mcmodel=medany
+ ifdef CONFIG_FTRACE
+-CFLAGS_REMOVE_init.o = -pg
++CFLAGS_REMOVE_init.o = $(CC_FLAGS_FTRACE)
++CFLAGS_REMOVE_cacheflush.o = $(CC_FLAGS_FTRACE)
+ endif
+ 
+ KCOV_INSTRUMENT_init.o := n
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 9c936d06fb61a..2701f87a9a7c6 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -4669,7 +4669,7 @@ static void __snr_uncore_mmio_init_box(struct intel_uncore_box *box,
+ 		return;
+ 
+ 	pci_read_config_dword(pdev, SNR_IMC_MMIO_BASE_OFFSET, &pci_dword);
+-	addr = (pci_dword & SNR_IMC_MMIO_BASE_MASK) << 23;
++	addr = ((resource_size_t)pci_dword & SNR_IMC_MMIO_BASE_MASK) << 23;
+ 
+ 	pci_read_config_dword(pdev, mem_offset, &pci_dword);
+ 	addr |= (pci_dword & SNR_IMC_MMIO_MEM0_MASK) << 12;
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index b7d8a954d99c3..e95b93f72bd5c 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -3039,19 +3039,19 @@ static ssize_t ioc_weight_write(struct kernfs_open_file *of, char *buf,
+ 		if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
+ 			return -EINVAL;
+ 
+-		spin_lock(&blkcg->lock);
++		spin_lock_irq(&blkcg->lock);
+ 		iocc->dfl_weight = v * WEIGHT_ONE;
+ 		hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
+ 			struct ioc_gq *iocg = blkg_to_iocg(blkg);
+ 
+ 			if (iocg) {
+-				spin_lock_irq(&iocg->ioc->lock);
++				spin_lock(&iocg->ioc->lock);
+ 				ioc_now(iocg->ioc, &now);
+ 				weight_updated(iocg, &now);
+-				spin_unlock_irq(&iocg->ioc->lock);
++				spin_unlock(&iocg->ioc->lock);
+ 			}
+ 		}
+-		spin_unlock(&blkcg->lock);
++		spin_unlock_irq(&blkcg->lock);
+ 
+ 		return nbytes;
+ 	}
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index a368eb6dc6470..044d0e3a15ad7 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -941,34 +941,14 @@ static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
+ 	unsigned long *next = priv;
+ 
+ 	/*
+-	 * Just do a quick check if it is expired before locking the request in
+-	 * so we're not unnecessarilly synchronizing across CPUs.
+-	 */
+-	if (!blk_mq_req_expired(rq, next))
+-		return true;
+-
+-	/*
+-	 * We have reason to believe the request may be expired. Take a
+-	 * reference on the request to lock this request lifetime into its
+-	 * currently allocated context to prevent it from being reallocated in
+-	 * the event the completion by-passes this timeout handler.
+-	 *
+-	 * If the reference was already released, then the driver beat the
+-	 * timeout handler to posting a natural completion.
+-	 */
+-	if (!refcount_inc_not_zero(&rq->ref))
+-		return true;
+-
+-	/*
+-	 * The request is now locked and cannot be reallocated underneath the
+-	 * timeout handler's processing. Re-verify this exact request is truly
+-	 * expired; if it is not expired, then the request was completed and
+-	 * reallocated as a new request.
++	 * blk_mq_queue_tag_busy_iter() has locked the request, so it cannot
++	 * be reallocated underneath the timeout handler's processing, then
++	 * the expire check is reliable. If the request is not expired, then
++	 * it was completed and reallocated as a new request after returning
++	 * from blk_mq_check_expired().
+ 	 */
+ 	if (blk_mq_req_expired(rq, next))
+ 		blk_mq_rq_timed_out(rq, reserved);
+-
+-	blk_mq_put_rq_ref(rq);
+ 	return true;
+ }
+ 
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 295da442329f3..7df79ae6b0a1e 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -4120,23 +4120,23 @@ static int floppy_open(struct block_device *bdev, fmode_t mode)
+ 	if (fdc_state[FDC(drive)].rawcmd == 1)
+ 		fdc_state[FDC(drive)].rawcmd = 2;
+ 
+-	if (mode & (FMODE_READ|FMODE_WRITE)) {
+-		drive_state[drive].last_checked = 0;
+-		clear_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags);
+-		if (bdev_check_media_change(bdev))
+-			floppy_revalidate(bdev->bd_disk);
+-		if (test_bit(FD_DISK_CHANGED_BIT, &drive_state[drive].flags))
+-			goto out;
+-		if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags))
++	if (!(mode & FMODE_NDELAY)) {
++		if (mode & (FMODE_READ|FMODE_WRITE)) {
++			drive_state[drive].last_checked = 0;
++			clear_bit(FD_OPEN_SHOULD_FAIL_BIT,
++				  &drive_state[drive].flags);
++			if (bdev_check_media_change(bdev))
++				floppy_revalidate(bdev->bd_disk);
++			if (test_bit(FD_DISK_CHANGED_BIT, &drive_state[drive].flags))
++				goto out;
++			if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags))
++				goto out;
++		}
++		res = -EROFS;
++		if ((mode & FMODE_WRITE) &&
++		    !test_bit(FD_DISK_WRITABLE_BIT, &drive_state[drive].flags))
+ 			goto out;
+ 	}
+-
+-	res = -EROFS;
+-
+-	if ((mode & FMODE_WRITE) &&
+-			!test_bit(FD_DISK_WRITABLE_BIT, &drive_state[drive].flags))
+-		goto out;
+-
+ 	mutex_unlock(&open_lock);
+ 	mutex_unlock(&floppy_mutex);
+ 	return 0;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index afd2b1f12d49d..e0859f4e28073 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -486,6 +486,7 @@ static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
+ #define BTUSB_HW_RESET_ACTIVE	12
+ #define BTUSB_TX_WAIT_VND_EVT	13
+ #define BTUSB_WAKEUP_DISABLE	14
++#define BTUSB_USE_ALT3_FOR_WBS	15
+ 
+ struct btusb_data {
+ 	struct hci_dev       *hdev;
+@@ -1718,16 +1719,20 @@ static void btusb_work(struct work_struct *work)
+ 			/* Bluetooth USB spec recommends alt 6 (63 bytes), but
+ 			 * many adapters do not support it.  Alt 1 appears to
+ 			 * work for all adapters that do not have alt 6, and
+-			 * which work with WBS at all.
++			 * which work with WBS at all.  Some devices prefer
++			 * alt 3 (HCI payload >= 60 Bytes let air packet
++			 * data satisfy 60 bytes), requiring
++			 * MTU >= 3 (packets) * 25 (size) - 3 (headers) = 72
++			 * see also Core spec 5, vol 4, B 2.1.1 & Table 2.1.
+ 			 */
+-			new_alts = btusb_find_altsetting(data, 6) ? 6 : 1;
+-			/* Because mSBC frames do not need to be aligned to the
+-			 * SCO packet boundary. If support the Alt 3, use the
+-			 * Alt 3 for HCI payload >= 60 Bytes let air packet
+-			 * data satisfy 60 bytes.
+-			 */
+-			if (new_alts == 1 && btusb_find_altsetting(data, 3))
++			if (btusb_find_altsetting(data, 6))
++				new_alts = 6;
++			else if (btusb_find_altsetting(data, 3) &&
++				 hdev->sco_mtu >= 72 &&
++				 test_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags))
+ 				new_alts = 3;
++			else
++				new_alts = 1;
+ 		}
+ 
+ 		if (btusb_switch_alt_setting(hdev, new_alts) < 0)
+@@ -4170,6 +4175,7 @@ static int btusb_probe(struct usb_interface *intf,
+ 		 * (DEVICE_REMOTE_WAKEUP)
+ 		 */
+ 		set_bit(BTUSB_WAKEUP_DISABLE, &data->flags);
++		set_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags);
+ 	}
+ 
+ 	if (!reset)
+diff --git a/drivers/clk/renesas/rcar-usb2-clock-sel.c b/drivers/clk/renesas/rcar-usb2-clock-sel.c
+index 0ccc6e709a385..7a64dcb7209e4 100644
+--- a/drivers/clk/renesas/rcar-usb2-clock-sel.c
++++ b/drivers/clk/renesas/rcar-usb2-clock-sel.c
+@@ -190,7 +190,7 @@ static int rcar_usb2_clock_sel_probe(struct platform_device *pdev)
+ 	init.num_parents = 0;
+ 	priv->hw.init = &init;
+ 
+-	ret = devm_clk_hw_register(NULL, &priv->hw);
++	ret = devm_clk_hw_register(dev, &priv->hw);
+ 	if (ret)
+ 		goto pm_put;
+ 
+diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
+index 1c192a42f11e0..a3734014db477 100644
+--- a/drivers/cpufreq/cpufreq-dt-platdev.c
++++ b/drivers/cpufreq/cpufreq-dt-platdev.c
+@@ -136,6 +136,7 @@ static const struct of_device_id blacklist[] __initconst = {
+ 	{ .compatible = "qcom,qcs404", },
+ 	{ .compatible = "qcom,sc7180", },
+ 	{ .compatible = "qcom,sdm845", },
++	{ .compatible = "qcom,sm8150", },
+ 
+ 	{ .compatible = "st,stih407", },
+ 	{ .compatible = "st,stih410", },
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index ffd310279a69c..97723f2b5ece7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2619,12 +2619,11 @@ static void amdgpu_device_delay_enable_gfx_off(struct work_struct *work)
+ 	struct amdgpu_device *adev =
+ 		container_of(work, struct amdgpu_device, gfx.gfx_off_delay_work.work);
+ 
+-	mutex_lock(&adev->gfx.gfx_off_mutex);
+-	if (!adev->gfx.gfx_off_state && !adev->gfx.gfx_off_req_count) {
+-		if (!amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_GFX, true))
+-			adev->gfx.gfx_off_state = true;
+-	}
+-	mutex_unlock(&adev->gfx.gfx_off_mutex);
++	WARN_ON_ONCE(adev->gfx.gfx_off_state);
++	WARN_ON_ONCE(adev->gfx.gfx_off_req_count);
++
++	if (!amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_GFX, true))
++		adev->gfx.gfx_off_state = true;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index c485ec86804e5..9f9f55a2b257c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -556,24 +556,38 @@ void amdgpu_gfx_off_ctrl(struct amdgpu_device *adev, bool enable)
+ 
+ 	mutex_lock(&adev->gfx.gfx_off_mutex);
+ 
+-	if (!enable)
+-		adev->gfx.gfx_off_req_count++;
+-	else if (adev->gfx.gfx_off_req_count > 0)
++	if (enable) {
++		/* If the count is already 0, it means there's an imbalance bug somewhere.
++		 * Note that the bug may be in a different caller than the one which triggers the
++		 * WARN_ON_ONCE.
++		 */
++		if (WARN_ON_ONCE(adev->gfx.gfx_off_req_count == 0))
++			goto unlock;
++
+ 		adev->gfx.gfx_off_req_count--;
+ 
+-	if (enable && !adev->gfx.gfx_off_state && !adev->gfx.gfx_off_req_count) {
+-		schedule_delayed_work(&adev->gfx.gfx_off_delay_work, GFX_OFF_DELAY_ENABLE);
+-	} else if (!enable && adev->gfx.gfx_off_state) {
+-		if (!amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_GFX, false)) {
+-			adev->gfx.gfx_off_state = false;
++		if (adev->gfx.gfx_off_req_count == 0 && !adev->gfx.gfx_off_state)
++			schedule_delayed_work(&adev->gfx.gfx_off_delay_work, GFX_OFF_DELAY_ENABLE);
++	} else {
++		if (adev->gfx.gfx_off_req_count == 0) {
++			cancel_delayed_work_sync(&adev->gfx.gfx_off_delay_work);
++
++			if (adev->gfx.gfx_off_state &&
++			    !amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_GFX, false)) {
++				adev->gfx.gfx_off_state = false;
+ 
+-			if (adev->gfx.funcs->init_spm_golden) {
+-				dev_dbg(adev->dev, "GFXOFF is disabled, re-init SPM golden settings\n");
+-				amdgpu_gfx_init_spm_golden(adev);
++				if (adev->gfx.funcs->init_spm_golden) {
++					dev_dbg(adev->dev,
++						"GFXOFF is disabled, re-init SPM golden settings\n");
++					amdgpu_gfx_init_spm_golden(adev);
++				}
+ 			}
+ 		}
++
++		adev->gfx.gfx_off_req_count++;
+ 	}
+ 
++unlock:
+ 	mutex_unlock(&adev->gfx.gfx_off_mutex);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+index 132c269c7c893..4dc27ec4d012d 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+@@ -5122,6 +5122,13 @@ static int vega10_get_power_profile_mode(struct pp_hwmgr *hwmgr, char *buf)
+ 	return size;
+ }
+ 
++static bool vega10_get_power_profile_mode_quirks(struct pp_hwmgr *hwmgr)
++{
++	struct amdgpu_device *adev = hwmgr->adev;
++
++	return (adev->pdev->device == 0x6860);
++}
++
+ static int vega10_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, uint32_t size)
+ {
+ 	struct vega10_hwmgr *data = hwmgr->backend;
+@@ -5158,9 +5165,15 @@ static int vega10_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, ui
+ 	}
+ 
+ out:
+-	smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_SetWorkloadMask,
++	if (vega10_get_power_profile_mode_quirks(hwmgr))
++		smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_SetWorkloadMask,
++						1 << power_profile_mode,
++						NULL);
++	else
++		smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_SetWorkloadMask,
+ 						(!power_profile_mode) ? 0 : 1 << (power_profile_mode - 1),
+ 						NULL);
++
+ 	hwmgr->power_profile_mode = power_profile_mode;
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/drm_ioc32.c b/drivers/gpu/drm/drm_ioc32.c
+index dc734d4828a17..aaf8d625ce1a0 100644
+--- a/drivers/gpu/drm/drm_ioc32.c
++++ b/drivers/gpu/drm/drm_ioc32.c
+@@ -865,8 +865,6 @@ static int compat_drm_wait_vblank(struct file *file, unsigned int cmd,
+ 	req.request.sequence = req32.request.sequence;
+ 	req.request.signal = req32.request.signal;
+ 	err = drm_ioctl_kernel(file, drm_wait_vblank_ioctl, &req, DRM_UNLOCKED);
+-	if (err)
+-		return err;
+ 
+ 	req32.reply.type = req.reply.type;
+ 	req32.reply.sequence = req.reply.sequence;
+@@ -875,7 +873,7 @@ static int compat_drm_wait_vblank(struct file *file, unsigned int cmd,
+ 	if (copy_to_user(argp, &req32, sizeof(req32)))
+ 		return -EFAULT;
+ 
+-	return 0;
++	return err;
+ }
+ 
+ #if defined(CONFIG_X86)
+diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
+index 8015964043eb7..e25385ad2c1e8 100644
+--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
++++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
+@@ -296,6 +296,14 @@ static void intel_timeline_fini(struct intel_timeline *timeline)
+ 		i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
+ 
+ 	i915_vma_put(timeline->hwsp_ggtt);
++
++	/*
++	 * A small race exists between intel_gt_retire_requests_timeout and
++	 * intel_timeline_exit which could result in the syncmap not getting
++	 * free'd. Rather than work to hard to seal this race, simply cleanup
++	 * the syncmap on fini.
++	 */
++	i915_syncmap_free(&timeline->sync);
+ }
+ 
+ struct intel_timeline *
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 5b8cabb099eb1..c2d34c91e840c 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -2202,6 +2202,33 @@ nv50_disp_atomic_commit_tail(struct drm_atomic_state *state)
+ 		interlock[NV50_DISP_INTERLOCK_CORE] = 0;
+ 	}
+ 
++	/* Finish updating head(s)...
++	 *
++	 * NVD is rather picky about both where window assignments can change,
++	 * *and* about certain core and window channel states matching.
++	 *
++	 * The EFI GOP driver on newer GPUs configures window channels with a
++	 * different output format to what we do, and the core channel update
++	 * in the assign_windows case above would result in a state mismatch.
++	 *
++	 * Delay some of the head update until after that point to workaround
++	 * the issue.  This only affects the initial modeset.
++	 *
++	 * TODO: handle this better when adding flexible window mapping
++	 */
++	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
++		struct nv50_head_atom *asyh = nv50_head_atom(new_crtc_state);
++		struct nv50_head *head = nv50_head(crtc);
++
++		NV_ATOMIC(drm, "%s: set %04x (clr %04x)\n", crtc->name,
++			  asyh->set.mask, asyh->clr.mask);
++
++		if (asyh->set.mask) {
++			nv50_head_flush_set_wndw(head, asyh);
++			interlock[NV50_DISP_INTERLOCK_CORE] = 1;
++		}
++	}
++
+ 	/* Update plane(s). */
+ 	for_each_new_plane_in_state(state, plane, new_plane_state, i) {
+ 		struct nv50_wndw_atom *asyw = nv50_wndw_atom(new_plane_state);
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/head.c b/drivers/gpu/drm/nouveau/dispnv50/head.c
+index 841edfaf5b9d4..61826cac3061a 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/head.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/head.c
+@@ -49,11 +49,8 @@ nv50_head_flush_clr(struct nv50_head *head,
+ }
+ 
+ void
+-nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh)
++nv50_head_flush_set_wndw(struct nv50_head *head, struct nv50_head_atom *asyh)
+ {
+-	if (asyh->set.view   ) head->func->view    (head, asyh);
+-	if (asyh->set.mode   ) head->func->mode    (head, asyh);
+-	if (asyh->set.core   ) head->func->core_set(head, asyh);
+ 	if (asyh->set.olut   ) {
+ 		asyh->olut.offset = nv50_lut_load(&head->olut,
+ 						  asyh->olut.buffer,
+@@ -61,6 +58,14 @@ nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh)
+ 						  asyh->olut.load);
+ 		head->func->olut_set(head, asyh);
+ 	}
++}
++
++void
++nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh)
++{
++	if (asyh->set.view   ) head->func->view    (head, asyh);
++	if (asyh->set.mode   ) head->func->mode    (head, asyh);
++	if (asyh->set.core   ) head->func->core_set(head, asyh);
+ 	if (asyh->set.curs   ) head->func->curs_set(head, asyh);
+ 	if (asyh->set.base   ) head->func->base    (head, asyh);
+ 	if (asyh->set.ovly   ) head->func->ovly    (head, asyh);
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/head.h b/drivers/gpu/drm/nouveau/dispnv50/head.h
+index dae841dc05fdf..0bac6be9ba34d 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/head.h
++++ b/drivers/gpu/drm/nouveau/dispnv50/head.h
+@@ -21,6 +21,7 @@ struct nv50_head {
+ 
+ struct nv50_head *nv50_head_create(struct drm_device *, int index);
+ void nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh);
++void nv50_head_flush_set_wndw(struct nv50_head *head, struct nv50_head_atom *asyh);
+ void nv50_head_flush_clr(struct nv50_head *head,
+ 			 struct nv50_head_atom *asyh, bool flush);
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
+index 3800aeb507d01..2a7b8bc3ec4dd 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
+@@ -419,7 +419,7 @@ nvkm_dp_train(struct nvkm_dp *dp, u32 dataKBps)
+ 	return ret;
+ }
+ 
+-static void
++void
+ nvkm_dp_disable(struct nvkm_outp *outp, struct nvkm_ior *ior)
+ {
+ 	struct nvkm_dp *dp = nvkm_dp(outp);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.h
+index 428b3f488f033..e484d0c3b0d42 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.h
+@@ -32,6 +32,7 @@ struct nvkm_dp {
+ 
+ int nvkm_dp_new(struct nvkm_disp *, int index, struct dcb_output *,
+ 		struct nvkm_outp **);
++void nvkm_dp_disable(struct nvkm_outp *, struct nvkm_ior *);
+ 
+ /* DPCD Receiver Capabilities */
+ #define DPCD_RC00_DPCD_REV                                              0x00000
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
+index dffcac249211c..129982fef7ef6 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
+@@ -22,6 +22,7 @@
+  * Authors: Ben Skeggs
+  */
+ #include "outp.h"
++#include "dp.h"
+ #include "ior.h"
+ 
+ #include <subdev/bios.h>
+@@ -257,6 +258,14 @@ nvkm_outp_init_route(struct nvkm_outp *outp)
+ 	if (!ior->arm.head || ior->arm.proto != proto) {
+ 		OUTP_DBG(outp, "no heads (%x %d %d)", ior->arm.head,
+ 			 ior->arm.proto, proto);
++
++		/* The EFI GOP driver on Ampere can leave unused DP links routed,
++		 * which we don't expect.  The DisableLT IED script *should* get
++		 * us back to where we need to be.
++		 */
++		if (ior->func->route.get && !ior->arm.head && outp->info.type == DCB_OUTPUT_DP)
++			nvkm_dp_disable(outp, ior);
++
+ 		return;
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 266de55f57192..441952a5eca4a 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -1691,6 +1691,7 @@ int bnxt_re_create_srq(struct ib_srq *ib_srq,
+ 	if (nq)
+ 		nq->budget++;
+ 	atomic_inc(&rdev->srq_count);
++	spin_lock_init(&srq->lock);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 1fadca8af71a1..9ef6aea29ff16 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -1410,7 +1410,6 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 wqe_mode)
+ 	memset(&rattr, 0, sizeof(rattr));
+ 	rc = bnxt_re_register_netdev(rdev);
+ 	if (rc) {
+-		rtnl_unlock();
+ 		ibdev_err(&rdev->ibdev,
+ 			  "Failed to register with netedev: %#x\n", rc);
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/hw/efa/efa_main.c b/drivers/infiniband/hw/efa/efa_main.c
+index 6faed3a81e087..ffdd18f4217f5 100644
+--- a/drivers/infiniband/hw/efa/efa_main.c
++++ b/drivers/infiniband/hw/efa/efa_main.c
+@@ -377,6 +377,7 @@ static int efa_enable_msix(struct efa_dev *dev)
+ 	}
+ 
+ 	if (irq_num != msix_vecs) {
++		efa_disable_msix(dev);
+ 		dev_err(&dev->pdev->dev,
+ 			"Allocated %d MSI-X (out of %d requested)\n",
+ 			irq_num, msix_vecs);
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index a307d4c8b15a7..ac6f87137b637 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -3055,6 +3055,7 @@ static void __sdma_process_event(struct sdma_engine *sde,
+ static int _extend_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ {
+ 	int i;
++	struct sdma_desc *descp;
+ 
+ 	/* Handle last descriptor */
+ 	if (unlikely((tx->num_desc == (MAX_DESC - 1)))) {
+@@ -3075,12 +3076,10 @@ static int _extend_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ 	if (unlikely(tx->num_desc == MAX_DESC))
+ 		goto enomem;
+ 
+-	tx->descp = kmalloc_array(
+-			MAX_DESC,
+-			sizeof(struct sdma_desc),
+-			GFP_ATOMIC);
+-	if (!tx->descp)
++	descp = kmalloc_array(MAX_DESC, sizeof(struct sdma_desc), GFP_ATOMIC);
++	if (!descp)
+ 		goto enomem;
++	tx->descp = descp;
+ 
+ 	/* reserve last descriptor for coalescing */
+ 	tx->desc_limit = MAX_DESC - 1;
+diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
+index 97803f213d9d4..c802db9aaeb04 100644
+--- a/drivers/misc/lkdtm/core.c
++++ b/drivers/misc/lkdtm/core.c
+@@ -173,9 +173,7 @@ static const struct crashtype crashtypes[] = {
+ 	CRASHTYPE(USERCOPY_KERNEL),
+ 	CRASHTYPE(STACKLEAK_ERASING),
+ 	CRASHTYPE(CFI_FORWARD_PROTO),
+-#ifdef CONFIG_X86_32
+ 	CRASHTYPE(DOUBLE_FAULT),
+-#endif
+ };
+ 
+ 
+diff --git a/drivers/mmc/host/sdhci-iproc.c b/drivers/mmc/host/sdhci-iproc.c
+index 9f0eef97ebddd..b9eb2ec61a83a 100644
+--- a/drivers/mmc/host/sdhci-iproc.c
++++ b/drivers/mmc/host/sdhci-iproc.c
+@@ -295,8 +295,7 @@ static const struct sdhci_ops sdhci_iproc_bcm2711_ops = {
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_bcm2711_pltfm_data = {
+-	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12 |
+-		  SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
++	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
+ 	.ops = &sdhci_iproc_bcm2711_ops,
+ };
+ 
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index 558d8a14810b6..8794a1f6eacd9 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -419,7 +419,7 @@ static int spinand_check_ecc_status(struct spinand_device *spinand, u8 status)
+ 		 * fixed, so let's return the maximum possible value so that
+ 		 * wear-leveling layers move the data immediately.
+ 		 */
+-		return nanddev_get_ecc_conf(nand)->strength;
++		return nanddev_get_ecc_requirements(nand)->strength;
+ 
+ 	case STATUS_ECC_UNCOR_ERROR:
+ 		return -EBADMSG;
+@@ -1090,8 +1090,8 @@ static int spinand_init(struct spinand_device *spinand)
+ 	mtd->oobavail = ret;
+ 
+ 	/* Propagate ECC information to mtd_info */
+-	mtd->ecc_strength = nanddev_get_ecc_conf(nand)->strength;
+-	mtd->ecc_step_size = nanddev_get_ecc_conf(nand)->step_size;
++	mtd->ecc_strength = nanddev_get_ecc_requirements(nand)->strength;
++	mtd->ecc_step_size = nanddev_get_ecc_requirements(nand)->step_size;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/mtd/nand/spi/macronix.c b/drivers/mtd/nand/spi/macronix.c
+index 8e801e4c3a006..cd7a9cacc3fbf 100644
+--- a/drivers/mtd/nand/spi/macronix.c
++++ b/drivers/mtd/nand/spi/macronix.c
+@@ -84,11 +84,11 @@ static int mx35lf1ge4ab_ecc_get_status(struct spinand_device *spinand,
+ 		 * data around if it's not necessary.
+ 		 */
+ 		if (mx35lf1ge4ab_get_eccsr(spinand, &eccsr))
+-			return nanddev_get_ecc_conf(nand)->strength;
++			return nanddev_get_ecc_requirements(nand)->strength;
+ 
+-		if (WARN_ON(eccsr > nanddev_get_ecc_conf(nand)->strength ||
++		if (WARN_ON(eccsr > nanddev_get_ecc_requirements(nand)->strength ||
+ 			    !eccsr))
+-			return nanddev_get_ecc_conf(nand)->strength;
++			return nanddev_get_ecc_requirements(nand)->strength;
+ 
+ 		return eccsr;
+ 
+diff --git a/drivers/mtd/nand/spi/toshiba.c b/drivers/mtd/nand/spi/toshiba.c
+index 21fde28756742..6fe7bd2a94d28 100644
+--- a/drivers/mtd/nand/spi/toshiba.c
++++ b/drivers/mtd/nand/spi/toshiba.c
+@@ -90,12 +90,12 @@ static int tx58cxgxsxraix_ecc_get_status(struct spinand_device *spinand,
+ 		 * data around if it's not necessary.
+ 		 */
+ 		if (spi_mem_exec_op(spinand->spimem, &op))
+-			return nanddev_get_ecc_conf(nand)->strength;
++			return nanddev_get_ecc_requirements(nand)->strength;
+ 
+ 		mbf >>= 4;
+ 
+-		if (WARN_ON(mbf > nanddev_get_ecc_conf(nand)->strength || !mbf))
+-			return nanddev_get_ecc_conf(nand)->strength;
++		if (WARN_ON(mbf > nanddev_get_ecc_requirements(nand)->strength || !mbf))
++			return nanddev_get_ecc_requirements(nand)->strength;
+ 
+ 		return mbf;
+ 
+diff --git a/drivers/net/can/usb/esd_usb2.c b/drivers/net/can/usb/esd_usb2.c
+index 485e20e0dec2c..8847942a8d97e 100644
+--- a/drivers/net/can/usb/esd_usb2.c
++++ b/drivers/net/can/usb/esd_usb2.c
+@@ -224,8 +224,8 @@ static void esd_usb2_rx_event(struct esd_usb2_net_priv *priv,
+ 	if (id == ESD_EV_CAN_ERROR_EXT) {
+ 		u8 state = msg->msg.rx.data[0];
+ 		u8 ecc = msg->msg.rx.data[1];
+-		u8 txerr = msg->msg.rx.data[2];
+-		u8 rxerr = msg->msg.rx.data[3];
++		u8 rxerr = msg->msg.rx.data[2];
++		u8 txerr = msg->msg.rx.data[3];
+ 
+ 		skb = alloc_can_err_skb(priv->netdev, &cf);
+ 		if (skb == NULL) {
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 3fa2f81c8b47d..73de09093c350 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1161,11 +1161,8 @@ mt7530_port_bridge_leave(struct dsa_switch *ds, int port,
+ 		/* Remove this port from the port matrix of the other ports
+ 		 * in the same bridge. If the port is disabled, port matrix
+ 		 * is kept and not being setup until the port becomes enabled.
+-		 * And the other port's port matrix cannot be broken when the
+-		 * other port is still a VLAN-aware port.
+ 		 */
+-		if (dsa_is_user_port(ds, i) && i != port &&
+-		   !dsa_port_is_vlan_filtering(dsa_to_port(ds, i))) {
++		if (dsa_is_user_port(ds, i) && i != port) {
+ 			if (dsa_to_port(ds, i)->bridge_dev != bridge)
+ 				continue;
+ 			if (priv->ports[i].enable)
+diff --git a/drivers/net/ethernet/apm/xgene-v2/main.c b/drivers/net/ethernet/apm/xgene-v2/main.c
+index 860c18fb7aae9..80399c8980bd3 100644
+--- a/drivers/net/ethernet/apm/xgene-v2/main.c
++++ b/drivers/net/ethernet/apm/xgene-v2/main.c
+@@ -677,11 +677,13 @@ static int xge_probe(struct platform_device *pdev)
+ 	ret = register_netdev(ndev);
+ 	if (ret) {
+ 		netdev_err(ndev, "Failed to register netdev\n");
+-		goto err;
++		goto err_mdio_remove;
+ 	}
+ 
+ 	return 0;
+ 
++err_mdio_remove:
++	xge_mdio_remove(ndev);
+ err:
+ 	free_netdev(ndev);
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 6698afad43796..3c28a1c3c1ed7 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -5072,6 +5072,7 @@ static int adap_init0(struct adapter *adap, int vpd_skip)
+ 		ret = -ENOMEM;
+ 		goto bye;
+ 	}
++	bitmap_zero(adap->sge.blocked_fl, adap->sge.egr_sz);
+ #endif
+ 
+ 	params[0] = FW_PARAM_PFVF(CLIP_START);
+@@ -6792,13 +6793,11 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	setup_memwin(adapter);
+ 	err = adap_init0(adapter, 0);
+-#ifdef CONFIG_DEBUG_FS
+-	bitmap_zero(adapter->sge.blocked_fl, adapter->sge.egr_sz);
+-#endif
+-	setup_memwin_rdma(adapter);
+ 	if (err)
+ 		goto out_unmap_bar;
+ 
++	setup_memwin_rdma(adapter);
++
+ 	/* configure SGE_STAT_CFG_A to read WC stats */
+ 	if (!is_t4(adapter->params.chip))
+ 		t4_write_reg(adapter, SGE_STAT_CFG_A, STATSOURCE_T5_V(7) |
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+index e6321dda0f3f7..6f9f759ce0c0e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+@@ -521,9 +521,13 @@ static void hclge_cmd_uninit_regs(struct hclge_hw *hw)
+ 
+ void hclge_cmd_uninit(struct hclge_dev *hdev)
+ {
++	set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state);
++	/* wait to ensure that the firmware completes the possible left
++	 * over commands.
++	 */
++	msleep(HCLGE_CMDQ_CLEAR_WAIT_TIME);
+ 	spin_lock_bh(&hdev->hw.cmq.csq.lock);
+ 	spin_lock(&hdev->hw.cmq.crq.lock);
+-	set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state);
+ 	hclge_cmd_uninit_regs(&hdev->hw);
+ 	spin_unlock(&hdev->hw.cmq.crq.lock);
+ 	spin_unlock_bh(&hdev->hw.cmq.csq.lock);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+index 36690fc5c1aff..3d70c3a47d631 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+@@ -9,6 +9,7 @@
+ #include "hnae3.h"
+ 
+ #define HCLGE_CMDQ_TX_TIMEOUT		30000
++#define HCLGE_CMDQ_CLEAR_WAIT_TIME	200
+ #define HCLGE_DESC_DATA_LEN		6
+ 
+ struct hclge_dev;
+@@ -262,6 +263,9 @@ enum hclge_opcode_type {
+ 	/* Led command */
+ 	HCLGE_OPC_LED_STATUS_CFG	= 0xB000,
+ 
++	/* clear hardware resource command */
++	HCLGE_OPC_CLEAR_HW_RESOURCE	= 0x700B,
++
+ 	/* NCL config command */
+ 	HCLGE_OPC_QUERY_NCL_CONFIG	= 0x7011,
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index 3606240025a8a..a93c7eb4e7cbb 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -283,21 +283,12 @@ static int hclge_ieee_getpfc(struct hnae3_handle *h, struct ieee_pfc *pfc)
+ 	u64 requests[HNAE3_MAX_TC], indications[HNAE3_MAX_TC];
+ 	struct hclge_vport *vport = hclge_get_vport(h);
+ 	struct hclge_dev *hdev = vport->back;
+-	u8 i, j, pfc_map, *prio_tc;
+ 	int ret;
++	u8 i;
+ 
+ 	memset(pfc, 0, sizeof(*pfc));
+ 	pfc->pfc_cap = hdev->pfc_max;
+-	prio_tc = hdev->tm_info.prio_tc;
+-	pfc_map = hdev->tm_info.hw_pfc_map;
+-
+-	/* Pfc setting is based on TC */
+-	for (i = 0; i < hdev->tm_info.num_tc; i++) {
+-		for (j = 0; j < HNAE3_MAX_USER_PRIO; j++) {
+-			if ((prio_tc[j] == i) && (pfc_map & BIT(i)))
+-				pfc->pfc_en |= BIT(j);
+-		}
+-	}
++	pfc->pfc_en = hdev->tm_info.pfc_en;
+ 
+ 	ret = hclge_pfc_tx_stats_get(hdev, requests);
+ 	if (ret)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 98190aa907818..2261de5caf863 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -8792,7 +8792,11 @@ static int hclge_init_vlan_config(struct hclge_dev *hdev)
+ static void hclge_add_vport_vlan_table(struct hclge_vport *vport, u16 vlan_id,
+ 				       bool writen_to_tbl)
+ {
+-	struct hclge_vport_vlan_cfg *vlan;
++	struct hclge_vport_vlan_cfg *vlan, *tmp;
++
++	list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node)
++		if (vlan->vlan_id == vlan_id)
++			return;
+ 
+ 	vlan = kzalloc(sizeof(*vlan), GFP_KERNEL);
+ 	if (!vlan)
+@@ -10030,6 +10034,28 @@ static void hclge_clear_resetting_state(struct hclge_dev *hdev)
+ 	}
+ }
+ 
++static int hclge_clear_hw_resource(struct hclge_dev *hdev)
++{
++	struct hclge_desc desc;
++	int ret;
++
++	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CLEAR_HW_RESOURCE, false);
++
++	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
++	/* This new command is only supported by new firmware, it will
++	 * fail with older firmware. Error value -EOPNOSUPP can only be
++	 * returned by older firmware running this command, to keep code
++	 * backward compatible we will override this value and return
++	 * success.
++	 */
++	if (ret && ret != -EOPNOTSUPP) {
++		dev_err(&hdev->pdev->dev,
++			"failed to clear hw resource, ret = %d\n", ret);
++		return ret;
++	}
++	return 0;
++}
++
+ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
+ {
+ 	struct pci_dev *pdev = ae_dev->pdev;
+@@ -10067,6 +10093,10 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
+ 	if (ret)
+ 		goto err_cmd_uninit;
+ 
++	ret  = hclge_clear_hw_resource(hdev);
++	if (ret)
++		goto err_cmd_uninit;
++
+ 	ret = hclge_get_cap(hdev);
+ 	if (ret)
+ 		goto err_cmd_uninit;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
+index 66866c1cfb128..cae6db17cb190 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
+@@ -472,12 +472,17 @@ static void hclgevf_cmd_uninit_regs(struct hclgevf_hw *hw)
+ 
+ void hclgevf_cmd_uninit(struct hclgevf_dev *hdev)
+ {
++	set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state);
++	/* wait to ensure that the firmware completes the possible left
++	 * over commands.
++	 */
++	msleep(HCLGEVF_CMDQ_CLEAR_WAIT_TIME);
+ 	spin_lock_bh(&hdev->hw.cmq.csq.lock);
+ 	spin_lock(&hdev->hw.cmq.crq.lock);
+-	set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state);
+ 	hclgevf_cmd_uninit_regs(&hdev->hw);
+ 	spin_unlock(&hdev->hw.cmq.crq.lock);
+ 	spin_unlock_bh(&hdev->hw.cmq.csq.lock);
++
+ 	hclgevf_free_cmd_desc(&hdev->hw.cmq.csq);
+ 	hclgevf_free_cmd_desc(&hdev->hw.cmq.crq);
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
+index 9460c128c0955..f90ff8a84b7ed 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
+@@ -8,6 +8,7 @@
+ #include "hnae3.h"
+ 
+ #define HCLGEVF_CMDQ_TX_TIMEOUT		30000
++#define HCLGEVF_CMDQ_CLEAR_WAIT_TIME	200
+ #define HCLGEVF_CMDQ_RX_INVLD_B		0
+ #define HCLGEVF_CMDQ_RX_OUTVLD_B	1
+ 
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 6fb46682b058a..854c585de2e13 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -1006,6 +1006,8 @@ static s32 e1000_platform_pm_pch_lpt(struct e1000_hw *hw, bool link)
+ {
+ 	u32 reg = link << (E1000_LTRV_REQ_SHIFT + E1000_LTRV_NOSNOOP_SHIFT) |
+ 	    link << E1000_LTRV_REQ_SHIFT | E1000_LTRV_SEND;
++	u16 max_ltr_enc_d = 0;	/* maximum LTR decoded by platform */
++	u16 lat_enc_d = 0;	/* latency decoded */
+ 	u16 lat_enc = 0;	/* latency encoded */
+ 
+ 	if (link) {
+@@ -1059,7 +1061,17 @@ static s32 e1000_platform_pm_pch_lpt(struct e1000_hw *hw, bool link)
+ 				     E1000_PCI_LTR_CAP_LPT + 2, &max_nosnoop);
+ 		max_ltr_enc = max_t(u16, max_snoop, max_nosnoop);
+ 
+-		if (lat_enc > max_ltr_enc)
++		lat_enc_d = (lat_enc & E1000_LTRV_VALUE_MASK) *
++			     (1U << (E1000_LTRV_SCALE_FACTOR *
++			     ((lat_enc & E1000_LTRV_SCALE_MASK)
++			     >> E1000_LTRV_SCALE_SHIFT)));
++
++		max_ltr_enc_d = (max_ltr_enc & E1000_LTRV_VALUE_MASK) *
++				 (1U << (E1000_LTRV_SCALE_FACTOR *
++				 ((max_ltr_enc & E1000_LTRV_SCALE_MASK)
++				 >> E1000_LTRV_SCALE_SHIFT)));
++
++		if (lat_enc_d > max_ltr_enc_d)
+ 			lat_enc = max_ltr_enc;
+ 	}
+ 
+@@ -4122,13 +4134,17 @@ static s32 e1000_validate_nvm_checksum_ich8lan(struct e1000_hw *hw)
+ 		return ret_val;
+ 
+ 	if (!(data & valid_csum_mask)) {
+-		data |= valid_csum_mask;
+-		ret_val = e1000_write_nvm(hw, word, 1, &data);
+-		if (ret_val)
+-			return ret_val;
+-		ret_val = e1000e_update_nvm_checksum(hw);
+-		if (ret_val)
+-			return ret_val;
++		e_dbg("NVM Checksum Invalid\n");
++
++		if (hw->mac.type < e1000_pch_cnp) {
++			data |= valid_csum_mask;
++			ret_val = e1000_write_nvm(hw, word, 1, &data);
++			if (ret_val)
++				return ret_val;
++			ret_val = e1000e_update_nvm_checksum(hw);
++			if (ret_val)
++				return ret_val;
++		}
+ 	}
+ 
+ 	return e1000e_validate_nvm_checksum_generic(hw);
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.h b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+index 1502895eb45dd..e757896287eba 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.h
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+@@ -274,8 +274,11 @@
+ 
+ /* Latency Tolerance Reporting */
+ #define E1000_LTRV			0x000F8
++#define E1000_LTRV_VALUE_MASK		0x000003FF
+ #define E1000_LTRV_SCALE_MAX		5
+ #define E1000_LTRV_SCALE_FACTOR		5
++#define E1000_LTRV_SCALE_SHIFT		10
++#define E1000_LTRV_SCALE_MASK		0x00001C00
+ #define E1000_LTRV_REQ_SHIFT		15
+ #define E1000_LTRV_NOSNOOP_SHIFT	16
+ #define E1000_LTRV_SEND			(1 << 30)
+diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c
+index 511da59bd6f28..f18ce43b7e740 100644
+--- a/drivers/net/ethernet/intel/ice/ice_devlink.c
++++ b/drivers/net/ethernet/intel/ice/ice_devlink.c
+@@ -23,7 +23,9 @@ static int ice_info_pba(struct ice_pf *pf, char *buf, size_t len)
+ 
+ 	status = ice_read_pba_string(hw, (u8 *)buf, len);
+ 	if (status)
+-		return -EIO;
++		/* We failed to locate the PBA, so just skip this entry */
++		dev_dbg(ice_pf_to_dev(pf), "Failed to read Product Board Assembly string, status %s\n",
++			ice_stat_str(status));
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index b9fe2785f5735..013dd29553814 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -138,6 +138,9 @@ static void igc_release_hw_control(struct igc_adapter *adapter)
+ 	struct igc_hw *hw = &adapter->hw;
+ 	u32 ctrl_ext;
+ 
++	if (!pci_device_is_present(adapter->pdev))
++		return;
++
+ 	/* Let firmware take over control of h/w */
+ 	ctrl_ext = rd32(IGC_CTRL_EXT);
+ 	wr32(IGC_CTRL_EXT,
+@@ -3782,26 +3785,29 @@ void igc_down(struct igc_adapter *adapter)
+ 
+ 	igc_ptp_suspend(adapter);
+ 
+-	/* disable receives in the hardware */
+-	rctl = rd32(IGC_RCTL);
+-	wr32(IGC_RCTL, rctl & ~IGC_RCTL_EN);
+-	/* flush and sleep below */
+-
++	if (pci_device_is_present(adapter->pdev)) {
++		/* disable receives in the hardware */
++		rctl = rd32(IGC_RCTL);
++		wr32(IGC_RCTL, rctl & ~IGC_RCTL_EN);
++		/* flush and sleep below */
++	}
+ 	/* set trans_start so we don't get spurious watchdogs during reset */
+ 	netif_trans_update(netdev);
+ 
+ 	netif_carrier_off(netdev);
+ 	netif_tx_stop_all_queues(netdev);
+ 
+-	/* disable transmits in the hardware */
+-	tctl = rd32(IGC_TCTL);
+-	tctl &= ~IGC_TCTL_EN;
+-	wr32(IGC_TCTL, tctl);
+-	/* flush both disables and wait for them to finish */
+-	wrfl();
+-	usleep_range(10000, 20000);
++	if (pci_device_is_present(adapter->pdev)) {
++		/* disable transmits in the hardware */
++		tctl = rd32(IGC_TCTL);
++		tctl &= ~IGC_TCTL_EN;
++		wr32(IGC_TCTL, tctl);
++		/* flush both disables and wait for them to finish */
++		wrfl();
++		usleep_range(10000, 20000);
+ 
+-	igc_irq_disable(adapter);
++		igc_irq_disable(adapter);
++	}
+ 
+ 	adapter->flags &= ~IGC_FLAG_NEED_LINK_UPDATE;
+ 
+@@ -4755,7 +4761,7 @@ static bool validate_schedule(struct igc_adapter *adapter,
+ 		if (e->command != TC_TAPRIO_CMD_SET_GATES)
+ 			return false;
+ 
+-		for (i = 0; i < IGC_MAX_TX_QUEUES; i++) {
++		for (i = 0; i < adapter->num_tx_queues; i++) {
+ 			if (e->gate_mask & BIT(i))
+ 				queue_uses[i]++;
+ 
+@@ -4812,7 +4818,7 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
+ 
+ 		end_time += e->interval;
+ 
+-		for (i = 0; i < IGC_MAX_TX_QUEUES; i++) {
++		for (i = 0; i < adapter->num_tx_queues; i++) {
+ 			struct igc_ring *ring = adapter->tx_ring[i];
+ 
+ 			if (!(e->gate_mask & BIT(i)))
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index 545f4d0e67cf4..4ab46eee3d938 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -557,7 +557,8 @@ void igc_ptp_suspend(struct igc_adapter *adapter)
+ 	adapter->ptp_tx_skb = NULL;
+ 	clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
+ 
+-	igc_ptp_time_save(adapter);
++	if (pci_device_is_present(adapter->pdev))
++		igc_ptp_time_save(adapter);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index c6b735b305156..74e266c0b8e10 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -103,7 +103,7 @@
+ #define      MVNETA_DESC_SWAP                    BIT(6)
+ #define      MVNETA_TX_BRST_SZ_MASK(burst)       ((burst) << 22)
+ #define MVNETA_PORT_STATUS                       0x2444
+-#define      MVNETA_TX_IN_PRGRS                  BIT(1)
++#define      MVNETA_TX_IN_PRGRS                  BIT(0)
+ #define      MVNETA_TX_FIFO_EMPTY                BIT(8)
+ #define MVNETA_RX_MIN_FRAME_SIZE                 0x247c
+ /* Only exists on Armada XP and Armada 370 */
+diff --git a/drivers/net/ethernet/mscc/ocelot_io.c b/drivers/net/ethernet/mscc/ocelot_io.c
+index ea4e83410fe4d..7390fa3980ec5 100644
+--- a/drivers/net/ethernet/mscc/ocelot_io.c
++++ b/drivers/net/ethernet/mscc/ocelot_io.c
+@@ -21,7 +21,7 @@ u32 __ocelot_read_ix(struct ocelot *ocelot, u32 reg, u32 offset)
+ 		    ocelot->map[target][reg & REG_MASK] + offset, &val);
+ 	return val;
+ }
+-EXPORT_SYMBOL(__ocelot_read_ix);
++EXPORT_SYMBOL_GPL(__ocelot_read_ix);
+ 
+ void __ocelot_write_ix(struct ocelot *ocelot, u32 val, u32 reg, u32 offset)
+ {
+@@ -32,7 +32,7 @@ void __ocelot_write_ix(struct ocelot *ocelot, u32 val, u32 reg, u32 offset)
+ 	regmap_write(ocelot->targets[target],
+ 		     ocelot->map[target][reg & REG_MASK] + offset, val);
+ }
+-EXPORT_SYMBOL(__ocelot_write_ix);
++EXPORT_SYMBOL_GPL(__ocelot_write_ix);
+ 
+ void __ocelot_rmw_ix(struct ocelot *ocelot, u32 val, u32 mask, u32 reg,
+ 		     u32 offset)
+@@ -45,7 +45,7 @@ void __ocelot_rmw_ix(struct ocelot *ocelot, u32 val, u32 mask, u32 reg,
+ 			   ocelot->map[target][reg & REG_MASK] + offset,
+ 			   mask, val);
+ }
+-EXPORT_SYMBOL(__ocelot_rmw_ix);
++EXPORT_SYMBOL_GPL(__ocelot_rmw_ix);
+ 
+ u32 ocelot_port_readl(struct ocelot_port *port, u32 reg)
+ {
+@@ -58,7 +58,7 @@ u32 ocelot_port_readl(struct ocelot_port *port, u32 reg)
+ 	regmap_read(port->target, ocelot->map[target][reg & REG_MASK], &val);
+ 	return val;
+ }
+-EXPORT_SYMBOL(ocelot_port_readl);
++EXPORT_SYMBOL_GPL(ocelot_port_readl);
+ 
+ void ocelot_port_writel(struct ocelot_port *port, u32 val, u32 reg)
+ {
+@@ -69,7 +69,7 @@ void ocelot_port_writel(struct ocelot_port *port, u32 val, u32 reg)
+ 
+ 	regmap_write(port->target, ocelot->map[target][reg & REG_MASK], val);
+ }
+-EXPORT_SYMBOL(ocelot_port_writel);
++EXPORT_SYMBOL_GPL(ocelot_port_writel);
+ 
+ void ocelot_port_rmwl(struct ocelot_port *port, u32 val, u32 mask, u32 reg)
+ {
+@@ -77,7 +77,7 @@ void ocelot_port_rmwl(struct ocelot_port *port, u32 val, u32 mask, u32 reg)
+ 
+ 	ocelot_port_writel(port, (cur & (~mask)) | val, reg);
+ }
+-EXPORT_SYMBOL(ocelot_port_rmwl);
++EXPORT_SYMBOL_GPL(ocelot_port_rmwl);
+ 
+ u32 __ocelot_target_read_ix(struct ocelot *ocelot, enum ocelot_target target,
+ 			    u32 reg, u32 offset)
+@@ -128,7 +128,7 @@ int ocelot_regfields_init(struct ocelot *ocelot,
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL(ocelot_regfields_init);
++EXPORT_SYMBOL_GPL(ocelot_regfields_init);
+ 
+ static struct regmap_config ocelot_regmap_config = {
+ 	.reg_bits	= 32,
+@@ -148,4 +148,4 @@ struct regmap *ocelot_regmap_init(struct ocelot *ocelot, struct resource *res)
+ 
+ 	return devm_regmap_init_mmio(ocelot->dev, regs, &ocelot_regmap_config);
+ }
+-EXPORT_SYMBOL(ocelot_regmap_init);
++EXPORT_SYMBOL_GPL(ocelot_regmap_init);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+index 49783f365079e..f2c8273dce67d 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+@@ -327,6 +327,9 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
+ 	unsigned long flags;
+ 	int rc = -EINVAL;
+ 
++	if (!p_ll2_conn)
++		return rc;
++
+ 	spin_lock_irqsave(&p_tx->lock, flags);
+ 	if (p_tx->b_completing_packet) {
+ 		rc = -EBUSY;
+@@ -500,7 +503,16 @@ static int qed_ll2_rxq_completion(struct qed_hwfn *p_hwfn, void *cookie)
+ 	unsigned long flags = 0;
+ 	int rc = 0;
+ 
++	if (!p_ll2_conn)
++		return rc;
++
+ 	spin_lock_irqsave(&p_rx->lock, flags);
++
++	if (!QED_LL2_RX_REGISTERED(p_ll2_conn)) {
++		spin_unlock_irqrestore(&p_rx->lock, flags);
++		return 0;
++	}
++
+ 	cq_new_idx = le16_to_cpu(*p_rx->p_fw_cons);
+ 	cq_old_idx = qed_chain_get_cons_idx(&p_rx->rcq_chain);
+ 
+@@ -821,6 +833,9 @@ static int qed_ll2_lb_rxq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
+ 	struct qed_ll2_info *p_ll2_conn = (struct qed_ll2_info *)p_cookie;
+ 	int rc;
+ 
++	if (!p_ll2_conn)
++		return 0;
++
+ 	if (!QED_LL2_RX_REGISTERED(p_ll2_conn))
+ 		return 0;
+ 
+@@ -844,6 +859,9 @@ static int qed_ll2_lb_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
+ 	u16 new_idx = 0, num_bds = 0;
+ 	int rc;
+ 
++	if (!p_ll2_conn)
++		return 0;
++
+ 	if (!QED_LL2_TX_REGISTERED(p_ll2_conn))
+ 		return 0;
+ 
+@@ -1725,6 +1743,8 @@ int qed_ll2_post_rx_buffer(void *cxt,
+ 	if (!p_ll2_conn)
+ 		return -EINVAL;
+ 	p_rx = &p_ll2_conn->rx_queue;
++	if (!p_rx->set_prod_addr)
++		return -EIO;
+ 
+ 	spin_lock_irqsave(&p_rx->lock, flags);
+ 	if (!list_empty(&p_rx->free_descq))
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_rdma.c b/drivers/net/ethernet/qlogic/qed/qed_rdma.c
+index da864d12916b7..4f4b79250a2b2 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_rdma.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_rdma.c
+@@ -1285,8 +1285,7 @@ qed_rdma_create_qp(void *rdma_cxt,
+ 
+ 	if (!rdma_cxt || !in_params || !out_params ||
+ 	    !p_hwfn->p_rdma_info->active) {
+-		DP_ERR(p_hwfn->cdev,
+-		       "qed roce create qp failed due to NULL entry (rdma_cxt=%p, in=%p, out=%p, roce_info=?\n",
++		pr_err("qed roce create qp failed due to NULL entry (rdma_cxt=%p, in=%p, out=%p, roce_info=?\n",
+ 		       rdma_cxt, in_params, out_params);
+ 		return NULL;
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+index 40dc14d1415f3..6399803061158 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+@@ -689,14 +689,18 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ 					 GFP_KERNEL);
+ 		if (!plat->est)
+ 			return -ENOMEM;
++
++		mutex_init(&priv->plat->est->lock);
+ 	} else {
+ 		memset(plat->est, 0, sizeof(*plat->est));
+ 	}
+ 
+ 	size = qopt->num_entries;
+ 
++	mutex_lock(&priv->plat->est->lock);
+ 	priv->plat->est->gcl_size = size;
+ 	priv->plat->est->enable = qopt->enable;
++	mutex_unlock(&priv->plat->est->lock);
+ 
+ 	for (i = 0; i < size; i++) {
+ 		s64 delta_ns = qopt->entries[i].interval;
+@@ -727,6 +731,7 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ 		priv->plat->est->gcl[i] = delta_ns | (gates << wid);
+ 	}
+ 
++	mutex_lock(&priv->plat->est->lock);
+ 	/* Adjust for real system time */
+ 	priv->ptp_clock_ops.gettime64(&priv->ptp_clock_ops, &current_time);
+ 	current_time_ns = timespec64_to_ktime(current_time);
+@@ -751,19 +756,23 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ 	priv->plat->est->ctr[0] = do_div(ctr, NSEC_PER_SEC);
+ 	priv->plat->est->ctr[1] = (u32)ctr;
+ 
+-	if (fpe && !priv->dma_cap.fpesel)
++	if (fpe && !priv->dma_cap.fpesel) {
++		mutex_unlock(&priv->plat->est->lock);
+ 		return -EOPNOTSUPP;
++	}
+ 
+ 	ret = stmmac_fpe_configure(priv, priv->ioaddr,
+ 				   priv->plat->tx_queues_to_use,
+ 				   priv->plat->rx_queues_to_use, fpe);
+ 	if (ret && fpe) {
++		mutex_unlock(&priv->plat->est->lock);
+ 		netdev_err(priv->dev, "failed to enable Frame Preemption\n");
+ 		return ret;
+ 	}
+ 
+ 	ret = stmmac_est_configure(priv, priv->ioaddr, priv->plat->est,
+ 				   priv->plat->clk_ptp_rate);
++	mutex_unlock(&priv->plat->est->lock);
+ 	if (ret) {
+ 		netdev_err(priv->dev, "failed to configure EST\n");
+ 		goto disable;
+@@ -773,9 +782,14 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ 	return 0;
+ 
+ disable:
+-	priv->plat->est->enable = false;
+-	stmmac_est_configure(priv, priv->ioaddr, priv->plat->est,
+-			     priv->plat->clk_ptp_rate);
++	if (priv->plat->est) {
++		mutex_lock(&priv->plat->est->lock);
++		priv->plat->est->enable = false;
++		stmmac_est_configure(priv, priv->ioaddr, priv->plat->est,
++				     priv->plat->clk_ptp_rate);
++		mutex_unlock(&priv->plat->est->lock);
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c
+index fb1a8c4486ddf..2a748a924f838 100644
+--- a/drivers/net/usb/pegasus.c
++++ b/drivers/net/usb/pegasus.c
+@@ -471,7 +471,7 @@ static int enable_net_traffic(struct net_device *dev, struct usb_device *usb)
+ 		write_mii_word(pegasus, 0, 0x1b, &auxmode);
+ 	}
+ 
+-	return 0;
++	return ret;
+ fail:
+ 	netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
+ 	return ret;
+@@ -861,7 +861,7 @@ static int pegasus_open(struct net_device *net)
+ 	if (!pegasus->rx_skb)
+ 		goto exit;
+ 
+-	res = set_registers(pegasus, EthID, 6, net->dev_addr);
++	set_registers(pegasus, EthID, 6, net->dev_addr);
+ 
+ 	usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb,
+ 			  usb_rcvbulkpipe(pegasus->usb, 1),
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+index 37ce4fe136c5e..cdea741be6f6a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+@@ -38,6 +38,7 @@ static int iwl_pnvm_handle_section(struct iwl_trans *trans, const u8 *data,
+ 	u32 sha1 = 0;
+ 	u16 mac_type = 0, rf_id = 0;
+ 	u8 *pnvm_data = NULL, *tmp;
++	bool hw_match = false;
+ 	u32 size = 0;
+ 	int ret;
+ 
+@@ -84,6 +85,9 @@ static int iwl_pnvm_handle_section(struct iwl_trans *trans, const u8 *data,
+ 				break;
+ 			}
+ 
++			if (hw_match)
++				break;
++
+ 			mac_type = le16_to_cpup((__le16 *)data);
+ 			rf_id = le16_to_cpup((__le16 *)(data + sizeof(__le16)));
+ 
+@@ -91,15 +95,9 @@ static int iwl_pnvm_handle_section(struct iwl_trans *trans, const u8 *data,
+ 				     "Got IWL_UCODE_TLV_HW_TYPE mac_type 0x%0x rf_id 0x%0x\n",
+ 				     mac_type, rf_id);
+ 
+-			if (mac_type != CSR_HW_REV_TYPE(trans->hw_rev) ||
+-			    rf_id != CSR_HW_RFID_TYPE(trans->hw_rf_id)) {
+-				IWL_DEBUG_FW(trans,
+-					     "HW mismatch, skipping PNVM section, mac_type 0x%0x, rf_id 0x%0x.\n",
+-					     CSR_HW_REV_TYPE(trans->hw_rev), trans->hw_rf_id);
+-				ret = -ENOENT;
+-				goto out;
+-			}
+-
++			if (mac_type == CSR_HW_REV_TYPE(trans->hw_rev) &&
++			    rf_id == CSR_HW_RFID_TYPE(trans->hw_rf_id))
++				hw_match = true;
+ 			break;
+ 		case IWL_UCODE_TLV_SEC_RT: {
+ 			struct iwl_pnvm_section *section = (void *)data;
+@@ -150,6 +148,15 @@ static int iwl_pnvm_handle_section(struct iwl_trans *trans, const u8 *data,
+ 	}
+ 
+ done:
++	if (!hw_match) {
++		IWL_DEBUG_FW(trans,
++			     "HW mismatch, skipping PNVM section (need mac_type 0x%x rf_id 0x%x)\n",
++			     CSR_HW_REV_TYPE(trans->hw_rev),
++			     CSR_HW_RFID_TYPE(trans->hw_rf_id));
++		ret = -ENOENT;
++		goto out;
++	}
++
+ 	if (!size) {
+ 		IWL_DEBUG_FW(trans, "Empty PNVM, skipping.\n");
+ 		ret = -ENOENT;
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 363277b31ecbb..d92a1bfe16905 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -870,8 +870,9 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
+ 		}
+ 	}
+ 
+-	/* There should be one of more OPP defined */
+-	if (WARN_ON(!count)) {
++	/* There should be one or more OPPs defined */
++	if (!count) {
++		dev_err(dev, "%s: no supported OPPs", __func__);
+ 		ret = -ENOENT;
+ 		goto remove_static_opp;
+ 	}
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 4dcced95c8b47..8173b67ec7b0f 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -808,12 +808,15 @@ store_state_field(struct device *dev, struct device_attribute *attr,
+ 	ret = scsi_device_set_state(sdev, state);
+ 	/*
+ 	 * If the device state changes to SDEV_RUNNING, we need to
+-	 * rescan the device to revalidate it, and run the queue to
+-	 * avoid I/O hang.
++	 * run the queue to avoid I/O hang, and rescan the device
++	 * to revalidate it. Running the queue first is necessary
++	 * because another thread may be waiting inside
++	 * blk_mq_freeze_queue_wait() and because that call may be
++	 * waiting for pending I/O to finish.
+ 	 */
+ 	if (ret == 0 && state == SDEV_RUNNING) {
+-		scsi_rescan_device(dev);
+ 		blk_mq_run_hw_queues(sdev->request_queue, true);
++		scsi_rescan_device(dev);
+ 	}
+ 	mutex_unlock(&sdev->state_mutex);
+ 
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index 09b8d02acd996..90e4fcd3dc39a 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -246,6 +246,8 @@ int vt_waitactive(int n)
+  *
+  * XXX It should at least call into the driver, fbdev's definitely need to
+  * restore their engine state. --BenH
++ *
++ * Called with the console lock held.
+  */
+ static int vt_kdsetmode(struct vc_data *vc, unsigned long mode)
+ {
+@@ -262,7 +264,6 @@ static int vt_kdsetmode(struct vc_data *vc, unsigned long mode)
+ 		return -EINVAL;
+ 	}
+ 
+-	/* FIXME: this needs the console lock extending */
+ 	if (vc->vc_mode == mode)
+ 		return 0;
+ 
+@@ -271,12 +272,10 @@ static int vt_kdsetmode(struct vc_data *vc, unsigned long mode)
+ 		return 0;
+ 
+ 	/* explicitly blank/unblank the screen if switching modes */
+-	console_lock();
+ 	if (mode == KD_TEXT)
+ 		do_unblank_screen(1);
+ 	else
+ 		do_blank_screen(1);
+-	console_unlock();
+ 
+ 	return 0;
+ }
+@@ -378,7 +377,10 @@ static int vt_k_ioctl(struct tty_struct *tty, unsigned int cmd,
+ 		if (!perm)
+ 			return -EPERM;
+ 
+-		return vt_kdsetmode(vc, arg);
++		console_lock();
++		ret = vt_kdsetmode(vc, arg);
++		console_unlock();
++		return ret;
+ 
+ 	case KDGETMODE:
+ 		return put_user(vc->vc_mode, (int __user *)arg);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 756839e0e91d5..b75fe568096f9 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -932,19 +932,19 @@ static struct dwc3_trb *dwc3_ep_prev_trb(struct dwc3_ep *dep, u8 index)
+ 
+ static u32 dwc3_calc_trbs_left(struct dwc3_ep *dep)
+ {
+-	struct dwc3_trb		*tmp;
+ 	u8			trbs_left;
+ 
+ 	/*
+-	 * If enqueue & dequeue are equal than it is either full or empty.
+-	 *
+-	 * One way to know for sure is if the TRB right before us has HWO bit
+-	 * set or not. If it has, then we're definitely full and can't fit any
+-	 * more transfers in our ring.
++	 * If the enqueue & dequeue are equal then the TRB ring is either full
++	 * or empty. It's considered full when there are DWC3_TRB_NUM-1 of TRBs
++	 * pending to be processed by the driver.
+ 	 */
+ 	if (dep->trb_enqueue == dep->trb_dequeue) {
+-		tmp = dwc3_ep_prev_trb(dep, dep->trb_enqueue);
+-		if (tmp->ctrl & DWC3_TRB_CTRL_HWO)
++		/*
++		 * If there is any request remained in the started_list at
++		 * this point, that means there is no TRB available.
++		 */
++		if (!list_empty(&dep->started_list))
+ 			return 0;
+ 
+ 		return DWC3_TRB_NUM - 1;
+@@ -2125,10 +2125,8 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 
+ 		ret = wait_for_completion_timeout(&dwc->ep0_in_setup,
+ 				msecs_to_jiffies(DWC3_PULL_UP_TIMEOUT));
+-		if (ret == 0) {
+-			dev_err(dwc->dev, "timed out waiting for SETUP phase\n");
+-			return -ETIMEDOUT;
+-		}
++		if (ret == 0)
++			dev_warn(dwc->dev, "timed out waiting for SETUP phase\n");
+ 	}
+ 
+ 	/*
+@@ -2332,6 +2330,7 @@ static int __dwc3_gadget_start(struct dwc3 *dwc)
+ 	/* begin to receive SETUP packets */
+ 	dwc->ep0state = EP0_SETUP_PHASE;
+ 	dwc->link_state = DWC3_LINK_STATE_SS_DIS;
++	dwc->delayed_status = false;
+ 	dwc3_ep0_out_start(dwc);
+ 
+ 	dwc3_gadget_enable_irq(dwc);
+diff --git a/drivers/usb/gadget/function/u_audio.c b/drivers/usb/gadget/function/u_audio.c
+index 908e49dafd620..95605b1ef4eb4 100644
+--- a/drivers/usb/gadget/function/u_audio.c
++++ b/drivers/usb/gadget/function/u_audio.c
+@@ -334,8 +334,6 @@ static inline void free_ep(struct uac_rtd_params *prm, struct usb_ep *ep)
+ 	if (!prm->ep_enabled)
+ 		return;
+ 
+-	prm->ep_enabled = false;
+-
+ 	audio_dev = uac->audio_dev;
+ 	params = &audio_dev->params;
+ 
+@@ -353,11 +351,12 @@ static inline void free_ep(struct uac_rtd_params *prm, struct usb_ep *ep)
+ 		}
+ 	}
+ 
++	prm->ep_enabled = false;
++
+ 	if (usb_ep_disable(ep))
+ 		dev_err(uac->card->dev, "%s:%d Error!\n", __func__, __LINE__);
+ }
+ 
+-
+ int u_audio_start_capture(struct g_audio *audio_dev)
+ {
+ 	struct snd_uac_chip *uac = audio_dev->uac;
+diff --git a/drivers/usb/host/xhci-pci-renesas.c b/drivers/usb/host/xhci-pci-renesas.c
+index f97ac9f52bf4d..96692dbbd4dad 100644
+--- a/drivers/usb/host/xhci-pci-renesas.c
++++ b/drivers/usb/host/xhci-pci-renesas.c
+@@ -207,7 +207,8 @@ static int renesas_check_rom_state(struct pci_dev *pdev)
+ 			return 0;
+ 
+ 		case RENESAS_ROM_STATUS_NO_RESULT: /* No result yet */
+-			return 0;
++			dev_dbg(&pdev->dev, "Unknown ROM status ...\n");
++			return -ENOENT;
+ 
+ 		case RENESAS_ROM_STATUS_ERROR: /* Error State */
+ 		default: /* All other states are marked as "Reserved states" */
+@@ -224,14 +225,6 @@ static int renesas_fw_check_running(struct pci_dev *pdev)
+ 	u8 fw_state;
+ 	int err;
+ 
+-	/* Check if device has ROM and loaded, if so skip everything */
+-	err = renesas_check_rom(pdev);
+-	if (err) { /* we have rom */
+-		err = renesas_check_rom_state(pdev);
+-		if (!err)
+-			return err;
+-	}
+-
+ 	/*
+ 	 * Test if the device is actually needing the firmware. As most
+ 	 * BIOSes will initialize the device for us. If the device is
+@@ -591,21 +584,39 @@ int renesas_xhci_check_request_fw(struct pci_dev *pdev,
+ 			(struct xhci_driver_data *)id->driver_data;
+ 	const char *fw_name = driver_data->firmware;
+ 	const struct firmware *fw;
++	bool has_rom;
+ 	int err;
+ 
++	/* Check if device has ROM and loaded, if so skip everything */
++	has_rom = renesas_check_rom(pdev);
++	if (has_rom) {
++		err = renesas_check_rom_state(pdev);
++		if (!err)
++			return 0;
++		else if (err != -ENOENT)
++			has_rom = false;
++	}
++
+ 	err = renesas_fw_check_running(pdev);
+ 	/* Continue ahead, if the firmware is already running. */
+ 	if (err == 0)
+ 		return 0;
+ 
++	/* no firmware interface available */
+ 	if (err != 1)
+-		return err;
++		return has_rom ? 0 : err;
+ 
+ 	pci_dev_get(pdev);
+-	err = request_firmware(&fw, fw_name, &pdev->dev);
++	err = firmware_request_nowarn(&fw, fw_name, &pdev->dev);
+ 	pci_dev_put(pdev);
+ 	if (err) {
+-		dev_err(&pdev->dev, "request_firmware failed: %d\n", err);
++		if (has_rom) {
++			dev_info(&pdev->dev, "failed to load firmware %s, fallback to ROM\n",
++				 fw_name);
++			return 0;
++		}
++		dev_err(&pdev->dev, "failed to load firmware %s: %d\n",
++			fw_name, err);
+ 		return err;
+ 	}
+ 
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index 119b7b6e1adac..f26861246f653 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -853,7 +853,6 @@ static struct usb_serial_driver ch341_device = {
+ 		.owner	= THIS_MODULE,
+ 		.name	= "ch341-uart",
+ 	},
+-	.bulk_in_size      = 512,
+ 	.id_table          = id_table,
+ 	.num_ports         = 1,
+ 	.open              = ch341_open,
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 2b85e0e9bffdb..acb8eec14f689 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -2074,6 +2074,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(4) | RSVD(5) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0105, 0xff),			/* Fibocom NL678 series */
+ 	  .driver_info = RSVD(6) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) },	/* Fibocom FG150 Diag */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) },		/* Fibocom FG150 AT */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) },			/* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) },			/* LongSung M5710 */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) },			/* GosunCn GM500 RNDIS */
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 310b5caeb05ae..3bfa8005ae65d 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -53,7 +53,7 @@ static int ucsi_acknowledge_connector_change(struct ucsi *ucsi)
+ 	ctrl = UCSI_ACK_CC_CI;
+ 	ctrl |= UCSI_ACK_CONNECTOR_CHANGE;
+ 
+-	return ucsi->ops->async_write(ucsi, UCSI_CONTROL, &ctrl, sizeof(ctrl));
++	return ucsi->ops->sync_write(ucsi, UCSI_CONTROL, &ctrl, sizeof(ctrl));
+ }
+ 
+ static int ucsi_exec_command(struct ucsi *ucsi, u64 command);
+@@ -648,21 +648,113 @@ static void ucsi_handle_connector_change(struct work_struct *work)
+ 	struct ucsi_connector *con = container_of(work, struct ucsi_connector,
+ 						  work);
+ 	struct ucsi *ucsi = con->ucsi;
++	struct ucsi_connector_status pre_ack_status;
++	struct ucsi_connector_status post_ack_status;
+ 	enum typec_role role;
++	u16 inferred_changes;
++	u16 changed_flags;
+ 	u64 command;
+ 	int ret;
+ 
+ 	mutex_lock(&con->lock);
+ 
++	/*
++	 * Some/many PPMs have an issue where all fields in the change bitfield
++	 * are cleared when an ACK is send. This will causes any change
++	 * between GET_CONNECTOR_STATUS and ACK to be lost.
++	 *
++	 * We work around this by re-fetching the connector status afterwards.
++	 * We then infer any changes that we see have happened but that may not
++	 * be represented in the change bitfield.
++	 *
++	 * Also, even though we don't need to know the currently supported alt
++	 * modes, we run the GET_CAM_SUPPORTED command to ensure the PPM does
++	 * not get stuck in case it assumes we do.
++	 * Always do this, rather than relying on UCSI_CONSTAT_CAM_CHANGE to be
++	 * set in the change bitfield.
++	 *
++	 * We end up with the following actions:
++	 *  1. UCSI_GET_CONNECTOR_STATUS, store result, update unprocessed_changes
++	 *  2. UCSI_GET_CAM_SUPPORTED, discard result
++	 *  3. ACK connector change
++	 *  4. UCSI_GET_CONNECTOR_STATUS, store result
++	 *  5. Infere lost changes by comparing UCSI_GET_CONNECTOR_STATUS results
++	 *  6. If PPM reported a new change, then restart in order to ACK
++	 *  7. Process everything as usual.
++	 *
++	 * We may end up seeing a change twice, but we can only miss extremely
++	 * short transitional changes.
++	 */
++
++	/* 1. First UCSI_GET_CONNECTOR_STATUS */
++	command = UCSI_GET_CONNECTOR_STATUS | UCSI_CONNECTOR_NUMBER(con->num);
++	ret = ucsi_send_command(ucsi, command, &pre_ack_status,
++				sizeof(pre_ack_status));
++	if (ret < 0) {
++		dev_err(ucsi->dev, "%s: GET_CONNECTOR_STATUS failed (%d)\n",
++			__func__, ret);
++		goto out_unlock;
++	}
++	con->unprocessed_changes |= pre_ack_status.change;
++
++	/* 2. Run UCSI_GET_CAM_SUPPORTED and discard the result. */
++	command = UCSI_GET_CAM_SUPPORTED;
++	command |= UCSI_CONNECTOR_NUMBER(con->num);
++	ucsi_send_command(con->ucsi, command, NULL, 0);
++
++	/* 3. ACK connector change */
++	ret = ucsi_acknowledge_connector_change(ucsi);
++	clear_bit(EVENT_PENDING, &ucsi->flags);
++	if (ret) {
++		dev_err(ucsi->dev, "%s: ACK failed (%d)", __func__, ret);
++		goto out_unlock;
++	}
++
++	/* 4. Second UCSI_GET_CONNECTOR_STATUS */
+ 	command = UCSI_GET_CONNECTOR_STATUS | UCSI_CONNECTOR_NUMBER(con->num);
+-	ret = ucsi_send_command(ucsi, command, &con->status,
+-				sizeof(con->status));
++	ret = ucsi_send_command(ucsi, command, &post_ack_status,
++				sizeof(post_ack_status));
+ 	if (ret < 0) {
+ 		dev_err(ucsi->dev, "%s: GET_CONNECTOR_STATUS failed (%d)\n",
+ 			__func__, ret);
+ 		goto out_unlock;
+ 	}
+ 
++	/* 5. Inferre any missing changes */
++	changed_flags = pre_ack_status.flags ^ post_ack_status.flags;
++	inferred_changes = 0;
++	if (UCSI_CONSTAT_PWR_OPMODE(changed_flags) != 0)
++		inferred_changes |= UCSI_CONSTAT_POWER_OPMODE_CHANGE;
++
++	if (changed_flags & UCSI_CONSTAT_CONNECTED)
++		inferred_changes |= UCSI_CONSTAT_CONNECT_CHANGE;
++
++	if (changed_flags & UCSI_CONSTAT_PWR_DIR)
++		inferred_changes |= UCSI_CONSTAT_POWER_DIR_CHANGE;
++
++	if (UCSI_CONSTAT_PARTNER_FLAGS(changed_flags) != 0)
++		inferred_changes |= UCSI_CONSTAT_PARTNER_CHANGE;
++
++	if (UCSI_CONSTAT_PARTNER_TYPE(changed_flags) != 0)
++		inferred_changes |= UCSI_CONSTAT_PARTNER_CHANGE;
++
++	/* Mask out anything that was correctly notified in the later call. */
++	inferred_changes &= ~post_ack_status.change;
++	if (inferred_changes)
++		dev_dbg(ucsi->dev, "%s: Inferred changes that would have been lost: 0x%04x\n",
++			__func__, inferred_changes);
++
++	con->unprocessed_changes |= inferred_changes;
++
++	/* 6. If PPM reported a new change, then restart in order to ACK */
++	if (post_ack_status.change)
++		goto out_unlock;
++
++	/* 7. Continue as if nothing happened */
++	con->status = post_ack_status;
++	con->status.change = con->unprocessed_changes;
++	con->unprocessed_changes = 0;
++
+ 	role = !!(con->status.flags & UCSI_CONSTAT_PWR_DIR);
+ 
+ 	if (con->status.change & UCSI_CONSTAT_POWER_OPMODE_CHANGE ||
+@@ -703,28 +795,19 @@ static void ucsi_handle_connector_change(struct work_struct *work)
+ 		ucsi_port_psy_changed(con);
+ 	}
+ 
+-	if (con->status.change & UCSI_CONSTAT_CAM_CHANGE) {
+-		/*
+-		 * We don't need to know the currently supported alt modes here.
+-		 * Running GET_CAM_SUPPORTED command just to make sure the PPM
+-		 * does not get stuck in case it assumes we do so.
+-		 */
+-		command = UCSI_GET_CAM_SUPPORTED;
+-		command |= UCSI_CONNECTOR_NUMBER(con->num);
+-		ucsi_send_command(con->ucsi, command, NULL, 0);
+-	}
+-
+ 	if (con->status.change & UCSI_CONSTAT_PARTNER_CHANGE)
+ 		ucsi_partner_change(con);
+ 
+-	ret = ucsi_acknowledge_connector_change(ucsi);
+-	if (ret)
+-		dev_err(ucsi->dev, "%s: ACK failed (%d)", __func__, ret);
+-
+ 	trace_ucsi_connector_change(con->num, &con->status);
+ 
+ out_unlock:
+-	clear_bit(EVENT_PENDING, &ucsi->flags);
++	if (test_and_clear_bit(EVENT_PENDING, &ucsi->flags)) {
++		schedule_work(&con->work);
++		mutex_unlock(&con->lock);
++		return;
++	}
++
++	clear_bit(EVENT_PROCESSING, &ucsi->flags);
+ 	mutex_unlock(&con->lock);
+ }
+ 
+@@ -742,7 +825,9 @@ void ucsi_connector_change(struct ucsi *ucsi, u8 num)
+ 		return;
+ 	}
+ 
+-	if (!test_and_set_bit(EVENT_PENDING, &ucsi->flags))
++	set_bit(EVENT_PENDING, &ucsi->flags);
++
++	if (!test_and_set_bit(EVENT_PROCESSING, &ucsi->flags))
+ 		schedule_work(&con->work);
+ }
+ EXPORT_SYMBOL_GPL(ucsi_connector_change);
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index 047e17c4b4922..fce23ad16c6d0 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -299,6 +299,7 @@ struct ucsi {
+ #define EVENT_PENDING	0
+ #define COMMAND_PENDING	1
+ #define ACK_PENDING	2
++#define EVENT_PROCESSING	3
+ };
+ 
+ #define UCSI_MAX_SVID		5
+@@ -324,6 +325,7 @@ struct ucsi_connector {
+ 
+ 	struct typec_capability typec_cap;
+ 
++	u16 unprocessed_changes;
+ 	struct ucsi_connector_status status;
+ 	struct ucsi_connector_capability cap;
+ 	struct power_supply *psy;
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index fbfe8f5933af8..04976435ad736 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -103,11 +103,12 @@ static void ucsi_acpi_notify(acpi_handle handle, u32 event, void *data)
+ 	if (ret)
+ 		return;
+ 
++	if (UCSI_CCI_CONNECTOR(cci))
++		ucsi_connector_change(ua->ucsi, UCSI_CCI_CONNECTOR(cci));
++
+ 	if (test_bit(COMMAND_PENDING, &ua->flags) &&
+ 	    cci & (UCSI_CCI_ACK_COMPLETE | UCSI_CCI_COMMAND_COMPLETE))
+ 		complete(&ua->complete);
+-	else if (UCSI_CCI_CONNECTOR(cci))
+-		ucsi_connector_change(ua->ucsi, UCSI_CCI_CONNECTOR(cci));
+ }
+ 
+ static int ucsi_acpi_probe(struct platform_device *pdev)
+diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
+index b7403ba8e7f70..0bd7e64331f08 100644
+--- a/drivers/vhost/vringh.c
++++ b/drivers/vhost/vringh.c
+@@ -341,7 +341,7 @@ __vringh_iov(struct vringh *vrh, u16 i,
+ 			iov = wiov;
+ 		else {
+ 			iov = riov;
+-			if (unlikely(wiov && wiov->i)) {
++			if (unlikely(wiov && wiov->used)) {
+ 				vringh_bad("Readable desc %p after writable",
+ 					   &descs[i]);
+ 				err = -EINVAL;
+diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
+index 222d630c41fc9..b35bb2d57f62c 100644
+--- a/drivers/virtio/virtio_pci_common.c
++++ b/drivers/virtio/virtio_pci_common.c
+@@ -576,6 +576,13 @@ static void virtio_pci_remove(struct pci_dev *pci_dev)
+ 	struct virtio_pci_device *vp_dev = pci_get_drvdata(pci_dev);
+ 	struct device *dev = get_device(&vp_dev->vdev.dev);
+ 
++	/*
++	 * Device is marked broken on surprise removal so that virtio upper
++	 * layers can abort any ongoing operation.
++	 */
++	if (!pci_device_is_present(pci_dev))
++		virtio_break_device(&vp_dev->vdev);
++
+ 	pci_disable_sriov(pci_dev);
+ 
+ 	unregister_virtio_device(&vp_dev->vdev);
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 6b7aa26c53844..6c730d6d50f71 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -2268,7 +2268,7 @@ bool virtqueue_is_broken(struct virtqueue *_vq)
+ {
+ 	struct vring_virtqueue *vq = to_vvq(_vq);
+ 
+-	return vq->broken;
++	return READ_ONCE(vq->broken);
+ }
+ EXPORT_SYMBOL_GPL(virtqueue_is_broken);
+ 
+@@ -2283,7 +2283,9 @@ void virtio_break_device(struct virtio_device *dev)
+ 	spin_lock(&dev->vqs_list_lock);
+ 	list_for_each_entry(_vq, &dev->vqs, list) {
+ 		struct vring_virtqueue *vq = to_vvq(_vq);
+-		vq->broken = true;
++
++		/* Pairs with READ_ONCE() in virtqueue_is_broken(). */
++		WRITE_ONCE(vq->broken, true);
+ 	}
+ 	spin_unlock(&dev->vqs_list_lock);
+ }
+diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
+index 4a9ddb44b2a74..3f95dedcccebe 100644
+--- a/drivers/virtio/virtio_vdpa.c
++++ b/drivers/virtio/virtio_vdpa.c
+@@ -149,6 +149,9 @@ virtio_vdpa_setup_vq(struct virtio_device *vdev, unsigned int index,
+ 	if (!name)
+ 		return NULL;
+ 
++	if (index >= vdpa->nvqs)
++		return ERR_PTR(-ENOENT);
++
+ 	/* Queue shouldn't already be set up. */
+ 	if (ops->get_vq_ready(vdpa, index))
+ 		return ERR_PTR(-ENOENT);
+diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
+index 8de4bf8edb9c0..5a43f8e07122c 100644
+--- a/fs/btrfs/btrfs_inode.h
++++ b/fs/btrfs/btrfs_inode.h
+@@ -308,6 +308,21 @@ static inline void btrfs_mod_outstanding_extents(struct btrfs_inode *inode,
+ 						  mod);
+ }
+ 
++/*
++ * Called every time after doing a buffered, direct IO or memory mapped write.
++ *
++ * This is to ensure that if we write to a file that was previously fsynced in
++ * the current transaction, then try to fsync it again in the same transaction,
++ * we will know that there were changes in the file and that it needs to be
++ * logged.
++ */
++static inline void btrfs_set_inode_last_sub_trans(struct btrfs_inode *inode)
++{
++	spin_lock(&inode->lock);
++	inode->last_sub_trans = inode->root->log_transid;
++	spin_unlock(&inode->lock);
++}
++
+ static inline int btrfs_inode_in_log(struct btrfs_inode *inode, u64 generation)
+ {
+ 	int ret = 0;
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index ffa48ac98d1e5..6ab91661cd261 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1862,7 +1862,6 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
+ 	struct file *file = iocb->ki_filp;
+ 	struct inode *inode = file_inode(file);
+ 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+-	struct btrfs_root *root = BTRFS_I(inode)->root;
+ 	u64 start_pos;
+ 	u64 end_pos;
+ 	ssize_t num_written = 0;
+@@ -2006,14 +2005,8 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
+ 
+ 	inode_unlock(inode);
+ 
+-	/*
+-	 * We also have to set last_sub_trans to the current log transid,
+-	 * otherwise subsequent syncs to a file that's been synced in this
+-	 * transaction will appear to have already occurred.
+-	 */
+-	spin_lock(&BTRFS_I(inode)->lock);
+-	BTRFS_I(inode)->last_sub_trans = root->log_transid;
+-	spin_unlock(&BTRFS_I(inode)->lock);
++	btrfs_set_inode_last_sub_trans(BTRFS_I(inode));
++
+ 	if (num_written > 0)
+ 		num_written = generic_write_sync(iocb, num_written);
+ 
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index fc4311415fc67..69c6786a9fdf2 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -547,7 +547,7 @@ again:
+ 	 * inode has not been flagged as nocompress.  This flag can
+ 	 * change at any time if we discover bad compression ratios.
+ 	 */
+-	if (nr_pages > 1 && inode_need_compress(BTRFS_I(inode), start, end)) {
++	if (inode_need_compress(BTRFS_I(inode), start, end)) {
+ 		WARN_ON(pages);
+ 		pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
+ 		if (!pages) {
+@@ -8449,9 +8449,7 @@ again:
+ 	set_page_dirty(page);
+ 	SetPageUptodate(page);
+ 
+-	BTRFS_I(inode)->last_trans = fs_info->generation;
+-	BTRFS_I(inode)->last_sub_trans = BTRFS_I(inode)->root->log_transid;
+-	BTRFS_I(inode)->last_log_commit = BTRFS_I(inode)->root->last_log_commit;
++	btrfs_set_inode_last_sub_trans(BTRFS_I(inode));
+ 
+ 	unlock_extent_cached(io_tree, page_start, page_end, &cached_state);
+ 
+diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h
+index 858d9153a1cd1..f73654d93fa03 100644
+--- a/fs/btrfs/transaction.h
++++ b/fs/btrfs/transaction.h
+@@ -171,7 +171,7 @@ static inline void btrfs_set_inode_last_trans(struct btrfs_trans_handle *trans,
+ 	spin_lock(&inode->lock);
+ 	inode->last_trans = trans->transaction->transid;
+ 	inode->last_sub_trans = inode->root->log_transid;
+-	inode->last_log_commit = inode->root->last_log_commit;
++	inode->last_log_commit = inode->last_sub_trans - 1;
+ 	spin_unlock(&inode->lock);
+ }
+ 
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 920c84fae7102..d1fccddcf4035 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -2059,7 +2059,7 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 
+ 	if (IS_ERR(device)) {
+ 		if (PTR_ERR(device) == -ENOENT &&
+-		    strcmp(device_path, "missing") == 0)
++		    device_path && strcmp(device_path, "missing") == 0)
+ 			ret = BTRFS_ERROR_DEV_MISSING_NOT_FOUND;
+ 		else
+ 			ret = PTR_ERR(device);
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 45093a765a9b5..b864c9b9e8df1 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1752,7 +1752,11 @@ int __ceph_mark_dirty_caps(struct ceph_inode_info *ci, int mask,
+ 
+ struct ceph_cap_flush *ceph_alloc_cap_flush(void)
+ {
+-	return kmem_cache_alloc(ceph_cap_flush_cachep, GFP_KERNEL);
++	struct ceph_cap_flush *cf;
++
++	cf = kmem_cache_alloc(ceph_cap_flush_cachep, GFP_KERNEL);
++	cf->is_capsnap = false;
++	return cf;
+ }
+ 
+ void ceph_free_cap_flush(struct ceph_cap_flush *cf)
+@@ -1787,7 +1791,7 @@ static bool __detach_cap_flush_from_mdsc(struct ceph_mds_client *mdsc,
+ 		prev->wake = true;
+ 		wake = false;
+ 	}
+-	list_del(&cf->g_list);
++	list_del_init(&cf->g_list);
+ 	return wake;
+ }
+ 
+@@ -1802,7 +1806,7 @@ static bool __detach_cap_flush_from_ci(struct ceph_inode_info *ci,
+ 		prev->wake = true;
+ 		wake = false;
+ 	}
+-	list_del(&cf->i_list);
++	list_del_init(&cf->i_list);
+ 	return wake;
+ }
+ 
+@@ -2422,7 +2426,7 @@ static void __kick_flushing_caps(struct ceph_mds_client *mdsc,
+ 	ci->i_ceph_flags &= ~CEPH_I_KICK_FLUSH;
+ 
+ 	list_for_each_entry_reverse(cf, &ci->i_cap_flush_list, i_list) {
+-		if (!cf->caps) {
++		if (cf->is_capsnap) {
+ 			last_snap_flush = cf->tid;
+ 			break;
+ 		}
+@@ -2441,7 +2445,7 @@ static void __kick_flushing_caps(struct ceph_mds_client *mdsc,
+ 
+ 		first_tid = cf->tid + 1;
+ 
+-		if (cf->caps) {
++		if (!cf->is_capsnap) {
+ 			struct cap_msg_args arg;
+ 
+ 			dout("kick_flushing_caps %p cap %p tid %llu %s\n",
+@@ -3564,7 +3568,7 @@ static void handle_cap_flush_ack(struct inode *inode, u64 flush_tid,
+ 			cleaned = cf->caps;
+ 
+ 		/* Is this a capsnap? */
+-		if (cf->caps == 0)
++		if (cf->is_capsnap)
+ 			continue;
+ 
+ 		if (cf->tid <= flush_tid) {
+@@ -3637,8 +3641,9 @@ out:
+ 	while (!list_empty(&to_remove)) {
+ 		cf = list_first_entry(&to_remove,
+ 				      struct ceph_cap_flush, i_list);
+-		list_del(&cf->i_list);
+-		ceph_free_cap_flush(cf);
++		list_del_init(&cf->i_list);
++		if (!cf->is_capsnap)
++			ceph_free_cap_flush(cf);
+ 	}
+ 
+ 	if (wake_ci)
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 1701902415c4b..816cea4975372 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -1618,7 +1618,7 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 		spin_lock(&mdsc->cap_dirty_lock);
+ 
+ 		list_for_each_entry(cf, &to_remove, i_list)
+-			list_del(&cf->g_list);
++			list_del_init(&cf->g_list);
+ 
+ 		if (!list_empty(&ci->i_dirty_item)) {
+ 			pr_warn_ratelimited(
+@@ -1670,8 +1670,9 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 		struct ceph_cap_flush *cf;
+ 		cf = list_first_entry(&to_remove,
+ 				      struct ceph_cap_flush, i_list);
+-		list_del(&cf->i_list);
+-		ceph_free_cap_flush(cf);
++		list_del_init(&cf->i_list);
++		if (!cf->is_capsnap)
++			ceph_free_cap_flush(cf);
+ 	}
+ 
+ 	wake_up_all(&ci->i_cap_wq);
+diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
+index 803b60a967023..0369f672a76fb 100644
+--- a/fs/ceph/snap.c
++++ b/fs/ceph/snap.c
+@@ -487,6 +487,9 @@ void ceph_queue_cap_snap(struct ceph_inode_info *ci)
+ 		pr_err("ENOMEM allocating ceph_cap_snap on %p\n", inode);
+ 		return;
+ 	}
++	capsnap->cap_flush.is_capsnap = true;
++	INIT_LIST_HEAD(&capsnap->cap_flush.i_list);
++	INIT_LIST_HEAD(&capsnap->cap_flush.g_list);
+ 
+ 	spin_lock(&ci->i_ceph_lock);
+ 	used = __ceph_caps_used(ci);
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index 6712509ae1d64..a8c460393b01b 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -181,8 +181,9 @@ struct ceph_cap {
+ 
+ struct ceph_cap_flush {
+ 	u64 tid;
+-	int caps; /* 0 means capsnap */
++	int caps;
+ 	bool wake; /* wake up flush waiters when finish ? */
++	bool is_capsnap; /* true means capsnap */
+ 	struct list_head g_list; // global
+ 	struct list_head i_list; // per inode
+ };
+diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
+index ed35be3fafc66..f469982dcb36e 100644
+--- a/fs/overlayfs/export.c
++++ b/fs/overlayfs/export.c
+@@ -390,6 +390,7 @@ static struct dentry *ovl_lookup_real_one(struct dentry *connected,
+ 	 */
+ 	take_dentry_name_snapshot(&name, real);
+ 	this = lookup_one_len(name.name.name, connected, name.name.len);
++	release_dentry_name_snapshot(&name);
+ 	err = PTR_ERR(this);
+ 	if (IS_ERR(this)) {
+ 		goto fail;
+@@ -404,7 +405,6 @@ static struct dentry *ovl_lookup_real_one(struct dentry *connected,
+ 	}
+ 
+ out:
+-	release_dentry_name_snapshot(&name);
+ 	dput(parent);
+ 	inode_unlock(dir);
+ 	return this;
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 28b2e973f10ed..d6d4019ba32f5 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -363,10 +363,9 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
+ 		 * _very_ unlikely case that the pipe was full, but we got
+ 		 * no data.
+ 		 */
+-		if (unlikely(was_full)) {
++		if (unlikely(was_full))
+ 			wake_up_interruptible_sync_poll(&pipe->wr_wait, EPOLLOUT | EPOLLWRNORM);
+-			kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
+-		}
++		kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
+ 
+ 		/*
+ 		 * But because we didn't read anything, at this point we can
+@@ -385,12 +384,11 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
+ 		wake_next_reader = false;
+ 	__pipe_unlock(pipe);
+ 
+-	if (was_full) {
++	if (was_full)
+ 		wake_up_interruptible_sync_poll(&pipe->wr_wait, EPOLLOUT | EPOLLWRNORM);
+-		kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
+-	}
+ 	if (wake_next_reader)
+ 		wake_up_interruptible_sync_poll(&pipe->rd_wait, EPOLLIN | EPOLLRDNORM);
++	kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
+ 	if (ret > 0)
+ 		file_accessed(filp);
+ 	return ret;
+@@ -444,9 +442,6 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
+ #endif
+ 
+ 	/*
+-	 * Epoll nonsensically wants a wakeup whether the pipe
+-	 * was already empty or not.
+-	 *
+ 	 * If it wasn't empty we try to merge new data into
+ 	 * the last buffer.
+ 	 *
+@@ -455,9 +450,9 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
+ 	 * spanning multiple pages.
+ 	 */
+ 	head = pipe->head;
+-	was_empty = true;
++	was_empty = pipe_empty(head, pipe->tail);
+ 	chars = total_len & (PAGE_SIZE-1);
+-	if (chars && !pipe_empty(head, pipe->tail)) {
++	if (chars && !was_empty) {
+ 		unsigned int mask = pipe->ring_size - 1;
+ 		struct pipe_buffer *buf = &pipe->bufs[(head - 1) & mask];
+ 		int offset = buf->offset + buf->len;
+@@ -568,10 +563,9 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
+ 		 * become empty while we dropped the lock.
+ 		 */
+ 		__pipe_unlock(pipe);
+-		if (was_empty) {
++		if (was_empty)
+ 			wake_up_interruptible_sync_poll(&pipe->rd_wait, EPOLLIN | EPOLLRDNORM);
+-			kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
+-		}
++		kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
+ 		wait_event_interruptible_exclusive(pipe->wr_wait, pipe_writable(pipe));
+ 		__pipe_lock(pipe);
+ 		was_empty = pipe_empty(pipe->head, pipe->tail);
+@@ -590,11 +584,13 @@ out:
+ 	 * This is particularly important for small writes, because of
+ 	 * how (for example) the GNU make jobserver uses small writes to
+ 	 * wake up pending jobs
++	 *
++	 * Epoll nonsensically wants a wakeup whether the pipe
++	 * was already empty or not.
+ 	 */
+-	if (was_empty) {
++	if (was_empty || pipe->poll_usage)
+ 		wake_up_interruptible_sync_poll(&pipe->rd_wait, EPOLLIN | EPOLLRDNORM);
+-		kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
+-	}
++	kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
+ 	if (wake_next_writer)
+ 		wake_up_interruptible_sync_poll(&pipe->wr_wait, EPOLLOUT | EPOLLWRNORM);
+ 	if (ret > 0 && sb_start_write_trylock(file_inode(filp)->i_sb)) {
+@@ -654,6 +650,9 @@ pipe_poll(struct file *filp, poll_table *wait)
+ 	struct pipe_inode_info *pipe = filp->private_data;
+ 	unsigned int head, tail;
+ 
++	/* Epoll has some historical nasty semantics, this enables them */
++	pipe->poll_usage = 1;
++
+ 	/*
+ 	 * Reading pipe state only -- no need for acquiring the semaphore.
+ 	 *
+diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
+index ed71bd1a08257..91b9669785418 100644
+--- a/include/linux/bpf-cgroup.h
++++ b/include/linux/bpf-cgroup.h
+@@ -20,14 +20,25 @@ struct bpf_sock_ops_kern;
+ struct bpf_cgroup_storage;
+ struct ctl_table;
+ struct ctl_table_header;
++struct task_struct;
+ 
+ #ifdef CONFIG_CGROUP_BPF
+ 
+ extern struct static_key_false cgroup_bpf_enabled_key;
+ #define cgroup_bpf_enabled static_branch_unlikely(&cgroup_bpf_enabled_key)
+ 
+-DECLARE_PER_CPU(struct bpf_cgroup_storage*,
+-		bpf_cgroup_storage[MAX_BPF_CGROUP_STORAGE_TYPE]);
++#define BPF_CGROUP_STORAGE_NEST_MAX	8
++
++struct bpf_cgroup_storage_info {
++	struct task_struct *task;
++	struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE];
++};
++
++/* For each cpu, permit maximum BPF_CGROUP_STORAGE_NEST_MAX number of tasks
++ * to use bpf cgroup storage simultaneously.
++ */
++DECLARE_PER_CPU(struct bpf_cgroup_storage_info,
++		bpf_cgroup_storage_info[BPF_CGROUP_STORAGE_NEST_MAX]);
+ 
+ #define for_each_cgroup_storage_type(stype) \
+ 	for (stype = 0; stype < MAX_BPF_CGROUP_STORAGE_TYPE; stype++)
+@@ -156,13 +167,42 @@ static inline enum bpf_cgroup_storage_type cgroup_storage_type(
+ 	return BPF_CGROUP_STORAGE_SHARED;
+ }
+ 
+-static inline void bpf_cgroup_storage_set(struct bpf_cgroup_storage
+-					  *storage[MAX_BPF_CGROUP_STORAGE_TYPE])
++static inline int bpf_cgroup_storage_set(struct bpf_cgroup_storage
++					 *storage[MAX_BPF_CGROUP_STORAGE_TYPE])
+ {
+ 	enum bpf_cgroup_storage_type stype;
++	int i, err = 0;
++
++	preempt_disable();
++	for (i = 0; i < BPF_CGROUP_STORAGE_NEST_MAX; i++) {
++		if (unlikely(this_cpu_read(bpf_cgroup_storage_info[i].task) != NULL))
++			continue;
++
++		this_cpu_write(bpf_cgroup_storage_info[i].task, current);
++		for_each_cgroup_storage_type(stype)
++			this_cpu_write(bpf_cgroup_storage_info[i].storage[stype],
++				       storage[stype]);
++		goto out;
++	}
++	err = -EBUSY;
++	WARN_ON_ONCE(1);
++
++out:
++	preempt_enable();
++	return err;
++}
++
++static inline void bpf_cgroup_storage_unset(void)
++{
++	int i;
++
++	for (i = BPF_CGROUP_STORAGE_NEST_MAX - 1; i >= 0; i--) {
++		if (likely(this_cpu_read(bpf_cgroup_storage_info[i].task) != current))
++			continue;
+ 
+-	for_each_cgroup_storage_type(stype)
+-		this_cpu_write(bpf_cgroup_storage[stype], storage[stype]);
++		this_cpu_write(bpf_cgroup_storage_info[i].task, NULL);
++		return;
++	}
+ }
+ 
+ struct bpf_cgroup_storage *
+@@ -410,8 +450,9 @@ static inline int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ 	return -EINVAL;
+ }
+ 
+-static inline void bpf_cgroup_storage_set(
+-	struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE]) {}
++static inline int bpf_cgroup_storage_set(
++	struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE]) { return 0; }
++static inline void bpf_cgroup_storage_unset(void) {}
+ static inline int bpf_cgroup_storage_assign(struct bpf_prog_aux *aux,
+ 					    struct bpf_map *map) { return 0; }
+ static inline struct bpf_cgroup_storage *bpf_cgroup_storage_alloc(
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index c3ccb242d1993..3f93a50c25efe 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1089,9 +1089,14 @@ int bpf_prog_array_copy(struct bpf_prog_array *old_array,
+ 			goto _out;			\
+ 		_item = &_array->items[0];		\
+ 		while ((_prog = READ_ONCE(_item->prog))) {		\
+-			if (set_cg_storage)		\
+-				bpf_cgroup_storage_set(_item->cgroup_storage);	\
+-			_ret &= func(_prog, ctx);	\
++			if (!set_cg_storage) {			\
++				_ret &= func(_prog, ctx);	\
++			} else {				\
++				if (unlikely(bpf_cgroup_storage_set(_item->cgroup_storage)))	\
++					break;			\
++				_ret &= func(_prog, ctx);	\
++				bpf_cgroup_storage_unset();	\
++			}				\
+ 			_item++;			\
+ 		}					\
+ _out:							\
+@@ -1135,8 +1140,10 @@ _out:							\
+ 		_array = rcu_dereference(array);	\
+ 		_item = &_array->items[0];		\
+ 		while ((_prog = READ_ONCE(_item->prog))) {		\
+-			bpf_cgroup_storage_set(_item->cgroup_storage);	\
++			if (unlikely(bpf_cgroup_storage_set(_item->cgroup_storage)))	\
++				break;			\
+ 			ret = func(_prog, ctx);		\
++			bpf_cgroup_storage_unset();	\
+ 			_ret &= (ret & 1);		\
+ 			_cn |= (ret & 2);		\
+ 			_item++;			\
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index e37480b5f4c0e..4fdeccf223784 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -3884,6 +3884,10 @@ int netdev_rx_handler_register(struct net_device *dev,
+ void netdev_rx_handler_unregister(struct net_device *dev);
+ 
+ bool dev_valid_name(const char *name);
++static inline bool is_socket_ioctl_cmd(unsigned int cmd)
++{
++	return _IOC_TYPE(cmd) == SOCK_IOC_TYPE;
++}
+ int dev_ioctl(struct net *net, unsigned int cmd, struct ifreq *ifr,
+ 		bool *need_copyout);
+ int dev_ifconf(struct net *net, struct ifconf *, int);
+diff --git a/include/linux/once.h b/include/linux/once.h
+index 9225ee6d96c75..ae6f4eb41cbe7 100644
+--- a/include/linux/once.h
++++ b/include/linux/once.h
+@@ -7,7 +7,7 @@
+ 
+ bool __do_once_start(bool *done, unsigned long *flags);
+ void __do_once_done(bool *done, struct static_key_true *once_key,
+-		    unsigned long *flags);
++		    unsigned long *flags, struct module *mod);
+ 
+ /* Call a function exactly once. The idea of DO_ONCE() is to perform
+  * a function call such as initialization of random seeds, etc, only
+@@ -46,7 +46,7 @@ void __do_once_done(bool *done, struct static_key_true *once_key,
+ 			if (unlikely(___ret)) {				     \
+ 				func(__VA_ARGS__);			     \
+ 				__do_once_done(&___done, &___once_key,	     \
+-					       &___flags);		     \
++					       &___flags, THIS_MODULE);	     \
+ 			}						     \
+ 		}							     \
+ 		___ret;							     \
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index 5d2705f1d01c3..fc5642431b923 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -48,6 +48,7 @@ struct pipe_buffer {
+  *	@files: number of struct file referring this pipe (protected by ->i_lock)
+  *	@r_counter: reader counter
+  *	@w_counter: writer counter
++ *	@poll_usage: is this pipe used for epoll, which has crazy wakeups?
+  *	@fasync_readers: reader side fasync
+  *	@fasync_writers: writer side fasync
+  *	@bufs: the circular array of pipe buffers
+@@ -70,6 +71,7 @@ struct pipe_inode_info {
+ 	unsigned int files;
+ 	unsigned int r_counter;
+ 	unsigned int w_counter;
++	unsigned int poll_usage;
+ 	struct page *tmp_page;
+ 	struct fasync_struct *fasync_readers;
+ 	struct fasync_struct *fasync_writers;
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 7d12c76e8fa45..0d7013da818cb 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -33,6 +33,8 @@
+ #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
+ #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
+ #define ulong2long(a)		(*(long *)(&(a)))
++#define USHORT_CMP_GE(a, b)	(USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
++#define USHORT_CMP_LT(a, b)	(USHRT_MAX / 2 < (unsigned short)((a) - (b)))
+ 
+ /* Exported common interfaces */
+ void call_rcu(struct rcu_head *head, rcu_callback_t func);
+diff --git a/include/linux/srcu.h b/include/linux/srcu.h
+index e432cc92c73de..a0895bbf71ce0 100644
+--- a/include/linux/srcu.h
++++ b/include/linux/srcu.h
+@@ -60,6 +60,9 @@ void cleanup_srcu_struct(struct srcu_struct *ssp);
+ int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
+ void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
+ void synchronize_srcu(struct srcu_struct *ssp);
++unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
++unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
++bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
+ 
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+ 
+diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
+index 5a5a1941ca156..0e0cf4d6a72a0 100644
+--- a/include/linux/srcutiny.h
++++ b/include/linux/srcutiny.h
+@@ -15,7 +15,8 @@
+ 
+ struct srcu_struct {
+ 	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
+-	short srcu_idx;			/* Current reader array element. */
++	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
++	unsigned short srcu_idx_max;	/* Furthest future srcu_idx request. */
+ 	u8 srcu_gp_running;		/* GP workqueue running? */
+ 	u8 srcu_gp_waiting;		/* GP waiting for readers? */
+ 	struct swait_queue_head srcu_wq;
+@@ -59,7 +60,7 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
+ {
+ 	int idx;
+ 
+-	idx = READ_ONCE(ssp->srcu_idx);
++	idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
+ 	WRITE_ONCE(ssp->srcu_lock_nesting[idx], ssp->srcu_lock_nesting[idx] + 1);
+ 	return idx;
+ }
+@@ -80,7 +81,7 @@ static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
+ {
+ 	int idx;
+ 
+-	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
++	idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
+ 	pr_alert("%s%s Tiny SRCU per-CPU(idx=%d): (%hd,%hd)\n",
+ 		 tt, tf, idx,
+ 		 READ_ONCE(ssp->srcu_lock_nesting[!idx]),
+diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
+index 15ca6b4167cc9..b56e1dedcf2fd 100644
+--- a/include/linux/stmmac.h
++++ b/include/linux/stmmac.h
+@@ -112,6 +112,7 @@ struct stmmac_axi {
+ 
+ #define EST_GCL		1024
+ struct stmmac_est {
++	struct mutex lock;
+ 	int enable;
+ 	u32 btr_offset[2];
+ 	u32 btr[2];
+diff --git a/kernel/audit_tree.c b/kernel/audit_tree.c
+index 6c91902f4f455..39241207ec044 100644
+--- a/kernel/audit_tree.c
++++ b/kernel/audit_tree.c
+@@ -593,7 +593,6 @@ static void prune_tree_chunks(struct audit_tree *victim, bool tagged)
+ 		spin_lock(&hash_lock);
+ 	}
+ 	spin_unlock(&hash_lock);
+-	put_tree(victim);
+ }
+ 
+ /*
+@@ -602,6 +601,7 @@ static void prune_tree_chunks(struct audit_tree *victim, bool tagged)
+ static void prune_one(struct audit_tree *victim)
+ {
+ 	prune_tree_chunks(victim, false);
++	put_tree(victim);
+ }
+ 
+ /* trim the uncommitted chunks from tree */
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index f7e99bb8c3b6c..0efe7c7bfe5e9 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -372,8 +372,8 @@ const struct bpf_func_proto bpf_get_current_ancestor_cgroup_id_proto = {
+ };
+ 
+ #ifdef CONFIG_CGROUP_BPF
+-DECLARE_PER_CPU(struct bpf_cgroup_storage*,
+-		bpf_cgroup_storage[MAX_BPF_CGROUP_STORAGE_TYPE]);
++DECLARE_PER_CPU(struct bpf_cgroup_storage_info,
++		bpf_cgroup_storage_info[BPF_CGROUP_STORAGE_NEST_MAX]);
+ 
+ BPF_CALL_2(bpf_get_local_storage, struct bpf_map *, map, u64, flags)
+ {
+@@ -382,10 +382,17 @@ BPF_CALL_2(bpf_get_local_storage, struct bpf_map *, map, u64, flags)
+ 	 * verifier checks that its value is correct.
+ 	 */
+ 	enum bpf_cgroup_storage_type stype = cgroup_storage_type(map);
+-	struct bpf_cgroup_storage *storage;
++	struct bpf_cgroup_storage *storage = NULL;
+ 	void *ptr;
++	int i;
+ 
+-	storage = this_cpu_read(bpf_cgroup_storage[stype]);
++	for (i = BPF_CGROUP_STORAGE_NEST_MAX - 1; i >= 0; i--) {
++		if (likely(this_cpu_read(bpf_cgroup_storage_info[i].task) != current))
++			continue;
++
++		storage = this_cpu_read(bpf_cgroup_storage_info[i].storage[stype]);
++		break;
++	}
+ 
+ 	if (stype == BPF_CGROUP_STORAGE_SHARED)
+ 		ptr = &READ_ONCE(storage->buf)->data[0];
+diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c
+index 571bb351ed3bf..b139247d2dd33 100644
+--- a/kernel/bpf/local_storage.c
++++ b/kernel/bpf/local_storage.c
+@@ -9,10 +9,11 @@
+ #include <linux/slab.h>
+ #include <uapi/linux/btf.h>
+ 
+-DEFINE_PER_CPU(struct bpf_cgroup_storage*, bpf_cgroup_storage[MAX_BPF_CGROUP_STORAGE_TYPE]);
+-
+ #ifdef CONFIG_CGROUP_BPF
+ 
++DEFINE_PER_CPU(struct bpf_cgroup_storage_info,
++	       bpf_cgroup_storage_info[BPF_CGROUP_STORAGE_NEST_MAX]);
++
+ #include "../cgroup/cgroup-internal.h"
+ 
+ #define LOCAL_STORAGE_CREATE_FLAG_MASK					\
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 1410f128c4042..29d4f4e375954 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -4693,8 +4693,6 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
+ 	case BPF_MAP_TYPE_RINGBUF:
+ 		if (func_id != BPF_FUNC_ringbuf_output &&
+ 		    func_id != BPF_FUNC_ringbuf_reserve &&
+-		    func_id != BPF_FUNC_ringbuf_submit &&
+-		    func_id != BPF_FUNC_ringbuf_discard &&
+ 		    func_id != BPF_FUNC_ringbuf_query)
+ 			goto error;
+ 		break;
+@@ -4798,6 +4796,12 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
+ 		if (map->map_type != BPF_MAP_TYPE_PERF_EVENT_ARRAY)
+ 			goto error;
+ 		break;
++	case BPF_FUNC_ringbuf_output:
++	case BPF_FUNC_ringbuf_reserve:
++	case BPF_FUNC_ringbuf_query:
++		if (map->map_type != BPF_MAP_TYPE_RINGBUF)
++			goto error;
++		break;
+ 	case BPF_FUNC_get_stackid:
+ 		if (map->map_type != BPF_MAP_TYPE_STACK_TRACE)
+ 			goto error;
+diff --git a/kernel/cred.c b/kernel/cred.c
+index 098213d4a39c3..8c0983fa794a7 100644
+--- a/kernel/cred.c
++++ b/kernel/cred.c
+@@ -286,13 +286,13 @@ struct cred *prepare_creds(void)
+ 	new->security = NULL;
+ #endif
+ 
+-	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
+-		goto error;
+-
+ 	new->ucounts = get_ucounts(new->ucounts);
+ 	if (!new->ucounts)
+ 		goto error;
+ 
++	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
++		goto error;
++
+ 	validate_creds(new);
+ 	return new;
+ 
+@@ -753,13 +753,13 @@ struct cred *prepare_kernel_cred(struct task_struct *daemon)
+ #ifdef CONFIG_SECURITY
+ 	new->security = NULL;
+ #endif
+-	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
+-		goto error;
+-
+ 	new->ucounts = get_ucounts(new->ucounts);
+ 	if (!new->ucounts)
+ 		goto error;
+ 
++	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
++		goto error;
++
+ 	put_cred(old);
+ 	validate_creds(new);
+ 	return new;
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 9825cf89c614d..508fe52782857 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -84,6 +84,25 @@ static inline struct kthread *to_kthread(struct task_struct *k)
+ 	return (__force void *)k->set_child_tid;
+ }
+ 
++/*
++ * Variant of to_kthread() that doesn't assume @p is a kthread.
++ *
++ * Per construction; when:
++ *
++ *   (p->flags & PF_KTHREAD) && p->set_child_tid
++ *
++ * the task is both a kthread and struct kthread is persistent. However
++ * PF_KTHREAD on it's own is not, kernel_thread() can exec() (See umh.c and
++ * begin_new_exec()).
++ */
++static inline struct kthread *__to_kthread(struct task_struct *p)
++{
++	void *kthread = (__force void *)p->set_child_tid;
++	if (kthread && !(p->flags & PF_KTHREAD))
++		kthread = NULL;
++	return kthread;
++}
++
+ void free_kthread_struct(struct task_struct *k)
+ {
+ 	struct kthread *kthread;
+@@ -168,8 +187,9 @@ EXPORT_SYMBOL_GPL(kthread_freezable_should_stop);
+  */
+ void *kthread_func(struct task_struct *task)
+ {
+-	if (task->flags & PF_KTHREAD)
+-		return to_kthread(task)->threadfn;
++	struct kthread *kthread = __to_kthread(task);
++	if (kthread)
++		return kthread->threadfn;
+ 	return NULL;
+ }
+ EXPORT_SYMBOL_GPL(kthread_func);
+@@ -199,10 +219,11 @@ EXPORT_SYMBOL_GPL(kthread_data);
+  */
+ void *kthread_probe_data(struct task_struct *task)
+ {
+-	struct kthread *kthread = to_kthread(task);
++	struct kthread *kthread = __to_kthread(task);
+ 	void *data = NULL;
+ 
+-	copy_from_kernel_nofault(&data, &kthread->data, sizeof(data));
++	if (kthread)
++		copy_from_kernel_nofault(&data, &kthread->data, sizeof(data));
+ 	return data;
+ }
+ 
+@@ -514,9 +535,9 @@ void kthread_set_per_cpu(struct task_struct *k, int cpu)
+ 	set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
+ }
+ 
+-bool kthread_is_per_cpu(struct task_struct *k)
++bool kthread_is_per_cpu(struct task_struct *p)
+ {
+-	struct kthread *kthread = to_kthread(k);
++	struct kthread *kthread = __to_kthread(p);
+ 	if (!kthread)
+ 		return false;
+ 
+diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
+index 6208c1dae5c95..26344dc6483b0 100644
+--- a/kernel/rcu/srcutiny.c
++++ b/kernel/rcu/srcutiny.c
+@@ -34,6 +34,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp)
+ 	ssp->srcu_gp_running = false;
+ 	ssp->srcu_gp_waiting = false;
+ 	ssp->srcu_idx = 0;
++	ssp->srcu_idx_max = 0;
+ 	INIT_WORK(&ssp->srcu_work, srcu_drive_gp);
+ 	INIT_LIST_HEAD(&ssp->srcu_work.entry);
+ 	return 0;
+@@ -84,6 +85,8 @@ void cleanup_srcu_struct(struct srcu_struct *ssp)
+ 	WARN_ON(ssp->srcu_gp_waiting);
+ 	WARN_ON(ssp->srcu_cb_head);
+ 	WARN_ON(&ssp->srcu_cb_head != ssp->srcu_cb_tail);
++	WARN_ON(ssp->srcu_idx != ssp->srcu_idx_max);
++	WARN_ON(ssp->srcu_idx & 0x1);
+ }
+ EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
+ 
+@@ -114,7 +117,7 @@ void srcu_drive_gp(struct work_struct *wp)
+ 	struct srcu_struct *ssp;
+ 
+ 	ssp = container_of(wp, struct srcu_struct, srcu_work);
+-	if (ssp->srcu_gp_running || !READ_ONCE(ssp->srcu_cb_head))
++	if (ssp->srcu_gp_running || USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
+ 		return; /* Already running or nothing to do. */
+ 
+ 	/* Remove recently arrived callbacks and wait for readers. */
+@@ -124,11 +127,12 @@ void srcu_drive_gp(struct work_struct *wp)
+ 	ssp->srcu_cb_head = NULL;
+ 	ssp->srcu_cb_tail = &ssp->srcu_cb_head;
+ 	local_irq_enable();
+-	idx = ssp->srcu_idx;
+-	WRITE_ONCE(ssp->srcu_idx, !ssp->srcu_idx);
++	idx = (ssp->srcu_idx & 0x2) / 2;
++	WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
+ 	WRITE_ONCE(ssp->srcu_gp_waiting, true);  /* srcu_read_unlock() wakes! */
+ 	swait_event_exclusive(ssp->srcu_wq, !READ_ONCE(ssp->srcu_lock_nesting[idx]));
+ 	WRITE_ONCE(ssp->srcu_gp_waiting, false); /* srcu_read_unlock() cheap. */
++	WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
+ 
+ 	/* Invoke the callbacks we removed above. */
+ 	while (lh) {
+@@ -146,11 +150,27 @@ void srcu_drive_gp(struct work_struct *wp)
+ 	 * straighten that out.
+ 	 */
+ 	WRITE_ONCE(ssp->srcu_gp_running, false);
+-	if (READ_ONCE(ssp->srcu_cb_head))
++	if (USHORT_CMP_LT(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
+ 		schedule_work(&ssp->srcu_work);
+ }
+ EXPORT_SYMBOL_GPL(srcu_drive_gp);
+ 
++static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
++{
++	unsigned short cookie;
++
++	cookie = get_state_synchronize_srcu(ssp);
++	if (USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx_max), cookie))
++		return;
++	WRITE_ONCE(ssp->srcu_idx_max, cookie);
++	if (!READ_ONCE(ssp->srcu_gp_running)) {
++		if (likely(srcu_init_done))
++			schedule_work(&ssp->srcu_work);
++		else if (list_empty(&ssp->srcu_work.entry))
++			list_add(&ssp->srcu_work.entry, &srcu_boot_list);
++	}
++}
++
+ /*
+  * Enqueue an SRCU callback on the specified srcu_struct structure,
+  * initiating grace-period processing if it is not already running.
+@@ -166,12 +186,7 @@ void call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
+ 	*ssp->srcu_cb_tail = rhp;
+ 	ssp->srcu_cb_tail = &rhp->next;
+ 	local_irq_restore(flags);
+-	if (!READ_ONCE(ssp->srcu_gp_running)) {
+-		if (likely(srcu_init_done))
+-			schedule_work(&ssp->srcu_work);
+-		else if (list_empty(&ssp->srcu_work.entry))
+-			list_add(&ssp->srcu_work.entry, &srcu_boot_list);
+-	}
++	srcu_gp_start_if_needed(ssp);
+ }
+ EXPORT_SYMBOL_GPL(call_srcu);
+ 
+@@ -190,6 +205,48 @@ void synchronize_srcu(struct srcu_struct *ssp)
+ }
+ EXPORT_SYMBOL_GPL(synchronize_srcu);
+ 
++/*
++ * get_state_synchronize_srcu - Provide an end-of-grace-period cookie
++ */
++unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp)
++{
++	unsigned long ret;
++
++	barrier();
++	ret = (READ_ONCE(ssp->srcu_idx) + 3) & ~0x1;
++	barrier();
++	return ret & USHRT_MAX;
++}
++EXPORT_SYMBOL_GPL(get_state_synchronize_srcu);
++
++/*
++ * start_poll_synchronize_srcu - Provide cookie and start grace period
++ *
++ * The difference between this and get_state_synchronize_srcu() is that
++ * this function ensures that the poll_state_synchronize_srcu() will
++ * eventually return the value true.
++ */
++unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp)
++{
++	unsigned long ret = get_state_synchronize_srcu(ssp);
++
++	srcu_gp_start_if_needed(ssp);
++	return ret;
++}
++EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu);
++
++/*
++ * poll_state_synchronize_srcu - Has cookie's grace period ended?
++ */
++bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
++{
++	bool ret = USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx), cookie);
++
++	barrier();
++	return ret;
++}
++EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);
++
+ /* Lockdep diagnostics.  */
+ void __init rcu_scheduler_starting(void)
+ {
+diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
+index 68ceac3878445..b8821665c4357 100644
+--- a/kernel/rcu/srcutree.c
++++ b/kernel/rcu/srcutree.c
+@@ -808,6 +808,46 @@ static void srcu_leak_callback(struct rcu_head *rhp)
+ {
+ }
+ 
++/*
++ * Start an SRCU grace period, and also queue the callback if non-NULL.
++ */
++static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
++					     struct rcu_head *rhp, bool do_norm)
++{
++	unsigned long flags;
++	int idx;
++	bool needexp = false;
++	bool needgp = false;
++	unsigned long s;
++	struct srcu_data *sdp;
++
++	check_init_srcu_struct(ssp);
++	idx = srcu_read_lock(ssp);
++	sdp = raw_cpu_ptr(ssp->sda);
++	spin_lock_irqsave_rcu_node(sdp, flags);
++	if (rhp)
++		rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
++	rcu_segcblist_advance(&sdp->srcu_cblist,
++			      rcu_seq_current(&ssp->srcu_gp_seq));
++	s = rcu_seq_snap(&ssp->srcu_gp_seq);
++	(void)rcu_segcblist_accelerate(&sdp->srcu_cblist, s);
++	if (ULONG_CMP_LT(sdp->srcu_gp_seq_needed, s)) {
++		sdp->srcu_gp_seq_needed = s;
++		needgp = true;
++	}
++	if (!do_norm && ULONG_CMP_LT(sdp->srcu_gp_seq_needed_exp, s)) {
++		sdp->srcu_gp_seq_needed_exp = s;
++		needexp = true;
++	}
++	spin_unlock_irqrestore_rcu_node(sdp, flags);
++	if (needgp)
++		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
++	else if (needexp)
++		srcu_funnel_exp_start(ssp, sdp->mynode, s);
++	srcu_read_unlock(ssp, idx);
++	return s;
++}
++
+ /*
+  * Enqueue an SRCU callback on the srcu_data structure associated with
+  * the current CPU and the specified srcu_struct structure, initiating
+@@ -839,14 +879,6 @@ static void srcu_leak_callback(struct rcu_head *rhp)
+ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
+ 			rcu_callback_t func, bool do_norm)
+ {
+-	unsigned long flags;
+-	int idx;
+-	bool needexp = false;
+-	bool needgp = false;
+-	unsigned long s;
+-	struct srcu_data *sdp;
+-
+-	check_init_srcu_struct(ssp);
+ 	if (debug_rcu_head_queue(rhp)) {
+ 		/* Probable double call_srcu(), so leak the callback. */
+ 		WRITE_ONCE(rhp->func, srcu_leak_callback);
+@@ -854,28 +886,7 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
+ 		return;
+ 	}
+ 	rhp->func = func;
+-	idx = srcu_read_lock(ssp);
+-	sdp = raw_cpu_ptr(ssp->sda);
+-	spin_lock_irqsave_rcu_node(sdp, flags);
+-	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
+-	rcu_segcblist_advance(&sdp->srcu_cblist,
+-			      rcu_seq_current(&ssp->srcu_gp_seq));
+-	s = rcu_seq_snap(&ssp->srcu_gp_seq);
+-	(void)rcu_segcblist_accelerate(&sdp->srcu_cblist, s);
+-	if (ULONG_CMP_LT(sdp->srcu_gp_seq_needed, s)) {
+-		sdp->srcu_gp_seq_needed = s;
+-		needgp = true;
+-	}
+-	if (!do_norm && ULONG_CMP_LT(sdp->srcu_gp_seq_needed_exp, s)) {
+-		sdp->srcu_gp_seq_needed_exp = s;
+-		needexp = true;
+-	}
+-	spin_unlock_irqrestore_rcu_node(sdp, flags);
+-	if (needgp)
+-		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
+-	else if (needexp)
+-		srcu_funnel_exp_start(ssp, sdp->mynode, s);
+-	srcu_read_unlock(ssp, idx);
++	(void)srcu_gp_start_if_needed(ssp, rhp, do_norm);
+ }
+ 
+ /**
+@@ -1004,6 +1015,62 @@ void synchronize_srcu(struct srcu_struct *ssp)
+ }
+ EXPORT_SYMBOL_GPL(synchronize_srcu);
+ 
++/**
++ * get_state_synchronize_srcu - Provide an end-of-grace-period cookie
++ * @ssp: srcu_struct to provide cookie for.
++ *
++ * This function returns a cookie that can be passed to
++ * poll_state_synchronize_srcu(), which will return true if a full grace
++ * period has elapsed in the meantime.  It is the caller's responsibility
++ * to make sure that grace period happens, for example, by invoking
++ * call_srcu() after return from get_state_synchronize_srcu().
++ */
++unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp)
++{
++	// Any prior manipulation of SRCU-protected data must happen
++	// before the load from ->srcu_gp_seq.
++	smp_mb();
++	return rcu_seq_snap(&ssp->srcu_gp_seq);
++}
++EXPORT_SYMBOL_GPL(get_state_synchronize_srcu);
++
++/**
++ * start_poll_synchronize_srcu - Provide cookie and start grace period
++ * @ssp: srcu_struct to provide cookie for.
++ *
++ * This function returns a cookie that can be passed to
++ * poll_state_synchronize_srcu(), which will return true if a full grace
++ * period has elapsed in the meantime.  Unlike get_state_synchronize_srcu(),
++ * this function also ensures that any needed SRCU grace period will be
++ * started.  This convenience does come at a cost in terms of CPU overhead.
++ */
++unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp)
++{
++	return srcu_gp_start_if_needed(ssp, NULL, true);
++}
++EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu);
++
++/**
++ * poll_state_synchronize_srcu - Has cookie's grace period ended?
++ * @ssp: srcu_struct to provide cookie for.
++ * @cookie: Return value from get_state_synchronize_srcu() or start_poll_synchronize_srcu().
++ *
++ * This function takes the cookie that was returned from either
++ * get_state_synchronize_srcu() or start_poll_synchronize_srcu(), and
++ * returns @true if an SRCU grace period elapsed since the time that the
++ * cookie was created.
++ */
++bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
++{
++	if (!rcu_seq_done(&ssp->srcu_gp_seq, cookie))
++		return false;
++	// Ensure that the end of the SRCU grace period happens before
++	// any subsequent code that the caller might execute.
++	smp_mb(); // ^^^
++	return true;
++}
++EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);
++
+ /*
+  * Callback function for srcu_barrier() use.
+  */
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 262b02d750076..bad97d35684d2 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7569,7 +7569,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
+ 		return 0;
+ 
+ 	/* Disregard pcpu kthreads; they are where they need to be. */
+-	if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
++	if (kthread_is_per_cpu(p))
+ 		return 0;
+ 
+ 	if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {
+diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
+index d7260f6614a6a..2dff7f1a27ec0 100644
+--- a/kernel/tracepoint.c
++++ b/kernel/tracepoint.c
+@@ -28,6 +28,44 @@ extern tracepoint_ptr_t __stop___tracepoints_ptrs[];
+ DEFINE_SRCU(tracepoint_srcu);
+ EXPORT_SYMBOL_GPL(tracepoint_srcu);
+ 
++enum tp_transition_sync {
++	TP_TRANSITION_SYNC_1_0_1,
++	TP_TRANSITION_SYNC_N_2_1,
++
++	_NR_TP_TRANSITION_SYNC,
++};
++
++struct tp_transition_snapshot {
++	unsigned long rcu;
++	unsigned long srcu;
++	bool ongoing;
++};
++
++/* Protected by tracepoints_mutex */
++static struct tp_transition_snapshot tp_transition_snapshot[_NR_TP_TRANSITION_SYNC];
++
++static void tp_rcu_get_state(enum tp_transition_sync sync)
++{
++	struct tp_transition_snapshot *snapshot = &tp_transition_snapshot[sync];
++
++	/* Keep the latest get_state snapshot. */
++	snapshot->rcu = get_state_synchronize_rcu();
++	snapshot->srcu = start_poll_synchronize_srcu(&tracepoint_srcu);
++	snapshot->ongoing = true;
++}
++
++static void tp_rcu_cond_sync(enum tp_transition_sync sync)
++{
++	struct tp_transition_snapshot *snapshot = &tp_transition_snapshot[sync];
++
++	if (!snapshot->ongoing)
++		return;
++	cond_synchronize_rcu(snapshot->rcu);
++	if (!poll_state_synchronize_srcu(&tracepoint_srcu, snapshot->srcu))
++		synchronize_srcu(&tracepoint_srcu);
++	snapshot->ongoing = false;
++}
++
+ /* Set to 1 to enable tracepoint debug output */
+ static const int tracepoint_debug;
+ 
+@@ -332,6 +370,11 @@ static int tracepoint_add_func(struct tracepoint *tp,
+ 	 */
+ 	switch (nr_func_state(tp_funcs)) {
+ 	case TP_FUNC_1:		/* 0->1 */
++		/*
++		 * Make sure new static func never uses old data after a
++		 * 1->0->1 transition sequence.
++		 */
++		tp_rcu_cond_sync(TP_TRANSITION_SYNC_1_0_1);
+ 		/* Set static call to first function */
+ 		tracepoint_update_call(tp, tp_funcs);
+ 		/* Both iterator and static call handle NULL tp->funcs */
+@@ -346,10 +389,15 @@ static int tracepoint_add_func(struct tracepoint *tp,
+ 		 * Requires ordering between RCU assign/dereference and
+ 		 * static call update/call.
+ 		 */
+-		rcu_assign_pointer(tp->funcs, tp_funcs);
+-		break;
++		fallthrough;
+ 	case TP_FUNC_N:		/* N->N+1 (N>1) */
+ 		rcu_assign_pointer(tp->funcs, tp_funcs);
++		/*
++		 * Make sure static func never uses incorrect data after a
++		 * N->...->2->1 (N>1) transition sequence.
++		 */
++		if (tp_funcs[0].data != old[0].data)
++			tp_rcu_get_state(TP_TRANSITION_SYNC_N_2_1);
+ 		break;
+ 	default:
+ 		WARN_ON_ONCE(1);
+@@ -393,24 +441,23 @@ static int tracepoint_remove_func(struct tracepoint *tp,
+ 		/* Both iterator and static call handle NULL tp->funcs */
+ 		rcu_assign_pointer(tp->funcs, NULL);
+ 		/*
+-		 * Make sure new func never uses old data after a 1->0->1
+-		 * transition sequence.
+-		 * Considering that transition 0->1 is the common case
+-		 * and don't have rcu-sync, issue rcu-sync after
+-		 * transition 1->0 to break that sequence by waiting for
+-		 * readers to be quiescent.
++		 * Make sure new static func never uses old data after a
++		 * 1->0->1 transition sequence.
+ 		 */
+-		tracepoint_synchronize_unregister();
++		tp_rcu_get_state(TP_TRANSITION_SYNC_1_0_1);
+ 		break;
+ 	case TP_FUNC_1:		/* 2->1 */
+ 		rcu_assign_pointer(tp->funcs, tp_funcs);
+ 		/*
+-		 * On 2->1 transition, RCU sync is needed before setting
+-		 * static call to first callback, because the observer
+-		 * may have loaded any prior tp->funcs after the last one
+-		 * associated with an rcu-sync.
++		 * Make sure static func never uses incorrect data after a
++		 * N->...->2->1 (N>2) transition sequence. If the first
++		 * element's data has changed, then force the synchronization
++		 * to prevent current readers that have loaded the old data
++		 * from calling the new function.
+ 		 */
+-		tracepoint_synchronize_unregister();
++		if (tp_funcs[0].data != old[0].data)
++			tp_rcu_get_state(TP_TRANSITION_SYNC_N_2_1);
++		tp_rcu_cond_sync(TP_TRANSITION_SYNC_N_2_1);
+ 		/* Set static call to first function */
+ 		tracepoint_update_call(tp, tp_funcs);
+ 		break;
+@@ -418,6 +465,12 @@ static int tracepoint_remove_func(struct tracepoint *tp,
+ 		fallthrough;
+ 	case TP_FUNC_N:
+ 		rcu_assign_pointer(tp->funcs, tp_funcs);
++		/*
++		 * Make sure static func never uses incorrect data after a
++		 * N->...->2->1 (N>2) transition sequence.
++		 */
++		if (tp_funcs[0].data != old[0].data)
++			tp_rcu_get_state(TP_TRANSITION_SYNC_N_2_1);
+ 		break;
+ 	default:
+ 		WARN_ON_ONCE(1);
+diff --git a/lib/once.c b/lib/once.c
+index 8b7d6235217ee..59149bf3bfb4a 100644
+--- a/lib/once.c
++++ b/lib/once.c
+@@ -3,10 +3,12 @@
+ #include <linux/spinlock.h>
+ #include <linux/once.h>
+ #include <linux/random.h>
++#include <linux/module.h>
+ 
+ struct once_work {
+ 	struct work_struct work;
+ 	struct static_key_true *key;
++	struct module *module;
+ };
+ 
+ static void once_deferred(struct work_struct *w)
+@@ -16,10 +18,11 @@ static void once_deferred(struct work_struct *w)
+ 	work = container_of(w, struct once_work, work);
+ 	BUG_ON(!static_key_enabled(work->key));
+ 	static_branch_disable(work->key);
++	module_put(work->module);
+ 	kfree(work);
+ }
+ 
+-static void once_disable_jump(struct static_key_true *key)
++static void once_disable_jump(struct static_key_true *key, struct module *mod)
+ {
+ 	struct once_work *w;
+ 
+@@ -29,6 +32,8 @@ static void once_disable_jump(struct static_key_true *key)
+ 
+ 	INIT_WORK(&w->work, once_deferred);
+ 	w->key = key;
++	w->module = mod;
++	__module_get(mod);
+ 	schedule_work(&w->work);
+ }
+ 
+@@ -53,11 +58,11 @@ bool __do_once_start(bool *done, unsigned long *flags)
+ EXPORT_SYMBOL(__do_once_start);
+ 
+ void __do_once_done(bool *done, struct static_key_true *once_key,
+-		    unsigned long *flags)
++		    unsigned long *flags, struct module *mod)
+ 	__releases(once_lock)
+ {
+ 	*done = true;
+ 	spin_unlock_irqrestore(&once_lock, *flags);
+-	once_disable_jump(once_key);
++	once_disable_jump(once_key, mod);
+ }
+ EXPORT_SYMBOL(__do_once_done);
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index e7cbd1b4a5e51..72d424a5a1429 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -42,13 +42,17 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
+ 	migrate_disable();
+ 	time_start = ktime_get_ns();
+ 	for (i = 0; i < repeat; i++) {
+-		bpf_cgroup_storage_set(storage);
++		ret = bpf_cgroup_storage_set(storage);
++		if (ret)
++			break;
+ 
+ 		if (xdp)
+ 			*retval = bpf_prog_run_xdp(prog, ctx);
+ 		else
+ 			*retval = BPF_PROG_RUN(prog, ctx);
+ 
++		bpf_cgroup_storage_unset();
++
+ 		if (signal_pending(current)) {
+ 			ret = -EINTR;
+ 			break;
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index dd46592464058..7266571d5c7e2 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2601,6 +2601,7 @@ static int do_setlink(const struct sk_buff *skb,
+ 		return err;
+ 
+ 	if (tb[IFLA_NET_NS_PID] || tb[IFLA_NET_NS_FD] || tb[IFLA_TARGET_NETNSID]) {
++		const char *pat = ifname && ifname[0] ? ifname : NULL;
+ 		struct net *net = rtnl_link_get_net_capable(skb, dev_net(dev),
+ 							    tb, CAP_NET_ADMIN);
+ 		if (IS_ERR(net)) {
+@@ -2608,7 +2609,7 @@ static int do_setlink(const struct sk_buff *skb,
+ 			goto errout;
+ 		}
+ 
+-		err = dev_change_net_namespace(dev, net, ifname);
++		err = dev_change_net_namespace(dev, net, pat);
+ 		put_net(net);
+ 		if (err)
+ 			goto errout;
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index e70291748889b..a0829495b211e 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -468,6 +468,8 @@ static void __gre_xmit(struct sk_buff *skb, struct net_device *dev,
+ 
+ static int gre_handle_offloads(struct sk_buff *skb, bool csum)
+ {
++	if (csum && skb_checksum_start(skb) < skb->data)
++		return -EINVAL;
+ 	return iptunnel_handle_offloads(skb, csum ? SKB_GSO_GRE_CSUM : SKB_GSO_GRE);
+ }
+ 
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index e15c1d8b7c8de..3d9946fd41f38 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -624,14 +624,14 @@ static struct fib_nh_exception *fnhe_oldest(struct fnhe_hash_bucket *hash)
+ 	return oldest;
+ }
+ 
+-static inline u32 fnhe_hashfun(__be32 daddr)
++static u32 fnhe_hashfun(__be32 daddr)
+ {
+-	static u32 fnhe_hashrnd __read_mostly;
+-	u32 hval;
++	static siphash_key_t fnhe_hash_key __read_mostly;
++	u64 hval;
+ 
+-	net_get_random_once(&fnhe_hashrnd, sizeof(fnhe_hashrnd));
+-	hval = jhash_1word((__force u32)daddr, fnhe_hashrnd);
+-	return hash_32(hval, FNHE_HASH_SHIFT);
++	net_get_random_once(&fnhe_hash_key, sizeof(fnhe_hash_key));
++	hval = siphash_1u32((__force u32)daddr, &fnhe_hash_key);
++	return hash_64(hval, FNHE_HASH_SHIFT);
+ }
+ 
+ static void fill_route_from_fnhe(struct rtable *rt, struct fib_nh_exception *fnhe)
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 62db3c98424bd..bcf4fae83a9bd 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -41,6 +41,7 @@
+ #include <linux/nsproxy.h>
+ #include <linux/slab.h>
+ #include <linux/jhash.h>
++#include <linux/siphash.h>
+ #include <net/net_namespace.h>
+ #include <net/snmp.h>
+ #include <net/ipv6.h>
+@@ -1482,17 +1483,24 @@ static void rt6_exception_remove_oldest(struct rt6_exception_bucket *bucket)
+ static u32 rt6_exception_hash(const struct in6_addr *dst,
+ 			      const struct in6_addr *src)
+ {
+-	static u32 seed __read_mostly;
+-	u32 val;
++	static siphash_key_t rt6_exception_key __read_mostly;
++	struct {
++		struct in6_addr dst;
++		struct in6_addr src;
++	} __aligned(SIPHASH_ALIGNMENT) combined = {
++		.dst = *dst,
++	};
++	u64 val;
+ 
+-	net_get_random_once(&seed, sizeof(seed));
+-	val = jhash2((const u32 *)dst, sizeof(*dst)/sizeof(u32), seed);
++	net_get_random_once(&rt6_exception_key, sizeof(rt6_exception_key));
+ 
+ #ifdef CONFIG_IPV6_SUBTREES
+ 	if (src)
+-		val = jhash2((const u32 *)src, sizeof(*src)/sizeof(u32), val);
++		combined.src = *src;
+ #endif
+-	return hash_32(val, FIB6_EXCEPTION_BUCKET_SIZE_SHIFT);
++	val = siphash(&combined, sizeof(combined), &rt6_exception_key);
++
++	return hash_64(val, FIB6_EXCEPTION_BUCKET_SIZE_SHIFT);
+ }
+ 
+ /* Helper function to find the cached rt in the hash table
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index f9f2af26ccb37..54430a34d2f64 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -66,22 +66,17 @@ EXPORT_SYMBOL_GPL(nf_conntrack_hash);
+ 
+ struct conntrack_gc_work {
+ 	struct delayed_work	dwork;
+-	u32			last_bucket;
++	u32			next_bucket;
+ 	bool			exiting;
+ 	bool			early_drop;
+-	long			next_gc_run;
+ };
+ 
+ static __read_mostly struct kmem_cache *nf_conntrack_cachep;
+ static DEFINE_SPINLOCK(nf_conntrack_locks_all_lock);
+ static __read_mostly bool nf_conntrack_locks_all;
+ 
+-/* every gc cycle scans at most 1/GC_MAX_BUCKETS_DIV part of table */
+-#define GC_MAX_BUCKETS_DIV	128u
+-/* upper bound of full table scan */
+-#define GC_MAX_SCAN_JIFFIES	(16u * HZ)
+-/* desired ratio of entries found to be expired */
+-#define GC_EVICT_RATIO	50u
++#define GC_SCAN_INTERVAL	(120u * HZ)
++#define GC_SCAN_MAX_DURATION	msecs_to_jiffies(10)
+ 
+ static struct conntrack_gc_work conntrack_gc_work;
+ 
+@@ -1352,17 +1347,13 @@ static bool gc_worker_can_early_drop(const struct nf_conn *ct)
+ 
+ static void gc_worker(struct work_struct *work)
+ {
+-	unsigned int min_interval = max(HZ / GC_MAX_BUCKETS_DIV, 1u);
+-	unsigned int i, goal, buckets = 0, expired_count = 0;
+-	unsigned int nf_conntrack_max95 = 0;
++	unsigned long end_time = jiffies + GC_SCAN_MAX_DURATION;
++	unsigned int i, hashsz, nf_conntrack_max95 = 0;
++	unsigned long next_run = GC_SCAN_INTERVAL;
+ 	struct conntrack_gc_work *gc_work;
+-	unsigned int ratio, scanned = 0;
+-	unsigned long next_run;
+-
+ 	gc_work = container_of(work, struct conntrack_gc_work, dwork.work);
+ 
+-	goal = nf_conntrack_htable_size / GC_MAX_BUCKETS_DIV;
+-	i = gc_work->last_bucket;
++	i = gc_work->next_bucket;
+ 	if (gc_work->early_drop)
+ 		nf_conntrack_max95 = nf_conntrack_max / 100u * 95u;
+ 
+@@ -1370,22 +1361,21 @@ static void gc_worker(struct work_struct *work)
+ 		struct nf_conntrack_tuple_hash *h;
+ 		struct hlist_nulls_head *ct_hash;
+ 		struct hlist_nulls_node *n;
+-		unsigned int hashsz;
+ 		struct nf_conn *tmp;
+ 
+-		i++;
+ 		rcu_read_lock();
+ 
+ 		nf_conntrack_get_ht(&ct_hash, &hashsz);
+-		if (i >= hashsz)
+-			i = 0;
++		if (i >= hashsz) {
++			rcu_read_unlock();
++			break;
++		}
+ 
+ 		hlist_nulls_for_each_entry_rcu(h, n, &ct_hash[i], hnnode) {
+ 			struct net *net;
+ 
+ 			tmp = nf_ct_tuplehash_to_ctrack(h);
+ 
+-			scanned++;
+ 			if (test_bit(IPS_OFFLOAD_BIT, &tmp->status)) {
+ 				nf_ct_offload_timeout(tmp);
+ 				continue;
+@@ -1393,7 +1383,6 @@ static void gc_worker(struct work_struct *work)
+ 
+ 			if (nf_ct_is_expired(tmp)) {
+ 				nf_ct_gc_expired(tmp);
+-				expired_count++;
+ 				continue;
+ 			}
+ 
+@@ -1425,7 +1414,14 @@ static void gc_worker(struct work_struct *work)
+ 		 */
+ 		rcu_read_unlock();
+ 		cond_resched();
+-	} while (++buckets < goal);
++		i++;
++
++		if (time_after(jiffies, end_time) && i < hashsz) {
++			gc_work->next_bucket = i;
++			next_run = 0;
++			break;
++		}
++	} while (i < hashsz);
+ 
+ 	if (gc_work->exiting)
+ 		return;
+@@ -1436,40 +1432,17 @@ static void gc_worker(struct work_struct *work)
+ 	 *
+ 	 * This worker is only here to reap expired entries when system went
+ 	 * idle after a busy period.
+-	 *
+-	 * The heuristics below are supposed to balance conflicting goals:
+-	 *
+-	 * 1. Minimize time until we notice a stale entry
+-	 * 2. Maximize scan intervals to not waste cycles
+-	 *
+-	 * Normally, expire ratio will be close to 0.
+-	 *
+-	 * As soon as a sizeable fraction of the entries have expired
+-	 * increase scan frequency.
+ 	 */
+-	ratio = scanned ? expired_count * 100 / scanned : 0;
+-	if (ratio > GC_EVICT_RATIO) {
+-		gc_work->next_gc_run = min_interval;
+-	} else {
+-		unsigned int max = GC_MAX_SCAN_JIFFIES / GC_MAX_BUCKETS_DIV;
+-
+-		BUILD_BUG_ON((GC_MAX_SCAN_JIFFIES / GC_MAX_BUCKETS_DIV) == 0);
+-
+-		gc_work->next_gc_run += min_interval;
+-		if (gc_work->next_gc_run > max)
+-			gc_work->next_gc_run = max;
++	if (next_run) {
++		gc_work->early_drop = false;
++		gc_work->next_bucket = 0;
+ 	}
+-
+-	next_run = gc_work->next_gc_run;
+-	gc_work->last_bucket = i;
+-	gc_work->early_drop = false;
+ 	queue_delayed_work(system_power_efficient_wq, &gc_work->dwork, next_run);
+ }
+ 
+ static void conntrack_gc_work_init(struct conntrack_gc_work *gc_work)
+ {
+ 	INIT_DEFERRABLE_WORK(&gc_work->dwork, gc_worker);
+-	gc_work->next_gc_run = HZ;
+ 	gc_work->exiting = false;
+ }
+ 
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 6826558483f97..56cffbfa000b7 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -486,7 +486,7 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ 		goto err;
+ 	}
+ 
+-	if (len != ALIGN(size, 4) + hdrlen)
++	if (!size || len != ALIGN(size, 4) + hdrlen)
+ 		goto err;
+ 
+ 	if (cb->dst_port != QRTR_PORT_CTRL && cb->type != QRTR_TYPE_DATA &&
+diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c
+index 9b6ffff72f2d1..28c1b00221780 100644
+--- a/net/rds/ib_frmr.c
++++ b/net/rds/ib_frmr.c
+@@ -131,9 +131,9 @@ static int rds_ib_post_reg_frmr(struct rds_ib_mr *ibmr)
+ 		cpu_relax();
+ 	}
+ 
+-	ret = ib_map_mr_sg_zbva(frmr->mr, ibmr->sg, ibmr->sg_len,
++	ret = ib_map_mr_sg_zbva(frmr->mr, ibmr->sg, ibmr->sg_dma_len,
+ 				&off, PAGE_SIZE);
+-	if (unlikely(ret != ibmr->sg_len))
++	if (unlikely(ret != ibmr->sg_dma_len))
+ 		return ret < 0 ? ret : -EINVAL;
+ 
+ 	if (cmpxchg(&frmr->fr_state,
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index c1e84d1eeaba8..c76701ac35abf 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -660,6 +660,13 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ 	sch_tree_lock(sch);
+ 
+ 	q->nbands = nbands;
++	for (i = nstrict; i < q->nstrict; i++) {
++		INIT_LIST_HEAD(&q->classes[i].alist);
++		if (q->classes[i].qdisc->q.qlen) {
++			list_add_tail(&q->classes[i].alist, &q->active);
++			q->classes[i].deficit = quanta[i];
++		}
++	}
+ 	q->nstrict = nstrict;
+ 	memcpy(q->prio2band, priomap, sizeof(priomap));
+ 
+diff --git a/net/socket.c b/net/socket.c
+index 002d5952ae5d8..dd5da07bc1ffc 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -1062,7 +1062,7 @@ static long sock_do_ioctl(struct net *net, struct socket *sock,
+ 		rtnl_unlock();
+ 		if (!err && copy_to_user(argp, &ifc, sizeof(struct ifconf)))
+ 			err = -EFAULT;
+-	} else {
++	} else if (is_socket_ioctl_cmd(cmd)) {
+ 		struct ifreq ifr;
+ 		bool need_copyout;
+ 		if (copy_from_user(&ifr, argp, sizeof(struct ifreq)))
+@@ -1071,6 +1071,8 @@ static long sock_do_ioctl(struct net *net, struct socket *sock,
+ 		if (!err && need_copyout)
+ 			if (copy_to_user(argp, &ifr, sizeof(struct ifreq)))
+ 				return -EFAULT;
++	} else {
++		err = -ENOTTY;
+ 	}
+ 	return err;
+ }
+@@ -3264,6 +3266,8 @@ static int compat_ifr_data_ioctl(struct net *net, unsigned int cmd,
+ 	struct ifreq ifreq;
+ 	u32 data32;
+ 
++	if (!is_socket_ioctl_cmd(cmd))
++		return -ENOTTY;
+ 	if (copy_from_user(ifreq.ifr_name, u_ifreq32->ifr_name, IFNAMSIZ))
+ 		return -EFAULT;
+ 	if (get_user(data32, &u_ifreq32->ifr_data))
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 4f9bd95b4eeed..9bd72468bc68e 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1511,7 +1511,7 @@ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dlen)
+ 
+ 	if (unlikely(syn && !rc)) {
+ 		tipc_set_sk_state(sk, TIPC_CONNECTING);
+-		if (timeout) {
++		if (dlen && timeout) {
+ 			timeout = msecs_to_jiffies(timeout);
+ 			tipc_wait_for_connect(sock, &timeout);
+ 		}
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index 2e41b8c169e5b..0486b14697994 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -44,6 +44,7 @@ static const struct reg_sequence patch_list[] = {
+ 	{RT5682_I2C_CTRL, 0x000f},
+ 	{RT5682_PLL2_INTERNAL, 0x8266},
+ 	{RT5682_SAR_IL_CMD_3, 0x8365},
++	{RT5682_SAR_IL_CMD_6, 0x0180},
+ };
+ 
+ void rt5682_apply_patch_list(struct rt5682_priv *rt5682, struct device *dev)
+diff --git a/sound/soc/soc-component.c b/sound/soc/soc-component.c
+index 728e93f35ffbe..4295c05929016 100644
+--- a/sound/soc/soc-component.c
++++ b/sound/soc/soc-component.c
+@@ -135,86 +135,75 @@ int snd_soc_component_set_bias_level(struct snd_soc_component *component,
+ 	return soc_component_ret(component, ret);
+ }
+ 
+-static int soc_component_pin(struct snd_soc_component *component,
+-			     const char *pin,
+-			     int (*pin_func)(struct snd_soc_dapm_context *dapm,
+-					     const char *pin))
+-{
+-	struct snd_soc_dapm_context *dapm =
+-		snd_soc_component_get_dapm(component);
+-	char *full_name;
+-	int ret;
+-
+-	if (!component->name_prefix) {
+-		ret = pin_func(dapm, pin);
+-		goto end;
+-	}
+-
+-	full_name = kasprintf(GFP_KERNEL, "%s %s", component->name_prefix, pin);
+-	if (!full_name) {
+-		ret = -ENOMEM;
+-		goto end;
+-	}
+-
+-	ret = pin_func(dapm, full_name);
+-	kfree(full_name);
+-end:
+-	return soc_component_ret(component, ret);
+-}
+-
+ int snd_soc_component_enable_pin(struct snd_soc_component *component,
+ 				 const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_enable_pin);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_enable_pin(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_enable_pin);
+ 
+ int snd_soc_component_enable_pin_unlocked(struct snd_soc_component *component,
+ 					  const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_enable_pin_unlocked);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_enable_pin_unlocked(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_enable_pin_unlocked);
+ 
+ int snd_soc_component_disable_pin(struct snd_soc_component *component,
+ 				  const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_disable_pin);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_disable_pin(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_disable_pin);
+ 
+ int snd_soc_component_disable_pin_unlocked(struct snd_soc_component *component,
+ 					   const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_disable_pin_unlocked);
++	struct snd_soc_dapm_context *dapm = 
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_disable_pin_unlocked(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_disable_pin_unlocked);
+ 
+ int snd_soc_component_nc_pin(struct snd_soc_component *component,
+ 			     const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_nc_pin);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_nc_pin(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_nc_pin);
+ 
+ int snd_soc_component_nc_pin_unlocked(struct snd_soc_component *component,
+ 				      const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_nc_pin_unlocked);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_nc_pin_unlocked(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_nc_pin_unlocked);
+ 
+ int snd_soc_component_get_pin_status(struct snd_soc_component *component,
+ 				     const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_get_pin_status);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_get_pin_status(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_get_pin_status);
+ 
+ int snd_soc_component_force_enable_pin(struct snd_soc_component *component,
+ 				       const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_force_enable_pin);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_force_enable_pin(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_force_enable_pin);
+ 
+@@ -222,7 +211,9 @@ int snd_soc_component_force_enable_pin_unlocked(
+ 	struct snd_soc_component *component,
+ 	const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_force_enable_pin_unlocked);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_force_enable_pin_unlocked(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_force_enable_pin_unlocked);
+ 
+diff --git a/tools/perf/arch/arm/include/perf_regs.h b/tools/perf/arch/arm/include/perf_regs.h
+index ed20e0253e25d..4085419283d07 100644
+--- a/tools/perf/arch/arm/include/perf_regs.h
++++ b/tools/perf/arch/arm/include/perf_regs.h
+@@ -15,7 +15,7 @@ void perf_regs_load(u64 *regs);
+ #define PERF_REG_IP	PERF_REG_ARM_PC
+ #define PERF_REG_SP	PERF_REG_ARM_SP
+ 
+-static inline const char *perf_reg_name(int id)
++static inline const char *__perf_reg_name(int id)
+ {
+ 	switch (id) {
+ 	case PERF_REG_ARM_R0:
+diff --git a/tools/perf/arch/arm64/include/perf_regs.h b/tools/perf/arch/arm64/include/perf_regs.h
+index baaa5e64a3fb0..fa3e07459f768 100644
+--- a/tools/perf/arch/arm64/include/perf_regs.h
++++ b/tools/perf/arch/arm64/include/perf_regs.h
+@@ -15,7 +15,7 @@ void perf_regs_load(u64 *regs);
+ #define PERF_REG_IP	PERF_REG_ARM64_PC
+ #define PERF_REG_SP	PERF_REG_ARM64_SP
+ 
+-static inline const char *perf_reg_name(int id)
++static inline const char *__perf_reg_name(int id)
+ {
+ 	switch (id) {
+ 	case PERF_REG_ARM64_X0:
+diff --git a/tools/perf/arch/csky/include/perf_regs.h b/tools/perf/arch/csky/include/perf_regs.h
+index 8f336ea1161af..25ac3bdcb9d18 100644
+--- a/tools/perf/arch/csky/include/perf_regs.h
++++ b/tools/perf/arch/csky/include/perf_regs.h
+@@ -15,7 +15,7 @@
+ #define PERF_REG_IP	PERF_REG_CSKY_PC
+ #define PERF_REG_SP	PERF_REG_CSKY_SP
+ 
+-static inline const char *perf_reg_name(int id)
++static inline const char *__perf_reg_name(int id)
+ {
+ 	switch (id) {
+ 	case PERF_REG_CSKY_A0:
+diff --git a/tools/perf/arch/powerpc/include/perf_regs.h b/tools/perf/arch/powerpc/include/perf_regs.h
+index 63f3ac91049f7..004bed2286931 100644
+--- a/tools/perf/arch/powerpc/include/perf_regs.h
++++ b/tools/perf/arch/powerpc/include/perf_regs.h
+@@ -73,7 +73,7 @@ static const char *reg_names[] = {
+ 	[PERF_REG_POWERPC_SIER3] = "sier3",
+ };
+ 
+-static inline const char *perf_reg_name(int id)
++static inline const char *__perf_reg_name(int id)
+ {
+ 	return reg_names[id];
+ }
+diff --git a/tools/perf/arch/riscv/include/perf_regs.h b/tools/perf/arch/riscv/include/perf_regs.h
+index 7a8bcde7a2b15..6b02a767c918f 100644
+--- a/tools/perf/arch/riscv/include/perf_regs.h
++++ b/tools/perf/arch/riscv/include/perf_regs.h
+@@ -19,7 +19,7 @@
+ #define PERF_REG_IP	PERF_REG_RISCV_PC
+ #define PERF_REG_SP	PERF_REG_RISCV_SP
+ 
+-static inline const char *perf_reg_name(int id)
++static inline const char *__perf_reg_name(int id)
+ {
+ 	switch (id) {
+ 	case PERF_REG_RISCV_PC:
+diff --git a/tools/perf/arch/s390/include/perf_regs.h b/tools/perf/arch/s390/include/perf_regs.h
+index bcfbaed78cc25..ce30315266239 100644
+--- a/tools/perf/arch/s390/include/perf_regs.h
++++ b/tools/perf/arch/s390/include/perf_regs.h
+@@ -14,7 +14,7 @@ void perf_regs_load(u64 *regs);
+ #define PERF_REG_IP PERF_REG_S390_PC
+ #define PERF_REG_SP PERF_REG_S390_R15
+ 
+-static inline const char *perf_reg_name(int id)
++static inline const char *__perf_reg_name(int id)
+ {
+ 	switch (id) {
+ 	case PERF_REG_S390_R0:
+diff --git a/tools/perf/arch/x86/include/perf_regs.h b/tools/perf/arch/x86/include/perf_regs.h
+index b7321337d1003..cddc4cdc0d9b5 100644
+--- a/tools/perf/arch/x86/include/perf_regs.h
++++ b/tools/perf/arch/x86/include/perf_regs.h
+@@ -23,7 +23,7 @@ void perf_regs_load(u64 *regs);
+ #define PERF_REG_IP PERF_REG_X86_IP
+ #define PERF_REG_SP PERF_REG_X86_SP
+ 
+-static inline const char *perf_reg_name(int id)
++static inline const char *__perf_reg_name(int id)
+ {
+ 	switch (id) {
+ 	case PERF_REG_X86_AX:
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index 6c8575e182ed1..3081894547883 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -317,12 +317,18 @@ bool ins__is_call(const struct ins *ins)
+ /*
+  * Prevents from matching commas in the comment section, e.g.:
+  * ffff200008446e70:       b.cs    ffff2000084470f4 <generic_exec_single+0x314>  // b.hs, b.nlast
++ *
++ * and skip comma as part of function arguments, e.g.:
++ * 1d8b4ac <linemap_lookup(line_maps const*, unsigned int)+0xcc>
+  */
+ static inline const char *validate_comma(const char *c, struct ins_operands *ops)
+ {
+ 	if (ops->raw_comment && c > ops->raw_comment)
+ 		return NULL;
+ 
++	if (ops->raw_func_start && c > ops->raw_func_start)
++		return NULL;
++
+ 	return c;
+ }
+ 
+@@ -337,6 +343,8 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
+ 	u64 start, end;
+ 
+ 	ops->raw_comment = strchr(ops->raw, arch->objdump.comment_char);
++	ops->raw_func_start = strchr(ops->raw, '<');
++
+ 	c = validate_comma(c, ops);
+ 
+ 	/*
+diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
+index 0a0cd4f32175e..096cdaf21b01f 100644
+--- a/tools/perf/util/annotate.h
++++ b/tools/perf/util/annotate.h
+@@ -32,6 +32,7 @@ struct ins {
+ struct ins_operands {
+ 	char	*raw;
+ 	char	*raw_comment;
++	char	*raw_func_start;
+ 	struct {
+ 		char	*raw;
+ 		char	*name;
+diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
+index 03bc843b1cf87..f0dceb527ca38 100644
+--- a/tools/perf/util/env.c
++++ b/tools/perf/util/env.c
+@@ -142,6 +142,7 @@ static void perf_env__purge_bpf(struct perf_env *env)
+ 		node = rb_entry(next, struct bpf_prog_info_node, rb_node);
+ 		next = rb_next(&node->rb_node);
+ 		rb_erase(&node->rb_node, root);
++		free(node->info_linear);
+ 		free(node);
+ 	}
+ 
+diff --git a/tools/perf/util/perf_regs.h b/tools/perf/util/perf_regs.h
+index a454991261844..eeac181ebccf5 100644
+--- a/tools/perf/util/perf_regs.h
++++ b/tools/perf/util/perf_regs.h
+@@ -33,6 +33,13 @@ extern const struct sample_reg sample_reg_masks[];
+ 
+ int perf_reg_value(u64 *valp, struct regs_dump *regs, int id);
+ 
++static inline const char *perf_reg_name(int id)
++{
++	const char *reg_name = __perf_reg_name(id);
++
++	return reg_name ?: "unknown";
++}
++
+ #else
+ #define PERF_REGS_MASK	0
+ #define PERF_REGS_MAX	0
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 44dd86a4f25f2..7356eb398b32a 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -2360,6 +2360,7 @@ int cleanup_sdt_note_list(struct list_head *sdt_notes)
+ 
+ 	list_for_each_entry_safe(pos, tmp, sdt_notes, note_list) {
+ 		list_del_init(&pos->note_list);
++		zfree(&pos->args);
+ 		zfree(&pos->name);
+ 		zfree(&pos->provider);
+ 		free(pos);
+diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c
+index 3cc91ad048ea8..43beb169631d3 100644
+--- a/tools/perf/util/vdso.c
++++ b/tools/perf/util/vdso.c
+@@ -133,6 +133,8 @@ static struct dso *__machine__addnew_vdso(struct machine *machine, const char *s
+ 	if (dso != NULL) {
+ 		__dsos__add(&machine->dsos, dso);
+ 		dso__set_long_name(dso, long_name, false);
++		/* Put dso here because __dsos_add already got it */
++		dso__put(dso);
+ 	}
+ 
+ 	return dso;
+diff --git a/tools/virtio/Makefile b/tools/virtio/Makefile
+index b587b9a7a124b..0d7bbe49359d8 100644
+--- a/tools/virtio/Makefile
++++ b/tools/virtio/Makefile
+@@ -4,7 +4,8 @@ test: virtio_test vringh_test
+ virtio_test: virtio_ring.o virtio_test.o
+ vringh_test: vringh_test.o vringh.o virtio_ring.o
+ 
+-CFLAGS += -g -O2 -Werror -Wall -I. -I../include/ -I ../../usr/include/ -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -include ../../include/linux/kconfig.h
++CFLAGS += -g -O2 -Werror -Wno-maybe-uninitialized -Wall -I. -I../include/ -I ../../usr/include/ -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -include ../../include/linux/kconfig.h
++LDFLAGS += -lpthread
+ vpath %.c ../../drivers/virtio ../../drivers/vhost
+ mod:
+ 	${MAKE} -C `pwd`/../.. M=`pwd`/vhost_test V=${V}
+diff --git a/tools/virtio/linux/spinlock.h b/tools/virtio/linux/spinlock.h
+new file mode 100644
+index 0000000000000..028e3cdcc5d30
+--- /dev/null
++++ b/tools/virtio/linux/spinlock.h
+@@ -0,0 +1,56 @@
++#ifndef SPINLOCK_H_STUB
++#define SPINLOCK_H_STUB
++
++#include <pthread.h>
++
++typedef pthread_spinlock_t  spinlock_t;
++
++static inline void spin_lock_init(spinlock_t *lock)
++{
++	int r = pthread_spin_init(lock, 0);
++	assert(!r);
++}
++
++static inline void spin_lock(spinlock_t *lock)
++{
++	int ret = pthread_spin_lock(lock);
++	assert(!ret);
++}
++
++static inline void spin_unlock(spinlock_t *lock)
++{
++	int ret = pthread_spin_unlock(lock);
++	assert(!ret);
++}
++
++static inline void spin_lock_bh(spinlock_t *lock)
++{
++	spin_lock(lock);
++}
++
++static inline void spin_unlock_bh(spinlock_t *lock)
++{
++	spin_unlock(lock);
++}
++
++static inline void spin_lock_irq(spinlock_t *lock)
++{
++	spin_lock(lock);
++}
++
++static inline void spin_unlock_irq(spinlock_t *lock)
++{
++	spin_unlock(lock);
++}
++
++static inline void spin_lock_irqsave(spinlock_t *lock, unsigned long f)
++{
++	spin_lock(lock);
++}
++
++static inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long f)
++{
++	spin_unlock(lock);
++}
++
++#endif
+diff --git a/tools/virtio/linux/virtio.h b/tools/virtio/linux/virtio.h
+index 5d90254ddae47..363b982283011 100644
+--- a/tools/virtio/linux/virtio.h
++++ b/tools/virtio/linux/virtio.h
+@@ -3,6 +3,7 @@
+ #define LINUX_VIRTIO_H
+ #include <linux/scatterlist.h>
+ #include <linux/kernel.h>
++#include <linux/spinlock.h>
+ 
+ struct device {
+ 	void *parent;
+@@ -12,6 +13,7 @@ struct virtio_device {
+ 	struct device dev;
+ 	u64 features;
+ 	struct list_head vqs;
++	spinlock_t vqs_list_lock;
+ };
+ 
+ struct virtqueue {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-03 11:47 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-09-03 11:47 UTC (permalink / raw
  To: gentoo-commits

commit:     8319e0b2010888b6931a923355830fcea33ff278
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep  3 11:46:52 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep  3 11:46:52 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8319e0b2

Remove redundant patch

Removed: 2700_Bluetooth-usb-alt-3-for-WBS.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                            |  4 --
 2700_Bluetooth-usb-alt-3-for-WBS.patch | 84 ----------------------------------
 2 files changed, 88 deletions(-)

diff --git a/0000_README b/0000_README
index eb58622..eaceece 100644
--- a/0000_README
+++ b/0000_README
@@ -303,10 +303,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2700_Bluetooth-usb-alt-3-for-WBS.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git/commit/?id=55981d3541812234e687062926ff199c83f79a39
-Desc:   Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2700_Bluetooth-usb-alt-3-for-WBS.patch b/2700_Bluetooth-usb-alt-3-for-WBS.patch
deleted file mode 100644
index e0a67ea..0000000
--- a/2700_Bluetooth-usb-alt-3-for-WBS.patch
+++ /dev/null
@@ -1,84 +0,0 @@
-From 55981d3541812234e687062926ff199c83f79a39 Mon Sep 17 00:00:00 2001
-From: Pauli Virtanen <pav@iki.fi>
-Date: Mon, 26 Jul 2021 21:02:06 +0300
-Subject: Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-Some USB BT adapters don't satisfy the MTU requirement mentioned in
-commit e848dbd364ac ("Bluetooth: btusb: Add support USB ALT 3 for WBS")
-and have ALT 3 setting that produces no/garbled audio. Some adapters
-with larger MTU were also reported to have problems with ALT 3.
-
-Add a flag and check it and MTU before selecting ALT 3, falling back to
-ALT 1. Enable the flag for Realtek, restoring the previous behavior for
-non-Realtek devices.
-
-Tested with USB adapters (mtu<72, no/garbled sound with ALT3, ALT1
-works) BCM20702A1 0b05:17cb, CSR8510A10 0a12:0001, and (mtu>=72, ALT3
-works) RTL8761BU 0bda:8771, Intel AX200 8087:0029 (after disabling
-ALT6). Also got reports for (mtu>=72, ALT 3 reported to produce bad
-audio) Intel 8087:0a2b.
-
-Signed-off-by: Pauli Virtanen <pav@iki.fi>
-Fixes: e848dbd364ac ("Bluetooth: btusb: Add support USB ALT 3 for WBS")
-Tested-by: Michał Kępień <kernel@kempniu.pl>
-Tested-by: Jonathan Lampérth <jon@h4n.dev>
-Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
----
- drivers/bluetooth/btusb.c | 22 ++++++++++++++--------
- 1 file changed, 14 insertions(+), 8 deletions(-)
-
-diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
-index 488f110e17e27..2336f731dbc7e 100644
---- a/drivers/bluetooth/btusb.c
-+++ b/drivers/bluetooth/btusb.c
-@@ -528,6 +528,7 @@ static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
- #define BTUSB_HW_RESET_ACTIVE	12
- #define BTUSB_TX_WAIT_VND_EVT	13
- #define BTUSB_WAKEUP_DISABLE	14
-+#define BTUSB_USE_ALT3_FOR_WBS	15
- 
- struct btusb_data {
- 	struct hci_dev       *hdev;
-@@ -1761,16 +1762,20 @@ static void btusb_work(struct work_struct *work)
- 			/* Bluetooth USB spec recommends alt 6 (63 bytes), but
- 			 * many adapters do not support it.  Alt 1 appears to
- 			 * work for all adapters that do not have alt 6, and
--			 * which work with WBS at all.
-+			 * which work with WBS at all.  Some devices prefer
-+			 * alt 3 (HCI payload >= 60 Bytes let air packet
-+			 * data satisfy 60 bytes), requiring
-+			 * MTU >= 3 (packets) * 25 (size) - 3 (headers) = 72
-+			 * see also Core spec 5, vol 4, B 2.1.1 & Table 2.1.
- 			 */
--			new_alts = btusb_find_altsetting(data, 6) ? 6 : 1;
--			/* Because mSBC frames do not need to be aligned to the
--			 * SCO packet boundary. If support the Alt 3, use the
--			 * Alt 3 for HCI payload >= 60 Bytes let air packet
--			 * data satisfy 60 bytes.
--			 */
--			if (new_alts == 1 && btusb_find_altsetting(data, 3))
-+			if (btusb_find_altsetting(data, 6))
-+				new_alts = 6;
-+			else if (btusb_find_altsetting(data, 3) &&
-+				 hdev->sco_mtu >= 72 &&
-+				 test_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags))
- 				new_alts = 3;
-+			else
-+				new_alts = 1;
- 		}
- 
- 		if (btusb_switch_alt_setting(hdev, new_alts) < 0)
-@@ -3882,6 +3887,7 @@ static int btusb_probe(struct usb_interface *intf,
- 		 * (DEVICE_REMOTE_WAKEUP)
- 		 */
- 		set_bit(BTUSB_WAKEUP_DISABLE, &data->flags);
-+		set_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags);
- 	}
- 
- 	if (!reset)
--- 
-cgit 1.2.3-1.el7
-


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-08 13:00 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-09-08 13:00 UTC (permalink / raw
  To: gentoo-commits

commit:     0486329cfbf6f62348f4d49248593c730e723131
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Sep  8 13:00:20 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Sep  8 13:00:37 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0486329c

Linux patch 5.10.63

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1062_linux-5.10.63.patch | 1105 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1109 insertions(+)

diff --git a/0000_README b/0000_README
index eaceece..0e463ca 100644
--- a/0000_README
+++ b/0000_README
@@ -291,6 +291,10 @@ Patch:  1061_linux-5.10.62.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.62
 
+Patch:  1062_linux-5.10.63.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.63
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1062_linux-5.10.63.patch b/1062_linux-5.10.63.patch
new file mode 100644
index 0000000..55177d3
--- /dev/null
+++ b/1062_linux-5.10.63.patch
@@ -0,0 +1,1105 @@
+diff --git a/Makefile b/Makefile
+index 90c0cb3e4d3c2..b2d326f4dea68 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 62
++SUBLEVEL = 63
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/mach-omap1/board-ams-delta.c b/arch/arm/mach-omap1/board-ams-delta.c
+index 2ee527c002840..1026a816dcc02 100644
+--- a/arch/arm/mach-omap1/board-ams-delta.c
++++ b/arch/arm/mach-omap1/board-ams-delta.c
+@@ -458,20 +458,6 @@ static struct gpiod_lookup_table leds_gpio_table = {
+ 
+ #ifdef CONFIG_LEDS_TRIGGERS
+ DEFINE_LED_TRIGGER(ams_delta_camera_led_trigger);
+-
+-static int ams_delta_camera_power(struct device *dev, int power)
+-{
+-	/*
+-	 * turn on camera LED
+-	 */
+-	if (power)
+-		led_trigger_event(ams_delta_camera_led_trigger, LED_FULL);
+-	else
+-		led_trigger_event(ams_delta_camera_led_trigger, LED_OFF);
+-	return 0;
+-}
+-#else
+-#define ams_delta_camera_power	NULL
+ #endif
+ 
+ static struct platform_device ams_delta_audio_device = {
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index 40669eac9d6db..921f47b9bb247 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -90,6 +90,7 @@ struct perf_ibs {
+ 	unsigned long			offset_mask[1];
+ 	int				offset_max;
+ 	unsigned int			fetch_count_reset_broken : 1;
++	unsigned int			fetch_ignore_if_zero_rip : 1;
+ 	struct cpu_perf_ibs __percpu	*pcpu;
+ 
+ 	struct attribute		**format_attrs;
+@@ -672,6 +673,10 @@ fail:
+ 	if (check_rip && (ibs_data.regs[2] & IBS_RIP_INVALID)) {
+ 		regs.flags &= ~PERF_EFLAGS_EXACT;
+ 	} else {
++		/* Workaround for erratum #1197 */
++		if (perf_ibs->fetch_ignore_if_zero_rip && !(ibs_data.regs[1]))
++			goto out;
++
+ 		set_linear_ip(&regs, ibs_data.regs[1]);
+ 		regs.flags |= PERF_EFLAGS_EXACT;
+ 	}
+@@ -769,6 +774,9 @@ static __init void perf_event_ibs_init(void)
+ 	if (boot_cpu_data.x86 >= 0x16 && boot_cpu_data.x86 <= 0x18)
+ 		perf_ibs_fetch.fetch_count_reset_broken = 1;
+ 
++	if (boot_cpu_data.x86 == 0x19 && boot_cpu_data.x86_model < 0x10)
++		perf_ibs_fetch.fetch_ignore_if_zero_rip = 1;
++
+ 	perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
+ 
+ 	if (ibs_caps & IBS_CAPS_OPCNT) {
+diff --git a/arch/x86/events/amd/power.c b/arch/x86/events/amd/power.c
+index 16a2369c586e8..37d5b380516ec 100644
+--- a/arch/x86/events/amd/power.c
++++ b/arch/x86/events/amd/power.c
+@@ -213,6 +213,7 @@ static struct pmu pmu_class = {
+ 	.stop		= pmu_event_stop,
+ 	.read		= pmu_event_read,
+ 	.capabilities	= PERF_PMU_CAP_NO_EXCLUDE,
++	.module		= THIS_MODULE,
+ };
+ 
+ static int power_cpu_exit(unsigned int cpu)
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index e94af4a54d0d8..37129b76135a1 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -62,7 +62,7 @@ static struct pt_cap_desc {
+ 	PT_CAP(single_range_output,	0, CPUID_ECX, BIT(2)),
+ 	PT_CAP(output_subsys,		0, CPUID_ECX, BIT(3)),
+ 	PT_CAP(payloads_lip,		0, CPUID_ECX, BIT(31)),
+-	PT_CAP(num_address_ranges,	1, CPUID_EAX, 0x3),
++	PT_CAP(num_address_ranges,	1, CPUID_EAX, 0x7),
+ 	PT_CAP(mtc_periods,		1, CPUID_EAX, 0xffff0000),
+ 	PT_CAP(cycle_thresholds,	1, CPUID_EBX, 0xffff),
+ 	PT_CAP(psb_periods,		1, CPUID_EBX, 0xffff0000),
+diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
+index d0dfa50bd0bb4..87e08ad38ea71 100644
+--- a/arch/xtensa/Kconfig
++++ b/arch/xtensa/Kconfig
+@@ -30,7 +30,7 @@ config XTENSA
+ 	select HAVE_DMA_CONTIGUOUS
+ 	select HAVE_EXIT_THREAD
+ 	select HAVE_FUNCTION_TRACER
+-	select HAVE_FUTEX_CMPXCHG if !MMU
++	select HAVE_FUTEX_CMPXCHG if !MMU && FUTEX
+ 	select HAVE_HW_BREAKPOINT if PERF_EVENTS
+ 	select HAVE_IRQ_TIME_ACCOUNTING
+ 	select HAVE_OPROFILE
+diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
+index f40ebe9f50474..f2548049aa0e9 100644
+--- a/drivers/block/Kconfig
++++ b/drivers/block/Kconfig
+@@ -230,7 +230,7 @@ config BLK_DEV_LOOP_MIN_COUNT
+ 	  dynamically allocated with the /dev/loop-control interface.
+ 
+ config BLK_DEV_CRYPTOLOOP
+-	tristate "Cryptoloop Support"
++	tristate "Cryptoloop Support (DEPRECATED)"
+ 	select CRYPTO
+ 	select CRYPTO_CBC
+ 	depends on BLK_DEV_LOOP
+@@ -242,7 +242,7 @@ config BLK_DEV_CRYPTOLOOP
+ 	  WARNING: This device is not safe for journaled file systems like
+ 	  ext3 or Reiserfs. Please use the Device Mapper crypto module
+ 	  instead, which can be configured to be on-disk compatible with the
+-	  cryptoloop device.
++	  cryptoloop device.  cryptoloop support will be removed in Linux 5.16.
+ 
+ source "drivers/block/drbd/Kconfig"
+ 
+diff --git a/drivers/block/cryptoloop.c b/drivers/block/cryptoloop.c
+index 3cabc335ae744..f0a91faa43a89 100644
+--- a/drivers/block/cryptoloop.c
++++ b/drivers/block/cryptoloop.c
+@@ -189,6 +189,8 @@ init_cryptoloop(void)
+ 
+ 	if (rc)
+ 		printk(KERN_ERR "cryptoloop: loop_register_transfer failed\n");
++	else
++		pr_warn("the cryptoloop driver has been deprecated and will be removed in in Linux 5.16\n");
+ 	return rc;
+ }
+ 
+diff --git a/drivers/gpu/ipu-v3/ipu-cpmem.c b/drivers/gpu/ipu-v3/ipu-cpmem.c
+index a1c85d1521f5c..82b244cb313e6 100644
+--- a/drivers/gpu/ipu-v3/ipu-cpmem.c
++++ b/drivers/gpu/ipu-v3/ipu-cpmem.c
+@@ -585,21 +585,21 @@ static const struct ipu_rgb def_bgra_16 = {
+ 	.bits_per_pixel = 16,
+ };
+ 
+-#define Y_OFFSET(pix, x, y)	((x) + pix->width * (y))
+-#define U_OFFSET(pix, x, y)	((pix->width * pix->height) +		\
+-				 (pix->width * ((y) / 2) / 2) + (x) / 2)
+-#define V_OFFSET(pix, x, y)	((pix->width * pix->height) +		\
+-				 (pix->width * pix->height / 4) +	\
+-				 (pix->width * ((y) / 2) / 2) + (x) / 2)
+-#define U2_OFFSET(pix, x, y)	((pix->width * pix->height) +		\
+-				 (pix->width * (y) / 2) + (x) / 2)
+-#define V2_OFFSET(pix, x, y)	((pix->width * pix->height) +		\
+-				 (pix->width * pix->height / 2) +	\
+-				 (pix->width * (y) / 2) + (x) / 2)
+-#define UV_OFFSET(pix, x, y)	((pix->width * pix->height) +	\
+-				 (pix->width * ((y) / 2)) + (x))
+-#define UV2_OFFSET(pix, x, y)	((pix->width * pix->height) +	\
+-				 (pix->width * y) + (x))
++#define Y_OFFSET(pix, x, y)	((x) + pix->bytesperline * (y))
++#define U_OFFSET(pix, x, y)	((pix->bytesperline * pix->height) +	 \
++				 (pix->bytesperline * ((y) / 2) / 2) + (x) / 2)
++#define V_OFFSET(pix, x, y)	((pix->bytesperline * pix->height) +	 \
++				 (pix->bytesperline * pix->height / 4) + \
++				 (pix->bytesperline * ((y) / 2) / 2) + (x) / 2)
++#define U2_OFFSET(pix, x, y)	((pix->bytesperline * pix->height) +	 \
++				 (pix->bytesperline * (y) / 2) + (x) / 2)
++#define V2_OFFSET(pix, x, y)	((pix->bytesperline * pix->height) +	 \
++				 (pix->bytesperline * pix->height / 2) + \
++				 (pix->bytesperline * (y) / 2) + (x) / 2)
++#define UV_OFFSET(pix, x, y)	((pix->bytesperline * pix->height) +	 \
++				 (pix->bytesperline * ((y) / 2)) + (x))
++#define UV2_OFFSET(pix, x, y)	((pix->bytesperline * pix->height) +	 \
++				 (pix->bytesperline * y) + (x))
+ 
+ #define NUM_ALPHA_CHANNELS	7
+ 
+diff --git a/drivers/media/usb/stkwebcam/stk-webcam.c b/drivers/media/usb/stkwebcam/stk-webcam.c
+index a45d464427c4c..0e231e576dc3d 100644
+--- a/drivers/media/usb/stkwebcam/stk-webcam.c
++++ b/drivers/media/usb/stkwebcam/stk-webcam.c
+@@ -1346,7 +1346,7 @@ static int stk_camera_probe(struct usb_interface *interface,
+ 	if (!dev->isoc_ep) {
+ 		pr_err("Could not find isoc-in endpoint\n");
+ 		err = -ENODEV;
+-		goto error;
++		goto error_put;
+ 	}
+ 	dev->vsettings.palette = V4L2_PIX_FMT_RGB565;
+ 	dev->vsettings.mode = MODE_VGA;
+@@ -1359,10 +1359,12 @@ static int stk_camera_probe(struct usb_interface *interface,
+ 
+ 	err = stk_register_video_device(dev);
+ 	if (err)
+-		goto error;
++		goto error_put;
+ 
+ 	return 0;
+ 
++error_put:
++	usb_put_intf(interface);
+ error:
+ 	v4l2_ctrl_handler_free(hdl);
+ 	v4l2_device_unregister(&dev->v4l2_dev);
+diff --git a/drivers/net/ethernet/cadence/macb_ptp.c b/drivers/net/ethernet/cadence/macb_ptp.c
+index 283918aeb741d..09d64a29f56e3 100644
+--- a/drivers/net/ethernet/cadence/macb_ptp.c
++++ b/drivers/net/ethernet/cadence/macb_ptp.c
+@@ -275,6 +275,12 @@ void gem_ptp_rxstamp(struct macb *bp, struct sk_buff *skb,
+ 
+ 	if (GEM_BFEXT(DMA_RXVALID, desc->addr)) {
+ 		desc_ptp = macb_ptp_desc(bp, desc);
++		/* Unlikely but check */
++		if (!desc_ptp) {
++			dev_warn_ratelimited(&bp->pdev->dev,
++					     "Timestamp not supported in BD\n");
++			return;
++		}
+ 		gem_hw_timestamp(bp, desc_ptp->ts_1, desc_ptp->ts_2, &ts);
+ 		memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps));
+ 		shhwtstamps->hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec);
+@@ -307,8 +313,11 @@ int gem_ptp_txstamp(struct macb_queue *queue, struct sk_buff *skb,
+ 	if (CIRC_SPACE(head, tail, PTP_TS_BUFFER_SIZE) == 0)
+ 		return -ENOMEM;
+ 
+-	skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ 	desc_ptp = macb_ptp_desc(queue->bp, desc);
++	/* Unlikely but check */
++	if (!desc_ptp)
++		return -EINVAL;
++	skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ 	tx_timestamp = &queue->tx_timestamps[head];
+ 	tx_timestamp->skb = skb;
+ 	/* ensure ts_1/ts_2 is loaded after ctrl (TX_USED check) */
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
+index 5bd58c65e1631..6bb9ec98a12b5 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
+@@ -616,7 +616,12 @@ static int qed_enable_msix(struct qed_dev *cdev,
+ 			rc = cnt;
+ 	}
+ 
+-	if (rc > 0) {
++	/* For VFs, we should return with an error in case we didn't get the
++	 * exact number of msix vectors as we requested.
++	 * Not doing that will lead to a crash when starting queues for
++	 * this VF.
++	 */
++	if ((IS_PF(cdev) && rc > 0) || (IS_VF(cdev) && rc == cnt)) {
+ 		/* MSI-x configuration was achieved */
+ 		int_params->out.int_mode = QED_INT_MODE_MSIX;
+ 		int_params->out.num_vectors = rc;
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index d9a3c811ac8b1..e93f06e4a1729 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -1869,6 +1869,7 @@ static void qede_sync_free_irqs(struct qede_dev *edev)
+ 	}
+ 
+ 	edev->int_info.used_cnt = 0;
++	edev->int_info.msix_cnt = 0;
+ }
+ 
+ static int qede_req_msix_irqs(struct qede_dev *edev)
+@@ -2409,7 +2410,6 @@ static int qede_load(struct qede_dev *edev, enum qede_load_mode mode,
+ 	goto out;
+ err4:
+ 	qede_sync_free_irqs(edev);
+-	memset(&edev->int_info.msix_cnt, 0, sizeof(struct qed_int_info));
+ err3:
+ 	qede_napi_disable_remove(edev);
+ err2:
+diff --git a/drivers/reset/reset-zynqmp.c b/drivers/reset/reset-zynqmp.c
+index ebd433fa09dd7..8c51768e9a720 100644
+--- a/drivers/reset/reset-zynqmp.c
++++ b/drivers/reset/reset-zynqmp.c
+@@ -53,7 +53,8 @@ static int zynqmp_reset_status(struct reset_controller_dev *rcdev,
+ 			       unsigned long id)
+ {
+ 	struct zynqmp_reset_data *priv = to_zynqmp_reset_data(rcdev);
+-	int val, err;
++	int err;
++	u32 val;
+ 
+ 	err = zynqmp_pm_reset_get_status(priv->data->reset_id + id, &val);
+ 	if (err)
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 95e2d6de4f213..ad0549dac7d79 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -1211,6 +1211,7 @@ static int omap8250_no_handle_irq(struct uart_port *port)
+ static const struct soc_device_attribute k3_soc_devices[] = {
+ 	{ .family = "AM65X",  },
+ 	{ .family = "J721E", .revision = "SR1.0" },
++	{ /* sentinel */ }
+ };
+ 
+ static struct omap8250_dma_params am654_dma = {
+diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
+index ae0c38ad1fcbe..0791480bf922b 100644
+--- a/fs/9p/vfs_inode.c
++++ b/fs/9p/vfs_inode.c
+@@ -398,7 +398,7 @@ static int v9fs_test_inode(struct inode *inode, void *data)
+ 
+ 	umode = p9mode2unixmode(v9ses, st, &rdev);
+ 	/* don't match inode of different type */
+-	if ((inode->i_mode & S_IFMT) != (umode & S_IFMT))
++	if (inode_wrong_type(inode, umode))
+ 		return 0;
+ 
+ 	/* compare qid details */
+@@ -1360,7 +1360,7 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode)
+ 	 * Don't update inode if the file type is different
+ 	 */
+ 	umode = p9mode2unixmode(v9ses, st, &rdev);
+-	if ((inode->i_mode & S_IFMT) != (umode & S_IFMT))
++	if (inode_wrong_type(inode, umode))
+ 		goto out;
+ 
+ 	/*
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index 0028eccb665a6..72b67d810b8c2 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -59,7 +59,7 @@ static int v9fs_test_inode_dotl(struct inode *inode, void *data)
+ 	struct p9_stat_dotl *st = (struct p9_stat_dotl *)data;
+ 
+ 	/* don't match inode of different type */
+-	if ((inode->i_mode & S_IFMT) != (st->st_mode & S_IFMT))
++	if (inode_wrong_type(inode, st->st_mode))
+ 		return 0;
+ 
+ 	if (inode->i_generation != st->st_gen)
+@@ -933,7 +933,7 @@ int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode)
+ 	/*
+ 	 * Don't update inode if the file type is different
+ 	 */
+-	if ((inode->i_mode & S_IFMT) != (st->st_mode & S_IFMT))
++	if (inode_wrong_type(inode, st->st_mode))
+ 		goto out;
+ 
+ 	/*
+diff --git a/fs/ceph/mdsmap.c b/fs/ceph/mdsmap.c
+index 1096d1d3a84c4..47f2903bacb92 100644
+--- a/fs/ceph/mdsmap.c
++++ b/fs/ceph/mdsmap.c
+@@ -393,9 +393,11 @@ void ceph_mdsmap_destroy(struct ceph_mdsmap *m)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < m->possible_max_rank; i++)
+-		kfree(m->m_info[i].export_targets);
+-	kfree(m->m_info);
++	if (m->m_info) {
++		for (i = 0; i < m->possible_max_rank; i++)
++			kfree(m->m_info[i].export_targets);
++		kfree(m->m_info);
++	}
+ 	kfree(m->m_data_pg_pools);
+ 	kfree(m);
+ }
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index b1f0c05d6eaf8..b11a919b9cab0 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -425,8 +425,7 @@ int cifs_get_inode_info_unix(struct inode **pinode,
+ 		}
+ 
+ 		/* if filetype is different, return error */
+-		if (unlikely(((*pinode)->i_mode & S_IFMT) !=
+-		    (fattr.cf_mode & S_IFMT))) {
++		if (unlikely(inode_wrong_type(*pinode, fattr.cf_mode))) {
+ 			CIFS_I(*pinode)->time = 0; /* force reval */
+ 			rc = -ESTALE;
+ 			goto cgiiu_exit;
+@@ -1243,7 +1242,7 @@ cifs_find_inode(struct inode *inode, void *opaque)
+ 		return 0;
+ 
+ 	/* don't match inode of different type */
+-	if ((inode->i_mode & S_IFMT) != (fattr->cf_mode & S_IFMT))
++	if (inode_wrong_type(inode, fattr->cf_mode))
+ 		return 0;
+ 
+ 	/* if it's not a directory or has no dentries, then flag it */
+diff --git a/fs/crypto/hooks.c b/fs/crypto/hooks.c
+index 061418be4b086..4180371bf8642 100644
+--- a/fs/crypto/hooks.c
++++ b/fs/crypto/hooks.c
+@@ -379,3 +379,47 @@ err_kfree:
+ 	return ERR_PTR(err);
+ }
+ EXPORT_SYMBOL_GPL(fscrypt_get_symlink);
++
++/**
++ * fscrypt_symlink_getattr() - set the correct st_size for encrypted symlinks
++ * @path: the path for the encrypted symlink being queried
++ * @stat: the struct being filled with the symlink's attributes
++ *
++ * Override st_size of encrypted symlinks to be the length of the decrypted
++ * symlink target (or the no-key encoded symlink target, if the key is
++ * unavailable) rather than the length of the encrypted symlink target.  This is
++ * necessary for st_size to match the symlink target that userspace actually
++ * sees.  POSIX requires this, and some userspace programs depend on it.
++ *
++ * This requires reading the symlink target from disk if needed, setting up the
++ * inode's encryption key if possible, and then decrypting or encoding the
++ * symlink target.  This makes lstat() more heavyweight than is normally the
++ * case.  However, decrypted symlink targets will be cached in ->i_link, so
++ * usually the symlink won't have to be read and decrypted again later if/when
++ * it is actually followed, readlink() is called, or lstat() is called again.
++ *
++ * Return: 0 on success, -errno on failure
++ */
++int fscrypt_symlink_getattr(const struct path *path, struct kstat *stat)
++{
++	struct dentry *dentry = path->dentry;
++	struct inode *inode = d_inode(dentry);
++	const char *link;
++	DEFINE_DELAYED_CALL(done);
++
++	/*
++	 * To get the symlink target that userspace will see (whether it's the
++	 * decrypted target or the no-key encoded target), we can just get it in
++	 * the same way the VFS does during path resolution and readlink().
++	 */
++	link = READ_ONCE(inode->i_link);
++	if (!link) {
++		link = inode->i_op->get_link(dentry, inode, &done);
++		if (IS_ERR(link))
++			return PTR_ERR(link);
++	}
++	stat->size = strlen(link);
++	do_delayed_call(&done);
++	return 0;
++}
++EXPORT_SYMBOL_GPL(fscrypt_symlink_getattr);
+diff --git a/fs/exec.c b/fs/exec.c
+index c7a4ef8df3058..ca89e0e3ef10f 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1347,10 +1347,6 @@ int begin_new_exec(struct linux_binprm * bprm)
+ 	WRITE_ONCE(me->self_exec_id, me->self_exec_id + 1);
+ 	flush_signal_handlers(me, 0);
+ 
+-	retval = set_cred_ucounts(bprm->cred);
+-	if (retval < 0)
+-		goto out_unlock;
+-
+ 	/*
+ 	 * install the new credentials for this executable
+ 	 */
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index b41512d1badc3..0f7b53d5edea6 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -750,6 +750,12 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ 	ext4_write_lock_xattr(inode, &no_expand);
+ 	BUG_ON(!ext4_has_inline_data(inode));
+ 
++	/*
++	 * ei->i_inline_off may have changed since ext4_write_begin()
++	 * called ext4_try_to_write_inline_data()
++	 */
++	(void) ext4_find_inline_data_nolock(inode);
++
+ 	kaddr = kmap_atomic(page);
+ 	ext4_write_inline_data(inode, &iloc, kaddr, pos, len);
+ 	kunmap_atomic(kaddr);
+diff --git a/fs/ext4/symlink.c b/fs/ext4/symlink.c
+index dd05af983092d..a9457fed351ed 100644
+--- a/fs/ext4/symlink.c
++++ b/fs/ext4/symlink.c
+@@ -52,10 +52,19 @@ static const char *ext4_encrypted_get_link(struct dentry *dentry,
+ 	return paddr;
+ }
+ 
++static int ext4_encrypted_symlink_getattr(const struct path *path,
++					  struct kstat *stat, u32 request_mask,
++					  unsigned int query_flags)
++{
++	ext4_getattr(path, stat, request_mask, query_flags);
++
++	return fscrypt_symlink_getattr(path, stat);
++}
++
+ const struct inode_operations ext4_encrypted_symlink_inode_operations = {
+ 	.get_link	= ext4_encrypted_get_link,
+ 	.setattr	= ext4_setattr,
+-	.getattr	= ext4_getattr,
++	.getattr	= ext4_encrypted_symlink_getattr,
+ 	.listxattr	= ext4_listxattr,
+ };
+ 
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 17d0e5f4efec8..710a6f73a6858 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -1307,9 +1307,18 @@ static const char *f2fs_encrypted_get_link(struct dentry *dentry,
+ 	return target;
+ }
+ 
++static int f2fs_encrypted_symlink_getattr(const struct path *path,
++					  struct kstat *stat, u32 request_mask,
++					  unsigned int query_flags)
++{
++	f2fs_getattr(path, stat, request_mask, query_flags);
++
++	return fscrypt_symlink_getattr(path, stat);
++}
++
+ const struct inode_operations f2fs_encrypted_symlink_inode_operations = {
+ 	.get_link	= f2fs_encrypted_get_link,
+-	.getattr	= f2fs_getattr,
++	.getattr	= f2fs_encrypted_symlink_getattr,
+ 	.setattr	= f2fs_setattr,
+ 	.listxattr	= f2fs_listxattr,
+ };
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 756bbdd563e08..2e300176cb889 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -252,7 +252,7 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags)
+ 		if (ret == -ENOMEM)
+ 			goto out;
+ 		if (ret || fuse_invalid_attr(&outarg.attr) ||
+-		    (outarg.attr.mode ^ inode->i_mode) & S_IFMT)
++		    fuse_stale_inode(inode, outarg.generation, &outarg.attr))
+ 			goto invalid;
+ 
+ 		forget_all_cached_acls(inode);
+@@ -1062,7 +1062,7 @@ static int fuse_do_getattr(struct inode *inode, struct kstat *stat,
+ 	err = fuse_simple_request(fm, &args);
+ 	if (!err) {
+ 		if (fuse_invalid_attr(&outarg.attr) ||
+-		    (inode->i_mode ^ outarg.attr.mode) & S_IFMT) {
++		    inode_wrong_type(inode, outarg.attr.mode)) {
+ 			fuse_make_bad(inode);
+ 			err = -EIO;
+ 		} else {
+@@ -1699,7 +1699,7 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr,
+ 	}
+ 
+ 	if (fuse_invalid_attr(&outarg.attr) ||
+-	    (inode->i_mode ^ outarg.attr.mode) & S_IFMT) {
++	    inode_wrong_type(inode, outarg.attr.mode)) {
+ 		fuse_make_bad(inode);
+ 		err = -EIO;
+ 		goto error;
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 8150621101c6f..ff94da6840176 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -860,6 +860,13 @@ static inline u64 fuse_get_attr_version(struct fuse_conn *fc)
+ 	return atomic64_read(&fc->attr_version);
+ }
+ 
++static inline bool fuse_stale_inode(const struct inode *inode, int generation,
++				    struct fuse_attr *attr)
++{
++	return inode->i_generation != generation ||
++		inode_wrong_type(inode, attr->mode);
++}
++
+ static inline void fuse_make_bad(struct inode *inode)
+ {
+ 	remove_inode_hash(inode);
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index f94b0bb57619c..053c56af3b6f3 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -340,8 +340,8 @@ retry:
+ 		inode->i_generation = generation;
+ 		fuse_init_inode(inode, attr);
+ 		unlock_new_inode(inode);
+-	} else if ((inode->i_mode ^ attr->mode) & S_IFMT) {
+-		/* Inode has changed type, any I/O on the old should fail */
++	} else if (fuse_stale_inode(inode, generation, attr)) {
++		/* nodeid was reused, any I/O on the old inode should fail */
+ 		fuse_make_bad(inode);
+ 		iput(inode);
+ 		goto retry;
+diff --git a/fs/fuse/readdir.c b/fs/fuse/readdir.c
+index 3441ffa740f3d..bc267832310c7 100644
+--- a/fs/fuse/readdir.c
++++ b/fs/fuse/readdir.c
+@@ -200,9 +200,12 @@ retry:
+ 	if (!d_in_lookup(dentry)) {
+ 		struct fuse_inode *fi;
+ 		inode = d_inode(dentry);
++		if (inode && get_node_id(inode) != o->nodeid)
++			inode = NULL;
+ 		if (!inode ||
+-		    get_node_id(inode) != o->nodeid ||
+-		    ((o->attr.mode ^ inode->i_mode) & S_IFMT)) {
++		    fuse_stale_inode(inode, o->generation, &o->attr)) {
++			if (inode)
++				fuse_make_bad(inode);
+ 			d_invalidate(dentry);
+ 			dput(dentry);
+ 			goto retry;
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 9811880470a07..21addb78523d2 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -322,7 +322,7 @@ nfs_find_actor(struct inode *inode, void *opaque)
+ 
+ 	if (NFS_FILEID(inode) != fattr->fileid)
+ 		return 0;
+-	if ((S_IFMT & inode->i_mode) != (S_IFMT & fattr->mode))
++	if (inode_wrong_type(inode, fattr->mode))
+ 		return 0;
+ 	if (nfs_compare_fh(NFS_FH(inode), fh))
+ 		return 0;
+@@ -1446,7 +1446,7 @@ static int nfs_check_inode_attributes(struct inode *inode, struct nfs_fattr *fat
+ 			return 0;
+ 		return -ESTALE;
+ 	}
+-	if ((fattr->valid & NFS_ATTR_FATTR_TYPE) && (inode->i_mode & S_IFMT) != (fattr->mode & S_IFMT))
++	if ((fattr->valid & NFS_ATTR_FATTR_TYPE) && inode_wrong_type(inode, fattr->mode))
+ 		return -ESTALE;
+ 
+ 
+@@ -1861,7 +1861,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ 	/*
+ 	 * Make sure the inode's type hasn't changed.
+ 	 */
+-	if ((fattr->valid & NFS_ATTR_FATTR_TYPE) && (inode->i_mode & S_IFMT) != (fattr->mode & S_IFMT)) {
++	if ((fattr->valid & NFS_ATTR_FATTR_TYPE) && inode_wrong_type(inode, fattr->mode)) {
+ 		/*
+ 		* Big trouble! The inode has become a different object.
+ 		*/
+diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
+index 0d71549f9d42a..9c9de2b66e641 100644
+--- a/fs/nfsd/nfsproc.c
++++ b/fs/nfsd/nfsproc.c
+@@ -376,7 +376,7 @@ nfsd_proc_create(struct svc_rqst *rqstp)
+ 
+ 		/* Make sure the type and device matches */
+ 		resp->status = nfserr_exist;
+-		if (inode && type != (inode->i_mode & S_IFMT))
++		if (inode && inode_wrong_type(inode, type))
+ 			goto out_unlock;
+ 	}
+ 
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index f3309e044f079..092812c2f118a 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -366,7 +366,7 @@ int ovl_check_origin_fh(struct ovl_fs *ofs, struct ovl_fh *fh, bool connected,
+ 		return PTR_ERR(origin);
+ 
+ 	if (upperdentry && !ovl_is_whiteout(upperdentry) &&
+-	    ((d_inode(origin)->i_mode ^ d_inode(upperdentry)->i_mode) & S_IFMT))
++	    inode_wrong_type(d_inode(upperdentry), d_inode(origin)->i_mode))
+ 		goto invalid;
+ 
+ 	if (!*stackp)
+@@ -724,7 +724,7 @@ struct dentry *ovl_lookup_index(struct ovl_fs *ofs, struct dentry *upper,
+ 		index = ERR_PTR(-ESTALE);
+ 		goto out;
+ 	} else if (ovl_dentry_weird(index) || ovl_is_whiteout(index) ||
+-		   ((inode->i_mode ^ d_inode(origin)->i_mode) & S_IFMT)) {
++		   inode_wrong_type(inode, d_inode(origin)->i_mode)) {
+ 		/*
+ 		 * Index should always be of the same file type as origin
+ 		 * except for the case of a whiteout index. A whiteout
+diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
+index b77d1637bbbc8..f4826b6da6828 100644
+--- a/fs/ubifs/file.c
++++ b/fs/ubifs/file.c
+@@ -1629,6 +1629,16 @@ static const char *ubifs_get_link(struct dentry *dentry,
+ 	return fscrypt_get_symlink(inode, ui->data, ui->data_len, done);
+ }
+ 
++static int ubifs_symlink_getattr(const struct path *path, struct kstat *stat,
++				 u32 request_mask, unsigned int query_flags)
++{
++	ubifs_getattr(path, stat, request_mask, query_flags);
++
++	if (IS_ENCRYPTED(d_inode(path->dentry)))
++		return fscrypt_symlink_getattr(path, stat);
++	return 0;
++}
++
+ const struct address_space_operations ubifs_file_address_operations = {
+ 	.readpage       = ubifs_readpage,
+ 	.writepage      = ubifs_writepage,
+@@ -1654,7 +1664,7 @@ const struct inode_operations ubifs_file_inode_operations = {
+ const struct inode_operations ubifs_symlink_inode_operations = {
+ 	.get_link    = ubifs_get_link,
+ 	.setattr     = ubifs_setattr,
+-	.getattr     = ubifs_getattr,
++	.getattr     = ubifs_symlink_getattr,
+ #ifdef CONFIG_UBIFS_FS_XATTR
+ 	.listxattr   = ubifs_listxattr,
+ #endif
+diff --git a/include/linux/cred.h b/include/linux/cred.h
+index ad160e5fe5c64..18639c069263f 100644
+--- a/include/linux/cred.h
++++ b/include/linux/cred.h
+@@ -144,7 +144,6 @@ struct cred {
+ #endif
+ 	struct user_struct *user;	/* real user ID subscription */
+ 	struct user_namespace *user_ns; /* user_ns the caps and keyrings are relative to. */
+-	struct ucounts *ucounts;
+ 	struct group_info *group_info;	/* supplementary groups for euid/fsgid */
+ 	/* RCU deletion */
+ 	union {
+@@ -171,7 +170,6 @@ extern int set_security_override_from_ctx(struct cred *, const char *);
+ extern int set_create_files_as(struct cred *, struct inode *);
+ extern int cred_fscmp(const struct cred *, const struct cred *);
+ extern void __init cred_init(void);
+-extern int set_cred_ucounts(struct cred *);
+ 
+ /*
+  * check for validity of credentials
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 8bde32cf97115..43bb6a51e42d9 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2768,6 +2768,11 @@ static inline bool execute_ok(struct inode *inode)
+ 	return (inode->i_mode & S_IXUGO) || S_ISDIR(inode->i_mode);
+ }
+ 
++static inline bool inode_wrong_type(const struct inode *inode, umode_t mode)
++{
++	return (inode->i_mode ^ mode) & S_IFMT;
++}
++
+ static inline void file_start_write(struct file *file)
+ {
+ 	if (!S_ISREG(file_inode(file)->i_mode))
+diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
+index 8e1d31c959bfa..d0a1b8edfd9db 100644
+--- a/include/linux/fscrypt.h
++++ b/include/linux/fscrypt.h
+@@ -252,6 +252,7 @@ int __fscrypt_encrypt_symlink(struct inode *inode, const char *target,
+ const char *fscrypt_get_symlink(struct inode *inode, const void *caddr,
+ 				unsigned int max_size,
+ 				struct delayed_call *done);
++int fscrypt_symlink_getattr(const struct path *path, struct kstat *stat);
+ static inline void fscrypt_set_ops(struct super_block *sb,
+ 				   const struct fscrypt_operations *s_cop)
+ {
+@@ -575,6 +576,12 @@ static inline const char *fscrypt_get_symlink(struct inode *inode,
+ 	return ERR_PTR(-EOPNOTSUPP);
+ }
+ 
++static inline int fscrypt_symlink_getattr(const struct path *path,
++					  struct kstat *stat)
++{
++	return -EOPNOTSUPP;
++}
++
+ static inline void fscrypt_set_ops(struct super_block *sb,
+ 				   const struct fscrypt_operations *s_cop)
+ {
+diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
+index 2d906b9c14992..e1d88630ff243 100644
+--- a/include/linux/spi/spi.h
++++ b/include/linux/spi/spi.h
+@@ -646,8 +646,8 @@ struct spi_controller {
+ 	int			*cs_gpios;
+ 	struct gpio_desc	**cs_gpiods;
+ 	bool			use_gpio_descriptors;
+-	u8			unused_native_cs;
+-	u8			max_native_cs;
++	s8			unused_native_cs;
++	s8			max_native_cs;
+ 
+ 	/* statistics */
+ 	struct spi_statistics	statistics;
+diff --git a/include/linux/user_namespace.h b/include/linux/user_namespace.h
+index e1bd560da1cd4..7616c7bf4b241 100644
+--- a/include/linux/user_namespace.h
++++ b/include/linux/user_namespace.h
+@@ -101,15 +101,11 @@ struct ucounts {
+ };
+ 
+ extern struct user_namespace init_user_ns;
+-extern struct ucounts init_ucounts;
+ 
+ bool setup_userns_sysctls(struct user_namespace *ns);
+ void retire_userns_sysctls(struct user_namespace *ns);
+ struct ucounts *inc_ucount(struct user_namespace *ns, kuid_t uid, enum ucount_type type);
+ void dec_ucount(struct ucounts *ucounts, enum ucount_type type);
+-struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid);
+-struct ucounts *get_ucounts(struct ucounts *ucounts);
+-void put_ucounts(struct ucounts *ucounts);
+ 
+ #ifdef CONFIG_USER_NS
+ 
+diff --git a/kernel/cred.c b/kernel/cred.c
+index 8c0983fa794a7..421b1149c6516 100644
+--- a/kernel/cred.c
++++ b/kernel/cred.c
+@@ -60,7 +60,6 @@ struct cred init_cred = {
+ 	.user			= INIT_USER,
+ 	.user_ns		= &init_user_ns,
+ 	.group_info		= &init_groups,
+-	.ucounts		= &init_ucounts,
+ };
+ 
+ static inline void set_cred_subscribers(struct cred *cred, int n)
+@@ -120,8 +119,6 @@ static void put_cred_rcu(struct rcu_head *rcu)
+ 	if (cred->group_info)
+ 		put_group_info(cred->group_info);
+ 	free_uid(cred->user);
+-	if (cred->ucounts)
+-		put_ucounts(cred->ucounts);
+ 	put_user_ns(cred->user_ns);
+ 	kmem_cache_free(cred_jar, cred);
+ }
+@@ -225,7 +222,6 @@ struct cred *cred_alloc_blank(void)
+ #ifdef CONFIG_DEBUG_CREDENTIALS
+ 	new->magic = CRED_MAGIC;
+ #endif
+-	new->ucounts = get_ucounts(&init_ucounts);
+ 
+ 	if (security_cred_alloc_blank(new, GFP_KERNEL_ACCOUNT) < 0)
+ 		goto error;
+@@ -286,13 +282,8 @@ struct cred *prepare_creds(void)
+ 	new->security = NULL;
+ #endif
+ 
+-	new->ucounts = get_ucounts(new->ucounts);
+-	if (!new->ucounts)
+-		goto error;
+-
+ 	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
+ 		goto error;
+-
+ 	validate_creds(new);
+ 	return new;
+ 
+@@ -372,9 +363,6 @@ int copy_creds(struct task_struct *p, unsigned long clone_flags)
+ 		ret = create_user_ns(new);
+ 		if (ret < 0)
+ 			goto error_put;
+-		ret = set_cred_ucounts(new);
+-		if (ret < 0)
+-			goto error_put;
+ 	}
+ 
+ #ifdef CONFIG_KEYS
+@@ -665,31 +653,6 @@ int cred_fscmp(const struct cred *a, const struct cred *b)
+ }
+ EXPORT_SYMBOL(cred_fscmp);
+ 
+-int set_cred_ucounts(struct cred *new)
+-{
+-	struct task_struct *task = current;
+-	const struct cred *old = task->real_cred;
+-	struct ucounts *old_ucounts = new->ucounts;
+-
+-	if (new->user == old->user && new->user_ns == old->user_ns)
+-		return 0;
+-
+-	/*
+-	 * This optimization is needed because alloc_ucounts() uses locks
+-	 * for table lookups.
+-	 */
+-	if (old_ucounts && old_ucounts->ns == new->user_ns && uid_eq(old_ucounts->uid, new->euid))
+-		return 0;
+-
+-	if (!(new->ucounts = alloc_ucounts(new->user_ns, new->euid)))
+-		return -EAGAIN;
+-
+-	if (old_ucounts)
+-		put_ucounts(old_ucounts);
+-
+-	return 0;
+-}
+-
+ /*
+  * initialise the credentials stuff
+  */
+@@ -753,10 +716,6 @@ struct cred *prepare_kernel_cred(struct task_struct *daemon)
+ #ifdef CONFIG_SECURITY
+ 	new->security = NULL;
+ #endif
+-	new->ucounts = get_ucounts(new->ucounts);
+-	if (!new->ucounts)
+-		goto error;
+-
+ 	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
+ 		goto error;
+ 
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 096945ef49ad7..9705439439fe3 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2960,12 +2960,6 @@ int ksys_unshare(unsigned long unshare_flags)
+ 	if (err)
+ 		goto bad_unshare_cleanup_cred;
+ 
+-	if (new_cred) {
+-		err = set_cred_ucounts(new_cred);
+-		if (err)
+-			goto bad_unshare_cleanup_cred;
+-	}
+-
+ 	if (new_fs || new_fd || do_sysvsem || new_cred || new_nsproxy) {
+ 		if (do_sysvsem) {
+ 			/*
+diff --git a/kernel/static_call.c b/kernel/static_call.c
+index b62a0c41c9050..dc5665b628140 100644
+--- a/kernel/static_call.c
++++ b/kernel/static_call.c
+@@ -165,13 +165,13 @@ void __static_call_update(struct static_call_key *key, void *tramp, void *func)
+ 
+ 		stop = __stop_static_call_sites;
+ 
+-#ifdef CONFIG_MODULES
+ 		if (mod) {
++#ifdef CONFIG_MODULES
+ 			stop = mod->static_call_sites +
+ 			       mod->num_static_call_sites;
+ 			init = mod->state == MODULE_STATE_COMING;
+-		}
+ #endif
++		}
+ 
+ 		for (site = site_mod->sites;
+ 		     site < stop && static_call_key(site) == key; site++) {
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 0670e824e0197..a730c03ee607c 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -552,10 +552,6 @@ long __sys_setreuid(uid_t ruid, uid_t euid)
+ 	if (retval < 0)
+ 		goto error;
+ 
+-	retval = set_cred_ucounts(new);
+-	if (retval < 0)
+-		goto error;
+-
+ 	return commit_creds(new);
+ 
+ error:
+@@ -614,10 +610,6 @@ long __sys_setuid(uid_t uid)
+ 	if (retval < 0)
+ 		goto error;
+ 
+-	retval = set_cred_ucounts(new);
+-	if (retval < 0)
+-		goto error;
+-
+ 	return commit_creds(new);
+ 
+ error:
+@@ -693,10 +685,6 @@ long __sys_setresuid(uid_t ruid, uid_t euid, uid_t suid)
+ 	if (retval < 0)
+ 		goto error;
+ 
+-	retval = set_cred_ucounts(new);
+-	if (retval < 0)
+-		goto error;
+-
+ 	return commit_creds(new);
+ 
+ error:
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index 9894795043c42..11b1596e2542a 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -8,12 +8,6 @@
+ #include <linux/kmemleak.h>
+ #include <linux/user_namespace.h>
+ 
+-struct ucounts init_ucounts = {
+-	.ns    = &init_user_ns,
+-	.uid   = GLOBAL_ROOT_UID,
+-	.count = 1,
+-};
+-
+ #define UCOUNTS_HASHTABLE_BITS 10
+ static struct hlist_head ucounts_hashtable[(1 << UCOUNTS_HASHTABLE_BITS)];
+ static DEFINE_SPINLOCK(ucounts_lock);
+@@ -131,15 +125,7 @@ static struct ucounts *find_ucounts(struct user_namespace *ns, kuid_t uid, struc
+ 	return NULL;
+ }
+ 
+-static void hlist_add_ucounts(struct ucounts *ucounts)
+-{
+-	struct hlist_head *hashent = ucounts_hashentry(ucounts->ns, ucounts->uid);
+-	spin_lock_irq(&ucounts_lock);
+-	hlist_add_head(&ucounts->node, hashent);
+-	spin_unlock_irq(&ucounts_lock);
+-}
+-
+-struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
++static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
+ {
+ 	struct hlist_head *hashent = ucounts_hashentry(ns, uid);
+ 	struct ucounts *ucounts, *new;
+@@ -174,26 +160,7 @@ struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
+ 	return ucounts;
+ }
+ 
+-struct ucounts *get_ucounts(struct ucounts *ucounts)
+-{
+-	unsigned long flags;
+-
+-	if (!ucounts)
+-		return NULL;
+-
+-	spin_lock_irqsave(&ucounts_lock, flags);
+-	if (ucounts->count == INT_MAX) {
+-		WARN_ONCE(1, "ucounts: counter has reached its maximum value");
+-		ucounts = NULL;
+-	} else {
+-		ucounts->count += 1;
+-	}
+-	spin_unlock_irqrestore(&ucounts_lock, flags);
+-
+-	return ucounts;
+-}
+-
+-void put_ucounts(struct ucounts *ucounts)
++static void put_ucounts(struct ucounts *ucounts)
+ {
+ 	unsigned long flags;
+ 
+@@ -227,7 +194,7 @@ struct ucounts *inc_ucount(struct user_namespace *ns, kuid_t uid,
+ {
+ 	struct ucounts *ucounts, *iter, *bad;
+ 	struct user_namespace *tns;
+-	ucounts = alloc_ucounts(ns, uid);
++	ucounts = get_ucounts(ns, uid);
+ 	for (iter = ucounts; iter; iter = tns->ucounts) {
+ 		int max;
+ 		tns = iter->ns;
+@@ -270,7 +237,6 @@ static __init int user_namespace_sysctl_init(void)
+ 	BUG_ON(!user_header);
+ 	BUG_ON(!setup_userns_sysctls(&init_user_ns));
+ #endif
+-	hlist_add_ucounts(&init_ucounts);
+ 	return 0;
+ }
+ subsys_initcall(user_namespace_sysctl_init);
+diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
+index 8206a13c81ebc..ce396ea4de608 100644
+--- a/kernel/user_namespace.c
++++ b/kernel/user_namespace.c
+@@ -1340,9 +1340,6 @@ static int userns_install(struct nsset *nsset, struct ns_common *ns)
+ 	put_user_ns(cred->user_ns);
+ 	set_cred_user_ns(cred, get_user_ns(user_ns));
+ 
+-	if (set_cred_ucounts(cred) < 0)
+-		return -EINVAL;
+-
+ 	return 0;
+ }
+ 
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index bda3514c7b2d9..5e04c4b9e0239 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1746,7 +1746,7 @@ static int snd_pcm_lib_ioctl_fifo_size(struct snd_pcm_substream *substream,
+ 		channels = params_channels(params);
+ 		frame_size = snd_pcm_format_size(format, channels);
+ 		if (frame_size > 0)
+-			params->fifo_size /= (unsigned)frame_size;
++			params->fifo_size /= frame_size;
+ 	}
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 6219d0311c9a0..f47f639980dbb 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8364,6 +8364,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f2, "HP ProBook 640 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x87f6, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+@@ -9440,6 +9441,16 @@ static int patch_alc269(struct hda_codec *codec)
+ 
+ 	snd_hda_pick_fixup(codec, alc269_fixup_models,
+ 		       alc269_fixup_tbl, alc269_fixups);
++	/* FIXME: both TX300 and ROG Strix G17 have the same SSID, and
++	 * the quirk breaks the latter (bko#214101).
++	 * Clear the wrong entry.
++	 */
++	if (codec->fixup_id == ALC282_FIXUP_ASUS_TX300 &&
++	    codec->core.vendor_id == 0x10ec0294) {
++		codec_dbg(codec, "Clear wrong fixup for ASUS ROG Strix G17\n");
++		codec->fixup_id = HDA_FIXUP_ID_NOT_SET;
++	}
++
+ 	snd_hda_pick_pin_fixup(codec, alc269_pin_fixup_tbl, alc269_fixups, true);
+ 	snd_hda_pick_pin_fixup(codec, alc269_fallback_pin_fixup_tbl, alc269_fixups, false);
+ 	snd_hda_pick_fixup(codec, NULL,	alc269_fixup_vendor_tbl,


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-12 14:38 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-09-12 14:38 UTC (permalink / raw
  To: gentoo-commits

commit:     e28c47d5ba4a96b33ace817627a332757b3e1606
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Sep 12 14:37:50 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Sep 12 14:37:50 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e28c47d5

Linux patch 5.10.64

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1063_linux-5.10.64.patch | 1262 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1266 insertions(+)

diff --git a/0000_README b/0000_README
index 0e463ca..8ddd447 100644
--- a/0000_README
+++ b/0000_README
@@ -295,6 +295,10 @@ Patch:  1062_linux-5.10.63.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.63
 
+Patch:  1063_linux-5.10.64.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.64
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1063_linux-5.10.64.patch b/1063_linux-5.10.64.patch
new file mode 100644
index 0000000..e05fdf6
--- /dev/null
+++ b/1063_linux-5.10.64.patch
@@ -0,0 +1,1262 @@
+diff --git a/Makefile b/Makefile
+index b2d326f4dea68..982aa1876aa04 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 63
++SUBLEVEL = 64
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
+index 6a98a76516214..2da6139b0977f 100644
+--- a/arch/x86/events/amd/iommu.c
++++ b/arch/x86/events/amd/iommu.c
+@@ -18,8 +18,6 @@
+ #include "../perf_event.h"
+ #include "iommu.h"
+ 
+-#define COUNTER_SHIFT		16
+-
+ /* iommu pmu conf masks */
+ #define GET_CSOURCE(x)     ((x)->conf & 0xFFULL)
+ #define GET_DEVID(x)       (((x)->conf >> 8)  & 0xFFFFULL)
+@@ -285,22 +283,31 @@ static void perf_iommu_start(struct perf_event *event, int flags)
+ 	WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE));
+ 	hwc->state = 0;
+ 
++	/*
++	 * To account for power-gating, which prevents write to
++	 * the counter, we need to enable the counter
++	 * before setting up counter register.
++	 */
++	perf_iommu_enable_event(event);
++
+ 	if (flags & PERF_EF_RELOAD) {
+-		u64 prev_raw_count = local64_read(&hwc->prev_count);
++		u64 count = 0;
+ 		struct amd_iommu *iommu = perf_event_2_iommu(event);
+ 
++		/*
++		 * Since the IOMMU PMU only support counting mode,
++		 * the counter always start with value zero.
++		 */
+ 		amd_iommu_pc_set_reg(iommu, hwc->iommu_bank, hwc->iommu_cntr,
+-				     IOMMU_PC_COUNTER_REG, &prev_raw_count);
++				     IOMMU_PC_COUNTER_REG, &count);
+ 	}
+ 
+-	perf_iommu_enable_event(event);
+ 	perf_event_update_userpage(event);
+-
+ }
+ 
+ static void perf_iommu_read(struct perf_event *event)
+ {
+-	u64 count, prev, delta;
++	u64 count;
+ 	struct hw_perf_event *hwc = &event->hw;
+ 	struct amd_iommu *iommu = perf_event_2_iommu(event);
+ 
+@@ -311,14 +318,11 @@ static void perf_iommu_read(struct perf_event *event)
+ 	/* IOMMU pc counter register is only 48 bits */
+ 	count &= GENMASK_ULL(47, 0);
+ 
+-	prev = local64_read(&hwc->prev_count);
+-	if (local64_cmpxchg(&hwc->prev_count, prev, count) != prev)
+-		return;
+-
+-	/* Handle 48-bit counter overflow */
+-	delta = (count << COUNTER_SHIFT) - (prev << COUNTER_SHIFT);
+-	delta >>= COUNTER_SHIFT;
+-	local64_add(delta, &event->count);
++	/*
++	 * Since the counter always start with value zero,
++	 * simply just accumulate the count for the event.
++	 */
++	local64_add(count, &event->count);
+ }
+ 
+ static void perf_iommu_stop(struct perf_event *event, int flags)
+@@ -328,15 +332,16 @@ static void perf_iommu_stop(struct perf_event *event, int flags)
+ 	if (hwc->state & PERF_HES_UPTODATE)
+ 		return;
+ 
++	/*
++	 * To account for power-gating, in which reading the counter would
++	 * return zero, we need to read the register before disabling.
++	 */
++	perf_iommu_read(event);
++	hwc->state |= PERF_HES_UPTODATE;
++
+ 	perf_iommu_disable_event(event);
+ 	WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
+ 	hwc->state |= PERF_HES_STOPPED;
+-
+-	if (hwc->state & PERF_HES_UPTODATE)
+-		return;
+-
+-	perf_iommu_read(event);
+-	hwc->state |= PERF_HES_UPTODATE;
+ }
+ 
+ static int perf_iommu_add(struct perf_event *event, int flags)
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index b29657b76e3fa..798a6f73f8946 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -388,10 +388,11 @@ static const struct dmi_system_id reboot_dmi_table[] __initconst = {
+ 	},
+ 	{	/* Handle problems with rebooting on the OptiPlex 990. */
+ 		.callback = set_pci_reboot,
+-		.ident = "Dell OptiPlex 990",
++		.ident = "Dell OptiPlex 990 BIOS A0x",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 990"),
++			DMI_MATCH(DMI_BIOS_VERSION, "A0"),
+ 		},
+ 	},
+ 	{	/* Handle problems with rebooting on Dell 300's */
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 2d53e2ff48ff8..fbc39756f37de 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -121,7 +121,6 @@ void blk_rq_init(struct request_queue *q, struct request *rq)
+ 	rq->internal_tag = BLK_MQ_NO_TAG;
+ 	rq->start_time_ns = ktime_get_ns();
+ 	rq->part = NULL;
+-	refcount_set(&rq->ref, 1);
+ 	blk_crypto_rq_set_defaults(rq);
+ }
+ EXPORT_SYMBOL(blk_rq_init);
+diff --git a/block/blk-flush.c b/block/blk-flush.c
+index 7ee7e5e8905d5..70f1d02135ed6 100644
+--- a/block/blk-flush.c
++++ b/block/blk-flush.c
+@@ -263,6 +263,11 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
+ 	spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
+ }
+ 
++bool is_flush_rq(struct request *rq)
++{
++	return rq->end_io == flush_end_io;
++}
++
+ /**
+  * blk_kick_flush - consider issuing flush request
+  * @q: request_queue being kicked
+@@ -330,6 +335,14 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
+ 	flush_rq->rq_flags |= RQF_FLUSH_SEQ;
+ 	flush_rq->rq_disk = first_rq->rq_disk;
+ 	flush_rq->end_io = flush_end_io;
++	/*
++	 * Order WRITE ->end_io and WRITE rq->ref, and its pair is the one
++	 * implied in refcount_inc_not_zero() called from
++	 * blk_mq_find_and_get_req(), which orders WRITE/READ flush_rq->ref
++	 * and READ flush_rq->end_io
++	 */
++	smp_wmb();
++	refcount_set(&flush_rq->ref, 1);
+ 
+ 	blk_flush_queue_rq(flush_rq, false);
+ }
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 044d0e3a15ad7..9e3fedbaa644b 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -929,7 +929,7 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
+ 
+ void blk_mq_put_rq_ref(struct request *rq)
+ {
+-	if (is_flush_rq(rq, rq->mq_hctx))
++	if (is_flush_rq(rq))
+ 		rq->end_io(rq, 0);
+ 	else if (refcount_dec_and_test(&rq->ref))
+ 		__blk_mq_free_request(rq);
+@@ -2589,16 +2589,49 @@ static void blk_mq_remove_cpuhp(struct blk_mq_hw_ctx *hctx)
+ 					    &hctx->cpuhp_dead);
+ }
+ 
++/*
++ * Before freeing hw queue, clearing the flush request reference in
++ * tags->rqs[] for avoiding potential UAF.
++ */
++static void blk_mq_clear_flush_rq_mapping(struct blk_mq_tags *tags,
++		unsigned int queue_depth, struct request *flush_rq)
++{
++	int i;
++	unsigned long flags;
++
++	/* The hw queue may not be mapped yet */
++	if (!tags)
++		return;
++
++	WARN_ON_ONCE(refcount_read(&flush_rq->ref) != 0);
++
++	for (i = 0; i < queue_depth; i++)
++		cmpxchg(&tags->rqs[i], flush_rq, NULL);
++
++	/*
++	 * Wait until all pending iteration is done.
++	 *
++	 * Request reference is cleared and it is guaranteed to be observed
++	 * after the ->lock is released.
++	 */
++	spin_lock_irqsave(&tags->lock, flags);
++	spin_unlock_irqrestore(&tags->lock, flags);
++}
++
+ /* hctx->ctxs will be freed in queue's release handler */
+ static void blk_mq_exit_hctx(struct request_queue *q,
+ 		struct blk_mq_tag_set *set,
+ 		struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
+ {
++	struct request *flush_rq = hctx->fq->flush_rq;
++
+ 	if (blk_mq_hw_queue_mapped(hctx))
+ 		blk_mq_tag_idle(hctx);
+ 
++	blk_mq_clear_flush_rq_mapping(set->tags[hctx_idx],
++			set->queue_depth, flush_rq);
+ 	if (set->ops->exit_request)
+-		set->ops->exit_request(set, hctx->fq->flush_rq, hctx_idx);
++		set->ops->exit_request(set, flush_rq, hctx_idx);
+ 
+ 	if (set->ops->exit_hctx)
+ 		set->ops->exit_hctx(hctx, hctx_idx);
+diff --git a/block/blk.h b/block/blk.h
+index dfab98465db9a..ecfd523c68d00 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -44,11 +44,7 @@ static inline void __blk_get_queue(struct request_queue *q)
+ 	kobject_get(&q->kobj);
+ }
+ 
+-static inline bool
+-is_flush_rq(struct request *req, struct blk_mq_hw_ctx *hctx)
+-{
+-	return hctx->fq->flush_rq == req;
+-}
++bool is_flush_rq(struct request *req);
+ 
+ struct blk_flush_queue *blk_alloc_flush_queue(int node, int cmd_size,
+ 					      gfp_t flags);
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index e690a1b09e98b..30be18bac8063 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -3547,6 +3547,7 @@ static void rtl_hw_start_8106(struct rtl8169_private *tp)
+ 	rtl_eri_write(tp, 0x1b0, ERIAR_MASK_0011, 0x0000);
+ 
+ 	rtl_pcie_state_l2l3_disable(tp);
++	rtl_hw_aspm_clkreq_enable(tp, true);
+ }
+ 
+ DECLARE_RTL_COND(rtl_mac_ocp_e00e_cond)
+diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
+index 6bd3a389d389c..650ffb93796f1 100644
+--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
++++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
+@@ -942,10 +942,8 @@ temac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	wmb();
+ 	lp->dma_out(lp, TX_TAILDESC_PTR, tail_p); /* DMA start */
+ 
+-	if (temac_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1)) {
+-		netdev_info(ndev, "%s -> netif_stop_queue\n", __func__);
++	if (temac_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1))
+ 		netif_stop_queue(ndev);
+-	}
+ 
+ 	return NETDEV_TX_OK;
+ }
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index cd2401d4764f2..a91c944961caa 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3246,12 +3246,12 @@ static void fixup_mpss_256(struct pci_dev *dev)
+ {
+ 	dev->pcie_mpss = 1; /* 256 bytes */
+ }
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE,
+-			 PCI_DEVICE_ID_SOLARFLARE_SFC4000A_0, fixup_mpss_256);
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE,
+-			 PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256);
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE,
+-			 PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
++			PCI_DEVICE_ID_SOLARFLARE_SFC4000A_0, fixup_mpss_256);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
++			PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
++			PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256);
+ 
+ /*
+  * Intel 5000 and 5100 Memory controllers have an erratum with read completion
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index ad0549dac7d79..c37468887fd2a 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -538,6 +538,11 @@ static void omap_8250_pm(struct uart_port *port, unsigned int state,
+ static void omap_serial_fill_features_erratas(struct uart_8250_port *up,
+ 					      struct omap8250_priv *priv)
+ {
++	const struct soc_device_attribute k3_soc_devices[] = {
++		{ .family = "AM65X",  },
++		{ .family = "J721E", .revision = "SR1.0" },
++		{ /* sentinel */ }
++	};
+ 	u32 mvr, scheme;
+ 	u16 revision, major, minor;
+ 
+@@ -585,6 +590,14 @@ static void omap_serial_fill_features_erratas(struct uart_8250_port *up,
+ 	default:
+ 		break;
+ 	}
++
++	/*
++	 * AM65x SR1.0, AM65x SR2.0 and J721e SR1.0 don't
++	 * don't have RHR_IT_DIS bit in IER2 register. So drop to flag
++	 * to enable errata workaround.
++	 */
++	if (soc_device_match(k3_soc_devices))
++		priv->habit &= ~UART_HAS_RHR_IT_DIS;
+ }
+ 
+ static void omap8250_uart_qos_work(struct work_struct *work)
+@@ -1208,12 +1221,6 @@ static int omap8250_no_handle_irq(struct uart_port *port)
+ 	return 0;
+ }
+ 
+-static const struct soc_device_attribute k3_soc_devices[] = {
+-	{ .family = "AM65X",  },
+-	{ .family = "J721E", .revision = "SR1.0" },
+-	{ /* sentinel */ }
+-};
+-
+ static struct omap8250_dma_params am654_dma = {
+ 	.rx_size = SZ_2K,
+ 	.rx_trigger = 1,
+@@ -1419,13 +1426,6 @@ static int omap8250_probe(struct platform_device *pdev)
+ 			up.dma->rxconf.src_maxburst = RX_TRIGGER;
+ 			up.dma->txconf.dst_maxburst = TX_TRIGGER;
+ 		}
+-
+-		/*
+-		 * AM65x SR1.0, AM65x SR2.0 and J721e SR1.0 don't
+-		 * don't have RHR_IT_DIS bit in IER2 register
+-		 */
+-		if (soc_device_match(k3_soc_devices))
+-			priv->habit &= ~UART_HAS_RHR_IT_DIS;
+ 	}
+ #endif
+ 	ret = serial8250_register_8250_port(&up);
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index f3f112b08c9b1..57ee72fead45a 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -1610,7 +1610,7 @@ static void tegra_xudc_ep_context_setup(struct tegra_xudc_ep *ep)
+ 	u16 maxpacket, maxburst = 0, esit = 0;
+ 	u32 val;
+ 
+-	maxpacket = usb_endpoint_maxp(desc) & 0x7ff;
++	maxpacket = usb_endpoint_maxp(desc);
+ 	if (xudc->gadget.speed == USB_SPEED_SUPER) {
+ 		if (!usb_endpoint_xfer_control(desc))
+ 			maxburst = comp_desc->bMaxBurst;
+@@ -1621,7 +1621,7 @@ static void tegra_xudc_ep_context_setup(struct tegra_xudc_ep *ep)
+ 		   (usb_endpoint_xfer_int(desc) ||
+ 		    usb_endpoint_xfer_isoc(desc))) {
+ 		if (xudc->gadget.speed == USB_SPEED_HIGH) {
+-			maxburst = (usb_endpoint_maxp(desc) >> 11) & 0x3;
++			maxburst = usb_endpoint_maxp_mult(desc) - 1;
+ 			if (maxburst == 0x3) {
+ 				dev_warn(xudc->dev,
+ 					 "invalid endpoint maxburst\n");
+diff --git a/drivers/usb/host/xhci-debugfs.c b/drivers/usb/host/xhci-debugfs.c
+index 2c0fda57869e4..dc832ddf7033f 100644
+--- a/drivers/usb/host/xhci-debugfs.c
++++ b/drivers/usb/host/xhci-debugfs.c
+@@ -198,12 +198,13 @@ static void xhci_ring_dump_segment(struct seq_file *s,
+ 	int			i;
+ 	dma_addr_t		dma;
+ 	union xhci_trb		*trb;
++	char			str[XHCI_MSG_MAX];
+ 
+ 	for (i = 0; i < TRBS_PER_SEGMENT; i++) {
+ 		trb = &seg->trbs[i];
+ 		dma = seg->dma + i * sizeof(*trb);
+ 		seq_printf(s, "%pad: %s\n", &dma,
+-			   xhci_decode_trb(le32_to_cpu(trb->generic.field[0]),
++			   xhci_decode_trb(str, XHCI_MSG_MAX, le32_to_cpu(trb->generic.field[0]),
+ 					   le32_to_cpu(trb->generic.field[1]),
+ 					   le32_to_cpu(trb->generic.field[2]),
+ 					   le32_to_cpu(trb->generic.field[3])));
+@@ -260,11 +261,13 @@ static int xhci_slot_context_show(struct seq_file *s, void *unused)
+ 	struct xhci_slot_ctx	*slot_ctx;
+ 	struct xhci_slot_priv	*priv = s->private;
+ 	struct xhci_virt_device	*dev = priv->dev;
++	char			str[XHCI_MSG_MAX];
+ 
+ 	xhci = hcd_to_xhci(bus_to_hcd(dev->udev->bus));
+ 	slot_ctx = xhci_get_slot_ctx(xhci, dev->out_ctx);
+ 	seq_printf(s, "%pad: %s\n", &dev->out_ctx->dma,
+-		   xhci_decode_slot_context(le32_to_cpu(slot_ctx->dev_info),
++		   xhci_decode_slot_context(str,
++					    le32_to_cpu(slot_ctx->dev_info),
+ 					    le32_to_cpu(slot_ctx->dev_info2),
+ 					    le32_to_cpu(slot_ctx->tt_info),
+ 					    le32_to_cpu(slot_ctx->dev_state)));
+@@ -280,6 +283,7 @@ static int xhci_endpoint_context_show(struct seq_file *s, void *unused)
+ 	struct xhci_ep_ctx	*ep_ctx;
+ 	struct xhci_slot_priv	*priv = s->private;
+ 	struct xhci_virt_device	*dev = priv->dev;
++	char			str[XHCI_MSG_MAX];
+ 
+ 	xhci = hcd_to_xhci(bus_to_hcd(dev->udev->bus));
+ 
+@@ -287,7 +291,8 @@ static int xhci_endpoint_context_show(struct seq_file *s, void *unused)
+ 		ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index);
+ 		dma = dev->out_ctx->dma + (ep_index + 1) * CTX_SIZE(xhci->hcc_params);
+ 		seq_printf(s, "%pad: %s\n", &dma,
+-			   xhci_decode_ep_context(le32_to_cpu(ep_ctx->ep_info),
++			   xhci_decode_ep_context(str,
++						  le32_to_cpu(ep_ctx->ep_info),
+ 						  le32_to_cpu(ep_ctx->ep_info2),
+ 						  le64_to_cpu(ep_ctx->deq),
+ 						  le32_to_cpu(ep_ctx->tx_info)));
+@@ -341,9 +346,10 @@ static int xhci_portsc_show(struct seq_file *s, void *unused)
+ {
+ 	struct xhci_port	*port = s->private;
+ 	u32			portsc;
++	char			str[XHCI_MSG_MAX];
+ 
+ 	portsc = readl(port->addr);
+-	seq_printf(s, "%s\n", xhci_decode_portsc(portsc));
++	seq_printf(s, "%s\n", xhci_decode_portsc(str, portsc));
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/host/xhci-rcar.c b/drivers/usb/host/xhci-rcar.c
+index 1bc4fe7b8c756..9888ba7d85b6a 100644
+--- a/drivers/usb/host/xhci-rcar.c
++++ b/drivers/usb/host/xhci-rcar.c
+@@ -134,6 +134,13 @@ static int xhci_rcar_download_firmware(struct usb_hcd *hcd)
+ 	const struct soc_device_attribute *attr;
+ 	const char *firmware_name;
+ 
++	/*
++	 * According to the datasheet, "Upon the completion of FW Download,
++	 * there is no need to write or reload FW".
++	 */
++	if (readl(regs + RCAR_USB3_DL_CTRL) & RCAR_USB3_DL_CTRL_FW_SUCCESS)
++		return 0;
++
+ 	attr = soc_device_match(rcar_quirks_match);
+ 	if (attr)
+ 		quirks = (uintptr_t)attr->data;
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 53059ee957ad5..dc2068e3bedb7 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1005,6 +1005,7 @@ void xhci_stop_endpoint_command_watchdog(struct timer_list *t)
+ 	struct xhci_hcd *xhci = ep->xhci;
+ 	unsigned long flags;
+ 	u32 usbsts;
++	char str[XHCI_MSG_MAX];
+ 
+ 	spin_lock_irqsave(&xhci->lock, flags);
+ 
+@@ -1018,7 +1019,7 @@ void xhci_stop_endpoint_command_watchdog(struct timer_list *t)
+ 	usbsts = readl(&xhci->op_regs->status);
+ 
+ 	xhci_warn(xhci, "xHCI host not responding to stop endpoint command.\n");
+-	xhci_warn(xhci, "USBSTS:%s\n", xhci_decode_usbsts(usbsts));
++	xhci_warn(xhci, "USBSTS:%s\n", xhci_decode_usbsts(str, usbsts));
+ 
+ 	ep->ep_state &= ~EP_STOP_CMD_PENDING;
+ 
+diff --git a/drivers/usb/host/xhci-trace.h b/drivers/usb/host/xhci-trace.h
+index 627abd236dbe1..a5da020772977 100644
+--- a/drivers/usb/host/xhci-trace.h
++++ b/drivers/usb/host/xhci-trace.h
+@@ -25,8 +25,6 @@
+ #include "xhci.h"
+ #include "xhci-dbgcap.h"
+ 
+-#define XHCI_MSG_MAX	500
+-
+ DECLARE_EVENT_CLASS(xhci_log_msg,
+ 	TP_PROTO(struct va_format *vaf),
+ 	TP_ARGS(vaf),
+@@ -122,6 +120,7 @@ DECLARE_EVENT_CLASS(xhci_log_trb,
+ 		__field(u32, field1)
+ 		__field(u32, field2)
+ 		__field(u32, field3)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->type = ring->type;
+@@ -131,7 +130,7 @@ DECLARE_EVENT_CLASS(xhci_log_trb,
+ 		__entry->field3 = le32_to_cpu(trb->field[3]);
+ 	),
+ 	TP_printk("%s: %s", xhci_ring_type_string(__entry->type),
+-			xhci_decode_trb(__entry->field0, __entry->field1,
++		  xhci_decode_trb(__get_str(str), XHCI_MSG_MAX, __entry->field0, __entry->field1,
+ 					__entry->field2, __entry->field3)
+ 	)
+ );
+@@ -323,6 +322,7 @@ DECLARE_EVENT_CLASS(xhci_log_ep_ctx,
+ 		__field(u32, info2)
+ 		__field(u64, deq)
+ 		__field(u32, tx_info)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->info = le32_to_cpu(ctx->ep_info);
+@@ -330,8 +330,8 @@ DECLARE_EVENT_CLASS(xhci_log_ep_ctx,
+ 		__entry->deq = le64_to_cpu(ctx->deq);
+ 		__entry->tx_info = le32_to_cpu(ctx->tx_info);
+ 	),
+-	TP_printk("%s", xhci_decode_ep_context(__entry->info,
+-		__entry->info2, __entry->deq, __entry->tx_info)
++	TP_printk("%s", xhci_decode_ep_context(__get_str(str),
++		__entry->info, __entry->info2, __entry->deq, __entry->tx_info)
+ 	)
+ );
+ 
+@@ -368,6 +368,7 @@ DECLARE_EVENT_CLASS(xhci_log_slot_ctx,
+ 		__field(u32, info2)
+ 		__field(u32, tt_info)
+ 		__field(u32, state)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->info = le32_to_cpu(ctx->dev_info);
+@@ -375,9 +376,9 @@ DECLARE_EVENT_CLASS(xhci_log_slot_ctx,
+ 		__entry->tt_info = le64_to_cpu(ctx->tt_info);
+ 		__entry->state = le32_to_cpu(ctx->dev_state);
+ 	),
+-	TP_printk("%s", xhci_decode_slot_context(__entry->info,
+-			__entry->info2, __entry->tt_info,
+-			__entry->state)
++	TP_printk("%s", xhci_decode_slot_context(__get_str(str),
++			__entry->info, __entry->info2,
++			__entry->tt_info, __entry->state)
+ 	)
+ );
+ 
+@@ -432,12 +433,13 @@ DECLARE_EVENT_CLASS(xhci_log_ctrl_ctx,
+ 	TP_STRUCT__entry(
+ 		__field(u32, drop)
+ 		__field(u32, add)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->drop = le32_to_cpu(ctrl_ctx->drop_flags);
+ 		__entry->add = le32_to_cpu(ctrl_ctx->add_flags);
+ 	),
+-	TP_printk("%s", xhci_decode_ctrl_ctx(__entry->drop, __entry->add)
++	TP_printk("%s", xhci_decode_ctrl_ctx(__get_str(str), __entry->drop, __entry->add)
+ 	)
+ );
+ 
+@@ -523,6 +525,7 @@ DECLARE_EVENT_CLASS(xhci_log_portsc,
+ 		    TP_STRUCT__entry(
+ 				     __field(u32, portnum)
+ 				     __field(u32, portsc)
++				     __dynamic_array(char, str, XHCI_MSG_MAX)
+ 				     ),
+ 		    TP_fast_assign(
+ 				   __entry->portnum = portnum;
+@@ -530,7 +533,7 @@ DECLARE_EVENT_CLASS(xhci_log_portsc,
+ 				   ),
+ 		    TP_printk("port-%d: %s",
+ 			      __entry->portnum,
+-			      xhci_decode_portsc(__entry->portsc)
++			      xhci_decode_portsc(__get_str(str), __entry->portsc)
+ 			      )
+ );
+ 
+@@ -555,13 +558,14 @@ DECLARE_EVENT_CLASS(xhci_log_doorbell,
+ 	TP_STRUCT__entry(
+ 		__field(u32, slot)
+ 		__field(u32, doorbell)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->slot = slot;
+ 		__entry->doorbell = doorbell;
+ 	),
+ 	TP_printk("Ring doorbell for %s",
+-		xhci_decode_doorbell(__entry->slot, __entry->doorbell)
++		  xhci_decode_doorbell(__get_str(str), __entry->slot, __entry->doorbell)
+ 	)
+ );
+ 
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index c1865a121100c..1c97c8d81154d 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -22,6 +22,9 @@
+ #include	"xhci-ext-caps.h"
+ #include "pci-quirks.h"
+ 
++/* max buffer size for trace and debug messages */
++#define XHCI_MSG_MAX		500
++
+ /* xHCI PCI Configuration Registers */
+ #define XHCI_SBRN_OFFSET	(0x60)
+ 
+@@ -2223,15 +2226,14 @@ static inline char *xhci_slot_state_string(u32 state)
+ 	}
+ }
+ 
+-static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+-		u32 field3)
++static inline const char *xhci_decode_trb(char *str, size_t size,
++					  u32 field0, u32 field1, u32 field2, u32 field3)
+ {
+-	static char str[256];
+ 	int type = TRB_FIELD_TO_TYPE(field3);
+ 
+ 	switch (type) {
+ 	case TRB_LINK:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"LINK %08x%08x intr %d type '%s' flags %c:%c:%c:%c",
+ 			field1, field0, GET_INTR_TARGET(field2),
+ 			xhci_trb_type_string(type),
+@@ -2248,7 +2250,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 	case TRB_HC_EVENT:
+ 	case TRB_DEV_NOTE:
+ 	case TRB_MFINDEX_WRAP:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"TRB %08x%08x status '%s' len %d slot %d ep %d type '%s' flags %c:%c",
+ 			field1, field0,
+ 			xhci_trb_comp_code_string(GET_COMP_CODE(field2)),
+@@ -2261,7 +2263,8 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 
+ 		break;
+ 	case TRB_SETUP:
+-		sprintf(str, "bRequestType %02x bRequest %02x wValue %02x%02x wIndex %02x%02x wLength %d length %d TD size %d intr %d type '%s' flags %c:%c:%c",
++		snprintf(str, size,
++			"bRequestType %02x bRequest %02x wValue %02x%02x wIndex %02x%02x wLength %d length %d TD size %d intr %d type '%s' flags %c:%c:%c",
+ 				field0 & 0xff,
+ 				(field0 & 0xff00) >> 8,
+ 				(field0 & 0xff000000) >> 24,
+@@ -2278,7 +2281,8 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 				field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_DATA:
+-		sprintf(str, "Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c:%c:%c:%c",
++		snprintf(str, size,
++			 "Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c:%c:%c:%c",
+ 				field1, field0, TRB_LEN(field2), GET_TD_SIZE(field2),
+ 				GET_INTR_TARGET(field2),
+ 				xhci_trb_type_string(type),
+@@ -2291,7 +2295,8 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 				field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_STATUS:
+-		sprintf(str, "Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c",
++		snprintf(str, size,
++			 "Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c",
+ 				field1, field0, TRB_LEN(field2), GET_TD_SIZE(field2),
+ 				GET_INTR_TARGET(field2),
+ 				xhci_trb_type_string(type),
+@@ -2304,7 +2309,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 	case TRB_ISOC:
+ 	case TRB_EVENT_DATA:
+ 	case TRB_TR_NOOP:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c:%c:%c:%c:%c",
+ 			field1, field0, TRB_LEN(field2), GET_TD_SIZE(field2),
+ 			GET_INTR_TARGET(field2),
+@@ -2321,21 +2326,21 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 
+ 	case TRB_CMD_NOOP:
+ 	case TRB_ENABLE_SLOT:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: flags %c",
+ 			xhci_trb_type_string(type),
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_DISABLE_SLOT:
+ 	case TRB_NEG_BANDWIDTH:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: slot %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			TRB_TO_SLOT_ID(field3),
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_ADDR_DEV:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d flags %c:%c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2344,7 +2349,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_CONFIG_EP:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d flags %c:%c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2353,7 +2358,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_EVAL_CONTEXT:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2361,7 +2366,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_RESET_EP:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d ep %d flags %c:%c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2382,7 +2387,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_SET_DEQ:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: deq %08x%08x stream %d slot %d ep %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2393,14 +2398,14 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_RESET_DEV:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: slot %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			TRB_TO_SLOT_ID(field3),
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_FORCE_EVENT:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: event %08x%08x vf intr %d vf id %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2409,14 +2414,14 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_SET_LT:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: belt %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			TRB_TO_BELT(field3),
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_GET_BW:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d speed %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2425,7 +2430,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_FORCE_HEADER:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: info %08x%08x%08x pkt type %d roothub port %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field2, field1, field0 & 0xffffffe0,
+@@ -2434,7 +2439,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	default:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"type '%s' -> raw %08x %08x %08x %08x",
+ 			xhci_trb_type_string(type),
+ 			field0, field1, field2, field3);
+@@ -2443,10 +2448,9 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 	return str;
+ }
+ 
+-static inline const char *xhci_decode_ctrl_ctx(unsigned long drop,
+-					       unsigned long add)
++static inline const char *xhci_decode_ctrl_ctx(char *str,
++		unsigned long drop, unsigned long add)
+ {
+-	static char	str[1024];
+ 	unsigned int	bit;
+ 	int		ret = 0;
+ 
+@@ -2472,10 +2476,9 @@ static inline const char *xhci_decode_ctrl_ctx(unsigned long drop,
+ 	return str;
+ }
+ 
+-static inline const char *xhci_decode_slot_context(u32 info, u32 info2,
+-		u32 tt_info, u32 state)
++static inline const char *xhci_decode_slot_context(char *str,
++		u32 info, u32 info2, u32 tt_info, u32 state)
+ {
+-	static char str[1024];
+ 	u32 speed;
+ 	u32 hub;
+ 	u32 mtt;
+@@ -2559,9 +2562,8 @@ static inline const char *xhci_portsc_link_state_string(u32 portsc)
+ 	return "Unknown";
+ }
+ 
+-static inline const char *xhci_decode_portsc(u32 portsc)
++static inline const char *xhci_decode_portsc(char *str, u32 portsc)
+ {
+-	static char str[256];
+ 	int ret;
+ 
+ 	ret = sprintf(str, "%s %s %s Link:%s PortSpeed:%d ",
+@@ -2605,9 +2607,8 @@ static inline const char *xhci_decode_portsc(u32 portsc)
+ 	return str;
+ }
+ 
+-static inline const char *xhci_decode_usbsts(u32 usbsts)
++static inline const char *xhci_decode_usbsts(char *str, u32 usbsts)
+ {
+-	static char str[256];
+ 	int ret = 0;
+ 
+ 	if (usbsts == ~(u32)0)
+@@ -2634,9 +2635,8 @@ static inline const char *xhci_decode_usbsts(u32 usbsts)
+ 	return str;
+ }
+ 
+-static inline const char *xhci_decode_doorbell(u32 slot, u32 doorbell)
++static inline const char *xhci_decode_doorbell(char *str, u32 slot, u32 doorbell)
+ {
+-	static char str[256];
+ 	u8 ep;
+ 	u16 stream;
+ 	int ret;
+@@ -2703,10 +2703,9 @@ static inline const char *xhci_ep_type_string(u8 type)
+ 	}
+ }
+ 
+-static inline const char *xhci_decode_ep_context(u32 info, u32 info2, u64 deq,
+-		u32 tx_info)
++static inline const char *xhci_decode_ep_context(char *str, u32 info,
++		u32 info2, u64 deq, u32 tx_info)
+ {
+-	static char str[1024];
+ 	int ret;
+ 
+ 	u32 esit;
+diff --git a/drivers/usb/mtu3/mtu3_core.c b/drivers/usb/mtu3/mtu3_core.c
+index b3b4599375668..3d328dfdbb5ed 100644
+--- a/drivers/usb/mtu3/mtu3_core.c
++++ b/drivers/usb/mtu3/mtu3_core.c
+@@ -227,11 +227,13 @@ void mtu3_set_speed(struct mtu3 *mtu, enum usb_device_speed speed)
+ 		mtu3_setbits(mbase, U3D_POWER_MANAGEMENT, HS_ENABLE);
+ 		break;
+ 	case USB_SPEED_SUPER:
++		mtu3_setbits(mbase, U3D_POWER_MANAGEMENT, HS_ENABLE);
+ 		mtu3_clrbits(mtu->ippc_base, SSUSB_U3_CTRL(0),
+ 			     SSUSB_U3_PORT_SSP_SPEED);
+ 		break;
+ 	case USB_SPEED_SUPER_PLUS:
+-			mtu3_setbits(mtu->ippc_base, SSUSB_U3_CTRL(0),
++		mtu3_setbits(mbase, U3D_POWER_MANAGEMENT, HS_ENABLE);
++		mtu3_setbits(mtu->ippc_base, SSUSB_U3_CTRL(0),
+ 			     SSUSB_U3_PORT_SSP_SPEED);
+ 		break;
+ 	default:
+diff --git a/drivers/usb/mtu3/mtu3_gadget.c b/drivers/usb/mtu3/mtu3_gadget.c
+index 38f17d66d5bc1..0b3aa7c65857a 100644
+--- a/drivers/usb/mtu3/mtu3_gadget.c
++++ b/drivers/usb/mtu3/mtu3_gadget.c
+@@ -64,14 +64,12 @@ static int mtu3_ep_enable(struct mtu3_ep *mep)
+ 	u32 interval = 0;
+ 	u32 mult = 0;
+ 	u32 burst = 0;
+-	int max_packet;
+ 	int ret;
+ 
+ 	desc = mep->desc;
+ 	comp_desc = mep->comp_desc;
+ 	mep->type = usb_endpoint_type(desc);
+-	max_packet = usb_endpoint_maxp(desc);
+-	mep->maxp = max_packet & GENMASK(10, 0);
++	mep->maxp = usb_endpoint_maxp(desc);
+ 
+ 	switch (mtu->g.speed) {
+ 	case USB_SPEED_SUPER:
+@@ -92,7 +90,7 @@ static int mtu3_ep_enable(struct mtu3_ep *mep)
+ 				usb_endpoint_xfer_int(desc)) {
+ 			interval = desc->bInterval;
+ 			interval = clamp_val(interval, 1, 16) - 1;
+-			burst = (max_packet & GENMASK(12, 11)) >> 11;
++			mult = usb_endpoint_maxp_mult(desc) - 1;
+ 		}
+ 		break;
+ 	default:
+diff --git a/drivers/usb/serial/mos7720.c b/drivers/usb/serial/mos7720.c
+index b418a0d4adb89..c713d98b4a203 100644
+--- a/drivers/usb/serial/mos7720.c
++++ b/drivers/usb/serial/mos7720.c
+@@ -226,8 +226,10 @@ static int read_mos_reg(struct usb_serial *serial, unsigned int serial_portnum,
+ 	int status;
+ 
+ 	buf = kmalloc(1, GFP_KERNEL);
+-	if (!buf)
++	if (!buf) {
++		*data = 0;
+ 		return -ENOMEM;
++	}
+ 
+ 	status = usb_control_msg(usbdev, pipe, request, requesttype, value,
+ 				     index, buf, 1, MOS_WDR_TIMEOUT);
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 2d01b2bbb7465..0a1239819fd2a 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -4608,7 +4608,7 @@ static inline void skb_reset_redirect(struct sk_buff *skb)
+ #endif
+ }
+ 
+-#ifdef CONFIG_KCOV
++#if IS_ENABLED(CONFIG_KCOV) && IS_ENABLED(CONFIG_SKB_EXTENSIONS)
+ static inline void skb_set_kcov_handle(struct sk_buff *skb,
+ 				       const u64 kcov_handle)
+ {
+@@ -4636,7 +4636,7 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
+ static inline void skb_set_kcov_handle(struct sk_buff *skb,
+ 				       const u64 kcov_handle) { }
+ static inline u64 skb_get_kcov_handle(struct sk_buff *skb) { return 0; }
+-#endif /* CONFIG_KCOV */
++#endif /* CONFIG_KCOV && CONFIG_SKB_EXTENSIONS */
+ 
+ #endif	/* __KERNEL__ */
+ #endif	/* _LINUX_SKBUFF_H */
+diff --git a/include/uapi/linux/termios.h b/include/uapi/linux/termios.h
+index 33961d4e4de0d..e6da9d4433d11 100644
+--- a/include/uapi/linux/termios.h
++++ b/include/uapi/linux/termios.h
+@@ -5,19 +5,4 @@
+ #include <linux/types.h>
+ #include <asm/termios.h>
+ 
+-#define NFF	5
+-
+-struct termiox
+-{
+-	__u16	x_hflag;
+-	__u16	x_cflag;
+-	__u16	x_rflag[NFF];
+-	__u16	x_sflag;
+-};
+-
+-#define	RTSXOFF		0x0001		/* RTS flow control on input */
+-#define	CTSXON		0x0002		/* CTS flow control on output */
+-#define	DTRXOFF		0x0004		/* DTR flow control on input */
+-#define DSRXON		0x0008		/* DCD flow control on output */
+-
+ #endif
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index ffccc13d685bd..bf174798afcb9 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1869,7 +1869,7 @@ config KCOV
+ 	depends on CC_HAS_SANCOV_TRACE_PC || GCC_PLUGINS
+ 	select DEBUG_FS
+ 	select GCC_PLUGIN_SANCOV if !CC_HAS_SANCOV_TRACE_PC
+-	select SKB_EXTENSIONS
++	select SKB_EXTENSIONS if NET
+ 	help
+ 	  KCOV exposes kernel code coverage information in a form suitable
+ 	  for coverage-guided fuzzing (randomized testing).
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 0166558d3d647..e8e0f1cec8b04 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -996,7 +996,7 @@ static inline void __free_one_page(struct page *page,
+ 	struct page *buddy;
+ 	bool to_tail;
+ 
+-	max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
++	max_order = min_t(unsigned int, MAX_ORDER - 1, pageblock_order);
+ 
+ 	VM_BUG_ON(!zone_is_initialized(zone));
+ 	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
+@@ -1009,7 +1009,7 @@ static inline void __free_one_page(struct page *page,
+ 	VM_BUG_ON_PAGE(bad_range(zone, page), page);
+ 
+ continue_merging:
+-	while (order < max_order - 1) {
++	while (order < max_order) {
+ 		if (compaction_capture(capc, page, order, migratetype)) {
+ 			__mod_zone_freepage_state(zone, -(1 << order),
+ 								migratetype);
+@@ -1035,7 +1035,7 @@ continue_merging:
+ 		pfn = combined_pfn;
+ 		order++;
+ 	}
+-	if (max_order < MAX_ORDER) {
++	if (order < MAX_ORDER - 1) {
+ 		/* If we are here, it means order is >= pageblock_order.
+ 		 * We want to prevent merge between freepages on isolate
+ 		 * pageblock and normal pageblock. Without this, pageblock
+@@ -1056,7 +1056,7 @@ continue_merging:
+ 						is_migrate_isolate(buddy_mt)))
+ 				goto done_merging;
+ 		}
+-		max_order++;
++		max_order = order + 1;
+ 		goto continue_merging;
+ 	}
+ 
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 00576bae183d3..0c321996c6eb0 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -2720,6 +2720,7 @@ int ip_check_mc_rcu(struct in_device *in_dev, __be32 mc_addr, __be32 src_addr, u
+ 		rv = 1;
+ 	} else if (im) {
+ 		if (src_addr) {
++			spin_lock_bh(&im->lock);
+ 			for (psf = im->sources; psf; psf = psf->sf_next) {
+ 				if (psf->sf_inaddr == src_addr)
+ 					break;
+@@ -2730,6 +2731,7 @@ int ip_check_mc_rcu(struct in_device *in_dev, __be32 mc_addr, __be32 src_addr, u
+ 					im->sfcount[MCAST_EXCLUDE];
+ 			else
+ 				rv = im->sfcount[MCAST_EXCLUDE] != 0;
++			spin_unlock_bh(&im->lock);
+ 		} else
+ 			rv = 1; /* unspecified source; tentatively allow */
+ 	}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index e34d05cc57549..2b5f97e1d40b9 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4115,6 +4115,7 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 	struct nft_table *table;
+ 	struct nft_set *set;
+ 	struct nft_ctx ctx;
++	size_t alloc_size;
+ 	char *name;
+ 	u64 size;
+ 	u64 timeout;
+@@ -4263,8 +4264,10 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 	size = 0;
+ 	if (ops->privsize != NULL)
+ 		size = ops->privsize(nla, &desc);
+-
+-	set = kvzalloc(sizeof(*set) + size + udlen, GFP_KERNEL);
++	alloc_size = sizeof(*set) + size + udlen;
++	if (alloc_size < size)
++		return -ENOMEM;
++	set = kvzalloc(alloc_size, GFP_KERNEL);
+ 	if (!set)
+ 		return -ENOMEM;
+ 
+@@ -4277,15 +4280,7 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 	err = nf_tables_set_alloc_name(&ctx, set, name);
+ 	kfree(name);
+ 	if (err < 0)
+-		goto err_set_alloc_name;
+-
+-	if (nla[NFTA_SET_EXPR]) {
+-		expr = nft_set_elem_expr_alloc(&ctx, set, nla[NFTA_SET_EXPR]);
+-		if (IS_ERR(expr)) {
+-			err = PTR_ERR(expr);
+-			goto err_set_alloc_name;
+-		}
+-	}
++		goto err_set_name;
+ 
+ 	udata = NULL;
+ 	if (udlen) {
+@@ -4296,21 +4291,19 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 	INIT_LIST_HEAD(&set->bindings);
+ 	set->table = table;
+ 	write_pnet(&set->net, net);
+-	set->ops   = ops;
++	set->ops = ops;
+ 	set->ktype = ktype;
+-	set->klen  = desc.klen;
++	set->klen = desc.klen;
+ 	set->dtype = dtype;
+ 	set->objtype = objtype;
+-	set->dlen  = desc.dlen;
+-	set->expr = expr;
++	set->dlen = desc.dlen;
+ 	set->flags = flags;
+-	set->size  = desc.size;
++	set->size = desc.size;
+ 	set->policy = policy;
+-	set->udlen  = udlen;
+-	set->udata  = udata;
++	set->udlen = udlen;
++	set->udata = udata;
+ 	set->timeout = timeout;
+ 	set->gc_int = gc_int;
+-	set->handle = nf_tables_alloc_handle(table);
+ 
+ 	set->field_count = desc.field_count;
+ 	for (i = 0; i < desc.field_count; i++)
+@@ -4320,20 +4313,32 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 	if (err < 0)
+ 		goto err_set_init;
+ 
++	if (nla[NFTA_SET_EXPR]) {
++		expr = nft_set_elem_expr_alloc(&ctx, set, nla[NFTA_SET_EXPR]);
++		if (IS_ERR(expr)) {
++			err = PTR_ERR(expr);
++			goto err_set_expr_alloc;
++		}
++
++		set->expr = expr;
++	}
++
++	set->handle = nf_tables_alloc_handle(table);
++
+ 	err = nft_trans_set_add(&ctx, NFT_MSG_NEWSET, set);
+ 	if (err < 0)
+-		goto err_set_trans;
++		goto err_set_expr_alloc;
+ 
+ 	list_add_tail_rcu(&set->list, &table->sets);
+ 	table->use++;
+ 	return 0;
+ 
+-err_set_trans:
++err_set_expr_alloc:
++	if (set->expr)
++		nft_expr_destroy(&ctx, set->expr);
++
+ 	ops->destroy(set);
+ err_set_init:
+-	if (expr)
+-		nft_expr_destroy(&ctx, expr);
+-err_set_alloc_name:
+ 	kfree(set->name);
+ err_set_name:
+ 	kvfree(set);
+@@ -5145,6 +5150,24 @@ static void nf_tables_set_elem_destroy(const struct nft_ctx *ctx,
+ 	kfree(elem);
+ }
+ 
++static int nft_set_elem_expr_setup(struct nft_ctx *ctx,
++				   const struct nft_set_ext *ext,
++				   struct nft_expr *expr)
++{
++	struct nft_expr *elem_expr = nft_set_ext_expr(ext);
++	int err;
++
++	if (expr == NULL)
++		return 0;
++
++	err = nft_expr_clone(elem_expr, expr);
++	if (err < 0)
++		return -ENOMEM;
++
++	nft_expr_destroy(ctx, expr);
++	return 0;
++}
++
+ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ 			    const struct nlattr *attr, u32 nlmsg_flags)
+ {
+@@ -5347,15 +5370,17 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ 		*nft_set_ext_obj(ext) = obj;
+ 		obj->use++;
+ 	}
+-	if (expr) {
+-		memcpy(nft_set_ext_expr(ext), expr, expr->ops->size);
+-		kfree(expr);
+-		expr = NULL;
+-	}
++
++	err = nft_set_elem_expr_setup(ctx, ext, expr);
++	if (err < 0)
++		goto err_elem_expr;
++	expr = NULL;
+ 
+ 	trans = nft_trans_elem_alloc(ctx, NFT_MSG_NEWSETELEM, set);
+-	if (trans == NULL)
+-		goto err_trans;
++	if (trans == NULL) {
++		err = -ENOMEM;
++		goto err_elem_expr;
++	}
+ 
+ 	ext->genmask = nft_genmask_cur(ctx->net) | NFT_SET_ELEM_BUSY_MASK;
+ 	err = set->ops->insert(ctx->net, set, &elem, &ext2);
+@@ -5399,7 +5424,7 @@ err_set_full:
+ 	set->ops->remove(ctx->net, set, &elem);
+ err_element_clash:
+ 	kfree(trans);
+-err_trans:
++err_elem_expr:
+ 	if (obj)
+ 		obj->use--;
+ 
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index d7083bcb20e8c..858c8d4d659a8 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -604,7 +604,7 @@ static u64 nft_hash_privsize(const struct nlattr * const nla[],
+ 			     const struct nft_set_desc *desc)
+ {
+ 	return sizeof(struct nft_hash) +
+-	       nft_hash_buckets(desc->size) * sizeof(struct hlist_head);
++	       (u64)nft_hash_buckets(desc->size) * sizeof(struct hlist_head);
+ }
+ 
+ static int nft_hash_init(const struct nft_set *set,
+@@ -644,8 +644,8 @@ static bool nft_hash_estimate(const struct nft_set_desc *desc, u32 features,
+ 		return false;
+ 
+ 	est->size   = sizeof(struct nft_hash) +
+-		      nft_hash_buckets(desc->size) * sizeof(struct hlist_head) +
+-		      desc->size * sizeof(struct nft_hash_elem);
++		      (u64)nft_hash_buckets(desc->size) * sizeof(struct hlist_head) +
++		      (u64)desc->size * sizeof(struct nft_hash_elem);
+ 	est->lookup = NFT_SET_CLASS_O_1;
+ 	est->space  = NFT_SET_CLASS_O_N;
+ 
+@@ -662,8 +662,8 @@ static bool nft_hash_fast_estimate(const struct nft_set_desc *desc, u32 features
+ 		return false;
+ 
+ 	est->size   = sizeof(struct nft_hash) +
+-		      nft_hash_buckets(desc->size) * sizeof(struct hlist_head) +
+-		      desc->size * sizeof(struct nft_hash_elem);
++		      (u64)nft_hash_buckets(desc->size) * sizeof(struct hlist_head) +
++		      (u64)desc->size * sizeof(struct nft_hash_elem);
+ 	est->lookup = NFT_SET_CLASS_O_1;
+ 	est->space  = NFT_SET_CLASS_O_N;
+ 
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 33d185b62a767..a45b27a2ed4ec 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1896,6 +1896,7 @@ static const struct registration_quirk registration_quirks[] = {
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ed, 2),	/* Kingston HyperX Cloud Alpha S */
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ea, 2),	/* Kingston HyperX Cloud Flight S */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x1f46, 2),	/* JBL Quantum 600 */
++	REG_QUIRK_ENTRY(0x0ecb, 0x1f47, 2),	/* JBL Quantum 800 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x2039, 2),	/* JBL Quantum 400 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x203c, 2),	/* JBL Quantum 600 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x203e, 2),	/* JBL Quantum 800 */


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-15 12:00 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-09-15 12:00 UTC (permalink / raw
  To: gentoo-commits

commit:     d91b8a390d38f10017cd604bd38bb38ae7753793
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 15 11:59:58 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 15 11:59:58 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d91b8a39

Linux patch 5.10.65

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1064_linux-5.10.65.patch | 9178 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 9182 insertions(+)

diff --git a/0000_README b/0000_README
index 8ddd447..da11f46 100644
--- a/0000_README
+++ b/0000_README
@@ -299,6 +299,10 @@ Patch:  1063_linux-5.10.64.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.64
 
+Patch:  1064_linux-5.10.65.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.65
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1064_linux-5.10.65.patch b/1064_linux-5.10.65.patch
new file mode 100644
index 0000000..756e792
--- /dev/null
+++ b/1064_linux-5.10.65.patch
@@ -0,0 +1,9178 @@
+diff --git a/Documentation/fault-injection/provoke-crashes.rst b/Documentation/fault-injection/provoke-crashes.rst
+index a20ba5d939320..18de17354206a 100644
+--- a/Documentation/fault-injection/provoke-crashes.rst
++++ b/Documentation/fault-injection/provoke-crashes.rst
+@@ -29,7 +29,7 @@ recur_count
+ cpoint_name
+ 	Where in the kernel to trigger the action. It can be
+ 	one of INT_HARDWARE_ENTRY, INT_HW_IRQ_EN, INT_TASKLET_ENTRY,
+-	FS_DEVRW, MEM_SWAPOUT, TIMERADD, SCSI_DISPATCH_CMD,
++	FS_DEVRW, MEM_SWAPOUT, TIMERADD, SCSI_QUEUE_RQ,
+ 	IDE_CORE_CP, or DIRECT
+ 
+ cpoint_type
+diff --git a/Makefile b/Makefile
+index 982aa1876aa04..91eb017f5296d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 64
++SUBLEVEL = 65
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+index 7028e21bdd980..910eacc8ad3bd 100644
+--- a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
++++ b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+@@ -208,12 +208,12 @@
+ 	};
+ 
+ 	pinctrl_hvi3c3_default: hvi3c3_default {
+-		function = "HVI3C3";
++		function = "I3C3";
+ 		groups = "HVI3C3";
+ 	};
+ 
+ 	pinctrl_hvi3c4_default: hvi3c4_default {
+-		function = "HVI3C4";
++		function = "I3C4";
+ 		groups = "HVI3C4";
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/at91-sam9x60ek.dts b/arch/arm/boot/dts/at91-sam9x60ek.dts
+index edca66c232c15..ebbc9b23aef1c 100644
+--- a/arch/arm/boot/dts/at91-sam9x60ek.dts
++++ b/arch/arm/boot/dts/at91-sam9x60ek.dts
+@@ -92,6 +92,8 @@
+ 
+ 	leds {
+ 		compatible = "gpio-leds";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_gpio_leds>;
+ 		status = "okay"; /* Conflict with pwm0. */
+ 
+ 		red {
+@@ -537,6 +539,10 @@
+ 				 AT91_PIOA 19 AT91_PERIPH_A (AT91_PINCTRL_PULL_UP | AT91_PINCTRL_DRIVE_STRENGTH_HI)	/* PA19 DAT2 periph A with pullup */
+ 				 AT91_PIOA 20 AT91_PERIPH_A (AT91_PINCTRL_PULL_UP | AT91_PINCTRL_DRIVE_STRENGTH_HI)>;	/* PA20 DAT3 periph A with pullup */
+ 		};
++		pinctrl_sdmmc0_cd: sdmmc0_cd {
++			atmel,pins =
++				<AT91_PIOA 23 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++		};
+ 	};
+ 
+ 	sdmmc1 {
+@@ -569,6 +575,14 @@
+ 				      AT91_PIOD 16 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+ 		};
+ 	};
++
++	leds {
++		pinctrl_gpio_leds: gpio_leds {
++			atmel,pins = <AT91_PIOB 11 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++				      AT91_PIOB 12 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++				      AT91_PIOB 13 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++		};
++	};
+ }; /* pinctrl */
+ 
+ &pwm0 {
+@@ -580,7 +594,7 @@
+ &sdmmc0 {
+ 	bus-width = <4>;
+ 	pinctrl-names = "default";
+-	pinctrl-0 = <&pinctrl_sdmmc0_default>;
++	pinctrl-0 = <&pinctrl_sdmmc0_default &pinctrl_sdmmc0_cd>;
+ 	status = "okay";
+ 	cd-gpios = <&pioA 23 GPIO_ACTIVE_LOW>;
+ 	disable-wp;
+diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+index 9c55a921263bd..cc55d1684322b 100644
+--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+@@ -57,6 +57,8 @@
+ 			};
+ 
+ 			spi0: spi@f0004000 {
++				pinctrl-names = "default";
++				pinctrl-0 = <&pinctrl_spi0_cs>;
+ 				cs-gpios = <&pioD 13 0>, <0>, <0>, <&pioD 16 0>;
+ 				status = "okay";
+ 			};
+@@ -169,6 +171,8 @@
+ 			};
+ 
+ 			spi1: spi@f8008000 {
++				pinctrl-names = "default";
++				pinctrl-0 = <&pinctrl_spi1_cs>;
+ 				cs-gpios = <&pioC 25 0>;
+ 				status = "okay";
+ 			};
+@@ -248,6 +252,26 @@
+ 							<AT91_PIOE 3 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
+ 							 AT91_PIOE 4 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+ 					};
++
++					pinctrl_gpio_leds: gpio_leds_default {
++						atmel,pins =
++							<AT91_PIOE 23 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++							 AT91_PIOE 24 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++
++					pinctrl_spi0_cs: spi0_cs_default {
++						atmel,pins =
++							<AT91_PIOD 13 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++							 AT91_PIOD 16 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++
++					pinctrl_spi1_cs: spi1_cs_default {
++						atmel,pins = <AT91_PIOC 25 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++
++					pinctrl_vcc_mmc0_reg_gpio: vcc_mmc0_reg_gpio_default {
++						atmel,pins = <AT91_PIOE 2 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -339,6 +363,8 @@
+ 
+ 	vcc_mmc0_reg: fixedregulator_mmc0 {
+ 		compatible = "regulator-fixed";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_vcc_mmc0_reg_gpio>;
+ 		gpio = <&pioE 2 GPIO_ACTIVE_LOW>;
+ 		regulator-name = "mmc0-card-supply";
+ 		regulator-min-microvolt = <3300000>;
+@@ -362,6 +388,9 @@
+ 
+ 	leds {
+ 		compatible = "gpio-leds";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_gpio_leds>;
++		status = "okay";
+ 
+ 		d2 {
+ 			label = "d2";
+diff --git a/arch/arm/boot/dts/at91-sama5d4_xplained.dts b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+index 0b3ad1b580b83..e42dae06b5826 100644
+--- a/arch/arm/boot/dts/at91-sama5d4_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+@@ -90,6 +90,8 @@
+ 			};
+ 
+ 			spi1: spi@fc018000 {
++				pinctrl-names = "default";
++				pinctrl-0 = <&pinctrl_spi0_cs>;
+ 				cs-gpios = <&pioB 21 0>;
+ 				status = "okay";
+ 			};
+@@ -147,6 +149,19 @@
+ 						atmel,pins =
+ 							<AT91_PIOE 1 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_UP_DEGLITCH>;
+ 					};
++					pinctrl_spi0_cs: spi0_cs_default {
++						atmel,pins =
++							<AT91_PIOB 21 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++					pinctrl_gpio_leds: gpio_leds_default {
++						atmel,pins =
++							<AT91_PIOD 30 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++							 AT91_PIOE 15 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++					pinctrl_vcc_mmc1_reg: vcc_mmc1_reg {
++						atmel,pins =
++							<AT91_PIOE 4 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -252,6 +267,8 @@
+ 
+ 	leds {
+ 		compatible = "gpio-leds";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_gpio_leds>;
+ 		status = "okay";
+ 
+ 		d8 {
+@@ -278,6 +295,8 @@
+ 
+ 	vcc_mmc1_reg: fixedregulator_mmc1 {
+ 		compatible = "regulator-fixed";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_vcc_mmc1_reg>;
+ 		gpio = <&pioE 4 GPIO_ACTIVE_LOW>;
+ 		regulator-name = "VDD MCI1";
+ 		regulator-min-microvolt = <3300000>;
+diff --git a/arch/arm/boot/dts/meson8.dtsi b/arch/arm/boot/dts/meson8.dtsi
+index 04688e8abce2c..740a6c816266c 100644
+--- a/arch/arm/boot/dts/meson8.dtsi
++++ b/arch/arm/boot/dts/meson8.dtsi
+@@ -251,8 +251,13 @@
+ 					  "pp2", "ppmmu2", "pp4", "ppmmu4",
+ 					  "pp5", "ppmmu5", "pp6", "ppmmu6";
+ 			resets = <&reset RESET_MALI>;
++
+ 			clocks = <&clkc CLKID_CLK81>, <&clkc CLKID_MALI>;
+ 			clock-names = "bus", "core";
++
++			assigned-clocks = <&clkc CLKID_MALI>;
++			assigned-clock-rates = <318750000>;
++
+ 			operating-points-v2 = <&gpu_opp_table>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/meson8b-ec100.dts b/arch/arm/boot/dts/meson8b-ec100.dts
+index ed06102a40140..c6824d26dbbf6 100644
+--- a/arch/arm/boot/dts/meson8b-ec100.dts
++++ b/arch/arm/boot/dts/meson8b-ec100.dts
+@@ -153,7 +153,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&vcc_5v>;
++		pwm-supply = <&vcc_5v>;
+ 
+ 		pwms = <&pwm_cd 0 1148 0>;
+ 		pwm-dutycycle-range = <100 0>;
+@@ -237,7 +237,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&vcc_5v>;
++		pwm-supply = <&vcc_5v>;
+ 
+ 		pwms = <&pwm_cd 1 1148 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm/boot/dts/meson8b-mxq.dts b/arch/arm/boot/dts/meson8b-mxq.dts
+index 33037ef62d0ad..b2edac1fce6dc 100644
+--- a/arch/arm/boot/dts/meson8b-mxq.dts
++++ b/arch/arm/boot/dts/meson8b-mxq.dts
+@@ -39,6 +39,8 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
++		pwm-supply = <&vcc_5v>;
++
+ 		pwms = <&pwm_cd 0 1148 0>;
+ 		pwm-dutycycle-range = <100 0>;
+ 
+@@ -84,7 +86,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&vcc_5v>;
++		pwm-supply = <&vcc_5v>;
+ 
+ 		pwms = <&pwm_cd 1 1148 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm/boot/dts/meson8b-odroidc1.dts b/arch/arm/boot/dts/meson8b-odroidc1.dts
+index 5963566dbcc9d..73ce1c13da24c 100644
+--- a/arch/arm/boot/dts/meson8b-odroidc1.dts
++++ b/arch/arm/boot/dts/meson8b-odroidc1.dts
+@@ -136,7 +136,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&p5v0>;
++		pwm-supply = <&p5v0>;
+ 
+ 		pwms = <&pwm_cd 0 12218 0>;
+ 		pwm-dutycycle-range = <91 0>;
+@@ -168,7 +168,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&p5v0>;
++		pwm-supply = <&p5v0>;
+ 
+ 		pwms = <&pwm_cd 1 12218 0>;
+ 		pwm-dutycycle-range = <91 0>;
+diff --git a/arch/arm64/boot/dts/exynos/exynos7.dtsi b/arch/arm64/boot/dts/exynos/exynos7.dtsi
+index 7599e1a00ff51..48952a556648a 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7.dtsi
++++ b/arch/arm64/boot/dts/exynos/exynos7.dtsi
+@@ -102,7 +102,7 @@
+ 			#address-cells = <0>;
+ 			interrupt-controller;
+ 			reg =	<0x11001000 0x1000>,
+-				<0x11002000 0x1000>,
++				<0x11002000 0x2000>,
+ 				<0x11004000 0x2000>,
+ 				<0x11006000 0x2000>;
+ 		};
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index bbd34ae12a53b..2e437f20da39b 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -134,6 +134,23 @@
+ 	pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
+ 	status = "okay";
+ 	reset-gpios = <&gpiosb 3 GPIO_ACTIVE_LOW>;
++	/*
++	 * U-Boot port for Turris Mox has a bug which always expects that "ranges" DT property
++	 * contains exactly 2 ranges with 3 (child) address cells, 2 (parent) address cells and
++	 * 2 size cells and also expects that the second range starts at 16 MB offset. If these
++	 * conditions are not met then U-Boot crashes during loading kernel DTB file. PCIe address
++	 * space is 128 MB long, so the best split between MEM and IO is to use fixed 16 MB window
++	 * for IO and the rest 112 MB (64+32+16) for MEM, despite that maximal IO size is just 64 kB.
++	 * This bug is not present in U-Boot ports for other Armada 3700 devices and is fixed in
++	 * U-Boot version 2021.07. See relevant U-Boot commits (the last one contains fix):
++	 * https://source.denx.de/u-boot/u-boot/-/commit/cb2ddb291ee6fcbddd6d8f4ff49089dfe580f5d7
++	 * https://source.denx.de/u-boot/u-boot/-/commit/c64ac3b3185aeb3846297ad7391fc6df8ecd73bf
++	 * https://source.denx.de/u-boot/u-boot/-/commit/4a82fca8e330157081fc132a591ebd99ba02ee33
++	 */
++	#address-cells = <3>;
++	#size-cells = <2>;
++	ranges = <0x81000000 0 0xe8000000   0 0xe8000000   0 0x01000000   /* Port 0 IO */
++		  0x82000000 0 0xe9000000   0 0xe9000000   0 0x07000000>; /* Port 0 MEM */
+ 
+ 	/* enabled by U-Boot if PCIe module is present */
+ 	status = "disabled";
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index 83d2d83f7692b..2a2015a153627 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -487,8 +487,15 @@
+ 			#interrupt-cells = <1>;
+ 			msi-parent = <&pcie0>;
+ 			msi-controller;
+-			ranges = <0x82000000 0 0xe8000000   0 0xe8000000 0 0x1000000 /* Port 0 MEM */
+-				  0x81000000 0 0xe9000000   0 0xe9000000 0 0x10000>; /* Port 0 IO*/
++			/*
++			 * The 128 MiB address range [0xe8000000-0xf0000000] is
++			 * dedicated for PCIe and can be assigned to 8 windows
++			 * with size a power of two. Use one 64 KiB window for
++			 * IO at the end and the remaining seven windows
++			 * (totaling 127 MiB) for MEM.
++			 */
++			ranges = <0x82000000 0 0xe8000000   0 0xe8000000   0 0x07f00000   /* Port 0 MEM */
++				  0x81000000 0 0xefff0000   0 0xefff0000   0 0x00010000>; /* Port 0 IO */
+ 			interrupt-map-mask = <0 0 0 7>;
+ 			interrupt-map = <0 0 0 1 &pcie_intc 0>,
+ 					<0 0 0 2 &pcie_intc 1>,
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
+index e3773b05c403b..3c73dfc430afc 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
+@@ -55,7 +55,8 @@
+ 	pinctrl-0 = <&avb_pins>;
+ 	pinctrl-names = "default";
+ 	phy-handle = <&phy0>;
+-	phy-mode = "rgmii-id";
++	rx-internal-delay-ps = <1800>;
++	tx-internal-delay-ps = <2000>;
+ 	status = "okay";
+ 
+ 	phy0: ethernet-phy@0 {
+diff --git a/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi b/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi
+index b9e46aed53362..dde3a07bc417c 100644
+--- a/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi
++++ b/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi
+@@ -19,7 +19,8 @@
+ 	pinctrl-0 = <&avb_pins>;
+ 	pinctrl-names = "default";
+ 	phy-handle = <&phy0>;
+-	phy-mode = "rgmii-txid";
++	tx-internal-delay-ps = <2000>;
++	rx-internal-delay-ps = <1800>;
+ 	status = "okay";
+ 
+ 	phy0: ethernet-phy@0 {
+diff --git a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+index c58a0846db502..a5ebe574fbace 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+@@ -1131,6 +1131,8 @@
+ 			power-domains = <&sysc R8A774A1_PD_ALWAYS_ON>;
+ 			resets = <&cpg 812>;
+ 			phy-mode = "rgmii";
++			rx-internal-delay-ps = <0>;
++			tx-internal-delay-ps = <0>;
+ 			iommus = <&ipmmu_ds0 16>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+index 9ebf6e58ba31c..20003a41a706b 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+@@ -1004,6 +1004,8 @@
+ 			power-domains = <&sysc R8A774B1_PD_ALWAYS_ON>;
+ 			resets = <&cpg 812>;
+ 			phy-mode = "rgmii";
++			rx-internal-delay-ps = <0>;
++			tx-internal-delay-ps = <0>;
+ 			iommus = <&ipmmu_ds0 16>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+index f27d9b2eb996b..e0e54342cd4c7 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+@@ -960,6 +960,7 @@
+ 			power-domains = <&sysc R8A774C0_PD_ALWAYS_ON>;
+ 			resets = <&cpg 812>;
+ 			phy-mode = "rgmii";
++			rx-internal-delay-ps = <0>;
+ 			iommus = <&ipmmu_ds0 16>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774e1.dtsi b/arch/arm64/boot/dts/renesas/r8a774e1.dtsi
+index 708258696b4f4..2e6c12a46daf5 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774e1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774e1.dtsi
+@@ -1233,6 +1233,8 @@
+ 			power-domains = <&sysc R8A774E1_PD_ALWAYS_ON>;
+ 			resets = <&cpg 812>;
+ 			phy-mode = "rgmii";
++			rx-internal-delay-ps = <0>;
++			tx-internal-delay-ps = <0>;
+ 			iommus = <&ipmmu_ds0 16>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77995-draak.dts b/arch/arm64/boot/dts/renesas/r8a77995-draak.dts
+index 8f471881b7a36..2e4bb7ecd5bde 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77995-draak.dts
++++ b/arch/arm64/boot/dts/renesas/r8a77995-draak.dts
+@@ -277,10 +277,6 @@
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <28 IRQ_TYPE_LEVEL_LOW>;
+ 
+-		/* Depends on LVDS */
+-		max-clock = <135000000>;
+-		min-vrefresh = <50>;
+-
+ 		adi,input-depth = <8>;
+ 		adi,input-colorspace = "rgb";
+ 		adi,input-clock = "1x";
+diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu
+index 694c4fca9f5dc..c17205da47fe3 100644
+--- a/arch/m68k/Kconfig.cpu
++++ b/arch/m68k/Kconfig.cpu
+@@ -25,6 +25,7 @@ config COLDFIRE
+ 	bool "Coldfire CPU family support"
+ 	select ARCH_HAVE_CUSTOM_GPIO_H
+ 	select CPU_HAS_NO_BITFIELDS
++	select CPU_HAS_NO_CAS
+ 	select CPU_HAS_NO_MULDIV64
+ 	select GENERIC_CSUM
+ 	select GPIOLIB
+@@ -38,6 +39,7 @@ config M68000
+ 	bool "MC68000"
+ 	depends on !MMU
+ 	select CPU_HAS_NO_BITFIELDS
++	select CPU_HAS_NO_CAS
+ 	select CPU_HAS_NO_MULDIV64
+ 	select CPU_HAS_NO_UNALIGNED
+ 	select GENERIC_CSUM
+@@ -53,6 +55,7 @@ config M68000
+ config MCPU32
+ 	bool
+ 	select CPU_HAS_NO_BITFIELDS
++	select CPU_HAS_NO_CAS
+ 	select CPU_HAS_NO_UNALIGNED
+ 	select CPU_NO_EFFICIENT_FFS
+ 	help
+@@ -357,7 +360,7 @@ config ADVANCED
+ 
+ config RMW_INSNS
+ 	bool "Use read-modify-write instructions"
+-	depends on ADVANCED
++	depends on ADVANCED && !CPU_HAS_NO_CAS
+ 	help
+ 	  This allows to use certain instructions that work with indivisible
+ 	  read-modify-write bus cycles. While this is faster than the
+@@ -411,6 +414,9 @@ config NODES_SHIFT
+ config CPU_HAS_NO_BITFIELDS
+ 	bool
+ 
++config CPU_HAS_NO_CAS
++	bool
++
+ config CPU_HAS_NO_MULDIV64
+ 	bool
+ 
+diff --git a/arch/m68k/emu/nfeth.c b/arch/m68k/emu/nfeth.c
+index d2875e32abfca..79e55421cfb18 100644
+--- a/arch/m68k/emu/nfeth.c
++++ b/arch/m68k/emu/nfeth.c
+@@ -254,8 +254,8 @@ static void __exit nfeth_cleanup(void)
+ 
+ 	for (i = 0; i < MAX_UNIT; i++) {
+ 		if (nfeth_dev[i]) {
+-			unregister_netdev(nfeth_dev[0]);
+-			free_netdev(nfeth_dev[0]);
++			unregister_netdev(nfeth_dev[i]);
++			free_netdev(nfeth_dev[i]);
+ 		}
+ 	}
+ 	free_irq(nfEtherIRQ, nfeth_interrupt);
+diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
+index 463c24e26000f..171913b9a9250 100644
+--- a/arch/s390/include/asm/kvm_host.h
++++ b/arch/s390/include/asm/kvm_host.h
+@@ -957,6 +957,7 @@ struct kvm_arch{
+ 	atomic64_t cmma_dirty_pages;
+ 	/* subset of available cpu features enabled by user space */
+ 	DECLARE_BITMAP(cpu_feat, KVM_S390_VM_CPU_FEAT_NR_BITS);
++	/* indexed by vcpu_idx */
+ 	DECLARE_BITMAP(idle_mask, KVM_MAX_VCPUS);
+ 	struct kvm_s390_gisa_interrupt gisa_int;
+ 	struct kvm_s390_pv pv;
+diff --git a/arch/s390/kernel/debug.c b/arch/s390/kernel/debug.c
+index b6619ae9a3e0c..89fbfb3b1e01d 100644
+--- a/arch/s390/kernel/debug.c
++++ b/arch/s390/kernel/debug.c
+@@ -24,6 +24,7 @@
+ #include <linux/export.h>
+ #include <linux/init.h>
+ #include <linux/fs.h>
++#include <linux/minmax.h>
+ #include <linux/debugfs.h>
+ 
+ #include <asm/debug.h>
+@@ -92,6 +93,8 @@ static int debug_hex_ascii_format_fn(debug_info_t *id, struct debug_view *view,
+ 				     char *out_buf, const char *in_buf);
+ static int debug_sprintf_format_fn(debug_info_t *id, struct debug_view *view,
+ 				   char *out_buf, debug_sprintf_entry_t *curr_event);
++static void debug_areas_swap(debug_info_t *a, debug_info_t *b);
++static void debug_events_append(debug_info_t *dest, debug_info_t *src);
+ 
+ /* globals */
+ 
+@@ -311,24 +314,6 @@ static debug_info_t *debug_info_create(const char *name, int pages_per_area,
+ 		goto out;
+ 
+ 	rc->mode = mode & ~S_IFMT;
+-
+-	/* create root directory */
+-	rc->debugfs_root_entry = debugfs_create_dir(rc->name,
+-						    debug_debugfs_root_entry);
+-
+-	/* append new element to linked list */
+-	if (!debug_area_first) {
+-		/* first element in list */
+-		debug_area_first = rc;
+-		rc->prev = NULL;
+-	} else {
+-		/* append element to end of list */
+-		debug_area_last->next = rc;
+-		rc->prev = debug_area_last;
+-	}
+-	debug_area_last = rc;
+-	rc->next = NULL;
+-
+ 	refcount_set(&rc->ref_count, 1);
+ out:
+ 	return rc;
+@@ -388,27 +373,10 @@ static void debug_info_get(debug_info_t *db_info)
+  */
+ static void debug_info_put(debug_info_t *db_info)
+ {
+-	int i;
+-
+ 	if (!db_info)
+ 		return;
+-	if (refcount_dec_and_test(&db_info->ref_count)) {
+-		for (i = 0; i < DEBUG_MAX_VIEWS; i++) {
+-			if (!db_info->views[i])
+-				continue;
+-			debugfs_remove(db_info->debugfs_entries[i]);
+-		}
+-		debugfs_remove(db_info->debugfs_root_entry);
+-		if (db_info == debug_area_first)
+-			debug_area_first = db_info->next;
+-		if (db_info == debug_area_last)
+-			debug_area_last = db_info->prev;
+-		if (db_info->prev)
+-			db_info->prev->next = db_info->next;
+-		if (db_info->next)
+-			db_info->next->prev = db_info->prev;
++	if (refcount_dec_and_test(&db_info->ref_count))
+ 		debug_info_free(db_info);
+-	}
+ }
+ 
+ /*
+@@ -632,6 +600,31 @@ static int debug_close(struct inode *inode, struct file *file)
+ 	return 0; /* success */
+ }
+ 
++/* Create debugfs entries and add to internal list. */
++static void _debug_register(debug_info_t *id)
++{
++	/* create root directory */
++	id->debugfs_root_entry = debugfs_create_dir(id->name,
++						    debug_debugfs_root_entry);
++
++	/* append new element to linked list */
++	if (!debug_area_first) {
++		/* first element in list */
++		debug_area_first = id;
++		id->prev = NULL;
++	} else {
++		/* append element to end of list */
++		debug_area_last->next = id;
++		id->prev = debug_area_last;
++	}
++	debug_area_last = id;
++	id->next = NULL;
++
++	debug_register_view(id, &debug_level_view);
++	debug_register_view(id, &debug_flush_view);
++	debug_register_view(id, &debug_pages_view);
++}
++
+ /**
+  * debug_register_mode() - creates and initializes debug area.
+  *
+@@ -661,19 +654,16 @@ debug_info_t *debug_register_mode(const char *name, int pages_per_area,
+ 	if ((uid != 0) || (gid != 0))
+ 		pr_warn("Root becomes the owner of all s390dbf files in sysfs\n");
+ 	BUG_ON(!initialized);
+-	mutex_lock(&debug_mutex);
+ 
+ 	/* create new debug_info */
+ 	rc = debug_info_create(name, pages_per_area, nr_areas, buf_size, mode);
+-	if (!rc)
+-		goto out;
+-	debug_register_view(rc, &debug_level_view);
+-	debug_register_view(rc, &debug_flush_view);
+-	debug_register_view(rc, &debug_pages_view);
+-out:
+-	if (!rc)
++	if (rc) {
++		mutex_lock(&debug_mutex);
++		_debug_register(rc);
++		mutex_unlock(&debug_mutex);
++	} else {
+ 		pr_err("Registering debug feature %s failed\n", name);
+-	mutex_unlock(&debug_mutex);
++	}
+ 	return rc;
+ }
+ EXPORT_SYMBOL(debug_register_mode);
+@@ -702,6 +692,27 @@ debug_info_t *debug_register(const char *name, int pages_per_area,
+ }
+ EXPORT_SYMBOL(debug_register);
+ 
++/* Remove debugfs entries and remove from internal list. */
++static void _debug_unregister(debug_info_t *id)
++{
++	int i;
++
++	for (i = 0; i < DEBUG_MAX_VIEWS; i++) {
++		if (!id->views[i])
++			continue;
++		debugfs_remove(id->debugfs_entries[i]);
++	}
++	debugfs_remove(id->debugfs_root_entry);
++	if (id == debug_area_first)
++		debug_area_first = id->next;
++	if (id == debug_area_last)
++		debug_area_last = id->prev;
++	if (id->prev)
++		id->prev->next = id->next;
++	if (id->next)
++		id->next->prev = id->prev;
++}
++
+ /**
+  * debug_unregister() - give back debug area.
+  *
+@@ -715,8 +726,10 @@ void debug_unregister(debug_info_t *id)
+ 	if (!id)
+ 		return;
+ 	mutex_lock(&debug_mutex);
+-	debug_info_put(id);
++	_debug_unregister(id);
+ 	mutex_unlock(&debug_mutex);
++
++	debug_info_put(id);
+ }
+ EXPORT_SYMBOL(debug_unregister);
+ 
+@@ -726,35 +739,28 @@ EXPORT_SYMBOL(debug_unregister);
+  */
+ static int debug_set_size(debug_info_t *id, int nr_areas, int pages_per_area)
+ {
+-	debug_entry_t ***new_areas;
++	debug_info_t *new_id;
+ 	unsigned long flags;
+-	int rc = 0;
+ 
+ 	if (!id || (nr_areas <= 0) || (pages_per_area < 0))
+ 		return -EINVAL;
+-	if (pages_per_area > 0) {
+-		new_areas = debug_areas_alloc(pages_per_area, nr_areas);
+-		if (!new_areas) {
+-			pr_info("Allocating memory for %i pages failed\n",
+-				pages_per_area);
+-			rc = -ENOMEM;
+-			goto out;
+-		}
+-	} else {
+-		new_areas = NULL;
++
++	new_id = debug_info_alloc("", pages_per_area, nr_areas, id->buf_size,
++				  id->level, ALL_AREAS);
++	if (!new_id) {
++		pr_info("Allocating memory for %i pages failed\n",
++			pages_per_area);
++		return -ENOMEM;
+ 	}
++
+ 	spin_lock_irqsave(&id->lock, flags);
+-	debug_areas_free(id);
+-	id->areas = new_areas;
+-	id->nr_areas = nr_areas;
+-	id->pages_per_area = pages_per_area;
+-	id->active_area = 0;
+-	memset(id->active_entries, 0, sizeof(int)*id->nr_areas);
+-	memset(id->active_pages, 0, sizeof(int)*id->nr_areas);
++	debug_events_append(new_id, id);
++	debug_areas_swap(new_id, id);
++	debug_info_free(new_id);
+ 	spin_unlock_irqrestore(&id->lock, flags);
+ 	pr_info("%s: set new size (%i pages)\n", id->name, pages_per_area);
+-out:
+-	return rc;
++
++	return 0;
+ }
+ 
+ /**
+@@ -821,6 +827,42 @@ static inline debug_entry_t *get_active_entry(debug_info_t *id)
+ 				  id->active_entries[id->active_area]);
+ }
+ 
++/* Swap debug areas of a and b. */
++static void debug_areas_swap(debug_info_t *a, debug_info_t *b)
++{
++	swap(a->nr_areas, b->nr_areas);
++	swap(a->pages_per_area, b->pages_per_area);
++	swap(a->areas, b->areas);
++	swap(a->active_area, b->active_area);
++	swap(a->active_pages, b->active_pages);
++	swap(a->active_entries, b->active_entries);
++}
++
++/* Append all debug events in active area from source to destination log. */
++static void debug_events_append(debug_info_t *dest, debug_info_t *src)
++{
++	debug_entry_t *from, *to, *last;
++
++	if (!src->areas || !dest->areas)
++		return;
++
++	/* Loop over all entries in src, starting with oldest. */
++	from = get_active_entry(src);
++	last = from;
++	do {
++		if (from->clock != 0LL) {
++			to = get_active_entry(dest);
++			memset(to, 0, dest->entry_size);
++			memcpy(to, from, min(src->entry_size,
++					     dest->entry_size));
++			proceed_active_entry(dest);
++		}
++
++		proceed_active_entry(src);
++		from = get_active_entry(src);
++	} while (from != last);
++}
++
+ /*
+  * debug_finish_entry:
+  * - set timestamp, caller address, cpu number etc.
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index 2f177298c663b..2bb9996ff09b4 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -419,13 +419,13 @@ static unsigned long deliverable_irqs(struct kvm_vcpu *vcpu)
+ static void __set_cpu_idle(struct kvm_vcpu *vcpu)
+ {
+ 	kvm_s390_set_cpuflags(vcpu, CPUSTAT_WAIT);
+-	set_bit(vcpu->vcpu_id, vcpu->kvm->arch.idle_mask);
++	set_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+ }
+ 
+ static void __unset_cpu_idle(struct kvm_vcpu *vcpu)
+ {
+ 	kvm_s390_clear_cpuflags(vcpu, CPUSTAT_WAIT);
+-	clear_bit(vcpu->vcpu_id, vcpu->kvm->arch.idle_mask);
++	clear_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+ }
+ 
+ static void __reset_intercept_indicators(struct kvm_vcpu *vcpu)
+@@ -3050,18 +3050,18 @@ int kvm_s390_get_irq_state(struct kvm_vcpu *vcpu, __u8 __user *buf, int len)
+ 
+ static void __airqs_kick_single_vcpu(struct kvm *kvm, u8 deliverable_mask)
+ {
+-	int vcpu_id, online_vcpus = atomic_read(&kvm->online_vcpus);
++	int vcpu_idx, online_vcpus = atomic_read(&kvm->online_vcpus);
+ 	struct kvm_s390_gisa_interrupt *gi = &kvm->arch.gisa_int;
+ 	struct kvm_vcpu *vcpu;
+ 
+-	for_each_set_bit(vcpu_id, kvm->arch.idle_mask, online_vcpus) {
+-		vcpu = kvm_get_vcpu(kvm, vcpu_id);
++	for_each_set_bit(vcpu_idx, kvm->arch.idle_mask, online_vcpus) {
++		vcpu = kvm_get_vcpu(kvm, vcpu_idx);
+ 		if (psw_ioint_disabled(vcpu))
+ 			continue;
+ 		deliverable_mask &= (u8)(vcpu->arch.sie_block->gcr[6] >> 24);
+ 		if (deliverable_mask) {
+ 			/* lately kicked but not yet running */
+-			if (test_and_set_bit(vcpu_id, gi->kicked_mask))
++			if (test_and_set_bit(vcpu_idx, gi->kicked_mask))
+ 				return;
+ 			kvm_s390_vcpu_wakeup(vcpu);
+ 			return;
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index f94b4f78d4dab..7f719b468b440 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -4015,7 +4015,7 @@ static int vcpu_pre_run(struct kvm_vcpu *vcpu)
+ 		kvm_s390_patch_guest_per_regs(vcpu);
+ 	}
+ 
+-	clear_bit(vcpu->vcpu_id, vcpu->kvm->arch.gisa_int.kicked_mask);
++	clear_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.gisa_int.kicked_mask);
+ 
+ 	vcpu->arch.sie_block->icptcode = 0;
+ 	cpuflags = atomic_read(&vcpu->arch.sie_block->cpuflags);
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index 79dcd647b378d..2d134833bca69 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -79,7 +79,7 @@ static inline int is_vcpu_stopped(struct kvm_vcpu *vcpu)
+ 
+ static inline int is_vcpu_idle(struct kvm_vcpu *vcpu)
+ {
+-	return test_bit(vcpu->vcpu_id, vcpu->kvm->arch.idle_mask);
++	return test_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+ }
+ 
+ static inline int kvm_is_ucontrol(struct kvm *kvm)
+diff --git a/arch/s390/mm/kasan_init.c b/arch/s390/mm/kasan_init.c
+index 5646b39c728a9..e9a9b7b616bc1 100644
+--- a/arch/s390/mm/kasan_init.c
++++ b/arch/s390/mm/kasan_init.c
+@@ -108,6 +108,9 @@ static void __init kasan_early_vmemmap_populate(unsigned long address,
+ 		sgt_prot &= ~_SEGMENT_ENTRY_NOEXEC;
+ 	}
+ 
++	/*
++	 * The first 1MB of 1:1 mapping is mapped with 4KB pages
++	 */
+ 	while (address < end) {
+ 		pg_dir = pgd_offset_k(address);
+ 		if (pgd_none(*pg_dir)) {
+@@ -165,30 +168,26 @@ static void __init kasan_early_vmemmap_populate(unsigned long address,
+ 
+ 		pm_dir = pmd_offset(pu_dir, address);
+ 		if (pmd_none(*pm_dir)) {
+-			if (mode == POPULATE_ZERO_SHADOW &&
+-			    IS_ALIGNED(address, PMD_SIZE) &&
++			if (IS_ALIGNED(address, PMD_SIZE) &&
+ 			    end - address >= PMD_SIZE) {
+-				pmd_populate(&init_mm, pm_dir,
+-						kasan_early_shadow_pte);
+-				address = (address + PMD_SIZE) & PMD_MASK;
+-				continue;
+-			}
+-			/* the first megabyte of 1:1 is mapped with 4k pages */
+-			if (has_edat && address && end - address >= PMD_SIZE &&
+-			    mode != POPULATE_ZERO_SHADOW) {
+-				void *page;
+-
+-				if (mode == POPULATE_ONE2ONE) {
+-					page = (void *)address;
+-				} else {
+-					page = kasan_early_alloc_segment();
+-					memset(page, 0, _SEGMENT_SIZE);
++				if (mode == POPULATE_ZERO_SHADOW) {
++					pmd_populate(&init_mm, pm_dir, kasan_early_shadow_pte);
++					address = (address + PMD_SIZE) & PMD_MASK;
++					continue;
++				} else if (has_edat && address) {
++					void *page;
++
++					if (mode == POPULATE_ONE2ONE) {
++						page = (void *)address;
++					} else {
++						page = kasan_early_alloc_segment();
++						memset(page, 0, _SEGMENT_SIZE);
++					}
++					pmd_val(*pm_dir) = __pa(page) | sgt_prot;
++					address = (address + PMD_SIZE) & PMD_MASK;
++					continue;
+ 				}
+-				pmd_val(*pm_dir) = __pa(page) | sgt_prot;
+-				address = (address + PMD_SIZE) & PMD_MASK;
+-				continue;
+ 			}
+-
+ 			pt_dir = kasan_early_pte_alloc();
+ 			pmd_populate(&init_mm, pm_dir, pt_dir);
+ 		} else if (pmd_large(*pm_dir)) {
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index ca1a105e3b5d4..0ddb1fe353dc8 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -659,9 +659,10 @@ int zpci_enable_device(struct zpci_dev *zdev)
+ {
+ 	int rc;
+ 
+-	rc = clp_enable_fh(zdev, ZPCI_NR_DMA_SPACES);
+-	if (rc)
++	if (clp_enable_fh(zdev, ZPCI_NR_DMA_SPACES)) {
++		rc = -EIO;
+ 		goto out;
++	}
+ 
+ 	rc = zpci_dma_init_device(zdev);
+ 	if (rc)
+@@ -684,7 +685,7 @@ int zpci_disable_device(struct zpci_dev *zdev)
+ 	 * The zPCI function may already be disabled by the platform, this is
+ 	 * detected in clp_disable_fh() which becomes a no-op.
+ 	 */
+-	return clp_disable_fh(zdev);
++	return clp_disable_fh(zdev) ? -EIO : 0;
+ }
+ EXPORT_SYMBOL_GPL(zpci_disable_device);
+ 
+diff --git a/arch/s390/pci/pci_clp.c b/arch/s390/pci/pci_clp.c
+index d3331596ddbe1..0a0e8b8293bef 100644
+--- a/arch/s390/pci/pci_clp.c
++++ b/arch/s390/pci/pci_clp.c
+@@ -213,15 +213,19 @@ out:
+ }
+ 
+ static int clp_refresh_fh(u32 fid);
+-/*
+- * Enable/Disable a given PCI function and update its function handle if
+- * necessary
++/**
++ * clp_set_pci_fn() - Execute a command on a PCI function
++ * @zdev: Function that will be affected
++ * @nr_dma_as: DMA address space number
++ * @command: The command code to execute
++ *
++ * Returns: 0 on success, < 0 for Linux errors (e.g. -ENOMEM), and
++ * > 0 for non-success platform responses
+  */
+ static int clp_set_pci_fn(struct zpci_dev *zdev, u8 nr_dma_as, u8 command)
+ {
+ 	struct clp_req_rsp_set_pci *rrb;
+ 	int rc, retries = 100;
+-	u32 fid = zdev->fid;
+ 
+ 	rrb = clp_alloc_block(GFP_KERNEL);
+ 	if (!rrb)
+@@ -245,17 +249,16 @@ static int clp_set_pci_fn(struct zpci_dev *zdev, u8 nr_dma_as, u8 command)
+ 		}
+ 	} while (rrb->response.hdr.rsp == CLP_RC_SETPCIFN_BUSY);
+ 
+-	if (rc || rrb->response.hdr.rsp != CLP_RC_OK) {
+-		zpci_err("Set PCI FN:\n");
+-		zpci_err_clp(rrb->response.hdr.rsp, rc);
+-	}
+-
+ 	if (!rc && rrb->response.hdr.rsp == CLP_RC_OK) {
+ 		zdev->fh = rrb->response.fh;
+-	} else if (!rc && rrb->response.hdr.rsp == CLP_RC_SETPCIFN_ALRDY &&
+-			rrb->response.fh == 0) {
++	} else if (!rc && rrb->response.hdr.rsp == CLP_RC_SETPCIFN_ALRDY) {
+ 		/* Function is already in desired state - update handle */
+-		rc = clp_refresh_fh(fid);
++		rc = clp_refresh_fh(zdev->fid);
++	} else {
++		zpci_err("Set PCI FN:\n");
++		zpci_err_clp(rrb->response.hdr.rsp, rc);
++		if (!rc)
++			rc = rrb->response.hdr.rsp;
+ 	}
+ 	clp_free_block(rrb);
+ 	return rc;
+@@ -301,17 +304,13 @@ int clp_enable_fh(struct zpci_dev *zdev, u8 nr_dma_as)
+ 
+ 	rc = clp_set_pci_fn(zdev, nr_dma_as, CLP_SET_ENABLE_PCI_FN);
+ 	zpci_dbg(3, "ena fid:%x, fh:%x, rc:%d\n", zdev->fid, zdev->fh, rc);
+-	if (rc)
+-		goto out;
+-
+-	if (zpci_use_mio(zdev)) {
++	if (!rc && zpci_use_mio(zdev)) {
+ 		rc = clp_set_pci_fn(zdev, nr_dma_as, CLP_SET_ENABLE_MIO);
+ 		zpci_dbg(3, "ena mio fid:%x, fh:%x, rc:%d\n",
+ 				zdev->fid, zdev->fh, rc);
+ 		if (rc)
+ 			clp_disable_fh(zdev);
+ 	}
+-out:
+ 	return rc;
+ }
+ 
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index 921f47b9bb247..ccc9ee1971e89 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -571,6 +571,7 @@ static struct perf_ibs perf_ibs_op = {
+ 		.start		= perf_ibs_start,
+ 		.stop		= perf_ibs_stop,
+ 		.read		= perf_ibs_read,
++		.capabilities	= PERF_PMU_CAP_NO_EXCLUDE,
+ 	},
+ 	.msr			= MSR_AMD64_IBSOPCTL,
+ 	.config_mask		= IBS_OP_CONFIG_MASK,
+diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
+index fc25c88c7ff29..9b5ff423e9398 100644
+--- a/arch/x86/include/asm/mce.h
++++ b/arch/x86/include/asm/mce.h
+@@ -259,6 +259,7 @@ enum mcp_flags {
+ 	MCP_TIMESTAMP	= BIT(0),	/* log time stamp */
+ 	MCP_UC		= BIT(1),	/* log uncorrected errors */
+ 	MCP_DONTLOG	= BIT(2),	/* only clear, don't log */
++	MCP_QUEUE_LOG	= BIT(3),	/* only queue to genpool */
+ };
+ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b);
+ 
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index b7a27589dfa0b..056d0367864e9 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -817,7 +817,10 @@ log_it:
+ 		if (mca_cfg.dont_log_ce && !mce_usable_address(&m))
+ 			goto clear_it;
+ 
+-		mce_log(&m);
++		if (flags & MCP_QUEUE_LOG)
++			mce_gen_pool_add(&m);
++		else
++			mce_log(&m);
+ 
+ clear_it:
+ 		/*
+@@ -1628,10 +1631,12 @@ static void __mcheck_cpu_init_generic(void)
+ 		m_fl = MCP_DONTLOG;
+ 
+ 	/*
+-	 * Log the machine checks left over from the previous reset.
++	 * Log the machine checks left over from the previous reset. Log them
++	 * only, do not start processing them. That will happen in mcheck_late_init()
++	 * when all consumers have been registered on the notifier chain.
+ 	 */
+ 	bitmap_fill(all_banks, MAX_NR_BANKS);
+-	machine_check_poll(MCP_UC | m_fl, &all_banks);
++	machine_check_poll(MCP_UC | MCP_QUEUE_LOG | m_fl, &all_banks);
+ 
+ 	cr4_set_bits(X86_CR4_MCE);
+ 
+diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
+index 3075624723b27..576f16a505e37 100644
+--- a/arch/x86/kernel/cpu/resctrl/monitor.c
++++ b/arch/x86/kernel/cpu/resctrl/monitor.c
+@@ -241,6 +241,12 @@ static u64 __mon_event_count(u32 rmid, struct rmid_read *rr)
+ 	case QOS_L3_MBM_LOCAL_EVENT_ID:
+ 		m = &rr->d->mbm_local[rmid];
+ 		break;
++	default:
++		/*
++		 * Code would never reach here because an invalid
++		 * event id would fail the __rmid_read.
++		 */
++		return RMID_VAL_ERROR;
+ 	}
+ 
+ 	if (rr->first) {
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 5e25d93ec7d08..060d9a906535c 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -267,12 +267,6 @@ static bool check_mmio_spte(struct kvm_vcpu *vcpu, u64 spte)
+ static gpa_t translate_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
+                                   struct x86_exception *exception)
+ {
+-	/* Check if guest physical address doesn't exceed guest maximum */
+-	if (kvm_vcpu_is_illegal_gpa(vcpu, gpa)) {
+-		exception->error_code |= PFERR_RSVD_MASK;
+-		return UNMAPPED_GVA;
+-	}
+-
+         return gpa;
+ }
+ 
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index e0c7910207c0f..d5f24a2f3e916 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2243,12 +2243,11 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
+ 			 ~PIN_BASED_VMX_PREEMPTION_TIMER);
+ 
+ 	/* Posted interrupts setting is only taken from vmcs12.  */
+-	if (nested_cpu_has_posted_intr(vmcs12)) {
++	vmx->nested.pi_pending = false;
++	if (nested_cpu_has_posted_intr(vmcs12))
+ 		vmx->nested.posted_intr_nv = vmcs12->posted_intr_nv;
+-		vmx->nested.pi_pending = false;
+-	} else {
++	else
+ 		exec_control &= ~PIN_BASED_POSTED_INTR;
+-	}
+ 	pin_controls_set(vmx, exec_control);
+ 
+ 	/*
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index de24d3826788a..fcd8bcb7e0ea9 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6396,6 +6396,9 @@ static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 
++	if (vmx->emulation_required)
++		return;
++
+ 	if (vmx->exit_reason.basic == EXIT_REASON_EXTERNAL_INTERRUPT)
+ 		handle_external_interrupt_irqoff(vcpu);
+ 	else if (vmx->exit_reason.basic == EXIT_REASON_EXCEPTION_NMI)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 812585986bb82..75c59ad27e9fd 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3116,6 +3116,10 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			if (!msr_info->host_initiated) {
+ 				s64 adj = data - vcpu->arch.ia32_tsc_adjust_msr;
+ 				adjust_tsc_offset_guest(vcpu, adj);
++				/* Before back to guest, tsc_timestamp must be adjusted
++				 * as well, otherwise guest's percpu pvclock time could jump.
++				 */
++				kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
+ 			}
+ 			vcpu->arch.ia32_tsc_adjust_msr = data;
+ 		}
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index c91dca641eb46..8ea37328ca84e 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2251,6 +2251,9 @@ static int bfq_request_merge(struct request_queue *q, struct request **req,
+ 	__rq = bfq_find_rq_fmerge(bfqd, bio, q);
+ 	if (__rq && elv_bio_merge_ok(__rq, bio)) {
+ 		*req = __rq;
++
++		if (blk_discard_mergable(__rq))
++			return ELEVATOR_DISCARD_MERGE;
+ 		return ELEVATOR_FRONT_MERGE;
+ 	}
+ 
+diff --git a/block/bio.c b/block/bio.c
+index 9c931df2d9864..0703a208ca248 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -978,6 +978,14 @@ static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
+ 	return 0;
+ }
+ 
++static void bio_put_pages(struct page **pages, size_t size, size_t off)
++{
++	size_t i, nr = DIV_ROUND_UP(size + (off & ~PAGE_MASK), PAGE_SIZE);
++
++	for (i = 0; i < nr; i++)
++		put_page(pages[i]);
++}
++
+ #define PAGE_PTRS_PER_BVEC     (sizeof(struct bio_vec) / sizeof(struct page *))
+ 
+ /**
+@@ -1022,8 +1030,10 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
+ 			if (same_page)
+ 				put_page(page);
+ 		} else {
+-			if (WARN_ON_ONCE(bio_full(bio, len)))
+-                                return -EINVAL;
++			if (WARN_ON_ONCE(bio_full(bio, len))) {
++				bio_put_pages(pages + i, left, offset);
++				return -EINVAL;
++			}
+ 			__bio_add_page(bio, page, len, offset);
+ 		}
+ 		offset = 0;
+@@ -1068,6 +1078,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
+ 		len = min_t(size_t, PAGE_SIZE - offset, left);
+ 		if (bio_add_hw_page(q, bio, page, len, offset,
+ 				max_append_sectors, &same_page) != len) {
++			bio_put_pages(pages + i, left, offset);
+ 			ret = -EINVAL;
+ 			break;
+ 		}
+diff --git a/block/blk-crypto.c b/block/blk-crypto.c
+index 5da43f0973b46..5ffa9aab49de0 100644
+--- a/block/blk-crypto.c
++++ b/block/blk-crypto.c
+@@ -332,7 +332,7 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
+ 	if (mode->keysize == 0)
+ 		return -EINVAL;
+ 
+-	if (dun_bytes == 0 || dun_bytes > BLK_CRYPTO_MAX_IV_SIZE)
++	if (dun_bytes == 0 || dun_bytes > mode->ivsize)
+ 		return -EINVAL;
+ 
+ 	if (!is_power_of_2(data_unit_size))
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index 349cd7d3af815..26f4bcc10de9d 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -341,6 +341,8 @@ void __blk_queue_split(struct bio **bio, unsigned int *nr_segs)
+ 		trace_block_split(q, split, (*bio)->bi_iter.bi_sector);
+ 		submit_bio_noacct(*bio);
+ 		*bio = split;
++
++		blk_throtl_charge_bio_split(*bio);
+ 	}
+ }
+ 
+@@ -700,22 +702,6 @@ static void blk_account_io_merge_request(struct request *req)
+ 	}
+ }
+ 
+-/*
+- * Two cases of handling DISCARD merge:
+- * If max_discard_segments > 1, the driver takes every bio
+- * as a range and send them to controller together. The ranges
+- * needn't to be contiguous.
+- * Otherwise, the bios/requests will be handled as same as
+- * others which should be contiguous.
+- */
+-static inline bool blk_discard_mergable(struct request *req)
+-{
+-	if (req_op(req) == REQ_OP_DISCARD &&
+-	    queue_max_discard_segments(req->q) > 1)
+-		return true;
+-	return false;
+-}
+-
+ static enum elv_merge blk_try_req_merge(struct request *req,
+ 					struct request *next)
+ {
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index b771c42999827..63e9d00a08321 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -178,6 +178,9 @@ struct throtl_grp {
+ 	unsigned int bad_bio_cnt; /* bios exceeding latency threshold */
+ 	unsigned long bio_cnt_reset_time;
+ 
++	atomic_t io_split_cnt[2];
++	atomic_t last_io_split_cnt[2];
++
+ 	struct blkg_rwstat stat_bytes;
+ 	struct blkg_rwstat stat_ios;
+ };
+@@ -771,6 +774,8 @@ static inline void throtl_start_new_slice_with_credit(struct throtl_grp *tg,
+ 	tg->bytes_disp[rw] = 0;
+ 	tg->io_disp[rw] = 0;
+ 
++	atomic_set(&tg->io_split_cnt[rw], 0);
++
+ 	/*
+ 	 * Previous slice has expired. We must have trimmed it after last
+ 	 * bio dispatch. That means since start of last slice, we never used
+@@ -793,6 +798,9 @@ static inline void throtl_start_new_slice(struct throtl_grp *tg, bool rw)
+ 	tg->io_disp[rw] = 0;
+ 	tg->slice_start[rw] = jiffies;
+ 	tg->slice_end[rw] = jiffies + tg->td->throtl_slice;
++
++	atomic_set(&tg->io_split_cnt[rw], 0);
++
+ 	throtl_log(&tg->service_queue,
+ 		   "[%c] new slice start=%lu end=%lu jiffies=%lu",
+ 		   rw == READ ? 'R' : 'W', tg->slice_start[rw],
+@@ -1025,6 +1033,9 @@ static bool tg_may_dispatch(struct throtl_grp *tg, struct bio *bio,
+ 				jiffies + tg->td->throtl_slice);
+ 	}
+ 
++	if (iops_limit != UINT_MAX)
++		tg->io_disp[rw] += atomic_xchg(&tg->io_split_cnt[rw], 0);
++
+ 	if (tg_with_in_bps_limit(tg, bio, bps_limit, &bps_wait) &&
+ 	    tg_with_in_iops_limit(tg, bio, iops_limit, &iops_wait)) {
+ 		if (wait)
+@@ -2046,12 +2057,14 @@ static void throtl_downgrade_check(struct throtl_grp *tg)
+ 	}
+ 
+ 	if (tg->iops[READ][LIMIT_LOW]) {
++		tg->last_io_disp[READ] += atomic_xchg(&tg->last_io_split_cnt[READ], 0);
+ 		iops = tg->last_io_disp[READ] * HZ / elapsed_time;
+ 		if (iops >= tg->iops[READ][LIMIT_LOW])
+ 			tg->last_low_overflow_time[READ] = now;
+ 	}
+ 
+ 	if (tg->iops[WRITE][LIMIT_LOW]) {
++		tg->last_io_disp[WRITE] += atomic_xchg(&tg->last_io_split_cnt[WRITE], 0);
+ 		iops = tg->last_io_disp[WRITE] * HZ / elapsed_time;
+ 		if (iops >= tg->iops[WRITE][LIMIT_LOW])
+ 			tg->last_low_overflow_time[WRITE] = now;
+@@ -2170,6 +2183,25 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
+ }
+ #endif
+ 
++void blk_throtl_charge_bio_split(struct bio *bio)
++{
++	struct blkcg_gq *blkg = bio->bi_blkg;
++	struct throtl_grp *parent = blkg_to_tg(blkg);
++	struct throtl_service_queue *parent_sq;
++	bool rw = bio_data_dir(bio);
++
++	do {
++		if (!parent->has_rules[rw])
++			break;
++
++		atomic_inc(&parent->io_split_cnt[rw]);
++		atomic_inc(&parent->last_io_split_cnt[rw]);
++
++		parent_sq = parent->service_queue.parent_sq;
++		parent = sq_to_tg(parent_sq);
++	} while (parent);
++}
++
+ bool blk_throtl_bio(struct bio *bio)
+ {
+ 	struct request_queue *q = bio->bi_disk->queue;
+diff --git a/block/blk.h b/block/blk.h
+index ecfd523c68d00..f84c83300f6fa 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -299,11 +299,13 @@ int create_task_io_context(struct task_struct *task, gfp_t gfp_mask, int node);
+ extern int blk_throtl_init(struct request_queue *q);
+ extern void blk_throtl_exit(struct request_queue *q);
+ extern void blk_throtl_register_queue(struct request_queue *q);
++extern void blk_throtl_charge_bio_split(struct bio *bio);
+ bool blk_throtl_bio(struct bio *bio);
+ #else /* CONFIG_BLK_DEV_THROTTLING */
+ static inline int blk_throtl_init(struct request_queue *q) { return 0; }
+ static inline void blk_throtl_exit(struct request_queue *q) { }
+ static inline void blk_throtl_register_queue(struct request_queue *q) { }
++static inline void blk_throtl_charge_bio_split(struct bio *bio) { }
+ static inline bool blk_throtl_bio(struct bio *bio) { return false; }
+ #endif /* CONFIG_BLK_DEV_THROTTLING */
+ #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
+diff --git a/block/elevator.c b/block/elevator.c
+index 293c5c81397a1..2a525863d4e92 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -336,6 +336,9 @@ enum elv_merge elv_merge(struct request_queue *q, struct request **req,
+ 	__rq = elv_rqhash_find(q, bio->bi_iter.bi_sector);
+ 	if (__rq && elv_bio_merge_ok(__rq, bio)) {
+ 		*req = __rq;
++
++		if (blk_discard_mergable(__rq))
++			return ELEVATOR_DISCARD_MERGE;
+ 		return ELEVATOR_BACK_MERGE;
+ 	}
+ 
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 2b9635d0dcba8..e4e90761eab35 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -454,6 +454,8 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
+ 
+ 		if (elv_bio_merge_ok(__rq, bio)) {
+ 			*rq = __rq;
++			if (blk_discard_mergable(__rq))
++				return ELEVATOR_DISCARD_MERGE;
+ 			return ELEVATOR_FRONT_MERGE;
+ 		}
+ 	}
+diff --git a/certs/Makefile b/certs/Makefile
+index b6db52ebf0beb..b338799c0b242 100644
+--- a/certs/Makefile
++++ b/certs/Makefile
+@@ -47,11 +47,19 @@ endif
+ redirect_openssl	= 2>&1
+ quiet_redirect_openssl	= 2>&1
+ silent_redirect_openssl = 2>/dev/null
++openssl_available       = $(shell openssl help 2>/dev/null && echo yes)
+ 
+ # We do it this way rather than having a boolean option for enabling an
+ # external private key, because 'make randconfig' might enable such a
+ # boolean option and we unfortunately can't make it depend on !RANDCONFIG.
+ ifeq ($(CONFIG_MODULE_SIG_KEY),"certs/signing_key.pem")
++
++ifeq ($(openssl_available),yes)
++X509TEXT=$(shell openssl x509 -in "certs/signing_key.pem" -text 2>/dev/null)
++
++$(if $(findstring rsaEncryption,$(X509TEXT)),,$(shell rm -f "certs/signing_key.pem"))
++endif
++
+ $(obj)/signing_key.pem: $(obj)/x509.genkey
+ 	@$(kecho) "###"
+ 	@$(kecho) "### Now generating an X.509 key pair to be used for signing modules."
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 61c762961ca8e..44f434acfce08 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5573,7 +5573,7 @@ int ata_host_start(struct ata_host *host)
+ 			have_stop = 1;
+ 	}
+ 
+-	if (host->ops->host_stop)
++	if (host->ops && host->ops->host_stop)
+ 		have_stop = 1;
+ 
+ 	if (have_stop) {
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 85bb8742f0906..81ad4f867f02d 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -543,7 +543,8 @@ re_probe:
+ 			goto probe_failed;
+ 	}
+ 
+-	if (driver_sysfs_add(dev)) {
++	ret = driver_sysfs_add(dev);
++	if (ret) {
+ 		pr_err("%s: driver_sysfs_add(%s) failed\n",
+ 		       __func__, dev_name(dev));
+ 		goto probe_failed;
+@@ -565,15 +566,18 @@ re_probe:
+ 			goto probe_failed;
+ 	}
+ 
+-	if (device_add_groups(dev, drv->dev_groups)) {
++	ret = device_add_groups(dev, drv->dev_groups);
++	if (ret) {
+ 		dev_err(dev, "device_add_groups() failed\n");
+ 		goto dev_groups_failed;
+ 	}
+ 
+-	if (dev_has_sync_state(dev) &&
+-	    device_create_file(dev, &dev_attr_state_synced)) {
+-		dev_err(dev, "state_synced sysfs add failed\n");
+-		goto dev_sysfs_state_synced_failed;
++	if (dev_has_sync_state(dev)) {
++		ret = device_create_file(dev, &dev_attr_state_synced);
++		if (ret) {
++			dev_err(dev, "state_synced sysfs add failed\n");
++			goto dev_sysfs_state_synced_failed;
++		}
+ 	}
+ 
+ 	if (test_remove) {
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index a529235e6bfe9..f41e4e4993d37 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -164,7 +164,7 @@ static inline int fw_state_wait(struct fw_priv *fw_priv)
+ 	return __fw_state_wait_common(fw_priv, MAX_SCHEDULE_TIMEOUT);
+ }
+ 
+-static int fw_cache_piggyback_on_request(const char *name);
++static void fw_cache_piggyback_on_request(struct fw_priv *fw_priv);
+ 
+ static struct fw_priv *__allocate_fw_priv(const char *fw_name,
+ 					  struct firmware_cache *fwc,
+@@ -705,10 +705,8 @@ int assign_fw(struct firmware *fw, struct device *device)
+ 	 * on request firmware.
+ 	 */
+ 	if (!(fw_priv->opt_flags & FW_OPT_NOCACHE) &&
+-	    fw_priv->fwc->state == FW_LOADER_START_CACHE) {
+-		if (fw_cache_piggyback_on_request(fw_priv->fw_name))
+-			kref_get(&fw_priv->ref);
+-	}
++	    fw_priv->fwc->state == FW_LOADER_START_CACHE)
++		fw_cache_piggyback_on_request(fw_priv);
+ 
+ 	/* pass the pages buffer to driver at the last minute */
+ 	fw_set_page_data(fw_priv, fw);
+@@ -1257,11 +1255,11 @@ static int __fw_entry_found(const char *name)
+ 	return 0;
+ }
+ 
+-static int fw_cache_piggyback_on_request(const char *name)
++static void fw_cache_piggyback_on_request(struct fw_priv *fw_priv)
+ {
+-	struct firmware_cache *fwc = &fw_cache;
++	const char *name = fw_priv->fw_name;
++	struct firmware_cache *fwc = fw_priv->fwc;
+ 	struct fw_cache_entry *fce;
+-	int ret = 0;
+ 
+ 	spin_lock(&fwc->name_lock);
+ 	if (__fw_entry_found(name))
+@@ -1269,13 +1267,12 @@ static int fw_cache_piggyback_on_request(const char *name)
+ 
+ 	fce = alloc_fw_cache_entry(name);
+ 	if (fce) {
+-		ret = 1;
+ 		list_add(&fce->list, &fwc->fw_names);
++		kref_get(&fw_priv->ref);
+ 		pr_debug("%s: fw: %s\n", __func__, name);
+ 	}
+ found:
+ 	spin_unlock(&fwc->name_lock);
+-	return ret;
+ }
+ 
+ static void free_fw_cache_entry(struct fw_cache_entry *fce)
+@@ -1506,9 +1503,8 @@ static inline void unregister_fw_pm_ops(void)
+ 	unregister_pm_notifier(&fw_cache.pm_notify);
+ }
+ #else
+-static int fw_cache_piggyback_on_request(const char *name)
++static void fw_cache_piggyback_on_request(struct fw_priv *fw_priv)
+ {
+-	return 0;
+ }
+ static inline int register_fw_pm_ops(void)
+ {
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 5db536ccfcd6b..456a1787e18d0 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1652,7 +1652,7 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+ 			if (ret) {
+ 				dev_err(map->dev,
+ 					"Error in caching of register: %x ret: %d\n",
+-					reg + i, ret);
++					reg + regmap_get_offset(map, i), ret);
+ 				return ret;
+ 			}
+ 		}
+diff --git a/drivers/bcma/main.c b/drivers/bcma/main.c
+index 6535614a7dc13..1df2b5801c3bc 100644
+--- a/drivers/bcma/main.c
++++ b/drivers/bcma/main.c
+@@ -236,6 +236,7 @@ EXPORT_SYMBOL(bcma_core_irq);
+ 
+ void bcma_prepare_core(struct bcma_bus *bus, struct bcma_device *core)
+ {
++	device_initialize(&core->dev);
+ 	core->dev.release = bcma_release_core_dev;
+ 	core->dev.bus = &bcma_bus_type;
+ 	dev_set_name(&core->dev, "bcma%d:%d", bus->num, core->core_index);
+@@ -277,11 +278,10 @@ static void bcma_register_core(struct bcma_bus *bus, struct bcma_device *core)
+ {
+ 	int err;
+ 
+-	err = device_register(&core->dev);
++	err = device_add(&core->dev);
+ 	if (err) {
+ 		bcma_err(bus, "Could not register dev for core 0x%03X\n",
+ 			 core->id.id);
+-		put_device(&core->dev);
+ 		return;
+ 	}
+ 	core->dev_registered = true;
+@@ -372,7 +372,7 @@ void bcma_unregister_cores(struct bcma_bus *bus)
+ 	/* Now noone uses internally-handled cores, we can free them */
+ 	list_for_each_entry_safe(core, tmp, &bus->cores, list) {
+ 		list_del(&core->list);
+-		kfree(core);
++		put_device(&core->dev);
+ 	}
+ }
+ 
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 59c452fff8352..98274ba0701d6 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1759,7 +1759,17 @@ static int nbd_dev_add(int index)
+ 	refcount_set(&nbd->refs, 1);
+ 	INIT_LIST_HEAD(&nbd->list);
+ 	disk->major = NBD_MAJOR;
++
++	/* Too big first_minor can cause duplicate creation of
++	 * sysfs files/links, since first_minor will be truncated to
++	 * byte in __device_add_disk().
++	 */
+ 	disk->first_minor = index << part_shift;
++	if (disk->first_minor > 0xff) {
++		err = -EINVAL;
++		goto out_free_idr;
++	}
++
+ 	disk->fops = &nbd_fops;
+ 	disk->private_data = nbd;
+ 	sprintf(disk->disk_name, "nbd%d", index);
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 994385bf37c0c..3ca7528322f53 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -106,17 +106,12 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+ 	struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
+ 	u16 len;
+-	int sig;
+ 
+ 	if (!ibmvtpm->rtce_buf) {
+ 		dev_err(ibmvtpm->dev, "ibmvtpm device is not ready\n");
+ 		return 0;
+ 	}
+ 
+-	sig = wait_event_interruptible(ibmvtpm->wq, !ibmvtpm->tpm_processing_cmd);
+-	if (sig)
+-		return -EINTR;
+-
+ 	len = ibmvtpm->res_len;
+ 
+ 	if (count < len) {
+@@ -237,7 +232,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ 	 * set the processing flag before the Hcall, since we may get the
+ 	 * result (interrupt) before even being able to check rc.
+ 	 */
+-	ibmvtpm->tpm_processing_cmd = true;
++	ibmvtpm->tpm_processing_cmd = 1;
+ 
+ again:
+ 	rc = ibmvtpm_send_crq(ibmvtpm->vdev,
+@@ -255,7 +250,7 @@ again:
+ 			goto again;
+ 		}
+ 		dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc);
+-		ibmvtpm->tpm_processing_cmd = false;
++		ibmvtpm->tpm_processing_cmd = 0;
+ 	}
+ 
+ 	spin_unlock(&ibmvtpm->rtce_lock);
+@@ -269,7 +264,9 @@ static void tpm_ibmvtpm_cancel(struct tpm_chip *chip)
+ 
+ static u8 tpm_ibmvtpm_status(struct tpm_chip *chip)
+ {
+-	return 0;
++	struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
++
++	return ibmvtpm->tpm_processing_cmd;
+ }
+ 
+ /**
+@@ -459,7 +456,7 @@ static const struct tpm_class_ops tpm_ibmvtpm = {
+ 	.send = tpm_ibmvtpm_send,
+ 	.cancel = tpm_ibmvtpm_cancel,
+ 	.status = tpm_ibmvtpm_status,
+-	.req_complete_mask = 0,
++	.req_complete_mask = 1,
+ 	.req_complete_val = 0,
+ 	.req_canceled = tpm_ibmvtpm_req_canceled,
+ };
+@@ -552,7 +549,7 @@ static void ibmvtpm_crq_process(struct ibmvtpm_crq *crq,
+ 		case VTPM_TPM_COMMAND_RES:
+ 			/* len of the data in rtce buffer */
+ 			ibmvtpm->res_len = be16_to_cpu(crq->len);
+-			ibmvtpm->tpm_processing_cmd = false;
++			ibmvtpm->tpm_processing_cmd = 0;
+ 			wake_up_interruptible(&ibmvtpm->wq);
+ 			return;
+ 		default:
+@@ -690,8 +687,15 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
+ 		goto init_irq_cleanup;
+ 	}
+ 
+-	if (!strcmp(id->compat, "IBM,vtpm20")) {
++
++	if (!strcmp(id->compat, "IBM,vtpm20"))
+ 		chip->flags |= TPM_CHIP_FLAG_TPM2;
++
++	rc = tpm_get_timeouts(chip);
++	if (rc)
++		goto init_irq_cleanup;
++
++	if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+ 		rc = tpm2_get_cc_attrs_tbl(chip);
+ 		if (rc)
+ 			goto init_irq_cleanup;
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.h b/drivers/char/tpm/tpm_ibmvtpm.h
+index b92aa7d3e93e7..51198b137461e 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.h
++++ b/drivers/char/tpm/tpm_ibmvtpm.h
+@@ -41,7 +41,7 @@ struct ibmvtpm_dev {
+ 	wait_queue_head_t wq;
+ 	u16 res_len;
+ 	u32 vtpm_version;
+-	bool tpm_processing_cmd;
++	u8 tpm_processing_cmd;
+ };
+ 
+ #define CRQ_RES_BUF_SIZE	PAGE_SIZE
+diff --git a/drivers/clk/mvebu/kirkwood.c b/drivers/clk/mvebu/kirkwood.c
+index 47680237d0beb..8bc893df47364 100644
+--- a/drivers/clk/mvebu/kirkwood.c
++++ b/drivers/clk/mvebu/kirkwood.c
+@@ -265,6 +265,7 @@ static const char *powersave_parents[] = {
+ static const struct clk_muxing_soc_desc kirkwood_mux_desc[] __initconst = {
+ 	{ "powersave", powersave_parents, ARRAY_SIZE(powersave_parents),
+ 		11, 1, 0 },
++	{ }
+ };
+ 
+ static struct clk *clk_muxing_get_src(
+diff --git a/drivers/clocksource/sh_cmt.c b/drivers/clocksource/sh_cmt.c
+index 760777458a909..2acfcc966bb54 100644
+--- a/drivers/clocksource/sh_cmt.c
++++ b/drivers/clocksource/sh_cmt.c
+@@ -572,7 +572,8 @@ static int sh_cmt_start(struct sh_cmt_channel *ch, unsigned long flag)
+ 	ch->flags |= flag;
+ 
+ 	/* setup timeout if no clockevent */
+-	if ((flag == FLAG_CLOCKSOURCE) && (!(ch->flags & FLAG_CLOCKEVENT)))
++	if (ch->cmt->num_channels == 1 &&
++	    flag == FLAG_CLOCKSOURCE && (!(ch->flags & FLAG_CLOCKEVENT)))
+ 		__sh_cmt_set_next(ch, ch->max_match_value);
+  out:
+ 	raw_spin_unlock_irqrestore(&ch->lock, flags);
+@@ -608,20 +609,25 @@ static struct sh_cmt_channel *cs_to_sh_cmt(struct clocksource *cs)
+ static u64 sh_cmt_clocksource_read(struct clocksource *cs)
+ {
+ 	struct sh_cmt_channel *ch = cs_to_sh_cmt(cs);
+-	unsigned long flags;
+ 	u32 has_wrapped;
+-	u64 value;
+-	u32 raw;
+ 
+-	raw_spin_lock_irqsave(&ch->lock, flags);
+-	value = ch->total_cycles;
+-	raw = sh_cmt_get_counter(ch, &has_wrapped);
++	if (ch->cmt->num_channels == 1) {
++		unsigned long flags;
++		u64 value;
++		u32 raw;
+ 
+-	if (unlikely(has_wrapped))
+-		raw += ch->match_value + 1;
+-	raw_spin_unlock_irqrestore(&ch->lock, flags);
++		raw_spin_lock_irqsave(&ch->lock, flags);
++		value = ch->total_cycles;
++		raw = sh_cmt_get_counter(ch, &has_wrapped);
++
++		if (unlikely(has_wrapped))
++			raw += ch->match_value + 1;
++		raw_spin_unlock_irqrestore(&ch->lock, flags);
++
++		return value + raw;
++	}
+ 
+-	return value + raw;
++	return sh_cmt_get_counter(ch, &has_wrapped);
+ }
+ 
+ static int sh_cmt_clocksource_enable(struct clocksource *cs)
+@@ -684,7 +690,7 @@ static int sh_cmt_register_clocksource(struct sh_cmt_channel *ch,
+ 	cs->disable = sh_cmt_clocksource_disable;
+ 	cs->suspend = sh_cmt_clocksource_suspend;
+ 	cs->resume = sh_cmt_clocksource_resume;
+-	cs->mask = CLOCKSOURCE_MASK(sizeof(u64) * 8);
++	cs->mask = CLOCKSOURCE_MASK(ch->cmt->info->width);
+ 	cs->flags = CLOCK_SOURCE_IS_CONTINUOUS;
+ 
+ 	dev_info(&ch->cmt->pdev->dev, "ch%u: used as clock source\n",
+diff --git a/drivers/counter/104-quad-8.c b/drivers/counter/104-quad-8.c
+index 78766b6ec271a..21bb2bb767a1e 100644
+--- a/drivers/counter/104-quad-8.c
++++ b/drivers/counter/104-quad-8.c
+@@ -1224,12 +1224,13 @@ static ssize_t quad8_count_ceiling_write(struct counter_device *counter,
+ 	case 1:
+ 	case 3:
+ 		quad8_preset_register_set(priv, count->id, ceiling);
+-		break;
++		mutex_unlock(&priv->lock);
++		return len;
+ 	}
+ 
+ 	mutex_unlock(&priv->lock);
+ 
+-	return len;
++	return -EINVAL;
+ }
+ 
+ static ssize_t quad8_count_preset_enable_read(struct counter_device *counter,
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index 909a7eb748e35..7daed8b78ac83 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -169,15 +169,19 @@ static struct dcp *global_sdcp;
+ 
+ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx)
+ {
++	int dma_err;
+ 	struct dcp *sdcp = global_sdcp;
+ 	const int chan = actx->chan;
+ 	uint32_t stat;
+ 	unsigned long ret;
+ 	struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];
+-
+ 	dma_addr_t desc_phys = dma_map_single(sdcp->dev, desc, sizeof(*desc),
+ 					      DMA_TO_DEVICE);
+ 
++	dma_err = dma_mapping_error(sdcp->dev, desc_phys);
++	if (dma_err)
++		return dma_err;
++
+ 	reinit_completion(&sdcp->completion[chan]);
+ 
+ 	/* Clear status register. */
+@@ -215,18 +219,29 @@ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx)
+ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ 			   struct skcipher_request *req, int init)
+ {
++	dma_addr_t key_phys, src_phys, dst_phys;
+ 	struct dcp *sdcp = global_sdcp;
+ 	struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];
+ 	struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+ 	int ret;
+ 
+-	dma_addr_t key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key,
+-					     2 * AES_KEYSIZE_128,
+-					     DMA_TO_DEVICE);
+-	dma_addr_t src_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_in_buf,
+-					     DCP_BUF_SZ, DMA_TO_DEVICE);
+-	dma_addr_t dst_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_out_buf,
+-					     DCP_BUF_SZ, DMA_FROM_DEVICE);
++	key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key,
++				  2 * AES_KEYSIZE_128, DMA_TO_DEVICE);
++	ret = dma_mapping_error(sdcp->dev, key_phys);
++	if (ret)
++		return ret;
++
++	src_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_in_buf,
++				  DCP_BUF_SZ, DMA_TO_DEVICE);
++	ret = dma_mapping_error(sdcp->dev, src_phys);
++	if (ret)
++		goto err_src;
++
++	dst_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_out_buf,
++				  DCP_BUF_SZ, DMA_FROM_DEVICE);
++	ret = dma_mapping_error(sdcp->dev, dst_phys);
++	if (ret)
++		goto err_dst;
+ 
+ 	if (actx->fill % AES_BLOCK_SIZE) {
+ 		dev_err(sdcp->dev, "Invalid block size!\n");
+@@ -264,10 +279,12 @@ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ 	ret = mxs_dcp_start_dma(actx);
+ 
+ aes_done_run:
++	dma_unmap_single(sdcp->dev, dst_phys, DCP_BUF_SZ, DMA_FROM_DEVICE);
++err_dst:
++	dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE);
++err_src:
+ 	dma_unmap_single(sdcp->dev, key_phys, 2 * AES_KEYSIZE_128,
+ 			 DMA_TO_DEVICE);
+-	dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE);
+-	dma_unmap_single(sdcp->dev, dst_phys, DCP_BUF_SZ, DMA_FROM_DEVICE);
+ 
+ 	return ret;
+ }
+@@ -556,6 +573,10 @@ static int mxs_dcp_run_sha(struct ahash_request *req)
+ 	dma_addr_t buf_phys = dma_map_single(sdcp->dev, sdcp->coh->sha_in_buf,
+ 					     DCP_BUF_SZ, DMA_TO_DEVICE);
+ 
++	ret = dma_mapping_error(sdcp->dev, buf_phys);
++	if (ret)
++		return ret;
++
+ 	/* Fill in the DMA descriptor. */
+ 	desc->control0 = MXS_DCP_CONTROL0_DECR_SEMAPHORE |
+ 		    MXS_DCP_CONTROL0_INTERRUPT |
+@@ -588,6 +609,10 @@ static int mxs_dcp_run_sha(struct ahash_request *req)
+ 	if (rctx->fini) {
+ 		digest_phys = dma_map_single(sdcp->dev, sdcp->coh->sha_out_buf,
+ 					     DCP_SHA_PAY_SZ, DMA_FROM_DEVICE);
++		ret = dma_mapping_error(sdcp->dev, digest_phys);
++		if (ret)
++			goto done_run;
++
+ 		desc->control0 |= MXS_DCP_CONTROL0_HASH_TERM;
+ 		desc->payload = digest_phys;
+ 	}
+diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
+index 0dd4c6b157de9..9b968ac4ee7b6 100644
+--- a/drivers/crypto/omap-aes.c
++++ b/drivers/crypto/omap-aes.c
+@@ -1175,9 +1175,9 @@ static int omap_aes_probe(struct platform_device *pdev)
+ 	spin_lock_init(&dd->lock);
+ 
+ 	INIT_LIST_HEAD(&dd->list);
+-	spin_lock(&list_lock);
++	spin_lock_bh(&list_lock);
+ 	list_add_tail(&dd->list, &dev_list);
+-	spin_unlock(&list_lock);
++	spin_unlock_bh(&list_lock);
+ 
+ 	/* Initialize crypto engine */
+ 	dd->engine = crypto_engine_alloc_init(dev, 1);
+@@ -1264,9 +1264,9 @@ static int omap_aes_remove(struct platform_device *pdev)
+ 	if (!dd)
+ 		return -ENODEV;
+ 
+-	spin_lock(&list_lock);
++	spin_lock_bh(&list_lock);
+ 	list_del(&dd->list);
+-	spin_unlock(&list_lock);
++	spin_unlock_bh(&list_lock);
+ 
+ 	for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+ 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) {
+diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c
+index c9d38bcfd1c77..7fdf38e07adf8 100644
+--- a/drivers/crypto/omap-des.c
++++ b/drivers/crypto/omap-des.c
+@@ -1035,9 +1035,9 @@ static int omap_des_probe(struct platform_device *pdev)
+ 
+ 
+ 	INIT_LIST_HEAD(&dd->list);
+-	spin_lock(&list_lock);
++	spin_lock_bh(&list_lock);
+ 	list_add_tail(&dd->list, &dev_list);
+-	spin_unlock(&list_lock);
++	spin_unlock_bh(&list_lock);
+ 
+ 	/* Initialize des crypto engine */
+ 	dd->engine = crypto_engine_alloc_init(dev, 1);
+@@ -1096,9 +1096,9 @@ static int omap_des_remove(struct platform_device *pdev)
+ 	if (!dd)
+ 		return -ENODEV;
+ 
+-	spin_lock(&list_lock);
++	spin_lock_bh(&list_lock);
+ 	list_del(&dd->list);
+-	spin_unlock(&list_lock);
++	spin_unlock_bh(&list_lock);
+ 
+ 	for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+ 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--)
+diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
+index 39d17ed1db2f2..48f78e34cf8dd 100644
+--- a/drivers/crypto/omap-sham.c
++++ b/drivers/crypto/omap-sham.c
+@@ -1735,7 +1735,7 @@ static void omap_sham_done_task(unsigned long data)
+ 		if (test_and_clear_bit(FLAGS_OUTPUT_READY, &dd->flags))
+ 			goto finish;
+ 	} else if (test_bit(FLAGS_DMA_READY, &dd->flags)) {
+-		if (test_and_clear_bit(FLAGS_DMA_ACTIVE, &dd->flags)) {
++		if (test_bit(FLAGS_DMA_ACTIVE, &dd->flags)) {
+ 			omap_sham_update_dma_stop(dd);
+ 			if (dd->err) {
+ 				err = dd->err;
+@@ -2143,9 +2143,9 @@ static int omap_sham_probe(struct platform_device *pdev)
+ 		(rev & dd->pdata->major_mask) >> dd->pdata->major_shift,
+ 		(rev & dd->pdata->minor_mask) >> dd->pdata->minor_shift);
+ 
+-	spin_lock(&sham.lock);
++	spin_lock_bh(&sham.lock);
+ 	list_add_tail(&dd->list, &sham.dev_list);
+-	spin_unlock(&sham.lock);
++	spin_unlock_bh(&sham.lock);
+ 
+ 	dd->engine = crypto_engine_alloc_init(dev, 1);
+ 	if (!dd->engine) {
+@@ -2193,9 +2193,9 @@ err_algs:
+ err_engine_start:
+ 	crypto_engine_exit(dd->engine);
+ err_engine:
+-	spin_lock(&sham.lock);
++	spin_lock_bh(&sham.lock);
+ 	list_del(&dd->list);
+-	spin_unlock(&sham.lock);
++	spin_unlock_bh(&sham.lock);
+ err_pm:
+ 	pm_runtime_disable(dev);
+ 	if (!dd->polling_mode)
+@@ -2214,9 +2214,9 @@ static int omap_sham_remove(struct platform_device *pdev)
+ 	dd = platform_get_drvdata(pdev);
+ 	if (!dd)
+ 		return -ENODEV;
+-	spin_lock(&sham.lock);
++	spin_lock_bh(&sham.lock);
+ 	list_del(&dd->list);
+-	spin_unlock(&sham.lock);
++	spin_unlock_bh(&sham.lock);
+ 	for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+ 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) {
+ 			crypto_unregister_ahash(
+diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
+index d2fedbd7113cb..9709f29b64540 100644
+--- a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
++++ b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
+@@ -79,10 +79,10 @@ void adf_init_hw_data_c3xxxiov(struct adf_hw_device_data *hw_data)
+ 	hw_data->enable_error_correction = adf_vf_void_noop;
+ 	hw_data->init_admin_comms = adf_vf_int_noop;
+ 	hw_data->exit_admin_comms = adf_vf_void_noop;
+-	hw_data->send_admin_init = adf_vf2pf_init;
++	hw_data->send_admin_init = adf_vf2pf_notify_init;
+ 	hw_data->init_arb = adf_vf_int_noop;
+ 	hw_data->exit_arb = adf_vf_void_noop;
+-	hw_data->disable_iov = adf_vf2pf_shutdown;
++	hw_data->disable_iov = adf_vf2pf_notify_shutdown;
+ 	hw_data->get_accel_mask = get_accel_mask;
+ 	hw_data->get_ae_mask = get_ae_mask;
+ 	hw_data->get_num_accels = get_num_accels;
+diff --git a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
+index 29fd3f1091abc..5e6909d6cfc65 100644
+--- a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
++++ b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
+@@ -79,10 +79,10 @@ void adf_init_hw_data_c62xiov(struct adf_hw_device_data *hw_data)
+ 	hw_data->enable_error_correction = adf_vf_void_noop;
+ 	hw_data->init_admin_comms = adf_vf_int_noop;
+ 	hw_data->exit_admin_comms = adf_vf_void_noop;
+-	hw_data->send_admin_init = adf_vf2pf_init;
++	hw_data->send_admin_init = adf_vf2pf_notify_init;
+ 	hw_data->init_arb = adf_vf_int_noop;
+ 	hw_data->exit_arb = adf_vf_void_noop;
+-	hw_data->disable_iov = adf_vf2pf_shutdown;
++	hw_data->disable_iov = adf_vf2pf_notify_shutdown;
+ 	hw_data->get_accel_mask = get_accel_mask;
+ 	hw_data->get_ae_mask = get_ae_mask;
+ 	hw_data->get_num_accels = get_num_accels;
+diff --git a/drivers/crypto/qat/qat_common/adf_common_drv.h b/drivers/crypto/qat/qat_common/adf_common_drv.h
+index f22342f612c1d..469e06c93fafe 100644
+--- a/drivers/crypto/qat/qat_common/adf_common_drv.h
++++ b/drivers/crypto/qat/qat_common/adf_common_drv.h
+@@ -195,8 +195,8 @@ void adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev,
+ void adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
+ void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
+ 
+-int adf_vf2pf_init(struct adf_accel_dev *accel_dev);
+-void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev);
++int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev);
++void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev);
+ int adf_init_pf_wq(void);
+ void adf_exit_pf_wq(void);
+ int adf_init_vf_wq(void);
+@@ -219,12 +219,12 @@ static inline void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev)
+ {
+ }
+ 
+-static inline int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
++static inline int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev)
+ {
+ 	return 0;
+ }
+ 
+-static inline void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
++static inline void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev)
+ {
+ }
+ 
+diff --git a/drivers/crypto/qat/qat_common/adf_init.c b/drivers/crypto/qat/qat_common/adf_init.c
+index 42029153408ee..5c78433d19d42 100644
+--- a/drivers/crypto/qat/qat_common/adf_init.c
++++ b/drivers/crypto/qat/qat_common/adf_init.c
+@@ -61,6 +61,7 @@ int adf_dev_init(struct adf_accel_dev *accel_dev)
+ 	struct service_hndl *service;
+ 	struct list_head *list_itr;
+ 	struct adf_hw_device_data *hw_data = accel_dev->hw_device;
++	int ret;
+ 
+ 	if (!hw_data) {
+ 		dev_err(&GET_DEV(accel_dev),
+@@ -127,9 +128,9 @@ int adf_dev_init(struct adf_accel_dev *accel_dev)
+ 	}
+ 
+ 	hw_data->enable_error_correction(accel_dev);
+-	hw_data->enable_vf2pf_comms(accel_dev);
++	ret = hw_data->enable_vf2pf_comms(accel_dev);
+ 
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(adf_dev_init);
+ 
+diff --git a/drivers/crypto/qat/qat_common/adf_isr.c b/drivers/crypto/qat/qat_common/adf_isr.c
+index da6ef007a6aef..de2f137e44ef8 100644
+--- a/drivers/crypto/qat/qat_common/adf_isr.c
++++ b/drivers/crypto/qat/qat_common/adf_isr.c
+@@ -15,6 +15,8 @@
+ #include "adf_transport_access_macros.h"
+ #include "adf_transport_internal.h"
+ 
++#define ADF_MAX_NUM_VFS	32
++
+ static int adf_enable_msix(struct adf_accel_dev *accel_dev)
+ {
+ 	struct adf_accel_pci *pci_dev_info = &accel_dev->accel_pci_dev;
+@@ -67,7 +69,7 @@ static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr)
+ 		struct adf_bar *pmisc =
+ 			&GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)];
+ 		void __iomem *pmisc_bar_addr = pmisc->virt_addr;
+-		u32 vf_mask;
++		unsigned long vf_mask;
+ 
+ 		/* Get the interrupt sources triggered by VFs */
+ 		vf_mask = ((ADF_CSR_RD(pmisc_bar_addr, ADF_ERRSOU5) &
+@@ -88,8 +90,7 @@ static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr)
+ 			 * unless the VF is malicious and is attempting to
+ 			 * flood the host OS with VF2PF interrupts.
+ 			 */
+-			for_each_set_bit(i, (const unsigned long *)&vf_mask,
+-					 (sizeof(vf_mask) * BITS_PER_BYTE)) {
++			for_each_set_bit(i, &vf_mask, ADF_MAX_NUM_VFS) {
+ 				vf_info = accel_dev->pf.vf_info + i;
+ 
+ 				if (!__ratelimit(&vf_info->vf2pf_ratelimit)) {
+diff --git a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+index 8b090b7ae8c6b..e829c6aaf16fd 100644
+--- a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
++++ b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+@@ -186,7 +186,6 @@ int adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
+ 
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(adf_iov_putmsg);
+ 
+ void adf_vf2pf_req_hndl(struct adf_accel_vf_info *vf_info)
+ {
+@@ -316,6 +315,8 @@ static int adf_vf2pf_request_version(struct adf_accel_dev *accel_dev)
+ 	msg |= ADF_PFVF_COMPATIBILITY_VERSION << ADF_VF2PF_COMPAT_VER_REQ_SHIFT;
+ 	BUILD_BUG_ON(ADF_PFVF_COMPATIBILITY_VERSION > 255);
+ 
++	reinit_completion(&accel_dev->vf.iov_msg_completion);
++
+ 	/* Send request from VF to PF */
+ 	ret = adf_iov_putmsg(accel_dev, msg, 0);
+ 	if (ret) {
+diff --git a/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c b/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
+index 2c98fb63f7b72..54b738da829d8 100644
+--- a/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
++++ b/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
+@@ -5,14 +5,14 @@
+ #include "adf_pf2vf_msg.h"
+ 
+ /**
+- * adf_vf2pf_init() - send init msg to PF
++ * adf_vf2pf_notify_init() - send init msg to PF
+  * @accel_dev:  Pointer to acceleration VF device.
+  *
+  * Function sends an init messge from the VF to a PF
+  *
+  * Return: 0 on success, error code otherwise.
+  */
+-int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
++int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev)
+ {
+ 	u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
+ 		(ADF_VF2PF_MSGTYPE_INIT << ADF_VF2PF_MSGTYPE_SHIFT));
+@@ -25,17 +25,17 @@ int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
+ 	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(adf_vf2pf_init);
++EXPORT_SYMBOL_GPL(adf_vf2pf_notify_init);
+ 
+ /**
+- * adf_vf2pf_shutdown() - send shutdown msg to PF
++ * adf_vf2pf_notify_shutdown() - send shutdown msg to PF
+  * @accel_dev:  Pointer to acceleration VF device.
+  *
+  * Function sends a shutdown messge from the VF to a PF
+  *
+  * Return: void
+  */
+-void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
++void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev)
+ {
+ 	u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
+ 	    (ADF_VF2PF_MSGTYPE_SHUTDOWN << ADF_VF2PF_MSGTYPE_SHIFT));
+@@ -45,4 +45,4 @@ void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
+ 			dev_err(&GET_DEV(accel_dev),
+ 				"Failed to send Shutdown event to PF\n");
+ }
+-EXPORT_SYMBOL_GPL(adf_vf2pf_shutdown);
++EXPORT_SYMBOL_GPL(adf_vf2pf_notify_shutdown);
+diff --git a/drivers/crypto/qat/qat_common/adf_vf_isr.c b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+index 31a36288623a2..024401ec9d1ae 100644
+--- a/drivers/crypto/qat/qat_common/adf_vf_isr.c
++++ b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+@@ -159,6 +159,7 @@ static irqreturn_t adf_isr(int irq, void *privdata)
+ 	struct adf_bar *pmisc =
+ 			&GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)];
+ 	void __iomem *pmisc_bar_addr = pmisc->virt_addr;
++	bool handled = false;
+ 	u32 v_int;
+ 
+ 	/* Read VF INT source CSR to determine the source of VF interrupt */
+@@ -171,7 +172,7 @@ static irqreturn_t adf_isr(int irq, void *privdata)
+ 
+ 		/* Schedule tasklet to handle interrupt BH */
+ 		tasklet_hi_schedule(&accel_dev->vf.pf2vf_bh_tasklet);
+-		return IRQ_HANDLED;
++		handled = true;
+ 	}
+ 
+ 	/* Check bundle interrupt */
+@@ -183,10 +184,10 @@ static irqreturn_t adf_isr(int irq, void *privdata)
+ 		WRITE_CSR_INT_FLAG_AND_COL(bank->csr_addr, bank->bank_number,
+ 					   0);
+ 		tasklet_hi_schedule(&bank->resp_handler);
+-		return IRQ_HANDLED;
++		handled = true;
+ 	}
+ 
+-	return IRQ_NONE;
++	return handled ? IRQ_HANDLED : IRQ_NONE;
+ }
+ 
+ static int adf_request_msi_irq(struct adf_accel_dev *accel_dev)
+diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
+index 5246f0524ca34..fc4cf141b1dea 100644
+--- a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
++++ b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
+@@ -79,10 +79,10 @@ void adf_init_hw_data_dh895xcciov(struct adf_hw_device_data *hw_data)
+ 	hw_data->enable_error_correction = adf_vf_void_noop;
+ 	hw_data->init_admin_comms = adf_vf_int_noop;
+ 	hw_data->exit_admin_comms = adf_vf_void_noop;
+-	hw_data->send_admin_init = adf_vf2pf_init;
++	hw_data->send_admin_init = adf_vf2pf_notify_init;
+ 	hw_data->init_arb = adf_vf_int_noop;
+ 	hw_data->exit_arb = adf_vf_void_noop;
+-	hw_data->disable_iov = adf_vf2pf_shutdown;
++	hw_data->disable_iov = adf_vf2pf_notify_shutdown;
+ 	hw_data->get_accel_mask = get_accel_mask;
+ 	hw_data->get_ae_mask = get_ae_mask;
+ 	hw_data->get_num_accels = get_num_accels;
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index 4912a7b883801..3a7362f968c9f 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -26,8 +26,8 @@
+ 	pci_read_config_dword((d)->uracu, 0xd8 + (i) * 4, &(reg))
+ #define I10NM_GET_DIMMMTR(m, i, j)	\
+ 	readl((m)->mbase + 0x2080c + (i) * 0x4000 + (j) * 4)
+-#define I10NM_GET_MCDDRTCFG(m, i, j)	\
+-	readl((m)->mbase + 0x20970 + (i) * 0x4000 + (j) * 4)
++#define I10NM_GET_MCDDRTCFG(m, i)	\
++	readl((m)->mbase + 0x20970 + (i) * 0x4000)
+ #define I10NM_GET_MCMTR(m, i)		\
+ 	readl((m)->mbase + 0x20ef8 + (i) * 0x4000)
+ 
+@@ -170,10 +170,10 @@ static int i10nm_get_dimm_config(struct mem_ctl_info *mci)
+ 			continue;
+ 
+ 		ndimms = 0;
++		mcddrtcfg = I10NM_GET_MCDDRTCFG(imc, i);
+ 		for (j = 0; j < I10NM_NUM_DIMMS; j++) {
+ 			dimm = edac_get_dimm(mci, i, j, 0);
+ 			mtr = I10NM_GET_DIMMMTR(imc, i, j);
+-			mcddrtcfg = I10NM_GET_MCDDRTCFG(imc, i, j);
+ 			edac_dbg(1, "dimmmtr 0x%x mcddrtcfg 0x%x (mc%d ch%d dimm%d)\n",
+ 				 mtr, mcddrtcfg, imc->mc, i, j);
+ 
+diff --git a/drivers/edac/mce_amd.c b/drivers/edac/mce_amd.c
+index 6c474fbef32af..b6d4ae84a9a5b 100644
+--- a/drivers/edac/mce_amd.c
++++ b/drivers/edac/mce_amd.c
+@@ -1176,6 +1176,9 @@ static int __init mce_amd_init(void)
+ 	    c->x86_vendor != X86_VENDOR_HYGON)
+ 		return -ENODEV;
+ 
++	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return -ENODEV;
++
+ 	if (boot_cpu_has(X86_FEATURE_SMCA)) {
+ 		xec_mask = 0x3f;
+ 		goto out;
+diff --git a/drivers/firmware/raspberrypi.c b/drivers/firmware/raspberrypi.c
+index 2371d08bdd17a..1d965c1252cac 100644
+--- a/drivers/firmware/raspberrypi.c
++++ b/drivers/firmware/raspberrypi.c
+@@ -7,6 +7,7 @@
+  */
+ 
+ #include <linux/dma-mapping.h>
++#include <linux/kref.h>
+ #include <linux/mailbox_client.h>
+ #include <linux/module.h>
+ #include <linux/of_platform.h>
+@@ -27,6 +28,8 @@ struct rpi_firmware {
+ 	struct mbox_chan *chan; /* The property channel. */
+ 	struct completion c;
+ 	u32 enabled;
++
++	struct kref consumers;
+ };
+ 
+ static DEFINE_MUTEX(transaction_lock);
+@@ -225,12 +228,31 @@ static void rpi_register_clk_driver(struct device *dev)
+ 						-1, NULL, 0);
+ }
+ 
++static void rpi_firmware_delete(struct kref *kref)
++{
++	struct rpi_firmware *fw = container_of(kref, struct rpi_firmware,
++					       consumers);
++
++	mbox_free_channel(fw->chan);
++	kfree(fw);
++}
++
++void rpi_firmware_put(struct rpi_firmware *fw)
++{
++	kref_put(&fw->consumers, rpi_firmware_delete);
++}
++EXPORT_SYMBOL_GPL(rpi_firmware_put);
++
+ static int rpi_firmware_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct rpi_firmware *fw;
+ 
+-	fw = devm_kzalloc(dev, sizeof(*fw), GFP_KERNEL);
++	/*
++	 * Memory will be freed by rpi_firmware_delete() once all users have
++	 * released their firmware handles. Don't use devm_kzalloc() here.
++	 */
++	fw = kzalloc(sizeof(*fw), GFP_KERNEL);
+ 	if (!fw)
+ 		return -ENOMEM;
+ 
+@@ -247,6 +269,7 @@ static int rpi_firmware_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	init_completion(&fw->c);
++	kref_init(&fw->consumers);
+ 
+ 	platform_set_drvdata(pdev, fw);
+ 
+@@ -275,7 +298,8 @@ static int rpi_firmware_remove(struct platform_device *pdev)
+ 	rpi_hwmon = NULL;
+ 	platform_device_unregister(rpi_clk);
+ 	rpi_clk = NULL;
+-	mbox_free_channel(fw->chan);
++
++	rpi_firmware_put(fw);
+ 
+ 	return 0;
+ }
+@@ -284,16 +308,32 @@ static int rpi_firmware_remove(struct platform_device *pdev)
+  * rpi_firmware_get - Get pointer to rpi_firmware structure.
+  * @firmware_node:    Pointer to the firmware Device Tree node.
+  *
++ * The reference to rpi_firmware has to be released with rpi_firmware_put().
++ *
+  * Returns NULL is the firmware device is not ready.
+  */
+ struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node)
+ {
+ 	struct platform_device *pdev = of_find_device_by_node(firmware_node);
++	struct rpi_firmware *fw;
+ 
+ 	if (!pdev)
+ 		return NULL;
+ 
+-	return platform_get_drvdata(pdev);
++	fw = platform_get_drvdata(pdev);
++	if (!fw)
++		goto err_put_device;
++
++	if (!kref_get_unless_zero(&fw->consumers))
++		goto err_put_device;
++
++	put_device(&pdev->dev);
++
++	return fw;
++
++err_put_device:
++	put_device(&pdev->dev);
++	return NULL;
+ }
+ EXPORT_SYMBOL_GPL(rpi_firmware_get);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
+index d3e51d361179e..eb68b0f1da825 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
+@@ -160,17 +160,28 @@ static int acp_poweron(struct generic_pm_domain *genpd)
+ 	return 0;
+ }
+ 
+-static struct device *get_mfd_cell_dev(const char *device_name, int r)
++static int acp_genpd_add_device(struct device *dev, void *data)
+ {
+-	char auto_dev_name[25];
+-	struct device *dev;
++	struct generic_pm_domain *gpd = data;
++	int ret;
+ 
+-	snprintf(auto_dev_name, sizeof(auto_dev_name),
+-		 "%s.%d.auto", device_name, r);
+-	dev = bus_find_device_by_name(&platform_bus_type, NULL, auto_dev_name);
+-	dev_info(dev, "device %s added to pm domain\n", auto_dev_name);
++	ret = pm_genpd_add_device(gpd, dev);
++	if (ret)
++		dev_err(dev, "Failed to add dev to genpd %d\n", ret);
+ 
+-	return dev;
++	return ret;
++}
++
++static int acp_genpd_remove_device(struct device *dev, void *data)
++{
++	int ret;
++
++	ret = pm_genpd_remove_device(dev);
++	if (ret)
++		dev_err(dev, "Failed to remove dev from genpd %d\n", ret);
++
++	/* Continue to remove */
++	return 0;
+ }
+ 
+ /**
+@@ -181,11 +192,10 @@ static struct device *get_mfd_cell_dev(const char *device_name, int r)
+  */
+ static int acp_hw_init(void *handle)
+ {
+-	int r, i;
++	int r;
+ 	uint64_t acp_base;
+ 	u32 val = 0;
+ 	u32 count = 0;
+-	struct device *dev;
+ 	struct i2s_platform_data *i2s_pdata = NULL;
+ 
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+@@ -341,15 +351,10 @@ static int acp_hw_init(void *handle)
+ 	if (r)
+ 		goto failure;
+ 
+-	for (i = 0; i < ACP_DEVS ; i++) {
+-		dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
+-		r = pm_genpd_add_device(&adev->acp.acp_genpd->gpd, dev);
+-		if (r) {
+-			dev_err(dev, "Failed to add dev to genpd\n");
+-			goto failure;
+-		}
+-	}
+-
++	r = device_for_each_child(adev->acp.parent, &adev->acp.acp_genpd->gpd,
++				  acp_genpd_add_device);
++	if (r)
++		goto failure;
+ 
+ 	/* Assert Soft reset of ACP */
+ 	val = cgs_read_register(adev->acp.cgs_device, mmACP_SOFT_RESET);
+@@ -410,10 +415,8 @@ failure:
+  */
+ static int acp_hw_fini(void *handle)
+ {
+-	int i, ret;
+ 	u32 val = 0;
+ 	u32 count = 0;
+-	struct device *dev;
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
+ 	/* return early if no ACP */
+@@ -458,13 +461,8 @@ static int acp_hw_fini(void *handle)
+ 		udelay(100);
+ 	}
+ 
+-	for (i = 0; i < ACP_DEVS ; i++) {
+-		dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
+-		ret = pm_genpd_remove_device(dev);
+-		/* If removal fails, dont giveup and try rest */
+-		if (ret)
+-			dev_err(dev, "remove dev from genpd failed\n");
+-	}
++	device_for_each_child(adev->acp.parent, NULL,
++			      acp_genpd_remove_device);
+ 
+ 	mfd_remove_devices(adev->acp.parent);
+ 	kfree(adev->acp.acp_res);
+diff --git a/drivers/gpu/drm/drm_of.c b/drivers/gpu/drm/drm_of.c
+index ca04c34e82518..997b8827fed27 100644
+--- a/drivers/gpu/drm/drm_of.c
++++ b/drivers/gpu/drm/drm_of.c
+@@ -315,7 +315,7 @@ static int drm_of_lvds_get_remote_pixels_type(
+ 
+ 		remote_port = of_graph_get_remote_port(endpoint);
+ 		if (!remote_port) {
+-			of_node_put(remote_port);
++			of_node_put(endpoint);
+ 			return -EPIPE;
+ 		}
+ 
+@@ -331,8 +331,10 @@ static int drm_of_lvds_get_remote_pixels_type(
+ 		 * configurations by passing the endpoints explicitly to
+ 		 * drm_of_lvds_get_dual_link_pixel_order().
+ 		 */
+-		if (!current_pt || pixels_type != current_pt)
++		if (!current_pt || pixels_type != current_pt) {
++			of_node_put(endpoint);
+ 			return -EINVAL;
++		}
+ 	}
+ 
+ 	return pixels_type;
+diff --git a/drivers/gpu/drm/gma500/oaktrail_lvds.c b/drivers/gpu/drm/gma500/oaktrail_lvds.c
+index 2828360153d16..30b949d6856c0 100644
+--- a/drivers/gpu/drm/gma500/oaktrail_lvds.c
++++ b/drivers/gpu/drm/gma500/oaktrail_lvds.c
+@@ -117,7 +117,7 @@ static void oaktrail_lvds_mode_set(struct drm_encoder *encoder,
+ 			continue;
+ 	}
+ 
+-	if (!connector) {
++	if (list_entry_is_head(connector, &mode_config->connector_list, head)) {
+ 		DRM_ERROR("Couldn't find connector when setting mode");
+ 		gma_power_end(dev);
+ 		return;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+index 758c355b4fd80..f8c7100a8acb6 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+@@ -340,10 +340,12 @@ static void dpu_hw_ctl_clear_all_blendstages(struct dpu_hw_ctl *ctx)
+ 	int i;
+ 
+ 	for (i = 0; i < ctx->mixer_count; i++) {
+-		DPU_REG_WRITE(c, CTL_LAYER(LM_0 + i), 0);
+-		DPU_REG_WRITE(c, CTL_LAYER_EXT(LM_0 + i), 0);
+-		DPU_REG_WRITE(c, CTL_LAYER_EXT2(LM_0 + i), 0);
+-		DPU_REG_WRITE(c, CTL_LAYER_EXT3(LM_0 + i), 0);
++		enum dpu_lm mixer_id = ctx->mixer_hw_caps[i].id;
++
++		DPU_REG_WRITE(c, CTL_LAYER(mixer_id), 0);
++		DPU_REG_WRITE(c, CTL_LAYER_EXT(mixer_id), 0);
++		DPU_REG_WRITE(c, CTL_LAYER_EXT2(mixer_id), 0);
++		DPU_REG_WRITE(c, CTL_LAYER_EXT3(mixer_id), 0);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+index 2f75e39052022..c1c152e39918b 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+@@ -19,30 +19,12 @@ static int mdp4_hw_init(struct msm_kms *kms)
+ {
+ 	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
+ 	struct drm_device *dev = mdp4_kms->dev;
+-	uint32_t version, major, minor, dmap_cfg, vg_cfg;
++	u32 dmap_cfg, vg_cfg;
+ 	unsigned long clk;
+ 	int ret = 0;
+ 
+ 	pm_runtime_get_sync(dev->dev);
+ 
+-	mdp4_enable(mdp4_kms);
+-	version = mdp4_read(mdp4_kms, REG_MDP4_VERSION);
+-	mdp4_disable(mdp4_kms);
+-
+-	major = FIELD(version, MDP4_VERSION_MAJOR);
+-	minor = FIELD(version, MDP4_VERSION_MINOR);
+-
+-	DBG("found MDP4 version v%d.%d", major, minor);
+-
+-	if (major != 4) {
+-		DRM_DEV_ERROR(dev->dev, "unexpected MDP version: v%d.%d\n",
+-				major, minor);
+-		ret = -ENXIO;
+-		goto out;
+-	}
+-
+-	mdp4_kms->rev = minor;
+-
+ 	if (mdp4_kms->rev > 1) {
+ 		mdp4_write(mdp4_kms, REG_MDP4_CS_CONTROLLER0, 0x0707ffff);
+ 		mdp4_write(mdp4_kms, REG_MDP4_CS_CONTROLLER1, 0x03073f3f);
+@@ -88,7 +70,6 @@ static int mdp4_hw_init(struct msm_kms *kms)
+ 	if (mdp4_kms->rev > 1)
+ 		mdp4_write(mdp4_kms, REG_MDP4_RESET_STATUS, 1);
+ 
+-out:
+ 	pm_runtime_put_sync(dev->dev);
+ 
+ 	return ret;
+@@ -409,6 +390,22 @@ fail:
+ 	return ret;
+ }
+ 
++static void read_mdp_hw_revision(struct mdp4_kms *mdp4_kms,
++				 u32 *major, u32 *minor)
++{
++	struct drm_device *dev = mdp4_kms->dev;
++	u32 version;
++
++	mdp4_enable(mdp4_kms);
++	version = mdp4_read(mdp4_kms, REG_MDP4_VERSION);
++	mdp4_disable(mdp4_kms);
++
++	*major = FIELD(version, MDP4_VERSION_MAJOR);
++	*minor = FIELD(version, MDP4_VERSION_MINOR);
++
++	DRM_DEV_INFO(dev->dev, "MDP4 version v%d.%d", *major, *minor);
++}
++
+ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev->dev);
+@@ -417,6 +414,7 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ 	struct msm_kms *kms = NULL;
+ 	struct msm_gem_address_space *aspace;
+ 	int irq, ret;
++	u32 major, minor;
+ 
+ 	mdp4_kms = kzalloc(sizeof(*mdp4_kms), GFP_KERNEL);
+ 	if (!mdp4_kms) {
+@@ -473,15 +471,6 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ 	if (IS_ERR(mdp4_kms->pclk))
+ 		mdp4_kms->pclk = NULL;
+ 
+-	if (mdp4_kms->rev >= 2) {
+-		mdp4_kms->lut_clk = devm_clk_get(&pdev->dev, "lut_clk");
+-		if (IS_ERR(mdp4_kms->lut_clk)) {
+-			DRM_DEV_ERROR(dev->dev, "failed to get lut_clk\n");
+-			ret = PTR_ERR(mdp4_kms->lut_clk);
+-			goto fail;
+-		}
+-	}
+-
+ 	mdp4_kms->axi_clk = devm_clk_get(&pdev->dev, "bus_clk");
+ 	if (IS_ERR(mdp4_kms->axi_clk)) {
+ 		DRM_DEV_ERROR(dev->dev, "failed to get axi_clk\n");
+@@ -490,8 +479,27 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ 	}
+ 
+ 	clk_set_rate(mdp4_kms->clk, config->max_clk);
+-	if (mdp4_kms->lut_clk)
++
++	read_mdp_hw_revision(mdp4_kms, &major, &minor);
++
++	if (major != 4) {
++		DRM_DEV_ERROR(dev->dev, "unexpected MDP version: v%d.%d\n",
++			      major, minor);
++		ret = -ENXIO;
++		goto fail;
++	}
++
++	mdp4_kms->rev = minor;
++
++	if (mdp4_kms->rev >= 2) {
++		mdp4_kms->lut_clk = devm_clk_get(&pdev->dev, "lut_clk");
++		if (IS_ERR(mdp4_kms->lut_clk)) {
++			DRM_DEV_ERROR(dev->dev, "failed to get lut_clk\n");
++			ret = PTR_ERR(mdp4_kms->lut_clk);
++			goto fail;
++		}
+ 		clk_set_rate(mdp4_kms->lut_clk, config->max_clk);
++	}
+ 
+ 	pm_runtime_enable(dev->dev);
+ 	mdp4_kms->rpm_enabled = true;
+diff --git a/drivers/gpu/drm/msm/dsi/dsi.c b/drivers/gpu/drm/msm/dsi/dsi.c
+index 627048851d99c..7e364b9c9f9e1 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi.c
++++ b/drivers/gpu/drm/msm/dsi/dsi.c
+@@ -26,8 +26,10 @@ static int dsi_get_phy(struct msm_dsi *msm_dsi)
+ 	}
+ 
+ 	phy_pdev = of_find_device_by_node(phy_node);
+-	if (phy_pdev)
++	if (phy_pdev) {
+ 		msm_dsi->phy = platform_get_drvdata(phy_pdev);
++		msm_dsi->phy_dev = &phy_pdev->dev;
++	}
+ 
+ 	of_node_put(phy_node);
+ 
+@@ -36,8 +38,6 @@ static int dsi_get_phy(struct msm_dsi *msm_dsi)
+ 		return -EPROBE_DEFER;
+ 	}
+ 
+-	msm_dsi->phy_dev = get_device(&phy_pdev->dev);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_drv.c b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+index 17f26052e8450..f31e8ef3c258c 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_drv.c
++++ b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+@@ -51,6 +51,7 @@ static const struct mxsfb_devdata mxsfb_devdata[] = {
+ 		.hs_wdth_mask	= 0xff,
+ 		.hs_wdth_shift	= 24,
+ 		.has_overlay	= false,
++		.has_ctrl2	= false,
+ 	},
+ 	[MXSFB_V4] = {
+ 		.transfer_count	= LCDC_V4_TRANSFER_COUNT,
+@@ -59,6 +60,7 @@ static const struct mxsfb_devdata mxsfb_devdata[] = {
+ 		.hs_wdth_mask	= 0x3fff,
+ 		.hs_wdth_shift	= 18,
+ 		.has_overlay	= false,
++		.has_ctrl2	= true,
+ 	},
+ 	[MXSFB_V6] = {
+ 		.transfer_count	= LCDC_V4_TRANSFER_COUNT,
+@@ -67,6 +69,7 @@ static const struct mxsfb_devdata mxsfb_devdata[] = {
+ 		.hs_wdth_mask	= 0x3fff,
+ 		.hs_wdth_shift	= 18,
+ 		.has_overlay	= true,
++		.has_ctrl2	= true,
+ 	},
+ };
+ 
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_drv.h b/drivers/gpu/drm/mxsfb/mxsfb_drv.h
+index 399d23e91ed10..7c720e226fdfd 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_drv.h
++++ b/drivers/gpu/drm/mxsfb/mxsfb_drv.h
+@@ -22,6 +22,7 @@ struct mxsfb_devdata {
+ 	unsigned int	hs_wdth_mask;
+ 	unsigned int	hs_wdth_shift;
+ 	bool		has_overlay;
++	bool		has_ctrl2;
+ };
+ 
+ struct mxsfb_drm_private {
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_kms.c b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
+index 9e1224d54729f..b535621f4f78d 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_kms.c
++++ b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
+@@ -107,6 +107,14 @@ static void mxsfb_enable_controller(struct mxsfb_drm_private *mxsfb)
+ 		clk_prepare_enable(mxsfb->clk_disp_axi);
+ 	clk_prepare_enable(mxsfb->clk);
+ 
++	/* Increase number of outstanding requests on all supported IPs */
++	if (mxsfb->devdata->has_ctrl2) {
++		reg = readl(mxsfb->base + LCDC_V4_CTRL2);
++		reg &= ~CTRL2_SET_OUTSTANDING_REQS_MASK;
++		reg |= CTRL2_SET_OUTSTANDING_REQS_16;
++		writel(reg, mxsfb->base + LCDC_V4_CTRL2);
++	}
++
+ 	/* If it was disabled, re-enable the mode again */
+ 	writel(CTRL_DOTCLK_MODE, mxsfb->base + LCDC_CTRL + REG_SET);
+ 
+@@ -115,6 +123,35 @@ static void mxsfb_enable_controller(struct mxsfb_drm_private *mxsfb)
+ 	reg |= VDCTRL4_SYNC_SIGNALS_ON;
+ 	writel(reg, mxsfb->base + LCDC_VDCTRL4);
+ 
++	/*
++	 * Enable recovery on underflow.
++	 *
++	 * There is some sort of corner case behavior of the controller,
++	 * which could rarely be triggered at least on i.MX6SX connected
++	 * to 800x480 DPI panel and i.MX8MM connected to DPI->DSI->LVDS
++	 * bridged 1920x1080 panel (and likely on other setups too), where
++	 * the image on the panel shifts to the right and wraps around.
++	 * This happens either when the controller is enabled on boot or
++	 * even later during run time. The condition does not correct
++	 * itself automatically, i.e. the display image remains shifted.
++	 *
++	 * It seems this problem is known and is due to sporadic underflows
++	 * of the LCDIF FIFO. While the LCDIF IP does have underflow/overflow
++	 * IRQs, neither of the IRQs trigger and neither IRQ status bit is
++	 * asserted when this condition occurs.
++	 *
++	 * All known revisions of the LCDIF IP have CTRL1 RECOVER_ON_UNDERFLOW
++	 * bit, which is described in the reference manual since i.MX23 as
++	 * "
++	 *   Set this bit to enable the LCDIF block to recover in the next
++	 *   field/frame if there was an underflow in the current field/frame.
++	 * "
++	 * Enable this bit to mitigate the sporadic underflows.
++	 */
++	reg = readl(mxsfb->base + LCDC_CTRL1);
++	reg |= CTRL1_RECOVER_ON_UNDERFLOW;
++	writel(reg, mxsfb->base + LCDC_CTRL1);
++
+ 	writel(CTRL_RUN, mxsfb->base + LCDC_CTRL + REG_SET);
+ }
+ 
+@@ -206,6 +243,9 @@ static void mxsfb_crtc_mode_set_nofb(struct mxsfb_drm_private *mxsfb)
+ 
+ 	/* Clear the FIFOs */
+ 	writel(CTRL1_FIFO_CLEAR, mxsfb->base + LCDC_CTRL1 + REG_SET);
++	readl(mxsfb->base + LCDC_CTRL1);
++	writel(CTRL1_FIFO_CLEAR, mxsfb->base + LCDC_CTRL1 + REG_CLR);
++	readl(mxsfb->base + LCDC_CTRL1);
+ 
+ 	if (mxsfb->devdata->has_overlay)
+ 		writel(0, mxsfb->base + LCDC_AS_CTRL);
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_regs.h b/drivers/gpu/drm/mxsfb/mxsfb_regs.h
+index 55d28a27f9124..694fea13e893e 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_regs.h
++++ b/drivers/gpu/drm/mxsfb/mxsfb_regs.h
+@@ -15,6 +15,7 @@
+ #define LCDC_CTRL			0x00
+ #define LCDC_CTRL1			0x10
+ #define LCDC_V3_TRANSFER_COUNT		0x20
++#define LCDC_V4_CTRL2			0x20
+ #define LCDC_V4_TRANSFER_COUNT		0x30
+ #define LCDC_V4_CUR_BUF			0x40
+ #define LCDC_V4_NEXT_BUF		0x50
+@@ -54,12 +55,20 @@
+ #define CTRL_DF24			BIT(1)
+ #define CTRL_RUN			BIT(0)
+ 
++#define CTRL1_RECOVER_ON_UNDERFLOW	BIT(24)
+ #define CTRL1_FIFO_CLEAR		BIT(21)
+ #define CTRL1_SET_BYTE_PACKAGING(x)	(((x) & 0xf) << 16)
+ #define CTRL1_GET_BYTE_PACKAGING(x)	(((x) >> 16) & 0xf)
+ #define CTRL1_CUR_FRAME_DONE_IRQ_EN	BIT(13)
+ #define CTRL1_CUR_FRAME_DONE_IRQ	BIT(9)
+ 
++#define CTRL2_SET_OUTSTANDING_REQS_1	0
++#define CTRL2_SET_OUTSTANDING_REQS_2	(0x1 << 21)
++#define CTRL2_SET_OUTSTANDING_REQS_4	(0x2 << 21)
++#define CTRL2_SET_OUTSTANDING_REQS_8	(0x3 << 21)
++#define CTRL2_SET_OUTSTANDING_REQS_16	(0x4 << 21)
++#define CTRL2_SET_OUTSTANDING_REQS_MASK	(0x7 << 21)
++
+ #define TRANSFER_COUNT_SET_VCOUNT(x)	(((x) & 0xffff) << 16)
+ #define TRANSFER_COUNT_GET_VCOUNT(x)	(((x) >> 16) & 0xffff)
+ #define TRANSFER_COUNT_SET_HCOUNT(x)	((x) & 0xffff)
+diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
+index bf7c34cfb84c0..c256929e859b3 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_device.c
++++ b/drivers/gpu/drm/panfrost/panfrost_device.c
+@@ -60,7 +60,8 @@ static int panfrost_clk_init(struct panfrost_device *pfdev)
+ 	if (IS_ERR(pfdev->bus_clock)) {
+ 		dev_err(pfdev->dev, "get bus_clock failed %ld\n",
+ 			PTR_ERR(pfdev->bus_clock));
+-		return PTR_ERR(pfdev->bus_clock);
++		err = PTR_ERR(pfdev->bus_clock);
++		goto disable_clock;
+ 	}
+ 
+ 	if (pfdev->bus_clock) {
+diff --git a/drivers/i2c/busses/i2c-highlander.c b/drivers/i2c/busses/i2c-highlander.c
+index 803dad70e2a71..a2add128d0843 100644
+--- a/drivers/i2c/busses/i2c-highlander.c
++++ b/drivers/i2c/busses/i2c-highlander.c
+@@ -379,7 +379,7 @@ static int highlander_i2c_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, dev);
+ 
+ 	dev->irq = platform_get_irq(pdev, 0);
+-	if (iic_force_poll)
++	if (dev->irq < 0 || iic_force_poll)
+ 		dev->irq = 0;
+ 
+ 	if (dev->irq) {
+diff --git a/drivers/i2c/busses/i2c-hix5hd2.c b/drivers/i2c/busses/i2c-hix5hd2.c
+index ab15b1ec2ab3c..8993534bc510d 100644
+--- a/drivers/i2c/busses/i2c-hix5hd2.c
++++ b/drivers/i2c/busses/i2c-hix5hd2.c
+@@ -413,10 +413,8 @@ static int hix5hd2_i2c_probe(struct platform_device *pdev)
+ 		return PTR_ERR(priv->regs);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0) {
+-		dev_err(&pdev->dev, "cannot find HS-I2C IRQ\n");
++	if (irq < 0)
+ 		return irq;
+-	}
+ 
+ 	priv->clk = devm_clk_get(&pdev->dev, NULL);
+ 	if (IS_ERR(priv->clk)) {
+diff --git a/drivers/i2c/busses/i2c-iop3xx.c b/drivers/i2c/busses/i2c-iop3xx.c
+index 2f8b8050a2233..899624721c1ea 100644
+--- a/drivers/i2c/busses/i2c-iop3xx.c
++++ b/drivers/i2c/busses/i2c-iop3xx.c
+@@ -467,16 +467,14 @@ iop3xx_i2c_probe(struct platform_device *pdev)
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0) {
+-		ret = -ENXIO;
++		ret = irq;
+ 		goto unmap;
+ 	}
+ 	ret = request_irq(irq, iop3xx_i2c_irq_handler, 0,
+ 				pdev->name, adapter_data);
+ 
+-	if (ret) {
+-		ret = -EIO;
++	if (ret)
+ 		goto unmap;
+-	}
+ 
+ 	memcpy(new_adapter->name, pdev->name, strlen(pdev->name));
+ 	new_adapter->owner = THIS_MODULE;
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index dcde71ae63419..1a5f1ccd1d2f7 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -1207,7 +1207,7 @@ static int mtk_i2c_probe(struct platform_device *pdev)
+ 		return PTR_ERR(i2c->pdmabase);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0)
++	if (irq < 0)
+ 		return irq;
+ 
+ 	init_completion(&i2c->msg_complete);
+diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c
+index 40fa9e4af5d1c..05831848b7bf6 100644
+--- a/drivers/i2c/busses/i2c-s3c2410.c
++++ b/drivers/i2c/busses/i2c-s3c2410.c
+@@ -1140,7 +1140,7 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
+ 	 */
+ 	if (!(i2c->quirks & QUIRK_POLL)) {
+ 		i2c->irq = ret = platform_get_irq(pdev, 0);
+-		if (ret <= 0) {
++		if (ret < 0) {
+ 			dev_err(&pdev->dev, "cannot find IRQ\n");
+ 			clk_unprepare(i2c->clk);
+ 			return ret;
+diff --git a/drivers/i2c/busses/i2c-synquacer.c b/drivers/i2c/busses/i2c-synquacer.c
+index 31be1811d5e66..e4026c5416b15 100644
+--- a/drivers/i2c/busses/i2c-synquacer.c
++++ b/drivers/i2c/busses/i2c-synquacer.c
+@@ -578,7 +578,7 @@ static int synquacer_i2c_probe(struct platform_device *pdev)
+ 
+ 	i2c->irq = platform_get_irq(pdev, 0);
+ 	if (i2c->irq < 0)
+-		return -ENODEV;
++		return i2c->irq;
+ 
+ 	ret = devm_request_irq(&pdev->dev, i2c->irq, synquacer_i2c_isr,
+ 			       0, dev_name(&pdev->dev), i2c);
+diff --git a/drivers/i2c/busses/i2c-xlp9xx.c b/drivers/i2c/busses/i2c-xlp9xx.c
+index f2241cedf5d3f..6d24dc3855229 100644
+--- a/drivers/i2c/busses/i2c-xlp9xx.c
++++ b/drivers/i2c/busses/i2c-xlp9xx.c
+@@ -517,7 +517,7 @@ static int xlp9xx_i2c_probe(struct platform_device *pdev)
+ 		return PTR_ERR(priv->base);
+ 
+ 	priv->irq = platform_get_irq(pdev, 0);
+-	if (priv->irq <= 0)
++	if (priv->irq < 0)
+ 		return priv->irq;
+ 	/* SMBAlert irq */
+ 	priv->alert_data.irq = platform_get_irq(pdev, 1);
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 1005b182bab47..1bdb7acf445f4 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -100,6 +100,27 @@ EXPORT_SYMBOL(gic_pmr_sync);
+ DEFINE_STATIC_KEY_FALSE(gic_nonsecure_priorities);
+ EXPORT_SYMBOL(gic_nonsecure_priorities);
+ 
++/*
++ * When the Non-secure world has access to group 0 interrupts (as a
++ * consequence of SCR_EL3.FIQ == 0), reading the ICC_RPR_EL1 register will
++ * return the Distributor's view of the interrupt priority.
++ *
++ * When GIC security is enabled (GICD_CTLR.DS == 0), the interrupt priority
++ * written by software is moved to the Non-secure range by the Distributor.
++ *
++ * If both are true (which is when gic_nonsecure_priorities gets enabled),
++ * we need to shift down the priority programmed by software to match it
++ * against the value returned by ICC_RPR_EL1.
++ */
++#define GICD_INT_RPR_PRI(priority)					\
++	({								\
++		u32 __priority = (priority);				\
++		if (static_branch_unlikely(&gic_nonsecure_priorities))	\
++			__priority = 0x80 | (__priority >> 1);		\
++									\
++		__priority;						\
++	})
++
+ /* ppi_nmi_refs[n] == number of cpus having ppi[n + 16] set as NMI */
+ static refcount_t *ppi_nmi_refs;
+ 
+@@ -687,7 +708,7 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
+ 		return;
+ 
+ 	if (gic_supports_nmi() &&
+-	    unlikely(gic_read_rpr() == GICD_INT_NMI_PRI)) {
++	    unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI))) {
+ 		gic_handle_nmi(irqnr, regs);
+ 		return;
+ 	}
+diff --git a/drivers/irqchip/irq-loongson-pch-pic.c b/drivers/irqchip/irq-loongson-pch-pic.c
+index 9bf6b9a5f7348..90e1ad6e36120 100644
+--- a/drivers/irqchip/irq-loongson-pch-pic.c
++++ b/drivers/irqchip/irq-loongson-pch-pic.c
+@@ -92,18 +92,22 @@ static int pch_pic_set_type(struct irq_data *d, unsigned int type)
+ 	case IRQ_TYPE_EDGE_RISING:
+ 		pch_pic_bitset(priv, PCH_PIC_EDGE, d->hwirq);
+ 		pch_pic_bitclr(priv, PCH_PIC_POL, d->hwirq);
++		irq_set_handler_locked(d, handle_edge_irq);
+ 		break;
+ 	case IRQ_TYPE_EDGE_FALLING:
+ 		pch_pic_bitset(priv, PCH_PIC_EDGE, d->hwirq);
+ 		pch_pic_bitset(priv, PCH_PIC_POL, d->hwirq);
++		irq_set_handler_locked(d, handle_edge_irq);
+ 		break;
+ 	case IRQ_TYPE_LEVEL_HIGH:
+ 		pch_pic_bitclr(priv, PCH_PIC_EDGE, d->hwirq);
+ 		pch_pic_bitclr(priv, PCH_PIC_POL, d->hwirq);
++		irq_set_handler_locked(d, handle_level_irq);
+ 		break;
+ 	case IRQ_TYPE_LEVEL_LOW:
+ 		pch_pic_bitclr(priv, PCH_PIC_EDGE, d->hwirq);
+ 		pch_pic_bitset(priv, PCH_PIC_POL, d->hwirq);
++		irq_set_handler_locked(d, handle_level_irq);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+@@ -113,11 +117,24 @@ static int pch_pic_set_type(struct irq_data *d, unsigned int type)
+ 	return ret;
+ }
+ 
++static void pch_pic_ack_irq(struct irq_data *d)
++{
++	unsigned int reg;
++	struct pch_pic *priv = irq_data_get_irq_chip_data(d);
++
++	reg = readl(priv->base + PCH_PIC_EDGE + PIC_REG_IDX(d->hwirq) * 4);
++	if (reg & BIT(PIC_REG_BIT(d->hwirq))) {
++		writel(BIT(PIC_REG_BIT(d->hwirq)),
++			priv->base + PCH_PIC_CLR + PIC_REG_IDX(d->hwirq) * 4);
++	}
++	irq_chip_ack_parent(d);
++}
++
+ static struct irq_chip pch_pic_irq_chip = {
+ 	.name			= "PCH PIC",
+ 	.irq_mask		= pch_pic_mask_irq,
+ 	.irq_unmask		= pch_pic_unmask_irq,
+-	.irq_ack		= irq_chip_ack_parent,
++	.irq_ack		= pch_pic_ack_irq,
+ 	.irq_set_affinity	= irq_chip_set_affinity_parent,
+ 	.irq_set_type		= pch_pic_set_type,
+ };
+diff --git a/drivers/leds/leds-is31fl32xx.c b/drivers/leds/leds-is31fl32xx.c
+index 2180255ad3393..899ed94b66876 100644
+--- a/drivers/leds/leds-is31fl32xx.c
++++ b/drivers/leds/leds-is31fl32xx.c
+@@ -385,6 +385,7 @@ static int is31fl32xx_parse_dt(struct device *dev,
+ 			dev_err(dev,
+ 				"Node %pOF 'reg' conflicts with another LED\n",
+ 				child);
++			ret = -EINVAL;
+ 			goto err;
+ 		}
+ 
+diff --git a/drivers/leds/leds-lt3593.c b/drivers/leds/leds-lt3593.c
+index 68e06434ac087..7dab08773a347 100644
+--- a/drivers/leds/leds-lt3593.c
++++ b/drivers/leds/leds-lt3593.c
+@@ -99,10 +99,9 @@ static int lt3593_led_probe(struct platform_device *pdev)
+ 	init_data.default_label = ":";
+ 
+ 	ret = devm_led_classdev_register_ext(dev, &led_data->cdev, &init_data);
+-	if (ret < 0) {
+-		fwnode_handle_put(child);
++	fwnode_handle_put(child);
++	if (ret < 0)
+ 		return ret;
+-	}
+ 
+ 	platform_set_drvdata(pdev, led_data);
+ 
+diff --git a/drivers/leds/trigger/ledtrig-audio.c b/drivers/leds/trigger/ledtrig-audio.c
+index f76621e88482d..c6b437e6369b8 100644
+--- a/drivers/leds/trigger/ledtrig-audio.c
++++ b/drivers/leds/trigger/ledtrig-audio.c
+@@ -6,10 +6,33 @@
+ #include <linux/kernel.h>
+ #include <linux/leds.h>
+ #include <linux/module.h>
++#include "../leds.h"
+ 
+-static struct led_trigger *ledtrig_audio[NUM_AUDIO_LEDS];
+ static enum led_brightness audio_state[NUM_AUDIO_LEDS];
+ 
++static int ledtrig_audio_mute_activate(struct led_classdev *led_cdev)
++{
++	led_set_brightness_nosleep(led_cdev, audio_state[LED_AUDIO_MUTE]);
++	return 0;
++}
++
++static int ledtrig_audio_micmute_activate(struct led_classdev *led_cdev)
++{
++	led_set_brightness_nosleep(led_cdev, audio_state[LED_AUDIO_MICMUTE]);
++	return 0;
++}
++
++static struct led_trigger ledtrig_audio[NUM_AUDIO_LEDS] = {
++	[LED_AUDIO_MUTE] = {
++		.name     = "audio-mute",
++		.activate = ledtrig_audio_mute_activate,
++	},
++	[LED_AUDIO_MICMUTE] = {
++		.name     = "audio-micmute",
++		.activate = ledtrig_audio_micmute_activate,
++	},
++};
++
+ enum led_brightness ledtrig_audio_get(enum led_audio type)
+ {
+ 	return audio_state[type];
+@@ -19,24 +42,22 @@ EXPORT_SYMBOL_GPL(ledtrig_audio_get);
+ void ledtrig_audio_set(enum led_audio type, enum led_brightness state)
+ {
+ 	audio_state[type] = state;
+-	led_trigger_event(ledtrig_audio[type], state);
++	led_trigger_event(&ledtrig_audio[type], state);
+ }
+ EXPORT_SYMBOL_GPL(ledtrig_audio_set);
+ 
+ static int __init ledtrig_audio_init(void)
+ {
+-	led_trigger_register_simple("audio-mute",
+-				    &ledtrig_audio[LED_AUDIO_MUTE]);
+-	led_trigger_register_simple("audio-micmute",
+-				    &ledtrig_audio[LED_AUDIO_MICMUTE]);
++	led_trigger_register(&ledtrig_audio[LED_AUDIO_MUTE]);
++	led_trigger_register(&ledtrig_audio[LED_AUDIO_MICMUTE]);
+ 	return 0;
+ }
+ module_init(ledtrig_audio_init);
+ 
+ static void __exit ledtrig_audio_exit(void)
+ {
+-	led_trigger_unregister_simple(ledtrig_audio[LED_AUDIO_MUTE]);
+-	led_trigger_unregister_simple(ledtrig_audio[LED_AUDIO_MICMUTE]);
++	led_trigger_unregister(&ledtrig_audio[LED_AUDIO_MUTE]);
++	led_trigger_unregister(&ledtrig_audio[LED_AUDIO_MICMUTE]);
+ }
+ module_exit(ledtrig_audio_exit);
+ 
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 248bda63f0852..81f1cc5b34999 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -934,20 +934,20 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
+ 	n = BITS_TO_LONGS(d->nr_stripes) * sizeof(unsigned long);
+ 	d->full_dirty_stripes = kvzalloc(n, GFP_KERNEL);
+ 	if (!d->full_dirty_stripes)
+-		return -ENOMEM;
++		goto out_free_stripe_sectors_dirty;
+ 
+ 	idx = ida_simple_get(&bcache_device_idx, 0,
+ 				BCACHE_DEVICE_IDX_MAX, GFP_KERNEL);
+ 	if (idx < 0)
+-		return idx;
++		goto out_free_full_dirty_stripes;
+ 
+ 	if (bioset_init(&d->bio_split, 4, offsetof(struct bbio, bio),
+ 			BIOSET_NEED_BVECS|BIOSET_NEED_RESCUER))
+-		goto err;
++		goto out_ida_remove;
+ 
+ 	d->disk = alloc_disk(BCACHE_MINORS);
+ 	if (!d->disk)
+-		goto err;
++		goto out_bioset_exit;
+ 
+ 	set_capacity(d->disk, sectors);
+ 	snprintf(d->disk->disk_name, DISK_NAME_LEN, "bcache%i", idx);
+@@ -993,8 +993,14 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
+ 
+ 	return 0;
+ 
+-err:
++out_bioset_exit:
++	bioset_exit(&d->bio_split);
++out_ida_remove:
+ 	ida_simple_remove(&bcache_device_idx, idx);
++out_free_full_dirty_stripes:
++	kvfree(d->full_dirty_stripes);
++out_free_stripe_sectors_dirty:
++	kvfree(d->stripe_sectors_dirty);
+ 	return -ENOMEM;
+ 
+ }
+diff --git a/drivers/media/i2c/tda1997x.c b/drivers/media/i2c/tda1997x.c
+index 89bb7e6dc7a42..9554c8348c020 100644
+--- a/drivers/media/i2c/tda1997x.c
++++ b/drivers/media/i2c/tda1997x.c
+@@ -2233,6 +2233,7 @@ static int tda1997x_core_init(struct v4l2_subdev *sd)
+ 	/* get initial HDMI status */
+ 	state->hdmi_status = io_read(sd, REG_HDMI_FLAGS);
+ 
++	io_write(sd, REG_EDID_ENABLE, EDID_ENABLE_A_EN | EDID_ENABLE_B_EN);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c
+index bf75927bac4e7..159c9de857885 100644
+--- a/drivers/media/platform/coda/coda-bit.c
++++ b/drivers/media/platform/coda/coda-bit.c
+@@ -2031,17 +2031,25 @@ static int __coda_start_decoding(struct coda_ctx *ctx)
+ 	u32 src_fourcc, dst_fourcc;
+ 	int ret;
+ 
++	q_data_src = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
++	q_data_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
++	src_fourcc = q_data_src->fourcc;
++	dst_fourcc = q_data_dst->fourcc;
++
+ 	if (!ctx->initialized) {
+ 		ret = __coda_decoder_seq_init(ctx);
+ 		if (ret < 0)
+ 			return ret;
++	} else {
++		ctx->frame_mem_ctrl &= ~(CODA_FRAME_CHROMA_INTERLEAVE | (0x3 << 9) |
++					 CODA9_FRAME_TILED2LINEAR);
++		if (dst_fourcc == V4L2_PIX_FMT_NV12 || dst_fourcc == V4L2_PIX_FMT_YUYV)
++			ctx->frame_mem_ctrl |= CODA_FRAME_CHROMA_INTERLEAVE;
++		if (ctx->tiled_map_type == GDI_TILED_FRAME_MB_RASTER_MAP)
++			ctx->frame_mem_ctrl |= (0x3 << 9) |
++				((ctx->use_vdoa) ? 0 : CODA9_FRAME_TILED2LINEAR);
+ 	}
+ 
+-	q_data_src = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
+-	q_data_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+-	src_fourcc = q_data_src->fourcc;
+-	dst_fourcc = q_data_dst->fourcc;
+-
+ 	coda_write(dev, ctx->parabuf.paddr, CODA_REG_BIT_PARA_BUF_ADDR);
+ 
+ 	ret = coda_alloc_framebuffers(ctx, q_data_dst, src_fourcc);
+diff --git a/drivers/media/platform/qcom/venus/venc.c b/drivers/media/platform/qcom/venus/venc.c
+index 47246528ac7ef..e2d0fd5eaf29a 100644
+--- a/drivers/media/platform/qcom/venus/venc.c
++++ b/drivers/media/platform/qcom/venus/venc.c
+@@ -183,6 +183,8 @@ venc_try_fmt_common(struct venus_inst *inst, struct v4l2_format *f)
+ 		else
+ 			return NULL;
+ 		fmt = find_format(inst, pixmp->pixelformat, f->type);
++		if (!fmt)
++			return NULL;
+ 	}
+ 
+ 	pixmp->width = clamp(pixmp->width, frame_width_min(inst),
+diff --git a/drivers/media/platform/rockchip/rga/rga-buf.c b/drivers/media/platform/rockchip/rga/rga-buf.c
+index bf9a75b75083b..81508ed5abf34 100644
+--- a/drivers/media/platform/rockchip/rga/rga-buf.c
++++ b/drivers/media/platform/rockchip/rga/rga-buf.c
+@@ -79,9 +79,8 @@ static int rga_buf_start_streaming(struct vb2_queue *q, unsigned int count)
+ 	struct rockchip_rga *rga = ctx->rga;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(rga->dev);
++	ret = pm_runtime_resume_and_get(rga->dev);
+ 	if (ret < 0) {
+-		pm_runtime_put_noidle(rga->dev);
+ 		rga_buf_return_buffers(q, VB2_BUF_STATE_QUEUED);
+ 		return ret;
+ 	}
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index 9d122429706e9..6759091b15e09 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -863,10 +863,12 @@ static int rga_probe(struct platform_device *pdev)
+ 	if (IS_ERR(rga->m2m_dev)) {
+ 		v4l2_err(&rga->v4l2_dev, "Failed to init mem2mem device\n");
+ 		ret = PTR_ERR(rga->m2m_dev);
+-		goto unreg_video_dev;
++		goto rel_vdev;
+ 	}
+ 
+-	pm_runtime_get_sync(rga->dev);
++	ret = pm_runtime_resume_and_get(rga->dev);
++	if (ret < 0)
++		goto rel_vdev;
+ 
+ 	rga->version.major = (rga_read(rga, RGA_VERSION_INFO) >> 24) & 0xFF;
+ 	rga->version.minor = (rga_read(rga, RGA_VERSION_INFO) >> 20) & 0x0F;
+@@ -880,11 +882,23 @@ static int rga_probe(struct platform_device *pdev)
+ 	rga->cmdbuf_virt = dma_alloc_attrs(rga->dev, RGA_CMDBUF_SIZE,
+ 					   &rga->cmdbuf_phy, GFP_KERNEL,
+ 					   DMA_ATTR_WRITE_COMBINE);
++	if (!rga->cmdbuf_virt) {
++		ret = -ENOMEM;
++		goto rel_vdev;
++	}
+ 
+ 	rga->src_mmu_pages =
+ 		(unsigned int *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 3);
++	if (!rga->src_mmu_pages) {
++		ret = -ENOMEM;
++		goto free_dma;
++	}
+ 	rga->dst_mmu_pages =
+ 		(unsigned int *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 3);
++	if (rga->dst_mmu_pages) {
++		ret = -ENOMEM;
++		goto free_src_pages;
++	}
+ 
+ 	def_frame.stride = (def_frame.width * def_frame.fmt->depth) >> 3;
+ 	def_frame.size = def_frame.stride * def_frame.height;
+@@ -892,7 +906,7 @@ static int rga_probe(struct platform_device *pdev)
+ 	ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
+ 	if (ret) {
+ 		v4l2_err(&rga->v4l2_dev, "Failed to register video device\n");
+-		goto rel_vdev;
++		goto free_dst_pages;
+ 	}
+ 
+ 	v4l2_info(&rga->v4l2_dev, "Registered %s as /dev/%s\n",
+@@ -900,10 +914,15 @@ static int rga_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++free_dst_pages:
++	free_pages((unsigned long)rga->dst_mmu_pages, 3);
++free_src_pages:
++	free_pages((unsigned long)rga->src_mmu_pages, 3);
++free_dma:
++	dma_free_attrs(rga->dev, RGA_CMDBUF_SIZE, rga->cmdbuf_virt,
++		       rga->cmdbuf_phy, DMA_ATTR_WRITE_COMBINE);
+ rel_vdev:
+ 	video_device_release(vfd);
+-unreg_video_dev:
+-	video_unregister_device(rga->vfd);
+ unreg_v4l2_dev:
+ 	v4l2_device_unregister(&rga->v4l2_dev);
+ err_put_clk:
+diff --git a/drivers/media/spi/cxd2880-spi.c b/drivers/media/spi/cxd2880-spi.c
+index 4077217777f92..93194f03764d2 100644
+--- a/drivers/media/spi/cxd2880-spi.c
++++ b/drivers/media/spi/cxd2880-spi.c
+@@ -524,13 +524,13 @@ cxd2880_spi_probe(struct spi_device *spi)
+ 	if (IS_ERR(dvb_spi->vcc_supply)) {
+ 		if (PTR_ERR(dvb_spi->vcc_supply) == -EPROBE_DEFER) {
+ 			ret = -EPROBE_DEFER;
+-			goto fail_adapter;
++			goto fail_regulator;
+ 		}
+ 		dvb_spi->vcc_supply = NULL;
+ 	} else {
+ 		ret = regulator_enable(dvb_spi->vcc_supply);
+ 		if (ret)
+-			goto fail_adapter;
++			goto fail_regulator;
+ 	}
+ 
+ 	dvb_spi->spi = spi;
+@@ -618,6 +618,9 @@ fail_frontend:
+ fail_attach:
+ 	dvb_unregister_adapter(&dvb_spi->adapter);
+ fail_adapter:
++	if (!dvb_spi->vcc_supply)
++		regulator_disable(dvb_spi->vcc_supply);
++fail_regulator:
+ 	kfree(dvb_spi);
+ 	return ret;
+ }
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-i2c.c b/drivers/media/usb/dvb-usb/dvb-usb-i2c.c
+index 2e07106f46803..bc4b2abdde1a4 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-i2c.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-i2c.c
+@@ -17,7 +17,8 @@ int dvb_usb_i2c_init(struct dvb_usb_device *d)
+ 
+ 	if (d->props.i2c_algo == NULL) {
+ 		err("no i2c algorithm specified");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err;
+ 	}
+ 
+ 	strscpy(d->i2c_adap.name, d->desc->name, sizeof(d->i2c_adap.name));
+@@ -27,11 +28,15 @@ int dvb_usb_i2c_init(struct dvb_usb_device *d)
+ 
+ 	i2c_set_adapdata(&d->i2c_adap, d);
+ 
+-	if ((ret = i2c_add_adapter(&d->i2c_adap)) < 0)
++	ret = i2c_add_adapter(&d->i2c_adap);
++	if (ret < 0) {
+ 		err("could not add i2c adapter");
++		goto err;
++	}
+ 
+ 	d->state |= DVB_USB_STATE_I2C;
+ 
++err:
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+index 28e1fd64dd3c2..61439c8f33cab 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-init.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+@@ -194,8 +194,8 @@ static int dvb_usb_init(struct dvb_usb_device *d, short *adapter_nums)
+ 
+ err_adapter_init:
+ 	dvb_usb_adapter_exit(d);
+-err_i2c_init:
+ 	dvb_usb_i2c_exit(d);
++err_i2c_init:
+ 	if (d->priv && d->props.priv_destroy)
+ 		d->props.priv_destroy(d);
+ err_priv_init:
+diff --git a/drivers/media/usb/dvb-usb/nova-t-usb2.c b/drivers/media/usb/dvb-usb/nova-t-usb2.c
+index e7b290552b663..9c0eb0d40822e 100644
+--- a/drivers/media/usb/dvb-usb/nova-t-usb2.c
++++ b/drivers/media/usb/dvb-usb/nova-t-usb2.c
+@@ -130,7 +130,7 @@ ret:
+ 
+ static int nova_t_read_mac_address (struct dvb_usb_device *d, u8 mac[6])
+ {
+-	int i;
++	int i, ret;
+ 	u8 b;
+ 
+ 	mac[0] = 0x00;
+@@ -139,7 +139,9 @@ static int nova_t_read_mac_address (struct dvb_usb_device *d, u8 mac[6])
+ 
+ 	/* this is a complete guess, but works for my box */
+ 	for (i = 136; i < 139; i++) {
+-		dibusb_read_eeprom_byte(d,i, &b);
++		ret = dibusb_read_eeprom_byte(d, i, &b);
++		if (ret)
++			return ret;
+ 
+ 		mac[5 - (i - 136)] = b;
+ 	}
+diff --git a/drivers/media/usb/dvb-usb/vp702x.c b/drivers/media/usb/dvb-usb/vp702x.c
+index bf54747e2e01a..a1d9e4801a2ba 100644
+--- a/drivers/media/usb/dvb-usb/vp702x.c
++++ b/drivers/media/usb/dvb-usb/vp702x.c
+@@ -291,16 +291,22 @@ static int vp702x_rc_query(struct dvb_usb_device *d, u32 *event, int *state)
+ static int vp702x_read_mac_addr(struct dvb_usb_device *d,u8 mac[6])
+ {
+ 	u8 i, *buf;
++	int ret;
+ 	struct vp702x_device_state *st = d->priv;
+ 
+ 	mutex_lock(&st->buf_mutex);
+ 	buf = st->buf;
+-	for (i = 6; i < 12; i++)
+-		vp702x_usb_in_op(d, READ_EEPROM_REQ, i, 1, &buf[i - 6], 1);
++	for (i = 6; i < 12; i++) {
++		ret = vp702x_usb_in_op(d, READ_EEPROM_REQ, i, 1,
++				       &buf[i - 6], 1);
++		if (ret < 0)
++			goto err;
++	}
+ 
+ 	memcpy(mac, buf, 6);
++err:
+ 	mutex_unlock(&st->buf_mutex);
+-	return 0;
++	return ret;
+ }
+ 
+ static int vp702x_frontend_attach(struct dvb_usb_adapter *adap)
+diff --git a/drivers/media/usb/em28xx/em28xx-input.c b/drivers/media/usb/em28xx/em28xx-input.c
+index 59529cbf9cd0b..0b6d77c3bec86 100644
+--- a/drivers/media/usb/em28xx/em28xx-input.c
++++ b/drivers/media/usb/em28xx/em28xx-input.c
+@@ -842,7 +842,6 @@ error:
+ 	kfree(ir);
+ ref_put:
+ 	em28xx_shutdown_buttons(dev);
+-	kref_put(&dev->ref, em28xx_free_device);
+ 	return err;
+ }
+ 
+diff --git a/drivers/media/usb/go7007/go7007-driver.c b/drivers/media/usb/go7007/go7007-driver.c
+index f1767be9d8685..6650eab913d81 100644
+--- a/drivers/media/usb/go7007/go7007-driver.c
++++ b/drivers/media/usb/go7007/go7007-driver.c
+@@ -691,49 +691,23 @@ struct go7007 *go7007_alloc(const struct go7007_board_info *board,
+ 						struct device *dev)
+ {
+ 	struct go7007 *go;
+-	int i;
+ 
+ 	go = kzalloc(sizeof(struct go7007), GFP_KERNEL);
+ 	if (go == NULL)
+ 		return NULL;
+ 	go->dev = dev;
+ 	go->board_info = board;
+-	go->board_id = 0;
+ 	go->tuner_type = -1;
+-	go->channel_number = 0;
+-	go->name[0] = 0;
+ 	mutex_init(&go->hw_lock);
+ 	init_waitqueue_head(&go->frame_waitq);
+ 	spin_lock_init(&go->spinlock);
+ 	go->status = STATUS_INIT;
+-	memset(&go->i2c_adapter, 0, sizeof(go->i2c_adapter));
+-	go->i2c_adapter_online = 0;
+-	go->interrupt_available = 0;
+ 	init_waitqueue_head(&go->interrupt_waitq);
+-	go->input = 0;
+ 	go7007_update_board(go);
+-	go->encoder_h_halve = 0;
+-	go->encoder_v_halve = 0;
+-	go->encoder_subsample = 0;
+ 	go->format = V4L2_PIX_FMT_MJPEG;
+ 	go->bitrate = 1500000;
+ 	go->fps_scale = 1;
+-	go->pali = 0;
+ 	go->aspect_ratio = GO7007_RATIO_1_1;
+-	go->gop_size = 0;
+-	go->ipb = 0;
+-	go->closed_gop = 0;
+-	go->repeat_seqhead = 0;
+-	go->seq_header_enable = 0;
+-	go->gop_header_enable = 0;
+-	go->dvd_mode = 0;
+-	go->interlace_coding = 0;
+-	for (i = 0; i < 4; ++i)
+-		go->modet[i].enable = 0;
+-	for (i = 0; i < 1624; ++i)
+-		go->modet_map[i] = 0;
+-	go->audio_deliver = NULL;
+-	go->audio_enabled = 0;
+ 
+ 	return go;
+ }
+diff --git a/drivers/media/usb/go7007/go7007-usb.c b/drivers/media/usb/go7007/go7007-usb.c
+index dbf0455d5d50d..eeb85981e02b6 100644
+--- a/drivers/media/usb/go7007/go7007-usb.c
++++ b/drivers/media/usb/go7007/go7007-usb.c
+@@ -1134,7 +1134,7 @@ static int go7007_usb_probe(struct usb_interface *intf,
+ 
+ 	ep = usb->usbdev->ep_in[4];
+ 	if (!ep)
+-		return -ENODEV;
++		goto allocfail;
+ 
+ 	/* Allocate the URB and buffer for receiving incoming interrupts */
+ 	usb->intr_urb = usb_alloc_urb(0, GFP_KERNEL);
+diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
+index c802db9aaeb04..32b3d77368e37 100644
+--- a/drivers/misc/lkdtm/core.c
++++ b/drivers/misc/lkdtm/core.c
+@@ -81,7 +81,7 @@ static struct crashpoint crashpoints[] = {
+ 	CRASHPOINT("FS_DEVRW",		 "ll_rw_block"),
+ 	CRASHPOINT("MEM_SWAPOUT",	 "shrink_inactive_list"),
+ 	CRASHPOINT("TIMERADD",		 "hrtimer_start"),
+-	CRASHPOINT("SCSI_DISPATCH_CMD",	 "scsi_dispatch_cmd"),
++	CRASHPOINT("SCSI_QUEUE_RQ",	 "scsi_queue_rq"),
+ 	CRASHPOINT("IDE_CORE_CP",	 "generic_ide_ioctl"),
+ #endif
+ };
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 8b5d542e20f30..7f90326b1be50 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -782,6 +782,7 @@ static int dw_mci_edmac_start_dma(struct dw_mci *host,
+ 	int ret = 0;
+ 
+ 	/* Set external dma config: burst size, burst width */
++	memset(&cfg, 0, sizeof(cfg));
+ 	cfg.dst_addr = host->phy_regs + fifo_offset;
+ 	cfg.src_addr = cfg.dst_addr;
+ 	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
+index f25079ba3bca2..2e4a7c6971dc9 100644
+--- a/drivers/mmc/host/moxart-mmc.c
++++ b/drivers/mmc/host/moxart-mmc.c
+@@ -631,6 +631,7 @@ static int moxart_probe(struct platform_device *pdev)
+ 			 host->dma_chan_tx, host->dma_chan_rx);
+ 		host->have_dma = true;
+ 
++		memset(&cfg, 0, sizeof(cfg));
+ 		cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ 		cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 6cdadbb3accd5..b1e1d327cb8eb 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1223,6 +1223,7 @@ static int sdhci_external_dma_setup(struct sdhci_host *host,
+ 	if (!host->mapbase)
+ 		return -EINVAL;
+ 
++	memset(&cfg, 0, sizeof(cfg));
+ 	cfg.src_addr = host->mapbase + SDHCI_BUFFER;
+ 	cfg.dst_addr = host->mapbase + SDHCI_BUFFER;
+ 	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index 59253846e8858..f26d037356191 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -417,6 +417,9 @@ static int atl_resume_common(struct device *dev, bool deep)
+ 	pci_restore_state(pdev);
+ 
+ 	if (deep) {
++		/* Reinitialize Nic/Vecs objects */
++		aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol);
++
+ 		ret = aq_nic_init(nic);
+ 		if (ret)
+ 			goto err_exit;
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
+index 24ae6a28a806d..6009d76e41fc4 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.c
++++ b/drivers/net/ethernet/google/gve/gve_adminq.c
+@@ -182,7 +182,8 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv,
+ 	tail = ioread32be(&priv->reg_bar0->adminq_event_counter);
+ 
+ 	// Check if next command will overflow the buffer.
+-	if (((priv->adminq_prod_cnt + 1) & priv->adminq_mask) == tail) {
++	if (((priv->adminq_prod_cnt + 1) & priv->adminq_mask) ==
++	    (tail & priv->adminq_mask)) {
+ 		int err;
+ 
+ 		// Flush existing commands to make room.
+@@ -192,7 +193,8 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv,
+ 
+ 		// Retry.
+ 		tail = ioread32be(&priv->reg_bar0->adminq_event_counter);
+-		if (((priv->adminq_prod_cnt + 1) & priv->adminq_mask) == tail) {
++		if (((priv->adminq_prod_cnt + 1) & priv->adminq_mask) ==
++		    (tail & priv->adminq_mask)) {
+ 			// This should never happen. We just flushed the
+ 			// command queue so there should be enough space.
+ 			return -ENOMEM;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index e4f13a49c3df8..a02167cce81e1 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1107,12 +1107,12 @@ static int i40e_quiesce_vf_pci(struct i40e_vf *vf)
+ }
+ 
+ /**
+- * i40e_getnum_vf_vsi_vlan_filters
++ * __i40e_getnum_vf_vsi_vlan_filters
+  * @vsi: pointer to the vsi
+  *
+  * called to get the number of VLANs offloaded on this VF
+  **/
+-static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
++static int __i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
+ {
+ 	struct i40e_mac_filter *f;
+ 	u16 num_vlans = 0, bkt;
+@@ -1125,6 +1125,23 @@ static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
+ 	return num_vlans;
+ }
+ 
++/**
++ * i40e_getnum_vf_vsi_vlan_filters
++ * @vsi: pointer to the vsi
++ *
++ * wrapper for __i40e_getnum_vf_vsi_vlan_filters() with spinlock held
++ **/
++static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
++{
++	int num_vlans;
++
++	spin_lock_bh(&vsi->mac_filter_hash_lock);
++	num_vlans = __i40e_getnum_vf_vsi_vlan_filters(vsi);
++	spin_unlock_bh(&vsi->mac_filter_hash_lock);
++
++	return num_vlans;
++}
++
+ /**
+  * i40e_get_vlan_list_sync
+  * @vsi: pointer to the VSI
+@@ -1142,7 +1159,7 @@ static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, u16 *num_vlans,
+ 	int bkt;
+ 
+ 	spin_lock_bh(&vsi->mac_filter_hash_lock);
+-	*num_vlans = i40e_getnum_vf_vsi_vlan_filters(vsi);
++	*num_vlans = __i40e_getnum_vf_vsi_vlan_filters(vsi);
+ 	*vlan_list = kcalloc(*num_vlans, sizeof(**vlan_list), GFP_ATOMIC);
+ 	if (!(*vlan_list))
+ 		goto err;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index a46780570cd95..5d0dc1f811e0f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -4879,6 +4879,7 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 	struct ice_hw *hw = &pf->hw;
+ 	struct sockaddr *addr = pi;
+ 	enum ice_status status;
++	u8 old_mac[ETH_ALEN];
+ 	u8 flags = 0;
+ 	int err = 0;
+ 	u8 *mac;
+@@ -4901,8 +4902,13 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 	}
+ 
+ 	netif_addr_lock_bh(netdev);
++	ether_addr_copy(old_mac, netdev->dev_addr);
++	/* change the netdev's MAC address */
++	memcpy(netdev->dev_addr, mac, netdev->addr_len);
++	netif_addr_unlock_bh(netdev);
++
+ 	/* Clean up old MAC filter. Not an error if old filter doesn't exist */
+-	status = ice_fltr_remove_mac(vsi, netdev->dev_addr, ICE_FWD_TO_VSI);
++	status = ice_fltr_remove_mac(vsi, old_mac, ICE_FWD_TO_VSI);
+ 	if (status && status != ICE_ERR_DOES_NOT_EXIST) {
+ 		err = -EADDRNOTAVAIL;
+ 		goto err_update_filters;
+@@ -4925,13 +4931,12 @@ err_update_filters:
+ 	if (err) {
+ 		netdev_err(netdev, "can't set MAC %pM. filter update failed\n",
+ 			   mac);
++		netif_addr_lock_bh(netdev);
++		ether_addr_copy(netdev->dev_addr, old_mac);
+ 		netif_addr_unlock_bh(netdev);
+ 		return err;
+ 	}
+ 
+-	/* change the netdev's MAC address */
+-	memcpy(netdev->dev_addr, mac, netdev->addr_len);
+-	netif_addr_unlock_bh(netdev);
+ 	netdev_dbg(vsi->netdev, "updated MAC address to %pM\n",
+ 		   netdev->dev_addr);
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index 169ae491f9786..6fa9358e6db4f 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -27,7 +27,7 @@
+ #define NIXLF_PROMISC_ENTRY	2
+ 
+ #define NPC_PARSE_RESULT_DMAC_OFFSET	8
+-#define NPC_HW_TSTAMP_OFFSET		8
++#define NPC_HW_TSTAMP_OFFSET		8ULL
+ 
+ static const char def_pfl_name[] = "default";
+ 
+@@ -1171,14 +1171,15 @@ int rvu_npc_init(struct rvu *rvu)
+ 
+ 	/* Enable below for Rx pkts.
+ 	 * - Outer IPv4 header checksum validation.
+-	 * - Detect outer L2 broadcast address and set NPC_RESULT_S[L2M].
++	 * - Detect outer L2 broadcast address and set NPC_RESULT_S[L2B].
++	 * - Detect outer L2 multicast address and set NPC_RESULT_S[L2M].
+ 	 * - Inner IPv4 header checksum validation.
+ 	 * - Set non zero checksum error code value
+ 	 */
+ 	rvu_write64(rvu, blkaddr, NPC_AF_PCK_CFG,
+ 		    rvu_read64(rvu, blkaddr, NPC_AF_PCK_CFG) |
+-		    BIT_ULL(32) | BIT_ULL(24) | BIT_ULL(6) |
+-		    BIT_ULL(2) | BIT_ULL(1));
++		    ((u64)NPC_EC_OIP4_CSUM << 32) | (NPC_EC_IIP4_CSUM << 24) |
++		    BIT_ULL(7) | BIT_ULL(6) | BIT_ULL(2) | BIT_ULL(1));
+ 
+ 	/* Set RX and TX side MCAM search key size.
+ 	 * LA..LD (ltype only) + Channel
+@@ -1318,7 +1319,7 @@ static void npc_unmap_mcam_entry_and_cntr(struct rvu *rvu,
+ 					  int blkaddr, u16 entry, u16 cntr)
+ {
+ 	u16 index = entry & (mcam->banksize - 1);
+-	u16 bank = npc_get_bank(mcam, entry);
++	u32 bank = npc_get_bank(mcam, entry);
+ 
+ 	/* Remove mapping and reduce counter's refcnt */
+ 	mcam->entry2cntr_map[entry] = NPC_MCAM_INVALID_MAP;
+@@ -1879,8 +1880,8 @@ int rvu_mbox_handler_npc_mcam_shift_entry(struct rvu *rvu,
+ 	struct npc_mcam *mcam = &rvu->hw->mcam;
+ 	u16 pcifunc = req->hdr.pcifunc;
+ 	u16 old_entry, new_entry;
++	int blkaddr, rc = 0;
+ 	u16 index, cntr;
+-	int blkaddr, rc;
+ 
+ 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+ 	if (blkaddr < 0)
+@@ -2081,10 +2082,11 @@ int rvu_mbox_handler_npc_mcam_unmap_counter(struct rvu *rvu,
+ 		index = find_next_bit(mcam->bmap, mcam->bmap_entries, entry);
+ 		if (index >= mcam->bmap_entries)
+ 			break;
++		entry = index + 1;
++
+ 		if (mcam->entry2cntr_map[index] != req->cntr)
+ 			continue;
+ 
+-		entry = index + 1;
+ 		npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr,
+ 					      index, req->cntr);
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index bf5cf022e279d..4cba110f6ef8c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -376,6 +376,48 @@ static void mlx5_devlink_set_params_init_values(struct devlink *devlink)
+ #endif
+ }
+ 
++#define MLX5_TRAP_DROP(_id, _group_id)					\
++	DEVLINK_TRAP_GENERIC(DROP, DROP, _id,				\
++			     DEVLINK_TRAP_GROUP_GENERIC_ID_##_group_id, \
++			     DEVLINK_TRAP_METADATA_TYPE_F_IN_PORT)
++
++static const struct devlink_trap mlx5_traps_arr[] = {
++	MLX5_TRAP_DROP(INGRESS_VLAN_FILTER, L2_DROPS),
++};
++
++static const struct devlink_trap_group mlx5_trap_groups_arr[] = {
++	DEVLINK_TRAP_GROUP_GENERIC(L2_DROPS, 0),
++};
++
++static int mlx5_devlink_traps_register(struct devlink *devlink)
++{
++	struct mlx5_core_dev *core_dev = devlink_priv(devlink);
++	int err;
++
++	err = devlink_trap_groups_register(devlink, mlx5_trap_groups_arr,
++					   ARRAY_SIZE(mlx5_trap_groups_arr));
++	if (err)
++		return err;
++
++	err = devlink_traps_register(devlink, mlx5_traps_arr, ARRAY_SIZE(mlx5_traps_arr),
++				     &core_dev->priv);
++	if (err)
++		goto err_trap_group;
++	return 0;
++
++err_trap_group:
++	devlink_trap_groups_unregister(devlink, mlx5_trap_groups_arr,
++				       ARRAY_SIZE(mlx5_trap_groups_arr));
++	return err;
++}
++
++static void mlx5_devlink_traps_unregister(struct devlink *devlink)
++{
++	devlink_traps_unregister(devlink, mlx5_traps_arr, ARRAY_SIZE(mlx5_traps_arr));
++	devlink_trap_groups_unregister(devlink, mlx5_trap_groups_arr,
++				       ARRAY_SIZE(mlx5_trap_groups_arr));
++}
++
+ int mlx5_devlink_register(struct devlink *devlink, struct device *dev)
+ {
+ 	int err;
+@@ -390,8 +432,16 @@ int mlx5_devlink_register(struct devlink *devlink, struct device *dev)
+ 		goto params_reg_err;
+ 	mlx5_devlink_set_params_init_values(devlink);
+ 	devlink_params_publish(devlink);
++
++	err = mlx5_devlink_traps_register(devlink);
++	if (err)
++		goto traps_reg_err;
++
+ 	return 0;
+ 
++traps_reg_err:
++	devlink_params_unregister(devlink, mlx5_devlink_params,
++				  ARRAY_SIZE(mlx5_devlink_params));
+ params_reg_err:
+ 	devlink_unregister(devlink);
+ 	return err;
+@@ -399,6 +449,8 @@ params_reg_err:
+ 
+ void mlx5_devlink_unregister(struct devlink *devlink)
+ {
++	mlx5_devlink_traps_unregister(devlink);
++	devlink_params_unpublish(devlink);
+ 	devlink_params_unregister(devlink, mlx5_devlink_params,
+ 				  ARRAY_SIZE(mlx5_devlink_params));
+ 	devlink_unregister(devlink);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+index dc744702aee4a..000ca294b0a0a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+@@ -262,18 +262,12 @@ struct ttc_params {
+ 
+ void mlx5e_set_ttc_basic_params(struct mlx5e_priv *priv, struct ttc_params *ttc_params);
+ void mlx5e_set_ttc_ft_params(struct ttc_params *ttc_params);
+-void mlx5e_set_inner_ttc_ft_params(struct ttc_params *ttc_params);
+ 
+ int mlx5e_create_ttc_table(struct mlx5e_priv *priv, struct ttc_params *params,
+ 			   struct mlx5e_ttc_table *ttc);
+ void mlx5e_destroy_ttc_table(struct mlx5e_priv *priv,
+ 			     struct mlx5e_ttc_table *ttc);
+ 
+-int mlx5e_create_inner_ttc_table(struct mlx5e_priv *priv, struct ttc_params *params,
+-				 struct mlx5e_ttc_table *ttc);
+-void mlx5e_destroy_inner_ttc_table(struct mlx5e_priv *priv,
+-				   struct mlx5e_ttc_table *ttc);
+-
+ void mlx5e_destroy_flow_table(struct mlx5e_flow_table *ft);
+ int mlx5e_ttc_fwd_dest(struct mlx5e_priv *priv, enum mlx5e_traffic_types type,
+ 		       struct mlx5_flow_destination *new_dest);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+index 93877becfae26..f405c256b3cd0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+@@ -1138,7 +1138,7 @@ void mlx5e_set_ttc_basic_params(struct mlx5e_priv *priv,
+ 	ttc_params->inner_ttc = &priv->fs.inner_ttc;
+ }
+ 
+-void mlx5e_set_inner_ttc_ft_params(struct ttc_params *ttc_params)
++static void mlx5e_set_inner_ttc_ft_params(struct ttc_params *ttc_params)
+ {
+ 	struct mlx5_flow_table_attr *ft_attr = &ttc_params->ft_attr;
+ 
+@@ -1157,8 +1157,8 @@ void mlx5e_set_ttc_ft_params(struct ttc_params *ttc_params)
+ 	ft_attr->prio = MLX5E_NIC_PRIO;
+ }
+ 
+-int mlx5e_create_inner_ttc_table(struct mlx5e_priv *priv, struct ttc_params *params,
+-				 struct mlx5e_ttc_table *ttc)
++static int mlx5e_create_inner_ttc_table(struct mlx5e_priv *priv, struct ttc_params *params,
++					struct mlx5e_ttc_table *ttc)
+ {
+ 	struct mlx5e_flow_table *ft = &ttc->ft;
+ 	int err;
+@@ -1188,8 +1188,8 @@ err:
+ 	return err;
+ }
+ 
+-void mlx5e_destroy_inner_ttc_table(struct mlx5e_priv *priv,
+-				   struct mlx5e_ttc_table *ttc)
++static void mlx5e_destroy_inner_ttc_table(struct mlx5e_priv *priv,
++					  struct mlx5e_ttc_table *ttc)
+ {
+ 	if (!mlx5e_tunnel_inner_ft_supported(priv->mdev))
+ 		return;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 6b4a3d90c9f7f..6974090a7efac 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -2803,6 +2803,14 @@ static int mlx5e_modify_tirs_lro(struct mlx5e_priv *priv)
+ 		err = mlx5_core_modify_tir(mdev, priv->indir_tir[tt].tirn, in);
+ 		if (err)
+ 			goto free_in;
++
++		/* Verify inner tirs resources allocated */
++		if (!priv->inner_indir_tir[0].tirn)
++			continue;
++
++		err = mlx5_core_modify_tir(mdev, priv->inner_indir_tir[tt].tirn, in);
++		if (err)
++			goto free_in;
+ 	}
+ 
+ 	for (ix = 0; ix < priv->max_nch; ix++) {
+@@ -4928,7 +4936,14 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ 	netdev->hw_enc_features  |= NETIF_F_HW_VLAN_CTAG_TX;
+ 	netdev->hw_enc_features  |= NETIF_F_HW_VLAN_CTAG_RX;
+ 
++	/* Tunneled LRO is not supported in the driver, and the same RQs are
++	 * shared between inner and outer TIRs, so the driver can't disable LRO
++	 * for inner TIRs while having it enabled for outer TIRs. Due to this,
++	 * block LRO altogether if the firmware declares tunneled LRO support.
++	 */
+ 	if (!!MLX5_CAP_ETH(mdev, lro_cap) &&
++	    !MLX5_CAP_ETH(mdev, tunnel_lro_vxlan) &&
++	    !MLX5_CAP_ETH(mdev, tunnel_lro_gre) &&
+ 	    mlx5e_check_fragmented_striding_rq_cap(mdev))
+ 		netdev->vlan_features    |= NETIF_F_LRO;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index c9c2962ad49fb..5801f55ff0771 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -2564,8 +2564,11 @@ int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode,
+ 
+ 	switch (MLX5_CAP_ETH(dev, wqe_inline_mode)) {
+ 	case MLX5_CAP_INLINE_MODE_NOT_REQUIRED:
+-		if (mode == DEVLINK_ESWITCH_INLINE_MODE_NONE)
++		if (mode == DEVLINK_ESWITCH_INLINE_MODE_NONE) {
++			err = 0;
+ 			goto out;
++		}
++
+ 		fallthrough;
+ 	case MLX5_CAP_INLINE_MODE_L2:
+ 		NL_SET_ERR_MSG_MOD(extack, "Inline mode can't be set");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+index 97b5fcb1f4064..5c6a376aa62ec 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+@@ -337,17 +337,6 @@ static int mlx5i_create_flow_steering(struct mlx5e_priv *priv)
+ 	}
+ 
+ 	mlx5e_set_ttc_basic_params(priv, &ttc_params);
+-	mlx5e_set_inner_ttc_ft_params(&ttc_params);
+-	for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++)
+-		ttc_params.indir_tirn[tt] = priv->inner_indir_tir[tt].tirn;
+-
+-	err = mlx5e_create_inner_ttc_table(priv, &ttc_params, &priv->fs.inner_ttc);
+-	if (err) {
+-		netdev_err(priv->netdev, "Failed to create inner ttc table, err=%d\n",
+-			   err);
+-		goto err_destroy_arfs_tables;
+-	}
+-
+ 	mlx5e_set_ttc_ft_params(&ttc_params);
+ 	for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++)
+ 		ttc_params.indir_tirn[tt] = priv->indir_tir[tt].tirn;
+@@ -356,13 +345,11 @@ static int mlx5i_create_flow_steering(struct mlx5e_priv *priv)
+ 	if (err) {
+ 		netdev_err(priv->netdev, "Failed to create ttc table, err=%d\n",
+ 			   err);
+-		goto err_destroy_inner_ttc_table;
++		goto err_destroy_arfs_tables;
+ 	}
+ 
+ 	return 0;
+ 
+-err_destroy_inner_ttc_table:
+-	mlx5e_destroy_inner_ttc_table(priv, &priv->fs.inner_ttc);
+ err_destroy_arfs_tables:
+ 	mlx5e_arfs_destroy_tables(priv);
+ 
+@@ -372,7 +359,6 @@ err_destroy_arfs_tables:
+ static void mlx5i_destroy_flow_steering(struct mlx5e_priv *priv)
+ {
+ 	mlx5e_destroy_ttc_table(priv, &priv->fs.ttc);
+-	mlx5e_destroy_inner_ttc_table(priv, &priv->fs.inner_ttc);
+ 	mlx5e_arfs_destroy_tables(priv);
+ }
+ 
+@@ -397,7 +383,7 @@ static int mlx5i_init_rx(struct mlx5e_priv *priv)
+ 	if (err)
+ 		goto err_destroy_indirect_rqts;
+ 
+-	err = mlx5e_create_indirect_tirs(priv, true);
++	err = mlx5e_create_indirect_tirs(priv, false);
+ 	if (err)
+ 		goto err_destroy_direct_rqts;
+ 
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_devlink.c b/drivers/net/ethernet/pensando/ionic/ionic_devlink.c
+index 51d64718ed9f0..3d94064c685db 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_devlink.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_devlink.c
+@@ -91,20 +91,20 @@ int ionic_devlink_register(struct ionic *ionic)
+ 	attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
+ 	devlink_port_attrs_set(&ionic->dl_port, &attrs);
+ 	err = devlink_port_register(dl, &ionic->dl_port, 0);
+-	if (err)
++	if (err) {
+ 		dev_err(ionic->dev, "devlink_port_register failed: %d\n", err);
+-	else
+-		devlink_port_type_eth_set(&ionic->dl_port,
+-					  ionic->lif->netdev);
++		devlink_unregister(dl);
++		return err;
++	}
+ 
+-	return err;
++	devlink_port_type_eth_set(&ionic->dl_port, ionic->lif->netdev);
++	return 0;
+ }
+ 
+ void ionic_devlink_unregister(struct ionic *ionic)
+ {
+ 	struct devlink *dl = priv_to_devlink(ionic);
+ 
+-	if (ionic->dl_port.registered)
+-		devlink_port_unregister(&ionic->dl_port);
++	devlink_port_unregister(&ionic->dl_port);
+ 	devlink_unregister(dl);
+ }
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index 5a3b65a6eb4f2..36bcb5db3be97 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -434,7 +434,7 @@ qcaspi_receive(struct qcaspi *qca)
+ 				skb_put(qca->rx_skb, retcode);
+ 				qca->rx_skb->protocol = eth_type_trans(
+ 					qca->rx_skb, qca->rx_skb->dev);
+-				qca->rx_skb->ip_summed = CHECKSUM_UNNECESSARY;
++				skb_checksum_none_assert(qca->rx_skb);
+ 				netif_rx_ni(qca->rx_skb);
+ 				qca->rx_skb = netdev_alloc_skb_ip_align(net_dev,
+ 					net_dev->mtu + VLAN_ETH_HLEN);
+diff --git a/drivers/net/ethernet/qualcomm/qca_uart.c b/drivers/net/ethernet/qualcomm/qca_uart.c
+index 362b4f5c162c0..0b7301db20ed4 100644
+--- a/drivers/net/ethernet/qualcomm/qca_uart.c
++++ b/drivers/net/ethernet/qualcomm/qca_uart.c
+@@ -107,7 +107,7 @@ qca_tty_receive(struct serdev_device *serdev, const unsigned char *data,
+ 			skb_put(qca->rx_skb, retcode);
+ 			qca->rx_skb->protocol = eth_type_trans(
+ 						qca->rx_skb, qca->rx_skb->dev);
+-			qca->rx_skb->ip_summed = CHECKSUM_UNNECESSARY;
++			skb_checksum_none_assert(qca->rx_skb);
+ 			netif_rx_ni(qca->rx_skb);
+ 			qca->rx_skb = netdev_alloc_skb_ip_align(netdev,
+ 								netdev->mtu +
+diff --git a/drivers/net/wireless/ath/ath6kl/wmi.c b/drivers/net/wireless/ath/ath6kl/wmi.c
+index dbc47702a268d..5bacddee83449 100644
+--- a/drivers/net/wireless/ath/ath6kl/wmi.c
++++ b/drivers/net/wireless/ath/ath6kl/wmi.c
+@@ -2504,8 +2504,10 @@ static int ath6kl_wmi_sync_point(struct wmi *wmi, u8 if_idx)
+ 		goto free_data_skb;
+ 
+ 	for (index = 0; index < num_pri_streams; index++) {
+-		if (WARN_ON(!data_sync_bufs[index].skb))
++		if (WARN_ON(!data_sync_bufs[index].skb)) {
++			ret = -ENOMEM;
+ 			goto free_data_skb;
++		}
+ 
+ 		ep_id = ath6kl_ac2_endpoint_id(wmi->parent_dev,
+ 					       data_sync_bufs[index].
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index 603aff421e38e..1f12dfb33938a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -2073,7 +2073,7 @@ cleanup:
+ 
+ 	err = brcmf_pcie_probe(pdev, NULL);
+ 	if (err)
+-		brcmf_err(bus, "probe after resume failed, err=%d\n", err);
++		__brcmf_err(NULL, __func__, "probe after resume failed, err=%d\n", err);
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index 3e5a35e26ad34..5e4faf9ce4bbe 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -229,8 +229,8 @@ found:
+ IWL_EXPORT_SYMBOL(iwl_acpi_get_wifi_pkg);
+ 
+ int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt,
+-		     __le32 *black_list_array,
+-		     int *black_list_size)
++		     __le32 *block_list_array,
++		     int *block_list_size)
+ {
+ 	union acpi_object *wifi_pkg, *data;
+ 	int ret, tbl_rev, i;
+@@ -254,47 +254,47 @@ int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt,
+ 		goto out_free;
+ 	}
+ 
+-	enabled = !!wifi_pkg->package.elements[0].integer.value;
++	enabled = !!wifi_pkg->package.elements[1].integer.value;
+ 
+ 	if (!enabled) {
+-		*black_list_size = -1;
++		*block_list_size = -1;
+ 		IWL_DEBUG_RADIO(fwrt, "TAS not enabled\n");
+ 		ret = 0;
+ 		goto out_free;
+ 	}
+ 
+-	if (wifi_pkg->package.elements[1].type != ACPI_TYPE_INTEGER ||
+-	    wifi_pkg->package.elements[1].integer.value >
++	if (wifi_pkg->package.elements[2].type != ACPI_TYPE_INTEGER ||
++	    wifi_pkg->package.elements[2].integer.value >
+ 	    APCI_WTAS_BLACK_LIST_MAX) {
+ 		IWL_DEBUG_RADIO(fwrt, "TAS invalid array size %llu\n",
+ 				wifi_pkg->package.elements[1].integer.value);
+ 		ret = -EINVAL;
+ 		goto out_free;
+ 	}
+-	*black_list_size = wifi_pkg->package.elements[1].integer.value;
++	*block_list_size = wifi_pkg->package.elements[2].integer.value;
+ 
+-	IWL_DEBUG_RADIO(fwrt, "TAS array size %d\n", *black_list_size);
+-	if (*black_list_size > APCI_WTAS_BLACK_LIST_MAX) {
++	IWL_DEBUG_RADIO(fwrt, "TAS array size %d\n", *block_list_size);
++	if (*block_list_size > APCI_WTAS_BLACK_LIST_MAX) {
+ 		IWL_DEBUG_RADIO(fwrt, "TAS invalid array size value %u\n",
+-				*black_list_size);
++				*block_list_size);
+ 		ret = -EINVAL;
+ 		goto out_free;
+ 	}
+ 
+-	for (i = 0; i < *black_list_size; i++) {
++	for (i = 0; i < *block_list_size; i++) {
+ 		u32 country;
+ 
+-		if (wifi_pkg->package.elements[2 + i].type !=
++		if (wifi_pkg->package.elements[3 + i].type !=
+ 		    ACPI_TYPE_INTEGER) {
+ 			IWL_DEBUG_RADIO(fwrt,
+-					"TAS invalid array elem %d\n", 2 + i);
++					"TAS invalid array elem %d\n", 3 + i);
+ 			ret = -EINVAL;
+ 			goto out_free;
+ 		}
+ 
+-		country = wifi_pkg->package.elements[2 + i].integer.value;
+-		black_list_array[i] = cpu_to_le32(country);
+-		IWL_DEBUG_RADIO(fwrt, "TAS black list country %d\n", country);
++		country = wifi_pkg->package.elements[3 + i].integer.value;
++		block_list_array[i] = cpu_to_le32(country);
++		IWL_DEBUG_RADIO(fwrt, "TAS block list country %d\n", country);
+ 	}
+ 
+ 	ret = 0;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.h b/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
+index bddf8a44e163f..dfd341421adcf 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
+@@ -100,7 +100,7 @@
+ #define ACPI_ECKV_WIFI_DATA_SIZE	2
+ 
+ /*
+- * 1 type, 1 enabled, 1 black list size, 16 black list array
++ * 1 type, 1 enabled, 1 block list size, 16 block list array
+  */
+ #define APCI_WTAS_BLACK_LIST_MAX	16
+ #define ACPI_WTAS_WIFI_DATA_SIZE	(3 + APCI_WTAS_BLACK_LIST_MAX)
+@@ -197,8 +197,8 @@ bool iwl_sar_geo_support(struct iwl_fw_runtime *fwrt);
+ int iwl_sar_geo_init(struct iwl_fw_runtime *fwrt,
+ 		     struct iwl_per_chain_offset *table, u32 n_bands);
+ 
+-int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt, __le32 *black_list_array,
+-		     int *black_list_size);
++int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt, __le32 *block_list_array,
++		     int *block_list_size);
+ 
+ #else /* CONFIG_ACPI */
+ 
+@@ -269,8 +269,8 @@ static inline bool iwl_sar_geo_support(struct iwl_fw_runtime *fwrt)
+ }
+ 
+ static inline int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt,
+-				   __le32 *black_list_array,
+-				   int *black_list_size)
++				   __le32 *block_list_array,
++				   int *block_list_size)
+ {
+ 	return -ENOENT;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h b/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
+index 8cc36dbb23117..21543bc21c16f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
+@@ -323,7 +323,7 @@ enum iwl_legacy_cmds {
+ 
+ 	/**
+ 	 * @SCAN_OFFLOAD_UPDATE_PROFILES_CMD:
+-	 * update scan offload (scheduled scan) profiles/blacklist/etc.
++	 * update scan offload (scheduled scan) profiles/blocklist/etc.
+ 	 */
+ 	SCAN_OFFLOAD_UPDATE_PROFILES_CMD = 0x6E,
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h b/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
+index 55573168444e8..dd79bac98657b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
+@@ -449,12 +449,12 @@ enum iwl_mcc_source {
+ #define IWL_TAS_BLACK_LIST_MAX 16
+ /**
+  * struct iwl_tas_config_cmd - configures the TAS
+- * @black_list_size: size of relevant field in black_list_array
+- * @black_list_array: black list countries (without TAS)
++ * @block_list_size: size of relevant field in block_list_array
++ * @block_list_array: block list countries (without TAS)
+  */
+ struct iwl_tas_config_cmd {
+-	__le32 black_list_size;
+-	__le32 black_list_array[IWL_TAS_BLACK_LIST_MAX];
++	__le32 block_list_size;
++	__le32 block_list_array[IWL_TAS_BLACK_LIST_MAX];
+ } __packed; /* TAS_CONFIG_CMD_API_S_VER_2 */
+ 
+ /**
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
+index 5cc33a1b71723..65d6608199664 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
+@@ -8,7 +8,7 @@
+  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+- * Copyright(c) 2018 - 2019 Intel Corporation
++ * Copyright(c) 2018 - 2020 Intel Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of version 2 of the GNU General Public License as
+@@ -31,7 +31,7 @@
+  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+- * Copyright(c) 2018 - 2019 Intel Corporation
++ * Copyright(c) 2018 - 2020 Intel Corporation
+  * All rights reserved.
+  *
+  * Redistribution and use in source and binary forms, with or without
+@@ -117,12 +117,12 @@ enum scan_framework_client {
+ };
+ 
+ /**
+- * struct iwl_scan_offload_blacklist - SCAN_OFFLOAD_BLACKLIST_S
++ * struct iwl_scan_offload_blocklist - SCAN_OFFLOAD_BLACKLIST_S
+  * @ssid:		MAC address to filter out
+  * @reported_rssi:	AP rssi reported to the host
+  * @client_bitmap: clients ignore this entry  - enum scan_framework_client
+  */
+-struct iwl_scan_offload_blacklist {
++struct iwl_scan_offload_blocklist {
+ 	u8 ssid[ETH_ALEN];
+ 	u8 reported_rssi;
+ 	u8 client_bitmap;
+@@ -162,7 +162,7 @@ struct iwl_scan_offload_profile {
+ 
+ /**
+  * struct iwl_scan_offload_profile_cfg_data
+- * @blacklist_len:	length of blacklist
++ * @blocklist_len:	length of blocklist
+  * @num_profiles:	num of profiles in the list
+  * @match_notify:	clients waiting for match found notification
+  * @pass_match:		clients waiting for the results
+@@ -171,7 +171,7 @@ struct iwl_scan_offload_profile {
+  * @reserved:		reserved
+  */
+ struct iwl_scan_offload_profile_cfg_data {
+-	u8 blacklist_len;
++	u8 blocklist_len;
+ 	u8 num_profiles;
+ 	u8 match_notify;
+ 	u8 pass_match;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/file.h b/drivers/net/wireless/intel/iwlwifi/fw/file.h
+index 02c64b988a138..1be9ab186bbd5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/file.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/file.h
+@@ -220,7 +220,7 @@ struct iwl_ucode_capa {
+  *	treats good CRC threshold as a boolean
+  * @IWL_UCODE_TLV_FLAGS_MFP: This uCode image supports MFP (802.11w).
+  * @IWL_UCODE_TLV_FLAGS_UAPSD_SUPPORT: This uCode image supports uAPSD
+- * @IWL_UCODE_TLV_FLAGS_SHORT_BL: 16 entries of black list instead of 64 in scan
++ * @IWL_UCODE_TLV_FLAGS_SHORT_BL: 16 entries of block list instead of 64 in scan
+  *	offload profile config command.
+  * @IWL_UCODE_TLV_FLAGS_D3_6_IPV6_ADDRS: D3 image supports up to six
+  *	(rather than two) IPv6 addresses
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index bd04e4fbbb8ab..1a844c10c442b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -376,7 +376,7 @@ struct iwl_fw_mon_regs {
+  *	mode set
+  * @nvm_hw_section_num: the ID of the HW NVM section
+  * @mac_addr_from_csr: read HW address from CSR registers
+- * @features: hw features, any combination of feature_whitelist
++ * @features: hw features, any combination of feature_passlist
+  * @pwr_tx_backoffs: translation table between power limits and backoffs
+  * @max_tx_agg_size: max TX aggregation size of the ADDBA request/response
+  * @max_ht_ampdu_factor: the exponent of the max length of A-MPDU that the
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index ad374b25e2550..6348dfa61724a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -1109,7 +1109,7 @@ static void iwl_mvm_tas_init(struct iwl_mvm *mvm)
+ 	struct iwl_tas_config_cmd cmd = {};
+ 	int list_size;
+ 
+-	BUILD_BUG_ON(ARRAY_SIZE(cmd.black_list_array) <
++	BUILD_BUG_ON(ARRAY_SIZE(cmd.block_list_array) <
+ 		     APCI_WTAS_BLACK_LIST_MAX);
+ 
+ 	if (!fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_TAS_CFG)) {
+@@ -1117,7 +1117,7 @@ static void iwl_mvm_tas_init(struct iwl_mvm *mvm)
+ 		return;
+ 	}
+ 
+-	ret = iwl_acpi_get_tas(&mvm->fwrt, cmd.black_list_array, &list_size);
++	ret = iwl_acpi_get_tas(&mvm->fwrt, cmd.block_list_array, &list_size);
+ 	if (ret < 0) {
+ 		IWL_DEBUG_RADIO(mvm,
+ 				"TAS table invalid or unavailable. (%d)\n",
+@@ -1129,7 +1129,7 @@ static void iwl_mvm_tas_init(struct iwl_mvm *mvm)
+ 		return;
+ 
+ 	/* list size if TAS enabled can only be non-negative */
+-	cmd.black_list_size = cpu_to_le32((u32)list_size);
++	cmd.block_list_size = cpu_to_le32((u32)list_size);
+ 
+ 	ret = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(REGULATORY_AND_NVM_GROUP,
+ 						TAS_CONFIG),
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+index cbdebefb854ac..5243b84e653cf 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+@@ -115,12 +115,12 @@ static void iwl_mvm_mac_tsf_id_iter(void *_data, u8 *mac,
+ 	 * client in the system.
+ 	 *
+ 	 * The firmware will decide according to the MAC type which
+-	 * will be the master and slave. Clients that need to sync
+-	 * with a remote station will be the master, and an AP or GO
+-	 * will be the slave.
++	 * will be the leader and follower. Clients that need to sync
++	 * with a remote station will be the leader, and an AP or GO
++	 * will be the follower.
+ 	 *
+-	 * Depending on the new interface type it can be slaved to
+-	 * or become the master of an existing interface.
++	 * Depending on the new interface type it can be following
++	 * or become the leader of an existing interface.
+ 	 */
+ 	switch (data->vif->type) {
+ 	case NL80211_IFTYPE_STATION:
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 8cba923b1ec6c..9caff70cbd276 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -2279,9 +2279,9 @@ static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm,
+ 	int ret;
+ 
+ 	/*
+-	 * Re-calculate the tsf id, as the master-slave relations depend on the
+-	 * beacon interval, which was not known when the station interface was
+-	 * added.
++	 * Re-calculate the tsf id, as the leader-follower relations depend
++	 * on the beacon interval, which was not known when the station
++	 * interface was added.
+ 	 */
+ 	if (changes & BSS_CHANGED_ASSOC && bss_conf->assoc) {
+ 		if (vif->bss_conf.he_support &&
+@@ -2499,8 +2499,9 @@ static int iwl_mvm_start_ap_ibss(struct ieee80211_hw *hw,
+ 		goto out_unlock;
+ 
+ 	/*
+-	 * Re-calculate the tsf id, as the master-slave relations depend on the
+-	 * beacon interval, which was not known when the AP interface was added.
++	 * Re-calculate the tsf id, as the leader-follower relations depend on
++	 * the beacon interval, which was not known when the AP interface
++	 * was added.
+ 	 */
+ 	if (vif->type == NL80211_IFTYPE_AP)
+ 		iwl_mvm_mac_ctxt_recalc_tsf_id(mvm, vif);
+@@ -3116,7 +3117,7 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
+ 		 * than 16. We can't avoid connecting at all, so refuse the
+ 		 * station state change, this will cause mac80211 to abandon
+ 		 * attempts to connect to this AP, and eventually wpa_s will
+-		 * blacklist the AP...
++		 * blocklist the AP...
+ 		 */
+ 		if (vif->type == NL80211_IFTYPE_STATION &&
+ 		    vif->bss_conf.beacon_int < 16) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index 875281cf7fc09..aebaad45043fa 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -568,7 +568,7 @@ iwl_mvm_config_sched_scan_profiles(struct iwl_mvm *mvm,
+ {
+ 	struct iwl_scan_offload_profile *profile;
+ 	struct iwl_scan_offload_profile_cfg_v1 *profile_cfg_v1;
+-	struct iwl_scan_offload_blacklist *blacklist;
++	struct iwl_scan_offload_blocklist *blocklist;
+ 	struct iwl_scan_offload_profile_cfg_data *data;
+ 	int max_profiles = iwl_umac_scan_get_max_profiles(mvm->fw);
+ 	int profile_cfg_size = sizeof(*data) +
+@@ -579,7 +579,7 @@ iwl_mvm_config_sched_scan_profiles(struct iwl_mvm *mvm,
+ 		.dataflags[0] = IWL_HCMD_DFL_NOCOPY,
+ 		.dataflags[1] = IWL_HCMD_DFL_NOCOPY,
+ 	};
+-	int blacklist_len;
++	int blocklist_len;
+ 	int i;
+ 	int ret;
+ 
+@@ -587,22 +587,22 @@ iwl_mvm_config_sched_scan_profiles(struct iwl_mvm *mvm,
+ 		return -EIO;
+ 
+ 	if (mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_SHORT_BL)
+-		blacklist_len = IWL_SCAN_SHORT_BLACKLIST_LEN;
++		blocklist_len = IWL_SCAN_SHORT_BLACKLIST_LEN;
+ 	else
+-		blacklist_len = IWL_SCAN_MAX_BLACKLIST_LEN;
++		blocklist_len = IWL_SCAN_MAX_BLACKLIST_LEN;
+ 
+-	blacklist = kcalloc(blacklist_len, sizeof(*blacklist), GFP_KERNEL);
+-	if (!blacklist)
++	blocklist = kcalloc(blocklist_len, sizeof(*blocklist), GFP_KERNEL);
++	if (!blocklist)
+ 		return -ENOMEM;
+ 
+ 	profile_cfg_v1 = kzalloc(profile_cfg_size, GFP_KERNEL);
+ 	if (!profile_cfg_v1) {
+ 		ret = -ENOMEM;
+-		goto free_blacklist;
++		goto free_blocklist;
+ 	}
+ 
+-	cmd.data[0] = blacklist;
+-	cmd.len[0] = sizeof(*blacklist) * blacklist_len;
++	cmd.data[0] = blocklist;
++	cmd.len[0] = sizeof(*blocklist) * blocklist_len;
+ 	cmd.data[1] = profile_cfg_v1;
+ 
+ 	/* if max_profile is MAX_PROFILES_V2, we have the new API */
+@@ -615,7 +615,7 @@ iwl_mvm_config_sched_scan_profiles(struct iwl_mvm *mvm,
+ 		data = &profile_cfg_v1->data;
+ 	}
+ 
+-	/* No blacklist configuration */
++	/* No blocklist configuration */
+ 	data->num_profiles = req->n_match_sets;
+ 	data->active_clients = SCAN_CLIENT_SCHED_SCAN;
+ 	data->pass_match = SCAN_CLIENT_SCHED_SCAN;
+@@ -639,8 +639,8 @@ iwl_mvm_config_sched_scan_profiles(struct iwl_mvm *mvm,
+ 
+ 	ret = iwl_mvm_send_cmd(mvm, &cmd);
+ 	kfree(profile_cfg_v1);
+-free_blacklist:
+-	kfree(blacklist);
++free_blocklist:
++	kfree(blocklist);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index eeb70560b746e..90b12e201795c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -646,6 +646,7 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 	IWL_DEV_INFO(0xA0F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, NULL),
+ 	IWL_DEV_INFO(0xA0F0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0xA0F0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
++	IWL_DEV_INFO(0xA0F0, 0x6074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0x02F0, 0x0070, iwl_ax201_cfg_quz_hr, NULL),
+ 	IWL_DEV_INFO(0x02F0, 0x0074, iwl_ax201_cfg_quz_hr, NULL),
+ 	IWL_DEV_INFO(0x02F0, 0x6074, iwl_ax201_cfg_quz_hr, NULL),
+diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
+index 99b21a2c83861..f4a26f16f00f4 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
+@@ -1038,8 +1038,10 @@ static int rsi_load_9116_firmware(struct rsi_hw *adapter)
+ 	}
+ 
+ 	ta_firmware = kmemdup(fw_entry->data, fw_entry->size, GFP_KERNEL);
+-	if (!ta_firmware)
++	if (!ta_firmware) {
++		status = -ENOMEM;
+ 		goto fail_release_fw;
++	}
+ 	fw_p = ta_firmware;
+ 	instructions_sz = fw_entry->size;
+ 	rsi_dbg(INFO_ZONE, "FW Length = %d bytes\n", instructions_sz);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index 00b5589847985..3b13de59605e1 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -814,6 +814,7 @@ static int rsi_probe(struct usb_interface *pfunction,
+ 	} else {
+ 		rsi_dbg(ERR_ZONE, "%s: Unsupported RSI device id 0x%x\n",
+ 			__func__, id->idProduct);
++		status = -ENODEV;
+ 		goto err1;
+ 	}
+ 
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index e6d58402b829d..c6c2e2361b2fe 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -735,13 +735,13 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
+ 	if (ret)
+ 		return ret;
+ 
+-	ctrl->ctrl.queue_count = nr_io_queues + 1;
+-	if (ctrl->ctrl.queue_count < 2) {
++	if (nr_io_queues == 0) {
+ 		dev_err(ctrl->ctrl.device,
+ 			"unable to set any I/O queues\n");
+ 		return -ENOMEM;
+ 	}
+ 
++	ctrl->ctrl.queue_count = nr_io_queues + 1;
+ 	dev_info(ctrl->ctrl.device,
+ 		"creating %d I/O queues.\n", nr_io_queues);
+ 
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 82b2611d39a2f..5b11d8a23813f 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1755,13 +1755,13 @@ static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
+ 	if (ret)
+ 		return ret;
+ 
+-	ctrl->queue_count = nr_io_queues + 1;
+-	if (ctrl->queue_count < 2) {
++	if (nr_io_queues == 0) {
+ 		dev_err(ctrl->device,
+ 			"unable to set any I/O queues\n");
+ 		return -ENOMEM;
+ 	}
+ 
++	ctrl->queue_count = nr_io_queues + 1;
+ 	dev_info(ctrl->device,
+ 		"creating %d I/O queues.\n", nr_io_queues);
+ 
+diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
+index 42bd12b8bf00c..e62d3d0fa6c85 100644
+--- a/drivers/nvme/target/fabrics-cmd.c
++++ b/drivers/nvme/target/fabrics-cmd.c
+@@ -120,6 +120,7 @@ static u16 nvmet_install_queue(struct nvmet_ctrl *ctrl, struct nvmet_req *req)
+ 	if (!sqsize) {
+ 		pr_warn("queue size zero!\n");
+ 		req->error_loc = offsetof(struct nvmf_connect_command, sqsize);
++		req->cqe->result.u32 = IPO_IATTR_CONNECT_SQE(sqsize);
+ 		ret = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
+ 		goto err;
+ 	}
+@@ -263,11 +264,11 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
+ 	}
+ 
+ 	status = nvmet_install_queue(ctrl, req);
+-	if (status) {
+-		/* pass back cntlid that had the issue of installing queue */
+-		req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
++	if (status)
+ 		goto out_ctrl_put;
+-	}
++
++	/* pass back cntlid for successful completion */
++	req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
+ 
+ 	pr_debug("adding queue %d to ctrl %d.\n", qid, ctrl->cntlid);
+ 
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 9e971fffeb6a3..29f5d699fa06d 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2469,7 +2469,14 @@ static int __pci_enable_wake(struct pci_dev *dev, pci_power_t state, bool enable
+ 	if (enable) {
+ 		int error;
+ 
+-		if (pci_pme_capable(dev, state))
++		/*
++		 * Enable PME signaling if the device can signal PME from
++		 * D3cold regardless of whether or not it can signal PME from
++		 * the current target state, because that will allow it to
++		 * signal PME when the hierarchy above it goes into D3cold and
++		 * the device itself ends up in D3cold as a result of that.
++		 */
++		if (pci_pme_capable(dev, state) || pci_pme_capable(dev, PCI_D3cold))
+ 			pci_pme_active(dev, true);
+ 		else
+ 			ret = 1;
+@@ -2573,16 +2580,20 @@ static pci_power_t pci_target_state(struct pci_dev *dev, bool wakeup)
+ 	if (dev->current_state == PCI_D3cold)
+ 		target_state = PCI_D3cold;
+ 
+-	if (wakeup) {
++	if (wakeup && dev->pme_support) {
++		pci_power_t state = target_state;
++
+ 		/*
+ 		 * Find the deepest state from which the device can generate
+ 		 * PME#.
+ 		 */
+-		if (dev->pme_support) {
+-			while (target_state
+-			      && !(dev->pme_support & (1 << target_state)))
+-				target_state--;
+-		}
++		while (state && !(dev->pme_support & (1 << state)))
++			state--;
++
++		if (state)
++			return state;
++		else if (dev->pme_support & 1)
++			return PCI_D0;
+ 	}
+ 
+ 	return target_state;
+diff --git a/drivers/power/supply/axp288_fuel_gauge.c b/drivers/power/supply/axp288_fuel_gauge.c
+index 148eb8105803a..be24529157be2 100644
+--- a/drivers/power/supply/axp288_fuel_gauge.c
++++ b/drivers/power/supply/axp288_fuel_gauge.c
+@@ -149,7 +149,7 @@ static int fuel_gauge_reg_readb(struct axp288_fg_info *info, int reg)
+ 	}
+ 
+ 	if (ret < 0) {
+-		dev_err(&info->pdev->dev, "axp288 reg read err:%d\n", ret);
++		dev_err(&info->pdev->dev, "Error reading reg 0x%02x err: %d\n", reg, ret);
+ 		return ret;
+ 	}
+ 
+@@ -163,7 +163,7 @@ static int fuel_gauge_reg_writeb(struct axp288_fg_info *info, int reg, u8 val)
+ 	ret = regmap_write(info->regmap, reg, (unsigned int)val);
+ 
+ 	if (ret < 0)
+-		dev_err(&info->pdev->dev, "axp288 reg write err:%d\n", ret);
++		dev_err(&info->pdev->dev, "Error writing reg 0x%02x err: %d\n", reg, ret);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/power/supply/cw2015_battery.c b/drivers/power/supply/cw2015_battery.c
+index 0146f1bfc29bb..de1fa71be1e83 100644
+--- a/drivers/power/supply/cw2015_battery.c
++++ b/drivers/power/supply/cw2015_battery.c
+@@ -673,7 +673,9 @@ static int cw_bat_probe(struct i2c_client *client)
+ 						    &cw2015_bat_desc,
+ 						    &psy_cfg);
+ 	if (IS_ERR(cw_bat->rk_bat)) {
+-		dev_err(cw_bat->dev, "Failed to register power supply\n");
++		/* try again if this happens */
++		dev_err_probe(&client->dev, PTR_ERR(cw_bat->rk_bat),
++			"Failed to register power supply\n");
+ 		return PTR_ERR(cw_bat->rk_bat);
+ 	}
+ 
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index 794caf03658d7..48d3985eaa8ad 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -738,7 +738,7 @@ static inline void max17042_override_por_values(struct max17042_chip *chip)
+ 	struct max17042_config_data *config = chip->pdata->config_data;
+ 
+ 	max17042_override_por(map, MAX17042_TGAIN, config->tgain);
+-	max17042_override_por(map, MAx17042_TOFF, config->toff);
++	max17042_override_por(map, MAX17042_TOFF, config->toff);
+ 	max17042_override_por(map, MAX17042_CGAIN, config->cgain);
+ 	max17042_override_por(map, MAX17042_COFF, config->coff);
+ 
+diff --git a/drivers/power/supply/smb347-charger.c b/drivers/power/supply/smb347-charger.c
+index 8cfbd8d6b4786..912e2184f918c 100644
+--- a/drivers/power/supply/smb347-charger.c
++++ b/drivers/power/supply/smb347-charger.c
+@@ -56,6 +56,7 @@
+ #define CFG_PIN_EN_CTRL_ACTIVE_LOW		0x60
+ #define CFG_PIN_EN_APSD_IRQ			BIT(1)
+ #define CFG_PIN_EN_CHARGER_ERROR		BIT(2)
++#define CFG_PIN_EN_CTRL				BIT(4)
+ #define CFG_THERM				0x07
+ #define CFG_THERM_SOFT_HOT_COMPENSATION_MASK	0x03
+ #define CFG_THERM_SOFT_HOT_COMPENSATION_SHIFT	0
+@@ -725,6 +726,15 @@ static int smb347_hw_init(struct smb347_charger *smb)
+ 	if (ret < 0)
+ 		goto fail;
+ 
++	/* Activate pin control, making it writable. */
++	switch (smb->enable_control) {
++	case SMB3XX_CHG_ENABLE_PIN_ACTIVE_LOW:
++	case SMB3XX_CHG_ENABLE_PIN_ACTIVE_HIGH:
++		ret = regmap_set_bits(smb->regmap, CFG_PIN, CFG_PIN_EN_CTRL);
++		if (ret < 0)
++			goto fail;
++	}
++
+ 	/*
+ 	 * Make the charging functionality controllable by a write to the
+ 	 * command register unless pin control is specified in the platform
+diff --git a/drivers/regulator/tps65910-regulator.c b/drivers/regulator/tps65910-regulator.c
+index 1d5b0a1b86f78..06cbe60c990f9 100644
+--- a/drivers/regulator/tps65910-regulator.c
++++ b/drivers/regulator/tps65910-regulator.c
+@@ -1211,12 +1211,10 @@ static int tps65910_probe(struct platform_device *pdev)
+ 
+ 		rdev = devm_regulator_register(&pdev->dev, &pmic->desc[i],
+ 					       &config);
+-		if (IS_ERR(rdev)) {
+-			dev_err(tps65910->dev,
+-				"failed to register %s regulator\n",
+-				pdev->name);
+-			return PTR_ERR(rdev);
+-		}
++		if (IS_ERR(rdev))
++			return dev_err_probe(tps65910->dev, PTR_ERR(rdev),
++					     "failed to register %s regulator\n",
++					     pdev->name);
+ 
+ 		/* Save regulator for cleanup */
+ 		pmic->rdev[i] = rdev;
+diff --git a/drivers/regulator/vctrl-regulator.c b/drivers/regulator/vctrl-regulator.c
+index cbadb1c996790..d2a37978fc3a8 100644
+--- a/drivers/regulator/vctrl-regulator.c
++++ b/drivers/regulator/vctrl-regulator.c
+@@ -37,7 +37,6 @@ struct vctrl_voltage_table {
+ struct vctrl_data {
+ 	struct regulator_dev *rdev;
+ 	struct regulator_desc desc;
+-	struct regulator *ctrl_reg;
+ 	bool enabled;
+ 	unsigned int min_slew_down_rate;
+ 	unsigned int ovp_threshold;
+@@ -82,7 +81,12 @@ static int vctrl_calc_output_voltage(struct vctrl_data *vctrl, int ctrl_uV)
+ static int vctrl_get_voltage(struct regulator_dev *rdev)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	int ctrl_uV = regulator_get_voltage_rdev(vctrl->ctrl_reg->rdev);
++	int ctrl_uV;
++
++	if (!rdev->supply)
++		return -EPROBE_DEFER;
++
++	ctrl_uV = regulator_get_voltage_rdev(rdev->supply->rdev);
+ 
+ 	return vctrl_calc_output_voltage(vctrl, ctrl_uV);
+ }
+@@ -92,14 +96,19 @@ static int vctrl_set_voltage(struct regulator_dev *rdev,
+ 			     unsigned int *selector)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	struct regulator *ctrl_reg = vctrl->ctrl_reg;
+-	int orig_ctrl_uV = regulator_get_voltage_rdev(ctrl_reg->rdev);
+-	int uV = vctrl_calc_output_voltage(vctrl, orig_ctrl_uV);
++	int orig_ctrl_uV;
++	int uV;
+ 	int ret;
+ 
++	if (!rdev->supply)
++		return -EPROBE_DEFER;
++
++	orig_ctrl_uV = regulator_get_voltage_rdev(rdev->supply->rdev);
++	uV = vctrl_calc_output_voltage(vctrl, orig_ctrl_uV);
++
+ 	if (req_min_uV >= uV || !vctrl->ovp_threshold)
+ 		/* voltage rising or no OVP */
+-		return regulator_set_voltage_rdev(ctrl_reg->rdev,
++		return regulator_set_voltage_rdev(rdev->supply->rdev,
+ 			vctrl_calc_ctrl_voltage(vctrl, req_min_uV),
+ 			vctrl_calc_ctrl_voltage(vctrl, req_max_uV),
+ 			PM_SUSPEND_ON);
+@@ -117,7 +126,7 @@ static int vctrl_set_voltage(struct regulator_dev *rdev,
+ 		next_uV = max_t(int, req_min_uV, uV - max_drop_uV);
+ 		next_ctrl_uV = vctrl_calc_ctrl_voltage(vctrl, next_uV);
+ 
+-		ret = regulator_set_voltage_rdev(ctrl_reg->rdev,
++		ret = regulator_set_voltage_rdev(rdev->supply->rdev,
+ 					    next_ctrl_uV,
+ 					    next_ctrl_uV,
+ 					    PM_SUSPEND_ON);
+@@ -134,7 +143,7 @@ static int vctrl_set_voltage(struct regulator_dev *rdev,
+ 
+ err:
+ 	/* Try to go back to original voltage */
+-	regulator_set_voltage_rdev(ctrl_reg->rdev, orig_ctrl_uV, orig_ctrl_uV,
++	regulator_set_voltage_rdev(rdev->supply->rdev, orig_ctrl_uV, orig_ctrl_uV,
+ 				   PM_SUSPEND_ON);
+ 
+ 	return ret;
+@@ -151,16 +160,18 @@ static int vctrl_set_voltage_sel(struct regulator_dev *rdev,
+ 				 unsigned int selector)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	struct regulator *ctrl_reg = vctrl->ctrl_reg;
+ 	unsigned int orig_sel = vctrl->sel;
+ 	int ret;
+ 
++	if (!rdev->supply)
++		return -EPROBE_DEFER;
++
+ 	if (selector >= rdev->desc->n_voltages)
+ 		return -EINVAL;
+ 
+ 	if (selector >= vctrl->sel || !vctrl->ovp_threshold) {
+ 		/* voltage rising or no OVP */
+-		ret = regulator_set_voltage_rdev(ctrl_reg->rdev,
++		ret = regulator_set_voltage_rdev(rdev->supply->rdev,
+ 					    vctrl->vtable[selector].ctrl,
+ 					    vctrl->vtable[selector].ctrl,
+ 					    PM_SUSPEND_ON);
+@@ -179,7 +190,7 @@ static int vctrl_set_voltage_sel(struct regulator_dev *rdev,
+ 		else
+ 			next_sel = vctrl->vtable[vctrl->sel].ovp_min_sel;
+ 
+-		ret = regulator_set_voltage_rdev(ctrl_reg->rdev,
++		ret = regulator_set_voltage_rdev(rdev->supply->rdev,
+ 					    vctrl->vtable[next_sel].ctrl,
+ 					    vctrl->vtable[next_sel].ctrl,
+ 					    PM_SUSPEND_ON);
+@@ -202,7 +213,7 @@ static int vctrl_set_voltage_sel(struct regulator_dev *rdev,
+ err:
+ 	if (vctrl->sel != orig_sel) {
+ 		/* Try to go back to original voltage */
+-		if (!regulator_set_voltage_rdev(ctrl_reg->rdev,
++		if (!regulator_set_voltage_rdev(rdev->supply->rdev,
+ 					   vctrl->vtable[orig_sel].ctrl,
+ 					   vctrl->vtable[orig_sel].ctrl,
+ 					   PM_SUSPEND_ON))
+@@ -234,10 +245,6 @@ static int vctrl_parse_dt(struct platform_device *pdev,
+ 	u32 pval;
+ 	u32 vrange_ctrl[2];
+ 
+-	vctrl->ctrl_reg = devm_regulator_get(&pdev->dev, "ctrl");
+-	if (IS_ERR(vctrl->ctrl_reg))
+-		return PTR_ERR(vctrl->ctrl_reg);
+-
+ 	ret = of_property_read_u32(np, "ovp-threshold-percent", &pval);
+ 	if (!ret) {
+ 		vctrl->ovp_threshold = pval;
+@@ -315,11 +322,11 @@ static int vctrl_cmp_ctrl_uV(const void *a, const void *b)
+ 	return at->ctrl - bt->ctrl;
+ }
+ 
+-static int vctrl_init_vtable(struct platform_device *pdev)
++static int vctrl_init_vtable(struct platform_device *pdev,
++			     struct regulator *ctrl_reg)
+ {
+ 	struct vctrl_data *vctrl = platform_get_drvdata(pdev);
+ 	struct regulator_desc *rdesc = &vctrl->desc;
+-	struct regulator *ctrl_reg = vctrl->ctrl_reg;
+ 	struct vctrl_voltage_range *vrange_ctrl = &vctrl->vrange.ctrl;
+ 	int n_voltages;
+ 	int ctrl_uV;
+@@ -395,23 +402,19 @@ static int vctrl_init_vtable(struct platform_device *pdev)
+ static int vctrl_enable(struct regulator_dev *rdev)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	int ret = regulator_enable(vctrl->ctrl_reg);
+ 
+-	if (!ret)
+-		vctrl->enabled = true;
++	vctrl->enabled = true;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int vctrl_disable(struct regulator_dev *rdev)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	int ret = regulator_disable(vctrl->ctrl_reg);
+ 
+-	if (!ret)
+-		vctrl->enabled = false;
++	vctrl->enabled = false;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int vctrl_is_enabled(struct regulator_dev *rdev)
+@@ -447,6 +450,7 @@ static int vctrl_probe(struct platform_device *pdev)
+ 	struct regulator_desc *rdesc;
+ 	struct regulator_config cfg = { };
+ 	struct vctrl_voltage_range *vrange_ctrl;
++	struct regulator *ctrl_reg;
+ 	int ctrl_uV;
+ 	int ret;
+ 
+@@ -461,15 +465,20 @@ static int vctrl_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
++	ctrl_reg = devm_regulator_get(&pdev->dev, "ctrl");
++	if (IS_ERR(ctrl_reg))
++		return PTR_ERR(ctrl_reg);
++
+ 	vrange_ctrl = &vctrl->vrange.ctrl;
+ 
+ 	rdesc = &vctrl->desc;
+ 	rdesc->name = "vctrl";
+ 	rdesc->type = REGULATOR_VOLTAGE;
+ 	rdesc->owner = THIS_MODULE;
++	rdesc->supply_name = "ctrl";
+ 
+-	if ((regulator_get_linear_step(vctrl->ctrl_reg) == 1) ||
+-	    (regulator_count_voltages(vctrl->ctrl_reg) == -EINVAL)) {
++	if ((regulator_get_linear_step(ctrl_reg) == 1) ||
++	    (regulator_count_voltages(ctrl_reg) == -EINVAL)) {
+ 		rdesc->continuous_voltage_range = true;
+ 		rdesc->ops = &vctrl_ops_cont;
+ 	} else {
+@@ -486,11 +495,12 @@ static int vctrl_probe(struct platform_device *pdev)
+ 	cfg.init_data = init_data;
+ 
+ 	if (!rdesc->continuous_voltage_range) {
+-		ret = vctrl_init_vtable(pdev);
++		ret = vctrl_init_vtable(pdev, ctrl_reg);
+ 		if (ret)
+ 			return ret;
+ 
+-		ctrl_uV = regulator_get_voltage_rdev(vctrl->ctrl_reg->rdev);
++		/* Use locked consumer API when not in regulator framework */
++		ctrl_uV = regulator_get_voltage(ctrl_reg);
+ 		if (ctrl_uV < 0) {
+ 			dev_err(&pdev->dev, "failed to get control voltage\n");
+ 			return ctrl_uV;
+@@ -513,6 +523,9 @@ static int vctrl_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	/* Drop ctrl-supply here in favor of regulator core managed supply */
++	devm_regulator_put(ctrl_reg);
++
+ 	vctrl->rdev = devm_regulator_register(&pdev->dev, rdesc, &cfg);
+ 	if (IS_ERR(vctrl->rdev)) {
+ 		ret = PTR_ERR(vctrl->rdev);
+diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
+index cca1a7c4bb336..305db4173dcf3 100644
+--- a/drivers/s390/cio/css.c
++++ b/drivers/s390/cio/css.c
+@@ -426,9 +426,26 @@ static ssize_t pimpampom_show(struct device *dev,
+ }
+ static DEVICE_ATTR_RO(pimpampom);
+ 
++static ssize_t dev_busid_show(struct device *dev,
++			      struct device_attribute *attr,
++			      char *buf)
++{
++	struct subchannel *sch = to_subchannel(dev);
++	struct pmcw *pmcw = &sch->schib.pmcw;
++
++	if ((pmcw->st == SUBCHANNEL_TYPE_IO ||
++	     pmcw->st == SUBCHANNEL_TYPE_MSG) && pmcw->dnv)
++		return sysfs_emit(buf, "0.%x.%04x\n", sch->schid.ssid,
++				  pmcw->dev);
++	else
++		return sysfs_emit(buf, "none\n");
++}
++static DEVICE_ATTR_RO(dev_busid);
++
+ static struct attribute *io_subchannel_type_attrs[] = {
+ 	&dev_attr_chpids.attr,
+ 	&dev_attr_pimpampom.attr,
++	&dev_attr_dev_busid.attr,
+ 	NULL,
+ };
+ ATTRIBUTE_GROUPS(io_subchannel_type);
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index ef738b42a0926..c00a288a4eca2 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -114,22 +114,13 @@ static struct bus_type ap_bus_type;
+ /* Adapter interrupt definitions */
+ static void ap_interrupt_handler(struct airq_struct *airq, bool floating);
+ 
+-static int ap_airq_flag;
++static bool ap_irq_flag;
+ 
+ static struct airq_struct ap_airq = {
+ 	.handler = ap_interrupt_handler,
+ 	.isc = AP_ISC,
+ };
+ 
+-/**
+- * ap_using_interrupts() - Returns non-zero if interrupt support is
+- * available.
+- */
+-static inline int ap_using_interrupts(void)
+-{
+-	return ap_airq_flag;
+-}
+-
+ /**
+  * ap_airq_ptr() - Get the address of the adapter interrupt indicator
+  *
+@@ -139,7 +130,7 @@ static inline int ap_using_interrupts(void)
+  */
+ void *ap_airq_ptr(void)
+ {
+-	if (ap_using_interrupts())
++	if (ap_irq_flag)
+ 		return ap_airq.lsi_ptr;
+ 	return NULL;
+ }
+@@ -369,7 +360,7 @@ void ap_wait(enum ap_sm_wait wait)
+ 	switch (wait) {
+ 	case AP_SM_WAIT_AGAIN:
+ 	case AP_SM_WAIT_INTERRUPT:
+-		if (ap_using_interrupts())
++		if (ap_irq_flag)
+ 			break;
+ 		if (ap_poll_kthread) {
+ 			wake_up(&ap_poll_wait);
+@@ -444,7 +435,7 @@ static void ap_tasklet_fn(unsigned long dummy)
+ 	 * be received. Doing it in the beginning of the tasklet is therefor
+ 	 * important that no requests on any AP get lost.
+ 	 */
+-	if (ap_using_interrupts())
++	if (ap_irq_flag)
+ 		xchg(ap_airq.lsi_ptr, 0);
+ 
+ 	spin_lock_bh(&ap_queues_lock);
+@@ -514,7 +505,7 @@ static int ap_poll_thread_start(void)
+ {
+ 	int rc;
+ 
+-	if (ap_using_interrupts() || ap_poll_kthread)
++	if (ap_irq_flag || ap_poll_kthread)
+ 		return 0;
+ 	mutex_lock(&ap_poll_thread_mutex);
+ 	ap_poll_kthread = kthread_run(ap_poll_thread, NULL, "appoll");
+@@ -1014,7 +1005,7 @@ static BUS_ATTR_RO(ap_adapter_mask);
+ static ssize_t ap_interrupts_show(struct bus_type *bus, char *buf)
+ {
+ 	return scnprintf(buf, PAGE_SIZE, "%d\n",
+-			 ap_using_interrupts() ? 1 : 0);
++			 ap_irq_flag ? 1 : 0);
+ }
+ 
+ static BUS_ATTR_RO(ap_interrupts);
+@@ -1687,7 +1678,7 @@ static int __init ap_module_init(void)
+ 	/* enable interrupts if available */
+ 	if (ap_interrupts_available()) {
+ 		rc = register_adapter_interrupt(&ap_airq);
+-		ap_airq_flag = (rc == 0);
++		ap_irq_flag = (rc == 0);
+ 	}
+ 
+ 	/* Create /sys/bus/ap. */
+@@ -1737,7 +1728,7 @@ out_bus:
+ 		bus_remove_file(&ap_bus_type, ap_bus_attrs[i]);
+ 	bus_unregister(&ap_bus_type);
+ out:
+-	if (ap_using_interrupts())
++	if (ap_irq_flag)
+ 		unregister_adapter_interrupt(&ap_airq);
+ 	kfree(ap_qci_info);
+ 	return rc;
+diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
+index 5029b80132aae..ccdbd95cab706 100644
+--- a/drivers/s390/crypto/ap_bus.h
++++ b/drivers/s390/crypto/ap_bus.h
+@@ -77,12 +77,6 @@ static inline int ap_test_bit(unsigned int *ptr, unsigned int nr)
+ #define AP_FUNC_EP11  5
+ #define AP_FUNC_APXA  6
+ 
+-/*
+- * AP interrupt states
+- */
+-#define AP_INTR_DISABLED	0	/* AP interrupt disabled */
+-#define AP_INTR_ENABLED		1	/* AP interrupt enabled */
+-
+ /*
+  * AP queue state machine states
+  */
+@@ -109,7 +103,7 @@ enum ap_sm_event {
+  * AP queue state wait behaviour
+  */
+ enum ap_sm_wait {
+-	AP_SM_WAIT_AGAIN,	/* retry immediately */
++	AP_SM_WAIT_AGAIN = 0,	/* retry immediately */
+ 	AP_SM_WAIT_TIMEOUT,	/* wait for timeout */
+ 	AP_SM_WAIT_INTERRUPT,	/* wait for thin interrupt (if available) */
+ 	AP_SM_WAIT_NONE,	/* no wait */
+@@ -182,7 +176,7 @@ struct ap_queue {
+ 	enum ap_dev_state dev_state;	/* queue device state */
+ 	bool config;			/* configured state */
+ 	ap_qid_t qid;			/* AP queue id. */
+-	int interrupt;			/* indicate if interrupts are enabled */
++	bool interrupt;			/* indicate if interrupts are enabled */
+ 	int queue_count;		/* # messages currently on AP queue. */
+ 	int pendingq_count;		/* # requests on pendingq list. */
+ 	int requestq_count;		/* # requests on requestq list. */
+diff --git a/drivers/s390/crypto/ap_queue.c b/drivers/s390/crypto/ap_queue.c
+index 337353c9655ed..639f8d25679c3 100644
+--- a/drivers/s390/crypto/ap_queue.c
++++ b/drivers/s390/crypto/ap_queue.c
+@@ -19,7 +19,7 @@
+ static void __ap_flush_queue(struct ap_queue *aq);
+ 
+ /**
+- * ap_queue_enable_interruption(): Enable interruption on an AP queue.
++ * ap_queue_enable_irq(): Enable interrupt support on this AP queue.
+  * @qid: The AP queue number
+  * @ind: the notification indicator byte
+  *
+@@ -27,7 +27,7 @@ static void __ap_flush_queue(struct ap_queue *aq);
+  * value it waits a while and tests the AP queue if interrupts
+  * have been switched on using ap_test_queue().
+  */
+-static int ap_queue_enable_interruption(struct ap_queue *aq, void *ind)
++static int ap_queue_enable_irq(struct ap_queue *aq, void *ind)
+ {
+ 	struct ap_queue_status status;
+ 	struct ap_qirq_ctrl qirqctrl = { 0 };
+@@ -198,7 +198,8 @@ static enum ap_sm_wait ap_sm_read(struct ap_queue *aq)
+ 		return AP_SM_WAIT_NONE;
+ 	case AP_RESPONSE_NO_PENDING_REPLY:
+ 		if (aq->queue_count > 0)
+-			return AP_SM_WAIT_INTERRUPT;
++			return aq->interrupt ?
++				AP_SM_WAIT_INTERRUPT : AP_SM_WAIT_TIMEOUT;
+ 		aq->sm_state = AP_SM_STATE_IDLE;
+ 		return AP_SM_WAIT_NONE;
+ 	default:
+@@ -252,7 +253,8 @@ static enum ap_sm_wait ap_sm_write(struct ap_queue *aq)
+ 		fallthrough;
+ 	case AP_RESPONSE_Q_FULL:
+ 		aq->sm_state = AP_SM_STATE_QUEUE_FULL;
+-		return AP_SM_WAIT_INTERRUPT;
++		return aq->interrupt ?
++			AP_SM_WAIT_INTERRUPT : AP_SM_WAIT_TIMEOUT;
+ 	case AP_RESPONSE_RESET_IN_PROGRESS:
+ 		aq->sm_state = AP_SM_STATE_RESET_WAIT;
+ 		return AP_SM_WAIT_TIMEOUT;
+@@ -302,7 +304,7 @@ static enum ap_sm_wait ap_sm_reset(struct ap_queue *aq)
+ 	case AP_RESPONSE_NORMAL:
+ 	case AP_RESPONSE_RESET_IN_PROGRESS:
+ 		aq->sm_state = AP_SM_STATE_RESET_WAIT;
+-		aq->interrupt = AP_INTR_DISABLED;
++		aq->interrupt = false;
+ 		return AP_SM_WAIT_TIMEOUT;
+ 	default:
+ 		aq->dev_state = AP_DEV_STATE_ERROR;
+@@ -335,7 +337,7 @@ static enum ap_sm_wait ap_sm_reset_wait(struct ap_queue *aq)
+ 	switch (status.response_code) {
+ 	case AP_RESPONSE_NORMAL:
+ 		lsi_ptr = ap_airq_ptr();
+-		if (lsi_ptr && ap_queue_enable_interruption(aq, lsi_ptr) == 0)
++		if (lsi_ptr && ap_queue_enable_irq(aq, lsi_ptr) == 0)
+ 			aq->sm_state = AP_SM_STATE_SETIRQ_WAIT;
+ 		else
+ 			aq->sm_state = (aq->queue_count > 0) ?
+@@ -376,7 +378,7 @@ static enum ap_sm_wait ap_sm_setirq_wait(struct ap_queue *aq)
+ 
+ 	if (status.irq_enabled == 1) {
+ 		/* Irqs are now enabled */
+-		aq->interrupt = AP_INTR_ENABLED;
++		aq->interrupt = true;
+ 		aq->sm_state = (aq->queue_count > 0) ?
+ 			AP_SM_STATE_WORKING : AP_SM_STATE_IDLE;
+ 	}
+@@ -566,7 +568,7 @@ static ssize_t interrupt_show(struct device *dev,
+ 	spin_lock_bh(&aq->lock);
+ 	if (aq->sm_state == AP_SM_STATE_SETIRQ_WAIT)
+ 		rc = scnprintf(buf, PAGE_SIZE, "Enable Interrupt pending.\n");
+-	else if (aq->interrupt == AP_INTR_ENABLED)
++	else if (aq->interrupt)
+ 		rc = scnprintf(buf, PAGE_SIZE, "Interrupts enabled.\n");
+ 	else
+ 		rc = scnprintf(buf, PAGE_SIZE, "Interrupts disabled.\n");
+@@ -747,7 +749,7 @@ struct ap_queue *ap_queue_create(ap_qid_t qid, int device_type)
+ 	aq->ap_dev.device.type = &ap_queue_type;
+ 	aq->ap_dev.device_type = device_type;
+ 	aq->qid = qid;
+-	aq->interrupt = AP_INTR_DISABLED;
++	aq->interrupt = false;
+ 	spin_lock_init(&aq->lock);
+ 	INIT_LIST_HEAD(&aq->pendingq);
+ 	INIT_LIST_HEAD(&aq->requestq);
+diff --git a/drivers/s390/crypto/zcrypt_ccamisc.c b/drivers/s390/crypto/zcrypt_ccamisc.c
+index b1046811450fb..ffab935ddd95b 100644
+--- a/drivers/s390/crypto/zcrypt_ccamisc.c
++++ b/drivers/s390/crypto/zcrypt_ccamisc.c
+@@ -1715,10 +1715,10 @@ static int fetch_cca_info(u16 cardnr, u16 domain, struct cca_info *ci)
+ 	rlen = vlen = PAGE_SIZE/2;
+ 	rc = cca_query_crypto_facility(cardnr, domain, "STATICSB",
+ 				       rarray, &rlen, varray, &vlen);
+-	if (rc == 0 && rlen >= 10*8 && vlen >= 240) {
+-		ci->new_apka_mk_state = (char) rarray[7*8];
+-		ci->cur_apka_mk_state = (char) rarray[8*8];
+-		ci->old_apka_mk_state = (char) rarray[9*8];
++	if (rc == 0 && rlen >= 13*8 && vlen >= 240) {
++		ci->new_apka_mk_state = (char) rarray[10*8];
++		ci->cur_apka_mk_state = (char) rarray[11*8];
++		ci->old_apka_mk_state = (char) rarray[12*8];
+ 		if (ci->old_apka_mk_state == '2')
+ 			memcpy(&ci->old_apka_mkvp, varray + 208, 8);
+ 		if (ci->cur_apka_mk_state == '2')
+diff --git a/drivers/soc/qcom/rpmhpd.c b/drivers/soc/qcom/rpmhpd.c
+index e72426221a69c..c8b584d0c8fb4 100644
+--- a/drivers/soc/qcom/rpmhpd.c
++++ b/drivers/soc/qcom/rpmhpd.c
+@@ -310,12 +310,11 @@ static int rpmhpd_power_on(struct generic_pm_domain *domain)
+ static int rpmhpd_power_off(struct generic_pm_domain *domain)
+ {
+ 	struct rpmhpd *pd = domain_to_rpmhpd(domain);
+-	int ret = 0;
++	int ret;
+ 
+ 	mutex_lock(&rpmhpd_lock);
+ 
+-	ret = rpmhpd_aggregate_corner(pd, pd->level[0]);
+-
++	ret = rpmhpd_aggregate_corner(pd, 0);
+ 	if (!ret)
+ 		pd->enabled = false;
+ 
+diff --git a/drivers/soc/qcom/smsm.c b/drivers/soc/qcom/smsm.c
+index 70c3c90b997c9..c428d0f78816e 100644
+--- a/drivers/soc/qcom/smsm.c
++++ b/drivers/soc/qcom/smsm.c
+@@ -109,7 +109,7 @@ struct smsm_entry {
+ 	DECLARE_BITMAP(irq_enabled, 32);
+ 	DECLARE_BITMAP(irq_rising, 32);
+ 	DECLARE_BITMAP(irq_falling, 32);
+-	u32 last_value;
++	unsigned long last_value;
+ 
+ 	u32 *remote_state;
+ 	u32 *subscription;
+@@ -204,8 +204,7 @@ static irqreturn_t smsm_intr(int irq, void *data)
+ 	u32 val;
+ 
+ 	val = readl(entry->remote_state);
+-	changed = val ^ entry->last_value;
+-	entry->last_value = val;
++	changed = val ^ xchg(&entry->last_value, val);
+ 
+ 	for_each_set_bit(i, entry->irq_enabled, 32) {
+ 		if (!(changed & BIT(i)))
+@@ -266,6 +265,12 @@ static void smsm_unmask_irq(struct irq_data *irqd)
+ 	struct qcom_smsm *smsm = entry->smsm;
+ 	u32 val;
+ 
++	/* Make sure our last cached state is up-to-date */
++	if (readl(entry->remote_state) & BIT(irq))
++		set_bit(irq, &entry->last_value);
++	else
++		clear_bit(irq, &entry->last_value);
++
+ 	set_bit(irq, entry->irq_enabled);
+ 
+ 	if (entry->subscription) {
+diff --git a/drivers/soc/rockchip/Kconfig b/drivers/soc/rockchip/Kconfig
+index 2c13bf4dd5dbe..25eb2c1e31bb2 100644
+--- a/drivers/soc/rockchip/Kconfig
++++ b/drivers/soc/rockchip/Kconfig
+@@ -6,8 +6,8 @@ if ARCH_ROCKCHIP || COMPILE_TEST
+ #
+ 
+ config ROCKCHIP_GRF
+-	bool
+-	default y
++	bool "Rockchip General Register Files support" if COMPILE_TEST
++	default y if ARCH_ROCKCHIP
+ 	help
+ 	  The General Register Files are a central component providing
+ 	  special additional settings registers for a lot of soc-components.
+diff --git a/drivers/spi/spi-coldfire-qspi.c b/drivers/spi/spi-coldfire-qspi.c
+index 8996115ce736a..263ce90473277 100644
+--- a/drivers/spi/spi-coldfire-qspi.c
++++ b/drivers/spi/spi-coldfire-qspi.c
+@@ -444,7 +444,7 @@ static int mcfqspi_remove(struct platform_device *pdev)
+ 	mcfqspi_wr_qmr(mcfqspi, MCFQSPI_QMR_MSTR);
+ 
+ 	mcfqspi_cs_teardown(mcfqspi);
+-	clk_disable(mcfqspi->clk);
++	clk_disable_unprepare(mcfqspi->clk);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/spi/spi-davinci.c b/drivers/spi/spi-davinci.c
+index 7453a1dbbc061..fda73221f3b78 100644
+--- a/drivers/spi/spi-davinci.c
++++ b/drivers/spi/spi-davinci.c
+@@ -213,12 +213,6 @@ static void davinci_spi_chipselect(struct spi_device *spi, int value)
+ 	 * line for the controller
+ 	 */
+ 	if (spi->cs_gpiod) {
+-		/*
+-		 * FIXME: is this code ever executed? This host does not
+-		 * set SPI_MASTER_GPIO_SS so this chipselect callback should
+-		 * not get called from the SPI core when we are using
+-		 * GPIOs for chip select.
+-		 */
+ 		if (value == BITBANG_CS_ACTIVE)
+ 			gpiod_set_value(spi->cs_gpiod, 1);
+ 		else
+@@ -950,7 +944,7 @@ static int davinci_spi_probe(struct platform_device *pdev)
+ 	master->bus_num = pdev->id;
+ 	master->num_chipselect = pdata->num_chipselect;
+ 	master->bits_per_word_mask = SPI_BPW_RANGE_MASK(2, 16);
+-	master->flags = SPI_MASTER_MUST_RX;
++	master->flags = SPI_MASTER_MUST_RX | SPI_MASTER_GPIO_SS;
+ 	master->setup = davinci_spi_setup;
+ 	master->cleanup = davinci_spi_cleanup;
+ 	master->can_dma = davinci_spi_can_dma;
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index fb45e6af66381..fd004c9db9dc0 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -530,6 +530,7 @@ static int dspi_request_dma(struct fsl_dspi *dspi, phys_addr_t phy_addr)
+ 		goto err_rx_dma_buf;
+ 	}
+ 
++	memset(&cfg, 0, sizeof(cfg));
+ 	cfg.src_addr = phy_addr + SPI_POPR;
+ 	cfg.dst_addr = phy_addr + SPI_PUSHR;
+ 	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+diff --git a/drivers/spi/spi-pic32.c b/drivers/spi/spi-pic32.c
+index 104bde153efd2..5eb7b61bbb4d8 100644
+--- a/drivers/spi/spi-pic32.c
++++ b/drivers/spi/spi-pic32.c
+@@ -361,6 +361,7 @@ static int pic32_spi_dma_config(struct pic32_spi *pic32s, u32 dma_width)
+ 	struct dma_slave_config cfg;
+ 	int ret;
+ 
++	memset(&cfg, 0, sizeof(cfg));
+ 	cfg.device_fc = true;
+ 	cfg.src_addr = pic32s->dma_base + buf_offset;
+ 	cfg.dst_addr = pic32s->dma_base + buf_offset;
+diff --git a/drivers/spi/spi-sprd-adi.c b/drivers/spi/spi-sprd-adi.c
+index 392ec5cfa3d61..307c079b938dc 100644
+--- a/drivers/spi/spi-sprd-adi.c
++++ b/drivers/spi/spi-sprd-adi.c
+@@ -103,7 +103,7 @@
+ #define HWRST_STATUS_WATCHDOG		0xf0
+ 
+ /* Use default timeout 50 ms that converts to watchdog values */
+-#define WDG_LOAD_VAL			((50 * 1000) / 32768)
++#define WDG_LOAD_VAL			((50 * 32768) / 1000)
+ #define WDG_LOAD_MASK			GENMASK(15, 0)
+ #define WDG_UNLOCK_KEY			0xe551
+ 
+diff --git a/drivers/spi/spi-zynq-qspi.c b/drivers/spi/spi-zynq-qspi.c
+index 68193db8b2e3c..b635835729d66 100644
+--- a/drivers/spi/spi-zynq-qspi.c
++++ b/drivers/spi/spi-zynq-qspi.c
+@@ -545,7 +545,7 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 		zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true);
+ 		zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET,
+ 				ZYNQ_QSPI_IXR_RXTX_MASK);
+-		if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion,
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
+ 							       msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 	}
+@@ -563,7 +563,7 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 		zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true);
+ 		zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET,
+ 				ZYNQ_QSPI_IXR_RXTX_MASK);
+-		if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion,
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
+ 							       msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 	}
+@@ -579,7 +579,7 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 		zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true);
+ 		zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET,
+ 				ZYNQ_QSPI_IXR_RXTX_MASK);
+-		if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion,
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
+ 							       msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 
+@@ -603,7 +603,7 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 		zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true);
+ 		zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET,
+ 				ZYNQ_QSPI_IXR_RXTX_MASK);
+-		if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion,
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
+ 							       msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 	}
+diff --git a/drivers/staging/clocking-wizard/Kconfig b/drivers/staging/clocking-wizard/Kconfig
+index 69cf51445f082..2324b5d737886 100644
+--- a/drivers/staging/clocking-wizard/Kconfig
++++ b/drivers/staging/clocking-wizard/Kconfig
+@@ -5,6 +5,6 @@
+ 
+ config COMMON_CLK_XLNX_CLKWZRD
+ 	tristate "Xilinx Clocking Wizard"
+-	depends on COMMON_CLK && OF && IOMEM
++	depends on COMMON_CLK && OF && HAS_IOMEM
+ 	help
+ 	  Support for the Xilinx Clocking Wizard IP core clock generator.
+diff --git a/drivers/staging/media/atomisp/i2c/atomisp-mt9m114.c b/drivers/staging/media/atomisp/i2c/atomisp-mt9m114.c
+index f5de81132177d..77293579a1348 100644
+--- a/drivers/staging/media/atomisp/i2c/atomisp-mt9m114.c
++++ b/drivers/staging/media/atomisp/i2c/atomisp-mt9m114.c
+@@ -1533,16 +1533,19 @@ static struct v4l2_ctrl_config mt9m114_controls[] = {
+ static int mt9m114_detect(struct mt9m114_device *dev, struct i2c_client *client)
+ {
+ 	struct i2c_adapter *adapter = client->adapter;
+-	u32 retvalue;
++	u32 model;
++	int ret;
+ 
+ 	if (!i2c_check_functionality(adapter, I2C_FUNC_I2C)) {
+ 		dev_err(&client->dev, "%s: i2c error", __func__);
+ 		return -ENODEV;
+ 	}
+-	mt9m114_read_reg(client, MISENSOR_16BIT, (u32)MT9M114_PID, &retvalue);
+-	dev->real_model_id = retvalue;
++	ret = mt9m114_read_reg(client, MISENSOR_16BIT, MT9M114_PID, &model);
++	if (ret)
++		return ret;
++	dev->real_model_id = model;
+ 
+-	if (retvalue != MT9M114_MOD_ID) {
++	if (model != MT9M114_MOD_ID) {
+ 		dev_err(&client->dev, "%s: failed: client->addr = %x\n",
+ 			__func__, client->addr);
+ 		return -ENODEV;
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 2e74c88808db6..a70911a227a84 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2597,7 +2597,7 @@ static int lpuart_probe(struct platform_device *pdev)
+ 		return PTR_ERR(sport->port.membase);
+ 
+ 	sport->port.membase += sdata->reg_off;
+-	sport->port.mapbase = res->start;
++	sport->port.mapbase = res->start + sdata->reg_off;
+ 	sport->port.dev = &pdev->dev;
+ 	sport->port.type = PORT_LPUART;
+ 	sport->devtype = sdata->devtype;
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index bc5314092aa4e..669aef77a0bd0 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -2257,8 +2257,6 @@ static int tty_fasync(int fd, struct file *filp, int on)
+  *	Locking:
+  *		Called functions take tty_ldiscs_lock
+  *		current->signal->tty check is safe without locks
+- *
+- *	FIXME: may race normal receive processing
+  */
+ 
+ static int tiocsti(struct tty_struct *tty, char __user *p)
+@@ -2274,8 +2272,10 @@ static int tiocsti(struct tty_struct *tty, char __user *p)
+ 	ld = tty_ldisc_ref_wait(tty);
+ 	if (!ld)
+ 		return -EIO;
++	tty_buffer_lock_exclusive(tty->port);
+ 	if (ld->ops->receive_buf)
+ 		ld->ops->receive_buf(tty, &ch, &mbz, 1);
++	tty_buffer_unlock_exclusive(tty->port);
+ 	tty_ldisc_deref(ld);
+ 	return 0;
+ }
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index ffe301d6ea359..d0f9b7c296b0d 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -598,6 +598,8 @@ static int dwc3_meson_g12a_otg_init(struct platform_device *pdev,
+ 				   USB_R5_ID_DIG_IRQ, 0);
+ 
+ 		irq = platform_get_irq(pdev, 0);
++		if (irq < 0)
++			return irq;
+ 		ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
+ 						dwc3_meson_g12a_irq_thread,
+ 						IRQF_ONESHOT, pdev->name, priv);
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 8bd077fb1190f..2a29e2f681fe6 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -610,6 +610,10 @@ static int dwc3_qcom_acpi_register_core(struct platform_device *pdev)
+ 		qcom->acpi_pdata->dwc3_core_base_size;
+ 
+ 	irq = platform_get_irq(pdev_irq, 0);
++	if (irq < 0) {
++		ret = irq;
++		goto out;
++	}
+ 	child_res[1].flags = IORESOURCE_IRQ;
+ 	child_res[1].start = child_res[1].end = irq;
+ 
+diff --git a/drivers/usb/gadget/udc/at91_udc.c b/drivers/usb/gadget/udc/at91_udc.c
+index eede5cedacb4a..d9ad9adf7348f 100644
+--- a/drivers/usb/gadget/udc/at91_udc.c
++++ b/drivers/usb/gadget/udc/at91_udc.c
+@@ -1876,7 +1876,9 @@ static int at91udc_probe(struct platform_device *pdev)
+ 	clk_disable(udc->iclk);
+ 
+ 	/* request UDC and maybe VBUS irqs */
+-	udc->udp_irq = platform_get_irq(pdev, 0);
++	udc->udp_irq = retval = platform_get_irq(pdev, 0);
++	if (retval < 0)
++		goto err_unprepare_iclk;
+ 	retval = devm_request_irq(dev, udc->udp_irq, at91_udc_irq, 0,
+ 				  driver_name, udc);
+ 	if (retval) {
+diff --git a/drivers/usb/gadget/udc/bdc/bdc_core.c b/drivers/usb/gadget/udc/bdc/bdc_core.c
+index 0bef6b3f049b9..fa1a3908ec3bb 100644
+--- a/drivers/usb/gadget/udc/bdc/bdc_core.c
++++ b/drivers/usb/gadget/udc/bdc/bdc_core.c
+@@ -488,27 +488,14 @@ static int bdc_probe(struct platform_device *pdev)
+ 	int irq;
+ 	u32 temp;
+ 	struct device *dev = &pdev->dev;
+-	struct clk *clk;
+ 	int phy_num;
+ 
+ 	dev_dbg(dev, "%s()\n", __func__);
+ 
+-	clk = devm_clk_get_optional(dev, "sw_usbd");
+-	if (IS_ERR(clk))
+-		return PTR_ERR(clk);
+-
+-	ret = clk_prepare_enable(clk);
+-	if (ret) {
+-		dev_err(dev, "could not enable clock\n");
+-		return ret;
+-	}
+-
+ 	bdc = devm_kzalloc(dev, sizeof(*bdc), GFP_KERNEL);
+ 	if (!bdc)
+ 		return -ENOMEM;
+ 
+-	bdc->clk = clk;
+-
+ 	bdc->regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(bdc->regs))
+ 		return PTR_ERR(bdc->regs);
+@@ -545,10 +532,20 @@ static int bdc_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	bdc->clk = devm_clk_get_optional(dev, "sw_usbd");
++	if (IS_ERR(bdc->clk))
++		return PTR_ERR(bdc->clk);
++
++	ret = clk_prepare_enable(bdc->clk);
++	if (ret) {
++		dev_err(dev, "could not enable clock\n");
++		return ret;
++	}
++
+ 	ret = bdc_phy_init(bdc);
+ 	if (ret) {
+ 		dev_err(bdc->dev, "BDC phy init failure:%d\n", ret);
+-		return ret;
++		goto disable_clk;
+ 	}
+ 
+ 	temp = bdc_readl(bdc->regs, BDC_BDCCAP1);
+@@ -560,7 +557,8 @@ static int bdc_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(dev,
+ 				"No suitable DMA config available, abort\n");
+-			return -ENOTSUPP;
++			ret = -ENOTSUPP;
++			goto phycleanup;
+ 		}
+ 		dev_dbg(dev, "Using 32-bit address\n");
+ 	}
+@@ -580,6 +578,8 @@ cleanup:
+ 	bdc_hw_exit(bdc);
+ phycleanup:
+ 	bdc_phy_exit(bdc);
++disable_clk:
++	clk_disable_unprepare(bdc->clk);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/gadget/udc/mv_u3d_core.c b/drivers/usb/gadget/udc/mv_u3d_core.c
+index 5486f5a708681..0db97fecf99e8 100644
+--- a/drivers/usb/gadget/udc/mv_u3d_core.c
++++ b/drivers/usb/gadget/udc/mv_u3d_core.c
+@@ -1921,14 +1921,6 @@ static int mv_u3d_probe(struct platform_device *dev)
+ 		goto err_get_irq;
+ 	}
+ 	u3d->irq = r->start;
+-	if (request_irq(u3d->irq, mv_u3d_irq,
+-		IRQF_SHARED, driver_name, u3d)) {
+-		u3d->irq = 0;
+-		dev_err(&dev->dev, "Request irq %d for u3d failed\n",
+-			u3d->irq);
+-		retval = -ENODEV;
+-		goto err_request_irq;
+-	}
+ 
+ 	/* initialize gadget structure */
+ 	u3d->gadget.ops = &mv_u3d_ops;	/* usb_gadget_ops */
+@@ -1941,6 +1933,15 @@ static int mv_u3d_probe(struct platform_device *dev)
+ 
+ 	mv_u3d_eps_init(u3d);
+ 
++	if (request_irq(u3d->irq, mv_u3d_irq,
++		IRQF_SHARED, driver_name, u3d)) {
++		u3d->irq = 0;
++		dev_err(&dev->dev, "Request irq %d for u3d failed\n",
++			u3d->irq);
++		retval = -ENODEV;
++		goto err_request_irq;
++	}
++
+ 	/* external vbus detection */
+ 	if (u3d->vbus) {
+ 		u3d->clock_gating = 1;
+@@ -1964,8 +1965,8 @@ static int mv_u3d_probe(struct platform_device *dev)
+ 
+ err_unregister:
+ 	free_irq(u3d->irq, u3d);
+-err_request_irq:
+ err_get_irq:
++err_request_irq:
+ 	kfree(u3d->status_req);
+ err_alloc_status_req:
+ 	kfree(u3d->eps);
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index f1b35a39d1ba8..57d417a7c3e0a 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2707,10 +2707,15 @@ static const struct renesas_usb3_priv renesas_usb3_priv_r8a77990 = {
+ 
+ static const struct of_device_id usb3_of_match[] = {
+ 	{
++		.compatible = "renesas,r8a774c0-usb3-peri",
++		.data = &renesas_usb3_priv_r8a77990,
++	}, {
+ 		.compatible = "renesas,r8a7795-usb3-peri",
+ 		.data = &renesas_usb3_priv_gen3,
+-	},
+-	{
++	}, {
++		.compatible = "renesas,r8a77990-usb3-peri",
++		.data = &renesas_usb3_priv_r8a77990,
++	}, {
+ 		.compatible = "renesas,rcar-gen3-usb3-peri",
+ 		.data = &renesas_usb3_priv_gen3,
+ 	},
+@@ -2719,18 +2724,10 @@ static const struct of_device_id usb3_of_match[] = {
+ MODULE_DEVICE_TABLE(of, usb3_of_match);
+ 
+ static const struct soc_device_attribute renesas_usb3_quirks_match[] = {
+-	{
+-		.soc_id = "r8a774c0",
+-		.data = &renesas_usb3_priv_r8a77990,
+-	},
+ 	{
+ 		.soc_id = "r8a7795", .revision = "ES1.*",
+ 		.data = &renesas_usb3_priv_r8a7795_es1,
+ 	},
+-	{
+-		.soc_id = "r8a77990",
+-		.data = &renesas_usb3_priv_r8a77990,
+-	},
+ 	{ /* sentinel */ },
+ };
+ 
+diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c
+index b154b62abefa1..82c4f3fb2daec 100644
+--- a/drivers/usb/gadget/udc/s3c2410_udc.c
++++ b/drivers/usb/gadget/udc/s3c2410_udc.c
+@@ -1784,6 +1784,10 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
+ 	s3c2410_udc_reinit(udc);
+ 
+ 	irq_usbd = platform_get_irq(pdev, 0);
++	if (irq_usbd < 0) {
++		retval = irq_usbd;
++		goto err_udc_clk;
++	}
+ 
+ 	/* irq setup after old hardware state is cleaned up */
+ 	retval = request_irq(irq_usbd, s3c2410_udc_irq,
+diff --git a/drivers/usb/host/ehci-orion.c b/drivers/usb/host/ehci-orion.c
+index a319b1df3011c..3626758b3e2aa 100644
+--- a/drivers/usb/host/ehci-orion.c
++++ b/drivers/usb/host/ehci-orion.c
+@@ -264,8 +264,11 @@ static int ehci_orion_drv_probe(struct platform_device *pdev)
+ 	 * the clock does not exists.
+ 	 */
+ 	priv->clk = devm_clk_get(&pdev->dev, NULL);
+-	if (!IS_ERR(priv->clk))
+-		clk_prepare_enable(priv->clk);
++	if (!IS_ERR(priv->clk)) {
++		err = clk_prepare_enable(priv->clk);
++		if (err)
++			goto err_put_hcd;
++	}
+ 
+ 	priv->phy = devm_phy_optional_get(&pdev->dev, "usb");
+ 	if (IS_ERR(priv->phy)) {
+@@ -311,6 +314,7 @@ static int ehci_orion_drv_probe(struct platform_device *pdev)
+ err_dis_clk:
+ 	if (!IS_ERR(priv->clk))
+ 		clk_disable_unprepare(priv->clk);
++err_put_hcd:
+ 	usb_put_hcd(hcd);
+ err:
+ 	dev_err(&pdev->dev, "init %s fail, %d\n",
+diff --git a/drivers/usb/host/ohci-tmio.c b/drivers/usb/host/ohci-tmio.c
+index 7f857bad9e95b..08ec2ab0d95a5 100644
+--- a/drivers/usb/host/ohci-tmio.c
++++ b/drivers/usb/host/ohci-tmio.c
+@@ -202,6 +202,9 @@ static int ohci_hcd_tmio_drv_probe(struct platform_device *dev)
+ 	if (!cell)
+ 		return -EINVAL;
+ 
++	if (irq < 0)
++		return irq;
++
+ 	hcd = usb_create_hcd(&ohci_tmio_hc_driver, &dev->dev, dev_name(&dev->dev));
+ 	if (!hcd) {
+ 		ret = -ENOMEM;
+diff --git a/drivers/usb/phy/phy-fsl-usb.c b/drivers/usb/phy/phy-fsl-usb.c
+index f34c9437a182c..972704262b02b 100644
+--- a/drivers/usb/phy/phy-fsl-usb.c
++++ b/drivers/usb/phy/phy-fsl-usb.c
+@@ -873,6 +873,8 @@ int usb_otg_start(struct platform_device *pdev)
+ 
+ 	/* request irq */
+ 	p_otg->irq = platform_get_irq(pdev, 0);
++	if (p_otg->irq < 0)
++		return p_otg->irq;
+ 	status = request_irq(p_otg->irq, fsl_otg_isr,
+ 				IRQF_SHARED, driver_name, p_otg);
+ 	if (status) {
+diff --git a/drivers/usb/phy/phy-tahvo.c b/drivers/usb/phy/phy-tahvo.c
+index baebb1f5a9737..a3e043e3e4aae 100644
+--- a/drivers/usb/phy/phy-tahvo.c
++++ b/drivers/usb/phy/phy-tahvo.c
+@@ -393,7 +393,9 @@ static int tahvo_usb_probe(struct platform_device *pdev)
+ 
+ 	dev_set_drvdata(&pdev->dev, tu);
+ 
+-	tu->irq = platform_get_irq(pdev, 0);
++	tu->irq = ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		return ret;
+ 	ret = request_threaded_irq(tu->irq, NULL, tahvo_usb_vbus_interrupt,
+ 				   IRQF_ONESHOT,
+ 				   "tahvo-vbus", tu);
+diff --git a/drivers/usb/phy/phy-twl6030-usb.c b/drivers/usb/phy/phy-twl6030-usb.c
+index 8ba6c5a915570..ab3c38a7d8ac0 100644
+--- a/drivers/usb/phy/phy-twl6030-usb.c
++++ b/drivers/usb/phy/phy-twl6030-usb.c
+@@ -348,6 +348,11 @@ static int twl6030_usb_probe(struct platform_device *pdev)
+ 	twl->irq2		= platform_get_irq(pdev, 1);
+ 	twl->linkstat		= MUSB_UNKNOWN;
+ 
++	if (twl->irq1 < 0)
++		return twl->irq1;
++	if (twl->irq2 < 0)
++		return twl->irq2;
++
+ 	twl->comparator.set_vbus	= twl6030_set_vbus;
+ 	twl->comparator.start_srp	= twl6030_start_srp;
+ 
+diff --git a/drivers/video/backlight/pwm_bl.c b/drivers/video/backlight/pwm_bl.c
+index dfc760830eb90..1cf924f3aeccd 100644
+--- a/drivers/video/backlight/pwm_bl.c
++++ b/drivers/video/backlight/pwm_bl.c
+@@ -417,6 +417,33 @@ static bool pwm_backlight_is_linear(struct platform_pwm_backlight_data *data)
+ static int pwm_backlight_initial_power_state(const struct pwm_bl_data *pb)
+ {
+ 	struct device_node *node = pb->dev->of_node;
++	bool active = true;
++
++	/*
++	 * If the enable GPIO is present, observable (either as input
++	 * or output) and off then the backlight is not currently active.
++	 * */
++	if (pb->enable_gpio && gpiod_get_value_cansleep(pb->enable_gpio) == 0)
++		active = false;
++
++	if (!regulator_is_enabled(pb->power_supply))
++		active = false;
++
++	if (!pwm_is_enabled(pb->pwm))
++		active = false;
++
++	/*
++	 * Synchronize the enable_gpio with the observed state of the
++	 * hardware.
++	 */
++	if (pb->enable_gpio)
++		gpiod_direction_output(pb->enable_gpio, active);
++
++	/*
++	 * Do not change pb->enabled here! pb->enabled essentially
++	 * tells us if we own one of the regulator's use counts and
++	 * right now we do not.
++	 */
+ 
+ 	/* Not booted with device tree or no phandle link to the node */
+ 	if (!node || !node->phandle)
+@@ -428,20 +455,7 @@ static int pwm_backlight_initial_power_state(const struct pwm_bl_data *pb)
+ 	 * assume that another driver will enable the backlight at the
+ 	 * appropriate time. Therefore, if it is disabled, keep it so.
+ 	 */
+-
+-	/* if the enable GPIO is disabled, do not enable the backlight */
+-	if (pb->enable_gpio && gpiod_get_value_cansleep(pb->enable_gpio) == 0)
+-		return FB_BLANK_POWERDOWN;
+-
+-	/* The regulator is disabled, do not enable the backlight */
+-	if (!regulator_is_enabled(pb->power_supply))
+-		return FB_BLANK_POWERDOWN;
+-
+-	/* The PWM is disabled, keep it like this */
+-	if (!pwm_is_enabled(pb->pwm))
+-		return FB_BLANK_POWERDOWN;
+-
+-	return FB_BLANK_UNBLANK;
++	return active ? FB_BLANK_UNBLANK: FB_BLANK_POWERDOWN;
+ }
+ 
+ static int pwm_backlight_probe(struct platform_device *pdev)
+@@ -494,18 +508,6 @@ static int pwm_backlight_probe(struct platform_device *pdev)
+ 		goto err_alloc;
+ 	}
+ 
+-	/*
+-	 * If the GPIO is not known to be already configured as output, that
+-	 * is, if gpiod_get_direction returns either 1 or -EINVAL, change the
+-	 * direction to output and set the GPIO as active.
+-	 * Do not force the GPIO to active when it was already output as it
+-	 * could cause backlight flickering or we would enable the backlight too
+-	 * early. Leave the decision of the initial backlight state for later.
+-	 */
+-	if (pb->enable_gpio &&
+-	    gpiod_get_direction(pb->enable_gpio) != 0)
+-		gpiod_direction_output(pb->enable_gpio, 1);
+-
+ 	pb->power_supply = devm_regulator_get(&pdev->dev, "power");
+ 	if (IS_ERR(pb->power_supply)) {
+ 		ret = PTR_ERR(pb->power_supply);
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 98030d75833b8..00939ca2065a9 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -962,6 +962,7 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ 	struct fb_var_screeninfo old_var;
+ 	struct fb_videomode mode;
+ 	struct fb_event event;
++	u32 unused;
+ 
+ 	if (var->activate & FB_ACTIVATE_INV_MODE) {
+ 		struct fb_videomode mode1, mode2;
+@@ -1008,6 +1009,11 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ 	if (var->xres < 8 || var->yres < 8)
+ 		return -EINVAL;
+ 
++	/* Too huge resolution causes multiplication overflow. */
++	if (check_mul_overflow(var->xres, var->yres, &unused) ||
++	    check_mul_overflow(var->xres_virtual, var->yres_virtual, &unused))
++		return -EINVAL;
++
+ 	ret = info->fbops->fb_check_var(var, info);
+ 
+ 	if (ret)
+diff --git a/fs/cifs/cifs_unicode.c b/fs/cifs/cifs_unicode.c
+index 9bd03a2310328..171ad8b42107e 100644
+--- a/fs/cifs/cifs_unicode.c
++++ b/fs/cifs/cifs_unicode.c
+@@ -358,14 +358,9 @@ cifs_strndup_from_utf16(const char *src, const int maxlen,
+ 		if (!dst)
+ 			return NULL;
+ 		cifs_from_utf16(dst, (__le16 *) src, len, maxlen, codepage,
+-			       NO_MAP_UNI_RSVD);
++				NO_MAP_UNI_RSVD);
+ 	} else {
+-		len = strnlen(src, maxlen);
+-		len++;
+-		dst = kmalloc(len, GFP_KERNEL);
+-		if (!dst)
+-			return NULL;
+-		strlcpy(dst, src, len);
++		dst = kstrndup(src, maxlen, GFP_KERNEL);
+ 	}
+ 
+ 	return dst;
+diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
+index 686e0ad287880..3aa5eb9ce498e 100644
+--- a/fs/debugfs/file.c
++++ b/fs/debugfs/file.c
+@@ -179,8 +179,10 @@ static int open_proxy_open(struct inode *inode, struct file *filp)
+ 	if (!fops_get(real_fops)) {
+ #ifdef CONFIG_MODULES
+ 		if (real_fops->owner &&
+-		    real_fops->owner->state == MODULE_STATE_GOING)
++		    real_fops->owner->state == MODULE_STATE_GOING) {
++			r = -ENXIO;
+ 			goto out;
++		}
+ #endif
+ 
+ 		/* Huh? Module did not clean up after itself at exit? */
+@@ -314,8 +316,10 @@ static int full_proxy_open(struct inode *inode, struct file *filp)
+ 	if (!fops_get(real_fops)) {
+ #ifdef CONFIG_MODULES
+ 		if (real_fops->owner &&
+-		    real_fops->owner->state == MODULE_STATE_GOING)
++		    real_fops->owner->state == MODULE_STATE_GOING) {
++			r = -ENXIO;
+ 			goto out;
++		}
+ #endif
+ 
+ 		/* Huh? Module did not cleanup after itself at exit? */
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 5c74b29971976..6ee8b1e0e1741 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -259,8 +259,7 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
+ 	};
+ 	unsigned int seq_id = 0;
+ 
+-	if (unlikely(f2fs_readonly(inode->i_sb) ||
+-				is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
++	if (unlikely(f2fs_readonly(inode->i_sb)))
+ 		return 0;
+ 
+ 	trace_f2fs_sync_file_enter(inode);
+@@ -274,7 +273,7 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
+ 	ret = file_write_and_wait_range(file, start, end);
+ 	clear_inode_flag(inode, FI_NEED_IPU);
+ 
+-	if (ret) {
++	if (ret || is_sbi_flag_set(sbi, SBI_CP_DISABLED)) {
+ 		trace_f2fs_sync_file_exit(inode, cp_reason, datasync, ret);
+ 		return ret;
+ 	}
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index c529880678878..476b2c497d282 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1764,8 +1764,17 @@ restore_flag:
+ 
+ static void f2fs_enable_checkpoint(struct f2fs_sb_info *sbi)
+ {
++	int retry = DEFAULT_RETRY_IO_COUNT;
++
+ 	/* we should flush all the data to keep data consistency */
+-	sync_inodes_sb(sbi->sb);
++	do {
++		sync_inodes_sb(sbi->sb);
++		cond_resched();
++		congestion_wait(BLK_RW_ASYNC, DEFAULT_IO_TIMEOUT);
++	} while (get_pages(sbi, F2FS_DIRTY_DATA) && retry--);
++
++	if (unlikely(retry < 0))
++		f2fs_warn(sbi, "checkpoint=enable has some unwritten data.");
+ 
+ 	down_write(&sbi->gc_lock);
+ 	f2fs_dirty_to_prefree(sbi);
+diff --git a/fs/fcntl.c b/fs/fcntl.c
+index 05b36b28f2e87..71b43538fa44c 100644
+--- a/fs/fcntl.c
++++ b/fs/fcntl.c
+@@ -995,13 +995,14 @@ static void kill_fasync_rcu(struct fasync_struct *fa, int sig, int band)
+ {
+ 	while (fa) {
+ 		struct fown_struct *fown;
++		unsigned long flags;
+ 
+ 		if (fa->magic != FASYNC_MAGIC) {
+ 			printk(KERN_ERR "kill_fasync: bad magic number in "
+ 			       "fasync_struct!\n");
+ 			return;
+ 		}
+-		read_lock(&fa->fa_lock);
++		read_lock_irqsave(&fa->fa_lock, flags);
+ 		if (fa->fa_file) {
+ 			fown = &fa->fa_file->f_owner;
+ 			/* Don't send SIGURG to processes which have not set a
+@@ -1010,7 +1011,7 @@ static void kill_fasync_rcu(struct fasync_struct *fa, int sig, int band)
+ 			if (!(sig == SIGURG && fown->signum == 0))
+ 				send_sigio(fown, fa->fa_fd, band);
+ 		}
+-		read_unlock(&fa->fa_lock);
++		read_unlock_irqrestore(&fa->fa_lock, flags);
+ 		fa = rcu_dereference(fa->fa_next);
+ 	}
+ }
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 8de9c24ac4ac6..c9606f2d2864d 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -194,12 +194,11 @@ void fuse_finish_open(struct inode *inode, struct file *file)
+ 	struct fuse_file *ff = file->private_data;
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 
+-	if (!(ff->open_flags & FOPEN_KEEP_CACHE))
+-		invalidate_inode_pages2(inode->i_mapping);
+ 	if (ff->open_flags & FOPEN_STREAM)
+ 		stream_open(inode, file);
+ 	else if (ff->open_flags & FOPEN_NONSEEKABLE)
+ 		nonseekable_open(inode, file);
++
+ 	if (fc->atomic_o_trunc && (file->f_flags & O_TRUNC)) {
+ 		struct fuse_inode *fi = get_fuse_inode(inode);
+ 
+@@ -207,10 +206,14 @@ void fuse_finish_open(struct inode *inode, struct file *file)
+ 		fi->attr_version = atomic64_inc_return(&fc->attr_version);
+ 		i_size_write(inode, 0);
+ 		spin_unlock(&fi->lock);
++		truncate_pagecache(inode, 0);
+ 		fuse_invalidate_attr(inode);
+ 		if (fc->writeback_cache)
+ 			file_update_time(file);
++	} else if (!(ff->open_flags & FOPEN_KEEP_CACHE)) {
++		invalidate_inode_pages2(inode->i_mapping);
+ 	}
++
+ 	if ((file->f_mode & FMODE_WRITE) && fc->writeback_cache)
+ 		fuse_link_write_file(file);
+ }
+@@ -3237,7 +3240,7 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 
+ static int fuse_writeback_range(struct inode *inode, loff_t start, loff_t end)
+ {
+-	int err = filemap_write_and_wait_range(inode->i_mapping, start, end);
++	int err = filemap_write_and_wait_range(inode->i_mapping, start, -1);
+ 
+ 	if (!err)
+ 		fuse_sync_writes(inode);
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index ae9c5c1bdc508..b9ed6a6dbcf51 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -660,6 +660,7 @@ static int init_statfs(struct gfs2_sbd *sdp)
+ 			error = PTR_ERR(lsi->si_sc_inode);
+ 			fs_err(sdp, "can't find local \"sc\" file#%u: %d\n",
+ 			       jd->jd_jid, error);
++			kfree(lsi);
+ 			goto free_local;
+ 		}
+ 		lsi->si_jid = jd->jd_jid;
+@@ -1071,6 +1072,34 @@ void gfs2_online_uevent(struct gfs2_sbd *sdp)
+ 	kobject_uevent_env(&sdp->sd_kobj, KOBJ_ONLINE, envp);
+ }
+ 
++static int init_threads(struct gfs2_sbd *sdp)
++{
++	struct task_struct *p;
++	int error = 0;
++
++	p = kthread_run(gfs2_logd, sdp, "gfs2_logd");
++	if (IS_ERR(p)) {
++		error = PTR_ERR(p);
++		fs_err(sdp, "can't start logd thread: %d\n", error);
++		return error;
++	}
++	sdp->sd_logd_process = p;
++
++	p = kthread_run(gfs2_quotad, sdp, "gfs2_quotad");
++	if (IS_ERR(p)) {
++		error = PTR_ERR(p);
++		fs_err(sdp, "can't start quotad thread: %d\n", error);
++		goto fail;
++	}
++	sdp->sd_quotad_process = p;
++	return 0;
++
++fail:
++	kthread_stop(sdp->sd_logd_process);
++	sdp->sd_logd_process = NULL;
++	return error;
++}
++
+ /**
+  * gfs2_fill_super - Read in superblock
+  * @sb: The VFS superblock
+@@ -1197,6 +1226,14 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 		goto fail_per_node;
+ 	}
+ 
++	if (!sb_rdonly(sb)) {
++		error = init_threads(sdp);
++		if (error) {
++			gfs2_withdraw_delayed(sdp);
++			goto fail_per_node;
++		}
++	}
++
+ 	error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
+ 	if (error)
+ 		goto fail_per_node;
+@@ -1206,6 +1243,12 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 
+ 	gfs2_freeze_unlock(&freeze_gh);
+ 	if (error) {
++		if (sdp->sd_quotad_process)
++			kthread_stop(sdp->sd_quotad_process);
++		sdp->sd_quotad_process = NULL;
++		if (sdp->sd_logd_process)
++			kthread_stop(sdp->sd_logd_process);
++		sdp->sd_logd_process = NULL;
+ 		fs_err(sdp, "can't make FS RW: %d\n", error);
+ 		goto fail_per_node;
+ 	}
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 077dc8c035a8b..6a355e1347d7f 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -126,34 +126,6 @@ int gfs2_jdesc_check(struct gfs2_jdesc *jd)
+ 	return 0;
+ }
+ 
+-static int init_threads(struct gfs2_sbd *sdp)
+-{
+-	struct task_struct *p;
+-	int error = 0;
+-
+-	p = kthread_run(gfs2_logd, sdp, "gfs2_logd");
+-	if (IS_ERR(p)) {
+-		error = PTR_ERR(p);
+-		fs_err(sdp, "can't start logd thread: %d\n", error);
+-		return error;
+-	}
+-	sdp->sd_logd_process = p;
+-
+-	p = kthread_run(gfs2_quotad, sdp, "gfs2_quotad");
+-	if (IS_ERR(p)) {
+-		error = PTR_ERR(p);
+-		fs_err(sdp, "can't start quotad thread: %d\n", error);
+-		goto fail;
+-	}
+-	sdp->sd_quotad_process = p;
+-	return 0;
+-
+-fail:
+-	kthread_stop(sdp->sd_logd_process);
+-	sdp->sd_logd_process = NULL;
+-	return error;
+-}
+-
+ /**
+  * gfs2_make_fs_rw - Turn a Read-Only FS into a Read-Write one
+  * @sdp: the filesystem
+@@ -168,26 +140,17 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ 	struct gfs2_log_header_host head;
+ 	int error;
+ 
+-	error = init_threads(sdp);
+-	if (error) {
+-		gfs2_withdraw_delayed(sdp);
+-		return error;
+-	}
+-
+ 	j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
+-	if (gfs2_withdrawn(sdp)) {
+-		error = -EIO;
+-		goto fail;
+-	}
++	if (gfs2_withdrawn(sdp))
++		return -EIO;
+ 
+ 	error = gfs2_find_jhead(sdp->sd_jdesc, &head, false);
+ 	if (error || gfs2_withdrawn(sdp))
+-		goto fail;
++		return error;
+ 
+ 	if (!(head.lh_flags & GFS2_LOG_HEAD_UNMOUNT)) {
+ 		gfs2_consist(sdp);
+-		error = -EIO;
+-		goto fail;
++		return -EIO;
+ 	}
+ 
+ 	/*  Initialize some head of the log stuff  */
+@@ -195,20 +158,8 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ 	gfs2_log_pointers_init(sdp, head.lh_blkno);
+ 
+ 	error = gfs2_quota_init(sdp);
+-	if (error || gfs2_withdrawn(sdp))
+-		goto fail;
+-
+-	set_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
+-
+-	return 0;
+-
+-fail:
+-	if (sdp->sd_quotad_process)
+-		kthread_stop(sdp->sd_quotad_process);
+-	sdp->sd_quotad_process = NULL;
+-	if (sdp->sd_logd_process)
+-		kthread_stop(sdp->sd_logd_process);
+-	sdp->sd_logd_process = NULL;
++	if (!error && !gfs2_withdrawn(sdp))
++		set_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
+ 	return error;
+ }
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 108b0ed31c11a..2009d1cda606c 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -889,6 +889,7 @@ static const struct io_op_def io_op_defs[] = {
+ 	},
+ 	[IORING_OP_WRITE] = {
+ 		.needs_file		= 1,
++		.hash_reg_file		= 1,
+ 		.unbound_nonreg_file	= 1,
+ 		.pollout		= 1,
+ 		.async_size		= sizeof(struct io_async_rw),
+diff --git a/fs/iomap/swapfile.c b/fs/iomap/swapfile.c
+index a5e478de14174..2ceea45aefd8c 100644
+--- a/fs/iomap/swapfile.c
++++ b/fs/iomap/swapfile.c
+@@ -30,11 +30,16 @@ static int iomap_swapfile_add_extent(struct iomap_swapfile_info *isi)
+ {
+ 	struct iomap *iomap = &isi->iomap;
+ 	unsigned long nr_pages;
++	unsigned long max_pages;
+ 	uint64_t first_ppage;
+ 	uint64_t first_ppage_reported;
+ 	uint64_t next_ppage;
+ 	int error;
+ 
++	if (unlikely(isi->nr_pages >= isi->sis->max))
++		return 0;
++	max_pages = isi->sis->max - isi->nr_pages;
++
+ 	/*
+ 	 * Round the start up and the end down so that the physical
+ 	 * extent aligns to a page boundary.
+@@ -47,6 +52,7 @@ static int iomap_swapfile_add_extent(struct iomap_swapfile_info *isi)
+ 	if (first_ppage >= next_ppage)
+ 		return 0;
+ 	nr_pages = next_ppage - first_ppage;
++	nr_pages = min(nr_pages, max_pages);
+ 
+ 	/*
+ 	 * Calculate how much swap space we're adding; the first page contains
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index ec90773527eea..35675a1065be8 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -155,7 +155,6 @@ struct iso9660_options{
+ 	unsigned int overriderockperm:1;
+ 	unsigned int uid_set:1;
+ 	unsigned int gid_set:1;
+-	unsigned int utf8:1;
+ 	unsigned char map;
+ 	unsigned char check;
+ 	unsigned int blocksize;
+@@ -355,7 +354,6 @@ static int parse_options(char *options, struct iso9660_options *popt)
+ 	popt->gid = GLOBAL_ROOT_GID;
+ 	popt->uid = GLOBAL_ROOT_UID;
+ 	popt->iocharset = NULL;
+-	popt->utf8 = 0;
+ 	popt->overriderockperm = 0;
+ 	popt->session=-1;
+ 	popt->sbsector=-1;
+@@ -388,10 +386,13 @@ static int parse_options(char *options, struct iso9660_options *popt)
+ 		case Opt_cruft:
+ 			popt->cruft = 1;
+ 			break;
++#ifdef CONFIG_JOLIET
+ 		case Opt_utf8:
+-			popt->utf8 = 1;
++			kfree(popt->iocharset);
++			popt->iocharset = kstrdup("utf8", GFP_KERNEL);
++			if (!popt->iocharset)
++				return 0;
+ 			break;
+-#ifdef CONFIG_JOLIET
+ 		case Opt_iocharset:
+ 			kfree(popt->iocharset);
+ 			popt->iocharset = match_strdup(&args[0]);
+@@ -494,7 +495,6 @@ static int isofs_show_options(struct seq_file *m, struct dentry *root)
+ 	if (sbi->s_nocompress)		seq_puts(m, ",nocompress");
+ 	if (sbi->s_overriderockperm)	seq_puts(m, ",overriderockperm");
+ 	if (sbi->s_showassoc)		seq_puts(m, ",showassoc");
+-	if (sbi->s_utf8)		seq_puts(m, ",utf8");
+ 
+ 	if (sbi->s_check)		seq_printf(m, ",check=%c", sbi->s_check);
+ 	if (sbi->s_mapping)		seq_printf(m, ",map=%c", sbi->s_mapping);
+@@ -517,9 +517,10 @@ static int isofs_show_options(struct seq_file *m, struct dentry *root)
+ 		seq_printf(m, ",fmode=%o", sbi->s_fmode);
+ 
+ #ifdef CONFIG_JOLIET
+-	if (sbi->s_nls_iocharset &&
+-	    strcmp(sbi->s_nls_iocharset->charset, CONFIG_NLS_DEFAULT) != 0)
++	if (sbi->s_nls_iocharset)
+ 		seq_printf(m, ",iocharset=%s", sbi->s_nls_iocharset->charset);
++	else
++		seq_puts(m, ",iocharset=utf8");
+ #endif
+ 	return 0;
+ }
+@@ -862,14 +863,13 @@ root_found:
+ 	sbi->s_nls_iocharset = NULL;
+ 
+ #ifdef CONFIG_JOLIET
+-	if (joliet_level && opt.utf8 == 0) {
++	if (joliet_level) {
+ 		char *p = opt.iocharset ? opt.iocharset : CONFIG_NLS_DEFAULT;
+-		sbi->s_nls_iocharset = load_nls(p);
+-		if (! sbi->s_nls_iocharset) {
+-			/* Fail only if explicit charset specified */
+-			if (opt.iocharset)
++		if (strcmp(p, "utf8") != 0) {
++			sbi->s_nls_iocharset = opt.iocharset ?
++				load_nls(opt.iocharset) : load_nls_default();
++			if (!sbi->s_nls_iocharset)
+ 				goto out_freesbi;
+-			sbi->s_nls_iocharset = load_nls_default();
+ 		}
+ 	}
+ #endif
+@@ -885,7 +885,6 @@ root_found:
+ 	sbi->s_gid = opt.gid;
+ 	sbi->s_uid_set = opt.uid_set;
+ 	sbi->s_gid_set = opt.gid_set;
+-	sbi->s_utf8 = opt.utf8;
+ 	sbi->s_nocompress = opt.nocompress;
+ 	sbi->s_overriderockperm = opt.overriderockperm;
+ 	/*
+diff --git a/fs/isofs/isofs.h b/fs/isofs/isofs.h
+index 055ec6c586f7f..dcdc191ed1834 100644
+--- a/fs/isofs/isofs.h
++++ b/fs/isofs/isofs.h
+@@ -44,7 +44,6 @@ struct isofs_sb_info {
+ 	unsigned char s_session;
+ 	unsigned int  s_high_sierra:1;
+ 	unsigned int  s_rock:2;
+-	unsigned int  s_utf8:1;
+ 	unsigned int  s_cruft:1; /* Broken disks with high byte of length
+ 				  * containing junk */
+ 	unsigned int  s_nocompress:1;
+diff --git a/fs/isofs/joliet.c b/fs/isofs/joliet.c
+index be8b6a9d0b926..c0f04a1e7f695 100644
+--- a/fs/isofs/joliet.c
++++ b/fs/isofs/joliet.c
+@@ -41,14 +41,12 @@ uni16_to_x8(unsigned char *ascii, __be16 *uni, int len, struct nls_table *nls)
+ int
+ get_joliet_filename(struct iso_directory_record * de, unsigned char *outname, struct inode * inode)
+ {
+-	unsigned char utf8;
+ 	struct nls_table *nls;
+ 	unsigned char len = 0;
+ 
+-	utf8 = ISOFS_SB(inode->i_sb)->s_utf8;
+ 	nls = ISOFS_SB(inode->i_sb)->s_nls_iocharset;
+ 
+-	if (utf8) {
++	if (!nls) {
+ 		len = utf16s_to_utf8s((const wchar_t *) de->name,
+ 				de->name_len[0] >> 1, UTF16_BIG_ENDIAN,
+ 				outname, PAGE_SIZE);
+diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
+index 61d3cc2283dc8..498cb70c2c0d0 100644
+--- a/fs/lockd/svclock.c
++++ b/fs/lockd/svclock.c
+@@ -634,7 +634,7 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 	conflock->caller = "somehost";	/* FIXME */
+ 	conflock->len = strlen(conflock->caller);
+ 	conflock->oh.len = 0;		/* don't return OH info */
+-	conflock->svid = ((struct nlm_lockowner *)lock->fl.fl_owner)->pid;
++	conflock->svid = lock->fl.fl_pid;
+ 	conflock->fl.fl_type = lock->fl.fl_type;
+ 	conflock->fl.fl_start = lock->fl.fl_start;
+ 	conflock->fl.fl_end = lock->fl.fl_end;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 80e394a2e3fd7..142aac9b63a89 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -2646,9 +2646,9 @@ static void force_expire_client(struct nfs4_client *clp)
+ 	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+ 	bool already_expired;
+ 
+-	spin_lock(&clp->cl_lock);
++	spin_lock(&nn->client_lock);
+ 	clp->cl_time = 0;
+-	spin_unlock(&clp->cl_lock);
++	spin_unlock(&nn->client_lock);
+ 
+ 	wait_event(expiry_wq, atomic_read(&clp->cl_rpc_users) == 0);
+ 	spin_lock(&nn->client_lock);
+diff --git a/fs/udf/misc.c b/fs/udf/misc.c
+index eab94527340dc..1614d308d0f06 100644
+--- a/fs/udf/misc.c
++++ b/fs/udf/misc.c
+@@ -173,13 +173,22 @@ struct genericFormat *udf_get_extendedattr(struct inode *inode, uint32_t type,
+ 		else
+ 			offset = le32_to_cpu(eahd->appAttrLocation);
+ 
+-		while (offset < iinfo->i_lenEAttr) {
++		while (offset + sizeof(*gaf) < iinfo->i_lenEAttr) {
++			uint32_t attrLength;
++
+ 			gaf = (struct genericFormat *)&ea[offset];
++			attrLength = le32_to_cpu(gaf->attrLength);
++
++			/* Detect undersized elements and buffer overflows */
++			if ((attrLength < sizeof(*gaf)) ||
++			    (attrLength > (iinfo->i_lenEAttr - offset)))
++				break;
++
+ 			if (le32_to_cpu(gaf->attrType) == type &&
+ 					gaf->attrSubtype == subtype)
+ 				return gaf;
+ 			else
+-				offset += le32_to_cpu(gaf->attrLength);
++				offset += attrLength;
+ 		}
+ 	}
+ 
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index d0df217f4712a..5d2b820ef303a 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -108,16 +108,10 @@ struct logicalVolIntegrityDescImpUse *udf_sb_lvidiu(struct super_block *sb)
+ 		return NULL;
+ 	lvid = (struct logicalVolIntegrityDesc *)UDF_SB(sb)->s_lvid_bh->b_data;
+ 	partnum = le32_to_cpu(lvid->numOfPartitions);
+-	if ((sb->s_blocksize - sizeof(struct logicalVolIntegrityDescImpUse) -
+-	     offsetof(struct logicalVolIntegrityDesc, impUse)) /
+-	     (2 * sizeof(uint32_t)) < partnum) {
+-		udf_err(sb, "Logical volume integrity descriptor corrupted "
+-			"(numOfPartitions = %u)!\n", partnum);
+-		return NULL;
+-	}
+ 	/* The offset is to skip freeSpaceTable and sizeTable arrays */
+ 	offset = partnum * 2 * sizeof(uint32_t);
+-	return (struct logicalVolIntegrityDescImpUse *)&(lvid->impUse[offset]);
++	return (struct logicalVolIntegrityDescImpUse *)
++					(((uint8_t *)(lvid + 1)) + offset);
+ }
+ 
+ /* UDF filesystem type */
+@@ -349,10 +343,10 @@ static int udf_show_options(struct seq_file *seq, struct dentry *root)
+ 		seq_printf(seq, ",lastblock=%u", sbi->s_last_block);
+ 	if (sbi->s_anchor != 0)
+ 		seq_printf(seq, ",anchor=%u", sbi->s_anchor);
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_UTF8))
+-		seq_puts(seq, ",utf8");
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_NLS_MAP) && sbi->s_nls_map)
++	if (sbi->s_nls_map)
+ 		seq_printf(seq, ",iocharset=%s", sbi->s_nls_map->charset);
++	else
++		seq_puts(seq, ",iocharset=utf8");
+ 
+ 	return 0;
+ }
+@@ -557,19 +551,24 @@ static int udf_parse_options(char *options, struct udf_options *uopt,
+ 			/* Ignored (never implemented properly) */
+ 			break;
+ 		case Opt_utf8:
+-			uopt->flags |= (1 << UDF_FLAG_UTF8);
++			if (!remount) {
++				unload_nls(uopt->nls_map);
++				uopt->nls_map = NULL;
++			}
+ 			break;
+ 		case Opt_iocharset:
+ 			if (!remount) {
+-				if (uopt->nls_map)
+-					unload_nls(uopt->nls_map);
+-				/*
+-				 * load_nls() failure is handled later in
+-				 * udf_fill_super() after all options are
+-				 * parsed.
+-				 */
++				unload_nls(uopt->nls_map);
++				uopt->nls_map = NULL;
++			}
++			/* When nls_map is not loaded then UTF-8 is used */
++			if (!remount && strcmp(args[0].from, "utf8") != 0) {
+ 				uopt->nls_map = load_nls(args[0].from);
+-				uopt->flags |= (1 << UDF_FLAG_NLS_MAP);
++				if (!uopt->nls_map) {
++					pr_err("iocharset %s not found\n",
++						args[0].from);
++					return 0;
++				}
+ 			}
+ 			break;
+ 		case Opt_uforget:
+@@ -1541,6 +1540,7 @@ static void udf_load_logicalvolint(struct super_block *sb, struct kernel_extent_
+ 	struct udf_sb_info *sbi = UDF_SB(sb);
+ 	struct logicalVolIntegrityDesc *lvid;
+ 	int indirections = 0;
++	u32 parts, impuselen;
+ 
+ 	while (++indirections <= UDF_MAX_LVID_NESTING) {
+ 		final_bh = NULL;
+@@ -1567,15 +1567,27 @@ static void udf_load_logicalvolint(struct super_block *sb, struct kernel_extent_
+ 
+ 		lvid = (struct logicalVolIntegrityDesc *)final_bh->b_data;
+ 		if (lvid->nextIntegrityExt.extLength == 0)
+-			return;
++			goto check;
+ 
+ 		loc = leea_to_cpu(lvid->nextIntegrityExt);
+ 	}
+ 
+ 	udf_warn(sb, "Too many LVID indirections (max %u), ignoring.\n",
+ 		UDF_MAX_LVID_NESTING);
++out_err:
+ 	brelse(sbi->s_lvid_bh);
+ 	sbi->s_lvid_bh = NULL;
++	return;
++check:
++	parts = le32_to_cpu(lvid->numOfPartitions);
++	impuselen = le32_to_cpu(lvid->lengthOfImpUse);
++	if (parts >= sb->s_blocksize || impuselen >= sb->s_blocksize ||
++	    sizeof(struct logicalVolIntegrityDesc) + impuselen +
++	    2 * parts * sizeof(u32) > sb->s_blocksize) {
++		udf_warn(sb, "Corrupted LVID (parts=%u, impuselen=%u), "
++			 "ignoring.\n", parts, impuselen);
++		goto out_err;
++	}
+ }
+ 
+ /*
+@@ -2138,21 +2150,6 @@ static int udf_fill_super(struct super_block *sb, void *options, int silent)
+ 	if (!udf_parse_options((char *)options, &uopt, false))
+ 		goto parse_options_failure;
+ 
+-	if (uopt.flags & (1 << UDF_FLAG_UTF8) &&
+-	    uopt.flags & (1 << UDF_FLAG_NLS_MAP)) {
+-		udf_err(sb, "utf8 cannot be combined with iocharset\n");
+-		goto parse_options_failure;
+-	}
+-	if ((uopt.flags & (1 << UDF_FLAG_NLS_MAP)) && !uopt.nls_map) {
+-		uopt.nls_map = load_nls_default();
+-		if (!uopt.nls_map)
+-			uopt.flags &= ~(1 << UDF_FLAG_NLS_MAP);
+-		else
+-			udf_debug("Using default NLS map\n");
+-	}
+-	if (!(uopt.flags & (1 << UDF_FLAG_NLS_MAP)))
+-		uopt.flags |= (1 << UDF_FLAG_UTF8);
+-
+ 	fileset.logicalBlockNum = 0xFFFFFFFF;
+ 	fileset.partitionReferenceNum = 0xFFFF;
+ 
+@@ -2307,8 +2304,7 @@ static int udf_fill_super(struct super_block *sb, void *options, int silent)
+ error_out:
+ 	iput(sbi->s_vat_inode);
+ parse_options_failure:
+-	if (uopt.nls_map)
+-		unload_nls(uopt.nls_map);
++	unload_nls(uopt.nls_map);
+ 	if (lvid_open)
+ 		udf_close_lvid(sb);
+ 	brelse(sbi->s_lvid_bh);
+@@ -2358,8 +2354,7 @@ static void udf_put_super(struct super_block *sb)
+ 	sbi = UDF_SB(sb);
+ 
+ 	iput(sbi->s_vat_inode);
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_NLS_MAP))
+-		unload_nls(sbi->s_nls_map);
++	unload_nls(sbi->s_nls_map);
+ 	if (!sb_rdonly(sb))
+ 		udf_close_lvid(sb);
+ 	brelse(sbi->s_lvid_bh);
+diff --git a/fs/udf/udf_sb.h b/fs/udf/udf_sb.h
+index 758efe557a199..4fa620543d302 100644
+--- a/fs/udf/udf_sb.h
++++ b/fs/udf/udf_sb.h
+@@ -20,8 +20,6 @@
+ #define UDF_FLAG_UNDELETE		6
+ #define UDF_FLAG_UNHIDE			7
+ #define UDF_FLAG_VARCONV		8
+-#define UDF_FLAG_NLS_MAP		9
+-#define UDF_FLAG_UTF8			10
+ #define UDF_FLAG_UID_FORGET     11    /* save -1 for uid to disk */
+ #define UDF_FLAG_GID_FORGET     12
+ #define UDF_FLAG_UID_SET	13
+diff --git a/fs/udf/unicode.c b/fs/udf/unicode.c
+index 5fcfa96463ebb..622569007b530 100644
+--- a/fs/udf/unicode.c
++++ b/fs/udf/unicode.c
+@@ -177,7 +177,7 @@ static int udf_name_from_CS0(struct super_block *sb,
+ 		return 0;
+ 	}
+ 
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_NLS_MAP))
++	if (UDF_SB(sb)->s_nls_map)
+ 		conv_f = UDF_SB(sb)->s_nls_map->uni2char;
+ 	else
+ 		conv_f = NULL;
+@@ -285,7 +285,7 @@ static int udf_name_to_CS0(struct super_block *sb,
+ 	if (ocu_max_len <= 0)
+ 		return 0;
+ 
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_NLS_MAP))
++	if (UDF_SB(sb)->s_nls_map)
+ 		conv_f = UDF_SB(sb)->s_nls_map->char2uni;
+ 	else
+ 		conv_f = NULL;
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 542471b76f410..8aae375864b6b 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -1534,6 +1534,22 @@ static inline int queue_limit_discard_alignment(struct queue_limits *lim, sector
+ 	return offset << SECTOR_SHIFT;
+ }
+ 
++/*
++ * Two cases of handling DISCARD merge:
++ * If max_discard_segments > 1, the driver takes every bio
++ * as a range and send them to controller together. The ranges
++ * needn't to be contiguous.
++ * Otherwise, the bios/requests will be handled as same as
++ * others which should be contiguous.
++ */
++static inline bool blk_discard_mergable(struct request *req)
++{
++	if (req_op(req) == REQ_OP_DISCARD &&
++	    queue_max_discard_segments(req->q) > 1)
++		return true;
++	return false;
++}
++
+ static inline int bdev_discard_alignment(struct block_device *bdev)
+ {
+ 	struct request_queue *q = bdev_get_queue(bdev);
+diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
+index b67a51c574b97..5f04a2b35e80b 100644
+--- a/include/linux/energy_model.h
++++ b/include/linux/energy_model.h
+@@ -51,6 +51,22 @@ struct em_perf_domain {
+ #ifdef CONFIG_ENERGY_MODEL
+ #define EM_MAX_POWER 0xFFFF
+ 
++/*
++ * Increase resolution of energy estimation calculations for 64-bit
++ * architectures. The extra resolution improves decision made by EAS for the
++ * task placement when two Performance Domains might provide similar energy
++ * estimation values (w/o better resolution the values could be equal).
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ */
++#ifdef CONFIG_64BIT
++#define em_scale_power(p) ((p) * 1000)
++#else
++#define em_scale_power(p) (p)
++#endif
++
+ struct em_data_callback {
+ 	/**
+ 	 * active_power() - Provide power at the next performance state of
+diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
+index 107cedd7019a4..7f1b8549ebcee 100644
+--- a/include/linux/hrtimer.h
++++ b/include/linux/hrtimer.h
+@@ -318,16 +318,12 @@ struct clock_event_device;
+ 
+ extern void hrtimer_interrupt(struct clock_event_device *dev);
+ 
+-extern void clock_was_set_delayed(void);
+-
+ extern unsigned int hrtimer_resolution;
+ 
+ #else
+ 
+ #define hrtimer_resolution	(unsigned int)LOW_RES_NSEC
+ 
+-static inline void clock_was_set_delayed(void) { }
+-
+ #endif
+ 
+ static inline ktime_t
+@@ -351,7 +347,6 @@ hrtimer_expires_remaining_adjusted(const struct hrtimer *timer)
+ 						    timer->base->get_time());
+ }
+ 
+-extern void clock_was_set(void);
+ #ifdef CONFIG_TIMERFD
+ extern void timerfd_clock_was_set(void);
+ #else
+diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
+index 4a8795b21d774..3f02b818625ef 100644
+--- a/include/linux/local_lock_internal.h
++++ b/include/linux/local_lock_internal.h
+@@ -14,26 +14,14 @@ typedef struct {
+ } local_lock_t;
+ 
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+-# define LL_DEP_MAP_INIT(lockname)			\
++# define LOCAL_LOCK_DEBUG_INIT(lockname)		\
+ 	.dep_map = {					\
+ 		.name = #lockname,			\
+ 		.wait_type_inner = LD_WAIT_CONFIG,	\
+-	}
+-#else
+-# define LL_DEP_MAP_INIT(lockname)
+-#endif
+-
+-#define INIT_LOCAL_LOCK(lockname)	{ LL_DEP_MAP_INIT(lockname) }
++		.lock_type = LD_LOCK_PERCPU,		\
++	},						\
++	.owner = NULL,
+ 
+-#define __local_lock_init(lock)					\
+-do {								\
+-	static struct lock_class_key __key;			\
+-								\
+-	debug_check_no_locks_freed((void *)lock, sizeof(*lock));\
+-	lockdep_init_map_wait(&(lock)->dep_map, #lock, &__key, 0, LD_WAIT_CONFIG);\
+-} while (0)
+-
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ static inline void local_lock_acquire(local_lock_t *l)
+ {
+ 	lock_map_acquire(&l->dep_map);
+@@ -48,11 +36,30 @@ static inline void local_lock_release(local_lock_t *l)
+ 	lock_map_release(&l->dep_map);
+ }
+ 
++static inline void local_lock_debug_init(local_lock_t *l)
++{
++	l->owner = NULL;
++}
+ #else /* CONFIG_DEBUG_LOCK_ALLOC */
++# define LOCAL_LOCK_DEBUG_INIT(lockname)
+ static inline void local_lock_acquire(local_lock_t *l) { }
+ static inline void local_lock_release(local_lock_t *l) { }
++static inline void local_lock_debug_init(local_lock_t *l) { }
+ #endif /* !CONFIG_DEBUG_LOCK_ALLOC */
+ 
++#define INIT_LOCAL_LOCK(lockname)	{ LOCAL_LOCK_DEBUG_INIT(lockname) }
++
++#define __local_lock_init(lock)					\
++do {								\
++	static struct lock_class_key __key;			\
++								\
++	debug_check_no_locks_freed((void *)lock, sizeof(*lock));\
++	lockdep_init_map_type(&(lock)->dep_map, #lock, &__key,  \
++			      0, LD_WAIT_CONFIG, LD_WAIT_INV,	\
++			      LD_LOCK_PERCPU);			\
++	local_lock_debug_init(lock);				\
++} while (0)
++
+ #define __local_lock(lock)					\
+ 	do {							\
+ 		preempt_disable();				\
+diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
+index f5594879175a6..20b6797babe2c 100644
+--- a/include/linux/lockdep.h
++++ b/include/linux/lockdep.h
+@@ -185,12 +185,19 @@ extern void lockdep_unregister_key(struct lock_class_key *key);
+  * to lockdep:
+  */
+ 
+-extern void lockdep_init_map_waits(struct lockdep_map *lock, const char *name,
+-	struct lock_class_key *key, int subclass, short inner, short outer);
++extern void lockdep_init_map_type(struct lockdep_map *lock, const char *name,
++	struct lock_class_key *key, int subclass, u8 inner, u8 outer, u8 lock_type);
++
++static inline void
++lockdep_init_map_waits(struct lockdep_map *lock, const char *name,
++		       struct lock_class_key *key, int subclass, u8 inner, u8 outer)
++{
++	lockdep_init_map_type(lock, name, key, subclass, inner, LD_WAIT_INV, LD_LOCK_NORMAL);
++}
+ 
+ static inline void
+ lockdep_init_map_wait(struct lockdep_map *lock, const char *name,
+-		      struct lock_class_key *key, int subclass, short inner)
++		      struct lock_class_key *key, int subclass, u8 inner)
+ {
+ 	lockdep_init_map_waits(lock, name, key, subclass, inner, LD_WAIT_INV);
+ }
+@@ -340,6 +347,8 @@ static inline void lockdep_set_selftest_task(struct task_struct *task)
+ # define lock_set_class(l, n, k, s, i)		do { } while (0)
+ # define lock_set_subclass(l, s, i)		do { } while (0)
+ # define lockdep_init()				do { } while (0)
++# define lockdep_init_map_type(lock, name, key, sub, inner, outer, type) \
++		do { (void)(name); (void)(key); } while (0)
+ # define lockdep_init_map_waits(lock, name, key, sub, inner, outer) \
+ 		do { (void)(name); (void)(key); } while (0)
+ # define lockdep_init_map_wait(lock, name, key, sub, inner) \
+diff --git a/include/linux/lockdep_types.h b/include/linux/lockdep_types.h
+index 9a1fd49df17f6..2ec9ff5a7fff0 100644
+--- a/include/linux/lockdep_types.h
++++ b/include/linux/lockdep_types.h
+@@ -30,6 +30,12 @@ enum lockdep_wait_type {
+ 	LD_WAIT_MAX,		/* must be last */
+ };
+ 
++enum lockdep_lock_type {
++	LD_LOCK_NORMAL = 0,	/* normal, catch all */
++	LD_LOCK_PERCPU,		/* percpu */
++	LD_LOCK_MAX,
++};
++
+ #ifdef CONFIG_LOCKDEP
+ 
+ /*
+@@ -119,8 +125,10 @@ struct lock_class {
+ 	int				name_version;
+ 	const char			*name;
+ 
+-	short				wait_type_inner;
+-	short				wait_type_outer;
++	u8				wait_type_inner;
++	u8				wait_type_outer;
++	u8				lock_type;
++	/* u8				hole; */
+ 
+ #ifdef CONFIG_LOCK_STAT
+ 	unsigned long			contention_point[LOCKSTAT_POINTS];
+@@ -169,8 +177,10 @@ struct lockdep_map {
+ 	struct lock_class_key		*key;
+ 	struct lock_class		*class_cache[NR_LOCKDEP_CACHING_CLASSES];
+ 	const char			*name;
+-	short				wait_type_outer; /* can be taken in this context */
+-	short				wait_type_inner; /* presents this context */
++	u8				wait_type_outer; /* can be taken in this context */
++	u8				wait_type_inner; /* presents this context */
++	u8				lock_type;
++	/* u8				hole; */
+ #ifdef CONFIG_LOCK_STAT
+ 	int				cpu;
+ 	unsigned long			ip;
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index af8f4e2cf21d1..70a3664785f80 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -876,7 +876,8 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
+ 	u8         scatter_fcs[0x1];
+ 	u8         enhanced_multi_pkt_send_wqe[0x1];
+ 	u8         tunnel_lso_const_out_ip_id[0x1];
+-	u8         reserved_at_1c[0x2];
++	u8         tunnel_lro_gre[0x1];
++	u8         tunnel_lro_vxlan[0x1];
+ 	u8         tunnel_stateless_gre[0x1];
+ 	u8         tunnel_stateless_vxlan[0x1];
+ 
+diff --git a/include/linux/power/max17042_battery.h b/include/linux/power/max17042_battery.h
+index d55c746ac56e2..e00ad1cfb1f1d 100644
+--- a/include/linux/power/max17042_battery.h
++++ b/include/linux/power/max17042_battery.h
+@@ -69,7 +69,7 @@ enum max17042_register {
+ 	MAX17042_RelaxCFG	= 0x2A,
+ 	MAX17042_MiscCFG	= 0x2B,
+ 	MAX17042_TGAIN		= 0x2C,
+-	MAx17042_TOFF		= 0x2D,
++	MAX17042_TOFF		= 0x2D,
+ 	MAX17042_CGAIN		= 0x2E,
+ 	MAX17042_COFF		= 0x2F,
+ 
+diff --git a/include/linux/time64.h b/include/linux/time64.h
+index 5117cb5b56561..81b9686a20799 100644
+--- a/include/linux/time64.h
++++ b/include/linux/time64.h
+@@ -25,7 +25,9 @@ struct itimerspec64 {
+ #define TIME64_MIN			(-TIME64_MAX - 1)
+ 
+ #define KTIME_MAX			((s64)~((u64)1 << 63))
++#define KTIME_MIN			(-KTIME_MAX - 1)
+ #define KTIME_SEC_MAX			(KTIME_MAX / NSEC_PER_SEC)
++#define KTIME_SEC_MIN			(KTIME_MIN / NSEC_PER_SEC)
+ 
+ /*
+  * Limits for settimeofday():
+@@ -124,10 +126,13 @@ static inline bool timespec64_valid_settod(const struct timespec64 *ts)
+  */
+ static inline s64 timespec64_to_ns(const struct timespec64 *ts)
+ {
+-	/* Prevent multiplication overflow */
+-	if ((unsigned long long)ts->tv_sec >= KTIME_SEC_MAX)
++	/* Prevent multiplication overflow / underflow */
++	if (ts->tv_sec >= KTIME_SEC_MAX)
+ 		return KTIME_MAX;
+ 
++	if (ts->tv_sec <= KTIME_SEC_MIN)
++		return KTIME_MIN;
++
+ 	return ((s64) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
+ }
+ 
+diff --git a/include/soc/bcm2835/raspberrypi-firmware.h b/include/soc/bcm2835/raspberrypi-firmware.h
+index cc9cdbc66403f..fdfef7fe40df9 100644
+--- a/include/soc/bcm2835/raspberrypi-firmware.h
++++ b/include/soc/bcm2835/raspberrypi-firmware.h
+@@ -140,6 +140,7 @@ int rpi_firmware_property(struct rpi_firmware *fw,
+ 			  u32 tag, void *data, size_t len);
+ int rpi_firmware_property_list(struct rpi_firmware *fw,
+ 			       void *data, size_t tag_size);
++void rpi_firmware_put(struct rpi_firmware *fw);
+ struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node);
+ #else
+ static inline int rpi_firmware_property(struct rpi_firmware *fw, u32 tag,
+@@ -154,6 +155,7 @@ static inline int rpi_firmware_property_list(struct rpi_firmware *fw,
+ 	return -ENOSYS;
+ }
+ 
++static inline void rpi_firmware_put(struct rpi_firmware *fw) { }
+ static inline struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node)
+ {
+ 	return NULL;
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 556216dc97030..762bf87c26a3e 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -2450,7 +2450,7 @@ union bpf_attr {
+  * long bpf_sk_select_reuseport(struct sk_reuseport_md *reuse, struct bpf_map *map, void *key, u64 flags)
+  *	Description
+  *		Select a **SO_REUSEPORT** socket from a
+- *		**BPF_MAP_TYPE_REUSEPORT_ARRAY** *map*.
++ *		**BPF_MAP_TYPE_REUSEPORT_SOCKARRAY** *map*.
+  *		It checks the selected socket is matching the incoming
+  *		request in the socket buffer.
+  *	Return
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 29d4f4e375954..cba1f86e75cdb 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -10456,10 +10456,11 @@ static void convert_pseudo_ld_imm64(struct bpf_verifier_env *env)
+  * insni[off, off + cnt).  Adjust corresponding insn_aux_data by copying
+  * [0, off) and [off, end) to new locations, so the patched range stays zero
+  */
+-static int adjust_insn_aux_data(struct bpf_verifier_env *env,
+-				struct bpf_prog *new_prog, u32 off, u32 cnt)
++static void adjust_insn_aux_data(struct bpf_verifier_env *env,
++				 struct bpf_insn_aux_data *new_data,
++				 struct bpf_prog *new_prog, u32 off, u32 cnt)
+ {
+-	struct bpf_insn_aux_data *new_data, *old_data = env->insn_aux_data;
++	struct bpf_insn_aux_data *old_data = env->insn_aux_data;
+ 	struct bpf_insn *insn = new_prog->insnsi;
+ 	u32 old_seen = old_data[off].seen;
+ 	u32 prog_len;
+@@ -10472,12 +10473,9 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env,
+ 	old_data[off].zext_dst = insn_has_def32(env, insn + off + cnt - 1);
+ 
+ 	if (cnt == 1)
+-		return 0;
++		return;
+ 	prog_len = new_prog->len;
+-	new_data = vzalloc(array_size(prog_len,
+-				      sizeof(struct bpf_insn_aux_data)));
+-	if (!new_data)
+-		return -ENOMEM;
++
+ 	memcpy(new_data, old_data, sizeof(struct bpf_insn_aux_data) * off);
+ 	memcpy(new_data + off + cnt - 1, old_data + off,
+ 	       sizeof(struct bpf_insn_aux_data) * (prog_len - off - cnt + 1));
+@@ -10488,7 +10486,6 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env,
+ 	}
+ 	env->insn_aux_data = new_data;
+ 	vfree(old_data);
+-	return 0;
+ }
+ 
+ static void adjust_subprog_starts(struct bpf_verifier_env *env, u32 off, u32 len)
+@@ -10523,6 +10520,14 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
+ 					    const struct bpf_insn *patch, u32 len)
+ {
+ 	struct bpf_prog *new_prog;
++	struct bpf_insn_aux_data *new_data = NULL;
++
++	if (len > 1) {
++		new_data = vzalloc(array_size(env->prog->len + len - 1,
++					      sizeof(struct bpf_insn_aux_data)));
++		if (!new_data)
++			return NULL;
++	}
+ 
+ 	new_prog = bpf_patch_insn_single(env->prog, off, patch, len);
+ 	if (IS_ERR(new_prog)) {
+@@ -10530,10 +10535,10 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
+ 			verbose(env,
+ 				"insn %d cannot be patched due to 16-bit range\n",
+ 				env->insn_aux_data[off].orig_idx);
++		vfree(new_data);
+ 		return NULL;
+ 	}
+-	if (adjust_insn_aux_data(env, new_prog, off, len))
+-		return NULL;
++	adjust_insn_aux_data(env, new_data, new_prog, off, len);
+ 	adjust_subprog_starts(env, off, len);
+ 	adjust_poke_descs(new_prog, off, len);
+ 	return new_prog;
+@@ -11033,6 +11038,10 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ 		if (is_narrower_load && size < target_size) {
+ 			u8 shift = bpf_ctx_narrow_access_offset(
+ 				off, size, size_default) * 8;
++			if (shift && cnt + 1 >= ARRAY_SIZE(insn_buf)) {
++				verbose(env, "bpf verifier narrow ctx load misconfigured\n");
++				return -EINVAL;
++			}
+ 			if (ctx_field_size <= 4) {
+ 				if (shift)
+ 					insn_buf[cnt++] = BPF_ALU32_IMM(BPF_RSH,
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 53c70c470a38d..1999fcec45c71 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1114,7 +1114,7 @@ enum subparts_cmd {
+  * cpus_allowed can be granted or an error code will be returned.
+  *
+  * For partcmd_disable, the cpuset is being transofrmed from a partition
+- * root back to a non-partition root. any CPUs in cpus_allowed that are in
++ * root back to a non-partition root. Any CPUs in cpus_allowed that are in
+  * parent's subparts_cpus will be taken away from that cpumask and put back
+  * into parent's effective_cpus. 0 should always be returned.
+  *
+@@ -1148,6 +1148,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 	struct cpuset *parent = parent_cs(cpuset);
+ 	int adding;	/* Moving cpus from effective_cpus to subparts_cpus */
+ 	int deleting;	/* Moving cpus from subparts_cpus to effective_cpus */
++	int new_prs;
+ 	bool part_error = false;	/* Partition error? */
+ 
+ 	percpu_rwsem_assert_held(&cpuset_rwsem);
+@@ -1183,6 +1184,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 	 * A cpumask update cannot make parent's effective_cpus become empty.
+ 	 */
+ 	adding = deleting = false;
++	new_prs = cpuset->partition_root_state;
+ 	if (cmd == partcmd_enable) {
+ 		cpumask_copy(tmp->addmask, cpuset->cpus_allowed);
+ 		adding = true;
+@@ -1225,7 +1227,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 		/*
+ 		 * partcmd_update w/o newmask:
+ 		 *
+-		 * addmask = cpus_allowed & parent->effectiveb_cpus
++		 * addmask = cpus_allowed & parent->effective_cpus
+ 		 *
+ 		 * Note that parent's subparts_cpus may have been
+ 		 * pre-shrunk in case there is a change in the cpu list.
+@@ -1247,11 +1249,11 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 		switch (cpuset->partition_root_state) {
+ 		case PRS_ENABLED:
+ 			if (part_error)
+-				cpuset->partition_root_state = PRS_ERROR;
++				new_prs = PRS_ERROR;
+ 			break;
+ 		case PRS_ERROR:
+ 			if (!part_error)
+-				cpuset->partition_root_state = PRS_ENABLED;
++				new_prs = PRS_ENABLED;
+ 			break;
+ 		}
+ 		/*
+@@ -1260,10 +1262,10 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 		part_error = (prev_prs == PRS_ERROR);
+ 	}
+ 
+-	if (!part_error && (cpuset->partition_root_state == PRS_ERROR))
++	if (!part_error && (new_prs == PRS_ERROR))
+ 		return 0;	/* Nothing need to be done */
+ 
+-	if (cpuset->partition_root_state == PRS_ERROR) {
++	if (new_prs == PRS_ERROR) {
+ 		/*
+ 		 * Remove all its cpus from parent's subparts_cpus.
+ 		 */
+@@ -1272,7 +1274,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 				       parent->subparts_cpus);
+ 	}
+ 
+-	if (!adding && !deleting)
++	if (!adding && !deleting && (new_prs == cpuset->partition_root_state))
+ 		return 0;
+ 
+ 	/*
+@@ -1299,6 +1301,9 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 	}
+ 
+ 	parent->nr_subparts_cpus = cpumask_weight(parent->subparts_cpus);
++
++	if (cpuset->partition_root_state != new_prs)
++		cpuset->partition_root_state = new_prs;
+ 	spin_unlock_irq(&callback_lock);
+ 
+ 	return cmd == partcmd_update;
+@@ -1321,6 +1326,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 	struct cpuset *cp;
+ 	struct cgroup_subsys_state *pos_css;
+ 	bool need_rebuild_sched_domains = false;
++	int new_prs;
+ 
+ 	rcu_read_lock();
+ 	cpuset_for_each_descendant_pre(cp, pos_css, cs) {
+@@ -1360,17 +1366,18 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 		 * update_tasks_cpumask() again for tasks in the parent
+ 		 * cpuset if the parent's subparts_cpus changes.
+ 		 */
+-		if ((cp != cs) && cp->partition_root_state) {
++		new_prs = cp->partition_root_state;
++		if ((cp != cs) && new_prs) {
+ 			switch (parent->partition_root_state) {
+ 			case PRS_DISABLED:
+ 				/*
+ 				 * If parent is not a partition root or an
+-				 * invalid partition root, clear the state
+-				 * state and the CS_CPU_EXCLUSIVE flag.
++				 * invalid partition root, clear its state
++				 * and its CS_CPU_EXCLUSIVE flag.
+ 				 */
+ 				WARN_ON_ONCE(cp->partition_root_state
+ 					     != PRS_ERROR);
+-				cp->partition_root_state = 0;
++				new_prs = PRS_DISABLED;
+ 
+ 				/*
+ 				 * clear_bit() is an atomic operation and
+@@ -1391,11 +1398,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 				/*
+ 				 * When parent is invalid, it has to be too.
+ 				 */
+-				cp->partition_root_state = PRS_ERROR;
+-				if (cp->nr_subparts_cpus) {
+-					cp->nr_subparts_cpus = 0;
+-					cpumask_clear(cp->subparts_cpus);
+-				}
++				new_prs = PRS_ERROR;
+ 				break;
+ 			}
+ 		}
+@@ -1407,8 +1410,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 		spin_lock_irq(&callback_lock);
+ 
+ 		cpumask_copy(cp->effective_cpus, tmp->new_cpus);
+-		if (cp->nr_subparts_cpus &&
+-		   (cp->partition_root_state != PRS_ENABLED)) {
++		if (cp->nr_subparts_cpus && (new_prs != PRS_ENABLED)) {
+ 			cp->nr_subparts_cpus = 0;
+ 			cpumask_clear(cp->subparts_cpus);
+ 		} else if (cp->nr_subparts_cpus) {
+@@ -1435,6 +1437,10 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 					= cpumask_weight(cp->subparts_cpus);
+ 			}
+ 		}
++
++		if (new_prs != cp->partition_root_state)
++			cp->partition_root_state = new_prs;
++
+ 		spin_unlock_irq(&callback_lock);
+ 
+ 		WARN_ON(!is_in_v2_mode() &&
+@@ -1937,34 +1943,32 @@ out:
+ 
+ /*
+  * update_prstate - update partititon_root_state
+- * cs:	the cpuset to update
+- * val: 0 - disabled, 1 - enabled
++ * cs: the cpuset to update
++ * new_prs: new partition root state
+  *
+  * Call with cpuset_mutex held.
+  */
+-static int update_prstate(struct cpuset *cs, int val)
++static int update_prstate(struct cpuset *cs, int new_prs)
+ {
+-	int err;
++	int err, old_prs = cs->partition_root_state;
+ 	struct cpuset *parent = parent_cs(cs);
+-	struct tmpmasks tmp;
++	struct tmpmasks tmpmask;
+ 
+-	if ((val != 0) && (val != 1))
+-		return -EINVAL;
+-	if (val == cs->partition_root_state)
++	if (old_prs == new_prs)
+ 		return 0;
+ 
+ 	/*
+ 	 * Cannot force a partial or invalid partition root to a full
+ 	 * partition root.
+ 	 */
+-	if (val && cs->partition_root_state)
++	if (new_prs && (old_prs == PRS_ERROR))
+ 		return -EINVAL;
+ 
+-	if (alloc_cpumasks(NULL, &tmp))
++	if (alloc_cpumasks(NULL, &tmpmask))
+ 		return -ENOMEM;
+ 
+ 	err = -EINVAL;
+-	if (!cs->partition_root_state) {
++	if (!old_prs) {
+ 		/*
+ 		 * Turning on partition root requires setting the
+ 		 * CS_CPU_EXCLUSIVE bit implicitly as well and cpus_allowed
+@@ -1978,31 +1982,27 @@ static int update_prstate(struct cpuset *cs, int val)
+ 			goto out;
+ 
+ 		err = update_parent_subparts_cpumask(cs, partcmd_enable,
+-						     NULL, &tmp);
++						     NULL, &tmpmask);
+ 		if (err) {
+ 			update_flag(CS_CPU_EXCLUSIVE, cs, 0);
+ 			goto out;
+ 		}
+-		cs->partition_root_state = PRS_ENABLED;
+ 	} else {
+ 		/*
+ 		 * Turning off partition root will clear the
+ 		 * CS_CPU_EXCLUSIVE bit.
+ 		 */
+-		if (cs->partition_root_state == PRS_ERROR) {
+-			cs->partition_root_state = 0;
++		if (old_prs == PRS_ERROR) {
+ 			update_flag(CS_CPU_EXCLUSIVE, cs, 0);
+ 			err = 0;
+ 			goto out;
+ 		}
+ 
+ 		err = update_parent_subparts_cpumask(cs, partcmd_disable,
+-						     NULL, &tmp);
++						     NULL, &tmpmask);
+ 		if (err)
+ 			goto out;
+ 
+-		cs->partition_root_state = 0;
+-
+ 		/* Turning off CS_CPU_EXCLUSIVE will not return error */
+ 		update_flag(CS_CPU_EXCLUSIVE, cs, 0);
+ 	}
+@@ -2015,11 +2015,17 @@ static int update_prstate(struct cpuset *cs, int val)
+ 		update_tasks_cpumask(parent);
+ 
+ 	if (parent->child_ecpus_count)
+-		update_sibling_cpumasks(parent, cs, &tmp);
++		update_sibling_cpumasks(parent, cs, &tmpmask);
+ 
+ 	rebuild_sched_domains_locked();
+ out:
+-	free_cpumasks(NULL, &tmp);
++	if (!err) {
++		spin_lock_irq(&callback_lock);
++		cs->partition_root_state = new_prs;
++		spin_unlock_irq(&callback_lock);
++	}
++
++	free_cpumasks(NULL, &tmpmask);
+ 	return err;
+ }
+ 
+@@ -3060,7 +3066,7 @@ retry:
+ 		goto retry;
+ 	}
+ 
+-	parent =  parent_cs(cs);
++	parent = parent_cs(cs);
+ 	compute_effective_cpumask(&new_cpus, cs, parent);
+ 	nodes_and(new_mems, cs->mems_allowed, parent->effective_mems);
+ 
+@@ -3082,8 +3088,10 @@ retry:
+ 	if (is_partition_root(cs) && (cpumask_empty(&new_cpus) ||
+ 	   (parent->partition_root_state == PRS_ERROR))) {
+ 		if (cs->nr_subparts_cpus) {
++			spin_lock_irq(&callback_lock);
+ 			cs->nr_subparts_cpus = 0;
+ 			cpumask_clear(cs->subparts_cpus);
++			spin_unlock_irq(&callback_lock);
+ 			compute_effective_cpumask(&new_cpus, cs, parent);
+ 		}
+ 
+@@ -3097,7 +3105,9 @@ retry:
+ 		     cpumask_empty(&new_cpus)) {
+ 			update_parent_subparts_cpumask(cs, partcmd_disable,
+ 						       NULL, tmp);
++			spin_lock_irq(&callback_lock);
+ 			cs->partition_root_state = PRS_ERROR;
++			spin_unlock_irq(&callback_lock);
+ 		}
+ 		cpuset_force_rebuild();
+ 	}
+@@ -3168,6 +3178,13 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
+ 	cpus_updated = !cpumask_equal(top_cpuset.effective_cpus, &new_cpus);
+ 	mems_updated = !nodes_equal(top_cpuset.effective_mems, new_mems);
+ 
++	/*
++	 * In the rare case that hotplug removes all the cpus in subparts_cpus,
++	 * we assumed that cpus are updated.
++	 */
++	if (!cpus_updated && top_cpuset.nr_subparts_cpus)
++		cpus_updated = true;
++
+ 	/* synchronize cpus_allowed to cpu_active_mask */
+ 	if (cpus_updated) {
+ 		spin_lock_irq(&callback_lock);
+diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
+index f7e1d0eccdbc6..246efc74e3f34 100644
+--- a/kernel/cpu_pm.c
++++ b/kernel/cpu_pm.c
+@@ -13,19 +13,32 @@
+ #include <linux/spinlock.h>
+ #include <linux/syscore_ops.h>
+ 
+-static ATOMIC_NOTIFIER_HEAD(cpu_pm_notifier_chain);
++/*
++ * atomic_notifiers use a spinlock_t, which can block under PREEMPT_RT.
++ * Notifications for cpu_pm will be issued by the idle task itself, which can
++ * never block, IOW it requires using a raw_spinlock_t.
++ */
++static struct {
++	struct raw_notifier_head chain;
++	raw_spinlock_t lock;
++} cpu_pm_notifier = {
++	.chain = RAW_NOTIFIER_INIT(cpu_pm_notifier.chain),
++	.lock  = __RAW_SPIN_LOCK_UNLOCKED(cpu_pm_notifier.lock),
++};
+ 
+ static int cpu_pm_notify(enum cpu_pm_event event)
+ {
+ 	int ret;
+ 
+ 	/*
+-	 * atomic_notifier_call_chain has a RCU read critical section, which
+-	 * could be disfunctional in cpu idle. Copy RCU_NONIDLE code to let
+-	 * RCU know this.
++	 * This introduces a RCU read critical section, which could be
++	 * disfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know
++	 * this.
+ 	 */
+ 	rcu_irq_enter_irqson();
+-	ret = atomic_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL);
++	rcu_read_lock();
++	ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL);
++	rcu_read_unlock();
+ 	rcu_irq_exit_irqson();
+ 
+ 	return notifier_to_errno(ret);
+@@ -33,10 +46,13 @@ static int cpu_pm_notify(enum cpu_pm_event event)
+ 
+ static int cpu_pm_notify_robust(enum cpu_pm_event event_up, enum cpu_pm_event event_down)
+ {
++	unsigned long flags;
+ 	int ret;
+ 
+ 	rcu_irq_enter_irqson();
+-	ret = atomic_notifier_call_chain_robust(&cpu_pm_notifier_chain, event_up, event_down, NULL);
++	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
++	ret = raw_notifier_call_chain_robust(&cpu_pm_notifier.chain, event_up, event_down, NULL);
++	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
+ 	rcu_irq_exit_irqson();
+ 
+ 	return notifier_to_errno(ret);
+@@ -49,12 +65,17 @@ static int cpu_pm_notify_robust(enum cpu_pm_event event_up, enum cpu_pm_event ev
+  * Add a driver to a list of drivers that are notified about
+  * CPU and CPU cluster low power entry and exit.
+  *
+- * This function may sleep, and has the same return conditions as
+- * raw_notifier_chain_register.
++ * This function has the same return conditions as raw_notifier_chain_register.
+  */
+ int cpu_pm_register_notifier(struct notifier_block *nb)
+ {
+-	return atomic_notifier_chain_register(&cpu_pm_notifier_chain, nb);
++	unsigned long flags;
++	int ret;
++
++	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
++	ret = raw_notifier_chain_register(&cpu_pm_notifier.chain, nb);
++	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
+ 
+@@ -64,12 +85,17 @@ EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
+  *
+  * Remove a driver from the CPU PM notifier list.
+  *
+- * This function may sleep, and has the same return conditions as
+- * raw_notifier_chain_unregister.
++ * This function has the same return conditions as raw_notifier_chain_unregister.
+  */
+ int cpu_pm_unregister_notifier(struct notifier_block *nb)
+ {
+-	return atomic_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
++	unsigned long flags;
++	int ret;
++
++	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
++	ret = raw_notifier_chain_unregister(&cpu_pm_notifier.chain, nb);
++	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier);
+ 
+diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
+index 6990490fa67be..1f981162648a3 100644
+--- a/kernel/irq/timings.c
++++ b/kernel/irq/timings.c
+@@ -799,12 +799,14 @@ static int __init irq_timings_test_irqs(struct timings_intervals *ti)
+ 
+ 		__irq_timings_store(irq, irqs, ti->intervals[i]);
+ 		if (irqs->circ_timings[i & IRQ_TIMINGS_MASK] != index) {
++			ret = -EBADSLT;
+ 			pr_err("Failed to store in the circular buffer\n");
+ 			goto out;
+ 		}
+ 	}
+ 
+ 	if (irqs->count != ti->count) {
++		ret = -ERANGE;
+ 		pr_err("Count differs\n");
+ 		goto out;
+ 	}
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 8ae9d7abebc08..5184f68968158 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -1293,6 +1293,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
+ 	class->name_version = count_matching_names(class);
+ 	class->wait_type_inner = lock->wait_type_inner;
+ 	class->wait_type_outer = lock->wait_type_outer;
++	class->lock_type = lock->lock_type;
+ 	/*
+ 	 * We use RCU's safe list-add method to make
+ 	 * parallel walking of the hash-list safe:
+@@ -4621,9 +4622,9 @@ print_lock_invalid_wait_context(struct task_struct *curr,
+  */
+ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
+ {
+-	short next_inner = hlock_class(next)->wait_type_inner;
+-	short next_outer = hlock_class(next)->wait_type_outer;
+-	short curr_inner;
++	u8 next_inner = hlock_class(next)->wait_type_inner;
++	u8 next_outer = hlock_class(next)->wait_type_outer;
++	u8 curr_inner;
+ 	int depth;
+ 
+ 	if (!next_inner || next->trylock)
+@@ -4646,7 +4647,7 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
+ 
+ 	for (; depth < curr->lockdep_depth; depth++) {
+ 		struct held_lock *prev = curr->held_locks + depth;
+-		short prev_inner = hlock_class(prev)->wait_type_inner;
++		u8 prev_inner = hlock_class(prev)->wait_type_inner;
+ 
+ 		if (prev_inner) {
+ 			/*
+@@ -4695,9 +4696,9 @@ static inline int check_wait_context(struct task_struct *curr,
+ /*
+  * Initialize a lock instance's lock-class mapping info:
+  */
+-void lockdep_init_map_waits(struct lockdep_map *lock, const char *name,
++void lockdep_init_map_type(struct lockdep_map *lock, const char *name,
+ 			    struct lock_class_key *key, int subclass,
+-			    short inner, short outer)
++			    u8 inner, u8 outer, u8 lock_type)
+ {
+ 	int i;
+ 
+@@ -4720,6 +4721,7 @@ void lockdep_init_map_waits(struct lockdep_map *lock, const char *name,
+ 
+ 	lock->wait_type_outer = outer;
+ 	lock->wait_type_inner = inner;
++	lock->lock_type = lock_type;
+ 
+ 	/*
+ 	 * No key, no joy, we need to hash something.
+@@ -4754,7 +4756,7 @@ void lockdep_init_map_waits(struct lockdep_map *lock, const char *name,
+ 		raw_local_irq_restore(flags);
+ 	}
+ }
+-EXPORT_SYMBOL_GPL(lockdep_init_map_waits);
++EXPORT_SYMBOL_GPL(lockdep_init_map_type);
+ 
+ struct lock_class_key __lockdep_no_validate__;
+ EXPORT_SYMBOL_GPL(__lockdep_no_validate__);
+diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
+index 15ac7c4bb1117..86061901636cc 100644
+--- a/kernel/locking/mutex.c
++++ b/kernel/locking/mutex.c
+@@ -938,7 +938,6 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 		    struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
+ {
+ 	struct mutex_waiter waiter;
+-	bool first = false;
+ 	struct ww_mutex *ww;
+ 	int ret;
+ 
+@@ -1017,6 +1016,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 
+ 	set_current_state(state);
+ 	for (;;) {
++		bool first;
++
+ 		/*
+ 		 * Once we hold wait_lock, we're serialized against
+ 		 * mutex_unlock() handing the lock off to us, do a trylock
+@@ -1045,15 +1046,9 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 		spin_unlock(&lock->wait_lock);
+ 		schedule_preempt_disabled();
+ 
+-		/*
+-		 * ww_mutex needs to always recheck its position since its waiter
+-		 * list is not FIFO ordered.
+-		 */
+-		if (ww_ctx || !first) {
+-			first = __mutex_waiter_is_first(lock, &waiter);
+-			if (first)
+-				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+-		}
++		first = __mutex_waiter_is_first(lock, &waiter);
++		if (first)
++			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+ 
+ 		set_current_state(state);
+ 		/*
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index 994ca8353543a..be381eb6116a1 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -157,7 +157,9 @@ static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,
+ 	/* Compute the cost of each performance state. */
+ 	fmax = (u64) table[nr_states - 1].frequency;
+ 	for (i = 0; i < nr_states; i++) {
+-		table[i].cost = div64_u64(fmax * table[i].power,
++		unsigned long power_res = em_scale_power(table[i].power);
++
++		table[i].cost = div64_u64(fmax * power_res,
+ 					  table[i].frequency);
+ 	}
+ 
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 8c3ba0185082d..8c81c05c4236a 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -2561,6 +2561,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
+ void rcu_sched_clock_irq(int user)
+ {
+ 	trace_rcu_utilization(TPS("Start scheduler-tick"));
++	lockdep_assert_irqs_disabled();
+ 	raw_cpu_inc(rcu_data.ticks_this_gp);
+ 	/* The load-acquire pairs with the store-release setting to true. */
+ 	if (smp_load_acquire(this_cpu_ptr(&rcu_data.rcu_urgent_qs))) {
+@@ -2574,6 +2575,7 @@ void rcu_sched_clock_irq(int user)
+ 	rcu_flavor_sched_clock_irq(user);
+ 	if (rcu_pending(user))
+ 		invoke_rcu_core();
++	lockdep_assert_irqs_disabled();
+ 
+ 	trace_rcu_utilization(TPS("End scheduler-tick"));
+ }
+@@ -3730,6 +3732,8 @@ static int rcu_pending(int user)
+ 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
+ 	struct rcu_node *rnp = rdp->mynode;
+ 
++	lockdep_assert_irqs_disabled();
++
+ 	/* Check for CPU stalls, if enabled. */
+ 	check_cpu_stall(rdp);
+ 
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index 7d4f78bf40577..574aeaac9272d 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -682,6 +682,7 @@ static void rcu_flavor_sched_clock_irq(int user)
+ {
+ 	struct task_struct *t = current;
+ 
++	lockdep_assert_irqs_disabled();
+ 	if (user || rcu_is_cpu_rrupt_from_idle()) {
+ 		rcu_note_voluntary_context_switch(current);
+ 	}
+diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
+index ca21d28a0f98f..251a9af3709af 100644
+--- a/kernel/rcu/tree_stall.h
++++ b/kernel/rcu/tree_stall.h
+@@ -7,6 +7,8 @@
+  * Author: Paul E. McKenney <paulmck@linux.ibm.com>
+  */
+ 
++#include <linux/kvm_para.h>
++
+ //////////////////////////////////////////////////////////////////////////////
+ //
+ // Controlling CPU stall warnings, including delay calculation.
+@@ -260,8 +262,11 @@ static int rcu_print_task_stall(struct rcu_node *rnp, unsigned long flags)
+ 	struct task_struct *t;
+ 	struct task_struct *ts[8];
+ 
+-	if (!rcu_preempt_blocked_readers_cgp(rnp))
++	lockdep_assert_irqs_disabled();
++	if (!rcu_preempt_blocked_readers_cgp(rnp)) {
++		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ 		return 0;
++	}
+ 	pr_err("\tTasks blocked on level-%d rcu_node (CPUs %d-%d):",
+ 	       rnp->level, rnp->grplo, rnp->grphi);
+ 	t = list_entry(rnp->gp_tasks->prev,
+@@ -273,8 +278,8 @@ static int rcu_print_task_stall(struct rcu_node *rnp, unsigned long flags)
+ 			break;
+ 	}
+ 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+-	for (i--; i; i--) {
+-		t = ts[i];
++	while (i) {
++		t = ts[--i];
+ 		if (!try_invoke_on_locked_down_task(t, check_slow_task, &rscr))
+ 			pr_cont(" P%d", t->pid);
+ 		else
+@@ -284,6 +289,7 @@ static int rcu_print_task_stall(struct rcu_node *rnp, unsigned long flags)
+ 				".q"[rscr.rs.b.need_qs],
+ 				".e"[rscr.rs.b.exp_hint],
+ 				".l"[rscr.on_blkd_list]);
++		lockdep_assert_irqs_disabled();
+ 		put_task_struct(t);
+ 		ndetected++;
+ 	}
+@@ -472,6 +478,8 @@ static void print_other_cpu_stall(unsigned long gp_seq, unsigned long gps)
+ 	struct rcu_node *rnp;
+ 	long totqlen = 0;
+ 
++	lockdep_assert_irqs_disabled();
++
+ 	/* Kick and suppress, if so configured. */
+ 	rcu_stall_kick_kthreads();
+ 	if (rcu_stall_is_suppressed())
+@@ -493,6 +501,7 @@ static void print_other_cpu_stall(unsigned long gp_seq, unsigned long gps)
+ 				}
+ 		}
+ 		ndetected += rcu_print_task_stall(rnp, flags); // Releases rnp->lock.
++		lockdep_assert_irqs_disabled();
+ 	}
+ 
+ 	for_each_possible_cpu(cpu)
+@@ -538,6 +547,8 @@ static void print_cpu_stall(unsigned long gps)
+ 	struct rcu_node *rnp = rcu_get_root();
+ 	long totqlen = 0;
+ 
++	lockdep_assert_irqs_disabled();
++
+ 	/* Kick and suppress, if so configured. */
+ 	rcu_stall_kick_kthreads();
+ 	if (rcu_stall_is_suppressed())
+@@ -592,6 +603,7 @@ static void check_cpu_stall(struct rcu_data *rdp)
+ 	unsigned long js;
+ 	struct rcu_node *rnp;
+ 
++	lockdep_assert_irqs_disabled();
+ 	if ((rcu_stall_is_suppressed() && !READ_ONCE(rcu_kick_kthreads)) ||
+ 	    !rcu_gp_in_progress())
+ 		return;
+@@ -633,6 +645,14 @@ static void check_cpu_stall(struct rcu_data *rdp)
+ 	    (READ_ONCE(rnp->qsmask) & rdp->grpmask) &&
+ 	    cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
+ 
++		/*
++		 * If a virtual machine is stopped by the host it can look to
++		 * the watchdog like an RCU stall. Check to see if the host
++		 * stopped the vm.
++		 */
++		if (kvm_check_and_clear_guest_paused())
++			return;
++
+ 		/* We haven't checked in, so go dump stack. */
+ 		print_cpu_stall(gps);
+ 		if (READ_ONCE(rcu_cpu_stall_ftrace_dump))
+@@ -642,6 +662,14 @@ static void check_cpu_stall(struct rcu_data *rdp)
+ 		   ULONG_CMP_GE(j, js + RCU_STALL_RAT_DELAY) &&
+ 		   cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
+ 
++		/*
++		 * If a virtual machine is stopped by the host it can look to
++		 * the watchdog like an RCU stall. Check to see if the host
++		 * stopped the vm.
++		 */
++		if (kvm_check_and_clear_guest_paused())
++			return;
++
+ 		/* They had a few time units to dump stack, so complain. */
+ 		print_other_cpu_stall(gs2, gps);
+ 		if (READ_ONCE(rcu_cpu_stall_ftrace_dump))
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 84c105902027c..6db20a66e8e68 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1279,6 +1279,23 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
+ 		uclamp_rq_dec_id(rq, p, clamp_id);
+ }
+ 
++static inline void uclamp_rq_reinc_id(struct rq *rq, struct task_struct *p,
++				      enum uclamp_id clamp_id)
++{
++	if (!p->uclamp[clamp_id].active)
++		return;
++
++	uclamp_rq_dec_id(rq, p, clamp_id);
++	uclamp_rq_inc_id(rq, p, clamp_id);
++
++	/*
++	 * Make sure to clear the idle flag if we've transiently reached 0
++	 * active tasks on rq.
++	 */
++	if (clamp_id == UCLAMP_MAX && (rq->uclamp_flags & UCLAMP_FLAG_IDLE))
++		rq->uclamp_flags &= ~UCLAMP_FLAG_IDLE;
++}
++
+ static inline void
+ uclamp_update_active(struct task_struct *p)
+ {
+@@ -1302,12 +1319,8 @@ uclamp_update_active(struct task_struct *p)
+ 	 * affecting a valid clamp bucket, the next time it's enqueued,
+ 	 * it will already see the updated clamp bucket value.
+ 	 */
+-	for_each_clamp_id(clamp_id) {
+-		if (p->uclamp[clamp_id].active) {
+-			uclamp_rq_dec_id(rq, p, clamp_id);
+-			uclamp_rq_inc_id(rq, p, clamp_id);
+-		}
+-	}
++	for_each_clamp_id(clamp_id)
++		uclamp_rq_reinc_id(rq, p, clamp_id);
+ 
+ 	task_rq_unlock(rq, p, &rf);
+ }
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 6b98c1fe6e7f8..a3ae00c348a8b 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1735,6 +1735,7 @@ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused
+ 	 */
+ 	raw_spin_lock(&rq->lock);
+ 	if (p->dl.dl_non_contending) {
++		update_rq_clock(rq);
+ 		sub_running_bw(&p->dl, &rq->dl);
+ 		p->dl.dl_non_contending = 0;
+ 		/*
+@@ -2703,7 +2704,7 @@ void __setparam_dl(struct task_struct *p, const struct sched_attr *attr)
+ 	dl_se->dl_runtime = attr->sched_runtime;
+ 	dl_se->dl_deadline = attr->sched_deadline;
+ 	dl_se->dl_period = attr->sched_period ?: dl_se->dl_deadline;
+-	dl_se->flags = attr->sched_flags;
++	dl_se->flags = attr->sched_flags & SCHED_DL_FLAGS;
+ 	dl_se->dl_bw = to_ratio(dl_se->dl_period, dl_se->dl_runtime);
+ 	dl_se->dl_density = to_ratio(dl_se->dl_deadline, dl_se->dl_runtime);
+ }
+@@ -2716,7 +2717,8 @@ void __getparam_dl(struct task_struct *p, struct sched_attr *attr)
+ 	attr->sched_runtime = dl_se->dl_runtime;
+ 	attr->sched_deadline = dl_se->dl_deadline;
+ 	attr->sched_period = dl_se->dl_period;
+-	attr->sched_flags = dl_se->flags;
++	attr->sched_flags &= ~SCHED_DL_FLAGS;
++	attr->sched_flags |= dl_se->flags;
+ }
+ 
+ /*
+@@ -2813,7 +2815,7 @@ bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr)
+ 	if (dl_se->dl_runtime != attr->sched_runtime ||
+ 	    dl_se->dl_deadline != attr->sched_deadline ||
+ 	    dl_se->dl_period != attr->sched_period ||
+-	    dl_se->flags != attr->sched_flags)
++	    dl_se->flags != (attr->sched_flags & SCHED_DL_FLAGS))
+ 		return true;
+ 
+ 	return false;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index bad97d35684d2..c004e3b89c324 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -1533,7 +1533,7 @@ static inline bool is_core_idle(int cpu)
+ 		if (cpu == sibling)
+ 			continue;
+ 
+-		if (!idle_cpu(cpu))
++		if (!idle_cpu(sibling))
+ 			return false;
+ 	}
+ #endif
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 39112ac7ab347..08db8e095e48f 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -226,6 +226,8 @@ static inline void update_avg(u64 *avg, u64 sample)
+  */
+ #define SCHED_FLAG_SUGOV	0x10000000
+ 
++#define SCHED_DL_FLAGS (SCHED_FLAG_RECLAIM | SCHED_FLAG_DL_OVERRUN | SCHED_FLAG_SUGOV)
++
+ static inline bool dl_entity_is_special(struct sched_dl_entity *dl_se)
+ {
+ #ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 9505b1f21cdf8..4ef90718c1146 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -758,22 +758,6 @@ static void hrtimer_switch_to_hres(void)
+ 	retrigger_next_event(NULL);
+ }
+ 
+-static void clock_was_set_work(struct work_struct *work)
+-{
+-	clock_was_set();
+-}
+-
+-static DECLARE_WORK(hrtimer_work, clock_was_set_work);
+-
+-/*
+- * Called from timekeeping and resume code to reprogram the hrtimer
+- * interrupt device on all cpus.
+- */
+-void clock_was_set_delayed(void)
+-{
+-	schedule_work(&hrtimer_work);
+-}
+-
+ #else
+ 
+ static inline int hrtimer_is_hres_enabled(void) { return 0; }
+@@ -891,6 +875,22 @@ void clock_was_set(void)
+ 	timerfd_clock_was_set();
+ }
+ 
++static void clock_was_set_work(struct work_struct *work)
++{
++	clock_was_set();
++}
++
++static DECLARE_WORK(hrtimer_work, clock_was_set_work);
++
++/*
++ * Called from timekeeping and resume code to reprogram the hrtimer
++ * interrupt device on all cpus and to notify timerfd.
++ */
++void clock_was_set_delayed(void)
++{
++	schedule_work(&hrtimer_work);
++}
++
+ /*
+  * During resume we might have to reprogram the high resolution timer
+  * interrupt on all online CPUs.  However, all other CPUs will be
+@@ -1030,12 +1030,13 @@ static void __remove_hrtimer(struct hrtimer *timer,
+  * remove hrtimer, called with base lock held
+  */
+ static inline int
+-remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base, bool restart)
++remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base,
++	       bool restart, bool keep_local)
+ {
+ 	u8 state = timer->state;
+ 
+ 	if (state & HRTIMER_STATE_ENQUEUED) {
+-		int reprogram;
++		bool reprogram;
+ 
+ 		/*
+ 		 * Remove the timer and force reprogramming when high
+@@ -1048,8 +1049,16 @@ remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base, bool rest
+ 		debug_deactivate(timer);
+ 		reprogram = base->cpu_base == this_cpu_ptr(&hrtimer_bases);
+ 
++		/*
++		 * If the timer is not restarted then reprogramming is
++		 * required if the timer is local. If it is local and about
++		 * to be restarted, avoid programming it twice (on removal
++		 * and a moment later when it's requeued).
++		 */
+ 		if (!restart)
+ 			state = HRTIMER_STATE_INACTIVE;
++		else
++			reprogram &= !keep_local;
+ 
+ 		__remove_hrtimer(timer, base, state, reprogram);
+ 		return 1;
+@@ -1103,9 +1112,31 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ 				    struct hrtimer_clock_base *base)
+ {
+ 	struct hrtimer_clock_base *new_base;
++	bool force_local, first;
++
++	/*
++	 * If the timer is on the local cpu base and is the first expiring
++	 * timer then this might end up reprogramming the hardware twice
++	 * (on removal and on enqueue). To avoid that by prevent the
++	 * reprogram on removal, keep the timer local to the current CPU
++	 * and enforce reprogramming after it is queued no matter whether
++	 * it is the new first expiring timer again or not.
++	 */
++	force_local = base->cpu_base == this_cpu_ptr(&hrtimer_bases);
++	force_local &= base->cpu_base->next_timer == timer;
+ 
+-	/* Remove an active timer from the queue: */
+-	remove_hrtimer(timer, base, true);
++	/*
++	 * Remove an active timer from the queue. In case it is not queued
++	 * on the current CPU, make sure that remove_hrtimer() updates the
++	 * remote data correctly.
++	 *
++	 * If it's on the current CPU and the first expiring timer, then
++	 * skip reprogramming, keep the timer local and enforce
++	 * reprogramming later if it was the first expiring timer.  This
++	 * avoids programming the underlying clock event twice (once at
++	 * removal and once after enqueue).
++	 */
++	remove_hrtimer(timer, base, true, force_local);
+ 
+ 	if (mode & HRTIMER_MODE_REL)
+ 		tim = ktime_add_safe(tim, base->get_time());
+@@ -1115,9 +1146,24 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ 	hrtimer_set_expires_range_ns(timer, tim, delta_ns);
+ 
+ 	/* Switch the timer base, if necessary: */
+-	new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
++	if (!force_local) {
++		new_base = switch_hrtimer_base(timer, base,
++					       mode & HRTIMER_MODE_PINNED);
++	} else {
++		new_base = base;
++	}
++
++	first = enqueue_hrtimer(timer, new_base, mode);
++	if (!force_local)
++		return first;
+ 
+-	return enqueue_hrtimer(timer, new_base, mode);
++	/*
++	 * Timer was forced to stay on the current CPU to avoid
++	 * reprogramming on removal and enqueue. Force reprogram the
++	 * hardware by evaluating the new first expiring timer.
++	 */
++	hrtimer_force_reprogram(new_base->cpu_base, 1);
++	return 0;
+ }
+ 
+ /**
+@@ -1183,7 +1229,7 @@ int hrtimer_try_to_cancel(struct hrtimer *timer)
+ 	base = lock_hrtimer_base(timer, &flags);
+ 
+ 	if (!hrtimer_callback_running(timer))
+-		ret = remove_hrtimer(timer, base, false);
++		ret = remove_hrtimer(timer, base, false, false);
+ 
+ 	unlock_hrtimer_base(timer, &flags);
+ 
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 08c033b802569..d3d42b7637a19 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -1346,8 +1346,6 @@ void set_process_cpu_timer(struct task_struct *tsk, unsigned int clkid,
+ 			}
+ 		}
+ 
+-		if (!*newval)
+-			return;
+ 		*newval += now;
+ 	}
+ 
+diff --git a/kernel/time/tick-internal.h b/kernel/time/tick-internal.h
+index 7b24961367292..5294f5b1f9550 100644
+--- a/kernel/time/tick-internal.h
++++ b/kernel/time/tick-internal.h
+@@ -165,3 +165,6 @@ DECLARE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases);
+ 
+ extern u64 get_next_timer_interrupt(unsigned long basej, u64 basem);
+ void timer_clear_idle(void);
++
++void clock_was_set(void);
++void clock_was_set_delayed(void);
+diff --git a/lib/mpi/mpiutil.c b/lib/mpi/mpiutil.c
+index 3c63710c20c69..e6c4b3180ab1d 100644
+--- a/lib/mpi/mpiutil.c
++++ b/lib/mpi/mpiutil.c
+@@ -148,7 +148,7 @@ int mpi_resize(MPI a, unsigned nlimbs)
+ 		return 0;	/* no need to do it */
+ 
+ 	if (a->d) {
+-		p = kmalloc_array(nlimbs, sizeof(mpi_limb_t), GFP_KERNEL);
++		p = kcalloc(nlimbs, sizeof(mpi_limb_t), GFP_KERNEL);
+ 		if (!p)
+ 			return -ENOMEM;
+ 		memcpy(p, a->d, a->alloced * sizeof(mpi_limb_t));
+diff --git a/net/6lowpan/debugfs.c b/net/6lowpan/debugfs.c
+index 1c140af06d527..600b9563bfc53 100644
+--- a/net/6lowpan/debugfs.c
++++ b/net/6lowpan/debugfs.c
+@@ -170,7 +170,8 @@ static void lowpan_dev_debugfs_ctx_init(struct net_device *dev,
+ 	struct dentry *root;
+ 	char buf[32];
+ 
+-	WARN_ON_ONCE(id > LOWPAN_IPHC_CTX_TABLE_SIZE);
++	if (WARN_ON_ONCE(id >= LOWPAN_IPHC_CTX_TABLE_SIZE))
++		return;
+ 
+ 	sprintf(buf, "%d", id);
+ 
+diff --git a/net/bluetooth/cmtp/cmtp.h b/net/bluetooth/cmtp/cmtp.h
+index c32638dddbf94..f6b9dc4e408f2 100644
+--- a/net/bluetooth/cmtp/cmtp.h
++++ b/net/bluetooth/cmtp/cmtp.h
+@@ -26,7 +26,7 @@
+ #include <linux/types.h>
+ #include <net/bluetooth/bluetooth.h>
+ 
+-#define BTNAMSIZ 18
++#define BTNAMSIZ 21
+ 
+ /* CMTP ioctl defines */
+ #define CMTPCONNADD	_IOW('C', 200, int)
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 65d3f54099637..a9097fb7eb825 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1336,6 +1336,12 @@ int hci_inquiry(void __user *arg)
+ 		goto done;
+ 	}
+ 
++	/* Restrict maximum inquiry length to 60 seconds */
++	if (ir.length > 60) {
++		err = -EINVAL;
++		goto done;
++	}
++
+ 	hci_dev_lock(hdev);
+ 	if (inquiry_cache_age(hdev) > INQUIRY_CACHE_AGE_MAX ||
+ 	    inquiry_cache_empty(hdev) || ir.flags & IREQ_CACHE_FLUSH) {
+@@ -1726,6 +1732,14 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 	hci_request_cancel_all(hdev);
+ 	hci_req_sync_lock(hdev);
+ 
++	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
++	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
++	    test_bit(HCI_UP, &hdev->flags)) {
++		/* Execute vendor specific shutdown routine */
++		if (hdev->shutdown)
++			hdev->shutdown(hdev);
++	}
++
+ 	if (!test_and_clear_bit(HCI_UP, &hdev->flags)) {
+ 		cancel_delayed_work_sync(&hdev->cmd_timer);
+ 		hci_req_sync_unlock(hdev);
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 31a585fe0c7c6..08f67f91d427f 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -7464,7 +7464,7 @@ static int add_advertising(struct sock *sk, struct hci_dev *hdev,
+ 	 * advertising.
+ 	 */
+ 	if (hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY))
+-		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_ADVERTISING,
++		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_ADD_ADVERTISING,
+ 				       MGMT_STATUS_NOT_SUPPORTED);
+ 
+ 	if (cp->instance < 1 || cp->instance > hdev->le_num_of_adv_sets)
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 22a110f37abc6..600b1832e1dd6 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -85,7 +85,6 @@ static void sco_sock_timeout(struct timer_list *t)
+ 	sk->sk_state_change(sk);
+ 	bh_unlock_sock(sk);
+ 
+-	sco_sock_kill(sk);
+ 	sock_put(sk);
+ }
+ 
+@@ -177,7 +176,6 @@ static void sco_conn_del(struct hci_conn *hcon, int err)
+ 		sco_sock_clear_timer(sk);
+ 		sco_chan_del(sk, err);
+ 		bh_unlock_sock(sk);
+-		sco_sock_kill(sk);
+ 		sock_put(sk);
+ 	}
+ 
+@@ -394,8 +392,7 @@ static void sco_sock_cleanup_listen(struct sock *parent)
+  */
+ static void sco_sock_kill(struct sock *sk)
+ {
+-	if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket ||
+-	    sock_flag(sk, SOCK_DEAD))
++	if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket)
+ 		return;
+ 
+ 	BT_DBG("sk %p state %d", sk, sk->sk_state);
+@@ -447,7 +444,6 @@ static void sco_sock_close(struct sock *sk)
+ 	lock_sock(sk);
+ 	__sco_sock_close(sk);
+ 	release_sock(sk);
+-	sco_sock_kill(sk);
+ }
+ 
+ static void sco_skb_put_cmsg(struct sk_buff *skb, struct msghdr *msg,
+@@ -773,6 +769,11 @@ static void sco_conn_defer_accept(struct hci_conn *conn, u16 setting)
+ 			cp.max_latency = cpu_to_le16(0xffff);
+ 			cp.retrans_effort = 0xff;
+ 			break;
++		default:
++			/* use CVSD settings as fallback */
++			cp.max_latency = cpu_to_le16(0xffff);
++			cp.retrans_effort = 0xff;
++			break;
+ 		}
+ 
+ 		hci_send_cmd(hdev, HCI_OP_ACCEPT_SYNC_CONN_REQ,
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 90badb6f72271..96cf4bc1f9585 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -3079,10 +3079,12 @@ static void devlink_param_notify(struct devlink *devlink,
+ 				 struct devlink_param_item *param_item,
+ 				 enum devlink_command cmd);
+ 
+-static void devlink_reload_netns_change(struct devlink *devlink,
+-					struct net *dest_net)
++static void devlink_ns_change_notify(struct devlink *devlink,
++				     struct net *dest_net, struct net *curr_net,
++				     bool new)
+ {
+ 	struct devlink_param_item *param_item;
++	enum devlink_command cmd;
+ 
+ 	/* Userspace needs to be notified about devlink objects
+ 	 * removed from original and entering new network namespace.
+@@ -3090,17 +3092,18 @@ static void devlink_reload_netns_change(struct devlink *devlink,
+ 	 * reload process so the notifications are generated separatelly.
+ 	 */
+ 
+-	list_for_each_entry(param_item, &devlink->param_list, list)
+-		devlink_param_notify(devlink, 0, param_item,
+-				     DEVLINK_CMD_PARAM_DEL);
+-	devlink_notify(devlink, DEVLINK_CMD_DEL);
++	if (!dest_net || net_eq(dest_net, curr_net))
++		return;
+ 
+-	__devlink_net_set(devlink, dest_net);
++	if (new)
++		devlink_notify(devlink, DEVLINK_CMD_NEW);
+ 
+-	devlink_notify(devlink, DEVLINK_CMD_NEW);
++	cmd = new ? DEVLINK_CMD_PARAM_NEW : DEVLINK_CMD_PARAM_DEL;
+ 	list_for_each_entry(param_item, &devlink->param_list, list)
+-		devlink_param_notify(devlink, 0, param_item,
+-				     DEVLINK_CMD_PARAM_NEW);
++		devlink_param_notify(devlink, 0, param_item, cmd);
++
++	if (!new)
++		devlink_notify(devlink, DEVLINK_CMD_DEL);
+ }
+ 
+ static bool devlink_reload_supported(const struct devlink_ops *ops)
+@@ -3180,6 +3183,7 @@ static int devlink_reload(struct devlink *devlink, struct net *dest_net,
+ 			  u32 *actions_performed, struct netlink_ext_ack *extack)
+ {
+ 	u32 remote_reload_stats[DEVLINK_RELOAD_STATS_ARRAY_SIZE];
++	struct net *curr_net;
+ 	int err;
+ 
+ 	if (!devlink->reload_enabled)
+@@ -3187,18 +3191,22 @@ static int devlink_reload(struct devlink *devlink, struct net *dest_net,
+ 
+ 	memcpy(remote_reload_stats, devlink->stats.remote_reload_stats,
+ 	       sizeof(remote_reload_stats));
++
++	curr_net = devlink_net(devlink);
++	devlink_ns_change_notify(devlink, dest_net, curr_net, false);
+ 	err = devlink->ops->reload_down(devlink, !!dest_net, action, limit, extack);
+ 	if (err)
+ 		return err;
+ 
+-	if (dest_net && !net_eq(dest_net, devlink_net(devlink)))
+-		devlink_reload_netns_change(devlink, dest_net);
++	if (dest_net && !net_eq(dest_net, curr_net))
++		__devlink_net_set(devlink, dest_net);
+ 
+ 	err = devlink->ops->reload_up(devlink, action, limit, actions_performed, extack);
+ 	devlink_reload_failed_set(devlink, !!err);
+ 	if (err)
+ 		return err;
+ 
++	devlink_ns_change_notify(devlink, dest_net, curr_net, true);
+ 	WARN_ON(!(*actions_performed & BIT(action)));
+ 	/* Catch driver on updating the remote action within devlink reload */
+ 	WARN_ON(memcmp(remote_reload_stats, devlink->stats.remote_reload_stats,
+@@ -3395,7 +3403,7 @@ out_free_msg:
+ 
+ void devlink_flash_update_begin_notify(struct devlink *devlink)
+ {
+-	struct devlink_flash_notify params = { 0 };
++	struct devlink_flash_notify params = {};
+ 
+ 	__devlink_flash_update_notify(devlink,
+ 				      DEVLINK_CMD_FLASH_UPDATE,
+@@ -3405,7 +3413,7 @@ EXPORT_SYMBOL_GPL(devlink_flash_update_begin_notify);
+ 
+ void devlink_flash_update_end_notify(struct devlink *devlink)
+ {
+-	struct devlink_flash_notify params = { 0 };
++	struct devlink_flash_notify params = {};
+ 
+ 	__devlink_flash_update_notify(devlink,
+ 				      DEVLINK_CMD_FLASH_UPDATE_END,
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 3d9946fd41f38..ce787c3867938 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -610,18 +610,25 @@ static void fnhe_flush_routes(struct fib_nh_exception *fnhe)
+ 	}
+ }
+ 
+-static struct fib_nh_exception *fnhe_oldest(struct fnhe_hash_bucket *hash)
++static void fnhe_remove_oldest(struct fnhe_hash_bucket *hash)
+ {
+-	struct fib_nh_exception *fnhe, *oldest;
++	struct fib_nh_exception __rcu **fnhe_p, **oldest_p;
++	struct fib_nh_exception *fnhe, *oldest = NULL;
+ 
+-	oldest = rcu_dereference(hash->chain);
+-	for (fnhe = rcu_dereference(oldest->fnhe_next); fnhe;
+-	     fnhe = rcu_dereference(fnhe->fnhe_next)) {
+-		if (time_before(fnhe->fnhe_stamp, oldest->fnhe_stamp))
++	for (fnhe_p = &hash->chain; ; fnhe_p = &fnhe->fnhe_next) {
++		fnhe = rcu_dereference_protected(*fnhe_p,
++						 lockdep_is_held(&fnhe_lock));
++		if (!fnhe)
++			break;
++		if (!oldest ||
++		    time_before(fnhe->fnhe_stamp, oldest->fnhe_stamp)) {
+ 			oldest = fnhe;
++			oldest_p = fnhe_p;
++		}
+ 	}
+ 	fnhe_flush_routes(oldest);
+-	return oldest;
++	*oldest_p = oldest->fnhe_next;
++	kfree_rcu(oldest, rcu);
+ }
+ 
+ static u32 fnhe_hashfun(__be32 daddr)
+@@ -700,16 +707,21 @@ static void update_or_create_fnhe(struct fib_nh_common *nhc, __be32 daddr,
+ 		if (rt)
+ 			fill_route_from_fnhe(rt, fnhe);
+ 	} else {
+-		if (depth > FNHE_RECLAIM_DEPTH)
+-			fnhe = fnhe_oldest(hash);
+-		else {
+-			fnhe = kzalloc(sizeof(*fnhe), GFP_ATOMIC);
+-			if (!fnhe)
+-				goto out_unlock;
+-
+-			fnhe->fnhe_next = hash->chain;
+-			rcu_assign_pointer(hash->chain, fnhe);
++		/* Randomize max depth to avoid some side channels attacks. */
++		int max_depth = FNHE_RECLAIM_DEPTH +
++				prandom_u32_max(FNHE_RECLAIM_DEPTH);
++
++		while (depth > max_depth) {
++			fnhe_remove_oldest(hash);
++			depth--;
+ 		}
++
++		fnhe = kzalloc(sizeof(*fnhe), GFP_ATOMIC);
++		if (!fnhe)
++			goto out_unlock;
++
++		fnhe->fnhe_next = hash->chain;
++
+ 		fnhe->fnhe_genid = genid;
+ 		fnhe->fnhe_daddr = daddr;
+ 		fnhe->fnhe_gw = gw;
+@@ -717,6 +729,8 @@ static void update_or_create_fnhe(struct fib_nh_common *nhc, __be32 daddr,
+ 		fnhe->fnhe_mtu_locked = lock;
+ 		fnhe->fnhe_expires = max(1UL, expires);
+ 
++		rcu_assign_pointer(hash->chain, fnhe);
++
+ 		/* Exception created; mark the cached routes for the nexthop
+ 		 * stale, so anyone caching it rechecks if this exception
+ 		 * applies to them.
+@@ -3064,7 +3078,7 @@ static struct sk_buff *inet_rtm_getroute_build_skb(__be32 src, __be32 dst,
+ 		udph = skb_put_zero(skb, sizeof(struct udphdr));
+ 		udph->source = sport;
+ 		udph->dest = dport;
+-		udph->len = sizeof(struct udphdr);
++		udph->len = htons(sizeof(struct udphdr));
+ 		udph->check = 0;
+ 		break;
+ 	}
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 04e259a04443c..71395e745bc5e 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2417,6 +2417,7 @@ static void *tcp_get_idx(struct seq_file *seq, loff_t pos)
+ static void *tcp_seek_last_pos(struct seq_file *seq)
+ {
+ 	struct tcp_iter_state *st = seq->private;
++	int bucket = st->bucket;
+ 	int offset = st->offset;
+ 	int orig_num = st->num;
+ 	void *rc = NULL;
+@@ -2427,7 +2428,7 @@ static void *tcp_seek_last_pos(struct seq_file *seq)
+ 			break;
+ 		st->state = TCP_SEQ_STATE_LISTENING;
+ 		rc = listening_get_next(seq, NULL);
+-		while (offset-- && rc)
++		while (offset-- && rc && bucket == st->bucket)
+ 			rc = listening_get_next(seq, rc);
+ 		if (rc)
+ 			break;
+@@ -2438,7 +2439,7 @@ static void *tcp_seek_last_pos(struct seq_file *seq)
+ 		if (st->bucket > tcp_hashinfo.ehash_mask)
+ 			break;
+ 		rc = established_get_first(seq);
+-		while (offset-- && rc)
++		while (offset-- && rc && bucket == st->bucket)
+ 			rc = established_get_next(seq, rc);
+ 	}
+ 
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index bcf4fae83a9bd..168a7b4d957ae 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1655,6 +1655,7 @@ static int rt6_insert_exception(struct rt6_info *nrt,
+ 	struct in6_addr *src_key = NULL;
+ 	struct rt6_exception *rt6_ex;
+ 	struct fib6_nh *nh = res->nh;
++	int max_depth;
+ 	int err = 0;
+ 
+ 	spin_lock_bh(&rt6_exception_lock);
+@@ -1709,7 +1710,9 @@ static int rt6_insert_exception(struct rt6_info *nrt,
+ 	bucket->depth++;
+ 	net->ipv6.rt6_stats->fib_rt_cache++;
+ 
+-	if (bucket->depth > FIB6_MAX_DEPTH)
++	/* Randomize max depth to avoid some side channels attacks. */
++	max_depth = FIB6_MAX_DEPTH + prandom_u32_max(FIB6_MAX_DEPTH);
++	while (bucket->depth > max_depth)
+ 		rt6_exception_remove_oldest(bucket);
+ 
+ out:
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 20b3581a1c43f..673ad3cf2c3ab 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3229,7 +3229,9 @@ static bool ieee80211_amsdu_prepare_head(struct ieee80211_sub_if_data *sdata,
+ 	if (info->control.flags & IEEE80211_TX_CTRL_AMSDU)
+ 		return true;
+ 
+-	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr)))
++	if (!ieee80211_amsdu_realloc_pad(local, skb,
++					 sizeof(*amsdu_hdr) +
++					 local->hw.extra_tx_headroom))
+ 		return false;
+ 
+ 	data = skb_push(skb, sizeof(*amsdu_hdr));
+diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c
+index 4f50a64315cf0..50f40943c8153 100644
+--- a/net/netlabel/netlabel_cipso_v4.c
++++ b/net/netlabel/netlabel_cipso_v4.c
+@@ -187,14 +187,14 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
+ 		}
+ 	doi_def->map.std->lvl.local = kcalloc(doi_def->map.std->lvl.local_size,
+ 					      sizeof(u32),
+-					      GFP_KERNEL);
++					      GFP_KERNEL | __GFP_NOWARN);
+ 	if (doi_def->map.std->lvl.local == NULL) {
+ 		ret_val = -ENOMEM;
+ 		goto add_std_failure;
+ 	}
+ 	doi_def->map.std->lvl.cipso = kcalloc(doi_def->map.std->lvl.cipso_size,
+ 					      sizeof(u32),
+-					      GFP_KERNEL);
++					      GFP_KERNEL | __GFP_NOWARN);
+ 	if (doi_def->map.std->lvl.cipso == NULL) {
+ 		ret_val = -ENOMEM;
+ 		goto add_std_failure;
+@@ -263,7 +263,7 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
+ 		doi_def->map.std->cat.local = kcalloc(
+ 					      doi_def->map.std->cat.local_size,
+ 					      sizeof(u32),
+-					      GFP_KERNEL);
++					      GFP_KERNEL | __GFP_NOWARN);
+ 		if (doi_def->map.std->cat.local == NULL) {
+ 			ret_val = -ENOMEM;
+ 			goto add_std_failure;
+@@ -271,7 +271,7 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
+ 		doi_def->map.std->cat.cipso = kcalloc(
+ 					      doi_def->map.std->cat.cipso_size,
+ 					      sizeof(u32),
+-					      GFP_KERNEL);
++					      GFP_KERNEL | __GFP_NOWARN);
+ 		if (doi_def->map.std->cat.cipso == NULL) {
+ 			ret_val = -ENOMEM;
+ 			goto add_std_failure;
+diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
+index 53d45e029c36d..4a78fcf5d4f98 100644
+--- a/net/sched/sch_cbq.c
++++ b/net/sched/sch_cbq.c
+@@ -1614,7 +1614,7 @@ cbq_change_class(struct Qdisc *sch, u32 classid, u32 parentid, struct nlattr **t
+ 	err = tcf_block_get(&cl->block, &cl->filter_list, sch, extack);
+ 	if (err) {
+ 		kfree(cl);
+-		return err;
++		goto failure;
+ 	}
+ 
+ 	if (tca[TCA_RATE]) {
+diff --git a/samples/bpf/xdp_redirect_cpu_user.c b/samples/bpf/xdp_redirect_cpu_user.c
+index f78cb18319aaf..16eb839e71f03 100644
+--- a/samples/bpf/xdp_redirect_cpu_user.c
++++ b/samples/bpf/xdp_redirect_cpu_user.c
+@@ -837,7 +837,7 @@ int main(int argc, char **argv)
+ 	memset(cpu, 0, n_cpus * sizeof(int));
+ 
+ 	/* Parse commands line args */
+-	while ((opt = getopt_long(argc, argv, "hSd:s:p:q:c:xzFf:e:r:m:",
++	while ((opt = getopt_long(argc, argv, "hSd:s:p:q:c:xzFf:e:r:m:n",
+ 				  long_options, &longindex)) != -1) {
+ 		switch (opt) {
+ 		case 'd':
+diff --git a/samples/pktgen/pktgen_sample04_many_flows.sh b/samples/pktgen/pktgen_sample04_many_flows.sh
+index 2cd6b701400de..9db1ecf8de8bc 100755
+--- a/samples/pktgen/pktgen_sample04_many_flows.sh
++++ b/samples/pktgen/pktgen_sample04_many_flows.sh
+@@ -13,13 +13,15 @@ root_check_run_with_sudo "$@"
+ # Parameter parsing via include
+ source ${basedir}/parameters.sh
+ # Set some default params, if they didn't get set
+-[ -z "$DEST_IP" ]   && DEST_IP="198.18.0.42"
++if [ -z "$DEST_IP" ]; then
++    [ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"
++fi
+ [ -z "$DST_MAC" ]   && DST_MAC="90:e2:ba:ff:ff:ff"
+ [ -z "$CLONE_SKB" ] && CLONE_SKB="0"
+ [ -z "$COUNT" ]     && COUNT="0" # Zero means indefinitely
+ if [ -n "$DEST_IP" ]; then
+-    validate_addr $DEST_IP
+-    read -r DST_MIN DST_MAX <<< $(parse_addr $DEST_IP)
++    validate_addr${IP6} $DEST_IP
++    read -r DST_MIN DST_MAX <<< $(parse_addr${IP6} $DEST_IP)
+ fi
+ if [ -n "$DST_PORT" ]; then
+     read -r UDP_DST_MIN UDP_DST_MAX <<< $(parse_ports $DST_PORT)
+@@ -65,8 +67,8 @@ for ((thread = $F_THREAD; thread <= $L_THREAD; thread++)); do
+ 
+     # Single destination
+     pg_set $dev "dst_mac $DST_MAC"
+-    pg_set $dev "dst_min $DST_MIN"
+-    pg_set $dev "dst_max $DST_MAX"
++    pg_set $dev "dst${IP6}_min $DST_MIN"
++    pg_set $dev "dst${IP6}_max $DST_MAX"
+ 
+     if [ -n "$DST_PORT" ]; then
+ 	# Single destination port or random port range
+diff --git a/samples/pktgen/pktgen_sample05_flow_per_thread.sh b/samples/pktgen/pktgen_sample05_flow_per_thread.sh
+index 4cb6252ade399..9fc6c6da028ac 100755
+--- a/samples/pktgen/pktgen_sample05_flow_per_thread.sh
++++ b/samples/pktgen/pktgen_sample05_flow_per_thread.sh
+@@ -17,14 +17,16 @@ root_check_run_with_sudo "$@"
+ # Parameter parsing via include
+ source ${basedir}/parameters.sh
+ # Set some default params, if they didn't get set
+-[ -z "$DEST_IP" ]   && DEST_IP="198.18.0.42"
++if [ -z "$DEST_IP" ]; then
++    [ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"
++fi
+ [ -z "$DST_MAC" ]   && DST_MAC="90:e2:ba:ff:ff:ff"
+ [ -z "$CLONE_SKB" ] && CLONE_SKB="0"
+ [ -z "$BURST" ]     && BURST=32
+ [ -z "$COUNT" ]     && COUNT="0" # Zero means indefinitely
+ if [ -n "$DEST_IP" ]; then
+-    validate_addr $DEST_IP
+-    read -r DST_MIN DST_MAX <<< $(parse_addr $DEST_IP)
++    validate_addr${IP6} $DEST_IP
++    read -r DST_MIN DST_MAX <<< $(parse_addr${IP6} $DEST_IP)
+ fi
+ if [ -n "$DST_PORT" ]; then
+     read -r UDP_DST_MIN UDP_DST_MAX <<< $(parse_ports $DST_PORT)
+@@ -55,8 +57,8 @@ for ((thread = $F_THREAD; thread <= $L_THREAD; thread++)); do
+ 
+     # Single destination
+     pg_set $dev "dst_mac $DST_MAC"
+-    pg_set $dev "dst_min $DST_MIN"
+-    pg_set $dev "dst_max $DST_MAX"
++    pg_set $dev "dst${IP6}_min $DST_MIN"
++    pg_set $dev "dst${IP6}_max $DST_MAX"
+ 
+     if [ -n "$DST_PORT" ]; then
+ 	# Single destination port or random port range
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index 12e9250c1bec6..9e72edb8d31af 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -6,7 +6,6 @@ config IMA
+ 	select SECURITYFS
+ 	select CRYPTO
+ 	select CRYPTO_HMAC
+-	select CRYPTO_MD5
+ 	select CRYPTO_SHA1
+ 	select CRYPTO_HASH_INFO
+ 	select TCG_TPM if HAS_IOMEM && !UML
+diff --git a/security/integrity/ima/ima_mok.c b/security/integrity/ima/ima_mok.c
+index 1e5c019161738..95cc31525c573 100644
+--- a/security/integrity/ima/ima_mok.c
++++ b/security/integrity/ima/ima_mok.c
+@@ -21,7 +21,7 @@ struct key *ima_blacklist_keyring;
+ /*
+  * Allocate the IMA blacklist keyring
+  */
+-__init int ima_mok_init(void)
++static __init int ima_mok_init(void)
+ {
+ 	struct key_restriction *restriction;
+ 
+diff --git a/sound/soc/codecs/rt5682-i2c.c b/sound/soc/codecs/rt5682-i2c.c
+index 547445d1e3c69..89e545eb9a8a0 100644
+--- a/sound/soc/codecs/rt5682-i2c.c
++++ b/sound/soc/codecs/rt5682-i2c.c
+@@ -117,6 +117,13 @@ static struct snd_soc_dai_driver rt5682_dai[] = {
+ 	},
+ };
+ 
++static void rt5682_i2c_disable_regulators(void *data)
++{
++	struct rt5682_priv *rt5682 = data;
++
++	regulator_bulk_disable(ARRAY_SIZE(rt5682->supplies), rt5682->supplies);
++}
++
+ static int rt5682_i2c_probe(struct i2c_client *i2c,
+ 		const struct i2c_device_id *id)
+ {
+@@ -157,6 +164,11 @@ static int rt5682_i2c_probe(struct i2c_client *i2c,
+ 		return ret;
+ 	}
+ 
++	ret = devm_add_action_or_reset(&i2c->dev, rt5682_i2c_disable_regulators,
++				       rt5682);
++	if (ret)
++		return ret;
++
+ 	ret = regulator_bulk_enable(ARRAY_SIZE(rt5682->supplies),
+ 				    rt5682->supplies);
+ 	if (ret) {
+@@ -275,6 +287,13 @@ static void rt5682_i2c_shutdown(struct i2c_client *client)
+ 	rt5682_reset(rt5682);
+ }
+ 
++static int rt5682_i2c_remove(struct i2c_client *client)
++{
++	rt5682_i2c_shutdown(client);
++
++	return 0;
++}
++
+ static const struct of_device_id rt5682_of_match[] = {
+ 	{.compatible = "realtek,rt5682i"},
+ 	{},
+@@ -301,6 +320,7 @@ static struct i2c_driver rt5682_i2c_driver = {
+ 		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
+ 	},
+ 	.probe = rt5682_i2c_probe,
++	.remove = rt5682_i2c_remove,
+ 	.shutdown = rt5682_i2c_shutdown,
+ 	.id_table = rt5682_i2c_id,
+ };
+diff --git a/sound/soc/codecs/wcd9335.c b/sound/soc/codecs/wcd9335.c
+index 4d2b1ec7c03bb..2677d0c3b19ba 100644
+--- a/sound/soc/codecs/wcd9335.c
++++ b/sound/soc/codecs/wcd9335.c
+@@ -4076,6 +4076,16 @@ static int wcd9335_setup_irqs(struct wcd9335_codec *wcd)
+ 	return ret;
+ }
+ 
++static void wcd9335_teardown_irqs(struct wcd9335_codec *wcd)
++{
++	int i;
++
++	/* disable interrupts on all slave ports */
++	for (i = 0; i < WCD9335_SLIM_NUM_PORT_REG; i++)
++		regmap_write(wcd->if_regmap, WCD9335_SLIM_PGD_PORT_INT_EN0 + i,
++			     0x00);
++}
++
+ static void wcd9335_cdc_sido_ccl_enable(struct wcd9335_codec *wcd,
+ 					bool ccl_flag)
+ {
+@@ -4844,6 +4854,7 @@ static void wcd9335_codec_init(struct snd_soc_component *component)
+ static int wcd9335_codec_probe(struct snd_soc_component *component)
+ {
+ 	struct wcd9335_codec *wcd = dev_get_drvdata(component->dev);
++	int ret;
+ 	int i;
+ 
+ 	snd_soc_component_init_regmap(component, wcd->regmap);
+@@ -4861,7 +4872,15 @@ static int wcd9335_codec_probe(struct snd_soc_component *component)
+ 	for (i = 0; i < NUM_CODEC_DAIS; i++)
+ 		INIT_LIST_HEAD(&wcd->dai[i].slim_ch_list);
+ 
+-	return wcd9335_setup_irqs(wcd);
++	ret = wcd9335_setup_irqs(wcd);
++	if (ret)
++		goto free_clsh_ctrl;
++
++	return 0;
++
++free_clsh_ctrl:
++	wcd_clsh_ctrl_free(wcd->clsh_ctrl);
++	return ret;
+ }
+ 
+ static void wcd9335_codec_remove(struct snd_soc_component *comp)
+@@ -4869,7 +4888,7 @@ static void wcd9335_codec_remove(struct snd_soc_component *comp)
+ 	struct wcd9335_codec *wcd = dev_get_drvdata(comp->dev);
+ 
+ 	wcd_clsh_ctrl_free(wcd->clsh_ctrl);
+-	free_irq(regmap_irq_get_virq(wcd->irq_data, WCD9335_IRQ_SLIMBUS), wcd);
++	wcd9335_teardown_irqs(wcd);
+ }
+ 
+ static int wcd9335_codec_set_sysclk(struct snd_soc_component *comp,
+diff --git a/sound/soc/intel/boards/kbl_da7219_max98927.c b/sound/soc/intel/boards/kbl_da7219_max98927.c
+index e0149cf6127d0..884741aa48335 100644
+--- a/sound/soc/intel/boards/kbl_da7219_max98927.c
++++ b/sound/soc/intel/boards/kbl_da7219_max98927.c
+@@ -197,7 +197,7 @@ static int kabylake_ssp0_hw_params(struct snd_pcm_substream *substream,
+ 		}
+ 		if (!strcmp(codec_dai->component->name, MAX98373_DEV0_NAME)) {
+ 			ret = snd_soc_dai_set_tdm_slot(codec_dai,
+-							0x03, 3, 8, 24);
++							0x30, 3, 8, 16);
+ 			if (ret < 0) {
+ 				dev_err(runtime->dev,
+ 						"DEV0 TDM slot err:%d\n", ret);
+@@ -206,10 +206,10 @@ static int kabylake_ssp0_hw_params(struct snd_pcm_substream *substream,
+ 		}
+ 		if (!strcmp(codec_dai->component->name, MAX98373_DEV1_NAME)) {
+ 			ret = snd_soc_dai_set_tdm_slot(codec_dai,
+-							0x0C, 3, 8, 24);
++							0xC0, 3, 8, 16);
+ 			if (ret < 0) {
+ 				dev_err(runtime->dev,
+-						"DEV0 TDM slot err:%d\n", ret);
++						"DEV1 TDM slot err:%d\n", ret);
+ 				return ret;
+ 			}
+ 		}
+@@ -309,24 +309,6 @@ static int kabylake_ssp_fixup(struct snd_soc_pcm_runtime *rtd,
+ 	 * The above 2 loops are mutually exclusive based on the stream direction,
+ 	 * thus rtd_dpcm variable will never be overwritten
+ 	 */
+-	/*
+-	 * Topology for kblda7219m98373 & kblmax98373 supports only S24_LE,
+-	 * where as kblda7219m98927 & kblmax98927 supports S16_LE by default.
+-	 * Skipping the port wise FE and BE configuration for kblda7219m98373 &
+-	 * kblmax98373 as the topology (FE & BE) supports S24_LE only.
+-	 */
+-
+-	if (!strcmp(rtd->card->name, "kblda7219m98373") ||
+-		!strcmp(rtd->card->name, "kblmax98373")) {
+-		/* The ADSP will convert the FE rate to 48k, stereo */
+-		rate->min = rate->max = 48000;
+-		chan->min = chan->max = DUAL_CHANNEL;
+-
+-		/* set SSP to 24 bit */
+-		snd_mask_none(fmt);
+-		snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S24_LE);
+-		return 0;
+-	}
+ 
+ 	/*
+ 	 * The ADSP will convert the FE rate to 48k, stereo, 24 bit
+@@ -477,31 +459,20 @@ static struct snd_pcm_hw_constraint_list constraints_channels_quad = {
+ static int kbl_fe_startup(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_soc_pcm_runtime *soc_rt = asoc_substream_to_rtd(substream);
+ 
+ 	/*
+ 	 * On this platform for PCM device we support,
+ 	 * 48Khz
+ 	 * stereo
++	 * 16 bit audio
+ 	 */
+ 
+ 	runtime->hw.channels_max = DUAL_CHANNEL;
+ 	snd_pcm_hw_constraint_list(runtime, 0, SNDRV_PCM_HW_PARAM_CHANNELS,
+ 					   &constraints_channels);
+-	/*
+-	 * Setup S24_LE (32 bit container and 24 bit valid data) for
+-	 * kblda7219m98373 & kblmax98373. For kblda7219m98927 &
+-	 * kblmax98927 keeping it as 16/16 due to topology FW dependency.
+-	 */
+-	if (!strcmp(soc_rt->card->name, "kblda7219m98373") ||
+-		!strcmp(soc_rt->card->name, "kblmax98373")) {
+-		runtime->hw.formats = SNDRV_PCM_FMTBIT_S24_LE;
+-		snd_pcm_hw_constraint_msbits(runtime, 0, 32, 24);
+-
+-	} else {
+-		runtime->hw.formats = SNDRV_PCM_FMTBIT_S16_LE;
+-		snd_pcm_hw_constraint_msbits(runtime, 0, 16, 16);
+-	}
++
++	runtime->hw.formats = SNDRV_PCM_FMTBIT_S16_LE;
++	snd_pcm_hw_constraint_msbits(runtime, 0, 16, 16);
+ 
+ 	snd_pcm_hw_constraint_list(runtime, 0,
+ 				SNDRV_PCM_HW_PARAM_RATE, &constraints_rates);
+@@ -534,23 +505,11 @@ static int kabylake_dmic_fixup(struct snd_soc_pcm_runtime *rtd,
+ static int kabylake_dmic_startup(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_soc_pcm_runtime *soc_rt = asoc_substream_to_rtd(substream);
+ 
+ 	runtime->hw.channels_min = runtime->hw.channels_max = QUAD_CHANNEL;
+ 	snd_pcm_hw_constraint_list(runtime, 0, SNDRV_PCM_HW_PARAM_CHANNELS,
+ 			&constraints_channels_quad);
+ 
+-	/*
+-	 * Topology for kblda7219m98373 & kblmax98373 supports only S24_LE.
+-	 * The DMIC also configured for S24_LE. Forcing the DMIC format to
+-	 * S24_LE due to the topology FW dependency.
+-	 */
+-	if (!strcmp(soc_rt->card->name, "kblda7219m98373") ||
+-		!strcmp(soc_rt->card->name, "kblmax98373")) {
+-		runtime->hw.formats = SNDRV_PCM_FMTBIT_S24_LE;
+-		snd_pcm_hw_constraint_msbits(runtime, 0, 32, 24);
+-	}
+-
+ 	return snd_pcm_hw_constraint_list(substream->runtime, 0,
+ 			SNDRV_PCM_HW_PARAM_RATE, &constraints_rates);
+ }
+diff --git a/sound/soc/intel/common/soc-acpi-intel-cml-match.c b/sound/soc/intel/common/soc-acpi-intel-cml-match.c
+index 26dde88bb2279..9b85811ffd515 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-cml-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-cml-match.c
+@@ -62,7 +62,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_cml_machines[] = {
+ 	},
+ 	{
+ 		.id = "DLGS7219",
+-		.drv_name = "cml_da7219_max98357a",
++		.drv_name = "cml_da7219_mx98357a",
+ 		.machine_quirk = snd_soc_acpi_codec_list,
+ 		.quirk_data = &max98390_spk_codecs,
+ 		.sof_fw_filename = "sof-cml.ri",
+diff --git a/sound/soc/intel/common/soc-acpi-intel-kbl-match.c b/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
+index 4ed1349affc4d..20f2132a9cd66 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
+@@ -87,7 +87,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_kbl_machines[] = {
+ 	},
+ 	{
+ 		.id = "DLGS7219",
+-		.drv_name = "kbl_da7219_max98357a",
++		.drv_name = "kbl_da7219_mx98357a",
+ 		.fw_filename = "intel/dsp_fw_kbl.bin",
+ 		.machine_quirk = snd_soc_acpi_codec_list,
+ 		.quirk_data = &kbl_7219_98357_codecs,
+diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
+index 0955cbb4e9187..73976c6dfbdc0 100644
+--- a/sound/soc/intel/skylake/skl-topology.c
++++ b/sound/soc/intel/skylake/skl-topology.c
+@@ -113,7 +113,7 @@ static int is_skl_dsp_widget_type(struct snd_soc_dapm_widget *w,
+ 
+ static void skl_dump_mconfig(struct skl_dev *skl, struct skl_module_cfg *mcfg)
+ {
+-	struct skl_module_iface *iface = &mcfg->module->formats[0];
++	struct skl_module_iface *iface = &mcfg->module->formats[mcfg->fmt_idx];
+ 
+ 	dev_dbg(skl->dev, "Dumping config\n");
+ 	dev_dbg(skl->dev, "Input Format:\n");
+@@ -195,8 +195,8 @@ static void skl_tplg_update_params_fixup(struct skl_module_cfg *m_cfg,
+ 	struct skl_module_fmt *in_fmt, *out_fmt;
+ 
+ 	/* Fixups will be applied to pin 0 only */
+-	in_fmt = &m_cfg->module->formats[0].inputs[0].fmt;
+-	out_fmt = &m_cfg->module->formats[0].outputs[0].fmt;
++	in_fmt = &m_cfg->module->formats[m_cfg->fmt_idx].inputs[0].fmt;
++	out_fmt = &m_cfg->module->formats[m_cfg->fmt_idx].outputs[0].fmt;
+ 
+ 	if (params->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ 		if (is_fe) {
+@@ -239,9 +239,9 @@ static void skl_tplg_update_buffer_size(struct skl_dev *skl,
+ 	/* Since fixups is applied to pin 0 only, ibs, obs needs
+ 	 * change for pin 0 only
+ 	 */
+-	res = &mcfg->module->resources[0];
+-	in_fmt = &mcfg->module->formats[0].inputs[0].fmt;
+-	out_fmt = &mcfg->module->formats[0].outputs[0].fmt;
++	res = &mcfg->module->resources[mcfg->res_idx];
++	in_fmt = &mcfg->module->formats[mcfg->fmt_idx].inputs[0].fmt;
++	out_fmt = &mcfg->module->formats[mcfg->fmt_idx].outputs[0].fmt;
+ 
+ 	if (mcfg->m_type == SKL_MODULE_TYPE_SRCINT)
+ 		multiplier = 5;
+@@ -1463,12 +1463,6 @@ static int skl_tplg_tlv_control_set(struct snd_kcontrol *kcontrol,
+ 	struct skl_dev *skl = get_skl_ctx(w->dapm->dev);
+ 
+ 	if (ac->params) {
+-		/*
+-		 * Widget data is expected to be stripped of T and L
+-		 */
+-		size -= 2 * sizeof(unsigned int);
+-		data += 2;
+-
+ 		if (size > ac->max)
+ 			return -EINVAL;
+ 		ac->size = size;
+@@ -1637,11 +1631,12 @@ int skl_tplg_update_pipe_params(struct device *dev,
+ 			struct skl_module_cfg *mconfig,
+ 			struct skl_pipe_params *params)
+ {
+-	struct skl_module_res *res = &mconfig->module->resources[0];
++	struct skl_module_res *res;
+ 	struct skl_dev *skl = get_skl_ctx(dev);
+ 	struct skl_module_fmt *format = NULL;
+ 	u8 cfg_idx = mconfig->pipe->cur_config_idx;
+ 
++	res = &mconfig->module->resources[mconfig->res_idx];
+ 	skl_tplg_fill_dma_id(mconfig, params);
+ 	mconfig->fmt_idx = mconfig->mod_cfg[cfg_idx].fmt_idx;
+ 	mconfig->res_idx = mconfig->mod_cfg[cfg_idx].res_idx;
+@@ -1650,9 +1645,9 @@ int skl_tplg_update_pipe_params(struct device *dev,
+ 		return 0;
+ 
+ 	if (params->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		format = &mconfig->module->formats[0].inputs[0].fmt;
++		format = &mconfig->module->formats[mconfig->fmt_idx].inputs[0].fmt;
+ 	else
+-		format = &mconfig->module->formats[0].outputs[0].fmt;
++		format = &mconfig->module->formats[mconfig->fmt_idx].outputs[0].fmt;
+ 
+ 	/* set the hw_params */
+ 	format->s_freq = params->s_freq;
+diff --git a/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c b/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
+index c4a598cbbdaa1..14e77df06b011 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
++++ b/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
+@@ -1119,25 +1119,26 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->regmap = syscon_node_to_regmap(dev->parent->of_node);
+ 	if (IS_ERR(afe->regmap)) {
+ 		dev_err(dev, "could not get regmap from parent\n");
+-		return PTR_ERR(afe->regmap);
++		ret = PTR_ERR(afe->regmap);
++		goto err_pm_disable;
+ 	}
+ 	ret = regmap_attach_dev(dev, afe->regmap, &mt8183_afe_regmap_config);
+ 	if (ret) {
+ 		dev_warn(dev, "regmap_attach_dev fail, ret %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	rstc = devm_reset_control_get(dev, "audiosys");
+ 	if (IS_ERR(rstc)) {
+ 		ret = PTR_ERR(rstc);
+ 		dev_err(dev, "could not get audiosys reset:%d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	ret = reset_control_reset(rstc);
+ 	if (ret) {
+ 		dev_err(dev, "failed to trigger audio reset:%d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* enable clock for regcache get default value from hw */
+@@ -1147,7 +1148,7 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	ret = regmap_reinit_cache(afe->regmap, &mt8183_afe_regmap_config);
+ 	if (ret) {
+ 		dev_err(dev, "regmap_reinit_cache fail, ret %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	pm_runtime_put_sync(&pdev->dev);
+@@ -1160,8 +1161,10 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->memif_size = MT8183_MEMIF_NUM;
+ 	afe->memif = devm_kcalloc(dev, afe->memif_size, sizeof(*afe->memif),
+ 				  GFP_KERNEL);
+-	if (!afe->memif)
+-		return -ENOMEM;
++	if (!afe->memif) {
++		ret = -ENOMEM;
++		goto err_pm_disable;
++	}
+ 
+ 	for (i = 0; i < afe->memif_size; i++) {
+ 		afe->memif[i].data = &memif_data[i];
+@@ -1178,22 +1181,26 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->irqs_size = MT8183_IRQ_NUM;
+ 	afe->irqs = devm_kcalloc(dev, afe->irqs_size, sizeof(*afe->irqs),
+ 				 GFP_KERNEL);
+-	if (!afe->irqs)
+-		return -ENOMEM;
++	if (!afe->irqs) {
++		ret = -ENOMEM;
++		goto err_pm_disable;
++	}
+ 
+ 	for (i = 0; i < afe->irqs_size; i++)
+ 		afe->irqs[i].irq_data = &irq_data[i];
+ 
+ 	/* request irq */
+ 	irq_id = platform_get_irq(pdev, 0);
+-	if (irq_id < 0)
+-		return irq_id;
++	if (irq_id < 0) {
++		ret = irq_id;
++		goto err_pm_disable;
++	}
+ 
+ 	ret = devm_request_irq(dev, irq_id, mt8183_afe_irq_handler,
+ 			       IRQF_TRIGGER_NONE, "asys-isr", (void *)afe);
+ 	if (ret) {
+ 		dev_err(dev, "could not request_irq for asys-isr\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* init sub_dais */
+@@ -1204,7 +1211,7 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_warn(afe->dev, "dai register i %d fail, ret %d\n",
+ 				 i, ret);
+-			return ret;
++			goto err_pm_disable;
+ 		}
+ 	}
+ 
+@@ -1213,7 +1220,7 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_warn(afe->dev, "mtk_afe_combine_sub_dai fail, ret %d\n",
+ 			 ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	afe->mtk_afe_hardware = &mt8183_afe_hardware;
+@@ -1229,7 +1236,7 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 					      NULL, 0);
+ 	if (ret) {
+ 		dev_warn(dev, "err_platform\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	ret = devm_snd_soc_register_component(afe->dev,
+@@ -1238,10 +1245,14 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 					      afe->num_dai_drivers);
+ 	if (ret) {
+ 		dev_warn(dev, "err_dai_component\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	return ret;
++
++err_pm_disable:
++	pm_runtime_disable(&pdev->dev);
++	return ret;
+ }
+ 
+ static int mt8183_afe_pcm_dev_remove(struct platform_device *pdev)
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index 556216dc97030..762bf87c26a3e 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -2450,7 +2450,7 @@ union bpf_attr {
+  * long bpf_sk_select_reuseport(struct sk_reuseport_md *reuse, struct bpf_map *map, void *key, u64 flags)
+  *	Description
+  *		Select a **SO_REUSEPORT** socket from a
+- *		**BPF_MAP_TYPE_REUSEPORT_ARRAY** *map*.
++ *		**BPF_MAP_TYPE_REUSEPORT_SOCKARRAY** *map*.
+  *		It checks the selected socket is matching the incoming
+  *		request in the socket buffer.
+  *	Return
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index 310f647c2d5b6..154b75fc1373e 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -4,8 +4,9 @@
+ RM ?= rm
+ srctree = $(abs_srctree)
+ 
++VERSION_SCRIPT := libbpf.map
+ LIBBPF_VERSION := $(shell \
+-	grep -oE '^LIBBPF_([0-9.]+)' libbpf.map | \
++	grep -oE '^LIBBPF_([0-9.]+)' $(VERSION_SCRIPT) | \
+ 	sort -rV | head -n1 | cut -d'_' -f2)
+ LIBBPF_MAJOR_VERSION := $(firstword $(subst ., ,$(LIBBPF_VERSION)))
+ 
+@@ -131,7 +132,6 @@ SHARED_OBJDIR	:= $(OUTPUT)sharedobjs/
+ STATIC_OBJDIR	:= $(OUTPUT)staticobjs/
+ BPF_IN_SHARED	:= $(SHARED_OBJDIR)libbpf-in.o
+ BPF_IN_STATIC	:= $(STATIC_OBJDIR)libbpf-in.o
+-VERSION_SCRIPT	:= libbpf.map
+ BPF_HELPER_DEFS	:= $(OUTPUT)bpf_helper_defs.h
+ 
+ LIB_TARGET	:= $(addprefix $(OUTPUT),$(LIB_TARGET))
+@@ -184,10 +184,10 @@ $(BPF_HELPER_DEFS): $(srctree)/tools/include/uapi/linux/bpf.h
+ 
+ $(OUTPUT)libbpf.so: $(OUTPUT)libbpf.so.$(LIBBPF_VERSION)
+ 
+-$(OUTPUT)libbpf.so.$(LIBBPF_VERSION): $(BPF_IN_SHARED)
++$(OUTPUT)libbpf.so.$(LIBBPF_VERSION): $(BPF_IN_SHARED) $(VERSION_SCRIPT)
+ 	$(QUIET_LINK)$(CC) $(LDFLAGS) \
+ 		--shared -Wl,-soname,libbpf.so.$(LIBBPF_MAJOR_VERSION) \
+-		-Wl,--version-script=$(VERSION_SCRIPT) $^ -lelf -lz -o $@
++		-Wl,--version-script=$(VERSION_SCRIPT) $< -lelf -lz -o $@
+ 	@ln -sf $(@F) $(OUTPUT)libbpf.so
+ 	@ln -sf $(@F) $(OUTPUT)libbpf.so.$(LIBBPF_MAJOR_VERSION)
+ 
+@@ -202,7 +202,7 @@ $(OUTPUT)libbpf.pc:
+ 
+ check: check_abi
+ 
+-check_abi: $(OUTPUT)libbpf.so
++check_abi: $(OUTPUT)libbpf.so $(VERSION_SCRIPT)
+ 	@if [ "$(GLOBAL_SYM_COUNT)" != "$(VERSIONED_SYM_COUNT)" ]; then	 \
+ 		echo "Warning: Num of global symbols in $(BPF_IN_SHARED)"	 \
+ 		     "($(GLOBAL_SYM_COUNT)) does NOT match with num of"	 \
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 95eef7ebdac5c..28923b776cdc8 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -4123,6 +4123,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map)
+ {
+ 	struct bpf_create_map_attr create_attr;
+ 	struct bpf_map_def *def = &map->def;
++	int err = 0;
+ 
+ 	memset(&create_attr, 0, sizeof(create_attr));
+ 
+@@ -4165,8 +4166,6 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map)
+ 
+ 	if (bpf_map_type__is_map_in_map(def->type)) {
+ 		if (map->inner_map) {
+-			int err;
+-
+ 			err = bpf_object__create_map(obj, map->inner_map);
+ 			if (err) {
+ 				pr_warn("map '%s': failed to create inner map: %d\n",
+@@ -4183,8 +4182,8 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map)
+ 	if (map->fd < 0 && (create_attr.btf_key_type_id ||
+ 			    create_attr.btf_value_type_id)) {
+ 		char *cp, errmsg[STRERR_BUFSIZE];
+-		int err = -errno;
+ 
++		err = -errno;
+ 		cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
+ 		pr_warn("Error in bpf_create_map_xattr(%s):%s(%d). Retrying without BTF.\n",
+ 			map->name, cp, err);
+@@ -4196,15 +4195,14 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map)
+ 		map->fd = bpf_create_map_xattr(&create_attr);
+ 	}
+ 
+-	if (map->fd < 0)
+-		return -errno;
++	err = map->fd < 0 ? -errno : 0;
+ 
+ 	if (bpf_map_type__is_map_in_map(def->type) && map->inner_map) {
+ 		bpf_map__destroy(map->inner_map);
+ 		zfree(&map->inner_map);
+ 	}
+ 
+-	return 0;
++	return err;
+ }
+ 
+ static int init_map_slots(struct bpf_map *map)
+@@ -6907,8 +6905,10 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
+ 	kconfig = OPTS_GET(opts, kconfig, NULL);
+ 	if (kconfig) {
+ 		obj->kconfig = strdup(kconfig);
+-		if (!obj->kconfig)
+-			return ERR_PTR(-ENOMEM);
++		if (!obj->kconfig) {
++			err = -ENOMEM;
++			goto out;
++		}
+ 	}
+ 
+ 	err = bpf_object__elf_init(obj);
+diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c b/tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c
+index 54380c5e10692..aa96b604b2b31 100644
+--- a/tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c
++++ b/tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c
+@@ -122,7 +122,7 @@ static int dump_tcp_sock(struct seq_file *seq, struct tcp_sock *tp,
+ 	}
+ 
+ 	BPF_SEQ_PRINTF(seq, "%4d: %08X:%04X %08X:%04X ",
+-		       seq_num, src, srcp, destp, destp);
++		       seq_num, src, srcp, dest, destp);
+ 	BPF_SEQ_PRINTF(seq, "%02X %08X:%08X %02X:%08lX %08X %5u %8d %lu %d ",
+ 		       state,
+ 		       tp->write_seq - tp->snd_una, rx_queue,
+diff --git a/tools/testing/selftests/bpf/progs/test_core_autosize.c b/tools/testing/selftests/bpf/progs/test_core_autosize.c
+index 44f5aa2e8956f..9a7829c5e4a72 100644
+--- a/tools/testing/selftests/bpf/progs/test_core_autosize.c
++++ b/tools/testing/selftests/bpf/progs/test_core_autosize.c
+@@ -125,6 +125,16 @@ int handle_downsize(void *ctx)
+ 	return 0;
+ }
+ 
++#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
++#define bpf_core_read_int bpf_core_read
++#else
++#define bpf_core_read_int(dst, sz, src) ({ \
++	/* Prevent "subtraction from stack pointer prohibited" */ \
++	volatile long __off = sizeof(*dst) - (sz); \
++	bpf_core_read((char *)(dst) + __off, sz, src); \
++})
++#endif
++
+ SEC("raw_tp/sys_enter")
+ int handle_probed(void *ctx)
+ {
+@@ -132,23 +142,23 @@ int handle_probed(void *ctx)
+ 	__u64 tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->ptr), &in->ptr);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->ptr), &in->ptr);
+ 	ptr_probed = tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->val1), &in->val1);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->val1), &in->val1);
+ 	val1_probed = tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->val2), &in->val2);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->val2), &in->val2);
+ 	val2_probed = tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->val3), &in->val3);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->val3), &in->val3);
+ 	val3_probed = tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->val4), &in->val4);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->val4), &in->val4);
+ 	val4_probed = tmp;
+ 
+ 	return 0;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-16 11:20 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-09-16 11:20 UTC (permalink / raw
  To: gentoo-commits

commit:     b09a0d450ac5b3aea6dcb5d071735e66607b0c01
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 16 11:20:12 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 16 11:20:12 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b09a0d45

Linux patch 5.10.66

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |  4 ++
 1065_linux-5.10.66.patch | 97 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 101 insertions(+)

diff --git a/0000_README b/0000_README
index da11f46..1521f72 100644
--- a/0000_README
+++ b/0000_README
@@ -303,6 +303,10 @@ Patch:  1064_linux-5.10.65.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.65
 
+Patch:  1065_linux-5.10.66.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.66
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1065_linux-5.10.66.patch b/1065_linux-5.10.66.patch
new file mode 100644
index 0000000..1a4fe67
--- /dev/null
+++ b/1065_linux-5.10.66.patch
@@ -0,0 +1,97 @@
+diff --git a/Makefile b/Makefile
+index 91eb017f5296d..8b1f1e7517b94 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 65
++SUBLEVEL = 66
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 98274ba0701d6..59c452fff8352 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1759,17 +1759,7 @@ static int nbd_dev_add(int index)
+ 	refcount_set(&nbd->refs, 1);
+ 	INIT_LIST_HEAD(&nbd->list);
+ 	disk->major = NBD_MAJOR;
+-
+-	/* Too big first_minor can cause duplicate creation of
+-	 * sysfs files/links, since first_minor will be truncated to
+-	 * byte in __device_add_disk().
+-	 */
+ 	disk->first_minor = index << part_shift;
+-	if (disk->first_minor > 0xff) {
+-		err = -EINVAL;
+-		goto out_free_idr;
+-	}
+-
+ 	disk->fops = &nbd_fops;
+ 	disk->private_data = nbd;
+ 	sprintf(disk->disk_name, "nbd%d", index);
+diff --git a/include/linux/time64.h b/include/linux/time64.h
+index 81b9686a20799..5117cb5b56561 100644
+--- a/include/linux/time64.h
++++ b/include/linux/time64.h
+@@ -25,9 +25,7 @@ struct itimerspec64 {
+ #define TIME64_MIN			(-TIME64_MAX - 1)
+ 
+ #define KTIME_MAX			((s64)~((u64)1 << 63))
+-#define KTIME_MIN			(-KTIME_MAX - 1)
+ #define KTIME_SEC_MAX			(KTIME_MAX / NSEC_PER_SEC)
+-#define KTIME_SEC_MIN			(KTIME_MIN / NSEC_PER_SEC)
+ 
+ /*
+  * Limits for settimeofday():
+@@ -126,13 +124,10 @@ static inline bool timespec64_valid_settod(const struct timespec64 *ts)
+  */
+ static inline s64 timespec64_to_ns(const struct timespec64 *ts)
+ {
+-	/* Prevent multiplication overflow / underflow */
+-	if (ts->tv_sec >= KTIME_SEC_MAX)
++	/* Prevent multiplication overflow */
++	if ((unsigned long long)ts->tv_sec >= KTIME_SEC_MAX)
+ 		return KTIME_MAX;
+ 
+-	if (ts->tv_sec <= KTIME_SEC_MIN)
+-		return KTIME_MIN;
+-
+ 	return ((s64) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
+ }
+ 
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index d3d42b7637a19..08c033b802569 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -1346,6 +1346,8 @@ void set_process_cpu_timer(struct task_struct *tsk, unsigned int clkid,
+ 			}
+ 		}
+ 
++		if (!*newval)
++			return;
+ 		*newval += now;
+ 	}
+ 
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index a9097fb7eb825..2ad66f64879f1 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1732,14 +1732,6 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 	hci_request_cancel_all(hdev);
+ 	hci_req_sync_lock(hdev);
+ 
+-	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
+-	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
+-	    test_bit(HCI_UP, &hdev->flags)) {
+-		/* Execute vendor specific shutdown routine */
+-		if (hdev->shutdown)
+-			hdev->shutdown(hdev);
+-	}
+-
+ 	if (!test_and_clear_bit(HCI_UP, &hdev->flags)) {
+ 		cancel_delayed_work_sync(&hdev->cmd_timer);
+ 		hci_req_sync_unlock(hdev);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-17 12:46 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-09-17 12:46 UTC (permalink / raw
  To: gentoo-commits

commit:     7c4e6c87971a923537b70faf3910986e4fe33d43
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 17 12:45:52 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep 17 12:45:52 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7c4e6c87

Update CPU Opt Patch 2021-09-14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5010_enable-cpu-optimizations-universal.patch | 158 +++++++++++++-------------
 1 file changed, 76 insertions(+), 82 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index e37528f..b9e8ebb 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,7 +1,7 @@
-From 4af44fbc97bc51eb742f0d6555bde23cf580d4e3 Mon Sep 17 00:00:00 2001
+From d31d2b0747ab55e65c2366d51149a0ec9896155e Mon Sep 17 00:00:00 2001
 From: graysky <graysky@archlinux.us>
-Date: Sun, 6 Jun 2021 09:41:36 -0400
-Subject: [PATCH] more uarches for kernel 5.8+
+Date: Tue, 14 Sep 2021 15:35:34 -0400
+Subject: [PATCH] more uarches for kernel 5.15+
 MIME-Version: 1.0
 Content-Type: text/plain; charset=UTF-8
 Content-Transfer-Encoding: 8bit
@@ -86,7 +86,7 @@ See the following experimental evidence supporting this statement:
 https://github.com/graysky2/kernel_gcc_patch
 
 REQUIREMENTS
-linux version >=5.8
+linux version >=5.15
 gcc version >=9.0 or clang version >=9.0
 
 ACKNOWLEDGMENTS
@@ -102,17 +102,17 @@ REFERENCES
 Signed-off-by: graysky <graysky@archlinux.us>
 ---
  arch/x86/Kconfig.cpu            | 332 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile               |  47 ++++-
+ arch/x86/Makefile               |  40 +++-
  arch/x86/include/asm/vermagic.h |  66 +++++++
- 3 files changed, 428 insertions(+), 17 deletions(-)
+ 3 files changed, 424 insertions(+), 14 deletions(-)
 
 diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 814fe0d349b0..8acf6519d279 100644
+index 814fe0d349b0..61f0d7757499 100644
 --- a/arch/x86/Kconfig.cpu
 +++ b/arch/x86/Kconfig.cpu
 @@ -157,7 +157,7 @@ config MPENTIUM4
- 
- 
+
+
  config MK6
 -	bool "K6/K6-II/K6-III"
 +	bool "AMD K6/K6-II/K6-III"
@@ -121,7 +121,7 @@ index 814fe0d349b0..8acf6519d279 100644
  	  Select this for an AMD K6-family processor.  Enables use of
 @@ -165,7 +165,7 @@ config MK6
  	  flags to GCC.
- 
+
  config MK7
 -	bool "Athlon/Duron/K7"
 +	bool "AMD Athlon/Duron/K7"
@@ -130,7 +130,7 @@ index 814fe0d349b0..8acf6519d279 100644
  	  Select this for an AMD Athlon K7-family processor.  Enables use of
 @@ -173,12 +173,98 @@ config MK7
  	  flags to GCC.
- 
+
  config MK8
 -	bool "Opteron/Athlon64/Hammer/K8"
 +	bool "AMD Opteron/Athlon64/Hammer/K8"
@@ -138,7 +138,7 @@ index 814fe0d349b0..8acf6519d279 100644
  	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
  	  Enables use of some extended instructions, and passes appropriate
  	  optimization flags to GCC.
- 
+
 +config MK8SSE3
 +	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
 +	help
@@ -230,17 +230,17 @@ index 814fe0d349b0..8acf6519d279 100644
  	depends on X86_32
 @@ -270,7 +356,7 @@ config MPSC
  	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
- 
+
  config MCORE2
 -	bool "Core 2/newer Xeon"
 +	bool "Intel Core 2"
  	help
- 
+
  	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
 @@ -278,6 +364,8 @@ config MCORE2
  	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
  	  (not a typo)
- 
+
 +	  Enables -march=core2
 +
  config MATOM
@@ -249,7 +249,7 @@ index 814fe0d349b0..8acf6519d279 100644
 @@ -287,6 +375,182 @@ config MATOM
  	  accordingly optimized code. Use a recent GCC with specific Atom
  	  support in order to fully benefit from selecting this option.
- 
+
 +config MNEHALEM
 +	bool "Intel Nehalem"
 +	select X86_P6_NOP
@@ -432,7 +432,7 @@ index 814fe0d349b0..8acf6519d279 100644
 @@ -294,6 +558,50 @@ config GENERIC_CPU
  	  Generic x86-64 CPU.
  	  Run equally well on all x86-64 CPUs.
- 
+
 +config GENERIC_CPU2
 +	bool "Generic-x86-64-v2"
 +	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
@@ -478,7 +478,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +	  Enables -march=native
 +
  endchoice
- 
+
  config X86_GENERIC
 @@ -318,7 +626,7 @@ config X86_INTERNODE_CACHE_SHIFT
  config X86_L1_CACHE_SHIFT
@@ -488,19 +488,19 @@ index 814fe0d349b0..8acf6519d279 100644
 +	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 || GENERIC_CPU3 || GENERIC_CPU4
  	default "4" if MELAN || M486SX || M486 || MGEODEGX1
  	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
- 
+
 @@ -336,11 +644,11 @@ config X86_ALIGNMENT_16
- 
+
  config X86_INTEL_USERCOPY
  	def_bool y
 -	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
 +	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL
- 
+
  config X86_USE_PPRO_CHECKSUM
  	def_bool y
 -	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
 +	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
- 
+
  config X86_USE_3DNOW
  	def_bool y
 @@ -360,26 +668,26 @@ config X86_USE_3DNOW
@@ -509,24 +509,24 @@ index 814fe0d349b0..8acf6519d279 100644
  	depends on X86_64
 -	depends on (MCORE2 || MPENTIUM4 || MPSC)
 +	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL)
- 
+
  config X86_TSC
  	def_bool y
 -	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
 +	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
- 
+
  config X86_CMPXCHG64
  	def_bool y
 -	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
 +	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
- 
+
  # this should be set for all -march=.. options where the compiler
  # generates cmov.
  config X86_CMOV
  	def_bool y
 -	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
 +	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
- 
+
  config X86_MINIMUM_CPU_FAMILY
  	int
  	default "64" if X86_64
@@ -534,65 +534,58 @@ index 814fe0d349b0..8acf6519d279 100644
 +	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
  	default "5" if X86_32 && X86_CMPXCHG64
  	default "4"
- 
+
 diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 78faf9c7e3ae..ee0cd507af8b 100644
+index 7488cfbbd2f6..01876b6fb8e1 100644
 --- a/arch/x86/Makefile
 +++ b/arch/x86/Makefile
-@@ -114,11 +114,48 @@ else
+@@ -119,8 +119,44 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
-         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
--
--        cflags-$(CONFIG_MCORE2) += \
--                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
--	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
--		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3)
-+        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
-+        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
-+        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
-+        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
-+        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
-+        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
-+        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
-+        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
-+        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
-+        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
-+        cflags-$(CONFIG_MZEN3) += $(call cc-option,-march=znver3)
-+
-+        cflags-$(CONFIG_MNATIVE_INTEL) += $(call cc-option,-march=native)
-+        cflags-$(CONFIG_MNATIVE_AMD) += $(call cc-option,-march=native)
-+        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell)
-+        cflags-$(CONFIG_MCORE2) += $(call cc-option,-march=core2)
-+        cflags-$(CONFIG_MNEHALEM) += $(call cc-option,-march=nehalem)
-+        cflags-$(CONFIG_MWESTMERE) += $(call cc-option,-march=westmere)
-+        cflags-$(CONFIG_MSILVERMONT) += $(call cc-option,-march=silvermont)
-+        cflags-$(CONFIG_MGOLDMONT) += $(call cc-option,-march=goldmont)
-+        cflags-$(CONFIG_MGOLDMONTPLUS) += $(call cc-option,-march=goldmont-plus)
-+        cflags-$(CONFIG_MSANDYBRIDGE) += $(call cc-option,-march=sandybridge)
-+        cflags-$(CONFIG_MIVYBRIDGE) += $(call cc-option,-march=ivybridge)
-+        cflags-$(CONFIG_MHASWELL) += $(call cc-option,-march=haswell)
-+        cflags-$(CONFIG_MBROADWELL) += $(call cc-option,-march=broadwell)
-+        cflags-$(CONFIG_MSKYLAKE) += $(call cc-option,-march=skylake)
-+        cflags-$(CONFIG_MSKYLAKEX) += $(call cc-option,-march=skylake-avx512)
-+        cflags-$(CONFIG_MCANNONLAKE) += $(call cc-option,-march=cannonlake)
-+        cflags-$(CONFIG_MICELAKE) += $(call cc-option,-march=icelake-client)
-+        cflags-$(CONFIG_MCASCADELAKE) += $(call cc-option,-march=cascadelake)
-+        cflags-$(CONFIG_MCOOPERLAKE) += $(call cc-option,-march=cooperlake)
-+        cflags-$(CONFIG_MTIGERLAKE) += $(call cc-option,-march=tigerlake)
-+        cflags-$(CONFIG_MSAPPHIRERAPIDS) += $(call cc-option,-march=sapphirerapids)
-+        cflags-$(CONFIG_MROCKETLAKE) += $(call cc-option,-march=rocketlake)
-+        cflags-$(CONFIG_MALDERLAKE) += $(call cc-option,-march=alderlake)
-+        cflags-$(CONFIG_GENERIC_CPU2) += $(call cc-option,-march=x86-64-v2)
-+        cflags-$(CONFIG_GENERIC_CPU3) += $(call cc-option,-march=x86-64-v3)
-+        cflags-$(CONFIG_GENERIC_CPU4) += $(call cc-option,-march=x86-64-v4)
-         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         cflags-$(CONFIG_MK8)		+= -march=k8
+         cflags-$(CONFIG_MPSC)		+= -march=nocona
+-        cflags-$(CONFIG_MCORE2)		+= -march=core2
+-        cflags-$(CONFIG_MATOM)		+= -march=atom
++        cflags-$(CONFIG_MK8SSE3)	+= -march=k8-sse3
++        cflags-$(CONFIG_MK10) 		+= -march=amdfam10
++        cflags-$(CONFIG_MBARCELONA) 	+= -march=barcelona
++        cflags-$(CONFIG_MBOBCAT) 	+= -march=btver1
++        cflags-$(CONFIG_MJAGUAR) 	+= -march=btver2
++        cflags-$(CONFIG_MBULLDOZER) 	+= -march=bdver1
++        cflags-$(CONFIG_MPILEDRIVER)	+= -march=bdver2
++        cflags-$(CONFIG_MSTEAMROLLER) 	+= -march=bdver3
++        cflags-$(CONFIG_MEXCAVATOR) 	+= -march=bdver4
++        cflags-$(CONFIG_MZEN) 		+= -march=znver1
++        cflags-$(CONFIG_MZEN2) 	+= -march=znver2
++        cflags-$(CONFIG_MZEN3) 	+= -march=znver3
++        cflags-$(CONFIG_MNATIVE_INTEL) += -march=native
++        cflags-$(CONFIG_MNATIVE_AMD) 	+= -march=native
++        cflags-$(CONFIG_MATOM) 	+= -march=bonnell
++        cflags-$(CONFIG_MCORE2) 	+= -march=core2
++        cflags-$(CONFIG_MNEHALEM) 	+= -march=nehalem
++        cflags-$(CONFIG_MWESTMERE) 	+= -march=westmere
++        cflags-$(CONFIG_MSILVERMONT) 	+= -march=silvermont
++        cflags-$(CONFIG_MGOLDMONT) 	+= -march=goldmont
++        cflags-$(CONFIG_MGOLDMONTPLUS) += -march=goldmont-plus
++        cflags-$(CONFIG_MSANDYBRIDGE) 	+= -march=sandybridge
++        cflags-$(CONFIG_MIVYBRIDGE) 	+= -march=ivybridge
++        cflags-$(CONFIG_MHASWELL) 	+= -march=haswell
++        cflags-$(CONFIG_MBROADWELL) 	+= -march=broadwell
++        cflags-$(CONFIG_MSKYLAKE) 	+= -march=skylake
++        cflags-$(CONFIG_MSKYLAKEX) 	+= -march=skylake-avx512
++        cflags-$(CONFIG_MCANNONLAKE) 	+= -march=cannonlake
++        cflags-$(CONFIG_MICELAKE) 	+= -march=icelake-client
++        cflags-$(CONFIG_MCASCADELAKE) 	+= -march=cascadelake
++        cflags-$(CONFIG_MCOOPERLAKE) 	+= -march=cooperlake
++        cflags-$(CONFIG_MTIGERLAKE) 	+= -march=tigerlake
++        cflags-$(CONFIG_MSAPPHIRERAPIDS) += -march=sapphirerapids
++        cflags-$(CONFIG_MROCKETLAKE) 	+= -march=rocketlake
++        cflags-$(CONFIG_MALDERLAKE) 	+= -march=alderlake
++        cflags-$(CONFIG_GENERIC_CPU2) 	+= -march=x86-64-v2
++        cflags-$(CONFIG_GENERIC_CPU3) 	+= -march=x86-64-v3
++        cflags-$(CONFIG_GENERIC_CPU4) 	+= -march=x86-64-v4
+         cflags-$(CONFIG_GENERIC_CPU)	+= -mtune=generic
          KBUILD_CFLAGS += $(cflags-y)
- 
+
 diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
 index 75884d2cdec3..4e6a08d4c7e5 100644
 --- a/arch/x86/include/asm/vermagic.h
@@ -677,6 +670,7 @@ index 75884d2cdec3..4e6a08d4c7e5 100644
  #elif defined CONFIG_MELAN
  #define MODULE_PROC_FAMILY "ELAN "
  #elif defined CONFIG_MCRUSOE
--- 
-2.31.1
+--
+2.33.0
+
 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-17 12:50 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-09-17 12:50 UTC (permalink / raw
  To: gentoo-commits

commit:     447bc0d85911f01ff0d6c1620067092cc2d6dc2e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 17 12:49:56 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep 17 12:49:56 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=447bc0d8

Add correct CPU Opt Patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5010_enable-cpu-optimizations-universal.patch | 158 +++++++++++++-------------
 1 file changed, 82 insertions(+), 76 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index b9e8ebb..d437e1a 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,7 +1,7 @@
-From d31d2b0747ab55e65c2366d51149a0ec9896155e Mon Sep 17 00:00:00 2001
+From 4af44fbc97bc51eb742f0d6555bde23cf580d4e3 Mon Sep 17 00:00:00 2001
 From: graysky <graysky@archlinux.us>
-Date: Tue, 14 Sep 2021 15:35:34 -0400
-Subject: [PATCH] more uarches for kernel 5.15+
+Date: Sun, 6 Jun 2021 09:41:36 -0400
+Subject: [PATCH] more uarches for kernel 5.8-5.14
 MIME-Version: 1.0
 Content-Type: text/plain; charset=UTF-8
 Content-Transfer-Encoding: 8bit
@@ -86,7 +86,7 @@ See the following experimental evidence supporting this statement:
 https://github.com/graysky2/kernel_gcc_patch
 
 REQUIREMENTS
-linux version >=5.15
+linux version 5.8-5.14
 gcc version >=9.0 or clang version >=9.0
 
 ACKNOWLEDGMENTS
@@ -102,17 +102,17 @@ REFERENCES
 Signed-off-by: graysky <graysky@archlinux.us>
 ---
  arch/x86/Kconfig.cpu            | 332 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile               |  40 +++-
+ arch/x86/Makefile               |  47 ++++-
  arch/x86/include/asm/vermagic.h |  66 +++++++
- 3 files changed, 424 insertions(+), 14 deletions(-)
+ 3 files changed, 428 insertions(+), 17 deletions(-)
 
 diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 814fe0d349b0..61f0d7757499 100644
+index 814fe0d349b0..8acf6519d279 100644
 --- a/arch/x86/Kconfig.cpu
 +++ b/arch/x86/Kconfig.cpu
 @@ -157,7 +157,7 @@ config MPENTIUM4
-
-
+ 
+ 
  config MK6
 -	bool "K6/K6-II/K6-III"
 +	bool "AMD K6/K6-II/K6-III"
@@ -121,7 +121,7 @@ index 814fe0d349b0..61f0d7757499 100644
  	  Select this for an AMD K6-family processor.  Enables use of
 @@ -165,7 +165,7 @@ config MK6
  	  flags to GCC.
-
+ 
  config MK7
 -	bool "Athlon/Duron/K7"
 +	bool "AMD Athlon/Duron/K7"
@@ -130,7 +130,7 @@ index 814fe0d349b0..61f0d7757499 100644
  	  Select this for an AMD Athlon K7-family processor.  Enables use of
 @@ -173,12 +173,98 @@ config MK7
  	  flags to GCC.
-
+ 
  config MK8
 -	bool "Opteron/Athlon64/Hammer/K8"
 +	bool "AMD Opteron/Athlon64/Hammer/K8"
@@ -138,7 +138,7 @@ index 814fe0d349b0..61f0d7757499 100644
  	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
  	  Enables use of some extended instructions, and passes appropriate
  	  optimization flags to GCC.
-
+ 
 +config MK8SSE3
 +	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
 +	help
@@ -230,17 +230,17 @@ index 814fe0d349b0..61f0d7757499 100644
  	depends on X86_32
 @@ -270,7 +356,7 @@ config MPSC
  	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
-
+ 
  config MCORE2
 -	bool "Core 2/newer Xeon"
 +	bool "Intel Core 2"
  	help
-
+ 
  	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
 @@ -278,6 +364,8 @@ config MCORE2
  	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
  	  (not a typo)
-
+ 
 +	  Enables -march=core2
 +
  config MATOM
@@ -249,7 +249,7 @@ index 814fe0d349b0..61f0d7757499 100644
 @@ -287,6 +375,182 @@ config MATOM
  	  accordingly optimized code. Use a recent GCC with specific Atom
  	  support in order to fully benefit from selecting this option.
-
+ 
 +config MNEHALEM
 +	bool "Intel Nehalem"
 +	select X86_P6_NOP
@@ -432,7 +432,7 @@ index 814fe0d349b0..61f0d7757499 100644
 @@ -294,6 +558,50 @@ config GENERIC_CPU
  	  Generic x86-64 CPU.
  	  Run equally well on all x86-64 CPUs.
-
+ 
 +config GENERIC_CPU2
 +	bool "Generic-x86-64-v2"
 +	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
@@ -478,7 +478,7 @@ index 814fe0d349b0..61f0d7757499 100644
 +	  Enables -march=native
 +
  endchoice
-
+ 
  config X86_GENERIC
 @@ -318,7 +626,7 @@ config X86_INTERNODE_CACHE_SHIFT
  config X86_L1_CACHE_SHIFT
@@ -488,19 +488,19 @@ index 814fe0d349b0..61f0d7757499 100644
 +	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 || GENERIC_CPU3 || GENERIC_CPU4
  	default "4" if MELAN || M486SX || M486 || MGEODEGX1
  	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-
+ 
 @@ -336,11 +644,11 @@ config X86_ALIGNMENT_16
-
+ 
  config X86_INTEL_USERCOPY
  	def_bool y
 -	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
 +	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL
-
+ 
  config X86_USE_PPRO_CHECKSUM
  	def_bool y
 -	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
 +	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
-
+ 
  config X86_USE_3DNOW
  	def_bool y
 @@ -360,26 +668,26 @@ config X86_USE_3DNOW
@@ -509,24 +509,24 @@ index 814fe0d349b0..61f0d7757499 100644
  	depends on X86_64
 -	depends on (MCORE2 || MPENTIUM4 || MPSC)
 +	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL)
-
+ 
  config X86_TSC
  	def_bool y
 -	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
 +	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
-
+ 
  config X86_CMPXCHG64
  	def_bool y
 -	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
 +	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
-
+ 
  # this should be set for all -march=.. options where the compiler
  # generates cmov.
  config X86_CMOV
  	def_bool y
 -	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
 +	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
-
+ 
  config X86_MINIMUM_CPU_FAMILY
  	int
  	default "64" if X86_64
@@ -534,58 +534,65 @@ index 814fe0d349b0..61f0d7757499 100644
 +	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
  	default "5" if X86_32 && X86_CMPXCHG64
  	default "4"
-
+ 
 diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 7488cfbbd2f6..01876b6fb8e1 100644
+index 78faf9c7e3ae..ee0cd507af8b 100644
 --- a/arch/x86/Makefile
 +++ b/arch/x86/Makefile
-@@ -119,8 +119,44 @@ else
+@@ -114,11 +114,48 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-         cflags-$(CONFIG_MK8)		+= -march=k8
-         cflags-$(CONFIG_MPSC)		+= -march=nocona
--        cflags-$(CONFIG_MCORE2)		+= -march=core2
--        cflags-$(CONFIG_MATOM)		+= -march=atom
-+        cflags-$(CONFIG_MK8SSE3)	+= -march=k8-sse3
-+        cflags-$(CONFIG_MK10) 		+= -march=amdfam10
-+        cflags-$(CONFIG_MBARCELONA) 	+= -march=barcelona
-+        cflags-$(CONFIG_MBOBCAT) 	+= -march=btver1
-+        cflags-$(CONFIG_MJAGUAR) 	+= -march=btver2
-+        cflags-$(CONFIG_MBULLDOZER) 	+= -march=bdver1
-+        cflags-$(CONFIG_MPILEDRIVER)	+= -march=bdver2
-+        cflags-$(CONFIG_MSTEAMROLLER) 	+= -march=bdver3
-+        cflags-$(CONFIG_MEXCAVATOR) 	+= -march=bdver4
-+        cflags-$(CONFIG_MZEN) 		+= -march=znver1
-+        cflags-$(CONFIG_MZEN2) 	+= -march=znver2
-+        cflags-$(CONFIG_MZEN3) 	+= -march=znver3
-+        cflags-$(CONFIG_MNATIVE_INTEL) += -march=native
-+        cflags-$(CONFIG_MNATIVE_AMD) 	+= -march=native
-+        cflags-$(CONFIG_MATOM) 	+= -march=bonnell
-+        cflags-$(CONFIG_MCORE2) 	+= -march=core2
-+        cflags-$(CONFIG_MNEHALEM) 	+= -march=nehalem
-+        cflags-$(CONFIG_MWESTMERE) 	+= -march=westmere
-+        cflags-$(CONFIG_MSILVERMONT) 	+= -march=silvermont
-+        cflags-$(CONFIG_MGOLDMONT) 	+= -march=goldmont
-+        cflags-$(CONFIG_MGOLDMONTPLUS) += -march=goldmont-plus
-+        cflags-$(CONFIG_MSANDYBRIDGE) 	+= -march=sandybridge
-+        cflags-$(CONFIG_MIVYBRIDGE) 	+= -march=ivybridge
-+        cflags-$(CONFIG_MHASWELL) 	+= -march=haswell
-+        cflags-$(CONFIG_MBROADWELL) 	+= -march=broadwell
-+        cflags-$(CONFIG_MSKYLAKE) 	+= -march=skylake
-+        cflags-$(CONFIG_MSKYLAKEX) 	+= -march=skylake-avx512
-+        cflags-$(CONFIG_MCANNONLAKE) 	+= -march=cannonlake
-+        cflags-$(CONFIG_MICELAKE) 	+= -march=icelake-client
-+        cflags-$(CONFIG_MCASCADELAKE) 	+= -march=cascadelake
-+        cflags-$(CONFIG_MCOOPERLAKE) 	+= -march=cooperlake
-+        cflags-$(CONFIG_MTIGERLAKE) 	+= -march=tigerlake
-+        cflags-$(CONFIG_MSAPPHIRERAPIDS) += -march=sapphirerapids
-+        cflags-$(CONFIG_MROCKETLAKE) 	+= -march=rocketlake
-+        cflags-$(CONFIG_MALDERLAKE) 	+= -march=alderlake
-+        cflags-$(CONFIG_GENERIC_CPU2) 	+= -march=x86-64-v2
-+        cflags-$(CONFIG_GENERIC_CPU3) 	+= -march=x86-64-v3
-+        cflags-$(CONFIG_GENERIC_CPU4) 	+= -march=x86-64-v4
-         cflags-$(CONFIG_GENERIC_CPU)	+= -mtune=generic
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+-
+-        cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
++        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
++        cflags-$(CONFIG_MZEN3) += $(call cc-option,-march=znver3)
++
++        cflags-$(CONFIG_MNATIVE_INTEL) += $(call cc-option,-march=native)
++        cflags-$(CONFIG_MNATIVE_AMD) += $(call cc-option,-march=native)
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell)
++        cflags-$(CONFIG_MCORE2) += $(call cc-option,-march=core2)
++        cflags-$(CONFIG_MNEHALEM) += $(call cc-option,-march=nehalem)
++        cflags-$(CONFIG_MWESTMERE) += $(call cc-option,-march=westmere)
++        cflags-$(CONFIG_MSILVERMONT) += $(call cc-option,-march=silvermont)
++        cflags-$(CONFIG_MGOLDMONT) += $(call cc-option,-march=goldmont)
++        cflags-$(CONFIG_MGOLDMONTPLUS) += $(call cc-option,-march=goldmont-plus)
++        cflags-$(CONFIG_MSANDYBRIDGE) += $(call cc-option,-march=sandybridge)
++        cflags-$(CONFIG_MIVYBRIDGE) += $(call cc-option,-march=ivybridge)
++        cflags-$(CONFIG_MHASWELL) += $(call cc-option,-march=haswell)
++        cflags-$(CONFIG_MBROADWELL) += $(call cc-option,-march=broadwell)
++        cflags-$(CONFIG_MSKYLAKE) += $(call cc-option,-march=skylake)
++        cflags-$(CONFIG_MSKYLAKEX) += $(call cc-option,-march=skylake-avx512)
++        cflags-$(CONFIG_MCANNONLAKE) += $(call cc-option,-march=cannonlake)
++        cflags-$(CONFIG_MICELAKE) += $(call cc-option,-march=icelake-client)
++        cflags-$(CONFIG_MCASCADELAKE) += $(call cc-option,-march=cascadelake)
++        cflags-$(CONFIG_MCOOPERLAKE) += $(call cc-option,-march=cooperlake)
++        cflags-$(CONFIG_MTIGERLAKE) += $(call cc-option,-march=tigerlake)
++        cflags-$(CONFIG_MSAPPHIRERAPIDS) += $(call cc-option,-march=sapphirerapids)
++        cflags-$(CONFIG_MROCKETLAKE) += $(call cc-option,-march=rocketlake)
++        cflags-$(CONFIG_MALDERLAKE) += $(call cc-option,-march=alderlake)
++        cflags-$(CONFIG_GENERIC_CPU2) += $(call cc-option,-march=x86-64-v2)
++        cflags-$(CONFIG_GENERIC_CPU3) += $(call cc-option,-march=x86-64-v3)
++        cflags-$(CONFIG_GENERIC_CPU4) += $(call cc-option,-march=x86-64-v4)
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
          KBUILD_CFLAGS += $(cflags-y)
-
+ 
 diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
 index 75884d2cdec3..4e6a08d4c7e5 100644
 --- a/arch/x86/include/asm/vermagic.h
@@ -670,7 +677,6 @@ index 75884d2cdec3..4e6a08d4c7e5 100644
  #elif defined CONFIG_MELAN
  #define MODULE_PROC_FAMILY "ELAN "
  #elif defined CONFIG_MCRUSOE
---
-2.33.0
-
+-- 
+2.31.1
 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-18 16:07 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-09-18 16:07 UTC (permalink / raw
  To: gentoo-commits

commit:     dee0860ae0aea5f984943a6421f17e9f1bfe5ab7
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 18 16:07:07 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep 18 16:07:07 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=dee0860a

Linux patch 5.10.67

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1066_linux-5.10.67.patch | 12089 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 12093 insertions(+)

diff --git a/0000_README b/0000_README
index 1521f72..20bba3a 100644
--- a/0000_README
+++ b/0000_README
@@ -307,6 +307,10 @@ Patch:  1065_linux-5.10.66.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.66
 
+Patch:  1066_linux-5.10.67.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.67
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1066_linux-5.10.67.patch b/1066_linux-5.10.67.patch
new file mode 100644
index 0000000..3aee0f0
--- /dev/null
+++ b/1066_linux-5.10.67.patch
@@ -0,0 +1,12089 @@
+diff --git a/Documentation/admin-guide/devices.txt b/Documentation/admin-guide/devices.txt
+index 63fd4e6a014bc..8b738855e1c5a 100644
+--- a/Documentation/admin-guide/devices.txt
++++ b/Documentation/admin-guide/devices.txt
+@@ -3003,10 +3003,10 @@
+ 		65 = /dev/infiniband/issm1     Second InfiniBand IsSM device
+ 		  ...
+ 		127 = /dev/infiniband/issm63    63rd InfiniBand IsSM device
+-		128 = /dev/infiniband/uverbs0   First InfiniBand verbs device
+-		129 = /dev/infiniband/uverbs1   Second InfiniBand verbs device
++		192 = /dev/infiniband/uverbs0   First InfiniBand verbs device
++		193 = /dev/infiniband/uverbs1   Second InfiniBand verbs device
+ 		  ...
+-		159 = /dev/infiniband/uverbs31  31st InfiniBand verbs device
++		223 = /dev/infiniband/uverbs31  31st InfiniBand verbs device
+ 
+  232 char	Biometric Devices
+ 		0 = /dev/biometric/sensor0/fingerprint	first fingerprint sensor on first device
+diff --git a/Documentation/devicetree/bindings/pinctrl/marvell,armada-37xx-pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/marvell,armada-37xx-pinctrl.txt
+index 38dc56a577604..ecec514b31550 100644
+--- a/Documentation/devicetree/bindings/pinctrl/marvell,armada-37xx-pinctrl.txt
++++ b/Documentation/devicetree/bindings/pinctrl/marvell,armada-37xx-pinctrl.txt
+@@ -43,19 +43,19 @@ group emmc_nb
+ 
+ group pwm0
+  - pin 11 (GPIO1-11)
+- - functions pwm, gpio
++ - functions pwm, led, gpio
+ 
+ group pwm1
+  - pin 12
+- - functions pwm, gpio
++ - functions pwm, led, gpio
+ 
+ group pwm2
+  - pin 13
+- - functions pwm, gpio
++ - functions pwm, led, gpio
+ 
+ group pwm3
+  - pin 14
+- - functions pwm, gpio
++ - functions pwm, led, gpio
+ 
+ group pmic1
+  - pin 7
+diff --git a/Makefile b/Makefile
+index 8b1f1e7517b94..a47273ecfdf21 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 66
++SUBLEVEL = 67
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
+index 0d6ee56f5831e..175213d7a1aa1 100644
+--- a/arch/arm/boot/compressed/Makefile
++++ b/arch/arm/boot/compressed/Makefile
+@@ -84,6 +84,8 @@ compress-$(CONFIG_KERNEL_LZ4)  = lz4
+ libfdt_objs := fdt_rw.o fdt_ro.o fdt_wip.o fdt.o
+ 
+ ifeq ($(CONFIG_ARM_ATAG_DTB_COMPAT),y)
++CFLAGS_REMOVE_atags_to_fdt.o += -Wframe-larger-than=${CONFIG_FRAME_WARN}
++CFLAGS_atags_to_fdt.o += -Wframe-larger-than=1280
+ OBJS	+= $(libfdt_objs) atags_to_fdt.o
+ endif
+ 
+diff --git a/arch/arm/boot/dts/at91-kizbox3_common.dtsi b/arch/arm/boot/dts/at91-kizbox3_common.dtsi
+index 7c3076e245efa..dc77d8e80e567 100644
+--- a/arch/arm/boot/dts/at91-kizbox3_common.dtsi
++++ b/arch/arm/boot/dts/at91-kizbox3_common.dtsi
+@@ -336,7 +336,7 @@
+ };
+ 
+ &shutdown_controller {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	atmel,wakeup-rtc-timer;
+ 
+ 	input@0 {
+diff --git a/arch/arm/boot/dts/at91-sam9x60ek.dts b/arch/arm/boot/dts/at91-sam9x60ek.dts
+index ebbc9b23aef1c..b1068cca42287 100644
+--- a/arch/arm/boot/dts/at91-sam9x60ek.dts
++++ b/arch/arm/boot/dts/at91-sam9x60ek.dts
+@@ -662,7 +662,7 @@
+ };
+ 
+ &shutdown_controller {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	status = "okay";
+ 
+ 	input@0 {
+diff --git a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+index d3cd2443ba252..9a18453d78428 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+@@ -138,7 +138,7 @@
+ 			};
+ 
+ 			shdwc@f8048010 {
+-				atmel,shdwc-debouncer = <976>;
++				debounce-delay-us = <976>;
+ 				atmel,wakeup-rtc-timer;
+ 
+ 				input@0 {
+diff --git a/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
+index 4883b84b4eded..20bcb7480d2ea 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
+@@ -205,7 +205,7 @@
+ };
+ 
+ &shutdown_controller {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	atmel,wakeup-rtc-timer;
+ 
+ 	input@0 {
+diff --git a/arch/arm/boot/dts/at91-sama5d2_icp.dts b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+index 19bb50f50c1fc..308d472bd1044 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_icp.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+@@ -693,7 +693,7 @@
+ };
+ 
+ &shutdown_controller {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	atmel,wakeup-rtc-timer;
+ 
+ 	input@0 {
+diff --git a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+index 1c6361ba1aca4..317c6ddb56775 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+@@ -203,7 +203,7 @@
+ 			};
+ 
+ 			shdwc@f8048010 {
+-				atmel,shdwc-debouncer = <976>;
++				debounce-delay-us = <976>;
+ 
+ 				input@0 {
+ 					reg = <0>;
+diff --git a/arch/arm/boot/dts/at91-sama5d2_xplained.dts b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
+index d767968ae2175..08c5182ba86bd 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
+@@ -347,7 +347,7 @@
+ 			};
+ 
+ 			shdwc@f8048010 {
+-				atmel,shdwc-debouncer = <976>;
++				debounce-delay-us = <976>;
+ 				atmel,wakeup-rtc-timer;
+ 
+ 				input@0 {
+diff --git a/arch/arm/boot/dts/imx53-ppd.dts b/arch/arm/boot/dts/imx53-ppd.dts
+index f7dcdf96e5c00..6d9a5ede94aaf 100644
+--- a/arch/arm/boot/dts/imx53-ppd.dts
++++ b/arch/arm/boot/dts/imx53-ppd.dts
+@@ -70,6 +70,12 @@
+ 		clock-frequency = <11289600>;
+ 	};
+ 
++	achc_24M: achc-clock {
++		compatible = "fixed-clock";
++		#clock-cells = <0>;
++		clock-frequency = <24000000>;
++	};
++
+ 	sgtlsound: sound {
+ 		compatible = "fsl,imx53-cpuvo-sgtl5000",
+ 			     "fsl,imx-audio-sgtl5000";
+@@ -313,16 +319,13 @@
+ 		    &gpio4 12 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ 
+-	spidev0: spi@0 {
+-		compatible = "ge,achc";
+-		reg = <0>;
+-		spi-max-frequency = <1000000>;
+-	};
+-
+-	spidev1: spi@1 {
+-		compatible = "ge,achc";
+-		reg = <1>;
+-		spi-max-frequency = <1000000>;
++	spidev0: spi@1 {
++		compatible = "ge,achc", "nxp,kinetis-k20";
++		reg = <1>, <0>;
++		vdd-supply = <&reg_3v3>;
++		vdda-supply = <&reg_3v3>;
++		clocks = <&achc_24M>;
++		reset-gpios = <&gpio3 6 GPIO_ACTIVE_LOW>;
+ 	};
+ 
+ 	gpioxra0: gpio@2 {
+diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
+index 2687c4e890ba8..e36d590e83732 100644
+--- a/arch/arm/boot/dts/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
+@@ -1262,9 +1262,9 @@
+ 				<&mmcc DSI1_BYTE_CLK>,
+ 				<&mmcc DSI_PIXEL_CLK>,
+ 				<&mmcc DSI1_ESC_CLK>;
+-			clock-names = "iface_clk", "bus_clk", "core_mmss_clk",
+-					"src_clk", "byte_clk", "pixel_clk",
+-					"core_clk";
++			clock-names = "iface", "bus", "core_mmss",
++					"src", "byte", "pixel",
++					"core";
+ 
+ 			assigned-clocks = <&mmcc DSI1_BYTE_SRC>,
+ 					<&mmcc DSI1_ESC_SRC>,
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+index 633079245601b..fd0cd10cb0931 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -172,15 +172,15 @@
+ 			sgtl5000_tx_endpoint: endpoint@0 {
+ 				reg = <0>;
+ 				remote-endpoint = <&sai2a_endpoint>;
+-				frame-master;
+-				bitclock-master;
++				frame-master = <&sgtl5000_tx_endpoint>;
++				bitclock-master = <&sgtl5000_tx_endpoint>;
+ 			};
+ 
+ 			sgtl5000_rx_endpoint: endpoint@1 {
+ 				reg = <1>;
+ 				remote-endpoint = <&sai2b_endpoint>;
+-				frame-master;
+-				bitclock-master;
++				frame-master = <&sgtl5000_rx_endpoint>;
++				bitclock-master = <&sgtl5000_rx_endpoint>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+index ec02cee1dd9b0..944d38b85eef4 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+@@ -185,8 +185,8 @@
+ &i2c4 {
+ 	hdmi-transmitter@3d {
+ 		compatible = "adi,adv7513";
+-		reg = <0x3d>, <0x2d>, <0x4d>, <0x5d>;
+-		reg-names = "main", "cec", "edid", "packet";
++		reg = <0x3d>, <0x4d>, <0x2d>, <0x5d>;
++		reg-names = "main", "edid", "cec", "packet";
+ 		clocks = <&cec_clock>;
+ 		clock-names = "cec";
+ 
+@@ -204,8 +204,6 @@
+ 		adi,input-depth = <8>;
+ 		adi,input-colorspace = "rgb";
+ 		adi,input-clock = "1x";
+-		adi,input-style = <1>;
+-		adi,input-justification = "evenly";
+ 
+ 		ports {
+ 			#address-cells = <1>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+index 93398cfae97ee..47df8ac67cf1a 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+@@ -212,15 +212,15 @@
+ 			cs42l51_tx_endpoint: endpoint@0 {
+ 				reg = <0>;
+ 				remote-endpoint = <&sai2a_endpoint>;
+-				frame-master;
+-				bitclock-master;
++				frame-master = <&cs42l51_tx_endpoint>;
++				bitclock-master = <&cs42l51_tx_endpoint>;
+ 			};
+ 
+ 			cs42l51_rx_endpoint: endpoint@1 {
+ 				reg = <1>;
+ 				remote-endpoint = <&sai2b_endpoint>;
+-				frame-master;
+-				bitclock-master;
++				frame-master = <&cs42l51_rx_endpoint>;
++				bitclock-master = <&cs42l51_rx_endpoint>;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+index 5d0f0fbba1d2e..5dbfb83c1b06b 100644
+--- a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
++++ b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+@@ -704,7 +704,6 @@
+ 		nvidia,xcvr-setup-use-fuses;
+ 		nvidia,xcvr-lsfslew = <2>;
+ 		nvidia,xcvr-lsrslew = <2>;
+-		vbus-supply = <&vdd_vbus1>;
+ 	};
+ 
+ 	usb@c5008000 {
+@@ -716,7 +715,7 @@
+ 		nvidia,xcvr-setup-use-fuses;
+ 		nvidia,xcvr-lsfslew = <2>;
+ 		nvidia,xcvr-lsrslew = <2>;
+-		vbus-supply = <&vdd_vbus3>;
++		vbus-supply = <&vdd_5v0_sys>;
+ 	};
+ 
+ 	brcm_wifi_pwrseq: wifi-pwrseq {
+@@ -967,28 +966,6 @@
+ 		vin-supply = <&vdd_5v0_sys>;
+ 	};
+ 
+-	vdd_vbus1: regulator@4 {
+-		compatible = "regulator-fixed";
+-		regulator-name = "vdd_usb1_vbus";
+-		regulator-min-microvolt = <5000000>;
+-		regulator-max-microvolt = <5000000>;
+-		regulator-always-on;
+-		gpio = <&gpio TEGRA_GPIO(D, 0) GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
+-		vin-supply = <&vdd_5v0_sys>;
+-	};
+-
+-	vdd_vbus3: regulator@5 {
+-		compatible = "regulator-fixed";
+-		regulator-name = "vdd_usb3_vbus";
+-		regulator-min-microvolt = <5000000>;
+-		regulator-max-microvolt = <5000000>;
+-		regulator-always-on;
+-		gpio = <&gpio TEGRA_GPIO(D, 3) GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
+-		vin-supply = <&vdd_5v0_sys>;
+-	};
+-
+ 	sound {
+ 		compatible = "nvidia,tegra-audio-wm8903-picasso",
+ 			     "nvidia,tegra-audio-wm8903";
+diff --git a/arch/arm/boot/dts/tegra20-tamonten.dtsi b/arch/arm/boot/dts/tegra20-tamonten.dtsi
+index 95e6bccdb4f6e..dd4d506683de7 100644
+--- a/arch/arm/boot/dts/tegra20-tamonten.dtsi
++++ b/arch/arm/boot/dts/tegra20-tamonten.dtsi
+@@ -185,8 +185,9 @@
+ 				nvidia,pins = "ata", "atb", "atc", "atd", "ate",
+ 					"cdev1", "cdev2", "dap1", "dtb", "gma",
+ 					"gmb", "gmc", "gmd", "gme", "gpu7",
+-					"gpv", "i2cp", "pta", "rm", "slxa",
+-					"slxk", "spia", "spib", "uac";
++					"gpv", "i2cp", "irrx", "irtx", "pta",
++					"rm", "slxa", "slxk", "spia", "spib",
++					"uac";
+ 				nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ 				nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ 			};
+@@ -211,7 +212,7 @@
+ 			conf_ddc {
+ 				nvidia,pins = "ddc", "dta", "dtd", "kbca",
+ 					"kbcb", "kbcc", "kbcd", "kbce", "kbcf",
+-					"sdc";
++					"sdc", "uad", "uca";
+ 				nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ 				nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ 			};
+@@ -221,10 +222,9 @@
+ 					"lvp0", "owc", "sdb";
+ 				nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ 			};
+-			conf_irrx {
+-				nvidia,pins = "irrx", "irtx", "sdd", "spic",
+-					"spie", "spih", "uaa", "uab", "uad",
+-					"uca", "ucb";
++			conf_sdd {
++				nvidia,pins = "sdd", "spic", "spie", "spih",
++					"uaa", "uab", "ucb";
+ 				nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ 				nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ 			};
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix-tx6.dts b/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix-tx6.dts
+index be81330db14f6..02641191682e0 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix-tx6.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix-tx6.dts
+@@ -32,14 +32,14 @@
+ 		};
+ 	};
+ 
+-	reg_vcc3v3: vcc3v3 {
++	reg_vcc3v3: regulator-vcc3v3 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vcc3v3";
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
+ 	};
+ 
+-	reg_vdd_cpu_gpu: vdd-cpu-gpu {
++	reg_vdd_cpu_gpu: regulator-vdd-cpu-gpu {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vdd-cpu-gpu";
+ 		regulator-min-microvolt = <1135000>;
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts b/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts
+index db3d303093f61..6d22efbd645cb 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts
+@@ -83,15 +83,9 @@
+ 			};
+ 
+ 			eeprom@52 {
+-				compatible = "atmel,24c512";
++				compatible = "onnn,cat24c04", "atmel,24c04";
+ 				reg = <0x52>;
+ 			};
+-
+-			eeprom@53 {
+-				compatible = "atmel,24c512";
+-				reg = <0x53>;
+-			};
+-
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts b/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts
+index d53ccc56bb639..07139e35686d7 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts
+@@ -58,14 +58,9 @@
+ 	};
+ 
+ 	eeprom@52 {
+-		compatible = "atmel,24c512";
++		compatible = "onnn,cat24c05", "atmel,24c04";
+ 		reg = <0x52>;
+ 	};
+-
+-	eeprom@53 {
+-		compatible = "atmel,24c512";
+-		reg = <0x53>;
+-	};
+ };
+ 
+ &i2c3 {
+diff --git a/arch/arm64/boot/dts/nvidia/tegra132.dtsi b/arch/arm64/boot/dts/nvidia/tegra132.dtsi
+index e40281510c0c0..b14e9f3bfdbdc 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra132.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra132.dtsi
+@@ -1215,13 +1215,13 @@
+ 
+ 		cpu@0 {
+ 			device_type = "cpu";
+-			compatible = "nvidia,denver";
++			compatible = "nvidia,tegra132-denver";
+ 			reg = <0>;
+ 		};
+ 
+ 		cpu@1 {
+ 			device_type = "cpu";
+-			compatible = "nvidia,denver";
++			compatible = "nvidia,tegra132-denver";
+ 			reg = <1>;
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index 6946fb210e484..9b5007e5f790f 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -1976,7 +1976,7 @@
+ 	};
+ 
+ 	pcie_ep@14160000 {
+-		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
++		compatible = "nvidia,tegra194-pcie-ep";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX4A>;
+ 		reg = <0x00 0x14160000 0x0 0x00020000>, /* appl registers (128K)      */
+ 		      <0x00 0x36040000 0x0 0x00040000>, /* iATU_DMA reg space (256K)  */
+@@ -2008,7 +2008,7 @@
+ 	};
+ 
+ 	pcie_ep@14180000 {
+-		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
++		compatible = "nvidia,tegra194-pcie-ep";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
+ 		reg = <0x00 0x14180000 0x0 0x00020000>, /* appl registers (128K)      */
+ 		      <0x00 0x38040000 0x0 0x00040000>, /* iATU_DMA reg space (256K)  */
+@@ -2040,7 +2040,7 @@
+ 	};
+ 
+ 	pcie_ep@141a0000 {
+-		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
++		compatible = "nvidia,tegra194-pcie-ep";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8A>;
+ 		reg = <0x00 0x141a0000 0x0 0x00020000>, /* appl registers (128K)      */
+ 		      <0x00 0x3a040000 0x0 0x00040000>, /* iATU_DMA reg space (256K)  */
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index cdc1e3d60c58e..3ceb36cac512f 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -151,7 +151,7 @@
+ 		#size-cells = <2>;
+ 		ranges;
+ 
+-		rpm_msg_ram: memory@0x60000 {
++		rpm_msg_ram: memory@60000 {
+ 			reg = <0x0 0x60000 0x0 0x6000>;
+ 			no-map;
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts b/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
+index e8c37a1693d3b..cc08dc4eb56a5 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
++++ b/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
+@@ -20,7 +20,7 @@
+ 		stdout-path = "serial0";
+ 	};
+ 
+-	memory {
++	memory@40000000 {
+ 		device_type = "memory";
+ 		reg = <0x0 0x40000000 0x0 0x20000000>;
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index 829e37ac82f66..776a6b0f61a62 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -567,10 +567,10 @@
+ 
+ 		pcie1: pci@10000000 {
+ 			compatible = "qcom,pcie-ipq8074";
+-			reg =  <0x10000000 0xf1d
+-				0x10000f20 0xa8
+-				0x00088000 0x2000
+-				0x10100000 0x1000>;
++			reg =  <0x10000000 0xf1d>,
++			       <0x10000f20 0xa8>,
++			       <0x00088000 0x2000>,
++			       <0x10100000 0x1000>;
+ 			reg-names = "dbi", "elbi", "parf", "config";
+ 			device_type = "pci";
+ 			linux,pci-domain = <1>;
+@@ -629,10 +629,10 @@
+ 
+ 		pcie0: pci@20000000 {
+ 			compatible = "qcom,pcie-ipq8074";
+-			reg =  <0x20000000 0xf1d
+-				0x20000f20 0xa8
+-				0x00080000 0x2000
+-				0x20100000 0x1000>;
++			reg = <0x20000000 0xf1d>,
++			      <0x20000f20 0xa8>,
++			      <0x00080000 0x2000>,
++			      <0x20100000 0x1000>;
+ 			reg-names = "dbi", "elbi", "parf", "config";
+ 			device_type = "pci";
+ 			linux,pci-domain = <0>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index 6707f898607fe..45f9a44326a6d 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -14,16 +14,18 @@
+ 	chosen { };
+ 
+ 	clocks {
+-		xo_board: xo_board {
++		xo_board: xo-board {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <19200000>;
++			clock-output-names = "xo_board";
+ 		};
+ 
+-		sleep_clk: sleep_clk {
++		sleep_clk: sleep-clk {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <32768>;
++			clock-output-names = "sleep_clk";
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index fd6ae5464dea4..eef17434d12ae 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -17,14 +17,14 @@
+ 	chosen { };
+ 
+ 	clocks {
+-		xo_board: xo_board {
++		xo_board: xo-board {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <19200000>;
+ 			clock-output-names = "xo_board";
+ 		};
+ 
+-		sleep_clk: sleep_clk {
++		sleep_clk: sleep-clk {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <32764>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm630.dtsi b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+index deb928d303c22..f87054575ce7f 100644
+--- a/arch/arm64/boot/dts/qcom/sdm630.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+@@ -17,14 +17,14 @@
+ 	chosen { };
+ 
+ 	clocks {
+-		xo_board: xo_board {
++		xo_board: xo-board {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <19200000>;
+ 			clock-output-names = "xo_board";
+ 		};
+ 
+-		sleep_clk: sleep_clk {
++		sleep_clk: sleep-clk {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <32764>;
+@@ -343,10 +343,19 @@
+ 		};
+ 
+ 		qhee_code: qhee-code@85800000 {
+-			reg = <0x0 0x85800000 0x0 0x3700000>;
++			reg = <0x0 0x85800000 0x0 0x600000>;
+ 			no-map;
+ 		};
+ 
++		rmtfs_mem: memory@85e00000 {
++			compatible = "qcom,rmtfs-mem";
++			reg = <0x0 0x85e00000 0x0 0x200000>;
++			no-map;
++
++			qcom,client-id = <1>;
++			qcom,vmid = <15>;
++		};
++
+ 		smem_region: smem-mem@86000000 {
+ 			reg = <0 0x86000000 0 0x200000>;
+ 			no-map;
+@@ -357,58 +366,44 @@
+ 			no-map;
+ 		};
+ 
+-		modem_fw_mem: modem-fw-region@8ac00000 {
++		mpss_region: mpss@8ac00000 {
+ 			reg = <0x0 0x8ac00000 0x0 0x7e00000>;
+ 			no-map;
+ 		};
+ 
+-		adsp_fw_mem: adsp-fw-region@92a00000 {
++		adsp_region: adsp@92a00000 {
+ 			reg = <0x0 0x92a00000 0x0 0x1e00000>;
+ 			no-map;
+ 		};
+ 
+-		pil_mba_mem: pil-mba-region@94800000 {
++		mba_region: mba@94800000 {
+ 			reg = <0x0 0x94800000 0x0 0x200000>;
+ 			no-map;
+ 		};
+ 
+-		buffer_mem: buffer-region@94a00000 {
++		buffer_mem: tzbuffer@94a00000 {
+ 			reg = <0x0 0x94a00000 0x0 0x100000>;
+ 			no-map;
+ 		};
+ 
+-		venus_fw_mem: venus-fw-region@9f800000 {
++		venus_region: venus@9f800000 {
+ 			reg = <0x0 0x9f800000 0x0 0x800000>;
+ 			no-map;
+ 		};
+ 
+-		secure_region2: secure-region2@f7c00000 {
+-			reg = <0x0 0xf7c00000 0x0 0x5c00000>;
+-			no-map;
+-		};
+-
+ 		adsp_mem: adsp-region@f6000000 {
+ 			reg = <0x0 0xf6000000 0x0 0x800000>;
+ 			no-map;
+ 		};
+ 
+-		qseecom_ta_mem: qseecom-ta-region@fec00000 {
+-			reg = <0x0 0xfec00000 0x0 0x1000000>;
+-			no-map;
+-		};
+-
+ 		qseecom_mem: qseecom-region@f6800000 {
+ 			reg = <0x0 0xf6800000 0x0 0x1400000>;
+ 			no-map;
+ 		};
+ 
+-		secure_display_memory: secure-region@f5c00000 {
+-			reg = <0x0 0xf5c00000 0x0 0x5c00000>;
+-			no-map;
+-		};
+-
+-		cont_splash_mem: cont-splash-region@9d400000 {
+-			reg = <0x0 0x9d400000 0x0 0x23ff000>;
++		zap_shader_region: gpu@fed00000 {
++			compatible = "shared-dma-pool";
++			reg = <0x0 0xfed00000 0x0 0xa00000>;
+ 			no-map;
+ 		};
+ 	};
+@@ -527,14 +522,18 @@
+ 			reg = <0x01f40000 0x20000>;
+ 		};
+ 
+-		tlmm: pinctrl@3000000 {
++		tlmm: pinctrl@3100000 {
+ 			compatible = "qcom,sdm630-pinctrl";
+-			reg = <0x03000000 0xc00000>;
++			reg = <0x03100000 0x400000>,
++				  <0x03500000 0x400000>,
++				  <0x03900000 0x400000>;
++			reg-names = "south", "center", "north";
+ 			interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
+ 			gpio-controller;
+-			#gpio-cells = <0x2>;
++			gpio-ranges = <&tlmm 0 0 114>;
++			#gpio-cells = <2>;
+ 			interrupt-controller;
+-			#interrupt-cells = <0x2>;
++			#interrupt-cells = <2>;
+ 
+ 			blsp1_uart1_default: blsp1-uart1-default {
+ 				pins = "gpio0", "gpio1", "gpio2", "gpio3";
+@@ -554,40 +553,48 @@
+ 				bias-disable;
+ 			};
+ 
+-			blsp2_uart1_tx_active: blsp2-uart1-tx-active {
+-				pins = "gpio16";
+-				drive-strength = <2>;
+-				bias-disable;
+-			};
+-
+-			blsp2_uart1_tx_sleep: blsp2-uart1-tx-sleep {
+-				pins = "gpio16";
+-				drive-strength = <2>;
+-				bias-pull-up;
+-			};
++			blsp2_uart1_default: blsp2-uart1-active {
++				tx-rts {
++					pins = "gpio16", "gpio19";
++					function = "blsp_uart5";
++					drive-strength = <2>;
++					bias-disable;
++				};
+ 
+-			blsp2_uart1_rxcts_active: blsp2-uart1-rxcts-active {
+-				pins = "gpio17", "gpio18";
+-				drive-strength = <2>;
+-				bias-disable;
+-			};
++				rx {
++					/*
++					 * Avoid garbage data while BT module
++					 * is powered off or not driving signal
++					 */
++					pins = "gpio17";
++					function = "blsp_uart5";
++					drive-strength = <2>;
++					bias-pull-up;
++				};
+ 
+-			blsp2_uart1_rxcts_sleep: blsp2-uart1-rxcts-sleep {
+-				pins = "gpio17", "gpio18";
+-				drive-strength = <2>;
+-				bias-no-pull;
++				cts {
++					/* Match the pull of the BT module */
++					pins = "gpio18";
++					function = "blsp_uart5";
++					drive-strength = <2>;
++					bias-pull-down;
++				};
+ 			};
+ 
+-			blsp2_uart1_rfr_active: blsp2-uart1-rfr-active {
+-				pins = "gpio19";
+-				drive-strength = <2>;
+-				bias-disable;
+-			};
++			blsp2_uart1_sleep: blsp2-uart1-sleep {
++				tx {
++					pins = "gpio16";
++					function = "gpio";
++					drive-strength = <2>;
++					bias-pull-up;
++				};
+ 
+-			blsp2_uart1_rfr_sleep: blsp2-uart1-rfr-sleep {
+-				pins = "gpio19";
+-				drive-strength = <2>;
+-				bias-no-pull;
++				rx-cts-rts {
++					pins = "gpio17", "gpio18", "gpio19";
++					function = "gpio";
++					drive-strength = <2>;
++					bias-no-pull;
++				};
+ 			};
+ 
+ 			i2c1_default: i2c1-default {
+@@ -686,50 +693,106 @@
+ 				bias-pull-up;
+ 			};
+ 
+-			sdc1_clk_on: sdc1-clk-on {
+-				pins = "sdc1_clk";
+-				bias-disable;
+-				drive-strength = <16>;
+-			};
++			sdc1_state_on: sdc1-on {
++				clk {
++					pins = "sdc1_clk";
++					bias-disable;
++					drive-strength = <16>;
++				};
+ 
+-			sdc1_clk_off: sdc1-clk-off {
+-				pins = "sdc1_clk";
+-				bias-disable;
+-				drive-strength = <2>;
+-			};
++				cmd {
++					pins = "sdc1_cmd";
++					bias-pull-up;
++					drive-strength = <10>;
++				};
+ 
+-			sdc1_cmd_on: sdc1-cmd-on {
+-				pins = "sdc1_cmd";
+-				bias-pull-up;
+-				drive-strength = <10>;
+-			};
++				data {
++					pins = "sdc1_data";
++					bias-pull-up;
++					drive-strength = <10>;
++				};
+ 
+-			sdc1_cmd_off: sdc1-cmd-off {
+-				pins = "sdc1_cmd";
+-				bias-pull-up;
+-				drive-strength = <2>;
++				rclk {
++					pins = "sdc1_rclk";
++					bias-pull-down;
++				};
+ 			};
+ 
+-			sdc1_data_on: sdc1-data-on {
+-				pins = "sdc1_data";
+-				bias-pull-up;
+-				drive-strength = <8>;
+-			};
++			sdc1_state_off: sdc1-off {
++				clk {
++					pins = "sdc1_clk";
++					bias-disable;
++					drive-strength = <2>;
++				};
+ 
+-			sdc1_data_off: sdc1-data-off {
+-				pins = "sdc1_data";
+-				bias-pull-up;
+-				drive-strength = <2>;
++				cmd {
++					pins = "sdc1_cmd";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
++
++				data {
++					pins = "sdc1_data";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
++
++				rclk {
++					pins = "sdc1_rclk";
++					bias-pull-down;
++				};
+ 			};
+ 
+-			sdc1_rclk_on: sdc1-rclk-on {
+-				pins = "sdc1_rclk";
+-				bias-pull-down;
++			sdc2_state_on: sdc2-on {
++				clk {
++					pins = "sdc2_clk";
++					bias-disable;
++					drive-strength = <16>;
++				};
++
++				cmd {
++					pins = "sdc2_cmd";
++					bias-pull-up;
++					drive-strength = <10>;
++				};
++
++				data {
++					pins = "sdc2_data";
++					bias-pull-up;
++					drive-strength = <10>;
++				};
++
++				sd-cd {
++					pins = "gpio54";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
+ 			};
+ 
+-			sdc1_rclk_off: sdc1-rclk-off {
+-				pins = "sdc1_rclk";
+-				bias-pull-down;
++			sdc2_state_off: sdc2-off {
++				clk {
++					pins = "sdc2_clk";
++					bias-disable;
++					drive-strength = <2>;
++				};
++
++				cmd {
++					pins = "sdc2_cmd";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
++
++				data {
++					pins = "sdc2_data";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
++
++				sd-cd {
++					pins = "gpio54";
++					bias-disable;
++					drive-strength = <2>;
++				};
+ 			};
+ 		};
+ 
+@@ -821,8 +884,8 @@
+ 			clock-names = "core", "iface", "xo";
+ 
+ 			pinctrl-names = "default", "sleep";
+-			pinctrl-0 = <&sdc1_clk_on &sdc1_cmd_on &sdc1_data_on &sdc1_rclk_on>;
+-			pinctrl-1 = <&sdc1_clk_off &sdc1_cmd_off &sdc1_data_off &sdc1_rclk_off>;
++			pinctrl-0 = <&sdc1_state_on>;
++			pinctrl-1 = <&sdc1_state_off>;
+ 
+ 			bus-width = <8>;
+ 			non-removable;
+@@ -967,10 +1030,8 @@
+ 			dmas = <&blsp2_dma 0>, <&blsp2_dma 1>;
+ 			dma-names = "tx", "rx";
+ 			pinctrl-names = "default", "sleep";
+-			pinctrl-0 = <&blsp2_uart1_tx_active &blsp2_uart1_rxcts_active
+-				&blsp2_uart1_rfr_active>;
+-			pinctrl-1 = <&blsp2_uart1_tx_sleep &blsp2_uart1_rxcts_sleep
+-				&blsp2_uart1_rfr_sleep>;
++			pinctrl-0 = <&blsp2_uart1_default>;
++			pinctrl-1 = <&blsp2_uart1_sleep>;
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index d4547a192748b..ec356fe07ac8a 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -2346,7 +2346,7 @@
+ 			};
+ 		};
+ 
+-		epss_l3: interconnect@18591000 {
++		epss_l3: interconnect@18590000 {
+ 			compatible = "qcom,sm8250-epss-l3";
+ 			reg = <0 0x18590000 0 0x1000>;
+ 
+diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
+index 587c504a4c8b2..4b06cf9a8c8aa 100644
+--- a/arch/arm64/include/asm/kernel-pgtable.h
++++ b/arch/arm64/include/asm/kernel-pgtable.h
+@@ -65,8 +65,8 @@
+ #define EARLY_KASLR	(0)
+ #endif
+ 
+-#define EARLY_ENTRIES(vstart, vend, shift) (((vend) >> (shift)) \
+-					- ((vstart) >> (shift)) + 1 + EARLY_KASLR)
++#define EARLY_ENTRIES(vstart, vend, shift) \
++	((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + EARLY_KASLR)
+ 
+ #define EARLY_PGDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT))
+ 
+diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
+index b2e91c187e2a6..c7315862e2435 100644
+--- a/arch/arm64/include/asm/mmu.h
++++ b/arch/arm64/include/asm/mmu.h
+@@ -30,11 +30,32 @@ typedef struct {
+ } mm_context_t;
+ 
+ /*
+- * This macro is only used by the TLBI and low-level switch_mm() code,
+- * neither of which can race with an ASID change. We therefore don't
+- * need to reload the counter using atomic64_read().
++ * We use atomic64_read() here because the ASID for an 'mm_struct' can
++ * be reallocated when scheduling one of its threads following a
++ * rollover event (see new_context() and flush_context()). In this case,
++ * a concurrent TLBI (e.g. via try_to_unmap_one() and ptep_clear_flush())
++ * may use a stale ASID. This is fine in principle as the new ASID is
++ * guaranteed to be clean in the TLB, but the TLBI routines have to take
++ * care to handle the following race:
++ *
++ *    CPU 0                    CPU 1                          CPU 2
++ *
++ *    // ptep_clear_flush(mm)
++ *    xchg_relaxed(pte, 0)
++ *    DSB ISHST
++ *    old = ASID(mm)
++ *         |                                                  <rollover>
++ *         |                   new = new_context(mm)
++ *         \-----------------> atomic_set(mm->context.id, new)
++ *                             cpu_switch_mm(mm)
++ *                             // Hardware walk of pte using new ASID
++ *    TLBI(old)
++ *
++ * In this scenario, the barrier on CPU 0 and the dependency on CPU 1
++ * ensure that the page-table walker on CPU 1 *must* see the invalid PTE
++ * written by CPU 0.
+  */
+-#define ASID(mm)	((mm)->context.id.counter & 0xffff)
++#define ASID(mm)	(atomic64_read(&(mm)->context.id) & 0xffff)
+ 
+ static inline bool arm64_kernel_unmapped_at_el0(void)
+ {
+diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
+index cc3f5a33ff9c5..36f02892e1df8 100644
+--- a/arch/arm64/include/asm/tlbflush.h
++++ b/arch/arm64/include/asm/tlbflush.h
+@@ -245,9 +245,10 @@ static inline void flush_tlb_all(void)
+ 
+ static inline void flush_tlb_mm(struct mm_struct *mm)
+ {
+-	unsigned long asid = __TLBI_VADDR(0, ASID(mm));
++	unsigned long asid;
+ 
+ 	dsb(ishst);
++	asid = __TLBI_VADDR(0, ASID(mm));
+ 	__tlbi(aside1is, asid);
+ 	__tlbi_user(aside1is, asid);
+ 	dsb(ish);
+@@ -256,9 +257,10 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
+ static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
+ 					 unsigned long uaddr)
+ {
+-	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
++	unsigned long addr;
+ 
+ 	dsb(ishst);
++	addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
+ 	__tlbi(vale1is, addr);
+ 	__tlbi_user(vale1is, addr);
+ }
+@@ -283,9 +285,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
+ {
+ 	int num = 0;
+ 	int scale = 0;
+-	unsigned long asid = ASID(vma->vm_mm);
+-	unsigned long addr;
+-	unsigned long pages;
++	unsigned long asid, addr, pages;
+ 
+ 	start = round_down(start, stride);
+ 	end = round_up(end, stride);
+@@ -305,6 +305,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
+ 	}
+ 
+ 	dsb(ishst);
++	asid = ASID(vma->vm_mm);
+ 
+ 	/*
+ 	 * When the CPU does not support TLB range operations, flush the TLB
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index 78cdd6b24172c..f9119eea735e2 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -191,7 +191,7 @@ SYM_CODE_END(preserve_boot_args)
+  * to be composed of multiple pages. (This effectively scales the end index).
+  *
+  *	vstart:	virtual address of start of range
+- *	vend:	virtual address of end of range
++ *	vend:	virtual address of end of range - we map [vstart, vend]
+  *	shift:	shift used to transform virtual address into index
+  *	ptrs:	number of entries in page table
+  *	istart:	index in table corresponding to vstart
+@@ -228,17 +228,18 @@ SYM_CODE_END(preserve_boot_args)
+  *
+  *	tbl:	location of page table
+  *	rtbl:	address to be used for first level page table entry (typically tbl + PAGE_SIZE)
+- *	vstart:	start address to map
+- *	vend:	end address to map - we map [vstart, vend]
++ *	vstart:	virtual address of start of range
++ *	vend:	virtual address of end of range - we map [vstart, vend - 1]
+  *	flags:	flags to use to map last level entries
+  *	phys:	physical address corresponding to vstart - physical memory is contiguous
+  *	pgds:	the number of pgd entries
+  *
+  * Temporaries:	istart, iend, tmp, count, sv - these need to be different registers
+- * Preserves:	vstart, vend, flags
+- * Corrupts:	tbl, rtbl, istart, iend, tmp, count, sv
++ * Preserves:	vstart, flags
++ * Corrupts:	tbl, rtbl, vend, istart, iend, tmp, count, sv
+  */
+ 	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, pgds, istart, iend, tmp, count, sv
++	sub \vend, \vend, #1
+ 	add \rtbl, \tbl, #PAGE_SIZE
+ 	mov \sv, \rtbl
+ 	mov \count, #0
+diff --git a/arch/m68k/Kconfig.bus b/arch/m68k/Kconfig.bus
+index f1be832e2b746..d1e93a39cd3bc 100644
+--- a/arch/m68k/Kconfig.bus
++++ b/arch/m68k/Kconfig.bus
+@@ -63,7 +63,7 @@ source "drivers/zorro/Kconfig"
+ 
+ endif
+ 
+-if !MMU
++if COLDFIRE
+ 
+ config ISA_DMA_API
+ 	def_bool !M5272
+diff --git a/arch/mips/mti-malta/malta-dtshim.c b/arch/mips/mti-malta/malta-dtshim.c
+index 0ddf03df62688..f451268f6c384 100644
+--- a/arch/mips/mti-malta/malta-dtshim.c
++++ b/arch/mips/mti-malta/malta-dtshim.c
+@@ -22,7 +22,7 @@
+ #define  ROCIT_CONFIG_GEN1_MEMMAP_SHIFT	8
+ #define  ROCIT_CONFIG_GEN1_MEMMAP_MASK	(0xf << 8)
+ 
+-static unsigned char fdt_buf[16 << 10] __initdata;
++static unsigned char fdt_buf[16 << 10] __initdata __aligned(8);
+ 
+ /* determined physical memory size, not overridden by command line args	 */
+ extern unsigned long physical_memsize;
+diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S
+index bc657e55c15f8..98e4f97db5159 100644
+--- a/arch/openrisc/kernel/entry.S
++++ b/arch/openrisc/kernel/entry.S
+@@ -547,6 +547,7 @@ EXCEPTION_ENTRY(_external_irq_handler)
+ 	l.bnf	1f			// ext irq enabled, all ok.
+ 	l.nop
+ 
++#ifdef CONFIG_PRINTK
+ 	l.addi  r1,r1,-0x8
+ 	l.movhi r3,hi(42f)
+ 	l.ori	r3,r3,lo(42f)
+@@ -560,6 +561,7 @@ EXCEPTION_ENTRY(_external_irq_handler)
+ 		.string "\n\rESR interrupt bug: in _external_irq_handler (ESR %x)\n\r"
+ 		.align 4
+ 	.previous
++#endif
+ 
+ 	l.ori	r4,r4,SPR_SR_IEE	// fix the bug
+ //	l.sw	PT_SR(r1),r4
+diff --git a/arch/parisc/kernel/signal.c b/arch/parisc/kernel/signal.c
+index 9f43eaeb0b0af..8d6c9b88eb3f2 100644
+--- a/arch/parisc/kernel/signal.c
++++ b/arch/parisc/kernel/signal.c
+@@ -237,6 +237,12 @@ setup_rt_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs,
+ #endif
+ 	
+ 	usp = (regs->gr[30] & ~(0x01UL));
++#ifdef CONFIG_64BIT
++	if (is_compat_task()) {
++		/* The gcc alloca implementation leaves garbage in the upper 32 bits of sp */
++		usp = (compat_uint_t)usp;
++	}
++#endif
+ 	/*FIXME: frame_size parameter is unused, remove it. */
+ 	frame = get_sigframe(&ksig->ka, usp, sizeof(*frame));
+ 
+diff --git a/arch/powerpc/configs/mpc885_ads_defconfig b/arch/powerpc/configs/mpc885_ads_defconfig
+index 949ff9ccda5e7..dbf3ff8adc654 100644
+--- a/arch/powerpc/configs/mpc885_ads_defconfig
++++ b/arch/powerpc/configs/mpc885_ads_defconfig
+@@ -34,6 +34,7 @@ CONFIG_MTD_CFI_GEOMETRY=y
+ # CONFIG_MTD_CFI_I2 is not set
+ CONFIG_MTD_CFI_I4=y
+ CONFIG_MTD_CFI_AMDSTD=y
++CONFIG_MTD_PHYSMAP=y
+ CONFIG_MTD_PHYSMAP_OF=y
+ # CONFIG_BLK_DEV is not set
+ CONFIG_NETDEVICES=y
+diff --git a/arch/powerpc/include/asm/pmc.h b/arch/powerpc/include/asm/pmc.h
+index c6bbe9778d3cd..3c09109e708ef 100644
+--- a/arch/powerpc/include/asm/pmc.h
++++ b/arch/powerpc/include/asm/pmc.h
+@@ -34,6 +34,13 @@ static inline void ppc_set_pmu_inuse(int inuse)
+ #endif
+ }
+ 
++#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
++static inline int ppc_get_pmu_inuse(void)
++{
++	return get_paca()->pmcregs_in_use;
++}
++#endif
++
+ extern void power4_enable_pmcs(void);
+ 
+ #else /* CONFIG_PPC64 */
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 26a028a9233af..91f274134884e 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -1385,6 +1385,7 @@ static void add_cpu_to_masks(int cpu)
+ 	 * add it to it's own thread sibling mask.
+ 	 */
+ 	cpumask_set_cpu(cpu, cpu_sibling_mask(cpu));
++	cpumask_set_cpu(cpu, cpu_core_mask(cpu));
+ 
+ 	for (i = first_thread; i < first_thread + threads_per_core; i++)
+ 		if (cpu_online(i))
+@@ -1399,11 +1400,6 @@ static void add_cpu_to_masks(int cpu)
+ 	if (has_coregroup_support())
+ 		update_coregroup_mask(cpu, &mask);
+ 
+-	if (chip_id == -1 || !ret) {
+-		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
+-		goto out;
+-	}
+-
+ 	if (shared_caches)
+ 		submask_fn = cpu_l2_cache_mask;
+ 
+@@ -1413,6 +1409,10 @@ static void add_cpu_to_masks(int cpu)
+ 	/* Skip all CPUs already part of current CPU core mask */
+ 	cpumask_andnot(mask, cpu_online_mask, cpu_core_mask(cpu));
+ 
++	/* If chip_id is -1; limit the cpu_core_mask to within DIE*/
++	if (chip_id == -1)
++		cpumask_and(mask, mask, cpu_cpu_mask(cpu));
++
+ 	for_each_cpu(i, mask) {
+ 		if (chip_id == cpu_to_chip_id(i)) {
+ 			or_cpumasks_related(cpu, i, submask_fn, cpu_core_mask);
+@@ -1422,7 +1422,6 @@ static void add_cpu_to_masks(int cpu)
+ 		}
+ 	}
+ 
+-out:
+ 	free_cpumask_var(mask);
+ }
+ 
+diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
+index 2f926ea9b7b94..d4a66ce93f522 100644
+--- a/arch/powerpc/kernel/stacktrace.c
++++ b/arch/powerpc/kernel/stacktrace.c
+@@ -8,6 +8,7 @@
+  * Copyright 2018 Nick Piggin, Michael Ellerman, IBM Corp.
+  */
+ 
++#include <linux/delay.h>
+ #include <linux/export.h>
+ #include <linux/kallsyms.h>
+ #include <linux/module.h>
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index bb35490400e99..04028f905e50e 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -64,10 +64,12 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
+ 	}
+ 	isync();
+ 
++	pagefault_disable();
+ 	if (is_load)
+-		ret = copy_from_user_nofault(to, (const void __user *)from, n);
++		ret = __copy_from_user_inatomic(to, (const void __user *)from, n);
+ 	else
+-		ret = copy_to_user_nofault((void __user *)to, from, n);
++		ret = __copy_to_user_inatomic((void __user *)to, from, n);
++	pagefault_enable();
+ 
+ 	/* switch the pid first to avoid running host with unallocated pid */
+ 	if (quadrant == 1 && pid != old_pid)
+diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
+index 083a4e037718d..e5ba96c41f3fc 100644
+--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
++++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
+@@ -173,10 +173,13 @@ static void kvmppc_rm_tce_put(struct kvmppc_spapr_tce_table *stt,
+ 	idx -= stt->offset;
+ 	page = stt->pages[idx / TCES_PER_PAGE];
+ 	/*
+-	 * page must not be NULL in real mode,
+-	 * kvmppc_rm_ioba_validate() must have taken care of this.
++	 * kvmppc_rm_ioba_validate() allows pages not be allocated if TCE is
++	 * being cleared, otherwise it returns H_TOO_HARD and we skip this.
+ 	 */
+-	WARN_ON_ONCE_RM(!page);
++	if (!page) {
++		WARN_ON_ONCE_RM(tce != 0);
++		return;
++	}
+ 	tbl = kvmppc_page_address(page);
+ 
+ 	tbl[idx % TCES_PER_PAGE] = tce;
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index bd7350a608d4b..175967a195c44 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -58,6 +58,7 @@
+ #include <asm/kvm_book3s.h>
+ #include <asm/mmu_context.h>
+ #include <asm/lppaca.h>
++#include <asm/pmc.h>
+ #include <asm/processor.h>
+ #include <asm/cputhreads.h>
+ #include <asm/page.h>
+@@ -3619,6 +3620,18 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	    cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ 		kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
+ 
++#ifdef CONFIG_PPC_PSERIES
++	if (kvmhv_on_pseries()) {
++		barrier();
++		if (vcpu->arch.vpa.pinned_addr) {
++			struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
++			get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use;
++		} else {
++			get_lppaca()->pmcregs_in_use = 1;
++		}
++		barrier();
++	}
++#endif
+ 	kvmhv_load_guest_pmu(vcpu);
+ 
+ 	msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
+@@ -3756,6 +3769,13 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	save_pmu |= nesting_enabled(vcpu->kvm);
+ 
+ 	kvmhv_save_guest_pmu(vcpu, save_pmu);
++#ifdef CONFIG_PPC_PSERIES
++	if (kvmhv_on_pseries()) {
++		barrier();
++		get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse();
++		barrier();
++	}
++#endif
+ 
+ 	vc->entry_exit_map = 0x101;
+ 	vc->in_guest = 0;
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index f2bf98bdcea28..094a1076fd1fe 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -893,7 +893,7 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
+ static void __init find_possible_nodes(void)
+ {
+ 	struct device_node *rtas;
+-	const __be32 *domains;
++	const __be32 *domains = NULL;
+ 	int prop_length, max_nodes;
+ 	u32 i;
+ 
+@@ -909,9 +909,14 @@ static void __init find_possible_nodes(void)
+ 	 * it doesn't exist, then fallback on ibm,max-associativity-domains.
+ 	 * Current denotes what the platform can support compared to max
+ 	 * which denotes what the Hypervisor can support.
++	 *
++	 * If the LPAR is migratable, new nodes might be activated after a LPM,
++	 * so we should consider the max number in that case.
+ 	 */
+-	domains = of_get_property(rtas, "ibm,current-associativity-domains",
+-					&prop_length);
++	if (!of_get_property(of_root, "ibm,migratable-partition", NULL))
++		domains = of_get_property(rtas,
++					  "ibm,current-associativity-domains",
++					  &prop_length);
+ 	if (!domains) {
+ 		domains = of_get_property(rtas, "ibm,max-associativity-domains",
+ 					&prop_length);
+@@ -920,6 +925,8 @@ static void __init find_possible_nodes(void)
+ 	}
+ 
+ 	max_nodes = of_read_number(&domains[min_common_depth], 1);
++	pr_info("Partition configured for %d NUMA nodes.\n", max_nodes);
++
+ 	for (i = 0; i < max_nodes; i++) {
+ 		if (!node_possible(i))
+ 			node_set(i, node_possible_map);
+diff --git a/arch/powerpc/perf/hv-gpci.c b/arch/powerpc/perf/hv-gpci.c
+index d48413e28c39e..c756228a081fb 100644
+--- a/arch/powerpc/perf/hv-gpci.c
++++ b/arch/powerpc/perf/hv-gpci.c
+@@ -175,7 +175,7 @@ static unsigned long single_gpci_request(u32 req, u32 starting_index,
+ 	 */
+ 	count = 0;
+ 	for (i = offset; i < offset + length; i++)
+-		count |= arg->bytes[i] << (i - offset);
++		count |= (u64)(arg->bytes[i]) << ((length - 1 - (i - offset)) * 8);
+ 
+ 	*value = count;
+ out:
+diff --git a/arch/s390/include/asm/setup.h b/arch/s390/include/asm/setup.h
+index bdb242a1544eb..75a2ecec2ab8a 100644
+--- a/arch/s390/include/asm/setup.h
++++ b/arch/s390/include/asm/setup.h
+@@ -38,6 +38,7 @@
+ #define MACHINE_FLAG_NX		BIT(15)
+ #define MACHINE_FLAG_GS		BIT(16)
+ #define MACHINE_FLAG_SCC	BIT(17)
++#define MACHINE_FLAG_PCI_MIO	BIT(18)
+ 
+ #define LPP_MAGIC		BIT(31)
+ #define LPP_PID_MASK		_AC(0xffffffff, UL)
+@@ -113,6 +114,7 @@ extern unsigned long mio_wb_bit_mask;
+ #define MACHINE_HAS_NX		(S390_lowcore.machine_flags & MACHINE_FLAG_NX)
+ #define MACHINE_HAS_GS		(S390_lowcore.machine_flags & MACHINE_FLAG_GS)
+ #define MACHINE_HAS_SCC		(S390_lowcore.machine_flags & MACHINE_FLAG_SCC)
++#define MACHINE_HAS_PCI_MIO	(S390_lowcore.machine_flags & MACHINE_FLAG_PCI_MIO)
+ 
+ /*
+  * Console mode. Override with conmode=
+diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
+index 705844f739345..985e1e7553336 100644
+--- a/arch/s390/kernel/early.c
++++ b/arch/s390/kernel/early.c
+@@ -238,6 +238,10 @@ static __init void detect_machine_facilities(void)
+ 		clock_comparator_max = -1ULL >> 1;
+ 		__ctl_set_bit(0, 53);
+ 	}
++	if (IS_ENABLED(CONFIG_PCI) && test_facility(153)) {
++		S390_lowcore.machine_flags |= MACHINE_FLAG_PCI_MIO;
++		/* the control bit is set during PCI initialization */
++	}
+ }
+ 
+ static inline void save_vector_registers(void)
+diff --git a/arch/s390/kernel/jump_label.c b/arch/s390/kernel/jump_label.c
+index ab584e8e35275..9156653b56f69 100644
+--- a/arch/s390/kernel/jump_label.c
++++ b/arch/s390/kernel/jump_label.c
+@@ -36,7 +36,7 @@ static void jump_label_bug(struct jump_entry *entry, struct insn *expected,
+ 	unsigned char *ipe = (unsigned char *)expected;
+ 	unsigned char *ipn = (unsigned char *)new;
+ 
+-	pr_emerg("Jump label code mismatch at %pS [%p]\n", ipc, ipc);
++	pr_emerg("Jump label code mismatch at %pS [%px]\n", ipc, ipc);
+ 	pr_emerg("Found:    %6ph\n", ipc);
+ 	pr_emerg("Expected: %6ph\n", ipe);
+ 	pr_emerg("New:      %6ph\n", ipn);
+diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
+index 77767850d0d07..9d5960bbc45f2 100644
+--- a/arch/s390/mm/init.c
++++ b/arch/s390/mm/init.c
+@@ -180,9 +180,9 @@ static void pv_init(void)
+ 		return;
+ 
+ 	/* make sure bounce buffers are shared */
++	swiotlb_force = SWIOTLB_FORCE;
+ 	swiotlb_init(1);
+ 	swiotlb_update_mem_attributes();
+-	swiotlb_force = SWIOTLB_FORCE;
+ }
+ 
+ void __init mem_init(void)
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 0ddb1fe353dc8..f5ddbc625c1a5 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -866,7 +866,6 @@ static void zpci_mem_exit(void)
+ }
+ 
+ static unsigned int s390_pci_probe __initdata = 1;
+-static unsigned int s390_pci_no_mio __initdata;
+ unsigned int s390_pci_force_floating __initdata;
+ static unsigned int s390_pci_initialized;
+ 
+@@ -877,7 +876,7 @@ char * __init pcibios_setup(char *str)
+ 		return NULL;
+ 	}
+ 	if (!strcmp(str, "nomio")) {
+-		s390_pci_no_mio = 1;
++		S390_lowcore.machine_flags &= ~MACHINE_FLAG_PCI_MIO;
+ 		return NULL;
+ 	}
+ 	if (!strcmp(str, "force_floating")) {
+@@ -906,7 +905,7 @@ static int __init pci_base_init(void)
+ 	if (!test_facility(69) || !test_facility(71))
+ 		return 0;
+ 
+-	if (test_facility(153) && !s390_pci_no_mio) {
++	if (MACHINE_HAS_PCI_MIO) {
+ 		static_branch_enable(&have_mio);
+ 		ctl_set_bit(2, 5);
+ 	}
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index 6cc50ab07bded..65d11711cd7bb 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -322,8 +322,6 @@ static void __init ms_hyperv_init_platform(void)
+ 	if (ms_hyperv.features & HV_ACCESS_TSC_INVARIANT) {
+ 		wrmsrl(HV_X64_MSR_TSC_INVARIANT_CONTROL, 0x1);
+ 		setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
+-	} else {
+-		mark_tsc_unstable("running on Hyper-V");
+ 	}
+ 
+ 	/*
+@@ -382,6 +380,13 @@ static void __init ms_hyperv_init_platform(void)
+ 	/* Register Hyper-V specific clocksource */
+ 	hv_init_clocksource();
+ #endif
++	/*
++	 * TSC should be marked as unstable only after Hyper-V
++	 * clocksource has been initialized. This ensures that the
++	 * stability of the sched_clock is not altered.
++	 */
++	if (!(ms_hyperv.features & HV_ACCESS_TSC_INVARIANT))
++		mark_tsc_unstable("running on Hyper-V");
+ }
+ 
+ const __initconst struct hypervisor_x86 x86_hyper_ms_hyperv = {
+diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
+index 56e0f290fef65..e809f14468464 100644
+--- a/arch/x86/xen/p2m.c
++++ b/arch/x86/xen/p2m.c
+@@ -618,8 +618,8 @@ int xen_alloc_p2m_entry(unsigned long pfn)
+ 	}
+ 
+ 	/* Expanded the p2m? */
+-	if (pfn > xen_p2m_last_pfn) {
+-		xen_p2m_last_pfn = pfn;
++	if (pfn >= xen_p2m_last_pfn) {
++		xen_p2m_last_pfn = ALIGN(pfn + 1, P2M_PER_PAGE);
+ 		HYPERVISOR_shared_info->arch.max_pfn = xen_p2m_last_pfn;
+ 	}
+ 
+diff --git a/arch/xtensa/platforms/iss/console.c b/arch/xtensa/platforms/iss/console.c
+index af81a62faba64..e7faea3d73d3b 100644
+--- a/arch/xtensa/platforms/iss/console.c
++++ b/arch/xtensa/platforms/iss/console.c
+@@ -168,9 +168,13 @@ static const struct tty_operations serial_ops = {
+ 
+ int __init rs_init(void)
+ {
+-	tty_port_init(&serial_port);
++	int ret;
+ 
+ 	serial_driver = alloc_tty_driver(SERIAL_MAX_NUM_LINES);
++	if (!serial_driver)
++		return -ENOMEM;
++
++	tty_port_init(&serial_port);
+ 
+ 	pr_info("%s %s\n", serial_name, serial_version);
+ 
+@@ -190,8 +194,15 @@ int __init rs_init(void)
+ 	tty_set_operations(serial_driver, &serial_ops);
+ 	tty_port_link_device(&serial_port, serial_driver, 0);
+ 
+-	if (tty_register_driver(serial_driver))
+-		panic("Couldn't register serial driver\n");
++	ret = tty_register_driver(serial_driver);
++	if (ret) {
++		pr_err("Couldn't register serial driver\n");
++		tty_driver_kref_put(serial_driver);
++		tty_port_destroy(&serial_port);
++
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 8ea37328ca84e..b8c2ddc01aec3 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -5011,7 +5011,7 @@ bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
+ 	if (bfqq->new_ioprio >= IOPRIO_BE_NR) {
+ 		pr_crit("bfq_set_next_ioprio_data: new_ioprio %d\n",
+ 			bfqq->new_ioprio);
+-		bfqq->new_ioprio = IOPRIO_BE_NR;
++		bfqq->new_ioprio = IOPRIO_BE_NR - 1;
+ 	}
+ 
+ 	bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->new_ioprio);
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index ab7d7ebcf6ddc..61b452272f94e 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -296,9 +296,6 @@ int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
+ 	if (!blk_queue_is_zoned(q))
+ 		return -ENOTTY;
+ 
+-	if (!capable(CAP_SYS_ADMIN))
+-		return -EACCES;
+-
+ 	if (copy_from_user(&rep, argp, sizeof(struct blk_zone_report)))
+ 		return -EFAULT;
+ 
+@@ -357,9 +354,6 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
+ 	if (!blk_queue_is_zoned(q))
+ 		return -ENOTTY;
+ 
+-	if (!capable(CAP_SYS_ADMIN))
+-		return -EACCES;
+-
+ 	if (!(mode & FMODE_WRITE))
+ 		return -EBADF;
+ 
+diff --git a/block/bsg.c b/block/bsg.c
+index 3d78e843a83f6..2cbc1fcc8247b 100644
+--- a/block/bsg.c
++++ b/block/bsg.c
+@@ -371,10 +371,13 @@ static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	case SG_GET_RESERVED_SIZE:
+ 	case SG_SET_RESERVED_SIZE:
+ 	case SG_EMULATED_HOST:
+-	case SCSI_IOCTL_SEND_COMMAND:
+ 		return scsi_cmd_ioctl(bd->queue, NULL, file->f_mode, cmd, uarg);
+ 	case SG_IO:
+ 		return bsg_sg_io(bd->queue, file->f_mode, uarg);
++	case SCSI_IOCTL_SEND_COMMAND:
++		pr_warn_ratelimited("%s: calling unsupported SCSI_IOCTL_SEND_COMMAND\n",
++				current->comm);
++		return -EINVAL;
+ 	default:
+ 		return -ENOTTY;
+ 	}
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 44f434acfce08..0e6e73b8023fc 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -3950,6 +3950,10 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "Samsung SSD 850*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
++	{ "Samsung SSD 860*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
++						ATA_HORKAGE_ZERO_AFTER_TRIM, },
++	{ "Samsung SSD 870*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
++						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "FCCT*M500*",			NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 
+diff --git a/drivers/ata/sata_dwc_460ex.c b/drivers/ata/sata_dwc_460ex.c
+index 9dcef6ac643b9..982fe91125322 100644
+--- a/drivers/ata/sata_dwc_460ex.c
++++ b/drivers/ata/sata_dwc_460ex.c
+@@ -1249,24 +1249,20 @@ static int sata_dwc_probe(struct platform_device *ofdev)
+ 	irq = irq_of_parse_and_map(np, 0);
+ 	if (irq == NO_IRQ) {
+ 		dev_err(&ofdev->dev, "no SATA DMA irq\n");
+-		err = -ENODEV;
+-		goto error_out;
++		return -ENODEV;
+ 	}
+ 
+ #ifdef CONFIG_SATA_DWC_OLD_DMA
+ 	if (!of_find_property(np, "dmas", NULL)) {
+ 		err = sata_dwc_dma_init_old(ofdev, hsdev);
+ 		if (err)
+-			goto error_out;
++			return err;
+ 	}
+ #endif
+ 
+ 	hsdev->phy = devm_phy_optional_get(hsdev->dev, "sata-phy");
+-	if (IS_ERR(hsdev->phy)) {
+-		err = PTR_ERR(hsdev->phy);
+-		hsdev->phy = NULL;
+-		goto error_out;
+-	}
++	if (IS_ERR(hsdev->phy))
++		return PTR_ERR(hsdev->phy);
+ 
+ 	err = phy_init(hsdev->phy);
+ 	if (err)
+diff --git a/drivers/bus/fsl-mc/fsl-mc-bus.c b/drivers/bus/fsl-mc/fsl-mc-bus.c
+index 806766b1b45f6..e329cdd7156c9 100644
+--- a/drivers/bus/fsl-mc/fsl-mc-bus.c
++++ b/drivers/bus/fsl-mc/fsl-mc-bus.c
+@@ -64,6 +64,8 @@ struct fsl_mc_addr_translation_range {
+ #define MC_FAPR_PL	BIT(18)
+ #define MC_FAPR_BMT	BIT(17)
+ 
++static phys_addr_t mc_portal_base_phys_addr;
++
+ /**
+  * fsl_mc_bus_match - device to driver matching callback
+  * @dev: the fsl-mc device to match against
+@@ -597,14 +599,30 @@ static int fsl_mc_device_get_mmio_regions(struct fsl_mc_device *mc_dev,
+ 		 * If base address is in the region_desc use it otherwise
+ 		 * revert to old mechanism
+ 		 */
+-		if (region_desc.base_address)
++		if (region_desc.base_address) {
+ 			regions[i].start = region_desc.base_address +
+ 						region_desc.base_offset;
+-		else
++		} else {
+ 			error = translate_mc_addr(mc_dev, mc_region_type,
+ 					  region_desc.base_offset,
+ 					  &regions[i].start);
+ 
++			/*
++			 * Some versions of the MC firmware wrongly report
++			 * 0 for register base address of the DPMCP associated
++			 * with child DPRC objects thus rendering them unusable.
++			 * This is particularly troublesome in ACPI boot
++			 * scenarios where the legacy way of extracting this
++			 * base address from the device tree does not apply.
++			 * Given that DPMCPs share the same base address,
++			 * workaround this by using the base address extracted
++			 * from the root DPRC container.
++			 */
++			if (is_fsl_mc_bus_dprc(mc_dev) &&
++			    regions[i].start == region_desc.base_offset)
++				regions[i].start += mc_portal_base_phys_addr;
++		}
++
+ 		if (error < 0) {
+ 			dev_err(parent_dev,
+ 				"Invalid MC offset: %#x (for %s.%d\'s region %d)\n",
+@@ -996,6 +1014,8 @@ static int fsl_mc_bus_probe(struct platform_device *pdev)
+ 	plat_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	mc_portal_phys_addr = plat_res->start;
+ 	mc_portal_size = resource_size(plat_res);
++	mc_portal_base_phys_addr = mc_portal_phys_addr & ~0x3ffffff;
++
+ 	error = fsl_create_mc_io(&pdev->dev, mc_portal_phys_addr,
+ 				 mc_portal_size, NULL,
+ 				 FSL_MC_IO_ATOMIC_CONTEXT_PORTAL, &mc_io);
+diff --git a/drivers/clk/at91/clk-generated.c b/drivers/clk/at91/clk-generated.c
+index b4fc8d71daf20..b656d25a97678 100644
+--- a/drivers/clk/at91/clk-generated.c
++++ b/drivers/clk/at91/clk-generated.c
+@@ -128,6 +128,12 @@ static int clk_generated_determine_rate(struct clk_hw *hw,
+ 	int i;
+ 	u32 div;
+ 
++	/* do not look for a rate that is outside of our range */
++	if (gck->range.max && req->rate > gck->range.max)
++		req->rate = gck->range.max;
++	if (gck->range.min && req->rate < gck->range.min)
++		req->rate = gck->range.min;
++
+ 	for (i = 0; i < clk_hw_get_num_parents(hw); i++) {
+ 		if (gck->chg_pid == i)
+ 			continue;
+diff --git a/drivers/clk/imx/clk-composite-8m.c b/drivers/clk/imx/clk-composite-8m.c
+index 2c309e3dc8e34..04e728538cefe 100644
+--- a/drivers/clk/imx/clk-composite-8m.c
++++ b/drivers/clk/imx/clk-composite-8m.c
+@@ -216,7 +216,8 @@ struct clk_hw *imx8m_clk_hw_composite_flags(const char *name,
+ 		div->width = PCG_PREDIV_WIDTH;
+ 		divider_ops = &imx8m_clk_composite_divider_ops;
+ 		mux_ops = &clk_mux_ops;
+-		flags |= CLK_SET_PARENT_GATE;
++		if (!(composite_flags & IMX_COMPOSITE_FW_MANAGED))
++			flags |= CLK_SET_PARENT_GATE;
+ 	}
+ 
+ 	div->lock = &imx_ccm_lock;
+diff --git a/drivers/clk/imx/clk-imx8mm.c b/drivers/clk/imx/clk-imx8mm.c
+index 4cbf86ab2eacf..711bd2294c70b 100644
+--- a/drivers/clk/imx/clk-imx8mm.c
++++ b/drivers/clk/imx/clk-imx8mm.c
+@@ -458,10 +458,11 @@ static int imx8mm_clocks_probe(struct platform_device *pdev)
+ 
+ 	/*
+ 	 * DRAM clocks are manipulated from TF-A outside clock framework.
+-	 * Mark with GET_RATE_NOCACHE to always read div value from hardware
++	 * The fw_managed helper sets GET_RATE_NOCACHE and clears SET_PARENT_GATE
++	 * as div value should always be read from hardware
+ 	 */
+-	hws[IMX8MM_CLK_DRAM_ALT] = __imx8m_clk_hw_composite("dram_alt", imx8mm_dram_alt_sels, base + 0xa000, CLK_GET_RATE_NOCACHE);
+-	hws[IMX8MM_CLK_DRAM_APB] = __imx8m_clk_hw_composite("dram_apb", imx8mm_dram_apb_sels, base + 0xa080, CLK_IS_CRITICAL | CLK_GET_RATE_NOCACHE);
++	hws[IMX8MM_CLK_DRAM_ALT] = imx8m_clk_hw_fw_managed_composite("dram_alt", imx8mm_dram_alt_sels, base + 0xa000);
++	hws[IMX8MM_CLK_DRAM_APB] = imx8m_clk_hw_fw_managed_composite_critical("dram_apb", imx8mm_dram_apb_sels, base + 0xa080);
+ 
+ 	/* IP */
+ 	hws[IMX8MM_CLK_VPU_G1] = imx8m_clk_hw_composite("vpu_g1", imx8mm_vpu_g1_sels, base + 0xa100);
+diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
+index f98f252795396..33a7ddc23cd24 100644
+--- a/drivers/clk/imx/clk-imx8mn.c
++++ b/drivers/clk/imx/clk-imx8mn.c
+@@ -441,10 +441,11 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ 
+ 	/*
+ 	 * DRAM clocks are manipulated from TF-A outside clock framework.
+-	 * Mark with GET_RATE_NOCACHE to always read div value from hardware
++	 * The fw_managed helper sets GET_RATE_NOCACHE and clears SET_PARENT_GATE
++	 * as div value should always be read from hardware
+ 	 */
+-	hws[IMX8MN_CLK_DRAM_ALT] = __imx8m_clk_hw_composite("dram_alt", imx8mn_dram_alt_sels, base + 0xa000, CLK_GET_RATE_NOCACHE);
+-	hws[IMX8MN_CLK_DRAM_APB] = __imx8m_clk_hw_composite("dram_apb", imx8mn_dram_apb_sels, base + 0xa080, CLK_IS_CRITICAL | CLK_GET_RATE_NOCACHE);
++	hws[IMX8MN_CLK_DRAM_ALT] = imx8m_clk_hw_fw_managed_composite("dram_alt", imx8mn_dram_alt_sels, base + 0xa000);
++	hws[IMX8MN_CLK_DRAM_APB] = imx8m_clk_hw_fw_managed_composite_critical("dram_apb", imx8mn_dram_apb_sels, base + 0xa080);
+ 
+ 	hws[IMX8MN_CLK_DISP_PIXEL] = imx8m_clk_hw_composite("disp_pixel", imx8mn_disp_pixel_sels, base + 0xa500);
+ 	hws[IMX8MN_CLK_SAI2] = imx8m_clk_hw_composite("sai2", imx8mn_sai2_sels, base + 0xa600);
+diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c
+index aac6bcc65c20c..f679e5cc320b5 100644
+--- a/drivers/clk/imx/clk-imx8mq.c
++++ b/drivers/clk/imx/clk-imx8mq.c
+@@ -427,11 +427,12 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ 
+ 	/*
+ 	 * DRAM clocks are manipulated from TF-A outside clock framework.
+-	 * Mark with GET_RATE_NOCACHE to always read div value from hardware
++	 * The fw_managed helper sets GET_RATE_NOCACHE and clears SET_PARENT_GATE
++	 * as div value should always be read from hardware
+ 	 */
+ 	hws[IMX8MQ_CLK_DRAM_CORE] = imx_clk_hw_mux2_flags("dram_core_clk", base + 0x9800, 24, 1, imx8mq_dram_core_sels, ARRAY_SIZE(imx8mq_dram_core_sels), CLK_IS_CRITICAL);
+-	hws[IMX8MQ_CLK_DRAM_ALT] = __imx8m_clk_hw_composite("dram_alt", imx8mq_dram_alt_sels, base + 0xa000, CLK_GET_RATE_NOCACHE);
+-	hws[IMX8MQ_CLK_DRAM_APB] = __imx8m_clk_hw_composite("dram_apb", imx8mq_dram_apb_sels, base + 0xa080, CLK_IS_CRITICAL | CLK_GET_RATE_NOCACHE);
++	hws[IMX8MQ_CLK_DRAM_ALT] = imx8m_clk_hw_fw_managed_composite("dram_alt", imx8mq_dram_alt_sels, base + 0xa000);
++	hws[IMX8MQ_CLK_DRAM_APB] = imx8m_clk_hw_fw_managed_composite_critical("dram_apb", imx8mq_dram_apb_sels, base + 0xa080);
+ 
+ 	/* IP */
+ 	hws[IMX8MQ_CLK_VPU_G1] = imx8m_clk_hw_composite("vpu_g1", imx8mq_vpu_g1_sels, base + 0xa100);
+diff --git a/drivers/clk/imx/clk.h b/drivers/clk/imx/clk.h
+index f04cbbab9fccd..c66e00e877114 100644
+--- a/drivers/clk/imx/clk.h
++++ b/drivers/clk/imx/clk.h
+@@ -533,8 +533,9 @@ struct clk_hw *imx_clk_hw_cpu(const char *name, const char *parent_name,
+ 		struct clk *div, struct clk *mux, struct clk *pll,
+ 		struct clk *step);
+ 
+-#define IMX_COMPOSITE_CORE	BIT(0)
+-#define IMX_COMPOSITE_BUS	BIT(1)
++#define IMX_COMPOSITE_CORE		BIT(0)
++#define IMX_COMPOSITE_BUS		BIT(1)
++#define IMX_COMPOSITE_FW_MANAGED	BIT(2)
+ 
+ struct clk_hw *imx8m_clk_hw_composite_flags(const char *name,
+ 					    const char * const *parent_names,
+@@ -570,6 +571,17 @@ struct clk_hw *imx8m_clk_hw_composite_flags(const char *name,
+ 		ARRAY_SIZE(parent_names), reg, 0, \
+ 		flags | CLK_SET_RATE_NO_REPARENT | CLK_OPS_PARENT_ENABLE)
+ 
++#define __imx8m_clk_hw_fw_managed_composite(name, parent_names, reg, flags) \
++	imx8m_clk_hw_composite_flags(name, parent_names, \
++		ARRAY_SIZE(parent_names), reg, IMX_COMPOSITE_FW_MANAGED, \
++		flags | CLK_GET_RATE_NOCACHE | CLK_SET_RATE_NO_REPARENT | CLK_OPS_PARENT_ENABLE)
++
++#define imx8m_clk_hw_fw_managed_composite(name, parent_names, reg) \
++	__imx8m_clk_hw_fw_managed_composite(name, parent_names, reg, 0)
++
++#define imx8m_clk_hw_fw_managed_composite_critical(name, parent_names, reg) \
++	__imx8m_clk_hw_fw_managed_composite(name, parent_names, reg, CLK_IS_CRITICAL)
++
+ #define __imx8m_clk_composite(name, parent_names, reg, flags) \
+ 	to_clk(__imx8m_clk_hw_composite(name, parent_names, reg, flags))
+ 
+diff --git a/drivers/clk/rockchip/clk-pll.c b/drivers/clk/rockchip/clk-pll.c
+index 4c6c9167ef509..bbbf9ce428672 100644
+--- a/drivers/clk/rockchip/clk-pll.c
++++ b/drivers/clk/rockchip/clk-pll.c
+@@ -940,7 +940,7 @@ struct clk *rockchip_clk_register_pll(struct rockchip_clk_provider *ctx,
+ 	switch (pll_type) {
+ 	case pll_rk3036:
+ 	case pll_rk3328:
+-		if (!pll->rate_table || IS_ERR(ctx->grf))
++		if (!pll->rate_table)
+ 			init.ops = &rockchip_rk3036_pll_clk_norate_ops;
+ 		else
+ 			init.ops = &rockchip_rk3036_pll_clk_ops;
+diff --git a/drivers/clk/socfpga/clk-agilex.c b/drivers/clk/socfpga/clk-agilex.c
+index 438075a50b9f2..7182afb4258a7 100644
+--- a/drivers/clk/socfpga/clk-agilex.c
++++ b/drivers/clk/socfpga/clk-agilex.c
+@@ -107,10 +107,10 @@ static const struct clk_parent_data gpio_db_free_mux[] = {
+ };
+ 
+ static const struct clk_parent_data psi_ref_free_mux[] = {
+-	{ .fw_name = "main_pll_c3",
+-	  .name = "main_pll_c3", },
+-	{ .fw_name = "peri_pll_c3",
+-	  .name = "peri_pll_c3", },
++	{ .fw_name = "main_pll_c2",
++	  .name = "main_pll_c2", },
++	{ .fw_name = "peri_pll_c2",
++	  .name = "peri_pll_c2", },
+ 	{ .fw_name = "osc1",
+ 	  .name = "osc1", },
+ 	{ .fw_name = "cb-intosc-hs-div2-clk",
+@@ -193,6 +193,13 @@ static const struct clk_parent_data sdmmc_mux[] = {
+ 	  .name = "boot_clk", },
+ };
+ 
++static const struct clk_parent_data s2f_user0_mux[] = {
++	{ .fw_name = "s2f_user0_free_clk",
++	  .name = "s2f_user0_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
+ static const struct clk_parent_data s2f_user1_mux[] = {
+ 	{ .fw_name = "s2f_user1_free_clk",
+ 	  .name = "s2f_user1_free_clk", },
+@@ -260,7 +267,7 @@ static const struct stratix10_perip_cnt_clock agilex_main_perip_cnt_clks[] = {
+ 	{ AGILEX_SDMMC_FREE_CLK, "sdmmc_free_clk", NULL, sdmmc_free_mux,
+ 	  ARRAY_SIZE(sdmmc_free_mux), 0, 0xE4, 0, 0, 0},
+ 	{ AGILEX_S2F_USER0_FREE_CLK, "s2f_user0_free_clk", NULL, s2f_usr0_free_mux,
+-	  ARRAY_SIZE(s2f_usr0_free_mux), 0, 0xE8, 0, 0, 0},
++	  ARRAY_SIZE(s2f_usr0_free_mux), 0, 0xE8, 0, 0x30, 2},
+ 	{ AGILEX_S2F_USER1_FREE_CLK, "s2f_user1_free_clk", NULL, s2f_usr1_free_mux,
+ 	  ARRAY_SIZE(s2f_usr1_free_mux), 0, 0xEC, 0, 0x88, 5},
+ 	{ AGILEX_PSI_REF_FREE_CLK, "psi_ref_free_clk", NULL, psi_ref_free_mux,
+@@ -306,6 +313,8 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
+ 	  4, 0x98, 0, 16, 0x88, 3, 0},
+ 	{ AGILEX_SDMMC_CLK, "sdmmc_clk", NULL, sdmmc_mux, ARRAY_SIZE(sdmmc_mux), 0, 0x7C,
+ 	  5, 0, 0, 0, 0x88, 4, 4},
++	{ AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_user0_mux, ARRAY_SIZE(s2f_user0_mux), 0, 0x24,
++	  6, 0, 0, 0, 0x30, 2, 0},
+ 	{ AGILEX_S2F_USER1_CLK, "s2f_user1_clk", NULL, s2f_user1_mux, ARRAY_SIZE(s2f_user1_mux), 0, 0x7C,
+ 	  6, 0, 0, 0, 0x88, 5, 0},
+ 	{ AGILEX_PSI_REF_CLK, "psi_ref_clk", NULL, psi_mux, ARRAY_SIZE(psi_mux), 0, 0x7C,
+diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
+index e439b43c19ebe..8977e4de59157 100644
+--- a/drivers/cpufreq/powernv-cpufreq.c
++++ b/drivers/cpufreq/powernv-cpufreq.c
+@@ -36,6 +36,7 @@
+ #define MAX_PSTATE_SHIFT	32
+ #define LPSTATE_SHIFT		48
+ #define GPSTATE_SHIFT		56
++#define MAX_NR_CHIPS		32
+ 
+ #define MAX_RAMP_DOWN_TIME				5120
+ /*
+@@ -1051,12 +1052,20 @@ static int init_chip_info(void)
+ 	unsigned int *chip;
+ 	unsigned int cpu, i;
+ 	unsigned int prev_chip_id = UINT_MAX;
++	cpumask_t *chip_cpu_mask;
+ 	int ret = 0;
+ 
+ 	chip = kcalloc(num_possible_cpus(), sizeof(*chip), GFP_KERNEL);
+ 	if (!chip)
+ 		return -ENOMEM;
+ 
++	/* Allocate a chip cpu mask large enough to fit mask for all chips */
++	chip_cpu_mask = kcalloc(MAX_NR_CHIPS, sizeof(cpumask_t), GFP_KERNEL);
++	if (!chip_cpu_mask) {
++		ret = -ENOMEM;
++		goto free_and_return;
++	}
++
+ 	for_each_possible_cpu(cpu) {
+ 		unsigned int id = cpu_to_chip_id(cpu);
+ 
+@@ -1064,22 +1073,25 @@ static int init_chip_info(void)
+ 			prev_chip_id = id;
+ 			chip[nr_chips++] = id;
+ 		}
++		cpumask_set_cpu(cpu, &chip_cpu_mask[nr_chips-1]);
+ 	}
+ 
+ 	chips = kcalloc(nr_chips, sizeof(struct chip), GFP_KERNEL);
+ 	if (!chips) {
+ 		ret = -ENOMEM;
+-		goto free_and_return;
++		goto out_free_chip_cpu_mask;
+ 	}
+ 
+ 	for (i = 0; i < nr_chips; i++) {
+ 		chips[i].id = chip[i];
+-		cpumask_copy(&chips[i].mask, cpumask_of_node(chip[i]));
++		cpumask_copy(&chips[i].mask, &chip_cpu_mask[i]);
+ 		INIT_WORK(&chips[i].throttle, powernv_cpufreq_work_fn);
+ 		for_each_cpu(cpu, &chips[i].mask)
+ 			per_cpu(chip_info, cpu) =  &chips[i];
+ 	}
+ 
++out_free_chip_cpu_mask:
++	kfree(chip_cpu_mask);
+ free_and_return:
+ 	kfree(chip);
+ 	return ret;
+diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
+index a2b5c6f60cf0e..ff164dec8422e 100644
+--- a/drivers/cpuidle/cpuidle-pseries.c
++++ b/drivers/cpuidle/cpuidle-pseries.c
+@@ -402,7 +402,7 @@ static void __init fixup_cede0_latency(void)
+  * pseries_idle_probe()
+  * Choose state table for shared versus dedicated partition
+  */
+-static int pseries_idle_probe(void)
++static int __init pseries_idle_probe(void)
+ {
+ 
+ 	if (cpuidle_disable != IDLE_NO_OVERRIDE)
+@@ -419,7 +419,21 @@ static int pseries_idle_probe(void)
+ 			cpuidle_state_table = shared_states;
+ 			max_idle_state = ARRAY_SIZE(shared_states);
+ 		} else {
+-			fixup_cede0_latency();
++			/*
++			 * Use firmware provided latency values
++			 * starting with POWER10 platforms. In the
++			 * case that we are running on a POWER10
++			 * platform but in an earlier compat mode, we
++			 * can still use the firmware provided values.
++			 *
++			 * However, on platforms prior to POWER10, we
++			 * cannot rely on the accuracy of the firmware
++			 * provided latency values. On such platforms,
++			 * go with the conservative default estimate
++			 * of 10us.
++			 */
++			if (cpu_has_feature(CPU_FTR_ARCH_31) || pvr_version_is(PVR_POWER10))
++				fixup_cede0_latency();
+ 			cpuidle_state_table = dedicated_states;
+ 			max_idle_state = NR_DEDICATED_STATES;
+ 		}
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index d0018794e92e8..57b57d4db500c 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -278,6 +278,9 @@ static int __sev_platform_shutdown_locked(int *error)
+ 	struct sev_device *sev = psp_master->sev_data;
+ 	int ret;
+ 
++	if (sev->state == SEV_STATE_UNINIT)
++		return 0;
++
+ 	ret = __sev_do_cmd_locked(SEV_CMD_SHUTDOWN, NULL, error);
+ 	if (ret)
+ 		return ret;
+@@ -1018,6 +1021,20 @@ e_err:
+ 	return ret;
+ }
+ 
++static void sev_firmware_shutdown(struct sev_device *sev)
++{
++	sev_platform_shutdown(NULL);
++
++	if (sev_es_tmr) {
++		/* The TMR area was encrypted, flush it from the cache */
++		wbinvd_on_all_cpus();
++
++		free_pages((unsigned long)sev_es_tmr,
++			   get_order(SEV_ES_TMR_SIZE));
++		sev_es_tmr = NULL;
++	}
++}
++
+ void sev_dev_destroy(struct psp_device *psp)
+ {
+ 	struct sev_device *sev = psp->sev_data;
+@@ -1025,6 +1042,8 @@ void sev_dev_destroy(struct psp_device *psp)
+ 	if (!sev)
+ 		return;
+ 
++	sev_firmware_shutdown(sev);
++
+ 	if (sev->misc)
+ 		kref_put(&misc_dev->refcount, sev_exit);
+ 
+@@ -1055,21 +1074,6 @@ void sev_pci_init(void)
+ 	if (sev_get_api_version())
+ 		goto err;
+ 
+-	/*
+-	 * If platform is not in UNINIT state then firmware upgrade and/or
+-	 * platform INIT command will fail. These command require UNINIT state.
+-	 *
+-	 * In a normal boot we should never run into case where the firmware
+-	 * is not in UNINIT state on boot. But in case of kexec boot, a reboot
+-	 * may not go through a typical shutdown sequence and may leave the
+-	 * firmware in INIT or WORKING state.
+-	 */
+-
+-	if (sev->state != SEV_STATE_UNINIT) {
+-		sev_platform_shutdown(NULL);
+-		sev->state = SEV_STATE_UNINIT;
+-	}
+-
+ 	if (sev_version_greater_or_equal(0, 15) &&
+ 	    sev_update_firmware(sev->dev) == 0)
+ 		sev_get_api_version();
+@@ -1114,17 +1118,10 @@ err:
+ 
+ void sev_pci_exit(void)
+ {
+-	if (!psp_master->sev_data)
+-		return;
+-
+-	sev_platform_shutdown(NULL);
++	struct sev_device *sev = psp_master->sev_data;
+ 
+-	if (sev_es_tmr) {
+-		/* The TMR area was encrypted, flush it from the cache */
+-		wbinvd_on_all_cpus();
++	if (!sev)
++		return;
+ 
+-		free_pages((unsigned long)sev_es_tmr,
+-			   get_order(SEV_ES_TMR_SIZE));
+-		sev_es_tmr = NULL;
+-	}
++	sev_firmware_shutdown(sev);
+ }
+diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
+index 7d346d842a39e..c319e7e3917dc 100644
+--- a/drivers/crypto/ccp/sp-pci.c
++++ b/drivers/crypto/ccp/sp-pci.c
+@@ -241,6 +241,17 @@ e_err:
+ 	return ret;
+ }
+ 
++static void sp_pci_shutdown(struct pci_dev *pdev)
++{
++	struct device *dev = &pdev->dev;
++	struct sp_device *sp = dev_get_drvdata(dev);
++
++	if (!sp)
++		return;
++
++	sp_destroy(sp);
++}
++
+ static void sp_pci_remove(struct pci_dev *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -370,6 +381,7 @@ static struct pci_driver sp_pci_driver = {
+ 	.id_table = sp_pci_table,
+ 	.probe = sp_pci_probe,
+ 	.remove = sp_pci_remove,
++	.shutdown = sp_pci_shutdown,
+ 	.driver.pm = &sp_pci_pm_ops,
+ };
+ 
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index 7daed8b78ac83..5edc91cdb4e65 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -299,21 +299,20 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
+ 
+ 	struct scatterlist *dst = req->dst;
+ 	struct scatterlist *src = req->src;
+-	const int nents = sg_nents(req->src);
++	int dst_nents = sg_nents(dst);
+ 
+ 	const int out_off = DCP_BUF_SZ;
+ 	uint8_t *in_buf = sdcp->coh->aes_in_buf;
+ 	uint8_t *out_buf = sdcp->coh->aes_out_buf;
+ 
+-	uint8_t *out_tmp, *src_buf, *dst_buf = NULL;
+ 	uint32_t dst_off = 0;
++	uint8_t *src_buf = NULL;
+ 	uint32_t last_out_len = 0;
+ 
+ 	uint8_t *key = sdcp->coh->aes_key;
+ 
+ 	int ret = 0;
+-	int split = 0;
+-	unsigned int i, len, clen, rem = 0, tlen = 0;
++	unsigned int i, len, clen, tlen = 0;
+ 	int init = 0;
+ 	bool limit_hit = false;
+ 
+@@ -331,7 +330,7 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
+ 		memset(key + AES_KEYSIZE_128, 0, AES_KEYSIZE_128);
+ 	}
+ 
+-	for_each_sg(req->src, src, nents, i) {
++	for_each_sg(req->src, src, sg_nents(src), i) {
+ 		src_buf = sg_virt(src);
+ 		len = sg_dma_len(src);
+ 		tlen += len;
+@@ -356,34 +355,17 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
+ 			 * submit the buffer.
+ 			 */
+ 			if (actx->fill == out_off || sg_is_last(src) ||
+-				limit_hit) {
++			    limit_hit) {
+ 				ret = mxs_dcp_run_aes(actx, req, init);
+ 				if (ret)
+ 					return ret;
+ 				init = 0;
+ 
+-				out_tmp = out_buf;
++				sg_pcopy_from_buffer(dst, dst_nents, out_buf,
++						     actx->fill, dst_off);
++				dst_off += actx->fill;
+ 				last_out_len = actx->fill;
+-				while (dst && actx->fill) {
+-					if (!split) {
+-						dst_buf = sg_virt(dst);
+-						dst_off = 0;
+-					}
+-					rem = min(sg_dma_len(dst) - dst_off,
+-						  actx->fill);
+-
+-					memcpy(dst_buf + dst_off, out_tmp, rem);
+-					out_tmp += rem;
+-					dst_off += rem;
+-					actx->fill -= rem;
+-
+-					if (dst_off == sg_dma_len(dst)) {
+-						dst = sg_next(dst);
+-						split = 0;
+-					} else {
+-						split = 1;
+-					}
+-				}
++				actx->fill = 0;
+ 			}
+ 		} while (len);
+ 
+diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
+index 16b908c77db30..306f93e4b26a8 100644
+--- a/drivers/dma/imx-sdma.c
++++ b/drivers/dma/imx-sdma.c
+@@ -379,7 +379,6 @@ struct sdma_channel {
+ 	unsigned long			watermark_level;
+ 	u32				shp_addr, per_addr;
+ 	enum dma_status			status;
+-	bool				context_loaded;
+ 	struct imx_dma_data		data;
+ 	struct work_struct		terminate_worker;
+ };
+@@ -985,9 +984,6 @@ static int sdma_load_context(struct sdma_channel *sdmac)
+ 	int ret;
+ 	unsigned long flags;
+ 
+-	if (sdmac->context_loaded)
+-		return 0;
+-
+ 	if (sdmac->direction == DMA_DEV_TO_MEM)
+ 		load_address = sdmac->pc_from_device;
+ 	else if (sdmac->direction == DMA_DEV_TO_DEV)
+@@ -1030,8 +1026,6 @@ static int sdma_load_context(struct sdma_channel *sdmac)
+ 
+ 	spin_unlock_irqrestore(&sdma->channel_0_lock, flags);
+ 
+-	sdmac->context_loaded = true;
+-
+ 	return ret;
+ }
+ 
+@@ -1070,7 +1064,6 @@ static void sdma_channel_terminate_work(struct work_struct *work)
+ 	vchan_get_all_descriptors(&sdmac->vc, &head);
+ 	spin_unlock_irqrestore(&sdmac->vc.lock, flags);
+ 	vchan_dma_desc_free_list(&sdmac->vc, &head);
+-	sdmac->context_loaded = false;
+ }
+ 
+ static int sdma_terminate_all(struct dma_chan *chan)
+@@ -1145,7 +1138,6 @@ static void sdma_set_watermarklevel_for_p2p(struct sdma_channel *sdmac)
+ static int sdma_config_channel(struct dma_chan *chan)
+ {
+ 	struct sdma_channel *sdmac = to_sdma_chan(chan);
+-	int ret;
+ 
+ 	sdma_disable_channel(chan);
+ 
+@@ -1185,9 +1177,7 @@ static int sdma_config_channel(struct dma_chan *chan)
+ 		sdmac->watermark_level = 0; /* FIXME: M3_BASE_ADDRESS */
+ 	}
+ 
+-	ret = sdma_load_context(sdmac);
+-
+-	return ret;
++	return 0;
+ }
+ 
+ static int sdma_set_channel_priority(struct sdma_channel *sdmac,
+@@ -1338,7 +1328,6 @@ static void sdma_free_chan_resources(struct dma_chan *chan)
+ 
+ 	sdmac->event_id0 = 0;
+ 	sdmac->event_id1 = 0;
+-	sdmac->context_loaded = false;
+ 
+ 	sdma_set_channel_priority(sdmac, 0);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c
+index 47cad23a6b9e2..b91d3d29b4102 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c
+@@ -339,7 +339,7 @@ static void amdgpu_i2c_put_byte(struct amdgpu_i2c_chan *i2c_bus,
+ void
+ amdgpu_i2c_router_select_ddc_port(const struct amdgpu_connector *amdgpu_connector)
+ {
+-	u8 val;
++	u8 val = 0;
+ 
+ 	if (!amdgpu_connector->router.ddc_valid)
+ 		return;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index ac043baac05d6..ad9863b84f1fc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -207,7 +207,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
+ 		c++;
+ 	}
+ 
+-	BUG_ON(c >= AMDGPU_BO_MAX_PLACEMENTS);
++	BUG_ON(c > AMDGPU_BO_MAX_PLACEMENTS);
+ 
+ 	placement->num_placement = c;
+ 	placement->placement = places;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+index 0e64c39a23722..7c3efc5f1be07 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+@@ -305,7 +305,7 @@ int amdgpu_ras_eeprom_init(struct amdgpu_ras_eeprom_control *control,
+ 		return ret;
+ 	}
+ 
+-	__decode_table_header_from_buff(hdr, &buff[2]);
++	__decode_table_header_from_buff(hdr, buff);
+ 
+ 	if (hdr->header == EEPROM_TABLE_HDR_VAL) {
+ 		control->num_recs = (hdr->tbl_size - EEPROM_TABLE_HEADER_SIZE) /
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+index aa8ae0ca62f91..e8737fa438f06 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+@@ -120,7 +120,7 @@ static int vcn_v1_0_sw_init(void *handle)
+ 		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].fw = adev->vcn.fw;
+ 		adev->firmware.fw_size +=
+ 			ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+-		DRM_INFO("PSP loading VCN firmware\n");
++		dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
+ 	}
+ 
+ 	r = amdgpu_vcn_resume(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+index fc939d4f4841e..f493b5c3d382b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+@@ -122,7 +122,7 @@ static int vcn_v2_0_sw_init(void *handle)
+ 		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].fw = adev->vcn.fw;
+ 		adev->firmware.fw_size +=
+ 			ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+-		DRM_INFO("PSP loading VCN firmware\n");
++		dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
+ 	}
+ 
+ 	r = amdgpu_vcn_resume(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+index 2c328362eee3c..ce64d4016f903 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+@@ -152,7 +152,7 @@ static int vcn_v2_5_sw_init(void *handle)
+ 			adev->firmware.fw_size +=
+ 				ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+ 		}
+-		DRM_INFO("PSP loading VCN firmware\n");
++		dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
+ 	}
+ 
+ 	r = amdgpu_vcn_resume(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index c9c888be12285..2099f6ebd8338 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -148,7 +148,7 @@ static int vcn_v3_0_sw_init(void *handle)
+ 			adev->firmware.fw_size +=
+ 				ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+ 		}
+-		DRM_INFO("PSP loading VCN firmware\n");
++		dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
+ 	}
+ 
+ 	r = amdgpu_vcn_resume(adev);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+index 88813dad731fa..c021519af8106 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+@@ -98,36 +98,78 @@ void mqd_symmetrically_map_cu_mask(struct mqd_manager *mm,
+ 		uint32_t *se_mask)
+ {
+ 	struct kfd_cu_info cu_info;
+-	uint32_t cu_per_se[KFD_MAX_NUM_SE] = {0};
+-	int i, se, sh, cu = 0;
+-
++	uint32_t cu_per_sh[KFD_MAX_NUM_SE][KFD_MAX_NUM_SH_PER_SE] = {0};
++	int i, se, sh, cu;
+ 	amdgpu_amdkfd_get_cu_info(mm->dev->kgd, &cu_info);
+ 
+ 	if (cu_mask_count > cu_info.cu_active_number)
+ 		cu_mask_count = cu_info.cu_active_number;
+ 
++	/* Exceeding these bounds corrupts the stack and indicates a coding error.
++	 * Returning with no CU's enabled will hang the queue, which should be
++	 * attention grabbing.
++	 */
++	if (cu_info.num_shader_engines > KFD_MAX_NUM_SE) {
++		pr_err("Exceeded KFD_MAX_NUM_SE, chip reports %d\n", cu_info.num_shader_engines);
++		return;
++	}
++	if (cu_info.num_shader_arrays_per_engine > KFD_MAX_NUM_SH_PER_SE) {
++		pr_err("Exceeded KFD_MAX_NUM_SH, chip reports %d\n",
++			cu_info.num_shader_arrays_per_engine * cu_info.num_shader_engines);
++		return;
++	}
++	/* Count active CUs per SH.
++	 *
++	 * Some CUs in an SH may be disabled.	HW expects disabled CUs to be
++	 * represented in the high bits of each SH's enable mask (the upper and lower
++	 * 16 bits of se_mask) and will take care of the actual distribution of
++	 * disabled CUs within each SH automatically.
++	 * Each half of se_mask must be filled only on bits 0-cu_per_sh[se][sh]-1.
++	 *
++	 * See note on Arcturus cu_bitmap layout in gfx_v9_0_get_cu_info.
++	 */
+ 	for (se = 0; se < cu_info.num_shader_engines; se++)
+ 		for (sh = 0; sh < cu_info.num_shader_arrays_per_engine; sh++)
+-			cu_per_se[se] += hweight32(cu_info.cu_bitmap[se % 4][sh + (se / 4)]);
+-
+-	/* Symmetrically map cu_mask to all SEs:
+-	 * cu_mask[0] bit0 -> se_mask[0] bit0;
+-	 * cu_mask[0] bit1 -> se_mask[1] bit0;
+-	 * ... (if # SE is 4)
+-	 * cu_mask[0] bit4 -> se_mask[0] bit1;
++			cu_per_sh[se][sh] = hweight32(cu_info.cu_bitmap[se % 4][sh + (se / 4)]);
++
++	/* Symmetrically map cu_mask to all SEs & SHs:
++	 * se_mask programs up to 2 SH in the upper and lower 16 bits.
++	 *
++	 * Examples
++	 * Assuming 1 SH/SE, 4 SEs:
++	 * cu_mask[0] bit0 -> se_mask[0] bit0
++	 * cu_mask[0] bit1 -> se_mask[1] bit0
++	 * ...
++	 * cu_mask[0] bit4 -> se_mask[0] bit1
++	 * ...
++	 *
++	 * Assuming 2 SH/SE, 4 SEs
++	 * cu_mask[0] bit0 -> se_mask[0] bit0 (SE0,SH0,CU0)
++	 * cu_mask[0] bit1 -> se_mask[1] bit0 (SE1,SH0,CU0)
++	 * ...
++	 * cu_mask[0] bit4 -> se_mask[0] bit16 (SE0,SH1,CU0)
++	 * cu_mask[0] bit5 -> se_mask[1] bit16 (SE1,SH1,CU0)
++	 * ...
++	 * cu_mask[0] bit8 -> se_mask[0] bit1 (SE0,SH0,CU1)
+ 	 * ...
++	 *
++	 * First ensure all CUs are disabled, then enable user specified CUs.
+ 	 */
+-	se = 0;
+-	for (i = 0; i < cu_mask_count; i++) {
+-		if (cu_mask[i / 32] & (1 << (i % 32)))
+-			se_mask[se] |= 1 << cu;
+-
+-		do {
+-			se++;
+-			if (se == cu_info.num_shader_engines) {
+-				se = 0;
+-				cu++;
++	for (i = 0; i < cu_info.num_shader_engines; i++)
++		se_mask[i] = 0;
++
++	i = 0;
++	for (cu = 0; cu < 16; cu++) {
++		for (sh = 0; sh < cu_info.num_shader_arrays_per_engine; sh++) {
++			for (se = 0; se < cu_info.num_shader_engines; se++) {
++				if (cu_per_sh[se][sh] > cu) {
++					if (cu_mask[i / 32] & (1 << (i % 32)))
++						se_mask[se] |= 1 << (cu + sh * 16);
++					i++;
++					if (i == cu_mask_count)
++						return;
++				}
+ 			}
+-		} while (cu >= cu_per_se[se] && cu < 32);
++		}
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h
+index fbdb16418847c..4edc012e31387 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h
+@@ -27,6 +27,7 @@
+ #include "kfd_priv.h"
+ 
+ #define KFD_MAX_NUM_SE 8
++#define KFD_MAX_NUM_SH_PER_SE 2
+ 
+ /**
+  * struct mqd_manager
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index e02a55fc1382f..fbb65c95464b3 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -197,29 +197,29 @@ static ssize_t dp_link_settings_read(struct file *f, char __user *buf,
+ 
+ 	rd_buf_ptr = rd_buf;
+ 
+-	str_len = strlen("Current:  %d  %d  %d  ");
+-	snprintf(rd_buf_ptr, str_len, "Current:  %d  %d  %d  ",
++	str_len = strlen("Current:  %d  0x%x  %d  ");
++	snprintf(rd_buf_ptr, str_len, "Current:  %d  0x%x  %d  ",
+ 			link->cur_link_settings.lane_count,
+ 			link->cur_link_settings.link_rate,
+ 			link->cur_link_settings.link_spread);
+ 	rd_buf_ptr += str_len;
+ 
+-	str_len = strlen("Verified:  %d  %d  %d  ");
+-	snprintf(rd_buf_ptr, str_len, "Verified:  %d  %d  %d  ",
++	str_len = strlen("Verified:  %d  0x%x  %d  ");
++	snprintf(rd_buf_ptr, str_len, "Verified:  %d  0x%x  %d  ",
+ 			link->verified_link_cap.lane_count,
+ 			link->verified_link_cap.link_rate,
+ 			link->verified_link_cap.link_spread);
+ 	rd_buf_ptr += str_len;
+ 
+-	str_len = strlen("Reported:  %d  %d  %d  ");
+-	snprintf(rd_buf_ptr, str_len, "Reported:  %d  %d  %d  ",
++	str_len = strlen("Reported:  %d  0x%x  %d  ");
++	snprintf(rd_buf_ptr, str_len, "Reported:  %d  0x%x  %d  ",
+ 			link->reported_link_cap.lane_count,
+ 			link->reported_link_cap.link_rate,
+ 			link->reported_link_cap.link_spread);
+ 	rd_buf_ptr += str_len;
+ 
+-	str_len = strlen("Preferred:  %d  %d  %d  ");
+-	snprintf(rd_buf_ptr, str_len, "Preferred:  %d  %d  %d\n",
++	str_len = strlen("Preferred:  %d  0x%x  %d  ");
++	snprintf(rd_buf_ptr, str_len, "Preferred:  %d  0x%x  %d\n",
+ 			link->preferred_link_setting.lane_count,
+ 			link->preferred_link_setting.link_rate,
+ 			link->preferred_link_setting.link_spread);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 0d1e7b56fb395..532f6a1145b55 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -3740,13 +3740,12 @@ enum dc_status dcn10_set_clock(struct dc *dc,
+ 	struct dc_clock_config clock_cfg = {0};
+ 	struct dc_clocks *current_clocks = &context->bw_ctx.bw.dcn.clk;
+ 
+-	if (dc->clk_mgr && dc->clk_mgr->funcs->get_clock)
+-				dc->clk_mgr->funcs->get_clock(dc->clk_mgr,
+-						context, clock_type, &clock_cfg);
+-
+-	if (!dc->clk_mgr->funcs->get_clock)
++	if (!dc->clk_mgr || !dc->clk_mgr->funcs->get_clock)
+ 		return DC_FAIL_UNSUPPORTED_1;
+ 
++	dc->clk_mgr->funcs->get_clock(dc->clk_mgr,
++		context, clock_type, &clock_cfg);
++
+ 	if (clk_khz > clock_cfg.max_clock_khz)
+ 		return DC_FAIL_CLK_EXCEED_MAX;
+ 
+@@ -3764,7 +3763,7 @@ enum dc_status dcn10_set_clock(struct dc *dc,
+ 	else
+ 		return DC_ERROR_UNEXPECTED;
+ 
+-	if (dc->clk_mgr && dc->clk_mgr->funcs->update_clocks)
++	if (dc->clk_mgr->funcs->update_clocks)
+ 				dc->clk_mgr->funcs->update_clocks(dc->clk_mgr,
+ 				context, true);
+ 	return DC_OK;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index 9d3ccdd355825..79a2b9c785f05 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1704,13 +1704,15 @@ void dcn20_program_front_end_for_ctx(
+ 				dcn20_program_pipe(dc, pipe, context);
+ 				pipe = pipe->bottom_pipe;
+ 			}
+-			/* Program secondary blending tree and writeback pipes */
+-			pipe = &context->res_ctx.pipe_ctx[i];
+-			if (!pipe->prev_odm_pipe && pipe->stream->num_wb_info > 0
+-					&& (pipe->update_flags.raw || pipe->plane_state->update_flags.raw || pipe->stream->update_flags.raw)
+-					&& hws->funcs.program_all_writeback_pipes_in_tree)
+-				hws->funcs.program_all_writeback_pipes_in_tree(dc, pipe->stream, context);
+ 		}
++		/* Program secondary blending tree and writeback pipes */
++		pipe = &context->res_ctx.pipe_ctx[i];
++		if (!pipe->top_pipe && !pipe->prev_odm_pipe
++				&& pipe->stream && pipe->stream->num_wb_info > 0
++				&& (pipe->update_flags.raw || (pipe->plane_state && pipe->plane_state->update_flags.raw)
++					|| pipe->stream->update_flags.raw)
++				&& hws->funcs.program_all_writeback_pipes_in_tree)
++			hws->funcs.program_all_writeback_pipes_in_tree(dc, pipe->stream, context);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index cfe85ba1018e8..5dbc290bcbe86 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -2455,7 +2455,7 @@ void dcn20_set_mcif_arb_params(
+ 				wb_arb_params->cli_watermark[k] = get_wm_writeback_urgent(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+ 				wb_arb_params->pstate_watermark[k] = get_wm_writeback_dram_clock_change(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+ 			}
+-			wb_arb_params->time_per_pixel = 16.0 / context->res_ctx.pipe_ctx[i].stream->phy_pix_clk; /* 4 bit fraction, ms */
++			wb_arb_params->time_per_pixel = 16.0 * 1000 / (context->res_ctx.pipe_ctx[i].stream->phy_pix_clk / 1000); /* 4 bit fraction, ms */
+ 			wb_arb_params->slice_lines = 32;
+ 			wb_arb_params->arbitration_slice = 2;
+ 			wb_arb_params->max_scaled_time = dcn20_calc_max_scaled_time(wb_arb_params->time_per_pixel,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dwb_cm.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dwb_cm.c
+index 8593145379d99..6d621f07be489 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dwb_cm.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dwb_cm.c
+@@ -49,6 +49,11 @@
+ static void dwb3_get_reg_field_ogam(struct dcn30_dwbc *dwbc30,
+ 	struct dcn3_xfer_func_reg *reg)
+ {
++	reg->shifts.field_region_start_base = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION_START_BASE_B;
++	reg->masks.field_region_start_base = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION_START_BASE_B;
++	reg->shifts.field_offset = dwbc30->dwbc_shift->DWB_OGAM_RAMA_OFFSET_B;
++	reg->masks.field_offset = dwbc30->dwbc_mask->DWB_OGAM_RAMA_OFFSET_B;
++
+ 	reg->shifts.exp_region0_lut_offset = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION0_LUT_OFFSET;
+ 	reg->masks.exp_region0_lut_offset = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION0_LUT_OFFSET;
+ 	reg->shifts.exp_region0_num_segments = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION0_NUM_SEGMENTS;
+@@ -66,8 +71,6 @@ static void dwb3_get_reg_field_ogam(struct dcn30_dwbc *dwbc30,
+ 	reg->masks.field_region_end_base = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION_END_BASE_B;
+ 	reg->shifts.field_region_linear_slope = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION_START_SLOPE_B;
+ 	reg->masks.field_region_linear_slope = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION_START_SLOPE_B;
+-	reg->masks.field_offset = dwbc30->dwbc_mask->DWB_OGAM_RAMA_OFFSET_B;
+-	reg->shifts.field_offset = dwbc30->dwbc_shift->DWB_OGAM_RAMA_OFFSET_B;
+ 	reg->shifts.exp_region_start = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION_START_B;
+ 	reg->masks.exp_region_start = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION_START_B;
+ 	reg->shifts.exp_resion_start_segment = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION_START_SEGMENT_B;
+@@ -147,18 +150,19 @@ static enum dc_lut_mode dwb3_get_ogam_current(
+ 	uint32_t state_mode;
+ 	uint32_t ram_select;
+ 
+-	REG_GET(DWB_OGAM_CONTROL,
+-		DWB_OGAM_MODE, &state_mode);
+-	REG_GET(DWB_OGAM_CONTROL,
+-		DWB_OGAM_SELECT, &ram_select);
++	REG_GET_2(DWB_OGAM_CONTROL,
++		DWB_OGAM_MODE_CURRENT, &state_mode,
++		DWB_OGAM_SELECT_CURRENT, &ram_select);
+ 
+ 	if (state_mode == 0) {
+ 		mode = LUT_BYPASS;
+ 	} else if (state_mode == 2) {
+ 		if (ram_select == 0)
+ 			mode = LUT_RAM_A;
+-		else
++		else if (ram_select == 1)
+ 			mode = LUT_RAM_B;
++		else
++			mode = LUT_BYPASS;
+ 	} else {
+ 		// Reserved value
+ 		mode = LUT_BYPASS;
+@@ -172,10 +176,10 @@ static void dwb3_configure_ogam_lut(
+ 	struct dcn30_dwbc *dwbc30,
+ 	bool is_ram_a)
+ {
+-	REG_UPDATE(DWB_OGAM_LUT_CONTROL,
+-		DWB_OGAM_LUT_READ_COLOR_SEL, 7);
+-	REG_UPDATE(DWB_OGAM_CONTROL,
+-		DWB_OGAM_SELECT, is_ram_a == true ? 0 : 1);
++	REG_UPDATE_2(DWB_OGAM_LUT_CONTROL,
++		DWB_OGAM_LUT_WRITE_COLOR_MASK, 7,
++		DWB_OGAM_LUT_HOST_SEL, (is_ram_a == true) ? 0 : 1);
++
+ 	REG_SET(DWB_OGAM_LUT_INDEX, 0, DWB_OGAM_LUT_INDEX, 0);
+ }
+ 
+@@ -185,17 +189,45 @@ static void dwb3_program_ogam_pwl(struct dcn30_dwbc *dwbc30,
+ {
+ 	uint32_t i;
+ 
+-    // triple base implementation
+-	for (i = 0; i < num/2; i++) {
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+0].red_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+0].green_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+0].blue_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+1].red_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+1].green_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+1].blue_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+2].red_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+2].green_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+2].blue_reg);
++	uint32_t last_base_value_red = rgb[num-1].red_reg + rgb[num-1].delta_red_reg;
++	uint32_t last_base_value_green = rgb[num-1].green_reg + rgb[num-1].delta_green_reg;
++	uint32_t last_base_value_blue = rgb[num-1].blue_reg + rgb[num-1].delta_blue_reg;
++
++	if (is_rgb_equal(rgb,  num)) {
++		for (i = 0 ; i < num; i++)
++			REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[i].red_reg);
++
++		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, last_base_value_red);
++
++	} else {
++
++		REG_UPDATE(DWB_OGAM_LUT_CONTROL,
++				DWB_OGAM_LUT_WRITE_COLOR_MASK, 4);
++
++		for (i = 0 ; i < num; i++)
++			REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[i].red_reg);
++
++		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, last_base_value_red);
++
++		REG_SET(DWB_OGAM_LUT_INDEX, 0, DWB_OGAM_LUT_INDEX, 0);
++
++		REG_UPDATE(DWB_OGAM_LUT_CONTROL,
++				DWB_OGAM_LUT_WRITE_COLOR_MASK, 2);
++
++		for (i = 0 ; i < num; i++)
++			REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[i].green_reg);
++
++		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, last_base_value_green);
++
++		REG_SET(DWB_OGAM_LUT_INDEX, 0, DWB_OGAM_LUT_INDEX, 0);
++
++		REG_UPDATE(DWB_OGAM_LUT_CONTROL,
++				DWB_OGAM_LUT_WRITE_COLOR_MASK, 1);
++
++		for (i = 0 ; i < num; i++)
++			REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[i].blue_reg);
++
++		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, last_base_value_blue);
+ 	}
+ }
+ 
+@@ -211,6 +243,8 @@ static bool dwb3_program_ogam_lut(
+ 		return false;
+ 	}
+ 
++	REG_SET(DWB_OGAM_CONTROL, 0, DWB_OGAM_MODE, 2);
++
+ 	current_mode = dwb3_get_ogam_current(dwbc30);
+ 	if (current_mode == LUT_BYPASS || current_mode == LUT_RAM_A)
+ 		next_mode = LUT_RAM_B;
+@@ -227,8 +261,7 @@ static bool dwb3_program_ogam_lut(
+ 	dwb3_program_ogam_pwl(
+ 		dwbc30, params->rgb_resulted, params->hw_points_num);
+ 
+-	REG_SET(DWB_OGAM_CONTROL, 0, DWB_OGAM_MODE, 2);
+-	REG_SET(DWB_OGAM_CONTROL, 0, DWB_OGAM_SELECT, next_mode == LUT_RAM_A ? 0 : 1);
++	REG_UPDATE(DWB_OGAM_CONTROL, DWB_OGAM_SELECT, next_mode == LUT_RAM_A ? 0 : 1);
+ 
+ 	return true;
+ }
+@@ -271,14 +304,19 @@ static void dwb3_program_gamut_remap(
+ 
+ 	struct color_matrices_reg gam_regs;
+ 
+-	REG_UPDATE(DWB_GAMUT_REMAP_COEF_FORMAT, DWB_GAMUT_REMAP_COEF_FORMAT, coef_format);
+-
+ 	if (regval == NULL || select == CM_GAMUT_REMAP_MODE_BYPASS) {
+ 		REG_SET(DWB_GAMUT_REMAP_MODE, 0,
+ 				DWB_GAMUT_REMAP_MODE, 0);
+ 		return;
+ 	}
+ 
++	REG_UPDATE(DWB_GAMUT_REMAP_COEF_FORMAT, DWB_GAMUT_REMAP_COEF_FORMAT, coef_format);
++
++	gam_regs.shifts.csc_c11 = dwbc30->dwbc_shift->DWB_GAMUT_REMAPA_C11;
++	gam_regs.masks.csc_c11  = dwbc30->dwbc_mask->DWB_GAMUT_REMAPA_C11;
++	gam_regs.shifts.csc_c12 = dwbc30->dwbc_shift->DWB_GAMUT_REMAPA_C12;
++	gam_regs.masks.csc_c12 = dwbc30->dwbc_mask->DWB_GAMUT_REMAPA_C12;
++
+ 	switch (select) {
+ 	case CM_GAMUT_REMAP_MODE_RAMA_COEFF:
+ 		gam_regs.csc_c11_c12 = REG(DWB_GAMUT_REMAPA_C11_C12);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+index 97909d5aab344..22c77e96f6a54 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+@@ -396,12 +396,22 @@ void dcn30_program_all_writeback_pipes_in_tree(
+ 			for (i_pipe = 0; i_pipe < dc->res_pool->pipe_count; i_pipe++) {
+ 				struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i_pipe];
+ 
++				if (!pipe_ctx->plane_state)
++					continue;
++
+ 				if (pipe_ctx->plane_state == wb_info.writeback_source_plane) {
+ 					wb_info.mpcc_inst = pipe_ctx->plane_res.mpcc_inst;
+ 					break;
+ 				}
+ 			}
+-			ASSERT(wb_info.mpcc_inst != -1);
++
++			if (wb_info.mpcc_inst == -1) {
++				/* Disable writeback pipe and disconnect from MPCC
++				 * if source plane has been removed
++				 */
++				dc->hwss.disable_writeback(dc, wb_info.dwb_pipe_inst);
++				continue;
++			}
+ 
+ 			ASSERT(wb_info.dwb_pipe_inst < dc->res_pool->res_cap->num_dwb);
+ 			dwb = dc->res_pool->dwbc[wb_info.dwb_pipe_inst];
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+index e5f4f93317cf3..32993ce24a585 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+@@ -2455,16 +2455,37 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
+ 	dc->dml.soc.dispclk_dppclk_vco_speed_mhz = dc->clk_mgr->dentist_vco_freq_khz / 1000.0;
+ 
+ 	if (bw_params->clk_table.entries[0].memclk_mhz) {
++		int max_dcfclk_mhz = 0, max_dispclk_mhz = 0, max_dppclk_mhz = 0, max_phyclk_mhz = 0;
++
++		for (i = 0; i < MAX_NUM_DPM_LVL; i++) {
++			if (bw_params->clk_table.entries[i].dcfclk_mhz > max_dcfclk_mhz)
++				max_dcfclk_mhz = bw_params->clk_table.entries[i].dcfclk_mhz;
++			if (bw_params->clk_table.entries[i].dispclk_mhz > max_dispclk_mhz)
++				max_dispclk_mhz = bw_params->clk_table.entries[i].dispclk_mhz;
++			if (bw_params->clk_table.entries[i].dppclk_mhz > max_dppclk_mhz)
++				max_dppclk_mhz = bw_params->clk_table.entries[i].dppclk_mhz;
++			if (bw_params->clk_table.entries[i].phyclk_mhz > max_phyclk_mhz)
++				max_phyclk_mhz = bw_params->clk_table.entries[i].phyclk_mhz;
++		}
++
++		if (!max_dcfclk_mhz)
++			max_dcfclk_mhz = dcn3_0_soc.clock_limits[0].dcfclk_mhz;
++		if (!max_dispclk_mhz)
++			max_dispclk_mhz = dcn3_0_soc.clock_limits[0].dispclk_mhz;
++		if (!max_dppclk_mhz)
++			max_dppclk_mhz = dcn3_0_soc.clock_limits[0].dppclk_mhz;
++		if (!max_phyclk_mhz)
++			max_phyclk_mhz = dcn3_0_soc.clock_limits[0].phyclk_mhz;
+ 
+-		if (bw_params->clk_table.entries[1].dcfclk_mhz > dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
++		if (max_dcfclk_mhz > dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
+ 			// If max DCFCLK is greater than the max DCFCLK STA target, insert into the DCFCLK STA target array
+-			dcfclk_sta_targets[num_dcfclk_sta_targets] = bw_params->clk_table.entries[1].dcfclk_mhz;
++			dcfclk_sta_targets[num_dcfclk_sta_targets] = max_dcfclk_mhz;
+ 			num_dcfclk_sta_targets++;
+-		} else if (bw_params->clk_table.entries[1].dcfclk_mhz < dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
++		} else if (max_dcfclk_mhz < dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
+ 			// If max DCFCLK is less than the max DCFCLK STA target, cap values and remove duplicates
+ 			for (i = 0; i < num_dcfclk_sta_targets; i++) {
+-				if (dcfclk_sta_targets[i] > bw_params->clk_table.entries[1].dcfclk_mhz) {
+-					dcfclk_sta_targets[i] = bw_params->clk_table.entries[1].dcfclk_mhz;
++				if (dcfclk_sta_targets[i] > max_dcfclk_mhz) {
++					dcfclk_sta_targets[i] = max_dcfclk_mhz;
+ 					break;
+ 				}
+ 			}
+@@ -2502,7 +2523,7 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
+ 				dcfclk_mhz[num_states] = dcfclk_sta_targets[i];
+ 				dram_speed_mts[num_states++] = optimal_uclk_for_dcfclk_sta_targets[i++];
+ 			} else {
+-				if (j < num_uclk_states && optimal_dcfclk_for_uclk[j] <= bw_params->clk_table.entries[1].dcfclk_mhz) {
++				if (j < num_uclk_states && optimal_dcfclk_for_uclk[j] <= max_dcfclk_mhz) {
+ 					dcfclk_mhz[num_states] = optimal_dcfclk_for_uclk[j];
+ 					dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+ 				} else {
+@@ -2517,11 +2538,12 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
+ 		}
+ 
+ 		while (j < num_uclk_states && num_states < DC__VOLTAGE_STATES &&
+-				optimal_dcfclk_for_uclk[j] <= bw_params->clk_table.entries[1].dcfclk_mhz) {
++				optimal_dcfclk_for_uclk[j] <= max_dcfclk_mhz) {
+ 			dcfclk_mhz[num_states] = optimal_dcfclk_for_uclk[j];
+ 			dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+ 		}
+ 
++		dcn3_0_soc.num_states = num_states;
+ 		for (i = 0; i < dcn3_0_soc.num_states; i++) {
+ 			dcn3_0_soc.clock_limits[i].state = i;
+ 			dcn3_0_soc.clock_limits[i].dcfclk_mhz = dcfclk_mhz[i];
+@@ -2529,9 +2551,9 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
+ 			dcn3_0_soc.clock_limits[i].dram_speed_mts = dram_speed_mts[i];
+ 
+ 			/* Fill all states with max values of all other clocks */
+-			dcn3_0_soc.clock_limits[i].dispclk_mhz = bw_params->clk_table.entries[1].dispclk_mhz;
+-			dcn3_0_soc.clock_limits[i].dppclk_mhz  = bw_params->clk_table.entries[1].dppclk_mhz;
+-			dcn3_0_soc.clock_limits[i].phyclk_mhz  = bw_params->clk_table.entries[1].phyclk_mhz;
++			dcn3_0_soc.clock_limits[i].dispclk_mhz = max_dispclk_mhz;
++			dcn3_0_soc.clock_limits[i].dppclk_mhz  = max_dppclk_mhz;
++			dcn3_0_soc.clock_limits[i].phyclk_mhz  = max_phyclk_mhz;
+ 			dcn3_0_soc.clock_limits[i].dtbclk_mhz = dcn3_0_soc.clock_limits[0].dtbclk_mhz;
+ 			/* These clocks cannot come from bw_params, always fill from dcn3_0_soc[1] */
+ 			/* FCLK, PHYCLK_D18, SOCCLK, DSCCLK */
+diff --git a/drivers/gpu/drm/bridge/nwl-dsi.c b/drivers/gpu/drm/bridge/nwl-dsi.c
+index c65ca860712d2..6cac2e58cd15f 100644
+--- a/drivers/gpu/drm/bridge/nwl-dsi.c
++++ b/drivers/gpu/drm/bridge/nwl-dsi.c
+@@ -196,7 +196,7 @@ static u32 ps2bc(struct nwl_dsi *dsi, unsigned long long ps)
+ 	u32 bpp = mipi_dsi_pixel_format_to_bpp(dsi->format);
+ 
+ 	return DIV64_U64_ROUND_UP(ps * dsi->mode.clock * bpp,
+-				  dsi->lanes * 8 * NSEC_PER_SEC);
++				  dsi->lanes * 8ULL * NSEC_PER_SEC);
+ }
+ 
+ /*
+diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
+index 232abbba36868..c7adbeaf10b1b 100644
+--- a/drivers/gpu/drm/drm_auth.c
++++ b/drivers/gpu/drm/drm_auth.c
+@@ -135,16 +135,18 @@ static void drm_set_master(struct drm_device *dev, struct drm_file *fpriv,
+ static int drm_new_set_master(struct drm_device *dev, struct drm_file *fpriv)
+ {
+ 	struct drm_master *old_master;
++	struct drm_master *new_master;
+ 
+ 	lockdep_assert_held_once(&dev->master_mutex);
+ 
+ 	WARN_ON(fpriv->is_master);
+ 	old_master = fpriv->master;
+-	fpriv->master = drm_master_create(dev);
+-	if (!fpriv->master) {
+-		fpriv->master = old_master;
++	new_master = drm_master_create(dev);
++	if (!new_master)
+ 		return -ENOMEM;
+-	}
++	spin_lock(&fpriv->master_lookup_lock);
++	fpriv->master = new_master;
++	spin_unlock(&fpriv->master_lookup_lock);
+ 
+ 	fpriv->is_master = 1;
+ 	fpriv->authenticated = 1;
+@@ -302,10 +304,13 @@ int drm_master_open(struct drm_file *file_priv)
+ 	/* if there is no current master make this fd it, but do not create
+ 	 * any master object for render clients */
+ 	mutex_lock(&dev->master_mutex);
+-	if (!dev->master)
++	if (!dev->master) {
+ 		ret = drm_new_set_master(dev, file_priv);
+-	else
++	} else {
++		spin_lock(&file_priv->master_lookup_lock);
+ 		file_priv->master = drm_master_get(dev->master);
++		spin_unlock(&file_priv->master_lookup_lock);
++	}
+ 	mutex_unlock(&dev->master_mutex);
+ 
+ 	return ret;
+@@ -371,6 +376,31 @@ struct drm_master *drm_master_get(struct drm_master *master)
+ }
+ EXPORT_SYMBOL(drm_master_get);
+ 
++/**
++ * drm_file_get_master - reference &drm_file.master of @file_priv
++ * @file_priv: DRM file private
++ *
++ * Increments the reference count of @file_priv's &drm_file.master and returns
++ * the &drm_file.master. If @file_priv has no &drm_file.master, returns NULL.
++ *
++ * Master pointers returned from this function should be unreferenced using
++ * drm_master_put().
++ */
++struct drm_master *drm_file_get_master(struct drm_file *file_priv)
++{
++	struct drm_master *master = NULL;
++
++	spin_lock(&file_priv->master_lookup_lock);
++	if (!file_priv->master)
++		goto unlock;
++	master = drm_master_get(file_priv->master);
++
++unlock:
++	spin_unlock(&file_priv->master_lookup_lock);
++	return master;
++}
++EXPORT_SYMBOL(drm_file_get_master);
++
+ static void drm_master_destroy(struct kref *kref)
+ {
+ 	struct drm_master *master = container_of(kref, struct drm_master, refcount);
+diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
+index 3d7182001004d..b0a8264894885 100644
+--- a/drivers/gpu/drm/drm_debugfs.c
++++ b/drivers/gpu/drm/drm_debugfs.c
+@@ -91,6 +91,7 @@ static int drm_clients_info(struct seq_file *m, void *data)
+ 	mutex_lock(&dev->filelist_mutex);
+ 	list_for_each_entry_reverse(priv, &dev->filelist, lhead) {
+ 		struct task_struct *task;
++		bool is_current_master = drm_is_current_master(priv);
+ 
+ 		rcu_read_lock(); /* locks pid_task()->comm */
+ 		task = pid_task(priv->pid, PIDTYPE_PID);
+@@ -99,7 +100,7 @@ static int drm_clients_info(struct seq_file *m, void *data)
+ 			   task ? task->comm : "<unknown>",
+ 			   pid_vnr(priv->pid),
+ 			   priv->minor->index,
+-			   drm_is_current_master(priv) ? 'y' : 'n',
++			   is_current_master ? 'y' : 'n',
+ 			   priv->authenticated ? 'y' : 'n',
+ 			   from_kuid_munged(seq_user_ns(m), uid),
+ 			   priv->magic);
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 861f16dfd1a3d..1f54e9470165a 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -2869,11 +2869,13 @@ static int process_single_tx_qlock(struct drm_dp_mst_topology_mgr *mgr,
+ 	idx += tosend + 1;
+ 
+ 	ret = drm_dp_send_sideband_msg(mgr, up, chunk, idx);
+-	if (unlikely(ret) && drm_debug_enabled(DRM_UT_DP)) {
+-		struct drm_printer p = drm_debug_printer(DBG_PREFIX);
++	if (ret) {
++		if (drm_debug_enabled(DRM_UT_DP)) {
++			struct drm_printer p = drm_debug_printer(DBG_PREFIX);
+ 
+-		drm_printf(&p, "sideband msg failed to send\n");
+-		drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
++			drm_printf(&p, "sideband msg failed to send\n");
++			drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
++		}
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index 0ac4566ae3f45..537e7de8e9c33 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -177,6 +177,7 @@ struct drm_file *drm_file_alloc(struct drm_minor *minor)
+ 	init_waitqueue_head(&file->event_wait);
+ 	file->event_space = 4096; /* set aside 4k for event buffer */
+ 
++	spin_lock_init(&file->master_lookup_lock);
+ 	mutex_init(&file->event_read_lock);
+ 
+ 	if (drm_core_check_feature(dev, DRIVER_GEM))
+diff --git a/drivers/gpu/drm/drm_lease.c b/drivers/gpu/drm/drm_lease.c
+index da4f085fc09e7..aef22634005ef 100644
+--- a/drivers/gpu/drm/drm_lease.c
++++ b/drivers/gpu/drm/drm_lease.c
+@@ -107,10 +107,19 @@ static bool _drm_has_leased(struct drm_master *master, int id)
+  */
+ bool _drm_lease_held(struct drm_file *file_priv, int id)
+ {
+-	if (!file_priv || !file_priv->master)
++	bool ret;
++	struct drm_master *master;
++
++	if (!file_priv)
+ 		return true;
+ 
+-	return _drm_lease_held_master(file_priv->master, id);
++	master = drm_file_get_master(file_priv);
++	if (!master)
++		return true;
++	ret = _drm_lease_held_master(master, id);
++	drm_master_put(&master);
++
++	return ret;
+ }
+ 
+ /**
+@@ -129,13 +138,22 @@ bool drm_lease_held(struct drm_file *file_priv, int id)
+ 	struct drm_master *master;
+ 	bool ret;
+ 
+-	if (!file_priv || !file_priv->master || !file_priv->master->lessor)
++	if (!file_priv)
+ 		return true;
+ 
+-	master = file_priv->master;
++	master = drm_file_get_master(file_priv);
++	if (!master)
++		return true;
++	if (!master->lessor) {
++		ret = true;
++		goto out;
++	}
+ 	mutex_lock(&master->dev->mode_config.idr_mutex);
+ 	ret = _drm_lease_held_master(master, id);
+ 	mutex_unlock(&master->dev->mode_config.idr_mutex);
++
++out:
++	drm_master_put(&master);
+ 	return ret;
+ }
+ 
+@@ -155,10 +173,16 @@ uint32_t drm_lease_filter_crtcs(struct drm_file *file_priv, uint32_t crtcs_in)
+ 	int count_in, count_out;
+ 	uint32_t crtcs_out = 0;
+ 
+-	if (!file_priv || !file_priv->master || !file_priv->master->lessor)
++	if (!file_priv)
+ 		return crtcs_in;
+ 
+-	master = file_priv->master;
++	master = drm_file_get_master(file_priv);
++	if (!master)
++		return crtcs_in;
++	if (!master->lessor) {
++		crtcs_out = crtcs_in;
++		goto out;
++	}
+ 	dev = master->dev;
+ 
+ 	count_in = count_out = 0;
+@@ -177,6 +201,9 @@ uint32_t drm_lease_filter_crtcs(struct drm_file *file_priv, uint32_t crtcs_in)
+ 		count_in++;
+ 	}
+ 	mutex_unlock(&master->dev->mode_config.idr_mutex);
++
++out:
++	drm_master_put(&master);
+ 	return crtcs_out;
+ }
+ 
+@@ -490,7 +517,7 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	size_t object_count;
+ 	int ret = 0;
+ 	struct idr leases;
+-	struct drm_master *lessor = lessor_priv->master;
++	struct drm_master *lessor;
+ 	struct drm_master *lessee = NULL;
+ 	struct file *lessee_file = NULL;
+ 	struct file *lessor_file = lessor_priv->filp;
+@@ -502,12 +529,6 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+ 
+-	/* Do not allow sub-leases */
+-	if (lessor->lessor) {
+-		DRM_DEBUG_LEASE("recursive leasing not allowed\n");
+-		return -EINVAL;
+-	}
+-
+ 	/* need some objects */
+ 	if (cl->object_count == 0) {
+ 		DRM_DEBUG_LEASE("no objects in lease\n");
+@@ -519,12 +540,22 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 		return -EINVAL;
+ 	}
+ 
++	lessor = drm_file_get_master(lessor_priv);
++	/* Do not allow sub-leases */
++	if (lessor->lessor) {
++		DRM_DEBUG_LEASE("recursive leasing not allowed\n");
++		ret = -EINVAL;
++		goto out_lessor;
++	}
++
+ 	object_count = cl->object_count;
+ 
+ 	object_ids = memdup_user(u64_to_user_ptr(cl->object_ids),
+ 			array_size(object_count, sizeof(__u32)));
+-	if (IS_ERR(object_ids))
+-		return PTR_ERR(object_ids);
++	if (IS_ERR(object_ids)) {
++		ret = PTR_ERR(object_ids);
++		goto out_lessor;
++	}
+ 
+ 	idr_init(&leases);
+ 
+@@ -535,14 +566,15 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	if (ret) {
+ 		DRM_DEBUG_LEASE("lease object lookup failed: %i\n", ret);
+ 		idr_destroy(&leases);
+-		return ret;
++		goto out_lessor;
+ 	}
+ 
+ 	/* Allocate a file descriptor for the lease */
+ 	fd = get_unused_fd_flags(cl->flags & (O_CLOEXEC | O_NONBLOCK));
+ 	if (fd < 0) {
+ 		idr_destroy(&leases);
+-		return fd;
++		ret = fd;
++		goto out_lessor;
+ 	}
+ 
+ 	DRM_DEBUG_LEASE("Creating lease\n");
+@@ -578,6 +610,7 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	/* Hook up the fd */
+ 	fd_install(fd, lessee_file);
+ 
++	drm_master_put(&lessor);
+ 	DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl succeeded\n");
+ 	return 0;
+ 
+@@ -587,6 +620,8 @@ out_lessee:
+ out_leases:
+ 	put_unused_fd(fd);
+ 
++out_lessor:
++	drm_master_put(&lessor);
+ 	DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl failed: %d\n", ret);
+ 	return ret;
+ }
+@@ -609,7 +644,7 @@ int drm_mode_list_lessees_ioctl(struct drm_device *dev,
+ 	struct drm_mode_list_lessees *arg = data;
+ 	__u32 __user *lessee_ids = (__u32 __user *) (uintptr_t) (arg->lessees_ptr);
+ 	__u32 count_lessees = arg->count_lessees;
+-	struct drm_master *lessor = lessor_priv->master, *lessee;
++	struct drm_master *lessor, *lessee;
+ 	int count;
+ 	int ret = 0;
+ 
+@@ -620,6 +655,7 @@ int drm_mode_list_lessees_ioctl(struct drm_device *dev,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+ 
++	lessor = drm_file_get_master(lessor_priv);
+ 	DRM_DEBUG_LEASE("List lessees for %d\n", lessor->lessee_id);
+ 
+ 	mutex_lock(&dev->mode_config.idr_mutex);
+@@ -643,6 +679,7 @@ int drm_mode_list_lessees_ioctl(struct drm_device *dev,
+ 		arg->count_lessees = count;
+ 
+ 	mutex_unlock(&dev->mode_config.idr_mutex);
++	drm_master_put(&lessor);
+ 
+ 	return ret;
+ }
+@@ -662,7 +699,7 @@ int drm_mode_get_lease_ioctl(struct drm_device *dev,
+ 	struct drm_mode_get_lease *arg = data;
+ 	__u32 __user *object_ids = (__u32 __user *) (uintptr_t) (arg->objects_ptr);
+ 	__u32 count_objects = arg->count_objects;
+-	struct drm_master *lessee = lessee_priv->master;
++	struct drm_master *lessee;
+ 	struct idr *object_idr;
+ 	int count;
+ 	void *entry;
+@@ -676,6 +713,7 @@ int drm_mode_get_lease_ioctl(struct drm_device *dev,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+ 
++	lessee = drm_file_get_master(lessee_priv);
+ 	DRM_DEBUG_LEASE("get lease for %d\n", lessee->lessee_id);
+ 
+ 	mutex_lock(&dev->mode_config.idr_mutex);
+@@ -703,6 +741,7 @@ int drm_mode_get_lease_ioctl(struct drm_device *dev,
+ 		arg->count_objects = count;
+ 
+ 	mutex_unlock(&dev->mode_config.idr_mutex);
++	drm_master_put(&lessee);
+ 
+ 	return ret;
+ }
+@@ -721,7 +760,7 @@ int drm_mode_revoke_lease_ioctl(struct drm_device *dev,
+ 				void *data, struct drm_file *lessor_priv)
+ {
+ 	struct drm_mode_revoke_lease *arg = data;
+-	struct drm_master *lessor = lessor_priv->master;
++	struct drm_master *lessor;
+ 	struct drm_master *lessee;
+ 	int ret = 0;
+ 
+@@ -731,6 +770,7 @@ int drm_mode_revoke_lease_ioctl(struct drm_device *dev,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+ 
++	lessor = drm_file_get_master(lessor_priv);
+ 	mutex_lock(&dev->mode_config.idr_mutex);
+ 
+ 	lessee = _drm_find_lessee(lessor, arg->lessee_id);
+@@ -751,6 +791,7 @@ int drm_mode_revoke_lease_ioctl(struct drm_device *dev,
+ 
+ fail:
+ 	mutex_unlock(&dev->mode_config.idr_mutex);
++	drm_master_put(&lessor);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_dma.c b/drivers/gpu/drm/exynos/exynos_drm_dma.c
+index 0644936afee26..bf33c3084cb41 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_dma.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_dma.c
+@@ -115,6 +115,8 @@ int exynos_drm_register_dma(struct drm_device *drm, struct device *dev,
+ 				EXYNOS_DEV_ADDR_START, EXYNOS_DEV_ADDR_SIZE);
+ 		else if (IS_ENABLED(CONFIG_IOMMU_DMA))
+ 			mapping = iommu_get_domain_for_dev(priv->dma_dev);
++		else
++			mapping = ERR_PTR(-ENODEV);
+ 
+ 		if (IS_ERR(mapping))
+ 			return PTR_ERR(mapping);
+diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.h b/drivers/gpu/drm/mgag200/mgag200_drv.h
+index 749a075fe9e4c..d1b51c133e27a 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_drv.h
++++ b/drivers/gpu/drm/mgag200/mgag200_drv.h
+@@ -43,6 +43,22 @@
+ #define ATTR_INDEX 0x1fc0
+ #define ATTR_DATA 0x1fc1
+ 
++#define WREG_MISC(v)						\
++	WREG8(MGA_MISC_OUT, v)
++
++#define RREG_MISC(v)						\
++	((v) = RREG8(MGA_MISC_IN))
++
++#define WREG_MISC_MASKED(v, mask)				\
++	do {							\
++		u8 misc_;					\
++		u8 mask_ = (mask);				\
++		RREG_MISC(misc_);				\
++		misc_ &= ~mask_;				\
++		misc_ |= ((v) & mask_);				\
++		WREG_MISC(misc_);				\
++	} while (0)
++
+ #define WREG_ATTR(reg, v)					\
+ 	do {							\
+ 		RREG8(0x1fda);					\
+diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
+index 38672f9e5c4f3..509968c0d16bc 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
+@@ -172,6 +172,8 @@ static int mgag200_g200_set_plls(struct mga_device *mdev, long clock)
+ 	drm_dbg_kms(dev, "clock: %ld vco: %ld m: %d n: %d p: %d s: %d\n",
+ 		    clock, f_vco, m, n, p, s);
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	WREG_DAC(MGA1064_PIX_PLLC_M, m);
+ 	WREG_DAC(MGA1064_PIX_PLLC_N, n);
+ 	WREG_DAC(MGA1064_PIX_PLLC_P, (p | (s << 3)));
+@@ -287,6 +289,8 @@ static int mga_g200se_set_plls(struct mga_device *mdev, long clock)
+ 		return 1;
+ 	}
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	WREG_DAC(MGA1064_PIX_PLLC_M, m);
+ 	WREG_DAC(MGA1064_PIX_PLLC_N, n);
+ 	WREG_DAC(MGA1064_PIX_PLLC_P, p);
+@@ -383,6 +387,8 @@ static int mga_g200wb_set_plls(struct mga_device *mdev, long clock)
+ 		}
+ 	}
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	for (i = 0; i <= 32 && pll_locked == false; i++) {
+ 		if (i > 0) {
+ 			WREG8(MGAREG_CRTC_INDEX, 0x1e);
+@@ -520,6 +526,8 @@ static int mga_g200ev_set_plls(struct mga_device *mdev, long clock)
+ 		}
+ 	}
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);
+ 	tmp = RREG8(DAC_DATA);
+ 	tmp |= MGA1064_PIX_CLK_CTL_CLK_DIS;
+@@ -652,6 +660,9 @@ static int mga_g200eh_set_plls(struct mga_device *mdev, long clock)
+ 			}
+ 		}
+ 	}
++
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	for (i = 0; i <= 32 && pll_locked == false; i++) {
+ 		WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);
+ 		tmp = RREG8(DAC_DATA);
+@@ -752,6 +763,8 @@ static int mga_g200er_set_plls(struct mga_device *mdev, long clock)
+ 		}
+ 	}
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);
+ 	tmp = RREG8(DAC_DATA);
+ 	tmp |= MGA1064_PIX_CLK_CTL_CLK_DIS;
+@@ -785,8 +798,6 @@ static int mga_g200er_set_plls(struct mga_device *mdev, long clock)
+ 
+ static int mgag200_crtc_set_plls(struct mga_device *mdev, long clock)
+ {
+-	u8 misc;
+-
+ 	switch(mdev->type) {
+ 	case G200_PCI:
+ 	case G200_AGP:
+@@ -811,11 +822,6 @@ static int mgag200_crtc_set_plls(struct mga_device *mdev, long clock)
+ 		break;
+ 	}
+ 
+-	misc = RREG8(MGA_MISC_IN);
+-	misc &= ~MGAREG_MISC_CLK_SEL_MASK;
+-	misc |= MGAREG_MISC_CLK_SEL_MGA_MSK;
+-	WREG8(MGA_MISC_OUT, misc);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/mgag200/mgag200_reg.h b/drivers/gpu/drm/mgag200/mgag200_reg.h
+index 977be0565c061..60e705283fe84 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_reg.h
++++ b/drivers/gpu/drm/mgag200/mgag200_reg.h
+@@ -222,11 +222,10 @@
+ 
+ #define MGAREG_MISC_IOADSEL	(0x1 << 0)
+ #define MGAREG_MISC_RAMMAPEN	(0x1 << 1)
+-#define MGAREG_MISC_CLK_SEL_MASK	GENMASK(3, 2)
+-#define MGAREG_MISC_CLK_SEL_VGA25	(0x0 << 2)
+-#define MGAREG_MISC_CLK_SEL_VGA28	(0x1 << 2)
+-#define MGAREG_MISC_CLK_SEL_MGA_PIX	(0x2 << 2)
+-#define MGAREG_MISC_CLK_SEL_MGA_MSK	(0x3 << 2)
++#define MGAREG_MISC_CLKSEL_MASK		GENMASK(3, 2)
++#define MGAREG_MISC_CLKSEL_VGA25	(0x0 << 2)
++#define MGAREG_MISC_CLKSEL_VGA28	(0x1 << 2)
++#define MGAREG_MISC_CLKSEL_MGA		(0x3 << 2)
+ #define MGAREG_MISC_VIDEO_DIS	(0x1 << 4)
+ #define MGAREG_MISC_HIGH_PG_SEL	(0x1 << 5)
+ #define MGAREG_MISC_HSYNCPOL		BIT(6)
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+index c1c152e39918b..913de5938782a 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+@@ -89,13 +89,6 @@ static void mdp4_disable_commit(struct msm_kms *kms)
+ 
+ static void mdp4_prepare_commit(struct msm_kms *kms, struct drm_atomic_state *state)
+ {
+-	int i;
+-	struct drm_crtc *crtc;
+-	struct drm_crtc_state *crtc_state;
+-
+-	/* see 119ecb7fd */
+-	for_each_new_crtc_in_state(state, crtc, crtc_state, i)
+-		drm_crtc_vblank_get(crtc);
+ }
+ 
+ static void mdp4_flush_commit(struct msm_kms *kms, unsigned crtc_mask)
+@@ -114,12 +107,6 @@ static void mdp4_wait_flush(struct msm_kms *kms, unsigned crtc_mask)
+ 
+ static void mdp4_complete_commit(struct msm_kms *kms, unsigned crtc_mask)
+ {
+-	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
+-	struct drm_crtc *crtc;
+-
+-	/* see 119ecb7fd */
+-	for_each_crtc_mask(mdp4_kms->dev, crtc, crtc_mask)
+-		drm_crtc_vblank_put(crtc);
+ }
+ 
+ static long mdp4_round_pixclk(struct msm_kms *kms, unsigned long rate,
+@@ -410,6 +397,7 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev->dev);
+ 	struct mdp4_platform_config *config = mdp4_get_config(pdev);
++	struct msm_drm_private *priv = dev->dev_private;
+ 	struct mdp4_kms *mdp4_kms;
+ 	struct msm_kms *kms = NULL;
+ 	struct msm_gem_address_space *aspace;
+@@ -425,7 +413,8 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ 
+ 	mdp_kms_init(&mdp4_kms->base, &kms_funcs);
+ 
+-	kms = &mdp4_kms->base.base;
++	priv->kms = &mdp4_kms->base.base;
++	kms = priv->kms;
+ 
+ 	mdp4_kms->dev = dev;
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 18cec4fc5e0ba..2768d1d306f00 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -261,7 +261,7 @@ static u8 dp_panel_get_edid_checksum(struct edid *edid)
+ {
+ 	struct edid *last_block;
+ 	u8 *raw_edid;
+-	bool is_edid_corrupt;
++	bool is_edid_corrupt = false;
+ 
+ 	if (!edid) {
+ 		DRM_ERROR("invalid edid input\n");
+@@ -293,7 +293,12 @@ void dp_panel_handle_sink_request(struct dp_panel *dp_panel)
+ 	panel = container_of(dp_panel, struct dp_panel_private, dp_panel);
+ 
+ 	if (panel->link->sink_request & DP_TEST_LINK_EDID_READ) {
+-		u8 checksum = dp_panel_get_edid_checksum(dp_panel->edid);
++		u8 checksum;
++
++		if (dp_panel->edid)
++			checksum = dp_panel_get_edid_checksum(dp_panel->edid);
++		else
++			checksum = dp_panel->connector->real_edid_checksum;
+ 
+ 		dp_link_send_edid_checksum(panel->link, checksum);
+ 		dp_link_send_test_response(panel->link);
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_cfg.c b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+index b2ff68a15791a..d255bea87ca41 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_cfg.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+@@ -158,7 +158,6 @@ static const struct msm_dsi_config sdm660_dsi_cfg = {
+ 	.reg_cfg = {
+ 		.num = 2,
+ 		.regs = {
+-			{"vdd", 73400, 32 },	/* 0.9 V */
+ 			{"vdda", 12560, 4 },	/* 1.2 V */
+ 		},
+ 	},
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+index 519400501bcdf..1ca9e73c6e078 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+@@ -168,7 +168,7 @@ const struct msm_dsi_phy_cfg dsi_phy_14nm_660_cfgs = {
+ 	.reg_cfg = {
+ 		.num = 1,
+ 		.regs = {
+-			{"vcca", 17000, 32},
++			{"vcca", 73400, 32},
+ 		},
+ 	},
+ 	.ops = {
+diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
+index 597cf1459b0a8..4c6bdea5537b9 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_device.h
++++ b/drivers/gpu/drm/panfrost/panfrost_device.h
+@@ -120,8 +120,12 @@ struct panfrost_device {
+ };
+ 
+ struct panfrost_mmu {
++	struct panfrost_device *pfdev;
++	struct kref refcount;
+ 	struct io_pgtable_cfg pgtbl_cfg;
+ 	struct io_pgtable_ops *pgtbl_ops;
++	struct drm_mm mm;
++	spinlock_t mm_lock;
+ 	int as;
+ 	atomic_t as_count;
+ 	struct list_head list;
+@@ -132,9 +136,7 @@ struct panfrost_file_priv {
+ 
+ 	struct drm_sched_entity sched_entity[NUM_JOB_SLOTS];
+ 
+-	struct panfrost_mmu mmu;
+-	struct drm_mm mm;
+-	spinlock_t mm_lock;
++	struct panfrost_mmu *mmu;
+ };
+ 
+ static inline struct panfrost_device *to_panfrost_device(struct drm_device *ddev)
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index 689be734ed200..a70261809cdd2 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -417,7 +417,7 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
+ 		 * anyway, so let's not bother.
+ 		 */
+ 		if (!list_is_singular(&bo->mappings.list) ||
+-		    WARN_ON_ONCE(first->mmu != &priv->mmu)) {
++		    WARN_ON_ONCE(first->mmu != priv->mmu)) {
+ 			ret = -EINVAL;
+ 			goto out_unlock_mappings;
+ 		}
+@@ -449,32 +449,6 @@ int panfrost_unstable_ioctl_check(void)
+ 	return 0;
+ }
+ 
+-#define PFN_4G		(SZ_4G >> PAGE_SHIFT)
+-#define PFN_4G_MASK	(PFN_4G - 1)
+-#define PFN_16M		(SZ_16M >> PAGE_SHIFT)
+-
+-static void panfrost_drm_mm_color_adjust(const struct drm_mm_node *node,
+-					 unsigned long color,
+-					 u64 *start, u64 *end)
+-{
+-	/* Executable buffers can't start or end on a 4GB boundary */
+-	if (!(color & PANFROST_BO_NOEXEC)) {
+-		u64 next_seg;
+-
+-		if ((*start & PFN_4G_MASK) == 0)
+-			(*start)++;
+-
+-		if ((*end & PFN_4G_MASK) == 0)
+-			(*end)--;
+-
+-		next_seg = ALIGN(*start, PFN_4G);
+-		if (next_seg - *start <= PFN_16M)
+-			*start = next_seg + 1;
+-
+-		*end = min(*end, ALIGN(*start, PFN_4G) - 1);
+-	}
+-}
+-
+ static int
+ panfrost_open(struct drm_device *dev, struct drm_file *file)
+ {
+@@ -489,15 +463,11 @@ panfrost_open(struct drm_device *dev, struct drm_file *file)
+ 	panfrost_priv->pfdev = pfdev;
+ 	file->driver_priv = panfrost_priv;
+ 
+-	spin_lock_init(&panfrost_priv->mm_lock);
+-
+-	/* 4G enough for now. can be 48-bit */
+-	drm_mm_init(&panfrost_priv->mm, SZ_32M >> PAGE_SHIFT, (SZ_4G - SZ_32M) >> PAGE_SHIFT);
+-	panfrost_priv->mm.color_adjust = panfrost_drm_mm_color_adjust;
+-
+-	ret = panfrost_mmu_pgtable_alloc(panfrost_priv);
+-	if (ret)
+-		goto err_pgtable;
++	panfrost_priv->mmu = panfrost_mmu_ctx_create(pfdev);
++	if (IS_ERR(panfrost_priv->mmu)) {
++		ret = PTR_ERR(panfrost_priv->mmu);
++		goto err_free;
++	}
+ 
+ 	ret = panfrost_job_open(panfrost_priv);
+ 	if (ret)
+@@ -506,9 +476,8 @@ panfrost_open(struct drm_device *dev, struct drm_file *file)
+ 	return 0;
+ 
+ err_job:
+-	panfrost_mmu_pgtable_free(panfrost_priv);
+-err_pgtable:
+-	drm_mm_takedown(&panfrost_priv->mm);
++	panfrost_mmu_ctx_put(panfrost_priv->mmu);
++err_free:
+ 	kfree(panfrost_priv);
+ 	return ret;
+ }
+@@ -521,8 +490,7 @@ panfrost_postclose(struct drm_device *dev, struct drm_file *file)
+ 	panfrost_perfcnt_close(file);
+ 	panfrost_job_close(panfrost_priv);
+ 
+-	panfrost_mmu_pgtable_free(panfrost_priv);
+-	drm_mm_takedown(&panfrost_priv->mm);
++	panfrost_mmu_ctx_put(panfrost_priv->mmu);
+ 	kfree(panfrost_priv);
+ }
+ 
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
+index 57a31dd0ffed1..1d917cea5ceb4 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
+@@ -60,7 +60,7 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
+ 
+ 	mutex_lock(&bo->mappings.lock);
+ 	list_for_each_entry(iter, &bo->mappings.list, node) {
+-		if (iter->mmu == &priv->mmu) {
++		if (iter->mmu == priv->mmu) {
+ 			kref_get(&iter->refcount);
+ 			mapping = iter;
+ 			break;
+@@ -74,16 +74,13 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
+ static void
+ panfrost_gem_teardown_mapping(struct panfrost_gem_mapping *mapping)
+ {
+-	struct panfrost_file_priv *priv;
+-
+ 	if (mapping->active)
+ 		panfrost_mmu_unmap(mapping);
+ 
+-	priv = container_of(mapping->mmu, struct panfrost_file_priv, mmu);
+-	spin_lock(&priv->mm_lock);
++	spin_lock(&mapping->mmu->mm_lock);
+ 	if (drm_mm_node_allocated(&mapping->mmnode))
+ 		drm_mm_remove_node(&mapping->mmnode);
+-	spin_unlock(&priv->mm_lock);
++	spin_unlock(&mapping->mmu->mm_lock);
+ }
+ 
+ static void panfrost_gem_mapping_release(struct kref *kref)
+@@ -94,6 +91,7 @@ static void panfrost_gem_mapping_release(struct kref *kref)
+ 
+ 	panfrost_gem_teardown_mapping(mapping);
+ 	drm_gem_object_put(&mapping->obj->base.base);
++	panfrost_mmu_ctx_put(mapping->mmu);
+ 	kfree(mapping);
+ }
+ 
+@@ -143,11 +141,11 @@ int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv)
+ 	else
+ 		align = size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;
+ 
+-	mapping->mmu = &priv->mmu;
+-	spin_lock(&priv->mm_lock);
+-	ret = drm_mm_insert_node_generic(&priv->mm, &mapping->mmnode,
++	mapping->mmu = panfrost_mmu_ctx_get(priv->mmu);
++	spin_lock(&mapping->mmu->mm_lock);
++	ret = drm_mm_insert_node_generic(&mapping->mmu->mm, &mapping->mmnode,
+ 					 size >> PAGE_SHIFT, align, color, 0);
+-	spin_unlock(&priv->mm_lock);
++	spin_unlock(&mapping->mmu->mm_lock);
+ 	if (ret)
+ 		goto err;
+ 
+@@ -176,7 +174,7 @@ void panfrost_gem_close(struct drm_gem_object *obj, struct drm_file *file_priv)
+ 
+ 	mutex_lock(&bo->mappings.lock);
+ 	list_for_each_entry(iter, &bo->mappings.list, node) {
+-		if (iter->mmu == &priv->mmu) {
++		if (iter->mmu == priv->mmu) {
+ 			mapping = iter;
+ 			list_del(&iter->node);
+ 			break;
+diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
+index 04e6f6f9b742e..7e1a5664d4525 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_job.c
++++ b/drivers/gpu/drm/panfrost/panfrost_job.c
+@@ -165,7 +165,7 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
+ 		return;
+ 	}
+ 
+-	cfg = panfrost_mmu_as_get(pfdev, &job->file_priv->mmu);
++	cfg = panfrost_mmu_as_get(pfdev, job->file_priv->mmu);
+ 
+ 	job_write(pfdev, JS_HEAD_NEXT_LO(js), jc_head & 0xFFFFFFFF);
+ 	job_write(pfdev, JS_HEAD_NEXT_HI(js), jc_head >> 32);
+@@ -524,7 +524,7 @@ static irqreturn_t panfrost_job_irq_handler(int irq, void *data)
+ 			if (job) {
+ 				pfdev->jobs[j] = NULL;
+ 
+-				panfrost_mmu_as_put(pfdev, &job->file_priv->mmu);
++				panfrost_mmu_as_put(pfdev, job->file_priv->mmu);
+ 				panfrost_devfreq_record_idle(&pfdev->pfdevfreq);
+ 
+ 				dma_fence_signal_locked(job->done_fence);
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index 1986862163178..7fc45b13a52c2 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -1,5 +1,8 @@
+ // SPDX-License-Identifier:	GPL-2.0
+ /* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */
++
++#include <drm/panfrost_drm.h>
++
+ #include <linux/atomic.h>
+ #include <linux/bitfield.h>
+ #include <linux/delay.h>
+@@ -52,25 +55,16 @@ static int write_cmd(struct panfrost_device *pfdev, u32 as_nr, u32 cmd)
+ }
+ 
+ static void lock_region(struct panfrost_device *pfdev, u32 as_nr,
+-			u64 iova, size_t size)
++			u64 iova, u64 size)
+ {
+ 	u8 region_width;
+ 	u64 region = iova & PAGE_MASK;
+-	/*
+-	 * fls returns:
+-	 * 1 .. 32
+-	 *
+-	 * 10 + fls(num_pages)
+-	 * results in the range (11 .. 42)
+-	 */
+-
+-	size = round_up(size, PAGE_SIZE);
+ 
+-	region_width = 10 + fls(size >> PAGE_SHIFT);
+-	if ((size >> PAGE_SHIFT) != (1ul << (region_width - 11))) {
+-		/* not pow2, so must go up to the next pow2 */
+-		region_width += 1;
+-	}
++	/* The size is encoded as ceil(log2) minus(1), which may be calculated
++	 * with fls. The size must be clamped to hardware bounds.
++	 */
++	size = max_t(u64, size, AS_LOCK_REGION_MIN_SIZE);
++	region_width = fls64(size - 1) - 1;
+ 	region |= region_width;
+ 
+ 	/* Lock the region that needs to be updated */
+@@ -81,7 +75,7 @@ static void lock_region(struct panfrost_device *pfdev, u32 as_nr,
+ 
+ 
+ static int mmu_hw_do_operation_locked(struct panfrost_device *pfdev, int as_nr,
+-				      u64 iova, size_t size, u32 op)
++				      u64 iova, u64 size, u32 op)
+ {
+ 	if (as_nr < 0)
+ 		return 0;
+@@ -98,7 +92,7 @@ static int mmu_hw_do_operation_locked(struct panfrost_device *pfdev, int as_nr,
+ 
+ static int mmu_hw_do_operation(struct panfrost_device *pfdev,
+ 			       struct panfrost_mmu *mmu,
+-			       u64 iova, size_t size, u32 op)
++			       u64 iova, u64 size, u32 op)
+ {
+ 	int ret;
+ 
+@@ -115,7 +109,7 @@ static void panfrost_mmu_enable(struct panfrost_device *pfdev, struct panfrost_m
+ 	u64 transtab = cfg->arm_mali_lpae_cfg.transtab;
+ 	u64 memattr = cfg->arm_mali_lpae_cfg.memattr;
+ 
+-	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0UL, AS_COMMAND_FLUSH_MEM);
++	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0ULL, AS_COMMAND_FLUSH_MEM);
+ 
+ 	mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), transtab & 0xffffffffUL);
+ 	mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), transtab >> 32);
+@@ -131,7 +125,7 @@ static void panfrost_mmu_enable(struct panfrost_device *pfdev, struct panfrost_m
+ 
+ static void panfrost_mmu_disable(struct panfrost_device *pfdev, u32 as_nr)
+ {
+-	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0UL, AS_COMMAND_FLUSH_MEM);
++	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0ULL, AS_COMMAND_FLUSH_MEM);
+ 
+ 	mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), 0);
+ 	mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), 0);
+@@ -231,7 +225,7 @@ static size_t get_pgsize(u64 addr, size_t size)
+ 
+ static void panfrost_mmu_flush_range(struct panfrost_device *pfdev,
+ 				     struct panfrost_mmu *mmu,
+-				     u64 iova, size_t size)
++				     u64 iova, u64 size)
+ {
+ 	if (mmu->as < 0)
+ 		return;
+@@ -337,7 +331,7 @@ static void mmu_tlb_inv_context_s1(void *cookie)
+ 
+ static void mmu_tlb_sync_context(void *cookie)
+ {
+-	//struct panfrost_device *pfdev = cookie;
++	//struct panfrost_mmu *mmu = cookie;
+ 	// TODO: Wait 1000 GPU cycles for HW_ISSUE_6367/T60X
+ }
+ 
+@@ -359,57 +353,10 @@ static const struct iommu_flush_ops mmu_tlb_ops = {
+ 	.tlb_flush_leaf = mmu_tlb_flush_leaf,
+ };
+ 
+-int panfrost_mmu_pgtable_alloc(struct panfrost_file_priv *priv)
+-{
+-	struct panfrost_mmu *mmu = &priv->mmu;
+-	struct panfrost_device *pfdev = priv->pfdev;
+-
+-	INIT_LIST_HEAD(&mmu->list);
+-	mmu->as = -1;
+-
+-	mmu->pgtbl_cfg = (struct io_pgtable_cfg) {
+-		.pgsize_bitmap	= SZ_4K | SZ_2M,
+-		.ias		= FIELD_GET(0xff, pfdev->features.mmu_features),
+-		.oas		= FIELD_GET(0xff00, pfdev->features.mmu_features),
+-		.coherent_walk	= pfdev->coherent,
+-		.tlb		= &mmu_tlb_ops,
+-		.iommu_dev	= pfdev->dev,
+-	};
+-
+-	mmu->pgtbl_ops = alloc_io_pgtable_ops(ARM_MALI_LPAE, &mmu->pgtbl_cfg,
+-					      priv);
+-	if (!mmu->pgtbl_ops)
+-		return -EINVAL;
+-
+-	return 0;
+-}
+-
+-void panfrost_mmu_pgtable_free(struct panfrost_file_priv *priv)
+-{
+-	struct panfrost_device *pfdev = priv->pfdev;
+-	struct panfrost_mmu *mmu = &priv->mmu;
+-
+-	spin_lock(&pfdev->as_lock);
+-	if (mmu->as >= 0) {
+-		pm_runtime_get_noresume(pfdev->dev);
+-		if (pm_runtime_active(pfdev->dev))
+-			panfrost_mmu_disable(pfdev, mmu->as);
+-		pm_runtime_put_autosuspend(pfdev->dev);
+-
+-		clear_bit(mmu->as, &pfdev->as_alloc_mask);
+-		clear_bit(mmu->as, &pfdev->as_in_use_mask);
+-		list_del(&mmu->list);
+-	}
+-	spin_unlock(&pfdev->as_lock);
+-
+-	free_io_pgtable_ops(mmu->pgtbl_ops);
+-}
+-
+ static struct panfrost_gem_mapping *
+ addr_to_mapping(struct panfrost_device *pfdev, int as, u64 addr)
+ {
+ 	struct panfrost_gem_mapping *mapping = NULL;
+-	struct panfrost_file_priv *priv;
+ 	struct drm_mm_node *node;
+ 	u64 offset = addr >> PAGE_SHIFT;
+ 	struct panfrost_mmu *mmu;
+@@ -422,11 +369,10 @@ addr_to_mapping(struct panfrost_device *pfdev, int as, u64 addr)
+ 	goto out;
+ 
+ found_mmu:
+-	priv = container_of(mmu, struct panfrost_file_priv, mmu);
+ 
+-	spin_lock(&priv->mm_lock);
++	spin_lock(&mmu->mm_lock);
+ 
+-	drm_mm_for_each_node(node, &priv->mm) {
++	drm_mm_for_each_node(node, &mmu->mm) {
+ 		if (offset >= node->start &&
+ 		    offset < (node->start + node->size)) {
+ 			mapping = drm_mm_node_to_panfrost_mapping(node);
+@@ -436,7 +382,7 @@ found_mmu:
+ 		}
+ 	}
+ 
+-	spin_unlock(&priv->mm_lock);
++	spin_unlock(&mmu->mm_lock);
+ out:
+ 	spin_unlock(&pfdev->as_lock);
+ 	return mapping;
+@@ -549,6 +495,107 @@ err_bo:
+ 	return ret;
+ }
+ 
++static void panfrost_mmu_release_ctx(struct kref *kref)
++{
++	struct panfrost_mmu *mmu = container_of(kref, struct panfrost_mmu,
++						refcount);
++	struct panfrost_device *pfdev = mmu->pfdev;
++
++	spin_lock(&pfdev->as_lock);
++	if (mmu->as >= 0) {
++		pm_runtime_get_noresume(pfdev->dev);
++		if (pm_runtime_active(pfdev->dev))
++			panfrost_mmu_disable(pfdev, mmu->as);
++		pm_runtime_put_autosuspend(pfdev->dev);
++
++		clear_bit(mmu->as, &pfdev->as_alloc_mask);
++		clear_bit(mmu->as, &pfdev->as_in_use_mask);
++		list_del(&mmu->list);
++	}
++	spin_unlock(&pfdev->as_lock);
++
++	free_io_pgtable_ops(mmu->pgtbl_ops);
++	drm_mm_takedown(&mmu->mm);
++	kfree(mmu);
++}
++
++void panfrost_mmu_ctx_put(struct panfrost_mmu *mmu)
++{
++	kref_put(&mmu->refcount, panfrost_mmu_release_ctx);
++}
++
++struct panfrost_mmu *panfrost_mmu_ctx_get(struct panfrost_mmu *mmu)
++{
++	kref_get(&mmu->refcount);
++
++	return mmu;
++}
++
++#define PFN_4G		(SZ_4G >> PAGE_SHIFT)
++#define PFN_4G_MASK	(PFN_4G - 1)
++#define PFN_16M		(SZ_16M >> PAGE_SHIFT)
++
++static void panfrost_drm_mm_color_adjust(const struct drm_mm_node *node,
++					 unsigned long color,
++					 u64 *start, u64 *end)
++{
++	/* Executable buffers can't start or end on a 4GB boundary */
++	if (!(color & PANFROST_BO_NOEXEC)) {
++		u64 next_seg;
++
++		if ((*start & PFN_4G_MASK) == 0)
++			(*start)++;
++
++		if ((*end & PFN_4G_MASK) == 0)
++			(*end)--;
++
++		next_seg = ALIGN(*start, PFN_4G);
++		if (next_seg - *start <= PFN_16M)
++			*start = next_seg + 1;
++
++		*end = min(*end, ALIGN(*start, PFN_4G) - 1);
++	}
++}
++
++struct panfrost_mmu *panfrost_mmu_ctx_create(struct panfrost_device *pfdev)
++{
++	struct panfrost_mmu *mmu;
++
++	mmu = kzalloc(sizeof(*mmu), GFP_KERNEL);
++	if (!mmu)
++		return ERR_PTR(-ENOMEM);
++
++	mmu->pfdev = pfdev;
++	spin_lock_init(&mmu->mm_lock);
++
++	/* 4G enough for now. can be 48-bit */
++	drm_mm_init(&mmu->mm, SZ_32M >> PAGE_SHIFT, (SZ_4G - SZ_32M) >> PAGE_SHIFT);
++	mmu->mm.color_adjust = panfrost_drm_mm_color_adjust;
++
++	INIT_LIST_HEAD(&mmu->list);
++	mmu->as = -1;
++
++	mmu->pgtbl_cfg = (struct io_pgtable_cfg) {
++		.pgsize_bitmap	= SZ_4K | SZ_2M,
++		.ias		= FIELD_GET(0xff, pfdev->features.mmu_features),
++		.oas		= FIELD_GET(0xff00, pfdev->features.mmu_features),
++		.coherent_walk	= pfdev->coherent,
++		.tlb		= &mmu_tlb_ops,
++		.iommu_dev	= pfdev->dev,
++	};
++
++	mmu->pgtbl_ops = alloc_io_pgtable_ops(ARM_MALI_LPAE, &mmu->pgtbl_cfg,
++					      mmu);
++	if (!mmu->pgtbl_ops) {
++		kfree(mmu);
++		return ERR_PTR(-EINVAL);
++	}
++
++	kref_init(&mmu->refcount);
++
++	return mmu;
++}
++
+ static const char *access_type_name(struct panfrost_device *pfdev,
+ 		u32 fault_status)
+ {
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.h b/drivers/gpu/drm/panfrost/panfrost_mmu.h
+index 44fc2edf63ce6..cc2a0d307febc 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.h
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.h
+@@ -18,7 +18,8 @@ void panfrost_mmu_reset(struct panfrost_device *pfdev);
+ u32 panfrost_mmu_as_get(struct panfrost_device *pfdev, struct panfrost_mmu *mmu);
+ void panfrost_mmu_as_put(struct panfrost_device *pfdev, struct panfrost_mmu *mmu);
+ 
+-int panfrost_mmu_pgtable_alloc(struct panfrost_file_priv *priv);
+-void panfrost_mmu_pgtable_free(struct panfrost_file_priv *priv);
++struct panfrost_mmu *panfrost_mmu_ctx_get(struct panfrost_mmu *mmu);
++void panfrost_mmu_ctx_put(struct panfrost_mmu *mmu);
++struct panfrost_mmu *panfrost_mmu_ctx_create(struct panfrost_device *pfdev);
+ 
+ #endif
+diff --git a/drivers/gpu/drm/panfrost/panfrost_regs.h b/drivers/gpu/drm/panfrost/panfrost_regs.h
+index eddaa62ad8b0e..2ae3a4d301d39 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_regs.h
++++ b/drivers/gpu/drm/panfrost/panfrost_regs.h
+@@ -318,6 +318,8 @@
+ #define AS_FAULTSTATUS_ACCESS_TYPE_READ		(0x2 << 8)
+ #define AS_FAULTSTATUS_ACCESS_TYPE_WRITE	(0x3 << 8)
+ 
++#define AS_LOCK_REGION_MIN_SIZE                 (1ULL << 15)
++
+ #define gpu_write(dev, reg, data) writel(data, dev->iomem + reg)
+ #define gpu_read(dev, reg) readl(dev->iomem + reg)
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index c58b8840090ab..ee293f061f0a8 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1074,7 +1074,9 @@ static int vc4_hdmi_audio_trigger(struct snd_pcm_substream *substream, int cmd,
+ 		HDMI_WRITE(HDMI_MAI_CTL,
+ 			   VC4_SET_FIELD(vc4_hdmi->audio.channels,
+ 					 VC4_HD_MAI_CTL_CHNUM) |
+-			   VC4_HD_MAI_CTL_ENABLE);
++					 VC4_HD_MAI_CTL_WHOLSMP |
++					 VC4_HD_MAI_CTL_CHALIGN |
++					 VC4_HD_MAI_CTL_ENABLE);
+ 		break;
+ 	case SNDRV_PCM_TRIGGER_STOP:
+ 		HDMI_WRITE(HDMI_MAI_CTL,
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+index f493b20c7a38c..f1a51371de5b1 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+@@ -866,7 +866,7 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
+ 	user_srf->prime.base.shareable = false;
+ 	user_srf->prime.base.tfile = NULL;
+ 	if (drm_is_primary_client(file_priv))
+-		user_srf->master = drm_master_get(file_priv->master);
++		user_srf->master = drm_file_get_master(file_priv);
+ 
+ 	/**
+ 	 * From this point, the generic resource management functions
+@@ -1537,7 +1537,7 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
+ 
+ 	user_srf = container_of(srf, struct vmw_user_surface, srf);
+ 	if (drm_is_primary_client(file_priv))
+-		user_srf->master = drm_master_get(file_priv->master);
++		user_srf->master = drm_file_get_master(file_priv);
+ 
+ 	ret = ttm_read_lock(&dev_priv->reservation_sem, true);
+ 	if (unlikely(ret != 0))
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_disp.c b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+index 8cd8af35cfaac..205c72a249b75 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_disp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+@@ -1447,9 +1447,10 @@ zynqmp_disp_crtc_atomic_enable(struct drm_crtc *crtc,
+ 	struct drm_display_mode *adjusted_mode = &crtc->state->adjusted_mode;
+ 	int ret, vrefresh;
+ 
++	pm_runtime_get_sync(disp->dev);
++
+ 	zynqmp_disp_crtc_setup_clock(crtc, adjusted_mode);
+ 
+-	pm_runtime_get_sync(disp->dev);
+ 	ret = clk_prepare_enable(disp->pclk);
+ 	if (ret) {
+ 		dev_err(disp->dev, "failed to enable a pixel clock\n");
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_dp.c b/drivers/gpu/drm/xlnx/zynqmp_dp.c
+index 59d1fb017da01..13811332b349f 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_dp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_dp.c
+@@ -402,10 +402,6 @@ static int zynqmp_dp_phy_init(struct zynqmp_dp *dp)
+ 		}
+ 	}
+ 
+-	ret = zynqmp_dp_reset(dp, false);
+-	if (ret < 0)
+-		return ret;
+-
+ 	zynqmp_dp_clr(dp, ZYNQMP_DP_PHY_RESET, ZYNQMP_DP_PHY_RESET_ALL_RESET);
+ 
+ 	/*
+@@ -441,8 +437,6 @@ static void zynqmp_dp_phy_exit(struct zynqmp_dp *dp)
+ 				ret);
+ 	}
+ 
+-	zynqmp_dp_reset(dp, true);
+-
+ 	for (i = 0; i < dp->num_lanes; i++) {
+ 		ret = phy_exit(dp->phy[i]);
+ 		if (ret)
+@@ -1682,9 +1676,13 @@ int zynqmp_dp_probe(struct zynqmp_dpsub *dpsub, struct drm_device *drm)
+ 		return PTR_ERR(dp->reset);
+ 	}
+ 
++	ret = zynqmp_dp_reset(dp, false);
++	if (ret < 0)
++		return ret;
++
+ 	ret = zynqmp_dp_phy_probe(dp);
+ 	if (ret)
+-		return ret;
++		goto err_reset;
+ 
+ 	/* Initialize the hardware. */
+ 	zynqmp_dp_write(dp, ZYNQMP_DP_TX_PHY_POWER_DOWN,
+@@ -1696,7 +1694,7 @@ int zynqmp_dp_probe(struct zynqmp_dpsub *dpsub, struct drm_device *drm)
+ 
+ 	ret = zynqmp_dp_phy_init(dp);
+ 	if (ret)
+-		return ret;
++		goto err_reset;
+ 
+ 	zynqmp_dp_write(dp, ZYNQMP_DP_TRANSMITTER_ENABLE, 1);
+ 
+@@ -1708,15 +1706,18 @@ int zynqmp_dp_probe(struct zynqmp_dpsub *dpsub, struct drm_device *drm)
+ 					zynqmp_dp_irq_handler, IRQF_ONESHOT,
+ 					dev_name(dp->dev), dp);
+ 	if (ret < 0)
+-		goto error;
++		goto err_phy_exit;
+ 
+ 	dev_dbg(dp->dev, "ZynqMP DisplayPort Tx probed with %u lanes\n",
+ 		dp->num_lanes);
+ 
+ 	return 0;
+ 
+-error:
++err_phy_exit:
+ 	zynqmp_dp_phy_exit(dp);
++err_reset:
++	zynqmp_dp_reset(dp, true);
++
+ 	return ret;
+ }
+ 
+@@ -1734,4 +1735,5 @@ void zynqmp_dp_remove(struct zynqmp_dpsub *dpsub)
+ 	zynqmp_dp_write(dp, ZYNQMP_DP_INT_DS, 0xffffffff);
+ 
+ 	zynqmp_dp_phy_exit(dp);
++	zynqmp_dp_reset(dp, true);
+ }
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index d1ab2dccf6fd7..580d378342c41 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -415,8 +415,6 @@ static int hidinput_get_battery_property(struct power_supply *psy,
+ 
+ 		if (dev->battery_status == HID_BATTERY_UNKNOWN)
+ 			val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
+-		else if (dev->battery_capacity == 100)
+-			val->intval = POWER_SUPPLY_STATUS_FULL;
+ 		else
+ 			val->intval = POWER_SUPPLY_STATUS_DISCHARGING;
+ 		break;
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 1f08c848c33de..998aad8a9e608 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -176,8 +176,6 @@ static const struct i2c_hid_quirks {
+ 		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
+ 	{ I2C_VENDOR_ID_RAYDIUM, I2C_PRODUCT_ID_RAYDIUM_3118,
+ 		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
+-	{ USB_VENDOR_ID_ELAN, HID_ANY_ID,
+-		 I2C_HID_QUIRK_BOGUS_IRQ },
+ 	{ USB_VENDOR_ID_ALPS_JP, HID_ANY_ID,
+ 		 I2C_HID_QUIRK_RESET_ON_RESUME },
+ 	{ I2C_VENDOR_ID_SYNAPTICS, I2C_PRODUCT_ID_SYNAPTICS_SYNA2393,
+@@ -188,7 +186,8 @@ static const struct i2c_hid_quirks {
+ 	 * Sending the wakeup after reset actually break ELAN touchscreen controller
+ 	 */
+ 	{ USB_VENDOR_ID_ELAN, HID_ANY_ID,
+-		 I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET },
++		 I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET |
++		 I2C_HID_QUIRK_BOGUS_IRQ },
+ 	{ 0, 0 }
+ };
+ 
+diff --git a/drivers/hwmon/pmbus/ibm-cffps.c b/drivers/hwmon/pmbus/ibm-cffps.c
+index 2fb7540ee952b..79bc2032dcb2a 100644
+--- a/drivers/hwmon/pmbus/ibm-cffps.c
++++ b/drivers/hwmon/pmbus/ibm-cffps.c
+@@ -50,9 +50,9 @@
+ #define CFFPS_MFR_VAUX_FAULT			BIT(6)
+ #define CFFPS_MFR_CURRENT_SHARE_WARNING		BIT(7)
+ 
+-#define CFFPS_LED_BLINK				BIT(0)
+-#define CFFPS_LED_ON				BIT(1)
+-#define CFFPS_LED_OFF				BIT(2)
++#define CFFPS_LED_BLINK				(BIT(0) | BIT(6))
++#define CFFPS_LED_ON				(BIT(1) | BIT(6))
++#define CFFPS_LED_OFF				(BIT(2) | BIT(6))
+ #define CFFPS_BLINK_RATE_MS			250
+ 
+ enum {
+diff --git a/drivers/iio/dac/ad5624r_spi.c b/drivers/iio/dac/ad5624r_spi.c
+index 2b2b8edfd258c..ab4997bfd6d45 100644
+--- a/drivers/iio/dac/ad5624r_spi.c
++++ b/drivers/iio/dac/ad5624r_spi.c
+@@ -229,7 +229,7 @@ static int ad5624r_probe(struct spi_device *spi)
+ 	if (!indio_dev)
+ 		return -ENOMEM;
+ 	st = iio_priv(indio_dev);
+-	st->reg = devm_regulator_get(&spi->dev, "vcc");
++	st->reg = devm_regulator_get_optional(&spi->dev, "vref");
+ 	if (!IS_ERR(st->reg)) {
+ 		ret = regulator_enable(st->reg);
+ 		if (ret)
+@@ -240,6 +240,22 @@ static int ad5624r_probe(struct spi_device *spi)
+ 			goto error_disable_reg;
+ 
+ 		voltage_uv = ret;
++	} else {
++		if (PTR_ERR(st->reg) != -ENODEV)
++			return PTR_ERR(st->reg);
++		/* Backwards compatibility. This naming is not correct */
++		st->reg = devm_regulator_get_optional(&spi->dev, "vcc");
++		if (!IS_ERR(st->reg)) {
++			ret = regulator_enable(st->reg);
++			if (ret)
++				return ret;
++
++			ret = regulator_get_voltage(st->reg);
++			if (ret < 0)
++				goto error_disable_reg;
++
++			voltage_uv = ret;
++		}
+ 	}
+ 
+ 	spi_set_drvdata(spi, indio_dev);
+diff --git a/drivers/iio/temperature/ltc2983.c b/drivers/iio/temperature/ltc2983.c
+index 3b5ba26d7d867..3b4a0e60e6059 100644
+--- a/drivers/iio/temperature/ltc2983.c
++++ b/drivers/iio/temperature/ltc2983.c
+@@ -89,6 +89,8 @@
+ 
+ #define	LTC2983_STATUS_START_MASK	BIT(7)
+ #define	LTC2983_STATUS_START(x)		FIELD_PREP(LTC2983_STATUS_START_MASK, x)
++#define	LTC2983_STATUS_UP_MASK		GENMASK(7, 6)
++#define	LTC2983_STATUS_UP(reg)		FIELD_GET(LTC2983_STATUS_UP_MASK, reg)
+ 
+ #define	LTC2983_STATUS_CHAN_SEL_MASK	GENMASK(4, 0)
+ #define	LTC2983_STATUS_CHAN_SEL(x) \
+@@ -1362,17 +1364,16 @@ put_child:
+ 
+ static int ltc2983_setup(struct ltc2983_data *st, bool assign_iio)
+ {
+-	u32 iio_chan_t = 0, iio_chan_v = 0, chan, iio_idx = 0;
++	u32 iio_chan_t = 0, iio_chan_v = 0, chan, iio_idx = 0, status;
+ 	int ret;
+-	unsigned long time;
+-
+-	/* make sure the device is up */
+-	time = wait_for_completion_timeout(&st->completion,
+-					    msecs_to_jiffies(250));
+ 
+-	if (!time) {
++	/* make sure the device is up: start bit (7) is 0 and done bit (6) is 1 */
++	ret = regmap_read_poll_timeout(st->regmap, LTC2983_STATUS_REG, status,
++				       LTC2983_STATUS_UP(status) == 1, 25000,
++				       25000 * 10);
++	if (ret) {
+ 		dev_err(&st->spi->dev, "Device startup timed out\n");
+-		return -ETIMEDOUT;
++		return ret;
+ 	}
+ 
+ 	st->iio_chan = devm_kzalloc(&st->spi->dev,
+@@ -1492,10 +1493,11 @@ static int ltc2983_probe(struct spi_device *spi)
+ 	ret = ltc2983_parse_dt(st);
+ 	if (ret)
+ 		return ret;
+-	/*
+-	 * let's request the irq now so it is used to sync the device
+-	 * startup in ltc2983_setup()
+-	 */
++
++	ret = ltc2983_setup(st, true);
++	if (ret)
++		return ret;
++
+ 	ret = devm_request_irq(&spi->dev, spi->irq, ltc2983_irq_handler,
+ 			       IRQF_TRIGGER_RISING, name, st);
+ 	if (ret) {
+@@ -1503,10 +1505,6 @@ static int ltc2983_probe(struct spi_device *spi)
+ 		return ret;
+ 	}
+ 
+-	ret = ltc2983_setup(st, true);
+-	if (ret)
+-		return ret;
+-
+ 	indio_dev->name = name;
+ 	indio_dev->num_channels = st->iio_channels;
+ 	indio_dev->channels = st->iio_chan;
+diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c
+index da8adadf47559..75b6da00065a3 100644
+--- a/drivers/infiniband/core/iwcm.c
++++ b/drivers/infiniband/core/iwcm.c
+@@ -1187,29 +1187,34 @@ static int __init iw_cm_init(void)
+ 
+ 	ret = iwpm_init(RDMA_NL_IWCM);
+ 	if (ret)
+-		pr_err("iw_cm: couldn't init iwpm\n");
+-	else
+-		rdma_nl_register(RDMA_NL_IWCM, iwcm_nl_cb_table);
++		return ret;
++
+ 	iwcm_wq = alloc_ordered_workqueue("iw_cm_wq", 0);
+ 	if (!iwcm_wq)
+-		return -ENOMEM;
++		goto err_alloc;
+ 
+ 	iwcm_ctl_table_hdr = register_net_sysctl(&init_net, "net/iw_cm",
+ 						 iwcm_ctl_table);
+ 	if (!iwcm_ctl_table_hdr) {
+ 		pr_err("iw_cm: couldn't register sysctl paths\n");
+-		destroy_workqueue(iwcm_wq);
+-		return -ENOMEM;
++		goto err_sysctl;
+ 	}
+ 
++	rdma_nl_register(RDMA_NL_IWCM, iwcm_nl_cb_table);
+ 	return 0;
++
++err_sysctl:
++	destroy_workqueue(iwcm_wq);
++err_alloc:
++	iwpm_exit(RDMA_NL_IWCM);
++	return -ENOMEM;
+ }
+ 
+ static void __exit iw_cm_cleanup(void)
+ {
++	rdma_nl_unregister(RDMA_NL_IWCM);
+ 	unregister_net_sysctl_table(iwcm_ctl_table_hdr);
+ 	destroy_workqueue(iwcm_wq);
+-	rdma_nl_unregister(RDMA_NL_IWCM);
+ 	iwpm_exit(RDMA_NL_IWCM);
+ }
+ 
+diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
+index 4e940fc50bba6..2ece682c7835b 100644
+--- a/drivers/infiniband/hw/efa/efa_verbs.c
++++ b/drivers/infiniband/hw/efa/efa_verbs.c
+@@ -717,7 +717,6 @@ struct ib_qp *efa_create_qp(struct ib_pd *ibpd,
+ 
+ 	qp->qp_handle = create_qp_resp.qp_handle;
+ 	qp->ibqp.qp_num = create_qp_resp.qp_num;
+-	qp->ibqp.qp_type = init_attr->qp_type;
+ 	qp->max_send_wr = init_attr->cap.max_send_wr;
+ 	qp->max_recv_wr = init_attr->cap.max_recv_wr;
+ 	qp->max_send_sge = init_attr->cap.max_send_sge;
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index 786c6316273f7..b6e453e9ba236 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -650,12 +650,7 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd,
+ 
+ 	ppd->pkeys[default_pkey_idx] = DEFAULT_P_KEY;
+ 	ppd->part_enforce |= HFI1_PART_ENFORCE_IN;
+-
+-	if (loopback) {
+-		dd_dev_err(dd, "Faking data partition 0x8001 in idx %u\n",
+-			   !default_pkey_idx);
+-		ppd->pkeys[!default_pkey_idx] = 0x8001;
+-	}
++	ppd->pkeys[0] = 0x8001;
+ 
+ 	INIT_WORK(&ppd->link_vc_work, handle_verify_cap);
+ 	INIT_WORK(&ppd->link_up_work, handle_link_up);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index ef1452215b17d..7ce9ad8aee1ec 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -740,7 +740,6 @@ static int alloc_qp_db(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ 				goto err_out;
+ 			}
+ 			hr_qp->en_flags |= HNS_ROCE_QP_CAP_SQ_RECORD_DB;
+-			resp->cap_flags |= HNS_ROCE_QP_CAP_SQ_RECORD_DB;
+ 		}
+ 
+ 		if (user_qp_has_rdb(hr_dev, init_attr, udata, resp)) {
+@@ -752,7 +751,6 @@ static int alloc_qp_db(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ 				goto err_sdb;
+ 			}
+ 			hr_qp->en_flags |= HNS_ROCE_QP_CAP_RQ_RECORD_DB;
+-			resp->cap_flags |= HNS_ROCE_QP_CAP_RQ_RECORD_DB;
+ 		}
+ 	} else {
+ 		/* QP doorbell register address */
+@@ -959,6 +957,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ 	}
+ 
+ 	if (udata) {
++		resp.cap_flags = hr_qp->en_flags;
+ 		ret = ib_copy_to_udata(udata, &resp,
+ 				       min(udata->outlen, sizeof(resp)));
+ 		if (ret) {
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 8beba002e5dd7..011477356a1de 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -1842,7 +1842,6 @@ static int get_atomic_mode(struct mlx5_ib_dev *dev,
+ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+ 			     struct mlx5_create_qp_params *params)
+ {
+-	struct mlx5_ib_create_qp *ucmd = params->ucmd;
+ 	struct ib_qp_init_attr *attr = params->attr;
+ 	u32 uidx = params->uidx;
+ 	struct mlx5_ib_resources *devr = &dev->devr;
+@@ -1862,8 +1861,6 @@ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+ 	if (!in)
+ 		return -ENOMEM;
+ 
+-	if (MLX5_CAP_GEN(mdev, ece_support) && ucmd)
+-		MLX5_SET(create_qp_in, in, ece, ucmd->ece_options);
+ 	qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
+ 
+ 	MLX5_SET(qpc, qpc, st, MLX5_QP_ST_XRC);
+diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
+index 30cb30046b15e..35963e6bf9fab 100644
+--- a/drivers/iommu/intel/pasid.h
++++ b/drivers/iommu/intel/pasid.h
+@@ -28,12 +28,12 @@
+ #define VCMD_CMD_ALLOC			0x1
+ #define VCMD_CMD_FREE			0x2
+ #define VCMD_VRSP_IP			0x1
+-#define VCMD_VRSP_SC(e)			(((e) >> 1) & 0x3)
++#define VCMD_VRSP_SC(e)			(((e) & 0xff) >> 1)
+ #define VCMD_VRSP_SC_SUCCESS		0
+-#define VCMD_VRSP_SC_NO_PASID_AVAIL	2
+-#define VCMD_VRSP_SC_INVALID_PASID	2
+-#define VCMD_VRSP_RESULT_PASID(e)	(((e) >> 8) & 0xfffff)
+-#define VCMD_CMD_OPERAND(e)		((e) << 8)
++#define VCMD_VRSP_SC_NO_PASID_AVAIL	16
++#define VCMD_VRSP_SC_INVALID_PASID	16
++#define VCMD_VRSP_RESULT_PASID(e)	(((e) >> 16) & 0xfffff)
++#define VCMD_CMD_OPERAND(e)		((e) << 16)
+ /*
+  * Domain ID reserved for pasid entries programmed for first-level
+  * only and pass-through transfer modes.
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index 5665b6ea8119f..75378e35c3d66 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -168,7 +168,8 @@ static void cmdq_task_insert_into_thread(struct cmdq_task *task)
+ 	dma_sync_single_for_cpu(dev, prev_task->pa_base,
+ 				prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
+ 	prev_task_base[CMDQ_NUM_CMD(prev_task->pkt) - 1] =
+-		(u64)CMDQ_JUMP_BY_PA << 32 | task->pa_base;
++		(u64)CMDQ_JUMP_BY_PA << 32 |
++		(task->pa_base >> task->cmdq->shift_pa);
+ 	dma_sync_single_for_device(dev, prev_task->pa_base,
+ 				   prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
+ 
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 70ae6f3aede94..2aa4acd33af39 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -2643,7 +2643,12 @@ static void *crypt_page_alloc(gfp_t gfp_mask, void *pool_data)
+ 	struct crypt_config *cc = pool_data;
+ 	struct page *page;
+ 
+-	if (unlikely(percpu_counter_compare(&cc->n_allocated_pages, dm_crypt_pages_per_client) >= 0) &&
++	/*
++	 * Note, percpu_counter_read_positive() may over (and under) estimate
++	 * the current usage by at most (batch - 1) * num_online_cpus() pages,
++	 * but avoids potential spinlock contention of an exact result.
++	 */
++	if (unlikely(percpu_counter_read_positive(&cc->n_allocated_pages) >= dm_crypt_pages_per_client) &&
+ 	    likely(gfp_mask & __GFP_NORETRY))
+ 		return NULL;
+ 
+diff --git a/drivers/media/cec/platform/stm32/stm32-cec.c b/drivers/media/cec/platform/stm32/stm32-cec.c
+index ea4b1ebfca991..0ffd89712536b 100644
+--- a/drivers/media/cec/platform/stm32/stm32-cec.c
++++ b/drivers/media/cec/platform/stm32/stm32-cec.c
+@@ -305,14 +305,16 @@ static int stm32_cec_probe(struct platform_device *pdev)
+ 
+ 	cec->clk_hdmi_cec = devm_clk_get(&pdev->dev, "hdmi-cec");
+ 	if (IS_ERR(cec->clk_hdmi_cec) &&
+-	    PTR_ERR(cec->clk_hdmi_cec) == -EPROBE_DEFER)
+-		return -EPROBE_DEFER;
++	    PTR_ERR(cec->clk_hdmi_cec) == -EPROBE_DEFER) {
++		ret = -EPROBE_DEFER;
++		goto err_unprepare_cec_clk;
++	}
+ 
+ 	if (!IS_ERR(cec->clk_hdmi_cec)) {
+ 		ret = clk_prepare(cec->clk_hdmi_cec);
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "Can't prepare hdmi-cec clock\n");
+-			return ret;
++			goto err_unprepare_cec_clk;
+ 		}
+ 	}
+ 
+@@ -324,19 +326,27 @@ static int stm32_cec_probe(struct platform_device *pdev)
+ 			CEC_NAME, caps,	CEC_MAX_LOG_ADDRS);
+ 	ret = PTR_ERR_OR_ZERO(cec->adap);
+ 	if (ret)
+-		return ret;
++		goto err_unprepare_hdmi_cec_clk;
+ 
+ 	ret = cec_register_adapter(cec->adap, &pdev->dev);
+-	if (ret) {
+-		cec_delete_adapter(cec->adap);
+-		return ret;
+-	}
++	if (ret)
++		goto err_delete_adapter;
+ 
+ 	cec_hw_init(cec);
+ 
+ 	platform_set_drvdata(pdev, cec);
+ 
+ 	return 0;
++
++err_delete_adapter:
++	cec_delete_adapter(cec->adap);
++
++err_unprepare_hdmi_cec_clk:
++	clk_unprepare(cec->clk_hdmi_cec);
++
++err_unprepare_cec_clk:
++	clk_unprepare(cec->clk_cec);
++	return ret;
+ }
+ 
+ static int stm32_cec_remove(struct platform_device *pdev)
+diff --git a/drivers/media/cec/platform/tegra/tegra_cec.c b/drivers/media/cec/platform/tegra/tegra_cec.c
+index 1ac0c70a59818..5e907395ca2e5 100644
+--- a/drivers/media/cec/platform/tegra/tegra_cec.c
++++ b/drivers/media/cec/platform/tegra/tegra_cec.c
+@@ -366,7 +366,11 @@ static int tegra_cec_probe(struct platform_device *pdev)
+ 		return -ENOENT;
+ 	}
+ 
+-	clk_prepare_enable(cec->clk);
++	ret = clk_prepare_enable(cec->clk);
++	if (ret) {
++		dev_err(&pdev->dev, "Unable to prepare clock for CEC\n");
++		return ret;
++	}
+ 
+ 	/* set context info. */
+ 	cec->dev = &pdev->dev;
+@@ -446,9 +450,7 @@ static int tegra_cec_resume(struct platform_device *pdev)
+ 
+ 	dev_notice(&pdev->dev, "Resuming\n");
+ 
+-	clk_prepare_enable(cec->clk);
+-
+-	return 0;
++	return clk_prepare_enable(cec->clk);
+ }
+ #endif
+ 
+diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c
+index 082796534b0ae..bb02354a48b81 100644
+--- a/drivers/media/dvb-frontends/dib8000.c
++++ b/drivers/media/dvb-frontends/dib8000.c
+@@ -2107,32 +2107,55 @@ static void dib8000_load_ana_fe_coefs(struct dib8000_state *state, const s16 *an
+ 			dib8000_write_word(state, 117 + mode, ana_fe[mode]);
+ }
+ 
+-static const u16 lut_prbs_2k[14] = {
+-	0, 0x423, 0x009, 0x5C7, 0x7A6, 0x3D8, 0x527, 0x7FF, 0x79B, 0x3D6, 0x3A2, 0x53B, 0x2F4, 0x213
++static const u16 lut_prbs_2k[13] = {
++	0x423, 0x009, 0x5C7,
++	0x7A6, 0x3D8, 0x527,
++	0x7FF, 0x79B, 0x3D6,
++	0x3A2, 0x53B, 0x2F4,
++	0x213
+ };
+-static const u16 lut_prbs_4k[14] = {
+-	0, 0x208, 0x0C3, 0x7B9, 0x423, 0x5C7, 0x3D8, 0x7FF, 0x3D6, 0x53B, 0x213, 0x029, 0x0D0, 0x48E
++
++static const u16 lut_prbs_4k[13] = {
++	0x208, 0x0C3, 0x7B9,
++	0x423, 0x5C7, 0x3D8,
++	0x7FF, 0x3D6, 0x53B,
++	0x213, 0x029, 0x0D0,
++	0x48E
+ };
+-static const u16 lut_prbs_8k[14] = {
+-	0, 0x740, 0x069, 0x7DD, 0x208, 0x7B9, 0x5C7, 0x7FF, 0x53B, 0x029, 0x48E, 0x4C4, 0x367, 0x684
++
++static const u16 lut_prbs_8k[13] = {
++	0x740, 0x069, 0x7DD,
++	0x208, 0x7B9, 0x5C7,
++	0x7FF, 0x53B, 0x029,
++	0x48E, 0x4C4, 0x367,
++	0x684
+ };
+ 
+ static u16 dib8000_get_init_prbs(struct dib8000_state *state, u16 subchannel)
+ {
+ 	int sub_channel_prbs_group = 0;
++	int prbs_group;
+ 
+-	sub_channel_prbs_group = (subchannel / 3) + 1;
+-	dprintk("sub_channel_prbs_group = %d , subchannel =%d prbs = 0x%04x\n", sub_channel_prbs_group, subchannel, lut_prbs_8k[sub_channel_prbs_group]);
++	sub_channel_prbs_group = subchannel / 3;
++	if (sub_channel_prbs_group >= ARRAY_SIZE(lut_prbs_2k))
++		return 0;
+ 
+ 	switch (state->fe[0]->dtv_property_cache.transmission_mode) {
+ 	case TRANSMISSION_MODE_2K:
+-			return lut_prbs_2k[sub_channel_prbs_group];
++		prbs_group = lut_prbs_2k[sub_channel_prbs_group];
++		break;
+ 	case TRANSMISSION_MODE_4K:
+-			return lut_prbs_4k[sub_channel_prbs_group];
++		prbs_group =  lut_prbs_4k[sub_channel_prbs_group];
++		break;
+ 	default:
+ 	case TRANSMISSION_MODE_8K:
+-			return lut_prbs_8k[sub_channel_prbs_group];
++		prbs_group = lut_prbs_8k[sub_channel_prbs_group];
+ 	}
++
++	dprintk("sub_channel_prbs_group = %d , subchannel =%d prbs = 0x%04x\n",
++		sub_channel_prbs_group, subchannel, prbs_group);
++
++	return prbs_group;
+ }
+ 
+ static void dib8000_set_13seg_channel(struct dib8000_state *state)
+@@ -2409,10 +2432,8 @@ static void dib8000_set_isdbt_common_channel(struct dib8000_state *state, u8 seq
+ 	/* TSB or ISDBT ? apply it now */
+ 	if (c->isdbt_sb_mode) {
+ 		dib8000_set_sb_channel(state);
+-		if (c->isdbt_sb_subchannel < 14)
+-			init_prbs = dib8000_get_init_prbs(state, c->isdbt_sb_subchannel);
+-		else
+-			init_prbs = 0;
++		init_prbs = dib8000_get_init_prbs(state,
++						  c->isdbt_sb_subchannel);
+ 	} else {
+ 		dib8000_set_13seg_channel(state);
+ 		init_prbs = 0xfff;
+@@ -3004,6 +3025,7 @@ static int dib8000_tune(struct dvb_frontend *fe)
+ 
+ 	unsigned long *timeout = &state->timeout;
+ 	unsigned long now = jiffies;
++	u16 init_prbs;
+ #ifdef DIB8000_AGC_FREEZE
+ 	u16 agc1, agc2;
+ #endif
+@@ -3302,8 +3324,10 @@ static int dib8000_tune(struct dvb_frontend *fe)
+ 		break;
+ 
+ 	case CT_DEMOD_STEP_11:  /* 41 : init prbs autosearch */
+-		if (state->subchannel <= 41) {
+-			dib8000_set_subchannel_prbs(state, dib8000_get_init_prbs(state, state->subchannel));
++		init_prbs = dib8000_get_init_prbs(state, state->subchannel);
++
++		if (init_prbs) {
++			dib8000_set_subchannel_prbs(state, init_prbs);
+ 			*tune_state = CT_DEMOD_STEP_9;
+ 		} else {
+ 			*tune_state = CT_DEMOD_STOP;
+diff --git a/drivers/media/i2c/imx258.c b/drivers/media/i2c/imx258.c
+index ccb55fd1d506f..e6104ee97ed29 100644
+--- a/drivers/media/i2c/imx258.c
++++ b/drivers/media/i2c/imx258.c
+@@ -22,7 +22,7 @@
+ #define IMX258_CHIP_ID			0x0258
+ 
+ /* V_TIMING internal */
+-#define IMX258_VTS_30FPS		0x0c98
++#define IMX258_VTS_30FPS		0x0c50
+ #define IMX258_VTS_30FPS_2K		0x0638
+ #define IMX258_VTS_30FPS_VGA		0x034c
+ #define IMX258_VTS_MAX			0xffff
+@@ -46,7 +46,7 @@
+ /* Analog gain control */
+ #define IMX258_REG_ANALOG_GAIN		0x0204
+ #define IMX258_ANA_GAIN_MIN		0
+-#define IMX258_ANA_GAIN_MAX		0x1fff
++#define IMX258_ANA_GAIN_MAX		480
+ #define IMX258_ANA_GAIN_STEP		1
+ #define IMX258_ANA_GAIN_DEFAULT		0x0
+ 
+diff --git a/drivers/media/i2c/tda1997x.c b/drivers/media/i2c/tda1997x.c
+index 9554c8348c020..17cc69c3227f8 100644
+--- a/drivers/media/i2c/tda1997x.c
++++ b/drivers/media/i2c/tda1997x.c
+@@ -1695,14 +1695,15 @@ static int tda1997x_query_dv_timings(struct v4l2_subdev *sd,
+ 				     struct v4l2_dv_timings *timings)
+ {
+ 	struct tda1997x_state *state = to_state(sd);
++	int ret;
+ 
+ 	v4l_dbg(1, debug, state->client, "%s\n", __func__);
+ 	memset(timings, 0, sizeof(struct v4l2_dv_timings));
+ 	mutex_lock(&state->lock);
+-	tda1997x_detect_std(state, timings);
++	ret = tda1997x_detect_std(state, timings);
+ 	mutex_unlock(&state->lock);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static const struct v4l2_subdev_video_ops tda1997x_video_ops = {
+diff --git a/drivers/media/rc/rc-loopback.c b/drivers/media/rc/rc-loopback.c
+index 1ba3f96ffa7dc..40ab66c850f23 100644
+--- a/drivers/media/rc/rc-loopback.c
++++ b/drivers/media/rc/rc-loopback.c
+@@ -42,7 +42,7 @@ static int loop_set_tx_mask(struct rc_dev *dev, u32 mask)
+ 
+ 	if ((mask & (RXMASK_REGULAR | RXMASK_LEARNING)) != mask) {
+ 		dprintk("invalid tx mask: %u\n", mask);
+-		return -EINVAL;
++		return 2;
+ 	}
+ 
+ 	dprintk("setting tx mask: %u\n", mask);
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index c7172b8952a96..5f0e2fa69da5c 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -898,8 +898,8 @@ static int uvc_ioctl_g_input(struct file *file, void *fh, unsigned int *input)
+ {
+ 	struct uvc_fh *handle = fh;
+ 	struct uvc_video_chain *chain = handle->chain;
++	u8 *buf;
+ 	int ret;
+-	u8 i;
+ 
+ 	if (chain->selector == NULL ||
+ 	    (chain->dev->quirks & UVC_QUIRK_IGNORE_SELECTOR_UNIT)) {
+@@ -907,22 +907,27 @@ static int uvc_ioctl_g_input(struct file *file, void *fh, unsigned int *input)
+ 		return 0;
+ 	}
+ 
++	buf = kmalloc(1, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
+ 	ret = uvc_query_ctrl(chain->dev, UVC_GET_CUR, chain->selector->id,
+ 			     chain->dev->intfnum,  UVC_SU_INPUT_SELECT_CONTROL,
+-			     &i, 1);
+-	if (ret < 0)
+-		return ret;
++			     buf, 1);
++	if (!ret)
++		*input = *buf - 1;
+ 
+-	*input = i - 1;
+-	return 0;
++	kfree(buf);
++
++	return ret;
+ }
+ 
+ static int uvc_ioctl_s_input(struct file *file, void *fh, unsigned int input)
+ {
+ 	struct uvc_fh *handle = fh;
+ 	struct uvc_video_chain *chain = handle->chain;
++	u8 *buf;
+ 	int ret;
+-	u32 i;
+ 
+ 	ret = uvc_acquire_privileges(handle);
+ 	if (ret < 0)
+@@ -938,10 +943,17 @@ static int uvc_ioctl_s_input(struct file *file, void *fh, unsigned int input)
+ 	if (input >= chain->selector->bNrInPins)
+ 		return -EINVAL;
+ 
+-	i = input + 1;
+-	return uvc_query_ctrl(chain->dev, UVC_SET_CUR, chain->selector->id,
+-			      chain->dev->intfnum, UVC_SU_INPUT_SELECT_CONTROL,
+-			      &i, 1);
++	buf = kmalloc(1, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	*buf = input + 1;
++	ret = uvc_query_ctrl(chain->dev, UVC_SET_CUR, chain->selector->id,
++			     chain->dev->intfnum, UVC_SU_INPUT_SELECT_CONTROL,
++			     buf, 1);
++	kfree(buf);
++
++	return ret;
+ }
+ 
+ static int uvc_ioctl_queryctrl(struct file *file, void *fh,
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index 230d65a642178..af48705c704f8 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -196,7 +196,7 @@ bool v4l2_find_dv_timings_cap(struct v4l2_dv_timings *t,
+ 	if (!v4l2_valid_dv_timings(t, cap, fnc, fnc_handle))
+ 		return false;
+ 
+-	for (i = 0; i < v4l2_dv_timings_presets[i].bt.width; i++) {
++	for (i = 0; v4l2_dv_timings_presets[i].bt.width; i++) {
+ 		if (v4l2_valid_dv_timings(v4l2_dv_timings_presets + i, cap,
+ 					  fnc, fnc_handle) &&
+ 		    v4l2_match_dv_timings(t, v4l2_dv_timings_presets + i,
+@@ -218,7 +218,7 @@ bool v4l2_find_dv_timings_cea861_vic(struct v4l2_dv_timings *t, u8 vic)
+ {
+ 	unsigned int i;
+ 
+-	for (i = 0; i < v4l2_dv_timings_presets[i].bt.width; i++) {
++	for (i = 0; v4l2_dv_timings_presets[i].bt.width; i++) {
+ 		const struct v4l2_bt_timings *bt =
+ 			&v4l2_dv_timings_presets[i].bt;
+ 
+diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+index c2338750313c4..a49782dd903cd 100644
+--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
++++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+@@ -2238,7 +2238,8 @@ int vmci_qp_broker_map(struct vmci_handle handle,
+ 
+ 	result = VMCI_SUCCESS;
+ 
+-	if (context_id != VMCI_HOST_CONTEXT_ID) {
++	if (context_id != VMCI_HOST_CONTEXT_ID &&
++	    !QPBROKERSTATE_HAS_MEM(entry)) {
+ 		struct vmci_qp_page_store page_store;
+ 
+ 		page_store.pages = guest_mem;
+@@ -2345,7 +2346,8 @@ int vmci_qp_broker_unmap(struct vmci_handle handle,
+ 		goto out;
+ 	}
+ 
+-	if (context_id != VMCI_HOST_CONTEXT_ID) {
++	if (context_id != VMCI_HOST_CONTEXT_ID &&
++	    QPBROKERSTATE_HAS_MEM(entry)) {
+ 		qp_acquire_queue_mutex(entry->produce_q);
+ 		result = qp_save_headers(entry);
+ 		if (result < VMCI_SUCCESS)
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 87bac99207023..94caee49da99c 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -541,6 +541,7 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 		return mmc_sanitize(card);
+ 
+ 	mmc_wait_for_req(card->host, &mrq);
++	memcpy(&idata->ic.response, cmd.resp, sizeof(cmd.resp));
+ 
+ 	if (cmd.error) {
+ 		dev_err(mmc_dev(card->host), "%s: cmd error %d\n",
+@@ -590,8 +591,6 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 	if (idata->ic.postsleep_min_us)
+ 		usleep_range(idata->ic.postsleep_min_us, idata->ic.postsleep_max_us);
+ 
+-	memcpy(&(idata->ic.response), cmd.resp, sizeof(cmd.resp));
+-
+ 	if (idata->rpmb || (cmd.flags & MMC_RSP_R1B) == MMC_RSP_R1B) {
+ 		/*
+ 		 * Ensure RPMB/R1B command has completed by polling CMD13
+diff --git a/drivers/mmc/host/rtsx_pci_sdmmc.c b/drivers/mmc/host/rtsx_pci_sdmmc.c
+index eb395e1442071..e00167bcfaf6d 100644
+--- a/drivers/mmc/host/rtsx_pci_sdmmc.c
++++ b/drivers/mmc/host/rtsx_pci_sdmmc.c
+@@ -539,9 +539,22 @@ static int sd_write_long_data(struct realtek_pci_sdmmc *host,
+ 	return 0;
+ }
+ 
++static inline void sd_enable_initial_mode(struct realtek_pci_sdmmc *host)
++{
++	rtsx_pci_write_register(host->pcr, SD_CFG1,
++			SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_128);
++}
++
++static inline void sd_disable_initial_mode(struct realtek_pci_sdmmc *host)
++{
++	rtsx_pci_write_register(host->pcr, SD_CFG1,
++			SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_0);
++}
++
+ static int sd_rw_multi(struct realtek_pci_sdmmc *host, struct mmc_request *mrq)
+ {
+ 	struct mmc_data *data = mrq->data;
++	int err;
+ 
+ 	if (host->sg_count < 0) {
+ 		data->error = host->sg_count;
+@@ -550,22 +563,19 @@ static int sd_rw_multi(struct realtek_pci_sdmmc *host, struct mmc_request *mrq)
+ 		return data->error;
+ 	}
+ 
+-	if (data->flags & MMC_DATA_READ)
+-		return sd_read_long_data(host, mrq);
++	if (data->flags & MMC_DATA_READ) {
++		if (host->initial_mode)
++			sd_disable_initial_mode(host);
+ 
+-	return sd_write_long_data(host, mrq);
+-}
++		err = sd_read_long_data(host, mrq);
+ 
+-static inline void sd_enable_initial_mode(struct realtek_pci_sdmmc *host)
+-{
+-	rtsx_pci_write_register(host->pcr, SD_CFG1,
+-			SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_128);
+-}
++		if (host->initial_mode)
++			sd_enable_initial_mode(host);
+ 
+-static inline void sd_disable_initial_mode(struct realtek_pci_sdmmc *host)
+-{
+-	rtsx_pci_write_register(host->pcr, SD_CFG1,
+-			SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_0);
++		return err;
++	}
++
++	return sd_write_long_data(host, mrq);
+ }
+ 
+ static void sd_normal_rw(struct realtek_pci_sdmmc *host,
+diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
+index 3b8d456e857d5..fc38db64a6b48 100644
+--- a/drivers/mmc/host/sdhci-of-arasan.c
++++ b/drivers/mmc/host/sdhci-of-arasan.c
+@@ -159,6 +159,12 @@ struct sdhci_arasan_data {
+ /* Controller immediately reports SDHCI_CLOCK_INT_STABLE after enabling the
+  * internal clock even when the clock isn't stable */
+ #define SDHCI_ARASAN_QUIRK_CLOCK_UNSTABLE BIT(1)
++/*
++ * Some of the Arasan variations might not have timing requirements
++ * met at 25MHz for Default Speed mode, those controllers work at
++ * 19MHz instead
++ */
++#define SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN BIT(2)
+ };
+ 
+ struct sdhci_arasan_of_data {
+@@ -267,7 +273,12 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock)
+ 			 * through low speeds without power cycling.
+ 			 */
+ 			sdhci_set_clock(host, host->max_clk);
+-			phy_power_on(sdhci_arasan->phy);
++			if (phy_power_on(sdhci_arasan->phy)) {
++				pr_err("%s: Cannot power on phy.\n",
++				       mmc_hostname(host->mmc));
++				return;
++			}
++
+ 			sdhci_arasan->is_phy_on = true;
+ 
+ 			/*
+@@ -290,6 +301,16 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock)
+ 		sdhci_arasan->is_phy_on = false;
+ 	}
+ 
++	if (sdhci_arasan->quirks & SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN) {
++		/*
++		 * Some of the Arasan variations might not have timing
++		 * requirements met at 25MHz for Default Speed mode,
++		 * those controllers work at 19MHz instead.
++		 */
++		if (clock == DEFAULT_SPEED_MAX_DTR)
++			clock = (DEFAULT_SPEED_MAX_DTR * 19) / 25;
++	}
++
+ 	/* Set the Input and Output Clock Phase Delays */
+ 	if (clk_data->set_clk_delays)
+ 		clk_data->set_clk_delays(host);
+@@ -307,7 +328,12 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock)
+ 		msleep(20);
+ 
+ 	if (ctrl_phy) {
+-		phy_power_on(sdhci_arasan->phy);
++		if (phy_power_on(sdhci_arasan->phy)) {
++			pr_err("%s: Cannot power on phy.\n",
++			       mmc_hostname(host->mmc));
++			return;
++		}
++
+ 		sdhci_arasan->is_phy_on = true;
+ 	}
+ }
+@@ -463,7 +489,9 @@ static int sdhci_arasan_suspend(struct device *dev)
+ 		ret = phy_power_off(sdhci_arasan->phy);
+ 		if (ret) {
+ 			dev_err(dev, "Cannot power off phy.\n");
+-			sdhci_resume_host(host);
++			if (sdhci_resume_host(host))
++				dev_err(dev, "Cannot resume host.\n");
++
+ 			return ret;
+ 		}
+ 		sdhci_arasan->is_phy_on = false;
+@@ -1598,6 +1626,8 @@ static int sdhci_arasan_probe(struct platform_device *pdev)
+ 	if (of_device_is_compatible(np, "xlnx,zynqmp-8.9a")) {
+ 		host->mmc_host_ops.execute_tuning =
+ 			arasan_zynqmp_execute_tuning;
++
++		sdhci_arasan->quirks |= SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN;
+ 	}
+ 
+ 	arasan_dt_parse_clk_phases(&pdev->dev, &sdhci_arasan->clk_data);
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 018af1e38eb9b..645c7cabcbe4d 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2219,7 +2219,6 @@ static int __bond_release_one(struct net_device *bond_dev,
+ 	/* recompute stats just before removing the slave */
+ 	bond_get_stats(bond->dev, &bond->bond_stats);
+ 
+-	bond_upper_dev_unlink(bond, slave);
+ 	/* unregister rx_handler early so bond_handle_frame wouldn't be called
+ 	 * for this slave anymore.
+ 	 */
+@@ -2228,6 +2227,8 @@ static int __bond_release_one(struct net_device *bond_dev,
+ 	if (BOND_MODE(bond) == BOND_MODE_8023AD)
+ 		bond_3ad_unbind_slave(slave);
+ 
++	bond_upper_dev_unlink(bond, slave);
++
+ 	if (bond_mode_can_use_xmit_hash(bond))
+ 		bond_update_slave_arr(bond, slave);
+ 
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index a455534740cdf..95e634cbc4b63 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -853,7 +853,8 @@ static int gswip_setup(struct dsa_switch *ds)
+ 
+ 	gswip_switch_mask(priv, 0, GSWIP_MAC_CTRL_2_MLEN,
+ 			  GSWIP_MAC_CTRL_2p(cpu_port));
+-	gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8, GSWIP_MAC_FLEN);
++	gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8 + ETH_FCS_LEN,
++		       GSWIP_MAC_FLEN);
+ 	gswip_switch_mask(priv, 0, GSWIP_BM_QUEUE_GCTRL_GL_MOD,
+ 			  GSWIP_BM_QUEUE_GCTRL);
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index 61f6f0287cbe1..ff9d84a7147f1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -10,7 +10,14 @@
+ 
+ static u16 hclge_errno_to_resp(int errno)
+ {
+-	return abs(errno);
++	int resp = abs(errno);
++
++	/* The status for pf to vf msg cmd is u16, constrainted by HW.
++	 * We need to keep the same type with it.
++	 * The intput errno is the stander error code, it's safely to
++	 * use a u16 to store the abs(errno).
++	 */
++	return (u16)resp;
+ }
+ 
+ /* hclge_gen_resp_to_vf: used to generate a synchronous response to VF when PF
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 7023aa147043f..f06c079e812ec 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -131,6 +131,30 @@ enum iavf_status iavf_free_virt_mem_d(struct iavf_hw *hw,
+ 	return 0;
+ }
+ 
++/**
++ * iavf_lock_timeout - try to set bit but give up after timeout
++ * @adapter: board private structure
++ * @bit: bit to set
++ * @msecs: timeout in msecs
++ *
++ * Returns 0 on success, negative on failure
++ **/
++static int iavf_lock_timeout(struct iavf_adapter *adapter,
++			     enum iavf_critical_section_t bit,
++			     unsigned int msecs)
++{
++	unsigned int wait, delay = 10;
++
++	for (wait = 0; wait < msecs; wait += delay) {
++		if (!test_and_set_bit(bit, &adapter->crit_section))
++			return 0;
++
++		msleep(delay);
++	}
++
++	return -1;
++}
++
+ /**
+  * iavf_schedule_reset - Set the flags and schedule a reset event
+  * @adapter: board private structure
+@@ -1951,7 +1975,6 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		/* check for hw reset */
+ 	reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK;
+ 	if (!reg_val) {
+-		adapter->state = __IAVF_RESETTING;
+ 		adapter->flags |= IAVF_FLAG_RESET_PENDING;
+ 		adapter->aq_required = 0;
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+@@ -2065,6 +2088,10 @@ static void iavf_reset_task(struct work_struct *work)
+ 	if (test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section))
+ 		return;
+ 
++	if (iavf_lock_timeout(adapter, __IAVF_IN_CRITICAL_TASK, 200)) {
++		schedule_work(&adapter->reset_task);
++		return;
++	}
+ 	while (test_and_set_bit(__IAVF_IN_CLIENT_TASK,
+ 				&adapter->crit_section))
+ 		usleep_range(500, 1000);
+@@ -2279,6 +2306,8 @@ static void iavf_adminq_task(struct work_struct *work)
+ 	if (!event.msg_buf)
+ 		goto out;
+ 
++	if (iavf_lock_timeout(adapter, __IAVF_IN_CRITICAL_TASK, 200))
++		goto freedom;
+ 	do {
+ 		ret = iavf_clean_arq_element(hw, &event, &pending);
+ 		v_op = (enum virtchnl_ops)le32_to_cpu(event.desc.cookie_high);
+@@ -2292,6 +2321,7 @@ static void iavf_adminq_task(struct work_struct *work)
+ 		if (pending != 0)
+ 			memset(event.msg_buf, 0, IAVF_MAX_AQ_BUF_SIZE);
+ 	} while (pending);
++	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
+ 
+ 	if ((adapter->flags &
+ 	     (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED)) ||
+@@ -3594,6 +3624,10 @@ static void iavf_init_task(struct work_struct *work)
+ 						    init_task.work);
+ 	struct iavf_hw *hw = &adapter->hw;
+ 
++	if (iavf_lock_timeout(adapter, __IAVF_IN_CRITICAL_TASK, 5000)) {
++		dev_warn(&adapter->pdev->dev, "failed to set __IAVF_IN_CRITICAL_TASK in %s\n", __FUNCTION__);
++		return;
++	}
+ 	switch (adapter->state) {
+ 	case __IAVF_STARTUP:
+ 		if (iavf_startup(adapter) < 0)
+@@ -3606,14 +3640,14 @@ static void iavf_init_task(struct work_struct *work)
+ 	case __IAVF_INIT_GET_RESOURCES:
+ 		if (iavf_init_get_resources(adapter) < 0)
+ 			goto init_failed;
+-		return;
++		goto out;
+ 	default:
+ 		goto init_failed;
+ 	}
+ 
+ 	queue_delayed_work(iavf_wq, &adapter->init_task,
+ 			   msecs_to_jiffies(30));
+-	return;
++	goto out;
+ init_failed:
+ 	if (++adapter->aq_wait_count > IAVF_AQ_MAX_ERR) {
+ 		dev_err(&adapter->pdev->dev,
+@@ -3622,9 +3656,11 @@ init_failed:
+ 		iavf_shutdown_adminq(hw);
+ 		adapter->state = __IAVF_STARTUP;
+ 		queue_delayed_work(iavf_wq, &adapter->init_task, HZ * 5);
+-		return;
++		goto out;
+ 	}
+ 	queue_delayed_work(iavf_wq, &adapter->init_task, HZ);
++out:
++	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
+ }
+ 
+ /**
+@@ -3641,9 +3677,12 @@ static void iavf_shutdown(struct pci_dev *pdev)
+ 	if (netif_running(netdev))
+ 		iavf_close(netdev);
+ 
++	if (iavf_lock_timeout(adapter, __IAVF_IN_CRITICAL_TASK, 5000))
++		dev_warn(&adapter->pdev->dev, "failed to set __IAVF_IN_CRITICAL_TASK in %s\n", __FUNCTION__);
+ 	/* Prevent the watchdog from running. */
+ 	adapter->state = __IAVF_REMOVE;
+ 	adapter->aq_required = 0;
++	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
+ 
+ #ifdef CONFIG_PM
+ 	pci_save_state(pdev);
+@@ -3871,10 +3910,6 @@ static void iavf_remove(struct pci_dev *pdev)
+ 				 err);
+ 	}
+ 
+-	/* Shut down all the garbage mashers on the detention level */
+-	adapter->state = __IAVF_REMOVE;
+-	adapter->aq_required = 0;
+-	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+ 	iavf_request_reset(adapter);
+ 	msleep(50);
+ 	/* If the FW isn't responding, kick it once, but only once. */
+@@ -3882,6 +3917,13 @@ static void iavf_remove(struct pci_dev *pdev)
+ 		iavf_request_reset(adapter);
+ 		msleep(50);
+ 	}
++	if (iavf_lock_timeout(adapter, __IAVF_IN_CRITICAL_TASK, 5000))
++		dev_warn(&adapter->pdev->dev, "failed to set __IAVF_IN_CRITICAL_TASK in %s\n", __FUNCTION__);
++
++	/* Shut down all the garbage mashers on the detention level */
++	adapter->state = __IAVF_REMOVE;
++	adapter->aq_required = 0;
++	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+ 	iavf_free_all_tx_resources(adapter);
+ 	iavf_free_all_rx_resources(adapter);
+ 	iavf_misc_irq_disable(adapter);
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 013dd29553814..cae090a072524 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -4083,6 +4083,7 @@ static irqreturn_t igc_msix_ring(int irq, void *data)
+  */
+ static int igc_request_msix(struct igc_adapter *adapter)
+ {
++	unsigned int num_q_vectors = adapter->num_q_vectors;
+ 	int i = 0, err = 0, vector = 0, free_vector = 0;
+ 	struct net_device *netdev = adapter->netdev;
+ 
+@@ -4091,7 +4092,13 @@ static int igc_request_msix(struct igc_adapter *adapter)
+ 	if (err)
+ 		goto err_out;
+ 
+-	for (i = 0; i < adapter->num_q_vectors; i++) {
++	if (num_q_vectors > MAX_Q_VECTORS) {
++		num_q_vectors = MAX_Q_VECTORS;
++		dev_warn(&adapter->pdev->dev,
++			 "The number of queue vectors (%d) is higher than max allowed (%d)\n",
++			 adapter->num_q_vectors, MAX_Q_VECTORS);
++	}
++	for (i = 0; i < num_q_vectors; i++) {
+ 		struct igc_q_vector *q_vector = adapter->q_vector[i];
+ 
+ 		vector++;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index df238e46e2aeb..b062ed06235d2 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -1129,7 +1129,22 @@ static int otx2_aura_init(struct otx2_nic *pfvf, int aura_id,
+ 	/* Enable backpressure for RQ aura */
+ 	if (aura_id < pfvf->hw.rqpool_cnt) {
+ 		aq->aura.bp_ena = 0;
++		/* If NIX1 LF is attached then specify NIX1_RX.
++		 *
++		 * Below NPA_AURA_S[BP_ENA] is set according to the
++		 * NPA_BPINTF_E enumeration given as:
++		 * 0x0 + a*0x1 where 'a' is 0 for NIX0_RX and 1 for NIX1_RX so
++		 * NIX0_RX is 0x0 + 0*0x1 = 0
++		 * NIX1_RX is 0x0 + 1*0x1 = 1
++		 * But in HRM it is given that
++		 * "NPA_AURA_S[BP_ENA](w1[33:32]) - Enable aura backpressure to
++		 * NIX-RX based on [BP] level. One bit per NIX-RX; index
++		 * enumerated by NPA_BPINTF_E."
++		 */
++		if (pfvf->nix_blkaddr == BLKADDR_NIX1)
++			aq->aura.bp_ena = 1;
+ 		aq->aura.nix0_bpid = pfvf->bpid[0];
++
+ 		/* Set backpressure level for RQ's Aura */
+ 		aq->aura.bp = RQ_BP_LVL_AURA;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index e49387dbef987..2e55e00888715 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -865,7 +865,7 @@ static void cb_timeout_handler(struct work_struct *work)
+ 	ent->ret = -ETIMEDOUT;
+ 	mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) Async, timeout. Will cause a leak of a command resource\n",
+ 		       ent->idx, mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
+-	mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
++	mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true);
+ 
+ out:
+ 	cmd_ent_put(ent); /* for the cmd_ent_get() took on schedule delayed work */
+@@ -982,7 +982,7 @@ static void cmd_work_handler(struct work_struct *work)
+ 		MLX5_SET(mbox_out, ent->out, status, status);
+ 		MLX5_SET(mbox_out, ent->out, syndrome, drv_synd);
+ 
+-		mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
++		mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true);
+ 		return;
+ 	}
+ 
+@@ -996,7 +996,7 @@ static void cmd_work_handler(struct work_struct *work)
+ 		poll_timeout(ent);
+ 		/* make sure we read the descriptor after ownership is SW */
+ 		rmb();
+-		mlx5_cmd_comp_handler(dev, 1UL << ent->idx, (ent->ret == -ETIMEDOUT));
++		mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, (ent->ret == -ETIMEDOUT));
+ 	}
+ }
+ 
+@@ -1056,7 +1056,7 @@ static void wait_func_handle_exec_timeout(struct mlx5_core_dev *dev,
+ 		       mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
+ 
+ 	ent->ret = -ETIMEDOUT;
+-	mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
++	mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true);
+ }
+ 
+ static int wait_func(struct mlx5_core_dev *dev, struct mlx5_cmd_work_ent *ent)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
+index b3c9dc032026c..478de5ded7c21 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
+@@ -824,9 +824,9 @@ again:
+ 			new_htbl = dr_rule_rehash(rule, nic_rule, cur_htbl,
+ 						  ste_location, send_ste_list);
+ 			if (!new_htbl) {
+-				mlx5dr_htbl_put(cur_htbl);
+ 				mlx5dr_err(dmn, "Failed creating rehash table, htbl-log_size: %d\n",
+ 					   cur_htbl->chunk_size);
++				mlx5dr_htbl_put(cur_htbl);
+ 			} else {
+ 				cur_htbl = new_htbl;
+ 			}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+index ea3c6cf27db42..eb6677f737a0f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+@@ -605,6 +605,7 @@ static int dr_cmd_modify_qp_rtr2rts(struct mlx5_core_dev *mdev,
+ 
+ 	MLX5_SET(qpc, qpc, retry_count, attr->retry_cnt);
+ 	MLX5_SET(qpc, qpc, rnr_retry, attr->rnr_retry);
++	MLX5_SET(qpc, qpc, primary_address_path.ack_timeout, 0x8); /* ~1ms */
+ 
+ 	MLX5_SET(rtr2rts_qp_in, in, opcode, MLX5_CMD_OP_RTR2RTS_QP);
+ 	MLX5_SET(rtr2rts_qp_in, in, qpn, dr_qp->qpn);
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index 437226866ce81..dfc1f32cda2b3 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -1697,7 +1697,7 @@ nfp_net_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta,
+ 		case NFP_NET_META_RESYNC_INFO:
+ 			if (nfp_net_tls_rx_resync_req(netdev, data, pkt,
+ 						      pkt_len))
+-				return NULL;
++				return false;
+ 			data += sizeof(struct nfp_net_tls_resync_req);
+ 			break;
+ 		default:
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+index 749585fe6fc96..90f69f43770a4 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+@@ -289,10 +289,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 		val &= ~NSS_COMMON_GMAC_CTL_PHY_IFACE_SEL;
+ 		break;
+ 	default:
+-		dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n",
+-			phy_modes(gmac->phy_mode));
+-		err = -EINVAL;
+-		goto err_remove_config_dt;
++		goto err_unsupported_phy;
+ 	}
+ 	regmap_write(gmac->nss_common, NSS_COMMON_GMAC_CTL(gmac->id), val);
+ 
+@@ -309,10 +306,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 			NSS_COMMON_CLK_SRC_CTRL_OFFSET(gmac->id);
+ 		break;
+ 	default:
+-		dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n",
+-			phy_modes(gmac->phy_mode));
+-		err = -EINVAL;
+-		goto err_remove_config_dt;
++		goto err_unsupported_phy;
+ 	}
+ 	regmap_write(gmac->nss_common, NSS_COMMON_CLK_SRC_CTRL, val);
+ 
+@@ -329,8 +323,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 				NSS_COMMON_CLK_GATE_GMII_TX_EN(gmac->id);
+ 		break;
+ 	default:
+-		/* We don't get here; the switch above will have errored out */
+-		unreachable();
++		goto err_unsupported_phy;
+ 	}
+ 	regmap_write(gmac->nss_common, NSS_COMMON_CLK_GATE, val);
+ 
+@@ -361,6 +354,11 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++err_unsupported_phy:
++	dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n",
++		phy_modes(gmac->phy_mode));
++	err = -EINVAL;
++
+ err_remove_config_dt:
+ 	stmmac_remove_config_dt(pdev, plat_dat);
+ 
+diff --git a/drivers/net/ethernet/wiznet/w5100.c b/drivers/net/ethernet/wiznet/w5100.c
+index c0d181a7f83ae..0b7135a3c585a 100644
+--- a/drivers/net/ethernet/wiznet/w5100.c
++++ b/drivers/net/ethernet/wiznet/w5100.c
+@@ -1052,6 +1052,8 @@ static int w5100_mmio_probe(struct platform_device *pdev)
+ 		mac_addr = data->mac_addr;
+ 
+ 	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!mem)
++		return -EINVAL;
+ 	if (resource_size(mem) < W5100_BUS_DIRECT_SIZE)
+ 		ops = &w5100_mmio_indirect_ops;
+ 	else
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index a9b058bb1be87..7bf43031cea8c 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -305,11 +305,9 @@ static int dp83822_config_intr(struct phy_device *phydev)
+ 
+ static int dp8382x_disable_wol(struct phy_device *phydev)
+ {
+-	int value = DP83822_WOL_EN | DP83822_WOL_MAGIC_EN |
+-		    DP83822_WOL_SECURE_ON;
+-
+-	return phy_clear_bits_mmd(phydev, DP83822_DEVADDR,
+-				  MII_DP83822_WOL_CFG, value);
++	return phy_clear_bits_mmd(phydev, DP83822_DEVADDR, MII_DP83822_WOL_CFG,
++				  DP83822_WOL_EN | DP83822_WOL_MAGIC_EN |
++				  DP83822_WOL_SECURE_ON);
+ }
+ 
+ static int dp83822_read_status(struct phy_device *phydev)
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+index b4885a700296e..b0a4ca3559fd8 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
++++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+@@ -3351,7 +3351,8 @@ found:
+ 			"Found block at %x: code=%d ref=%d length=%d major=%d minor=%d\n",
+ 			cptr, code, reference, length, major, minor);
+ 		if ((!AR_SREV_9485(ah) && length >= 1024) ||
+-		    (AR_SREV_9485(ah) && length > EEPROM_DATA_LEN_9485)) {
++		    (AR_SREV_9485(ah) && length > EEPROM_DATA_LEN_9485) ||
++		    (length > cptr)) {
+ 			ath_dbg(common, EEPROM, "Skipping bad header\n");
+ 			cptr -= COMP_HDR_LEN;
+ 			continue;
+diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
+index c86faebbc4594..6b2668f065d54 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.c
++++ b/drivers/net/wireless/ath/ath9k/hw.c
+@@ -1622,7 +1622,6 @@ static void ath9k_hw_apply_gpio_override(struct ath_hw *ah)
+ 		ath9k_hw_gpio_request_out(ah, i, NULL,
+ 					  AR_GPIO_OUTPUT_MUX_AS_OUTPUT);
+ 		ath9k_hw_set_gpio(ah, i, !!(ah->gpio_val & BIT(i)));
+-		ath9k_hw_gpio_free(ah, i);
+ 	}
+ }
+ 
+@@ -2730,14 +2729,17 @@ static void ath9k_hw_gpio_cfg_output_mux(struct ath_hw *ah, u32 gpio, u32 type)
+ static void ath9k_hw_gpio_cfg_soc(struct ath_hw *ah, u32 gpio, bool out,
+ 				  const char *label)
+ {
++	int err;
++
+ 	if (ah->caps.gpio_requested & BIT(gpio))
+ 		return;
+ 
+-	/* may be requested by BSP, free anyway */
+-	gpio_free(gpio);
+-
+-	if (gpio_request_one(gpio, out ? GPIOF_OUT_INIT_LOW : GPIOF_IN, label))
++	err = gpio_request_one(gpio, out ? GPIOF_OUT_INIT_LOW : GPIOF_IN, label);
++	if (err) {
++		ath_err(ath9k_hw_common(ah), "request GPIO%d failed:%d\n",
++			gpio, err);
+ 		return;
++	}
+ 
+ 	ah->caps.gpio_requested |= BIT(gpio);
+ }
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index 9f8e44210e89a..6bed619535427 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -405,13 +405,14 @@ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
+ 		wcn36xx_dbg(WCN36XX_DBG_MAC, "wcn36xx_config channel switch=%d\n",
+ 			    ch);
+ 
+-		if (wcn->sw_scan_opchannel == ch) {
++		if (wcn->sw_scan_opchannel == ch && wcn->sw_scan_channel) {
+ 			/* If channel is the initial operating channel, we may
+ 			 * want to receive/transmit regular data packets, then
+ 			 * simply stop the scan session and exit PS mode.
+ 			 */
+ 			wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN,
+ 						wcn->sw_scan_vif);
++			wcn->sw_scan_channel = 0;
+ 		} else if (wcn->sw_scan) {
+ 			/* A scan is ongoing, do not change the operating
+ 			 * channel, but start a scan session on the channel.
+@@ -419,6 +420,7 @@ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
+ 			wcn36xx_smd_init_scan(wcn, HAL_SYS_MODE_SCAN,
+ 					      wcn->sw_scan_vif);
+ 			wcn36xx_smd_start_scan(wcn, ch);
++			wcn->sw_scan_channel = ch;
+ 		} else {
+ 			wcn36xx_change_opchannel(wcn, ch);
+ 		}
+@@ -699,6 +701,7 @@ static void wcn36xx_sw_scan_start(struct ieee80211_hw *hw,
+ 
+ 	wcn->sw_scan = true;
+ 	wcn->sw_scan_vif = vif;
++	wcn->sw_scan_channel = 0;
+ 	if (vif_priv->sta_assoc)
+ 		wcn->sw_scan_opchannel = WCN36XX_HW_CHANNEL(wcn);
+ 	else
+diff --git a/drivers/net/wireless/ath/wcn36xx/txrx.c b/drivers/net/wireless/ath/wcn36xx/txrx.c
+index 1b831157ede17..cab196bb38cd4 100644
+--- a/drivers/net/wireless/ath/wcn36xx/txrx.c
++++ b/drivers/net/wireless/ath/wcn36xx/txrx.c
+@@ -287,6 +287,10 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 		status.rate_idx = 0;
+ 	}
+ 
++	if (ieee80211_is_beacon(hdr->frame_control) ||
++	    ieee80211_is_probe_resp(hdr->frame_control))
++		status.boottime_ns = ktime_get_boottime_ns();
++
+ 	memcpy(IEEE80211_SKB_RXCB(skb), &status, sizeof(status));
+ 
+ 	if (ieee80211_is_beacon(hdr->frame_control)) {
+diff --git a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+index 71fa9992b118c..d0fcce86903ae 100644
+--- a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
++++ b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+@@ -232,6 +232,7 @@ struct wcn36xx {
+ 	struct cfg80211_scan_request *scan_req;
+ 	bool			sw_scan;
+ 	u8			sw_scan_opchannel;
++	u8			sw_scan_channel;
+ 	struct ieee80211_vif	*sw_scan_vif;
+ 	struct mutex		scan_lock;
+ 	bool			scan_aborted;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index ab4a8b942c81d..419eaa5cf0b50 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -2303,7 +2303,7 @@ static void iwl_fw_error_dump(struct iwl_fw_runtime *fwrt,
+ 		return;
+ 
+ 	if (dump_data->monitor_only)
+-		dump_mask &= IWL_FW_ERROR_DUMP_FW_MONITOR;
++		dump_mask &= BIT(IWL_FW_ERROR_DUMP_FW_MONITOR);
+ 
+ 	fw_error_dump.trans_ptr = iwl_trans_dump_data(fwrt->trans, dump_mask);
+ 	file_len = le32_to_cpu(dump_file->file_len);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+index 5243b84e653cf..6a8bf9bb9c455 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+@@ -1044,8 +1044,10 @@ int iwl_mvm_mac_ctxt_beacon_changed(struct iwl_mvm *mvm,
+ 		return -ENOMEM;
+ 
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+-	if (mvm->beacon_inject_active)
++	if (mvm->beacon_inject_active) {
++		dev_kfree_skb(beacon);
+ 		return -EBUSY;
++	}
+ #endif
+ 
+ 	ret = iwl_mvm_mac_ctxt_send_beacon(mvm, vif, beacon);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 9caff70cbd276..6f301ac8cce20 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -3029,16 +3029,20 @@ static void iwl_mvm_check_he_obss_narrow_bw_ru_iter(struct wiphy *wiphy,
+ 						    void *_data)
+ {
+ 	struct iwl_mvm_he_obss_narrow_bw_ru_data *data = _data;
++	const struct cfg80211_bss_ies *ies;
+ 	const struct element *elem;
+ 
+-	elem = cfg80211_find_elem(WLAN_EID_EXT_CAPABILITY, bss->ies->data,
+-				  bss->ies->len);
++	rcu_read_lock();
++	ies = rcu_dereference(bss->ies);
++	elem = cfg80211_find_elem(WLAN_EID_EXT_CAPABILITY, ies->data,
++				  ies->len);
+ 
+ 	if (!elem || elem->datalen < 10 ||
+ 	    !(elem->data[10] &
+ 	      WLAN_EXT_CAPA10_OBSS_NARROW_BW_RU_TOLERANCE_SUPPORT)) {
+ 		data->tolerated = false;
+ 	}
++	rcu_read_unlock();
+ }
+ 
+ static void iwl_mvm_check_he_obss_narrow_bw_ru(struct ieee80211_hw *hw,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index cb83490f1016f..0be8ff30b13e6 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -678,10 +678,26 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
+ 
+ 	mvm->fw_restart = iwlwifi_mod_params.fw_restart ? -1 : 0;
+ 
+-	mvm->aux_queue = IWL_MVM_DQA_AUX_QUEUE;
+-	mvm->snif_queue = IWL_MVM_DQA_INJECT_MONITOR_QUEUE;
+-	mvm->probe_queue = IWL_MVM_DQA_AP_PROBE_RESP_QUEUE;
+-	mvm->p2p_dev_queue = IWL_MVM_DQA_P2P_DEVICE_QUEUE;
++	if (iwl_mvm_has_new_tx_api(mvm)) {
++		/*
++		 * If we have the new TX/queue allocation API initialize them
++		 * all to invalid numbers. We'll rewrite the ones that we need
++		 * later, but that doesn't happen for all of them all of the
++		 * time (e.g. P2P Device is optional), and if a dynamic queue
++		 * ends up getting number 2 (IWL_MVM_DQA_P2P_DEVICE_QUEUE) then
++		 * iwl_mvm_is_static_queue() erroneously returns true, and we
++		 * might have things getting stuck.
++		 */
++		mvm->aux_queue = IWL_MVM_INVALID_QUEUE;
++		mvm->snif_queue = IWL_MVM_INVALID_QUEUE;
++		mvm->probe_queue = IWL_MVM_INVALID_QUEUE;
++		mvm->p2p_dev_queue = IWL_MVM_INVALID_QUEUE;
++	} else {
++		mvm->aux_queue = IWL_MVM_DQA_AUX_QUEUE;
++		mvm->snif_queue = IWL_MVM_DQA_INJECT_MONITOR_QUEUE;
++		mvm->probe_queue = IWL_MVM_DQA_AP_PROBE_RESP_QUEUE;
++		mvm->p2p_dev_queue = IWL_MVM_DQA_P2P_DEVICE_QUEUE;
++	}
+ 
+ 	mvm->sf_state = SF_UNINIT;
+ 	if (iwl_mvm_has_unified_ucode(mvm))
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index aebaad45043fa..a5d90e028833c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -1682,7 +1682,7 @@ iwl_mvm_umac_scan_cfg_channels_v6(struct iwl_mvm *mvm,
+ 		struct iwl_scan_channel_cfg_umac *cfg = &cp->channel_config[i];
+ 		u32 n_aps_flag =
+ 			iwl_mvm_scan_ch_n_aps_flag(vif_type,
+-						   cfg->v2.channel_num);
++						   channels[i]->hw_value);
+ 
+ 		cfg->flags = cpu_to_le32(flags | n_aps_flag);
+ 		cfg->v2.channel_num = channels[i]->hw_value;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index a66a5c19474a9..ef62839894c77 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -362,8 +362,9 @@ static int iwl_mvm_invalidate_sta_queue(struct iwl_mvm *mvm, int queue,
+ }
+ 
+ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+-			       int queue, u8 tid, u8 flags)
++			       u16 *queueptr, u8 tid, u8 flags)
+ {
++	int queue = *queueptr;
+ 	struct iwl_scd_txq_cfg_cmd cmd = {
+ 		.scd_queue = queue,
+ 		.action = SCD_CFG_DISABLE_QUEUE,
+@@ -372,6 +373,7 @@ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 
+ 	if (iwl_mvm_has_new_tx_api(mvm)) {
+ 		iwl_trans_txq_free(mvm->trans, queue);
++		*queueptr = IWL_MVM_INVALID_QUEUE;
+ 
+ 		return 0;
+ 	}
+@@ -533,6 +535,7 @@ static int iwl_mvm_free_inactive_queue(struct iwl_mvm *mvm, int queue,
+ 	u8 sta_id, tid;
+ 	unsigned long disable_agg_tids = 0;
+ 	bool same_sta;
++	u16 queue_tmp = queue;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+@@ -555,7 +558,7 @@ static int iwl_mvm_free_inactive_queue(struct iwl_mvm *mvm, int queue,
+ 		iwl_mvm_invalidate_sta_queue(mvm, queue,
+ 					     disable_agg_tids, false);
+ 
+-	ret = iwl_mvm_disable_txq(mvm, old_sta, queue, tid, 0);
++	ret = iwl_mvm_disable_txq(mvm, old_sta, &queue_tmp, tid, 0);
+ 	if (ret) {
+ 		IWL_ERR(mvm,
+ 			"Failed to free inactive queue %d (ret=%d)\n",
+@@ -1230,6 +1233,7 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm,
+ 	unsigned int wdg_timeout =
+ 		iwl_mvm_get_wd_timeout(mvm, mvmsta->vif, false, false);
+ 	int queue = -1;
++	u16 queue_tmp;
+ 	unsigned long disable_agg_tids = 0;
+ 	enum iwl_mvm_agg_state queue_state;
+ 	bool shared_queue = false, inc_ssn;
+@@ -1378,7 +1382,8 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm,
+ 	return 0;
+ 
+ out_err:
+-	iwl_mvm_disable_txq(mvm, sta, queue, tid, 0);
++	queue_tmp = queue;
++	iwl_mvm_disable_txq(mvm, sta, &queue_tmp, tid, 0);
+ 
+ 	return ret;
+ }
+@@ -1825,7 +1830,7 @@ static void iwl_mvm_disable_sta_queues(struct iwl_mvm *mvm,
+ 		if (mvm_sta->tid_data[i].txq_id == IWL_MVM_INVALID_QUEUE)
+ 			continue;
+ 
+-		iwl_mvm_disable_txq(mvm, sta, mvm_sta->tid_data[i].txq_id, i,
++		iwl_mvm_disable_txq(mvm, sta, &mvm_sta->tid_data[i].txq_id, i,
+ 				    0);
+ 		mvm_sta->tid_data[i].txq_id = IWL_MVM_INVALID_QUEUE;
+ 	}
+@@ -2033,7 +2038,7 @@ static int iwl_mvm_add_int_sta_with_queue(struct iwl_mvm *mvm, int macidx,
+ 	ret = iwl_mvm_add_int_sta_common(mvm, sta, addr, macidx, maccolor);
+ 	if (ret) {
+ 		if (!iwl_mvm_has_new_tx_api(mvm))
+-			iwl_mvm_disable_txq(mvm, NULL, *queue,
++			iwl_mvm_disable_txq(mvm, NULL, queue,
+ 					    IWL_MAX_TID_COUNT, 0);
+ 		return ret;
+ 	}
+@@ -2106,7 +2111,7 @@ int iwl_mvm_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 	if (WARN_ON_ONCE(mvm->snif_sta.sta_id == IWL_MVM_INVALID_STA))
+ 		return -EINVAL;
+ 
+-	iwl_mvm_disable_txq(mvm, NULL, mvm->snif_queue, IWL_MAX_TID_COUNT, 0);
++	iwl_mvm_disable_txq(mvm, NULL, &mvm->snif_queue, IWL_MAX_TID_COUNT, 0);
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvm->snif_sta.sta_id);
+ 	if (ret)
+ 		IWL_WARN(mvm, "Failed sending remove station\n");
+@@ -2123,7 +2128,7 @@ int iwl_mvm_rm_aux_sta(struct iwl_mvm *mvm)
+ 	if (WARN_ON_ONCE(mvm->aux_sta.sta_id == IWL_MVM_INVALID_STA))
+ 		return -EINVAL;
+ 
+-	iwl_mvm_disable_txq(mvm, NULL, mvm->aux_queue, IWL_MAX_TID_COUNT, 0);
++	iwl_mvm_disable_txq(mvm, NULL, &mvm->aux_queue, IWL_MAX_TID_COUNT, 0);
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvm->aux_sta.sta_id);
+ 	if (ret)
+ 		IWL_WARN(mvm, "Failed sending remove station\n");
+@@ -2219,7 +2224,7 @@ static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
+ 					  struct ieee80211_vif *vif)
+ {
+ 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+-	int queue;
++	u16 *queueptr, queue;
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+@@ -2228,10 +2233,10 @@ static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
+ 	switch (vif->type) {
+ 	case NL80211_IFTYPE_AP:
+ 	case NL80211_IFTYPE_ADHOC:
+-		queue = mvm->probe_queue;
++		queueptr = &mvm->probe_queue;
+ 		break;
+ 	case NL80211_IFTYPE_P2P_DEVICE:
+-		queue = mvm->p2p_dev_queue;
++		queueptr = &mvm->p2p_dev_queue;
+ 		break;
+ 	default:
+ 		WARN(1, "Can't free bcast queue on vif type %d\n",
+@@ -2239,7 +2244,8 @@ static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
+ 		return;
+ 	}
+ 
+-	iwl_mvm_disable_txq(mvm, NULL, queue, IWL_MAX_TID_COUNT, 0);
++	queue = *queueptr;
++	iwl_mvm_disable_txq(mvm, NULL, queueptr, IWL_MAX_TID_COUNT, 0);
+ 	if (iwl_mvm_has_new_tx_api(mvm))
+ 		return;
+ 
+@@ -2474,7 +2480,7 @@ int iwl_mvm_rm_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 
+ 	iwl_mvm_flush_sta(mvm, &mvmvif->mcast_sta, true);
+ 
+-	iwl_mvm_disable_txq(mvm, NULL, mvmvif->cab_queue, 0, 0);
++	iwl_mvm_disable_txq(mvm, NULL, &mvmvif->cab_queue, 0, 0);
+ 
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvmvif->mcast_sta.sta_id);
+ 	if (ret)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index 94299f259518d..2c13fa8f28200 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -544,6 +544,9 @@ void iwl_pcie_free_rbs_pool(struct iwl_trans *trans)
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 	int i;
+ 
++	if (!trans_pcie->rx_pool)
++		return;
++
+ 	for (i = 0; i < RX_POOL_SIZE(trans_pcie->num_rx_bufs); i++) {
+ 		if (!trans_pcie->rx_pool[i].page)
+ 			continue;
+@@ -1094,7 +1097,7 @@ static int _iwl_pcie_rx_init(struct iwl_trans *trans)
+ 	INIT_LIST_HEAD(&rba->rbd_empty);
+ 	spin_unlock(&rba->lock);
+ 
+-	/* free all first - we might be reconfigured for a different size */
++	/* free all first - we overwrite everything here */
+ 	iwl_pcie_free_rbs_pool(trans);
+ 
+ 	for (i = 0; i < RX_QUEUE_SIZE; i++)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index bb990be7c870b..082768ec8aa80 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1909,6 +1909,9 @@ static void iwl_trans_pcie_configure(struct iwl_trans *trans,
+ {
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 
++	/* free all first - we might be reconfigured for a different size */
++	iwl_pcie_free_rbs_pool(trans);
++
+ 	trans->txqs.cmd.q_id = trans_cfg->cmd_queue;
+ 	trans->txqs.cmd.fifo = trans_cfg->cmd_fifo;
+ 	trans->txqs.cmd.wdg_timeout = trans_cfg->cmd_q_wdg_timeout;
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+index acb6b0cd36672..b28fa0c4d180c 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+@@ -1378,6 +1378,8 @@ struct rtl8xxxu_priv {
+ 	u8 no_pape:1;
+ 	u8 int_buf[USB_INTR_CONTENT_LENGTH];
+ 	u8 rssi_level;
++	DECLARE_BITMAP(tx_aggr_started, IEEE80211_NUM_TIDS);
++	DECLARE_BITMAP(tid_tx_operational, IEEE80211_NUM_TIDS);
+ 	/*
+ 	 * Only one virtual interface permitted because only STA mode
+ 	 * is supported and no iface_combinations are provided.
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 5cd7ef3625c5e..0d374a2948406 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -4805,6 +4805,8 @@ rtl8xxxu_fill_txdesc_v1(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ 	struct ieee80211_rate *tx_rate = ieee80211_get_tx_rate(hw, tx_info);
+ 	struct rtl8xxxu_priv *priv = hw->priv;
+ 	struct device *dev = &priv->udev->dev;
++	u8 *qc = ieee80211_get_qos_ctl(hdr);
++	u8 tid = qc[0] & IEEE80211_QOS_CTL_TID_MASK;
+ 	u32 rate;
+ 	u16 rate_flags = tx_info->control.rates[0].flags;
+ 	u16 seq_number;
+@@ -4828,7 +4830,7 @@ rtl8xxxu_fill_txdesc_v1(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ 
+ 	tx_desc->txdw3 = cpu_to_le32((u32)seq_number << TXDESC32_SEQ_SHIFT);
+ 
+-	if (ampdu_enable)
++	if (ampdu_enable && test_bit(tid, priv->tid_tx_operational))
+ 		tx_desc->txdw1 |= cpu_to_le32(TXDESC32_AGG_ENABLE);
+ 	else
+ 		tx_desc->txdw1 |= cpu_to_le32(TXDESC32_AGG_BREAK);
+@@ -4876,6 +4878,8 @@ rtl8xxxu_fill_txdesc_v2(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ 	struct rtl8xxxu_priv *priv = hw->priv;
+ 	struct device *dev = &priv->udev->dev;
+ 	struct rtl8xxxu_txdesc40 *tx_desc40;
++	u8 *qc = ieee80211_get_qos_ctl(hdr);
++	u8 tid = qc[0] & IEEE80211_QOS_CTL_TID_MASK;
+ 	u32 rate;
+ 	u16 rate_flags = tx_info->control.rates[0].flags;
+ 	u16 seq_number;
+@@ -4902,7 +4906,7 @@ rtl8xxxu_fill_txdesc_v2(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ 
+ 	tx_desc40->txdw9 = cpu_to_le32((u32)seq_number << TXDESC40_SEQ_SHIFT);
+ 
+-	if (ampdu_enable)
++	if (ampdu_enable && test_bit(tid, priv->tid_tx_operational))
+ 		tx_desc40->txdw2 |= cpu_to_le32(TXDESC40_AGG_ENABLE);
+ 	else
+ 		tx_desc40->txdw2 |= cpu_to_le32(TXDESC40_AGG_BREAK);
+@@ -5015,12 +5019,19 @@ static void rtl8xxxu_tx(struct ieee80211_hw *hw,
+ 	if (ieee80211_is_data_qos(hdr->frame_control) && sta) {
+ 		if (sta->ht_cap.ht_supported) {
+ 			u32 ampdu, val32;
++			u8 *qc = ieee80211_get_qos_ctl(hdr);
++			u8 tid = qc[0] & IEEE80211_QOS_CTL_TID_MASK;
+ 
+ 			ampdu = (u32)sta->ht_cap.ampdu_density;
+ 			val32 = ampdu << TXDESC_AMPDU_DENSITY_SHIFT;
+ 			tx_desc->txdw2 |= cpu_to_le32(val32);
+ 
+ 			ampdu_enable = true;
++
++			if (!test_bit(tid, priv->tx_aggr_started) &&
++			    !(skb->protocol == cpu_to_be16(ETH_P_PAE)))
++				if (!ieee80211_start_tx_ba_session(sta, tid, 0))
++					set_bit(tid, priv->tx_aggr_started);
+ 		}
+ 	}
+ 
+@@ -6095,6 +6106,7 @@ rtl8xxxu_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 	struct device *dev = &priv->udev->dev;
+ 	u8 ampdu_factor, ampdu_density;
+ 	struct ieee80211_sta *sta = params->sta;
++	u16 tid = params->tid;
+ 	enum ieee80211_ampdu_mlme_action action = params->action;
+ 
+ 	switch (action) {
+@@ -6107,17 +6119,20 @@ rtl8xxxu_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 		dev_dbg(dev,
+ 			"Changed HT: ampdu_factor %02x, ampdu_density %02x\n",
+ 			ampdu_factor, ampdu_density);
+-		break;
++		return IEEE80211_AMPDU_TX_START_IMMEDIATE;
++	case IEEE80211_AMPDU_TX_STOP_CONT:
+ 	case IEEE80211_AMPDU_TX_STOP_FLUSH:
+-		dev_dbg(dev, "%s: IEEE80211_AMPDU_TX_STOP_FLUSH\n", __func__);
+-		rtl8xxxu_set_ampdu_factor(priv, 0);
+-		rtl8xxxu_set_ampdu_min_space(priv, 0);
+-		break;
+ 	case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT:
+-		dev_dbg(dev, "%s: IEEE80211_AMPDU_TX_STOP_FLUSH_CONT\n",
+-			 __func__);
++		dev_dbg(dev, "%s: IEEE80211_AMPDU_TX_STOP\n", __func__);
+ 		rtl8xxxu_set_ampdu_factor(priv, 0);
+ 		rtl8xxxu_set_ampdu_min_space(priv, 0);
++		clear_bit(tid, priv->tx_aggr_started);
++		clear_bit(tid, priv->tid_tx_operational);
++		ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
++		break;
++	case IEEE80211_AMPDU_TX_OPERATIONAL:
++		dev_dbg(dev, "%s: IEEE80211_AMPDU_TX_OPERATIONAL\n", __func__);
++		set_bit(tid, priv->tid_tx_operational);
+ 		break;
+ 	case IEEE80211_AMPDU_RX_START:
+ 		dev_dbg(dev, "%s: IEEE80211_AMPDU_RX_START\n", __func__);
+diff --git a/drivers/net/wireless/realtek/rtw88/Makefile b/drivers/net/wireless/realtek/rtw88/Makefile
+index c0e4b111c8b4e..73d6807a8cdfb 100644
+--- a/drivers/net/wireless/realtek/rtw88/Makefile
++++ b/drivers/net/wireless/realtek/rtw88/Makefile
+@@ -15,9 +15,9 @@ rtw88_core-y += main.o \
+ 	   ps.o \
+ 	   sec.o \
+ 	   bf.o \
+-	   wow.o \
+ 	   regd.o
+ 
++rtw88_core-$(CONFIG_PM) += wow.o
+ 
+ obj-$(CONFIG_RTW88_8822B)	+= rtw88_8822b.o
+ rtw88_8822b-objs		:= rtw8822b.o rtw8822b_table.o
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
+index b2fd87834f23d..0452630bcfacc 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.c
++++ b/drivers/net/wireless/realtek/rtw88/fw.c
+@@ -684,7 +684,7 @@ static u16 rtw_get_rsvd_page_probe_req_size(struct rtw_dev *rtwdev,
+ 			continue;
+ 		if ((!ssid && !rsvd_pkt->ssid) ||
+ 		    rtw_ssid_equal(rsvd_pkt->ssid, ssid))
+-			size = rsvd_pkt->skb->len;
++			size = rsvd_pkt->probe_req_size;
+ 	}
+ 
+ 	return size;
+@@ -912,6 +912,8 @@ static struct sk_buff *rtw_get_rsvd_page_skb(struct ieee80211_hw *hw,
+ 							 ssid->ssid_len, 0);
+ 		else
+ 			skb_new = ieee80211_probereq_get(hw, vif->addr, NULL, 0, 0);
++		if (skb_new)
++			rsvd_pkt->probe_req_size = (u16)skb_new->len;
+ 		break;
+ 	case RSVD_NLO_INFO:
+ 		skb_new = rtw_nlo_info_get(hw);
+@@ -1508,6 +1510,7 @@ int rtw_fw_dump_fifo(struct rtw_dev *rtwdev, u8 fifo_sel, u32 addr, u32 size,
+ static void __rtw_fw_update_pkt(struct rtw_dev *rtwdev, u8 pkt_id, u16 size,
+ 				u8 location)
+ {
++	struct rtw_chip_info *chip = rtwdev->chip;
+ 	u8 h2c_pkt[H2C_PKT_SIZE] = {0};
+ 	u16 total_size = H2C_PKT_HDR_SIZE + H2C_PKT_UPDATE_PKT_LEN;
+ 
+@@ -1518,6 +1521,7 @@ static void __rtw_fw_update_pkt(struct rtw_dev *rtwdev, u8 pkt_id, u16 size,
+ 	UPDATE_PKT_SET_LOCATION(h2c_pkt, location);
+ 
+ 	/* include txdesc size */
++	size += chip->tx_pkt_desc_sz;
+ 	UPDATE_PKT_SET_SIZE(h2c_pkt, size);
+ 
+ 	rtw_fw_send_h2c_packet(rtwdev, h2c_pkt);
+@@ -1527,7 +1531,7 @@ void rtw_fw_update_pkt_probe_req(struct rtw_dev *rtwdev,
+ 				 struct cfg80211_ssid *ssid)
+ {
+ 	u8 loc;
+-	u32 size;
++	u16 size;
+ 
+ 	loc = rtw_get_rsvd_page_probe_req_location(rtwdev, ssid);
+ 	if (!loc) {
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.h b/drivers/net/wireless/realtek/rtw88/fw.h
+index 08644540d2595..f4aed247e3bdb 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.h
++++ b/drivers/net/wireless/realtek/rtw88/fw.h
+@@ -117,6 +117,7 @@ struct rtw_rsvd_page {
+ 	u8 page;
+ 	bool add_txdesc;
+ 	struct cfg80211_ssid *ssid;
++	u16 probe_req_size;
+ };
+ 
+ enum rtw_keep_alive_pkt_type {
+diff --git a/drivers/net/wireless/realtek/rtw88/wow.c b/drivers/net/wireless/realtek/rtw88/wow.c
+index 2fcdf70a3a77e..bb2fd4e544f00 100644
+--- a/drivers/net/wireless/realtek/rtw88/wow.c
++++ b/drivers/net/wireless/realtek/rtw88/wow.c
+@@ -283,15 +283,26 @@ static void rtw_wow_rx_dma_start(struct rtw_dev *rtwdev)
+ 
+ static int rtw_wow_check_fw_status(struct rtw_dev *rtwdev, bool wow_enable)
+ {
+-	/* wait 100ms for wow firmware to finish work */
+-	msleep(100);
++	int ret;
++	u8 check;
++	u32 check_dis;
+ 
+ 	if (wow_enable) {
+-		if (rtw_read8(rtwdev, REG_WOWLAN_WAKE_REASON))
++		ret = read_poll_timeout(rtw_read8, check, !check, 1000,
++					100000, true, rtwdev,
++					REG_WOWLAN_WAKE_REASON);
++		if (ret)
+ 			goto wow_fail;
+ 	} else {
+-		if (rtw_read32_mask(rtwdev, REG_FE1IMR, BIT_FS_RXDONE) ||
+-		    rtw_read32_mask(rtwdev, REG_RXPKT_NUM, BIT_RW_RELEASE))
++		ret = read_poll_timeout(rtw_read32_mask, check_dis,
++					!check_dis, 1000, 100000, true, rtwdev,
++					REG_FE1IMR, BIT_FS_RXDONE);
++		if (ret)
++			goto wow_fail;
++		ret = read_poll_timeout(rtw_read32_mask, check_dis,
++					!check_dis, 1000, 100000, false, rtwdev,
++					REG_RXPKT_NUM, BIT_RW_RELEASE);
++		if (ret)
+ 			goto wow_fail;
+ 	}
+ 
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index 875076b0ea6c1..d5dd79b59b16c 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -448,11 +448,11 @@ static int pmem_attach_disk(struct device *dev,
+ 		pmem->pfn_flags |= PFN_MAP;
+ 		bb_range = pmem->pgmap.range;
+ 	} else {
++		addr = devm_memremap(dev, pmem->phys_addr,
++				pmem->size, ARCH_MEMREMAP_PMEM);
+ 		if (devm_add_action_or_reset(dev, pmem_release_queue,
+ 					&pmem->pgmap))
+ 			return -ENOMEM;
+-		addr = devm_memremap(dev, pmem->phys_addr,
+-				pmem->size, ARCH_MEMREMAP_PMEM);
+ 		bb_range.start =  res->start;
+ 		bb_range.end = res->end;
+ 	}
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index ff5a16b17133d..5a9b2f1b1418a 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -878,7 +878,8 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
+ 		return BLK_STS_IOERR;
+ 	}
+ 
+-	cmd->common.command_id = req->tag;
++	nvme_req(req)->genctr++;
++	cmd->common.command_id = nvme_cid(req);
+ 	trace_nvme_setup_cmd(req, cmd);
+ 	return ret;
+ }
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 3cb3c82061d7e..8c735c55c15bf 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -153,6 +153,7 @@ enum nvme_quirks {
+ struct nvme_request {
+ 	struct nvme_command	*cmd;
+ 	union nvme_result	result;
++	u8			genctr;
+ 	u8			retries;
+ 	u8			flags;
+ 	u16			status;
+@@ -469,6 +470,49 @@ struct nvme_ctrl_ops {
+ 	int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
+ };
+ 
++/*
++ * nvme command_id is constructed as such:
++ * | xxxx | xxxxxxxxxxxx |
++ *   gen    request tag
++ */
++#define nvme_genctr_mask(gen)			(gen & 0xf)
++#define nvme_cid_install_genctr(gen)		(nvme_genctr_mask(gen) << 12)
++#define nvme_genctr_from_cid(cid)		((cid & 0xf000) >> 12)
++#define nvme_tag_from_cid(cid)			(cid & 0xfff)
++
++static inline u16 nvme_cid(struct request *rq)
++{
++	return nvme_cid_install_genctr(nvme_req(rq)->genctr) | rq->tag;
++}
++
++static inline struct request *nvme_find_rq(struct blk_mq_tags *tags,
++		u16 command_id)
++{
++	u8 genctr = nvme_genctr_from_cid(command_id);
++	u16 tag = nvme_tag_from_cid(command_id);
++	struct request *rq;
++
++	rq = blk_mq_tag_to_rq(tags, tag);
++	if (unlikely(!rq)) {
++		pr_err("could not locate request for tag %#x\n",
++			tag);
++		return NULL;
++	}
++	if (unlikely(nvme_genctr_mask(nvme_req(rq)->genctr) != genctr)) {
++		dev_err(nvme_req(rq)->ctrl->device,
++			"request %#x genctr mismatch (got %#x expected %#x)\n",
++			tag, genctr, nvme_genctr_mask(nvme_req(rq)->genctr));
++		return NULL;
++	}
++	return rq;
++}
++
++static inline struct request *nvme_cid_to_rq(struct blk_mq_tags *tags,
++                u16 command_id)
++{
++	return blk_mq_tag_to_rq(tags, nvme_tag_from_cid(command_id));
++}
++
+ #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
+ void nvme_fault_inject_init(struct nvme_fault_inject *fault_inj,
+ 			    const char *dev_name);
+@@ -566,7 +610,8 @@ static inline void nvme_put_ctrl(struct nvme_ctrl *ctrl)
+ 
+ static inline bool nvme_is_aen_req(u16 qid, __u16 command_id)
+ {
+-	return !qid && command_id >= NVME_AQ_BLK_MQ_DEPTH;
++	return !qid &&
++		nvme_tag_from_cid(command_id) >= NVME_AQ_BLK_MQ_DEPTH;
+ }
+ 
+ void nvme_complete_rq(struct request *req);
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index fb48a88d1acb5..09767a805492c 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1012,7 +1012,7 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
+ 		return;
+ 	}
+ 
+-	req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), command_id);
++	req = nvme_find_rq(nvme_queue_tagset(nvmeq), command_id);
+ 	if (unlikely(!req)) {
+ 		dev_warn(nvmeq->dev->ctrl.device,
+ 			"invalid id %d completed on queue %d\n",
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index c6c2e2361b2fe..9c356be7f016e 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1738,10 +1738,10 @@ static void nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
+ 	struct request *rq;
+ 	struct nvme_rdma_request *req;
+ 
+-	rq = blk_mq_tag_to_rq(nvme_rdma_tagset(queue), cqe->command_id);
++	rq = nvme_find_rq(nvme_rdma_tagset(queue), cqe->command_id);
+ 	if (!rq) {
+ 		dev_err(queue->ctrl->ctrl.device,
+-			"tag 0x%x on QP %#x not found\n",
++			"got bad command_id %#x on QP %#x\n",
+ 			cqe->command_id, queue->qp->qp_num);
+ 		nvme_rdma_error_recovery(queue->ctrl);
+ 		return;
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 5b11d8a23813f..c9a925999c6ea 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -484,11 +484,11 @@ static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
+ {
+ 	struct request *rq;
+ 
+-	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), cqe->command_id);
++	rq = nvme_find_rq(nvme_tcp_tagset(queue), cqe->command_id);
+ 	if (!rq) {
+ 		dev_err(queue->ctrl->ctrl.device,
+-			"queue %d tag 0x%x not found\n",
+-			nvme_tcp_queue_id(queue), cqe->command_id);
++			"got bad cqe.command_id %#x on queue %d\n",
++			cqe->command_id, nvme_tcp_queue_id(queue));
+ 		nvme_tcp_error_recovery(&queue->ctrl->ctrl);
+ 		return -EINVAL;
+ 	}
+@@ -505,11 +505,11 @@ static int nvme_tcp_handle_c2h_data(struct nvme_tcp_queue *queue,
+ {
+ 	struct request *rq;
+ 
+-	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
++	rq = nvme_find_rq(nvme_tcp_tagset(queue), pdu->command_id);
+ 	if (!rq) {
+ 		dev_err(queue->ctrl->ctrl.device,
+-			"queue %d tag %#x not found\n",
+-			nvme_tcp_queue_id(queue), pdu->command_id);
++			"got bad c2hdata.command_id %#x on queue %d\n",
++			pdu->command_id, nvme_tcp_queue_id(queue));
+ 		return -ENOENT;
+ 	}
+ 
+@@ -603,7 +603,7 @@ static int nvme_tcp_setup_h2c_data_pdu(struct nvme_tcp_request *req,
+ 	data->hdr.plen =
+ 		cpu_to_le32(data->hdr.hlen + hdgst + req->pdu_len + ddgst);
+ 	data->ttag = pdu->ttag;
+-	data->command_id = rq->tag;
++	data->command_id = nvme_cid(rq);
+ 	data->data_offset = cpu_to_le32(req->data_sent);
+ 	data->data_length = cpu_to_le32(req->pdu_len);
+ 	return 0;
+@@ -616,11 +616,11 @@ static int nvme_tcp_handle_r2t(struct nvme_tcp_queue *queue,
+ 	struct request *rq;
+ 	int ret;
+ 
+-	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
++	rq = nvme_find_rq(nvme_tcp_tagset(queue), pdu->command_id);
+ 	if (!rq) {
+ 		dev_err(queue->ctrl->ctrl.device,
+-			"queue %d tag %#x not found\n",
+-			nvme_tcp_queue_id(queue), pdu->command_id);
++			"got bad r2t.command_id %#x on queue %d\n",
++			pdu->command_id, nvme_tcp_queue_id(queue));
+ 		return -ENOENT;
+ 	}
+ 	req = blk_mq_rq_to_pdu(rq);
+@@ -699,17 +699,9 @@ static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb,
+ 			      unsigned int *offset, size_t *len)
+ {
+ 	struct nvme_tcp_data_pdu *pdu = (void *)queue->pdu;
+-	struct nvme_tcp_request *req;
+-	struct request *rq;
+-
+-	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
+-	if (!rq) {
+-		dev_err(queue->ctrl->ctrl.device,
+-			"queue %d tag %#x not found\n",
+-			nvme_tcp_queue_id(queue), pdu->command_id);
+-		return -ENOENT;
+-	}
+-	req = blk_mq_rq_to_pdu(rq);
++	struct request *rq =
++		nvme_cid_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
++	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
+ 
+ 	while (true) {
+ 		int recv_len, ret;
+@@ -801,8 +793,8 @@ static int nvme_tcp_recv_ddgst(struct nvme_tcp_queue *queue,
+ 	}
+ 
+ 	if (pdu->hdr.flags & NVME_TCP_F_DATA_SUCCESS) {
+-		struct request *rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue),
+-						pdu->command_id);
++		struct request *rq = nvme_cid_to_rq(nvme_tcp_tagset(queue),
++					pdu->command_id);
+ 
+ 		nvme_tcp_end_request(rq, NVME_SC_SUCCESS);
+ 		queue->nr_cqe++;
+diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
+index 16d71cc5a50eb..ff3258c3eb8b6 100644
+--- a/drivers/nvme/target/loop.c
++++ b/drivers/nvme/target/loop.c
+@@ -107,10 +107,10 @@ static void nvme_loop_queue_response(struct nvmet_req *req)
+ 	} else {
+ 		struct request *rq;
+ 
+-		rq = blk_mq_tag_to_rq(nvme_loop_tagset(queue), cqe->command_id);
++		rq = nvme_find_rq(nvme_loop_tagset(queue), cqe->command_id);
+ 		if (!rq) {
+ 			dev_err(queue->ctrl->ctrl.device,
+-				"tag 0x%x on queue %d not found\n",
++				"got bad command_id %#x on queue %d\n",
+ 				cqe->command_id, nvme_loop_queue_idx(queue));
+ 			return;
+ 		}
+diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c
+index 955b8b8c82386..8ef772ccfb367 100644
+--- a/drivers/nvmem/qfprom.c
++++ b/drivers/nvmem/qfprom.c
+@@ -104,6 +104,9 @@ static void qfprom_disable_fuse_blowing(const struct qfprom_priv *priv,
+ {
+ 	int ret;
+ 
++	writel(old->timer_val, priv->qfpconf + QFPROM_BLOW_TIMER_OFFSET);
++	writel(old->accel_val, priv->qfpconf + QFPROM_ACCEL_OFFSET);
++
+ 	/*
+ 	 * This may be a shared rail and may be able to run at a lower rate
+ 	 * when we're not blowing fuses.  At the moment, the regulator framework
+@@ -124,9 +127,6 @@ static void qfprom_disable_fuse_blowing(const struct qfprom_priv *priv,
+ 			 "Failed to set clock rate for disable (ignoring)\n");
+ 
+ 	clk_disable_unprepare(priv->secclk);
+-
+-	writel(old->timer_val, priv->qfpconf + QFPROM_BLOW_TIMER_OFFSET);
+-	writel(old->accel_val, priv->qfpconf + QFPROM_ACCEL_OFFSET);
+ }
+ 
+ /**
+diff --git a/drivers/of/kobj.c b/drivers/of/kobj.c
+index a32e60b024b8d..6675b5e56960c 100644
+--- a/drivers/of/kobj.c
++++ b/drivers/of/kobj.c
+@@ -119,7 +119,7 @@ int __of_attach_node_sysfs(struct device_node *np)
+ 	struct property *pp;
+ 	int rc;
+ 
+-	if (!of_kset)
++	if (!IS_ENABLED(CONFIG_SYSFS) || !of_kset)
+ 		return 0;
+ 
+ 	np->kobj.kset = of_kset;
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index d92a1bfe16905..f83f4f6d70349 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -95,15 +95,7 @@ static struct dev_pm_opp *_find_opp_of_np(struct opp_table *opp_table,
+ static struct device_node *of_parse_required_opp(struct device_node *np,
+ 						 int index)
+ {
+-	struct device_node *required_np;
+-
+-	required_np = of_parse_phandle(np, "required-opps", index);
+-	if (unlikely(!required_np)) {
+-		pr_err("%s: Unable to parse required-opps: %pOF, index: %d\n",
+-		       __func__, np, index);
+-	}
+-
+-	return required_np;
++	return of_parse_phandle(np, "required-opps", index);
+ }
+ 
+ /* The caller must call dev_pm_opp_put_opp_table() after the table is used */
+@@ -1193,7 +1185,7 @@ int of_get_required_opp_performance_state(struct device_node *np, int index)
+ 
+ 	required_np = of_parse_required_opp(np, index);
+ 	if (!required_np)
+-		return -EINVAL;
++		return -ENODEV;
+ 
+ 	opp_table = _find_table_of_opp_np(required_np);
+ 	if (IS_ERR(opp_table)) {
+diff --git a/drivers/parport/ieee1284_ops.c b/drivers/parport/ieee1284_ops.c
+index 2c11bd3fe1fd6..17061f1df0f44 100644
+--- a/drivers/parport/ieee1284_ops.c
++++ b/drivers/parport/ieee1284_ops.c
+@@ -518,7 +518,7 @@ size_t parport_ieee1284_ecp_read_data (struct parport *port,
+ 				goto out;
+ 
+ 			/* Yield the port for a while. */
+-			if (count && dev->port->irq != PARPORT_IRQ_NONE) {
++			if (dev->port->irq != PARPORT_IRQ_NONE) {
+ 				parport_release (dev);
+ 				schedule_timeout_interruptible(msecs_to_jiffies(40));
+ 				parport_claim_or_block (dev);
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index b1b41b61e0bd0..88e19ad54f646 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -57,6 +57,7 @@
+ #define   PIO_COMPLETION_STATUS_CRS		2
+ #define   PIO_COMPLETION_STATUS_CA		4
+ #define   PIO_NON_POSTED_REQ			BIT(10)
++#define   PIO_ERR_STATUS			BIT(11)
+ #define PIO_ADDR_LS				(PIO_BASE_ADDR + 0x8)
+ #define PIO_ADDR_MS				(PIO_BASE_ADDR + 0xc)
+ #define PIO_WR_DATA				(PIO_BASE_ADDR + 0x10)
+@@ -117,6 +118,46 @@
+ #define PCIE_MSI_MASK_REG			(CONTROL_BASE_ADDR + 0x5C)
+ #define PCIE_MSI_PAYLOAD_REG			(CONTROL_BASE_ADDR + 0x9C)
+ 
++/* PCIe window configuration */
++#define OB_WIN_BASE_ADDR			0x4c00
++#define OB_WIN_BLOCK_SIZE			0x20
++#define OB_WIN_COUNT				8
++#define OB_WIN_REG_ADDR(win, offset)		(OB_WIN_BASE_ADDR + \
++						 OB_WIN_BLOCK_SIZE * (win) + \
++						 (offset))
++#define OB_WIN_MATCH_LS(win)			OB_WIN_REG_ADDR(win, 0x00)
++#define     OB_WIN_ENABLE			BIT(0)
++#define OB_WIN_MATCH_MS(win)			OB_WIN_REG_ADDR(win, 0x04)
++#define OB_WIN_REMAP_LS(win)			OB_WIN_REG_ADDR(win, 0x08)
++#define OB_WIN_REMAP_MS(win)			OB_WIN_REG_ADDR(win, 0x0c)
++#define OB_WIN_MASK_LS(win)			OB_WIN_REG_ADDR(win, 0x10)
++#define OB_WIN_MASK_MS(win)			OB_WIN_REG_ADDR(win, 0x14)
++#define OB_WIN_ACTIONS(win)			OB_WIN_REG_ADDR(win, 0x18)
++#define OB_WIN_DEFAULT_ACTIONS			(OB_WIN_ACTIONS(OB_WIN_COUNT-1) + 0x4)
++#define     OB_WIN_FUNC_NUM_MASK		GENMASK(31, 24)
++#define     OB_WIN_FUNC_NUM_SHIFT		24
++#define     OB_WIN_FUNC_NUM_ENABLE		BIT(23)
++#define     OB_WIN_BUS_NUM_BITS_MASK		GENMASK(22, 20)
++#define     OB_WIN_BUS_NUM_BITS_SHIFT		20
++#define     OB_WIN_MSG_CODE_ENABLE		BIT(22)
++#define     OB_WIN_MSG_CODE_MASK		GENMASK(21, 14)
++#define     OB_WIN_MSG_CODE_SHIFT		14
++#define     OB_WIN_MSG_PAYLOAD_LEN		BIT(12)
++#define     OB_WIN_ATTR_ENABLE			BIT(11)
++#define     OB_WIN_ATTR_TC_MASK			GENMASK(10, 8)
++#define     OB_WIN_ATTR_TC_SHIFT		8
++#define     OB_WIN_ATTR_RELAXED			BIT(7)
++#define     OB_WIN_ATTR_NOSNOOP			BIT(6)
++#define     OB_WIN_ATTR_POISON			BIT(5)
++#define     OB_WIN_ATTR_IDO			BIT(4)
++#define     OB_WIN_TYPE_MASK			GENMASK(3, 0)
++#define     OB_WIN_TYPE_SHIFT			0
++#define     OB_WIN_TYPE_MEM			0x0
++#define     OB_WIN_TYPE_IO			0x4
++#define     OB_WIN_TYPE_CONFIG_TYPE0		0x8
++#define     OB_WIN_TYPE_CONFIG_TYPE1		0x9
++#define     OB_WIN_TYPE_MSG			0xc
++
+ /* LMI registers base address and register offsets */
+ #define LMI_BASE_ADDR				0x6000
+ #define CFG_REG					(LMI_BASE_ADDR + 0x0)
+@@ -187,8 +228,16 @@
+ struct advk_pcie {
+ 	struct platform_device *pdev;
+ 	void __iomem *base;
++	struct {
++		phys_addr_t match;
++		phys_addr_t remap;
++		phys_addr_t mask;
++		u32 actions;
++	} wins[OB_WIN_COUNT];
++	u8 wins_count;
+ 	struct irq_domain *irq_domain;
+ 	struct irq_chip irq_chip;
++	raw_spinlock_t irq_lock;
+ 	struct irq_domain *msi_domain;
+ 	struct irq_domain *msi_inner_domain;
+ 	struct irq_chip msi_bottom_irq_chip;
+@@ -366,9 +415,39 @@ err:
+ 	dev_err(dev, "link never came up\n");
+ }
+ 
++/*
++ * Set PCIe address window register which could be used for memory
++ * mapping.
++ */
++static void advk_pcie_set_ob_win(struct advk_pcie *pcie, u8 win_num,
++				 phys_addr_t match, phys_addr_t remap,
++				 phys_addr_t mask, u32 actions)
++{
++	advk_writel(pcie, OB_WIN_ENABLE |
++			  lower_32_bits(match), OB_WIN_MATCH_LS(win_num));
++	advk_writel(pcie, upper_32_bits(match), OB_WIN_MATCH_MS(win_num));
++	advk_writel(pcie, lower_32_bits(remap), OB_WIN_REMAP_LS(win_num));
++	advk_writel(pcie, upper_32_bits(remap), OB_WIN_REMAP_MS(win_num));
++	advk_writel(pcie, lower_32_bits(mask), OB_WIN_MASK_LS(win_num));
++	advk_writel(pcie, upper_32_bits(mask), OB_WIN_MASK_MS(win_num));
++	advk_writel(pcie, actions, OB_WIN_ACTIONS(win_num));
++}
++
++static void advk_pcie_disable_ob_win(struct advk_pcie *pcie, u8 win_num)
++{
++	advk_writel(pcie, 0, OB_WIN_MATCH_LS(win_num));
++	advk_writel(pcie, 0, OB_WIN_MATCH_MS(win_num));
++	advk_writel(pcie, 0, OB_WIN_REMAP_LS(win_num));
++	advk_writel(pcie, 0, OB_WIN_REMAP_MS(win_num));
++	advk_writel(pcie, 0, OB_WIN_MASK_LS(win_num));
++	advk_writel(pcie, 0, OB_WIN_MASK_MS(win_num));
++	advk_writel(pcie, 0, OB_WIN_ACTIONS(win_num));
++}
++
+ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ {
+ 	u32 reg;
++	int i;
+ 
+ 	/* Enable TX */
+ 	reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG);
+@@ -447,15 +526,51 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	reg = PCIE_IRQ_ALL_MASK & (~PCIE_IRQ_ENABLE_INTS_MASK);
+ 	advk_writel(pcie, reg, HOST_CTRL_INT_MASK_REG);
+ 
++	/*
++	 * Enable AXI address window location generation:
++	 * When it is enabled, the default outbound window
++	 * configurations (Default User Field: 0xD0074CFC)
++	 * are used to transparent address translation for
++	 * the outbound transactions. Thus, PCIe address
++	 * windows are not required for transparent memory
++	 * access when default outbound window configuration
++	 * is set for memory access.
++	 */
+ 	reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG);
+ 	reg |= PCIE_CORE_CTRL2_OB_WIN_ENABLE;
+ 	advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG);
+ 
+-	/* Bypass the address window mapping for PIO */
++	/*
++	 * Set memory access in Default User Field so it
++	 * is not required to configure PCIe address for
++	 * transparent memory access.
++	 */
++	advk_writel(pcie, OB_WIN_TYPE_MEM, OB_WIN_DEFAULT_ACTIONS);
++
++	/*
++	 * Bypass the address window mapping for PIO:
++	 * Since PIO access already contains all required
++	 * info over AXI interface by PIO registers, the
++	 * address window is not required.
++	 */
+ 	reg = advk_readl(pcie, PIO_CTRL);
+ 	reg |= PIO_CTRL_ADDR_WIN_DISABLE;
+ 	advk_writel(pcie, reg, PIO_CTRL);
+ 
++	/*
++	 * Configure PCIe address windows for non-memory or
++	 * non-transparent access as by default PCIe uses
++	 * transparent memory access.
++	 */
++	for (i = 0; i < pcie->wins_count; i++)
++		advk_pcie_set_ob_win(pcie, i,
++				     pcie->wins[i].match, pcie->wins[i].remap,
++				     pcie->wins[i].mask, pcie->wins[i].actions);
++
++	/* Disable remaining PCIe outbound windows */
++	for (i = pcie->wins_count; i < OB_WIN_COUNT; i++)
++		advk_pcie_disable_ob_win(pcie, i);
++
+ 	advk_pcie_train_link(pcie);
+ 
+ 	/*
+@@ -472,7 +587,7 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
+ }
+ 
+-static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
++static int advk_pcie_check_pio_status(struct advk_pcie *pcie, u32 *val)
+ {
+ 	struct device *dev = &pcie->pdev->dev;
+ 	u32 reg;
+@@ -483,14 +598,49 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
+ 	status = (reg & PIO_COMPLETION_STATUS_MASK) >>
+ 		PIO_COMPLETION_STATUS_SHIFT;
+ 
+-	if (!status)
+-		return;
+-
++	/*
++	 * According to HW spec, the PIO status check sequence as below:
++	 * 1) even if COMPLETION_STATUS(bit9:7) indicates successful,
++	 *    it still needs to check Error Status(bit11), only when this bit
++	 *    indicates no error happen, the operation is successful.
++	 * 2) value Unsupported Request(1) of COMPLETION_STATUS(bit9:7) only
++	 *    means a PIO write error, and for PIO read it is successful with
++	 *    a read value of 0xFFFFFFFF.
++	 * 3) value Completion Retry Status(CRS) of COMPLETION_STATUS(bit9:7)
++	 *    only means a PIO write error, and for PIO read it is successful
++	 *    with a read value of 0xFFFF0001.
++	 * 4) value Completer Abort (CA) of COMPLETION_STATUS(bit9:7) means
++	 *    error for both PIO read and PIO write operation.
++	 * 5) other errors are indicated as 'unknown'.
++	 */
+ 	switch (status) {
++	case PIO_COMPLETION_STATUS_OK:
++		if (reg & PIO_ERR_STATUS) {
++			strcomp_status = "COMP_ERR";
++			break;
++		}
++		/* Get the read result */
++		if (val)
++			*val = advk_readl(pcie, PIO_RD_DATA);
++		/* No error */
++		strcomp_status = NULL;
++		break;
+ 	case PIO_COMPLETION_STATUS_UR:
+ 		strcomp_status = "UR";
+ 		break;
+ 	case PIO_COMPLETION_STATUS_CRS:
++		/* PCIe r4.0, sec 2.3.2, says:
++		 * If CRS Software Visibility is not enabled, the Root Complex
++		 * must re-issue the Configuration Request as a new Request.
++		 * A Root Complex implementation may choose to limit the number
++		 * of Configuration Request/CRS Completion Status loops before
++		 * determining that something is wrong with the target of the
++		 * Request and taking appropriate action, e.g., complete the
++		 * Request to the host as a failed transaction.
++		 *
++		 * To simplify implementation do not re-issue the Configuration
++		 * Request and complete the Request as a failed transaction.
++		 */
+ 		strcomp_status = "CRS";
+ 		break;
+ 	case PIO_COMPLETION_STATUS_CA:
+@@ -501,6 +651,9 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
+ 		break;
+ 	}
+ 
++	if (!strcomp_status)
++		return 0;
++
+ 	if (reg & PIO_NON_POSTED_REQ)
+ 		str_posted = "Non-posted";
+ 	else
+@@ -508,6 +661,8 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
+ 
+ 	dev_err(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
+ 		str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS));
++
++	return -EFAULT;
+ }
+ 
+ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
+@@ -745,10 +900,13 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 		return PCIBIOS_SET_FAILED;
+ 	}
+ 
+-	advk_pcie_check_pio_status(pcie);
++	/* Check PIO status and get the read result */
++	ret = advk_pcie_check_pio_status(pcie, val);
++	if (ret < 0) {
++		*val = 0xffffffff;
++		return PCIBIOS_SET_FAILED;
++	}
+ 
+-	/* Get the read result */
+-	*val = advk_readl(pcie, PIO_RD_DATA);
+ 	if (size == 1)
+ 		*val = (*val >> (8 * (where & 3))) & 0xff;
+ 	else if (size == 2)
+@@ -812,7 +970,9 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
+ 	if (ret < 0)
+ 		return PCIBIOS_SET_FAILED;
+ 
+-	advk_pcie_check_pio_status(pcie);
++	ret = advk_pcie_check_pio_status(pcie, NULL);
++	if (ret < 0)
++		return PCIBIOS_SET_FAILED;
+ 
+ 	return PCIBIOS_SUCCESSFUL;
+ }
+@@ -886,22 +1046,28 @@ static void advk_pcie_irq_mask(struct irq_data *d)
+ {
+ 	struct advk_pcie *pcie = d->domain->host_data;
+ 	irq_hw_number_t hwirq = irqd_to_hwirq(d);
++	unsigned long flags;
+ 	u32 mask;
+ 
++	raw_spin_lock_irqsave(&pcie->irq_lock, flags);
+ 	mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
+ 	mask |= PCIE_ISR1_INTX_ASSERT(hwirq);
+ 	advk_writel(pcie, mask, PCIE_ISR1_MASK_REG);
++	raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
+ }
+ 
+ static void advk_pcie_irq_unmask(struct irq_data *d)
+ {
+ 	struct advk_pcie *pcie = d->domain->host_data;
+ 	irq_hw_number_t hwirq = irqd_to_hwirq(d);
++	unsigned long flags;
+ 	u32 mask;
+ 
++	raw_spin_lock_irqsave(&pcie->irq_lock, flags);
+ 	mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
+ 	mask &= ~PCIE_ISR1_INTX_ASSERT(hwirq);
+ 	advk_writel(pcie, mask, PCIE_ISR1_MASK_REG);
++	raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
+ }
+ 
+ static int advk_pcie_irq_map(struct irq_domain *h,
+@@ -985,6 +1151,8 @@ static int advk_pcie_init_irq_domain(struct advk_pcie *pcie)
+ 	struct irq_chip *irq_chip;
+ 	int ret = 0;
+ 
++	raw_spin_lock_init(&pcie->irq_lock);
++
+ 	pcie_intc_node =  of_get_next_child(node, NULL);
+ 	if (!pcie_intc_node) {
+ 		dev_err(dev, "No PCIe Intc node found\n");
+@@ -1162,6 +1330,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct advk_pcie *pcie;
+ 	struct pci_host_bridge *bridge;
++	struct resource_entry *entry;
+ 	int ret, irq;
+ 
+ 	bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct advk_pcie));
+@@ -1172,6 +1341,80 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ 	pcie->pdev = pdev;
+ 	platform_set_drvdata(pdev, pcie);
+ 
++	resource_list_for_each_entry(entry, &bridge->windows) {
++		resource_size_t start = entry->res->start;
++		resource_size_t size = resource_size(entry->res);
++		unsigned long type = resource_type(entry->res);
++		u64 win_size;
++
++		/*
++		 * Aardvark hardware allows to configure also PCIe window
++		 * for config type 0 and type 1 mapping, but driver uses
++		 * only PIO for issuing configuration transfers which does
++		 * not use PCIe window configuration.
++		 */
++		if (type != IORESOURCE_MEM && type != IORESOURCE_MEM_64 &&
++		    type != IORESOURCE_IO)
++			continue;
++
++		/*
++		 * Skip transparent memory resources. Default outbound access
++		 * configuration is set to transparent memory access so it
++		 * does not need window configuration.
++		 */
++		if ((type == IORESOURCE_MEM || type == IORESOURCE_MEM_64) &&
++		    entry->offset == 0)
++			continue;
++
++		/*
++		 * The n-th PCIe window is configured by tuple (match, remap, mask)
++		 * and an access to address A uses this window if A matches the
++		 * match with given mask.
++		 * So every PCIe window size must be a power of two and every start
++		 * address must be aligned to window size. Minimal size is 64 KiB
++		 * because lower 16 bits of mask must be zero. Remapped address
++		 * may have set only bits from the mask.
++		 */
++		while (pcie->wins_count < OB_WIN_COUNT && size > 0) {
++			/* Calculate the largest aligned window size */
++			win_size = (1ULL << (fls64(size)-1)) |
++				   (start ? (1ULL << __ffs64(start)) : 0);
++			win_size = 1ULL << __ffs64(win_size);
++			if (win_size < 0x10000)
++				break;
++
++			dev_dbg(dev,
++				"Configuring PCIe window %d: [0x%llx-0x%llx] as %lu\n",
++				pcie->wins_count, (unsigned long long)start,
++				(unsigned long long)start + win_size, type);
++
++			if (type == IORESOURCE_IO) {
++				pcie->wins[pcie->wins_count].actions = OB_WIN_TYPE_IO;
++				pcie->wins[pcie->wins_count].match = pci_pio_to_address(start);
++			} else {
++				pcie->wins[pcie->wins_count].actions = OB_WIN_TYPE_MEM;
++				pcie->wins[pcie->wins_count].match = start;
++			}
++			pcie->wins[pcie->wins_count].remap = start - entry->offset;
++			pcie->wins[pcie->wins_count].mask = ~(win_size - 1);
++
++			if (pcie->wins[pcie->wins_count].remap & (win_size - 1))
++				break;
++
++			start += win_size;
++			size -= win_size;
++			pcie->wins_count++;
++		}
++
++		if (size > 0) {
++			dev_err(&pcie->pdev->dev,
++				"Invalid PCIe region [0x%llx-0x%llx]\n",
++				(unsigned long long)entry->res->start,
++				(unsigned long long)entry->res->end + 1);
++			return -EINVAL;
++		}
++	}
++
+ 	pcie->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(pcie->base))
+ 		return PTR_ERR(pcie->base);
+@@ -1252,6 +1495,7 @@ static int advk_pcie_remove(struct platform_device *pdev)
+ {
+ 	struct advk_pcie *pcie = platform_get_drvdata(pdev);
+ 	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
++	int i;
+ 
+ 	pci_lock_rescan_remove();
+ 	pci_stop_root_bus(bridge->bus);
+@@ -1261,6 +1505,10 @@ static int advk_pcie_remove(struct platform_device *pdev)
+ 	advk_pcie_remove_msi_irq_domain(pcie);
+ 	advk_pcie_remove_irq_domain(pcie);
+ 
++	/* Disable outbound address windows mapping */
++	for (i = 0; i < OB_WIN_COUNT; i++)
++		advk_pcie_disable_ob_win(pcie, i);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pci/controller/pcie-xilinx-nwl.c b/drivers/pci/controller/pcie-xilinx-nwl.c
+index f3cf7d61924f1..2a9fe7c3aef9f 100644
+--- a/drivers/pci/controller/pcie-xilinx-nwl.c
++++ b/drivers/pci/controller/pcie-xilinx-nwl.c
+@@ -6,6 +6,7 @@
+  * (C) Copyright 2014 - 2015, Xilinx, Inc.
+  */
+ 
++#include <linux/clk.h>
+ #include <linux/delay.h>
+ #include <linux/interrupt.h>
+ #include <linux/irq.h>
+@@ -168,6 +169,7 @@ struct nwl_pcie {
+ 	u8 last_busno;
+ 	struct nwl_msi msi;
+ 	struct irq_domain *legacy_irq_domain;
++	struct clk *clk;
+ 	raw_spinlock_t leg_mask_lock;
+ };
+ 
+@@ -825,6 +827,16 @@ static int nwl_pcie_probe(struct platform_device *pdev)
+ 		return err;
+ 	}
+ 
++	pcie->clk = devm_clk_get(dev, NULL);
++	if (IS_ERR(pcie->clk))
++		return PTR_ERR(pcie->clk);
++
++	err = clk_prepare_enable(pcie->clk);
++	if (err) {
++		dev_err(dev, "can't enable PCIe ref clock\n");
++		return err;
++	}
++
+ 	err = nwl_pcie_bridge_init(pcie);
+ 	if (err) {
+ 		dev_err(dev, "HW Initialization failed\n");
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index 2548c64194ca9..a7a1c74113483 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -783,6 +783,9 @@ static void msix_mask_all(void __iomem *base, int tsize)
+ 	u32 ctrl = PCI_MSIX_ENTRY_CTRL_MASKBIT;
+ 	int i;
+ 
++	if (pci_msi_ignore_mask)
++		return;
++
+ 	for (i = 0; i < tsize; i++, base += PCI_MSIX_ENTRY_SIZE)
+ 		writel(ctrl, base + PCI_MSIX_ENTRY_VECTOR_CTRL);
+ }
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 29f5d699fa06d..eae6a9fdd33d4 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1880,11 +1880,7 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
+ 	 * so that things like MSI message writing will behave as expected
+ 	 * (e.g. if the device really is in D0 at enable time).
+ 	 */
+-	if (dev->pm_cap) {
+-		u16 pmcsr;
+-		pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
+-		dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK);
+-	}
++	pci_update_current_state(dev, dev->current_state);
+ 
+ 	if (atomic_inc_return(&dev->enable_cnt) > 1)
+ 		return 0;		/* already enabled */
+@@ -4043,6 +4039,7 @@ phys_addr_t pci_pio_to_address(unsigned long pio)
+ 
+ 	return address;
+ }
++EXPORT_SYMBOL_GPL(pci_pio_to_address);
+ 
+ unsigned long __weak pci_address_to_pio(phys_addr_t address)
+ {
+diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
+index 50a9522ab07df..3779b264dbec3 100644
+--- a/drivers/pci/pcie/portdrv_core.c
++++ b/drivers/pci/pcie/portdrv_core.c
+@@ -260,8 +260,13 @@ static int get_port_device_capability(struct pci_dev *dev)
+ 		services |= PCIE_PORT_SERVICE_DPC;
+ 
+ 	if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
+-	    pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
+-		services |= PCIE_PORT_SERVICE_BWNOTIF;
++	    pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) {
++		u32 linkcap;
++
++		pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &linkcap);
++		if (linkcap & PCI_EXP_LNKCAP_LBNC)
++			services |= PCIE_PORT_SERVICE_BWNOTIF;
++	}
+ 
+ 	return services;
+ }
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index a91c944961caa..bad294c352519 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3252,6 +3252,7 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
+ 			PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
+ 			PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_ASMEDIA, 0x0612, fixup_mpss_256);
+ 
+ /*
+  * Intel 5000 and 5100 Memory controllers have an erratum with read completion
+diff --git a/drivers/pci/syscall.c b/drivers/pci/syscall.c
+index 8b003c890b87b..c9f03418e71e0 100644
+--- a/drivers/pci/syscall.c
++++ b/drivers/pci/syscall.c
+@@ -22,8 +22,10 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
+ 	long err;
+ 	int cfg_ret;
+ 
++	err = -EPERM;
++	dev = NULL;
+ 	if (!capable(CAP_SYS_ADMIN))
+-		return -EPERM;
++		goto error;
+ 
+ 	err = -ENODEV;
+ 	dev = pci_get_domain_bus_and_slot(0, bus, dfn);
+diff --git a/drivers/pinctrl/actions/pinctrl-owl.c b/drivers/pinctrl/actions/pinctrl-owl.c
+index 903a4baf3846c..c8b3e396ea275 100644
+--- a/drivers/pinctrl/actions/pinctrl-owl.c
++++ b/drivers/pinctrl/actions/pinctrl-owl.c
+@@ -444,7 +444,6 @@ static int owl_group_config_get(struct pinctrl_dev *pctrldev,
+ 	*config = pinconf_to_config_packed(param, arg);
+ 
+ 	return ret;
+-
+ }
+ 
+ static int owl_group_config_set(struct pinctrl_dev *pctrldev,
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 20b477cd5a30a..6e6825d17a1d1 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -2119,7 +2119,6 @@ struct pinctrl_dev *pinctrl_register(struct pinctrl_desc *pctldesc,
+ 		return ERR_PTR(error);
+ 
+ 	return pctldev;
+-
+ }
+ EXPORT_SYMBOL_GPL(pinctrl_register);
+ 
+diff --git a/drivers/pinctrl/freescale/pinctrl-imx1-core.c b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
+index 08d110078c439..70186448d2f4a 100644
+--- a/drivers/pinctrl/freescale/pinctrl-imx1-core.c
++++ b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
+@@ -290,7 +290,6 @@ static const struct pinctrl_ops imx1_pctrl_ops = {
+ 	.pin_dbg_show = imx1_pin_dbg_show,
+ 	.dt_node_to_map = imx1_dt_node_to_map,
+ 	.dt_free_map = imx1_dt_free_map,
+-
+ };
+ 
+ static int imx1_pmx_set(struct pinctrl_dev *pctldev, unsigned selector,
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+index 68894e9e05d2e..5cb018f988003 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+@@ -167,10 +167,14 @@ static struct armada_37xx_pin_group armada_37xx_nb_groups[] = {
+ 	PIN_GRP_GPIO("jtag", 20, 5, BIT(0), "jtag"),
+ 	PIN_GRP_GPIO("sdio0", 8, 3, BIT(1), "sdio"),
+ 	PIN_GRP_GPIO("emmc_nb", 27, 9, BIT(2), "emmc"),
+-	PIN_GRP_GPIO("pwm0", 11, 1, BIT(3), "pwm"),
+-	PIN_GRP_GPIO("pwm1", 12, 1, BIT(4), "pwm"),
+-	PIN_GRP_GPIO("pwm2", 13, 1, BIT(5), "pwm"),
+-	PIN_GRP_GPIO("pwm3", 14, 1, BIT(6), "pwm"),
++	PIN_GRP_GPIO_3("pwm0", 11, 1, BIT(3) | BIT(20), 0, BIT(20), BIT(3),
++		       "pwm", "led"),
++	PIN_GRP_GPIO_3("pwm1", 12, 1, BIT(4) | BIT(21), 0, BIT(21), BIT(4),
++		       "pwm", "led"),
++	PIN_GRP_GPIO_3("pwm2", 13, 1, BIT(5) | BIT(22), 0, BIT(22), BIT(5),
++		       "pwm", "led"),
++	PIN_GRP_GPIO_3("pwm3", 14, 1, BIT(6) | BIT(23), 0, BIT(23), BIT(6),
++		       "pwm", "led"),
+ 	PIN_GRP_GPIO("pmic1", 7, 1, BIT(7), "pmic"),
+ 	PIN_GRP_GPIO("pmic0", 6, 1, BIT(8), "pmic"),
+ 	PIN_GRP_GPIO("i2c2", 2, 2, BIT(9), "i2c"),
+@@ -184,11 +188,6 @@ static struct armada_37xx_pin_group armada_37xx_nb_groups[] = {
+ 	PIN_GRP_EXTRA("uart2", 9, 2, BIT(1) | BIT(13) | BIT(14) | BIT(19),
+ 		      BIT(1) | BIT(13) | BIT(14), BIT(1) | BIT(19),
+ 		      18, 2, "gpio", "uart"),
+-	PIN_GRP_GPIO_2("led0_od", 11, 1, BIT(20), BIT(20), 0, "led"),
+-	PIN_GRP_GPIO_2("led1_od", 12, 1, BIT(21), BIT(21), 0, "led"),
+-	PIN_GRP_GPIO_2("led2_od", 13, 1, BIT(22), BIT(22), 0, "led"),
+-	PIN_GRP_GPIO_2("led3_od", 14, 1, BIT(23), BIT(23), 0, "led"),
+-
+ };
+ 
+ static struct armada_37xx_pin_group armada_37xx_sb_groups[] = {
+diff --git a/drivers/pinctrl/pinctrl-at91.c b/drivers/pinctrl/pinctrl-at91.c
+index 72edc675431ce..9015486e38c18 100644
+--- a/drivers/pinctrl/pinctrl-at91.c
++++ b/drivers/pinctrl/pinctrl-at91.c
+@@ -733,7 +733,6 @@ static const struct at91_pinctrl_mux_ops sam9x60_ops = {
+ 	.get_slewrate   = at91_mux_sam9x60_get_slewrate,
+ 	.set_slewrate   = at91_mux_sam9x60_set_slewrate,
+ 	.irq_type	= alt_gpio_irq_type,
+-
+ };
+ 
+ static struct at91_pinctrl_mux_ops sama5d3_ops = {
+diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c
+index 033d142f0c272..e0df5ad6741dc 100644
+--- a/drivers/pinctrl/pinctrl-ingenic.c
++++ b/drivers/pinctrl/pinctrl-ingenic.c
+@@ -363,7 +363,7 @@ static const struct ingenic_chip_info jz4725b_chip_info = {
+ };
+ 
+ static const u32 jz4760_pull_ups[6] = {
+-	0xffffffff, 0xfffcf3ff, 0xffffffff, 0xffffcfff, 0xfffffb7c, 0xfffff00f,
++	0xffffffff, 0xfffcf3ff, 0xffffffff, 0xffffcfff, 0xfffffb7c, 0x0000000f,
+ };
+ 
+ static const u32 jz4760_pull_downs[6] = {
+@@ -618,11 +618,11 @@ static const struct ingenic_chip_info jz4760_chip_info = {
+ };
+ 
+ static const u32 jz4770_pull_ups[6] = {
+-	0x3fffffff, 0xfff0030c, 0xffffffff, 0xffff4fff, 0xfffffb7c, 0xffa7f00f,
++	0x3fffffff, 0xfff0f3fc, 0xffffffff, 0xffff4fff, 0xfffffb7c, 0x0024f00f,
+ };
+ 
+ static const u32 jz4770_pull_downs[6] = {
+-	0x00000000, 0x000f0c03, 0x00000000, 0x0000b000, 0x00000483, 0x00580ff0,
++	0x00000000, 0x000f0c03, 0x00000000, 0x0000b000, 0x00000483, 0x005b0ff0,
+ };
+ 
+ static int jz4770_uart0_data_pins[] = { 0xa0, 0xa3, };
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index 12cc4eb186377..17aa0d542d925 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -1222,6 +1222,7 @@ static int pcs_parse_bits_in_pinctrl_entry(struct pcs_device *pcs,
+ 
+ 	if (PCS_HAS_PINCONF) {
+ 		dev_err(pcs->dev, "pinconf not supported\n");
++		res = -ENOTSUPP;
+ 		goto free_pingroups;
+ 	}
+ 
+diff --git a/drivers/pinctrl/pinctrl-st.c b/drivers/pinctrl/pinctrl-st.c
+index 7b8c7a0b13de0..43d9e6c7fd81f 100644
+--- a/drivers/pinctrl/pinctrl-st.c
++++ b/drivers/pinctrl/pinctrl-st.c
+@@ -541,7 +541,6 @@ static void st_pinconf_set_retime_packed(struct st_pinctrl *info,
+ 	st_regmap_field_bit_set_clear_pin(rt_p->delay_0, delay & 0x1, pin);
+ 	/* 2 bit delay, msb */
+ 	st_regmap_field_bit_set_clear_pin(rt_p->delay_1, delay & 0x2, pin);
+-
+ }
+ 
+ static void st_pinconf_set_retime_dedicated(struct st_pinctrl *info,
+diff --git a/drivers/pinctrl/pinctrl-stmfx.c b/drivers/pinctrl/pinctrl-stmfx.c
+index 008c83107a3ca..5fa2488fae87a 100644
+--- a/drivers/pinctrl/pinctrl-stmfx.c
++++ b/drivers/pinctrl/pinctrl-stmfx.c
+@@ -566,7 +566,7 @@ static irqreturn_t stmfx_pinctrl_irq_thread_fn(int irq, void *dev_id)
+ 	u8 pending[NR_GPIO_REGS];
+ 	u8 src[NR_GPIO_REGS] = {0, 0, 0};
+ 	unsigned long n, status;
+-	int ret;
++	int i, ret;
+ 
+ 	ret = regmap_bulk_read(pctl->stmfx->map, STMFX_REG_IRQ_GPI_PENDING,
+ 			       &pending, NR_GPIO_REGS);
+@@ -576,7 +576,9 @@ static irqreturn_t stmfx_pinctrl_irq_thread_fn(int irq, void *dev_id)
+ 	regmap_bulk_write(pctl->stmfx->map, STMFX_REG_IRQ_GPI_SRC,
+ 			  src, NR_GPIO_REGS);
+ 
+-	status = *(unsigned long *)pending;
++	BUILD_BUG_ON(NR_GPIO_REGS > sizeof(status));
++	for (i = 0, status = 0; i < NR_GPIO_REGS; i++)
++		status |= (unsigned long)pending[i] << (i * 8);
+ 	for_each_set_bit(n, &status, gc->ngpio) {
+ 		handle_nested_irq(irq_find_mapping(gc->irq.domain, n));
+ 		stmfx_pinctrl_irq_toggle_trigger(pctl, n);
+diff --git a/drivers/pinctrl/pinctrl-sx150x.c b/drivers/pinctrl/pinctrl-sx150x.c
+index c110f780407bd..484a3b9e875c1 100644
+--- a/drivers/pinctrl/pinctrl-sx150x.c
++++ b/drivers/pinctrl/pinctrl-sx150x.c
+@@ -443,7 +443,6 @@ static void sx150x_gpio_set(struct gpio_chip *chip, unsigned int offset,
+ 		sx150x_gpio_oscio_set(pctl, value);
+ 	else
+ 		__sx150x_gpio_set(pctl, offset, value);
+-
+ }
+ 
+ static void sx150x_gpio_set_multiple(struct gpio_chip *chip,
+diff --git a/drivers/pinctrl/qcom/pinctrl-sdm845.c b/drivers/pinctrl/qcom/pinctrl-sdm845.c
+index 2834d2c1338c8..c51793f6546f1 100644
+--- a/drivers/pinctrl/qcom/pinctrl-sdm845.c
++++ b/drivers/pinctrl/qcom/pinctrl-sdm845.c
+@@ -1310,7 +1310,6 @@ static const struct msm_pinctrl_soc_data sdm845_pinctrl = {
+ 	.ngpios = 151,
+ 	.wakeirq_map = sdm845_pdc_map,
+ 	.nwakeirq_map = ARRAY_SIZE(sdm845_pdc_map),
+-
+ };
+ 
+ static const struct msm_pinctrl_soc_data sdm845_acpi_pinctrl = {
+diff --git a/drivers/pinctrl/qcom/pinctrl-ssbi-mpp.c b/drivers/pinctrl/qcom/pinctrl-ssbi-mpp.c
+index 681d8dcf37e34..92e7f2602847c 100644
+--- a/drivers/pinctrl/qcom/pinctrl-ssbi-mpp.c
++++ b/drivers/pinctrl/qcom/pinctrl-ssbi-mpp.c
+@@ -617,7 +617,6 @@ static void pm8xxx_mpp_dbg_show_one(struct seq_file *s,
+ 		}
+ 		break;
+ 	}
+-
+ }
+ 
+ static void pm8xxx_mpp_dbg_show(struct seq_file *s, struct gpio_chip *chip)
+diff --git a/drivers/pinctrl/renesas/pfc-r8a77950.c b/drivers/pinctrl/renesas/pfc-r8a77950.c
+index 04812e62f3a47..9d89da2319e56 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a77950.c
++++ b/drivers/pinctrl/renesas/pfc-r8a77950.c
+@@ -1668,7 +1668,6 @@ static const unsigned int avb_mii_pins[] = {
+ 	PIN_AVB_RX_CTL, PIN_AVB_RXC, PIN_AVB_RD0,
+ 	PIN_AVB_RD1, PIN_AVB_RD2, PIN_AVB_RD3,
+ 	PIN_AVB_TXCREFCLK,
+-
+ };
+ static const unsigned int avb_mii_mux[] = {
+ 	AVB_TX_CTL_MARK, AVB_TXC_MARK, AVB_TD0_MARK,
+diff --git a/drivers/pinctrl/renesas/pfc-r8a77951.c b/drivers/pinctrl/renesas/pfc-r8a77951.c
+index a94ebe0bf5d06..4aea6e4b71571 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a77951.c
++++ b/drivers/pinctrl/renesas/pfc-r8a77951.c
+@@ -1727,7 +1727,6 @@ static const unsigned int avb_mii_pins[] = {
+ 	PIN_AVB_RX_CTL, PIN_AVB_RXC, PIN_AVB_RD0,
+ 	PIN_AVB_RD1, PIN_AVB_RD2, PIN_AVB_RD3,
+ 	PIN_AVB_TXCREFCLK,
+-
+ };
+ static const unsigned int avb_mii_mux[] = {
+ 	AVB_TX_CTL_MARK, AVB_TXC_MARK, AVB_TD0_MARK,
+diff --git a/drivers/pinctrl/renesas/pfc-r8a7796.c b/drivers/pinctrl/renesas/pfc-r8a7796.c
+index 3878d6b0db149..a67fa0e4df7c7 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a7796.c
++++ b/drivers/pinctrl/renesas/pfc-r8a7796.c
+@@ -1732,7 +1732,6 @@ static const unsigned int avb_mii_pins[] = {
+ 	PIN_AVB_RX_CTL, PIN_AVB_RXC, PIN_AVB_RD0,
+ 	PIN_AVB_RD1, PIN_AVB_RD2, PIN_AVB_RD3,
+ 	PIN_AVB_TXCREFCLK,
+-
+ };
+ static const unsigned int avb_mii_mux[] = {
+ 	AVB_TX_CTL_MARK, AVB_TXC_MARK, AVB_TD0_MARK,
+diff --git a/drivers/pinctrl/renesas/pfc-r8a77965.c b/drivers/pinctrl/renesas/pfc-r8a77965.c
+index 7a50b9b69a7dc..7db2b7f2ff678 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a77965.c
++++ b/drivers/pinctrl/renesas/pfc-r8a77965.c
+@@ -1736,7 +1736,6 @@ static const unsigned int avb_mii_pins[] = {
+ 	PIN_AVB_RX_CTL, PIN_AVB_RXC, PIN_AVB_RD0,
+ 	PIN_AVB_RD1, PIN_AVB_RD2, PIN_AVB_RD3,
+ 	PIN_AVB_TXCREFCLK,
+-
+ };
+ static const unsigned int avb_mii_mux[] = {
+ 	AVB_TX_CTL_MARK, AVB_TXC_MARK, AVB_TD0_MARK,
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index 608eb5a07248e..7f809a57bee50 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -918,7 +918,7 @@ static int samsung_pinctrl_register(struct platform_device *pdev,
+ 		pin_bank->grange.pin_base = drvdata->pin_base
+ 						+ pin_bank->pin_base;
+ 		pin_bank->grange.base = pin_bank->grange.pin_base;
+-		pin_bank->grange.npins = pin_bank->gpio_chip.ngpio;
++		pin_bank->grange.npins = pin_bank->nr_pins;
+ 		pin_bank->grange.gc = &pin_bank->gpio_chip;
+ 		pinctrl_add_gpio_range(drvdata->pctl_dev, &pin_bank->grange);
+ 	}
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index ea5149efcbeae..9f698a7aad129 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -279,6 +279,15 @@ static int cros_ec_host_command_proto_query(struct cros_ec_device *ec_dev,
+ 	msg->insize = sizeof(struct ec_response_get_protocol_info);
+ 
+ 	ret = send_command(ec_dev, msg);
++	/*
++	 * Send command once again when timeout occurred.
++	 * Fingerprint MCU (FPMCU) is restarted during system boot which
++	 * introduces small window in which FPMCU won't respond for any
++	 * messages sent by kernel. There is no need to wait before next
++	 * attempt because we waited at least EC_MSG_DEADLINE_MS.
++	 */
++	if (ret == -ETIMEDOUT)
++		ret = send_command(ec_dev, msg);
+ 
+ 	if (ret < 0) {
+ 		dev_dbg(ec_dev->dev,
+diff --git a/drivers/platform/x86/dell-smbios-wmi.c b/drivers/platform/x86/dell-smbios-wmi.c
+index c97bd4a452422..5821e9d9a4ce4 100644
+--- a/drivers/platform/x86/dell-smbios-wmi.c
++++ b/drivers/platform/x86/dell-smbios-wmi.c
+@@ -69,6 +69,7 @@ static int run_smbios_call(struct wmi_device *wdev)
+ 		if (obj->type == ACPI_TYPE_INTEGER)
+ 			dev_dbg(&wdev->dev, "SMBIOS call failed: %llu\n",
+ 				obj->integer.value);
++		kfree(output.pointer);
+ 		return -EIO;
+ 	}
+ 	memcpy(&priv->buf->std, obj->buffer.pointer, obj->buffer.length);
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index 48d3985eaa8ad..69bb0f56e492a 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -859,8 +859,12 @@ static irqreturn_t max17042_thread_handler(int id, void *dev)
+ {
+ 	struct max17042_chip *chip = dev;
+ 	u32 val;
++	int ret;
++
++	ret = regmap_read(chip->regmap, MAX17042_STATUS, &val);
++	if (ret)
++		return IRQ_HANDLED;
+ 
+-	regmap_read(chip->regmap, MAX17042_STATUS, &val);
+ 	if ((val & STATUS_INTR_SOCMIN_BIT) ||
+ 		(val & STATUS_INTR_SOCMAX_BIT)) {
+ 		dev_info(&chip->client->dev, "SOC threshold INTR\n");
+diff --git a/drivers/rtc/rtc-tps65910.c b/drivers/rtc/rtc-tps65910.c
+index e3840386f430c..6eec86b0b1751 100644
+--- a/drivers/rtc/rtc-tps65910.c
++++ b/drivers/rtc/rtc-tps65910.c
+@@ -469,6 +469,6 @@ static struct platform_driver tps65910_rtc_driver = {
+ };
+ 
+ module_platform_driver(tps65910_rtc_driver);
+-MODULE_ALIAS("platform:rtc-tps65910");
++MODULE_ALIAS("platform:tps65910-rtc");
+ MODULE_AUTHOR("Venu Byravarasu <vbyravarasu@nvidia.com>");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
+index f9a31c7819ae6..3e29c26f01856 100644
+--- a/drivers/s390/cio/qdio_main.c
++++ b/drivers/s390/cio/qdio_main.c
+@@ -1025,6 +1025,33 @@ static void qdio_shutdown_queues(struct qdio_irq *irq_ptr)
+ 	}
+ }
+ 
++static int qdio_cancel_ccw(struct qdio_irq *irq, int how)
++{
++	struct ccw_device *cdev = irq->cdev;
++	int rc;
++
++	spin_lock_irq(get_ccwdev_lock(cdev));
++	qdio_set_state(irq, QDIO_IRQ_STATE_CLEANUP);
++	if (how & QDIO_FLAG_CLEANUP_USING_CLEAR)
++		rc = ccw_device_clear(cdev, QDIO_DOING_CLEANUP);
++	else
++		/* default behaviour is halt */
++		rc = ccw_device_halt(cdev, QDIO_DOING_CLEANUP);
++	spin_unlock_irq(get_ccwdev_lock(cdev));
++	if (rc) {
++		DBF_ERROR("%4x SHUTD ERR", irq->schid.sch_no);
++		DBF_ERROR("rc:%4d", rc);
++		return rc;
++	}
++
++	wait_event_interruptible_timeout(cdev->private->wait_q,
++					 irq->state == QDIO_IRQ_STATE_INACTIVE ||
++					 irq->state == QDIO_IRQ_STATE_ERR,
++					 10 * HZ);
++
++	return 0;
++}
++
+ /**
+  * qdio_shutdown - shut down a qdio subchannel
+  * @cdev: associated ccw device
+@@ -1063,27 +1090,7 @@ int qdio_shutdown(struct ccw_device *cdev, int how)
+ 	qdio_shutdown_queues(irq_ptr);
+ 	qdio_shutdown_debug_entries(irq_ptr);
+ 
+-	/* cleanup subchannel */
+-	spin_lock_irq(get_ccwdev_lock(cdev));
+-	qdio_set_state(irq_ptr, QDIO_IRQ_STATE_CLEANUP);
+-	if (how & QDIO_FLAG_CLEANUP_USING_CLEAR)
+-		rc = ccw_device_clear(cdev, QDIO_DOING_CLEANUP);
+-	else
+-		/* default behaviour is halt */
+-		rc = ccw_device_halt(cdev, QDIO_DOING_CLEANUP);
+-	spin_unlock_irq(get_ccwdev_lock(cdev));
+-	if (rc) {
+-		DBF_ERROR("%4x SHUTD ERR", irq_ptr->schid.sch_no);
+-		DBF_ERROR("rc:%4d", rc);
+-		goto no_cleanup;
+-	}
+-
+-	wait_event_interruptible_timeout(cdev->private->wait_q,
+-		irq_ptr->state == QDIO_IRQ_STATE_INACTIVE ||
+-		irq_ptr->state == QDIO_IRQ_STATE_ERR,
+-		10 * HZ);
+-
+-no_cleanup:
++	rc = qdio_cancel_ccw(irq_ptr, how);
+ 	qdio_shutdown_thinint(irq_ptr);
+ 	qdio_shutdown_irq(irq_ptr);
+ 
+@@ -1243,6 +1250,7 @@ int qdio_establish(struct ccw_device *cdev,
+ {
+ 	struct qdio_irq *irq_ptr = cdev->private->qdio_data;
+ 	struct subchannel_id schid;
++	long timeout;
+ 	int rc;
+ 
+ 	ccw_device_get_schid(cdev, &schid);
+@@ -1268,11 +1276,8 @@ int qdio_establish(struct ccw_device *cdev,
+ 	qdio_setup_irq(irq_ptr, init_data);
+ 
+ 	rc = qdio_establish_thinint(irq_ptr);
+-	if (rc) {
+-		qdio_shutdown_irq(irq_ptr);
+-		mutex_unlock(&irq_ptr->setup_mutex);
+-		return rc;
+-	}
++	if (rc)
++		goto err_thinint;
+ 
+ 	/* establish q */
+ 	irq_ptr->ccw.cmd_code = irq_ptr->equeue.cmd;
+@@ -1288,15 +1293,16 @@ int qdio_establish(struct ccw_device *cdev,
+ 	if (rc) {
+ 		DBF_ERROR("%4x est IO ERR", irq_ptr->schid.sch_no);
+ 		DBF_ERROR("rc:%4x", rc);
+-		qdio_shutdown_thinint(irq_ptr);
+-		qdio_shutdown_irq(irq_ptr);
+-		mutex_unlock(&irq_ptr->setup_mutex);
+-		return rc;
++		goto err_ccw_start;
+ 	}
+ 
+-	wait_event_interruptible_timeout(cdev->private->wait_q,
+-		irq_ptr->state == QDIO_IRQ_STATE_ESTABLISHED ||
+-		irq_ptr->state == QDIO_IRQ_STATE_ERR, HZ);
++	timeout = wait_event_interruptible_timeout(cdev->private->wait_q,
++						   irq_ptr->state == QDIO_IRQ_STATE_ESTABLISHED ||
++						   irq_ptr->state == QDIO_IRQ_STATE_ERR, HZ);
++	if (timeout <= 0) {
++		rc = (timeout == -ERESTARTSYS) ? -EINTR : -ETIME;
++		goto err_ccw_timeout;
++	}
+ 
+ 	if (irq_ptr->state != QDIO_IRQ_STATE_ESTABLISHED) {
+ 		mutex_unlock(&irq_ptr->setup_mutex);
+@@ -1315,6 +1321,16 @@ int qdio_establish(struct ccw_device *cdev,
+ 	qdio_print_subchannel_info(irq_ptr);
+ 	qdio_setup_debug_entries(irq_ptr);
+ 	return 0;
++
++err_ccw_timeout:
++	qdio_cancel_ccw(irq_ptr, QDIO_FLAG_CLEANUP_USING_CLEAR);
++err_ccw_start:
++	qdio_shutdown_thinint(irq_ptr);
++err_thinint:
++	qdio_shutdown_irq(irq_ptr);
++	qdio_set_state(irq_ptr, QDIO_IRQ_STATE_INACTIVE);
++	mutex_unlock(&irq_ptr->setup_mutex);
++	return rc;
+ }
+ EXPORT_SYMBOL_GPL(qdio_establish);
+ 
+diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c
+index 7231de2767a96..39ef074069971 100644
+--- a/drivers/scsi/BusLogic.c
++++ b/drivers/scsi/BusLogic.c
+@@ -1845,7 +1845,7 @@ static bool __init blogic_reportconfig(struct blogic_adapter *adapter)
+ 		else
+ 			blogic_info("None, ", adapter);
+ 		if (adapter->bios_addr > 0)
+-			blogic_info("BIOS Address: 0x%lX, ", adapter,
++			blogic_info("BIOS Address: 0x%X, ", adapter,
+ 					adapter->bios_addr);
+ 		else
+ 			blogic_info("BIOS Address: None, ", adapter);
+@@ -3603,7 +3603,7 @@ static void blogic_msg(enum blogic_msglevel msglevel, char *fmt,
+ 			if (buf[0] != '\n' || len > 1)
+ 				printk("%sscsi%d: %s", blogic_msglevelmap[msglevel], adapter->host_no, buf);
+ 		} else
+-			printk("%s", buf);
++			pr_cont("%s", buf);
+ 	} else {
+ 		if (begin) {
+ 			if (adapter != NULL && adapter->adapter_initd)
+@@ -3611,7 +3611,7 @@ static void blogic_msg(enum blogic_msglevel msglevel, char *fmt,
+ 			else
+ 				printk("%s%s", blogic_msglevelmap[msglevel], buf);
+ 		} else
+-			printk("%s", buf);
++			pr_cont("%s", buf);
+ 	}
+ 	begin = (buf[len - 1] == '\n');
+ }
+diff --git a/drivers/scsi/pcmcia/fdomain_cs.c b/drivers/scsi/pcmcia/fdomain_cs.c
+index e42acf314d068..33df6a9ba9b5f 100644
+--- a/drivers/scsi/pcmcia/fdomain_cs.c
++++ b/drivers/scsi/pcmcia/fdomain_cs.c
+@@ -45,8 +45,10 @@ static int fdomain_probe(struct pcmcia_device *link)
+ 		goto fail_disable;
+ 
+ 	if (!request_region(link->resource[0]->start, FDOMAIN_REGION_SIZE,
+-			    "fdomain_cs"))
++			    "fdomain_cs")) {
++		ret = -EBUSY;
+ 		goto fail_disable;
++	}
+ 
+ 	sh = fdomain_create(link->resource[0]->start, link->irq, 7, &link->dev);
+ 	if (!sh) {
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 846a02de4d510..c63dcc39f76c2 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3000,7 +3000,7 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ {
+ 	u32 *list;
+ 	int i;
+-	int status = 0, rc;
++	int status;
+ 	u32 *pbl;
+ 	dma_addr_t page;
+ 	int num_pages;
+@@ -3012,7 +3012,7 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ 	 */
+ 	if (!qedf->num_queues) {
+ 		QEDF_ERR(&(qedf->dbg_ctx), "No MSI-X vectors available!\n");
+-		return 1;
++		return -ENOMEM;
+ 	}
+ 
+ 	/*
+@@ -3020,7 +3020,7 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ 	 * addresses of our queues
+ 	 */
+ 	if (!qedf->p_cpuq) {
+-		status = 1;
++		status = -EINVAL;
+ 		QEDF_ERR(&qedf->dbg_ctx, "p_cpuq is NULL.\n");
+ 		goto mem_alloc_failure;
+ 	}
+@@ -3036,8 +3036,8 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ 		   "qedf->global_queues=%p.\n", qedf->global_queues);
+ 
+ 	/* Allocate DMA coherent buffers for BDQ */
+-	rc = qedf_alloc_bdq(qedf);
+-	if (rc) {
++	status = qedf_alloc_bdq(qedf);
++	if (status) {
+ 		QEDF_ERR(&qedf->dbg_ctx, "Unable to allocate bdq.\n");
+ 		goto mem_alloc_failure;
+ 	}
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index b33eff9ea80ba..299d0369e4f08 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -1623,7 +1623,7 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
+ {
+ 	u32 *list;
+ 	int i;
+-	int status = 0, rc;
++	int status;
+ 	u32 *pbl;
+ 	dma_addr_t page;
+ 	int num_pages;
+@@ -1634,14 +1634,14 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
+ 	 */
+ 	if (!qedi->num_queues) {
+ 		QEDI_ERR(&qedi->dbg_ctx, "No MSI-X vectors available!\n");
+-		return 1;
++		return -ENOMEM;
+ 	}
+ 
+ 	/* Make sure we allocated the PBL that will contain the physical
+ 	 * addresses of our queues
+ 	 */
+ 	if (!qedi->p_cpuq) {
+-		status = 1;
++		status = -EINVAL;
+ 		goto mem_alloc_failure;
+ 	}
+ 
+@@ -1656,13 +1656,13 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
+ 		  "qedi->global_queues=%p.\n", qedi->global_queues);
+ 
+ 	/* Allocate DMA coherent buffers for BDQ */
+-	rc = qedi_alloc_bdq(qedi);
+-	if (rc)
++	status = qedi_alloc_bdq(qedi);
++	if (status)
+ 		goto mem_alloc_failure;
+ 
+ 	/* Allocate DMA coherent buffers for NVM_ISCSI_CFG */
+-	rc = qedi_alloc_nvm_iscsi_cfg(qedi);
+-	if (rc)
++	status = qedi_alloc_nvm_iscsi_cfg(qedi);
++	if (status)
+ 		goto mem_alloc_failure;
+ 
+ 	/* Allocate a CQ and an associated PBL for each MSI-X
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index b7a1dc24db380..f6c76a063294b 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -91,8 +91,9 @@ static int qla_nvme_alloc_queue(struct nvme_fc_local_port *lport,
+ 	struct qla_hw_data *ha;
+ 	struct qla_qpair *qpair;
+ 
+-	if (!qidx)
+-		qidx++;
++	/* Map admin queue and 1st IO queue to index 0 */
++	if (qidx)
++		qidx--;
+ 
+ 	vha = (struct scsi_qla_host *)lport->private;
+ 	ha = vha->hw;
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 21be50b35bc27..4af794c46d175 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -14,6 +14,7 @@
+ #include <linux/slab.h>
+ #include <linux/blk-mq-pci.h>
+ #include <linux/refcount.h>
++#include <linux/crash_dump.h>
+ 
+ #include <scsi/scsi_tcq.h>
+ #include <scsi/scsicam.h>
+@@ -2828,6 +2829,11 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ 			return ret;
+ 	}
+ 
++	if (is_kdump_kernel()) {
++		ql2xmqsupport = 0;
++		ql2xallocfwdump = 0;
++	}
++
+ 	/* This may fail but that's ok */
+ 	pci_enable_pcie_error_reporting(pdev);
+ 
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 5083e5d2b4675..de73ade70c24c 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -1207,6 +1207,7 @@ static int pqi_get_raid_map(struct pqi_ctrl_info *ctrl_info,
+ 				"Requested %d bytes, received %d bytes",
+ 				raid_map_size,
+ 				get_unaligned_le32(&raid_map->structure_size));
++			rc = -EINVAL;
+ 			goto error;
+ 		}
+ 	}
+diff --git a/drivers/scsi/ufs/ufs-exynos.c b/drivers/scsi/ufs/ufs-exynos.c
+index f54b494ca4486..3f4f3d6f48f9f 100644
+--- a/drivers/scsi/ufs/ufs-exynos.c
++++ b/drivers/scsi/ufs/ufs-exynos.c
+@@ -259,7 +259,7 @@ static int exynos_ufs_get_clk_info(struct exynos_ufs *ufs)
+ 	struct ufs_hba *hba = ufs->hba;
+ 	struct list_head *head = &hba->clk_list_head;
+ 	struct ufs_clk_info *clki;
+-	u32 pclk_rate;
++	unsigned long pclk_rate;
+ 	u32 f_min, f_max;
+ 	u8 div = 0;
+ 	int ret = 0;
+@@ -298,7 +298,7 @@ static int exynos_ufs_get_clk_info(struct exynos_ufs *ufs)
+ 	}
+ 
+ 	if (unlikely(pclk_rate < f_min || pclk_rate > f_max)) {
+-		dev_err(hba->dev, "not available pclk range %d\n", pclk_rate);
++		dev_err(hba->dev, "not available pclk range %lu\n", pclk_rate);
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+diff --git a/drivers/scsi/ufs/ufs-exynos.h b/drivers/scsi/ufs/ufs-exynos.h
+index 76d6e39efb2f0..541b577c371ce 100644
+--- a/drivers/scsi/ufs/ufs-exynos.h
++++ b/drivers/scsi/ufs/ufs-exynos.h
+@@ -197,7 +197,7 @@ struct exynos_ufs {
+ 	u32 pclk_div;
+ 	u32 pclk_avail_min;
+ 	u32 pclk_avail_max;
+-	u32 mclk_rate;
++	unsigned long mclk_rate;
+ 	int avail_ln_rx;
+ 	int avail_ln_tx;
+ 	int rx_sel_idx;
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 854c96e630077..4dabd09400c6d 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -3249,9 +3249,11 @@ int ufshcd_read_desc_param(struct ufs_hba *hba,
+ 
+ 	if (is_kmalloc) {
+ 		/* Make sure we don't copy more data than available */
+-		if (param_offset + param_size > buff_len)
+-			param_size = buff_len - param_offset;
+-		memcpy(param_read_buf, &desc_buf[param_offset], param_size);
++		if (param_offset >= buff_len)
++			ret = -EINVAL;
++		else
++			memcpy(param_read_buf, &desc_buf[param_offset],
++			       min_t(u32, param_size, buff_len - param_offset));
+ 	}
+ out:
+ 	if (is_kmalloc)
+diff --git a/drivers/soc/aspeed/aspeed-lpc-ctrl.c b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
+index 01ed21e8bfee5..040c7dc1d4792 100644
+--- a/drivers/soc/aspeed/aspeed-lpc-ctrl.c
++++ b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
+@@ -46,7 +46,7 @@ static int aspeed_lpc_ctrl_mmap(struct file *file, struct vm_area_struct *vma)
+ 	unsigned long vsize = vma->vm_end - vma->vm_start;
+ 	pgprot_t prot = vma->vm_page_prot;
+ 
+-	if (vma->vm_pgoff + vsize > lpc_ctrl->mem_base + lpc_ctrl->mem_size)
++	if (vma->vm_pgoff + vma_pages(vma) > lpc_ctrl->mem_size >> PAGE_SHIFT)
+ 		return -EINVAL;
+ 
+ 	/* ast2400/2500 AHB accesses are not cache coherent */
+diff --git a/drivers/soc/aspeed/aspeed-p2a-ctrl.c b/drivers/soc/aspeed/aspeed-p2a-ctrl.c
+index b60fbeaffcbd0..20b5fb2a207cc 100644
+--- a/drivers/soc/aspeed/aspeed-p2a-ctrl.c
++++ b/drivers/soc/aspeed/aspeed-p2a-ctrl.c
+@@ -110,7 +110,7 @@ static int aspeed_p2a_mmap(struct file *file, struct vm_area_struct *vma)
+ 	vsize = vma->vm_end - vma->vm_start;
+ 	prot = vma->vm_page_prot;
+ 
+-	if (vma->vm_pgoff + vsize > ctrl->mem_base + ctrl->mem_size)
++	if (vma->vm_pgoff + vma_pages(vma) > ctrl->mem_size >> PAGE_SHIFT)
+ 		return -EINVAL;
+ 
+ 	/* ast2400/2500 AHB accesses are not cache coherent */
+diff --git a/drivers/soc/qcom/qcom_aoss.c b/drivers/soc/qcom/qcom_aoss.c
+index ed2c687c16b31..4fe88d4690e2b 100644
+--- a/drivers/soc/qcom/qcom_aoss.c
++++ b/drivers/soc/qcom/qcom_aoss.c
+@@ -476,12 +476,12 @@ static int qmp_cooling_device_add(struct qmp *qmp,
+ static int qmp_cooling_devices_register(struct qmp *qmp)
+ {
+ 	struct device_node *np, *child;
+-	int count = QMP_NUM_COOLING_RESOURCES;
++	int count = 0;
+ 	int ret;
+ 
+ 	np = qmp->dev->of_node;
+ 
+-	qmp->cooling_devs = devm_kcalloc(qmp->dev, count,
++	qmp->cooling_devs = devm_kcalloc(qmp->dev, QMP_NUM_COOLING_RESOURCES,
+ 					 sizeof(*qmp->cooling_devs),
+ 					 GFP_KERNEL);
+ 
+@@ -497,12 +497,16 @@ static int qmp_cooling_devices_register(struct qmp *qmp)
+ 			goto unroll;
+ 	}
+ 
++	if (!count)
++		devm_kfree(qmp->dev, qmp->cooling_devs);
++
+ 	return 0;
+ 
+ unroll:
+ 	while (--count >= 0)
+ 		thermal_cooling_device_unregister
+ 			(qmp->cooling_devs[count].cdev);
++	devm_kfree(qmp->dev, qmp->cooling_devs);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index 6a1e862b16c38..dad4326a2a714 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -537,12 +537,14 @@ static int intel_link_power_down(struct sdw_intel *sdw)
+ 
+ 	mutex_lock(sdw->link_res->shim_lock);
+ 
+-	intel_shim_master_ip_to_glue(sdw);
+-
+ 	if (!(*shim_mask & BIT(link_id)))
+ 		dev_err(sdw->cdns.dev,
+ 			"%s: Unbalanced power-up/down calls\n", __func__);
+ 
++	sdw->cdns.link_up = false;
++
++	intel_shim_master_ip_to_glue(sdw);
++
+ 	*shim_mask &= ~BIT(link_id);
+ 
+ 	if (!*shim_mask) {
+@@ -559,20 +561,21 @@ static int intel_link_power_down(struct sdw_intel *sdw)
+ 		link_control &=  spa_mask;
+ 
+ 		ret = intel_clear_bit(shim, SDW_SHIM_LCTL, link_control, cpa_mask);
++		if (ret < 0) {
++			dev_err(sdw->cdns.dev, "%s: could not power down link\n", __func__);
++
++			/*
++			 * we leave the sdw->cdns.link_up flag as false since we've disabled
++			 * the link at this point and cannot handle interrupts any longer.
++			 */
++		}
+ 	}
+ 
+ 	link_control = intel_readl(shim, SDW_SHIM_LCTL);
+ 
+ 	mutex_unlock(sdw->link_res->shim_lock);
+ 
+-	if (ret < 0) {
+-		dev_err(sdw->cdns.dev, "%s: could not power down link\n", __func__);
+-
+-		return ret;
+-	}
+-
+-	sdw->cdns.link_up = false;
+-	return 0;
++	return ret;
+ }
+ 
+ static void intel_shim_sync_arm(struct sdw_intel *sdw)
+diff --git a/drivers/staging/board/board.c b/drivers/staging/board/board.c
+index cb6feb34dd401..f980af0373452 100644
+--- a/drivers/staging/board/board.c
++++ b/drivers/staging/board/board.c
+@@ -136,6 +136,7 @@ int __init board_staging_register_clock(const struct board_staging_clk *bsc)
+ static int board_staging_add_dev_domain(struct platform_device *pdev,
+ 					const char *domain)
+ {
++	struct device *dev = &pdev->dev;
+ 	struct of_phandle_args pd_args;
+ 	struct device_node *np;
+ 
+@@ -148,7 +149,11 @@ static int board_staging_add_dev_domain(struct platform_device *pdev,
+ 	pd_args.np = np;
+ 	pd_args.args_count = 0;
+ 
+-	return of_genpd_add_device(&pd_args, &pdev->dev);
++	/* Initialization similar to device_pm_init_common() */
++	spin_lock_init(&dev->power.lock);
++	dev->power.early_init = true;
++
++	return of_genpd_add_device(&pd_args, dev);
+ }
+ #else
+ static inline int board_staging_add_dev_domain(struct platform_device *pdev,
+diff --git a/drivers/staging/ks7010/ks7010_sdio.c b/drivers/staging/ks7010/ks7010_sdio.c
+index 78dc8beeae98e..8c740c771f509 100644
+--- a/drivers/staging/ks7010/ks7010_sdio.c
++++ b/drivers/staging/ks7010/ks7010_sdio.c
+@@ -939,9 +939,9 @@ static void ks7010_private_init(struct ks_wlan_private *priv,
+ 	memset(&priv->wstats, 0, sizeof(priv->wstats));
+ 
+ 	/* sleep mode */
++	atomic_set(&priv->sleepstatus.status, 0);
+ 	atomic_set(&priv->sleepstatus.doze_request, 0);
+ 	atomic_set(&priv->sleepstatus.wakeup_request, 0);
+-	atomic_set(&priv->sleepstatus.wakeup_request, 0);
+ 
+ 	trx_device_init(priv);
+ 	hostif_init(priv);
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_v4l2.c b/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
+index 0295e2e32d797..fa1bd99cd6f17 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
+@@ -1763,7 +1763,8 @@ static int atomisp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
+ 	if (err < 0)
+ 		goto register_entities_fail;
+ 	/* init atomisp wdts */
+-	if (init_atomisp_wdts(isp) != 0)
++	err = init_atomisp_wdts(isp);
++	if (err != 0)
+ 		goto wdt_work_queue_fail;
+ 
+ 	/* save the iunit context only once after all the values are init'ed. */
+@@ -1815,6 +1816,7 @@ request_irq_fail:
+ 	hmm_cleanup();
+ 	hmm_pool_unregister(HMM_POOL_TYPE_RESERVED);
+ hmm_pool_fail:
++	pm_runtime_get_noresume(&pdev->dev);
+ 	destroy_workqueue(isp->wdt_work_queue);
+ wdt_work_queue_fail:
+ 	atomisp_acc_cleanup(isp);
+diff --git a/drivers/staging/media/hantro/hantro_g1_vp8_dec.c b/drivers/staging/media/hantro/hantro_g1_vp8_dec.c
+index a5cdf150cd16c..d30bdc678cc24 100644
+--- a/drivers/staging/media/hantro/hantro_g1_vp8_dec.c
++++ b/drivers/staging/media/hantro/hantro_g1_vp8_dec.c
+@@ -377,12 +377,17 @@ static void cfg_ref(struct hantro_ctx *ctx,
+ 	vb2_dst = hantro_get_dst_buf(ctx);
+ 
+ 	ref = hantro_get_ref(ctx, hdr->last_frame_ts);
+-	if (!ref)
++	if (!ref) {
++		vpu_debug(0, "failed to find last frame ts=%llu\n",
++			  hdr->last_frame_ts);
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
++	}
+ 	vdpu_write_relaxed(vpu, ref, G1_REG_ADDR_REF(0));
+ 
+ 	ref = hantro_get_ref(ctx, hdr->golden_frame_ts);
+-	WARN_ON(!ref && hdr->golden_frame_ts);
++	if (!ref && hdr->golden_frame_ts)
++		vpu_debug(0, "failed to find golden frame ts=%llu\n",
++			  hdr->golden_frame_ts);
+ 	if (!ref)
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
+ 	if (hdr->flags & V4L2_VP8_FRAME_HEADER_FLAG_SIGN_BIAS_GOLDEN)
+@@ -390,7 +395,9 @@ static void cfg_ref(struct hantro_ctx *ctx,
+ 	vdpu_write_relaxed(vpu, ref, G1_REG_ADDR_REF(4));
+ 
+ 	ref = hantro_get_ref(ctx, hdr->alt_frame_ts);
+-	WARN_ON(!ref && hdr->alt_frame_ts);
++	if (!ref && hdr->alt_frame_ts)
++		vpu_debug(0, "failed to find alt frame ts=%llu\n",
++			  hdr->alt_frame_ts);
+ 	if (!ref)
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
+ 	if (hdr->flags & V4L2_VP8_FRAME_HEADER_FLAG_SIGN_BIAS_ALT)
+diff --git a/drivers/staging/media/hantro/rk3399_vpu_hw_vp8_dec.c b/drivers/staging/media/hantro/rk3399_vpu_hw_vp8_dec.c
+index a4a792f00b111..5b8c8fc49cce8 100644
+--- a/drivers/staging/media/hantro/rk3399_vpu_hw_vp8_dec.c
++++ b/drivers/staging/media/hantro/rk3399_vpu_hw_vp8_dec.c
+@@ -454,12 +454,17 @@ static void cfg_ref(struct hantro_ctx *ctx,
+ 	vb2_dst = hantro_get_dst_buf(ctx);
+ 
+ 	ref = hantro_get_ref(ctx, hdr->last_frame_ts);
+-	if (!ref)
++	if (!ref) {
++		vpu_debug(0, "failed to find last frame ts=%llu\n",
++			  hdr->last_frame_ts);
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
++	}
+ 	vdpu_write_relaxed(vpu, ref, VDPU_REG_VP8_ADDR_REF0);
+ 
+ 	ref = hantro_get_ref(ctx, hdr->golden_frame_ts);
+-	WARN_ON(!ref && hdr->golden_frame_ts);
++	if (!ref && hdr->golden_frame_ts)
++		vpu_debug(0, "failed to find golden frame ts=%llu\n",
++			  hdr->golden_frame_ts);
+ 	if (!ref)
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
+ 	if (hdr->flags & V4L2_VP8_FRAME_HEADER_FLAG_SIGN_BIAS_GOLDEN)
+@@ -467,7 +472,9 @@ static void cfg_ref(struct hantro_ctx *ctx,
+ 	vdpu_write_relaxed(vpu, ref, VDPU_REG_VP8_ADDR_REF2_5(2));
+ 
+ 	ref = hantro_get_ref(ctx, hdr->alt_frame_ts);
+-	WARN_ON(!ref && hdr->alt_frame_ts);
++	if (!ref && hdr->alt_frame_ts)
++		vpu_debug(0, "failed to find alt frame ts=%llu\n",
++			  hdr->alt_frame_ts);
+ 	if (!ref)
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
+ 	if (hdr->flags & V4L2_VP8_FRAME_HEADER_FLAG_SIGN_BIAS_ALT)
+diff --git a/drivers/staging/rts5208/rtsx_scsi.c b/drivers/staging/rts5208/rtsx_scsi.c
+index 1deb74112ad43..11d9d9155eef2 100644
+--- a/drivers/staging/rts5208/rtsx_scsi.c
++++ b/drivers/staging/rts5208/rtsx_scsi.c
+@@ -2802,10 +2802,10 @@ static int get_ms_information(struct scsi_cmnd *srb, struct rtsx_chip *chip)
+ 	}
+ 
+ 	if (dev_info_id == 0x15) {
+-		buf_len = 0x3A;
++		buf_len = 0x3C;
+ 		data_len = 0x3A;
+ 	} else {
+-		buf_len = 0x6A;
++		buf_len = 0x6C;
+ 		data_len = 0x6A;
+ 	}
+ 
+@@ -2855,11 +2855,7 @@ static int get_ms_information(struct scsi_cmnd *srb, struct rtsx_chip *chip)
+ 	}
+ 
+ 	rtsx_stor_set_xfer_buf(buf, buf_len, srb);
+-
+-	if (dev_info_id == 0x15)
+-		scsi_set_resid(srb, scsi_bufflen(srb) - 0x3C);
+-	else
+-		scsi_set_resid(srb, scsi_bufflen(srb) - 0x6C);
++	scsi_set_resid(srb, scsi_bufflen(srb) - buf_len);
+ 
+ 	kfree(buf);
+ 	return STATUS_SUCCESS;
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index 9a272a516b2d7..c4b157c29af7a 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -2204,7 +2204,7 @@ static void tb_switch_default_link_ports(struct tb_switch *sw)
+ {
+ 	int i;
+ 
+-	for (i = 1; i <= sw->config.max_port_number; i += 2) {
++	for (i = 1; i <= sw->config.max_port_number; i++) {
+ 		struct tb_port *port = &sw->ports[i];
+ 		struct tb_port *subordinate;
+ 
+diff --git a/drivers/tty/hvc/hvsi.c b/drivers/tty/hvc/hvsi.c
+index e8c58f9bd2632..d6afaae1729aa 100644
+--- a/drivers/tty/hvc/hvsi.c
++++ b/drivers/tty/hvc/hvsi.c
+@@ -1038,7 +1038,7 @@ static const struct tty_operations hvsi_ops = {
+ 
+ static int __init hvsi_init(void)
+ {
+-	int i;
++	int i, ret;
+ 
+ 	hvsi_driver = alloc_tty_driver(hvsi_count);
+ 	if (!hvsi_driver)
+@@ -1069,12 +1069,25 @@ static int __init hvsi_init(void)
+ 	}
+ 	hvsi_wait = wait_for_state; /* irqs active now */
+ 
+-	if (tty_register_driver(hvsi_driver))
+-		panic("Couldn't register hvsi console driver\n");
++	ret = tty_register_driver(hvsi_driver);
++	if (ret) {
++		pr_err("Couldn't register hvsi console driver\n");
++		goto err_free_irq;
++	}
+ 
+ 	printk(KERN_DEBUG "HVSI: registered %i devices\n", hvsi_count);
+ 
+ 	return 0;
++err_free_irq:
++	hvsi_wait = poll_for_state;
++	for (i = 0; i < hvsi_count; i++) {
++		struct hvsi_struct *hp = &hvsi_ports[i];
++
++		free_irq(hp->virq, hp);
++	}
++	tty_driver_kref_put(hvsi_driver);
++
++	return ret;
+ }
+ device_initcall(hvsi_init);
+ 
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index c37468887fd2a..efe4cf32add2c 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -617,7 +617,7 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 	struct uart_port *port = dev_id;
+ 	struct omap8250_priv *priv = port->private_data;
+ 	struct uart_8250_port *up = up_to_u8250p(port);
+-	unsigned int iir;
++	unsigned int iir, lsr;
+ 	int ret;
+ 
+ #ifdef CONFIG_SERIAL_8250_DMA
+@@ -628,6 +628,7 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ #endif
+ 
+ 	serial8250_rpm_get(up);
++	lsr = serial_port_in(port, UART_LSR);
+ 	iir = serial_port_in(port, UART_IIR);
+ 	ret = serial8250_handle_irq(port, iir);
+ 
+@@ -642,6 +643,24 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 		serial_port_in(port, UART_RX);
+ 	}
+ 
++	/* Stop processing interrupts on input overrun */
++	if ((lsr & UART_LSR_OE) && up->overrun_backoff_time_ms > 0) {
++		unsigned long delay;
++
++		up->ier = port->serial_in(port, UART_IER);
++		if (up->ier & (UART_IER_RLSI | UART_IER_RDI)) {
++			port->ops->stop_rx(port);
++		} else {
++			/* Keep restarting the timer until
++			 * the input overrun subsides.
++			 */
++			cancel_delayed_work(&up->overrun_backoff);
++		}
++
++		delay = msecs_to_jiffies(up->overrun_backoff_time_ms);
++		schedule_delayed_work(&up->overrun_backoff, delay);
++	}
++
+ 	serial8250_rpm_put(up);
+ 
+ 	return IRQ_RETVAL(ret);
+@@ -1353,6 +1372,10 @@ static int omap8250_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	if (of_property_read_u32(np, "overrun-throttle-ms",
++				 &up.overrun_backoff_time_ms) != 0)
++		up.overrun_backoff_time_ms = 0;
++
+ 	priv->wakeirq = irq_of_parse_and_map(np, 1);
+ 
+ 	pdata = of_device_get_match_data(&pdev->dev);
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 39f9ea24e3169..58f718ed1bb98 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -87,7 +87,7 @@ static void moan_device(const char *str, struct pci_dev *dev)
+ 
+ static int
+ setup_port(struct serial_private *priv, struct uart_8250_port *port,
+-	   int bar, int offset, int regshift)
++	   u8 bar, unsigned int offset, int regshift)
+ {
+ 	struct pci_dev *dev = priv->dev;
+ 
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 3de0a16e055a3..5d40f1010fbfd 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -122,7 +122,8 @@ static const struct serial8250_config uart_config[] = {
+ 		.name		= "16C950/954",
+ 		.fifo_size	= 128,
+ 		.tx_loadsz	= 128,
+-		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
++		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01,
++		.rxtrig_bytes	= {16, 32, 112, 120},
+ 		/* UART_CAP_EFR breaks billionon CF bluetooth card. */
+ 		.flags		= UART_CAP_FIFO | UART_CAP_SLEEP,
+ 	},
+diff --git a/drivers/tty/serial/jsm/jsm_neo.c b/drivers/tty/serial/jsm/jsm_neo.c
+index bf0e2a4cb0cef..c6f927a76c3be 100644
+--- a/drivers/tty/serial/jsm/jsm_neo.c
++++ b/drivers/tty/serial/jsm/jsm_neo.c
+@@ -815,7 +815,9 @@ static void neo_parse_isr(struct jsm_board *brd, u32 port)
+ 		/* Parse any modem signal changes */
+ 		jsm_dbg(INTR, &ch->ch_bd->pci_dev,
+ 			"MOD_STAT: sending to parse_modem_sigs\n");
++		spin_lock_irqsave(&ch->uart_port.lock, lock_flags);
+ 		neo_parse_modem(ch, readb(&ch->ch_neo_uart->msr));
++		spin_unlock_irqrestore(&ch->uart_port.lock, lock_flags);
+ 	}
+ }
+ 
+diff --git a/drivers/tty/serial/jsm/jsm_tty.c b/drivers/tty/serial/jsm/jsm_tty.c
+index 689774c073ca4..8438454ca653f 100644
+--- a/drivers/tty/serial/jsm/jsm_tty.c
++++ b/drivers/tty/serial/jsm/jsm_tty.c
+@@ -187,6 +187,7 @@ static void jsm_tty_break(struct uart_port *port, int break_state)
+ 
+ static int jsm_tty_open(struct uart_port *port)
+ {
++	unsigned long lock_flags;
+ 	struct jsm_board *brd;
+ 	struct jsm_channel *channel =
+ 		container_of(port, struct jsm_channel, uart_port);
+@@ -240,6 +241,7 @@ static int jsm_tty_open(struct uart_port *port)
+ 	channel->ch_cached_lsr = 0;
+ 	channel->ch_stops_sent = 0;
+ 
++	spin_lock_irqsave(&port->lock, lock_flags);
+ 	termios = &port->state->port.tty->termios;
+ 	channel->ch_c_cflag	= termios->c_cflag;
+ 	channel->ch_c_iflag	= termios->c_iflag;
+@@ -259,6 +261,7 @@ static int jsm_tty_open(struct uart_port *port)
+ 	jsm_carrier(channel);
+ 
+ 	channel->ch_open_count++;
++	spin_unlock_irqrestore(&port->lock, lock_flags);
+ 
+ 	jsm_dbg(OPEN, &channel->ch_bd->pci_dev, "finish\n");
+ 	return 0;
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 70898a999a498..f700bfaef1293 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -1760,6 +1760,10 @@ static irqreturn_t sci_br_interrupt(int irq, void *ptr)
+ 
+ 	/* Handle BREAKs */
+ 	sci_handle_breaks(port);
++
++	/* drop invalid character received before break was detected */
++	serial_port_in(port, SCxRDR);
++
+ 	sci_clear_SCxSR(port, SCxSR_BREAK_CLEAR(port));
+ 
+ 	return IRQ_HANDLED;
+@@ -1839,7 +1843,8 @@ static irqreturn_t sci_mpxed_interrupt(int irq, void *ptr)
+ 		ret = sci_er_interrupt(irq, ptr);
+ 
+ 	/* Break Interrupt */
+-	if ((ssr_status & SCxSR_BRK(port)) && err_enabled)
++	if (s->irqs[SCIx_ERI_IRQ] != s->irqs[SCIx_BRI_IRQ] &&
++	    (ssr_status & SCxSR_BRK(port)) && err_enabled)
+ 		ret = sci_br_interrupt(irq, ptr);
+ 
+ 	/* Overrun Interrupt */
+diff --git a/drivers/usb/chipidea/host.c b/drivers/usb/chipidea/host.c
+index 48e4a5ca18359..f5f56ee07729f 100644
+--- a/drivers/usb/chipidea/host.c
++++ b/drivers/usb/chipidea/host.c
+@@ -233,18 +233,26 @@ static int ci_ehci_hub_control(
+ )
+ {
+ 	struct ehci_hcd	*ehci = hcd_to_ehci(hcd);
++	unsigned int	ports = HCS_N_PORTS(ehci->hcs_params);
+ 	u32 __iomem	*status_reg;
+-	u32		temp;
++	u32		temp, port_index;
+ 	unsigned long	flags;
+ 	int		retval = 0;
+ 	struct device *dev = hcd->self.controller;
+ 	struct ci_hdrc *ci = dev_get_drvdata(dev);
+ 
+-	status_reg = &ehci->regs->port_status[(wIndex & 0xff) - 1];
++	port_index = wIndex & 0xff;
++	port_index -= (port_index > 0);
++	status_reg = &ehci->regs->port_status[port_index];
+ 
+ 	spin_lock_irqsave(&ehci->lock, flags);
+ 
+ 	if (typeReq == SetPortFeature && wValue == USB_PORT_FEAT_SUSPEND) {
++		if (!wIndex || wIndex > ports) {
++			retval = -EPIPE;
++			goto done;
++		}
++
+ 		temp = ehci_readl(ehci, status_reg);
+ 		if ((temp & PORT_PE) == 0 || (temp & PORT_RESET) != 0) {
+ 			retval = -EPIPE;
+@@ -273,7 +281,7 @@ static int ci_ehci_hub_control(
+ 			ehci_writel(ehci, temp, status_reg);
+ 		}
+ 
+-		set_bit((wIndex & 0xff) - 1, &ehci->suspended_ports);
++		set_bit(port_index, &ehci->suspended_ports);
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 1a556a628971f..3ffa939678d77 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -481,7 +481,7 @@ static u8 encode_bMaxPower(enum usb_device_speed speed,
+ {
+ 	unsigned val;
+ 
+-	if (c->MaxPower)
++	if (c->MaxPower || (c->bmAttributes & USB_CONFIG_ATT_SELFPOWER))
+ 		val = c->MaxPower;
+ 	else
+ 		val = CONFIG_USB_GADGET_VBUS_DRAW;
+@@ -905,7 +905,11 @@ static int set_config(struct usb_composite_dev *cdev,
+ 	}
+ 
+ 	/* when we return, be sure our power usage is valid */
+-	power = c->MaxPower ? c->MaxPower : CONFIG_USB_GADGET_VBUS_DRAW;
++	if (c->MaxPower || (c->bmAttributes & USB_CONFIG_ATT_SELFPOWER))
++		power = c->MaxPower;
++	else
++		power = CONFIG_USB_GADGET_VBUS_DRAW;
++
+ 	if (gadget->speed < USB_SPEED_SUPER)
+ 		power = min(power, 500U);
+ 	else
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index c019f2b0c0af3..a9cb647bac6fb 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -491,8 +491,9 @@ static netdev_tx_t eth_start_xmit(struct sk_buff *skb,
+ 	}
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+ 
+-	if (skb && !in) {
+-		dev_kfree_skb_any(skb);
++	if (!in) {
++		if (skb)
++			dev_kfree_skb_any(skb);
+ 		return NETDEV_TX_OK;
+ 	}
+ 
+diff --git a/drivers/usb/host/ehci-mv.c b/drivers/usb/host/ehci-mv.c
+index cffdc8d01b2a8..8fd27249ad257 100644
+--- a/drivers/usb/host/ehci-mv.c
++++ b/drivers/usb/host/ehci-mv.c
+@@ -42,26 +42,25 @@ struct ehci_hcd_mv {
+ 	int (*set_vbus)(unsigned int vbus);
+ };
+ 
+-static void ehci_clock_enable(struct ehci_hcd_mv *ehci_mv)
++static int mv_ehci_enable(struct ehci_hcd_mv *ehci_mv)
+ {
+-	clk_prepare_enable(ehci_mv->clk);
+-}
++	int retval;
+ 
+-static void ehci_clock_disable(struct ehci_hcd_mv *ehci_mv)
+-{
+-	clk_disable_unprepare(ehci_mv->clk);
+-}
++	retval = clk_prepare_enable(ehci_mv->clk);
++	if (retval)
++		return retval;
+ 
+-static int mv_ehci_enable(struct ehci_hcd_mv *ehci_mv)
+-{
+-	ehci_clock_enable(ehci_mv);
+-	return phy_init(ehci_mv->phy);
++	retval = phy_init(ehci_mv->phy);
++	if (retval)
++		clk_disable_unprepare(ehci_mv->clk);
++
++	return retval;
+ }
+ 
+ static void mv_ehci_disable(struct ehci_hcd_mv *ehci_mv)
+ {
+ 	phy_exit(ehci_mv->phy);
+-	ehci_clock_disable(ehci_mv);
++	clk_disable_unprepare(ehci_mv->clk);
+ }
+ 
+ static int mv_ehci_reset(struct usb_hcd *hcd)
+diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c
+index bd958f059fe64..ff0b3457fd342 100644
+--- a/drivers/usb/host/fotg210-hcd.c
++++ b/drivers/usb/host/fotg210-hcd.c
+@@ -2509,11 +2509,6 @@ retry_xacterr:
+ 	return count;
+ }
+ 
+-/* high bandwidth multiplier, as encoded in highspeed endpoint descriptors */
+-#define hb_mult(wMaxPacketSize) (1 + (((wMaxPacketSize) >> 11) & 0x03))
+-/* ... and packet size, for any kind of endpoint descriptor */
+-#define max_packet(wMaxPacketSize) ((wMaxPacketSize) & 0x07ff)
+-
+ /* reverse of qh_urb_transaction:  free a list of TDs.
+  * used for cleanup after errors, before HC sees an URB's TDs.
+  */
+@@ -2599,7 +2594,7 @@ static struct list_head *qh_urb_transaction(struct fotg210_hcd *fotg210,
+ 		token |= (1 /* "in" */ << 8);
+ 	/* else it's already initted to "out" pid (0 << 8) */
+ 
+-	maxpacket = max_packet(usb_maxpacket(urb->dev, urb->pipe, !is_input));
++	maxpacket = usb_maxpacket(urb->dev, urb->pipe, !is_input);
+ 
+ 	/*
+ 	 * buffer gets wrapped in one or more qtds;
+@@ -2713,9 +2708,11 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 		gfp_t flags)
+ {
+ 	struct fotg210_qh *qh = fotg210_qh_alloc(fotg210, flags);
++	struct usb_host_endpoint *ep;
+ 	u32 info1 = 0, info2 = 0;
+ 	int is_input, type;
+ 	int maxp = 0;
++	int mult;
+ 	struct usb_tt *tt = urb->dev->tt;
+ 	struct fotg210_qh_hw *hw;
+ 
+@@ -2730,14 +2727,15 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 
+ 	is_input = usb_pipein(urb->pipe);
+ 	type = usb_pipetype(urb->pipe);
+-	maxp = usb_maxpacket(urb->dev, urb->pipe, !is_input);
++	ep = usb_pipe_endpoint(urb->dev, urb->pipe);
++	maxp = usb_endpoint_maxp(&ep->desc);
++	mult = usb_endpoint_maxp_mult(&ep->desc);
+ 
+ 	/* 1024 byte maxpacket is a hardware ceiling.  High bandwidth
+ 	 * acts like up to 3KB, but is built from smaller packets.
+ 	 */
+-	if (max_packet(maxp) > 1024) {
+-		fotg210_dbg(fotg210, "bogus qh maxpacket %d\n",
+-				max_packet(maxp));
++	if (maxp > 1024) {
++		fotg210_dbg(fotg210, "bogus qh maxpacket %d\n", maxp);
+ 		goto done;
+ 	}
+ 
+@@ -2751,8 +2749,7 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 	 */
+ 	if (type == PIPE_INTERRUPT) {
+ 		qh->usecs = NS_TO_US(usb_calc_bus_time(USB_SPEED_HIGH,
+-				is_input, 0,
+-				hb_mult(maxp) * max_packet(maxp)));
++				is_input, 0, mult * maxp));
+ 		qh->start = NO_FRAME;
+ 
+ 		if (urb->dev->speed == USB_SPEED_HIGH) {
+@@ -2789,7 +2786,7 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 			think_time = tt ? tt->think_time : 0;
+ 			qh->tt_usecs = NS_TO_US(think_time +
+ 					usb_calc_bus_time(urb->dev->speed,
+-					is_input, 0, max_packet(maxp)));
++					is_input, 0, maxp));
+ 			qh->period = urb->interval;
+ 			if (qh->period > fotg210->periodic_size) {
+ 				qh->period = fotg210->periodic_size;
+@@ -2852,11 +2849,11 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 			 * to help them do so.  So now people expect to use
+ 			 * such nonconformant devices with Linux too; sigh.
+ 			 */
+-			info1 |= max_packet(maxp) << 16;
++			info1 |= maxp << 16;
+ 			info2 |= (FOTG210_TUNE_MULT_HS << 30);
+ 		} else {		/* PIPE_INTERRUPT */
+-			info1 |= max_packet(maxp) << 16;
+-			info2 |= hb_mult(maxp) << 30;
++			info1 |= maxp << 16;
++			info2 |= mult << 30;
+ 		}
+ 		break;
+ 	default:
+@@ -3926,6 +3923,7 @@ static void iso_stream_init(struct fotg210_hcd *fotg210,
+ 	int is_input;
+ 	long bandwidth;
+ 	unsigned multi;
++	struct usb_host_endpoint *ep;
+ 
+ 	/*
+ 	 * this might be a "high bandwidth" highspeed endpoint,
+@@ -3933,14 +3931,14 @@ static void iso_stream_init(struct fotg210_hcd *fotg210,
+ 	 */
+ 	epnum = usb_pipeendpoint(pipe);
+ 	is_input = usb_pipein(pipe) ? USB_DIR_IN : 0;
+-	maxp = usb_maxpacket(dev, pipe, !is_input);
++	ep = usb_pipe_endpoint(dev, pipe);
++	maxp = usb_endpoint_maxp(&ep->desc);
+ 	if (is_input)
+ 		buf1 = (1 << 11);
+ 	else
+ 		buf1 = 0;
+ 
+-	maxp = max_packet(maxp);
+-	multi = hb_mult(maxp);
++	multi = usb_endpoint_maxp_mult(&ep->desc);
+ 	buf1 |= maxp;
+ 	maxp *= multi;
+ 
+@@ -4461,13 +4459,12 @@ static bool itd_complete(struct fotg210_hcd *fotg210, struct fotg210_itd *itd)
+ 
+ 			/* HC need not update length with this error */
+ 			if (!(t & FOTG210_ISOC_BABBLE)) {
+-				desc->actual_length =
+-					fotg210_itdlen(urb, desc, t);
++				desc->actual_length = FOTG210_ITD_LENGTH(t);
+ 				urb->actual_length += desc->actual_length;
+ 			}
+ 		} else if (likely((t & FOTG210_ISOC_ACTIVE) == 0)) {
+ 			desc->status = 0;
+-			desc->actual_length = fotg210_itdlen(urb, desc, t);
++			desc->actual_length = FOTG210_ITD_LENGTH(t);
+ 			urb->actual_length += desc->actual_length;
+ 		} else {
+ 			/* URB was too late */
+diff --git a/drivers/usb/host/fotg210.h b/drivers/usb/host/fotg210.h
+index 6cee40ec65b41..67f59517ebade 100644
+--- a/drivers/usb/host/fotg210.h
++++ b/drivers/usb/host/fotg210.h
+@@ -686,11 +686,6 @@ static inline unsigned fotg210_read_frame_index(struct fotg210_hcd *fotg210)
+ 	return fotg210_readl(fotg210, &fotg210->regs->frame_index);
+ }
+ 
+-#define fotg210_itdlen(urb, desc, t) ({			\
+-	usb_pipein((urb)->pipe) ?				\
+-	(desc)->length - FOTG210_ITD_LENGTH(t) :			\
+-	FOTG210_ITD_LENGTH(t);					\
+-})
+ /*-------------------------------------------------------------------------*/
+ 
+ #endif /* __LINUX_FOTG210_H */
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index a8d97e23f601f..c51391b45207e 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -4666,19 +4666,19 @@ static u16 xhci_calculate_u1_timeout(struct xhci_hcd *xhci,
+ {
+ 	unsigned long long timeout_ns;
+ 
+-	if (xhci->quirks & XHCI_INTEL_HOST)
+-		timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc);
+-	else
+-		timeout_ns = udev->u1_params.sel;
+-
+ 	/* Prevent U1 if service interval is shorter than U1 exit latency */
+ 	if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) {
+-		if (xhci_service_interval_to_ns(desc) <= timeout_ns) {
++		if (xhci_service_interval_to_ns(desc) <= udev->u1_params.mel) {
+ 			dev_dbg(&udev->dev, "Disable U1, ESIT shorter than exit latency\n");
+ 			return USB3_LPM_DISABLED;
+ 		}
+ 	}
+ 
++	if (xhci->quirks & XHCI_INTEL_HOST)
++		timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc);
++	else
++		timeout_ns = udev->u1_params.sel;
++
+ 	/* The U1 timeout is encoded in 1us intervals.
+ 	 * Don't return a timeout of zero, because that's USB3_LPM_DISABLED.
+ 	 */
+@@ -4730,19 +4730,19 @@ static u16 xhci_calculate_u2_timeout(struct xhci_hcd *xhci,
+ {
+ 	unsigned long long timeout_ns;
+ 
+-	if (xhci->quirks & XHCI_INTEL_HOST)
+-		timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc);
+-	else
+-		timeout_ns = udev->u2_params.sel;
+-
+ 	/* Prevent U2 if service interval is shorter than U2 exit latency */
+ 	if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) {
+-		if (xhci_service_interval_to_ns(desc) <= timeout_ns) {
++		if (xhci_service_interval_to_ns(desc) <= udev->u2_params.mel) {
+ 			dev_dbg(&udev->dev, "Disable U2, ESIT shorter than exit latency\n");
+ 			return USB3_LPM_DISABLED;
+ 		}
+ 	}
+ 
++	if (xhci->quirks & XHCI_INTEL_HOST)
++		timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc);
++	else
++		timeout_ns = udev->u2_params.sel;
++
+ 	/* The U2 timeout is encoded in 256us intervals */
+ 	timeout_ns = DIV_ROUND_UP_ULL(timeout_ns, 256 * 1000);
+ 	/* If the necessary timeout value is bigger than what we can set in the
+diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c
+index 5892f3ce0cdc8..ce9fc46c92661 100644
+--- a/drivers/usb/musb/musb_dsps.c
++++ b/drivers/usb/musb/musb_dsps.c
+@@ -890,23 +890,22 @@ static int dsps_probe(struct platform_device *pdev)
+ 	if (!glue->usbss_base)
+ 		return -ENXIO;
+ 
+-	if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) {
+-		ret = dsps_setup_optional_vbus_irq(pdev, glue);
+-		if (ret)
+-			goto err_iounmap;
+-	}
+-
+ 	platform_set_drvdata(pdev, glue);
+ 	pm_runtime_enable(&pdev->dev);
+ 	ret = dsps_create_musb_pdev(glue, pdev);
+ 	if (ret)
+ 		goto err;
+ 
++	if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) {
++		ret = dsps_setup_optional_vbus_irq(pdev, glue);
++		if (ret)
++			goto err;
++	}
++
+ 	return 0;
+ 
+ err:
+ 	pm_runtime_disable(&pdev->dev);
+-err_iounmap:
+ 	iounmap(glue->usbss_base);
+ 	return ret;
+ }
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index 4ba6bcdaa8e9d..b07b2925ff78b 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -455,8 +455,14 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			vhci_hcd->port_status[rhport] &= ~(1 << USB_PORT_FEAT_RESET);
+ 			vhci_hcd->re_timeout = 0;
+ 
++			/*
++			 * A few drivers do usb reset during probe when
++			 * the device could be in VDEV_ST_USED state
++			 */
+ 			if (vhci_hcd->vdev[rhport].ud.status ==
+-			    VDEV_ST_NOTASSIGNED) {
++				VDEV_ST_NOTASSIGNED ||
++			    vhci_hcd->vdev[rhport].ud.status ==
++				VDEV_ST_USED) {
+ 				usbip_dbg_vhci_rh(
+ 					" enable rhport %d (status %u)\n",
+ 					rhport,
+@@ -957,8 +963,32 @@ static void vhci_device_unlink_cleanup(struct vhci_device *vdev)
+ 	spin_lock(&vdev->priv_lock);
+ 
+ 	list_for_each_entry_safe(unlink, tmp, &vdev->unlink_tx, list) {
++		struct urb *urb;
++
++		/* give back urb of unsent unlink request */
+ 		pr_info("unlink cleanup tx %lu\n", unlink->unlink_seqnum);
++
++		urb = pickup_urb_and_free_priv(vdev, unlink->unlink_seqnum);
++		if (!urb) {
++			list_del(&unlink->list);
++			kfree(unlink);
++			continue;
++		}
++
++		urb->status = -ENODEV;
++
++		usb_hcd_unlink_urb_from_ep(hcd, urb);
++
+ 		list_del(&unlink->list);
++
++		spin_unlock(&vdev->priv_lock);
++		spin_unlock_irqrestore(&vhci->lock, flags);
++
++		usb_hcd_giveback_urb(hcd, urb, urb->status);
++
++		spin_lock_irqsave(&vhci->lock, flags);
++		spin_lock(&vdev->priv_lock);
++
+ 		kfree(unlink);
+ 	}
+ 
+diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
+index 67d0bf4efa160..e44bf736e2b22 100644
+--- a/drivers/vfio/Kconfig
++++ b/drivers/vfio/Kconfig
+@@ -29,7 +29,7 @@ menuconfig VFIO
+ 
+ 	  If you don't know what to do here, say N.
+ 
+-menuconfig VFIO_NOIOMMU
++config VFIO_NOIOMMU
+ 	bool "VFIO No-IOMMU support"
+ 	depends on VFIO
+ 	help
+diff --git a/drivers/video/fbdev/asiliantfb.c b/drivers/video/fbdev/asiliantfb.c
+index 3e006da477523..84c56f525889f 100644
+--- a/drivers/video/fbdev/asiliantfb.c
++++ b/drivers/video/fbdev/asiliantfb.c
+@@ -227,6 +227,9 @@ static int asiliantfb_check_var(struct fb_var_screeninfo *var,
+ {
+ 	unsigned long Ftarget, ratio, remainder;
+ 
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	ratio = 1000000 / var->pixclock;
+ 	remainder = 1000000 % var->pixclock;
+ 	Ftarget = 1000000 * ratio + (1000000 * remainder) / var->pixclock;
+diff --git a/drivers/video/fbdev/kyro/fbdev.c b/drivers/video/fbdev/kyro/fbdev.c
+index 8fbde92ae8b9c..25801e8e3f74a 100644
+--- a/drivers/video/fbdev/kyro/fbdev.c
++++ b/drivers/video/fbdev/kyro/fbdev.c
+@@ -372,6 +372,11 @@ static int kyro_dev_overlay_viewport_set(u32 x, u32 y, u32 ulWidth, u32 ulHeight
+ 		/* probably haven't called CreateOverlay yet */
+ 		return -EINVAL;
+ 
++	if (ulWidth == 0 || ulWidth == 0xffffffff ||
++	    ulHeight == 0 || ulHeight == 0xffffffff ||
++	    (x < 2 && ulWidth + 2 == 0))
++		return -EINVAL;
++
+ 	/* Stop Ramdac Output */
+ 	DisableRamdacOutput(deviceInfo.pSTGReg);
+ 
+@@ -394,6 +399,9 @@ static int kyrofb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ {
+ 	struct kyrofb_info *par = info->par;
+ 
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	if (var->bits_per_pixel != 16 && var->bits_per_pixel != 32) {
+ 		printk(KERN_WARNING "kyrofb: depth not supported: %u\n", var->bits_per_pixel);
+ 		return -EINVAL;
+diff --git a/drivers/video/fbdev/riva/fbdev.c b/drivers/video/fbdev/riva/fbdev.c
+index ce55b9d2e862b..7dd621c7afe4c 100644
+--- a/drivers/video/fbdev/riva/fbdev.c
++++ b/drivers/video/fbdev/riva/fbdev.c
+@@ -1084,6 +1084,9 @@ static int rivafb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ 	int mode_valid = 0;
+ 	
+ 	NVTRACE_ENTER();
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	switch (var->bits_per_pixel) {
+ 	case 1 ... 8:
+ 		var->red.offset = var->green.offset = var->blue.offset = 0;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 69c6786a9fdf2..ff3f0638cdb90 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1202,11 +1202,6 @@ static noinline void async_cow_submit(struct btrfs_work *work)
+ 	nr_pages = (async_chunk->end - async_chunk->start + PAGE_SIZE) >>
+ 		PAGE_SHIFT;
+ 
+-	/* atomic_sub_return implies a barrier */
+-	if (atomic_sub_return(nr_pages, &fs_info->async_delalloc_pages) <
+-	    5 * SZ_1M)
+-		cond_wake_up_nomb(&fs_info->async_submit_wait);
+-
+ 	/*
+ 	 * ->inode could be NULL if async_chunk_start has failed to compress,
+ 	 * in which case we don't have anything to submit, yet we need to
+@@ -1215,6 +1210,11 @@ static noinline void async_cow_submit(struct btrfs_work *work)
+ 	 */
+ 	if (async_chunk->inode)
+ 		submit_compressed_extents(async_chunk);
++
++	/* atomic_sub_return implies a barrier */
++	if (atomic_sub_return(nr_pages, &fs_info->async_delalloc_pages) <
++	    5 * SZ_1M)
++		cond_wake_up_nomb(&fs_info->async_submit_wait);
+ }
+ 
+ static noinline void async_cow_free(struct btrfs_work *work)
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index f36928efcf92d..ec25e5eab3499 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -708,7 +708,9 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans,
+ 			 */
+ 			ret = btrfs_lookup_data_extent(fs_info, ins.objectid,
+ 						ins.offset);
+-			if (ret == 0) {
++			if (ret < 0) {
++				goto out;
++			} else if (ret == 0) {
+ 				btrfs_init_generic_ref(&ref,
+ 						BTRFS_ADD_DELAYED_REF,
+ 						ins.objectid, ins.offset, 0);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index d1fccddcf4035..b4fcc48f255b3 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1129,6 +1129,9 @@ static void btrfs_close_one_device(struct btrfs_device *device)
+ 		fs_devices->rw_devices--;
+ 	}
+ 
++	if (device->devid == BTRFS_DEV_REPLACE_DEVID)
++		clear_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state);
++
+ 	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state))
+ 		fs_devices->missing_devices--;
+ 
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index b864c9b9e8df1..678dac8365ed3 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1755,6 +1755,9 @@ struct ceph_cap_flush *ceph_alloc_cap_flush(void)
+ 	struct ceph_cap_flush *cf;
+ 
+ 	cf = kmem_cache_alloc(ceph_cap_flush_cachep, GFP_KERNEL);
++	if (!cf)
++		return NULL;
++
+ 	cf->is_capsnap = false;
+ 	return cf;
+ }
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index 1a0298d1e7cda..d58c5ffeca0d9 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -888,7 +888,7 @@ sess_alloc_buffer(struct sess_data *sess_data, int wct)
+ 	return 0;
+ 
+ out_free_smb_buf:
+-	kfree(smb_buf);
++	cifs_small_buf_release(smb_buf);
+ 	sess_data->iov[0].iov_base = NULL;
+ 	sess_data->iov[0].iov_len = 0;
+ 	sess_data->buf0_type = CIFS_NO_BUFFER;
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index f94b13075ea47..30987ea011f1a 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1308,12 +1308,6 @@ out_destroy_crypt:
+ 
+ 	for (--i; i >= 0; i--)
+ 		fscrypt_finalize_bounce_page(&cc->cpages[i]);
+-	for (i = 0; i < cc->nr_cpages; i++) {
+-		if (!cc->cpages[i])
+-			continue;
+-		f2fs_compress_free_page(cc->cpages[i]);
+-		cc->cpages[i] = NULL;
+-	}
+ out_put_cic:
+ 	kmem_cache_free(cic_entry_slab, cic);
+ out_put_dnode:
+@@ -1324,6 +1318,12 @@ out_unlock_op:
+ 	else
+ 		f2fs_unlock_op(sbi);
+ out_free:
++	for (i = 0; i < cc->nr_cpages; i++) {
++		if (!cc->cpages[i])
++			continue;
++		f2fs_compress_free_page(cc->cpages[i]);
++		cc->cpages[i] = NULL;
++	}
+ 	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
+ 	cc->cpages = NULL;
+ 	return -EAGAIN;
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index cfae2dddb0bae..1b11a42847c48 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -1550,7 +1550,21 @@ next_dnode:
+ 	if (err) {
+ 		if (flag == F2FS_GET_BLOCK_BMAP)
+ 			map->m_pblk = 0;
++
+ 		if (err == -ENOENT) {
++			/*
++			 * There is one exceptional case that read_node_page()
++			 * may return -ENOENT due to filesystem has been
++			 * shutdown or cp_error, so force to convert error
++			 * number to EIO for such case.
++			 */
++			if (map->m_may_create &&
++				(is_sbi_flag_set(sbi, SBI_IS_SHUTDOWN) ||
++				f2fs_cp_error(sbi))) {
++				err = -EIO;
++				goto unlock_out;
++			}
++
+ 			err = 0;
+ 			if (map->m_next_pgofs)
+ 				*map->m_next_pgofs =
+@@ -2205,6 +2219,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ 			continue;
+ 		}
+ 		unlock_page(page);
++		if (for_write)
++			put_page(page);
+ 		cc->rpages[i] = NULL;
+ 		cc->nr_rpages--;
+ 	}
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 4b9ef8bbfa4a9..6694298b1660f 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -938,6 +938,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(d->inode);
+ 	struct blk_plug plug;
+ 	bool readdir_ra = sbi->readdir_ra == 1;
++	bool found_valid_dirent = false;
+ 	int err = 0;
+ 
+ 	bit_pos = ((unsigned long)ctx->pos % d->max);
+@@ -952,13 +953,15 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ 
+ 		de = &d->dentry[bit_pos];
+ 		if (de->name_len == 0) {
++			if (found_valid_dirent || !bit_pos) {
++				printk_ratelimited(
++					"%sF2FS-fs (%s): invalid namelen(0), ino:%u, run fsck to fix.",
++					KERN_WARNING, sbi->sb->s_id,
++					le32_to_cpu(de->ino));
++				set_sbi_flag(sbi, SBI_NEED_FSCK);
++			}
+ 			bit_pos++;
+ 			ctx->pos = start_pos + bit_pos;
+-			printk_ratelimited(
+-				"%sF2FS-fs (%s): invalid namelen(0), ino:%u, run fsck to fix.",
+-				KERN_WARNING, sbi->sb->s_id,
+-				le32_to_cpu(de->ino));
+-			set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 			continue;
+ 		}
+ 
+@@ -1001,6 +1004,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ 			f2fs_ra_node_page(sbi, le32_to_cpu(de->ino));
+ 
+ 		ctx->pos = start_pos + bit_pos;
++		found_valid_dirent = true;
+ 	}
+ out:
+ 	if (readdir_ra)
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 6ee8b1e0e1741..1fbaab1f7aba8 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1080,7 +1080,6 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 		}
+ 
+ 		if (pg_start < pg_end) {
+-			struct address_space *mapping = inode->i_mapping;
+ 			loff_t blk_start, blk_end;
+ 			struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 
+@@ -1092,8 +1091,7 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 			down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 			down_write(&F2FS_I(inode)->i_mmap_sem);
+ 
+-			truncate_inode_pages_range(mapping, blk_start,
+-					blk_end - 1);
++			truncate_pagecache_range(inode, blk_start, blk_end - 1);
+ 
+ 			f2fs_lock_op(sbi);
+ 			ret = f2fs_truncate_hole(inode, pg_start, pg_end);
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index e02affb5c0e79..72f227f6ebad0 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1477,8 +1477,10 @@ next_step:
+ 			int err;
+ 
+ 			if (S_ISREG(inode->i_mode)) {
+-				if (!down_write_trylock(&fi->i_gc_rwsem[READ]))
++				if (!down_write_trylock(&fi->i_gc_rwsem[READ])) {
++					sbi->skipped_gc_rwsem++;
+ 					continue;
++				}
+ 				if (!down_write_trylock(
+ 						&fi->i_gc_rwsem[WRITE])) {
+ 					sbi->skipped_gc_rwsem++;
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 476b2c497d282..de543168b3708 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2206,6 +2206,33 @@ static int f2fs_enable_quotas(struct super_block *sb)
+ 	return 0;
+ }
+ 
++static int f2fs_quota_sync_file(struct f2fs_sb_info *sbi, int type)
++{
++	struct quota_info *dqopt = sb_dqopt(sbi->sb);
++	struct address_space *mapping = dqopt->files[type]->i_mapping;
++	int ret = 0;
++
++	ret = dquot_writeback_dquots(sbi->sb, type);
++	if (ret)
++		goto out;
++
++	ret = filemap_fdatawrite(mapping);
++	if (ret)
++		goto out;
++
++	/* if we are using journalled quota */
++	if (is_journalled_quota(sbi))
++		goto out;
++
++	ret = filemap_fdatawait(mapping);
++
++	truncate_inode_pages(&dqopt->files[type]->i_data, 0);
++out:
++	if (ret)
++		set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
++	return ret;
++}
++
+ int f2fs_quota_sync(struct super_block *sb, int type)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+@@ -2213,57 +2240,42 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ 	int cnt;
+ 	int ret;
+ 
+-	/*
+-	 * do_quotactl
+-	 *  f2fs_quota_sync
+-	 *  down_read(quota_sem)
+-	 *  dquot_writeback_dquots()
+-	 *  f2fs_dquot_commit
+-	 *                            block_operation
+-	 *                            down_read(quota_sem)
+-	 */
+-	f2fs_lock_op(sbi);
+-
+-	down_read(&sbi->quota_sem);
+-	ret = dquot_writeback_dquots(sb, type);
+-	if (ret)
+-		goto out;
+-
+ 	/*
+ 	 * Now when everything is written we can discard the pagecache so
+ 	 * that userspace sees the changes.
+ 	 */
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		struct address_space *mapping;
+ 
+ 		if (type != -1 && cnt != type)
+ 			continue;
+-		if (!sb_has_quota_active(sb, cnt))
+-			continue;
+ 
+-		mapping = dqopt->files[cnt]->i_mapping;
++		if (!sb_has_quota_active(sb, type))
++			return 0;
+ 
+-		ret = filemap_fdatawrite(mapping);
+-		if (ret)
+-			goto out;
++		inode_lock(dqopt->files[cnt]);
+ 
+-		/* if we are using journalled quota */
+-		if (is_journalled_quota(sbi))
+-			continue;
++		/*
++		 * do_quotactl
++		 *  f2fs_quota_sync
++		 *  down_read(quota_sem)
++		 *  dquot_writeback_dquots()
++		 *  f2fs_dquot_commit
++		 *			      block_operation
++		 *			      down_read(quota_sem)
++		 */
++		f2fs_lock_op(sbi);
++		down_read(&sbi->quota_sem);
+ 
+-		ret = filemap_fdatawait(mapping);
+-		if (ret)
+-			set_sbi_flag(F2FS_SB(sb), SBI_QUOTA_NEED_REPAIR);
++		ret = f2fs_quota_sync_file(sbi, cnt);
++
++		up_read(&sbi->quota_sem);
++		f2fs_unlock_op(sbi);
+ 
+-		inode_lock(dqopt->files[cnt]);
+-		truncate_inode_pages(&dqopt->files[cnt]->i_data, 0);
+ 		inode_unlock(dqopt->files[cnt]);
++
++		if (ret)
++			break;
+ 	}
+-out:
+-	if (ret)
+-		set_sbi_flag(F2FS_SB(sb), SBI_QUOTA_NEED_REPAIR);
+-	up_read(&sbi->quota_sem);
+-	f2fs_unlock_op(sbi);
+ 	return ret;
+ }
+ 
+@@ -2898,11 +2910,13 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
+ 		return -EFSCORRUPTED;
+ 	}
+ 
+-	if (le32_to_cpu(raw_super->cp_payload) >
+-				(blocks_per_seg - F2FS_CP_PACKS)) {
+-		f2fs_info(sbi, "Insane cp_payload (%u > %u)",
++	if (le32_to_cpu(raw_super->cp_payload) >=
++				(blocks_per_seg - F2FS_CP_PACKS -
++				NR_CURSEG_PERSIST_TYPE)) {
++		f2fs_info(sbi, "Insane cp_payload (%u >= %u)",
+ 			  le32_to_cpu(raw_super->cp_payload),
+-			  blocks_per_seg - F2FS_CP_PACKS);
++			  blocks_per_seg - F2FS_CP_PACKS -
++			  NR_CURSEG_PERSIST_TYPE);
+ 		return -EFSCORRUPTED;
+ 	}
+ 
+@@ -2938,6 +2952,7 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 	unsigned int cp_pack_start_sum, cp_payload;
+ 	block_t user_block_count, valid_user_blocks;
+ 	block_t avail_node_count, valid_node_count;
++	unsigned int nat_blocks, nat_bits_bytes, nat_bits_blocks;
+ 	int i, j;
+ 
+ 	total = le32_to_cpu(raw_super->segment_count);
+@@ -3058,6 +3073,17 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 		return 1;
+ 	}
+ 
++	nat_blocks = nat_segs << log_blocks_per_seg;
++	nat_bits_bytes = nat_blocks / BITS_PER_BYTE;
++	nat_bits_blocks = F2FS_BLK_ALIGN((nat_bits_bytes << 1) + 8);
++	if (__is_set_ckpt_flags(ckpt, CP_NAT_BITS_FLAG) &&
++		(cp_payload + F2FS_CP_PACKS +
++		NR_CURSEG_PERSIST_TYPE + nat_bits_blocks >= blocks_per_seg)) {
++		f2fs_warn(sbi, "Insane cp_payload: %u, nat_bits_blocks: %u)",
++			  cp_payload, nat_bits_blocks);
++		return -EFSCORRUPTED;
++	}
++
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+ 		f2fs_err(sbi, "A bug case: need to run fsck");
+ 		return 1;
+diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
+index 751bc5b1cddf9..6104f627cc712 100644
+--- a/fs/fscache/cookie.c
++++ b/fs/fscache/cookie.c
+@@ -74,10 +74,8 @@ void fscache_free_cookie(struct fscache_cookie *cookie)
+ static int fscache_set_key(struct fscache_cookie *cookie,
+ 			   const void *index_key, size_t index_key_len)
+ {
+-	unsigned long long h;
+ 	u32 *buf;
+ 	int bufs;
+-	int i;
+ 
+ 	bufs = DIV_ROUND_UP(index_key_len, sizeof(*buf));
+ 
+@@ -91,17 +89,7 @@ static int fscache_set_key(struct fscache_cookie *cookie,
+ 	}
+ 
+ 	memcpy(buf, index_key, index_key_len);
+-
+-	/* Calculate a hash and combine this with the length in the first word
+-	 * or first half word
+-	 */
+-	h = (unsigned long)cookie->parent;
+-	h += index_key_len + cookie->type;
+-
+-	for (i = 0; i < bufs; i++)
+-		h += buf[i];
+-
+-	cookie->key_hash = h ^ (h >> 32);
++	cookie->key_hash = fscache_hash(0, buf, bufs);
+ 	return 0;
+ }
+ 
+diff --git a/fs/fscache/internal.h b/fs/fscache/internal.h
+index 08e91efbce538..64aa552b296d7 100644
+--- a/fs/fscache/internal.h
++++ b/fs/fscache/internal.h
+@@ -97,6 +97,8 @@ extern struct workqueue_struct *fscache_object_wq;
+ extern struct workqueue_struct *fscache_op_wq;
+ DECLARE_PER_CPU(wait_queue_head_t, fscache_object_cong_wait);
+ 
++extern unsigned int fscache_hash(unsigned int salt, unsigned int *data, unsigned int n);
++
+ static inline bool fscache_object_congested(void)
+ {
+ 	return workqueue_congested(WORK_CPU_UNBOUND, fscache_object_wq);
+diff --git a/fs/fscache/main.c b/fs/fscache/main.c
+index c1e6cc9091aac..4207f98e405fd 100644
+--- a/fs/fscache/main.c
++++ b/fs/fscache/main.c
+@@ -93,6 +93,45 @@ static struct ctl_table fscache_sysctls_root[] = {
+ };
+ #endif
+ 
++/*
++ * Mixing scores (in bits) for (7,20):
++ * Input delta: 1-bit      2-bit
++ * 1 round:     330.3     9201.6
++ * 2 rounds:   1246.4    25475.4
++ * 3 rounds:   1907.1    31295.1
++ * 4 rounds:   2042.3    31718.6
++ * Perfect:    2048      31744
++ *            (32*64)   (32*31/2 * 64)
++ */
++#define HASH_MIX(x, y, a)	\
++	(	x ^= (a),	\
++	y ^= x,	x = rol32(x, 7),\
++	x += y,	y = rol32(y,20),\
++	y *= 9			)
++
++static inline unsigned int fold_hash(unsigned long x, unsigned long y)
++{
++	/* Use arch-optimized multiply if one exists */
++	return __hash_32(y ^ __hash_32(x));
++}
++
++/*
++ * Generate a hash.  This is derived from full_name_hash(), but we want to be
++ * sure it is arch independent and that it doesn't change as bits of the
++ * computed hash value might appear on disk.  The caller also guarantees that
++ * the hashed data will be a series of aligned 32-bit words.
++ */
++unsigned int fscache_hash(unsigned int salt, unsigned int *data, unsigned int n)
++{
++	unsigned int a, x = 0, y = salt;
++
++	for (; n; n--) {
++		a = *data++;
++		HASH_MIX(x, y, a);
++	}
++	return fold_hash(x, y);
++}
++
+ /*
+  * initialise the fs caching module
+  */
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index 3faa421568b0a..bf539eab92c6f 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -623,16 +623,13 @@ static int freeze_go_xmote_bh(struct gfs2_glock *gl, struct gfs2_holder *gh)
+ 		j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
+ 
+ 		error = gfs2_find_jhead(sdp->sd_jdesc, &head, false);
+-		if (error)
+-			gfs2_consist(sdp);
+-		if (!(head.lh_flags & GFS2_LOG_HEAD_UNMOUNT))
+-			gfs2_consist(sdp);
+-
+-		/*  Initialize some head of the log stuff  */
+-		if (!gfs2_withdrawn(sdp)) {
+-			sdp->sd_log_sequence = head.lh_sequence + 1;
+-			gfs2_log_pointers_init(sdp, head.lh_blkno);
+-		}
++		if (gfs2_assert_withdraw_delayed(sdp, !error))
++			return error;
++		if (gfs2_assert_withdraw_delayed(sdp, head.lh_flags &
++						 GFS2_LOG_HEAD_UNMOUNT))
++			return -EIO;
++		sdp->sd_log_sequence = head.lh_sequence + 1;
++		gfs2_log_pointers_init(sdp, head.lh_blkno);
+ 	}
+ 	return 0;
+ }
+diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c
+index 153272f82984b..5564aa8b45929 100644
+--- a/fs/gfs2/lock_dlm.c
++++ b/fs/gfs2/lock_dlm.c
+@@ -296,6 +296,11 @@ static void gdlm_put_lock(struct gfs2_glock *gl)
+ 	gfs2_sbstats_inc(gl, GFS2_LKS_DCOUNT);
+ 	gfs2_update_request_times(gl);
+ 
++	/* don't want to call dlm if we've unmounted the lock protocol */
++	if (test_bit(DFL_UNMOUNT, &ls->ls_recover_flags)) {
++		gfs2_glock_free(gl);
++		return;
++	}
+ 	/* don't want to skip dlm_unlock writing the lvb when lock has one */
+ 
+ 	if (test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags) &&
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index 8bb17b6d4de3c..3d5fc76b92d01 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -895,7 +895,7 @@ append:
+ static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
+ {
+ 	struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
+-	int work_flags;
++	bool do_wake;
+ 	unsigned long flags;
+ 
+ 	/*
+@@ -909,14 +909,14 @@ static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
+ 		return;
+ 	}
+ 
+-	work_flags = work->flags;
+ 	raw_spin_lock_irqsave(&wqe->lock, flags);
+ 	io_wqe_insert_work(wqe, work);
+ 	wqe->flags &= ~IO_WQE_FLAG_STALLED;
++	do_wake = (work->flags & IO_WQ_WORK_CONCURRENT) ||
++			!atomic_read(&acct->nr_running);
+ 	raw_spin_unlock_irqrestore(&wqe->lock, flags);
+ 
+-	if ((work_flags & IO_WQ_WORK_CONCURRENT) ||
+-	    !atomic_read(&acct->nr_running))
++	if (do_wake)
+ 		io_wqe_wake_worker(wqe, acct);
+ }
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 2009d1cda606c..d0089039fee79 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1498,6 +1498,8 @@ static void io_kill_timeout(struct io_kiocb *req, int status)
+ 
+ 	ret = hrtimer_try_to_cancel(&io->timer);
+ 	if (ret != -1) {
++		if (status)
++			req_set_fail_links(req);
+ 		atomic_set(&req->ctx->cq_timeouts,
+ 			atomic_read(&req->ctx->cq_timeouts) + 1);
+ 		list_del_init(&req->timeout.list);
+@@ -3126,7 +3128,7 @@ static ssize_t __io_import_iovec(int rw, struct io_kiocb *req,
+ 
+ 		ret = import_single_range(rw, buf, sqe_len, *iovec, iter);
+ 		*iovec = NULL;
+-		return ret < 0 ? ret : sqe_len;
++		return ret;
+ 	}
+ 
+ 	if (req->flags & REQ_F_BUFFER_SELECT) {
+@@ -3152,7 +3154,7 @@ static ssize_t io_import_iovec(int rw, struct io_kiocb *req,
+ 	if (!iorw)
+ 		return __io_import_iovec(rw, req, iovec, iter, needs_lock);
+ 	*iovec = NULL;
+-	return iov_iter_count(&iorw->iter);
++	return 0;
+ }
+ 
+ static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
+@@ -3411,7 +3413,6 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
+ 	struct iov_iter __iter, *iter = &__iter;
+ 	struct io_async_rw *rw = req->async_data;
+ 	ssize_t io_size, ret, ret2;
+-	size_t iov_count;
+ 	bool no_async;
+ 
+ 	if (rw)
+@@ -3420,8 +3421,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
+ 	ret = io_import_iovec(READ, req, &iovec, iter, !force_nonblock);
+ 	if (ret < 0)
+ 		return ret;
+-	iov_count = iov_iter_count(iter);
+-	io_size = ret;
++	io_size = iov_iter_count(iter);
+ 	req->result = io_size;
+ 	ret = 0;
+ 
+@@ -3437,7 +3437,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
+ 	if (no_async)
+ 		goto copy_iov;
+ 
+-	ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), iov_count);
++	ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), io_size);
+ 	if (unlikely(ret))
+ 		goto out_free;
+ 
+@@ -3456,7 +3456,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
+ 		if (req->file->f_flags & O_NONBLOCK)
+ 			goto done;
+ 		/* some cases will consume bytes even on error returns */
+-		iov_iter_revert(iter, iov_count - iov_iter_count(iter));
++		iov_iter_revert(iter, io_size - iov_iter_count(iter));
+ 		ret = 0;
+ 		goto copy_iov;
+ 	} else if (ret < 0) {
+@@ -3540,7 +3540,6 @@ static int io_write(struct io_kiocb *req, bool force_nonblock,
+ 	struct kiocb *kiocb = &req->rw.kiocb;
+ 	struct iov_iter __iter, *iter = &__iter;
+ 	struct io_async_rw *rw = req->async_data;
+-	size_t iov_count;
+ 	ssize_t ret, ret2, io_size;
+ 
+ 	if (rw)
+@@ -3549,8 +3548,7 @@ static int io_write(struct io_kiocb *req, bool force_nonblock,
+ 	ret = io_import_iovec(WRITE, req, &iovec, iter, !force_nonblock);
+ 	if (ret < 0)
+ 		return ret;
+-	iov_count = iov_iter_count(iter);
+-	io_size = ret;
++	io_size = iov_iter_count(iter);
+ 	req->result = io_size;
+ 
+ 	/* Ensure we clear previously set non-block flag */
+@@ -3568,7 +3566,7 @@ static int io_write(struct io_kiocb *req, bool force_nonblock,
+ 	    (req->flags & REQ_F_ISREG))
+ 		goto copy_iov;
+ 
+-	ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), iov_count);
++	ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), io_size);
+ 	if (unlikely(ret))
+ 		goto out_free;
+ 
+@@ -3611,7 +3609,7 @@ done:
+ 	} else {
+ copy_iov:
+ 		/* some cases will consume bytes even on error returns */
+-		iov_iter_revert(iter, iov_count - iov_iter_count(iter));
++		iov_iter_revert(iter, io_size - iov_iter_count(iter));
+ 		ret = io_setup_async_rw(req, iovec, inline_vecs, iter, false);
+ 		if (!ret)
+ 			return -EAGAIN;
+@@ -3746,7 +3744,8 @@ static int io_prep_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
++	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
++		     sqe->splice_fd_in))
+ 		return -EINVAL;
+ 
+ 	req->sync.flags = READ_ONCE(sqe->fsync_flags);
+@@ -3779,7 +3778,8 @@ static int io_fsync(struct io_kiocb *req, bool force_nonblock)
+ static int io_fallocate_prep(struct io_kiocb *req,
+ 			     const struct io_uring_sqe *sqe)
+ {
+-	if (sqe->ioprio || sqe->buf_index || sqe->rw_flags)
++	if (sqe->ioprio || sqe->buf_index || sqe->rw_flags ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+@@ -3810,7 +3810,7 @@ static int __io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
+ 	const char __user *fname;
+ 	int ret;
+ 
+-	if (unlikely(sqe->ioprio || sqe->buf_index))
++	if (unlikely(sqe->ioprio || sqe->buf_index || sqe->splice_fd_in))
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
+ 		return -EBADF;
+@@ -3926,7 +3926,8 @@ static int io_remove_buffers_prep(struct io_kiocb *req,
+ 	struct io_provide_buf *p = &req->pbuf;
+ 	u64 tmp;
+ 
+-	if (sqe->ioprio || sqe->rw_flags || sqe->addr || sqe->len || sqe->off)
++	if (sqe->ioprio || sqe->rw_flags || sqe->addr || sqe->len || sqe->off ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	tmp = READ_ONCE(sqe->fd);
+@@ -4002,7 +4003,7 @@ static int io_provide_buffers_prep(struct io_kiocb *req,
+ 	struct io_provide_buf *p = &req->pbuf;
+ 	u64 tmp;
+ 
+-	if (sqe->ioprio || sqe->rw_flags)
++	if (sqe->ioprio || sqe->rw_flags || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	tmp = READ_ONCE(sqe->fd);
+@@ -4095,7 +4096,7 @@ static int io_epoll_ctl_prep(struct io_kiocb *req,
+ 			     const struct io_uring_sqe *sqe)
+ {
+ #if defined(CONFIG_EPOLL)
+-	if (sqe->ioprio || sqe->buf_index)
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL)))
+ 		return -EINVAL;
+@@ -4141,7 +4142,7 @@ static int io_epoll_ctl(struct io_kiocb *req, bool force_nonblock,
+ static int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ #if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
+-	if (sqe->ioprio || sqe->buf_index || sqe->off)
++	if (sqe->ioprio || sqe->buf_index || sqe->off || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+@@ -4176,7 +4177,7 @@ static int io_madvise(struct io_kiocb *req, bool force_nonblock)
+ 
+ static int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+-	if (sqe->ioprio || sqe->buf_index || sqe->addr)
++	if (sqe->ioprio || sqe->buf_index || sqe->addr || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+@@ -4214,7 +4215,7 @@ static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ 	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL)))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index)
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (req->flags & REQ_F_FIXED_FILE)
+ 		return -EBADF;
+@@ -4261,7 +4262,7 @@ static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+ 		return -EINVAL;
+ 	if (sqe->ioprio || sqe->off || sqe->addr || sqe->len ||
+-	    sqe->rw_flags || sqe->buf_index)
++	    sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (req->flags & REQ_F_FIXED_FILE)
+ 		return -EBADF;
+@@ -4317,7 +4318,8 @@ static int io_prep_sfr(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
++	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
++		     sqe->splice_fd_in))
+ 		return -EINVAL;
+ 
+ 	req->sync.off = READ_ONCE(sqe->off);
+@@ -4760,7 +4762,7 @@ static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->len || sqe->buf_index)
++	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
+@@ -4801,7 +4803,8 @@ static int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags)
++	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	conn->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
+@@ -5553,7 +5556,8 @@ static int io_timeout_remove_prep(struct io_kiocb *req,
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index || sqe->len || sqe->timeout_flags)
++	if (sqe->ioprio || sqe->buf_index || sqe->len || sqe->timeout_flags |
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	req->timeout_rem.addr = READ_ONCE(sqe->addr);
+@@ -5590,7 +5594,8 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index || sqe->len != 1)
++	if (sqe->ioprio || sqe->buf_index || sqe->len != 1 ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (off && is_timeout_link)
+ 		return -EINVAL;
+@@ -5734,7 +5739,8 @@ static int io_async_cancel_prep(struct io_kiocb *req,
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags)
++	if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	req->cancel.addr = READ_ONCE(sqe->addr);
+@@ -7383,7 +7389,7 @@ static int io_sqe_alloc_file_tables(struct fixed_file_data *file_data,
+ 
+ 		this_files = min(nr_files, IORING_MAX_FILES_TABLE);
+ 		table->files = kcalloc(this_files, sizeof(struct file *),
+-					GFP_KERNEL);
++					GFP_KERNEL_ACCOUNT);
+ 		if (!table->files)
+ 			break;
+ 		nr_files -= this_files;
+@@ -7579,8 +7585,10 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ 		return -EINVAL;
+ 	if (nr_args > IORING_MAX_FIXED_FILES)
+ 		return -EMFILE;
++	if (nr_args > rlimit(RLIMIT_NOFILE))
++		return -EMFILE;
+ 
+-	file_data = kzalloc(sizeof(*ctx->file_data), GFP_KERNEL);
++	file_data = kzalloc(sizeof(*ctx->file_data), GFP_KERNEL_ACCOUNT);
+ 	if (!file_data)
+ 		return -ENOMEM;
+ 	file_data->ctx = ctx;
+@@ -7590,7 +7598,7 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ 
+ 	nr_tables = DIV_ROUND_UP(nr_args, IORING_MAX_FILES_TABLE);
+ 	file_data->table = kcalloc(nr_tables, sizeof(*file_data->table),
+-				   GFP_KERNEL);
++				   GFP_KERNEL_ACCOUNT);
+ 	if (!file_data->table)
+ 		goto out_free;
+ 
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index 10cc7979ce380..caed9d98c64aa 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1045,7 +1045,7 @@ iomap_finish_page_writeback(struct inode *inode, struct page *page,
+ 
+ 	if (error) {
+ 		SetPageError(page);
+-		mapping_set_error(inode->i_mapping, -EIO);
++		mapping_set_error(inode->i_mapping, error);
+ 	}
+ 
+ 	WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop);
+diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
+index 498cb70c2c0d0..273a81971ed57 100644
+--- a/fs/lockd/svclock.c
++++ b/fs/lockd/svclock.c
+@@ -395,28 +395,10 @@ nlmsvc_release_lockowner(struct nlm_lock *lock)
+ 		nlmsvc_put_lockowner(lock->fl.fl_owner);
+ }
+ 
+-static void nlmsvc_locks_copy_lock(struct file_lock *new, struct file_lock *fl)
+-{
+-	struct nlm_lockowner *nlm_lo = (struct nlm_lockowner *)fl->fl_owner;
+-	new->fl_owner = nlmsvc_get_lockowner(nlm_lo);
+-}
+-
+-static void nlmsvc_locks_release_private(struct file_lock *fl)
+-{
+-	nlmsvc_put_lockowner((struct nlm_lockowner *)fl->fl_owner);
+-}
+-
+-static const struct file_lock_operations nlmsvc_lock_ops = {
+-	.fl_copy_lock = nlmsvc_locks_copy_lock,
+-	.fl_release_private = nlmsvc_locks_release_private,
+-};
+-
+ void nlmsvc_locks_init_private(struct file_lock *fl, struct nlm_host *host,
+ 						pid_t pid)
+ {
+ 	fl->fl_owner = nlmsvc_find_lockowner(host, pid);
+-	if (fl->fl_owner != NULL)
+-		fl->fl_ops = &nlmsvc_lock_ops;
+ }
+ 
+ /*
+@@ -788,9 +770,21 @@ nlmsvc_notify_blocked(struct file_lock *fl)
+ 	printk(KERN_WARNING "lockd: notification for unknown block!\n");
+ }
+ 
++static fl_owner_t nlmsvc_get_owner(fl_owner_t owner)
++{
++	return nlmsvc_get_lockowner(owner);
++}
++
++static void nlmsvc_put_owner(fl_owner_t owner)
++{
++	nlmsvc_put_lockowner(owner);
++}
++
+ const struct lock_manager_operations nlmsvc_lock_operations = {
+ 	.lm_notify = nlmsvc_notify_blocked,
+ 	.lm_grant = nlmsvc_grant_deferred,
++	.lm_get_owner = nlmsvc_get_owner,
++	.lm_put_owner = nlmsvc_put_owner,
+ };
+ 
+ /*
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 371665e0c154c..5370e082aded5 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -335,7 +335,7 @@ static bool pnfs_seqid_is_newer(u32 s1, u32 s2)
+ 
+ static void pnfs_barrier_update(struct pnfs_layout_hdr *lo, u32 newseq)
+ {
+-	if (pnfs_seqid_is_newer(newseq, lo->plh_barrier))
++	if (pnfs_seqid_is_newer(newseq, lo->plh_barrier) || !lo->plh_barrier)
+ 		lo->plh_barrier = newseq;
+ }
+ 
+@@ -347,11 +347,15 @@ pnfs_set_plh_return_info(struct pnfs_layout_hdr *lo, enum pnfs_iomode iomode,
+ 		iomode = IOMODE_ANY;
+ 	lo->plh_return_iomode = iomode;
+ 	set_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags);
+-	if (seq != 0) {
+-		WARN_ON_ONCE(lo->plh_return_seq != 0 && lo->plh_return_seq != seq);
++	/*
++	 * We must set lo->plh_return_seq to avoid livelocks with
++	 * pnfs_layout_need_return()
++	 */
++	if (seq == 0)
++		seq = be32_to_cpu(lo->plh_stateid.seqid);
++	if (!lo->plh_return_seq || pnfs_seqid_is_newer(seq, lo->plh_return_seq))
+ 		lo->plh_return_seq = seq;
+-		pnfs_barrier_update(lo, seq);
+-	}
++	pnfs_barrier_update(lo, seq);
+ }
+ 
+ static void
+@@ -1000,7 +1004,7 @@ pnfs_layout_stateid_blocked(const struct pnfs_layout_hdr *lo,
+ {
+ 	u32 seqid = be32_to_cpu(stateid->seqid);
+ 
+-	return !pnfs_seqid_is_newer(seqid, lo->plh_barrier) && lo->plh_barrier;
++	return lo->plh_barrier && pnfs_seqid_is_newer(lo->plh_barrier, seqid);
+ }
+ 
+ /* lget is set to 1 if called from inside send_layoutget call chain */
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 142aac9b63a89..0313390fa4b44 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -6855,8 +6855,7 @@ out:
+ /*
+  * The NFSv4 spec allows a client to do a LOCKT without holding an OPEN,
+  * so we do a temporary open here just to get an open file to pass to
+- * vfs_test_lock.  (Arguably perhaps test_lock should be done with an
+- * inode operation.)
++ * vfs_test_lock.
+  */
+ static __be32 nfsd_test_lock(struct svc_rqst *rqstp, struct svc_fh *fhp, struct file_lock *lock)
+ {
+@@ -6871,7 +6870,9 @@ static __be32 nfsd_test_lock(struct svc_rqst *rqstp, struct svc_fh *fhp, struct
+ 							NFSD_MAY_READ));
+ 	if (err)
+ 		goto out;
++	lock->fl_file = nf->nf_file;
+ 	err = nfserrno(vfs_test_lock(nf->nf_file, lock));
++	lock->fl_file = NULL;
+ out:
+ 	fh_unlock(fhp);
+ 	nfsd_file_put(nf);
+diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
+index 1192c99536200..c3af99e94f1d1 100644
+--- a/fs/notify/fanotify/fanotify.c
++++ b/fs/notify/fanotify/fanotify.c
+@@ -129,11 +129,15 @@ static bool fanotify_should_merge(struct fsnotify_event *old_fsn,
+ 	return false;
+ }
+ 
++/* Limit event merges to limit CPU overhead per event */
++#define FANOTIFY_MAX_MERGE_EVENTS 128
++
+ /* and the list better be locked by something too! */
+ static int fanotify_merge(struct list_head *list, struct fsnotify_event *event)
+ {
+ 	struct fsnotify_event *test_event;
+ 	struct fanotify_event *new;
++	int i = 0;
+ 
+ 	pr_debug("%s: list=%p event=%p\n", __func__, list, event);
+ 	new = FANOTIFY_E(event);
+@@ -147,6 +151,8 @@ static int fanotify_merge(struct list_head *list, struct fsnotify_event *event)
+ 		return 0;
+ 
+ 	list_for_each_entry_reverse(test_event, list, list) {
++		if (++i > FANOTIFY_MAX_MERGE_EVENTS)
++			break;
+ 		if (fanotify_should_merge(test_event, event)) {
+ 			FANOTIFY_E(test_event)->mask |= new->mask;
+ 			return 1;
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index d1efa3a5a5032..08b595c526d74 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -542,8 +542,10 @@ static int ovl_create_over_whiteout(struct dentry *dentry, struct inode *inode,
+ 			goto out_cleanup;
+ 	}
+ 	err = ovl_instantiate(dentry, inode, newdentry, hardlink);
+-	if (err)
+-		goto out_cleanup;
++	if (err) {
++		ovl_cleanup(udir, newdentry);
++		dput(newdentry);
++	}
+ out_dput:
+ 	dput(upper);
+ out_unlock:
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 3d181b1a6d567..17397c7532f12 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -32,11 +32,6 @@ int sysctl_unprivileged_userfaultfd __read_mostly = 1;
+ 
+ static struct kmem_cache *userfaultfd_ctx_cachep __read_mostly;
+ 
+-enum userfaultfd_state {
+-	UFFD_STATE_WAIT_API,
+-	UFFD_STATE_RUNNING,
+-};
+-
+ /*
+  * Start with fault_pending_wqh and fault_wqh so they're more likely
+  * to be in the same cacheline.
+@@ -68,8 +63,6 @@ struct userfaultfd_ctx {
+ 	unsigned int flags;
+ 	/* features requested from the userspace */
+ 	unsigned int features;
+-	/* state machine */
+-	enum userfaultfd_state state;
+ 	/* released */
+ 	bool released;
+ 	/* memory mappings are changing because of non-cooperative event */
+@@ -103,6 +96,14 @@ struct userfaultfd_wake_range {
+ 	unsigned long len;
+ };
+ 
++/* internal indication that UFFD_API ioctl was successfully executed */
++#define UFFD_FEATURE_INITIALIZED		(1u << 31)
++
++static bool userfaultfd_is_initialized(struct userfaultfd_ctx *ctx)
++{
++	return ctx->features & UFFD_FEATURE_INITIALIZED;
++}
++
+ static int userfaultfd_wake_function(wait_queue_entry_t *wq, unsigned mode,
+ 				     int wake_flags, void *key)
+ {
+@@ -659,7 +660,6 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
+ 
+ 		refcount_set(&ctx->refcount, 1);
+ 		ctx->flags = octx->flags;
+-		ctx->state = UFFD_STATE_RUNNING;
+ 		ctx->features = octx->features;
+ 		ctx->released = false;
+ 		ctx->mmap_changing = false;
+@@ -936,38 +936,33 @@ static __poll_t userfaultfd_poll(struct file *file, poll_table *wait)
+ 
+ 	poll_wait(file, &ctx->fd_wqh, wait);
+ 
+-	switch (ctx->state) {
+-	case UFFD_STATE_WAIT_API:
++	if (!userfaultfd_is_initialized(ctx))
+ 		return EPOLLERR;
+-	case UFFD_STATE_RUNNING:
+-		/*
+-		 * poll() never guarantees that read won't block.
+-		 * userfaults can be waken before they're read().
+-		 */
+-		if (unlikely(!(file->f_flags & O_NONBLOCK)))
+-			return EPOLLERR;
+-		/*
+-		 * lockless access to see if there are pending faults
+-		 * __pollwait last action is the add_wait_queue but
+-		 * the spin_unlock would allow the waitqueue_active to
+-		 * pass above the actual list_add inside
+-		 * add_wait_queue critical section. So use a full
+-		 * memory barrier to serialize the list_add write of
+-		 * add_wait_queue() with the waitqueue_active read
+-		 * below.
+-		 */
+-		ret = 0;
+-		smp_mb();
+-		if (waitqueue_active(&ctx->fault_pending_wqh))
+-			ret = EPOLLIN;
+-		else if (waitqueue_active(&ctx->event_wqh))
+-			ret = EPOLLIN;
+ 
+-		return ret;
+-	default:
+-		WARN_ON_ONCE(1);
++	/*
++	 * poll() never guarantees that read won't block.
++	 * userfaults can be waken before they're read().
++	 */
++	if (unlikely(!(file->f_flags & O_NONBLOCK)))
+ 		return EPOLLERR;
+-	}
++	/*
++	 * lockless access to see if there are pending faults
++	 * __pollwait last action is the add_wait_queue but
++	 * the spin_unlock would allow the waitqueue_active to
++	 * pass above the actual list_add inside
++	 * add_wait_queue critical section. So use a full
++	 * memory barrier to serialize the list_add write of
++	 * add_wait_queue() with the waitqueue_active read
++	 * below.
++	 */
++	ret = 0;
++	smp_mb();
++	if (waitqueue_active(&ctx->fault_pending_wqh))
++		ret = EPOLLIN;
++	else if (waitqueue_active(&ctx->event_wqh))
++		ret = EPOLLIN;
++
++	return ret;
+ }
+ 
+ static const struct file_operations userfaultfd_fops;
+@@ -1161,7 +1156,7 @@ static ssize_t userfaultfd_read(struct file *file, char __user *buf,
+ 	struct uffd_msg msg;
+ 	int no_wait = file->f_flags & O_NONBLOCK;
+ 
+-	if (ctx->state == UFFD_STATE_WAIT_API)
++	if (!userfaultfd_is_initialized(ctx))
+ 		return -EINVAL;
+ 
+ 	for (;;) {
+@@ -1816,9 +1811,10 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx,
+ static inline unsigned int uffd_ctx_features(__u64 user_features)
+ {
+ 	/*
+-	 * For the current set of features the bits just coincide
++	 * For the current set of features the bits just coincide. Set
++	 * UFFD_FEATURE_INITIALIZED to mark the features as enabled.
+ 	 */
+-	return (unsigned int)user_features;
++	return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED;
+ }
+ 
+ /*
+@@ -1831,12 +1827,10 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
+ {
+ 	struct uffdio_api uffdio_api;
+ 	void __user *buf = (void __user *)arg;
++	unsigned int ctx_features;
+ 	int ret;
+ 	__u64 features;
+ 
+-	ret = -EINVAL;
+-	if (ctx->state != UFFD_STATE_WAIT_API)
+-		goto out;
+ 	ret = -EFAULT;
+ 	if (copy_from_user(&uffdio_api, buf, sizeof(uffdio_api)))
+ 		goto out;
+@@ -1853,9 +1847,13 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
+ 	ret = -EFAULT;
+ 	if (copy_to_user(buf, &uffdio_api, sizeof(uffdio_api)))
+ 		goto out;
+-	ctx->state = UFFD_STATE_RUNNING;
++
+ 	/* only enable the requested features for this uffd context */
+-	ctx->features = uffd_ctx_features(features);
++	ctx_features = uffd_ctx_features(features);
++	ret = -EINVAL;
++	if (cmpxchg(&ctx->features, 0, ctx_features) != 0)
++		goto err_out;
++
+ 	ret = 0;
+ out:
+ 	return ret;
+@@ -1872,7 +1870,7 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd,
+ 	int ret = -EINVAL;
+ 	struct userfaultfd_ctx *ctx = file->private_data;
+ 
+-	if (cmd != UFFDIO_API && ctx->state == UFFD_STATE_WAIT_API)
++	if (cmd != UFFDIO_API && !userfaultfd_is_initialized(ctx))
+ 		return -EINVAL;
+ 
+ 	switch(cmd) {
+@@ -1976,7 +1974,6 @@ SYSCALL_DEFINE1(userfaultfd, int, flags)
+ 	refcount_set(&ctx->refcount, 1);
+ 	ctx->flags = flags;
+ 	ctx->features = 0;
+-	ctx->state = UFFD_STATE_WAIT_API;
+ 	ctx->released = false;
+ 	ctx->mmap_changing = false;
+ 	ctx->mm = current->mm;
+diff --git a/include/crypto/public_key.h b/include/crypto/public_key.h
+index 948c5203ca9c6..f5bd80858fc51 100644
+--- a/include/crypto/public_key.h
++++ b/include/crypto/public_key.h
+@@ -39,9 +39,9 @@ extern void public_key_free(struct public_key *key);
+ struct public_key_signature {
+ 	struct asymmetric_key_id *auth_ids[2];
+ 	u8 *s;			/* Signature */
+-	u32 s_size;		/* Number of bytes in signature */
+ 	u8 *digest;
+-	u8 digest_size;		/* Number of bytes in digest */
++	u32 s_size;		/* Number of bytes in signature */
++	u32 digest_size;	/* Number of bytes in digest */
+ 	const char *pkey_algo;
+ 	const char *hash_algo;
+ 	const char *encoding;
+diff --git a/include/drm/drm_auth.h b/include/drm/drm_auth.h
+index 6bf8b2b789919..f99d3417f3042 100644
+--- a/include/drm/drm_auth.h
++++ b/include/drm/drm_auth.h
+@@ -107,6 +107,7 @@ struct drm_master {
+ };
+ 
+ struct drm_master *drm_master_get(struct drm_master *master);
++struct drm_master *drm_file_get_master(struct drm_file *file_priv);
+ void drm_master_put(struct drm_master **master);
+ bool drm_is_current_master(struct drm_file *fpriv);
+ 
+diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
+index 716990bace104..42d04607d091f 100644
+--- a/include/drm/drm_file.h
++++ b/include/drm/drm_file.h
+@@ -226,15 +226,27 @@ struct drm_file {
+ 	/**
+ 	 * @master:
+ 	 *
+-	 * Master this node is currently associated with. Only relevant if
+-	 * drm_is_primary_client() returns true. Note that this only
+-	 * matches &drm_device.master if the master is the currently active one.
++	 * Master this node is currently associated with. Protected by struct
++	 * &drm_device.master_mutex, and serialized by @master_lookup_lock.
++	 *
++	 * Only relevant if drm_is_primary_client() returns true. Note that
++	 * this only matches &drm_device.master if the master is the currently
++	 * active one.
++	 *
++	 * When dereferencing this pointer, either hold struct
++	 * &drm_device.master_mutex for the duration of the pointer's use, or
++	 * use drm_file_get_master() if struct &drm_device.master_mutex is not
++	 * currently held and there is no other need to hold it. This prevents
++	 * @master from being freed during use.
+ 	 *
+ 	 * See also @authentication and @is_master and the :ref:`section on
+ 	 * primary nodes and authentication <drm_primary_node>`.
+ 	 */
+ 	struct drm_master *master;
+ 
++	/** @master_lock: Serializes @master. */
++	spinlock_t master_lookup_lock;
++
+ 	/** @pid: Process that opened this file. */
+ 	struct pid *pid;
+ 
+diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
+index 6408b446051f8..b98291d391f34 100644
+--- a/include/linux/ethtool.h
++++ b/include/linux/ethtool.h
+@@ -17,8 +17,6 @@
+ #include <linux/compat.h>
+ #include <uapi/linux/ethtool.h>
+ 
+-#ifdef CONFIG_COMPAT
+-
+ struct compat_ethtool_rx_flow_spec {
+ 	u32		flow_type;
+ 	union ethtool_flow_union h_u;
+@@ -38,8 +36,6 @@ struct compat_ethtool_rxnfc {
+ 	u32				rule_locs[];
+ };
+ 
+-#endif /* CONFIG_COMPAT */
+-
+ #include <linux/rculist.h>
+ 
+ /**
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 5b68c9787f7c2..b9fbb6d4150e2 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -722,6 +722,11 @@ static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
+ 
+ void hugetlb_report_usage(struct seq_file *m, struct mm_struct *mm);
+ 
++static inline void hugetlb_count_init(struct mm_struct *mm)
++{
++	atomic_long_set(&mm->hugetlb_usage, 0);
++}
++
+ static inline void hugetlb_count_add(long l, struct mm_struct *mm)
+ {
+ 	atomic_long_add(l, &mm->hugetlb_usage);
+@@ -897,6 +902,10 @@ static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
+ 	return &mm->page_table_lock;
+ }
+ 
++static inline void hugetlb_count_init(struct mm_struct *mm)
++{
++}
++
+ static inline void hugetlb_report_usage(struct seq_file *f, struct mm_struct *m)
+ {
+ }
+diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
+index 0bff345c4bc68..171bf1be40115 100644
+--- a/include/linux/hugetlb_cgroup.h
++++ b/include/linux/hugetlb_cgroup.h
+@@ -118,6 +118,13 @@ static inline void hugetlb_cgroup_put_rsvd_cgroup(struct hugetlb_cgroup *h_cg)
+ 	css_put(&h_cg->css);
+ }
+ 
++static inline void resv_map_dup_hugetlb_cgroup_uncharge_info(
++						struct resv_map *resv_map)
++{
++	if (resv_map->css)
++		css_get(resv_map->css);
++}
++
+ extern int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
+ 					struct hugetlb_cgroup **ptr);
+ extern int hugetlb_cgroup_charge_cgroup_rsvd(int idx, unsigned long nr_pages,
+@@ -196,6 +203,11 @@ static inline void hugetlb_cgroup_put_rsvd_cgroup(struct hugetlb_cgroup *h_cg)
+ {
+ }
+ 
++static inline void resv_map_dup_hugetlb_cgroup_uncharge_info(
++						struct resv_map *resv_map)
++{
++}
++
+ static inline int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
+ 					       struct hugetlb_cgroup **ptr)
+ {
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index c00ee3458a919..142ec79cda84f 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -122,9 +122,9 @@
+ #define DMAR_MTRR_PHYSMASK8_REG 0x208
+ #define DMAR_MTRR_PHYSBASE9_REG 0x210
+ #define DMAR_MTRR_PHYSMASK9_REG 0x218
+-#define DMAR_VCCAP_REG		0xe00 /* Virtual command capability register */
+-#define DMAR_VCMD_REG		0xe10 /* Virtual command register */
+-#define DMAR_VCRSP_REG		0xe20 /* Virtual command response register */
++#define DMAR_VCCAP_REG		0xe30 /* Virtual command capability register */
++#define DMAR_VCMD_REG		0xe00 /* Virtual command register */
++#define DMAR_VCRSP_REG		0xe10 /* Virtual command response register */
+ 
+ #define OFFSET_STRIDE		(9)
+ 
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 0d7013da818cb..095b3b39bd032 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -163,7 +163,7 @@ void synchronize_rcu_tasks(void);
+ # define synchronize_rcu_tasks synchronize_rcu
+ # endif
+ 
+-# ifdef CONFIG_TASKS_RCU_TRACE
++# ifdef CONFIG_TASKS_TRACE_RCU
+ # define rcu_tasks_trace_qs(t)						\
+ 	do {								\
+ 		if (!likely(READ_ONCE((t)->trc_reader_checked)) &&	\
+diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
+index cad1fa2b6baa2..e7b997d6f0313 100644
+--- a/include/linux/sunrpc/xprt.h
++++ b/include/linux/sunrpc/xprt.h
+@@ -421,6 +421,7 @@ void			xprt_unlock_connect(struct rpc_xprt *, void *);
+ #define XPRT_CONGESTED		(9)
+ #define XPRT_CWND_WAIT		(10)
+ #define XPRT_WRITE_SPACE	(11)
++#define XPRT_SND_IS_COOKIE	(12)
+ 
+ static inline void xprt_set_connected(struct rpc_xprt *xprt)
+ {
+diff --git a/include/linux/sunrpc/xprtsock.h b/include/linux/sunrpc/xprtsock.h
+index 3c1423ee74b4e..8c2a712cb2420 100644
+--- a/include/linux/sunrpc/xprtsock.h
++++ b/include/linux/sunrpc/xprtsock.h
+@@ -10,6 +10,7 @@
+ 
+ int		init_socket_xprt(void);
+ void		cleanup_socket_xprt(void);
++unsigned short	get_srcport(struct rpc_xprt *);
+ 
+ #define RPC_MIN_RESVPORT	(1U)
+ #define RPC_MAX_RESVPORT	(65535U)
+diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
+index 123b1e9ea304a..010d581598873 100644
+--- a/include/net/flow_offload.h
++++ b/include/net/flow_offload.h
+@@ -444,6 +444,7 @@ struct flow_block_offload {
+ 	struct list_head *driver_block_list;
+ 	struct netlink_ext_ack *extack;
+ 	struct Qdisc *sch;
++	struct list_head *cb_list_head;
+ };
+ 
+ enum tc_setup_type;
+diff --git a/include/uapi/linux/serial_reg.h b/include/uapi/linux/serial_reg.h
+index be07b5470f4bb..f51bc8f368134 100644
+--- a/include/uapi/linux/serial_reg.h
++++ b/include/uapi/linux/serial_reg.h
+@@ -62,6 +62,7 @@
+  * ST16C654:	 8  16  56  60		 8  16  32  56	PORT_16654
+  * TI16C750:	 1  16  32  56		xx  xx  xx  xx	PORT_16750
+  * TI16C752:	 8  16  56  60		 8  16  32  56
++ * OX16C950:	16  32 112 120		16  32  64 112	PORT_16C950
+  * Tegra:	 1   4   8  14		16   8   4   1	PORT_TEGRA
+  */
+ #define UART_FCR_R_TRIG_00	0x00
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index 14de1271463fd..4457545299177 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -794,7 +794,7 @@ static int dump_show(struct seq_file *seq, void *v)
+ }
+ DEFINE_SHOW_ATTRIBUTE(dump);
+ 
+-static void dma_debug_fs_init(void)
++static int __init dma_debug_fs_init(void)
+ {
+ 	struct dentry *dentry = debugfs_create_dir("dma-api", NULL);
+ 
+@@ -807,7 +807,10 @@ static void dma_debug_fs_init(void)
+ 	debugfs_create_u32("nr_total_entries", 0444, dentry, &nr_total_entries);
+ 	debugfs_create_file("driver_filter", 0644, dentry, NULL, &filter_fops);
+ 	debugfs_create_file("dump", 0444, dentry, NULL, &dump_fops);
++
++	return 0;
+ }
++core_initcall_sync(dma_debug_fs_init);
+ 
+ static int device_dma_allocations(struct device *dev, struct dma_debug_entry **out_entry)
+ {
+@@ -892,8 +895,6 @@ static int dma_debug_init(void)
+ 		spin_lock_init(&dma_entry_hash[i].lock);
+ 	}
+ 
+-	dma_debug_fs_init();
+-
+ 	nr_pages = DIV_ROUND_UP(nr_prealloc_entries, DMA_DEBUG_DYNAMIC_ENTRIES);
+ 	for (i = 0; i < nr_pages; ++i)
+ 		dma_debug_create_entries(GFP_KERNEL);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 9705439439fe3..3f96400a0ac61 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1037,6 +1037,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
+ 	mm->pmd_huge_pte = NULL;
+ #endif
+ 	mm_init_uprobes_state(mm);
++	hugetlb_count_init(mm);
+ 
+ 	if (current->mm) {
+ 		mm->flags = current->mm->flags & MMF_INIT_MASK;
+diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c
+index 9de21803a8ae2..ef8733e2a476e 100644
+--- a/kernel/pid_namespace.c
++++ b/kernel/pid_namespace.c
+@@ -51,7 +51,8 @@ static struct kmem_cache *create_pid_cachep(unsigned int level)
+ 	mutex_lock(&pid_caches_mutex);
+ 	/* Name collision forces to do allocation under mutex. */
+ 	if (!*pkc)
+-		*pkc = kmem_cache_create(name, len, 0, SLAB_HWCACHE_ALIGN, 0);
++		*pkc = kmem_cache_create(name, len, 0,
++					 SLAB_HWCACHE_ALIGN | SLAB_ACCOUNT, 0);
+ 	mutex_unlock(&pid_caches_mutex);
+ 	/* current can fail, but someone else can succeed. */
+ 	return READ_ONCE(*pkc);
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index 574aeaac9272d..c5091aeaa37bb 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -2591,17 +2591,17 @@ static void noinstr rcu_dynticks_task_exit(void)
+ /* Turn on heavyweight RCU tasks trace readers on idle/user entry. */
+ static void rcu_dynticks_task_trace_enter(void)
+ {
+-#ifdef CONFIG_TASKS_RCU_TRACE
++#ifdef CONFIG_TASKS_TRACE_RCU
+ 	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
+ 		current->trc_reader_special.b.need_mb = true;
+-#endif /* #ifdef CONFIG_TASKS_RCU_TRACE */
++#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
+ }
+ 
+ /* Turn off heavyweight RCU tasks trace readers on idle/user exit. */
+ static void rcu_dynticks_task_trace_exit(void)
+ {
+-#ifdef CONFIG_TASKS_RCU_TRACE
++#ifdef CONFIG_TASKS_TRACE_RCU
+ 	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
+ 		current->trc_reader_special.b.need_mb = false;
+-#endif /* #ifdef CONFIG_TASKS_RCU_TRACE */
++#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
+ }
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 51d19fc71e616..4cb622b2661b5 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -5893,6 +5893,13 @@ static void __init wq_numa_init(void)
+ 		return;
+ 	}
+ 
++	for_each_possible_cpu(cpu) {
++		if (WARN_ON(cpu_to_node(cpu) == NUMA_NO_NODE)) {
++			pr_warn("workqueue: NUMA node mapping not available for cpu%d, disabling NUMA support\n", cpu);
++			return;
++		}
++	}
++
+ 	wq_update_unbound_numa_attrs_buf = alloc_workqueue_attrs();
+ 	BUG_ON(!wq_update_unbound_numa_attrs_buf);
+ 
+@@ -5910,11 +5917,6 @@ static void __init wq_numa_init(void)
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		node = cpu_to_node(cpu);
+-		if (WARN_ON(node == NUMA_NO_NODE)) {
+-			pr_warn("workqueue: NUMA node mapping not available for cpu%d, disabling NUMA support\n", cpu);
+-			/* happens iff arch is bonkers, let's just proceed */
+-			return;
+-		}
+ 		cpumask_set_cpu(cpu, tbl[node]);
+ 	}
+ 
+diff --git a/lib/test_bpf.c b/lib/test_bpf.c
+index ca7d635bccd9d..4a9137c8551ad 100644
+--- a/lib/test_bpf.c
++++ b/lib/test_bpf.c
+@@ -4286,8 +4286,8 @@ static struct bpf_test tests[] = {
+ 		.u.insns_int = {
+ 			BPF_LD_IMM64(R0, 0),
+ 			BPF_LD_IMM64(R1, 0xffffffffffffffffLL),
+-			BPF_STX_MEM(BPF_W, R10, R1, -40),
+-			BPF_LDX_MEM(BPF_W, R0, R10, -40),
++			BPF_STX_MEM(BPF_DW, R10, R1, -40),
++			BPF_LDX_MEM(BPF_DW, R0, R10, -40),
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		INTERNAL,
+@@ -6664,7 +6664,14 @@ static int run_one(const struct bpf_prog *fp, struct bpf_test *test)
+ 		u64 duration;
+ 		u32 ret;
+ 
+-		if (test->test[i].data_size == 0 &&
++		/*
++		 * NOTE: Several sub-tests may be present, in which case
++		 * a zero {data_size, result} tuple indicates the end of
++		 * the sub-test array. The first test is always run,
++		 * even if both data_size and result happen to be zero.
++		 */
++		if (i > 0 &&
++		    test->test[i].data_size == 0 &&
+ 		    test->test[i].result == 0)
+ 			break;
+ 
+diff --git a/lib/test_stackinit.c b/lib/test_stackinit.c
+index f93b1e145ada7..16b1d3a3a4975 100644
+--- a/lib/test_stackinit.c
++++ b/lib/test_stackinit.c
+@@ -67,10 +67,10 @@ static bool range_contains(char *haystack_start, size_t haystack_size,
+ #define INIT_STRUCT_none		/**/
+ #define INIT_STRUCT_zero		= { }
+ #define INIT_STRUCT_static_partial	= { .two = 0, }
+-#define INIT_STRUCT_static_all		= { .one = arg->one,		\
+-					    .two = arg->two,		\
+-					    .three = arg->three,	\
+-					    .four = arg->four,		\
++#define INIT_STRUCT_static_all		= { .one = 0,			\
++					    .two = 0,			\
++					    .three = 0,			\
++					    .four = 0,			\
+ 					}
+ #define INIT_STRUCT_dynamic_partial	= { .two = arg->two, }
+ #define INIT_STRUCT_dynamic_all		= { .one = arg->one,		\
+@@ -84,8 +84,7 @@ static bool range_contains(char *haystack_start, size_t haystack_size,
+ 					var.one = 0;			\
+ 					var.two = 0;			\
+ 					var.three = 0;			\
+-					memset(&var.four, 0,		\
+-					       sizeof(var.four))
++					var.four = 0
+ 
+ /*
+  * @name: unique string name for the test
+@@ -210,18 +209,13 @@ struct test_small_hole {
+ 	unsigned long four;
+ };
+ 
+-/* Try to trigger unhandled padding in a structure. */
+-struct test_aligned {
+-	u32 internal1;
+-	u64 internal2;
+-} __aligned(64);
+-
++/* Trigger unhandled padding in a structure. */
+ struct test_big_hole {
+ 	u8 one;
+ 	u8 two;
+ 	u8 three;
+ 	/* 61 byte padding hole here. */
+-	struct test_aligned four;
++	u8 four __aligned(64);
+ } __aligned(64);
+ 
+ struct test_trailing_hole {
+diff --git a/mm/hmm.c b/mm/hmm.c
+index 943cb2ba44423..fb617054f9631 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -291,10 +291,13 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
+ 		goto fault;
+ 
+ 	/*
++	 * Bypass devmap pte such as DAX page when all pfn requested
++	 * flags(pfn_req_flags) are fulfilled.
+ 	 * Since each architecture defines a struct page for the zero page, just
+ 	 * fall through and treat it like a normal page.
+ 	 */
+-	if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
++	if (pte_special(pte) && !pte_devmap(pte) &&
++	    !is_zero_pfn(pte_pfn(pte))) {
+ 		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
+ 			pte_unmap(ptep);
+ 			return -EFAULT;
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index fa6b0ac6c280d..6e92ab0ae070f 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3659,8 +3659,10 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
+ 	 * after this open call completes.  It is therefore safe to take a
+ 	 * new reference here without additional locking.
+ 	 */
+-	if (resv && is_vma_resv_set(vma, HPAGE_RESV_OWNER))
++	if (resv && is_vma_resv_set(vma, HPAGE_RESV_OWNER)) {
++		resv_map_dup_hugetlb_cgroup_uncharge_info(resv);
+ 		kref_get(&resv->refs);
++	}
+ }
+ 
+ static void hugetlb_vm_op_close(struct vm_area_struct *vma)
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 7fb9af001ed5c..f2817e80a1ab3 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2378,7 +2378,7 @@ out:
+ 			cgroup_size = max(cgroup_size, protection);
+ 
+ 			scan = lruvec_size - lruvec_size * protection /
+-				cgroup_size;
++				(cgroup_size + 1);
+ 
+ 			/*
+ 			 * Minimally target SWAP_CLUSTER_MAX pages to keep
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index f4fea28e05da6..3ec1a51a6944e 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -138,7 +138,7 @@ static bool p9_xen_write_todo(struct xen_9pfs_dataring *ring, RING_IDX size)
+ 
+ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
+ {
+-	struct xen_9pfs_front_priv *priv = NULL;
++	struct xen_9pfs_front_priv *priv;
+ 	RING_IDX cons, prod, masked_cons, masked_prod;
+ 	unsigned long flags;
+ 	u32 size = p9_req->tc.size;
+@@ -151,7 +151,7 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
+ 			break;
+ 	}
+ 	read_unlock(&xen_9pfs_lock);
+-	if (!priv || priv->client != client)
++	if (list_entry_is_head(priv, &xen_9pfs_devs, list))
+ 		return -EINVAL;
+ 
+ 	num = p9_req->tc.tag % priv->num_rings;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index e59ae24a8f17f..9f52145bb7b76 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -4329,6 +4329,21 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev,
+ 
+ 	switch (ev->status) {
+ 	case 0x00:
++		/* The synchronous connection complete event should only be
++		 * sent once per new connection. Receiving a successful
++		 * complete event when the connection status is already
++		 * BT_CONNECTED means that the device is misbehaving and sent
++		 * multiple complete event packets for the same new connection.
++		 *
++		 * Registering the device more than once can corrupt kernel
++		 * memory, hence upon detecting this invalid event, we report
++		 * an error and ignore the packet.
++		 */
++		if (conn->state == BT_CONNECTED) {
++			bt_dev_err(hdev, "Ignoring connect complete event for existing connection");
++			goto unlock;
++		}
++
+ 		conn->handle = __le16_to_cpu(ev->handle);
+ 		conn->state  = BT_CONNECTED;
+ 		conn->type   = ev->link_type;
+@@ -5055,9 +5070,64 @@ static void hci_disconn_phylink_complete_evt(struct hci_dev *hdev,
+ }
+ #endif
+ 
++static void le_conn_update_addr(struct hci_conn *conn, bdaddr_t *bdaddr,
++				u8 bdaddr_type, bdaddr_t *local_rpa)
++{
++	if (conn->out) {
++		conn->dst_type = bdaddr_type;
++		conn->resp_addr_type = bdaddr_type;
++		bacpy(&conn->resp_addr, bdaddr);
++
++		/* Check if the controller has set a Local RPA then it must be
++		 * used instead or hdev->rpa.
++		 */
++		if (local_rpa && bacmp(local_rpa, BDADDR_ANY)) {
++			conn->init_addr_type = ADDR_LE_DEV_RANDOM;
++			bacpy(&conn->init_addr, local_rpa);
++		} else if (hci_dev_test_flag(conn->hdev, HCI_PRIVACY)) {
++			conn->init_addr_type = ADDR_LE_DEV_RANDOM;
++			bacpy(&conn->init_addr, &conn->hdev->rpa);
++		} else {
++			hci_copy_identity_address(conn->hdev, &conn->init_addr,
++						  &conn->init_addr_type);
++		}
++	} else {
++		conn->resp_addr_type = conn->hdev->adv_addr_type;
++		/* Check if the controller has set a Local RPA then it must be
++		 * used instead or hdev->rpa.
++		 */
++		if (local_rpa && bacmp(local_rpa, BDADDR_ANY)) {
++			conn->resp_addr_type = ADDR_LE_DEV_RANDOM;
++			bacpy(&conn->resp_addr, local_rpa);
++		} else if (conn->hdev->adv_addr_type == ADDR_LE_DEV_RANDOM) {
++			/* In case of ext adv, resp_addr will be updated in
++			 * Adv Terminated event.
++			 */
++			if (!ext_adv_capable(conn->hdev))
++				bacpy(&conn->resp_addr,
++				      &conn->hdev->random_addr);
++		} else {
++			bacpy(&conn->resp_addr, &conn->hdev->bdaddr);
++		}
++
++		conn->init_addr_type = bdaddr_type;
++		bacpy(&conn->init_addr, bdaddr);
++
++		/* For incoming connections, set the default minimum
++		 * and maximum connection interval. They will be used
++		 * to check if the parameters are in range and if not
++		 * trigger the connection update procedure.
++		 */
++		conn->le_conn_min_interval = conn->hdev->le_conn_min_interval;
++		conn->le_conn_max_interval = conn->hdev->le_conn_max_interval;
++	}
++}
++
+ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+-			bdaddr_t *bdaddr, u8 bdaddr_type, u8 role, u16 handle,
+-			u16 interval, u16 latency, u16 supervision_timeout)
++				 bdaddr_t *bdaddr, u8 bdaddr_type,
++				 bdaddr_t *local_rpa, u8 role, u16 handle,
++				 u16 interval, u16 latency,
++				 u16 supervision_timeout)
+ {
+ 	struct hci_conn_params *params;
+ 	struct hci_conn *conn;
+@@ -5105,32 +5175,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ 		cancel_delayed_work(&conn->le_conn_timeout);
+ 	}
+ 
+-	if (!conn->out) {
+-		/* Set the responder (our side) address type based on
+-		 * the advertising address type.
+-		 */
+-		conn->resp_addr_type = hdev->adv_addr_type;
+-		if (hdev->adv_addr_type == ADDR_LE_DEV_RANDOM) {
+-			/* In case of ext adv, resp_addr will be updated in
+-			 * Adv Terminated event.
+-			 */
+-			if (!ext_adv_capable(hdev))
+-				bacpy(&conn->resp_addr, &hdev->random_addr);
+-		} else {
+-			bacpy(&conn->resp_addr, &hdev->bdaddr);
+-		}
+-
+-		conn->init_addr_type = bdaddr_type;
+-		bacpy(&conn->init_addr, bdaddr);
+-
+-		/* For incoming connections, set the default minimum
+-		 * and maximum connection interval. They will be used
+-		 * to check if the parameters are in range and if not
+-		 * trigger the connection update procedure.
+-		 */
+-		conn->le_conn_min_interval = hdev->le_conn_min_interval;
+-		conn->le_conn_max_interval = hdev->le_conn_max_interval;
+-	}
++	le_conn_update_addr(conn, bdaddr, bdaddr_type, local_rpa);
+ 
+ 	/* Lookup the identity address from the stored connection
+ 	 * address and address type.
+@@ -5224,7 +5269,7 @@ static void hci_le_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+ 
+ 	le_conn_complete_evt(hdev, ev->status, &ev->bdaddr, ev->bdaddr_type,
+-			     ev->role, le16_to_cpu(ev->handle),
++			     NULL, ev->role, le16_to_cpu(ev->handle),
+ 			     le16_to_cpu(ev->interval),
+ 			     le16_to_cpu(ev->latency),
+ 			     le16_to_cpu(ev->supervision_timeout));
+@@ -5238,7 +5283,7 @@ static void hci_le_enh_conn_complete_evt(struct hci_dev *hdev,
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+ 
+ 	le_conn_complete_evt(hdev, ev->status, &ev->bdaddr, ev->bdaddr_type,
+-			     ev->role, le16_to_cpu(ev->handle),
++			     &ev->local_rpa, ev->role, le16_to_cpu(ev->handle),
+ 			     le16_to_cpu(ev->interval),
+ 			     le16_to_cpu(ev->latency),
+ 			     le16_to_cpu(ev->supervision_timeout));
+@@ -5274,7 +5319,8 @@ static void hci_le_ext_adv_term_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 	if (conn) {
+ 		struct adv_info *adv_instance;
+ 
+-		if (hdev->adv_addr_type != ADDR_LE_DEV_RANDOM)
++		if (hdev->adv_addr_type != ADDR_LE_DEV_RANDOM ||
++		    bacmp(&conn->resp_addr, BDADDR_ANY))
+ 			return;
+ 
+ 		if (!hdev->cur_adv_instance) {
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 600b1832e1dd6..7c24a9acbc459 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -48,6 +48,8 @@ struct sco_conn {
+ 	spinlock_t	lock;
+ 	struct sock	*sk;
+ 
++	struct delayed_work	timeout_work;
++
+ 	unsigned int    mtu;
+ };
+ 
+@@ -74,9 +76,20 @@ struct sco_pinfo {
+ #define SCO_CONN_TIMEOUT	(HZ * 40)
+ #define SCO_DISCONN_TIMEOUT	(HZ * 2)
+ 
+-static void sco_sock_timeout(struct timer_list *t)
++static void sco_sock_timeout(struct work_struct *work)
+ {
+-	struct sock *sk = from_timer(sk, t, sk_timer);
++	struct sco_conn *conn = container_of(work, struct sco_conn,
++					     timeout_work.work);
++	struct sock *sk;
++
++	sco_conn_lock(conn);
++	sk = conn->sk;
++	if (sk)
++		sock_hold(sk);
++	sco_conn_unlock(conn);
++
++	if (!sk)
++		return;
+ 
+ 	BT_DBG("sock %p state %d", sk, sk->sk_state);
+ 
+@@ -90,14 +103,21 @@ static void sco_sock_timeout(struct timer_list *t)
+ 
+ static void sco_sock_set_timer(struct sock *sk, long timeout)
+ {
++	if (!sco_pi(sk)->conn)
++		return;
++
+ 	BT_DBG("sock %p state %d timeout %ld", sk, sk->sk_state, timeout);
+-	sk_reset_timer(sk, &sk->sk_timer, jiffies + timeout);
++	cancel_delayed_work(&sco_pi(sk)->conn->timeout_work);
++	schedule_delayed_work(&sco_pi(sk)->conn->timeout_work, timeout);
+ }
+ 
+ static void sco_sock_clear_timer(struct sock *sk)
+ {
++	if (!sco_pi(sk)->conn)
++		return;
++
+ 	BT_DBG("sock %p state %d", sk, sk->sk_state);
+-	sk_stop_timer(sk, &sk->sk_timer);
++	cancel_delayed_work(&sco_pi(sk)->conn->timeout_work);
+ }
+ 
+ /* ---- SCO connections ---- */
+@@ -177,6 +197,9 @@ static void sco_conn_del(struct hci_conn *hcon, int err)
+ 		sco_chan_del(sk, err);
+ 		bh_unlock_sock(sk);
+ 		sock_put(sk);
++
++		/* Ensure no more work items will run before freeing conn. */
++		cancel_delayed_work_sync(&conn->timeout_work);
+ 	}
+ 
+ 	hcon->sco_data = NULL;
+@@ -191,6 +214,8 @@ static void __sco_chan_add(struct sco_conn *conn, struct sock *sk,
+ 	sco_pi(sk)->conn = conn;
+ 	conn->sk = sk;
+ 
++	INIT_DELAYED_WORK(&conn->timeout_work, sco_sock_timeout);
++
+ 	if (parent)
+ 		bt_accept_enqueue(parent, sk, true);
+ }
+@@ -210,44 +235,32 @@ static int sco_chan_add(struct sco_conn *conn, struct sock *sk,
+ 	return err;
+ }
+ 
+-static int sco_connect(struct sock *sk)
++static int sco_connect(struct hci_dev *hdev, struct sock *sk)
+ {
+ 	struct sco_conn *conn;
+ 	struct hci_conn *hcon;
+-	struct hci_dev  *hdev;
+ 	int err, type;
+ 
+ 	BT_DBG("%pMR -> %pMR", &sco_pi(sk)->src, &sco_pi(sk)->dst);
+ 
+-	hdev = hci_get_route(&sco_pi(sk)->dst, &sco_pi(sk)->src, BDADDR_BREDR);
+-	if (!hdev)
+-		return -EHOSTUNREACH;
+-
+-	hci_dev_lock(hdev);
+-
+ 	if (lmp_esco_capable(hdev) && !disable_esco)
+ 		type = ESCO_LINK;
+ 	else
+ 		type = SCO_LINK;
+ 
+ 	if (sco_pi(sk)->setting == BT_VOICE_TRANSPARENT &&
+-	    (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev))) {
+-		err = -EOPNOTSUPP;
+-		goto done;
+-	}
++	    (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev)))
++		return -EOPNOTSUPP;
+ 
+ 	hcon = hci_connect_sco(hdev, type, &sco_pi(sk)->dst,
+ 			       sco_pi(sk)->setting);
+-	if (IS_ERR(hcon)) {
+-		err = PTR_ERR(hcon);
+-		goto done;
+-	}
++	if (IS_ERR(hcon))
++		return PTR_ERR(hcon);
+ 
+ 	conn = sco_conn_add(hcon);
+ 	if (!conn) {
+ 		hci_conn_drop(hcon);
+-		err = -ENOMEM;
+-		goto done;
++		return -ENOMEM;
+ 	}
+ 
+ 	/* Update source addr of the socket */
+@@ -255,7 +268,7 @@ static int sco_connect(struct sock *sk)
+ 
+ 	err = sco_chan_add(conn, sk, NULL);
+ 	if (err)
+-		goto done;
++		return err;
+ 
+ 	if (hcon->state == BT_CONNECTED) {
+ 		sco_sock_clear_timer(sk);
+@@ -265,9 +278,6 @@ static int sco_connect(struct sock *sk)
+ 		sco_sock_set_timer(sk, sk->sk_sndtimeo);
+ 	}
+ 
+-done:
+-	hci_dev_unlock(hdev);
+-	hci_dev_put(hdev);
+ 	return err;
+ }
+ 
+@@ -496,8 +506,6 @@ static struct sock *sco_sock_alloc(struct net *net, struct socket *sock,
+ 
+ 	sco_pi(sk)->setting = BT_VOICE_CVSD_16BIT;
+ 
+-	timer_setup(&sk->sk_timer, sco_sock_timeout, 0);
+-
+ 	bt_sock_link(&sco_sk_list, sk);
+ 	return sk;
+ }
+@@ -562,6 +570,7 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
+ {
+ 	struct sockaddr_sco *sa = (struct sockaddr_sco *) addr;
+ 	struct sock *sk = sock->sk;
++	struct hci_dev  *hdev;
+ 	int err;
+ 
+ 	BT_DBG("sk %p", sk);
+@@ -576,12 +585,19 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
+ 	if (sk->sk_type != SOCK_SEQPACKET)
+ 		return -EINVAL;
+ 
++	hdev = hci_get_route(&sa->sco_bdaddr, &sco_pi(sk)->src, BDADDR_BREDR);
++	if (!hdev)
++		return -EHOSTUNREACH;
++	hci_dev_lock(hdev);
++
+ 	lock_sock(sk);
+ 
+ 	/* Set destination address and psm */
+ 	bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr);
+ 
+-	err = sco_connect(sk);
++	err = sco_connect(hdev, sk);
++	hci_dev_unlock(hdev);
++	hci_dev_put(hdev);
+ 	if (err)
+ 		goto done;
+ 
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index c52e5ea654e99..813c709c61cfb 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1047,8 +1047,10 @@ proto_again:
+ 							      FLOW_DISSECTOR_KEY_IPV4_ADDRS,
+ 							      target_container);
+ 
+-			memcpy(&key_addrs->v4addrs, &iph->saddr,
+-			       sizeof(key_addrs->v4addrs));
++			memcpy(&key_addrs->v4addrs.src, &iph->saddr,
++			       sizeof(key_addrs->v4addrs.src));
++			memcpy(&key_addrs->v4addrs.dst, &iph->daddr,
++			       sizeof(key_addrs->v4addrs.dst));
+ 			key_control->addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
+ 		}
+ 
+@@ -1092,8 +1094,10 @@ proto_again:
+ 							      FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+ 							      target_container);
+ 
+-			memcpy(&key_addrs->v6addrs, &iph->saddr,
+-			       sizeof(key_addrs->v6addrs));
++			memcpy(&key_addrs->v6addrs.src, &iph->saddr,
++			       sizeof(key_addrs->v6addrs.src));
++			memcpy(&key_addrs->v6addrs.dst, &iph->daddr,
++			       sizeof(key_addrs->v6addrs.dst));
+ 			key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+ 		}
+ 
+diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
+index 715b67f6c62f3..e3f0d59068117 100644
+--- a/net/core/flow_offload.c
++++ b/net/core/flow_offload.c
+@@ -321,6 +321,7 @@ EXPORT_SYMBOL(flow_block_cb_setup_simple);
+ static DEFINE_MUTEX(flow_indr_block_lock);
+ static LIST_HEAD(flow_block_indr_list);
+ static LIST_HEAD(flow_block_indr_dev_list);
++static LIST_HEAD(flow_indir_dev_list);
+ 
+ struct flow_indr_dev {
+ 	struct list_head		list;
+@@ -346,6 +347,33 @@ static struct flow_indr_dev *flow_indr_dev_alloc(flow_indr_block_bind_cb_t *cb,
+ 	return indr_dev;
+ }
+ 
++struct flow_indir_dev_info {
++	void *data;
++	struct net_device *dev;
++	struct Qdisc *sch;
++	enum tc_setup_type type;
++	void (*cleanup)(struct flow_block_cb *block_cb);
++	struct list_head list;
++	enum flow_block_command command;
++	enum flow_block_binder_type binder_type;
++	struct list_head *cb_list;
++};
++
++static void existing_qdiscs_register(flow_indr_block_bind_cb_t *cb, void *cb_priv)
++{
++	struct flow_block_offload bo;
++	struct flow_indir_dev_info *cur;
++
++	list_for_each_entry(cur, &flow_indir_dev_list, list) {
++		memset(&bo, 0, sizeof(bo));
++		bo.command = cur->command;
++		bo.binder_type = cur->binder_type;
++		INIT_LIST_HEAD(&bo.cb_list);
++		cb(cur->dev, cur->sch, cb_priv, cur->type, &bo, cur->data, cur->cleanup);
++		list_splice(&bo.cb_list, cur->cb_list);
++	}
++}
++
+ int flow_indr_dev_register(flow_indr_block_bind_cb_t *cb, void *cb_priv)
+ {
+ 	struct flow_indr_dev *indr_dev;
+@@ -367,6 +395,7 @@ int flow_indr_dev_register(flow_indr_block_bind_cb_t *cb, void *cb_priv)
+ 	}
+ 
+ 	list_add(&indr_dev->list, &flow_block_indr_dev_list);
++	existing_qdiscs_register(cb, cb_priv);
+ 	mutex_unlock(&flow_indr_block_lock);
+ 
+ 	return 0;
+@@ -463,7 +492,59 @@ out:
+ }
+ EXPORT_SYMBOL(flow_indr_block_cb_alloc);
+ 
+-int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch,
++static struct flow_indir_dev_info *find_indir_dev(void *data)
++{
++	struct flow_indir_dev_info *cur;
++
++	list_for_each_entry(cur, &flow_indir_dev_list, list) {
++		if (cur->data == data)
++			return cur;
++	}
++	return NULL;
++}
++
++static int indir_dev_add(void *data, struct net_device *dev, struct Qdisc *sch,
++			 enum tc_setup_type type, void (*cleanup)(struct flow_block_cb *block_cb),
++			 struct flow_block_offload *bo)
++{
++	struct flow_indir_dev_info *info;
++
++	info = find_indir_dev(data);
++	if (info)
++		return -EEXIST;
++
++	info = kzalloc(sizeof(*info), GFP_KERNEL);
++	if (!info)
++		return -ENOMEM;
++
++	info->data = data;
++	info->dev = dev;
++	info->sch = sch;
++	info->type = type;
++	info->cleanup = cleanup;
++	info->command = bo->command;
++	info->binder_type = bo->binder_type;
++	info->cb_list = bo->cb_list_head;
++
++	list_add(&info->list, &flow_indir_dev_list);
++	return 0;
++}
++
++static int indir_dev_remove(void *data)
++{
++	struct flow_indir_dev_info *info;
++
++	info = find_indir_dev(data);
++	if (!info)
++		return -ENOENT;
++
++	list_del(&info->list);
++
++	kfree(info);
++	return 0;
++}
++
++int flow_indr_dev_setup_offload(struct net_device *dev,	struct Qdisc *sch,
+ 				enum tc_setup_type type, void *data,
+ 				struct flow_block_offload *bo,
+ 				void (*cleanup)(struct flow_block_cb *block_cb))
+@@ -471,6 +552,12 @@ int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch,
+ 	struct flow_indr_dev *this;
+ 
+ 	mutex_lock(&flow_indr_block_lock);
++
++	if (bo->command == FLOW_BLOCK_BIND)
++		indir_dev_add(data, dev, sch, type, cleanup, bo);
++	else if (bo->command == FLOW_BLOCK_UNBIND)
++		indir_dev_remove(data);
++
+ 	list_for_each_entry(this, &flow_block_indr_dev_list, list)
+ 		this->cb(dev, sch, this->cb_priv, type, bo, data, cleanup);
+ 
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 68ff19af195c6..97b402b2d6fbd 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -7,6 +7,7 @@
+  * the information ethtool needs.
+  */
+ 
++#include <linux/compat.h>
+ #include <linux/module.h>
+ #include <linux/types.h>
+ #include <linux/capability.h>
+@@ -807,6 +808,120 @@ out:
+ 	return ret;
+ }
+ 
++static noinline_for_stack int
++ethtool_rxnfc_copy_from_compat(struct ethtool_rxnfc *rxnfc,
++			       const struct compat_ethtool_rxnfc __user *useraddr,
++			       size_t size)
++{
++	struct compat_ethtool_rxnfc crxnfc = {};
++
++	/* We expect there to be holes between fs.m_ext and
++	 * fs.ring_cookie and at the end of fs, but nowhere else.
++	 * On non-x86, no conversion should be needed.
++	 */
++	BUILD_BUG_ON(!IS_ENABLED(CONFIG_X86_64) &&
++		     sizeof(struct compat_ethtool_rxnfc) !=
++		     sizeof(struct ethtool_rxnfc));
++	BUILD_BUG_ON(offsetof(struct compat_ethtool_rxnfc, fs.m_ext) +
++		     sizeof(useraddr->fs.m_ext) !=
++		     offsetof(struct ethtool_rxnfc, fs.m_ext) +
++		     sizeof(rxnfc->fs.m_ext));
++	BUILD_BUG_ON(offsetof(struct compat_ethtool_rxnfc, fs.location) -
++		     offsetof(struct compat_ethtool_rxnfc, fs.ring_cookie) !=
++		     offsetof(struct ethtool_rxnfc, fs.location) -
++		     offsetof(struct ethtool_rxnfc, fs.ring_cookie));
++
++	if (copy_from_user(&crxnfc, useraddr, min(size, sizeof(crxnfc))))
++		return -EFAULT;
++
++	*rxnfc = (struct ethtool_rxnfc) {
++		.cmd		= crxnfc.cmd,
++		.flow_type	= crxnfc.flow_type,
++		.data		= crxnfc.data,
++		.fs		= {
++			.flow_type	= crxnfc.fs.flow_type,
++			.h_u		= crxnfc.fs.h_u,
++			.h_ext		= crxnfc.fs.h_ext,
++			.m_u		= crxnfc.fs.m_u,
++			.m_ext		= crxnfc.fs.m_ext,
++			.ring_cookie	= crxnfc.fs.ring_cookie,
++			.location	= crxnfc.fs.location,
++		},
++		.rule_cnt	= crxnfc.rule_cnt,
++	};
++
++	return 0;
++}
++
++static int ethtool_rxnfc_copy_from_user(struct ethtool_rxnfc *rxnfc,
++					const void __user *useraddr,
++					size_t size)
++{
++	if (compat_need_64bit_alignment_fixup())
++		return ethtool_rxnfc_copy_from_compat(rxnfc, useraddr, size);
++
++	if (copy_from_user(rxnfc, useraddr, size))
++		return -EFAULT;
++
++	return 0;
++}
++
++static int ethtool_rxnfc_copy_to_compat(void __user *useraddr,
++					const struct ethtool_rxnfc *rxnfc,
++					size_t size, const u32 *rule_buf)
++{
++	struct compat_ethtool_rxnfc crxnfc;
++
++	memset(&crxnfc, 0, sizeof(crxnfc));
++	crxnfc = (struct compat_ethtool_rxnfc) {
++		.cmd		= rxnfc->cmd,
++		.flow_type	= rxnfc->flow_type,
++		.data		= rxnfc->data,
++		.fs		= {
++			.flow_type	= rxnfc->fs.flow_type,
++			.h_u		= rxnfc->fs.h_u,
++			.h_ext		= rxnfc->fs.h_ext,
++			.m_u		= rxnfc->fs.m_u,
++			.m_ext		= rxnfc->fs.m_ext,
++			.ring_cookie	= rxnfc->fs.ring_cookie,
++			.location	= rxnfc->fs.location,
++		},
++		.rule_cnt	= rxnfc->rule_cnt,
++	};
++
++	if (copy_to_user(useraddr, &crxnfc, min(size, sizeof(crxnfc))))
++		return -EFAULT;
++
++	return 0;
++}
++
++static int ethtool_rxnfc_copy_to_user(void __user *useraddr,
++				      const struct ethtool_rxnfc *rxnfc,
++				      size_t size, const u32 *rule_buf)
++{
++	int ret;
++
++	if (compat_need_64bit_alignment_fixup()) {
++		ret = ethtool_rxnfc_copy_to_compat(useraddr, rxnfc, size,
++						   rule_buf);
++		useraddr += offsetof(struct compat_ethtool_rxnfc, rule_locs);
++	} else {
++		ret = copy_to_user(useraddr, &rxnfc, size);
++		useraddr += offsetof(struct ethtool_rxnfc, rule_locs);
++	}
++
++	if (ret)
++		return -EFAULT;
++
++	if (rule_buf) {
++		if (copy_to_user(useraddr, rule_buf,
++				 rxnfc->rule_cnt * sizeof(u32)))
++			return -EFAULT;
++	}
++
++	return 0;
++}
++
+ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ 						u32 cmd, void __user *useraddr)
+ {
+@@ -825,7 +940,7 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ 		info_size = (offsetof(struct ethtool_rxnfc, data) +
+ 			     sizeof(info.data));
+ 
+-	if (copy_from_user(&info, useraddr, info_size))
++	if (ethtool_rxnfc_copy_from_user(&info, useraddr, info_size))
+ 		return -EFAULT;
+ 
+ 	rc = dev->ethtool_ops->set_rxnfc(dev, &info);
+@@ -833,7 +948,7 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ 		return rc;
+ 
+ 	if (cmd == ETHTOOL_SRXCLSRLINS &&
+-	    copy_to_user(useraddr, &info, info_size))
++	    ethtool_rxnfc_copy_to_user(useraddr, &info, info_size, NULL))
+ 		return -EFAULT;
+ 
+ 	return 0;
+@@ -859,7 +974,7 @@ static noinline_for_stack int ethtool_get_rxnfc(struct net_device *dev,
+ 		info_size = (offsetof(struct ethtool_rxnfc, data) +
+ 			     sizeof(info.data));
+ 
+-	if (copy_from_user(&info, useraddr, info_size))
++	if (ethtool_rxnfc_copy_from_user(&info, useraddr, info_size))
+ 		return -EFAULT;
+ 
+ 	/* If FLOW_RSS was requested then user-space must be using the
+@@ -867,7 +982,7 @@ static noinline_for_stack int ethtool_get_rxnfc(struct net_device *dev,
+ 	 */
+ 	if (cmd == ETHTOOL_GRXFH && info.flow_type & FLOW_RSS) {
+ 		info_size = sizeof(info);
+-		if (copy_from_user(&info, useraddr, info_size))
++		if (ethtool_rxnfc_copy_from_user(&info, useraddr, info_size))
+ 			return -EFAULT;
+ 		/* Since malicious users may modify the original data,
+ 		 * we need to check whether FLOW_RSS is still requested.
+@@ -893,18 +1008,7 @@ static noinline_for_stack int ethtool_get_rxnfc(struct net_device *dev,
+ 	if (ret < 0)
+ 		goto err_out;
+ 
+-	ret = -EFAULT;
+-	if (copy_to_user(useraddr, &info, info_size))
+-		goto err_out;
+-
+-	if (rule_buf) {
+-		useraddr += offsetof(struct ethtool_rxnfc, rule_locs);
+-		if (copy_to_user(useraddr, rule_buf,
+-				 info.rule_cnt * sizeof(u32)))
+-			goto err_out;
+-	}
+-	ret = 0;
+-
++	ret = ethtool_rxnfc_copy_to_user(useraddr, &info, info_size, rule_buf);
+ err_out:
+ 	kfree(rule_buf);
+ 
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 560d5dc435629..10d4cde31c6bf 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -445,8 +445,9 @@ static void ip_copy_addrs(struct iphdr *iph, const struct flowi4 *fl4)
+ {
+ 	BUILD_BUG_ON(offsetof(typeof(*fl4), daddr) !=
+ 		     offsetof(typeof(*fl4), saddr) + sizeof(fl4->saddr));
+-	memcpy(&iph->saddr, &fl4->saddr,
+-	       sizeof(fl4->saddr) + sizeof(fl4->daddr));
++
++	iph->saddr = fl4->saddr;
++	iph->daddr = fl4->daddr;
+ }
+ 
+ /* Note: skb->sk can be different from sk, in case of tunnels */
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index d49709ba8e165..1071119843843 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -379,8 +379,7 @@ struct sock *tcp_try_fastopen(struct sock *sk, struct sk_buff *skb,
+ 		return NULL;
+ 	}
+ 
+-	if (syn_data &&
+-	    tcp_fastopen_no_cookie(sk, dst, TFO_SERVER_COOKIE_NOT_REQD))
++	if (tcp_fastopen_no_cookie(sk, dst, TFO_SERVER_COOKIE_NOT_REQD))
+ 		goto fastopen;
+ 
+ 	if (foc->len == 0) {
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 30589b4c09da4..3a15ef8dd3228 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -2000,9 +2000,16 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
+ 
+ 		netdev_set_default_ethtool_ops(ndev, &ieee80211_ethtool_ops);
+ 
+-		/* MTU range: 256 - 2304 */
++		/* MTU range is normally 256 - 2304, where the upper limit is
++		 * the maximum MSDU size. Monitor interfaces send and receive
++		 * MPDU and A-MSDU frames which may be much larger so we do
++		 * not impose an upper limit in that case.
++		 */
+ 		ndev->min_mtu = 256;
+-		ndev->max_mtu = local->hw.max_mtu;
++		if (type == NL80211_IFTYPE_MONITOR)
++			ndev->max_mtu = 0;
++		else
++			ndev->max_mtu = local->hw.max_mtu;
+ 
+ 		ret = register_netdevice(ndev);
+ 		if (ret) {
+diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
+index 92047cea3c170..a6b654b028dd4 100644
+--- a/net/netfilter/nf_flow_table_offload.c
++++ b/net/netfilter/nf_flow_table_offload.c
+@@ -940,6 +940,7 @@ static void nf_flow_table_block_offload_init(struct flow_block_offload *bo,
+ 	bo->command	= cmd;
+ 	bo->binder_type	= FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
+ 	bo->extack	= extack;
++	bo->cb_list_head = &flowtable->flow_block.cb_list;
+ 	INIT_LIST_HEAD(&bo->cb_list);
+ }
+ 
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index 9ce776175214c..e5fcbb0e4b8e5 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -323,6 +323,7 @@ static void nft_flow_block_offload_init(struct flow_block_offload *bo,
+ 	bo->command	= cmd;
+ 	bo->binder_type	= FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
+ 	bo->extack	= extack;
++	bo->cb_list_head = &basechain->flow_block.cb_list;
+ 	INIT_LIST_HEAD(&bo->cb_list);
+ }
+ 
+diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c
+index 50f40943c8153..f3f1df1b0f8e2 100644
+--- a/net/netlabel/netlabel_cipso_v4.c
++++ b/net/netlabel/netlabel_cipso_v4.c
+@@ -144,8 +144,8 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
+ 		return -ENOMEM;
+ 	doi_def->map.std = kzalloc(sizeof(*doi_def->map.std), GFP_KERNEL);
+ 	if (doi_def->map.std == NULL) {
+-		ret_val = -ENOMEM;
+-		goto add_std_failure;
++		kfree(doi_def);
++		return -ENOMEM;
+ 	}
+ 	doi_def->type = CIPSO_V4_MAP_TRANS;
+ 
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index e527f5686e2bf..8434da3c0487a 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -2537,13 +2537,15 @@ int nlmsg_notify(struct sock *sk, struct sk_buff *skb, u32 portid,
+ 		/* errors reported via destination sk->sk_err, but propagate
+ 		 * delivery errors if NETLINK_BROADCAST_ERROR flag is set */
+ 		err = nlmsg_multicast(sk, skb, exclude_portid, group, flags);
++		if (err == -ESRCH)
++			err = 0;
+ 	}
+ 
+ 	if (report) {
+ 		int err2;
+ 
+ 		err2 = nlmsg_unicast(sk, skb, portid);
+-		if (!err || err == -ESRCH)
++		if (!err)
+ 			err = err2;
+ 	}
+ 
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 31ac76a9189ee..8073657a0fd25 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -634,6 +634,7 @@ static void tcf_block_offload_init(struct flow_block_offload *bo,
+ 	bo->block_shared = shared;
+ 	bo->extack = extack;
+ 	bo->sch = sch;
++	bo->cb_list_head = &flow_block->cb_list;
+ 	INIT_LIST_HEAD(&bo->cb_list);
+ }
+ 
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 00853065dfa06..cb5e5220da552 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1502,7 +1502,9 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+ 	taprio_set_picos_per_byte(dev, q);
+ 
+ 	if (mqprio) {
+-		netdev_set_num_tc(dev, mqprio->num_tc);
++		err = netdev_set_num_tc(dev, mqprio->num_tc);
++		if (err)
++			goto free_sched;
+ 		for (i = 0; i < mqprio->num_tc; i++)
+ 			netdev_set_tc_queue(dev, i,
+ 					    mqprio->count[i],
+diff --git a/net/socket.c b/net/socket.c
+index dd5da07bc1ffc..d52c265ad449b 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -3112,128 +3112,6 @@ static int compat_dev_ifconf(struct net *net, struct compat_ifconf __user *uifc3
+ 	return 0;
+ }
+ 
+-static int ethtool_ioctl(struct net *net, struct compat_ifreq __user *ifr32)
+-{
+-	struct compat_ethtool_rxnfc __user *compat_rxnfc;
+-	bool convert_in = false, convert_out = false;
+-	size_t buf_size = 0;
+-	struct ethtool_rxnfc __user *rxnfc = NULL;
+-	struct ifreq ifr;
+-	u32 rule_cnt = 0, actual_rule_cnt;
+-	u32 ethcmd;
+-	u32 data;
+-	int ret;
+-
+-	if (get_user(data, &ifr32->ifr_ifru.ifru_data))
+-		return -EFAULT;
+-
+-	compat_rxnfc = compat_ptr(data);
+-
+-	if (get_user(ethcmd, &compat_rxnfc->cmd))
+-		return -EFAULT;
+-
+-	/* Most ethtool structures are defined without padding.
+-	 * Unfortunately struct ethtool_rxnfc is an exception.
+-	 */
+-	switch (ethcmd) {
+-	default:
+-		break;
+-	case ETHTOOL_GRXCLSRLALL:
+-		/* Buffer size is variable */
+-		if (get_user(rule_cnt, &compat_rxnfc->rule_cnt))
+-			return -EFAULT;
+-		if (rule_cnt > KMALLOC_MAX_SIZE / sizeof(u32))
+-			return -ENOMEM;
+-		buf_size += rule_cnt * sizeof(u32);
+-		fallthrough;
+-	case ETHTOOL_GRXRINGS:
+-	case ETHTOOL_GRXCLSRLCNT:
+-	case ETHTOOL_GRXCLSRULE:
+-	case ETHTOOL_SRXCLSRLINS:
+-		convert_out = true;
+-		fallthrough;
+-	case ETHTOOL_SRXCLSRLDEL:
+-		buf_size += sizeof(struct ethtool_rxnfc);
+-		convert_in = true;
+-		rxnfc = compat_alloc_user_space(buf_size);
+-		break;
+-	}
+-
+-	if (copy_from_user(&ifr.ifr_name, &ifr32->ifr_name, IFNAMSIZ))
+-		return -EFAULT;
+-
+-	ifr.ifr_data = convert_in ? rxnfc : (void __user *)compat_rxnfc;
+-
+-	if (convert_in) {
+-		/* We expect there to be holes between fs.m_ext and
+-		 * fs.ring_cookie and at the end of fs, but nowhere else.
+-		 */
+-		BUILD_BUG_ON(offsetof(struct compat_ethtool_rxnfc, fs.m_ext) +
+-			     sizeof(compat_rxnfc->fs.m_ext) !=
+-			     offsetof(struct ethtool_rxnfc, fs.m_ext) +
+-			     sizeof(rxnfc->fs.m_ext));
+-		BUILD_BUG_ON(
+-			offsetof(struct compat_ethtool_rxnfc, fs.location) -
+-			offsetof(struct compat_ethtool_rxnfc, fs.ring_cookie) !=
+-			offsetof(struct ethtool_rxnfc, fs.location) -
+-			offsetof(struct ethtool_rxnfc, fs.ring_cookie));
+-
+-		if (copy_in_user(rxnfc, compat_rxnfc,
+-				 (void __user *)(&rxnfc->fs.m_ext + 1) -
+-				 (void __user *)rxnfc) ||
+-		    copy_in_user(&rxnfc->fs.ring_cookie,
+-				 &compat_rxnfc->fs.ring_cookie,
+-				 (void __user *)(&rxnfc->fs.location + 1) -
+-				 (void __user *)&rxnfc->fs.ring_cookie))
+-			return -EFAULT;
+-		if (ethcmd == ETHTOOL_GRXCLSRLALL) {
+-			if (put_user(rule_cnt, &rxnfc->rule_cnt))
+-				return -EFAULT;
+-		} else if (copy_in_user(&rxnfc->rule_cnt,
+-					&compat_rxnfc->rule_cnt,
+-					sizeof(rxnfc->rule_cnt)))
+-			return -EFAULT;
+-	}
+-
+-	ret = dev_ioctl(net, SIOCETHTOOL, &ifr, NULL);
+-	if (ret)
+-		return ret;
+-
+-	if (convert_out) {
+-		if (copy_in_user(compat_rxnfc, rxnfc,
+-				 (const void __user *)(&rxnfc->fs.m_ext + 1) -
+-				 (const void __user *)rxnfc) ||
+-		    copy_in_user(&compat_rxnfc->fs.ring_cookie,
+-				 &rxnfc->fs.ring_cookie,
+-				 (const void __user *)(&rxnfc->fs.location + 1) -
+-				 (const void __user *)&rxnfc->fs.ring_cookie) ||
+-		    copy_in_user(&compat_rxnfc->rule_cnt, &rxnfc->rule_cnt,
+-				 sizeof(rxnfc->rule_cnt)))
+-			return -EFAULT;
+-
+-		if (ethcmd == ETHTOOL_GRXCLSRLALL) {
+-			/* As an optimisation, we only copy the actual
+-			 * number of rules that the underlying
+-			 * function returned.  Since Mallory might
+-			 * change the rule count in user memory, we
+-			 * check that it is less than the rule count
+-			 * originally given (as the user buffer size),
+-			 * which has been range-checked.
+-			 */
+-			if (get_user(actual_rule_cnt, &rxnfc->rule_cnt))
+-				return -EFAULT;
+-			if (actual_rule_cnt < rule_cnt)
+-				rule_cnt = actual_rule_cnt;
+-			if (copy_in_user(&compat_rxnfc->rule_locs[0],
+-					 &rxnfc->rule_locs[0],
+-					 rule_cnt * sizeof(u32)))
+-				return -EFAULT;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+ static int compat_siocwandev(struct net *net, struct compat_ifreq __user *uifr32)
+ {
+ 	compat_uptr_t uptr32;
+@@ -3390,8 +3268,6 @@ static int compat_sock_ioctl_trans(struct file *file, struct socket *sock,
+ 		return old_bridge_ioctl(argp);
+ 	case SIOCGIFCONF:
+ 		return compat_dev_ifconf(net, argp);
+-	case SIOCETHTOOL:
+-		return ethtool_ioctl(net, argp);
+ 	case SIOCWANDEV:
+ 		return compat_siocwandev(net, argp);
+ 	case SIOCGIFMAP:
+@@ -3404,6 +3280,7 @@ static int compat_sock_ioctl_trans(struct file *file, struct socket *sock,
+ 		return sock->ops->gettstamp(sock, argp, cmd == SIOCGSTAMP_OLD,
+ 					    !COMPAT_USE_64BIT_TIME);
+ 
++	case SIOCETHTOOL:
+ 	case SIOCBONDSLAVEINFOQUERY:
+ 	case SIOCBONDINFOQUERY:
+ 	case SIOCSHWTSTAMP:
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index 6dff64374bfe1..e22f2d65457da 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -1980,7 +1980,7 @@ gss_svc_init_net(struct net *net)
+ 		goto out2;
+ 	return 0;
+ out2:
+-	destroy_use_gss_proxy_proc_entry(net);
++	rsi_cache_destroy_net(net);
+ out1:
+ 	rsc_cache_destroy_net(net);
+ 	return rv;
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 9a50764be9160..8201531ce5d97 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -746,9 +746,9 @@ void xprt_force_disconnect(struct rpc_xprt *xprt)
+ 	/* Try to schedule an autoclose RPC call */
+ 	if (test_and_set_bit(XPRT_LOCKED, &xprt->state) == 0)
+ 		queue_work(xprtiod_workqueue, &xprt->task_cleanup);
+-	else if (xprt->snd_task)
++	else if (xprt->snd_task && !test_bit(XPRT_SND_IS_COOKIE, &xprt->state))
+ 		rpc_wake_up_queued_task_set_status(&xprt->pending,
+-				xprt->snd_task, -ENOTCONN);
++						   xprt->snd_task, -ENOTCONN);
+ 	spin_unlock(&xprt->transport_lock);
+ }
+ EXPORT_SYMBOL_GPL(xprt_force_disconnect);
+@@ -837,12 +837,14 @@ bool xprt_lock_connect(struct rpc_xprt *xprt,
+ 		goto out;
+ 	if (xprt->snd_task != task)
+ 		goto out;
++	set_bit(XPRT_SND_IS_COOKIE, &xprt->state);
+ 	xprt->snd_task = cookie;
+ 	ret = true;
+ out:
+ 	spin_unlock(&xprt->transport_lock);
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(xprt_lock_connect);
+ 
+ void xprt_unlock_connect(struct rpc_xprt *xprt, void *cookie)
+ {
+@@ -852,12 +854,14 @@ void xprt_unlock_connect(struct rpc_xprt *xprt, void *cookie)
+ 	if (!test_bit(XPRT_LOCKED, &xprt->state))
+ 		goto out;
+ 	xprt->snd_task =NULL;
++	clear_bit(XPRT_SND_IS_COOKIE, &xprt->state);
+ 	xprt->ops->release_xprt(xprt, NULL);
+ 	xprt_schedule_autodisconnect(xprt);
+ out:
+ 	spin_unlock(&xprt->transport_lock);
+ 	wake_up_bit(&xprt->state, XPRT_LOCKED);
+ }
++EXPORT_SYMBOL_GPL(xprt_unlock_connect);
+ 
+ /**
+  * xprt_connect - schedule a transport connect operation
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index c26db0a379967..8e2368a0c2a29 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -249,12 +249,9 @@ xprt_rdma_connect_worker(struct work_struct *work)
+ 					   xprt->stat.connect_start;
+ 		xprt_set_connected(xprt);
+ 		rc = -EAGAIN;
+-	} else {
+-		/* Force a call to xprt_rdma_close to clean up */
+-		spin_lock(&xprt->transport_lock);
+-		set_bit(XPRT_CLOSE_WAIT, &xprt->state);
+-		spin_unlock(&xprt->transport_lock);
+-	}
++	} else
++		rpcrdma_xprt_disconnect(r_xprt);
++	xprt_unlock_connect(xprt, r_xprt);
+ 	xprt_wake_pending_tasks(xprt, rc);
+ }
+ 
+@@ -487,6 +484,8 @@ xprt_rdma_connect(struct rpc_xprt *xprt, struct rpc_task *task)
+ 	struct rpcrdma_ep *ep = r_xprt->rx_ep;
+ 	unsigned long delay;
+ 
++	WARN_ON_ONCE(!xprt_lock_connect(xprt, task, r_xprt));
++
+ 	delay = 0;
+ 	if (ep && ep->re_connect_status != 0) {
+ 		delay = xprt_reconnect_delay(xprt);
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 9c0f71e82d978..16c7758e7bf30 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1639,6 +1639,13 @@ static int xs_get_srcport(struct sock_xprt *transport)
+ 	return port;
+ }
+ 
++unsigned short get_srcport(struct rpc_xprt *xprt)
++{
++	struct sock_xprt *sock = container_of(xprt, struct sock_xprt, xprt);
++	return xs_sock_getport(sock->sock);
++}
++EXPORT_SYMBOL(get_srcport);
++
+ static unsigned short xs_next_srcport(struct sock_xprt *transport, unsigned short port)
+ {
+ 	if (transport->srcport != 0)
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 9bd72468bc68e..963047c57c27b 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1887,6 +1887,7 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 	bool connected = !tipc_sk_type_connectionless(sk);
+ 	struct tipc_sock *tsk = tipc_sk(sk);
+ 	int rc, err, hlen, dlen, copy;
++	struct tipc_skb_cb *skb_cb;
+ 	struct sk_buff_head xmitq;
+ 	struct tipc_msg *hdr;
+ 	struct sk_buff *skb;
+@@ -1910,6 +1911,7 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 		if (unlikely(rc))
+ 			goto exit;
+ 		skb = skb_peek(&sk->sk_receive_queue);
++		skb_cb = TIPC_SKB_CB(skb);
+ 		hdr = buf_msg(skb);
+ 		dlen = msg_data_sz(hdr);
+ 		hlen = msg_hdr_sz(hdr);
+@@ -1929,18 +1931,33 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 
+ 	/* Capture data if non-error msg, otherwise just set return value */
+ 	if (likely(!err)) {
+-		copy = min_t(int, dlen, buflen);
+-		if (unlikely(copy != dlen))
+-			m->msg_flags |= MSG_TRUNC;
+-		rc = skb_copy_datagram_msg(skb, hlen, m, copy);
++		int offset = skb_cb->bytes_read;
++
++		copy = min_t(int, dlen - offset, buflen);
++		rc = skb_copy_datagram_msg(skb, hlen + offset, m, copy);
++		if (unlikely(rc))
++			goto exit;
++		if (unlikely(offset + copy < dlen)) {
++			if (flags & MSG_EOR) {
++				if (!(flags & MSG_PEEK))
++					skb_cb->bytes_read = offset + copy;
++			} else {
++				m->msg_flags |= MSG_TRUNC;
++				skb_cb->bytes_read = 0;
++			}
++		} else {
++			if (flags & MSG_EOR)
++				m->msg_flags |= MSG_EOR;
++			skb_cb->bytes_read = 0;
++		}
+ 	} else {
+ 		copy = 0;
+ 		rc = 0;
+-		if (err != TIPC_CONN_SHUTDOWN && connected && !m->msg_control)
++		if (err != TIPC_CONN_SHUTDOWN && connected && !m->msg_control) {
+ 			rc = -ECONNRESET;
++			goto exit;
++		}
+ 	}
+-	if (unlikely(rc))
+-		goto exit;
+ 
+ 	/* Mark message as group event if applicable */
+ 	if (unlikely(grp_evt)) {
+@@ -1963,9 +1980,10 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 		tipc_node_distr_xmit(sock_net(sk), &xmitq);
+ 	}
+ 
+-	tsk_advance_rx_queue(sk);
++	if (!skb_cb->bytes_read)
++		tsk_advance_rx_queue(sk);
+ 
+-	if (likely(!connected))
++	if (likely(!connected) || skb_cb->bytes_read)
+ 		goto exit;
+ 
+ 	/* Send connection flow control advertisement when applicable */
+diff --git a/samples/bpf/test_override_return.sh b/samples/bpf/test_override_return.sh
+index e68b9ee6814b8..35db26f736b9d 100755
+--- a/samples/bpf/test_override_return.sh
++++ b/samples/bpf/test_override_return.sh
+@@ -1,5 +1,6 @@
+ #!/bin/bash
+ 
++rm -r tmpmnt
+ rm -f testfile.img
+ dd if=/dev/zero of=testfile.img bs=1M seek=1000 count=1
+ DEVICE=$(losetup --show -f testfile.img)
+diff --git a/samples/bpf/tracex7_user.c b/samples/bpf/tracex7_user.c
+index fdcd6580dd736..8be7ce18d3ba0 100644
+--- a/samples/bpf/tracex7_user.c
++++ b/samples/bpf/tracex7_user.c
+@@ -14,6 +14,11 @@ int main(int argc, char **argv)
+ 	int ret = 0;
+ 	FILE *f;
+ 
++	if (!argv[1]) {
++		fprintf(stderr, "ERROR: Run with the btrfs device argument!\n");
++		return 0;
++	}
++
+ 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
+ 	obj = bpf_object__open_file(filename, NULL);
+ 	if (libbpf_get_error(obj)) {
+diff --git a/scripts/gen_ksymdeps.sh b/scripts/gen_ksymdeps.sh
+index 1324986e1362c..725e8c9c1b53f 100755
+--- a/scripts/gen_ksymdeps.sh
++++ b/scripts/gen_ksymdeps.sh
+@@ -4,7 +4,13 @@
+ set -e
+ 
+ # List of exported symbols
+-ksyms=$($NM $1 | sed -n 's/.*__ksym_marker_\(.*\)/\1/p' | tr A-Z a-z)
++#
++# If the object has no symbol, $NM warns 'no symbols'.
++# Suppress the stderr.
++# TODO:
++#   Use -q instead of 2>/dev/null when we upgrade the minimum version of
++#   binutils to 2.37, llvm to 13.0.0.
++ksyms=$($NM $1 2>/dev/null | sed -n 's/.*__ksym_marker_\(.*\)/\1/p' | tr A-Z a-z)
+ 
+ if [ -z "$ksyms" ]; then
+ 	exit 0
+diff --git a/security/smack/smack_access.c b/security/smack/smack_access.c
+index 7eabb448acab4..169929c6c4eb3 100644
+--- a/security/smack/smack_access.c
++++ b/security/smack/smack_access.c
+@@ -81,23 +81,22 @@ int log_policy = SMACK_AUDIT_DENIED;
+ int smk_access_entry(char *subject_label, char *object_label,
+ 			struct list_head *rule_list)
+ {
+-	int may = -ENOENT;
+ 	struct smack_rule *srp;
+ 
+ 	list_for_each_entry_rcu(srp, rule_list, list) {
+ 		if (srp->smk_object->smk_known == object_label &&
+ 		    srp->smk_subject->smk_known == subject_label) {
+-			may = srp->smk_access;
+-			break;
++			int may = srp->smk_access;
++			/*
++			 * MAY_WRITE implies MAY_LOCK.
++			 */
++			if ((may & MAY_WRITE) == MAY_WRITE)
++				may |= MAY_LOCK;
++			return may;
+ 		}
+ 	}
+ 
+-	/*
+-	 * MAY_WRITE implies MAY_LOCK.
+-	 */
+-	if ((may & MAY_WRITE) == MAY_WRITE)
+-		may |= MAY_LOCK;
+-	return may;
++	return -ENOENT;
+ }
+ 
+ /**
+diff --git a/sound/soc/atmel/Kconfig b/sound/soc/atmel/Kconfig
+index 142373ec411ad..89210048e6c2b 100644
+--- a/sound/soc/atmel/Kconfig
++++ b/sound/soc/atmel/Kconfig
+@@ -11,7 +11,6 @@ if SND_ATMEL_SOC
+ 
+ config SND_ATMEL_SOC_PDC
+ 	bool
+-	depends on HAS_DMA
+ 
+ config SND_ATMEL_SOC_DMA
+ 	bool
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index ca14730232ba9..43ee3d095a1be 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -286,9 +286,6 @@ static const struct snd_soc_dapm_widget byt_rt5640_widgets[] = {
+ static const struct snd_soc_dapm_route byt_rt5640_audio_map[] = {
+ 	{"Headphone", NULL, "Platform Clock"},
+ 	{"Headset Mic", NULL, "Platform Clock"},
+-	{"Internal Mic", NULL, "Platform Clock"},
+-	{"Speaker", NULL, "Platform Clock"},
+-
+ 	{"Headset Mic", NULL, "MICBIAS1"},
+ 	{"IN2P", NULL, "Headset Mic"},
+ 	{"Headphone", NULL, "HPOL"},
+@@ -296,19 +293,23 @@ static const struct snd_soc_dapm_route byt_rt5640_audio_map[] = {
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_intmic_dmic1_map[] = {
++	{"Internal Mic", NULL, "Platform Clock"},
+ 	{"DMIC1", NULL, "Internal Mic"},
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_intmic_dmic2_map[] = {
++	{"Internal Mic", NULL, "Platform Clock"},
+ 	{"DMIC2", NULL, "Internal Mic"},
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_intmic_in1_map[] = {
++	{"Internal Mic", NULL, "Platform Clock"},
+ 	{"Internal Mic", NULL, "MICBIAS1"},
+ 	{"IN1P", NULL, "Internal Mic"},
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_intmic_in3_map[] = {
++	{"Internal Mic", NULL, "Platform Clock"},
+ 	{"Internal Mic", NULL, "MICBIAS1"},
+ 	{"IN3P", NULL, "Internal Mic"},
+ };
+@@ -350,6 +351,7 @@ static const struct snd_soc_dapm_route byt_rt5640_ssp0_aif2_map[] = {
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_stereo_spk_map[] = {
++	{"Speaker", NULL, "Platform Clock"},
+ 	{"Speaker", NULL, "SPOLP"},
+ 	{"Speaker", NULL, "SPOLN"},
+ 	{"Speaker", NULL, "SPORP"},
+@@ -357,6 +359,7 @@ static const struct snd_soc_dapm_route byt_rt5640_stereo_spk_map[] = {
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_mono_spk_map[] = {
++	{"Speaker", NULL, "Platform Clock"},
+ 	{"Speaker", NULL, "SPOLP"},
+ 	{"Speaker", NULL, "SPOLN"},
+ };
+diff --git a/sound/soc/intel/boards/sof_pcm512x.c b/sound/soc/intel/boards/sof_pcm512x.c
+index d2b0456236c72..bdd671f07659c 100644
+--- a/sound/soc/intel/boards/sof_pcm512x.c
++++ b/sound/soc/intel/boards/sof_pcm512x.c
+@@ -26,11 +26,16 @@
+ 
+ #define SOF_PCM512X_SSP_CODEC(quirk)		((quirk) & GENMASK(3, 0))
+ #define SOF_PCM512X_SSP_CODEC_MASK			(GENMASK(3, 0))
++#define SOF_PCM512X_ENABLE_SSP_CAPTURE		BIT(4)
++#define SOF_PCM512X_ENABLE_DMIC			BIT(5)
+ 
+ #define IDISP_CODEC_MASK	0x4
+ 
+ /* Default: SSP5 */
+-static unsigned long sof_pcm512x_quirk = SOF_PCM512X_SSP_CODEC(5);
++static unsigned long sof_pcm512x_quirk =
++	SOF_PCM512X_SSP_CODEC(5) |
++	SOF_PCM512X_ENABLE_SSP_CAPTURE |
++	SOF_PCM512X_ENABLE_DMIC;
+ 
+ static bool is_legacy_cpu;
+ 
+@@ -245,8 +250,9 @@ static struct snd_soc_dai_link *sof_card_dai_links_create(struct device *dev,
+ 	links[id].dpcm_playback = 1;
+ 	/*
+ 	 * capture only supported with specific versions of the Hifiberry DAC+
+-	 * links[id].dpcm_capture = 1;
+ 	 */
++	if (sof_pcm512x_quirk & SOF_PCM512X_ENABLE_SSP_CAPTURE)
++		links[id].dpcm_capture = 1;
+ 	links[id].no_pcm = 1;
+ 	links[id].cpus = &cpus[id];
+ 	links[id].num_cpus = 1;
+@@ -381,6 +387,9 @@ static int sof_audio_probe(struct platform_device *pdev)
+ 
+ 	ssp_codec = sof_pcm512x_quirk & SOF_PCM512X_SSP_CODEC_MASK;
+ 
++	if (!(sof_pcm512x_quirk & SOF_PCM512X_ENABLE_DMIC))
++		dmic_be_num = 0;
++
+ 	/* compute number of dai links */
+ 	sof_audio_card_pcm512x.num_links = 1 + dmic_be_num + hdmi_num;
+ 
+diff --git a/sound/soc/intel/skylake/skl-messages.c b/sound/soc/intel/skylake/skl-messages.c
+index 476ef1897961d..79c6cf2c14bfb 100644
+--- a/sound/soc/intel/skylake/skl-messages.c
++++ b/sound/soc/intel/skylake/skl-messages.c
+@@ -802,9 +802,12 @@ static u16 skl_get_module_param_size(struct skl_dev *skl,
+ 
+ 	case SKL_MODULE_TYPE_BASE_OUTFMT:
+ 	case SKL_MODULE_TYPE_MIC_SELECT:
+-	case SKL_MODULE_TYPE_KPB:
+ 		return sizeof(struct skl_base_outfmt_cfg);
+ 
++	case SKL_MODULE_TYPE_MIXER:
++	case SKL_MODULE_TYPE_KPB:
++		return sizeof(struct skl_base_cfg);
++
+ 	default:
+ 		/*
+ 		 * return only base cfg when no specific module type is
+@@ -857,10 +860,14 @@ static int skl_set_module_format(struct skl_dev *skl,
+ 
+ 	case SKL_MODULE_TYPE_BASE_OUTFMT:
+ 	case SKL_MODULE_TYPE_MIC_SELECT:
+-	case SKL_MODULE_TYPE_KPB:
+ 		skl_set_base_outfmt_format(skl, module_config, *param_data);
+ 		break;
+ 
++	case SKL_MODULE_TYPE_MIXER:
++	case SKL_MODULE_TYPE_KPB:
++		skl_set_base_module_format(skl, module_config, *param_data);
++		break;
++
+ 	default:
+ 		skl_set_base_module_format(skl, module_config, *param_data);
+ 		break;
+diff --git a/sound/soc/intel/skylake/skl-pcm.c b/sound/soc/intel/skylake/skl-pcm.c
+index bbe8d782e0af6..b1897a057397d 100644
+--- a/sound/soc/intel/skylake/skl-pcm.c
++++ b/sound/soc/intel/skylake/skl-pcm.c
+@@ -1318,21 +1318,6 @@ static int skl_get_module_info(struct skl_dev *skl,
+ 		return -EIO;
+ 	}
+ 
+-	list_for_each_entry(module, &skl->uuid_list, list) {
+-		if (guid_equal(uuid_mod, &module->uuid)) {
+-			mconfig->id.module_id = module->id;
+-			if (mconfig->module)
+-				mconfig->module->loadable = module->is_loadable;
+-			ret = 0;
+-			break;
+-		}
+-	}
+-
+-	if (ret)
+-		return ret;
+-
+-	uuid_mod = &module->uuid;
+-	ret = -EIO;
+ 	for (i = 0; i < skl->nr_modules; i++) {
+ 		skl_module = skl->modules[i];
+ 		uuid_tplg = &skl_module->uuid;
+@@ -1342,10 +1327,18 @@ static int skl_get_module_info(struct skl_dev *skl,
+ 			break;
+ 		}
+ 	}
++
+ 	if (skl->nr_modules && ret)
+ 		return ret;
+ 
++	ret = -EIO;
+ 	list_for_each_entry(module, &skl->uuid_list, list) {
++		if (guid_equal(uuid_mod, &module->uuid)) {
++			mconfig->id.module_id = module->id;
++			mconfig->module->loadable = module->is_loadable;
++			ret = 0;
++		}
++
+ 		for (i = 0; i < MAX_IN_QUEUE; i++) {
+ 			pin_id = &mconfig->m_in_pin[i].id;
+ 			if (guid_equal(&pin_id->mod_uuid, &module->uuid))
+@@ -1359,7 +1352,7 @@ static int skl_get_module_info(struct skl_dev *skl,
+ 		}
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int skl_populate_modules(struct skl_dev *skl)
+diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c
+index 593299675b8c7..fa84ec695b525 100644
+--- a/sound/soc/rockchip/rockchip_i2s.c
++++ b/sound/soc/rockchip/rockchip_i2s.c
+@@ -186,7 +186,9 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ {
+ 	struct rk_i2s_dev *i2s = to_info(cpu_dai);
+ 	unsigned int mask = 0, val = 0;
++	int ret = 0;
+ 
++	pm_runtime_get_sync(cpu_dai->dev);
+ 	mask = I2S_CKR_MSS_MASK;
+ 	switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
+ 	case SND_SOC_DAIFMT_CBS_CFS:
+@@ -199,7 +201,8 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 		i2s->is_master_mode = false;
+ 		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm_put;
+ 	}
+ 
+ 	regmap_update_bits(i2s->regmap, I2S_CKR, mask, val);
+@@ -213,7 +216,8 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 		val = I2S_CKR_CKP_POS;
+ 		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm_put;
+ 	}
+ 
+ 	regmap_update_bits(i2s->regmap, I2S_CKR, mask, val);
+@@ -229,14 +233,15 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 	case SND_SOC_DAIFMT_I2S:
+ 		val = I2S_TXCR_IBM_NORMAL;
+ 		break;
+-	case SND_SOC_DAIFMT_DSP_A: /* PCM no delay mode */
+-		val = I2S_TXCR_TFS_PCM;
+-		break;
+-	case SND_SOC_DAIFMT_DSP_B: /* PCM delay 1 mode */
++	case SND_SOC_DAIFMT_DSP_A: /* PCM delay 1 bit mode */
+ 		val = I2S_TXCR_TFS_PCM | I2S_TXCR_PBM_MODE(1);
+ 		break;
++	case SND_SOC_DAIFMT_DSP_B: /* PCM no delay mode */
++		val = I2S_TXCR_TFS_PCM;
++		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm_put;
+ 	}
+ 
+ 	regmap_update_bits(i2s->regmap, I2S_TXCR, mask, val);
+@@ -252,19 +257,23 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 	case SND_SOC_DAIFMT_I2S:
+ 		val = I2S_RXCR_IBM_NORMAL;
+ 		break;
+-	case SND_SOC_DAIFMT_DSP_A: /* PCM no delay mode */
+-		val = I2S_RXCR_TFS_PCM;
+-		break;
+-	case SND_SOC_DAIFMT_DSP_B: /* PCM delay 1 mode */
++	case SND_SOC_DAIFMT_DSP_A: /* PCM delay 1 bit mode */
+ 		val = I2S_RXCR_TFS_PCM | I2S_RXCR_PBM_MODE(1);
+ 		break;
++	case SND_SOC_DAIFMT_DSP_B: /* PCM no delay mode */
++		val = I2S_RXCR_TFS_PCM;
++		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm_put;
+ 	}
+ 
+ 	regmap_update_bits(i2s->regmap, I2S_RXCR, mask, val);
+ 
+-	return 0;
++err_pm_put:
++	pm_runtime_put(cpu_dai->dev);
++
++	return ret;
+ }
+ 
+ static int rockchip_i2s_hw_params(struct snd_pcm_substream *substream,
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 28923b776cdc8..b337d6f29098b 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -3613,6 +3613,42 @@ static int bpf_map_find_btf_info(struct bpf_object *obj, struct bpf_map *map)
+ 	return 0;
+ }
+ 
++static int bpf_get_map_info_from_fdinfo(int fd, struct bpf_map_info *info)
++{
++	char file[PATH_MAX], buff[4096];
++	FILE *fp;
++	__u32 val;
++	int err;
++
++	snprintf(file, sizeof(file), "/proc/%d/fdinfo/%d", getpid(), fd);
++	memset(info, 0, sizeof(*info));
++
++	fp = fopen(file, "r");
++	if (!fp) {
++		err = -errno;
++		pr_warn("failed to open %s: %d. No procfs support?\n", file,
++			err);
++		return err;
++	}
++
++	while (fgets(buff, sizeof(buff), fp)) {
++		if (sscanf(buff, "map_type:\t%u", &val) == 1)
++			info->type = val;
++		else if (sscanf(buff, "key_size:\t%u", &val) == 1)
++			info->key_size = val;
++		else if (sscanf(buff, "value_size:\t%u", &val) == 1)
++			info->value_size = val;
++		else if (sscanf(buff, "max_entries:\t%u", &val) == 1)
++			info->max_entries = val;
++		else if (sscanf(buff, "map_flags:\t%i", &val) == 1)
++			info->map_flags = val;
++	}
++
++	fclose(fp);
++
++	return 0;
++}
++
+ int bpf_map__reuse_fd(struct bpf_map *map, int fd)
+ {
+ 	struct bpf_map_info info = {};
+@@ -3621,6 +3657,8 @@ int bpf_map__reuse_fd(struct bpf_map *map, int fd)
+ 	char *new_name;
+ 
+ 	err = bpf_obj_get_info_by_fd(fd, &info, &len);
++	if (err && errno == EINVAL)
++		err = bpf_get_map_info_from_fdinfo(fd, &info);
+ 	if (err)
+ 		return err;
+ 
+@@ -4032,12 +4070,16 @@ static bool map_is_reuse_compat(const struct bpf_map *map, int map_fd)
+ 	struct bpf_map_info map_info = {};
+ 	char msg[STRERR_BUFSIZE];
+ 	__u32 map_info_len;
++	int err;
+ 
+ 	map_info_len = sizeof(map_info);
+ 
+-	if (bpf_obj_get_info_by_fd(map_fd, &map_info, &map_info_len)) {
+-		pr_warn("failed to get map info for map FD %d: %s\n",
+-			map_fd, libbpf_strerror_r(errno, msg, sizeof(msg)));
++	err = bpf_obj_get_info_by_fd(map_fd, &map_info, &map_info_len);
++	if (err && errno == EINVAL)
++		err = bpf_get_map_info_from_fdinfo(map_fd, &map_info);
++	if (err) {
++		pr_warn("failed to get map info for map FD %d: %s\n", map_fd,
++			libbpf_strerror_r(errno, msg, sizeof(msg)));
+ 		return false;
+ 	}
+ 
+@@ -4242,10 +4284,13 @@ bpf_object__create_maps(struct bpf_object *obj)
+ 	char *cp, errmsg[STRERR_BUFSIZE];
+ 	unsigned int i, j;
+ 	int err;
++	bool retried;
+ 
+ 	for (i = 0; i < obj->nr_maps; i++) {
+ 		map = &obj->maps[i];
+ 
++		retried = false;
++retry:
+ 		if (map->pin_path) {
+ 			err = bpf_object__reuse_map(map);
+ 			if (err) {
+@@ -4253,6 +4298,12 @@ bpf_object__create_maps(struct bpf_object *obj)
+ 					map->name);
+ 				goto err_out;
+ 			}
++			if (retried && map->fd < 0) {
++				pr_warn("map '%s': cannot find pinned map\n",
++					map->name);
++				err = -ENOENT;
++				goto err_out;
++			}
+ 		}
+ 
+ 		if (map->fd >= 0) {
+@@ -4286,9 +4337,13 @@ bpf_object__create_maps(struct bpf_object *obj)
+ 		if (map->pin_path && !map->pinned) {
+ 			err = bpf_map__pin(map, NULL);
+ 			if (err) {
++				zclose(map->fd);
++				if (!retried && err == -EEXIST) {
++					retried = true;
++					goto retry;
++				}
+ 				pr_warn("map '%s': failed to auto-pin at '%s': %d\n",
+ 					map->name, map->pin_path, err);
+-				zclose(map->fd);
+ 				goto err_out;
+ 			}
+ 		}
+diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
+index 70665ba88cbb1..2703bd628d06c 100644
+--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
++++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
+@@ -285,7 +285,7 @@ int mte_default_setup(void)
+ 	int ret;
+ 
+ 	if (!(hwcaps2 & HWCAP2_MTE)) {
+-		ksft_print_msg("FAIL: MTE features unavailable\n");
++		ksft_print_msg("SKIP: MTE features unavailable\n");
+ 		return KSFT_SKIP;
+ 	}
+ 	/* Get current mte mode */
+diff --git a/tools/testing/selftests/arm64/pauth/pac.c b/tools/testing/selftests/arm64/pauth/pac.c
+index 592fe538506e3..b743daa772f55 100644
+--- a/tools/testing/selftests/arm64/pauth/pac.c
++++ b/tools/testing/selftests/arm64/pauth/pac.c
+@@ -25,13 +25,15 @@
+ do { \
+ 	unsigned long hwcaps = getauxval(AT_HWCAP); \
+ 	/* data key instructions are not in NOP space. This prevents a SIGILL */ \
+-	ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled"); \
++	if (!(hwcaps & HWCAP_PACA))					\
++		SKIP(return, "PAUTH not enabled"); \
+ } while (0)
+ #define ASSERT_GENERIC_PAUTH_ENABLED() \
+ do { \
+ 	unsigned long hwcaps = getauxval(AT_HWCAP); \
+ 	/* generic key instructions are not in NOP space. This prevents a SIGILL */ \
+-	ASSERT_NE(0, hwcaps & HWCAP_PACG) TH_LOG("Generic PAUTH not enabled"); \
++	if (!(hwcaps & HWCAP_PACG)) \
++		SKIP(return, "Generic PAUTH not enabled");	\
+ } while (0)
+ 
+ void sign_specific(struct signatures *sign, size_t val)
+@@ -256,7 +258,7 @@ TEST(single_thread_different_keys)
+ 	unsigned long hwcaps = getauxval(AT_HWCAP);
+ 
+ 	/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
+-	ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
++	ASSERT_PAUTH_ENABLED();
+ 	if (!(hwcaps & HWCAP_PACG)) {
+ 		TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
+ 		nkeys = NKEYS - 1;
+@@ -299,7 +301,7 @@ TEST(exec_changed_keys)
+ 	unsigned long hwcaps = getauxval(AT_HWCAP);
+ 
+ 	/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
+-	ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
++	ASSERT_PAUTH_ENABLED();
+ 	if (!(hwcaps & HWCAP_PACG)) {
+ 		TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
+ 		nkeys = NKEYS - 1;
+diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
+index 7043e6ded0e60..75b72c751772b 100644
+--- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
++++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
+@@ -1,5 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <test_progs.h>
++#include <sys/time.h>
++#include <sys/resource.h>
+ #include "test_send_signal_kern.skel.h"
+ 
+ static volatile int sigusr1_received = 0;
+@@ -41,12 +43,23 @@ static void test_send_signal_common(struct perf_event_attr *attr,
+ 	}
+ 
+ 	if (pid == 0) {
++		int old_prio;
++
+ 		/* install signal handler and notify parent */
+ 		signal(SIGUSR1, sigusr1_handler);
+ 
+ 		close(pipe_c2p[0]); /* close read */
+ 		close(pipe_p2c[1]); /* close write */
+ 
++		/* boost with a high priority so we got a higher chance
++		 * that if an interrupt happens, the underlying task
++		 * is this process.
++		 */
++		errno = 0;
++		old_prio = getpriority(PRIO_PROCESS, 0);
++		ASSERT_OK(errno, "getpriority");
++		ASSERT_OK(setpriority(PRIO_PROCESS, 0, -20), "setpriority");
++
+ 		/* notify parent signal handler is installed */
+ 		CHECK(write(pipe_c2p[1], buf, 1) != 1, "pipe_write", "err %d\n", -errno);
+ 
+@@ -62,6 +75,9 @@ static void test_send_signal_common(struct perf_event_attr *attr,
+ 		/* wait for parent notification and exit */
+ 		CHECK(read(pipe_p2c[0], buf, 1) != 1, "pipe_read", "err %d\n", -errno);
+ 
++		/* restore the old priority */
++		ASSERT_OK(setpriority(PRIO_PROCESS, 0, old_prio), "setpriority");
++
+ 		close(pipe_c2p[1]);
+ 		close(pipe_p2c[0]);
+ 		exit(0);
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c b/tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c
+index ec281b0363b82..86f97681ad898 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c
+@@ -195,8 +195,10 @@ static void run_test(int cgroup_fd)
+ 
+ 	pthread_mutex_lock(&server_started_mtx);
+ 	if (CHECK_FAIL(pthread_create(&tid, NULL, server_thread,
+-				      (void *)&server_fd)))
++				      (void *)&server_fd))) {
++		pthread_mutex_unlock(&server_started_mtx);
+ 		goto close_server_fd;
++	}
+ 	pthread_cond_wait(&server_started, &server_started_mtx);
+ 	pthread_mutex_unlock(&server_started_mtx);
+ 
+diff --git a/tools/testing/selftests/bpf/progs/xdp_tx.c b/tools/testing/selftests/bpf/progs/xdp_tx.c
+index 94e6c2b281cb6..5f725c720e008 100644
+--- a/tools/testing/selftests/bpf/progs/xdp_tx.c
++++ b/tools/testing/selftests/bpf/progs/xdp_tx.c
+@@ -3,7 +3,7 @@
+ #include <linux/bpf.h>
+ #include <bpf/bpf_helpers.h>
+ 
+-SEC("tx")
++SEC("xdp")
+ int xdp_tx(struct xdp_md *xdp)
+ {
+ 	return XDP_TX;
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index 0d92ebcb335d1..179e680e8d134 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -968,7 +968,7 @@ static void test_sockmap(unsigned int tasks, void *data)
+ 
+ 		FD_ZERO(&w);
+ 		FD_SET(sfd[3], &w);
+-		to.tv_sec = 1;
++		to.tv_sec = 30;
+ 		to.tv_usec = 0;
+ 		s = select(sfd[3] + 1, &w, NULL, NULL, &to);
+ 		if (s == -1) {
+diff --git a/tools/testing/selftests/bpf/test_xdp_veth.sh b/tools/testing/selftests/bpf/test_xdp_veth.sh
+index ba8ffcdaac302..995278e684b6e 100755
+--- a/tools/testing/selftests/bpf/test_xdp_veth.sh
++++ b/tools/testing/selftests/bpf/test_xdp_veth.sh
+@@ -108,7 +108,7 @@ ip link set dev veth2 xdp pinned $BPF_DIR/progs/redirect_map_1
+ ip link set dev veth3 xdp pinned $BPF_DIR/progs/redirect_map_2
+ 
+ ip -n ns1 link set dev veth11 xdp obj xdp_dummy.o sec xdp_dummy
+-ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec tx
++ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec xdp
+ ip -n ns3 link set dev veth33 xdp obj xdp_dummy.o sec xdp_dummy
+ 
+ trap cleanup EXIT
+diff --git a/tools/testing/selftests/firmware/fw_namespace.c b/tools/testing/selftests/firmware/fw_namespace.c
+index 5ebc1aec7923b..817b2f1e8ee6a 100644
+--- a/tools/testing/selftests/firmware/fw_namespace.c
++++ b/tools/testing/selftests/firmware/fw_namespace.c
+@@ -129,7 +129,8 @@ int main(int argc, char **argv)
+ 		die("mounting tmpfs to /lib/firmware failed\n");
+ 
+ 	sys_path = argv[1];
+-	asprintf(&fw_path, "/lib/firmware/%s", fw_name);
++	if (asprintf(&fw_path, "/lib/firmware/%s", fw_name) < 0)
++		die("error: failed to build full fw_path\n");
+ 
+ 	setup_fw(fw_path);
+ 
+diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
+index a6fac927ee82f..0cee6b067a374 100644
+--- a/tools/testing/selftests/ftrace/test.d/functions
++++ b/tools/testing/selftests/ftrace/test.d/functions
+@@ -115,7 +115,7 @@ check_requires() { # Check required files and tracers
+                 echo "Required tracer $t is not configured."
+                 exit_unsupported
+             fi
+-        elif [ $r != $i ]; then
++        elif [ "$r" != "$i" ]; then
+             if ! grep -Fq "$r" README ; then
+                 echo "Required feature pattern \"$r\" is not in README."
+                 exit_unsupported
+diff --git a/tools/thermal/tmon/Makefile b/tools/thermal/tmon/Makefile
+index 59e417ec3e134..25d7f8f37cfd6 100644
+--- a/tools/thermal/tmon/Makefile
++++ b/tools/thermal/tmon/Makefile
+@@ -10,7 +10,7 @@ override CFLAGS+= $(call cc-option,-O3,-O1) ${WARNFLAGS}
+ # Add "-fstack-protector" only if toolchain supports it.
+ override CFLAGS+= $(call cc-option,-fstack-protector-strong)
+ CC?= $(CROSS_COMPILE)gcc
+-PKG_CONFIG?= pkg-config
++PKG_CONFIG?= $(CROSS_COMPILE)pkg-config
+ 
+ override CFLAGS+=-D VERSION=\"$(VERSION)\"
+ LDFLAGS+=


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-20 22:02 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-09-20 22:02 UTC (permalink / raw
  To: gentoo-commits

commit:     4acad070ccb07d349bc17d54a57baa1b1946097b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep 20 21:57:57 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep 20 22:01:59 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4acad070

Move USER_NS to GENTOO_LINUX_PORTAGE

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index d2175f0..74e80d3 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -65,6 +65,7 @@
 +	select NET_NS
 +	select PID_NS
 +	select SYSVIPC
++	select USER_NS
 +	select UTS_NS
 +
 +	help
@@ -145,7 +146,6 @@
 +	select TIMERFD
 +	select TMPFS_POSIX_ACL
 +	select TMPFS_XATTR
-+	select USER_NS
 +
 +	select ANON_INODES
 +	select BLOCK


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-22 11:38 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-09-22 11:38 UTC (permalink / raw
  To: gentoo-commits

commit:     e9a4a8d320e321ce780a3598be19da864ea4a595
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 22 11:38:14 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 22 11:38:14 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e9a4a8d3

Linux patch 5.10.68

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1067_linux-5.10.68.patch | 3868 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3872 insertions(+)

diff --git a/0000_README b/0000_README
index 20bba3a..416061d 100644
--- a/0000_README
+++ b/0000_README
@@ -311,6 +311,10 @@ Patch:  1066_linux-5.10.67.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.67
 
+Patch:  1067_linux-5.10.68.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.68
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1067_linux-5.10.68.patch b/1067_linux-5.10.68.patch
new file mode 100644
index 0000000..7a0e47b
--- /dev/null
+++ b/1067_linux-5.10.68.patch
@@ -0,0 +1,3868 @@
+diff --git a/Documentation/devicetree/bindings/arm/tegra.yaml b/Documentation/devicetree/bindings/arm/tegra.yaml
+index 767e86354c8e9..2c6911c775c8e 100644
+--- a/Documentation/devicetree/bindings/arm/tegra.yaml
++++ b/Documentation/devicetree/bindings/arm/tegra.yaml
+@@ -54,7 +54,7 @@ properties:
+           - const: toradex,apalis_t30
+           - const: nvidia,tegra30
+       - items:
+-          - const: toradex,apalis_t30-eval-v1.1
++          - const: toradex,apalis_t30-v1.1-eval
+           - const: toradex,apalis_t30-eval
+           - const: toradex,apalis_t30-v1.1
+           - const: toradex,apalis_t30
+diff --git a/Documentation/devicetree/bindings/mtd/gpmc-nand.txt b/Documentation/devicetree/bindings/mtd/gpmc-nand.txt
+index 44919d48d2415..c459f169a9044 100644
+--- a/Documentation/devicetree/bindings/mtd/gpmc-nand.txt
++++ b/Documentation/devicetree/bindings/mtd/gpmc-nand.txt
+@@ -122,7 +122,7 @@ on various other factors also like;
+ 	so the device should have enough free bytes available its OOB/Spare
+ 	area to accommodate ECC for entire page. In general following expression
+ 	helps in determining if given device can accommodate ECC syndrome:
+-	"2 + (PAGESIZE / 512) * ECC_BYTES" >= OOBSIZE"
++	"2 + (PAGESIZE / 512) * ECC_BYTES" <= OOBSIZE"
+ 	where
+ 		OOBSIZE		number of bytes in OOB/spare area
+ 		PAGESIZE	number of bytes in main-area of device page
+diff --git a/Makefile b/Makefile
+index a47273ecfdf21..e50581c9db50e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 67
++SUBLEVEL = 68
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
+index a2fbea3ee07c7..102418ac5ff4a 100644
+--- a/arch/arc/mm/cache.c
++++ b/arch/arc/mm/cache.c
+@@ -1123,7 +1123,7 @@ void clear_user_page(void *to, unsigned long u_vaddr, struct page *page)
+ 	clear_page(to);
+ 	clear_bit(PG_dc_clean, &page->flags);
+ }
+-
++EXPORT_SYMBOL(clear_user_page);
+ 
+ /**********************************************************************
+  * Explicit Cache flush request from user space via syscall
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index 062b21f30f942..a9bbfb800ec2b 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -510,7 +510,7 @@ size_t sve_state_size(struct task_struct const *task)
+ void sve_alloc(struct task_struct *task)
+ {
+ 	if (task->thread.sve_state) {
+-		memset(task->thread.sve_state, 0, sve_state_size(current));
++		memset(task->thread.sve_state, 0, sve_state_size(task));
+ 		return;
+ 	}
+ 
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 5e5dd99e8cee8..5bc978be80434 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -1143,6 +1143,14 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ 		if (copy_from_user(&reg, argp, sizeof(reg)))
+ 			break;
+ 
++		/*
++		 * We could owe a reset due to PSCI. Handle the pending reset
++		 * here to ensure userspace register accesses are ordered after
++		 * the reset.
++		 */
++		if (kvm_check_request(KVM_REQ_VCPU_RESET, vcpu))
++			kvm_reset_vcpu(vcpu);
++
+ 		if (ioctl == KVM_SET_ONE_REG)
+ 			r = kvm_arm_set_reg(vcpu, &reg);
+ 		else
+diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
+index b969c2157ad2e..204c62debf06e 100644
+--- a/arch/arm64/kvm/reset.c
++++ b/arch/arm64/kvm/reset.c
+@@ -263,10 +263,16 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
+  */
+ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ {
++	struct vcpu_reset_state reset_state;
+ 	int ret;
+ 	bool loaded;
+ 	u32 pstate;
+ 
++	mutex_lock(&vcpu->kvm->lock);
++	reset_state = vcpu->arch.reset_state;
++	WRITE_ONCE(vcpu->arch.reset_state.reset, false);
++	mutex_unlock(&vcpu->kvm->lock);
++
+ 	/* Reset PMU outside of the non-preemptible section */
+ 	kvm_pmu_vcpu_reset(vcpu);
+ 
+@@ -325,8 +331,8 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ 	 * Additional reset state handling that PSCI may have imposed on us.
+ 	 * Must be done after all the sys_reg reset.
+ 	 */
+-	if (vcpu->arch.reset_state.reset) {
+-		unsigned long target_pc = vcpu->arch.reset_state.pc;
++	if (reset_state.reset) {
++		unsigned long target_pc = reset_state.pc;
+ 
+ 		/* Gracefully handle Thumb2 entry point */
+ 		if (vcpu_mode_is_32bit(vcpu) && (target_pc & 1)) {
+@@ -335,13 +341,11 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ 		}
+ 
+ 		/* Propagate caller endianness */
+-		if (vcpu->arch.reset_state.be)
++		if (reset_state.be)
+ 			kvm_vcpu_set_be(vcpu);
+ 
+ 		*vcpu_pc(vcpu) = target_pc;
+-		vcpu_set_reg(vcpu, 0, vcpu->arch.reset_state.r0);
+-
+-		vcpu->arch.reset_state.reset = false;
++		vcpu_set_reg(vcpu, 0, reset_state.r0);
+ 	}
+ 
+ 	/* Reset timer */
+@@ -366,6 +370,14 @@ int kvm_set_ipa_limit(void)
+ 	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+ 	parange = cpuid_feature_extract_unsigned_field(mmfr0,
+ 				ID_AA64MMFR0_PARANGE_SHIFT);
++	/*
++	 * IPA size beyond 48 bits could not be supported
++	 * on either 4K or 16K page size. Hence let's cap
++	 * it to 48 bits, in case it's reported as larger
++	 * on the system.
++	 */
++	if (PAGE_SIZE != SZ_64K)
++		parange = min(parange, (unsigned int)ID_AA64MMFR0_PARANGE_48);
+ 
+ 	/*
+ 	 * Check with ARMv8.5-GTG that our PAGE_SIZE is supported at
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index cd9995ee84419..5777b72bb8b62 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -3146,7 +3146,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_P9_TM_HV_ASSIST)
+ 	/* The following code handles the fake_suspend = 1 case */
+ 	mflr	r0
+ 	std	r0, PPC_LR_STKOFF(r1)
+-	stdu	r1, -PPC_MIN_STKFRM(r1)
++	stdu	r1, -TM_FRAME_SIZE(r1)
+ 
+ 	/* Turn on TM. */
+ 	mfmsr	r8
+@@ -3161,10 +3161,42 @@ BEGIN_FTR_SECTION
+ END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_XER_SO_BUG)
+ 	nop
+ 
++	/*
++	 * It's possible that treclaim. may modify registers, if we have lost
++	 * track of fake-suspend state in the guest due to it using rfscv.
++	 * Save and restore registers in case this occurs.
++	 */
++	mfspr	r3, SPRN_DSCR
++	mfspr	r4, SPRN_XER
++	mfspr	r5, SPRN_AMR
++	/* SPRN_TAR would need to be saved here if the kernel ever used it */
++	mfcr	r12
++	SAVE_NVGPRS(r1)
++	SAVE_GPR(2, r1)
++	SAVE_GPR(3, r1)
++	SAVE_GPR(4, r1)
++	SAVE_GPR(5, r1)
++	stw	r12, 8(r1)
++	std	r1, HSTATE_HOST_R1(r13)
++
+ 	/* We have to treclaim here because that's the only way to do S->N */
+ 	li	r3, TM_CAUSE_KVM_RESCHED
+ 	TRECLAIM(R3)
+ 
++	GET_PACA(r13)
++	ld	r1, HSTATE_HOST_R1(r13)
++	REST_GPR(2, r1)
++	REST_GPR(3, r1)
++	REST_GPR(4, r1)
++	REST_GPR(5, r1)
++	lwz	r12, 8(r1)
++	REST_NVGPRS(r1)
++	mtspr	SPRN_DSCR, r3
++	mtspr	SPRN_XER, r4
++	mtspr	SPRN_AMR, r5
++	mtcr	r12
++	HMT_MEDIUM
++
+ 	/*
+ 	 * We were in fake suspend, so we are not going to save the
+ 	 * register state as the guest checkpointed state (since
+@@ -3192,7 +3224,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_XER_SO_BUG)
+ 	std	r5, VCPU_TFHAR(r9)
+ 	std	r6, VCPU_TFIAR(r9)
+ 
+-	addi	r1, r1, PPC_MIN_STKFRM
++	addi	r1, r1, TM_FRAME_SIZE
+ 	ld	r0, PPC_LR_STKOFF(r1)
+ 	mtlr	r0
+ 	blr
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index dee01d3b23a40..8d9047d2d1e11 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -248,8 +248,7 @@ static inline void reg_set_seen(struct bpf_jit *jit, u32 b1)
+ 
+ #define EMIT6_PCREL(op1, op2, b1, b2, i, off, mask)		\
+ ({								\
+-	/* Branch instruction needs 6 bytes */			\
+-	int rel = (addrs[(i) + (off) + 1] - (addrs[(i) + 1] - 6)) / 2;\
++	int rel = (addrs[(i) + (off) + 1] - jit->prg) / 2;	\
+ 	_EMIT6((op1) | reg(b1, b2) << 16 | (rel & 0xffff), (op2) | (mask));\
+ 	REG_SET_SEEN(b1);					\
+ 	REG_SET_SEEN(b2);					\
+@@ -761,10 +760,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT4(0xb9080000, dst_reg, src_reg);
+ 		break;
+ 	case BPF_ALU | BPF_ADD | BPF_K: /* dst = (u32) dst + (u32) imm */
+-		if (!imm)
+-			break;
+-		/* alfi %dst,imm */
+-		EMIT6_IMM(0xc20b0000, dst_reg, imm);
++		if (imm != 0) {
++			/* alfi %dst,imm */
++			EMIT6_IMM(0xc20b0000, dst_reg, imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_ADD | BPF_K: /* dst = dst + imm */
+@@ -786,17 +785,22 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT4(0xb9090000, dst_reg, src_reg);
+ 		break;
+ 	case BPF_ALU | BPF_SUB | BPF_K: /* dst = (u32) dst - (u32) imm */
+-		if (!imm)
+-			break;
+-		/* alfi %dst,-imm */
+-		EMIT6_IMM(0xc20b0000, dst_reg, -imm);
++		if (imm != 0) {
++			/* alfi %dst,-imm */
++			EMIT6_IMM(0xc20b0000, dst_reg, -imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_SUB | BPF_K: /* dst = dst - imm */
+ 		if (!imm)
+ 			break;
+-		/* agfi %dst,-imm */
+-		EMIT6_IMM(0xc2080000, dst_reg, -imm);
++		if (imm == -0x80000000) {
++			/* algfi %dst,0x80000000 */
++			EMIT6_IMM(0xc20a0000, dst_reg, 0x80000000);
++		} else {
++			/* agfi %dst,-imm */
++			EMIT6_IMM(0xc2080000, dst_reg, -imm);
++		}
+ 		break;
+ 	/*
+ 	 * BPF_MUL
+@@ -811,10 +815,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT4(0xb90c0000, dst_reg, src_reg);
+ 		break;
+ 	case BPF_ALU | BPF_MUL | BPF_K: /* dst = (u32) dst * (u32) imm */
+-		if (imm == 1)
+-			break;
+-		/* msfi %r5,imm */
+-		EMIT6_IMM(0xc2010000, dst_reg, imm);
++		if (imm != 1) {
++			/* msfi %r5,imm */
++			EMIT6_IMM(0xc2010000, dst_reg, imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_MUL | BPF_K: /* dst = dst * imm */
+@@ -867,6 +871,8 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 			if (BPF_OP(insn->code) == BPF_MOD)
+ 				/* lhgi %dst,0 */
+ 				EMIT4_IMM(0xa7090000, dst_reg, 0);
++			else
++				EMIT_ZERO(dst_reg);
+ 			break;
+ 		}
+ 		/* lhi %w0,0 */
+@@ -999,10 +1005,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT4(0xb9820000, dst_reg, src_reg);
+ 		break;
+ 	case BPF_ALU | BPF_XOR | BPF_K: /* dst = (u32) dst ^ (u32) imm */
+-		if (!imm)
+-			break;
+-		/* xilf %dst,imm */
+-		EMIT6_IMM(0xc0070000, dst_reg, imm);
++		if (imm != 0) {
++			/* xilf %dst,imm */
++			EMIT6_IMM(0xc0070000, dst_reg, imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_XOR | BPF_K: /* dst = dst ^ imm */
+@@ -1033,10 +1039,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT6_DISP_LH(0xeb000000, 0x000d, dst_reg, dst_reg, src_reg, 0);
+ 		break;
+ 	case BPF_ALU | BPF_LSH | BPF_K: /* dst = (u32) dst << (u32) imm */
+-		if (imm == 0)
+-			break;
+-		/* sll %dst,imm(%r0) */
+-		EMIT4_DISP(0x89000000, dst_reg, REG_0, imm);
++		if (imm != 0) {
++			/* sll %dst,imm(%r0) */
++			EMIT4_DISP(0x89000000, dst_reg, REG_0, imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_LSH | BPF_K: /* dst = dst << imm */
+@@ -1058,10 +1064,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT6_DISP_LH(0xeb000000, 0x000c, dst_reg, dst_reg, src_reg, 0);
+ 		break;
+ 	case BPF_ALU | BPF_RSH | BPF_K: /* dst = (u32) dst >> (u32) imm */
+-		if (imm == 0)
+-			break;
+-		/* srl %dst,imm(%r0) */
+-		EMIT4_DISP(0x88000000, dst_reg, REG_0, imm);
++		if (imm != 0) {
++			/* srl %dst,imm(%r0) */
++			EMIT4_DISP(0x88000000, dst_reg, REG_0, imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_RSH | BPF_K: /* dst = dst >> imm */
+@@ -1083,10 +1089,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT6_DISP_LH(0xeb000000, 0x000a, dst_reg, dst_reg, src_reg, 0);
+ 		break;
+ 	case BPF_ALU | BPF_ARSH | BPF_K: /* ((s32) dst >> imm */
+-		if (imm == 0)
+-			break;
+-		/* sra %dst,imm(%r0) */
+-		EMIT4_DISP(0x8a000000, dst_reg, REG_0, imm);
++		if (imm != 0) {
++			/* sra %dst,imm(%r0) */
++			EMIT4_DISP(0x8a000000, dst_reg, REG_0, imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_ARSH | BPF_K: /* ((s64) dst) >>= imm */
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index c9fa7be3df82d..5c95d242f38d7 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -301,8 +301,8 @@ do {									\
+ 	unsigned int __gu_low, __gu_high;				\
+ 	const unsigned int __user *__gu_ptr;				\
+ 	__gu_ptr = (const void __user *)(ptr);				\
+-	__get_user_asm(__gu_low, ptr, "l", "=r", label);		\
+-	__get_user_asm(__gu_high, ptr+1, "l", "=r", label);		\
++	__get_user_asm(__gu_low, __gu_ptr, "l", "=r", label);		\
++	__get_user_asm(__gu_high, __gu_ptr+1, "l", "=r", label);	\
+ 	(x) = ((unsigned long long)__gu_high << 32) | __gu_low;		\
+ } while (0)
+ #else
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 056d0367864e9..14b34963eb1f7 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -1241,6 +1241,9 @@ static void __mc_scan_banks(struct mce *m, struct pt_regs *regs, struct mce *fin
+ 
+ static void kill_me_now(struct callback_head *ch)
+ {
++	struct task_struct *p = container_of(ch, struct task_struct, mce_kill_me);
++
++	p->mce_count = 0;
+ 	force_sig(SIGBUS);
+ }
+ 
+@@ -1249,6 +1252,7 @@ static void kill_me_maybe(struct callback_head *cb)
+ 	struct task_struct *p = container_of(cb, struct task_struct, mce_kill_me);
+ 	int flags = MF_ACTION_REQUIRED;
+ 
++	p->mce_count = 0;
+ 	pr_err("Uncorrected hardware memory error in user-access at %llx", p->mce_addr);
+ 
+ 	if (!p->mce_ripv)
+@@ -1269,17 +1273,34 @@ static void kill_me_maybe(struct callback_head *cb)
+ 	}
+ }
+ 
+-static void queue_task_work(struct mce *m, int kill_it)
++static void queue_task_work(struct mce *m, char *msg, int kill_current_task)
+ {
+-	current->mce_addr = m->addr;
+-	current->mce_kflags = m->kflags;
+-	current->mce_ripv = !!(m->mcgstatus & MCG_STATUS_RIPV);
+-	current->mce_whole_page = whole_page(m);
++	int count = ++current->mce_count;
+ 
+-	if (kill_it)
+-		current->mce_kill_me.func = kill_me_now;
+-	else
+-		current->mce_kill_me.func = kill_me_maybe;
++	/* First call, save all the details */
++	if (count == 1) {
++		current->mce_addr = m->addr;
++		current->mce_kflags = m->kflags;
++		current->mce_ripv = !!(m->mcgstatus & MCG_STATUS_RIPV);
++		current->mce_whole_page = whole_page(m);
++
++		if (kill_current_task)
++			current->mce_kill_me.func = kill_me_now;
++		else
++			current->mce_kill_me.func = kill_me_maybe;
++	}
++
++	/* Ten is likely overkill. Don't expect more than two faults before task_work() */
++	if (count > 10)
++		mce_panic("Too many consecutive machine checks while accessing user data", m, msg);
++
++	/* Second or later call, make sure page address matches the one from first call */
++	if (count > 1 && (current->mce_addr >> PAGE_SHIFT) != (m->addr >> PAGE_SHIFT))
++		mce_panic("Consecutive machine checks to different user pages", m, msg);
++
++	/* Do not call task_work_add() more than once */
++	if (count > 1)
++		return;
+ 
+ 	task_work_add(current, &current->mce_kill_me, TWA_RESUME);
+ }
+@@ -1427,7 +1448,7 @@ noinstr void do_machine_check(struct pt_regs *regs)
+ 		/* If this triggers there is no way to recover. Die hard. */
+ 		BUG_ON(!on_thread_stack() || !user_mode(regs));
+ 
+-		queue_task_work(&m, kill_it);
++		queue_task_work(&m, msg, kill_it);
+ 
+ 	} else {
+ 		/*
+@@ -1445,7 +1466,7 @@ noinstr void do_machine_check(struct pt_regs *regs)
+ 		}
+ 
+ 		if (m.kflags & MCE_IN_KERNEL_COPYIN)
+-			queue_task_work(&m, kill_it);
++			queue_task_work(&m, msg, kill_it);
+ 	}
+ out:
+ 	mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index b5a3fa4033d38..067ca92e69ef9 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -1389,18 +1389,18 @@ int kern_addr_valid(unsigned long addr)
+ 		return 0;
+ 
+ 	p4d = p4d_offset(pgd, addr);
+-	if (p4d_none(*p4d))
++	if (!p4d_present(*p4d))
+ 		return 0;
+ 
+ 	pud = pud_offset(p4d, addr);
+-	if (pud_none(*pud))
++	if (!pud_present(*pud))
+ 		return 0;
+ 
+ 	if (pud_large(*pud))
+ 		return pfn_valid(pud_pfn(*pud));
+ 
+ 	pmd = pmd_offset(pud, addr);
+-	if (pmd_none(*pmd))
++	if (!pmd_present(*pmd))
+ 		return 0;
+ 
+ 	if (pmd_large(*pmd))
+diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
+index ca311aaa67b88..232932bda4e5e 100644
+--- a/arch/x86/mm/pat/memtype.c
++++ b/arch/x86/mm/pat/memtype.c
+@@ -583,7 +583,12 @@ int memtype_reserve(u64 start, u64 end, enum page_cache_mode req_type,
+ 	int err = 0;
+ 
+ 	start = sanitize_phys(start);
+-	end = sanitize_phys(end);
++
++	/*
++	 * The end address passed into this function is exclusive, but
++	 * sanitize_phys() expects an inclusive address.
++	 */
++	end = sanitize_phys(end - 1) + 1;
+ 	if (start >= end) {
+ 		WARN(1, "%s failed: [mem %#010Lx-%#010Lx], req %s\n", __func__,
+ 				start, end - 1, cattr_name(req_type));
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index d3cdf467d91fa..c758fd913cedd 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1204,6 +1204,11 @@ static void __init xen_dom0_set_legacy_features(void)
+ 	x86_platform.legacy.rtc = 1;
+ }
+ 
++static void __init xen_domu_set_legacy_features(void)
++{
++	x86_platform.legacy.rtc = 0;
++}
++
+ /* First C function to be called on Xen boot */
+ asmlinkage __visible void __init xen_start_kernel(void)
+ {
+@@ -1356,6 +1361,8 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ 		add_preferred_console("xenboot", 0, NULL);
+ 		if (pci_xen)
+ 			x86_init.pci.arch_init = pci_xen_init;
++		x86_platform.set_legacy_features =
++				xen_domu_set_legacy_features;
+ 	} else {
+ 		const struct dom0_vga_console_info *info =
+ 			(void *)((char *)xen_start_info +
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index b8c2ddc01aec3..65c200e0ecb59 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2526,6 +2526,15 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+ 	 * are likely to increase the throughput.
+ 	 */
+ 	bfqq->new_bfqq = new_bfqq;
++	/*
++	 * The above assignment schedules the following redirections:
++	 * each time some I/O for bfqq arrives, the process that
++	 * generated that I/O is disassociated from bfqq and
++	 * associated with new_bfqq. Here we increases new_bfqq->ref
++	 * in advance, adding the number of processes that are
++	 * expected to be associated with new_bfqq as they happen to
++	 * issue I/O.
++	 */
+ 	new_bfqq->ref += process_refs;
+ 	return new_bfqq;
+ }
+@@ -2585,6 +2594,10 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ 	struct bfq_queue *in_service_bfqq, *new_bfqq;
+ 
++	/* if a merge has already been setup, then proceed with that first */
++	if (bfqq->new_bfqq)
++		return bfqq->new_bfqq;
++
+ 	/*
+ 	 * Do not perform queue merging if the device is non
+ 	 * rotational and performs internal queueing. In fact, such a
+@@ -2639,9 +2652,6 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	if (bfq_too_late_for_merging(bfqq))
+ 		return NULL;
+ 
+-	if (bfqq->new_bfqq)
+-		return bfqq->new_bfqq;
+-
+ 	if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
+ 		return NULL;
+ 
+diff --git a/drivers/base/power/trace.c b/drivers/base/power/trace.c
+index a97f33d0c59f9..94665037f4a35 100644
+--- a/drivers/base/power/trace.c
++++ b/drivers/base/power/trace.c
+@@ -13,6 +13,7 @@
+ #include <linux/export.h>
+ #include <linux/rtc.h>
+ #include <linux/suspend.h>
++#include <linux/init.h>
+ 
+ #include <linux/mc146818rtc.h>
+ 
+@@ -165,6 +166,9 @@ void generate_pm_trace(const void *tracedata, unsigned int user)
+ 	const char *file = *(const char **)(tracedata + 2);
+ 	unsigned int user_hash_value, file_hash_value;
+ 
++	if (!x86_platform.legacy.rtc)
++		return;
++
+ 	user_hash_value = user % USERHASH;
+ 	file_hash_value = hash_string(lineno, file, FILEHASH);
+ 	set_magic_time(user_hash_value, file_hash_value, dev_hash_value);
+@@ -267,6 +271,9 @@ static struct notifier_block pm_trace_nb = {
+ 
+ static int __init early_resume_init(void)
+ {
++	if (!x86_platform.legacy.rtc)
++		return 0;
++
+ 	hash_value_early_read = read_magic_time();
+ 	register_pm_notifier(&pm_trace_nb);
+ 	return 0;
+@@ -277,6 +284,9 @@ static int __init late_resume_init(void)
+ 	unsigned int val = hash_value_early_read;
+ 	unsigned int user, file, dev;
+ 
++	if (!x86_platform.legacy.rtc)
++		return 0;
++
+ 	user = val % USERHASH;
+ 	val = val / USERHASH;
+ 	file = val % FILEHASH;
+diff --git a/drivers/gpio/gpio-mpc8xxx.c b/drivers/gpio/gpio-mpc8xxx.c
+index 3c2fa44d9279b..d60d5520707dc 100644
+--- a/drivers/gpio/gpio-mpc8xxx.c
++++ b/drivers/gpio/gpio-mpc8xxx.c
+@@ -374,7 +374,7 @@ static int mpc8xxx_probe(struct platform_device *pdev)
+ 	    of_device_is_compatible(np, "fsl,ls1088a-gpio"))
+ 		gc->write_reg(mpc8xxx_gc->regs + GPIO_IBE, 0xffffffff);
+ 
+-	ret = gpiochip_add_data(gc, mpc8xxx_gc);
++	ret = devm_gpiochip_add_data(&pdev->dev, gc, mpc8xxx_gc);
+ 	if (ret) {
+ 		pr_err("%pOF: GPIO chip registration failed with status %d\n",
+ 		       np, ret);
+@@ -406,6 +406,8 @@ static int mpc8xxx_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ err:
++	if (mpc8xxx_gc->irq)
++		irq_domain_remove(mpc8xxx_gc->irq);
+ 	iounmap(mpc8xxx_gc->regs);
+ 	return ret;
+ }
+@@ -419,7 +421,6 @@ static int mpc8xxx_remove(struct platform_device *pdev)
+ 		irq_domain_remove(mpc8xxx_gc->irq);
+ 	}
+ 
+-	gpiochip_remove(&mpc8xxx_gc->gc);
+ 	iounmap(mpc8xxx_gc->regs);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 76c31aa7b84df..d949d6c52f24b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -717,7 +717,7 @@ enum amd_hw_ip_block_type {
+ 	MAX_HWIP
+ };
+ 
+-#define HWIP_MAX_INSTANCE	8
++#define HWIP_MAX_INSTANCE	10
+ 
+ struct amd_powerplay {
+ 	void *pp_handle;
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611.c b/drivers/gpu/drm/bridge/lontium-lt9611.c
+index c1926154eda84..29b1ce2140abc 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611.c
+@@ -867,8 +867,14 @@ static enum drm_mode_status lt9611_bridge_mode_valid(struct drm_bridge *bridge,
+ 						     const struct drm_display_mode *mode)
+ {
+ 	struct lt9611_mode *lt9611_mode = lt9611_find_mode(mode);
++	struct lt9611 *lt9611 = bridge_to_lt9611(bridge);
+ 
+-	return lt9611_mode ? MODE_OK : MODE_BAD;
++	if (!lt9611_mode)
++		return MODE_BAD;
++	else if (lt9611_mode->intfs > 1 && !lt9611->dsi1)
++		return MODE_PANEL;
++	else
++		return MODE_OK;
+ }
+ 
+ static void lt9611_bridge_pre_enable(struct drm_bridge *bridge)
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
+index 76d38561c9103..cf741c5c82d25 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
+@@ -397,8 +397,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,
+ 		if (switch_mmu_context) {
+ 			struct etnaviv_iommu_context *old_context = gpu->mmu_context;
+ 
+-			etnaviv_iommu_context_get(mmu_context);
+-			gpu->mmu_context = mmu_context;
++			gpu->mmu_context = etnaviv_iommu_context_get(mmu_context);
+ 			etnaviv_iommu_context_put(old_context);
+ 		}
+ 
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+index 2b7e85318a76a..424474041c943 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+@@ -305,8 +305,7 @@ struct etnaviv_vram_mapping *etnaviv_gem_mapping_get(
+ 		list_del(&mapping->obj_node);
+ 	}
+ 
+-	etnaviv_iommu_context_get(mmu_context);
+-	mapping->context = mmu_context;
++	mapping->context = etnaviv_iommu_context_get(mmu_context);
+ 	mapping->use = 1;
+ 
+ 	ret = etnaviv_iommu_map_gem(mmu_context, etnaviv_obj,
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+index d05c359945799..5f24cc52c2878 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+@@ -532,8 +532,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
+ 		goto err_submit_objects;
+ 
+ 	submit->ctx = file->driver_priv;
+-	etnaviv_iommu_context_get(submit->ctx->mmu);
+-	submit->mmu_context = submit->ctx->mmu;
++	submit->mmu_context = etnaviv_iommu_context_get(submit->ctx->mmu);
+ 	submit->exec_state = args->exec_state;
+ 	submit->flags = args->flags;
+ 
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index c6404b8d067f1..2520b7dad6ce7 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -561,6 +561,12 @@ static int etnaviv_hw_reset(struct etnaviv_gpu *gpu)
+ 	/* We rely on the GPU running, so program the clock */
+ 	etnaviv_gpu_update_clock(gpu);
+ 
++	gpu->fe_running = false;
++	gpu->exec_state = -1;
++	if (gpu->mmu_context)
++		etnaviv_iommu_context_put(gpu->mmu_context);
++	gpu->mmu_context = NULL;
++
+ 	return 0;
+ }
+ 
+@@ -623,19 +629,23 @@ void etnaviv_gpu_start_fe(struct etnaviv_gpu *gpu, u32 address, u16 prefetch)
+ 			  VIVS_MMUv2_SEC_COMMAND_CONTROL_ENABLE |
+ 			  VIVS_MMUv2_SEC_COMMAND_CONTROL_PREFETCH(prefetch));
+ 	}
++
++	gpu->fe_running = true;
+ }
+ 
+-static void etnaviv_gpu_start_fe_idleloop(struct etnaviv_gpu *gpu)
++static void etnaviv_gpu_start_fe_idleloop(struct etnaviv_gpu *gpu,
++					  struct etnaviv_iommu_context *context)
+ {
+-	u32 address = etnaviv_cmdbuf_get_va(&gpu->buffer,
+-				&gpu->mmu_context->cmdbuf_mapping);
+ 	u16 prefetch;
++	u32 address;
+ 
+ 	/* setup the MMU */
+-	etnaviv_iommu_restore(gpu, gpu->mmu_context);
++	etnaviv_iommu_restore(gpu, context);
+ 
+ 	/* Start command processor */
+ 	prefetch = etnaviv_buffer_init(gpu);
++	address = etnaviv_cmdbuf_get_va(&gpu->buffer,
++					&gpu->mmu_context->cmdbuf_mapping);
+ 
+ 	etnaviv_gpu_start_fe(gpu, address, prefetch);
+ }
+@@ -814,7 +824,6 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ 	/* Now program the hardware */
+ 	mutex_lock(&gpu->lock);
+ 	etnaviv_gpu_hw_init(gpu);
+-	gpu->exec_state = -1;
+ 	mutex_unlock(&gpu->lock);
+ 
+ 	pm_runtime_mark_last_busy(gpu->dev);
+@@ -1039,8 +1048,6 @@ void etnaviv_gpu_recover_hang(struct etnaviv_gpu *gpu)
+ 	spin_unlock(&gpu->event_spinlock);
+ 
+ 	etnaviv_gpu_hw_init(gpu);
+-	gpu->exec_state = -1;
+-	gpu->mmu_context = NULL;
+ 
+ 	mutex_unlock(&gpu->lock);
+ 	pm_runtime_mark_last_busy(gpu->dev);
+@@ -1352,14 +1359,12 @@ struct dma_fence *etnaviv_gpu_submit(struct etnaviv_gem_submit *submit)
+ 		goto out_unlock;
+ 	}
+ 
+-	if (!gpu->mmu_context) {
+-		etnaviv_iommu_context_get(submit->mmu_context);
+-		gpu->mmu_context = submit->mmu_context;
+-		etnaviv_gpu_start_fe_idleloop(gpu);
+-	} else {
+-		etnaviv_iommu_context_get(gpu->mmu_context);
+-		submit->prev_mmu_context = gpu->mmu_context;
+-	}
++	if (!gpu->fe_running)
++		etnaviv_gpu_start_fe_idleloop(gpu, submit->mmu_context);
++
++	if (submit->prev_mmu_context)
++		etnaviv_iommu_context_put(submit->prev_mmu_context);
++	submit->prev_mmu_context = etnaviv_iommu_context_get(gpu->mmu_context);
+ 
+ 	if (submit->nr_pmrs) {
+ 		gpu->event[event[1]].sync_point = &sync_point_perfmon_sample_pre;
+@@ -1561,7 +1566,7 @@ int etnaviv_gpu_wait_idle(struct etnaviv_gpu *gpu, unsigned int timeout_ms)
+ 
+ static int etnaviv_gpu_hw_suspend(struct etnaviv_gpu *gpu)
+ {
+-	if (gpu->initialized && gpu->mmu_context) {
++	if (gpu->initialized && gpu->fe_running) {
+ 		/* Replace the last WAIT with END */
+ 		mutex_lock(&gpu->lock);
+ 		etnaviv_buffer_end(gpu);
+@@ -1574,8 +1579,7 @@ static int etnaviv_gpu_hw_suspend(struct etnaviv_gpu *gpu)
+ 		 */
+ 		etnaviv_gpu_wait_idle(gpu, 100);
+ 
+-		etnaviv_iommu_context_put(gpu->mmu_context);
+-		gpu->mmu_context = NULL;
++		gpu->fe_running = false;
+ 	}
+ 
+ 	gpu->exec_state = -1;
+@@ -1723,6 +1727,9 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master,
+ 	etnaviv_gpu_hw_suspend(gpu);
+ #endif
+ 
++	if (gpu->mmu_context)
++		etnaviv_iommu_context_put(gpu->mmu_context);
++
+ 	if (gpu->initialized) {
+ 		etnaviv_cmdbuf_free(&gpu->buffer);
+ 		etnaviv_iommu_global_fini(gpu);
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
+index 8ea48697d1321..1c75c8ed5bcea 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
+@@ -101,6 +101,7 @@ struct etnaviv_gpu {
+ 	struct workqueue_struct *wq;
+ 	struct drm_gpu_scheduler sched;
+ 	bool initialized;
++	bool fe_running;
+ 
+ 	/* 'ring'-buffer: */
+ 	struct etnaviv_cmdbuf buffer;
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_iommu.c b/drivers/gpu/drm/etnaviv/etnaviv_iommu.c
+index 1a7c89a67bea3..afe5dd6a9925b 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_iommu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_iommu.c
+@@ -92,6 +92,10 @@ static void etnaviv_iommuv1_restore(struct etnaviv_gpu *gpu,
+ 	struct etnaviv_iommuv1_context *v1_context = to_v1_context(context);
+ 	u32 pgtable;
+ 
++	if (gpu->mmu_context)
++		etnaviv_iommu_context_put(gpu->mmu_context);
++	gpu->mmu_context = etnaviv_iommu_context_get(context);
++
+ 	/* set base addresses */
+ 	gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, context->global->memory_base);
+ 	gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_FE, context->global->memory_base);
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c b/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c
+index f8bf488e9d717..d664ae29ae209 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c
+@@ -172,6 +172,10 @@ static void etnaviv_iommuv2_restore_nonsec(struct etnaviv_gpu *gpu,
+ 	if (gpu_read(gpu, VIVS_MMUv2_CONTROL) & VIVS_MMUv2_CONTROL_ENABLE)
+ 		return;
+ 
++	if (gpu->mmu_context)
++		etnaviv_iommu_context_put(gpu->mmu_context);
++	gpu->mmu_context = etnaviv_iommu_context_get(context);
++
+ 	prefetch = etnaviv_buffer_config_mmuv2(gpu,
+ 				(u32)v2_context->mtlb_dma,
+ 				(u32)context->global->bad_page_dma);
+@@ -192,6 +196,10 @@ static void etnaviv_iommuv2_restore_sec(struct etnaviv_gpu *gpu,
+ 	if (gpu_read(gpu, VIVS_MMUv2_SEC_CONTROL) & VIVS_MMUv2_SEC_CONTROL_ENABLE)
+ 		return;
+ 
++	if (gpu->mmu_context)
++		etnaviv_iommu_context_put(gpu->mmu_context);
++	gpu->mmu_context = etnaviv_iommu_context_get(context);
++
+ 	gpu_write(gpu, VIVS_MMUv2_PTA_ADDRESS_LOW,
+ 		  lower_32_bits(context->global->v2.pta_dma));
+ 	gpu_write(gpu, VIVS_MMUv2_PTA_ADDRESS_HIGH,
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+index 15d9fa3879e5d..984569a59a90a 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+@@ -197,6 +197,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu_context *context,
+ 		 */
+ 		list_for_each_entry_safe(m, n, &list, scan_node) {
+ 			etnaviv_iommu_remove_mapping(context, m);
++			etnaviv_iommu_context_put(m->context);
+ 			m->context = NULL;
+ 			list_del_init(&m->mmu_node);
+ 			list_del_init(&m->scan_node);
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h
+index d1d6902fd13be..e4a0b7d09c2ea 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h
++++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h
+@@ -105,9 +105,11 @@ void etnaviv_iommu_dump(struct etnaviv_iommu_context *ctx, void *buf);
+ struct etnaviv_iommu_context *
+ etnaviv_iommu_context_init(struct etnaviv_iommu_global *global,
+ 			   struct etnaviv_cmdbuf_suballoc *suballoc);
+-static inline void etnaviv_iommu_context_get(struct etnaviv_iommu_context *ctx)
++static inline struct etnaviv_iommu_context *
++etnaviv_iommu_context_get(struct etnaviv_iommu_context *ctx)
+ {
+ 	kref_get(&ctx->refcount);
++	return ctx;
+ }
+ void etnaviv_iommu_context_put(struct etnaviv_iommu_context *ctx);
+ void etnaviv_iommu_restore(struct etnaviv_gpu *gpu,
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+index 6802d9b65f828..dec54c70e0082 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+@@ -1122,7 +1122,7 @@ static int cdn_dp_suspend(struct device *dev)
+ 	return ret;
+ }
+ 
+-static int cdn_dp_resume(struct device *dev)
++static __maybe_unused int cdn_dp_resume(struct device *dev)
+ {
+ 	struct cdn_dp_device *dp = dev_get_drvdata(dev);
+ 
+diff --git a/drivers/mfd/ab8500-core.c b/drivers/mfd/ab8500-core.c
+index a3bac9da8cbbc..4cea63a4cab73 100644
+--- a/drivers/mfd/ab8500-core.c
++++ b/drivers/mfd/ab8500-core.c
+@@ -493,7 +493,7 @@ static int ab8500_handle_hierarchical_line(struct ab8500 *ab8500,
+ 		if (line == AB8540_INT_GPIO43F || line == AB8540_INT_GPIO44F)
+ 			line += 1;
+ 
+-		handle_nested_irq(irq_create_mapping(ab8500->domain, line));
++		handle_nested_irq(irq_find_mapping(ab8500->domain, line));
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/mfd/axp20x.c b/drivers/mfd/axp20x.c
+index aa59496e43768..9db1000944c34 100644
+--- a/drivers/mfd/axp20x.c
++++ b/drivers/mfd/axp20x.c
+@@ -125,12 +125,13 @@ static const struct regmap_range axp288_writeable_ranges[] = {
+ 
+ static const struct regmap_range axp288_volatile_ranges[] = {
+ 	regmap_reg_range(AXP20X_PWR_INPUT_STATUS, AXP288_POWER_REASON),
++	regmap_reg_range(AXP22X_PWR_OUT_CTRL1, AXP22X_ALDO3_V_OUT),
+ 	regmap_reg_range(AXP288_BC_GLOBAL, AXP288_BC_GLOBAL),
+ 	regmap_reg_range(AXP288_BC_DET_STAT, AXP20X_VBUS_IPSOUT_MGMT),
+ 	regmap_reg_range(AXP20X_CHRG_BAK_CTRL, AXP20X_CHRG_BAK_CTRL),
+ 	regmap_reg_range(AXP20X_IRQ1_EN, AXP20X_IPSOUT_V_HIGH_L),
+ 	regmap_reg_range(AXP20X_TIMER_CTRL, AXP20X_TIMER_CTRL),
+-	regmap_reg_range(AXP22X_GPIO_STATE, AXP22X_GPIO_STATE),
++	regmap_reg_range(AXP20X_GPIO1_CTRL, AXP22X_GPIO_STATE),
+ 	regmap_reg_range(AXP288_RT_BATT_V_H, AXP288_RT_BATT_V_L),
+ 	regmap_reg_range(AXP20X_FG_RES, AXP288_FG_CC_CAP_REG),
+ };
+diff --git a/drivers/mfd/db8500-prcmu.c b/drivers/mfd/db8500-prcmu.c
+index a5983d515db03..8d5f8f07d8a66 100644
+--- a/drivers/mfd/db8500-prcmu.c
++++ b/drivers/mfd/db8500-prcmu.c
+@@ -1622,22 +1622,20 @@ static long round_clock_rate(u8 clock, unsigned long rate)
+ }
+ 
+ static const unsigned long db8500_armss_freqs[] = {
+-	200000000,
+-	400000000,
+-	800000000,
++	199680000,
++	399360000,
++	798720000,
+ 	998400000
+ };
+ 
+ /* The DB8520 has slightly higher ARMSS max frequency */
+ static const unsigned long db8520_armss_freqs[] = {
+-	200000000,
+-	400000000,
+-	800000000,
++	199680000,
++	399360000,
++	798720000,
+ 	1152000000
+ };
+ 
+-
+-
+ static long round_armss_rate(unsigned long rate)
+ {
+ 	unsigned long freq = 0;
+diff --git a/drivers/mfd/lpc_sch.c b/drivers/mfd/lpc_sch.c
+index f27eb8dabc1c8..9ab9adce06fdd 100644
+--- a/drivers/mfd/lpc_sch.c
++++ b/drivers/mfd/lpc_sch.c
+@@ -22,13 +22,10 @@
+ #define SMBASE		0x40
+ #define SMBUS_IO_SIZE	64
+ 
+-#define GPIOBASE	0x44
++#define GPIO_BASE	0x44
+ #define GPIO_IO_SIZE	64
+ #define GPIO_IO_SIZE_CENTERTON	128
+ 
+-/* Intel Quark X1000 GPIO IRQ Number */
+-#define GPIO_IRQ_QUARK_X1000	9
+-
+ #define WDTBASE		0x84
+ #define WDT_IO_SIZE	64
+ 
+@@ -43,30 +40,25 @@ struct lpc_sch_info {
+ 	unsigned int io_size_smbus;
+ 	unsigned int io_size_gpio;
+ 	unsigned int io_size_wdt;
+-	int irq_gpio;
+ };
+ 
+ static struct lpc_sch_info sch_chipset_info[] = {
+ 	[LPC_SCH] = {
+ 		.io_size_smbus = SMBUS_IO_SIZE,
+ 		.io_size_gpio = GPIO_IO_SIZE,
+-		.irq_gpio = -1,
+ 	},
+ 	[LPC_ITC] = {
+ 		.io_size_smbus = SMBUS_IO_SIZE,
+ 		.io_size_gpio = GPIO_IO_SIZE,
+ 		.io_size_wdt = WDT_IO_SIZE,
+-		.irq_gpio = -1,
+ 	},
+ 	[LPC_CENTERTON] = {
+ 		.io_size_smbus = SMBUS_IO_SIZE,
+ 		.io_size_gpio = GPIO_IO_SIZE_CENTERTON,
+ 		.io_size_wdt = WDT_IO_SIZE,
+-		.irq_gpio = -1,
+ 	},
+ 	[LPC_QUARK_X1000] = {
+ 		.io_size_gpio = GPIO_IO_SIZE,
+-		.irq_gpio = GPIO_IRQ_QUARK_X1000,
+ 		.io_size_wdt = WDT_IO_SIZE,
+ 	},
+ };
+@@ -113,13 +105,13 @@ static int lpc_sch_get_io(struct pci_dev *pdev, int where, const char *name,
+ }
+ 
+ static int lpc_sch_populate_cell(struct pci_dev *pdev, int where,
+-				 const char *name, int size, int irq,
+-				 int id, struct mfd_cell *cell)
++				 const char *name, int size, int id,
++				 struct mfd_cell *cell)
+ {
+ 	struct resource *res;
+ 	int ret;
+ 
+-	res = devm_kcalloc(&pdev->dev, 2, sizeof(*res), GFP_KERNEL);
++	res = devm_kzalloc(&pdev->dev, sizeof(*res), GFP_KERNEL);
+ 	if (!res)
+ 		return -ENOMEM;
+ 
+@@ -135,18 +127,6 @@ static int lpc_sch_populate_cell(struct pci_dev *pdev, int where,
+ 	cell->ignore_resource_conflicts = true;
+ 	cell->id = id;
+ 
+-	/* Check if we need to add an IRQ resource */
+-	if (irq < 0)
+-		return 0;
+-
+-	res++;
+-
+-	res->start = irq;
+-	res->end = irq;
+-	res->flags = IORESOURCE_IRQ;
+-
+-	cell->num_resources++;
+-
+ 	return 0;
+ }
+ 
+@@ -158,15 +138,15 @@ static int lpc_sch_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 	int ret;
+ 
+ 	ret = lpc_sch_populate_cell(dev, SMBASE, "isch_smbus",
+-				    info->io_size_smbus, -1,
++				    info->io_size_smbus,
+ 				    id->device, &lpc_sch_cells[cells]);
+ 	if (ret < 0)
+ 		return ret;
+ 	if (ret == 0)
+ 		cells++;
+ 
+-	ret = lpc_sch_populate_cell(dev, GPIOBASE, "sch_gpio",
+-				    info->io_size_gpio, info->irq_gpio,
++	ret = lpc_sch_populate_cell(dev, GPIO_BASE, "sch_gpio",
++				    info->io_size_gpio,
+ 				    id->device, &lpc_sch_cells[cells]);
+ 	if (ret < 0)
+ 		return ret;
+@@ -174,7 +154,7 @@ static int lpc_sch_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 		cells++;
+ 
+ 	ret = lpc_sch_populate_cell(dev, WDTBASE, "ie6xx_wdt",
+-				    info->io_size_wdt, -1,
++				    info->io_size_wdt,
+ 				    id->device, &lpc_sch_cells[cells]);
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/mfd/stmpe.c b/drivers/mfd/stmpe.c
+index 1aee3b3253fc9..508349399f8af 100644
+--- a/drivers/mfd/stmpe.c
++++ b/drivers/mfd/stmpe.c
+@@ -1091,7 +1091,7 @@ static irqreturn_t stmpe_irq(int irq, void *data)
+ 
+ 	if (variant->id_val == STMPE801_ID ||
+ 	    variant->id_val == STMPE1600_ID) {
+-		int base = irq_create_mapping(stmpe->domain, 0);
++		int base = irq_find_mapping(stmpe->domain, 0);
+ 
+ 		handle_nested_irq(base);
+ 		return IRQ_HANDLED;
+@@ -1119,7 +1119,7 @@ static irqreturn_t stmpe_irq(int irq, void *data)
+ 		while (status) {
+ 			int bit = __ffs(status);
+ 			int line = bank * 8 + bit;
+-			int nestedirq = irq_create_mapping(stmpe->domain, line);
++			int nestedirq = irq_find_mapping(stmpe->domain, line);
+ 
+ 			handle_nested_irq(nestedirq);
+ 			status &= ~(1 << bit);
+diff --git a/drivers/mfd/tc3589x.c b/drivers/mfd/tc3589x.c
+index 7882a37ffc352..5c2d5a6a6da9c 100644
+--- a/drivers/mfd/tc3589x.c
++++ b/drivers/mfd/tc3589x.c
+@@ -187,7 +187,7 @@ again:
+ 
+ 	while (status) {
+ 		int bit = __ffs(status);
+-		int virq = irq_create_mapping(tc3589x->domain, bit);
++		int virq = irq_find_mapping(tc3589x->domain, bit);
+ 
+ 		handle_nested_irq(virq);
+ 		status &= ~(1 << bit);
+diff --git a/drivers/mfd/tqmx86.c b/drivers/mfd/tqmx86.c
+index ddddf08b6a4cc..732013f40e4e8 100644
+--- a/drivers/mfd/tqmx86.c
++++ b/drivers/mfd/tqmx86.c
+@@ -209,6 +209,8 @@ static int tqmx86_probe(struct platform_device *pdev)
+ 
+ 		/* Assumes the IRQ resource is first. */
+ 		tqmx_gpio_resources[0].start = gpio_irq;
++	} else {
++		tqmx_gpio_resources[0].flags = 0;
+ 	}
+ 
+ 	ocores_platfom_data.clock_khz = tqmx86_board_id_to_clk_rate(board_id);
+diff --git a/drivers/mfd/wm8994-irq.c b/drivers/mfd/wm8994-irq.c
+index 6c3a619e26286..651a028bc519a 100644
+--- a/drivers/mfd/wm8994-irq.c
++++ b/drivers/mfd/wm8994-irq.c
+@@ -154,7 +154,7 @@ static irqreturn_t wm8994_edge_irq(int irq, void *data)
+ 	struct wm8994 *wm8994 = data;
+ 
+ 	while (gpio_get_value_cansleep(wm8994->pdata.irq_gpio))
+-		handle_nested_irq(irq_create_mapping(wm8994->edge_irq, 0));
++		handle_nested_irq(irq_find_mapping(wm8994->edge_irq, 0));
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/mtd/mtdconcat.c b/drivers/mtd/mtdconcat.c
+index 6e4d0017c0bd4..f685a581df481 100644
+--- a/drivers/mtd/mtdconcat.c
++++ b/drivers/mtd/mtdconcat.c
+@@ -641,6 +641,7 @@ struct mtd_info *mtd_concat_create(struct mtd_info *subdev[],	/* subdevices to c
+ 	int i;
+ 	size_t size;
+ 	struct mtd_concat *concat;
++	struct mtd_info *subdev_master = NULL;
+ 	uint32_t max_erasesize, curr_erasesize;
+ 	int num_erase_region;
+ 	int max_writebufsize = 0;
+@@ -679,18 +680,24 @@ struct mtd_info *mtd_concat_create(struct mtd_info *subdev[],	/* subdevices to c
+ 	concat->mtd.subpage_sft = subdev[0]->subpage_sft;
+ 	concat->mtd.oobsize = subdev[0]->oobsize;
+ 	concat->mtd.oobavail = subdev[0]->oobavail;
+-	if (subdev[0]->_writev)
++
++	subdev_master = mtd_get_master(subdev[0]);
++	if (subdev_master->_writev)
+ 		concat->mtd._writev = concat_writev;
+-	if (subdev[0]->_read_oob)
++	if (subdev_master->_read_oob)
+ 		concat->mtd._read_oob = concat_read_oob;
+-	if (subdev[0]->_write_oob)
++	if (subdev_master->_write_oob)
+ 		concat->mtd._write_oob = concat_write_oob;
+-	if (subdev[0]->_block_isbad)
++	if (subdev_master->_block_isbad)
+ 		concat->mtd._block_isbad = concat_block_isbad;
+-	if (subdev[0]->_block_markbad)
++	if (subdev_master->_block_markbad)
+ 		concat->mtd._block_markbad = concat_block_markbad;
+-	if (subdev[0]->_panic_write)
++	if (subdev_master->_panic_write)
+ 		concat->mtd._panic_write = concat_panic_write;
++	if (subdev_master->_read)
++		concat->mtd._read = concat_read;
++	if (subdev_master->_write)
++		concat->mtd._write = concat_write;
+ 
+ 	concat->mtd.ecc_stats.badblocks = subdev[0]->ecc_stats.badblocks;
+ 
+@@ -721,14 +728,22 @@ struct mtd_info *mtd_concat_create(struct mtd_info *subdev[],	/* subdevices to c
+ 				    subdev[i]->flags & MTD_WRITEABLE;
+ 		}
+ 
++		subdev_master = mtd_get_master(subdev[i]);
+ 		concat->mtd.size += subdev[i]->size;
+ 		concat->mtd.ecc_stats.badblocks +=
+ 			subdev[i]->ecc_stats.badblocks;
+ 		if (concat->mtd.writesize   !=  subdev[i]->writesize ||
+ 		    concat->mtd.subpage_sft != subdev[i]->subpage_sft ||
+ 		    concat->mtd.oobsize    !=  subdev[i]->oobsize ||
+-		    !concat->mtd._read_oob  != !subdev[i]->_read_oob ||
+-		    !concat->mtd._write_oob != !subdev[i]->_write_oob) {
++		    !concat->mtd._read_oob  != !subdev_master->_read_oob ||
++		    !concat->mtd._write_oob != !subdev_master->_write_oob) {
++			/*
++			 * Check against subdev[i] for data members, because
++			 * subdev's attributes may be different from master
++			 * mtd device. Check against subdev's master mtd
++			 * device for callbacks, because the existence of
++			 * subdev's callbacks is decided by master mtd device.
++			 */
+ 			kfree(concat);
+ 			printk("Incompatible OOB or ECC data on \"%s\"\n",
+ 			       subdev[i]->name);
+@@ -744,8 +759,6 @@ struct mtd_info *mtd_concat_create(struct mtd_info *subdev[],	/* subdevices to c
+ 	concat->mtd.name = name;
+ 
+ 	concat->mtd._erase = concat_erase;
+-	concat->mtd._read = concat_read;
+-	concat->mtd._write = concat_write;
+ 	concat->mtd._sync = concat_sync;
+ 	concat->mtd._lock = concat_lock;
+ 	concat->mtd._unlock = concat_unlock;
+diff --git a/drivers/mtd/nand/raw/cafe_nand.c b/drivers/mtd/nand/raw/cafe_nand.c
+index 2b94f385a1a88..04502d22efc9c 100644
+--- a/drivers/mtd/nand/raw/cafe_nand.c
++++ b/drivers/mtd/nand/raw/cafe_nand.c
+@@ -751,7 +751,7 @@ static int cafe_nand_probe(struct pci_dev *pdev,
+ 			  "CAFE NAND", mtd);
+ 	if (err) {
+ 		dev_warn(&pdev->dev, "Could not register IRQ %d\n", pdev->irq);
+-		goto out_ior;
++		goto out_free_rs;
+ 	}
+ 
+ 	/* Disable master reset, enable NAND clock */
+@@ -795,6 +795,8 @@ static int cafe_nand_probe(struct pci_dev *pdev,
+ 	/* Disable NAND IRQ in global IRQ mask register */
+ 	cafe_writel(cafe, ~1 & cafe_readl(cafe, GLOBAL_IRQ_MASK), GLOBAL_IRQ_MASK);
+ 	free_irq(pdev->irq, mtd);
++ out_free_rs:
++	free_rs(cafe->rs);
+  out_ior:
+ 	pci_iounmap(pdev, cafe->mmio);
+  out_free_mtd:
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 52100d4fe5a25..d3b37cebcfde8 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1083,7 +1083,7 @@ static void b53_force_link(struct b53_device *dev, int port, int link)
+ 	u8 reg, val, off;
+ 
+ 	/* Override the port settings */
+-	if (port == dev->cpu_port) {
++	if (port == dev->imp_port) {
+ 		off = B53_PORT_OVERRIDE_CTRL;
+ 		val = PORT_OVERRIDE_EN;
+ 	} else {
+@@ -1107,7 +1107,7 @@ static void b53_force_port_config(struct b53_device *dev, int port,
+ 	u8 reg, val, off;
+ 
+ 	/* Override the port settings */
+-	if (port == dev->cpu_port) {
++	if (port == dev->imp_port) {
+ 		off = B53_PORT_OVERRIDE_CTRL;
+ 		val = PORT_OVERRIDE_EN;
+ 	} else {
+@@ -1175,7 +1175,7 @@ static void b53_adjust_link(struct dsa_switch *ds, int port,
+ 	b53_force_link(dev, port, phydev->link);
+ 
+ 	if (is531x5(dev) && phy_interface_is_rgmii(phydev)) {
+-		if (port == 8)
++		if (port == dev->imp_port)
+ 			off = B53_RGMII_CTRL_IMP;
+ 		else
+ 			off = B53_RGMII_CTRL_P(port);
+@@ -2238,6 +2238,7 @@ struct b53_chip_data {
+ 	const char *dev_name;
+ 	u16 vlans;
+ 	u16 enabled_ports;
++	u8 imp_port;
+ 	u8 cpu_port;
+ 	u8 vta_regs[3];
+ 	u8 arl_bins;
+@@ -2262,6 +2263,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 2,
+ 		.arl_buckets = 1024,
++		.imp_port = 5,
+ 		.cpu_port = B53_CPU_PORT_25,
+ 		.duplex_reg = B53_DUPLEX_STAT_FE,
+ 	},
+@@ -2272,6 +2274,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 2,
+ 		.arl_buckets = 1024,
++		.imp_port = 5,
+ 		.cpu_port = B53_CPU_PORT_25,
+ 		.duplex_reg = B53_DUPLEX_STAT_FE,
+ 	},
+@@ -2282,6 +2285,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2295,6 +2299,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2308,6 +2313,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS_9798,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2321,6 +2327,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x7f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS_9798,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2335,6 +2342,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
+ 		.vta_regs = B53_VTA_REGS,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+ 		.jumbo_pm_reg = B53_JUMBO_PORT_MASK,
+@@ -2347,6 +2355,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0xff,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2360,6 +2369,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1ff,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2373,6 +2383,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0, /* pdata must provide them */
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS_63XX,
+ 		.duplex_reg = B53_DUPLEX_STAT_63XX,
+@@ -2386,6 +2397,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2399,6 +2411,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1bf,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2412,6 +2425,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1bf,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2425,6 +2439,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2438,6 +2453,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2451,6 +2467,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1ff,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2464,6 +2481,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x103,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2477,6 +2495,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1ff,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2490,6 +2509,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1ff,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 256,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2515,6 +2535,7 @@ static int b53_switch_init(struct b53_device *dev)
+ 			dev->vta_regs[1] = chip->vta_regs[1];
+ 			dev->vta_regs[2] = chip->vta_regs[2];
+ 			dev->jumbo_pm_reg = chip->jumbo_pm_reg;
++			dev->imp_port = chip->imp_port;
+ 			dev->cpu_port = chip->cpu_port;
+ 			dev->num_vlans = chip->vlans;
+ 			dev->num_arl_bins = chip->arl_bins;
+@@ -2556,9 +2577,10 @@ static int b53_switch_init(struct b53_device *dev)
+ 			dev->cpu_port = 5;
+ 	}
+ 
+-	/* cpu port is always last */
+-	dev->num_ports = dev->cpu_port + 1;
+ 	dev->enabled_ports |= BIT(dev->cpu_port);
++	dev->num_ports = fls(dev->enabled_ports);
++
++	dev->ds->num_ports = min_t(unsigned int, dev->num_ports, DSA_MAX_PORTS);
+ 
+ 	/* Include non standard CPU port built-in PHYs to be probed */
+ 	if (is539x(dev) || is531x5(dev)) {
+@@ -2604,7 +2626,6 @@ struct b53_device *b53_switch_alloc(struct device *base,
+ 		return NULL;
+ 
+ 	ds->dev = base;
+-	ds->num_ports = DSA_MAX_PORTS;
+ 
+ 	dev = devm_kzalloc(base, sizeof(*dev), GFP_KERNEL);
+ 	if (!dev)
+diff --git a/drivers/net/dsa/b53/b53_priv.h b/drivers/net/dsa/b53/b53_priv.h
+index 7c67409bb186d..bdb2ade7ad622 100644
+--- a/drivers/net/dsa/b53/b53_priv.h
++++ b/drivers/net/dsa/b53/b53_priv.h
+@@ -122,6 +122,7 @@ struct b53_device {
+ 
+ 	/* used ports mask */
+ 	u16 enabled_ports;
++	unsigned int imp_port;
+ 	unsigned int cpu_port;
+ 
+ 	/* connect specific data */
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 510324916e916..690e9d9495e75 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -38,7 +38,7 @@ static unsigned int bcm_sf2_num_active_ports(struct dsa_switch *ds)
+ 	struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
+ 	unsigned int port, count = 0;
+ 
+-	for (port = 0; port < ARRAY_SIZE(priv->port_sts); port++) {
++	for (port = 0; port < ds->num_ports; port++) {
+ 		if (dsa_is_cpu_port(ds, port))
+ 			continue;
+ 		if (priv->port_sts[port].enabled)
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+index 9108b497b3c99..03eb0179ec008 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+@@ -1225,7 +1225,7 @@ int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
+ 
+ 	/* SR-IOV capability was enabled but there are no VFs*/
+ 	if (iov->total == 0) {
+-		err = -EINVAL;
++		err = 0;
+ 		goto failed;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 849ae99a955a3..26179e437bbfd 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -272,6 +272,7 @@ static const u16 bnxt_async_events_arr[] = {
+ 	ASYNC_EVENT_CMPL_EVENT_ID_PORT_PHY_CFG_CHANGE,
+ 	ASYNC_EVENT_CMPL_EVENT_ID_RESET_NOTIFY,
+ 	ASYNC_EVENT_CMPL_EVENT_ID_ERROR_RECOVERY,
++	ASYNC_EVENT_CMPL_EVENT_ID_DEBUG_NOTIFICATION,
+ 	ASYNC_EVENT_CMPL_EVENT_ID_RING_MONITOR_MSG,
+ };
+ 
+@@ -1304,8 +1305,7 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
+ 	} else {
+ 		tpa_info->hash_type = PKT_HASH_TYPE_NONE;
+ 		tpa_info->gso_type = 0;
+-		if (netif_msg_rx_err(bp))
+-			netdev_warn(bp->dev, "TPA packet without valid hash\n");
++		netif_warn(bp, rx_err, bp->dev, "TPA packet without valid hash\n");
+ 	}
+ 	tpa_info->flags2 = le32_to_cpu(tpa_start1->rx_tpa_start_cmp_flags2);
+ 	tpa_info->metadata = le32_to_cpu(tpa_start1->rx_tpa_start_cmp_metadata);
+@@ -2081,10 +2081,9 @@ static int bnxt_async_event_process(struct bnxt *bp,
+ 			goto async_event_process_exit;
+ 		set_bit(BNXT_RESET_TASK_SILENT_SP_EVENT, &bp->sp_event);
+ 		break;
+-	case ASYNC_EVENT_CMPL_EVENT_ID_RESET_NOTIFY:
+-		if (netif_msg_hw(bp))
+-			netdev_warn(bp->dev, "Received RESET_NOTIFY event, data1: 0x%x, data2: 0x%x\n",
+-				    data1, data2);
++	case ASYNC_EVENT_CMPL_EVENT_ID_RESET_NOTIFY: {
++		char *fatal_str = "non-fatal";
++
+ 		if (!bp->fw_health)
+ 			goto async_event_process_exit;
+ 
+@@ -2096,42 +2095,57 @@ static int bnxt_async_event_process(struct bnxt *bp,
+ 		if (!bp->fw_reset_max_dsecs)
+ 			bp->fw_reset_max_dsecs = BNXT_DFLT_FW_RST_MAX_DSECS;
+ 		if (EVENT_DATA1_RESET_NOTIFY_FATAL(data1)) {
+-			netdev_warn(bp->dev, "Firmware fatal reset event received\n");
++			fatal_str = "fatal";
+ 			set_bit(BNXT_STATE_FW_FATAL_COND, &bp->state);
+-		} else {
+-			netdev_warn(bp->dev, "Firmware non-fatal reset event received, max wait time %d msec\n",
+-				    bp->fw_reset_max_dsecs * 100);
+ 		}
++		netif_warn(bp, hw, bp->dev,
++			   "Firmware %s reset event, data1: 0x%x, data2: 0x%x, min wait %u ms, max wait %u ms\n",
++			   fatal_str, data1, data2,
++			   bp->fw_reset_min_dsecs * 100,
++			   bp->fw_reset_max_dsecs * 100);
+ 		set_bit(BNXT_FW_RESET_NOTIFY_SP_EVENT, &bp->sp_event);
+ 		break;
++	}
+ 	case ASYNC_EVENT_CMPL_EVENT_ID_ERROR_RECOVERY: {
+ 		struct bnxt_fw_health *fw_health = bp->fw_health;
+ 
+ 		if (!fw_health)
+ 			goto async_event_process_exit;
+ 
+-		fw_health->enabled = EVENT_DATA1_RECOVERY_ENABLED(data1);
+-		fw_health->master = EVENT_DATA1_RECOVERY_MASTER_FUNC(data1);
+-		if (!fw_health->enabled)
++		if (!EVENT_DATA1_RECOVERY_ENABLED(data1)) {
++			fw_health->enabled = false;
++			netif_info(bp, drv, bp->dev,
++				   "Error recovery info: error recovery[0]\n");
+ 			break;
+-
+-		if (netif_msg_drv(bp))
+-			netdev_info(bp->dev, "Error recovery info: error recovery[%d], master[%d], reset count[0x%x], health status: 0x%x\n",
+-				    fw_health->enabled, fw_health->master,
+-				    bnxt_fw_health_readl(bp,
+-							 BNXT_FW_RESET_CNT_REG),
+-				    bnxt_fw_health_readl(bp,
+-							 BNXT_FW_HEALTH_REG));
++		}
++		fw_health->master = EVENT_DATA1_RECOVERY_MASTER_FUNC(data1);
+ 		fw_health->tmr_multiplier =
+ 			DIV_ROUND_UP(fw_health->polling_dsecs * HZ,
+ 				     bp->current_interval * 10);
+ 		fw_health->tmr_counter = fw_health->tmr_multiplier;
+-		fw_health->last_fw_heartbeat =
+-			bnxt_fw_health_readl(bp, BNXT_FW_HEARTBEAT_REG);
++		if (!fw_health->enabled)
++			fw_health->last_fw_heartbeat =
++				bnxt_fw_health_readl(bp, BNXT_FW_HEARTBEAT_REG);
+ 		fw_health->last_fw_reset_cnt =
+ 			bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
++		netif_info(bp, drv, bp->dev,
++			   "Error recovery info: error recovery[1], master[%d], reset count[%u], health status: 0x%x\n",
++			   fw_health->master, fw_health->last_fw_reset_cnt,
++			   bnxt_fw_health_readl(bp, BNXT_FW_HEALTH_REG));
++		if (!fw_health->enabled) {
++			/* Make sure tmr_counter is set and visible to
++			 * bnxt_health_check() before setting enabled to true.
++			 */
++			smp_wmb();
++			fw_health->enabled = true;
++		}
+ 		goto async_event_process_exit;
+ 	}
++	case ASYNC_EVENT_CMPL_EVENT_ID_DEBUG_NOTIFICATION:
++		netif_notice(bp, hw, bp->dev,
++			     "Received firmware debug notification, data1: 0x%x, data2: 0x%x\n",
++			     data1, data2);
++		goto async_event_process_exit;
+ 	case ASYNC_EVENT_CMPL_EVENT_ID_RING_MONITOR_MSG: {
+ 		struct bnxt_rx_ring_info *rxr;
+ 		u16 grp_idx;
+@@ -2591,6 +2605,9 @@ static void bnxt_free_tx_skbs(struct bnxt *bp)
+ 		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
+ 		int j;
+ 
++		if (!txr->tx_buf_ring)
++			continue;
++
+ 		for (j = 0; j < max_idx;) {
+ 			struct bnxt_sw_tx_bd *tx_buf = &txr->tx_buf_ring[j];
+ 			struct sk_buff *skb;
+@@ -2675,6 +2692,9 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
+ 	}
+ 
+ skip_rx_tpa_free:
++	if (!rxr->rx_buf_ring)
++		goto skip_rx_buf_free;
++
+ 	for (i = 0; i < max_idx; i++) {
+ 		struct bnxt_sw_rx_bd *rx_buf = &rxr->rx_buf_ring[i];
+ 		dma_addr_t mapping = rx_buf->mapping;
+@@ -2697,6 +2717,11 @@ skip_rx_tpa_free:
+ 			kfree(data);
+ 		}
+ 	}
++
++skip_rx_buf_free:
++	if (!rxr->rx_agg_ring)
++		goto skip_rx_agg_free;
++
+ 	for (i = 0; i < max_agg_idx; i++) {
+ 		struct bnxt_sw_rx_agg_bd *rx_agg_buf = &rxr->rx_agg_ring[i];
+ 		struct page *page = rx_agg_buf->page;
+@@ -2713,6 +2738,8 @@ skip_rx_tpa_free:
+ 
+ 		__free_page(page);
+ 	}
++
++skip_rx_agg_free:
+ 	if (rxr->rx_page) {
+ 		__free_page(rxr->rx_page);
+ 		rxr->rx_page = NULL;
+@@ -10719,6 +10746,8 @@ static void bnxt_fw_health_check(struct bnxt *bp)
+ 	if (!fw_health->enabled || test_bit(BNXT_STATE_IN_FW_RESET, &bp->state))
+ 		return;
+ 
++	/* Make sure it is enabled before checking the tmr_counter. */
++	smp_rmb();
+ 	if (fw_health->tmr_counter) {
+ 		fw_health->tmr_counter--;
+ 		return;
+@@ -11623,6 +11652,11 @@ static void bnxt_fw_reset_task(struct work_struct *work)
+ 			dev_close(bp->dev);
+ 		}
+ 
++		if ((bp->fw_cap & BNXT_FW_CAP_ERROR_RECOVERY) &&
++		    bp->fw_health->enabled) {
++			bp->fw_health->last_fw_reset_cnt =
++				bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
++		}
+ 		bp->fw_reset_state = 0;
+ 		/* Make sure fw_reset_state is 0 before clearing the flag */
+ 		smp_mb__before_atomic();
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+index 8b0e916afe6b1..e2fd625fc6d20 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+@@ -452,7 +452,7 @@ static int bnxt_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
+ 		return rc;
+ 
+ 	ver_resp = &bp->ver_resp;
+-	sprintf(buf, "%X", ver_resp->chip_rev);
++	sprintf(buf, "%c%d", 'A' + ver_resp->chip_rev, ver_resp->chip_metal);
+ 	rc = bnxt_dl_info_put(bp, req, BNXT_VERSION_FIXED,
+ 			      DEVLINK_INFO_VERSION_GENERIC_ASIC_REV, buf);
+ 	if (rc)
+@@ -474,8 +474,8 @@ static int bnxt_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
+ 	if (BNXT_PF(bp) && !bnxt_hwrm_get_nvm_cfg_ver(bp, &nvm_cfg_ver)) {
+ 		u32 ver = nvm_cfg_ver.vu32;
+ 
+-		sprintf(buf, "%d.%d.%d", (ver >> 16) & 0xf, (ver >> 8) & 0xf,
+-			ver & 0xf);
++		sprintf(buf, "%d.%d.%d", (ver >> 16) & 0xff, (ver >> 8) & 0xff,
++			ver & 0xff);
+ 		rc = bnxt_dl_info_put(bp, req, BNXT_VERSION_STORED,
+ 				      DEVLINK_INFO_VERSION_GENERIC_FW_PSID,
+ 				      buf);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+index 5e4429b14b8ca..2186706cf9130 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+@@ -1870,9 +1870,6 @@ bnxt_tc_indr_block_cb_lookup(struct bnxt *bp, struct net_device *netdev)
+ {
+ 	struct bnxt_flower_indr_block_cb_priv *cb_priv;
+ 
+-	/* All callback list access should be protected by RTNL. */
+-	ASSERT_RTNL();
+-
+ 	list_for_each_entry(cb_priv, &bp->tc_indr_block_list, list)
+ 		if (cb_priv->tunnel_netdev == netdev)
+ 			return cb_priv;
+diff --git a/drivers/net/ethernet/chelsio/cxgb/cxgb2.c b/drivers/net/ethernet/chelsio/cxgb/cxgb2.c
+index 0e4a0f413960a..c6db85fe16291 100644
+--- a/drivers/net/ethernet/chelsio/cxgb/cxgb2.c
++++ b/drivers/net/ethernet/chelsio/cxgb/cxgb2.c
+@@ -1153,6 +1153,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (!adapter->registered_device_map) {
+ 		pr_err("%s: could not register any net devices\n",
+ 		       pci_name(pdev));
++		err = -EINVAL;
+ 		goto out_release_adapter_res;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 92ca3b21968fe..936b9cfe1a62f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -60,6 +60,7 @@ MODULE_PARM_DESC(debug, " Network interface message level setting");
+ #define HNS3_OUTER_VLAN_TAG	2
+ 
+ #define HNS3_MIN_TX_LEN		33U
++#define HNS3_MIN_TUN_PKT_LEN	65U
+ 
+ /* hns3_pci_tbl - PCI Device ID Table
+  *
+@@ -913,8 +914,11 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
+ 			       l4.tcp->doff);
+ 		break;
+ 	case IPPROTO_UDP:
+-		if (hns3_tunnel_csum_bug(skb))
+-			return skb_checksum_help(skb);
++		if (hns3_tunnel_csum_bug(skb)) {
++			int ret = skb_put_padto(skb, HNS3_MIN_TUN_PKT_LEN);
++
++			return ret ? ret : skb_checksum_help(skb);
++		}
+ 
+ 		hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4CS_B, 1);
+ 		hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4T_S,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 2261de5caf863..59ec538eba1f0 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -1463,9 +1463,10 @@ static void hclge_init_kdump_kernel_config(struct hclge_dev *hdev)
+ 
+ static int hclge_configure(struct hclge_dev *hdev)
+ {
++	const struct cpumask *cpumask = cpu_online_mask;
+ 	struct hclge_cfg cfg;
+ 	unsigned int i;
+-	int ret;
++	int node, ret;
+ 
+ 	ret = hclge_get_cfg(hdev, &cfg);
+ 	if (ret)
+@@ -1526,11 +1527,12 @@ static int hclge_configure(struct hclge_dev *hdev)
+ 
+ 	hclge_init_kdump_kernel_config(hdev);
+ 
+-	/* Set the init affinity based on pci func number */
+-	i = cpumask_weight(cpumask_of_node(dev_to_node(&hdev->pdev->dev)));
+-	i = i ? PCI_FUNC(hdev->pdev->devfn) % i : 0;
+-	cpumask_set_cpu(cpumask_local_spread(i, dev_to_node(&hdev->pdev->dev)),
+-			&hdev->affinity_mask);
++	/* Set the affinity based on numa node */
++	node = dev_to_node(&hdev->pdev->dev);
++	if (node != NUMA_NO_NODE)
++		cpumask = cpumask_of_node(node);
++
++	cpumask_copy(&hdev->affinity_mask, cpumask);
+ 
+ 	return ret;
+ }
+@@ -7003,11 +7005,12 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
+ 	hclge_clear_arfs_rules(handle);
+ 	spin_unlock_bh(&hdev->fd_rule_lock);
+ 
+-	/* If it is not PF reset, the firmware will disable the MAC,
++	/* If it is not PF reset or FLR, the firmware will disable the MAC,
+ 	 * so it only need to stop phy here.
+ 	 */
+ 	if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) &&
+-	    hdev->reset_type != HNAE3_FUNC_RESET) {
++	    hdev->reset_type != HNAE3_FUNC_RESET &&
++	    hdev->reset_type != HNAE3_FLR_RESET) {
+ 		hclge_mac_stop_phy(hdev);
+ 		hclge_update_link_status(hdev);
+ 		return;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index d3010d5ab3665..447457cacf973 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2352,6 +2352,8 @@ static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data)
+ 
+ 	hclgevf_enable_vector(&hdev->misc_vector, false);
+ 	event_cause = hclgevf_check_evt_cause(hdev, &clearval);
++	if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER)
++		hclgevf_clear_event_cause(hdev, clearval);
+ 
+ 	switch (event_cause) {
+ 	case HCLGEVF_VECTOR0_EVENT_RST:
+@@ -2364,10 +2366,8 @@ static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data)
+ 		break;
+ 	}
+ 
+-	if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER) {
+-		hclgevf_clear_event_cause(hdev, clearval);
++	if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER)
+ 		hclgevf_enable_vector(&hdev->misc_vector, true);
+-	}
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 3134c1988db36..bb8d0a0f48ee0 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -4478,6 +4478,14 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq,
+ 		return 0;
+ 	}
+ 
++	if (adapter->failover_pending) {
++		adapter->init_done_rc = -EAGAIN;
++		netdev_dbg(netdev, "Failover pending, ignoring login response\n");
++		complete(&adapter->init_done);
++		/* login response buffer will be released on reset */
++		return 0;
++	}
++
+ 	netdev->mtu = adapter->req_mtu - ETH_HLEN;
+ 
+ 	netdev_dbg(adapter->netdev, "Login Response Buffer:\n");
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 644d28b0692b3..c26652436c53a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -84,7 +84,8 @@ static void rvu_setup_hw_capabilities(struct rvu *rvu)
+  */
+ int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero)
+ {
+-	unsigned long timeout = jiffies + usecs_to_jiffies(10000);
++	unsigned long timeout = jiffies + usecs_to_jiffies(20000);
++	bool twice = false;
+ 	void __iomem *reg;
+ 	u64 reg_val;
+ 
+@@ -99,6 +100,15 @@ again:
+ 		usleep_range(1, 5);
+ 		goto again;
+ 	}
++	/* In scenarios where CPU is scheduled out before checking
++	 * 'time_before' (above) and gets scheduled in such that
++	 * jiffies are beyond timeout value, then check again if HW is
++	 * done with the operation in the meantime.
++	 */
++	if (!twice) {
++		twice = true;
++		goto again;
++	}
+ 	return -EBUSY;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index 3dfcb20e97c6f..857be86b4a11a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -1007,7 +1007,7 @@ int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer)
+ 	err = mlx5_core_alloc_pd(dev, &tracer->buff.pdn);
+ 	if (err) {
+ 		mlx5_core_warn(dev, "FWTracer: Failed to allocate PD %d\n", err);
+-		return err;
++		goto err_cancel_work;
+ 	}
+ 
+ 	err = mlx5_fw_tracer_create_mkey(tracer);
+@@ -1031,6 +1031,7 @@ err_notifier_unregister:
+ 	mlx5_core_destroy_mkey(dev, &tracer->buff.mkey);
+ err_dealloc_pd:
+ 	mlx5_core_dealloc_pd(dev, tracer->buff.pdn);
++err_cancel_work:
+ 	cancel_work_sync(&tracer->read_fw_strings_work);
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+index e6f782743fbe8..2fdea05eec1de 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+@@ -298,9 +298,6 @@ mlx5e_rep_indr_block_priv_lookup(struct mlx5e_rep_priv *rpriv,
+ {
+ 	struct mlx5e_rep_indr_block_priv *cb_priv;
+ 
+-	/* All callback list access should be protected by RTNL. */
+-	ASSERT_RTNL();
+-
+ 	list_for_each_entry(cb_priv,
+ 			    &rpriv->uplink_priv.tc_indr_block_priv_list,
+ 			    list)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 1d4b4e6f6fb41..0ff034b0866e2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1675,14 +1675,13 @@ static int build_match_list(struct match_list *match_head,
+ 
+ 		curr_match = kmalloc(sizeof(*curr_match), GFP_ATOMIC);
+ 		if (!curr_match) {
++			rcu_read_unlock();
+ 			free_match_list(match_head, ft_locked);
+-			err = -ENOMEM;
+-			goto out;
++			return -ENOMEM;
+ 		}
+ 		curr_match->g = g;
+ 		list_add_tail(&curr_match->list, &match_head->list);
+ 	}
+-out:
+ 	rcu_read_unlock();
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+index e95969c462e46..3f34e6da72958 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+@@ -1732,9 +1732,6 @@ nfp_flower_indr_block_cb_priv_lookup(struct nfp_app *app,
+ 	struct nfp_flower_indr_block_cb_priv *cb_priv;
+ 	struct nfp_flower_priv *priv = app->priv;
+ 
+-	/* All callback list access should be protected by RTNL. */
+-	ASSERT_RTNL();
+-
+ 	list_for_each_entry(cb_priv, &priv->indr_block_cb_priv, list)
+ 		if (cb_priv->netdev == netdev)
+ 			return cb_priv;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+index caeef25c89bb1..2cd14ee95c1ff 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+@@ -3376,6 +3376,7 @@ qed_mcp_get_nvm_image_att(struct qed_hwfn *p_hwfn,
+ 			  struct qed_nvm_image_att *p_image_att)
+ {
+ 	enum nvm_image_type type;
++	int rc;
+ 	u32 i;
+ 
+ 	/* Translate image_id into MFW definitions */
+@@ -3404,7 +3405,10 @@ qed_mcp_get_nvm_image_att(struct qed_hwfn *p_hwfn,
+ 		return -EINVAL;
+ 	}
+ 
+-	qed_mcp_nvm_info_populate(p_hwfn);
++	rc = qed_mcp_nvm_info_populate(p_hwfn);
++	if (rc)
++		return rc;
++
+ 	for (i = 0; i < p_hwfn->nvm_info.num_images; i++)
+ 		if (type == p_hwfn->nvm_info.image_att[i].image_type)
+ 			break;
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c
+index e6784023bce42..aa7ee43f92525 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c
+@@ -439,7 +439,6 @@ int qlcnic_pinit_from_rom(struct qlcnic_adapter *adapter)
+ 	QLCWR32(adapter, QLCNIC_CRB_PEG_NET_4 + 0x3c, 1);
+ 	msleep(20);
+ 
+-	qlcnic_rom_unlock(adapter);
+ 	/* big hammer don't reset CAM block on reset */
+ 	QLCWR32(adapter, QLCNIC_ROMUSB_GLB_SW_RESET, 0xfeffffff);
+ 
+diff --git a/drivers/net/ethernet/rdc/r6040.c b/drivers/net/ethernet/rdc/r6040.c
+index 7c74318620b1d..ccdfa930130bc 100644
+--- a/drivers/net/ethernet/rdc/r6040.c
++++ b/drivers/net/ethernet/rdc/r6040.c
+@@ -119,6 +119,8 @@
+ #define PHY_ST		0x8A	/* PHY status register */
+ #define MAC_SM		0xAC	/* MAC status machine */
+ #define  MAC_SM_RST	0x0002	/* MAC status machine reset */
++#define MD_CSC		0xb6	/* MDC speed control register */
++#define  MD_CSC_DEFAULT	0x0030
+ #define MAC_ID		0xBE	/* Identifier register */
+ 
+ #define TX_DCNT		0x80	/* TX descriptor count */
+@@ -355,8 +357,9 @@ static void r6040_reset_mac(struct r6040_private *lp)
+ {
+ 	void __iomem *ioaddr = lp->base;
+ 	int limit = MAC_DEF_TIMEOUT;
+-	u16 cmd;
++	u16 cmd, md_csc;
+ 
++	md_csc = ioread16(ioaddr + MD_CSC);
+ 	iowrite16(MAC_RST, ioaddr + MCR1);
+ 	while (limit--) {
+ 		cmd = ioread16(ioaddr + MCR1);
+@@ -368,6 +371,10 @@ static void r6040_reset_mac(struct r6040_private *lp)
+ 	iowrite16(MAC_SM_RST, ioaddr + MAC_SM);
+ 	iowrite16(0, ioaddr + MAC_SM);
+ 	mdelay(5);
++
++	/* Restore MDIO clock frequency */
++	if (md_csc != MD_CSC_DEFAULT)
++		iowrite16(md_csc, ioaddr + MD_CSC);
+ }
+ 
+ static void r6040_init_mac_regs(struct net_device *dev)
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index 5cab2d3c00236..8927d59977458 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -2533,6 +2533,7 @@ static netdev_tx_t sh_eth_start_xmit(struct sk_buff *skb,
+ 	else
+ 		txdesc->status |= cpu_to_le32(TD_TACT);
+ 
++	wmb(); /* cur_tx must be incremented after TACT bit was set */
+ 	mdp->cur_tx++;
+ 
+ 	if (!(sh_eth_read(ndev, EDTRR) & mdp->cd->edtrr_trns))
+diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
+index b3790aa952a15..0747866d60abc 100644
+--- a/drivers/net/ipa/ipa_table.c
++++ b/drivers/net/ipa/ipa_table.c
+@@ -451,7 +451,8 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
+ 	 * table region determines the number of entries it has.
+ 	 */
+ 	if (filter) {
+-		count = hweight32(ipa->filter_map);
++		/* Include one extra "slot" to hold the filter map itself */
++		count = 1 + hweight32(ipa->filter_map);
+ 		hash_count = hash_mem->size ? count : 0;
+ 	} else {
+ 		count = mem->size / IPA_TABLE_ENTRY_SIZE;
+diff --git a/drivers/net/phy/dp83640_reg.h b/drivers/net/phy/dp83640_reg.h
+index 21aa24c741b96..daae7fa58fb82 100644
+--- a/drivers/net/phy/dp83640_reg.h
++++ b/drivers/net/phy/dp83640_reg.h
+@@ -5,7 +5,7 @@
+ #ifndef HAVE_DP83640_REGISTERS
+ #define HAVE_DP83640_REGISTERS
+ 
+-#define PAGE0                     0x0000
++/* #define PAGE0                  0x0000 */
+ #define PHYCR2                    0x001c /* PHY Control Register 2 */
+ 
+ #define PAGE4                     0x0004
+diff --git a/drivers/net/usb/cdc_mbim.c b/drivers/net/usb/cdc_mbim.c
+index eb100eb33de3d..77ac5a721e7b6 100644
+--- a/drivers/net/usb/cdc_mbim.c
++++ b/drivers/net/usb/cdc_mbim.c
+@@ -653,6 +653,11 @@ static const struct usb_device_id mbim_devs[] = {
+ 	  .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
+ 	},
+ 
++	/* Telit LN920 */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x1061, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
++	  .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
++	},
++
+ 	/* default entry */
+ 	{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+ 	  .driver_info = (unsigned long)&cdc_mbim_info_zlp,
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index 5b3aff2c279f7..f269337c82c58 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -2537,13 +2537,17 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 	if (!hso_net->mux_bulk_tx_buf)
+ 		goto err_free_tx_urb;
+ 
+-	add_net_device(hso_dev);
++	result = add_net_device(hso_dev);
++	if (result) {
++		dev_err(&interface->dev, "Failed to add net device\n");
++		goto err_free_tx_buf;
++	}
+ 
+ 	/* registering our net device */
+ 	result = register_netdev(net);
+ 	if (result) {
+ 		dev_err(&interface->dev, "Failed to register device\n");
+-		goto err_free_tx_buf;
++		goto err_rmv_ndev;
+ 	}
+ 
+ 	hso_log_port(hso_dev);
+@@ -2552,8 +2556,9 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 
+ 	return hso_dev;
+ 
+-err_free_tx_buf:
++err_rmv_ndev:
+ 	remove_net_device(hso_dev);
++err_free_tx_buf:
+ 	kfree(hso_net->mux_bulk_tx_buf);
+ err_free_tx_urb:
+ 	usb_free_urb(hso_net->mux_bulk_tx_urb);
+diff --git a/drivers/ntb/test/ntb_msi_test.c b/drivers/ntb/test/ntb_msi_test.c
+index 7095ecd6223a7..4e18e08776c98 100644
+--- a/drivers/ntb/test/ntb_msi_test.c
++++ b/drivers/ntb/test/ntb_msi_test.c
+@@ -369,8 +369,10 @@ static int ntb_msit_probe(struct ntb_client *client, struct ntb_dev *ntb)
+ 	if (ret)
+ 		goto remove_dbgfs;
+ 
+-	if (!nm->isr_ctx)
++	if (!nm->isr_ctx) {
++		ret = -ENOMEM;
+ 		goto remove_dbgfs;
++	}
+ 
+ 	ntb_link_enable(ntb, NTB_SPEED_AUTO, NTB_WIDTH_AUTO);
+ 
+diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
+index 89df1350fefd8..65e1e5cf1b29a 100644
+--- a/drivers/ntb/test/ntb_perf.c
++++ b/drivers/ntb/test/ntb_perf.c
+@@ -598,6 +598,7 @@ static int perf_setup_inbuf(struct perf_peer *peer)
+ 		return -ENOMEM;
+ 	}
+ 	if (!IS_ALIGNED(peer->inbuf_xlat, xlat_align)) {
++		ret = -EINVAL;
+ 		dev_err(&perf->ntb->dev, "Unaligned inbuf allocated\n");
+ 		goto err_free_inbuf;
+ 	}
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index c9a925999c6ea..a6b3b07627630 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -273,6 +273,12 @@ static inline void nvme_tcp_send_all(struct nvme_tcp_queue *queue)
+ 	} while (ret > 0);
+ }
+ 
++static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
++{
++	return !list_empty(&queue->send_list) ||
++		!llist_empty(&queue->req_list) || queue->more_requests;
++}
++
+ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
+ 		bool sync, bool last)
+ {
+@@ -293,9 +299,10 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
+ 		nvme_tcp_send_all(queue);
+ 		queue->more_requests = false;
+ 		mutex_unlock(&queue->send_mutex);
+-	} else if (last) {
+-		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
+ 	}
++
++	if (last && nvme_tcp_queue_more(queue))
++		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
+ }
+ 
+ static void nvme_tcp_process_req_list(struct nvme_tcp_queue *queue)
+@@ -890,12 +897,6 @@ done:
+ 	read_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+-static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
+-{
+-	return !list_empty(&queue->send_list) ||
+-		!llist_empty(&queue->req_list) || queue->more_requests;
+-}
+-
+ static inline void nvme_tcp_done_send_req(struct nvme_tcp_queue *queue)
+ {
+ 	queue->request = NULL;
+@@ -1132,8 +1133,7 @@ static void nvme_tcp_io_work(struct work_struct *w)
+ 				pending = true;
+ 			else if (unlikely(result < 0))
+ 				break;
+-		} else
+-			pending = !llist_empty(&queue->req_list);
++		}
+ 
+ 		result = nvme_tcp_try_recv(queue);
+ 		if (result > 0)
+diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
+index d34ca0fda0f66..8a6d68e13f301 100644
+--- a/drivers/pci/controller/cadence/pci-j721e.c
++++ b/drivers/pci/controller/cadence/pci-j721e.c
+@@ -25,6 +25,7 @@
+ #define STATUS_REG_SYS_2	0x508
+ #define STATUS_CLR_REG_SYS_2	0x708
+ #define LINK_DOWN		BIT(1)
++#define J7200_LINK_DOWN		BIT(10)
+ 
+ #define J721E_PCIE_USER_CMD_STATUS	0x4
+ #define LINK_TRAINING_ENABLE		BIT(0)
+@@ -54,6 +55,7 @@ struct j721e_pcie {
+ 	struct cdns_pcie	*cdns_pcie;
+ 	void __iomem		*user_cfg_base;
+ 	void __iomem		*intd_cfg_base;
++	u32			linkdown_irq_regfield;
+ };
+ 
+ enum j721e_pcie_mode {
+@@ -63,7 +65,10 @@ enum j721e_pcie_mode {
+ 
+ struct j721e_pcie_data {
+ 	enum j721e_pcie_mode	mode;
+-	bool quirk_retrain_flag;
++	unsigned int		quirk_retrain_flag:1;
++	unsigned int		quirk_detect_quiet_flag:1;
++	u32			linkdown_irq_regfield;
++	unsigned int		byte_access_allowed:1;
+ };
+ 
+ static inline u32 j721e_pcie_user_readl(struct j721e_pcie *pcie, u32 offset)
+@@ -95,12 +100,12 @@ static irqreturn_t j721e_pcie_link_irq_handler(int irq, void *priv)
+ 	u32 reg;
+ 
+ 	reg = j721e_pcie_intd_readl(pcie, STATUS_REG_SYS_2);
+-	if (!(reg & LINK_DOWN))
++	if (!(reg & pcie->linkdown_irq_regfield))
+ 		return IRQ_NONE;
+ 
+ 	dev_err(dev, "LINK DOWN!\n");
+ 
+-	j721e_pcie_intd_writel(pcie, STATUS_CLR_REG_SYS_2, LINK_DOWN);
++	j721e_pcie_intd_writel(pcie, STATUS_CLR_REG_SYS_2, pcie->linkdown_irq_regfield);
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -109,7 +114,7 @@ static void j721e_pcie_config_link_irq(struct j721e_pcie *pcie)
+ 	u32 reg;
+ 
+ 	reg = j721e_pcie_intd_readl(pcie, ENABLE_REG_SYS_2);
+-	reg |= LINK_DOWN;
++	reg |= pcie->linkdown_irq_regfield;
+ 	j721e_pcie_intd_writel(pcie, ENABLE_REG_SYS_2, reg);
+ }
+ 
+@@ -272,10 +277,36 @@ static struct pci_ops cdns_ti_pcie_host_ops = {
+ static const struct j721e_pcie_data j721e_pcie_rc_data = {
+ 	.mode = PCI_MODE_RC,
+ 	.quirk_retrain_flag = true,
++	.byte_access_allowed = false,
++	.linkdown_irq_regfield = LINK_DOWN,
+ };
+ 
+ static const struct j721e_pcie_data j721e_pcie_ep_data = {
+ 	.mode = PCI_MODE_EP,
++	.linkdown_irq_regfield = LINK_DOWN,
++};
++
++static const struct j721e_pcie_data j7200_pcie_rc_data = {
++	.mode = PCI_MODE_RC,
++	.quirk_detect_quiet_flag = true,
++	.linkdown_irq_regfield = J7200_LINK_DOWN,
++	.byte_access_allowed = true,
++};
++
++static const struct j721e_pcie_data j7200_pcie_ep_data = {
++	.mode = PCI_MODE_EP,
++	.quirk_detect_quiet_flag = true,
++};
++
++static const struct j721e_pcie_data am64_pcie_rc_data = {
++	.mode = PCI_MODE_RC,
++	.linkdown_irq_regfield = J7200_LINK_DOWN,
++	.byte_access_allowed = true,
++};
++
++static const struct j721e_pcie_data am64_pcie_ep_data = {
++	.mode = PCI_MODE_EP,
++	.linkdown_irq_regfield = J7200_LINK_DOWN,
+ };
+ 
+ static const struct of_device_id of_j721e_pcie_match[] = {
+@@ -287,6 +318,22 @@ static const struct of_device_id of_j721e_pcie_match[] = {
+ 		.compatible = "ti,j721e-pcie-ep",
+ 		.data = &j721e_pcie_ep_data,
+ 	},
++	{
++		.compatible = "ti,j7200-pcie-host",
++		.data = &j7200_pcie_rc_data,
++	},
++	{
++		.compatible = "ti,j7200-pcie-ep",
++		.data = &j7200_pcie_ep_data,
++	},
++	{
++		.compatible = "ti,am64-pcie-host",
++		.data = &am64_pcie_rc_data,
++	},
++	{
++		.compatible = "ti,am64-pcie-ep",
++		.data = &am64_pcie_ep_data,
++	},
+ 	{},
+ };
+ 
+@@ -319,6 +366,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ 
+ 	pcie->dev = dev;
+ 	pcie->mode = mode;
++	pcie->linkdown_irq_regfield = data->linkdown_irq_regfield;
+ 
+ 	base = devm_platform_ioremap_resource_byname(pdev, "intd_cfg");
+ 	if (IS_ERR(base))
+@@ -378,9 +426,11 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ 			goto err_get_sync;
+ 		}
+ 
+-		bridge->ops = &cdns_ti_pcie_host_ops;
++		if (!data->byte_access_allowed)
++			bridge->ops = &cdns_ti_pcie_host_ops;
+ 		rc = pci_host_bridge_priv(bridge);
+ 		rc->quirk_retrain_flag = data->quirk_retrain_flag;
++		rc->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;
+ 
+ 		cdns_pcie = &rc->pcie;
+ 		cdns_pcie->dev = dev;
+@@ -430,6 +480,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ 			ret = -ENOMEM;
+ 			goto err_get_sync;
+ 		}
++		ep->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;
+ 
+ 		cdns_pcie = &ep->pcie;
+ 		cdns_pcie->dev = dev;
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+index 84cc58dc8512c..1af14474abcf1 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+@@ -578,6 +578,10 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
+ 	ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
+ 	/* Reserve region 0 for IRQs */
+ 	set_bit(0, &ep->ob_region_map);
++
++	if (ep->quirk_detect_quiet_flag)
++		cdns_pcie_detect_quiet_min_delay_set(&ep->pcie);
++
+ 	spin_lock_init(&ep->lock);
+ 
+ 	return 0;
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index 73dcf8cf98fbf..a40ed9e12b4bb 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -497,6 +497,9 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+ 		return PTR_ERR(rc->cfg_base);
+ 	rc->cfg_res = res;
+ 
++	if (rc->quirk_detect_quiet_flag)
++		cdns_pcie_detect_quiet_min_delay_set(&rc->pcie);
++
+ 	ret = cdns_pcie_start_link(pcie);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to start link\n");
+diff --git a/drivers/pci/controller/cadence/pcie-cadence.c b/drivers/pci/controller/cadence/pcie-cadence.c
+index 3c3646502d05c..52767f26048fd 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence.c
++++ b/drivers/pci/controller/cadence/pcie-cadence.c
+@@ -7,6 +7,22 @@
+ 
+ #include "pcie-cadence.h"
+ 
++void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie)
++{
++	u32 delay = 0x3;
++	u32 ltssm_control_cap;
++
++	/*
++	 * Set the LTSSM Detect Quiet state min. delay to 2ms.
++	 */
++	ltssm_control_cap = cdns_pcie_readl(pcie, CDNS_PCIE_LTSSM_CONTROL_CAP);
++	ltssm_control_cap = ((ltssm_control_cap &
++			    ~CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK) |
++			    CDNS_PCIE_DETECT_QUIET_MIN_DELAY(delay));
++
++	cdns_pcie_writel(pcie, CDNS_PCIE_LTSSM_CONTROL_CAP, ltssm_control_cap);
++}
++
+ void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
+ 				   u32 r, bool is_io,
+ 				   u64 cpu_addr, u64 pci_addr, size_t size)
+diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
+index 6705a5fedfbb0..e0b59730bffb7 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence.h
++++ b/drivers/pci/controller/cadence/pcie-cadence.h
+@@ -189,6 +189,14 @@
+ /* AXI link down register */
+ #define CDNS_PCIE_AT_LINKDOWN (CDNS_PCIE_AT_BASE + 0x0824)
+ 
++/* LTSSM Capabilities register */
++#define CDNS_PCIE_LTSSM_CONTROL_CAP             (CDNS_PCIE_LM_BASE + 0x0054)
++#define  CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK  GENMASK(2, 1)
++#define  CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT 1
++#define  CDNS_PCIE_DETECT_QUIET_MIN_DELAY(delay) \
++	 (((delay) << CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT) & \
++	 CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK)
++
+ enum cdns_pcie_rp_bar {
+ 	RP_BAR_UNDEFINED = -1,
+ 	RP_BAR0,
+@@ -291,6 +299,7 @@ struct cdns_pcie {
+  * @avail_ib_bar: Satus of RP_BAR0, RP_BAR1 and	RP_NO_BAR if it's free or
+  *                available
+  * @quirk_retrain_flag: Retrain link as quirk for PCIe Gen2
++ * @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk
+  */
+ struct cdns_pcie_rc {
+ 	struct cdns_pcie	pcie;
+@@ -299,7 +308,8 @@ struct cdns_pcie_rc {
+ 	u32			vendor_id;
+ 	u32			device_id;
+ 	bool			avail_ib_bar[CDNS_PCIE_RP_MAX_IB];
+-	bool                    quirk_retrain_flag;
++	unsigned int		quirk_retrain_flag:1;
++	unsigned int		quirk_detect_quiet_flag:1;
+ };
+ 
+ /**
+@@ -330,6 +340,7 @@ struct cdns_pcie_epf {
+  *        registers fields (RMW) accessible by both remote RC and EP to
+  *        minimize time between read and write
+  * @epf: Structure to hold info about endpoint function
++ * @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk
+  */
+ struct cdns_pcie_ep {
+ 	struct cdns_pcie	pcie;
+@@ -344,6 +355,7 @@ struct cdns_pcie_ep {
+ 	/* protect writing to PCI_STATUS while raising legacy interrupts */
+ 	spinlock_t		lock;
+ 	struct cdns_pcie_epf	*epf;
++	unsigned int		quirk_detect_quiet_flag:1;
+ };
+ 
+ 
+@@ -504,6 +516,9 @@ static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
+ 	return 0;
+ }
+ #endif
++
++void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie);
++
+ void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
+ 				   u32 r, bool is_io,
+ 				   u64 cpu_addr, u64 pci_addr, size_t size);
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index 506f6a294eac3..a5b677ec07690 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -515,19 +515,19 @@ static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
+ 	struct tegra_pcie_dw *pcie = arg;
+ 	struct dw_pcie_ep *ep = &pcie->pci.ep;
+ 	int spurious = 1;
+-	u32 val, tmp;
++	u32 status_l0, status_l1, link_status;
+ 
+-	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
+-	if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
+-		val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
+-		appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0);
++	status_l0 = appl_readl(pcie, APPL_INTR_STATUS_L0);
++	if (status_l0 & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
++		status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
++		appl_writel(pcie, status_l1, APPL_INTR_STATUS_L1_0_0);
+ 
+-		if (val & APPL_INTR_STATUS_L1_0_0_HOT_RESET_DONE)
++		if (status_l1 & APPL_INTR_STATUS_L1_0_0_HOT_RESET_DONE)
+ 			pex_ep_event_hot_rst_done(pcie);
+ 
+-		if (val & APPL_INTR_STATUS_L1_0_0_RDLH_LINK_UP_CHGED) {
+-			tmp = appl_readl(pcie, APPL_LINK_STATUS);
+-			if (tmp & APPL_LINK_STATUS_RDLH_LINK_UP) {
++		if (status_l1 & APPL_INTR_STATUS_L1_0_0_RDLH_LINK_UP_CHGED) {
++			link_status = appl_readl(pcie, APPL_LINK_STATUS);
++			if (link_status & APPL_LINK_STATUS_RDLH_LINK_UP) {
+ 				dev_dbg(pcie->dev, "Link is up with Host\n");
+ 				dw_pcie_ep_linkup(ep);
+ 			}
+@@ -536,11 +536,11 @@ static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
+ 		spurious = 0;
+ 	}
+ 
+-	if (val & APPL_INTR_STATUS_L0_PCI_CMD_EN_INT) {
+-		val = appl_readl(pcie, APPL_INTR_STATUS_L1_15);
+-		appl_writel(pcie, val, APPL_INTR_STATUS_L1_15);
++	if (status_l0 & APPL_INTR_STATUS_L0_PCI_CMD_EN_INT) {
++		status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_15);
++		appl_writel(pcie, status_l1, APPL_INTR_STATUS_L1_15);
+ 
+-		if (val & APPL_INTR_STATUS_L1_15_CFG_BME_CHGED)
++		if (status_l1 & APPL_INTR_STATUS_L1_15_CFG_BME_CHGED)
+ 			return IRQ_WAKE_THREAD;
+ 
+ 		spurious = 0;
+@@ -548,8 +548,8 @@ static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
+ 
+ 	if (spurious) {
+ 		dev_warn(pcie->dev, "Random interrupt (STATUS = 0x%08X)\n",
+-			 val);
+-		appl_writel(pcie, val, APPL_INTR_STATUS_L0);
++			 status_l0);
++		appl_writel(pcie, status_l0, APPL_INTR_STATUS_L0);
+ 	}
+ 
+ 	return IRQ_HANDLED;
+@@ -1778,7 +1778,7 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
+ 	val = (ep->msi_mem_phys & MSIX_ADDR_MATCH_LOW_OFF_MASK);
+ 	val |= MSIX_ADDR_MATCH_LOW_OFF_EN;
+ 	dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_LOW_OFF, val);
+-	val = (lower_32_bits(ep->msi_mem_phys) & MSIX_ADDR_MATCH_HIGH_OFF_MASK);
++	val = (upper_32_bits(ep->msi_mem_phys) & MSIX_ADDR_MATCH_HIGH_OFF_MASK);
+ 	dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_HIGH_OFF, val);
+ 
+ 	ret = dw_pcie_ep_init_complete(ep);
+diff --git a/drivers/pci/controller/pci-tegra.c b/drivers/pci/controller/pci-tegra.c
+index 1a2af963599ca..b4eb75f25906e 100644
+--- a/drivers/pci/controller/pci-tegra.c
++++ b/drivers/pci/controller/pci-tegra.c
+@@ -2160,13 +2160,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
+ 		rp->np = port;
+ 
+ 		rp->base = devm_pci_remap_cfg_resource(dev, &rp->regs);
+-		if (IS_ERR(rp->base))
+-			return PTR_ERR(rp->base);
++		if (IS_ERR(rp->base)) {
++			err = PTR_ERR(rp->base);
++			goto err_node_put;
++		}
+ 
+ 		label = devm_kasprintf(dev, GFP_KERNEL, "pex-reset-%u", index);
+ 		if (!label) {
+-			dev_err(dev, "failed to create reset GPIO label\n");
+-			return -ENOMEM;
++			err = -ENOMEM;
++			goto err_node_put;
+ 		}
+ 
+ 		/*
+@@ -2184,7 +2186,8 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
+ 			} else {
+ 				dev_err(dev, "failed to get reset GPIO: %ld\n",
+ 					PTR_ERR(rp->reset_gpio));
+-				return PTR_ERR(rp->reset_gpio);
++				err = PTR_ERR(rp->reset_gpio);
++				goto err_node_put;
+ 			}
+ 		}
+ 
+diff --git a/drivers/pci/controller/pcie-iproc-bcma.c b/drivers/pci/controller/pcie-iproc-bcma.c
+index 56b8ee7bf3307..f918c713afb08 100644
+--- a/drivers/pci/controller/pcie-iproc-bcma.c
++++ b/drivers/pci/controller/pcie-iproc-bcma.c
+@@ -35,7 +35,6 @@ static int iproc_pcie_bcma_probe(struct bcma_device *bdev)
+ {
+ 	struct device *dev = &bdev->dev;
+ 	struct iproc_pcie *pcie;
+-	LIST_HEAD(resources);
+ 	struct pci_host_bridge *bridge;
+ 	int ret;
+ 
+@@ -60,19 +59,16 @@ static int iproc_pcie_bcma_probe(struct bcma_device *bdev)
+ 	pcie->mem.end = bdev->addr_s[0] + SZ_128M - 1;
+ 	pcie->mem.name = "PCIe MEM space";
+ 	pcie->mem.flags = IORESOURCE_MEM;
+-	pci_add_resource(&resources, &pcie->mem);
++	pci_add_resource(&bridge->windows, &pcie->mem);
++	ret = devm_request_pci_bus_resources(dev, &bridge->windows);
++	if (ret)
++		return ret;
+ 
+ 	pcie->map_irq = iproc_pcie_bcma_map_irq;
+ 
+-	ret = iproc_pcie_setup(pcie, &resources);
+-	if (ret) {
+-		dev_err(dev, "PCIe controller setup failed\n");
+-		pci_free_resource_list(&resources);
+-		return ret;
+-	}
+-
+ 	bcma_set_drvdata(bdev, pcie);
+-	return 0;
++
++	return iproc_pcie_setup(pcie, &bridge->windows);
+ }
+ 
+ static void iproc_pcie_bcma_remove(struct bcma_device *bdev)
+diff --git a/drivers/pci/controller/pcie-rcar-ep.c b/drivers/pci/controller/pcie-rcar-ep.c
+index b4a288e24aafb..c91d85b151290 100644
+--- a/drivers/pci/controller/pcie-rcar-ep.c
++++ b/drivers/pci/controller/pcie-rcar-ep.c
+@@ -492,9 +492,9 @@ static int rcar_pcie_ep_probe(struct platform_device *pdev)
+ 	pcie->dev = dev;
+ 
+ 	pm_runtime_enable(dev);
+-	err = pm_runtime_get_sync(dev);
++	err = pm_runtime_resume_and_get(dev);
+ 	if (err < 0) {
+-		dev_err(dev, "pm_runtime_get_sync failed\n");
++		dev_err(dev, "pm_runtime_resume_and_get failed\n");
+ 		goto err_pm_disable;
+ 	}
+ 
+diff --git a/drivers/pci/hotplug/TODO b/drivers/pci/hotplug/TODO
+index a32070be5adf9..cc6194aa24c15 100644
+--- a/drivers/pci/hotplug/TODO
++++ b/drivers/pci/hotplug/TODO
+@@ -40,9 +40,6 @@ ibmphp:
+ 
+ * The return value of pci_hp_register() is not checked.
+ 
+-* iounmap(io_mem) is called in the error path of ebda_rsrc_controller()
+-  and once more in the error path of its caller ibmphp_access_ebda().
+-
+ * The various slot data structures are difficult to follow and need to be
+   simplified.  A lot of functions are too large and too complex, they need
+   to be broken up into smaller, manageable pieces.  Negative examples are
+diff --git a/drivers/pci/hotplug/ibmphp_ebda.c b/drivers/pci/hotplug/ibmphp_ebda.c
+index 11a2661dc0627..7fb75401ad8a7 100644
+--- a/drivers/pci/hotplug/ibmphp_ebda.c
++++ b/drivers/pci/hotplug/ibmphp_ebda.c
+@@ -714,8 +714,7 @@ static int __init ebda_rsrc_controller(void)
+ 		/* init hpc structure */
+ 		hpc_ptr = alloc_ebda_hpc(slot_num, bus_num);
+ 		if (!hpc_ptr) {
+-			rc = -ENOMEM;
+-			goto error_no_hpc;
++			return -ENOMEM;
+ 		}
+ 		hpc_ptr->ctlr_id = ctlr_id;
+ 		hpc_ptr->ctlr_relative_id = ctlr;
+@@ -910,8 +909,6 @@ error:
+ 	kfree(tmp_slot);
+ error_no_slot:
+ 	free_ebda_hpc(hpc_ptr);
+-error_no_hpc:
+-	iounmap(io_mem);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/pci/of.c b/drivers/pci/of.c
+index ac24cd5439a93..3f6ef2f45e57a 100644
+--- a/drivers/pci/of.c
++++ b/drivers/pci/of.c
+@@ -295,7 +295,7 @@ static int devm_of_pci_get_host_bridge_resources(struct device *dev,
+ 	/* Check for ranges property */
+ 	err = of_pci_range_parser_init(&parser, dev_node);
+ 	if (err)
+-		goto failed;
++		return 0;
+ 
+ 	dev_dbg(dev, "Parsing ranges property...\n");
+ 	for_each_of_pci_range(&parser, &range) {
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index eae6a9fdd33d4..0d7109018a91f 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -265,7 +265,7 @@ static int pci_dev_str_match_path(struct pci_dev *dev, const char *path,
+ 
+ 	*endptr = strchrnul(path, ';');
+ 
+-	wpath = kmemdup_nul(path, *endptr - path, GFP_KERNEL);
++	wpath = kmemdup_nul(path, *endptr - path, GFP_ATOMIC);
+ 	if (!wpath)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index bad294c352519..5d2acebc3e966 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4626,6 +4626,18 @@ static int pci_quirk_qcom_rp_acs(struct pci_dev *dev, u16 acs_flags)
+ 		PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+ }
+ 
++/*
++ * Each of these NXP Root Ports is in a Root Complex with a unique segment
++ * number and does provide isolation features to disable peer transactions
++ * and validate bus numbers in requests, but does not provide an ACS
++ * capability.
++ */
++static int pci_quirk_nxp_rp_acs(struct pci_dev *dev, u16 acs_flags)
++{
++	return pci_acs_ctrl_enabled(acs_flags,
++		PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
++}
++
+ static int pci_quirk_al_acs(struct pci_dev *dev, u16 acs_flags)
+ {
+ 	if (pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT)
+@@ -4852,6 +4864,10 @@ static const struct pci_dev_acs_enabled {
+ 	{ 0x10df, 0x720, pci_quirk_mf_endpoint_acs }, /* Emulex Skyhawk-R */
+ 	/* Cavium ThunderX */
+ 	{ PCI_VENDOR_ID_CAVIUM, PCI_ANY_ID, pci_quirk_cavium_acs },
++	/* Cavium multi-function devices */
++	{ PCI_VENDOR_ID_CAVIUM, 0xA026, pci_quirk_mf_endpoint_acs },
++	{ PCI_VENDOR_ID_CAVIUM, 0xA059, pci_quirk_mf_endpoint_acs },
++	{ PCI_VENDOR_ID_CAVIUM, 0xA060, pci_quirk_mf_endpoint_acs },
+ 	/* APM X-Gene */
+ 	{ PCI_VENDOR_ID_AMCC, 0xE004, pci_quirk_xgene_acs },
+ 	/* Ampere Computing */
+@@ -4872,6 +4888,39 @@ static const struct pci_dev_acs_enabled {
+ 	{ PCI_VENDOR_ID_ZHAOXIN, 0x3038, pci_quirk_mf_endpoint_acs },
+ 	{ PCI_VENDOR_ID_ZHAOXIN, 0x3104, pci_quirk_mf_endpoint_acs },
+ 	{ PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs },
++	/* NXP root ports, xx=16, 12, or 08 cores */
++	/* LX2xx0A : without security features + CAN-FD */
++	{ PCI_VENDOR_ID_NXP, 0x8d81, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8da1, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d83, pci_quirk_nxp_rp_acs },
++	/* LX2xx0C : security features + CAN-FD */
++	{ PCI_VENDOR_ID_NXP, 0x8d80, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8da0, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d82, pci_quirk_nxp_rp_acs },
++	/* LX2xx0E : security features + CAN */
++	{ PCI_VENDOR_ID_NXP, 0x8d90, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8db0, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d92, pci_quirk_nxp_rp_acs },
++	/* LX2xx0N : without security features + CAN */
++	{ PCI_VENDOR_ID_NXP, 0x8d91, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8db1, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d93, pci_quirk_nxp_rp_acs },
++	/* LX2xx2A : without security features + CAN-FD */
++	{ PCI_VENDOR_ID_NXP, 0x8d89, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8da9, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d8b, pci_quirk_nxp_rp_acs },
++	/* LX2xx2C : security features + CAN-FD */
++	{ PCI_VENDOR_ID_NXP, 0x8d88, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8da8, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d8a, pci_quirk_nxp_rp_acs },
++	/* LX2xx2E : security features + CAN */
++	{ PCI_VENDOR_ID_NXP, 0x8d98, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8db8, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d9a, pci_quirk_nxp_rp_acs },
++	/* LX2xx2N : without security features + CAN */
++	{ PCI_VENDOR_ID_NXP, 0x8d99, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8db9, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d9b, pci_quirk_nxp_rp_acs },
+ 	/* Zhaoxin Root/Downstream Ports */
+ 	{ PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs },
+ 	{ 0 }
+@@ -5346,7 +5395,7 @@ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
+ 			      PCI_CLASS_MULTIMEDIA_HD_AUDIO, 8, quirk_gpu_hda);
+ 
+ /*
+- * Create device link for NVIDIA GPU with integrated USB xHCI Host
++ * Create device link for GPUs with integrated USB xHCI Host
+  * controller to VGA.
+  */
+ static void quirk_gpu_usb(struct pci_dev *usb)
+@@ -5355,9 +5404,11 @@ static void quirk_gpu_usb(struct pci_dev *usb)
+ }
+ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
+ 			      PCI_CLASS_SERIAL_USB, 8, quirk_gpu_usb);
++DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_ATI, PCI_ANY_ID,
++			      PCI_CLASS_SERIAL_USB, 8, quirk_gpu_usb);
+ 
+ /*
+- * Create device link for NVIDIA GPU with integrated Type-C UCSI controller
++ * Create device link for GPUs with integrated Type-C UCSI controller
+  * to VGA. Currently there is no class code defined for UCSI device over PCI
+  * so using UNKNOWN class for now and it will be updated when UCSI
+  * over PCI gets a class code.
+@@ -5370,6 +5421,9 @@ static void quirk_gpu_usb_typec_ucsi(struct pci_dev *ucsi)
+ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
+ 			      PCI_CLASS_SERIAL_UNKNOWN, 8,
+ 			      quirk_gpu_usb_typec_ucsi);
++DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_ATI, PCI_ANY_ID,
++			      PCI_CLASS_SERIAL_UNKNOWN, 8,
++			      quirk_gpu_usb_typec_ucsi);
+ 
+ /*
+  * Enable the NVIDIA GPU integrated HDA controller if the BIOS left it
+diff --git a/drivers/s390/char/sclp_early.c b/drivers/s390/char/sclp_early.c
+index cc5e84b80c699..faa3a4b8ed91d 100644
+--- a/drivers/s390/char/sclp_early.c
++++ b/drivers/s390/char/sclp_early.c
+@@ -40,13 +40,14 @@ static void __init sclp_early_facilities_detect(struct read_info_sccb *sccb)
+ 	sclp.has_gisaf = !!(sccb->fac118 & 0x08);
+ 	sclp.has_hvs = !!(sccb->fac119 & 0x80);
+ 	sclp.has_kss = !!(sccb->fac98 & 0x01);
+-	sclp.has_sipl = !!(sccb->cbl & 0x4000);
+ 	if (sccb->fac85 & 0x02)
+ 		S390_lowcore.machine_flags |= MACHINE_FLAG_ESOP;
+ 	if (sccb->fac91 & 0x40)
+ 		S390_lowcore.machine_flags |= MACHINE_FLAG_TLB_GUEST;
+ 	if (sccb->cpuoff > 134)
+ 		sclp.has_diag318 = !!(sccb->byte_134 & 0x80);
++	if (sccb->cpuoff > 137)
++		sclp.has_sipl = !!(sccb->cbl & 0x4000);
+ 	sclp.rnmax = sccb->rnmax ? sccb->rnmax : sccb->rnmax2;
+ 	sclp.rzm = sccb->rnsize ? sccb->rnsize : sccb->rnsize2;
+ 	sclp.rzm <<= 20;
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index c8784dfafdd73..da02c3e96e7b2 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -466,7 +466,7 @@ static void vhost_tx_batch(struct vhost_net *net,
+ 		.num = nvq->batched_xdp,
+ 		.ptr = nvq->xdp,
+ 	};
+-	int err;
++	int i, err;
+ 
+ 	if (nvq->batched_xdp == 0)
+ 		goto signal_used;
+@@ -475,6 +475,15 @@ static void vhost_tx_batch(struct vhost_net *net,
+ 	err = sock->ops->sendmsg(sock, msghdr, 0);
+ 	if (unlikely(err < 0)) {
+ 		vq_err(&nvq->vq, "Fail to batch sending packets\n");
++
++		/* free pages owned by XDP; since this is an unlikely error path,
++		 * keep it simple and avoid more complex bulk update for the
++		 * used pages
++		 */
++		for (i = 0; i < nvq->batched_xdp; ++i)
++			put_page(virt_to_head_page(nvq->xdp[i].data));
++		nvq->batched_xdp = 0;
++		nvq->done_idx = 0;
+ 		return;
+ 	}
+ 
+diff --git a/drivers/video/backlight/ktd253-backlight.c b/drivers/video/backlight/ktd253-backlight.c
+index e3fee3f1f5828..9d355fd989d86 100644
+--- a/drivers/video/backlight/ktd253-backlight.c
++++ b/drivers/video/backlight/ktd253-backlight.c
+@@ -25,6 +25,7 @@
+ 
+ #define KTD253_T_LOW_NS (200 + 10) /* Additional 10ns as safety factor */
+ #define KTD253_T_HIGH_NS (200 + 10) /* Additional 10ns as safety factor */
++#define KTD253_T_OFF_CRIT_NS 100000 /* 100 us, now it doesn't look good */
+ #define KTD253_T_OFF_MS 3
+ 
+ struct ktd253_backlight {
+@@ -34,13 +35,50 @@ struct ktd253_backlight {
+ 	u16 ratio;
+ };
+ 
++static void ktd253_backlight_set_max_ratio(struct ktd253_backlight *ktd253)
++{
++	gpiod_set_value_cansleep(ktd253->gpiod, 1);
++	ndelay(KTD253_T_HIGH_NS);
++	/* We always fall back to this when we power on */
++}
++
++static int ktd253_backlight_stepdown(struct ktd253_backlight *ktd253)
++{
++	/*
++	 * These GPIO operations absolutely can NOT sleep so no _cansleep
++	 * suffixes, and no using GPIO expanders on slow buses for this!
++	 *
++	 * The maximum number of cycles of the loop is 32  so the time taken
++	 * should nominally be:
++	 * (T_LOW_NS + T_HIGH_NS + loop_time) * 32
++	 *
++	 * Architectures do not always support ndelay() and we will get a few us
++	 * instead. If we get to a critical time limit an interrupt has likely
++	 * occured in the low part of the loop and we need to restart from the
++	 * top so we have the backlight in a known state.
++	 */
++	u64 ns;
++
++	ns = ktime_get_ns();
++	gpiod_set_value(ktd253->gpiod, 0);
++	ndelay(KTD253_T_LOW_NS);
++	gpiod_set_value(ktd253->gpiod, 1);
++	ns = ktime_get_ns() - ns;
++	if (ns >= KTD253_T_OFF_CRIT_NS) {
++		dev_err(ktd253->dev, "PCM on backlight took too long (%llu ns)\n", ns);
++		return -EAGAIN;
++	}
++	ndelay(KTD253_T_HIGH_NS);
++	return 0;
++}
++
+ static int ktd253_backlight_update_status(struct backlight_device *bl)
+ {
+ 	struct ktd253_backlight *ktd253 = bl_get_data(bl);
+ 	int brightness = backlight_get_brightness(bl);
+ 	u16 target_ratio;
+ 	u16 current_ratio = ktd253->ratio;
+-	unsigned long flags;
++	int ret;
+ 
+ 	dev_dbg(ktd253->dev, "new brightness/ratio: %d/32\n", brightness);
+ 
+@@ -62,37 +100,34 @@ static int ktd253_backlight_update_status(struct backlight_device *bl)
+ 	}
+ 
+ 	if (current_ratio == 0) {
+-		gpiod_set_value_cansleep(ktd253->gpiod, 1);
+-		ndelay(KTD253_T_HIGH_NS);
+-		/* We always fall back to this when we power on */
++		ktd253_backlight_set_max_ratio(ktd253);
+ 		current_ratio = KTD253_MAX_RATIO;
+ 	}
+ 
+-	/*
+-	 * WARNING:
+-	 * The loop to set the correct current level is performed
+-	 * with interrupts disabled as it is timing critical.
+-	 * The maximum number of cycles of the loop is 32
+-	 * so the time taken will be (T_LOW_NS + T_HIGH_NS + loop_time) * 32,
+-	 */
+-	local_irq_save(flags);
+ 	while (current_ratio != target_ratio) {
+ 		/*
+ 		 * These GPIO operations absolutely can NOT sleep so no
+ 		 * _cansleep suffixes, and no using GPIO expanders on
+ 		 * slow buses for this!
+ 		 */
+-		gpiod_set_value(ktd253->gpiod, 0);
+-		ndelay(KTD253_T_LOW_NS);
+-		gpiod_set_value(ktd253->gpiod, 1);
+-		ndelay(KTD253_T_HIGH_NS);
+-		/* After 1/32 we loop back to 32/32 */
+-		if (current_ratio == KTD253_MIN_RATIO)
++		ret = ktd253_backlight_stepdown(ktd253);
++		if (ret == -EAGAIN) {
++			/*
++			 * Something disturbed the backlight setting code when
++			 * running so we need to bring the PWM back to a known
++			 * state. This shouldn't happen too much.
++			 */
++			gpiod_set_value_cansleep(ktd253->gpiod, 0);
++			msleep(KTD253_T_OFF_MS);
++			ktd253_backlight_set_max_ratio(ktd253);
++			current_ratio = KTD253_MAX_RATIO;
++		} else if (current_ratio == KTD253_MIN_RATIO) {
++			/* After 1/32 we loop back to 32/32 */
+ 			current_ratio = KTD253_MAX_RATIO;
+-		else
++		} else {
+ 			current_ratio--;
++		}
+ 	}
+-	local_irq_restore(flags);
+ 	ktd253->ratio = current_ratio;
+ 
+ 	dev_dbg(ktd253->dev, "new ratio set to %d/32\n", target_ratio);
+diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
+index 2946f3a63110c..2ee017442dfcd 100644
+--- a/drivers/watchdog/watchdog_dev.c
++++ b/drivers/watchdog/watchdog_dev.c
+@@ -1164,7 +1164,10 @@ int watchdog_set_last_hw_keepalive(struct watchdog_device *wdd,
+ 
+ 	wd_data->last_hw_keepalive = ktime_sub(now, ms_to_ktime(last_ping_ms));
+ 
+-	return __watchdog_ping(wdd);
++	if (watchdog_hw_running(wdd) && handle_boot_enabled)
++		return __watchdog_ping(wdd);
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(watchdog_set_last_hw_keepalive);
+ 
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index e025cd8f3f071..ef7df2141f34f 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3019,6 +3019,29 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 	 */
+ 	fs_info->compress_type = BTRFS_COMPRESS_ZLIB;
+ 
++	/*
++	 * Flag our filesystem as having big metadata blocks if they are bigger
++	 * than the page size
++	 */
++	if (btrfs_super_nodesize(disk_super) > PAGE_SIZE) {
++		if (!(features & BTRFS_FEATURE_INCOMPAT_BIG_METADATA))
++			btrfs_info(fs_info,
++				"flagging fs with big metadata feature");
++		features |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA;
++	}
++
++	/* Set up fs_info before parsing mount options */
++	nodesize = btrfs_super_nodesize(disk_super);
++	sectorsize = btrfs_super_sectorsize(disk_super);
++	stripesize = sectorsize;
++	fs_info->dirty_metadata_batch = nodesize * (1 + ilog2(nr_cpu_ids));
++	fs_info->delalloc_batch = sectorsize * 512 * (1 + ilog2(nr_cpu_ids));
++
++	/* Cache block sizes */
++	fs_info->nodesize = nodesize;
++	fs_info->sectorsize = sectorsize;
++	fs_info->stripesize = stripesize;
++
+ 	ret = btrfs_parse_options(fs_info, options, sb->s_flags);
+ 	if (ret) {
+ 		err = ret;
+@@ -3045,28 +3068,6 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 	if (features & BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA)
+ 		btrfs_info(fs_info, "has skinny extents");
+ 
+-	/*
+-	 * flag our filesystem as having big metadata blocks if
+-	 * they are bigger than the page size
+-	 */
+-	if (btrfs_super_nodesize(disk_super) > PAGE_SIZE) {
+-		if (!(features & BTRFS_FEATURE_INCOMPAT_BIG_METADATA))
+-			btrfs_info(fs_info,
+-				"flagging fs with big metadata feature");
+-		features |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA;
+-	}
+-
+-	nodesize = btrfs_super_nodesize(disk_super);
+-	sectorsize = btrfs_super_sectorsize(disk_super);
+-	stripesize = sectorsize;
+-	fs_info->dirty_metadata_batch = nodesize * (1 + ilog2(nr_cpu_ids));
+-	fs_info->delalloc_batch = sectorsize * 512 * (1 + ilog2(nr_cpu_ids));
+-
+-	/* Cache block sizes */
+-	fs_info->nodesize = nodesize;
+-	fs_info->sectorsize = sectorsize;
+-	fs_info->stripesize = stripesize;
+-
+ 	/*
+ 	 * mixed block groups end up with duplicate but slightly offset
+ 	 * extent buffers for the same range.  It leads to corruptions
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 4140d5c3ab5a5..f943eea9fe4e1 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -288,10 +288,10 @@ void fuse_request_end(struct fuse_req *req)
+ 
+ 	/*
+ 	 * test_and_set_bit() implies smp_mb() between bit
+-	 * changing and below intr_entry check. Pairs with
++	 * changing and below FR_INTERRUPTED check. Pairs with
+ 	 * smp_mb() from queue_interrupt().
+ 	 */
+-	if (!list_empty(&req->intr_entry)) {
++	if (test_bit(FR_INTERRUPTED, &req->flags)) {
+ 		spin_lock(&fiq->lock);
+ 		list_del_init(&req->intr_entry);
+ 		spin_unlock(&fiq->lock);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index d0089039fee79..a8d07273ddc05 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -3206,12 +3206,15 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
+ 				ret = nr;
+ 			break;
+ 		}
++		if (!iov_iter_is_bvec(iter)) {
++			iov_iter_advance(iter, nr);
++		} else {
++			req->rw.len -= nr;
++			req->rw.addr += nr;
++		}
+ 		ret += nr;
+ 		if (nr != iovec.iov_len)
+ 			break;
+-		req->rw.len -= nr;
+-		req->rw.addr += nr;
+-		iov_iter_advance(iter, nr);
+ 	}
+ 
+ 	return ret;
+diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
+index 551093b74596b..1dafc7c7f5cfe 100644
+--- a/include/linux/memory_hotplug.h
++++ b/include/linux/memory_hotplug.h
+@@ -359,8 +359,8 @@ extern void sparse_remove_section(struct mem_section *ms,
+ 		unsigned long map_offset, struct vmem_altmap *altmap);
+ extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
+ 					  unsigned long pnum);
+-extern struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
+-		unsigned long nr_pages);
++extern struct zone *zone_for_pfn_range(int online_type, int nid,
++		unsigned long start_pfn, unsigned long nr_pages);
+ #endif /* CONFIG_MEMORY_HOTPLUG */
+ 
+ #endif /* __LINUX_MEMORY_HOTPLUG_H */
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 22207a79762c2..a55097b4d9927 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -1713,8 +1713,9 @@ static inline void pci_disable_device(struct pci_dev *dev) { }
+ static inline int pcim_enable_device(struct pci_dev *pdev) { return -EIO; }
+ static inline int pci_assign_resource(struct pci_dev *dev, int i)
+ { return -EBUSY; }
+-static inline int __pci_register_driver(struct pci_driver *drv,
+-					struct module *owner)
++static inline int __must_check __pci_register_driver(struct pci_driver *drv,
++						     struct module *owner,
++						     const char *mod_name)
+ { return 0; }
+ static inline int pci_register_driver(struct pci_driver *drv)
+ { return 0; }
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 1ab1e24bcbce5..635a9243cce0d 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -2476,7 +2476,8 @@
+ #define PCI_VENDOR_ID_TDI               0x192E
+ #define PCI_DEVICE_ID_TDI_EHCI          0x0101
+ 
+-#define PCI_VENDOR_ID_FREESCALE		0x1957
++#define PCI_VENDOR_ID_FREESCALE		0x1957	/* duplicate: NXP */
++#define PCI_VENDOR_ID_NXP		0x1957	/* duplicate: FREESCALE */
+ #define PCI_DEVICE_ID_MPC8308		0xc006
+ #define PCI_DEVICE_ID_MPC8315E		0x00b4
+ #define PCI_DEVICE_ID_MPC8315		0x00b5
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 2660ee4b08adf..29c7ccd5ae42e 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1354,6 +1354,7 @@ struct task_struct {
+ 					mce_whole_page : 1,
+ 					__mce_reserved : 62;
+ 	struct callback_head		mce_kill_me;
++	int				mce_count;
+ #endif
+ 
+ 	/*
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 0a1239819fd2a..acbf1875ad506 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -1908,7 +1908,7 @@ static inline void __skb_insert(struct sk_buff *newsk,
+ 	WRITE_ONCE(newsk->prev, prev);
+ 	WRITE_ONCE(next->prev, newsk);
+ 	WRITE_ONCE(prev->next, newsk);
+-	list->qlen++;
++	WRITE_ONCE(list->qlen, list->qlen + 1);
+ }
+ 
+ static inline void __skb_queue_splice(const struct sk_buff_head *list,
+diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
+index 9e7c2c6078456..69079fbf3ed2d 100644
+--- a/include/uapi/linux/pkt_sched.h
++++ b/include/uapi/linux/pkt_sched.h
+@@ -826,6 +826,8 @@ struct tc_codel_xstats {
+ 
+ /* FQ_CODEL */
+ 
++#define FQ_CODEL_QUANTUM_MAX (1 << 20)
++
+ enum {
+ 	TCA_FQ_CODEL_UNSPEC,
+ 	TCA_FQ_CODEL_TARGET,
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 7e0fdc19043e4..c677f934353af 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -9973,7 +9973,7 @@ static void perf_event_addr_filters_apply(struct perf_event *event)
+ 		return;
+ 
+ 	if (ifh->nr_file_filters) {
+-		mm = get_task_mm(event->ctx->task);
++		mm = get_task_mm(task);
+ 		if (!mm)
+ 			goto restart;
+ 
+diff --git a/kernel/trace/trace_boot.c b/kernel/trace/trace_boot.c
+index a82f03f385f89..0996d59750ff0 100644
+--- a/kernel/trace/trace_boot.c
++++ b/kernel/trace/trace_boot.c
+@@ -205,12 +205,15 @@ trace_boot_init_one_event(struct trace_array *tr, struct xbc_node *gnode,
+ 			pr_err("Failed to apply filter: %s\n", buf);
+ 	}
+ 
+-	xbc_node_for_each_array_value(enode, "actions", anode, p) {
+-		if (strlcpy(buf, p, ARRAY_SIZE(buf)) >= ARRAY_SIZE(buf))
+-			pr_err("action string is too long: %s\n", p);
+-		else if (trigger_process_regex(file, buf) < 0)
+-			pr_err("Failed to apply an action: %s\n", buf);
+-	}
++	if (IS_ENABLED(CONFIG_HIST_TRIGGERS)) {
++		xbc_node_for_each_array_value(enode, "actions", anode, p) {
++			if (strlcpy(buf, p, ARRAY_SIZE(buf)) >= ARRAY_SIZE(buf))
++				pr_err("action string is too long: %s\n", p);
++			else if (trigger_process_regex(file, buf) < 0)
++				pr_err("Failed to apply an action: %s\n", buf);
++		}
++	} else if (xbc_node_find_value(enode, "actions", NULL))
++		pr_err("Failed to apply event actions because CONFIG_HIST_TRIGGERS is not set.\n");
+ 
+ 	if (xbc_node_find_value(enode, "enable", NULL)) {
+ 		if (trace_event_enable_disable(file, 1, 0) < 0)
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 68150b9cbde92..552dbc9d52260 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -647,7 +647,11 @@ static int register_trace_kprobe(struct trace_kprobe *tk)
+ 	/* Register new event */
+ 	ret = register_kprobe_event(tk);
+ 	if (ret) {
+-		pr_warn("Failed to register probe event(%d)\n", ret);
++		if (ret == -EEXIST) {
++			trace_probe_log_set_index(0);
++			trace_probe_log_err(0, EVENT_EXIST);
++		} else
++			pr_warn("Failed to register probe event(%d)\n", ret);
+ 		goto end;
+ 	}
+ 
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index d2867ccc6acaa..1d31bc4acf7a5 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -1029,11 +1029,36 @@ error:
+ 	return ret;
+ }
+ 
++static struct trace_event_call *
++find_trace_event_call(const char *system, const char *event_name)
++{
++	struct trace_event_call *tp_event;
++	const char *name;
++
++	list_for_each_entry(tp_event, &ftrace_events, list) {
++		if (!tp_event->class->system ||
++		    strcmp(system, tp_event->class->system))
++			continue;
++		name = trace_event_name(tp_event);
++		if (!name || strcmp(event_name, name))
++			continue;
++		return tp_event;
++	}
++
++	return NULL;
++}
++
+ int trace_probe_register_event_call(struct trace_probe *tp)
+ {
+ 	struct trace_event_call *call = trace_probe_event_call(tp);
+ 	int ret;
+ 
++	lockdep_assert_held(&event_mutex);
++
++	if (find_trace_event_call(trace_probe_group_name(tp),
++				  trace_probe_name(tp)))
++		return -EEXIST;
++
+ 	ret = register_trace_event(&call->event);
+ 	if (!ret)
+ 		return -ENODEV;
+diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
+index 2f703a20c724c..6d41e20c47ced 100644
+--- a/kernel/trace/trace_probe.h
++++ b/kernel/trace/trace_probe.h
+@@ -398,6 +398,7 @@ extern int traceprobe_define_arg_fields(struct trace_event_call *event_call,
+ 	C(NO_EVENT_NAME,	"Event name is not specified"),		\
+ 	C(EVENT_TOO_LONG,	"Event name is too long"),		\
+ 	C(BAD_EVENT_NAME,	"Event name must follow the same rules as C identifiers"), \
++	C(EVENT_EXIST,		"Given group/event name is already used by another event"), \
+ 	C(RETVAL_ON_PROBE,	"$retval is not available on probe"),	\
+ 	C(BAD_STACK_NUM,	"Invalid stack number"),		\
+ 	C(BAD_ARG_NUM,		"Invalid argument number"),		\
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index 3cf7128e1ad30..0dd6e286e5196 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -514,7 +514,11 @@ static int register_trace_uprobe(struct trace_uprobe *tu)
+ 
+ 	ret = register_uprobe_event(tu);
+ 	if (ret) {
+-		pr_warn("Failed to register probe event(%d)\n", ret);
++		if (ret == -EEXIST) {
++			trace_probe_log_set_index(0);
++			trace_probe_log_err(0, EVENT_EXIST);
++		} else
++			pr_warn("Failed to register probe event(%d)\n", ret);
+ 		goto end;
+ 	}
+ 
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index b9de2df5b8358..6275b1c05f111 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -765,8 +765,8 @@ static inline struct zone *default_zone_for_pfn(int nid, unsigned long start_pfn
+ 	return movable_node_enabled ? movable_zone : kernel_zone;
+ }
+ 
+-struct zone * zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
+-		unsigned long nr_pages)
++struct zone *zone_for_pfn_range(int online_type, int nid,
++		unsigned long start_pfn, unsigned long nr_pages)
+ {
+ 	if (online_type == MMOP_ONLINE_KERNEL)
+ 		return default_kernel_zone_for_pfn(nid, start_pfn, nr_pages);
+diff --git a/net/caif/chnl_net.c b/net/caif/chnl_net.c
+index 79b6a04d8eb61..42dc080a4dbbc 100644
+--- a/net/caif/chnl_net.c
++++ b/net/caif/chnl_net.c
+@@ -53,20 +53,6 @@ struct chnl_net {
+ 	enum caif_states state;
+ };
+ 
+-static void robust_list_del(struct list_head *delete_node)
+-{
+-	struct list_head *list_node;
+-	struct list_head *n;
+-	ASSERT_RTNL();
+-	list_for_each_safe(list_node, n, &chnl_net_list) {
+-		if (list_node == delete_node) {
+-			list_del(list_node);
+-			return;
+-		}
+-	}
+-	WARN_ON(1);
+-}
+-
+ static int chnl_recv_cb(struct cflayer *layr, struct cfpkt *pkt)
+ {
+ 	struct sk_buff *skb;
+@@ -369,6 +355,7 @@ static int chnl_net_init(struct net_device *dev)
+ 	ASSERT_RTNL();
+ 	priv = netdev_priv(dev);
+ 	strncpy(priv->name, dev->name, sizeof(priv->name));
++	INIT_LIST_HEAD(&priv->list_field);
+ 	return 0;
+ }
+ 
+@@ -377,7 +364,7 @@ static void chnl_net_uninit(struct net_device *dev)
+ 	struct chnl_net *priv;
+ 	ASSERT_RTNL();
+ 	priv = netdev_priv(dev);
+-	robust_list_del(&priv->list_field);
++	list_del_init(&priv->list_field);
+ }
+ 
+ static const struct net_device_ops netdev_ops = {
+@@ -542,7 +529,7 @@ static void __exit chnl_exit_module(void)
+ 	rtnl_lock();
+ 	list_for_each_safe(list_node, _tmp, &chnl_net_list) {
+ 		dev = list_entry(list_node, struct chnl_net, list_field);
+-		list_del(list_node);
++		list_del_init(list_node);
+ 		delete_device(dev);
+ 	}
+ 	rtnl_unlock();
+diff --git a/net/dccp/minisocks.c b/net/dccp/minisocks.c
+index c5c74a34d139d..91e7a22026971 100644
+--- a/net/dccp/minisocks.c
++++ b/net/dccp/minisocks.c
+@@ -94,6 +94,8 @@ struct sock *dccp_create_openreq_child(const struct sock *sk,
+ 		newdp->dccps_role	    = DCCP_ROLE_SERVER;
+ 		newdp->dccps_hc_rx_ackvec   = NULL;
+ 		newdp->dccps_service_list   = NULL;
++		newdp->dccps_hc_rx_ccid     = NULL;
++		newdp->dccps_hc_tx_ccid     = NULL;
+ 		newdp->dccps_service	    = dreq->dreq_service;
+ 		newdp->dccps_timestamp_echo = dreq->dreq_timestamp_echo;
+ 		newdp->dccps_timestamp_time = dreq->dreq_timestamp_time;
+diff --git a/net/dsa/slave.c b/net/dsa/slave.c
+index 9281c9c6a253e..65b125bb3b860 100644
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -1728,13 +1728,11 @@ static int dsa_slave_phy_setup(struct net_device *slave_dev)
+ 		 * use the switch internal MDIO bus instead
+ 		 */
+ 		ret = dsa_slave_phy_connect(slave_dev, dp->index);
+-		if (ret) {
+-			netdev_err(slave_dev,
+-				   "failed to connect to port %d: %d\n",
+-				   dp->index, ret);
+-			phylink_destroy(dp->pl);
+-			return ret;
+-		}
++	}
++	if (ret) {
++		netdev_err(slave_dev, "failed to connect to PHY: %pe\n",
++			   ERR_PTR(ret));
++		phylink_destroy(dp->pl);
+ 	}
+ 
+ 	return ret;
+diff --git a/net/dsa/tag_rtl4_a.c b/net/dsa/tag_rtl4_a.c
+index e9176475bac89..24375ebd684e8 100644
+--- a/net/dsa/tag_rtl4_a.c
++++ b/net/dsa/tag_rtl4_a.c
+@@ -54,9 +54,10 @@ static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb,
+ 	p = (__be16 *)tag;
+ 	*p = htons(RTL4_A_ETHERTYPE);
+ 
+-	out = (RTL4_A_PROTOCOL_RTL8366RB << 12) | (2 << 8);
+-	/* The lower bits is the port number */
+-	out |= (u8)dp->index;
++	out = (RTL4_A_PROTOCOL_RTL8366RB << RTL4_A_PROTOCOL_SHIFT) | (2 << 8);
++	/* The lower bits indicate the port number */
++	out |= BIT(dp->index);
++
+ 	p = (__be16 *)(tag + 2);
+ 	*p = htons(out);
+ 
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 97b402b2d6fbd..80d2a00d30977 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -906,7 +906,7 @@ static int ethtool_rxnfc_copy_to_user(void __user *useraddr,
+ 						   rule_buf);
+ 		useraddr += offsetof(struct compat_ethtool_rxnfc, rule_locs);
+ 	} else {
+-		ret = copy_to_user(useraddr, &rxnfc, size);
++		ret = copy_to_user(useraddr, rxnfc, size);
+ 		useraddr += offsetof(struct ethtool_rxnfc, rule_locs);
+ 	}
+ 
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index a0829495b211e..a9cc05043fa47 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -468,8 +468,6 @@ static void __gre_xmit(struct sk_buff *skb, struct net_device *dev,
+ 
+ static int gre_handle_offloads(struct sk_buff *skb, bool csum)
+ {
+-	if (csum && skb_checksum_start(skb) < skb->data)
+-		return -EINVAL;
+ 	return iptunnel_handle_offloads(skb, csum ? SKB_GSO_GRE_CSUM : SKB_GSO_GRE);
+ }
+ 
+@@ -627,15 +625,20 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
+ 	}
+ 
+ 	if (dev->header_ops) {
++		const int pull_len = tunnel->hlen + sizeof(struct iphdr);
++
+ 		if (skb_cow_head(skb, 0))
+ 			goto free_skb;
+ 
+ 		tnl_params = (const struct iphdr *)skb->data;
+ 
++		if (pull_len > skb_transport_offset(skb))
++			goto free_skb;
++
+ 		/* Pull skb since ip_tunnel_xmit() needs skb->data pointing
+ 		 * to gre header.
+ 		 */
+-		skb_pull(skb, tunnel->hlen + sizeof(struct iphdr));
++		skb_pull(skb, pull_len);
+ 		skb_reset_mac_header(skb);
+ 	} else {
+ 		if (skb_cow_head(skb, dev->needed_headroom))
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index f2d313c5900df..1075cc2136ac6 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -1303,6 +1303,7 @@ static int nh_create_ipv4(struct net *net, struct nexthop *nh,
+ 		.fc_gw4   = cfg->gw.ipv4,
+ 		.fc_gw_family = cfg->gw.ipv4 ? AF_INET : 0,
+ 		.fc_flags = cfg->nh_flags,
++		.fc_nlinfo = cfg->nlinfo,
+ 		.fc_encap = cfg->nh_encap,
+ 		.fc_encap_type = cfg->nh_encap_type,
+ 	};
+@@ -1341,6 +1342,7 @@ static int nh_create_ipv6(struct net *net,  struct nexthop *nh,
+ 		.fc_ifindex = cfg->nh_ifindex,
+ 		.fc_gateway = cfg->gw.ipv6,
+ 		.fc_flags = cfg->nh_flags,
++		.fc_nlinfo = cfg->nlinfo,
+ 		.fc_encap = cfg->nh_encap,
+ 		.fc_encap_type = cfg->nh_encap_type,
+ 		.fc_is_fdb = cfg->nh_fdb,
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index ac8d38e044002..991e3434957b8 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -1314,7 +1314,7 @@ static u8 tcp_sacktag_one(struct sock *sk,
+ 	if (dup_sack && (sacked & TCPCB_RETRANS)) {
+ 		if (tp->undo_marker && tp->undo_retrans > 0 &&
+ 		    after(end_seq, tp->undo_marker))
+-			tp->undo_retrans--;
++			tp->undo_retrans = max_t(int, 0, tp->undo_retrans - pcount);
+ 		if ((sacked & TCPCB_SACKED_ACKED) &&
+ 		    before(start_seq, state->reord))
+ 				state->reord = start_seq;
+diff --git a/net/ipv4/udp_tunnel_nic.c b/net/ipv4/udp_tunnel_nic.c
+index 0d122edc368dd..b91003538d87a 100644
+--- a/net/ipv4/udp_tunnel_nic.c
++++ b/net/ipv4/udp_tunnel_nic.c
+@@ -935,7 +935,7 @@ static int __init udp_tunnel_nic_init_module(void)
+ {
+ 	int err;
+ 
+-	udp_tunnel_nic_workqueue = alloc_workqueue("udp_tunnel_nic", 0, 0);
++	udp_tunnel_nic_workqueue = alloc_ordered_workqueue("udp_tunnel_nic", 0);
+ 	if (!udp_tunnel_nic_workqueue)
+ 		return -ENOMEM;
+ 
+diff --git a/net/ipv6/netfilter/nf_socket_ipv6.c b/net/ipv6/netfilter/nf_socket_ipv6.c
+index 6fd54744cbc38..aa5bb8789ba0b 100644
+--- a/net/ipv6/netfilter/nf_socket_ipv6.c
++++ b/net/ipv6/netfilter/nf_socket_ipv6.c
+@@ -99,7 +99,7 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb,
+ {
+ 	__be16 dport, sport;
+ 	const struct in6_addr *daddr = NULL, *saddr = NULL;
+-	struct ipv6hdr *iph = ipv6_hdr(skb);
++	struct ipv6hdr *iph = ipv6_hdr(skb), ipv6_var;
+ 	struct sk_buff *data_skb = NULL;
+ 	int doff = 0;
+ 	int thoff = 0, tproto;
+@@ -129,8 +129,6 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb,
+ 			thoff + sizeof(*hp);
+ 
+ 	} else if (tproto == IPPROTO_ICMPV6) {
+-		struct ipv6hdr ipv6_var;
+-
+ 		if (extract_icmp6_fields(skb, thoff, &tproto, &saddr, &daddr,
+ 					 &sport, &dport, &ipv6_var))
+ 			return NULL;
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 203890e378cb0..561b6d67ab8b9 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -869,8 +869,10 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
+ 	}
+ 
+ 	if (tunnel->version == L2TP_HDR_VER_3 &&
+-	    l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr))
++	    l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr)) {
++		l2tp_session_dec_refcount(session);
+ 		goto invalid;
++	}
+ 
+ 	l2tp_recv_common(session, skb, ptr, optr, hdrflags, length);
+ 	l2tp_session_dec_refcount(session);
+diff --git a/net/netfilter/nf_conntrack_proto_dccp.c b/net/netfilter/nf_conntrack_proto_dccp.c
+index b3f4a334f9d78..94001eb51ffe4 100644
+--- a/net/netfilter/nf_conntrack_proto_dccp.c
++++ b/net/netfilter/nf_conntrack_proto_dccp.c
+@@ -397,6 +397,7 @@ dccp_new(struct nf_conn *ct, const struct sk_buff *skb,
+ 			msg = "not picking up existing connection ";
+ 			goto out_invalid;
+ 		}
++		break;
+ 	case CT_DCCP_REQUEST:
+ 		break;
+ 	case CT_DCCP_INVALID:
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 2b5f97e1d40b9..c605a3e713e76 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -8394,6 +8394,7 @@ static int nf_tables_check_loops(const struct nft_ctx *ctx,
+ 							data->verdict.chain);
+ 				if (err < 0)
+ 					return err;
++				break;
+ 			default:
+ 				break;
+ 			}
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index 70d46e0bbf064..7fcb73ac2e6ed 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -41,6 +41,7 @@ struct nft_ct_helper_obj  {
+ #ifdef CONFIG_NF_CONNTRACK_ZONES
+ static DEFINE_PER_CPU(struct nf_conn *, nft_ct_pcpu_template);
+ static unsigned int nft_ct_pcpu_template_refcnt __read_mostly;
++static DEFINE_MUTEX(nft_ct_pcpu_mutex);
+ #endif
+ 
+ static u64 nft_ct_get_eval_counter(const struct nf_conn_counter *c,
+@@ -526,8 +527,11 @@ static void __nft_ct_set_destroy(const struct nft_ctx *ctx, struct nft_ct *priv)
+ #endif
+ #ifdef CONFIG_NF_CONNTRACK_ZONES
+ 	case NFT_CT_ZONE:
++		mutex_lock(&nft_ct_pcpu_mutex);
+ 		if (--nft_ct_pcpu_template_refcnt == 0)
+ 			nft_ct_tmpl_put_pcpu();
++		mutex_unlock(&nft_ct_pcpu_mutex);
++		break;
+ #endif
+ 	default:
+ 		break;
+@@ -564,9 +568,13 @@ static int nft_ct_set_init(const struct nft_ctx *ctx,
+ #endif
+ #ifdef CONFIG_NF_CONNTRACK_ZONES
+ 	case NFT_CT_ZONE:
+-		if (!nft_ct_tmpl_alloc_pcpu())
++		mutex_lock(&nft_ct_pcpu_mutex);
++		if (!nft_ct_tmpl_alloc_pcpu()) {
++			mutex_unlock(&nft_ct_pcpu_mutex);
+ 			return -ENOMEM;
++		}
+ 		nft_ct_pcpu_template_refcnt++;
++		mutex_unlock(&nft_ct_pcpu_mutex);
+ 		len = sizeof(u16);
+ 		break;
+ #endif
+diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
+index bbd5f87536006..99e8db2621984 100644
+--- a/net/sched/sch_fq_codel.c
++++ b/net/sched/sch_fq_codel.c
+@@ -369,6 +369,7 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
+ {
+ 	struct fq_codel_sched_data *q = qdisc_priv(sch);
+ 	struct nlattr *tb[TCA_FQ_CODEL_MAX + 1];
++	u32 quantum = 0;
+ 	int err;
+ 
+ 	if (!opt)
+@@ -386,6 +387,13 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
+ 		    q->flows_cnt > 65536)
+ 			return -EINVAL;
+ 	}
++	if (tb[TCA_FQ_CODEL_QUANTUM]) {
++		quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM]));
++		if (quantum > FQ_CODEL_QUANTUM_MAX) {
++			NL_SET_ERR_MSG(extack, "Invalid quantum");
++			return -EINVAL;
++		}
++	}
+ 	sch_tree_lock(sch);
+ 
+ 	if (tb[TCA_FQ_CODEL_TARGET]) {
+@@ -412,8 +420,8 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
+ 	if (tb[TCA_FQ_CODEL_ECN])
+ 		q->cparams.ecn = !!nla_get_u32(tb[TCA_FQ_CODEL_ECN]);
+ 
+-	if (tb[TCA_FQ_CODEL_QUANTUM])
+-		q->quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM]));
++	if (quantum)
++		q->quantum = quantum;
+ 
+ 	if (tb[TCA_FQ_CODEL_DROP_BATCH_SIZE])
+ 		q->drop_batch_size = max(1U, nla_get_u32(tb[TCA_FQ_CODEL_DROP_BATCH_SIZE]));
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 963047c57c27b..ce957ee5383c4 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1980,10 +1980,12 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 		tipc_node_distr_xmit(sock_net(sk), &xmitq);
+ 	}
+ 
+-	if (!skb_cb->bytes_read)
+-		tsk_advance_rx_queue(sk);
++	if (skb_cb->bytes_read)
++		goto exit;
++
++	tsk_advance_rx_queue(sk);
+ 
+-	if (likely(!connected) || skb_cb->bytes_read)
++	if (likely(!connected))
+ 		goto exit;
+ 
+ 	/* Send connection flow control advertisement when applicable */
+@@ -2420,7 +2422,7 @@ static int tipc_sk_backlog_rcv(struct sock *sk, struct sk_buff *skb)
+ static void tipc_sk_enqueue(struct sk_buff_head *inputq, struct sock *sk,
+ 			    u32 dport, struct sk_buff_head *xmitq)
+ {
+-	unsigned long time_limit = jiffies + 2;
++	unsigned long time_limit = jiffies + usecs_to_jiffies(20000);
+ 	struct sk_buff *skb;
+ 	unsigned int lim;
+ 	atomic_t *dcnt;
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 37ffa7725cee2..d5c0ae34b1e45 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -2769,7 +2769,7 @@ static __poll_t unix_dgram_poll(struct file *file, struct socket *sock,
+ 
+ 		other = unix_peer(sk);
+ 		if (other && unix_peer(other) != sk &&
+-		    unix_recvq_full(other) &&
++		    unix_recvq_full_lockless(other) &&
+ 		    unix_dgram_peer_wake_me(sk, other))
+ 			writable = 0;
+ 
+diff --git a/scripts/clang-tools/gen_compile_commands.py b/scripts/clang-tools/gen_compile_commands.py
+index 8ddb5d099029f..8bf55bb4f515c 100755
+--- a/scripts/clang-tools/gen_compile_commands.py
++++ b/scripts/clang-tools/gen_compile_commands.py
+@@ -13,6 +13,7 @@ import logging
+ import os
+ import re
+ import subprocess
++import sys
+ 
+ _DEFAULT_OUTPUT = 'compile_commands.json'
+ _DEFAULT_LOG_LEVEL = 'WARNING'
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 2abbd75fbf2e3..014b959575cae 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -127,10 +127,10 @@ FEATURE_CHECK_LDFLAGS-libunwind = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS)
+ FEATURE_CHECK_CFLAGS-libunwind-debug-frame = $(LIBUNWIND_CFLAGS)
+ FEATURE_CHECK_LDFLAGS-libunwind-debug-frame = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS)
+ 
+-FEATURE_CHECK_LDFLAGS-libunwind-arm = -lunwind -lunwind-arm
+-FEATURE_CHECK_LDFLAGS-libunwind-aarch64 = -lunwind -lunwind-aarch64
+-FEATURE_CHECK_LDFLAGS-libunwind-x86 = -lunwind -llzma -lunwind-x86
+-FEATURE_CHECK_LDFLAGS-libunwind-x86_64 = -lunwind -llzma -lunwind-x86_64
++FEATURE_CHECK_LDFLAGS-libunwind-arm += -lunwind -lunwind-arm
++FEATURE_CHECK_LDFLAGS-libunwind-aarch64 += -lunwind -lunwind-aarch64
++FEATURE_CHECK_LDFLAGS-libunwind-x86 += -lunwind -llzma -lunwind-x86
++FEATURE_CHECK_LDFLAGS-libunwind-x86_64 += -lunwind -llzma -lunwind-x86_64
+ 
+ FEATURE_CHECK_LDFLAGS-libcrypto = -lcrypto
+ 
+diff --git a/tools/perf/bench/inject-buildid.c b/tools/perf/bench/inject-buildid.c
+index 280227e3ffd7a..f4ec01da8da68 100644
+--- a/tools/perf/bench/inject-buildid.c
++++ b/tools/perf/bench/inject-buildid.c
+@@ -133,7 +133,7 @@ static u64 dso_map_addr(struct bench_dso *dso)
+ 	return 0x400000ULL + dso->ino * 8192ULL;
+ }
+ 
+-static u32 synthesize_attr(struct bench_data *data)
++static ssize_t synthesize_attr(struct bench_data *data)
+ {
+ 	union perf_event event;
+ 
+@@ -151,7 +151,7 @@ static u32 synthesize_attr(struct bench_data *data)
+ 	return writen(data->input_pipe[1], &event, event.header.size);
+ }
+ 
+-static u32 synthesize_fork(struct bench_data *data)
++static ssize_t synthesize_fork(struct bench_data *data)
+ {
+ 	union perf_event event;
+ 
+@@ -169,8 +169,7 @@ static u32 synthesize_fork(struct bench_data *data)
+ 	return writen(data->input_pipe[1], &event, event.header.size);
+ }
+ 
+-static u32 synthesize_mmap(struct bench_data *data, struct bench_dso *dso,
+-			   u64 timestamp)
++static ssize_t synthesize_mmap(struct bench_data *data, struct bench_dso *dso, u64 timestamp)
+ {
+ 	union perf_event event;
+ 	size_t len = offsetof(struct perf_record_mmap2, filename);
+@@ -198,23 +197,25 @@ static u32 synthesize_mmap(struct bench_data *data, struct bench_dso *dso,
+ 
+ 	if (len > sizeof(event.mmap2)) {
+ 		/* write mmap2 event first */
+-		writen(data->input_pipe[1], &event, len - bench_id_hdr_size);
++		if (writen(data->input_pipe[1], &event, len - bench_id_hdr_size) < 0)
++			return -1;
+ 		/* zero-fill sample id header */
+ 		memset(id_hdr_ptr, 0, bench_id_hdr_size);
+ 		/* put timestamp in the right position */
+ 		ts_idx = (bench_id_hdr_size / sizeof(u64)) - 2;
+ 		id_hdr_ptr[ts_idx] = timestamp;
+-		writen(data->input_pipe[1], id_hdr_ptr, bench_id_hdr_size);
+-	} else {
+-		ts_idx = (len / sizeof(u64)) - 2;
+-		id_hdr_ptr[ts_idx] = timestamp;
+-		writen(data->input_pipe[1], &event, len);
++		if (writen(data->input_pipe[1], id_hdr_ptr, bench_id_hdr_size) < 0)
++			return -1;
++
++		return len;
+ 	}
+-	return len;
++
++	ts_idx = (len / sizeof(u64)) - 2;
++	id_hdr_ptr[ts_idx] = timestamp;
++	return writen(data->input_pipe[1], &event, len);
+ }
+ 
+-static u32 synthesize_sample(struct bench_data *data, struct bench_dso *dso,
+-			     u64 timestamp)
++static ssize_t synthesize_sample(struct bench_data *data, struct bench_dso *dso, u64 timestamp)
+ {
+ 	union perf_event event;
+ 	struct perf_sample sample = {
+@@ -233,7 +234,7 @@ static u32 synthesize_sample(struct bench_data *data, struct bench_dso *dso,
+ 	return writen(data->input_pipe[1], &event, event.header.size);
+ }
+ 
+-static u32 synthesize_flush(struct bench_data *data)
++static ssize_t synthesize_flush(struct bench_data *data)
+ {
+ 	struct perf_event_header header = {
+ 		.size = sizeof(header),
+@@ -348,14 +349,16 @@ static int inject_build_id(struct bench_data *data, u64 *max_rss)
+ 	int status;
+ 	unsigned int i, k;
+ 	struct rusage rusage;
+-	u64 len = 0;
+ 
+ 	/* this makes the child to run */
+ 	if (perf_header__write_pipe(data->input_pipe[1]) < 0)
+ 		return -1;
+ 
+-	len += synthesize_attr(data);
+-	len += synthesize_fork(data);
++	if (synthesize_attr(data) < 0)
++		return -1;
++
++	if (synthesize_fork(data) < 0)
++		return -1;
+ 
+ 	for (i = 0; i < nr_mmaps; i++) {
+ 		int idx = rand() % (nr_dsos - 1);
+@@ -363,13 +366,18 @@ static int inject_build_id(struct bench_data *data, u64 *max_rss)
+ 		u64 timestamp = rand() % 1000000;
+ 
+ 		pr_debug2("   [%d] injecting: %s\n", i+1, dso->name);
+-		len += synthesize_mmap(data, dso, timestamp);
++		if (synthesize_mmap(data, dso, timestamp) < 0)
++			return -1;
+ 
+-		for (k = 0; k < nr_samples; k++)
+-			len += synthesize_sample(data, dso, timestamp + k * 1000);
++		for (k = 0; k < nr_samples; k++) {
++			if (synthesize_sample(data, dso, timestamp + k * 1000) < 0)
++				return -1;
++		}
+ 
+-		if ((i + 1) % 10 == 0)
+-			len += synthesize_flush(data);
++		if ((i + 1) % 10 == 0) {
++			if (synthesize_flush(data) < 0)
++				return -1;
++		}
+ 	}
+ 
+ 	/* tihs makes the child to finish */
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 74bf480aa4f05..df515cd8d0184 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2100,6 +2100,7 @@ static int add_callchain_ip(struct thread *thread,
+ 
+ 	al.filtered = 0;
+ 	al.sym = NULL;
++	al.srcline = NULL;
+ 	if (!cpumode) {
+ 		thread__find_cpumode_addr_location(thread, ip, &al);
+ 	} else {
+diff --git a/tools/testing/selftests/net/altnames.sh b/tools/testing/selftests/net/altnames.sh
+index 4254ddc3f70b5..1ef9e4159bba8 100755
+--- a/tools/testing/selftests/net/altnames.sh
++++ b/tools/testing/selftests/net/altnames.sh
+@@ -45,7 +45,7 @@ altnames_test()
+ 	check_err $? "Got unexpected long alternative name from link show JSON"
+ 
+ 	ip link property del $DUMMY_DEV altname $SHORT_NAME
+-	check_err $? "Failed to add short alternative name"
++	check_err $? "Failed to delete short alternative name"
+ 
+ 	ip -j -p link show $SHORT_NAME &>/dev/null
+ 	check_fail $? "Unexpected success while trying to do link show with deleted short alternative name"
+diff --git a/tools/testing/selftests/net/mptcp/simult_flows.sh b/tools/testing/selftests/net/mptcp/simult_flows.sh
+index 2f649b431456a..8fcb289278182 100755
+--- a/tools/testing/selftests/net/mptcp/simult_flows.sh
++++ b/tools/testing/selftests/net/mptcp/simult_flows.sh
+@@ -21,8 +21,8 @@ usage() {
+ 
+ cleanup()
+ {
+-	rm -f "$cin" "$cout"
+-	rm -f "$sin" "$sout"
++	rm -f "$cout" "$sout"
++	rm -f "$large" "$small"
+ 	rm -f "$capout"
+ 
+ 	local netns


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-26 14:12 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-09-26 14:12 UTC (permalink / raw
  To: gentoo-commits

commit:     304210fdee883a06e79223ce982fec8985222041
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Sep 26 14:12:11 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Sep 26 14:12:11 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=304210fd

Linux patch 5.10.69

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1068_linux-5.10.69.patch | 2394 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2398 insertions(+)

diff --git a/0000_README b/0000_README
index 416061d..456fb50 100644
--- a/0000_README
+++ b/0000_README
@@ -315,6 +315,10 @@ Patch:  1067_linux-5.10.68.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.68
 
+Patch:  1068_linux-5.10.69.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.69
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1068_linux-5.10.69.patch b/1068_linux-5.10.69.patch
new file mode 100644
index 0000000..d9237d5
--- /dev/null
+++ b/1068_linux-5.10.69.patch
@@ -0,0 +1,2394 @@
+diff --git a/Makefile b/Makefile
+index e50581c9db50e..e14943205b832 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 68
++SUBLEVEL = 69
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/include/asm/ftrace.h b/arch/arm/include/asm/ftrace.h
+index 48ec1d0337da7..a4dbac07e4ef0 100644
+--- a/arch/arm/include/asm/ftrace.h
++++ b/arch/arm/include/asm/ftrace.h
+@@ -15,6 +15,9 @@ extern void __gnu_mcount_nc(void);
+ 
+ #ifdef CONFIG_DYNAMIC_FTRACE
+ struct dyn_arch_ftrace {
++#ifdef CONFIG_ARM_MODULE_PLTS
++	struct module *mod;
++#endif
+ };
+ 
+ static inline unsigned long ftrace_call_adjust(unsigned long addr)
+diff --git a/arch/arm/include/asm/insn.h b/arch/arm/include/asm/insn.h
+index f20e08ac85aeb..5475cbf9fb6b4 100644
+--- a/arch/arm/include/asm/insn.h
++++ b/arch/arm/include/asm/insn.h
+@@ -13,18 +13,18 @@ arm_gen_nop(void)
+ }
+ 
+ unsigned long
+-__arm_gen_branch(unsigned long pc, unsigned long addr, bool link);
++__arm_gen_branch(unsigned long pc, unsigned long addr, bool link, bool warn);
+ 
+ static inline unsigned long
+ arm_gen_branch(unsigned long pc, unsigned long addr)
+ {
+-	return __arm_gen_branch(pc, addr, false);
++	return __arm_gen_branch(pc, addr, false, true);
+ }
+ 
+ static inline unsigned long
+-arm_gen_branch_link(unsigned long pc, unsigned long addr)
++arm_gen_branch_link(unsigned long pc, unsigned long addr, bool warn)
+ {
+-	return __arm_gen_branch(pc, addr, true);
++	return __arm_gen_branch(pc, addr, true, warn);
+ }
+ 
+ #endif
+diff --git a/arch/arm/include/asm/module.h b/arch/arm/include/asm/module.h
+index 4b0df09cbe678..cfffae67c04ee 100644
+--- a/arch/arm/include/asm/module.h
++++ b/arch/arm/include/asm/module.h
+@@ -19,8 +19,18 @@ enum {
+ };
+ #endif
+ 
++#define PLT_ENT_STRIDE		L1_CACHE_BYTES
++#define PLT_ENT_COUNT		(PLT_ENT_STRIDE / sizeof(u32))
++#define PLT_ENT_SIZE		(sizeof(struct plt_entries) / PLT_ENT_COUNT)
++
++struct plt_entries {
++	u32	ldr[PLT_ENT_COUNT];
++	u32	lit[PLT_ENT_COUNT];
++};
++
+ struct mod_plt_sec {
+ 	struct elf32_shdr	*plt;
++	struct plt_entries	*plt_ent;
+ 	int			plt_count;
+ };
+ 
+diff --git a/arch/arm/kernel/ftrace.c b/arch/arm/kernel/ftrace.c
+index 9a79ef6b1876c..3c83b5d296979 100644
+--- a/arch/arm/kernel/ftrace.c
++++ b/arch/arm/kernel/ftrace.c
+@@ -68,9 +68,10 @@ int ftrace_arch_code_modify_post_process(void)
+ 	return 0;
+ }
+ 
+-static unsigned long ftrace_call_replace(unsigned long pc, unsigned long addr)
++static unsigned long ftrace_call_replace(unsigned long pc, unsigned long addr,
++					 bool warn)
+ {
+-	return arm_gen_branch_link(pc, addr);
++	return arm_gen_branch_link(pc, addr, warn);
+ }
+ 
+ static int ftrace_modify_code(unsigned long pc, unsigned long old,
+@@ -104,14 +105,14 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
+ 	int ret;
+ 
+ 	pc = (unsigned long)&ftrace_call;
+-	new = ftrace_call_replace(pc, (unsigned long)func);
++	new = ftrace_call_replace(pc, (unsigned long)func, true);
+ 
+ 	ret = ftrace_modify_code(pc, 0, new, false);
+ 
+ #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+ 	if (!ret) {
+ 		pc = (unsigned long)&ftrace_regs_call;
+-		new = ftrace_call_replace(pc, (unsigned long)func);
++		new = ftrace_call_replace(pc, (unsigned long)func, true);
+ 
+ 		ret = ftrace_modify_code(pc, 0, new, false);
+ 	}
+@@ -124,10 +125,22 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+ {
+ 	unsigned long new, old;
+ 	unsigned long ip = rec->ip;
++	unsigned long aaddr = adjust_address(rec, addr);
++	struct module *mod = NULL;
++
++#ifdef CONFIG_ARM_MODULE_PLTS
++	mod = rec->arch.mod;
++#endif
+ 
+ 	old = ftrace_nop_replace(rec);
+ 
+-	new = ftrace_call_replace(ip, adjust_address(rec, addr));
++	new = ftrace_call_replace(ip, aaddr, !mod);
++#ifdef CONFIG_ARM_MODULE_PLTS
++	if (!new && mod) {
++		aaddr = get_module_plt(mod, ip, aaddr);
++		new = ftrace_call_replace(ip, aaddr, true);
++	}
++#endif
+ 
+ 	return ftrace_modify_code(rec->ip, old, new, true);
+ }
+@@ -140,9 +153,9 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ 	unsigned long new, old;
+ 	unsigned long ip = rec->ip;
+ 
+-	old = ftrace_call_replace(ip, adjust_address(rec, old_addr));
++	old = ftrace_call_replace(ip, adjust_address(rec, old_addr), true);
+ 
+-	new = ftrace_call_replace(ip, adjust_address(rec, addr));
++	new = ftrace_call_replace(ip, adjust_address(rec, addr), true);
+ 
+ 	return ftrace_modify_code(rec->ip, old, new, true);
+ }
+@@ -152,12 +165,29 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ int ftrace_make_nop(struct module *mod,
+ 		    struct dyn_ftrace *rec, unsigned long addr)
+ {
++	unsigned long aaddr = adjust_address(rec, addr);
+ 	unsigned long ip = rec->ip;
+ 	unsigned long old;
+ 	unsigned long new;
+ 	int ret;
+ 
+-	old = ftrace_call_replace(ip, adjust_address(rec, addr));
++#ifdef CONFIG_ARM_MODULE_PLTS
++	/* mod is only supplied during module loading */
++	if (!mod)
++		mod = rec->arch.mod;
++	else
++		rec->arch.mod = mod;
++#endif
++
++	old = ftrace_call_replace(ip, aaddr,
++				  !IS_ENABLED(CONFIG_ARM_MODULE_PLTS) || !mod);
++#ifdef CONFIG_ARM_MODULE_PLTS
++	if (!old && mod) {
++		aaddr = get_module_plt(mod, ip, aaddr);
++		old = ftrace_call_replace(ip, aaddr, true);
++	}
++#endif
++
+ 	new = ftrace_nop_replace(rec);
+ 	ret = ftrace_modify_code(ip, old, new, true);
+ 
+diff --git a/arch/arm/kernel/insn.c b/arch/arm/kernel/insn.c
+index 2e844b70386b3..db0acbb7d7a02 100644
+--- a/arch/arm/kernel/insn.c
++++ b/arch/arm/kernel/insn.c
+@@ -3,8 +3,9 @@
+ #include <linux/kernel.h>
+ #include <asm/opcodes.h>
+ 
+-static unsigned long
+-__arm_gen_branch_thumb2(unsigned long pc, unsigned long addr, bool link)
++static unsigned long __arm_gen_branch_thumb2(unsigned long pc,
++					     unsigned long addr, bool link,
++					     bool warn)
+ {
+ 	unsigned long s, j1, j2, i1, i2, imm10, imm11;
+ 	unsigned long first, second;
+@@ -12,7 +13,7 @@ __arm_gen_branch_thumb2(unsigned long pc, unsigned long addr, bool link)
+ 
+ 	offset = (long)addr - (long)(pc + 4);
+ 	if (offset < -16777216 || offset > 16777214) {
+-		WARN_ON_ONCE(1);
++		WARN_ON_ONCE(warn);
+ 		return 0;
+ 	}
+ 
+@@ -33,8 +34,8 @@ __arm_gen_branch_thumb2(unsigned long pc, unsigned long addr, bool link)
+ 	return __opcode_thumb32_compose(first, second);
+ }
+ 
+-static unsigned long
+-__arm_gen_branch_arm(unsigned long pc, unsigned long addr, bool link)
++static unsigned long __arm_gen_branch_arm(unsigned long pc, unsigned long addr,
++					  bool link, bool warn)
+ {
+ 	unsigned long opcode = 0xea000000;
+ 	long offset;
+@@ -44,7 +45,7 @@ __arm_gen_branch_arm(unsigned long pc, unsigned long addr, bool link)
+ 
+ 	offset = (long)addr - (long)(pc + 8);
+ 	if (unlikely(offset < -33554432 || offset > 33554428)) {
+-		WARN_ON_ONCE(1);
++		WARN_ON_ONCE(warn);
+ 		return 0;
+ 	}
+ 
+@@ -54,10 +55,10 @@ __arm_gen_branch_arm(unsigned long pc, unsigned long addr, bool link)
+ }
+ 
+ unsigned long
+-__arm_gen_branch(unsigned long pc, unsigned long addr, bool link)
++__arm_gen_branch(unsigned long pc, unsigned long addr, bool link, bool warn)
+ {
+ 	if (IS_ENABLED(CONFIG_THUMB2_KERNEL))
+-		return __arm_gen_branch_thumb2(pc, addr, link);
++		return __arm_gen_branch_thumb2(pc, addr, link, warn);
+ 	else
+-		return __arm_gen_branch_arm(pc, addr, link);
++		return __arm_gen_branch_arm(pc, addr, link, warn);
+ }
+diff --git a/arch/arm/kernel/module-plts.c b/arch/arm/kernel/module-plts.c
+index 6e626abaefc54..1fc309b41f944 100644
+--- a/arch/arm/kernel/module-plts.c
++++ b/arch/arm/kernel/module-plts.c
+@@ -4,6 +4,7 @@
+  */
+ 
+ #include <linux/elf.h>
++#include <linux/ftrace.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/sort.h>
+@@ -12,10 +13,6 @@
+ #include <asm/cache.h>
+ #include <asm/opcodes.h>
+ 
+-#define PLT_ENT_STRIDE		L1_CACHE_BYTES
+-#define PLT_ENT_COUNT		(PLT_ENT_STRIDE / sizeof(u32))
+-#define PLT_ENT_SIZE		(sizeof(struct plt_entries) / PLT_ENT_COUNT)
+-
+ #ifdef CONFIG_THUMB2_KERNEL
+ #define PLT_ENT_LDR		__opcode_to_mem_thumb32(0xf8dff000 | \
+ 							(PLT_ENT_STRIDE - 4))
+@@ -24,9 +21,11 @@
+ 						    (PLT_ENT_STRIDE - 8))
+ #endif
+ 
+-struct plt_entries {
+-	u32	ldr[PLT_ENT_COUNT];
+-	u32	lit[PLT_ENT_COUNT];
++static const u32 fixed_plts[] = {
++#ifdef CONFIG_DYNAMIC_FTRACE
++	FTRACE_ADDR,
++	MCOUNT_ADDR,
++#endif
+ };
+ 
+ static bool in_init(const struct module *mod, unsigned long loc)
+@@ -34,14 +33,40 @@ static bool in_init(const struct module *mod, unsigned long loc)
+ 	return loc - (u32)mod->init_layout.base < mod->init_layout.size;
+ }
+ 
++static void prealloc_fixed(struct mod_plt_sec *pltsec, struct plt_entries *plt)
++{
++	int i;
++
++	if (!ARRAY_SIZE(fixed_plts) || pltsec->plt_count)
++		return;
++	pltsec->plt_count = ARRAY_SIZE(fixed_plts);
++
++	for (i = 0; i < ARRAY_SIZE(plt->ldr); ++i)
++		plt->ldr[i] = PLT_ENT_LDR;
++
++	BUILD_BUG_ON(sizeof(fixed_plts) > sizeof(plt->lit));
++	memcpy(plt->lit, fixed_plts, sizeof(fixed_plts));
++}
++
+ u32 get_module_plt(struct module *mod, unsigned long loc, Elf32_Addr val)
+ {
+ 	struct mod_plt_sec *pltsec = !in_init(mod, loc) ? &mod->arch.core :
+ 							  &mod->arch.init;
++	struct plt_entries *plt;
++	int idx;
++
++	/* cache the address, ELF header is available only during module load */
++	if (!pltsec->plt_ent)
++		pltsec->plt_ent = (struct plt_entries *)pltsec->plt->sh_addr;
++	plt = pltsec->plt_ent;
+ 
+-	struct plt_entries *plt = (struct plt_entries *)pltsec->plt->sh_addr;
+-	int idx = 0;
++	prealloc_fixed(pltsec, plt);
++
++	for (idx = 0; idx < ARRAY_SIZE(fixed_plts); ++idx)
++		if (plt->lit[idx] == val)
++			return (u32)&plt->ldr[idx];
+ 
++	idx = 0;
+ 	/*
+ 	 * Look for an existing entry pointing to 'val'. Given that the
+ 	 * relocations are sorted, this will be the last entry we allocated.
+@@ -189,8 +214,8 @@ static unsigned int count_plts(const Elf32_Sym *syms, Elf32_Addr base,
+ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
+ 			      char *secstrings, struct module *mod)
+ {
+-	unsigned long core_plts = 0;
+-	unsigned long init_plts = 0;
++	unsigned long core_plts = ARRAY_SIZE(fixed_plts);
++	unsigned long init_plts = ARRAY_SIZE(fixed_plts);
+ 	Elf32_Shdr *s, *sechdrs_end = sechdrs + ehdr->e_shnum;
+ 	Elf32_Sym *syms = NULL;
+ 
+@@ -245,6 +270,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
+ 	mod->arch.core.plt->sh_size = round_up(core_plts * PLT_ENT_SIZE,
+ 					       sizeof(struct plt_entries));
+ 	mod->arch.core.plt_count = 0;
++	mod->arch.core.plt_ent = NULL;
+ 
+ 	mod->arch.init.plt->sh_type = SHT_NOBITS;
+ 	mod->arch.init.plt->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
+@@ -252,6 +278,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
+ 	mod->arch.init.plt->sh_size = round_up(init_plts * PLT_ENT_SIZE,
+ 					       sizeof(struct plt_entries));
+ 	mod->arch.init.plt_count = 0;
++	mod->arch.init.plt_ent = NULL;
+ 
+ 	pr_debug("%s: plt=%x, init.plt=%x\n", __func__,
+ 		 mod->arch.core.plt->sh_size, mod->arch.init.plt->sh_size);
+diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
+index d54d69cf17322..75f3ab531bdf4 100644
+--- a/arch/arm/mm/init.c
++++ b/arch/arm/mm/init.c
+@@ -378,7 +378,11 @@ static void __init free_highpages(void)
+ void __init mem_init(void)
+ {
+ #ifdef CONFIG_ARM_LPAE
+-	swiotlb_init(1);
++	if (swiotlb_force == SWIOTLB_FORCE ||
++	    max_pfn > arm_dma_pfn_limit)
++		swiotlb_init(1);
++	else
++		swiotlb_force = SWIOTLB_NO_FORCE;
+ #endif
+ 
+ 	set_max_mapnr(pfn_to_page(max_pfn) - mem_map);
+diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
+index 7fa6828bb488a..587543c6c51cb 100644
+--- a/arch/arm64/kernel/cacheinfo.c
++++ b/arch/arm64/kernel/cacheinfo.c
+@@ -43,7 +43,7 @@ static void ci_leaf_init(struct cacheinfo *this_leaf,
+ 	this_leaf->type = type;
+ }
+ 
+-static int __init_cache_level(unsigned int cpu)
++int init_cache_level(unsigned int cpu)
+ {
+ 	unsigned int ctype, level, leaves, fw_level;
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+@@ -78,7 +78,7 @@ static int __init_cache_level(unsigned int cpu)
+ 	return 0;
+ }
+ 
+-static int __populate_cache_leaves(unsigned int cpu)
++int populate_cache_leaves(unsigned int cpu)
+ {
+ 	unsigned int level, idx;
+ 	enum cache_type type;
+@@ -97,6 +97,3 @@ static int __populate_cache_leaves(unsigned int cpu)
+ 	}
+ 	return 0;
+ }
+-
+-DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+-DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
+diff --git a/arch/mips/kernel/cacheinfo.c b/arch/mips/kernel/cacheinfo.c
+index 47312c5294102..529dab855aac9 100644
+--- a/arch/mips/kernel/cacheinfo.c
++++ b/arch/mips/kernel/cacheinfo.c
+@@ -17,7 +17,7 @@ do {								\
+ 	leaf++;							\
+ } while (0)
+ 
+-static int __init_cache_level(unsigned int cpu)
++int init_cache_level(unsigned int cpu)
+ {
+ 	struct cpuinfo_mips *c = &current_cpu_data;
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+@@ -69,7 +69,7 @@ static void fill_cpumask_cluster(int cpu, cpumask_t *cpu_map)
+ 			cpumask_set_cpu(cpu1, cpu_map);
+ }
+ 
+-static int __populate_cache_leaves(unsigned int cpu)
++int populate_cache_leaves(unsigned int cpu)
+ {
+ 	struct cpuinfo_mips *c = &current_cpu_data;
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+@@ -98,6 +98,3 @@ static int __populate_cache_leaves(unsigned int cpu)
+ 
+ 	return 0;
+ }
+-
+-DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+-DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
+diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
+index d867813570442..90deabfe63eaa 100644
+--- a/arch/riscv/kernel/cacheinfo.c
++++ b/arch/riscv/kernel/cacheinfo.c
+@@ -113,7 +113,7 @@ static void fill_cacheinfo(struct cacheinfo **this_leaf,
+ 	}
+ }
+ 
+-static int __init_cache_level(unsigned int cpu)
++int init_cache_level(unsigned int cpu)
+ {
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+ 	struct device_node *np = of_cpu_device_node_get(cpu);
+@@ -155,7 +155,7 @@ static int __init_cache_level(unsigned int cpu)
+ 	return 0;
+ }
+ 
+-static int __populate_cache_leaves(unsigned int cpu)
++int populate_cache_leaves(unsigned int cpu)
+ {
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+ 	struct cacheinfo *this_leaf = this_cpu_ci->info_list;
+@@ -187,6 +187,3 @@ static int __populate_cache_leaves(unsigned int cpu)
+ 
+ 	return 0;
+ }
+-
+-DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+-DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
+diff --git a/arch/s390/pci/pci_mmio.c b/arch/s390/pci/pci_mmio.c
+index 401cf670a2439..37b1bbd1a27cc 100644
+--- a/arch/s390/pci/pci_mmio.c
++++ b/arch/s390/pci/pci_mmio.c
+@@ -128,7 +128,7 @@ static long get_pfn(unsigned long user_addr, unsigned long access,
+ 	mmap_read_lock(current->mm);
+ 	ret = -EINVAL;
+ 	vma = find_vma(current->mm, user_addr);
+-	if (!vma)
++	if (!vma || user_addr < vma->vm_start)
+ 		goto out;
+ 	ret = -EACCES;
+ 	if (!(vma->vm_flags & access))
+diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
+index c17b8e5ec1869..d11b3d41c3785 100644
+--- a/arch/um/drivers/virtio_uml.c
++++ b/arch/um/drivers/virtio_uml.c
+@@ -1113,7 +1113,7 @@ static int virtio_uml_probe(struct platform_device *pdev)
+ 		rc = os_connect_socket(pdata->socket_path);
+ 	} while (rc == -EINTR);
+ 	if (rc < 0)
+-		return rc;
++		goto error_free;
+ 	vu_dev->sock = rc;
+ 
+ 	spin_lock_init(&vu_dev->sock_lock);
+@@ -1132,6 +1132,8 @@ static int virtio_uml_probe(struct platform_device *pdev)
+ 
+ error_init:
+ 	os_close_file(vu_dev->sock);
++error_free:
++	kfree(vu_dev);
+ 	return rc;
+ }
+ 
+diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
+index f9ac682e75e78..b458b0fd98bf6 100644
+--- a/arch/x86/kernel/cpu/cacheinfo.c
++++ b/arch/x86/kernel/cpu/cacheinfo.c
+@@ -985,7 +985,7 @@ static void ci_leaf_init(struct cacheinfo *this_leaf,
+ 	this_leaf->priv = base->nb;
+ }
+ 
+-static int __init_cache_level(unsigned int cpu)
++int init_cache_level(unsigned int cpu)
+ {
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+ 
+@@ -1014,7 +1014,7 @@ static void get_cache_id(int cpu, struct _cpuid4_info_regs *id4_regs)
+ 	id4_regs->id = c->apicid >> index_msb;
+ }
+ 
+-static int __populate_cache_leaves(unsigned int cpu)
++int populate_cache_leaves(unsigned int cpu)
+ {
+ 	unsigned int idx, ret;
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+@@ -1033,6 +1033,3 @@ static int __populate_cache_leaves(unsigned int cpu)
+ 
+ 	return 0;
+ }
+-
+-DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+-DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 9e3fedbaa644b..6dcb86c1c985d 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2109,6 +2109,18 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
+ 	}
+ }
+ 
++/*
++ * Allow 4x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
++ * queues. This is important for md arrays to benefit from merging
++ * requests.
++ */
++static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
++{
++	if (plug->multiple_queues)
++		return BLK_MAX_REQUEST_COUNT * 4;
++	return BLK_MAX_REQUEST_COUNT;
++}
++
+ /**
+  * blk_mq_submit_bio - Create and send a request to block device.
+  * @bio: Bio pointer.
+@@ -2202,7 +2214,7 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
+ 		else
+ 			last = list_entry_rq(plug->mq_list.prev);
+ 
+-		if (request_count >= BLK_MAX_REQUEST_COUNT || (last &&
++		if (request_count >= blk_plug_max_rq_count(plug) || (last &&
+ 		    blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE)) {
+ 			blk_flush_plug_list(plug, false);
+ 			trace_block_plug(q);
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index 63e9d00a08321..c53a254171a29 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -2452,6 +2452,7 @@ int blk_throtl_init(struct request_queue *q)
+ void blk_throtl_exit(struct request_queue *q)
+ {
+ 	BUG_ON(!q->td);
++	del_timer_sync(&q->td->service_queue.pending_timer);
+ 	throtl_shutdown_wq(q);
+ 	blkcg_deactivate_policy(q, &blkcg_policy_throtl);
+ 	free_percpu(q->td->latency_buckets[READ]);
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index c7ac49042cee6..192b1c7286b36 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -1644,7 +1644,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 	}
+ 
+ 	dev->power.may_skip_resume = true;
+-	dev->power.must_resume = false;
++	dev->power.must_resume = !dev_pm_test_driver_flags(dev, DPM_FLAG_MAY_SKIP_RESUME);
+ 
+ 	dpm_watchdog_set(&wd, dev);
+ 	device_lock(dev);
+diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
+index 4f8224a6ac956..3ca7de37dd8f2 100644
+--- a/drivers/dma-buf/Kconfig
++++ b/drivers/dma-buf/Kconfig
+@@ -42,6 +42,7 @@ config UDMABUF
+ config DMABUF_MOVE_NOTIFY
+ 	bool "Move notify between drivers (EXPERIMENTAL)"
+ 	default n
++	depends on DMA_SHARED_BUFFER
+ 	help
+ 	  Don't pin buffers if the dynamic DMA-buf interface is available on
+ 	  both the exporter as well as the importer. This fixes a security
+diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
+index f28bb2334e747..08013345d1f24 100644
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -285,7 +285,7 @@ config INTEL_IDMA64
+ 
+ config INTEL_IDXD
+ 	tristate "Intel Data Accelerators support"
+-	depends on PCI && X86_64
++	depends on PCI && X86_64 && !UML
+ 	depends on PCI_MSI
+ 	depends on SBITMAP
+ 	select DMA_ENGINE
+@@ -299,7 +299,7 @@ config INTEL_IDXD
+ 
+ config INTEL_IOATDMA
+ 	tristate "Intel I/OAT DMA support"
+-	depends on PCI && X86_64
++	depends on PCI && X86_64 && !UML
+ 	select DMA_ENGINE
+ 	select DMA_ENGINE_RAID
+ 	select DCA
+diff --git a/drivers/dma/acpi-dma.c b/drivers/dma/acpi-dma.c
+index 235f1396f9686..52768dc8ce124 100644
+--- a/drivers/dma/acpi-dma.c
++++ b/drivers/dma/acpi-dma.c
+@@ -70,10 +70,14 @@ static int acpi_dma_parse_resource_group(const struct acpi_csrt_group *grp,
+ 
+ 	si = (const struct acpi_csrt_shared_info *)&grp[1];
+ 
+-	/* Match device by MMIO and IRQ */
++	/* Match device by MMIO */
+ 	if (si->mmio_base_low != lower_32_bits(mem) ||
+-	    si->mmio_base_high != upper_32_bits(mem) ||
+-	    si->gsi_interrupt != irq)
++	    si->mmio_base_high != upper_32_bits(mem))
++		return 0;
++
++	/* Match device by Linux vIRQ */
++	ret = acpi_register_gsi(NULL, si->gsi_interrupt, si->interrupt_mode, si->interrupt_polarity);
++	if (ret != irq)
+ 		return 0;
+ 
+ 	dev_dbg(&adev->dev, "matches with %.4s%04X (rev %u)\n",
+diff --git a/drivers/dma/idxd/submit.c b/drivers/dma/idxd/submit.c
+index 417048e3c42aa..0368c5490788f 100644
+--- a/drivers/dma/idxd/submit.c
++++ b/drivers/dma/idxd/submit.c
+@@ -45,7 +45,7 @@ struct idxd_desc *idxd_alloc_desc(struct idxd_wq *wq, enum idxd_op_type optype)
+ 		if (signal_pending_state(TASK_INTERRUPTIBLE, current))
+ 			break;
+ 		idx = sbitmap_queue_get(sbq, &cpu);
+-		if (idx > 0)
++		if (idx >= 0)
+ 			break;
+ 		schedule();
+ 	}
+diff --git a/drivers/dma/sprd-dma.c b/drivers/dma/sprd-dma.c
+index 0ef5ca81ba4d0..4357d2395e6b7 100644
+--- a/drivers/dma/sprd-dma.c
++++ b/drivers/dma/sprd-dma.c
+@@ -1265,6 +1265,7 @@ static const struct of_device_id sprd_dma_match[] = {
+ 	{ .compatible = "sprd,sc9860-dma", },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, sprd_dma_match);
+ 
+ static int __maybe_unused sprd_dma_runtime_suspend(struct device *dev)
+ {
+diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
+index 9ffdbeec436bd..cab4719e4cf9c 100644
+--- a/drivers/dma/xilinx/xilinx_dma.c
++++ b/drivers/dma/xilinx/xilinx_dma.c
+@@ -3070,7 +3070,7 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+ 		xdev->ext_addr = false;
+ 
+ 	/* Set the dma mask bits */
+-	dma_set_mask(xdev->dev, DMA_BIT_MASK(addr_width));
++	dma_set_mask_and_coherent(xdev->dev, DMA_BIT_MASK(addr_width));
+ 
+ 	/* Initialize the DMA engine */
+ 	xdev->common.dev = &pdev->dev;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+index b76425164e297..7931528bc864b 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+@@ -27,6 +27,9 @@
+ #include <linux/pci.h>
+ #include <linux/slab.h>
+ #include <asm/div64.h>
++#if IS_ENABLED(CONFIG_X86_64)
++#include <asm/intel-family.h>
++#endif
+ #include <drm/amdgpu_drm.h>
+ #include "ppatomctrl.h"
+ #include "atombios.h"
+@@ -1606,6 +1609,17 @@ static int smu7_disable_dpm_tasks(struct pp_hwmgr *hwmgr)
+ 	return result;
+ }
+ 
++static bool intel_core_rkl_chk(void)
++{
++#if IS_ENABLED(CONFIG_X86_64)
++	struct cpuinfo_x86 *c = &cpu_data(0);
++
++	return (c->x86 == 6 && c->x86_model == INTEL_FAM6_ROCKETLAKE);
++#else
++	return false;
++#endif
++}
++
+ static void smu7_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ {
+ 	struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
+@@ -1629,7 +1643,8 @@ static void smu7_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ 
+ 	data->mclk_dpm_key_disabled = hwmgr->feature_mask & PP_MCLK_DPM_MASK ? false : true;
+ 	data->sclk_dpm_key_disabled = hwmgr->feature_mask & PP_SCLK_DPM_MASK ? false : true;
+-	data->pcie_dpm_key_disabled = hwmgr->feature_mask & PP_PCIE_DPM_MASK ? false : true;
++	data->pcie_dpm_key_disabled =
++		intel_core_rkl_chk() || !(hwmgr->feature_mask & PP_PCIE_DPM_MASK);
+ 	/* need to set voltage control types before EVV patching */
+ 	data->voltage_control = SMU7_VOLTAGE_CONTROL_NONE;
+ 	data->vddci_control = SMU7_VOLTAGE_CONTROL_NONE;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/ctrl.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/ctrl.c
+index b0ece71aefdee..ce774579c89d1 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/ctrl.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/ctrl.c
+@@ -57,7 +57,7 @@ nvkm_control_mthd_pstate_info(struct nvkm_control *ctrl, void *data, u32 size)
+ 		args->v0.count = 0;
+ 		args->v0.ustate_ac = NVIF_CONTROL_PSTATE_INFO_V0_USTATE_DISABLE;
+ 		args->v0.ustate_dc = NVIF_CONTROL_PSTATE_INFO_V0_USTATE_DISABLE;
+-		args->v0.pwrsrc = -ENOSYS;
++		args->v0.pwrsrc = -ENODEV;
+ 		args->v0.pstate = NVIF_CONTROL_PSTATE_INFO_V0_PSTATE_UNKNOWN;
+ 	}
+ 
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index fa57986c2309c..28de889aa5164 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -298,6 +298,22 @@ int amd_iommu_get_num_iommus(void)
+ 	return amd_iommus_present;
+ }
+ 
++#ifdef CONFIG_IRQ_REMAP
++static bool check_feature_on_all_iommus(u64 mask)
++{
++	bool ret = false;
++	struct amd_iommu *iommu;
++
++	for_each_iommu(iommu) {
++		ret = iommu_feature(iommu, mask);
++		if (!ret)
++			return false;
++	}
++
++	return true;
++}
++#endif
++
+ /*
+  * For IVHD type 0x11/0x40, EFR is also available via IVHD.
+  * Default to IVHD EFR since it is available sooner
+@@ -854,13 +870,6 @@ static int iommu_init_ga(struct amd_iommu *iommu)
+ 	int ret = 0;
+ 
+ #ifdef CONFIG_IRQ_REMAP
+-	/* Note: We have already checked GASup from IVRS table.
+-	 *       Now, we need to make sure that GAMSup is set.
+-	 */
+-	if (AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) &&
+-	    !iommu_feature(iommu, FEATURE_GAM_VAPIC))
+-		amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY_GA;
+-
+ 	ret = iommu_init_ga_log(iommu);
+ #endif /* CONFIG_IRQ_REMAP */
+ 
+@@ -2396,6 +2405,14 @@ static void early_enable_iommus(void)
+ 	}
+ 
+ #ifdef CONFIG_IRQ_REMAP
++	/*
++	 * Note: We have already checked GASup from IVRS table.
++	 *       Now, we need to make sure that GAMSup is set.
++	 */
++	if (AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) &&
++	    !check_feature_on_all_iommus(FEATURE_GAM_VAPIC))
++		amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY_GA;
++
+ 	if (AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir))
+ 		amd_iommu_irq_ops.capability |= (1 << IRQ_POSTING_CAP);
+ #endif
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
+index 37edd663603f6..ebac53a73bd10 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi.c
+@@ -5723,6 +5723,12 @@ static void gaudi_handle_eqe(struct hl_device *hdev,
+ 	u8 cause;
+ 	bool reset_required;
+ 
++	if (event_type >= GAUDI_EVENT_SIZE) {
++		dev_err(hdev->dev, "Event type %u exceeds maximum of %u",
++				event_type, GAUDI_EVENT_SIZE - 1);
++		return;
++	}
++
+ 	gaudi->events_stat[event_type]++;
+ 	gaudi->events_stat_aggregate[event_type]++;
+ 
+diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c
+index 5b5d6275c2495..c8023b4428c5c 100644
+--- a/drivers/misc/habanalabs/goya/goya.c
++++ b/drivers/misc/habanalabs/goya/goya.c
+@@ -4623,6 +4623,12 @@ void goya_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_entry)
+ 				>> EQ_CTL_EVENT_TYPE_SHIFT);
+ 	struct goya_device *goya = hdev->asic_specific;
+ 
++	if (event_type >= GOYA_ASYNC_EVENT_ID_SIZE) {
++		dev_err(hdev->dev, "Event type %u exceeds maximum of %u",
++				event_type, GOYA_ASYNC_EVENT_ID_SIZE - 1);
++		return;
++	}
++
+ 	goya->events_stat[event_type]++;
+ 	goya->events_stat_aggregate[event_type]++;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index 4cba110f6ef8c..0e699330ae77c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -376,48 +376,6 @@ static void mlx5_devlink_set_params_init_values(struct devlink *devlink)
+ #endif
+ }
+ 
+-#define MLX5_TRAP_DROP(_id, _group_id)					\
+-	DEVLINK_TRAP_GENERIC(DROP, DROP, _id,				\
+-			     DEVLINK_TRAP_GROUP_GENERIC_ID_##_group_id, \
+-			     DEVLINK_TRAP_METADATA_TYPE_F_IN_PORT)
+-
+-static const struct devlink_trap mlx5_traps_arr[] = {
+-	MLX5_TRAP_DROP(INGRESS_VLAN_FILTER, L2_DROPS),
+-};
+-
+-static const struct devlink_trap_group mlx5_trap_groups_arr[] = {
+-	DEVLINK_TRAP_GROUP_GENERIC(L2_DROPS, 0),
+-};
+-
+-static int mlx5_devlink_traps_register(struct devlink *devlink)
+-{
+-	struct mlx5_core_dev *core_dev = devlink_priv(devlink);
+-	int err;
+-
+-	err = devlink_trap_groups_register(devlink, mlx5_trap_groups_arr,
+-					   ARRAY_SIZE(mlx5_trap_groups_arr));
+-	if (err)
+-		return err;
+-
+-	err = devlink_traps_register(devlink, mlx5_traps_arr, ARRAY_SIZE(mlx5_traps_arr),
+-				     &core_dev->priv);
+-	if (err)
+-		goto err_trap_group;
+-	return 0;
+-
+-err_trap_group:
+-	devlink_trap_groups_unregister(devlink, mlx5_trap_groups_arr,
+-				       ARRAY_SIZE(mlx5_trap_groups_arr));
+-	return err;
+-}
+-
+-static void mlx5_devlink_traps_unregister(struct devlink *devlink)
+-{
+-	devlink_traps_unregister(devlink, mlx5_traps_arr, ARRAY_SIZE(mlx5_traps_arr));
+-	devlink_trap_groups_unregister(devlink, mlx5_trap_groups_arr,
+-				       ARRAY_SIZE(mlx5_trap_groups_arr));
+-}
+-
+ int mlx5_devlink_register(struct devlink *devlink, struct device *dev)
+ {
+ 	int err;
+@@ -432,16 +390,8 @@ int mlx5_devlink_register(struct devlink *devlink, struct device *dev)
+ 		goto params_reg_err;
+ 	mlx5_devlink_set_params_init_values(devlink);
+ 	devlink_params_publish(devlink);
+-
+-	err = mlx5_devlink_traps_register(devlink);
+-	if (err)
+-		goto traps_reg_err;
+-
+ 	return 0;
+ 
+-traps_reg_err:
+-	devlink_params_unregister(devlink, mlx5_devlink_params,
+-				  ARRAY_SIZE(mlx5_devlink_params));
+ params_reg_err:
+ 	devlink_unregister(devlink);
+ 	return err;
+@@ -449,7 +399,6 @@ params_reg_err:
+ 
+ void mlx5_devlink_unregister(struct devlink *devlink)
+ {
+-	mlx5_devlink_traps_unregister(devlink);
+ 	devlink_params_unpublish(devlink);
+ 	devlink_params_unregister(devlink, mlx5_devlink_params,
+ 				  ARRAY_SIZE(mlx5_devlink_params));
+diff --git a/drivers/parisc/dino.c b/drivers/parisc/dino.c
+index 889d7ce282ebb..952a92504df69 100644
+--- a/drivers/parisc/dino.c
++++ b/drivers/parisc/dino.c
+@@ -156,15 +156,6 @@ static inline struct dino_device *DINO_DEV(struct pci_hba_data *hba)
+ 	return container_of(hba, struct dino_device, hba);
+ }
+ 
+-/* Check if PCI device is behind a Card-mode Dino. */
+-static int pci_dev_is_behind_card_dino(struct pci_dev *dev)
+-{
+-	struct dino_device *dino_dev;
+-
+-	dino_dev = DINO_DEV(parisc_walk_tree(dev->bus->bridge));
+-	return is_card_dino(&dino_dev->hba.dev->id);
+-}
+-
+ /*
+  * Dino Configuration Space Accessor Functions
+  */
+@@ -447,6 +438,15 @@ static void quirk_cirrus_cardbus(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_CIRRUS, PCI_DEVICE_ID_CIRRUS_6832, quirk_cirrus_cardbus );
+ 
+ #ifdef CONFIG_TULIP
++/* Check if PCI device is behind a Card-mode Dino. */
++static int pci_dev_is_behind_card_dino(struct pci_dev *dev)
++{
++	struct dino_device *dino_dev;
++
++	dino_dev = DINO_DEV(parisc_walk_tree(dev->bus->bridge));
++	return is_card_dino(&dino_dev->hba.dev->id);
++}
++
+ static void pci_fixup_tulip(struct pci_dev *dev)
+ {
+ 	if (!pci_dev_is_behind_card_dino(dev))
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 88e19ad54f646..f175cff39b460 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -225,6 +225,8 @@
+ 
+ #define MSI_IRQ_NUM			32
+ 
++#define CFG_RD_CRS_VAL			0xffff0001
++
+ struct advk_pcie {
+ 	struct platform_device *pdev;
+ 	void __iomem *base;
+@@ -587,7 +589,7 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
+ }
+ 
+-static int advk_pcie_check_pio_status(struct advk_pcie *pcie, u32 *val)
++static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val)
+ {
+ 	struct device *dev = &pcie->pdev->dev;
+ 	u32 reg;
+@@ -629,9 +631,30 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, u32 *val)
+ 		strcomp_status = "UR";
+ 		break;
+ 	case PIO_COMPLETION_STATUS_CRS:
++		if (allow_crs && val) {
++			/* PCIe r4.0, sec 2.3.2, says:
++			 * If CRS Software Visibility is enabled:
++			 * For a Configuration Read Request that includes both
++			 * bytes of the Vendor ID field of a device Function's
++			 * Configuration Space Header, the Root Complex must
++			 * complete the Request to the host by returning a
++			 * read-data value of 0001h for the Vendor ID field and
++			 * all '1's for any additional bytes included in the
++			 * request.
++			 *
++			 * So CRS in this case is not an error status.
++			 */
++			*val = CFG_RD_CRS_VAL;
++			strcomp_status = NULL;
++			break;
++		}
+ 		/* PCIe r4.0, sec 2.3.2, says:
+ 		 * If CRS Software Visibility is not enabled, the Root Complex
+ 		 * must re-issue the Configuration Request as a new Request.
++		 * If CRS Software Visibility is enabled: For a Configuration
++		 * Write Request or for any other Configuration Read Request,
++		 * the Root Complex must re-issue the Configuration Request as
++		 * a new Request.
+ 		 * A Root Complex implementation may choose to limit the number
+ 		 * of Configuration Request/CRS Completion Status loops before
+ 		 * determining that something is wrong with the target of the
+@@ -700,6 +723,7 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 	case PCI_EXP_RTCTL: {
+ 		u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
+ 		*value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE;
++		*value |= PCI_EXP_RTCAP_CRSVIS << 16;
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	}
+ 
+@@ -781,6 +805,7 @@ static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
+ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ {
+ 	struct pci_bridge_emul *bridge = &pcie->bridge;
++	int ret;
+ 
+ 	bridge->conf.vendor =
+ 		cpu_to_le16(advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff);
+@@ -804,7 +829,15 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ 	bridge->data = pcie;
+ 	bridge->ops = &advk_pci_bridge_emul_ops;
+ 
+-	return pci_bridge_emul_init(bridge, 0);
++	/* PCIe config space can be initialized after pci_bridge_emul_init() */
++	ret = pci_bridge_emul_init(bridge, 0);
++	if (ret < 0)
++		return ret;
++
++	/* Indicates supports for Completion Retry Status */
++	bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
++
++	return 0;
+ }
+ 
+ static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
+@@ -856,6 +889,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 			     int where, int size, u32 *val)
+ {
+ 	struct advk_pcie *pcie = bus->sysdata;
++	bool allow_crs;
+ 	u32 reg;
+ 	int ret;
+ 
+@@ -868,7 +902,24 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 		return pci_bridge_emul_conf_read(&pcie->bridge, where,
+ 						 size, val);
+ 
++	/*
++	 * Completion Retry Status is possible to return only when reading all
++	 * 4 bytes from PCI_VENDOR_ID and PCI_DEVICE_ID registers at once and
++	 * CRSSVE flag on Root Bridge is enabled.
++	 */
++	allow_crs = (where == PCI_VENDOR_ID) && (size == 4) &&
++		    (le16_to_cpu(pcie->bridge.pcie_conf.rootctl) &
++		     PCI_EXP_RTCTL_CRSSVE);
++
+ 	if (advk_pcie_pio_is_running(pcie)) {
++		/*
++		 * If it is possible return Completion Retry Status so caller
++		 * tries to issue the request again instead of failing.
++		 */
++		if (allow_crs) {
++			*val = CFG_RD_CRS_VAL;
++			return PCIBIOS_SUCCESSFUL;
++		}
+ 		*val = 0xffffffff;
+ 		return PCIBIOS_SET_FAILED;
+ 	}
+@@ -896,12 +947,20 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 
+ 	ret = advk_pcie_wait_pio(pcie);
+ 	if (ret < 0) {
++		/*
++		 * If it is possible return Completion Retry Status so caller
++		 * tries to issue the request again instead of failing.
++		 */
++		if (allow_crs) {
++			*val = CFG_RD_CRS_VAL;
++			return PCIBIOS_SUCCESSFUL;
++		}
+ 		*val = 0xffffffff;
+ 		return PCIBIOS_SET_FAILED;
+ 	}
+ 
+ 	/* Check PIO status and get the read result */
+-	ret = advk_pcie_check_pio_status(pcie, val);
++	ret = advk_pcie_check_pio_status(pcie, allow_crs, val);
+ 	if (ret < 0) {
+ 		*val = 0xffffffff;
+ 		return PCIBIOS_SET_FAILED;
+@@ -970,7 +1029,7 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
+ 	if (ret < 0)
+ 		return PCIBIOS_SET_FAILED;
+ 
+-	ret = advk_pcie_check_pio_status(pcie, NULL);
++	ret = advk_pcie_check_pio_status(pcie, false, NULL);
+ 	if (ret < 0)
+ 		return PCIBIOS_SET_FAILED;
+ 
+diff --git a/drivers/pci/pci-bridge-emul.h b/drivers/pci/pci-bridge-emul.h
+index b31883022a8e6..49bbd37ee318a 100644
+--- a/drivers/pci/pci-bridge-emul.h
++++ b/drivers/pci/pci-bridge-emul.h
+@@ -54,7 +54,7 @@ struct pci_bridge_emul_pcie_conf {
+ 	__le16 slotctl;
+ 	__le16 slotsta;
+ 	__le16 rootctl;
+-	__le16 rsvd;
++	__le16 rootcap;
+ 	__le32 rootsta;
+ 	__le32 devcap2;
+ 	__le16 devctl2;
+diff --git a/drivers/platform/chrome/Makefile b/drivers/platform/chrome/Makefile
+index 41baccba033f7..f901d2e43166c 100644
+--- a/drivers/platform/chrome/Makefile
++++ b/drivers/platform/chrome/Makefile
+@@ -20,7 +20,7 @@ obj-$(CONFIG_CROS_EC_CHARDEV)		+= cros_ec_chardev.o
+ obj-$(CONFIG_CROS_EC_LIGHTBAR)		+= cros_ec_lightbar.o
+ obj-$(CONFIG_CROS_EC_VBC)		+= cros_ec_vbc.o
+ obj-$(CONFIG_CROS_EC_DEBUGFS)		+= cros_ec_debugfs.o
+-cros-ec-sensorhub-objs			:= cros_ec_sensorhub.o cros_ec_sensorhub_ring.o
++cros-ec-sensorhub-objs			:= cros_ec_sensorhub.o cros_ec_sensorhub_ring.o cros_ec_trace.o
+ obj-$(CONFIG_CROS_EC_SENSORHUB)		+= cros-ec-sensorhub.o
+ obj-$(CONFIG_CROS_EC_SYSFS)		+= cros_ec_sysfs.o
+ obj-$(CONFIG_CROS_USBPD_LOGGER)		+= cros_usbpd_logger.o
+diff --git a/drivers/platform/chrome/cros_ec_sensorhub_ring.c b/drivers/platform/chrome/cros_ec_sensorhub_ring.c
+index 8921f24e83bac..98e37080f7609 100644
+--- a/drivers/platform/chrome/cros_ec_sensorhub_ring.c
++++ b/drivers/platform/chrome/cros_ec_sensorhub_ring.c
+@@ -17,6 +17,8 @@
+ #include <linux/sort.h>
+ #include <linux/slab.h>
+ 
++#include "cros_ec_trace.h"
++
+ /* Precision of fixed point for the m values from the filter */
+ #define M_PRECISION BIT(23)
+ 
+@@ -291,6 +293,7 @@ cros_ec_sensor_ring_ts_filter_update(struct cros_ec_sensors_ts_filter_state
+ 		state->median_m = 0;
+ 		state->median_error = 0;
+ 	}
++	trace_cros_ec_sensorhub_filter(state, dx, dy);
+ }
+ 
+ /**
+@@ -427,6 +430,11 @@ cros_ec_sensor_ring_process_event(struct cros_ec_sensorhub *sensorhub,
+ 			if (new_timestamp - *current_timestamp > 0)
+ 				*current_timestamp = new_timestamp;
+ 		}
++		trace_cros_ec_sensorhub_timestamp(in->timestamp,
++						  fifo_info->timestamp,
++						  fifo_timestamp,
++						  *current_timestamp,
++						  now);
+ 	}
+ 
+ 	if (in->flags & MOTIONSENSE_SENSOR_FLAG_ODR) {
+@@ -460,6 +468,12 @@ cros_ec_sensor_ring_process_event(struct cros_ec_sensorhub *sensorhub,
+ 
+ 	/* Regular sample */
+ 	out->sensor_id = in->sensor_num;
++	trace_cros_ec_sensorhub_data(in->sensor_num,
++				     fifo_info->timestamp,
++				     fifo_timestamp,
++				     *current_timestamp,
++				     now);
++
+ 	if (*current_timestamp - now > 0) {
+ 		/*
+ 		 * This fix is needed to overcome the timestamp filter putting
+diff --git a/drivers/platform/chrome/cros_ec_trace.h b/drivers/platform/chrome/cros_ec_trace.h
+index f744b21bc655f..7e7cfc98657a4 100644
+--- a/drivers/platform/chrome/cros_ec_trace.h
++++ b/drivers/platform/chrome/cros_ec_trace.h
+@@ -15,6 +15,7 @@
+ #include <linux/types.h>
+ #include <linux/platform_data/cros_ec_commands.h>
+ #include <linux/platform_data/cros_ec_proto.h>
++#include <linux/platform_data/cros_ec_sensorhub.h>
+ 
+ #include <linux/tracepoint.h>
+ 
+@@ -70,6 +71,99 @@ TRACE_EVENT(cros_ec_request_done,
+ 		  __entry->retval)
+ );
+ 
++TRACE_EVENT(cros_ec_sensorhub_timestamp,
++	    TP_PROTO(u32 ec_sample_timestamp, u32 ec_fifo_timestamp, s64 fifo_timestamp,
++		     s64 current_timestamp, s64 current_time),
++	TP_ARGS(ec_sample_timestamp, ec_fifo_timestamp, fifo_timestamp, current_timestamp,
++		current_time),
++	TP_STRUCT__entry(
++		__field(u32, ec_sample_timestamp)
++		__field(u32, ec_fifo_timestamp)
++		__field(s64, fifo_timestamp)
++		__field(s64, current_timestamp)
++		__field(s64, current_time)
++		__field(s64, delta)
++	),
++	TP_fast_assign(
++		__entry->ec_sample_timestamp = ec_sample_timestamp;
++		__entry->ec_fifo_timestamp = ec_fifo_timestamp;
++		__entry->fifo_timestamp = fifo_timestamp;
++		__entry->current_timestamp = current_timestamp;
++		__entry->current_time = current_time;
++		__entry->delta = current_timestamp - current_time;
++	),
++	TP_printk("ec_ts: %9u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
++		  __entry->ec_sample_timestamp,
++		__entry->ec_fifo_timestamp,
++		__entry->fifo_timestamp,
++		__entry->current_timestamp,
++		__entry->current_time,
++		__entry->delta
++	)
++);
++
++TRACE_EVENT(cros_ec_sensorhub_data,
++	    TP_PROTO(u32 ec_sensor_num, u32 ec_fifo_timestamp, s64 fifo_timestamp,
++		     s64 current_timestamp, s64 current_time),
++	TP_ARGS(ec_sensor_num, ec_fifo_timestamp, fifo_timestamp, current_timestamp, current_time),
++	TP_STRUCT__entry(
++		__field(u32, ec_sensor_num)
++		__field(u32, ec_fifo_timestamp)
++		__field(s64, fifo_timestamp)
++		__field(s64, current_timestamp)
++		__field(s64, current_time)
++		__field(s64, delta)
++	),
++	TP_fast_assign(
++		__entry->ec_sensor_num = ec_sensor_num;
++		__entry->ec_fifo_timestamp = ec_fifo_timestamp;
++		__entry->fifo_timestamp = fifo_timestamp;
++		__entry->current_timestamp = current_timestamp;
++		__entry->current_time = current_time;
++		__entry->delta = current_timestamp - current_time;
++	),
++	TP_printk("ec_num: %4u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
++		  __entry->ec_sensor_num,
++		__entry->ec_fifo_timestamp,
++		__entry->fifo_timestamp,
++		__entry->current_timestamp,
++		__entry->current_time,
++		__entry->delta
++	)
++);
++
++TRACE_EVENT(cros_ec_sensorhub_filter,
++	    TP_PROTO(struct cros_ec_sensors_ts_filter_state *state, s64 dx, s64 dy),
++	TP_ARGS(state, dx, dy),
++	TP_STRUCT__entry(
++		__field(s64, dx)
++		__field(s64, dy)
++		__field(s64, median_m)
++		__field(s64, median_error)
++		__field(s64, history_len)
++		__field(s64, x)
++		__field(s64, y)
++	),
++	TP_fast_assign(
++		__entry->dx = dx;
++		__entry->dy = dy;
++		__entry->median_m = state->median_m;
++		__entry->median_error = state->median_error;
++		__entry->history_len = state->history_len;
++		__entry->x = state->x_offset;
++		__entry->y = state->y_offset;
++	),
++	TP_printk("dx: %12lld. dy: %12lld median_m: %12lld median_error: %12lld len: %lld x: %12lld y: %12lld",
++		  __entry->dx,
++		__entry->dy,
++		__entry->median_m,
++		__entry->median_error,
++		__entry->history_len,
++		__entry->x,
++		__entry->y
++	)
++);
++
+ 
+ #endif /* _CROS_EC_TRACE_H_ */
+ 
+diff --git a/drivers/pwm/pwm-img.c b/drivers/pwm/pwm-img.c
+index 22c002e685b34..37f9b688661d4 100644
+--- a/drivers/pwm/pwm-img.c
++++ b/drivers/pwm/pwm-img.c
+@@ -329,23 +329,7 @@ err_pm_disable:
+ static int img_pwm_remove(struct platform_device *pdev)
+ {
+ 	struct img_pwm_chip *pwm_chip = platform_get_drvdata(pdev);
+-	u32 val;
+-	unsigned int i;
+-	int ret;
+-
+-	ret = pm_runtime_get_sync(&pdev->dev);
+-	if (ret < 0) {
+-		pm_runtime_put(&pdev->dev);
+-		return ret;
+-	}
+-
+-	for (i = 0; i < pwm_chip->chip.npwm; i++) {
+-		val = img_pwm_readl(pwm_chip, PWM_CTRL_CFG);
+-		val &= ~BIT(i);
+-		img_pwm_writel(pwm_chip, PWM_CTRL_CFG, val);
+-	}
+ 
+-	pm_runtime_put(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	if (!pm_runtime_status_suspended(&pdev->dev))
+ 		img_pwm_runtime_suspend(&pdev->dev);
+diff --git a/drivers/pwm/pwm-lpc32xx.c b/drivers/pwm/pwm-lpc32xx.c
+index 710d9a207d2b0..522f862eca526 100644
+--- a/drivers/pwm/pwm-lpc32xx.c
++++ b/drivers/pwm/pwm-lpc32xx.c
+@@ -120,17 +120,17 @@ static int lpc32xx_pwm_probe(struct platform_device *pdev)
+ 	lpc32xx->chip.npwm = 1;
+ 	lpc32xx->chip.base = -1;
+ 
++	/* If PWM is disabled, configure the output to the default value */
++	val = readl(lpc32xx->base + (lpc32xx->chip.pwms[0].hwpwm << 2));
++	val &= ~PWM_PIN_LEVEL;
++	writel(val, lpc32xx->base + (lpc32xx->chip.pwms[0].hwpwm << 2));
++
+ 	ret = pwmchip_add(&lpc32xx->chip);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "failed to add PWM chip, error %d\n", ret);
+ 		return ret;
+ 	}
+ 
+-	/* When PWM is disable, configure the output to the default value */
+-	val = readl(lpc32xx->base + (lpc32xx->chip.pwms[0].hwpwm << 2));
+-	val &= ~PWM_PIN_LEVEL;
+-	writel(val, lpc32xx->base + (lpc32xx->chip.pwms[0].hwpwm << 2));
+-
+ 	platform_set_drvdata(pdev, lpc32xx);
+ 
+ 	return 0;
+diff --git a/drivers/pwm/pwm-mxs.c b/drivers/pwm/pwm-mxs.c
+index 7ce616923c52a..41bdbe71ae46b 100644
+--- a/drivers/pwm/pwm-mxs.c
++++ b/drivers/pwm/pwm-mxs.c
+@@ -148,6 +148,11 @@ static int mxs_pwm_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	/* FIXME: Only do this if the PWM isn't already running */
++	ret = stmp_reset_block(mxs->base);
++	if (ret)
++		return dev_err_probe(&pdev->dev, ret, "failed to reset PWM\n");
++
+ 	ret = pwmchip_add(&mxs->chip);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "failed to add pwm chip %d\n", ret);
+@@ -156,15 +161,7 @@ static int mxs_pwm_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, mxs);
+ 
+-	ret = stmp_reset_block(mxs->base);
+-	if (ret)
+-		goto pwm_remove;
+-
+ 	return 0;
+-
+-pwm_remove:
+-	pwmchip_remove(&mxs->chip);
+-	return ret;
+ }
+ 
+ static int mxs_pwm_remove(struct platform_device *pdev)
+diff --git a/drivers/pwm/pwm-rockchip.c b/drivers/pwm/pwm-rockchip.c
+index 3b8da7b0091b1..1f3079562b38d 100644
+--- a/drivers/pwm/pwm-rockchip.c
++++ b/drivers/pwm/pwm-rockchip.c
+@@ -382,20 +382,6 @@ static int rockchip_pwm_remove(struct platform_device *pdev)
+ {
+ 	struct rockchip_pwm_chip *pc = platform_get_drvdata(pdev);
+ 
+-	/*
+-	 * Disable the PWM clk before unpreparing it if the PWM device is still
+-	 * running. This should only happen when the last PWM user left it
+-	 * enabled, or when nobody requested a PWM that was previously enabled
+-	 * by the bootloader.
+-	 *
+-	 * FIXME: Maybe the core should disable all PWM devices in
+-	 * pwmchip_remove(). In this case we'd only have to call
+-	 * clk_unprepare() after pwmchip_remove().
+-	 *
+-	 */
+-	if (pwm_is_enabled(pc->chip.pwms))
+-		clk_disable(pc->clk);
+-
+ 	clk_unprepare(pc->pclk);
+ 	clk_unprepare(pc->clk);
+ 
+diff --git a/drivers/pwm/pwm-stm32-lp.c b/drivers/pwm/pwm-stm32-lp.c
+index 134c14621ee01..945a8b2b85648 100644
+--- a/drivers/pwm/pwm-stm32-lp.c
++++ b/drivers/pwm/pwm-stm32-lp.c
+@@ -225,8 +225,6 @@ static int stm32_pwm_lp_remove(struct platform_device *pdev)
+ {
+ 	struct stm32_pwm_lp *priv = platform_get_drvdata(pdev);
+ 
+-	pwm_disable(&priv->chip.pwms[0]);
+-
+ 	return pwmchip_remove(&priv->chip);
+ }
+ 
+diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
+index 33e4ecd6c6659..54cf5ec8f4019 100644
+--- a/drivers/rtc/Kconfig
++++ b/drivers/rtc/Kconfig
+@@ -624,6 +624,7 @@ config RTC_DRV_FM3130
+ 
+ config RTC_DRV_RX8010
+ 	tristate "Epson RX8010SJ"
++	select REGMAP_I2C
+ 	help
+ 	  If you say yes here you get support for the Epson RX8010SJ RTC
+ 	  chip.
+diff --git a/drivers/staging/rtl8192u/r8192U_core.c b/drivers/staging/rtl8192u/r8192U_core.c
+index 03d31e52b3999..4523e825a61a8 100644
+--- a/drivers/staging/rtl8192u/r8192U_core.c
++++ b/drivers/staging/rtl8192u/r8192U_core.c
+@@ -4271,7 +4271,7 @@ static void TranslateRxSignalStuff819xUsb(struct sk_buff *skb,
+ 	bpacket_match_bssid = (type != IEEE80211_FTYPE_CTL) &&
+ 			       (ether_addr_equal(priv->ieee80211->current_network.bssid,  (fc & IEEE80211_FCTL_TODS) ? hdr->addr1 : (fc & IEEE80211_FCTL_FROMDS) ? hdr->addr2 : hdr->addr3))
+ 			       && (!pstats->bHwError) && (!pstats->bCRC) && (!pstats->bICV);
+-	bpacket_toself =  bpacket_match_bssid &
++	bpacket_toself =  bpacket_match_bssid &&
+ 			  (ether_addr_equal(praddr, priv->ieee80211->dev->dev_addr));
+ 
+ 	if (WLAN_FC_GET_FRAMETYPE(fc) == IEEE80211_STYPE_BEACON)
+diff --git a/drivers/thermal/samsung/exynos_tmu.c b/drivers/thermal/samsung/exynos_tmu.c
+index e9a90bc23b11d..f4ab4c5b4b626 100644
+--- a/drivers/thermal/samsung/exynos_tmu.c
++++ b/drivers/thermal/samsung/exynos_tmu.c
+@@ -1073,6 +1073,7 @@ static int exynos_tmu_probe(struct platform_device *pdev)
+ 		data->sclk = devm_clk_get(&pdev->dev, "tmu_sclk");
+ 		if (IS_ERR(data->sclk)) {
+ 			dev_err(&pdev->dev, "Failed to get sclk\n");
++			ret = PTR_ERR(data->sclk);
+ 			goto err_clk;
+ 		} else {
+ 			ret = clk_prepare_enable(data->sclk);
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 06757b1d4aecd..cea40ef090b77 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -2060,7 +2060,7 @@ static void restore_cur(struct vc_data *vc)
+ 
+ enum { ESnormal, ESesc, ESsquare, ESgetpars, ESfunckey,
+ 	EShash, ESsetG0, ESsetG1, ESpercent, EScsiignore, ESnonstd,
+-	ESpalette, ESosc };
++	ESpalette, ESosc, ESapc, ESpm, ESdcs };
+ 
+ /* console_lock is held (except via vc_init()) */
+ static void reset_terminal(struct vc_data *vc, int do_clear)
+@@ -2134,20 +2134,28 @@ static void vc_setGx(struct vc_data *vc, unsigned int which, int c)
+ 		vc->vc_translate = set_translate(*charset, vc);
+ }
+ 
++/* is this state an ANSI control string? */
++static bool ansi_control_string(unsigned int state)
++{
++	if (state == ESosc || state == ESapc || state == ESpm || state == ESdcs)
++		return true;
++	return false;
++}
++
+ /* console_lock is held */
+ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ {
+ 	/*
+ 	 *  Control characters can be used in the _middle_
+-	 *  of an escape sequence.
++	 *  of an escape sequence, aside from ANSI control strings.
+ 	 */
+-	if (vc->vc_state == ESosc && c>=8 && c<=13) /* ... except for OSC */
++	if (ansi_control_string(vc->vc_state) && c >= 8 && c <= 13)
+ 		return;
+ 	switch (c) {
+ 	case 0:
+ 		return;
+ 	case 7:
+-		if (vc->vc_state == ESosc)
++		if (ansi_control_string(vc->vc_state))
+ 			vc->vc_state = ESnormal;
+ 		else if (vc->vc_bell_duration)
+ 			kd_mksound(vc->vc_bell_pitch, vc->vc_bell_duration);
+@@ -2208,6 +2216,12 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ 		case ']':
+ 			vc->vc_state = ESnonstd;
+ 			return;
++		case '_':
++			vc->vc_state = ESapc;
++			return;
++		case '^':
++			vc->vc_state = ESpm;
++			return;
+ 		case '%':
+ 			vc->vc_state = ESpercent;
+ 			return;
+@@ -2225,6 +2239,9 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ 			if (vc->state.x < VC_TABSTOPS_COUNT)
+ 				set_bit(vc->state.x, vc->vc_tab_stop);
+ 			return;
++		case 'P':
++			vc->vc_state = ESdcs;
++			return;
+ 		case 'Z':
+ 			respond_ID(tty);
+ 			return;
+@@ -2521,8 +2538,14 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ 		vc_setGx(vc, 1, c);
+ 		vc->vc_state = ESnormal;
+ 		return;
++	case ESapc:
++		return;
+ 	case ESosc:
+ 		return;
++	case ESpm:
++		return;
++	case ESdcs:
++		return;
+ 	default:
+ 		vc->vc_state = ESnormal;
+ 	}
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index b4fcc48f255b3..509811aabb3fd 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -568,6 +568,8 @@ static int btrfs_free_stale_devices(const char *path,
+ 	struct btrfs_device *device, *tmp_device;
+ 	int ret = 0;
+ 
++	lockdep_assert_held(&uuid_mutex);
++
+ 	if (path)
+ 		ret = -ENOENT;
+ 
+@@ -999,11 +1001,12 @@ static struct btrfs_fs_devices *clone_fs_devices(struct btrfs_fs_devices *orig)
+ 	struct btrfs_device *orig_dev;
+ 	int ret = 0;
+ 
++	lockdep_assert_held(&uuid_mutex);
++
+ 	fs_devices = alloc_fs_devices(orig->fsid, NULL);
+ 	if (IS_ERR(fs_devices))
+ 		return fs_devices;
+ 
+-	mutex_lock(&orig->device_list_mutex);
+ 	fs_devices->total_devices = orig->total_devices;
+ 
+ 	list_for_each_entry(orig_dev, &orig->devices, dev_list) {
+@@ -1035,10 +1038,8 @@ static struct btrfs_fs_devices *clone_fs_devices(struct btrfs_fs_devices *orig)
+ 		device->fs_devices = fs_devices;
+ 		fs_devices->num_devices++;
+ 	}
+-	mutex_unlock(&orig->device_list_mutex);
+ 	return fs_devices;
+ error:
+-	mutex_unlock(&orig->device_list_mutex);
+ 	free_fs_devices(fs_devices);
+ 	return ERR_PTR(ret);
+ }
+@@ -1855,15 +1856,17 @@ out:
+  * Function to update ctime/mtime for a given device path.
+  * Mainly used for ctime/mtime based probe like libblkid.
+  */
+-static void update_dev_time(const char *path_name)
++static void update_dev_time(struct block_device *bdev)
+ {
+-	struct file *filp;
++	struct inode *inode = bdev->bd_inode;
++	struct timespec64 now;
+ 
+-	filp = filp_open(path_name, O_RDWR, 0);
+-	if (IS_ERR(filp))
++	/* Shouldn't happen but just in case. */
++	if (!inode)
+ 		return;
+-	file_update_time(filp);
+-	filp_close(filp, NULL);
++
++	now = current_time(inode);
++	generic_update_time(inode, &now, S_MTIME | S_CTIME);
+ }
+ 
+ static int btrfs_rm_dev_item(struct btrfs_device *device)
+@@ -2038,7 +2041,7 @@ void btrfs_scratch_superblocks(struct btrfs_fs_info *fs_info,
+ 	btrfs_kobject_uevent(bdev, KOBJ_CHANGE);
+ 
+ 	/* Update ctime/mtime for device path for libblkid */
+-	update_dev_time(device_path);
++	update_dev_time(bdev);
+ }
+ 
+ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+@@ -2681,7 +2684,7 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ 	btrfs_forget_devices(device_path);
+ 
+ 	/* Update ctime/mtime for blkid or udev */
+-	update_dev_time(device_path);
++	update_dev_time(bdev);
+ 
+ 	return ret;
+ 
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 678dac8365ed3..48ea95b81df84 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1868,6 +1868,8 @@ static u64 __mark_caps_flushing(struct inode *inode,
+  * try to invalidate mapping pages without blocking.
+  */
+ static int try_nonblocking_invalidate(struct inode *inode)
++	__releases(ci->i_ceph_lock)
++	__acquires(ci->i_ceph_lock)
+ {
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	u32 invalidating_gen = ci->i_rdcache_gen;
+@@ -3169,7 +3171,16 @@ void ceph_put_wrbuffer_cap_refs(struct ceph_inode_info *ci, int nr,
+ 				break;
+ 			}
+ 		}
+-		BUG_ON(!found);
++
++		if (!found) {
++			/*
++			 * The capsnap should already be removed when removing
++			 * auth cap in the case of a forced unmount.
++			 */
++			WARN_ON_ONCE(ci->i_auth_cap);
++			goto unlock;
++		}
++
+ 		capsnap->dirty_pages -= nr;
+ 		if (capsnap->dirty_pages == 0) {
+ 			complete_capsnap = true;
+@@ -3191,6 +3202,7 @@ void ceph_put_wrbuffer_cap_refs(struct ceph_inode_info *ci, int nr,
+ 		     complete_capsnap ? " (complete capsnap)" : "");
+ 	}
+ 
++unlock:
+ 	spin_unlock(&ci->i_ceph_lock);
+ 
+ 	if (last) {
+@@ -3657,6 +3669,43 @@ out:
+ 		iput(inode);
+ }
+ 
++void __ceph_remove_capsnap(struct inode *inode, struct ceph_cap_snap *capsnap,
++			   bool *wake_ci, bool *wake_mdsc)
++{
++	struct ceph_inode_info *ci = ceph_inode(inode);
++	struct ceph_mds_client *mdsc = ceph_sb_to_client(inode->i_sb)->mdsc;
++	bool ret;
++
++	lockdep_assert_held(&ci->i_ceph_lock);
++
++	dout("removing capsnap %p, inode %p ci %p\n", capsnap, inode, ci);
++
++	list_del_init(&capsnap->ci_item);
++	ret = __detach_cap_flush_from_ci(ci, &capsnap->cap_flush);
++	if (wake_ci)
++		*wake_ci = ret;
++
++	spin_lock(&mdsc->cap_dirty_lock);
++	if (list_empty(&ci->i_cap_flush_list))
++		list_del_init(&ci->i_flushing_item);
++
++	ret = __detach_cap_flush_from_mdsc(mdsc, &capsnap->cap_flush);
++	if (wake_mdsc)
++		*wake_mdsc = ret;
++	spin_unlock(&mdsc->cap_dirty_lock);
++}
++
++void ceph_remove_capsnap(struct inode *inode, struct ceph_cap_snap *capsnap,
++			 bool *wake_ci, bool *wake_mdsc)
++{
++	struct ceph_inode_info *ci = ceph_inode(inode);
++
++	lockdep_assert_held(&ci->i_ceph_lock);
++
++	WARN_ON_ONCE(capsnap->dirty_pages || capsnap->writing);
++	__ceph_remove_capsnap(inode, capsnap, wake_ci, wake_mdsc);
++}
++
+ /*
+  * Handle FLUSHSNAP_ACK.  MDS has flushed snap data to disk and we can
+  * throw away our cap_snap.
+@@ -3694,23 +3743,10 @@ static void handle_cap_flushsnap_ack(struct inode *inode, u64 flush_tid,
+ 			     capsnap, capsnap->follows);
+ 		}
+ 	}
+-	if (flushed) {
+-		WARN_ON(capsnap->dirty_pages || capsnap->writing);
+-		dout(" removing %p cap_snap %p follows %lld\n",
+-		     inode, capsnap, follows);
+-		list_del(&capsnap->ci_item);
+-		wake_ci |= __detach_cap_flush_from_ci(ci, &capsnap->cap_flush);
+-
+-		spin_lock(&mdsc->cap_dirty_lock);
+-
+-		if (list_empty(&ci->i_cap_flush_list))
+-			list_del_init(&ci->i_flushing_item);
+-
+-		wake_mdsc |= __detach_cap_flush_from_mdsc(mdsc,
+-							  &capsnap->cap_flush);
+-		spin_unlock(&mdsc->cap_dirty_lock);
+-	}
++	if (flushed)
++		ceph_remove_capsnap(inode, capsnap, &wake_ci, &wake_mdsc);
+ 	spin_unlock(&ci->i_ceph_lock);
++
+ 	if (flushed) {
+ 		ceph_put_snap_context(capsnap->context);
+ 		ceph_put_cap_snap(capsnap);
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index a4d48370b2b32..f63c1a090139c 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -1797,8 +1797,7 @@ static void ceph_d_release(struct dentry *dentry)
+ 	dentry->d_fsdata = NULL;
+ 	spin_unlock(&dentry->d_lock);
+ 
+-	if (di->lease_session)
+-		ceph_put_mds_session(di->lease_session);
++	ceph_put_mds_session(di->lease_session);
+ 	kmem_cache_free(ceph_dentry_cachep, di);
+ }
+ 
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 3d2e3dd4ee01d..f1895f78ab452 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -1723,32 +1723,26 @@ retry_snap:
+ 		goto out;
+ 	}
+ 
+-	err = file_remove_privs(file);
+-	if (err)
++	down_read(&osdc->lock);
++	map_flags = osdc->osdmap->flags;
++	pool_flags = ceph_pg_pool_flags(osdc->osdmap, ci->i_layout.pool_id);
++	up_read(&osdc->lock);
++	if ((map_flags & CEPH_OSDMAP_FULL) ||
++	    (pool_flags & CEPH_POOL_FLAG_FULL)) {
++		err = -ENOSPC;
+ 		goto out;
++	}
+ 
+-	err = file_update_time(file);
++	err = file_remove_privs(file);
+ 	if (err)
+ 		goto out;
+ 
+-	inode_inc_iversion_raw(inode);
+-
+ 	if (ci->i_inline_version != CEPH_INLINE_NONE) {
+ 		err = ceph_uninline_data(file, NULL);
+ 		if (err < 0)
+ 			goto out;
+ 	}
+ 
+-	down_read(&osdc->lock);
+-	map_flags = osdc->osdmap->flags;
+-	pool_flags = ceph_pg_pool_flags(osdc->osdmap, ci->i_layout.pool_id);
+-	up_read(&osdc->lock);
+-	if ((map_flags & CEPH_OSDMAP_FULL) ||
+-	    (pool_flags & CEPH_POOL_FLAG_FULL)) {
+-		err = -ENOSPC;
+-		goto out;
+-	}
+-
+ 	dout("aio_write %p %llx.%llx %llu~%zd getting caps. i_size %llu\n",
+ 	     inode, ceph_vinop(inode), pos, count, i_size_read(inode));
+ 	if (fi->fmode & CEPH_FILE_MODE_LAZY)
+@@ -1761,6 +1755,12 @@ retry_snap:
+ 	if (err < 0)
+ 		goto out;
+ 
++	err = file_update_time(file);
++	if (err)
++		goto out_caps;
++
++	inode_inc_iversion_raw(inode);
++
+ 	dout("aio_write %p %llx.%llx %llu~%zd got cap refs on %s\n",
+ 	     inode, ceph_vinop(inode), pos, count, ceph_cap_string(got));
+ 
+@@ -1844,6 +1844,8 @@ retry_snap:
+ 	}
+ 
+ 	goto out_unlocked;
++out_caps:
++	ceph_put_cap_refs(ci, got);
+ out:
+ 	if (direct_lock)
+ 		ceph_end_io_direct(inode);
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 57cd78e942c08..63e781e4f7e44 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -1121,8 +1121,7 @@ static inline void update_dentry_lease(struct inode *dir, struct dentry *dentry,
+ 	__update_dentry_lease(dir, dentry, lease, session, from_time,
+ 			      &old_lease_session);
+ 	spin_unlock(&dentry->d_lock);
+-	if (old_lease_session)
+-		ceph_put_mds_session(old_lease_session);
++	ceph_put_mds_session(old_lease_session);
+ }
+ 
+ /*
+@@ -1167,8 +1166,7 @@ static void update_dentry_lease_careful(struct dentry *dentry,
+ 			      from_time, &old_lease_session);
+ out_unlock:
+ 	spin_unlock(&dentry->d_lock);
+-	if (old_lease_session)
+-		ceph_put_mds_session(old_lease_session);
++	ceph_put_mds_session(old_lease_session);
+ }
+ 
+ /*
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 816cea4975372..0f57b7d094578 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -661,6 +661,9 @@ struct ceph_mds_session *ceph_get_mds_session(struct ceph_mds_session *s)
+ 
+ void ceph_put_mds_session(struct ceph_mds_session *s)
+ {
++	if (IS_ERR_OR_NULL(s))
++		return;
++
+ 	dout("mdsc put_session %p %d -> %d\n", s,
+ 	     refcount_read(&s->s_ref), refcount_read(&s->s_ref)-1);
+ 	if (refcount_dec_and_test(&s->s_ref)) {
+@@ -1435,8 +1438,7 @@ static void __open_export_target_sessions(struct ceph_mds_client *mdsc,
+ 
+ 	for (i = 0; i < mi->num_export_targets; i++) {
+ 		ts = __open_export_target_session(mdsc, mi->export_targets[i]);
+-		if (!IS_ERR(ts))
+-			ceph_put_mds_session(ts);
++		ceph_put_mds_session(ts);
+ 	}
+ }
+ 
+@@ -1585,14 +1587,39 @@ out:
+ 	return ret;
+ }
+ 
++static int remove_capsnaps(struct ceph_mds_client *mdsc, struct inode *inode)
++{
++	struct ceph_inode_info *ci = ceph_inode(inode);
++	struct ceph_cap_snap *capsnap;
++	int capsnap_release = 0;
++
++	lockdep_assert_held(&ci->i_ceph_lock);
++
++	dout("removing capsnaps, ci is %p, inode is %p\n", ci, inode);
++
++	while (!list_empty(&ci->i_cap_snaps)) {
++		capsnap = list_first_entry(&ci->i_cap_snaps,
++					   struct ceph_cap_snap, ci_item);
++		__ceph_remove_capsnap(inode, capsnap, NULL, NULL);
++		ceph_put_snap_context(capsnap->context);
++		ceph_put_cap_snap(capsnap);
++		capsnap_release++;
++	}
++	wake_up_all(&ci->i_cap_wq);
++	wake_up_all(&mdsc->cap_flushing_wq);
++	return capsnap_release;
++}
++
+ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 				  void *arg)
+ {
+ 	struct ceph_fs_client *fsc = (struct ceph_fs_client *)arg;
++	struct ceph_mds_client *mdsc = fsc->mdsc;
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	LIST_HEAD(to_remove);
+ 	bool dirty_dropped = false;
+ 	bool invalidate = false;
++	int capsnap_release = 0;
+ 
+ 	dout("removing cap %p, ci is %p, inode is %p\n",
+ 	     cap, ci, &ci->vfs_inode);
+@@ -1600,7 +1627,6 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 	__ceph_remove_cap(cap, false);
+ 	if (!ci->i_auth_cap) {
+ 		struct ceph_cap_flush *cf;
+-		struct ceph_mds_client *mdsc = fsc->mdsc;
+ 
+ 		if (READ_ONCE(fsc->mount_state) == CEPH_MOUNT_SHUTDOWN) {
+ 			if (inode->i_data.nrpages > 0)
+@@ -1664,6 +1690,9 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 			list_add(&ci->i_prealloc_cap_flush->i_list, &to_remove);
+ 			ci->i_prealloc_cap_flush = NULL;
+ 		}
++
++		if (!list_empty(&ci->i_cap_snaps))
++			capsnap_release = remove_capsnaps(mdsc, inode);
+ 	}
+ 	spin_unlock(&ci->i_ceph_lock);
+ 	while (!list_empty(&to_remove)) {
+@@ -1680,6 +1709,8 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 		ceph_queue_invalidate(inode);
+ 	if (dirty_dropped)
+ 		iput(inode);
++	while (capsnap_release--)
++		iput(inode);
+ 	return 0;
+ }
+ 
+@@ -4857,7 +4888,6 @@ void ceph_mdsc_destroy(struct ceph_fs_client *fsc)
+ 
+ 	ceph_metric_destroy(&mdsc->metric);
+ 
+-	flush_delayed_work(&mdsc->metric.delayed_work);
+ 	fsc->mdsc = NULL;
+ 	kfree(mdsc);
+ 	dout("mdsc_destroy %p done\n", mdsc);
+diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
+index fee4c47783132..9e0a0e26294ee 100644
+--- a/fs/ceph/metric.c
++++ b/fs/ceph/metric.c
+@@ -224,6 +224,8 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
+ 	if (!m)
+ 		return;
+ 
++	cancel_delayed_work_sync(&m->delayed_work);
++
+ 	percpu_counter_destroy(&m->total_inodes);
+ 	percpu_counter_destroy(&m->opened_inodes);
+ 	percpu_counter_destroy(&m->i_caps_mis);
+@@ -231,10 +233,7 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
+ 	percpu_counter_destroy(&m->d_lease_mis);
+ 	percpu_counter_destroy(&m->d_lease_hit);
+ 
+-	cancel_delayed_work_sync(&m->delayed_work);
+-
+-	if (m->session)
+-		ceph_put_mds_session(m->session);
++	ceph_put_mds_session(m->session);
+ }
+ 
+ static inline void __update_latency(ktime_t *totalp, ktime_t *lsump,
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index a8c460393b01b..9362eeb5812d9 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -1134,6 +1134,12 @@ extern void ceph_put_cap_refs_no_check_caps(struct ceph_inode_info *ci,
+ 					    int had);
+ extern void ceph_put_wrbuffer_cap_refs(struct ceph_inode_info *ci, int nr,
+ 				       struct ceph_snap_context *snapc);
++extern void __ceph_remove_capsnap(struct inode *inode,
++				  struct ceph_cap_snap *capsnap,
++				  bool *wake_ci, bool *wake_mdsc);
++extern void ceph_remove_capsnap(struct inode *inode,
++				struct ceph_cap_snap *capsnap,
++				bool *wake_ci, bool *wake_mdsc);
+ extern void ceph_flush_snaps(struct ceph_inode_info *ci,
+ 			     struct ceph_mds_session **psession);
+ extern bool __ceph_should_report_size(struct ceph_inode_info *ci);
+diff --git a/fs/coredump.c b/fs/coredump.c
+index c6acfc694f658..c56a3bdce7cd4 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -1111,8 +1111,10 @@ int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+ 
+ 	mmap_write_unlock(mm);
+ 
+-	if (WARN_ON(i != *vma_count))
++	if (WARN_ON(i != *vma_count)) {
++		kvfree(*vma_meta);
+ 		return -EFAULT;
++	}
+ 
+ 	*vma_data_size_ptr = vma_data_size;
+ 	return 0;
+diff --git a/fs/nilfs2/sysfs.c b/fs/nilfs2/sysfs.c
+index 9c6c0e2e5880a..57afd06db62de 100644
+--- a/fs/nilfs2/sysfs.c
++++ b/fs/nilfs2/sysfs.c
+@@ -64,11 +64,9 @@ static const struct sysfs_ops nilfs_##name##_attr_ops = { \
+ #define NILFS_DEV_INT_GROUP_TYPE(name, parent_name) \
+ static void nilfs_##name##_attr_release(struct kobject *kobj) \
+ { \
+-	struct nilfs_sysfs_##parent_name##_subgroups *subgroups; \
+-	struct the_nilfs *nilfs = container_of(kobj->parent, \
+-						struct the_nilfs, \
+-						ns_##parent_name##_kobj); \
+-	subgroups = nilfs->ns_##parent_name##_subgroups; \
++	struct nilfs_sysfs_##parent_name##_subgroups *subgroups = container_of(kobj, \
++						struct nilfs_sysfs_##parent_name##_subgroups, \
++						sg_##name##_kobj); \
+ 	complete(&subgroups->sg_##name##_kobj_unregister); \
+ } \
+ static struct kobj_type nilfs_##name##_ktype = { \
+@@ -94,12 +92,12 @@ static int nilfs_sysfs_create_##name##_group(struct the_nilfs *nilfs) \
+ 	err = kobject_init_and_add(kobj, &nilfs_##name##_ktype, parent, \
+ 				    #name); \
+ 	if (err) \
+-		return err; \
+-	return 0; \
++		kobject_put(kobj); \
++	return err; \
+ } \
+ static void nilfs_sysfs_delete_##name##_group(struct the_nilfs *nilfs) \
+ { \
+-	kobject_del(&nilfs->ns_##parent_name##_subgroups->sg_##name##_kobj); \
++	kobject_put(&nilfs->ns_##parent_name##_subgroups->sg_##name##_kobj); \
+ }
+ 
+ /************************************************************************
+@@ -210,14 +208,14 @@ int nilfs_sysfs_create_snapshot_group(struct nilfs_root *root)
+ 	}
+ 
+ 	if (err)
+-		return err;
++		kobject_put(&root->snapshot_kobj);
+ 
+-	return 0;
++	return err;
+ }
+ 
+ void nilfs_sysfs_delete_snapshot_group(struct nilfs_root *root)
+ {
+-	kobject_del(&root->snapshot_kobj);
++	kobject_put(&root->snapshot_kobj);
+ }
+ 
+ /************************************************************************
+@@ -999,7 +997,7 @@ int nilfs_sysfs_create_device_group(struct super_block *sb)
+ 	err = kobject_init_and_add(&nilfs->ns_dev_kobj, &nilfs_dev_ktype, NULL,
+ 				    "%s", sb->s_id);
+ 	if (err)
+-		goto free_dev_subgroups;
++		goto cleanup_dev_kobject;
+ 
+ 	err = nilfs_sysfs_create_mounted_snapshots_group(nilfs);
+ 	if (err)
+@@ -1036,9 +1034,7 @@ delete_mounted_snapshots_group:
+ 	nilfs_sysfs_delete_mounted_snapshots_group(nilfs);
+ 
+ cleanup_dev_kobject:
+-	kobject_del(&nilfs->ns_dev_kobj);
+-
+-free_dev_subgroups:
++	kobject_put(&nilfs->ns_dev_kobj);
+ 	kfree(nilfs->ns_dev_subgroups);
+ 
+ failed_create_device_group:
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index 221a1cc597f06..c20ebecd7bc24 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -792,14 +792,13 @@ nilfs_find_or_create_root(struct the_nilfs *nilfs, __u64 cno)
+ 
+ void nilfs_put_root(struct nilfs_root *root)
+ {
+-	if (refcount_dec_and_test(&root->count)) {
+-		struct the_nilfs *nilfs = root->nilfs;
++	struct the_nilfs *nilfs = root->nilfs;
+ 
+-		nilfs_sysfs_delete_snapshot_group(root);
+-
+-		spin_lock(&nilfs->ns_cptree_lock);
++	if (refcount_dec_and_lock(&root->count, &nilfs->ns_cptree_lock)) {
+ 		rb_erase(&root->rb_node, &nilfs->ns_cptree);
+ 		spin_unlock(&nilfs->ns_cptree_lock);
++
++		nilfs_sysfs_delete_snapshot_group(root);
+ 		iput(root->ifile);
+ 
+ 		kfree(root);
+diff --git a/include/linux/cacheinfo.h b/include/linux/cacheinfo.h
+index 4f72b47973c30..2f909ed084c63 100644
+--- a/include/linux/cacheinfo.h
++++ b/include/linux/cacheinfo.h
+@@ -79,24 +79,6 @@ struct cpu_cacheinfo {
+ 	bool cpu_map_populated;
+ };
+ 
+-/*
+- * Helpers to make sure "func" is executed on the cpu whose cache
+- * attributes are being detected
+- */
+-#define DEFINE_SMP_CALL_CACHE_FUNCTION(func)			\
+-static inline void _##func(void *ret)				\
+-{								\
+-	int cpu = smp_processor_id();				\
+-	*(int *)ret = __##func(cpu);				\
+-}								\
+-								\
+-int func(unsigned int cpu)					\
+-{								\
+-	int ret;						\
+-	smp_call_function_single(cpu, _##func, &ret, true);	\
+-	return ret;						\
+-}
+-
+ struct cpu_cacheinfo *get_cpu_cacheinfo(unsigned int cpu);
+ int init_cache_level(unsigned int cpu);
+ int populate_cache_leaves(unsigned int cpu);
+diff --git a/include/linux/thermal.h b/include/linux/thermal.h
+index d07ea27e72a94..176d9454e8f36 100644
+--- a/include/linux/thermal.h
++++ b/include/linux/thermal.h
+@@ -410,12 +410,13 @@ static inline void thermal_zone_device_unregister(
+ 	struct thermal_zone_device *tz)
+ { }
+ static inline struct thermal_cooling_device *
+-thermal_cooling_device_register(char *type, void *devdata,
++thermal_cooling_device_register(const char *type, void *devdata,
+ 	const struct thermal_cooling_device_ops *ops)
+ { return ERR_PTR(-ENODEV); }
+ static inline struct thermal_cooling_device *
+ thermal_of_cooling_device_register(struct device_node *np,
+-	char *type, void *devdata, const struct thermal_cooling_device_ops *ops)
++	const char *type, void *devdata,
++	const struct thermal_cooling_device_ops *ops)
+ { return ERR_PTR(-ENODEV); }
+ static inline struct thermal_cooling_device *
+ devm_thermal_of_cooling_device_register(struct device *dev,
+diff --git a/kernel/profile.c b/kernel/profile.c
+index 6f69a4195d563..b47fe52f0ade4 100644
+--- a/kernel/profile.c
++++ b/kernel/profile.c
+@@ -41,7 +41,8 @@ struct profile_hit {
+ #define NR_PROFILE_GRP		(NR_PROFILE_HIT/PROFILE_GRPSZ)
+ 
+ static atomic_t *prof_buffer;
+-static unsigned long prof_len, prof_shift;
++static unsigned long prof_len;
++static unsigned short int prof_shift;
+ 
+ int prof_on __read_mostly;
+ EXPORT_SYMBOL_GPL(prof_on);
+@@ -67,8 +68,8 @@ int profile_setup(char *str)
+ 		if (str[strlen(sleepstr)] == ',')
+ 			str += strlen(sleepstr) + 1;
+ 		if (get_option(&str, &par))
+-			prof_shift = par;
+-		pr_info("kernel sleep profiling enabled (shift: %ld)\n",
++			prof_shift = clamp(par, 0, BITS_PER_LONG - 1);
++		pr_info("kernel sleep profiling enabled (shift: %u)\n",
+ 			prof_shift);
+ #else
+ 		pr_warn("kernel sleep profiling requires CONFIG_SCHEDSTATS\n");
+@@ -78,21 +79,21 @@ int profile_setup(char *str)
+ 		if (str[strlen(schedstr)] == ',')
+ 			str += strlen(schedstr) + 1;
+ 		if (get_option(&str, &par))
+-			prof_shift = par;
+-		pr_info("kernel schedule profiling enabled (shift: %ld)\n",
++			prof_shift = clamp(par, 0, BITS_PER_LONG - 1);
++		pr_info("kernel schedule profiling enabled (shift: %u)\n",
+ 			prof_shift);
+ 	} else if (!strncmp(str, kvmstr, strlen(kvmstr))) {
+ 		prof_on = KVM_PROFILING;
+ 		if (str[strlen(kvmstr)] == ',')
+ 			str += strlen(kvmstr) + 1;
+ 		if (get_option(&str, &par))
+-			prof_shift = par;
+-		pr_info("kernel KVM profiling enabled (shift: %ld)\n",
++			prof_shift = clamp(par, 0, BITS_PER_LONG - 1);
++		pr_info("kernel KVM profiling enabled (shift: %u)\n",
+ 			prof_shift);
+ 	} else if (get_option(&str, &par)) {
+-		prof_shift = par;
++		prof_shift = clamp(par, 0, BITS_PER_LONG - 1);
+ 		prof_on = CPU_PROFILING;
+-		pr_info("kernel profiling enabled (shift: %ld)\n",
++		pr_info("kernel profiling enabled (shift: %u)\n",
+ 			prof_shift);
+ 	}
+ 	return 1;
+@@ -468,7 +469,7 @@ read_profile(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+ 	unsigned long p = *ppos;
+ 	ssize_t read;
+ 	char *pnt;
+-	unsigned int sample_step = 1 << prof_shift;
++	unsigned long sample_step = 1UL << prof_shift;
+ 
+ 	profile_flip_buffers();
+ 	if (p >= (prof_len+1)*sizeof(unsigned int))
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 36b545f17206f..2593a733c0849 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -372,10 +372,10 @@ void play_idle_precise(u64 duration_ns, u64 latency_ns)
+ 	cpuidle_use_deepest_state(latency_ns);
+ 
+ 	it.done = 0;
+-	hrtimer_init_on_stack(&it.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++	hrtimer_init_on_stack(&it.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ 	it.timer.function = idle_inject_timer_fn;
+ 	hrtimer_start(&it.timer, ns_to_ktime(duration_ns),
+-		      HRTIMER_MODE_REL_PINNED);
++		      HRTIMER_MODE_REL_PINNED_HARD);
+ 
+ 	while (!READ_ONCE(it.done))
+ 		do_idle();
+diff --git a/kernel/sys.c b/kernel/sys.c
+index a730c03ee607c..24a3a28ae2284 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -1941,13 +1941,6 @@ static int validate_prctl_map_addr(struct prctl_mm_map *prctl_map)
+ 
+ 	error = -EINVAL;
+ 
+-	/*
+-	 * @brk should be after @end_data in traditional maps.
+-	 */
+-	if (prctl_map->start_brk <= prctl_map->end_data ||
+-	    prctl_map->brk <= prctl_map->end_data)
+-		goto out;
+-
+ 	/*
+ 	 * Neither we should allow to override limits if they set.
+ 	 */
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index bf174798afcb9..95f909540587c 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -981,7 +981,6 @@ config HARDLOCKUP_DETECTOR
+ 	depends on HAVE_HARDLOCKUP_DETECTOR_PERF || HAVE_HARDLOCKUP_DETECTOR_ARCH
+ 	select LOCKUP_DETECTOR
+ 	select HARDLOCKUP_DETECTOR_PERF if HAVE_HARDLOCKUP_DETECTOR_PERF
+-	select HARDLOCKUP_DETECTOR_ARCH if HAVE_HARDLOCKUP_DETECTOR_ARCH
+ 	help
+ 	  Say Y here to enable the kernel to act as a watchdog to detect
+ 	  hard lockups.
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index a3cd90a74012b..f582351d84ecb 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -605,7 +605,7 @@ static int p9_virtio_probe(struct virtio_device *vdev)
+ 	chan->vc_wq = kmalloc(sizeof(wait_queue_head_t), GFP_KERNEL);
+ 	if (!chan->vc_wq) {
+ 		err = -ENOMEM;
+-		goto out_free_tag;
++		goto out_remove_file;
+ 	}
+ 	init_waitqueue_head(chan->vc_wq);
+ 	chan->ring_bufs_avail = 1;
+@@ -623,6 +623,8 @@ static int p9_virtio_probe(struct virtio_device *vdev)
+ 
+ 	return 0;
+ 
++out_remove_file:
++	sysfs_remove_file(&vdev->dev.kobj, &dev_attr_mount_tag.attr);
+ out_free_tag:
+ 	kfree(tag);
+ out_free_vq:
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index ddb5b5c2550ef..49c49a4d203f0 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -1168,6 +1168,9 @@ static struct sctp_association *__sctp_rcv_asconf_lookup(
+ 	union sctp_addr_param *param;
+ 	union sctp_addr paddr;
+ 
++	if (ntohs(ch->length) < sizeof(*asconf) + sizeof(struct sctp_paramhdr))
++		return NULL;
++
+ 	/* Skip over the ADDIP header and find the Address parameter */
+ 	param = (union sctp_addr_param *)(asconf + 1);
+ 
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index 7411fa4428214..fa0d96320baae 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -2150,9 +2150,16 @@ static enum sctp_ierror sctp_verify_param(struct net *net,
+ 		break;
+ 
+ 	case SCTP_PARAM_SET_PRIMARY:
+-		if (ep->asconf_enable)
+-			break;
+-		goto unhandled;
++		if (!ep->asconf_enable)
++			goto unhandled;
++
++		if (ntohs(param.p->length) < sizeof(struct sctp_addip_param) +
++					     sizeof(struct sctp_paramhdr)) {
++			sctp_process_inv_paramlength(asoc, param.p,
++						     chunk, err_chunk);
++			retval = SCTP_IERROR_ABORT;
++		}
++		break;
+ 
+ 	case SCTP_PARAM_HOST_NAME_ADDRESS:
+ 		/* Tell the peer, we won't support this param.  */
+diff --git a/tools/bootconfig/scripts/ftrace2bconf.sh b/tools/bootconfig/scripts/ftrace2bconf.sh
+index a0c3bcc6da4f3..fb201d5afe2c1 100755
+--- a/tools/bootconfig/scripts/ftrace2bconf.sh
++++ b/tools/bootconfig/scripts/ftrace2bconf.sh
+@@ -222,8 +222,8 @@ instance_options() { # [instance-name]
+ 		emit_kv $PREFIX.cpumask = $val
+ 	fi
+ 	val=`cat $INSTANCE/tracing_on`
+-	if [ `echo $val | sed -e s/f//g`x != x ]; then
+-		emit_kv $PREFIX.tracing_on = $val
++	if [ "$val" = "0" ]; then
++		emit_kv $PREFIX.tracing_on = 0
+ 	fi
+ 
+ 	val=
+diff --git a/tools/include/linux/string.h b/tools/include/linux/string.h
+index 5e9e781905edc..db5c99318c799 100644
+--- a/tools/include/linux/string.h
++++ b/tools/include/linux/string.h
+@@ -46,4 +46,5 @@ extern char * __must_check skip_spaces(const char *);
+ 
+ extern char *strim(char *);
+ 
++extern void *memchr_inv(const void *start, int c, size_t bytes);
+ #endif /* _TOOLS_LINUX_STRING_H_ */
+diff --git a/tools/lib/string.c b/tools/lib/string.c
+index f645343815de6..8b6892f959abd 100644
+--- a/tools/lib/string.c
++++ b/tools/lib/string.c
+@@ -168,3 +168,61 @@ char *strreplace(char *s, char old, char new)
+ 			*s = new;
+ 	return s;
+ }
++
++static void *check_bytes8(const u8 *start, u8 value, unsigned int bytes)
++{
++	while (bytes) {
++		if (*start != value)
++			return (void *)start;
++		start++;
++		bytes--;
++	}
++	return NULL;
++}
++
++/**
++ * memchr_inv - Find an unmatching character in an area of memory.
++ * @start: The memory area
++ * @c: Find a character other than c
++ * @bytes: The size of the area.
++ *
++ * returns the address of the first character other than @c, or %NULL
++ * if the whole buffer contains just @c.
++ */
++void *memchr_inv(const void *start, int c, size_t bytes)
++{
++	u8 value = c;
++	u64 value64;
++	unsigned int words, prefix;
++
++	if (bytes <= 16)
++		return check_bytes8(start, value, bytes);
++
++	value64 = value;
++	value64 |= value64 << 8;
++	value64 |= value64 << 16;
++	value64 |= value64 << 32;
++
++	prefix = (unsigned long)start % 8;
++	if (prefix) {
++		u8 *r;
++
++		prefix = 8 - prefix;
++		r = check_bytes8(start, value, prefix);
++		if (r)
++			return r;
++		start += prefix;
++		bytes -= prefix;
++	}
++
++	words = bytes / 8;
++
++	while (words) {
++		if (*(u64 *)start != value64)
++			return check_bytes8(start, value, 8);
++		start += 8;
++		words--;
++	}
++
++	return check_bytes8(start, value, bytes % 8);
++}
+diff --git a/tools/perf/tests/bpf.c b/tools/perf/tests/bpf.c
+index 8345ff4acedf2..e5832b74a845d 100644
+--- a/tools/perf/tests/bpf.c
++++ b/tools/perf/tests/bpf.c
+@@ -199,7 +199,7 @@ static int do_test(struct bpf_object *obj, int (*func)(void),
+ 	}
+ 
+ 	if (count != expect * evlist->core.nr_entries) {
+-		pr_debug("BPF filter result incorrect, expected %d, got %d samples\n", expect, count);
++		pr_debug("BPF filter result incorrect, expected %d, got %d samples\n", expect * evlist->core.nr_entries, count);
+ 		goto out_delete_evlist;
+ 	}
+ 
+diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
+index b1ff0c9f32daf..5e9902fa1dc8a 100644
+--- a/tools/perf/util/dso.c
++++ b/tools/perf/util/dso.c
+@@ -1336,6 +1336,16 @@ void dso__set_build_id(struct dso *dso, struct build_id *bid)
+ 
+ bool dso__build_id_equal(const struct dso *dso, struct build_id *bid)
+ {
++	if (dso->bid.size > bid->size && dso->bid.size == BUILD_ID_SIZE) {
++		/*
++		 * For the backward compatibility, it allows a build-id has
++		 * trailing zeros.
++		 */
++		return !memcmp(dso->bid.data, bid->data, bid->size) &&
++			!memchr_inv(&dso->bid.data[bid->size], 0,
++				    dso->bid.size - bid->size);
++	}
++
+ 	return dso->bid.size == bid->size &&
+ 	       memcmp(dso->bid.data, bid->data, dso->bid.size) == 0;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-09-30 10:48 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-09-30 10:48 UTC (permalink / raw
  To: gentoo-commits

commit:     b7948130f9c9b6acc069b1667d0fe5fe32d7e38b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 30 10:48:47 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 30 10:48:47 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b7948130

Linux patch 5.10.70

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1069_linux-5.10.70.patch | 4252 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4256 insertions(+)

diff --git a/0000_README b/0000_README
index 456fb50..89a885a 100644
--- a/0000_README
+++ b/0000_README
@@ -319,6 +319,10 @@ Patch:  1068_linux-5.10.69.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.69
 
+Patch:  1069_linux-5.10.70.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.70
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1069_linux-5.10.70.patch b/1069_linux-5.10.70.patch
new file mode 100644
index 0000000..84d190b
--- /dev/null
+++ b/1069_linux-5.10.70.patch
@@ -0,0 +1,4252 @@
+diff --git a/Makefile b/Makefile
+index e14943205b832..4a9541a18618b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 69
++SUBLEVEL = 70
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/alpha/include/asm/io.h b/arch/alpha/include/asm/io.h
+index 1f6a909d1fa59..7bc2f444a89a2 100644
+--- a/arch/alpha/include/asm/io.h
++++ b/arch/alpha/include/asm/io.h
+@@ -60,7 +60,7 @@ extern inline void set_hae(unsigned long new_hae)
+  * Change virtual addresses to physical addresses and vv.
+  */
+ #ifdef USE_48_BIT_KSEG
+-static inline unsigned long virt_to_phys(void *address)
++static inline unsigned long virt_to_phys(volatile void *address)
+ {
+ 	return (unsigned long)address - IDENT_ADDR;
+ }
+@@ -70,7 +70,7 @@ static inline void * phys_to_virt(unsigned long address)
+ 	return (void *) (address + IDENT_ADDR);
+ }
+ #else
+-static inline unsigned long virt_to_phys(void *address)
++static inline unsigned long virt_to_phys(volatile void *address)
+ {
+         unsigned long phys = (unsigned long)address;
+ 
+@@ -106,7 +106,7 @@ static inline void * phys_to_virt(unsigned long address)
+ extern unsigned long __direct_map_base;
+ extern unsigned long __direct_map_size;
+ 
+-static inline unsigned long __deprecated virt_to_bus(void *address)
++static inline unsigned long __deprecated virt_to_bus(volatile void *address)
+ {
+ 	unsigned long phys = virt_to_phys(address);
+ 	unsigned long bus = phys + __direct_map_base;
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index ed919f633ed8e..4999caff32818 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -60,7 +60,7 @@
+ 
+ #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK)
+ #include <linux/stackprotector.h>
+-unsigned long __stack_chk_guard __read_mostly;
++unsigned long __stack_chk_guard __ro_after_init;
+ EXPORT_SYMBOL(__stack_chk_guard);
+ #endif
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
+index 40cbaca813334..b9518f94bd435 100644
+--- a/arch/arm64/kvm/vgic/vgic-its.c
++++ b/arch/arm64/kvm/vgic/vgic-its.c
+@@ -2190,8 +2190,8 @@ static int vgic_its_restore_ite(struct vgic_its *its, u32 event_id,
+ 	return offset;
+ }
+ 
+-static int vgic_its_ite_cmp(void *priv, struct list_head *a,
+-			    struct list_head *b)
++static int vgic_its_ite_cmp(void *priv, const struct list_head *a,
++			    const struct list_head *b)
+ {
+ 	struct its_ite *itea = container_of(a, struct its_ite, ite_list);
+ 	struct its_ite *iteb = container_of(b, struct its_ite, ite_list);
+@@ -2329,8 +2329,8 @@ static int vgic_its_restore_dte(struct vgic_its *its, u32 id,
+ 	return offset;
+ }
+ 
+-static int vgic_its_device_cmp(void *priv, struct list_head *a,
+-			       struct list_head *b)
++static int vgic_its_device_cmp(void *priv, const struct list_head *a,
++			       const struct list_head *b)
+ {
+ 	struct its_device *deva = container_of(a, struct its_device, dev_list);
+ 	struct its_device *devb = container_of(b, struct its_device, dev_list);
+diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
+index c3643b7f101b7..4abf7a867b654 100644
+--- a/arch/arm64/kvm/vgic/vgic.c
++++ b/arch/arm64/kvm/vgic/vgic.c
+@@ -255,7 +255,8 @@ static struct kvm_vcpu *vgic_target_oracle(struct vgic_irq *irq)
+  * Return negative if "a" sorts before "b", 0 to preserve order, and positive
+  * to sort "b" before "a".
+  */
+-static int vgic_irq_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int vgic_irq_cmp(void *priv, const struct list_head *a,
++			const struct list_head *b)
+ {
+ 	struct vgic_irq *irqa = container_of(a, struct vgic_irq, ap_list);
+ 	struct vgic_irq *irqb = container_of(b, struct vgic_irq, ap_list);
+diff --git a/arch/m68k/include/asm/raw_io.h b/arch/m68k/include/asm/raw_io.h
+index 911826ea83ce1..80eb2396d01eb 100644
+--- a/arch/m68k/include/asm/raw_io.h
++++ b/arch/m68k/include/asm/raw_io.h
+@@ -17,21 +17,21 @@
+  * two accesses to memory, which may be undesirable for some devices.
+  */
+ #define in_8(addr) \
+-    ({ u8 __v = (*(__force volatile u8 *) (addr)); __v; })
++    ({ u8 __v = (*(__force volatile u8 *) (unsigned long)(addr)); __v; })
+ #define in_be16(addr) \
+-    ({ u16 __v = (*(__force volatile u16 *) (addr)); __v; })
++    ({ u16 __v = (*(__force volatile u16 *) (unsigned long)(addr)); __v; })
+ #define in_be32(addr) \
+-    ({ u32 __v = (*(__force volatile u32 *) (addr)); __v; })
++    ({ u32 __v = (*(__force volatile u32 *) (unsigned long)(addr)); __v; })
+ #define in_le16(addr) \
+-    ({ u16 __v = le16_to_cpu(*(__force volatile __le16 *) (addr)); __v; })
++    ({ u16 __v = le16_to_cpu(*(__force volatile __le16 *) (unsigned long)(addr)); __v; })
+ #define in_le32(addr) \
+-    ({ u32 __v = le32_to_cpu(*(__force volatile __le32 *) (addr)); __v; })
++    ({ u32 __v = le32_to_cpu(*(__force volatile __le32 *) (unsigned long)(addr)); __v; })
+ 
+-#define out_8(addr,b) (void)((*(__force volatile u8 *) (addr)) = (b))
+-#define out_be16(addr,w) (void)((*(__force volatile u16 *) (addr)) = (w))
+-#define out_be32(addr,l) (void)((*(__force volatile u32 *) (addr)) = (l))
+-#define out_le16(addr,w) (void)((*(__force volatile __le16 *) (addr)) = cpu_to_le16(w))
+-#define out_le32(addr,l) (void)((*(__force volatile __le32 *) (addr)) = cpu_to_le32(l))
++#define out_8(addr,b) (void)((*(__force volatile u8 *) (unsigned long)(addr)) = (b))
++#define out_be16(addr,w) (void)((*(__force volatile u16 *) (unsigned long)(addr)) = (w))
++#define out_be32(addr,l) (void)((*(__force volatile u32 *) (unsigned long)(addr)) = (l))
++#define out_le16(addr,w) (void)((*(__force volatile __le16 *) (unsigned long)(addr)) = cpu_to_le16(w))
++#define out_le32(addr,l) (void)((*(__force volatile __le32 *) (unsigned long)(addr)) = cpu_to_le32(l))
+ 
+ #define raw_inb in_8
+ #define raw_inw in_be16
+diff --git a/arch/parisc/include/asm/page.h b/arch/parisc/include/asm/page.h
+index 6b3f6740a6a67..8802ce651a3af 100644
+--- a/arch/parisc/include/asm/page.h
++++ b/arch/parisc/include/asm/page.h
+@@ -184,7 +184,7 @@ extern int npmem_ranges;
+ #include <asm-generic/getorder.h>
+ #include <asm/pdc.h>
+ 
+-#define PAGE0   ((struct zeropage *)__PAGE_OFFSET)
++#define PAGE0   ((struct zeropage *)absolute_pointer(__PAGE_OFFSET))
+ 
+ /* DEFINITION OF THE ZERO-PAGE (PAG0) */
+ /* based on work by Jason Eckhardt (jason@equator.com) */
+diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c
+index 8e1d72a167594..7ceae24b0ca99 100644
+--- a/arch/sparc/kernel/ioport.c
++++ b/arch/sparc/kernel/ioport.c
+@@ -356,7 +356,9 @@ err_nomem:
+ void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
+ 		dma_addr_t dma_addr, unsigned long attrs)
+ {
+-	if (!sparc_dma_free_resource(cpu_addr, PAGE_ALIGN(size)))
++	size = PAGE_ALIGN(size);
++
++	if (!sparc_dma_free_resource(cpu_addr, size))
+ 		return;
+ 
+ 	dma_make_coherent(dma_addr, size);
+diff --git a/arch/sparc/kernel/mdesc.c b/arch/sparc/kernel/mdesc.c
+index 8e645ddac58e2..30f171b7b00c2 100644
+--- a/arch/sparc/kernel/mdesc.c
++++ b/arch/sparc/kernel/mdesc.c
+@@ -39,6 +39,7 @@ struct mdesc_hdr {
+ 	u32	node_sz; /* node block size */
+ 	u32	name_sz; /* name block size */
+ 	u32	data_sz; /* data block size */
++	char	data[];
+ } __attribute__((aligned(16)));
+ 
+ struct mdesc_elem {
+@@ -612,7 +613,7 @@ EXPORT_SYMBOL(mdesc_get_node_info);
+ 
+ static struct mdesc_elem *node_block(struct mdesc_hdr *mdesc)
+ {
+-	return (struct mdesc_elem *) (mdesc + 1);
++	return (struct mdesc_elem *) mdesc->data;
+ }
+ 
+ static void *name_block(struct mdesc_hdr *mdesc)
+diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
+index cc177b4431ae8..415693f5d909d 100644
+--- a/arch/x86/include/asm/special_insns.h
++++ b/arch/x86/include/asm/special_insns.h
+@@ -286,8 +286,8 @@ static inline void movdir64b(void *dst, const void *src)
+ static inline int enqcmds(void __iomem *dst, const void *src)
+ {
+ 	const struct { char _[64]; } *__src = src;
+-	struct { char _[64]; } *__dst = dst;
+-	int zf;
++	struct { char _[64]; } __iomem *__dst = dst;
++	bool zf;
+ 
+ 	/*
+ 	 * ENQCMDS %(rdx), rax
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index c758fd913cedd..5af0421ef74ba 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -736,8 +736,8 @@ static void xen_write_idt_entry(gate_desc *dt, int entrynum, const gate_desc *g)
+ 	preempt_enable();
+ }
+ 
+-static void xen_convert_trap_info(const struct desc_ptr *desc,
+-				  struct trap_info *traps)
++static unsigned xen_convert_trap_info(const struct desc_ptr *desc,
++				      struct trap_info *traps, bool full)
+ {
+ 	unsigned in, out, count;
+ 
+@@ -747,17 +747,18 @@ static void xen_convert_trap_info(const struct desc_ptr *desc,
+ 	for (in = out = 0; in < count; in++) {
+ 		gate_desc *entry = (gate_desc *)(desc->address) + in;
+ 
+-		if (cvt_gate_to_trap(in, entry, &traps[out]))
++		if (cvt_gate_to_trap(in, entry, &traps[out]) || full)
+ 			out++;
+ 	}
+-	traps[out].address = 0;
++
++	return out;
+ }
+ 
+ void xen_copy_trap_info(struct trap_info *traps)
+ {
+ 	const struct desc_ptr *desc = this_cpu_ptr(&idt_desc);
+ 
+-	xen_convert_trap_info(desc, traps);
++	xen_convert_trap_info(desc, traps, true);
+ }
+ 
+ /* Load a new IDT into Xen.  In principle this can be per-CPU, so we
+@@ -767,6 +768,7 @@ static void xen_load_idt(const struct desc_ptr *desc)
+ {
+ 	static DEFINE_SPINLOCK(lock);
+ 	static struct trap_info traps[257];
++	unsigned out;
+ 
+ 	trace_xen_cpu_load_idt(desc);
+ 
+@@ -774,7 +776,8 @@ static void xen_load_idt(const struct desc_ptr *desc)
+ 
+ 	memcpy(this_cpu_ptr(&idt_desc), desc, sizeof(idt_desc));
+ 
+-	xen_convert_trap_info(desc, traps);
++	out = xen_convert_trap_info(desc, traps, false);
++	memset(&traps[out], 0, sizeof(traps[0]));
+ 
+ 	xen_mc_flush();
+ 	if (HYPERVISOR_set_trap_table(traps))
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index f13688c4b9317..5b19665bc486a 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1387,10 +1387,14 @@ enomem:
+ 	/* alloc failed, nothing's initialized yet, free everything */
+ 	spin_lock_irq(&q->queue_lock);
+ 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
++		struct blkcg *blkcg = blkg->blkcg;
++
++		spin_lock(&blkcg->lock);
+ 		if (blkg->pd[pol->plid]) {
+ 			pol->pd_free_fn(blkg->pd[pol->plid]);
+ 			blkg->pd[pol->plid] = NULL;
+ 		}
++		spin_unlock(&blkcg->lock);
+ 	}
+ 	spin_unlock_irq(&q->queue_lock);
+ 	ret = -ENOMEM;
+@@ -1422,12 +1426,16 @@ void blkcg_deactivate_policy(struct request_queue *q,
+ 	__clear_bit(pol->plid, q->blkcg_pols);
+ 
+ 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
++		struct blkcg *blkcg = blkg->blkcg;
++
++		spin_lock(&blkcg->lock);
+ 		if (blkg->pd[pol->plid]) {
+ 			if (pol->pd_offline_fn)
+ 				pol->pd_offline_fn(blkg->pd[pol->plid]);
+ 			pol->pd_free_fn(blkg->pd[pol->plid]);
+ 			blkg->pd[pol->plid] = NULL;
+ 		}
++		spin_unlock(&blkcg->lock);
+ 	}
+ 
+ 	spin_unlock_irq(&q->queue_lock);
+diff --git a/block/blk-integrity.c b/block/blk-integrity.c
+index 410da060d1f5a..9e83159f5a527 100644
+--- a/block/blk-integrity.c
++++ b/block/blk-integrity.c
+@@ -426,8 +426,15 @@ EXPORT_SYMBOL(blk_integrity_register);
+  */
+ void blk_integrity_unregister(struct gendisk *disk)
+ {
++	struct blk_integrity *bi = &disk->queue->integrity;
++
++	if (!bi->profile)
++		return;
++
++	/* ensure all bios are off the integrity workqueue */
++	blk_flush_integrity();
+ 	blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, disk->queue);
+-	memset(&disk->queue->integrity, 0, sizeof(struct blk_integrity));
++	memset(bi, 0, sizeof(*bi));
+ }
+ EXPORT_SYMBOL(blk_integrity_unregister);
+ 
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 581be65a53c15..24c08963890e9 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -75,7 +75,8 @@ void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx)
+ 	blk_mq_run_hw_queue(hctx, true);
+ }
+ 
+-static int sched_rq_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int sched_rq_cmp(void *priv, const struct list_head *a,
++			const struct list_head *b)
+ {
+ 	struct request *rqa = container_of(a, struct request, queuelist);
+ 	struct request *rqb = container_of(b, struct request, queuelist);
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index c4f2f6c123aed..16ad9e6566108 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -207,7 +207,7 @@ static struct request *blk_mq_find_and_get_req(struct blk_mq_tags *tags,
+ 
+ 	spin_lock_irqsave(&tags->lock, flags);
+ 	rq = tags->rqs[bitnr];
+-	if (!rq || !refcount_inc_not_zero(&rq->ref))
++	if (!rq || rq->tag != bitnr || !refcount_inc_not_zero(&rq->ref))
+ 		rq = NULL;
+ 	spin_unlock_irqrestore(&tags->lock, flags);
+ 	return rq;
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 6dcb86c1c985d..eed9a4c1519df 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1866,7 +1866,8 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
+ 	spin_unlock(&ctx->lock);
+ }
+ 
+-static int plug_rq_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int plug_rq_cmp(void *priv, const struct list_head *a,
++		       const struct list_head *b)
+ {
+ 	struct request *rqa = container_of(a, struct request, queuelist);
+ 	struct request *rqb = container_of(b, struct request, queuelist);
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index cb18cb5c51b17..d061bff5cc96c 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -1194,7 +1194,8 @@ static int __nfit_mem_init(struct acpi_nfit_desc *acpi_desc,
+ 	return 0;
+ }
+ 
+-static int nfit_mem_cmp(void *priv, struct list_head *_a, struct list_head *_b)
++static int nfit_mem_cmp(void *priv, const struct list_head *_a,
++		const struct list_head *_b)
+ {
+ 	struct nfit_mem *a = container_of(_a, typeof(*a), list);
+ 	struct nfit_mem *b = container_of(_b, typeof(*b), list);
+diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
+index cb73a5d6ea76d..137a5dd880c26 100644
+--- a/drivers/acpi/numa/hmat.c
++++ b/drivers/acpi/numa/hmat.c
+@@ -558,7 +558,8 @@ static bool hmat_update_best(u8 type, u32 value, u32 *best)
+ 	return updated;
+ }
+ 
+-static int initiator_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int initiator_cmp(void *priv, const struct list_head *a,
++			 const struct list_head *b)
+ {
+ 	struct memory_initiator *ia;
+ 	struct memory_initiator *ib;
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 2a3952925855d..65b22b5af51ac 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -2236,6 +2236,7 @@ static void binder_deferred_fd_close(int fd)
+ }
+ 
+ static void binder_transaction_buffer_release(struct binder_proc *proc,
++					      struct binder_thread *thread,
+ 					      struct binder_buffer *buffer,
+ 					      binder_size_t failed_at,
+ 					      bool is_failure)
+@@ -2395,8 +2396,16 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 						&proc->alloc, &fd, buffer,
+ 						offset, sizeof(fd));
+ 				WARN_ON(err);
+-				if (!err)
++				if (!err) {
+ 					binder_deferred_fd_close(fd);
++					/*
++					 * Need to make sure the thread goes
++					 * back to userspace to complete the
++					 * deferred close
++					 */
++					if (thread)
++						thread->looper_need_return = true;
++				}
+ 			}
+ 		} break;
+ 		default:
+@@ -3465,7 +3474,7 @@ err_bad_parent:
+ err_copy_data_failed:
+ 	binder_free_txn_fixups(t);
+ 	trace_binder_transaction_failed_buffer_release(t->buffer);
+-	binder_transaction_buffer_release(target_proc, t->buffer,
++	binder_transaction_buffer_release(target_proc, NULL, t->buffer,
+ 					  buffer_offset, true);
+ 	if (target_node)
+ 		binder_dec_node_tmpref(target_node);
+@@ -3542,7 +3551,9 @@ err_invalid_target_handle:
+  * Cleanup buffer and free it.
+  */
+ static void
+-binder_free_buf(struct binder_proc *proc, struct binder_buffer *buffer)
++binder_free_buf(struct binder_proc *proc,
++		struct binder_thread *thread,
++		struct binder_buffer *buffer)
+ {
+ 	binder_inner_proc_lock(proc);
+ 	if (buffer->transaction) {
+@@ -3570,7 +3581,7 @@ binder_free_buf(struct binder_proc *proc, struct binder_buffer *buffer)
+ 		binder_node_inner_unlock(buf_node);
+ 	}
+ 	trace_binder_transaction_buffer_release(buffer);
+-	binder_transaction_buffer_release(proc, buffer, 0, false);
++	binder_transaction_buffer_release(proc, thread, buffer, 0, false);
+ 	binder_alloc_free_buf(&proc->alloc, buffer);
+ }
+ 
+@@ -3771,7 +3782,7 @@ static int binder_thread_write(struct binder_proc *proc,
+ 				     proc->pid, thread->pid, (u64)data_ptr,
+ 				     buffer->debug_id,
+ 				     buffer->transaction ? "active" : "finished");
+-			binder_free_buf(proc, buffer);
++			binder_free_buf(proc, thread, buffer);
+ 			break;
+ 		}
+ 
+@@ -4459,7 +4470,7 @@ retry:
+ 			buffer->transaction = NULL;
+ 			binder_cleanup_transaction(t, "fd fixups failed",
+ 						   BR_FAILED_REPLY);
+-			binder_free_buf(proc, buffer);
++			binder_free_buf(proc, thread, buffer);
+ 			binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
+ 				     "%d:%d %stransaction %d fd fixups failed %d/%d, line %d\n",
+ 				     proc->pid, thread->pid,
+diff --git a/drivers/clk/keystone/sci-clk.c b/drivers/clk/keystone/sci-clk.c
+index aaf31abe1c8ff..7e1b136e71ae0 100644
+--- a/drivers/clk/keystone/sci-clk.c
++++ b/drivers/clk/keystone/sci-clk.c
+@@ -503,8 +503,8 @@ static int ti_sci_scan_clocks_from_fw(struct sci_clk_provider *provider)
+ 
+ #else
+ 
+-static int _cmp_sci_clk_list(void *priv, struct list_head *a,
+-			     struct list_head *b)
++static int _cmp_sci_clk_list(void *priv, const struct list_head *a,
++			     const struct list_head *b)
+ {
+ 	struct sci_clk *ca = container_of(a, struct sci_clk, node);
+ 	struct sci_clk *cb = container_of(b, struct sci_clk, node);
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 44a5d15a75728..1686705bee7bd 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -3035,11 +3035,15 @@ static int __init intel_pstate_init(void)
+ 	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+ 		return -ENODEV;
+ 
+-	if (no_load)
+-		return -ENODEV;
+-
+ 	id = x86_match_cpu(hwp_support_ids);
+ 	if (id) {
++		bool hwp_forced = intel_pstate_hwp_is_enabled();
++
++		if (hwp_forced)
++			pr_info("HWP enabled by BIOS\n");
++		else if (no_load)
++			return -ENODEV;
++
+ 		copy_cpu_funcs(&core_funcs);
+ 		/*
+ 		 * Avoid enabling HWP for processors without EPP support,
+@@ -3049,8 +3053,7 @@ static int __init intel_pstate_init(void)
+ 		 * If HWP is enabled already, though, there is no choice but to
+ 		 * deal with it.
+ 		 */
+-		if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) ||
+-		    intel_pstate_hwp_is_enabled()) {
++		if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) || hwp_forced) {
+ 			hwp_active++;
+ 			hwp_mode_bdw = id->driver_data;
+ 			intel_pstate.attr = hwp_cpufreq_attrs;
+@@ -3061,7 +3064,11 @@ static int __init intel_pstate_init(void)
+ 
+ 			goto hwp_cpu_matched;
+ 		}
++		pr_info("HWP not enabled\n");
+ 	} else {
++		if (no_load)
++			return -ENODEV;
++
+ 		id = x86_match_cpu(intel_pstate_cpu_ids);
+ 		if (!id) {
+ 			pr_info("CPU model not supported\n");
+@@ -3138,10 +3145,9 @@ static int __init intel_pstate_setup(char *str)
+ 	else if (!strcmp(str, "passive"))
+ 		default_driver = &intel_cpufreq;
+ 
+-	if (!strcmp(str, "no_hwp")) {
+-		pr_info("HWP disabled\n");
++	if (!strcmp(str, "no_hwp"))
+ 		no_hwp = 1;
+-	}
++
+ 	if (!strcmp(str, "force"))
+ 		force_load = 1;
+ 	if (!strcmp(str, "hwp_only"))
+diff --git a/drivers/edac/dmc520_edac.c b/drivers/edac/dmc520_edac.c
+index fc1153ab1ebbc..b8a7d9594afd4 100644
+--- a/drivers/edac/dmc520_edac.c
++++ b/drivers/edac/dmc520_edac.c
+@@ -464,7 +464,7 @@ static void dmc520_init_csrow(struct mem_ctl_info *mci)
+ 			dimm->grain	= pvt->mem_width_in_bytes;
+ 			dimm->dtype	= dt;
+ 			dimm->mtype	= mt;
+-			dimm->edac_mode	= EDAC_FLAG_SECDED;
++			dimm->edac_mode	= EDAC_SECDED;
+ 			dimm->nr_pages	= pages_per_rank / csi->nr_channels;
+ 		}
+ 	}
+diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
+index 12211dc040e8f..1a801a5d3b08b 100644
+--- a/drivers/edac/synopsys_edac.c
++++ b/drivers/edac/synopsys_edac.c
+@@ -782,7 +782,7 @@ static void init_csrows(struct mem_ctl_info *mci)
+ 
+ 		for (j = 0; j < csi->nr_channels; j++) {
+ 			dimm		= csi->channels[j]->dimm;
+-			dimm->edac_mode	= EDAC_FLAG_SECDED;
++			dimm->edac_mode	= EDAC_SECDED;
+ 			dimm->mtype	= p_data->get_mtype(priv->baseaddr);
+ 			dimm->nr_pages	= (size >> PAGE_SHIFT) / csi->nr_channels;
+ 			dimm->grain	= SYNPS_EDAC_ERR_GRAIN;
+diff --git a/drivers/fpga/machxo2-spi.c b/drivers/fpga/machxo2-spi.c
+index b316369156fe6..9eef18349eee6 100644
+--- a/drivers/fpga/machxo2-spi.c
++++ b/drivers/fpga/machxo2-spi.c
+@@ -225,8 +225,10 @@ static int machxo2_write_init(struct fpga_manager *mgr,
+ 		goto fail;
+ 
+ 	get_status(spi, &status);
+-	if (test_bit(FAIL, &status))
++	if (test_bit(FAIL, &status)) {
++		ret = -EINVAL;
+ 		goto fail;
++	}
+ 	dump_status_reg(&status);
+ 
+ 	spi_message_init(&msg);
+@@ -313,6 +315,7 @@ static int machxo2_write_complete(struct fpga_manager *mgr,
+ 	dump_status_reg(&status);
+ 	if (!test_bit(DONE, &status)) {
+ 		machxo2_cleanup(mgr);
++		ret = -EINVAL;
+ 		goto fail;
+ 	}
+ 
+@@ -335,6 +338,7 @@ static int machxo2_write_complete(struct fpga_manager *mgr,
+ 			break;
+ 		if (++refreshloop == MACHXO2_MAX_REFRESH_LOOP) {
+ 			machxo2_cleanup(mgr);
++			ret = -EINVAL;
+ 			goto fail;
+ 		}
+ 	} while (1);
+diff --git a/drivers/gpio/gpio-uniphier.c b/drivers/gpio/gpio-uniphier.c
+index f99f3c10bed03..39dca147d587a 100644
+--- a/drivers/gpio/gpio-uniphier.c
++++ b/drivers/gpio/gpio-uniphier.c
+@@ -184,7 +184,7 @@ static void uniphier_gpio_irq_mask(struct irq_data *data)
+ 
+ 	uniphier_gpio_reg_update(priv, UNIPHIER_GPIO_IRQ_EN, mask, 0);
+ 
+-	return irq_chip_mask_parent(data);
++	irq_chip_mask_parent(data);
+ }
+ 
+ static void uniphier_gpio_irq_unmask(struct irq_data *data)
+@@ -194,7 +194,7 @@ static void uniphier_gpio_irq_unmask(struct irq_data *data)
+ 
+ 	uniphier_gpio_reg_update(priv, UNIPHIER_GPIO_IRQ_EN, mask, mask);
+ 
+-	return irq_chip_unmask_parent(data);
++	irq_chip_unmask_parent(data);
+ }
+ 
+ static int uniphier_gpio_irq_set_type(struct irq_data *data, unsigned int type)
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index bc9df3f216f56..ce21a21ddb235 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8962,7 +8962,8 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 			goto fail;
+ 		status = dc_validate_global_state(dc, dm_state->context, false);
+ 		if (status != DC_OK) {
+-			DC_LOG_WARNING("DC global validation failure: %s (%d)",
++			drm_dbg_atomic(dev,
++				       "DC global validation failure: %s (%d)",
+ 				       dc_status_to_str(status), status);
+ 			ret = -EINVAL;
+ 			goto fail;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+index b5986d19dc08b..a1e7ba5995c57 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+@@ -6870,6 +6870,8 @@ static int si_dpm_enable(struct amdgpu_device *adev)
+ 	si_enable_auto_throttle_source(adev, AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL, true);
+ 	si_thermal_start_thermal_controller(adev);
+ 
++	ni_update_current_ps(adev, boot_ps);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/drm_modes.c b/drivers/gpu/drm/drm_modes.c
+index 511cde5c7fa6f..0f99e5453f152 100644
+--- a/drivers/gpu/drm/drm_modes.c
++++ b/drivers/gpu/drm/drm_modes.c
+@@ -1290,7 +1290,8 @@ EXPORT_SYMBOL(drm_mode_prune_invalid);
+  * Negative if @lh_a is better than @lh_b, zero if they're equivalent, or
+  * positive if @lh_b is better than @lh_a.
+  */
+-static int drm_mode_compare(void *priv, struct list_head *lh_a, struct list_head *lh_b)
++static int drm_mode_compare(void *priv, const struct list_head *lh_a,
++			    const struct list_head *lh_b)
+ {
+ 	struct drm_display_mode *a = list_entry(lh_a, struct drm_display_mode, head);
+ 	struct drm_display_mode *b = list_entry(lh_b, struct drm_display_mode, head);
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_user.c b/drivers/gpu/drm/i915/gt/intel_engine_user.c
+index 34e6096f196ed..da21d2a10cc94 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_user.c
++++ b/drivers/gpu/drm/i915/gt/intel_engine_user.c
+@@ -49,7 +49,8 @@ static const u8 uabi_classes[] = {
+ 	[VIDEO_ENHANCEMENT_CLASS] = I915_ENGINE_CLASS_VIDEO_ENHANCE,
+ };
+ 
+-static int engine_cmp(void *priv, struct list_head *A, struct list_head *B)
++static int engine_cmp(void *priv, const struct list_head *A,
++		      const struct list_head *B)
+ {
+ 	const struct intel_engine_cs *a =
+ 		container_of((struct rb_node *)A, typeof(*a), uabi_node);
+diff --git a/drivers/gpu/drm/i915/gvt/debugfs.c b/drivers/gpu/drm/i915/gvt/debugfs.c
+index 62e6a14ad58ef..9f1c209d92511 100644
+--- a/drivers/gpu/drm/i915/gvt/debugfs.c
++++ b/drivers/gpu/drm/i915/gvt/debugfs.c
+@@ -41,7 +41,7 @@ struct diff_mmio {
+ 
+ /* Compare two diff_mmio items. */
+ static int mmio_offset_compare(void *priv,
+-	struct list_head *a, struct list_head *b)
++	const struct list_head *a, const struct list_head *b)
+ {
+ 	struct diff_mmio *ma;
+ 	struct diff_mmio *mb;
+diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+index 713770fb2b92d..65e28c4cd4ce5 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
++++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+@@ -1075,7 +1075,8 @@ static int igt_ppgtt_shrink_boom(void *arg)
+ 	return exercise_ppgtt(arg, shrink_boom);
+ }
+ 
+-static int sort_holes(void *priv, struct list_head *A, struct list_head *B)
++static int sort_holes(void *priv, const struct list_head *A,
++		      const struct list_head *B)
+ {
+ 	struct drm_mm_node *a = list_entry(A, typeof(*a), hole_stack);
+ 	struct drm_mm_node *b = list_entry(B, typeof(*b), hole_stack);
+diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
+index 21ce2f9502c09..a78b60b62caf2 100644
+--- a/drivers/gpu/drm/radeon/radeon_cs.c
++++ b/drivers/gpu/drm/radeon/radeon_cs.c
+@@ -394,8 +394,8 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
+ 	return 0;
+ }
+ 
+-static int cmp_size_smaller_first(void *priv, struct list_head *a,
+-				  struct list_head *b)
++static int cmp_size_smaller_first(void *priv, const struct list_head *a,
++				  const struct list_head *b)
+ {
+ 	struct radeon_bo_list *la = list_entry(a, struct radeon_bo_list, tv.head);
+ 	struct radeon_bo_list *lb = list_entry(b, struct radeon_bo_list, tv.head);
+diff --git a/drivers/infiniband/hw/usnic/usnic_uiom_interval_tree.c b/drivers/infiniband/hw/usnic/usnic_uiom_interval_tree.c
+index d399523206c78..29d71267af786 100644
+--- a/drivers/infiniband/hw/usnic/usnic_uiom_interval_tree.c
++++ b/drivers/infiniband/hw/usnic/usnic_uiom_interval_tree.c
+@@ -83,7 +83,8 @@ usnic_uiom_interval_node_alloc(long int start, long int last, int ref_cnt,
+ 	return interval;
+ }
+ 
+-static int interval_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int interval_cmp(void *priv, const struct list_head *a,
++			const struct list_head *b)
+ {
+ 	struct usnic_uiom_interval_node *node_a, *node_b;
+ 
+diff --git a/drivers/interconnect/qcom/bcm-voter.c b/drivers/interconnect/qcom/bcm-voter.c
+index dd0e3bd50b94c..3c0809095a31c 100644
+--- a/drivers/interconnect/qcom/bcm-voter.c
++++ b/drivers/interconnect/qcom/bcm-voter.c
+@@ -39,7 +39,7 @@ struct bcm_voter {
+ 	u32 tcs_wait;
+ };
+ 
+-static int cmp_vcd(void *priv, struct list_head *a, struct list_head *b)
++static int cmp_vcd(void *priv, const struct list_head *a, const struct list_head *b)
+ {
+ 	const struct qcom_icc_bcm *bcm_a =
+ 			list_entry(a, struct qcom_icc_bcm, list);
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index 6156a065681bc..dc062e8c2caf8 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -425,6 +425,7 @@ config MESON_IRQ_GPIO
+ config GOLDFISH_PIC
+        bool "Goldfish programmable interrupt controller"
+        depends on MIPS && (GOLDFISH || COMPILE_TEST)
++       select GENERIC_IRQ_CHIP
+        select IRQ_DOMAIN
+        help
+          Say yes here to enable Goldfish interrupt controller driver used
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 4069c215328b3..95e0b82b6c661 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -4489,7 +4489,7 @@ static int its_vpe_irq_domain_alloc(struct irq_domain *domain, unsigned int virq
+ 
+ 	if (err) {
+ 		if (i > 0)
+-			its_vpe_irq_domain_free(domain, virq, i - 1);
++			its_vpe_irq_domain_free(domain, virq, i);
+ 
+ 		its_lpi_free(bitmap, base, nr_ids);
+ 		its_free_prop_table(vprop_page);
+diff --git a/drivers/mcb/mcb-core.c b/drivers/mcb/mcb-core.c
+index 38fbb3b598731..38cc8340e817d 100644
+--- a/drivers/mcb/mcb-core.c
++++ b/drivers/mcb/mcb-core.c
+@@ -277,8 +277,8 @@ struct mcb_bus *mcb_alloc_bus(struct device *carrier)
+ 
+ 	bus_nr = ida_simple_get(&mcb_ida, 0, 0, GFP_KERNEL);
+ 	if (bus_nr < 0) {
+-		rc = bus_nr;
+-		goto err_free;
++		kfree(bus);
++		return ERR_PTR(bus_nr);
+ 	}
+ 
+ 	bus->bus_nr = bus_nr;
+@@ -293,12 +293,12 @@ struct mcb_bus *mcb_alloc_bus(struct device *carrier)
+ 	dev_set_name(&bus->dev, "mcb:%d", bus_nr);
+ 	rc = device_add(&bus->dev);
+ 	if (rc)
+-		goto err_free;
++		goto err_put;
+ 
+ 	return bus;
+-err_free:
+-	put_device(carrier);
+-	kfree(bus);
++
++err_put:
++	put_device(&bus->dev);
+ 	return ERR_PTR(rc);
+ }
+ EXPORT_SYMBOL_NS_GPL(mcb_alloc_bus, MCB);
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 288d26013de27..f16f190546ef3 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -5759,10 +5759,6 @@ static int md_alloc(dev_t dev, char *name)
+ 	disk->flags |= GENHD_FL_EXT_DEVT;
+ 	disk->events |= DISK_EVENT_MEDIA_CHANGE;
+ 	mddev->gendisk = disk;
+-	/* As soon as we call add_disk(), another thread could get
+-	 * through to md_open, so make sure it doesn't get too far
+-	 */
+-	mutex_lock(&mddev->open_mutex);
+ 	add_disk(disk);
+ 
+ 	error = kobject_add(&mddev->kobj, &disk_to_dev(disk)->kobj, "%s", "md");
+@@ -5777,7 +5773,6 @@ static int md_alloc(dev_t dev, char *name)
+ 	if (mddev->kobj.sd &&
+ 	    sysfs_create_group(&mddev->kobj, &md_bitmap_group))
+ 		pr_debug("pointless warning\n");
+-	mutex_unlock(&mddev->open_mutex);
+  abort:
+ 	mutex_unlock(&disks_mutex);
+ 	if (!error && mddev->kobj.sd) {
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 39343479ac2a9..c82953a3299e2 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -953,7 +953,8 @@ static void dispatch_bio_list(struct bio_list *tmp)
+ 		submit_bio_noacct(bio);
+ }
+ 
+-static int cmp_stripe(void *priv, struct list_head *a, struct list_head *b)
++static int cmp_stripe(void *priv, const struct list_head *a,
++		      const struct list_head *b)
+ {
+ 	const struct r5pending_data *da = list_entry(a,
+ 				struct r5pending_data, sibling);
+diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
+index 6c1a23cb3e8c0..202bf951e9095 100644
+--- a/drivers/misc/sram.c
++++ b/drivers/misc/sram.c
+@@ -144,8 +144,8 @@ static void sram_free_partitions(struct sram_dev *sram)
+ 	}
+ }
+ 
+-static int sram_reserve_cmp(void *priv, struct list_head *a,
+-					struct list_head *b)
++static int sram_reserve_cmp(void *priv, const struct list_head *a,
++					const struct list_head *b)
+ {
+ 	struct sram_reserve *ra = list_entry(a, struct sram_reserve, list);
+ 	struct sram_reserve *rb = list_entry(b, struct sram_reserve, list);
+diff --git a/drivers/net/dsa/realtek-smi-core.c b/drivers/net/dsa/realtek-smi-core.c
+index 8e49d4f85d48c..6bf46d76c0281 100644
+--- a/drivers/net/dsa/realtek-smi-core.c
++++ b/drivers/net/dsa/realtek-smi-core.c
+@@ -368,7 +368,7 @@ int realtek_smi_setup_mdio(struct realtek_smi *smi)
+ 	smi->slave_mii_bus->parent = smi->dev;
+ 	smi->ds->slave_mii_bus = smi->slave_mii_bus;
+ 
+-	ret = of_mdiobus_register(smi->slave_mii_bus, mdio_np);
++	ret = devm_of_mdiobus_register(smi->dev, smi->slave_mii_bus, mdio_np);
+ 	if (ret) {
+ 		dev_err(smi->dev, "unable to register MDIO bus %s\n",
+ 			smi->slave_mii_bus->id);
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index f26d037356191..5b996330f228b 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -419,13 +419,13 @@ static int atl_resume_common(struct device *dev, bool deep)
+ 	if (deep) {
+ 		/* Reinitialize Nic/Vecs objects */
+ 		aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol);
++	}
+ 
++	if (netif_running(nic->ndev)) {
+ 		ret = aq_nic_init(nic);
+ 		if (ret)
+ 			goto err_exit;
+-	}
+ 
+-	if (netif_running(nic->ndev)) {
+ 		ret = aq_nic_start(nic);
+ 		if (ret)
+ 			goto err_exit;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 26179e437bbfd..cb0c270418a4d 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -381,7 +381,7 @@ static bool bnxt_txr_netif_try_stop_queue(struct bnxt *bp,
+ 	 * netif_tx_queue_stopped().
+ 	 */
+ 	smp_mb();
+-	if (bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh) {
++	if (bnxt_tx_avail(bp, txr) >= bp->tx_wake_thresh) {
+ 		netif_tx_wake_queue(txq);
+ 		return false;
+ 	}
+@@ -717,7 +717,7 @@ next_tx_int:
+ 	smp_mb();
+ 
+ 	if (unlikely(netif_tx_queue_stopped(txq)) &&
+-	    bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh &&
++	    bnxt_tx_avail(bp, txr) >= bp->tx_wake_thresh &&
+ 	    READ_ONCE(txr->dev_state) != BNXT_DEV_STATE_CLOSING)
+ 		netif_tx_wake_queue(txq);
+ }
+@@ -2300,7 +2300,7 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 		if (TX_CMP_TYPE(txcmp) == CMP_TYPE_TX_L2_CMP) {
+ 			tx_pkts++;
+ 			/* return full budget so NAPI will complete. */
+-			if (unlikely(tx_pkts > bp->tx_wake_thresh)) {
++			if (unlikely(tx_pkts >= bp->tx_wake_thresh)) {
+ 				rx_pkts = budget;
+ 				raw_cons = NEXT_RAW_CMP(raw_cons);
+ 				if (budget)
+@@ -3431,7 +3431,7 @@ static int bnxt_init_tx_rings(struct bnxt *bp)
+ 	u16 i;
+ 
+ 	bp->tx_wake_thresh = max_t(int, bp->tx_ring_size / 2,
+-				   MAX_SKB_FRAGS + 1);
++				   BNXT_MIN_TX_DESC_CNT);
+ 
+ 	for (i = 0; i < bp->tx_nr_rings; i++) {
+ 		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 95d10e7bbb041..92f9f7f5240b6 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -611,6 +611,11 @@ struct nqe_cn {
+ #define BNXT_MAX_RX_JUM_DESC_CNT	(RX_DESC_CNT * MAX_RX_AGG_PAGES - 1)
+ #define BNXT_MAX_TX_DESC_CNT		(TX_DESC_CNT * MAX_TX_PAGES - 1)
+ 
++/* Minimum TX BDs for a TX packet with MAX_SKB_FRAGS + 1.  We need one extra
++ * BD because the first TX BD is always a long BD.
++ */
++#define BNXT_MIN_TX_DESC_CNT		(MAX_SKB_FRAGS + 2)
++
+ #define RX_RING(x)	(((x) & ~(RX_DESC_CNT - 1)) >> (BNXT_PAGE_SHIFT - 4))
+ #define RX_IDX(x)	((x) & (RX_DESC_CNT - 1))
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 1471c9a362388..6f9196ff2ac4f 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -780,7 +780,7 @@ static int bnxt_set_ringparam(struct net_device *dev,
+ 
+ 	if ((ering->rx_pending > BNXT_MAX_RX_DESC_CNT) ||
+ 	    (ering->tx_pending > BNXT_MAX_TX_DESC_CNT) ||
+-	    (ering->tx_pending <= MAX_SKB_FRAGS))
++	    (ering->tx_pending < BNXT_MIN_TX_DESC_CNT))
+ 		return -EINVAL;
+ 
+ 	if (netif_running(dev))
+diff --git a/drivers/net/ethernet/cadence/macb_pci.c b/drivers/net/ethernet/cadence/macb_pci.c
+index 353393dea6394..3593b310c325d 100644
+--- a/drivers/net/ethernet/cadence/macb_pci.c
++++ b/drivers/net/ethernet/cadence/macb_pci.c
+@@ -111,9 +111,9 @@ static void macb_remove(struct pci_dev *pdev)
+ 	struct platform_device *plat_dev = pci_get_drvdata(pdev);
+ 	struct macb_platform_data *plat_data = dev_get_platdata(&plat_dev->dev);
+ 
+-	platform_device_unregister(plat_dev);
+ 	clk_unregister(plat_data->pclk);
+ 	clk_unregister(plat_data->hclk);
++	platform_device_unregister(plat_dev);
+ }
+ 
+ static const struct pci_device_id dev_id_table[] = {
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index df4a858c80015..15aa3b3c0089f 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -299,7 +299,7 @@ static void enetc_rx_dim_work(struct work_struct *w)
+ 
+ static void enetc_rx_net_dim(struct enetc_int_vector *v)
+ {
+-	struct dim_sample dim_sample;
++	struct dim_sample dim_sample = {};
+ 
+ 	v->comp_cnt++;
+ 
+@@ -1320,7 +1320,6 @@ static void enetc_clear_bdrs(struct enetc_ndev_priv *priv)
+ static int enetc_setup_irqs(struct enetc_ndev_priv *priv)
+ {
+ 	struct pci_dev *pdev = priv->si->pdev;
+-	cpumask_t cpu_mask;
+ 	int i, j, err;
+ 
+ 	for (i = 0; i < priv->bdr_int_num; i++) {
+@@ -1349,9 +1348,7 @@ static int enetc_setup_irqs(struct enetc_ndev_priv *priv)
+ 
+ 			enetc_wr(hw, ENETC_SIMSITRV(idx), entry);
+ 		}
+-		cpumask_clear(&cpu_mask);
+-		cpumask_set_cpu(i % num_online_cpus(), &cpu_mask);
+-		irq_set_affinity_hint(irq, &cpu_mask);
++		irq_set_affinity_hint(irq, get_cpu_mask(i % num_online_cpus()));
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 59ec538eba1f0..24357e9071553 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -4377,6 +4377,24 @@ static int hclge_get_rss(struct hnae3_handle *handle, u32 *indir,
+ 	return 0;
+ }
+ 
++static int hclge_parse_rss_hfunc(struct hclge_vport *vport, const u8 hfunc,
++				 u8 *hash_algo)
++{
++	switch (hfunc) {
++	case ETH_RSS_HASH_TOP:
++		*hash_algo = HCLGE_RSS_HASH_ALGO_TOEPLITZ;
++		return 0;
++	case ETH_RSS_HASH_XOR:
++		*hash_algo = HCLGE_RSS_HASH_ALGO_SIMPLE;
++		return 0;
++	case ETH_RSS_HASH_NO_CHANGE:
++		*hash_algo = vport->rss_algo;
++		return 0;
++	default:
++		return -EINVAL;
++	}
++}
++
+ static int hclge_set_rss(struct hnae3_handle *handle, const u32 *indir,
+ 			 const  u8 *key, const  u8 hfunc)
+ {
+@@ -4385,30 +4403,27 @@ static int hclge_set_rss(struct hnae3_handle *handle, const u32 *indir,
+ 	u8 hash_algo;
+ 	int ret, i;
+ 
++	ret = hclge_parse_rss_hfunc(vport, hfunc, &hash_algo);
++	if (ret) {
++		dev_err(&hdev->pdev->dev, "invalid hfunc type %u\n", hfunc);
++		return ret;
++	}
++
+ 	/* Set the RSS Hash Key if specififed by the user */
+ 	if (key) {
+-		switch (hfunc) {
+-		case ETH_RSS_HASH_TOP:
+-			hash_algo = HCLGE_RSS_HASH_ALGO_TOEPLITZ;
+-			break;
+-		case ETH_RSS_HASH_XOR:
+-			hash_algo = HCLGE_RSS_HASH_ALGO_SIMPLE;
+-			break;
+-		case ETH_RSS_HASH_NO_CHANGE:
+-			hash_algo = vport->rss_algo;
+-			break;
+-		default:
+-			return -EINVAL;
+-		}
+-
+ 		ret = hclge_set_rss_algo_key(hdev, hash_algo, key);
+ 		if (ret)
+ 			return ret;
+ 
+ 		/* Update the shadow RSS key with user specified qids */
+ 		memcpy(vport->rss_hash_key, key, HCLGE_RSS_KEY_SIZE);
+-		vport->rss_algo = hash_algo;
++	} else {
++		ret = hclge_set_rss_algo_key(hdev, hash_algo,
++					     vport->rss_hash_key);
++		if (ret)
++			return ret;
+ 	}
++	vport->rss_algo = hash_algo;
+ 
+ 	/* Update the shadow RSS table with user specified qids */
+ 	for (i = 0; i < HCLGE_RSS_IND_TBL_SIZE; i++)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index ff9d84a7147f1..5d39967672561 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -581,9 +581,17 @@ static void hclge_get_queue_id_in_pf(struct hclge_vport *vport,
+ 				     struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+ 				     struct hclge_respond_to_vf_msg *resp_msg)
+ {
++	struct hnae3_handle *handle = &vport->nic;
++	struct hclge_dev *hdev = vport->back;
+ 	u16 queue_id, qid_in_pf;
+ 
+ 	memcpy(&queue_id, mbx_req->msg.data, sizeof(queue_id));
++	if (queue_id >= handle->kinfo.num_tqps) {
++		dev_err(&hdev->pdev->dev, "Invalid queue id(%u) from VF %u\n",
++			queue_id, mbx_req->mbx_src_vfid);
++		return;
++	}
++
+ 	qid_in_pf = hclge_covert_handle_qid_global(&vport->nic, queue_id);
+ 	memcpy(resp_msg->data, &qid_in_pf, sizeof(qid_in_pf));
+ 	resp_msg->len = sizeof(qid_in_pf);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 447457cacf973..3641d7c31451c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -785,40 +785,56 @@ static int hclgevf_get_rss(struct hnae3_handle *handle, u32 *indir, u8 *key,
+ 	return 0;
+ }
+ 
++static int hclgevf_parse_rss_hfunc(struct hclgevf_dev *hdev, const u8 hfunc,
++				   u8 *hash_algo)
++{
++	switch (hfunc) {
++	case ETH_RSS_HASH_TOP:
++		*hash_algo = HCLGEVF_RSS_HASH_ALGO_TOEPLITZ;
++		return 0;
++	case ETH_RSS_HASH_XOR:
++		*hash_algo = HCLGEVF_RSS_HASH_ALGO_SIMPLE;
++		return 0;
++	case ETH_RSS_HASH_NO_CHANGE:
++		*hash_algo = hdev->rss_cfg.hash_algo;
++		return 0;
++	default:
++		return -EINVAL;
++	}
++}
++
+ static int hclgevf_set_rss(struct hnae3_handle *handle, const u32 *indir,
+ 			   const u8 *key, const u8 hfunc)
+ {
+ 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+ 	struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg;
++	u8 hash_algo;
+ 	int ret, i;
+ 
+ 	if (hdev->ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V2) {
++		ret = hclgevf_parse_rss_hfunc(hdev, hfunc, &hash_algo);
++		if (ret)
++			return ret;
++
+ 		/* Set the RSS Hash Key if specififed by the user */
+ 		if (key) {
+-			switch (hfunc) {
+-			case ETH_RSS_HASH_TOP:
+-				rss_cfg->hash_algo =
+-					HCLGEVF_RSS_HASH_ALGO_TOEPLITZ;
+-				break;
+-			case ETH_RSS_HASH_XOR:
+-				rss_cfg->hash_algo =
+-					HCLGEVF_RSS_HASH_ALGO_SIMPLE;
+-				break;
+-			case ETH_RSS_HASH_NO_CHANGE:
+-				break;
+-			default:
+-				return -EINVAL;
+-			}
+-
+-			ret = hclgevf_set_rss_algo_key(hdev, rss_cfg->hash_algo,
+-						       key);
+-			if (ret)
++			ret = hclgevf_set_rss_algo_key(hdev, hash_algo, key);
++			if (ret) {
++				dev_err(&hdev->pdev->dev,
++					"invalid hfunc type %u\n", hfunc);
+ 				return ret;
++			}
+ 
+ 			/* Update the shadow RSS key with user specified qids */
+ 			memcpy(rss_cfg->rss_hash_key, key,
+ 			       HCLGEVF_RSS_KEY_SIZE);
++		} else {
++			ret = hclgevf_set_rss_algo_key(hdev, hash_algo,
++						       rss_cfg->rss_hash_key);
++			if (ret)
++				return ret;
+ 		}
++		rss_cfg->hash_algo = hash_algo;
+ 	}
+ 
+ 	/* update the shadow RSS table with user specified qids */
+diff --git a/drivers/net/ethernet/i825xx/82596.c b/drivers/net/ethernet/i825xx/82596.c
+index fc8c7cd674712..8b12a5ab3818c 100644
+--- a/drivers/net/ethernet/i825xx/82596.c
++++ b/drivers/net/ethernet/i825xx/82596.c
+@@ -1155,7 +1155,7 @@ struct net_device * __init i82596_probe(int unit)
+ 			err = -ENODEV;
+ 			goto out;
+ 		}
+-		memcpy(eth_addr, (void *) 0xfffc1f2c, ETH_ALEN);	/* YUCK! Get addr from NOVRAM */
++		memcpy(eth_addr, absolute_pointer(0xfffc1f2c), ETH_ALEN); /* YUCK! Get addr from NOVRAM */
+ 		dev->base_addr = MVME_I596_BASE;
+ 		dev->irq = (unsigned) MVME16x_IRQ_I596;
+ 		goto found;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+index d8a20e83d9040..8999e9ce4f08e 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+@@ -372,6 +372,9 @@ mlx4_en_filter_rfs(struct net_device *net_dev, const struct sk_buff *skb,
+ 	int nhoff = skb_network_offset(skb);
+ 	int ret = 0;
+ 
++	if (skb->encapsulation)
++		return -EPROTONOSUPPORT;
++
+ 	if (skb->protocol != htons(ETH_P_IP))
+ 		return -EPROTONOSUPPORT;
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+index a99861124630a..68fbe536a1f32 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+@@ -1297,6 +1297,14 @@ qed_iwarp_wait_cid_map_cleared(struct qed_hwfn *p_hwfn, struct qed_bmap *bmap)
+ 	prev_weight = weight;
+ 
+ 	while (weight) {
++		/* If the HW device is during recovery, all resources are
++		 * immediately reset without receiving a per-cid indication
++		 * from HW. In this case we don't expect the cid_map to be
++		 * cleared.
++		 */
++		if (p_hwfn->cdev->recov_in_prog)
++			return 0;
++
+ 		msleep(QED_IWARP_MAX_CID_CLEAN_TIME);
+ 
+ 		weight = bitmap_weight(bmap->bitmap, bmap->max_count);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_roce.c b/drivers/net/ethernet/qlogic/qed/qed_roce.c
+index f16a157bb95a0..cf5baa5e59bcc 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_roce.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_roce.c
+@@ -77,6 +77,14 @@ void qed_roce_stop(struct qed_hwfn *p_hwfn)
+ 	 * Beyond the added delay we clear the bitmap anyway.
+ 	 */
+ 	while (bitmap_weight(rcid_map->bitmap, rcid_map->max_count)) {
++		/* If the HW device is during recovery, all resources are
++		 * immediately reset without receiving a per-cid indication
++		 * from HW. In this case we don't expect the cid bitmap to be
++		 * cleared.
++		 */
++		if (p_hwfn->cdev->recov_in_prog)
++			return;
++
+ 		msleep(100);
+ 		if (wait_count++ > 20) {
+ 			DP_NOTICE(p_hwfn, "cid bitmap wait timed out\n");
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 3134f7e669f80..6133b2fe8a78a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -226,7 +226,7 @@ static void stmmac_clk_csr_set(struct stmmac_priv *priv)
+ 			priv->clk_csr = STMMAC_CSR_100_150M;
+ 		else if ((clk_rate >= CSR_F_150M) && (clk_rate < CSR_F_250M))
+ 			priv->clk_csr = STMMAC_CSR_150_250M;
+-		else if ((clk_rate >= CSR_F_250M) && (clk_rate < CSR_F_300M))
++		else if ((clk_rate >= CSR_F_250M) && (clk_rate <= CSR_F_300M))
+ 			priv->clk_csr = STMMAC_CSR_250_300M;
+ 	}
+ 
+diff --git a/drivers/net/hamradio/6pack.c b/drivers/net/hamradio/6pack.c
+index da13683d52d1a..bd0beb16d68a9 100644
+--- a/drivers/net/hamradio/6pack.c
++++ b/drivers/net/hamradio/6pack.c
+@@ -68,9 +68,9 @@
+ #define SIXP_DAMA_OFF		0
+ 
+ /* default level 2 parameters */
+-#define SIXP_TXDELAY			(HZ/4)	/* in 1 s */
++#define SIXP_TXDELAY			25	/* 250 ms */
+ #define SIXP_PERSIST			50	/* in 256ths */
+-#define SIXP_SLOTTIME			(HZ/10)	/* in 1 s */
++#define SIXP_SLOTTIME			10	/* 100 ms */
+ #define SIXP_INIT_RESYNC_TIMEOUT	(3*HZ/2) /* in 1 s */
+ #define SIXP_RESYNC_TIMEOUT		5*HZ	/* in 1 s */
+ 
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 6072e87ed6c3c..025c3246f3396 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -1493,6 +1493,32 @@ int phylink_ethtool_ksettings_set(struct phylink *pl,
+ 	if (config.an_enabled && phylink_is_empty_linkmode(config.advertising))
+ 		return -EINVAL;
+ 
++	/* If this link is with an SFP, ensure that changes to advertised modes
++	 * also cause the associated interface to be selected such that the
++	 * link can be configured correctly.
++	 */
++	if (pl->sfp_port && pl->sfp_bus) {
++		config.interface = sfp_select_interface(pl->sfp_bus,
++							config.advertising);
++		if (config.interface == PHY_INTERFACE_MODE_NA) {
++			phylink_err(pl,
++				    "selection of interface failed, advertisement %*pb\n",
++				    __ETHTOOL_LINK_MODE_MASK_NBITS,
++				    config.advertising);
++			return -EINVAL;
++		}
++
++		/* Revalidate with the selected interface */
++		linkmode_copy(support, pl->supported);
++		if (phylink_validate(pl, support, &config)) {
++			phylink_err(pl, "validation of %s/%s with support %*pb failed\n",
++				    phylink_an_mode_str(pl->cur_link_an_mode),
++				    phy_modes(config.interface),
++				    __ETHTOOL_LINK_MODE_MASK_NBITS, support);
++			return -EINVAL;
++		}
++	}
++
+ 	mutex_lock(&pl->state_mutex);
+ 	pl->link_config.speed = config.speed;
+ 	pl->link_config.duplex = config.duplex;
+@@ -2072,7 +2098,9 @@ static int phylink_sfp_config(struct phylink *pl, u8 mode,
+ 	if (phy_interface_mode_is_8023z(iface) && pl->phydev)
+ 		return -EINVAL;
+ 
+-	changed = !linkmode_equal(pl->supported, support);
++	changed = !linkmode_equal(pl->supported, support) ||
++		  !linkmode_equal(pl->link_config.advertising,
++				  config.advertising);
+ 	if (changed) {
+ 		linkmode_copy(pl->supported, support);
+ 		linkmode_copy(pl->link_config.advertising, config.advertising);
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index f269337c82c58..df8d4c1e5be74 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -2721,14 +2721,14 @@ struct hso_device *hso_create_mux_serial_device(struct usb_interface *interface,
+ 
+ 	serial = kzalloc(sizeof(*serial), GFP_KERNEL);
+ 	if (!serial)
+-		goto exit;
++		goto err_free_dev;
+ 
+ 	hso_dev->port_data.dev_serial = serial;
+ 	serial->parent = hso_dev;
+ 
+ 	if (hso_serial_common_create
+ 	    (serial, 1, CTRL_URB_RX_SIZE, CTRL_URB_TX_SIZE))
+-		goto exit;
++		goto err_free_serial;
+ 
+ 	serial->tx_data_length--;
+ 	serial->write_data = hso_mux_serial_write_data;
+@@ -2744,11 +2744,9 @@ struct hso_device *hso_create_mux_serial_device(struct usb_interface *interface,
+ 	/* done, return it */
+ 	return hso_dev;
+ 
+-exit:
+-	if (serial) {
+-		tty_unregister_device(tty_drv, serial->minor);
+-		kfree(serial);
+-	}
++err_free_serial:
++	kfree(serial);
++err_free_dev:
+ 	kfree(hso_dev);
+ 	return NULL;
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 5a9b2f1b1418a..bbc3efef50278 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -13,7 +13,6 @@
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/backing-dev.h>
+-#include <linux/list_sort.h>
+ #include <linux/slab.h>
+ #include <linux/types.h>
+ #include <linux/pr.h>
+@@ -3801,14 +3800,6 @@ out_unlock:
+ 	return ret;
+ }
+ 
+-static int ns_cmp(void *priv, struct list_head *a, struct list_head *b)
+-{
+-	struct nvme_ns *nsa = container_of(a, struct nvme_ns, list);
+-	struct nvme_ns *nsb = container_of(b, struct nvme_ns, list);
+-
+-	return nsa->head->ns_id - nsb->head->ns_id;
+-}
+-
+ struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+ {
+ 	struct nvme_ns *ns, *ret = NULL;
+@@ -3829,6 +3820,22 @@ struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+ }
+ EXPORT_SYMBOL_NS_GPL(nvme_find_get_ns, NVME_TARGET_PASSTHRU);
+ 
++/*
++ * Add the namespace to the controller list while keeping the list ordered.
++ */
++static void nvme_ns_add_to_ctrl_list(struct nvme_ns *ns)
++{
++	struct nvme_ns *tmp;
++
++	list_for_each_entry_reverse(tmp, &ns->ctrl->namespaces, list) {
++		if (tmp->head->ns_id < ns->head->ns_id) {
++			list_add(&ns->list, &tmp->list);
++			return;
++		}
++	}
++	list_add(&ns->list, &ns->ctrl->namespaces);
++}
++
+ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
+ 		struct nvme_ns_ids *ids)
+ {
+@@ -3888,9 +3895,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
+ 	}
+ 
+ 	down_write(&ctrl->namespaces_rwsem);
+-	list_add_tail(&ns->list, &ctrl->namespaces);
++	nvme_ns_add_to_ctrl_list(ns);
+ 	up_write(&ctrl->namespaces_rwsem);
+-
+ 	nvme_get_ctrl(ctrl);
+ 
+ 	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
+@@ -4159,10 +4165,6 @@ static void nvme_scan_work(struct work_struct *work)
+ 	if (nvme_scan_ns_list(ctrl) != 0)
+ 		nvme_scan_ns_sequential(ctrl);
+ 	mutex_unlock(&ctrl->scan_lock);
+-
+-	down_write(&ctrl->namespaces_rwsem);
+-	list_sort(NULL, &ctrl->namespaces, ns_cmp);
+-	up_write(&ctrl->namespaces_rwsem);
+ }
+ 
+ /*
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 2747efc03825c..46a1e24ba6f47 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -509,14 +509,17 @@ static int nvme_update_ana_state(struct nvme_ctrl *ctrl,
+ 
+ 	down_read(&ctrl->namespaces_rwsem);
+ 	list_for_each_entry(ns, &ctrl->namespaces, list) {
+-		unsigned nsid = le32_to_cpu(desc->nsids[n]);
+-
++		unsigned nsid;
++again:
++		nsid = le32_to_cpu(desc->nsids[n]);
+ 		if (ns->head->ns_id < nsid)
+ 			continue;
+ 		if (ns->head->ns_id == nsid)
+ 			nvme_update_ns_ana_state(desc, ns);
+ 		if (++n == nr_nsids)
+ 			break;
++		if (ns->head->ns_id > nsid)
++			goto again;
+ 	}
+ 	up_read(&ctrl->namespaces_rwsem);
+ 	return 0;
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 9c356be7f016e..51f4647ea2142 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -655,8 +655,8 @@ static void nvme_rdma_free_queue(struct nvme_rdma_queue *queue)
+ 	if (!test_and_clear_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags))
+ 		return;
+ 
+-	nvme_rdma_destroy_queue_ib(queue);
+ 	rdma_destroy_id(queue->cm_id);
++	nvme_rdma_destroy_queue_ib(queue);
+ 	mutex_destroy(&queue->queue_lock);
+ }
+ 
+@@ -1823,14 +1823,10 @@ static int nvme_rdma_conn_established(struct nvme_rdma_queue *queue)
+ 	for (i = 0; i < queue->queue_size; i++) {
+ 		ret = nvme_rdma_post_recv(queue, &queue->rsp_ring[i]);
+ 		if (ret)
+-			goto out_destroy_queue_ib;
++			return ret;
+ 	}
+ 
+ 	return 0;
+-
+-out_destroy_queue_ib:
+-	nvme_rdma_destroy_queue_ib(queue);
+-	return ret;
+ }
+ 
+ static int nvme_rdma_conn_rejected(struct nvme_rdma_queue *queue,
+@@ -1924,14 +1920,10 @@ static int nvme_rdma_route_resolved(struct nvme_rdma_queue *queue)
+ 	if (ret) {
+ 		dev_err(ctrl->ctrl.device,
+ 			"rdma_connect_locked failed (%d).\n", ret);
+-		goto out_destroy_queue_ib;
++		return ret;
+ 	}
+ 
+ 	return 0;
+-
+-out_destroy_queue_ib:
+-	nvme_rdma_destroy_queue_ib(queue);
+-	return ret;
+ }
+ 
+ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
+@@ -1962,8 +1954,6 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
+ 	case RDMA_CM_EVENT_ROUTE_ERROR:
+ 	case RDMA_CM_EVENT_CONNECT_ERROR:
+ 	case RDMA_CM_EVENT_UNREACHABLE:
+-		nvme_rdma_destroy_queue_ib(queue);
+-		fallthrough;
+ 	case RDMA_CM_EVENT_ADDR_ERROR:
+ 		dev_dbg(queue->ctrl->ctrl.device,
+ 			"CM error event %d\n", ev->event);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index a6b3b07627630..05ad6bee085c1 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -611,7 +611,7 @@ static int nvme_tcp_setup_h2c_data_pdu(struct nvme_tcp_request *req,
+ 		cpu_to_le32(data->hdr.hlen + hdgst + req->pdu_len + ddgst);
+ 	data->ttag = pdu->ttag;
+ 	data->command_id = nvme_cid(rq);
+-	data->data_offset = cpu_to_le32(req->data_sent);
++	data->data_offset = pdu->r2t_offset;
+ 	data->data_length = cpu_to_le32(req->pdu_len);
+ 	return 0;
+ }
+@@ -937,7 +937,15 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ 			nvme_tcp_ddgst_update(queue->snd_hash, page,
+ 					offset, ret);
+ 
+-		/* fully successful last write*/
++		/*
++		 * update the request iterator except for the last payload send
++		 * in the request where we don't want to modify it as we may
++		 * compete with the RX path completing the request.
++		 */
++		if (req->data_sent + ret < req->data_len)
++			nvme_tcp_advance_req(req, ret);
++
++		/* fully successful last send in current PDU */
+ 		if (last && ret == len) {
+ 			if (queue->data_digest) {
+ 				nvme_tcp_ddgst_final(queue->snd_hash,
+@@ -949,7 +957,6 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ 			}
+ 			return 1;
+ 		}
+-		nvme_tcp_advance_req(req, ret);
+ 	}
+ 	return -EAGAIN;
+ }
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index a40ed9e12b4bb..fb96d37a135c1 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -345,7 +345,8 @@ static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc,
+ 	return 0;
+ }
+ 
+-static int cdns_pcie_host_dma_ranges_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int cdns_pcie_host_dma_ranges_cmp(void *priv, const struct list_head *a,
++					 const struct list_head *b)
+ {
+ 	struct resource_entry *entry1, *entry2;
+ 
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index f175cff39b460..4f1a29ede576a 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -214,7 +214,7 @@
+ 	(PCIE_CONF_BUS(bus) | PCIE_CONF_DEV(PCI_SLOT(devfn))	| \
+ 	 PCIE_CONF_FUNC(PCI_FUNC(devfn)) | PCIE_CONF_REG(where))
+ 
+-#define PIO_RETRY_CNT			500
++#define PIO_RETRY_CNT			750000 /* 1.5 s */
+ #define PIO_RETRY_DELAY			2 /* 2 us*/
+ 
+ #define LINK_WAIT_MAX_RETRIES		10
+diff --git a/drivers/platform/x86/intel_punit_ipc.c b/drivers/platform/x86/intel_punit_ipc.c
+index f58b8543f6ac5..66bb39fd0ef90 100644
+--- a/drivers/platform/x86/intel_punit_ipc.c
++++ b/drivers/platform/x86/intel_punit_ipc.c
+@@ -8,7 +8,6 @@
+  * which provide mailbox interface for power management usage.
+  */
+ 
+-#include <linux/acpi.h>
+ #include <linux/bitops.h>
+ #include <linux/delay.h>
+ #include <linux/device.h>
+@@ -319,7 +318,7 @@ static struct platform_driver intel_punit_ipc_driver = {
+ 	.remove = intel_punit_ipc_remove,
+ 	.driver = {
+ 		.name = "intel_punit_ipc",
+-		.acpi_match_table = ACPI_PTR(punit_ipc_acpi_ids),
++		.acpi_match_table = punit_ipc_acpi_ids,
+ 	},
+ };
+ 
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index 4d51c4ace8ea3..7b0155b0e99ee 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -210,6 +210,9 @@ static void qeth_clear_working_pool_list(struct qeth_card *card)
+ 				 &card->qdio.in_buf_pool.entry_list, list)
+ 		list_del(&pool_entry->list);
+ 
++	if (!queue)
++		return;
++
+ 	for (i = 0; i < ARRAY_SIZE(queue->bufs); i++)
+ 		queue->bufs[i].pool_entry = NULL;
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index bdea2867516c0..2c59a5bf35390 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -6005,7 +6005,8 @@ lpfc_sg_seg_cnt_show(struct device *dev, struct device_attribute *attr,
+ 	len = scnprintf(buf, PAGE_SIZE, "SGL sz: %d  total SGEs: %d\n",
+ 		       phba->cfg_sg_dma_buf_size, phba->cfg_total_seg_cnt);
+ 
+-	len += scnprintf(buf + len, PAGE_SIZE, "Cfg: %d  SCSI: %d  NVME: %d\n",
++	len += scnprintf(buf + len, PAGE_SIZE - len,
++			"Cfg: %d  SCSI: %d  NVME: %d\n",
+ 			phba->cfg_sg_seg_cnt, phba->cfg_scsi_seg_cnt,
+ 			phba->cfg_nvme_seg_cnt);
+ 	return len;
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 6faf34fa62206..b7aac3116f2db 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -6934,7 +6934,8 @@ qla2x00_abort_isp(scsi_qla_host_t *vha)
+ 				return 0;
+ 			break;
+ 		case QLA2XXX_INI_MODE_DUAL:
+-			if (!qla_dual_mode_enabled(vha))
++			if (!qla_dual_mode_enabled(vha) &&
++			    !qla_ini_mode_enabled(vha))
+ 				return 0;
+ 			break;
+ 		case QLA2XXX_INI_MODE_ENABLED:
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index ac07a9ef35780..41772b88610ae 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -442,9 +442,7 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
+ 	struct iscsi_transport *t = iface->transport;
+ 	int param = -1;
+ 
+-	if (attr == &dev_attr_iface_enabled.attr)
+-		param = ISCSI_NET_PARAM_IFACE_ENABLE;
+-	else if (attr == &dev_attr_iface_def_taskmgmt_tmo.attr)
++	if (attr == &dev_attr_iface_def_taskmgmt_tmo.attr)
+ 		param = ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO;
+ 	else if (attr == &dev_attr_iface_header_digest.attr)
+ 		param = ISCSI_IFACE_PARAM_HDRDGST_EN;
+@@ -484,7 +482,9 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
+ 	if (param != -1)
+ 		return t->attr_is_visible(ISCSI_IFACE_PARAM, param);
+ 
+-	if (attr == &dev_attr_iface_vlan_id.attr)
++	if (attr == &dev_attr_iface_enabled.attr)
++		param = ISCSI_NET_PARAM_IFACE_ENABLE;
++	else if (attr == &dev_attr_iface_vlan_id.attr)
+ 		param = ISCSI_NET_PARAM_VLAN_ID;
+ 	else if (attr == &dev_attr_iface_vlan_priority.attr)
+ 		param = ISCSI_NET_PARAM_VLAN_PRIORITY;
+diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
+index 87a7274e4632b..01088f333dbc4 100644
+--- a/drivers/scsi/sd_zbc.c
++++ b/drivers/scsi/sd_zbc.c
+@@ -155,8 +155,8 @@ static void *sd_zbc_alloc_report_buffer(struct scsi_disk *sdkp,
+ 
+ 	/*
+ 	 * Report zone buffer size should be at most 64B times the number of
+-	 * zones requested plus the 64B reply header, but should be at least
+-	 * SECTOR_SIZE for ATA devices.
++	 * zones requested plus the 64B reply header, but should be aligned
++	 * to SECTOR_SIZE for ATA devices.
+ 	 * Make sure that this size does not exceed the hardware capabilities.
+ 	 * Furthermore, since the report zone command cannot be split, make
+ 	 * sure that the allocated buffer can always be mapped by limiting the
+@@ -175,7 +175,7 @@ static void *sd_zbc_alloc_report_buffer(struct scsi_disk *sdkp,
+ 			*buflen = bufsize;
+ 			return buf;
+ 		}
+-		bufsize >>= 1;
++		bufsize = rounddown(bufsize >> 1, SECTOR_SIZE);
+ 	}
+ 
+ 	return NULL;
+diff --git a/drivers/spi/spi-loopback-test.c b/drivers/spi/spi-loopback-test.c
+index 89b91cdfb2a54..4d4f77a186a98 100644
+--- a/drivers/spi/spi-loopback-test.c
++++ b/drivers/spi/spi-loopback-test.c
+@@ -454,7 +454,8 @@ struct rx_ranges {
+ 	u8 *end;
+ };
+ 
+-static int rx_ranges_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int rx_ranges_cmp(void *priv, const struct list_head *a,
++			 const struct list_head *b)
+ {
+ 	struct rx_ranges *rx_a = list_entry(a, struct rx_ranges, list);
+ 	struct rx_ranges *rx_b = list_entry(b, struct rx_ranges, list);
+diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
+index f7c832fd40036..669fc4286231f 100644
+--- a/drivers/spi/spi-tegra20-slink.c
++++ b/drivers/spi/spi-tegra20-slink.c
+@@ -1201,7 +1201,7 @@ static int tegra_slink_resume(struct device *dev)
+ }
+ #endif
+ 
+-static int tegra_slink_runtime_suspend(struct device *dev)
++static int __maybe_unused tegra_slink_runtime_suspend(struct device *dev)
+ {
+ 	struct spi_master *master = dev_get_drvdata(dev);
+ 	struct tegra_slink_data *tspi = spi_master_get_devdata(master);
+@@ -1213,7 +1213,7 @@ static int tegra_slink_runtime_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int tegra_slink_runtime_resume(struct device *dev)
++static int __maybe_unused tegra_slink_runtime_resume(struct device *dev)
+ {
+ 	struct spi_master *master = dev_get_drvdata(dev);
+ 	struct tegra_slink_data *tspi = spi_master_get_devdata(master);
+diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c
+index 80d74cce2a010..9858fae816f72 100644
+--- a/drivers/staging/comedi/comedi_fops.c
++++ b/drivers/staging/comedi/comedi_fops.c
+@@ -3090,6 +3090,7 @@ static int compat_insnlist(struct file *file, unsigned long arg)
+ 	mutex_lock(&dev->mutex);
+ 	rc = do_insnlist_ioctl(dev, insns, insnlist32.n_insns, file);
+ 	mutex_unlock(&dev->mutex);
++	kfree(insns);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/staging/greybus/uart.c b/drivers/staging/greybus/uart.c
+index a520f7f213db0..edaa83a693d27 100644
+--- a/drivers/staging/greybus/uart.c
++++ b/drivers/staging/greybus/uart.c
+@@ -778,6 +778,17 @@ out:
+ 	gbphy_runtime_put_autosuspend(gb_tty->gbphy_dev);
+ }
+ 
++static void gb_tty_port_destruct(struct tty_port *port)
++{
++	struct gb_tty *gb_tty = container_of(port, struct gb_tty, port);
++
++	if (gb_tty->minor != GB_NUM_MINORS)
++		release_minor(gb_tty);
++	kfifo_free(&gb_tty->write_fifo);
++	kfree(gb_tty->buffer);
++	kfree(gb_tty);
++}
++
+ static const struct tty_operations gb_ops = {
+ 	.install =		gb_tty_install,
+ 	.open =			gb_tty_open,
+@@ -803,6 +814,7 @@ static const struct tty_port_operations gb_port_ops = {
+ 	.dtr_rts =		gb_tty_dtr_rts,
+ 	.activate =		gb_tty_port_activate,
+ 	.shutdown =		gb_tty_port_shutdown,
++	.destruct =		gb_tty_port_destruct,
+ };
+ 
+ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+@@ -815,17 +827,11 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 	int retval;
+ 	int minor;
+ 
+-	gb_tty = kzalloc(sizeof(*gb_tty), GFP_KERNEL);
+-	if (!gb_tty)
+-		return -ENOMEM;
+-
+ 	connection = gb_connection_create(gbphy_dev->bundle,
+ 					  le16_to_cpu(gbphy_dev->cport_desc->id),
+ 					  gb_uart_request_handler);
+-	if (IS_ERR(connection)) {
+-		retval = PTR_ERR(connection);
+-		goto exit_tty_free;
+-	}
++	if (IS_ERR(connection))
++		return PTR_ERR(connection);
+ 
+ 	max_payload = gb_operation_get_payload_size_max(connection);
+ 	if (max_payload < sizeof(struct gb_uart_send_data_request)) {
+@@ -833,13 +839,23 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 		goto exit_connection_destroy;
+ 	}
+ 
++	gb_tty = kzalloc(sizeof(*gb_tty), GFP_KERNEL);
++	if (!gb_tty) {
++		retval = -ENOMEM;
++		goto exit_connection_destroy;
++	}
++
++	tty_port_init(&gb_tty->port);
++	gb_tty->port.ops = &gb_port_ops;
++	gb_tty->minor = GB_NUM_MINORS;
++
+ 	gb_tty->buffer_payload_max = max_payload -
+ 			sizeof(struct gb_uart_send_data_request);
+ 
+ 	gb_tty->buffer = kzalloc(gb_tty->buffer_payload_max, GFP_KERNEL);
+ 	if (!gb_tty->buffer) {
+ 		retval = -ENOMEM;
+-		goto exit_connection_destroy;
++		goto exit_put_port;
+ 	}
+ 
+ 	INIT_WORK(&gb_tty->tx_work, gb_uart_tx_write_work);
+@@ -847,7 +863,7 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 	retval = kfifo_alloc(&gb_tty->write_fifo, GB_UART_WRITE_FIFO_SIZE,
+ 			     GFP_KERNEL);
+ 	if (retval)
+-		goto exit_buf_free;
++		goto exit_put_port;
+ 
+ 	gb_tty->credits = GB_UART_FIRMWARE_CREDITS;
+ 	init_completion(&gb_tty->credits_complete);
+@@ -861,7 +877,7 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 		} else {
+ 			retval = minor;
+ 		}
+-		goto exit_kfifo_free;
++		goto exit_put_port;
+ 	}
+ 
+ 	gb_tty->minor = minor;
+@@ -870,9 +886,6 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 	init_waitqueue_head(&gb_tty->wioctl);
+ 	mutex_init(&gb_tty->mutex);
+ 
+-	tty_port_init(&gb_tty->port);
+-	gb_tty->port.ops = &gb_port_ops;
+-
+ 	gb_tty->connection = connection;
+ 	gb_tty->gbphy_dev = gbphy_dev;
+ 	gb_connection_set_data(connection, gb_tty);
+@@ -880,7 +893,7 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 
+ 	retval = gb_connection_enable_tx(connection);
+ 	if (retval)
+-		goto exit_release_minor;
++		goto exit_put_port;
+ 
+ 	send_control(gb_tty, gb_tty->ctrlout);
+ 
+@@ -907,16 +920,10 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 
+ exit_connection_disable:
+ 	gb_connection_disable(connection);
+-exit_release_minor:
+-	release_minor(gb_tty);
+-exit_kfifo_free:
+-	kfifo_free(&gb_tty->write_fifo);
+-exit_buf_free:
+-	kfree(gb_tty->buffer);
++exit_put_port:
++	tty_port_put(&gb_tty->port);
+ exit_connection_destroy:
+ 	gb_connection_destroy(connection);
+-exit_tty_free:
+-	kfree(gb_tty);
+ 
+ 	return retval;
+ }
+@@ -947,15 +954,10 @@ static void gb_uart_remove(struct gbphy_device *gbphy_dev)
+ 	gb_connection_disable_rx(connection);
+ 	tty_unregister_device(gb_tty_driver, gb_tty->minor);
+ 
+-	/* FIXME - free transmit / receive buffers */
+-
+ 	gb_connection_disable(connection);
+-	tty_port_destroy(&gb_tty->port);
+ 	gb_connection_destroy(connection);
+-	release_minor(gb_tty);
+-	kfifo_free(&gb_tty->write_fifo);
+-	kfree(gb_tty->buffer);
+-	kfree(gb_tty);
++
++	tty_port_put(&gb_tty->port);
+ }
+ 
+ static int gb_tty_init(void)
+diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
+index f043522851550..56ae882fb7b39 100644
+--- a/drivers/target/target_core_configfs.c
++++ b/drivers/target/target_core_configfs.c
+@@ -1110,20 +1110,24 @@ static ssize_t alua_support_store(struct config_item *item,
+ {
+ 	struct se_dev_attrib *da = to_attrib(item);
+ 	struct se_device *dev = da->da_dev;
+-	bool flag;
++	bool flag, oldflag;
+ 	int ret;
+ 
++	ret = strtobool(page, &flag);
++	if (ret < 0)
++		return ret;
++
++	oldflag = !(dev->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_ALUA);
++	if (flag == oldflag)
++		return count;
++
+ 	if (!(dev->transport->transport_flags_changeable &
+ 	      TRANSPORT_FLAG_PASSTHROUGH_ALUA)) {
+ 		pr_err("dev[%p]: Unable to change SE Device alua_support:"
+ 			" alua_support has fixed value\n", dev);
+-		return -EINVAL;
++		return -ENOSYS;
+ 	}
+ 
+-	ret = strtobool(page, &flag);
+-	if (ret < 0)
+-		return ret;
+-
+ 	if (flag)
+ 		dev->transport_flags &= ~TRANSPORT_FLAG_PASSTHROUGH_ALUA;
+ 	else
+@@ -1145,20 +1149,24 @@ static ssize_t pgr_support_store(struct config_item *item,
+ {
+ 	struct se_dev_attrib *da = to_attrib(item);
+ 	struct se_device *dev = da->da_dev;
+-	bool flag;
++	bool flag, oldflag;
+ 	int ret;
+ 
++	ret = strtobool(page, &flag);
++	if (ret < 0)
++		return ret;
++
++	oldflag = !(dev->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR);
++	if (flag == oldflag)
++		return count;
++
+ 	if (!(dev->transport->transport_flags_changeable &
+ 	      TRANSPORT_FLAG_PASSTHROUGH_PGR)) {
+ 		pr_err("dev[%p]: Unable to change SE Device pgr_support:"
+ 			" pgr_support has fixed value\n", dev);
+-		return -EINVAL;
++		return -ENOSYS;
+ 	}
+ 
+-	ret = strtobool(page, &flag);
+-	if (ret < 0)
+-		return ret;
+-
+ 	if (flag)
+ 		dev->transport_flags &= ~TRANSPORT_FLAG_PASSTHROUGH_PGR;
+ 	else
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+index 74158fa660ddf..f5204f96fe723 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+@@ -185,7 +185,7 @@ static int tcc_offset_update(unsigned int tcc)
+ 	return 0;
+ }
+ 
+-static unsigned int tcc_offset_save;
++static int tcc_offset_save = -1;
+ 
+ static ssize_t tcc_offset_degree_celsius_store(struct device *dev,
+ 				struct device_attribute *attr, const char *buf,
+@@ -709,7 +709,8 @@ static int proc_thermal_resume(struct device *dev)
+ 	proc_dev = dev_get_drvdata(dev);
+ 	proc_thermal_read_ppcc(proc_dev);
+ 
+-	tcc_offset_update(tcc_offset_save);
++	if (tcc_offset_save >= 0)
++		tcc_offset_update(tcc_offset_save);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index e669f83faa3c5..17de8a9b991e9 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -224,15 +224,14 @@ int thermal_build_list_of_policies(char *buf)
+ {
+ 	struct thermal_governor *pos;
+ 	ssize_t count = 0;
+-	ssize_t size = PAGE_SIZE;
+ 
+ 	mutex_lock(&thermal_governor_lock);
+ 
+ 	list_for_each_entry(pos, &thermal_governor_list, governor_list) {
+-		size = PAGE_SIZE - count;
+-		count += scnprintf(buf + count, size, "%s ", pos->name);
++		count += scnprintf(buf + count, PAGE_SIZE - count, "%s ",
++				   pos->name);
+ 	}
+-	count += scnprintf(buf + count, size, "\n");
++	count += scnprintf(buf + count, PAGE_SIZE - count, "\n");
+ 
+ 	mutex_unlock(&thermal_governor_lock);
+ 
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index efe4cf32add2c..537bee8d2258a 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -106,7 +106,7 @@
+ #define UART_OMAP_EFR2_TIMEOUT_BEHAVE	BIT(6)
+ 
+ /* RX FIFO occupancy indicator */
+-#define UART_OMAP_RX_LVL		0x64
++#define UART_OMAP_RX_LVL		0x19
+ 
+ struct omap8250_priv {
+ 	int line;
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index 1e26220c78527..34ff2181afd1a 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -164,7 +164,7 @@ static unsigned int mvebu_uart_tx_empty(struct uart_port *port)
+ 	st = readl(port->membase + UART_STAT);
+ 	spin_unlock_irqrestore(&port->lock, flags);
+ 
+-	return (st & STAT_TX_FIFO_EMP) ? TIOCSER_TEMT : 0;
++	return (st & STAT_TX_EMP) ? TIOCSER_TEMT : 0;
+ }
+ 
+ static unsigned int mvebu_uart_get_mctrl(struct uart_port *port)
+diff --git a/drivers/tty/synclink_gt.c b/drivers/tty/synclink_gt.c
+index afa4cc52e48d7..1a0c7beec1019 100644
+--- a/drivers/tty/synclink_gt.c
++++ b/drivers/tty/synclink_gt.c
+@@ -137,37 +137,14 @@ MODULE_PARM_DESC(maxframe, "Maximum frame size used by device (4096 to 65535)");
+  */
+ static struct tty_driver *serial_driver;
+ 
+-static int  open(struct tty_struct *tty, struct file * filp);
+-static void close(struct tty_struct *tty, struct file * filp);
+-static void hangup(struct tty_struct *tty);
+-static void set_termios(struct tty_struct *tty, struct ktermios *old_termios);
+-
+-static int  write(struct tty_struct *tty, const unsigned char *buf, int count);
+-static int put_char(struct tty_struct *tty, unsigned char ch);
+-static void send_xchar(struct tty_struct *tty, char ch);
+ static void wait_until_sent(struct tty_struct *tty, int timeout);
+-static int  write_room(struct tty_struct *tty);
+-static void flush_chars(struct tty_struct *tty);
+ static void flush_buffer(struct tty_struct *tty);
+-static void tx_hold(struct tty_struct *tty);
+ static void tx_release(struct tty_struct *tty);
+ 
+-static int  ioctl(struct tty_struct *tty, unsigned int cmd, unsigned long arg);
+-static int  chars_in_buffer(struct tty_struct *tty);
+-static void throttle(struct tty_struct * tty);
+-static void unthrottle(struct tty_struct * tty);
+-static int set_break(struct tty_struct *tty, int break_state);
+-
+ /*
+- * generic HDLC support and callbacks
++ * generic HDLC support
+  */
+-#if SYNCLINK_GENERIC_HDLC
+ #define dev_to_port(D) (dev_to_hdlc(D)->priv)
+-static void hdlcdev_tx_done(struct slgt_info *info);
+-static void hdlcdev_rx(struct slgt_info *info, char *buf, int size);
+-static int  hdlcdev_init(struct slgt_info *info);
+-static void hdlcdev_exit(struct slgt_info *info);
+-#endif
+ 
+ 
+ /*
+@@ -186,9 +163,6 @@ struct cond_wait {
+ 	wait_queue_entry_t wait;
+ 	unsigned int data;
+ };
+-static void init_cond_wait(struct cond_wait *w, unsigned int data);
+-static void add_cond_wait(struct cond_wait **head, struct cond_wait *w);
+-static void remove_cond_wait(struct cond_wait **head, struct cond_wait *w);
+ static void flush_cond_wait(struct cond_wait **head);
+ 
+ /*
+@@ -443,12 +417,8 @@ static void shutdown(struct slgt_info *info);
+ static void program_hw(struct slgt_info *info);
+ static void change_params(struct slgt_info *info);
+ 
+-static int  register_test(struct slgt_info *info);
+-static int  irq_test(struct slgt_info *info);
+-static int  loopback_test(struct slgt_info *info);
+ static int  adapter_test(struct slgt_info *info);
+ 
+-static void reset_adapter(struct slgt_info *info);
+ static void reset_port(struct slgt_info *info);
+ static void async_mode(struct slgt_info *info);
+ static void sync_mode(struct slgt_info *info);
+@@ -457,41 +427,23 @@ static void rx_stop(struct slgt_info *info);
+ static void rx_start(struct slgt_info *info);
+ static void reset_rbufs(struct slgt_info *info);
+ static void free_rbufs(struct slgt_info *info, unsigned int first, unsigned int last);
+-static void rdma_reset(struct slgt_info *info);
+ static bool rx_get_frame(struct slgt_info *info);
+ static bool rx_get_buf(struct slgt_info *info);
+ 
+ static void tx_start(struct slgt_info *info);
+ static void tx_stop(struct slgt_info *info);
+ static void tx_set_idle(struct slgt_info *info);
+-static unsigned int free_tbuf_count(struct slgt_info *info);
+ static unsigned int tbuf_bytes(struct slgt_info *info);
+ static void reset_tbufs(struct slgt_info *info);
+ static void tdma_reset(struct slgt_info *info);
+ static bool tx_load(struct slgt_info *info, const char *buf, unsigned int count);
+ 
+-static void get_signals(struct slgt_info *info);
+-static void set_signals(struct slgt_info *info);
+-static void enable_loopback(struct slgt_info *info);
++static void get_gtsignals(struct slgt_info *info);
++static void set_gtsignals(struct slgt_info *info);
+ static void set_rate(struct slgt_info *info, u32 data_rate);
+ 
+-static int  bh_action(struct slgt_info *info);
+-static void bh_handler(struct work_struct *work);
+ static void bh_transmit(struct slgt_info *info);
+-static void isr_serial(struct slgt_info *info);
+-static void isr_rdma(struct slgt_info *info);
+ static void isr_txeom(struct slgt_info *info, unsigned short status);
+-static void isr_tdma(struct slgt_info *info);
+-
+-static int  alloc_dma_bufs(struct slgt_info *info);
+-static void free_dma_bufs(struct slgt_info *info);
+-static int  alloc_desc(struct slgt_info *info);
+-static void free_desc(struct slgt_info *info);
+-static int  alloc_bufs(struct slgt_info *info, struct slgt_desc *bufs, int count);
+-static void free_bufs(struct slgt_info *info, struct slgt_desc *bufs, int count);
+-
+-static int  alloc_tmp_rbuf(struct slgt_info *info);
+-static void free_tmp_rbuf(struct slgt_info *info);
+ 
+ static void tx_timeout(struct timer_list *t);
+ static void rx_timeout(struct timer_list *t);
+@@ -509,10 +461,6 @@ static int  tx_abort(struct slgt_info *info);
+ static int  rx_enable(struct slgt_info *info, int enable);
+ static int  modem_input_wait(struct slgt_info *info,int arg);
+ static int  wait_mgsl_event(struct slgt_info *info, int __user *mask_ptr);
+-static int  tiocmget(struct tty_struct *tty);
+-static int  tiocmset(struct tty_struct *tty,
+-				unsigned int set, unsigned int clear);
+-static int set_break(struct tty_struct *tty, int break_state);
+ static int  get_interface(struct slgt_info *info, int __user *if_mode);
+ static int  set_interface(struct slgt_info *info, int if_mode);
+ static int  set_gpio(struct slgt_info *info, struct gpio_desc __user *gpio);
+@@ -526,9 +474,6 @@ static int  set_xctrl(struct slgt_info *info, int if_mode);
+ /*
+  * driver functions
+  */
+-static void add_device(struct slgt_info *info);
+-static void device_init(int adapter_num, struct pci_dev *pdev);
+-static int  claim_resources(struct slgt_info *info);
+ static void release_resources(struct slgt_info *info);
+ 
+ /*
+@@ -776,7 +721,7 @@ static void set_termios(struct tty_struct *tty, struct ktermios *old_termios)
+ 	if ((old_termios->c_cflag & CBAUD) && !C_BAUD(tty)) {
+ 		info->signals &= ~(SerialSignal_RTS | SerialSignal_DTR);
+ 		spin_lock_irqsave(&info->lock,flags);
+-		set_signals(info);
++		set_gtsignals(info);
+ 		spin_unlock_irqrestore(&info->lock,flags);
+ 	}
+ 
+@@ -786,7 +731,7 @@ static void set_termios(struct tty_struct *tty, struct ktermios *old_termios)
+ 		if (!C_CRTSCTS(tty) || !tty_throttled(tty))
+ 			info->signals |= SerialSignal_RTS;
+ 		spin_lock_irqsave(&info->lock,flags);
+-	 	set_signals(info);
++	 	set_gtsignals(info);
+ 		spin_unlock_irqrestore(&info->lock,flags);
+ 	}
+ 
+@@ -1237,7 +1182,7 @@ static inline void line_info(struct seq_file *m, struct slgt_info *info)
+ 
+ 	/* output current serial signal states */
+ 	spin_lock_irqsave(&info->lock,flags);
+-	get_signals(info);
++	get_gtsignals(info);
+ 	spin_unlock_irqrestore(&info->lock,flags);
+ 
+ 	stat_buf[0] = 0;
+@@ -1337,7 +1282,7 @@ static void throttle(struct tty_struct * tty)
+ 	if (C_CRTSCTS(tty)) {
+ 		spin_lock_irqsave(&info->lock,flags);
+ 		info->signals &= ~SerialSignal_RTS;
+-		set_signals(info);
++		set_gtsignals(info);
+ 		spin_unlock_irqrestore(&info->lock,flags);
+ 	}
+ }
+@@ -1362,7 +1307,7 @@ static void unthrottle(struct tty_struct * tty)
+ 	if (C_CRTSCTS(tty)) {
+ 		spin_lock_irqsave(&info->lock,flags);
+ 		info->signals |= SerialSignal_RTS;
+-		set_signals(info);
++		set_gtsignals(info);
+ 		spin_unlock_irqrestore(&info->lock,flags);
+ 	}
+ }
+@@ -1533,7 +1478,7 @@ static int hdlcdev_open(struct net_device *dev)
+ 
+ 	/* inform generic HDLC layer of current DCD status */
+ 	spin_lock_irqsave(&info->lock, flags);
+-	get_signals(info);
++	get_gtsignals(info);
+ 	spin_unlock_irqrestore(&info->lock, flags);
+ 	if (info->signals & SerialSignal_DCD)
+ 		netif_carrier_on(dev);
+@@ -2287,7 +2232,7 @@ static void isr_txeom(struct slgt_info *info, unsigned short status)
+ 		if (info->params.mode != MGSL_MODE_ASYNC && info->drop_rts_on_tx_done) {
+ 			info->signals &= ~SerialSignal_RTS;
+ 			info->drop_rts_on_tx_done = false;
+-			set_signals(info);
++			set_gtsignals(info);
+ 		}
+ 
+ #if SYNCLINK_GENERIC_HDLC
+@@ -2452,7 +2397,7 @@ static void shutdown(struct slgt_info *info)
+ 
+  	if (!info->port.tty || info->port.tty->termios.c_cflag & HUPCL) {
+ 		info->signals &= ~(SerialSignal_RTS | SerialSignal_DTR);
+-		set_signals(info);
++		set_gtsignals(info);
+ 	}
+ 
+ 	flush_cond_wait(&info->gpio_wait_q);
+@@ -2480,7 +2425,7 @@ static void program_hw(struct slgt_info *info)
+ 	else
+ 		async_mode(info);
+ 
+-	set_signals(info);
++	set_gtsignals(info);
+ 
+ 	info->dcd_chkcount = 0;
+ 	info->cts_chkcount = 0;
+@@ -2488,7 +2433,7 @@ static void program_hw(struct slgt_info *info)
+ 	info->dsr_chkcount = 0;
+ 
+ 	slgt_irq_on(info, IRQ_DCD | IRQ_CTS | IRQ_DSR | IRQ_RI);
+-	get_signals(info);
++	get_gtsignals(info);
+ 
+ 	if (info->netcount ||
+ 	    (info->port.tty && info->port.tty->termios.c_cflag & CREAD))
+@@ -2732,7 +2677,7 @@ static int wait_mgsl_event(struct slgt_info *info, int __user *mask_ptr)
+ 	spin_lock_irqsave(&info->lock,flags);
+ 
+ 	/* return immediately if state matches requested events */
+-	get_signals(info);
++	get_gtsignals(info);
+ 	s = info->signals;
+ 
+ 	events = mask &
+@@ -3150,7 +3095,7 @@ static int tiocmget(struct tty_struct *tty)
+  	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&info->lock,flags);
+- 	get_signals(info);
++ 	get_gtsignals(info);
+ 	spin_unlock_irqrestore(&info->lock,flags);
+ 
+ 	result = ((info->signals & SerialSignal_RTS) ? TIOCM_RTS:0) +
+@@ -3189,7 +3134,7 @@ static int tiocmset(struct tty_struct *tty,
+ 		info->signals &= ~SerialSignal_DTR;
+ 
+ 	spin_lock_irqsave(&info->lock,flags);
+-	set_signals(info);
++	set_gtsignals(info);
+ 	spin_unlock_irqrestore(&info->lock,flags);
+ 	return 0;
+ }
+@@ -3200,7 +3145,7 @@ static int carrier_raised(struct tty_port *port)
+ 	struct slgt_info *info = container_of(port, struct slgt_info, port);
+ 
+ 	spin_lock_irqsave(&info->lock,flags);
+-	get_signals(info);
++	get_gtsignals(info);
+ 	spin_unlock_irqrestore(&info->lock,flags);
+ 	return (info->signals & SerialSignal_DCD) ? 1 : 0;
+ }
+@@ -3215,7 +3160,7 @@ static void dtr_rts(struct tty_port *port, int on)
+ 		info->signals |= SerialSignal_RTS | SerialSignal_DTR;
+ 	else
+ 		info->signals &= ~(SerialSignal_RTS | SerialSignal_DTR);
+-	set_signals(info);
++	set_gtsignals(info);
+ 	spin_unlock_irqrestore(&info->lock,flags);
+ }
+ 
+@@ -4018,10 +3963,10 @@ static void tx_start(struct slgt_info *info)
+ 
+ 		if (info->params.mode != MGSL_MODE_ASYNC) {
+ 			if (info->params.flags & HDLC_FLAG_AUTO_RTS) {
+-				get_signals(info);
++				get_gtsignals(info);
+ 				if (!(info->signals & SerialSignal_RTS)) {
+ 					info->signals |= SerialSignal_RTS;
+-					set_signals(info);
++					set_gtsignals(info);
+ 					info->drop_rts_on_tx_done = true;
+ 				}
+ 			}
+@@ -4075,7 +4020,7 @@ static void reset_port(struct slgt_info *info)
+ 	rx_stop(info);
+ 
+ 	info->signals &= ~(SerialSignal_RTS | SerialSignal_DTR);
+-	set_signals(info);
++	set_gtsignals(info);
+ 
+ 	slgt_irq_off(info, IRQ_ALL | IRQ_MASTER);
+ }
+@@ -4497,7 +4442,7 @@ static void tx_set_idle(struct slgt_info *info)
+ /*
+  * get state of V24 status (input) signals
+  */
+-static void get_signals(struct slgt_info *info)
++static void get_gtsignals(struct slgt_info *info)
+ {
+ 	unsigned short status = rd_reg16(info, SSR);
+ 
+@@ -4559,7 +4504,7 @@ static void msc_set_vcr(struct slgt_info *info)
+ /*
+  * set state of V24 control (output) signals
+  */
+-static void set_signals(struct slgt_info *info)
++static void set_gtsignals(struct slgt_info *info)
+ {
+ 	unsigned char val = rd_reg8(info, VCR);
+ 	if (info->signals & SerialSignal_DTR)
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index df5b2d1e214f1..7748b1335558e 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -726,7 +726,8 @@ static void acm_port_destruct(struct tty_port *port)
+ {
+ 	struct acm *acm = container_of(port, struct acm, port);
+ 
+-	acm_release_minor(acm);
++	if (acm->minor != ACM_MINOR_INVALID)
++		acm_release_minor(acm);
+ 	usb_put_intf(acm->control);
+ 	kfree(acm->country_codes);
+ 	kfree(acm);
+@@ -1343,8 +1344,10 @@ made_compressed_probe:
+ 	usb_get_intf(acm->control); /* undone in destruct() */
+ 
+ 	minor = acm_alloc_minor(acm);
+-	if (minor < 0)
++	if (minor < 0) {
++		acm->minor = ACM_MINOR_INVALID;
+ 		goto alloc_fail1;
++	}
+ 
+ 	acm->minor = minor;
+ 	acm->dev = usb_dev;
+diff --git a/drivers/usb/class/cdc-acm.h b/drivers/usb/class/cdc-acm.h
+index 8aef5eb769a0d..3aa7f0a3ad71e 100644
+--- a/drivers/usb/class/cdc-acm.h
++++ b/drivers/usb/class/cdc-acm.h
+@@ -22,6 +22,8 @@
+ #define ACM_TTY_MAJOR		166
+ #define ACM_TTY_MINORS		256
+ 
++#define ACM_MINOR_INVALID	ACM_TTY_MINORS
++
+ /*
+  * Requests.
+  */
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 99908d8d2dd36..4bbf3316a9a53 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -2640,6 +2640,7 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ {
+ 	int retval;
+ 	struct usb_device *rhdev;
++	struct usb_hcd *shared_hcd;
+ 
+ 	if (!hcd->skip_phy_initialization && usb_hcd_is_primary_hcd(hcd)) {
+ 		hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev);
+@@ -2796,13 +2797,26 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ 		goto err_hcd_driver_start;
+ 	}
+ 
++	/* starting here, usbcore will pay attention to the shared HCD roothub */
++	shared_hcd = hcd->shared_hcd;
++	if (!usb_hcd_is_primary_hcd(hcd) && shared_hcd && HCD_DEFER_RH_REGISTER(shared_hcd)) {
++		retval = register_root_hub(shared_hcd);
++		if (retval != 0)
++			goto err_register_root_hub;
++
++		if (shared_hcd->uses_new_polling && HCD_POLL_RH(shared_hcd))
++			usb_hcd_poll_rh_status(shared_hcd);
++	}
++
+ 	/* starting here, usbcore will pay attention to this root hub */
+-	retval = register_root_hub(hcd);
+-	if (retval != 0)
+-		goto err_register_root_hub;
++	if (!HCD_DEFER_RH_REGISTER(hcd)) {
++		retval = register_root_hub(hcd);
++		if (retval != 0)
++			goto err_register_root_hub;
+ 
+-	if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
+-		usb_hcd_poll_rh_status(hcd);
++		if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
++			usb_hcd_poll_rh_status(hcd);
++	}
+ 
+ 	return retval;
+ 
+@@ -2845,6 +2859,7 @@ EXPORT_SYMBOL_GPL(usb_add_hcd);
+ void usb_remove_hcd(struct usb_hcd *hcd)
+ {
+ 	struct usb_device *rhdev = hcd->self.root_hub;
++	bool rh_registered;
+ 
+ 	dev_info(hcd->self.controller, "remove, state %x\n", hcd->state);
+ 
+@@ -2855,6 +2870,7 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ 
+ 	dev_dbg(hcd->self.controller, "roothub graceful disconnect\n");
+ 	spin_lock_irq (&hcd_root_hub_lock);
++	rh_registered = hcd->rh_registered;
+ 	hcd->rh_registered = 0;
+ 	spin_unlock_irq (&hcd_root_hub_lock);
+ 
+@@ -2864,7 +2880,8 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ 	cancel_work_sync(&hcd->died_work);
+ 
+ 	mutex_lock(&usb_bus_idr_lock);
+-	usb_disconnect(&rhdev);		/* Sets rhdev to NULL */
++	if (rh_registered)
++		usb_disconnect(&rhdev);		/* Sets rhdev to NULL */
+ 	mutex_unlock(&usb_bus_idr_lock);
+ 
+ 	/*
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index b06286f132c6f..7207a36c6e26b 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -115,10 +115,16 @@ static inline bool using_desc_dma(struct dwc2_hsotg *hsotg)
+  */
+ static inline void dwc2_gadget_incr_frame_num(struct dwc2_hsotg_ep *hs_ep)
+ {
++	struct dwc2_hsotg *hsotg = hs_ep->parent;
++	u16 limit = DSTS_SOFFN_LIMIT;
++
++	if (hsotg->gadget.speed != USB_SPEED_HIGH)
++		limit >>= 3;
++
+ 	hs_ep->target_frame += hs_ep->interval;
+-	if (hs_ep->target_frame > DSTS_SOFFN_LIMIT) {
++	if (hs_ep->target_frame > limit) {
+ 		hs_ep->frame_overrun = true;
+-		hs_ep->target_frame &= DSTS_SOFFN_LIMIT;
++		hs_ep->target_frame &= limit;
+ 	} else {
+ 		hs_ep->frame_overrun = false;
+ 	}
+@@ -136,10 +142,16 @@ static inline void dwc2_gadget_incr_frame_num(struct dwc2_hsotg_ep *hs_ep)
+  */
+ static inline void dwc2_gadget_dec_frame_num_by_one(struct dwc2_hsotg_ep *hs_ep)
+ {
++	struct dwc2_hsotg *hsotg = hs_ep->parent;
++	u16 limit = DSTS_SOFFN_LIMIT;
++
++	if (hsotg->gadget.speed != USB_SPEED_HIGH)
++		limit >>= 3;
++
+ 	if (hs_ep->target_frame)
+ 		hs_ep->target_frame -= 1;
+ 	else
+-		hs_ep->target_frame = DSTS_SOFFN_LIMIT;
++		hs_ep->target_frame = limit;
+ }
+ 
+ /**
+@@ -1018,6 +1030,12 @@ static void dwc2_gadget_start_isoc_ddma(struct dwc2_hsotg_ep *hs_ep)
+ 	dwc2_writel(hsotg, ctrl, depctl);
+ }
+ 
++static bool dwc2_gadget_target_frame_elapsed(struct dwc2_hsotg_ep *hs_ep);
++static void dwc2_hsotg_complete_request(struct dwc2_hsotg *hsotg,
++					struct dwc2_hsotg_ep *hs_ep,
++				       struct dwc2_hsotg_req *hs_req,
++				       int result);
++
+ /**
+  * dwc2_hsotg_start_req - start a USB request from an endpoint's queue
+  * @hsotg: The controller state.
+@@ -1170,14 +1188,19 @@ static void dwc2_hsotg_start_req(struct dwc2_hsotg *hsotg,
+ 		}
+ 	}
+ 
+-	if (hs_ep->isochronous && hs_ep->interval == 1) {
+-		hs_ep->target_frame = dwc2_hsotg_read_frameno(hsotg);
+-		dwc2_gadget_incr_frame_num(hs_ep);
+-
+-		if (hs_ep->target_frame & 0x1)
+-			ctrl |= DXEPCTL_SETODDFR;
+-		else
+-			ctrl |= DXEPCTL_SETEVENFR;
++	if (hs_ep->isochronous) {
++		if (!dwc2_gadget_target_frame_elapsed(hs_ep)) {
++			if (hs_ep->interval == 1) {
++				if (hs_ep->target_frame & 0x1)
++					ctrl |= DXEPCTL_SETODDFR;
++				else
++					ctrl |= DXEPCTL_SETEVENFR;
++			}
++			ctrl |= DXEPCTL_CNAK;
++		} else {
++			dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, -ENODATA);
++			return;
++		}
+ 	}
+ 
+ 	ctrl |= DXEPCTL_EPENA;	/* ensure ep enabled */
+@@ -1325,12 +1348,16 @@ static bool dwc2_gadget_target_frame_elapsed(struct dwc2_hsotg_ep *hs_ep)
+ 	u32 target_frame = hs_ep->target_frame;
+ 	u32 current_frame = hsotg->frame_number;
+ 	bool frame_overrun = hs_ep->frame_overrun;
++	u16 limit = DSTS_SOFFN_LIMIT;
++
++	if (hsotg->gadget.speed != USB_SPEED_HIGH)
++		limit >>= 3;
+ 
+ 	if (!frame_overrun && current_frame >= target_frame)
+ 		return true;
+ 
+ 	if (frame_overrun && current_frame >= target_frame &&
+-	    ((current_frame - target_frame) < DSTS_SOFFN_LIMIT / 2))
++	    ((current_frame - target_frame) < limit / 2))
+ 		return true;
+ 
+ 	return false;
+@@ -1713,11 +1740,9 @@ static struct dwc2_hsotg_req *get_ep_head(struct dwc2_hsotg_ep *hs_ep)
+  */
+ static void dwc2_gadget_start_next_request(struct dwc2_hsotg_ep *hs_ep)
+ {
+-	u32 mask;
+ 	struct dwc2_hsotg *hsotg = hs_ep->parent;
+ 	int dir_in = hs_ep->dir_in;
+ 	struct dwc2_hsotg_req *hs_req;
+-	u32 epmsk_reg = dir_in ? DIEPMSK : DOEPMSK;
+ 
+ 	if (!list_empty(&hs_ep->queue)) {
+ 		hs_req = get_ep_head(hs_ep);
+@@ -1733,9 +1758,6 @@ static void dwc2_gadget_start_next_request(struct dwc2_hsotg_ep *hs_ep)
+ 	} else {
+ 		dev_dbg(hsotg->dev, "%s: No more ISOC-OUT requests\n",
+ 			__func__);
+-		mask = dwc2_readl(hsotg, epmsk_reg);
+-		mask |= DOEPMSK_OUTTKNEPDISMSK;
+-		dwc2_writel(hsotg, mask, epmsk_reg);
+ 	}
+ }
+ 
+@@ -2305,19 +2327,6 @@ static void dwc2_hsotg_ep0_zlp(struct dwc2_hsotg *hsotg, bool dir_in)
+ 	dwc2_hsotg_program_zlp(hsotg, hsotg->eps_out[0]);
+ }
+ 
+-static void dwc2_hsotg_change_ep_iso_parity(struct dwc2_hsotg *hsotg,
+-					    u32 epctl_reg)
+-{
+-	u32 ctrl;
+-
+-	ctrl = dwc2_readl(hsotg, epctl_reg);
+-	if (ctrl & DXEPCTL_EOFRNUM)
+-		ctrl |= DXEPCTL_SETEVENFR;
+-	else
+-		ctrl |= DXEPCTL_SETODDFR;
+-	dwc2_writel(hsotg, ctrl, epctl_reg);
+-}
+-
+ /*
+  * dwc2_gadget_get_xfersize_ddma - get transferred bytes amount from desc
+  * @hs_ep - The endpoint on which transfer went
+@@ -2438,20 +2447,11 @@ static void dwc2_hsotg_handle_outdone(struct dwc2_hsotg *hsotg, int epnum)
+ 			dwc2_hsotg_ep0_zlp(hsotg, true);
+ 	}
+ 
+-	/*
+-	 * Slave mode OUT transfers do not go through XferComplete so
+-	 * adjust the ISOC parity here.
+-	 */
+-	if (!using_dma(hsotg)) {
+-		if (hs_ep->isochronous && hs_ep->interval == 1)
+-			dwc2_hsotg_change_ep_iso_parity(hsotg, DOEPCTL(epnum));
+-		else if (hs_ep->isochronous && hs_ep->interval > 1)
+-			dwc2_gadget_incr_frame_num(hs_ep);
+-	}
+-
+ 	/* Set actual frame number for completed transfers */
+-	if (!using_desc_dma(hsotg) && hs_ep->isochronous)
+-		req->frame_number = hsotg->frame_number;
++	if (!using_desc_dma(hsotg) && hs_ep->isochronous) {
++		req->frame_number = hs_ep->target_frame;
++		dwc2_gadget_incr_frame_num(hs_ep);
++	}
+ 
+ 	dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, result);
+ }
+@@ -2765,6 +2765,12 @@ static void dwc2_hsotg_complete_in(struct dwc2_hsotg *hsotg,
+ 		return;
+ 	}
+ 
++	/* Set actual frame number for completed transfers */
++	if (!using_desc_dma(hsotg) && hs_ep->isochronous) {
++		hs_req->req.frame_number = hs_ep->target_frame;
++		dwc2_gadget_incr_frame_num(hs_ep);
++	}
++
+ 	dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, 0);
+ }
+ 
+@@ -2825,23 +2831,18 @@ static void dwc2_gadget_handle_ep_disabled(struct dwc2_hsotg_ep *hs_ep)
+ 
+ 		dwc2_hsotg_txfifo_flush(hsotg, hs_ep->fifo_index);
+ 
+-		if (hs_ep->isochronous) {
+-			dwc2_hsotg_complete_in(hsotg, hs_ep);
+-			return;
+-		}
+-
+ 		if ((epctl & DXEPCTL_STALL) && (epctl & DXEPCTL_EPTYPE_BULK)) {
+ 			int dctl = dwc2_readl(hsotg, DCTL);
+ 
+ 			dctl |= DCTL_CGNPINNAK;
+ 			dwc2_writel(hsotg, dctl, DCTL);
+ 		}
+-		return;
+-	}
++	} else {
+ 
+-	if (dctl & DCTL_GOUTNAKSTS) {
+-		dctl |= DCTL_CGOUTNAK;
+-		dwc2_writel(hsotg, dctl, DCTL);
++		if (dctl & DCTL_GOUTNAKSTS) {
++			dctl |= DCTL_CGOUTNAK;
++			dwc2_writel(hsotg, dctl, DCTL);
++		}
+ 	}
+ 
+ 	if (!hs_ep->isochronous)
+@@ -2862,8 +2863,6 @@ static void dwc2_gadget_handle_ep_disabled(struct dwc2_hsotg_ep *hs_ep)
+ 		/* Update current frame number value. */
+ 		hsotg->frame_number = dwc2_hsotg_read_frameno(hsotg);
+ 	} while (dwc2_gadget_target_frame_elapsed(hs_ep));
+-
+-	dwc2_gadget_start_next_request(hs_ep);
+ }
+ 
+ /**
+@@ -2880,8 +2879,8 @@ static void dwc2_gadget_handle_ep_disabled(struct dwc2_hsotg_ep *hs_ep)
+ static void dwc2_gadget_handle_out_token_ep_disabled(struct dwc2_hsotg_ep *ep)
+ {
+ 	struct dwc2_hsotg *hsotg = ep->parent;
++	struct dwc2_hsotg_req *hs_req;
+ 	int dir_in = ep->dir_in;
+-	u32 doepmsk;
+ 
+ 	if (dir_in || !ep->isochronous)
+ 		return;
+@@ -2895,28 +2894,39 @@ static void dwc2_gadget_handle_out_token_ep_disabled(struct dwc2_hsotg_ep *ep)
+ 		return;
+ 	}
+ 
+-	if (ep->interval > 1 &&
+-	    ep->target_frame == TARGET_FRAME_INITIAL) {
++	if (ep->target_frame == TARGET_FRAME_INITIAL) {
+ 		u32 ctrl;
+ 
+ 		ep->target_frame = hsotg->frame_number;
+-		dwc2_gadget_incr_frame_num(ep);
++		if (ep->interval > 1) {
++			ctrl = dwc2_readl(hsotg, DOEPCTL(ep->index));
++			if (ep->target_frame & 0x1)
++				ctrl |= DXEPCTL_SETODDFR;
++			else
++				ctrl |= DXEPCTL_SETEVENFR;
+ 
+-		ctrl = dwc2_readl(hsotg, DOEPCTL(ep->index));
+-		if (ep->target_frame & 0x1)
+-			ctrl |= DXEPCTL_SETODDFR;
+-		else
+-			ctrl |= DXEPCTL_SETEVENFR;
++			dwc2_writel(hsotg, ctrl, DOEPCTL(ep->index));
++		}
++	}
++
++	while (dwc2_gadget_target_frame_elapsed(ep)) {
++		hs_req = get_ep_head(ep);
++		if (hs_req)
++			dwc2_hsotg_complete_request(hsotg, ep, hs_req, -ENODATA);
+ 
+-		dwc2_writel(hsotg, ctrl, DOEPCTL(ep->index));
++		dwc2_gadget_incr_frame_num(ep);
++		/* Update current frame number value. */
++		hsotg->frame_number = dwc2_hsotg_read_frameno(hsotg);
+ 	}
+ 
+-	dwc2_gadget_start_next_request(ep);
+-	doepmsk = dwc2_readl(hsotg, DOEPMSK);
+-	doepmsk &= ~DOEPMSK_OUTTKNEPDISMSK;
+-	dwc2_writel(hsotg, doepmsk, DOEPMSK);
++	if (!ep->req)
++		dwc2_gadget_start_next_request(ep);
++
+ }
+ 
++static void dwc2_hsotg_ep_stop_xfr(struct dwc2_hsotg *hsotg,
++				   struct dwc2_hsotg_ep *hs_ep);
++
+ /**
+  * dwc2_gadget_handle_nak - handle NAK interrupt
+  * @hs_ep: The endpoint on which interrupt is asserted.
+@@ -2934,7 +2944,9 @@ static void dwc2_gadget_handle_out_token_ep_disabled(struct dwc2_hsotg_ep *ep)
+ static void dwc2_gadget_handle_nak(struct dwc2_hsotg_ep *hs_ep)
+ {
+ 	struct dwc2_hsotg *hsotg = hs_ep->parent;
++	struct dwc2_hsotg_req *hs_req;
+ 	int dir_in = hs_ep->dir_in;
++	u32 ctrl;
+ 
+ 	if (!dir_in || !hs_ep->isochronous)
+ 		return;
+@@ -2976,13 +2988,29 @@ static void dwc2_gadget_handle_nak(struct dwc2_hsotg_ep *hs_ep)
+ 
+ 			dwc2_writel(hsotg, ctrl, DIEPCTL(hs_ep->index));
+ 		}
+-
+-		dwc2_hsotg_complete_request(hsotg, hs_ep,
+-					    get_ep_head(hs_ep), 0);
+ 	}
+ 
+-	if (!using_desc_dma(hsotg))
++	if (using_desc_dma(hsotg))
++		return;
++
++	ctrl = dwc2_readl(hsotg, DIEPCTL(hs_ep->index));
++	if (ctrl & DXEPCTL_EPENA)
++		dwc2_hsotg_ep_stop_xfr(hsotg, hs_ep);
++	else
++		dwc2_hsotg_txfifo_flush(hsotg, hs_ep->fifo_index);
++
++	while (dwc2_gadget_target_frame_elapsed(hs_ep)) {
++		hs_req = get_ep_head(hs_ep);
++		if (hs_req)
++			dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, -ENODATA);
++
+ 		dwc2_gadget_incr_frame_num(hs_ep);
++		/* Update current frame number value. */
++		hsotg->frame_number = dwc2_hsotg_read_frameno(hsotg);
++	}
++
++	if (!hs_ep->req)
++		dwc2_gadget_start_next_request(hs_ep);
+ }
+ 
+ /**
+@@ -3038,21 +3066,15 @@ static void dwc2_hsotg_epint(struct dwc2_hsotg *hsotg, unsigned int idx,
+ 
+ 		/* In DDMA handle isochronous requests separately */
+ 		if (using_desc_dma(hsotg) && hs_ep->isochronous) {
+-			/* XferCompl set along with BNA */
+-			if (!(ints & DXEPINT_BNAINTR))
+-				dwc2_gadget_complete_isoc_request_ddma(hs_ep);
++			dwc2_gadget_complete_isoc_request_ddma(hs_ep);
+ 		} else if (dir_in) {
+ 			/*
+ 			 * We get OutDone from the FIFO, so we only
+ 			 * need to look at completing IN requests here
+ 			 * if operating slave mode
+ 			 */
+-			if (hs_ep->isochronous && hs_ep->interval > 1)
+-				dwc2_gadget_incr_frame_num(hs_ep);
+-
+-			dwc2_hsotg_complete_in(hsotg, hs_ep);
+-			if (ints & DXEPINT_NAKINTRPT)
+-				ints &= ~DXEPINT_NAKINTRPT;
++			if (!hs_ep->isochronous || !(ints & DXEPINT_NAKINTRPT))
++				dwc2_hsotg_complete_in(hsotg, hs_ep);
+ 
+ 			if (idx == 0 && !hs_ep->req)
+ 				dwc2_hsotg_enqueue_setup(hsotg);
+@@ -3061,10 +3083,8 @@ static void dwc2_hsotg_epint(struct dwc2_hsotg *hsotg, unsigned int idx,
+ 			 * We're using DMA, we need to fire an OutDone here
+ 			 * as we ignore the RXFIFO.
+ 			 */
+-			if (hs_ep->isochronous && hs_ep->interval > 1)
+-				dwc2_gadget_incr_frame_num(hs_ep);
+-
+-			dwc2_hsotg_handle_outdone(hsotg, idx);
++			if (!hs_ep->isochronous || !(ints & DXEPINT_OUTTKNEPDIS))
++				dwc2_hsotg_handle_outdone(hsotg, idx);
+ 		}
+ 	}
+ 
+@@ -4083,6 +4103,7 @@ static int dwc2_hsotg_ep_enable(struct usb_ep *ep,
+ 			mask |= DIEPMSK_NAKMSK;
+ 			dwc2_writel(hsotg, mask, DIEPMSK);
+ 		} else {
++			epctrl |= DXEPCTL_SNAK;
+ 			mask = dwc2_readl(hsotg, DOEPMSK);
+ 			mask |= DOEPMSK_OUTTKNEPDISMSK;
+ 			dwc2_writel(hsotg, mask, DOEPMSK);
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index bfb72902f3a68..1580d51aea4f7 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -264,19 +264,6 @@ static int dwc3_core_soft_reset(struct dwc3 *dwc)
+ {
+ 	u32		reg;
+ 	int		retries = 1000;
+-	int		ret;
+-
+-	usb_phy_init(dwc->usb2_phy);
+-	usb_phy_init(dwc->usb3_phy);
+-	ret = phy_init(dwc->usb2_generic_phy);
+-	if (ret < 0)
+-		return ret;
+-
+-	ret = phy_init(dwc->usb3_generic_phy);
+-	if (ret < 0) {
+-		phy_exit(dwc->usb2_generic_phy);
+-		return ret;
+-	}
+ 
+ 	/*
+ 	 * We're resetting only the device side because, if we're in host mode,
+@@ -310,9 +297,6 @@ static int dwc3_core_soft_reset(struct dwc3 *dwc)
+ 			udelay(1);
+ 	} while (--retries);
+ 
+-	phy_exit(dwc->usb3_generic_phy);
+-	phy_exit(dwc->usb2_generic_phy);
+-
+ 	return -ETIMEDOUT;
+ 
+ done:
+@@ -979,9 +963,21 @@ static int dwc3_core_init(struct dwc3 *dwc)
+ 		dwc->phys_ready = true;
+ 	}
+ 
++	usb_phy_init(dwc->usb2_phy);
++	usb_phy_init(dwc->usb3_phy);
++	ret = phy_init(dwc->usb2_generic_phy);
++	if (ret < 0)
++		goto err0a;
++
++	ret = phy_init(dwc->usb3_generic_phy);
++	if (ret < 0) {
++		phy_exit(dwc->usb2_generic_phy);
++		goto err0a;
++	}
++
+ 	ret = dwc3_core_soft_reset(dwc);
+ 	if (ret)
+-		goto err0a;
++		goto err1;
+ 
+ 	if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD &&
+ 	    !DWC3_VER_IS_WITHIN(DWC3, ANY, 194A)) {
+diff --git a/drivers/usb/gadget/udc/r8a66597-udc.c b/drivers/usb/gadget/udc/r8a66597-udc.c
+index 65cae48834545..38e4d6b505a05 100644
+--- a/drivers/usb/gadget/udc/r8a66597-udc.c
++++ b/drivers/usb/gadget/udc/r8a66597-udc.c
+@@ -1250,7 +1250,7 @@ static void set_feature(struct r8a66597 *r8a66597, struct usb_ctrlrequest *ctrl)
+ 			do {
+ 				tmp = r8a66597_read(r8a66597, INTSTS0) & CTSQ;
+ 				udelay(1);
+-			} while (tmp != CS_IDST || timeout-- > 0);
++			} while (tmp != CS_IDST && timeout-- > 0);
+ 
+ 			if (tmp == CS_IDST)
+ 				r8a66597_bset(r8a66597,
+diff --git a/drivers/usb/host/bcma-hcd.c b/drivers/usb/host/bcma-hcd.c
+index 337b425dd4b04..2df52f75f6b3c 100644
+--- a/drivers/usb/host/bcma-hcd.c
++++ b/drivers/usb/host/bcma-hcd.c
+@@ -406,12 +406,9 @@ static int bcma_hcd_probe(struct bcma_device *core)
+ 		return -ENOMEM;
+ 	usb_dev->core = core;
+ 
+-	if (core->dev.of_node) {
++	if (core->dev.of_node)
+ 		usb_dev->gpio_desc = devm_gpiod_get(&core->dev, "vcc",
+ 						    GPIOD_OUT_HIGH);
+-		if (IS_ERR(usb_dev->gpio_desc))
+-			return PTR_ERR(usb_dev->gpio_desc);
+-	}
+ 
+ 	switch (core->id.id) {
+ 	case BCMA_CORE_USB20_HOST:
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index c51391b45207e..6389dc99bc9a4 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -693,6 +693,7 @@ int xhci_run(struct usb_hcd *hcd)
+ 		if (ret)
+ 			xhci_free_command(xhci, command);
+ 	}
++	set_bit(HCD_FLAG_DEFER_RH_REGISTER, &hcd->flags);
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+ 			"Finished xhci_run for USB2 roothub");
+ 
+diff --git a/drivers/usb/musb/tusb6010.c b/drivers/usb/musb/tusb6010.c
+index c26683a2702b6..0c2afed4131bc 100644
+--- a/drivers/usb/musb/tusb6010.c
++++ b/drivers/usb/musb/tusb6010.c
+@@ -190,6 +190,7 @@ tusb_fifo_write_unaligned(void __iomem *fifo, const u8 *buf, u16 len)
+ 	}
+ 	if (len > 0) {
+ 		/* Write the rest 1 - 3 bytes to FIFO */
++		val = 0;
+ 		memcpy(&val, buf, len);
+ 		musb_writel(fifo, 0, val);
+ 	}
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 329fc25f78a44..6d858bdaf33ce 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -237,6 +237,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x1FB9, 0x0602) }, /* Lake Shore Model 648 Magnet Power Supply */
+ 	{ USB_DEVICE(0x1FB9, 0x0700) }, /* Lake Shore Model 737 VSM Controller */
+ 	{ USB_DEVICE(0x1FB9, 0x0701) }, /* Lake Shore Model 776 Hall Matrix */
++	{ USB_DEVICE(0x2184, 0x0030) }, /* GW Instek GDM-834x Digital Multimeter */
+ 	{ USB_DEVICE(0x2626, 0xEA60) }, /* Aruba Networks 7xxx USB Serial Console */
+ 	{ USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */
+ 	{ USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */
+@@ -260,6 +261,7 @@ struct cp210x_serial_private {
+ 	speed_t			min_speed;
+ 	speed_t			max_speed;
+ 	bool			use_actual_rate;
++	bool			no_event_mode;
+ };
+ 
+ enum cp210x_event_state {
+@@ -1331,12 +1333,16 @@ static void cp210x_change_speed(struct tty_struct *tty,
+ 
+ static void cp210x_enable_event_mode(struct usb_serial_port *port)
+ {
++	struct cp210x_serial_private *priv = usb_get_serial_data(port->serial);
+ 	struct cp210x_port_private *port_priv = usb_get_serial_port_data(port);
+ 	int ret;
+ 
+ 	if (port_priv->event_mode)
+ 		return;
+ 
++	if (priv->no_event_mode)
++		return;
++
+ 	port_priv->event_state = ES_DATA;
+ 	port_priv->event_mode = true;
+ 
+@@ -2086,6 +2092,46 @@ static void cp210x_init_max_speed(struct usb_serial *serial)
+ 	priv->use_actual_rate = use_actual_rate;
+ }
+ 
++static void cp2102_determine_quirks(struct usb_serial *serial)
++{
++	struct cp210x_serial_private *priv = usb_get_serial_data(serial);
++	u8 *buf;
++	int ret;
++
++	buf = kmalloc(2, GFP_KERNEL);
++	if (!buf)
++		return;
++	/*
++	 * Some (possibly counterfeit) CP2102 do not support event-insertion
++	 * mode and respond differently to malformed vendor requests.
++	 * Specifically, they return one instead of two bytes when sent a
++	 * two-byte part-number request.
++	 */
++	ret = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0),
++			CP210X_VENDOR_SPECIFIC, REQTYPE_DEVICE_TO_HOST,
++			CP210X_GET_PARTNUM, 0, buf, 2, USB_CTRL_GET_TIMEOUT);
++	if (ret == 1) {
++		dev_dbg(&serial->interface->dev,
++				"device does not support event-insertion mode\n");
++		priv->no_event_mode = true;
++	}
++
++	kfree(buf);
++}
++
++static void cp210x_determine_quirks(struct usb_serial *serial)
++{
++	struct cp210x_serial_private *priv = usb_get_serial_data(serial);
++
++	switch (priv->partnum) {
++	case CP210X_PARTNUM_CP2102:
++		cp2102_determine_quirks(serial);
++		break;
++	default:
++		break;
++	}
++}
++
+ static int cp210x_attach(struct usb_serial *serial)
+ {
+ 	int result;
+@@ -2106,6 +2152,7 @@ static int cp210x_attach(struct usb_serial *serial)
+ 
+ 	usb_set_serial_data(serial, priv);
+ 
++	cp210x_determine_quirks(serial);
+ 	cp210x_init_max_speed(serial);
+ 
+ 	result = cp210x_gpio_init(serial);
+diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
+index 30c25ef0dacd2..48a9a0476a453 100644
+--- a/drivers/usb/serial/mos7840.c
++++ b/drivers/usb/serial/mos7840.c
+@@ -107,7 +107,6 @@
+ #define BANDB_DEVICE_ID_USOPTL4_2P       0xBC02
+ #define BANDB_DEVICE_ID_USOPTL4_4        0xAC44
+ #define BANDB_DEVICE_ID_USOPTL4_4P       0xBC03
+-#define BANDB_DEVICE_ID_USOPTL2_4        0xAC24
+ 
+ /* Interrupt Routine Defines    */
+ 
+@@ -186,7 +185,6 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_2P) },
+ 	{ USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_4) },
+ 	{ USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_4P) },
+-	{ USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL2_4) },
+ 	{}			/* terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, id_table);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index acb8eec14f689..1e990a8264a53 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1205,6 +1205,14 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1056, 0xff),	/* Telit FD980 */
+ 	  .driver_info = NCTRL(2) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1060, 0xff),	/* Telit LN920 (rmnet) */
++	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1061, 0xff),	/* Telit LN920 (MBIM) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1062, 0xff),	/* Telit LN920 (RNDIS) */
++	  .driver_info = NCTRL(2) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1063, 0xff),	/* Telit LN920 (ECM) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+@@ -1650,7 +1658,6 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0060, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0070, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0073, 0xff, 0xff, 0xff) },
+-	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0094, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0130, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0133, 0xff, 0xff, 0xff),
+@@ -2068,6 +2075,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ 	{ USB_DEVICE(0x0489, 0xe0b5),						/* Foxconn T77W968 ESIM */
+ 	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe0db, 0xff),			/* Foxconn T99W265 MBIM */
++	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE(0x1508, 0x1001),						/* Fibocom NL668 (IOT version) */
+ 	  .driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
+ 	{ USB_DEVICE(0x2cb7, 0x0104),						/* Fibocom NL678 series */
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index efa972be2ee34..c6b3fcf901805 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -416,9 +416,16 @@ UNUSUAL_DEV(  0x04cb, 0x0100, 0x0000, 0x2210,
+ 		USB_SC_UFI, USB_PR_DEVICE, NULL, US_FL_FIX_INQUIRY | US_FL_SINGLE_LUN),
+ 
+ /*
+- * Reported by Ondrej Zary <linux@rainbow-software.org>
++ * Reported by Ondrej Zary <linux@zary.sk>
+  * The device reports one sector more and breaks when that sector is accessed
++ * Firmwares older than 2.6c (the latest one and the only that claims Linux
++ * support) have also broken tag handling
+  */
++UNUSUAL_DEV(  0x04ce, 0x0002, 0x0000, 0x026b,
++		"ScanLogic",
++		"SL11R-IDE",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_FIX_CAPACITY | US_FL_BULK_IGNORE_TAG),
+ UNUSUAL_DEV(  0x04ce, 0x0002, 0x026c, 0x026c,
+ 		"ScanLogic",
+ 		"SL11R-IDE",
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index c35a6db993f1b..4051c8cd0cd8a 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -50,7 +50,7 @@ UNUSUAL_DEV(0x059f, 0x1061, 0x0000, 0x9999,
+ 		"LaCie",
+ 		"Rugged USB3-FW",
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+-		US_FL_IGNORE_UAS),
++		US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME),
+ 
+ /*
+  * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index b57b2067ecbfb..15d4b1ef19f83 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -43,6 +43,8 @@
+ #include <linux/sched.h>
+ #include <linux/cred.h>
+ #include <linux/errno.h>
++#include <linux/freezer.h>
++#include <linux/kthread.h>
+ #include <linux/mm.h>
+ #include <linux/memblock.h>
+ #include <linux/pagemap.h>
+@@ -115,7 +117,7 @@ static struct ctl_table xen_root[] = {
+ #define EXTENT_ORDER (fls(XEN_PFN_PER_PAGE) - 1)
+ 
+ /*
+- * balloon_process() state:
++ * balloon_thread() state:
+  *
+  * BP_DONE: done or nothing to do,
+  * BP_WAIT: wait to be rescheduled,
+@@ -130,6 +132,8 @@ enum bp_state {
+ 	BP_ECANCELED
+ };
+ 
++/* Main waiting point for xen-balloon thread. */
++static DECLARE_WAIT_QUEUE_HEAD(balloon_thread_wq);
+ 
+ static DEFINE_MUTEX(balloon_mutex);
+ 
+@@ -144,10 +148,6 @@ static xen_pfn_t frame_list[PAGE_SIZE / sizeof(xen_pfn_t)];
+ static LIST_HEAD(ballooned_pages);
+ static DECLARE_WAIT_QUEUE_HEAD(balloon_wq);
+ 
+-/* Main work function, always executed in process context. */
+-static void balloon_process(struct work_struct *work);
+-static DECLARE_DELAYED_WORK(balloon_worker, balloon_process);
+-
+ /* When ballooning out (allocating memory to return to Xen) we don't really
+    want the kernel to try too hard since that can trigger the oom killer. */
+ #define GFP_BALLOON \
+@@ -366,7 +366,7 @@ static void xen_online_page(struct page *page, unsigned int order)
+ static int xen_memory_notifier(struct notifier_block *nb, unsigned long val, void *v)
+ {
+ 	if (val == MEM_ONLINE)
+-		schedule_delayed_work(&balloon_worker, 0);
++		wake_up(&balloon_thread_wq);
+ 
+ 	return NOTIFY_OK;
+ }
+@@ -491,18 +491,43 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
+ }
+ 
+ /*
+- * As this is a work item it is guaranteed to run as a single instance only.
++ * Stop waiting if either state is not BP_EAGAIN and ballooning action is
++ * needed, or if the credit has changed while state is BP_EAGAIN.
++ */
++static bool balloon_thread_cond(enum bp_state state, long credit)
++{
++	if (state != BP_EAGAIN)
++		credit = 0;
++
++	return current_credit() != credit || kthread_should_stop();
++}
++
++/*
++ * As this is a kthread it is guaranteed to run as a single instance only.
+  * We may of course race updates of the target counts (which are protected
+  * by the balloon lock), or with changes to the Xen hard limit, but we will
+  * recover from these in time.
+  */
+-static void balloon_process(struct work_struct *work)
++static int balloon_thread(void *unused)
+ {
+ 	enum bp_state state = BP_DONE;
+ 	long credit;
++	unsigned long timeout;
++
++	set_freezable();
++	for (;;) {
++		if (state == BP_EAGAIN)
++			timeout = balloon_stats.schedule_delay * HZ;
++		else
++			timeout = 3600 * HZ;
++		credit = current_credit();
+ 
++		wait_event_freezable_timeout(balloon_thread_wq,
++			balloon_thread_cond(state, credit), timeout);
++
++		if (kthread_should_stop())
++			return 0;
+ 
+-	do {
+ 		mutex_lock(&balloon_mutex);
+ 
+ 		credit = current_credit();
+@@ -529,12 +554,7 @@ static void balloon_process(struct work_struct *work)
+ 		mutex_unlock(&balloon_mutex);
+ 
+ 		cond_resched();
+-
+-	} while (credit && state == BP_DONE);
+-
+-	/* Schedule more work if there is some still to be done. */
+-	if (state == BP_EAGAIN)
+-		schedule_delayed_work(&balloon_worker, balloon_stats.schedule_delay * HZ);
++	}
+ }
+ 
+ /* Resets the Xen limit, sets new target, and kicks off processing. */
+@@ -542,7 +562,7 @@ void balloon_set_new_target(unsigned long target)
+ {
+ 	/* No need for lock. Not read-modify-write updates. */
+ 	balloon_stats.target_pages = target;
+-	schedule_delayed_work(&balloon_worker, 0);
++	wake_up(&balloon_thread_wq);
+ }
+ EXPORT_SYMBOL_GPL(balloon_set_new_target);
+ 
+@@ -647,7 +667,7 @@ void free_xenballooned_pages(int nr_pages, struct page **pages)
+ 
+ 	/* The balloon may be too large now. Shrink it if needed. */
+ 	if (current_credit())
+-		schedule_delayed_work(&balloon_worker, 0);
++		wake_up(&balloon_thread_wq);
+ 
+ 	mutex_unlock(&balloon_mutex);
+ }
+@@ -679,6 +699,8 @@ static void __init balloon_add_region(unsigned long start_pfn,
+ 
+ static int __init balloon_init(void)
+ {
++	struct task_struct *task;
++
+ 	if (!xen_domain())
+ 		return -ENODEV;
+ 
+@@ -722,6 +744,12 @@ static int __init balloon_init(void)
+ 	}
+ #endif
+ 
++	task = kthread_run(balloon_thread, NULL, "xen-balloon");
++	if (IS_ERR(task)) {
++		pr_err("xen-balloon thread could not be started, ballooning will not work!\n");
++		return PTR_ERR(task);
++	}
++
+ 	/* Init the xen-balloon driver. */
+ 	xen_balloon_init();
+ 
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 92d7fd7436cb8..262c0ae505af9 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -997,9 +997,9 @@ static struct dentry *afs_lookup(struct inode *dir, struct dentry *dentry,
+  */
+ static int afs_d_revalidate_rcu(struct dentry *dentry)
+ {
+-	struct afs_vnode *dvnode, *vnode;
++	struct afs_vnode *dvnode;
+ 	struct dentry *parent;
+-	struct inode *dir, *inode;
++	struct inode *dir;
+ 	long dir_version, de_version;
+ 
+ 	_enter("%p", dentry);
+@@ -1029,18 +1029,6 @@ static int afs_d_revalidate_rcu(struct dentry *dentry)
+ 			return -ECHILD;
+ 	}
+ 
+-	/* Check to see if the vnode referred to by the dentry still
+-	 * has a callback.
+-	 */
+-	if (d_really_is_positive(dentry)) {
+-		inode = d_inode_rcu(dentry);
+-		if (inode) {
+-			vnode = AFS_FS_I(inode);
+-			if (!afs_check_validity(vnode))
+-				return -ECHILD;
+-		}
+-	}
+-
+ 	return 1; /* Still valid */
+ }
+ 
+@@ -1076,17 +1064,7 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 	if (IS_ERR(key))
+ 		key = NULL;
+ 
+-	if (d_really_is_positive(dentry)) {
+-		inode = d_inode(dentry);
+-		if (inode) {
+-			vnode = AFS_FS_I(inode);
+-			afs_validate(vnode, key);
+-			if (test_bit(AFS_VNODE_DELETED, &vnode->flags))
+-				goto out_bad;
+-		}
+-	}
+-
+-	/* lock down the parent dentry so we can peer at it */
++	/* Hold the parent dentry so we can peer at it */
+ 	parent = dget_parent(dentry);
+ 	dir = AFS_FS_I(d_inode(parent));
+ 
+@@ -1095,7 +1073,7 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 
+ 	if (test_bit(AFS_VNODE_DELETED, &dir->flags)) {
+ 		_debug("%pd: parent dir deleted", dentry);
+-		goto out_bad_parent;
++		goto not_found;
+ 	}
+ 
+ 	/* We only need to invalidate a dentry if the server's copy changed
+@@ -1121,12 +1099,12 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 	case 0:
+ 		/* the filename maps to something */
+ 		if (d_really_is_negative(dentry))
+-			goto out_bad_parent;
++			goto not_found;
+ 		inode = d_inode(dentry);
+ 		if (is_bad_inode(inode)) {
+ 			printk("kAFS: afs_d_revalidate: %pd2 has bad inode\n",
+ 			       dentry);
+-			goto out_bad_parent;
++			goto not_found;
+ 		}
+ 
+ 		vnode = AFS_FS_I(inode);
+@@ -1148,9 +1126,6 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 			       dentry, fid.unique,
+ 			       vnode->fid.unique,
+ 			       vnode->vfs_inode.i_generation);
+-			write_seqlock(&vnode->cb_lock);
+-			set_bit(AFS_VNODE_DELETED, &vnode->flags);
+-			write_sequnlock(&vnode->cb_lock);
+ 			goto not_found;
+ 		}
+ 		goto out_valid;
+@@ -1165,7 +1140,7 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 	default:
+ 		_debug("failed to iterate dir %pd: %d",
+ 		       parent, ret);
+-		goto out_bad_parent;
++		goto not_found;
+ 	}
+ 
+ out_valid:
+@@ -1176,16 +1151,9 @@ out_valid_noupdate:
+ 	_leave(" = 1 [valid]");
+ 	return 1;
+ 
+-	/* the dirent, if it exists, now points to a different vnode */
+ not_found:
+-	spin_lock(&dentry->d_lock);
+-	dentry->d_flags |= DCACHE_NFSFS_RENAMED;
+-	spin_unlock(&dentry->d_lock);
+-
+-out_bad_parent:
+ 	_debug("dropping dentry %pd2", dentry);
+ 	dput(parent);
+-out_bad:
+ 	key_put(key);
+ 
+ 	_leave(" = 0 [bad]");
+diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
+index 2ffe09abae7fc..3a9cffc081b98 100644
+--- a/fs/afs/dir_edit.c
++++ b/fs/afs/dir_edit.c
+@@ -264,7 +264,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
+ 		if (b == nr_blocks) {
+ 			_debug("init %u", b);
+ 			afs_edit_init_block(meta, block, b);
+-			i_size_write(&vnode->vfs_inode, (b + 1) * AFS_DIR_BLOCK_SIZE);
++			afs_set_i_size(vnode, (b + 1) * AFS_DIR_BLOCK_SIZE);
+ 		}
+ 
+ 		/* Only lower dir pages have a counter in the header. */
+@@ -297,7 +297,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
+ new_directory:
+ 	afs_edit_init_block(meta, meta, 0);
+ 	i_size = AFS_DIR_BLOCK_SIZE;
+-	i_size_write(&vnode->vfs_inode, i_size);
++	afs_set_i_size(vnode, i_size);
+ 	slot = AFS_DIR_RESV_BLOCKS0;
+ 	page = page0;
+ 	block = meta;
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index ae3016a9fb23c..f81a972bdd294 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -53,16 +53,6 @@ static noinline void dump_vnode(struct afs_vnode *vnode, struct afs_vnode *paren
+ 		dump_stack();
+ }
+ 
+-/*
+- * Set the file size and block count.  Estimate the number of 512 bytes blocks
+- * used, rounded up to nearest 1K for consistency with other AFS clients.
+- */
+-static void afs_set_i_size(struct afs_vnode *vnode, u64 size)
+-{
+-	i_size_write(&vnode->vfs_inode, size);
+-	vnode->vfs_inode.i_blocks = ((size + 1023) >> 10) << 1;
+-}
+-
+ /*
+  * Initialise an inode from the vnode status.
+  */
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index ffe318ad2e026..dc08a3d9b3a8b 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -1573,6 +1573,16 @@ static inline void afs_update_dentry_version(struct afs_operation *op,
+ 			(void *)(unsigned long)dir_vp->scb.status.data_version;
+ }
+ 
++/*
++ * Set the file size and block count.  Estimate the number of 512 bytes blocks
++ * used, rounded up to nearest 1K for consistency with other AFS clients.
++ */
++static inline void afs_set_i_size(struct afs_vnode *vnode, u64 size)
++{
++	i_size_write(&vnode->vfs_inode, size);
++	vnode->vfs_inode.i_blocks = ((size + 1023) >> 10) << 1;
++}
++
+ /*
+  * Check for a conflicting operation on a directory that we just unlinked from.
+  * If someone managed to sneak a link or an unlink in on the file we just
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index d37b5cfcf28f5..be60cf1103829 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -184,7 +184,7 @@ int afs_write_end(struct file *file, struct address_space *mapping,
+ 		write_seqlock(&vnode->cb_lock);
+ 		i_size = i_size_read(&vnode->vfs_inode);
+ 		if (maybe_i_size > i_size)
+-			i_size_write(&vnode->vfs_inode, maybe_i_size);
++			afs_set_i_size(vnode, maybe_i_size);
+ 		write_sequnlock(&vnode->cb_lock);
+ 	}
+ 
+diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
+index 9d33bf0154abf..e65d0fabb83e5 100644
+--- a/fs/btrfs/raid56.c
++++ b/fs/btrfs/raid56.c
+@@ -1646,7 +1646,8 @@ struct btrfs_plug_cb {
+ /*
+  * rbios on the plug list are sorted for easier merging.
+  */
+-static int plug_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int plug_cmp(void *priv, const struct list_head *a,
++		    const struct list_head *b)
+ {
+ 	struct btrfs_raid_bio *ra = container_of(a, struct btrfs_raid_bio,
+ 						 plug_list);
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index e8347461c8ddd..69ab10c9237fc 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -417,9 +417,10 @@ static void __btrfs_dump_space_info(struct btrfs_fs_info *fs_info,
+ {
+ 	lockdep_assert_held(&info->lock);
+ 
+-	btrfs_info(fs_info, "space_info %llu has %llu free, is %sfull",
++	/* The free space could be negative in case of overcommit */
++	btrfs_info(fs_info, "space_info %llu has %lld free, is %sfull",
+ 		   info->flags,
+-		   info->total_bytes - btrfs_space_info_used(info, true),
++		   (s64)(info->total_bytes - btrfs_space_info_used(info, true)),
+ 		   info->full ? "" : "not ");
+ 	btrfs_info(fs_info,
+ 		"space_info total=%llu, used=%llu, pinned=%llu, reserved=%llu, may_use=%llu, readonly=%llu",
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index ec25e5eab3499..7bf3936aceda2 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -4070,7 +4070,8 @@ static noinline int copy_items(struct btrfs_trans_handle *trans,
+ 	return ret;
+ }
+ 
+-static int extent_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int extent_cmp(void *priv, const struct list_head *a,
++		      const struct list_head *b)
+ {
+ 	struct extent_map *em1, *em2;
+ 
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 509811aabb3fd..d8b8764f5bd10 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1226,7 +1226,8 @@ static int open_fs_devices(struct btrfs_fs_devices *fs_devices,
+ 	return 0;
+ }
+ 
+-static int devid_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int devid_cmp(void *priv, const struct list_head *a,
++		     const struct list_head *b)
+ {
+ 	struct btrfs_device *dev1, *dev2;
+ 
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 8ffe8063e42c1..7f5d173760cfc 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -3504,9 +3504,10 @@ cifs_match_super(struct super_block *sb, void *data)
+ 	spin_lock(&cifs_tcp_ses_lock);
+ 	cifs_sb = CIFS_SB(sb);
+ 	tlink = cifs_get_tlink(cifs_sb_master_tlink(cifs_sb));
+-	if (IS_ERR(tlink)) {
++	if (tlink == NULL) {
++		/* can not match superblock if tlink were ever null */
+ 		spin_unlock(&cifs_tcp_ses_lock);
+-		return rc;
++		return 0;
+ 	}
+ 	tcon = tlink_tcon(tlink);
+ 	ses = tcon->ses;
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index f46904a4ead31..67139f9d583f2 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -3039,7 +3039,7 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx)
+ 	struct cifs_tcon *tcon;
+ 	struct cifs_sb_info *cifs_sb;
+ 	struct dentry *dentry = ctx->cfile->dentry;
+-	int rc;
++	ssize_t rc;
+ 
+ 	tcon = tlink_tcon(ctx->cfile->tlink);
+ 	cifs_sb = CIFS_SB(dentry->d_sb);
+diff --git a/fs/ext4/fsmap.c b/fs/ext4/fsmap.c
+index 4c2a9fe300672..4493ef0c715e9 100644
+--- a/fs/ext4/fsmap.c
++++ b/fs/ext4/fsmap.c
+@@ -354,8 +354,8 @@ static unsigned int ext4_getfsmap_find_sb(struct super_block *sb,
+ 
+ /* Compare two fsmap items. */
+ static int ext4_getfsmap_compare(void *priv,
+-				 struct list_head *a,
+-				 struct list_head *b)
++				 const struct list_head *a,
++				 const struct list_head *b)
+ {
+ 	struct ext4_fsmap *fa;
+ 	struct ext4_fsmap *fb;
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index cd43c481df4b4..03c3407c8e26f 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -1744,7 +1744,8 @@ void gfs2_glock_complete(struct gfs2_glock *gl, int ret)
+ 	spin_unlock(&gl->gl_lockref.lock);
+ }
+ 
+-static int glock_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int glock_cmp(void *priv, const struct list_head *a,
++		     const struct list_head *b)
+ {
+ 	struct gfs2_glock *gla, *glb;
+ 
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index 1955dea999f79..7473b894e3c6c 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -605,7 +605,7 @@ void log_flush_wait(struct gfs2_sbd *sdp)
+ 	}
+ }
+ 
+-static int ip_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int ip_cmp(void *priv, const struct list_head *a, const struct list_head *b)
+ {
+ 	struct gfs2_inode *ipa, *ipb;
+ 
+diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
+index 3922b26264f5a..383ac2190ded4 100644
+--- a/fs/gfs2/lops.c
++++ b/fs/gfs2/lops.c
+@@ -627,7 +627,8 @@ static void gfs2_check_magic(struct buffer_head *bh)
+ 	kunmap_atomic(kaddr);
+ }
+ 
+-static int blocknr_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int blocknr_cmp(void *priv, const struct list_head *a,
++		       const struct list_head *b)
+ {
+ 	struct gfs2_bufdata *bda, *bdb;
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index a8d07273ddc05..26753d0cb4312 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4041,7 +4041,7 @@ static int io_add_buffers(struct io_provide_buf *pbuf, struct io_buffer **head)
+ 	int i, bid = pbuf->bid;
+ 
+ 	for (i = 0; i < pbuf->nbufs; i++) {
+-		buf = kmalloc(sizeof(*buf), GFP_KERNEL);
++		buf = kmalloc(sizeof(*buf), GFP_KERNEL_ACCOUNT);
+ 		if (!buf)
+ 			break;
+ 
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index caed9d98c64aa..cd9f7baa5bb7b 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1155,7 +1155,8 @@ iomap_ioend_try_merge(struct iomap_ioend *ioend, struct list_head *more_ioends,
+ EXPORT_SYMBOL_GPL(iomap_ioend_try_merge);
+ 
+ static int
+-iomap_ioend_compare(void *priv, struct list_head *a, struct list_head *b)
++iomap_ioend_compare(void *priv, const struct list_head *a,
++		const struct list_head *b)
+ {
+ 	struct iomap_ioend *ia = container_of(a, struct iomap_ioend, io_list);
+ 	struct iomap_ioend *ib = container_of(b, struct iomap_ioend, io_list);
+diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
+index 8e3a369086dbd..3e06e9a8cf594 100644
+--- a/fs/ocfs2/dlmglue.c
++++ b/fs/ocfs2/dlmglue.c
+@@ -3933,7 +3933,7 @@ static int ocfs2_data_convert_worker(struct ocfs2_lock_res *lockres,
+ 		oi = OCFS2_I(inode);
+ 		oi->ip_dir_lock_gen++;
+ 		mlog(0, "generation: %u\n", oi->ip_dir_lock_gen);
+-		goto out;
++		goto out_forget;
+ 	}
+ 
+ 	if (!S_ISREG(inode->i_mode))
+@@ -3964,6 +3964,7 @@ static int ocfs2_data_convert_worker(struct ocfs2_lock_res *lockres,
+ 		filemap_fdatawait(mapping);
+ 	}
+ 
++out_forget:
+ 	forget_all_cached_acls(inode);
+ 
+ out:
+diff --git a/fs/qnx4/dir.c b/fs/qnx4/dir.c
+index a6ee23aadd283..66645a5a35f30 100644
+--- a/fs/qnx4/dir.c
++++ b/fs/qnx4/dir.c
+@@ -15,13 +15,48 @@
+ #include <linux/buffer_head.h>
+ #include "qnx4.h"
+ 
++/*
++ * A qnx4 directory entry is an inode entry or link info
++ * depending on the status field in the last byte. The
++ * first byte is where the name start either way, and a
++ * zero means it's empty.
++ *
++ * Also, due to a bug in gcc, we don't want to use the
++ * real (differently sized) name arrays in the inode and
++ * link entries, but always the 'de_name[]' one in the
++ * fake struct entry.
++ *
++ * See
++ *
++ *   https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99578#c6
++ *
++ * for details, but basically gcc will take the size of the
++ * 'name' array from one of the used union entries randomly.
++ *
++ * This use of 'de_name[]' (48 bytes) avoids the false positive
++ * warnings that would happen if gcc decides to use 'inode.di_name'
++ * (16 bytes) even when the pointer and size were to come from
++ * 'link.dl_name' (48 bytes).
++ *
++ * In all cases the actual name pointer itself is the same, it's
++ * only the gcc internal 'what is the size of this field' logic
++ * that can get confused.
++ */
++union qnx4_directory_entry {
++	struct {
++		const char de_name[48];
++		u8 de_pad[15];
++		u8 de_status;
++	};
++	struct qnx4_inode_entry inode;
++	struct qnx4_link_info link;
++};
++
+ static int qnx4_readdir(struct file *file, struct dir_context *ctx)
+ {
+ 	struct inode *inode = file_inode(file);
+ 	unsigned int offset;
+ 	struct buffer_head *bh;
+-	struct qnx4_inode_entry *de;
+-	struct qnx4_link_info *le;
+ 	unsigned long blknum;
+ 	int ix, ino;
+ 	int size;
+@@ -38,27 +73,27 @@ static int qnx4_readdir(struct file *file, struct dir_context *ctx)
+ 		}
+ 		ix = (ctx->pos >> QNX4_DIR_ENTRY_SIZE_BITS) % QNX4_INODES_PER_BLOCK;
+ 		for (; ix < QNX4_INODES_PER_BLOCK; ix++, ctx->pos += QNX4_DIR_ENTRY_SIZE) {
++			union qnx4_directory_entry *de;
++
+ 			offset = ix * QNX4_DIR_ENTRY_SIZE;
+-			de = (struct qnx4_inode_entry *) (bh->b_data + offset);
+-			if (!de->di_fname[0])
++			de = (union qnx4_directory_entry *) (bh->b_data + offset);
++
++			if (!de->de_name[0])
+ 				continue;
+-			if (!(de->di_status & (QNX4_FILE_USED|QNX4_FILE_LINK)))
++			if (!(de->de_status & (QNX4_FILE_USED|QNX4_FILE_LINK)))
+ 				continue;
+-			if (!(de->di_status & QNX4_FILE_LINK))
+-				size = QNX4_SHORT_NAME_MAX;
+-			else
+-				size = QNX4_NAME_MAX;
+-			size = strnlen(de->di_fname, size);
+-			QNX4DEBUG((KERN_INFO "qnx4_readdir:%.*s\n", size, de->di_fname));
+-			if (!(de->di_status & QNX4_FILE_LINK))
++			if (!(de->de_status & QNX4_FILE_LINK)) {
++				size = sizeof(de->inode.di_fname);
+ 				ino = blknum * QNX4_INODES_PER_BLOCK + ix - 1;
+-			else {
+-				le  = (struct qnx4_link_info*)de;
+-				ino = ( le32_to_cpu(le->dl_inode_blk) - 1 ) *
++			} else {
++				size = sizeof(de->link.dl_fname);
++				ino = ( le32_to_cpu(de->link.dl_inode_blk) - 1 ) *
+ 					QNX4_INODES_PER_BLOCK +
+-					le->dl_inode_ndx;
++					de->link.dl_inode_ndx;
+ 			}
+-			if (!dir_emit(ctx, de->di_fname, size, ino, DT_UNKNOWN)) {
++			size = strnlen(de->de_name, size);
++			QNX4DEBUG((KERN_INFO "qnx4_readdir:%.*s\n", size, name));
++			if (!dir_emit(ctx, de->de_name, size, ino, DT_UNKNOWN)) {
+ 				brelse(bh);
+ 				return 0;
+ 			}
+diff --git a/fs/ubifs/gc.c b/fs/ubifs/gc.c
+index a4aaeea63893e..dc3e26e9ed7b2 100644
+--- a/fs/ubifs/gc.c
++++ b/fs/ubifs/gc.c
+@@ -102,7 +102,8 @@ static int switch_gc_head(struct ubifs_info *c)
+  * This function compares data nodes @a and @b. Returns %1 if @a has greater
+  * inode or block number, and %-1 otherwise.
+  */
+-static int data_nodes_cmp(void *priv, struct list_head *a, struct list_head *b)
++static int data_nodes_cmp(void *priv, const struct list_head *a,
++			  const struct list_head *b)
+ {
+ 	ino_t inuma, inumb;
+ 	struct ubifs_info *c = priv;
+@@ -145,8 +146,8 @@ static int data_nodes_cmp(void *priv, struct list_head *a, struct list_head *b)
+  * first and sorted by length in descending order. Directory entry nodes go
+  * after inode nodes and are sorted in ascending hash valuer order.
+  */
+-static int nondata_nodes_cmp(void *priv, struct list_head *a,
+-			     struct list_head *b)
++static int nondata_nodes_cmp(void *priv, const struct list_head *a,
++			     const struct list_head *b)
+ {
+ 	ino_t inuma, inumb;
+ 	struct ubifs_info *c = priv;
+diff --git a/fs/ubifs/replay.c b/fs/ubifs/replay.c
+index 1c6fc99fca30e..b2f5563d1489b 100644
+--- a/fs/ubifs/replay.c
++++ b/fs/ubifs/replay.c
+@@ -299,8 +299,8 @@ static int apply_replay_entry(struct ubifs_info *c, struct replay_entry *r)
+  * entries @a and @b by comparing their sequence numer.  Returns %1 if @a has
+  * greater sequence number and %-1 otherwise.
+  */
+-static int replay_entries_cmp(void *priv, struct list_head *a,
+-			      struct list_head *b)
++static int replay_entries_cmp(void *priv, const struct list_head *a,
++			      const struct list_head *b)
+ {
+ 	struct ubifs_info *c = priv;
+ 	struct replay_entry *ra, *rb;
+diff --git a/fs/xfs/scrub/bitmap.c b/fs/xfs/scrub/bitmap.c
+index f88694f22d059..813b5f2191138 100644
+--- a/fs/xfs/scrub/bitmap.c
++++ b/fs/xfs/scrub/bitmap.c
+@@ -63,8 +63,8 @@ xbitmap_init(
+ static int
+ xbitmap_range_cmp(
+ 	void			*priv,
+-	struct list_head	*a,
+-	struct list_head	*b)
++	const struct list_head	*a,
++	const struct list_head	*b)
+ {
+ 	struct xbitmap_range	*ap;
+ 	struct xbitmap_range	*bp;
+diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c
+index 9e16a4d0f97cc..984bb480f1774 100644
+--- a/fs/xfs/xfs_bmap_item.c
++++ b/fs/xfs/xfs_bmap_item.c
+@@ -265,8 +265,8 @@ xfs_trans_log_finish_bmap_update(
+ static int
+ xfs_bmap_update_diff_items(
+ 	void				*priv,
+-	struct list_head		*a,
+-	struct list_head		*b)
++	const struct list_head		*a,
++	const struct list_head		*b)
+ {
+ 	struct xfs_bmap_intent		*ba;
+ 	struct xfs_bmap_intent		*bb;
+diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
+index 4e4cf91f4f9fe..118819030dbb5 100644
+--- a/fs/xfs/xfs_buf.c
++++ b/fs/xfs/xfs_buf.c
+@@ -2114,9 +2114,9 @@ xfs_buf_delwri_queue(
+  */
+ static int
+ xfs_buf_cmp(
+-	void		*priv,
+-	struct list_head *a,
+-	struct list_head *b)
++	void			*priv,
++	const struct list_head	*a,
++	const struct list_head	*b)
+ {
+ 	struct xfs_buf	*ap = container_of(a, struct xfs_buf, b_list);
+ 	struct xfs_buf	*bp = container_of(b, struct xfs_buf, b_list);
+diff --git a/fs/xfs/xfs_extent_busy.c b/fs/xfs/xfs_extent_busy.c
+index 3991e59cfd18b..5c2695a42de15 100644
+--- a/fs/xfs/xfs_extent_busy.c
++++ b/fs/xfs/xfs_extent_busy.c
+@@ -643,8 +643,8 @@ xfs_extent_busy_wait_all(
+ int
+ xfs_extent_busy_ag_cmp(
+ 	void			*priv,
+-	struct list_head	*l1,
+-	struct list_head	*l2)
++	const struct list_head	*l1,
++	const struct list_head	*l2)
+ {
+ 	struct xfs_extent_busy	*b1 =
+ 		container_of(l1, struct xfs_extent_busy, list);
+diff --git a/fs/xfs/xfs_extent_busy.h b/fs/xfs/xfs_extent_busy.h
+index 990ab38919717..8aea071000923 100644
+--- a/fs/xfs/xfs_extent_busy.h
++++ b/fs/xfs/xfs_extent_busy.h
+@@ -58,7 +58,8 @@ void
+ xfs_extent_busy_wait_all(struct xfs_mount *mp);
+ 
+ int
+-xfs_extent_busy_ag_cmp(void *priv, struct list_head *a, struct list_head *b);
++xfs_extent_busy_ag_cmp(void *priv, const struct list_head *a,
++	const struct list_head *b);
+ 
+ static inline void xfs_extent_busy_sort(struct list_head *list)
+ {
+diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c
+index 6c11bfc3d452a..5c0395256bd1d 100644
+--- a/fs/xfs/xfs_extfree_item.c
++++ b/fs/xfs/xfs_extfree_item.c
+@@ -397,8 +397,8 @@ xfs_trans_free_extent(
+ static int
+ xfs_extent_free_diff_items(
+ 	void				*priv,
+-	struct list_head		*a,
+-	struct list_head		*b)
++	const struct list_head		*a,
++	const struct list_head		*b)
+ {
+ 	struct xfs_mount		*mp = priv;
+ 	struct xfs_extent_free_item	*ra;
+diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c
+index 7529eb63ce947..0dee316283a90 100644
+--- a/fs/xfs/xfs_refcount_item.c
++++ b/fs/xfs/xfs_refcount_item.c
+@@ -269,8 +269,8 @@ xfs_trans_log_finish_refcount_update(
+ static int
+ xfs_refcount_update_diff_items(
+ 	void				*priv,
+-	struct list_head		*a,
+-	struct list_head		*b)
++	const struct list_head		*a,
++	const struct list_head		*b)
+ {
+ 	struct xfs_mount		*mp = priv;
+ 	struct xfs_refcount_intent	*ra;
+diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c
+index 7adc996ca6e30..20905953fe763 100644
+--- a/fs/xfs/xfs_rmap_item.c
++++ b/fs/xfs/xfs_rmap_item.c
+@@ -337,8 +337,8 @@ xfs_trans_log_finish_rmap_update(
+ static int
+ xfs_rmap_update_diff_items(
+ 	void				*priv,
+-	struct list_head		*a,
+-	struct list_head		*b)
++	const struct list_head		*a,
++	const struct list_head		*b)
+ {
+ 	struct xfs_mount		*mp = priv;
+ 	struct xfs_rmap_intent		*ra;
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index b8fe0c23cfffb..475d0a3ce059e 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -180,6 +180,8 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+     (typeof(ptr)) (__ptr + (off)); })
+ #endif
+ 
++#define absolute_pointer(val)	RELOC_HIDE((void *)(val), 0)
++
+ #ifndef OPTIMIZER_HIDE_VAR
+ /* Make the optimizer believe the variable can be manipulated arbitrarily. */
+ #define OPTIMIZER_HIDE_VAR(var)						\
+diff --git a/include/linux/list_sort.h b/include/linux/list_sort.h
+index 20f178c24e9d1..453105f74e050 100644
+--- a/include/linux/list_sort.h
++++ b/include/linux/list_sort.h
+@@ -6,8 +6,9 @@
+ 
+ struct list_head;
+ 
++typedef int __attribute__((nonnull(2,3))) (*list_cmp_func_t)(void *,
++		const struct list_head *, const struct list_head *);
++
+ __attribute__((nonnull(2,3)))
+-void list_sort(void *priv, struct list_head *head,
+-	       int (*cmp)(void *priv, struct list_head *a,
+-			  struct list_head *b));
++void list_sort(void *priv, struct list_head *head, list_cmp_func_t cmp);
+ #endif
+diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
+index 3dbb42c637c14..9f05016d823f8 100644
+--- a/include/linux/usb/hcd.h
++++ b/include/linux/usb/hcd.h
+@@ -124,6 +124,7 @@ struct usb_hcd {
+ #define HCD_FLAG_RH_RUNNING		5	/* root hub is running? */
+ #define HCD_FLAG_DEAD			6	/* controller has died? */
+ #define HCD_FLAG_INTF_AUTHORIZED	7	/* authorize interfaces? */
++#define HCD_FLAG_DEFER_RH_REGISTER	8	/* Defer roothub registration */
+ 
+ 	/* The flags can be tested using these macros; they are likely to
+ 	 * be slightly faster than test_bit().
+@@ -134,6 +135,7 @@ struct usb_hcd {
+ #define HCD_WAKEUP_PENDING(hcd)	((hcd)->flags & (1U << HCD_FLAG_WAKEUP_PENDING))
+ #define HCD_RH_RUNNING(hcd)	((hcd)->flags & (1U << HCD_FLAG_RH_RUNNING))
+ #define HCD_DEAD(hcd)		((hcd)->flags & (1U << HCD_FLAG_DEAD))
++#define HCD_DEFER_RH_REGISTER(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEFER_RH_REGISTER))
+ 
+ 	/*
+ 	 * Specifies if interfaces are authorized by default
+diff --git a/include/trace/events/erofs.h b/include/trace/events/erofs.h
+index bf9806fd13065..db4f2cec83606 100644
+--- a/include/trace/events/erofs.h
++++ b/include/trace/events/erofs.h
+@@ -35,20 +35,20 @@ TRACE_EVENT(erofs_lookup,
+ 	TP_STRUCT__entry(
+ 		__field(dev_t,		dev	)
+ 		__field(erofs_nid_t,	nid	)
+-		__field(const char *,	name	)
++		__string(name,		dentry->d_name.name	)
+ 		__field(unsigned int,	flags	)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->dev	= dir->i_sb->s_dev;
+ 		__entry->nid	= EROFS_I(dir)->nid;
+-		__entry->name	= dentry->d_name.name;
++		__assign_str(name, dentry->d_name.name);
+ 		__entry->flags	= flags;
+ 	),
+ 
+ 	TP_printk("dev = (%d,%d), pnid = %llu, name:%s, flags:%x",
+ 		show_dev_nid(__entry),
+-		__entry->name,
++		__get_str(name),
+ 		__entry->flags)
+ );
+ 
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index cba1f86e75cdb..0c26757ea7fbb 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -8822,6 +8822,8 @@ static int check_btf_line(struct bpf_verifier_env *env,
+ 	nr_linfo = attr->line_info_cnt;
+ 	if (!nr_linfo)
+ 		return 0;
++	if (nr_linfo > INT_MAX / sizeof(struct bpf_line_info))
++		return -EINVAL;
+ 
+ 	rec_size = attr->line_info_rec_size;
+ 	if (rec_size < MIN_BPF_LINEINFO_SIZE ||
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index f1022945e3460..b89ff188a6183 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -1670,6 +1670,14 @@ static int blk_trace_remove_queue(struct request_queue *q)
+ 	if (bt == NULL)
+ 		return -EINVAL;
+ 
++	if (bt->trace_state == Blktrace_running) {
++		bt->trace_state = Blktrace_stopped;
++		spin_lock_irq(&running_trace_lock);
++		list_del_init(&bt->running_list);
++		spin_unlock_irq(&running_trace_lock);
++		relay_flush(bt->rchan);
++	}
++
+ 	put_probe_ref();
+ 	synchronize_rcu();
+ 	blk_trace_free(bt);
+diff --git a/lib/list_sort.c b/lib/list_sort.c
+index 52f0c258c895a..a926d96ffd44d 100644
+--- a/lib/list_sort.c
++++ b/lib/list_sort.c
+@@ -7,16 +7,13 @@
+ #include <linux/list_sort.h>
+ #include <linux/list.h>
+ 
+-typedef int __attribute__((nonnull(2,3))) (*cmp_func)(void *,
+-		struct list_head const *, struct list_head const *);
+-
+ /*
+  * Returns a list organized in an intermediate format suited
+  * to chaining of merge() calls: null-terminated, no reserved or
+  * sentinel head node, "prev" links not maintained.
+  */
+ __attribute__((nonnull(2,3,4)))
+-static struct list_head *merge(void *priv, cmp_func cmp,
++static struct list_head *merge(void *priv, list_cmp_func_t cmp,
+ 				struct list_head *a, struct list_head *b)
+ {
+ 	struct list_head *head, **tail = &head;
+@@ -52,7 +49,7 @@ static struct list_head *merge(void *priv, cmp_func cmp,
+  * throughout.
+  */
+ __attribute__((nonnull(2,3,4,5)))
+-static void merge_final(void *priv, cmp_func cmp, struct list_head *head,
++static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
+ 			struct list_head *a, struct list_head *b)
+ {
+ 	struct list_head *tail = head;
+@@ -185,9 +182,7 @@ static void merge_final(void *priv, cmp_func cmp, struct list_head *head,
+  * 2^(k+1) - 1 (second merge of case 5 when x == 2^(k-1) - 1).
+  */
+ __attribute__((nonnull(2,3)))
+-void list_sort(void *priv, struct list_head *head,
+-		int (*cmp)(void *priv, struct list_head *a,
+-			struct list_head *b))
++void list_sort(void *priv, struct list_head *head, list_cmp_func_t cmp)
+ {
+ 	struct list_head *list = head->next, *pending = NULL;
+ 	size_t count = 0;	/* Count of pending */
+@@ -227,7 +222,7 @@ void list_sort(void *priv, struct list_head *head,
+ 		if (likely(bits)) {
+ 			struct list_head *a = *tail, *b = a->prev;
+ 
+-			a = merge(priv, (cmp_func)cmp, b, a);
++			a = merge(priv, cmp, b, a);
+ 			/* Install the merged result in place of the inputs */
+ 			a->prev = b->prev;
+ 			*tail = a;
+@@ -249,10 +244,10 @@ void list_sort(void *priv, struct list_head *head,
+ 
+ 		if (!next)
+ 			break;
+-		list = merge(priv, (cmp_func)cmp, pending, list);
++		list = merge(priv, cmp, pending, list);
+ 		pending = next;
+ 	}
+ 	/* The final merge, rebuilding prev links */
+-	merge_final(priv, (cmp_func)cmp, head, pending, list);
++	merge_final(priv, cmp, head, pending, list);
+ }
+ EXPORT_SYMBOL(list_sort);
+diff --git a/lib/test_list_sort.c b/lib/test_list_sort.c
+index 1f017d3b610ee..00daaf23316f4 100644
+--- a/lib/test_list_sort.c
++++ b/lib/test_list_sort.c
+@@ -56,7 +56,8 @@ static int __init check(struct debug_el *ela, struct debug_el *elb)
+ 	return 0;
+ }
+ 
+-static int __init cmp(void *priv, struct list_head *a, struct list_head *b)
++static int __init cmp(void *priv, const struct list_head *a,
++		      const struct list_head *b)
+ {
+ 	struct debug_el *ela, *elb;
+ 
+diff --git a/mm/util.c b/mm/util.c
+index 4ddb6e186dd5c..d5be677718500 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -756,7 +756,7 @@ int overcommit_policy_handler(struct ctl_table *table, int write, void *buffer,
+ 		size_t *lenp, loff_t *ppos)
+ {
+ 	struct ctl_table t;
+-	int new_policy;
++	int new_policy = -1;
+ 	int ret;
+ 
+ 	/*
+@@ -774,7 +774,7 @@ int overcommit_policy_handler(struct ctl_table *table, int write, void *buffer,
+ 		t = *table;
+ 		t.data = &new_policy;
+ 		ret = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
+-		if (ret)
++		if (ret || new_policy == -1)
+ 			return ret;
+ 
+ 		mm_compute_batch(new_policy);
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index 3ada338d7e08b..71c8ef7d40870 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -459,7 +459,7 @@ static int dsa_switch_setup(struct dsa_switch *ds)
+ 	devlink_params_publish(ds->devlink);
+ 
+ 	if (!ds->slave_mii_bus && ds->ops->phy_read) {
+-		ds->slave_mii_bus = devm_mdiobus_alloc(ds->dev);
++		ds->slave_mii_bus = mdiobus_alloc();
+ 		if (!ds->slave_mii_bus) {
+ 			err = -ENOMEM;
+ 			goto teardown;
+@@ -469,13 +469,16 @@ static int dsa_switch_setup(struct dsa_switch *ds)
+ 
+ 		err = mdiobus_register(ds->slave_mii_bus);
+ 		if (err < 0)
+-			goto teardown;
++			goto free_slave_mii_bus;
+ 	}
+ 
+ 	ds->setup = true;
+ 
+ 	return 0;
+ 
++free_slave_mii_bus:
++	if (ds->slave_mii_bus && ds->ops->phy_read)
++		mdiobus_free(ds->slave_mii_bus);
+ teardown:
+ 	if (ds->ops->teardown)
+ 		ds->ops->teardown(ds);
+@@ -500,8 +503,11 @@ static void dsa_switch_teardown(struct dsa_switch *ds)
+ 	if (!ds->setup)
+ 		return;
+ 
+-	if (ds->slave_mii_bus && ds->ops->phy_read)
++	if (ds->slave_mii_bus && ds->ops->phy_read) {
+ 		mdiobus_unregister(ds->slave_mii_bus);
++		mdiobus_free(ds->slave_mii_bus);
++		ds->slave_mii_bus = NULL;
++	}
+ 
+ 	dsa_switch_unregister_notifier(ds);
+ 
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 1fb79dbde0cb3..e43f1fbac28b6 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -1376,7 +1376,6 @@ int fib6_add(struct fib6_node *root, struct fib6_info *rt,
+ 	int err = -ENOMEM;
+ 	int allow_create = 1;
+ 	int replace_required = 0;
+-	int sernum = fib6_new_sernum(info->nl_net);
+ 
+ 	if (info->nlh) {
+ 		if (!(info->nlh->nlmsg_flags & NLM_F_CREATE))
+@@ -1476,7 +1475,7 @@ int fib6_add(struct fib6_node *root, struct fib6_info *rt,
+ 	if (!err) {
+ 		if (rt->nh)
+ 			list_add(&rt->nh_list, &rt->nh->f6i_list);
+-		__fib6_update_sernum_upto_root(rt, sernum);
++		__fib6_update_sernum_upto_root(rt, fib6_new_sernum(info->nl_net));
+ 		fib6_start_gc(info->nl_net, rt);
+ 	}
+ 
+diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
+index 696d89c2dce4a..5ee5b2ce29a6e 100644
+--- a/net/smc/smc_clc.c
++++ b/net/smc/smc_clc.c
+@@ -230,7 +230,8 @@ static int smc_clc_prfx_set(struct socket *clcsock,
+ 		goto out_rel;
+ 	}
+ 	/* get address to which the internal TCP socket is bound */
+-	kernel_getsockname(clcsock, (struct sockaddr *)&addrs);
++	if (kernel_getsockname(clcsock, (struct sockaddr *)&addrs) < 0)
++		goto out_rel;
+ 	/* analyze IP specific data of net_device belonging to TCP socket */
+ 	addr6 = (struct sockaddr_in6 *)&addrs;
+ 	rcu_read_lock();
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index af96f813c0752..c491dd8e67cda 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1089,7 +1089,9 @@ static void smc_conn_abort_work(struct work_struct *work)
+ 						   abort_work);
+ 	struct smc_sock *smc = container_of(conn, struct smc_sock, conn);
+ 
++	lock_sock(&smc->sk);
+ 	smc_conn_kill(conn, true);
++	release_sock(&smc->sk);
+ 	sock_put(&smc->sk); /* sock_hold done by schedulers of abort_work */
+ }
+ 
+diff --git a/net/tipc/name_table.c b/net/tipc/name_table.c
+index 2ac33d32edc2b..f6a6acef42235 100644
+--- a/net/tipc/name_table.c
++++ b/net/tipc/name_table.c
+@@ -381,8 +381,8 @@ static struct publication *tipc_service_remove_publ(struct service_range *sr,
+  * Code reused: time_after32() for the same purpose
+  */
+ #define publication_after(pa, pb) time_after32((pa)->id, (pb)->id)
+-static int tipc_publ_sort(void *priv, struct list_head *a,
+-			  struct list_head *b)
++static int tipc_publ_sort(void *priv, const struct list_head *a,
++			  const struct list_head *b)
+ {
+ 	struct publication *pa, *pb;
+ 
+diff --git a/tools/testing/selftests/arm64/signal/test_signals.h b/tools/testing/selftests/arm64/signal/test_signals.h
+index f96baf1cef1a9..ebe8694dbef0f 100644
+--- a/tools/testing/selftests/arm64/signal/test_signals.h
++++ b/tools/testing/selftests/arm64/signal/test_signals.h
+@@ -33,10 +33,12 @@
+  */
+ enum {
+ 	FSSBS_BIT,
++	FSVE_BIT,
+ 	FMAX_END
+ };
+ 
+ #define FEAT_SSBS		(1UL << FSSBS_BIT)
++#define FEAT_SVE		(1UL << FSVE_BIT)
+ 
+ /*
+  * A descriptor used to describe and configure a test case.
+diff --git a/tools/testing/selftests/arm64/signal/test_signals_utils.c b/tools/testing/selftests/arm64/signal/test_signals_utils.c
+index 2de6e5ed5e258..22722abc9dfa9 100644
+--- a/tools/testing/selftests/arm64/signal/test_signals_utils.c
++++ b/tools/testing/selftests/arm64/signal/test_signals_utils.c
+@@ -26,6 +26,7 @@ static int sig_copyctx = SIGTRAP;
+ 
+ static char const *const feats_names[FMAX_END] = {
+ 	" SSBS ",
++	" SVE ",
+ };
+ 
+ #define MAX_FEATS_SZ	128
+@@ -263,16 +264,21 @@ int test_init(struct tdescr *td)
+ 		 */
+ 		if (getauxval(AT_HWCAP) & HWCAP_SSBS)
+ 			td->feats_supported |= FEAT_SSBS;
+-		if (feats_ok(td))
++		if (getauxval(AT_HWCAP) & HWCAP_SVE)
++			td->feats_supported |= FEAT_SVE;
++		if (feats_ok(td)) {
+ 			fprintf(stderr,
+ 				"Required Features: [%s] supported\n",
+ 				feats_to_string(td->feats_required &
+ 						td->feats_supported));
+-		else
++		} else {
+ 			fprintf(stderr,
+ 				"Required Features: [%s] NOT supported\n",
+ 				feats_to_string(td->feats_required &
+ 						~td->feats_supported));
++			td->result = KSFT_SKIP;
++			return 0;
++		}
+ 	}
+ 
+ 	/* Perform test specific additional initialization */


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-10-06 14:18 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-10-06 14:18 UTC (permalink / raw
  To: gentoo-commits

commit:     7c1d00b6c2d21704f6d98b03a95ba313fa46c07b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct  6 14:17:54 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct  6 14:17:54 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7c1d00b6

Linux patch 5.10.71

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1070_linux-5.10.71.patch | 4574 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4578 insertions(+)

diff --git a/0000_README b/0000_README
index 89a885a..ef33fa6 100644
--- a/0000_README
+++ b/0000_README
@@ -323,6 +323,10 @@ Patch:  1069_linux-5.10.70.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.70
 
+Patch:  1070_linux-5.10.71.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.71
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1070_linux-5.10.71.patch b/1070_linux-5.10.71.patch
new file mode 100644
index 0000000..8600714
--- /dev/null
+++ b/1070_linux-5.10.71.patch
@@ -0,0 +1,4574 @@
+diff --git a/Makefile b/Makefile
+index 4a9541a18618b..1637ff7c1b751 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 70
++SUBLEVEL = 71
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/mips/net/bpf_jit.c b/arch/mips/net/bpf_jit.c
+index 0af88622c6192..cb6d22439f71b 100644
+--- a/arch/mips/net/bpf_jit.c
++++ b/arch/mips/net/bpf_jit.c
+@@ -662,6 +662,11 @@ static void build_epilogue(struct jit_ctx *ctx)
+ 	((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative : func) : \
+ 	 func##_positive)
+ 
++static bool is_bad_offset(int b_off)
++{
++	return b_off > 0x1ffff || b_off < -0x20000;
++}
++
+ static int build_body(struct jit_ctx *ctx)
+ {
+ 	const struct bpf_prog *prog = ctx->skf;
+@@ -728,7 +733,10 @@ load_common:
+ 			/* Load return register on DS for failures */
+ 			emit_reg_move(r_ret, r_zero, ctx);
+ 			/* Return with error */
+-			emit_b(b_imm(prog->len, ctx), ctx);
++			b_off = b_imm(prog->len, ctx);
++			if (is_bad_offset(b_off))
++				return -E2BIG;
++			emit_b(b_off, ctx);
+ 			emit_nop(ctx);
+ 			break;
+ 		case BPF_LD | BPF_W | BPF_IND:
+@@ -775,8 +783,10 @@ load_ind:
+ 			emit_jalr(MIPS_R_RA, r_s0, ctx);
+ 			emit_reg_move(MIPS_R_A0, r_skb, ctx); /* delay slot */
+ 			/* Check the error value */
+-			emit_bcond(MIPS_COND_NE, r_ret, 0,
+-				   b_imm(prog->len, ctx), ctx);
++			b_off = b_imm(prog->len, ctx);
++			if (is_bad_offset(b_off))
++				return -E2BIG;
++			emit_bcond(MIPS_COND_NE, r_ret, 0, b_off, ctx);
+ 			emit_reg_move(r_ret, r_zero, ctx);
+ 			/* We are good */
+ 			/* X <- P[1:K] & 0xf */
+@@ -855,8 +865,10 @@ load_ind:
+ 			/* A /= X */
+ 			ctx->flags |= SEEN_X | SEEN_A;
+ 			/* Check if r_X is zero */
+-			emit_bcond(MIPS_COND_EQ, r_X, r_zero,
+-				   b_imm(prog->len, ctx), ctx);
++			b_off = b_imm(prog->len, ctx);
++			if (is_bad_offset(b_off))
++				return -E2BIG;
++			emit_bcond(MIPS_COND_EQ, r_X, r_zero, b_off, ctx);
+ 			emit_load_imm(r_ret, 0, ctx); /* delay slot */
+ 			emit_div(r_A, r_X, ctx);
+ 			break;
+@@ -864,8 +876,10 @@ load_ind:
+ 			/* A %= X */
+ 			ctx->flags |= SEEN_X | SEEN_A;
+ 			/* Check if r_X is zero */
+-			emit_bcond(MIPS_COND_EQ, r_X, r_zero,
+-				   b_imm(prog->len, ctx), ctx);
++			b_off = b_imm(prog->len, ctx);
++			if (is_bad_offset(b_off))
++				return -E2BIG;
++			emit_bcond(MIPS_COND_EQ, r_X, r_zero, b_off, ctx);
+ 			emit_load_imm(r_ret, 0, ctx); /* delay slot */
+ 			emit_mod(r_A, r_X, ctx);
+ 			break;
+@@ -926,7 +940,10 @@ load_ind:
+ 			break;
+ 		case BPF_JMP | BPF_JA:
+ 			/* pc += K */
+-			emit_b(b_imm(i + k + 1, ctx), ctx);
++			b_off = b_imm(i + k + 1, ctx);
++			if (is_bad_offset(b_off))
++				return -E2BIG;
++			emit_b(b_off, ctx);
+ 			emit_nop(ctx);
+ 			break;
+ 		case BPF_JMP | BPF_JEQ | BPF_K:
+@@ -1056,12 +1073,16 @@ jmp_cmp:
+ 			break;
+ 		case BPF_RET | BPF_A:
+ 			ctx->flags |= SEEN_A;
+-			if (i != prog->len - 1)
++			if (i != prog->len - 1) {
+ 				/*
+ 				 * If this is not the last instruction
+ 				 * then jump to the epilogue
+ 				 */
+-				emit_b(b_imm(prog->len, ctx), ctx);
++				b_off = b_imm(prog->len, ctx);
++				if (is_bad_offset(b_off))
++					return -E2BIG;
++				emit_b(b_off, ctx);
++			}
+ 			emit_reg_move(r_ret, r_A, ctx); /* delay slot */
+ 			break;
+ 		case BPF_RET | BPF_K:
+@@ -1075,7 +1096,10 @@ jmp_cmp:
+ 				 * If this is not the last instruction
+ 				 * then jump to the epilogue
+ 				 */
+-				emit_b(b_imm(prog->len, ctx), ctx);
++				b_off = b_imm(prog->len, ctx);
++				if (is_bad_offset(b_off))
++					return -E2BIG;
++				emit_b(b_off, ctx);
+ 				emit_nop(ctx);
+ 			}
+ 			break;
+@@ -1133,8 +1157,10 @@ jmp_cmp:
+ 			/* Load *dev pointer */
+ 			emit_load_ptr(r_s0, r_skb, off, ctx);
+ 			/* error (0) in the delay slot */
+-			emit_bcond(MIPS_COND_EQ, r_s0, r_zero,
+-				   b_imm(prog->len, ctx), ctx);
++			b_off = b_imm(prog->len, ctx);
++			if (is_bad_offset(b_off))
++				return -E2BIG;
++			emit_bcond(MIPS_COND_EQ, r_s0, r_zero, b_off, ctx);
+ 			emit_reg_move(r_ret, r_zero, ctx);
+ 			if (code == (BPF_ANC | SKF_AD_IFINDEX)) {
+ 				BUILD_BUG_ON(sizeof_field(struct net_device, ifindex) != 4);
+@@ -1244,7 +1270,10 @@ void bpf_jit_compile(struct bpf_prog *fp)
+ 
+ 	/* Generate the actual JIT code */
+ 	build_prologue(&ctx);
+-	build_body(&ctx);
++	if (build_body(&ctx)) {
++		module_memfree(ctx.target);
++		goto out;
++	}
+ 	build_epilogue(&ctx);
+ 
+ 	/* Update the icache */
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 3b8b8eede1a8a..4684bf9fcc428 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -263,6 +263,7 @@ static struct event_constraint intel_icl_event_constraints[] = {
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0xa8, 0xb0, 0xf),
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0xb7, 0xbd, 0xf),
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0xd0, 0xe6, 0xf),
++	INTEL_EVENT_CONSTRAINT(0xef, 0xf),
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0xf0, 0xf4, 0xf),
+ 	EVENT_CONSTRAINT_END
+ };
+diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
+index 87bd6025d91d4..6a5f3acf2b331 100644
+--- a/arch/x86/include/asm/kvm_page_track.h
++++ b/arch/x86/include/asm/kvm_page_track.h
+@@ -46,7 +46,7 @@ struct kvm_page_track_notifier_node {
+ 			    struct kvm_page_track_notifier_node *node);
+ };
+ 
+-void kvm_page_track_init(struct kvm *kvm);
++int kvm_page_track_init(struct kvm *kvm);
+ void kvm_page_track_cleanup(struct kvm *kvm);
+ 
+ void kvm_page_track_free_memslot(struct kvm_memory_slot *slot);
+diff --git a/arch/x86/include/asm/kvmclock.h b/arch/x86/include/asm/kvmclock.h
+index eceea92990974..6c57651921028 100644
+--- a/arch/x86/include/asm/kvmclock.h
++++ b/arch/x86/include/asm/kvmclock.h
+@@ -2,6 +2,20 @@
+ #ifndef _ASM_X86_KVM_CLOCK_H
+ #define _ASM_X86_KVM_CLOCK_H
+ 
++#include <linux/percpu.h>
++
+ extern struct clocksource kvm_clock;
+ 
++DECLARE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
++
++static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
++{
++	return &this_cpu_read(hv_clock_per_cpu)->pvti;
++}
++
++static inline struct pvclock_vsyscall_time_info *this_cpu_hvclock(void)
++{
++	return this_cpu_read(hv_clock_per_cpu);
++}
++
+ #endif /* _ASM_X86_KVM_CLOCK_H */
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index c4ac26333bc41..bb657e2e6c687 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -50,18 +50,9 @@ early_param("no-kvmclock-vsyscall", parse_no_kvmclock_vsyscall);
+ static struct pvclock_vsyscall_time_info
+ 			hv_clock_boot[HVC_BOOT_ARRAY_SIZE] __bss_decrypted __aligned(PAGE_SIZE);
+ static struct pvclock_wall_clock wall_clock __bss_decrypted;
+-static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
+ static struct pvclock_vsyscall_time_info *hvclock_mem;
+-
+-static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
+-{
+-	return &this_cpu_read(hv_clock_per_cpu)->pvti;
+-}
+-
+-static inline struct pvclock_vsyscall_time_info *this_cpu_hvclock(void)
+-{
+-	return this_cpu_read(hv_clock_per_cpu);
+-}
++DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
++EXPORT_PER_CPU_SYMBOL_GPL(hv_clock_per_cpu);
+ 
+ /*
+  * The wallclock is the time of day when we booted. Since then, some time may
+diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
+index ff005fe738a4c..8c065da73f8e5 100644
+--- a/arch/x86/kvm/ioapic.c
++++ b/arch/x86/kvm/ioapic.c
+@@ -319,8 +319,8 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
+ 	unsigned index;
+ 	bool mask_before, mask_after;
+ 	union kvm_ioapic_redirect_entry *e;
+-	unsigned long vcpu_bitmap;
+ 	int old_remote_irr, old_delivery_status, old_dest_id, old_dest_mode;
++	DECLARE_BITMAP(vcpu_bitmap, KVM_MAX_VCPUS);
+ 
+ 	switch (ioapic->ioregsel) {
+ 	case IOAPIC_REG_VERSION:
+@@ -384,9 +384,9 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
+ 			irq.shorthand = APIC_DEST_NOSHORT;
+ 			irq.dest_id = e->fields.dest_id;
+ 			irq.msi_redir_hint = false;
+-			bitmap_zero(&vcpu_bitmap, 16);
++			bitmap_zero(vcpu_bitmap, KVM_MAX_VCPUS);
+ 			kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq,
+-						 &vcpu_bitmap);
++						 vcpu_bitmap);
+ 			if (old_dest_mode != e->fields.dest_mode ||
+ 			    old_dest_id != e->fields.dest_id) {
+ 				/*
+@@ -399,10 +399,10 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
+ 				    kvm_lapic_irq_dest_mode(
+ 					!!e->fields.dest_mode);
+ 				kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq,
+-							 &vcpu_bitmap);
++							 vcpu_bitmap);
+ 			}
+ 			kvm_make_scan_ioapic_request_mask(ioapic->kvm,
+-							  &vcpu_bitmap);
++							  vcpu_bitmap);
+ 		} else {
+ 			kvm_make_scan_ioapic_request(ioapic->kvm);
+ 		}
+diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
+index 8443a675715b0..81cf4babbd0b4 100644
+--- a/arch/x86/kvm/mmu/page_track.c
++++ b/arch/x86/kvm/mmu/page_track.c
+@@ -163,13 +163,13 @@ void kvm_page_track_cleanup(struct kvm *kvm)
+ 	cleanup_srcu_struct(&head->track_srcu);
+ }
+ 
+-void kvm_page_track_init(struct kvm *kvm)
++int kvm_page_track_init(struct kvm *kvm)
+ {
+ 	struct kvm_page_track_notifier_head *head;
+ 
+ 	head = &kvm->arch.track_notifier_head;
+-	init_srcu_struct(&head->track_srcu);
+ 	INIT_HLIST_HEAD(&head->track_notifier_list);
++	return init_srcu_struct(&head->track_srcu);
+ }
+ 
+ /*
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index df17146e841fb..f0946872f5e6d 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -447,7 +447,6 @@ static void nested_prepare_vmcb_control(struct vcpu_svm *svm)
+ 		(svm->nested.ctl.int_ctl & int_ctl_vmcb12_bits) |
+ 		(svm->nested.hsave->control.int_ctl & int_ctl_vmcb01_bits);
+ 
+-	svm->vmcb->control.virt_ext            = svm->nested.ctl.virt_ext;
+ 	svm->vmcb->control.int_vector          = svm->nested.ctl.int_vector;
+ 	svm->vmcb->control.int_state           = svm->nested.ctl.int_state;
+ 	svm->vmcb->control.event_inj           = svm->nested.ctl.event_inj;
+diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
+index f3199bb02f22d..c0d6fee9225fe 100644
+--- a/arch/x86/kvm/vmx/evmcs.c
++++ b/arch/x86/kvm/vmx/evmcs.c
+@@ -352,14 +352,20 @@ void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata)
+ 	switch (msr_index) {
+ 	case MSR_IA32_VMX_EXIT_CTLS:
+ 	case MSR_IA32_VMX_TRUE_EXIT_CTLS:
+-		ctl_high &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
++		ctl_high &= ~EVMCS1_UNSUPPORTED_VMEXIT_CTRL;
+ 		break;
+ 	case MSR_IA32_VMX_ENTRY_CTLS:
+ 	case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
+-		ctl_high &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
++		ctl_high &= ~EVMCS1_UNSUPPORTED_VMENTRY_CTRL;
+ 		break;
+ 	case MSR_IA32_VMX_PROCBASED_CTLS2:
+-		ctl_high &= ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
++		ctl_high &= ~EVMCS1_UNSUPPORTED_2NDEXEC;
++		break;
++	case MSR_IA32_VMX_PINBASED_CTLS:
++		ctl_high &= ~EVMCS1_UNSUPPORTED_PINCTRL;
++		break;
++	case MSR_IA32_VMX_VMFUNC:
++		ctl_low &= ~EVMCS1_UNSUPPORTED_VMFUNC;
+ 		break;
+ 	}
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index fcd8bcb7e0ea9..e0dba0037a85f 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1867,10 +1867,11 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 				    &msr_info->data))
+ 			return 1;
+ 		/*
+-		 * Enlightened VMCS v1 doesn't have certain fields, but buggy
+-		 * Hyper-V versions are still trying to use corresponding
+-		 * features when they are exposed. Filter out the essential
+-		 * minimum.
++		 * Enlightened VMCS v1 doesn't have certain VMCS fields but
++		 * instead of just ignoring the features, different Hyper-V
++		 * versions are either trying to use them and fail or do some
++		 * sanity checking and refuse to boot. Filter all unsupported
++		 * features out.
+ 		 */
+ 		if (!msr_info->host_initiated &&
+ 		    vmx->nested.enlightened_vmcs_enabled)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 75c59ad27e9fd..d65da3b5837b2 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10392,9 +10392,15 @@ void kvm_arch_free_vm(struct kvm *kvm)
+ 
+ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ {
++	int ret;
++
+ 	if (type)
+ 		return -EINVAL;
+ 
++	ret = kvm_page_track_init(kvm);
++	if (ret)
++		return ret;
++
+ 	INIT_HLIST_HEAD(&kvm->arch.mask_notifier_list);
+ 	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
+ 	INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
+@@ -10421,7 +10427,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ 	INIT_DELAYED_WORK(&kvm->arch.kvmclock_sync_work, kvmclock_sync_fn);
+ 
+ 	kvm_hv_init_vm(kvm);
+-	kvm_page_track_init(kvm);
+ 	kvm_mmu_init_vm(kvm);
+ 
+ 	return kvm_x86_ops.vm_init(kvm);
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 0a962cd6bac18..a0a7ead52698c 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -1547,7 +1547,7 @@ static void restore_regs(const struct btf_func_model *m, u8 **prog, int nr_args,
+ }
+ 
+ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
+-			   struct bpf_prog *p, int stack_size, bool mod_ret)
++			   struct bpf_prog *p, int stack_size, bool save_ret)
+ {
+ 	u8 *prog = *pprog;
+ 	int cnt = 0;
+@@ -1573,11 +1573,15 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
+ 	if (emit_call(&prog, p->bpf_func, prog))
+ 		return -EINVAL;
+ 
+-	/* BPF_TRAMP_MODIFY_RETURN trampolines can modify the return
++	/*
++	 * BPF_TRAMP_MODIFY_RETURN trampolines can modify the return
+ 	 * of the previous call which is then passed on the stack to
+ 	 * the next BPF program.
++	 *
++	 * BPF_TRAMP_FENTRY trampoline may need to return the return
++	 * value of BPF_PROG_TYPE_STRUCT_OPS prog.
+ 	 */
+-	if (mod_ret)
++	if (save_ret)
+ 		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+ 
+ 	if (p->aux->sleepable) {
+@@ -1645,13 +1649,15 @@ static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond)
+ }
+ 
+ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
+-		      struct bpf_tramp_progs *tp, int stack_size)
++		      struct bpf_tramp_progs *tp, int stack_size,
++		      bool save_ret)
+ {
+ 	int i;
+ 	u8 *prog = *pprog;
+ 
+ 	for (i = 0; i < tp->nr_progs; i++) {
+-		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size, false))
++		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size,
++				    save_ret))
+ 			return -EINVAL;
+ 	}
+ 	*pprog = prog;
+@@ -1694,6 +1700,23 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
+ 	return 0;
+ }
+ 
++static bool is_valid_bpf_tramp_flags(unsigned int flags)
++{
++	if ((flags & BPF_TRAMP_F_RESTORE_REGS) &&
++	    (flags & BPF_TRAMP_F_SKIP_FRAME))
++		return false;
++
++	/*
++	 * BPF_TRAMP_F_RET_FENTRY_RET is only used by bpf_struct_ops,
++	 * and it must be used alone.
++	 */
++	if ((flags & BPF_TRAMP_F_RET_FENTRY_RET) &&
++	    (flags & ~BPF_TRAMP_F_RET_FENTRY_RET))
++		return false;
++
++	return true;
++}
++
+ /* Example:
+  * __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev);
+  * its 'struct btf_func_model' will be nr_args=2
+@@ -1766,17 +1789,19 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
+ 	struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
+ 	u8 **branches = NULL;
+ 	u8 *prog;
++	bool save_ret;
+ 
+ 	/* x86-64 supports up to 6 arguments. 7+ can be added in the future */
+ 	if (nr_args > 6)
+ 		return -ENOTSUPP;
+ 
+-	if ((flags & BPF_TRAMP_F_RESTORE_REGS) &&
+-	    (flags & BPF_TRAMP_F_SKIP_FRAME))
++	if (!is_valid_bpf_tramp_flags(flags))
+ 		return -EINVAL;
+ 
+-	if (flags & BPF_TRAMP_F_CALL_ORIG)
+-		stack_size += 8; /* room for return value of orig_call */
++	/* room for return value of orig_call or fentry prog */
++	save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
++	if (save_ret)
++		stack_size += 8;
+ 
+ 	if (flags & BPF_TRAMP_F_SKIP_FRAME)
+ 		/* skip patched call instruction and point orig_call to actual
+@@ -1803,7 +1828,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
+ 	}
+ 
+ 	if (fentry->nr_progs)
+-		if (invoke_bpf(m, &prog, fentry, stack_size))
++		if (invoke_bpf(m, &prog, fentry, stack_size,
++			       flags & BPF_TRAMP_F_RET_FENTRY_RET))
+ 			return -EINVAL;
+ 
+ 	if (fmod_ret->nr_progs) {
+@@ -1850,7 +1876,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
+ 	}
+ 
+ 	if (fexit->nr_progs)
+-		if (invoke_bpf(m, &prog, fexit, stack_size)) {
++		if (invoke_bpf(m, &prog, fexit, stack_size, false)) {
+ 			ret = -EINVAL;
+ 			goto cleanup;
+ 		}
+@@ -1870,9 +1896,10 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
+ 			ret = -EINVAL;
+ 			goto cleanup;
+ 		}
+-		/* restore original return value back into RAX */
+-		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
+ 	}
++	/* restore return value of orig_call or fentry prog back into RAX */
++	if (save_ret)
++		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
+ 
+ 	EMIT1(0x5B); /* pop rbx */
+ 	EMIT1(0xC9); /* leave */
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 65c200e0ecb59..b8c2ddc01aec3 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2526,15 +2526,6 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+ 	 * are likely to increase the throughput.
+ 	 */
+ 	bfqq->new_bfqq = new_bfqq;
+-	/*
+-	 * The above assignment schedules the following redirections:
+-	 * each time some I/O for bfqq arrives, the process that
+-	 * generated that I/O is disassociated from bfqq and
+-	 * associated with new_bfqq. Here we increases new_bfqq->ref
+-	 * in advance, adding the number of processes that are
+-	 * expected to be associated with new_bfqq as they happen to
+-	 * issue I/O.
+-	 */
+ 	new_bfqq->ref += process_refs;
+ 	return new_bfqq;
+ }
+@@ -2594,10 +2585,6 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ 	struct bfq_queue *in_service_bfqq, *new_bfqq;
+ 
+-	/* if a merge has already been setup, then proceed with that first */
+-	if (bfqq->new_bfqq)
+-		return bfqq->new_bfqq;
+-
+ 	/*
+ 	 * Do not perform queue merging if the device is non
+ 	 * rotational and performs internal queueing. In fact, such a
+@@ -2652,6 +2639,9 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	if (bfq_too_late_for_merging(bfqq))
+ 		return NULL;
+ 
++	if (bfqq->new_bfqq)
++		return bfqq->new_bfqq;
++
+ 	if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
+ 		return NULL;
+ 
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index d061bff5cc96c..99e23a5df0267 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -3018,6 +3018,18 @@ static int acpi_nfit_register_region(struct acpi_nfit_desc *acpi_desc,
+ 		ndr_desc->target_node = NUMA_NO_NODE;
+ 	}
+ 
++	/* Fallback to address based numa information if node lookup failed */
++	if (ndr_desc->numa_node == NUMA_NO_NODE) {
++		ndr_desc->numa_node = memory_add_physaddr_to_nid(spa->address);
++		dev_info(acpi_desc->dev, "changing numa node from %d to %d for nfit region [%pa-%pa]",
++			NUMA_NO_NODE, ndr_desc->numa_node, &res.start, &res.end);
++	}
++	if (ndr_desc->target_node == NUMA_NO_NODE) {
++		ndr_desc->target_node = phys_to_target_node(spa->address);
++		dev_info(acpi_desc->dev, "changing target node from %d to %d for nfit region [%pa-%pa]",
++			NUMA_NO_NODE, ndr_desc->numa_node, &res.start, &res.end);
++	}
++
+ 	/*
+ 	 * Persistence domain bits are hierarchical, if
+ 	 * ACPI_NFIT_CAPABILITY_CACHE_FLUSH is set then
+diff --git a/drivers/cpufreq/cpufreq_governor_attr_set.c b/drivers/cpufreq/cpufreq_governor_attr_set.c
+index 66b05a326910e..a6f365b9cc1ad 100644
+--- a/drivers/cpufreq/cpufreq_governor_attr_set.c
++++ b/drivers/cpufreq/cpufreq_governor_attr_set.c
+@@ -74,8 +74,8 @@ unsigned int gov_attr_set_put(struct gov_attr_set *attr_set, struct list_head *l
+ 	if (count)
+ 		return count;
+ 
+-	kobject_put(&attr_set->kobj);
+ 	mutex_destroy(&attr_set->update_lock);
++	kobject_put(&attr_set->kobj);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(gov_attr_set_put);
+diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
+index d6a8f4e4b14a8..c15625e8ff66e 100644
+--- a/drivers/crypto/ccp/ccp-ops.c
++++ b/drivers/crypto/ccp/ccp-ops.c
+@@ -778,7 +778,7 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
+ 				    in_place ? DMA_BIDIRECTIONAL
+ 					     : DMA_TO_DEVICE);
+ 		if (ret)
+-			goto e_ctx;
++			goto e_aad;
+ 
+ 		if (in_place) {
+ 			dst = src;
+@@ -863,7 +863,7 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
+ 	op.u.aes.size = 0;
+ 	ret = cmd_q->ccp->vdata->perform->aes(&op);
+ 	if (ret)
+-		goto e_dst;
++		goto e_final_wa;
+ 
+ 	if (aes->action == CCP_AES_ACTION_ENCRYPT) {
+ 		/* Put the ciphered tag after the ciphertext. */
+@@ -873,17 +873,19 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
+ 		ret = ccp_init_dm_workarea(&tag, cmd_q, authsize,
+ 					   DMA_BIDIRECTIONAL);
+ 		if (ret)
+-			goto e_tag;
++			goto e_final_wa;
+ 		ret = ccp_set_dm_area(&tag, 0, p_tag, 0, authsize);
+-		if (ret)
+-			goto e_tag;
++		if (ret) {
++			ccp_dm_free(&tag);
++			goto e_final_wa;
++		}
+ 
+ 		ret = crypto_memneq(tag.address, final_wa.address,
+ 				    authsize) ? -EBADMSG : 0;
+ 		ccp_dm_free(&tag);
+ 	}
+ 
+-e_tag:
++e_final_wa:
+ 	ccp_dm_free(&final_wa);
+ 
+ e_dst:
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 7cc7d137133aa..3a3aeef1017f5 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -467,15 +467,8 @@ static int pca953x_gpio_get_value(struct gpio_chip *gc, unsigned off)
+ 	mutex_lock(&chip->i2c_lock);
+ 	ret = regmap_read(chip->regmap, inreg, &reg_val);
+ 	mutex_unlock(&chip->i2c_lock);
+-	if (ret < 0) {
+-		/*
+-		 * NOTE:
+-		 * diagnostic already emitted; that's all we should
+-		 * do unless gpio_*_value_cansleep() calls become different
+-		 * from their nonsleeping siblings (and report faults).
+-		 */
+-		return 0;
+-	}
++	if (ret < 0)
++		return ret;
+ 
+ 	return !!(reg_val & bit);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index ac3a88197b2fc..c7d6a677d86d8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3542,7 +3542,7 @@ static int gfx_v9_0_mqd_init(struct amdgpu_ring *ring)
+ 
+ 	/* set static priority for a queue/ring */
+ 	gfx_v9_0_mqd_set_priority(ring, mqd);
+-	mqd->cp_hqd_quantum = RREG32(mmCP_HQD_QUANTUM);
++	mqd->cp_hqd_quantum = RREG32_SOC15(GC, 0, mmCP_HQD_QUANTUM);
+ 
+ 	/* map_queues packet doesn't need activate the queue,
+ 	 * so only kiq need set this field.
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index ce21a21ddb235..d9525fbedad2d 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -951,6 +951,7 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 
+ 	init_data.asic_id.pci_revision_id = adev->pdev->revision;
+ 	init_data.asic_id.hw_internal_rev = adev->external_rev_id;
++	init_data.asic_id.chip_id = adev->pdev->device;
+ 
+ 	init_data.asic_id.vram_width = adev->gmc.vram_width;
+ 	/* TODO: initialize init_data.asic_id.vram_type here!!!! */
+diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
+index d8fef42ca38e1..896389f930294 100644
+--- a/drivers/gpu/drm/i915/i915_request.c
++++ b/drivers/gpu/drm/i915/i915_request.c
+@@ -776,8 +776,6 @@ static void __i915_request_ctor(void *arg)
+ 	i915_sw_fence_init(&rq->submit, submit_notify);
+ 	i915_sw_fence_init(&rq->semaphore, semaphore_notify);
+ 
+-	dma_fence_init(&rq->fence, &i915_fence_ops, &rq->lock, 0, 0);
+-
+ 	rq->capture_list = NULL;
+ 
+ 	init_llist_head(&rq->execute_cb);
+@@ -840,17 +838,12 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
+ 	rq->ring = ce->ring;
+ 	rq->execution_mask = ce->engine->mask;
+ 
+-	kref_init(&rq->fence.refcount);
+-	rq->fence.flags = 0;
+-	rq->fence.error = 0;
+-	INIT_LIST_HEAD(&rq->fence.cb_list);
+-
+ 	ret = intel_timeline_get_seqno(tl, rq, &seqno);
+ 	if (ret)
+ 		goto err_free;
+ 
+-	rq->fence.context = tl->fence_context;
+-	rq->fence.seqno = seqno;
++	dma_fence_init(&rq->fence, &i915_fence_ops, &rq->lock,
++		       tl->fence_context, seqno);
+ 
+ 	RCU_INIT_POINTER(rq->timeline, tl);
+ 	RCU_INIT_POINTER(rq->hwsp_cacheline, tl->hwsp_cacheline);
+diff --git a/drivers/hid/hid-betopff.c b/drivers/hid/hid-betopff.c
+index 0790fbd3fc9a2..467d789f9bc2d 100644
+--- a/drivers/hid/hid-betopff.c
++++ b/drivers/hid/hid-betopff.c
+@@ -56,15 +56,22 @@ static int betopff_init(struct hid_device *hid)
+ {
+ 	struct betopff_device *betopff;
+ 	struct hid_report *report;
+-	struct hid_input *hidinput =
+-			list_first_entry(&hid->inputs, struct hid_input, list);
++	struct hid_input *hidinput;
+ 	struct list_head *report_list =
+ 			&hid->report_enum[HID_OUTPUT_REPORT].report_list;
+-	struct input_dev *dev = hidinput->input;
++	struct input_dev *dev;
+ 	int field_count = 0;
+ 	int error;
+ 	int i, j;
+ 
++	if (list_empty(&hid->inputs)) {
++		hid_err(hid, "no inputs found\n");
++		return -ENODEV;
++	}
++
++	hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
++	dev = hidinput->input;
++
+ 	if (list_empty(report_list)) {
+ 		hid_err(hid, "no output reports found\n");
+ 		return -ENODEV;
+diff --git a/drivers/hid/hid-u2fzero.c b/drivers/hid/hid-u2fzero.c
+index 95e0807878c7e..d70cd3d7f583b 100644
+--- a/drivers/hid/hid-u2fzero.c
++++ b/drivers/hid/hid-u2fzero.c
+@@ -198,7 +198,9 @@ static int u2fzero_rng_read(struct hwrng *rng, void *data,
+ 	}
+ 
+ 	ret = u2fzero_recv(dev, &req, &resp);
+-	if (ret < 0)
++
++	/* ignore errors or packets without data */
++	if (ret < offsetof(struct u2f_hid_msg, init.data))
+ 		return 0;
+ 
+ 	/* only take the minimum amount of data it is safe to take */
+diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
+index 8d4ac4b9fb9da..009a0469d54f6 100644
+--- a/drivers/hid/usbhid/hid-core.c
++++ b/drivers/hid/usbhid/hid-core.c
+@@ -503,7 +503,7 @@ static void hid_ctrl(struct urb *urb)
+ 
+ 	if (unplug) {
+ 		usbhid->ctrltail = usbhid->ctrlhead;
+-	} else {
++	} else if (usbhid->ctrlhead != usbhid->ctrltail) {
+ 		usbhid->ctrltail = (usbhid->ctrltail + 1) & (HID_CONTROL_FIFO_SIZE - 1);
+ 
+ 		if (usbhid->ctrlhead != usbhid->ctrltail &&
+@@ -1221,9 +1221,20 @@ static void usbhid_stop(struct hid_device *hid)
+ 	mutex_lock(&usbhid->mutex);
+ 
+ 	clear_bit(HID_STARTED, &usbhid->iofl);
++
+ 	spin_lock_irq(&usbhid->lock);	/* Sync with error and led handlers */
+ 	set_bit(HID_DISCONNECTED, &usbhid->iofl);
++	while (usbhid->ctrltail != usbhid->ctrlhead) {
++		if (usbhid->ctrl[usbhid->ctrltail].dir == USB_DIR_OUT) {
++			kfree(usbhid->ctrl[usbhid->ctrltail].raw_report);
++			usbhid->ctrl[usbhid->ctrltail].raw_report = NULL;
++		}
++
++		usbhid->ctrltail = (usbhid->ctrltail + 1) &
++			(HID_CONTROL_FIFO_SIZE - 1);
++	}
+ 	spin_unlock_irq(&usbhid->lock);
++
+ 	usb_kill_urb(usbhid->urbin);
+ 	usb_kill_urb(usbhid->urbout);
+ 	usb_kill_urb(usbhid->urbctrl);
+diff --git a/drivers/hwmon/mlxreg-fan.c b/drivers/hwmon/mlxreg-fan.c
+index ed8d59d4eecb3..bd8f5a3aaad9c 100644
+--- a/drivers/hwmon/mlxreg-fan.c
++++ b/drivers/hwmon/mlxreg-fan.c
+@@ -291,8 +291,8 @@ static int mlxreg_fan_set_cur_state(struct thermal_cooling_device *cdev,
+ {
+ 	struct mlxreg_fan *fan = cdev->devdata;
+ 	unsigned long cur_state;
++	int i, config = 0;
+ 	u32 regval;
+-	int i;
+ 	int err;
+ 
+ 	/*
+@@ -305,6 +305,12 @@ static int mlxreg_fan_set_cur_state(struct thermal_cooling_device *cdev,
+ 	 * overwritten.
+ 	 */
+ 	if (state >= MLXREG_FAN_SPEED_MIN && state <= MLXREG_FAN_SPEED_MAX) {
++		/*
++		 * This is configuration change, which is only supported through sysfs.
++		 * For configuration non-zero value is to be returned to avoid thermal
++		 * statistics update.
++		 */
++		config = 1;
+ 		state -= MLXREG_FAN_MAX_STATE;
+ 		for (i = 0; i < state; i++)
+ 			fan->cooling_levels[i] = state;
+@@ -319,7 +325,7 @@ static int mlxreg_fan_set_cur_state(struct thermal_cooling_device *cdev,
+ 
+ 		cur_state = MLXREG_FAN_PWM_DUTY2STATE(regval);
+ 		if (state < cur_state)
+-			return 0;
++			return config;
+ 
+ 		state = cur_state;
+ 	}
+@@ -335,7 +341,7 @@ static int mlxreg_fan_set_cur_state(struct thermal_cooling_device *cdev,
+ 		dev_err(fan->dev, "Failed to write PWM duty\n");
+ 		return err;
+ 	}
+-	return 0;
++	return config;
+ }
+ 
+ static const struct thermal_cooling_device_ops mlxreg_fan_cooling_ops = {
+diff --git a/drivers/hwmon/pmbus/mp2975.c b/drivers/hwmon/pmbus/mp2975.c
+index 1c3e2a9453b12..a41fe06e0ad4c 100644
+--- a/drivers/hwmon/pmbus/mp2975.c
++++ b/drivers/hwmon/pmbus/mp2975.c
+@@ -54,7 +54,7 @@
+ 
+ #define MP2975_RAIL2_FUNC	(PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT | \
+ 				 PMBUS_HAVE_IOUT | PMBUS_HAVE_STATUS_IOUT | \
+-				 PMBUS_PHASE_VIRTUAL)
++				 PMBUS_HAVE_POUT | PMBUS_PHASE_VIRTUAL)
+ 
+ struct mp2975_data {
+ 	struct pmbus_driver_info info;
+diff --git a/drivers/hwmon/tmp421.c b/drivers/hwmon/tmp421.c
+index ede66ea6a730d..b963a369c5ab3 100644
+--- a/drivers/hwmon/tmp421.c
++++ b/drivers/hwmon/tmp421.c
+@@ -100,71 +100,81 @@ struct tmp421_data {
+ 	s16 temp[4];
+ };
+ 
+-static int temp_from_s16(s16 reg)
++static int temp_from_raw(u16 reg, bool extended)
+ {
+ 	/* Mask out status bits */
+ 	int temp = reg & ~0xf;
+ 
+-	return (temp * 1000 + 128) / 256;
+-}
+-
+-static int temp_from_u16(u16 reg)
+-{
+-	/* Mask out status bits */
+-	int temp = reg & ~0xf;
+-
+-	/* Add offset for extended temperature range. */
+-	temp -= 64 * 256;
++	if (extended)
++		temp = temp - 64 * 256;
++	else
++		temp = (s16)temp;
+ 
+-	return (temp * 1000 + 128) / 256;
++	return DIV_ROUND_CLOSEST(temp * 1000, 256);
+ }
+ 
+-static struct tmp421_data *tmp421_update_device(struct device *dev)
++static int tmp421_update_device(struct tmp421_data *data)
+ {
+-	struct tmp421_data *data = dev_get_drvdata(dev);
+ 	struct i2c_client *client = data->client;
++	int ret = 0;
+ 	int i;
+ 
+ 	mutex_lock(&data->update_lock);
+ 
+ 	if (time_after(jiffies, data->last_updated + (HZ / 2)) ||
+ 	    !data->valid) {
+-		data->config = i2c_smbus_read_byte_data(client,
+-			TMP421_CONFIG_REG_1);
++		ret = i2c_smbus_read_byte_data(client, TMP421_CONFIG_REG_1);
++		if (ret < 0)
++			goto exit;
++		data->config = ret;
+ 
+ 		for (i = 0; i < data->channels; i++) {
+-			data->temp[i] = i2c_smbus_read_byte_data(client,
+-				TMP421_TEMP_MSB[i]) << 8;
+-			data->temp[i] |= i2c_smbus_read_byte_data(client,
+-				TMP421_TEMP_LSB[i]);
++			ret = i2c_smbus_read_byte_data(client, TMP421_TEMP_MSB[i]);
++			if (ret < 0)
++				goto exit;
++			data->temp[i] = ret << 8;
++
++			ret = i2c_smbus_read_byte_data(client, TMP421_TEMP_LSB[i]);
++			if (ret < 0)
++				goto exit;
++			data->temp[i] |= ret;
+ 		}
+ 		data->last_updated = jiffies;
+ 		data->valid = 1;
+ 	}
+ 
++exit:
+ 	mutex_unlock(&data->update_lock);
+ 
+-	return data;
++	if (ret < 0) {
++		data->valid = 0;
++		return ret;
++	}
++
++	return 0;
+ }
+ 
+ static int tmp421_read(struct device *dev, enum hwmon_sensor_types type,
+ 		       u32 attr, int channel, long *val)
+ {
+-	struct tmp421_data *tmp421 = tmp421_update_device(dev);
++	struct tmp421_data *tmp421 = dev_get_drvdata(dev);
++	int ret = 0;
++
++	ret = tmp421_update_device(tmp421);
++	if (ret)
++		return ret;
+ 
+ 	switch (attr) {
+ 	case hwmon_temp_input:
+-		if (tmp421->config & TMP421_CONFIG_RANGE)
+-			*val = temp_from_u16(tmp421->temp[channel]);
+-		else
+-			*val = temp_from_s16(tmp421->temp[channel]);
++		*val = temp_from_raw(tmp421->temp[channel],
++				     tmp421->config & TMP421_CONFIG_RANGE);
+ 		return 0;
+ 	case hwmon_temp_fault:
+ 		/*
+-		 * The OPEN bit signals a fault. This is bit 0 of the temperature
+-		 * register (low byte).
++		 * Any of OPEN or /PVLD bits indicate a hardware mulfunction
++		 * and the conversion result may be incorrect
+ 		 */
+-		*val = tmp421->temp[channel] & 0x01;
++		*val = !!(tmp421->temp[channel] & 0x03);
+ 		return 0;
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -177,9 +187,6 @@ static umode_t tmp421_is_visible(const void *data, enum hwmon_sensor_types type,
+ {
+ 	switch (attr) {
+ 	case hwmon_temp_fault:
+-		if (channel == 0)
+-			return 0;
+-		return 0444;
+ 	case hwmon_temp_input:
+ 		return 0444;
+ 	default:
+diff --git a/drivers/hwmon/w83791d.c b/drivers/hwmon/w83791d.c
+index 37b25a1474c46..3c1be2c11fdf0 100644
+--- a/drivers/hwmon/w83791d.c
++++ b/drivers/hwmon/w83791d.c
+@@ -273,9 +273,6 @@ struct w83791d_data {
+ 	char valid;			/* !=0 if following fields are valid */
+ 	unsigned long last_updated;	/* In jiffies */
+ 
+-	/* array of 2 pointers to subclients */
+-	struct i2c_client *lm75[2];
+-
+ 	/* volts */
+ 	u8 in[NUMBER_OF_VIN];		/* Register value */
+ 	u8 in_max[NUMBER_OF_VIN];	/* Register value */
+@@ -1257,7 +1254,6 @@ static const struct attribute_group w83791d_group_fanpwm45 = {
+ static int w83791d_detect_subclients(struct i2c_client *client)
+ {
+ 	struct i2c_adapter *adapter = client->adapter;
+-	struct w83791d_data *data = i2c_get_clientdata(client);
+ 	int address = client->addr;
+ 	int i, id;
+ 	u8 val;
+@@ -1280,22 +1276,19 @@ static int w83791d_detect_subclients(struct i2c_client *client)
+ 	}
+ 
+ 	val = w83791d_read(client, W83791D_REG_I2C_SUBADDR);
+-	if (!(val & 0x08))
+-		data->lm75[0] = devm_i2c_new_dummy_device(&client->dev, adapter,
+-							  0x48 + (val & 0x7));
+-	if (!(val & 0x80)) {
+-		if (!IS_ERR(data->lm75[0]) &&
+-				((val & 0x7) == ((val >> 4) & 0x7))) {
+-			dev_err(&client->dev,
+-				"duplicate addresses 0x%x, "
+-				"use force_subclient\n",
+-				data->lm75[0]->addr);
+-			return -ENODEV;
+-		}
+-		data->lm75[1] = devm_i2c_new_dummy_device(&client->dev, adapter,
+-							  0x48 + ((val >> 4) & 0x7));
++
++	if (!(val & 0x88) && (val & 0x7) == ((val >> 4) & 0x7)) {
++		dev_err(&client->dev,
++			"duplicate addresses 0x%x, use force_subclient\n", 0x48 + (val & 0x7));
++		return -ENODEV;
+ 	}
+ 
++	if (!(val & 0x08))
++		devm_i2c_new_dummy_device(&client->dev, adapter, 0x48 + (val & 0x7));
++
++	if (!(val & 0x80))
++		devm_i2c_new_dummy_device(&client->dev, adapter, 0x48 + ((val >> 4) & 0x7));
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwmon/w83792d.c b/drivers/hwmon/w83792d.c
+index abd5c3a722b91..1f175f3813506 100644
+--- a/drivers/hwmon/w83792d.c
++++ b/drivers/hwmon/w83792d.c
+@@ -264,9 +264,6 @@ struct w83792d_data {
+ 	char valid;		/* !=0 if following fields are valid */
+ 	unsigned long last_updated;	/* In jiffies */
+ 
+-	/* array of 2 pointers to subclients */
+-	struct i2c_client *lm75[2];
+-
+ 	u8 in[9];		/* Register value */
+ 	u8 in_max[9];		/* Register value */
+ 	u8 in_min[9];		/* Register value */
+@@ -927,7 +924,6 @@ w83792d_detect_subclients(struct i2c_client *new_client)
+ 	int address = new_client->addr;
+ 	u8 val;
+ 	struct i2c_adapter *adapter = new_client->adapter;
+-	struct w83792d_data *data = i2c_get_clientdata(new_client);
+ 
+ 	id = i2c_adapter_id(adapter);
+ 	if (force_subclients[0] == id && force_subclients[1] == address) {
+@@ -946,21 +942,19 @@ w83792d_detect_subclients(struct i2c_client *new_client)
+ 	}
+ 
+ 	val = w83792d_read_value(new_client, W83792D_REG_I2C_SUBADDR);
+-	if (!(val & 0x08))
+-		data->lm75[0] = devm_i2c_new_dummy_device(&new_client->dev, adapter,
+-							  0x48 + (val & 0x7));
+-	if (!(val & 0x80)) {
+-		if (!IS_ERR(data->lm75[0]) &&
+-			((val & 0x7) == ((val >> 4) & 0x7))) {
+-			dev_err(&new_client->dev,
+-				"duplicate addresses 0x%x, use force_subclient\n",
+-				data->lm75[0]->addr);
+-			return -ENODEV;
+-		}
+-		data->lm75[1] = devm_i2c_new_dummy_device(&new_client->dev, adapter,
+-							  0x48 + ((val >> 4) & 0x7));
++
++	if (!(val & 0x88) && (val & 0x7) == ((val >> 4) & 0x7)) {
++		dev_err(&new_client->dev,
++			"duplicate addresses 0x%x, use force_subclient\n", 0x48 + (val & 0x7));
++		return -ENODEV;
+ 	}
+ 
++	if (!(val & 0x08))
++		devm_i2c_new_dummy_device(&new_client->dev, adapter, 0x48 + (val & 0x7));
++
++	if (!(val & 0x80))
++		devm_i2c_new_dummy_device(&new_client->dev, adapter, 0x48 + ((val >> 4) & 0x7));
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwmon/w83793.c b/drivers/hwmon/w83793.c
+index e7d0484eabe4c..1d2854de1cfc9 100644
+--- a/drivers/hwmon/w83793.c
++++ b/drivers/hwmon/w83793.c
+@@ -202,7 +202,6 @@ static inline s8 TEMP_TO_REG(long val, s8 min, s8 max)
+ }
+ 
+ struct w83793_data {
+-	struct i2c_client *lm75[2];
+ 	struct device *hwmon_dev;
+ 	struct mutex update_lock;
+ 	char valid;			/* !=0 if following fields are valid */
+@@ -1566,7 +1565,6 @@ w83793_detect_subclients(struct i2c_client *client)
+ 	int address = client->addr;
+ 	u8 tmp;
+ 	struct i2c_adapter *adapter = client->adapter;
+-	struct w83793_data *data = i2c_get_clientdata(client);
+ 
+ 	id = i2c_adapter_id(adapter);
+ 	if (force_subclients[0] == id && force_subclients[1] == address) {
+@@ -1586,21 +1584,19 @@ w83793_detect_subclients(struct i2c_client *client)
+ 	}
+ 
+ 	tmp = w83793_read_value(client, W83793_REG_I2C_SUBADDR);
+-	if (!(tmp & 0x08))
+-		data->lm75[0] = devm_i2c_new_dummy_device(&client->dev, adapter,
+-							  0x48 + (tmp & 0x7));
+-	if (!(tmp & 0x80)) {
+-		if (!IS_ERR(data->lm75[0])
+-		    && ((tmp & 0x7) == ((tmp >> 4) & 0x7))) {
+-			dev_err(&client->dev,
+-				"duplicate addresses 0x%x, "
+-				"use force_subclients\n", data->lm75[0]->addr);
+-			return -ENODEV;
+-		}
+-		data->lm75[1] = devm_i2c_new_dummy_device(&client->dev, adapter,
+-							  0x48 + ((tmp >> 4) & 0x7));
++
++	if (!(tmp & 0x88) && (tmp & 0x7) == ((tmp >> 4) & 0x7)) {
++		dev_err(&client->dev,
++			"duplicate addresses 0x%x, use force_subclient\n", 0x48 + (tmp & 0x7));
++		return -ENODEV;
+ 	}
+ 
++	if (!(tmp & 0x08))
++		devm_i2c_new_dummy_device(&client->dev, adapter, 0x48 + (tmp & 0x7));
++
++	if (!(tmp & 0x80))
++		devm_i2c_new_dummy_device(&client->dev, adapter, 0x48 + ((tmp >> 4) & 0x7));
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 34b94e5253905..8e54184566f7f 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1750,15 +1750,16 @@ static void cma_cancel_route(struct rdma_id_private *id_priv)
+ 	}
+ }
+ 
+-static void cma_cancel_listens(struct rdma_id_private *id_priv)
++static void _cma_cancel_listens(struct rdma_id_private *id_priv)
+ {
+ 	struct rdma_id_private *dev_id_priv;
+ 
++	lockdep_assert_held(&lock);
++
+ 	/*
+ 	 * Remove from listen_any_list to prevent added devices from spawning
+ 	 * additional listen requests.
+ 	 */
+-	mutex_lock(&lock);
+ 	list_del(&id_priv->list);
+ 
+ 	while (!list_empty(&id_priv->listen_list)) {
+@@ -1772,6 +1773,12 @@ static void cma_cancel_listens(struct rdma_id_private *id_priv)
+ 		rdma_destroy_id(&dev_id_priv->id);
+ 		mutex_lock(&lock);
+ 	}
++}
++
++static void cma_cancel_listens(struct rdma_id_private *id_priv)
++{
++	mutex_lock(&lock);
++	_cma_cancel_listens(id_priv);
+ 	mutex_unlock(&lock);
+ }
+ 
+@@ -1814,6 +1821,8 @@ static void cma_release_port(struct rdma_id_private *id_priv)
+ static void destroy_mc(struct rdma_id_private *id_priv,
+ 		       struct cma_multicast *mc)
+ {
++	bool send_only = mc->join_state == BIT(SENDONLY_FULLMEMBER_JOIN);
++
+ 	if (rdma_cap_ib_mcast(id_priv->id.device, id_priv->id.port_num))
+ 		ib_sa_free_multicast(mc->sa_mc);
+ 
+@@ -1830,7 +1839,10 @@ static void destroy_mc(struct rdma_id_private *id_priv,
+ 
+ 			cma_set_mgid(id_priv, (struct sockaddr *)&mc->addr,
+ 				     &mgid);
+-			cma_igmp_send(ndev, &mgid, false);
++
++			if (!send_only)
++				cma_igmp_send(ndev, &mgid, false);
++
+ 			dev_put(ndev);
+ 		}
+ 
+@@ -2577,7 +2589,7 @@ static int cma_listen_on_all(struct rdma_id_private *id_priv)
+ 	return 0;
+ 
+ err_listen:
+-	list_del(&id_priv->list);
++	_cma_cancel_listens(id_priv);
+ 	mutex_unlock(&lock);
+ 	if (to_destroy)
+ 		rdma_destroy_id(&to_destroy->id);
+@@ -3732,9 +3744,13 @@ int rdma_listen(struct rdma_cm_id *id, int backlog)
+ 	int ret;
+ 
+ 	if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_BOUND, RDMA_CM_LISTEN)) {
++		struct sockaddr_in any_in = {
++			.sin_family = AF_INET,
++			.sin_addr.s_addr = htonl(INADDR_ANY),
++		};
++
+ 		/* For a well behaved ULP state will be RDMA_CM_IDLE */
+-		id->route.addr.src_addr.ss_family = AF_INET;
+-		ret = rdma_bind_addr(id, cma_src_addr(id_priv));
++		ret = rdma_bind_addr(id, (struct sockaddr *)&any_in);
+ 		if (ret)
+ 			return ret;
+ 		if (WARN_ON(!cma_comp_exch(id_priv, RDMA_CM_ADDR_BOUND,
+diff --git a/drivers/infiniband/hw/hns/hns_roce_alloc.c b/drivers/infiniband/hw/hns/hns_roce_alloc.c
+index a6b23dec1adcf..5b2baf89d1109 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_alloc.c
++++ b/drivers/infiniband/hw/hns/hns_roce_alloc.c
+@@ -240,7 +240,7 @@ int hns_roce_get_kmem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
+ 	end = start + buf_cnt;
+ 	if (end > buf->npages) {
+ 		dev_err(hr_dev->dev,
+-			"Failed to check kmem bufs, end %d + %d total %d!\n",
++			"failed to check kmem bufs, end %d + %d total %u!\n",
+ 			start, buf_cnt, buf->npages);
+ 		return -EINVAL;
+ 	}
+@@ -262,7 +262,7 @@ int hns_roce_get_umem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
+ 	u64 addr;
+ 
+ 	if (page_shift < HNS_HW_PAGE_SHIFT) {
+-		dev_err(hr_dev->dev, "Failed to check umem page shift %d!\n",
++		dev_err(hr_dev->dev, "failed to check umem page shift %u!\n",
+ 			page_shift);
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
+index da346129f6e9e..8a6bded9c11cb 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
+@@ -50,29 +50,29 @@ static int alloc_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
+ 
+ 	ret = hns_roce_mtr_find(hr_dev, &hr_cq->mtr, 0, mtts, ARRAY_SIZE(mtts),
+ 				&dma_handle);
+-	if (ret < 1) {
+-		ibdev_err(ibdev, "Failed to find CQ mtr\n");
++	if (!ret) {
++		ibdev_err(ibdev, "failed to find CQ mtr, ret = %d.\n", ret);
+ 		return -EINVAL;
+ 	}
+ 
+ 	cq_table = &hr_dev->cq_table;
+ 	ret = hns_roce_bitmap_alloc(&cq_table->bitmap, &hr_cq->cqn);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to alloc CQ bitmap, err %d\n", ret);
++		ibdev_err(ibdev, "failed to alloc CQ bitmap, ret = %d.\n", ret);
+ 		return ret;
+ 	}
+ 
+ 	/* Get CQC memory HEM(Hardware Entry Memory) table */
+ 	ret = hns_roce_table_get(hr_dev, &cq_table->table, hr_cq->cqn);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to get CQ(0x%lx) context, err %d\n",
++		ibdev_err(ibdev, "failed to get CQ(0x%lx) context, ret = %d.\n",
+ 			  hr_cq->cqn, ret);
+ 		goto err_out;
+ 	}
+ 
+ 	ret = xa_err(xa_store(&cq_table->array, hr_cq->cqn, hr_cq, GFP_KERNEL));
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to xa_store CQ\n");
++		ibdev_err(ibdev, "failed to xa_store CQ, ret = %d.\n", ret);
+ 		goto err_put;
+ 	}
+ 
+@@ -91,7 +91,7 @@ static int alloc_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
+ 	hns_roce_free_cmd_mailbox(hr_dev, mailbox);
+ 	if (ret) {
+ 		ibdev_err(ibdev,
+-			  "Failed to send create cmd for CQ(0x%lx), err %d\n",
++			  "failed to send create cmd for CQ(0x%lx), ret = %d.\n",
+ 			  hr_cq->cqn, ret);
+ 		goto err_xa;
+ 	}
+@@ -147,7 +147,7 @@ static int alloc_cq_buf(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq,
+ {
+ 	struct ib_device *ibdev = &hr_dev->ib_dev;
+ 	struct hns_roce_buf_attr buf_attr = {};
+-	int err;
++	int ret;
+ 
+ 	buf_attr.page_shift = hr_dev->caps.cqe_buf_pg_sz + HNS_HW_PAGE_SHIFT;
+ 	buf_attr.region[0].size = hr_cq->cq_depth * hr_cq->cqe_size;
+@@ -155,13 +155,13 @@ static int alloc_cq_buf(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq,
+ 	buf_attr.region_count = 1;
+ 	buf_attr.fixed_page = true;
+ 
+-	err = hns_roce_mtr_create(hr_dev, &hr_cq->mtr, &buf_attr,
++	ret = hns_roce_mtr_create(hr_dev, &hr_cq->mtr, &buf_attr,
+ 				  hr_dev->caps.cqe_ba_pg_sz + HNS_HW_PAGE_SHIFT,
+ 				  udata, addr);
+-	if (err)
+-		ibdev_err(ibdev, "Failed to alloc CQ mtr, err %d\n", err);
++	if (ret)
++		ibdev_err(ibdev, "failed to alloc CQ mtr, ret = %d.\n", ret);
+ 
+-	return err;
++	return ret;
+ }
+ 
+ static void free_cq_buf(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
+@@ -252,13 +252,13 @@ int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
+ 	int ret;
+ 
+ 	if (cq_entries < 1 || cq_entries > hr_dev->caps.max_cqes) {
+-		ibdev_err(ibdev, "Failed to check CQ count %d max=%d\n",
++		ibdev_err(ibdev, "failed to check CQ count %u, max = %u.\n",
+ 			  cq_entries, hr_dev->caps.max_cqes);
+ 		return -EINVAL;
+ 	}
+ 
+ 	if (vector >= hr_dev->caps.num_comp_vectors) {
+-		ibdev_err(ibdev, "Failed to check CQ vector=%d max=%d\n",
++		ibdev_err(ibdev, "failed to check CQ vector = %d, max = %d.\n",
+ 			  vector, hr_dev->caps.num_comp_vectors);
+ 		return -EINVAL;
+ 	}
+@@ -276,7 +276,7 @@ int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
+ 		ret = ib_copy_from_udata(&ucmd, udata,
+ 					 min(udata->inlen, sizeof(ucmd)));
+ 		if (ret) {
+-			ibdev_err(ibdev, "Failed to copy CQ udata, err %d\n",
++			ibdev_err(ibdev, "failed to copy CQ udata, ret = %d.\n",
+ 				  ret);
+ 			return ret;
+ 		}
+@@ -286,19 +286,20 @@ int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
+ 
+ 	ret = alloc_cq_buf(hr_dev, hr_cq, udata, ucmd.buf_addr);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to alloc CQ buf, err %d\n", ret);
++		ibdev_err(ibdev, "failed to alloc CQ buf, ret = %d.\n", ret);
+ 		return ret;
+ 	}
+ 
+ 	ret = alloc_cq_db(hr_dev, hr_cq, udata, ucmd.db_addr, &resp);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to alloc CQ db, err %d\n", ret);
++		ibdev_err(ibdev, "failed to alloc CQ db, ret = %d.\n", ret);
+ 		goto err_cq_buf;
+ 	}
+ 
+ 	ret = alloc_cqc(hr_dev, hr_cq);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to alloc CQ context, err %d\n", ret);
++		ibdev_err(ibdev,
++			  "failed to alloc CQ context, ret = %d.\n", ret);
+ 		goto err_cq_db;
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index 66f9f036ef946..c880a8be7e3cd 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -184,7 +184,7 @@ static int get_hem_table_config(struct hns_roce_dev *hr_dev,
+ 		mhop->hop_num = hr_dev->caps.srqc_hop_num;
+ 		break;
+ 	default:
+-		dev_err(dev, "Table %d not support multi-hop addressing!\n",
++		dev_err(dev, "table %u not support multi-hop addressing!\n",
+ 			type);
+ 		return -EINVAL;
+ 	}
+@@ -232,8 +232,8 @@ int hns_roce_calc_hem_mhop(struct hns_roce_dev *hr_dev,
+ 		mhop->l0_idx = table_idx;
+ 		break;
+ 	default:
+-		dev_err(dev, "Table %d not support hop_num = %d!\n",
+-			     table->type, mhop->hop_num);
++		dev_err(dev, "table %u not support hop_num = %u!\n",
++			table->type, mhop->hop_num);
+ 		return -EINVAL;
+ 	}
+ 	if (mhop->l0_idx >= mhop->ba_l0_num)
+@@ -438,13 +438,13 @@ static int calc_hem_config(struct hns_roce_dev *hr_dev,
+ 		index->buf = l0_idx;
+ 		break;
+ 	default:
+-		ibdev_err(ibdev, "Table %d not support mhop.hop_num = %d!\n",
++		ibdev_err(ibdev, "table %u not support mhop.hop_num = %u!\n",
+ 			  table->type, mhop->hop_num);
+ 		return -EINVAL;
+ 	}
+ 
+ 	if (unlikely(index->buf >= table->num_hem)) {
+-		ibdev_err(ibdev, "Table %d exceed hem limt idx %llu,max %lu!\n",
++		ibdev_err(ibdev, "table %u exceed hem limt idx %llu, max %lu!\n",
+ 			  table->type, index->buf, table->num_hem);
+ 		return -EINVAL;
+ 	}
+@@ -714,15 +714,15 @@ static void clear_mhop_hem(struct hns_roce_dev *hr_dev,
+ 			step_idx = hop_num;
+ 
+ 		if (hr_dev->hw->clear_hem(hr_dev, table, obj, step_idx))
+-			ibdev_warn(ibdev, "Clear hop%d HEM failed.\n", hop_num);
++			ibdev_warn(ibdev, "failed to clear hop%u HEM.\n", hop_num);
+ 
+ 		if (index->inited & HEM_INDEX_L1)
+ 			if (hr_dev->hw->clear_hem(hr_dev, table, obj, 1))
+-				ibdev_warn(ibdev, "Clear HEM step 1 failed.\n");
++				ibdev_warn(ibdev, "failed to clear HEM step 1.\n");
+ 
+ 		if (index->inited & HEM_INDEX_L0)
+ 			if (hr_dev->hw->clear_hem(hr_dev, table, obj, 0))
+-				ibdev_warn(ibdev, "Clear HEM step 0 failed.\n");
++				ibdev_warn(ibdev, "failed to clear HEM step 0.\n");
+ 	}
+ }
+ 
+@@ -1234,7 +1234,7 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
+ 	}
+ 
+ 	if (offset < r->offset) {
+-		dev_err(hr_dev->dev, "invalid offset %d,min %d!\n",
++		dev_err(hr_dev->dev, "invalid offset %d, min %u!\n",
+ 			offset, r->offset);
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index ebcf26dec1e30..c29ba8ee51e29 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -361,7 +361,7 @@ static int check_send_valid(struct hns_roce_dev *hr_dev,
+ 	} else if (unlikely(hr_qp->state == IB_QPS_RESET ||
+ 		   hr_qp->state == IB_QPS_INIT ||
+ 		   hr_qp->state == IB_QPS_RTR)) {
+-		ibdev_err(ibdev, "failed to post WQE, QP state %d!\n",
++		ibdev_err(ibdev, "failed to post WQE, QP state %hhu!\n",
+ 			  hr_qp->state);
+ 		return -EINVAL;
+ 	} else if (unlikely(hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN)) {
+@@ -665,7 +665,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
+ 		wqe_idx = (qp->sq.head + nreq) & (qp->sq.wqe_cnt - 1);
+ 
+ 		if (unlikely(wr->num_sge > qp->sq.max_gs)) {
+-			ibdev_err(ibdev, "num_sge=%d > qp->sq.max_gs=%d\n",
++			ibdev_err(ibdev, "num_sge = %d > qp->sq.max_gs = %u.\n",
+ 				  wr->num_sge, qp->sq.max_gs);
+ 			ret = -EINVAL;
+ 			*bad_wr = wr;
+@@ -750,7 +750,7 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp,
+ 		wqe_idx = (hr_qp->rq.head + nreq) & (hr_qp->rq.wqe_cnt - 1);
+ 
+ 		if (unlikely(wr->num_sge > hr_qp->rq.max_gs)) {
+-			ibdev_err(ibdev, "rq:num_sge=%d >= qp->sq.max_gs=%d\n",
++			ibdev_err(ibdev, "num_sge = %d >= max_sge = %u.\n",
+ 				  wr->num_sge, hr_qp->rq.max_gs);
+ 			ret = -EINVAL;
+ 			*bad_wr = wr;
+@@ -1920,8 +1920,8 @@ static void calc_pg_sz(int obj_num, int obj_size, int hop_num, int ctx_bt_num,
+ 		obj_per_chunk = ctx_bt_num * obj_per_chunk_default;
+ 		break;
+ 	default:
+-		pr_err("Table %d not support hop_num = %d!\n", hem_type,
+-			hop_num);
++		pr_err("table %u not support hop_num = %u!\n", hem_type,
++		       hop_num);
+ 		return;
+ 	}
+ 
+@@ -3562,7 +3562,7 @@ static int get_op_for_set_hem(struct hns_roce_dev *hr_dev, u32 type,
+ 		break;
+ 	default:
+ 		dev_warn(hr_dev->dev,
+-			 "Table %d not to be written by mailbox!\n", type);
++			 "table %u not to be written by mailbox!\n", type);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -3681,7 +3681,7 @@ static int hns_roce_v2_clear_hem(struct hns_roce_dev *hr_dev,
+ 		op = HNS_ROCE_CMD_DESTROY_SRQC_BT0;
+ 		break;
+ 	default:
+-		dev_warn(dev, "Table %d not to be destroyed by mailbox!\n",
++		dev_warn(dev, "table %u not to be destroyed by mailbox!\n",
+ 			 table->type);
+ 		return 0;
+ 	}
+@@ -4318,7 +4318,7 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp,
+ 
+ 	ret = config_qp_sq_buf(hr_dev, hr_qp, context, qpc_mask);
+ 	if (ret) {
+-		ibdev_err(ibdev, "failed to config sq buf, ret %d\n", ret);
++		ibdev_err(ibdev, "failed to config sq buf, ret = %d.\n", ret);
+ 		return ret;
+ 	}
+ 
+@@ -4804,7 +4804,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ 	/* SW pass context to HW */
+ 	ret = hns_roce_v2_qp_modify(hr_dev, context, qpc_mask, hr_qp);
+ 	if (ret) {
+-		ibdev_err(ibdev, "failed to modify QP, ret = %d\n", ret);
++		ibdev_err(ibdev, "failed to modify QP, ret = %d.\n", ret);
+ 		goto out;
+ 	}
+ 
+@@ -4897,7 +4897,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ 
+ 	ret = hns_roce_v2_query_qpc(hr_dev, hr_qp, &context);
+ 	if (ret) {
+-		ibdev_err(ibdev, "failed to query QPC, ret = %d\n", ret);
++		ibdev_err(ibdev, "failed to query QPC, ret = %d.\n", ret);
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+@@ -5018,7 +5018,7 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
+ 					    hr_qp->state, IB_QPS_RESET);
+ 		if (ret)
+ 			ibdev_err(ibdev,
+-				  "failed to modify QP to RST, ret = %d\n",
++				  "failed to modify QP to RST, ret = %d.\n",
+ 				  ret);
+ 	}
+ 
+@@ -5057,7 +5057,7 @@ static int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+ 	ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, udata);
+ 	if (ret)
+ 		ibdev_err(&hr_dev->ib_dev,
+-			  "failed to destroy QP 0x%06lx, ret = %d\n",
++			  "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n",
+ 			  hr_qp->qpn, ret);
+ 
+ 	hns_roce_qp_destroy(hr_dev, hr_qp, udata);
+@@ -5080,7 +5080,7 @@ static int hns_roce_v2_qp_flow_control_init(struct hns_roce_dev *hr_dev,
+ 	hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_RESET_SCCC, false);
+ 	ret =  hns_roce_cmq_send(hr_dev, &desc, 1);
+ 	if (ret) {
+-		ibdev_err(ibdev, "failed to reset SCC ctx, ret = %d\n", ret);
++		ibdev_err(ibdev, "failed to reset SCC ctx, ret = %d.\n", ret);
+ 		goto out;
+ 	}
+ 
+@@ -5090,7 +5090,7 @@ static int hns_roce_v2_qp_flow_control_init(struct hns_roce_dev *hr_dev,
+ 	clr->qpn = cpu_to_le32(hr_qp->qpn);
+ 	ret =  hns_roce_cmq_send(hr_dev, &desc, 1);
+ 	if (ret) {
+-		ibdev_err(ibdev, "failed to clear SCC ctx, ret = %d\n", ret);
++		ibdev_err(ibdev, "failed to clear SCC ctx, ret = %d.\n", ret);
+ 		goto out;
+ 	}
+ 
+@@ -5339,7 +5339,7 @@ static int hns_roce_v2_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
+ 	hns_roce_free_cmd_mailbox(hr_dev, mailbox);
+ 	if (ret)
+ 		ibdev_err(&hr_dev->ib_dev,
+-			  "failed to process cmd when modifying CQ, ret = %d\n",
++			  "failed to process cmd when modifying CQ, ret = %d.\n",
+ 			  ret);
+ 
+ 	return ret;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index 7f81a695e9af9..027ec8413ac25 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -185,14 +185,14 @@ static int hns_roce_mr_enable(struct hns_roce_dev *hr_dev,
+ 	else
+ 		ret = hr_dev->hw->frmr_write_mtpt(hr_dev, mailbox->buf, mr);
+ 	if (ret) {
+-		dev_err(dev, "Write mtpt fail!\n");
++		dev_err(dev, "failed to write mtpt, ret = %d.\n", ret);
+ 		goto err_page;
+ 	}
+ 
+ 	ret = hns_roce_hw_create_mpt(hr_dev, mailbox,
+ 				     mtpt_idx & (hr_dev->caps.num_mtpts - 1));
+ 	if (ret) {
+-		dev_err(dev, "CREATE_MPT failed (%d)\n", ret);
++		dev_err(dev, "failed to create mpt, ret = %d.\n", ret);
+ 		goto err_page;
+ 	}
+ 
+@@ -495,7 +495,7 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ 
+ 	ret = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, hns_roce_set_page);
+ 	if (ret < 1) {
+-		ibdev_err(ibdev, "failed to store sg pages %d %d, cnt = %d.\n",
++		ibdev_err(ibdev, "failed to store sg pages %u %u, cnt = %d.\n",
+ 			  mr->npages, mr->pbl_mtr.hem_cfg.buf_pg_count, ret);
+ 		goto err_page_list;
+ 	}
+@@ -862,7 +862,7 @@ int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ 		if (r->offset + r->count > page_cnt) {
+ 			err = -EINVAL;
+ 			ibdev_err(ibdev,
+-				  "Failed to check mtr%d end %d + %d, max %d\n",
++				  "failed to check mtr%u end %u + %u, max %u.\n",
+ 				  i, r->offset, r->count, page_cnt);
+ 			return err;
+ 		}
+@@ -870,7 +870,7 @@ int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ 		err = mtr_map_region(hr_dev, mtr, &pages[r->offset], r);
+ 		if (err) {
+ 			ibdev_err(ibdev,
+-				  "Failed to map mtr%d offset %d, err %d\n",
++				  "failed to map mtr%u offset %u, ret = %d.\n",
+ 				  i, r->offset, err);
+ 			return err;
+ 		}
+diff --git a/drivers/infiniband/hw/hns/hns_roce_pd.c b/drivers/infiniband/hw/hns/hns_roce_pd.c
+index f78fa1d3d8075..012a769d6a6a8 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_pd.c
++++ b/drivers/infiniband/hw/hns/hns_roce_pd.c
+@@ -65,7 +65,7 @@ int hns_roce_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
+ 
+ 	ret = hns_roce_pd_alloc(to_hr_dev(ib_dev), &pd->pdn);
+ 	if (ret) {
+-		ibdev_err(ib_dev, "failed to alloc pd, ret = %d\n", ret);
++		ibdev_err(ib_dev, "failed to alloc pd, ret = %d.\n", ret);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 7ce9ad8aee1ec..291e06d631505 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -452,12 +452,12 @@ static int check_sq_size_with_integrity(struct hns_roce_dev *hr_dev,
+ 	/* Sanity check SQ size before proceeding */
+ 	if (ucmd->log_sq_stride > max_sq_stride ||
+ 	    ucmd->log_sq_stride < HNS_ROCE_IB_MIN_SQ_STRIDE) {
+-		ibdev_err(&hr_dev->ib_dev, "Failed to check SQ stride size\n");
++		ibdev_err(&hr_dev->ib_dev, "failed to check SQ stride size.\n");
+ 		return -EINVAL;
+ 	}
+ 
+ 	if (cap->max_send_sge > hr_dev->caps.max_sq_sg) {
+-		ibdev_err(&hr_dev->ib_dev, "Failed to check SQ SGE size %d\n",
++		ibdev_err(&hr_dev->ib_dev, "failed to check SQ SGE size %u.\n",
+ 			  cap->max_send_sge);
+ 		return -EINVAL;
+ 	}
+@@ -563,7 +563,7 @@ static int set_kernel_sq_size(struct hns_roce_dev *hr_dev,
+ 
+ 	cnt = roundup_pow_of_two(max(cap->max_send_wr, hr_dev->caps.min_wqes));
+ 	if (cnt > hr_dev->caps.max_wqes) {
+-		ibdev_err(ibdev, "failed to check WQE num, WQE num = %d.\n",
++		ibdev_err(ibdev, "failed to check WQE num, WQE num = %u.\n",
+ 			  cnt);
+ 		return -EINVAL;
+ 	}
+@@ -736,7 +736,8 @@ static int alloc_qp_db(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ 						   &hr_qp->sdb);
+ 			if (ret) {
+ 				ibdev_err(ibdev,
+-					  "Failed to map user SQ doorbell\n");
++					  "failed to map user SQ doorbell, ret = %d.\n",
++					  ret);
+ 				goto err_out;
+ 			}
+ 			hr_qp->en_flags |= HNS_ROCE_QP_CAP_SQ_RECORD_DB;
+@@ -747,7 +748,8 @@ static int alloc_qp_db(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ 						   &hr_qp->rdb);
+ 			if (ret) {
+ 				ibdev_err(ibdev,
+-					  "Failed to map user RQ doorbell\n");
++					  "failed to map user RQ doorbell, ret = %d.\n",
++					  ret);
+ 				goto err_sdb;
+ 			}
+ 			hr_qp->en_flags |= HNS_ROCE_QP_CAP_RQ_RECORD_DB;
+@@ -763,7 +765,8 @@ static int alloc_qp_db(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ 			ret = hns_roce_alloc_db(hr_dev, &hr_qp->rdb, 0);
+ 			if (ret) {
+ 				ibdev_err(ibdev,
+-					  "Failed to alloc kernel RQ doorbell\n");
++					  "failed to alloc kernel RQ doorbell, ret = %d.\n",
++					  ret);
+ 				goto err_out;
+ 			}
+ 			*hr_qp->rdb.db_record = 0;
+@@ -806,14 +809,14 @@ static int alloc_kernel_wrid(struct hns_roce_dev *hr_dev,
+ 
+ 	sq_wrid = kcalloc(hr_qp->sq.wqe_cnt, sizeof(u64), GFP_KERNEL);
+ 	if (ZERO_OR_NULL_PTR(sq_wrid)) {
+-		ibdev_err(ibdev, "Failed to alloc SQ wrid\n");
++		ibdev_err(ibdev, "failed to alloc SQ wrid.\n");
+ 		return -ENOMEM;
+ 	}
+ 
+ 	if (hr_qp->rq.wqe_cnt) {
+ 		rq_wrid = kcalloc(hr_qp->rq.wqe_cnt, sizeof(u64), GFP_KERNEL);
+ 		if (ZERO_OR_NULL_PTR(rq_wrid)) {
+-			ibdev_err(ibdev, "Failed to alloc RQ wrid\n");
++			ibdev_err(ibdev, "failed to alloc RQ wrid.\n");
+ 			ret = -ENOMEM;
+ 			goto err_sq;
+ 		}
+@@ -873,7 +876,9 @@ static int set_qp_param(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ 
+ 		ret = set_user_sq_size(hr_dev, &init_attr->cap, hr_qp, ucmd);
+ 		if (ret)
+-			ibdev_err(ibdev, "Failed to set user SQ size\n");
++			ibdev_err(ibdev,
++				  "failed to set user SQ size, ret = %d.\n",
++				  ret);
+ 	} else {
+ 		if (init_attr->create_flags &
+ 		    IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK) {
+@@ -888,7 +893,9 @@ static int set_qp_param(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ 
+ 		ret = set_kernel_sq_size(hr_dev, &init_attr->cap, hr_qp);
+ 		if (ret)
+-			ibdev_err(ibdev, "Failed to set kernel SQ size\n");
++			ibdev_err(ibdev,
++				  "failed to set kernel SQ size, ret = %d.\n",
++				  ret);
+ 	}
+ 
+ 	return ret;
+@@ -914,45 +921,48 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ 
+ 	ret = set_qp_param(hr_dev, hr_qp, init_attr, udata, &ucmd);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to set QP param\n");
++		ibdev_err(ibdev, "failed to set QP param, ret = %d.\n", ret);
+ 		return ret;
+ 	}
+ 
+ 	if (!udata) {
+ 		ret = alloc_kernel_wrid(hr_dev, hr_qp);
+ 		if (ret) {
+-			ibdev_err(ibdev, "Failed to alloc wrid\n");
++			ibdev_err(ibdev, "failed to alloc wrid, ret = %d.\n",
++				  ret);
+ 			return ret;
+ 		}
+ 	}
+ 
+ 	ret = alloc_qp_db(hr_dev, hr_qp, init_attr, udata, &ucmd, &resp);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to alloc QP doorbell\n");
++		ibdev_err(ibdev, "failed to alloc QP doorbell, ret = %d.\n",
++			  ret);
+ 		goto err_wrid;
+ 	}
+ 
+ 	ret = alloc_qp_buf(hr_dev, hr_qp, init_attr, udata, ucmd.buf_addr);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to alloc QP buffer\n");
++		ibdev_err(ibdev, "failed to alloc QP buffer, ret = %d.\n", ret);
+ 		goto err_db;
+ 	}
+ 
+ 	ret = alloc_qpn(hr_dev, hr_qp);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to alloc QPN\n");
++		ibdev_err(ibdev, "failed to alloc QPN, ret = %d.\n", ret);
+ 		goto err_buf;
+ 	}
+ 
+ 	ret = alloc_qpc(hr_dev, hr_qp);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to alloc QP context\n");
++		ibdev_err(ibdev, "failed to alloc QP context, ret = %d.\n",
++			  ret);
+ 		goto err_qpn;
+ 	}
+ 
+ 	ret = hns_roce_qp_store(hr_dev, hr_qp, init_attr);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to store QP\n");
++		ibdev_err(ibdev, "failed to store QP, ret = %d.\n", ret);
+ 		goto err_qpc;
+ 	}
+ 
+@@ -1098,9 +1108,8 @@ static int hns_roce_check_qp_attr(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 
+ 	if ((attr_mask & IB_QP_PORT) &&
+ 	    (attr->port_num == 0 || attr->port_num > hr_dev->caps.num_ports)) {
+-		ibdev_err(&hr_dev->ib_dev,
+-			"attr port_num invalid.attr->port_num=%d\n",
+-			attr->port_num);
++		ibdev_err(&hr_dev->ib_dev, "invalid attr, port_num = %u.\n",
++			  attr->port_num);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1108,8 +1117,8 @@ static int hns_roce_check_qp_attr(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 		p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
+ 		if (attr->pkey_index >= hr_dev->caps.pkey_table_len[p]) {
+ 			ibdev_err(&hr_dev->ib_dev,
+-				"attr pkey_index invalid.attr->pkey_index=%d\n",
+-				attr->pkey_index);
++				  "invalid attr, pkey_index = %u.\n",
++				  attr->pkey_index);
+ 			return -EINVAL;
+ 		}
+ 	}
+@@ -1117,16 +1126,16 @@ static int hns_roce_check_qp_attr(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 	if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
+ 	    attr->max_rd_atomic > hr_dev->caps.max_qp_init_rdma) {
+ 		ibdev_err(&hr_dev->ib_dev,
+-			"attr max_rd_atomic invalid.attr->max_rd_atomic=%d\n",
+-			attr->max_rd_atomic);
++			  "invalid attr, max_rd_atomic = %u.\n",
++			  attr->max_rd_atomic);
+ 		return -EINVAL;
+ 	}
+ 
+ 	if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
+ 	    attr->max_dest_rd_atomic > hr_dev->caps.max_qp_dest_rdma) {
+ 		ibdev_err(&hr_dev->ib_dev,
+-			"attr max_dest_rd_atomic invalid.attr->max_dest_rd_atomic=%d\n",
+-			attr->max_dest_rd_atomic);
++			  "invalid attr, max_dest_rd_atomic = %u.\n",
++			  attr->max_dest_rd_atomic);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
+index 75d74f4bb52c9..f27523e1a12d7 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
+@@ -93,7 +93,8 @@ static int alloc_srqc(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
+ 	ret = hns_roce_mtr_find(hr_dev, &srq->buf_mtr, 0, mtts_wqe,
+ 				ARRAY_SIZE(mtts_wqe), &dma_handle_wqe);
+ 	if (ret < 1) {
+-		ibdev_err(ibdev, "Failed to find mtr for SRQ WQE\n");
++		ibdev_err(ibdev, "failed to find mtr for SRQ WQE, ret = %d.\n",
++			  ret);
+ 		return -ENOBUFS;
+ 	}
+ 
+@@ -101,32 +102,34 @@ static int alloc_srqc(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
+ 	ret = hns_roce_mtr_find(hr_dev, &srq->idx_que.mtr, 0, mtts_idx,
+ 				ARRAY_SIZE(mtts_idx), &dma_handle_idx);
+ 	if (ret < 1) {
+-		ibdev_err(ibdev, "Failed to find mtr for SRQ idx\n");
++		ibdev_err(ibdev, "failed to find mtr for SRQ idx, ret = %d.\n",
++			  ret);
+ 		return -ENOBUFS;
+ 	}
+ 
+ 	ret = hns_roce_bitmap_alloc(&srq_table->bitmap, &srq->srqn);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to alloc SRQ number, err %d\n", ret);
++		ibdev_err(ibdev,
++			  "failed to alloc SRQ number, ret = %d.\n", ret);
+ 		return -ENOMEM;
+ 	}
+ 
+ 	ret = hns_roce_table_get(hr_dev, &srq_table->table, srq->srqn);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to get SRQC table, err %d\n", ret);
++		ibdev_err(ibdev, "failed to get SRQC table, ret = %d.\n", ret);
+ 		goto err_out;
+ 	}
+ 
+ 	ret = xa_err(xa_store(&srq_table->xa, srq->srqn, srq, GFP_KERNEL));
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to store SRQC, err %d\n", ret);
++		ibdev_err(ibdev, "failed to store SRQC, ret = %d.\n", ret);
+ 		goto err_put;
+ 	}
+ 
+ 	mailbox = hns_roce_alloc_cmd_mailbox(hr_dev);
+ 	if (IS_ERR_OR_NULL(mailbox)) {
+ 		ret = -ENOMEM;
+-		ibdev_err(ibdev, "Failed to alloc mailbox for SRQC\n");
++		ibdev_err(ibdev, "failed to alloc mailbox for SRQC.\n");
+ 		goto err_xa;
+ 	}
+ 
+@@ -137,7 +140,7 @@ static int alloc_srqc(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
+ 	ret = hns_roce_hw_create_srq(hr_dev, mailbox, srq->srqn);
+ 	hns_roce_free_cmd_mailbox(hr_dev, mailbox);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to config SRQC, err %d\n", ret);
++		ibdev_err(ibdev, "failed to config SRQC, ret = %d.\n", ret);
+ 		goto err_xa;
+ 	}
+ 
+@@ -198,7 +201,8 @@ static int alloc_srq_buf(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
+ 				  hr_dev->caps.srqwqe_ba_pg_sz +
+ 				  HNS_HW_PAGE_SHIFT, udata, addr);
+ 	if (err)
+-		ibdev_err(ibdev, "Failed to alloc SRQ buf mtr, err %d\n", err);
++		ibdev_err(ibdev,
++			  "failed to alloc SRQ buf mtr, ret = %d.\n", err);
+ 
+ 	return err;
+ }
+@@ -229,14 +233,15 @@ static int alloc_srq_idx(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
+ 				  hr_dev->caps.idx_ba_pg_sz + HNS_HW_PAGE_SHIFT,
+ 				  udata, addr);
+ 	if (err) {
+-		ibdev_err(ibdev, "Failed to alloc SRQ idx mtr, err %d\n", err);
++		ibdev_err(ibdev,
++			  "failed to alloc SRQ idx mtr, ret = %d.\n", err);
+ 		return err;
+ 	}
+ 
+ 	if (!udata) {
+ 		idx_que->bitmap = bitmap_zalloc(srq->wqe_cnt, GFP_KERNEL);
+ 		if (!idx_que->bitmap) {
+-			ibdev_err(ibdev, "Failed to alloc SRQ idx bitmap\n");
++			ibdev_err(ibdev, "failed to alloc SRQ idx bitmap.\n");
+ 			err = -ENOMEM;
+ 			goto err_idx_mtr;
+ 		}
+@@ -303,7 +308,7 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
+ 		ret = ib_copy_from_udata(&ucmd, udata,
+ 					 min(udata->inlen, sizeof(ucmd)));
+ 		if (ret) {
+-			ibdev_err(ibdev, "Failed to copy SRQ udata, err %d\n",
++			ibdev_err(ibdev, "failed to copy SRQ udata, ret = %d.\n",
+ 				  ret);
+ 			return ret;
+ 		}
+@@ -311,20 +316,21 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
+ 
+ 	ret = alloc_srq_buf(hr_dev, srq, udata, ucmd.buf_addr);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to alloc SRQ buffer, err %d\n", ret);
++		ibdev_err(ibdev,
++			  "failed to alloc SRQ buffer, ret = %d.\n", ret);
+ 		return ret;
+ 	}
+ 
+ 	ret = alloc_srq_idx(hr_dev, srq, udata, ucmd.que_addr);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to alloc SRQ idx, err %d\n", ret);
++		ibdev_err(ibdev, "failed to alloc SRQ idx, ret = %d.\n", ret);
+ 		goto err_buf_alloc;
+ 	}
+ 
+ 	if (!udata) {
+ 		ret = alloc_srq_wrid(hr_dev, srq);
+ 		if (ret) {
+-			ibdev_err(ibdev, "Failed to alloc SRQ wrid, err %d\n",
++			ibdev_err(ibdev, "failed to alloc SRQ wrid, ret = %d.\n",
+ 				  ret);
+ 			goto err_idx_alloc;
+ 		}
+@@ -336,7 +342,8 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
+ 
+ 	ret = alloc_srqc(hr_dev, srq, to_hr_pd(ib_srq->pd)->pdn, cqn, 0, 0);
+ 	if (ret) {
+-		ibdev_err(ibdev, "Failed to alloc SRQ context, err %d\n", ret);
++		ibdev_err(ibdev,
++			  "failed to alloc SRQ context, ret = %d.\n", ret);
+ 		goto err_wrid_alloc;
+ 	}
+ 
+diff --git a/drivers/ipack/devices/ipoctal.c b/drivers/ipack/devices/ipoctal.c
+index d480a514c9837..1f7512c991a32 100644
+--- a/drivers/ipack/devices/ipoctal.c
++++ b/drivers/ipack/devices/ipoctal.c
+@@ -35,6 +35,7 @@ struct ipoctal_channel {
+ 	unsigned int			pointer_read;
+ 	unsigned int			pointer_write;
+ 	struct tty_port			tty_port;
++	bool				tty_registered;
+ 	union scc2698_channel __iomem	*regs;
+ 	union scc2698_block __iomem	*block_regs;
+ 	unsigned int			board_id;
+@@ -83,22 +84,34 @@ static int ipoctal_port_activate(struct tty_port *port, struct tty_struct *tty)
+ 	return 0;
+ }
+ 
+-static int ipoctal_open(struct tty_struct *tty, struct file *file)
++static int ipoctal_install(struct tty_driver *driver, struct tty_struct *tty)
+ {
+ 	struct ipoctal_channel *channel = dev_get_drvdata(tty->dev);
+ 	struct ipoctal *ipoctal = chan_to_ipoctal(channel, tty->index);
+-	int err;
+-
+-	tty->driver_data = channel;
++	int res;
+ 
+ 	if (!ipack_get_carrier(ipoctal->dev))
+ 		return -EBUSY;
+ 
+-	err = tty_port_open(&channel->tty_port, tty, file);
+-	if (err)
+-		ipack_put_carrier(ipoctal->dev);
++	res = tty_standard_install(driver, tty);
++	if (res)
++		goto err_put_carrier;
++
++	tty->driver_data = channel;
++
++	return 0;
++
++err_put_carrier:
++	ipack_put_carrier(ipoctal->dev);
++
++	return res;
++}
++
++static int ipoctal_open(struct tty_struct *tty, struct file *file)
++{
++	struct ipoctal_channel *channel = tty->driver_data;
+ 
+-	return err;
++	return tty_port_open(&channel->tty_port, tty, file);
+ }
+ 
+ static void ipoctal_reset_stats(struct ipoctal_stats *stats)
+@@ -266,7 +279,6 @@ static int ipoctal_inst_slot(struct ipoctal *ipoctal, unsigned int bus_nr,
+ 	int res;
+ 	int i;
+ 	struct tty_driver *tty;
+-	char name[20];
+ 	struct ipoctal_channel *channel;
+ 	struct ipack_region *region;
+ 	void __iomem *addr;
+@@ -357,8 +369,11 @@ static int ipoctal_inst_slot(struct ipoctal *ipoctal, unsigned int bus_nr,
+ 	/* Fill struct tty_driver with ipoctal data */
+ 	tty->owner = THIS_MODULE;
+ 	tty->driver_name = KBUILD_MODNAME;
+-	sprintf(name, KBUILD_MODNAME ".%d.%d.", bus_nr, slot);
+-	tty->name = name;
++	tty->name = kasprintf(GFP_KERNEL, KBUILD_MODNAME ".%d.%d.", bus_nr, slot);
++	if (!tty->name) {
++		res = -ENOMEM;
++		goto err_put_driver;
++	}
+ 	tty->major = 0;
+ 
+ 	tty->minor_start = 0;
+@@ -374,8 +389,7 @@ static int ipoctal_inst_slot(struct ipoctal *ipoctal, unsigned int bus_nr,
+ 	res = tty_register_driver(tty);
+ 	if (res) {
+ 		dev_err(&ipoctal->dev->dev, "Can't register tty driver.\n");
+-		put_tty_driver(tty);
+-		return res;
++		goto err_free_name;
+ 	}
+ 
+ 	/* Save struct tty_driver for use it when uninstalling the device */
+@@ -386,7 +400,9 @@ static int ipoctal_inst_slot(struct ipoctal *ipoctal, unsigned int bus_nr,
+ 
+ 		channel = &ipoctal->channel[i];
+ 		tty_port_init(&channel->tty_port);
+-		tty_port_alloc_xmit_buf(&channel->tty_port);
++		res = tty_port_alloc_xmit_buf(&channel->tty_port);
++		if (res)
++			continue;
+ 		channel->tty_port.ops = &ipoctal_tty_port_ops;
+ 
+ 		ipoctal_reset_stats(&channel->stats);
+@@ -394,13 +410,15 @@ static int ipoctal_inst_slot(struct ipoctal *ipoctal, unsigned int bus_nr,
+ 		spin_lock_init(&channel->lock);
+ 		channel->pointer_read = 0;
+ 		channel->pointer_write = 0;
+-		tty_dev = tty_port_register_device(&channel->tty_port, tty, i, NULL);
++		tty_dev = tty_port_register_device_attr(&channel->tty_port, tty,
++							i, NULL, channel, NULL);
+ 		if (IS_ERR(tty_dev)) {
+ 			dev_err(&ipoctal->dev->dev, "Failed to register tty device.\n");
++			tty_port_free_xmit_buf(&channel->tty_port);
+ 			tty_port_destroy(&channel->tty_port);
+ 			continue;
+ 		}
+-		dev_set_drvdata(tty_dev, channel);
++		channel->tty_registered = true;
+ 	}
+ 
+ 	/*
+@@ -412,6 +430,13 @@ static int ipoctal_inst_slot(struct ipoctal *ipoctal, unsigned int bus_nr,
+ 				       ipoctal_irq_handler, ipoctal);
+ 
+ 	return 0;
++
++err_free_name:
++	kfree(tty->name);
++err_put_driver:
++	put_tty_driver(tty);
++
++	return res;
+ }
+ 
+ static inline int ipoctal_copy_write_buffer(struct ipoctal_channel *channel,
+@@ -652,6 +677,7 @@ static void ipoctal_cleanup(struct tty_struct *tty)
+ 
+ static const struct tty_operations ipoctal_fops = {
+ 	.ioctl =		NULL,
++	.install =		ipoctal_install,
+ 	.open =			ipoctal_open,
+ 	.close =		ipoctal_close,
+ 	.write =		ipoctal_write_tty,
+@@ -694,12 +720,17 @@ static void __ipoctal_remove(struct ipoctal *ipoctal)
+ 
+ 	for (i = 0; i < NR_CHANNELS; i++) {
+ 		struct ipoctal_channel *channel = &ipoctal->channel[i];
++
++		if (!channel->tty_registered)
++			continue;
++
+ 		tty_unregister_device(ipoctal->tty_drv, i);
+ 		tty_port_free_xmit_buf(&channel->tty_port);
+ 		tty_port_destroy(&channel->tty_port);
+ 	}
+ 
+ 	tty_unregister_driver(ipoctal->tty_drv);
++	kfree(ipoctal->tty_drv->name);
+ 	put_tty_driver(ipoctal->tty_drv);
+ 	kfree(ipoctal);
+ }
+diff --git a/drivers/media/rc/ir_toy.c b/drivers/media/rc/ir_toy.c
+index 3e729a17b35ff..48d52baec1a1c 100644
+--- a/drivers/media/rc/ir_toy.c
++++ b/drivers/media/rc/ir_toy.c
+@@ -24,6 +24,7 @@ static const u8 COMMAND_VERSION[] = { 'v' };
+ // End transmit and repeat reset command so we exit sump mode
+ static const u8 COMMAND_RESET[] = { 0xff, 0xff, 0, 0, 0, 0, 0 };
+ static const u8 COMMAND_SMODE_ENTER[] = { 's' };
++static const u8 COMMAND_SMODE_EXIT[] = { 0 };
+ static const u8 COMMAND_TXSTART[] = { 0x26, 0x24, 0x25, 0x03 };
+ 
+ #define REPLY_XMITCOUNT 't'
+@@ -309,12 +310,30 @@ static int irtoy_tx(struct rc_dev *rc, uint *txbuf, uint count)
+ 		buf[i] = cpu_to_be16(v);
+ 	}
+ 
+-	buf[count] = cpu_to_be16(0xffff);
++	buf[count] = 0xffff;
+ 
+ 	irtoy->tx_buf = buf;
+ 	irtoy->tx_len = size;
+ 	irtoy->emitted = 0;
+ 
++	// There is an issue where if the unit is receiving IR while the
++	// first TXSTART command is sent, the device might end up hanging
++	// with its led on. It does not respond to any command when this
++	// happens. To work around this, re-enter sample mode.
++	err = irtoy_command(irtoy, COMMAND_SMODE_EXIT,
++			    sizeof(COMMAND_SMODE_EXIT), STATE_RESET);
++	if (err) {
++		dev_err(irtoy->dev, "exit sample mode: %d\n", err);
++		return err;
++	}
++
++	err = irtoy_command(irtoy, COMMAND_SMODE_ENTER,
++			    sizeof(COMMAND_SMODE_ENTER), STATE_COMMAND);
++	if (err) {
++		dev_err(irtoy->dev, "enter sample mode: %d\n", err);
++		return err;
++	}
++
+ 	err = irtoy_command(irtoy, COMMAND_TXSTART, sizeof(COMMAND_TXSTART),
+ 			    STATE_TX);
+ 	kfree(buf);
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 184cbc93328c2..18388ea5ebd96 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -2613,8 +2613,8 @@ static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
+ 	if (err)
+ 		return err;
+ 
+-	/* Port Control 2: don't force a good FCS, set the maximum frame size to
+-	 * 10240 bytes, disable 802.1q tags checking, don't discard tagged or
++	/* Port Control 2: don't force a good FCS, set the MTU size to
++	 * 10222 bytes, disable 802.1q tags checking, don't discard tagged or
+ 	 * untagged frames on this port, do a destination address lookup on all
+ 	 * received packets as usual, disable ARP mirroring and don't send a
+ 	 * copy of all transmitted/received frames on this port to the CPU.
+@@ -2633,7 +2633,7 @@ static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
+ 		return err;
+ 
+ 	if (chip->info->ops->port_set_jumbo_size) {
+-		err = chip->info->ops->port_set_jumbo_size(chip, port, 10240);
++		err = chip->info->ops->port_set_jumbo_size(chip, port, 10218);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -2718,10 +2718,10 @@ static int mv88e6xxx_get_max_mtu(struct dsa_switch *ds, int port)
+ 	struct mv88e6xxx_chip *chip = ds->priv;
+ 
+ 	if (chip->info->ops->port_set_jumbo_size)
+-		return 10240;
++		return 10240 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
+ 	else if (chip->info->ops->set_max_frame_size)
+-		return 1632;
+-	return 1522;
++		return 1632 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
++	return 1522 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
+ }
+ 
+ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+@@ -2729,6 +2729,9 @@ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+ 	struct mv88e6xxx_chip *chip = ds->priv;
+ 	int ret = 0;
+ 
++	if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
++		new_mtu += EDSA_HLEN;
++
+ 	mv88e6xxx_reg_lock(chip);
+ 	if (chip->info->ops->port_set_jumbo_size)
+ 		ret = chip->info->ops->port_set_jumbo_size(chip, port, new_mtu);
+@@ -3455,7 +3458,6 @@ static const struct mv88e6xxx_ops mv88e6161_ops = {
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+ 	.port_set_ether_type = mv88e6351_port_set_ether_type,
+-	.port_set_jumbo_size = mv88e6165_port_set_jumbo_size,
+ 	.port_egress_rate_limiting = mv88e6097_port_egress_rate_limiting,
+ 	.port_pause_limit = mv88e6097_port_pause_limit,
+ 	.port_disable_learn_limit = mv88e6xxx_port_disable_learn_limit,
+@@ -3480,6 +3482,7 @@ static const struct mv88e6xxx_ops mv88e6161_ops = {
+ 	.avb_ops = &mv88e6165_avb_ops,
+ 	.ptp_ops = &mv88e6165_ptp_ops,
+ 	.phylink_validate = mv88e6185_phylink_validate,
++	.set_max_frame_size = mv88e6185_g1_set_max_frame_size,
+ };
+ 
+ static const struct mv88e6xxx_ops mv88e6165_ops = {
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
+index 81c244fc04195..51a7ff44478ec 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.h
++++ b/drivers/net/dsa/mv88e6xxx/chip.h
+@@ -18,6 +18,7 @@
+ #include <linux/timecounter.h>
+ #include <net/dsa.h>
+ 
++#define EDSA_HLEN		8
+ #define MV88E6XXX_N_FID		4096
+ 
+ /* PVT limits for 4-bit port and 5-bit switch */
+diff --git a/drivers/net/dsa/mv88e6xxx/global1.c b/drivers/net/dsa/mv88e6xxx/global1.c
+index 33d443a37efc4..9936ae69e5ee4 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1.c
++++ b/drivers/net/dsa/mv88e6xxx/global1.c
+@@ -232,6 +232,8 @@ int mv88e6185_g1_set_max_frame_size(struct mv88e6xxx_chip *chip, int mtu)
+ 	u16 val;
+ 	int err;
+ 
++	mtu += ETH_HLEN + ETH_FCS_LEN;
++
+ 	err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_CTL1, &val);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/net/dsa/mv88e6xxx/port.c b/drivers/net/dsa/mv88e6xxx/port.c
+index 8128dc607cf46..dfd9e8292e9a0 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.c
++++ b/drivers/net/dsa/mv88e6xxx/port.c
+@@ -1082,6 +1082,8 @@ int mv88e6165_port_set_jumbo_size(struct mv88e6xxx_chip *chip, int port,
+ 	u16 reg;
+ 	int err;
+ 
++	size += VLAN_ETH_HLEN + ETH_FCS_LEN;
++
+ 	err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_CTL2, &reg);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index 68133563a40c1..716b396bf0947 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -504,8 +504,7 @@ static void enetc_mac_config(struct enetc_hw *hw, phy_interface_t phy_mode)
+ 
+ 	if (phy_interface_mode_is_rgmii(phy_mode)) {
+ 		val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
+-		val &= ~ENETC_PM0_IFM_EN_AUTO;
+-		val &= ENETC_PM0_IFM_IFMODE_MASK;
++		val &= ~(ENETC_PM0_IFM_EN_AUTO | ENETC_PM0_IFM_IFMODE_MASK);
+ 		val |= ENETC_PM0_IFM_IFMODE_GMII | ENETC_PM0_IFM_RG;
+ 		enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+ 	}
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 936b9cfe1a62f..4777db2623cf4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -444,6 +444,11 @@ static int hns3_nic_net_open(struct net_device *netdev)
+ 	if (hns3_nic_resetting(netdev))
+ 		return -EBUSY;
+ 
++	if (!test_bit(HNS3_NIC_STATE_DOWN, &priv->state)) {
++		netdev_warn(netdev, "net open repeatedly!\n");
++		return 0;
++	}
++
+ 	netif_carrier_off(netdev);
+ 
+ 	ret = hns3_nic_set_real_num_queue(netdev);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index c0aa3be0cdfbb..cd0d7a546957a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -300,33 +300,8 @@ out:
+ 	return ret_val;
+ }
+ 
+-/**
+- * hns3_nic_self_test - self test
+- * @ndev: net device
+- * @eth_test: test cmd
+- * @data: test result
+- */
+-static void hns3_self_test(struct net_device *ndev,
+-			   struct ethtool_test *eth_test, u64 *data)
++static void hns3_set_selftest_param(struct hnae3_handle *h, int (*st_param)[2])
+ {
+-	struct hns3_nic_priv *priv = netdev_priv(ndev);
+-	struct hnae3_handle *h = priv->ae_handle;
+-	int st_param[HNS3_SELF_TEST_TYPE_NUM][2];
+-	bool if_running = netif_running(ndev);
+-	int test_index = 0;
+-	u32 i;
+-
+-	if (hns3_nic_resetting(ndev)) {
+-		netdev_err(ndev, "dev resetting!");
+-		return;
+-	}
+-
+-	/* Only do offline selftest, or pass by default */
+-	if (eth_test->flags != ETH_TEST_FL_OFFLINE)
+-		return;
+-
+-	netif_dbg(h, drv, ndev, "self test start");
+-
+ 	st_param[HNAE3_LOOP_APP][0] = HNAE3_LOOP_APP;
+ 	st_param[HNAE3_LOOP_APP][1] =
+ 			h->flags & HNAE3_SUPPORT_APP_LOOPBACK;
+@@ -343,13 +318,26 @@ static void hns3_self_test(struct net_device *ndev,
+ 	st_param[HNAE3_LOOP_PHY][0] = HNAE3_LOOP_PHY;
+ 	st_param[HNAE3_LOOP_PHY][1] =
+ 			h->flags & HNAE3_SUPPORT_PHY_LOOPBACK;
++}
++
++static void hns3_selftest_prepare(struct net_device *ndev,
++				  bool if_running, int (*st_param)[2])
++{
++	struct hns3_nic_priv *priv = netdev_priv(ndev);
++	struct hnae3_handle *h = priv->ae_handle;
++
++	if (netif_msg_ifdown(h))
++		netdev_info(ndev, "self test start\n");
++
++	hns3_set_selftest_param(h, st_param);
+ 
+ 	if (if_running)
+ 		ndev->netdev_ops->ndo_stop(ndev);
+ 
+ #if IS_ENABLED(CONFIG_VLAN_8021Q)
+ 	/* Disable the vlan filter for selftest does not support it */
+-	if (h->ae_algo->ops->enable_vlan_filter)
++	if (h->ae_algo->ops->enable_vlan_filter &&
++	    ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
+ 		h->ae_algo->ops->enable_vlan_filter(h, false);
+ #endif
+ 
+@@ -361,6 +349,36 @@ static void hns3_self_test(struct net_device *ndev,
+ 		h->ae_algo->ops->halt_autoneg(h, true);
+ 
+ 	set_bit(HNS3_NIC_STATE_TESTING, &priv->state);
++}
++
++static void hns3_selftest_restore(struct net_device *ndev, bool if_running)
++{
++	struct hns3_nic_priv *priv = netdev_priv(ndev);
++	struct hnae3_handle *h = priv->ae_handle;
++
++	clear_bit(HNS3_NIC_STATE_TESTING, &priv->state);
++
++	if (h->ae_algo->ops->halt_autoneg)
++		h->ae_algo->ops->halt_autoneg(h, false);
++
++#if IS_ENABLED(CONFIG_VLAN_8021Q)
++	if (h->ae_algo->ops->enable_vlan_filter &&
++	    ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
++		h->ae_algo->ops->enable_vlan_filter(h, true);
++#endif
++
++	if (if_running)
++		ndev->netdev_ops->ndo_open(ndev);
++
++	if (netif_msg_ifdown(h))
++		netdev_info(ndev, "self test end\n");
++}
++
++static void hns3_do_selftest(struct net_device *ndev, int (*st_param)[2],
++			     struct ethtool_test *eth_test, u64 *data)
++{
++	int test_index = 0;
++	u32 i;
+ 
+ 	for (i = 0; i < HNS3_SELF_TEST_TYPE_NUM; i++) {
+ 		enum hnae3_loop loop_type = (enum hnae3_loop)st_param[i][0];
+@@ -379,21 +397,32 @@ static void hns3_self_test(struct net_device *ndev,
+ 
+ 		test_index++;
+ 	}
++}
+ 
+-	clear_bit(HNS3_NIC_STATE_TESTING, &priv->state);
+-
+-	if (h->ae_algo->ops->halt_autoneg)
+-		h->ae_algo->ops->halt_autoneg(h, false);
++/**
++ * hns3_nic_self_test - self test
++ * @ndev: net device
++ * @eth_test: test cmd
++ * @data: test result
++ */
++static void hns3_self_test(struct net_device *ndev,
++			   struct ethtool_test *eth_test, u64 *data)
++{
++	int st_param[HNS3_SELF_TEST_TYPE_NUM][2];
++	bool if_running = netif_running(ndev);
+ 
+-#if IS_ENABLED(CONFIG_VLAN_8021Q)
+-	if (h->ae_algo->ops->enable_vlan_filter)
+-		h->ae_algo->ops->enable_vlan_filter(h, true);
+-#endif
++	if (hns3_nic_resetting(ndev)) {
++		netdev_err(ndev, "dev resetting!");
++		return;
++	}
+ 
+-	if (if_running)
+-		ndev->netdev_ops->ndo_open(ndev);
++	/* Only do offline selftest, or pass by default */
++	if (eth_test->flags != ETH_TEST_FL_OFFLINE)
++		return;
+ 
+-	netif_dbg(h, drv, ndev, "self test end\n");
++	hns3_selftest_prepare(ndev, if_running, st_param);
++	hns3_do_selftest(ndev, st_param, eth_test, data);
++	hns3_selftest_restore(ndev, if_running);
+ }
+ 
+ static int hns3_get_sset_count(struct net_device *netdev, int stringset)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index a93c7eb4e7cbb..28a90ead4795d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -248,6 +248,10 @@ static int hclge_ieee_setets(struct hnae3_handle *h, struct ieee_ets *ets)
+ 	}
+ 
+ 	hclge_tm_schd_info_update(hdev, num_tc);
++	if (num_tc > 1)
++		hdev->flag |= HCLGE_FLAG_DCB_ENABLE;
++	else
++		hdev->flag &= ~HCLGE_FLAG_DCB_ENABLE;
+ 
+ 	ret = hclge_ieee_ets_to_tm_info(hdev, ets);
+ 	if (ret)
+@@ -313,8 +317,7 @@ static int hclge_ieee_setpfc(struct hnae3_handle *h, struct ieee_pfc *pfc)
+ 	u8 i, j, pfc_map, *prio_tc;
+ 	int ret;
+ 
+-	if (!(hdev->dcbx_cap & DCB_CAP_DCBX_VER_IEEE) ||
+-	    hdev->flag & HCLGE_FLAG_MQPRIO_ENABLE)
++	if (!(hdev->dcbx_cap & DCB_CAP_DCBX_VER_IEEE))
+ 		return -EINVAL;
+ 
+ 	if (pfc->pfc_en == hdev->tm_info.pfc_en)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 24357e9071553..0e869f449f12c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -7581,15 +7581,8 @@ int hclge_add_uc_addr_common(struct hclge_vport *vport,
+ 	}
+ 
+ 	/* check if we just hit the duplicate */
+-	if (!ret) {
+-		dev_warn(&hdev->pdev->dev, "VF %u mac(%pM) exists\n",
+-			 vport->vport_id, addr);
+-		return 0;
+-	}
+-
+-	dev_err(&hdev->pdev->dev,
+-		"PF failed to add unicast entry(%pM) in the MAC table\n",
+-		addr);
++	if (!ret)
++		return -EEXIST;
+ 
+ 	return ret;
+ }
+@@ -7743,7 +7736,13 @@ static void hclge_sync_vport_mac_list(struct hclge_vport *vport,
+ 		} else {
+ 			set_bit(HCLGE_VPORT_STATE_MAC_TBL_CHANGE,
+ 				&vport->state);
+-			break;
++
++			/* If one unicast mac address is existing in hardware,
++			 * we need to try whether other unicast mac addresses
++			 * are new addresses that can be added.
++			 */
++			if (ret != -EEXIST)
++				break;
+ 		}
+ 	}
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index e8495f58a1a8e..69d081515c60a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -646,14 +646,6 @@ static void hclge_tm_tc_info_init(struct hclge_dev *hdev)
+ 	for (i = 0; i < HNAE3_MAX_USER_PRIO; i++)
+ 		hdev->tm_info.prio_tc[i] =
+ 			(i >= hdev->tm_info.num_tc) ? 0 : i;
+-
+-	/* DCB is enabled if we have more than 1 TC or pfc_en is
+-	 * non-zero.
+-	 */
+-	if (hdev->tm_info.num_tc > 1 || hdev->tm_info.pfc_en)
+-		hdev->flag |= HCLGE_FLAG_DCB_ENABLE;
+-	else
+-		hdev->flag &= ~HCLGE_FLAG_DCB_ENABLE;
+ }
+ 
+ static void hclge_tm_pg_info_init(struct hclge_dev *hdev)
+@@ -682,12 +674,12 @@ static void hclge_tm_pg_info_init(struct hclge_dev *hdev)
+ 	}
+ }
+ 
+-static void hclge_pfc_info_init(struct hclge_dev *hdev)
++static void hclge_update_fc_mode_by_dcb_flag(struct hclge_dev *hdev)
+ {
+-	if (!(hdev->flag & HCLGE_FLAG_DCB_ENABLE)) {
++	if (hdev->tm_info.num_tc == 1 && !hdev->tm_info.pfc_en) {
+ 		if (hdev->fc_mode_last_time == HCLGE_FC_PFC)
+ 			dev_warn(&hdev->pdev->dev,
+-				 "DCB is disable, but last mode is FC_PFC\n");
++				 "Only 1 tc used, but last mode is FC_PFC\n");
+ 
+ 		hdev->tm_info.fc_mode = hdev->fc_mode_last_time;
+ 	} else if (hdev->tm_info.fc_mode != HCLGE_FC_PFC) {
+@@ -700,6 +692,27 @@ static void hclge_pfc_info_init(struct hclge_dev *hdev)
+ 	}
+ }
+ 
++static void hclge_update_fc_mode(struct hclge_dev *hdev)
++{
++	if (!hdev->tm_info.pfc_en) {
++		hdev->tm_info.fc_mode = hdev->fc_mode_last_time;
++		return;
++	}
++
++	if (hdev->tm_info.fc_mode != HCLGE_FC_PFC) {
++		hdev->fc_mode_last_time = hdev->tm_info.fc_mode;
++		hdev->tm_info.fc_mode = HCLGE_FC_PFC;
++	}
++}
++
++void hclge_tm_pfc_info_update(struct hclge_dev *hdev)
++{
++	if (hdev->ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V3)
++		hclge_update_fc_mode(hdev);
++	else
++		hclge_update_fc_mode_by_dcb_flag(hdev);
++}
++
+ static void hclge_tm_schd_info_init(struct hclge_dev *hdev)
+ {
+ 	hclge_tm_pg_info_init(hdev);
+@@ -708,7 +721,7 @@ static void hclge_tm_schd_info_init(struct hclge_dev *hdev)
+ 
+ 	hclge_tm_vport_info_update(hdev);
+ 
+-	hclge_pfc_info_init(hdev);
++	hclge_tm_pfc_info_update(hdev);
+ }
+ 
+ static int hclge_tm_pg_to_pri_map(struct hclge_dev *hdev)
+@@ -1444,19 +1457,6 @@ void hclge_tm_schd_info_update(struct hclge_dev *hdev, u8 num_tc)
+ 	hclge_tm_schd_info_init(hdev);
+ }
+ 
+-void hclge_tm_pfc_info_update(struct hclge_dev *hdev)
+-{
+-	/* DCB is enabled if we have more than 1 TC or pfc_en is
+-	 * non-zero.
+-	 */
+-	if (hdev->tm_info.num_tc > 1 || hdev->tm_info.pfc_en)
+-		hdev->flag |= HCLGE_FLAG_DCB_ENABLE;
+-	else
+-		hdev->flag &= ~HCLGE_FLAG_DCB_ENABLE;
+-
+-	hclge_pfc_info_init(hdev);
+-}
+-
+ int hclge_tm_init_hw(struct hclge_dev *hdev, bool init)
+ {
+ 	int ret;
+@@ -1502,7 +1502,7 @@ int hclge_tm_vport_map_update(struct hclge_dev *hdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (!(hdev->flag & HCLGE_FLAG_DCB_ENABLE))
++	if (hdev->tm_info.num_tc == 1 && !hdev->tm_info.pfc_en)
+ 		return 0;
+ 
+ 	return hclge_tm_bp_setup(hdev);
+diff --git a/drivers/net/ethernet/intel/e100.c b/drivers/net/ethernet/intel/e100.c
+index 609e47b8287d1..ee86ea12fa379 100644
+--- a/drivers/net/ethernet/intel/e100.c
++++ b/drivers/net/ethernet/intel/e100.c
+@@ -2431,11 +2431,15 @@ static void e100_get_drvinfo(struct net_device *netdev,
+ 		sizeof(info->bus_info));
+ }
+ 
+-#define E100_PHY_REGS 0x1C
++#define E100_PHY_REGS 0x1D
+ static int e100_get_regs_len(struct net_device *netdev)
+ {
+ 	struct nic *nic = netdev_priv(netdev);
+-	return 1 + E100_PHY_REGS + sizeof(nic->mem->dump_buf);
++
++	/* We know the number of registers, and the size of the dump buffer.
++	 * Calculate the total size in bytes.
++	 */
++	return (1 + E100_PHY_REGS) * sizeof(u32) + sizeof(nic->mem->dump_buf);
+ }
+ 
+ static void e100_get_regs(struct net_device *netdev,
+@@ -2449,14 +2453,18 @@ static void e100_get_regs(struct net_device *netdev,
+ 	buff[0] = ioread8(&nic->csr->scb.cmd_hi) << 24 |
+ 		ioread8(&nic->csr->scb.cmd_lo) << 16 |
+ 		ioread16(&nic->csr->scb.status);
+-	for (i = E100_PHY_REGS; i >= 0; i--)
+-		buff[1 + E100_PHY_REGS - i] =
+-			mdio_read(netdev, nic->mii.phy_id, i);
++	for (i = 0; i < E100_PHY_REGS; i++)
++		/* Note that we read the registers in reverse order. This
++		 * ordering is the ABI apparently used by ethtool and other
++		 * applications.
++		 */
++		buff[1 + i] = mdio_read(netdev, nic->mii.phy_id,
++					E100_PHY_REGS - 1 - i);
+ 	memset(nic->mem->dump_buf, 0, sizeof(nic->mem->dump_buf));
+ 	e100_exec_cb(nic, NULL, e100_dump);
+ 	msleep(10);
+-	memcpy(&buff[2 + E100_PHY_REGS], nic->mem->dump_buf,
+-		sizeof(nic->mem->dump_buf));
++	memcpy(&buff[1 + E100_PHY_REGS], nic->mem->dump_buf,
++	       sizeof(nic->mem->dump_buf));
+ }
+ 
+ static void e100_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+index a280aa34ca1df..55983904b6df1 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+@@ -3216,7 +3216,7 @@ static unsigned int ixgbe_max_channels(struct ixgbe_adapter *adapter)
+ 		max_combined = ixgbe_max_rss_indices(adapter);
+ 	}
+ 
+-	return max_combined;
++	return min_t(int, max_combined, num_online_cpus());
+ }
+ 
+ static void ixgbe_get_channels(struct net_device *dev,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 37439b76fcb5e..ffe322136c584 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -10123,6 +10123,7 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
+ 	struct ixgbe_adapter *adapter = netdev_priv(dev);
+ 	struct bpf_prog *old_prog;
+ 	bool need_reset;
++	int num_queues;
+ 
+ 	if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)
+ 		return -EINVAL;
+@@ -10172,11 +10173,14 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
+ 	/* Kick start the NAPI context if there is an AF_XDP socket open
+ 	 * on that queue id. This so that receiving will start.
+ 	 */
+-	if (need_reset && prog)
+-		for (i = 0; i < adapter->num_rx_queues; i++)
++	if (need_reset && prog) {
++		num_queues = min_t(int, adapter->num_rx_queues,
++				   adapter->num_xdp_queues);
++		for (i = 0; i < num_queues; i++)
+ 			if (adapter->xdp_ring[i]->xsk_pool)
+ 				(void)ixgbe_xsk_wakeup(adapter->netdev, i,
+ 						       XDP_WAKEUP_RX);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/micrel/Makefile b/drivers/net/ethernet/micrel/Makefile
+index 5cc00d22c708c..6ecc4eb30e74b 100644
+--- a/drivers/net/ethernet/micrel/Makefile
++++ b/drivers/net/ethernet/micrel/Makefile
+@@ -4,8 +4,6 @@
+ #
+ 
+ obj-$(CONFIG_KS8842) += ks8842.o
+-obj-$(CONFIG_KS8851) += ks8851.o
+-ks8851-objs = ks8851_common.o ks8851_spi.o
+-obj-$(CONFIG_KS8851_MLL) += ks8851_mll.o
+-ks8851_mll-objs = ks8851_common.o ks8851_par.o
++obj-$(CONFIG_KS8851) += ks8851_common.o ks8851_spi.o
++obj-$(CONFIG_KS8851_MLL) += ks8851_common.o ks8851_par.o
+ obj-$(CONFIG_KSZ884X_PCI) += ksz884x.o
+diff --git a/drivers/net/ethernet/micrel/ks8851_common.c b/drivers/net/ethernet/micrel/ks8851_common.c
+index d65872172229b..f74eae8eed02f 100644
+--- a/drivers/net/ethernet/micrel/ks8851_common.c
++++ b/drivers/net/ethernet/micrel/ks8851_common.c
+@@ -1031,6 +1031,7 @@ int ks8851_suspend(struct device *dev)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(ks8851_suspend);
+ 
+ int ks8851_resume(struct device *dev)
+ {
+@@ -1044,6 +1045,7 @@ int ks8851_resume(struct device *dev)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(ks8851_resume);
+ #endif
+ 
+ int ks8851_probe_common(struct net_device *netdev, struct device *dev,
+@@ -1175,6 +1177,7 @@ err_reg:
+ err_reg_io:
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(ks8851_probe_common);
+ 
+ int ks8851_remove_common(struct device *dev)
+ {
+@@ -1191,3 +1194,8 @@ int ks8851_remove_common(struct device *dev)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(ks8851_remove_common);
++
++MODULE_DESCRIPTION("KS8851 Network driver");
++MODULE_AUTHOR("Ben Dooks <ben@simtec.co.uk>");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/net/phy/bcm7xxx.c b/drivers/net/phy/bcm7xxx.c
+index 15812001b3ff0..115044e21c742 100644
+--- a/drivers/net/phy/bcm7xxx.c
++++ b/drivers/net/phy/bcm7xxx.c
+@@ -27,7 +27,12 @@
+ #define MII_BCM7XXX_SHD_2_ADDR_CTRL	0xe
+ #define MII_BCM7XXX_SHD_2_CTRL_STAT	0xf
+ #define MII_BCM7XXX_SHD_2_BIAS_TRIM	0x1a
++#define MII_BCM7XXX_SHD_3_PCS_CTRL	0x0
++#define MII_BCM7XXX_SHD_3_PCS_STATUS	0x1
++#define MII_BCM7XXX_SHD_3_EEE_CAP	0x2
+ #define MII_BCM7XXX_SHD_3_AN_EEE_ADV	0x3
++#define MII_BCM7XXX_SHD_3_EEE_LP	0x4
++#define MII_BCM7XXX_SHD_3_EEE_WK_ERR	0x5
+ #define MII_BCM7XXX_SHD_3_PCS_CTRL_2	0x6
+ #define  MII_BCM7XXX_PCS_CTRL_2_DEF	0x4400
+ #define MII_BCM7XXX_SHD_3_AN_STAT	0xb
+@@ -216,25 +221,37 @@ static int bcm7xxx_28nm_resume(struct phy_device *phydev)
+ 	return genphy_config_aneg(phydev);
+ }
+ 
+-static int phy_set_clr_bits(struct phy_device *dev, int location,
+-					int set_mask, int clr_mask)
++static int __phy_set_clr_bits(struct phy_device *dev, int location,
++			      int set_mask, int clr_mask)
+ {
+ 	int v, ret;
+ 
+-	v = phy_read(dev, location);
++	v = __phy_read(dev, location);
+ 	if (v < 0)
+ 		return v;
+ 
+ 	v &= ~clr_mask;
+ 	v |= set_mask;
+ 
+-	ret = phy_write(dev, location, v);
++	ret = __phy_write(dev, location, v);
+ 	if (ret < 0)
+ 		return ret;
+ 
+ 	return v;
+ }
+ 
++static int phy_set_clr_bits(struct phy_device *dev, int location,
++			    int set_mask, int clr_mask)
++{
++	int ret;
++
++	mutex_lock(&dev->mdio.bus->mdio_lock);
++	ret = __phy_set_clr_bits(dev, location, set_mask, clr_mask);
++	mutex_unlock(&dev->mdio.bus->mdio_lock);
++
++	return ret;
++}
++
+ static int bcm7xxx_28nm_ephy_01_afe_config_init(struct phy_device *phydev)
+ {
+ 	int ret;
+@@ -398,6 +415,93 @@ static int bcm7xxx_28nm_ephy_config_init(struct phy_device *phydev)
+ 	return bcm7xxx_28nm_ephy_apd_enable(phydev);
+ }
+ 
++#define MII_BCM7XXX_REG_INVALID	0xff
++
++static u8 bcm7xxx_28nm_ephy_regnum_to_shd(u16 regnum)
++{
++	switch (regnum) {
++	case MDIO_CTRL1:
++		return MII_BCM7XXX_SHD_3_PCS_CTRL;
++	case MDIO_STAT1:
++		return MII_BCM7XXX_SHD_3_PCS_STATUS;
++	case MDIO_PCS_EEE_ABLE:
++		return MII_BCM7XXX_SHD_3_EEE_CAP;
++	case MDIO_AN_EEE_ADV:
++		return MII_BCM7XXX_SHD_3_AN_EEE_ADV;
++	case MDIO_AN_EEE_LPABLE:
++		return MII_BCM7XXX_SHD_3_EEE_LP;
++	case MDIO_PCS_EEE_WK_ERR:
++		return MII_BCM7XXX_SHD_3_EEE_WK_ERR;
++	default:
++		return MII_BCM7XXX_REG_INVALID;
++	}
++}
++
++static bool bcm7xxx_28nm_ephy_dev_valid(int devnum)
++{
++	return devnum == MDIO_MMD_AN || devnum == MDIO_MMD_PCS;
++}
++
++static int bcm7xxx_28nm_ephy_read_mmd(struct phy_device *phydev,
++				      int devnum, u16 regnum)
++{
++	u8 shd = bcm7xxx_28nm_ephy_regnum_to_shd(regnum);
++	int ret;
++
++	if (!bcm7xxx_28nm_ephy_dev_valid(devnum) ||
++	    shd == MII_BCM7XXX_REG_INVALID)
++		return -EOPNOTSUPP;
++
++	/* set shadow mode 2 */
++	ret = __phy_set_clr_bits(phydev, MII_BCM7XXX_TEST,
++				 MII_BCM7XXX_SHD_MODE_2, 0);
++	if (ret < 0)
++		return ret;
++
++	/* Access the desired shadow register address */
++	ret = __phy_write(phydev, MII_BCM7XXX_SHD_2_ADDR_CTRL, shd);
++	if (ret < 0)
++		goto reset_shadow_mode;
++
++	ret = __phy_read(phydev, MII_BCM7XXX_SHD_2_CTRL_STAT);
++
++reset_shadow_mode:
++	/* reset shadow mode 2 */
++	__phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, 0,
++			   MII_BCM7XXX_SHD_MODE_2);
++	return ret;
++}
++
++static int bcm7xxx_28nm_ephy_write_mmd(struct phy_device *phydev,
++				       int devnum, u16 regnum, u16 val)
++{
++	u8 shd = bcm7xxx_28nm_ephy_regnum_to_shd(regnum);
++	int ret;
++
++	if (!bcm7xxx_28nm_ephy_dev_valid(devnum) ||
++	    shd == MII_BCM7XXX_REG_INVALID)
++		return -EOPNOTSUPP;
++
++	/* set shadow mode 2 */
++	ret = __phy_set_clr_bits(phydev, MII_BCM7XXX_TEST,
++				 MII_BCM7XXX_SHD_MODE_2, 0);
++	if (ret < 0)
++		return ret;
++
++	/* Access the desired shadow register address */
++	ret = __phy_write(phydev, MII_BCM7XXX_SHD_2_ADDR_CTRL, shd);
++	if (ret < 0)
++		goto reset_shadow_mode;
++
++	/* Write the desired value in the shadow register */
++	__phy_write(phydev, MII_BCM7XXX_SHD_2_CTRL_STAT, val);
++
++reset_shadow_mode:
++	/* reset shadow mode 2 */
++	return __phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, 0,
++				  MII_BCM7XXX_SHD_MODE_2);
++}
++
+ static int bcm7xxx_28nm_ephy_resume(struct phy_device *phydev)
+ {
+ 	int ret;
+@@ -595,6 +699,8 @@ static void bcm7xxx_28nm_remove(struct phy_device *phydev)
+ 	.get_stats	= bcm7xxx_28nm_get_phy_stats,			\
+ 	.probe		= bcm7xxx_28nm_probe,				\
+ 	.remove		= bcm7xxx_28nm_remove,				\
++	.read_mmd	= bcm7xxx_28nm_ephy_read_mmd,			\
++	.write_mmd	= bcm7xxx_28nm_ephy_write_mmd,			\
+ }
+ 
+ #define BCM7XXX_40NM_EPHY(_oui, _name)					\
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index df8d4c1e5be74..db484215a78c8 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -2354,7 +2354,7 @@ static int remove_net_device(struct hso_device *hso_dev)
+ }
+ 
+ /* Frees our network device */
+-static void hso_free_net_device(struct hso_device *hso_dev, bool bailout)
++static void hso_free_net_device(struct hso_device *hso_dev)
+ {
+ 	int i;
+ 	struct hso_net *hso_net = dev2net(hso_dev);
+@@ -2377,7 +2377,7 @@ static void hso_free_net_device(struct hso_device *hso_dev, bool bailout)
+ 	kfree(hso_net->mux_bulk_tx_buf);
+ 	hso_net->mux_bulk_tx_buf = NULL;
+ 
+-	if (hso_net->net && !bailout)
++	if (hso_net->net)
+ 		free_netdev(hso_net->net);
+ 
+ 	kfree(hso_dev);
+@@ -3137,7 +3137,7 @@ static void hso_free_interface(struct usb_interface *interface)
+ 				rfkill_unregister(rfk);
+ 				rfkill_destroy(rfk);
+ 			}
+-			hso_free_net_device(network_table[i], false);
++			hso_free_net_device(network_table[i]);
+ 		}
+ 	}
+ }
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index ea0d5f04dc3a8..465e11dcdf129 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -1178,7 +1178,10 @@ static void smsc95xx_unbind(struct usbnet *dev, struct usb_interface *intf)
+ 
+ static void smsc95xx_handle_link_change(struct net_device *net)
+ {
++	struct usbnet *dev = netdev_priv(net);
++
+ 	phy_print_status(net->phydev);
++	usbnet_defer_kevent(dev, EVENT_LINK_CHANGE);
+ }
+ 
+ static int smsc95xx_start_phy(struct usbnet *dev)
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 4ca0b06d09add..b793d61d15d27 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -1795,8 +1795,8 @@ mac80211_hwsim_beacon(struct hrtimer *timer)
+ 		bcn_int -= data->bcn_delta;
+ 		data->bcn_delta = 0;
+ 	}
+-	hrtimer_forward(&data->beacon_timer, hrtimer_get_expires(timer),
+-			ns_to_ktime(bcn_int * NSEC_PER_USEC));
++	hrtimer_forward_now(&data->beacon_timer,
++			    ns_to_ktime(bcn_int * NSEC_PER_USEC));
+ 	return HRTIMER_RESTART;
+ }
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index bbc3efef50278..99b5152482fe4 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -831,6 +831,7 @@ EXPORT_SYMBOL_GPL(nvme_cleanup_cmd);
+ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
+ 		struct nvme_command *cmd)
+ {
++	struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;
+ 	blk_status_t ret = BLK_STS_OK;
+ 
+ 	nvme_clear_nvme_request(req);
+@@ -877,7 +878,8 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
+ 		return BLK_STS_IOERR;
+ 	}
+ 
+-	nvme_req(req)->genctr++;
++	if (!(ctrl->quirks & NVME_QUIRK_SKIP_CID_GEN))
++		nvme_req(req)->genctr++;
+ 	cmd->common.command_id = nvme_cid(req);
+ 	trace_nvme_setup_cmd(req, cmd);
+ 	return ret;
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 8c735c55c15bf..5dd1dd8021ba1 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -144,6 +144,12 @@ enum nvme_quirks {
+ 	 * NVMe 1.3 compliance.
+ 	 */
+ 	NVME_QUIRK_NO_NS_DESC_LIST		= (1 << 15),
++
++	/*
++	 * The controller requires the command_id value be be limited, so skip
++	 * encoding the generation sequence number.
++	 */
++	NVME_QUIRK_SKIP_CID_GEN			= (1 << 17),
+ };
+ 
+ /*
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 09767a805492c..d79abb88a0c62 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3259,7 +3259,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2005),
+ 		.driver_data = NVME_QUIRK_SINGLE_VECTOR |
+ 				NVME_QUIRK_128_BYTES_SQES |
+-				NVME_QUIRK_SHARED_TAGS },
++				NVME_QUIRK_SHARED_TAGS |
++				NVME_QUIRK_SKIP_CID_GEN },
+ 
+ 	{ PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
+ 	{ 0, }
+diff --git a/drivers/scsi/csiostor/csio_init.c b/drivers/scsi/csiostor/csio_init.c
+index 390b07bf92b97..ccbded3353bd0 100644
+--- a/drivers/scsi/csiostor/csio_init.c
++++ b/drivers/scsi/csiostor/csio_init.c
+@@ -1254,3 +1254,4 @@ MODULE_DEVICE_TABLE(pci, csio_pci_tbl);
+ MODULE_VERSION(CSIO_DRV_VERSION);
+ MODULE_FIRMWARE(FW_FNAME_T5);
+ MODULE_FIRMWARE(FW_FNAME_T6);
++MODULE_SOFTDEP("pre: cxgb4");
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 4f0486fe30dd7..e1fd91a581202 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -3913,7 +3913,6 @@ struct qla_hw_data {
+ 		uint32_t	scm_supported_f:1;
+ 				/* Enabled in Driver */
+ 		uint32_t	scm_enabled:1;
+-		uint32_t	max_req_queue_warned:1;
+ 		uint32_t	plogi_template_valid:1;
+ 	} flags;
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index a24b82de4aab7..5e040b6debc84 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -4158,6 +4158,8 @@ skip_msi:
+ 		ql_dbg(ql_dbg_init, vha, 0x0125,
+ 		    "INTa mode: Enabled.\n");
+ 		ha->flags.mr_intr_valid = 1;
++		/* Set max_qpair to 0, as MSI-X and MSI in not enabled */
++		ha->max_qpairs = 0;
+ 	}
+ 
+ clear_risc_ints:
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index f6c76a063294b..5acee3c798d42 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -109,19 +109,24 @@ static int qla_nvme_alloc_queue(struct nvme_fc_local_port *lport,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (ha->queue_pair_map[qidx]) {
+-		*handle = ha->queue_pair_map[qidx];
+-		ql_log(ql_log_info, vha, 0x2121,
+-		    "Returning existing qpair of %p for idx=%x\n",
+-		    *handle, qidx);
+-		return 0;
+-	}
++	/* Use base qpair if max_qpairs is 0 */
++	if (!ha->max_qpairs) {
++		qpair = ha->base_qpair;
++	} else {
++		if (ha->queue_pair_map[qidx]) {
++			*handle = ha->queue_pair_map[qidx];
++			ql_log(ql_log_info, vha, 0x2121,
++			       "Returning existing qpair of %p for idx=%x\n",
++			       *handle, qidx);
++			return 0;
++		}
+ 
+-	qpair = qla2xxx_create_qpair(vha, 5, vha->vp_idx, true);
+-	if (qpair == NULL) {
+-		ql_log(ql_log_warn, vha, 0x2122,
+-		    "Failed to allocate qpair\n");
+-		return -EINVAL;
++		qpair = qla2xxx_create_qpair(vha, 5, vha->vp_idx, true);
++		if (!qpair) {
++			ql_log(ql_log_warn, vha, 0x2122,
++			       "Failed to allocate qpair\n");
++			return -EINVAL;
++		}
+ 	}
+ 	*handle = qpair;
+ 
+@@ -715,18 +720,9 @@ int qla_nvme_register_hba(struct scsi_qla_host *vha)
+ 
+ 	WARN_ON(vha->nvme_local_port);
+ 
+-	if (ha->max_req_queues < 3) {
+-		if (!ha->flags.max_req_queue_warned)
+-			ql_log(ql_log_info, vha, 0x2120,
+-			       "%s: Disabling FC-NVME due to lack of free queue pairs (%d).\n",
+-			       __func__, ha->max_req_queues);
+-		ha->flags.max_req_queue_warned = 1;
+-		return ret;
+-	}
+-
+ 	qla_nvme_fc_transport.max_hw_queues =
+ 	    min((uint8_t)(qla_nvme_fc_transport.max_hw_queues),
+-		(uint8_t)(ha->max_req_queues - 2));
++		(uint8_t)(ha->max_qpairs ? ha->max_qpairs : 1));
+ 
+ 	pinfo.node_name = wwn_to_u64(vha->node_name);
+ 	pinfo.port_name = wwn_to_u64(vha->port_name);
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 4dabd09400c6d..3139d9df6f320 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -318,8 +318,7 @@ static void ufshcd_add_query_upiu_trace(struct ufs_hba *hba, unsigned int tag,
+ static void ufshcd_add_tm_upiu_trace(struct ufs_hba *hba, unsigned int tag,
+ 		const char *str)
+ {
+-	int off = (int)tag - hba->nutrs;
+-	struct utp_task_req_desc *descp = &hba->utmrdl_base_addr[off];
++	struct utp_task_req_desc *descp = &hba->utmrdl_base_addr[tag];
+ 
+ 	trace_ufshcd_upiu(dev_name(hba->dev), str, &descp->req_header,
+ 			&descp->input_param1);
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index cea40ef090b77..a7ee1171eeb3e 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -1220,8 +1220,25 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
+ 	new_row_size = new_cols << 1;
+ 	new_screen_size = new_row_size * new_rows;
+ 
+-	if (new_cols == vc->vc_cols && new_rows == vc->vc_rows)
+-		return 0;
++	if (new_cols == vc->vc_cols && new_rows == vc->vc_rows) {
++		/*
++		 * This function is being called here to cover the case
++		 * where the userspace calls the FBIOPUT_VSCREENINFO twice,
++		 * passing the same fb_var_screeninfo containing the fields
++		 * yres/xres equal to a number non-multiple of vc_font.height
++		 * and yres_virtual/xres_virtual equal to number lesser than the
++		 * vc_font.height and yres/xres.
++		 * In the second call, the struct fb_var_screeninfo isn't
++		 * being modified by the underlying driver because of the
++		 * if above, and this causes the fbcon_display->vrows to become
++		 * negative and it eventually leads to out-of-bound
++		 * access by the imageblit function.
++		 * To give the correct values to the struct and to not have
++		 * to deal with possible errors from the code below, we call
++		 * the resize_screen here as well.
++		 */
++		return resize_screen(vc, new_cols, new_rows, user);
++	}
+ 
+ 	if (new_screen_size > KMALLOC_MAX_SIZE || !new_screen_size)
+ 		return -EINVAL;
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index 9d38f864cb68c..e111622944139 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -1101,6 +1101,19 @@ static int cdns3_ep_run_stream_transfer(struct cdns3_endpoint *priv_ep,
+ 	return 0;
+ }
+ 
++static void cdns3_rearm_drdy_if_needed(struct cdns3_endpoint *priv_ep)
++{
++	struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
++
++	if (priv_dev->dev_ver < DEV_VER_V3)
++		return;
++
++	if (readl(&priv_dev->regs->ep_sts) & EP_STS_TRBERR) {
++		writel(EP_STS_TRBERR, &priv_dev->regs->ep_sts);
++		writel(EP_CMD_DRDY, &priv_dev->regs->ep_cmd);
++	}
++}
++
+ /**
+  * cdns3_ep_run_transfer - start transfer on no-default endpoint hardware
+  * @priv_ep: endpoint object
+@@ -1352,6 +1365,7 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 		/*clearing TRBERR and EP_STS_DESCMIS before seting DRDY*/
+ 		writel(EP_STS_TRBERR | EP_STS_DESCMIS, &priv_dev->regs->ep_sts);
+ 		writel(EP_CMD_DRDY, &priv_dev->regs->ep_cmd);
++		cdns3_rearm_drdy_if_needed(priv_ep);
+ 		trace_cdns3_doorbell_epx(priv_ep->name,
+ 					 readl(&priv_dev->regs->ep_traddr));
+ 	}
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index fa50e8936f5fc..04c4aa7a1df2c 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -627,7 +627,7 @@ static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex,
+ 
+ 			vaddr = eppnt->p_vaddr;
+ 			if (interp_elf_ex->e_type == ET_EXEC || load_addr_set)
+-				elf_type |= MAP_FIXED_NOREPLACE;
++				elf_type |= MAP_FIXED;
+ 			else if (no_base && interp_elf_ex->e_type == ET_DYN)
+ 				load_addr = -vaddr;
+ 
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 720d65f224f09..848e0aaa8da5d 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -524,7 +524,7 @@ void debugfs_create_file_size(const char *name, umode_t mode,
+ {
+ 	struct dentry *de = debugfs_create_file(name, mode, parent, data, fops);
+ 
+-	if (de)
++	if (!IS_ERR(de))
+ 		d_inode(de)->i_size = file_size;
+ }
+ EXPORT_SYMBOL_GPL(debugfs_create_file_size);
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index ca50c90adc4c4..70a0f5e56f4d5 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -534,7 +534,7 @@ static int ext4_dx_readdir(struct file *file, struct dir_context *ctx)
+ 	struct dir_private_info *info = file->private_data;
+ 	struct inode *inode = file_inode(file);
+ 	struct fname *fname;
+-	int	ret;
++	int ret = 0;
+ 
+ 	if (!info) {
+ 		info = ext4_htree_create_dir_info(file, ctx->pos);
+@@ -582,7 +582,7 @@ static int ext4_dx_readdir(struct file *file, struct dir_context *ctx)
+ 						   info->curr_minor_hash,
+ 						   &info->next_hash);
+ 			if (ret < 0)
+-				return ret;
++				goto finished;
+ 			if (ret == 0) {
+ 				ctx->pos = ext4_get_htree_eof(file);
+ 				break;
+@@ -613,7 +613,7 @@ static int ext4_dx_readdir(struct file *file, struct dir_context *ctx)
+ 	}
+ finished:
+ 	info->last_pos = ctx->pos;
+-	return 0;
++	return ret < 0 ? ret : 0;
+ }
+ 
+ static int ext4_dir_open(struct inode * inode, struct file * filp)
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index e00a35530a4e0..aa4d74f9d1623 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -5907,7 +5907,7 @@ void ext4_ext_replay_shrink_inode(struct inode *inode, ext4_lblk_t end)
+ }
+ 
+ /* Check if *cur is a hole and if it is, skip it */
+-static void skip_hole(struct inode *inode, ext4_lblk_t *cur)
++static int skip_hole(struct inode *inode, ext4_lblk_t *cur)
+ {
+ 	int ret;
+ 	struct ext4_map_blocks map;
+@@ -5916,9 +5916,12 @@ static void skip_hole(struct inode *inode, ext4_lblk_t *cur)
+ 	map.m_len = ((inode->i_size) >> inode->i_sb->s_blocksize_bits) - *cur;
+ 
+ 	ret = ext4_map_blocks(NULL, inode, &map, 0);
++	if (ret < 0)
++		return ret;
+ 	if (ret != 0)
+-		return;
++		return 0;
+ 	*cur = *cur + map.m_len;
++	return 0;
+ }
+ 
+ /* Count number of blocks used by this inode and update i_blocks */
+@@ -5967,7 +5970,9 @@ int ext4_ext_replay_set_iblocks(struct inode *inode)
+ 	 * iblocks by total number of differences found.
+ 	 */
+ 	cur = 0;
+-	skip_hole(inode, &cur);
++	ret = skip_hole(inode, &cur);
++	if (ret < 0)
++		goto out;
+ 	path = ext4_find_extent(inode, cur, NULL, 0);
+ 	if (IS_ERR(path))
+ 		goto out;
+@@ -5986,8 +5991,12 @@ int ext4_ext_replay_set_iblocks(struct inode *inode)
+ 		}
+ 		cur = max(cur + 1, le32_to_cpu(ex->ee_block) +
+ 					ext4_ext_get_actual_len(ex));
+-		skip_hole(inode, &cur);
+-
++		ret = skip_hole(inode, &cur);
++		if (ret < 0) {
++			ext4_ext_drop_refs(path);
++			kfree(path);
++			break;
++		}
+ 		path2 = ext4_find_extent(inode, cur, NULL, 0);
+ 		if (IS_ERR(path2)) {
+ 			ext4_ext_drop_refs(path);
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 53647fa038773..08ca690f928bd 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -832,6 +832,12 @@ static int ext4_fc_write_inode_data(struct inode *inode, u32 *crc)
+ 					    sizeof(lrange), (u8 *)&lrange, crc))
+ 				return -ENOSPC;
+ 		} else {
++			unsigned int max = (map.m_flags & EXT4_MAP_UNWRITTEN) ?
++				EXT_UNWRITTEN_MAX_LEN : EXT_INIT_MAX_LEN;
++
++			/* Limit the number of blocks in one extent */
++			map.m_len = min(max, map.m_len);
++
+ 			fc_ext.fc_ino = cpu_to_le32(inode->i_ino);
+ 			ex = (struct ext4_extent *)&fc_ext.fc_ex;
+ 			ex->ee_block = cpu_to_le32(map.m_lblk);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 18a5321b5ef37..63a292db75877 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1641,6 +1641,7 @@ static int ext4_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk)
+ 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ 	int ret;
+ 	bool allocated = false;
++	bool reserved = false;
+ 
+ 	/*
+ 	 * If the cluster containing lblk is shared with a delayed,
+@@ -1657,6 +1658,7 @@ static int ext4_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk)
+ 		ret = ext4_da_reserve_space(inode);
+ 		if (ret != 0)   /* ENOSPC */
+ 			goto errout;
++		reserved = true;
+ 	} else {   /* bigalloc */
+ 		if (!ext4_es_scan_clu(inode, &ext4_es_is_delonly, lblk)) {
+ 			if (!ext4_es_scan_clu(inode,
+@@ -1669,6 +1671,7 @@ static int ext4_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk)
+ 					ret = ext4_da_reserve_space(inode);
+ 					if (ret != 0)   /* ENOSPC */
+ 						goto errout;
++					reserved = true;
+ 				} else {
+ 					allocated = true;
+ 				}
+@@ -1679,6 +1682,8 @@ static int ext4_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk)
+ 	}
+ 
+ 	ret = ext4_es_insert_delayed_block(inode, lblk, allocated);
++	if (ret && reserved)
++		ext4_da_release_space(inode, 1);
+ 
+ errout:
+ 	return ret;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 099e4afa41e52..cbeb024296719 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1356,6 +1356,12 @@ static void ext4_destroy_inode(struct inode *inode)
+ 				true);
+ 		dump_stack();
+ 	}
++
++	if (EXT4_I(inode)->i_reserved_data_blocks)
++		ext4_msg(inode->i_sb, KERN_ERR,
++			 "Inode %lu (%p): i_reserved_data_blocks (%u) not cleared!",
++			 inode->i_ino, EXT4_I(inode),
++			 EXT4_I(inode)->i_reserved_data_blocks);
+ }
+ 
+ static void init_once(void *foo)
+@@ -3194,17 +3200,17 @@ static loff_t ext4_max_size(int blkbits, int has_huge_files)
+  */
+ static loff_t ext4_max_bitmap_size(int bits, int has_huge_files)
+ {
+-	loff_t res = EXT4_NDIR_BLOCKS;
++	unsigned long long upper_limit, res = EXT4_NDIR_BLOCKS;
+ 	int meta_blocks;
+-	loff_t upper_limit;
+-	/* This is calculated to be the largest file size for a dense, block
++
++	/*
++	 * This is calculated to be the largest file size for a dense, block
+ 	 * mapped file such that the file's total number of 512-byte sectors,
+ 	 * including data and all indirect blocks, does not exceed (2^48 - 1).
+ 	 *
+ 	 * __u32 i_blocks_lo and _u16 i_blocks_high represent the total
+ 	 * number of 512-byte sectors of the file.
+ 	 */
+-
+ 	if (!has_huge_files) {
+ 		/*
+ 		 * !has_huge_files or implies that the inode i_block field
+@@ -3247,7 +3253,7 @@ static loff_t ext4_max_bitmap_size(int bits, int has_huge_files)
+ 	if (res > MAX_LFS_FILESIZE)
+ 		res = MAX_LFS_FILESIZE;
+ 
+-	return res;
++	return (loff_t)res;
+ }
+ 
+ static ext4_fsblk_t descriptor_loc(struct super_block *sb,
+diff --git a/fs/verity/enable.c b/fs/verity/enable.c
+index 5ab3bbec81087..734862e608fd3 100644
+--- a/fs/verity/enable.c
++++ b/fs/verity/enable.c
+@@ -177,7 +177,7 @@ static int build_merkle_tree(struct file *filp,
+ 	 * (level 0) and ascending to the root node (level 'num_levels - 1').
+ 	 * Then at the end (level 'num_levels'), calculate the root hash.
+ 	 */
+-	blocks = (inode->i_size + params->block_size - 1) >>
++	blocks = ((u64)inode->i_size + params->block_size - 1) >>
+ 		 params->log_blocksize;
+ 	for (level = 0; level <= params->num_levels; level++) {
+ 		err = build_merkle_tree_level(filp, level, blocks, params,
+diff --git a/fs/verity/open.c b/fs/verity/open.c
+index bfe0280c14e49..67d71f7b1b483 100644
+--- a/fs/verity/open.c
++++ b/fs/verity/open.c
+@@ -89,7 +89,7 @@ int fsverity_init_merkle_tree_params(struct merkle_tree_params *params,
+ 	 */
+ 
+ 	/* Compute number of levels and the number of blocks in each level */
+-	blocks = (inode->i_size + params->block_size - 1) >> log_blocksize;
++	blocks = ((u64)inode->i_size + params->block_size - 1) >> log_blocksize;
+ 	pr_debug("Data is %lld bytes (%llu blocks)\n", inode->i_size, blocks);
+ 	while (blocks > 1) {
+ 		if (params->num_levels >= FS_VERITY_MAX_LEVELS) {
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 3f93a50c25efe..0caa448f7b409 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -526,6 +526,8 @@ struct btf_func_model {
+  * programs only. Should not be used with normal calls and indirect calls.
+  */
+ #define BPF_TRAMP_F_SKIP_FRAME		BIT(2)
++/* Return the return value of fentry prog. Only used by bpf_struct_ops. */
++#define BPF_TRAMP_F_RET_FENTRY_RET	BIT(4)
+ 
+ /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
+  * bytes on x86.  Pick a number to fit into BPF_IMAGE_SIZE / 2
+diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
+index 2ec062aaa9782..4d431d7b4415a 100644
+--- a/include/net/ip_fib.h
++++ b/include/net/ip_fib.h
+@@ -553,5 +553,5 @@ int ip_valid_fib_dump_req(struct net *net, const struct nlmsghdr *nlh,
+ int fib_nexthop_info(struct sk_buff *skb, const struct fib_nh_common *nh,
+ 		     u8 rt_family, unsigned char *flags, bool skip_oif);
+ int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nh,
+-		    int nh_weight, u8 rt_family);
++		    int nh_weight, u8 rt_family, u32 nh_tclassid);
+ #endif  /* _NET_FIB_H */
+diff --git a/include/net/nexthop.h b/include/net/nexthop.h
+index 4c8c9fe9a3f0e..fd87d727aa217 100644
+--- a/include/net/nexthop.h
++++ b/include/net/nexthop.h
+@@ -211,7 +211,7 @@ int nexthop_mpath_fill_node(struct sk_buff *skb, struct nexthop *nh,
+ 		struct fib_nh_common *nhc = &nhi->fib_nhc;
+ 		int weight = nhg->nh_entries[i].weight;
+ 
+-		if (fib_add_nexthop(skb, nhc, weight, rt_family) < 0)
++		if (fib_add_nexthop(skb, nhc, weight, rt_family, 0) < 0)
+ 			return -EMSGSIZE;
+ 	}
+ 
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 3c7addf951509..cdca984f36305 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -479,8 +479,10 @@ struct sock {
+ 	u32			sk_ack_backlog;
+ 	u32			sk_max_ack_backlog;
+ 	kuid_t			sk_uid;
++	spinlock_t		sk_peer_lock;
+ 	struct pid		*sk_peer_pid;
+ 	const struct cred	*sk_peer_cred;
++
+ 	long			sk_rcvtimeo;
+ 	ktime_t			sk_stamp;
+ #if BITS_PER_LONG==32
+diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
+index f527063864b55..ac283f9b2332e 100644
+--- a/kernel/bpf/bpf_struct_ops.c
++++ b/kernel/bpf/bpf_struct_ops.c
+@@ -367,6 +367,7 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ 		const struct btf_type *mtype, *ptype;
+ 		struct bpf_prog *prog;
+ 		u32 moff;
++		u32 flags;
+ 
+ 		moff = btf_member_bit_offset(t, member) / 8;
+ 		ptype = btf_type_resolve_ptr(btf_vmlinux, member->type, NULL);
+@@ -430,10 +431,12 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ 
+ 		tprogs[BPF_TRAMP_FENTRY].progs[0] = prog;
+ 		tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
++		flags = st_ops->func_models[i].ret_size > 0 ?
++			BPF_TRAMP_F_RET_FENTRY_RET : 0;
+ 		err = arch_prepare_bpf_trampoline(NULL, image,
+ 						  st_map->image + PAGE_SIZE,
+-						  &st_ops->func_models[i], 0,
+-						  tprogs, NULL);
++						  &st_ops->func_models[i],
++						  flags, tprogs, NULL);
+ 		if (err < 0)
+ 			goto reset_unlock;
+ 
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index d12efb2550d35..2e4a658d65d6e 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -831,7 +831,7 @@ int bpf_jit_charge_modmem(u32 pages)
+ {
+ 	if (atomic_long_add_return(pages, &bpf_jit_current) >
+ 	    (bpf_jit_limit >> PAGE_SHIFT)) {
+-		if (!capable(CAP_SYS_ADMIN)) {
++		if (!bpf_capable()) {
+ 			atomic_long_sub(pages, &bpf_jit_current);
+ 			return -EPERM;
+ 		}
+diff --git a/kernel/entry/kvm.c b/kernel/entry/kvm.c
+index b6678a5e3cf64..2a3139dab109e 100644
+--- a/kernel/entry/kvm.c
++++ b/kernel/entry/kvm.c
+@@ -16,8 +16,10 @@ static int xfer_to_guest_mode_work(struct kvm_vcpu *vcpu, unsigned long ti_work)
+ 		if (ti_work & _TIF_NEED_RESCHED)
+ 			schedule();
+ 
+-		if (ti_work & _TIF_NOTIFY_RESUME)
++		if (ti_work & _TIF_NOTIFY_RESUME) {
+ 			tracehook_notify_resume(NULL);
++			rseq_handle_notify_resume(NULL, NULL);
++		}
+ 
+ 		ret = arch_xfer_to_guest_mode_handle_work(vcpu, ti_work);
+ 		if (ret)
+diff --git a/kernel/rseq.c b/kernel/rseq.c
+index a4f86a9d6937c..0077713bf2400 100644
+--- a/kernel/rseq.c
++++ b/kernel/rseq.c
+@@ -268,9 +268,16 @@ void __rseq_handle_notify_resume(struct ksignal *ksig, struct pt_regs *regs)
+ 		return;
+ 	if (unlikely(!access_ok(t->rseq, sizeof(*t->rseq))))
+ 		goto error;
+-	ret = rseq_ip_fixup(regs);
+-	if (unlikely(ret < 0))
+-		goto error;
++	/*
++	 * regs is NULL if and only if the caller is in a syscall path.  Skip
++	 * fixup and leave rseq_cs as is so that rseq_sycall() will detect and
++	 * kill a misbehaving userspace on debug kernels.
++	 */
++	if (regs) {
++		ret = rseq_ip_fixup(regs);
++		if (unlikely(ret < 0))
++			goto error;
++	}
+ 	if (unlikely(rseq_update_cpu_id(t)))
+ 		goto error;
+ 	return;
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 97d318b0cd0cb..5e39da0ae0868 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -610,9 +610,17 @@ static struct attribute *sugov_attrs[] = {
+ };
+ ATTRIBUTE_GROUPS(sugov);
+ 
++static void sugov_tunables_free(struct kobject *kobj)
++{
++	struct gov_attr_set *attr_set = container_of(kobj, struct gov_attr_set, kobj);
++
++	kfree(to_sugov_tunables(attr_set));
++}
++
+ static struct kobj_type sugov_tunables_ktype = {
+ 	.default_groups = sugov_groups,
+ 	.sysfs_ops = &governor_sysfs_ops,
++	.release = &sugov_tunables_free,
+ };
+ 
+ /********************** cpufreq governor interface *********************/
+@@ -712,12 +720,10 @@ static struct sugov_tunables *sugov_tunables_alloc(struct sugov_policy *sg_polic
+ 	return tunables;
+ }
+ 
+-static void sugov_tunables_free(struct sugov_tunables *tunables)
++static void sugov_clear_global_tunables(void)
+ {
+ 	if (!have_governor_per_policy())
+ 		global_tunables = NULL;
+-
+-	kfree(tunables);
+ }
+ 
+ static int sugov_init(struct cpufreq_policy *policy)
+@@ -780,7 +786,7 @@ out:
+ fail:
+ 	kobject_put(&tunables->attr_set.kobj);
+ 	policy->governor_data = NULL;
+-	sugov_tunables_free(tunables);
++	sugov_clear_global_tunables();
+ 
+ stop_kthread:
+ 	sugov_kthread_stop(sg_policy);
+@@ -807,7 +813,7 @@ static void sugov_exit(struct cpufreq_policy *policy)
+ 	count = gov_attr_set_put(&tunables->attr_set, &sg_policy->tunables_hook);
+ 	policy->governor_data = NULL;
+ 	if (!count)
+-		sugov_tunables_free(tunables);
++		sugov_clear_global_tunables();
+ 
+ 	mutex_unlock(&global_tunables_lock);
+ 
+diff --git a/mm/util.c b/mm/util.c
+index d5be677718500..90792e4eaa252 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -581,6 +581,10 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
+ 	if (ret || size <= PAGE_SIZE)
+ 		return ret;
+ 
++	/* Don't even allow crazy sizes */
++	if (WARN_ON_ONCE(size > INT_MAX))
++		return NULL;
++
+ 	return __vmalloc_node(size, 1, flags, node,
+ 			__builtin_return_address(0));
+ }
+diff --git a/net/core/sock.c b/net/core/sock.c
+index d638c5361ed29..f9c835167391d 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1255,6 +1255,16 @@ set_sndbuf:
+ }
+ EXPORT_SYMBOL(sock_setsockopt);
+ 
++static const struct cred *sk_get_peer_cred(struct sock *sk)
++{
++	const struct cred *cred;
++
++	spin_lock(&sk->sk_peer_lock);
++	cred = get_cred(sk->sk_peer_cred);
++	spin_unlock(&sk->sk_peer_lock);
++
++	return cred;
++}
+ 
+ static void cred_to_ucred(struct pid *pid, const struct cred *cred,
+ 			  struct ucred *ucred)
+@@ -1428,7 +1438,11 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 		struct ucred peercred;
+ 		if (len > sizeof(peercred))
+ 			len = sizeof(peercred);
++
++		spin_lock(&sk->sk_peer_lock);
+ 		cred_to_ucred(sk->sk_peer_pid, sk->sk_peer_cred, &peercred);
++		spin_unlock(&sk->sk_peer_lock);
++
+ 		if (copy_to_user(optval, &peercred, len))
+ 			return -EFAULT;
+ 		goto lenout;
+@@ -1436,20 +1450,23 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 
+ 	case SO_PEERGROUPS:
+ 	{
++		const struct cred *cred;
+ 		int ret, n;
+ 
+-		if (!sk->sk_peer_cred)
++		cred = sk_get_peer_cred(sk);
++		if (!cred)
+ 			return -ENODATA;
+ 
+-		n = sk->sk_peer_cred->group_info->ngroups;
++		n = cred->group_info->ngroups;
+ 		if (len < n * sizeof(gid_t)) {
+ 			len = n * sizeof(gid_t);
++			put_cred(cred);
+ 			return put_user(len, optlen) ? -EFAULT : -ERANGE;
+ 		}
+ 		len = n * sizeof(gid_t);
+ 
+-		ret = groups_to_user((gid_t __user *)optval,
+-				     sk->sk_peer_cred->group_info);
++		ret = groups_to_user((gid_t __user *)optval, cred->group_info);
++		put_cred(cred);
+ 		if (ret)
+ 			return ret;
+ 		goto lenout;
+@@ -1788,9 +1805,10 @@ static void __sk_destruct(struct rcu_head *head)
+ 		sk->sk_frag.page = NULL;
+ 	}
+ 
+-	if (sk->sk_peer_cred)
+-		put_cred(sk->sk_peer_cred);
++	/* We do not need to acquire sk->sk_peer_lock, we are the last user. */
++	put_cred(sk->sk_peer_cred);
+ 	put_pid(sk->sk_peer_pid);
++
+ 	if (likely(sk->sk_net_refcnt))
+ 		put_net(sock_net(sk));
+ 	sk_prot_free(sk->sk_prot_creator, sk);
+@@ -3000,6 +3018,8 @@ void sock_init_data(struct socket *sock, struct sock *sk)
+ 
+ 	sk->sk_peer_pid 	=	NULL;
+ 	sk->sk_peer_cred	=	NULL;
++	spin_lock_init(&sk->sk_peer_lock);
++
+ 	sk->sk_write_pending	=	0;
+ 	sk->sk_rcvlowat		=	1;
+ 	sk->sk_rcvtimeo		=	MAX_SCHEDULE_TIMEOUT;
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 1f75dc686b6b6..642503e89924b 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -1663,7 +1663,7 @@ EXPORT_SYMBOL_GPL(fib_nexthop_info);
+ 
+ #if IS_ENABLED(CONFIG_IP_ROUTE_MULTIPATH) || IS_ENABLED(CONFIG_IPV6)
+ int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nhc,
+-		    int nh_weight, u8 rt_family)
++		    int nh_weight, u8 rt_family, u32 nh_tclassid)
+ {
+ 	const struct net_device *dev = nhc->nhc_dev;
+ 	struct rtnexthop *rtnh;
+@@ -1681,6 +1681,9 @@ int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nhc,
+ 
+ 	rtnh->rtnh_flags = flags;
+ 
++	if (nh_tclassid && nla_put_u32(skb, RTA_FLOW, nh_tclassid))
++		goto nla_put_failure;
++
+ 	/* length of rtnetlink header + attributes */
+ 	rtnh->rtnh_len = nlmsg_get_pos(skb) - (void *)rtnh;
+ 
+@@ -1708,14 +1711,13 @@ static int fib_add_multipath(struct sk_buff *skb, struct fib_info *fi)
+ 	}
+ 
+ 	for_nexthops(fi) {
+-		if (fib_add_nexthop(skb, &nh->nh_common, nh->fib_nh_weight,
+-				    AF_INET) < 0)
+-			goto nla_put_failure;
++		u32 nh_tclassid = 0;
+ #ifdef CONFIG_IP_ROUTE_CLASSID
+-		if (nh->nh_tclassid &&
+-		    nla_put_u32(skb, RTA_FLOW, nh->nh_tclassid))
+-			goto nla_put_failure;
++		nh_tclassid = nh->nh_tclassid;
+ #endif
++		if (fib_add_nexthop(skb, &nh->nh_common, nh->fib_nh_weight,
++				    AF_INET, nh_tclassid) < 0)
++			goto nla_put_failure;
+ 	} endfor_nexthops(fi);
+ 
+ mp_end:
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index e73312546c5a1..bd7fd9b1f24c8 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1035,7 +1035,7 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	__be16 dport;
+ 	u8  tos;
+ 	int err, is_udplite = IS_UDPLITE(sk);
+-	int corkreq = up->corkflag || msg->msg_flags&MSG_MORE;
++	int corkreq = READ_ONCE(up->corkflag) || msg->msg_flags&MSG_MORE;
+ 	int (*getfrag)(void *, char *, int, int, int, struct sk_buff *);
+ 	struct sk_buff *skb;
+ 	struct ip_options_data opt_copy;
+@@ -1343,7 +1343,7 @@ int udp_sendpage(struct sock *sk, struct page *page, int offset,
+ 	}
+ 
+ 	up->len += size;
+-	if (!(up->corkflag || (flags&MSG_MORE)))
++	if (!(READ_ONCE(up->corkflag) || (flags&MSG_MORE)))
+ 		ret = udp_push_pending_frames(sk);
+ 	if (!ret)
+ 		ret = size;
+@@ -2609,9 +2609,9 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
+ 	switch (optname) {
+ 	case UDP_CORK:
+ 		if (val != 0) {
+-			up->corkflag = 1;
++			WRITE_ONCE(up->corkflag, 1);
+ 		} else {
+-			up->corkflag = 0;
++			WRITE_ONCE(up->corkflag, 0);
+ 			lock_sock(sk);
+ 			push_pending_frames(sk);
+ 			release_sock(sk);
+@@ -2734,7 +2734,7 @@ int udp_lib_getsockopt(struct sock *sk, int level, int optname,
+ 
+ 	switch (optname) {
+ 	case UDP_CORK:
+-		val = up->corkflag;
++		val = READ_ONCE(up->corkflag);
+ 		break;
+ 
+ 	case UDP_ENCAP:
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 168a7b4d957ae..a68a7d7c07280 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -5566,14 +5566,15 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			goto nla_put_failure;
+ 
+ 		if (fib_add_nexthop(skb, &rt->fib6_nh->nh_common,
+-				    rt->fib6_nh->fib_nh_weight, AF_INET6) < 0)
++				    rt->fib6_nh->fib_nh_weight, AF_INET6,
++				    0) < 0)
+ 			goto nla_put_failure;
+ 
+ 		list_for_each_entry_safe(sibling, next_sibling,
+ 					 &rt->fib6_siblings, fib6_siblings) {
+ 			if (fib_add_nexthop(skb, &sibling->fib6_nh->nh_common,
+ 					    sibling->fib6_nh->fib_nh_weight,
+-					    AF_INET6) < 0)
++					    AF_INET6, 0) < 0)
+ 				goto nla_put_failure;
+ 		}
+ 
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index a448b6cd47273..1943ae5103eb6 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1288,7 +1288,7 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	int addr_len = msg->msg_namelen;
+ 	bool connected = false;
+ 	int ulen = len;
+-	int corkreq = up->corkflag || msg->msg_flags&MSG_MORE;
++	int corkreq = READ_ONCE(up->corkflag) || msg->msg_flags&MSG_MORE;
+ 	int err;
+ 	int is_udplite = IS_UDPLITE(sk);
+ 	int (*getfrag)(void *, char *, int, int, int, struct sk_buff *);
+diff --git a/net/mac80211/mesh_ps.c b/net/mac80211/mesh_ps.c
+index 204830a55240b..3fbd0b9ff9135 100644
+--- a/net/mac80211/mesh_ps.c
++++ b/net/mac80211/mesh_ps.c
+@@ -2,6 +2,7 @@
+ /*
+  * Copyright 2012-2013, Marco Porsch <marco.porsch@s2005.tu-chemnitz.de>
+  * Copyright 2012-2013, cozybit Inc.
++ * Copyright (C) 2021 Intel Corporation
+  */
+ 
+ #include "mesh.h"
+@@ -588,7 +589,7 @@ void ieee80211_mps_frame_release(struct sta_info *sta,
+ 
+ 	/* only transmit to PS STA with announced, non-zero awake window */
+ 	if (test_sta_flag(sta, WLAN_STA_PS_STA) &&
+-	    (!elems->awake_window || !le16_to_cpu(*elems->awake_window)))
++	    (!elems->awake_window || !get_unaligned_le16(elems->awake_window)))
+ 		return;
+ 
+ 	if (!test_sta_flag(sta, WLAN_STA_MPSP_OWNER))
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 673ad3cf2c3ab..bbbcc678c655c 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -2177,7 +2177,11 @@ bool ieee80211_parse_tx_radiotap(struct sk_buff *skb,
+ 			}
+ 
+ 			vht_mcs = iterator.this_arg[4] >> 4;
++			if (vht_mcs > 11)
++				vht_mcs = 0;
+ 			vht_nss = iterator.this_arg[4] & 0xF;
++			if (!vht_nss || vht_nss > 8)
++				vht_nss = 1;
+ 			break;
+ 
+ 		/*
+@@ -3365,6 +3369,14 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head))
+ 		goto out;
+ 
++	/* If n == 2, the "while (*frag_tail)" loop above didn't execute
++	 * and  frag_tail should be &skb_shinfo(head)->frag_list.
++	 * However, ieee80211_amsdu_prepare_head() can reallocate it.
++	 * Reload frag_tail to have it pointing to the correct place.
++	 */
++	if (n == 2)
++		frag_tail = &skb_shinfo(head)->frag_list;
++
+ 	/*
+ 	 * Pad out the previous subframe to a multiple of 4 by adding the
+ 	 * padding to the next one, that's being added. Note that head->len
+diff --git a/net/mac80211/wpa.c b/net/mac80211/wpa.c
+index bca47fad5a162..4eed23e276104 100644
+--- a/net/mac80211/wpa.c
++++ b/net/mac80211/wpa.c
+@@ -520,6 +520,9 @@ ieee80211_crypto_ccmp_decrypt(struct ieee80211_rx_data *rx,
+ 			return RX_DROP_UNUSABLE;
+ 	}
+ 
++	/* reload hdr - skb might have been reallocated */
++	hdr = (void *)rx->skb->data;
++
+ 	data_len = skb->len - hdrlen - IEEE80211_CCMP_HDR_LEN - mic_len;
+ 	if (!rx->sta || data_len < 0)
+ 		return RX_DROP_UNUSABLE;
+@@ -749,6 +752,9 @@ ieee80211_crypto_gcmp_decrypt(struct ieee80211_rx_data *rx)
+ 			return RX_DROP_UNUSABLE;
+ 	}
+ 
++	/* reload hdr - skb might have been reallocated */
++	hdr = (void *)rx->skb->data;
++
+ 	data_len = skb->len - hdrlen - IEEE80211_GCMP_HDR_LEN - mic_len;
+ 	if (!rx->sta || data_len < 0)
+ 		return RX_DROP_UNUSABLE;
+diff --git a/net/mptcp/mptcp_diag.c b/net/mptcp/mptcp_diag.c
+index 5f390a97f556d..f1af3f44875ed 100644
+--- a/net/mptcp/mptcp_diag.c
++++ b/net/mptcp/mptcp_diag.c
+@@ -36,7 +36,7 @@ static int mptcp_diag_dump_one(struct netlink_callback *cb,
+ 	struct sock *sk;
+ 
+ 	net = sock_net(in_skb->sk);
+-	msk = mptcp_token_get_sock(req->id.idiag_cookie[0]);
++	msk = mptcp_token_get_sock(net, req->id.idiag_cookie[0]);
+ 	if (!msk)
+ 		goto out_nosk;
+ 
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 13ab89dc19141..3e5af8397434a 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -424,7 +424,7 @@ int mptcp_token_new_connect(struct sock *sk);
+ void mptcp_token_accept(struct mptcp_subflow_request_sock *r,
+ 			struct mptcp_sock *msk);
+ bool mptcp_token_exists(u32 token);
+-struct mptcp_sock *mptcp_token_get_sock(u32 token);
++struct mptcp_sock *mptcp_token_get_sock(struct net *net, u32 token);
+ struct mptcp_sock *mptcp_token_iter_next(const struct net *net, long *s_slot,
+ 					 long *s_num);
+ void mptcp_token_destroy(struct mptcp_sock *msk);
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index bba5696fee36d..2e92384909241 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -69,7 +69,7 @@ static struct mptcp_sock *subflow_token_join_request(struct request_sock *req,
+ 	struct mptcp_sock *msk;
+ 	int local_id;
+ 
+-	msk = mptcp_token_get_sock(subflow_req->token);
++	msk = mptcp_token_get_sock(sock_net(req_to_sk(req)), subflow_req->token);
+ 	if (!msk) {
+ 		SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINNOTOKEN);
+ 		return NULL;
+diff --git a/net/mptcp/syncookies.c b/net/mptcp/syncookies.c
+index 37127781aee98..7f22526346a7e 100644
+--- a/net/mptcp/syncookies.c
++++ b/net/mptcp/syncookies.c
+@@ -108,18 +108,12 @@ bool mptcp_token_join_cookie_init_state(struct mptcp_subflow_request_sock *subfl
+ 
+ 	e->valid = 0;
+ 
+-	msk = mptcp_token_get_sock(e->token);
++	msk = mptcp_token_get_sock(net, e->token);
+ 	if (!msk) {
+ 		spin_unlock_bh(&join_entry_locks[i]);
+ 		return false;
+ 	}
+ 
+-	/* If this fails, the token got re-used in the mean time by another
+-	 * mptcp socket in a different netns, i.e. entry is outdated.
+-	 */
+-	if (!net_eq(sock_net((struct sock *)msk), net))
+-		goto err_put;
+-
+ 	subflow_req->remote_nonce = e->remote_nonce;
+ 	subflow_req->local_nonce = e->local_nonce;
+ 	subflow_req->backup = e->backup;
+@@ -128,11 +122,6 @@ bool mptcp_token_join_cookie_init_state(struct mptcp_subflow_request_sock *subfl
+ 	subflow_req->msk = msk;
+ 	spin_unlock_bh(&join_entry_locks[i]);
+ 	return true;
+-
+-err_put:
+-	spin_unlock_bh(&join_entry_locks[i]);
+-	sock_put((struct sock *)msk);
+-	return false;
+ }
+ 
+ void __init mptcp_join_cookie_init(void)
+diff --git a/net/mptcp/token.c b/net/mptcp/token.c
+index 0691a4883f3ab..f0d656bf27ada 100644
+--- a/net/mptcp/token.c
++++ b/net/mptcp/token.c
+@@ -232,6 +232,7 @@ found:
+ 
+ /**
+  * mptcp_token_get_sock - retrieve mptcp connection sock using its token
++ * @net: restrict to this namespace
+  * @token: token of the mptcp connection to retrieve
+  *
+  * This function returns the mptcp connection structure with the given token.
+@@ -239,7 +240,7 @@ found:
+  *
+  * returns NULL if no connection with the given token value exists.
+  */
+-struct mptcp_sock *mptcp_token_get_sock(u32 token)
++struct mptcp_sock *mptcp_token_get_sock(struct net *net, u32 token)
+ {
+ 	struct hlist_nulls_node *pos;
+ 	struct token_bucket *bucket;
+@@ -252,11 +253,15 @@ struct mptcp_sock *mptcp_token_get_sock(u32 token)
+ again:
+ 	sk_nulls_for_each_rcu(sk, pos, &bucket->msk_chain) {
+ 		msk = mptcp_sk(sk);
+-		if (READ_ONCE(msk->token) != token)
++		if (READ_ONCE(msk->token) != token ||
++		    !net_eq(sock_net(sk), net))
+ 			continue;
++
+ 		if (!refcount_inc_not_zero(&sk->sk_refcnt))
+ 			goto not_found;
+-		if (READ_ONCE(msk->token) != token) {
++
++		if (READ_ONCE(msk->token) != token ||
++		    !net_eq(sock_net(sk), net)) {
+ 			sock_put(sk);
+ 			goto again;
+ 		}
+diff --git a/net/mptcp/token_test.c b/net/mptcp/token_test.c
+index e1bd6f0a0676f..5d984bec1cd86 100644
+--- a/net/mptcp/token_test.c
++++ b/net/mptcp/token_test.c
+@@ -11,6 +11,7 @@ static struct mptcp_subflow_request_sock *build_req_sock(struct kunit *test)
+ 			    GFP_USER);
+ 	KUNIT_EXPECT_NOT_ERR_OR_NULL(test, req);
+ 	mptcp_token_init_request((struct request_sock *)req);
++	sock_net_set((struct sock *)req, &init_net);
+ 	return req;
+ }
+ 
+@@ -22,7 +23,7 @@ static void mptcp_token_test_req_basic(struct kunit *test)
+ 	KUNIT_ASSERT_EQ(test, 0,
+ 			mptcp_token_new_request((struct request_sock *)req));
+ 	KUNIT_EXPECT_NE(test, 0, (int)req->token);
+-	KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(req->token));
++	KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(&init_net, req->token));
+ 
+ 	/* cleanup */
+ 	mptcp_token_destroy_request((struct request_sock *)req);
+@@ -55,6 +56,7 @@ static struct mptcp_sock *build_msk(struct kunit *test)
+ 	msk = kunit_kzalloc(test, sizeof(struct mptcp_sock), GFP_USER);
+ 	KUNIT_EXPECT_NOT_ERR_OR_NULL(test, msk);
+ 	refcount_set(&((struct sock *)msk)->sk_refcnt, 1);
++	sock_net_set((struct sock *)msk, &init_net);
+ 	return msk;
+ }
+ 
+@@ -74,11 +76,11 @@ static void mptcp_token_test_msk_basic(struct kunit *test)
+ 			mptcp_token_new_connect((struct sock *)icsk));
+ 	KUNIT_EXPECT_NE(test, 0, (int)ctx->token);
+ 	KUNIT_EXPECT_EQ(test, ctx->token, msk->token);
+-	KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(ctx->token));
++	KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(&init_net, ctx->token));
+ 	KUNIT_EXPECT_EQ(test, 2, (int)refcount_read(&sk->sk_refcnt));
+ 
+ 	mptcp_token_destroy(msk);
+-	KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(ctx->token));
++	KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(&init_net, ctx->token));
+ }
+ 
+ static void mptcp_token_test_accept(struct kunit *test)
+@@ -90,11 +92,11 @@ static void mptcp_token_test_accept(struct kunit *test)
+ 			mptcp_token_new_request((struct request_sock *)req));
+ 	msk->token = req->token;
+ 	mptcp_token_accept(req, msk);
+-	KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(msk->token));
++	KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(&init_net, msk->token));
+ 
+ 	/* this is now a no-op */
+ 	mptcp_token_destroy_request((struct request_sock *)req);
+-	KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(msk->token));
++	KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(&init_net, msk->token));
+ 
+ 	/* cleanup */
+ 	mptcp_token_destroy(msk);
+@@ -116,7 +118,7 @@ static void mptcp_token_test_destroyed(struct kunit *test)
+ 
+ 	/* simulate race on removal */
+ 	refcount_set(&sk->sk_refcnt, 0);
+-	KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(msk->token));
++	KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(&init_net, msk->token));
+ 
+ 	/* cleanup */
+ 	mptcp_token_destroy(msk);
+diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
+index 7cd1d31fb2b88..b0670388da49a 100644
+--- a/net/netfilter/ipset/ip_set_hash_gen.h
++++ b/net/netfilter/ipset/ip_set_hash_gen.h
+@@ -132,11 +132,11 @@ htable_size(u8 hbits)
+ {
+ 	size_t hsize;
+ 
+-	/* We must fit both into u32 in jhash and size_t */
++	/* We must fit both into u32 in jhash and INT_MAX in kvmalloc_node() */
+ 	if (hbits > 31)
+ 		return 0;
+ 	hsize = jhash_size(hbits);
+-	if ((((size_t)-1) - sizeof(struct htable)) / sizeof(struct hbucket *)
++	if ((INT_MAX - sizeof(struct htable)) / sizeof(struct hbucket *)
+ 	    < hsize)
+ 		return 0;
+ 
+diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
+index c100c6b112c81..2c467c422dc63 100644
+--- a/net/netfilter/ipvs/ip_vs_conn.c
++++ b/net/netfilter/ipvs/ip_vs_conn.c
+@@ -1468,6 +1468,10 @@ int __init ip_vs_conn_init(void)
+ 	int idx;
+ 
+ 	/* Compute size and mask */
++	if (ip_vs_conn_tab_bits < 8 || ip_vs_conn_tab_bits > 20) {
++		pr_info("conn_tab_bits not in [8, 20]. Using default value\n");
++		ip_vs_conn_tab_bits = CONFIG_IP_VS_TAB_BITS;
++	}
+ 	ip_vs_conn_tab_size = 1 << ip_vs_conn_tab_bits;
+ 	ip_vs_conn_tab_mask = ip_vs_conn_tab_size - 1;
+ 
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 54430a34d2f64..6a66e99459351 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -75,6 +75,9 @@ static __read_mostly struct kmem_cache *nf_conntrack_cachep;
+ static DEFINE_SPINLOCK(nf_conntrack_locks_all_lock);
+ static __read_mostly bool nf_conntrack_locks_all;
+ 
++/* serialize hash resizes and nf_ct_iterate_cleanup */
++static DEFINE_MUTEX(nf_conntrack_mutex);
++
+ #define GC_SCAN_INTERVAL	(120u * HZ)
+ #define GC_SCAN_MAX_DURATION	msecs_to_jiffies(10)
+ 
+@@ -2173,28 +2176,31 @@ get_next_corpse(int (*iter)(struct nf_conn *i, void *data),
+ 	spinlock_t *lockp;
+ 
+ 	for (; *bucket < nf_conntrack_htable_size; (*bucket)++) {
++		struct hlist_nulls_head *hslot = &nf_conntrack_hash[*bucket];
++
++		if (hlist_nulls_empty(hslot))
++			continue;
++
+ 		lockp = &nf_conntrack_locks[*bucket % CONNTRACK_LOCKS];
+ 		local_bh_disable();
+ 		nf_conntrack_lock(lockp);
+-		if (*bucket < nf_conntrack_htable_size) {
+-			hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[*bucket], hnnode) {
+-				if (NF_CT_DIRECTION(h) != IP_CT_DIR_REPLY)
+-					continue;
+-				/* All nf_conn objects are added to hash table twice, one
+-				 * for original direction tuple, once for the reply tuple.
+-				 *
+-				 * Exception: In the IPS_NAT_CLASH case, only the reply
+-				 * tuple is added (the original tuple already existed for
+-				 * a different object).
+-				 *
+-				 * We only need to call the iterator once for each
+-				 * conntrack, so we just use the 'reply' direction
+-				 * tuple while iterating.
+-				 */
+-				ct = nf_ct_tuplehash_to_ctrack(h);
+-				if (iter(ct, data))
+-					goto found;
+-			}
++		hlist_nulls_for_each_entry(h, n, hslot, hnnode) {
++			if (NF_CT_DIRECTION(h) != IP_CT_DIR_REPLY)
++				continue;
++			/* All nf_conn objects are added to hash table twice, one
++			 * for original direction tuple, once for the reply tuple.
++			 *
++			 * Exception: In the IPS_NAT_CLASH case, only the reply
++			 * tuple is added (the original tuple already existed for
++			 * a different object).
++			 *
++			 * We only need to call the iterator once for each
++			 * conntrack, so we just use the 'reply' direction
++			 * tuple while iterating.
++			 */
++			ct = nf_ct_tuplehash_to_ctrack(h);
++			if (iter(ct, data))
++				goto found;
+ 		}
+ 		spin_unlock(lockp);
+ 		local_bh_enable();
+@@ -2212,26 +2218,20 @@ found:
+ static void nf_ct_iterate_cleanup(int (*iter)(struct nf_conn *i, void *data),
+ 				  void *data, u32 portid, int report)
+ {
+-	unsigned int bucket = 0, sequence;
++	unsigned int bucket = 0;
+ 	struct nf_conn *ct;
+ 
+ 	might_sleep();
+ 
+-	for (;;) {
+-		sequence = read_seqcount_begin(&nf_conntrack_generation);
+-
+-		while ((ct = get_next_corpse(iter, data, &bucket)) != NULL) {
+-			/* Time to push up daises... */
++	mutex_lock(&nf_conntrack_mutex);
++	while ((ct = get_next_corpse(iter, data, &bucket)) != NULL) {
++		/* Time to push up daises... */
+ 
+-			nf_ct_delete(ct, portid, report);
+-			nf_ct_put(ct);
+-			cond_resched();
+-		}
+-
+-		if (!read_seqcount_retry(&nf_conntrack_generation, sequence))
+-			break;
+-		bucket = 0;
++		nf_ct_delete(ct, portid, report);
++		nf_ct_put(ct);
++		cond_resched();
+ 	}
++	mutex_unlock(&nf_conntrack_mutex);
+ }
+ 
+ struct iter_data {
+@@ -2461,8 +2461,10 @@ int nf_conntrack_hash_resize(unsigned int hashsize)
+ 	if (!hash)
+ 		return -ENOMEM;
+ 
++	mutex_lock(&nf_conntrack_mutex);
+ 	old_size = nf_conntrack_htable_size;
+ 	if (old_size == hashsize) {
++		mutex_unlock(&nf_conntrack_mutex);
+ 		kvfree(hash);
+ 		return 0;
+ 	}
+@@ -2498,6 +2500,8 @@ int nf_conntrack_hash_resize(unsigned int hashsize)
+ 	nf_conntrack_all_unlock();
+ 	local_bh_enable();
+ 
++	mutex_unlock(&nf_conntrack_mutex);
++
+ 	synchronize_net();
+ 	kvfree(old_hash);
+ 	return 0;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index c605a3e713e76..b781ba97c474e 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4265,7 +4265,7 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 	if (ops->privsize != NULL)
+ 		size = ops->privsize(nla, &desc);
+ 	alloc_size = sizeof(*set) + size + udlen;
+-	if (alloc_size < size)
++	if (alloc_size < size || alloc_size > INT_MAX)
+ 		return -ENOMEM;
+ 	set = kvzalloc(alloc_size, GFP_KERNEL);
+ 	if (!set)
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index a5212a3f86e2f..8ff6945b9f8f4 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -2169,18 +2169,24 @@ static void fl_walk(struct tcf_proto *tp, struct tcf_walker *arg,
+ 
+ 	arg->count = arg->skip;
+ 
++	rcu_read_lock();
+ 	idr_for_each_entry_continue_ul(&head->handle_idr, f, tmp, id) {
+ 		/* don't return filters that are being deleted */
+ 		if (!refcount_inc_not_zero(&f->refcnt))
+ 			continue;
++		rcu_read_unlock();
++
+ 		if (arg->fn(tp, f, arg) < 0) {
+ 			__fl_put(f);
+ 			arg->stop = 1;
++			rcu_read_lock();
+ 			break;
+ 		}
+ 		__fl_put(f);
+ 		arg->count++;
++		rcu_read_lock();
+ 	}
++	rcu_read_unlock();
+ 	arg->cookie = id;
+ }
+ 
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 49c49a4d203f0..34494a0b28bd0 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -677,7 +677,7 @@ static int sctp_rcv_ootb(struct sk_buff *skb)
+ 		ch = skb_header_pointer(skb, offset, sizeof(*ch), &_ch);
+ 
+ 		/* Break out if chunk length is less then minimal. */
+-		if (ntohs(ch->length) < sizeof(_ch))
++		if (!ch || ntohs(ch->length) < sizeof(_ch))
+ 			break;
+ 
+ 		ch_end = offset + SCTP_PAD4(ntohs(ch->length));
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index d5c0ae34b1e45..b7edca89e0ba9 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -593,20 +593,42 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ 
+ static void init_peercred(struct sock *sk)
+ {
+-	put_pid(sk->sk_peer_pid);
+-	if (sk->sk_peer_cred)
+-		put_cred(sk->sk_peer_cred);
++	const struct cred *old_cred;
++	struct pid *old_pid;
++
++	spin_lock(&sk->sk_peer_lock);
++	old_pid = sk->sk_peer_pid;
++	old_cred = sk->sk_peer_cred;
+ 	sk->sk_peer_pid  = get_pid(task_tgid(current));
+ 	sk->sk_peer_cred = get_current_cred();
++	spin_unlock(&sk->sk_peer_lock);
++
++	put_pid(old_pid);
++	put_cred(old_cred);
+ }
+ 
+ static void copy_peercred(struct sock *sk, struct sock *peersk)
+ {
+-	put_pid(sk->sk_peer_pid);
+-	if (sk->sk_peer_cred)
+-		put_cred(sk->sk_peer_cred);
++	const struct cred *old_cred;
++	struct pid *old_pid;
++
++	if (sk < peersk) {
++		spin_lock(&sk->sk_peer_lock);
++		spin_lock_nested(&peersk->sk_peer_lock, SINGLE_DEPTH_NESTING);
++	} else {
++		spin_lock(&peersk->sk_peer_lock);
++		spin_lock_nested(&sk->sk_peer_lock, SINGLE_DEPTH_NESTING);
++	}
++	old_pid = sk->sk_peer_pid;
++	old_cred = sk->sk_peer_cred;
+ 	sk->sk_peer_pid  = get_pid(peersk->sk_peer_pid);
+ 	sk->sk_peer_cred = get_cred(peersk->sk_peer_cred);
++
++	spin_unlock(&sk->sk_peer_lock);
++	spin_unlock(&peersk->sk_peer_lock);
++
++	put_pid(old_pid);
++	put_cred(old_cred);
+ }
+ 
+ static int unix_listen(struct socket *sock, int backlog)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f47f639980dbb..9f37adb2b4d09 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6375,6 +6375,20 @@ static void alc_fixup_thinkpad_acpi(struct hda_codec *codec,
+ 	hda_fixup_thinkpad_acpi(codec, fix, action);
+ }
+ 
++/* Fixup for Lenovo Legion 15IMHg05 speaker output on headset removal. */
++static void alc287_fixup_legion_15imhg05_speakers(struct hda_codec *codec,
++						  const struct hda_fixup *fix,
++						  int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		spec->gen.suppress_auto_mute = 1;
++		break;
++	}
++}
++
+ /* for alc295_fixup_hp_top_speakers */
+ #include "hp_x360_helper.c"
+ 
+@@ -6591,6 +6605,10 @@ enum {
+ 	ALC623_FIXUP_LENOVO_THINKSTATION_P340,
+ 	ALC255_FIXUP_ACER_HEADPHONE_AND_MIC,
+ 	ALC236_FIXUP_HP_LIMIT_INT_MIC_BOOST,
++	ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS,
++	ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
++	ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
++	ALC287_FIXUP_13S_GEN2_SPEAKERS
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8175,6 +8193,113 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
+ 	},
++	[ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS] = {
++		.type = HDA_FIXUP_VERBS,
++		//.v.verbs = legion_15imhg05_coefs,
++		.v.verbs = (const struct hda_verb[]) {
++			 // set left speaker Legion 7i.
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x41 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x1a },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++			 // set right speaker Legion 7i.
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x42 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x2a },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++			 {}
++		},
++		.chained = true,
++		.chain_id = ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
++	},
++	[ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc287_fixup_legion_15imhg05_speakers,
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE,
++	},
++	[ALC287_FIXUP_YOGA7_14ITL_SPEAKERS] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			 // set left speaker Yoga 7i.
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x41 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x1a },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++			 // set right speaker Yoga 7i.
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x46 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x2a },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++			 {}
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE,
++	},
++	[ALC287_FIXUP_13S_GEN2_SPEAKERS] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x41 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x42 },
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++			{}
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8567,6 +8692,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ 	SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP),
++	SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x3852, "Lenovo Yoga 7 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x3853, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 148c095df27b1..f4b380d6aecf8 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -2528,9 +2528,20 @@ static struct snd_soc_dapm_widget *dapm_find_widget(
+ {
+ 	struct snd_soc_dapm_widget *w;
+ 	struct snd_soc_dapm_widget *fallback = NULL;
++	char prefixed_pin[80];
++	const char *pin_name;
++	const char *prefix = soc_dapm_prefix(dapm);
++
++	if (prefix) {
++		snprintf(prefixed_pin, sizeof(prefixed_pin), "%s %s",
++			 prefix, pin);
++		pin_name = prefixed_pin;
++	} else {
++		pin_name = pin;
++	}
+ 
+ 	for_each_card_widgets(dapm->card, w) {
+-		if (!strcmp(w->name, pin)) {
++		if (!strcmp(w->name, pin_name)) {
+ 			if (w->dapm == dapm)
+ 				return w;
+ 			else
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index b5322d60068c4..1d91555333608 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -326,7 +326,8 @@ $(TRUNNER_BPF_OBJS): $(TRUNNER_OUTPUT)/%.o:				\
+ 		     $(TRUNNER_BPF_PROGS_DIR)/%.c			\
+ 		     $(TRUNNER_BPF_PROGS_DIR)/*.h			\
+ 		     $$(INCLUDE_DIR)/vmlinux.h				\
+-		     $(wildcard $(BPFDIR)/bpf_*.h) | $(TRUNNER_OUTPUT)
++		     $(wildcard $(BPFDIR)/bpf_*.h)			\
++		     | $(TRUNNER_OUTPUT) $$(BPFOBJ)
+ 	$$(call $(TRUNNER_BPF_BUILD_RULE),$$<,$$@,			\
+ 					  $(TRUNNER_BPF_CFLAGS),	\
+ 					  $(TRUNNER_BPF_LDFLAGS))
+diff --git a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
+index 59ea56945e6cd..b497bb85b667f 100755
+--- a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
++++ b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
+@@ -112,6 +112,14 @@ setup()
+ 	ip netns add "${NS2}"
+ 	ip netns add "${NS3}"
+ 
++	# rp_filter gets confused by what these tests are doing, so disable it
++	ip netns exec ${NS1} sysctl -wq net.ipv4.conf.all.rp_filter=0
++	ip netns exec ${NS2} sysctl -wq net.ipv4.conf.all.rp_filter=0
++	ip netns exec ${NS3} sysctl -wq net.ipv4.conf.all.rp_filter=0
++	ip netns exec ${NS1} sysctl -wq net.ipv4.conf.default.rp_filter=0
++	ip netns exec ${NS2} sysctl -wq net.ipv4.conf.default.rp_filter=0
++	ip netns exec ${NS3} sysctl -wq net.ipv4.conf.default.rp_filter=0
++
+ 	ip link add veth1 type veth peer name veth2
+ 	ip link add veth3 type veth peer name veth4
+ 	ip link add veth5 type veth peer name veth6
+@@ -236,11 +244,6 @@ setup()
+ 	ip -netns ${NS1} -6 route add ${IPv6_GRE}/128 dev veth5 via ${IPv6_6} ${VRF}
+ 	ip -netns ${NS2} -6 route add ${IPv6_GRE}/128 dev veth7 via ${IPv6_8} ${VRF}
+ 
+-	# rp_filter gets confused by what these tests are doing, so disable it
+-	ip netns exec ${NS1} sysctl -wq net.ipv4.conf.all.rp_filter=0
+-	ip netns exec ${NS2} sysctl -wq net.ipv4.conf.all.rp_filter=0
+-	ip netns exec ${NS3} sysctl -wq net.ipv4.conf.all.rp_filter=0
+-
+ 	TMPFILE=$(mktemp /tmp/test_lwt_ip_encap.XXXXXX)
+ 
+ 	sleep 1  # reduce flakiness


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-10-09 21:31 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-10-09 21:31 UTC (permalink / raw
  To: gentoo-commits

commit:     b2240d9c74060378b2e4577816eda3349e7db4dd
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct  9 21:31:18 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Oct  9 21:31:18 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b2240d9c

Linux patch 5.10.72

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1071_linux-5.10.72.patch | 1114 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1118 insertions(+)

diff --git a/0000_README b/0000_README
index ef33fa6..42e4628 100644
--- a/0000_README
+++ b/0000_README
@@ -327,6 +327,10 @@ Patch:  1070_linux-5.10.71.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.71
 
+Patch:  1071_linux-5.10.72.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.72
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1071_linux-5.10.72.patch b/1071_linux-5.10.72.patch
new file mode 100644
index 0000000..229e951
--- /dev/null
+++ b/1071_linux-5.10.72.patch
@@ -0,0 +1,1114 @@
+diff --git a/Makefile b/Makefile
+index 1637ff7c1b751..48211c8503d4e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 71
++SUBLEVEL = 72
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/sparc/lib/iomap.c b/arch/sparc/lib/iomap.c
+index c9da9f139694d..f3a8cd491ce0d 100644
+--- a/arch/sparc/lib/iomap.c
++++ b/arch/sparc/lib/iomap.c
+@@ -19,8 +19,10 @@ void ioport_unmap(void __iomem *addr)
+ EXPORT_SYMBOL(ioport_map);
+ EXPORT_SYMBOL(ioport_unmap);
+ 
++#ifdef CONFIG_PCI
+ void pci_iounmap(struct pci_dev *dev, void __iomem * addr)
+ {
+ 	/* nothing to do */
+ }
+ EXPORT_SYMBOL(pci_iounmap);
++#endif
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index e6db1a1f22d7d..1f5d96ba4866d 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -2284,6 +2284,7 @@ static int x86_pmu_event_init(struct perf_event *event)
+ 	if (err) {
+ 		if (event->destroy)
+ 			event->destroy(event);
++		event->destroy = NULL;
+ 	}
+ 
+ 	if (READ_ONCE(x86_pmu.attr_rdpmc) &&
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 1c23aee3778c3..5e1d7396a6b8a 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1497,6 +1497,8 @@ static void svm_clear_vintr(struct vcpu_svm *svm)
+ 			(svm->nested.ctl.int_ctl & V_TPR_MASK));
+ 		svm->vmcb->control.int_ctl |= svm->nested.ctl.int_ctl &
+ 			V_IRQ_INJECTION_BITS_MASK;
++
++		svm->vmcb->control.int_vector = svm->nested.ctl.int_vector;
+ 	}
+ 
+ 	vmcb_mark_dirty(svm->vmcb, VMCB_INTR);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index d65da3b5837b2..b885063dc393f 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1250,6 +1250,13 @@ static const u32 msrs_to_save_all[] = {
+ 	MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
+ 	MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
+ 	MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
++
++	MSR_K7_EVNTSEL0, MSR_K7_EVNTSEL1, MSR_K7_EVNTSEL2, MSR_K7_EVNTSEL3,
++	MSR_K7_PERFCTR0, MSR_K7_PERFCTR1, MSR_K7_PERFCTR2, MSR_K7_PERFCTR3,
++	MSR_F15H_PERF_CTL0, MSR_F15H_PERF_CTL1, MSR_F15H_PERF_CTL2,
++	MSR_F15H_PERF_CTL3, MSR_F15H_PERF_CTL4, MSR_F15H_PERF_CTL5,
++	MSR_F15H_PERF_CTR0, MSR_F15H_PERF_CTR1, MSR_F15H_PERF_CTR2,
++	MSR_F15H_PERF_CTR3, MSR_F15H_PERF_CTR4, MSR_F15H_PERF_CTR5,
+ };
+ 
+ static u32 msrs_to_save[ARRAY_SIZE(msrs_to_save_all)];
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 0e6e73b8023fc..8916163d508e0 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -2199,6 +2199,25 @@ static void ata_dev_config_ncq_prio(struct ata_device *dev)
+ 
+ }
+ 
++static bool ata_dev_check_adapter(struct ata_device *dev,
++				  unsigned short vendor_id)
++{
++	struct pci_dev *pcidev = NULL;
++	struct device *parent_dev = NULL;
++
++	for (parent_dev = dev->tdev.parent; parent_dev != NULL;
++	     parent_dev = parent_dev->parent) {
++		if (dev_is_pci(parent_dev)) {
++			pcidev = to_pci_dev(parent_dev);
++			if (pcidev->vendor == vendor_id)
++				return true;
++			break;
++		}
++	}
++
++	return false;
++}
++
+ static int ata_dev_config_ncq(struct ata_device *dev,
+ 			       char *desc, size_t desc_sz)
+ {
+@@ -2217,6 +2236,13 @@ static int ata_dev_config_ncq(struct ata_device *dev,
+ 		snprintf(desc, desc_sz, "NCQ (not used)");
+ 		return 0;
+ 	}
++
++	if (dev->horkage & ATA_HORKAGE_NO_NCQ_ON_ATI &&
++	    ata_dev_check_adapter(dev, PCI_VENDOR_ID_ATI)) {
++		snprintf(desc, desc_sz, "NCQ (not used)");
++		return 0;
++	}
++
+ 	if (ap->flags & ATA_FLAG_NCQ) {
+ 		hdepth = min(ap->scsi_host->can_queue, ATA_MAX_QUEUE);
+ 		dev->flags |= ATA_DFLAG_NCQ;
+@@ -3951,9 +3977,11 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	{ "Samsung SSD 850*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "Samsung SSD 860*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+-						ATA_HORKAGE_ZERO_AFTER_TRIM, },
++						ATA_HORKAGE_ZERO_AFTER_TRIM |
++						ATA_HORKAGE_NO_NCQ_ON_ATI, },
+ 	{ "Samsung SSD 870*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+-						ATA_HORKAGE_ZERO_AFTER_TRIM, },
++						ATA_HORKAGE_ZERO_AFTER_TRIM |
++						ATA_HORKAGE_NO_NCQ_ON_ATI, },
+ 	{ "FCCT*M500*",			NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 
+@@ -6108,6 +6136,8 @@ static int __init ata_parse_force_one(char **cur,
+ 		{ "ncq",	.horkage_off	= ATA_HORKAGE_NONCQ },
+ 		{ "noncqtrim",	.horkage_on	= ATA_HORKAGE_NO_NCQ_TRIM },
+ 		{ "ncqtrim",	.horkage_off	= ATA_HORKAGE_NO_NCQ_TRIM },
++		{ "noncqati",	.horkage_on	= ATA_HORKAGE_NO_NCQ_ON_ATI },
++		{ "ncqati",	.horkage_off	= ATA_HORKAGE_NO_NCQ_ON_ATI },
+ 		{ "dump_id",	.horkage_on	= ATA_HORKAGE_DUMP_ID },
+ 		{ "pio0",	.xfer_mask	= 1 << (ATA_SHIFT_PIO + 0) },
+ 		{ "pio1",	.xfer_mask	= 1 << (ATA_SHIFT_PIO + 1) },
+diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
+index 6053245a4754c..176f5f06432d1 100644
+--- a/drivers/irqchip/irq-gic.c
++++ b/drivers/irqchip/irq-gic.c
+@@ -107,6 +107,8 @@ static DEFINE_RAW_SPINLOCK(cpu_map_lock);
+ 
+ #endif
+ 
++static DEFINE_STATIC_KEY_FALSE(needs_rmw_access);
++
+ /*
+  * The GIC mapping of CPU interfaces does not necessarily match
+  * the logical CPU numbering.  Let's use a mapping as returned
+@@ -777,6 +779,25 @@ static int gic_pm_init(struct gic_chip_data *gic)
+ #endif
+ 
+ #ifdef CONFIG_SMP
++static void rmw_writeb(u8 bval, void __iomem *addr)
++{
++	static DEFINE_RAW_SPINLOCK(rmw_lock);
++	unsigned long offset = (unsigned long)addr & 3UL;
++	unsigned long shift = offset * 8;
++	unsigned long flags;
++	u32 val;
++
++	raw_spin_lock_irqsave(&rmw_lock, flags);
++
++	addr -= offset;
++	val = readl_relaxed(addr);
++	val &= ~GENMASK(shift + 7, shift);
++	val |= bval << shift;
++	writel_relaxed(val, addr);
++
++	raw_spin_unlock_irqrestore(&rmw_lock, flags);
++}
++
+ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
+ 			    bool force)
+ {
+@@ -791,7 +812,10 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
+ 	if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
+ 		return -EINVAL;
+ 
+-	writeb_relaxed(gic_cpu_map[cpu], reg);
++	if (static_branch_unlikely(&needs_rmw_access))
++		rmw_writeb(gic_cpu_map[cpu], reg);
++	else
++		writeb_relaxed(gic_cpu_map[cpu], reg);
+ 	irq_data_update_effective_affinity(d, cpumask_of(cpu));
+ 
+ 	return IRQ_SET_MASK_OK_DONE;
+@@ -1384,6 +1408,30 @@ static bool gic_check_eoimode(struct device_node *node, void __iomem **base)
+ 	return true;
+ }
+ 
++static bool gic_enable_rmw_access(void *data)
++{
++	/*
++	 * The EMEV2 class of machines has a broken interconnect, and
++	 * locks up on accesses that are less than 32bit. So far, only
++	 * the affinity setting requires it.
++	 */
++	if (of_machine_is_compatible("renesas,emev2")) {
++		static_branch_enable(&needs_rmw_access);
++		return true;
++	}
++
++	return false;
++}
++
++static const struct gic_quirk gic_quirks[] = {
++	{
++		.desc		= "broken byte access",
++		.compatible	= "arm,pl390",
++		.init		= gic_enable_rmw_access,
++	},
++	{ },
++};
++
+ static int gic_of_setup(struct gic_chip_data *gic, struct device_node *node)
+ {
+ 	if (!gic || !node)
+@@ -1400,6 +1448,8 @@ static int gic_of_setup(struct gic_chip_data *gic, struct device_node *node)
+ 	if (of_property_read_u32(node, "cpu-offset", &gic->percpu_offset))
+ 		gic->percpu_offset = 0;
+ 
++	gic_enable_of_quirks(node, gic_quirks, gic);
++
+ 	return 0;
+ 
+ error:
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi_security.c b/drivers/misc/habanalabs/gaudi/gaudi_security.c
+index 2d7add0e5bcc0..9343a81d31221 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi_security.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi_security.c
+@@ -8,16 +8,21 @@
+ #include "gaudiP.h"
+ #include "../include/gaudi/asic_reg/gaudi_regs.h"
+ 
+-#define GAUDI_NUMBER_OF_RR_REGS		24
+-#define GAUDI_NUMBER_OF_LBW_RANGES	12
++#define GAUDI_NUMBER_OF_LBW_RR_REGS	28
++#define GAUDI_NUMBER_OF_HBW_RR_REGS	24
++#define GAUDI_NUMBER_OF_LBW_RANGES	10
+ 
+-static u64 gaudi_rr_lbw_hit_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_lbw_hit_aw_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
++	mmDMA_IF_W_S_SOB_HIT_WPROT,
+ 	mmDMA_IF_W_S_DMA0_HIT_WPROT,
+ 	mmDMA_IF_W_S_DMA1_HIT_WPROT,
++	mmDMA_IF_E_S_SOB_HIT_WPROT,
+ 	mmDMA_IF_E_S_DMA0_HIT_WPROT,
+ 	mmDMA_IF_E_S_DMA1_HIT_WPROT,
++	mmDMA_IF_W_N_SOB_HIT_WPROT,
+ 	mmDMA_IF_W_N_DMA0_HIT_WPROT,
+ 	mmDMA_IF_W_N_DMA1_HIT_WPROT,
++	mmDMA_IF_E_N_SOB_HIT_WPROT,
+ 	mmDMA_IF_E_N_DMA0_HIT_WPROT,
+ 	mmDMA_IF_E_N_DMA1_HIT_WPROT,
+ 	mmSIF_RTR_0_LBW_RANGE_PROT_HIT_AW,
+@@ -38,13 +43,17 @@ static u64 gaudi_rr_lbw_hit_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_7_LBW_RANGE_PROT_HIT_AW,
+ };
+ 
+-static u64 gaudi_rr_lbw_hit_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_lbw_hit_ar_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
++	mmDMA_IF_W_S_SOB_HIT_RPROT,
+ 	mmDMA_IF_W_S_DMA0_HIT_RPROT,
+ 	mmDMA_IF_W_S_DMA1_HIT_RPROT,
++	mmDMA_IF_E_S_SOB_HIT_RPROT,
+ 	mmDMA_IF_E_S_DMA0_HIT_RPROT,
+ 	mmDMA_IF_E_S_DMA1_HIT_RPROT,
++	mmDMA_IF_W_N_SOB_HIT_RPROT,
+ 	mmDMA_IF_W_N_DMA0_HIT_RPROT,
+ 	mmDMA_IF_W_N_DMA1_HIT_RPROT,
++	mmDMA_IF_E_N_SOB_HIT_RPROT,
+ 	mmDMA_IF_E_N_DMA0_HIT_RPROT,
+ 	mmDMA_IF_E_N_DMA1_HIT_RPROT,
+ 	mmSIF_RTR_0_LBW_RANGE_PROT_HIT_AR,
+@@ -65,13 +74,17 @@ static u64 gaudi_rr_lbw_hit_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_7_LBW_RANGE_PROT_HIT_AR,
+ };
+ 
+-static u64 gaudi_rr_lbw_min_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_lbw_min_aw_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
++	mmDMA_IF_W_S_SOB_MIN_WPROT_0,
+ 	mmDMA_IF_W_S_DMA0_MIN_WPROT_0,
+ 	mmDMA_IF_W_S_DMA1_MIN_WPROT_0,
++	mmDMA_IF_E_S_SOB_MIN_WPROT_0,
+ 	mmDMA_IF_E_S_DMA0_MIN_WPROT_0,
+ 	mmDMA_IF_E_S_DMA1_MIN_WPROT_0,
++	mmDMA_IF_W_N_SOB_MIN_WPROT_0,
+ 	mmDMA_IF_W_N_DMA0_MIN_WPROT_0,
+ 	mmDMA_IF_W_N_DMA1_MIN_WPROT_0,
++	mmDMA_IF_E_N_SOB_MIN_WPROT_0,
+ 	mmDMA_IF_E_N_DMA0_MIN_WPROT_0,
+ 	mmDMA_IF_E_N_DMA1_MIN_WPROT_0,
+ 	mmSIF_RTR_0_LBW_RANGE_PROT_MIN_AW_0,
+@@ -92,13 +105,17 @@ static u64 gaudi_rr_lbw_min_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_7_LBW_RANGE_PROT_MIN_AW_0,
+ };
+ 
+-static u64 gaudi_rr_lbw_max_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_lbw_max_aw_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
++	mmDMA_IF_W_S_SOB_MAX_WPROT_0,
+ 	mmDMA_IF_W_S_DMA0_MAX_WPROT_0,
+ 	mmDMA_IF_W_S_DMA1_MAX_WPROT_0,
++	mmDMA_IF_E_S_SOB_MAX_WPROT_0,
+ 	mmDMA_IF_E_S_DMA0_MAX_WPROT_0,
+ 	mmDMA_IF_E_S_DMA1_MAX_WPROT_0,
++	mmDMA_IF_W_N_SOB_MAX_WPROT_0,
+ 	mmDMA_IF_W_N_DMA0_MAX_WPROT_0,
+ 	mmDMA_IF_W_N_DMA1_MAX_WPROT_0,
++	mmDMA_IF_E_N_SOB_MAX_WPROT_0,
+ 	mmDMA_IF_E_N_DMA0_MAX_WPROT_0,
+ 	mmDMA_IF_E_N_DMA1_MAX_WPROT_0,
+ 	mmSIF_RTR_0_LBW_RANGE_PROT_MAX_AW_0,
+@@ -119,13 +136,17 @@ static u64 gaudi_rr_lbw_max_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_7_LBW_RANGE_PROT_MAX_AW_0,
+ };
+ 
+-static u64 gaudi_rr_lbw_min_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_lbw_min_ar_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
++	mmDMA_IF_W_S_SOB_MIN_RPROT_0,
+ 	mmDMA_IF_W_S_DMA0_MIN_RPROT_0,
+ 	mmDMA_IF_W_S_DMA1_MIN_RPROT_0,
++	mmDMA_IF_E_S_SOB_MIN_RPROT_0,
+ 	mmDMA_IF_E_S_DMA0_MIN_RPROT_0,
+ 	mmDMA_IF_E_S_DMA1_MIN_RPROT_0,
++	mmDMA_IF_W_N_SOB_MIN_RPROT_0,
+ 	mmDMA_IF_W_N_DMA0_MIN_RPROT_0,
+ 	mmDMA_IF_W_N_DMA1_MIN_RPROT_0,
++	mmDMA_IF_E_N_SOB_MIN_RPROT_0,
+ 	mmDMA_IF_E_N_DMA0_MIN_RPROT_0,
+ 	mmDMA_IF_E_N_DMA1_MIN_RPROT_0,
+ 	mmSIF_RTR_0_LBW_RANGE_PROT_MIN_AR_0,
+@@ -146,13 +167,17 @@ static u64 gaudi_rr_lbw_min_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_7_LBW_RANGE_PROT_MIN_AR_0,
+ };
+ 
+-static u64 gaudi_rr_lbw_max_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_lbw_max_ar_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
++	mmDMA_IF_W_S_SOB_MAX_RPROT_0,
+ 	mmDMA_IF_W_S_DMA0_MAX_RPROT_0,
+ 	mmDMA_IF_W_S_DMA1_MAX_RPROT_0,
++	mmDMA_IF_E_S_SOB_MAX_RPROT_0,
+ 	mmDMA_IF_E_S_DMA0_MAX_RPROT_0,
+ 	mmDMA_IF_E_S_DMA1_MAX_RPROT_0,
++	mmDMA_IF_W_N_SOB_MAX_RPROT_0,
+ 	mmDMA_IF_W_N_DMA0_MAX_RPROT_0,
+ 	mmDMA_IF_W_N_DMA1_MAX_RPROT_0,
++	mmDMA_IF_E_N_SOB_MAX_RPROT_0,
+ 	mmDMA_IF_E_N_DMA0_MAX_RPROT_0,
+ 	mmDMA_IF_E_N_DMA1_MAX_RPROT_0,
+ 	mmSIF_RTR_0_LBW_RANGE_PROT_MAX_AR_0,
+@@ -173,7 +198,7 @@ static u64 gaudi_rr_lbw_max_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_7_LBW_RANGE_PROT_MAX_AR_0,
+ };
+ 
+-static u64 gaudi_rr_hbw_hit_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_hit_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_HIT_AW,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_HIT_AW,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_HIT_AW,
+@@ -200,7 +225,7 @@ static u64 gaudi_rr_hbw_hit_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_HIT_AW
+ };
+ 
+-static u64 gaudi_rr_hbw_hit_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_hit_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_HIT_AR,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_HIT_AR,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_HIT_AR,
+@@ -227,7 +252,7 @@ static u64 gaudi_rr_hbw_hit_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_HIT_AR
+ };
+ 
+-static u64 gaudi_rr_hbw_base_low_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_base_low_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_BASE_LOW_AW_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_BASE_LOW_AW_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_BASE_LOW_AW_0,
+@@ -254,7 +279,7 @@ static u64 gaudi_rr_hbw_base_low_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_BASE_LOW_AW_0
+ };
+ 
+-static u64 gaudi_rr_hbw_base_high_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_base_high_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_BASE_HIGH_AW_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_BASE_HIGH_AW_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_BASE_HIGH_AW_0,
+@@ -281,7 +306,7 @@ static u64 gaudi_rr_hbw_base_high_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_BASE_HIGH_AW_0
+ };
+ 
+-static u64 gaudi_rr_hbw_mask_low_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_mask_low_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_MASK_LOW_AW_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_MASK_LOW_AW_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_MASK_LOW_AW_0,
+@@ -308,7 +333,7 @@ static u64 gaudi_rr_hbw_mask_low_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_MASK_LOW_AW_0
+ };
+ 
+-static u64 gaudi_rr_hbw_mask_high_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_mask_high_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_MASK_HIGH_AW_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_MASK_HIGH_AW_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_MASK_HIGH_AW_0,
+@@ -335,7 +360,7 @@ static u64 gaudi_rr_hbw_mask_high_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_MASK_HIGH_AW_0
+ };
+ 
+-static u64 gaudi_rr_hbw_base_low_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_base_low_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_BASE_LOW_AR_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_BASE_LOW_AR_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_BASE_LOW_AR_0,
+@@ -362,7 +387,7 @@ static u64 gaudi_rr_hbw_base_low_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_BASE_LOW_AR_0
+ };
+ 
+-static u64 gaudi_rr_hbw_base_high_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_base_high_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_BASE_HIGH_AR_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_BASE_HIGH_AR_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_BASE_HIGH_AR_0,
+@@ -389,7 +414,7 @@ static u64 gaudi_rr_hbw_base_high_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_BASE_HIGH_AR_0
+ };
+ 
+-static u64 gaudi_rr_hbw_mask_low_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_mask_low_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_MASK_LOW_AR_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_MASK_LOW_AR_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_MASK_LOW_AR_0,
+@@ -416,7 +441,7 @@ static u64 gaudi_rr_hbw_mask_low_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_MASK_LOW_AR_0
+ };
+ 
+-static u64 gaudi_rr_hbw_mask_high_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_mask_high_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_MASK_HIGH_AR_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_MASK_HIGH_AR_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_MASK_HIGH_AR_0,
+@@ -8870,50 +8895,44 @@ static void gaudi_init_range_registers_lbw(struct hl_device *hdev)
+ 	u32 lbw_rng_end[GAUDI_NUMBER_OF_LBW_RANGES];
+ 	int i, j;
+ 
+-	lbw_rng_start[0]  = (0xFBFE0000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[0]    = (0xFBFFF000 & 0x3FFFFFF) + 1;
++	lbw_rng_start[0]  = (0xFC0E8000 & 0x3FFFFFF) - 1; /* 0x000E7FFF */
++	lbw_rng_end[0]    = (0xFC11FFFF & 0x3FFFFFF) + 1; /* 0x00120000 */
+ 
+-	lbw_rng_start[1]  = (0xFC0E8000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[1]    = (0xFC120000 & 0x3FFFFFF) + 1;
++	lbw_rng_start[1]  = (0xFC1E8000 & 0x3FFFFFF) - 1; /* 0x001E7FFF */
++	lbw_rng_end[1]    = (0xFC48FFFF & 0x3FFFFFF) + 1; /* 0x00490000 */
+ 
+-	lbw_rng_start[2]  = (0xFC1E8000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[2]    = (0xFC48FFFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[2]  = (0xFC600000 & 0x3FFFFFF) - 1; /* 0x005FFFFF */
++	lbw_rng_end[2]    = (0xFCC48FFF & 0x3FFFFFF) + 1; /* 0x00C49000 */
+ 
+-	lbw_rng_start[3]  = (0xFC600000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[3]    = (0xFCC48FFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[3]  = (0xFCC4A000 & 0x3FFFFFF) - 1; /* 0x00C49FFF */
++	lbw_rng_end[3]    = (0xFCCDFFFF & 0x3FFFFFF) + 1; /* 0x00CE0000 */
+ 
+-	lbw_rng_start[4]  = (0xFCC4A000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[4]    = (0xFCCDFFFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[4]  = (0xFCCE4000 & 0x3FFFFFF) - 1; /* 0x00CE3FFF */
++	lbw_rng_end[4]    = (0xFCD1FFFF & 0x3FFFFFF) + 1; /* 0x00D20000 */
+ 
+-	lbw_rng_start[5]  = (0xFCCE4000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[5]    = (0xFCD1FFFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[5]  = (0xFCD24000 & 0x3FFFFFF) - 1; /* 0x00D23FFF */
++	lbw_rng_end[5]    = (0xFCD5FFFF & 0x3FFFFFF) + 1; /* 0x00D60000 */
+ 
+-	lbw_rng_start[6]  = (0xFCD24000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[6]    = (0xFCD5FFFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[6]  = (0xFCD64000 & 0x3FFFFFF) - 1; /* 0x00D63FFF */
++	lbw_rng_end[6]    = (0xFCD9FFFF & 0x3FFFFFF) + 1; /* 0x00DA0000 */
+ 
+-	lbw_rng_start[7]  = (0xFCD64000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[7]    = (0xFCD9FFFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[7]  = (0xFCDA4000 & 0x3FFFFFF) - 1; /* 0x00DA3FFF */
++	lbw_rng_end[7]    = (0xFCDDFFFF & 0x3FFFFFF) + 1; /* 0x00DE0000 */
+ 
+-	lbw_rng_start[8]  = (0xFCDA4000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[8]    = (0xFCDDFFFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[8]  = (0xFCDE4000 & 0x3FFFFFF) - 1; /* 0x00DE3FFF */
++	lbw_rng_end[8]    = (0xFCE05FFF & 0x3FFFFFF) + 1; /* 0x00E06000 */
+ 
+-	lbw_rng_start[9]  = (0xFCDE4000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[9]    = (0xFCE05FFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[9]  = (0xFCFC9000 & 0x3FFFFFF) - 1; /* 0x00FC8FFF */
++	lbw_rng_end[9]    = (0xFFFFFFFE & 0x3FFFFFF) + 1; /* 0x03FFFFFF */
+ 
+-	lbw_rng_start[10]  = (0xFEC43000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[10]    = (0xFEC43FFF & 0x3FFFFFF) + 1;
+-
+-	lbw_rng_start[11] = (0xFE484000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[11]   = (0xFE484FFF & 0x3FFFFFF) + 1;
+-
+-	for (i = 0 ; i < GAUDI_NUMBER_OF_RR_REGS ; i++) {
++	for (i = 0 ; i < GAUDI_NUMBER_OF_LBW_RR_REGS ; i++) {
+ 		WREG32(gaudi_rr_lbw_hit_aw_regs[i],
+ 				(1 << GAUDI_NUMBER_OF_LBW_RANGES) - 1);
+ 		WREG32(gaudi_rr_lbw_hit_ar_regs[i],
+ 				(1 << GAUDI_NUMBER_OF_LBW_RANGES) - 1);
+ 	}
+ 
+-	for (i = 0 ; i < GAUDI_NUMBER_OF_RR_REGS ; i++)
++	for (i = 0 ; i < GAUDI_NUMBER_OF_LBW_RR_REGS ; i++)
+ 		for (j = 0 ; j < GAUDI_NUMBER_OF_LBW_RANGES ; j++) {
+ 			WREG32(gaudi_rr_lbw_min_aw_regs[i] + (j << 2),
+ 							lbw_rng_start[j]);
+@@ -8960,12 +8979,12 @@ static void gaudi_init_range_registers_hbw(struct hl_device *hdev)
+ 	 * 6th range is the host
+ 	 */
+ 
+-	for (i = 0 ; i < GAUDI_NUMBER_OF_RR_REGS ; i++) {
++	for (i = 0 ; i < GAUDI_NUMBER_OF_HBW_RR_REGS ; i++) {
+ 		WREG32(gaudi_rr_hbw_hit_aw_regs[i], 0x1F);
+ 		WREG32(gaudi_rr_hbw_hit_ar_regs[i], 0x1D);
+ 	}
+ 
+-	for (i = 0 ; i < GAUDI_NUMBER_OF_RR_REGS ; i++) {
++	for (i = 0 ; i < GAUDI_NUMBER_OF_HBW_RR_REGS ; i++) {
+ 		WREG32(gaudi_rr_hbw_base_low_aw_regs[i], dram_addr_lo);
+ 		WREG32(gaudi_rr_hbw_base_low_ar_regs[i], dram_addr_lo);
+ 
+diff --git a/drivers/net/phy/mdio_device.c b/drivers/net/phy/mdio_device.c
+index 0837319a52d75..797c41f5590ef 100644
+--- a/drivers/net/phy/mdio_device.c
++++ b/drivers/net/phy/mdio_device.c
+@@ -179,6 +179,16 @@ static int mdio_remove(struct device *dev)
+ 	return 0;
+ }
+ 
++static void mdio_shutdown(struct device *dev)
++{
++	struct mdio_device *mdiodev = to_mdio_device(dev);
++	struct device_driver *drv = mdiodev->dev.driver;
++	struct mdio_driver *mdiodrv = to_mdio_driver(drv);
++
++	if (mdiodrv->shutdown)
++		mdiodrv->shutdown(mdiodev);
++}
++
+ /**
+  * mdio_driver_register - register an mdio_driver with the MDIO layer
+  * @drv: new mdio_driver to register
+@@ -193,6 +203,7 @@ int mdio_driver_register(struct mdio_driver *drv)
+ 	mdiodrv->driver.bus = &mdio_bus_type;
+ 	mdiodrv->driver.probe = mdio_probe;
+ 	mdiodrv->driver.remove = mdio_remove;
++	mdiodrv->driver.shutdown = mdio_shutdown;
+ 
+ 	retval = driver_register(&mdiodrv->driver);
+ 	if (retval) {
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index 986b569709616..b0cbc7fead745 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -499,7 +499,7 @@ check_frags:
+ 				 * the header's copy failed, and they are
+ 				 * sharing a slot, send an error
+ 				 */
+-				if (i == 0 && sharedslot)
++				if (i == 0 && !first_shinfo && sharedslot)
+ 					xenvif_idx_release(queue, pending_idx,
+ 							   XEN_NETIF_RSP_ERROR);
+ 				else
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index a0bcec33b0208..906cab35afe7a 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -2486,6 +2486,7 @@ __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
+ 	 */
+ 	if (ctrl->ctrl.queue_count > 1) {
+ 		nvme_stop_queues(&ctrl->ctrl);
++		nvme_sync_io_queues(&ctrl->ctrl);
+ 		blk_mq_tagset_busy_iter(&ctrl->tag_set,
+ 				nvme_fc_terminate_exchange, &ctrl->ctrl);
+ 		blk_mq_tagset_wait_completed_request(&ctrl->tag_set);
+@@ -2509,6 +2510,7 @@ __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
+ 	 * clean up the admin queue. Same thing as above.
+ 	 */
+ 	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
++	blk_sync_queue(ctrl->ctrl.admin_q);
+ 	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
+ 				nvme_fc_terminate_exchange, &ctrl->ctrl);
+ 	blk_mq_tagset_wait_completed_request(&ctrl->admin_tag_set);
+@@ -2952,14 +2954,6 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
+ 	if (ctrl->ctrl.queue_count == 1)
+ 		return 0;
+ 
+-	ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
+-	if (ret)
+-		goto out_free_io_queues;
+-
+-	ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
+-	if (ret)
+-		goto out_delete_hw_queues;
+-
+ 	if (prior_ioq_cnt != nr_io_queues) {
+ 		dev_info(ctrl->ctrl.device,
+ 			"reconnect: revising io queue count from %d to %d\n",
+@@ -2969,6 +2963,14 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
+ 		nvme_unfreeze(&ctrl->ctrl);
+ 	}
+ 
++	ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
++	if (ret)
++		goto out_free_io_queues;
++
++	ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
++	if (ret)
++		goto out_delete_hw_queues;
++
+ 	return 0;
+ 
+ out_delete_hw_queues:
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index 99260915122c0..59b7e90cd5875 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -100,10 +100,10 @@ static const struct ts_dmi_data chuwi_hi10_air_data = {
+ };
+ 
+ static const struct property_entry chuwi_hi10_plus_props[] = {
+-	PROPERTY_ENTRY_U32("touchscreen-min-x", 0),
+-	PROPERTY_ENTRY_U32("touchscreen-min-y", 5),
+-	PROPERTY_ENTRY_U32("touchscreen-size-x", 1914),
+-	PROPERTY_ENTRY_U32("touchscreen-size-y", 1283),
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 12),
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 10),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1908),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1270),
+ 	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hi10plus.fw"),
+ 	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
+ 	PROPERTY_ENTRY_BOOL("silead,home-button"),
+@@ -111,6 +111,15 @@ static const struct property_entry chuwi_hi10_plus_props[] = {
+ };
+ 
+ static const struct ts_dmi_data chuwi_hi10_plus_data = {
++	.embedded_fw = {
++		.name	= "silead/gsl1680-chuwi-hi10plus.fw",
++		.prefix = { 0xf0, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00 },
++		.length	= 34056,
++		.sha256	= { 0xfd, 0x0a, 0x08, 0x08, 0x3c, 0xa6, 0x34, 0x4e,
++			    0x2c, 0x49, 0x9c, 0xcd, 0x7d, 0x44, 0x9d, 0x38,
++			    0x10, 0x68, 0xb5, 0xbd, 0xb7, 0x2a, 0x63, 0xb5,
++			    0x67, 0x0b, 0x96, 0xbd, 0x89, 0x67, 0x85, 0x09 },
++	},
+ 	.acpi_name      = "MSSL0017:00",
+ 	.properties     = chuwi_hi10_plus_props,
+ };
+@@ -141,6 +150,33 @@ static const struct ts_dmi_data chuwi_hi10_pro_data = {
+ 	.properties     = chuwi_hi10_pro_props,
+ };
+ 
++static const struct property_entry chuwi_hibook_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 30),
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 4),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1892),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1276),
++	PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
++	PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hibook.fw"),
++	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
++	PROPERTY_ENTRY_BOOL("silead,home-button"),
++	{ }
++};
++
++static const struct ts_dmi_data chuwi_hibook_data = {
++	.embedded_fw = {
++		.name	= "silead/gsl1680-chuwi-hibook.fw",
++		.prefix = { 0xf0, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00 },
++		.length	= 40392,
++		.sha256	= { 0xf7, 0xc0, 0xe8, 0x5a, 0x6c, 0xf2, 0xeb, 0x8d,
++			    0x12, 0xc4, 0x45, 0xbf, 0x55, 0x13, 0x4c, 0x1a,
++			    0x13, 0x04, 0x31, 0x08, 0x65, 0x73, 0xf7, 0xa8,
++			    0x1b, 0x7d, 0x59, 0xc9, 0xe6, 0x97, 0xf7, 0x38 },
++	},
++	.acpi_name      = "MSSL0017:00",
++	.properties     = chuwi_hibook_props,
++};
++
+ static const struct property_entry chuwi_vi8_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-min-x", 4),
+ 	PROPERTY_ENTRY_U32("touchscreen-min-y", 6),
+@@ -936,6 +972,16 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
+ 		},
+ 	},
++	{
++		/* Chuwi HiBook (CWI514) */
++		.driver_data = (void *)&chuwi_hibook_data,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
++			DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			/* Above matches are too generic, add bios-date match */
++			DMI_MATCH(DMI_BIOS_DATE, "05/07/2016"),
++		},
++	},
+ 	{
+ 		/* Chuwi Vi8 (CWI506) */
+ 		.driver_data = (void *)&chuwi_vi8_data,
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index f0c0935d79099..56e2917085874 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3443,15 +3443,16 @@ static int sd_probe(struct device *dev)
+ 	}
+ 
+ 	device_initialize(&sdkp->dev);
+-	sdkp->dev.parent = dev;
++	sdkp->dev.parent = get_device(dev);
+ 	sdkp->dev.class = &sd_disk_class;
+ 	dev_set_name(&sdkp->dev, "%s", dev_name(dev));
+ 
+ 	error = device_add(&sdkp->dev);
+-	if (error)
+-		goto out_free_index;
++	if (error) {
++		put_device(&sdkp->dev);
++		goto out;
++	}
+ 
+-	get_device(dev);
+ 	dev_set_drvdata(dev, sdkp);
+ 
+ 	gd->major = sd_major((index & 0xf0) >> 4);
+diff --git a/drivers/scsi/ses.c b/drivers/scsi/ses.c
+index c2afba2a5414d..43e682297fd5f 100644
+--- a/drivers/scsi/ses.c
++++ b/drivers/scsi/ses.c
+@@ -87,9 +87,16 @@ static int ses_recv_diag(struct scsi_device *sdev, int page_code,
+ 		0
+ 	};
+ 	unsigned char recv_page_code;
++	unsigned int retries = SES_RETRIES;
++	struct scsi_sense_hdr sshdr;
++
++	do {
++		ret = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,
++				       &sshdr, SES_TIMEOUT, 1, NULL);
++	} while (ret > 0 && --retries && scsi_sense_valid(&sshdr) &&
++		 (sshdr.sense_key == NOT_READY ||
++		  (sshdr.sense_key == UNIT_ATTENTION && sshdr.asc == 0x29)));
+ 
+-	ret =  scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,
+-				NULL, SES_TIMEOUT, SES_RETRIES, NULL);
+ 	if (unlikely(ret))
+ 		return ret;
+ 
+@@ -121,9 +128,16 @@ static int ses_send_diag(struct scsi_device *sdev, int page_code,
+ 		bufflen & 0xff,
+ 		0
+ 	};
++	struct scsi_sense_hdr sshdr;
++	unsigned int retries = SES_RETRIES;
++
++	do {
++		result = scsi_execute_req(sdev, cmd, DMA_TO_DEVICE, buf, bufflen,
++					  &sshdr, SES_TIMEOUT, 1, NULL);
++	} while (result > 0 && --retries && scsi_sense_valid(&sshdr) &&
++		 (sshdr.sense_key == NOT_READY ||
++		  (sshdr.sense_key == UNIT_ATTENTION && sshdr.asc == 0x29)));
+ 
+-	result = scsi_execute_req(sdev, cmd, DMA_TO_DEVICE, buf, bufflen,
+-				  NULL, SES_TIMEOUT, SES_RETRIES, NULL);
+ 	if (result)
+ 		sdev_printk(KERN_ERR, sdev, "SEND DIAGNOSTIC result: %8x\n",
+ 			    result);
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index 0aab37cd64e74..624273d0e727f 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -582,6 +582,12 @@ static int rockchip_spi_transfer_one(
+ 	int ret;
+ 	bool use_dma;
+ 
++	/* Zero length transfers won't trigger an interrupt on completion */
++	if (!xfer->len) {
++		spi_finalize_current_transfer(ctlr);
++		return 1;
++	}
++
+ 	WARN_ON(readl_relaxed(rs->regs + ROCKCHIP_SPI_SSIENR) &&
+ 		(readl_relaxed(rs->regs + ROCKCHIP_SPI_SR) & SR_BUSY));
+ 
+diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
+index 3c4c0516e58ab..cb4f4b5224460 100644
+--- a/drivers/thermal/qcom/tsens.c
++++ b/drivers/thermal/qcom/tsens.c
+@@ -415,7 +415,7 @@ static irqreturn_t tsens_critical_irq_thread(int irq, void *data)
+ 		const struct tsens_sensor *s = &priv->sensor[i];
+ 		u32 hw_id = s->hw_id;
+ 
+-		if (IS_ERR(s->tzd))
++		if (!s->tzd)
+ 			continue;
+ 		if (!tsens_threshold_violated(priv, hw_id, &d))
+ 			continue;
+@@ -465,7 +465,7 @@ static irqreturn_t tsens_irq_thread(int irq, void *data)
+ 		const struct tsens_sensor *s = &priv->sensor[i];
+ 		u32 hw_id = s->hw_id;
+ 
+-		if (IS_ERR(s->tzd))
++		if (!s->tzd)
+ 			continue;
+ 		if (!tsens_threshold_violated(priv, hw_id, &d))
+ 			continue;
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 6af1dcbc36564..30919f741b7fd 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -5074,6 +5074,10 @@ int dwc2_hcd_init(struct dwc2_hsotg *hsotg)
+ 	hcd->has_tt = 1;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res) {
++		retval = -EINVAL;
++		goto error1;
++	}
+ 	hcd->rsrc_start = res->start;
+ 	hcd->rsrc_len = resource_size(res);
+ 
+diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
+index 48a2ea6d70921..2de1d8247494e 100644
+--- a/fs/btrfs/file-item.c
++++ b/fs/btrfs/file-item.c
+@@ -568,7 +568,18 @@ blk_status_t btrfs_csum_one_bio(struct btrfs_inode *inode, struct bio *bio,
+ 
+ 		if (!ordered) {
+ 			ordered = btrfs_lookup_ordered_extent(inode, offset);
+-			BUG_ON(!ordered); /* Logic error */
++			/*
++			 * The bio range is not covered by any ordered extent,
++			 * must be a code logic error.
++			 */
++			if (unlikely(!ordered)) {
++				WARN(1, KERN_WARNING
++			"no ordered extent for root %llu ino %llu offset %llu\n",
++				     inode->root->root_key.objectid,
++				     btrfs_ino(inode), offset);
++				kvfree(sums);
++				return BLK_STS_IOERR;
++			}
+ 		}
+ 
+ 		nr_sectors = BTRFS_BYTES_TO_BLKS(fs_info,
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index d8b8764f5bd10..593e0c6d6b44e 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1147,6 +1147,19 @@ static void btrfs_close_one_device(struct btrfs_device *device)
+ 	atomic_set(&device->dev_stats_ccnt, 0);
+ 	extent_io_tree_release(&device->alloc_state);
+ 
++	/*
++	 * Reset the flush error record. We might have a transient flush error
++	 * in this mount, and if so we aborted the current transaction and set
++	 * the fs to an error state, guaranteeing no super blocks can be further
++	 * committed. However that error might be transient and if we unmount the
++	 * filesystem and mount it again, we should allow the mount to succeed
++	 * (btrfs_check_rw_degradable() should not fail) - if after mounting the
++	 * filesystem again we still get flush errors, then we will again abort
++	 * any transaction and set the error state, guaranteeing no commits of
++	 * unsafe super blocks.
++	 */
++	device->last_flush_error = 0;
++
+ 	/* Verify the device is back in a pristine state  */
+ 	ASSERT(!test_bit(BTRFS_DEV_STATE_FLUSH_SENT, &device->dev_state));
+ 	ASSERT(!test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state));
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index ca5102773b72b..88554b640b0da 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2339,7 +2339,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
+ 	buf->sd.OffsetDacl = cpu_to_le32(ptr - (__u8 *)&buf->sd);
+ 	/* Ship the ACL for now. we will copy it into buf later. */
+ 	aclptr = ptr;
+-	ptr += sizeof(struct cifs_acl);
++	ptr += sizeof(struct smb3_acl);
+ 
+ 	/* create one ACE to hold the mode embedded in reserved special SID */
+ 	acelen = setup_special_mode_ACE((struct cifs_ace *)ptr, (__u64)mode);
+@@ -2364,7 +2364,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
+ 	acl.AclRevision = ACL_REVISION; /* See 2.4.4.1 of MS-DTYP */
+ 	acl.AclSize = cpu_to_le16(acl_size);
+ 	acl.AceCount = cpu_to_le16(ace_count);
+-	memcpy(aclptr, &acl, sizeof(struct cifs_acl));
++	memcpy(aclptr, &acl, sizeof(struct smb3_acl));
+ 
+ 	buf->ccontext.DataLength = cpu_to_le32(ptr - (__u8 *)&buf->sd);
+ 	*len = roundup(ptr - (__u8 *)buf, 8);
+diff --git a/fs/ext2/balloc.c b/fs/ext2/balloc.c
+index 1f3f4326bf3ce..c17ccc19b938e 100644
+--- a/fs/ext2/balloc.c
++++ b/fs/ext2/balloc.c
+@@ -48,10 +48,9 @@ struct ext2_group_desc * ext2_get_group_desc(struct super_block * sb,
+ 	struct ext2_sb_info *sbi = EXT2_SB(sb);
+ 
+ 	if (block_group >= sbi->s_groups_count) {
+-		ext2_error (sb, "ext2_get_group_desc",
+-			    "block_group >= groups_count - "
+-			    "block_group = %d, groups_count = %lu",
+-			    block_group, sbi->s_groups_count);
++		WARN(1, "block_group >= groups_count - "
++		     "block_group = %d, groups_count = %lu",
++		     block_group, sbi->s_groups_count);
+ 
+ 		return NULL;
+ 	}
+@@ -59,10 +58,9 @@ struct ext2_group_desc * ext2_get_group_desc(struct super_block * sb,
+ 	group_desc = block_group >> EXT2_DESC_PER_BLOCK_BITS(sb);
+ 	offset = block_group & (EXT2_DESC_PER_BLOCK(sb) - 1);
+ 	if (!sbi->s_group_desc[group_desc]) {
+-		ext2_error (sb, "ext2_get_group_desc",
+-			    "Group descriptor not loaded - "
+-			    "block_group = %d, group_desc = %lu, desc = %lu",
+-			     block_group, group_desc, offset);
++		WARN(1, "Group descriptor not loaded - "
++		     "block_group = %d, group_desc = %lu, desc = %lu",
++		      block_group, group_desc, offset);
+ 		return NULL;
+ 	}
+ 
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 0313390fa4b44..1cdf7e0a5c22d 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -3512,7 +3512,7 @@ static struct nfsd4_conn *__nfsd4_find_conn(struct svc_xprt *xpt, struct nfsd4_s
+ }
+ 
+ static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
+-				struct nfsd4_session *session, u32 req)
++		struct nfsd4_session *session, u32 req, struct nfsd4_conn **conn)
+ {
+ 	struct nfs4_client *clp = session->se_client;
+ 	struct svc_xprt *xpt = rqst->rq_xprt;
+@@ -3535,6 +3535,8 @@ static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
+ 	else
+ 		status = nfserr_inval;
+ 	spin_unlock(&clp->cl_lock);
++	if (status == nfs_ok && conn)
++		*conn = c;
+ 	return status;
+ }
+ 
+@@ -3559,8 +3561,16 @@ __be32 nfsd4_bind_conn_to_session(struct svc_rqst *rqstp,
+ 	status = nfserr_wrong_cred;
+ 	if (!nfsd4_mach_creds_match(session->se_client, rqstp))
+ 		goto out;
+-	status = nfsd4_match_existing_connection(rqstp, session, bcts->dir);
+-	if (status == nfs_ok || status == nfserr_inval)
++	status = nfsd4_match_existing_connection(rqstp, session,
++			bcts->dir, &conn);
++	if (status == nfs_ok) {
++		if (bcts->dir == NFS4_CDFC4_FORE_OR_BOTH ||
++				bcts->dir == NFS4_CDFC4_BACK)
++			conn->cn_flags |= NFS4_CDFC4_BACK;
++		nfsd4_probe_callback(session->se_client);
++		goto out;
++	}
++	if (status == nfserr_inval)
+ 		goto out;
+ 	status = nfsd4_map_bcts_dir(&bcts->dir);
+ 	if (status)
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 5f550eb27f811..57dffa0d58702 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -422,6 +422,7 @@ enum {
+ 	ATA_HORKAGE_NOTRIM	= (1 << 24),	/* don't use TRIM */
+ 	ATA_HORKAGE_MAX_SEC_1024 = (1 << 25),	/* Limit max sects to 1024 */
+ 	ATA_HORKAGE_MAX_TRIM_128M = (1 << 26),	/* Limit max trim size to 128M */
++	ATA_HORKAGE_NO_NCQ_ON_ATI = (1 << 27),	/* Disable NCQ on ATI chipset */
+ 
+ 	 /* DMA mask for user DMA control: User visible values; DO NOT
+ 	    renumber */
+diff --git a/include/linux/mdio.h b/include/linux/mdio.h
+index dbd69b3d170b4..de5fb4b333ce3 100644
+--- a/include/linux/mdio.h
++++ b/include/linux/mdio.h
+@@ -72,6 +72,9 @@ struct mdio_driver {
+ 
+ 	/* Clears up any memory if needed */
+ 	void (*remove)(struct mdio_device *mdiodev);
++
++	/* Quiesces the device on system shutdown, turns off interrupts etc */
++	void (*shutdown)(struct mdio_device *mdiodev);
+ };
+ #define to_mdio_driver(d)						\
+ 	container_of(to_mdio_common_driver(d), struct mdio_driver, mdiodrv)
+diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
+index fcc840088c919..7daedee3e7ee7 100644
+--- a/tools/testing/selftests/kvm/steal_time.c
++++ b/tools/testing/selftests/kvm/steal_time.c
+@@ -120,12 +120,12 @@ struct st_time {
+ 	uint64_t st_time;
+ };
+ 
+-static int64_t smccc(uint32_t func, uint32_t arg)
++static int64_t smccc(uint32_t func, uint64_t arg)
+ {
+ 	unsigned long ret;
+ 
+ 	asm volatile(
+-		"mov	x0, %1\n"
++		"mov	w0, %w1\n"
+ 		"mov	x1, %2\n"
+ 		"hvc	#0\n"
+ 		"mov	%0, x0\n"
+diff --git a/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c b/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c
+index e6480fd5c4bdc..8039e1eff9388 100644
+--- a/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c
++++ b/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c
+@@ -82,7 +82,8 @@ int get_warnings_count(void)
+ 	FILE *f;
+ 
+ 	f = popen("dmesg | grep \"WARNING:\" | wc -l", "r");
+-	fscanf(f, "%d", &warnings);
++	if (fscanf(f, "%d", &warnings) < 1)
++		warnings = 0;
+ 	fclose(f);
+ 
+ 	return warnings;
+diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
+index 0af84ad48aa77..b7217b5251f57 100644
+--- a/tools/testing/selftests/lib.mk
++++ b/tools/testing/selftests/lib.mk
+@@ -48,6 +48,7 @@ ARCH		?= $(SUBARCH)
+ # When local build is done, headers are installed in the default
+ # INSTALL_HDR_PATH usr/include.
+ .PHONY: khdr
++.NOTPARALLEL:
+ khdr:
+ ifndef KSFT_KHDR_INSTALL_DONE
+ ifeq (1,$(DEFAULT_INSTALL_HDR_PATH))
+diff --git a/tools/usb/testusb.c b/tools/usb/testusb.c
+index ee8208b2f9460..69c3ead25313d 100644
+--- a/tools/usb/testusb.c
++++ b/tools/usb/testusb.c
+@@ -265,12 +265,6 @@ nomem:
+ 	}
+ 
+ 	entry->ifnum = ifnum;
+-
+-	/* FIXME update USBDEVFS_CONNECTINFO so it tells about high speed etc */
+-
+-	fprintf(stderr, "%s speed\t%s\t%u\n",
+-		speed(entry->speed), entry->name, entry->ifnum);
+-
+ 	entry->next = testdevs;
+ 	testdevs = entry;
+ 	return 0;
+@@ -299,6 +293,14 @@ static void *handle_testdev (void *arg)
+ 		return 0;
+ 	}
+ 
++	status  =  ioctl(fd, USBDEVFS_GET_SPEED, NULL);
++	if (status < 0)
++		fprintf(stderr, "USBDEVFS_GET_SPEED failed %d\n", status);
++	else
++		dev->speed = status;
++	fprintf(stderr, "%s speed\t%s\t%u\n",
++			speed(dev->speed), dev->name, dev->ifnum);
++
+ restart:
+ 	for (i = 0; i < TEST_CASES; i++) {
+ 		if (dev->test != -1 && dev->test != i)
+diff --git a/tools/vm/page-types.c b/tools/vm/page-types.c
+index 0517c744b04e8..f62f10c988db1 100644
+--- a/tools/vm/page-types.c
++++ b/tools/vm/page-types.c
+@@ -1331,7 +1331,7 @@ int main(int argc, char *argv[])
+ 	if (opt_list && opt_list_mapcnt)
+ 		kpagecount_fd = checked_open(PROC_KPAGECOUNT, O_RDONLY);
+ 
+-	if (opt_mark_idle && opt_file)
++	if (opt_mark_idle)
+ 		page_idle_fd = checked_open(SYS_KERNEL_MM_PAGE_IDLE, O_RDWR);
+ 
+ 	if (opt_list && opt_pid)
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 0e4310c415a8d..57c0c3b18bde7 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2756,15 +2756,19 @@ out:
+ 
+ static void shrink_halt_poll_ns(struct kvm_vcpu *vcpu)
+ {
+-	unsigned int old, val, shrink;
++	unsigned int old, val, shrink, grow_start;
+ 
+ 	old = val = vcpu->halt_poll_ns;
+ 	shrink = READ_ONCE(halt_poll_ns_shrink);
++	grow_start = READ_ONCE(halt_poll_ns_grow_start);
+ 	if (shrink == 0)
+ 		val = 0;
+ 	else
+ 		val /= shrink;
+ 
++	if (val < grow_start)
++		val = 0;
++
+ 	vcpu->halt_poll_ns = val;
+ 	trace_kvm_halt_poll_ns_shrink(vcpu->vcpu_id, val, old);
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-10-13  9:35 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2021-10-13  9:35 UTC (permalink / raw
  To: gentoo-commits

commit:     37557341c0895a26a577adcc6994453a28bd71cc
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 13 09:34:23 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Oct 13 09:35:03 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=37557341

Linux patch 5.10.73

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1072_linux-5.10.73.patch | 2474 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2478 insertions(+)

diff --git a/0000_README b/0000_README
index 42e4628..9e6befb 100644
--- a/0000_README
+++ b/0000_README
@@ -331,6 +331,10 @@ Patch:  1071_linux-5.10.72.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.72
 
+Patch:  1072_linux-5.10.73.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.73
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1072_linux-5.10.73.patch b/1072_linux-5.10.73.patch
new file mode 100644
index 0000000..5327e55
--- /dev/null
+++ b/1072_linux-5.10.73.patch
@@ -0,0 +1,2474 @@
+diff --git a/Documentation/devicetree/bindings/display/bridge/ti,sn65dsi86.yaml b/Documentation/devicetree/bindings/display/bridge/ti,sn65dsi86.yaml
+index f8622bd0f61ee..f0e0345da498f 100644
+--- a/Documentation/devicetree/bindings/display/bridge/ti,sn65dsi86.yaml
++++ b/Documentation/devicetree/bindings/display/bridge/ti,sn65dsi86.yaml
+@@ -18,7 +18,7 @@ properties:
+     const: ti,sn65dsi86
+ 
+   reg:
+-    const: 0x2d
++    enum: [ 0x2c, 0x2d ]
+ 
+   enable-gpios:
+     maxItems: 1
+diff --git a/Makefile b/Makefile
+index 48211c8503d4e..3f62cea9afc0e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 72
++SUBLEVEL = 73
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx53-m53menlo.dts b/arch/arm/boot/dts/imx53-m53menlo.dts
+index d3082b9774e40..4f88e96d81ddb 100644
+--- a/arch/arm/boot/dts/imx53-m53menlo.dts
++++ b/arch/arm/boot/dts/imx53-m53menlo.dts
+@@ -56,6 +56,7 @@
+ 	panel {
+ 		compatible = "edt,etm0700g0dh6";
+ 		pinctrl-0 = <&pinctrl_display_gpio>;
++		pinctrl-names = "default";
+ 		enable-gpios = <&gpio6 0 GPIO_ACTIVE_HIGH>;
+ 
+ 		port {
+@@ -76,8 +77,7 @@
+ 		regulator-name = "vbus";
+ 		regulator-min-microvolt = <5000000>;
+ 		regulator-max-microvolt = <5000000>;
+-		gpio = <&gpio1 2 GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
++		gpio = <&gpio1 2 0>;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi b/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
+index 9148a01ed6d9f..ebc0892e37c7a 100644
+--- a/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
++++ b/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
+@@ -5,6 +5,7 @@
+ #include <dt-bindings/gpio/gpio.h>
+ #include <dt-bindings/interrupt-controller/irq.h>
+ #include <dt-bindings/input/input.h>
++#include <dt-bindings/leds/common.h>
+ #include <dt-bindings/pwm/pwm.h>
+ 
+ / {
+@@ -275,6 +276,7 @@
+ 			led-cur = /bits/ 8 <0x20>;
+ 			max-cur = /bits/ 8 <0x60>;
+ 			reg = <0>;
++			color = <LED_COLOR_ID_RED>;
+ 		};
+ 
+ 		chan@1 {
+@@ -282,6 +284,7 @@
+ 			led-cur = /bits/ 8 <0x20>;
+ 			max-cur = /bits/ 8 <0x60>;
+ 			reg = <1>;
++			color = <LED_COLOR_ID_GREEN>;
+ 		};
+ 
+ 		chan@2 {
+@@ -289,6 +292,7 @@
+ 			led-cur = /bits/ 8 <0x20>;
+ 			max-cur = /bits/ 8 <0x60>;
+ 			reg = <2>;
++			color = <LED_COLOR_ID_BLUE>;
+ 		};
+ 
+ 		chan@3 {
+@@ -296,6 +300,7 @@
+ 			led-cur = /bits/ 8 <0x0>;
+ 			max-cur = /bits/ 8 <0x0>;
+ 			reg = <3>;
++			color = <LED_COLOR_ID_WHITE>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-pico.dtsi b/arch/arm/boot/dts/imx6qdl-pico.dtsi
+index 5de4ccb979163..f7a56d6b160c8 100644
+--- a/arch/arm/boot/dts/imx6qdl-pico.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-pico.dtsi
+@@ -176,7 +176,18 @@
+ 	pinctrl-0 = <&pinctrl_enet>;
+ 	phy-mode = "rgmii-id";
+ 	phy-reset-gpios = <&gpio1 26 GPIO_ACTIVE_LOW>;
++	phy-handle = <&phy>;
+ 	status = "okay";
++
++	mdio {
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		phy: ethernet-phy@1 {
++			reg = <1>;
++			qca,clk-out-frequency = <125000000>;
++		};
++	};
+ };
+ 
+ &hdmi {
+diff --git a/arch/arm/boot/dts/omap3430-sdp.dts b/arch/arm/boot/dts/omap3430-sdp.dts
+index c5b9037184149..7d530ae3483b8 100644
+--- a/arch/arm/boot/dts/omap3430-sdp.dts
++++ b/arch/arm/boot/dts/omap3430-sdp.dts
+@@ -101,7 +101,7 @@
+ 
+ 	nand@1,0 {
+ 		compatible = "ti,omap2-nand";
+-		reg = <0 0 4>; /* CS0, offset 0, IO size 4 */
++		reg = <1 0 4>; /* CS1, offset 0, IO size 4 */
+ 		interrupt-parent = <&gpmc>;
+ 		interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
+ 			     <1 IRQ_TYPE_NONE>;	/* termcount */
+diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
+index e36d590e83732..72c4a9fc41a20 100644
+--- a/arch/arm/boot/dts/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
+@@ -198,7 +198,7 @@
+ 			clock-frequency = <19200000>;
+ 		};
+ 
+-		pxo_board {
++		pxo_board: pxo_board {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <27000000>;
+@@ -1148,7 +1148,7 @@
+ 		};
+ 
+ 		gpu: adreno-3xx@4300000 {
+-			compatible = "qcom,adreno-3xx";
++			compatible = "qcom,adreno-320.2", "qcom,adreno";
+ 			reg = <0x04300000 0x20000>;
+ 			reg-names = "kgsl_3d0_reg_memory";
+ 			interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>;
+@@ -1163,7 +1163,6 @@
+ 			    <&mmcc GFX3D_AHB_CLK>,
+ 			    <&mmcc GFX3D_AXI_CLK>,
+ 			    <&mmcc MMSS_IMEM_AHB_CLK>;
+-			qcom,chipid = <0x03020002>;
+ 
+ 			iommus = <&gfx3d 0
+ 				  &gfx3d 1
+@@ -1306,7 +1305,7 @@
+ 			reg-names = "dsi_pll", "dsi_phy", "dsi_phy_regulator";
+ 			clock-names = "iface_clk", "ref";
+ 			clocks = <&mmcc DSI_M_AHB_CLK>,
+-				 <&cxo_board>;
++				 <&pxo_board>;
+ 		};
+ 
+ 
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index 120f9aa6fff32..3f015cb6ec2b0 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -517,18 +517,22 @@ static const struct of_device_id ramc_ids[] __initconst = {
+ 	{ /*sentinel*/ }
+ };
+ 
+-static __init void at91_dt_ramc(void)
++static __init int at91_dt_ramc(void)
+ {
+ 	struct device_node *np;
+ 	const struct of_device_id *of_id;
+ 	int idx = 0;
+ 	void *standby = NULL;
+ 	const struct ramc_info *ramc;
++	int ret;
+ 
+ 	for_each_matching_node_and_match(np, ramc_ids, &of_id) {
+ 		soc_pm.data.ramc[idx] = of_iomap(np, 0);
+-		if (!soc_pm.data.ramc[idx])
+-			panic(pr_fmt("unable to map ramc[%d] cpu registers\n"), idx);
++		if (!soc_pm.data.ramc[idx]) {
++			pr_err("unable to map ramc[%d] cpu registers\n", idx);
++			ret = -ENOMEM;
++			goto unmap_ramc;
++		}
+ 
+ 		ramc = of_id->data;
+ 		if (!standby)
+@@ -538,15 +542,26 @@ static __init void at91_dt_ramc(void)
+ 		idx++;
+ 	}
+ 
+-	if (!idx)
+-		panic(pr_fmt("unable to find compatible ram controller node in dtb\n"));
++	if (!idx) {
++		pr_err("unable to find compatible ram controller node in dtb\n");
++		ret = -ENODEV;
++		goto unmap_ramc;
++	}
+ 
+ 	if (!standby) {
+ 		pr_warn("ramc no standby function available\n");
+-		return;
++		return 0;
+ 	}
+ 
+ 	at91_cpuidle_device.dev.platform_data = standby;
++
++	return 0;
++
++unmap_ramc:
++	while (idx)
++		iounmap(soc_pm.data.ramc[--idx]);
++
++	return ret;
+ }
+ 
+ static void at91rm9200_idle(void)
+@@ -869,6 +884,8 @@ static void __init at91_pm_init(void (*pm_idle)(void))
+ 
+ void __init at91rm9200_pm_init(void)
+ {
++	int ret;
++
+ 	if (!IS_ENABLED(CONFIG_SOC_AT91RM9200))
+ 		return;
+ 
+@@ -880,7 +897,9 @@ void __init at91rm9200_pm_init(void)
+ 	soc_pm.data.standby_mode = AT91_PM_STANDBY;
+ 	soc_pm.data.suspend_mode = AT91_PM_ULP0;
+ 
+-	at91_dt_ramc();
++	ret = at91_dt_ramc();
++	if (ret)
++		return;
+ 
+ 	/*
+ 	 * AT91RM9200 SDRAM low-power mode cannot be used with self-refresh.
+@@ -895,13 +914,17 @@ void __init sam9x60_pm_init(void)
+ 	static const int modes[] __initconst = {
+ 		AT91_PM_STANDBY, AT91_PM_ULP0, AT91_PM_ULP0_FAST, AT91_PM_ULP1,
+ 	};
++	int ret;
+ 
+ 	if (!IS_ENABLED(CONFIG_SOC_SAM9X60))
+ 		return;
+ 
+ 	at91_pm_modes_validate(modes, ARRAY_SIZE(modes));
+ 	at91_pm_modes_init();
+-	at91_dt_ramc();
++	ret = at91_dt_ramc();
++	if (ret)
++		return;
++
+ 	at91_pm_init(NULL);
+ 
+ 	soc_pm.ws_ids = sam9x60_ws_ids;
+@@ -910,6 +933,8 @@ void __init sam9x60_pm_init(void)
+ 
+ void __init at91sam9_pm_init(void)
+ {
++	int ret;
++
+ 	if (!IS_ENABLED(CONFIG_SOC_AT91SAM9))
+ 		return;
+ 
+@@ -921,7 +946,10 @@ void __init at91sam9_pm_init(void)
+ 	soc_pm.data.standby_mode = AT91_PM_STANDBY;
+ 	soc_pm.data.suspend_mode = AT91_PM_ULP0;
+ 
+-	at91_dt_ramc();
++	ret = at91_dt_ramc();
++	if (ret)
++		return;
++
+ 	at91_pm_init(at91sam9_idle);
+ }
+ 
+@@ -930,12 +958,16 @@ void __init sama5_pm_init(void)
+ 	static const int modes[] __initconst = {
+ 		AT91_PM_STANDBY, AT91_PM_ULP0, AT91_PM_ULP0_FAST,
+ 	};
++	int ret;
+ 
+ 	if (!IS_ENABLED(CONFIG_SOC_SAMA5))
+ 		return;
+ 
+ 	at91_pm_modes_validate(modes, ARRAY_SIZE(modes));
+-	at91_dt_ramc();
++	ret = at91_dt_ramc();
++	if (ret)
++		return;
++
+ 	at91_pm_init(NULL);
+ }
+ 
+@@ -945,13 +977,17 @@ void __init sama5d2_pm_init(void)
+ 		AT91_PM_STANDBY, AT91_PM_ULP0, AT91_PM_ULP0_FAST, AT91_PM_ULP1,
+ 		AT91_PM_BACKUP,
+ 	};
++	int ret;
+ 
+ 	if (!IS_ENABLED(CONFIG_SOC_SAMA5D2))
+ 		return;
+ 
+ 	at91_pm_modes_validate(modes, ARRAY_SIZE(modes));
+ 	at91_pm_modes_init();
+-	at91_dt_ramc();
++	ret = at91_dt_ramc();
++	if (ret)
++		return;
++
+ 	at91_pm_init(NULL);
+ 
+ 	soc_pm.ws_ids = sama5d2_ws_ids;
+diff --git a/arch/arm/mach-imx/pm-imx6.c b/arch/arm/mach-imx/pm-imx6.c
+index 40c74b4c4d730..e24409c1f5d39 100644
+--- a/arch/arm/mach-imx/pm-imx6.c
++++ b/arch/arm/mach-imx/pm-imx6.c
+@@ -9,6 +9,7 @@
+ #include <linux/io.h>
+ #include <linux/irq.h>
+ #include <linux/genalloc.h>
++#include <linux/irqchip/arm-gic.h>
+ #include <linux/mfd/syscon.h>
+ #include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>
+ #include <linux/of.h>
+@@ -618,6 +619,7 @@ static void __init imx6_pm_common_init(const struct imx6_pm_socdata
+ 
+ static void imx6_pm_stby_poweroff(void)
+ {
++	gic_cpu_if_down(0);
+ 	imx6_set_lpm(STOP_POWER_OFF);
+ 	imx6q_suspend_finish(0);
+ 
+diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
+index 83d595ebcf1f6..9443f129859b2 100644
+--- a/arch/arm/mach-omap2/omap_hwmod.c
++++ b/arch/arm/mach-omap2/omap_hwmod.c
+@@ -3618,6 +3618,8 @@ int omap_hwmod_init_module(struct device *dev,
+ 		oh->flags |= HWMOD_SWSUP_SIDLE_ACT;
+ 	if (data->cfg->quirks & SYSC_QUIRK_SWSUP_MSTANDBY)
+ 		oh->flags |= HWMOD_SWSUP_MSTANDBY;
++	if (data->cfg->quirks & SYSC_QUIRK_CLKDM_NOAUTO)
++		oh->flags |= HWMOD_CLKDM_NOAUTO;
+ 
+ 	error = omap_hwmod_check_module(dev, oh, data, sysc_fields,
+ 					rev_offs, sysc_offs, syss_offs,
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index ce8b043263521..1214e39aad5ec 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -36,6 +36,10 @@
+  *                        +-----+
+  *                        |RSVD | JIT scratchpad
+  * current ARM_SP =>      +-----+ <= (BPF_FP - STACK_SIZE + SCRATCH_SIZE)
++ *                        | ... | caller-saved registers
++ *                        +-----+
++ *                        | ... | arguments passed on stack
++ * ARM_SP during call =>  +-----|
+  *                        |     |
+  *                        | ... | Function call stack
+  *                        |     |
+@@ -63,6 +67,12 @@
+  *
+  * When popping registers off the stack at the end of a BPF function, we
+  * reference them via the current ARM_FP register.
++ *
++ * Some eBPF operations are implemented via a call to a helper function.
++ * Such calls are "invisible" in the eBPF code, so it is up to the calling
++ * program to preserve any caller-saved ARM registers during the call. The
++ * JIT emits code to push and pop those registers onto the stack, immediately
++ * above the callee stack frame.
+  */
+ #define CALLEE_MASK	(1 << ARM_R4 | 1 << ARM_R5 | 1 << ARM_R6 | \
+ 			 1 << ARM_R7 | 1 << ARM_R8 | 1 << ARM_R9 | \
+@@ -70,6 +80,8 @@
+ #define CALLEE_PUSH_MASK (CALLEE_MASK | 1 << ARM_LR)
+ #define CALLEE_POP_MASK  (CALLEE_MASK | 1 << ARM_PC)
+ 
++#define CALLER_MASK	(1 << ARM_R0 | 1 << ARM_R1 | 1 << ARM_R2 | 1 << ARM_R3)
++
+ enum {
+ 	/* Stack layout - these are offsets from (top of stack - 4) */
+ 	BPF_R2_HI,
+@@ -464,6 +476,7 @@ static inline int epilogue_offset(const struct jit_ctx *ctx)
+ 
+ static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op)
+ {
++	const int exclude_mask = BIT(ARM_R0) | BIT(ARM_R1);
+ 	const s8 *tmp = bpf2a32[TMP_REG_1];
+ 
+ #if __LINUX_ARM_ARCH__ == 7
+@@ -495,11 +508,17 @@ static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op)
+ 		emit(ARM_MOV_R(ARM_R0, rm), ctx);
+ 	}
+ 
++	/* Push caller-saved registers on stack */
++	emit(ARM_PUSH(CALLER_MASK & ~exclude_mask), ctx);
++
+ 	/* Call appropriate function */
+ 	emit_mov_i(ARM_IP, op == BPF_DIV ?
+ 		   (u32)jit_udiv32 : (u32)jit_mod32, ctx);
+ 	emit_blx_r(ARM_IP, ctx);
+ 
++	/* Restore caller-saved registers from stack */
++	emit(ARM_POP(CALLER_MASK & ~exclude_mask), ctx);
++
+ 	/* Save return value */
+ 	if (rd != ARM_R0)
+ 		emit(ARM_MOV_R(rd, ARM_R0), ctx);
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+index 5f42904d53ab6..580690057601c 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+@@ -386,6 +386,24 @@
+ 			status = "disabled";
+ 		};
+ 
++		can0: can@2180000 {
++			compatible = "fsl,ls1028ar1-flexcan", "fsl,lx2160ar1-flexcan";
++			reg = <0x0 0x2180000 0x0 0x10000>;
++			interrupts = <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>;
++			clocks = <&sysclk>, <&clockgen 4 1>;
++			clock-names = "ipg", "per";
++			status = "disabled";
++		};
++
++		can1: can@2190000 {
++			compatible = "fsl,ls1028ar1-flexcan", "fsl,lx2160ar1-flexcan";
++			reg = <0x0 0x2190000 0x0 0x10000>;
++			interrupts = <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>;
++			clocks = <&sysclk>, <&clockgen 4 1>;
++			clock-names = "ipg", "per";
++			status = "disabled";
++		};
++
+ 		duart0: serial@21c0500 {
+ 			compatible = "fsl,ns16550", "ns16550a";
+ 			reg = <0x00 0x21c0500 0x0 0x100>;
+diff --git a/arch/arm64/boot/dts/qcom/pm8150.dtsi b/arch/arm64/boot/dts/qcom/pm8150.dtsi
+index 1b6406927509f..82edcd74ce983 100644
+--- a/arch/arm64/boot/dts/qcom/pm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8150.dtsi
+@@ -48,7 +48,7 @@
+ 		#size-cells = <0>;
+ 
+ 		pon: power-on@800 {
+-			compatible = "qcom,pm8916-pon";
++			compatible = "qcom,pm8998-pon";
+ 			reg = <0x0800>;
+ 			pwrkey {
+ 				compatible = "qcom,pm8941-pwrkey";
+diff --git a/arch/powerpc/boot/dts/fsl/t1023rdb.dts b/arch/powerpc/boot/dts/fsl/t1023rdb.dts
+index 5ba6fbfca2742..f82f85c65964c 100644
+--- a/arch/powerpc/boot/dts/fsl/t1023rdb.dts
++++ b/arch/powerpc/boot/dts/fsl/t1023rdb.dts
+@@ -154,7 +154,7 @@
+ 
+ 			fm1mac3: ethernet@e4000 {
+ 				phy-handle = <&sgmii_aqr_phy3>;
+-				phy-connection-type = "sgmii-2500";
++				phy-connection-type = "2500base-x";
+ 				sleep = <&rcpm 0x20000000>;
+ 			};
+ 
+diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c
+index a1c7441940184..9ac0651795cf6 100644
+--- a/arch/powerpc/kernel/dma-iommu.c
++++ b/arch/powerpc/kernel/dma-iommu.c
+@@ -117,6 +117,15 @@ u64 dma_iommu_get_required_mask(struct device *dev)
+ 	struct iommu_table *tbl = get_iommu_table_base(dev);
+ 	u64 mask;
+ 
++	if (dev_is_pci(dev)) {
++		u64 bypass_mask = dma_direct_get_required_mask(dev);
++
++		if (dma_iommu_dma_supported(dev, bypass_mask)) {
++			dev_info(dev, "%s: returning bypass mask 0x%llx\n", __func__, bypass_mask);
++			return bypass_mask;
++		}
++	}
++
+ 	if (!tbl)
+ 		return 0;
+ 
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index 9d3b468bd2d7a..10df278dc3fbe 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -1715,27 +1715,30 @@ EXC_COMMON_BEGIN(program_check_common)
+ 	 */
+ 
+ 	andi.	r10,r12,MSR_PR
+-	bne	2f			/* If userspace, go normal path */
++	bne	.Lnormal_stack		/* If userspace, go normal path */
+ 
+ 	andis.	r10,r12,(SRR1_PROGTM)@h
+-	bne	1f			/* If TM, emergency		*/
++	bne	.Lemergency_stack	/* If TM, emergency		*/
+ 
+ 	cmpdi	r1,-INT_FRAME_SIZE	/* check if r1 is in userspace	*/
+-	blt	2f			/* normal path if not		*/
++	blt	.Lnormal_stack		/* normal path if not		*/
+ 
+ 	/* Use the emergency stack					*/
+-1:	andi.	r10,r12,MSR_PR		/* Set CR0 correctly for label	*/
++.Lemergency_stack:
++	andi.	r10,r12,MSR_PR		/* Set CR0 correctly for label	*/
+ 					/* 3 in EXCEPTION_PROLOG_COMMON	*/
+ 	mr	r10,r1			/* Save r1			*/
+ 	ld	r1,PACAEMERGSP(r13)	/* Use emergency stack		*/
+ 	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame		*/
+ 	__ISTACK(program_check)=0
+ 	__GEN_COMMON_BODY program_check
+-	b 3f
+-2:
++	b .Ldo_program_check
++
++.Lnormal_stack:
+ 	__ISTACK(program_check)=1
+ 	__GEN_COMMON_BODY program_check
+-3:
++
++.Ldo_program_check:
+ 	addi	r3,r1,STACK_FRAME_OVERHEAD
+ 	bl	program_check_exception
+ 	REST_NVGPRS(r1) /* instruction emulation may change GPRs */
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 658ca2bab13cc..0752967f351bb 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -347,18 +347,25 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
+ 			EMIT(PPC_RAW_SUB(dst_reg, dst_reg, src_reg));
+ 			goto bpf_alu32_trunc;
+ 		case BPF_ALU | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */
+-		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
+ 		case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
++			if (!imm) {
++				goto bpf_alu32_trunc;
++			} else if (imm >= -32768 && imm < 32768) {
++				EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
++			} else {
++				PPC_LI32(b2p[TMP_REG_1], imm);
++				EMIT(PPC_RAW_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]));
++			}
++			goto bpf_alu32_trunc;
++		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
+ 		case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
+-			if (BPF_OP(code) == BPF_SUB)
+-				imm = -imm;
+-			if (imm) {
+-				if (imm >= -32768 && imm < 32768)
+-					EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
+-				else {
+-					PPC_LI32(b2p[TMP_REG_1], imm);
+-					EMIT(PPC_RAW_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]));
+-				}
++			if (!imm) {
++				goto bpf_alu32_trunc;
++			} else if (imm > -32768 && imm <= 32768) {
++				EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(-imm)));
++			} else {
++				PPC_LI32(b2p[TMP_REG_1], imm);
++				EMIT(PPC_RAW_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]));
+ 			}
+ 			goto bpf_alu32_trunc;
+ 		case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
+diff --git a/arch/powerpc/platforms/pseries/eeh_pseries.c b/arch/powerpc/platforms/pseries/eeh_pseries.c
+index cf024fa37bda0..7ed38ebd0c7b6 100644
+--- a/arch/powerpc/platforms/pseries/eeh_pseries.c
++++ b/arch/powerpc/platforms/pseries/eeh_pseries.c
+@@ -868,6 +868,10 @@ static int __init eeh_pseries_init(void)
+ 	if (is_kdump_kernel() || reset_devices) {
+ 		pr_info("Issue PHB reset ...\n");
+ 		list_for_each_entry(phb, &hose_list, list_node) {
++			// Skip if the slot is empty
++			if (list_empty(&PCI_DN(phb->dn)->child_list))
++				continue;
++
+ 			pdn = list_first_entry(&PCI_DN(phb->dn)->child_list, struct pci_dn, list);
+ 			config_addr = pseries_eeh_get_pe_config_addr(pdn);
+ 
+diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
+index 4b989ae15d59f..8062996c2dfd0 100644
+--- a/arch/riscv/include/uapi/asm/unistd.h
++++ b/arch/riscv/include/uapi/asm/unistd.h
+@@ -18,9 +18,10 @@
+ #ifdef __LP64__
+ #define __ARCH_WANT_NEW_STAT
+ #define __ARCH_WANT_SET_GET_RLIMIT
+-#define __ARCH_WANT_SYS_CLONE3
+ #endif /* __LP64__ */
+ 
++#define __ARCH_WANT_SYS_CLONE3
++
+ #include <asm-generic/unistd.h>
+ 
+ /*
+diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
+index 3f1d35e7c98a6..73d45931a053a 100644
+--- a/arch/riscv/kernel/vdso.c
++++ b/arch/riscv/kernel/vdso.c
+@@ -65,7 +65,9 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
+ 
+ 	vdso_len = (vdso_pages + 1) << PAGE_SHIFT;
+ 
+-	mmap_write_lock(mm);
++	if (mmap_write_lock_killable(mm))
++		return -EINTR;
++
+ 	vdso_base = get_unmapped_area(NULL, 0, vdso_len, 0, 0);
+ 	if (IS_ERR_VALUE(vdso_base)) {
+ 		ret = vdso_base;
+diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
+index 094118663285d..89f81067e09ed 100644
+--- a/arch/riscv/mm/cacheflush.c
++++ b/arch/riscv/mm/cacheflush.c
+@@ -16,6 +16,8 @@ static void ipi_remote_fence_i(void *info)
+ 
+ void flush_icache_all(void)
+ {
++	local_flush_icache_all();
++
+ 	if (IS_ENABLED(CONFIG_RISCV_SBI))
+ 		sbi_remote_fence_i(NULL);
+ 	else
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 8d9047d2d1e11..cd0cbdafedbd2 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -1775,7 +1775,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
+ 	jit.addrs = kvcalloc(fp->len + 1, sizeof(*jit.addrs), GFP_KERNEL);
+ 	if (jit.addrs == NULL) {
+ 		fp = orig_fp;
+-		goto out;
++		goto free_addrs;
+ 	}
+ 	/*
+ 	 * Three initial passes:
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index f3c8a8110f60c..4201d0cf5f835 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1415,7 +1415,7 @@ config HIGHMEM4G
+ 
+ config HIGHMEM64G
+ 	bool "64GB"
+-	depends on !M486SX && !M486 && !M586 && !M586TSC && !M586MMX && !MGEODE_LX && !MGEODEGX1 && !MCYRIXIII && !MELAN && !MWINCHIPC6 && !WINCHIP3D && !MK6
++	depends on !M486SX && !M486 && !M586 && !M586TSC && !M586MMX && !MGEODE_LX && !MGEODEGX1 && !MCYRIXIII && !MELAN && !MWINCHIPC6 && !MWINCHIP3D && !MK6
+ 	select X86_PAE
+ 	help
+ 	  Select this if you have a 32-bit processor and more than 4
+diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
+index 6fe54b2813c13..4a382fb6a9ef8 100644
+--- a/arch/x86/include/asm/entry-common.h
++++ b/arch/x86/include/asm/entry-common.h
+@@ -24,7 +24,7 @@ static __always_inline void arch_check_user_regs(struct pt_regs *regs)
+ 		 * For !SMAP hardware we patch out CLAC on entry.
+ 		 */
+ 		if (boot_cpu_has(X86_FEATURE_SMAP) ||
+-		    (IS_ENABLED(CONFIG_64_BIT) && boot_cpu_has(X86_FEATURE_XENPV)))
++		    (IS_ENABLED(CONFIG_64BIT) && boot_cpu_has(X86_FEATURE_XENPV)))
+ 			mask |= X86_EFLAGS_AC;
+ 
+ 		WARN_ON_ONCE(flags & mask);
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 25148ebd36341..ec21f5e9ffd05 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -320,6 +320,7 @@ static __always_inline void setup_smap(struct cpuinfo_x86 *c)
+ #ifdef CONFIG_X86_SMAP
+ 		cr4_set_bits(X86_CR4_SMAP);
+ #else
++		clear_cpu_cap(c, X86_FEATURE_SMAP);
+ 		cr4_clear_bits(X86_CR4_SMAP);
+ #endif
+ 	}
+diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
+index a4b5af03dcc1b..0c6d1dc59fa21 100644
+--- a/arch/x86/kernel/early-quirks.c
++++ b/arch/x86/kernel/early-quirks.c
+@@ -711,12 +711,6 @@ static struct chipset early_qrk[] __initdata = {
+ 	 */
+ 	{ PCI_VENDOR_ID_INTEL, 0x0f00,
+ 		PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
+-	{ PCI_VENDOR_ID_INTEL, 0x3e20,
+-		PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
+-	{ PCI_VENDOR_ID_INTEL, 0x3ec4,
+-		PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
+-	{ PCI_VENDOR_ID_INTEL, 0x8a12,
+-		PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
+ 	{ PCI_VENDOR_ID_BROADCOM, 0x4331,
+ 	  PCI_CLASS_NETWORK_OTHER, PCI_ANY_ID, 0, apple_airport_reset},
+ 	{}
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 7a50f0b62a709..4ab7a9757e521 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -9,6 +9,7 @@
+ 
+ #include <asm/hpet.h>
+ #include <asm/time.h>
++#include <asm/mwait.h>
+ 
+ #undef  pr_fmt
+ #define pr_fmt(fmt) "hpet: " fmt
+@@ -806,6 +807,83 @@ static bool __init hpet_counting(void)
+ 	return false;
+ }
+ 
++static bool __init mwait_pc10_supported(void)
++{
++	unsigned int eax, ebx, ecx, mwait_substates;
++
++	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
++		return false;
++
++	if (!cpu_feature_enabled(X86_FEATURE_MWAIT))
++		return false;
++
++	if (boot_cpu_data.cpuid_level < CPUID_MWAIT_LEAF)
++		return false;
++
++	cpuid(CPUID_MWAIT_LEAF, &eax, &ebx, &ecx, &mwait_substates);
++
++	return (ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED) &&
++	       (ecx & CPUID5_ECX_INTERRUPT_BREAK) &&
++	       (mwait_substates & (0xF << 28));
++}
++
++/*
++ * Check whether the system supports PC10. If so force disable HPET as that
++ * stops counting in PC10. This check is overbroad as it does not take any
++ * of the following into account:
++ *
++ *	- ACPI tables
++ *	- Enablement of intel_idle
++ *	- Command line arguments which limit intel_idle C-state support
++ *
++ * That's perfectly fine. HPET is a piece of hardware designed by committee
++ * and the only reasons why it is still in use on modern systems is the
++ * fact that it is impossible to reliably query TSC and CPU frequency via
++ * CPUID or firmware.
++ *
++ * If HPET is functional it is useful for calibrating TSC, but this can be
++ * done via PMTIMER as well which seems to be the last remaining timer on
++ * X86/INTEL platforms that has not been completely wreckaged by feature
++ * creep.
++ *
++ * In theory HPET support should be removed altogether, but there are older
++ * systems out there which depend on it because TSC and APIC timer are
++ * dysfunctional in deeper C-states.
++ *
++ * It's only 20 years now that hardware people have been asked to provide
++ * reliable and discoverable facilities which can be used for timekeeping
++ * and per CPU timer interrupts.
++ *
++ * The probability that this problem is going to be solved in the
++ * forseeable future is close to zero, so the kernel has to be cluttered
++ * with heuristics to keep up with the ever growing amount of hardware and
++ * firmware trainwrecks. Hopefully some day hardware people will understand
++ * that the approach of "This can be fixed in software" is not sustainable.
++ * Hope dies last...
++ */
++static bool __init hpet_is_pc10_damaged(void)
++{
++	unsigned long long pcfg;
++
++	/* Check whether PC10 substates are supported */
++	if (!mwait_pc10_supported())
++		return false;
++
++	/* Check whether PC10 is enabled in PKG C-state limit */
++	rdmsrl(MSR_PKG_CST_CONFIG_CONTROL, pcfg);
++	if ((pcfg & 0xF) < 8)
++		return false;
++
++	if (hpet_force_user) {
++		pr_warn("HPET force enabled via command line, but dysfunctional in PC10.\n");
++		return false;
++	}
++
++	pr_info("HPET dysfunctional in PC10. Force disabled.\n");
++	boot_hpet_disable = true;
++	return true;
++}
++
+ /**
+  * hpet_enable - Try to setup the HPET timer. Returns 1 on success.
+  */
+@@ -819,6 +897,9 @@ int __init hpet_enable(void)
+ 	if (!is_hpet_capable())
+ 		return 0;
+ 
++	if (hpet_is_pc10_damaged())
++		return 0;
++
+ 	hpet_set_mapping();
+ 	if (!hpet_virt_address)
+ 		return 0;
+diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
+index ecb20b17b7df6..82db4014deb21 100644
+--- a/arch/x86/kernel/sev-es-shared.c
++++ b/arch/x86/kernel/sev-es-shared.c
+@@ -130,6 +130,8 @@ static enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
+ 		} else {
+ 			ret = ES_VMM_ERROR;
+ 		}
++	} else if (ghcb->save.sw_exit_info_1 & 0xffffffff) {
++		ret = ES_VMM_ERROR;
+ 	} else {
+ 		ret = ES_OK;
+ 	}
+diff --git a/arch/x86/platform/olpc/olpc.c b/arch/x86/platform/olpc/olpc.c
+index ee2beda590d0d..1d4a00e767ece 100644
+--- a/arch/x86/platform/olpc/olpc.c
++++ b/arch/x86/platform/olpc/olpc.c
+@@ -274,7 +274,7 @@ static struct olpc_ec_driver ec_xo1_driver = {
+ 
+ static struct olpc_ec_driver ec_xo1_5_driver = {
+ 	.ec_cmd = olpc_xo1_ec_cmd,
+-#ifdef CONFIG_OLPC_XO1_5_SCI
++#ifdef CONFIG_OLPC_XO15_SCI
+ 	/*
+ 	 * XO-1.5 EC wakeups are available when olpc-xo15-sci driver is
+ 	 * compiled in
+diff --git a/arch/xtensa/include/asm/kmem_layout.h b/arch/xtensa/include/asm/kmem_layout.h
+index 7cbf68ca71060..6fc05cba61a27 100644
+--- a/arch/xtensa/include/asm/kmem_layout.h
++++ b/arch/xtensa/include/asm/kmem_layout.h
+@@ -78,7 +78,7 @@
+ #endif
+ #define XCHAL_KIO_SIZE			0x10000000
+ 
+-#if (!XCHAL_HAVE_PTP_MMU || XCHAL_HAVE_SPANNING_WAY) && defined(CONFIG_OF)
++#if (!XCHAL_HAVE_PTP_MMU || XCHAL_HAVE_SPANNING_WAY) && defined(CONFIG_USE_OF)
+ #define XCHAL_KIO_PADDR			xtensa_get_kio_paddr()
+ #ifndef __ASSEMBLY__
+ extern unsigned long xtensa_kio_paddr;
+diff --git a/arch/xtensa/kernel/irq.c b/arch/xtensa/kernel/irq.c
+index a48bf2d10ac2d..80cc9770a8d2d 100644
+--- a/arch/xtensa/kernel/irq.c
++++ b/arch/xtensa/kernel/irq.c
+@@ -145,7 +145,7 @@ unsigned xtensa_get_ext_irq_no(unsigned irq)
+ 
+ void __init init_IRQ(void)
+ {
+-#ifdef CONFIG_OF
++#ifdef CONFIG_USE_OF
+ 	irqchip_init();
+ #else
+ #ifdef CONFIG_HAVE_SMP
+diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c
+index ed184106e4cf9..ee9082a142feb 100644
+--- a/arch/xtensa/kernel/setup.c
++++ b/arch/xtensa/kernel/setup.c
+@@ -63,7 +63,7 @@ extern unsigned long initrd_end;
+ extern int initrd_below_start_ok;
+ #endif
+ 
+-#ifdef CONFIG_OF
++#ifdef CONFIG_USE_OF
+ void *dtb_start = __dtb_start;
+ #endif
+ 
+@@ -125,7 +125,7 @@ __tagtable(BP_TAG_INITRD, parse_tag_initrd);
+ 
+ #endif /* CONFIG_BLK_DEV_INITRD */
+ 
+-#ifdef CONFIG_OF
++#ifdef CONFIG_USE_OF
+ 
+ static int __init parse_tag_fdt(const bp_tag_t *tag)
+ {
+@@ -135,7 +135,7 @@ static int __init parse_tag_fdt(const bp_tag_t *tag)
+ 
+ __tagtable(BP_TAG_FDT, parse_tag_fdt);
+ 
+-#endif /* CONFIG_OF */
++#endif /* CONFIG_USE_OF */
+ 
+ static int __init parse_tag_cmdline(const bp_tag_t* tag)
+ {
+@@ -183,7 +183,7 @@ static int __init parse_bootparam(const bp_tag_t *tag)
+ }
+ #endif
+ 
+-#ifdef CONFIG_OF
++#ifdef CONFIG_USE_OF
+ 
+ #if !XCHAL_HAVE_PTP_MMU || XCHAL_HAVE_SPANNING_WAY
+ unsigned long xtensa_kio_paddr = XCHAL_KIO_DEFAULT_PADDR;
+@@ -232,7 +232,7 @@ void __init early_init_devtree(void *params)
+ 		strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
+ }
+ 
+-#endif /* CONFIG_OF */
++#endif /* CONFIG_USE_OF */
+ 
+ /*
+  * Initialize architecture. (Early stage)
+@@ -253,7 +253,7 @@ void __init init_arch(bp_tag_t *bp_start)
+ 	if (bp_start)
+ 		parse_bootparam(bp_start);
+ 
+-#ifdef CONFIG_OF
++#ifdef CONFIG_USE_OF
+ 	early_init_devtree(dtb_start);
+ #endif
+ 
+diff --git a/arch/xtensa/mm/mmu.c b/arch/xtensa/mm/mmu.c
+index fd2193df8a145..511bb92518f28 100644
+--- a/arch/xtensa/mm/mmu.c
++++ b/arch/xtensa/mm/mmu.c
+@@ -100,7 +100,7 @@ void init_mmu(void)
+ 
+ void init_kio(void)
+ {
+-#if XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY && defined(CONFIG_OF)
++#if XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY && defined(CONFIG_USE_OF)
+ 	/*
+ 	 * Update the IO area mapping in case xtensa_kio_paddr has changed
+ 	 */
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 159b57c6dc4df..02341fd66e8d2 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -1464,6 +1464,9 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ 	/* Quirks that need to be set based on detected module */
+ 	SYSC_QUIRK("aess", 0, 0, 0x10, -ENODEV, 0x40000000, 0xffffffff,
+ 		   SYSC_MODULE_QUIRK_AESS),
++	/* Errata i893 handling for dra7 dcan1 and 2 */
++	SYSC_QUIRK("dcan", 0x4ae3c000, 0x20, -ENODEV, -ENODEV, 0xa3170504, 0xffffffff,
++		   SYSC_QUIRK_CLKDM_NOAUTO),
+ 	SYSC_QUIRK("dcan", 0x48480000, 0x20, -ENODEV, -ENODEV, 0xa3170504, 0xffffffff,
+ 		   SYSC_QUIRK_CLKDM_NOAUTO),
+ 	SYSC_QUIRK("dss", 0x4832a000, 0, 0x10, 0x14, 0x00000020, 0xffffffff,
+@@ -2922,6 +2925,7 @@ static int sysc_init_soc(struct sysc *ddata)
+ 			break;
+ 		case SOC_AM3:
+ 			sysc_add_disabled(0x48310000);  /* rng */
++			break;
+ 		default:
+ 			break;
+ 		};
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/crc.c b/drivers/gpu/drm/nouveau/dispnv50/crc.c
+index b8c31b697797e..66f32d965c723 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/crc.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/crc.c
+@@ -704,6 +704,7 @@ static const struct file_operations nv50_crc_flip_threshold_fops = {
+ 	.open = nv50_crc_debugfs_flip_threshold_open,
+ 	.read = seq_read,
+ 	.write = nv50_crc_debugfs_flip_threshold_set,
++	.release = single_release,
+ };
+ 
+ int nv50_head_crc_late_register(struct nv50_head *head)
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/head.c b/drivers/gpu/drm/nouveau/dispnv50/head.c
+index 61826cac3061a..be649d14f8797 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/head.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/head.c
+@@ -51,6 +51,7 @@ nv50_head_flush_clr(struct nv50_head *head,
+ void
+ nv50_head_flush_set_wndw(struct nv50_head *head, struct nv50_head_atom *asyh)
+ {
++	if (asyh->set.curs   ) head->func->curs_set(head, asyh);
+ 	if (asyh->set.olut   ) {
+ 		asyh->olut.offset = nv50_lut_load(&head->olut,
+ 						  asyh->olut.buffer,
+@@ -66,7 +67,6 @@ nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh)
+ 	if (asyh->set.view   ) head->func->view    (head, asyh);
+ 	if (asyh->set.mode   ) head->func->mode    (head, asyh);
+ 	if (asyh->set.core   ) head->func->core_set(head, asyh);
+-	if (asyh->set.curs   ) head->func->curs_set(head, asyh);
+ 	if (asyh->set.base   ) head->func->base    (head, asyh);
+ 	if (asyh->set.ovly   ) head->func->ovly    (head, asyh);
+ 	if (asyh->set.dither ) head->func->dither  (head, asyh);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+index c2bc05eb2e54a..1cbe01048b930 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
++++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+@@ -207,6 +207,7 @@ static const struct file_operations nouveau_pstate_fops = {
+ 	.open = nouveau_debugfs_pstate_open,
+ 	.read = seq_read,
+ 	.write = nouveau_debugfs_pstate_set,
++	.release = single_release,
+ };
+ 
+ static struct drm_info_list nouveau_debugfs_list[] = {
+diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
+index c2051380d18c0..6504ebec11901 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
+@@ -196,10 +196,8 @@ nouveau_gem_new(struct nouveau_cli *cli, u64 size, int align, uint32_t domain,
+ 	}
+ 
+ 	ret = nouveau_bo_init(nvbo, size, align, domain, NULL, NULL);
+-	if (ret) {
+-		nouveau_bo_ref(NULL, &nvbo);
++	if (ret)
+ 		return ret;
+-	}
+ 
+ 	/* we restrict allowed domains on nv50+ to only the types
+ 	 * that were requested at creation time.  not possibly on
+diff --git a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+index f75fb157f2ff7..016b877051dab 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
++++ b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+@@ -216,11 +216,13 @@ static int sun8i_dw_hdmi_bind(struct device *dev, struct device *master,
+ 		goto err_disable_clk_tmds;
+ 	}
+ 
++	ret = sun8i_hdmi_phy_init(hdmi->phy);
++	if (ret)
++		goto err_disable_clk_tmds;
++
+ 	drm_encoder_helper_add(encoder, &sun8i_dw_hdmi_encoder_helper_funcs);
+ 	drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS);
+ 
+-	sun8i_hdmi_phy_init(hdmi->phy);
+-
+ 	plat_data->mode_valid = hdmi->quirks->mode_valid;
+ 	plat_data->use_drm_infoframe = hdmi->quirks->use_drm_infoframe;
+ 	sun8i_hdmi_phy_set_ops(hdmi->phy, plat_data);
+@@ -262,6 +264,7 @@ static void sun8i_dw_hdmi_unbind(struct device *dev, struct device *master,
+ 	struct sun8i_dw_hdmi *hdmi = dev_get_drvdata(dev);
+ 
+ 	dw_hdmi_unbind(hdmi->hdmi);
++	sun8i_hdmi_phy_deinit(hdmi->phy);
+ 	clk_disable_unprepare(hdmi->clk_tmds);
+ 	reset_control_assert(hdmi->rst_ctrl);
+ 	gpiod_set_value(hdmi->ddc_en, 0);
+diff --git a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
+index 74f6ed0e25709..bffe1b9cd3dcb 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
++++ b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
+@@ -169,6 +169,7 @@ struct sun8i_hdmi_phy {
+ 	struct clk			*clk_phy;
+ 	struct clk			*clk_pll0;
+ 	struct clk			*clk_pll1;
++	struct device			*dev;
+ 	unsigned int			rcal;
+ 	struct regmap			*regs;
+ 	struct reset_control		*rst_phy;
+@@ -205,7 +206,8 @@ encoder_to_sun8i_dw_hdmi(struct drm_encoder *encoder)
+ 
+ int sun8i_hdmi_phy_get(struct sun8i_dw_hdmi *hdmi, struct device_node *node);
+ 
+-void sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy);
++int sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy);
++void sun8i_hdmi_phy_deinit(struct sun8i_hdmi_phy *phy);
+ void sun8i_hdmi_phy_set_ops(struct sun8i_hdmi_phy *phy,
+ 			    struct dw_hdmi_plat_data *plat_data);
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+index c9239708d398c..b64d93da651d2 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
++++ b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+@@ -506,9 +506,60 @@ static void sun8i_hdmi_phy_init_h3(struct sun8i_hdmi_phy *phy)
+ 	phy->rcal = (val & SUN8I_HDMI_PHY_ANA_STS_RCAL_MASK) >> 2;
+ }
+ 
+-void sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy)
++int sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy)
+ {
++	int ret;
++
++	ret = reset_control_deassert(phy->rst_phy);
++	if (ret) {
++		dev_err(phy->dev, "Cannot deassert phy reset control: %d\n", ret);
++		return ret;
++	}
++
++	ret = clk_prepare_enable(phy->clk_bus);
++	if (ret) {
++		dev_err(phy->dev, "Cannot enable bus clock: %d\n", ret);
++		goto err_assert_rst_phy;
++	}
++
++	ret = clk_prepare_enable(phy->clk_mod);
++	if (ret) {
++		dev_err(phy->dev, "Cannot enable mod clock: %d\n", ret);
++		goto err_disable_clk_bus;
++	}
++
++	if (phy->variant->has_phy_clk) {
++		ret = sun8i_phy_clk_create(phy, phy->dev,
++					   phy->variant->has_second_pll);
++		if (ret) {
++			dev_err(phy->dev, "Couldn't create the PHY clock\n");
++			goto err_disable_clk_mod;
++		}
++
++		clk_prepare_enable(phy->clk_phy);
++	}
++
+ 	phy->variant->phy_init(phy);
++
++	return 0;
++
++err_disable_clk_mod:
++	clk_disable_unprepare(phy->clk_mod);
++err_disable_clk_bus:
++	clk_disable_unprepare(phy->clk_bus);
++err_assert_rst_phy:
++	reset_control_assert(phy->rst_phy);
++
++	return ret;
++}
++
++void sun8i_hdmi_phy_deinit(struct sun8i_hdmi_phy *phy)
++{
++	clk_disable_unprepare(phy->clk_mod);
++	clk_disable_unprepare(phy->clk_bus);
++	clk_disable_unprepare(phy->clk_phy);
++
++	reset_control_assert(phy->rst_phy);
+ }
+ 
+ void sun8i_hdmi_phy_set_ops(struct sun8i_hdmi_phy *phy,
+@@ -638,6 +689,7 @@ static int sun8i_hdmi_phy_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	phy->variant = (struct sun8i_hdmi_phy_variant *)match->data;
++	phy->dev = dev;
+ 
+ 	ret = of_address_to_resource(node, 0, &res);
+ 	if (ret) {
+@@ -696,47 +748,10 @@ static int sun8i_hdmi_phy_probe(struct platform_device *pdev)
+ 		goto err_put_clk_pll1;
+ 	}
+ 
+-	ret = reset_control_deassert(phy->rst_phy);
+-	if (ret) {
+-		dev_err(dev, "Cannot deassert phy reset control: %d\n", ret);
+-		goto err_put_rst_phy;
+-	}
+-
+-	ret = clk_prepare_enable(phy->clk_bus);
+-	if (ret) {
+-		dev_err(dev, "Cannot enable bus clock: %d\n", ret);
+-		goto err_deassert_rst_phy;
+-	}
+-
+-	ret = clk_prepare_enable(phy->clk_mod);
+-	if (ret) {
+-		dev_err(dev, "Cannot enable mod clock: %d\n", ret);
+-		goto err_disable_clk_bus;
+-	}
+-
+-	if (phy->variant->has_phy_clk) {
+-		ret = sun8i_phy_clk_create(phy, dev,
+-					   phy->variant->has_second_pll);
+-		if (ret) {
+-			dev_err(dev, "Couldn't create the PHY clock\n");
+-			goto err_disable_clk_mod;
+-		}
+-
+-		clk_prepare_enable(phy->clk_phy);
+-	}
+-
+ 	platform_set_drvdata(pdev, phy);
+ 
+ 	return 0;
+ 
+-err_disable_clk_mod:
+-	clk_disable_unprepare(phy->clk_mod);
+-err_disable_clk_bus:
+-	clk_disable_unprepare(phy->clk_bus);
+-err_deassert_rst_phy:
+-	reset_control_assert(phy->rst_phy);
+-err_put_rst_phy:
+-	reset_control_put(phy->rst_phy);
+ err_put_clk_pll1:
+ 	clk_put(phy->clk_pll1);
+ err_put_clk_pll0:
+@@ -753,12 +768,6 @@ static int sun8i_hdmi_phy_remove(struct platform_device *pdev)
+ {
+ 	struct sun8i_hdmi_phy *phy = platform_get_drvdata(pdev);
+ 
+-	clk_disable_unprepare(phy->clk_mod);
+-	clk_disable_unprepare(phy->clk_bus);
+-	clk_disable_unprepare(phy->clk_phy);
+-
+-	reset_control_assert(phy->rst_phy);
+-
+ 	reset_control_put(phy->rst_phy);
+ 
+ 	clk_put(phy->clk_pll0);
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 1a5f1ccd1d2f7..0af2784cbd0d9 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -41,6 +41,8 @@
+ #define I2C_HANDSHAKE_RST		0x0020
+ #define I2C_FIFO_ADDR_CLR		0x0001
+ #define I2C_DELAY_LEN			0x0002
++#define I2C_ST_START_CON		0x8001
++#define I2C_FS_START_CON		0x1800
+ #define I2C_TIME_CLR_VALUE		0x0000
+ #define I2C_TIME_DEFAULT_VALUE		0x0003
+ #define I2C_WRRD_TRANAC_VALUE		0x0002
+@@ -479,6 +481,7 @@ static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
+ {
+ 	u16 control_reg;
+ 	u16 intr_stat_reg;
++	u16 ext_conf_val;
+ 
+ 	mtk_i2c_writew(i2c, I2C_CHN_CLR_FLAG, OFFSET_START);
+ 	intr_stat_reg = mtk_i2c_readw(i2c, OFFSET_INTR_STAT);
+@@ -517,8 +520,13 @@ static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
+ 	if (i2c->dev_comp->ltiming_adjust)
+ 		mtk_i2c_writew(i2c, i2c->ltiming_reg, OFFSET_LTIMING);
+ 
++	if (i2c->speed_hz <= I2C_MAX_STANDARD_MODE_FREQ)
++		ext_conf_val = I2C_ST_START_CON;
++	else
++		ext_conf_val = I2C_FS_START_CON;
++
+ 	if (i2c->dev_comp->timing_adjust) {
+-		mtk_i2c_writew(i2c, i2c->ac_timing.ext, OFFSET_EXT_CONF);
++		ext_conf_val = i2c->ac_timing.ext;
+ 		mtk_i2c_writew(i2c, i2c->ac_timing.inter_clk_div,
+ 			       OFFSET_CLOCK_DIV);
+ 		mtk_i2c_writew(i2c, I2C_SCL_MIS_COMP_VALUE,
+@@ -543,6 +551,7 @@ static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
+ 				       OFFSET_HS_STA_STO_AC_TIMING);
+ 		}
+ 	}
++	mtk_i2c_writew(i2c, ext_conf_val, OFFSET_EXT_CONF);
+ 
+ 	/* If use i2c pin from PMIC mt6397 side, need set PATH_DIR first */
+ 	if (i2c->have_pmic)
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index 37c510d9347a7..4b136d8710743 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -426,6 +426,7 @@ static int i2c_acpi_notify(struct notifier_block *nb, unsigned long value,
+ 			break;
+ 
+ 		i2c_acpi_register_device(adapter, adev, &info);
++		put_device(&adapter->dev);
+ 		break;
+ 	case ACPI_RECONFIG_DEVICE_REMOVE:
+ 		if (!acpi_device_enumerated(adev))
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index d3f40c9a8c6c8..b274083a6e635 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -735,7 +735,7 @@ static void meson_mmc_desc_chain_transfer(struct mmc_host *mmc, u32 cmd_cfg)
+ 	writel(start, host->regs + SD_EMMC_START);
+ }
+ 
+-/* local sg copy to buffer version with _to/fromio usage for dram_access_quirk */
++/* local sg copy for dram_access_quirk */
+ static void meson_mmc_copy_buffer(struct meson_host *host, struct mmc_data *data,
+ 				  size_t buflen, bool to_buffer)
+ {
+@@ -753,21 +753,27 @@ static void meson_mmc_copy_buffer(struct meson_host *host, struct mmc_data *data
+ 	sg_miter_start(&miter, sgl, nents, sg_flags);
+ 
+ 	while ((offset < buflen) && sg_miter_next(&miter)) {
+-		unsigned int len;
++		unsigned int buf_offset = 0;
++		unsigned int len, left;
++		u32 *buf = miter.addr;
+ 
+ 		len = min(miter.length, buflen - offset);
++		left = len;
+ 
+-		/* When dram_access_quirk, the bounce buffer is a iomem mapping */
+-		if (host->dram_access_quirk) {
+-			if (to_buffer)
+-				memcpy_toio(host->bounce_iomem_buf + offset, miter.addr, len);
+-			else
+-				memcpy_fromio(miter.addr, host->bounce_iomem_buf + offset, len);
++		if (to_buffer) {
++			do {
++				writel(*buf++, host->bounce_iomem_buf + offset + buf_offset);
++
++				buf_offset += 4;
++				left -= 4;
++			} while (left);
+ 		} else {
+-			if (to_buffer)
+-				memcpy(host->bounce_buf + offset, miter.addr, len);
+-			else
+-				memcpy(miter.addr, host->bounce_buf + offset, len);
++			do {
++				*buf++ = readl(host->bounce_iomem_buf + offset + buf_offset);
++
++				buf_offset += 4;
++				left -= 4;
++			} while (left);
+ 		}
+ 
+ 		offset += len;
+@@ -819,7 +825,11 @@ static void meson_mmc_start_cmd(struct mmc_host *mmc, struct mmc_command *cmd)
+ 		if (data->flags & MMC_DATA_WRITE) {
+ 			cmd_cfg |= CMD_CFG_DATA_WR;
+ 			WARN_ON(xfer_bytes > host->bounce_buf_size);
+-			meson_mmc_copy_buffer(host, data, xfer_bytes, true);
++			if (host->dram_access_quirk)
++				meson_mmc_copy_buffer(host, data, xfer_bytes, true);
++			else
++				sg_copy_to_buffer(data->sg, data->sg_len,
++						  host->bounce_buf, xfer_bytes);
+ 			dma_wmb();
+ 		}
+ 
+@@ -838,12 +848,43 @@ static void meson_mmc_start_cmd(struct mmc_host *mmc, struct mmc_command *cmd)
+ 	writel(cmd->arg, host->regs + SD_EMMC_CMD_ARG);
+ }
+ 
++static int meson_mmc_validate_dram_access(struct mmc_host *mmc, struct mmc_data *data)
++{
++	struct scatterlist *sg;
++	int i;
++
++	/* Reject request if any element offset or size is not 32bit aligned */
++	for_each_sg(data->sg, sg, data->sg_len, i) {
++		if (!IS_ALIGNED(sg->offset, sizeof(u32)) ||
++		    !IS_ALIGNED(sg->length, sizeof(u32))) {
++			dev_err(mmc_dev(mmc), "unaligned sg offset %u len %u\n",
++				data->sg->offset, data->sg->length);
++			return -EINVAL;
++		}
++	}
++
++	return 0;
++}
++
+ static void meson_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ {
+ 	struct meson_host *host = mmc_priv(mmc);
+ 	bool needs_pre_post_req = mrq->data &&
+ 			!(mrq->data->host_cookie & SD_EMMC_PRE_REQ_DONE);
+ 
++	/*
++	 * The memory at the end of the controller used as bounce buffer for
++	 * the dram_access_quirk only accepts 32bit read/write access,
++	 * check the aligment and length of the data before starting the request.
++	 */
++	if (host->dram_access_quirk && mrq->data) {
++		mrq->cmd->error = meson_mmc_validate_dram_access(mmc, mrq->data);
++		if (mrq->cmd->error) {
++			mmc_request_done(mmc, mrq);
++			return;
++		}
++	}
++
+ 	if (needs_pre_post_req) {
+ 		meson_mmc_get_transfer_mode(mmc, mrq);
+ 		if (!meson_mmc_desc_chain_mode(mrq->data))
+@@ -988,7 +1029,11 @@ static irqreturn_t meson_mmc_irq_thread(int irq, void *dev_id)
+ 	if (meson_mmc_bounce_buf_read(data)) {
+ 		xfer_bytes = data->blksz * data->blocks;
+ 		WARN_ON(xfer_bytes > host->bounce_buf_size);
+-		meson_mmc_copy_buffer(host, data, xfer_bytes, false);
++		if (host->dram_access_quirk)
++			meson_mmc_copy_buffer(host, data, xfer_bytes, false);
++		else
++			sg_copy_from_buffer(data->sg, data->sg_len,
++					    host->bounce_buf, xfer_bytes);
+ 	}
+ 
+ 	next_cmd = meson_mmc_get_next_command(cmd);
+diff --git a/drivers/mmc/host/sdhci-of-at91.c b/drivers/mmc/host/sdhci-of-at91.c
+index 5564d7b23e7cd..d1a1c548c515f 100644
+--- a/drivers/mmc/host/sdhci-of-at91.c
++++ b/drivers/mmc/host/sdhci-of-at91.c
+@@ -11,6 +11,7 @@
+ #include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/kernel.h>
+ #include <linux/mmc/host.h>
+ #include <linux/mmc/slot-gpio.h>
+@@ -61,7 +62,6 @@ static void sdhci_at91_set_force_card_detect(struct sdhci_host *host)
+ static void sdhci_at91_set_clock(struct sdhci_host *host, unsigned int clock)
+ {
+ 	u16 clk;
+-	unsigned long timeout;
+ 
+ 	host->mmc->actual_clock = 0;
+ 
+@@ -86,16 +86,11 @@ static void sdhci_at91_set_clock(struct sdhci_host *host, unsigned int clock)
+ 	sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
+ 
+ 	/* Wait max 20 ms */
+-	timeout = 20;
+-	while (!((clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL))
+-		& SDHCI_CLOCK_INT_STABLE)) {
+-		if (timeout == 0) {
+-			pr_err("%s: Internal clock never stabilised.\n",
+-			       mmc_hostname(host->mmc));
+-			return;
+-		}
+-		timeout--;
+-		mdelay(1);
++	if (read_poll_timeout(sdhci_readw, clk, (clk & SDHCI_CLOCK_INT_STABLE),
++			      1000, 20000, false, host, SDHCI_CLOCK_CONTROL)) {
++		pr_err("%s: Internal clock never stabilised.\n",
++		       mmc_hostname(host->mmc));
++		return;
+ 	}
+ 
+ 	clk |= SDHCI_CLOCK_CARD_EN;
+@@ -114,6 +109,7 @@ static void sdhci_at91_reset(struct sdhci_host *host, u8 mask)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_at91_priv *priv = sdhci_pltfm_priv(pltfm_host);
++	unsigned int tmp;
+ 
+ 	sdhci_reset(host, mask);
+ 
+@@ -126,6 +122,10 @@ static void sdhci_at91_reset(struct sdhci_host *host, u8 mask)
+ 
+ 		sdhci_writel(host, calcr | SDMMC_CALCR_ALWYSON | SDMMC_CALCR_EN,
+ 			     SDMMC_CALCR);
++
++		if (read_poll_timeout(sdhci_readl, tmp, !(tmp & SDMMC_CALCR_EN),
++				      10, 20000, false, host, SDMMC_CALCR))
++			dev_err(mmc_dev(host->mmc), "Failed to calibrate\n");
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
+index f5c80229ea966..cfb174624d4ee 100644
+--- a/drivers/net/ethernet/google/gve/gve.h
++++ b/drivers/net/ethernet/google/gve/gve.h
+@@ -472,7 +472,7 @@ struct gve_queue_page_list *gve_assign_rx_qpl(struct gve_priv *priv)
+ 				    gve_num_tx_qpls(priv));
+ 
+ 	/* we are out of rx qpls */
+-	if (id == priv->qpl_cfg.qpl_map_size)
++	if (id == gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv))
+ 		return NULL;
+ 
+ 	set_bit(id, priv->qpl_cfg.qpl_id_map);
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 0b714b606ba19..fd52218f48846 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -30,6 +30,7 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
+ {
+ 	struct gve_priv *priv = netdev_priv(dev);
+ 	unsigned int start;
++	u64 packets, bytes;
+ 	int ring;
+ 
+ 	if (priv->rx) {
+@@ -37,10 +38,12 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
+ 			do {
+ 				start =
+ 				  u64_stats_fetch_begin(&priv->rx[ring].statss);
+-				s->rx_packets += priv->rx[ring].rpackets;
+-				s->rx_bytes += priv->rx[ring].rbytes;
++				packets = priv->rx[ring].rpackets;
++				bytes = priv->rx[ring].rbytes;
+ 			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+ 						       start));
++			s->rx_packets += packets;
++			s->rx_bytes += bytes;
+ 		}
+ 	}
+ 	if (priv->tx) {
+@@ -48,10 +51,12 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
+ 			do {
+ 				start =
+ 				  u64_stats_fetch_begin(&priv->tx[ring].statss);
+-				s->tx_packets += priv->tx[ring].pkt_done;
+-				s->tx_bytes += priv->tx[ring].bytes_done;
++				packets = priv->tx[ring].pkt_done;
++				bytes = priv->tx[ring].bytes_done;
+ 			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
+ 						       start));
++			s->tx_packets += packets;
++			s->tx_bytes += bytes;
+ 		}
+ 	}
+ }
+@@ -71,6 +76,9 @@ static int gve_alloc_counter_array(struct gve_priv *priv)
+ 
+ static void gve_free_counter_array(struct gve_priv *priv)
+ {
++	if (!priv->counter_array)
++		return;
++
+ 	dma_free_coherent(&priv->pdev->dev,
+ 			  priv->num_event_counters *
+ 			  sizeof(*priv->counter_array),
+@@ -131,6 +139,9 @@ static int gve_alloc_stats_report(struct gve_priv *priv)
+ 
+ static void gve_free_stats_report(struct gve_priv *priv)
+ {
++	if (!priv->stats_report)
++		return;
++
+ 	del_timer_sync(&priv->stats_report_timer);
+ 	dma_free_coherent(&priv->pdev->dev, priv->stats_report_len,
+ 			  priv->stats_report, priv->stats_report_bus);
+@@ -301,18 +312,19 @@ static void gve_free_notify_blocks(struct gve_priv *priv)
+ {
+ 	int i;
+ 
+-	if (priv->msix_vectors) {
+-		/* Free the irqs */
+-		for (i = 0; i < priv->num_ntfy_blks; i++) {
+-			struct gve_notify_block *block = &priv->ntfy_blocks[i];
+-			int msix_idx = i;
++	if (!priv->msix_vectors)
++		return;
+ 
+-			irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
+-					      NULL);
+-			free_irq(priv->msix_vectors[msix_idx].vector, block);
+-		}
+-		free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
++	/* Free the irqs */
++	for (i = 0; i < priv->num_ntfy_blks; i++) {
++		struct gve_notify_block *block = &priv->ntfy_blocks[i];
++		int msix_idx = i;
++
++		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
++				      NULL);
++		free_irq(priv->msix_vectors[msix_idx].vector, block);
+ 	}
++	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
+ 	dma_free_coherent(&priv->pdev->dev,
+ 			  priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks),
+ 			  priv->ntfy_blocks, priv->ntfy_block_bus);
+@@ -975,9 +987,10 @@ static void gve_handle_reset(struct gve_priv *priv)
+ 
+ void gve_handle_report_stats(struct gve_priv *priv)
+ {
+-	int idx, stats_idx = 0, tx_bytes;
+-	unsigned int start = 0;
+ 	struct stats *stats = priv->stats_report->stats;
++	int idx, stats_idx = 0;
++	unsigned int start = 0;
++	u64 tx_bytes;
+ 
+ 	if (!gve_get_report_stats(priv))
+ 		return;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index bc648ce0743c7..52c2d6fdeb7a0 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -4839,7 +4839,8 @@ static void i40e_clear_interrupt_scheme(struct i40e_pf *pf)
+ {
+ 	int i;
+ 
+-	i40e_free_misc_vector(pf);
++	if (test_bit(__I40E_MISC_IRQ_REQUESTED, pf->state))
++		i40e_free_misc_vector(pf);
+ 
+ 	i40e_put_lump(pf->irq_pile, pf->iwarp_base_vector,
+ 		      I40E_IWARP_IRQ_PILE_ID);
+@@ -9662,7 +9663,7 @@ static int i40e_get_capabilities(struct i40e_pf *pf,
+ 		if (pf->hw.aq.asq_last_status == I40E_AQ_RC_ENOMEM) {
+ 			/* retry with a larger buffer */
+ 			buf_len = data_size;
+-		} else if (pf->hw.aq.asq_last_status != I40E_AQ_RC_OK) {
++		} else if (pf->hw.aq.asq_last_status != I40E_AQ_RC_OK || err) {
+ 			dev_info(&pf->pdev->dev,
+ 				 "capability discovery failed, err %s aq_err %s\n",
+ 				 i40e_stat_str(&pf->hw, err),
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index f327b78261ec4..117a593414537 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -999,14 +999,9 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
+ 		goto csum_unnecessary;
+ 
+ 	if (likely(is_last_ethertype_ip(skb, &network_depth, &proto))) {
+-		u8 ipproto = get_ip_proto(skb, network_depth, proto);
+-
+-		if (unlikely(ipproto == IPPROTO_SCTP))
++		if (unlikely(get_ip_proto(skb, network_depth, proto) == IPPROTO_SCTP))
+ 			goto csum_unnecessary;
+ 
+-		if (unlikely(mlx5_ipsec_is_rx_flow(cqe)))
+-			goto csum_none;
+-
+ 		stats->csum_complete++;
+ 		skb->ip_summed = CHECKSUM_COMPLETE;
+ 		skb->csum = csum_unfold((__force __sum16)cqe->check_sum);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
+index 3e19b1721303f..b00c7d47833f3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
+@@ -79,12 +79,16 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
+ 	int dest_num = 0;
+ 	int err = 0;
+ 
+-	if (MLX5_CAP_ESW_EGRESS_ACL(esw->dev, flow_counter)) {
++	if (vport->egress.legacy.drop_counter) {
++		drop_counter = vport->egress.legacy.drop_counter;
++	} else if (MLX5_CAP_ESW_EGRESS_ACL(esw->dev, flow_counter)) {
+ 		drop_counter = mlx5_fc_create(esw->dev, false);
+-		if (IS_ERR(drop_counter))
++		if (IS_ERR(drop_counter)) {
+ 			esw_warn(esw->dev,
+ 				 "vport[%d] configure egress drop rule counter err(%ld)\n",
+ 				 vport->vport, PTR_ERR(drop_counter));
++			drop_counter = NULL;
++		}
+ 		vport->egress.legacy.drop_counter = drop_counter;
+ 	}
+ 
+@@ -123,7 +127,7 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
+ 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
+ 
+ 	/* Attach egress drop flow counter */
+-	if (!IS_ERR_OR_NULL(drop_counter)) {
++	if (drop_counter) {
+ 		flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
+ 		drop_ctr_dst.type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
+ 		drop_ctr_dst.counter_id = mlx5_fc_id(drop_counter);
+@@ -162,7 +166,7 @@ void esw_acl_egress_lgcy_cleanup(struct mlx5_eswitch *esw,
+ 	esw_acl_egress_table_destroy(vport);
+ 
+ clean_drop_counter:
+-	if (!IS_ERR_OR_NULL(vport->egress.legacy.drop_counter)) {
++	if (vport->egress.legacy.drop_counter) {
+ 		mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
+ 		vport->egress.legacy.drop_counter = NULL;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
+index d64fad2823e73..45570d0a58d2f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
+@@ -160,7 +160,9 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
+ 
+ 	esw_acl_ingress_lgcy_rules_destroy(vport);
+ 
+-	if (MLX5_CAP_ESW_INGRESS_ACL(esw->dev, flow_counter)) {
++	if (vport->ingress.legacy.drop_counter) {
++		counter = vport->ingress.legacy.drop_counter;
++	} else if (MLX5_CAP_ESW_INGRESS_ACL(esw->dev, flow_counter)) {
+ 		counter = mlx5_fc_create(esw->dev, false);
+ 		if (IS_ERR(counter)) {
+ 			esw_warn(esw->dev,
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index b848439fa837c..2645ca35103c9 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -534,6 +534,13 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
+ 	bus->dev.groups = NULL;
+ 	dev_set_name(&bus->dev, "%s", bus->id);
+ 
++	/* We need to set state to MDIOBUS_UNREGISTERED to correctly release
++	 * the device in mdiobus_free()
++	 *
++	 * State will be updated later in this function in case of success
++	 */
++	bus->state = MDIOBUS_UNREGISTERED;
++
+ 	err = device_register(&bus->dev);
+ 	if (err) {
+ 		pr_err("mii_bus %s failed to register\n", bus->id);
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 2fff62695455d..32c34c728c7a1 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -133,7 +133,7 @@ static const char * const sm_state_strings[] = {
+ 	[SFP_S_LINK_UP] = "link_up",
+ 	[SFP_S_TX_FAULT] = "tx_fault",
+ 	[SFP_S_REINIT] = "reinit",
+-	[SFP_S_TX_DISABLE] = "rx_disable",
++	[SFP_S_TX_DISABLE] = "tx_disable",
+ };
+ 
+ static const char *sm_state_to_str(unsigned short sm_state)
+diff --git a/drivers/net/wireless/ath/ath5k/Kconfig b/drivers/net/wireless/ath/ath5k/Kconfig
+index f35cd8de228e4..6914b37bb0fbc 100644
+--- a/drivers/net/wireless/ath/ath5k/Kconfig
++++ b/drivers/net/wireless/ath/ath5k/Kconfig
+@@ -3,9 +3,7 @@ config ATH5K
+ 	tristate "Atheros 5xxx wireless cards support"
+ 	depends on (PCI || ATH25) && MAC80211
+ 	select ATH_COMMON
+-	select MAC80211_LEDS
+-	select LEDS_CLASS
+-	select NEW_LEDS
++	select MAC80211_LEDS if LEDS_CLASS=y || LEDS_CLASS=MAC80211
+ 	select ATH5K_AHB if ATH25
+ 	select ATH5K_PCI if !ATH25
+ 	help
+diff --git a/drivers/net/wireless/ath/ath5k/led.c b/drivers/net/wireless/ath/ath5k/led.c
+index 6a2a168567630..33e9928af3635 100644
+--- a/drivers/net/wireless/ath/ath5k/led.c
++++ b/drivers/net/wireless/ath/ath5k/led.c
+@@ -89,7 +89,8 @@ static const struct pci_device_id ath5k_led_devices[] = {
+ 
+ void ath5k_led_enable(struct ath5k_hw *ah)
+ {
+-	if (test_bit(ATH_STAT_LEDSOFT, ah->status)) {
++	if (IS_ENABLED(CONFIG_MAC80211_LEDS) &&
++	    test_bit(ATH_STAT_LEDSOFT, ah->status)) {
+ 		ath5k_hw_set_gpio_output(ah, ah->led_pin);
+ 		ath5k_led_off(ah);
+ 	}
+@@ -104,7 +105,8 @@ static void ath5k_led_on(struct ath5k_hw *ah)
+ 
+ void ath5k_led_off(struct ath5k_hw *ah)
+ {
+-	if (!test_bit(ATH_STAT_LEDSOFT, ah->status))
++	if (!IS_ENABLED(CONFIG_MAC80211_LEDS) ||
++	    !test_bit(ATH_STAT_LEDSOFT, ah->status))
+ 		return;
+ 	ath5k_hw_set_gpio(ah, ah->led_pin, !ah->led_on);
+ }
+@@ -146,7 +148,7 @@ ath5k_register_led(struct ath5k_hw *ah, struct ath5k_led *led,
+ static void
+ ath5k_unregister_led(struct ath5k_led *led)
+ {
+-	if (!led->ah)
++	if (!IS_ENABLED(CONFIG_MAC80211_LEDS) || !led->ah)
+ 		return;
+ 	led_classdev_unregister(&led->led_dev);
+ 	ath5k_led_off(led->ah);
+@@ -169,7 +171,7 @@ int ath5k_init_leds(struct ath5k_hw *ah)
+ 	char name[ATH5K_LED_MAX_NAME_LEN + 1];
+ 	const struct pci_device_id *match;
+ 
+-	if (!ah->pdev)
++	if (!IS_ENABLED(CONFIG_MAC80211_LEDS) || !ah->pdev)
+ 		return 0;
+ 
+ #ifdef CONFIG_ATH5K_AHB
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 90b12e201795c..4e43efd5d1ea1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -635,6 +635,8 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 	IWL_DEV_INFO(0x43F0, 0x0074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0x43F0, 0x0078, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0x43F0, 0x007C, iwl_ax201_cfg_qu_hr, NULL),
++	IWL_DEV_INFO(0x43F0, 0x1651, killer1650s_2ax_cfg_qu_b0_hr_b0, iwl_ax201_killer_1650s_name),
++	IWL_DEV_INFO(0x43F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, iwl_ax201_killer_1650i_name),
+ 	IWL_DEV_INFO(0x43F0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0x43F0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0xA0F0, 0x0070, iwl_ax201_cfg_qu_hr, NULL),
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 44e15f0e3a2ed..ad3e3cde1c20d 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -3259,9 +3259,17 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs)
+ 		return 0;
+ 
+ 	if (!keep_devs) {
+-		/* Delete any children which might still exist. */
++		struct list_head removed;
++
++		/* Move all present children to the list on stack */
++		INIT_LIST_HEAD(&removed);
+ 		spin_lock_irqsave(&hbus->device_list_lock, flags);
+-		list_for_each_entry_safe(hpdev, tmp, &hbus->children, list_entry) {
++		list_for_each_entry_safe(hpdev, tmp, &hbus->children, list_entry)
++			list_move_tail(&hpdev->list_entry, &removed);
++		spin_unlock_irqrestore(&hbus->device_list_lock, flags);
++
++		/* Remove all children in the list */
++		list_for_each_entry_safe(hpdev, tmp, &removed, list_entry) {
+ 			list_del(&hpdev->list_entry);
+ 			if (hpdev->pci_slot)
+ 				pci_destroy_slot(hpdev->pci_slot);
+@@ -3269,7 +3277,6 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs)
+ 			put_pcichild(hpdev);
+ 			put_pcichild(hpdev);
+ 		}
+-		spin_unlock_irqrestore(&hbus->device_list_lock, flags);
+ 	}
+ 
+ 	ret = hv_send_resources_released(hdev);
+diff --git a/drivers/ptp/ptp_pch.c b/drivers/ptp/ptp_pch.c
+index ce10ecd41ba0f..9492ed09518ff 100644
+--- a/drivers/ptp/ptp_pch.c
++++ b/drivers/ptp/ptp_pch.c
+@@ -651,6 +651,7 @@ static const struct pci_device_id pch_ieee1588_pcidev_id[] = {
+ 	 },
+ 	{0}
+ };
++MODULE_DEVICE_TABLE(pci, pch_ieee1588_pcidev_id);
+ 
+ static SIMPLE_DEV_PM_OPS(pch_pm_ops, pch_suspend, pch_resume);
+ 
+diff --git a/drivers/soc/qcom/mdt_loader.c b/drivers/soc/qcom/mdt_loader.c
+index eba7f76f9d61a..6034cd8992b0e 100644
+--- a/drivers/soc/qcom/mdt_loader.c
++++ b/drivers/soc/qcom/mdt_loader.c
+@@ -98,7 +98,7 @@ void *qcom_mdt_read_metadata(const struct firmware *fw, size_t *data_len)
+ 	if (ehdr->e_phnum < 2)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	if (phdrs[0].p_type == PT_LOAD || phdrs[1].p_type == PT_LOAD)
++	if (phdrs[0].p_type == PT_LOAD)
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	if ((phdrs[1].p_flags & QCOM_MDT_TYPE_MASK) != QCOM_MDT_TYPE_HASH)
+diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
+index e0620416e5743..60c82dcaa8d1d 100644
+--- a/drivers/soc/qcom/socinfo.c
++++ b/drivers/soc/qcom/socinfo.c
+@@ -521,7 +521,7 @@ static int qcom_socinfo_probe(struct platform_device *pdev)
+ 	/* Feed the soc specific unique data into entropy pool */
+ 	add_device_randomness(info, item_size);
+ 
+-	platform_set_drvdata(pdev, qs->soc_dev);
++	platform_set_drvdata(pdev, qs);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/soc/ti/omap_prm.c b/drivers/soc/ti/omap_prm.c
+index fb067b5e4a977..4a782bfd753c3 100644
+--- a/drivers/soc/ti/omap_prm.c
++++ b/drivers/soc/ti/omap_prm.c
+@@ -509,25 +509,28 @@ static int omap_reset_deassert(struct reset_controller_dev *rcdev,
+ 	writel_relaxed(v, reset->prm->base + reset->prm->data->rstctrl);
+ 	spin_unlock_irqrestore(&reset->lock, flags);
+ 
+-	if (!has_rstst)
+-		goto exit;
++	/* wait for the reset bit to clear */
++	ret = readl_relaxed_poll_timeout_atomic(reset->prm->base +
++						reset->prm->data->rstctrl,
++						v, !(v & BIT(id)), 1,
++						OMAP_RESET_MAX_WAIT);
++	if (ret)
++		pr_err("%s: timedout waiting for %s:%lu\n", __func__,
++		       reset->prm->data->name, id);
+ 
+ 	/* wait for the status to be set */
+-	ret = readl_relaxed_poll_timeout_atomic(reset->prm->base +
++	if (has_rstst) {
++		ret = readl_relaxed_poll_timeout_atomic(reset->prm->base +
+ 						 reset->prm->data->rstst,
+ 						 v, v & BIT(st_bit), 1,
+ 						 OMAP_RESET_MAX_WAIT);
+-	if (ret)
+-		pr_err("%s: timedout waiting for %s:%lu\n", __func__,
+-		       reset->prm->data->name, id);
++		if (ret)
++			pr_err("%s: timedout waiting for %s:%lu\n", __func__,
++			       reset->prm->data->name, id);
++	}
+ 
+-exit:
+-	if (reset->clkdm) {
+-		/* At least dra7 iva needs a delay before clkdm idle */
+-		if (has_rstst)
+-			udelay(1);
++	if (reset->clkdm)
+ 		pdata->clkdm_allow_idle(reset->clkdm);
+-	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index ee565bdb44d65..b4c6527fe5f66 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -425,11 +425,16 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ 	data->phy = devm_usb_get_phy_by_phandle(dev, "fsl,usbphy", 0);
+ 	if (IS_ERR(data->phy)) {
+ 		ret = PTR_ERR(data->phy);
+-		/* Return -EINVAL if no usbphy is available */
+-		if (ret == -ENODEV)
+-			data->phy = NULL;
+-		else
+-			goto err_clk;
++		if (ret == -ENODEV) {
++			data->phy = devm_usb_get_phy_by_phandle(dev, "phys", 0);
++			if (IS_ERR(data->phy)) {
++				ret = PTR_ERR(data->phy);
++				if (ret == -ENODEV)
++					data->phy = NULL;
++				else
++					goto err_clk;
++			}
++		}
+ 	}
+ 
+ 	pdata.usb_phy = data->phy;
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 7748b1335558e..7950d5b3af429 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -340,6 +340,9 @@ static void acm_process_notification(struct acm *acm, unsigned char *buf)
+ 			acm->iocount.overrun++;
+ 		spin_unlock_irqrestore(&acm->read_lock, flags);
+ 
++		if (newctrl & ACM_CTRL_BRK)
++			tty_flip_buffer_push(&acm->port);
++
+ 		if (difference)
+ 			wake_up_all(&acm->wioctl);
+ 
+@@ -475,11 +478,16 @@ static int acm_submit_read_urbs(struct acm *acm, gfp_t mem_flags)
+ 
+ static void acm_process_read_urb(struct acm *acm, struct urb *urb)
+ {
++	unsigned long flags;
++
+ 	if (!urb->actual_length)
+ 		return;
+ 
++	spin_lock_irqsave(&acm->read_lock, flags);
+ 	tty_insert_flip_string(&acm->port, urb->transfer_buffer,
+ 			urb->actual_length);
++	spin_unlock_irqrestore(&acm->read_lock, flags);
++
+ 	tty_flip_buffer_push(&acm->port);
+ }
+ 
+diff --git a/drivers/usb/common/Kconfig b/drivers/usb/common/Kconfig
+index 5e8a04e3dd3c8..b856622431a73 100644
+--- a/drivers/usb/common/Kconfig
++++ b/drivers/usb/common/Kconfig
+@@ -6,8 +6,7 @@ config USB_COMMON
+ 
+ config USB_LED_TRIG
+ 	bool "USB LED Triggers"
+-	depends on LEDS_CLASS && LEDS_TRIGGERS
+-	select USB_COMMON
++	depends on LEDS_CLASS && USB_COMMON && LEDS_TRIGGERS
+ 	help
+ 	  This option adds LED triggers for USB host and/or gadget activity.
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 0b08dd3b19eb0..291d020427924 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -3922,6 +3922,7 @@ static void _tcpm_cc_change(struct tcpm_port *port, enum typec_cc_status cc1,
+ 			tcpm_set_state(port, SRC_ATTACH_WAIT, 0);
+ 		break;
+ 	case SRC_ATTACHED:
++	case SRC_STARTUP:
+ 	case SRC_SEND_CAPABILITIES:
+ 	case SRC_READY:
+ 		if (tcpm_port_is_disconnected(port) ||
+diff --git a/drivers/video/fbdev/gbefb.c b/drivers/video/fbdev/gbefb.c
+index 31270a8986e8e..8f8ca1f88fe21 100644
+--- a/drivers/video/fbdev/gbefb.c
++++ b/drivers/video/fbdev/gbefb.c
+@@ -1269,7 +1269,7 @@ static struct platform_device *gbefb_device;
+ static int __init gbefb_init(void)
+ {
+ 	int ret = platform_driver_register(&gbefb_driver);
+-	if (!ret) {
++	if (IS_ENABLED(CONFIG_SGI_IP32) && !ret) {
+ 		gbefb_device = platform_device_alloc("gbefb", 0);
+ 		if (gbefb_device) {
+ 			ret = platform_device_add(gbefb_device);
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index 15d4b1ef19f83..1911a62a6d9c1 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -491,12 +491,12 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
+ }
+ 
+ /*
+- * Stop waiting if either state is not BP_EAGAIN and ballooning action is
+- * needed, or if the credit has changed while state is BP_EAGAIN.
++ * Stop waiting if either state is BP_DONE and ballooning action is
++ * needed, or if the credit has changed while state is not BP_DONE.
+  */
+ static bool balloon_thread_cond(enum bp_state state, long credit)
+ {
+-	if (state != BP_EAGAIN)
++	if (state == BP_DONE)
+ 		credit = 0;
+ 
+ 	return current_credit() != credit || kthread_should_stop();
+@@ -516,10 +516,19 @@ static int balloon_thread(void *unused)
+ 
+ 	set_freezable();
+ 	for (;;) {
+-		if (state == BP_EAGAIN)
+-			timeout = balloon_stats.schedule_delay * HZ;
+-		else
++		switch (state) {
++		case BP_DONE:
++		case BP_ECANCELED:
+ 			timeout = 3600 * HZ;
++			break;
++		case BP_EAGAIN:
++			timeout = balloon_stats.schedule_delay * HZ;
++			break;
++		case BP_WAIT:
++			timeout = HZ;
++			break;
++		}
++
+ 		credit = current_credit();
+ 
+ 		wait_event_freezable_timeout(balloon_thread_wq,
+diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
+index 720a7b7abd46d..fe8df32bb612b 100644
+--- a/drivers/xen/privcmd.c
++++ b/drivers/xen/privcmd.c
+@@ -803,11 +803,12 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
+ 		unsigned int domid =
+ 			(xdata.flags & XENMEM_rsrc_acq_caller_owned) ?
+ 			DOMID_SELF : kdata.dom;
+-		int num;
++		int num, *errs = (int *)pfns;
+ 
++		BUILD_BUG_ON(sizeof(*errs) > sizeof(*pfns));
+ 		num = xen_remap_domain_mfn_array(vma,
+ 						 kdata.addr & PAGE_MASK,
+-						 pfns, kdata.num, (int *)pfns,
++						 pfns, kdata.num, errs,
+ 						 vma->vm_page_prot,
+ 						 domid,
+ 						 vma->vm_private_data);
+@@ -817,7 +818,7 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
+ 			unsigned int i;
+ 
+ 			for (i = 0; i < num; i++) {
+-				rc = pfns[i];
++				rc = errs[i];
+ 				if (rc < 0)
+ 					break;
+ 			}
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 5f5169b9c2e90..46f825cf53f4f 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3427,15 +3427,18 @@ nfsd4_encode_dirent(void *ccdv, const char *name, int namlen,
+ 		goto fail;
+ 	cd->rd_maxcount -= entry_bytes;
+ 	/*
+-	 * RFC 3530 14.2.24 describes rd_dircount as only a "hint", so
+-	 * let's always let through the first entry, at least:
++	 * RFC 3530 14.2.24 describes rd_dircount as only a "hint", and
++	 * notes that it could be zero. If it is zero, then the server
++	 * should enforce only the rd_maxcount value.
+ 	 */
+-	if (!cd->rd_dircount)
+-		goto fail;
+-	name_and_cookie = 4 + 4 * XDR_QUADLEN(namlen) + 8;
+-	if (name_and_cookie > cd->rd_dircount && cd->cookie_offset)
+-		goto fail;
+-	cd->rd_dircount -= min(cd->rd_dircount, name_and_cookie);
++	if (cd->rd_dircount) {
++		name_and_cookie = 4 + 4 * XDR_QUADLEN(namlen) + 8;
++		if (name_and_cookie > cd->rd_dircount && cd->cookie_offset)
++			goto fail;
++		cd->rd_dircount -= min(cd->rd_dircount, name_and_cookie);
++		if (!cd->rd_dircount)
++			cd->rd_maxcount = 0;
++	}
+ 
+ 	cd->cookie_offset = cookie_offset;
+ skip_entry:
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 0759e589ab52b..ddf2b375632b7 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1547,7 +1547,7 @@ static int __init init_nfsd(void)
+ 		goto out_free_all;
+ 	return 0;
+ out_free_all:
+-	unregister_pernet_subsys(&nfsd_net_ops);
++	unregister_filesystem(&nfsd_fs_type);
+ out_free_exports:
+ 	remove_proc_entry("fs/nfs/exports", NULL);
+ 	remove_proc_entry("fs/nfs", NULL);
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 08b595c526d74..16955a307dcd9 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -1214,9 +1214,13 @@ static int ovl_rename(struct inode *olddir, struct dentry *old,
+ 				goto out_dput;
+ 		}
+ 	} else {
+-		if (!d_is_negative(newdentry) &&
+-		    (!new_opaque || !ovl_is_whiteout(newdentry)))
+-			goto out_dput;
++		if (!d_is_negative(newdentry)) {
++			if (!new_opaque || !ovl_is_whiteout(newdentry))
++				goto out_dput;
++		} else {
++			if (flags & RENAME_EXCHANGE)
++				goto out_dput;
++		}
+ 	}
+ 
+ 	if (olddentry == trap)
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index 5c5c3972ebd0a..f7135777cb4eb 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -301,6 +301,12 @@ static ssize_t ovl_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	if (ret)
+ 		return ret;
+ 
++	ret = -EINVAL;
++	if (iocb->ki_flags & IOCB_DIRECT &&
++	    (!real.file->f_mapping->a_ops ||
++	     !real.file->f_mapping->a_ops->direct_IO))
++		goto out_fdput;
++
+ 	old_cred = ovl_override_creds(file_inode(file)->i_sb);
+ 	if (is_sync_kiocb(iocb)) {
+ 		ret = vfs_iter_read(real.file, iter, &iocb->ki_pos,
+@@ -325,7 +331,7 @@ static ssize_t ovl_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ out:
+ 	revert_creds(old_cred);
+ 	ovl_file_accessed(file);
+-
++out_fdput:
+ 	fdput(real);
+ 
+ 	return ret;
+@@ -354,6 +360,12 @@ static ssize_t ovl_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	if (ret)
+ 		goto out_unlock;
+ 
++	ret = -EINVAL;
++	if (iocb->ki_flags & IOCB_DIRECT &&
++	    (!real.file->f_mapping->a_ops ||
++	     !real.file->f_mapping->a_ops->direct_IO))
++		goto out_fdput;
++
+ 	if (!ovl_should_sync(OVL_FS(inode->i_sb)))
+ 		ifl &= ~(IOCB_DSYNC | IOCB_SYNC);
+ 
+@@ -389,6 +401,7 @@ static ssize_t ovl_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	}
+ out:
+ 	revert_creds(old_cred);
++out_fdput:
+ 	fdput(real);
+ 
+ out_unlock:
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index ebf60848d5eb7..4477873ac3a0b 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -64,7 +64,8 @@ static inline int stack_map_data_size(struct bpf_map *map)
+ 
+ static int prealloc_elems_and_freelist(struct bpf_stack_map *smap)
+ {
+-	u32 elem_size = sizeof(struct stack_map_bucket) + smap->map.value_size;
++	u64 elem_size = sizeof(struct stack_map_bucket) +
++			(u64)smap->map.value_size;
+ 	int err;
+ 
+ 	smap->elems = bpf_map_area_alloc(elem_size * smap->map.max_entries,
+diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
+index 73f71c22f4c03..31b00ba5dcc84 100644
+--- a/net/bridge/br_netlink.c
++++ b/net/bridge/br_netlink.c
+@@ -1590,7 +1590,8 @@ static size_t br_get_linkxstats_size(const struct net_device *dev, int attr)
+ 	}
+ 
+ 	return numvls * nla_total_size(sizeof(struct bridge_vlan_xstats)) +
+-	       nla_total_size(sizeof(struct br_mcast_stats)) +
++	       nla_total_size_64bit(sizeof(struct br_mcast_stats)) +
++	       (p ? nla_total_size_64bit(sizeof(p->stp_xstats)) : 0) +
+ 	       nla_total_size(0);
+ }
+ 
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 7266571d5c7e2..27ffa83ffeb3c 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -5257,7 +5257,7 @@ nla_put_failure:
+ static size_t if_nlmsg_stats_size(const struct net_device *dev,
+ 				  u32 filter_mask)
+ {
+-	size_t size = 0;
++	size_t size = NLMSG_ALIGN(sizeof(struct if_stats_msg));
+ 
+ 	if (stats_attr_valid(filter_mask, IFLA_STATS_LINK_64, 0))
+ 		size += nla_total_size_64bit(sizeof(struct rtnl_link_stats64));
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 45fb450b45227..f3fd5c911ed09 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -242,8 +242,10 @@ static inline int compute_score(struct sock *sk, struct net *net,
+ 
+ 		if (!inet_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif))
+ 			return -1;
++		score =  sk->sk_bound_dev_if ? 2 : 1;
+ 
+-		score = sk->sk_family == PF_INET ? 2 : 1;
++		if (sk->sk_family == PF_INET)
++			score++;
+ 		if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ 			score++;
+ 	}
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index bd7fd9b1f24c8..655f0d8a13d36 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -390,7 +390,8 @@ static int compute_score(struct sock *sk, struct net *net,
+ 					dif, sdif);
+ 	if (!dev_match)
+ 		return -1;
+-	score += 4;
++	if (sk->sk_bound_dev_if)
++		score += 4;
+ 
+ 	if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ 		score++;
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index 55c290d556059..67c9114835c84 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -106,7 +106,7 @@ static inline int compute_score(struct sock *sk, struct net *net,
+ 		if (!inet_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif))
+ 			return -1;
+ 
+-		score = 1;
++		score =  sk->sk_bound_dev_if ? 2 : 1;
+ 		if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ 			score++;
+ 	}
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 1943ae5103eb6..bae6b51a9bd46 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -133,7 +133,8 @@ static int compute_score(struct sock *sk, struct net *net,
+ 	dev_match = udp_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif);
+ 	if (!dev_match)
+ 		return -1;
+-	score++;
++	if (sk->sk_bound_dev_if)
++		score++;
+ 
+ 	if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ 		score++;
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 8434da3c0487a..0886267ea81ef 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -586,7 +586,10 @@ static int netlink_insert(struct sock *sk, u32 portid)
+ 
+ 	/* We need to ensure that the socket is hashed and visible. */
+ 	smp_wmb();
+-	nlk_sk(sk)->bound = portid;
++	/* Paired with lockless reads from netlink_bind(),
++	 * netlink_connect() and netlink_sendmsg().
++	 */
++	WRITE_ONCE(nlk_sk(sk)->bound, portid);
+ 
+ err:
+ 	release_sock(sk);
+@@ -1004,7 +1007,8 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr,
+ 	if (nlk->ngroups < BITS_PER_LONG)
+ 		groups &= (1UL << nlk->ngroups) - 1;
+ 
+-	bound = nlk->bound;
++	/* Paired with WRITE_ONCE() in netlink_insert() */
++	bound = READ_ONCE(nlk->bound);
+ 	if (bound) {
+ 		/* Ensure nlk->portid is up-to-date. */
+ 		smp_rmb();
+@@ -1090,8 +1094,9 @@ static int netlink_connect(struct socket *sock, struct sockaddr *addr,
+ 
+ 	/* No need for barriers here as we return to user-space without
+ 	 * using any of the bound attributes.
++	 * Paired with WRITE_ONCE() in netlink_insert().
+ 	 */
+-	if (!nlk->bound)
++	if (!READ_ONCE(nlk->bound))
+ 		err = netlink_autobind(sock);
+ 
+ 	if (err == 0) {
+@@ -1880,7 +1885,8 @@ static int netlink_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 		dst_group = nlk->dst_group;
+ 	}
+ 
+-	if (!nlk->bound) {
++	/* Paired with WRITE_ONCE() in netlink_insert() */
++	if (!READ_ONCE(nlk->bound)) {
+ 		err = netlink_autobind(sock);
+ 		if (err)
+ 			goto out;
+diff --git a/net/sched/sch_fifo.c b/net/sched/sch_fifo.c
+index a579a4131d22d..e1040421b7979 100644
+--- a/net/sched/sch_fifo.c
++++ b/net/sched/sch_fifo.c
+@@ -233,6 +233,9 @@ int fifo_set_limit(struct Qdisc *q, unsigned int limit)
+ 	if (strncmp(q->ops->id + 1, "fifo", 4) != 0)
+ 		return 0;
+ 
++	if (!q->ops->change)
++		return 0;
++
+ 	nla = kmalloc(nla_attr_size(sizeof(struct tc_fifo_qopt)), GFP_KERNEL);
+ 	if (nla) {
+ 		nla->nla_type = RTM_NEWQDISC;
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index cb5e5220da552..93899559ba6d2 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1630,6 +1630,10 @@ static void taprio_destroy(struct Qdisc *sch)
+ 	list_del(&q->taprio_list);
+ 	spin_unlock(&taprio_list_lock);
+ 
++	/* Note that taprio_reset() might not be called if an error
++	 * happens in qdisc_create(), after taprio_init() has been called.
++	 */
++	hrtimer_cancel(&q->advance_timer);
+ 
+ 	taprio_disable_offload(dev, q, NULL);
+ 
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index e22f2d65457da..f5111d62972d3 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -643,7 +643,7 @@ static bool gss_check_seq_num(const struct svc_rqst *rqstp, struct rsc *rsci,
+ 		}
+ 		__set_bit(seq_num % GSS_SEQ_WIN, sd->sd_win);
+ 		goto ok;
+-	} else if (seq_num <= sd->sd_max - GSS_SEQ_WIN) {
++	} else if (seq_num + GSS_SEQ_WIN <= sd->sd_max) {
+ 		goto toolow;
+ 	}
+ 	if (__test_and_set_bit(seq_num % GSS_SEQ_WIN, sd->sd_win))
+diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
+index dcfdf6a322dc4..c679a79aef513 100644
+--- a/tools/perf/pmu-events/jevents.c
++++ b/tools/perf/pmu-events/jevents.c
+@@ -1100,12 +1100,13 @@ static int process_one_file(const char *fpath, const struct stat *sb,
+  */
+ int main(int argc, char *argv[])
+ {
+-	int rc, ret = 0;
++	int rc, ret = 0, empty_map = 0;
+ 	int maxfds;
+ 	char ldirname[PATH_MAX];
+ 	const char *arch;
+ 	const char *output_file;
+ 	const char *start_dirname;
++	char *err_string_ext = "";
+ 	struct stat stbuf;
+ 
+ 	prog = basename(argv[0]);
+@@ -1133,7 +1134,8 @@ int main(int argc, char *argv[])
+ 	/* If architecture does not have any event lists, bail out */
+ 	if (stat(ldirname, &stbuf) < 0) {
+ 		pr_info("%s: Arch %s has no PMU event lists\n", prog, arch);
+-		goto empty_map;
++		empty_map = 1;
++		goto err_close_eventsfp;
+ 	}
+ 
+ 	/* Include pmu-events.h first */
+@@ -1150,75 +1152,60 @@ int main(int argc, char *argv[])
+ 	 */
+ 
+ 	maxfds = get_maxfds();
+-	mapfile = NULL;
+ 	rc = nftw(ldirname, preprocess_arch_std_files, maxfds, 0);
+-	if (rc && verbose) {
+-		pr_info("%s: Error preprocessing arch standard files %s\n",
+-			prog, ldirname);
+-		goto empty_map;
+-	} else if (rc < 0) {
+-		/* Make build fail */
+-		fclose(eventsfp);
+-		free_arch_std_events();
+-		return 1;
+-	} else if (rc) {
+-		goto empty_map;
+-	}
++	if (rc)
++		goto err_processing_std_arch_event_dir;
+ 
+ 	rc = nftw(ldirname, process_one_file, maxfds, 0);
+-	if (rc && verbose) {
+-		pr_info("%s: Error walking file tree %s\n", prog, ldirname);
+-		goto empty_map;
+-	} else if (rc < 0) {
+-		/* Make build fail */
+-		fclose(eventsfp);
+-		free_arch_std_events();
+-		ret = 1;
+-		goto out_free_mapfile;
+-	} else if (rc) {
+-		goto empty_map;
+-	}
++	if (rc)
++		goto err_processing_dir;
+ 
+ 	sprintf(ldirname, "%s/test", start_dirname);
+ 
+ 	rc = nftw(ldirname, process_one_file, maxfds, 0);
+-	if (rc && verbose) {
+-		pr_info("%s: Error walking file tree %s rc=%d for test\n",
+-			prog, ldirname, rc);
+-		goto empty_map;
+-	} else if (rc < 0) {
+-		/* Make build fail */
+-		free_arch_std_events();
+-		ret = 1;
+-		goto out_free_mapfile;
+-	} else if (rc) {
+-		goto empty_map;
+-	}
++	if (rc)
++		goto err_processing_dir;
+ 
+ 	if (close_table)
+ 		print_events_table_suffix(eventsfp);
+ 
+ 	if (!mapfile) {
+ 		pr_info("%s: No CPU->JSON mapping?\n", prog);
+-		goto empty_map;
++		empty_map = 1;
++		goto err_close_eventsfp;
+ 	}
+ 
+-	if (process_mapfile(eventsfp, mapfile)) {
++	rc = process_mapfile(eventsfp, mapfile);
++	fclose(eventsfp);
++	if (rc) {
+ 		pr_info("%s: Error processing mapfile %s\n", prog, mapfile);
+ 		/* Make build fail */
+-		fclose(eventsfp);
+-		free_arch_std_events();
+ 		ret = 1;
++		goto err_out;
+ 	}
+ 
++	free_arch_std_events();
++	free(mapfile);
++	return 0;
+ 
+-	goto out_free_mapfile;
+-
+-empty_map:
++err_processing_std_arch_event_dir:
++	err_string_ext = " for std arch event";
++err_processing_dir:
++	if (verbose) {
++		pr_info("%s: Error walking file tree %s%s\n", prog, ldirname,
++			err_string_ext);
++		empty_map = 1;
++	} else if (rc < 0) {
++		ret = 1;
++	} else {
++		empty_map = 1;
++	}
++err_close_eventsfp:
+ 	fclose(eventsfp);
+-	create_empty_mapping(output_file);
++	if (empty_map)
++		create_empty_mapping(output_file);
++err_out:
+ 	free_arch_std_events();
+-out_free_mapfile:
+ 	free(mapfile);
+ 	return ret;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-10-17 13:11 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-10-17 13:11 UTC (permalink / raw
  To: gentoo-commits

commit:     875a8cc1cb1211803fb6a844855c96382f616dc1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Oct 17 13:11:18 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Oct 17 13:11:18 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=875a8cc1

Linux patch 5.10.74

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1073_linux-5.10.74.patch | 1038 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1042 insertions(+)

diff --git a/0000_README b/0000_README
index 9e6befb..11f68cc 100644
--- a/0000_README
+++ b/0000_README
@@ -335,6 +335,10 @@ Patch:  1072_linux-5.10.73.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.73
 
+Patch:  1073_linux-5.10.74.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.74
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1073_linux-5.10.74.patch b/1073_linux-5.10.74.patch
new file mode 100644
index 0000000..d6bbfb1
--- /dev/null
+++ b/1073_linux-5.10.74.patch
@@ -0,0 +1,1038 @@
+diff --git a/Makefile b/Makefile
+index 3f62cea9afc0e..84d540aed24c9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 73
++SUBLEVEL = 74
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/m68k/kernel/signal.c b/arch/m68k/kernel/signal.c
+index 46f91e0f6a082..fd916844a683f 100644
+--- a/arch/m68k/kernel/signal.c
++++ b/arch/m68k/kernel/signal.c
+@@ -447,7 +447,7 @@ static inline void save_fpu_state(struct sigcontext *sc, struct pt_regs *regs)
+ 
+ 	if (CPU_IS_060 ? sc->sc_fpstate[2] : sc->sc_fpstate[0]) {
+ 		fpu_version = sc->sc_fpstate[0];
+-		if (CPU_IS_020_OR_030 &&
++		if (CPU_IS_020_OR_030 && !regs->stkadj &&
+ 		    regs->vector >= (VEC_FPBRUC * 4) &&
+ 		    regs->vector <= (VEC_FPNAN * 4)) {
+ 			/* Clear pending exception in 68882 idle frame */
+@@ -510,7 +510,7 @@ static inline int rt_save_fpu_state(struct ucontext __user *uc, struct pt_regs *
+ 		if (!(CPU_IS_060 || CPU_IS_COLDFIRE))
+ 			context_size = fpstate[1];
+ 		fpu_version = fpstate[0];
+-		if (CPU_IS_020_OR_030 &&
++		if (CPU_IS_020_OR_030 && !regs->stkadj &&
+ 		    regs->vector >= (VEC_FPBRUC * 4) &&
+ 		    regs->vector <= (VEC_FPNAN * 4)) {
+ 			/* Clear pending exception in 68882 idle frame */
+@@ -828,18 +828,24 @@ badframe:
+ 	return 0;
+ }
+ 
++static inline struct pt_regs *rte_regs(struct pt_regs *regs)
++{
++	return (void *)regs + regs->stkadj;
++}
++
+ static void setup_sigcontext(struct sigcontext *sc, struct pt_regs *regs,
+ 			     unsigned long mask)
+ {
++	struct pt_regs *tregs = rte_regs(regs);
+ 	sc->sc_mask = mask;
+ 	sc->sc_usp = rdusp();
+ 	sc->sc_d0 = regs->d0;
+ 	sc->sc_d1 = regs->d1;
+ 	sc->sc_a0 = regs->a0;
+ 	sc->sc_a1 = regs->a1;
+-	sc->sc_sr = regs->sr;
+-	sc->sc_pc = regs->pc;
+-	sc->sc_formatvec = regs->format << 12 | regs->vector;
++	sc->sc_sr = tregs->sr;
++	sc->sc_pc = tregs->pc;
++	sc->sc_formatvec = tregs->format << 12 | tregs->vector;
+ 	save_a5_state(sc, regs);
+ 	save_fpu_state(sc, regs);
+ }
+@@ -847,6 +853,7 @@ static void setup_sigcontext(struct sigcontext *sc, struct pt_regs *regs,
+ static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *regs)
+ {
+ 	struct switch_stack *sw = (struct switch_stack *)regs - 1;
++	struct pt_regs *tregs = rte_regs(regs);
+ 	greg_t __user *gregs = uc->uc_mcontext.gregs;
+ 	int err = 0;
+ 
+@@ -867,9 +874,9 @@ static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *
+ 	err |= __put_user(sw->a5, &gregs[13]);
+ 	err |= __put_user(sw->a6, &gregs[14]);
+ 	err |= __put_user(rdusp(), &gregs[15]);
+-	err |= __put_user(regs->pc, &gregs[16]);
+-	err |= __put_user(regs->sr, &gregs[17]);
+-	err |= __put_user((regs->format << 12) | regs->vector, &uc->uc_formatvec);
++	err |= __put_user(tregs->pc, &gregs[16]);
++	err |= __put_user(tregs->sr, &gregs[17]);
++	err |= __put_user((tregs->format << 12) | tregs->vector, &uc->uc_formatvec);
+ 	err |= rt_save_fpu_state(uc, regs);
+ 	return err;
+ }
+@@ -886,13 +893,14 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+ 			struct pt_regs *regs)
+ {
+ 	struct sigframe __user *frame;
+-	int fsize = frame_extra_sizes(regs->format);
++	struct pt_regs *tregs = rte_regs(regs);
++	int fsize = frame_extra_sizes(tregs->format);
+ 	struct sigcontext context;
+ 	int err = 0, sig = ksig->sig;
+ 
+ 	if (fsize < 0) {
+ 		pr_debug("setup_frame: Unknown frame format %#x\n",
+-			 regs->format);
++			 tregs->format);
+ 		return -EFAULT;
+ 	}
+ 
+@@ -903,7 +911,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+ 
+ 	err |= __put_user(sig, &frame->sig);
+ 
+-	err |= __put_user(regs->vector, &frame->code);
++	err |= __put_user(tregs->vector, &frame->code);
+ 	err |= __put_user(&frame->sc, &frame->psc);
+ 
+ 	if (_NSIG_WORDS > 1)
+@@ -929,34 +937,28 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+ 
+ 	push_cache ((unsigned long) &frame->retcode);
+ 
+-	/*
+-	 * Set up registers for signal handler.  All the state we are about
+-	 * to destroy is successfully copied to sigframe.
+-	 */
+-	wrusp ((unsigned long) frame);
+-	regs->pc = (unsigned long) ksig->ka.sa.sa_handler;
+-	adjustformat(regs);
+-
+ 	/*
+ 	 * This is subtle; if we build more than one sigframe, all but the
+ 	 * first one will see frame format 0 and have fsize == 0, so we won't
+ 	 * screw stkadj.
+ 	 */
+-	if (fsize)
++	if (fsize) {
+ 		regs->stkadj = fsize;
+-
+-	/* Prepare to skip over the extra stuff in the exception frame.  */
+-	if (regs->stkadj) {
+-		struct pt_regs *tregs =
+-			(struct pt_regs *)((ulong)regs + regs->stkadj);
++		tregs = rte_regs(regs);
+ 		pr_debug("Performing stackadjust=%04lx\n", regs->stkadj);
+-		/* This must be copied with decreasing addresses to
+-                   handle overlaps.  */
+ 		tregs->vector = 0;
+ 		tregs->format = 0;
+-		tregs->pc = regs->pc;
+ 		tregs->sr = regs->sr;
+ 	}
++
++	/*
++	 * Set up registers for signal handler.  All the state we are about
++	 * to destroy is successfully copied to sigframe.
++	 */
++	wrusp ((unsigned long) frame);
++	tregs->pc = (unsigned long) ksig->ka.sa.sa_handler;
++	adjustformat(regs);
++
+ 	return 0;
+ }
+ 
+@@ -964,7 +966,8 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+ 			   struct pt_regs *regs)
+ {
+ 	struct rt_sigframe __user *frame;
+-	int fsize = frame_extra_sizes(regs->format);
++	struct pt_regs *tregs = rte_regs(regs);
++	int fsize = frame_extra_sizes(tregs->format);
+ 	int err = 0, sig = ksig->sig;
+ 
+ 	if (fsize < 0) {
+@@ -1014,34 +1017,27 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+ 
+ 	push_cache ((unsigned long) &frame->retcode);
+ 
+-	/*
+-	 * Set up registers for signal handler.  All the state we are about
+-	 * to destroy is successfully copied to sigframe.
+-	 */
+-	wrusp ((unsigned long) frame);
+-	regs->pc = (unsigned long) ksig->ka.sa.sa_handler;
+-	adjustformat(regs);
+-
+ 	/*
+ 	 * This is subtle; if we build more than one sigframe, all but the
+ 	 * first one will see frame format 0 and have fsize == 0, so we won't
+ 	 * screw stkadj.
+ 	 */
+-	if (fsize)
++	if (fsize) {
+ 		regs->stkadj = fsize;
+-
+-	/* Prepare to skip over the extra stuff in the exception frame.  */
+-	if (regs->stkadj) {
+-		struct pt_regs *tregs =
+-			(struct pt_regs *)((ulong)regs + regs->stkadj);
++		tregs = rte_regs(regs);
+ 		pr_debug("Performing stackadjust=%04lx\n", regs->stkadj);
+-		/* This must be copied with decreasing addresses to
+-                   handle overlaps.  */
+ 		tregs->vector = 0;
+ 		tregs->format = 0;
+-		tregs->pc = regs->pc;
+ 		tregs->sr = regs->sr;
+ 	}
++
++	/*
++	 * Set up registers for signal handler.  All the state we are about
++	 * to destroy is successfully copied to sigframe.
++	 */
++	wrusp ((unsigned long) frame);
++	tregs->pc = (unsigned long) ksig->ka.sa.sa_handler;
++	adjustformat(regs);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+index dbc8b76b9b78e..150fa5258fb6f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+@@ -1018,6 +1018,8 @@ static int gmc_v10_0_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	gmc_v10_0_gart_disable(adev);
++
+ 	if (amdgpu_sriov_vf(adev)) {
+ 		/* full access mode, so don't touch any GMC register */
+ 		DRM_DEBUG("For SRIOV client, shouldn't do anything.\n");
+@@ -1026,7 +1028,6 @@ static int gmc_v10_0_hw_fini(void *handle)
+ 
+ 	amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
+-	gmc_v10_0_gart_disable(adev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 3ebbddb63705c..3a864041968f6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -1677,6 +1677,8 @@ static int gmc_v9_0_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	gmc_v9_0_gart_disable(adev);
++
+ 	if (amdgpu_sriov_vf(adev)) {
+ 		/* full access mode, so don't touch any GMC register */
+ 		DRM_DEBUG("For SRIOV client, shouldn't do anything.\n");
+@@ -1685,7 +1687,6 @@ static int gmc_v9_0_hw_fini(void *handle)
+ 
+ 	amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
+-	gmc_v9_0_gart_disable(adev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index 6b8f0d004d345..5c1d33cda863b 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -322,12 +322,19 @@ static int apple_event(struct hid_device *hdev, struct hid_field *field,
+ 
+ /*
+  * MacBook JIS keyboard has wrong logical maximum
++ * Magic Keyboard JIS has wrong logical maximum
+  */
+ static __u8 *apple_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 		unsigned int *rsize)
+ {
+ 	struct apple_sc *asc = hid_get_drvdata(hdev);
+ 
++	if(*rsize >=71 && rdesc[70] == 0x65 && rdesc[64] == 0x65) {
++		hid_info(hdev,
++			 "fixing up Magic Keyboard JIS report descriptor\n");
++		rdesc[64] = rdesc[70] = 0xe7;
++	}
++
+ 	if ((asc->quirks & APPLE_RDESC_JIS) && *rsize >= 60 &&
+ 			rdesc[53] == 0x65 && rdesc[59] == 0x65) {
+ 		hid_info(hdev,
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 4228ddc3df0e6..b2719cf37aa52 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -4715,6 +4715,12 @@ static const struct wacom_features wacom_features_0x393 =
+ 	{ "Wacom Intuos Pro S", 31920, 19950, 8191, 63,
+ 	  INTUOSP2S_BT, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES, 7,
+ 	  .touch_max = 10 };
++static const struct wacom_features wacom_features_0x3c6 =
++	{ "Wacom Intuos BT S", 15200, 9500, 4095, 63,
++	  INTUOSHT3_BT, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4 };
++static const struct wacom_features wacom_features_0x3c8 =
++	{ "Wacom Intuos BT M", 21600, 13500, 4095, 63,
++	  INTUOSHT3_BT, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4 };
+ 
+ static const struct wacom_features wacom_features_HID_ANY_ID =
+ 	{ "Wacom HID", .type = HID_GENERIC, .oVid = HID_ANY_ID, .oPid = HID_ANY_ID };
+@@ -4888,6 +4894,8 @@ const struct hid_device_id wacom_ids[] = {
+ 	{ USB_DEVICE_WACOM(0x37A) },
+ 	{ USB_DEVICE_WACOM(0x37B) },
+ 	{ BT_DEVICE_WACOM(0x393) },
++	{ BT_DEVICE_WACOM(0x3c6) },
++	{ BT_DEVICE_WACOM(0x3c8) },
+ 	{ USB_DEVICE_WACOM(0x4001) },
+ 	{ USB_DEVICE_WACOM(0x4004) },
+ 	{ USB_DEVICE_WACOM(0x5000) },
+diff --git a/drivers/hwmon/ltc2947-core.c b/drivers/hwmon/ltc2947-core.c
+index bb3f7749a0b00..5423466de697a 100644
+--- a/drivers/hwmon/ltc2947-core.c
++++ b/drivers/hwmon/ltc2947-core.c
+@@ -989,8 +989,12 @@ static int ltc2947_setup(struct ltc2947_data *st)
+ 		return ret;
+ 
+ 	/* check external clock presence */
+-	extclk = devm_clk_get(st->dev, NULL);
+-	if (!IS_ERR(extclk)) {
++	extclk = devm_clk_get_optional(st->dev, NULL);
++	if (IS_ERR(extclk))
++		return dev_err_probe(st->dev, PTR_ERR(extclk),
++				     "Failed to get external clock\n");
++
++	if (extclk) {
+ 		unsigned long rate_hz;
+ 		u8 pre = 0, div, tbctl;
+ 		u64 aux;
+diff --git a/drivers/hwmon/pmbus/ibm-cffps.c b/drivers/hwmon/pmbus/ibm-cffps.c
+index 79bc2032dcb2a..da261d32450d0 100644
+--- a/drivers/hwmon/pmbus/ibm-cffps.c
++++ b/drivers/hwmon/pmbus/ibm-cffps.c
+@@ -171,8 +171,14 @@ static ssize_t ibm_cffps_debugfs_read(struct file *file, char __user *buf,
+ 		cmd = CFFPS_SN_CMD;
+ 		break;
+ 	case CFFPS_DEBUGFS_MAX_POWER_OUT:
+-		rc = i2c_smbus_read_word_swapped(psu->client,
+-						 CFFPS_MAX_POWER_OUT_CMD);
++		if (psu->version == cffps1) {
++			rc = i2c_smbus_read_word_swapped(psu->client,
++					CFFPS_MAX_POWER_OUT_CMD);
++		} else {
++			rc = i2c_smbus_read_word_data(psu->client,
++					CFFPS_MAX_POWER_OUT_CMD);
++		}
++
+ 		if (rc < 0)
+ 			return rc;
+ 
+diff --git a/drivers/net/ethernet/sun/Kconfig b/drivers/net/ethernet/sun/Kconfig
+index 309de38a75304..b0d3f9a2950c0 100644
+--- a/drivers/net/ethernet/sun/Kconfig
++++ b/drivers/net/ethernet/sun/Kconfig
+@@ -73,6 +73,7 @@ config CASSINI
+ config SUNVNET_COMMON
+ 	tristate "Common routines to support Sun Virtual Networking"
+ 	depends on SUN_LDOMS
++	depends on INET
+ 	default m
+ 
+ config SUNVNET
+diff --git a/drivers/scsi/ses.c b/drivers/scsi/ses.c
+index 43e682297fd5f..0a1734f34587d 100644
+--- a/drivers/scsi/ses.c
++++ b/drivers/scsi/ses.c
+@@ -118,7 +118,7 @@ static int ses_recv_diag(struct scsi_device *sdev, int page_code,
+ static int ses_send_diag(struct scsi_device *sdev, int page_code,
+ 			 void *buf, int bufflen)
+ {
+-	u32 result;
++	int result;
+ 
+ 	unsigned char cmd[] = {
+ 		SEND_DIAGNOSTIC,
+diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
+index b9c86a7e3b97d..6dac58ae61206 100644
+--- a/drivers/scsi/virtio_scsi.c
++++ b/drivers/scsi/virtio_scsi.c
+@@ -302,7 +302,7 @@ static void virtscsi_handle_transport_reset(struct virtio_scsi *vscsi,
+ 		}
+ 		break;
+ 	default:
+-		pr_info("Unsupport virtio scsi event reason %x\n", event->reason);
++		pr_info("Unsupported virtio scsi event reason %x\n", event->reason);
+ 	}
+ }
+ 
+@@ -394,7 +394,7 @@ static void virtscsi_handle_event(struct work_struct *work)
+ 		virtscsi_handle_param_change(vscsi, event);
+ 		break;
+ 	default:
+-		pr_err("Unsupport virtio scsi event %x\n", event->event);
++		pr_err("Unsupported virtio scsi event %x\n", event->event);
+ 	}
+ 	virtscsi_kick_event(vscsi, event_node);
+ }
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 0f7b53d5edea6..a96b688a0410f 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -733,18 +733,13 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ 	void *kaddr;
+ 	struct ext4_iloc iloc;
+ 
+-	if (unlikely(copied < len)) {
+-		if (!PageUptodate(page)) {
+-			copied = 0;
+-			goto out;
+-		}
+-	}
++	if (unlikely(copied < len) && !PageUptodate(page))
++		return 0;
+ 
+ 	ret = ext4_get_inode_loc(inode, &iloc);
+ 	if (ret) {
+ 		ext4_std_error(inode->i_sb, ret);
+-		copied = 0;
+-		goto out;
++		return ret;
+ 	}
+ 
+ 	ext4_write_lock_xattr(inode, &no_expand);
+@@ -757,7 +752,7 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ 	(void) ext4_find_inline_data_nolock(inode);
+ 
+ 	kaddr = kmap_atomic(page);
+-	ext4_write_inline_data(inode, &iloc, kaddr, pos, len);
++	ext4_write_inline_data(inode, &iloc, kaddr, pos, copied);
+ 	kunmap_atomic(kaddr);
+ 	SetPageUptodate(page);
+ 	/* clear page dirty so that writepages wouldn't work for us. */
+@@ -766,7 +761,7 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ 	ext4_write_unlock_xattr(inode, &no_expand);
+ 	brelse(iloc.bh);
+ 	mark_inode_dirty(inode);
+-out:
++
+ 	return copied;
+ }
+ 
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 63a292db75877..317aa1b90fb95 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1296,6 +1296,7 @@ static int ext4_write_end(struct file *file,
+ 			goto errout;
+ 		}
+ 		copied = ret;
++		ret = 0;
+ 	} else
+ 		copied = block_write_end(file, mapping, pos,
+ 					 len, copied, page, fsdata);
+@@ -1322,13 +1323,14 @@ static int ext4_write_end(struct file *file,
+ 	if (i_size_changed || inline_data)
+ 		ret = ext4_mark_inode_dirty(handle, inode);
+ 
++errout:
+ 	if (pos + len > inode->i_size && !verity && ext4_can_truncate(inode))
+ 		/* if we have allocated more blocks and copied
+ 		 * less. We will have blocks allocated outside
+ 		 * inode->i_size. So truncate them
+ 		 */
+ 		ext4_orphan_add(handle, inode);
+-errout:
++
+ 	ret2 = ext4_journal_stop(handle);
+ 	if (!ret)
+ 		ret = ret2;
+@@ -1411,6 +1413,7 @@ static int ext4_journalled_write_end(struct file *file,
+ 			goto errout;
+ 		}
+ 		copied = ret;
++		ret = 0;
+ 	} else if (unlikely(copied < len) && !PageUptodate(page)) {
+ 		copied = 0;
+ 		ext4_journalled_zero_new_buffers(handle, page, from, to);
+@@ -1440,6 +1443,7 @@ static int ext4_journalled_write_end(struct file *file,
+ 			ret = ret2;
+ 	}
+ 
++errout:
+ 	if (pos + len > inode->i_size && !verity && ext4_can_truncate(inode))
+ 		/* if we have allocated more blocks and copied
+ 		 * less. We will have blocks allocated outside
+@@ -1447,7 +1451,6 @@ static int ext4_journalled_write_end(struct file *file,
+ 		 */
+ 		ext4_orphan_add(handle, inode);
+ 
+-errout:
+ 	ret2 = ext4_journal_stop(handle);
+ 	if (!ret)
+ 		ret = ret2;
+@@ -3090,35 +3093,37 @@ static int ext4_da_write_end(struct file *file,
+ 	end = start + copied - 1;
+ 
+ 	/*
+-	 * generic_write_end() will run mark_inode_dirty() if i_size
+-	 * changes.  So let's piggyback the i_disksize mark_inode_dirty
+-	 * into that.
++	 * Since we are holding inode lock, we are sure i_disksize <=
++	 * i_size. We also know that if i_disksize < i_size, there are
++	 * delalloc writes pending in the range upto i_size. If the end of
++	 * the current write is <= i_size, there's no need to touch
++	 * i_disksize since writeback will push i_disksize upto i_size
++	 * eventually. If the end of the current write is > i_size and
++	 * inside an allocated block (ext4_da_should_update_i_disksize()
++	 * check), we need to update i_disksize here as neither
++	 * ext4_writepage() nor certain ext4_writepages() paths not
++	 * allocating blocks update i_disksize.
++	 *
++	 * Note that we defer inode dirtying to generic_write_end() /
++	 * ext4_da_write_inline_data_end().
+ 	 */
+ 	new_i_size = pos + copied;
+-	if (copied && new_i_size > EXT4_I(inode)->i_disksize) {
++	if (copied && new_i_size > inode->i_size) {
+ 		if (ext4_has_inline_data(inode) ||
+-		    ext4_da_should_update_i_disksize(page, end)) {
++		    ext4_da_should_update_i_disksize(page, end))
+ 			ext4_update_i_disksize(inode, new_i_size);
+-			/* We need to mark inode dirty even if
+-			 * new_i_size is less that inode->i_size
+-			 * bu greater than i_disksize.(hint delalloc)
+-			 */
+-			ret = ext4_mark_inode_dirty(handle, inode);
+-		}
+ 	}
+ 
+ 	if (write_mode != CONVERT_INLINE_DATA &&
+ 	    ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA) &&
+ 	    ext4_has_inline_data(inode))
+-		ret2 = ext4_da_write_inline_data_end(inode, pos, len, copied,
++		ret = ext4_da_write_inline_data_end(inode, pos, len, copied,
+ 						     page);
+ 	else
+-		ret2 = generic_write_end(file, mapping, pos, len, copied,
++		ret = generic_write_end(file, mapping, pos, len, copied,
+ 							page, fsdata);
+ 
+-	copied = ret2;
+-	if (ret2 < 0)
+-		ret = ret2;
++	copied = ret;
+ 	ret2 = ext4_journal_stop(handle);
+ 	if (unlikely(ret2 && !ret))
+ 		ret = ret2;
+diff --git a/fs/vboxsf/super.c b/fs/vboxsf/super.c
+index d7816c01a4f62..c578e772cbd58 100644
+--- a/fs/vboxsf/super.c
++++ b/fs/vboxsf/super.c
+@@ -21,10 +21,7 @@
+ 
+ #define VBOXSF_SUPER_MAGIC 0x786f4256 /* 'VBox' little endian */
+ 
+-#define VBSF_MOUNT_SIGNATURE_BYTE_0 ('\000')
+-#define VBSF_MOUNT_SIGNATURE_BYTE_1 ('\377')
+-#define VBSF_MOUNT_SIGNATURE_BYTE_2 ('\376')
+-#define VBSF_MOUNT_SIGNATURE_BYTE_3 ('\375')
++static const unsigned char VBSF_MOUNT_SIGNATURE[4] = "\000\377\376\375";
+ 
+ static int follow_symlinks;
+ module_param(follow_symlinks, int, 0444);
+@@ -386,12 +383,7 @@ fail_nomem:
+ 
+ static int vboxsf_parse_monolithic(struct fs_context *fc, void *data)
+ {
+-	unsigned char *options = data;
+-
+-	if (options && options[0] == VBSF_MOUNT_SIGNATURE_BYTE_0 &&
+-		       options[1] == VBSF_MOUNT_SIGNATURE_BYTE_1 &&
+-		       options[2] == VBSF_MOUNT_SIGNATURE_BYTE_2 &&
+-		       options[3] == VBSF_MOUNT_SIGNATURE_BYTE_3) {
++	if (data && !memcmp(data, VBSF_MOUNT_SIGNATURE, 4)) {
+ 		vbg_err("vboxsf: Old binary mount data not supported, remove obsolete mount.vboxsf and/or update your VBoxService.\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 072ac6c1ef2b6..c095e713cf08f 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -682,7 +682,9 @@ struct perf_event {
+ 	/*
+ 	 * timestamp shadows the actual context timing but it can
+ 	 * be safely used in NMI interrupt context. It reflects the
+-	 * context time as it was when the event was last scheduled in.
++	 * context time as it was when the event was last scheduled in,
++	 * or when ctx_sched_in failed to schedule the event because we
++	 * run out of PMC.
+ 	 *
+ 	 * ctx_time already accounts for ctx->timestamp. Therefore to
+ 	 * compute ctx_time for a sample, simply add perf_clock().
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 29c7ccd5ae42e..b85b26d9ccefe 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1589,7 +1589,7 @@ extern struct pid *cad_pid;
+ #define tsk_used_math(p)			((p)->flags & PF_USED_MATH)
+ #define used_math()				tsk_used_math(current)
+ 
+-static inline bool is_percpu_thread(void)
++static __always_inline bool is_percpu_thread(void)
+ {
+ #ifdef CONFIG_SMP
+ 	return (current->flags & PF_NO_SETAFFINITY) &&
+diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
+index 2be90a54a4044..7e58b44705705 100644
+--- a/include/net/pkt_sched.h
++++ b/include/net/pkt_sched.h
+@@ -11,6 +11,7 @@
+ #include <uapi/linux/pkt_sched.h>
+ 
+ #define DEFAULT_TX_QUEUE_LEN	1000
++#define STAB_SIZE_LOG_MAX	30
+ 
+ struct qdisc_walker {
+ 	int	stop;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index c677f934353af..c811519261710 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -3695,6 +3695,29 @@ static noinline int visit_groups_merge(struct perf_cpu_context *cpuctx,
+ 	return 0;
+ }
+ 
++static inline bool event_update_userpage(struct perf_event *event)
++{
++	if (likely(!atomic_read(&event->mmap_count)))
++		return false;
++
++	perf_event_update_time(event);
++	perf_set_shadow_time(event, event->ctx);
++	perf_event_update_userpage(event);
++
++	return true;
++}
++
++static inline void group_update_userpage(struct perf_event *group_event)
++{
++	struct perf_event *event;
++
++	if (!event_update_userpage(group_event))
++		return;
++
++	for_each_sibling_event(event, group_event)
++		event_update_userpage(event);
++}
++
+ static int merge_sched_in(struct perf_event *event, void *data)
+ {
+ 	struct perf_event_context *ctx = event->ctx;
+@@ -3713,14 +3736,15 @@ static int merge_sched_in(struct perf_event *event, void *data)
+ 	}
+ 
+ 	if (event->state == PERF_EVENT_STATE_INACTIVE) {
++		*can_add_hw = 0;
+ 		if (event->attr.pinned) {
+ 			perf_cgroup_event_disable(event, ctx);
+ 			perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
++		} else {
++			ctx->rotate_necessary = 1;
++			perf_mux_hrtimer_restart(cpuctx);
++			group_update_userpage(event);
+ 		}
+-
+-		*can_add_hw = 0;
+-		ctx->rotate_necessary = 1;
+-		perf_mux_hrtimer_restart(cpuctx);
+ 	}
+ 
+ 	return 0;
+@@ -6239,6 +6263,8 @@ accounting:
+ 
+ 		ring_buffer_attach(event, rb);
+ 
++		perf_event_update_time(event);
++		perf_set_shadow_time(event, event->ctx);
+ 		perf_event_init_userpage(event);
+ 		perf_event_update_userpage(event);
+ 	} else {
+diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
+index eb2b5404806c6..d36168baf6776 100644
+--- a/net/ipv6/netfilter/ip6_tables.c
++++ b/net/ipv6/netfilter/ip6_tables.c
+@@ -273,6 +273,7 @@ ip6t_do_table(struct sk_buff *skb,
+ 	 * things we don't know, ie. tcp syn flag or ports).  If the
+ 	 * rule is also a fragment-specific rule, non-fragments won't
+ 	 * match it. */
++	acpar.fragoff = 0;
+ 	acpar.hotdrop = false;
+ 	acpar.state   = state;
+ 
+diff --git a/net/mac80211/mesh_pathtbl.c b/net/mac80211/mesh_pathtbl.c
+index 620ecf922408b..870c8eafef929 100644
+--- a/net/mac80211/mesh_pathtbl.c
++++ b/net/mac80211/mesh_pathtbl.c
+@@ -60,7 +60,10 @@ static struct mesh_table *mesh_table_alloc(void)
+ 	atomic_set(&newtbl->entries,  0);
+ 	spin_lock_init(&newtbl->gates_lock);
+ 	spin_lock_init(&newtbl->walk_lock);
+-	rhashtable_init(&newtbl->rhead, &mesh_rht_params);
++	if (rhashtable_init(&newtbl->rhead, &mesh_rht_params)) {
++		kfree(newtbl);
++		return NULL;
++	}
+ 
+ 	return newtbl;
+ }
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 38b5695c2a0c8..b7979c0bffd0f 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -4064,7 +4064,8 @@ static bool ieee80211_accept_frame(struct ieee80211_rx_data *rx)
+ 		if (!bssid)
+ 			return false;
+ 		if (ether_addr_equal(sdata->vif.addr, hdr->addr2) ||
+-		    ether_addr_equal(sdata->u.ibss.bssid, hdr->addr2))
++		    ether_addr_equal(sdata->u.ibss.bssid, hdr->addr2) ||
++		    !is_valid_ether_addr(hdr->addr2))
+ 			return false;
+ 		if (ieee80211_is_beacon(hdr->frame_control))
+ 			return true;
+diff --git a/net/netfilter/nf_nat_masquerade.c b/net/netfilter/nf_nat_masquerade.c
+index 8e8a65d46345b..acd73f717a088 100644
+--- a/net/netfilter/nf_nat_masquerade.c
++++ b/net/netfilter/nf_nat_masquerade.c
+@@ -9,8 +9,19 @@
+ 
+ #include <net/netfilter/nf_nat_masquerade.h>
+ 
++struct masq_dev_work {
++	struct work_struct work;
++	struct net *net;
++	union nf_inet_addr addr;
++	int ifindex;
++	int (*iter)(struct nf_conn *i, void *data);
++};
++
++#define MAX_MASQ_WORKER_COUNT	16
++
+ static DEFINE_MUTEX(masq_mutex);
+ static unsigned int masq_refcnt __read_mostly;
++static atomic_t masq_worker_count __read_mostly;
+ 
+ unsigned int
+ nf_nat_masquerade_ipv4(struct sk_buff *skb, unsigned int hooknum,
+@@ -63,13 +74,71 @@ nf_nat_masquerade_ipv4(struct sk_buff *skb, unsigned int hooknum,
+ }
+ EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4);
+ 
+-static int device_cmp(struct nf_conn *i, void *ifindex)
++static void iterate_cleanup_work(struct work_struct *work)
++{
++	struct masq_dev_work *w;
++
++	w = container_of(work, struct masq_dev_work, work);
++
++	nf_ct_iterate_cleanup_net(w->net, w->iter, (void *)w, 0, 0);
++
++	put_net(w->net);
++	kfree(w);
++	atomic_dec(&masq_worker_count);
++	module_put(THIS_MODULE);
++}
++
++/* Iterate conntrack table in the background and remove conntrack entries
++ * that use the device/address being removed.
++ *
++ * In case too many work items have been queued already or memory allocation
++ * fails iteration is skipped, conntrack entries will time out eventually.
++ */
++static void nf_nat_masq_schedule(struct net *net, union nf_inet_addr *addr,
++				 int ifindex,
++				 int (*iter)(struct nf_conn *i, void *data),
++				 gfp_t gfp_flags)
++{
++	struct masq_dev_work *w;
++
++	if (atomic_read(&masq_worker_count) > MAX_MASQ_WORKER_COUNT)
++		return;
++
++	net = maybe_get_net(net);
++	if (!net)
++		return;
++
++	if (!try_module_get(THIS_MODULE))
++		goto err_module;
++
++	w = kzalloc(sizeof(*w), gfp_flags);
++	if (w) {
++		/* We can overshoot MAX_MASQ_WORKER_COUNT, no big deal */
++		atomic_inc(&masq_worker_count);
++
++		INIT_WORK(&w->work, iterate_cleanup_work);
++		w->ifindex = ifindex;
++		w->net = net;
++		w->iter = iter;
++		if (addr)
++			w->addr = *addr;
++		schedule_work(&w->work);
++		return;
++	}
++
++	module_put(THIS_MODULE);
++ err_module:
++	put_net(net);
++}
++
++static int device_cmp(struct nf_conn *i, void *arg)
+ {
+ 	const struct nf_conn_nat *nat = nfct_nat(i);
++	const struct masq_dev_work *w = arg;
+ 
+ 	if (!nat)
+ 		return 0;
+-	return nat->masq_index == (int)(long)ifindex;
++	return nat->masq_index == w->ifindex;
+ }
+ 
+ static int masq_device_event(struct notifier_block *this,
+@@ -85,8 +154,8 @@ static int masq_device_event(struct notifier_block *this,
+ 		 * and forget them.
+ 		 */
+ 
+-		nf_ct_iterate_cleanup_net(net, device_cmp,
+-					  (void *)(long)dev->ifindex, 0, 0);
++		nf_nat_masq_schedule(net, NULL, dev->ifindex,
++				     device_cmp, GFP_KERNEL);
+ 	}
+ 
+ 	return NOTIFY_DONE;
+@@ -94,35 +163,45 @@ static int masq_device_event(struct notifier_block *this,
+ 
+ static int inet_cmp(struct nf_conn *ct, void *ptr)
+ {
+-	struct in_ifaddr *ifa = (struct in_ifaddr *)ptr;
+-	struct net_device *dev = ifa->ifa_dev->dev;
+ 	struct nf_conntrack_tuple *tuple;
++	struct masq_dev_work *w = ptr;
+ 
+-	if (!device_cmp(ct, (void *)(long)dev->ifindex))
++	if (!device_cmp(ct, ptr))
+ 		return 0;
+ 
+ 	tuple = &ct->tuplehash[IP_CT_DIR_REPLY].tuple;
+ 
+-	return ifa->ifa_address == tuple->dst.u3.ip;
++	return nf_inet_addr_cmp(&w->addr, &tuple->dst.u3);
+ }
+ 
+ static int masq_inet_event(struct notifier_block *this,
+ 			   unsigned long event,
+ 			   void *ptr)
+ {
+-	struct in_device *idev = ((struct in_ifaddr *)ptr)->ifa_dev;
+-	struct net *net = dev_net(idev->dev);
++	const struct in_ifaddr *ifa = ptr;
++	const struct in_device *idev;
++	const struct net_device *dev;
++	union nf_inet_addr addr;
++
++	if (event != NETDEV_DOWN)
++		return NOTIFY_DONE;
+ 
+ 	/* The masq_dev_notifier will catch the case of the device going
+ 	 * down.  So if the inetdev is dead and being destroyed we have
+ 	 * no work to do.  Otherwise this is an individual address removal
+ 	 * and we have to perform the flush.
+ 	 */
++	idev = ifa->ifa_dev;
+ 	if (idev->dead)
+ 		return NOTIFY_DONE;
+ 
+-	if (event == NETDEV_DOWN)
+-		nf_ct_iterate_cleanup_net(net, inet_cmp, ptr, 0, 0);
++	memset(&addr, 0, sizeof(addr));
++
++	addr.ip = ifa->ifa_address;
++
++	dev = idev->dev;
++	nf_nat_masq_schedule(dev_net(idev->dev), &addr, dev->ifindex,
++			     inet_cmp, GFP_KERNEL);
+ 
+ 	return NOTIFY_DONE;
+ }
+@@ -136,8 +215,6 @@ static struct notifier_block masq_inet_notifier = {
+ };
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+-static atomic_t v6_worker_count __read_mostly;
+-
+ static int
+ nat_ipv6_dev_get_saddr(struct net *net, const struct net_device *dev,
+ 		       const struct in6_addr *daddr, unsigned int srcprefs,
+@@ -187,40 +264,6 @@ nf_nat_masquerade_ipv6(struct sk_buff *skb, const struct nf_nat_range2 *range,
+ }
+ EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6);
+ 
+-struct masq_dev_work {
+-	struct work_struct work;
+-	struct net *net;
+-	struct in6_addr addr;
+-	int ifindex;
+-};
+-
+-static int inet6_cmp(struct nf_conn *ct, void *work)
+-{
+-	struct masq_dev_work *w = (struct masq_dev_work *)work;
+-	struct nf_conntrack_tuple *tuple;
+-
+-	if (!device_cmp(ct, (void *)(long)w->ifindex))
+-		return 0;
+-
+-	tuple = &ct->tuplehash[IP_CT_DIR_REPLY].tuple;
+-
+-	return ipv6_addr_equal(&w->addr, &tuple->dst.u3.in6);
+-}
+-
+-static void iterate_cleanup_work(struct work_struct *work)
+-{
+-	struct masq_dev_work *w;
+-
+-	w = container_of(work, struct masq_dev_work, work);
+-
+-	nf_ct_iterate_cleanup_net(w->net, inet6_cmp, (void *)w, 0, 0);
+-
+-	put_net(w->net);
+-	kfree(w);
+-	atomic_dec(&v6_worker_count);
+-	module_put(THIS_MODULE);
+-}
+-
+ /* atomic notifier; can't call nf_ct_iterate_cleanup_net (it can sleep).
+  *
+  * Defer it to the system workqueue.
+@@ -233,36 +276,19 @@ static int masq_inet6_event(struct notifier_block *this,
+ {
+ 	struct inet6_ifaddr *ifa = ptr;
+ 	const struct net_device *dev;
+-	struct masq_dev_work *w;
+-	struct net *net;
++	union nf_inet_addr addr;
+ 
+-	if (event != NETDEV_DOWN || atomic_read(&v6_worker_count) >= 16)
++	if (event != NETDEV_DOWN)
+ 		return NOTIFY_DONE;
+ 
+ 	dev = ifa->idev->dev;
+-	net = maybe_get_net(dev_net(dev));
+-	if (!net)
+-		return NOTIFY_DONE;
+ 
+-	if (!try_module_get(THIS_MODULE))
+-		goto err_module;
++	memset(&addr, 0, sizeof(addr));
+ 
+-	w = kmalloc(sizeof(*w), GFP_ATOMIC);
+-	if (w) {
+-		atomic_inc(&v6_worker_count);
+-
+-		INIT_WORK(&w->work, iterate_cleanup_work);
+-		w->ifindex = dev->ifindex;
+-		w->net = net;
+-		w->addr = ifa->addr;
+-		schedule_work(&w->work);
++	addr.in6 = ifa->addr;
+ 
+-		return NOTIFY_DONE;
+-	}
+-
+-	module_put(THIS_MODULE);
+- err_module:
+-	put_net(net);
++	nf_nat_masq_schedule(dev_net(dev), &addr, dev->ifindex, inet_cmp,
++			     GFP_ATOMIC);
+ 	return NOTIFY_DONE;
+ }
+ 
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 54a8c363bcdda..7b24582a8a164 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -513,6 +513,12 @@ static struct qdisc_size_table *qdisc_get_stab(struct nlattr *opt,
+ 		return stab;
+ 	}
+ 
++	if (s->size_log > STAB_SIZE_LOG_MAX ||
++	    s->cell_log > STAB_SIZE_LOG_MAX) {
++		NL_SET_ERR_MSG(extack, "Invalid logarithmic size of size table");
++		return ERR_PTR(-EINVAL);
++	}
++
+ 	stab = kmalloc(sizeof(*stab) + tsize * sizeof(u16), GFP_KERNEL);
+ 	if (!stab)
+ 		return ERR_PTR(-ENOMEM);
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 2770e8179983a..25548555d8d79 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -847,6 +847,11 @@ static int create_sdw_dailink(struct device *dev, int *be_index,
+ 			      cpus + *cpu_id, cpu_dai_num,
+ 			      codecs, codec_num,
+ 			      NULL, &sdw_ops);
++		/*
++		 * SoundWire DAILINKs use 'stream' functions and Bank Switch operations
++		 * based on wait_for_completion(), tag them as 'nonatomic'.
++		 */
++		dai_links[*be_index].nonatomic = true;
+ 
+ 		ret = set_codec_init_func(link, dai_links + (*be_index)++,
+ 					  playback, group_id);
+diff --git a/sound/soc/sof/core.c b/sound/soc/sof/core.c
+index adc7c37145d64..feced9077dfe1 100644
+--- a/sound/soc/sof/core.c
++++ b/sound/soc/sof/core.c
+@@ -354,7 +354,6 @@ int snd_sof_device_remove(struct device *dev)
+ 			dev_warn(dev, "error: %d failed to prepare DSP for device removal",
+ 				 ret);
+ 
+-		snd_sof_fw_unload(sdev);
+ 		snd_sof_ipc_free(sdev);
+ 		snd_sof_free_debug(sdev);
+ 		snd_sof_free_trace(sdev);
+@@ -377,8 +376,7 @@ int snd_sof_device_remove(struct device *dev)
+ 		snd_sof_remove(sdev);
+ 
+ 	/* release firmware */
+-	release_firmware(pdata->fw);
+-	pdata->fw = NULL;
++	snd_sof_fw_unload(sdev);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/sof/loader.c b/sound/soc/sof/loader.c
+index ba9ed66f98bc7..2d5c3fc93bc5c 100644
+--- a/sound/soc/sof/loader.c
++++ b/sound/soc/sof/loader.c
+@@ -830,5 +830,7 @@ EXPORT_SYMBOL(snd_sof_run_firmware);
+ void snd_sof_fw_unload(struct snd_sof_dev *sdev)
+ {
+ 	/* TODO: support module unloading at runtime */
++	release_firmware(sdev->pdata->fw);
++	sdev->pdata->fw = NULL;
+ }
+ EXPORT_SYMBOL(snd_sof_fw_unload);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-10-18 21:17 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-10-18 21:17 UTC (permalink / raw
  To: gentoo-commits

commit:     3f28fdfa0105ce87af1306daca0708d5f1d6f8aa
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 18 21:14:04 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Oct 18 21:16:42 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3f28fdfa

For systemd, select CONFIG_KCMP as systemd uses the kcmp() call

Originally tied to CHECKPOINT_RESTORE.

Thanks to Mike Gilbert for reporting.

Bug: https://bugs.gentoo.org/818832

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 74e80d3..95a64aa 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -124,7 +124,6 @@
 +	select BPF_SYSCALL
 +	select CGROUP_BPF
 +	select CGROUPS
-+	select CHECKPOINT_RESTORE
 +	select CRYPTO_HMAC 
 +	select CRYPTO_SHA256
 +	select CRYPTO_USER_API_HASH
@@ -136,6 +135,7 @@
 +	select FILE_LOCKING
 +	select INOTIFY_USER
 +	select IPV6
++	select KCMP
 +	select NET
 +	select NET_NS
 +	select PROC_FS


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-10-20 13:23 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-10-20 13:23 UTC (permalink / raw
  To: gentoo-commits

commit:     bc672db99a4326c8616dc6c6534b6a67eed1c25c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 20 13:23:27 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 20 13:23:27 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bc672db9

Linux patch 5.10.75

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1074_linux-5.10.75.patch | 2564 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2568 insertions(+)

diff --git a/0000_README b/0000_README
index 11f68cc..6ee12e3 100644
--- a/0000_README
+++ b/0000_README
@@ -339,6 +339,10 @@ Patch:  1073_linux-5.10.74.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.74
 
+Patch:  1074_linux-5.10.75.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.75
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1074_linux-5.10.75.patch b/1074_linux-5.10.75.patch
new file mode 100644
index 0000000..8d9c3ca
--- /dev/null
+++ b/1074_linux-5.10.75.patch
@@ -0,0 +1,2564 @@
+diff --git a/Makefile b/Makefile
+index 84d540aed24c9..74318cf964b89 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 74
++SUBLEVEL = 75
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/bcm2711-rpi-4-b.dts b/arch/arm/boot/dts/bcm2711-rpi-4-b.dts
+index 5395e8c2484e0..167538518a1ec 100644
+--- a/arch/arm/boot/dts/bcm2711-rpi-4-b.dts
++++ b/arch/arm/boot/dts/bcm2711-rpi-4-b.dts
+@@ -54,8 +54,8 @@
+ 		regulator-always-on;
+ 		regulator-settling-time-us = <5000>;
+ 		gpios = <&expgpio 4 GPIO_ACTIVE_HIGH>;
+-		states = <1800000 0x1
+-			  3300000 0x0>;
++		states = <1800000 0x1>,
++			 <3300000 0x0>;
+ 		status = "okay";
+ 	};
+ 
+@@ -255,15 +255,16 @@
+ };
+ 
+ &pcie0 {
+-	pci@1,0 {
++	pci@0,0 {
++		device_type = "pci";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+ 		ranges;
+ 
+ 		reg = <0 0 0 0 0>;
+ 
+-		usb@1,0 {
+-			reg = <0x10000 0 0 0 0>;
++		usb@0,0 {
++			reg = <0 0 0 0 0>;
+ 			resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2711.dtsi b/arch/arm/boot/dts/bcm2711.dtsi
+index 3d040f6e2a20f..398ecd7b9b68b 100644
+--- a/arch/arm/boot/dts/bcm2711.dtsi
++++ b/arch/arm/boot/dts/bcm2711.dtsi
+@@ -514,8 +514,8 @@
+ 				compatible = "brcm,genet-mdio-v5";
+ 				reg = <0xe14 0x8>;
+ 				reg-names = "mdio";
+-				#address-cells = <0x0>;
+-				#size-cells = <0x1>;
++				#address-cells = <0x1>;
++				#size-cells = <0x0>;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
+index 55ecf6de9ff77..99cd6e7184083 100644
+--- a/arch/arm64/mm/hugetlbpage.c
++++ b/arch/arm64/mm/hugetlbpage.c
+@@ -43,7 +43,7 @@ void __init arm64_hugetlb_cma_reserve(void)
+ #ifdef CONFIG_ARM64_4K_PAGES
+ 	order = PUD_SHIFT - PAGE_SHIFT;
+ #else
+-	order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT;
++	order = CONT_PMD_SHIFT - PAGE_SHIFT;
+ #endif
+ 	/*
+ 	 * HugeTLB CMA reservation is required for gigantic
+diff --git a/arch/csky/kernel/ptrace.c b/arch/csky/kernel/ptrace.c
+index a4cf2e2ac15ac..ac07695bc5418 100644
+--- a/arch/csky/kernel/ptrace.c
++++ b/arch/csky/kernel/ptrace.c
+@@ -98,7 +98,8 @@ static int gpr_set(struct task_struct *target,
+ 	if (ret)
+ 		return ret;
+ 
+-	regs.sr = task_pt_regs(target)->sr;
++	/* BIT(0) of regs.sr is Condition Code/Carry bit */
++	regs.sr = (regs.sr & BIT(0)) | (task_pt_regs(target)->sr & ~BIT(0));
+ #ifdef CONFIG_CPU_HAS_HILO
+ 	regs.dcsr = task_pt_regs(target)->dcsr;
+ #endif
+diff --git a/arch/csky/kernel/signal.c b/arch/csky/kernel/signal.c
+index 8b068cf374478..0ca49b5e3dd37 100644
+--- a/arch/csky/kernel/signal.c
++++ b/arch/csky/kernel/signal.c
+@@ -52,10 +52,14 @@ static long restore_sigcontext(struct pt_regs *regs,
+ 	struct sigcontext __user *sc)
+ {
+ 	int err = 0;
++	unsigned long sr = regs->sr;
+ 
+ 	/* sc_pt_regs is structured the same as the start of pt_regs */
+ 	err |= __copy_from_user(regs, &sc->sc_pt_regs, sizeof(struct pt_regs));
+ 
++	/* BIT(0) of regs->sr is Condition Code/Carry bit */
++	regs->sr = (sr & ~1) | (regs->sr & 1);
++
+ 	/* Restore the floating-point state. */
+ 	err |= restore_fpu_state(sc);
+ 
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index 5b0f6b6278e3d..6018f73d947da 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -997,7 +997,8 @@ static int xive_get_irqchip_state(struct irq_data *data,
+ 		 * interrupt to be inactive in that case.
+ 		 */
+ 		*state = (pq != XIVE_ESB_INVALID) && !xd->stale_p &&
+-			(xd->saved_p || !!(pq & XIVE_ESB_VAL_P));
++			(xd->saved_p || (!!(pq & XIVE_ESB_VAL_P) &&
++			 !irqd_irq_disabled(data)));
+ 		return 0;
+ 	default:
+ 		return -EINVAL;
+diff --git a/arch/s390/lib/string.c b/arch/s390/lib/string.c
+index 93b3209b94a2f..db4e539d8c7fe 100644
+--- a/arch/s390/lib/string.c
++++ b/arch/s390/lib/string.c
+@@ -246,14 +246,13 @@ EXPORT_SYMBOL(strcmp);
+ #ifdef __HAVE_ARCH_STRRCHR
+ char *strrchr(const char *s, int c)
+ {
+-       size_t len = __strend(s) - s;
+-
+-       if (len)
+-	       do {
+-		       if (s[len] == (char) c)
+-			       return (char *) s + len;
+-	       } while (--len > 0);
+-       return NULL;
++	ssize_t len = __strend(s) - s;
++
++	do {
++		if (s[len] == (char)c)
++			return (char *)s + len;
++	} while (--len >= 0);
++	return NULL;
+ }
+ EXPORT_SYMBOL(strrchr);
+ #endif
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 4201d0cf5f835..c3d9f56c90186 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1534,7 +1534,6 @@ config AMD_MEM_ENCRYPT
+ 
+ config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
+ 	bool "Activate AMD Secure Memory Encryption (SME) by default"
+-	default y
+ 	depends on AMD_MEM_ENCRYPT
+ 	help
+ 	  Say yes to have system memory encrypted by default if running on
+diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
+index e8b5f1cf1ae8c..4ccb9039f5950 100644
+--- a/arch/x86/kernel/cpu/resctrl/core.c
++++ b/arch/x86/kernel/cpu/resctrl/core.c
+@@ -590,6 +590,8 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
+ 	}
+ 
+ 	if (r->mon_capable && domain_setup_mon_state(r, d)) {
++		kfree(d->ctrl_val);
++		kfree(d->mbps_val);
+ 		kfree(d);
+ 		return;
+ 	}
+diff --git a/drivers/acpi/arm64/gtdt.c b/drivers/acpi/arm64/gtdt.c
+index 0a0a982f9c28d..c0e77c1c8e09d 100644
+--- a/drivers/acpi/arm64/gtdt.c
++++ b/drivers/acpi/arm64/gtdt.c
+@@ -36,7 +36,7 @@ struct acpi_gtdt_descriptor {
+ 
+ static struct acpi_gtdt_descriptor acpi_gtdt_desc __initdata;
+ 
+-static inline void *next_platform_timer(void *platform_timer)
++static inline __init void *next_platform_timer(void *platform_timer)
+ {
+ 	struct acpi_gtdt_header *gh = platform_timer;
+ 
+diff --git a/drivers/ata/libahci_platform.c b/drivers/ata/libahci_platform.c
+index b2f5520882918..0910441321f72 100644
+--- a/drivers/ata/libahci_platform.c
++++ b/drivers/ata/libahci_platform.c
+@@ -440,10 +440,7 @@ struct ahci_host_priv *ahci_platform_get_resources(struct platform_device *pdev,
+ 	hpriv->phy_regulator = devm_regulator_get(dev, "phy");
+ 	if (IS_ERR(hpriv->phy_regulator)) {
+ 		rc = PTR_ERR(hpriv->phy_regulator);
+-		if (rc == -EPROBE_DEFER)
+-			goto err_out;
+-		rc = 0;
+-		hpriv->phy_regulator = NULL;
++		goto err_out;
+ 	}
+ 
+ 	if (flags & AHCI_PLATFORM_GET_RESETS) {
+diff --git a/drivers/ata/pata_legacy.c b/drivers/ata/pata_legacy.c
+index 4fd12b20df239..d91ba47f2fc44 100644
+--- a/drivers/ata/pata_legacy.c
++++ b/drivers/ata/pata_legacy.c
+@@ -315,7 +315,8 @@ static unsigned int pdc_data_xfer_vlb(struct ata_queued_cmd *qc,
+ 			iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
+ 
+ 		if (unlikely(slop)) {
+-			__le32 pad;
++			__le32 pad = 0;
++
+ 			if (rw == READ) {
+ 				pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
+ 				memcpy(buf + buflen - slop, &pad, slop);
+@@ -705,7 +706,8 @@ static unsigned int vlb32_data_xfer(struct ata_queued_cmd *qc,
+ 			ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
+ 
+ 		if (unlikely(slop)) {
+-			__le32 pad;
++			__le32 pad = 0;
++
+ 			if (rw == WRITE) {
+ 				memcpy(&pad, buf + buflen - slop, slop);
+ 				iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index a364fe565007c..2bc4db5ffe445 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -549,7 +549,8 @@ struct device_link *device_link_add(struct device *consumer,
+ {
+ 	struct device_link *link;
+ 
+-	if (!consumer || !supplier || flags & ~DL_ADD_VALID_FLAGS ||
++	if (!consumer || !supplier || consumer == supplier ||
++	    flags & ~DL_ADD_VALID_FLAGS ||
+ 	    (flags & DL_FLAG_STATELESS && flags & DL_MANAGED_LINK_FLAGS) ||
+ 	    (flags & DL_FLAG_SYNC_STATE_ONLY &&
+ 	     flags != DL_FLAG_SYNC_STATE_ONLY) ||
+diff --git a/drivers/bus/simple-pm-bus.c b/drivers/bus/simple-pm-bus.c
+index c5eb46cbf388b..244b8f3b38b40 100644
+--- a/drivers/bus/simple-pm-bus.c
++++ b/drivers/bus/simple-pm-bus.c
+@@ -16,7 +16,33 @@
+ 
+ static int simple_pm_bus_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np = pdev->dev.of_node;
++	const struct device *dev = &pdev->dev;
++	struct device_node *np = dev->of_node;
++	const struct of_device_id *match;
++
++	/*
++	 * Allow user to use driver_override to bind this driver to a
++	 * transparent bus device which has a different compatible string
++	 * that's not listed in simple_pm_bus_of_match. We don't want to do any
++	 * of the simple-pm-bus tasks for these devices, so return early.
++	 */
++	if (pdev->driver_override)
++		return 0;
++
++	match = of_match_device(dev->driver->of_match_table, dev);
++	/*
++	 * These are transparent bus devices (not simple-pm-bus matches) that
++	 * have their child nodes populated automatically.  So, don't need to
++	 * do anything more. We only match with the device if this driver is
++	 * the most specific match because we don't want to incorrectly bind to
++	 * a device that has a more specific driver.
++	 */
++	if (match && match->data) {
++		if (of_property_match_string(np, "compatible", match->compatible) == 0)
++			return 0;
++		else
++			return -ENODEV;
++	}
+ 
+ 	dev_dbg(&pdev->dev, "%s\n", __func__);
+ 
+@@ -30,14 +56,25 @@ static int simple_pm_bus_probe(struct platform_device *pdev)
+ 
+ static int simple_pm_bus_remove(struct platform_device *pdev)
+ {
++	const void *data = of_device_get_match_data(&pdev->dev);
++
++	if (pdev->driver_override || data)
++		return 0;
++
+ 	dev_dbg(&pdev->dev, "%s\n", __func__);
+ 
+ 	pm_runtime_disable(&pdev->dev);
+ 	return 0;
+ }
+ 
++#define ONLY_BUS	((void *) 1) /* Match if the device is only a bus. */
++
+ static const struct of_device_id simple_pm_bus_of_match[] = {
+ 	{ .compatible = "simple-pm-bus", },
++	{ .compatible = "simple-bus",	.data = ONLY_BUS },
++	{ .compatible = "simple-mfd",	.data = ONLY_BUS },
++	{ .compatible = "isa",		.data = ONLY_BUS },
++	{ .compatible = "arm,amba-bus",	.data = ONLY_BUS },
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);
+diff --git a/drivers/clk/socfpga/clk-agilex.c b/drivers/clk/socfpga/clk-agilex.c
+index 7182afb4258a7..225636c2b5696 100644
+--- a/drivers/clk/socfpga/clk-agilex.c
++++ b/drivers/clk/socfpga/clk-agilex.c
+@@ -165,13 +165,6 @@ static const struct clk_parent_data mpu_mux[] = {
+ 	  .name = "boot_clk", },
+ };
+ 
+-static const struct clk_parent_data s2f_usr0_mux[] = {
+-	{ .fw_name = "f2s-free-clk",
+-	  .name = "f2s-free-clk", },
+-	{ .fw_name = "boot_clk",
+-	  .name = "boot_clk", },
+-};
+-
+ static const struct clk_parent_data emac_mux[] = {
+ 	{ .fw_name = "emaca_free_clk",
+ 	  .name = "emaca_free_clk", },
+@@ -299,8 +292,6 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
+ 	  4, 0x44, 28, 1, 0, 0, 0},
+ 	{ AGILEX_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
+ 	  5, 0, 0, 0, 0x30, 1, 0},
+-	{ AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x24,
+-	  6, 0, 0, 0, 0, 0, 0},
+ 	{ AGILEX_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
+ 	  0, 0, 0, 0, 0x94, 26, 0},
+ 	{ AGILEX_EMAC1_CLK, "emac1_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
+diff --git a/drivers/edac/armada_xp_edac.c b/drivers/edac/armada_xp_edac.c
+index e3e757513d1bc..b1f46a974b9e0 100644
+--- a/drivers/edac/armada_xp_edac.c
++++ b/drivers/edac/armada_xp_edac.c
+@@ -178,7 +178,7 @@ static void axp_mc_check(struct mem_ctl_info *mci)
+ 				     "details unavailable (multiple errors)");
+ 	if (cnt_dbe)
+ 		edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
+-				     cnt_sbe, /* error count */
++				     cnt_dbe, /* error count */
+ 				     0, 0, 0, /* pfn, offset, syndrome */
+ 				     -1, -1, -1, /* top, mid, low layer */
+ 				     mci->ctl_name,
+diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c
+index ea7ca74fc1730..232c092c4c970 100644
+--- a/drivers/firmware/efi/cper.c
++++ b/drivers/firmware/efi/cper.c
+@@ -25,8 +25,6 @@
+ #include <acpi/ghes.h>
+ #include <ras/ras_event.h>
+ 
+-static char rcd_decode_str[CPER_REC_LEN];
+-
+ /*
+  * CPER record ID need to be unique even after reboot, because record
+  * ID is used as index for ERST storage, while CPER records from
+@@ -313,6 +311,7 @@ const char *cper_mem_err_unpack(struct trace_seq *p,
+ 				struct cper_mem_err_compact *cmem)
+ {
+ 	const char *ret = trace_seq_buffer_ptr(p);
++	char rcd_decode_str[CPER_REC_LEN];
+ 
+ 	if (cper_mem_err_location(cmem, rcd_decode_str))
+ 		trace_seq_printf(p, "%s", rcd_decode_str);
+@@ -327,6 +326,7 @@ static void cper_print_mem(const char *pfx, const struct cper_sec_mem_err *mem,
+ 	int len)
+ {
+ 	struct cper_mem_err_compact cmem;
++	char rcd_decode_str[CPER_REC_LEN];
+ 
+ 	/* Don't trust UEFI 2.1/2.2 structure with bad validation bits */
+ 	if (len == sizeof(struct cper_sec_mem_err_old) &&
+diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c
+index 1410beaef5c30..f3e54f6616f02 100644
+--- a/drivers/firmware/efi/runtime-wrappers.c
++++ b/drivers/firmware/efi/runtime-wrappers.c
+@@ -414,7 +414,7 @@ static void virt_efi_reset_system(int reset_type,
+ 				  unsigned long data_size,
+ 				  efi_char16_t *data)
+ {
+-	if (down_interruptible(&efi_runtime_lock)) {
++	if (down_trylock(&efi_runtime_lock)) {
+ 		pr_warn("failed to invoke the reset_system() runtime service:\n"
+ 			"could not get exclusive access to the firmware\n");
+ 		return;
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 3a3aeef1017f5..a78167b2c9ca2 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -558,21 +558,21 @@ static int pca953x_gpio_set_pull_up_down(struct pca953x_chip *chip,
+ 
+ 	mutex_lock(&chip->i2c_lock);
+ 
+-	/* Disable pull-up/pull-down */
+-	ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
+-	if (ret)
+-		goto exit;
+-
+ 	/* Configure pull-up/pull-down */
+ 	if (config == PIN_CONFIG_BIAS_PULL_UP)
+ 		ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, bit);
+ 	else if (config == PIN_CONFIG_BIAS_PULL_DOWN)
+ 		ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, 0);
++	else
++		ret = 0;
+ 	if (ret)
+ 		goto exit;
+ 
+-	/* Enable pull-up/pull-down */
+-	ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
++	/* Disable/Enable pull-up/pull-down */
++	if (config == PIN_CONFIG_BIAS_DISABLE)
++		ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
++	else
++		ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
+ 
+ exit:
+ 	mutex_unlock(&chip->i2c_lock);
+@@ -586,7 +586,9 @@ static int pca953x_gpio_set_config(struct gpio_chip *gc, unsigned int offset,
+ 
+ 	switch (pinconf_to_config_param(config)) {
+ 	case PIN_CONFIG_BIAS_PULL_UP:
++	case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT:
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
++	case PIN_CONFIG_BIAS_DISABLE:
+ 		return pca953x_gpio_set_pull_up_down(chip, offset, config);
+ 	default:
+ 		return -ENOTSUPP;
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index b7ddf504e0249..add317bd8d55c 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -1835,11 +1835,20 @@ static void connector_bad_edid(struct drm_connector *connector,
+ 			       u8 *edid, int num_blocks)
+ {
+ 	int i;
+-	u8 num_of_ext = edid[0x7e];
++	u8 last_block;
++
++	/*
++	 * 0x7e in the EDID is the number of extension blocks. The EDID
++	 * is 1 (base block) + num_ext_blocks big. That means we can think
++	 * of 0x7e in the EDID of the _index_ of the last block in the
++	 * combined chunk of memory.
++	 */
++	last_block = edid[0x7e];
+ 
+ 	/* Calculate real checksum for the last edid extension block data */
+-	connector->real_edid_checksum =
+-		drm_edid_block_checksum(edid + num_of_ext * EDID_LENGTH);
++	if (last_block < num_blocks)
++		connector->real_edid_checksum =
++			drm_edid_block_checksum(edid + last_block * EDID_LENGTH);
+ 
+ 	if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS))
+ 		return;
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 2dcbe02846cd9..9e09805575db4 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -99,7 +99,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
+ 	u32 asid;
+ 	u64 memptr = rbmemptr(ring, ttbr0);
+ 
+-	if (ctx == a6xx_gpu->cur_ctx)
++	if (ctx->seqno == a6xx_gpu->cur_ctx_seqno)
+ 		return;
+ 
+ 	if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
+@@ -132,7 +132,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
+ 	OUT_PKT7(ring, CP_EVENT_WRITE, 1);
+ 	OUT_RING(ring, 0x31);
+ 
+-	a6xx_gpu->cur_ctx = ctx;
++	a6xx_gpu->cur_ctx_seqno = ctx->seqno;
+ }
+ 
+ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+@@ -887,7 +887,7 @@ static int a6xx_hw_init(struct msm_gpu *gpu)
+ 	/* Always come up on rb 0 */
+ 	a6xx_gpu->cur_ring = gpu->rb[0];
+ 
+-	a6xx_gpu->cur_ctx = NULL;
++	a6xx_gpu->cur_ctx_seqno = 0;
+ 
+ 	/* Enable the SQE_to start the CP engine */
+ 	gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1);
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+index 69765a722cae6..f923edbd5daaf 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+@@ -19,7 +19,16 @@ struct a6xx_gpu {
+ 	uint64_t sqe_iova;
+ 
+ 	struct msm_ringbuffer *cur_ring;
+-	struct msm_file_private *cur_ctx;
++
++	/**
++	 * cur_ctx_seqno:
++	 *
++	 * The ctx->seqno value of the context with current pgtables
++	 * installed.  Tracked by seqno rather than pointer value to
++	 * avoid dangling pointers, and cases where a ctx can be freed
++	 * and a new one created with the same address.
++	 */
++	int cur_ctx_seqno;
+ 
+ 	struct a6xx_gmu gmu;
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+index 7d7668998501a..a8fa084dfa494 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+@@ -1119,6 +1119,20 @@ static void mdp5_crtc_reset(struct drm_crtc *crtc)
+ 	__drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
+ }
+ 
++static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = {
++	.set_config = drm_atomic_helper_set_config,
++	.destroy = mdp5_crtc_destroy,
++	.page_flip = drm_atomic_helper_page_flip,
++	.reset = mdp5_crtc_reset,
++	.atomic_duplicate_state = mdp5_crtc_duplicate_state,
++	.atomic_destroy_state = mdp5_crtc_destroy_state,
++	.atomic_print_state = mdp5_crtc_atomic_print_state,
++	.get_vblank_counter = mdp5_crtc_get_vblank_counter,
++	.enable_vblank  = msm_crtc_enable_vblank,
++	.disable_vblank = msm_crtc_disable_vblank,
++	.get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp,
++};
++
+ static const struct drm_crtc_funcs mdp5_crtc_funcs = {
+ 	.set_config = drm_atomic_helper_set_config,
+ 	.destroy = mdp5_crtc_destroy,
+@@ -1307,6 +1321,8 @@ struct drm_crtc *mdp5_crtc_init(struct drm_device *dev,
+ 	mdp5_crtc->lm_cursor_enabled = cursor_plane ? false : true;
+ 
+ 	drm_crtc_init_with_planes(dev, crtc, plane, cursor_plane,
++				  cursor_plane ?
++				  &mdp5_crtc_no_lm_cursor_funcs :
+ 				  &mdp5_crtc_funcs, NULL);
+ 
+ 	drm_flip_work_init(&mdp5_crtc->unref_cursor_work,
+diff --git a/drivers/gpu/drm/msm/dsi/dsi.c b/drivers/gpu/drm/msm/dsi/dsi.c
+index 7e364b9c9f9e1..1adead764feed 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi.c
++++ b/drivers/gpu/drm/msm/dsi/dsi.c
+@@ -208,8 +208,10 @@ int msm_dsi_modeset_init(struct msm_dsi *msm_dsi, struct drm_device *dev,
+ 		goto fail;
+ 	}
+ 
+-	if (!msm_dsi_manager_validate_current_config(msm_dsi->id))
++	if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) {
++		ret = -EINVAL;
+ 		goto fail;
++	}
+ 
+ 	msm_dsi->encoder = encoder;
+ 
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index b17ac6c275549..96b5dcf8e4540 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -464,7 +464,7 @@ static int dsi_bus_clk_enable(struct msm_dsi_host *msm_host)
+ 
+ 	return 0;
+ err:
+-	for (; i > 0; i--)
++	while (--i >= 0)
+ 		clk_disable_unprepare(msm_host->bus_clks[i]);
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/msm/edp/edp_ctrl.c b/drivers/gpu/drm/msm/edp/edp_ctrl.c
+index 0d9657cc70dbb..937b4abb15526 100644
+--- a/drivers/gpu/drm/msm/edp/edp_ctrl.c
++++ b/drivers/gpu/drm/msm/edp/edp_ctrl.c
+@@ -1116,7 +1116,7 @@ void msm_edp_ctrl_power(struct edp_ctrl *ctrl, bool on)
+ int msm_edp_ctrl_init(struct msm_edp *edp)
+ {
+ 	struct edp_ctrl *ctrl = NULL;
+-	struct device *dev = &edp->pdev->dev;
++	struct device *dev;
+ 	int ret;
+ 
+ 	if (!edp) {
+@@ -1124,6 +1124,7 @@ int msm_edp_ctrl_init(struct msm_edp *edp)
+ 		return -EINVAL;
+ 	}
+ 
++	dev = &edp->pdev->dev;
+ 	ctrl = devm_kzalloc(dev, sizeof(*ctrl), GFP_KERNEL);
+ 	if (!ctrl)
+ 		return -ENOMEM;
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index edee4c2a76ce4..33e42b2f9cfcb 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -581,6 +581,7 @@ static void load_gpu(struct drm_device *dev)
+ 
+ static int context_init(struct drm_device *dev, struct drm_file *file)
+ {
++	static atomic_t ident = ATOMIC_INIT(0);
+ 	struct msm_drm_private *priv = dev->dev_private;
+ 	struct msm_file_private *ctx;
+ 
+@@ -594,6 +595,8 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
+ 	ctx->aspace = msm_gpu_create_private_address_space(priv->gpu, current);
+ 	file->driver_priv = ctx;
+ 
++	ctx->seqno = atomic_inc_return(&ident);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
+index 0b2686b060c73..1fe809add8f62 100644
+--- a/drivers/gpu/drm/msm/msm_drv.h
++++ b/drivers/gpu/drm/msm/msm_drv.h
+@@ -58,6 +58,7 @@ struct msm_file_private {
+ 	int queueid;
+ 	struct msm_gem_address_space *aspace;
+ 	struct kref ref;
++	int seqno;
+ };
+ 
+ enum msm_mdp_plane_property {
+@@ -543,7 +544,7 @@ static inline int align_pitch(int width, int bpp)
+ static inline unsigned long timeout_to_jiffies(const ktime_t *timeout)
+ {
+ 	ktime_t now = ktime_get();
+-	unsigned long remaining_jiffies;
++	s64 remaining_jiffies;
+ 
+ 	if (ktime_compare(*timeout, now) < 0) {
+ 		remaining_jiffies = 0;
+@@ -552,7 +553,7 @@ static inline unsigned long timeout_to_jiffies(const ktime_t *timeout)
+ 		remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ);
+ 	}
+ 
+-	return remaining_jiffies;
++	return clamp(remaining_jiffies, 0LL, (s64)INT_MAX);
+ }
+ 
+ #endif /* __MSM_DRV_H__ */
+diff --git a/drivers/gpu/drm/panel/Kconfig b/drivers/gpu/drm/panel/Kconfig
+index b9dbedf8f15e8..6153972e0127a 100644
+--- a/drivers/gpu/drm/panel/Kconfig
++++ b/drivers/gpu/drm/panel/Kconfig
+@@ -233,6 +233,7 @@ config DRM_PANEL_OLIMEX_LCD_OLINUXINO
+ 	depends on OF
+ 	depends on I2C
+ 	depends on BACKLIGHT_CLASS_DEVICE
++	select CRC32
+ 	help
+ 	  The panel is used with different sizes LCDs, from 480x272 to
+ 	  1280x800, and 24 bit per pixel.
+diff --git a/drivers/iio/adc/ad7192.c b/drivers/iio/adc/ad7192.c
+index 1141cc13a1249..1b8baba9d4d64 100644
+--- a/drivers/iio/adc/ad7192.c
++++ b/drivers/iio/adc/ad7192.c
+@@ -293,6 +293,7 @@ static const struct ad_sigma_delta_info ad7192_sigma_delta_info = {
+ 	.has_registers = true,
+ 	.addr_shift = 3,
+ 	.read_mask = BIT(6),
++	.irq_flags = IRQF_TRIGGER_FALLING,
+ };
+ 
+ static const struct ad_sd_calib_data ad7192_calib_arr[8] = {
+diff --git a/drivers/iio/adc/ad7780.c b/drivers/iio/adc/ad7780.c
+index 42e7e8e595d18..c70048bc791bd 100644
+--- a/drivers/iio/adc/ad7780.c
++++ b/drivers/iio/adc/ad7780.c
+@@ -203,7 +203,7 @@ static const struct ad_sigma_delta_info ad7780_sigma_delta_info = {
+ 	.set_mode = ad7780_set_mode,
+ 	.postprocess_sample = ad7780_postprocess_sample,
+ 	.has_registers = false,
+-	.irq_flags = IRQF_TRIGGER_LOW,
++	.irq_flags = IRQF_TRIGGER_FALLING,
+ };
+ 
+ #define _AD7780_CHANNEL(_bits, _wordsize, _mask_all)		\
+diff --git a/drivers/iio/adc/ad7793.c b/drivers/iio/adc/ad7793.c
+index 440ef4c7be074..7c9c95c252cf8 100644
+--- a/drivers/iio/adc/ad7793.c
++++ b/drivers/iio/adc/ad7793.c
+@@ -206,7 +206,7 @@ static const struct ad_sigma_delta_info ad7793_sigma_delta_info = {
+ 	.has_registers = true,
+ 	.addr_shift = 3,
+ 	.read_mask = BIT(6),
+-	.irq_flags = IRQF_TRIGGER_LOW,
++	.irq_flags = IRQF_TRIGGER_FALLING,
+ };
+ 
+ static const struct ad_sd_calib_data ad7793_calib_arr[6] = {
+diff --git a/drivers/iio/adc/aspeed_adc.c b/drivers/iio/adc/aspeed_adc.c
+index 19efaa41bc344..34ec0c28b2dff 100644
+--- a/drivers/iio/adc/aspeed_adc.c
++++ b/drivers/iio/adc/aspeed_adc.c
+@@ -183,6 +183,7 @@ static int aspeed_adc_probe(struct platform_device *pdev)
+ 
+ 	data = iio_priv(indio_dev);
+ 	data->dev = &pdev->dev;
++	platform_set_drvdata(pdev, indio_dev);
+ 
+ 	data->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(data->base))
+diff --git a/drivers/iio/adc/max1027.c b/drivers/iio/adc/max1027.c
+index ca1dff3924ff9..a08efaaf1a814 100644
+--- a/drivers/iio/adc/max1027.c
++++ b/drivers/iio/adc/max1027.c
+@@ -103,7 +103,7 @@ MODULE_DEVICE_TABLE(of, max1027_adc_dt_ids);
+ 			.sign = 'u',					\
+ 			.realbits = depth,				\
+ 			.storagebits = 16,				\
+-			.shift = 2,					\
++			.shift = (depth == 10) ? 2 : 0,			\
+ 			.endianness = IIO_BE,				\
+ 		},							\
+ 	}
+@@ -142,7 +142,6 @@ MODULE_DEVICE_TABLE(of, max1027_adc_dt_ids);
+ 	MAX1027_V_CHAN(11, depth)
+ 
+ #define MAX1X31_CHANNELS(depth)			\
+-	MAX1X27_CHANNELS(depth),		\
+ 	MAX1X29_CHANNELS(depth),		\
+ 	MAX1027_V_CHAN(12, depth),		\
+ 	MAX1027_V_CHAN(13, depth),		\
+diff --git a/drivers/iio/adc/mt6577_auxadc.c b/drivers/iio/adc/mt6577_auxadc.c
+index 79c1dd68b9092..d4fccd52ef08b 100644
+--- a/drivers/iio/adc/mt6577_auxadc.c
++++ b/drivers/iio/adc/mt6577_auxadc.c
+@@ -82,6 +82,10 @@ static const struct iio_chan_spec mt6577_auxadc_iio_channels[] = {
+ 	MT6577_AUXADC_CHANNEL(15),
+ };
+ 
++/* For Voltage calculation */
++#define VOLTAGE_FULL_RANGE  1500	/* VA voltage */
++#define AUXADC_PRECISE      4096	/* 12 bits */
++
+ static int mt_auxadc_get_cali_data(int rawdata, bool enable_cali)
+ {
+ 	return rawdata;
+@@ -191,6 +195,10 @@ static int mt6577_auxadc_read_raw(struct iio_dev *indio_dev,
+ 		}
+ 		if (adc_dev->dev_comp->sample_data_cali)
+ 			*val = mt_auxadc_get_cali_data(*val, true);
++
++		/* Convert adc raw data to voltage: 0 - 1500 mV */
++		*val = *val * VOLTAGE_FULL_RANGE / AUXADC_PRECISE;
++
+ 		return IIO_VAL_INT;
+ 
+ 	default:
+diff --git a/drivers/iio/adc/ti-adc128s052.c b/drivers/iio/adc/ti-adc128s052.c
+index 3143f35a6509a..83c1ae07b3e9a 100644
+--- a/drivers/iio/adc/ti-adc128s052.c
++++ b/drivers/iio/adc/ti-adc128s052.c
+@@ -171,7 +171,13 @@ static int adc128_probe(struct spi_device *spi)
+ 	mutex_init(&adc->lock);
+ 
+ 	ret = iio_device_register(indio_dev);
++	if (ret)
++		goto err_disable_regulator;
+ 
++	return 0;
++
++err_disable_regulator:
++	regulator_disable(adc->reg);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/iio/common/ssp_sensors/ssp_spi.c b/drivers/iio/common/ssp_sensors/ssp_spi.c
+index 4864c38b8d1c2..769bd9280524a 100644
+--- a/drivers/iio/common/ssp_sensors/ssp_spi.c
++++ b/drivers/iio/common/ssp_sensors/ssp_spi.c
+@@ -137,7 +137,7 @@ static int ssp_print_mcu_debug(char *data_frame, int *data_index,
+ 	if (length > received_len - *data_index || length <= 0) {
+ 		ssp_dbg("[SSP]: MSG From MCU-invalid debug length(%d/%d)\n",
+ 			length, received_len);
+-		return length ? length : -EPROTO;
++		return -EPROTO;
+ 	}
+ 
+ 	ssp_dbg("[SSP]: MSG From MCU - %s\n", &data_frame[*data_index]);
+@@ -273,6 +273,8 @@ static int ssp_parse_dataframe(struct ssp_data *data, char *dataframe, int len)
+ 	for (idx = 0; idx < len;) {
+ 		switch (dataframe[idx++]) {
+ 		case SSP_MSG2AP_INST_BYPASS_DATA:
++			if (idx >= len)
++				return -EPROTO;
+ 			sd = dataframe[idx++];
+ 			if (sd < 0 || sd >= SSP_SENSOR_MAX) {
+ 				dev_err(SSP_DEV,
+@@ -282,10 +284,13 @@ static int ssp_parse_dataframe(struct ssp_data *data, char *dataframe, int len)
+ 
+ 			if (indio_devs[sd]) {
+ 				spd = iio_priv(indio_devs[sd]);
+-				if (spd->process_data)
++				if (spd->process_data) {
++					if (idx >= len)
++						return -EPROTO;
+ 					spd->process_data(indio_devs[sd],
+ 							  &dataframe[idx],
+ 							  data->timestamp);
++				}
+ 			} else {
+ 				dev_err(SSP_DEV, "no client for frame\n");
+ 			}
+@@ -293,6 +298,8 @@ static int ssp_parse_dataframe(struct ssp_data *data, char *dataframe, int len)
+ 			idx += ssp_offset_map[sd];
+ 			break;
+ 		case SSP_MSG2AP_INST_DEBUG_DATA:
++			if (idx >= len)
++				return -EPROTO;
+ 			sd = ssp_print_mcu_debug(dataframe, &idx, len);
+ 			if (sd) {
+ 				dev_err(SSP_DEV,
+diff --git a/drivers/iio/dac/ti-dac5571.c b/drivers/iio/dac/ti-dac5571.c
+index d3295767a079c..c0714cb1e164a 100644
+--- a/drivers/iio/dac/ti-dac5571.c
++++ b/drivers/iio/dac/ti-dac5571.c
+@@ -350,6 +350,7 @@ static int dac5571_probe(struct i2c_client *client,
+ 		data->dac5571_pwrdwn = dac5571_pwrdwn_quad;
+ 		break;
+ 	default:
++		ret = -EINVAL;
+ 		goto err;
+ 	}
+ 
+diff --git a/drivers/iio/light/opt3001.c b/drivers/iio/light/opt3001.c
+index 2d48d61909a4d..ff776259734ad 100644
+--- a/drivers/iio/light/opt3001.c
++++ b/drivers/iio/light/opt3001.c
+@@ -276,6 +276,8 @@ static int opt3001_get_lux(struct opt3001 *opt, int *val, int *val2)
+ 		ret = wait_event_timeout(opt->result_ready_queue,
+ 				opt->result_ready,
+ 				msecs_to_jiffies(OPT3001_RESULT_READY_LONG));
++		if (ret == 0)
++			return -ETIMEDOUT;
+ 	} else {
+ 		/* Sleep for result ready time */
+ 		timeout = (opt->int_time == OPT3001_INT_TIME_SHORT) ?
+@@ -312,9 +314,7 @@ err:
+ 		/* Disallow IRQ to access the device while lock is active */
+ 		opt->ok_to_ignore_lock = false;
+ 
+-	if (ret == 0)
+-		return -ETIMEDOUT;
+-	else if (ret < 0)
++	if (ret < 0)
+ 		return ret;
+ 
+ 	if (opt->use_irq) {
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index e5f1e3cf9179f..ba101afcfc27f 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -331,6 +331,7 @@ static const struct xpad_device {
+ 	{ 0x24c6, 0x5b03, "Thrustmaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0x5d04, "Razer Sabertooth", 0, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0xfafe, "Rock Candy Gamepad for Xbox 360", 0, XTYPE_XBOX360 },
++	{ 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 },
+ 	{ 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX },
+ 	{ 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX },
+ 	{ 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN }
+@@ -447,6 +448,7 @@ static const struct usb_device_id xpad_table[] = {
+ 	XPAD_XBOXONE_VENDOR(0x24c6),		/* PowerA Controllers */
+ 	XPAD_XBOXONE_VENDOR(0x2e24),		/* Hyperkin Duke X-Box One pad */
+ 	XPAD_XBOX360_VENDOR(0x2f24),		/* GameSir Controllers */
++	XPAD_XBOX360_VENDOR(0x3285),		/* Nacon GC-100 */
+ 	{ }
+ };
+ 
+diff --git a/drivers/misc/cb710/sgbuf2.c b/drivers/misc/cb710/sgbuf2.c
+index e5a4ed3701eb8..a798fad5f03c2 100644
+--- a/drivers/misc/cb710/sgbuf2.c
++++ b/drivers/misc/cb710/sgbuf2.c
+@@ -47,7 +47,7 @@ static inline bool needs_unaligned_copy(const void *ptr)
+ #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ 	return false;
+ #else
+-	return ((ptr - NULL) & 3) != 0;
++	return ((uintptr_t)ptr & 3) != 0;
+ #endif
+ }
+ 
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 273d9c1591793..a9c9d86eef4bc 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -812,10 +812,12 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx)
+ 			rpra[i].pv = (u64) ctx->args[i].ptr;
+ 			pages[i].addr = ctx->maps[i]->phys;
+ 
++			mmap_read_lock(current->mm);
+ 			vma = find_vma(current->mm, ctx->args[i].ptr);
+ 			if (vma)
+ 				pages[i].addr += ctx->args[i].ptr -
+ 						 vma->vm_start;
++			mmap_read_unlock(current->mm);
+ 
+ 			pg_start = (ctx->args[i].ptr & PAGE_MASK) >> PAGE_SHIFT;
+ 			pg_end = ((ctx->args[i].ptr + len - 1) & PAGE_MASK) >>
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index cb34925e10f15..67bb6a25fd0a0 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -92,6 +92,7 @@
+ #define MEI_DEV_ID_CDF        0x18D3  /* Cedar Fork */
+ 
+ #define MEI_DEV_ID_ICP_LP     0x34E0  /* Ice Lake Point LP */
++#define MEI_DEV_ID_ICP_N      0x38E0  /* Ice Lake Point N */
+ 
+ #define MEI_DEV_ID_JSP_N      0x4DE0  /* Jasper Lake Point N */
+ 
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index c3393b383e598..3a45aaf002ac8 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -96,6 +96,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_CMP_H_3, MEI_ME_PCH8_ITOUCH_CFG)},
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_ICP_N, MEI_ME_PCH12_CFG)},
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_TGP_LP, MEI_ME_PCH15_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_TGP_H, MEI_ME_PCH15_SPS_CFG)},
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 88fa0779e0bc9..e3c338624b95c 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -460,8 +460,10 @@ EXPORT_SYMBOL(ksz_switch_register);
+ void ksz_switch_remove(struct ksz_device *dev)
+ {
+ 	/* timer started */
+-	if (dev->mib_read_interval)
++	if (dev->mib_read_interval) {
++		dev->mib_read_interval = 0;
+ 		cancel_delayed_work_sync(&dev->mib_read);
++	}
+ 
+ 	dev->dev_ops->exit(dev);
+ 	dsa_unregister_switch(dev->ds);
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 18388ea5ebd96..afc5500ef8ed9 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -726,7 +726,11 @@ static void mv88e6xxx_mac_link_down(struct dsa_switch *ds, int port,
+ 	ops = chip->info->ops;
+ 
+ 	mv88e6xxx_reg_lock(chip);
+-	if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
++	/* Internal PHYs propagate their configuration directly to the MAC.
++	 * External PHYs depend on whether the PPU is enabled for this port.
++	 */
++	if (((!mv88e6xxx_phy_is_internal(ds, port) &&
++	      !mv88e6xxx_port_ppu_updates(chip, port)) ||
+ 	     mode == MLO_AN_FIXED) && ops->port_set_link)
+ 		err = ops->port_set_link(chip, port, LINK_FORCED_DOWN);
+ 	mv88e6xxx_reg_unlock(chip);
+@@ -749,7 +753,12 @@ static void mv88e6xxx_mac_link_up(struct dsa_switch *ds, int port,
+ 	ops = chip->info->ops;
+ 
+ 	mv88e6xxx_reg_lock(chip);
+-	if (!mv88e6xxx_port_ppu_updates(chip, port) || mode == MLO_AN_FIXED) {
++	/* Internal PHYs propagate their configuration directly to the MAC.
++	 * External PHYs depend on whether the PPU is enabled for this port.
++	 */
++	if ((!mv88e6xxx_phy_is_internal(ds, port) &&
++	     !mv88e6xxx_port_ppu_updates(chip, port)) ||
++	    mode == MLO_AN_FIXED) {
+ 		/* FIXME: for an automedia port, should we force the link
+ 		 * down here - what if the link comes up due to "other" media
+ 		 * while we're bringing the port up, how is the exclusivity
+diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
+index de50e8b9e6562..fad9a2c77fa7c 100644
+--- a/drivers/net/ethernet/Kconfig
++++ b/drivers/net/ethernet/Kconfig
+@@ -99,6 +99,7 @@ config JME
+ config KORINA
+ 	tristate "Korina (IDT RC32434) Ethernet support"
+ 	depends on MIKROTIK_RB532
++	select CRC32
+ 	help
+ 	  If you have a Mikrotik RouterBoard 500 or IDT RC32434
+ 	  based system say Y. Otherwise say N.
+diff --git a/drivers/net/ethernet/arc/Kconfig b/drivers/net/ethernet/arc/Kconfig
+index 37a41773dd435..92a79c4ffa2c7 100644
+--- a/drivers/net/ethernet/arc/Kconfig
++++ b/drivers/net/ethernet/arc/Kconfig
+@@ -21,6 +21,7 @@ config ARC_EMAC_CORE
+ 	depends on ARC || ARCH_ROCKCHIP || COMPILE_TEST
+ 	select MII
+ 	select PHYLIB
++	select CRC32
+ 
+ config ARC_EMAC
+ 	tristate "ARC EMAC support"
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cq.c b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+index 360e093874d4f..c74600be570ed 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+@@ -154,6 +154,8 @@ int mlx5_core_destroy_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq)
+ 	u32 in[MLX5_ST_SZ_DW(destroy_cq_in)] = {};
+ 	int err;
+ 
++	mlx5_debug_cq_remove(dev, cq);
++
+ 	mlx5_eq_del_cq(mlx5_get_async_eq(dev), cq);
+ 	mlx5_eq_del_cq(&cq->eq->core, cq);
+ 
+@@ -161,16 +163,13 @@ int mlx5_core_destroy_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq)
+ 	MLX5_SET(destroy_cq_in, in, cqn, cq->cqn);
+ 	MLX5_SET(destroy_cq_in, in, uid, cq->uid);
+ 	err = mlx5_cmd_exec_in(dev, destroy_cq, in);
+-	if (err)
+-		return err;
+ 
+ 	synchronize_irq(cq->irqn);
+ 
+-	mlx5_debug_cq_remove(dev, cq);
+ 	mlx5_cq_put(cq);
+ 	wait_for_completion(&cq->free);
+ 
+-	return 0;
++	return err;
+ }
+ EXPORT_SYMBOL(mlx5_core_destroy_cq);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 6974090a7efac..6ec4b96497ffb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3819,20 +3819,67 @@ static int set_feature_rx_all(struct net_device *netdev, bool enable)
+ 	return mlx5_set_port_fcs(mdev, !enable);
+ }
+ 
++static int mlx5e_set_rx_port_ts(struct mlx5_core_dev *mdev, bool enable)
++{
++	u32 in[MLX5_ST_SZ_DW(pcmr_reg)] = {};
++	bool supported, curr_state;
++	int err;
++
++	if (!MLX5_CAP_GEN(mdev, ports_check))
++		return 0;
++
++	err = mlx5_query_ports_check(mdev, in, sizeof(in));
++	if (err)
++		return err;
++
++	supported = MLX5_GET(pcmr_reg, in, rx_ts_over_crc_cap);
++	curr_state = MLX5_GET(pcmr_reg, in, rx_ts_over_crc);
++
++	if (!supported || enable == curr_state)
++		return 0;
++
++	MLX5_SET(pcmr_reg, in, local_port, 1);
++	MLX5_SET(pcmr_reg, in, rx_ts_over_crc, enable);
++
++	return mlx5_set_ports_check(mdev, in, sizeof(in));
++}
++
+ static int set_feature_rx_fcs(struct net_device *netdev, bool enable)
+ {
+ 	struct mlx5e_priv *priv = netdev_priv(netdev);
++	struct mlx5e_channels *chs = &priv->channels;
++	struct mlx5_core_dev *mdev = priv->mdev;
+ 	int err;
+ 
+ 	mutex_lock(&priv->state_lock);
+ 
+-	priv->channels.params.scatter_fcs_en = enable;
+-	err = mlx5e_modify_channels_scatter_fcs(&priv->channels, enable);
+-	if (err)
+-		priv->channels.params.scatter_fcs_en = !enable;
++	if (enable) {
++		err = mlx5e_set_rx_port_ts(mdev, false);
++		if (err)
++			goto out;
+ 
+-	mutex_unlock(&priv->state_lock);
++		chs->params.scatter_fcs_en = true;
++		err = mlx5e_modify_channels_scatter_fcs(chs, true);
++		if (err) {
++			chs->params.scatter_fcs_en = false;
++			mlx5e_set_rx_port_ts(mdev, true);
++		}
++	} else {
++		chs->params.scatter_fcs_en = false;
++		err = mlx5e_modify_channels_scatter_fcs(chs, false);
++		if (err) {
++			chs->params.scatter_fcs_en = true;
++			goto out;
++		}
++		err = mlx5e_set_rx_port_ts(mdev, true);
++		if (err) {
++			mlx5_core_warn(mdev, "Failed to set RX port timestamp %d\n", err);
++			err = 0;
++		}
++	}
+ 
++out:
++	mutex_unlock(&priv->state_lock);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+index 42e4437ac3c16..7ec1d0ee9beeb 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+@@ -25,16 +25,8 @@
+ #define MLXSW_THERMAL_ZONE_MAX_NAME	16
+ #define MLXSW_THERMAL_TEMP_SCORE_MAX	GENMASK(31, 0)
+ #define MLXSW_THERMAL_MAX_STATE	10
++#define MLXSW_THERMAL_MIN_STATE	2
+ #define MLXSW_THERMAL_MAX_DUTY	255
+-/* Minimum and maximum fan allowed speed in percent: from 20% to 100%. Values
+- * MLXSW_THERMAL_MAX_STATE + x, where x is between 2 and 10 are used for
+- * setting fan speed dynamic minimum. For example, if value is set to 14 (40%)
+- * cooling levels vector will be set to 4, 4, 4, 4, 4, 5, 6, 7, 8, 9, 10 to
+- * introduce PWM speed in percent: 40, 40, 40, 40, 40, 50, 60. 70, 80, 90, 100.
+- */
+-#define MLXSW_THERMAL_SPEED_MIN		(MLXSW_THERMAL_MAX_STATE + 2)
+-#define MLXSW_THERMAL_SPEED_MAX		(MLXSW_THERMAL_MAX_STATE * 2)
+-#define MLXSW_THERMAL_SPEED_MIN_LEVEL	2		/* 20% */
+ 
+ /* External cooling devices, allowed for binding to mlxsw thermal zones. */
+ static char * const mlxsw_thermal_external_allowed_cdev[] = {
+@@ -635,49 +627,16 @@ static int mlxsw_thermal_set_cur_state(struct thermal_cooling_device *cdev,
+ 	struct mlxsw_thermal *thermal = cdev->devdata;
+ 	struct device *dev = thermal->bus_info->dev;
+ 	char mfsc_pl[MLXSW_REG_MFSC_LEN];
+-	unsigned long cur_state, i;
+ 	int idx;
+-	u8 duty;
+ 	int err;
+ 
++	if (state > MLXSW_THERMAL_MAX_STATE)
++		return -EINVAL;
++
+ 	idx = mlxsw_get_cooling_device_idx(thermal, cdev);
+ 	if (idx < 0)
+ 		return idx;
+ 
+-	/* Verify if this request is for changing allowed fan dynamical
+-	 * minimum. If it is - update cooling levels accordingly and update
+-	 * state, if current state is below the newly requested minimum state.
+-	 * For example, if current state is 5, and minimal state is to be
+-	 * changed from 4 to 6, thermal->cooling_levels[0 to 5] will be changed
+-	 * all from 4 to 6. And state 5 (thermal->cooling_levels[4]) should be
+-	 * overwritten.
+-	 */
+-	if (state >= MLXSW_THERMAL_SPEED_MIN &&
+-	    state <= MLXSW_THERMAL_SPEED_MAX) {
+-		state -= MLXSW_THERMAL_MAX_STATE;
+-		for (i = 0; i <= MLXSW_THERMAL_MAX_STATE; i++)
+-			thermal->cooling_levels[i] = max(state, i);
+-
+-		mlxsw_reg_mfsc_pack(mfsc_pl, idx, 0);
+-		err = mlxsw_reg_query(thermal->core, MLXSW_REG(mfsc), mfsc_pl);
+-		if (err)
+-			return err;
+-
+-		duty = mlxsw_reg_mfsc_pwm_duty_cycle_get(mfsc_pl);
+-		cur_state = mlxsw_duty_to_state(duty);
+-
+-		/* If current fan state is lower than requested dynamical
+-		 * minimum, increase fan speed up to dynamical minimum.
+-		 */
+-		if (state < cur_state)
+-			return 0;
+-
+-		state = cur_state;
+-	}
+-
+-	if (state > MLXSW_THERMAL_MAX_STATE)
+-		return -EINVAL;
+-
+ 	/* Normalize the state to the valid speed range. */
+ 	state = thermal->cooling_levels[state];
+ 	mlxsw_reg_mfsc_pack(mfsc_pl, idx, mlxsw_state_to_duty(state));
+@@ -980,8 +939,7 @@ int mlxsw_thermal_init(struct mlxsw_core *core,
+ 
+ 	/* Initialize cooling levels per PWM state. */
+ 	for (i = 0; i < MLXSW_THERMAL_MAX_STATE; i++)
+-		thermal->cooling_levels[i] = max(MLXSW_THERMAL_SPEED_MIN_LEVEL,
+-						 i);
++		thermal->cooling_levels[i] = max(MLXSW_THERMAL_MIN_STATE, i);
+ 
+ 	thermal->polling_delay = bus_info->low_frequency ?
+ 				 MLXSW_THERMAL_SLOW_POLL_INT :
+diff --git a/drivers/net/ethernet/microchip/encx24j600-regmap.c b/drivers/net/ethernet/microchip/encx24j600-regmap.c
+index 796e46a539269..81a8ccca7e5e0 100644
+--- a/drivers/net/ethernet/microchip/encx24j600-regmap.c
++++ b/drivers/net/ethernet/microchip/encx24j600-regmap.c
+@@ -497,13 +497,19 @@ static struct regmap_bus phymap_encx24j600 = {
+ 	.reg_read = regmap_encx24j600_phy_reg_read,
+ };
+ 
+-void devm_regmap_init_encx24j600(struct device *dev,
+-				 struct encx24j600_context *ctx)
++int devm_regmap_init_encx24j600(struct device *dev,
++				struct encx24j600_context *ctx)
+ {
+ 	mutex_init(&ctx->mutex);
+ 	regcfg.lock_arg = ctx;
+ 	ctx->regmap = devm_regmap_init(dev, &regmap_encx24j600, ctx, &regcfg);
++	if (IS_ERR(ctx->regmap))
++		return PTR_ERR(ctx->regmap);
+ 	ctx->phymap = devm_regmap_init(dev, &phymap_encx24j600, ctx, &phycfg);
++	if (IS_ERR(ctx->phymap))
++		return PTR_ERR(ctx->phymap);
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(devm_regmap_init_encx24j600);
+ 
+diff --git a/drivers/net/ethernet/microchip/encx24j600.c b/drivers/net/ethernet/microchip/encx24j600.c
+index 2c0dcd7acf3fd..c95e29ae6f20c 100644
+--- a/drivers/net/ethernet/microchip/encx24j600.c
++++ b/drivers/net/ethernet/microchip/encx24j600.c
+@@ -1024,10 +1024,13 @@ static int encx24j600_spi_probe(struct spi_device *spi)
+ 	priv->speed = SPEED_100;
+ 
+ 	priv->ctx.spi = spi;
+-	devm_regmap_init_encx24j600(&spi->dev, &priv->ctx);
+ 	ndev->irq = spi->irq;
+ 	ndev->netdev_ops = &encx24j600_netdev_ops;
+ 
++	ret = devm_regmap_init_encx24j600(&spi->dev, &priv->ctx);
++	if (ret)
++		goto out_free;
++
+ 	mutex_init(&priv->lock);
+ 
+ 	/* Reset device and check if it is connected */
+diff --git a/drivers/net/ethernet/microchip/encx24j600_hw.h b/drivers/net/ethernet/microchip/encx24j600_hw.h
+index f604a260ede79..711147a159aa9 100644
+--- a/drivers/net/ethernet/microchip/encx24j600_hw.h
++++ b/drivers/net/ethernet/microchip/encx24j600_hw.h
+@@ -15,8 +15,8 @@ struct encx24j600_context {
+ 	int bank;
+ };
+ 
+-void devm_regmap_init_encx24j600(struct device *dev,
+-				 struct encx24j600_context *ctx);
++int devm_regmap_init_encx24j600(struct device *dev,
++				struct encx24j600_context *ctx);
+ 
+ /* Single-byte instructions */
+ #define BANK_SELECT(bank) (0xC0 | ((bank & (BANK_MASK >> BANK_SHIFT)) << 1))
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 5bfc7acfd13a9..8c45b236649a9 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -594,12 +594,12 @@ void ocelot_get_txtstamp(struct ocelot *ocelot)
+ 
+ 		spin_unlock_irqrestore(&port->tx_skbs.lock, flags);
+ 
++		if (WARN_ON(!skb_match))
++			continue;
++
+ 		/* Get the h/w timestamp */
+ 		ocelot_get_hwtimestamp(ocelot, &ts);
+ 
+-		if (unlikely(!skb_match))
+-			continue;
+-
+ 		/* Set the timestamp into the skb */
+ 		memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+ 		shhwtstamps.hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec);
+diff --git a/drivers/net/ethernet/neterion/s2io.c b/drivers/net/ethernet/neterion/s2io.c
+index d13d92bf74478..3cae8449fadb7 100644
+--- a/drivers/net/ethernet/neterion/s2io.c
++++ b/drivers/net/ethernet/neterion/s2io.c
+@@ -8555,7 +8555,7 @@ static void s2io_io_resume(struct pci_dev *pdev)
+ 			return;
+ 		}
+ 
+-		if (s2io_set_mac_addr(netdev, netdev->dev_addr) == FAILURE) {
++		if (do_s2io_prog_unicast(netdev, netdev->dev_addr) == FAILURE) {
+ 			s2io_card_down(sp);
+ 			pr_err("Can't restore mac addr after reset.\n");
+ 			return;
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.c b/drivers/net/ethernet/netronome/nfp/flower/main.c
+index c029950a81e20..ac1dcfa1d1790 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/main.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/main.c
+@@ -830,10 +830,6 @@ static int nfp_flower_init(struct nfp_app *app)
+ 	if (err)
+ 		goto err_cleanup;
+ 
+-	err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app);
+-	if (err)
+-		goto err_cleanup;
+-
+ 	if (app_priv->flower_ext_feats & NFP_FL_FEATS_VF_RLIM)
+ 		nfp_flower_qos_init(app);
+ 
+@@ -942,7 +938,20 @@ static int nfp_flower_start(struct nfp_app *app)
+ 			return err;
+ 	}
+ 
+-	return nfp_tunnel_config_start(app);
++	err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app);
++	if (err)
++		return err;
++
++	err = nfp_tunnel_config_start(app);
++	if (err)
++		goto err_tunnel_config;
++
++	return 0;
++
++err_tunnel_config:
++	flow_indr_dev_unregister(nfp_flower_indr_setup_tc_cb, app,
++				 nfp_flower_setup_indr_tc_release);
++	return err;
+ }
+ 
+ static void nfp_flower_stop(struct nfp_app *app)
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 6dc7ce6494488..1b44155fa24b2 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -1096,6 +1096,10 @@ static int ionic_ndo_addr_add(struct net_device *netdev, const u8 *addr)
+ 
+ static int ionic_addr_del(struct net_device *netdev, const u8 *addr)
+ {
++	/* Don't delete our own address from the uc list */
++	if (ether_addr_equal(addr, netdev->dev_addr))
++		return 0;
++
+ 	return ionic_lif_addr(netdev_priv(netdev), addr, false, true);
+ }
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
+index 6bb9ec98a12b5..41bc31e3f9356 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
+@@ -1295,6 +1295,7 @@ static int qed_slowpath_start(struct qed_dev *cdev,
+ 			} else {
+ 				DP_NOTICE(cdev,
+ 					  "Failed to acquire PTT for aRFS\n");
++				rc = -EINVAL;
+ 				goto err;
+ 			}
+ 		}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
+index 2bac49b49f739..fbf2deafe8ba3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
+@@ -218,11 +218,18 @@ static void dwmac1000_dump_dma_regs(void __iomem *ioaddr, u32 *reg_space)
+ 				readl(ioaddr + DMA_BUS_MODE + i * 4);
+ }
+ 
+-static void dwmac1000_get_hw_feature(void __iomem *ioaddr,
+-				     struct dma_features *dma_cap)
++static int dwmac1000_get_hw_feature(void __iomem *ioaddr,
++				    struct dma_features *dma_cap)
+ {
+ 	u32 hw_cap = readl(ioaddr + DMA_HW_FEATURE);
+ 
++	if (!hw_cap) {
++		/* 0x00000000 is the value read on old hardware that does not
++		 * implement this register
++		 */
++		return -EOPNOTSUPP;
++	}
++
+ 	dma_cap->mbps_10_100 = (hw_cap & DMA_HW_FEAT_MIISEL);
+ 	dma_cap->mbps_1000 = (hw_cap & DMA_HW_FEAT_GMIISEL) >> 1;
+ 	dma_cap->half_duplex = (hw_cap & DMA_HW_FEAT_HDSEL) >> 2;
+@@ -252,6 +259,8 @@ static void dwmac1000_get_hw_feature(void __iomem *ioaddr,
+ 	dma_cap->number_tx_channel = (hw_cap & DMA_HW_FEAT_TXCHCNT) >> 22;
+ 	/* Alternate (enhanced) DESC mode */
+ 	dma_cap->enh_desc = (hw_cap & DMA_HW_FEAT_ENHDESSEL) >> 24;
++
++	return 0;
+ }
+ 
+ static void dwmac1000_rx_watchdog(void __iomem *ioaddr, u32 riwt,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+index a7249e4071f16..935510cdc3ffd 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+@@ -337,8 +337,8 @@ static void dwmac4_dma_tx_chan_op_mode(void __iomem *ioaddr, int mode,
+ 	writel(mtl_tx_op, ioaddr +  MTL_CHAN_TX_OP_MODE(channel));
+ }
+ 
+-static void dwmac4_get_hw_feature(void __iomem *ioaddr,
+-				  struct dma_features *dma_cap)
++static int dwmac4_get_hw_feature(void __iomem *ioaddr,
++				 struct dma_features *dma_cap)
+ {
+ 	u32 hw_cap = readl(ioaddr + GMAC_HW_FEATURE0);
+ 
+@@ -425,6 +425,8 @@ static void dwmac4_get_hw_feature(void __iomem *ioaddr,
+ 	dma_cap->frpbs = (hw_cap & GMAC_HW_FEAT_FRPBS) >> 11;
+ 	dma_cap->frpsel = (hw_cap & GMAC_HW_FEAT_FRPSEL) >> 10;
+ 	dma_cap->dvlan = (hw_cap & GMAC_HW_FEAT_DVLAN) >> 5;
++
++	return 0;
+ }
+ 
+ /* Enable/disable TSO feature and set MSS */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+index 77308c5c5d294..a5583d706b9f2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+@@ -365,8 +365,8 @@ static int dwxgmac2_dma_interrupt(void __iomem *ioaddr,
+ 	return ret;
+ }
+ 
+-static void dwxgmac2_get_hw_feature(void __iomem *ioaddr,
+-				    struct dma_features *dma_cap)
++static int dwxgmac2_get_hw_feature(void __iomem *ioaddr,
++				   struct dma_features *dma_cap)
+ {
+ 	u32 hw_cap;
+ 
+@@ -439,6 +439,8 @@ static void dwxgmac2_get_hw_feature(void __iomem *ioaddr,
+ 	dma_cap->frpes = (hw_cap & XGMAC_HWFEAT_FRPES) >> 11;
+ 	dma_cap->frpbs = (hw_cap & XGMAC_HWFEAT_FRPPB) >> 9;
+ 	dma_cap->frpsel = (hw_cap & XGMAC_HWFEAT_FRPSEL) >> 3;
++
++	return 0;
+ }
+ 
+ static void dwxgmac2_rx_watchdog(void __iomem *ioaddr, u32 riwt, u32 nchan)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+index b0b84244ef107..8b7ec2457eba2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
++++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+@@ -203,8 +203,8 @@ struct stmmac_dma_ops {
+ 	int (*dma_interrupt) (void __iomem *ioaddr,
+ 			      struct stmmac_extra_stats *x, u32 chan);
+ 	/* If supported then get the optional core features */
+-	void (*get_hw_feature)(void __iomem *ioaddr,
+-			       struct dma_features *dma_cap);
++	int (*get_hw_feature)(void __iomem *ioaddr,
++			      struct dma_features *dma_cap);
+ 	/* Program the HW RX Watchdog */
+ 	void (*rx_watchdog)(void __iomem *ioaddr, u32 riwt, u32 number_chan);
+ 	void (*set_tx_ring_len)(void __iomem *ioaddr, u32 len, u32 chan);
+@@ -255,7 +255,7 @@ struct stmmac_dma_ops {
+ #define stmmac_dma_interrupt_status(__priv, __args...) \
+ 	stmmac_do_callback(__priv, dma, dma_interrupt, __args)
+ #define stmmac_get_hw_feature(__priv, __args...) \
+-	stmmac_do_void_callback(__priv, dma, get_hw_feature, __args)
++	stmmac_do_callback(__priv, dma, get_hw_feature, __args)
+ #define stmmac_rx_watchdog(__priv, __args...) \
+ 	stmmac_do_void_callback(__priv, dma, rx_watchdog, __args)
+ #define stmmac_set_tx_ring_len(__priv, __args...) \
+diff --git a/drivers/net/usb/Kconfig b/drivers/net/usb/Kconfig
+index b46993d5f9978..4efad42b9aa97 100644
+--- a/drivers/net/usb/Kconfig
++++ b/drivers/net/usb/Kconfig
+@@ -99,6 +99,10 @@ config USB_RTL8150
+ config USB_RTL8152
+ 	tristate "Realtek RTL8152/RTL8153 Based USB Ethernet Adapters"
+ 	select MII
++	select CRC32
++	select CRYPTO
++	select CRYPTO_HASH
++	select CRYPTO_SHA256
+ 	help
+ 	  This option adds support for Realtek RTL8152 based USB 2.0
+ 	  10/100 Ethernet adapters and RTL8153 based USB 3.0 10/100/1000
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index d79abb88a0c62..1b85349f57af0 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1342,7 +1342,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
+ 
+ 	memset(&cmd, 0, sizeof(cmd));
+ 	cmd.abort.opcode = nvme_admin_abort_cmd;
+-	cmd.abort.cid = req->tag;
++	cmd.abort.cid = nvme_cid(req);
+ 	cmd.abort.sqid = cpu_to_le16(nvmeq->qid);
+ 
+ 	dev_warn(nvmeq->dev->ctrl.device,
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index bb8fb2b3711d4..6b170083cd248 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -1229,7 +1229,8 @@ static void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell, void *buf)
+ 		*p-- = 0;
+ 
+ 	/* clear msb bits if any leftover in the last byte */
+-	*p &= GENMASK((cell->nbits%BITS_PER_BYTE) - 1, 0);
++	if (cell->nbits % BITS_PER_BYTE)
++		*p &= GENMASK((cell->nbits % BITS_PER_BYTE) - 1, 0);
+ }
+ 
+ static int __nvmem_cell_read(struct nvmem_device *nvmem,
+diff --git a/drivers/platform/mellanox/mlxreg-io.c b/drivers/platform/mellanox/mlxreg-io.c
+index 7646708d57e42..a916cd89cbbed 100644
+--- a/drivers/platform/mellanox/mlxreg-io.c
++++ b/drivers/platform/mellanox/mlxreg-io.c
+@@ -98,7 +98,7 @@ mlxreg_io_get_reg(void *regmap, struct mlxreg_core_data *data, u32 in_val,
+ 			if (ret)
+ 				goto access_error;
+ 
+-			*regval |= rol32(val, regsize * i);
++			*regval |= rol32(val, regsize * i * 8);
+ 		}
+ 	}
+ 
+@@ -141,7 +141,7 @@ mlxreg_io_attr_store(struct device *dev, struct device_attribute *attr,
+ 		return -EINVAL;
+ 
+ 	/* Convert buffer to input value. */
+-	ret = kstrtou32(buf, len, &input_val);
++	ret = kstrtou32(buf, 0, &input_val);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/platform/x86/intel_scu_ipc.c b/drivers/platform/x86/intel_scu_ipc.c
+index d9cf7f7602b0b..425d2064148f9 100644
+--- a/drivers/platform/x86/intel_scu_ipc.c
++++ b/drivers/platform/x86/intel_scu_ipc.c
+@@ -232,7 +232,7 @@ static inline u32 ipc_data_readl(struct intel_scu_ipc_dev *scu, u32 offset)
+ /* Wait till scu status is busy */
+ static inline int busy_loop(struct intel_scu_ipc_dev *scu)
+ {
+-	unsigned long end = jiffies + msecs_to_jiffies(IPC_TIMEOUT);
++	unsigned long end = jiffies + IPC_TIMEOUT;
+ 
+ 	do {
+ 		u32 status;
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index c028446c74604..b4d5930be2a95 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -1250,10 +1250,14 @@ static void bcm_qspi_hw_init(struct bcm_qspi *qspi)
+ 
+ static void bcm_qspi_hw_uninit(struct bcm_qspi *qspi)
+ {
++	u32 status = bcm_qspi_read(qspi, MSPI, MSPI_MSPI_STATUS);
++
+ 	bcm_qspi_write(qspi, MSPI, MSPI_SPCR2, 0);
+ 	if (has_bspi(qspi))
+ 		bcm_qspi_write(qspi, MSPI, MSPI_WRITE_LOCK, 0);
+ 
++	/* clear interrupt */
++	bcm_qspi_write(qspi, MSPI, MSPI_MSPI_STATUS, status & ~1);
+ }
+ 
+ static const struct spi_controller_mem_ops bcm_qspi_mem_ops = {
+@@ -1397,6 +1401,47 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 	if (!qspi->dev_ids)
+ 		return -ENOMEM;
+ 
++	/*
++	 * Some SoCs integrate spi controller (e.g., its interrupt bits)
++	 * in specific ways
++	 */
++	if (soc_intc) {
++		qspi->soc_intc = soc_intc;
++		soc_intc->bcm_qspi_int_set(soc_intc, MSPI_DONE, true);
++	} else {
++		qspi->soc_intc = NULL;
++	}
++
++	if (qspi->clk) {
++		ret = clk_prepare_enable(qspi->clk);
++		if (ret) {
++			dev_err(dev, "failed to prepare clock\n");
++			goto qspi_probe_err;
++		}
++		qspi->base_clk = clk_get_rate(qspi->clk);
++	} else {
++		qspi->base_clk = MSPI_BASE_FREQ;
++	}
++
++	if (data->has_mspi_rev) {
++		rev = bcm_qspi_read(qspi, MSPI, MSPI_REV);
++		/* some older revs do not have a MSPI_REV register */
++		if ((rev & 0xff) == 0xff)
++			rev = 0;
++	}
++
++	qspi->mspi_maj_rev = (rev >> 4) & 0xf;
++	qspi->mspi_min_rev = rev & 0xf;
++	qspi->mspi_spcr3_sysclk = data->has_spcr3_sysclk;
++
++	qspi->max_speed_hz = qspi->base_clk / (bcm_qspi_spbr_min(qspi) * 2);
++
++	/*
++	 * On SW resets it is possible to have the mask still enabled
++	 * Need to disable the mask and clear the status while we init
++	 */
++	bcm_qspi_hw_uninit(qspi);
++
+ 	for (val = 0; val < num_irqs; val++) {
+ 		irq = -1;
+ 		name = qspi_irq_tab[val].irq_name;
+@@ -1433,38 +1478,6 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 		goto qspi_probe_err;
+ 	}
+ 
+-	/*
+-	 * Some SoCs integrate spi controller (e.g., its interrupt bits)
+-	 * in specific ways
+-	 */
+-	if (soc_intc) {
+-		qspi->soc_intc = soc_intc;
+-		soc_intc->bcm_qspi_int_set(soc_intc, MSPI_DONE, true);
+-	} else {
+-		qspi->soc_intc = NULL;
+-	}
+-
+-	ret = clk_prepare_enable(qspi->clk);
+-	if (ret) {
+-		dev_err(dev, "failed to prepare clock\n");
+-		goto qspi_probe_err;
+-	}
+-
+-	qspi->base_clk = clk_get_rate(qspi->clk);
+-
+-	if (data->has_mspi_rev) {
+-		rev = bcm_qspi_read(qspi, MSPI, MSPI_REV);
+-		/* some older revs do not have a MSPI_REV register */
+-		if ((rev & 0xff) == 0xff)
+-			rev = 0;
+-	}
+-
+-	qspi->mspi_maj_rev = (rev >> 4) & 0xf;
+-	qspi->mspi_min_rev = rev & 0xf;
+-	qspi->mspi_spcr3_sysclk = data->has_spcr3_sysclk;
+-
+-	qspi->max_speed_hz = qspi->base_clk / (bcm_qspi_spbr_min(qspi) * 2);
+-
+ 	bcm_qspi_hw_init(qspi);
+ 	init_completion(&qspi->mspi_done);
+ 	init_completion(&qspi->bspi_done);
+diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c
+index 823a81d8ff0ed..f255a96ae5a48 100644
+--- a/drivers/tee/optee/core.c
++++ b/drivers/tee/optee/core.c
+@@ -585,6 +585,9 @@ static int optee_remove(struct platform_device *pdev)
+ {
+ 	struct optee *optee = platform_get_drvdata(pdev);
+ 
++	/* Unregister OP-TEE specific client devices on TEE bus */
++	optee_unregister_devices();
++
+ 	/*
+ 	 * Ask OP-TEE to free all cached shared memory objects to decrease
+ 	 * reference counters and also avoid wild pointers in secure world
+diff --git a/drivers/tee/optee/device.c b/drivers/tee/optee/device.c
+index 7a897d51969fc..031806468af48 100644
+--- a/drivers/tee/optee/device.c
++++ b/drivers/tee/optee/device.c
+@@ -53,6 +53,13 @@ static int get_devices(struct tee_context *ctx, u32 session,
+ 	return 0;
+ }
+ 
++static void optee_release_device(struct device *dev)
++{
++	struct tee_client_device *optee_device = to_tee_client_device(dev);
++
++	kfree(optee_device);
++}
++
+ static int optee_register_device(const uuid_t *device_uuid)
+ {
+ 	struct tee_client_device *optee_device = NULL;
+@@ -63,6 +70,7 @@ static int optee_register_device(const uuid_t *device_uuid)
+ 		return -ENOMEM;
+ 
+ 	optee_device->dev.bus = &tee_bus_type;
++	optee_device->dev.release = optee_release_device;
+ 	if (dev_set_name(&optee_device->dev, "optee-ta-%pUb", device_uuid)) {
+ 		kfree(optee_device);
+ 		return -ENOMEM;
+@@ -154,3 +162,17 @@ int optee_enumerate_devices(u32 func)
+ {
+ 	return  __optee_enumerate_devices(func);
+ }
++
++static int __optee_unregister_device(struct device *dev, void *data)
++{
++	if (!strncmp(dev_name(dev), "optee-ta", strlen("optee-ta")))
++		device_unregister(dev);
++
++	return 0;
++}
++
++void optee_unregister_devices(void)
++{
++	bus_for_each_dev(&tee_bus_type, NULL, NULL,
++			 __optee_unregister_device);
++}
+diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h
+index dbdd367be1568..f6bb4a763ba94 100644
+--- a/drivers/tee/optee/optee_private.h
++++ b/drivers/tee/optee/optee_private.h
+@@ -184,6 +184,7 @@ void optee_fill_pages_list(u64 *dst, struct page **pages, int num_pages,
+ #define PTA_CMD_GET_DEVICES		0x0
+ #define PTA_CMD_GET_DEVICES_SUPP	0x1
+ int optee_enumerate_devices(u32 func);
++void optee_unregister_devices(void);
+ 
+ /*
+  * Small helpers
+diff --git a/drivers/usb/host/xhci-dbgtty.c b/drivers/usb/host/xhci-dbgtty.c
+index ae4e4ab638b55..2a53b28319999 100644
+--- a/drivers/usb/host/xhci-dbgtty.c
++++ b/drivers/usb/host/xhci-dbgtty.c
+@@ -408,40 +408,38 @@ static int xhci_dbc_tty_register_device(struct xhci_dbc *dbc)
+ 		return -EBUSY;
+ 
+ 	xhci_dbc_tty_init_port(dbc, port);
+-	tty_dev = tty_port_register_device(&port->port,
+-					   dbc_tty_driver, 0, NULL);
+-	if (IS_ERR(tty_dev)) {
+-		ret = PTR_ERR(tty_dev);
+-		goto register_fail;
+-	}
+ 
+ 	ret = kfifo_alloc(&port->write_fifo, DBC_WRITE_BUF_SIZE, GFP_KERNEL);
+ 	if (ret)
+-		goto buf_alloc_fail;
++		goto err_exit_port;
+ 
+ 	ret = xhci_dbc_alloc_requests(dbc, BULK_IN, &port->read_pool,
+ 				      dbc_read_complete);
+ 	if (ret)
+-		goto request_fail;
++		goto err_free_fifo;
+ 
+ 	ret = xhci_dbc_alloc_requests(dbc, BULK_OUT, &port->write_pool,
+ 				      dbc_write_complete);
+ 	if (ret)
+-		goto request_fail;
++		goto err_free_requests;
++
++	tty_dev = tty_port_register_device(&port->port,
++					   dbc_tty_driver, 0, NULL);
++	if (IS_ERR(tty_dev)) {
++		ret = PTR_ERR(tty_dev);
++		goto err_free_requests;
++	}
+ 
+ 	port->registered = true;
+ 
+ 	return 0;
+ 
+-request_fail:
++err_free_requests:
+ 	xhci_dbc_free_requests(&port->read_pool);
+ 	xhci_dbc_free_requests(&port->write_pool);
++err_free_fifo:
+ 	kfifo_free(&port->write_fifo);
+-
+-buf_alloc_fail:
+-	tty_unregister_device(dbc_tty_driver, 0);
+-
+-register_fail:
++err_exit_port:
+ 	xhci_dbc_tty_exit_port(port);
+ 
+ 	dev_err(dbc->dev, "can't register tty port, err %d\n", ret);
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 119d1a8fbb194..2299866dc82ff 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -30,6 +30,7 @@
+ #define PCI_VENDOR_ID_FRESCO_LOGIC	0x1b73
+ #define PCI_DEVICE_ID_FRESCO_LOGIC_PDK	0x1000
+ #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1009	0x1009
++#define PCI_DEVICE_ID_FRESCO_LOGIC_FL1100	0x1100
+ #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1400	0x1400
+ 
+ #define PCI_VENDOR_ID_ETRON		0x1b6f
+@@ -112,6 +113,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	/* Look for vendor-specific quirks */
+ 	if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC &&
+ 			(pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK ||
++			 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 ||
+ 			 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1400)) {
+ 		if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK &&
+ 				pdev->revision == 0x0) {
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index dc2068e3bedb7..ec8f2910faf97 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -342,16 +342,22 @@ static void xhci_handle_stopped_cmd_ring(struct xhci_hcd *xhci,
+ /* Must be called with xhci->lock held, releases and aquires lock back */
+ static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags)
+ {
+-	u64 temp_64;
++	u32 temp_32;
+ 	int ret;
+ 
+ 	xhci_dbg(xhci, "Abort command ring\n");
+ 
+ 	reinit_completion(&xhci->cmd_ring_stop_completion);
+ 
+-	temp_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring);
+-	xhci_write_64(xhci, temp_64 | CMD_RING_ABORT,
+-			&xhci->op_regs->cmd_ring);
++	/*
++	 * The control bits like command stop, abort are located in lower
++	 * dword of the command ring control register. Limit the write
++	 * to the lower dword to avoid corrupting the command ring pointer
++	 * in case if the command ring is stopped by the time upper dword
++	 * is written.
++	 */
++	temp_32 = readl(&xhci->op_regs->cmd_ring);
++	writel(temp_32 | CMD_RING_ABORT, &xhci->op_regs->cmd_ring);
+ 
+ 	/* Section 4.6.1.2 of xHCI 1.0 spec says software should also time the
+ 	 * completion of the Command Abort operation. If CRR is not negated in 5
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 6389dc99bc9a4..0d6dc2e20f2aa 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -3173,10 +3173,13 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
+ 		return;
+ 
+ 	/* Bail out if toggle is already being cleared by a endpoint reset */
++	spin_lock_irqsave(&xhci->lock, flags);
+ 	if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) {
+ 		ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE;
++		spin_unlock_irqrestore(&xhci->lock, flags);
+ 		return;
+ 	}
++	spin_unlock_irqrestore(&xhci->lock, flags);
+ 	/* Only interrupt and bulk ep's use data toggle, USB2 spec 5.5.4-> */
+ 	if (usb_endpoint_xfer_control(&host_ep->desc) ||
+ 	    usb_endpoint_xfer_isoc(&host_ep->desc))
+@@ -3262,8 +3265,10 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
+ 	xhci_free_command(xhci, cfg_cmd);
+ cleanup:
+ 	xhci_free_command(xhci, stop_cmd);
++	spin_lock_irqsave(&xhci->lock, flags);
+ 	if (ep->ep_state & EP_SOFT_CLEAR_TOGGLE)
+ 		ep->ep_state &= ~EP_SOFT_CLEAR_TOGGLE;
++	spin_unlock_irqrestore(&xhci->lock, flags);
+ }
+ 
+ static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
+diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c
+index ce9fc46c92661..b5935834f9d24 100644
+--- a/drivers/usb/musb/musb_dsps.c
++++ b/drivers/usb/musb/musb_dsps.c
+@@ -899,11 +899,13 @@ static int dsps_probe(struct platform_device *pdev)
+ 	if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) {
+ 		ret = dsps_setup_optional_vbus_irq(pdev, glue);
+ 		if (ret)
+-			goto err;
++			goto unregister_pdev;
+ 	}
+ 
+ 	return 0;
+ 
++unregister_pdev:
++	platform_device_unregister(glue->musb);
+ err:
+ 	pm_runtime_disable(&pdev->dev);
+ 	iounmap(glue->usbss_base);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 1e990a8264a53..c7356718a7c66 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -246,11 +246,13 @@ static void option_instat_callback(struct urb *urb);
+ /* These Quectel products use Quectel's vendor ID */
+ #define QUECTEL_PRODUCT_EC21			0x0121
+ #define QUECTEL_PRODUCT_EC25			0x0125
++#define QUECTEL_PRODUCT_EG91			0x0191
+ #define QUECTEL_PRODUCT_EG95			0x0195
+ #define QUECTEL_PRODUCT_BG96			0x0296
+ #define QUECTEL_PRODUCT_EP06			0x0306
+ #define QUECTEL_PRODUCT_EM12			0x0512
+ #define QUECTEL_PRODUCT_RM500Q			0x0800
++#define QUECTEL_PRODUCT_EC200S_CN		0x6002
+ #define QUECTEL_PRODUCT_EC200T			0x6026
+ 
+ #define CMOTECH_VENDOR_ID			0x16d8
+@@ -1111,6 +1113,9 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25, 0xff, 0xff, 0xff),
+ 	  .driver_info = NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG91, 0xff, 0xff, 0xff),
++	  .driver_info = NUMEP2 },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG91, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0xff, 0xff),
+ 	  .driver_info = NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0, 0) },
+@@ -1128,6 +1133,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10),
+ 	  .driver_info = ZLP },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
+ 
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+@@ -1227,6 +1233,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1203, 0xff),	/* Telit LE910Cx (RNDIS) */
+ 	  .driver_info = NCTRL(2) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1204, 0xff),	/* Telit LE910Cx (MBIM) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920),
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index 83da8236e3c8b..c18bf8164bc2e 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -165,6 +165,7 @@ static const struct usb_device_id id_table[] = {
+ 	{DEVICE_SWI(0x1199, 0x907b)},	/* Sierra Wireless EM74xx */
+ 	{DEVICE_SWI(0x1199, 0x9090)},	/* Sierra Wireless EM7565 QDL */
+ 	{DEVICE_SWI(0x1199, 0x9091)},	/* Sierra Wireless EM7565 */
++	{DEVICE_SWI(0x1199, 0x90d2)},	/* Sierra Wireless EM9191 QDL */
+ 	{DEVICE_SWI(0x413c, 0x81a2)},	/* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */
+ 	{DEVICE_SWI(0x413c, 0x81a3)},	/* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */
+ 	{DEVICE_SWI(0x413c, 0x81a4)},	/* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index c4d53ff06bf85..fdeb20f2f174c 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -325,7 +325,7 @@ static long vhost_vdpa_set_config_call(struct vhost_vdpa *v, u32 __user *argp)
+ 	struct eventfd_ctx *ctx;
+ 
+ 	cb.callback = vhost_vdpa_config_cb;
+-	cb.private = v->vdpa;
++	cb.private = v;
+ 	if (copy_from_user(&fd, argp, sizeof(fd)))
+ 		return  -EFAULT;
+ 
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index 84b5dec5d29cd..5c53098755a35 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -240,6 +240,17 @@ static int virtio_dev_probe(struct device *_d)
+ 		driver_features_legacy = driver_features;
+ 	}
+ 
++	/*
++	 * Some devices detect legacy solely via F_VERSION_1. Write
++	 * F_VERSION_1 to force LE config space accesses before FEATURES_OK for
++	 * these when needed.
++	 */
++	if (drv->validate && !virtio_legacy_is_little_endian()
++			  && device_features & BIT_ULL(VIRTIO_F_VERSION_1)) {
++		dev->features = BIT_ULL(VIRTIO_F_VERSION_1);
++		dev->config->finalize_features(dev);
++	}
++
+ 	if (device_features & (1ULL << VIRTIO_F_VERSION_1))
+ 		dev->features = driver_features & device_features;
+ 	else
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index c346c46020e5e..284294620e9fa 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4715,6 +4715,7 @@ struct extent_buffer *btrfs_alloc_tree_block(struct btrfs_trans_handle *trans,
+ out_free_delayed:
+ 	btrfs_free_delayed_extent_op(extent_op);
+ out_free_buf:
++	btrfs_tree_unlock(buf);
+ 	free_extent_buffer(buf);
+ out_free_reserved:
+ 	btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 0);
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 6ab91661cd261..f59ec55e5feb2 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -710,8 +710,7 @@ int __btrfs_drop_extents(struct btrfs_trans_handle *trans,
+ 	if (start >= inode->disk_i_size && !replace_extent)
+ 		modify_tree = 0;
+ 
+-	update_refs = (test_bit(BTRFS_ROOT_SHAREABLE, &root->state) ||
+-		       root == fs_info->tree_root);
++	update_refs = (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID);
+ 	while (1) {
+ 		recow = 0;
+ 		ret = btrfs_lookup_file_extent(trans, root, path, ino,
+@@ -2662,14 +2661,16 @@ int btrfs_replace_file_extents(struct inode *inode, struct btrfs_path *path,
+ 					   1, 0, 0, NULL);
+ 		if (ret != -ENOSPC) {
+ 			/*
+-			 * When cloning we want to avoid transaction aborts when
+-			 * nothing was done and we are attempting to clone parts
+-			 * of inline extents, in such cases -EOPNOTSUPP is
+-			 * returned by __btrfs_drop_extents() without having
+-			 * changed anything in the file.
++			 * The only time we don't want to abort is if we are
++			 * attempting to clone a partial inline extent, in which
++			 * case we'll get EOPNOTSUPP.  However if we aren't
++			 * clone we need to abort no matter what, because if we
++			 * got EOPNOTSUPP via prealloc then we messed up and
++			 * need to abort.
+ 			 */
+-			if (extent_info && !extent_info->is_new_extent &&
+-			    ret && ret != -EOPNOTSUPP)
++			if (ret &&
++			    (ret != -EOPNOTSUPP ||
++			     (extent_info && extent_info->is_new_extent)))
+ 				btrfs_abort_transaction(trans, ret);
+ 			break;
+ 		}
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 7bf3936aceda2..c3bb5c4375ab0 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1137,7 +1137,10 @@ next:
+ 	/* look for a conflicting sequence number */
+ 	di = btrfs_lookup_dir_index_item(trans, root, path, btrfs_ino(dir),
+ 					 ref_index, name, namelen, 0);
+-	if (di && !IS_ERR(di)) {
++	if (IS_ERR(di)) {
++		if (PTR_ERR(di) != -ENOENT)
++			return PTR_ERR(di);
++	} else if (di) {
+ 		ret = drop_one_dir_item(trans, root, path, dir, di);
+ 		if (ret)
+ 			return ret;
+@@ -1147,7 +1150,9 @@ next:
+ 	/* look for a conflicting name */
+ 	di = btrfs_lookup_dir_item(trans, root, path, btrfs_ino(dir),
+ 				   name, namelen, 0);
+-	if (di && !IS_ERR(di)) {
++	if (IS_ERR(di)) {
++		return PTR_ERR(di);
++	} else if (di) {
+ 		ret = drop_one_dir_item(trans, root, path, dir, di);
+ 		if (ret)
+ 			return ret;
+@@ -1897,8 +1902,8 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans,
+ 	struct btrfs_key log_key;
+ 	struct inode *dir;
+ 	u8 log_type;
+-	int exists;
+-	int ret = 0;
++	bool exists;
++	int ret;
+ 	bool update_size = (key->type == BTRFS_DIR_INDEX_KEY);
+ 	bool name_added = false;
+ 
+@@ -1918,12 +1923,12 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans,
+ 		   name_len);
+ 
+ 	btrfs_dir_item_key_to_cpu(eb, di, &log_key);
+-	exists = btrfs_lookup_inode(trans, root, path, &log_key, 0);
+-	if (exists == 0)
+-		exists = 1;
+-	else
+-		exists = 0;
++	ret = btrfs_lookup_inode(trans, root, path, &log_key, 0);
+ 	btrfs_release_path(path);
++	if (ret < 0)
++		goto out;
++	exists = (ret == 0);
++	ret = 0;
+ 
+ 	if (key->type == BTRFS_DIR_ITEM_KEY) {
+ 		dst_di = btrfs_lookup_dir_item(trans, root, path, key->objectid,
+@@ -1938,7 +1943,14 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans,
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+-	if (IS_ERR_OR_NULL(dst_di)) {
++
++	if (dst_di == ERR_PTR(-ENOENT))
++		dst_di = NULL;
++
++	if (IS_ERR(dst_di)) {
++		ret = PTR_ERR(dst_di);
++		goto out;
++	} else if (!dst_di) {
+ 		/* we need a sequence number to insert, so we only
+ 		 * do inserts for the BTRFS_DIR_INDEX_KEY types
+ 		 */
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 70a3664785f80..f5e829e12a76d 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -9274,16 +9274,22 @@ struct mlx5_ifc_pcmr_reg_bits {
+ 	u8         reserved_at_0[0x8];
+ 	u8         local_port[0x8];
+ 	u8         reserved_at_10[0x10];
++
+ 	u8         entropy_force_cap[0x1];
+ 	u8         entropy_calc_cap[0x1];
+ 	u8         entropy_gre_calc_cap[0x1];
+-	u8         reserved_at_23[0x1b];
++	u8         reserved_at_23[0xf];
++	u8         rx_ts_over_crc_cap[0x1];
++	u8         reserved_at_33[0xb];
+ 	u8         fcs_cap[0x1];
+ 	u8         reserved_at_3f[0x1];
++
+ 	u8         entropy_force[0x1];
+ 	u8         entropy_calc[0x1];
+ 	u8         entropy_gre_calc[0x1];
+-	u8         reserved_at_43[0x1b];
++	u8         reserved_at_43[0xf];
++	u8         rx_ts_over_crc[0x1];
++	u8         reserved_at_53[0xb];
+ 	u8         fcs_chk[0x1];
+ 	u8         reserved_at_5f[0x1];
+ };
+diff --git a/net/nfc/af_nfc.c b/net/nfc/af_nfc.c
+index 4a9e72073564a..581358dcbdf8d 100644
+--- a/net/nfc/af_nfc.c
++++ b/net/nfc/af_nfc.c
+@@ -60,6 +60,9 @@ int nfc_proto_register(const struct nfc_protocol *nfc_proto)
+ 		proto_tab[nfc_proto->id] = nfc_proto;
+ 	write_unlock(&proto_tab_lock);
+ 
++	if (rc)
++		proto_unregister(nfc_proto->proto);
++
+ 	return rc;
+ }
+ EXPORT_SYMBOL(nfc_proto_register);
+diff --git a/net/nfc/digital_core.c b/net/nfc/digital_core.c
+index e3599ed4a7a87..9c9caa307cf16 100644
+--- a/net/nfc/digital_core.c
++++ b/net/nfc/digital_core.c
+@@ -277,6 +277,7 @@ int digital_tg_configure_hw(struct nfc_digital_dev *ddev, int type, int param)
+ static int digital_tg_listen_mdaa(struct nfc_digital_dev *ddev, u8 rf_tech)
+ {
+ 	struct digital_tg_mdaa_params *params;
++	int rc;
+ 
+ 	params = kzalloc(sizeof(*params), GFP_KERNEL);
+ 	if (!params)
+@@ -291,8 +292,12 @@ static int digital_tg_listen_mdaa(struct nfc_digital_dev *ddev, u8 rf_tech)
+ 	get_random_bytes(params->nfcid2 + 2, NFC_NFCID2_MAXSIZE - 2);
+ 	params->sc = DIGITAL_SENSF_FELICA_SC;
+ 
+-	return digital_send_cmd(ddev, DIGITAL_CMD_TG_LISTEN_MDAA, NULL, params,
+-				500, digital_tg_recv_atr_req, NULL);
++	rc = digital_send_cmd(ddev, DIGITAL_CMD_TG_LISTEN_MDAA, NULL, params,
++			      500, digital_tg_recv_atr_req, NULL);
++	if (rc)
++		kfree(params);
++
++	return rc;
+ }
+ 
+ static int digital_tg_listen_md(struct nfc_digital_dev *ddev, u8 rf_tech)
+diff --git a/net/nfc/digital_technology.c b/net/nfc/digital_technology.c
+index 84d2345c75a3f..3adf4589852af 100644
+--- a/net/nfc/digital_technology.c
++++ b/net/nfc/digital_technology.c
+@@ -465,8 +465,12 @@ static int digital_in_send_sdd_req(struct nfc_digital_dev *ddev,
+ 	skb_put_u8(skb, sel_cmd);
+ 	skb_put_u8(skb, DIGITAL_SDD_REQ_SEL_PAR);
+ 
+-	return digital_in_send_cmd(ddev, skb, 30, digital_in_recv_sdd_res,
+-				   target);
++	rc = digital_in_send_cmd(ddev, skb, 30, digital_in_recv_sdd_res,
++				 target);
++	if (rc)
++		kfree_skb(skb);
++
++	return rc;
+ }
+ 
+ static void digital_in_recv_sens_res(struct nfc_digital_dev *ddev, void *arg,
+diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
+index 8766ab5b87880..5eb3b1b7ae5e7 100644
+--- a/net/sched/sch_mqprio.c
++++ b/net/sched/sch_mqprio.c
+@@ -529,22 +529,28 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
+ 		for (i = tc.offset; i < tc.offset + tc.count; i++) {
+ 			struct netdev_queue *q = netdev_get_tx_queue(dev, i);
+ 			struct Qdisc *qdisc = rtnl_dereference(q->qdisc);
+-			struct gnet_stats_basic_cpu __percpu *cpu_bstats = NULL;
+-			struct gnet_stats_queue __percpu *cpu_qstats = NULL;
+ 
+ 			spin_lock_bh(qdisc_lock(qdisc));
++
+ 			if (qdisc_is_percpu_stats(qdisc)) {
+-				cpu_bstats = qdisc->cpu_bstats;
+-				cpu_qstats = qdisc->cpu_qstats;
++				qlen = qdisc_qlen_sum(qdisc);
++
++				__gnet_stats_copy_basic(NULL, &bstats,
++							qdisc->cpu_bstats,
++							&qdisc->bstats);
++				__gnet_stats_copy_queue(&qstats,
++							qdisc->cpu_qstats,
++							&qdisc->qstats,
++							qlen);
++			} else {
++				qlen		+= qdisc->q.qlen;
++				bstats.bytes	+= qdisc->bstats.bytes;
++				bstats.packets	+= qdisc->bstats.packets;
++				qstats.backlog	+= qdisc->qstats.backlog;
++				qstats.drops	+= qdisc->qstats.drops;
++				qstats.requeues	+= qdisc->qstats.requeues;
++				qstats.overlimits += qdisc->qstats.overlimits;
+ 			}
+-
+-			qlen = qdisc_qlen_sum(qdisc);
+-			__gnet_stats_copy_basic(NULL, &sch->bstats,
+-						cpu_bstats, &qdisc->bstats);
+-			__gnet_stats_copy_queue(&sch->qstats,
+-						cpu_qstats,
+-						&qdisc->qstats,
+-						qlen);
+ 			spin_unlock_bh(qdisc_lock(qdisc));
+ 		}
+ 
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index fa0d96320baae..64e0f4864b733 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -3651,7 +3651,7 @@ struct sctp_chunk *sctp_make_strreset_req(
+ 	outlen = (sizeof(outreq) + stream_len) * out;
+ 	inlen = (sizeof(inreq) + stream_len) * in;
+ 
+-	retval = sctp_make_reconf(asoc, outlen + inlen);
++	retval = sctp_make_reconf(asoc, SCTP_PAD4(outlen) + SCTP_PAD4(inlen));
+ 	if (!retval)
+ 		return NULL;
+ 
+diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
+index 4f84657f55c23..f459ae883a0a6 100755
+--- a/scripts/recordmcount.pl
++++ b/scripts/recordmcount.pl
+@@ -222,7 +222,7 @@ if ($arch =~ /(x86(_64)?)|(i386)/) {
+ $local_regex = "^[0-9a-fA-F]+\\s+t\\s+(\\S+)";
+ $weak_regex = "^[0-9a-fA-F]+\\s+([wW])\\s+(\\S+)";
+ $section_regex = "Disassembly of section\\s+(\\S+):";
+-$function_regex = "^([0-9a-fA-F]+)\\s+<(.*?)>:";
++$function_regex = "^([0-9a-fA-F]+)\\s+<([^^]*?)>:";
+ $mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s(mcount|__fentry__)\$";
+ $section_type = '@progbits';
+ $mcount_adjust = 0;
+diff --git a/sound/core/pcm_compat.c b/sound/core/pcm_compat.c
+index 590a46a9e78de..a226d8f240287 100644
+--- a/sound/core/pcm_compat.c
++++ b/sound/core/pcm_compat.c
+@@ -466,6 +466,76 @@ static int snd_pcm_ioctl_sync_ptr_x32(struct snd_pcm_substream *substream,
+ }
+ #endif /* CONFIG_X86_X32 */
+ 
++#ifdef __BIG_ENDIAN
++typedef char __pad_before_u32[4];
++typedef char __pad_after_u32[0];
++#else
++typedef char __pad_before_u32[0];
++typedef char __pad_after_u32[4];
++#endif
++
++/* PCM 2.0.15 API definition had a bug in mmap control; it puts the avail_min
++ * at the wrong offset due to a typo in padding type.
++ * The bug hits only 32bit.
++ * A workaround for incorrect read/write is needed only in 32bit compat mode.
++ */
++struct __snd_pcm_mmap_control64_buggy {
++	__pad_before_u32 __pad1;
++	__u32 appl_ptr;
++	__pad_before_u32 __pad2;	/* SiC! here is the bug */
++	__pad_before_u32 __pad3;
++	__u32 avail_min;
++	__pad_after_uframe __pad4;
++};
++
++static int snd_pcm_ioctl_sync_ptr_buggy(struct snd_pcm_substream *substream,
++					struct snd_pcm_sync_ptr __user *_sync_ptr)
++{
++	struct snd_pcm_runtime *runtime = substream->runtime;
++	struct snd_pcm_sync_ptr sync_ptr;
++	struct __snd_pcm_mmap_control64_buggy *sync_cp;
++	volatile struct snd_pcm_mmap_status *status;
++	volatile struct snd_pcm_mmap_control *control;
++	int err;
++
++	memset(&sync_ptr, 0, sizeof(sync_ptr));
++	sync_cp = (struct __snd_pcm_mmap_control64_buggy *)&sync_ptr.c.control;
++	if (get_user(sync_ptr.flags, (unsigned __user *)&(_sync_ptr->flags)))
++		return -EFAULT;
++	if (copy_from_user(sync_cp, &(_sync_ptr->c.control), sizeof(*sync_cp)))
++		return -EFAULT;
++	status = runtime->status;
++	control = runtime->control;
++	if (sync_ptr.flags & SNDRV_PCM_SYNC_PTR_HWSYNC) {
++		err = snd_pcm_hwsync(substream);
++		if (err < 0)
++			return err;
++	}
++	snd_pcm_stream_lock_irq(substream);
++	if (!(sync_ptr.flags & SNDRV_PCM_SYNC_PTR_APPL)) {
++		err = pcm_lib_apply_appl_ptr(substream, sync_cp->appl_ptr);
++		if (err < 0) {
++			snd_pcm_stream_unlock_irq(substream);
++			return err;
++		}
++	} else {
++		sync_cp->appl_ptr = control->appl_ptr;
++	}
++	if (!(sync_ptr.flags & SNDRV_PCM_SYNC_PTR_AVAIL_MIN))
++		control->avail_min = sync_cp->avail_min;
++	else
++		sync_cp->avail_min = control->avail_min;
++	sync_ptr.s.status.state = status->state;
++	sync_ptr.s.status.hw_ptr = status->hw_ptr;
++	sync_ptr.s.status.tstamp = status->tstamp;
++	sync_ptr.s.status.suspended_state = status->suspended_state;
++	sync_ptr.s.status.audio_tstamp = status->audio_tstamp;
++	snd_pcm_stream_unlock_irq(substream);
++	if (copy_to_user(_sync_ptr, &sync_ptr, sizeof(sync_ptr)))
++		return -EFAULT;
++	return 0;
++}
++
+ /*
+  */
+ enum {
+@@ -535,7 +605,7 @@ static long snd_pcm_ioctl_compat(struct file *file, unsigned int cmd, unsigned l
+ 		if (in_x32_syscall())
+ 			return snd_pcm_ioctl_sync_ptr_x32(substream, argp);
+ #endif /* CONFIG_X86_X32 */
+-		return snd_pcm_common_ioctl(file, substream, cmd, argp);
++		return snd_pcm_ioctl_sync_ptr_buggy(substream, argp);
+ 	case SNDRV_PCM_IOCTL_HW_REFINE32:
+ 		return snd_pcm_ioctl_hw_params_compat(substream, 1, argp);
+ 	case SNDRV_PCM_IOCTL_HW_PARAMS32:
+diff --git a/sound/core/seq_device.c b/sound/core/seq_device.c
+index 7ed13cb32ef82..f22d3bbfcaa54 100644
+--- a/sound/core/seq_device.c
++++ b/sound/core/seq_device.c
+@@ -147,6 +147,8 @@ static int snd_seq_device_dev_free(struct snd_device *device)
+ 	struct snd_seq_device *dev = device->device_data;
+ 
+ 	cancel_autoload_drivers();
++	if (dev->private_free)
++		dev->private_free(dev);
+ 	put_device(&dev->dev);
+ 	return 0;
+ }
+@@ -174,11 +176,7 @@ static int snd_seq_device_dev_disconnect(struct snd_device *device)
+ 
+ static void snd_seq_dev_release(struct device *dev)
+ {
+-	struct snd_seq_device *sdev = to_seq_dev(dev);
+-
+-	if (sdev->private_free)
+-		sdev->private_free(sdev);
+-	kfree(sdev);
++	kfree(to_seq_dev(dev));
+ }
+ 
+ /*
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 9f37adb2b4d09..c36239cea474f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -527,6 +527,8 @@ static void alc_shutup_pins(struct hda_codec *codec)
+ 	struct alc_spec *spec = codec->spec;
+ 
+ 	switch (codec->core.vendor_id) {
++	case 0x10ec0236:
++	case 0x10ec0256:
+ 	case 0x10ec0283:
+ 	case 0x10ec0286:
+ 	case 0x10ec0288:
+@@ -2549,7 +2551,8 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170SM", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x7715, "Clevo X170KM-G", ALC1220_FIXUP_CLEVO_PB51ED),
+ 	SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x9506, "Clevo P955HQ", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x950a, "Clevo P955H[PR]", ALC1220_FIXUP_CLEVO_P950),
+@@ -3531,7 +3534,8 @@ static void alc256_shutup(struct hda_codec *codec)
+ 	/* If disable 3k pulldown control for alc257, the Mic detection will not work correctly
+ 	 * when booting with headset plugged. So skip setting it for the codec alc257
+ 	 */
+-	if (codec->core.vendor_id != 0x10ec0257)
++	if (spec->codec_variant != ALC269_TYPE_ALC257 &&
++	    spec->codec_variant != ALC269_TYPE_ALC256)
+ 		alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+ 
+ 	if (!spec->no_shutup_pins)
+@@ -6395,6 +6399,24 @@ static void alc287_fixup_legion_15imhg05_speakers(struct hda_codec *codec,
+ /* for alc285_fixup_ideapad_s740_coef() */
+ #include "ideapad_s740_helper.c"
+ 
++static void alc256_fixup_tongfang_reset_persistent_settings(struct hda_codec *codec,
++							    const struct hda_fixup *fix,
++							    int action)
++{
++	/*
++	* A certain other OS sets these coeffs to different values. On at least one TongFang
++	* barebone these settings might survive even a cold reboot. So to restore a clean slate the
++	* values are explicitly reset to default here. Without this, the external microphone is
++	* always in a plugged-in state, while the internal microphone is always in an unplugged
++	* state, breaking the ability to use the internal microphone.
++	*/
++	alc_write_coef_idx(codec, 0x24, 0x0000);
++	alc_write_coef_idx(codec, 0x26, 0x0000);
++	alc_write_coef_idx(codec, 0x29, 0x3000);
++	alc_write_coef_idx(codec, 0x37, 0xfe05);
++	alc_write_coef_idx(codec, 0x45, 0x5089);
++}
++
+ enum {
+ 	ALC269_FIXUP_GPIO2,
+ 	ALC269_FIXUP_SONY_VAIO,
+@@ -6608,7 +6630,8 @@ enum {
+ 	ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS,
+ 	ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
+ 	ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
+-	ALC287_FIXUP_13S_GEN2_SPEAKERS
++	ALC287_FIXUP_13S_GEN2_SPEAKERS,
++	ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8283,7 +8306,7 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.v.verbs = (const struct hda_verb[]) {
+ 			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
+ 			{ 0x20, AC_VERB_SET_PROC_COEF, 0x41 },
+-			{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+ 			{ 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
+ 			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ 			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+@@ -8300,6 +8323,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_HEADSET_MODE,
+ 	},
++	[ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc256_fixup_tongfang_reset_persistent_settings,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8391,6 +8418,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0a30, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0a58, "Dell", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0a61, "Dell XPS 15 9510", ALC289_FIXUP_DUAL_SPK),
++	SND_PCI_QUIRK(0x1028, 0x0a62, "Dell Precision 5560", ALC289_FIXUP_DUAL_SPK),
++	SND_PCI_QUIRK(0x1028, 0x0a9d, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -8726,6 +8756,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
+ 	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+ 	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
++	SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS),
+ 	SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+@@ -10098,6 +10129,9 @@ enum {
+ 	ALC671_FIXUP_HP_HEADSET_MIC2,
+ 	ALC662_FIXUP_ACER_X2660G_HEADSET_MODE,
+ 	ALC662_FIXUP_ACER_NITRO_HEADSET_MODE,
++	ALC668_FIXUP_ASUS_NO_HEADSET_MIC,
++	ALC668_FIXUP_HEADSET_MIC,
++	ALC668_FIXUP_MIC_DET_COEF,
+ };
+ 
+ static const struct hda_fixup alc662_fixups[] = {
+@@ -10481,6 +10515,29 @@ static const struct hda_fixup alc662_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC662_FIXUP_USI_FUNC
+ 	},
++	[ALC668_FIXUP_ASUS_NO_HEADSET_MIC] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x1b, 0x04a1112c },
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC668_FIXUP_HEADSET_MIC
++	},
++	[ALC668_FIXUP_HEADSET_MIC] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269_fixup_headset_mic,
++		.chained = true,
++		.chain_id = ALC668_FIXUP_MIC_DET_COEF
++	},
++	[ALC668_FIXUP_MIC_DET_COEF] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x15 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0d60 },
++			{}
++		},
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+@@ -10516,6 +10573,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
+ 	SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51),
+ 	SND_PCI_QUIRK(0x1043, 0x17bd, "ASUS N751", ALC668_FIXUP_ASUS_Nx51),
++	SND_PCI_QUIRK(0x1043, 0x185d, "ASUS G551JW", ALC668_FIXUP_ASUS_NO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1963, "ASUS X71SL", ALC662_FIXUP_ASUS_MODE8),
+ 	SND_PCI_QUIRK(0x1043, 0x1b73, "ASUS N55SF", ALC662_FIXUP_BASS_16),
+ 	SND_PCI_QUIRK(0x1043, 0x1bf3, "ASUS N76VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 5728bf722c88a..7c649cd380493 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -77,6 +77,48 @@
+ /* E-Mu 0204 USB */
+ { USB_DEVICE_VENDOR_SPEC(0x041e, 0x3f19) },
+ 
++/*
++ * Creative Technology, Ltd Live! Cam Sync HD [VF0770]
++ * The device advertises 8 formats, but only a rate of 48kHz is honored by the
++ * hardware and 24 bits give chopped audio, so only report the one working
++ * combination.
++ */
++{
++	USB_DEVICE(0x041e, 0x4095),
++	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++		.ifnum = QUIRK_ANY_INTERFACE,
++		.type = QUIRK_COMPOSITE,
++		.data = &(const struct snd_usb_audio_quirk[]) {
++			{
++				.ifnum = 2,
++				.type = QUIRK_AUDIO_STANDARD_MIXER,
++			},
++			{
++				.ifnum = 3,
++				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
++				.data = &(const struct audioformat) {
++					.formats = SNDRV_PCM_FMTBIT_S16_LE,
++					.channels = 2,
++					.fmt_bits = 16,
++					.iface = 3,
++					.altsetting = 4,
++					.altset_idx = 4,
++					.endpoint = 0x82,
++					.ep_attr = 0x05,
++					.rates = SNDRV_PCM_RATE_48000,
++					.rate_min = 48000,
++					.rate_max = 48000,
++					.nr_rates = 1,
++					.rate_table = (unsigned int[]) { 48000 },
++				},
++			},
++			{
++				.ifnum = -1
++			},
++		},
++	},
++},
++
+ /*
+  * HP Wireless Audio
+  * When not ignored, causes instability issues for some users, forcing them to


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-10-27 11:57 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-10-27 11:57 UTC (permalink / raw
  To: gentoo-commits

commit:     bd84217eb3e7544ee3fd54bd26c0071cfc5efea7
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 27 11:57:11 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 27 11:57:11 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bd84217e

Linux patch 5.10.76

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1075_linux-5.10.76.patch | 3942 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3946 insertions(+)

diff --git a/0000_README b/0000_README
index 6ee12e3..62f163c 100644
--- a/0000_README
+++ b/0000_README
@@ -343,6 +343,10 @@ Patch:  1074_linux-5.10.75.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.75
 
+Patch:  1075_linux-5.10.76.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.76
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1075_linux-5.10.76.patch b/1075_linux-5.10.76.patch
new file mode 100644
index 0000000..6416563
--- /dev/null
+++ b/1075_linux-5.10.76.patch
@@ -0,0 +1,3942 @@
+diff --git a/Makefile b/Makefile
+index 74318cf964b89..605bd943b224e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 75
++SUBLEVEL = 76
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 002e0cf025f59..a0eac00e2c81a 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -87,6 +87,7 @@ config ARM
+ 	select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
+ 	select HAVE_FUNCTION_GRAPH_TRACER if !THUMB2_KERNEL && !CC_IS_CLANG
+ 	select HAVE_FUNCTION_TRACER if !XIP_KERNEL
++	select HAVE_FUTEX_CMPXCHG if FUTEX
+ 	select HAVE_GCC_PLUGINS
+ 	select HAVE_HW_BREAKPOINT if PERF_EVENTS && (CPU_V6 || CPU_V6K || CPU_V7)
+ 	select HAVE_IDE if PCI || ISA || PCMCIA
+diff --git a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+index 9a18453d78428..0a53f21a89032 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+@@ -71,7 +71,6 @@
+ 			isc: isc@f0008000 {
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&pinctrl_isc_base &pinctrl_isc_data_8bit &pinctrl_isc_data_9_10 &pinctrl_isc_data_11_12>;
+-				status = "okay";
+ 			};
+ 
+ 			qspi1: spi@f0024000 {
+diff --git a/arch/arm/boot/dts/spear3xx.dtsi b/arch/arm/boot/dts/spear3xx.dtsi
+index f266b7b034823..cc88ebe7a60ce 100644
+--- a/arch/arm/boot/dts/spear3xx.dtsi
++++ b/arch/arm/boot/dts/spear3xx.dtsi
+@@ -47,7 +47,7 @@
+ 		};
+ 
+ 		gmac: eth@e0800000 {
+-			compatible = "st,spear600-gmac";
++			compatible = "snps,dwmac-3.40a";
+ 			reg = <0xe0800000 0x8000>;
+ 			interrupts = <23 22>;
+ 			interrupt-names = "macirq", "eth_wake_irq";
+diff --git a/arch/arm/boot/dts/vexpress-v2m.dtsi b/arch/arm/boot/dts/vexpress-v2m.dtsi
+index 2ac41ed3a57c7..659dcf4004b47 100644
+--- a/arch/arm/boot/dts/vexpress-v2m.dtsi
++++ b/arch/arm/boot/dts/vexpress-v2m.dtsi
+@@ -19,7 +19,7 @@
+  */
+ 
+ / {
+-	bus@4000000 {
++	bus@40000000 {
+ 		motherboard {
+ 			model = "V2M-P1";
+ 			arm,hbi = <0x190>;
+diff --git a/arch/arm/boot/dts/vexpress-v2p-ca9.dts b/arch/arm/boot/dts/vexpress-v2p-ca9.dts
+index 4c58479558562..1317f0f58d53d 100644
+--- a/arch/arm/boot/dts/vexpress-v2p-ca9.dts
++++ b/arch/arm/boot/dts/vexpress-v2p-ca9.dts
+@@ -295,7 +295,7 @@
+ 		};
+ 	};
+ 
+-	smb: bus@4000000 {
++	smb: bus@40000000 {
+ 		compatible = "simple-bus";
+ 
+ 		#address-cells = <2>;
+diff --git a/arch/nios2/include/asm/irqflags.h b/arch/nios2/include/asm/irqflags.h
+index b3ec3e510706d..25acf27862f91 100644
+--- a/arch/nios2/include/asm/irqflags.h
++++ b/arch/nios2/include/asm/irqflags.h
+@@ -9,7 +9,7 @@
+ 
+ static inline unsigned long arch_local_save_flags(void)
+ {
+-	return RDCTL(CTL_STATUS);
++	return RDCTL(CTL_FSTATUS);
+ }
+ 
+ /*
+@@ -18,7 +18,7 @@ static inline unsigned long arch_local_save_flags(void)
+  */
+ static inline void arch_local_irq_restore(unsigned long flags)
+ {
+-	WRCTL(CTL_STATUS, flags);
++	WRCTL(CTL_FSTATUS, flags);
+ }
+ 
+ static inline void arch_local_irq_disable(void)
+diff --git a/arch/nios2/include/asm/registers.h b/arch/nios2/include/asm/registers.h
+index 183c720e454d9..95b67dd16f818 100644
+--- a/arch/nios2/include/asm/registers.h
++++ b/arch/nios2/include/asm/registers.h
+@@ -11,7 +11,7 @@
+ #endif
+ 
+ /* control register numbers */
+-#define CTL_STATUS	0
++#define CTL_FSTATUS	0
+ #define CTL_ESTATUS	1
+ #define CTL_BSTATUS	2
+ #define CTL_IENABLE	3
+diff --git a/arch/parisc/math-emu/fpudispatch.c b/arch/parisc/math-emu/fpudispatch.c
+index 7c46969ead9b1..01ed133227c25 100644
+--- a/arch/parisc/math-emu/fpudispatch.c
++++ b/arch/parisc/math-emu/fpudispatch.c
+@@ -310,12 +310,15 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					r1 &= ~3;
+ 					fpregs[t+3] = fpregs[r1+3];
+ 					fpregs[t+2] = fpregs[r1+2];
++					fallthrough;
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					fpregs[t] = fpregs[r1];
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 3: /* FABS */
+ 				switch (fmt) {
+ 				    case 2: /* illegal */
+@@ -325,13 +328,16 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					r1 &= ~3;
+ 					fpregs[t+3] = fpregs[r1+3];
+ 					fpregs[t+2] = fpregs[r1+2];
++					fallthrough;
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					/* copy and clear sign bit */
+ 					fpregs[t] = fpregs[r1] & 0x7fffffff;
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 6: /* FNEG */
+ 				switch (fmt) {
+ 				    case 2: /* illegal */
+@@ -341,13 +347,16 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					r1 &= ~3;
+ 					fpregs[t+3] = fpregs[r1+3];
+ 					fpregs[t+2] = fpregs[r1+2];
++					fallthrough;
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					/* copy and invert sign bit */
+ 					fpregs[t] = fpregs[r1] ^ 0x80000000;
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 7: /* FNEGABS */
+ 				switch (fmt) {
+ 				    case 2: /* illegal */
+@@ -357,13 +366,16 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					r1 &= ~3;
+ 					fpregs[t+3] = fpregs[r1+3];
+ 					fpregs[t+2] = fpregs[r1+2];
++					fallthrough;
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					/* copy and set sign bit */
+ 					fpregs[t] = fpregs[r1] | 0x80000000;
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 4: /* FSQRT */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -376,6 +388,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 5: /* FRND */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -389,7 +402,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(MAJOR_0C_EXCP);
+ 				}
+ 		} /* end of switch (subop) */
+-
++		BUG();
+ 	case 1: /* class 1 */
+ 		df = extru(ir,fpdfpos,2); /* get dest format */
+ 		if ((df & 2) || (fmt & 2)) {
+@@ -419,6 +432,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* dbl/dbl */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 1: /* FCNVXF */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -434,6 +448,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(dbl_to_dbl_fcnvxf(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 2: /* FCNVFX */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -449,6 +464,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(dbl_to_dbl_fcnvfx(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 3: /* FCNVFXT */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -464,6 +480,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(dbl_to_dbl_fcnvfxt(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 5: /* FCNVUF (PA2.0 only) */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -479,6 +496,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(dbl_to_dbl_fcnvuf(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 6: /* FCNVFU (PA2.0 only) */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -494,6 +512,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(dbl_to_dbl_fcnvfu(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 7: /* FCNVFUT (PA2.0 only) */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -509,10 +528,11 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(dbl_to_dbl_fcnvfut(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 4: /* undefined */
+ 				return(MAJOR_0C_EXCP);
+ 		} /* end of switch subop */
+-
++		BUG();
+ 	case 2: /* class 2 */
+ 		fpu_type_flags=fpregs[FPU_TYPE_FLAG_POS];
+ 		r2 = extru(ir, fpr2pos, 5) * sizeof(double)/sizeof(u_int);
+@@ -590,6 +610,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 1: /* FTEST */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -609,8 +630,10 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3:
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 		    } /* end of switch subop */
+ 		} /* end of else for PA1.0 & PA1.1 */
++		BUG();
+ 	case 3: /* class 3 */
+ 		r2 = extru(ir,fpr2pos,5) * sizeof(double)/sizeof(u_int);
+ 		if (r2 == 0)
+@@ -633,6 +656,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 1: /* FSUB */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -645,6 +669,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 2: /* FMPY */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -657,6 +682,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 3: /* FDIV */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -669,6 +695,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 4: /* FREM */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -681,6 +708,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 		} /* end of class 3 switch */
+ 	} /* end of switch(class) */
+ 
+@@ -736,10 +764,12 @@ u_int fpregs[];
+ 					return(MAJOR_0E_EXCP);
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					fpregs[t] = fpregs[r1];
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 3: /* FABS */
+ 				switch (fmt) {
+ 				    case 2:
+@@ -747,10 +777,12 @@ u_int fpregs[];
+ 					return(MAJOR_0E_EXCP);
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					fpregs[t] = fpregs[r1] & 0x7fffffff;
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 6: /* FNEG */
+ 				switch (fmt) {
+ 				    case 2:
+@@ -758,10 +790,12 @@ u_int fpregs[];
+ 					return(MAJOR_0E_EXCP);
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					fpregs[t] = fpregs[r1] ^ 0x80000000;
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 7: /* FNEGABS */
+ 				switch (fmt) {
+ 				    case 2:
+@@ -769,10 +803,12 @@ u_int fpregs[];
+ 					return(MAJOR_0E_EXCP);
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					fpregs[t] = fpregs[r1] | 0x80000000;
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 4: /* FSQRT */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -785,6 +821,7 @@ u_int fpregs[];
+ 				    case 3:
+ 					return(MAJOR_0E_EXCP);
+ 				}
++				BUG();
+ 			case 5: /* FRMD */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -798,7 +835,7 @@ u_int fpregs[];
+ 					return(MAJOR_0E_EXCP);
+ 				}
+ 		} /* end of switch (subop */
+-	
++		BUG();
+ 	case 1: /* class 1 */
+ 		df = extru(ir,fpdfpos,2); /* get dest format */
+ 		/*
+@@ -826,6 +863,7 @@ u_int fpregs[];
+ 				    case 3: /* dbl/dbl */
+ 					return(MAJOR_0E_EXCP);
+ 				}
++				BUG();
+ 			case 1: /* FCNVXF */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -841,6 +879,7 @@ u_int fpregs[];
+ 					return(dbl_to_dbl_fcnvxf(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 2: /* FCNVFX */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -856,6 +895,7 @@ u_int fpregs[];
+ 					return(dbl_to_dbl_fcnvfx(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 3: /* FCNVFXT */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -871,6 +911,7 @@ u_int fpregs[];
+ 					return(dbl_to_dbl_fcnvfxt(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 5: /* FCNVUF (PA2.0 only) */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -886,6 +927,7 @@ u_int fpregs[];
+ 					return(dbl_to_dbl_fcnvuf(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 6: /* FCNVFU (PA2.0 only) */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -901,6 +943,7 @@ u_int fpregs[];
+ 					return(dbl_to_dbl_fcnvfu(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 7: /* FCNVFUT (PA2.0 only) */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -916,9 +959,11 @@ u_int fpregs[];
+ 					return(dbl_to_dbl_fcnvfut(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 4: /* undefined */
+ 				return(MAJOR_0C_EXCP);
+ 		} /* end of switch subop */
++		BUG();
+ 	case 2: /* class 2 */
+ 		/*
+ 		 * Be careful out there.
+@@ -994,6 +1039,7 @@ u_int fpregs[];
+ 				}
+ 		    } /* end of switch subop */
+ 		} /* end of else for PA1.0 & PA1.1 */
++		BUG();
+ 	case 3: /* class 3 */
+ 		/*
+ 		 * Be careful out there.
+@@ -1026,6 +1072,7 @@ u_int fpregs[];
+ 					return(dbl_fadd(&fpregs[r1],&fpregs[r2],
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 1: /* FSUB */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -1035,6 +1082,7 @@ u_int fpregs[];
+ 					return(dbl_fsub(&fpregs[r1],&fpregs[r2],
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 2: /* FMPY or XMPYU */
+ 				/*
+ 				 * check for integer multiply (x bit set)
+@@ -1071,6 +1119,7 @@ u_int fpregs[];
+ 					       &fpregs[r2],&fpregs[t],status));
+ 				    }
+ 				}
++				BUG();
+ 			case 3: /* FDIV */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -1080,6 +1129,7 @@ u_int fpregs[];
+ 					return(dbl_fdiv(&fpregs[r1],&fpregs[r2],
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 4: /* FREM */
+ 				switch (fmt) {
+ 				    case 0:
+diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
+index 22f249b6f58de..b16aecaaa9126 100644
+--- a/arch/powerpc/kernel/idle_book3s.S
++++ b/arch/powerpc/kernel/idle_book3s.S
+@@ -52,28 +52,32 @@ _GLOBAL(isa300_idle_stop_mayloss)
+ 	std	r1,PACAR1(r13)
+ 	mflr	r4
+ 	mfcr	r5
+-	/* use stack red zone rather than a new frame for saving regs */
+-	std	r2,-8*0(r1)
+-	std	r14,-8*1(r1)
+-	std	r15,-8*2(r1)
+-	std	r16,-8*3(r1)
+-	std	r17,-8*4(r1)
+-	std	r18,-8*5(r1)
+-	std	r19,-8*6(r1)
+-	std	r20,-8*7(r1)
+-	std	r21,-8*8(r1)
+-	std	r22,-8*9(r1)
+-	std	r23,-8*10(r1)
+-	std	r24,-8*11(r1)
+-	std	r25,-8*12(r1)
+-	std	r26,-8*13(r1)
+-	std	r27,-8*14(r1)
+-	std	r28,-8*15(r1)
+-	std	r29,-8*16(r1)
+-	std	r30,-8*17(r1)
+-	std	r31,-8*18(r1)
+-	std	r4,-8*19(r1)
+-	std	r5,-8*20(r1)
++	/*
++	 * Use the stack red zone rather than a new frame for saving regs since
++	 * in the case of no GPR loss the wakeup code branches directly back to
++	 * the caller without deallocating the stack frame first.
++	 */
++	std	r2,-8*1(r1)
++	std	r14,-8*2(r1)
++	std	r15,-8*3(r1)
++	std	r16,-8*4(r1)
++	std	r17,-8*5(r1)
++	std	r18,-8*6(r1)
++	std	r19,-8*7(r1)
++	std	r20,-8*8(r1)
++	std	r21,-8*9(r1)
++	std	r22,-8*10(r1)
++	std	r23,-8*11(r1)
++	std	r24,-8*12(r1)
++	std	r25,-8*13(r1)
++	std	r26,-8*14(r1)
++	std	r27,-8*15(r1)
++	std	r28,-8*16(r1)
++	std	r29,-8*17(r1)
++	std	r30,-8*18(r1)
++	std	r31,-8*19(r1)
++	std	r4,-8*20(r1)
++	std	r5,-8*21(r1)
+ 	/* 168 bytes */
+ 	PPC_STOP
+ 	b	.	/* catch bugs */
+@@ -89,8 +93,8 @@ _GLOBAL(isa300_idle_stop_mayloss)
+  */
+ _GLOBAL(idle_return_gpr_loss)
+ 	ld	r1,PACAR1(r13)
+-	ld	r4,-8*19(r1)
+-	ld	r5,-8*20(r1)
++	ld	r4,-8*20(r1)
++	ld	r5,-8*21(r1)
+ 	mtlr	r4
+ 	mtcr	r5
+ 	/*
+@@ -98,38 +102,40 @@ _GLOBAL(idle_return_gpr_loss)
+ 	 * from PACATOC. This could be avoided for that less common case
+ 	 * if KVM saved its r2.
+ 	 */
+-	ld	r2,-8*0(r1)
+-	ld	r14,-8*1(r1)
+-	ld	r15,-8*2(r1)
+-	ld	r16,-8*3(r1)
+-	ld	r17,-8*4(r1)
+-	ld	r18,-8*5(r1)
+-	ld	r19,-8*6(r1)
+-	ld	r20,-8*7(r1)
+-	ld	r21,-8*8(r1)
+-	ld	r22,-8*9(r1)
+-	ld	r23,-8*10(r1)
+-	ld	r24,-8*11(r1)
+-	ld	r25,-8*12(r1)
+-	ld	r26,-8*13(r1)
+-	ld	r27,-8*14(r1)
+-	ld	r28,-8*15(r1)
+-	ld	r29,-8*16(r1)
+-	ld	r30,-8*17(r1)
+-	ld	r31,-8*18(r1)
++	ld	r2,-8*1(r1)
++	ld	r14,-8*2(r1)
++	ld	r15,-8*3(r1)
++	ld	r16,-8*4(r1)
++	ld	r17,-8*5(r1)
++	ld	r18,-8*6(r1)
++	ld	r19,-8*7(r1)
++	ld	r20,-8*8(r1)
++	ld	r21,-8*9(r1)
++	ld	r22,-8*10(r1)
++	ld	r23,-8*11(r1)
++	ld	r24,-8*12(r1)
++	ld	r25,-8*13(r1)
++	ld	r26,-8*14(r1)
++	ld	r27,-8*15(r1)
++	ld	r28,-8*16(r1)
++	ld	r29,-8*17(r1)
++	ld	r30,-8*18(r1)
++	ld	r31,-8*19(r1)
+ 	blr
+ 
+ /*
+  * This is the sequence required to execute idle instructions, as
+  * specified in ISA v2.07 (and earlier). MSR[IR] and MSR[DR] must be 0.
+- *
+- * The 0(r1) slot is used to save r2 in isa206, so use that here.
++ * We have to store a GPR somewhere, ptesync, then reload it, and create
++ * a false dependency on the result of the load. It doesn't matter which
++ * GPR we store, or where we store it. We have already stored r2 to the
++ * stack at -8(r1) in isa206_idle_insn_mayloss, so use that.
+  */
+ #define IDLE_STATE_ENTER_SEQ_NORET(IDLE_INST)			\
+ 	/* Magic NAP/SLEEP/WINKLE mode enter sequence */	\
+-	std	r2,0(r1);					\
++	std	r2,-8(r1);					\
+ 	ptesync;						\
+-	ld	r2,0(r1);					\
++	ld	r2,-8(r1);					\
+ 236:	cmpd	cr0,r2,r2;					\
+ 	bne	236b;						\
+ 	IDLE_INST;						\
+@@ -154,28 +160,32 @@ _GLOBAL(isa206_idle_insn_mayloss)
+ 	std	r1,PACAR1(r13)
+ 	mflr	r4
+ 	mfcr	r5
+-	/* use stack red zone rather than a new frame for saving regs */
+-	std	r2,-8*0(r1)
+-	std	r14,-8*1(r1)
+-	std	r15,-8*2(r1)
+-	std	r16,-8*3(r1)
+-	std	r17,-8*4(r1)
+-	std	r18,-8*5(r1)
+-	std	r19,-8*6(r1)
+-	std	r20,-8*7(r1)
+-	std	r21,-8*8(r1)
+-	std	r22,-8*9(r1)
+-	std	r23,-8*10(r1)
+-	std	r24,-8*11(r1)
+-	std	r25,-8*12(r1)
+-	std	r26,-8*13(r1)
+-	std	r27,-8*14(r1)
+-	std	r28,-8*15(r1)
+-	std	r29,-8*16(r1)
+-	std	r30,-8*17(r1)
+-	std	r31,-8*18(r1)
+-	std	r4,-8*19(r1)
+-	std	r5,-8*20(r1)
++	/*
++	 * Use the stack red zone rather than a new frame for saving regs since
++	 * in the case of no GPR loss the wakeup code branches directly back to
++	 * the caller without deallocating the stack frame first.
++	 */
++	std	r2,-8*1(r1)
++	std	r14,-8*2(r1)
++	std	r15,-8*3(r1)
++	std	r16,-8*4(r1)
++	std	r17,-8*5(r1)
++	std	r18,-8*6(r1)
++	std	r19,-8*7(r1)
++	std	r20,-8*8(r1)
++	std	r21,-8*9(r1)
++	std	r22,-8*10(r1)
++	std	r23,-8*11(r1)
++	std	r24,-8*12(r1)
++	std	r25,-8*13(r1)
++	std	r26,-8*14(r1)
++	std	r27,-8*15(r1)
++	std	r28,-8*16(r1)
++	std	r29,-8*17(r1)
++	std	r30,-8*18(r1)
++	std	r31,-8*19(r1)
++	std	r4,-8*20(r1)
++	std	r5,-8*21(r1)
+ 	cmpwi	r3,PNV_THREAD_NAP
+ 	bne	1f
+ 	IDLE_STATE_ENTER_SEQ_NORET(PPC_NAP)
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 91f274134884e..452cbf98bfd71 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -1578,8 +1578,6 @@ void __cpu_die(unsigned int cpu)
+ 
+ void arch_cpu_idle_dead(void)
+ {
+-	sched_preempt_enable_no_resched();
+-
+ 	/*
+ 	 * Disable on the down path. This will be re-enabled by
+ 	 * start_secondary() via start_secondary_resume() below
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 5777b72bb8b62..db78123166a8b 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -292,13 +292,16 @@ kvm_novcpu_exit:
+  * r3 contains the SRR1 wakeup value, SRR1 is trashed.
+  */
+ _GLOBAL(idle_kvm_start_guest)
+-	ld	r4,PACAEMERGSP(r13)
+ 	mfcr	r5
+ 	mflr	r0
+-	std	r1,0(r4)
+-	std	r5,8(r4)
+-	std	r0,16(r4)
+-	subi	r1,r4,STACK_FRAME_OVERHEAD
++	std	r5, 8(r1)	// Save CR in caller's frame
++	std	r0, 16(r1)	// Save LR in caller's frame
++	// Create frame on emergency stack
++	ld	r4, PACAEMERGSP(r13)
++	stdu	r1, -SWITCH_FRAME_SIZE(r4)
++	// Switch to new frame on emergency stack
++	mr	r1, r4
++	std	r3, 32(r1)	// Save SRR1 wakeup value
+ 	SAVE_NVGPRS(r1)
+ 
+ 	/*
+@@ -350,6 +353,10 @@ kvm_unsplit_wakeup:
+ 
+ kvm_secondary_got_guest:
+ 
++	// About to go to guest, clear saved SRR1
++	li	r0, 0
++	std	r0, 32(r1)
++
+ 	/* Set HSTATE_DSCR(r13) to something sensible */
+ 	ld	r6, PACA_DSCR_DEFAULT(r13)
+ 	std	r6, HSTATE_DSCR(r13)
+@@ -441,13 +448,12 @@ kvm_no_guest:
+ 	mfspr	r4, SPRN_LPCR
+ 	rlwimi	r4, r3, 0, LPCR_PECE0 | LPCR_PECE1
+ 	mtspr	SPRN_LPCR, r4
+-	/* set up r3 for return */
+-	mfspr	r3,SPRN_SRR1
++	// Return SRR1 wakeup value, or 0 if we went into the guest
++	ld	r3, 32(r1)
+ 	REST_NVGPRS(r1)
+-	addi	r1, r1, STACK_FRAME_OVERHEAD
+-	ld	r0, 16(r1)
+-	ld	r5, 8(r1)
+-	ld	r1, 0(r1)
++	ld	r1, 0(r1)	// Switch back to caller stack
++	ld	r0, 16(r1)	// Reload LR
++	ld	r5, 8(r1)	// Reload CR
+ 	mtlr	r0
+ 	mtcr	r5
+ 	blr
+diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
+index a75d94a9bcb2f..1226971533fe4 100644
+--- a/arch/s390/include/asm/pci.h
++++ b/arch/s390/include/asm/pci.h
+@@ -205,6 +205,9 @@ int zpci_create_device(u32 fid, u32 fh, enum zpci_state state);
+ void zpci_remove_device(struct zpci_dev *zdev, bool set_error);
+ int zpci_enable_device(struct zpci_dev *);
+ int zpci_disable_device(struct zpci_dev *);
++void zpci_device_reserved(struct zpci_dev *zdev);
++bool zpci_is_device_configured(struct zpci_dev *zdev);
++
+ int zpci_register_ioat(struct zpci_dev *, u8, u64, u64, u64);
+ int zpci_unregister_ioat(struct zpci_dev *, u8);
+ void zpci_remove_reserved_devices(void);
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index f5ddbc625c1a5..e14e4a3a647a2 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -92,7 +92,7 @@ void zpci_remove_reserved_devices(void)
+ 	spin_unlock(&zpci_list_lock);
+ 
+ 	list_for_each_entry_safe(zdev, tmp, &remove, entry)
+-		zpci_zdev_put(zdev);
++		zpci_device_reserved(zdev);
+ }
+ 
+ int pci_domain_nr(struct pci_bus *bus)
+@@ -787,6 +787,39 @@ error:
+ 	return rc;
+ }
+ 
++bool zpci_is_device_configured(struct zpci_dev *zdev)
++{
++	enum zpci_state state = zdev->state;
++
++	return state != ZPCI_FN_STATE_RESERVED &&
++		state != ZPCI_FN_STATE_STANDBY;
++}
++
++/**
++ * zpci_device_reserved() - Mark device as resverved
++ * @zdev: the zpci_dev that was reserved
++ *
++ * Handle the case that a given zPCI function was reserved by another system.
++ * After a call to this function the zpci_dev can not be found via
++ * get_zdev_by_fid() anymore but may still be accessible via existing
++ * references though it will not be functional anymore.
++ */
++void zpci_device_reserved(struct zpci_dev *zdev)
++{
++	if (zdev->has_hp_slot)
++		zpci_exit_slot(zdev);
++	/*
++	 * Remove device from zpci_list as it is going away. This also
++	 * makes sure we ignore subsequent zPCI events for this device.
++	 */
++	spin_lock(&zpci_list_lock);
++	list_del(&zdev->entry);
++	spin_unlock(&zpci_list_lock);
++	zdev->state = ZPCI_FN_STATE_RESERVED;
++	zpci_dbg(3, "rsv fid:%x\n", zdev->fid);
++	zpci_zdev_put(zdev);
++}
++
+ void zpci_release_device(struct kref *kref)
+ {
+ 	struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);
+@@ -802,6 +835,12 @@ void zpci_release_device(struct kref *kref)
+ 	case ZPCI_FN_STATE_STANDBY:
+ 		if (zdev->has_hp_slot)
+ 			zpci_exit_slot(zdev);
++		spin_lock(&zpci_list_lock);
++		list_del(&zdev->entry);
++		spin_unlock(&zpci_list_lock);
++		zpci_dbg(3, "rsv fid:%x\n", zdev->fid);
++		fallthrough;
++	case ZPCI_FN_STATE_RESERVED:
+ 		zpci_cleanup_bus_resources(zdev);
+ 		zpci_bus_device_unregister(zdev);
+ 		zpci_destroy_iommu(zdev);
+@@ -809,10 +848,6 @@ void zpci_release_device(struct kref *kref)
+ 	default:
+ 		break;
+ 	}
+-
+-	spin_lock(&zpci_list_lock);
+-	list_del(&zdev->entry);
+-	spin_unlock(&zpci_list_lock);
+ 	zpci_dbg(3, "rem fid:%x\n", zdev->fid);
+ 	kfree(zdev);
+ }
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index ac0c65cdd69d9..b7cfde7e80a8a 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -146,7 +146,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ 		zdev->state = ZPCI_FN_STATE_STANDBY;
+ 		if (!clp_get_state(ccdf->fid, &state) &&
+ 		    state == ZPCI_FN_STATE_RESERVED) {
+-			zpci_zdev_put(zdev);
++			zpci_device_reserved(zdev);
+ 		}
+ 		break;
+ 	case 0x0306: /* 0x308 or 0x302 for multiple devices */
+@@ -156,7 +156,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ 	case 0x0308: /* Standby -> Reserved */
+ 		if (!zdev)
+ 			break;
+-		zpci_zdev_put(zdev);
++		zpci_device_reserved(zdev);
+ 		break;
+ 	default:
+ 		break;
+diff --git a/arch/x86/events/msr.c b/arch/x86/events/msr.c
+index 4be8f9cabd070..ca8ce64df2ceb 100644
+--- a/arch/x86/events/msr.c
++++ b/arch/x86/events/msr.c
+@@ -68,6 +68,7 @@ static bool test_intel(int idx, void *data)
+ 	case INTEL_FAM6_BROADWELL_D:
+ 	case INTEL_FAM6_BROADWELL_G:
+ 	case INTEL_FAM6_BROADWELL_X:
++	case INTEL_FAM6_SAPPHIRERAPIDS_X:
+ 
+ 	case INTEL_FAM6_ATOM_SILVERMONT:
+ 	case INTEL_FAM6_ATOM_SILVERMONT_D:
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index e0dba0037a85f..f77d98973782b 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6316,18 +6316,13 @@ static int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
+ 
+ 		/*
+ 		 * If we are running L2 and L1 has a new pending interrupt
+-		 * which can be injected, we should re-evaluate
+-		 * what should be done with this new L1 interrupt.
+-		 * If L1 intercepts external-interrupts, we should
+-		 * exit from L2 to L1. Otherwise, interrupt should be
+-		 * delivered directly to L2.
++		 * which can be injected, this may cause a vmexit or it may
++		 * be injected into L2.  Either way, this interrupt will be
++		 * processed via KVM_REQ_EVENT, not RVI, because we do not use
++		 * virtual interrupt delivery to inject L1 interrupts into L2.
+ 		 */
+-		if (is_guest_mode(vcpu) && max_irr_updated) {
+-			if (nested_exit_on_intr(vcpu))
+-				kvm_vcpu_exiting_guest_mode(vcpu);
+-			else
+-				kvm_make_request(KVM_REQ_EVENT, vcpu);
+-		}
++		if (is_guest_mode(vcpu) && max_irr_updated)
++			kvm_make_request(KVM_REQ_EVENT, vcpu);
+ 	} else {
+ 		max_irr = kvm_lapic_find_highest_irr(vcpu);
+ 	}
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index aa9f50fccc5da..0f68c6da7382b 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -51,9 +51,6 @@ DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);
+ DEFINE_PER_CPU(uint32_t, xen_vcpu_id);
+ EXPORT_PER_CPU_SYMBOL(xen_vcpu_id);
+ 
+-enum xen_domain_type xen_domain_type = XEN_NATIVE;
+-EXPORT_SYMBOL_GPL(xen_domain_type);
+-
+ unsigned long *machine_to_phys_mapping = (void *)MACH2PHYS_VIRT_START;
+ EXPORT_SYMBOL(machine_to_phys_mapping);
+ unsigned long  machine_to_phys_nr;
+@@ -68,9 +65,11 @@ __read_mostly int xen_have_vector_callback;
+ EXPORT_SYMBOL_GPL(xen_have_vector_callback);
+ 
+ /*
+- * NB: needs to live in .data because it's used by xen_prepare_pvh which runs
+- * before clearing the bss.
++ * NB: These need to live in .data or alike because they're used by
++ * xen_prepare_pvh() which runs before clearing the bss.
+  */
++enum xen_domain_type __ro_after_init xen_domain_type = XEN_NATIVE;
++EXPORT_SYMBOL_GPL(xen_domain_type);
+ uint32_t xen_start_flags __section(".data") = 0;
+ EXPORT_SYMBOL(xen_start_flags);
+ 
+diff --git a/arch/xtensa/platforms/xtfpga/setup.c b/arch/xtensa/platforms/xtfpga/setup.c
+index 4f7d6142d41fa..538e6748e85a7 100644
+--- a/arch/xtensa/platforms/xtfpga/setup.c
++++ b/arch/xtensa/platforms/xtfpga/setup.c
+@@ -51,8 +51,12 @@ void platform_power_off(void)
+ 
+ void platform_restart(void)
+ {
+-	/* Flush and reset the mmu, simulate a processor reset, and
+-	 * jump to the reset vector. */
++	/* Try software reset first. */
++	WRITE_ONCE(*(u32 *)XTFPGA_SWRST_VADDR, 0xdead);
++
++	/* If software reset did not work, flush and reset the mmu,
++	 * simulate a processor reset, and jump to the reset vector.
++	 */
+ 	cpu_reset();
+ 	/* control never gets here */
+ }
+@@ -66,7 +70,7 @@ void __init platform_calibrate_ccount(void)
+ 
+ #endif
+ 
+-#ifdef CONFIG_OF
++#ifdef CONFIG_USE_OF
+ 
+ static void __init xtfpga_clk_setup(struct device_node *np)
+ {
+@@ -284,4 +288,4 @@ static int __init xtavnet_init(void)
+  */
+ arch_initcall(xtavnet_init);
+ 
+-#endif /* CONFIG_OF */
++#endif /* CONFIG_USE_OF */
+diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
+index 4de03da9a624b..b5f26082b9594 100644
+--- a/block/blk-mq-debugfs.c
++++ b/block/blk-mq-debugfs.c
+@@ -129,6 +129,7 @@ static const char *const blk_queue_flag_name[] = {
+ 	QUEUE_FLAG_NAME(PCI_P2PDMA),
+ 	QUEUE_FLAG_NAME(ZONE_RESETALL),
+ 	QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
++	QUEUE_FLAG_NAME(HCTX_ACTIVE),
+ 	QUEUE_FLAG_NAME(NOWAIT),
+ };
+ #undef QUEUE_FLAG_NAME
+diff --git a/drivers/gpu/drm/amd/display/Kconfig b/drivers/gpu/drm/amd/display/Kconfig
+index 3c410d236c491..f3274eb6b3418 100644
+--- a/drivers/gpu/drm/amd/display/Kconfig
++++ b/drivers/gpu/drm/amd/display/Kconfig
+@@ -33,6 +33,8 @@ config DRM_AMD_DC_HDCP
+ 
+ config DRM_AMD_DC_SI
+ 	bool "AMD DC support for Southern Islands ASICs"
++	depends on DRM_AMDGPU_SI
++	depends on DRM_AMD_DC
+ 	default n
+ 	help
+ 	  Choose this option to enable new AMD DC support for SI asics
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_drv.c b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+index f31e8ef3c258c..25e422d2e7f8f 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_drv.c
++++ b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+@@ -268,7 +268,11 @@ static void mxsfb_irq_disable(struct drm_device *drm)
+ 	struct mxsfb_drm_private *mxsfb = drm->dev_private;
+ 
+ 	mxsfb_enable_axi_clk(mxsfb);
+-	mxsfb->crtc.funcs->disable_vblank(&mxsfb->crtc);
++
++	/* Disable and clear VBLANK IRQ */
++	writel(CTRL1_CUR_FRAME_DONE_IRQ_EN, mxsfb->base + LCDC_CTRL1 + REG_CLR);
++	writel(CTRL1_CUR_FRAME_DONE_IRQ, mxsfb->base + LCDC_CTRL1 + REG_CLR);
++
+ 	mxsfb_disable_axi_clk(mxsfb);
+ }
+ 
+diff --git a/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c b/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
+index 0145129d7c661..534dd7414d428 100644
+--- a/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
++++ b/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
+@@ -590,14 +590,14 @@ static const struct drm_display_mode k101_im2byl02_default_mode = {
+ 	.clock		= 69700,
+ 
+ 	.hdisplay	= 800,
+-	.hsync_start	= 800 + 6,
+-	.hsync_end	= 800 + 6 + 15,
+-	.htotal		= 800 + 6 + 15 + 16,
++	.hsync_start	= 800 + 52,
++	.hsync_end	= 800 + 52 + 8,
++	.htotal		= 800 + 52 + 8 + 48,
+ 
+ 	.vdisplay	= 1280,
+-	.vsync_start	= 1280 + 8,
+-	.vsync_end	= 1280 + 8 + 48,
+-	.vtotal		= 1280 + 8 + 48 + 52,
++	.vsync_start	= 1280 + 16,
++	.vsync_end	= 1280 + 16 + 6,
++	.vtotal		= 1280 + 16 + 6 + 15,
+ 
+ 	.width_mm	= 135,
+ 	.height_mm	= 217,
+diff --git a/drivers/input/keyboard/snvs_pwrkey.c b/drivers/input/keyboard/snvs_pwrkey.c
+index 2f5e3ab5ed638..65286762b02ab 100644
+--- a/drivers/input/keyboard/snvs_pwrkey.c
++++ b/drivers/input/keyboard/snvs_pwrkey.c
+@@ -3,6 +3,7 @@
+ // Driver for the IMX SNVS ON/OFF Power Key
+ // Copyright (C) 2015 Freescale Semiconductor, Inc. All Rights Reserved.
+ 
++#include <linux/clk.h>
+ #include <linux/device.h>
+ #include <linux/err.h>
+ #include <linux/init.h>
+@@ -99,6 +100,11 @@ static irqreturn_t imx_snvs_pwrkey_interrupt(int irq, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
++static void imx_snvs_pwrkey_disable_clk(void *data)
++{
++	clk_disable_unprepare(data);
++}
++
+ static void imx_snvs_pwrkey_act(void *pdata)
+ {
+ 	struct pwrkey_drv_data *pd = pdata;
+@@ -111,6 +117,7 @@ static int imx_snvs_pwrkey_probe(struct platform_device *pdev)
+ 	struct pwrkey_drv_data *pdata;
+ 	struct input_dev *input;
+ 	struct device_node *np;
++	struct clk *clk;
+ 	int error;
+ 	u32 vid;
+ 
+@@ -134,6 +141,28 @@ static int imx_snvs_pwrkey_probe(struct platform_device *pdev)
+ 		dev_warn(&pdev->dev, "KEY_POWER without setting in dts\n");
+ 	}
+ 
++	clk = devm_clk_get_optional(&pdev->dev, NULL);
++	if (IS_ERR(clk)) {
++		dev_err(&pdev->dev, "Failed to get snvs clock (%pe)\n", clk);
++		return PTR_ERR(clk);
++	}
++
++	error = clk_prepare_enable(clk);
++	if (error) {
++		dev_err(&pdev->dev, "Failed to enable snvs clock (%pe)\n",
++			ERR_PTR(error));
++		return error;
++	}
++
++	error = devm_add_action_or_reset(&pdev->dev,
++					 imx_snvs_pwrkey_disable_clk, clk);
++	if (error) {
++		dev_err(&pdev->dev,
++			"Failed to register clock cleanup handler (%pe)\n",
++			ERR_PTR(error));
++		return error;
++	}
++
+ 	pdata->wakeup = of_property_read_bool(np, "wakeup-source");
+ 
+ 	pdata->irq = platform_get_irq(pdev, 0);
+diff --git a/drivers/isdn/capi/kcapi.c b/drivers/isdn/capi/kcapi.c
+index cb0afe8971623..7313454e403a6 100644
+--- a/drivers/isdn/capi/kcapi.c
++++ b/drivers/isdn/capi/kcapi.c
+@@ -480,6 +480,11 @@ int detach_capi_ctr(struct capi_ctr *ctr)
+ 
+ 	ctr_down(ctr, CAPI_CTR_DETACHED);
+ 
++	if (ctr->cnr < 1 || ctr->cnr - 1 >= CAPI_MAXCONTR) {
++		err = -EINVAL;
++		goto unlock_out;
++	}
++
+ 	if (capi_controller[ctr->cnr - 1] != ctr) {
+ 		err = -EINVAL;
+ 		goto unlock_out;
+diff --git a/drivers/isdn/hardware/mISDN/netjet.c b/drivers/isdn/hardware/mISDN/netjet.c
+index 2a1ddd47a0968..a52f275f82634 100644
+--- a/drivers/isdn/hardware/mISDN/netjet.c
++++ b/drivers/isdn/hardware/mISDN/netjet.c
+@@ -949,8 +949,8 @@ nj_release(struct tiger_hw *card)
+ 		nj_disable_hwirq(card);
+ 		mode_tiger(&card->bc[0], ISDN_P_NONE);
+ 		mode_tiger(&card->bc[1], ISDN_P_NONE);
+-		card->isac.release(&card->isac);
+ 		spin_unlock_irqrestore(&card->lock, flags);
++		card->isac.release(&card->isac);
+ 		release_region(card->base, card->base_s);
+ 		card->base_s = 0;
+ 	}
+diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c
+index 48575900adb75..3570a4de0085a 100644
+--- a/drivers/net/can/rcar/rcar_can.c
++++ b/drivers/net/can/rcar/rcar_can.c
+@@ -846,10 +846,12 @@ static int __maybe_unused rcar_can_suspend(struct device *dev)
+ 	struct rcar_can_priv *priv = netdev_priv(ndev);
+ 	u16 ctlr;
+ 
+-	if (netif_running(ndev)) {
+-		netif_stop_queue(ndev);
+-		netif_device_detach(ndev);
+-	}
++	if (!netif_running(ndev))
++		return 0;
++
++	netif_stop_queue(ndev);
++	netif_device_detach(ndev);
++
+ 	ctlr = readw(&priv->regs->ctlr);
+ 	ctlr |= RCAR_CAN_CTLR_CANM_HALT;
+ 	writew(ctlr, &priv->regs->ctlr);
+@@ -868,6 +870,9 @@ static int __maybe_unused rcar_can_resume(struct device *dev)
+ 	u16 ctlr;
+ 	int err;
+ 
++	if (!netif_running(ndev))
++		return 0;
++
+ 	err = clk_enable(priv->clk);
+ 	if (err) {
+ 		netdev_err(ndev, "clk_enable() failed, error %d\n", err);
+@@ -881,10 +886,9 @@ static int __maybe_unused rcar_can_resume(struct device *dev)
+ 	writew(ctlr, &priv->regs->ctlr);
+ 	priv->can.state = CAN_STATE_ERROR_ACTIVE;
+ 
+-	if (netif_running(ndev)) {
+-		netif_device_attach(ndev);
+-		netif_start_queue(ndev);
+-	}
++	netif_device_attach(ndev);
++	netif_start_queue(ndev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/can/sja1000/peak_pci.c b/drivers/net/can/sja1000/peak_pci.c
+index 4713921bd511e..c3fd7cd802a97 100644
+--- a/drivers/net/can/sja1000/peak_pci.c
++++ b/drivers/net/can/sja1000/peak_pci.c
+@@ -731,16 +731,15 @@ static void peak_pci_remove(struct pci_dev *pdev)
+ 		struct net_device *prev_dev = chan->prev_dev;
+ 
+ 		dev_info(&pdev->dev, "removing device %s\n", dev->name);
++		/* do that only for first channel */
++		if (!prev_dev && chan->pciec_card)
++			peak_pciec_remove(chan->pciec_card);
+ 		unregister_sja1000dev(dev);
+ 		free_sja1000dev(dev);
+ 		dev = prev_dev;
+ 
+-		if (!dev) {
+-			/* do that only for first channel */
+-			if (chan->pciec_card)
+-				peak_pciec_remove(chan->pciec_card);
++		if (!dev)
+ 			break;
+-		}
+ 		priv = netdev_priv(dev);
+ 		chan = priv->priv;
+ 	}
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+index d565922838186..301a0f54fc994 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+@@ -551,11 +551,10 @@ static int pcan_usb_fd_decode_status(struct pcan_usb_fd_if *usb_if,
+ 	} else if (sm->channel_p_w_b & PUCAN_BUS_WARNING) {
+ 		new_state = CAN_STATE_ERROR_WARNING;
+ 	} else {
+-		/* no error bit (so, no error skb, back to active state) */
+-		dev->can.state = CAN_STATE_ERROR_ACTIVE;
++		/* back to (or still in) ERROR_ACTIVE state */
++		new_state = CAN_STATE_ERROR_ACTIVE;
+ 		pdev->bec.txerr = 0;
+ 		pdev->bec.rxerr = 0;
+-		return 0;
+ 	}
+ 
+ 	/* state hasn't changed */
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 95e634cbc4b63..4d23a7aba7961 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -229,7 +229,7 @@
+ #define GSWIP_SDMA_PCTRLp(p)		(0xBC0 + ((p) * 0x6))
+ #define  GSWIP_SDMA_PCTRL_EN		BIT(0)	/* SDMA Port Enable */
+ #define  GSWIP_SDMA_PCTRL_FCEN		BIT(1)	/* Flow Control Enable */
+-#define  GSWIP_SDMA_PCTRL_PAUFWD	BIT(1)	/* Pause Frame Forwarding */
++#define  GSWIP_SDMA_PCTRL_PAUFWD	BIT(3)	/* Pause Frame Forwarding */
+ 
+ #define GSWIP_TABLE_ACTIVE_VLAN		0x01
+ #define GSWIP_TABLE_VLAN_MAPPING	0x02
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 73de09093c350..1f642fdbf214c 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -981,9 +981,6 @@ mt7530_port_enable(struct dsa_switch *ds, int port,
+ {
+ 	struct mt7530_priv *priv = ds->priv;
+ 
+-	if (!dsa_is_user_port(ds, port))
+-		return 0;
+-
+ 	mutex_lock(&priv->reg_mutex);
+ 
+ 	/* Allow the user port gets connected to the cpu port and also
+@@ -1006,9 +1003,6 @@ mt7530_port_disable(struct dsa_switch *ds, int port)
+ {
+ 	struct mt7530_priv *priv = ds->priv;
+ 
+-	if (!dsa_is_user_port(ds, port))
+-		return;
+-
+ 	mutex_lock(&priv->reg_mutex);
+ 
+ 	/* Clear up all port matrix which could be restored in the next
+@@ -2593,7 +2587,7 @@ mt7530_probe(struct mdio_device *mdiodev)
+ 		return -ENOMEM;
+ 
+ 	priv->ds->dev = &mdiodev->dev;
+-	priv->ds->num_ports = DSA_MAX_PORTS;
++	priv->ds->num_ports = MT7530_NUM_PORTS;
+ 
+ 	/* Use medatek,mcm property to distinguish hardware type that would
+ 	 * casues a little bit differences on power-on sequence.
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+index 89e558135432b..9c1690f64a027 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+@@ -157,7 +157,7 @@ static const struct {
+ 	{ ENETC_PM0_TFRM,   "MAC tx frames" },
+ 	{ ENETC_PM0_TFCS,   "MAC tx fcs errors" },
+ 	{ ENETC_PM0_TVLAN,  "MAC tx VLAN frames" },
+-	{ ENETC_PM0_TERR,   "MAC tx frames" },
++	{ ENETC_PM0_TERR,   "MAC tx frame errors" },
+ 	{ ENETC_PM0_TUCA,   "MAC tx unicast frames" },
+ 	{ ENETC_PM0_TMCA,   "MAC tx multicast frames" },
+ 	{ ENETC_PM0_TBCA,   "MAC tx broadcast frames" },
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.c b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+index eef1b2764d34a..67b0bf310daaa 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+@@ -10,6 +10,27 @@ static LIST_HEAD(hnae3_ae_algo_list);
+ static LIST_HEAD(hnae3_client_list);
+ static LIST_HEAD(hnae3_ae_dev_list);
+ 
++void hnae3_unregister_ae_algo_prepare(struct hnae3_ae_algo *ae_algo)
++{
++	const struct pci_device_id *pci_id;
++	struct hnae3_ae_dev *ae_dev;
++
++	if (!ae_algo)
++		return;
++
++	list_for_each_entry(ae_dev, &hnae3_ae_dev_list, node) {
++		if (!hnae3_get_bit(ae_dev->flag, HNAE3_DEV_INITED_B))
++			continue;
++
++		pci_id = pci_match_id(ae_algo->pdev_id_table, ae_dev->pdev);
++		if (!pci_id)
++			continue;
++		if (IS_ENABLED(CONFIG_PCI_IOV))
++			pci_disable_sriov(ae_dev->pdev);
++	}
++}
++EXPORT_SYMBOL(hnae3_unregister_ae_algo_prepare);
++
+ /* we are keeping things simple and using single lock for all the
+  * list. This is a non-critical code so other updations, if happen
+  * in parallel, can wait.
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 912c51e327d64..4a9576a449e10 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -754,6 +754,7 @@ struct hnae3_handle {
+ int hnae3_register_ae_dev(struct hnae3_ae_dev *ae_dev);
+ void hnae3_unregister_ae_dev(struct hnae3_ae_dev *ae_dev);
+ 
++void hnae3_unregister_ae_algo_prepare(struct hnae3_ae_algo *ae_algo);
+ void hnae3_unregister_ae_algo(struct hnae3_ae_algo *ae_algo);
+ void hnae3_register_ae_algo(struct hnae3_ae_algo *ae_algo);
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 4777db2623cf4..ae7cd73c823b7 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1283,7 +1283,6 @@ void hns3_shinfo_pack(struct skb_shared_info *shinfo, __u32 *size)
+ 
+ static int hns3_skb_linearize(struct hns3_enet_ring *ring,
+ 			      struct sk_buff *skb,
+-			      u8 max_non_tso_bd_num,
+ 			      unsigned int bd_num)
+ {
+ 	/* 'bd_num == UINT_MAX' means the skb' fraglist has a
+@@ -1300,8 +1299,7 @@ static int hns3_skb_linearize(struct hns3_enet_ring *ring,
+ 	 * will not help.
+ 	 */
+ 	if (skb->len > HNS3_MAX_TSO_SIZE ||
+-	    (!skb_is_gso(skb) && skb->len >
+-	     HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num))) {
++	    (!skb_is_gso(skb) && skb->len > HNS3_MAX_NON_TSO_SIZE)) {
+ 		u64_stats_update_begin(&ring->syncp);
+ 		ring->stats.hw_limitation++;
+ 		u64_stats_update_end(&ring->syncp);
+@@ -1336,8 +1334,7 @@ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring,
+ 			goto out;
+ 		}
+ 
+-		if (hns3_skb_linearize(ring, skb, max_non_tso_bd_num,
+-				       bd_num))
++		if (hns3_skb_linearize(ring, skb, bd_num))
+ 			return -ENOMEM;
+ 
+ 		bd_num = hns3_tx_bd_count(skb->len);
+@@ -2424,6 +2421,7 @@ static void hns3_buffer_detach(struct hns3_enet_ring *ring, int i)
+ {
+ 	hns3_unmap_buffer(ring, &ring->desc_cb[i]);
+ 	ring->desc[i].addr = 0;
++	ring->desc_cb[i].refill = 0;
+ }
+ 
+ static void hns3_free_buffer_detach(struct hns3_enet_ring *ring, int i,
+@@ -2501,6 +2499,7 @@ static int hns3_alloc_and_attach_buffer(struct hns3_enet_ring *ring, int i)
+ 		return ret;
+ 
+ 	ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma);
++	ring->desc_cb[i].refill = 1;
+ 
+ 	return 0;
+ }
+@@ -2531,12 +2530,14 @@ static void hns3_replace_buffer(struct hns3_enet_ring *ring, int i,
+ 	hns3_unmap_buffer(ring, &ring->desc_cb[i]);
+ 	ring->desc_cb[i] = *res_cb;
+ 	ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma);
++	ring->desc_cb[i].refill = 1;
+ 	ring->desc[i].rx.bd_base_info = 0;
+ }
+ 
+ static void hns3_reuse_buffer(struct hns3_enet_ring *ring, int i)
+ {
+ 	ring->desc_cb[i].reuse_flag = 0;
++	ring->desc_cb[i].refill = 1;
+ 	ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma +
+ 					 ring->desc_cb[i].page_offset);
+ 	ring->desc[i].rx.bd_base_info = 0;
+@@ -2634,10 +2635,14 @@ static int hns3_desc_unused(struct hns3_enet_ring *ring)
+ 	int ntc = ring->next_to_clean;
+ 	int ntu = ring->next_to_use;
+ 
++	if (unlikely(ntc == ntu && !ring->desc_cb[ntc].refill))
++		return ring->desc_num;
++
+ 	return ((ntc >= ntu) ? 0 : ring->desc_num) + ntc - ntu;
+ }
+ 
+-static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
++/* Return true if there is any allocation failure */
++static bool hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
+ 				      int cleand_count)
+ {
+ 	struct hns3_desc_cb *desc_cb;
+@@ -2662,7 +2667,10 @@ static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
+ 				hns3_rl_err(ring_to_netdev(ring),
+ 					    "alloc rx buffer failed: %d\n",
+ 					    ret);
+-				break;
++
++				writel(i, ring->tqp->io_base +
++				       HNS3_RING_RX_RING_HEAD_REG);
++				return true;
+ 			}
+ 			hns3_replace_buffer(ring, ring->next_to_use, &res_cbs);
+ 
+@@ -2675,6 +2683,7 @@ static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
+ 	}
+ 
+ 	writel(i, ring->tqp->io_base + HNS3_RING_RX_RING_HEAD_REG);
++	return false;
+ }
+ 
+ static bool hns3_page_is_reusable(struct page *page)
+@@ -2905,6 +2914,7 @@ static void hns3_rx_ring_move_fw(struct hns3_enet_ring *ring)
+ {
+ 	ring->desc[ring->next_to_clean].rx.bd_base_info &=
+ 		cpu_to_le32(~BIT(HNS3_RXD_VLD_B));
++	ring->desc_cb[ring->next_to_clean].refill = 0;
+ 	ring->next_to_clean += 1;
+ 
+ 	if (unlikely(ring->next_to_clean == ring->desc_num))
+@@ -3218,6 +3228,7 @@ int hns3_clean_rx_ring(struct hns3_enet_ring *ring, int budget,
+ {
+ #define RCB_NOF_ALLOC_RX_BUFF_ONCE 16
+ 	int unused_count = hns3_desc_unused(ring);
++	bool failure = false;
+ 	int recv_pkts = 0;
+ 	int err;
+ 
+@@ -3226,9 +3237,9 @@ int hns3_clean_rx_ring(struct hns3_enet_ring *ring, int budget,
+ 	while (recv_pkts < budget) {
+ 		/* Reuse or realloc buffers */
+ 		if (unused_count >= RCB_NOF_ALLOC_RX_BUFF_ONCE) {
+-			hns3_nic_alloc_rx_buffers(ring, unused_count);
+-			unused_count = hns3_desc_unused(ring) -
+-					ring->pending_buf;
++			failure = failure ||
++				hns3_nic_alloc_rx_buffers(ring, unused_count);
++			unused_count = 0;
+ 		}
+ 
+ 		/* Poll one pkt */
+@@ -3247,11 +3258,7 @@ int hns3_clean_rx_ring(struct hns3_enet_ring *ring, int budget,
+ 	}
+ 
+ out:
+-	/* Make all data has been write before submit */
+-	if (unused_count > 0)
+-		hns3_nic_alloc_rx_buffers(ring, unused_count);
+-
+-	return recv_pkts;
++	return failure ? budget : recv_pkts;
+ }
+ 
+ static bool hns3_get_new_flow_lvl(struct hns3_enet_ring_group *ring_group)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index 398686b15a826..54d02ea4aaa7c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -170,11 +170,9 @@ enum hns3_nic_state {
+ 
+ #define HNS3_MAX_BD_SIZE			65535
+ #define HNS3_MAX_TSO_BD_NUM			63U
+-#define HNS3_MAX_TSO_SIZE \
+-	(HNS3_MAX_BD_SIZE * HNS3_MAX_TSO_BD_NUM)
++#define HNS3_MAX_TSO_SIZE			1048576U
++#define HNS3_MAX_NON_TSO_SIZE			9728U
+ 
+-#define HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num) \
+-	(HNS3_MAX_BD_SIZE * (max_non_tso_bd_num))
+ 
+ #define HNS3_VECTOR_GL0_OFFSET			0x100
+ #define HNS3_VECTOR_GL1_OFFSET			0x200
+@@ -285,6 +283,7 @@ struct hns3_desc_cb {
+ 	u32 length;     /* length of the buffer */
+ 
+ 	u16 reuse_flag;
++	u16 refill;
+ 
+ 	/* desc type, used by the ring user to mark the type of the priv data */
+ 	u16 type;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index 28a90ead4795d..8e6085753b9f2 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -134,6 +134,15 @@ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ 				*changed = true;
+ 			break;
+ 		case IEEE_8021QAZ_TSA_ETS:
++			/* The hardware will switch to sp mode if bandwidth is
++			 * 0, so limit ets bandwidth must be greater than 0.
++			 */
++			if (!ets->tc_tx_bw[i]) {
++				dev_err(&hdev->pdev->dev,
++					"tc%u ets bw cannot be 0\n", i);
++				return -EINVAL;
++			}
++
+ 			if (hdev->tm_info.tc_info[i].tc_sch_mode !=
+ 				HCLGE_SCH_MODE_DWRR)
+ 				*changed = true;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 0e869f449f12c..7b94764b4f5d9 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -11518,6 +11518,7 @@ static int hclge_init(void)
+ 
+ static void hclge_exit(void)
+ {
++	hnae3_unregister_ae_algo_prepare(&ae_algo);
+ 	hnae3_unregister_ae_algo(&ae_algo);
+ 	destroy_workqueue(hclge_wq);
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 69d081515c60a..71aa6d16fc19e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -671,6 +671,8 @@ static void hclge_tm_pg_info_init(struct hclge_dev *hdev)
+ 		hdev->tm_info.pg_info[i].tc_bit_map = hdev->hw_tc_map;
+ 		for (k = 0; k < hdev->tm_info.num_tc; k++)
+ 			hdev->tm_info.pg_info[i].tc_dwrr[k] = BW_PERCENT;
++		for (; k < HNAE3_MAX_TC; k++)
++			hdev->tm_info.pg_info[i].tc_dwrr[k] = 0;
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 3641d7c31451c..a47f23f27a11c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2160,9 +2160,9 @@ static void hclgevf_reset_service_task(struct hclgevf_dev *hdev)
+ 		hdev->reset_attempts = 0;
+ 
+ 		hdev->last_reset_time = jiffies;
+-		while ((hdev->reset_type =
+-			hclgevf_get_reset_level(hdev, &hdev->reset_pending))
+-		       != HNAE3_NONE_RESET)
++		hdev->reset_type =
++			hclgevf_get_reset_level(hdev, &hdev->reset_pending);
++		if (hdev->reset_type != HNAE3_NONE_RESET)
+ 			hclgevf_reset(hdev);
+ 	} else if (test_and_clear_bit(HCLGEVF_RESET_REQUESTED,
+ 				      &hdev->reset_state)) {
+diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h
+index 5b2143f4b1f85..3178efd980066 100644
+--- a/drivers/net/ethernet/intel/e1000e/e1000.h
++++ b/drivers/net/ethernet/intel/e1000e/e1000.h
+@@ -113,7 +113,8 @@ enum e1000_boards {
+ 	board_pch2lan,
+ 	board_pch_lpt,
+ 	board_pch_spt,
+-	board_pch_cnp
++	board_pch_cnp,
++	board_pch_tgp
+ };
+ 
+ struct e1000_ps_page {
+@@ -499,6 +500,7 @@ extern const struct e1000_info e1000_pch2_info;
+ extern const struct e1000_info e1000_pch_lpt_info;
+ extern const struct e1000_info e1000_pch_spt_info;
+ extern const struct e1000_info e1000_pch_cnp_info;
++extern const struct e1000_info e1000_pch_tgp_info;
+ extern const struct e1000_info e1000_es2_info;
+ 
+ void e1000e_ptp_init(struct e1000_adapter *adapter);
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 854c585de2e13..b38b914f9ac6c 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -4811,7 +4811,7 @@ static s32 e1000_reset_hw_ich8lan(struct e1000_hw *hw)
+ static s32 e1000_init_hw_ich8lan(struct e1000_hw *hw)
+ {
+ 	struct e1000_mac_info *mac = &hw->mac;
+-	u32 ctrl_ext, txdctl, snoop;
++	u32 ctrl_ext, txdctl, snoop, fflt_dbg;
+ 	s32 ret_val;
+ 	u16 i;
+ 
+@@ -4870,6 +4870,15 @@ static s32 e1000_init_hw_ich8lan(struct e1000_hw *hw)
+ 		snoop = (u32)~(PCIE_NO_SNOOP_ALL);
+ 	e1000e_set_pcie_no_snoop(hw, snoop);
+ 
++	/* Enable workaround for packet loss issue on TGP PCH
++	 * Do not gate DMA clock from the modPHY block
++	 */
++	if (mac->type >= e1000_pch_tgp) {
++		fflt_dbg = er32(FFLT_DBG);
++		fflt_dbg |= E1000_FFLT_DBG_DONT_GATE_WAKE_DMA_CLK;
++		ew32(FFLT_DBG, fflt_dbg);
++	}
++
+ 	ctrl_ext = er32(CTRL_EXT);
+ 	ctrl_ext |= E1000_CTRL_EXT_RO_DIS;
+ 	ew32(CTRL_EXT, ctrl_ext);
+@@ -5990,3 +5999,23 @@ const struct e1000_info e1000_pch_cnp_info = {
+ 	.phy_ops		= &ich8_phy_ops,
+ 	.nvm_ops		= &spt_nvm_ops,
+ };
++
++const struct e1000_info e1000_pch_tgp_info = {
++	.mac			= e1000_pch_tgp,
++	.flags			= FLAG_IS_ICH
++				  | FLAG_HAS_WOL
++				  | FLAG_HAS_HW_TIMESTAMP
++				  | FLAG_HAS_CTRLEXT_ON_LOAD
++				  | FLAG_HAS_AMT
++				  | FLAG_HAS_FLASH
++				  | FLAG_HAS_JUMBO_FRAMES
++				  | FLAG_APME_IN_WUC,
++	.flags2			= FLAG2_HAS_PHY_STATS
++				  | FLAG2_HAS_EEE,
++	.pba			= 26,
++	.max_hw_frame_size	= 9022,
++	.get_variants		= e1000_get_variants_ich8lan,
++	.mac_ops		= &ich8_mac_ops,
++	.phy_ops		= &ich8_phy_ops,
++	.nvm_ops		= &spt_nvm_ops,
++};
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.h b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+index e757896287eba..8f2a8f4ce0ee4 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.h
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+@@ -286,6 +286,9 @@
+ /* Proprietary Latency Tolerance Reporting PCI Capability */
+ #define E1000_PCI_LTR_CAP_LPT		0xA8
+ 
++/* Don't gate wake DMA clock */
++#define E1000_FFLT_DBG_DONT_GATE_WAKE_DMA_CLK	0x1000
++
+ void e1000e_write_protect_nvm_ich8lan(struct e1000_hw *hw);
+ void e1000e_set_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw,
+ 						  bool state);
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 361b8d0bd78d7..d0c4de0231120 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -50,6 +50,7 @@ static const struct e1000_info *e1000_info_tbl[] = {
+ 	[board_pch_lpt]		= &e1000_pch_lpt_info,
+ 	[board_pch_spt]		= &e1000_pch_spt_info,
+ 	[board_pch_cnp]		= &e1000_pch_cnp_info,
++	[board_pch_tgp]		= &e1000_pch_tgp_info,
+ };
+ 
+ struct e1000_reg_info {
+@@ -7837,20 +7838,20 @@ static const struct pci_device_id e1000_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_CMP_I219_V11), board_pch_cnp },
+ 	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_CMP_I219_LM12), board_pch_spt },
+ 	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_CMP_I219_V12), board_pch_spt },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM13), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V13), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM14), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V14), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM15), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V15), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_cnp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM13), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V13), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM14), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V14), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM15), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V15), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_tgp },
+ 
+ 	{ 0, 0, 0, 0, 0, 0, 0 }	/* terminate list */
+ };
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 2239a5f45e5a7..64714757bd4f4 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -24,6 +24,8 @@ static enum ice_status ice_set_mac_type(struct ice_hw *hw)
+ 	case ICE_DEV_ID_E810C_BACKPLANE:
+ 	case ICE_DEV_ID_E810C_QSFP:
+ 	case ICE_DEV_ID_E810C_SFP:
++	case ICE_DEV_ID_E810_XXV_BACKPLANE:
++	case ICE_DEV_ID_E810_XXV_QSFP:
+ 	case ICE_DEV_ID_E810_XXV_SFP:
+ 		hw->mac_type = ICE_MAC_E810;
+ 		break;
+diff --git a/drivers/net/ethernet/intel/ice/ice_devids.h b/drivers/net/ethernet/intel/ice/ice_devids.h
+index 9d8194671f6a6..ef4392e6e2444 100644
+--- a/drivers/net/ethernet/intel/ice/ice_devids.h
++++ b/drivers/net/ethernet/intel/ice/ice_devids.h
+@@ -21,6 +21,10 @@
+ #define ICE_DEV_ID_E810C_QSFP		0x1592
+ /* Intel(R) Ethernet Controller E810-C for SFP */
+ #define ICE_DEV_ID_E810C_SFP		0x1593
++/* Intel(R) Ethernet Controller E810-XXV for backplane */
++#define ICE_DEV_ID_E810_XXV_BACKPLANE	0x1599
++/* Intel(R) Ethernet Controller E810-XXV for QSFP */
++#define ICE_DEV_ID_E810_XXV_QSFP	0x159A
+ /* Intel(R) Ethernet Controller E810-XXV for SFP */
+ #define ICE_DEV_ID_E810_XXV_SFP		0x159B
+ /* Intel(R) Ethernet Connection E823-C for backplane */
+diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+index 9095b4d274ad2..a81be917f6538 100644
+--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
++++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+@@ -1669,7 +1669,7 @@ static u16 ice_tunnel_idx_to_entry(struct ice_hw *hw, enum ice_tunnel_type type,
+ 	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+ 		if (hw->tnl.tbl[i].valid &&
+ 		    hw->tnl.tbl[i].type == type &&
+-		    idx--)
++		    idx-- == 0)
+ 			return i;
+ 
+ 	WARN_ON_ONCE(1);
+@@ -1829,7 +1829,7 @@ int ice_udp_tunnel_set_port(struct net_device *netdev, unsigned int table,
+ 	u16 index;
+ 
+ 	tnl_type = ti->type == UDP_TUNNEL_TYPE_VXLAN ? TNL_VXLAN : TNL_GENEVE;
+-	index = ice_tunnel_idx_to_entry(&pf->hw, idx, tnl_type);
++	index = ice_tunnel_idx_to_entry(&pf->hw, tnl_type, idx);
+ 
+ 	status = ice_create_tunnel(&pf->hw, index, tnl_type, ntohs(ti->port));
+ 	if (status) {
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 5d0dc1f811e0f..66d92a0cfef35 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -4773,6 +4773,8 @@ static const struct pci_device_id ice_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_BACKPLANE), 0 },
+ 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_QSFP), 0 },
+ 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_SFP), 0 },
++	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_BACKPLANE), 0 },
++	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_QSFP), 0 },
+ 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_SFP), 0 },
+ 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_BACKPLANE), 0 },
+ 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_QSFP), 0 },
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
+index fad503820e040..b3365b34cac7c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
+@@ -71,6 +71,7 @@ err_remove_config_dt:
+ 
+ static const struct of_device_id dwmac_generic_match[] = {
+ 	{ .compatible = "st,spear600-gmac"},
++	{ .compatible = "snps,dwmac-3.40a"},
+ 	{ .compatible = "snps,dwmac-3.50a"},
+ 	{ .compatible = "snps,dwmac-3.610"},
+ 	{ .compatible = "snps,dwmac-3.70a"},
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 6133b2fe8a78a..0ac61e7ab43cd 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -605,7 +605,7 @@ static int stmmac_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
+ 			config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+ 			ptp_v2 = PTP_TCR_TSVER2ENA;
+ 			snap_type_sel = PTP_TCR_SNAPTYPSEL_1;
+-			if (priv->synopsys_id != DWMAC_CORE_5_10)
++			if (priv->synopsys_id < DWMAC_CORE_4_10)
+ 				ts_event_en = PTP_TCR_TSEVNTENA;
+ 			ptp_over_ipv4_udp = PTP_TCR_TSIPV4ENA;
+ 			ptp_over_ipv6_udp = PTP_TCR_TSIPV6ENA;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 53be8fc1d125f..48186cd32ce10 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -508,6 +508,14 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
+ 		plat->pmt = 1;
+ 	}
+ 
++	if (of_device_is_compatible(np, "snps,dwmac-3.40a")) {
++		plat->has_gmac = 1;
++		plat->enh_desc = 1;
++		plat->tx_coe = 1;
++		plat->bugged_jumbo = 1;
++		plat->pmt = 1;
++	}
++
+ 	if (of_device_is_compatible(np, "snps,dwmac-4.00") ||
+ 	    of_device_is_compatible(np, "snps,dwmac-4.10a") ||
+ 	    of_device_is_compatible(np, "snps,dwmac-4.20a") ||
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 2645ca35103c9..453490be7055a 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -544,6 +544,7 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
+ 	err = device_register(&bus->dev);
+ 	if (err) {
+ 		pr_err("mii_bus %s failed to register\n", bus->id);
++		put_device(&bus->dev);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/net/usb/Kconfig b/drivers/net/usb/Kconfig
+index 4efad42b9aa97..867ff2ee8ecf3 100644
+--- a/drivers/net/usb/Kconfig
++++ b/drivers/net/usb/Kconfig
+@@ -117,6 +117,7 @@ config USB_LAN78XX
+ 	select PHYLIB
+ 	select MICROCHIP_PHY
+ 	select FIXED_PHY
++	select CRC32
+ 	help
+ 	  This option adds support for Microchip LAN78XX based USB 2
+ 	  & USB 3 10/100/1000 Ethernet adapters.
+diff --git a/drivers/pci/hotplug/s390_pci_hpc.c b/drivers/pci/hotplug/s390_pci_hpc.c
+index a047c421debe2..93174f503464e 100644
+--- a/drivers/pci/hotplug/s390_pci_hpc.c
++++ b/drivers/pci/hotplug/s390_pci_hpc.c
+@@ -109,14 +109,7 @@ static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
+ 	struct zpci_dev *zdev = container_of(hotplug_slot, struct zpci_dev,
+ 					     hotplug_slot);
+ 
+-	switch (zdev->state) {
+-	case ZPCI_FN_STATE_STANDBY:
+-		*value = 0;
+-		break;
+-	default:
+-		*value = 1;
+-		break;
+-	}
++	*value = zpci_is_device_configured(zdev) ? 1 : 0;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index 3af4430543dca..a5f1f6ba74392 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -1645,8 +1645,8 @@ int __maybe_unused stm32_pinctrl_resume(struct device *dev)
+ 	struct stm32_pinctrl_group *g = pctl->groups;
+ 	int i;
+ 
+-	for (i = g->pin; i < g->pin + pctl->ngroups; i++)
+-		stm32_pinctrl_restore_gpio_regs(pctl, i);
++	for (i = 0; i < pctl->ngroups; i++, g++)
++		stm32_pinctrl_restore_gpio_regs(pctl, g->pin);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/platform/x86/intel_scu_ipc.c b/drivers/platform/x86/intel_scu_ipc.c
+index 425d2064148f9..69d706039cb20 100644
+--- a/drivers/platform/x86/intel_scu_ipc.c
++++ b/drivers/platform/x86/intel_scu_ipc.c
+@@ -247,7 +247,7 @@ static inline int busy_loop(struct intel_scu_ipc_dev *scu)
+ 	return -ETIMEDOUT;
+ }
+ 
+-/* Wait till ipc ioc interrupt is received or timeout in 3 HZ */
++/* Wait till ipc ioc interrupt is received or timeout in 10 HZ */
+ static inline int ipc_wait_for_interrupt(struct intel_scu_ipc_dev *scu)
+ {
+ 	int status;
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index da3920a19d53d..d664c4650b2dd 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -220,7 +220,8 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+ 		goto fail;
+ 	}
+ 
+-	shost->cmd_per_lun = min_t(short, shost->cmd_per_lun,
++	/* Use min_t(int, ...) in case shost->can_queue exceeds SHRT_MAX */
++	shost->cmd_per_lun = min_t(int, shost->cmd_per_lun,
+ 				   shost->can_queue);
+ 
+ 	error = scsi_init_sense_cache(shost);
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index 7fa085969a63a..1fd292a6ac881 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -414,7 +414,7 @@ done_unmap_sg:
+ 	goto done_free_fcport;
+ 
+ done_free_fcport:
+-	if (bsg_request->msgcode == FC_BSG_RPT_ELS)
++	if (bsg_request->msgcode != FC_BSG_RPT_ELS)
+ 		qla2x00_free_fcport(fcport);
+ done:
+ 	return rval;
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 41772b88610ae..3f7fa8de36427 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2907,8 +2907,6 @@ iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ 			session->recovery_tmo = value;
+ 		break;
+ 	default:
+-		err = transport->set_param(conn, ev->u.set_param.param,
+-					   data, ev->u.set_param.len);
+ 		if ((conn->state == ISCSI_CONN_BOUND) ||
+ 			(conn->state == ISCSI_CONN_UP)) {
+ 			err = transport->set_param(conn, ev->u.set_param.param,
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 2299866dc82ff..8c65e9476b41f 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -276,8 +276,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 			pdev->device == 0x3432)
+ 		xhci->quirks |= XHCI_BROKEN_STREAMS;
+ 
+-	if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483)
++	if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483) {
+ 		xhci->quirks |= XHCI_LPM_SUPPORT;
++		xhci->quirks |= XHCI_EP_CTX_BROKEN_DCS;
++	}
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+ 		pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index ec8f2910faf97..4512c4223392a 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -562,7 +562,10 @@ void xhci_find_new_dequeue_state(struct xhci_hcd *xhci,
+ 	struct xhci_virt_ep *ep = &dev->eps[ep_index];
+ 	struct xhci_ring *ep_ring;
+ 	struct xhci_segment *new_seg;
++	struct xhci_segment *halted_seg = NULL;
+ 	union xhci_trb *new_deq;
++	union xhci_trb *halted_trb;
++	int index = 0;
+ 	dma_addr_t addr;
+ 	u64 hw_dequeue;
+ 	bool cycle_found = false;
+@@ -600,7 +603,28 @@ void xhci_find_new_dequeue_state(struct xhci_hcd *xhci,
+ 	hw_dequeue = xhci_get_hw_deq(xhci, dev, ep_index, stream_id);
+ 	new_seg = ep_ring->deq_seg;
+ 	new_deq = ep_ring->dequeue;
+-	state->new_cycle_state = hw_dequeue & 0x1;
++
++	/*
++	 * Quirk: xHC write-back of the DCS field in the hardware dequeue
++	 * pointer is wrong - use the cycle state of the TRB pointed to by
++	 * the dequeue pointer.
++	 */
++	if (xhci->quirks & XHCI_EP_CTX_BROKEN_DCS &&
++	    !(ep->ep_state & EP_HAS_STREAMS))
++		halted_seg = trb_in_td(xhci, cur_td->start_seg,
++				       cur_td->first_trb, cur_td->last_trb,
++				       hw_dequeue & ~0xf, false);
++	if (halted_seg) {
++		index = ((dma_addr_t)(hw_dequeue & ~0xf) - halted_seg->dma) /
++			 sizeof(*halted_trb);
++		halted_trb = &halted_seg->trbs[index];
++		state->new_cycle_state = halted_trb->generic.field[3] & 0x1;
++		xhci_dbg(xhci, "Endpoint DCS = %d TRB index = %d cycle = %d\n",
++			 (u8)(hw_dequeue & 0x1), index,
++			 state->new_cycle_state);
++	} else {
++		state->new_cycle_state = hw_dequeue & 0x1;
++	}
+ 	state->stream_id = stream_id;
+ 
+ 	/*
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 1c97c8d81154d..45584a2783366 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1884,6 +1884,7 @@ struct xhci_hcd {
+ #define XHCI_DISABLE_SPARSE	BIT_ULL(38)
+ #define XHCI_SG_TRB_CACHE_SIZE_QUIRK	BIT_ULL(39)
+ #define XHCI_NO_SOFT_RETRY	BIT_ULL(40)
++#define XHCI_EP_CTX_BROKEN_DCS	BIT_ULL(42)
+ 
+ 	unsigned int		num_active_eps;
+ 	unsigned int		limit_active_eps;
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index c3bb5c4375ab0..3b93a98fd5449 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -894,9 +894,11 @@ out:
+ }
+ 
+ /*
+- * helper function to see if a given name and sequence number found
+- * in an inode back reference are already in a directory and correctly
+- * point to this inode
++ * See if a given name and sequence number found in an inode back reference are
++ * already in a directory and correctly point to this inode.
++ *
++ * Returns: < 0 on error, 0 if the directory entry does not exists and 1 if it
++ * exists.
+  */
+ static noinline int inode_in_dir(struct btrfs_root *root,
+ 				 struct btrfs_path *path,
+@@ -905,29 +907,35 @@ static noinline int inode_in_dir(struct btrfs_root *root,
+ {
+ 	struct btrfs_dir_item *di;
+ 	struct btrfs_key location;
+-	int match = 0;
++	int ret = 0;
+ 
+ 	di = btrfs_lookup_dir_index_item(NULL, root, path, dirid,
+ 					 index, name, name_len, 0);
+-	if (di && !IS_ERR(di)) {
++	if (IS_ERR(di)) {
++		if (PTR_ERR(di) != -ENOENT)
++			ret = PTR_ERR(di);
++		goto out;
++	} else if (di) {
+ 		btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
+ 		if (location.objectid != objectid)
+ 			goto out;
+-	} else
++	} else {
+ 		goto out;
+-	btrfs_release_path(path);
++	}
+ 
++	btrfs_release_path(path);
+ 	di = btrfs_lookup_dir_item(NULL, root, path, dirid, name, name_len, 0);
+-	if (di && !IS_ERR(di)) {
+-		btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
+-		if (location.objectid != objectid)
+-			goto out;
+-	} else
++	if (IS_ERR(di)) {
++		ret = PTR_ERR(di);
+ 		goto out;
+-	match = 1;
++	} else if (di) {
++		btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
++		if (location.objectid == objectid)
++			ret = 1;
++	}
+ out:
+ 	btrfs_release_path(path);
+-	return match;
++	return ret;
+ }
+ 
+ /*
+@@ -1477,10 +1485,12 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 		if (ret)
+ 			goto out;
+ 
+-		/* if we already have a perfect match, we're done */
+-		if (!inode_in_dir(root, path, btrfs_ino(BTRFS_I(dir)),
+-					btrfs_ino(BTRFS_I(inode)), ref_index,
+-					name, namelen)) {
++		ret = inode_in_dir(root, path, btrfs_ino(BTRFS_I(dir)),
++				   btrfs_ino(BTRFS_I(inode)), ref_index,
++				   name, namelen);
++		if (ret < 0) {
++			goto out;
++		} else if (ret == 0) {
+ 			/*
+ 			 * look for a conflicting back reference in the
+ 			 * metadata. if we find one we have to unlink that name
+@@ -1538,6 +1548,7 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 
+ 			btrfs_update_inode(trans, root, inode);
+ 		}
++		/* Else, ret == 1, we already have a perfect match, we're done. */
+ 
+ 		ref_ptr = (unsigned long)(ref_ptr + ref_struct_size) + namelen;
+ 		kfree(name);
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 48ea95b81df84..676f551953060 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -2334,7 +2334,6 @@ static int unsafe_request_wait(struct inode *inode)
+ 
+ int ceph_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ {
+-	struct ceph_file_info *fi = file->private_data;
+ 	struct inode *inode = file->f_mapping->host;
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	u64 flush_tid;
+@@ -2369,14 +2368,9 @@ int ceph_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ 	if (err < 0)
+ 		ret = err;
+ 
+-	if (errseq_check(&ci->i_meta_err, READ_ONCE(fi->meta_err))) {
+-		spin_lock(&file->f_lock);
+-		err = errseq_check_and_advance(&ci->i_meta_err,
+-					       &fi->meta_err);
+-		spin_unlock(&file->f_lock);
+-		if (err < 0)
+-			ret = err;
+-	}
++	err = file_check_and_advance_wb_err(file);
++	if (err < 0)
++		ret = err;
+ out:
+ 	dout("fsync %p%s result=%d\n", inode, datasync ? " datasync" : "", ret);
+ 	return ret;
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index f1895f78ab452..8e6855e7ed836 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -233,7 +233,6 @@ static int ceph_init_file_info(struct inode *inode, struct file *file,
+ 
+ 	spin_lock_init(&fi->rw_contexts_lock);
+ 	INIT_LIST_HEAD(&fi->rw_contexts);
+-	fi->meta_err = errseq_sample(&ci->i_meta_err);
+ 	fi->filp_gen = READ_ONCE(ceph_inode_to_client(inode)->filp_gen);
+ 
+ 	return 0;
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 63e781e4f7e44..76be50f6f041a 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -529,8 +529,6 @@ struct inode *ceph_alloc_inode(struct super_block *sb)
+ 
+ 	ceph_fscache_inode_init(ci);
+ 
+-	ci->i_meta_err = 0;
+-
+ 	return &ci->vfs_inode;
+ }
+ 
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 0f57b7d094578..76e347a8cf088 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -1481,7 +1481,6 @@ static void cleanup_session_requests(struct ceph_mds_client *mdsc,
+ {
+ 	struct ceph_mds_request *req;
+ 	struct rb_node *p;
+-	struct ceph_inode_info *ci;
+ 
+ 	dout("cleanup_session_requests mds%d\n", session->s_mds);
+ 	mutex_lock(&mdsc->mutex);
+@@ -1490,16 +1489,10 @@ static void cleanup_session_requests(struct ceph_mds_client *mdsc,
+ 				       struct ceph_mds_request, r_unsafe_item);
+ 		pr_warn_ratelimited(" dropping unsafe request %llu\n",
+ 				    req->r_tid);
+-		if (req->r_target_inode) {
+-			/* dropping unsafe change of inode's attributes */
+-			ci = ceph_inode(req->r_target_inode);
+-			errseq_set(&ci->i_meta_err, -EIO);
+-		}
+-		if (req->r_unsafe_dir) {
+-			/* dropping unsafe directory operation */
+-			ci = ceph_inode(req->r_unsafe_dir);
+-			errseq_set(&ci->i_meta_err, -EIO);
+-		}
++		if (req->r_target_inode)
++			mapping_set_error(req->r_target_inode->i_mapping, -EIO);
++		if (req->r_unsafe_dir)
++			mapping_set_error(req->r_unsafe_dir->i_mapping, -EIO);
+ 		__unregister_request(mdsc, req);
+ 	}
+ 	/* zero r_attempts, so kick_requests() will re-send requests */
+@@ -1668,7 +1661,7 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 		spin_unlock(&mdsc->cap_dirty_lock);
+ 
+ 		if (dirty_dropped) {
+-			errseq_set(&ci->i_meta_err, -EIO);
++			mapping_set_error(inode->i_mapping, -EIO);
+ 
+ 			if (ci->i_wrbuffer_ref_head == 0 &&
+ 			    ci->i_wr_ref == 0 &&
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index 33ba6f0aa55ce..f33bfb255db8f 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -997,16 +997,16 @@ static int ceph_compare_super(struct super_block *sb, struct fs_context *fc)
+ 	struct ceph_fs_client *new = fc->s_fs_info;
+ 	struct ceph_mount_options *fsopt = new->mount_options;
+ 	struct ceph_options *opt = new->client->options;
+-	struct ceph_fs_client *other = ceph_sb_to_client(sb);
++	struct ceph_fs_client *fsc = ceph_sb_to_client(sb);
+ 
+ 	dout("ceph_compare_super %p\n", sb);
+ 
+-	if (compare_mount_options(fsopt, opt, other)) {
++	if (compare_mount_options(fsopt, opt, fsc)) {
+ 		dout("monitor(s)/mount options don't match\n");
+ 		return 0;
+ 	}
+ 	if ((opt->flags & CEPH_OPT_FSID) &&
+-	    ceph_fsid_compare(&opt->fsid, &other->client->fsid)) {
++	    ceph_fsid_compare(&opt->fsid, &fsc->client->fsid)) {
+ 		dout("fsid doesn't match\n");
+ 		return 0;
+ 	}
+@@ -1014,6 +1014,17 @@ static int ceph_compare_super(struct super_block *sb, struct fs_context *fc)
+ 		dout("flags differ\n");
+ 		return 0;
+ 	}
++
++	if (fsc->blocklisted && !ceph_test_mount_opt(fsc, CLEANRECOVER)) {
++		dout("client is blocklisted (and CLEANRECOVER is not set)\n");
++		return 0;
++	}
++
++	if (fsc->mount_state == CEPH_MOUNT_SHUTDOWN) {
++		dout("client has been forcibly unmounted\n");
++		return 0;
++	}
++
+ 	return 1;
+ }
+ 
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index 9362eeb5812d9..4db305fd2a02a 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -430,8 +430,6 @@ struct ceph_inode_info {
+ 	struct fscache_cookie *fscache;
+ 	u32 i_fscache_gen;
+ #endif
+-	errseq_t i_meta_err;
+-
+ 	struct inode vfs_inode; /* at end */
+ };
+ 
+@@ -773,7 +771,6 @@ struct ceph_file_info {
+ 	spinlock_t rw_contexts_lock;
+ 	struct list_head rw_contexts;
+ 
+-	errseq_t meta_err;
+ 	u32 filp_gen;
+ 	atomic_t num_locks;
+ };
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 26753d0cb4312..0736487165da4 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -5559,7 +5559,7 @@ static int io_timeout_remove_prep(struct io_kiocb *req,
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index || sqe->len || sqe->timeout_flags |
++	if (sqe->ioprio || sqe->buf_index || sqe->len || sqe->timeout_flags ||
+ 	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+diff --git a/fs/kernel_read_file.c b/fs/kernel_read_file.c
+index 90d255fbdd9b3..c84d87f558cb6 100644
+--- a/fs/kernel_read_file.c
++++ b/fs/kernel_read_file.c
+@@ -178,7 +178,7 @@ int kernel_read_file_from_fd(int fd, loff_t offset, void **buf,
+ 	struct fd f = fdget(fd);
+ 	int ret = -EBADF;
+ 
+-	if (!f.file)
++	if (!f.file || !(f.file->f_mode & FMODE_READ))
+ 		goto out;
+ 
+ 	ret = kernel_read_file(f.file, offset, buf, buf_size, file_size, id);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index ddf2b375632b7..21c4ffda5f943 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -792,7 +792,10 @@ out_close:
+ 		svc_xprt_put(xprt);
+ 	}
+ out_err:
+-	nfsd_destroy(net);
++	if (!list_empty(&nn->nfsd_serv->sv_permsocks))
++		nn->nfsd_serv->sv_nrthreads--;
++	 else
++		nfsd_destroy(net);
+ 	return err;
+ }
+ 
+diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c
+index 78710788c2370..a9a6276ff29bd 100644
+--- a/fs/ocfs2/alloc.c
++++ b/fs/ocfs2/alloc.c
+@@ -7047,7 +7047,7 @@ void ocfs2_set_inode_data_inline(struct inode *inode, struct ocfs2_dinode *di)
+ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
+ 					 struct buffer_head *di_bh)
+ {
+-	int ret, i, has_data, num_pages = 0;
++	int ret, has_data, num_pages = 0;
+ 	int need_free = 0;
+ 	u32 bit_off, num;
+ 	handle_t *handle;
+@@ -7056,26 +7056,17 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
+ 	struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+ 	struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data;
+ 	struct ocfs2_alloc_context *data_ac = NULL;
+-	struct page **pages = NULL;
+-	loff_t end = osb->s_clustersize;
++	struct page *page = NULL;
+ 	struct ocfs2_extent_tree et;
+ 	int did_quota = 0;
+ 
+ 	has_data = i_size_read(inode) ? 1 : 0;
+ 
+ 	if (has_data) {
+-		pages = kcalloc(ocfs2_pages_per_cluster(osb->sb),
+-				sizeof(struct page *), GFP_NOFS);
+-		if (pages == NULL) {
+-			ret = -ENOMEM;
+-			mlog_errno(ret);
+-			return ret;
+-		}
+-
+ 		ret = ocfs2_reserve_clusters(osb, 1, &data_ac);
+ 		if (ret) {
+ 			mlog_errno(ret);
+-			goto free_pages;
++			goto out;
+ 		}
+ 	}
+ 
+@@ -7095,7 +7086,8 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
+ 	}
+ 
+ 	if (has_data) {
+-		unsigned int page_end;
++		unsigned int page_end = min_t(unsigned, PAGE_SIZE,
++							osb->s_clustersize);
+ 		u64 phys;
+ 
+ 		ret = dquot_alloc_space_nodirty(inode,
+@@ -7119,15 +7111,8 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
+ 		 */
+ 		block = phys = ocfs2_clusters_to_blocks(inode->i_sb, bit_off);
+ 
+-		/*
+-		 * Non sparse file systems zero on extend, so no need
+-		 * to do that now.
+-		 */
+-		if (!ocfs2_sparse_alloc(osb) &&
+-		    PAGE_SIZE < osb->s_clustersize)
+-			end = PAGE_SIZE;
+-
+-		ret = ocfs2_grab_eof_pages(inode, 0, end, pages, &num_pages);
++		ret = ocfs2_grab_eof_pages(inode, 0, page_end, &page,
++					   &num_pages);
+ 		if (ret) {
+ 			mlog_errno(ret);
+ 			need_free = 1;
+@@ -7138,20 +7123,15 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
+ 		 * This should populate the 1st page for us and mark
+ 		 * it up to date.
+ 		 */
+-		ret = ocfs2_read_inline_data(inode, pages[0], di_bh);
++		ret = ocfs2_read_inline_data(inode, page, di_bh);
+ 		if (ret) {
+ 			mlog_errno(ret);
+ 			need_free = 1;
+ 			goto out_unlock;
+ 		}
+ 
+-		page_end = PAGE_SIZE;
+-		if (PAGE_SIZE > osb->s_clustersize)
+-			page_end = osb->s_clustersize;
+-
+-		for (i = 0; i < num_pages; i++)
+-			ocfs2_map_and_dirty_page(inode, handle, 0, page_end,
+-						 pages[i], i > 0, &phys);
++		ocfs2_map_and_dirty_page(inode, handle, 0, page_end, page, 0,
++					 &phys);
+ 	}
+ 
+ 	spin_lock(&oi->ip_lock);
+@@ -7182,8 +7162,8 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
+ 	}
+ 
+ out_unlock:
+-	if (pages)
+-		ocfs2_unlock_and_free_pages(pages, num_pages);
++	if (page)
++		ocfs2_unlock_and_free_pages(&page, num_pages);
+ 
+ out_commit:
+ 	if (ret < 0 && did_quota)
+@@ -7207,8 +7187,6 @@ out_commit:
+ out:
+ 	if (data_ac)
+ 		ocfs2_free_alloc_context(data_ac);
+-free_pages:
+-	kfree(pages);
+ 	return ret;
+ }
+ 
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 2febc76e9de70..435f82892432c 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -2171,11 +2171,17 @@ static int ocfs2_initialize_super(struct super_block *sb,
+ 	}
+ 
+ 	if (ocfs2_clusterinfo_valid(osb)) {
++		/*
++		 * ci_stack and ci_cluster in ocfs2_cluster_info may not be null
++		 * terminated, so make sure no overflow happens here by using
++		 * memcpy. Destination strings will always be null terminated
++		 * because osb is allocated using kzalloc.
++		 */
+ 		osb->osb_stackflags =
+ 			OCFS2_RAW_SB(di)->s_cluster_info.ci_stackflags;
+-		strlcpy(osb->osb_cluster_stack,
++		memcpy(osb->osb_cluster_stack,
+ 		       OCFS2_RAW_SB(di)->s_cluster_info.ci_stack,
+-		       OCFS2_STACK_LABEL_LEN + 1);
++		       OCFS2_STACK_LABEL_LEN);
+ 		if (strlen(osb->osb_cluster_stack) != OCFS2_STACK_LABEL_LEN) {
+ 			mlog(ML_ERROR,
+ 			     "couldn't mount because of an invalid "
+@@ -2184,9 +2190,9 @@ static int ocfs2_initialize_super(struct super_block *sb,
+ 			status = -EINVAL;
+ 			goto bail;
+ 		}
+-		strlcpy(osb->osb_cluster_name,
++		memcpy(osb->osb_cluster_name,
+ 			OCFS2_RAW_SB(di)->s_cluster_info.ci_cluster,
+-			OCFS2_CLUSTER_NAME_LEN + 1);
++			OCFS2_CLUSTER_NAME_LEN);
+ 	} else {
+ 		/* The empty string is identical with classic tools that
+ 		 * don't know about s_cluster_info. */
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 17397c7532f12..aef0da5d6f636 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -1794,9 +1794,15 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx,
+ 	if (mode_wp && mode_dontwake)
+ 		return -EINVAL;
+ 
+-	ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
+-				  uffdio_wp.range.len, mode_wp,
+-				  &ctx->mmap_changing);
++	if (mmget_not_zero(ctx->mm)) {
++		ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
++					  uffdio_wp.range.len, mode_wp,
++					  &ctx->mmap_changing);
++		mmput(ctx->mm);
++	} else {
++		return -ESRCH;
++	}
++
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/include/linux/elfcore.h b/include/linux/elfcore.h
+index de51c1bef27da..40e93bfa559d1 100644
+--- a/include/linux/elfcore.h
++++ b/include/linux/elfcore.h
+@@ -104,7 +104,7 @@ static inline int elf_core_copy_task_fpregs(struct task_struct *t, struct pt_reg
+ #endif
+ }
+ 
+-#if defined(CONFIG_UM) || defined(CONFIG_IA64)
++#if (defined(CONFIG_UML) && defined(CONFIG_X86_32)) || defined(CONFIG_IA64)
+ /*
+  * These functions parameterize elf_core_dump in fs/binfmt_elf.c to write out
+  * extra segments containing the gate DSO contents.  Dumping its
+diff --git a/include/sound/hda_codec.h b/include/sound/hda_codec.h
+index 73827b7d17e00..cf530e9fb5f5c 100644
+--- a/include/sound/hda_codec.h
++++ b/include/sound/hda_codec.h
+@@ -225,6 +225,7 @@ struct hda_codec {
+ #endif
+ 
+ 	/* misc flags */
++	unsigned int configured:1; /* codec was configured */
+ 	unsigned int in_freeing:1; /* being released */
+ 	unsigned int registered:1; /* codec was registered */
+ 	unsigned int display_power_control:1; /* needs display power */
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 8dba8f0983b5c..638f424859edc 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -653,7 +653,7 @@ static int audit_filter_rules(struct task_struct *tsk,
+ 			result = audit_comparator(audit_loginuid_set(tsk), f->op, f->val);
+ 			break;
+ 		case AUDIT_SADDR_FAM:
+-			if (ctx->sockaddr)
++			if (ctx && ctx->sockaddr)
+ 				result = audit_comparator(ctx->sockaddr->ss_family,
+ 							  f->op, f->val);
+ 			break;
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index 4457545299177..10d07ace46c15 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -1300,6 +1300,12 @@ void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
+ 	if (unlikely(dma_debug_disabled()))
+ 		return;
+ 
++	for_each_sg(sg, s, nents, i) {
++		check_for_stack(dev, sg_page(s), s->offset);
++		if (!PageHighMem(sg_page(s)))
++			check_for_illegal_area(dev, sg_virt(s), s->length);
++	}
++
+ 	for_each_sg(sg, s, mapped_ents, i) {
+ 		entry = dma_entry_alloc();
+ 		if (!entry)
+@@ -1315,12 +1321,6 @@ void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
+ 		entry->sg_call_ents   = nents;
+ 		entry->sg_mapped_ents = mapped_ents;
+ 
+-		check_for_stack(dev, sg_page(s), s->offset);
+-
+-		if (!PageHighMem(sg_page(s))) {
+-			check_for_illegal_area(dev, sg_virt(s), sg_dma_len(s));
+-		}
+-
+ 		check_sg_segment(dev, s);
+ 
+ 		add_dma_entry(entry);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 6db20a66e8e68..e4551d1736fa3 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -6677,6 +6677,7 @@ void idle_task_exit(void)
+ 		finish_arch_post_lock_switch();
+ 	}
+ 
++	scs_task_reset(current);
+ 	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
+ }
+ 
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 30010614b9237..4a5d35dc490b2 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -6985,7 +6985,7 @@ __ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
+ 	struct ftrace_ops *op;
+ 	int bit;
+ 
+-	bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
++	bit = trace_test_and_set_recursion(TRACE_LIST_START);
+ 	if (bit < 0)
+ 		return;
+ 
+@@ -7060,7 +7060,7 @@ static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
+ {
+ 	int bit;
+ 
+-	bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
++	bit = trace_test_and_set_recursion(TRACE_LIST_START);
+ 	if (bit < 0)
+ 		return;
+ 
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 6784b572ce597..15a811d34cd82 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -573,18 +573,6 @@ struct tracer {
+  *    then this function calls...
+  *   The function callback, which can use the FTRACE bits to
+  *    check for recursion.
+- *
+- * Now if the arch does not support a feature, and it calls
+- * the global list function which calls the ftrace callback
+- * all three of these steps will do a recursion protection.
+- * There's no reason to do one if the previous caller already
+- * did. The recursion that we are protecting against will
+- * go through the same steps again.
+- *
+- * To prevent the multiple recursion checks, if a recursion
+- * bit is set that is higher than the MAX bit of the current
+- * check, then we know that the check was made by the previous
+- * caller, and we can skip the current check.
+  */
+ enum {
+ 	/* Function recursion bits */
+@@ -592,12 +580,14 @@ enum {
+ 	TRACE_FTRACE_NMI_BIT,
+ 	TRACE_FTRACE_IRQ_BIT,
+ 	TRACE_FTRACE_SIRQ_BIT,
++	TRACE_FTRACE_TRANSITION_BIT,
+ 
+-	/* INTERNAL_BITs must be greater than FTRACE_BITs */
++	/* Internal use recursion bits */
+ 	TRACE_INTERNAL_BIT,
+ 	TRACE_INTERNAL_NMI_BIT,
+ 	TRACE_INTERNAL_IRQ_BIT,
+ 	TRACE_INTERNAL_SIRQ_BIT,
++	TRACE_INTERNAL_TRANSITION_BIT,
+ 
+ 	TRACE_BRANCH_BIT,
+ /*
+@@ -637,12 +627,6 @@ enum {
+ 	 * function is called to clear it.
+ 	 */
+ 	TRACE_GRAPH_NOTRACE_BIT,
+-
+-	/*
+-	 * When transitioning between context, the preempt_count() may
+-	 * not be correct. Allow for a single recursion to cover this case.
+-	 */
+-	TRACE_TRANSITION_BIT,
+ };
+ 
+ #define trace_recursion_set(bit)	do { (current)->trace_recursion |= (1<<(bit)); } while (0)
+@@ -662,12 +646,18 @@ enum {
+ #define TRACE_CONTEXT_BITS	4
+ 
+ #define TRACE_FTRACE_START	TRACE_FTRACE_BIT
+-#define TRACE_FTRACE_MAX	((1 << (TRACE_FTRACE_START + TRACE_CONTEXT_BITS)) - 1)
+ 
+ #define TRACE_LIST_START	TRACE_INTERNAL_BIT
+-#define TRACE_LIST_MAX		((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1)
+ 
+-#define TRACE_CONTEXT_MASK	TRACE_LIST_MAX
++#define TRACE_CONTEXT_MASK	((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1)
++
++enum {
++	TRACE_CTX_NMI,
++	TRACE_CTX_IRQ,
++	TRACE_CTX_SOFTIRQ,
++	TRACE_CTX_NORMAL,
++	TRACE_CTX_TRANSITION,
++};
+ 
+ static __always_inline int trace_get_context_bit(void)
+ {
+@@ -675,59 +665,48 @@ static __always_inline int trace_get_context_bit(void)
+ 
+ 	if (in_interrupt()) {
+ 		if (in_nmi())
+-			bit = 0;
++			bit = TRACE_CTX_NMI;
+ 
+ 		else if (in_irq())
+-			bit = 1;
++			bit = TRACE_CTX_IRQ;
+ 		else
+-			bit = 2;
++			bit = TRACE_CTX_SOFTIRQ;
+ 	} else
+-		bit = 3;
++		bit = TRACE_CTX_NORMAL;
+ 
+ 	return bit;
+ }
+ 
+-static __always_inline int trace_test_and_set_recursion(int start, int max)
++static __always_inline int trace_test_and_set_recursion(int start)
+ {
+ 	unsigned int val = current->trace_recursion;
+ 	int bit;
+ 
+-	/* A previous recursion check was made */
+-	if ((val & TRACE_CONTEXT_MASK) > max)
+-		return 0;
+-
+ 	bit = trace_get_context_bit() + start;
+ 	if (unlikely(val & (1 << bit))) {
+ 		/*
+ 		 * It could be that preempt_count has not been updated during
+ 		 * a switch between contexts. Allow for a single recursion.
+ 		 */
+-		bit = TRACE_TRANSITION_BIT;
++		bit = start + TRACE_CTX_TRANSITION;
+ 		if (trace_recursion_test(bit))
+ 			return -1;
+ 		trace_recursion_set(bit);
+ 		barrier();
+-		return bit + 1;
++		return bit;
+ 	}
+ 
+-	/* Normal check passed, clear the transition to allow it again */
+-	trace_recursion_clear(TRACE_TRANSITION_BIT);
+-
+ 	val |= 1 << bit;
+ 	current->trace_recursion = val;
+ 	barrier();
+ 
+-	return bit + 1;
++	return bit;
+ }
+ 
+ static __always_inline void trace_clear_recursion(int bit)
+ {
+ 	unsigned int val = current->trace_recursion;
+ 
+-	if (!bit)
+-		return;
+-
+-	bit--;
+ 	bit = 1 << bit;
+ 	val &= ~bit;
+ 
+diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
+index 2c2126e1871d4..93e20ed642e53 100644
+--- a/kernel/trace/trace_functions.c
++++ b/kernel/trace/trace_functions.c
+@@ -144,7 +144,7 @@ function_trace_call(unsigned long ip, unsigned long parent_ip,
+ 	pc = preempt_count();
+ 	preempt_disable_notrace();
+ 
+-	bit = trace_test_and_set_recursion(TRACE_FTRACE_START, TRACE_FTRACE_MAX);
++	bit = trace_test_and_set_recursion(TRACE_FTRACE_START);
+ 	if (bit < 0)
+ 		goto out;
+ 
+diff --git a/mm/slub.c b/mm/slub.c
+index f5fc44208bdc3..1384dc9068337 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -1543,7 +1543,8 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
+ }
+ 
+ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
+-					   void **head, void **tail)
++					   void **head, void **tail,
++					   int *cnt)
+ {
+ 
+ 	void *object;
+@@ -1578,6 +1579,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
+ 			*head = object;
+ 			if (!*tail)
+ 				*tail = object;
++		} else {
++			/*
++			 * Adjust the reconstructed freelist depth
++			 * accordingly if object's reuse is delayed.
++			 */
++			--(*cnt);
+ 		}
+ 	} while (object != old_tail);
+ 
+@@ -3093,7 +3100,9 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
+ 	struct kmem_cache_cpu *c;
+ 	unsigned long tid;
+ 
+-	memcg_slab_free_hook(s, &head, 1);
++	/* memcg_slab_free_hook() is already called for bulk free. */
++	if (!tail)
++		memcg_slab_free_hook(s, &head, 1);
+ redo:
+ 	/*
+ 	 * Determine the currently cpus per cpu slab.
+@@ -3137,7 +3146,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
+ 	 * With KASAN enabled slab_free_freelist_hook modifies the freelist
+ 	 * to remove objects, whose reuse must be delayed.
+ 	 */
+-	if (slab_free_freelist_hook(s, &head, &tail))
++	if (slab_free_freelist_hook(s, &head, &tail, &cnt))
+ 		do_slab_free(s, page, head, tail, cnt, addr);
+ }
+ 
+@@ -3825,8 +3834,8 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
+ 	if (alloc_kmem_cache_cpus(s))
+ 		return 0;
+ 
+-	free_kmem_cache_nodes(s);
+ error:
++	__kmem_cache_release(s);
+ 	return -EINVAL;
+ }
+ 
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index 72d424a5a1429..eb684f31fd698 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -481,6 +481,12 @@ static void convert_skb_to___skb(struct sk_buff *skb, struct __sk_buff *__skb)
+ 	__skb->gso_segs = skb_shinfo(skb)->gso_segs;
+ }
+ 
++static struct proto bpf_dummy_proto = {
++	.name   = "bpf_dummy",
++	.owner  = THIS_MODULE,
++	.obj_size = sizeof(struct sock),
++};
++
+ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
+ 			  union bpf_attr __user *uattr)
+ {
+@@ -525,20 +531,19 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
+ 		break;
+ 	}
+ 
+-	sk = kzalloc(sizeof(struct sock), GFP_USER);
++	sk = sk_alloc(net, AF_UNSPEC, GFP_USER, &bpf_dummy_proto, 1);
+ 	if (!sk) {
+ 		kfree(data);
+ 		kfree(ctx);
+ 		return -ENOMEM;
+ 	}
+-	sock_net_set(sk, net);
+ 	sock_init_data(NULL, sk);
+ 
+ 	skb = build_skb(data, 0);
+ 	if (!skb) {
+ 		kfree(data);
+ 		kfree(ctx);
+-		kfree(sk);
++		sk_free(sk);
+ 		return -ENOMEM;
+ 	}
+ 	skb->sk = sk;
+@@ -611,8 +616,7 @@ out:
+ 	if (dev && dev != net->loopback_dev)
+ 		dev_put(dev);
+ 	kfree_skb(skb);
+-	bpf_sk_storage_free(sk);
+-	kfree(sk);
++	sk_free(sk);
+ 	kfree(ctx);
+ 	return ret;
+ }
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index 5e5726048a1af..2b88b17cc8b25 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -931,9 +931,7 @@ static inline unsigned long br_multicast_lmqt(const struct net_bridge *br)
+ 
+ static inline unsigned long br_multicast_gmi(const struct net_bridge *br)
+ {
+-	/* use the RFC default of 2 for QRV */
+-	return 2 * br->multicast_query_interval +
+-	       br->multicast_query_response_interval;
++	return br->multicast_membership_interval;
+ }
+ #else
+ static inline int br_multicast_rcv(struct net_bridge *br,
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 5fc28f190677b..8ee580538d876 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -121,7 +121,7 @@ enum {
+ struct tpcon {
+ 	int idx;
+ 	int len;
+-	u8 state;
++	u32 state;
+ 	u8 bs;
+ 	u8 sn;
+ 	u8 ll_dl;
+@@ -846,6 +846,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct isotp_sock *so = isotp_sk(sk);
++	u32 old_state = so->tx.state;
+ 	struct sk_buff *skb;
+ 	struct net_device *dev;
+ 	struct canfd_frame *cf;
+@@ -858,37 +859,45 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 		return -EADDRNOTAVAIL;
+ 
+ 	/* we do not support multiple buffers - for now */
+-	if (so->tx.state != ISOTP_IDLE || wq_has_sleeper(&so->wait)) {
+-		if (msg->msg_flags & MSG_DONTWAIT)
+-			return -EAGAIN;
++	if (cmpxchg(&so->tx.state, ISOTP_IDLE, ISOTP_SENDING) != ISOTP_IDLE ||
++	    wq_has_sleeper(&so->wait)) {
++		if (msg->msg_flags & MSG_DONTWAIT) {
++			err = -EAGAIN;
++			goto err_out;
++		}
+ 
+ 		/* wait for complete transmission of current pdu */
+-		wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
++		err = wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
++		if (err)
++			goto err_out;
+ 	}
+ 
+-	if (!size || size > MAX_MSG_LENGTH)
+-		return -EINVAL;
++	if (!size || size > MAX_MSG_LENGTH) {
++		err = -EINVAL;
++		goto err_out;
++	}
+ 
+ 	err = memcpy_from_msg(so->tx.buf, msg, size);
+ 	if (err < 0)
+-		return err;
++		goto err_out;
+ 
+ 	dev = dev_get_by_index(sock_net(sk), so->ifindex);
+-	if (!dev)
+-		return -ENXIO;
++	if (!dev) {
++		err = -ENXIO;
++		goto err_out;
++	}
+ 
+ 	skb = sock_alloc_send_skb(sk, so->ll.mtu + sizeof(struct can_skb_priv),
+ 				  msg->msg_flags & MSG_DONTWAIT, &err);
+ 	if (!skb) {
+ 		dev_put(dev);
+-		return err;
++		goto err_out;
+ 	}
+ 
+ 	can_skb_reserve(skb);
+ 	can_skb_prv(skb)->ifindex = dev->ifindex;
+ 	can_skb_prv(skb)->skbcnt = 0;
+ 
+-	so->tx.state = ISOTP_SENDING;
+ 	so->tx.len = size;
+ 	so->tx.idx = 0;
+ 
+@@ -947,15 +956,25 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	if (err) {
+ 		pr_notice_once("can-isotp: %s: can_send_ret %d\n",
+ 			       __func__, err);
+-		return err;
++		goto err_out;
+ 	}
+ 
+ 	if (wait_tx_done) {
+ 		/* wait for complete transmission of current pdu */
+ 		wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
++
++		if (sk->sk_err)
++			return -sk->sk_err;
+ 	}
+ 
+ 	return size;
++
++err_out:
++	so->tx.state = old_state;
++	if (so->tx.state == ISOTP_IDLE)
++		wake_up_interruptible(&so->wait);
++
++	return err;
+ }
+ 
+ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+diff --git a/net/can/j1939/j1939-priv.h b/net/can/j1939/j1939-priv.h
+index 12369b604ce95..cea712fb2a9e0 100644
+--- a/net/can/j1939/j1939-priv.h
++++ b/net/can/j1939/j1939-priv.h
+@@ -326,6 +326,7 @@ int j1939_session_activate(struct j1939_session *session);
+ void j1939_tp_schedule_txtimer(struct j1939_session *session, int msec);
+ void j1939_session_timers_cancel(struct j1939_session *session);
+ 
++#define J1939_MIN_TP_PACKET_SIZE 9
+ #define J1939_MAX_TP_PACKET_SIZE (7 * 0xff)
+ #define J1939_MAX_ETP_PACKET_SIZE (7 * 0x00ffffff)
+ 
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index 6884d18f919c7..266c189f1e809 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -249,11 +249,14 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
+ 	struct j1939_priv *priv, *priv_new;
+ 	int ret;
+ 
+-	priv = j1939_priv_get_by_ndev(ndev);
++	spin_lock(&j1939_netdev_lock);
++	priv = j1939_priv_get_by_ndev_locked(ndev);
+ 	if (priv) {
+ 		kref_get(&priv->rx_kref);
++		spin_unlock(&j1939_netdev_lock);
+ 		return priv;
+ 	}
++	spin_unlock(&j1939_netdev_lock);
+ 
+ 	priv = j1939_priv_create(ndev);
+ 	if (!priv)
+@@ -269,10 +272,10 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
+ 		/* Someone was faster than us, use their priv and roll
+ 		 * back our's.
+ 		 */
++		kref_get(&priv_new->rx_kref);
+ 		spin_unlock(&j1939_netdev_lock);
+ 		dev_put(ndev);
+ 		kfree(priv);
+-		kref_get(&priv_new->rx_kref);
+ 		return priv_new;
+ 	}
+ 	j1939_priv_set(ndev, priv);
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index bdc95bd7a851f..e59fbbffa31ce 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -1230,12 +1230,11 @@ static enum hrtimer_restart j1939_tp_rxtimer(struct hrtimer *hrtimer)
+ 		session->err = -ETIME;
+ 		j1939_session_deactivate(session);
+ 	} else {
+-		netdev_alert(priv->ndev, "%s: 0x%p: rx timeout, send abort\n",
+-			     __func__, session);
+-
+ 		j1939_session_list_lock(session->priv);
+ 		if (session->state >= J1939_SESSION_ACTIVE &&
+ 		    session->state < J1939_SESSION_ACTIVE_MAX) {
++			netdev_alert(priv->ndev, "%s: 0x%p: rx timeout, send abort\n",
++				     __func__, session);
+ 			j1939_session_get(session);
+ 			hrtimer_start(&session->rxtimer,
+ 				      ms_to_ktime(J1939_XTP_ABORT_TIMEOUT_MS),
+@@ -1597,6 +1596,8 @@ j1939_session *j1939_xtp_rx_rts_session_new(struct j1939_priv *priv,
+ 			abort = J1939_XTP_ABORT_FAULT;
+ 		else if (len > priv->tp_max_packet_size)
+ 			abort = J1939_XTP_ABORT_RESOURCE;
++		else if (len < J1939_MIN_TP_PACKET_SIZE)
++			abort = J1939_XTP_ABORT_FAULT;
+ 	}
+ 
+ 	if (abort != J1939_XTP_NO_ABORT) {
+@@ -1771,6 +1772,7 @@ static void j1939_xtp_rx_dpo(struct j1939_priv *priv, struct sk_buff *skb,
+ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ 				 struct sk_buff *skb)
+ {
++	enum j1939_xtp_abort abort = J1939_XTP_ABORT_FAULT;
+ 	struct j1939_priv *priv = session->priv;
+ 	struct j1939_sk_buff_cb *skcb;
+ 	struct sk_buff *se_skb = NULL;
+@@ -1785,9 +1787,11 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ 
+ 	skcb = j1939_skb_to_cb(skb);
+ 	dat = skb->data;
+-	if (skb->len <= 1)
++	if (skb->len != 8) {
+ 		/* makes no sense */
++		abort = J1939_XTP_ABORT_UNEXPECTED_DATA;
+ 		goto out_session_cancel;
++	}
+ 
+ 	switch (session->last_cmd) {
+ 	case 0xff:
+@@ -1885,7 +1889,7 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+  out_session_cancel:
+ 	kfree_skb(se_skb);
+ 	j1939_session_timers_cancel(session);
+-	j1939_session_cancel(session, J1939_XTP_ABORT_FAULT);
++	j1939_session_cancel(session, abort);
+ 	j1939_session_put(session);
+ }
+ 
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 71395e745bc5e..017cd666387f3 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1022,6 +1022,20 @@ static void tcp_v4_reqsk_destructor(struct request_sock *req)
+ DEFINE_STATIC_KEY_FALSE(tcp_md5_needed);
+ EXPORT_SYMBOL(tcp_md5_needed);
+ 
++static bool better_md5_match(struct tcp_md5sig_key *old, struct tcp_md5sig_key *new)
++{
++	if (!old)
++		return true;
++
++	/* l3index always overrides non-l3index */
++	if (old->l3index && new->l3index == 0)
++		return false;
++	if (old->l3index == 0 && new->l3index)
++		return true;
++
++	return old->prefixlen < new->prefixlen;
++}
++
+ /* Find the Key structure for an address.  */
+ struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index,
+ 					   const union tcp_md5_addr *addr,
+@@ -1059,8 +1073,7 @@ struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index,
+ 			match = false;
+ 		}
+ 
+-		if (match && (!best_match ||
+-			      key->prefixlen > best_match->prefixlen))
++		if (match && better_md5_match(best_match, key))
+ 			best_match = key;
+ 	}
+ 	return best_match;
+@@ -1090,7 +1103,7 @@ static struct tcp_md5sig_key *tcp_md5_do_lookup_exact(const struct sock *sk,
+ 				 lockdep_sock_is_held(sk)) {
+ 		if (key->family != family)
+ 			continue;
+-		if (key->l3index && key->l3index != l3index)
++		if (key->l3index != l3index)
+ 			continue;
+ 		if (!memcmp(&key->addr, addr, size) &&
+ 		    key->prefixlen == prefixlen)
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 72a673a43a754..c2f8e69d7d7a0 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -487,13 +487,14 @@ static bool ip6_pkt_too_big(const struct sk_buff *skb, unsigned int mtu)
+ 
+ int ip6_forward(struct sk_buff *skb)
+ {
+-	struct inet6_dev *idev = __in6_dev_get_safely(skb->dev);
+ 	struct dst_entry *dst = skb_dst(skb);
+ 	struct ipv6hdr *hdr = ipv6_hdr(skb);
+ 	struct inet6_skb_parm *opt = IP6CB(skb);
+ 	struct net *net = dev_net(dst->dev);
++	struct inet6_dev *idev;
+ 	u32 mtu;
+ 
++	idev = __in6_dev_get_safely(dev_get_by_index_rcu(net, IP6CB(skb)->iif));
+ 	if (net->ipv6.devconf_all->forwarding == 0)
+ 		goto error;
+ 
+diff --git a/net/ipv6/netfilter/ip6t_rt.c b/net/ipv6/netfilter/ip6t_rt.c
+index 733c83d38b308..4ad8b2032f1f9 100644
+--- a/net/ipv6/netfilter/ip6t_rt.c
++++ b/net/ipv6/netfilter/ip6t_rt.c
+@@ -25,12 +25,7 @@ MODULE_AUTHOR("Andras Kis-Szabo <kisza@sch.bme.hu>");
+ static inline bool
+ segsleft_match(u_int32_t min, u_int32_t max, u_int32_t id, bool invert)
+ {
+-	bool r;
+-	pr_debug("segsleft_match:%c 0x%x <= 0x%x <= 0x%x\n",
+-		 invert ? '!' : ' ', min, id, max);
+-	r = (id >= min && id <= max) ^ invert;
+-	pr_debug(" result %s\n", r ? "PASS" : "FAILED");
+-	return r;
++	return (id >= min && id <= max) ^ invert;
+ }
+ 
+ static bool rt_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+@@ -65,30 +60,6 @@ static bool rt_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+ 		return false;
+ 	}
+ 
+-	pr_debug("IPv6 RT LEN %u %u ", hdrlen, rh->hdrlen);
+-	pr_debug("TYPE %04X ", rh->type);
+-	pr_debug("SGS_LEFT %u %02X\n", rh->segments_left, rh->segments_left);
+-
+-	pr_debug("IPv6 RT segsleft %02X ",
+-		 segsleft_match(rtinfo->segsleft[0], rtinfo->segsleft[1],
+-				rh->segments_left,
+-				!!(rtinfo->invflags & IP6T_RT_INV_SGS)));
+-	pr_debug("type %02X %02X %02X ",
+-		 rtinfo->rt_type, rh->type,
+-		 (!(rtinfo->flags & IP6T_RT_TYP) ||
+-		  ((rtinfo->rt_type == rh->type) ^
+-		   !!(rtinfo->invflags & IP6T_RT_INV_TYP))));
+-	pr_debug("len %02X %04X %02X ",
+-		 rtinfo->hdrlen, hdrlen,
+-		 !(rtinfo->flags & IP6T_RT_LEN) ||
+-		  ((rtinfo->hdrlen == hdrlen) ^
+-		   !!(rtinfo->invflags & IP6T_RT_INV_LEN)));
+-	pr_debug("res %02X %02X %02X ",
+-		 rtinfo->flags & IP6T_RT_RES,
+-		 ((const struct rt0_hdr *)rh)->reserved,
+-		 !((rtinfo->flags & IP6T_RT_RES) &&
+-		   (((const struct rt0_hdr *)rh)->reserved)));
+-
+ 	ret = (segsleft_match(rtinfo->segsleft[0], rtinfo->segsleft[1],
+ 			      rh->segments_left,
+ 			      !!(rtinfo->invflags & IP6T_RT_INV_SGS))) &&
+@@ -107,22 +78,22 @@ static bool rt_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+ 						       reserved),
+ 					sizeof(_reserved),
+ 					&_reserved);
++		if (!rp) {
++			par->hotdrop = true;
++			return false;
++		}
+ 
+ 		ret = (*rp == 0);
+ 	}
+ 
+-	pr_debug("#%d ", rtinfo->addrnr);
+ 	if (!(rtinfo->flags & IP6T_RT_FST)) {
+ 		return ret;
+ 	} else if (rtinfo->flags & IP6T_RT_FST_NSTRICT) {
+-		pr_debug("Not strict ");
+ 		if (rtinfo->addrnr > (unsigned int)((hdrlen - 8) / 16)) {
+-			pr_debug("There isn't enough space\n");
+ 			return false;
+ 		} else {
+ 			unsigned int i = 0;
+ 
+-			pr_debug("#%d ", rtinfo->addrnr);
+ 			for (temp = 0;
+ 			     temp < (unsigned int)((hdrlen - 8) / 16);
+ 			     temp++) {
+@@ -138,26 +109,20 @@ static bool rt_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+ 					return false;
+ 				}
+ 
+-				if (ipv6_addr_equal(ap, &rtinfo->addrs[i])) {
+-					pr_debug("i=%d temp=%d;\n", i, temp);
++				if (ipv6_addr_equal(ap, &rtinfo->addrs[i]))
+ 					i++;
+-				}
+ 				if (i == rtinfo->addrnr)
+ 					break;
+ 			}
+-			pr_debug("i=%d #%d\n", i, rtinfo->addrnr);
+ 			if (i == rtinfo->addrnr)
+ 				return ret;
+ 			else
+ 				return false;
+ 		}
+ 	} else {
+-		pr_debug("Strict ");
+ 		if (rtinfo->addrnr > (unsigned int)((hdrlen - 8) / 16)) {
+-			pr_debug("There isn't enough space\n");
+ 			return false;
+ 		} else {
+-			pr_debug("#%d ", rtinfo->addrnr);
+ 			for (temp = 0; temp < rtinfo->addrnr; temp++) {
+ 				ap = skb_header_pointer(skb,
+ 							ptr
+@@ -173,7 +138,6 @@ static bool rt_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+ 				if (!ipv6_addr_equal(ap, &rtinfo->addrs[temp]))
+ 					break;
+ 			}
+-			pr_debug("temp=%d #%d\n", temp, rtinfo->addrnr);
+ 			if (temp == rtinfo->addrnr &&
+ 			    temp == (unsigned int)((hdrlen - 8) / 16))
+ 				return ret;
+diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig
+index 52370211e46bf..6bafd3876aff3 100644
+--- a/net/netfilter/Kconfig
++++ b/net/netfilter/Kconfig
+@@ -94,7 +94,7 @@ config NF_CONNTRACK_MARK
+ config NF_CONNTRACK_SECMARK
+ 	bool  'Connection tracking security mark support'
+ 	depends on NETWORK_SECMARK
+-	default m if NETFILTER_ADVANCED=n
++	default y if NETFILTER_ADVANCED=n
+ 	help
+ 	  This option enables security markings to be applied to
+ 	  connections.  Typically they are copied to connections from
+diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
+index c25097092a060..29ec3ef63edc7 100644
+--- a/net/netfilter/ipvs/ip_vs_ctl.c
++++ b/net/netfilter/ipvs/ip_vs_ctl.c
+@@ -4090,6 +4090,11 @@ static int __net_init ip_vs_control_net_init_sysctl(struct netns_ipvs *ipvs)
+ 	tbl[idx++].data = &ipvs->sysctl_conn_reuse_mode;
+ 	tbl[idx++].data = &ipvs->sysctl_schedule_icmp;
+ 	tbl[idx++].data = &ipvs->sysctl_ignore_tunneled;
++#ifdef CONFIG_IP_VS_DEBUG
++	/* Global sysctls must be ro in non-init netns */
++	if (!net_eq(net, &init_net))
++		tbl[idx++].mode = 0444;
++#endif
+ 
+ 	ipvs->sysctl_hdr = register_net_sysctl(net, "net/ipv4/vs", tbl);
+ 	if (ipvs->sysctl_hdr == NULL) {
+diff --git a/net/netfilter/xt_IDLETIMER.c b/net/netfilter/xt_IDLETIMER.c
+index 7b2f359bfce46..2f7cf5ecebf4f 100644
+--- a/net/netfilter/xt_IDLETIMER.c
++++ b/net/netfilter/xt_IDLETIMER.c
+@@ -137,7 +137,7 @@ static int idletimer_tg_create(struct idletimer_tg_info *info)
+ {
+ 	int ret;
+ 
+-	info->timer = kmalloc(sizeof(*info->timer), GFP_KERNEL);
++	info->timer = kzalloc(sizeof(*info->timer), GFP_KERNEL);
+ 	if (!info->timer) {
+ 		ret = -ENOMEM;
+ 		goto out;
+diff --git a/net/nfc/nci/rsp.c b/net/nfc/nci/rsp.c
+index a48297b79f34f..b0ed2b47ac437 100644
+--- a/net/nfc/nci/rsp.c
++++ b/net/nfc/nci/rsp.c
+@@ -277,6 +277,8 @@ static void nci_core_conn_close_rsp_packet(struct nci_dev *ndev,
+ 							 ndev->cur_conn_id);
+ 		if (conn_info) {
+ 			list_del(&conn_info->list);
++			if (conn_info == ndev->rf_conn_info)
++				ndev->rf_conn_info = NULL;
+ 			devm_kfree(&ndev->nfc_dev->dev, conn_info);
+ 		}
+ 	}
+diff --git a/scripts/Makefile.gcc-plugins b/scripts/Makefile.gcc-plugins
+index 952e46876329a..4aad284800355 100644
+--- a/scripts/Makefile.gcc-plugins
++++ b/scripts/Makefile.gcc-plugins
+@@ -19,6 +19,10 @@ gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF)		\
+ 		+= -fplugin-arg-structleak_plugin-byref
+ gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL)	\
+ 		+= -fplugin-arg-structleak_plugin-byref-all
++ifdef CONFIG_GCC_PLUGIN_STRUCTLEAK
++    DISABLE_STRUCTLEAK_PLUGIN += -fplugin-arg-structleak_plugin-disable
++endif
++export DISABLE_STRUCTLEAK_PLUGIN
+ gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STRUCTLEAK)		\
+ 		+= -DSTRUCTLEAK_PLUGIN
+ 
+diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
+index b98449fd92f3b..522d1897659cb 100644
+--- a/sound/hda/hdac_controller.c
++++ b/sound/hda/hdac_controller.c
+@@ -421,8 +421,9 @@ int snd_hdac_bus_reset_link(struct hdac_bus *bus, bool full_reset)
+ 	if (!full_reset)
+ 		goto skip_reset;
+ 
+-	/* clear STATESTS */
+-	snd_hdac_chip_writew(bus, STATESTS, STATESTS_INT_MASK);
++	/* clear STATESTS if not in reset */
++	if (snd_hdac_chip_readb(bus, GCTL) & AZX_GCTL_RESET)
++		snd_hdac_chip_writew(bus, STATESTS, STATESTS_INT_MASK);
+ 
+ 	/* reset controller */
+ 	snd_hdac_bus_enter_link_reset(bus);
+diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
+index 17a25e453f60c..4efbcc41fdfb7 100644
+--- a/sound/pci/hda/hda_bind.c
++++ b/sound/pci/hda/hda_bind.c
+@@ -301,29 +301,31 @@ int snd_hda_codec_configure(struct hda_codec *codec)
+ {
+ 	int err;
+ 
++	if (codec->configured)
++		return 0;
++
+ 	if (is_generic_config(codec))
+ 		codec->probe_id = HDA_CODEC_ID_GENERIC;
+ 	else
+ 		codec->probe_id = 0;
+ 
+-	err = snd_hdac_device_register(&codec->core);
+-	if (err < 0)
+-		return err;
++	if (!device_is_registered(&codec->core.dev)) {
++		err = snd_hdac_device_register(&codec->core);
++		if (err < 0)
++			return err;
++	}
+ 
+ 	if (!codec->preset)
+ 		codec_bind_module(codec);
+ 	if (!codec->preset) {
+ 		err = codec_bind_generic(codec);
+ 		if (err < 0) {
+-			codec_err(codec, "Unable to bind the codec\n");
+-			goto error;
++			codec_dbg(codec, "Unable to bind the codec\n");
++			return err;
+ 		}
+ 	}
+ 
++	codec->configured = 1;
+ 	return 0;
+-
+- error:
+-	snd_hdac_device_unregister(&codec->core);
+-	return err;
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_codec_configure);
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 4cec1bd77e6fe..6dece719be669 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -791,6 +791,7 @@ void snd_hda_codec_cleanup_for_unbind(struct hda_codec *codec)
+ 	snd_array_free(&codec->nids);
+ 	remove_conn_list(codec);
+ 	snd_hdac_regmap_exit(&codec->core);
++	codec->configured = 0;
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_codec_cleanup_for_unbind);
+ 
+diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
+index b972d59eb1ec2..3de7dc34def24 100644
+--- a/sound/pci/hda/hda_controller.c
++++ b/sound/pci/hda/hda_controller.c
+@@ -25,6 +25,7 @@
+ #include <sound/core.h>
+ #include <sound/initval.h>
+ #include "hda_controller.h"
++#include "hda_local.h"
+ 
+ #define CREATE_TRACE_POINTS
+ #include "hda_controller_trace.h"
+@@ -1259,17 +1260,24 @@ EXPORT_SYMBOL_GPL(azx_probe_codecs);
+ int azx_codec_configure(struct azx *chip)
+ {
+ 	struct hda_codec *codec, *next;
++	int success = 0;
+ 
+-	/* use _safe version here since snd_hda_codec_configure() deregisters
+-	 * the device upon error and deletes itself from the bus list.
+-	 */
+-	list_for_each_codec_safe(codec, next, &chip->bus) {
+-		snd_hda_codec_configure(codec);
++	list_for_each_codec(codec, &chip->bus) {
++		if (!snd_hda_codec_configure(codec))
++			success++;
+ 	}
+ 
+-	if (!azx_bus(chip)->num_codecs)
+-		return -ENODEV;
+-	return 0;
++	if (success) {
++		/* unregister failed codecs if any codec has been probed */
++		list_for_each_codec_safe(codec, next, &chip->bus) {
++			if (!codec->configured) {
++				codec_err(codec, "Unable to configure, disabling\n");
++				snd_hdac_device_unregister(&codec->core);
++			}
++		}
++	}
++
++	return success ? 0 : -ENODEV;
+ }
+ EXPORT_SYMBOL_GPL(azx_codec_configure);
+ 
+diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
+index 68f9668788ea2..324cba13c7bac 100644
+--- a/sound/pci/hda/hda_controller.h
++++ b/sound/pci/hda/hda_controller.h
+@@ -41,7 +41,7 @@
+ /* 24 unused */
+ #define AZX_DCAPS_COUNT_LPIB_DELAY  (1 << 25)	/* Take LPIB as delay */
+ #define AZX_DCAPS_PM_RUNTIME	(1 << 26)	/* runtime PM support */
+-/* 27 unused */
++#define AZX_DCAPS_RETRY_PROBE	(1 << 27)	/* retry probe if no codec is configured */
+ #define AZX_DCAPS_CORBRP_SELF_CLEAR (1 << 28)	/* CORBRP clears itself after reset */
+ #define AZX_DCAPS_NO_MSI64      (1 << 29)	/* Stick to 32-bit MSIs */
+ #define AZX_DCAPS_SEPARATE_STREAM_TAG	(1 << 30) /* capture and playback use separate stream tag */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 4c8b281c39921..8bc27e7c05905 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -341,7 +341,8 @@ enum {
+ /* quirks for AMD SB */
+ #define AZX_DCAPS_PRESET_AMD_SB \
+ 	(AZX_DCAPS_NO_TCSEL | AZX_DCAPS_AMD_WORKAROUND |\
+-	 AZX_DCAPS_SNOOP_TYPE(ATI) | AZX_DCAPS_PM_RUNTIME)
++	 AZX_DCAPS_SNOOP_TYPE(ATI) | AZX_DCAPS_PM_RUNTIME |\
++	 AZX_DCAPS_RETRY_PROBE)
+ 
+ /* quirks for Nvidia */
+ #define AZX_DCAPS_PRESET_NVIDIA \
+@@ -1758,7 +1759,7 @@ static void azx_check_snoop_available(struct azx *chip)
+ 
+ static void azx_probe_work(struct work_struct *work)
+ {
+-	struct hda_intel *hda = container_of(work, struct hda_intel, probe_work);
++	struct hda_intel *hda = container_of(work, struct hda_intel, probe_work.work);
+ 	azx_probe_continue(&hda->chip);
+ }
+ 
+@@ -1867,7 +1868,7 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci,
+ 	}
+ 
+ 	/* continue probing in work context as may trigger request module */
+-	INIT_WORK(&hda->probe_work, azx_probe_work);
++	INIT_DELAYED_WORK(&hda->probe_work, azx_probe_work);
+ 
+ 	*rchip = chip;
+ 
+@@ -2202,7 +2203,7 @@ static int azx_probe(struct pci_dev *pci,
+ #endif
+ 
+ 	if (schedule_probe)
+-		schedule_work(&hda->probe_work);
++		schedule_delayed_work(&hda->probe_work, 0);
+ 
+ 	dev++;
+ 	if (chip->disabled)
+@@ -2290,6 +2291,11 @@ static int azx_probe_continue(struct azx *chip)
+ 	int dev = chip->dev_index;
+ 	int err;
+ 
++	if (chip->disabled || hda->init_failed)
++		return -EIO;
++	if (hda->probe_retry)
++		goto probe_retry;
++
+ 	to_hda_bus(bus)->bus_probing = 1;
+ 	hda->probe_continued = 1;
+ 
+@@ -2351,10 +2357,20 @@ static int azx_probe_continue(struct azx *chip)
+ #endif
+ 	}
+ #endif
++
++ probe_retry:
+ 	if (bus->codec_mask && !(probe_only[dev] & 1)) {
+ 		err = azx_codec_configure(chip);
+-		if (err < 0)
++		if (err) {
++			if ((chip->driver_caps & AZX_DCAPS_RETRY_PROBE) &&
++			    ++hda->probe_retry < 60) {
++				schedule_delayed_work(&hda->probe_work,
++						      msecs_to_jiffies(1000));
++				return 0; /* keep things up */
++			}
++			dev_err(chip->card->dev, "Cannot probe codecs, giving up\n");
+ 			goto out_free;
++		}
+ 	}
+ 
+ 	err = snd_card_register(chip->card);
+@@ -2384,6 +2400,7 @@ out_free:
+ 		display_power(chip, false);
+ 	complete_all(&hda->probe_wait);
+ 	to_hda_bus(bus)->bus_probing = 0;
++	hda->probe_retry = 0;
+ 	return 0;
+ }
+ 
+@@ -2409,7 +2426,7 @@ static void azx_remove(struct pci_dev *pci)
+ 		 * device during cancel_work_sync() call.
+ 		 */
+ 		device_unlock(&pci->dev);
+-		cancel_work_sync(&hda->probe_work);
++		cancel_delayed_work_sync(&hda->probe_work);
+ 		device_lock(&pci->dev);
+ 
+ 		snd_card_free(card);
+diff --git a/sound/pci/hda/hda_intel.h b/sound/pci/hda/hda_intel.h
+index 3fb119f090408..0f39418f9328b 100644
+--- a/sound/pci/hda/hda_intel.h
++++ b/sound/pci/hda/hda_intel.h
+@@ -14,7 +14,7 @@ struct hda_intel {
+ 
+ 	/* sync probing */
+ 	struct completion probe_wait;
+-	struct work_struct probe_work;
++	struct delayed_work probe_work;
+ 
+ 	/* card list (for power_save trigger) */
+ 	struct list_head list;
+@@ -30,6 +30,8 @@ struct hda_intel {
+ 	unsigned int freed:1; /* resources already released */
+ 
+ 	bool need_i915_power:1; /* the hda controller needs i915 power */
++
++	int probe_retry;	/* being probe-retry */
+ };
+ 
+ #endif
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index c36239cea474f..f511ae66bc8aa 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2547,6 +2547,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65f1, "Clevo PC50HS", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+diff --git a/sound/soc/codecs/wm8960.c b/sound/soc/codecs/wm8960.c
+index 9d325555e2191..618692e2e0e41 100644
+--- a/sound/soc/codecs/wm8960.c
++++ b/sound/soc/codecs/wm8960.c
+@@ -742,9 +742,16 @@ static int wm8960_configure_clocking(struct snd_soc_component *component)
+ 	int i, j, k;
+ 	int ret;
+ 
+-	if (!(iface1 & (1<<6))) {
+-		dev_dbg(component->dev,
+-			"Codec is slave mode, no need to configure clock\n");
++	/*
++	 * For Slave mode clocking should still be configured,
++	 * so this if statement should be removed, but some platform
++	 * may not work if the sysclk is not configured, to avoid such
++	 * compatible issue, just add '!wm8960->sysclk' condition in
++	 * this if statement.
++	 */
++	if (!(iface1 & (1 << 6)) && !wm8960->sysclk) {
++		dev_warn(component->dev,
++			 "slave mode, but proceeding with no clock configuration\n");
+ 		return 0;
+ 	}
+ 
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index f4b380d6aecf8..08960167d34f5 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -2559,6 +2559,7 @@ static int snd_soc_dapm_set_pin(struct snd_soc_dapm_context *dapm,
+ 				const char *pin, int status)
+ {
+ 	struct snd_soc_dapm_widget *w = dapm_find_widget(dapm, pin, true);
++	int ret = 0;
+ 
+ 	dapm_assert_locked(dapm);
+ 
+@@ -2571,13 +2572,14 @@ static int snd_soc_dapm_set_pin(struct snd_soc_dapm_context *dapm,
+ 		dapm_mark_dirty(w, "pin configuration");
+ 		dapm_widget_invalidate_input_paths(w);
+ 		dapm_widget_invalidate_output_paths(w);
++		ret = 1;
+ 	}
+ 
+ 	w->connected = status;
+ 	if (status == 0)
+ 		w->force = 0;
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ /**
+@@ -3582,14 +3584,15 @@ int snd_soc_dapm_put_pin_switch(struct snd_kcontrol *kcontrol,
+ {
+ 	struct snd_soc_card *card = snd_kcontrol_chip(kcontrol);
+ 	const char *pin = (const char *)kcontrol->private_value;
++	int ret;
+ 
+ 	if (ucontrol->value.integer.value[0])
+-		snd_soc_dapm_enable_pin(&card->dapm, pin);
++		ret = snd_soc_dapm_enable_pin(&card->dapm, pin);
+ 	else
+-		snd_soc_dapm_disable_pin(&card->dapm, pin);
++		ret = snd_soc_dapm_disable_pin(&card->dapm, pin);
+ 
+ 	snd_soc_dapm_sync(&card->dapm);
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_dapm_put_pin_switch);
+ 
+@@ -4035,7 +4038,7 @@ static int snd_soc_dapm_dai_link_put(struct snd_kcontrol *kcontrol,
+ 
+ 	rtd->params_select = ucontrol->value.enumerated.item[0];
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static void
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 7c649cd380493..949c6d129f2a9 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3698,6 +3698,38 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ 		}
+ 	}
+ },
++{
++	/*
++	 * Sennheiser GSP670
++	 * Change order of interfaces loaded
++	 */
++	USB_DEVICE(0x1395, 0x0300),
++	.bInterfaceClass = USB_CLASS_PER_INTERFACE,
++	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++		.ifnum = QUIRK_ANY_INTERFACE,
++		.type = QUIRK_COMPOSITE,
++		.data = &(const struct snd_usb_audio_quirk[]) {
++			// Communication
++			{
++				.ifnum = 3,
++				.type = QUIRK_AUDIO_STANDARD_INTERFACE
++			},
++			// Recording
++			{
++				.ifnum = 4,
++				.type = QUIRK_AUDIO_STANDARD_INTERFACE
++			},
++			// Main
++			{
++				.ifnum = 1,
++				.type = QUIRK_AUDIO_STANDARD_INTERFACE
++			},
++			{
++				.ifnum = -1
++			}
++		}
++	}
++},
+ 
+ #undef USB_DEVICE_VENDOR_SPEC
+ #undef USB_AUDIO_DEVICE
+diff --git a/tools/lib/perf/tests/test-evlist.c b/tools/lib/perf/tests/test-evlist.c
+index bd19cabddaf62..60b5d1801103b 100644
+--- a/tools/lib/perf/tests/test-evlist.c
++++ b/tools/lib/perf/tests/test-evlist.c
+@@ -38,7 +38,7 @@ static int test_stat_cpu(void)
+ 		.type	= PERF_TYPE_SOFTWARE,
+ 		.config	= PERF_COUNT_SW_TASK_CLOCK,
+ 	};
+-	int err, cpu, tmp;
++	int err, idx;
+ 
+ 	cpus = perf_cpu_map__new(NULL);
+ 	__T("failed to create cpus", cpus);
+@@ -64,10 +64,10 @@ static int test_stat_cpu(void)
+ 	perf_evlist__for_each_evsel(evlist, evsel) {
+ 		cpus = perf_evsel__cpus(evsel);
+ 
+-		perf_cpu_map__for_each_cpu(cpu, tmp, cpus) {
++		for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) {
+ 			struct perf_counts_values counts = { .val = 0 };
+ 
+-			perf_evsel__read(evsel, cpu, 0, &counts);
++			perf_evsel__read(evsel, idx, 0, &counts);
+ 			__T("failed to read value for evsel", counts.val != 0);
+ 		}
+ 	}
+diff --git a/tools/lib/perf/tests/test-evsel.c b/tools/lib/perf/tests/test-evsel.c
+index 0ad82d7a2a51b..2de98768d8444 100644
+--- a/tools/lib/perf/tests/test-evsel.c
++++ b/tools/lib/perf/tests/test-evsel.c
+@@ -21,7 +21,7 @@ static int test_stat_cpu(void)
+ 		.type	= PERF_TYPE_SOFTWARE,
+ 		.config	= PERF_COUNT_SW_CPU_CLOCK,
+ 	};
+-	int err, cpu, tmp;
++	int err, idx;
+ 
+ 	cpus = perf_cpu_map__new(NULL);
+ 	__T("failed to create cpus", cpus);
+@@ -32,10 +32,10 @@ static int test_stat_cpu(void)
+ 	err = perf_evsel__open(evsel, cpus, NULL);
+ 	__T("failed to open evsel", err == 0);
+ 
+-	perf_cpu_map__for_each_cpu(cpu, tmp, cpus) {
++	for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) {
+ 		struct perf_counts_values counts = { .val = 0 };
+ 
+-		perf_evsel__read(evsel, cpu, 0, &counts);
++		perf_evsel__read(evsel, idx, 0, &counts);
+ 		__T("failed to read value for evsel", counts.val != 0);
+ 	}
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+index 8b641c306f263..5b52985cb60e6 100644
+--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
++++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+@@ -857,7 +857,7 @@ void test_core_reloc(void)
+ 			goto cleanup;
+ 		}
+ 
+-		if (!ASSERT_FALSE(test_case->fails, "obj_load_should_fail"))
++		if (!ASSERT_EQ(test_case->fails, false, "obj_load_should_fail"))
+ 			goto cleanup;
+ 
+ 		equal = memcmp(data->out, test_case->output,
+diff --git a/tools/testing/selftests/net/forwarding/Makefile b/tools/testing/selftests/net/forwarding/Makefile
+index 250fbb2d16252..881e680c2e9c2 100644
+--- a/tools/testing/selftests/net/forwarding/Makefile
++++ b/tools/testing/selftests/net/forwarding/Makefile
+@@ -9,6 +9,7 @@ TEST_PROGS = bridge_igmp.sh \
+ 	gre_inner_v4_multipath.sh \
+ 	gre_inner_v6_multipath.sh \
+ 	gre_multipath.sh \
++	ip6_forward_instats_vrf.sh \
+ 	ip6gre_inner_v4_multipath.sh \
+ 	ip6gre_inner_v6_multipath.sh \
+ 	ipip_flat_gre_key.sh \
+diff --git a/tools/testing/selftests/net/forwarding/forwarding.config.sample b/tools/testing/selftests/net/forwarding/forwarding.config.sample
+index b802c14d29509..e5e2fbeca22ec 100644
+--- a/tools/testing/selftests/net/forwarding/forwarding.config.sample
++++ b/tools/testing/selftests/net/forwarding/forwarding.config.sample
+@@ -39,3 +39,5 @@ NETIF_CREATE=yes
+ # Timeout (in seconds) before ping exits regardless of how many packets have
+ # been sent or received
+ PING_TIMEOUT=5
++# IPv6 traceroute utility name.
++TROUTE6=traceroute6
+diff --git a/tools/testing/selftests/net/forwarding/ip6_forward_instats_vrf.sh b/tools/testing/selftests/net/forwarding/ip6_forward_instats_vrf.sh
+new file mode 100755
+index 0000000000000..9f5b3e2e5e954
+--- /dev/null
++++ b/tools/testing/selftests/net/forwarding/ip6_forward_instats_vrf.sh
+@@ -0,0 +1,172 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++
++# Test ipv6 stats on the incoming if when forwarding with VRF
++
++ALL_TESTS="
++	ipv6_ping
++	ipv6_in_too_big_err
++	ipv6_in_hdr_err
++	ipv6_in_addr_err
++	ipv6_in_discard
++"
++
++NUM_NETIFS=4
++source lib.sh
++
++h1_create()
++{
++	simple_if_init $h1 2001:1:1::2/64
++	ip -6 route add vrf v$h1 2001:1:2::/64 via 2001:1:1::1
++}
++
++h1_destroy()
++{
++	ip -6 route del vrf v$h1 2001:1:2::/64 via 2001:1:1::1
++	simple_if_fini $h1 2001:1:1::2/64
++}
++
++router_create()
++{
++	vrf_create router
++	__simple_if_init $rtr1 router 2001:1:1::1/64
++	__simple_if_init $rtr2 router 2001:1:2::1/64
++	mtu_set $rtr2 1280
++}
++
++router_destroy()
++{
++	mtu_restore $rtr2
++	__simple_if_fini $rtr2 2001:1:2::1/64
++	__simple_if_fini $rtr1 2001:1:1::1/64
++	vrf_destroy router
++}
++
++h2_create()
++{
++	simple_if_init $h2 2001:1:2::2/64
++	ip -6 route add vrf v$h2 2001:1:1::/64 via 2001:1:2::1
++	mtu_set $h2 1280
++}
++
++h2_destroy()
++{
++	mtu_restore $h2
++	ip -6 route del vrf v$h2 2001:1:1::/64 via 2001:1:2::1
++	simple_if_fini $h2 2001:1:2::2/64
++}
++
++setup_prepare()
++{
++	h1=${NETIFS[p1]}
++	rtr1=${NETIFS[p2]}
++
++	rtr2=${NETIFS[p3]}
++	h2=${NETIFS[p4]}
++
++	vrf_prepare
++	h1_create
++	router_create
++	h2_create
++
++	forwarding_enable
++}
++
++cleanup()
++{
++	pre_cleanup
++
++	forwarding_restore
++
++	h2_destroy
++	router_destroy
++	h1_destroy
++	vrf_cleanup
++}
++
++ipv6_in_too_big_err()
++{
++	RET=0
++
++	local t0=$(ipv6_stats_get $rtr1 Ip6InTooBigErrors)
++	local vrf_name=$(master_name_get $h1)
++
++	# Send too big packets
++	ip vrf exec $vrf_name \
++		$PING6 -s 1300 2001:1:2::2 -c 1 -w $PING_TIMEOUT &> /dev/null
++
++	local t1=$(ipv6_stats_get $rtr1 Ip6InTooBigErrors)
++	test "$((t1 - t0))" -ne 0
++	check_err $?
++	log_test "Ip6InTooBigErrors"
++}
++
++ipv6_in_hdr_err()
++{
++	RET=0
++
++	local t0=$(ipv6_stats_get $rtr1 Ip6InHdrErrors)
++	local vrf_name=$(master_name_get $h1)
++
++	# Send packets with hop limit 1, easiest with traceroute6 as some ping6
++	# doesn't allow hop limit to be specified
++	ip vrf exec $vrf_name \
++		$TROUTE6 2001:1:2::2 &> /dev/null
++
++	local t1=$(ipv6_stats_get $rtr1 Ip6InHdrErrors)
++	test "$((t1 - t0))" -ne 0
++	check_err $?
++	log_test "Ip6InHdrErrors"
++}
++
++ipv6_in_addr_err()
++{
++	RET=0
++
++	local t0=$(ipv6_stats_get $rtr1 Ip6InAddrErrors)
++	local vrf_name=$(master_name_get $h1)
++
++	# Disable forwarding temporary while sending the packet
++	sysctl -qw net.ipv6.conf.all.forwarding=0
++	ip vrf exec $vrf_name \
++		$PING6 2001:1:2::2 -c 1 -w $PING_TIMEOUT &> /dev/null
++	sysctl -qw net.ipv6.conf.all.forwarding=1
++
++	local t1=$(ipv6_stats_get $rtr1 Ip6InAddrErrors)
++	test "$((t1 - t0))" -ne 0
++	check_err $?
++	log_test "Ip6InAddrErrors"
++}
++
++ipv6_in_discard()
++{
++	RET=0
++
++	local t0=$(ipv6_stats_get $rtr1 Ip6InDiscards)
++	local vrf_name=$(master_name_get $h1)
++
++	# Add a policy to discard
++	ip xfrm policy add dst 2001:1:2::2/128 dir fwd action block
++	ip vrf exec $vrf_name \
++		$PING6 2001:1:2::2 -c 1 -w $PING_TIMEOUT &> /dev/null
++	ip xfrm policy del dst 2001:1:2::2/128 dir fwd
++
++	local t1=$(ipv6_stats_get $rtr1 Ip6InDiscards)
++	test "$((t1 - t0))" -ne 0
++	check_err $?
++	log_test "Ip6InDiscards"
++}
++ipv6_ping()
++{
++	RET=0
++
++	ping6_test $h1 2001:1:2::2
++}
++
++trap cleanup EXIT
++
++setup_prepare
++setup_wait
++tests_run
++
++exit $EXIT_STATUS
+diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh
+index 927f9ba49e087..be6fa808d2196 100644
+--- a/tools/testing/selftests/net/forwarding/lib.sh
++++ b/tools/testing/selftests/net/forwarding/lib.sh
+@@ -674,6 +674,14 @@ qdisc_parent_stats_get()
+ 	    | jq '.[] | select(.parent == "'"$parent"'") | '"$selector"
+ }
+ 
++ipv6_stats_get()
++{
++	local dev=$1; shift
++	local stat=$1; shift
++
++	cat /proc/net/dev_snmp6/$dev | grep "^$stat" | cut -f2
++}
++
+ humanize()
+ {
+ 	local speed=$1; shift
+diff --git a/tools/testing/selftests/netfilter/nft_flowtable.sh b/tools/testing/selftests/netfilter/nft_flowtable.sh
+index 431296c0f91cf..aefe50e0e4a85 100755
+--- a/tools/testing/selftests/netfilter/nft_flowtable.sh
++++ b/tools/testing/selftests/netfilter/nft_flowtable.sh
+@@ -199,7 +199,6 @@ fi
+ # test basic connectivity
+ if ! ip netns exec ns1 ping -c 1 -q 10.0.2.99 > /dev/null; then
+   echo "ERROR: ns1 cannot reach ns2" 1>&2
+-  bash
+   exit 1
+ fi
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-10-27 14:55 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-10-27 14:55 UTC (permalink / raw
  To: gentoo-commits

commit:     9270a303580a1dbb5f2a8b45bb7972afa8e62d7e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 27 14:53:48 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 27 14:53:48 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9270a303

Fix how gcc version detectioned to make visible CONFIG_GCC_PLUGINS

Thanks to Kerin Millar.

Bug: https://bugs.gentoo.org/814200

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                         |  4 +++
 2910_fix-gcc-detection-method.patch | 54 +++++++++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/0000_README b/0000_README
index 62f163c..16f8398 100644
--- a/0000_README
+++ b/0000_README
@@ -363,6 +363,10 @@ Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
 
+Patch:  2910_fix-gcc-detection-method.patch
+From:   https://bugs.gentoo.org/814200
+Desc:   Fix how gcc version is detected to make visible CONFIG_GCC_PLUGINS. Thanks to Kerin Millar.
+
 Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL

diff --git a/2910_fix-gcc-detection-method.patch b/2910_fix-gcc-detection-method.patch
new file mode 100644
index 0000000..470f3af
--- /dev/null
+++ b/2910_fix-gcc-detection-method.patch
@@ -0,0 +1,54 @@
+diff --git a/scripts/gcc-plugin.sh b/scripts/gcc-plugin.sh
+deleted file mode 100755
+index b79fd0bea838..000000000000
+--- a/scripts/gcc-plugin.sh
++++ /dev/null
+@@ -1,19 +0,0 @@
+-#!/bin/sh
+-# SPDX-License-Identifier: GPL-2.0
+-
+-set -e
+-
+-srctree=$(dirname "$0")
+-
+-gccplugins_dir=$($* -print-file-name=plugin)
+-
+-# we need a c++ compiler that supports the designated initializer GNU extension
+-$HOSTCC -c -x c++ -std=gnu++98 - -fsyntax-only -I $srctree/gcc-plugins -I $gccplugins_dir/include 2>/dev/null <<EOF
+-#include "gcc-common.h"
+-class test {
+-public:
+-	int test;
+-} test = {
+-	.test = 1
+-};
+-EOF
+diff --git a/scripts/gcc-plugins/Kconfig b/scripts/gcc-plugins/Kconfig
+index ae19fb0243b9..ab9eb4cbe33a 100644
+--- a/scripts/gcc-plugins/Kconfig
++++ b/scripts/gcc-plugins/Kconfig
+@@ -9,7 +9,7 @@ menuconfig GCC_PLUGINS
+ 	bool "GCC plugins"
+ 	depends on HAVE_GCC_PLUGINS
+ 	depends on CC_IS_GCC
+-	depends on $(success,$(srctree)/scripts/gcc-plugin.sh $(CC))
++	depends on $(success,test -e $(shell,$(CC) -print-file-name=plugin)/include/plugin-version.h)
+ 	default y
+ 	help
+ 	  GCC plugins are loadable modules that provide extra features to the
+diff --git a/scripts/gcc-plugins/Makefile b/scripts/gcc-plugins/Makefile
+index d66949bfeba4..b5487cce69e8 100644
+--- a/scripts/gcc-plugins/Makefile
++++ b/scripts/gcc-plugins/Makefile
+@@ -22,9 +22,9 @@ always-y += $(GCC_PLUGIN)
+ GCC_PLUGINS_DIR = $(shell $(CC) -print-file-name=plugin)
+ 
+ plugin_cxxflags	= -Wp,-MMD,$(depfile) $(KBUILD_HOSTCXXFLAGS) -fPIC \
+-		   -I $(GCC_PLUGINS_DIR)/include -I $(obj) -std=gnu++98 \
++		   -I $(GCC_PLUGINS_DIR)/include -I $(obj) -std=gnu++11 \
+ 		   -fno-rtti -fno-exceptions -fasynchronous-unwind-tables \
+-		   -ggdb -Wno-narrowing -Wno-unused-variable -Wno-c++11-compat \
++		   -ggdb -Wno-narrowing -Wno-unused-variable \
+ 		   -Wno-format-diag
+ 
+ plugin_ldflags	= -shared


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-11-02 19:30 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-11-02 19:30 UTC (permalink / raw
  To: gentoo-commits

commit:     9cf8f89abc05df38155620672159e91f8c2321f3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Nov  2 19:30:47 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Nov  2 19:30:47 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9cf8f89a

Linux patch 5.10.77

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1076_linux-5.10.77.patch | 3073 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3077 insertions(+)

diff --git a/0000_README b/0000_README
index 16f8398..40e234a 100644
--- a/0000_README
+++ b/0000_README
@@ -347,6 +347,10 @@ Patch:  1075_linux-5.10.76.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.76
 
+Patch:  1076_linux-5.10.77.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.77
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1076_linux-5.10.77.patch b/1076_linux-5.10.77.patch
new file mode 100644
index 0000000..d168d15
--- /dev/null
+++ b/1076_linux-5.10.77.patch
@@ -0,0 +1,3073 @@
+diff --git a/Makefile b/Makefile
+index 605bd943b224e..a58f49e415dc6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 76
++SUBLEVEL = 77
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/compressed/decompress.c b/arch/arm/boot/compressed/decompress.c
+index aa075d8372ea2..74255e8198314 100644
+--- a/arch/arm/boot/compressed/decompress.c
++++ b/arch/arm/boot/compressed/decompress.c
+@@ -47,7 +47,10 @@ extern char * strchrnul(const char *, int);
+ #endif
+ 
+ #ifdef CONFIG_KERNEL_XZ
++/* Prevent KASAN override of string helpers in decompressor */
++#undef memmove
+ #define memmove memmove
++#undef memcpy
+ #define memcpy memcpy
+ #include "../../../../lib/decompress_unxz.c"
+ #endif
+diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
+index a13d902064722..d9db752c51fe2 100644
+--- a/arch/arm/include/asm/uaccess.h
++++ b/arch/arm/include/asm/uaccess.h
+@@ -200,6 +200,7 @@ extern int __get_user_64t_4(void *);
+ 		register unsigned long __l asm("r1") = __limit;		\
+ 		register int __e asm("r0");				\
+ 		unsigned int __ua_flags = uaccess_save_and_enable();	\
++		int __tmp_e;						\
+ 		switch (sizeof(*(__p))) {				\
+ 		case 1:							\
+ 			if (sizeof((x)) >= 8)				\
+@@ -227,9 +228,10 @@ extern int __get_user_64t_4(void *);
+ 			break;						\
+ 		default: __e = __get_user_bad(); break;			\
+ 		}							\
++		__tmp_e = __e;						\
+ 		uaccess_restore(__ua_flags);				\
+ 		x = (typeof(*(p))) __r2;				\
+-		__e;							\
++		__tmp_e;						\
+ 	})
+ 
+ #define get_user(x, p)							\
+diff --git a/arch/arm/kernel/vmlinux-xip.lds.S b/arch/arm/kernel/vmlinux-xip.lds.S
+index 50136828f5b54..f14c2360ea0b1 100644
+--- a/arch/arm/kernel/vmlinux-xip.lds.S
++++ b/arch/arm/kernel/vmlinux-xip.lds.S
+@@ -40,6 +40,10 @@ SECTIONS
+ 		ARM_DISCARD
+ 		*(.alt.smp.init)
+ 		*(.pv_table)
++#ifndef CONFIG_ARM_UNWIND
++		*(.ARM.exidx) *(.ARM.exidx.*)
++		*(.ARM.extab) *(.ARM.extab.*)
++#endif
+ 	}
+ 
+ 	. = XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR);
+@@ -172,7 +176,7 @@ ASSERT((__arch_info_end - __arch_info_begin), "no machine record defined")
+ ASSERT((_end - __bss_start) >= 12288, ".bss too small for CONFIG_XIP_DEFLATED_DATA")
+ #endif
+ 
+-#ifdef CONFIG_ARM_MPU
++#if defined(CONFIG_ARM_MPU) && !defined(CONFIG_COMPILE_TEST)
+ /*
+  * Due to PMSAv7 restriction on base address and size we have to
+  * enforce minimal alignment restrictions. It was seen that weaker
+diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
+index e2c743aa2eb2b..d9f7dfe2a7ed3 100644
+--- a/arch/arm/mm/proc-macros.S
++++ b/arch/arm/mm/proc-macros.S
+@@ -340,6 +340,7 @@ ENTRY(\name\()_cache_fns)
+ 
+ .macro define_tlb_functions name:req, flags_up:req, flags_smp
+ 	.type	\name\()_tlb_fns, #object
++	.align 2
+ ENTRY(\name\()_tlb_fns)
+ 	.long	\name\()_flush_user_tlb_range
+ 	.long	\name\()_flush_kern_tlb_range
+diff --git a/arch/arm/probes/kprobes/core.c b/arch/arm/probes/kprobes/core.c
+index a9653117ca0dd..e513d8a467760 100644
+--- a/arch/arm/probes/kprobes/core.c
++++ b/arch/arm/probes/kprobes/core.c
+@@ -462,7 +462,7 @@ static struct undef_hook kprobes_arm_break_hook = {
+ 
+ #endif /* !CONFIG_THUMB2_KERNEL */
+ 
+-int __init arch_init_kprobes()
++int __init arch_init_kprobes(void)
+ {
+ 	arm_probes_decode_init();
+ #ifdef CONFIG_THUMB2_KERNEL
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h5-nanopi-neo2.dts b/arch/arm64/boot/dts/allwinner/sun50i-h5-nanopi-neo2.dts
+index b059e20813bdf..e8ab8c2df51a0 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h5-nanopi-neo2.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h5-nanopi-neo2.dts
+@@ -75,7 +75,7 @@
+ 	pinctrl-0 = <&emac_rgmii_pins>;
+ 	phy-supply = <&reg_gmac_3v3>;
+ 	phy-handle = <&ext_rgmii_phy>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
+index 0f8a3a9e3795b..957a6d092d7af 100644
+--- a/arch/arm64/lib/copy_from_user.S
++++ b/arch/arm64/lib/copy_from_user.S
+@@ -29,7 +29,7 @@
+ 	.endm
+ 
+ 	.macro ldrh1 reg, ptr, val
+-	uao_user_alternative 9998f, ldrh, ldtrh, \reg, \ptr, \val
++	uao_user_alternative 9997f, ldrh, ldtrh, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro strh1 reg, ptr, val
+@@ -37,7 +37,7 @@
+ 	.endm
+ 
+ 	.macro ldr1 reg, ptr, val
+-	uao_user_alternative 9998f, ldr, ldtr, \reg, \ptr, \val
++	uao_user_alternative 9997f, ldr, ldtr, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro str1 reg, ptr, val
+@@ -45,7 +45,7 @@
+ 	.endm
+ 
+ 	.macro ldp1 reg1, reg2, ptr, val
+-	uao_ldp 9998f, \reg1, \reg2, \ptr, \val
++	uao_ldp 9997f, \reg1, \reg2, \ptr, \val
+ 	.endm
+ 
+ 	.macro stp1 reg1, reg2, ptr, val
+@@ -53,8 +53,10 @@
+ 	.endm
+ 
+ end	.req	x5
++srcin	.req	x15
+ SYM_FUNC_START(__arch_copy_from_user)
+ 	add	end, x0, x2
++	mov	srcin, x1
+ #include "copy_template.S"
+ 	mov	x0, #0				// Nothing to copy
+ 	ret
+@@ -63,6 +65,11 @@ EXPORT_SYMBOL(__arch_copy_from_user)
+ 
+ 	.section .fixup,"ax"
+ 	.align	2
++9997:	cmp	dst, dstin
++	b.ne	9998f
++	// Before being absolutely sure we couldn't copy anything, try harder
++USER(9998f, ldtrb tmp1w, [srcin])
++	strb	tmp1w, [dst], #1
+ 9998:	sub	x0, end, dst			// bytes not copied
+ 	ret
+ 	.previous
+diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
+index 80e37ada0ee1a..35c01da09323e 100644
+--- a/arch/arm64/lib/copy_in_user.S
++++ b/arch/arm64/lib/copy_in_user.S
+@@ -30,33 +30,34 @@
+ 	.endm
+ 
+ 	.macro ldrh1 reg, ptr, val
+-	uao_user_alternative 9998f, ldrh, ldtrh, \reg, \ptr, \val
++	uao_user_alternative 9997f, ldrh, ldtrh, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro strh1 reg, ptr, val
+-	uao_user_alternative 9998f, strh, sttrh, \reg, \ptr, \val
++	uao_user_alternative 9997f, strh, sttrh, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro ldr1 reg, ptr, val
+-	uao_user_alternative 9998f, ldr, ldtr, \reg, \ptr, \val
++	uao_user_alternative 9997f, ldr, ldtr, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro str1 reg, ptr, val
+-	uao_user_alternative 9998f, str, sttr, \reg, \ptr, \val
++	uao_user_alternative 9997f, str, sttr, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro ldp1 reg1, reg2, ptr, val
+-	uao_ldp 9998f, \reg1, \reg2, \ptr, \val
++	uao_ldp 9997f, \reg1, \reg2, \ptr, \val
+ 	.endm
+ 
+ 	.macro stp1 reg1, reg2, ptr, val
+-	uao_stp 9998f, \reg1, \reg2, \ptr, \val
++	uao_stp 9997f, \reg1, \reg2, \ptr, \val
+ 	.endm
+ 
+ end	.req	x5
+-
++srcin	.req	x15
+ SYM_FUNC_START(__arch_copy_in_user)
+ 	add	end, x0, x2
++	mov	srcin, x1
+ #include "copy_template.S"
+ 	mov	x0, #0
+ 	ret
+@@ -65,6 +66,12 @@ EXPORT_SYMBOL(__arch_copy_in_user)
+ 
+ 	.section .fixup,"ax"
+ 	.align	2
++9997:	cmp	dst, dstin
++	b.ne	9998f
++	// Before being absolutely sure we couldn't copy anything, try harder
++USER(9998f, ldtrb tmp1w, [srcin])
++USER(9998f, sttrb tmp1w, [dst])
++	add	dst, dst, #1
+ 9998:	sub	x0, end, dst			// bytes not copied
+ 	ret
+ 	.previous
+diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
+index 4ec59704b8f2d..85705350ff354 100644
+--- a/arch/arm64/lib/copy_to_user.S
++++ b/arch/arm64/lib/copy_to_user.S
+@@ -32,7 +32,7 @@
+ 	.endm
+ 
+ 	.macro strh1 reg, ptr, val
+-	uao_user_alternative 9998f, strh, sttrh, \reg, \ptr, \val
++	uao_user_alternative 9997f, strh, sttrh, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro ldr1 reg, ptr, val
+@@ -40,7 +40,7 @@
+ 	.endm
+ 
+ 	.macro str1 reg, ptr, val
+-	uao_user_alternative 9998f, str, sttr, \reg, \ptr, \val
++	uao_user_alternative 9997f, str, sttr, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro ldp1 reg1, reg2, ptr, val
+@@ -48,12 +48,14 @@
+ 	.endm
+ 
+ 	.macro stp1 reg1, reg2, ptr, val
+-	uao_stp 9998f, \reg1, \reg2, \ptr, \val
++	uao_stp 9997f, \reg1, \reg2, \ptr, \val
+ 	.endm
+ 
+ end	.req	x5
++srcin	.req	x15
+ SYM_FUNC_START(__arch_copy_to_user)
+ 	add	end, x0, x2
++	mov	srcin, x1
+ #include "copy_template.S"
+ 	mov	x0, #0
+ 	ret
+@@ -62,6 +64,12 @@ EXPORT_SYMBOL(__arch_copy_to_user)
+ 
+ 	.section .fixup,"ax"
+ 	.align	2
++9997:	cmp	dst, dstin
++	b.ne	9998f
++	// Before being absolutely sure we couldn't copy anything, try harder
++	ldrb	tmp1w, [srcin]
++USER(9998f, sttrb tmp1w, [dst])
++	add	dst, dst, #1
+ 9998:	sub	x0, end, dst			// bytes not copied
+ 	ret
+ 	.previous
+diff --git a/arch/nios2/platform/Kconfig.platform b/arch/nios2/platform/Kconfig.platform
+index 9e32fb7f3d4ce..e849daff6fd16 100644
+--- a/arch/nios2/platform/Kconfig.platform
++++ b/arch/nios2/platform/Kconfig.platform
+@@ -37,6 +37,7 @@ config NIOS2_DTB_PHYS_ADDR
+ 
+ config NIOS2_DTB_SOURCE_BOOL
+ 	bool "Compile and link device tree into kernel image"
++	depends on !COMPILE_TEST
+ 	help
+ 	  This allows you to specify a dts (device tree source) file
+ 	  which will be compiled and linked into the kernel image.
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 0752967f351bb..a2750d6ffd0f5 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -415,8 +415,14 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
+ 		case BPF_ALU64 | BPF_DIV | BPF_K: /* dst /= imm */
+ 			if (imm == 0)
+ 				return -EINVAL;
+-			else if (imm == 1)
+-				goto bpf_alu32_trunc;
++			if (imm == 1) {
++				if (BPF_OP(code) == BPF_DIV) {
++					goto bpf_alu32_trunc;
++				} else {
++					EMIT(PPC_RAW_LI(dst_reg, 0));
++					break;
++				}
++			}
+ 
+ 			PPC_LI32(b2p[TMP_REG_1], imm);
+ 			switch (BPF_CLASS(code)) {
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index f7abd118d23d3..1b894c3275781 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -138,6 +138,12 @@ config PAGE_OFFSET
+ 	default 0xffffffff80000000 if 64BIT && MAXPHYSMEM_2GB
+ 	default 0xffffffe000000000 if 64BIT && MAXPHYSMEM_128GB
+ 
++config KASAN_SHADOW_OFFSET
++	hex
++	depends on KASAN_GENERIC
++	default 0xdfffffc800000000 if 64BIT
++	default 0xffffffff if 32BIT
++
+ config ARCH_FLATMEM_ENABLE
+ 	def_bool y
+ 
+diff --git a/arch/riscv/include/asm/kasan.h b/arch/riscv/include/asm/kasan.h
+index b04028c6218c9..db10a64e3b981 100644
+--- a/arch/riscv/include/asm/kasan.h
++++ b/arch/riscv/include/asm/kasan.h
+@@ -14,8 +14,7 @@
+ #define KASAN_SHADOW_START	KERN_VIRT_START /* 2^64 - 2^38 */
+ #define KASAN_SHADOW_END	(KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
+ 
+-#define KASAN_SHADOW_OFFSET	(KASAN_SHADOW_END - (1ULL << \
+-					(64 - KASAN_SHADOW_SCALE_SHIFT)))
++#define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+ 
+ void kasan_init(void);
+ asmlinkage void kasan_early_init(void);
+diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
+index 7e849797c9c38..1a819c18bedec 100644
+--- a/arch/riscv/kernel/head.S
++++ b/arch/riscv/kernel/head.S
+@@ -175,6 +175,7 @@ setup_trap_vector:
+ 	csrw CSR_SCRATCH, zero
+ 	ret
+ 
++.align 2
+ .Lsecondary_park:
+ 	/* We lack SMP support or have too many harts, so park this hart */
+ 	wfi
+diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
+index a8a2ffd9114aa..883c3be43ea98 100644
+--- a/arch/riscv/mm/kasan_init.c
++++ b/arch/riscv/mm/kasan_init.c
+@@ -16,6 +16,9 @@ asmlinkage void __init kasan_early_init(void)
+ 	uintptr_t i;
+ 	pgd_t *pgd = early_pg_dir + pgd_index(KASAN_SHADOW_START);
+ 
++	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
++		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
++
+ 	for (i = 0; i < PTRS_PER_PTE; ++i)
+ 		set_pte(kasan_early_shadow_pte + i,
+ 			mk_pte(virt_to_page(kasan_early_shadow_page),
+diff --git a/arch/riscv/net/bpf_jit_core.c b/arch/riscv/net/bpf_jit_core.c
+index 3630d447352c6..cbf7d2414886e 100644
+--- a/arch/riscv/net/bpf_jit_core.c
++++ b/arch/riscv/net/bpf_jit_core.c
+@@ -125,7 +125,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 
+ 	if (i == NR_JIT_ITERATIONS) {
+ 		pr_err("bpf-jit: image did not converge in <%d passes!\n", i);
+-		bpf_jit_binary_free(jit_data->header);
++		if (jit_data->header)
++			bpf_jit_binary_free(jit_data->header);
+ 		prog = orig_prog;
+ 		goto out_offset;
+ 	}
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index 2bb9996ff09b4..e6c4f29fc6956 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -3053,13 +3053,14 @@ static void __airqs_kick_single_vcpu(struct kvm *kvm, u8 deliverable_mask)
+ 	int vcpu_idx, online_vcpus = atomic_read(&kvm->online_vcpus);
+ 	struct kvm_s390_gisa_interrupt *gi = &kvm->arch.gisa_int;
+ 	struct kvm_vcpu *vcpu;
++	u8 vcpu_isc_mask;
+ 
+ 	for_each_set_bit(vcpu_idx, kvm->arch.idle_mask, online_vcpus) {
+ 		vcpu = kvm_get_vcpu(kvm, vcpu_idx);
+ 		if (psw_ioint_disabled(vcpu))
+ 			continue;
+-		deliverable_mask &= (u8)(vcpu->arch.sie_block->gcr[6] >> 24);
+-		if (deliverable_mask) {
++		vcpu_isc_mask = (u8)(vcpu->arch.sie_block->gcr[6] >> 24);
++		if (deliverable_mask & vcpu_isc_mask) {
+ 			/* lately kicked but not yet running */
+ 			if (test_and_set_bit(vcpu_idx, gi->kicked_mask))
+ 				return;
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 7f719b468b440..00f03f363c9b0 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -3312,6 +3312,7 @@ out_free_sie_block:
+ 
+ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
+ {
++	clear_bit(vcpu->vcpu_idx, vcpu->kvm->arch.gisa_int.kicked_mask);
+ 	return kvm_s390_vcpu_has_irq(vcpu, 0);
+ }
+ 
+diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c
+index b62446ea5f408..11be88f70690e 100644
+--- a/drivers/ata/sata_mv.c
++++ b/drivers/ata/sata_mv.c
+@@ -3892,8 +3892,8 @@ static int mv_chip_id(struct ata_host *host, unsigned int board_idx)
+ 		break;
+ 
+ 	default:
+-		dev_err(host->dev, "BUG: invalid board index %u\n", board_idx);
+-		return 1;
++		dev_alert(host->dev, "BUG: invalid board index %u\n", board_idx);
++		return -EINVAL;
+ 	}
+ 
+ 	hpriv->hp_flags = hp_flags;
+diff --git a/drivers/base/regmap/regcache-rbtree.c b/drivers/base/regmap/regcache-rbtree.c
+index cfa29dc89bbff..fabf87058d80b 100644
+--- a/drivers/base/regmap/regcache-rbtree.c
++++ b/drivers/base/regmap/regcache-rbtree.c
+@@ -281,14 +281,14 @@ static int regcache_rbtree_insert_to_block(struct regmap *map,
+ 	if (!blk)
+ 		return -ENOMEM;
+ 
++	rbnode->block = blk;
++
+ 	if (BITS_TO_LONGS(blklen) > BITS_TO_LONGS(rbnode->blklen)) {
+ 		present = krealloc(rbnode->cache_present,
+ 				   BITS_TO_LONGS(blklen) * sizeof(*present),
+ 				   GFP_KERNEL);
+-		if (!present) {
+-			kfree(blk);
++		if (!present)
+ 			return -ENOMEM;
+-		}
+ 
+ 		memset(present + BITS_TO_LONGS(rbnode->blklen), 0,
+ 		       (BITS_TO_LONGS(blklen) - BITS_TO_LONGS(rbnode->blklen))
+@@ -305,7 +305,6 @@ static int regcache_rbtree_insert_to_block(struct regmap *map,
+ 	}
+ 
+ 	/* update the rbnode block, its size and the base register */
+-	rbnode->block = blk;
+ 	rbnode->blklen = blklen;
+ 	rbnode->base_reg = base_reg;
+ 	rbnode->cache_present = present;
+diff --git a/drivers/gpio/gpio-xgs-iproc.c b/drivers/gpio/gpio-xgs-iproc.c
+index ad5489a65d542..dd40277b9d06d 100644
+--- a/drivers/gpio/gpio-xgs-iproc.c
++++ b/drivers/gpio/gpio-xgs-iproc.c
+@@ -224,7 +224,7 @@ static int iproc_gpio_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	chip->gc.label = dev_name(dev);
+-	if (of_property_read_u32(dn, "ngpios", &num_gpios))
++	if (!of_property_read_u32(dn, "ngpios", &num_gpios))
+ 		chip->gc.ngpio = num_gpios;
+ 
+ 	irq = platform_get_irq(pdev, 0);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index fbb65c95464b3..e43f82bcb231a 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -264,7 +264,7 @@ static ssize_t dp_link_settings_write(struct file *f, const char __user *buf,
+ 	if (!wr_buf)
+ 		return -ENOSPC;
+ 
+-	if (parse_write_buffer_into_params(wr_buf, size,
++	if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
+ 					   (long *)param, buf,
+ 					   max_param_num,
+ 					   &param_nums)) {
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
+index fb2a25f8408fc..8fba425a76268 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
+@@ -322,6 +322,7 @@ static void ttm_transfered_destroy(struct ttm_buffer_object *bo)
+ 	struct ttm_transfer_obj *fbo;
+ 
+ 	fbo = container_of(bo, struct ttm_transfer_obj, base);
++	dma_resv_fini(&fbo->base.base._resv);
+ 	ttm_bo_put(fbo->bo);
+ 	kfree(fbo);
+ }
+diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
+index 8c930bf1df894..de88f472eaad2 100644
+--- a/drivers/infiniband/core/sa_query.c
++++ b/drivers/infiniband/core/sa_query.c
+@@ -760,8 +760,9 @@ static void ib_nl_set_path_rec_attrs(struct sk_buff *skb,
+ 
+ 	/* Construct the family header first */
+ 	header = skb_put(skb, NLMSG_ALIGN(sizeof(*header)));
+-	memcpy(header->device_name, dev_name(&query->port->agent->device->dev),
+-	       LS_DEVICE_NAME_MAX);
++	strscpy_pad(header->device_name,
++		    dev_name(&query->port->agent->device->dev),
++		    LS_DEVICE_NAME_MAX);
+ 	header->port_num = query->port->port_num;
+ 
+ 	if ((comp_mask & IB_SA_PATH_REC_REVERSIBLE) &&
+diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c
+index ff864f6f02667..1cd8f80f097a7 100644
+--- a/drivers/infiniband/hw/hfi1/pio.c
++++ b/drivers/infiniband/hw/hfi1/pio.c
+@@ -920,6 +920,7 @@ void sc_disable(struct send_context *sc)
+ {
+ 	u64 reg;
+ 	struct pio_buf *pbuf;
++	LIST_HEAD(wake_list);
+ 
+ 	if (!sc)
+ 		return;
+@@ -954,19 +955,21 @@ void sc_disable(struct send_context *sc)
+ 	spin_unlock(&sc->release_lock);
+ 
+ 	write_seqlock(&sc->waitlock);
+-	while (!list_empty(&sc->piowait)) {
++	if (!list_empty(&sc->piowait))
++		list_move(&sc->piowait, &wake_list);
++	write_sequnlock(&sc->waitlock);
++	while (!list_empty(&wake_list)) {
+ 		struct iowait *wait;
+ 		struct rvt_qp *qp;
+ 		struct hfi1_qp_priv *priv;
+ 
+-		wait = list_first_entry(&sc->piowait, struct iowait, list);
++		wait = list_first_entry(&wake_list, struct iowait, list);
+ 		qp = iowait_to_qp(wait);
+ 		priv = qp->priv;
+ 		list_del_init(&priv->s_iowait.list);
+ 		priv->s_iowait.lock = NULL;
+ 		hfi1_qp_wakeup(qp, RVT_S_WAIT_PIO | HFI1_S_WAIT_PIO_DRAIN);
+ 	}
+-	write_sequnlock(&sc->waitlock);
+ 
+ 	spin_unlock_irq(&sc->alloc_lock);
+ }
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 011477356a1de..7a2bec0ac0055 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -4216,6 +4216,8 @@ static int mlx5_ib_modify_dct(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 		MLX5_SET(dctc, dctc, mtu, attr->path_mtu);
+ 		MLX5_SET(dctc, dctc, my_addr_index, attr->ah_attr.grh.sgid_index);
+ 		MLX5_SET(dctc, dctc, hop_limit, attr->ah_attr.grh.hop_limit);
++		if (attr->ah_attr.type == RDMA_AH_ATTR_TYPE_ROCE)
++			MLX5_SET(dctc, dctc, eth_prio, attr->ah_attr.sl & 0x7);
+ 
+ 		err = mlx5_core_create_dct(dev, &qp->dct.mdct, qp->dct.in,
+ 					   MLX5_ST_SZ_BYTES(create_dct_in), out,
+diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c
+index a67599b5a550a..ac11943a5ddb0 100644
+--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
++++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
+@@ -602,7 +602,7 @@ done:
+ /*
+  * How many pages in this iovec element?
+  */
+-static int qib_user_sdma_num_pages(const struct iovec *iov)
++static size_t qib_user_sdma_num_pages(const struct iovec *iov)
+ {
+ 	const unsigned long addr  = (unsigned long) iov->iov_base;
+ 	const unsigned long  len  = iov->iov_len;
+@@ -658,7 +658,7 @@ static void qib_user_sdma_free_pkt_frag(struct device *dev,
+ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
+ 				   struct qib_user_sdma_queue *pq,
+ 				   struct qib_user_sdma_pkt *pkt,
+-				   unsigned long addr, int tlen, int npages)
++				   unsigned long addr, int tlen, size_t npages)
+ {
+ 	struct page *pages[8];
+ 	int i, j;
+@@ -722,7 +722,7 @@ static int qib_user_sdma_pin_pkt(const struct qib_devdata *dd,
+ 	unsigned long idx;
+ 
+ 	for (idx = 0; idx < niov; idx++) {
+-		const int npages = qib_user_sdma_num_pages(iov + idx);
++		const size_t npages = qib_user_sdma_num_pages(iov + idx);
+ 		const unsigned long addr = (unsigned long) iov[idx].iov_base;
+ 
+ 		ret = qib_user_sdma_pin_pages(dd, pq, pkt, addr,
+@@ -824,8 +824,8 @@ static int qib_user_sdma_queue_pkts(const struct qib_devdata *dd,
+ 		unsigned pktnw;
+ 		unsigned pktnwc;
+ 		int nfrags = 0;
+-		int npages = 0;
+-		int bytes_togo = 0;
++		size_t npages = 0;
++		size_t bytes_togo = 0;
+ 		int tiddma = 0;
+ 		int cfur;
+ 
+@@ -885,7 +885,11 @@ static int qib_user_sdma_queue_pkts(const struct qib_devdata *dd,
+ 
+ 			npages += qib_user_sdma_num_pages(&iov[idx]);
+ 
+-			bytes_togo += slen;
++			if (check_add_overflow(bytes_togo, slen, &bytes_togo) ||
++			    bytes_togo > type_max(typeof(pkt->bytes_togo))) {
++				ret = -EINVAL;
++				goto free_pbc;
++			}
+ 			pktnwc += slen >> 2;
+ 			idx++;
+ 			nfrags++;
+@@ -904,8 +908,7 @@ static int qib_user_sdma_queue_pkts(const struct qib_devdata *dd,
+ 		}
+ 
+ 		if (frag_size) {
+-			int tidsmsize, n;
+-			size_t pktsize;
++			size_t tidsmsize, n, pktsize, sz, addrlimit;
+ 
+ 			n = npages*((2*PAGE_SIZE/frag_size)+1);
+ 			pktsize = struct_size(pkt, addr, n);
+@@ -923,14 +926,24 @@ static int qib_user_sdma_queue_pkts(const struct qib_devdata *dd,
+ 			else
+ 				tidsmsize = 0;
+ 
+-			pkt = kmalloc(pktsize+tidsmsize, GFP_KERNEL);
++			if (check_add_overflow(pktsize, tidsmsize, &sz)) {
++				ret = -EINVAL;
++				goto free_pbc;
++			}
++			pkt = kmalloc(sz, GFP_KERNEL);
+ 			if (!pkt) {
+ 				ret = -ENOMEM;
+ 				goto free_pbc;
+ 			}
+ 			pkt->largepkt = 1;
+ 			pkt->frag_size = frag_size;
+-			pkt->addrlimit = n + ARRAY_SIZE(pkt->addr);
++			if (check_add_overflow(n, ARRAY_SIZE(pkt->addr),
++					       &addrlimit) ||
++			    addrlimit > type_max(typeof(pkt->addrlimit))) {
++				ret = -EINVAL;
++				goto free_pbc;
++			}
++			pkt->addrlimit = addrlimit;
+ 
+ 			if (tiddma) {
+ 				char *tidsm = (char *)pkt + pktsize;
+diff --git a/drivers/mmc/host/cqhci.c b/drivers/mmc/host/cqhci.c
+index 697fe40756bf2..7ba4f714106f9 100644
+--- a/drivers/mmc/host/cqhci.c
++++ b/drivers/mmc/host/cqhci.c
+@@ -273,6 +273,9 @@ static void __cqhci_enable(struct cqhci_host *cq_host)
+ 
+ 	cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+ 
++	if (cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT)
++		cqhci_writel(cq_host, 0, CQHCI_CTL);
++
+ 	mmc->cqe_on = true;
+ 
+ 	if (cq_host->ops->enable)
+diff --git a/drivers/mmc/host/dw_mmc-exynos.c b/drivers/mmc/host/dw_mmc-exynos.c
+index 0c75810812a0a..1f8a3c0ddfe11 100644
+--- a/drivers/mmc/host/dw_mmc-exynos.c
++++ b/drivers/mmc/host/dw_mmc-exynos.c
+@@ -464,6 +464,18 @@ static s8 dw_mci_exynos_get_best_clksmpl(u8 candiates)
+ 		}
+ 	}
+ 
++	/*
++	 * If there is no cadiates value, then it needs to return -EIO.
++	 * If there are candiates values and don't find bset clk sample value,
++	 * then use a first candiates clock sample value.
++	 */
++	for (i = 0; i < iter; i++) {
++		__c = ror8(candiates, i);
++		if ((__c & 0x1) == 0x1) {
++			loc = i;
++			goto out;
++		}
++	}
+ out:
+ 	return loc;
+ }
+@@ -494,6 +506,8 @@ static int dw_mci_exynos_execute_tuning(struct dw_mci_slot *slot, u32 opcode)
+ 		priv->tuned_sample = found;
+ 	} else {
+ 		ret = -EIO;
++		dev_warn(&mmc->class_dev,
++			"There is no candiates value about clksmpl!\n");
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index dc84e2dff4085..fb8b9475b1d01 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2503,6 +2503,25 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ 		host->dma_mask = DMA_BIT_MASK(32);
+ 	mmc_dev(mmc)->dma_mask = &host->dma_mask;
+ 
++	host->timeout_clks = 3 * 1048576;
++	host->dma.gpd = dma_alloc_coherent(&pdev->dev,
++				2 * sizeof(struct mt_gpdma_desc),
++				&host->dma.gpd_addr, GFP_KERNEL);
++	host->dma.bd = dma_alloc_coherent(&pdev->dev,
++				MAX_BD_NUM * sizeof(struct mt_bdma_desc),
++				&host->dma.bd_addr, GFP_KERNEL);
++	if (!host->dma.gpd || !host->dma.bd) {
++		ret = -ENOMEM;
++		goto release_mem;
++	}
++	msdc_init_gpd_bd(host, &host->dma);
++	INIT_DELAYED_WORK(&host->req_timeout, msdc_request_timeout);
++	spin_lock_init(&host->lock);
++
++	platform_set_drvdata(pdev, mmc);
++	msdc_ungate_clock(host);
++	msdc_init_hw(host);
++
+ 	if (mmc->caps2 & MMC_CAP2_CQE) {
+ 		host->cq_host = devm_kzalloc(mmc->parent,
+ 					     sizeof(*host->cq_host),
+@@ -2523,25 +2542,6 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ 		mmc->max_seg_size = 64 * 1024;
+ 	}
+ 
+-	host->timeout_clks = 3 * 1048576;
+-	host->dma.gpd = dma_alloc_coherent(&pdev->dev,
+-				2 * sizeof(struct mt_gpdma_desc),
+-				&host->dma.gpd_addr, GFP_KERNEL);
+-	host->dma.bd = dma_alloc_coherent(&pdev->dev,
+-				MAX_BD_NUM * sizeof(struct mt_bdma_desc),
+-				&host->dma.bd_addr, GFP_KERNEL);
+-	if (!host->dma.gpd || !host->dma.bd) {
+-		ret = -ENOMEM;
+-		goto release_mem;
+-	}
+-	msdc_init_gpd_bd(host, &host->dma);
+-	INIT_DELAYED_WORK(&host->req_timeout, msdc_request_timeout);
+-	spin_lock_init(&host->lock);
+-
+-	platform_set_drvdata(pdev, mmc);
+-	msdc_ungate_clock(host);
+-	msdc_init_hw(host);
+-
+ 	ret = devm_request_irq(&pdev->dev, host->irq, msdc_irq,
+ 			       IRQF_TRIGGER_NONE, pdev->name, host);
+ 	if (ret)
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index d28809e479628..20cbd71cba9d9 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -1157,6 +1157,7 @@ static void esdhc_reset_tuning(struct sdhci_host *host)
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host);
+ 	u32 ctrl;
++	int ret;
+ 
+ 	/* Reset the tuning circuit */
+ 	if (esdhc_is_usdhc(imx_data)) {
+@@ -1169,7 +1170,22 @@ static void esdhc_reset_tuning(struct sdhci_host *host)
+ 		} else if (imx_data->socdata->flags & ESDHC_FLAG_STD_TUNING) {
+ 			ctrl = readl(host->ioaddr + SDHCI_AUTO_CMD_STATUS);
+ 			ctrl &= ~ESDHC_MIX_CTRL_SMPCLK_SEL;
++			ctrl &= ~ESDHC_MIX_CTRL_EXE_TUNE;
+ 			writel(ctrl, host->ioaddr + SDHCI_AUTO_CMD_STATUS);
++			/* Make sure ESDHC_MIX_CTRL_EXE_TUNE cleared */
++			ret = readl_poll_timeout(host->ioaddr + SDHCI_AUTO_CMD_STATUS,
++				ctrl, !(ctrl & ESDHC_MIX_CTRL_EXE_TUNE), 1, 50);
++			if (ret == -ETIMEDOUT)
++				dev_warn(mmc_dev(host->mmc),
++				 "Warning! clear execute tuning bit failed\n");
++			/*
++			 * SDHCI_INT_DATA_AVAIL is W1C bit, set this bit will clear the
++			 * usdhc IP internal logic flag execute_tuning_with_clr_buf, which
++			 * will finally make sure the normal data transfer logic correct.
++			 */
++			ctrl = readl(host->ioaddr + SDHCI_INT_STATUS);
++			ctrl |= SDHCI_INT_DATA_AVAIL;
++			writel(ctrl, host->ioaddr + SDHCI_INT_STATUS);
+ 		}
+ 	}
+ }
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index b1e1d327cb8eb..07d131fac7606 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -2043,6 +2043,12 @@ void sdhci_set_power_noreg(struct sdhci_host *host, unsigned char mode,
+ 			break;
+ 		case MMC_VDD_32_33:
+ 		case MMC_VDD_33_34:
++		/*
++		 * 3.4 ~ 3.6V are valid only for those platforms where it's
++		 * known that the voltage range is supported by hardware.
++		 */
++		case MMC_VDD_34_35:
++		case MMC_VDD_35_36:
+ 			pwr = SDHCI_POWER_330;
+ 			break;
+ 		default:
+diff --git a/drivers/mmc/host/vub300.c b/drivers/mmc/host/vub300.c
+index 4950d10d3a191..97beece62fec4 100644
+--- a/drivers/mmc/host/vub300.c
++++ b/drivers/mmc/host/vub300.c
+@@ -576,7 +576,7 @@ static void check_vub300_port_status(struct vub300_mmc_host *vub300)
+ 				GET_SYSTEM_PORT_STATUS,
+ 				USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				0x0000, 0x0000, &vub300->system_port_status,
+-				sizeof(vub300->system_port_status), HZ);
++				sizeof(vub300->system_port_status), 1000);
+ 	if (sizeof(vub300->system_port_status) == retval)
+ 		new_system_port_status(vub300);
+ }
+@@ -1241,7 +1241,7 @@ static void __download_offload_pseudocode(struct vub300_mmc_host *vub300,
+ 						SET_INTERRUPT_PSEUDOCODE,
+ 						USB_DIR_OUT | USB_TYPE_VENDOR |
+ 						USB_RECIP_DEVICE, 0x0000, 0x0000,
+-						xfer_buffer, xfer_length, HZ);
++						xfer_buffer, xfer_length, 1000);
+ 			kfree(xfer_buffer);
+ 			if (retval < 0)
+ 				goto copy_error_message;
+@@ -1284,7 +1284,7 @@ static void __download_offload_pseudocode(struct vub300_mmc_host *vub300,
+ 						SET_TRANSFER_PSEUDOCODE,
+ 						USB_DIR_OUT | USB_TYPE_VENDOR |
+ 						USB_RECIP_DEVICE, 0x0000, 0x0000,
+-						xfer_buffer, xfer_length, HZ);
++						xfer_buffer, xfer_length, 1000);
+ 			kfree(xfer_buffer);
+ 			if (retval < 0)
+ 				goto copy_error_message;
+@@ -1991,7 +1991,7 @@ static void __set_clock_speed(struct vub300_mmc_host *vub300, u8 buf[8],
+ 		usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
+ 				SET_CLOCK_SPEED,
+ 				USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-				0x00, 0x00, buf, buf_array_size, HZ);
++				0x00, 0x00, buf, buf_array_size, 1000);
+ 	if (retval != 8) {
+ 		dev_err(&vub300->udev->dev, "SET_CLOCK_SPEED"
+ 			" %dkHz failed with retval=%d\n", kHzClock, retval);
+@@ -2013,14 +2013,14 @@ static void vub300_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 		usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
+ 				SET_SD_POWER,
+ 				USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-				0x0000, 0x0000, NULL, 0, HZ);
++				0x0000, 0x0000, NULL, 0, 1000);
+ 		/* must wait for the VUB300 u-proc to boot up */
+ 		msleep(600);
+ 	} else if ((ios->power_mode == MMC_POWER_UP) && !vub300->card_powered) {
+ 		usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
+ 				SET_SD_POWER,
+ 				USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-				0x0001, 0x0000, NULL, 0, HZ);
++				0x0001, 0x0000, NULL, 0, 1000);
+ 		msleep(600);
+ 		vub300->card_powered = 1;
+ 	} else if (ios->power_mode == MMC_POWER_ON) {
+@@ -2275,14 +2275,14 @@ static int vub300_probe(struct usb_interface *interface,
+ 				GET_HC_INF0,
+ 				USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				0x0000, 0x0000, &vub300->hc_info,
+-				sizeof(vub300->hc_info), HZ);
++				sizeof(vub300->hc_info), 1000);
+ 	if (retval < 0)
+ 		goto error5;
+ 	retval =
+ 		usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
+ 				SET_ROM_WAIT_STATES,
+ 				USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-				firmware_rom_wait_states, 0x0000, NULL, 0, HZ);
++				firmware_rom_wait_states, 0x0000, NULL, 0, 1000);
+ 	if (retval < 0)
+ 		goto error5;
+ 	dev_info(&vub300->udev->dev,
+@@ -2297,7 +2297,7 @@ static int vub300_probe(struct usb_interface *interface,
+ 				GET_SYSTEM_PORT_STATUS,
+ 				USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				0x0000, 0x0000, &vub300->system_port_status,
+-				sizeof(vub300->system_port_status), HZ);
++				sizeof(vub300->system_port_status), 1000);
+ 	if (retval < 0) {
+ 		goto error4;
+ 	} else if (sizeof(vub300->system_port_status) == retval) {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+index bc870bff14df7..5205796859f6c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+@@ -139,18 +139,85 @@ static const struct file_operations rvu_dbg_##name##_fops = { \
+ 
+ static void print_nix_qsize(struct seq_file *filp, struct rvu_pfvf *pfvf);
+ 
++static void get_lf_str_list(struct rvu_block block, int pcifunc,
++			    char *lfs)
++{
++	int lf = 0, seq = 0, len = 0, prev_lf = block.lf.max;
++
++	for_each_set_bit(lf, block.lf.bmap, block.lf.max) {
++		if (lf >= block.lf.max)
++			break;
++
++		if (block.fn_map[lf] != pcifunc)
++			continue;
++
++		if (lf == prev_lf + 1) {
++			prev_lf = lf;
++			seq = 1;
++			continue;
++		}
++
++		if (seq)
++			len += sprintf(lfs + len, "-%d,%d", prev_lf, lf);
++		else
++			len += (len ? sprintf(lfs + len, ",%d", lf) :
++				      sprintf(lfs + len, "%d", lf));
++
++		prev_lf = lf;
++		seq = 0;
++	}
++
++	if (seq)
++		len += sprintf(lfs + len, "-%d", prev_lf);
++
++	lfs[len] = '\0';
++}
++
++static int get_max_column_width(struct rvu *rvu)
++{
++	int index, pf, vf, lf_str_size = 12, buf_size = 256;
++	struct rvu_block block;
++	u16 pcifunc;
++	char *buf;
++
++	buf = kzalloc(buf_size, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
++		for (vf = 0; vf <= rvu->hw->total_vfs; vf++) {
++			pcifunc = pf << 10 | vf;
++			if (!pcifunc)
++				continue;
++
++			for (index = 0; index < BLK_COUNT; index++) {
++				block = rvu->hw->block[index];
++				if (!strlen(block.name))
++					continue;
++
++				get_lf_str_list(block, pcifunc, buf);
++				if (lf_str_size <= strlen(buf))
++					lf_str_size = strlen(buf) + 1;
++			}
++		}
++	}
++
++	kfree(buf);
++	return lf_str_size;
++}
++
+ /* Dumps current provisioning status of all RVU block LFs */
+ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
+ 					  char __user *buffer,
+ 					  size_t count, loff_t *ppos)
+ {
+-	int index, off = 0, flag = 0, go_back = 0, len = 0;
++	int index, off = 0, flag = 0, len = 0, i = 0;
+ 	struct rvu *rvu = filp->private_data;
+-	int lf, pf, vf, pcifunc;
++	int bytes_not_copied = 0;
+ 	struct rvu_block block;
+-	int bytes_not_copied;
+-	int lf_str_size = 12;
++	int pf, vf, pcifunc;
+ 	int buf_size = 2048;
++	int lf_str_size;
+ 	char *lfs;
+ 	char *buf;
+ 
+@@ -162,6 +229,9 @@ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
+ 	if (!buf)
+ 		return -ENOSPC;
+ 
++	/* Get the maximum width of a column */
++	lf_str_size = get_max_column_width(rvu);
++
+ 	lfs = kzalloc(lf_str_size, GFP_KERNEL);
+ 	if (!lfs) {
+ 		kfree(buf);
+@@ -175,65 +245,69 @@ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
+ 					 "%-*s", lf_str_size,
+ 					 rvu->hw->block[index].name);
+ 		}
++
+ 	off += scnprintf(&buf[off], buf_size - 1 - off, "\n");
++	bytes_not_copied = copy_to_user(buffer + (i * off), buf, off);
++	if (bytes_not_copied)
++		goto out;
++
++	i++;
++	*ppos += off;
+ 	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
+ 		for (vf = 0; vf <= rvu->hw->total_vfs; vf++) {
++			off = 0;
++			flag = 0;
+ 			pcifunc = pf << 10 | vf;
+ 			if (!pcifunc)
+ 				continue;
+ 
+ 			if (vf) {
+ 				sprintf(lfs, "PF%d:VF%d", pf, vf - 1);
+-				go_back = scnprintf(&buf[off],
+-						    buf_size - 1 - off,
+-						    "%-*s", lf_str_size, lfs);
++				off = scnprintf(&buf[off],
++						buf_size - 1 - off,
++						"%-*s", lf_str_size, lfs);
+ 			} else {
+ 				sprintf(lfs, "PF%d", pf);
+-				go_back = scnprintf(&buf[off],
+-						    buf_size - 1 - off,
+-						    "%-*s", lf_str_size, lfs);
++				off = scnprintf(&buf[off],
++						buf_size - 1 - off,
++						"%-*s", lf_str_size, lfs);
+ 			}
+ 
+-			off += go_back;
+-			for (index = 0; index < BLKTYPE_MAX; index++) {
++			for (index = 0; index < BLK_COUNT; index++) {
+ 				block = rvu->hw->block[index];
+ 				if (!strlen(block.name))
+ 					continue;
+ 				len = 0;
+ 				lfs[len] = '\0';
+-				for (lf = 0; lf < block.lf.max; lf++) {
+-					if (block.fn_map[lf] != pcifunc)
+-						continue;
++				get_lf_str_list(block, pcifunc, lfs);
++				if (strlen(lfs))
+ 					flag = 1;
+-					len += sprintf(&lfs[len], "%d,", lf);
+-				}
+ 
+-				if (flag)
+-					len--;
+-				lfs[len] = '\0';
+ 				off += scnprintf(&buf[off], buf_size - 1 - off,
+ 						 "%-*s", lf_str_size, lfs);
+-				if (!strlen(lfs))
+-					go_back += lf_str_size;
+ 			}
+-			if (!flag)
+-				off -= go_back;
+-			else
+-				flag = 0;
+-			off--;
+-			off +=	scnprintf(&buf[off], buf_size - 1 - off, "\n");
++			if (flag) {
++				off +=	scnprintf(&buf[off],
++						  buf_size - 1 - off, "\n");
++				bytes_not_copied = copy_to_user(buffer +
++								(i * off),
++								buf, off);
++				if (bytes_not_copied)
++					goto out;
++
++				i++;
++				*ppos += off;
++			}
+ 		}
+ 	}
+ 
+-	bytes_not_copied = copy_to_user(buffer, buf, off);
++out:
+ 	kfree(lfs);
+ 	kfree(buf);
+-
+ 	if (bytes_not_copied)
+ 		return -EFAULT;
+ 
+-	*ppos = off;
+-	return off;
++	return *ppos;
+ }
+ 
+ RVU_DEBUG_FOPS(rsrc_status, rsrc_attach_status, NULL);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+index 641cdd81882b8..ffaeda75eec42 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+@@ -353,13 +353,10 @@ static int mlxsw_pci_rdq_skb_alloc(struct mlxsw_pci *mlxsw_pci,
+ 	struct sk_buff *skb;
+ 	int err;
+ 
+-	elem_info->u.rdq.skb = NULL;
+ 	skb = netdev_alloc_skb_ip_align(NULL, buf_len);
+ 	if (!skb)
+ 		return -ENOMEM;
+ 
+-	/* Assume that wqe was previously zeroed. */
+-
+ 	err = mlxsw_pci_wqe_frag_map(mlxsw_pci, wqe, 0, skb->data,
+ 				     buf_len, DMA_FROM_DEVICE);
+ 	if (err)
+@@ -548,21 +545,26 @@ static void mlxsw_pci_cqe_rdq_handle(struct mlxsw_pci *mlxsw_pci,
+ 	struct pci_dev *pdev = mlxsw_pci->pdev;
+ 	struct mlxsw_pci_queue_elem_info *elem_info;
+ 	struct mlxsw_rx_info rx_info = {};
+-	char *wqe;
++	char wqe[MLXSW_PCI_WQE_SIZE];
+ 	struct sk_buff *skb;
+ 	u16 byte_count;
+ 	int err;
+ 
+ 	elem_info = mlxsw_pci_queue_elem_info_consumer_get(q);
+-	skb = elem_info->u.sdq.skb;
+-	if (!skb)
+-		return;
+-	wqe = elem_info->elem;
+-	mlxsw_pci_wqe_frag_unmap(mlxsw_pci, wqe, 0, DMA_FROM_DEVICE);
++	skb = elem_info->u.rdq.skb;
++	memcpy(wqe, elem_info->elem, MLXSW_PCI_WQE_SIZE);
+ 
+ 	if (q->consumer_counter++ != consumer_counter_limit)
+ 		dev_dbg_ratelimited(&pdev->dev, "Consumer counter does not match limit in RDQ\n");
+ 
++	err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info);
++	if (err) {
++		dev_err_ratelimited(&pdev->dev, "Failed to alloc skb for RDQ\n");
++		goto out;
++	}
++
++	mlxsw_pci_wqe_frag_unmap(mlxsw_pci, wqe, 0, DMA_FROM_DEVICE);
++
+ 	if (mlxsw_pci_cqe_lag_get(cqe_v, cqe)) {
+ 		rx_info.is_lag = true;
+ 		rx_info.u.lag_id = mlxsw_pci_cqe_lag_id_get(cqe_v, cqe);
+@@ -594,10 +596,7 @@ static void mlxsw_pci_cqe_rdq_handle(struct mlxsw_pci *mlxsw_pci,
+ 	skb_put(skb, byte_count);
+ 	mlxsw_core_skb_receive(mlxsw_pci->core, skb, &rx_info);
+ 
+-	memset(wqe, 0, q->elem_size);
+-	err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info);
+-	if (err)
+-		dev_dbg_ratelimited(&pdev->dev, "Failed to alloc skb for RDQ\n");
++out:
+ 	/* Everything is set up, ring doorbell to pass elem to HW */
+ 	q->producer_counter++;
+ 	mlxsw_pci_queue_doorbell_producer_ring(mlxsw_pci, q);
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index 8947c3a628109..e14dfaafe4391 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -1280,7 +1280,7 @@ static void lan743x_tx_release_desc(struct lan743x_tx *tx,
+ 	if (!(buffer_info->flags & TX_BUFFER_INFO_FLAG_ACTIVE))
+ 		goto done;
+ 
+-	descriptor_type = (descriptor->data0) &
++	descriptor_type = le32_to_cpu(descriptor->data0) &
+ 			  TX_DESC_DATA0_DTYPE_MASK_;
+ 	if (descriptor_type == TX_DESC_DATA0_DTYPE_DATA_)
+ 		goto clean_up_data_descriptor;
+@@ -1340,7 +1340,7 @@ static int lan743x_tx_next_index(struct lan743x_tx *tx, int index)
+ 
+ static void lan743x_tx_release_completed_descriptors(struct lan743x_tx *tx)
+ {
+-	while ((*tx->head_cpu_ptr) != (tx->last_head)) {
++	while (le32_to_cpu(*tx->head_cpu_ptr) != (tx->last_head)) {
+ 		lan743x_tx_release_desc(tx, tx->last_head, false);
+ 		tx->last_head = lan743x_tx_next_index(tx, tx->last_head);
+ 	}
+@@ -1426,10 +1426,10 @@ static int lan743x_tx_frame_start(struct lan743x_tx *tx,
+ 	if (dma_mapping_error(dev, dma_ptr))
+ 		return -ENOMEM;
+ 
+-	tx_descriptor->data1 = DMA_ADDR_LOW32(dma_ptr);
+-	tx_descriptor->data2 = DMA_ADDR_HIGH32(dma_ptr);
+-	tx_descriptor->data3 = (frame_length << 16) &
+-		TX_DESC_DATA3_FRAME_LENGTH_MSS_MASK_;
++	tx_descriptor->data1 = cpu_to_le32(DMA_ADDR_LOW32(dma_ptr));
++	tx_descriptor->data2 = cpu_to_le32(DMA_ADDR_HIGH32(dma_ptr));
++	tx_descriptor->data3 = cpu_to_le32((frame_length << 16) &
++		TX_DESC_DATA3_FRAME_LENGTH_MSS_MASK_);
+ 
+ 	buffer_info->skb = NULL;
+ 	buffer_info->dma_ptr = dma_ptr;
+@@ -1470,7 +1470,7 @@ static void lan743x_tx_frame_add_lso(struct lan743x_tx *tx,
+ 		tx->frame_data0 |= TX_DESC_DATA0_IOC_;
+ 	}
+ 	tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail];
+-	tx_descriptor->data0 = tx->frame_data0;
++	tx_descriptor->data0 = cpu_to_le32(tx->frame_data0);
+ 
+ 	/* move to next descriptor */
+ 	tx->frame_tail = lan743x_tx_next_index(tx, tx->frame_tail);
+@@ -1514,7 +1514,7 @@ static int lan743x_tx_frame_add_fragment(struct lan743x_tx *tx,
+ 
+ 	/* wrap up previous descriptor */
+ 	tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail];
+-	tx_descriptor->data0 = tx->frame_data0;
++	tx_descriptor->data0 = cpu_to_le32(tx->frame_data0);
+ 
+ 	/* move to next descriptor */
+ 	tx->frame_tail = lan743x_tx_next_index(tx, tx->frame_tail);
+@@ -1540,10 +1540,10 @@ static int lan743x_tx_frame_add_fragment(struct lan743x_tx *tx,
+ 		return -ENOMEM;
+ 	}
+ 
+-	tx_descriptor->data1 = DMA_ADDR_LOW32(dma_ptr);
+-	tx_descriptor->data2 = DMA_ADDR_HIGH32(dma_ptr);
+-	tx_descriptor->data3 = (frame_length << 16) &
+-			       TX_DESC_DATA3_FRAME_LENGTH_MSS_MASK_;
++	tx_descriptor->data1 = cpu_to_le32(DMA_ADDR_LOW32(dma_ptr));
++	tx_descriptor->data2 = cpu_to_le32(DMA_ADDR_HIGH32(dma_ptr));
++	tx_descriptor->data3 = cpu_to_le32((frame_length << 16) &
++			       TX_DESC_DATA3_FRAME_LENGTH_MSS_MASK_);
+ 
+ 	buffer_info->skb = NULL;
+ 	buffer_info->dma_ptr = dma_ptr;
+@@ -1587,7 +1587,7 @@ static void lan743x_tx_frame_end(struct lan743x_tx *tx,
+ 	if (ignore_sync)
+ 		buffer_info->flags |= TX_BUFFER_INFO_FLAG_IGNORE_SYNC;
+ 
+-	tx_descriptor->data0 = tx->frame_data0;
++	tx_descriptor->data0 = cpu_to_le32(tx->frame_data0);
+ 	tx->frame_tail = lan743x_tx_next_index(tx, tx->frame_tail);
+ 	tx->last_tail = tx->frame_tail;
+ 
+@@ -1770,6 +1770,16 @@ static int lan743x_tx_ring_init(struct lan743x_tx *tx)
+ 		ret = -EINVAL;
+ 		goto cleanup;
+ 	}
++	if (dma_set_mask_and_coherent(&tx->adapter->pdev->dev,
++				      DMA_BIT_MASK(64))) {
++		if (dma_set_mask_and_coherent(&tx->adapter->pdev->dev,
++					      DMA_BIT_MASK(32))) {
++			dev_warn(&tx->adapter->pdev->dev,
++				 "lan743x_: No suitable DMA available\n");
++			ret = -ENOMEM;
++			goto cleanup;
++		}
++	}
+ 	ring_allocation_size = ALIGN(tx->ring_size *
+ 				     sizeof(struct lan743x_tx_descriptor),
+ 				     PAGE_SIZE);
+@@ -1994,11 +2004,11 @@ static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index,
+ 	}
+ 
+ 	buffer_info->buffer_length = length;
+-	descriptor->data1 = DMA_ADDR_LOW32(buffer_info->dma_ptr);
+-	descriptor->data2 = DMA_ADDR_HIGH32(buffer_info->dma_ptr);
++	descriptor->data1 = cpu_to_le32(DMA_ADDR_LOW32(buffer_info->dma_ptr));
++	descriptor->data2 = cpu_to_le32(DMA_ADDR_HIGH32(buffer_info->dma_ptr));
+ 	descriptor->data3 = 0;
+-	descriptor->data0 = (RX_DESC_DATA0_OWN_ |
+-			    (length & RX_DESC_DATA0_BUF_LENGTH_MASK_));
++	descriptor->data0 = cpu_to_le32((RX_DESC_DATA0_OWN_ |
++			    (length & RX_DESC_DATA0_BUF_LENGTH_MASK_)));
+ 	skb_reserve(buffer_info->skb, RX_HEAD_PADDING);
+ 	lan743x_rx_update_tail(rx, index);
+ 
+@@ -2013,12 +2023,12 @@ static void lan743x_rx_reuse_ring_element(struct lan743x_rx *rx, int index)
+ 	descriptor = &rx->ring_cpu_ptr[index];
+ 	buffer_info = &rx->buffer_info[index];
+ 
+-	descriptor->data1 = DMA_ADDR_LOW32(buffer_info->dma_ptr);
+-	descriptor->data2 = DMA_ADDR_HIGH32(buffer_info->dma_ptr);
++	descriptor->data1 = cpu_to_le32(DMA_ADDR_LOW32(buffer_info->dma_ptr));
++	descriptor->data2 = cpu_to_le32(DMA_ADDR_HIGH32(buffer_info->dma_ptr));
+ 	descriptor->data3 = 0;
+-	descriptor->data0 = (RX_DESC_DATA0_OWN_ |
++	descriptor->data0 = cpu_to_le32((RX_DESC_DATA0_OWN_ |
+ 			    ((buffer_info->buffer_length) &
+-			    RX_DESC_DATA0_BUF_LENGTH_MASK_));
++			    RX_DESC_DATA0_BUF_LENGTH_MASK_)));
+ 	lan743x_rx_update_tail(rx, index);
+ }
+ 
+@@ -2052,7 +2062,7 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ {
+ 	struct skb_shared_hwtstamps *hwtstamps = NULL;
+ 	int result = RX_PROCESS_RESULT_NOTHING_TO_DO;
+-	int current_head_index = *rx->head_cpu_ptr;
++	int current_head_index = le32_to_cpu(*rx->head_cpu_ptr);
+ 	struct lan743x_rx_buffer_info *buffer_info;
+ 	struct lan743x_rx_descriptor *descriptor;
+ 	int extension_index = -1;
+@@ -2067,14 +2077,14 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ 
+ 	if (rx->last_head != current_head_index) {
+ 		descriptor = &rx->ring_cpu_ptr[rx->last_head];
+-		if (descriptor->data0 & RX_DESC_DATA0_OWN_)
++		if (le32_to_cpu(descriptor->data0) & RX_DESC_DATA0_OWN_)
+ 			goto done;
+ 
+-		if (!(descriptor->data0 & RX_DESC_DATA0_FS_))
++		if (!(le32_to_cpu(descriptor->data0) & RX_DESC_DATA0_FS_))
+ 			goto done;
+ 
+ 		first_index = rx->last_head;
+-		if (descriptor->data0 & RX_DESC_DATA0_LS_) {
++		if (le32_to_cpu(descriptor->data0) & RX_DESC_DATA0_LS_) {
+ 			last_index = rx->last_head;
+ 		} else {
+ 			int index;
+@@ -2082,10 +2092,10 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ 			index = lan743x_rx_next_index(rx, first_index);
+ 			while (index != current_head_index) {
+ 				descriptor = &rx->ring_cpu_ptr[index];
+-				if (descriptor->data0 & RX_DESC_DATA0_OWN_)
++				if (le32_to_cpu(descriptor->data0) & RX_DESC_DATA0_OWN_)
+ 					goto done;
+ 
+-				if (descriptor->data0 & RX_DESC_DATA0_LS_) {
++				if (le32_to_cpu(descriptor->data0) & RX_DESC_DATA0_LS_) {
+ 					last_index = index;
+ 					break;
+ 				}
+@@ -2094,17 +2104,17 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ 		}
+ 		if (last_index >= 0) {
+ 			descriptor = &rx->ring_cpu_ptr[last_index];
+-			if (descriptor->data0 & RX_DESC_DATA0_EXT_) {
++			if (le32_to_cpu(descriptor->data0) & RX_DESC_DATA0_EXT_) {
+ 				/* extension is expected to follow */
+ 				int index = lan743x_rx_next_index(rx,
+ 								  last_index);
+ 				if (index != current_head_index) {
+ 					descriptor = &rx->ring_cpu_ptr[index];
+-					if (descriptor->data0 &
++					if (le32_to_cpu(descriptor->data0) &
+ 					    RX_DESC_DATA0_OWN_) {
+ 						goto done;
+ 					}
+-					if (descriptor->data0 &
++					if (le32_to_cpu(descriptor->data0) &
+ 					    RX_DESC_DATA0_EXT_) {
+ 						extension_index = index;
+ 					} else {
+@@ -2156,7 +2166,7 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ 			}
+ 			buffer_info->skb = NULL;
+ 			packet_length =	RX_DESC_DATA0_FRAME_LENGTH_GET_
+-					(descriptor->data0);
++					(le32_to_cpu(descriptor->data0));
+ 			skb_put(skb, packet_length - 4);
+ 			skb->protocol = eth_type_trans(skb,
+ 						       rx->adapter->netdev);
+@@ -2194,8 +2204,8 @@ process_extension:
+ 			descriptor = &rx->ring_cpu_ptr[extension_index];
+ 			buffer_info = &rx->buffer_info[extension_index];
+ 
+-			ts_sec = descriptor->data1;
+-			ts_nsec = (descriptor->data2 &
++			ts_sec = le32_to_cpu(descriptor->data1);
++			ts_nsec = (le32_to_cpu(descriptor->data2) &
+ 				  RX_DESC_DATA2_TS_NS_MASK_);
+ 			lan743x_rx_reuse_ring_element(rx, extension_index);
+ 			real_last_index = extension_index;
+@@ -2318,6 +2328,16 @@ static int lan743x_rx_ring_init(struct lan743x_rx *rx)
+ 		ret = -EINVAL;
+ 		goto cleanup;
+ 	}
++	if (dma_set_mask_and_coherent(&rx->adapter->pdev->dev,
++				      DMA_BIT_MASK(64))) {
++		if (dma_set_mask_and_coherent(&rx->adapter->pdev->dev,
++					      DMA_BIT_MASK(32))) {
++			dev_warn(&rx->adapter->pdev->dev,
++				 "lan743x_: No suitable DMA available\n");
++			ret = -ENOMEM;
++			goto cleanup;
++		}
++	}
+ 	ring_allocation_size = ALIGN(rx->ring_size *
+ 				     sizeof(struct lan743x_rx_descriptor),
+ 				     PAGE_SIZE);
+@@ -3066,6 +3086,8 @@ static int lan743x_pm_resume(struct device *dev)
+ 	if (ret) {
+ 		netif_err(adapter, probe, adapter->netdev,
+ 			  "lan743x_hardware_init returned %d\n", ret);
++		lan743x_pci_cleanup(adapter);
++		return ret;
+ 	}
+ 
+ 	/* open netdev when netdev is at running state while resume.
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.h b/drivers/net/ethernet/microchip/lan743x_main.h
+index a536f4a4994df..751f2bc9ce84e 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.h
++++ b/drivers/net/ethernet/microchip/lan743x_main.h
+@@ -660,7 +660,7 @@ struct lan743x_tx {
+ 
+ 	struct lan743x_tx_buffer_info *buffer_info;
+ 
+-	u32		*head_cpu_ptr;
++	__le32		*head_cpu_ptr;
+ 	dma_addr_t	head_dma_ptr;
+ 	int		last_head;
+ 	int		last_tail;
+@@ -690,7 +690,7 @@ struct lan743x_rx {
+ 
+ 	struct lan743x_rx_buffer_info *buffer_info;
+ 
+-	u32		*head_cpu_ptr;
++	__le32		*head_cpu_ptr;
+ 	dma_addr_t	head_dma_ptr;
+ 	u32		last_head;
+ 	u32		last_tail;
+@@ -775,10 +775,10 @@ struct lan743x_adapter {
+ #define TX_DESC_DATA3_FRAME_LENGTH_MSS_MASK_	(0x3FFF0000)
+ 
+ struct lan743x_tx_descriptor {
+-	u32     data0;
+-	u32     data1;
+-	u32     data2;
+-	u32     data3;
++	__le32     data0;
++	__le32     data1;
++	__le32     data2;
++	__le32     data3;
+ } __aligned(DEFAULT_DMA_DESCRIPTOR_SPACING);
+ 
+ #define TX_BUFFER_INFO_FLAG_ACTIVE		BIT(0)
+@@ -813,10 +813,10 @@ struct lan743x_tx_buffer_info {
+ #define RX_HEAD_PADDING		NET_IP_ALIGN
+ 
+ struct lan743x_rx_descriptor {
+-	u32     data0;
+-	u32     data1;
+-	u32     data2;
+-	u32     data3;
++	__le32     data0;
++	__le32     data1;
++	__le32     data2;
++	__le32     data3;
+ } __aligned(DEFAULT_DMA_DESCRIPTOR_SPACING);
+ 
+ #define RX_BUFFER_INFO_FLAG_ACTIVE      BIT(0)
+diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c
+index d3cbb4215f5cf..9e098e40fb1c6 100644
+--- a/drivers/net/ethernet/nxp/lpc_eth.c
++++ b/drivers/net/ethernet/nxp/lpc_eth.c
+@@ -1015,9 +1015,6 @@ static int lpc_eth_close(struct net_device *ndev)
+ 	napi_disable(&pldat->napi);
+ 	netif_stop_queue(ndev);
+ 
+-	if (ndev->phydev)
+-		phy_stop(ndev->phydev);
+-
+ 	spin_lock_irqsave(&pldat->lock, flags);
+ 	__lpc_eth_reset(pldat);
+ 	netif_carrier_off(ndev);
+@@ -1025,6 +1022,8 @@ static int lpc_eth_close(struct net_device *ndev)
+ 	writel(0, LPC_ENET_MAC2(pldat->net_base));
+ 	spin_unlock_irqrestore(&pldat->lock, flags);
+ 
++	if (ndev->phydev)
++		phy_stop(ndev->phydev);
+ 	clk_disable_unprepare(pldat->clk);
+ 
+ 	return 0;
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 453490be7055a..2645ca35103c9 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -544,7 +544,6 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
+ 	err = device_register(&bus->dev);
+ 	if (err) {
+ 		pr_err("mii_bus %s failed to register\n", bus->id);
+-		put_device(&bus->dev);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 28ddaad721ed1..5ee7cde0c2e97 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -260,62 +260,10 @@ static void phy_sanitize_settings(struct phy_device *phydev)
+ 	}
+ }
+ 
+-int phy_ethtool_ksettings_set(struct phy_device *phydev,
+-			      const struct ethtool_link_ksettings *cmd)
+-{
+-	__ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
+-	u8 autoneg = cmd->base.autoneg;
+-	u8 duplex = cmd->base.duplex;
+-	u32 speed = cmd->base.speed;
+-
+-	if (cmd->base.phy_address != phydev->mdio.addr)
+-		return -EINVAL;
+-
+-	linkmode_copy(advertising, cmd->link_modes.advertising);
+-
+-	/* We make sure that we don't pass unsupported values in to the PHY */
+-	linkmode_and(advertising, advertising, phydev->supported);
+-
+-	/* Verify the settings we care about. */
+-	if (autoneg != AUTONEG_ENABLE && autoneg != AUTONEG_DISABLE)
+-		return -EINVAL;
+-
+-	if (autoneg == AUTONEG_ENABLE && linkmode_empty(advertising))
+-		return -EINVAL;
+-
+-	if (autoneg == AUTONEG_DISABLE &&
+-	    ((speed != SPEED_1000 &&
+-	      speed != SPEED_100 &&
+-	      speed != SPEED_10) ||
+-	     (duplex != DUPLEX_HALF &&
+-	      duplex != DUPLEX_FULL)))
+-		return -EINVAL;
+-
+-	phydev->autoneg = autoneg;
+-
+-	if (autoneg == AUTONEG_DISABLE) {
+-		phydev->speed = speed;
+-		phydev->duplex = duplex;
+-	}
+-
+-	linkmode_copy(phydev->advertising, advertising);
+-
+-	linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+-			 phydev->advertising, autoneg == AUTONEG_ENABLE);
+-
+-	phydev->master_slave_set = cmd->base.master_slave_cfg;
+-	phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;
+-
+-	/* Restart the PHY */
+-	phy_start_aneg(phydev);
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL(phy_ethtool_ksettings_set);
+-
+ void phy_ethtool_ksettings_get(struct phy_device *phydev,
+ 			       struct ethtool_link_ksettings *cmd)
+ {
++	mutex_lock(&phydev->lock);
+ 	linkmode_copy(cmd->link_modes.supported, phydev->supported);
+ 	linkmode_copy(cmd->link_modes.advertising, phydev->advertising);
+ 	linkmode_copy(cmd->link_modes.lp_advertising, phydev->lp_advertising);
+@@ -334,6 +282,7 @@ void phy_ethtool_ksettings_get(struct phy_device *phydev,
+ 	cmd->base.autoneg = phydev->autoneg;
+ 	cmd->base.eth_tp_mdix_ctrl = phydev->mdix_ctrl;
+ 	cmd->base.eth_tp_mdix = phydev->mdix;
++	mutex_unlock(&phydev->lock);
+ }
+ EXPORT_SYMBOL(phy_ethtool_ksettings_get);
+ 
+@@ -767,7 +716,7 @@ static int phy_check_link_status(struct phy_device *phydev)
+ }
+ 
+ /**
+- * phy_start_aneg - start auto-negotiation for this PHY device
++ * _phy_start_aneg - start auto-negotiation for this PHY device
+  * @phydev: the phy_device struct
+  *
+  * Description: Sanitizes the settings (if we're not autonegotiating
+@@ -775,25 +724,43 @@ static int phy_check_link_status(struct phy_device *phydev)
+  *   If the PHYCONTROL Layer is operating, we change the state to
+  *   reflect the beginning of Auto-negotiation or forcing.
+  */
+-int phy_start_aneg(struct phy_device *phydev)
++static int _phy_start_aneg(struct phy_device *phydev)
+ {
+ 	int err;
+ 
++	lockdep_assert_held(&phydev->lock);
++
+ 	if (!phydev->drv)
+ 		return -EIO;
+ 
+-	mutex_lock(&phydev->lock);
+-
+ 	if (AUTONEG_DISABLE == phydev->autoneg)
+ 		phy_sanitize_settings(phydev);
+ 
+ 	err = phy_config_aneg(phydev);
+ 	if (err < 0)
+-		goto out_unlock;
++		return err;
+ 
+ 	if (phy_is_started(phydev))
+ 		err = phy_check_link_status(phydev);
+-out_unlock:
++
++	return err;
++}
++
++/**
++ * phy_start_aneg - start auto-negotiation for this PHY device
++ * @phydev: the phy_device struct
++ *
++ * Description: Sanitizes the settings (if we're not autonegotiating
++ *   them), and then calls the driver's config_aneg function.
++ *   If the PHYCONTROL Layer is operating, we change the state to
++ *   reflect the beginning of Auto-negotiation or forcing.
++ */
++int phy_start_aneg(struct phy_device *phydev)
++{
++	int err;
++
++	mutex_lock(&phydev->lock);
++	err = _phy_start_aneg(phydev);
+ 	mutex_unlock(&phydev->lock);
+ 
+ 	return err;
+@@ -816,6 +783,61 @@ static int phy_poll_aneg_done(struct phy_device *phydev)
+ 	return ret < 0 ? ret : 0;
+ }
+ 
++int phy_ethtool_ksettings_set(struct phy_device *phydev,
++			      const struct ethtool_link_ksettings *cmd)
++{
++	__ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
++	u8 autoneg = cmd->base.autoneg;
++	u8 duplex = cmd->base.duplex;
++	u32 speed = cmd->base.speed;
++
++	if (cmd->base.phy_address != phydev->mdio.addr)
++		return -EINVAL;
++
++	linkmode_copy(advertising, cmd->link_modes.advertising);
++
++	/* We make sure that we don't pass unsupported values in to the PHY */
++	linkmode_and(advertising, advertising, phydev->supported);
++
++	/* Verify the settings we care about. */
++	if (autoneg != AUTONEG_ENABLE && autoneg != AUTONEG_DISABLE)
++		return -EINVAL;
++
++	if (autoneg == AUTONEG_ENABLE && linkmode_empty(advertising))
++		return -EINVAL;
++
++	if (autoneg == AUTONEG_DISABLE &&
++	    ((speed != SPEED_1000 &&
++	      speed != SPEED_100 &&
++	      speed != SPEED_10) ||
++	     (duplex != DUPLEX_HALF &&
++	      duplex != DUPLEX_FULL)))
++		return -EINVAL;
++
++	mutex_lock(&phydev->lock);
++	phydev->autoneg = autoneg;
++
++	if (autoneg == AUTONEG_DISABLE) {
++		phydev->speed = speed;
++		phydev->duplex = duplex;
++	}
++
++	linkmode_copy(phydev->advertising, advertising);
++
++	linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
++			 phydev->advertising, autoneg == AUTONEG_ENABLE);
++
++	phydev->master_slave_set = cmd->base.master_slave_cfg;
++	phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;
++
++	/* Restart the PHY */
++	_phy_start_aneg(phydev);
++
++	mutex_unlock(&phydev->lock);
++	return 0;
++}
++EXPORT_SYMBOL(phy_ethtool_ksettings_set);
++
+ /**
+  * phy_speed_down - set speed to lowest speed supported by both link partners
+  * @phydev: the phy_device struct
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index a5cd42bae9621..a4f9912b36366 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -3745,6 +3745,12 @@ static int lan78xx_probe(struct usb_interface *intf,
+ 
+ 	dev->maxpacket = usb_maxpacket(dev->udev, dev->pipe_out, 1);
+ 
++	/* Reject broken descriptors. */
++	if (dev->maxpacket == 0) {
++		ret = -ENODEV;
++		goto out4;
++	}
++
+ 	/* driver requires remote-wakeup capability during autosuspend. */
+ 	intf->needs_remote_wakeup = 1;
+ 
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 6062dc27870e7..402390b1a66b5 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1755,6 +1755,11 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ 	if (!dev->rx_urb_size)
+ 		dev->rx_urb_size = dev->hard_mtu;
+ 	dev->maxpacket = usb_maxpacket (dev->udev, dev->out, 1);
++	if (dev->maxpacket == 0) {
++		/* that is a broken device */
++		status = -ENODEV;
++		goto out4;
++	}
+ 
+ 	/* let userspace know we have a random address */
+ 	if (ether_addr_equal(net->dev_addr, node_id))
+diff --git a/drivers/nfc/port100.c b/drivers/nfc/port100.c
+index 8e4d355dc3aec..1caebefb25ff1 100644
+--- a/drivers/nfc/port100.c
++++ b/drivers/nfc/port100.c
+@@ -1003,11 +1003,11 @@ static u64 port100_get_command_type_mask(struct port100 *dev)
+ 
+ 	skb = port100_alloc_skb(dev, 0);
+ 	if (!skb)
+-		return -ENOMEM;
++		return 0;
+ 
+ 	resp = port100_send_cmd_sync(dev, PORT100_CMD_GET_COMMAND_TYPE, skb);
+ 	if (IS_ERR(resp))
+-		return PTR_ERR(resp);
++		return 0;
+ 
+ 	if (resp->len < 8)
+ 		mask = 0;
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 05ad6bee085c1..e99d439894187 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -910,12 +910,14 @@ static void nvme_tcp_fail_request(struct nvme_tcp_request *req)
+ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ {
+ 	struct nvme_tcp_queue *queue = req->queue;
++	int req_data_len = req->data_len;
+ 
+ 	while (true) {
+ 		struct page *page = nvme_tcp_req_cur_page(req);
+ 		size_t offset = nvme_tcp_req_cur_offset(req);
+ 		size_t len = nvme_tcp_req_cur_length(req);
+ 		bool last = nvme_tcp_pdu_last_send(req, len);
++		int req_data_sent = req->data_sent;
+ 		int ret, flags = MSG_DONTWAIT;
+ 
+ 		if (last && !queue->data_digest && !nvme_tcp_queue_more(queue))
+@@ -942,7 +944,7 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ 		 * in the request where we don't want to modify it as we may
+ 		 * compete with the RX path completing the request.
+ 		 */
+-		if (req->data_sent + ret < req->data_len)
++		if (req_data_sent + ret < req_data_len)
+ 			nvme_tcp_advance_req(req, ret);
+ 
+ 		/* fully successful last send in current PDU */
+@@ -1035,10 +1037,11 @@ static int nvme_tcp_try_send_data_pdu(struct nvme_tcp_request *req)
+ static int nvme_tcp_try_send_ddgst(struct nvme_tcp_request *req)
+ {
+ 	struct nvme_tcp_queue *queue = req->queue;
++	size_t offset = req->offset;
+ 	int ret;
+ 	struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
+ 	struct kvec iov = {
+-		.iov_base = &req->ddgst + req->offset,
++		.iov_base = (u8 *)&req->ddgst + req->offset,
+ 		.iov_len = NVME_TCP_DIGEST_LENGTH - req->offset
+ 	};
+ 
+@@ -1051,7 +1054,7 @@ static int nvme_tcp_try_send_ddgst(struct nvme_tcp_request *req)
+ 	if (unlikely(ret <= 0))
+ 		return ret;
+ 
+-	if (req->offset + ret == NVME_TCP_DIGEST_LENGTH) {
++	if (offset + ret == NVME_TCP_DIGEST_LENGTH) {
+ 		nvme_tcp_done_send_req(queue);
+ 		return 1;
+ 	}
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index dedcb7aaf0d82..5266d534c4b31 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -690,7 +690,7 @@ static int nvmet_try_send_ddgst(struct nvmet_tcp_cmd *cmd, bool last_in_batch)
+ 	struct nvmet_tcp_queue *queue = cmd->queue;
+ 	struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
+ 	struct kvec iov = {
+-		.iov_base = &cmd->exp_ddgst + cmd->offset,
++		.iov_base = (u8 *)&cmd->exp_ddgst + cmd->offset,
+ 		.iov_len = NVME_TCP_DIGEST_LENGTH - cmd->offset
+ 	};
+ 	int ret;
+diff --git a/drivers/pinctrl/bcm/pinctrl-ns.c b/drivers/pinctrl/bcm/pinctrl-ns.c
+index e79690bd8b85f..d7f8175d2c1c8 100644
+--- a/drivers/pinctrl/bcm/pinctrl-ns.c
++++ b/drivers/pinctrl/bcm/pinctrl-ns.c
+@@ -5,7 +5,6 @@
+ 
+ #include <linux/err.h>
+ #include <linux/io.h>
+-#include <linux/mfd/syscon.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_device.h>
+@@ -13,7 +12,6 @@
+ #include <linux/pinctrl/pinctrl.h>
+ #include <linux/pinctrl/pinmux.h>
+ #include <linux/platform_device.h>
+-#include <linux/regmap.h>
+ #include <linux/slab.h>
+ 
+ #define FLAG_BCM4708		BIT(1)
+@@ -24,8 +22,7 @@ struct ns_pinctrl {
+ 	struct device *dev;
+ 	unsigned int chipset_flag;
+ 	struct pinctrl_dev *pctldev;
+-	struct regmap *regmap;
+-	u32 offset;
++	void __iomem *base;
+ 
+ 	struct pinctrl_desc pctldesc;
+ 	struct ns_pinctrl_group *groups;
+@@ -232,9 +229,9 @@ static int ns_pinctrl_set_mux(struct pinctrl_dev *pctrl_dev,
+ 		unset |= BIT(pin_number);
+ 	}
+ 
+-	regmap_read(ns_pinctrl->regmap, ns_pinctrl->offset, &tmp);
++	tmp = readl(ns_pinctrl->base);
+ 	tmp &= ~unset;
+-	regmap_write(ns_pinctrl->regmap, ns_pinctrl->offset, tmp);
++	writel(tmp, ns_pinctrl->base);
+ 
+ 	return 0;
+ }
+@@ -266,13 +263,13 @@ static const struct of_device_id ns_pinctrl_of_match_table[] = {
+ static int ns_pinctrl_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+-	struct device_node *np = dev->of_node;
+ 	const struct of_device_id *of_id;
+ 	struct ns_pinctrl *ns_pinctrl;
+ 	struct pinctrl_desc *pctldesc;
+ 	struct pinctrl_pin_desc *pin;
+ 	struct ns_pinctrl_group *group;
+ 	struct ns_pinctrl_function *function;
++	struct resource *res;
+ 	int i;
+ 
+ 	ns_pinctrl = devm_kzalloc(dev, sizeof(*ns_pinctrl), GFP_KERNEL);
+@@ -290,18 +287,12 @@ static int ns_pinctrl_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	ns_pinctrl->chipset_flag = (uintptr_t)of_id->data;
+ 
+-	ns_pinctrl->regmap = syscon_node_to_regmap(of_get_parent(np));
+-	if (IS_ERR(ns_pinctrl->regmap)) {
+-		int err = PTR_ERR(ns_pinctrl->regmap);
+-
+-		dev_err(dev, "Failed to map pinctrl regs: %d\n", err);
+-
+-		return err;
+-	}
+-
+-	if (of_property_read_u32(np, "offset", &ns_pinctrl->offset)) {
+-		dev_err(dev, "Failed to get register offset\n");
+-		return -ENOENT;
++	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
++					   "cru_gpio_control");
++	ns_pinctrl->base = devm_ioremap_resource(dev, res);
++	if (IS_ERR(ns_pinctrl->base)) {
++		dev_err(dev, "Failed to map pinctrl regs\n");
++		return PTR_ERR(ns_pinctrl->base);
+ 	}
+ 
+ 	memcpy(pctldesc, &ns_pinctrl_desc, sizeof(*pctldesc));
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index ef49402c0623d..e20bcc835d6a8 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -764,6 +764,34 @@ static const struct pinconf_ops amd_pinconf_ops = {
+ 	.pin_config_group_set = amd_pinconf_group_set,
+ };
+ 
++static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
++{
++	struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
++	unsigned long flags;
++	u32 pin_reg, mask;
++	int i;
++
++	mask = BIT(WAKE_CNTRL_OFF_S0I3) | BIT(WAKE_CNTRL_OFF_S3) |
++		BIT(INTERRUPT_MASK_OFF) | BIT(INTERRUPT_ENABLE_OFF) |
++		BIT(WAKE_CNTRL_OFF_S4);
++
++	for (i = 0; i < desc->npins; i++) {
++		int pin = desc->pins[i].number;
++		const struct pin_desc *pd = pin_desc_get(gpio_dev->pctrl, pin);
++
++		if (!pd)
++			continue;
++
++		raw_spin_lock_irqsave(&gpio_dev->lock, flags);
++
++		pin_reg = readl(gpio_dev->base + i * 4);
++		pin_reg &= ~mask;
++		writel(pin_reg, gpio_dev->base + i * 4);
++
++		raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
++	}
++}
++
+ #ifdef CONFIG_PM_SLEEP
+ static bool amd_gpio_should_save(struct amd_gpio *gpio_dev, unsigned int pin)
+ {
+@@ -901,6 +929,9 @@ static int amd_gpio_probe(struct platform_device *pdev)
+ 		return PTR_ERR(gpio_dev->pctrl);
+ 	}
+ 
++	/* Disable and mask interrupts */
++	amd_gpio_irq_init(gpio_dev);
++
+ 	girq = &gpio_dev->gc.irq;
+ 	girq->chip = &amd_gpio_irqchip;
+ 	/* This will let us handle the parent IRQ in the driver */
+diff --git a/drivers/reset/reset-brcmstb-rescal.c b/drivers/reset/reset-brcmstb-rescal.c
+index b6f074d6a65f8..433fa0c40e477 100644
+--- a/drivers/reset/reset-brcmstb-rescal.c
++++ b/drivers/reset/reset-brcmstb-rescal.c
+@@ -38,7 +38,7 @@ static int brcm_rescal_reset_set(struct reset_controller_dev *rcdev,
+ 	}
+ 
+ 	ret = readl_poll_timeout(base + BRCM_RESCAL_STATUS, reg,
+-				 !(reg & BRCM_RESCAL_STATUS_BIT), 100, 1000);
++				 (reg & BRCM_RESCAL_STATUS_BIT), 100, 1000);
+ 	if (ret) {
+ 		dev_err(data->dev, "time out on SATA/PCIe rescal\n");
+ 		return ret;
+diff --git a/drivers/scsi/ufs/ufs-exynos.c b/drivers/scsi/ufs/ufs-exynos.c
+index 3f4f3d6f48f9f..0246ea99df7b3 100644
+--- a/drivers/scsi/ufs/ufs-exynos.c
++++ b/drivers/scsi/ufs/ufs-exynos.c
+@@ -654,9 +654,9 @@ static int exynos_ufs_pre_pwr_mode(struct ufs_hba *hba,
+ 	}
+ 
+ 	/* setting for three timeout values for traffic class #0 */
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA0), 8064);
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA1), 28224);
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA2), 20160);
++	ufshcd_dme_set(hba, UIC_ARG_MIB(DL_FC0PROTTIMEOUTVAL), 8064);
++	ufshcd_dme_set(hba, UIC_ARG_MIB(DL_TC0REPLAYTIMEOUTVAL), 28224);
++	ufshcd_dme_set(hba, UIC_ARG_MIB(DL_AFC0REQTIMEOUTVAL), 20160);
+ 
+ 	return 0;
+ out:
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index 6cb598b549ca1..bc364c119af6a 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -156,7 +156,12 @@ static int kmmpd(void *data)
+ 	memcpy(mmp->mmp_nodename, init_utsname()->nodename,
+ 	       sizeof(mmp->mmp_nodename));
+ 
+-	while (!kthread_should_stop()) {
++	while (!kthread_should_stop() && !sb_rdonly(sb)) {
++		if (!ext4_has_feature_mmp(sb)) {
++			ext4_warning(sb, "kmmpd being stopped since MMP feature"
++				     " has been disabled.");
++			goto wait_to_exit;
++		}
+ 		if (++seq > EXT4_MMP_SEQ_MAX)
+ 			seq = 1;
+ 
+@@ -177,16 +182,6 @@ static int kmmpd(void *data)
+ 			failed_writes++;
+ 		}
+ 
+-		if (!(le32_to_cpu(es->s_feature_incompat) &
+-		    EXT4_FEATURE_INCOMPAT_MMP)) {
+-			ext4_warning(sb, "kmmpd being stopped since MMP feature"
+-				     " has been disabled.");
+-			goto exit_thread;
+-		}
+-
+-		if (sb_rdonly(sb))
+-			break;
+-
+ 		diff = jiffies - last_update_time;
+ 		if (diff < mmp_update_interval * HZ)
+ 			schedule_timeout_interruptible(mmp_update_interval *
+@@ -207,7 +202,7 @@ static int kmmpd(void *data)
+ 				ext4_error_err(sb, -retval,
+ 					       "error reading MMP data: %d",
+ 					       retval);
+-				goto exit_thread;
++				goto wait_to_exit;
+ 			}
+ 
+ 			mmp_check = (struct mmp_struct *)(bh_check->b_data);
+@@ -221,7 +216,7 @@ static int kmmpd(void *data)
+ 				ext4_error_err(sb, EBUSY, "abort");
+ 				put_bh(bh_check);
+ 				retval = -EBUSY;
+-				goto exit_thread;
++				goto wait_to_exit;
+ 			}
+ 			put_bh(bh_check);
+ 		}
+@@ -244,7 +239,13 @@ static int kmmpd(void *data)
+ 
+ 	retval = write_mmp_block(sb, bh);
+ 
+-exit_thread:
++wait_to_exit:
++	while (!kthread_should_stop()) {
++		set_current_state(TASK_INTERRUPTIBLE);
++		if (!kthread_should_stop())
++			schedule();
++	}
++	set_current_state(TASK_RUNNING);
+ 	return retval;
+ }
+ 
+@@ -391,5 +392,3 @@ failed:
+ 	brelse(bh);
+ 	return 1;
+ }
+-
+-
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index cbeb024296719..953c7e8757831 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -5932,7 +5932,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 				 */
+ 				ext4_mark_recovery_complete(sb, es);
+ 			}
+-			ext4_stop_mmpd(sbi);
+ 		} else {
+ 			/* Make sure we can mount this feature set readwrite */
+ 			if (ext4_has_feature_readonly(sb) ||
+@@ -6046,6 +6045,9 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks)
+ 		ext4_release_system_zone(sb);
+ 
++	if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
++		ext4_stop_mmpd(sbi);
++
+ 	/*
+ 	 * Some options can be enabled by ext4 and/or by VFS mount flag
+ 	 * either way we need to make sure it matches in both *flags and
+@@ -6078,6 +6080,8 @@ restore_opts:
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+ 		kfree(to_free[i]);
+ #endif
++	if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
++		ext4_stop_mmpd(sbi);
+ 	kfree(orig_data);
+ 	return err;
+ }
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 0736487165da4..ee7ceea899346 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2075,7 +2075,9 @@ static void io_req_task_cancel(struct callback_head *cb)
+ 	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+ 	struct io_ring_ctx *ctx = req->ctx;
+ 
++	mutex_lock(&ctx->uring_lock);
+ 	__io_req_task_cancel(req, -ECANCELED);
++	mutex_unlock(&ctx->uring_lock);
+ 	percpu_ref_put(&ctx->refs);
+ }
+ 
+diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
+index 8c8cf7f4eb34e..e7d04adb6cb87 100644
+--- a/fs/ocfs2/suballoc.c
++++ b/fs/ocfs2/suballoc.c
+@@ -1253,7 +1253,7 @@ static int ocfs2_test_bg_bit_allocatable(struct buffer_head *bg_bh,
+ {
+ 	struct ocfs2_group_desc *bg = (struct ocfs2_group_desc *) bg_bh->b_data;
+ 	struct journal_head *jh;
+-	int ret;
++	int ret = 1;
+ 
+ 	if (ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap))
+ 		return 0;
+@@ -1261,14 +1261,18 @@ static int ocfs2_test_bg_bit_allocatable(struct buffer_head *bg_bh,
+ 	if (!buffer_jbd(bg_bh))
+ 		return 1;
+ 
+-	jh = bh2jh(bg_bh);
+-	spin_lock(&jh->b_state_lock);
+-	bg = (struct ocfs2_group_desc *) jh->b_committed_data;
+-	if (bg)
+-		ret = !ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap);
+-	else
+-		ret = 1;
+-	spin_unlock(&jh->b_state_lock);
++	jbd_lock_bh_journal_head(bg_bh);
++	if (buffer_jbd(bg_bh)) {
++		jh = bh2jh(bg_bh);
++		spin_lock(&jh->b_state_lock);
++		bg = (struct ocfs2_group_desc *) jh->b_committed_data;
++		if (bg)
++			ret = !ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap);
++		else
++			ret = 1;
++		spin_unlock(&jh->b_state_lock);
++	}
++	jbd_unlock_bh_journal_head(bg_bh);
+ 
+ 	return ret;
+ }
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 0caa448f7b409..1f62a4eec283c 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -862,8 +862,11 @@ struct bpf_array_aux {
+ 	 * stored in the map to make sure that all callers and callees have
+ 	 * the same prog type and JITed flag.
+ 	 */
+-	enum bpf_prog_type type;
+-	bool jited;
++	struct {
++		spinlock_t lock;
++		enum bpf_prog_type type;
++		bool jited;
++	} owner;
+ 	/* Programs with direct jumps into programs part of this array. */
+ 	struct list_head poke_progs;
+ 	struct bpf_map *map;
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 8a1bf2dbadd0c..2a819be384a78 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -5202,7 +5202,6 @@ struct cfg80211_cqm_config;
+  *	netdev and may otherwise be used by driver read-only, will be update
+  *	by cfg80211 on change_interface
+  * @mgmt_registrations: list of registrations for management frames
+- * @mgmt_registrations_lock: lock for the list
+  * @mgmt_registrations_need_update: mgmt registrations were updated,
+  *	need to propagate the update to the driver
+  * @mtx: mutex used to lock data in this struct, may be used by drivers
+@@ -5249,7 +5248,6 @@ struct wireless_dev {
+ 	u32 identifier;
+ 
+ 	struct list_head mgmt_registrations;
+-	spinlock_t mgmt_registrations_lock;
+ 	u8 mgmt_registrations_need_update:1;
+ 
+ 	struct mutex mtx;
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 43891b28fc482..745b3bc6ce91d 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -359,6 +359,7 @@ int tls_sk_query(struct sock *sk, int optname, char __user *optval,
+ 		int __user *optlen);
+ int tls_sk_attach(struct sock *sk, int optname, char __user *optval,
+ 		  unsigned int optlen);
++void tls_err_abort(struct sock *sk, int err);
+ 
+ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx);
+ void tls_sw_strparser_arm(struct sock *sk, struct tls_context *ctx);
+@@ -467,12 +468,6 @@ static inline bool tls_is_sk_tx_device_offloaded(struct sock *sk)
+ #endif
+ }
+ 
+-static inline void tls_err_abort(struct sock *sk, int err)
+-{
+-	sk->sk_err = err;
+-	sk->sk_error_report(sk);
+-}
+-
+ static inline bool tls_bigint_increment(unsigned char *seq, int len)
+ {
+ 	int i;
+@@ -513,7 +508,7 @@ static inline void tls_advance_record_sn(struct sock *sk,
+ 					 struct cipher_context *ctx)
+ {
+ 	if (tls_bigint_increment(ctx->rec_seq, prot->rec_seq_size))
+-		tls_err_abort(sk, EBADMSG);
++		tls_err_abort(sk, -EBADMSG);
+ 
+ 	if (prot->version != TLS_1_3_VERSION)
+ 		tls_bigint_increment(ctx->iv + TLS_CIPHER_AES_GCM_128_SALT_SIZE,
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index c6c81eceb68fb..36c68dcea2369 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -1025,6 +1025,7 @@ static struct bpf_map *prog_array_map_alloc(union bpf_attr *attr)
+ 	INIT_WORK(&aux->work, prog_array_map_clear_deferred);
+ 	INIT_LIST_HEAD(&aux->poke_progs);
+ 	mutex_init(&aux->poke_mutex);
++	spin_lock_init(&aux->owner.lock);
+ 
+ 	map = array_map_alloc(attr);
+ 	if (IS_ERR(map)) {
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 2e4a658d65d6e..72e4bf0ee5460 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -1775,20 +1775,26 @@ static unsigned int __bpf_prog_ret0_warn(const void *ctx,
+ bool bpf_prog_array_compatible(struct bpf_array *array,
+ 			       const struct bpf_prog *fp)
+ {
++	bool ret;
++
+ 	if (fp->kprobe_override)
+ 		return false;
+ 
+-	if (!array->aux->type) {
++	spin_lock(&array->aux->owner.lock);
++
++	if (!array->aux->owner.type) {
+ 		/* There's no owner yet where we could check for
+ 		 * compatibility.
+ 		 */
+-		array->aux->type  = fp->type;
+-		array->aux->jited = fp->jited;
+-		return true;
++		array->aux->owner.type  = fp->type;
++		array->aux->owner.jited = fp->jited;
++		ret = true;
++	} else {
++		ret = array->aux->owner.type  == fp->type &&
++		      array->aux->owner.jited == fp->jited;
+ 	}
+-
+-	return array->aux->type  == fp->type &&
+-	       array->aux->jited == fp->jited;
++	spin_unlock(&array->aux->owner.lock);
++	return ret;
+ }
+ 
+ static int bpf_check_tail_call(const struct bpf_prog *fp)
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 9433ab9995cd7..5b6da64da46d7 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -535,8 +535,10 @@ static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp)
+ 
+ 	if (map->map_type == BPF_MAP_TYPE_PROG_ARRAY) {
+ 		array = container_of(map, struct bpf_array, map);
+-		type  = array->aux->type;
+-		jited = array->aux->jited;
++		spin_lock(&array->aux->owner.lock);
++		type  = array->aux->owner.type;
++		jited = array->aux->owner.jited;
++		spin_unlock(&array->aux->owner.lock);
+ 	}
+ 
+ 	seq_printf(m,
+@@ -1307,12 +1309,11 @@ int generic_map_update_batch(struct bpf_map *map,
+ 	void __user *values = u64_to_user_ptr(attr->batch.values);
+ 	void __user *keys = u64_to_user_ptr(attr->batch.keys);
+ 	u32 value_size, cp, max_count;
+-	int ufd = attr->map_fd;
++	int ufd = attr->batch.map_fd;
+ 	void *key, *value;
+ 	struct fd f;
+ 	int err = 0;
+ 
+-	f = fdget(ufd);
+ 	if (attr->batch.elem_flags & ~BPF_F_LOCK)
+ 		return -EINVAL;
+ 
+@@ -1337,6 +1338,7 @@ int generic_map_update_batch(struct bpf_map *map,
+ 		return -ENOMEM;
+ 	}
+ 
++	f = fdget(ufd); /* bpf_map_do_batch() guarantees ufd is valid */
+ 	for (cp = 0; cp < max_count; cp++) {
+ 		err = -EFAULT;
+ 		if (copy_from_user(key, keys + cp * map->key_size,
+@@ -1356,6 +1358,7 @@ int generic_map_update_batch(struct bpf_map *map,
+ 
+ 	kfree(value);
+ 	kfree(key);
++	fdput(f);
+ 	return err;
+ }
+ 
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index c8b811e039cc2..60d38e2f69dd8 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2147,8 +2147,10 @@ static void cgroup_kill_sb(struct super_block *sb)
+ 	 * And don't kill the default root.
+ 	 */
+ 	if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root &&
+-	    !percpu_ref_is_dying(&root->cgrp.self.refcnt))
++	    !percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
++		cgroup_bpf_offline(&root->cgrp);
+ 		percpu_ref_kill(&root->cgrp.self.refcnt);
++	}
+ 	cgroup_put(&root->cgrp);
+ 	kernfs_kill_sb(sb);
+ }
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index ee88125785638..ff389d970b683 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1758,6 +1758,10 @@ static void collapse_file(struct mm_struct *mm,
+ 				filemap_flush(mapping);
+ 				result = SCAN_FAIL;
+ 				goto xa_unlocked;
++			} else if (PageWriteback(page)) {
++				xas_unlock_irq(&xas);
++				result = SCAN_FAIL;
++				goto xa_unlocked;
+ 			} else if (trylock_page(page)) {
+ 				get_page(page);
+ 				xas_unlock_irq(&xas);
+@@ -1793,7 +1797,8 @@ static void collapse_file(struct mm_struct *mm,
+ 			goto out_unlock;
+ 		}
+ 
+-		if (!is_shmem && PageDirty(page)) {
++		if (!is_shmem && (PageDirty(page) ||
++				  PageWriteback(page))) {
+ 			/*
+ 			 * khugepaged only works on read-only fd, so this
+ 			 * page is dirty because it hasn't been flushed
+diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
+index ba0027d1f2dfd..ee9cead765450 100644
+--- a/net/batman-adv/bridge_loop_avoidance.c
++++ b/net/batman-adv/bridge_loop_avoidance.c
+@@ -1561,10 +1561,14 @@ int batadv_bla_init(struct batadv_priv *bat_priv)
+ 		return 0;
+ 
+ 	bat_priv->bla.claim_hash = batadv_hash_new(128);
+-	bat_priv->bla.backbone_hash = batadv_hash_new(32);
++	if (!bat_priv->bla.claim_hash)
++		return -ENOMEM;
+ 
+-	if (!bat_priv->bla.claim_hash || !bat_priv->bla.backbone_hash)
++	bat_priv->bla.backbone_hash = batadv_hash_new(32);
++	if (!bat_priv->bla.backbone_hash) {
++		batadv_hash_destroy(bat_priv->bla.claim_hash);
+ 		return -ENOMEM;
++	}
+ 
+ 	batadv_hash_set_lock_class(bat_priv->bla.claim_hash,
+ 				   &batadv_claim_hash_lock_class_key);
+diff --git a/net/batman-adv/main.c b/net/batman-adv/main.c
+index 70fee9b42e25f..9f267b190779f 100644
+--- a/net/batman-adv/main.c
++++ b/net/batman-adv/main.c
+@@ -196,29 +196,41 @@ int batadv_mesh_init(struct net_device *soft_iface)
+ 
+ 	bat_priv->gw.generation = 0;
+ 
+-	ret = batadv_v_mesh_init(bat_priv);
+-	if (ret < 0)
+-		goto err;
+-
+ 	ret = batadv_originator_init(bat_priv);
+-	if (ret < 0)
+-		goto err;
++	if (ret < 0) {
++		atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
++		goto err_orig;
++	}
+ 
+ 	ret = batadv_tt_init(bat_priv);
+-	if (ret < 0)
+-		goto err;
++	if (ret < 0) {
++		atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
++		goto err_tt;
++	}
++
++	ret = batadv_v_mesh_init(bat_priv);
++	if (ret < 0) {
++		atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
++		goto err_v;
++	}
+ 
+ 	ret = batadv_bla_init(bat_priv);
+-	if (ret < 0)
+-		goto err;
++	if (ret < 0) {
++		atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
++		goto err_bla;
++	}
+ 
+ 	ret = batadv_dat_init(bat_priv);
+-	if (ret < 0)
+-		goto err;
++	if (ret < 0) {
++		atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
++		goto err_dat;
++	}
+ 
+ 	ret = batadv_nc_mesh_init(bat_priv);
+-	if (ret < 0)
+-		goto err;
++	if (ret < 0) {
++		atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
++		goto err_nc;
++	}
+ 
+ 	batadv_gw_init(bat_priv);
+ 	batadv_mcast_init(bat_priv);
+@@ -228,8 +240,20 @@ int batadv_mesh_init(struct net_device *soft_iface)
+ 
+ 	return 0;
+ 
+-err:
+-	batadv_mesh_free(soft_iface);
++err_nc:
++	batadv_dat_free(bat_priv);
++err_dat:
++	batadv_bla_free(bat_priv);
++err_bla:
++	batadv_v_mesh_free(bat_priv);
++err_v:
++	batadv_tt_free(bat_priv);
++err_tt:
++	batadv_originator_free(bat_priv);
++err_orig:
++	batadv_purge_outstanding_packets(bat_priv, NULL);
++	atomic_set(&bat_priv->mesh_state, BATADV_MESH_INACTIVE);
++
+ 	return ret;
+ }
+ 
+diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
+index 61ddd6d709a0e..35b3e03c07774 100644
+--- a/net/batman-adv/network-coding.c
++++ b/net/batman-adv/network-coding.c
+@@ -155,8 +155,10 @@ int batadv_nc_mesh_init(struct batadv_priv *bat_priv)
+ 				   &batadv_nc_coding_hash_lock_class_key);
+ 
+ 	bat_priv->nc.decoding_hash = batadv_hash_new(128);
+-	if (!bat_priv->nc.decoding_hash)
++	if (!bat_priv->nc.decoding_hash) {
++		batadv_hash_destroy(bat_priv->nc.coding_hash);
+ 		goto err;
++	}
+ 
+ 	batadv_hash_set_lock_class(bat_priv->nc.decoding_hash,
+ 				   &batadv_nc_decoding_hash_lock_class_key);
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index a510f7f86a7dc..de946ea8f13c8 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -4405,8 +4405,10 @@ int batadv_tt_init(struct batadv_priv *bat_priv)
+ 		return ret;
+ 
+ 	ret = batadv_tt_global_init(bat_priv);
+-	if (ret < 0)
++	if (ret < 0) {
++		batadv_tt_local_table_free(bat_priv);
+ 		return ret;
++	}
+ 
+ 	batadv_tvlv_handler_register(bat_priv, batadv_tt_tvlv_ogm_handler_v1,
+ 				     batadv_tt_tvlv_unicast_handler_v1,
+diff --git a/net/core/dev.c b/net/core/dev.c
+index b9d19fbb15890..6a4e0e3c59fec 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3171,6 +3171,12 @@ static u16 skb_tx_hash(const struct net_device *dev,
+ 
+ 		qoffset = sb_dev->tc_to_txq[tc].offset;
+ 		qcount = sb_dev->tc_to_txq[tc].count;
++		if (unlikely(!qcount)) {
++			net_warn_ratelimited("%s: invalid qcount, qoffset %u for tc %u\n",
++					     sb_dev->name, qoffset, tc);
++			qoffset = 0;
++			qcount = dev->real_num_tx_queues;
++		}
+ 	}
+ 
+ 	if (skb_rx_queue_recorded(skb)) {
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index b4562f9d074cf..cc5f760c78250 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -1957,9 +1957,9 @@ int netdev_register_kobject(struct net_device *ndev)
+ int netdev_change_owner(struct net_device *ndev, const struct net *net_old,
+ 			const struct net *net_new)
+ {
++	kuid_t old_uid = GLOBAL_ROOT_UID, new_uid = GLOBAL_ROOT_UID;
++	kgid_t old_gid = GLOBAL_ROOT_GID, new_gid = GLOBAL_ROOT_GID;
+ 	struct device *dev = &ndev->dev;
+-	kuid_t old_uid, new_uid;
+-	kgid_t old_gid, new_gid;
+ 	int error;
+ 
+ 	net_ns_get_ownership(net_old, &old_uid, &old_gid);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index f91ae827d47f5..9194070c67250 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -317,6 +317,7 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ 	bool cork = false, enospc = sk_msg_full(msg);
+ 	struct sock *sk_redir;
+ 	u32 tosend, delta = 0;
++	u32 eval = __SK_NONE;
+ 	int ret;
+ 
+ more_data:
+@@ -360,13 +361,24 @@ more_data:
+ 	case __SK_REDIRECT:
+ 		sk_redir = psock->sk_redir;
+ 		sk_msg_apply_bytes(psock, tosend);
++		if (!psock->apply_bytes) {
++			/* Clean up before releasing the sock lock. */
++			eval = psock->eval;
++			psock->eval = __SK_NONE;
++			psock->sk_redir = NULL;
++		}
+ 		if (psock->cork) {
+ 			cork = true;
+ 			psock->cork = NULL;
+ 		}
+ 		sk_msg_return(sk, msg, tosend);
+ 		release_sock(sk);
++
+ 		ret = tcp_bpf_sendmsg_redir(sk_redir, msg, tosend, flags);
++
++		if (eval == __SK_REDIRECT)
++			sock_put(sk_redir);
++
+ 		lock_sock(sk);
+ 		if (unlikely(ret < 0)) {
+ 			int free = sk_msg_free_nocharge(sk, msg);
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index b65bdaa84228f..096e6be1d8fc8 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -697,6 +697,9 @@ enum sctp_disposition sctp_sf_do_5_1D_ce(struct net *net,
+ 	struct sock *sk;
+ 	int error = 0;
+ 
++	if (asoc && !sctp_vtag_verify(chunk, asoc))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
+ 	/* If the packet is an OOTB packet which is temporarily on the
+ 	 * control endpoint, respond with an ABORT.
+ 	 */
+@@ -711,7 +714,8 @@ enum sctp_disposition sctp_sf_do_5_1D_ce(struct net *net,
+ 	 * in sctp_unpack_cookie().
+ 	 */
+ 	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
+-		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
++						  commands);
+ 
+ 	/* If the endpoint is not listening or if the number of associations
+ 	 * on the TCP-style socket exceed the max backlog, respond with an
+@@ -2141,9 +2145,11 @@ enum sctp_disposition sctp_sf_do_5_2_4_dupcook(
+ 	 * enough for the chunk header.  Cookie length verification is
+ 	 * done later.
+ 	 */
+-	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
+-		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+-						  commands);
++	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr))) {
++		if (!sctp_vtag_verify(chunk, asoc))
++			asoc = NULL;
++		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg, commands);
++	}
+ 
+ 	/* "Decode" the chunk.  We have no optional parameters so we
+ 	 * are in good shape.
+@@ -2280,7 +2286,7 @@ enum sctp_disposition sctp_sf_shutdown_pending_abort(
+ 	 */
+ 	if (SCTP_ADDR_DEL ==
+ 		    sctp_bind_addr_state(&asoc->base.bind_addr, &chunk->dest))
+-		return sctp_sf_discard_chunk(net, ep, asoc, type, arg, commands);
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
+ 	if (!sctp_err_chunk_valid(chunk))
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+@@ -2326,7 +2332,7 @@ enum sctp_disposition sctp_sf_shutdown_sent_abort(
+ 	 */
+ 	if (SCTP_ADDR_DEL ==
+ 		    sctp_bind_addr_state(&asoc->base.bind_addr, &chunk->dest))
+-		return sctp_sf_discard_chunk(net, ep, asoc, type, arg, commands);
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
+ 	if (!sctp_err_chunk_valid(chunk))
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+@@ -2596,7 +2602,7 @@ enum sctp_disposition sctp_sf_do_9_1_abort(
+ 	 */
+ 	if (SCTP_ADDR_DEL ==
+ 		    sctp_bind_addr_state(&asoc->base.bind_addr, &chunk->dest))
+-		return sctp_sf_discard_chunk(net, ep, asoc, type, arg, commands);
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
+ 	if (!sctp_err_chunk_valid(chunk))
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+@@ -3562,6 +3568,9 @@ enum sctp_disposition sctp_sf_ootb(struct net *net,
+ 
+ 	SCTP_INC_STATS(net, SCTP_MIB_OUTOFBLUES);
+ 
++	if (asoc && !sctp_vtag_verify(chunk, asoc))
++		asoc = NULL;
++
+ 	ch = (struct sctp_chunkhdr *)chunk->chunk_hdr;
+ 	do {
+ 		/* Report violation if the chunk is less then minimal */
+@@ -3677,12 +3686,6 @@ static enum sctp_disposition sctp_sf_shut_8_4_5(
+ 
+ 	SCTP_INC_STATS(net, SCTP_MIB_OUTCTRLCHUNKS);
+ 
+-	/* If the chunk length is invalid, we don't want to process
+-	 * the reset of the packet.
+-	 */
+-	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
+-		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+-
+ 	/* We need to discard the rest of the packet to prevent
+ 	 * potential bomming attacks from additional bundled chunks.
+ 	 * This is documented in SCTP Threats ID.
+@@ -3710,6 +3713,9 @@ enum sctp_disposition sctp_sf_do_8_5_1_E_sa(struct net *net,
+ {
+ 	struct sctp_chunk *chunk = arg;
+ 
++	if (!sctp_vtag_verify(chunk, asoc))
++		asoc = NULL;
++
+ 	/* Make sure that the SHUTDOWN_ACK chunk has a valid length. */
+ 	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
+ 		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+@@ -3745,6 +3751,11 @@ enum sctp_disposition sctp_sf_do_asconf(struct net *net,
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 	}
+ 
++	/* Make sure that the ASCONF ADDIP chunk has a valid length.  */
++	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_addip_chunk)))
++		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
++						  commands);
++
+ 	/* ADD-IP: Section 4.1.1
+ 	 * This chunk MUST be sent in an authenticated way by using
+ 	 * the mechanism defined in [I-D.ietf-tsvwg-sctp-auth]. If this chunk
+@@ -3753,13 +3764,7 @@ enum sctp_disposition sctp_sf_do_asconf(struct net *net,
+ 	 */
+ 	if (!asoc->peer.asconf_capable ||
+ 	    (!net->sctp.addip_noauth && !chunk->auth))
+-		return sctp_sf_discard_chunk(net, ep, asoc, type, arg,
+-					     commands);
+-
+-	/* Make sure that the ASCONF ADDIP chunk has a valid length.  */
+-	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_addip_chunk)))
+-		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+-						  commands);
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
+ 	hdr = (struct sctp_addiphdr *)chunk->skb->data;
+ 	serial = ntohl(hdr->serial);
+@@ -3888,6 +3893,12 @@ enum sctp_disposition sctp_sf_do_asconf_ack(struct net *net,
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 	}
+ 
++	/* Make sure that the ADDIP chunk has a valid length.  */
++	if (!sctp_chunk_length_valid(asconf_ack,
++				     sizeof(struct sctp_addip_chunk)))
++		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
++						  commands);
++
+ 	/* ADD-IP, Section 4.1.2:
+ 	 * This chunk MUST be sent in an authenticated way by using
+ 	 * the mechanism defined in [I-D.ietf-tsvwg-sctp-auth]. If this chunk
+@@ -3896,14 +3907,7 @@ enum sctp_disposition sctp_sf_do_asconf_ack(struct net *net,
+ 	 */
+ 	if (!asoc->peer.asconf_capable ||
+ 	    (!net->sctp.addip_noauth && !asconf_ack->auth))
+-		return sctp_sf_discard_chunk(net, ep, asoc, type, arg,
+-					     commands);
+-
+-	/* Make sure that the ADDIP chunk has a valid length.  */
+-	if (!sctp_chunk_length_valid(asconf_ack,
+-				     sizeof(struct sctp_addip_chunk)))
+-		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+-						  commands);
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
+ 	addip_hdr = (struct sctp_addiphdr *)asconf_ack->skb->data;
+ 	rcvd_serial = ntohl(addip_hdr->serial);
+@@ -4475,6 +4479,9 @@ enum sctp_disposition sctp_sf_discard_chunk(struct net *net,
+ {
+ 	struct sctp_chunk *chunk = arg;
+ 
++	if (asoc && !sctp_vtag_verify(chunk, asoc))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
+ 	/* Make sure that the chunk has a valid length.
+ 	 * Since we don't know the chunk type, we use a general
+ 	 * chunkhdr structure to make a comparison.
+@@ -4542,6 +4549,9 @@ enum sctp_disposition sctp_sf_violation(struct net *net,
+ {
+ 	struct sctp_chunk *chunk = arg;
+ 
++	if (!sctp_vtag_verify(chunk, asoc))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
+ 	/* Make sure that the chunk has a valid length. */
+ 	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
+ 		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+@@ -6248,6 +6258,7 @@ static struct sctp_packet *sctp_ootb_pkt_new(
+ 		 * yet.
+ 		 */
+ 		switch (chunk->chunk_hdr->type) {
++		case SCTP_CID_INIT:
+ 		case SCTP_CID_INIT_ACK:
+ 		{
+ 			struct sctp_initack_chunk *initack;
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index f8e73c4a00933..23b100f36ee48 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -2278,43 +2278,53 @@ static bool tipc_crypto_key_rcv(struct tipc_crypto *rx, struct tipc_msg *hdr)
+ 	u16 key_gen = msg_key_gen(hdr);
+ 	u16 size = msg_data_sz(hdr);
+ 	u8 *data = msg_data(hdr);
++	unsigned int keylen;
++
++	/* Verify whether the size can exist in the packet */
++	if (unlikely(size < sizeof(struct tipc_aead_key) + TIPC_AEAD_KEYLEN_MIN)) {
++		pr_debug("%s: message data size is too small\n", rx->name);
++		goto exit;
++	}
++
++	keylen = ntohl(*((__be32 *)(data + TIPC_AEAD_ALG_NAME)));
++
++	/* Verify the supplied size values */
++	if (unlikely(size != keylen + sizeof(struct tipc_aead_key) ||
++		     keylen > TIPC_AEAD_KEY_SIZE_MAX)) {
++		pr_debug("%s: invalid MSG_CRYPTO key size\n", rx->name);
++		goto exit;
++	}
+ 
+ 	spin_lock(&rx->lock);
+ 	if (unlikely(rx->skey || (key_gen == rx->key_gen && rx->key.keys))) {
+ 		pr_err("%s: key existed <%p>, gen %d vs %d\n", rx->name,
+ 		       rx->skey, key_gen, rx->key_gen);
+-		goto exit;
++		goto exit_unlock;
+ 	}
+ 
+ 	/* Allocate memory for the key */
+ 	skey = kmalloc(size, GFP_ATOMIC);
+ 	if (unlikely(!skey)) {
+ 		pr_err("%s: unable to allocate memory for skey\n", rx->name);
+-		goto exit;
++		goto exit_unlock;
+ 	}
+ 
+ 	/* Copy key from msg data */
+-	skey->keylen = ntohl(*((__be32 *)(data + TIPC_AEAD_ALG_NAME)));
++	skey->keylen = keylen;
+ 	memcpy(skey->alg_name, data, TIPC_AEAD_ALG_NAME);
+ 	memcpy(skey->key, data + TIPC_AEAD_ALG_NAME + sizeof(__be32),
+ 	       skey->keylen);
+ 
+-	/* Sanity check */
+-	if (unlikely(size != tipc_aead_key_size(skey))) {
+-		kfree(skey);
+-		skey = NULL;
+-		goto exit;
+-	}
+-
+ 	rx->key_gen = key_gen;
+ 	rx->skey_mode = msg_key_mode(hdr);
+ 	rx->skey = skey;
+ 	rx->nokey = 0;
+ 	mb(); /* for nokey flag */
+ 
+-exit:
++exit_unlock:
+ 	spin_unlock(&rx->lock);
+ 
++exit:
+ 	/* Schedule the key attaching on this crypto */
+ 	if (likely(skey && queue_delayed_work(tx->wq, &rx->work, 0)))
+ 		return true;
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 15395683b8e2a..14cce61160a58 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -35,6 +35,7 @@
+  * SOFTWARE.
+  */
+ 
++#include <linux/bug.h>
+ #include <linux/sched/signal.h>
+ #include <linux/module.h>
+ #include <linux/splice.h>
+@@ -43,6 +44,14 @@
+ #include <net/strparser.h>
+ #include <net/tls.h>
+ 
++noinline void tls_err_abort(struct sock *sk, int err)
++{
++	WARN_ON_ONCE(err >= 0);
++	/* sk->sk_err should contain a positive error code. */
++	sk->sk_err = -err;
++	sk->sk_error_report(sk);
++}
++
+ static int __skb_nsg(struct sk_buff *skb, int offset, int len,
+                      unsigned int recursion_level)
+ {
+@@ -419,7 +428,7 @@ int tls_tx_records(struct sock *sk, int flags)
+ 
+ tx_err:
+ 	if (rc < 0 && rc != -EAGAIN)
+-		tls_err_abort(sk, EBADMSG);
++		tls_err_abort(sk, -EBADMSG);
+ 
+ 	return rc;
+ }
+@@ -450,7 +459,7 @@ static void tls_encrypt_done(struct crypto_async_request *req, int err)
+ 
+ 		/* If err is already set on socket, return the same code */
+ 		if (sk->sk_err) {
+-			ctx->async_wait.err = sk->sk_err;
++			ctx->async_wait.err = -sk->sk_err;
+ 		} else {
+ 			ctx->async_wait.err = err;
+ 			tls_err_abort(sk, err);
+@@ -764,7 +773,7 @@ static int tls_push_record(struct sock *sk, int flags,
+ 			       msg_pl->sg.size + prot->tail_size, i);
+ 	if (rc < 0) {
+ 		if (rc != -EINPROGRESS) {
+-			tls_err_abort(sk, EBADMSG);
++			tls_err_abort(sk, -EBADMSG);
+ 			if (split) {
+ 				tls_ctx->pending_open_record_frags = true;
+ 				tls_merge_open_record(sk, rec, tmp, orig_end);
+@@ -1828,7 +1837,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 		err = decrypt_skb_update(sk, skb, &msg->msg_iter,
+ 					 &chunk, &zc, async_capable);
+ 		if (err < 0 && err != -EINPROGRESS) {
+-			tls_err_abort(sk, EBADMSG);
++			tls_err_abort(sk, -EBADMSG);
+ 			goto recv_end;
+ 		}
+ 
+@@ -2008,7 +2017,7 @@ ssize_t tls_sw_splice_read(struct socket *sock,  loff_t *ppos,
+ 		}
+ 
+ 		if (err < 0) {
+-			tls_err_abort(sk, EBADMSG);
++			tls_err_abort(sk, -EBADMSG);
+ 			goto splice_read_end;
+ 		}
+ 		ctx->decrypted = 1;
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index 240282c083aa7..3f4554723761d 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -501,6 +501,7 @@ use_default_name:
+ 	INIT_WORK(&rdev->propagate_cac_done_wk, cfg80211_propagate_cac_done_wk);
+ 	INIT_WORK(&rdev->mgmt_registrations_update_wk,
+ 		  cfg80211_mgmt_registrations_update_wk);
++	spin_lock_init(&rdev->mgmt_registrations_lock);
+ 
+ #ifdef CONFIG_CFG80211_DEFAULT_PS
+ 	rdev->wiphy.flags |= WIPHY_FLAG_PS_ON_BY_DEFAULT;
+@@ -1256,7 +1257,6 @@ void cfg80211_init_wdev(struct wireless_dev *wdev)
+ 	INIT_LIST_HEAD(&wdev->event_list);
+ 	spin_lock_init(&wdev->event_lock);
+ 	INIT_LIST_HEAD(&wdev->mgmt_registrations);
+-	spin_lock_init(&wdev->mgmt_registrations_lock);
+ 	INIT_LIST_HEAD(&wdev->pmsr_list);
+ 	spin_lock_init(&wdev->pmsr_lock);
+ 	INIT_WORK(&wdev->pmsr_free_wk, cfg80211_pmsr_free_wk);
+diff --git a/net/wireless/core.h b/net/wireless/core.h
+index 7df91f9402124..a3362a32acb32 100644
+--- a/net/wireless/core.h
++++ b/net/wireless/core.h
+@@ -101,6 +101,8 @@ struct cfg80211_registered_device {
+ 	struct work_struct propagate_cac_done_wk;
+ 
+ 	struct work_struct mgmt_registrations_update_wk;
++	/* lock for all wdev lists */
++	spinlock_t mgmt_registrations_lock;
+ 
+ 	/* must be last because of the way we do wiphy_priv(),
+ 	 * and it should at least be aligned to NETDEV_ALIGN */
+diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
+index 0ac820780437d..6dcfc5a348742 100644
+--- a/net/wireless/mlme.c
++++ b/net/wireless/mlme.c
+@@ -448,9 +448,9 @@ static void cfg80211_mgmt_registrations_update(struct wireless_dev *wdev)
+ 
+ 	ASSERT_RTNL();
+ 
+-	spin_lock_bh(&wdev->mgmt_registrations_lock);
++	spin_lock_bh(&rdev->mgmt_registrations_lock);
+ 	if (!wdev->mgmt_registrations_need_update) {
+-		spin_unlock_bh(&wdev->mgmt_registrations_lock);
++		spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 		return;
+ 	}
+ 
+@@ -475,7 +475,7 @@ static void cfg80211_mgmt_registrations_update(struct wireless_dev *wdev)
+ 	rcu_read_unlock();
+ 
+ 	wdev->mgmt_registrations_need_update = 0;
+-	spin_unlock_bh(&wdev->mgmt_registrations_lock);
++	spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	rdev_update_mgmt_frame_registrations(rdev, wdev, &upd);
+ }
+@@ -499,6 +499,7 @@ int cfg80211_mlme_register_mgmt(struct wireless_dev *wdev, u32 snd_portid,
+ 				int match_len, bool multicast_rx,
+ 				struct netlink_ext_ack *extack)
+ {
++	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
+ 	struct cfg80211_mgmt_registration *reg, *nreg;
+ 	int err = 0;
+ 	u16 mgmt_type;
+@@ -544,7 +545,7 @@ int cfg80211_mlme_register_mgmt(struct wireless_dev *wdev, u32 snd_portid,
+ 	if (!nreg)
+ 		return -ENOMEM;
+ 
+-	spin_lock_bh(&wdev->mgmt_registrations_lock);
++	spin_lock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	list_for_each_entry(reg, &wdev->mgmt_registrations, list) {
+ 		int mlen = min(match_len, reg->match_len);
+@@ -579,7 +580,7 @@ int cfg80211_mlme_register_mgmt(struct wireless_dev *wdev, u32 snd_portid,
+ 		list_add(&nreg->list, &wdev->mgmt_registrations);
+ 	}
+ 	wdev->mgmt_registrations_need_update = 1;
+-	spin_unlock_bh(&wdev->mgmt_registrations_lock);
++	spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	cfg80211_mgmt_registrations_update(wdev);
+ 
+@@ -587,7 +588,7 @@ int cfg80211_mlme_register_mgmt(struct wireless_dev *wdev, u32 snd_portid,
+ 
+  out:
+ 	kfree(nreg);
+-	spin_unlock_bh(&wdev->mgmt_registrations_lock);
++	spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	return err;
+ }
+@@ -598,7 +599,7 @@ void cfg80211_mlme_unregister_socket(struct wireless_dev *wdev, u32 nlportid)
+ 	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+ 	struct cfg80211_mgmt_registration *reg, *tmp;
+ 
+-	spin_lock_bh(&wdev->mgmt_registrations_lock);
++	spin_lock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	list_for_each_entry_safe(reg, tmp, &wdev->mgmt_registrations, list) {
+ 		if (reg->nlportid != nlportid)
+@@ -611,7 +612,7 @@ void cfg80211_mlme_unregister_socket(struct wireless_dev *wdev, u32 nlportid)
+ 		schedule_work(&rdev->mgmt_registrations_update_wk);
+ 	}
+ 
+-	spin_unlock_bh(&wdev->mgmt_registrations_lock);
++	spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	if (nlportid && rdev->crit_proto_nlportid == nlportid) {
+ 		rdev->crit_proto_nlportid = 0;
+@@ -624,15 +625,16 @@ void cfg80211_mlme_unregister_socket(struct wireless_dev *wdev, u32 nlportid)
+ 
+ void cfg80211_mlme_purge_registrations(struct wireless_dev *wdev)
+ {
++	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
+ 	struct cfg80211_mgmt_registration *reg, *tmp;
+ 
+-	spin_lock_bh(&wdev->mgmt_registrations_lock);
++	spin_lock_bh(&rdev->mgmt_registrations_lock);
+ 	list_for_each_entry_safe(reg, tmp, &wdev->mgmt_registrations, list) {
+ 		list_del(&reg->list);
+ 		kfree(reg);
+ 	}
+ 	wdev->mgmt_registrations_need_update = 1;
+-	spin_unlock_bh(&wdev->mgmt_registrations_lock);
++	spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	cfg80211_mgmt_registrations_update(wdev);
+ }
+@@ -780,7 +782,7 @@ bool cfg80211_rx_mgmt_khz(struct wireless_dev *wdev, int freq, int sig_dbm,
+ 	data = buf + ieee80211_hdrlen(mgmt->frame_control);
+ 	data_len = len - ieee80211_hdrlen(mgmt->frame_control);
+ 
+-	spin_lock_bh(&wdev->mgmt_registrations_lock);
++	spin_lock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	list_for_each_entry(reg, &wdev->mgmt_registrations, list) {
+ 		if (reg->frame_type != ftype)
+@@ -804,7 +806,7 @@ bool cfg80211_rx_mgmt_khz(struct wireless_dev *wdev, int freq, int sig_dbm,
+ 		break;
+ 	}
+ 
+-	spin_unlock_bh(&wdev->mgmt_registrations_lock);
++	spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	trace_cfg80211_return_bool(result);
+ 	return result;
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index fab1f0d504036..fd614a5a00b42 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -418,14 +418,17 @@ cfg80211_add_nontrans_list(struct cfg80211_bss *trans_bss,
+ 	}
+ 	ssid_len = ssid[1];
+ 	ssid = ssid + 2;
+-	rcu_read_unlock();
+ 
+ 	/* check if nontrans_bss is in the list */
+ 	list_for_each_entry(bss, &trans_bss->nontrans_list, nontrans_list) {
+-		if (is_bss(bss, nontrans_bss->bssid, ssid, ssid_len))
++		if (is_bss(bss, nontrans_bss->bssid, ssid, ssid_len)) {
++			rcu_read_unlock();
+ 			return 0;
++		}
+ 	}
+ 
++	rcu_read_unlock();
++
+ 	/* add to the list */
+ 	list_add_tail(&nontrans_bss->nontrans_list, &trans_bss->nontrans_list);
+ 	return 0;
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 4fb8d1b14e76a..3f8c46bb6d9a4 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1028,14 +1028,14 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
+ 	    !(rdev->wiphy.interface_modes & (1 << ntype)))
+ 		return -EOPNOTSUPP;
+ 
+-	/* if it's part of a bridge, reject changing type to station/ibss */
+-	if (netif_is_bridge_port(dev) &&
+-	    (ntype == NL80211_IFTYPE_ADHOC ||
+-	     ntype == NL80211_IFTYPE_STATION ||
+-	     ntype == NL80211_IFTYPE_P2P_CLIENT))
+-		return -EBUSY;
+-
+ 	if (ntype != otype) {
++		/* if it's part of a bridge, reject changing type to station/ibss */
++		if (netif_is_bridge_port(dev) &&
++		    (ntype == NL80211_IFTYPE_ADHOC ||
++		     ntype == NL80211_IFTYPE_STATION ||
++		     ntype == NL80211_IFTYPE_P2P_CLIENT))
++			return -EBUSY;
++
+ 		dev->ieee80211_ptr->use_4addr = false;
+ 		dev->ieee80211_ptr->mesh_id_up_len = 0;
+ 		wdev_lock(dev->ieee80211_ptr);
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index 2bb159c105035..1d727387cb205 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -3820,11 +3820,15 @@ int cmd_script(int argc, const char **argv)
+ 		goto out_delete;
+ 
+ 	uname(&uts);
+-	if (data.is_pipe ||  /* assume pipe_mode indicates native_arch */
+-	    !strcmp(uts.machine, session->header.env.arch) ||
+-	    (!strcmp(uts.machine, "x86_64") &&
+-	     !strcmp(session->header.env.arch, "i386")))
++	if (data.is_pipe) { /* Assume pipe_mode indicates native_arch */
+ 		native_arch = true;
++	} else if (session->header.env.arch) {
++		if (!strcmp(uts.machine, session->header.env.arch))
++			native_arch = true;
++		else if (!strcmp(uts.machine, "x86_64") &&
++			 !strcmp(session->header.env.arch, "i386"))
++			native_arch = true;
++	}
+ 
+ 	script.session = session;
+ 	script__setup_sample_type(&script);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-11-06 13:36 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-11-06 13:36 UTC (permalink / raw
  To: gentoo-commits

commit:     3b4e7ee3ecafc67a33779255317083543e571448
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Nov  6 13:36:36 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Nov  6 13:36:36 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3b4e7ee3

Linux patch 5.10.78

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1077_linux-5.10.78.patch | 465 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 469 insertions(+)

diff --git a/0000_README b/0000_README
index 40e234a..cb41873 100644
--- a/0000_README
+++ b/0000_README
@@ -351,6 +351,10 @@ Patch:  1076_linux-5.10.77.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.77
 
+Patch:  1077_linux-5.10.78.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.78
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1077_linux-5.10.78.patch b/1077_linux-5.10.78.patch
new file mode 100644
index 0000000..77ec5e8
--- /dev/null
+++ b/1077_linux-5.10.78.patch
@@ -0,0 +1,465 @@
+diff --git a/Makefile b/Makefile
+index a58f49e415dc6..288fb48392538 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 77
++SUBLEVEL = 78
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/drivers/amba/bus.c b/drivers/amba/bus.c
+index b5f5ca4e3f343..8f4ae6e967e39 100644
+--- a/drivers/amba/bus.c
++++ b/drivers/amba/bus.c
+@@ -375,9 +375,6 @@ static int amba_device_try_add(struct amba_device *dev, struct resource *parent)
+ 	void __iomem *tmp;
+ 	int i, ret;
+ 
+-	WARN_ON(dev->irq[0] == (unsigned int)-1);
+-	WARN_ON(dev->irq[1] == (unsigned int)-1);
+-
+ 	ret = request_resource(parent, &dev->res);
+ 	if (ret)
+ 		goto err_out;
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
+index 8fba425a76268..fb2a25f8408fc 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
+@@ -322,7 +322,6 @@ static void ttm_transfered_destroy(struct ttm_buffer_object *bo)
+ 	struct ttm_transfer_obj *fbo;
+ 
+ 	fbo = container_of(bo, struct ttm_transfer_obj, base);
+-	dma_resv_fini(&fbo->base.base._resv);
+ 	ttm_bo_put(fbo->bo);
+ 	kfree(fbo);
+ }
+diff --git a/drivers/media/firewire/firedtv-avc.c b/drivers/media/firewire/firedtv-avc.c
+index 2bf9467b917d1..71991f8638e6b 100644
+--- a/drivers/media/firewire/firedtv-avc.c
++++ b/drivers/media/firewire/firedtv-avc.c
+@@ -1165,7 +1165,11 @@ int avc_ca_pmt(struct firedtv *fdtv, char *msg, int length)
+ 		read_pos += program_info_length;
+ 		write_pos += program_info_length;
+ 	}
+-	while (read_pos < length) {
++	while (read_pos + 4 < length) {
++		if (write_pos + 4 >= sizeof(c->operand) - 4) {
++			ret = -EINVAL;
++			goto out;
++		}
+ 		c->operand[write_pos++] = msg[read_pos++];
+ 		c->operand[write_pos++] = msg[read_pos++];
+ 		c->operand[write_pos++] = msg[read_pos++];
+@@ -1177,13 +1181,17 @@ int avc_ca_pmt(struct firedtv *fdtv, char *msg, int length)
+ 		c->operand[write_pos++] = es_info_length >> 8;
+ 		c->operand[write_pos++] = es_info_length & 0xff;
+ 		if (es_info_length > 0) {
++			if (read_pos >= length) {
++				ret = -EINVAL;
++				goto out;
++			}
+ 			pmt_cmd_id = msg[read_pos++];
+ 			if (pmt_cmd_id != 1 && pmt_cmd_id != 4)
+ 				dev_err(fdtv->device, "invalid pmt_cmd_id %d at stream level\n",
+ 					pmt_cmd_id);
+ 
+-			if (es_info_length > sizeof(c->operand) - 4 -
+-					     write_pos) {
++			if (es_info_length > sizeof(c->operand) - 4 - write_pos ||
++			    es_info_length > length - read_pos) {
+ 				ret = -EINVAL;
+ 				goto out;
+ 			}
+diff --git a/drivers/media/firewire/firedtv-ci.c b/drivers/media/firewire/firedtv-ci.c
+index 9363d005e2b61..e0d57e09dab0c 100644
+--- a/drivers/media/firewire/firedtv-ci.c
++++ b/drivers/media/firewire/firedtv-ci.c
+@@ -134,6 +134,8 @@ static int fdtv_ca_pmt(struct firedtv *fdtv, void *arg)
+ 	} else {
+ 		data_length = msg->msg[3];
+ 	}
++	if (data_length > sizeof(msg->msg) - data_pos)
++		return -EINVAL;
+ 
+ 	return avc_ca_pmt(fdtv, &msg->msg[data_pos], data_length);
+ }
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index e14dfaafe4391..3eea8cf076c48 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -1963,13 +1963,13 @@ static int lan743x_rx_next_index(struct lan743x_rx *rx, int index)
+ 	return ((++index) % rx->ring_size);
+ }
+ 
+-static struct sk_buff *lan743x_rx_allocate_skb(struct lan743x_rx *rx)
++static struct sk_buff *lan743x_rx_allocate_skb(struct lan743x_rx *rx, gfp_t gfp)
+ {
+ 	int length = 0;
+ 
+ 	length = (LAN743X_MAX_FRAME_SIZE + ETH_HLEN + 4 + RX_HEAD_PADDING);
+ 	return __netdev_alloc_skb(rx->adapter->netdev,
+-				  length, GFP_ATOMIC | GFP_DMA);
++				  length, gfp);
+ }
+ 
+ static void lan743x_rx_update_tail(struct lan743x_rx *rx, int index)
+@@ -2141,7 +2141,8 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ 			struct sk_buff *new_skb = NULL;
+ 			int packet_length;
+ 
+-			new_skb = lan743x_rx_allocate_skb(rx);
++			new_skb = lan743x_rx_allocate_skb(rx,
++							  GFP_ATOMIC | GFP_DMA);
+ 			if (!new_skb) {
+ 				/* failed to allocate next skb.
+ 				 * Memory is very low.
+@@ -2377,7 +2378,8 @@ static int lan743x_rx_ring_init(struct lan743x_rx *rx)
+ 
+ 	rx->last_head = 0;
+ 	for (index = 0; index < rx->ring_size; index++) {
+-		struct sk_buff *new_skb = lan743x_rx_allocate_skb(rx);
++		struct sk_buff *new_skb = lan743x_rx_allocate_skb(rx,
++								   GFP_KERNEL);
+ 
+ 		ret = lan743x_rx_init_ring_element(rx, index, new_skb);
+ 		if (ret)
+diff --git a/drivers/net/ethernet/sfc/ethtool_common.c b/drivers/net/ethernet/sfc/ethtool_common.c
+index bf1443539a1a4..bd552c7dffcb1 100644
+--- a/drivers/net/ethernet/sfc/ethtool_common.c
++++ b/drivers/net/ethernet/sfc/ethtool_common.c
+@@ -563,20 +563,14 @@ int efx_ethtool_get_link_ksettings(struct net_device *net_dev,
+ {
+ 	struct efx_nic *efx = netdev_priv(net_dev);
+ 	struct efx_link_state *link_state = &efx->link_state;
+-	u32 supported;
+ 
+ 	mutex_lock(&efx->mac_lock);
+ 	efx_mcdi_phy_get_link_ksettings(efx, cmd);
+ 	mutex_unlock(&efx->mac_lock);
+ 
+ 	/* Both MACs support pause frames (bidirectional and respond-only) */
+-	ethtool_convert_link_mode_to_legacy_u32(&supported,
+-						cmd->link_modes.supported);
+-
+-	supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
+-
+-	ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported,
+-						supported);
++	ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
++	ethtool_link_ksettings_add_link_mode(cmd, supported, Asym_Pause);
+ 
+ 	if (LOOPBACK_INTERNAL(efx)) {
+ 		cmd->base.speed = link_state->speed;
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index d406da82b4fb8..2746f77745e4d 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1313,8 +1313,6 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ 	bool need_strict = rt6_need_strict(&ipv6_hdr(skb)->daddr);
+ 	bool is_ndisc = ipv6_ndisc_frame(skb);
+ 
+-	nf_reset_ct(skb);
+-
+ 	/* loopback, multicast & non-ND link-local traffic; do not push through
+ 	 * packet taps again. Reset pkt_type for upper layers to process skb.
+ 	 * For strict packets with a source LLA, determine the dst using the
+@@ -1371,8 +1369,6 @@ static struct sk_buff *vrf_ip_rcv(struct net_device *vrf_dev,
+ 	skb->skb_iif = vrf_dev->ifindex;
+ 	IPCB(skb)->flags |= IPSKB_L3SLAVE;
+ 
+-	nf_reset_ct(skb);
+-
+ 	if (ipv4_is_multicast(ip_hdr(skb)->daddr))
+ 		goto out;
+ 
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index 6bed619535427..43be20baab354 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -601,15 +601,6 @@ static int wcn36xx_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 				}
+ 			}
+ 		}
+-		/* FIXME: Only enable bmps support when encryption is enabled.
+-		 * For any reasons, when connected to open/no-security BSS,
+-		 * the wcn36xx controller in bmps mode does not forward
+-		 * 'wake-up' beacons despite AP sends DTIM with station AID.
+-		 * It could be due to a firmware issue or to the way driver
+-		 * configure the station.
+-		 */
+-		if (vif->type == NL80211_IFTYPE_STATION)
+-			vif_priv->allow_bmps = true;
+ 		break;
+ 	case DISABLE_KEY:
+ 		if (!(IEEE80211_KEY_FLAG_PAIRWISE & key_conf->flags)) {
+@@ -909,7 +900,6 @@ static void wcn36xx_bss_info_changed(struct ieee80211_hw *hw,
+ 				    vif->addr,
+ 				    bss_conf->aid);
+ 			vif_priv->sta_assoc = false;
+-			vif_priv->allow_bmps = false;
+ 			wcn36xx_smd_set_link_st(wcn,
+ 						bss_conf->bssid,
+ 						vif->addr,
+diff --git a/drivers/net/wireless/ath/wcn36xx/pmc.c b/drivers/net/wireless/ath/wcn36xx/pmc.c
+index 2d0780fefd477..2936aaf532738 100644
+--- a/drivers/net/wireless/ath/wcn36xx/pmc.c
++++ b/drivers/net/wireless/ath/wcn36xx/pmc.c
+@@ -23,10 +23,7 @@ int wcn36xx_pmc_enter_bmps_state(struct wcn36xx *wcn,
+ {
+ 	int ret = 0;
+ 	struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
+-
+-	if (!vif_priv->allow_bmps)
+-		return -ENOTSUPP;
+-
++	/* TODO: Make sure the TX chain clean */
+ 	ret = wcn36xx_smd_enter_bmps(wcn, vif);
+ 	if (!ret) {
+ 		wcn36xx_dbg(WCN36XX_DBG_PMC, "Entered BMPS\n");
+diff --git a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+index d0fcce86903ae..9b4dee2fc6483 100644
+--- a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
++++ b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+@@ -127,7 +127,6 @@ struct wcn36xx_vif {
+ 	enum wcn36xx_hal_bss_type bss_type;
+ 
+ 	/* Power management */
+-	bool allow_bmps;
+ 	enum wcn36xx_power_state pw_state;
+ 
+ 	u8 bss_index;
+diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
+index 24619c3bebd52..6ad834d61d4c7 100644
+--- a/drivers/scsi/scsi.c
++++ b/drivers/scsi/scsi.c
+@@ -545,8 +545,10 @@ EXPORT_SYMBOL(scsi_device_get);
+  */
+ void scsi_device_put(struct scsi_device *sdev)
+ {
+-	module_put(sdev->host->hostt->module);
++	struct module *mod = sdev->host->hostt->module;
++
+ 	put_device(&sdev->sdev_gendev);
++	module_put(mod);
+ }
+ EXPORT_SYMBOL(scsi_device_put);
+ 
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 8173b67ec7b0f..1378bb1a7371c 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -450,9 +450,12 @@ static void scsi_device_dev_release_usercontext(struct work_struct *work)
+ 	struct scsi_vpd *vpd_pg80 = NULL, *vpd_pg83 = NULL;
+ 	struct scsi_vpd *vpd_pg0 = NULL, *vpd_pg89 = NULL;
+ 	unsigned long flags;
++	struct module *mod;
+ 
+ 	sdev = container_of(work, struct scsi_device, ew.work);
+ 
++	mod = sdev->host->hostt->module;
++
+ 	scsi_dh_release_device(sdev);
+ 
+ 	parent = sdev->sdev_gendev.parent;
+@@ -501,11 +504,17 @@ static void scsi_device_dev_release_usercontext(struct work_struct *work)
+ 
+ 	if (parent)
+ 		put_device(parent);
++	module_put(mod);
+ }
+ 
+ static void scsi_device_dev_release(struct device *dev)
+ {
+ 	struct scsi_device *sdp = to_scsi_device(dev);
++
++	/* Set module pointer as NULL in case of module unloading */
++	if (!try_module_get(sdp->host->hostt->module))
++		sdp->host->hostt->module = NULL;
++
+ 	execute_in_process_context(scsi_device_dev_release_usercontext,
+ 				   &sdp->ew);
+ }
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 4bbf3316a9a53..99908d8d2dd36 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -2640,7 +2640,6 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ {
+ 	int retval;
+ 	struct usb_device *rhdev;
+-	struct usb_hcd *shared_hcd;
+ 
+ 	if (!hcd->skip_phy_initialization && usb_hcd_is_primary_hcd(hcd)) {
+ 		hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev);
+@@ -2797,26 +2796,13 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ 		goto err_hcd_driver_start;
+ 	}
+ 
+-	/* starting here, usbcore will pay attention to the shared HCD roothub */
+-	shared_hcd = hcd->shared_hcd;
+-	if (!usb_hcd_is_primary_hcd(hcd) && shared_hcd && HCD_DEFER_RH_REGISTER(shared_hcd)) {
+-		retval = register_root_hub(shared_hcd);
+-		if (retval != 0)
+-			goto err_register_root_hub;
+-
+-		if (shared_hcd->uses_new_polling && HCD_POLL_RH(shared_hcd))
+-			usb_hcd_poll_rh_status(shared_hcd);
+-	}
+-
+ 	/* starting here, usbcore will pay attention to this root hub */
+-	if (!HCD_DEFER_RH_REGISTER(hcd)) {
+-		retval = register_root_hub(hcd);
+-		if (retval != 0)
+-			goto err_register_root_hub;
++	retval = register_root_hub(hcd);
++	if (retval != 0)
++		goto err_register_root_hub;
+ 
+-		if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
+-			usb_hcd_poll_rh_status(hcd);
+-	}
++	if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
++		usb_hcd_poll_rh_status(hcd);
+ 
+ 	return retval;
+ 
+@@ -2859,7 +2845,6 @@ EXPORT_SYMBOL_GPL(usb_add_hcd);
+ void usb_remove_hcd(struct usb_hcd *hcd)
+ {
+ 	struct usb_device *rhdev = hcd->self.root_hub;
+-	bool rh_registered;
+ 
+ 	dev_info(hcd->self.controller, "remove, state %x\n", hcd->state);
+ 
+@@ -2870,7 +2855,6 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ 
+ 	dev_dbg(hcd->self.controller, "roothub graceful disconnect\n");
+ 	spin_lock_irq (&hcd_root_hub_lock);
+-	rh_registered = hcd->rh_registered;
+ 	hcd->rh_registered = 0;
+ 	spin_unlock_irq (&hcd_root_hub_lock);
+ 
+@@ -2880,8 +2864,7 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ 	cancel_work_sync(&hcd->died_work);
+ 
+ 	mutex_lock(&usb_bus_idr_lock);
+-	if (rh_registered)
+-		usb_disconnect(&rhdev);		/* Sets rhdev to NULL */
++	usb_disconnect(&rhdev);		/* Sets rhdev to NULL */
+ 	mutex_unlock(&usb_bus_idr_lock);
+ 
+ 	/*
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 0d6dc2e20f2aa..bf42ba3e4415a 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -693,7 +693,6 @@ int xhci_run(struct usb_hcd *hcd)
+ 		if (ret)
+ 			xhci_free_command(xhci, command);
+ 	}
+-	set_bit(HCD_FLAG_DEFER_RH_REGISTER, &hcd->flags);
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+ 			"Finished xhci_run for USB2 roothub");
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index ee7ceea899346..104dff9c71314 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -8759,9 +8759,10 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+ 	io_cancel_defer_files(ctx, task, files);
+ 	io_cqring_overflow_flush(ctx, true, task, files);
+ 
+-	io_uring_cancel_files(ctx, task, files);
+ 	if (!files)
+ 		__io_uring_cancel_task_requests(ctx, task);
++	else
++		io_uring_cancel_files(ctx, task, files);
+ 
+ 	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
+ 		atomic_dec(&task->io_uring->in_idle);
+diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
+index 9f05016d823f8..3dbb42c637c14 100644
+--- a/include/linux/usb/hcd.h
++++ b/include/linux/usb/hcd.h
+@@ -124,7 +124,6 @@ struct usb_hcd {
+ #define HCD_FLAG_RH_RUNNING		5	/* root hub is running? */
+ #define HCD_FLAG_DEAD			6	/* controller has died? */
+ #define HCD_FLAG_INTF_AUTHORIZED	7	/* authorize interfaces? */
+-#define HCD_FLAG_DEFER_RH_REGISTER	8	/* Defer roothub registration */
+ 
+ 	/* The flags can be tested using these macros; they are likely to
+ 	 * be slightly faster than test_bit().
+@@ -135,7 +134,6 @@ struct usb_hcd {
+ #define HCD_WAKEUP_PENDING(hcd)	((hcd)->flags & (1U << HCD_FLAG_WAKEUP_PENDING))
+ #define HCD_RH_RUNNING(hcd)	((hcd)->flags & (1U << HCD_FLAG_RH_RUNNING))
+ #define HCD_DEAD(hcd)		((hcd)->flags & (1U << HCD_FLAG_DEAD))
+-#define HCD_DEFER_RH_REGISTER(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEFER_RH_REGISTER))
+ 
+ 	/*
+ 	 * Specifies if interfaces are authorized by default
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index ff389d970b683..969e57dde65f9 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -443,21 +443,24 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
+ 	if (!transhuge_vma_enabled(vma, vm_flags))
+ 		return false;
+ 
++	if (vma->vm_file && !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) -
++				vma->vm_pgoff, HPAGE_PMD_NR))
++		return false;
++
+ 	/* Enabled via shmem mount options or sysfs settings. */
+-	if (shmem_file(vma->vm_file) && shmem_huge_enabled(vma)) {
+-		return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
+-				HPAGE_PMD_NR);
+-	}
++	if (shmem_file(vma->vm_file))
++		return shmem_huge_enabled(vma);
+ 
+ 	/* THP settings require madvise. */
+ 	if (!(vm_flags & VM_HUGEPAGE) && !khugepaged_always())
+ 		return false;
+ 
+-	/* Read-only file mappings need to be aligned for THP to work. */
++	/* Only regular file is valid */
+ 	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && vma->vm_file &&
+ 	    (vm_flags & VM_DENYWRITE)) {
+-		return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
+-				HPAGE_PMD_NR);
++		struct inode *inode = vma->vm_file->f_inode;
++
++		return S_ISREG(inode->i_mode);
+ 	}
+ 
+ 	if (!vma->anon_vma || vma->vm_ops)
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index c5794e83fd800..8f6823df944ff 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -528,6 +528,10 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.id = USB_ID(0x2573, 0x0008),
+ 		.map = maya44_map,
+ 	},
++	{
++		.id = USB_ID(0x2708, 0x0002), /* Audient iD14 */
++		.ignore_ctl_error = 1,
++	},
+ 	{
+ 		/* KEF X300A */
+ 		.id = USB_ID(0x27ac, 0x1000),
+@@ -538,6 +542,10 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.id = USB_ID(0x25c4, 0x0003),
+ 		.map = scms_usb3318_map,
+ 	},
++	{
++		.id = USB_ID(0x30be, 0x0101), /*  Schiit Hel */
++		.ignore_ctl_error = 1,
++	},
+ 	{
+ 		/* Bose Companion 5 */
+ 		.id = USB_ID(0x05a7, 0x1020),


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-11-12 14:18 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-11-12 14:18 UTC (permalink / raw
  To: gentoo-commits

commit:     f7fba2ee4a4a4b5fb5140ceeac559f7c20e2665c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 12 14:17:52 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 12 14:17:52 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f7fba2ee

Linux patch 5.10.79

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1078_linux-5.10.79.patch | 712 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 716 insertions(+)

diff --git a/0000_README b/0000_README
index cb41873c..5066489e 100644
--- a/0000_README
+++ b/0000_README
@@ -355,6 +355,10 @@ Patch:  1077_linux-5.10.78.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.78
 
+Patch:  1078_linux-5.10.79.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.79
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1078_linux-5.10.79.patch b/1078_linux-5.10.79.patch
new file mode 100644
index 00000000..13278012
--- /dev/null
+++ b/1078_linux-5.10.79.patch
@@ -0,0 +1,712 @@
+diff --git a/Makefile b/Makefile
+index 288fb48392538..756479b101f80 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 78
++SUBLEVEL = 79
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
+index 8c065da73f8e5..4e0f52660842b 100644
+--- a/arch/x86/kvm/ioapic.c
++++ b/arch/x86/kvm/ioapic.c
+@@ -96,7 +96,7 @@ static unsigned long ioapic_read_indirect(struct kvm_ioapic *ioapic,
+ static void rtc_irq_eoi_tracking_reset(struct kvm_ioapic *ioapic)
+ {
+ 	ioapic->rtc_status.pending_eoi = 0;
+-	bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID + 1);
++	bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID);
+ }
+ 
+ static void kvm_rtc_eoi_tracking_restore_all(struct kvm_ioapic *ioapic);
+diff --git a/arch/x86/kvm/ioapic.h b/arch/x86/kvm/ioapic.h
+index 11e4065e16176..660401700075d 100644
+--- a/arch/x86/kvm/ioapic.h
++++ b/arch/x86/kvm/ioapic.h
+@@ -43,13 +43,13 @@ struct kvm_vcpu;
+ 
+ struct dest_map {
+ 	/* vcpu bitmap where IRQ has been sent */
+-	DECLARE_BITMAP(map, KVM_MAX_VCPU_ID + 1);
++	DECLARE_BITMAP(map, KVM_MAX_VCPU_ID);
+ 
+ 	/*
+ 	 * Vector sent to a given vcpu, only valid when
+ 	 * the vcpu's bit in map is set
+ 	 */
+-	u8 vectors[KVM_MAX_VCPU_ID + 1];
++	u8 vectors[KVM_MAX_VCPU_ID];
+ };
+ 
+ 
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 060d9a906535c..770d18dc46509 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -3545,7 +3545,7 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
+ 		 * reserved bit and EPT's invalid memtype/XWR checks to avoid
+ 		 * adding a Jcc in the loop.
+ 		 */
+-		reserved |= __is_bad_mt_xwr(rsvd_check, sptes[level - 1]) |
++		reserved |= __is_bad_mt_xwr(rsvd_check, sptes[level - 1]) ||
+ 			    __is_rsvd_bits_set(rsvd_check, sptes[level - 1],
+ 					       level);
+ 	}
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 65b22b5af51ac..d9977ce0be76d 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -2254,7 +2254,7 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 		binder_dec_node(buffer->target_node, 1, 0);
+ 
+ 	off_start_offset = ALIGN(buffer->data_size, sizeof(void *));
+-	off_end_offset = is_failure ? failed_at :
++	off_end_offset = is_failure && failed_at ? failed_at :
+ 				off_start_offset + buffer->offsets_size;
+ 	for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
+ 	     buffer_offset += sizeof(binder_size_t)) {
+@@ -2340,9 +2340,8 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 			binder_size_t fd_buf_size;
+ 			binder_size_t num_valid;
+ 
+-			if (proc->tsk != current->group_leader) {
++			if (is_failure) {
+ 				/*
+-				 * Nothing to do if running in sender context
+ 				 * The fd fixups have not been applied so no
+ 				 * fds need to be closed.
+ 				 */
+@@ -3544,6 +3543,7 @@ err_invalid_target_handle:
+  * binder_free_buf() - free the specified buffer
+  * @proc:	binder proc that owns buffer
+  * @buffer:	buffer to be freed
++ * @is_failure:	failed to send transaction
+  *
+  * If buffer for an async transaction, enqueue the next async
+  * transaction from the node.
+@@ -3553,7 +3553,7 @@ err_invalid_target_handle:
+ static void
+ binder_free_buf(struct binder_proc *proc,
+ 		struct binder_thread *thread,
+-		struct binder_buffer *buffer)
++		struct binder_buffer *buffer, bool is_failure)
+ {
+ 	binder_inner_proc_lock(proc);
+ 	if (buffer->transaction) {
+@@ -3581,7 +3581,7 @@ binder_free_buf(struct binder_proc *proc,
+ 		binder_node_inner_unlock(buf_node);
+ 	}
+ 	trace_binder_transaction_buffer_release(buffer);
+-	binder_transaction_buffer_release(proc, thread, buffer, 0, false);
++	binder_transaction_buffer_release(proc, thread, buffer, 0, is_failure);
+ 	binder_alloc_free_buf(&proc->alloc, buffer);
+ }
+ 
+@@ -3782,7 +3782,7 @@ static int binder_thread_write(struct binder_proc *proc,
+ 				     proc->pid, thread->pid, (u64)data_ptr,
+ 				     buffer->debug_id,
+ 				     buffer->transaction ? "active" : "finished");
+-			binder_free_buf(proc, thread, buffer);
++			binder_free_buf(proc, thread, buffer, false);
+ 			break;
+ 		}
+ 
+@@ -4470,7 +4470,7 @@ retry:
+ 			buffer->transaction = NULL;
+ 			binder_cleanup_transaction(t, "fd fixups failed",
+ 						   BR_FAILED_REPLY);
+-			binder_free_buf(proc, thread, buffer);
++			binder_free_buf(proc, thread, buffer, true);
+ 			binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
+ 				     "%d:%d %stransaction %d fd fixups failed %d/%d, line %d\n",
+ 				     proc->pid, thread->pid,
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index 3b13de59605e1..983045ad79dcf 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -61,7 +61,7 @@ static int rsi_usb_card_write(struct rsi_hw *adapter,
+ 			      (void *)seg,
+ 			      (int)len,
+ 			      &transfer,
+-			      HZ * 5);
++			      USB_CTRL_SET_TIMEOUT);
+ 
+ 	if (status < 0) {
+ 		rsi_dbg(ERR_ZONE,
+diff --git a/drivers/staging/comedi/drivers/dt9812.c b/drivers/staging/comedi/drivers/dt9812.c
+index 634f57730c1e0..704b04d2980d3 100644
+--- a/drivers/staging/comedi/drivers/dt9812.c
++++ b/drivers/staging/comedi/drivers/dt9812.c
+@@ -32,6 +32,7 @@
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/errno.h>
++#include <linux/slab.h>
+ #include <linux/uaccess.h>
+ 
+ #include "../comedi_usb.h"
+@@ -237,22 +238,42 @@ static int dt9812_read_info(struct comedi_device *dev,
+ {
+ 	struct usb_device *usb = comedi_to_usb_dev(dev);
+ 	struct dt9812_private *devpriv = dev->private;
+-	struct dt9812_usb_cmd cmd;
++	struct dt9812_usb_cmd *cmd;
++	size_t tbuf_size;
+ 	int count, ret;
++	void *tbuf;
+ 
+-	cmd.cmd = cpu_to_le32(DT9812_R_FLASH_DATA);
+-	cmd.u.flash_data_info.address =
++	tbuf_size = max(sizeof(*cmd), buf_size);
++
++	tbuf = kzalloc(tbuf_size, GFP_KERNEL);
++	if (!tbuf)
++		return -ENOMEM;
++
++	cmd = tbuf;
++
++	cmd->cmd = cpu_to_le32(DT9812_R_FLASH_DATA);
++	cmd->u.flash_data_info.address =
+ 	    cpu_to_le16(DT9812_DIAGS_BOARD_INFO_ADDR + offset);
+-	cmd.u.flash_data_info.numbytes = cpu_to_le16(buf_size);
++	cmd->u.flash_data_info.numbytes = cpu_to_le16(buf_size);
+ 
+ 	/* DT9812 only responds to 32 byte writes!! */
+ 	ret = usb_bulk_msg(usb, usb_sndbulkpipe(usb, devpriv->cmd_wr.addr),
+-			   &cmd, 32, &count, DT9812_USB_TIMEOUT);
++			   cmd, sizeof(*cmd), &count, DT9812_USB_TIMEOUT);
+ 	if (ret)
+-		return ret;
++		goto out;
++
++	ret = usb_bulk_msg(usb, usb_rcvbulkpipe(usb, devpriv->cmd_rd.addr),
++			   tbuf, buf_size, &count, DT9812_USB_TIMEOUT);
++	if (!ret) {
++		if (count == buf_size)
++			memcpy(buf, tbuf, buf_size);
++		else
++			ret = -EREMOTEIO;
++	}
++out:
++	kfree(tbuf);
+ 
+-	return usb_bulk_msg(usb, usb_rcvbulkpipe(usb, devpriv->cmd_rd.addr),
+-			    buf, buf_size, &count, DT9812_USB_TIMEOUT);
++	return ret;
+ }
+ 
+ static int dt9812_read_multiple_registers(struct comedi_device *dev,
+@@ -261,22 +282,42 @@ static int dt9812_read_multiple_registers(struct comedi_device *dev,
+ {
+ 	struct usb_device *usb = comedi_to_usb_dev(dev);
+ 	struct dt9812_private *devpriv = dev->private;
+-	struct dt9812_usb_cmd cmd;
++	struct dt9812_usb_cmd *cmd;
+ 	int i, count, ret;
++	size_t buf_size;
++	void *buf;
+ 
+-	cmd.cmd = cpu_to_le32(DT9812_R_MULTI_BYTE_REG);
+-	cmd.u.read_multi_info.count = reg_count;
++	buf_size = max_t(size_t, sizeof(*cmd), reg_count);
++
++	buf = kzalloc(buf_size, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	cmd = buf;
++
++	cmd->cmd = cpu_to_le32(DT9812_R_MULTI_BYTE_REG);
++	cmd->u.read_multi_info.count = reg_count;
+ 	for (i = 0; i < reg_count; i++)
+-		cmd.u.read_multi_info.address[i] = address[i];
++		cmd->u.read_multi_info.address[i] = address[i];
+ 
+ 	/* DT9812 only responds to 32 byte writes!! */
+ 	ret = usb_bulk_msg(usb, usb_sndbulkpipe(usb, devpriv->cmd_wr.addr),
+-			   &cmd, 32, &count, DT9812_USB_TIMEOUT);
++			   cmd, sizeof(*cmd), &count, DT9812_USB_TIMEOUT);
+ 	if (ret)
+-		return ret;
++		goto out;
++
++	ret = usb_bulk_msg(usb, usb_rcvbulkpipe(usb, devpriv->cmd_rd.addr),
++			   buf, reg_count, &count, DT9812_USB_TIMEOUT);
++	if (!ret) {
++		if (count == reg_count)
++			memcpy(value, buf, reg_count);
++		else
++			ret = -EREMOTEIO;
++	}
++out:
++	kfree(buf);
+ 
+-	return usb_bulk_msg(usb, usb_rcvbulkpipe(usb, devpriv->cmd_rd.addr),
+-			    value, reg_count, &count, DT9812_USB_TIMEOUT);
++	return ret;
+ }
+ 
+ static int dt9812_write_multiple_registers(struct comedi_device *dev,
+@@ -285,19 +326,27 @@ static int dt9812_write_multiple_registers(struct comedi_device *dev,
+ {
+ 	struct usb_device *usb = comedi_to_usb_dev(dev);
+ 	struct dt9812_private *devpriv = dev->private;
+-	struct dt9812_usb_cmd cmd;
++	struct dt9812_usb_cmd *cmd;
+ 	int i, count;
++	int ret;
++
++	cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
++	if (!cmd)
++		return -ENOMEM;
+ 
+-	cmd.cmd = cpu_to_le32(DT9812_W_MULTI_BYTE_REG);
+-	cmd.u.read_multi_info.count = reg_count;
++	cmd->cmd = cpu_to_le32(DT9812_W_MULTI_BYTE_REG);
++	cmd->u.read_multi_info.count = reg_count;
+ 	for (i = 0; i < reg_count; i++) {
+-		cmd.u.write_multi_info.write[i].address = address[i];
+-		cmd.u.write_multi_info.write[i].value = value[i];
++		cmd->u.write_multi_info.write[i].address = address[i];
++		cmd->u.write_multi_info.write[i].value = value[i];
+ 	}
+ 
+ 	/* DT9812 only responds to 32 byte writes!! */
+-	return usb_bulk_msg(usb, usb_sndbulkpipe(usb, devpriv->cmd_wr.addr),
+-			    &cmd, 32, &count, DT9812_USB_TIMEOUT);
++	ret = usb_bulk_msg(usb, usb_sndbulkpipe(usb, devpriv->cmd_wr.addr),
++			   cmd, sizeof(*cmd), &count, DT9812_USB_TIMEOUT);
++	kfree(cmd);
++
++	return ret;
+ }
+ 
+ static int dt9812_rmw_multiple_registers(struct comedi_device *dev,
+@@ -306,17 +355,25 @@ static int dt9812_rmw_multiple_registers(struct comedi_device *dev,
+ {
+ 	struct usb_device *usb = comedi_to_usb_dev(dev);
+ 	struct dt9812_private *devpriv = dev->private;
+-	struct dt9812_usb_cmd cmd;
++	struct dt9812_usb_cmd *cmd;
+ 	int i, count;
++	int ret;
++
++	cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
++	if (!cmd)
++		return -ENOMEM;
+ 
+-	cmd.cmd = cpu_to_le32(DT9812_RMW_MULTI_BYTE_REG);
+-	cmd.u.rmw_multi_info.count = reg_count;
++	cmd->cmd = cpu_to_le32(DT9812_RMW_MULTI_BYTE_REG);
++	cmd->u.rmw_multi_info.count = reg_count;
+ 	for (i = 0; i < reg_count; i++)
+-		cmd.u.rmw_multi_info.rmw[i] = rmw[i];
++		cmd->u.rmw_multi_info.rmw[i] = rmw[i];
+ 
+ 	/* DT9812 only responds to 32 byte writes!! */
+-	return usb_bulk_msg(usb, usb_sndbulkpipe(usb, devpriv->cmd_wr.addr),
+-			    &cmd, 32, &count, DT9812_USB_TIMEOUT);
++	ret = usb_bulk_msg(usb, usb_sndbulkpipe(usb, devpriv->cmd_wr.addr),
++			   cmd, sizeof(*cmd), &count, DT9812_USB_TIMEOUT);
++	kfree(cmd);
++
++	return ret;
+ }
+ 
+ static int dt9812_digital_in(struct comedi_device *dev, u8 *bits)
+diff --git a/drivers/staging/comedi/drivers/ni_usb6501.c b/drivers/staging/comedi/drivers/ni_usb6501.c
+index 5b6d9d783b2f7..c42987b74b1dc 100644
+--- a/drivers/staging/comedi/drivers/ni_usb6501.c
++++ b/drivers/staging/comedi/drivers/ni_usb6501.c
+@@ -144,6 +144,10 @@ static const u8 READ_COUNTER_RESPONSE[]	= {0x00, 0x01, 0x00, 0x10,
+ 					   0x00, 0x00, 0x00, 0x02,
+ 					   0x00, 0x00, 0x00, 0x00};
+ 
++/* Largest supported packets */
++static const size_t TX_MAX_SIZE	= sizeof(SET_PORT_DIR_REQUEST);
++static const size_t RX_MAX_SIZE	= sizeof(READ_PORT_RESPONSE);
++
+ enum commands {
+ 	READ_PORT,
+ 	WRITE_PORT,
+@@ -501,6 +505,12 @@ static int ni6501_find_endpoints(struct comedi_device *dev)
+ 	if (!devpriv->ep_rx || !devpriv->ep_tx)
+ 		return -ENODEV;
+ 
++	if (usb_endpoint_maxp(devpriv->ep_rx) < RX_MAX_SIZE)
++		return -ENODEV;
++
++	if (usb_endpoint_maxp(devpriv->ep_tx) < TX_MAX_SIZE)
++		return -ENODEV;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/staging/comedi/drivers/vmk80xx.c b/drivers/staging/comedi/drivers/vmk80xx.c
+index 7956abcbae22b..7769eadfaf61d 100644
+--- a/drivers/staging/comedi/drivers/vmk80xx.c
++++ b/drivers/staging/comedi/drivers/vmk80xx.c
+@@ -90,6 +90,9 @@ enum {
+ #define IC3_VERSION		BIT(0)
+ #define IC6_VERSION		BIT(1)
+ 
++#define MIN_BUF_SIZE		64
++#define PACKET_TIMEOUT		10000	/* ms */
++
+ enum vmk80xx_model {
+ 	VMK8055_MODEL,
+ 	VMK8061_MODEL
+@@ -157,22 +160,21 @@ static void vmk80xx_do_bulk_msg(struct comedi_device *dev)
+ 	__u8 rx_addr;
+ 	unsigned int tx_pipe;
+ 	unsigned int rx_pipe;
+-	size_t size;
++	size_t tx_size;
++	size_t rx_size;
+ 
+ 	tx_addr = devpriv->ep_tx->bEndpointAddress;
+ 	rx_addr = devpriv->ep_rx->bEndpointAddress;
+ 	tx_pipe = usb_sndbulkpipe(usb, tx_addr);
+ 	rx_pipe = usb_rcvbulkpipe(usb, rx_addr);
++	tx_size = usb_endpoint_maxp(devpriv->ep_tx);
++	rx_size = usb_endpoint_maxp(devpriv->ep_rx);
+ 
+-	/*
+-	 * The max packet size attributes of the K8061
+-	 * input/output endpoints are identical
+-	 */
+-	size = usb_endpoint_maxp(devpriv->ep_tx);
++	usb_bulk_msg(usb, tx_pipe, devpriv->usb_tx_buf, tx_size, NULL,
++		     PACKET_TIMEOUT);
+ 
+-	usb_bulk_msg(usb, tx_pipe, devpriv->usb_tx_buf,
+-		     size, NULL, devpriv->ep_tx->bInterval);
+-	usb_bulk_msg(usb, rx_pipe, devpriv->usb_rx_buf, size, NULL, HZ * 10);
++	usb_bulk_msg(usb, rx_pipe, devpriv->usb_rx_buf, rx_size, NULL,
++		     PACKET_TIMEOUT);
+ }
+ 
+ static int vmk80xx_read_packet(struct comedi_device *dev)
+@@ -191,7 +193,7 @@ static int vmk80xx_read_packet(struct comedi_device *dev)
+ 	pipe = usb_rcvintpipe(usb, ep->bEndpointAddress);
+ 	return usb_interrupt_msg(usb, pipe, devpriv->usb_rx_buf,
+ 				 usb_endpoint_maxp(ep), NULL,
+-				 HZ * 10);
++				 PACKET_TIMEOUT);
+ }
+ 
+ static int vmk80xx_write_packet(struct comedi_device *dev, int cmd)
+@@ -212,7 +214,7 @@ static int vmk80xx_write_packet(struct comedi_device *dev, int cmd)
+ 	pipe = usb_sndintpipe(usb, ep->bEndpointAddress);
+ 	return usb_interrupt_msg(usb, pipe, devpriv->usb_tx_buf,
+ 				 usb_endpoint_maxp(ep), NULL,
+-				 HZ * 10);
++				 PACKET_TIMEOUT);
+ }
+ 
+ static int vmk80xx_reset_device(struct comedi_device *dev)
+@@ -678,12 +680,12 @@ static int vmk80xx_alloc_usb_buffers(struct comedi_device *dev)
+ 	struct vmk80xx_private *devpriv = dev->private;
+ 	size_t size;
+ 
+-	size = usb_endpoint_maxp(devpriv->ep_rx);
++	size = max(usb_endpoint_maxp(devpriv->ep_rx), MIN_BUF_SIZE);
+ 	devpriv->usb_rx_buf = kzalloc(size, GFP_KERNEL);
+ 	if (!devpriv->usb_rx_buf)
+ 		return -ENOMEM;
+ 
+-	size = usb_endpoint_maxp(devpriv->ep_tx);
++	size = max(usb_endpoint_maxp(devpriv->ep_rx), MIN_BUF_SIZE);
+ 	devpriv->usb_tx_buf = kzalloc(size, GFP_KERNEL);
+ 	if (!devpriv->usb_tx_buf)
+ 		return -ENOMEM;
+diff --git a/drivers/staging/media/ipu3/ipu3-css-fw.c b/drivers/staging/media/ipu3/ipu3-css-fw.c
+index 45aff76198e2c..981693eed8155 100644
+--- a/drivers/staging/media/ipu3/ipu3-css-fw.c
++++ b/drivers/staging/media/ipu3/ipu3-css-fw.c
+@@ -124,12 +124,11 @@ int imgu_css_fw_init(struct imgu_css *css)
+ 	/* Check and display fw header info */
+ 
+ 	css->fwp = (struct imgu_fw_header *)css->fw->data;
+-	if (css->fw->size < sizeof(struct imgu_fw_header *) ||
++	if (css->fw->size < struct_size(css->fwp, binary_header, 1) ||
+ 	    css->fwp->file_header.h_size != sizeof(struct imgu_fw_bi_file_h))
+ 		goto bad_fw;
+-	if (sizeof(struct imgu_fw_bi_file_h) +
+-	    css->fwp->file_header.binary_nr * sizeof(struct imgu_fw_info) >
+-	    css->fw->size)
++	if (struct_size(css->fwp, binary_header,
++			css->fwp->file_header.binary_nr) > css->fw->size)
+ 		goto bad_fw;
+ 
+ 	dev_info(dev, "loaded firmware version %.64s, %u binaries, %zu bytes\n",
+diff --git a/drivers/staging/media/ipu3/ipu3-css-fw.h b/drivers/staging/media/ipu3/ipu3-css-fw.h
+index 79ffa70451390..650fd25fc79ee 100644
+--- a/drivers/staging/media/ipu3/ipu3-css-fw.h
++++ b/drivers/staging/media/ipu3/ipu3-css-fw.h
+@@ -170,7 +170,7 @@ struct imgu_fw_bi_file_h {
+ 
+ struct imgu_fw_header {
+ 	struct imgu_fw_bi_file_h file_header;
+-	struct imgu_fw_info binary_header[1];	/* binary_nr items */
++	struct imgu_fw_info binary_header[];	/* binary_nr items */
+ };
+ 
+ /******************* Firmware functions *******************/
+diff --git a/drivers/staging/rtl8192u/r8192U_core.c b/drivers/staging/rtl8192u/r8192U_core.c
+index 4523e825a61a8..7f90af8a7c7c9 100644
+--- a/drivers/staging/rtl8192u/r8192U_core.c
++++ b/drivers/staging/rtl8192u/r8192U_core.c
+@@ -229,7 +229,7 @@ int write_nic_byte_E(struct net_device *dev, int indx, u8 data)
+ 
+ 	status = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 				 RTL8187_REQ_SET_REGS, RTL8187_REQT_WRITE,
+-				 indx | 0xfe00, 0, usbdata, 1, HZ / 2);
++				 indx | 0xfe00, 0, usbdata, 1, 500);
+ 	kfree(usbdata);
+ 
+ 	if (status < 0) {
+@@ -251,7 +251,7 @@ int read_nic_byte_E(struct net_device *dev, int indx, u8 *data)
+ 
+ 	status = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ 				 RTL8187_REQ_GET_REGS, RTL8187_REQT_READ,
+-				 indx | 0xfe00, 0, usbdata, 1, HZ / 2);
++				 indx | 0xfe00, 0, usbdata, 1, 500);
+ 	*data = *usbdata;
+ 	kfree(usbdata);
+ 
+@@ -279,7 +279,7 @@ int write_nic_byte(struct net_device *dev, int indx, u8 data)
+ 	status = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 				 RTL8187_REQ_SET_REGS, RTL8187_REQT_WRITE,
+ 				 (indx & 0xff) | 0xff00, (indx >> 8) & 0x0f,
+-				 usbdata, 1, HZ / 2);
++				 usbdata, 1, 500);
+ 	kfree(usbdata);
+ 
+ 	if (status < 0) {
+@@ -305,7 +305,7 @@ int write_nic_word(struct net_device *dev, int indx, u16 data)
+ 	status = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 				 RTL8187_REQ_SET_REGS, RTL8187_REQT_WRITE,
+ 				 (indx & 0xff) | 0xff00, (indx >> 8) & 0x0f,
+-				 usbdata, 2, HZ / 2);
++				 usbdata, 2, 500);
+ 	kfree(usbdata);
+ 
+ 	if (status < 0) {
+@@ -331,7 +331,7 @@ int write_nic_dword(struct net_device *dev, int indx, u32 data)
+ 	status = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 				 RTL8187_REQ_SET_REGS, RTL8187_REQT_WRITE,
+ 				 (indx & 0xff) | 0xff00, (indx >> 8) & 0x0f,
+-				 usbdata, 4, HZ / 2);
++				 usbdata, 4, 500);
+ 	kfree(usbdata);
+ 
+ 	if (status < 0) {
+@@ -355,7 +355,7 @@ int read_nic_byte(struct net_device *dev, int indx, u8 *data)
+ 	status = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ 				 RTL8187_REQ_GET_REGS, RTL8187_REQT_READ,
+ 				 (indx & 0xff) | 0xff00, (indx >> 8) & 0x0f,
+-				 usbdata, 1, HZ / 2);
++				 usbdata, 1, 500);
+ 	*data = *usbdata;
+ 	kfree(usbdata);
+ 
+@@ -380,7 +380,7 @@ int read_nic_word(struct net_device *dev, int indx, u16 *data)
+ 	status = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ 				 RTL8187_REQ_GET_REGS, RTL8187_REQT_READ,
+ 				 (indx & 0xff) | 0xff00, (indx >> 8) & 0x0f,
+-				 usbdata, 2, HZ / 2);
++				 usbdata, 2, 500);
+ 	*data = *usbdata;
+ 	kfree(usbdata);
+ 
+@@ -404,7 +404,7 @@ static int read_nic_word_E(struct net_device *dev, int indx, u16 *data)
+ 
+ 	status = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ 				 RTL8187_REQ_GET_REGS, RTL8187_REQT_READ,
+-				 indx | 0xfe00, 0, usbdata, 2, HZ / 2);
++				 indx | 0xfe00, 0, usbdata, 2, 500);
+ 	*data = *usbdata;
+ 	kfree(usbdata);
+ 
+@@ -430,7 +430,7 @@ int read_nic_dword(struct net_device *dev, int indx, u32 *data)
+ 	status = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ 				 RTL8187_REQ_GET_REGS, RTL8187_REQT_READ,
+ 				 (indx & 0xff) | 0xff00, (indx >> 8) & 0x0f,
+-				 usbdata, 4, HZ / 2);
++				 usbdata, 4, 500);
+ 	*data = *usbdata;
+ 	kfree(usbdata);
+ 
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index 17d28af0d0867..fed96d4251bfa 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -598,12 +598,12 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
+ 
+ 	/* never exit with a firmware callback pending */
+ 	wait_for_completion(&padapter->rtl8712_fw_ready);
++	if (pnetdev->reg_state != NETREG_UNINITIALIZED)
++		unregister_netdev(pnetdev); /* will call netdev_close() */
+ 	usb_set_intfdata(pusb_intf, NULL);
+ 	release_firmware(padapter->fw);
+ 	if (drvpriv.drv_registered)
+ 		padapter->surprise_removed = true;
+-	if (pnetdev->reg_state != NETREG_UNINITIALIZED)
+-		unregister_netdev(pnetdev); /* will call netdev_close() */
+ 	r8712_flush_rwctrl_works(padapter);
+ 	r8712_flush_led_works(padapter);
+ 	udelay(1);
+diff --git a/drivers/staging/rtl8712/usb_ops_linux.c b/drivers/staging/rtl8712/usb_ops_linux.c
+index 655497cead122..f984a5ab2c6ff 100644
+--- a/drivers/staging/rtl8712/usb_ops_linux.c
++++ b/drivers/staging/rtl8712/usb_ops_linux.c
+@@ -494,7 +494,7 @@ int r8712_usbctrl_vendorreq(struct intf_priv *pintfpriv, u8 request, u16 value,
+ 		memcpy(pIo_buf, pdata, len);
+ 	}
+ 	status = usb_control_msg(udev, pipe, request, reqtype, value, index,
+-				 pIo_buf, len, HZ / 2);
++				 pIo_buf, len, 500);
+ 	if (status > 0) {  /* Success this control transfer. */
+ 		if (requesttype == 0x01) {
+ 			/* For Control read transfer, we have to copy the read
+diff --git a/drivers/usb/gadget/udc/Kconfig b/drivers/usb/gadget/udc/Kconfig
+index 1a12aab208b46..933e80d5053ac 100644
+--- a/drivers/usb/gadget/udc/Kconfig
++++ b/drivers/usb/gadget/udc/Kconfig
+@@ -330,6 +330,7 @@ config USB_AMD5536UDC
+ config USB_FSL_QE
+ 	tristate "Freescale QE/CPM USB Device Controller"
+ 	depends on FSL_SOC && (QUICC_ENGINE || CPM)
++	depends on !64BIT || BROKEN
+ 	help
+ 	   Some of Freescale PowerPC processors have a Full Speed
+ 	   QE/CPM2 USB controller, which support device mode with 4
+diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
+index 6793fd99c1cb4..8aff19ff8e8f0 100644
+--- a/drivers/usb/host/ehci-hcd.c
++++ b/drivers/usb/host/ehci-hcd.c
+@@ -634,7 +634,16 @@ static int ehci_run (struct usb_hcd *hcd)
+ 	/* Wait until HC become operational */
+ 	ehci_readl(ehci, &ehci->regs->command);	/* unblock posted writes */
+ 	msleep(5);
+-	rc = ehci_handshake(ehci, &ehci->regs->status, STS_HALT, 0, 100 * 1000);
++
++	/* For Aspeed, STS_HALT also depends on ASS/PSS status.
++	 * Check CMD_RUN instead.
++	 */
++	if (ehci->is_aspeed)
++		rc = ehci_handshake(ehci, &ehci->regs->command, CMD_RUN,
++				    1, 100 * 1000);
++	else
++		rc = ehci_handshake(ehci, &ehci->regs->status, STS_HALT,
++				    0, 100 * 1000);
+ 
+ 	up_write(&ehci_cf_port_reset_rwsem);
+ 
+diff --git a/drivers/usb/host/ehci-platform.c b/drivers/usb/host/ehci-platform.c
+index a48dd3fac1537..2dcfc67f2ba81 100644
+--- a/drivers/usb/host/ehci-platform.c
++++ b/drivers/usb/host/ehci-platform.c
+@@ -294,6 +294,12 @@ static int ehci_platform_probe(struct platform_device *dev)
+ 					  "has-transaction-translator"))
+ 			hcd->has_tt = 1;
+ 
++		if (of_device_is_compatible(dev->dev.of_node,
++					    "aspeed,ast2500-ehci") ||
++		    of_device_is_compatible(dev->dev.of_node,
++					    "aspeed,ast2600-ehci"))
++			ehci->is_aspeed = 1;
++
+ 		if (soc_device_match(quirk_poll_match))
+ 			priv->quirk_poll = true;
+ 
+diff --git a/drivers/usb/host/ehci.h b/drivers/usb/host/ehci.h
+index eabf22a78eae0..59fd523c55f30 100644
+--- a/drivers/usb/host/ehci.h
++++ b/drivers/usb/host/ehci.h
+@@ -218,6 +218,7 @@ struct ehci_hcd {			/* one per controller */
+ 	unsigned		frame_index_bug:1; /* MosChip (AKA NetMos) */
+ 	unsigned		need_oc_pp_cycle:1; /* MPC834X port power */
+ 	unsigned		imx28_write_fix:1; /* For Freescale i.MX28 */
++	unsigned		is_aspeed:1;
+ 
+ 	/* required for usb32 quirk */
+ 	#define OHCI_CTRL_HCFS          (3 << 6)
+diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
+index f62ffaede1abb..fb806b33178a0 100644
+--- a/drivers/usb/musb/musb_gadget.c
++++ b/drivers/usb/musb/musb_gadget.c
+@@ -1247,9 +1247,11 @@ static int musb_gadget_queue(struct usb_ep *ep, struct usb_request *req,
+ 		status = musb_queue_resume_work(musb,
+ 						musb_ep_restart_resume_work,
+ 						request);
+-		if (status < 0)
++		if (status < 0) {
+ 			dev_err(musb->controller, "%s resume work: %i\n",
+ 				__func__, status);
++			list_del(&request->list);
++		}
+ 	}
+ 
+ unlock:
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index c6b3fcf901805..29191d33c0e3e 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -406,6 +406,16 @@ UNUSUAL_DEV(  0x04b8, 0x0602, 0x0110, 0x0110,
+ 		"785EPX Storage",
+ 		USB_SC_SCSI, USB_PR_BULK, NULL, US_FL_SINGLE_LUN),
+ 
++/*
++ * Reported by James Buren <braewoods+lkml@braewoods.net>
++ * Virtual ISOs cannot be remounted if ejected while the device is locked
++ * Disable locking to mimic Windows behavior that bypasses the issue
++ */
++UNUSUAL_DEV(  0x04c5, 0x2028, 0x0001, 0x0001,
++		"iODD",
++		"2531/2541",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_NOT_LOCKABLE),
++
+ /*
+  * Not sure who reported this originally but
+  * Pavel Machek <pavel@ucw.cz> reported that the extra US_FL_SINGLE_LUN
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index 35675a1065be8..f62b5a5015668 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -1321,6 +1321,8 @@ static int isofs_read_inode(struct inode *inode, int relocated)
+ 
+ 	de = (struct iso_directory_record *) (bh->b_data + offset);
+ 	de_len = *(unsigned char *) de;
++	if (de_len < sizeof(struct iso_directory_record))
++		goto fail;
+ 
+ 	if (offset + de_len > bufsize) {
+ 		int frag1 = bufsize - offset;
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index d0df95346ab3f..85351a12c85dc 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -2213,8 +2213,15 @@ static int __init console_setup(char *str)
+ 	char *s, *options, *brl_options = NULL;
+ 	int idx;
+ 
+-	if (str[0] == 0)
++	/*
++	 * console="" or console=null have been suggested as a way to
++	 * disable console output. Use ttynull that has been created
++	 * for exacly this purpose.
++	 */
++	if (str[0] == 0 || strcmp(str, "null") == 0) {
++		__add_preferred_console("ttynull", 0, NULL, NULL, true);
+ 		return 1;
++	}
+ 
+ 	if (_braille_console_setup(&str, &brl_options))
+ 		return 1;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-11-18 15:33 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-11-18 15:33 UTC (permalink / raw
  To: gentoo-commits

commit:     52e7b48d28035db525f246d1b8ee434cd97b3748
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 18 15:33:13 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Nov 18 15:33:13 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=52e7b48d

Linux patch 5.10.80

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1079_linux-5.10.80.patch | 19944 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 19948 insertions(+)

diff --git a/0000_README b/0000_README
index 5066489e..22873d23 100644
--- a/0000_README
+++ b/0000_README
@@ -359,6 +359,10 @@ Patch:  1078_linux-5.10.79.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.79
 
+Patch:  1079_linux-5.10.80.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.80
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1079_linux-5.10.80.patch b/1079_linux-5.10.80.patch
new file mode 100644
index 00000000..6f630b94
--- /dev/null
+++ b/1079_linux-5.10.80.patch
@@ -0,0 +1,19944 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index f103667d3727f..516499f9ccae4 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5988,6 +5988,13 @@
+ 			improve timer resolution at the expense of processing
+ 			more timer interrupts.
+ 
++	xen.balloon_boot_timeout= [XEN]
++			The time (in seconds) to wait before giving up to boot
++			in case initial ballooning fails to free enough memory.
++			Applies only when running as HVM or PVH guest and
++			started with less memory configured than allowed at
++			max. Default is 180.
++
+ 	xen.event_eoi_delay=	[XEN]
+ 			How long to delay EOI handling in case of event
+ 			storms (jiffies). Default is 10.
+diff --git a/Documentation/devicetree/bindings/regulator/samsung,s5m8767.txt b/Documentation/devicetree/bindings/regulator/samsung,s5m8767.txt
+index 093edda0c8dfc..6cd83d920155f 100644
+--- a/Documentation/devicetree/bindings/regulator/samsung,s5m8767.txt
++++ b/Documentation/devicetree/bindings/regulator/samsung,s5m8767.txt
+@@ -13,6 +13,14 @@ common regulator binding documented in:
+ 
+ 
+ Required properties of the main device node (the parent!):
++ - s5m8767,pmic-buck-ds-gpios: GPIO specifiers for three host gpio's used
++   for selecting GPIO DVS lines. It is one-to-one mapped to dvs gpio lines.
++
++ [1] If either of the 's5m8767,pmic-buck[2/3/4]-uses-gpio-dvs' optional
++     property is specified, then all the eight voltage values for the
++     's5m8767,pmic-buck[2/3/4]-dvs-voltage' should be specified.
++
++Optional properties of the main device node (the parent!):
+  - s5m8767,pmic-buck2-dvs-voltage: A set of 8 voltage values in micro-volt (uV)
+    units for buck2 when changing voltage using gpio dvs. Refer to [1] below
+    for additional information.
+@@ -25,26 +33,13 @@ Required properties of the main device node (the parent!):
+    units for buck4 when changing voltage using gpio dvs. Refer to [1] below
+    for additional information.
+ 
+- - s5m8767,pmic-buck-ds-gpios: GPIO specifiers for three host gpio's used
+-   for selecting GPIO DVS lines. It is one-to-one mapped to dvs gpio lines.
+-
+- [1] If none of the 's5m8767,pmic-buck[2/3/4]-uses-gpio-dvs' optional
+-     property is specified, the 's5m8767,pmic-buck[2/3/4]-dvs-voltage'
+-     property should specify atleast one voltage level (which would be a
+-     safe operating voltage).
+-
+-     If either of the 's5m8767,pmic-buck[2/3/4]-uses-gpio-dvs' optional
+-     property is specified, then all the eight voltage values for the
+-     's5m8767,pmic-buck[2/3/4]-dvs-voltage' should be specified.
+-
+-Optional properties of the main device node (the parent!):
+  - s5m8767,pmic-buck2-uses-gpio-dvs: 'buck2' can be controlled by gpio dvs.
+  - s5m8767,pmic-buck3-uses-gpio-dvs: 'buck3' can be controlled by gpio dvs.
+  - s5m8767,pmic-buck4-uses-gpio-dvs: 'buck4' can be controlled by gpio dvs.
+ 
+ Additional properties required if either of the optional properties are used:
+ 
+- - s5m8767,pmic-buck234-default-dvs-idx: Default voltage setting selected from
++ - s5m8767,pmic-buck-default-dvs-idx: Default voltage setting selected from
+    the possible 8 options selectable by the dvs gpios. The value of this
+    property should be between 0 and 7. If not specified or if out of range, the
+    default value of this property is set to 0.
+diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
+index 44b67ebd6e40d..936fae06db770 100644
+--- a/Documentation/filesystems/fscrypt.rst
++++ b/Documentation/filesystems/fscrypt.rst
+@@ -176,11 +176,11 @@ Master Keys
+ 
+ Each encrypted directory tree is protected by a *master key*.  Master
+ keys can be up to 64 bytes long, and must be at least as long as the
+-greater of the key length needed by the contents and filenames
+-encryption modes being used.  For example, if AES-256-XTS is used for
+-contents encryption, the master key must be 64 bytes (512 bits).  Note
+-that the XTS mode is defined to require a key twice as long as that
+-required by the underlying block cipher.
++greater of the security strength of the contents and filenames
++encryption modes being used.  For example, if any AES-256 mode is
++used, the master key must be at least 256 bits, i.e. 32 bytes.  A
++stricter requirement applies if the key is used by a v1 encryption
++policy and AES-256-XTS is used; such keys must be 64 bytes.
+ 
+ To "unlock" an encrypted directory tree, userspace must provide the
+ appropriate master key.  There can be any number of master keys, each
+diff --git a/Makefile b/Makefile
+index 756479b101f80..71fdc74801e0a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 79
++SUBLEVEL = 80
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 69fe7133c765d..632d60e13494c 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -1026,6 +1026,9 @@ config RELR
+ config ARCH_HAS_MEM_ENCRYPT
+ 	bool
+ 
++config ARCH_HAS_CC_PLATFORM
++	bool
++
+ config HAVE_SPARSE_SYSCALL_NR
+        bool
+        help
+diff --git a/arch/arm/Makefile b/arch/arm/Makefile
+index e15f76ca2887c..0e5a8765e60b3 100644
+--- a/arch/arm/Makefile
++++ b/arch/arm/Makefile
+@@ -60,15 +60,15 @@ KBUILD_CFLAGS	+= $(call cc-option,-fno-ipa-sra)
+ # Note that GCC does not numerically define an architecture version
+ # macro, but instead defines a whole series of macros which makes
+ # testing for a specific architecture or later rather impossible.
+-arch-$(CONFIG_CPU_32v7M)	=-D__LINUX_ARM_ARCH__=7 -march=armv7-m -Wa,-march=armv7-m
+-arch-$(CONFIG_CPU_32v7)		=-D__LINUX_ARM_ARCH__=7 $(call cc-option,-march=armv7-a,-march=armv5t -Wa$(comma)-march=armv7-a)
+-arch-$(CONFIG_CPU_32v6)		=-D__LINUX_ARM_ARCH__=6 $(call cc-option,-march=armv6,-march=armv5t -Wa$(comma)-march=armv6)
++arch-$(CONFIG_CPU_32v7M)	=-D__LINUX_ARM_ARCH__=7 -march=armv7-m
++arch-$(CONFIG_CPU_32v7)		=-D__LINUX_ARM_ARCH__=7 -march=armv7-a
++arch-$(CONFIG_CPU_32v6)		=-D__LINUX_ARM_ARCH__=6 -march=armv6
+ # Only override the compiler option if ARMv6. The ARMv6K extensions are
+ # always available in ARMv7
+ ifeq ($(CONFIG_CPU_32v6),y)
+-arch-$(CONFIG_CPU_32v6K)	=-D__LINUX_ARM_ARCH__=6 $(call cc-option,-march=armv6k,-march=armv5t -Wa$(comma)-march=armv6k)
++arch-$(CONFIG_CPU_32v6K)	=-D__LINUX_ARM_ARCH__=6 -march=armv6k
+ endif
+-arch-$(CONFIG_CPU_32v5)		=-D__LINUX_ARM_ARCH__=5 $(call cc-option,-march=armv5te,-march=armv4t)
++arch-$(CONFIG_CPU_32v5)		=-D__LINUX_ARM_ARCH__=5 -march=armv5te
+ arch-$(CONFIG_CPU_32v4T)	=-D__LINUX_ARM_ARCH__=4 -march=armv4t
+ arch-$(CONFIG_CPU_32v4)		=-D__LINUX_ARM_ARCH__=4 -march=armv4
+ arch-$(CONFIG_CPU_32v3)		=-D__LINUX_ARM_ARCH__=3 -march=armv3m
+@@ -82,7 +82,7 @@ tune-$(CONFIG_CPU_ARM720T)	=-mtune=arm7tdmi
+ tune-$(CONFIG_CPU_ARM740T)	=-mtune=arm7tdmi
+ tune-$(CONFIG_CPU_ARM9TDMI)	=-mtune=arm9tdmi
+ tune-$(CONFIG_CPU_ARM940T)	=-mtune=arm9tdmi
+-tune-$(CONFIG_CPU_ARM946E)	=$(call cc-option,-mtune=arm9e,-mtune=arm9tdmi)
++tune-$(CONFIG_CPU_ARM946E)	=-mtune=arm9e
+ tune-$(CONFIG_CPU_ARM920T)	=-mtune=arm9tdmi
+ tune-$(CONFIG_CPU_ARM922T)	=-mtune=arm9tdmi
+ tune-$(CONFIG_CPU_ARM925T)	=-mtune=arm9tdmi
+@@ -90,11 +90,11 @@ tune-$(CONFIG_CPU_ARM926T)	=-mtune=arm9tdmi
+ tune-$(CONFIG_CPU_FA526)	=-mtune=arm9tdmi
+ tune-$(CONFIG_CPU_SA110)	=-mtune=strongarm110
+ tune-$(CONFIG_CPU_SA1100)	=-mtune=strongarm1100
+-tune-$(CONFIG_CPU_XSCALE)	=$(call cc-option,-mtune=xscale,-mtune=strongarm110) -Wa,-mcpu=xscale
+-tune-$(CONFIG_CPU_XSC3)		=$(call cc-option,-mtune=xscale,-mtune=strongarm110) -Wa,-mcpu=xscale
+-tune-$(CONFIG_CPU_FEROCEON)	=$(call cc-option,-mtune=marvell-f,-mtune=xscale)
+-tune-$(CONFIG_CPU_V6)		=$(call cc-option,-mtune=arm1136j-s,-mtune=strongarm)
+-tune-$(CONFIG_CPU_V6K)		=$(call cc-option,-mtune=arm1136j-s,-mtune=strongarm)
++tune-$(CONFIG_CPU_XSCALE)	=-mtune=xscale
++tune-$(CONFIG_CPU_XSC3)		=-mtune=xscale
++tune-$(CONFIG_CPU_FEROCEON)	=-mtune=xscale
++tune-$(CONFIG_CPU_V6)		=-mtune=arm1136j-s
++tune-$(CONFIG_CPU_V6K)		=-mtune=arm1136j-s
+ 
+ # Evaluate tune cc-option calls now
+ tune-y := $(tune-y)
+diff --git a/arch/arm/boot/dts/at91-tse850-3.dts b/arch/arm/boot/dts/at91-tse850-3.dts
+index 3ca97b47c69ce..7e5c598e7e68f 100644
+--- a/arch/arm/boot/dts/at91-tse850-3.dts
++++ b/arch/arm/boot/dts/at91-tse850-3.dts
+@@ -262,7 +262,7 @@
+ &macb1 {
+ 	status = "okay";
+ 
+-	phy-mode = "rgmii";
++	phy-mode = "rmii";
+ 
+ 	#address-cells = <1>;
+ 	#size-cells = <0>;
+diff --git a/arch/arm/boot/dts/bcm4708-netgear-r6250.dts b/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
+index 61c7b137607e5..7900aac4f35a9 100644
+--- a/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
++++ b/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
+@@ -20,7 +20,7 @@
+ 		bootargs = "console=ttyS0,115200 earlycon";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x08000000>;
+diff --git a/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts b/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts
+index 6c6bb7b17d27a..7546c8d07bcd7 100644
+--- a/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts
++++ b/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts
+@@ -19,7 +19,7 @@
+ 		bootargs = "console=ttyS0,115200";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x08000000>;
+diff --git a/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts b/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
+index d29e7f80ea6aa..beae9eab9cb8c 100644
+--- a/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
++++ b/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
+@@ -19,7 +19,7 @@
+ 		bootargs = "console=ttyS0,115200";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x18000000>;
+diff --git a/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts b/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts
+index 38fbefdf2e4e4..ee94455a7236d 100644
+--- a/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts
++++ b/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts
+@@ -16,7 +16,7 @@
+ 		bootargs = "console=ttyS0,115200";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x08000000>;
+diff --git a/arch/arm/boot/dts/bcm4709-netgear-r7000.dts b/arch/arm/boot/dts/bcm4709-netgear-r7000.dts
+index 7989a53597d4f..56d309dbc6b0d 100644
+--- a/arch/arm/boot/dts/bcm4709-netgear-r7000.dts
++++ b/arch/arm/boot/dts/bcm4709-netgear-r7000.dts
+@@ -19,7 +19,7 @@
+ 		bootargs = "console=ttyS0,115200";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x08000000>;
+diff --git a/arch/arm/boot/dts/bcm4709-netgear-r8000.dts b/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
+index 87b655be674c5..184e3039aa864 100644
+--- a/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
++++ b/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
+@@ -30,7 +30,7 @@
+ 		bootargs = "console=ttyS0,115200";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x08000000>;
+diff --git a/arch/arm/boot/dts/bcm4709-tplink-archer-c9-v1.dts b/arch/arm/boot/dts/bcm4709-tplink-archer-c9-v1.dts
+index f806be5da7237..c2a266a439d05 100644
+--- a/arch/arm/boot/dts/bcm4709-tplink-archer-c9-v1.dts
++++ b/arch/arm/boot/dts/bcm4709-tplink-archer-c9-v1.dts
+@@ -15,7 +15,7 @@
+ 		bootargs = "console=ttyS0,115200 earlycon";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>;
+ 	};
+diff --git a/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts b/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
+index 2666195b6ffeb..3d415d874bd39 100644
+--- a/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
++++ b/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
+@@ -16,7 +16,7 @@
+ 		bootargs = "earlycon";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x18000000>;
+diff --git a/arch/arm/boot/dts/bcm53016-meraki-mr32.dts b/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
+index 3b978dc8997a4..612d61852bfb9 100644
+--- a/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
++++ b/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
+@@ -20,7 +20,7 @@
+ 		bootargs = " console=ttyS0,115200n8 earlycon";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		reg = <0x00000000 0x08000000>;
+ 		device_type = "memory";
+ 	};
+diff --git a/arch/arm/boot/dts/bcm94708.dts b/arch/arm/boot/dts/bcm94708.dts
+index 3d13e46c69494..d9eb2040b9631 100644
+--- a/arch/arm/boot/dts/bcm94708.dts
++++ b/arch/arm/boot/dts/bcm94708.dts
+@@ -38,7 +38,7 @@
+ 	model = "NorthStar SVK (BCM94708)";
+ 	compatible = "brcm,bcm94708", "brcm,bcm4708";
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>;
+ 	};
+diff --git a/arch/arm/boot/dts/bcm94709.dts b/arch/arm/boot/dts/bcm94709.dts
+index 5017b7b259cbe..618c812eef73e 100644
+--- a/arch/arm/boot/dts/bcm94709.dts
++++ b/arch/arm/boot/dts/bcm94709.dts
+@@ -38,7 +38,7 @@
+ 	model = "NorthStar SVK (BCM94709)";
+ 	compatible = "brcm,bcm94709", "brcm,bcm4709", "brcm,bcm4708";
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>;
+ 	};
+diff --git a/arch/arm/boot/dts/omap3-gta04.dtsi b/arch/arm/boot/dts/omap3-gta04.dtsi
+index 7b8c18e6605e4..80c9e5e34136a 100644
+--- a/arch/arm/boot/dts/omap3-gta04.dtsi
++++ b/arch/arm/boot/dts/omap3-gta04.dtsi
+@@ -515,7 +515,7 @@
+ 		compatible = "bosch,bma180";
+ 		reg = <0x41>;
+ 		pinctrl-names = "default";
+-		pintcrl-0 = <&bma180_pins>;
++		pinctrl-0 = <&bma180_pins>;
+ 		interrupt-parent = <&gpio4>;
+ 		interrupts = <19 IRQ_TYPE_LEVEL_HIGH>; /* GPIO_115 */
+ 	};
+diff --git a/arch/arm/boot/dts/qcom-msm8974.dtsi b/arch/arm/boot/dts/qcom-msm8974.dtsi
+index 51f5f904f9eb9..5f7426fb4e419 100644
+--- a/arch/arm/boot/dts/qcom-msm8974.dtsi
++++ b/arch/arm/boot/dts/qcom-msm8974.dtsi
+@@ -1528,8 +1528,8 @@
+ 				#phy-cells = <0>;
+ 				qcom,dsi-phy-index = <0>;
+ 
+-				clocks = <&mmcc MDSS_AHB_CLK>;
+-				clock-names = "iface";
++				clocks = <&mmcc MDSS_AHB_CLK>, <&xo_board>;
++				clock-names = "iface", "ref";
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+index dee4d32ab32c4..ccf66adbbf623 100644
+--- a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+@@ -1091,7 +1091,7 @@
+ 		};
+ 	};
+ 
+-	sai2a_pins_c: sai2a-4 {
++	sai2a_pins_c: sai2a-2 {
+ 		pins {
+ 			pinmux = <STM32_PINMUX('D', 13, AF10)>, /* SAI2_SCK_A */
+ 				 <STM32_PINMUX('D', 11, AF10)>, /* SAI2_SD_A */
+@@ -1102,7 +1102,7 @@
+ 		};
+ 	};
+ 
+-	sai2a_sleep_pins_c: sai2a-5 {
++	sai2a_sleep_pins_c: sai2a-2 {
+ 		pins {
+ 			pinmux = <STM32_PINMUX('D', 13, ANALOG)>, /* SAI2_SCK_A */
+ 				 <STM32_PINMUX('D', 11, ANALOG)>, /* SAI2_SD_A */
+@@ -1147,14 +1147,14 @@
+ 		};
+ 	};
+ 
+-	sai2b_pins_c: sai2a-4 {
++	sai2b_pins_c: sai2b-2 {
+ 		pins1 {
+ 			pinmux = <STM32_PINMUX('F', 11, AF10)>; /* SAI2_SD_B */
+ 			bias-disable;
+ 		};
+ 	};
+ 
+-	sai2b_sleep_pins_c: sai2a-sleep-5 {
++	sai2b_sleep_pins_c: sai2b-sleep-2 {
+ 		pins {
+ 			pinmux = <STM32_PINMUX('F', 11, ANALOG)>; /* SAI2_SD_B */
+ 		};
+diff --git a/arch/arm/boot/dts/stm32mp151.dtsi b/arch/arm/boot/dts/stm32mp151.dtsi
+index b479016fef008..7a0ef01de969e 100644
+--- a/arch/arm/boot/dts/stm32mp151.dtsi
++++ b/arch/arm/boot/dts/stm32mp151.dtsi
+@@ -811,7 +811,7 @@
+ 				#sound-dai-cells = <0>;
+ 
+ 				compatible = "st,stm32-sai-sub-a";
+-				reg = <0x4 0x1c>;
++				reg = <0x4 0x20>;
+ 				clocks = <&rcc SAI1_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 87 0x400 0x01>;
+@@ -821,7 +821,7 @@
+ 			sai1b: audio-controller@4400a024 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-b";
+-				reg = <0x24 0x1c>;
++				reg = <0x24 0x20>;
+ 				clocks = <&rcc SAI1_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 88 0x400 0x01>;
+@@ -842,7 +842,7 @@
+ 			sai2a: audio-controller@4400b004 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-a";
+-				reg = <0x4 0x1c>;
++				reg = <0x4 0x20>;
+ 				clocks = <&rcc SAI2_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 89 0x400 0x01>;
+@@ -852,7 +852,7 @@
+ 			sai2b: audio-controller@4400b024 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-b";
+-				reg = <0x24 0x1c>;
++				reg = <0x24 0x20>;
+ 				clocks = <&rcc SAI2_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 90 0x400 0x01>;
+@@ -873,7 +873,7 @@
+ 			sai3a: audio-controller@4400c004 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-a";
+-				reg = <0x04 0x1c>;
++				reg = <0x04 0x20>;
+ 				clocks = <&rcc SAI3_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 113 0x400 0x01>;
+@@ -883,7 +883,7 @@
+ 			sai3b: audio-controller@4400c024 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-b";
+-				reg = <0x24 0x1c>;
++				reg = <0x24 0x20>;
+ 				clocks = <&rcc SAI3_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 114 0x400 0x01>;
+@@ -1250,7 +1250,7 @@
+ 			sai4a: audio-controller@50027004 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-a";
+-				reg = <0x04 0x1c>;
++				reg = <0x04 0x20>;
+ 				clocks = <&rcc SAI4_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 99 0x400 0x01>;
+@@ -1260,7 +1260,7 @@
+ 			sai4b: audio-controller@50027024 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-b";
+-				reg = <0x24 0x1c>;
++				reg = <0x24 0x20>;
+ 				clocks = <&rcc SAI4_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 100 0x400 0x01>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+index a9eb82b2f1704..5af32140e128b 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+@@ -198,7 +198,7 @@
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-rx-bus-width = <4>;
+-		spi-max-frequency = <108000000>;
++		spi-max-frequency = <50000000>;
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 	};
+diff --git a/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2.dts b/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2.dts
+index 9ba62774e89a1..488933b87ad5a 100644
+--- a/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2.dts
++++ b/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2.dts
+@@ -112,7 +112,7 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&gmac_rgmii_pins>;
+ 	phy-handle = <&phy1>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c
+index 76ea4178a55cb..db798eac74315 100644
+--- a/arch/arm/kernel/stacktrace.c
++++ b/arch/arm/kernel/stacktrace.c
+@@ -54,8 +54,7 @@ int notrace unwind_frame(struct stackframe *frame)
+ 
+ 	frame->sp = frame->fp;
+ 	frame->fp = *(unsigned long *)(fp);
+-	frame->pc = frame->lr;
+-	frame->lr = *(unsigned long *)(fp + 4);
++	frame->pc = *(unsigned long *)(fp + 4);
+ #else
+ 	/* check current frame pointer is within bounds */
+ 	if (fp < low + 12 || fp > high - 4)
+diff --git a/arch/arm/mach-s3c/irq-s3c24xx.c b/arch/arm/mach-s3c/irq-s3c24xx.c
+index 79b5f19af7a52..19fb9bdf446b4 100644
+--- a/arch/arm/mach-s3c/irq-s3c24xx.c
++++ b/arch/arm/mach-s3c/irq-s3c24xx.c
+@@ -360,11 +360,25 @@ static inline int s3c24xx_handle_intc(struct s3c_irq_intc *intc,
+ asmlinkage void __exception_irq_entry s3c24xx_handle_irq(struct pt_regs *regs)
+ {
+ 	do {
+-		if (likely(s3c_intc[0]))
+-			if (s3c24xx_handle_intc(s3c_intc[0], regs, 0))
+-				continue;
++		/*
++		 * For platform based machines, neither ERR nor NULL can happen here.
++		 * The s3c24xx_handle_irq() will be set as IRQ handler iff this succeeds:
++		 *
++		 *    s3c_intc[0] = s3c24xx_init_intc()
++		 *
++		 * If this fails, the next calls to s3c24xx_init_intc() won't be executed.
++		 *
++		 * For DT machine, s3c_init_intc_of() could set the IRQ handler without
++		 * setting s3c_intc[0] only if it was called with num_ctrl=0. There is no
++		 * such code path, so again the s3c_intc[0] will have a valid pointer if
++		 * set_handle_irq() is called.
++		 *
++		 * Therefore in s3c24xx_handle_irq(), the s3c_intc[0] is always something.
++		 */
++		if (s3c24xx_handle_intc(s3c_intc[0], regs, 0))
++			continue;
+ 
+-		if (s3c_intc[2])
++		if (!IS_ERR_OR_NULL(s3c_intc[2]))
+ 			if (s3c24xx_handle_intc(s3c_intc[2], regs, 64))
+ 				continue;
+ 
+diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
+index 02692fbe2db5c..423a97dd2f57c 100644
+--- a/arch/arm/mm/Kconfig
++++ b/arch/arm/mm/Kconfig
+@@ -753,7 +753,7 @@ config CPU_BIG_ENDIAN
+ config CPU_ENDIAN_BE8
+ 	bool
+ 	depends on CPU_BIG_ENDIAN
+-	default CPU_V6 || CPU_V6K || CPU_V7
++	default CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M
+ 	help
+ 	  Support for the BE-8 (big-endian) mode on ARMv6 and ARMv7 processors.
+ 
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index fa259825310c5..4df688f410728 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -391,9 +391,9 @@ void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
+ 		     FIXADDR_END);
+ 	BUG_ON(idx >= __end_of_fixed_addresses);
+ 
+-	/* we only support device mappings until pgprot_kernel has been set */
++	/* We support only device mappings before pgprot_kernel is set. */
+ 	if (WARN_ON(pgprot_val(prot) != pgprot_val(FIXMAP_PAGE_IO) &&
+-		    pgprot_val(pgprot_kernel) == 0))
++		    pgprot_val(prot) && pgprot_val(pgprot_kernel) == 0))
+ 		return;
+ 
+ 	if (pgprot_val(prot))
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts b/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts
+index b00d0468c7534..4d5b3e514b514 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts
+@@ -139,7 +139,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&dc_in>;
++		pwm-supply = <&dc_in>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a-u200.dts b/arch/arm64/boot/dts/amlogic/meson-g12a-u200.dts
+index a26bfe72550fe..4b5d11e56364d 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12a-u200.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12a-u200.dts
+@@ -139,7 +139,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&main_12v>;
++		pwm-supply = <&main_12v>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a-x96-max.dts b/arch/arm64/boot/dts/amlogic/meson-g12a-x96-max.dts
+index 463a72d6bb7c7..26b5d9327324a 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12a-x96-max.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12a-x96-max.dts
+@@ -139,7 +139,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&dc_in>;
++		pwm-supply = <&dc_in>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi
+index f42cf4b8af2d4..16dd409051b40 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi
+@@ -18,7 +18,7 @@
+ 		regulator-min-microvolt = <690000>;
+ 		regulator-max-microvolt = <1050000>;
+ 
+-		vin-supply = <&dc_in>;
++		pwm-supply = <&dc_in>;
+ 
+ 		pwms = <&pwm_ab 0 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+@@ -37,7 +37,7 @@
+ 		regulator-min-microvolt = <690000>;
+ 		regulator-max-microvolt = <1050000>;
+ 
+-		vin-supply = <&vsys_3v3>;
++		pwm-supply = <&vsys_3v3>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+index 39a09661c5f62..59b5f39088757 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+@@ -128,7 +128,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&main_12v>;
++		pwm-supply = <&main_12v>;
+ 
+ 		pwms = <&pwm_ab 0 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+@@ -147,7 +147,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&main_12v>;
++		pwm-supply = <&main_12v>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi
+index feb0885047400..b40d2c1002c92 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi
+@@ -96,7 +96,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&main_12v>;
++		pwm-supply = <&main_12v>;
+ 
+ 		pwms = <&pwm_ab 0 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+@@ -115,7 +115,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&main_12v>;
++		pwm-supply = <&main_12v>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 0e34ed48b9fae..b1ffc056eea0b 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -1322,11 +1322,17 @@
+ 		lpass: audio-controller@7708000 {
+ 			status = "disabled";
+ 			compatible = "qcom,lpass-cpu-apq8016";
++
++			/*
++			 * Note: Unlike the name would suggest, the SEC_I2S_CLK
++			 * is actually only used by Tertiary MI2S while
++			 * Primary/Secondary MI2S both use the PRI_I2S_CLK.
++			 */
+ 			clocks = <&gcc GCC_ULTAUDIO_AHBFABRIC_IXFABRIC_CLK>,
+ 				 <&gcc GCC_ULTAUDIO_PCNOC_MPORT_CLK>,
+ 				 <&gcc GCC_ULTAUDIO_PCNOC_SWAY_CLK>,
+ 				 <&gcc GCC_ULTAUDIO_LPAIF_PRI_I2S_CLK>,
+-				 <&gcc GCC_ULTAUDIO_LPAIF_SEC_I2S_CLK>,
++				 <&gcc GCC_ULTAUDIO_LPAIF_PRI_I2S_CLK>,
+ 				 <&gcc GCC_ULTAUDIO_LPAIF_SEC_I2S_CLK>,
+ 				 <&gcc GCC_ULTAUDIO_LPAIF_AUX_I2S_CLK>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/pm8916.dtsi b/arch/arm64/boot/dts/qcom/pm8916.dtsi
+index f931cb0de231f..42180f1b5dbbb 100644
+--- a/arch/arm64/boot/dts/qcom/pm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8916.dtsi
+@@ -86,7 +86,6 @@
+ 		rtc@6000 {
+ 			compatible = "qcom,pm8941-rtc";
+ 			reg = <0x6000>;
+-			reg-names = "rtc", "alarm";
+ 			interrupts = <0x0 0x61 0x1 IRQ_TYPE_EDGE_RISING>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
+index 3c73dfc430afc..929c7910c68df 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
+@@ -54,6 +54,7 @@
+ &avb {
+ 	pinctrl-0 = <&avb_pins>;
+ 	pinctrl-names = "default";
++	phy-mode = "rgmii-rxid";
+ 	phy-handle = <&phy0>;
+ 	rx-internal-delay-ps = <1800>;
+ 	tx-internal-delay-ps = <2000>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index e546c9d1d6463..72112fe05a5c4 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -603,7 +603,7 @@
+ 
+ 	gpu: gpu@ff300000 {
+ 		compatible = "rockchip,rk3328-mali", "arm,mali-450";
+-		reg = <0x0 0xff300000 0x0 0x40000>;
++		reg = <0x0 0xff300000 0x0 0x30000>;
+ 		interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 87 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 93 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+index 6ffdebd601223..85526f72b4616 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+@@ -629,7 +629,7 @@
+ 		clock-names = "fck";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+-		bus-range = <0x0 0xf>;
++		bus-range = <0x0 0xff>;
+ 		vendor-id = <0x104c>;
+ 		device-id = <0xb00d>;
+ 		msi-map = <0x0 &gic_its 0x0 0x10000>;
+@@ -656,7 +656,7 @@
+ 		clock-names = "fck";
+ 		cdns,max-outbound-regions = <16>;
+ 		max-functions = /bits/ 8 <6>;
+-		max-virtual-functions = /bits/ 16 <4 4 4 4 0 0>;
++		max-virtual-functions = /bits/ 8 <4 4 4 4 0 0>;
+ 		dma-coherent;
+ 	};
+ 
+@@ -678,7 +678,7 @@
+ 		clock-names = "fck";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+-		bus-range = <0x0 0xf>;
++		bus-range = <0x0 0xff>;
+ 		vendor-id = <0x104c>;
+ 		device-id = <0xb00d>;
+ 		msi-map = <0x0 &gic_its 0x10000 0x10000>;
+@@ -705,7 +705,7 @@
+ 		clock-names = "fck";
+ 		cdns,max-outbound-regions = <16>;
+ 		max-functions = /bits/ 8 <6>;
+-		max-virtual-functions = /bits/ 16 <4 4 4 4 0 0>;
++		max-virtual-functions = /bits/ 8 <4 4 4 4 0 0>;
+ 		dma-coherent;
+ 	};
+ 
+@@ -727,7 +727,7 @@
+ 		clock-names = "fck";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+-		bus-range = <0x0 0xf>;
++		bus-range = <0x0 0xff>;
+ 		vendor-id = <0x104c>;
+ 		device-id = <0xb00d>;
+ 		msi-map = <0x0 &gic_its 0x20000 0x10000>;
+@@ -754,7 +754,7 @@
+ 		clock-names = "fck";
+ 		cdns,max-outbound-regions = <16>;
+ 		max-functions = /bits/ 8 <6>;
+-		max-virtual-functions = /bits/ 16 <4 4 4 4 0 0>;
++		max-virtual-functions = /bits/ 8 <4 4 4 4 0 0>;
+ 		dma-coherent;
+ 	};
+ 
+@@ -776,7 +776,7 @@
+ 		clock-names = "fck";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+-		bus-range = <0x0 0xf>;
++		bus-range = <0x0 0xff>;
+ 		vendor-id = <0x104c>;
+ 		device-id = <0xb00d>;
+ 		msi-map = <0x0 &gic_its 0x30000 0x10000>;
+@@ -803,7 +803,7 @@
+ 		clock-names = "fck";
+ 		cdns,max-outbound-regions = <16>;
+ 		max-functions = /bits/ 8 <6>;
+-		max-virtual-functions = /bits/ 16 <4 4 4 4 0 0>;
++		max-virtual-functions = /bits/ 8 <4 4 4 4 0 0>;
+ 		dma-coherent;
+ 		#address-cells = <2>;
+ 		#size-cells = <2>;
+diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
+index 85a3e49f92f4a..4a76f566e44f4 100644
+--- a/arch/arm64/include/asm/esr.h
++++ b/arch/arm64/include/asm/esr.h
+@@ -68,6 +68,7 @@
+ #define ESR_ELx_EC_MAX		(0x3F)
+ 
+ #define ESR_ELx_EC_SHIFT	(26)
++#define ESR_ELx_EC_WIDTH	(6)
+ #define ESR_ELx_EC_MASK		(UL(0x3F) << ESR_ELx_EC_SHIFT)
+ #define ESR_ELx_EC(esr)		(((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)
+ 
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index 10ffbc96ac31f..f3a70dc7c5942 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -69,9 +69,15 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+  * page table entry, taking care of 52-bit addresses.
+  */
+ #ifdef CONFIG_ARM64_PA_BITS_52
+-#define __pte_to_phys(pte)	\
+-	((pte_val(pte) & PTE_ADDR_LOW) | ((pte_val(pte) & PTE_ADDR_HIGH) << 36))
+-#define __phys_to_pte_val(phys)	(((phys) | ((phys) >> 36)) & PTE_ADDR_MASK)
++static inline phys_addr_t __pte_to_phys(pte_t pte)
++{
++	return (pte_val(pte) & PTE_ADDR_LOW) |
++		((pte_val(pte) & PTE_ADDR_HIGH) << 36);
++}
++static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
++{
++	return (phys | (phys >> 36)) & PTE_ADDR_MASK;
++}
+ #else
+ #define __pte_to_phys(pte)	(pte_val(pte) & PTE_ADDR_MASK)
+ #define __phys_to_pte_val(phys)	(phys)
+diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
+index 0a5b36eb54b3e..bcbead3746c66 100644
+--- a/arch/arm64/kvm/hyp/hyp-entry.S
++++ b/arch/arm64/kvm/hyp/hyp-entry.S
+@@ -43,7 +43,7 @@
+ el1_sync:				// Guest trapped into EL2
+ 
+ 	mrs	x0, esr_el2
+-	lsr	x0, x0, #ESR_ELx_EC_SHIFT
++	ubfx	x0, x0, #ESR_ELx_EC_SHIFT, #ESR_ELx_EC_WIDTH
+ 	cmp	x0, #ESR_ELx_EC_HVC64
+ 	ccmp	x0, #ESR_ELx_EC_HVC32, #4, ne
+ 	b.ne	el1_trap
+diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
+index 4ce934fc1f72a..4f57a803d1c8d 100644
+--- a/arch/arm64/kvm/hyp/nvhe/host.S
++++ b/arch/arm64/kvm/hyp/nvhe/host.S
+@@ -97,7 +97,7 @@ SYM_FUNC_END(__hyp_do_panic)
+ .L__vect_start\@:
+ 	stp	x0, x1, [sp, #-16]!
+ 	mrs	x0, esr_el2
+-	lsr	x0, x0, #ESR_ELx_EC_SHIFT
++	ubfx	x0, x0, #ESR_ELx_EC_SHIFT, #ESR_ELx_EC_WIDTH
+ 	cmp	x0, #ESR_ELx_EC_HVC64
+ 	ldp	x0, x1, [sp], #16
+ 	b.ne	__host_exit
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 58dc93e566179..2601a514d8c4a 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -1492,6 +1492,11 @@ int arch_add_memory(int nid, u64 start, u64 size,
+ 	if (ret)
+ 		__remove_pgd_mapping(swapper_pg_dir,
+ 				     __phys_to_virt(start), size);
++	else {
++		max_pfn = PFN_UP(start + size);
++		max_low_pfn = max_pfn;
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 345066b8e9fc8..064577ff9ff59 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -1134,6 +1134,11 @@ out:
+ 	return prog;
+ }
+ 
++u64 bpf_jit_alloc_exec_limit(void)
++{
++	return BPF_JIT_REGION_SIZE;
++}
++
+ void *bpf_jit_alloc_exec(unsigned long size)
+ {
+ 	return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START,
+diff --git a/arch/ia64/Kconfig.debug b/arch/ia64/Kconfig.debug
+index 40ca23bd228d6..2ce008e2d1644 100644
+--- a/arch/ia64/Kconfig.debug
++++ b/arch/ia64/Kconfig.debug
+@@ -39,7 +39,7 @@ config DISABLE_VHPT
+ 
+ config IA64_DEBUG_CMPXCHG
+ 	bool "Turn on compare-and-exchange bug checking (slow!)"
+-	depends on DEBUG_KERNEL
++	depends on DEBUG_KERNEL && PRINTK
+ 	help
+ 	  Selecting this option turns on bug checking for the IA-64
+ 	  compare-and-exchange instructions.  This is slow!  Itaniums
+diff --git a/arch/ia64/kernel/kprobes.c b/arch/ia64/kernel/kprobes.c
+index fc1ff8a4d7de6..ca4b4fa45aef5 100644
+--- a/arch/ia64/kernel/kprobes.c
++++ b/arch/ia64/kernel/kprobes.c
+@@ -398,7 +398,8 @@ static void kretprobe_trampoline(void)
+ 
+ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
+ {
+-	regs->cr_iip = __kretprobe_trampoline_handler(regs, kretprobe_trampoline, NULL);
++	regs->cr_iip = __kretprobe_trampoline_handler(regs,
++		dereference_function_descriptor(kretprobe_trampoline), NULL);
+ 	/*
+ 	 * By returning a non-zero value, we are telling
+ 	 * kprobe_handler() that we don't want the post_handler
+@@ -414,7 +415,7 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
+ 	ri->fp = NULL;
+ 
+ 	/* Replace the return addr with trampoline addr */
+-	regs->b0 = ((struct fnptr *)kretprobe_trampoline)->ip;
++	regs->b0 = (unsigned long)dereference_function_descriptor(kretprobe_trampoline);
+ }
+ 
+ /* Check the instruction in the slot is break */
+@@ -918,14 +919,14 @@ static struct kprobe trampoline_p = {
+ int __init arch_init_kprobes(void)
+ {
+ 	trampoline_p.addr =
+-		(kprobe_opcode_t *)((struct fnptr *)kretprobe_trampoline)->ip;
++		dereference_function_descriptor(kretprobe_trampoline);
+ 	return register_kprobe(&trampoline_p);
+ }
+ 
+ int __kprobes arch_trampoline_kprobe(struct kprobe *p)
+ {
+ 	if (p->addr ==
+-		(kprobe_opcode_t *)((struct fnptr *)kretprobe_trampoline)->ip)
++		dereference_function_descriptor(kretprobe_trampoline))
+ 		return 1;
+ 
+ 	return 0;
+diff --git a/arch/m68k/Kconfig.machine b/arch/m68k/Kconfig.machine
+index e161a4e1493b4..51a878803fb6d 100644
+--- a/arch/m68k/Kconfig.machine
++++ b/arch/m68k/Kconfig.machine
+@@ -191,6 +191,7 @@ config INIT_LCD
+ config MEMORY_RESERVE
+ 	int "Memory reservation (MiB)"
+ 	depends on (UCSIMM || UCDIMM)
++	default 0
+ 	help
+ 	  Reserve certain memory regions on 68x328 based boards.
+ 
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 1a63f592034eb..5c6e9ed9b2a75 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -1380,6 +1380,7 @@ config CPU_LOONGSON64
+ 	select MIPS_ASID_BITS_VARIABLE
+ 	select MIPS_PGD_C0_CONTEXT
+ 	select MIPS_L1_CACHE_SHIFT_6
++	select MIPS_FP_SUPPORT
+ 	select GPIOLIB
+ 	select SWIOTLB
+ 	select HAVE_KVM
+diff --git a/arch/mips/include/asm/cmpxchg.h b/arch/mips/include/asm/cmpxchg.h
+index ed8f3f3c4304a..3e9c41f691659 100644
+--- a/arch/mips/include/asm/cmpxchg.h
++++ b/arch/mips/include/asm/cmpxchg.h
+@@ -249,6 +249,7 @@ static inline unsigned long __cmpxchg64(volatile void *ptr,
+ 	/* Load 64 bits from ptr */
+ 	"	" __SYNC(full, loongson3_war) "		\n"
+ 	"1:	lld	%L0, %3		# __cmpxchg64	\n"
++	"	.set	pop				\n"
+ 	/*
+ 	 * Split the 64 bit value we loaded into the 2 registers that hold the
+ 	 * ret variable.
+@@ -276,12 +277,14 @@ static inline unsigned long __cmpxchg64(volatile void *ptr,
+ 	"	or	%L1, %L1, $at			\n"
+ 	"	.set	at				\n"
+ #  endif
++	"	.set	push				\n"
++	"	.set	" MIPS_ISA_ARCH_LEVEL "		\n"
+ 	/* Attempt to store new at ptr */
+ 	"	scd	%L1, %2				\n"
+ 	/* If we failed, loop! */
+ 	"\t" __SC_BEQZ "%L1, 1b				\n"
+-	"	.set	pop				\n"
+ 	"2:	" __SYNC(full, loongson3_war) "		\n"
++	"	.set	pop				\n"
+ 	: "=&r"(ret),
+ 	  "=&r"(tmp),
+ 	  "=" GCC_OFF_SMALL_ASM() (*(unsigned long long *)ptr)
+diff --git a/arch/mips/include/asm/mips-cm.h b/arch/mips/include/asm/mips-cm.h
+index aeae2effa123d..23c67c0871b17 100644
+--- a/arch/mips/include/asm/mips-cm.h
++++ b/arch/mips/include/asm/mips-cm.h
+@@ -11,6 +11,7 @@
+ #ifndef __MIPS_ASM_MIPS_CM_H__
+ #define __MIPS_ASM_MIPS_CM_H__
+ 
++#include <linux/bitfield.h>
+ #include <linux/bitops.h>
+ #include <linux/errno.h>
+ 
+@@ -153,8 +154,8 @@ GCR_ACCESSOR_RO(32, 0x030, rev)
+ #define CM_GCR_REV_MINOR			GENMASK(7, 0)
+ 
+ #define CM_ENCODE_REV(major, minor) \
+-		(((major) << __ffs(CM_GCR_REV_MAJOR)) | \
+-		 ((minor) << __ffs(CM_GCR_REV_MINOR)))
++		(FIELD_PREP(CM_GCR_REV_MAJOR, major) | \
++		 FIELD_PREP(CM_GCR_REV_MINOR, minor))
+ 
+ #define CM_REV_CM2				CM_ENCODE_REV(6, 0)
+ #define CM_REV_CM2_5				CM_ENCODE_REV(7, 0)
+@@ -362,10 +363,10 @@ static inline int mips_cm_revision(void)
+ static inline unsigned int mips_cm_max_vp_width(void)
+ {
+ 	extern int smp_num_siblings;
+-	uint32_t cfg;
+ 
+ 	if (mips_cm_revision() >= CM_REV_CM3)
+-		return read_gcr_sys_config2() & CM_GCR_SYS_CONFIG2_MAXVPW;
++		return FIELD_GET(CM_GCR_SYS_CONFIG2_MAXVPW,
++				 read_gcr_sys_config2());
+ 
+ 	if (mips_cm_present()) {
+ 		/*
+@@ -373,8 +374,7 @@ static inline unsigned int mips_cm_max_vp_width(void)
+ 		 * number of VP(E)s, and if that ever changes then this will
+ 		 * need revisiting.
+ 		 */
+-		cfg = read_gcr_cl_config() & CM_GCR_Cx_CONFIG_PVPE;
+-		return (cfg >> __ffs(CM_GCR_Cx_CONFIG_PVPE)) + 1;
++		return FIELD_GET(CM_GCR_Cx_CONFIG_PVPE, read_gcr_cl_config()) + 1;
+ 	}
+ 
+ 	if (IS_ENABLED(CONFIG_SMP))
+diff --git a/arch/mips/kernel/mips-cm.c b/arch/mips/kernel/mips-cm.c
+index f60af512c8773..72c8374a39002 100644
+--- a/arch/mips/kernel/mips-cm.c
++++ b/arch/mips/kernel/mips-cm.c
+@@ -221,8 +221,7 @@ static void mips_cm_probe_l2sync(void)
+ 	phys_addr_t addr;
+ 
+ 	/* L2-only sync was introduced with CM major revision 6 */
+-	major_rev = (read_gcr_rev() & CM_GCR_REV_MAJOR) >>
+-		__ffs(CM_GCR_REV_MAJOR);
++	major_rev = FIELD_GET(CM_GCR_REV_MAJOR, read_gcr_rev());
+ 	if (major_rev < 6)
+ 		return;
+ 
+@@ -305,13 +304,13 @@ void mips_cm_lock_other(unsigned int cluster, unsigned int core,
+ 	preempt_disable();
+ 
+ 	if (cm_rev >= CM_REV_CM3) {
+-		val = core << __ffs(CM3_GCR_Cx_OTHER_CORE);
+-		val |= vp << __ffs(CM3_GCR_Cx_OTHER_VP);
++		val = FIELD_PREP(CM3_GCR_Cx_OTHER_CORE, core) |
++		      FIELD_PREP(CM3_GCR_Cx_OTHER_VP, vp);
+ 
+ 		if (cm_rev >= CM_REV_CM3_5) {
+ 			val |= CM_GCR_Cx_OTHER_CLUSTER_EN;
+-			val |= cluster << __ffs(CM_GCR_Cx_OTHER_CLUSTER);
+-			val |= block << __ffs(CM_GCR_Cx_OTHER_BLOCK);
++			val |= FIELD_PREP(CM_GCR_Cx_OTHER_CLUSTER, cluster);
++			val |= FIELD_PREP(CM_GCR_Cx_OTHER_BLOCK, block);
+ 		} else {
+ 			WARN_ON(cluster != 0);
+ 			WARN_ON(block != CM_GCR_Cx_OTHER_BLOCK_LOCAL);
+@@ -341,7 +340,7 @@ void mips_cm_lock_other(unsigned int cluster, unsigned int core,
+ 		spin_lock_irqsave(&per_cpu(cm_core_lock, curr_core),
+ 				  per_cpu(cm_core_lock_flags, curr_core));
+ 
+-		val = core << __ffs(CM_GCR_Cx_OTHER_CORENUM);
++		val = FIELD_PREP(CM_GCR_Cx_OTHER_CORENUM, core);
+ 	}
+ 
+ 	write_gcr_cl_other(val);
+@@ -385,8 +384,8 @@ void mips_cm_error_report(void)
+ 	cm_other = read_gcr_error_mult();
+ 
+ 	if (revision < CM_REV_CM3) { /* CM2 */
+-		cause = cm_error >> __ffs(CM_GCR_ERROR_CAUSE_ERRTYPE);
+-		ocause = cm_other >> __ffs(CM_GCR_ERROR_MULT_ERR2ND);
++		cause = FIELD_GET(CM_GCR_ERROR_CAUSE_ERRTYPE, cm_error);
++		ocause = FIELD_GET(CM_GCR_ERROR_MULT_ERR2ND, cm_other);
+ 
+ 		if (!cause)
+ 			return;
+@@ -444,8 +443,8 @@ void mips_cm_error_report(void)
+ 		ulong core_id_bits, vp_id_bits, cmd_bits, cmd_group_bits;
+ 		ulong cm3_cca_bits, mcp_bits, cm3_tr_bits, sched_bit;
+ 
+-		cause = cm_error >> __ffs64(CM3_GCR_ERROR_CAUSE_ERRTYPE);
+-		ocause = cm_other >> __ffs(CM_GCR_ERROR_MULT_ERR2ND);
++		cause = FIELD_GET(CM3_GCR_ERROR_CAUSE_ERRTYPE, cm_error);
++		ocause = FIELD_GET(CM_GCR_ERROR_MULT_ERR2ND, cm_other);
+ 
+ 		if (!cause)
+ 			return;
+diff --git a/arch/mips/kernel/r2300_fpu.S b/arch/mips/kernel/r2300_fpu.S
+index 12e58053544fc..cbf6db98cfb38 100644
+--- a/arch/mips/kernel/r2300_fpu.S
++++ b/arch/mips/kernel/r2300_fpu.S
+@@ -29,8 +29,8 @@
+ #define EX2(a,b)						\
+ 9:	a,##b;							\
+ 	.section __ex_table,"a";				\
+-	PTR	9b,bad_stack;					\
+-	PTR	9b+4,bad_stack;					\
++	PTR	9b,fault;					\
++	PTR	9b+4,fault;					\
+ 	.previous
+ 
+ 	.set	mips1
+diff --git a/arch/mips/kernel/syscall.c b/arch/mips/kernel/syscall.c
+index 2afa3eef486a9..5512cd586e6e8 100644
+--- a/arch/mips/kernel/syscall.c
++++ b/arch/mips/kernel/syscall.c
+@@ -240,12 +240,3 @@ SYSCALL_DEFINE3(cachectl, char *, addr, int, nbytes, int, op)
+ {
+ 	return -ENOSYS;
+ }
+-
+-/*
+- * If we ever come here the user sp is bad.  Zap the process right away.
+- * Due to the bad stack signaling wouldn't work.
+- */
+-asmlinkage void bad_stack(void)
+-{
+-	do_exit(SIGSEGV);
+-}
+diff --git a/arch/mips/lantiq/xway/dma.c b/arch/mips/lantiq/xway/dma.c
+index aeb1b989cd4ee..ab13e257132af 100644
+--- a/arch/mips/lantiq/xway/dma.c
++++ b/arch/mips/lantiq/xway/dma.c
+@@ -11,6 +11,7 @@
+ #include <linux/export.h>
+ #include <linux/spinlock.h>
+ #include <linux/clk.h>
++#include <linux/delay.h>
+ #include <linux/err.h>
+ 
+ #include <lantiq_soc.h>
+@@ -29,6 +30,7 @@
+ #define LTQ_DMA_PCTRL		0x44
+ #define LTQ_DMA_IRNEN		0xf4
+ 
++#define DMA_ID_CHNR		GENMASK(26, 20)	/* channel number */
+ #define DMA_DESCPT		BIT(3)		/* descriptor complete irq */
+ #define DMA_TX			BIT(8)		/* TX channel direction */
+ #define DMA_CHAN_ON		BIT(0)		/* channel on / off bit */
+@@ -38,8 +40,11 @@
+ #define DMA_IRQ_ACK		0x7e		/* IRQ status register */
+ #define DMA_POLL		BIT(31)		/* turn on channel polling */
+ #define DMA_CLK_DIV4		BIT(6)		/* polling clock divider */
+-#define DMA_2W_BURST		BIT(1)		/* 2 word burst length */
+-#define DMA_MAX_CHANNEL		20		/* the soc has 20 channels */
++#define DMA_PCTRL_2W_BURST	0x1		/* 2 word burst length */
++#define DMA_PCTRL_4W_BURST	0x2		/* 4 word burst length */
++#define DMA_PCTRL_8W_BURST	0x3		/* 8 word burst length */
++#define DMA_TX_BURST_SHIFT	4		/* tx burst shift */
++#define DMA_RX_BURST_SHIFT	2		/* rx burst shift */
+ #define DMA_ETOP_ENDIANNESS	(0xf << 8) /* endianness swap etop channels */
+ #define DMA_WEIGHT	(BIT(17) | BIT(16))	/* default channel wheight */
+ 
+@@ -190,7 +195,8 @@ ltq_dma_init_port(int p)
+ 		break;
+ 
+ 	case DMA_PORT_DEU:
+-		ltq_dma_w32((DMA_2W_BURST << 4) | (DMA_2W_BURST << 2),
++		ltq_dma_w32((DMA_PCTRL_2W_BURST << DMA_TX_BURST_SHIFT) |
++			(DMA_PCTRL_2W_BURST << DMA_RX_BURST_SHIFT),
+ 			LTQ_DMA_PCTRL);
+ 		break;
+ 
+@@ -205,7 +211,7 @@ ltq_dma_init(struct platform_device *pdev)
+ {
+ 	struct clk *clk;
+ 	struct resource *res;
+-	unsigned id;
++	unsigned int id, nchannels;
+ 	int i;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+@@ -221,21 +227,24 @@ ltq_dma_init(struct platform_device *pdev)
+ 	clk_enable(clk);
+ 	ltq_dma_w32_mask(0, DMA_RESET, LTQ_DMA_CTRL);
+ 
++	usleep_range(1, 10);
++
+ 	/* disable all interrupts */
+ 	ltq_dma_w32(0, LTQ_DMA_IRNEN);
+ 
+ 	/* reset/configure each channel */
+-	for (i = 0; i < DMA_MAX_CHANNEL; i++) {
++	id = ltq_dma_r32(LTQ_DMA_ID);
++	nchannels = ((id & DMA_ID_CHNR) >> 20);
++	for (i = 0; i < nchannels; i++) {
+ 		ltq_dma_w32(i, LTQ_DMA_CS);
+ 		ltq_dma_w32(DMA_CHAN_RST, LTQ_DMA_CCTRL);
+ 		ltq_dma_w32(DMA_POLL | DMA_CLK_DIV4, LTQ_DMA_CPOLL);
+ 		ltq_dma_w32_mask(DMA_CHAN_ON, 0, LTQ_DMA_CCTRL);
+ 	}
+ 
+-	id = ltq_dma_r32(LTQ_DMA_ID);
+ 	dev_info(&pdev->dev,
+ 		"Init done - hw rev: %X, ports: %d, channels: %d\n",
+-		id & 0x1f, (id >> 16) & 0xf, id >> 20);
++		id & 0x1f, (id >> 16) & 0xf, nchannels);
+ 
+ 	return 0;
+ }
+diff --git a/arch/openrisc/kernel/dma.c b/arch/openrisc/kernel/dma.c
+index 1b16d97e7da7f..a82b2caaa560d 100644
+--- a/arch/openrisc/kernel/dma.c
++++ b/arch/openrisc/kernel/dma.c
+@@ -33,7 +33,7 @@ page_set_nocache(pte_t *pte, unsigned long addr,
+ 	 * Flush the page out of the TLB so that the new page flags get
+ 	 * picked up next time there's an access
+ 	 */
+-	flush_tlb_page(NULL, addr);
++	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+ 
+ 	/* Flush page out of dcache */
+ 	for (cl = __pa(addr); cl < __pa(next); cl += cpuinfo->dcache_block_size)
+@@ -56,7 +56,7 @@ page_clear_nocache(pte_t *pte, unsigned long addr,
+ 	 * Flush the page out of the TLB so that the new page flags get
+ 	 * picked up next time there's an access
+ 	 */
+-	flush_tlb_page(NULL, addr);
++	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+ 
+ 	return 0;
+ }
+diff --git a/arch/openrisc/kernel/smp.c b/arch/openrisc/kernel/smp.c
+index e4dad76066aed..18b320a06fe56 100644
+--- a/arch/openrisc/kernel/smp.c
++++ b/arch/openrisc/kernel/smp.c
+@@ -261,7 +261,7 @@ static inline void ipi_flush_tlb_range(void *info)
+ 	local_flush_tlb_range(NULL, fd->addr1, fd->addr2);
+ }
+ 
+-static void smp_flush_tlb_range(struct cpumask *cmask, unsigned long start,
++static void smp_flush_tlb_range(const struct cpumask *cmask, unsigned long start,
+ 				unsigned long end)
+ {
+ 	unsigned int cpuid;
+@@ -309,7 +309,9 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
+ void flush_tlb_range(struct vm_area_struct *vma,
+ 		     unsigned long start, unsigned long end)
+ {
+-	smp_flush_tlb_range(mm_cpumask(vma->vm_mm), start, end);
++	const struct cpumask *cmask = vma ? mm_cpumask(vma->vm_mm)
++					  : cpu_online_mask;
++	smp_flush_tlb_range(cmask, start, end);
+ }
+ 
+ /* Instruction cache invalidate - performed on each cpu */
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index 5d8123eb38ec5..9c76d50a5654b 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -1848,7 +1848,7 @@ syscall_restore:
+ 	LDREG	TI_TASK-THREAD_SZ_ALGN-FRAME_SIZE(%r30),%r1
+ 
+ 	/* Are we being ptraced? */
+-	ldw	TASK_FLAGS(%r1),%r19
++	LDREG	TI_FLAGS-THREAD_SZ_ALGN-FRAME_SIZE(%r30),%r19
+ 	ldi	_TIF_SYSCALL_TRACE_MASK,%r2
+ 	and,COND(=)	%r19,%r2,%r0
+ 	b,n	syscall_restore_rfi
+diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
+index 1405b603b91b6..cf92ece20b757 100644
+--- a/arch/parisc/kernel/smp.c
++++ b/arch/parisc/kernel/smp.c
+@@ -29,6 +29,7 @@
+ #include <linux/bitops.h>
+ #include <linux/ftrace.h>
+ #include <linux/cpu.h>
++#include <linux/kgdb.h>
+ 
+ #include <linux/atomic.h>
+ #include <asm/current.h>
+@@ -69,7 +70,10 @@ enum ipi_message_type {
+ 	IPI_CALL_FUNC,
+ 	IPI_CPU_START,
+ 	IPI_CPU_STOP,
+-	IPI_CPU_TEST
++	IPI_CPU_TEST,
++#ifdef CONFIG_KGDB
++	IPI_ENTER_KGDB,
++#endif
+ };
+ 
+ 
+@@ -167,7 +171,12 @@ ipi_interrupt(int irq, void *dev_id)
+ 			case IPI_CPU_TEST:
+ 				smp_debug(100, KERN_DEBUG "CPU%d is alive!\n", this_cpu);
+ 				break;
+-
++#ifdef CONFIG_KGDB
++			case IPI_ENTER_KGDB:
++				smp_debug(100, KERN_DEBUG "CPU%d ENTER_KGDB\n", this_cpu);
++				kgdb_nmicallback(raw_smp_processor_id(), get_irq_regs());
++				break;
++#endif
+ 			default:
+ 				printk(KERN_CRIT "Unknown IPI num on CPU%d: %lu\n",
+ 					this_cpu, which);
+@@ -226,6 +235,12 @@ send_IPI_allbutself(enum ipi_message_type op)
+ 	}
+ }
+ 
++#ifdef CONFIG_KGDB
++void kgdb_roundup_cpus(void)
++{
++	send_IPI_allbutself(IPI_ENTER_KGDB);
++}
++#endif
+ 
+ inline void 
+ smp_send_stop(void)	{ send_IPI_allbutself(IPI_CPU_STOP); }
+diff --git a/arch/parisc/kernel/unwind.c b/arch/parisc/kernel/unwind.c
+index 87ae476d1c4f5..86a57fb0e6fae 100644
+--- a/arch/parisc/kernel/unwind.c
++++ b/arch/parisc/kernel/unwind.c
+@@ -21,6 +21,8 @@
+ #include <asm/ptrace.h>
+ 
+ #include <asm/unwind.h>
++#include <asm/switch_to.h>
++#include <asm/sections.h>
+ 
+ /* #define DEBUG 1 */
+ #ifdef DEBUG
+@@ -203,6 +205,11 @@ int __init unwind_init(void)
+ 	return 0;
+ }
+ 
++static bool pc_is_kernel_fn(unsigned long pc, void *fn)
++{
++	return (unsigned long)dereference_kernel_function_descriptor(fn) == pc;
++}
++
+ static int unwind_special(struct unwind_frame_info *info, unsigned long pc, int frame_size)
+ {
+ 	/*
+@@ -221,7 +228,7 @@ static int unwind_special(struct unwind_frame_info *info, unsigned long pc, int
+ 	extern void * const _call_on_stack;
+ #endif /* CONFIG_IRQSTACKS */
+ 
+-	if (pc == (unsigned long) &handle_interruption) {
++	if (pc_is_kernel_fn(pc, handle_interruption)) {
+ 		struct pt_regs *regs = (struct pt_regs *)(info->sp - frame_size - PT_SZ_ALGN);
+ 		dbg("Unwinding through handle_interruption()\n");
+ 		info->prev_sp = regs->gr[30];
+@@ -229,13 +236,13 @@ static int unwind_special(struct unwind_frame_info *info, unsigned long pc, int
+ 		return 1;
+ 	}
+ 
+-	if (pc == (unsigned long) &ret_from_kernel_thread ||
+-	    pc == (unsigned long) &syscall_exit) {
++	if (pc_is_kernel_fn(pc, ret_from_kernel_thread) ||
++	    pc_is_kernel_fn(pc, syscall_exit)) {
+ 		info->prev_sp = info->prev_ip = 0;
+ 		return 1;
+ 	}
+ 
+-	if (pc == (unsigned long) &intr_return) {
++	if (pc_is_kernel_fn(pc, intr_return)) {
+ 		struct pt_regs *regs;
+ 
+ 		dbg("Found intr_return()\n");
+@@ -246,20 +253,20 @@ static int unwind_special(struct unwind_frame_info *info, unsigned long pc, int
+ 		return 1;
+ 	}
+ 
+-	if (pc == (unsigned long) &_switch_to_ret) {
++	if (pc_is_kernel_fn(pc, _switch_to) ||
++	    pc_is_kernel_fn(pc, _switch_to_ret)) {
+ 		info->prev_sp = info->sp - CALLEE_SAVE_FRAME_SIZE;
+ 		info->prev_ip = *(unsigned long *)(info->prev_sp - RP_OFFSET);
+ 		return 1;
+ 	}
+ 
+ #ifdef CONFIG_IRQSTACKS
+-	if (pc == (unsigned long) &_call_on_stack) {
++	if (pc_is_kernel_fn(pc, _call_on_stack)) {
+ 		info->prev_sp = *(unsigned long *)(info->sp - FRAME_SIZE - REG_SZ);
+ 		info->prev_ip = *(unsigned long *)(info->sp - FRAME_SIZE - RP_OFFSET);
+ 		return 1;
+ 	}
+ #endif
+-
+ 	return 0;
+ }
+ 
+diff --git a/arch/parisc/kernel/vmlinux.lds.S b/arch/parisc/kernel/vmlinux.lds.S
+index 2769eb991f58d..3d208afd15bc6 100644
+--- a/arch/parisc/kernel/vmlinux.lds.S
++++ b/arch/parisc/kernel/vmlinux.lds.S
+@@ -57,6 +57,8 @@ SECTIONS
+ {
+ 	. = KERNEL_BINARY_TEXT_START;
+ 
++	_stext = .;	/* start of kernel text, includes init code & data */
++
+ 	__init_begin = .;
+ 	HEAD_TEXT_SECTION
+ 	MLONGCALL_DISCARD(INIT_TEXT_SECTION(8))
+@@ -80,7 +82,6 @@ SECTIONS
+ 	/* freed after init ends here */
+ 
+ 	_text = .;		/* Text and read-only data */
+-	_stext = .;
+ 	MLONGCALL_KEEP(INIT_TEXT_SECTION(8))
+ 	.text ALIGN(PAGE_SIZE) : {
+ 		TEXT_TEXT
+diff --git a/arch/parisc/mm/fixmap.c b/arch/parisc/mm/fixmap.c
+index 24426a7e1a5e5..cc15d737fda64 100644
+--- a/arch/parisc/mm/fixmap.c
++++ b/arch/parisc/mm/fixmap.c
+@@ -20,12 +20,9 @@ void notrace set_fixmap(enum fixed_addresses idx, phys_addr_t phys)
+ 	pte_t *pte;
+ 
+ 	if (pmd_none(*pmd))
+-		pmd = pmd_alloc(NULL, pud, vaddr);
+-
+-	pte = pte_offset_kernel(pmd, vaddr);
+-	if (pte_none(*pte))
+ 		pte = pte_alloc_kernel(pmd, vaddr);
+ 
++	pte = pte_offset_kernel(pmd, vaddr);
+ 	set_pte_at(&init_mm, vaddr, pte, __mk_pte(phys, PAGE_KERNEL_RWX));
+ 	flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
+ }
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 3ec633b11b542..8f10cc6ee0fce 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -844,9 +844,9 @@ void flush_tlb_all(void)
+ {
+ 	int do_recycle;
+ 
+-	__inc_irq_stat(irq_tlb_count);
+ 	do_recycle = 0;
+ 	spin_lock(&sid_lock);
++	__inc_irq_stat(irq_tlb_count);
+ 	if (dirty_space_ids > RECYCLE_THRESHOLD) {
+ 	    BUG_ON(recycle_inuse);  /* FIXME: Use a semaphore/wait queue here */
+ 	    get_dirty_sids(&recycle_ndirty,recycle_dirty_array);
+@@ -865,8 +865,8 @@ void flush_tlb_all(void)
+ #else
+ void flush_tlb_all(void)
+ {
+-	__inc_irq_stat(irq_tlb_count);
+ 	spin_lock(&sid_lock);
++	__inc_irq_stat(irq_tlb_count);
+ 	flush_tlb_all_local(NULL);
+ 	recycle_sids();
+ 	spin_unlock(&sid_lock);
+diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
+index d5b3c3bb95b40..fa8746c320067 100644
+--- a/arch/powerpc/include/asm/code-patching.h
++++ b/arch/powerpc/include/asm/code-patching.h
+@@ -23,6 +23,7 @@
+ #define BRANCH_ABSOLUTE	0x2
+ 
+ bool is_offset_in_branch_range(long offset);
++bool is_offset_in_cond_branch_range(long offset);
+ int create_branch(struct ppc_inst *instr, const struct ppc_inst *addr,
+ 		  unsigned long target, int flags);
+ int create_cond_branch(struct ppc_inst *instr, const struct ppc_inst *addr,
+diff --git a/arch/powerpc/include/asm/firmware.h b/arch/powerpc/include/asm/firmware.h
+index 0b295bdb201e8..aa6a5ef5d4830 100644
+--- a/arch/powerpc/include/asm/firmware.h
++++ b/arch/powerpc/include/asm/firmware.h
+@@ -134,12 +134,6 @@ extern int ibm_nmi_interlock_token;
+ 
+ extern unsigned int __start___fw_ftr_fixup, __stop___fw_ftr_fixup;
+ 
+-#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
+-bool is_kvm_guest(void);
+-#else
+-static inline bool is_kvm_guest(void) { return false; }
+-#endif
+-
+ #ifdef CONFIG_PPC_PSERIES
+ void pseries_probe_fw_features(void);
+ #else
+diff --git a/arch/powerpc/include/asm/kvm_guest.h b/arch/powerpc/include/asm/kvm_guest.h
+new file mode 100644
+index 0000000000000..c63105d2c9e7c
+--- /dev/null
++++ b/arch/powerpc/include/asm/kvm_guest.h
+@@ -0,0 +1,25 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (C) 2020 IBM Corporation
++ */
++
++#ifndef _ASM_POWERPC_KVM_GUEST_H_
++#define _ASM_POWERPC_KVM_GUEST_H_
++
++#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
++#include <linux/jump_label.h>
++
++DECLARE_STATIC_KEY_FALSE(kvm_guest);
++
++static inline bool is_kvm_guest(void)
++{
++	return static_branch_unlikely(&kvm_guest);
++}
++
++int check_kvm_guest(void);
++#else
++static inline bool is_kvm_guest(void) { return false; }
++static inline int check_kvm_guest(void) { return 0; }
++#endif
++
++#endif /* _ASM_POWERPC_KVM_GUEST_H_ */
+diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
+index 744612054c94c..abe1b5e82547b 100644
+--- a/arch/powerpc/include/asm/kvm_para.h
++++ b/arch/powerpc/include/asm/kvm_para.h
+@@ -8,7 +8,7 @@
+ #ifndef __POWERPC_KVM_PARA_H__
+ #define __POWERPC_KVM_PARA_H__
+ 
+-#include <asm/firmware.h>
++#include <asm/kvm_guest.h>
+ 
+ #include <uapi/asm/kvm_para.h>
+ 
+diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
+index b774a4477d5f1..e380acc6e4132 100644
+--- a/arch/powerpc/include/asm/security_features.h
++++ b/arch/powerpc/include/asm/security_features.h
+@@ -39,6 +39,11 @@ static inline bool security_ftr_enabled(u64 feature)
+ 	return !!(powerpc_security_features & feature);
+ }
+ 
++#ifdef CONFIG_PPC_BOOK3S_64
++enum stf_barrier_type stf_barrier_type_get(void);
++#else
++static inline enum stf_barrier_type stf_barrier_type_get(void) { return STF_BARRIER_NONE; }
++#endif
+ 
+ // Features indicating support for Spectre/Meltdown mitigations
+ 
+diff --git a/arch/powerpc/kernel/firmware.c b/arch/powerpc/kernel/firmware.c
+index fe48d319d490e..20328f72f9f2b 100644
+--- a/arch/powerpc/kernel/firmware.c
++++ b/arch/powerpc/kernel/firmware.c
+@@ -14,6 +14,7 @@
+ #include <linux/of.h>
+ 
+ #include <asm/firmware.h>
++#include <asm/kvm_guest.h>
+ 
+ #ifdef CONFIG_PPC64
+ unsigned long powerpc_firmware_features __read_mostly;
+@@ -21,7 +22,8 @@ EXPORT_SYMBOL_GPL(powerpc_firmware_features);
+ #endif
+ 
+ #if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
+-bool is_kvm_guest(void)
++DEFINE_STATIC_KEY_FALSE(kvm_guest);
++int __init check_kvm_guest(void)
+ {
+ 	struct device_node *hyper_node;
+ 
+@@ -29,9 +31,11 @@ bool is_kvm_guest(void)
+ 	if (!hyper_node)
+ 		return 0;
+ 
+-	if (!of_device_is_compatible(hyper_node, "linux,kvm"))
+-		return 0;
++	if (of_device_is_compatible(hyper_node, "linux,kvm"))
++		static_branch_enable(&kvm_guest);
+ 
+-	return 1;
++	of_node_put(hyper_node);
++	return 0;
+ }
++core_initcall(check_kvm_guest); // before kvm_guest_init()
+ #endif
+diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
+index e4e1a94ccf6a6..3f510c911b107 100644
+--- a/arch/powerpc/kernel/security.c
++++ b/arch/powerpc/kernel/security.c
+@@ -261,6 +261,11 @@ static int __init handle_no_stf_barrier(char *p)
+ 
+ early_param("no_stf_barrier", handle_no_stf_barrier);
+ 
++enum stf_barrier_type stf_barrier_type_get(void)
++{
++	return stf_enabled_flush_types;
++}
++
+ /* This is the generic flag used by other architectures */
+ static int __init handle_ssbd(char *p)
+ {
+diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
+index 2333625b5e315..a2e4f864b63d2 100644
+--- a/arch/powerpc/lib/code-patching.c
++++ b/arch/powerpc/lib/code-patching.c
+@@ -230,6 +230,11 @@ bool is_offset_in_branch_range(long offset)
+ 	return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
+ }
+ 
++bool is_offset_in_cond_branch_range(long offset)
++{
++	return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3);
++}
++
+ /*
+  * Helper to check if a given instruction is a conditional branch
+  * Derived from the conditional checks in analyse_instr()
+@@ -283,7 +288,7 @@ int create_cond_branch(struct ppc_inst *instr, const struct ppc_inst *addr,
+ 		offset = offset - (unsigned long)addr;
+ 
+ 	/* Check we can represent the target in the instruction format */
+-	if (offset < -0x8000 || offset > 0x7FFF || offset & 0x3)
++	if (!is_offset_in_cond_branch_range(offset))
+ 		return 1;
+ 
+ 	/* Mask out the flags and target, so they don't step on each other. */
+diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
+index d0a67a1bbaf18..1a5b4da8a2355 100644
+--- a/arch/powerpc/net/bpf_jit.h
++++ b/arch/powerpc/net/bpf_jit.h
+@@ -12,6 +12,7 @@
+ 
+ #include <asm/types.h>
+ #include <asm/ppc-opcode.h>
++#include <asm/code-patching.h>
+ 
+ #ifdef PPC64_ELF_ABI_v1
+ #define FUNCTION_DESCR_SIZE	24
+@@ -24,13 +25,26 @@
+ #define EMIT(instr)		PLANT_INSTR(image, ctx->idx, instr)
+ 
+ /* Long jump; (unconditional 'branch') */
+-#define PPC_JMP(dest)		EMIT(PPC_INST_BRANCH |			      \
+-				     (((dest) - (ctx->idx * 4)) & 0x03fffffc))
++#define PPC_JMP(dest)							      \
++	do {								      \
++		long offset = (long)(dest) - (ctx->idx * 4);		      \
++		if (!is_offset_in_branch_range(offset)) {		      \
++			pr_err_ratelimited("Branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);			\
++			return -ERANGE;					      \
++		}							      \
++		EMIT(PPC_INST_BRANCH | (offset & 0x03fffffc));		      \
++	} while (0)
+ /* "cond" here covers BO:BI fields. */
+-#define PPC_BCC_SHORT(cond, dest)	EMIT(PPC_INST_BRANCH_COND |	      \
+-					     (((cond) & 0x3ff) << 16) |	      \
+-					     (((dest) - (ctx->idx * 4)) &     \
+-					      0xfffc))
++#define PPC_BCC_SHORT(cond, dest)					      \
++	do {								      \
++		long offset = (long)(dest) - (ctx->idx * 4);		      \
++		if (!is_offset_in_cond_branch_range(offset)) {		      \
++			pr_err_ratelimited("Conditional branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);		\
++			return -ERANGE;					      \
++		}							      \
++		EMIT(PPC_INST_BRANCH_COND | (((cond) & 0x3ff) << 16) | (offset & 0xfffc));					\
++	} while (0)
++
+ /* Sign-extended 32-bit immediate load */
+ #define PPC_LI32(d, i)		do {					      \
+ 		if ((int)(uintptr_t)(i) >= -32768 &&			      \
+@@ -71,11 +85,6 @@
+ #define PPC_FUNC_ADDR(d,i) do { PPC_LI32(d, i); } while(0)
+ #endif
+ 
+-static inline bool is_nearbranch(int offset)
+-{
+-	return (offset < 32768) && (offset >= -32768);
+-}
+-
+ /*
+  * The fly in the ointment of code size changing from pass to pass is
+  * avoided by padding the short branch case with a NOP.	 If code size differs
+@@ -84,7 +93,7 @@ static inline bool is_nearbranch(int offset)
+  * state.
+  */
+ #define PPC_BCC(cond, dest)	do {					      \
+-		if (is_nearbranch((dest) - (ctx->idx * 4))) {		      \
++		if (is_offset_in_cond_branch_range((long)(dest) - (ctx->idx * 4))) {	\
+ 			PPC_BCC_SHORT(cond, dest);			      \
+ 			EMIT(PPC_RAW_NOP());				      \
+ 		} else {						      \
+diff --git a/arch/powerpc/net/bpf_jit64.h b/arch/powerpc/net/bpf_jit64.h
+index 2e33c6673ff95..4d164e865b392 100644
+--- a/arch/powerpc/net/bpf_jit64.h
++++ b/arch/powerpc/net/bpf_jit64.h
+@@ -16,18 +16,18 @@
+  * with our redzone usage.
+  *
+  *		[	prev sp		] <-------------
+- *		[   nv gpr save area	] 6*8		|
++ *		[   nv gpr save area	] 5*8		|
+  *		[    tail_call_cnt	] 8		|
+- *		[    local_tmp_var	] 8		|
++ *		[    local_tmp_var	] 16		|
+  * fp (r31) -->	[   ebpf stack space	] upto 512	|
+  *		[     frame header	] 32/112	|
+  * sp (r1) --->	[    stack pointer	] --------------
+  */
+ 
+ /* for gpr non volatile registers BPG_REG_6 to 10 */
+-#define BPF_PPC_STACK_SAVE	(6*8)
++#define BPF_PPC_STACK_SAVE	(5*8)
+ /* for bpf JIT code internal usage */
+-#define BPF_PPC_STACK_LOCALS	16
++#define BPF_PPC_STACK_LOCALS	24
+ /* stack frame excluding BPF stack, ensure this is quadword aligned */
+ #define BPF_PPC_STACKFRAME	(STACK_FRAME_MIN_SIZE + \
+ 				 BPF_PPC_STACK_LOCALS + BPF_PPC_STACK_SAVE)
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index a2750d6ffd0f5..8936090acb579 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -15,6 +15,7 @@
+ #include <linux/if_vlan.h>
+ #include <asm/kprobes.h>
+ #include <linux/bpf.h>
++#include <asm/security_features.h>
+ 
+ #include "bpf_jit64.h"
+ 
+@@ -56,9 +57,9 @@ static inline bool bpf_has_stack_frame(struct codegen_context *ctx)
+  *		[	prev sp		] <-------------
+  *		[	  ...       	] 		|
+  * sp (r1) --->	[    stack pointer	] --------------
+- *		[   nv gpr save area	] 6*8
++ *		[   nv gpr save area	] 5*8
+  *		[    tail_call_cnt	] 8
+- *		[    local_tmp_var	] 8
++ *		[    local_tmp_var	] 16
+  *		[   unused red zone	] 208 bytes protected
+  */
+ static int bpf_jit_stack_local(struct codegen_context *ctx)
+@@ -66,12 +67,12 @@ static int bpf_jit_stack_local(struct codegen_context *ctx)
+ 	if (bpf_has_stack_frame(ctx))
+ 		return STACK_FRAME_MIN_SIZE + ctx->stack_size;
+ 	else
+-		return -(BPF_PPC_STACK_SAVE + 16);
++		return -(BPF_PPC_STACK_SAVE + 24);
+ }
+ 
+ static int bpf_jit_stack_tailcallcnt(struct codegen_context *ctx)
+ {
+-	return bpf_jit_stack_local(ctx) + 8;
++	return bpf_jit_stack_local(ctx) + 16;
+ }
+ 
+ static int bpf_jit_stack_offsetof(struct codegen_context *ctx, int reg)
+@@ -224,7 +225,7 @@ static void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx,
+ 	EMIT(PPC_RAW_BLRL());
+ }
+ 
+-static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
++static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
+ {
+ 	/*
+ 	 * By now, the eBPF program has already setup parameters in r3, r4 and r5
+@@ -285,14 +286,39 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ 	bpf_jit_emit_common_epilogue(image, ctx);
+ 
+ 	EMIT(PPC_RAW_BCTR());
++
+ 	/* out: */
++	return 0;
+ }
+ 
++/*
++ * We spill into the redzone always, even if the bpf program has its own stackframe.
++ * Offsets hardcoded based on BPF_PPC_STACK_SAVE -- see bpf_jit_stack_local()
++ */
++void bpf_stf_barrier(void);
++
++asm (
++"		.global bpf_stf_barrier		;"
++"	bpf_stf_barrier:			;"
++"		std	21,-64(1)		;"
++"		std	22,-56(1)		;"
++"		sync				;"
++"		ld	21,-64(1)		;"
++"		ld	22,-56(1)		;"
++"		ori	31,31,0			;"
++"		.rept 14			;"
++"		b	1f			;"
++"	1:					;"
++"		.endr				;"
++"		blr				;"
++);
++
+ /* Assemble the body code between the prologue & epilogue */
+ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
+ 			      struct codegen_context *ctx,
+ 			      u32 *addrs, bool extra_pass)
+ {
++	enum stf_barrier_type stf_barrier = stf_barrier_type_get();
+ 	const struct bpf_insn *insn = fp->insnsi;
+ 	int flen = fp->len;
+ 	int i, ret;
+@@ -663,6 +689,30 @@ emit_clear:
+ 		 * BPF_ST NOSPEC (speculation barrier)
+ 		 */
+ 		case BPF_ST | BPF_NOSPEC:
++			if (!security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) ||
++					(!security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) &&
++					 (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) || !cpu_has_feature(CPU_FTR_HVMODE))))
++				break;
++
++			switch (stf_barrier) {
++			case STF_BARRIER_EIEIO:
++				EMIT(0x7c0006ac | 0x02000000);
++				break;
++			case STF_BARRIER_SYNC_ORI:
++				EMIT(PPC_INST_SYNC);
++				EMIT(PPC_RAW_LD(b2p[TMP_REG_1], 13, 0));
++				EMIT(PPC_RAW_ORI(31, 31, 0));
++				break;
++			case STF_BARRIER_FALLBACK:
++				EMIT(PPC_INST_MFLR | ___PPC_RT(b2p[TMP_REG_1]));
++				PPC_LI64(12, dereference_kernel_function_descriptor(bpf_stf_barrier));
++				EMIT(PPC_RAW_MTCTR(12));
++				EMIT(PPC_INST_BCTR | 0x1);
++				EMIT(PPC_RAW_MTLR(b2p[TMP_REG_1]));
++				break;
++			case STF_BARRIER_NONE:
++				break;
++			}
+ 			break;
+ 
+ 		/*
+@@ -1010,7 +1060,9 @@ cond_branch:
+ 		 */
+ 		case BPF_JMP | BPF_TAIL_CALL:
+ 			ctx->seen |= SEEN_TAILCALL;
+-			bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
++			ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
++			if (ret < 0)
++				return ret;
+ 			break;
+ 
+ 		default:
+diff --git a/arch/powerpc/platforms/44x/fsp2.c b/arch/powerpc/platforms/44x/fsp2.c
+index b299e43f5ef94..823397c802def 100644
+--- a/arch/powerpc/platforms/44x/fsp2.c
++++ b/arch/powerpc/platforms/44x/fsp2.c
+@@ -208,6 +208,7 @@ static void node_irq_request(const char *compat, irq_handler_t errirq_handler)
+ 		if (irq == NO_IRQ) {
+ 			pr_err("device tree node %pOFn is missing a interrupt",
+ 			      np);
++			of_node_put(np);
+ 			return;
+ 		}
+ 
+@@ -215,6 +216,7 @@ static void node_irq_request(const char *compat, irq_handler_t errirq_handler)
+ 		if (rc) {
+ 			pr_err("fsp_of_probe: request_irq failed: np=%pOF rc=%d",
+ 			      np, rc);
++			of_node_put(np);
+ 			return;
+ 		}
+ 	}
+diff --git a/arch/powerpc/platforms/85xx/Makefile b/arch/powerpc/platforms/85xx/Makefile
+index d1dd0dca5ebf0..bd750edeb105b 100644
+--- a/arch/powerpc/platforms/85xx/Makefile
++++ b/arch/powerpc/platforms/85xx/Makefile
+@@ -3,7 +3,9 @@
+ # Makefile for the PowerPC 85xx linux kernel.
+ #
+ obj-$(CONFIG_SMP) += smp.o
+-obj-$(CONFIG_FSL_PMC)		  += mpc85xx_pm_ops.o
++ifneq ($(CONFIG_FSL_CORENET_RCPM),y)
++obj-$(CONFIG_SMP) += mpc85xx_pm_ops.o
++endif
+ 
+ obj-y += common.o
+ 
+diff --git a/arch/powerpc/platforms/85xx/mpc85xx_pm_ops.c b/arch/powerpc/platforms/85xx/mpc85xx_pm_ops.c
+index 7c0133f558d02..4a8af80011a6f 100644
+--- a/arch/powerpc/platforms/85xx/mpc85xx_pm_ops.c
++++ b/arch/powerpc/platforms/85xx/mpc85xx_pm_ops.c
+@@ -17,6 +17,7 @@
+ 
+ static struct ccsr_guts __iomem *guts;
+ 
++#ifdef CONFIG_FSL_PMC
+ static void mpc85xx_irq_mask(int cpu)
+ {
+ 
+@@ -49,6 +50,7 @@ static void mpc85xx_cpu_up_prepare(int cpu)
+ {
+ 
+ }
++#endif
+ 
+ static void mpc85xx_freeze_time_base(bool freeze)
+ {
+@@ -76,10 +78,12 @@ static const struct of_device_id mpc85xx_smp_guts_ids[] = {
+ 
+ static const struct fsl_pm_ops mpc85xx_pm_ops = {
+ 	.freeze_time_base = mpc85xx_freeze_time_base,
++#ifdef CONFIG_FSL_PMC
+ 	.irq_mask = mpc85xx_irq_mask,
+ 	.irq_unmask = mpc85xx_irq_unmask,
+ 	.cpu_die = mpc85xx_cpu_die,
+ 	.cpu_up_prepare = mpc85xx_cpu_up_prepare,
++#endif
+ };
+ 
+ int __init mpc85xx_setup_pmc(void)
+@@ -94,9 +98,8 @@ int __init mpc85xx_setup_pmc(void)
+ 			pr_err("Could not map guts node address\n");
+ 			return -ENOMEM;
+ 		}
++		qoriq_pm_ops = &mpc85xx_pm_ops;
+ 	}
+ 
+-	qoriq_pm_ops = &mpc85xx_pm_ops;
+-
+ 	return 0;
+ }
+diff --git a/arch/powerpc/platforms/85xx/smp.c b/arch/powerpc/platforms/85xx/smp.c
+index c6df294054fe9..83f4a6389a282 100644
+--- a/arch/powerpc/platforms/85xx/smp.c
++++ b/arch/powerpc/platforms/85xx/smp.c
+@@ -40,7 +40,6 @@ struct epapr_spin_table {
+ 	u32	pir;
+ };
+ 
+-#ifdef CONFIG_HOTPLUG_CPU
+ static u64 timebase;
+ static int tb_req;
+ static int tb_valid;
+@@ -112,6 +111,7 @@ static void mpc85xx_take_timebase(void)
+ 	local_irq_restore(flags);
+ }
+ 
++#ifdef CONFIG_HOTPLUG_CPU
+ static void smp_85xx_cpu_offline_self(void)
+ {
+ 	unsigned int cpu = smp_processor_id();
+@@ -495,21 +495,21 @@ void __init mpc85xx_smp_init(void)
+ 		smp_85xx_ops.probe = NULL;
+ 	}
+ 
+-#ifdef CONFIG_HOTPLUG_CPU
+ #ifdef CONFIG_FSL_CORENET_RCPM
++	/* Assign a value to qoriq_pm_ops on PPC_E500MC */
+ 	fsl_rcpm_init();
+-#endif
+-
+-#ifdef CONFIG_FSL_PMC
++#else
++	/* Assign a value to qoriq_pm_ops on !PPC_E500MC */
+ 	mpc85xx_setup_pmc();
+ #endif
+ 	if (qoriq_pm_ops) {
+ 		smp_85xx_ops.give_timebase = mpc85xx_give_timebase;
+ 		smp_85xx_ops.take_timebase = mpc85xx_take_timebase;
++#ifdef CONFIG_HOTPLUG_CPU
+ 		smp_85xx_ops.cpu_offline_self = smp_85xx_cpu_offline_self;
+ 		smp_85xx_ops.cpu_die = qoriq_cpu_kill;
+-	}
+ #endif
++	}
+ 	smp_ops = &smp_85xx_ops;
+ 
+ #ifdef CONFIG_KEXEC_CORE
+diff --git a/arch/powerpc/platforms/powernv/opal-prd.c b/arch/powerpc/platforms/powernv/opal-prd.c
+index deddaebf8c14f..17a2874d12561 100644
+--- a/arch/powerpc/platforms/powernv/opal-prd.c
++++ b/arch/powerpc/platforms/powernv/opal-prd.c
+@@ -372,6 +372,12 @@ static struct notifier_block opal_prd_event_nb = {
+ 	.priority	= 0,
+ };
+ 
++static struct notifier_block opal_prd_event_nb2 = {
++	.notifier_call	= opal_prd_msg_notifier,
++	.next		= NULL,
++	.priority	= 0,
++};
++
+ static int opal_prd_probe(struct platform_device *pdev)
+ {
+ 	int rc;
+@@ -393,9 +399,10 @@ static int opal_prd_probe(struct platform_device *pdev)
+ 		return rc;
+ 	}
+ 
+-	rc = opal_message_notifier_register(OPAL_MSG_PRD2, &opal_prd_event_nb);
++	rc = opal_message_notifier_register(OPAL_MSG_PRD2, &opal_prd_event_nb2);
+ 	if (rc) {
+ 		pr_err("Couldn't register PRD2 event notifier\n");
++		opal_message_notifier_unregister(OPAL_MSG_PRD, &opal_prd_event_nb);
+ 		return rc;
+ 	}
+ 
+@@ -404,6 +411,8 @@ static int opal_prd_probe(struct platform_device *pdev)
+ 		pr_err("failed to register miscdev\n");
+ 		opal_message_notifier_unregister(OPAL_MSG_PRD,
+ 				&opal_prd_event_nb);
++		opal_message_notifier_unregister(OPAL_MSG_PRD2,
++				&opal_prd_event_nb2);
+ 		return rc;
+ 	}
+ 
+@@ -414,6 +423,7 @@ static int opal_prd_remove(struct platform_device *pdev)
+ {
+ 	misc_deregister(&opal_prd_dev);
+ 	opal_message_notifier_unregister(OPAL_MSG_PRD, &opal_prd_event_nb);
++	opal_message_notifier_unregister(OPAL_MSG_PRD2, &opal_prd_event_nb2);
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
+index 624e80b00eb18..f47429323eee9 100644
+--- a/arch/powerpc/platforms/pseries/smp.c
++++ b/arch/powerpc/platforms/pseries/smp.c
+@@ -42,6 +42,7 @@
+ #include <asm/plpar_wrappers.h>
+ #include <asm/code-patching.h>
+ #include <asm/svm.h>
++#include <asm/kvm_guest.h>
+ 
+ #include "pseries.h"
+ 
+@@ -207,6 +208,8 @@ static __init void pSeries_smp_probe(void)
+ 	if (!cpu_has_feature(CPU_FTR_SMT))
+ 		return;
+ 
++	check_kvm_guest();
++
+ 	if (is_kvm_guest()) {
+ 		/*
+ 		 * KVM emulates doorbells by disabling FSCR[MSGP] so msgsndp
+diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
+index cd74989ce0b02..3b1a498e58d25 100644
+--- a/arch/s390/kvm/priv.c
++++ b/arch/s390/kvm/priv.c
+@@ -397,6 +397,8 @@ static int handle_sske(struct kvm_vcpu *vcpu)
+ 		mmap_read_unlock(current->mm);
+ 		if (rc == -EFAULT)
+ 			return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
++		if (rc == -EAGAIN)
++			continue;
+ 		if (rc < 0)
+ 			return rc;
+ 		start += PAGE_SIZE;
+diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
+index f5847f9dec7c9..8228878872228 100644
+--- a/arch/s390/kvm/pv.c
++++ b/arch/s390/kvm/pv.c
+@@ -16,18 +16,17 @@
+ 
+ int kvm_s390_pv_destroy_cpu(struct kvm_vcpu *vcpu, u16 *rc, u16 *rrc)
+ {
+-	int cc = 0;
++	int cc;
+ 
+-	if (kvm_s390_pv_cpu_get_handle(vcpu)) {
+-		cc = uv_cmd_nodata(kvm_s390_pv_cpu_get_handle(vcpu),
+-				   UVC_CMD_DESTROY_SEC_CPU, rc, rrc);
++	if (!kvm_s390_pv_cpu_get_handle(vcpu))
++		return 0;
++
++	cc = uv_cmd_nodata(kvm_s390_pv_cpu_get_handle(vcpu), UVC_CMD_DESTROY_SEC_CPU, rc, rrc);
++
++	KVM_UV_EVENT(vcpu->kvm, 3, "PROTVIRT DESTROY VCPU %d: rc %x rrc %x",
++		     vcpu->vcpu_id, *rc, *rrc);
++	WARN_ONCE(cc, "protvirt destroy cpu failed rc %x rrc %x", *rc, *rrc);
+ 
+-		KVM_UV_EVENT(vcpu->kvm, 3,
+-			     "PROTVIRT DESTROY VCPU %d: rc %x rrc %x",
+-			     vcpu->vcpu_id, *rc, *rrc);
+-		WARN_ONCE(cc, "protvirt destroy cpu failed rc %x rrc %x",
+-			  *rc, *rrc);
+-	}
+ 	/* Intended memory leak for something that should never happen. */
+ 	if (!cc)
+ 		free_pages(vcpu->arch.pv.stor_base,
+@@ -191,7 +190,7 @@ int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
+ 	uvcb.conf_base_stor_origin = (u64)kvm->arch.pv.stor_base;
+ 	uvcb.conf_virt_stor_origin = (u64)kvm->arch.pv.stor_var;
+ 
+-	cc = uv_call(0, (u64)&uvcb);
++	cc = uv_call_sched(0, (u64)&uvcb);
+ 	*rc = uvcb.header.rc;
+ 	*rrc = uvcb.header.rrc;
+ 	KVM_UV_EVENT(kvm, 3, "PROTVIRT CREATE VM: handle %llx len %llx rc %x rrc %x",
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index 64795d0349263..f2d19d40272cf 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -684,9 +684,10 @@ void __gmap_zap(struct gmap *gmap, unsigned long gaddr)
+ 		vmaddr |= gaddr & ~PMD_MASK;
+ 		/* Get pointer to the page table entry */
+ 		ptep = get_locked_pte(gmap->mm, vmaddr, &ptl);
+-		if (likely(ptep))
++		if (likely(ptep)) {
+ 			ptep_zap_unused(gmap->mm, vmaddr, ptep, 0);
+-		pte_unmap_unlock(ptep, ptl);
++			pte_unmap_unlock(ptep, ptl);
++		}
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(__gmap_zap);
+diff --git a/arch/sh/kernel/cpu/fpu.c b/arch/sh/kernel/cpu/fpu.c
+index ae354a2931e7e..fd6db0ab19288 100644
+--- a/arch/sh/kernel/cpu/fpu.c
++++ b/arch/sh/kernel/cpu/fpu.c
+@@ -62,18 +62,20 @@ void fpu_state_restore(struct pt_regs *regs)
+ 	}
+ 
+ 	if (!tsk_used_math(tsk)) {
+-		local_irq_enable();
++		int ret;
+ 		/*
+ 		 * does a slab alloc which can sleep
+ 		 */
+-		if (init_fpu(tsk)) {
++		local_irq_enable();
++		ret = init_fpu(tsk);
++		local_irq_disable();
++		if (ret) {
+ 			/*
+ 			 * ran out of memory!
+ 			 */
+-			do_group_exit(SIGKILL);
++			force_sig(SIGKILL);
+ 			return;
+ 		}
+-		local_irq_disable();
+ 	}
+ 
+ 	grab_fpu(regs);
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index c3d9f56c90186..2b5957b27a3d6 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1527,6 +1527,7 @@ config AMD_MEM_ENCRYPT
+ 	select ARCH_USE_MEMREMAP_PROT
+ 	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+ 	select INSTRUCTION_DECODER
++	select ARCH_HAS_CC_PLATFORM
+ 	help
+ 	  Say yes to enable support for the encryption of system memory.
+ 	  This requires an AMD processor that supports Secure Memory
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 2701f87a9a7c6..c01b51d1cbdff 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -444,7 +444,7 @@
+ #define ICX_M3UPI_PCI_PMON_BOX_CTL		0xa0
+ 
+ /* ICX IMC */
+-#define ICX_NUMBER_IMC_CHN			2
++#define ICX_NUMBER_IMC_CHN			3
+ #define ICX_IMC_MEM_STRIDE			0x4
+ 
+ DEFINE_UNCORE_FORMAT_ATTR(event, event, "config:0-7");
+@@ -4898,8 +4898,10 @@ static struct event_constraint icx_uncore_iio_constraints[] = {
+ 	UNCORE_EVENT_CONSTRAINT(0x02, 0x3),
+ 	UNCORE_EVENT_CONSTRAINT(0x03, 0x3),
+ 	UNCORE_EVENT_CONSTRAINT(0x83, 0x3),
++	UNCORE_EVENT_CONSTRAINT(0x88, 0xc),
+ 	UNCORE_EVENT_CONSTRAINT(0xc0, 0xc),
+ 	UNCORE_EVENT_CONSTRAINT(0xc5, 0xc),
++	UNCORE_EVENT_CONSTRAINT(0xd5, 0xc),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -5228,7 +5230,7 @@ static struct intel_uncore_ops icx_uncore_mmio_ops = {
+ static struct intel_uncore_type icx_uncore_imc = {
+ 	.name		= "imc",
+ 	.num_counters   = 4,
+-	.num_boxes	= 8,
++	.num_boxes	= 12,
+ 	.perf_ctr_bits	= 48,
+ 	.fixed_ctr_bits	= 48,
+ 	.fixed_ctr	= SNR_IMC_MMIO_PMON_FIXED_CTR,
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index 6375967a8244d..3cf4030232590 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -168,7 +168,6 @@ void set_hv_tscchange_cb(void (*cb)(void))
+ 	struct hv_reenlightenment_control re_ctrl = {
+ 		.vector = HYPERV_REENLIGHTENMENT_VECTOR,
+ 		.enabled = 1,
+-		.target_vp = hv_vp_index[smp_processor_id()]
+ 	};
+ 	struct hv_tsc_emulation_control emu_ctrl = {.enabled = 1};
+ 
+@@ -182,8 +181,12 @@ void set_hv_tscchange_cb(void (*cb)(void))
+ 	/* Make sure callback is registered before we write to MSRs */
+ 	wmb();
+ 
++	re_ctrl.target_vp = hv_vp_index[get_cpu()];
++
+ 	wrmsrl(HV_X64_MSR_REENLIGHTENMENT_CONTROL, *((u64 *)&re_ctrl));
+ 	wrmsrl(HV_X64_MSR_TSC_EMULATION_CONTROL, *((u64 *)&emu_ctrl));
++
++	put_cpu();
+ }
+ EXPORT_SYMBOL_GPL(set_hv_tscchange_cb);
+ 
+diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
+index 3d52b094850a9..dd5ea1bdf04c5 100644
+--- a/arch/x86/include/asm/cpu_entry_area.h
++++ b/arch/x86/include/asm/cpu_entry_area.h
+@@ -10,6 +10,12 @@
+ 
+ #ifdef CONFIG_X86_64
+ 
++#ifdef CONFIG_AMD_MEM_ENCRYPT
++#define VC_EXCEPTION_STKSZ	EXCEPTION_STKSZ
++#else
++#define VC_EXCEPTION_STKSZ	0
++#endif
++
+ /* Macro to enforce the same ordering and stack sizes */
+ #define ESTACKS_MEMBERS(guardsize, optional_stack_size)		\
+ 	char	DF_stack_guard[guardsize];			\
+@@ -28,7 +34,7 @@
+ 
+ /* The exception stacks' physical storage. No guard pages required */
+ struct exception_stacks {
+-	ESTACKS_MEMBERS(0, 0)
++	ESTACKS_MEMBERS(0, VC_EXCEPTION_STKSZ)
+ };
+ 
+ /* The effective cpu entry area mapping with guard pages. */
+diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
+index 2f62bbdd9d12f..8f3fa55d43ce7 100644
+--- a/arch/x86/include/asm/mem_encrypt.h
++++ b/arch/x86/include/asm/mem_encrypt.h
+@@ -13,6 +13,7 @@
+ #ifndef __ASSEMBLY__
+ 
+ #include <linux/init.h>
++#include <linux/cc_platform.h>
+ 
+ #include <asm/bootparam.h>
+ 
+diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
+index 3f49dac03617d..224d71aef6303 100644
+--- a/arch/x86/include/asm/page_64_types.h
++++ b/arch/x86/include/asm/page_64_types.h
+@@ -15,7 +15,7 @@
+ #define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
+ #define THREAD_SIZE  (PAGE_SIZE << THREAD_SIZE_ORDER)
+ 
+-#define EXCEPTION_STACK_ORDER (0 + KASAN_STACK_ORDER)
++#define EXCEPTION_STACK_ORDER (1 + KASAN_STACK_ORDER)
+ #define EXCEPTION_STKSZ (PAGE_SIZE << EXCEPTION_STACK_ORDER)
+ 
+ #define IRQ_STACK_ORDER (2 + KASAN_STACK_ORDER)
+diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
+index 68608bd892c0d..c06f3a961d647 100644
+--- a/arch/x86/kernel/Makefile
++++ b/arch/x86/kernel/Makefile
+@@ -21,6 +21,7 @@ CFLAGS_REMOVE_ftrace.o = -pg
+ CFLAGS_REMOVE_early_printk.o = -pg
+ CFLAGS_REMOVE_head64.o = -pg
+ CFLAGS_REMOVE_sev-es.o = -pg
++CFLAGS_REMOVE_cc_platform.o = -pg
+ endif
+ 
+ KASAN_SANITIZE_head$(BITS).o				:= n
+@@ -29,6 +30,7 @@ KASAN_SANITIZE_dumpstack_$(BITS).o			:= n
+ KASAN_SANITIZE_stacktrace.o				:= n
+ KASAN_SANITIZE_paravirt.o				:= n
+ KASAN_SANITIZE_sev-es.o					:= n
++KASAN_SANITIZE_cc_platform.o				:= n
+ 
+ # With some compiler versions the generated code results in boot hangs, caused
+ # by several compilation units. To be safe, disable all instrumentation.
+@@ -48,6 +50,7 @@ endif
+ KCOV_INSTRUMENT		:= n
+ 
+ CFLAGS_head$(BITS).o	+= -fno-stack-protector
++CFLAGS_cc_platform.o	+= -fno-stack-protector
+ 
+ CFLAGS_irq.o := -I $(srctree)/$(src)/../include/asm/trace
+ 
+@@ -151,6 +154,9 @@ obj-$(CONFIG_UNWINDER_FRAME_POINTER)	+= unwind_frame.o
+ obj-$(CONFIG_UNWINDER_GUESS)		+= unwind_guess.o
+ 
+ obj-$(CONFIG_AMD_MEM_ENCRYPT)		+= sev-es.o
++
++obj-$(CONFIG_ARCH_HAS_CC_PLATFORM)	+= cc_platform.o
++
+ ###
+ # 64 bit specific files
+ ifeq ($(CONFIG_X86_64),y)
+diff --git a/arch/x86/kernel/cc_platform.c b/arch/x86/kernel/cc_platform.c
+new file mode 100644
+index 0000000000000..03bb2f343ddb7
+--- /dev/null
++++ b/arch/x86/kernel/cc_platform.c
+@@ -0,0 +1,69 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * Confidential Computing Platform Capability checks
++ *
++ * Copyright (C) 2021 Advanced Micro Devices, Inc.
++ *
++ * Author: Tom Lendacky <thomas.lendacky@amd.com>
++ */
++
++#include <linux/export.h>
++#include <linux/cc_platform.h>
++#include <linux/mem_encrypt.h>
++
++#include <asm/processor.h>
++
++static bool __maybe_unused intel_cc_platform_has(enum cc_attr attr)
++{
++#ifdef CONFIG_INTEL_TDX_GUEST
++	return false;
++#else
++	return false;
++#endif
++}
++
++/*
++ * SME and SEV are very similar but they are not the same, so there are
++ * times that the kernel will need to distinguish between SME and SEV. The
++ * cc_platform_has() function is used for this.  When a distinction isn't
++ * needed, the CC_ATTR_MEM_ENCRYPT attribute can be used.
++ *
++ * The trampoline code is a good example for this requirement.  Before
++ * paging is activated, SME will access all memory as decrypted, but SEV
++ * will access all memory as encrypted.  So, when APs are being brought
++ * up under SME the trampoline area cannot be encrypted, whereas under SEV
++ * the trampoline area must be encrypted.
++ */
++static bool amd_cc_platform_has(enum cc_attr attr)
++{
++#ifdef CONFIG_AMD_MEM_ENCRYPT
++	switch (attr) {
++	case CC_ATTR_MEM_ENCRYPT:
++		return sme_me_mask;
++
++	case CC_ATTR_HOST_MEM_ENCRYPT:
++		return sme_me_mask && !(sev_status & MSR_AMD64_SEV_ENABLED);
++
++	case CC_ATTR_GUEST_MEM_ENCRYPT:
++		return sev_status & MSR_AMD64_SEV_ENABLED;
++
++	case CC_ATTR_GUEST_STATE_ENCRYPT:
++		return sev_status & MSR_AMD64_SEV_ES_ENABLED;
++
++	default:
++		return false;
++	}
++#else
++	return false;
++#endif
++}
++
++
++bool cc_platform_has(enum cc_attr attr)
++{
++	if (sme_me_mask)
++		return amd_cc_platform_has(attr);
++
++	return false;
++}
++EXPORT_SYMBOL_GPL(cc_platform_has);
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index a2551b10780c6..acea05eed27d4 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1017,6 +1017,8 @@ static void init_amd(struct cpuinfo_x86 *c)
+ 	if (cpu_has(c, X86_FEATURE_IRPERF) &&
+ 	    !cpu_has_amd_erratum(c, amd_erratum_1054))
+ 		msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);
++
++	check_null_seg_clears_base(c);
+ }
+ 
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index ec21f5e9ffd05..9c8fc6f513ed3 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1391,9 +1391,8 @@ void __init early_cpu_init(void)
+ 	early_identify_cpu(&boot_cpu_data);
+ }
+ 
+-static void detect_null_seg_behavior(struct cpuinfo_x86 *c)
++static bool detect_null_seg_behavior(void)
+ {
+-#ifdef CONFIG_X86_64
+ 	/*
+ 	 * Empirically, writing zero to a segment selector on AMD does
+ 	 * not clear the base, whereas writing zero to a segment
+@@ -1414,10 +1413,43 @@ static void detect_null_seg_behavior(struct cpuinfo_x86 *c)
+ 	wrmsrl(MSR_FS_BASE, 1);
+ 	loadsegment(fs, 0);
+ 	rdmsrl(MSR_FS_BASE, tmp);
+-	if (tmp != 0)
+-		set_cpu_bug(c, X86_BUG_NULL_SEG);
+ 	wrmsrl(MSR_FS_BASE, old_base);
+-#endif
++	return tmp == 0;
++}
++
++void check_null_seg_clears_base(struct cpuinfo_x86 *c)
++{
++	/* BUG_NULL_SEG is only relevant with 64bit userspace */
++	if (!IS_ENABLED(CONFIG_X86_64))
++		return;
++
++	/* Zen3 CPUs advertise Null Selector Clears Base in CPUID. */
++	if (c->extended_cpuid_level >= 0x80000021 &&
++	    cpuid_eax(0x80000021) & BIT(6))
++		return;
++
++	/*
++	 * CPUID bit above wasn't set. If this kernel is still running
++	 * as a HV guest, then the HV has decided not to advertize
++	 * that CPUID bit for whatever reason.	For example, one
++	 * member of the migration pool might be vulnerable.  Which
++	 * means, the bug is present: set the BUG flag and return.
++	 */
++	if (cpu_has(c, X86_FEATURE_HYPERVISOR)) {
++		set_cpu_bug(c, X86_BUG_NULL_SEG);
++		return;
++	}
++
++	/*
++	 * Zen2 CPUs also have this behaviour, but no CPUID bit.
++	 * 0x18 is the respective family for Hygon.
++	 */
++	if ((c->x86 == 0x17 || c->x86 == 0x18) &&
++	    detect_null_seg_behavior())
++		return;
++
++	/* All the remaining ones are affected */
++	set_cpu_bug(c, X86_BUG_NULL_SEG);
+ }
+ 
+ static void generic_identify(struct cpuinfo_x86 *c)
+@@ -1453,8 +1485,6 @@ static void generic_identify(struct cpuinfo_x86 *c)
+ 
+ 	get_model_name(c); /* Default name */
+ 
+-	detect_null_seg_behavior(c);
+-
+ 	/*
+ 	 * ESPFIX is a strange bug.  All real CPUs have it.  Paravirt
+ 	 * systems that run Linux at CPL > 0 may or may not have the
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 67944128876d7..093f5fc860e3f 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -73,6 +73,7 @@ extern int detect_extended_topology_early(struct cpuinfo_x86 *c);
+ extern int detect_extended_topology(struct cpuinfo_x86 *c);
+ extern int detect_ht_early(struct cpuinfo_x86 *c);
+ extern void detect_ht(struct cpuinfo_x86 *c);
++extern void check_null_seg_clears_base(struct cpuinfo_x86 *c);
+ 
+ unsigned int aperfmperf_get_khz(int cpu);
+ 
+diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
+index dc0840aae26c1..b78c471ec344b 100644
+--- a/arch/x86/kernel/cpu/hygon.c
++++ b/arch/x86/kernel/cpu/hygon.c
+@@ -351,6 +351,8 @@ static void init_hygon(struct cpuinfo_x86 *c)
+ 	/* Hygon CPUs don't reset SS attributes on SYSRET, Xen does. */
+ 	if (!cpu_has(c, X86_FEATURE_XENPV))
+ 		set_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS);
++
++	check_null_seg_clears_base(c);
+ }
+ 
+ static void cpu_detect_tlb_hygon(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
+index abe9fe0fb8517..2577d78757810 100644
+--- a/arch/x86/kernel/cpu/mce/intel.c
++++ b/arch/x86/kernel/cpu/mce/intel.c
+@@ -526,12 +526,13 @@ bool intel_filter_mce(struct mce *m)
+ {
+ 	struct cpuinfo_x86 *c = &boot_cpu_data;
+ 
+-	/* MCE errata HSD131, HSM142, HSW131, BDM48, and HSM142 */
++	/* MCE errata HSD131, HSM142, HSW131, BDM48, HSM142 and SKX37 */
+ 	if ((c->x86 == 6) &&
+ 	    ((c->x86_model == INTEL_FAM6_HASWELL) ||
+ 	     (c->x86_model == INTEL_FAM6_HASWELL_L) ||
+ 	     (c->x86_model == INTEL_FAM6_BROADWELL) ||
+-	     (c->x86_model == INTEL_FAM6_HASWELL_G)) &&
++	     (c->x86_model == INTEL_FAM6_HASWELL_G) ||
++	     (c->x86_model == INTEL_FAM6_SKYLAKE_X)) &&
+ 	    (m->bank == 0) &&
+ 	    ((m->status & 0xa0000000ffffffff) == 0x80000000000f0005))
+ 		return true;
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index c5dd50369e2f3..ce904c89c6c70 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -290,8 +290,10 @@ void kvm_set_posted_intr_wakeup_handler(void (*handler)(void))
+ {
+ 	if (handler)
+ 		kvm_posted_intr_wakeup_handler = handler;
+-	else
++	else {
+ 		kvm_posted_intr_wakeup_handler = dummy_handler;
++		synchronize_rcu();
++	}
+ }
+ EXPORT_SYMBOL_GPL(kvm_set_posted_intr_wakeup_handler);
+ 
+diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
+index f3202b2e3c157..865e234ea24bd 100644
+--- a/arch/x86/kernel/sev-es.c
++++ b/arch/x86/kernel/sev-es.c
+@@ -46,16 +46,6 @@ static struct ghcb __initdata *boot_ghcb;
+ struct sev_es_runtime_data {
+ 	struct ghcb ghcb_page;
+ 
+-	/* Physical storage for the per-CPU IST stack of the #VC handler */
+-	char ist_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
+-
+-	/*
+-	 * Physical storage for the per-CPU fall-back stack of the #VC handler.
+-	 * The fall-back stack is used when it is not safe to switch back to the
+-	 * interrupted stack in the #VC entry code.
+-	 */
+-	char fallback_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
+-
+ 	/*
+ 	 * Reserve one page per CPU as backup storage for the unencrypted GHCB.
+ 	 * It is needed when an NMI happens while the #VC handler uses the real
+@@ -99,27 +89,6 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
+ /* Needed in vc_early_forward_exception */
+ void do_early_exception(struct pt_regs *regs, int trapnr);
+ 
+-static void __init setup_vc_stacks(int cpu)
+-{
+-	struct sev_es_runtime_data *data;
+-	struct cpu_entry_area *cea;
+-	unsigned long vaddr;
+-	phys_addr_t pa;
+-
+-	data = per_cpu(runtime_data, cpu);
+-	cea  = get_cpu_entry_area(cpu);
+-
+-	/* Map #VC IST stack */
+-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC);
+-	pa    = __pa(data->ist_stack);
+-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
+-
+-	/* Map VC fall-back stack */
+-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC2);
+-	pa    = __pa(data->fallback_stack);
+-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
+-}
+-
+ static __always_inline bool on_vc_stack(struct pt_regs *regs)
+ {
+ 	unsigned long sp = regs->sp;
+@@ -753,7 +722,6 @@ void __init sev_es_init_vc_handling(void)
+ 	for_each_possible_cpu(cpu) {
+ 		alloc_runtime_data(cpu);
+ 		init_ghcb(cpu);
+-		setup_vc_stacks(cpu);
+ 	}
+ 
+ 	sev_es_setup_play_dead();
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 7692bf7908e6c..143fcb8af38f4 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -701,7 +701,7 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r
+ 	stack = (unsigned long *)sp;
+ 
+ 	if (!get_stack_info_noinstr(stack, current, &info) || info.type == STACK_TYPE_ENTRY ||
+-	    info.type >= STACK_TYPE_EXCEPTION_LAST)
++	    info.type > STACK_TYPE_EXCEPTION_LAST)
+ 		sp = __this_cpu_ist_top_va(VC2);
+ 
+ sync:
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index f77d98973782b..baa4244f3e6a1 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -859,15 +859,15 @@ void update_exception_bitmap(struct kvm_vcpu *vcpu)
+ /*
+  * Check if MSR is intercepted for currently loaded MSR bitmap.
+  */
+-static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
++static bool msr_write_intercepted(struct vcpu_vmx *vmx, u32 msr)
+ {
+ 	unsigned long *msr_bitmap;
+ 	int f = sizeof(unsigned long);
+ 
+-	if (!cpu_has_vmx_msr_bitmap())
++	if (!(exec_controls_get(vmx) & CPU_BASED_USE_MSR_BITMAPS))
+ 		return true;
+ 
+-	msr_bitmap = to_vmx(vcpu)->loaded_vmcs->msr_bitmap;
++	msr_bitmap = vmx->loaded_vmcs->msr_bitmap;
+ 
+ 	if (msr <= 0x1fff) {
+ 		return !!test_bit(msr, msr_bitmap + 0x800 / f);
+@@ -6744,7 +6744,7 @@ reenter_guest:
+ 	 * If the L02 MSR bitmap does not intercept the MSR, then we need to
+ 	 * save it.
+ 	 */
+-	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
++	if (unlikely(!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL)))
+ 		vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
+ 
+ 	x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0);
+@@ -7586,6 +7586,8 @@ static void vmx_migrate_timers(struct kvm_vcpu *vcpu)
+ 
+ static void hardware_unsetup(void)
+ {
++	kvm_set_posted_intr_wakeup_handler(NULL);
++
+ 	if (nested)
+ 		nested_vmx_hardware_unsetup();
+ 
+@@ -7877,8 +7879,6 @@ static __init int hardware_setup(void)
+ 		vmx_x86_ops.request_immediate_exit = __kvm_request_immediate_exit;
+ 	}
+ 
+-	kvm_set_posted_intr_wakeup_handler(pi_wakeup_handler);
+-
+ 	kvm_mce_cap_supported |= MCG_LMCE_P;
+ 
+ 	if (pt_mode != PT_MODE_SYSTEM && pt_mode != PT_MODE_HOST_GUEST)
+@@ -7900,6 +7900,9 @@ static __init int hardware_setup(void)
+ 	r = alloc_kvm_area();
+ 	if (r)
+ 		nested_vmx_hardware_unsetup();
++
++	kvm_set_posted_intr_wakeup_handler(pi_wakeup_handler);
++
+ 	return r;
+ }
+ 
+diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
+index f5e1e60c9095f..6c2f1b76a0b61 100644
+--- a/arch/x86/mm/cpu_entry_area.c
++++ b/arch/x86/mm/cpu_entry_area.c
+@@ -110,6 +110,13 @@ static void __init percpu_setup_exception_stacks(unsigned int cpu)
+ 	cea_map_stack(NMI);
+ 	cea_map_stack(DB);
+ 	cea_map_stack(MCE);
++
++	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT)) {
++		if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
++			cea_map_stack(VC);
++			cea_map_stack(VC2);
++		}
++	}
+ }
+ #else
+ static inline void percpu_setup_exception_stacks(unsigned int cpu)
+diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
+index cc85e199108eb..97f7eb5d15c2f 100644
+--- a/arch/x86/mm/mem_encrypt.c
++++ b/arch/x86/mm/mem_encrypt.c
+@@ -19,6 +19,7 @@
+ #include <linux/kernel.h>
+ #include <linux/bitops.h>
+ #include <linux/dma-mapping.h>
++#include <linux/cc_platform.h>
+ 
+ #include <asm/tlbflush.h>
+ #include <asm/fixmap.h>
+diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
+index 65f599e9075bc..011e042b47ba7 100644
+--- a/arch/x86/mm/mem_encrypt_identity.c
++++ b/arch/x86/mm/mem_encrypt_identity.c
+@@ -27,6 +27,15 @@
+ #undef CONFIG_PARAVIRT_XXL
+ #undef CONFIG_PARAVIRT_SPINLOCKS
+ 
++/*
++ * This code runs before CPU feature bits are set. By default, the
++ * pgtable_l5_enabled() function uses bit X86_FEATURE_LA57 to determine if
++ * 5-level paging is active, so that won't work here. USE_EARLY_PGTABLE_L5
++ * is provided to handle this situation and, instead, use a variable that
++ * has been set by the early boot code.
++ */
++#define USE_EARLY_PGTABLE_L5
++
+ #include <linux/kernel.h>
+ #include <linux/mm.h>
+ #include <linux/mem_encrypt.h>
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index eed9a4c1519df..15a11a217cd03 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -774,7 +774,6 @@ void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
+ 	/* this request will be re-inserted to io scheduler queue */
+ 	blk_mq_sched_requeue_request(rq);
+ 
+-	BUG_ON(!list_empty(&rq->queuelist));
+ 	blk_mq_add_to_requeue_list(rq, true, kick_requeue_list);
+ }
+ EXPORT_SYMBOL(blk_mq_requeue_request);
+@@ -1327,6 +1326,7 @@ bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
+ 	int errors, queued;
+ 	blk_status_t ret = BLK_STS_OK;
+ 	LIST_HEAD(zone_list);
++	bool needs_resource = false;
+ 
+ 	if (list_empty(list))
+ 		return false;
+@@ -1372,6 +1372,8 @@ bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
+ 			queued++;
+ 			break;
+ 		case BLK_STS_RESOURCE:
++			needs_resource = true;
++			fallthrough;
+ 		case BLK_STS_DEV_RESOURCE:
+ 			blk_mq_handle_dev_resource(rq, list);
+ 			goto out;
+@@ -1382,6 +1384,7 @@ bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
+ 			 * accept.
+ 			 */
+ 			blk_mq_handle_zone_resource(rq, &zone_list);
++			needs_resource = true;
+ 			break;
+ 		default:
+ 			errors++;
+@@ -1408,7 +1411,6 @@ out:
+ 		/* For non-shared tags, the RESTART check will suffice */
+ 		bool no_tag = prep == PREP_DISPATCH_NO_TAG &&
+ 			(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED);
+-		bool no_budget_avail = prep == PREP_DISPATCH_NO_BUDGET;
+ 
+ 		blk_mq_release_budgets(q, nr_budgets);
+ 
+@@ -1448,14 +1450,16 @@ out:
+ 		 * If driver returns BLK_STS_RESOURCE and SCHED_RESTART
+ 		 * bit is set, run queue after a delay to avoid IO stalls
+ 		 * that could otherwise occur if the queue is idle.  We'll do
+-		 * similar if we couldn't get budget and SCHED_RESTART is set.
++		 * similar if we couldn't get budget or couldn't lock a zone
++		 * and SCHED_RESTART is set.
+ 		 */
+ 		needs_restart = blk_mq_sched_needs_restart(hctx);
++		if (prep == PREP_DISPATCH_NO_BUDGET)
++			needs_resource = true;
+ 		if (!needs_restart ||
+ 		    (no_tag && list_empty_careful(&hctx->dispatch_wait.entry)))
+ 			blk_mq_run_hw_queue(hctx, true);
+-		else if (needs_restart && (ret == BLK_STS_RESOURCE ||
+-					   no_budget_avail))
++		else if (needs_restart && needs_resource)
+ 			blk_mq_delay_run_hw_queue(hctx, BLK_MQ_RESOURCE_DELAY);
+ 
+ 		blk_mq_update_dispatch_busy(hctx, true);
+@@ -2111,14 +2115,14 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
+ }
+ 
+ /*
+- * Allow 4x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
++ * Allow 2x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
+  * queues. This is important for md arrays to benefit from merging
+  * requests.
+  */
+ static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
+ {
+ 	if (plug->multiple_queues)
+-		return BLK_MAX_REQUEST_COUNT * 4;
++		return BLK_MAX_REQUEST_COUNT * 2;
+ 	return BLK_MAX_REQUEST_COUNT;
+ }
+ 
+diff --git a/block/blk.h b/block/blk.h
+index f84c83300f6fa..997941cd999f6 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -188,6 +188,12 @@ bool blk_bio_list_merge(struct request_queue *q, struct list_head *list,
+ void blk_account_io_start(struct request *req);
+ void blk_account_io_done(struct request *req, u64 now);
+ 
++/*
++ * Plug flush limits
++ */
++#define BLK_MAX_REQUEST_COUNT	32
++#define BLK_PLUG_FLUSH_SIZE	(128 * 1024)
++
+ /*
+  * Internal elevator interface
+  */
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 774adc9846fa8..1157f82dc9cf4 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -238,12 +238,12 @@ config CRYPTO_DH
+ 
+ config CRYPTO_ECC
+ 	tristate
++	select CRYPTO_RNG_DEFAULT
+ 
+ config CRYPTO_ECDH
+ 	tristate "ECDH algorithm"
+ 	select CRYPTO_ECC
+ 	select CRYPTO_KPP
+-	select CRYPTO_RNG_DEFAULT
+ 	help
+ 	  Generic implementation of the ECDH algorithm
+ 
+diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
+index d569c7ed6c800..9d10b846ccf73 100644
+--- a/crypto/pcrypt.c
++++ b/crypto/pcrypt.c
+@@ -78,12 +78,14 @@ static void pcrypt_aead_enc(struct padata_priv *padata)
+ {
+ 	struct pcrypt_request *preq = pcrypt_padata_request(padata);
+ 	struct aead_request *req = pcrypt_request_ctx(preq);
++	int ret;
+ 
+-	padata->info = crypto_aead_encrypt(req);
++	ret = crypto_aead_encrypt(req);
+ 
+-	if (padata->info == -EINPROGRESS)
++	if (ret == -EINPROGRESS)
+ 		return;
+ 
++	padata->info = ret;
+ 	padata_do_serial(padata);
+ }
+ 
+@@ -123,12 +125,14 @@ static void pcrypt_aead_dec(struct padata_priv *padata)
+ {
+ 	struct pcrypt_request *preq = pcrypt_padata_request(padata);
+ 	struct aead_request *req = pcrypt_request_ctx(preq);
++	int ret;
+ 
+-	padata->info = crypto_aead_decrypt(req);
++	ret = crypto_aead_decrypt(req);
+ 
+-	if (padata->info == -EINPROGRESS)
++	if (ret == -EINPROGRESS)
+ 		return;
+ 
++	padata->info = ret;
+ 	padata_do_serial(padata);
+ }
+ 
+diff --git a/drivers/acpi/ac.c b/drivers/acpi/ac.c
+index 46a64e9fa7165..23ca1a1c67b75 100644
+--- a/drivers/acpi/ac.c
++++ b/drivers/acpi/ac.c
+@@ -64,6 +64,7 @@ static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume);
+ 
+ static int ac_sleep_before_get_state_ms;
+ static int ac_check_pmic = 1;
++static int ac_only;
+ 
+ static struct acpi_driver acpi_ac_driver = {
+ 	.name = "ac",
+@@ -99,6 +100,11 @@ static int acpi_ac_get_state(struct acpi_ac *ac)
+ 	if (!ac)
+ 		return -EINVAL;
+ 
++	if (ac_only) {
++		ac->state = 1;
++		return 0;
++	}
++
+ 	status = acpi_evaluate_integer(ac->device->handle, "_PSR", NULL,
+ 				       &ac->state);
+ 	if (ACPI_FAILURE(status)) {
+@@ -212,6 +218,12 @@ static int __init ac_do_not_check_pmic_quirk(const struct dmi_system_id *d)
+ 	return 0;
+ }
+ 
++static int __init ac_only_quirk(const struct dmi_system_id *d)
++{
++	ac_only = 1;
++	return 0;
++}
++
+ /* Please keep this list alphabetically sorted */
+ static const struct dmi_system_id ac_dmi_table[]  __initconst = {
+ 	{
+@@ -221,6 +233,13 @@ static const struct dmi_system_id ac_dmi_table[]  __initconst = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "EF20EA"),
+ 		},
+ 	},
++	{
++		/* Kodlix GK45 returning incorrect state */
++		.callback = ac_only_quirk,
++		.matches = {
++			DMI_MATCH(DMI_PRODUCT_NAME, "GK45"),
++		},
++	},
+ 	{
+ 		/* Lenovo Ideapad Miix 320, AXP288 PMIC, separate fuel-gauge */
+ 		.callback = ac_do_not_check_pmic_quirk,
+diff --git a/drivers/acpi/acpica/acglobal.h b/drivers/acpi/acpica/acglobal.h
+index 2fee91f57b213..bd84d7f95e5f9 100644
+--- a/drivers/acpi/acpica/acglobal.h
++++ b/drivers/acpi/acpica/acglobal.h
+@@ -226,6 +226,8 @@ extern struct acpi_bit_register_info
+     acpi_gbl_bit_register_info[ACPI_NUM_BITREG];
+ ACPI_GLOBAL(u8, acpi_gbl_sleep_type_a);
+ ACPI_GLOBAL(u8, acpi_gbl_sleep_type_b);
++ACPI_GLOBAL(u8, acpi_gbl_sleep_type_a_s0);
++ACPI_GLOBAL(u8, acpi_gbl_sleep_type_b_s0);
+ 
+ /*****************************************************************************
+  *
+diff --git a/drivers/acpi/acpica/hwesleep.c b/drivers/acpi/acpica/hwesleep.c
+index d9be5d0545d4c..4836a4b8b38b8 100644
+--- a/drivers/acpi/acpica/hwesleep.c
++++ b/drivers/acpi/acpica/hwesleep.c
+@@ -147,17 +147,13 @@ acpi_status acpi_hw_extended_sleep(u8 sleep_state)
+ 
+ acpi_status acpi_hw_extended_wake_prep(u8 sleep_state)
+ {
+-	acpi_status status;
+ 	u8 sleep_type_value;
+ 
+ 	ACPI_FUNCTION_TRACE(hw_extended_wake_prep);
+ 
+-	status = acpi_get_sleep_type_data(ACPI_STATE_S0,
+-					  &acpi_gbl_sleep_type_a,
+-					  &acpi_gbl_sleep_type_b);
+-	if (ACPI_SUCCESS(status)) {
++	if (acpi_gbl_sleep_type_a_s0 != ACPI_SLEEP_TYPE_INVALID) {
+ 		sleep_type_value =
+-		    ((acpi_gbl_sleep_type_a << ACPI_X_SLEEP_TYPE_POSITION) &
++		    ((acpi_gbl_sleep_type_a_s0 << ACPI_X_SLEEP_TYPE_POSITION) &
+ 		     ACPI_X_SLEEP_TYPE_MASK);
+ 
+ 		(void)acpi_write((u64)(sleep_type_value | ACPI_X_SLEEP_ENABLE),
+diff --git a/drivers/acpi/acpica/hwsleep.c b/drivers/acpi/acpica/hwsleep.c
+index 317ae870336b7..fcc84d196238a 100644
+--- a/drivers/acpi/acpica/hwsleep.c
++++ b/drivers/acpi/acpica/hwsleep.c
+@@ -179,7 +179,7 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state)
+ 
+ acpi_status acpi_hw_legacy_wake_prep(u8 sleep_state)
+ {
+-	acpi_status status;
++	acpi_status status = AE_OK;
+ 	struct acpi_bit_register_info *sleep_type_reg_info;
+ 	struct acpi_bit_register_info *sleep_enable_reg_info;
+ 	u32 pm1a_control;
+@@ -192,10 +192,7 @@ acpi_status acpi_hw_legacy_wake_prep(u8 sleep_state)
+ 	 * This is unclear from the ACPI Spec, but it is required
+ 	 * by some machines.
+ 	 */
+-	status = acpi_get_sleep_type_data(ACPI_STATE_S0,
+-					  &acpi_gbl_sleep_type_a,
+-					  &acpi_gbl_sleep_type_b);
+-	if (ACPI_SUCCESS(status)) {
++	if (acpi_gbl_sleep_type_a_s0 != ACPI_SLEEP_TYPE_INVALID) {
+ 		sleep_type_reg_info =
+ 		    acpi_hw_get_bit_register_info(ACPI_BITREG_SLEEP_TYPE);
+ 		sleep_enable_reg_info =
+@@ -216,9 +213,9 @@ acpi_status acpi_hw_legacy_wake_prep(u8 sleep_state)
+ 
+ 			/* Insert the SLP_TYP bits */
+ 
+-			pm1a_control |= (acpi_gbl_sleep_type_a <<
++			pm1a_control |= (acpi_gbl_sleep_type_a_s0 <<
+ 					 sleep_type_reg_info->bit_position);
+-			pm1b_control |= (acpi_gbl_sleep_type_b <<
++			pm1b_control |= (acpi_gbl_sleep_type_b_s0 <<
+ 					 sleep_type_reg_info->bit_position);
+ 
+ 			/* Write the control registers and ignore any errors */
+diff --git a/drivers/acpi/acpica/hwxfsleep.c b/drivers/acpi/acpica/hwxfsleep.c
+index a4b66f4b27141..f1645d87864c3 100644
+--- a/drivers/acpi/acpica/hwxfsleep.c
++++ b/drivers/acpi/acpica/hwxfsleep.c
+@@ -217,6 +217,13 @@ acpi_status acpi_enter_sleep_state_prep(u8 sleep_state)
+ 		return_ACPI_STATUS(status);
+ 	}
+ 
++	status = acpi_get_sleep_type_data(ACPI_STATE_S0,
++					  &acpi_gbl_sleep_type_a_s0,
++					  &acpi_gbl_sleep_type_b_s0);
++	if (ACPI_FAILURE(status)) {
++		acpi_gbl_sleep_type_a_s0 = ACPI_SLEEP_TYPE_INVALID;
++	}
++
+ 	/* Execute the _PTS method (Prepare To Sleep) */
+ 
+ 	arg_list.count = 1;
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index 08ee1c7b12e00..e04352c1dc2ce 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -174,7 +174,7 @@ static int acpi_battery_is_charged(struct acpi_battery *battery)
+ 		return 1;
+ 
+ 	/* fallback to using design values for broken batteries */
+-	if (battery->design_capacity == battery->capacity_now)
++	if (battery->design_capacity <= battery->capacity_now)
+ 		return 1;
+ 
+ 	/* we don't do any sort of metric based on percentages */
+diff --git a/drivers/acpi/pmic/intel_pmic.c b/drivers/acpi/pmic/intel_pmic.c
+index a371f273f99dd..9cde299eba880 100644
+--- a/drivers/acpi/pmic/intel_pmic.c
++++ b/drivers/acpi/pmic/intel_pmic.c
+@@ -211,31 +211,36 @@ static acpi_status intel_pmic_regs_handler(u32 function,
+ 		void *handler_context, void *region_context)
+ {
+ 	struct intel_pmic_opregion *opregion = region_context;
+-	int result = 0;
++	int result = -EINVAL;
++
++	if (function == ACPI_WRITE) {
++		switch (address) {
++		case 0:
++			return AE_OK;
++		case 1:
++			opregion->ctx.addr |= (*value64 & 0xff) << 8;
++			return AE_OK;
++		case 2:
++			opregion->ctx.addr |= *value64 & 0xff;
++			return AE_OK;
++		case 3:
++			opregion->ctx.val = *value64 & 0xff;
++			return AE_OK;
++		case 4:
++			if (*value64) {
++				result = regmap_write(opregion->regmap, opregion->ctx.addr,
++						      opregion->ctx.val);
++			} else {
++				result = regmap_read(opregion->regmap, opregion->ctx.addr,
++						     &opregion->ctx.val);
++			}
++			opregion->ctx.addr = 0;
++		}
++	}
+ 
+-	switch (address) {
+-	case 0:
+-		return AE_OK;
+-	case 1:
+-		opregion->ctx.addr |= (*value64 & 0xff) << 8;
+-		return AE_OK;
+-	case 2:
+-		opregion->ctx.addr |= *value64 & 0xff;
++	if (function == ACPI_READ && address == 3) {
++		*value64 = opregion->ctx.val;
+ 		return AE_OK;
+-	case 3:
+-		opregion->ctx.val = *value64 & 0xff;
+-		return AE_OK;
+-	case 4:
+-		if (*value64) {
+-			result = regmap_write(opregion->regmap, opregion->ctx.addr,
+-					      opregion->ctx.val);
+-		} else {
+-			result = regmap_read(opregion->regmap, opregion->ctx.addr,
+-					     &opregion->ctx.val);
+-			if (result == 0)
+-				*value64 = opregion->ctx.val;
+-		}
+-		memset(&opregion->ctx, 0x00, sizeof(opregion->ctx));
+ 	}
+ 
+ 	if (result < 0) {
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index d9977ce0be76d..8f14ad7ab5bd8 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -420,6 +420,9 @@ enum binder_deferred_state {
+  *                        (invariant after initialized)
+  * @tsk                   task_struct for group_leader of process
+  *                        (invariant after initialized)
++ * @cred                  struct cred associated with the `struct file`
++ *                        in binder_open()
++ *                        (invariant after initialized)
+  * @deferred_work_node:   element for binder_deferred_list
+  *                        (protected by binder_deferred_lock)
+  * @deferred_work:        bitmap of deferred work to perform
+@@ -465,6 +468,7 @@ struct binder_proc {
+ 	struct list_head waiting_threads;
+ 	int pid;
+ 	struct task_struct *tsk;
++	const struct cred *cred;
+ 	struct hlist_node deferred_work_node;
+ 	int deferred_work;
+ 	bool is_dead;
+@@ -2439,7 +2443,7 @@ static int binder_translate_binder(struct flat_binder_object *fp,
+ 		ret = -EINVAL;
+ 		goto done;
+ 	}
+-	if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
++	if (security_binder_transfer_binder(proc->cred, target_proc->cred)) {
+ 		ret = -EPERM;
+ 		goto done;
+ 	}
+@@ -2485,7 +2489,7 @@ static int binder_translate_handle(struct flat_binder_object *fp,
+ 				  proc->pid, thread->pid, fp->handle);
+ 		return -EINVAL;
+ 	}
+-	if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
++	if (security_binder_transfer_binder(proc->cred, target_proc->cred)) {
+ 		ret = -EPERM;
+ 		goto done;
+ 	}
+@@ -2573,7 +2577,7 @@ static int binder_translate_fd(u32 fd, binder_size_t fd_offset,
+ 		ret = -EBADF;
+ 		goto err_fget;
+ 	}
+-	ret = security_binder_transfer_file(proc->tsk, target_proc->tsk, file);
++	ret = security_binder_transfer_file(proc->cred, target_proc->cred, file);
+ 	if (ret < 0) {
+ 		ret = -EPERM;
+ 		goto err_security;
+@@ -2971,8 +2975,8 @@ static void binder_transaction(struct binder_proc *proc,
+ 			return_error_line = __LINE__;
+ 			goto err_invalid_target_handle;
+ 		}
+-		if (security_binder_transaction(proc->tsk,
+-						target_proc->tsk) < 0) {
++		if (security_binder_transaction(proc->cred,
++						target_proc->cred) < 0) {
+ 			return_error = BR_FAILED_REPLY;
+ 			return_error_param = -EPERM;
+ 			return_error_line = __LINE__;
+@@ -3087,7 +3091,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 		t->from = thread;
+ 	else
+ 		t->from = NULL;
+-	t->sender_euid = task_euid(proc->tsk);
++	t->sender_euid = proc->cred->euid;
+ 	t->to_proc = target_proc;
+ 	t->to_thread = target_thread;
+ 	t->code = tr->code;
+@@ -3098,7 +3102,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 		u32 secid;
+ 		size_t added_size;
+ 
+-		security_task_getsecid(proc->tsk, &secid);
++		security_cred_getsecid(proc->cred, &secid);
+ 		ret = security_secid_to_secctx(secid, &secctx, &secctx_sz);
+ 		if (ret) {
+ 			return_error = BR_FAILED_REPLY;
+@@ -4703,6 +4707,7 @@ static void binder_free_proc(struct binder_proc *proc)
+ 	}
+ 	binder_alloc_deferred_release(&proc->alloc);
+ 	put_task_struct(proc->tsk);
++	put_cred(proc->cred);
+ 	binder_stats_deleted(BINDER_STAT_PROC);
+ 	kfree(proc);
+ }
+@@ -4913,7 +4918,7 @@ static int binder_ioctl_set_ctx_mgr(struct file *filp,
+ 		ret = -EBUSY;
+ 		goto out;
+ 	}
+-	ret = security_binder_set_context_mgr(proc->tsk);
++	ret = security_binder_set_context_mgr(proc->cred);
+ 	if (ret < 0)
+ 		goto out;
+ 	if (uid_valid(context->binder_context_mgr_uid)) {
+@@ -5220,6 +5225,7 @@ static int binder_open(struct inode *nodp, struct file *filp)
+ 	spin_lock_init(&proc->outer_lock);
+ 	get_task_struct(current->group_leader);
+ 	proc->tsk = current->group_leader;
++	proc->cred = get_cred(filp->f_cred);
+ 	INIT_LIST_HEAD(&proc->todo);
+ 	proc->default_priority = task_nice(current);
+ 	/* binderfs stashes devices in i_private */
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 8916163d508e0..8acf99b88b21e 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -2004,7 +2004,7 @@ unsigned int ata_read_log_page(struct ata_device *dev, u8 log,
+ 
+ retry:
+ 	ata_tf_init(dev, &tf);
+-	if (dev->dma_mode && ata_id_has_read_log_dma_ext(dev->id) &&
++	if (ata_dma_enabled(dev) && ata_id_has_read_log_dma_ext(dev->id) &&
+ 	    !(dev->horkage & ATA_HORKAGE_NO_DMA_LOG)) {
+ 		tf.command = ATA_CMD_READ_LOG_DMA_EXT;
+ 		tf.protocol = ATA_PROT_DMA;
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index b6f92050e60cb..018ed8736a64d 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -93,6 +93,12 @@ static const unsigned long ata_eh_identify_timeouts[] = {
+ 	ULONG_MAX,
+ };
+ 
++static const unsigned long ata_eh_revalidate_timeouts[] = {
++	15000,	/* Some drives are slow to read log pages when waking-up */
++	15000,  /* combined time till here is enough even for media access */
++	ULONG_MAX,
++};
++
+ static const unsigned long ata_eh_flush_timeouts[] = {
+ 	15000,	/* be generous with flush */
+ 	15000,  /* ditto */
+@@ -129,6 +135,8 @@ static const struct ata_eh_cmd_timeout_ent
+ ata_eh_cmd_timeout_table[ATA_EH_CMD_TIMEOUT_TABLE_SIZE] = {
+ 	{ .commands = CMDS(ATA_CMD_ID_ATA, ATA_CMD_ID_ATAPI),
+ 	  .timeouts = ata_eh_identify_timeouts, },
++	{ .commands = CMDS(ATA_CMD_READ_LOG_EXT, ATA_CMD_READ_LOG_DMA_EXT),
++	  .timeouts = ata_eh_revalidate_timeouts, },
+ 	{ .commands = CMDS(ATA_CMD_READ_NATIVE_MAX, ATA_CMD_READ_NATIVE_MAX_EXT),
+ 	  .timeouts = ata_eh_other_timeouts, },
+ 	{ .commands = CMDS(ATA_CMD_SET_MAX, ATA_CMD_SET_MAX_EXT),
+diff --git a/drivers/auxdisplay/ht16k33.c b/drivers/auxdisplay/ht16k33.c
+index d8602843e8a53..7e3858c4e030f 100644
+--- a/drivers/auxdisplay/ht16k33.c
++++ b/drivers/auxdisplay/ht16k33.c
+@@ -219,6 +219,15 @@ static const struct backlight_ops ht16k33_bl_ops = {
+ 	.check_fb	= ht16k33_bl_check_fb,
+ };
+ 
++/*
++ * Blank events will be passed to the actual device handling the backlight when
++ * we return zero here.
++ */
++static int ht16k33_blank(int blank, struct fb_info *info)
++{
++	return 0;
++}
++
+ static int ht16k33_mmap(struct fb_info *info, struct vm_area_struct *vma)
+ {
+ 	struct ht16k33_priv *priv = info->par;
+@@ -231,6 +240,7 @@ static const struct fb_ops ht16k33_fb_ops = {
+ 	.owner = THIS_MODULE,
+ 	.fb_read = fb_sys_read,
+ 	.fb_write = fb_sys_write,
++	.fb_blank = ht16k33_blank,
+ 	.fb_fillrect = sys_fillrect,
+ 	.fb_copyarea = sys_copyarea,
+ 	.fb_imageblit = sys_imageblit,
+@@ -418,6 +428,33 @@ static int ht16k33_probe(struct i2c_client *client,
+ 	if (err)
+ 		return err;
+ 
++	/* Backlight */
++	memset(&bl_props, 0, sizeof(struct backlight_properties));
++	bl_props.type = BACKLIGHT_RAW;
++	bl_props.max_brightness = MAX_BRIGHTNESS;
++
++	bl = devm_backlight_device_register(&client->dev, DRIVER_NAME"-bl",
++					    &client->dev, priv,
++					    &ht16k33_bl_ops, &bl_props);
++	if (IS_ERR(bl)) {
++		dev_err(&client->dev, "failed to register backlight\n");
++		return PTR_ERR(bl);
++	}
++
++	err = of_property_read_u32(node, "default-brightness-level",
++				   &dft_brightness);
++	if (err) {
++		dft_brightness = MAX_BRIGHTNESS;
++	} else if (dft_brightness > MAX_BRIGHTNESS) {
++		dev_warn(&client->dev,
++			 "invalid default brightness level: %u, using %u\n",
++			 dft_brightness, MAX_BRIGHTNESS);
++		dft_brightness = MAX_BRIGHTNESS;
++	}
++
++	bl->props.brightness = dft_brightness;
++	ht16k33_bl_update_status(bl);
++
+ 	/* Framebuffer (2 bytes per column) */
+ 	BUILD_BUG_ON(PAGE_SIZE < HT16K33_FB_SIZE);
+ 	fbdev->buffer = (unsigned char *) get_zeroed_page(GFP_KERNEL);
+@@ -450,6 +487,7 @@ static int ht16k33_probe(struct i2c_client *client,
+ 	fbdev->info->screen_size = HT16K33_FB_SIZE;
+ 	fbdev->info->fix = ht16k33_fb_fix;
+ 	fbdev->info->var = ht16k33_fb_var;
++	fbdev->info->bl_dev = bl;
+ 	fbdev->info->pseudo_palette = NULL;
+ 	fbdev->info->flags = FBINFO_FLAG_DEFAULT;
+ 	fbdev->info->par = priv;
+@@ -462,34 +500,6 @@ static int ht16k33_probe(struct i2c_client *client,
+ 	if (err)
+ 		goto err_fbdev_unregister;
+ 
+-	/* Backlight */
+-	memset(&bl_props, 0, sizeof(struct backlight_properties));
+-	bl_props.type = BACKLIGHT_RAW;
+-	bl_props.max_brightness = MAX_BRIGHTNESS;
+-
+-	bl = devm_backlight_device_register(&client->dev, DRIVER_NAME"-bl",
+-					    &client->dev, priv,
+-					    &ht16k33_bl_ops, &bl_props);
+-	if (IS_ERR(bl)) {
+-		dev_err(&client->dev, "failed to register backlight\n");
+-		err = PTR_ERR(bl);
+-		goto err_fbdev_unregister;
+-	}
+-
+-	err = of_property_read_u32(node, "default-brightness-level",
+-				   &dft_brightness);
+-	if (err) {
+-		dft_brightness = MAX_BRIGHTNESS;
+-	} else if (dft_brightness > MAX_BRIGHTNESS) {
+-		dev_warn(&client->dev,
+-			 "invalid default brightness level: %u, using %u\n",
+-			 dft_brightness, MAX_BRIGHTNESS);
+-		dft_brightness = MAX_BRIGHTNESS;
+-	}
+-
+-	bl->props.brightness = dft_brightness;
+-	ht16k33_bl_update_status(bl);
+-
+ 	ht16k33_fb_queue(priv);
+ 	return 0;
+ 
+diff --git a/drivers/auxdisplay/img-ascii-lcd.c b/drivers/auxdisplay/img-ascii-lcd.c
+index 1cce409ce5cac..e33ce0151cdfd 100644
+--- a/drivers/auxdisplay/img-ascii-lcd.c
++++ b/drivers/auxdisplay/img-ascii-lcd.c
+@@ -280,6 +280,16 @@ static int img_ascii_lcd_display(struct img_ascii_lcd_ctx *ctx,
+ 	if (msg[count - 1] == '\n')
+ 		count--;
+ 
++	if (!count) {
++		/* clear the LCD */
++		devm_kfree(&ctx->pdev->dev, ctx->message);
++		ctx->message = NULL;
++		ctx->message_len = 0;
++		memset(ctx->curr, ' ', ctx->cfg->num_chars);
++		ctx->cfg->update(ctx);
++		return 0;
++	}
++
+ 	new_msg = devm_kmalloc(&ctx->pdev->dev, count + 1, GFP_KERNEL);
+ 	if (!new_msg)
+ 		return -ENOMEM;
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 2bc4db5ffe445..389d13616d1df 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -668,9 +668,7 @@ struct device_link *device_link_add(struct device *consumer,
+ 		     dev_bus_name(supplier), dev_name(supplier),
+ 		     dev_bus_name(consumer), dev_name(consumer));
+ 	if (device_register(&link->link_dev)) {
+-		put_device(consumer);
+-		put_device(supplier);
+-		kfree(link);
++		put_device(&link->link_dev);
+ 		link = NULL;
+ 		goto out;
+ 	}
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 192b1c7286b36..4167e2aef3975 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -1053,7 +1053,7 @@ static void device_complete(struct device *dev, pm_message_t state)
+ 	const char *info = NULL;
+ 
+ 	if (dev->power.syscore)
+-		return;
++		goto out;
+ 
+ 	device_lock(dev);
+ 
+@@ -1083,6 +1083,7 @@ static void device_complete(struct device *dev, pm_message_t state)
+ 
+ 	device_unlock(dev);
+ 
++out:
+ 	pm_runtime_put(dev);
+ }
+ 
+@@ -1796,9 +1797,6 @@ static int device_prepare(struct device *dev, pm_message_t state)
+ 	int (*callback)(struct device *) = NULL;
+ 	int ret = 0;
+ 
+-	if (dev->power.syscore)
+-		return 0;
+-
+ 	/*
+ 	 * If a device's parent goes into runtime suspend at the wrong time,
+ 	 * it won't be possible to resume the device.  To prevent this we
+@@ -1807,6 +1805,9 @@ static int device_prepare(struct device *dev, pm_message_t state)
+ 	 */
+ 	pm_runtime_get_noresume(dev);
+ 
++	if (dev->power.syscore)
++		return 0;
++
+ 	device_lock(dev);
+ 
+ 	dev->power.wakeup_path = false;
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index 7dce17fd59baa..0636df6b67db6 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -907,7 +907,7 @@ static ssize_t read_block_state(struct file *file, char __user *buf,
+ 			zram_test_flag(zram, index, ZRAM_HUGE) ? 'h' : '.',
+ 			zram_test_flag(zram, index, ZRAM_IDLE) ? 'i' : '.');
+ 
+-		if (count < copied) {
++		if (count <= copied) {
+ 			zram_slot_unlock(zram, index);
+ 			break;
+ 		}
+diff --git a/drivers/bluetooth/btmtkuart.c b/drivers/bluetooth/btmtkuart.c
+index 6c40bc75fb5b8..719d4685a2ddd 100644
+--- a/drivers/bluetooth/btmtkuart.c
++++ b/drivers/bluetooth/btmtkuart.c
+@@ -158,8 +158,10 @@ static int mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 	int err;
+ 
+ 	hlen = sizeof(*hdr) + wmt_params->dlen;
+-	if (hlen > 255)
+-		return -EINVAL;
++	if (hlen > 255) {
++		err = -EINVAL;
++		goto err_free_skb;
++	}
+ 
+ 	hdr = (struct mtk_wmt_hdr *)&wc;
+ 	hdr->dir = 1;
+@@ -173,7 +175,7 @@ static int mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 	err = __hci_cmd_send(hdev, 0xfc6f, hlen, &wc);
+ 	if (err < 0) {
+ 		clear_bit(BTMTKUART_TX_WAIT_VND_EVT, &bdev->tx_state);
+-		return err;
++		goto err_free_skb;
+ 	}
+ 
+ 	/* The vendor specific WMT commands are all answered by a vendor
+@@ -190,13 +192,14 @@ static int mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 	if (err == -EINTR) {
+ 		bt_dev_err(hdev, "Execution of wmt command interrupted");
+ 		clear_bit(BTMTKUART_TX_WAIT_VND_EVT, &bdev->tx_state);
+-		return err;
++		goto err_free_skb;
+ 	}
+ 
+ 	if (err) {
+ 		bt_dev_err(hdev, "Execution of wmt command timed out");
+ 		clear_bit(BTMTKUART_TX_WAIT_VND_EVT, &bdev->tx_state);
+-		return -ETIMEDOUT;
++		err = -ETIMEDOUT;
++		goto err_free_skb;
+ 	}
+ 
+ 	/* Parse and handle the return WMT event */
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 02341fd66e8d2..2ff437e5c7051 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -17,6 +17,7 @@
+ #include <linux/of_platform.h>
+ #include <linux/slab.h>
+ #include <linux/sys_soc.h>
++#include <linux/timekeeping.h>
+ #include <linux/iopoll.h>
+ 
+ #include <linux/platform_data/ti-sysc.h>
+@@ -223,37 +224,77 @@ static u32 sysc_read_sysstatus(struct sysc *ddata)
+ 	return sysc_read(ddata, offset);
+ }
+ 
+-/* Poll on reset status */
+-static int sysc_wait_softreset(struct sysc *ddata)
++static int sysc_poll_reset_sysstatus(struct sysc *ddata)
+ {
+-	u32 sysc_mask, syss_done, rstval;
+-	int syss_offset, error = 0;
+-
+-	if (ddata->cap->regbits->srst_shift < 0)
+-		return 0;
+-
+-	syss_offset = ddata->offsets[SYSC_SYSSTATUS];
+-	sysc_mask = BIT(ddata->cap->regbits->srst_shift);
++	int error, retries;
++	u32 syss_done, rstval;
+ 
+ 	if (ddata->cfg.quirks & SYSS_QUIRK_RESETDONE_INVERTED)
+ 		syss_done = 0;
+ 	else
+ 		syss_done = ddata->cfg.syss_mask;
+ 
+-	if (syss_offset >= 0) {
++	if (likely(!timekeeping_suspended)) {
+ 		error = readx_poll_timeout_atomic(sysc_read_sysstatus, ddata,
+ 				rstval, (rstval & ddata->cfg.syss_mask) ==
+ 				syss_done, 100, MAX_MODULE_SOFTRESET_WAIT);
++	} else {
++		retries = MAX_MODULE_SOFTRESET_WAIT;
++		while (retries--) {
++			rstval = sysc_read_sysstatus(ddata);
++			if ((rstval & ddata->cfg.syss_mask) == syss_done)
++				return 0;
++			udelay(2); /* Account for udelay flakeyness */
++		}
++		error = -ETIMEDOUT;
++	}
+ 
+-	} else if (ddata->cfg.quirks & SYSC_QUIRK_RESET_STATUS) {
++	return error;
++}
++
++static int sysc_poll_reset_sysconfig(struct sysc *ddata)
++{
++	int error, retries;
++	u32 sysc_mask, rstval;
++
++	sysc_mask = BIT(ddata->cap->regbits->srst_shift);
++
++	if (likely(!timekeeping_suspended)) {
+ 		error = readx_poll_timeout_atomic(sysc_read_sysconfig, ddata,
+ 				rstval, !(rstval & sysc_mask),
+ 				100, MAX_MODULE_SOFTRESET_WAIT);
++	} else {
++		retries = MAX_MODULE_SOFTRESET_WAIT;
++		while (retries--) {
++			rstval = sysc_read_sysconfig(ddata);
++			if (!(rstval & sysc_mask))
++				return 0;
++			udelay(2); /* Account for udelay flakeyness */
++		}
++		error = -ETIMEDOUT;
+ 	}
+ 
+ 	return error;
+ }
+ 
++/* Poll on reset status */
++static int sysc_wait_softreset(struct sysc *ddata)
++{
++	int syss_offset, error = 0;
++
++	if (ddata->cap->regbits->srst_shift < 0)
++		return 0;
++
++	syss_offset = ddata->offsets[SYSC_SYSSTATUS];
++
++	if (syss_offset >= 0)
++		error = sysc_poll_reset_sysstatus(ddata);
++	else if (ddata->cfg.quirks & SYSC_QUIRK_RESET_STATUS)
++		error = sysc_poll_reset_sysconfig(ddata);
++
++	return error;
++}
++
+ static int sysc_add_named_clock_from_child(struct sysc *ddata,
+ 					   const char *name,
+ 					   const char *optfck_name)
+diff --git a/drivers/char/hw_random/mtk-rng.c b/drivers/char/hw_random/mtk-rng.c
+index 8ad7b515a51b8..6c00ea0085553 100644
+--- a/drivers/char/hw_random/mtk-rng.c
++++ b/drivers/char/hw_random/mtk-rng.c
+@@ -166,8 +166,13 @@ static int mtk_rng_runtime_resume(struct device *dev)
+ 	return mtk_rng_init(&priv->rng);
+ }
+ 
+-static UNIVERSAL_DEV_PM_OPS(mtk_rng_pm_ops, mtk_rng_runtime_suspend,
+-			    mtk_rng_runtime_resume, NULL);
++static const struct dev_pm_ops mtk_rng_pm_ops = {
++	SET_RUNTIME_PM_OPS(mtk_rng_runtime_suspend,
++			   mtk_rng_runtime_resume, NULL)
++	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
++				pm_runtime_force_resume)
++};
++
+ #define MTK_RNG_PM_OPS (&mtk_rng_pm_ops)
+ #else	/* CONFIG_PM */
+ #define MTK_RNG_PM_OPS NULL
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index 8774a3b8ff959..abb865b1dff29 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -4802,7 +4802,9 @@ static atomic_t recv_msg_inuse_count = ATOMIC_INIT(0);
+ static void free_smi_msg(struct ipmi_smi_msg *msg)
+ {
+ 	atomic_dec(&smi_msg_inuse_count);
+-	kfree(msg);
++	/* Try to keep as much stuff out of the panic path as possible. */
++	if (!oops_in_progress)
++		kfree(msg);
+ }
+ 
+ struct ipmi_smi_msg *ipmi_alloc_smi_msg(void)
+@@ -4821,7 +4823,9 @@ EXPORT_SYMBOL(ipmi_alloc_smi_msg);
+ static void free_recv_msg(struct ipmi_recv_msg *msg)
+ {
+ 	atomic_dec(&recv_msg_inuse_count);
+-	kfree(msg);
++	/* Try to keep as much stuff out of the panic path as possible. */
++	if (!oops_in_progress)
++		kfree(msg);
+ }
+ 
+ static struct ipmi_recv_msg *ipmi_alloc_recv_msg(void)
+@@ -4839,7 +4843,7 @@ static struct ipmi_recv_msg *ipmi_alloc_recv_msg(void)
+ 
+ void ipmi_free_recv_msg(struct ipmi_recv_msg *msg)
+ {
+-	if (msg->user)
++	if (msg->user && !oops_in_progress)
+ 		kref_put(&msg->user->refcount, free_user);
+ 	msg->done(msg);
+ }
+diff --git a/drivers/char/ipmi/ipmi_watchdog.c b/drivers/char/ipmi/ipmi_watchdog.c
+index 6384510c48d6b..92eda5b2f1341 100644
+--- a/drivers/char/ipmi/ipmi_watchdog.c
++++ b/drivers/char/ipmi/ipmi_watchdog.c
+@@ -342,13 +342,17 @@ static atomic_t msg_tofree = ATOMIC_INIT(0);
+ static DECLARE_COMPLETION(msg_wait);
+ static void msg_free_smi(struct ipmi_smi_msg *msg)
+ {
+-	if (atomic_dec_and_test(&msg_tofree))
+-		complete(&msg_wait);
++	if (atomic_dec_and_test(&msg_tofree)) {
++		if (!oops_in_progress)
++			complete(&msg_wait);
++	}
+ }
+ static void msg_free_recv(struct ipmi_recv_msg *msg)
+ {
+-	if (atomic_dec_and_test(&msg_tofree))
+-		complete(&msg_wait);
++	if (atomic_dec_and_test(&msg_tofree)) {
++		if (!oops_in_progress)
++			complete(&msg_wait);
++	}
+ }
+ static struct ipmi_smi_msg smi_msg = {
+ 	.done = msg_free_smi
+@@ -434,8 +438,10 @@ static int _ipmi_set_timeout(int do_heartbeat)
+ 	rv = __ipmi_set_timeout(&smi_msg,
+ 				&recv_msg,
+ 				&send_heartbeat_now);
+-	if (rv)
++	if (rv) {
++		atomic_set(&msg_tofree, 0);
+ 		return rv;
++	}
+ 
+ 	wait_for_completion(&msg_wait);
+ 
+@@ -580,6 +586,7 @@ restart:
+ 				      &recv_msg,
+ 				      1);
+ 	if (rv) {
++		atomic_set(&msg_tofree, 0);
+ 		pr_warn("heartbeat send failure: %d\n", rv);
+ 		return rv;
+ 	}
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 784b8b3cb903f..97e916856cf3e 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -455,6 +455,9 @@ static int tpm2_map_response_body(struct tpm_chip *chip, u32 cc, u8 *rsp,
+ 	if (be32_to_cpu(data->capability) != TPM2_CAP_HANDLES)
+ 		return 0;
+ 
++	if (be32_to_cpu(data->count) > (UINT_MAX - TPM_HEADER_SIZE - 9) / 4)
++		return -EFAULT;
++
+ 	if (len != TPM_HEADER_SIZE + 9 + 4 * be32_to_cpu(data->count))
+ 		return -EFAULT;
+ 
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 69579efb247b3..b2659a4c40168 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -48,6 +48,7 @@ static int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask,
+ 		unsigned long timeout, wait_queue_head_t *queue,
+ 		bool check_cancel)
+ {
++	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ 	unsigned long stop;
+ 	long rc;
+ 	u8 status;
+@@ -80,8 +81,8 @@ again:
+ 		}
+ 	} else {
+ 		do {
+-			usleep_range(TPM_TIMEOUT_USECS_MIN,
+-				     TPM_TIMEOUT_USECS_MAX);
++			usleep_range(priv->timeout_min,
++				     priv->timeout_max);
+ 			status = chip->ops->status(chip);
+ 			if ((status & mask) == mask)
+ 				return 0;
+@@ -945,7 +946,22 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	chip->timeout_b = msecs_to_jiffies(TIS_TIMEOUT_B_MAX);
+ 	chip->timeout_c = msecs_to_jiffies(TIS_TIMEOUT_C_MAX);
+ 	chip->timeout_d = msecs_to_jiffies(TIS_TIMEOUT_D_MAX);
++	priv->timeout_min = TPM_TIMEOUT_USECS_MIN;
++	priv->timeout_max = TPM_TIMEOUT_USECS_MAX;
+ 	priv->phy_ops = phy_ops;
++
++	rc = tpm_tis_read32(priv, TPM_DID_VID(0), &vendor);
++	if (rc < 0)
++		goto out_err;
++
++	priv->manufacturer_id = vendor;
++
++	if (priv->manufacturer_id == TPM_VID_ATML &&
++		!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
++		priv->timeout_min = TIS_TIMEOUT_MIN_ATML;
++		priv->timeout_max = TIS_TIMEOUT_MAX_ATML;
++	}
++
+ 	dev_set_drvdata(&chip->dev, priv);
+ 
+ 	if (is_bsw()) {
+@@ -988,12 +1004,6 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	if (rc)
+ 		goto out_err;
+ 
+-	rc = tpm_tis_read32(priv, TPM_DID_VID(0), &vendor);
+-	if (rc < 0)
+-		goto out_err;
+-
+-	priv->manufacturer_id = vendor;
+-
+ 	rc = tpm_tis_read8(priv, TPM_RID(0), &rid);
+ 	if (rc < 0)
+ 		goto out_err;
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index b2a3c6c72882d..3be24f221e32a 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -54,6 +54,8 @@ enum tis_defaults {
+ 	TIS_MEM_LEN = 0x5000,
+ 	TIS_SHORT_TIMEOUT = 750,	/* ms */
+ 	TIS_LONG_TIMEOUT = 2000,	/* 2 sec */
++	TIS_TIMEOUT_MIN_ATML = 14700,	/* usecs */
++	TIS_TIMEOUT_MAX_ATML = 15000,	/* usecs */
+ };
+ 
+ /* Some timeout values are needed before it is known whether the chip is
+@@ -98,6 +100,8 @@ struct tpm_tis_data {
+ 	wait_queue_head_t read_queue;
+ 	const struct tpm_tis_phy_ops *phy_ops;
+ 	unsigned short rng_quality;
++	unsigned int timeout_min; /* usecs */
++	unsigned int timeout_max; /* usecs */
+ };
+ 
+ struct tpm_tis_phy_ops {
+diff --git a/drivers/char/tpm/tpm_tis_spi_main.c b/drivers/char/tpm/tpm_tis_spi_main.c
+index de4209003a448..d64bea3298a29 100644
+--- a/drivers/char/tpm/tpm_tis_spi_main.c
++++ b/drivers/char/tpm/tpm_tis_spi_main.c
+@@ -263,6 +263,7 @@ static const struct spi_device_id tpm_tis_spi_id[] = {
+ 	{ "st33htpm-spi", (unsigned long)tpm_tis_spi_probe },
+ 	{ "slb9670", (unsigned long)tpm_tis_spi_probe },
+ 	{ "tpm_tis_spi", (unsigned long)tpm_tis_spi_probe },
++	{ "tpm_tis-spi", (unsigned long)tpm_tis_spi_probe },
+ 	{ "cr50", (unsigned long)cr50_spi_probe },
+ 	{}
+ };
+diff --git a/drivers/clk/at91/clk-sam9x60-pll.c b/drivers/clk/at91/clk-sam9x60-pll.c
+index 78f458a7b2ef4..5a9daa3643a72 100644
+--- a/drivers/clk/at91/clk-sam9x60-pll.c
++++ b/drivers/clk/at91/clk-sam9x60-pll.c
+@@ -71,8 +71,8 @@ static unsigned long sam9x60_frac_pll_recalc_rate(struct clk_hw *hw,
+ 	struct sam9x60_pll_core *core = to_sam9x60_pll_core(hw);
+ 	struct sam9x60_frac *frac = to_sam9x60_frac(core);
+ 
+-	return (parent_rate * (frac->mul + 1) +
+-		((u64)parent_rate * frac->frac >> 22));
++	return parent_rate * (frac->mul + 1) +
++		DIV_ROUND_CLOSEST_ULL((u64)parent_rate * frac->frac, (1 << 22));
+ }
+ 
+ static int sam9x60_frac_pll_prepare(struct clk_hw *hw)
+diff --git a/drivers/clk/at91/pmc.c b/drivers/clk/at91/pmc.c
+index 20ee9dccee787..b40035b011d0a 100644
+--- a/drivers/clk/at91/pmc.c
++++ b/drivers/clk/at91/pmc.c
+@@ -267,6 +267,11 @@ static int __init pmc_register_ops(void)
+ 	if (!np)
+ 		return -ENODEV;
+ 
++	if (!of_device_is_available(np)) {
++		of_node_put(np);
++		return -ENODEV;
++	}
++
+ 	pmcreg = device_node_to_regmap(np);
+ 	of_node_put(np);
+ 	if (IS_ERR(pmcreg))
+diff --git a/drivers/clk/mvebu/ap-cpu-clk.c b/drivers/clk/mvebu/ap-cpu-clk.c
+index b4259b60dcfd6..25de4b6da776f 100644
+--- a/drivers/clk/mvebu/ap-cpu-clk.c
++++ b/drivers/clk/mvebu/ap-cpu-clk.c
+@@ -256,12 +256,15 @@ static int ap_cpu_clock_probe(struct platform_device *pdev)
+ 		int cpu, err;
+ 
+ 		err = of_property_read_u32(dn, "reg", &cpu);
+-		if (WARN_ON(err))
++		if (WARN_ON(err)) {
++			of_node_put(dn);
+ 			return err;
++		}
+ 
+ 		/* If cpu2 or cpu3 is enabled */
+ 		if (cpu & APN806_CLUSTER_NUM_MASK) {
+ 			nclusters = 2;
++			of_node_put(dn);
+ 			break;
+ 		}
+ 	}
+@@ -288,8 +291,10 @@ static int ap_cpu_clock_probe(struct platform_device *pdev)
+ 		int cpu, err;
+ 
+ 		err = of_property_read_u32(dn, "reg", &cpu);
+-		if (WARN_ON(err))
++		if (WARN_ON(err)) {
++			of_node_put(dn);
+ 			return err;
++		}
+ 
+ 		cluster_index = cpu & APN806_CLUSTER_NUM_MASK;
+ 		cluster_index >>= APN806_CLUSTER_NUM_OFFSET;
+@@ -301,6 +306,7 @@ static int ap_cpu_clock_probe(struct platform_device *pdev)
+ 		parent = of_clk_get(np, cluster_index);
+ 		if (IS_ERR(parent)) {
+ 			dev_err(dev, "Could not get the clock parent\n");
++			of_node_put(dn);
+ 			return -EINVAL;
+ 		}
+ 		parent_name =  __clk_get_name(parent);
+@@ -319,8 +325,10 @@ static int ap_cpu_clock_probe(struct platform_device *pdev)
+ 		init.parent_names = &parent_name;
+ 
+ 		ret = devm_clk_hw_register(dev, &ap_cpu_clk[cluster_index].hw);
+-		if (ret)
++		if (ret) {
++			of_node_put(dn);
+ 			return ret;
++		}
+ 		ap_cpu_data->hws[cluster_index] = &ap_cpu_clk[cluster_index].hw;
+ 	}
+ 
+diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
+index 39f4d88662002..a0c6e88bebe08 100644
+--- a/drivers/clocksource/Kconfig
++++ b/drivers/clocksource/Kconfig
+@@ -24,6 +24,7 @@ config I8253_LOCK
+ 
+ config OMAP_DM_TIMER
+ 	bool
++	select TIMER_OF
+ 
+ config CLKBLD_I8253
+ 	def_bool y if CLKSRC_I8253 || CLKEVT_I8253 || I8253_LOCK
+diff --git a/drivers/cpuidle/sysfs.c b/drivers/cpuidle/sysfs.c
+index 53ec9585ccd44..469e18547d06c 100644
+--- a/drivers/cpuidle/sysfs.c
++++ b/drivers/cpuidle/sysfs.c
+@@ -488,6 +488,7 @@ static int cpuidle_add_state_sysfs(struct cpuidle_device *device)
+ 					   &kdev->kobj, "state%d", i);
+ 		if (ret) {
+ 			kobject_put(&kobj->kobj);
++			kfree(kobj);
+ 			goto error_state;
+ 		}
+ 		cpuidle_add_s2idle_attr_group(kobj);
+@@ -619,6 +620,7 @@ static int cpuidle_add_driver_sysfs(struct cpuidle_device *dev)
+ 				   &kdev->kobj, "driver");
+ 	if (ret) {
+ 		kobject_put(&kdrv->kobj);
++		kfree(kdrv);
+ 		return ret;
+ 	}
+ 
+@@ -705,7 +707,6 @@ int cpuidle_add_sysfs(struct cpuidle_device *dev)
+ 	if (!kdev)
+ 		return -ENOMEM;
+ 	kdev->dev = dev;
+-	dev->kobj_dev = kdev;
+ 
+ 	init_completion(&kdev->kobj_unregister);
+ 
+@@ -713,9 +714,11 @@ int cpuidle_add_sysfs(struct cpuidle_device *dev)
+ 				   "cpuidle");
+ 	if (error) {
+ 		kobject_put(&kdev->kobj);
++		kfree(kdev);
+ 		return error;
+ 	}
+ 
++	dev->kobj_dev = kdev;
+ 	kobject_uevent(&kdev->kobj, KOBJ_ADD);
+ 
+ 	return 0;
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index dd5f101e43f83..3acc825da4cca 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -1152,16 +1152,27 @@ static struct caam_akcipher_alg caam_rsa = {
+ int caam_pkc_init(struct device *ctrldev)
+ {
+ 	struct caam_drv_private *priv = dev_get_drvdata(ctrldev);
+-	u32 pk_inst;
++	u32 pk_inst, pkha;
+ 	int err;
+ 	init_done = false;
+ 
+ 	/* Determine public key hardware accelerator presence. */
+-	if (priv->era < 10)
++	if (priv->era < 10) {
+ 		pk_inst = (rd_reg32(&priv->ctrl->perfmon.cha_num_ls) &
+ 			   CHA_ID_LS_PK_MASK) >> CHA_ID_LS_PK_SHIFT;
+-	else
+-		pk_inst = rd_reg32(&priv->ctrl->vreg.pkha) & CHA_VER_NUM_MASK;
++	} else {
++		pkha = rd_reg32(&priv->ctrl->vreg.pkha);
++		pk_inst = pkha & CHA_VER_NUM_MASK;
++
++		/*
++		 * Newer CAAMs support partially disabled functionality. If this is the
++		 * case, the number is non-zero, but this bit is set to indicate that
++		 * no encryption or decryption is supported. Only signing and verifying
++		 * is supported.
++		 */
++		if (pkha & CHA_VER_MISC_PKHA_NO_CRYPT)
++			pk_inst = 0;
++	}
+ 
+ 	/* Do not register algorithms if PKHA is not present. */
+ 	if (!pk_inst)
+diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
+index af61f3a2c0d46..3738625c02509 100644
+--- a/drivers/crypto/caam/regs.h
++++ b/drivers/crypto/caam/regs.h
+@@ -322,6 +322,9 @@ struct version_regs {
+ /* CHA Miscellaneous Information - AESA_MISC specific */
+ #define CHA_VER_MISC_AES_GCM	BIT(1 + CHA_VER_MISC_SHIFT)
+ 
++/* CHA Miscellaneous Information - PKHA_MISC specific */
++#define CHA_VER_MISC_PKHA_NO_CRYPT	BIT(7 + CHA_VER_MISC_SHIFT)
++
+ /*
+  * caam_perfmon - Performance Monitor/Secure Memory Status/
+  *                CAAM Global Status/Component Version IDs
+diff --git a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+index e829c6aaf16fd..d7ca222f0df18 100644
+--- a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
++++ b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+@@ -150,6 +150,13 @@ static int __adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
+ 		val = ADF_CSR_RD(pmisc_bar_addr, pf2vf_offset);
+ 	} while ((val & int_bit) && (count++ < ADF_IOV_MSG_ACK_MAX_RETRY));
+ 
++	if (val != msg) {
++		dev_dbg(&GET_DEV(accel_dev),
++			"Collision - PFVF CSR overwritten by remote function\n");
++		ret = -EIO;
++		goto out;
++	}
++
+ 	if (val & int_bit) {
+ 		dev_dbg(&GET_DEV(accel_dev), "ACK not received from remote\n");
+ 		val &= ~int_bit;
+@@ -198,6 +205,11 @@ void adf_vf2pf_req_hndl(struct adf_accel_vf_info *vf_info)
+ 
+ 	/* Read message from the VF */
+ 	msg = ADF_CSR_RD(pmisc_addr, hw_data->get_pf2vf_offset(vf_nr));
++	if (!(msg & ADF_VF2PF_INT)) {
++		dev_info(&GET_DEV(accel_dev),
++			 "Spurious VF2PF interrupt, msg %X. Ignored\n", msg);
++		goto out;
++	}
+ 
+ 	/* To ACK, clear the VF2PFINT bit */
+ 	msg &= ~ADF_VF2PF_INT;
+@@ -281,6 +293,7 @@ void adf_vf2pf_req_hndl(struct adf_accel_vf_info *vf_info)
+ 	if (resp && adf_iov_putmsg(accel_dev, resp, vf_nr))
+ 		dev_err(&GET_DEV(accel_dev), "Failed to send response to VF\n");
+ 
++out:
+ 	/* re-enable interrupt on PF from this VF */
+ 	adf_enable_vf2pf_interrupts(accel_dev, (1 << vf_nr));
+ 	return;
+diff --git a/drivers/crypto/qat/qat_common/adf_vf_isr.c b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+index 024401ec9d1ae..fa1b3a94155cc 100644
+--- a/drivers/crypto/qat/qat_common/adf_vf_isr.c
++++ b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+@@ -79,6 +79,11 @@ static void adf_pf2vf_bh_handler(void *data)
+ 
+ 	/* Read the message from PF */
+ 	msg = ADF_CSR_RD(pmisc_bar_addr, hw_data->get_pf2vf_offset(0));
++	if (!(msg & ADF_PF2VF_INT)) {
++		dev_info(&GET_DEV(accel_dev),
++			 "Spurious PF2VF interrupt, msg %X. Ignored\n", msg);
++		goto out;
++	}
+ 
+ 	if (!(msg & ADF_PF2VF_MSGORIGIN_SYSTEM))
+ 		/* Ignore legacy non-system (non-kernel) PF2VF messages */
+@@ -127,6 +132,7 @@ static void adf_pf2vf_bh_handler(void *data)
+ 	msg &= ~ADF_PF2VF_INT;
+ 	ADF_CSR_WR(pmisc_bar_addr, hw_data->get_pf2vf_offset(0), msg);
+ 
++out:
+ 	/* Re-enable PF2VF interrupts */
+ 	adf_enable_pf2vf_interrupts(accel_dev);
+ 	return;
+diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
+index 88a6c853ffd73..c4a7a2bba0866 100644
+--- a/drivers/crypto/s5p-sss.c
++++ b/drivers/crypto/s5p-sss.c
+@@ -2173,6 +2173,8 @@ static int s5p_aes_probe(struct platform_device *pdev)
+ 
+ 	variant = find_s5p_sss_version(pdev);
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -EINVAL;
+ 
+ 	/*
+ 	 * Note: HASH and PRNG uses the same registers in secss, avoid
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index 922416b3aaceb..93e9bf7382595 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -79,6 +79,7 @@ static void dma_buf_release(struct dentry *dentry)
+ 	if (dmabuf->resv == (struct dma_resv *)&dmabuf[1])
+ 		dma_resv_fini(dmabuf->resv);
+ 
++	WARN_ON(!list_empty(&dmabuf->attachments));
+ 	module_put(dmabuf->owner);
+ 	kfree(dmabuf->name);
+ 	kfree(dmabuf);
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 3b53115db2686..627ad74c879fd 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -145,7 +145,7 @@
+ #define		AT_XDMAC_CC_WRIP	(0x1 << 23)	/* Write in Progress (read only) */
+ #define			AT_XDMAC_CC_WRIP_DONE		(0x0 << 23)
+ #define			AT_XDMAC_CC_WRIP_IN_PROGRESS	(0x1 << 23)
+-#define		AT_XDMAC_CC_PERID(i)	(0x7f & (i) << 24)	/* Channel Peripheral Identifier */
++#define		AT_XDMAC_CC_PERID(i)	((0x7f & (i)) << 24)	/* Channel Peripheral Identifier */
+ #define AT_XDMAC_CDS_MSP	0x2C	/* Channel Data Stride Memory Set Pattern */
+ #define AT_XDMAC_CSUS		0x30	/* Channel Source Microblock Stride */
+ #define AT_XDMAC_CDUS		0x34	/* Channel Destination Microblock Stride */
+diff --git a/drivers/dma/dmaengine.h b/drivers/dma/dmaengine.h
+index 1bfbd64b13717..53f16d3f00294 100644
+--- a/drivers/dma/dmaengine.h
++++ b/drivers/dma/dmaengine.h
+@@ -176,7 +176,7 @@ dmaengine_desc_get_callback_invoke(struct dma_async_tx_descriptor *tx,
+ static inline bool
+ dmaengine_desc_callback_valid(struct dmaengine_desc_callback *cb)
+ {
+-	return (cb->callback) ? true : false;
++	return cb->callback || cb->callback_result;
+ }
+ 
+ struct dma_chan *dma_get_slave_channel(struct dma_chan *chan);
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index b36d5879b91e0..f5635dfa9acf6 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -786,12 +786,14 @@ static void debug_dump_dramcfg_low(struct amd64_pvt *pvt, u32 dclr, int chan)
+ #define CS_ODD_PRIMARY		BIT(1)
+ #define CS_EVEN_SECONDARY	BIT(2)
+ #define CS_ODD_SECONDARY	BIT(3)
++#define CS_3R_INTERLEAVE	BIT(4)
+ 
+ #define CS_EVEN			(CS_EVEN_PRIMARY | CS_EVEN_SECONDARY)
+ #define CS_ODD			(CS_ODD_PRIMARY | CS_ODD_SECONDARY)
+ 
+ static int f17_get_cs_mode(int dimm, u8 ctrl, struct amd64_pvt *pvt)
+ {
++	u8 base, count = 0;
+ 	int cs_mode = 0;
+ 
+ 	if (csrow_enabled(2 * dimm, ctrl, pvt))
+@@ -804,6 +806,20 @@ static int f17_get_cs_mode(int dimm, u8 ctrl, struct amd64_pvt *pvt)
+ 	if (csrow_sec_enabled(2 * dimm + 1, ctrl, pvt))
+ 		cs_mode |= CS_ODD_SECONDARY;
+ 
++	/*
++	 * 3 Rank inteleaving support.
++	 * There should be only three bases enabled and their two masks should
++	 * be equal.
++	 */
++	for_each_chip_select(base, ctrl, pvt)
++		count += csrow_enabled(base, ctrl, pvt);
++
++	if (count == 3 &&
++	    pvt->csels[ctrl].csmasks[0] == pvt->csels[ctrl].csmasks[1]) {
++		edac_dbg(1, "3R interleaving in use.\n");
++		cs_mode |= CS_3R_INTERLEAVE;
++	}
++
+ 	return cs_mode;
+ }
+ 
+@@ -1612,10 +1628,14 @@ static int f17_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc,
+ 	 *
+ 	 * The MSB is the number of bits in the full mask because BIT[0] is
+ 	 * always 0.
++	 *
++	 * In the special 3 Rank interleaving case, a single bit is flipped
++	 * without swapping with the most significant bit. This can be handled
++	 * by keeping the MSB where it is and ignoring the single zero bit.
+ 	 */
+ 	msb = fls(addr_mask_orig) - 1;
+ 	weight = hweight_long(addr_mask_orig);
+-	num_zero_bits = msb - weight;
++	num_zero_bits = msb - weight - !!(cs_mode & CS_3R_INTERLEAVE);
+ 
+ 	/* Take the number of zero bits off from the top of the mask. */
+ 	addr_mask_deinterleaved = GENMASK_ULL(msb - num_zero_bits, 1);
+diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
+index 4c626fcd4dcbb..1522d4aa2ca62 100644
+--- a/drivers/edac/sb_edac.c
++++ b/drivers/edac/sb_edac.c
+@@ -1052,7 +1052,7 @@ static u64 haswell_get_tohm(struct sbridge_pvt *pvt)
+ 	pci_read_config_dword(pvt->info.pci_vtd, HASWELL_TOHM_1, &reg);
+ 	rc = ((reg << 6) | rc) << 26;
+ 
+-	return rc | 0x1ffffff;
++	return rc | 0x3ffffff;
+ }
+ 
+ static u64 knl_get_tolm(struct sbridge_pvt *pvt)
+diff --git a/drivers/firmware/psci/psci_checker.c b/drivers/firmware/psci/psci_checker.c
+index 9a369a2eda71d..116eb465cdb42 100644
+--- a/drivers/firmware/psci/psci_checker.c
++++ b/drivers/firmware/psci/psci_checker.c
+@@ -155,7 +155,7 @@ static int alloc_init_cpu_groups(cpumask_var_t **pcpu_groups)
+ 	if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
+ 		return -ENOMEM;
+ 
+-	cpu_groups = kcalloc(nb_available_cpus, sizeof(cpu_groups),
++	cpu_groups = kcalloc(nb_available_cpus, sizeof(*cpu_groups),
+ 			     GFP_KERNEL);
+ 	if (!cpu_groups) {
+ 		free_cpumask_var(tmp);
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index c5b20bdc08e9d..e10a99860ca4b 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -252,7 +252,7 @@ static bool __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
+ 		break;
+ 	default:
+ 		pr_err("Unknown SMC convention being used\n");
+-		return -EINVAL;
++		return false;
+ 	}
+ 
+ 	ret = qcom_scm_call(dev, &desc, &res);
+diff --git a/drivers/gpio/gpio-mlxbf2.c b/drivers/gpio/gpio-mlxbf2.c
+index befa5e1099439..d4b250b470b41 100644
+--- a/drivers/gpio/gpio-mlxbf2.c
++++ b/drivers/gpio/gpio-mlxbf2.c
+@@ -268,6 +268,11 @@ mlxbf2_gpio_probe(struct platform_device *pdev)
+ 			NULL,
+ 			0);
+ 
++	if (ret) {
++		dev_err(dev, "bgpio_init failed\n");
++		return ret;
++	}
++
+ 	gc->direction_input = mlxbf2_gpio_direction_input;
+ 	gc->direction_output = mlxbf2_gpio_direction_output;
+ 	gc->ngpio = npins;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+index 15c45b2a39835..714178f1b6c6e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+@@ -61,7 +61,7 @@ static void amdgpu_bo_list_free(struct kref *ref)
+ 
+ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
+ 			  struct drm_amdgpu_bo_list_entry *info,
+-			  unsigned num_entries, struct amdgpu_bo_list **result)
++			  size_t num_entries, struct amdgpu_bo_list **result)
+ {
+ 	unsigned last_entry = 0, first_userptr = num_entries;
+ 	struct amdgpu_bo_list_entry *array;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
+index a130e766cbdbe..529d52a204cf4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
+@@ -60,7 +60,7 @@ int amdgpu_bo_create_list_entry_array(struct drm_amdgpu_bo_list_in *in,
+ int amdgpu_bo_list_create(struct amdgpu_device *adev,
+ 				 struct drm_file *filp,
+ 				 struct drm_amdgpu_bo_list_entry *info,
+-				 unsigned num_entries,
++				 size_t num_entries,
+ 				 struct amdgpu_bo_list **list);
+ 
+ static inline struct amdgpu_bo_list_entry *
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+index 95a9117e95640..861d0cc45fc10 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+@@ -842,12 +842,12 @@ static int gmc_v6_0_sw_init(void *handle)
+ 
+ 	adev->gmc.mc_mask = 0xffffffffffULL;
+ 
+-	r = dma_set_mask_and_coherent(adev->dev, DMA_BIT_MASK(44));
++	r = dma_set_mask_and_coherent(adev->dev, DMA_BIT_MASK(40));
+ 	if (r) {
+ 		dev_warn(adev->dev, "No suitable DMA available.\n");
+ 		return r;
+ 	}
+-	adev->need_swiotlb = drm_need_swiotlb(44);
++	adev->need_swiotlb = drm_need_swiotlb(40);
+ 
+ 	r = gmc_v6_0_init_microcode(adev);
+ 	if (r) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+index f493b5c3d382b..79bcc78f77045 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+@@ -22,6 +22,7 @@
+  */
+ 
+ #include <linux/firmware.h>
++#include <drm/drm_drv.h>
+ 
+ #include "amdgpu.h"
+ #include "amdgpu_vcn.h"
+@@ -192,11 +193,14 @@ static int vcn_v2_0_sw_init(void *handle)
+  */
+ static int vcn_v2_0_sw_fini(void *handle)
+ {
+-	int r;
++	int r, idx;
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 	volatile struct amdgpu_fw_shared *fw_shared = adev->vcn.inst->fw_shared_cpu_addr;
+ 
+-	fw_shared->present_flag_0 = 0;
++	if (drm_dev_enter(&adev->ddev, &idx)) {
++		fw_shared->present_flag_0 = 0;
++		drm_dev_exit(idx);
++	}
+ 
+ 	amdgpu_virt_free_mm_table(adev);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+index ce64d4016f903..381839d005db9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+@@ -22,6 +22,7 @@
+  */
+ 
+ #include <linux/firmware.h>
++#include <drm/drm_drv.h>
+ 
+ #include "amdgpu.h"
+ #include "amdgpu_vcn.h"
+@@ -233,17 +234,21 @@ static int vcn_v2_5_sw_init(void *handle)
+  */
+ static int vcn_v2_5_sw_fini(void *handle)
+ {
+-	int i, r;
++	int i, r, idx;
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 	volatile struct amdgpu_fw_shared *fw_shared;
+ 
+-	for (i = 0; i < adev->vcn.num_vcn_inst; i++) {
+-		if (adev->vcn.harvest_config & (1 << i))
+-			continue;
+-		fw_shared = adev->vcn.inst[i].fw_shared_cpu_addr;
+-		fw_shared->present_flag_0 = 0;
++	if (drm_dev_enter(&adev->ddev, &idx)) {
++		for (i = 0; i < adev->vcn.num_vcn_inst; i++) {
++			if (adev->vcn.harvest_config & (1 << i))
++				continue;
++			fw_shared = adev->vcn.inst[i].fw_shared_cpu_addr;
++			fw_shared->present_flag_0 = 0;
++		}
++		drm_dev_exit(idx);
+ 	}
+ 
++
+ 	if (amdgpu_sriov_vf(adev))
+ 		amdgpu_virt_free_mm_table(adev);
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 903170e59342c..5751bddc9cadd 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -744,6 +744,7 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 	kfd_double_confirm_iommu_support(kfd);
+ 
+ 	if (kfd_iommu_device_init(kfd)) {
++		kfd->use_iommu_v2 = false;
+ 		dev_err(kfd_device, "Error initializing iommuv2\n");
+ 		goto device_iommu_error;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 5dbc290bcbe86..3121816546467 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -3754,16 +3754,22 @@ static bool init_soc_bounding_box(struct dc *dc,
+ 			clock_limits_available = (status == PP_SMU_RESULT_OK);
+ 		}
+ 
+-		if (clock_limits_available && uclk_states_available && num_states)
++		if (clock_limits_available && uclk_states_available && num_states) {
++			DC_FP_START();
+ 			dcn20_update_bounding_box(dc, loaded_bb, &max_clocks, uclk_states, num_states);
+-		else if (clock_limits_available)
++			DC_FP_END();
++		} else if (clock_limits_available) {
++			DC_FP_START();
+ 			dcn20_cap_soc_clocks(loaded_bb, max_clocks);
++			DC_FP_END();
++		}
+ 	}
+ 
+ 	loaded_ip->max_num_otg = pool->base.res_cap->num_timing_generator;
+ 	loaded_ip->max_num_dpp = pool->base.pipe_count;
++	DC_FP_START();
+ 	dcn20_patch_bounding_box(dc, loaded_bb);
+-
++	DC_FP_END();
+ 	return true;
+ }
+ 
+@@ -3783,8 +3789,6 @@ static bool dcn20_resource_construct(
+ 	enum dml_project dml_project_version =
+ 			get_dml_project_version(ctx->asic_id.hw_internal_rev);
+ 
+-	DC_FP_START();
+-
+ 	ctx->dc_bios->regs = &bios_regs;
+ 	pool->base.funcs = &dcn20_res_pool_funcs;
+ 
+@@ -4128,12 +4132,10 @@ static bool dcn20_resource_construct(
+ 		pool->base.oem_device = NULL;
+ 	}
+ 
+-	DC_FP_END();
+ 	return true;
+ 
+ create_fail:
+ 
+-	DC_FP_END();
+ 	dcn20_resource_destruct(pool);
+ 
+ 	return false;
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index f6bdec7fa9253..a950d5db211c5 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -109,6 +109,12 @@ static const struct drm_dmi_panel_orientation_data lcd1200x1920_rightside_up = {
+ 	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+ 
++static const struct drm_dmi_panel_orientation_data lcd1280x1920_rightside_up = {
++	.width = 1280,
++	.height = 1920,
++	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
++};
++
+ static const struct dmi_system_id orientation_data[] = {
+ 	{	/* Acer One 10 (S1003) */
+ 		.matches = {
+@@ -134,6 +140,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T103HAF"),
+ 		},
+ 		.driver_data = (void *)&lcd800x1280_rightside_up,
++	}, {	/* AYA NEO 2021 */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYADEVICE"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "AYA NEO 2021"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
+ 	}, {	/* GPD MicroPC (generic strings, also match on bios date) */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
+@@ -185,6 +197,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
+ 		},
+ 		.driver_data = (void *)&gpd_win2,
++	}, {	/* GPD Win 3 */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "GPD"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "G1618-03")
++		},
++		.driver_data = (void *)&lcd720x1280_rightside_up,
+ 	}, {	/* I.T.Works TW891 */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "To be filled by O.E.M."),
+@@ -193,6 +211,13 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "TW891"),
+ 		},
+ 		.driver_data = (void *)&itworks_tw891,
++	}, {	/* KD Kurio Smart C15200 2-in-1 */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "KD Interactive"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Kurio Smart"),
++		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "KDM960BCP"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
+ 	}, {	/*
+ 		 * Lenovo Ideapad Miix 310 laptop, only some production batches
+ 		 * have a portrait screen, the resolution checks makes the quirk
+@@ -211,10 +236,15 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"),
+ 		},
+ 		.driver_data = (void *)&lcd800x1280_rightside_up,
+-	}, {	/* Lenovo Ideapad D330 */
++	}, {	/* Lenovo Ideapad D330-10IGM (HD) */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGM"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
++	}, {	/* Lenovo Ideapad D330-10IGM (FHD) */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "81H3"),
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGM"),
+ 		},
+ 		.driver_data = (void *)&lcd1200x1920_rightside_up,
+@@ -225,6 +255,19 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Default string"),
+ 		},
+ 		.driver_data = (void *)&onegx1_pro,
++	}, {	/* Samsung GalaxyBook 10.6 */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Galaxy Book 10.6"),
++		},
++		.driver_data = (void *)&lcd1280x1920_rightside_up,
++	}, {	/* Valve Steam Deck */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Valve"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Jupiter"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "1"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
+ 	}, {	/* VIOS LTH17 */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "VIOS"),
+diff --git a/drivers/gpu/drm/drm_plane_helper.c b/drivers/gpu/drm/drm_plane_helper.c
+index 3aae7ea522f23..c3f2292dc93d5 100644
+--- a/drivers/gpu/drm/drm_plane_helper.c
++++ b/drivers/gpu/drm/drm_plane_helper.c
+@@ -123,7 +123,6 @@ static int drm_plane_helper_check_update(struct drm_plane *plane,
+ 		.crtc_w = drm_rect_width(dst),
+ 		.crtc_h = drm_rect_height(dst),
+ 		.rotation = rotation,
+-		.visible = *visible,
+ 	};
+ 	struct drm_crtc_state crtc_state = {
+ 		.crtc = crtc,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
+index c940b69435e16..016c462bdb5d2 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
+@@ -138,11 +138,13 @@ static int _sspp_subblk_offset(struct dpu_hw_pipe *ctx,
+ 		u32 *idx)
+ {
+ 	int rc = 0;
+-	const struct dpu_sspp_sub_blks *sblk = ctx->cap->sblk;
++	const struct dpu_sspp_sub_blks *sblk;
+ 
+-	if (!ctx)
++	if (!ctx || !ctx->cap || !ctx->cap->sblk)
+ 		return -EINVAL;
+ 
++	sblk = ctx->cap->sblk;
++
+ 	switch (s_id) {
+ 	case DPU_SSPP_SRC:
+ 		*idx = sblk->src_blk.base;
+@@ -419,7 +421,7 @@ static void _dpu_hw_sspp_setup_scaler3(struct dpu_hw_pipe *ctx,
+ 
+ 	(void)pe;
+ 	if (_sspp_subblk_offset(ctx, DPU_SSPP_SCALER_QSEED3, &idx) || !sspp
+-		|| !scaler3_cfg || !ctx || !ctx->cap || !ctx->cap->sblk)
++		|| !scaler3_cfg)
+ 		return;
+ 
+ 	dpu_hw_setup_scaler3(&ctx->hw, scaler3_cfg, idx,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index c8217f4858a15..b4a2e8eb35dd2 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -846,6 +846,10 @@ static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms)
+ 		return 0;
+ 
+ 	mmu = msm_iommu_new(dpu_kms->dev->dev, domain);
++	if (IS_ERR(mmu)) {
++		iommu_domain_free(domain);
++		return PTR_ERR(mmu);
++	}
+ 	aspace = msm_gem_address_space_create(mmu, "dpu1",
+ 		0x1000, 0x100000000 - 0x1000);
+ 
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index 04be4cfcccc18..819567e40565c 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -1061,7 +1061,7 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
+ 
+ 	ret = msm_gem_new_impl(dev, size, flags, &obj);
+ 	if (ret)
+-		goto fail;
++		return ERR_PTR(ret);
+ 
+ 	msm_obj = to_msm_bo(obj);
+ 
+@@ -1149,7 +1149,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
+ 
+ 	ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj);
+ 	if (ret)
+-		goto fail;
++		return ERR_PTR(ret);
+ 
+ 	drm_gem_private_object_init(dev, obj, size);
+ 
+diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
+index 55d16489d0f3f..90c26da109026 100644
+--- a/drivers/gpu/drm/msm/msm_gpu.c
++++ b/drivers/gpu/drm/msm/msm_gpu.c
+@@ -376,7 +376,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
+ 		state->bos = kcalloc(nr,
+ 			sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
+ 
+-		for (i = 0; i < submit->nr_bos; i++) {
++		for (i = 0; state->bos && i < submit->nr_bos; i++) {
+ 			if (should_dump(submit, i)) {
+ 				msm_gpu_crashstate_get_bo(state, submit->bos[i].obj,
+ 					submit->bos[i].iova, submit->bos[i].flags);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
+index 1c3f890377d2c..f67700c028c75 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
+@@ -156,10 +156,14 @@ nouveau_svmm_bind(struct drm_device *dev, void *data,
+ 	 */
+ 
+ 	mm = get_task_mm(current);
++	if (!mm) {
++		return -EINVAL;
++	}
+ 	mmap_read_lock(mm);
+ 
+ 	if (!cli->svm.svmm) {
+ 		mmap_read_unlock(mm);
++		mmput(mm);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun8i_csc.h b/drivers/gpu/drm/sun4i/sun8i_csc.h
+index a55a38ad849c1..022cafa6c06cb 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_csc.h
++++ b/drivers/gpu/drm/sun4i/sun8i_csc.h
+@@ -16,8 +16,8 @@ struct sun8i_mixer;
+ #define CCSC10_OFFSET 0xA0000
+ #define CCSC11_OFFSET 0xF0000
+ 
+-#define SUN8I_CSC_CTRL(base)		(base + 0x0)
+-#define SUN8I_CSC_COEFF(base, i)	(base + 0x10 + 4 * i)
++#define SUN8I_CSC_CTRL(base)		((base) + 0x0)
++#define SUN8I_CSC_COEFF(base, i)	((base) + 0x10 + 4 * (i))
+ 
+ #define SUN8I_CSC_CTRL_EN		BIT(0)
+ 
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+index 98a006fc30a58..0b1daf442425f 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+@@ -500,11 +500,6 @@ int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
+ 
+ 	switch (bo->mem.mem_type) {
+ 	case TTM_PL_SYSTEM:
+-		if (unlikely(bo->ttm->page_flags & TTM_PAGE_FLAG_SWAPPED)) {
+-			ret = ttm_tt_swapin(bo->ttm);
+-			if (unlikely(ret != 0))
+-				return ret;
+-		}
+ 		fallthrough;
+ 	case TTM_PL_TT:
+ 		ret = ttm_bo_vm_access_kmap(bo, offset, buf, len, write);
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index 182c586525eb8..64fe63c1938f5 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -195,8 +195,8 @@ v3d_clean_caches(struct v3d_dev *v3d)
+ 
+ 	V3D_CORE_WRITE(core, V3D_CTL_L2TCACTL, V3D_L2TCACTL_TMUWCF);
+ 	if (wait_for(!(V3D_CORE_READ(core, V3D_CTL_L2TCACTL) &
+-		       V3D_L2TCACTL_L2TFLS), 100)) {
+-		DRM_ERROR("Timeout waiting for L1T write combiner flush\n");
++		       V3D_L2TCACTL_TMUWCF), 100)) {
++		DRM_ERROR("Timeout waiting for TMU write combiner flush\n");
+ 	}
+ 
+ 	mutex_lock(&v3d->cache_clean_lock);
+diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
+index 07945ca238e2d..5e40fa0f5e8f2 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
++++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
+@@ -91,9 +91,7 @@ virtio_gpu_get_vbuf(struct virtio_gpu_device *vgdev,
+ {
+ 	struct virtio_gpu_vbuffer *vbuf;
+ 
+-	vbuf = kmem_cache_zalloc(vgdev->vbufs, GFP_KERNEL);
+-	if (!vbuf)
+-		return ERR_PTR(-ENOMEM);
++	vbuf = kmem_cache_zalloc(vgdev->vbufs, GFP_KERNEL | __GFP_NOFAIL);
+ 
+ 	BUG_ON(size > MAX_INLINE_CMD_SIZE ||
+ 	       size < sizeof(struct virtio_gpu_ctrl_hdr));
+@@ -147,10 +145,6 @@ static void *virtio_gpu_alloc_cmd_resp(struct virtio_gpu_device *vgdev,
+ 
+ 	vbuf = virtio_gpu_get_vbuf(vgdev, cmd_size,
+ 				   resp_size, resp_buf, cb);
+-	if (IS_ERR(vbuf)) {
+-		*vbuffer_p = NULL;
+-		return ERR_CAST(vbuf);
+-	}
+ 	*vbuffer_p = vbuf;
+ 	return (struct virtio_gpu_command *)vbuf->buf;
+ }
+diff --git a/drivers/hid/hid-u2fzero.c b/drivers/hid/hid-u2fzero.c
+index d70cd3d7f583b..67ae2b18e33ac 100644
+--- a/drivers/hid/hid-u2fzero.c
++++ b/drivers/hid/hid-u2fzero.c
+@@ -132,7 +132,7 @@ static int u2fzero_recv(struct u2fzero_device *dev,
+ 
+ 	ret = (wait_for_completion_timeout(
+ 		&ctx.done, msecs_to_jiffies(USB_CTRL_SET_TIMEOUT)));
+-	if (ret < 0) {
++	if (ret == 0) {
+ 		usb_kill_urb(dev->urb);
+ 		hid_err(hdev, "urb submission timed out");
+ 	} else {
+@@ -191,6 +191,8 @@ static int u2fzero_rng_read(struct hwrng *rng, void *data,
+ 	struct u2f_hid_msg resp;
+ 	int ret;
+ 	size_t actual_length;
++	/* valid packets must have a correct header */
++	int min_length = offsetof(struct u2f_hid_msg, init.data);
+ 
+ 	if (!dev->present) {
+ 		hid_dbg(dev->hdev, "device not present");
+@@ -200,12 +202,12 @@ static int u2fzero_rng_read(struct hwrng *rng, void *data,
+ 	ret = u2fzero_recv(dev, &req, &resp);
+ 
+ 	/* ignore errors or packets without data */
+-	if (ret < offsetof(struct u2f_hid_msg, init.data))
++	if (ret < min_length)
+ 		return 0;
+ 
+ 	/* only take the minimum amount of data it is safe to take */
+-	actual_length = min3((size_t)ret - offsetof(struct u2f_hid_msg,
+-		init.data), U2F_HID_MSG_LEN(resp), max);
++	actual_length = min3((size_t)ret - min_length,
++		U2F_HID_MSG_LEN(resp), max);
+ 
+ 	memcpy(data, resp.init.data, actual_length);
+ 
+diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
+index 40e2b9f91163c..7845fa5de79e9 100644
+--- a/drivers/hv/hyperv_vmbus.h
++++ b/drivers/hv/hyperv_vmbus.h
+@@ -13,6 +13,7 @@
+ #define _HYPERV_VMBUS_H
+ 
+ #include <linux/list.h>
++#include <linux/bitops.h>
+ #include <asm/sync_bitops.h>
+ #include <asm/hyperv-tlfs.h>
+ #include <linux/atomic.h>
+diff --git a/drivers/hwmon/hwmon.c b/drivers/hwmon/hwmon.c
+index 6c684058bfdfc..e5a83f7492677 100644
+--- a/drivers/hwmon/hwmon.c
++++ b/drivers/hwmon/hwmon.c
+@@ -760,8 +760,10 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
+ 	dev_set_drvdata(hdev, drvdata);
+ 	dev_set_name(hdev, HWMON_ID_FORMAT, id);
+ 	err = device_register(hdev);
+-	if (err)
+-		goto free_hwmon;
++	if (err) {
++		put_device(hdev);
++		goto ida_remove;
++	}
+ 
+ 	INIT_LIST_HEAD(&hwdev->tzdata);
+ 
+diff --git a/drivers/hwmon/pmbus/lm25066.c b/drivers/hwmon/pmbus/lm25066.c
+index 429172a42902c..17199a1104c72 100644
+--- a/drivers/hwmon/pmbus/lm25066.c
++++ b/drivers/hwmon/pmbus/lm25066.c
+@@ -51,26 +51,31 @@ struct __coeff {
+ #define PSC_CURRENT_IN_L	(PSC_NUM_CLASSES)
+ #define PSC_POWER_L		(PSC_NUM_CLASSES + 1)
+ 
+-static struct __coeff lm25066_coeff[6][PSC_NUM_CLASSES + 2] = {
++static struct __coeff lm25066_coeff[][PSC_NUM_CLASSES + 2] = {
+ 	[lm25056] = {
+ 		[PSC_VOLTAGE_IN] = {
+ 			.m = 16296,
++			.b = 1343,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN] = {
+ 			.m = 13797,
++			.b = -1833,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN_L] = {
+ 			.m = 6726,
++			.b = -537,
+ 			.R = -2,
+ 		},
+ 		[PSC_POWER] = {
+ 			.m = 5501,
++			.b = -2908,
+ 			.R = -3,
+ 		},
+ 		[PSC_POWER_L] = {
+ 			.m = 26882,
++			.b = -5646,
+ 			.R = -4,
+ 		},
+ 		[PSC_TEMPERATURE] = {
+@@ -82,26 +87,32 @@ static struct __coeff lm25066_coeff[6][PSC_NUM_CLASSES + 2] = {
+ 	[lm25066] = {
+ 		[PSC_VOLTAGE_IN] = {
+ 			.m = 22070,
++			.b = -1800,
+ 			.R = -2,
+ 		},
+ 		[PSC_VOLTAGE_OUT] = {
+ 			.m = 22070,
++			.b = -1800,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN] = {
+ 			.m = 13661,
++			.b = -5200,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN_L] = {
+ 			.m = 6852,
++			.b = -3100,
+ 			.R = -2,
+ 		},
+ 		[PSC_POWER] = {
+ 			.m = 736,
++			.b = -3300,
+ 			.R = -2,
+ 		},
+ 		[PSC_POWER_L] = {
+ 			.m = 369,
++			.b = -1900,
+ 			.R = -2,
+ 		},
+ 		[PSC_TEMPERATURE] = {
+@@ -111,26 +122,32 @@ static struct __coeff lm25066_coeff[6][PSC_NUM_CLASSES + 2] = {
+ 	[lm5064] = {
+ 		[PSC_VOLTAGE_IN] = {
+ 			.m = 4611,
++			.b = -642,
+ 			.R = -2,
+ 		},
+ 		[PSC_VOLTAGE_OUT] = {
+ 			.m = 4621,
++			.b = 423,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN] = {
+ 			.m = 10742,
++			.b = 1552,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN_L] = {
+ 			.m = 5456,
++			.b = 2118,
+ 			.R = -2,
+ 		},
+ 		[PSC_POWER] = {
+ 			.m = 1204,
++			.b = 8524,
+ 			.R = -3,
+ 		},
+ 		[PSC_POWER_L] = {
+ 			.m = 612,
++			.b = 11202,
+ 			.R = -3,
+ 		},
+ 		[PSC_TEMPERATURE] = {
+@@ -140,26 +157,32 @@ static struct __coeff lm25066_coeff[6][PSC_NUM_CLASSES + 2] = {
+ 	[lm5066] = {
+ 		[PSC_VOLTAGE_IN] = {
+ 			.m = 4587,
++			.b = -1200,
+ 			.R = -2,
+ 		},
+ 		[PSC_VOLTAGE_OUT] = {
+ 			.m = 4587,
++			.b = -2400,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN] = {
+ 			.m = 10753,
++			.b = -1200,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN_L] = {
+ 			.m = 5405,
++			.b = -600,
+ 			.R = -2,
+ 		},
+ 		[PSC_POWER] = {
+ 			.m = 1204,
++			.b = -6000,
+ 			.R = -3,
+ 		},
+ 		[PSC_POWER_L] = {
+ 			.m = 605,
++			.b = -8000,
+ 			.R = -3,
+ 		},
+ 		[PSC_TEMPERATURE] = {
+diff --git a/drivers/hwtracing/coresight/coresight-cti-core.c b/drivers/hwtracing/coresight/coresight-cti-core.c
+index 61dbc1afd8da5..7ea93598f0eea 100644
+--- a/drivers/hwtracing/coresight/coresight-cti-core.c
++++ b/drivers/hwtracing/coresight/coresight-cti-core.c
+@@ -174,7 +174,7 @@ static int cti_disable_hw(struct cti_drvdata *drvdata)
+ 	coresight_disclaim_device_unlocked(drvdata->base);
+ 	CS_LOCK(drvdata->base);
+ 	spin_unlock(&drvdata->spinlock);
+-	pm_runtime_put(dev);
++	pm_runtime_put(dev->parent);
+ 	return 0;
+ 
+ 	/* not disabled this call */
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 0af2784cbd0d9..265635db29aa5 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -195,7 +195,7 @@ static const u16 mt_i2c_regs_v2[] = {
+ 	[OFFSET_CLOCK_DIV] = 0x48,
+ 	[OFFSET_SOFTRESET] = 0x50,
+ 	[OFFSET_SCL_MIS_COMP_POINT] = 0x90,
+-	[OFFSET_DEBUGSTAT] = 0xe0,
++	[OFFSET_DEBUGSTAT] = 0xe4,
+ 	[OFFSET_DEBUGCTRL] = 0xe8,
+ 	[OFFSET_FIFO_STAT] = 0xf4,
+ 	[OFFSET_FIFO_THRESH] = 0xf8,
+diff --git a/drivers/i2c/busses/i2c-xlr.c b/drivers/i2c/busses/i2c-xlr.c
+index 126d1393e548b..9ce20652d4942 100644
+--- a/drivers/i2c/busses/i2c-xlr.c
++++ b/drivers/i2c/busses/i2c-xlr.c
+@@ -431,11 +431,15 @@ static int xlr_i2c_probe(struct platform_device *pdev)
+ 	i2c_set_adapdata(&priv->adap, priv);
+ 	ret = i2c_add_numbered_adapter(&priv->adap);
+ 	if (ret < 0)
+-		return ret;
++		goto err_unprepare_clk;
+ 
+ 	platform_set_drvdata(pdev, priv);
+ 	dev_info(&priv->adap.dev, "Added I2C Bus.\n");
+ 	return 0;
++
++err_unprepare_clk:
++	clk_unprepare(clk);
++	return ret;
+ }
+ 
+ static int xlr_i2c_remove(struct platform_device *pdev)
+diff --git a/drivers/iio/accel/st_accel_core.c b/drivers/iio/accel/st_accel_core.c
+index 43c50167d220c..bde0ca3ef7a4c 100644
+--- a/drivers/iio/accel/st_accel_core.c
++++ b/drivers/iio/accel/st_accel_core.c
+@@ -1255,13 +1255,9 @@ int st_accel_common_probe(struct iio_dev *indio_dev)
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &accel_info;
+ 
+-	err = st_sensors_power_enable(indio_dev);
+-	if (err)
+-		return err;
+-
+ 	err = st_sensors_verify_id(indio_dev);
+ 	if (err < 0)
+-		goto st_accel_power_off;
++		return err;
+ 
+ 	adata->num_data_channels = ST_ACCEL_NUMBER_DATA_CHANNELS;
+ 	indio_dev->num_channels = ST_SENSORS_NUMBER_ALL_CHANNELS;
+@@ -1270,10 +1266,8 @@ int st_accel_common_probe(struct iio_dev *indio_dev)
+ 	channels = devm_kmemdup(&indio_dev->dev,
+ 				adata->sensor_settings->ch,
+ 				channels_size, GFP_KERNEL);
+-	if (!channels) {
+-		err = -ENOMEM;
+-		goto st_accel_power_off;
+-	}
++	if (!channels)
++		return -ENOMEM;
+ 
+ 	if (apply_acpi_orientation(indio_dev, channels))
+ 		dev_warn(&indio_dev->dev,
+@@ -1288,11 +1282,11 @@ int st_accel_common_probe(struct iio_dev *indio_dev)
+ 
+ 	err = st_sensors_init_sensor(indio_dev, pdata);
+ 	if (err < 0)
+-		goto st_accel_power_off;
++		return err;
+ 
+ 	err = st_accel_allocate_ring(indio_dev);
+ 	if (err < 0)
+-		goto st_accel_power_off;
++		return err;
+ 
+ 	if (adata->irq > 0) {
+ 		err = st_sensors_allocate_trigger(indio_dev,
+@@ -1315,9 +1309,6 @@ st_accel_device_register_error:
+ 		st_sensors_deallocate_trigger(indio_dev);
+ st_accel_probe_trigger_error:
+ 	st_accel_deallocate_ring(indio_dev);
+-st_accel_power_off:
+-	st_sensors_power_disable(indio_dev);
+-
+ 	return err;
+ }
+ EXPORT_SYMBOL(st_accel_common_probe);
+@@ -1326,8 +1317,6 @@ void st_accel_common_remove(struct iio_dev *indio_dev)
+ {
+ 	struct st_sensor_data *adata = iio_priv(indio_dev);
+ 
+-	st_sensors_power_disable(indio_dev);
+-
+ 	iio_device_unregister(indio_dev);
+ 	if (adata->irq > 0)
+ 		st_sensors_deallocate_trigger(indio_dev);
+diff --git a/drivers/iio/accel/st_accel_i2c.c b/drivers/iio/accel/st_accel_i2c.c
+index 360e16f2cadb9..02c823b93ecd4 100644
+--- a/drivers/iio/accel/st_accel_i2c.c
++++ b/drivers/iio/accel/st_accel_i2c.c
+@@ -174,16 +174,29 @@ static int st_accel_i2c_probe(struct i2c_client *client)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	ret = st_sensors_power_enable(indio_dev);
++	if (ret)
++		return ret;
++
+ 	ret = st_accel_common_probe(indio_dev);
+ 	if (ret < 0)
+-		return ret;
++		goto st_accel_power_off;
+ 
+ 	return 0;
++
++st_accel_power_off:
++	st_sensors_power_disable(indio_dev);
++
++	return ret;
+ }
+ 
+ static int st_accel_i2c_remove(struct i2c_client *client)
+ {
+-	st_accel_common_remove(i2c_get_clientdata(client));
++	struct iio_dev *indio_dev = i2c_get_clientdata(client);
++
++	st_accel_common_remove(indio_dev);
++
++	st_sensors_power_disable(indio_dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iio/accel/st_accel_spi.c b/drivers/iio/accel/st_accel_spi.c
+index 568ff1bae0eee..386ae18d5f269 100644
+--- a/drivers/iio/accel/st_accel_spi.c
++++ b/drivers/iio/accel/st_accel_spi.c
+@@ -123,16 +123,29 @@ static int st_accel_spi_probe(struct spi_device *spi)
+ 	if (err < 0)
+ 		return err;
+ 
++	err = st_sensors_power_enable(indio_dev);
++	if (err)
++		return err;
++
+ 	err = st_accel_common_probe(indio_dev);
+ 	if (err < 0)
+-		return err;
++		goto st_accel_power_off;
+ 
+ 	return 0;
++
++st_accel_power_off:
++	st_sensors_power_disable(indio_dev);
++
++	return err;
+ }
+ 
+ static int st_accel_spi_remove(struct spi_device *spi)
+ {
+-	st_accel_common_remove(spi_get_drvdata(spi));
++	struct iio_dev *indio_dev = spi_get_drvdata(spi);
++
++	st_accel_common_remove(indio_dev);
++
++	st_sensors_power_disable(indio_dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iio/dac/ad5446.c b/drivers/iio/dac/ad5446.c
+index d87e21016863c..e86886ca5ee41 100644
+--- a/drivers/iio/dac/ad5446.c
++++ b/drivers/iio/dac/ad5446.c
+@@ -531,8 +531,15 @@ static int ad5622_write(struct ad5446_state *st, unsigned val)
+ {
+ 	struct i2c_client *client = to_i2c_client(st->dev);
+ 	__be16 data = cpu_to_be16(val);
++	int ret;
++
++	ret = i2c_master_send(client, (char *)&data, sizeof(data));
++	if (ret < 0)
++		return ret;
++	if (ret != sizeof(data))
++		return -EIO;
+ 
+-	return i2c_master_send(client, (char *)&data, sizeof(data));
++	return 0;
+ }
+ 
+ /*
+diff --git a/drivers/iio/dac/ad5770r.c b/drivers/iio/dac/ad5770r.c
+index 42decba1463cc..56d8bd2dd92fd 100644
+--- a/drivers/iio/dac/ad5770r.c
++++ b/drivers/iio/dac/ad5770r.c
+@@ -522,7 +522,7 @@ static int ad5770r_channel_config(struct ad5770r_state *st)
+ 		return -EINVAL;
+ 
+ 	device_for_each_child_node(&st->spi->dev, child) {
+-		ret = fwnode_property_read_u32(child, "num", &num);
++		ret = fwnode_property_read_u32(child, "reg", &num);
+ 		if (ret)
+ 			goto err_child_out;
+ 		if (num >= AD5770R_MAX_CHANNELS) {
+diff --git a/drivers/iio/gyro/st_gyro_core.c b/drivers/iio/gyro/st_gyro_core.c
+index c8aa051995d3b..8c87f85f20bd1 100644
+--- a/drivers/iio/gyro/st_gyro_core.c
++++ b/drivers/iio/gyro/st_gyro_core.c
+@@ -466,13 +466,9 @@ int st_gyro_common_probe(struct iio_dev *indio_dev)
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &gyro_info;
+ 
+-	err = st_sensors_power_enable(indio_dev);
+-	if (err)
+-		return err;
+-
+ 	err = st_sensors_verify_id(indio_dev);
+ 	if (err < 0)
+-		goto st_gyro_power_off;
++		return err;
+ 
+ 	gdata->num_data_channels = ST_GYRO_NUMBER_DATA_CHANNELS;
+ 	indio_dev->channels = gdata->sensor_settings->ch;
+@@ -485,11 +481,11 @@ int st_gyro_common_probe(struct iio_dev *indio_dev)
+ 
+ 	err = st_sensors_init_sensor(indio_dev, pdata);
+ 	if (err < 0)
+-		goto st_gyro_power_off;
++		return err;
+ 
+ 	err = st_gyro_allocate_ring(indio_dev);
+ 	if (err < 0)
+-		goto st_gyro_power_off;
++		return err;
+ 
+ 	if (gdata->irq > 0) {
+ 		err = st_sensors_allocate_trigger(indio_dev,
+@@ -512,9 +508,6 @@ st_gyro_device_register_error:
+ 		st_sensors_deallocate_trigger(indio_dev);
+ st_gyro_probe_trigger_error:
+ 	st_gyro_deallocate_ring(indio_dev);
+-st_gyro_power_off:
+-	st_sensors_power_disable(indio_dev);
+-
+ 	return err;
+ }
+ EXPORT_SYMBOL(st_gyro_common_probe);
+@@ -523,8 +516,6 @@ void st_gyro_common_remove(struct iio_dev *indio_dev)
+ {
+ 	struct st_sensor_data *gdata = iio_priv(indio_dev);
+ 
+-	st_sensors_power_disable(indio_dev);
+-
+ 	iio_device_unregister(indio_dev);
+ 	if (gdata->irq > 0)
+ 		st_sensors_deallocate_trigger(indio_dev);
+diff --git a/drivers/iio/gyro/st_gyro_i2c.c b/drivers/iio/gyro/st_gyro_i2c.c
+index 8190966e6ff0a..3ed5779779465 100644
+--- a/drivers/iio/gyro/st_gyro_i2c.c
++++ b/drivers/iio/gyro/st_gyro_i2c.c
+@@ -86,16 +86,29 @@ static int st_gyro_i2c_probe(struct i2c_client *client,
+ 	if (err < 0)
+ 		return err;
+ 
++	err = st_sensors_power_enable(indio_dev);
++	if (err)
++		return err;
++
+ 	err = st_gyro_common_probe(indio_dev);
+ 	if (err < 0)
+-		return err;
++		goto st_gyro_power_off;
+ 
+ 	return 0;
++
++st_gyro_power_off:
++	st_sensors_power_disable(indio_dev);
++
++	return err;
+ }
+ 
+ static int st_gyro_i2c_remove(struct i2c_client *client)
+ {
+-	st_gyro_common_remove(i2c_get_clientdata(client));
++	struct iio_dev *indio_dev = i2c_get_clientdata(client);
++
++	st_gyro_common_remove(indio_dev);
++
++	st_sensors_power_disable(indio_dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iio/gyro/st_gyro_spi.c b/drivers/iio/gyro/st_gyro_spi.c
+index efb862763ca3d..c04bcf2518c11 100644
+--- a/drivers/iio/gyro/st_gyro_spi.c
++++ b/drivers/iio/gyro/st_gyro_spi.c
+@@ -90,16 +90,29 @@ static int st_gyro_spi_probe(struct spi_device *spi)
+ 	if (err < 0)
+ 		return err;
+ 
++	err = st_sensors_power_enable(indio_dev);
++	if (err)
++		return err;
++
+ 	err = st_gyro_common_probe(indio_dev);
+ 	if (err < 0)
+-		return err;
++		goto st_gyro_power_off;
+ 
+ 	return 0;
++
++st_gyro_power_off:
++	st_sensors_power_disable(indio_dev);
++
++	return err;
+ }
+ 
+ static int st_gyro_spi_remove(struct spi_device *spi)
+ {
+-	st_gyro_common_remove(spi_get_drvdata(spi));
++	struct iio_dev *indio_dev = spi_get_drvdata(spi);
++
++	st_gyro_common_remove(indio_dev);
++
++	st_sensors_power_disable(indio_dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iio/imu/adis.c b/drivers/iio/imu/adis.c
+index f8b7837d8b8f6..715eef81bc248 100644
+--- a/drivers/iio/imu/adis.c
++++ b/drivers/iio/imu/adis.c
+@@ -434,6 +434,8 @@ int __adis_initial_startup(struct adis *adis)
+ 	if (ret)
+ 		return ret;
+ 
++	adis_enable_irq(adis, false);
++
+ 	if (!adis->data->prod_id_reg)
+ 		return 0;
+ 
+@@ -530,7 +532,7 @@ int adis_init(struct adis *adis, struct iio_dev *indio_dev,
+ 		adis->current_page = 0;
+ 	}
+ 
+-	return adis_enable_irq(adis, false);
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(adis_init);
+ 
+diff --git a/drivers/iio/magnetometer/st_magn_core.c b/drivers/iio/magnetometer/st_magn_core.c
+index 79de721e60159..0fc38f17dbe04 100644
+--- a/drivers/iio/magnetometer/st_magn_core.c
++++ b/drivers/iio/magnetometer/st_magn_core.c
+@@ -494,13 +494,9 @@ int st_magn_common_probe(struct iio_dev *indio_dev)
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &magn_info;
+ 
+-	err = st_sensors_power_enable(indio_dev);
+-	if (err)
+-		return err;
+-
+ 	err = st_sensors_verify_id(indio_dev);
+ 	if (err < 0)
+-		goto st_magn_power_off;
++		return err;
+ 
+ 	mdata->num_data_channels = ST_MAGN_NUMBER_DATA_CHANNELS;
+ 	indio_dev->channels = mdata->sensor_settings->ch;
+@@ -511,11 +507,11 @@ int st_magn_common_probe(struct iio_dev *indio_dev)
+ 
+ 	err = st_sensors_init_sensor(indio_dev, NULL);
+ 	if (err < 0)
+-		goto st_magn_power_off;
++		return err;
+ 
+ 	err = st_magn_allocate_ring(indio_dev);
+ 	if (err < 0)
+-		goto st_magn_power_off;
++		return err;
+ 
+ 	if (mdata->irq > 0) {
+ 		err = st_sensors_allocate_trigger(indio_dev,
+@@ -538,9 +534,6 @@ st_magn_device_register_error:
+ 		st_sensors_deallocate_trigger(indio_dev);
+ st_magn_probe_trigger_error:
+ 	st_magn_deallocate_ring(indio_dev);
+-st_magn_power_off:
+-	st_sensors_power_disable(indio_dev);
+-
+ 	return err;
+ }
+ EXPORT_SYMBOL(st_magn_common_probe);
+@@ -549,8 +542,6 @@ void st_magn_common_remove(struct iio_dev *indio_dev)
+ {
+ 	struct st_sensor_data *mdata = iio_priv(indio_dev);
+ 
+-	st_sensors_power_disable(indio_dev);
+-
+ 	iio_device_unregister(indio_dev);
+ 	if (mdata->irq > 0)
+ 		st_sensors_deallocate_trigger(indio_dev);
+diff --git a/drivers/iio/magnetometer/st_magn_i2c.c b/drivers/iio/magnetometer/st_magn_i2c.c
+index c6bb4ce775943..4b6a251dd44ef 100644
+--- a/drivers/iio/magnetometer/st_magn_i2c.c
++++ b/drivers/iio/magnetometer/st_magn_i2c.c
+@@ -78,18 +78,30 @@ static int st_magn_i2c_probe(struct i2c_client *client,
+ 	if (err < 0)
+ 		return err;
+ 
++	err = st_sensors_power_enable(indio_dev);
++	if (err)
++		return err;
++
+ 	err = st_magn_common_probe(indio_dev);
+ 	if (err < 0)
+-		return err;
++		goto st_magn_power_off;
+ 
+ 	return 0;
++
++st_magn_power_off:
++	st_sensors_power_disable(indio_dev);
++
++	return err;
+ }
+ 
+ static int st_magn_i2c_remove(struct i2c_client *client)
+ {
+ 	struct iio_dev *indio_dev = i2c_get_clientdata(client);
++
+ 	st_magn_common_remove(indio_dev);
+ 
++	st_sensors_power_disable(indio_dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iio/magnetometer/st_magn_spi.c b/drivers/iio/magnetometer/st_magn_spi.c
+index 3d08d74c367da..501eff32df783 100644
+--- a/drivers/iio/magnetometer/st_magn_spi.c
++++ b/drivers/iio/magnetometer/st_magn_spi.c
+@@ -72,18 +72,30 @@ static int st_magn_spi_probe(struct spi_device *spi)
+ 	if (err < 0)
+ 		return err;
+ 
++	err = st_sensors_power_enable(indio_dev);
++	if (err)
++		return err;
++
+ 	err = st_magn_common_probe(indio_dev);
+ 	if (err < 0)
+-		return err;
++		goto st_magn_power_off;
+ 
+ 	return 0;
++
++st_magn_power_off:
++	st_sensors_power_disable(indio_dev);
++
++	return err;
+ }
+ 
+ static int st_magn_spi_remove(struct spi_device *spi)
+ {
+ 	struct iio_dev *indio_dev = spi_get_drvdata(spi);
++
+ 	st_magn_common_remove(indio_dev);
+ 
++	st_sensors_power_disable(indio_dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iio/pressure/st_pressure_core.c b/drivers/iio/pressure/st_pressure_core.c
+index 789a2928504a7..7912b5a683955 100644
+--- a/drivers/iio/pressure/st_pressure_core.c
++++ b/drivers/iio/pressure/st_pressure_core.c
+@@ -689,13 +689,9 @@ int st_press_common_probe(struct iio_dev *indio_dev)
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &press_info;
+ 
+-	err = st_sensors_power_enable(indio_dev);
+-	if (err)
+-		return err;
+-
+ 	err = st_sensors_verify_id(indio_dev);
+ 	if (err < 0)
+-		goto st_press_power_off;
++		return err;
+ 
+ 	/*
+ 	 * Skip timestamping channel while declaring available channels to
+@@ -718,11 +714,11 @@ int st_press_common_probe(struct iio_dev *indio_dev)
+ 
+ 	err = st_sensors_init_sensor(indio_dev, pdata);
+ 	if (err < 0)
+-		goto st_press_power_off;
++		return err;
+ 
+ 	err = st_press_allocate_ring(indio_dev);
+ 	if (err < 0)
+-		goto st_press_power_off;
++		return err;
+ 
+ 	if (press_data->irq > 0) {
+ 		err = st_sensors_allocate_trigger(indio_dev,
+@@ -745,9 +741,6 @@ st_press_device_register_error:
+ 		st_sensors_deallocate_trigger(indio_dev);
+ st_press_probe_trigger_error:
+ 	st_press_deallocate_ring(indio_dev);
+-st_press_power_off:
+-	st_sensors_power_disable(indio_dev);
+-
+ 	return err;
+ }
+ EXPORT_SYMBOL(st_press_common_probe);
+@@ -756,8 +749,6 @@ void st_press_common_remove(struct iio_dev *indio_dev)
+ {
+ 	struct st_sensor_data *press_data = iio_priv(indio_dev);
+ 
+-	st_sensors_power_disable(indio_dev);
+-
+ 	iio_device_unregister(indio_dev);
+ 	if (press_data->irq > 0)
+ 		st_sensors_deallocate_trigger(indio_dev);
+diff --git a/drivers/iio/pressure/st_pressure_i2c.c b/drivers/iio/pressure/st_pressure_i2c.c
+index 09c6903f99b87..8c26ff61e56ad 100644
+--- a/drivers/iio/pressure/st_pressure_i2c.c
++++ b/drivers/iio/pressure/st_pressure_i2c.c
+@@ -98,16 +98,29 @@ static int st_press_i2c_probe(struct i2c_client *client,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	ret = st_sensors_power_enable(indio_dev);
++	if (ret)
++		return ret;
++
+ 	ret = st_press_common_probe(indio_dev);
+ 	if (ret < 0)
+-		return ret;
++		goto st_press_power_off;
+ 
+ 	return 0;
++
++st_press_power_off:
++	st_sensors_power_disable(indio_dev);
++
++	return ret;
+ }
+ 
+ static int st_press_i2c_remove(struct i2c_client *client)
+ {
+-	st_press_common_remove(i2c_get_clientdata(client));
++	struct iio_dev *indio_dev = i2c_get_clientdata(client);
++
++	st_press_common_remove(indio_dev);
++
++	st_sensors_power_disable(indio_dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iio/pressure/st_pressure_spi.c b/drivers/iio/pressure/st_pressure_spi.c
+index b5ee3ec2764ff..8cf8cd3b4554a 100644
+--- a/drivers/iio/pressure/st_pressure_spi.c
++++ b/drivers/iio/pressure/st_pressure_spi.c
+@@ -82,16 +82,29 @@ static int st_press_spi_probe(struct spi_device *spi)
+ 	if (err < 0)
+ 		return err;
+ 
++	err = st_sensors_power_enable(indio_dev);
++	if (err)
++		return err;
++
+ 	err = st_press_common_probe(indio_dev);
+ 	if (err < 0)
+-		return err;
++		goto st_press_power_off;
+ 
+ 	return 0;
++
++st_press_power_off:
++	st_sensors_power_disable(indio_dev);
++
++	return err;
+ }
+ 
+ static int st_press_spi_remove(struct spi_device *spi)
+ {
+-	st_press_common_remove(spi_get_drvdata(spi));
++	struct iio_dev *indio_dev = spi_get_drvdata(spi);
++
++	st_press_common_remove(indio_dev);
++
++	st_sensors_power_disable(indio_dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index d4d4959c2434c..bd153aa7e9ab3 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -707,12 +707,13 @@ int bnxt_qplib_query_srq(struct bnxt_qplib_res *res,
+ 	int rc = 0;
+ 
+ 	RCFW_CMD_PREP(req, QUERY_SRQ, cmd_flags);
+-	req.srq_cid = cpu_to_le32(srq->id);
+ 
+ 	/* Configure the request */
+ 	sbuf = bnxt_qplib_rcfw_alloc_sbuf(rcfw, sizeof(*sb));
+ 	if (!sbuf)
+ 		return -ENOMEM;
++	req.resp_size = sizeof(*sb) / BNXT_QPLIB_CMDQE_UNITS;
++	req.srq_cid = cpu_to_le32(srq->id);
+ 	sb = sbuf->sb;
+ 	rc = bnxt_qplib_rcfw_send_message(rcfw, (void *)&req, (void *)&resp,
+ 					  (void *)sbuf, 0);
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index 6bc0818f4b2c6..c6a815a705fef 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -1099,8 +1099,10 @@ static int create_qp_common(struct ib_pd *pd, struct ib_qp_init_attr *init_attr,
+ 			if (dev->steering_support ==
+ 			    MLX4_STEERING_MODE_DEVICE_MANAGED)
+ 				qp->flags |= MLX4_IB_QP_NETIF;
+-			else
++			else {
++				err = -EINVAL;
+ 				goto err;
++			}
+ 		}
+ 
+ 		err = set_kernel_sq_size(dev, &init_attr->cap, qp_type, qp);
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index cdfb7732dff3e..16d5283651894 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -2744,15 +2744,18 @@ int qedr_query_qp(struct ib_qp *ibqp,
+ 	int rc = 0;
+ 
+ 	memset(&params, 0, sizeof(params));
+-
+-	rc = dev->ops->rdma_query_qp(dev->rdma_ctx, qp->qed_qp, &params);
+-	if (rc)
+-		goto err;
+-
+ 	memset(qp_attr, 0, sizeof(*qp_attr));
+ 	memset(qp_init_attr, 0, sizeof(*qp_init_attr));
+ 
+-	qp_attr->qp_state = qedr_get_ibqp_state(params.state);
++	if (qp->qp_type != IB_QPT_GSI) {
++		rc = dev->ops->rdma_query_qp(dev->rdma_ctx, qp->qed_qp, &params);
++		if (rc)
++			goto err;
++		qp_attr->qp_state = qedr_get_ibqp_state(params.state);
++	} else {
++		qp_attr->qp_state = qedr_get_ibqp_state(QED_ROCE_QP_STATE_RTS);
++	}
++
+ 	qp_attr->cur_qp_state = qedr_get_ibqp_state(params.state);
+ 	qp_attr->path_mtu = ib_mtu_int_to_enum(params.mtu);
+ 	qp_attr->path_mig_state = IB_MIG_MIGRATED;
+diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
+index 25ab50d9b7c28..f9fb56ec6dfda 100644
+--- a/drivers/infiniband/sw/rxe/rxe_param.h
++++ b/drivers/infiniband/sw/rxe/rxe_param.h
+@@ -108,7 +108,7 @@ enum rxe_device_param {
+ /* default/initial rxe port parameters */
+ enum rxe_port_param {
+ 	RXE_PORT_GID_TBL_LEN		= 1024,
+-	RXE_PORT_PORT_CAP_FLAGS		= RDMA_CORE_CAP_PROT_ROCE_UDP_ENCAP,
++	RXE_PORT_PORT_CAP_FLAGS		= IB_PORT_CM_SUP,
+ 	RXE_PORT_MAX_MSG_SZ		= 0x800000,
+ 	RXE_PORT_BAD_PKEY_CNTR		= 0,
+ 	RXE_PORT_QKEY_VIOL_CNTR		= 0,
+diff --git a/drivers/input/joystick/iforce/iforce-usb.c b/drivers/input/joystick/iforce/iforce-usb.c
+index 6c554c11a7ac3..ea58805c480fa 100644
+--- a/drivers/input/joystick/iforce/iforce-usb.c
++++ b/drivers/input/joystick/iforce/iforce-usb.c
+@@ -92,7 +92,7 @@ static int iforce_usb_get_id(struct iforce *iforce, u8 id,
+ 				 id,
+ 				 USB_TYPE_VENDOR | USB_DIR_IN |
+ 					USB_RECIP_INTERFACE,
+-				 0, 0, buf, IFORCE_MAX_LENGTH, HZ);
++				 0, 0, buf, IFORCE_MAX_LENGTH, 1000);
+ 	if (status < 0) {
+ 		dev_err(&iforce_usb->intf->dev,
+ 			"usb_submit_urb failed: %d\n", status);
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index e0e53a9a816fc..4357d30c15c56 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -517,6 +517,19 @@ static void elantech_report_trackpoint(struct psmouse *psmouse,
+ 	case 0x16008020U:
+ 	case 0x26800010U:
+ 	case 0x36808000U:
++
++		/*
++		 * This firmware misreport coordinates for trackpoint
++		 * occasionally. Discard packets outside of [-127, 127] range
++		 * to prevent cursor jumps.
++		 */
++		if (packet[4] == 0x80 || packet[5] == 0x80 ||
++		    packet[1] >> 7 == packet[4] >> 7 ||
++		    packet[2] >> 7 == packet[5] >> 7) {
++			elantech_debug("discarding packet [%6ph]\n", packet);
++			break;
++
++		}
+ 		x = packet[4] - (int)((packet[1]^0x80) << 1);
+ 		y = (int)((packet[2]^0x80) << 1) - packet[5];
+ 
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index a5a0035536462..aedd055410443 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -272,6 +272,13 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S6230"),
+ 		},
+ 	},
++	{
++		/* Fujitsu Lifebook T725 laptop */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T725"),
++		},
++	},
+ 	{
+ 		/* Fujitsu Lifebook U745 */
+ 		.matches = {
+@@ -840,6 +847,13 @@ static const struct dmi_system_id __initconst i8042_dmi_notimeout_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK AH544"),
+ 		},
+ 	},
++	{
++		/* Fujitsu Lifebook T725 laptop */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T725"),
++		},
++	},
+ 	{
+ 		/* Fujitsu U574 laptop */
+ 		/* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
+diff --git a/drivers/irqchip/irq-bcm6345-l1.c b/drivers/irqchip/irq-bcm6345-l1.c
+index e3483789f4df3..1bd0621c4ce2a 100644
+--- a/drivers/irqchip/irq-bcm6345-l1.c
++++ b/drivers/irqchip/irq-bcm6345-l1.c
+@@ -140,7 +140,7 @@ static void bcm6345_l1_irq_handle(struct irq_desc *desc)
+ 		for_each_set_bit(hwirq, &pending, IRQS_PER_WORD) {
+ 			irq = irq_linear_revmap(intc->domain, base + hwirq);
+ 			if (irq)
+-				do_IRQ(irq);
++				generic_handle_irq(irq);
+ 			else
+ 				spurious_interrupt();
+ 		}
+diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
+index 6f432d2a5cebd..926e55d838cb1 100644
+--- a/drivers/irqchip/irq-sifive-plic.c
++++ b/drivers/irqchip/irq-sifive-plic.c
+@@ -163,7 +163,13 @@ static void plic_irq_eoi(struct irq_data *d)
+ {
+ 	struct plic_handler *handler = this_cpu_ptr(&plic_handlers);
+ 
+-	writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
++	if (irqd_irq_masked(d)) {
++		plic_irq_unmask(d);
++		writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
++		plic_irq_mask(d);
++	} else {
++		writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
++	}
+ }
+ 
+ static struct irq_chip plic_chip = {
+diff --git a/drivers/isdn/hardware/mISDN/hfcpci.c b/drivers/isdn/hardware/mISDN/hfcpci.c
+index e501cb03f211d..bd087cca1c1d2 100644
+--- a/drivers/isdn/hardware/mISDN/hfcpci.c
++++ b/drivers/isdn/hardware/mISDN/hfcpci.c
+@@ -1994,14 +1994,14 @@ setup_hw(struct hfc_pci *hc)
+ 	pci_set_master(hc->pdev);
+ 	if (!hc->irq) {
+ 		printk(KERN_WARNING "HFC-PCI: No IRQ for PCI card found\n");
+-		return 1;
++		return -EINVAL;
+ 	}
+ 	hc->hw.pci_io =
+ 		(char __iomem *)(unsigned long)hc->pdev->resource[1].start;
+ 
+ 	if (!hc->hw.pci_io) {
+ 		printk(KERN_WARNING "HFC-PCI: No IO-Mem for PCI card found\n");
+-		return 1;
++		return -ENOMEM;
+ 	}
+ 	/* Allocate memory for FIFOS */
+ 	/* the memory needs to be on a 32k boundary within the first 4G */
+@@ -2012,7 +2012,7 @@ setup_hw(struct hfc_pci *hc)
+ 	if (!buffer) {
+ 		printk(KERN_WARNING
+ 		       "HFC-PCI: Error allocating memory for FIFO!\n");
+-		return 1;
++		return -ENOMEM;
+ 	}
+ 	hc->hw.fifos = buffer;
+ 	pci_write_config_dword(hc->pdev, 0x80, hc->hw.dmahandle);
+@@ -2022,7 +2022,7 @@ setup_hw(struct hfc_pci *hc)
+ 		       "HFC-PCI: Error in ioremap for PCI!\n");
+ 		dma_free_coherent(&hc->pdev->dev, 0x8000, hc->hw.fifos,
+ 				  hc->hw.dmahandle);
+-		return 1;
++		return -ENOMEM;
+ 	}
+ 
+ 	printk(KERN_INFO
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index f16f190546ef3..7871e7dcd4836 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -3024,7 +3024,11 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 	 *  -write_error - clears WriteErrorSeen
+ 	 *  {,-}failfast - set/clear FailFast
+ 	 */
++
++	struct mddev *mddev = rdev->mddev;
+ 	int err = -EINVAL;
++	bool need_update_sb = false;
++
+ 	if (cmd_match(buf, "faulty") && rdev->mddev->pers) {
+ 		md_error(rdev->mddev, rdev);
+ 		if (test_bit(Faulty, &rdev->flags))
+@@ -3039,7 +3043,6 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 		if (rdev->raid_disk >= 0)
+ 			err = -EBUSY;
+ 		else {
+-			struct mddev *mddev = rdev->mddev;
+ 			err = 0;
+ 			if (mddev_is_clustered(mddev))
+ 				err = md_cluster_ops->remove_disk(mddev, rdev);
+@@ -3056,10 +3059,12 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 	} else if (cmd_match(buf, "writemostly")) {
+ 		set_bit(WriteMostly, &rdev->flags);
+ 		mddev_create_serial_pool(rdev->mddev, rdev, false);
++		need_update_sb = true;
+ 		err = 0;
+ 	} else if (cmd_match(buf, "-writemostly")) {
+ 		mddev_destroy_serial_pool(rdev->mddev, rdev, false);
+ 		clear_bit(WriteMostly, &rdev->flags);
++		need_update_sb = true;
+ 		err = 0;
+ 	} else if (cmd_match(buf, "blocked")) {
+ 		set_bit(Blocked, &rdev->flags);
+@@ -3085,9 +3090,11 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 		err = 0;
+ 	} else if (cmd_match(buf, "failfast")) {
+ 		set_bit(FailFast, &rdev->flags);
++		need_update_sb = true;
+ 		err = 0;
+ 	} else if (cmd_match(buf, "-failfast")) {
+ 		clear_bit(FailFast, &rdev->flags);
++		need_update_sb = true;
+ 		err = 0;
+ 	} else if (cmd_match(buf, "-insync") && rdev->raid_disk >= 0 &&
+ 		   !test_bit(Journal, &rdev->flags)) {
+@@ -3166,6 +3173,8 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 		clear_bit(ExternalBbl, &rdev->flags);
+ 		err = 0;
+ 	}
++	if (need_update_sb)
++		md_update_sb(mddev, 1);
+ 	if (!err)
+ 		sysfs_notify_dirent_safe(rdev->sysfs_state);
+ 	return err ? err : len;
+diff --git a/drivers/media/dvb-frontends/mn88443x.c b/drivers/media/dvb-frontends/mn88443x.c
+index e4528784f8477..fff212c0bf3b5 100644
+--- a/drivers/media/dvb-frontends/mn88443x.c
++++ b/drivers/media/dvb-frontends/mn88443x.c
+@@ -204,11 +204,18 @@ struct mn88443x_priv {
+ 	struct regmap *regmap_t;
+ };
+ 
+-static void mn88443x_cmn_power_on(struct mn88443x_priv *chip)
++static int mn88443x_cmn_power_on(struct mn88443x_priv *chip)
+ {
++	struct device *dev = &chip->client_s->dev;
+ 	struct regmap *r_t = chip->regmap_t;
++	int ret;
+ 
+-	clk_prepare_enable(chip->mclk);
++	ret = clk_prepare_enable(chip->mclk);
++	if (ret) {
++		dev_err(dev, "Failed to prepare and enable mclk: %d\n",
++			ret);
++		return ret;
++	}
+ 
+ 	gpiod_set_value_cansleep(chip->reset_gpio, 1);
+ 	usleep_range(100, 1000);
+@@ -222,6 +229,8 @@ static void mn88443x_cmn_power_on(struct mn88443x_priv *chip)
+ 	} else {
+ 		regmap_write(r_t, HIZSET3, 0x8f);
+ 	}
++
++	return 0;
+ }
+ 
+ static void mn88443x_cmn_power_off(struct mn88443x_priv *chip)
+@@ -738,7 +747,10 @@ static int mn88443x_probe(struct i2c_client *client,
+ 	chip->fe.demodulator_priv = chip;
+ 	i2c_set_clientdata(client, chip);
+ 
+-	mn88443x_cmn_power_on(chip);
++	ret = mn88443x_cmn_power_on(chip);
++	if (ret)
++		goto err_i2c_t;
++
+ 	mn88443x_s_sleep(chip);
+ 	mn88443x_t_sleep(chip);
+ 
+diff --git a/drivers/media/i2c/ir-kbd-i2c.c b/drivers/media/i2c/ir-kbd-i2c.c
+index 92376592455ee..56674173524fd 100644
+--- a/drivers/media/i2c/ir-kbd-i2c.c
++++ b/drivers/media/i2c/ir-kbd-i2c.c
+@@ -791,6 +791,7 @@ static int ir_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 		rc_proto    = RC_PROTO_BIT_RC5 | RC_PROTO_BIT_RC6_MCE |
+ 							RC_PROTO_BIT_RC6_6A_32;
+ 		ir_codes    = RC_MAP_HAUPPAUGE;
++		ir->polling_interval = 125;
+ 		probe_tx = true;
+ 		break;
+ 	}
+diff --git a/drivers/media/i2c/mt9p031.c b/drivers/media/i2c/mt9p031.c
+index dc23b9ed510a4..18440c5104ad9 100644
+--- a/drivers/media/i2c/mt9p031.c
++++ b/drivers/media/i2c/mt9p031.c
+@@ -78,7 +78,9 @@
+ #define		MT9P031_PIXEL_CLOCK_INVERT		(1 << 15)
+ #define		MT9P031_PIXEL_CLOCK_SHIFT(n)		((n) << 8)
+ #define		MT9P031_PIXEL_CLOCK_DIVIDE(n)		((n) << 0)
+-#define MT9P031_FRAME_RESTART				0x0b
++#define MT9P031_RESTART					0x0b
++#define		MT9P031_FRAME_PAUSE_RESTART		(1 << 1)
++#define		MT9P031_FRAME_RESTART			(1 << 0)
+ #define MT9P031_SHUTTER_DELAY				0x0c
+ #define MT9P031_RST					0x0d
+ #define		MT9P031_RST_ENABLE			1
+@@ -445,9 +447,23 @@ static int mt9p031_set_params(struct mt9p031 *mt9p031)
+ static int mt9p031_s_stream(struct v4l2_subdev *subdev, int enable)
+ {
+ 	struct mt9p031 *mt9p031 = to_mt9p031(subdev);
++	struct i2c_client *client = v4l2_get_subdevdata(subdev);
++	int val;
+ 	int ret;
+ 
+ 	if (!enable) {
++		/* enable pause restart */
++		val = MT9P031_FRAME_PAUSE_RESTART;
++		ret = mt9p031_write(client, MT9P031_RESTART, val);
++		if (ret < 0)
++			return ret;
++
++		/* enable restart + keep pause restart set */
++		val |= MT9P031_FRAME_RESTART;
++		ret = mt9p031_write(client, MT9P031_RESTART, val);
++		if (ret < 0)
++			return ret;
++
+ 		/* Stop sensor readout */
+ 		ret = mt9p031_set_output_control(mt9p031,
+ 						 MT9P031_OUTPUT_CONTROL_CEN, 0);
+@@ -467,6 +483,16 @@ static int mt9p031_s_stream(struct v4l2_subdev *subdev, int enable)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	/*
++	 * - clear pause restart
++	 * - don't clear restart as clearing restart manually can cause
++	 *   undefined behavior
++	 */
++	val = MT9P031_FRAME_RESTART;
++	ret = mt9p031_write(client, MT9P031_RESTART, val);
++	if (ret < 0)
++		return ret;
++
+ 	return mt9p031_pll_enable(mt9p031);
+ }
+ 
+diff --git a/drivers/media/i2c/tda1997x.c b/drivers/media/i2c/tda1997x.c
+index 17cc69c3227f8..8476330964fc7 100644
+--- a/drivers/media/i2c/tda1997x.c
++++ b/drivers/media/i2c/tda1997x.c
+@@ -1247,13 +1247,13 @@ tda1997x_parse_infoframe(struct tda1997x_state *state, u16 addr)
+ {
+ 	struct v4l2_subdev *sd = &state->sd;
+ 	union hdmi_infoframe frame;
+-	u8 buffer[40];
++	u8 buffer[40] = { 0 };
+ 	u8 reg;
+ 	int len, err;
+ 
+ 	/* read data */
+ 	len = io_readn(sd, addr, sizeof(buffer), buffer);
+-	err = hdmi_infoframe_unpack(&frame, buffer, sizeof(buffer));
++	err = hdmi_infoframe_unpack(&frame, buffer, len);
+ 	if (err) {
+ 		v4l_err(state->client,
+ 			"failed parsing %d byte infoframe: 0x%04x/0x%02x\n",
+@@ -1927,13 +1927,13 @@ static int tda1997x_log_infoframe(struct v4l2_subdev *sd, int addr)
+ {
+ 	struct tda1997x_state *state = to_state(sd);
+ 	union hdmi_infoframe frame;
+-	u8 buffer[40];
++	u8 buffer[40] = { 0 };
+ 	int len, err;
+ 
+ 	/* read data */
+ 	len = io_readn(sd, addr, sizeof(buffer), buffer);
+ 	v4l2_dbg(1, debug, sd, "infoframe: addr=%d len=%d\n", addr, len);
+-	err = hdmi_infoframe_unpack(&frame, buffer, sizeof(buffer));
++	err = hdmi_infoframe_unpack(&frame, buffer, len);
+ 	if (err) {
+ 		v4l_err(state->client,
+ 			"failed parsing %d byte infoframe: 0x%04x/0x%02x\n",
+diff --git a/drivers/media/pci/cx23885/cx23885-alsa.c b/drivers/media/pci/cx23885/cx23885-alsa.c
+index 13689c5dd47ff..9154031c087d4 100644
+--- a/drivers/media/pci/cx23885/cx23885-alsa.c
++++ b/drivers/media/pci/cx23885/cx23885-alsa.c
+@@ -550,7 +550,7 @@ struct cx23885_audio_dev *cx23885_audio_register(struct cx23885_dev *dev)
+ 			   SNDRV_DEFAULT_IDX1, SNDRV_DEFAULT_STR1,
+ 			THIS_MODULE, sizeof(struct cx23885_audio_dev), &card);
+ 	if (err < 0)
+-		goto error;
++		goto error_msg;
+ 
+ 	chip = (struct cx23885_audio_dev *) card->private_data;
+ 	chip->dev = dev;
+@@ -576,6 +576,7 @@ struct cx23885_audio_dev *cx23885_audio_register(struct cx23885_dev *dev)
+ 
+ error:
+ 	snd_card_free(card);
++error_msg:
+ 	pr_err("%s(): Failed to register analog audio adapter\n",
+ 	       __func__);
+ 
+diff --git a/drivers/media/pci/netup_unidvb/netup_unidvb_core.c b/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
+index 6f3125c2d0976..77bae14685513 100644
+--- a/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
++++ b/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
+@@ -258,19 +258,24 @@ static irqreturn_t netup_unidvb_isr(int irq, void *dev_id)
+ 	if ((reg40 & AVL_IRQ_ASSERTED) != 0) {
+ 		/* IRQ is being signaled */
+ 		reg_isr = readw(ndev->bmmio0 + REG_ISR);
+-		if (reg_isr & NETUP_UNIDVB_IRQ_I2C0) {
+-			iret = netup_i2c_interrupt(&ndev->i2c[0]);
+-		} else if (reg_isr & NETUP_UNIDVB_IRQ_I2C1) {
+-			iret = netup_i2c_interrupt(&ndev->i2c[1]);
+-		} else if (reg_isr & NETUP_UNIDVB_IRQ_SPI) {
++		if (reg_isr & NETUP_UNIDVB_IRQ_SPI)
+ 			iret = netup_spi_interrupt(ndev->spi);
+-		} else if (reg_isr & NETUP_UNIDVB_IRQ_DMA1) {
+-			iret = netup_dma_interrupt(&ndev->dma[0]);
+-		} else if (reg_isr & NETUP_UNIDVB_IRQ_DMA2) {
+-			iret = netup_dma_interrupt(&ndev->dma[1]);
+-		} else if (reg_isr & NETUP_UNIDVB_IRQ_CI) {
+-			iret = netup_ci_interrupt(ndev);
++		else if (!ndev->old_fw) {
++			if (reg_isr & NETUP_UNIDVB_IRQ_I2C0) {
++				iret = netup_i2c_interrupt(&ndev->i2c[0]);
++			} else if (reg_isr & NETUP_UNIDVB_IRQ_I2C1) {
++				iret = netup_i2c_interrupt(&ndev->i2c[1]);
++			} else if (reg_isr & NETUP_UNIDVB_IRQ_DMA1) {
++				iret = netup_dma_interrupt(&ndev->dma[0]);
++			} else if (reg_isr & NETUP_UNIDVB_IRQ_DMA2) {
++				iret = netup_dma_interrupt(&ndev->dma[1]);
++			} else if (reg_isr & NETUP_UNIDVB_IRQ_CI) {
++				iret = netup_ci_interrupt(ndev);
++			} else {
++				goto err;
++			}
+ 		} else {
++err:
+ 			dev_err(&pci_dev->dev,
+ 				"%s(): unknown interrupt 0x%x\n",
+ 				__func__, reg_isr);
+diff --git a/drivers/media/platform/mtk-vpu/mtk_vpu.c b/drivers/media/platform/mtk-vpu/mtk_vpu.c
+index 36cb9b6131f7e..c62eb212cca92 100644
+--- a/drivers/media/platform/mtk-vpu/mtk_vpu.c
++++ b/drivers/media/platform/mtk-vpu/mtk_vpu.c
+@@ -820,7 +820,8 @@ static int mtk_vpu_probe(struct platform_device *pdev)
+ 	vpu->wdt.wq = create_singlethread_workqueue("vpu_wdt");
+ 	if (!vpu->wdt.wq) {
+ 		dev_err(dev, "initialize wdt workqueue failed\n");
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto clk_unprepare;
+ 	}
+ 	INIT_WORK(&vpu->wdt.ws, vpu_wdt_reset_func);
+ 	mutex_init(&vpu->vpu_mutex);
+@@ -914,6 +915,8 @@ disable_vpu_clk:
+ 	vpu_clock_disable(vpu);
+ workqueue_destroy:
+ 	destroy_workqueue(vpu->wdt.wq);
++clk_unprepare:
++	clk_unprepare(vpu->clk);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/platform/rcar-vin/rcar-csi2.c b/drivers/media/platform/rcar-vin/rcar-csi2.c
+index 79f229756805e..d2d87a204e918 100644
+--- a/drivers/media/platform/rcar-vin/rcar-csi2.c
++++ b/drivers/media/platform/rcar-vin/rcar-csi2.c
+@@ -544,6 +544,8 @@ static int rcsi2_start_receiver(struct rcar_csi2 *priv)
+ 
+ 	/* Code is validated in set_fmt. */
+ 	format = rcsi2_code_to_fmt(priv->mf.code);
++	if (!format)
++		return -EINVAL;
+ 
+ 	/*
+ 	 * Enable all supported CSI-2 channels with virtual channel and
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+index eba2b9f040df0..f336a95432732 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+@@ -1283,11 +1283,15 @@ static int s5p_mfc_probe(struct platform_device *pdev)
+ 	spin_lock_init(&dev->condlock);
+ 	dev->plat_dev = pdev;
+ 	if (!dev->plat_dev) {
+-		dev_err(&pdev->dev, "No platform data specified\n");
++		mfc_err("No platform data specified\n");
+ 		return -ENODEV;
+ 	}
+ 
+ 	dev->variant = of_device_get_match_data(&pdev->dev);
++	if (!dev->variant) {
++		dev_err(&pdev->dev, "Failed to get device MFC hardware variant information\n");
++		return -ENOENT;
++	}
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	dev->regs_base = devm_ioremap_resource(&pdev->dev, res);
+diff --git a/drivers/media/platform/stm32/stm32-dcmi.c b/drivers/media/platform/stm32/stm32-dcmi.c
+index fd1c41cba52fc..233e4d3feacd9 100644
+--- a/drivers/media/platform/stm32/stm32-dcmi.c
++++ b/drivers/media/platform/stm32/stm32-dcmi.c
+@@ -135,6 +135,7 @@ struct stm32_dcmi {
+ 	int				sequence;
+ 	struct list_head		buffers;
+ 	struct dcmi_buf			*active;
++	int			irq;
+ 
+ 	struct v4l2_device		v4l2_dev;
+ 	struct video_device		*vdev;
+@@ -1720,6 +1721,14 @@ static int dcmi_graph_notify_complete(struct v4l2_async_notifier *notifier)
+ 		return ret;
+ 	}
+ 
++	ret = devm_request_threaded_irq(dcmi->dev, dcmi->irq, dcmi_irq_callback,
++					dcmi_irq_thread, IRQF_ONESHOT,
++					dev_name(dcmi->dev), dcmi);
++	if (ret) {
++		dev_err(dcmi->dev, "Unable to request irq %d\n", dcmi->irq);
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1881,6 +1890,8 @@ static int dcmi_probe(struct platform_device *pdev)
+ 	if (irq <= 0)
+ 		return irq ? irq : -ENXIO;
+ 
++	dcmi->irq = irq;
++
+ 	dcmi->res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (!dcmi->res) {
+ 		dev_err(&pdev->dev, "Could not get resource\n");
+@@ -1893,14 +1904,6 @@ static int dcmi_probe(struct platform_device *pdev)
+ 		return PTR_ERR(dcmi->regs);
+ 	}
+ 
+-	ret = devm_request_threaded_irq(&pdev->dev, irq, dcmi_irq_callback,
+-					dcmi_irq_thread, IRQF_ONESHOT,
+-					dev_name(&pdev->dev), dcmi);
+-	if (ret) {
+-		dev_err(&pdev->dev, "Unable to request irq %d\n", irq);
+-		return ret;
+-	}
+-
+ 	mclk = devm_clk_get(&pdev->dev, "mclk");
+ 	if (IS_ERR(mclk)) {
+ 		if (PTR_ERR(mclk) != -EPROBE_DEFER)
+diff --git a/drivers/media/radio/radio-wl1273.c b/drivers/media/radio/radio-wl1273.c
+index 1123768731676..484046471c03f 100644
+--- a/drivers/media/radio/radio-wl1273.c
++++ b/drivers/media/radio/radio-wl1273.c
+@@ -1279,7 +1279,7 @@ static int wl1273_fm_vidioc_querycap(struct file *file, void *priv,
+ 
+ 	strscpy(capability->driver, WL1273_FM_DRIVER_NAME,
+ 		sizeof(capability->driver));
+-	strscpy(capability->card, "Texas Instruments Wl1273 FM Radio",
++	strscpy(capability->card, "TI Wl1273 FM Radio",
+ 		sizeof(capability->card));
+ 	strscpy(capability->bus_info, radio->bus_type,
+ 		sizeof(capability->bus_info));
+diff --git a/drivers/media/radio/si470x/radio-si470x-i2c.c b/drivers/media/radio/si470x/radio-si470x-i2c.c
+index f491420d7b538..a972c0705ac79 100644
+--- a/drivers/media/radio/si470x/radio-si470x-i2c.c
++++ b/drivers/media/radio/si470x/radio-si470x-i2c.c
+@@ -11,7 +11,7 @@
+ 
+ /* driver definitions */
+ #define DRIVER_AUTHOR "Joonyoung Shim <jy0922.shim@samsung.com>";
+-#define DRIVER_CARD "Silicon Labs Si470x FM Radio Receiver"
++#define DRIVER_CARD "Silicon Labs Si470x FM Radio"
+ #define DRIVER_DESC "I2C radio driver for Si470x FM Radio Receivers"
+ #define DRIVER_VERSION "1.0.2"
+ 
+diff --git a/drivers/media/radio/si470x/radio-si470x-usb.c b/drivers/media/radio/si470x/radio-si470x-usb.c
+index fedff68d8c496..3f8634a465730 100644
+--- a/drivers/media/radio/si470x/radio-si470x-usb.c
++++ b/drivers/media/radio/si470x/radio-si470x-usb.c
+@@ -16,7 +16,7 @@
+ 
+ /* driver definitions */
+ #define DRIVER_AUTHOR "Tobias Lorenz <tobias.lorenz@gmx.net>"
+-#define DRIVER_CARD "Silicon Labs Si470x FM Radio Receiver"
++#define DRIVER_CARD "Silicon Labs Si470x FM Radio"
+ #define DRIVER_DESC "USB radio driver for Si470x FM Radio Receivers"
+ #define DRIVER_VERSION "1.0.10"
+ 
+diff --git a/drivers/media/rc/ir_toy.c b/drivers/media/rc/ir_toy.c
+index 48d52baec1a1c..1aa7989e756cc 100644
+--- a/drivers/media/rc/ir_toy.c
++++ b/drivers/media/rc/ir_toy.c
+@@ -310,7 +310,7 @@ static int irtoy_tx(struct rc_dev *rc, uint *txbuf, uint count)
+ 		buf[i] = cpu_to_be16(v);
+ 	}
+ 
+-	buf[count] = 0xffff;
++	buf[count] = cpu_to_be16(0xffff);
+ 
+ 	irtoy->tx_buf = buf;
+ 	irtoy->tx_len = size;
+diff --git a/drivers/media/rc/ite-cir.c b/drivers/media/rc/ite-cir.c
+index e5c4a6941d26b..fbe794ca1eef1 100644
+--- a/drivers/media/rc/ite-cir.c
++++ b/drivers/media/rc/ite-cir.c
+@@ -283,7 +283,7 @@ static irqreturn_t ite_cir_isr(int irq, void *data)
+ 	}
+ 
+ 	/* check for the receive interrupt */
+-	if (iflags & ITE_IRQ_RX_FIFO) {
++	if (iflags & (ITE_IRQ_RX_FIFO | ITE_IRQ_RX_FIFO_OVERRUN)) {
+ 		/* read the FIFO bytes */
+ 		rx_bytes =
+ 			dev->params.get_rx_bytes(dev, rx_buf,
+diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
+index 5642595a057ec..8870c4e6c5f44 100644
+--- a/drivers/media/rc/mceusb.c
++++ b/drivers/media/rc/mceusb.c
+@@ -1386,6 +1386,7 @@ static void mceusb_dev_recv(struct urb *urb)
+ 	case -ECONNRESET:
+ 	case -ENOENT:
+ 	case -EILSEQ:
++	case -EPROTO:
+ 	case -ESHUTDOWN:
+ 		usb_unlink_urb(urb);
+ 		return;
+diff --git a/drivers/media/spi/cxd2880-spi.c b/drivers/media/spi/cxd2880-spi.c
+index 93194f03764d2..11273be702b6e 100644
+--- a/drivers/media/spi/cxd2880-spi.c
++++ b/drivers/media/spi/cxd2880-spi.c
+@@ -618,7 +618,7 @@ fail_frontend:
+ fail_attach:
+ 	dvb_unregister_adapter(&dvb_spi->adapter);
+ fail_adapter:
+-	if (!dvb_spi->vcc_supply)
++	if (dvb_spi->vcc_supply)
+ 		regulator_disable(dvb_spi->vcc_supply);
+ fail_regulator:
+ 	kfree(dvb_spi);
+diff --git a/drivers/media/usb/dvb-usb/az6027.c b/drivers/media/usb/dvb-usb/az6027.c
+index 1c39b61cde29b..86788771175b7 100644
+--- a/drivers/media/usb/dvb-usb/az6027.c
++++ b/drivers/media/usb/dvb-usb/az6027.c
+@@ -391,6 +391,7 @@ static struct rc_map_table rc_map_az6027_table[] = {
+ /* remote control stuff (does not work with my box) */
+ static int az6027_rc_query(struct dvb_usb_device *d, u32 *event, int *state)
+ {
++	*state = REMOTE_NO_KEY_PRESSED;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/usb/dvb-usb/dibusb-common.c b/drivers/media/usb/dvb-usb/dibusb-common.c
+index 02b51d1a1b67c..aff60c10cb0b2 100644
+--- a/drivers/media/usb/dvb-usb/dibusb-common.c
++++ b/drivers/media/usb/dvb-usb/dibusb-common.c
+@@ -223,7 +223,7 @@ int dibusb_read_eeprom_byte(struct dvb_usb_device *d, u8 offs, u8 *val)
+ 	u8 *buf;
+ 	int rc;
+ 
+-	buf = kmalloc(2, GFP_KERNEL);
++	buf = kzalloc(2, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index 5144888ae36f7..cf45cc566cbe2 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -4089,8 +4089,11 @@ static void em28xx_usb_disconnect(struct usb_interface *intf)
+ 
+ 	em28xx_close_extension(dev);
+ 
+-	if (dev->dev_next)
++	if (dev->dev_next) {
++		em28xx_close_extension(dev->dev_next);
+ 		em28xx_release_resources(dev->dev_next);
++	}
++
+ 	em28xx_release_resources(dev);
+ 
+ 	if (dev->dev_next) {
+diff --git a/drivers/media/usb/em28xx/em28xx-core.c b/drivers/media/usb/em28xx/em28xx-core.c
+index 3daa64bb1e1d9..af9216278024f 100644
+--- a/drivers/media/usb/em28xx/em28xx-core.c
++++ b/drivers/media/usb/em28xx/em28xx-core.c
+@@ -1152,8 +1152,9 @@ int em28xx_suspend_extension(struct em28xx *dev)
+ 	dev_info(&dev->intf->dev, "Suspending extensions\n");
+ 	mutex_lock(&em28xx_devlist_mutex);
+ 	list_for_each_entry(ops, &em28xx_extension_devlist, next) {
+-		if (ops->suspend)
+-			ops->suspend(dev);
++		if (!ops->suspend)
++			continue;
++		ops->suspend(dev);
+ 		if (dev->dev_next)
+ 			ops->suspend(dev->dev_next);
+ 	}
+diff --git a/drivers/media/usb/tm6000/tm6000-video.c b/drivers/media/usb/tm6000/tm6000-video.c
+index 2df736c029d6e..01071e6cd7574 100644
+--- a/drivers/media/usb/tm6000/tm6000-video.c
++++ b/drivers/media/usb/tm6000/tm6000-video.c
+@@ -854,8 +854,7 @@ static int vidioc_querycap(struct file *file, void  *priv,
+ 	struct tm6000_core *dev = ((struct tm6000_fh *)priv)->dev;
+ 
+ 	strscpy(cap->driver, "tm6000", sizeof(cap->driver));
+-	strscpy(cap->card, "Trident TVMaster TM5600/6000/6010",
+-		sizeof(cap->card));
++	strscpy(cap->card, "Trident TM5600/6000/6010", sizeof(cap->card));
+ 	usb_make_path(dev->udev, cap->bus_info, sizeof(cap->bus_info));
+ 	cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_READWRITE |
+ 			    V4L2_CAP_DEVICE_CAPS;
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 282f3d2388cc2..447b6a198926e 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -2065,6 +2065,7 @@ int uvc_register_video_device(struct uvc_device *dev,
+ 			      const struct v4l2_file_operations *fops,
+ 			      const struct v4l2_ioctl_ops *ioctl_ops)
+ {
++	const char *name;
+ 	int ret;
+ 
+ 	/* Initialize the video buffers queue. */
+@@ -2093,16 +2094,20 @@ int uvc_register_video_device(struct uvc_device *dev,
+ 	case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+ 	default:
+ 		vdev->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
++		name = "Video Capture";
+ 		break;
+ 	case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+ 		vdev->device_caps = V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_STREAMING;
++		name = "Video Output";
+ 		break;
+ 	case V4L2_BUF_TYPE_META_CAPTURE:
+ 		vdev->device_caps = V4L2_CAP_META_CAPTURE | V4L2_CAP_STREAMING;
++		name = "Metadata";
+ 		break;
+ 	}
+ 
+-	strscpy(vdev->name, dev->name, sizeof(vdev->name));
++	snprintf(vdev->name, sizeof(vdev->name), "%s %u", name,
++		 stream->header.bTerminalLink);
+ 
+ 	/*
+ 	 * Set the driver data before calling video_register_device, otherwise
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index 5f0e2fa69da5c..753b8a99e08fc 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -471,10 +471,13 @@ static int uvc_v4l2_set_streamparm(struct uvc_streaming *stream,
+ 	uvc_simplify_fraction(&timeperframe.numerator,
+ 		&timeperframe.denominator, 8, 333);
+ 
+-	if (parm->type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
++	if (parm->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
+ 		parm->parm.capture.timeperframe = timeperframe;
+-	else
++		parm->parm.capture.capability = V4L2_CAP_TIMEPERFRAME;
++	} else {
+ 		parm->parm.output.timeperframe = timeperframe;
++		parm->parm.output.capability = V4L2_CAP_TIMEPERFRAME;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index 5878c78334862..b8477fa93b7d7 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -112,6 +112,11 @@ int uvc_query_ctrl(struct uvc_device *dev, u8 query, u8 unit,
+ 	case 5: /* Invalid unit */
+ 	case 6: /* Invalid control */
+ 	case 7: /* Invalid Request */
++		/*
++		 * The firmware has not properly implemented
++		 * the control or there has been a HW error.
++		 */
++		return -EIO;
+ 	case 8: /* Invalid value within range */
+ 		return -EINVAL;
+ 	default: /* reserved or unknown */
+diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
+index 9eda8b91d17af..4ffa14e44efe4 100644
+--- a/drivers/media/v4l2-core/v4l2-ioctl.c
++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
+@@ -907,7 +907,7 @@ static void v4l_print_default(const void *arg, bool write_only)
+ 	pr_cont("driver-specific ioctl\n");
+ }
+ 
+-static int check_ext_ctrls(struct v4l2_ext_controls *c, int allow_priv)
++static bool check_ext_ctrls(struct v4l2_ext_controls *c, unsigned long ioctl)
+ {
+ 	__u32 i;
+ 
+@@ -916,23 +916,41 @@ static int check_ext_ctrls(struct v4l2_ext_controls *c, int allow_priv)
+ 	for (i = 0; i < c->count; i++)
+ 		c->controls[i].reserved2[0] = 0;
+ 
+-	/* V4L2_CID_PRIVATE_BASE cannot be used as control class
+-	   when using extended controls.
+-	   Only when passed in through VIDIOC_G_CTRL and VIDIOC_S_CTRL
+-	   is it allowed for backwards compatibility.
+-	 */
+-	if (!allow_priv && c->which == V4L2_CID_PRIVATE_BASE)
+-		return 0;
+-	if (!c->which)
+-		return 1;
++	switch (c->which) {
++	case V4L2_CID_PRIVATE_BASE:
++		/*
++		 * V4L2_CID_PRIVATE_BASE cannot be used as control class
++		 * when using extended controls.
++		 * Only when passed in through VIDIOC_G_CTRL and VIDIOC_S_CTRL
++		 * is it allowed for backwards compatibility.
++		 */
++		if (ioctl == VIDIOC_G_CTRL || ioctl == VIDIOC_S_CTRL)
++			return false;
++		break;
++	case V4L2_CTRL_WHICH_DEF_VAL:
++		/* Default value cannot be changed */
++		if (ioctl == VIDIOC_S_EXT_CTRLS ||
++		    ioctl == VIDIOC_TRY_EXT_CTRLS) {
++			c->error_idx = c->count;
++			return false;
++		}
++		return true;
++	case V4L2_CTRL_WHICH_CUR_VAL:
++		return true;
++	case V4L2_CTRL_WHICH_REQUEST_VAL:
++		c->error_idx = c->count;
++		return false;
++	}
++
+ 	/* Check that all controls are from the same control class. */
+ 	for (i = 0; i < c->count; i++) {
+ 		if (V4L2_CTRL_ID2WHICH(c->controls[i].id) != c->which) {
+-			c->error_idx = i;
+-			return 0;
++			c->error_idx = ioctl == VIDIOC_TRY_EXT_CTRLS ? i :
++								      c->count;
++			return false;
+ 		}
+ 	}
+-	return 1;
++	return true;
+ }
+ 
+ static int check_fmt(struct file *file, enum v4l2_buf_type type)
+@@ -2226,7 +2244,7 @@ static int v4l_g_ctrl(const struct v4l2_ioctl_ops *ops,
+ 	ctrls.controls = &ctrl;
+ 	ctrl.id = p->id;
+ 	ctrl.value = p->value;
+-	if (check_ext_ctrls(&ctrls, 1)) {
++	if (check_ext_ctrls(&ctrls, VIDIOC_G_CTRL)) {
+ 		int ret = ops->vidioc_g_ext_ctrls(file, fh, &ctrls);
+ 
+ 		if (ret == 0)
+@@ -2245,6 +2263,7 @@ static int v4l_s_ctrl(const struct v4l2_ioctl_ops *ops,
+ 		test_bit(V4L2_FL_USES_V4L2_FH, &vfd->flags) ? fh : NULL;
+ 	struct v4l2_ext_controls ctrls;
+ 	struct v4l2_ext_control ctrl;
++	int ret;
+ 
+ 	if (vfh && vfh->ctrl_handler)
+ 		return v4l2_s_ctrl(vfh, vfh->ctrl_handler, p);
+@@ -2260,9 +2279,11 @@ static int v4l_s_ctrl(const struct v4l2_ioctl_ops *ops,
+ 	ctrls.controls = &ctrl;
+ 	ctrl.id = p->id;
+ 	ctrl.value = p->value;
+-	if (check_ext_ctrls(&ctrls, 1))
+-		return ops->vidioc_s_ext_ctrls(file, fh, &ctrls);
+-	return -EINVAL;
++	if (!check_ext_ctrls(&ctrls, VIDIOC_S_CTRL))
++		return -EINVAL;
++	ret = ops->vidioc_s_ext_ctrls(file, fh, &ctrls);
++	p->value = ctrl.value;
++	return ret;
+ }
+ 
+ static int v4l_g_ext_ctrls(const struct v4l2_ioctl_ops *ops,
+@@ -2282,8 +2303,8 @@ static int v4l_g_ext_ctrls(const struct v4l2_ioctl_ops *ops,
+ 					vfd, vfd->v4l2_dev->mdev, p);
+ 	if (ops->vidioc_g_ext_ctrls == NULL)
+ 		return -ENOTTY;
+-	return check_ext_ctrls(p, 0) ? ops->vidioc_g_ext_ctrls(file, fh, p) :
+-					-EINVAL;
++	return check_ext_ctrls(p, VIDIOC_G_EXT_CTRLS) ?
++				ops->vidioc_g_ext_ctrls(file, fh, p) : -EINVAL;
+ }
+ 
+ static int v4l_s_ext_ctrls(const struct v4l2_ioctl_ops *ops,
+@@ -2303,8 +2324,8 @@ static int v4l_s_ext_ctrls(const struct v4l2_ioctl_ops *ops,
+ 					vfd, vfd->v4l2_dev->mdev, p);
+ 	if (ops->vidioc_s_ext_ctrls == NULL)
+ 		return -ENOTTY;
+-	return check_ext_ctrls(p, 0) ? ops->vidioc_s_ext_ctrls(file, fh, p) :
+-					-EINVAL;
++	return check_ext_ctrls(p, VIDIOC_S_EXT_CTRLS) ?
++				ops->vidioc_s_ext_ctrls(file, fh, p) : -EINVAL;
+ }
+ 
+ static int v4l_try_ext_ctrls(const struct v4l2_ioctl_ops *ops,
+@@ -2324,8 +2345,8 @@ static int v4l_try_ext_ctrls(const struct v4l2_ioctl_ops *ops,
+ 					  vfd, vfd->v4l2_dev->mdev, p);
+ 	if (ops->vidioc_try_ext_ctrls == NULL)
+ 		return -ENOTTY;
+-	return check_ext_ctrls(p, 0) ? ops->vidioc_try_ext_ctrls(file, fh, p) :
+-					-EINVAL;
++	return check_ext_ctrls(p, VIDIOC_TRY_EXT_CTRLS) ?
++			ops->vidioc_try_ext_ctrls(file, fh, p) : -EINVAL;
+ }
+ 
+ /*
+diff --git a/drivers/memory/fsl_ifc.c b/drivers/memory/fsl_ifc.c
+index d062c2f8250f4..75a8c38df9394 100644
+--- a/drivers/memory/fsl_ifc.c
++++ b/drivers/memory/fsl_ifc.c
+@@ -263,7 +263,7 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
+ 
+ 	ret = fsl_ifc_ctrl_init(fsl_ifc_ctrl_dev);
+ 	if (ret < 0)
+-		goto err;
++		goto err_unmap_nandirq;
+ 
+ 	init_waitqueue_head(&fsl_ifc_ctrl_dev->nand_wait);
+ 
+@@ -272,7 +272,7 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
+ 	if (ret != 0) {
+ 		dev_err(&dev->dev, "failed to install irq (%d)\n",
+ 			fsl_ifc_ctrl_dev->irq);
+-		goto err_irq;
++		goto err_unmap_nandirq;
+ 	}
+ 
+ 	if (fsl_ifc_ctrl_dev->nand_irq) {
+@@ -281,17 +281,16 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
+ 		if (ret != 0) {
+ 			dev_err(&dev->dev, "failed to install irq (%d)\n",
+ 				fsl_ifc_ctrl_dev->nand_irq);
+-			goto err_nandirq;
++			goto err_free_irq;
+ 		}
+ 	}
+ 
+ 	return 0;
+ 
+-err_nandirq:
+-	free_irq(fsl_ifc_ctrl_dev->nand_irq, fsl_ifc_ctrl_dev);
+-	irq_dispose_mapping(fsl_ifc_ctrl_dev->nand_irq);
+-err_irq:
++err_free_irq:
+ 	free_irq(fsl_ifc_ctrl_dev->irq, fsl_ifc_ctrl_dev);
++err_unmap_nandirq:
++	irq_dispose_mapping(fsl_ifc_ctrl_dev->nand_irq);
+ 	irq_dispose_mapping(fsl_ifc_ctrl_dev->irq);
+ err:
+ 	iounmap(fsl_ifc_ctrl_dev->gregs);
+diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
+index 1fe6c35b7503e..a760ab08256ff 100644
+--- a/drivers/memory/renesas-rpc-if.c
++++ b/drivers/memory/renesas-rpc-if.c
+@@ -161,10 +161,62 @@ static const struct regmap_access_table rpcif_volatile_table = {
+ 	.n_yes_ranges	= ARRAY_SIZE(rpcif_volatile_ranges),
+ };
+ 
++
++/*
++ * Custom accessor functions to ensure SMRDR0 and SMWDR0 are always accessed
++ * with proper width. Requires SMENR_SPIDE to be correctly set before!
++ */
++static int rpcif_reg_read(void *context, unsigned int reg, unsigned int *val)
++{
++	struct rpcif *rpc = context;
++
++	if (reg == RPCIF_SMRDR0 || reg == RPCIF_SMWDR0) {
++		u32 spide = readl(rpc->base + RPCIF_SMENR) & RPCIF_SMENR_SPIDE(0xF);
++
++		if (spide == 0x8) {
++			*val = readb(rpc->base + reg);
++			return 0;
++		} else if (spide == 0xC) {
++			*val = readw(rpc->base + reg);
++			return 0;
++		} else if (spide != 0xF) {
++			return -EILSEQ;
++		}
++	}
++
++	*val = readl(rpc->base + reg);
++	return 0;
++
++}
++
++static int rpcif_reg_write(void *context, unsigned int reg, unsigned int val)
++{
++	struct rpcif *rpc = context;
++
++	if (reg == RPCIF_SMRDR0 || reg == RPCIF_SMWDR0) {
++		u32 spide = readl(rpc->base + RPCIF_SMENR) & RPCIF_SMENR_SPIDE(0xF);
++
++		if (spide == 0x8) {
++			writeb(val, rpc->base + reg);
++			return 0;
++		} else if (spide == 0xC) {
++			writew(val, rpc->base + reg);
++			return 0;
++		} else if (spide != 0xF) {
++			return -EILSEQ;
++		}
++	}
++
++	writel(val, rpc->base + reg);
++	return 0;
++}
++
+ static const struct regmap_config rpcif_regmap_config = {
+ 	.reg_bits	= 32,
+ 	.val_bits	= 32,
+ 	.reg_stride	= 4,
++	.reg_read	= rpcif_reg_read,
++	.reg_write	= rpcif_reg_write,
+ 	.fast_io	= true,
+ 	.max_register	= RPCIF_PHYINT,
+ 	.volatile_table	= &rpcif_volatile_table,
+@@ -174,17 +226,15 @@ int rpcif_sw_init(struct rpcif *rpc, struct device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+ 	struct resource *res;
+-	void __iomem *base;
+ 
+ 	rpc->dev = dev;
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
+-	base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(base))
+-		return PTR_ERR(base);
++	rpc->base = devm_ioremap_resource(&pdev->dev, res);
++	if (IS_ERR(rpc->base))
++		return PTR_ERR(rpc->base);
+ 
+-	rpc->regmap = devm_regmap_init_mmio(&pdev->dev, base,
+-					    &rpcif_regmap_config);
++	rpc->regmap = devm_regmap_init(&pdev->dev, NULL, rpc, &rpcif_regmap_config);
+ 	if (IS_ERR(rpc->regmap)) {
+ 		dev_err(&pdev->dev,
+ 			"failed to init regmap for rpcif, error %ld\n",
+@@ -367,20 +417,16 @@ void rpcif_prepare(struct rpcif *rpc, const struct rpcif_op *op, u64 *offs,
+ 			nbytes = op->data.nbytes;
+ 		rpc->xferlen = nbytes;
+ 
+-		rpc->enable |= RPCIF_SMENR_SPIDE(rpcif_bits_set(rpc, nbytes)) |
+-			RPCIF_SMENR_SPIDB(rpcif_bit_size(op->data.buswidth));
++		rpc->enable |= RPCIF_SMENR_SPIDB(rpcif_bit_size(op->data.buswidth));
+ 	}
+ }
+ EXPORT_SYMBOL(rpcif_prepare);
+ 
+ int rpcif_manual_xfer(struct rpcif *rpc)
+ {
+-	u32 smenr, smcr, pos = 0, max = 4;
++	u32 smenr, smcr, pos = 0, max = rpc->bus_size == 2 ? 8 : 4;
+ 	int ret = 0;
+ 
+-	if (rpc->bus_size == 2)
+-		max = 8;
+-
+ 	pm_runtime_get_sync(rpc->dev);
+ 
+ 	regmap_update_bits(rpc->regmap, RPCIF_PHYCNT,
+@@ -391,37 +437,36 @@ int rpcif_manual_xfer(struct rpcif *rpc)
+ 	regmap_write(rpc->regmap, RPCIF_SMOPR, rpc->option);
+ 	regmap_write(rpc->regmap, RPCIF_SMDMCR, rpc->dummy);
+ 	regmap_write(rpc->regmap, RPCIF_SMDRENR, rpc->ddr);
++	regmap_write(rpc->regmap, RPCIF_SMADR, rpc->smadr);
+ 	smenr = rpc->enable;
+ 
+ 	switch (rpc->dir) {
+ 	case RPCIF_DATA_OUT:
+ 		while (pos < rpc->xferlen) {
+-			u32 nbytes = rpc->xferlen - pos;
+-			u32 data[2];
++			u32 bytes_left = rpc->xferlen - pos;
++			u32 nbytes, data[2];
+ 
+ 			smcr = rpc->smcr | RPCIF_SMCR_SPIE;
+-			if (nbytes > max) {
+-				nbytes = max;
++
++			/* nbytes may only be 1, 2, 4, or 8 */
++			nbytes = bytes_left >= max ? max : (1 << ilog2(bytes_left));
++			if (bytes_left > nbytes)
+ 				smcr |= RPCIF_SMCR_SSLKP;
+-			}
++
++			smenr |= RPCIF_SMENR_SPIDE(rpcif_bits_set(rpc, nbytes));
++			regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
+ 
+ 			memcpy(data, rpc->buffer + pos, nbytes);
+-			if (nbytes > 4) {
++			if (nbytes == 8) {
+ 				regmap_write(rpc->regmap, RPCIF_SMWDR1,
+ 					     data[0]);
+ 				regmap_write(rpc->regmap, RPCIF_SMWDR0,
+ 					     data[1]);
+-			} else if (nbytes > 2) {
++			} else {
+ 				regmap_write(rpc->regmap, RPCIF_SMWDR0,
+ 					     data[0]);
+-			} else	{
+-				regmap_write(rpc->regmap, RPCIF_SMWDR0,
+-					     data[0] << 16);
+ 			}
+ 
+-			regmap_write(rpc->regmap, RPCIF_SMADR,
+-				     rpc->smadr + pos);
+-			regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
+ 			regmap_write(rpc->regmap, RPCIF_SMCR, smcr);
+ 			ret = wait_msg_xfer_end(rpc);
+ 			if (ret)
+@@ -461,14 +506,16 @@ int rpcif_manual_xfer(struct rpcif *rpc)
+ 			break;
+ 		}
+ 		while (pos < rpc->xferlen) {
+-			u32 nbytes = rpc->xferlen - pos;
+-			u32 data[2];
++			u32 bytes_left = rpc->xferlen - pos;
++			u32 nbytes, data[2];
+ 
+-			if (nbytes > max)
+-				nbytes = max;
++			/* nbytes may only be 1, 2, 4, or 8 */
++			nbytes = bytes_left >= max ? max : (1 << ilog2(bytes_left));
+ 
+ 			regmap_write(rpc->regmap, RPCIF_SMADR,
+ 				     rpc->smadr + pos);
++			smenr &= ~RPCIF_SMENR_SPIDE(0xF);
++			smenr |= RPCIF_SMENR_SPIDE(rpcif_bits_set(rpc, nbytes));
+ 			regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
+ 			regmap_write(rpc->regmap, RPCIF_SMCR,
+ 				     rpc->smcr | RPCIF_SMCR_SPIE);
+@@ -476,18 +523,14 @@ int rpcif_manual_xfer(struct rpcif *rpc)
+ 			if (ret)
+ 				goto err_out;
+ 
+-			if (nbytes > 4) {
++			if (nbytes == 8) {
+ 				regmap_read(rpc->regmap, RPCIF_SMRDR1,
+ 					    &data[0]);
+ 				regmap_read(rpc->regmap, RPCIF_SMRDR0,
+ 					    &data[1]);
+-			} else if (nbytes > 2) {
+-				regmap_read(rpc->regmap, RPCIF_SMRDR0,
+-					    &data[0]);
+-			} else	{
++			} else {
+ 				regmap_read(rpc->regmap, RPCIF_SMRDR0,
+ 					    &data[0]);
+-				data[0] >>= 16;
+ 			}
+ 			memcpy(rpc->buffer + pos, data, nbytes);
+ 
+diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
+index 8004dd64d09a8..bc1f484f50f1d 100644
+--- a/drivers/memstick/core/ms_block.c
++++ b/drivers/memstick/core/ms_block.c
+@@ -1727,7 +1727,7 @@ static int msb_init_card(struct memstick_dev *card)
+ 	msb->pages_in_block = boot_block->attr.block_size * 2;
+ 	msb->block_size = msb->page_size * msb->pages_in_block;
+ 
+-	if (msb->page_size > PAGE_SIZE) {
++	if ((size_t)msb->page_size > PAGE_SIZE) {
+ 		/* this isn't supported by linux at all, anyway*/
+ 		dbg("device page %d size isn't supported", msb->page_size);
+ 		return -EINVAL;
+diff --git a/drivers/memstick/host/jmb38x_ms.c b/drivers/memstick/host/jmb38x_ms.c
+index e83c3ada9389e..9e8cccbd2817e 100644
+--- a/drivers/memstick/host/jmb38x_ms.c
++++ b/drivers/memstick/host/jmb38x_ms.c
+@@ -882,7 +882,7 @@ static struct memstick_host *jmb38x_ms_alloc_host(struct jmb38x_ms *jm, int cnt)
+ 
+ 	iounmap(host->addr);
+ err_out_free:
+-	kfree(msh);
++	memstick_free_host(msh);
+ 	return NULL;
+ }
+ 
+diff --git a/drivers/memstick/host/r592.c b/drivers/memstick/host/r592.c
+index d2ef46337191c..eaa2a94d18be4 100644
+--- a/drivers/memstick/host/r592.c
++++ b/drivers/memstick/host/r592.c
+@@ -837,15 +837,15 @@ static void r592_remove(struct pci_dev *pdev)
+ 	}
+ 	memstick_remove_host(dev->host);
+ 
++	if (dev->dummy_dma_page)
++		dma_free_coherent(&pdev->dev, PAGE_SIZE, dev->dummy_dma_page,
++			dev->dummy_dma_page_physical_address);
++
+ 	free_irq(dev->irq, dev);
+ 	iounmap(dev->mmio);
+ 	pci_release_regions(pdev);
+ 	pci_disable_device(pdev);
+ 	memstick_free_host(dev->host);
+-
+-	if (dev->dummy_dma_page)
+-		dma_free_coherent(&pdev->dev, PAGE_SIZE, dev->dummy_dma_page,
+-			dev->dummy_dma_page_physical_address);
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/mfd/dln2.c b/drivers/mfd/dln2.c
+index 83e676a096dc1..852129ea07666 100644
+--- a/drivers/mfd/dln2.c
++++ b/drivers/mfd/dln2.c
+@@ -50,6 +50,7 @@ enum dln2_handle {
+ 	DLN2_HANDLE_GPIO,
+ 	DLN2_HANDLE_I2C,
+ 	DLN2_HANDLE_SPI,
++	DLN2_HANDLE_ADC,
+ 	DLN2_HANDLES
+ };
+ 
+@@ -653,6 +654,7 @@ enum {
+ 	DLN2_ACPI_MATCH_GPIO	= 0,
+ 	DLN2_ACPI_MATCH_I2C	= 1,
+ 	DLN2_ACPI_MATCH_SPI	= 2,
++	DLN2_ACPI_MATCH_ADC	= 3,
+ };
+ 
+ static struct dln2_platform_data dln2_pdata_gpio = {
+@@ -683,6 +685,16 @@ static struct mfd_cell_acpi_match dln2_acpi_match_spi = {
+ 	.adr = DLN2_ACPI_MATCH_SPI,
+ };
+ 
++/* Only one ADC port supported */
++static struct dln2_platform_data dln2_pdata_adc = {
++	.handle = DLN2_HANDLE_ADC,
++	.port = 0,
++};
++
++static struct mfd_cell_acpi_match dln2_acpi_match_adc = {
++	.adr = DLN2_ACPI_MATCH_ADC,
++};
++
+ static const struct mfd_cell dln2_devs[] = {
+ 	{
+ 		.name = "dln2-gpio",
+@@ -702,6 +714,12 @@ static const struct mfd_cell dln2_devs[] = {
+ 		.platform_data = &dln2_pdata_spi,
+ 		.pdata_size = sizeof(struct dln2_platform_data),
+ 	},
++	{
++		.name = "dln2-adc",
++		.acpi_match = &dln2_acpi_match_adc,
++		.platform_data = &dln2_pdata_adc,
++		.pdata_size = sizeof(struct dln2_platform_data),
++	},
+ };
+ 
+ static void dln2_stop(struct dln2_dev *dln2)
+diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c
+index fc00aaccb5f72..a3a6faa99de05 100644
+--- a/drivers/mfd/mfd-core.c
++++ b/drivers/mfd/mfd-core.c
+@@ -210,6 +210,7 @@ static int mfd_add_device(struct device *parent, int id,
+ 			if (of_device_is_compatible(np, cell->of_compatible)) {
+ 				/* Ignore 'disabled' devices error free */
+ 				if (!of_device_is_available(np)) {
++					of_node_put(np);
+ 					ret = 0;
+ 					goto fail_alias;
+ 				}
+@@ -217,6 +218,7 @@ static int mfd_add_device(struct device *parent, int id,
+ 				ret = mfd_match_of_node_to_dev(pdev, np, cell);
+ 				if (ret == -EAGAIN)
+ 					continue;
++				of_node_put(np);
+ 				if (ret)
+ 					goto fail_alias;
+ 
+diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
+index 31481c9fcc2ec..30ff42fd173e2 100644
+--- a/drivers/mmc/host/Kconfig
++++ b/drivers/mmc/host/Kconfig
+@@ -503,7 +503,7 @@ config MMC_OMAP_HS
+ 
+ config MMC_WBSD
+ 	tristate "Winbond W83L51xD SD/MMC Card Interface support"
+-	depends on ISA_DMA_API
++	depends on ISA_DMA_API && !M68K
+ 	help
+ 	  This selects the Winbond(R) W83L51xD Secure digital and
+ 	  Multimedia card Interface.
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 7f90326b1be50..a6170f80ba64d 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -2014,7 +2014,8 @@ static void dw_mci_tasklet_func(unsigned long priv)
+ 				 * delayed. Allowing the transfer to take place
+ 				 * avoids races and keeps things simple.
+ 				 */
+-				if (err != -ETIMEDOUT) {
++				if (err != -ETIMEDOUT &&
++				    host->dir_status == DW_MCI_RECV_STATUS) {
+ 					state = STATE_SENDING_DATA;
+ 					continue;
+ 				}
+diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
+index 2e4a7c6971dc9..7697068ad9695 100644
+--- a/drivers/mmc/host/moxart-mmc.c
++++ b/drivers/mmc/host/moxart-mmc.c
+@@ -569,37 +569,37 @@ static int moxart_probe(struct platform_device *pdev)
+ 	if (!mmc) {
+ 		dev_err(dev, "mmc_alloc_host failed\n");
+ 		ret = -ENOMEM;
+-		goto out;
++		goto out_mmc;
+ 	}
+ 
+ 	ret = of_address_to_resource(node, 0, &res_mmc);
+ 	if (ret) {
+ 		dev_err(dev, "of_address_to_resource failed\n");
+-		goto out;
++		goto out_mmc;
+ 	}
+ 
+ 	irq = irq_of_parse_and_map(node, 0);
+ 	if (irq <= 0) {
+ 		dev_err(dev, "irq_of_parse_and_map failed\n");
+ 		ret = -EINVAL;
+-		goto out;
++		goto out_mmc;
+ 	}
+ 
+ 	clk = devm_clk_get(dev, NULL);
+ 	if (IS_ERR(clk)) {
+ 		ret = PTR_ERR(clk);
+-		goto out;
++		goto out_mmc;
+ 	}
+ 
+ 	reg_mmc = devm_ioremap_resource(dev, &res_mmc);
+ 	if (IS_ERR(reg_mmc)) {
+ 		ret = PTR_ERR(reg_mmc);
+-		goto out;
++		goto out_mmc;
+ 	}
+ 
+ 	ret = mmc_of_parse(mmc);
+ 	if (ret)
+-		goto out;
++		goto out_mmc;
+ 
+ 	host = mmc_priv(mmc);
+ 	host->mmc = mmc;
+@@ -624,6 +624,14 @@ static int moxart_probe(struct platform_device *pdev)
+ 			ret = -EPROBE_DEFER;
+ 			goto out;
+ 		}
++		if (!IS_ERR(host->dma_chan_tx)) {
++			dma_release_channel(host->dma_chan_tx);
++			host->dma_chan_tx = NULL;
++		}
++		if (!IS_ERR(host->dma_chan_rx)) {
++			dma_release_channel(host->dma_chan_rx);
++			host->dma_chan_rx = NULL;
++		}
+ 		dev_dbg(dev, "PIO mode transfer enabled\n");
+ 		host->have_dma = false;
+ 	} else {
+@@ -678,6 +686,11 @@ static int moxart_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ out:
++	if (!IS_ERR_OR_NULL(host->dma_chan_tx))
++		dma_release_channel(host->dma_chan_tx);
++	if (!IS_ERR_OR_NULL(host->dma_chan_rx))
++		dma_release_channel(host->dma_chan_rx);
++out_mmc:
+ 	if (mmc)
+ 		mmc_free_host(mmc);
+ 	return ret;
+@@ -690,9 +703,9 @@ static int moxart_remove(struct platform_device *pdev)
+ 
+ 	dev_set_drvdata(&pdev->dev, NULL);
+ 
+-	if (!IS_ERR(host->dma_chan_tx))
++	if (!IS_ERR_OR_NULL(host->dma_chan_tx))
+ 		dma_release_channel(host->dma_chan_tx);
+-	if (!IS_ERR(host->dma_chan_rx))
++	if (!IS_ERR_OR_NULL(host->dma_chan_rx))
+ 		dma_release_channel(host->dma_chan_rx);
+ 	mmc_remove_host(mmc);
+ 	mmc_free_host(mmc);
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index fb8b9475b1d01..f5c965da95013 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -8,6 +8,7 @@
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+ #include <linux/dma-mapping.h>
++#include <linux/iopoll.h>
+ #include <linux/ioport.h>
+ #include <linux/irq.h>
+ #include <linux/of_address.h>
+@@ -2285,6 +2286,7 @@ static void msdc_cqe_enable(struct mmc_host *mmc)
+ static void msdc_cqe_disable(struct mmc_host *mmc, bool recovery)
+ {
+ 	struct msdc_host *host = mmc_priv(mmc);
++	unsigned int val = 0;
+ 
+ 	/* disable cmdq irq */
+ 	sdr_clr_bits(host->base + MSDC_INTEN, MSDC_INT_CMDQ);
+@@ -2294,6 +2296,9 @@ static void msdc_cqe_disable(struct mmc_host *mmc, bool recovery)
+ 	if (recovery) {
+ 		sdr_set_field(host->base + MSDC_DMA_CTRL,
+ 			      MSDC_DMA_CTRL_STOP, 1);
++		if (WARN_ON(readl_poll_timeout(host->base + MSDC_DMA_CFG, val,
++			!(val & MSDC_DMA_CFG_STS), 1, 3000)))
++			return;
+ 		msdc_reset_hw(host);
+ 	}
+ }
+diff --git a/drivers/mmc/host/mxs-mmc.c b/drivers/mmc/host/mxs-mmc.c
+index 4fbbff03137c3..2ec3eb651d6b5 100644
+--- a/drivers/mmc/host/mxs-mmc.c
++++ b/drivers/mmc/host/mxs-mmc.c
+@@ -565,6 +565,11 @@ static const struct of_device_id mxs_mmc_dt_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(of, mxs_mmc_dt_ids);
+ 
++static void mxs_mmc_regulator_disable(void *regulator)
++{
++	regulator_disable(regulator);
++}
++
+ static int mxs_mmc_probe(struct platform_device *pdev)
+ {
+ 	const struct of_device_id *of_id =
+@@ -606,6 +611,11 @@ static int mxs_mmc_probe(struct platform_device *pdev)
+ 				"Failed to enable vmmc regulator: %d\n", ret);
+ 			goto out_mmc_free;
+ 		}
++
++		ret = devm_add_action_or_reset(&pdev->dev, mxs_mmc_regulator_disable,
++					       reg_vmmc);
++		if (ret)
++			goto out_mmc_free;
+ 	}
+ 
+ 	ssp->clk = devm_clk_get(&pdev->dev, NULL);
+diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c
+index 7893fd3599b61..53c362bb28661 100644
+--- a/drivers/mmc/host/sdhci-omap.c
++++ b/drivers/mmc/host/sdhci-omap.c
+@@ -62,6 +62,8 @@
+ #define SDHCI_OMAP_IE		0x234
+ #define INT_CC_EN		BIT(0)
+ 
++#define SDHCI_OMAP_ISE		0x238
++
+ #define SDHCI_OMAP_AC12		0x23c
+ #define AC12_V1V8_SIGEN		BIT(19)
+ #define AC12_SCLK_SEL		BIT(23)
+@@ -113,6 +115,8 @@ struct sdhci_omap_host {
+ 	u32			hctl;
+ 	u32			sysctl;
+ 	u32			capa;
++	u32			ie;
++	u32			ise;
+ };
+ 
+ static void sdhci_omap_start_clock(struct sdhci_omap_host *omap_host);
+@@ -682,7 +686,8 @@ static void sdhci_omap_set_power(struct sdhci_host *host, unsigned char mode,
+ {
+ 	struct mmc_host *mmc = host->mmc;
+ 
+-	mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd);
++	if (!IS_ERR(mmc->supply.vmmc))
++		mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd);
+ }
+ 
+ static int sdhci_omap_enable_dma(struct sdhci_host *host)
+@@ -1245,14 +1250,23 @@ static void sdhci_omap_context_save(struct sdhci_omap_host *omap_host)
+ {
+ 	omap_host->con = sdhci_omap_readl(omap_host, SDHCI_OMAP_CON);
+ 	omap_host->hctl = sdhci_omap_readl(omap_host, SDHCI_OMAP_HCTL);
++	omap_host->sysctl = sdhci_omap_readl(omap_host, SDHCI_OMAP_SYSCTL);
+ 	omap_host->capa = sdhci_omap_readl(omap_host, SDHCI_OMAP_CAPA);
++	omap_host->ie = sdhci_omap_readl(omap_host, SDHCI_OMAP_IE);
++	omap_host->ise = sdhci_omap_readl(omap_host, SDHCI_OMAP_ISE);
+ }
+ 
++/* Order matters here, HCTL must be restored in two phases */
+ static void sdhci_omap_context_restore(struct sdhci_omap_host *omap_host)
+ {
+-	sdhci_omap_writel(omap_host, SDHCI_OMAP_CON, omap_host->con);
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_HCTL, omap_host->hctl);
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_CAPA, omap_host->capa);
++	sdhci_omap_writel(omap_host, SDHCI_OMAP_HCTL, omap_host->hctl);
++
++	sdhci_omap_writel(omap_host, SDHCI_OMAP_SYSCTL, omap_host->sysctl);
++	sdhci_omap_writel(omap_host, SDHCI_OMAP_CON, omap_host->con);
++	sdhci_omap_writel(omap_host, SDHCI_OMAP_IE, omap_host->ie);
++	sdhci_omap_writel(omap_host, SDHCI_OMAP_ISE, omap_host->ise);
+ }
+ 
+ static int __maybe_unused sdhci_omap_suspend(struct device *dev)
+diff --git a/drivers/most/most_usb.c b/drivers/most/most_usb.c
+index 2640c5b326a49..acabb7715b423 100644
+--- a/drivers/most/most_usb.c
++++ b/drivers/most/most_usb.c
+@@ -149,7 +149,8 @@ static inline int drci_rd_reg(struct usb_device *dev, u16 reg, u16 *buf)
+ 	retval = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+ 				 DRCI_READ_REQ, req_type,
+ 				 0x0000,
+-				 reg, dma_buf, sizeof(*dma_buf), 5 * HZ);
++				 reg, dma_buf, sizeof(*dma_buf),
++				 USB_CTRL_GET_TIMEOUT);
+ 	*buf = le16_to_cpu(*dma_buf);
+ 	kfree(dma_buf);
+ 
+@@ -176,7 +177,7 @@ static inline int drci_wr_reg(struct usb_device *dev, u16 reg, u16 data)
+ 			       reg,
+ 			       NULL,
+ 			       0,
+-			       5 * HZ);
++			       USB_CTRL_SET_TIMEOUT);
+ }
+ 
+ static inline int start_sync_ep(struct usb_device *usb_dev, u16 ep)
+diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
+index 1c8c407286783..a5197a4819025 100644
+--- a/drivers/mtd/mtdcore.c
++++ b/drivers/mtd/mtdcore.c
+@@ -721,8 +721,6 @@ int del_mtd_device(struct mtd_info *mtd)
+ 
+ 	mutex_lock(&mtd_table_mutex);
+ 
+-	debugfs_remove_recursive(mtd->dbg.dfs_dir);
+-
+ 	if (idr_find(&mtd_idr, mtd->index) != mtd) {
+ 		ret = -ENODEV;
+ 		goto out_error;
+@@ -738,6 +736,8 @@ int del_mtd_device(struct mtd_info *mtd)
+ 		       mtd->index, mtd->name, mtd->usecount);
+ 		ret = -EBUSY;
+ 	} else {
++		debugfs_remove_recursive(mtd->dbg.dfs_dir);
++
+ 		/* Try to remove the NVMEM provider */
+ 		if (mtd->nvmem)
+ 			nvmem_unregister(mtd->nvmem);
+diff --git a/drivers/mtd/nand/raw/ams-delta.c b/drivers/mtd/nand/raw/ams-delta.c
+index ff1697f899ba6..13de39aa3288f 100644
+--- a/drivers/mtd/nand/raw/ams-delta.c
++++ b/drivers/mtd/nand/raw/ams-delta.c
+@@ -217,9 +217,8 @@ static int gpio_nand_setup_interface(struct nand_chip *this, int csline,
+ 
+ static int gpio_nand_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -370,6 +369,13 @@ static int gpio_nand_probe(struct platform_device *pdev)
+ 	/* Release write protection */
+ 	gpiod_set_value(priv->gpiod_nwp, 0);
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	/* Scan to find existence of the device */
+ 	err = nand_scan(this, 1);
+ 	if (err)
+diff --git a/drivers/mtd/nand/raw/au1550nd.c b/drivers/mtd/nand/raw/au1550nd.c
+index 7b6b354f2d398..48901a1b8bf52 100644
+--- a/drivers/mtd/nand/raw/au1550nd.c
++++ b/drivers/mtd/nand/raw/au1550nd.c
+@@ -238,9 +238,8 @@ static int au1550nd_exec_op(struct nand_chip *this,
+ 
+ static int au1550nd_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -309,6 +308,13 @@ static int au1550nd_probe(struct platform_device *pdev)
+ 	if (pd->devwidth)
+ 		this->options |= NAND_BUSWIDTH_16;
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	ret = nand_scan(this, 1);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "NAND scan failed with %d\n", ret);
+diff --git a/drivers/mtd/nand/raw/gpio.c b/drivers/mtd/nand/raw/gpio.c
+index fb7a086de35e5..fdf073d2e1b6c 100644
+--- a/drivers/mtd/nand/raw/gpio.c
++++ b/drivers/mtd/nand/raw/gpio.c
+@@ -163,9 +163,8 @@ static int gpio_nand_exec_op(struct nand_chip *chip,
+ 
+ static int gpio_nand_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -365,6 +364,13 @@ static int gpio_nand_probe(struct platform_device *pdev)
+ 	if (gpiomtd->nwp && !IS_ERR(gpiomtd->nwp))
+ 		gpiod_direction_output(gpiomtd->nwp, 1);
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	ret = nand_scan(chip, 1);
+ 	if (ret)
+ 		goto err_wp;
+diff --git a/drivers/mtd/nand/raw/mpc5121_nfc.c b/drivers/mtd/nand/raw/mpc5121_nfc.c
+index bcd4a556c959c..cb293c50acb87 100644
+--- a/drivers/mtd/nand/raw/mpc5121_nfc.c
++++ b/drivers/mtd/nand/raw/mpc5121_nfc.c
+@@ -605,9 +605,8 @@ static void mpc5121_nfc_free(struct device *dev, struct mtd_info *mtd)
+ 
+ static int mpc5121_nfc_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -772,6 +771,13 @@ static int mpc5121_nfc_probe(struct platform_device *op)
+ 		goto error;
+ 	}
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	/* Detect NAND chips */
+ 	retval = nand_scan(chip, be32_to_cpup(chips_no));
+ 	if (retval) {
+diff --git a/drivers/mtd/nand/raw/orion_nand.c b/drivers/mtd/nand/raw/orion_nand.c
+index 66211c9311d2f..2c87c7d892058 100644
+--- a/drivers/mtd/nand/raw/orion_nand.c
++++ b/drivers/mtd/nand/raw/orion_nand.c
+@@ -85,9 +85,8 @@ static void orion_nand_read_buf(struct nand_chip *chip, uint8_t *buf, int len)
+ 
+ static int orion_nand_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -190,6 +189,13 @@ static int __init orion_nand_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	nc->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	ret = nand_scan(nc, 1);
+ 	if (ret)
+ 		goto no_dev;
+diff --git a/drivers/mtd/nand/raw/pasemi_nand.c b/drivers/mtd/nand/raw/pasemi_nand.c
+index 68c08772d7c26..b0ba1fdbf5f2a 100644
+--- a/drivers/mtd/nand/raw/pasemi_nand.c
++++ b/drivers/mtd/nand/raw/pasemi_nand.c
+@@ -76,9 +76,8 @@ static int pasemi_device_ready(struct nand_chip *chip)
+ 
+ static int pasemi_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -155,6 +154,13 @@ static int pasemi_nand_probe(struct platform_device *ofdev)
+ 	/* Enable the following for a flash based bad block table */
+ 	chip->bbt_options = NAND_BBT_USE_FLASH;
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	/* Scan to find existence of the device */
+ 	err = nand_scan(chip, 1);
+ 	if (err)
+diff --git a/drivers/mtd/nand/raw/plat_nand.c b/drivers/mtd/nand/raw/plat_nand.c
+index 7711e1020c21c..0ee08c42cc35b 100644
+--- a/drivers/mtd/nand/raw/plat_nand.c
++++ b/drivers/mtd/nand/raw/plat_nand.c
+@@ -21,9 +21,8 @@ struct plat_nand_data {
+ 
+ static int plat_nand_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -94,6 +93,13 @@ static int plat_nand_probe(struct platform_device *pdev)
+ 			goto out;
+ 	}
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	data->chip.ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	/* Scan to find existence of the device */
+ 	err = nand_scan(&data->chip, pdata->chip.nr_chips);
+ 	if (err)
+diff --git a/drivers/mtd/nand/raw/socrates_nand.c b/drivers/mtd/nand/raw/socrates_nand.c
+index 70f8305c9b6e1..fb39cc7ebce03 100644
+--- a/drivers/mtd/nand/raw/socrates_nand.c
++++ b/drivers/mtd/nand/raw/socrates_nand.c
+@@ -119,9 +119,8 @@ static int socrates_nand_device_ready(struct nand_chip *nand_chip)
+ 
+ static int socrates_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -175,6 +174,13 @@ static int socrates_nand_probe(struct platform_device *ofdev)
+ 	/* TODO: I have no idea what real delay is. */
+ 	nand_chip->legacy.chip_delay = 20;	/* 20us command delay time */
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	nand_chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	dev_set_drvdata(&ofdev->dev, host);
+ 
+ 	res = nand_scan(nand_chip, 1);
+diff --git a/drivers/mtd/nand/raw/xway_nand.c b/drivers/mtd/nand/raw/xway_nand.c
+index 26751976e5026..236fd8c5a958f 100644
+--- a/drivers/mtd/nand/raw/xway_nand.c
++++ b/drivers/mtd/nand/raw/xway_nand.c
+@@ -148,9 +148,8 @@ static void xway_write_buf(struct nand_chip *chip, const u_char *buf, int len)
+ 
+ static int xway_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -219,6 +218,13 @@ static int xway_nand_probe(struct platform_device *pdev)
+ 		    | NAND_CON_SE_P | NAND_CON_WP_P | NAND_CON_PRE_P
+ 		    | cs_flag, EBU_NAND_CON);
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	data->chip.ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	/* Scan to find existence of the device */
+ 	err = nand_scan(&data->chip, 1);
+ 	if (err)
+diff --git a/drivers/mtd/spi-nor/controllers/hisi-sfc.c b/drivers/mtd/spi-nor/controllers/hisi-sfc.c
+index 440fc5ae7d34c..fd2c19a047485 100644
+--- a/drivers/mtd/spi-nor/controllers/hisi-sfc.c
++++ b/drivers/mtd/spi-nor/controllers/hisi-sfc.c
+@@ -477,7 +477,6 @@ static int hisi_spi_nor_remove(struct platform_device *pdev)
+ 
+ 	hisi_spi_nor_unregister_all(host);
+ 	mutex_destroy(&host->lock);
+-	clk_disable_unprepare(host->clk);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
+index 600b9d09ec087..cb865b7ec3750 100644
+--- a/drivers/net/Kconfig
++++ b/drivers/net/Kconfig
+@@ -148,7 +148,7 @@ config NET_FC
+ 
+ config IFB
+ 	tristate "Intermediate Functional Block support"
+-	depends on NET_CLS_ACT
++	depends on NET_ACT_MIRRED || NFT_FWD_NETDEV
+ 	select NET_REDIRECT
+ 	help
+ 	  This is an intermediate driver that allows sharing of
+diff --git a/drivers/net/bonding/bond_sysfs_slave.c b/drivers/net/bonding/bond_sysfs_slave.c
+index fd07561da0348..6a6cdd0bb2585 100644
+--- a/drivers/net/bonding/bond_sysfs_slave.c
++++ b/drivers/net/bonding/bond_sysfs_slave.c
+@@ -108,15 +108,15 @@ static ssize_t ad_partner_oper_port_state_show(struct slave *slave, char *buf)
+ }
+ static SLAVE_ATTR_RO(ad_partner_oper_port_state);
+ 
+-static const struct slave_attribute *slave_attrs[] = {
+-	&slave_attr_state,
+-	&slave_attr_mii_status,
+-	&slave_attr_link_failure_count,
+-	&slave_attr_perm_hwaddr,
+-	&slave_attr_queue_id,
+-	&slave_attr_ad_aggregator_id,
+-	&slave_attr_ad_actor_oper_port_state,
+-	&slave_attr_ad_partner_oper_port_state,
++static const struct attribute *slave_attrs[] = {
++	&slave_attr_state.attr,
++	&slave_attr_mii_status.attr,
++	&slave_attr_link_failure_count.attr,
++	&slave_attr_perm_hwaddr.attr,
++	&slave_attr_queue_id.attr,
++	&slave_attr_ad_aggregator_id.attr,
++	&slave_attr_ad_actor_oper_port_state.attr,
++	&slave_attr_ad_partner_oper_port_state.attr,
+ 	NULL
+ };
+ 
+@@ -137,24 +137,10 @@ const struct sysfs_ops slave_sysfs_ops = {
+ 
+ int bond_sysfs_slave_add(struct slave *slave)
+ {
+-	const struct slave_attribute **a;
+-	int err;
+-
+-	for (a = slave_attrs; *a; ++a) {
+-		err = sysfs_create_file(&slave->kobj, &((*a)->attr));
+-		if (err) {
+-			kobject_put(&slave->kobj);
+-			return err;
+-		}
+-	}
+-
+-	return 0;
++	return sysfs_create_files(&slave->kobj, slave_attrs);
+ }
+ 
+ void bond_sysfs_slave_del(struct slave *slave)
+ {
+-	const struct slave_attribute **a;
+-
+-	for (a = slave_attrs; *a; ++a)
+-		sysfs_remove_file(&slave->kobj, &((*a)->attr));
++	sysfs_remove_files(&slave->kobj, slave_attrs);
+ }
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index 68ff931993c25..4e13f6dfb91a2 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -1041,7 +1041,7 @@ static int mcp251xfd_chip_start(struct mcp251xfd_priv *priv)
+ 
+ 	err = mcp251xfd_chip_rx_int_enable(priv);
+ 	if (err)
+-		return err;
++		goto out_chip_stop;
+ 
+ 	err = mcp251xfd_chip_ecc_init(priv);
+ 	if (err)
+diff --git a/drivers/net/dsa/rtl8366rb.c b/drivers/net/dsa/rtl8366rb.c
+index cfe56960f44b9..12d7e5cd31974 100644
+--- a/drivers/net/dsa/rtl8366rb.c
++++ b/drivers/net/dsa/rtl8366rb.c
+@@ -1343,7 +1343,7 @@ static int rtl8366rb_set_mc_index(struct realtek_smi *smi, int port, int index)
+ 
+ static bool rtl8366rb_is_vlan_valid(struct realtek_smi *smi, unsigned int vlan)
+ {
+-	unsigned int max = RTL8366RB_NUM_VLANS;
++	unsigned int max = RTL8366RB_NUM_VLANS - 1;
+ 
+ 	if (smi->vlan4k_enabled)
+ 		max = RTL8366RB_NUM_VIDS - 1;
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+index b2cd3bdba9f89..533b8519ec352 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+@@ -1331,6 +1331,10 @@
+ #define MDIO_VEND2_PMA_CDR_CONTROL	0x8056
+ #endif
+ 
++#ifndef MDIO_VEND2_PMA_MISC_CTRL0
++#define MDIO_VEND2_PMA_MISC_CTRL0	0x8090
++#endif
++
+ #ifndef MDIO_CTRL1_SPEED1G
+ #define MDIO_CTRL1_SPEED1G		(MDIO_CTRL1_SPEED10G & ~BMCR_SPEED100)
+ #endif
+@@ -1389,6 +1393,10 @@
+ #define XGBE_PMA_RX_RST_0_RESET_ON	0x10
+ #define XGBE_PMA_RX_RST_0_RESET_OFF	0x00
+ 
++#define XGBE_PMA_PLL_CTRL_MASK		BIT(15)
++#define XGBE_PMA_PLL_CTRL_ENABLE	BIT(15)
++#define XGBE_PMA_PLL_CTRL_DISABLE	0x0000
++
+ /* Bit setting and getting macros
+  *  The get macro will extract the current bit field value from within
+  *  the variable
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+index 18e48b3bc402b..213769054391c 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+@@ -1977,12 +1977,26 @@ static void xgbe_phy_rx_reset(struct xgbe_prv_data *pdata)
+ 	}
+ }
+ 
++static void xgbe_phy_pll_ctrl(struct xgbe_prv_data *pdata, bool enable)
++{
++	XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_VEND2_PMA_MISC_CTRL0,
++			 XGBE_PMA_PLL_CTRL_MASK,
++			 enable ? XGBE_PMA_PLL_CTRL_ENABLE
++				: XGBE_PMA_PLL_CTRL_DISABLE);
++
++	/* Wait for command to complete */
++	usleep_range(100, 200);
++}
++
+ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
+ 					unsigned int cmd, unsigned int sub_cmd)
+ {
+ 	unsigned int s0 = 0;
+ 	unsigned int wait;
+ 
++	/* Disable PLL re-initialization during FW command processing */
++	xgbe_phy_pll_ctrl(pdata, false);
++
+ 	/* Log if a previous command did not complete */
+ 	if (XP_IOREAD_BITS(pdata, XP_DRIVER_INT_RO, STATUS)) {
+ 		netif_dbg(pdata, link, pdata->netdev,
+@@ -2003,7 +2017,7 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
+ 	wait = XGBE_RATECHANGE_COUNT;
+ 	while (wait--) {
+ 		if (!XP_IOREAD_BITS(pdata, XP_DRIVER_INT_RO, STATUS))
+-			return;
++			goto reenable_pll;
+ 
+ 		usleep_range(1000, 2000);
+ 	}
+@@ -2013,6 +2027,10 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
+ 
+ 	/* Reset on error */
+ 	xgbe_phy_rx_reset(pdata);
++
++reenable_pll:
++	/* Enable PLL re-initialization */
++	xgbe_phy_pll_ctrl(pdata, true);
+ }
+ 
+ static void xgbe_phy_rrc(struct xgbe_prv_data *pdata)
+diff --git a/drivers/net/ethernet/cavium/thunder/nic_main.c b/drivers/net/ethernet/cavium/thunder/nic_main.c
+index 9361f964bb9b2..816453a4f8d6c 100644
+--- a/drivers/net/ethernet/cavium/thunder/nic_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nic_main.c
+@@ -1193,7 +1193,7 @@ static int nic_register_interrupts(struct nicpf *nic)
+ 		dev_err(&nic->pdev->dev,
+ 			"Request for #%d msix vectors failed, returned %d\n",
+ 			   nic->num_vec, ret);
+-		return 1;
++		return ret;
+ 	}
+ 
+ 	/* Register mailbox interrupt handler */
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+index f3b7b443f9648..c00f1a7ffc15f 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+@@ -1226,7 +1226,7 @@ static int nicvf_register_misc_interrupt(struct nicvf *nic)
+ 	if (ret < 0) {
+ 		netdev_err(nic->netdev,
+ 			   "Req for #%d msix vectors failed\n", nic->num_vec);
+-		return 1;
++		return ret;
+ 	}
+ 
+ 	sprintf(nic->irq_name[irq], "%s Mbox", "NICVF");
+@@ -1245,7 +1245,7 @@ static int nicvf_register_misc_interrupt(struct nicvf *nic)
+ 	if (!nicvf_check_pf_ready(nic)) {
+ 		nicvf_disable_intr(nic, NICVF_INTR_MBOX, 0);
+ 		nicvf_unregister_interrupts(nic);
+-		return 1;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
+index 83ed10ac86606..7080cb6c83e4a 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
+@@ -2011,12 +2011,15 @@ static int cxgb4_get_module_info(struct net_device *dev,
+ 		if (ret)
+ 			return ret;
+ 
+-		if (!sff8472_comp || (sff_diag_type & 4)) {
++		if (!sff8472_comp || (sff_diag_type & SFP_DIAG_ADDRMODE)) {
+ 			modinfo->type = ETH_MODULE_SFF_8079;
+ 			modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN;
+ 		} else {
+ 			modinfo->type = ETH_MODULE_SFF_8472;
+-			modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
++			if (sff_diag_type & SFP_DIAG_IMPLEMENTED)
++				modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
++			else
++				modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN / 2;
+ 		}
+ 		break;
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
+index 002fc62ea7262..63bc956d20376 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
+@@ -293,6 +293,8 @@ enum {
+ #define I2C_PAGE_SIZE		0x100
+ #define SFP_DIAG_TYPE_ADDR	0x5c
+ #define SFP_DIAG_TYPE_LEN	0x1
++#define SFP_DIAG_ADDRMODE	BIT(2)
++#define SFP_DIAG_IMPLEMENTED	BIT(6)
+ #define SFF_8472_COMP_ADDR	0x5e
+ #define SFF_8472_COMP_LEN	0x1
+ #define SFF_REV_ADDR		0x1
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+index a262c949ed76b..d6b6ebb3f1ec7 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+@@ -870,7 +870,7 @@ static void do_abort_syn_rcv(struct sock *child, struct sock *parent)
+ 		 * created only after 3 way handshake is done.
+ 		 */
+ 		sock_orphan(child);
+-		percpu_counter_inc((child)->sk_prot->orphan_count);
++		INC_ORPHAN_COUNT(child);
+ 		chtls_release_resources(child);
+ 		chtls_conn_done(child);
+ 	} else {
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h
+index b1161bdeda4dc..f61ca657601ca 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h
+@@ -95,7 +95,7 @@ struct deferred_skb_cb {
+ #define WSCALE_OK(tp) ((tp)->rx_opt.wscale_ok)
+ #define TSTAMP_OK(tp) ((tp)->rx_opt.tstamp_ok)
+ #define SACK_OK(tp) ((tp)->rx_opt.sack_ok)
+-#define INC_ORPHAN_COUNT(sk) percpu_counter_inc((sk)->sk_prot->orphan_count)
++#define INC_ORPHAN_COUNT(sk) this_cpu_inc(*(sk)->sk_prot->orphan_count)
+ 
+ /* TLS SKB */
+ #define skb_ulp_tls_inline(skb)      (ULP_SKB_CB(skb)->ulp.tls.ofld)
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+index dbceb99c4441a..9e6988fd3787a 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+@@ -486,14 +486,16 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+ 
+ 	data_size = sizeof(struct streamid_data);
+ 	si_data = kzalloc(data_size, __GFP_DMA | GFP_KERNEL);
++	if (!si_data)
++		return -ENOMEM;
+ 	cbd.length = cpu_to_le16(data_size);
+ 
+ 	dma = dma_map_single(&priv->si->pdev->dev, si_data,
+ 			     data_size, DMA_FROM_DEVICE);
+ 	if (dma_mapping_error(&priv->si->pdev->dev, dma)) {
+ 		netdev_err(priv->si->ndev, "DMA mapping failed!\n");
+-		kfree(si_data);
+-		return -ENOMEM;
++		err = -ENOMEM;
++		goto out;
+ 	}
+ 
+ 	cbd.addr[0] = lower_32_bits(dma);
+@@ -513,12 +515,10 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+ 
+ 	err = enetc_send_cmd(priv->si, &cbd);
+ 	if (err)
+-		return -EINVAL;
++		goto out;
+ 
+-	if (!enable) {
+-		kfree(si_data);
+-		return 0;
+-	}
++	if (!enable)
++		goto out;
+ 
+ 	/* Enable the entry overwrite again incase space flushed by hardware */
+ 	memset(&cbd, 0, sizeof(cbd));
+@@ -563,6 +563,10 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+ 	}
+ 
+ 	err = enetc_send_cmd(priv->si, &cbd);
++out:
++	if (!dma_mapping_error(&priv->si->pdev->dev, dma))
++		dma_unmap_single(&priv->si->pdev->dev, dma, data_size, DMA_FROM_DEVICE);
++
+ 	kfree(si_data);
+ 
+ 	return err;
+diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
+index cfb174624d4ee..5c9a4d4362c7b 100644
+--- a/drivers/net/ethernet/google/gve/gve.h
++++ b/drivers/net/ethernet/google/gve/gve.h
+@@ -28,7 +28,7 @@
+ #define GVE_MIN_MSIX 3
+ 
+ /* Numbers of gve tx/rx stats in stats report. */
+-#define GVE_TX_STATS_REPORT_NUM	5
++#define GVE_TX_STATS_REPORT_NUM	6
+ #define GVE_RX_STATS_REPORT_NUM	2
+ 
+ /* Interval to schedule a stats report update, 20000ms. */
+@@ -147,7 +147,9 @@ struct gve_tx_ring {
+ 	u32 q_num ____cacheline_aligned; /* queue idx */
+ 	u32 stop_queue; /* count of queue stops */
+ 	u32 wake_queue; /* count of queue wakes */
++	u32 queue_timeout; /* count of queue timeouts */
+ 	u32 ntfy_id; /* notification block index */
++	u32 last_kick_msec; /* Last time the queue was kicked */
+ 	dma_addr_t bus; /* dma address of the descr ring */
+ 	dma_addr_t q_resources_bus; /* dma address of the queue resources */
+ 	struct u64_stats_sync statss; /* sync stats for 32bit archs */
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
+index 015796a20118b..8dbc2c03fbbdd 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.h
++++ b/drivers/net/ethernet/google/gve/gve_adminq.h
+@@ -212,6 +212,7 @@ enum gve_stat_names {
+ 	TX_LAST_COMPLETION_PROCESSED	= 5,
+ 	RX_NEXT_EXPECTED_SEQUENCE	= 6,
+ 	RX_BUFFERS_POSTED		= 7,
++	TX_TIMEOUT_CNT			= 8,
+ 	// stats from NIC
+ 	RX_QUEUE_DROP_CNT		= 65,
+ 	RX_NO_BUFFERS_POSTED		= 66,
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index fd52218f48846..6cb75bb1ed052 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -23,6 +23,9 @@
+ #define GVE_VERSION		"1.0.0"
+ #define GVE_VERSION_PREFIX	"GVE-"
+ 
++// Minimum amount of time between queue kicks in msec (10 seconds)
++#define MIN_TX_TIMEOUT_GAP (1000 * 10)
++
+ const char gve_version_str[] = GVE_VERSION;
+ static const char gve_version_prefix[] = GVE_VERSION_PREFIX;
+ 
+@@ -943,9 +946,47 @@ static void gve_turnup(struct gve_priv *priv)
+ 
+ static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
+ {
+-	struct gve_priv *priv = netdev_priv(dev);
++	struct gve_notify_block *block;
++	struct gve_tx_ring *tx = NULL;
++	struct gve_priv *priv;
++	u32 last_nic_done;
++	u32 current_time;
++	u32 ntfy_idx;
++
++	netdev_info(dev, "Timeout on tx queue, %d", txqueue);
++	priv = netdev_priv(dev);
++	if (txqueue > priv->tx_cfg.num_queues)
++		goto reset;
++
++	ntfy_idx = gve_tx_idx_to_ntfy(priv, txqueue);
++	if (ntfy_idx >= priv->num_ntfy_blks)
++		goto reset;
++
++	block = &priv->ntfy_blocks[ntfy_idx];
++	tx = block->tx;
+ 
++	current_time = jiffies_to_msecs(jiffies);
++	if (tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time)
++		goto reset;
++
++	/* Check to see if there are missed completions, which will allow us to
++	 * kick the queue.
++	 */
++	last_nic_done = gve_tx_load_event_counter(priv, tx);
++	if (last_nic_done - tx->done) {
++		netdev_info(dev, "Kicking queue %d", txqueue);
++		iowrite32be(GVE_IRQ_MASK, gve_irq_doorbell(priv, block));
++		napi_schedule(&block->napi);
++		tx->last_kick_msec = current_time;
++		goto out;
++	} // Else reset.
++
++reset:
+ 	gve_schedule_reset(priv);
++
++out:
++	if (tx)
++		tx->queue_timeout++;
+ 	priv->tx_timeo_cnt++;
+ }
+ 
+@@ -1028,6 +1069,11 @@ void gve_handle_report_stats(struct gve_priv *priv)
+ 				.value = cpu_to_be64(priv->tx[idx].done),
+ 				.queue_id = cpu_to_be32(idx),
+ 			};
++			stats[stats_idx++] = (struct stats) {
++				.stat_name = cpu_to_be32(TX_TIMEOUT_CNT),
++				.value = cpu_to_be64(priv->tx[idx].queue_timeout),
++				.queue_id = cpu_to_be32(idx),
++			};
+ 		}
+ 	}
+ 	/* rx stats */
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index 8e6085753b9f2..5bab885744fc8 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -126,7 +126,7 @@ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ 	if (ret)
+ 		return ret;
+ 
+-	for (i = 0; i < hdev->tc_max; i++) {
++	for (i = 0; i < HNAE3_MAX_TC; i++) {
+ 		switch (ets->tc_tsa[i]) {
+ 		case IEEE_8021QAZ_TSA_STRICT:
+ 			if (hdev->tm_info.tc_info[i].tc_sch_mode !=
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 71aa6d16fc19e..9168e39b63641 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -1039,7 +1039,6 @@ static int hclge_tm_pri_tc_base_dwrr_cfg(struct hclge_dev *hdev)
+ 
+ static int hclge_tm_ets_tc_dwrr_cfg(struct hclge_dev *hdev)
+ {
+-#define DEFAULT_TC_WEIGHT	1
+ #define DEFAULT_TC_OFFSET	14
+ 
+ 	struct hclge_ets_tc_weight_cmd *ets_weight;
+@@ -1052,13 +1051,7 @@ static int hclge_tm_ets_tc_dwrr_cfg(struct hclge_dev *hdev)
+ 	for (i = 0; i < HNAE3_MAX_TC; i++) {
+ 		struct hclge_pg_info *pg_info;
+ 
+-		ets_weight->tc_weight[i] = DEFAULT_TC_WEIGHT;
+-
+-		if (!(hdev->hw_tc_map & BIT(i)))
+-			continue;
+-
+-		pg_info =
+-			&hdev->tm_info.pg_info[hdev->tm_info.tc_info[i].pgid];
++		pg_info = &hdev->tm_info.pg_info[hdev->tm_info.tc_info[i].pgid];
+ 		ets_weight->tc_weight[i] = pg_info->tc_dwrr[i];
+ 	}
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index a47f23f27a11c..e27af38f6b161 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2887,7 +2887,10 @@ static void hclgevf_uninit_client_instance(struct hnae3_client *client,
+ 
+ 	/* un-init roce, if it exists */
+ 	if (hdev->roce_client) {
++		while (test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state))
++			msleep(HCLGEVF_WAIT_RESET_DONE);
+ 		clear_bit(HCLGEVF_STATE_ROCE_REGISTERED, &hdev->state);
++
+ 		hdev->roce_client->ops->uninit_instance(&hdev->roce, 0);
+ 		hdev->roce_client = NULL;
+ 		hdev->roce.client = NULL;
+@@ -2896,6 +2899,8 @@ static void hclgevf_uninit_client_instance(struct hnae3_client *client,
+ 	/* un-init nic/unic, if this was not called by roce client */
+ 	if (client->ops->uninit_instance && hdev->nic_client &&
+ 	    client->type != HNAE3_CLIENT_ROCE) {
++		while (test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state))
++			msleep(HCLGEVF_WAIT_RESET_DONE);
+ 		clear_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state);
+ 
+ 		client->ops->uninit_instance(&hdev->nic, 0);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+index 526a62f970466..c9b0fa5e8589d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+@@ -106,6 +106,8 @@
+ #define HCLGEVF_VF_RST_ING		0x07008
+ #define HCLGEVF_VF_RST_ING_BIT		BIT(16)
+ 
++#define HCLGEVF_WAIT_RESET_DONE		100
++
+ #define HCLGEVF_RSS_IND_TBL_SIZE		512
+ #define HCLGEVF_RSS_SET_BITMAP_MSK	0xffff
+ #define HCLGEVF_RSS_KEY_SIZE		40
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index bb8d0a0f48ee0..4f99d97638248 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1548,8 +1548,6 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 	netdev_tx_t ret = NETDEV_TX_OK;
+ 
+ 	if (test_bit(0, &adapter->resetting)) {
+-		if (!netif_subqueue_stopped(netdev, skb))
+-			netif_stop_subqueue(netdev, queue_num);
+ 		dev_kfree_skb_any(skb);
+ 
+ 		tx_send_failed++;
+@@ -5187,6 +5185,9 @@ static int init_crq_queue(struct ibmvnic_adapter *adapter)
+ 	crq->cur = 0;
+ 	spin_lock_init(&crq->lock);
+ 
++	/* process any CRQs that were queued before we enabled interrupts */
++	tasklet_schedule(&adapter->tasklet);
++
+ 	return retrc;
+ 
+ req_irq_failed:
+diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
+index fe4320e2d1f2f..1929847b8c404 100644
+--- a/drivers/net/ethernet/intel/ice/ice_base.c
++++ b/drivers/net/ethernet/intel/ice/ice_base.c
+@@ -839,7 +839,7 @@ ice_vsi_stop_tx_ring(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
+ 	} else if (status == ICE_ERR_DOES_NOT_EXIST) {
+ 		dev_dbg(ice_pf_to_dev(vsi->back), "LAN Tx queues do not exist, nothing to disable\n");
+ 	} else if (status) {
+-		dev_err(ice_pf_to_dev(vsi->back), "Failed to disable LAN Tx queues, error: %s\n",
++		dev_dbg(ice_pf_to_dev(vsi->back), "Failed to disable LAN Tx queues, error: %s\n",
+ 			ice_stat_str(status));
+ 		return -ENODEV;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index c9f82fd3cf48d..69ce5d60a8570 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -362,8 +362,7 @@ void ice_free_vfs(struct ice_pf *pf)
+ 
+ 	/* Avoid wait time by stopping all VFs at the same time */
+ 	ice_for_each_vf(pf, i)
+-		if (test_bit(ICE_VF_STATE_QS_ENA, pf->vf[i].vf_states))
+-			ice_dis_vf_qs(&pf->vf[i]);
++		ice_dis_vf_qs(&pf->vf[i]);
+ 
+ 	tmp = pf->num_alloc_vfs;
+ 	pf->num_qps_per_vf = 0;
+@@ -1291,8 +1290,7 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)
+ 
+ 	vsi = pf->vsi[vf->lan_vsi_idx];
+ 
+-	if (test_bit(ICE_VF_STATE_QS_ENA, vf->vf_states))
+-		ice_dis_vf_qs(vf);
++	ice_dis_vf_qs(vf);
+ 
+ 	/* Call Disable LAN Tx queue AQ whether or not queues are
+ 	 * enabled. This is needed for successful completion of VFR.
+@@ -3068,6 +3066,7 @@ ice_vc_add_mac_addr(struct ice_vf *vf, struct ice_vsi *vsi, u8 *mac_addr)
+ {
+ 	struct device *dev = ice_pf_to_dev(vf->pf);
+ 	enum ice_status status;
++	int ret = 0;
+ 
+ 	/* default unicast MAC already added */
+ 	if (ether_addr_equal(mac_addr, vf->dflt_lan_addr.addr))
+@@ -3080,13 +3079,18 @@ ice_vc_add_mac_addr(struct ice_vf *vf, struct ice_vsi *vsi, u8 *mac_addr)
+ 
+ 	status = ice_fltr_add_mac(vsi, mac_addr, ICE_FWD_TO_VSI);
+ 	if (status == ICE_ERR_ALREADY_EXISTS) {
+-		dev_err(dev, "MAC %pM already exists for VF %d\n", mac_addr,
++		dev_dbg(dev, "MAC %pM already exists for VF %d\n", mac_addr,
+ 			vf->vf_id);
+-		return -EEXIST;
++		/* don't return since we might need to update
++		 * the primary MAC in ice_vfhw_mac_add() below
++		 */
++		ret = -EEXIST;
+ 	} else if (status) {
+ 		dev_err(dev, "Failed to add MAC %pM for VF %d\n, error %s\n",
+ 			mac_addr, vf->vf_id, ice_stat_str(status));
+ 		return -EIO;
++	} else {
++		vf->num_mac++;
+ 	}
+ 
+ 	/* Set the default LAN address to the latest unicast MAC address added
+@@ -3096,9 +3100,7 @@ ice_vc_add_mac_addr(struct ice_vf *vf, struct ice_vsi *vsi, u8 *mac_addr)
+ 	if (is_unicast_ether_addr(mac_addr))
+ 		ether_addr_copy(vf->dflt_lan_addr.addr, mac_addr);
+ 
+-	vf->num_mac++;
+-
+-	return 0;
++	return ret;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.c b/drivers/net/ethernet/netronome/nfp/bpf/main.c
+index 11c83a99b0140..f469950c72657 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/main.c
++++ b/drivers/net/ethernet/netronome/nfp/bpf/main.c
+@@ -182,15 +182,21 @@ static int
+ nfp_bpf_check_mtu(struct nfp_app *app, struct net_device *netdev, int new_mtu)
+ {
+ 	struct nfp_net *nn = netdev_priv(netdev);
+-	unsigned int max_mtu;
++	struct nfp_bpf_vnic *bv;
++	struct bpf_prog *prog;
+ 
+ 	if (~nn->dp.ctrl & NFP_NET_CFG_CTRL_BPF)
+ 		return 0;
+ 
+-	max_mtu = nn_readb(nn, NFP_NET_CFG_BPF_INL_MTU) * 64 - 32;
+-	if (new_mtu > max_mtu) {
+-		nn_info(nn, "BPF offload active, MTU over %u not supported\n",
+-			max_mtu);
++	if (nn->xdp_hw.prog) {
++		prog = nn->xdp_hw.prog;
++	} else {
++		bv = nn->app_priv;
++		prog = bv->tc_prog;
++	}
++
++	if (nfp_bpf_offload_check_mtu(nn, prog, new_mtu)) {
++		nn_info(nn, "BPF offload active, potential packet access beyond hardware packet boundary");
+ 		return -EBUSY;
+ 	}
+ 	return 0;
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h
+index fac9c6f9e197b..c74620fcc539c 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/main.h
++++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h
+@@ -560,6 +560,8 @@ bool nfp_is_subprog_start(struct nfp_insn_meta *meta);
+ void nfp_bpf_jit_prepare(struct nfp_prog *nfp_prog);
+ int nfp_bpf_jit(struct nfp_prog *prog);
+ bool nfp_bpf_supported_opcode(u8 code);
++bool nfp_bpf_offload_check_mtu(struct nfp_net *nn, struct bpf_prog *prog,
++			       unsigned int mtu);
+ 
+ int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx,
+ 		    int prev_insn_idx);
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/offload.c b/drivers/net/ethernet/netronome/nfp/bpf/offload.c
+index 53851853562c6..9d97cd281f18e 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/offload.c
++++ b/drivers/net/ethernet/netronome/nfp/bpf/offload.c
+@@ -481,19 +481,28 @@ int nfp_bpf_event_output(struct nfp_app_bpf *bpf, const void *data,
+ 	return 0;
+ }
+ 
++bool nfp_bpf_offload_check_mtu(struct nfp_net *nn, struct bpf_prog *prog,
++			       unsigned int mtu)
++{
++	unsigned int fw_mtu, pkt_off;
++
++	fw_mtu = nn_readb(nn, NFP_NET_CFG_BPF_INL_MTU) * 64 - 32;
++	pkt_off = min(prog->aux->max_pkt_offset, mtu);
++
++	return fw_mtu < pkt_off;
++}
++
+ static int
+ nfp_net_bpf_load(struct nfp_net *nn, struct bpf_prog *prog,
+ 		 struct netlink_ext_ack *extack)
+ {
+ 	struct nfp_prog *nfp_prog = prog->aux->offload->dev_priv;
+-	unsigned int fw_mtu, pkt_off, max_stack, max_prog_len;
++	unsigned int max_stack, max_prog_len;
+ 	dma_addr_t dma_addr;
+ 	void *img;
+ 	int err;
+ 
+-	fw_mtu = nn_readb(nn, NFP_NET_CFG_BPF_INL_MTU) * 64 - 32;
+-	pkt_off = min(prog->aux->max_pkt_offset, nn->dp.netdev->mtu);
+-	if (fw_mtu < pkt_off) {
++	if (nfp_bpf_offload_check_mtu(nn, prog, nn->dp.netdev->mtu)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "BPF offload not supported with potential packet access beyond HW packet split boundary");
+ 		return -EOPNOTSUPP;
+ 	}
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 30be18bac8063..5eac3f494d9e9 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -157,6 +157,7 @@ static const struct pci_device_id rtl8169_pci_tbl[] = {
+ 	{ PCI_VDEVICE(REALTEK,	0x8129) },
+ 	{ PCI_VDEVICE(REALTEK,	0x8136), RTL_CFG_NO_GBIT },
+ 	{ PCI_VDEVICE(REALTEK,	0x8161) },
++	{ PCI_VDEVICE(REALTEK,	0x8162) },
+ 	{ PCI_VDEVICE(REALTEK,	0x8167) },
+ 	{ PCI_VDEVICE(REALTEK,	0x8168) },
+ 	{ PCI_VDEVICE(NCUBE,	0x8168) },
+diff --git a/drivers/net/ethernet/sfc/mcdi_port_common.c b/drivers/net/ethernet/sfc/mcdi_port_common.c
+index 4bd3ef8f3384e..c4fe3c48ac46a 100644
+--- a/drivers/net/ethernet/sfc/mcdi_port_common.c
++++ b/drivers/net/ethernet/sfc/mcdi_port_common.c
+@@ -132,16 +132,27 @@ void mcdi_to_ethtool_linkset(u32 media, u32 cap, unsigned long *linkset)
+ 	case MC_CMD_MEDIA_SFP_PLUS:
+ 	case MC_CMD_MEDIA_QSFP_PLUS:
+ 		SET_BIT(FIBRE);
+-		if (cap & (1 << MC_CMD_PHY_CAP_1000FDX_LBN))
++		if (cap & (1 << MC_CMD_PHY_CAP_1000FDX_LBN)) {
+ 			SET_BIT(1000baseT_Full);
+-		if (cap & (1 << MC_CMD_PHY_CAP_10000FDX_LBN))
+-			SET_BIT(10000baseT_Full);
+-		if (cap & (1 << MC_CMD_PHY_CAP_40000FDX_LBN))
++			SET_BIT(1000baseX_Full);
++		}
++		if (cap & (1 << MC_CMD_PHY_CAP_10000FDX_LBN)) {
++			SET_BIT(10000baseCR_Full);
++			SET_BIT(10000baseLR_Full);
++			SET_BIT(10000baseSR_Full);
++		}
++		if (cap & (1 << MC_CMD_PHY_CAP_40000FDX_LBN)) {
+ 			SET_BIT(40000baseCR4_Full);
+-		if (cap & (1 << MC_CMD_PHY_CAP_100000FDX_LBN))
++			SET_BIT(40000baseSR4_Full);
++		}
++		if (cap & (1 << MC_CMD_PHY_CAP_100000FDX_LBN)) {
+ 			SET_BIT(100000baseCR4_Full);
+-		if (cap & (1 << MC_CMD_PHY_CAP_25000FDX_LBN))
++			SET_BIT(100000baseSR4_Full);
++		}
++		if (cap & (1 << MC_CMD_PHY_CAP_25000FDX_LBN)) {
+ 			SET_BIT(25000baseCR_Full);
++			SET_BIT(25000baseSR_Full);
++		}
+ 		if (cap & (1 << MC_CMD_PHY_CAP_50000FDX_LBN))
+ 			SET_BIT(50000baseCR2_Full);
+ 		break;
+@@ -192,15 +203,19 @@ u32 ethtool_linkset_to_mcdi_cap(const unsigned long *linkset)
+ 		result |= (1 << MC_CMD_PHY_CAP_100FDX_LBN);
+ 	if (TEST_BIT(1000baseT_Half))
+ 		result |= (1 << MC_CMD_PHY_CAP_1000HDX_LBN);
+-	if (TEST_BIT(1000baseT_Full) || TEST_BIT(1000baseKX_Full))
++	if (TEST_BIT(1000baseT_Full) || TEST_BIT(1000baseKX_Full) ||
++			TEST_BIT(1000baseX_Full))
+ 		result |= (1 << MC_CMD_PHY_CAP_1000FDX_LBN);
+-	if (TEST_BIT(10000baseT_Full) || TEST_BIT(10000baseKX4_Full))
++	if (TEST_BIT(10000baseT_Full) || TEST_BIT(10000baseKX4_Full) ||
++			TEST_BIT(10000baseCR_Full) || TEST_BIT(10000baseLR_Full) ||
++			TEST_BIT(10000baseSR_Full))
+ 		result |= (1 << MC_CMD_PHY_CAP_10000FDX_LBN);
+-	if (TEST_BIT(40000baseCR4_Full) || TEST_BIT(40000baseKR4_Full))
++	if (TEST_BIT(40000baseCR4_Full) || TEST_BIT(40000baseKR4_Full) ||
++			TEST_BIT(40000baseSR4_Full))
+ 		result |= (1 << MC_CMD_PHY_CAP_40000FDX_LBN);
+-	if (TEST_BIT(100000baseCR4_Full))
++	if (TEST_BIT(100000baseCR4_Full) || TEST_BIT(100000baseSR4_Full))
+ 		result |= (1 << MC_CMD_PHY_CAP_100000FDX_LBN);
+-	if (TEST_BIT(25000baseCR_Full))
++	if (TEST_BIT(25000baseCR_Full) || TEST_BIT(25000baseSR_Full))
+ 		result |= (1 << MC_CMD_PHY_CAP_25000FDX_LBN);
+ 	if (TEST_BIT(50000baseCR2_Full))
+ 		result |= (1 << MC_CMD_PHY_CAP_50000FDX_LBN);
+diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c
+index a39c5143b3864..797e51802ccbb 100644
+--- a/drivers/net/ethernet/sfc/ptp.c
++++ b/drivers/net/ethernet/sfc/ptp.c
+@@ -648,7 +648,7 @@ static int efx_ptp_get_attributes(struct efx_nic *efx)
+ 	} else if (rc == -EINVAL) {
+ 		fmt = MC_CMD_PTP_OUT_GET_ATTRIBUTES_SECONDS_NANOSECONDS;
+ 	} else if (rc == -EPERM) {
+-		netif_info(efx, probe, efx->net_dev, "no PTP support\n");
++		pci_info(efx->pci_dev, "no PTP support\n");
+ 		return rc;
+ 	} else {
+ 		efx_mcdi_display_error(efx, MC_CMD_PTP, sizeof(inbuf),
+@@ -824,7 +824,7 @@ static int efx_ptp_disable(struct efx_nic *efx)
+ 	 * should only have been called during probe.
+ 	 */
+ 	if (rc == -ENOSYS || rc == -EPERM)
+-		netif_info(efx, probe, efx->net_dev, "no PTP support\n");
++		pci_info(efx->pci_dev, "no PTP support\n");
+ 	else if (rc)
+ 		efx_mcdi_display_error(efx, MC_CMD_PTP,
+ 				       MC_CMD_PTP_IN_DISABLE_LEN,
+diff --git a/drivers/net/ethernet/sfc/siena_sriov.c b/drivers/net/ethernet/sfc/siena_sriov.c
+index 83dcfcae3d4b5..441e7f3e53751 100644
+--- a/drivers/net/ethernet/sfc/siena_sriov.c
++++ b/drivers/net/ethernet/sfc/siena_sriov.c
+@@ -1057,7 +1057,7 @@ void efx_siena_sriov_probe(struct efx_nic *efx)
+ 		return;
+ 
+ 	if (efx_siena_sriov_cmd(efx, false, &efx->vi_scale, &count)) {
+-		netif_info(efx, probe, efx->net_dev, "no SR-IOV VFs probed\n");
++		pci_info(efx->pci_dev, "no SR-IOV VFs probed\n");
+ 		return;
+ 	}
+ 	if (count > 0 && count > max_vfs)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+index 6399803061158..43165c662740d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+@@ -679,8 +679,6 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ 		goto disable;
+ 	if (qopt->num_entries >= dep)
+ 		return -EINVAL;
+-	if (!qopt->base_time)
+-		return -ERANGE;
+ 	if (!qopt->cycle_time)
+ 		return -ERANGE;
+ 
+diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
+index 03055c96f0760..ad5293571af4d 100644
+--- a/drivers/net/ethernet/ti/davinci_emac.c
++++ b/drivers/net/ethernet/ti/davinci_emac.c
+@@ -412,8 +412,20 @@ static int emac_set_coalesce(struct net_device *ndev,
+ 	u32 int_ctrl, num_interrupts = 0;
+ 	u32 prescale = 0, addnl_dvdr = 1, coal_intvl = 0;
+ 
+-	if (!coal->rx_coalesce_usecs)
+-		return -EINVAL;
++	if (!coal->rx_coalesce_usecs) {
++		priv->coal_intvl = 0;
++
++		switch (priv->version) {
++		case EMAC_VERSION_2:
++			emac_ctrl_write(EMAC_DM646X_CMINTCTRL, 0);
++			break;
++		default:
++			emac_ctrl_write(EMAC_CTRL_EWINTTCNT, 0);
++			break;
++		}
++
++		return 0;
++	}
+ 
+ 	coal_intvl = coal->rx_coalesce_usecs;
+ 
+diff --git a/drivers/net/ifb.c b/drivers/net/ifb.c
+index 7fe306e76281d..db3a9b93d4db7 100644
+--- a/drivers/net/ifb.c
++++ b/drivers/net/ifb.c
+@@ -76,7 +76,9 @@ static void ifb_ri_tasklet(unsigned long _txp)
+ 
+ 	while ((skb = __skb_dequeue(&txp->tq)) != NULL) {
+ 		skb->redirected = 0;
++#ifdef CONFIG_NET_CLS_ACT
+ 		skb->tc_skip_classify = 1;
++#endif
+ 
+ 		u64_stats_update_begin(&txp->tsync);
+ 		txp->tx_packets++;
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 69b20a466c61c..92e94ac94a342 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -732,9 +732,9 @@ static int ksz9031_config_init(struct phy_device *phydev)
+ 				MII_KSZ9031RN_TX_DATA_PAD_SKEW, 4,
+ 				tx_data_skews, 4, &update);
+ 
+-		if (update && phydev->interface != PHY_INTERFACE_MODE_RGMII)
++		if (update && !phy_interface_is_rgmii(phydev))
+ 			phydev_warn(phydev,
+-				    "*-skew-ps values should be used only with phy-mode = \"rgmii\"\n");
++				    "*-skew-ps values should be used only with RGMII PHY modes\n");
+ 
+ 		/* Silicon Errata Sheet (DS80000691D or DS80000692D):
+ 		 * When the device links in the 1000BASE-T slave mode only,
+@@ -1216,8 +1216,9 @@ static struct phy_driver ksphy_driver[] = {
+ 	.get_sset_count = kszphy_get_sset_count,
+ 	.get_strings	= kszphy_get_strings,
+ 	.get_stats	= kszphy_get_stats,
+-	.suspend	= genphy_suspend,
+-	.resume		= genphy_resume,
++	/* No suspend/resume callbacks because of errata DS80000700A,
++	 * receiver error following software power down.
++	 */
+ }, {
+ 	.phy_id		= PHY_ID_KSZ8041RNLI,
+ 	.phy_id_mask	= MICREL_PHY_ID_MASK,
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 5ee7cde0c2e97..db7866b6f7525 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -831,7 +831,12 @@ int phy_ethtool_ksettings_set(struct phy_device *phydev,
+ 	phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;
+ 
+ 	/* Restart the PHY */
+-	_phy_start_aneg(phydev);
++	if (phy_is_started(phydev)) {
++		phydev->state = PHY_UP;
++		phy_trigger_machine(phydev);
++	} else {
++		_phy_start_aneg(phydev);
++	}
+ 
+ 	mutex_unlock(&phydev->lock);
+ 	return 0;
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 025c3246f3396..899496f089d2e 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -1610,7 +1610,7 @@ int phylink_ethtool_set_pauseparam(struct phylink *pl,
+ 		return -EOPNOTSUPP;
+ 
+ 	if (!phylink_test(pl->supported, Asym_Pause) &&
+-	    !pause->autoneg && pause->rx_pause != pause->tx_pause)
++	    pause->rx_pause != pause->tx_pause)
+ 		return -EINVAL;
+ 
+ 	pause_state = 0;
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index 336504b7531d9..932a39945cc62 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -3765,7 +3765,6 @@ vmxnet3_suspend(struct device *device)
+ 	vmxnet3_free_intr_resources(adapter);
+ 
+ 	netif_device_detach(netdev);
+-	netif_tx_stop_all_queues(netdev);
+ 
+ 	/* Create wake-up filters. */
+ 	pmConf = adapter->pm_conf;
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 2746f77745e4d..71902706234cc 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -34,6 +34,7 @@
+ #include <net/l3mdev.h>
+ #include <net/fib_rules.h>
+ #include <net/netns/generic.h>
++#include <net/netfilter/nf_conntrack.h>
+ 
+ #define DRV_NAME	"vrf"
+ #define DRV_VERSION	"1.1"
+@@ -423,12 +424,26 @@ static int vrf_local_xmit(struct sk_buff *skb, struct net_device *dev,
+ 	return NETDEV_TX_OK;
+ }
+ 
++static void vrf_nf_set_untracked(struct sk_buff *skb)
++{
++	if (skb_get_nfct(skb) == 0)
++		nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
++}
++
++static void vrf_nf_reset_ct(struct sk_buff *skb)
++{
++	if (skb_get_nfct(skb) == IP_CT_UNTRACKED)
++		nf_reset_ct(skb);
++}
++
+ #if IS_ENABLED(CONFIG_IPV6)
+ static int vrf_ip6_local_out(struct net *net, struct sock *sk,
+ 			     struct sk_buff *skb)
+ {
+ 	int err;
+ 
++	vrf_nf_reset_ct(skb);
++
+ 	err = nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net,
+ 		      sk, skb, NULL, skb_dst(skb)->dev, dst_output);
+ 
+@@ -508,6 +523,8 @@ static int vrf_ip_local_out(struct net *net, struct sock *sk,
+ {
+ 	int err;
+ 
++	vrf_nf_reset_ct(skb);
++
+ 	err = nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT, net, sk,
+ 		      skb, NULL, skb_dst(skb)->dev, dst_output);
+ 	if (likely(err == 1))
+@@ -627,8 +644,7 @@ static void vrf_finish_direct(struct sk_buff *skb)
+ 		skb_pull(skb, ETH_HLEN);
+ 	}
+ 
+-	/* reset skb device */
+-	nf_reset_ct(skb);
++	vrf_nf_reset_ct(skb);
+ }
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+@@ -642,7 +658,7 @@ static int vrf_finish_output6(struct net *net, struct sock *sk,
+ 	struct neighbour *neigh;
+ 	int ret;
+ 
+-	nf_reset_ct(skb);
++	vrf_nf_reset_ct(skb);
+ 
+ 	skb->protocol = htons(ETH_P_IPV6);
+ 	skb->dev = dev;
+@@ -753,6 +769,8 @@ static struct sk_buff *vrf_ip6_out_direct(struct net_device *vrf_dev,
+ 
+ 	skb->dev = vrf_dev;
+ 
++	vrf_nf_set_untracked(skb);
++
+ 	err = nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk,
+ 		      skb, NULL, vrf_dev, vrf_ip6_out_direct_finish);
+ 
+@@ -860,7 +878,7 @@ static int vrf_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
+ 	bool is_v6gw = false;
+ 	int ret = -EINVAL;
+ 
+-	nf_reset_ct(skb);
++	vrf_nf_reset_ct(skb);
+ 
+ 	/* Be paranoid, rather than too clever. */
+ 	if (unlikely(skb_headroom(skb) < hh_len && dev->header_ops)) {
+@@ -988,6 +1006,8 @@ static struct sk_buff *vrf_ip_out_direct(struct net_device *vrf_dev,
+ 
+ 	skb->dev = vrf_dev;
+ 
++	vrf_nf_set_untracked(skb);
++
+ 	err = nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT, net, sk,
+ 		      skb, NULL, vrf_dev, vrf_ip_out_direct_finish);
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 36183fdfb7f03..b59d482d9c23e 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -982,8 +982,12 @@ static void ath10k_mac_vif_beacon_cleanup(struct ath10k_vif *arvif)
+ 	ath10k_mac_vif_beacon_free(arvif);
+ 
+ 	if (arvif->beacon_buf) {
+-		dma_free_coherent(ar->dev, IEEE80211_MAX_FRAME_LEN,
+-				  arvif->beacon_buf, arvif->beacon_paddr);
++		if (ar->bus_param.dev_type == ATH10K_DEV_TYPE_HL)
++			kfree(arvif->beacon_buf);
++		else
++			dma_free_coherent(ar->dev, IEEE80211_MAX_FRAME_LEN,
++					  arvif->beacon_buf,
++					  arvif->beacon_paddr);
+ 		arvif->beacon_buf = NULL;
+ 	}
+ }
+@@ -1037,7 +1041,7 @@ static int ath10k_monitor_vdev_start(struct ath10k *ar, int vdev_id)
+ 	arg.channel.min_power = 0;
+ 	arg.channel.max_power = channel->max_power * 2;
+ 	arg.channel.max_reg_power = channel->max_reg_power * 2;
+-	arg.channel.max_antenna_gain = channel->max_antenna_gain * 2;
++	arg.channel.max_antenna_gain = channel->max_antenna_gain;
+ 
+ 	reinit_completion(&ar->vdev_setup_done);
+ 	reinit_completion(&ar->vdev_delete_done);
+@@ -1483,7 +1487,7 @@ static int ath10k_vdev_start_restart(struct ath10k_vif *arvif,
+ 	arg.channel.min_power = 0;
+ 	arg.channel.max_power = chandef->chan->max_power * 2;
+ 	arg.channel.max_reg_power = chandef->chan->max_reg_power * 2;
+-	arg.channel.max_antenna_gain = chandef->chan->max_antenna_gain * 2;
++	arg.channel.max_antenna_gain = chandef->chan->max_antenna_gain;
+ 
+ 	if (arvif->vdev_type == WMI_VDEV_TYPE_AP) {
+ 		arg.ssid = arvif->u.ap.ssid;
+@@ -3254,7 +3258,7 @@ static int ath10k_update_channel_list(struct ath10k *ar)
+ 			ch->min_power = 0;
+ 			ch->max_power = channel->max_power * 2;
+ 			ch->max_reg_power = channel->max_reg_power * 2;
+-			ch->max_antenna_gain = channel->max_antenna_gain * 2;
++			ch->max_antenna_gain = channel->max_antenna_gain;
+ 			ch->reg_class_id = 0; /* FIXME */
+ 
+ 			/* FIXME: why use only legacy modes, why not any
+@@ -5466,10 +5470,25 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
+ 	if (vif->type == NL80211_IFTYPE_ADHOC ||
+ 	    vif->type == NL80211_IFTYPE_MESH_POINT ||
+ 	    vif->type == NL80211_IFTYPE_AP) {
+-		arvif->beacon_buf = dma_alloc_coherent(ar->dev,
+-						       IEEE80211_MAX_FRAME_LEN,
+-						       &arvif->beacon_paddr,
+-						       GFP_ATOMIC);
++		if (ar->bus_param.dev_type == ATH10K_DEV_TYPE_HL) {
++			arvif->beacon_buf = kmalloc(IEEE80211_MAX_FRAME_LEN,
++						    GFP_KERNEL);
++
++			/* Using a kernel pointer in place of a dma_addr_t
++			 * token can lead to undefined behavior if that
++			 * makes it into cache management functions. Use a
++			 * known-invalid address token instead, which
++			 * avoids the warning and makes it easier to catch
++			 * bugs if it does end up getting used.
++			 */
++			arvif->beacon_paddr = DMA_MAPPING_ERROR;
++		} else {
++			arvif->beacon_buf =
++				dma_alloc_coherent(ar->dev,
++						   IEEE80211_MAX_FRAME_LEN,
++						   &arvif->beacon_paddr,
++						   GFP_ATOMIC);
++		}
+ 		if (!arvif->beacon_buf) {
+ 			ret = -ENOMEM;
+ 			ath10k_warn(ar, "failed to allocate beacon buffer: %d\n",
+@@ -5684,8 +5703,12 @@ err_vdev_delete:
+ 
+ err:
+ 	if (arvif->beacon_buf) {
+-		dma_free_coherent(ar->dev, IEEE80211_MAX_FRAME_LEN,
+-				  arvif->beacon_buf, arvif->beacon_paddr);
++		if (ar->bus_param.dev_type == ATH10K_DEV_TYPE_HL)
++			kfree(arvif->beacon_buf);
++		else
++			dma_free_coherent(ar->dev, IEEE80211_MAX_FRAME_LEN,
++					  arvif->beacon_buf,
++					  arvif->beacon_paddr);
+ 		arvif->beacon_buf = NULL;
+ 	}
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
+index 81ddaafb6721c..0fe639710a8bb 100644
+--- a/drivers/net/wireless/ath/ath10k/sdio.c
++++ b/drivers/net/wireless/ath/ath10k/sdio.c
+@@ -1363,8 +1363,11 @@ static void ath10k_rx_indication_async_work(struct work_struct *work)
+ 		ep->ep_ops.ep_rx_complete(ar, skb);
+ 	}
+ 
+-	if (test_bit(ATH10K_FLAG_CORE_REGISTERED, &ar->dev_flags))
++	if (test_bit(ATH10K_FLAG_CORE_REGISTERED, &ar->dev_flags)) {
++		local_bh_disable();
+ 		napi_schedule(&ar->napi);
++		local_bh_enable();
++	}
+ }
+ 
+ static int ath10k_sdio_read_rtc_state(struct ath10k_sdio *ar_sdio, unsigned char *state)
+diff --git a/drivers/net/wireless/ath/ath10k/usb.c b/drivers/net/wireless/ath/ath10k/usb.c
+index 19b9c27e30e20..3d98f19c6ec8a 100644
+--- a/drivers/net/wireless/ath/ath10k/usb.c
++++ b/drivers/net/wireless/ath/ath10k/usb.c
+@@ -525,7 +525,7 @@ static int ath10k_usb_submit_ctrl_in(struct ath10k *ar,
+ 			      req,
+ 			      USB_DIR_IN | USB_TYPE_VENDOR |
+ 			      USB_RECIP_DEVICE, value, index, buf,
+-			      size, 2 * HZ);
++			      size, 2000);
+ 
+ 	if (ret < 0) {
+ 		ath10k_warn(ar, "Failed to read usb control message: %d\n",
+@@ -853,6 +853,11 @@ static int ath10k_usb_setup_pipe_resources(struct ath10k *ar,
+ 				   le16_to_cpu(endpoint->wMaxPacketSize),
+ 				   endpoint->bInterval);
+ 		}
++
++		/* Ignore broken descriptors. */
++		if (usb_endpoint_maxp(endpoint) == 0)
++			continue;
++
+ 		urbcount = 0;
+ 
+ 		pipe_num =
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index 37b53af760d76..85fe855ece097 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -2610,6 +2610,10 @@ int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
+ 	if (ieee80211_is_beacon(hdr->frame_control))
+ 		ath10k_mac_handle_beacon(ar, skb);
+ 
++	if (ieee80211_is_beacon(hdr->frame_control) ||
++	    ieee80211_is_probe_resp(hdr->frame_control))
++		status->boottime_ns = ktime_get_boottime_ns();
++
+ 	ath10k_dbg(ar, ATH10K_DBG_MGMT,
+ 		   "event mgmt rx skb %pK len %d ftype %02x stype %02x\n",
+ 		   skb, skb->len,
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index 66ecf09068c19..e244b7038e606 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -2066,7 +2066,9 @@ struct wmi_channel {
+ 	union {
+ 		__le32 reginfo1;
+ 		struct {
++			/* note: power unit is 1 dBm */
+ 			u8 antenna_max;
++			/* note: power unit is 0.5 dBm */
+ 			u8 max_tx_power;
+ 		} __packed;
+ 	} __packed;
+@@ -2086,6 +2088,7 @@ struct wmi_channel_arg {
+ 	u32 min_power;
+ 	u32 max_power;
+ 	u32 max_reg_power;
++	/* note: power unit is 1 dBm */
+ 	u32 max_antenna_gain;
+ 	u32 reg_class_id;
+ 	enum wmi_phy_mode mode;
+diff --git a/drivers/net/wireless/ath/ath11k/dbring.c b/drivers/net/wireless/ath/ath11k/dbring.c
+index 5e1f5437b4185..fd98ba5b1130b 100644
+--- a/drivers/net/wireless/ath/ath11k/dbring.c
++++ b/drivers/net/wireless/ath/ath11k/dbring.c
+@@ -8,8 +8,7 @@
+ 
+ static int ath11k_dbring_bufs_replenish(struct ath11k *ar,
+ 					struct ath11k_dbring *ring,
+-					struct ath11k_dbring_element *buff,
+-					gfp_t gfp)
++					struct ath11k_dbring_element *buff)
+ {
+ 	struct ath11k_base *ab = ar->ab;
+ 	struct hal_srng *srng;
+@@ -35,7 +34,7 @@ static int ath11k_dbring_bufs_replenish(struct ath11k *ar,
+ 		goto err;
+ 
+ 	spin_lock_bh(&ring->idr_lock);
+-	buf_id = idr_alloc(&ring->bufs_idr, buff, 0, ring->bufs_max, gfp);
++	buf_id = idr_alloc(&ring->bufs_idr, buff, 0, ring->bufs_max, GFP_ATOMIC);
+ 	spin_unlock_bh(&ring->idr_lock);
+ 	if (buf_id < 0) {
+ 		ret = -ENOBUFS;
+@@ -72,8 +71,7 @@ err:
+ }
+ 
+ static int ath11k_dbring_fill_bufs(struct ath11k *ar,
+-				   struct ath11k_dbring *ring,
+-				   gfp_t gfp)
++				   struct ath11k_dbring *ring)
+ {
+ 	struct ath11k_dbring_element *buff;
+ 	struct hal_srng *srng;
+@@ -92,11 +90,11 @@ static int ath11k_dbring_fill_bufs(struct ath11k *ar,
+ 	size = sizeof(*buff) + ring->buf_sz + align - 1;
+ 
+ 	while (num_remain > 0) {
+-		buff = kzalloc(size, gfp);
++		buff = kzalloc(size, GFP_ATOMIC);
+ 		if (!buff)
+ 			break;
+ 
+-		ret = ath11k_dbring_bufs_replenish(ar, ring, buff, gfp);
++		ret = ath11k_dbring_bufs_replenish(ar, ring, buff);
+ 		if (ret) {
+ 			ath11k_warn(ar->ab, "failed to replenish db ring num_remain %d req_ent %d\n",
+ 				    num_remain, req_entries);
+@@ -176,7 +174,7 @@ int ath11k_dbring_buf_setup(struct ath11k *ar,
+ 	ring->hp_addr = ath11k_hal_srng_get_hp_addr(ar->ab, srng);
+ 	ring->tp_addr = ath11k_hal_srng_get_tp_addr(ar->ab, srng);
+ 
+-	ret = ath11k_dbring_fill_bufs(ar, ring, GFP_KERNEL);
++	ret = ath11k_dbring_fill_bufs(ar, ring);
+ 
+ 	return ret;
+ }
+@@ -322,7 +320,7 @@ int ath11k_dbring_buffer_release_event(struct ath11k_base *ab,
+ 		}
+ 
+ 		memset(buff, 0, size);
+-		ath11k_dbring_bufs_replenish(ar, ring, buff, GFP_ATOMIC);
++		ath11k_dbring_bufs_replenish(ar, ring, buff);
+ 	}
+ 
+ 	spin_unlock_bh(&srng->lock);
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 2bff8eb507d4d..2e77dca6b1ad6 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -2303,8 +2303,10 @@ static void ath11k_dp_rx_h_ppdu(struct ath11k *ar, struct hal_rx_desc *rx_desc,
+ 	channel_num = ath11k_dp_rx_h_msdu_start_freq(rx_desc);
+ 	center_freq = ath11k_dp_rx_h_msdu_start_freq(rx_desc) >> 16;
+ 
+-	if (center_freq >= 5935 && center_freq <= 7105) {
++	if (center_freq >= ATH11K_MIN_6G_FREQ &&
++	    center_freq <= ATH11K_MAX_6G_FREQ) {
+ 		rx_status->band = NL80211_BAND_6GHZ;
++		rx_status->freq = center_freq;
+ 	} else if (channel_num >= 1 && channel_num <= 14) {
+ 		rx_status->band = NL80211_BAND_2GHZ;
+ 	} else if (channel_num >= 36 && channel_num <= 173) {
+@@ -2322,8 +2324,9 @@ static void ath11k_dp_rx_h_ppdu(struct ath11k *ar, struct hal_rx_desc *rx_desc,
+ 				rx_desc, sizeof(struct hal_rx_desc));
+ 	}
+ 
+-	rx_status->freq = ieee80211_channel_to_frequency(channel_num,
+-							 rx_status->band);
++	if (rx_status->band != NL80211_BAND_6GHZ)
++		rx_status->freq = ieee80211_channel_to_frequency(channel_num,
++								 rx_status->band);
+ 
+ 	ath11k_dp_rx_h_rate(ar, rx_desc, rx_status);
+ }
+@@ -3273,7 +3276,7 @@ static int ath11k_dp_rx_h_defrag_reo_reinject(struct ath11k *ar, struct dp_rx_ti
+ 
+ 	paddr = dma_map_single(ab->dev, defrag_skb->data,
+ 			       defrag_skb->len + skb_tailroom(defrag_skb),
+-			       DMA_FROM_DEVICE);
++			       DMA_TO_DEVICE);
+ 	if (dma_mapping_error(ab->dev, paddr))
+ 		return -ENOMEM;
+ 
+@@ -3338,7 +3341,7 @@ err_free_idr:
+ 	spin_unlock_bh(&rx_refill_ring->idr_lock);
+ err_unmap_dma:
+ 	dma_unmap_single(ab->dev, paddr, defrag_skb->len + skb_tailroom(defrag_skb),
+-			 DMA_FROM_DEVICE);
++			 DMA_TO_DEVICE);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 63d70aecbd0f1..0924bc8b35205 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -6320,7 +6320,7 @@ static int __ath11k_mac_register(struct ath11k *ar)
+ 		ar->hw->wiphy->interface_modes &= ~BIT(NL80211_IFTYPE_MONITOR);
+ 
+ 	/* Apply the regd received during initialization */
+-	ret = ath11k_regd_update(ar, true);
++	ret = ath11k_regd_update(ar);
+ 	if (ret) {
+ 		ath11k_err(ar->ab, "ath11k regd update failed: %d\n", ret);
+ 		goto err_unregister_hw;
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index 2ae7c6bf091e9..c842e275d1adf 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -2616,8 +2616,10 @@ static void ath11k_qmi_driver_event_work(struct work_struct *work)
+ 		list_del(&event->list);
+ 		spin_unlock(&qmi->event_lock);
+ 
+-		if (test_bit(ATH11K_FLAG_UNREGISTERING, &ab->dev_flags))
++		if (test_bit(ATH11K_FLAG_UNREGISTERING, &ab->dev_flags)) {
++			kfree(event);
+ 			return;
++		}
+ 
+ 		switch (event->type) {
+ 		case ATH11K_QMI_EVENT_SERVER_ARRIVE:
+diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
+index 678d0885fcee7..b8f9f34408879 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.c
++++ b/drivers/net/wireless/ath/ath11k/reg.c
+@@ -198,7 +198,7 @@ static void ath11k_copy_regd(struct ieee80211_regdomain *regd_orig,
+ 		       sizeof(struct ieee80211_reg_rule));
+ }
+ 
+-int ath11k_regd_update(struct ath11k *ar, bool init)
++int ath11k_regd_update(struct ath11k *ar)
+ {
+ 	struct ieee80211_regdomain *regd, *regd_copy = NULL;
+ 	int ret, regd_len, pdev_id;
+@@ -209,7 +209,10 @@ int ath11k_regd_update(struct ath11k *ar, bool init)
+ 
+ 	spin_lock_bh(&ab->base_lock);
+ 
+-	if (init) {
++	/* Prefer the latest regd update over default if it's available */
++	if (ab->new_regd[pdev_id]) {
++		regd = ab->new_regd[pdev_id];
++	} else {
+ 		/* Apply the regd received during init through
+ 		 * WMI_REG_CHAN_LIST_CC event. In case of failure to
+ 		 * receive the regd, initialize with a default world
+@@ -222,8 +225,6 @@ int ath11k_regd_update(struct ath11k *ar, bool init)
+ 				    "failed to receive default regd during init\n");
+ 			regd = (struct ieee80211_regdomain *)&ath11k_world_regd;
+ 		}
+-	} else {
+-		regd = ab->new_regd[pdev_id];
+ 	}
+ 
+ 	if (!regd) {
+@@ -680,7 +681,7 @@ void ath11k_regd_update_work(struct work_struct *work)
+ 					 regd_update_work);
+ 	int ret;
+ 
+-	ret = ath11k_regd_update(ar, false);
++	ret = ath11k_regd_update(ar);
+ 	if (ret) {
+ 		/* Firmware has already moved to the new regd. We need
+ 		 * to maintain channel consistency across FW, Host driver
+diff --git a/drivers/net/wireless/ath/ath11k/reg.h b/drivers/net/wireless/ath/ath11k/reg.h
+index 39b7fc9435415..7dbbba9fae1d2 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.h
++++ b/drivers/net/wireless/ath/ath11k/reg.h
+@@ -30,6 +30,6 @@ void ath11k_regd_update_work(struct work_struct *work);
+ struct ieee80211_regdomain *
+ ath11k_reg_build_regd(struct ath11k_base *ab,
+ 		      struct cur_regulatory_info *reg_info, bool intersect);
+-int ath11k_regd_update(struct ath11k *ar, bool init);
++int ath11k_regd_update(struct ath11k *ar);
+ int ath11k_reg_update_chan_list(struct ath11k *ar);
+ #endif
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index eca86225a3413..74ebe8e7d1d81 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -1333,6 +1333,7 @@ int ath11k_wmi_pdev_bss_chan_info_request(struct ath11k *ar,
+ 				     WMI_TAG_PDEV_BSS_CHAN_INFO_REQUEST) |
+ 			  FIELD_PREP(WMI_TLV_LEN, sizeof(*cmd) - TLV_HDR_SIZE);
+ 	cmd->req_type = type;
++	cmd->pdev_id = ar->pdev->pdev_id;
+ 
+ 	ath11k_dbg(ar->ab, ATH11K_DBG_WMI,
+ 		   "WMI bss chan info req type %d\n", type);
+@@ -5361,6 +5362,17 @@ static int ath11k_reg_chan_list_event(struct ath11k_base *ab, struct sk_buff *sk
+ 
+ 	pdev_idx = reg_info->phy_id;
+ 
++	/* Avoid default reg rule updates sent during FW recovery if
++	 * it is already available
++	 */
++	spin_lock(&ab->base_lock);
++	if (test_bit(ATH11K_FLAG_RECOVERY, &ab->dev_flags) &&
++	    ab->default_regd[pdev_idx]) {
++		spin_unlock(&ab->base_lock);
++		goto mem_free;
++	}
++	spin_unlock(&ab->base_lock);
++
+ 	if (pdev_idx >= ab->num_radios) {
+ 		/* Process the event for phy0 only if single_pdev_only
+ 		 * is true. If pdev_idx is valid but not 0, discard the
+@@ -5398,10 +5410,10 @@ static int ath11k_reg_chan_list_event(struct ath11k_base *ab, struct sk_buff *sk
+ 	}
+ 
+ 	spin_lock(&ab->base_lock);
+-	if (test_bit(ATH11K_FLAG_REGISTERED, &ab->dev_flags)) {
+-		/* Once mac is registered, ar is valid and all CC events from
+-		 * fw is considered to be received due to user requests
+-		 * currently.
++	if (ab->default_regd[pdev_idx]) {
++		/* The initial rules from FW after WMI Init is to build
++		 * the default regd. From then on, any rules updated for
++		 * the pdev could be due to user reg changes.
+ 		 * Free previously built regd before assigning the newly
+ 		 * generated regd to ar. NULL pointer handling will be
+ 		 * taken care by kfree itself.
+@@ -5411,13 +5423,9 @@ static int ath11k_reg_chan_list_event(struct ath11k_base *ab, struct sk_buff *sk
+ 		ab->new_regd[pdev_idx] = regd;
+ 		ieee80211_queue_work(ar->hw, &ar->regd_update_work);
+ 	} else {
+-		/* Multiple events for the same *ar is not expected. But we
+-		 * can still clear any previously stored default_regd if we
+-		 * are receiving this event for the same radio by mistake.
+-		 * NULL pointer handling will be taken care by kfree itself.
++		/* This regd would be applied during mac registration and is
++		 * held constant throughout for regd intersection purpose
+ 		 */
+-		kfree(ab->default_regd[pdev_idx]);
+-		/* This regd would be applied during mac registration */
+ 		ab->default_regd[pdev_idx] = regd;
+ 	}
+ 	ab->dfs_region = reg_info->dfs_region;
+@@ -5660,8 +5668,10 @@ static void ath11k_mgmt_rx_event(struct ath11k_base *ab, struct sk_buff *skb)
+ 	if (rx_ev.status & WMI_RX_STATUS_ERR_MIC)
+ 		status->flag |= RX_FLAG_MMIC_ERROR;
+ 
+-	if (rx_ev.chan_freq >= ATH11K_MIN_6G_FREQ) {
++	if (rx_ev.chan_freq >= ATH11K_MIN_6G_FREQ &&
++	    rx_ev.chan_freq <= ATH11K_MAX_6G_FREQ) {
+ 		status->band = NL80211_BAND_6GHZ;
++		status->freq = rx_ev.chan_freq;
+ 	} else if (rx_ev.channel >= 1 && rx_ev.channel <= 14) {
+ 		status->band = NL80211_BAND_2GHZ;
+ 	} else if (rx_ev.channel >= 36 && rx_ev.channel <= ATH11K_MAX_5G_CHAN) {
+@@ -5682,8 +5692,10 @@ static void ath11k_mgmt_rx_event(struct ath11k_base *ab, struct sk_buff *skb)
+ 
+ 	sband = &ar->mac.sbands[status->band];
+ 
+-	status->freq = ieee80211_channel_to_frequency(rx_ev.channel,
+-						      status->band);
++	if (status->band != NL80211_BAND_6GHZ)
++		status->freq = ieee80211_channel_to_frequency(rx_ev.channel,
++							      status->band);
++
+ 	status->signal = rx_ev.snr + ATH11K_DEFAULT_NOISE_FLOOR;
+ 	status->rate_idx = ath11k_mac_bitrate_to_idx(sband, rx_ev.rate / 100);
+ 
+@@ -5844,6 +5856,8 @@ static void ath11k_scan_event(struct ath11k_base *ab, struct sk_buff *skb)
+ 		ath11k_wmi_event_scan_start_failed(ar);
+ 		break;
+ 	case WMI_SCAN_EVENT_DEQUEUED:
++		__ath11k_mac_scan_finish(ar);
++		break;
+ 	case WMI_SCAN_EVENT_PREEMPTED:
+ 	case WMI_SCAN_EVENT_RESTARTED:
+ 	case WMI_SCAN_EVENT_FOREIGN_CHAN_EXIT:
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.h b/drivers/net/wireless/ath/ath11k/wmi.h
+index 5a32ba0eb4f57..c47adaab7918b 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.h
++++ b/drivers/net/wireless/ath/ath11k/wmi.h
+@@ -2935,6 +2935,7 @@ struct wmi_pdev_bss_chan_info_req_cmd {
+ 	u32 tlv_header;
+ 	/* ref wmi_bss_chan_info_req_type */
+ 	u32 req_type;
++	u32 pdev_id;
+ } __packed;
+ 
+ struct wmi_ap_ps_peer_cmd {
+@@ -4028,7 +4029,6 @@ struct wmi_vdev_stopped_event {
+ } __packed;
+ 
+ struct wmi_pdev_bss_chan_info_event {
+-	u32 pdev_id;
+ 	u32 freq;	/* Units in MHz */
+ 	u32 noise_floor;	/* units are dBm */
+ 	/* rx clear - how often the channel was unused */
+@@ -4046,6 +4046,7 @@ struct wmi_pdev_bss_chan_info_event {
+ 	/*rx_cycle cnt for my bss in 64bits format */
+ 	u32 rx_bss_cycle_count_low;
+ 	u32 rx_bss_cycle_count_high;
++	u32 pdev_id;
+ } __packed;
+ 
+ #define WMI_VDEV_INSTALL_KEY_COMPL_STATUS_SUCCESS 0
+diff --git a/drivers/net/wireless/ath/ath6kl/usb.c b/drivers/net/wireless/ath/ath6kl/usb.c
+index 5372e948e761d..aba70f35e574b 100644
+--- a/drivers/net/wireless/ath/ath6kl/usb.c
++++ b/drivers/net/wireless/ath/ath6kl/usb.c
+@@ -340,6 +340,11 @@ static int ath6kl_usb_setup_pipe_resources(struct ath6kl_usb *ar_usb)
+ 				   le16_to_cpu(endpoint->wMaxPacketSize),
+ 				   endpoint->bInterval);
+ 		}
++
++		/* Ignore broken descriptors. */
++		if (usb_endpoint_maxp(endpoint) == 0)
++			continue;
++
+ 		urbcount = 0;
+ 
+ 		pipe_num =
+@@ -907,7 +912,7 @@ static int ath6kl_usb_submit_ctrl_in(struct ath6kl_usb *ar_usb,
+ 				 req,
+ 				 USB_DIR_IN | USB_TYPE_VENDOR |
+ 				 USB_RECIP_DEVICE, value, index, buf,
+-				 size, 2 * HZ);
++				 size, 2000);
+ 
+ 	if (ret < 0) {
+ 		ath6kl_warn("Failed to read usb control message: %d\n", ret);
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index 5739c1dbf1661..af367696fd92f 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -533,8 +533,10 @@ irqreturn_t ath_isr(int irq, void *dev)
+ 	ath9k_debug_sync_cause(sc, sync_cause);
+ 	status &= ah->imask;	/* discard unasked-for bits */
+ 
+-	if (test_bit(ATH_OP_HW_RESET, &common->op_flags))
++	if (test_bit(ATH_OP_HW_RESET, &common->op_flags)) {
++		ath9k_hw_kill_interrupts(sc->sc_ah);
+ 		return IRQ_HANDLED;
++	}
+ 
+ 	/*
+ 	 * If there are no status bits set, then this interrupt was not
+diff --git a/drivers/net/wireless/ath/dfs_pattern_detector.c b/drivers/net/wireless/ath/dfs_pattern_detector.c
+index 0813473793df1..87369073098c8 100644
+--- a/drivers/net/wireless/ath/dfs_pattern_detector.c
++++ b/drivers/net/wireless/ath/dfs_pattern_detector.c
+@@ -182,10 +182,12 @@ static void channel_detector_exit(struct dfs_pattern_detector *dpd,
+ 	if (cd == NULL)
+ 		return;
+ 	list_del(&cd->head);
+-	for (i = 0; i < dpd->num_radar_types; i++) {
+-		struct pri_detector *de = cd->detectors[i];
+-		if (de != NULL)
+-			de->exit(de);
++	if (cd->detectors) {
++		for (i = 0; i < dpd->num_radar_types; i++) {
++			struct pri_detector *de = cd->detectors[i];
++			if (de != NULL)
++				de->exit(de);
++		}
+ 	}
+ 	kfree(cd->detectors);
+ 	kfree(cd);
+diff --git a/drivers/net/wireless/ath/wcn36xx/dxe.c b/drivers/net/wireless/ath/wcn36xx/dxe.c
+index 63079231e48ec..cf4eb0fb28151 100644
+--- a/drivers/net/wireless/ath/wcn36xx/dxe.c
++++ b/drivers/net/wireless/ath/wcn36xx/dxe.c
+@@ -403,8 +403,21 @@ static void reap_tx_dxes(struct wcn36xx *wcn, struct wcn36xx_dxe_ch *ch)
+ 			dma_unmap_single(wcn->dev, ctl->desc->src_addr_l,
+ 					 ctl->skb->len, DMA_TO_DEVICE);
+ 			info = IEEE80211_SKB_CB(ctl->skb);
+-			if (!(info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS)) {
+-				/* Keep frame until TX status comes */
++			if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) {
++				if (info->flags & IEEE80211_TX_CTL_NO_ACK) {
++					info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
++					ieee80211_tx_status_irqsafe(wcn->hw, ctl->skb);
++				} else {
++					/* Wait for the TX ack indication or timeout... */
++					spin_lock(&wcn->dxe_lock);
++					if (WARN_ON(wcn->tx_ack_skb))
++						ieee80211_free_txskb(wcn->hw, wcn->tx_ack_skb);
++					wcn->tx_ack_skb = ctl->skb; /* Tracking ref */
++					mod_timer(&wcn->tx_ack_timer, jiffies + HZ / 10);
++					spin_unlock(&wcn->dxe_lock);
++				}
++				/* do not free, ownership transferred to mac80211 status cb */
++			} else {
+ 				ieee80211_free_txskb(wcn->hw, ctl->skb);
+ 			}
+ 
+@@ -426,7 +439,6 @@ static irqreturn_t wcn36xx_irq_tx_complete(int irq, void *dev)
+ {
+ 	struct wcn36xx *wcn = (struct wcn36xx *)dev;
+ 	int int_src, int_reason;
+-	bool transmitted = false;
+ 
+ 	wcn36xx_dxe_read_register(wcn, WCN36XX_DXE_INT_SRC_RAW_REG, &int_src);
+ 
+@@ -466,7 +478,6 @@ static irqreturn_t wcn36xx_irq_tx_complete(int irq, void *dev)
+ 		if (int_reason & (WCN36XX_CH_STAT_INT_DONE_MASK |
+ 				  WCN36XX_CH_STAT_INT_ED_MASK)) {
+ 			reap_tx_dxes(wcn, &wcn->dxe_tx_h_ch);
+-			transmitted = true;
+ 		}
+ 	}
+ 
+@@ -479,7 +490,6 @@ static irqreturn_t wcn36xx_irq_tx_complete(int irq, void *dev)
+ 					   WCN36XX_DXE_0_INT_CLR,
+ 					   WCN36XX_INT_MASK_CHAN_TX_L);
+ 
+-
+ 		if (int_reason & WCN36XX_CH_STAT_INT_ERR_MASK ) {
+ 			wcn36xx_dxe_write_register(wcn,
+ 						   WCN36XX_DXE_0_INT_ERR_CLR,
+@@ -507,25 +517,8 @@ static irqreturn_t wcn36xx_irq_tx_complete(int irq, void *dev)
+ 		if (int_reason & (WCN36XX_CH_STAT_INT_DONE_MASK |
+ 				  WCN36XX_CH_STAT_INT_ED_MASK)) {
+ 			reap_tx_dxes(wcn, &wcn->dxe_tx_l_ch);
+-			transmitted = true;
+-		}
+-	}
+-
+-	spin_lock(&wcn->dxe_lock);
+-	if (wcn->tx_ack_skb && transmitted) {
+-		struct ieee80211_tx_info *info = IEEE80211_SKB_CB(wcn->tx_ack_skb);
+-
+-		/* TX complete, no need to wait for 802.11 ack indication */
+-		if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS &&
+-		    info->flags & IEEE80211_TX_CTL_NO_ACK) {
+-			info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
+-			del_timer(&wcn->tx_ack_timer);
+-			ieee80211_tx_status_irqsafe(wcn->hw, wcn->tx_ack_skb);
+-			wcn->tx_ack_skb = NULL;
+-			ieee80211_wake_queues(wcn->hw);
+ 		}
+ 	}
+-	spin_unlock(&wcn->dxe_lock);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -613,6 +606,10 @@ static int wcn36xx_rx_handle_packets(struct wcn36xx *wcn,
+ 	dxe = ctl->desc;
+ 
+ 	while (!(READ_ONCE(dxe->ctrl) & WCN36xx_DXE_CTRL_VLD)) {
++		/* do not read until we own DMA descriptor */
++		dma_rmb();
++
++		/* read/modify DMA descriptor */
+ 		skb = ctl->skb;
+ 		dma_addr = dxe->dst_addr_l;
+ 		ret = wcn36xx_dxe_fill_skb(wcn->dev, ctl, GFP_ATOMIC);
+@@ -623,9 +620,15 @@ static int wcn36xx_rx_handle_packets(struct wcn36xx *wcn,
+ 			dma_unmap_single(wcn->dev, dma_addr, WCN36XX_PKT_SIZE,
+ 					DMA_FROM_DEVICE);
+ 			wcn36xx_rx_skb(wcn, skb);
+-		} /* else keep old skb not submitted and use it for rx DMA */
++		}
++		/* else keep old skb not submitted and reuse it for rx DMA
++		 * (dropping the packet that it contained)
++		 */
+ 
++		/* flush descriptor changes before re-marking as valid */
++		dma_wmb();
+ 		dxe->ctrl = ctrl;
++
+ 		ctl = ctl->next;
+ 		dxe = ctl->desc;
+ 	}
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index 43be20baab354..629ddfd74da1a 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -134,7 +134,9 @@ static struct ieee80211_supported_band wcn_band_2ghz = {
+ 		.cap =	IEEE80211_HT_CAP_GRN_FLD |
+ 			IEEE80211_HT_CAP_SGI_20 |
+ 			IEEE80211_HT_CAP_DSSSCCK40 |
+-			IEEE80211_HT_CAP_LSIG_TXOP_PROT,
++			IEEE80211_HT_CAP_LSIG_TXOP_PROT |
++			IEEE80211_HT_CAP_SGI_40 |
++			IEEE80211_HT_CAP_SUP_WIDTH_20_40,
+ 		.ht_supported = true,
+ 		.ampdu_factor = IEEE80211_HT_MAX_AMPDU_64K,
+ 		.ampdu_density = IEEE80211_HT_MPDU_DENSITY_16,
+@@ -566,12 +568,14 @@ static int wcn36xx_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 		if (IEEE80211_KEY_FLAG_PAIRWISE & key_conf->flags) {
+ 			sta_priv->is_data_encrypted = true;
+ 			/* Reconfigure bss with encrypt_type */
+-			if (NL80211_IFTYPE_STATION == vif->type)
++			if (NL80211_IFTYPE_STATION == vif->type) {
+ 				wcn36xx_smd_config_bss(wcn,
+ 						       vif,
+ 						       sta,
+ 						       sta->addr,
+ 						       true);
++				wcn36xx_smd_config_sta(wcn, vif, sta);
++			}
+ 
+ 			wcn36xx_smd_set_stakey(wcn,
+ 				vif_priv->encrypt_type,
+diff --git a/drivers/net/wireless/ath/wcn36xx/smd.c b/drivers/net/wireless/ath/wcn36xx/smd.c
+index 766400f7b61c1..3793907ace92e 100644
+--- a/drivers/net/wireless/ath/wcn36xx/smd.c
++++ b/drivers/net/wireless/ath/wcn36xx/smd.c
+@@ -2632,30 +2632,52 @@ static int wcn36xx_smd_delete_sta_context_ind(struct wcn36xx *wcn,
+ 					      size_t len)
+ {
+ 	struct wcn36xx_hal_delete_sta_context_ind_msg *rsp = buf;
+-	struct wcn36xx_vif *tmp;
++	struct wcn36xx_vif *vif_priv;
++	struct ieee80211_vif *vif;
++	struct ieee80211_bss_conf *bss_conf;
+ 	struct ieee80211_sta *sta;
++	bool found = false;
+ 
+ 	if (len != sizeof(*rsp)) {
+ 		wcn36xx_warn("Corrupted delete sta indication\n");
+ 		return -EIO;
+ 	}
+ 
+-	wcn36xx_dbg(WCN36XX_DBG_HAL, "delete station indication %pM index %d\n",
+-		    rsp->addr2, rsp->sta_id);
++	wcn36xx_dbg(WCN36XX_DBG_HAL,
++		    "delete station indication %pM index %d reason %d\n",
++		    rsp->addr2, rsp->sta_id, rsp->reason_code);
+ 
+-	list_for_each_entry(tmp, &wcn->vif_list, list) {
++	list_for_each_entry(vif_priv, &wcn->vif_list, list) {
+ 		rcu_read_lock();
+-		sta = ieee80211_find_sta(wcn36xx_priv_to_vif(tmp), rsp->addr2);
+-		if (sta)
+-			ieee80211_report_low_ack(sta, 0);
++		vif = wcn36xx_priv_to_vif(vif_priv);
++
++		if (vif->type == NL80211_IFTYPE_STATION) {
++			/* We could call ieee80211_find_sta too, but checking
++			 * bss_conf is clearer.
++			 */
++			bss_conf = &vif->bss_conf;
++			if (vif_priv->sta_assoc &&
++			    !memcmp(bss_conf->bssid, rsp->addr2, ETH_ALEN)) {
++				found = true;
++				wcn36xx_dbg(WCN36XX_DBG_HAL,
++					    "connection loss bss_index %d\n",
++					    vif_priv->bss_index);
++				ieee80211_connection_loss(vif);
++			}
++		} else {
++			sta = ieee80211_find_sta(vif, rsp->addr2);
++			if (sta) {
++				found = true;
++				ieee80211_report_low_ack(sta, 0);
++			}
++		}
++
+ 		rcu_read_unlock();
+-		if (sta)
++		if (found)
+ 			return 0;
+ 	}
+ 
+-	wcn36xx_warn("STA with addr %pM and index %d not found\n",
+-		     rsp->addr2,
+-		     rsp->sta_id);
++	wcn36xx_warn("BSS or STA with addr %pM not found\n", rsp->addr2);
+ 	return -ENOENT;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/wcn36xx/txrx.c b/drivers/net/wireless/ath/wcn36xx/txrx.c
+index cab196bb38cd4..bbd7194c82e27 100644
+--- a/drivers/net/wireless/ath/wcn36xx/txrx.c
++++ b/drivers/net/wireless/ath/wcn36xx/txrx.c
+@@ -31,6 +31,13 @@ struct wcn36xx_rate {
+ 	enum rate_info_bw bw;
+ };
+ 
++/* Buffer descriptor rx_ch field is limited to 5-bit (4+1), a mapping is used
++ * for 11A Channels.
++ */
++static const u8 ab_rx_ch_map[] = { 36, 40, 44, 48, 52, 56, 60, 64, 100, 104,
++				   108, 112, 116, 120, 124, 128, 132, 136, 140,
++				   149, 153, 157, 161, 165, 144 };
++
+ static const struct wcn36xx_rate wcn36xx_rate_table[] = {
+ 	/* 11b rates */
+ 	{  10, 0, RX_ENC_LEGACY, 0, RATE_INFO_BW_20 },
+@@ -291,6 +298,22 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 	    ieee80211_is_probe_resp(hdr->frame_control))
+ 		status.boottime_ns = ktime_get_boottime_ns();
+ 
++	if (bd->scan_learn) {
++		/* If packet originates from hardware scanning, extract the
++		 * band/channel from bd descriptor.
++		 */
++		u8 hwch = (bd->reserved0 << 4) + bd->rx_ch;
++
++		if (bd->rf_band != 1 && hwch <= sizeof(ab_rx_ch_map) && hwch >= 1) {
++			status.band = NL80211_BAND_5GHZ;
++			status.freq = ieee80211_channel_to_frequency(ab_rx_ch_map[hwch - 1],
++								     status.band);
++		} else {
++			status.band = NL80211_BAND_2GHZ;
++			status.freq = ieee80211_channel_to_frequency(hwch, status.band);
++		}
++	}
++
+ 	memcpy(IEEE80211_SKB_RXCB(skb), &status, sizeof(status));
+ 
+ 	if (ieee80211_is_beacon(hdr->frame_control)) {
+@@ -321,8 +344,6 @@ static void wcn36xx_set_tx_pdu(struct wcn36xx_tx_bd *bd,
+ 		bd->pdu.mpdu_header_off;
+ 	bd->pdu.mpdu_len = len;
+ 	bd->pdu.tid = tid;
+-	/* Use seq number generated by mac80211 */
+-	bd->pdu.bd_ssn = WCN36XX_TXBD_SSN_FILL_HOST;
+ }
+ 
+ static inline struct wcn36xx_vif *get_vif_by_addr(struct wcn36xx *wcn,
+@@ -419,6 +440,9 @@ static void wcn36xx_set_tx_data(struct wcn36xx_tx_bd *bd,
+ 		tid = ieee80211_get_tid(hdr);
+ 		/* TID->QID is one-to-one mapping */
+ 		bd->queue_id = tid;
++		bd->pdu.bd_ssn = WCN36XX_TXBD_SSN_FILL_DPU_QOS;
++	} else {
++		bd->pdu.bd_ssn = WCN36XX_TXBD_SSN_FILL_DPU_NON_QOS;
+ 	}
+ 
+ 	if (info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT ||
+@@ -429,6 +453,9 @@ static void wcn36xx_set_tx_data(struct wcn36xx_tx_bd *bd,
+ 	if (ieee80211_is_any_nullfunc(hdr->frame_control)) {
+ 		/* Don't use a regular queue for null packet (no ampdu) */
+ 		bd->queue_id = WCN36XX_TX_U_WQ_ID;
++		bd->bd_rate = WCN36XX_BD_RATE_CTRL;
++		if (ieee80211_is_qos_nullfunc(hdr->frame_control))
++			bd->pdu.bd_ssn = WCN36XX_TXBD_SSN_FILL_HOST;
+ 	}
+ 
+ 	if (bcast) {
+@@ -488,6 +515,8 @@ static void wcn36xx_set_tx_mgmt(struct wcn36xx_tx_bd *bd,
+ 		bd->queue_id = WCN36XX_TX_U_WQ_ID;
+ 	*vif_priv = __vif_priv;
+ 
++	bd->pdu.bd_ssn = WCN36XX_TXBD_SSN_FILL_DPU_NON_QOS;
++
+ 	wcn36xx_set_tx_pdu(bd,
+ 			   ieee80211_is_data_qos(hdr->frame_control) ?
+ 			   sizeof(struct ieee80211_qos_hdr) :
+@@ -502,10 +531,11 @@ int wcn36xx_start_tx(struct wcn36xx *wcn,
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ 	struct wcn36xx_vif *vif_priv = NULL;
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+-	unsigned long flags;
+ 	bool is_low = ieee80211_is_data(hdr->frame_control);
+ 	bool bcast = is_broadcast_ether_addr(hdr->addr1) ||
+ 		is_multicast_ether_addr(hdr->addr1);
++	bool ack_ind = (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) &&
++					!(info->flags & IEEE80211_TX_CTL_NO_ACK);
+ 	struct wcn36xx_tx_bd bd;
+ 	int ret;
+ 
+@@ -521,30 +551,16 @@ int wcn36xx_start_tx(struct wcn36xx *wcn,
+ 
+ 	bd.dpu_rf = WCN36XX_BMU_WQ_TX;
+ 
+-	if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) {
++	if (unlikely(ack_ind)) {
+ 		wcn36xx_dbg(WCN36XX_DBG_DXE, "TX_ACK status requested\n");
+ 
+-		spin_lock_irqsave(&wcn->dxe_lock, flags);
+-		if (wcn->tx_ack_skb) {
+-			spin_unlock_irqrestore(&wcn->dxe_lock, flags);
+-			wcn36xx_warn("tx_ack_skb already set\n");
+-			return -EINVAL;
+-		}
+-
+-		wcn->tx_ack_skb = skb;
+-		spin_unlock_irqrestore(&wcn->dxe_lock, flags);
+-
+ 		/* Only one at a time is supported by fw. Stop the TX queues
+ 		 * until the ack status gets back.
+ 		 */
+ 		ieee80211_stop_queues(wcn->hw);
+ 
+-		/* TX watchdog if no TX irq or ack indication received  */
+-		mod_timer(&wcn->tx_ack_timer, jiffies + HZ / 10);
+-
+ 		/* Request ack indication from the firmware */
+-		if (!(info->flags & IEEE80211_TX_CTL_NO_ACK))
+-			bd.tx_comp = 1;
++		bd.tx_comp = 1;
+ 	}
+ 
+ 	/* Data frames served first*/
+@@ -558,14 +574,8 @@ int wcn36xx_start_tx(struct wcn36xx *wcn,
+ 	bd.tx_bd_sign = 0xbdbdbdbd;
+ 
+ 	ret = wcn36xx_dxe_tx_frame(wcn, vif_priv, &bd, skb, is_low);
+-	if (ret && (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS)) {
+-		/* If the skb has not been transmitted,
+-		 * don't keep a reference to it.
+-		 */
+-		spin_lock_irqsave(&wcn->dxe_lock, flags);
+-		wcn->tx_ack_skb = NULL;
+-		spin_unlock_irqrestore(&wcn->dxe_lock, flags);
+-
++	if (unlikely(ret && ack_ind)) {
++		/* If the skb has not been transmitted, resume TX queue */
+ 		ieee80211_wake_queues(wcn->hw);
+ 	}
+ 
+diff --git a/drivers/net/wireless/ath/wcn36xx/txrx.h b/drivers/net/wireless/ath/wcn36xx/txrx.h
+index 032216e82b2be..b54311ffde9c5 100644
+--- a/drivers/net/wireless/ath/wcn36xx/txrx.h
++++ b/drivers/net/wireless/ath/wcn36xx/txrx.h
+@@ -110,7 +110,8 @@ struct wcn36xx_rx_bd {
+ 	/* 0x44 */
+ 	u32	exp_seq_num:12;
+ 	u32	cur_seq_num:12;
+-	u32	fr_type_subtype:8;
++	u32	rf_band:2;
++	u32	fr_type_subtype:6;
+ 
+ 	/* 0x48 */
+ 	u32	msdu_size:16;
+diff --git a/drivers/net/wireless/broadcom/b43/phy_g.c b/drivers/net/wireless/broadcom/b43/phy_g.c
+index d5a1a5c582366..ac72ca39e409b 100644
+--- a/drivers/net/wireless/broadcom/b43/phy_g.c
++++ b/drivers/net/wireless/broadcom/b43/phy_g.c
+@@ -2297,7 +2297,7 @@ static u8 b43_gphy_aci_scan(struct b43_wldev *dev)
+ 	b43_phy_mask(dev, B43_PHY_G_CRS, 0x7FFF);
+ 	b43_set_all_gains(dev, 3, 8, 1);
+ 
+-	start = (channel - 5 > 0) ? channel - 5 : 1;
++	start = (channel > 5) ? channel - 5 : 1;
+ 	end = (channel + 5 < 14) ? channel + 5 : 13;
+ 
+ 	for (i = start; i <= end; i++) {
+diff --git a/drivers/net/wireless/broadcom/b43legacy/radio.c b/drivers/net/wireless/broadcom/b43legacy/radio.c
+index 06891b4f837b9..fdf78c10a05c2 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/radio.c
++++ b/drivers/net/wireless/broadcom/b43legacy/radio.c
+@@ -283,7 +283,7 @@ u8 b43legacy_radio_aci_scan(struct b43legacy_wldev *dev)
+ 			    & 0x7FFF);
+ 	b43legacy_set_all_gains(dev, 3, 8, 1);
+ 
+-	start = (channel - 5 > 0) ? channel - 5 : 1;
++	start = (channel > 5) ? channel - 5 : 1;
+ 	end = (channel + 5 < 14) ? channel + 5 : 13;
+ 
+ 	for (i = start; i <= end; i++) {
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
+index 6d5188b78f2de..0af452dca7664 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
+@@ -75,6 +75,16 @@ static const struct dmi_system_id dmi_platform_data[] = {
+ 		},
+ 		.driver_data = (void *)&acepc_t8_data,
+ 	},
++	{
++		/* Cyberbook T116 rugged tablet */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Default string"),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "20170531"),
++		},
++		/* The factory image nvram file is identical to the ACEPC T8 one */
++		.driver_data = (void *)&acepc_t8_data,
++	},
+ 	{
+ 		/* Match for the GPDwin which unfortunately uses somewhat
+ 		 * generic dmi strings, which is why we test for 4 strings.
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
+index 3123036978a59..caf38ef64d3ce 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
+@@ -741,6 +741,9 @@ bool iwl_mvm_rx_diversity_allowed(struct iwl_mvm *mvm)
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
++	if (iwlmvm_mod_params.power_scheme != IWL_POWER_SCHEME_CAM)
++		return false;
++
+ 	if (num_of_ant(iwl_mvm_get_valid_rx_ant(mvm)) == 1)
+ 		return false;
+ 
+diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
+index 20436a289d5cd..5d6dc1dd050d4 100644
+--- a/drivers/net/wireless/marvell/libertas/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas/if_usb.c
+@@ -292,6 +292,7 @@ err_add_card:
+ 	if_usb_reset_device(cardp);
+ dealloc:
+ 	if_usb_free(cardp);
++	kfree(cardp);
+ 
+ error:
+ 	return r;
+@@ -316,6 +317,7 @@ static void if_usb_disconnect(struct usb_interface *intf)
+ 
+ 	/* Unlink and free urb */
+ 	if_usb_free(cardp);
++	kfree(cardp);
+ 
+ 	usb_set_intfdata(intf, NULL);
+ 	usb_put_dev(interface_to_usbdev(intf));
+diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+index a92916dc81a96..ecce8b56f8a28 100644
+--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+@@ -230,6 +230,7 @@ static int if_usb_probe(struct usb_interface *intf,
+ 
+ dealloc:
+ 	if_usb_free(cardp);
++	kfree(cardp);
+ error:
+ lbtf_deb_leave(LBTF_DEB_MAIN);
+ 	return -ENOMEM;
+@@ -254,6 +255,7 @@ static void if_usb_disconnect(struct usb_interface *intf)
+ 
+ 	/* Unlink and free urb */
+ 	if_usb_free(cardp);
++	kfree(cardp);
+ 
+ 	usb_set_intfdata(intf, NULL);
+ 	usb_put_dev(interface_to_usbdev(intf));
+diff --git a/drivers/net/wireless/marvell/mwifiex/11n.c b/drivers/net/wireless/marvell/mwifiex/11n.c
+index 6696bce561786..cf08a4af84d6d 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11n.c
++++ b/drivers/net/wireless/marvell/mwifiex/11n.c
+@@ -657,14 +657,15 @@ int mwifiex_send_delba(struct mwifiex_private *priv, int tid, u8 *peer_mac,
+ 	uint16_t del_ba_param_set;
+ 
+ 	memset(&delba, 0, sizeof(delba));
+-	delba.del_ba_param_set = cpu_to_le16(tid << DELBA_TID_POS);
+ 
+-	del_ba_param_set = le16_to_cpu(delba.del_ba_param_set);
++	del_ba_param_set = tid << DELBA_TID_POS;
++
+ 	if (initiator)
+ 		del_ba_param_set |= IEEE80211_DELBA_PARAM_INITIATOR_MASK;
+ 	else
+ 		del_ba_param_set &= ~IEEE80211_DELBA_PARAM_INITIATOR_MASK;
+ 
++	delba.del_ba_param_set = cpu_to_le16(del_ba_param_set);
+ 	memcpy(&delba.peer_mac_addr, peer_mac, ETH_ALEN);
+ 
+ 	/* We don't wait for the response of this command */
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index a6b9dc6700b14..3d1b5d3d295ae 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -908,16 +908,20 @@ mwifiex_init_new_priv_params(struct mwifiex_private *priv,
+ 	switch (type) {
+ 	case NL80211_IFTYPE_STATION:
+ 	case NL80211_IFTYPE_ADHOC:
+-		priv->bss_role =  MWIFIEX_BSS_ROLE_STA;
++		priv->bss_role = MWIFIEX_BSS_ROLE_STA;
++		priv->bss_type = MWIFIEX_BSS_TYPE_STA;
+ 		break;
+ 	case NL80211_IFTYPE_P2P_CLIENT:
+-		priv->bss_role =  MWIFIEX_BSS_ROLE_STA;
++		priv->bss_role = MWIFIEX_BSS_ROLE_STA;
++		priv->bss_type = MWIFIEX_BSS_TYPE_P2P;
+ 		break;
+ 	case NL80211_IFTYPE_P2P_GO:
+-		priv->bss_role =  MWIFIEX_BSS_ROLE_UAP;
++		priv->bss_role = MWIFIEX_BSS_ROLE_UAP;
++		priv->bss_type = MWIFIEX_BSS_TYPE_P2P;
+ 		break;
+ 	case NL80211_IFTYPE_AP:
+ 		priv->bss_role = MWIFIEX_BSS_ROLE_UAP;
++		priv->bss_type = MWIFIEX_BSS_TYPE_UAP;
+ 		break;
+ 	default:
+ 		mwifiex_dbg(adapter, ERROR,
+@@ -1229,29 +1233,15 @@ mwifiex_cfg80211_change_virtual_intf(struct wiphy *wiphy,
+ 		break;
+ 	case NL80211_IFTYPE_P2P_CLIENT:
+ 	case NL80211_IFTYPE_P2P_GO:
++		if (mwifiex_cfg80211_deinit_p2p(priv))
++			return -EFAULT;
++
+ 		switch (type) {
+-		case NL80211_IFTYPE_STATION:
+-			if (mwifiex_cfg80211_deinit_p2p(priv))
+-				return -EFAULT;
+-			priv->adapter->curr_iface_comb.p2p_intf--;
+-			priv->adapter->curr_iface_comb.sta_intf++;
+-			dev->ieee80211_ptr->iftype = type;
+-			if (mwifiex_deinit_priv_params(priv))
+-				return -1;
+-			if (mwifiex_init_new_priv_params(priv, dev, type))
+-				return -1;
+-			if (mwifiex_sta_init_cmd(priv, false, false))
+-				return -1;
+-			break;
+ 		case NL80211_IFTYPE_ADHOC:
+-			if (mwifiex_cfg80211_deinit_p2p(priv))
+-				return -EFAULT;
++		case NL80211_IFTYPE_STATION:
+ 			return mwifiex_change_vif_to_sta_adhoc(dev, curr_iftype,
+ 							       type, params);
+-			break;
+ 		case NL80211_IFTYPE_AP:
+-			if (mwifiex_cfg80211_deinit_p2p(priv))
+-				return -EFAULT;
+ 			return mwifiex_change_vif_to_ap(dev, curr_iftype, type,
+ 							params);
+ 		case NL80211_IFTYPE_UNSPECIFIED:
+diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
+index b2de8d03c5fac..7c137eba8cda7 100644
+--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
+@@ -17,6 +17,7 @@
+  * this warranty disclaimer.
+  */
+ 
++#include <linux/iopoll.h>
+ #include <linux/firmware.h>
+ 
+ #include "decl.h"
+@@ -637,11 +638,15 @@ static void mwifiex_delay_for_sleep_cookie(struct mwifiex_adapter *adapter,
+ 			    "max count reached while accessing sleep cookie\n");
+ }
+ 
++#define N_WAKEUP_TRIES_SHORT_INTERVAL 15
++#define N_WAKEUP_TRIES_LONG_INTERVAL 35
++
+ /* This function wakes up the card by reading fw_status register. */
+ static int mwifiex_pm_wakeup_card(struct mwifiex_adapter *adapter)
+ {
+ 	struct pcie_service_card *card = adapter->card;
+ 	const struct mwifiex_pcie_card_reg *reg = card->pcie.reg;
++	int retval;
+ 
+ 	mwifiex_dbg(adapter, EVENT,
+ 		    "event: Wakeup device...\n");
+@@ -649,11 +654,24 @@ static int mwifiex_pm_wakeup_card(struct mwifiex_adapter *adapter)
+ 	if (reg->sleep_cookie)
+ 		mwifiex_pcie_dev_wakeup_delay(adapter);
+ 
+-	/* Accessing fw_status register will wakeup device */
+-	if (mwifiex_write_reg(adapter, reg->fw_status, FIRMWARE_READY_PCIE)) {
+-		mwifiex_dbg(adapter, ERROR,
+-			    "Writing fw_status register failed\n");
+-		return -1;
++	/* The 88W8897 PCIe+USB firmware (latest version 15.68.19.p21) sometimes
++	 * appears to ignore or miss our wakeup request, so we continue trying
++	 * until we receive an interrupt from the card.
++	 */
++	if (read_poll_timeout(mwifiex_write_reg, retval,
++			      READ_ONCE(adapter->int_status) != 0,
++			      500, 500 * N_WAKEUP_TRIES_SHORT_INTERVAL,
++			      false,
++			      adapter, reg->fw_status, FIRMWARE_READY_PCIE)) {
++		if (read_poll_timeout(mwifiex_write_reg, retval,
++				      READ_ONCE(adapter->int_status) != 0,
++				      10000, 10000 * N_WAKEUP_TRIES_LONG_INTERVAL,
++				      false,
++				      adapter, reg->fw_status, FIRMWARE_READY_PCIE)) {
++			mwifiex_dbg(adapter, ERROR,
++				    "Firmware didn't wake up\n");
++			return -EIO;
++		}
+ 	}
+ 
+ 	if (reg->sleep_cookie) {
+@@ -1480,6 +1498,14 @@ mwifiex_pcie_send_data(struct mwifiex_adapter *adapter, struct sk_buff *skb,
+ 			ret = -1;
+ 			goto done_unmap;
+ 		}
++
++		/* The firmware (latest version 15.68.19.p21) of the 88W8897 PCIe+USB card
++		 * seems to crash randomly after setting the TX ring write pointer when
++		 * ASPM powersaving is enabled. A workaround seems to be keeping the bus
++		 * busy by reading a random register afterwards.
++		 */
++		mwifiex_read_reg(adapter, PCI_VENDOR_ID, &rx_val);
++
+ 		if ((mwifiex_pcie_txbd_not_full(card)) &&
+ 		    tx_param->next_pkt_len) {
+ 			/* have more packets and TxBD still can hold more */
+diff --git a/drivers/net/wireless/marvell/mwifiex/usb.c b/drivers/net/wireless/marvell/mwifiex/usb.c
+index 426e39d4ccf0f..9736aa0ab7fd4 100644
+--- a/drivers/net/wireless/marvell/mwifiex/usb.c
++++ b/drivers/net/wireless/marvell/mwifiex/usb.c
+@@ -505,6 +505,22 @@ static int mwifiex_usb_probe(struct usb_interface *intf,
+ 		}
+ 	}
+ 
++	switch (card->usb_boot_state) {
++	case USB8XXX_FW_DNLD:
++		/* Reject broken descriptors. */
++		if (!card->rx_cmd_ep || !card->tx_cmd_ep)
++			return -ENODEV;
++		if (card->bulk_out_maxpktsize == 0)
++			return -ENODEV;
++		break;
++	case USB8XXX_FW_READY:
++		/* Assume the driver can handle missing endpoints for now. */
++		break;
++	default:
++		WARN_ON(1);
++		return -ENODEV;
++	}
++
+ 	usb_set_intfdata(intf, card);
+ 
+ 	ret = mwifiex_add_card(card, &card->fw_done, &usb_ops,
+diff --git a/drivers/net/wireless/marvell/mwl8k.c b/drivers/net/wireless/marvell/mwl8k.c
+index 27b7d4b779e0b..dc91ac8cbd48b 100644
+--- a/drivers/net/wireless/marvell/mwl8k.c
++++ b/drivers/net/wireless/marvell/mwl8k.c
+@@ -5796,8 +5796,8 @@ static void mwl8k_fw_state_machine(const struct firmware *fw, void *context)
+ fail:
+ 	priv->fw_state = FW_STATE_ERROR;
+ 	complete(&priv->firmware_loading_complete);
+-	device_release_driver(&priv->pdev->dev);
+ 	mwl8k_release_firmware(priv);
++	device_release_driver(&priv->pdev->dev);
+ }
+ 
+ #define MAX_RESTART_ATTEMPTS 1
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index f44f478bb970e..424be103093c6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -672,12 +672,15 @@ int mt7615_mac_write_txwi(struct mt7615_dev *dev, __le32 *txwi,
+ 	if (info->flags & IEEE80211_TX_CTL_NO_ACK)
+ 		txwi[3] |= cpu_to_le32(MT_TXD3_NO_ACK);
+ 
+-	txwi[7] = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
+-		  FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype) |
+-		  FIELD_PREP(MT_TXD7_SPE_IDX, 0x18);
+-	if (!is_mmio)
+-		txwi[8] = FIELD_PREP(MT_TXD8_L_TYPE, fc_type) |
+-			  FIELD_PREP(MT_TXD8_L_SUB_TYPE, fc_stype);
++	val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
++	      FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype) |
++	      FIELD_PREP(MT_TXD7_SPE_IDX, 0x18);
++	txwi[7] = cpu_to_le32(val);
++	if (!is_mmio) {
++		val = FIELD_PREP(MT_TXD8_L_TYPE, fc_type) |
++		      FIELD_PREP(MT_TXD8_L_SUB_TYPE, fc_stype);
++		txwi[8] = cpu_to_le32(val);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+index da6d3f51f6d47..677082d8659a6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+@@ -176,7 +176,7 @@ void mt76x02_mac_wcid_set_drop(struct mt76x02_dev *dev, u8 idx, bool drop)
+ 		mt76_wr(dev, MT_WCID_DROP(idx), (val & ~bit) | (bit * drop));
+ }
+ 
+-static __le16
++static u16
+ mt76x02_mac_tx_rate_val(struct mt76x02_dev *dev,
+ 			const struct ieee80211_tx_rate *rate, u8 *nss_val)
+ {
+@@ -222,14 +222,14 @@ mt76x02_mac_tx_rate_val(struct mt76x02_dev *dev,
+ 		rateval |= MT_RXWI_RATE_SGI;
+ 
+ 	*nss_val = nss;
+-	return cpu_to_le16(rateval);
++	return rateval;
+ }
+ 
+ void mt76x02_mac_wcid_set_rate(struct mt76x02_dev *dev, struct mt76_wcid *wcid,
+ 			       const struct ieee80211_tx_rate *rate)
+ {
+ 	s8 max_txpwr_adj = mt76x02_tx_get_max_txpwr_adj(dev, rate);
+-	__le16 rateval;
++	u16 rateval;
+ 	u32 tx_info;
+ 	s8 nss;
+ 
+@@ -342,7 +342,7 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi,
+ 	struct ieee80211_key_conf *key = info->control.hw_key;
+ 	u32 wcid_tx_info;
+ 	u16 rate_ht_mask = FIELD_PREP(MT_RXWI_RATE_PHY, BIT(1) | BIT(2));
+-	u16 txwi_flags = 0;
++	u16 txwi_flags = 0, rateval;
+ 	u8 nss;
+ 	s8 txpwr_adj, max_txpwr_adj;
+ 	u8 ccmp_pn[8], nstreams = dev->chainmask & 0xf;
+@@ -380,14 +380,15 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi,
+ 
+ 	if (wcid && (rate->idx < 0 || !rate->count)) {
+ 		wcid_tx_info = wcid->tx_info;
+-		txwi->rate = FIELD_GET(MT_WCID_TX_INFO_RATE, wcid_tx_info);
++		rateval = FIELD_GET(MT_WCID_TX_INFO_RATE, wcid_tx_info);
+ 		max_txpwr_adj = FIELD_GET(MT_WCID_TX_INFO_TXPWR_ADJ,
+ 					  wcid_tx_info);
+ 		nss = FIELD_GET(MT_WCID_TX_INFO_NSS, wcid_tx_info);
+ 	} else {
+-		txwi->rate = mt76x02_mac_tx_rate_val(dev, rate, &nss);
++		rateval = mt76x02_mac_tx_rate_val(dev, rate, &nss);
+ 		max_txpwr_adj = mt76x02_tx_get_max_txpwr_adj(dev, rate);
+ 	}
++	txwi->rate = cpu_to_le16(rateval);
+ 
+ 	txpwr_adj = mt76x02_tx_get_txpwr_adj(dev, dev->txpower_conf,
+ 					     max_txpwr_adj);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index ea71409751519..7b6e9a5352b35 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -631,7 +631,7 @@ mt7915_mcu_alloc_sta_req(struct mt7915_dev *dev, struct mt7915_vif *mvif,
+ 		.bss_idx = mvif->idx,
+ 		.wlan_idx_lo = msta ? to_wcid_lo(msta->wcid.idx) : 0,
+ 		.wlan_idx_hi = msta ? to_wcid_hi(msta->wcid.idx) : 0,
+-		.muar_idx = msta ? mvif->omac_idx : 0,
++		.muar_idx = msta && msta->wcid.sta ? mvif->omac_idx : 0xe,
+ 		.is_tlv_append = 1,
+ 	};
+ 	struct sk_buff *skb;
+@@ -667,7 +667,7 @@ mt7915_mcu_alloc_wtbl_req(struct mt7915_dev *dev, struct mt7915_sta *msta,
+ 	}
+ 
+ 	if (sta_hdr)
+-		sta_hdr->len = cpu_to_le16(sizeof(hdr));
++		le16_add_cpu(&sta_hdr->len, sizeof(hdr));
+ 
+ 	return skb_put_data(nskb, &hdr, sizeof(hdr));
+ }
+@@ -830,7 +830,7 @@ static void mt7915_check_he_obss_narrow_bw_ru_iter(struct wiphy *wiphy,
+ 
+ 	elem = ieee80211_bss_get_elem(bss, WLAN_EID_EXT_CAPABILITY);
+ 
+-	if (!elem || elem->datalen < 10 ||
++	if (!elem || elem->datalen <= 10 ||
+ 	    !(elem->data[10] &
+ 	      WLAN_EXT_CAPA10_OBSS_NARROW_BW_RU_TOLERANCE_SUPPORT))
+ 		data->tolerated = false;
+@@ -2648,7 +2648,7 @@ out:
+ 	default:
+ 		ret = -EAGAIN;
+ 		dev_err(dev->mt76.dev, "Failed to release patch semaphore\n");
+-		goto out;
++		break;
+ 	}
+ 	release_firmware(fw);
+ 
+diff --git a/drivers/net/wireless/microchip/wilc1000/cfg80211.c b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+index c1ac1d84790f0..6be5ac8ba518d 100644
+--- a/drivers/net/wireless/microchip/wilc1000/cfg80211.c
++++ b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+@@ -129,8 +129,7 @@ static void cfg_scan_result(enum scan_event scan_event,
+ 						info->frame_len,
+ 						(s32)info->rssi * 100,
+ 						GFP_KERNEL);
+-		if (!bss)
+-			cfg80211_put_bss(wiphy, bss);
++		cfg80211_put_bss(wiphy, bss);
+ 	} else if (scan_event == SCAN_EVENT_DONE) {
+ 		mutex_lock(&priv->scan_req_lock);
+ 
+diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8187/rtl8225.c b/drivers/net/wireless/realtek/rtl818x/rtl8187/rtl8225.c
+index 585784258c665..4efab907a3ac6 100644
+--- a/drivers/net/wireless/realtek/rtl818x/rtl8187/rtl8225.c
++++ b/drivers/net/wireless/realtek/rtl818x/rtl8187/rtl8225.c
+@@ -28,7 +28,7 @@ u8 rtl818x_ioread8_idx(struct rtl8187_priv *priv,
+ 	usb_control_msg(priv->udev, usb_rcvctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_GET_REG, RTL8187_REQT_READ,
+ 			(unsigned long)addr, idx & 0x03,
+-			&priv->io_dmabuf->bits8, sizeof(val), HZ / 2);
++			&priv->io_dmabuf->bits8, sizeof(val), 500);
+ 
+ 	val = priv->io_dmabuf->bits8;
+ 	mutex_unlock(&priv->io_mutex);
+@@ -45,7 +45,7 @@ u16 rtl818x_ioread16_idx(struct rtl8187_priv *priv,
+ 	usb_control_msg(priv->udev, usb_rcvctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_GET_REG, RTL8187_REQT_READ,
+ 			(unsigned long)addr, idx & 0x03,
+-			&priv->io_dmabuf->bits16, sizeof(val), HZ / 2);
++			&priv->io_dmabuf->bits16, sizeof(val), 500);
+ 
+ 	val = priv->io_dmabuf->bits16;
+ 	mutex_unlock(&priv->io_mutex);
+@@ -62,7 +62,7 @@ u32 rtl818x_ioread32_idx(struct rtl8187_priv *priv,
+ 	usb_control_msg(priv->udev, usb_rcvctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_GET_REG, RTL8187_REQT_READ,
+ 			(unsigned long)addr, idx & 0x03,
+-			&priv->io_dmabuf->bits32, sizeof(val), HZ / 2);
++			&priv->io_dmabuf->bits32, sizeof(val), 500);
+ 
+ 	val = priv->io_dmabuf->bits32;
+ 	mutex_unlock(&priv->io_mutex);
+@@ -79,7 +79,7 @@ void rtl818x_iowrite8_idx(struct rtl8187_priv *priv,
+ 	usb_control_msg(priv->udev, usb_sndctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_SET_REG, RTL8187_REQT_WRITE,
+ 			(unsigned long)addr, idx & 0x03,
+-			&priv->io_dmabuf->bits8, sizeof(val), HZ / 2);
++			&priv->io_dmabuf->bits8, sizeof(val), 500);
+ 
+ 	mutex_unlock(&priv->io_mutex);
+ }
+@@ -93,7 +93,7 @@ void rtl818x_iowrite16_idx(struct rtl8187_priv *priv,
+ 	usb_control_msg(priv->udev, usb_sndctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_SET_REG, RTL8187_REQT_WRITE,
+ 			(unsigned long)addr, idx & 0x03,
+-			&priv->io_dmabuf->bits16, sizeof(val), HZ / 2);
++			&priv->io_dmabuf->bits16, sizeof(val), 500);
+ 
+ 	mutex_unlock(&priv->io_mutex);
+ }
+@@ -107,7 +107,7 @@ void rtl818x_iowrite32_idx(struct rtl8187_priv *priv,
+ 	usb_control_msg(priv->udev, usb_sndctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_SET_REG, RTL8187_REQT_WRITE,
+ 			(unsigned long)addr, idx & 0x03,
+-			&priv->io_dmabuf->bits32, sizeof(val), HZ / 2);
++			&priv->io_dmabuf->bits32, sizeof(val), 500);
+ 
+ 	mutex_unlock(&priv->io_mutex);
+ }
+@@ -183,7 +183,7 @@ static void rtl8225_write_8051(struct ieee80211_hw *dev, u8 addr, __le16 data)
+ 	usb_control_msg(priv->udev, usb_sndctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_SET_REG, RTL8187_REQT_WRITE,
+ 			addr, 0x8225, &priv->io_dmabuf->bits16, sizeof(data),
+-			HZ / 2);
++			500);
+ 
+ 	mutex_unlock(&priv->io_mutex);
+ 
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
+index 0452630bcfacc..40bcfabd2d214 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.c
++++ b/drivers/net/wireless/realtek/rtw88/fw.c
+@@ -1421,12 +1421,10 @@ static void rtw_fw_read_fifo_page(struct rtw_dev *rtwdev, u32 offset, u32 size,
+ 	u32 i;
+ 	u16 idx = 0;
+ 	u16 ctl;
+-	u8 rcr;
+ 
+-	rcr = rtw_read8(rtwdev, REG_RCR + 2);
+ 	ctl = rtw_read16(rtwdev, REG_PKTBUF_DBG_CTRL) & 0xf000;
+ 	/* disable rx clock gate */
+-	rtw_write8(rtwdev, REG_RCR, rcr | BIT(3));
++	rtw_write32_set(rtwdev, REG_RCR, BIT_DISGCLK);
+ 
+ 	do {
+ 		rtw_write16(rtwdev, REG_PKTBUF_DBG_CTRL, start_pg | ctl);
+@@ -1445,7 +1443,8 @@ static void rtw_fw_read_fifo_page(struct rtw_dev *rtwdev, u32 offset, u32 size,
+ 
+ out:
+ 	rtw_write16(rtwdev, REG_PKTBUF_DBG_CTRL, ctl);
+-	rtw_write8(rtwdev, REG_RCR + 2, rcr);
++	/* restore rx clock gate */
++	rtw_write32_clr(rtwdev, REG_RCR, BIT_DISGCLK);
+ }
+ 
+ static void rtw_fw_read_fifo(struct rtw_dev *rtwdev, enum rtw_fw_fifo_sel sel,
+diff --git a/drivers/net/wireless/realtek/rtw88/reg.h b/drivers/net/wireless/realtek/rtw88/reg.h
+index aca3dbdc2d5a5..9088bfb2a3157 100644
+--- a/drivers/net/wireless/realtek/rtw88/reg.h
++++ b/drivers/net/wireless/realtek/rtw88/reg.h
+@@ -400,6 +400,7 @@
+ #define BIT_MFBEN		BIT(22)
+ #define BIT_DISCHKPPDLLEN	BIT(21)
+ #define BIT_PKTCTL_DLEN		BIT(20)
++#define BIT_DISGCLK		BIT(19)
+ #define BIT_TIM_PARSER_EN	BIT(18)
+ #define BIT_BC_MD_EN		BIT(17)
+ #define BIT_UC_MD_EN		BIT(16)
+diff --git a/drivers/net/wireless/rsi/rsi_91x_core.c b/drivers/net/wireless/rsi/rsi_91x_core.c
+index 2d49c5b5eefb4..9c4c585572488 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_core.c
++++ b/drivers/net/wireless/rsi/rsi_91x_core.c
+@@ -400,6 +400,8 @@ void rsi_core_xmit(struct rsi_common *common, struct sk_buff *skb)
+ 
+ 	info = IEEE80211_SKB_CB(skb);
+ 	tx_params = (struct skb_info *)info->driver_data;
++	/* info->driver_data and info->control part of union so make copy */
++	tx_params->have_key = !!info->control.hw_key;
+ 	wh = (struct ieee80211_hdr *)&skb->data[0];
+ 	tx_params->sta_id = 0;
+ 
+diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
+index f4a26f16f00f4..dca81a4bbdd7f 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
+@@ -203,7 +203,7 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
+ 		wh->frame_control |= cpu_to_le16(RSI_SET_PS_ENABLE);
+ 
+ 	if ((!(info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT)) &&
+-	    info->control.hw_key) {
++	    tx_params->have_key) {
+ 		if (rsi_is_cipher_wep(common))
+ 			ieee80211_size += 4;
+ 		else
+@@ -214,15 +214,17 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
+ 			RSI_WIFI_DATA_Q);
+ 	data_desc->header_len = ieee80211_size;
+ 
+-	if (common->min_rate != RSI_RATE_AUTO) {
++	if (common->rate_config[common->band].fixed_enabled) {
+ 		/* Send fixed rate */
++		u16 fixed_rate = common->rate_config[common->band].fixed_hw_rate;
++
+ 		data_desc->frame_info = cpu_to_le16(RATE_INFO_ENABLE);
+-		data_desc->rate_info = cpu_to_le16(common->min_rate);
++		data_desc->rate_info = cpu_to_le16(fixed_rate);
+ 
+ 		if (conf_is_ht40(&common->priv->hw->conf))
+ 			data_desc->bbp_info = cpu_to_le16(FULL40M_ENABLE);
+ 
+-		if ((common->vif_info[0].sgi) && (common->min_rate & 0x100)) {
++		if (common->vif_info[0].sgi && (fixed_rate & 0x100)) {
+ 		       /* Only MCS rates */
+ 			data_desc->rate_info |=
+ 				cpu_to_le16(ENABLE_SHORTGI_RATE);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mac80211.c b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+index 57c9e3559dfd1..8abf9f699db17 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mac80211.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+@@ -510,7 +510,6 @@ static int rsi_mac80211_add_interface(struct ieee80211_hw *hw,
+ 	if ((vif->type == NL80211_IFTYPE_AP) ||
+ 	    (vif->type == NL80211_IFTYPE_P2P_GO)) {
+ 		rsi_send_rx_filter_frame(common, DISALLOW_BEACONS);
+-		common->min_rate = RSI_RATE_AUTO;
+ 		for (i = 0; i < common->max_stations; i++)
+ 			common->stations[i].sta = NULL;
+ 	}
+@@ -1211,20 +1210,32 @@ static int rsi_mac80211_set_rate_mask(struct ieee80211_hw *hw,
+ 				      struct ieee80211_vif *vif,
+ 				      const struct cfg80211_bitrate_mask *mask)
+ {
++	const unsigned int mcs_offset = ARRAY_SIZE(rsi_rates);
+ 	struct rsi_hw *adapter = hw->priv;
+ 	struct rsi_common *common = adapter->priv;
+-	enum nl80211_band band = hw->conf.chandef.chan->band;
++	int i;
+ 
+ 	mutex_lock(&common->mutex);
+-	common->fixedrate_mask[band] = 0;
+ 
+-	if (mask->control[band].legacy == 0xfff) {
+-		common->fixedrate_mask[band] =
+-			(mask->control[band].ht_mcs[0] << 12);
+-	} else {
+-		common->fixedrate_mask[band] =
+-			mask->control[band].legacy;
++	for (i = 0; i < ARRAY_SIZE(common->rate_config); i++) {
++		struct rsi_rate_config *cfg = &common->rate_config[i];
++		u32 bm;
++
++		bm = mask->control[i].legacy | (mask->control[i].ht_mcs[0] << mcs_offset);
++		if (hweight32(bm) == 1) { /* single rate */
++			int rate_index = ffs(bm) - 1;
++
++			if (rate_index < mcs_offset)
++				cfg->fixed_hw_rate = rsi_rates[rate_index].hw_value;
++			else
++				cfg->fixed_hw_rate = rsi_mcsrates[rate_index - mcs_offset];
++			cfg->fixed_enabled = true;
++		} else {
++			cfg->configured_mask = bm;
++			cfg->fixed_enabled = false;
++		}
+ 	}
++
+ 	mutex_unlock(&common->mutex);
+ 
+ 	return 0;
+@@ -1361,46 +1372,6 @@ void rsi_indicate_pkt_to_os(struct rsi_common *common,
+ 	ieee80211_rx_irqsafe(hw, skb);
+ }
+ 
+-static void rsi_set_min_rate(struct ieee80211_hw *hw,
+-			     struct ieee80211_sta *sta,
+-			     struct rsi_common *common)
+-{
+-	u8 band = hw->conf.chandef.chan->band;
+-	u8 ii;
+-	u32 rate_bitmap;
+-	bool matched = false;
+-
+-	common->bitrate_mask[band] = sta->supp_rates[band];
+-
+-	rate_bitmap = (common->fixedrate_mask[band] & sta->supp_rates[band]);
+-
+-	if (rate_bitmap & 0xfff) {
+-		/* Find out the min rate */
+-		for (ii = 0; ii < ARRAY_SIZE(rsi_rates); ii++) {
+-			if (rate_bitmap & BIT(ii)) {
+-				common->min_rate = rsi_rates[ii].hw_value;
+-				matched = true;
+-				break;
+-			}
+-		}
+-	}
+-
+-	common->vif_info[0].is_ht = sta->ht_cap.ht_supported;
+-
+-	if ((common->vif_info[0].is_ht) && (rate_bitmap >> 12)) {
+-		for (ii = 0; ii < ARRAY_SIZE(rsi_mcsrates); ii++) {
+-			if ((rate_bitmap >> 12) & BIT(ii)) {
+-				common->min_rate = rsi_mcsrates[ii];
+-				matched = true;
+-				break;
+-			}
+-		}
+-	}
+-
+-	if (!matched)
+-		common->min_rate = 0xffff;
+-}
+-
+ /**
+  * rsi_mac80211_sta_add() - This function notifies driver about a peer getting
+  *			    connected.
+@@ -1499,9 +1470,9 @@ static int rsi_mac80211_sta_add(struct ieee80211_hw *hw,
+ 
+ 	if ((vif->type == NL80211_IFTYPE_STATION) ||
+ 	    (vif->type == NL80211_IFTYPE_P2P_CLIENT)) {
+-		rsi_set_min_rate(hw, sta, common);
++		common->bitrate_mask[common->band] = sta->supp_rates[common->band];
++		common->vif_info[0].is_ht = sta->ht_cap.ht_supported;
+ 		if (sta->ht_cap.ht_supported) {
+-			common->vif_info[0].is_ht = true;
+ 			common->bitrate_mask[NL80211_BAND_2GHZ] =
+ 					sta->supp_rates[NL80211_BAND_2GHZ];
+ 			if ((sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20) ||
+@@ -1575,7 +1546,6 @@ static int rsi_mac80211_sta_remove(struct ieee80211_hw *hw,
+ 		bss->qos = sta->wme;
+ 		common->bitrate_mask[NL80211_BAND_2GHZ] = 0;
+ 		common->bitrate_mask[NL80211_BAND_5GHZ] = 0;
+-		common->min_rate = 0xffff;
+ 		common->vif_info[0].is_ht = false;
+ 		common->vif_info[0].sgi = false;
+ 		common->vif_info[0].seq_start = 0;
+diff --git a/drivers/net/wireless/rsi/rsi_91x_main.c b/drivers/net/wireless/rsi/rsi_91x_main.c
+index 9a3d2439a8e7a..8c638cfeac52f 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_main.c
++++ b/drivers/net/wireless/rsi/rsi_91x_main.c
+@@ -211,9 +211,10 @@ int rsi_read_pkt(struct rsi_common *common, u8 *rx_pkt, s32 rcv_pkt_len)
+ 			bt_pkt_type = frame_desc[offset + BT_RX_PKT_TYPE_OFST];
+ 			if (bt_pkt_type == BT_CARD_READY_IND) {
+ 				rsi_dbg(INFO_ZONE, "BT Card ready recvd\n");
+-				if (rsi_bt_ops.attach(common, &g_proto_ops))
+-					rsi_dbg(ERR_ZONE,
+-						"Failed to attach BT module\n");
++				if (common->fsm_state == FSM_MAC_INIT_DONE)
++					rsi_attach_bt(common);
++				else
++					common->bt_defer_attach = true;
+ 			} else {
+ 				if (common->bt_adapter)
+ 					rsi_bt_ops.recv_pkt(common->bt_adapter,
+@@ -278,6 +279,15 @@ void rsi_set_bt_context(void *priv, void *bt_context)
+ }
+ #endif
+ 
++void rsi_attach_bt(struct rsi_common *common)
++{
++#ifdef CONFIG_RSI_COEX
++	if (rsi_bt_ops.attach(common, &g_proto_ops))
++		rsi_dbg(ERR_ZONE,
++			"Failed to attach BT module\n");
++#endif
++}
++
+ /**
+  * rsi_91x_init() - This function initializes os interface operations.
+  * @oper_mode: One of DEV_OPMODE_*.
+@@ -359,6 +369,7 @@ struct rsi_hw *rsi_91x_init(u16 oper_mode)
+ 	if (common->coex_mode > 1) {
+ 		if (rsi_coex_attach(common)) {
+ 			rsi_dbg(ERR_ZONE, "Failed to init coex module\n");
++			rsi_kill_thread(&common->tx_thread);
+ 			goto err;
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mgmt.c b/drivers/net/wireless/rsi/rsi_91x_mgmt.c
+index b6d050a2fbe7e..9000a5d510034 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mgmt.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mgmt.c
+@@ -276,7 +276,7 @@ static void rsi_set_default_parameters(struct rsi_common *common)
+ 	common->channel_width = BW_20MHZ;
+ 	common->rts_threshold = IEEE80211_MAX_RTS_THRESHOLD;
+ 	common->channel = 1;
+-	common->min_rate = 0xffff;
++	memset(&common->rate_config, 0, sizeof(common->rate_config));
+ 	common->fsm_state = FSM_CARD_NOT_READY;
+ 	common->iface_down = true;
+ 	common->endpoint = EP_2GHZ_20MHZ;
+@@ -1314,7 +1314,7 @@ static int rsi_send_auto_rate_request(struct rsi_common *common,
+ 	u8 band = hw->conf.chandef.chan->band;
+ 	u8 num_supported_rates = 0;
+ 	u8 rate_table_offset, rate_offset = 0;
+-	u32 rate_bitmap;
++	u32 rate_bitmap, configured_rates;
+ 	u16 *selected_rates, min_rate;
+ 	bool is_ht = false, is_sgi = false;
+ 	u16 frame_len = sizeof(struct rsi_auto_rate);
+@@ -1364,6 +1364,10 @@ static int rsi_send_auto_rate_request(struct rsi_common *common,
+ 			is_sgi = true;
+ 	}
+ 
++	/* Limit to any rates administratively configured by cfg80211 */
++	configured_rates = common->rate_config[band].configured_mask ?: 0xffffffff;
++	rate_bitmap &= configured_rates;
++
+ 	if (band == NL80211_BAND_2GHZ) {
+ 		if ((rate_bitmap == 0) && (is_ht))
+ 			min_rate = RSI_RATE_MCS0;
+@@ -1389,10 +1393,13 @@ static int rsi_send_auto_rate_request(struct rsi_common *common,
+ 	num_supported_rates = jj;
+ 
+ 	if (is_ht) {
+-		for (ii = 0; ii < ARRAY_SIZE(mcs); ii++)
+-			selected_rates[jj++] = mcs[ii];
+-		num_supported_rates += ARRAY_SIZE(mcs);
+-		rate_offset += ARRAY_SIZE(mcs);
++		for (ii = 0; ii < ARRAY_SIZE(mcs); ii++) {
++			if (configured_rates & BIT(ii + ARRAY_SIZE(rsi_rates))) {
++				selected_rates[jj++] = mcs[ii];
++				num_supported_rates++;
++				rate_offset++;
++			}
++		}
+ 	}
+ 
+ 	sort(selected_rates, jj, sizeof(u16), &rsi_compare, NULL);
+@@ -1482,7 +1489,7 @@ void rsi_inform_bss_status(struct rsi_common *common,
+ 					      qos_enable,
+ 					      aid, sta_id,
+ 					      vif);
+-		if (common->min_rate == 0xffff)
++		if (!common->rate_config[common->band].fixed_enabled)
+ 			rsi_send_auto_rate_request(common, sta, sta_id, vif);
+ 		if (opmode == RSI_OPMODE_STA &&
+ 		    !(assoc_cap & WLAN_CAPABILITY_PRIVACY) &&
+@@ -2071,6 +2078,9 @@ static int rsi_handle_ta_confirm_type(struct rsi_common *common,
+ 				if (common->reinit_hw) {
+ 					complete(&common->wlan_init_completion);
+ 				} else {
++					if (common->bt_defer_attach)
++						rsi_attach_bt(common);
++
+ 					return rsi_mac80211_attach(common);
+ 				}
+ 			}
+diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio.c b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+index 3a243c5326471..8108f941ccd3f 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_sdio.c
++++ b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+@@ -24,10 +24,7 @@
+ /* Default operating mode is wlan STA + BT */
+ static u16 dev_oper_mode = DEV_OPMODE_STA_BT_DUAL;
+ module_param(dev_oper_mode, ushort, 0444);
+-MODULE_PARM_DESC(dev_oper_mode,
+-		 "1[Wi-Fi], 4[BT], 8[BT LE], 5[Wi-Fi STA + BT classic]\n"
+-		 "9[Wi-Fi STA + BT LE], 13[Wi-Fi STA + BT classic + BT LE]\n"
+-		 "6[AP + BT classic], 14[AP + BT classic + BT LE]");
++MODULE_PARM_DESC(dev_oper_mode, DEV_OPMODE_PARAM_DESC);
+ 
+ /**
+  * rsi_sdio_set_cmd52_arg() - This function prepares cmd 52 read/write arg.
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index 983045ad79dcf..d881df9ebd0c3 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -25,10 +25,7 @@
+ /* Default operating mode is wlan STA + BT */
+ static u16 dev_oper_mode = DEV_OPMODE_STA_BT_DUAL;
+ module_param(dev_oper_mode, ushort, 0444);
+-MODULE_PARM_DESC(dev_oper_mode,
+-		 "1[Wi-Fi], 4[BT], 8[BT LE], 5[Wi-Fi STA + BT classic]\n"
+-		 "9[Wi-Fi STA + BT LE], 13[Wi-Fi STA + BT classic + BT LE]\n"
+-		 "6[AP + BT classic], 14[AP + BT classic + BT LE]");
++MODULE_PARM_DESC(dev_oper_mode, DEV_OPMODE_PARAM_DESC);
+ 
+ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num, gfp_t flags);
+ 
+diff --git a/drivers/net/wireless/rsi/rsi_hal.h b/drivers/net/wireless/rsi/rsi_hal.h
+index 46e36df9e8e3c..a2fbec1cec4c3 100644
+--- a/drivers/net/wireless/rsi/rsi_hal.h
++++ b/drivers/net/wireless/rsi/rsi_hal.h
+@@ -28,6 +28,17 @@
+ #define DEV_OPMODE_AP_BT		6
+ #define DEV_OPMODE_AP_BT_DUAL		14
+ 
++#define DEV_OPMODE_PARAM_DESC		\
++	__stringify(DEV_OPMODE_WIFI_ALONE)	"[Wi-Fi alone], "	\
++	__stringify(DEV_OPMODE_BT_ALONE)	"[BT classic alone], "	\
++	__stringify(DEV_OPMODE_BT_LE_ALONE)	"[BT LE alone], "	\
++	__stringify(DEV_OPMODE_BT_DUAL)		"[BT classic + BT LE alone], " \
++	__stringify(DEV_OPMODE_STA_BT)		"[Wi-Fi STA + BT classic], " \
++	__stringify(DEV_OPMODE_STA_BT_LE)	"[Wi-Fi STA + BT LE], "	\
++	__stringify(DEV_OPMODE_STA_BT_DUAL)	"[Wi-Fi STA + BT classic + BT LE], " \
++	__stringify(DEV_OPMODE_AP_BT)		"[Wi-Fi AP + BT classic], "	\
++	__stringify(DEV_OPMODE_AP_BT_DUAL)	"[Wi-Fi AP + BT classic + BT LE]"
++
+ #define FLASH_WRITE_CHUNK_SIZE		(4 * 1024)
+ #define FLASH_SECTOR_SIZE		(4 * 1024)
+ 
+diff --git a/drivers/net/wireless/rsi/rsi_main.h b/drivers/net/wireless/rsi/rsi_main.h
+index b3e25bc28682c..de595025c0197 100644
+--- a/drivers/net/wireless/rsi/rsi_main.h
++++ b/drivers/net/wireless/rsi/rsi_main.h
+@@ -61,6 +61,7 @@ enum RSI_FSM_STATES {
+ extern u32 rsi_zone_enabled;
+ extern __printf(2, 3) void rsi_dbg(u32 zone, const char *fmt, ...);
+ 
++#define RSI_MAX_BANDS			2
+ #define RSI_MAX_VIFS                    3
+ #define NUM_EDCA_QUEUES                 4
+ #define IEEE80211_ADDR_LEN              6
+@@ -139,6 +140,7 @@ struct skb_info {
+ 	u8 internal_hdr_size;
+ 	struct ieee80211_vif *vif;
+ 	u8 vap_id;
++	bool have_key;
+ };
+ 
+ enum edca_queue {
+@@ -229,6 +231,12 @@ struct rsi_9116_features {
+ 	u32 ps_options;
+ };
+ 
++struct rsi_rate_config {
++	u32 configured_mask;	/* configured by mac80211 bits 0-11=legacy 12+ mcs */
++	u16 fixed_hw_rate;
++	bool fixed_enabled;
++};
++
+ struct rsi_common {
+ 	struct rsi_hw *priv;
+ 	struct vif_priv vif_info[RSI_MAX_VIFS];
+@@ -254,8 +262,8 @@ struct rsi_common {
+ 	u8 channel_width;
+ 
+ 	u16 rts_threshold;
+-	u16 bitrate_mask[2];
+-	u32 fixedrate_mask[2];
++	u32 bitrate_mask[RSI_MAX_BANDS];
++	struct rsi_rate_config rate_config[RSI_MAX_BANDS];
+ 
+ 	u8 rf_reset;
+ 	struct transmit_q_stats tx_stats;
+@@ -276,7 +284,6 @@ struct rsi_common {
+ 	u8 mac_id;
+ 	u8 radio_id;
+ 	u16 rate_pwr[20];
+-	u16 min_rate;
+ 
+ 	/* WMM algo related */
+ 	u8 selected_qnum;
+@@ -320,6 +327,7 @@ struct rsi_common {
+ 	struct ieee80211_vif *roc_vif;
+ 
+ 	bool eapol4_confirm;
++	bool bt_defer_attach;
+ 	void *bt_adapter;
+ 
+ 	struct cfg80211_scan_request *hwscan;
+@@ -401,5 +409,6 @@ struct rsi_host_intf_ops {
+ 
+ enum rsi_host_intf rsi_get_host_intf(void *priv);
+ void rsi_set_bt_context(void *priv, void *bt_context);
++void rsi_attach_bt(struct rsi_common *common);
+ 
+ #endif
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 3e9895bec15f0..dd79534910b05 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -1671,6 +1671,10 @@ static int netfront_resume(struct xenbus_device *dev)
+ 
+ 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
+ 
++	netif_tx_lock_bh(info->netdev);
++	netif_device_detach(info->netdev);
++	netif_tx_unlock_bh(info->netdev);
++
+ 	xennet_disconnect_backend(info);
+ 	return 0;
+ }
+@@ -2285,6 +2289,10 @@ static int xennet_connect(struct net_device *dev)
+ 	 * domain a kick because we've probably just requeued some
+ 	 * packets.
+ 	 */
++	netif_tx_lock_bh(np->netdev);
++	netif_device_attach(np->netdev);
++	netif_tx_unlock_bh(np->netdev);
++
+ 	netif_carrier_on(np->netdev);
+ 	for (j = 0; j < num_queues; ++j) {
+ 		queue = &np->queues[j];
+diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
+index 18e3435ab8f33..d2c0116157759 100644
+--- a/drivers/nfc/pn533/pn533.c
++++ b/drivers/nfc/pn533/pn533.c
+@@ -2258,7 +2258,7 @@ static int pn533_fill_fragment_skbs(struct pn533 *dev, struct sk_buff *skb)
+ 		frag = pn533_alloc_skb(dev, frag_size);
+ 		if (!frag) {
+ 			skb_queue_purge(&dev->fragment_skb);
+-			break;
++			return -ENOMEM;
+ 		}
+ 
+ 		if (!dev->tgt_mode) {
+@@ -2329,7 +2329,7 @@ static int pn533_transceive(struct nfc_dev *nfc_dev,
+ 		/* jumbo frame ? */
+ 		if (skb->len > PN533_CMD_DATAEXCH_DATA_MAXLEN) {
+ 			rc = pn533_fill_fragment_skbs(dev, skb);
+-			if (rc <= 0)
++			if (rc < 0)
+ 				goto error;
+ 
+ 			skb = skb_dequeue(&dev->fragment_skb);
+@@ -2401,7 +2401,7 @@ static int pn533_tm_send(struct nfc_dev *nfc_dev, struct sk_buff *skb)
+ 	/* let's split in multiple chunks if size's too big */
+ 	if (skb->len > PN533_CMD_DATAEXCH_DATA_MAXLEN) {
+ 		rc = pn533_fill_fragment_skbs(dev, skb);
+-		if (rc <= 0)
++		if (rc < 0)
+ 			goto error;
+ 
+ 		/* get the first skb */
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 46a1e24ba6f47..18a756444d5a9 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -135,13 +135,12 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
+ {
+ 	struct nvme_ns *ns;
+ 
+-	mutex_lock(&ctrl->scan_lock);
+ 	down_read(&ctrl->namespaces_rwsem);
+-	list_for_each_entry(ns, &ctrl->namespaces, list)
+-		if (nvme_mpath_clear_current_path(ns))
+-			kblockd_schedule_work(&ns->head->requeue_work);
++	list_for_each_entry(ns, &ctrl->namespaces, list) {
++		nvme_mpath_clear_current_path(ns);
++		kblockd_schedule_work(&ns->head->requeue_work);
++	}
+ 	up_read(&ctrl->namespaces_rwsem);
+-	mutex_unlock(&ctrl->scan_lock);
+ }
+ 
+ static bool nvme_path_is_disabled(struct nvme_ns *ns)
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 51f4647ea2142..1b90563818434 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1103,11 +1103,13 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
+ 		return ret;
+ 
+ 	if (ctrl->ctrl.icdoff) {
++		ret = -EOPNOTSUPP;
+ 		dev_err(ctrl->ctrl.device, "icdoff is not supported!\n");
+ 		goto destroy_admin;
+ 	}
+ 
+ 	if (!(ctrl->ctrl.sgls & (1 << 2))) {
++		ret = -EOPNOTSUPP;
+ 		dev_err(ctrl->ctrl.device,
+ 			"Mandatory keyed sgls are not supported!\n");
+ 		goto destroy_admin;
+diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
+index 37e1d7784e175..9aed5cc710960 100644
+--- a/drivers/nvme/target/configfs.c
++++ b/drivers/nvme/target/configfs.c
+@@ -1462,6 +1462,8 @@ static void nvmet_port_release(struct config_item *item)
+ {
+ 	struct nvmet_port *port = to_nvmet_port(item);
+ 
++	/* Let inflight controllers teardown complete */
++	flush_scheduled_work();
+ 	list_del(&port->global_entry);
+ 
+ 	kfree(port->ana_state);
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 7d607f435e366..6d5552f2f184a 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -1819,12 +1819,36 @@ restart:
+ 	mutex_unlock(&nvmet_rdma_queue_mutex);
+ }
+ 
++static void nvmet_rdma_destroy_port_queues(struct nvmet_rdma_port *port)
++{
++	struct nvmet_rdma_queue *queue, *tmp;
++	struct nvmet_port *nport = port->nport;
++
++	mutex_lock(&nvmet_rdma_queue_mutex);
++	list_for_each_entry_safe(queue, tmp, &nvmet_rdma_queue_list,
++				 queue_list) {
++		if (queue->port != nport)
++			continue;
++
++		list_del_init(&queue->queue_list);
++		__nvmet_rdma_queue_disconnect(queue);
++	}
++	mutex_unlock(&nvmet_rdma_queue_mutex);
++}
++
+ static void nvmet_rdma_disable_port(struct nvmet_rdma_port *port)
+ {
+ 	struct rdma_cm_id *cm_id = xchg(&port->cm_id, NULL);
+ 
+ 	if (cm_id)
+ 		rdma_destroy_id(cm_id);
++
++	/*
++	 * Destroy the remaining queues, which are not belong to any
++	 * controller yet. Do it here after the RDMA-CM was destroyed
++	 * guarantees that no new queue will be created.
++	 */
++	nvmet_rdma_destroy_port_queues(port);
+ }
+ 
+ static int nvmet_rdma_enable_port(struct nvmet_rdma_port *port)
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 5266d534c4b31..1251fd6e92780 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1084,7 +1084,7 @@ recv:
+ 	}
+ 
+ 	if (queue->hdr_digest &&
+-	    nvmet_tcp_verify_hdgst(queue, &queue->pdu, queue->offset)) {
++	    nvmet_tcp_verify_hdgst(queue, &queue->pdu, hdr->hlen)) {
+ 		nvmet_tcp_fatal_error(queue); /* fatal */
+ 		return -EPROTO;
+ 	}
+@@ -1398,6 +1398,7 @@ static void nvmet_tcp_uninit_data_in_cmds(struct nvmet_tcp_queue *queue)
+ 
+ static void nvmet_tcp_release_queue_work(struct work_struct *w)
+ {
++	struct page *page;
+ 	struct nvmet_tcp_queue *queue =
+ 		container_of(w, struct nvmet_tcp_queue, release_work);
+ 
+@@ -1417,6 +1418,8 @@ static void nvmet_tcp_release_queue_work(struct work_struct *w)
+ 		nvmet_tcp_free_crypto(queue);
+ 	ida_simple_remove(&nvmet_tcp_queue_ida, queue->idx);
+ 
++	page = virt_to_head_page(queue->pf_cache.va);
++	__page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);
+ 	kfree(queue);
+ }
+ 
+@@ -1705,6 +1708,17 @@ err_port:
+ 	return ret;
+ }
+ 
++static void nvmet_tcp_destroy_port_queues(struct nvmet_tcp_port *port)
++{
++	struct nvmet_tcp_queue *queue;
++
++	mutex_lock(&nvmet_tcp_queue_mutex);
++	list_for_each_entry(queue, &nvmet_tcp_queue_list, queue_list)
++		if (queue->port == port)
++			kernel_sock_shutdown(queue->sock, SHUT_RDWR);
++	mutex_unlock(&nvmet_tcp_queue_mutex);
++}
++
+ static void nvmet_tcp_remove_port(struct nvmet_port *nport)
+ {
+ 	struct nvmet_tcp_port *port = nport->priv;
+@@ -1714,6 +1728,11 @@ static void nvmet_tcp_remove_port(struct nvmet_port *nport)
+ 	port->sock->sk->sk_user_data = NULL;
+ 	write_unlock_bh(&port->sock->sk->sk_callback_lock);
+ 	cancel_work_sync(&port->accept_work);
++	/*
++	 * Destroy the remaining queues, which are not belong to any
++	 * controller yet.
++	 */
++	nvmet_tcp_destroy_port_queues(port);
+ 
+ 	sock_release(port->sock);
+ 	kfree(port);
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index eb51bc1474401..1d4b0b7d0cc10 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -1682,19 +1682,19 @@ static void __init of_unittest_overlay_gpio(void)
+ 	 */
+ 
+ 	EXPECT_BEGIN(KERN_INFO,
+-		     "GPIO line <<int>> (line-B-input) hogged as input\n");
++		     "gpio-<<int>> (line-B-input): hogged as input\n");
+ 
+ 	EXPECT_BEGIN(KERN_INFO,
+-		     "GPIO line <<int>> (line-A-input) hogged as input\n");
++		     "gpio-<<int>> (line-A-input): hogged as input\n");
+ 
+ 	ret = platform_driver_register(&unittest_gpio_driver);
+ 	if (unittest(ret == 0, "could not register unittest gpio driver\n"))
+ 		return;
+ 
+ 	EXPECT_END(KERN_INFO,
+-		   "GPIO line <<int>> (line-A-input) hogged as input\n");
++		   "gpio-<<int>> (line-A-input): hogged as input\n");
+ 	EXPECT_END(KERN_INFO,
+-		   "GPIO line <<int>> (line-B-input) hogged as input\n");
++		   "gpio-<<int>> (line-B-input): hogged as input\n");
+ 
+ 	unittest(probe_pass_count + 2 == unittest_gpio_probe_pass_count,
+ 		 "unittest_gpio_probe() failed or not called\n");
+@@ -1721,7 +1721,7 @@ static void __init of_unittest_overlay_gpio(void)
+ 	chip_request_count = unittest_gpio_chip_request_count;
+ 
+ 	EXPECT_BEGIN(KERN_INFO,
+-		     "GPIO line <<int>> (line-D-input) hogged as input\n");
++		     "gpio-<<int>> (line-D-input): hogged as input\n");
+ 
+ 	/* overlay_gpio_03 contains gpio node and child gpio hog node */
+ 
+@@ -1729,7 +1729,7 @@ static void __init of_unittest_overlay_gpio(void)
+ 		 "Adding overlay 'overlay_gpio_03' failed\n");
+ 
+ 	EXPECT_END(KERN_INFO,
+-		   "GPIO line <<int>> (line-D-input) hogged as input\n");
++		   "gpio-<<int>> (line-D-input): hogged as input\n");
+ 
+ 	unittest(probe_pass_count + 1 == unittest_gpio_probe_pass_count,
+ 		 "unittest_gpio_probe() failed or not called\n");
+@@ -1768,7 +1768,7 @@ static void __init of_unittest_overlay_gpio(void)
+ 	 */
+ 
+ 	EXPECT_BEGIN(KERN_INFO,
+-		     "GPIO line <<int>> (line-C-input) hogged as input\n");
++		     "gpio-<<int>> (line-C-input): hogged as input\n");
+ 
+ 	/* overlay_gpio_04b contains child gpio hog node */
+ 
+@@ -1776,7 +1776,7 @@ static void __init of_unittest_overlay_gpio(void)
+ 		 "Adding overlay 'overlay_gpio_04b' failed\n");
+ 
+ 	EXPECT_END(KERN_INFO,
+-		   "GPIO line <<int>> (line-C-input) hogged as input\n");
++		   "gpio-<<int>> (line-C-input): hogged as input\n");
+ 
+ 	unittest(chip_request_count + 1 == unittest_gpio_chip_request_count,
+ 		 "unittest_gpio_chip_request() called %d times (expected 1 time)\n",
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index f83f4f6d70349..5de46aa99d243 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -827,7 +827,7 @@ free_required_opps:
+ free_opp:
+ 	_opp_free(new_opp);
+ 
+-	return ERR_PTR(ret);
++	return ret ? ERR_PTR(ret) : NULL;
+ }
+ 
+ /* Initializes OPP tables based on new bindings */
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c
+index 5fee0f89ab594..a224afadbcc00 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-plat.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c
+@@ -127,6 +127,8 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
+ 			goto err_init;
+ 	}
+ 
++	return 0;
++
+  err_init:
+  err_get_sync:
+ 	pm_runtime_put_sync(dev);
+diff --git a/drivers/pci/controller/dwc/pcie-uniphier.c b/drivers/pci/controller/dwc/pcie-uniphier.c
+index 48176265c867e..527ec8aeb602f 100644
+--- a/drivers/pci/controller/dwc/pcie-uniphier.c
++++ b/drivers/pci/controller/dwc/pcie-uniphier.c
+@@ -171,30 +171,21 @@ static void uniphier_pcie_irq_enable(struct uniphier_pcie_priv *priv)
+ 	writel(PCL_RCV_INTX_ALL_ENABLE, priv->base + PCL_RCV_INTX);
+ }
+ 
+-static void uniphier_pcie_irq_ack(struct irq_data *d)
+-{
+-	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
+-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+-	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
+-	u32 val;
+-
+-	val = readl(priv->base + PCL_RCV_INTX);
+-	val &= ~PCL_RCV_INTX_ALL_STATUS;
+-	val |= BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_STATUS_SHIFT);
+-	writel(val, priv->base + PCL_RCV_INTX);
+-}
+-
+ static void uniphier_pcie_irq_mask(struct irq_data *d)
+ {
+ 	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
+ 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ 	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
++	unsigned long flags;
+ 	u32 val;
+ 
++	raw_spin_lock_irqsave(&pp->lock, flags);
++
+ 	val = readl(priv->base + PCL_RCV_INTX);
+-	val &= ~PCL_RCV_INTX_ALL_MASK;
+ 	val |= BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_MASK_SHIFT);
+ 	writel(val, priv->base + PCL_RCV_INTX);
++
++	raw_spin_unlock_irqrestore(&pp->lock, flags);
+ }
+ 
+ static void uniphier_pcie_irq_unmask(struct irq_data *d)
+@@ -202,17 +193,20 @@ static void uniphier_pcie_irq_unmask(struct irq_data *d)
+ 	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
+ 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ 	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
++	unsigned long flags;
+ 	u32 val;
+ 
++	raw_spin_lock_irqsave(&pp->lock, flags);
++
+ 	val = readl(priv->base + PCL_RCV_INTX);
+-	val &= ~PCL_RCV_INTX_ALL_MASK;
+ 	val &= ~BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_MASK_SHIFT);
+ 	writel(val, priv->base + PCL_RCV_INTX);
++
++	raw_spin_unlock_irqrestore(&pp->lock, flags);
+ }
+ 
+ static struct irq_chip uniphier_pcie_irq_chip = {
+ 	.name = "PCI",
+-	.irq_ack = uniphier_pcie_irq_ack,
+ 	.irq_mask = uniphier_pcie_irq_mask,
+ 	.irq_unmask = uniphier_pcie_irq_unmask,
+ };
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 4f1a29ede576a..434522465d983 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -30,10 +30,8 @@
+ /* PCIe core registers */
+ #define PCIE_CORE_DEV_ID_REG					0x0
+ #define PCIE_CORE_CMD_STATUS_REG				0x4
+-#define     PCIE_CORE_CMD_IO_ACCESS_EN				BIT(0)
+-#define     PCIE_CORE_CMD_MEM_ACCESS_EN				BIT(1)
+-#define     PCIE_CORE_CMD_MEM_IO_REQ_EN				BIT(2)
+ #define PCIE_CORE_DEV_REV_REG					0x8
++#define PCIE_CORE_EXP_ROM_BAR_REG				0x30
+ #define PCIE_CORE_PCIEXP_CAP					0xc0
+ #define PCIE_CORE_ERR_CAPCTL_REG				0x118
+ #define     PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX			BIT(5)
+@@ -98,6 +96,7 @@
+ #define     PCIE_CORE_CTRL2_MSI_ENABLE		BIT(10)
+ #define PCIE_CORE_REF_CLK_REG			(CONTROL_BASE_ADDR + 0x14)
+ #define     PCIE_CORE_REF_CLK_TX_ENABLE		BIT(1)
++#define     PCIE_CORE_REF_CLK_RX_ENABLE		BIT(2)
+ #define PCIE_MSG_LOG_REG			(CONTROL_BASE_ADDR + 0x30)
+ #define PCIE_ISR0_REG				(CONTROL_BASE_ADDR + 0x40)
+ #define PCIE_MSG_PM_PME_MASK			BIT(7)
+@@ -105,18 +104,19 @@
+ #define     PCIE_ISR0_MSI_INT_PENDING		BIT(24)
+ #define     PCIE_ISR0_INTX_ASSERT(val)		BIT(16 + (val))
+ #define     PCIE_ISR0_INTX_DEASSERT(val)	BIT(20 + (val))
+-#define	    PCIE_ISR0_ALL_MASK			GENMASK(26, 0)
++#define     PCIE_ISR0_ALL_MASK			GENMASK(31, 0)
+ #define PCIE_ISR1_REG				(CONTROL_BASE_ADDR + 0x48)
+ #define PCIE_ISR1_MASK_REG			(CONTROL_BASE_ADDR + 0x4C)
+ #define     PCIE_ISR1_POWER_STATE_CHANGE	BIT(4)
+ #define     PCIE_ISR1_FLUSH			BIT(5)
+ #define     PCIE_ISR1_INTX_ASSERT(val)		BIT(8 + (val))
+-#define     PCIE_ISR1_ALL_MASK			GENMASK(11, 4)
++#define     PCIE_ISR1_ALL_MASK			GENMASK(31, 0)
+ #define PCIE_MSI_ADDR_LOW_REG			(CONTROL_BASE_ADDR + 0x50)
+ #define PCIE_MSI_ADDR_HIGH_REG			(CONTROL_BASE_ADDR + 0x54)
+ #define PCIE_MSI_STATUS_REG			(CONTROL_BASE_ADDR + 0x58)
+ #define PCIE_MSI_MASK_REG			(CONTROL_BASE_ADDR + 0x5C)
+ #define PCIE_MSI_PAYLOAD_REG			(CONTROL_BASE_ADDR + 0x9C)
++#define     PCIE_MSI_DATA_MASK			GENMASK(15, 0)
+ 
+ /* PCIe window configuration */
+ #define OB_WIN_BASE_ADDR			0x4c00
+@@ -163,8 +163,50 @@
+ #define CFG_REG					(LMI_BASE_ADDR + 0x0)
+ #define     LTSSM_SHIFT				24
+ #define     LTSSM_MASK				0x3f
+-#define     LTSSM_L0				0x10
+ #define     RC_BAR_CONFIG			0x300
++
++/* LTSSM values in CFG_REG */
++enum {
++	LTSSM_DETECT_QUIET			= 0x0,
++	LTSSM_DETECT_ACTIVE			= 0x1,
++	LTSSM_POLLING_ACTIVE			= 0x2,
++	LTSSM_POLLING_COMPLIANCE		= 0x3,
++	LTSSM_POLLING_CONFIGURATION		= 0x4,
++	LTSSM_CONFIG_LINKWIDTH_START		= 0x5,
++	LTSSM_CONFIG_LINKWIDTH_ACCEPT		= 0x6,
++	LTSSM_CONFIG_LANENUM_ACCEPT		= 0x7,
++	LTSSM_CONFIG_LANENUM_WAIT		= 0x8,
++	LTSSM_CONFIG_COMPLETE			= 0x9,
++	LTSSM_CONFIG_IDLE			= 0xa,
++	LTSSM_RECOVERY_RCVR_LOCK		= 0xb,
++	LTSSM_RECOVERY_SPEED			= 0xc,
++	LTSSM_RECOVERY_RCVR_CFG			= 0xd,
++	LTSSM_RECOVERY_IDLE			= 0xe,
++	LTSSM_L0				= 0x10,
++	LTSSM_RX_L0S_ENTRY			= 0x11,
++	LTSSM_RX_L0S_IDLE			= 0x12,
++	LTSSM_RX_L0S_FTS			= 0x13,
++	LTSSM_TX_L0S_ENTRY			= 0x14,
++	LTSSM_TX_L0S_IDLE			= 0x15,
++	LTSSM_TX_L0S_FTS			= 0x16,
++	LTSSM_L1_ENTRY				= 0x17,
++	LTSSM_L1_IDLE				= 0x18,
++	LTSSM_L2_IDLE				= 0x19,
++	LTSSM_L2_TRANSMIT_WAKE			= 0x1a,
++	LTSSM_DISABLED				= 0x20,
++	LTSSM_LOOPBACK_ENTRY_MASTER		= 0x21,
++	LTSSM_LOOPBACK_ACTIVE_MASTER		= 0x22,
++	LTSSM_LOOPBACK_EXIT_MASTER		= 0x23,
++	LTSSM_LOOPBACK_ENTRY_SLAVE		= 0x24,
++	LTSSM_LOOPBACK_ACTIVE_SLAVE		= 0x25,
++	LTSSM_LOOPBACK_EXIT_SLAVE		= 0x26,
++	LTSSM_HOT_RESET				= 0x27,
++	LTSSM_RECOVERY_EQUALIZATION_PHASE0	= 0x28,
++	LTSSM_RECOVERY_EQUALIZATION_PHASE1	= 0x29,
++	LTSSM_RECOVERY_EQUALIZATION_PHASE2	= 0x2a,
++	LTSSM_RECOVERY_EQUALIZATION_PHASE3	= 0x2b,
++};
++
+ #define VENDOR_ID_REG				(LMI_BASE_ADDR + 0x44)
+ 
+ /* PCIe core controller registers */
+@@ -197,7 +239,7 @@
+ #define     PCIE_IRQ_MSI_INT2_DET		BIT(21)
+ #define     PCIE_IRQ_RC_DBELL_DET		BIT(22)
+ #define     PCIE_IRQ_EP_STATUS			BIT(23)
+-#define     PCIE_IRQ_ALL_MASK			0xfff0fb
++#define     PCIE_IRQ_ALL_MASK			GENMASK(31, 0)
+ #define     PCIE_IRQ_ENABLE_INTS_MASK		PCIE_IRQ_CORE_INT
+ 
+ /* Transaction types */
+@@ -269,13 +311,49 @@ static inline u16 advk_read16(struct advk_pcie *pcie, u64 reg)
+ 	return advk_readl(pcie, (reg & ~0x3)) >> ((reg & 0x3) * 8);
+ }
+ 
+-static int advk_pcie_link_up(struct advk_pcie *pcie)
++static u8 advk_pcie_ltssm_state(struct advk_pcie *pcie)
+ {
+-	u32 val, ltssm_state;
++	u32 val;
++	u8 ltssm_state;
+ 
+ 	val = advk_readl(pcie, CFG_REG);
+ 	ltssm_state = (val >> LTSSM_SHIFT) & LTSSM_MASK;
+-	return ltssm_state >= LTSSM_L0;
++	return ltssm_state;
++}
++
++static inline bool advk_pcie_link_up(struct advk_pcie *pcie)
++{
++	/* check if LTSSM is in normal operation - some L* state */
++	u8 ltssm_state = advk_pcie_ltssm_state(pcie);
++	return ltssm_state >= LTSSM_L0 && ltssm_state < LTSSM_DISABLED;
++}
++
++static inline bool advk_pcie_link_active(struct advk_pcie *pcie)
++{
++	/*
++	 * According to PCIe Base specification 3.0, Table 4-14: Link
++	 * Status Mapped to the LTSSM, and 4.2.6.3.6 Configuration.Idle
++	 * is Link Up mapped to LTSSM Configuration.Idle, Recovery, L0,
++	 * L0s, L1 and L2 states. And according to 3.2.1. Data Link
++	 * Control and Management State Machine Rules is DL Up status
++	 * reported in DL Active state.
++	 */
++	u8 ltssm_state = advk_pcie_ltssm_state(pcie);
++	return ltssm_state >= LTSSM_CONFIG_IDLE && ltssm_state < LTSSM_DISABLED;
++}
++
++static inline bool advk_pcie_link_training(struct advk_pcie *pcie)
++{
++	/*
++	 * According to PCIe Base specification 3.0, Table 4-14: Link
++	 * Status Mapped to the LTSSM is Link Training mapped to LTSSM
++	 * Configuration and Recovery states.
++	 */
++	u8 ltssm_state = advk_pcie_ltssm_state(pcie);
++	return ((ltssm_state >= LTSSM_CONFIG_LINKWIDTH_START &&
++		 ltssm_state < LTSSM_L0) ||
++		(ltssm_state >= LTSSM_RECOVERY_EQUALIZATION_PHASE0 &&
++		 ltssm_state <= LTSSM_RECOVERY_EQUALIZATION_PHASE3));
+ }
+ 
+ static int advk_pcie_wait_for_link(struct advk_pcie *pcie)
+@@ -298,7 +376,7 @@ static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie)
+ 	size_t retries;
+ 
+ 	for (retries = 0; retries < RETRAIN_WAIT_MAX_RETRIES; ++retries) {
+-		if (!advk_pcie_link_up(pcie))
++		if (advk_pcie_link_training(pcie))
+ 			break;
+ 		udelay(RETRAIN_WAIT_USLEEP_US);
+ 	}
+@@ -451,9 +529,15 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	u32 reg;
+ 	int i;
+ 
+-	/* Enable TX */
++	/*
++	 * Configure PCIe Reference clock. Direction is from the PCIe
++	 * controller to the endpoint card, so enable transmitting of
++	 * Reference clock differential signal off-chip and disable
++	 * receiving off-chip differential signal.
++	 */
+ 	reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG);
+ 	reg |= PCIE_CORE_REF_CLK_TX_ENABLE;
++	reg &= ~PCIE_CORE_REF_CLK_RX_ENABLE;
+ 	advk_writel(pcie, reg, PCIE_CORE_REF_CLK_REG);
+ 
+ 	/* Set to Direct mode */
+@@ -477,6 +561,31 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	reg = (PCI_VENDOR_ID_MARVELL << 16) | PCI_VENDOR_ID_MARVELL;
+ 	advk_writel(pcie, reg, VENDOR_ID_REG);
+ 
++	/*
++	 * Change Class Code of PCI Bridge device to PCI Bridge (0x600400),
++	 * because the default value is Mass storage controller (0x010400).
++	 *
++	 * Note that this Aardvark PCI Bridge does not have compliant Type 1
++	 * Configuration Space and it even cannot be accessed via Aardvark's
++	 * PCI config space access method. Something like config space is
++	 * available in internal Aardvark registers starting at offset 0x0
++	 * and is reported as Type 0. In range 0x10 - 0x34 it has totally
++	 * different registers.
++	 *
++	 * Therefore driver uses emulation of PCI Bridge which emulates
++	 * access to configuration space via internal Aardvark registers or
++	 * emulated configuration buffer.
++	 */
++	reg = advk_readl(pcie, PCIE_CORE_DEV_REV_REG);
++	reg &= ~0xffffff00;
++	reg |= (PCI_CLASS_BRIDGE_PCI << 8) << 8;
++	advk_writel(pcie, reg, PCIE_CORE_DEV_REV_REG);
++
++	/* Disable Root Bridge I/O space, memory space and bus mastering */
++	reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
++	reg &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
++	advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
++
+ 	/* Set Advanced Error Capabilities and Control PF0 register */
+ 	reg = PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX |
+ 		PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN |
+@@ -488,8 +597,9 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL);
+ 	reg &= ~PCI_EXP_DEVCTL_RELAX_EN;
+ 	reg &= ~PCI_EXP_DEVCTL_NOSNOOP_EN;
++	reg &= ~PCI_EXP_DEVCTL_PAYLOAD;
+ 	reg &= ~PCI_EXP_DEVCTL_READRQ;
+-	reg |= PCI_EXP_DEVCTL_PAYLOAD; /* Set max payload size */
++	reg |= PCI_EXP_DEVCTL_PAYLOAD_512B;
+ 	reg |= PCI_EXP_DEVCTL_READRQ_512B;
+ 	advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL);
+ 
+@@ -574,19 +684,6 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 		advk_pcie_disable_ob_win(pcie, i);
+ 
+ 	advk_pcie_train_link(pcie);
+-
+-	/*
+-	 * FIXME: The following register update is suspicious. This register is
+-	 * applicable only when the PCI controller is configured for Endpoint
+-	 * mode, not as a Root Complex. But apparently when this code is
+-	 * removed, some cards stop working. This should be investigated and
+-	 * a comment explaining this should be put here.
+-	 */
+-	reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
+-	reg |= PCIE_CORE_CMD_MEM_ACCESS_EN |
+-		PCIE_CORE_CMD_IO_ACCESS_EN |
+-		PCIE_CORE_CMD_MEM_IO_REQ_EN;
+-	advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
+ }
+ 
+ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val)
+@@ -682,7 +779,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
+ 	else
+ 		str_posted = "Posted";
+ 
+-	dev_err(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
++	dev_dbg(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
+ 		str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS));
+ 
+ 	return -EFAULT;
+@@ -707,6 +804,72 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
+ 	return -ETIMEDOUT;
+ }
+ 
++static pci_bridge_emul_read_status_t
++advk_pci_bridge_emul_base_conf_read(struct pci_bridge_emul *bridge,
++				    int reg, u32 *value)
++{
++	struct advk_pcie *pcie = bridge->data;
++
++	switch (reg) {
++	case PCI_COMMAND:
++		*value = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
++		return PCI_BRIDGE_EMUL_HANDLED;
++
++	case PCI_ROM_ADDRESS1:
++		*value = advk_readl(pcie, PCIE_CORE_EXP_ROM_BAR_REG);
++		return PCI_BRIDGE_EMUL_HANDLED;
++
++	case PCI_INTERRUPT_LINE: {
++		/*
++		 * From the whole 32bit register we support reading from HW only
++		 * one bit: PCI_BRIDGE_CTL_BUS_RESET.
++		 * Other bits are retrieved only from emulated config buffer.
++		 */
++		__le32 *cfgspace = (__le32 *)&bridge->conf;
++		u32 val = le32_to_cpu(cfgspace[PCI_INTERRUPT_LINE / 4]);
++		if (advk_readl(pcie, PCIE_CORE_CTRL1_REG) & HOT_RESET_GEN)
++			val |= PCI_BRIDGE_CTL_BUS_RESET << 16;
++		else
++			val &= ~(PCI_BRIDGE_CTL_BUS_RESET << 16);
++		*value = val;
++		return PCI_BRIDGE_EMUL_HANDLED;
++	}
++
++	default:
++		return PCI_BRIDGE_EMUL_NOT_HANDLED;
++	}
++}
++
++static void
++advk_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
++				     int reg, u32 old, u32 new, u32 mask)
++{
++	struct advk_pcie *pcie = bridge->data;
++
++	switch (reg) {
++	case PCI_COMMAND:
++		advk_writel(pcie, new, PCIE_CORE_CMD_STATUS_REG);
++		break;
++
++	case PCI_ROM_ADDRESS1:
++		advk_writel(pcie, new, PCIE_CORE_EXP_ROM_BAR_REG);
++		break;
++
++	case PCI_INTERRUPT_LINE:
++		if (mask & (PCI_BRIDGE_CTL_BUS_RESET << 16)) {
++			u32 val = advk_readl(pcie, PCIE_CORE_CTRL1_REG);
++			if (new & (PCI_BRIDGE_CTL_BUS_RESET << 16))
++				val |= HOT_RESET_GEN;
++			else
++				val &= ~HOT_RESET_GEN;
++			advk_writel(pcie, val, PCIE_CORE_CTRL1_REG);
++		}
++		break;
++
++	default:
++		break;
++	}
++}
+ 
+ static pci_bridge_emul_read_status_t
+ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+@@ -723,6 +886,7 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 	case PCI_EXP_RTCTL: {
+ 		u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
+ 		*value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE;
++		*value |= le16_to_cpu(bridge->pcie_conf.rootctl) & PCI_EXP_RTCTL_CRSSVE;
+ 		*value |= PCI_EXP_RTCAP_CRSVIS << 16;
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	}
+@@ -734,12 +898,26 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	}
+ 
++	case PCI_EXP_LNKCAP: {
++		u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
++		/*
++		 * PCI_EXP_LNKCAP_DLLLARC bit is hardwired in aardvark HW to 0.
++		 * But support for PCI_EXP_LNKSTA_DLLLA is emulated via ltssm
++		 * state so explicitly enable PCI_EXP_LNKCAP_DLLLARC flag.
++		 */
++		val |= PCI_EXP_LNKCAP_DLLLARC;
++		*value = val;
++		return PCI_BRIDGE_EMUL_HANDLED;
++	}
++
+ 	case PCI_EXP_LNKCTL: {
+ 		/* u32 contains both PCI_EXP_LNKCTL and PCI_EXP_LNKSTA */
+ 		u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg) &
+ 			~(PCI_EXP_LNKSTA_LT << 16);
+-		if (!advk_pcie_link_up(pcie))
++		if (advk_pcie_link_training(pcie))
+ 			val |= (PCI_EXP_LNKSTA_LT << 16);
++		if (advk_pcie_link_active(pcie))
++			val |= (PCI_EXP_LNKSTA_DLLLA << 16);
+ 		*value = val;
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	}
+@@ -747,7 +925,6 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 	case PCI_CAP_LIST_ID:
+ 	case PCI_EXP_DEVCAP:
+ 	case PCI_EXP_DEVCTL:
+-	case PCI_EXP_LNKCAP:
+ 		*value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	default:
+@@ -794,6 +971,8 @@ advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
+ }
+ 
+ static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
++	.read_base = advk_pci_bridge_emul_base_conf_read,
++	.write_base = advk_pci_bridge_emul_base_conf_write,
+ 	.read_pcie = advk_pci_bridge_emul_pcie_conf_read,
+ 	.write_pcie = advk_pci_bridge_emul_pcie_conf_write,
+ };
+@@ -1082,7 +1261,7 @@ static int advk_msi_irq_domain_alloc(struct irq_domain *domain,
+ 				    domain->host_data, handle_simple_irq,
+ 				    NULL, NULL);
+ 
+-	return hwirq;
++	return 0;
+ }
+ 
+ static void advk_msi_irq_domain_free(struct irq_domain *domain,
+@@ -1263,8 +1442,12 @@ static void advk_pcie_handle_msi(struct advk_pcie *pcie)
+ 		if (!(BIT(msi_idx) & msi_status))
+ 			continue;
+ 
++		/*
++		 * msi_idx contains bits [4:0] of the msi_data and msi_data
++		 * contains 16bit MSI interrupt number
++		 */
+ 		advk_writel(pcie, BIT(msi_idx), PCIE_MSI_STATUS_REG);
+-		msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & 0xFF;
++		msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & PCIE_MSI_DATA_MASK;
+ 		generic_handle_irq(msi_data);
+ 	}
+ 
+@@ -1286,12 +1469,6 @@ static void advk_pcie_handle_int(struct advk_pcie *pcie)
+ 	isr1_mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
+ 	isr1_status = isr1_val & ((~isr1_mask) & PCIE_ISR1_ALL_MASK);
+ 
+-	if (!isr0_status && !isr1_status) {
+-		advk_writel(pcie, isr0_val, PCIE_ISR0_REG);
+-		advk_writel(pcie, isr1_val, PCIE_ISR1_REG);
+-		return;
+-	}
+-
+ 	/* Process MSI interrupts */
+ 	if (isr0_status & PCIE_ISR0_MSI_INT_PENDING)
+ 		advk_pcie_handle_msi(pcie);
+diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
+index fdaf86a888b73..db97cddfc85e1 100644
+--- a/drivers/pci/pci-bridge-emul.c
++++ b/drivers/pci/pci-bridge-emul.c
+@@ -431,8 +431,21 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
+ 	/* Clear the W1C bits */
+ 	new &= ~((value << shift) & (behavior[reg / 4].w1c & mask));
+ 
++	/* Save the new value with the cleared W1C bits into the cfgspace */
+ 	cfgspace[reg / 4] = cpu_to_le32(new);
+ 
++	/*
++	 * Clear the W1C bits not specified by the write mask, so that the
++	 * write_op() does not clear them.
++	 */
++	new &= ~(behavior[reg / 4].w1c & ~mask);
++
++	/*
++	 * Set the W1C bits specified by the write mask, so that write_op()
++	 * knows about that they are to be cleared.
++	 */
++	new |= (value << shift) & (behavior[reg / 4].w1c & mask);
++
+ 	if (write_op)
+ 		write_op(bridge, reg, old, new, mask);
+ 
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 5d2acebc3e966..fb91b2d7b1c59 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3584,6 +3584,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0032, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003c, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0033, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0034, quirk_no_bus_reset);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003e, quirk_no_bus_reset);
+ 
+ /*
+  * Root port on some Cavium CN8xxx chips do not successfully complete a bus
+diff --git a/drivers/phy/qualcomm/phy-qcom-qusb2.c b/drivers/phy/qualcomm/phy-qcom-qusb2.c
+index 557547dabfd50..f531043ec3deb 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qusb2.c
++++ b/drivers/phy/qualcomm/phy-qcom-qusb2.c
+@@ -474,7 +474,7 @@ static void qusb2_phy_set_tune2_param(struct qusb2_phy *qphy)
+ {
+ 	struct device *dev = &qphy->phy->dev;
+ 	const struct qusb2_phy_cfg *cfg = qphy->cfg;
+-	u8 *val;
++	u8 *val, hstx_trim;
+ 
+ 	/* efuse register is optional */
+ 	if (!qphy->cell)
+@@ -488,7 +488,13 @@ static void qusb2_phy_set_tune2_param(struct qusb2_phy *qphy)
+ 	 * set while configuring the phy.
+ 	 */
+ 	val = nvmem_cell_read(qphy->cell, NULL);
+-	if (IS_ERR(val) || !val[0]) {
++	if (IS_ERR(val)) {
++		dev_dbg(dev, "failed to read a valid hs-tx trim value\n");
++		return;
++	}
++	hstx_trim = val[0];
++	kfree(val);
++	if (!hstx_trim) {
+ 		dev_dbg(dev, "failed to read a valid hs-tx trim value\n");
+ 		return;
+ 	}
+@@ -496,12 +502,10 @@ static void qusb2_phy_set_tune2_param(struct qusb2_phy *qphy)
+ 	/* Fused TUNE1/2 value is the higher nibble only */
+ 	if (cfg->update_tune1_with_efuse)
+ 		qusb2_write_mask(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE1],
+-				 val[0] << HSTX_TRIM_SHIFT,
+-				 HSTX_TRIM_MASK);
++				 hstx_trim << HSTX_TRIM_SHIFT, HSTX_TRIM_MASK);
+ 	else
+ 		qusb2_write_mask(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE2],
+-				 val[0] << HSTX_TRIM_SHIFT,
+-				 HSTX_TRIM_MASK);
++				 hstx_trim << HSTX_TRIM_SHIFT, HSTX_TRIM_MASK);
+ }
+ 
+ static int qusb2_phy_set_mode(struct phy *phy,
+diff --git a/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c b/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
+index ae4bac024c7b1..7e61202aa234e 100644
+--- a/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
++++ b/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
+@@ -33,7 +33,7 @@
+ 
+ #define USB2_PHY_USB_PHY_HS_PHY_CTRL_COMMON0	(0x54)
+ #define RETENABLEN				BIT(3)
+-#define FSEL_MASK				GENMASK(7, 5)
++#define FSEL_MASK				GENMASK(6, 4)
+ #define FSEL_DEFAULT				(0x3 << 4)
+ 
+ #define USB2_PHY_USB_PHY_HS_PHY_CTRL_COMMON1	(0x58)
+diff --git a/drivers/phy/ti/phy-gmii-sel.c b/drivers/phy/ti/phy-gmii-sel.c
+index 5fd2e8a08bfcf..d0ab69750c6b4 100644
+--- a/drivers/phy/ti/phy-gmii-sel.c
++++ b/drivers/phy/ti/phy-gmii-sel.c
+@@ -320,6 +320,8 @@ static int phy_gmii_sel_init_ports(struct phy_gmii_sel_priv *priv)
+ 		u64 size;
+ 
+ 		offset = of_get_address(dev->of_node, 0, &size, NULL);
++		if (!offset)
++			return -EINVAL;
+ 		priv->num_ports = size / sizeof(u32);
+ 		if (!priv->num_ports)
+ 			return -EINVAL;
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 6e6825d17a1d1..840000870d5a0 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -2077,6 +2077,8 @@ int pinctrl_enable(struct pinctrl_dev *pctldev)
+ 	if (error) {
+ 		dev_err(pctldev->dev, "could not claim hogs: %i\n",
+ 			error);
++		pinctrl_free_pindescs(pctldev, pctldev->desc->pins,
++				      pctldev->desc->npins);
+ 		mutex_destroy(&pctldev->mutex);
+ 		kfree(pctldev);
+ 
+diff --git a/drivers/pinctrl/pinctrl-equilibrium.c b/drivers/pinctrl/pinctrl-equilibrium.c
+index ac1c47f542c11..3b6dcaa80e000 100644
+--- a/drivers/pinctrl/pinctrl-equilibrium.c
++++ b/drivers/pinctrl/pinctrl-equilibrium.c
+@@ -674,6 +674,11 @@ static int eqbr_build_functions(struct eqbr_pinctrl_drv_data *drvdata)
+ 		return ret;
+ 
+ 	for (i = 0; i < nr_funcs; i++) {
++
++		/* Ignore the same function with multiple groups */
++		if (funcs[i].name == NULL)
++			continue;
++
+ 		ret = pinmux_generic_add_function(drvdata->pctl_dev,
+ 						  funcs[i].name,
+ 						  funcs[i].groups,
+@@ -805,7 +810,7 @@ static int pinctrl_reg(struct eqbr_pinctrl_drv_data *drvdata)
+ 
+ 	ret = eqbr_build_functions(drvdata);
+ 	if (ret) {
+-		dev_err(dev, "Failed to build groups\n");
++		dev_err(dev, "Failed to build functions\n");
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/pinctrl/renesas/core.c b/drivers/pinctrl/renesas/core.c
+index c528c124fb0e9..9d168b90cd281 100644
+--- a/drivers/pinctrl/renesas/core.c
++++ b/drivers/pinctrl/renesas/core.c
+@@ -890,7 +890,7 @@ static void __init sh_pfc_check_drive_reg(const struct sh_pfc_soc_info *info,
+ 		if (!field->pin && !field->offset && !field->size)
+ 			continue;
+ 
+-		mask = GENMASK(field->offset + field->size, field->offset);
++		mask = GENMASK(field->offset + field->size - 1, field->offset);
+ 		if (mask & seen)
+ 			sh_pfc_err("drive_reg 0x%x: field %u overlap\n",
+ 				   drive->reg, i);
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 5c2f2e337b57b..2a313643e0388 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -9097,7 +9097,7 @@ static int fan_write_cmd_level(const char *cmd, int *rc)
+ 
+ 	if (strlencmp(cmd, "level auto") == 0)
+ 		level = TP_EC_FAN_AUTO;
+-	else if ((strlencmp(cmd, "level disengaged") == 0) |
++	else if ((strlencmp(cmd, "level disengaged") == 0) ||
+ 			(strlencmp(cmd, "level full-speed") == 0))
+ 		level = TP_EC_FAN_FULLSPEED;
+ 	else if (sscanf(cmd, "level %d", &level) != 1)
+diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
+index d88f388a3450f..1f80b26281628 100644
+--- a/drivers/platform/x86/wmi.c
++++ b/drivers/platform/x86/wmi.c
+@@ -354,7 +354,14 @@ static acpi_status __query_block(struct wmi_block *wblock, u8 instance,
+ 	 * the WQxx method failed - we should disable collection anyway.
+ 	 */
+ 	if ((block->flags & ACPI_WMI_EXPENSIVE) && ACPI_SUCCESS(wc_status)) {
+-		status = acpi_execute_simple_method(handle, wc_method, 0);
++		/*
++		 * Ignore whether this WCxx call succeeds or not since
++		 * the previously executed WQxx method call might have
++		 * succeeded, and returning the failing status code
++		 * of this call would throw away the result of the WQxx
++		 * call, potentially leaking memory.
++		 */
++		acpi_execute_simple_method(handle, wc_method, 0);
+ 	}
+ 
+ 	return status;
+diff --git a/drivers/power/supply/bq27xxx_battery_i2c.c b/drivers/power/supply/bq27xxx_battery_i2c.c
+index eb4f4284982fa..3012eb13a08cb 100644
+--- a/drivers/power/supply/bq27xxx_battery_i2c.c
++++ b/drivers/power/supply/bq27xxx_battery_i2c.c
+@@ -187,7 +187,8 @@ static int bq27xxx_battery_i2c_probe(struct i2c_client *client,
+ 			dev_err(&client->dev,
+ 				"Unable to register IRQ %d error %d\n",
+ 				client->irq, ret);
+-			return ret;
++			bq27xxx_battery_teardown(di);
++			goto err_failed;
+ 		}
+ 	}
+ 
+diff --git a/drivers/power/supply/max17040_battery.c b/drivers/power/supply/max17040_battery.c
+index d956c67d51558..b6b29ec3d93ec 100644
+--- a/drivers/power/supply/max17040_battery.c
++++ b/drivers/power/supply/max17040_battery.c
+@@ -482,6 +482,8 @@ static int max17040_probe(struct i2c_client *client,
+ 	chip->client = client;
+ 	chip->regmap = devm_regmap_init_i2c(client, &max17040_regmap);
+ 	chip->pdata = client->dev.platform_data;
++	if (IS_ERR(chip->regmap))
++		return PTR_ERR(chip->regmap);
+ 	chip_id = (enum chip_id) id->driver_data;
+ 	if (client->dev.of_node) {
+ 		ret = max17040_get_of_data(chip);
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index 69bb0f56e492a..76b0f45a20b40 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -316,7 +316,10 @@ static int max17042_get_property(struct power_supply *psy,
+ 		val->intval = data * 625 / 8;
+ 		break;
+ 	case POWER_SUPPLY_PROP_CAPACITY:
+-		ret = regmap_read(map, MAX17042_RepSOC, &data);
++		if (chip->pdata->enable_current_sense)
++			ret = regmap_read(map, MAX17042_RepSOC, &data);
++		else
++			ret = regmap_read(map, MAX17042_VFSOC, &data);
+ 		if (ret < 0)
+ 			return ret;
+ 
+@@ -851,7 +854,8 @@ static void max17042_set_soc_threshold(struct max17042_chip *chip, u16 off)
+ 	regmap_read(map, MAX17042_RepSOC, &soc);
+ 	soc >>= 8;
+ 	soc_tr = (soc + off) << 8;
+-	soc_tr |= (soc - off);
++	if (off < soc)
++		soc_tr |= soc - off;
+ 	regmap_write(map, MAX17042_SALRT_Th, soc_tr);
+ }
+ 
+@@ -871,6 +875,10 @@ static irqreturn_t max17042_thread_handler(int id, void *dev)
+ 		max17042_set_soc_threshold(chip, 1);
+ 	}
+ 
++	/* we implicitly handle all alerts via power_supply_changed */
++	regmap_clear_bits(chip->regmap, MAX17042_STATUS,
++			  0xFFFF & ~(STATUS_POR_BIT | STATUS_BST_BIT));
++
+ 	power_supply_changed(chip->battery);
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/power/supply/rt5033_battery.c b/drivers/power/supply/rt5033_battery.c
+index 9ad0afe83d1b7..7a23c70f48791 100644
+--- a/drivers/power/supply/rt5033_battery.c
++++ b/drivers/power/supply/rt5033_battery.c
+@@ -60,7 +60,7 @@ static int rt5033_battery_get_watt_prop(struct i2c_client *client,
+ 	regmap_read(battery->regmap, regh, &msb);
+ 	regmap_read(battery->regmap, regl, &lsb);
+ 
+-	ret = ((msb << 4) + (lsb >> 4)) * 1250 / 1000;
++	ret = ((msb << 4) + (lsb >> 4)) * 1250;
+ 
+ 	return ret;
+ }
+diff --git a/drivers/regulator/s5m8767.c b/drivers/regulator/s5m8767.c
+index 7c111bbdc2afa..35269f9982105 100644
+--- a/drivers/regulator/s5m8767.c
++++ b/drivers/regulator/s5m8767.c
+@@ -850,18 +850,15 @@ static int s5m8767_pmic_probe(struct platform_device *pdev)
+ 	/* DS4 GPIO */
+ 	gpio_direction_output(pdata->buck_ds[2], 0x0);
+ 
+-	if (pdata->buck2_gpiodvs || pdata->buck3_gpiodvs ||
+-	   pdata->buck4_gpiodvs) {
+-		regmap_update_bits(s5m8767->iodev->regmap_pmic,
+-				S5M8767_REG_BUCK2CTRL, 1 << 1,
+-				(pdata->buck2_gpiodvs) ? (1 << 1) : (0 << 1));
+-		regmap_update_bits(s5m8767->iodev->regmap_pmic,
+-				S5M8767_REG_BUCK3CTRL, 1 << 1,
+-				(pdata->buck3_gpiodvs) ? (1 << 1) : (0 << 1));
+-		regmap_update_bits(s5m8767->iodev->regmap_pmic,
+-				S5M8767_REG_BUCK4CTRL, 1 << 1,
+-				(pdata->buck4_gpiodvs) ? (1 << 1) : (0 << 1));
+-	}
++	regmap_update_bits(s5m8767->iodev->regmap_pmic,
++			   S5M8767_REG_BUCK2CTRL, 1 << 1,
++			   (pdata->buck2_gpiodvs) ? (1 << 1) : (0 << 1));
++	regmap_update_bits(s5m8767->iodev->regmap_pmic,
++			   S5M8767_REG_BUCK3CTRL, 1 << 1,
++			   (pdata->buck3_gpiodvs) ? (1 << 1) : (0 << 1));
++	regmap_update_bits(s5m8767->iodev->regmap_pmic,
++			   S5M8767_REG_BUCK4CTRL, 1 << 1,
++			   (pdata->buck4_gpiodvs) ? (1 << 1) : (0 << 1));
+ 
+ 	/* Initialize GPIO DVS registers */
+ 	for (i = 0; i < 8; i++) {
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index 47924d5ed4f56..369a97f3eca99 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -550,9 +550,6 @@ static int rproc_handle_vdev(struct rproc *rproc, struct fw_rsc_vdev *rsc,
+ 	/* Initialise vdev subdevice */
+ 	snprintf(name, sizeof(name), "vdev%dbuffer", rvdev->index);
+ 	rvdev->dev.parent = &rproc->dev;
+-	ret = copy_dma_range_map(&rvdev->dev, rproc->dev.parent);
+-	if (ret)
+-		return ret;
+ 	rvdev->dev.release = rproc_rvdev_release;
+ 	dev_set_name(&rvdev->dev, "%s#%s", dev_name(rvdev->dev.parent), name);
+ 	dev_set_drvdata(&rvdev->dev, rvdev);
+@@ -562,6 +559,11 @@ static int rproc_handle_vdev(struct rproc *rproc, struct fw_rsc_vdev *rsc,
+ 		put_device(&rvdev->dev);
+ 		return ret;
+ 	}
++
++	ret = copy_dma_range_map(&rvdev->dev, rproc->dev.parent);
++	if (ret)
++		goto free_rvdev;
++
+ 	/* Make device dma capable by inheriting from parent's capabilities */
+ 	set_dma_ops(&rvdev->dev, get_dma_ops(rproc->dev.parent));
+ 
+diff --git a/drivers/reset/reset-socfpga.c b/drivers/reset/reset-socfpga.c
+index bdd9842961960..f9fa7fde7afb1 100644
+--- a/drivers/reset/reset-socfpga.c
++++ b/drivers/reset/reset-socfpga.c
+@@ -85,3 +85,29 @@ void __init socfpga_reset_init(void)
+ 	for_each_matching_node(np, socfpga_early_reset_dt_ids)
+ 		a10_reset_init(np);
+ }
++
++/*
++ * The early driver is problematic, because it doesn't register
++ * itself as a driver. This causes certain device links to prevent
++ * consumer devices from probing. The hacky solution is to register
++ * an empty driver, whose only job is to attach itself to the reset
++ * manager and call probe.
++ */
++static const struct of_device_id socfpga_reset_dt_ids[] = {
++	{ .compatible = "altr,rst-mgr", },
++	{ /* sentinel */ },
++};
++
++static int reset_simple_probe(struct platform_device *pdev)
++{
++	return 0;
++}
++
++static struct platform_driver reset_socfpga_driver = {
++	.probe	= reset_simple_probe,
++	.driver = {
++		.name		= "socfpga-reset",
++		.of_match_table	= socfpga_reset_dt_ids,
++	},
++};
++builtin_platform_driver(reset_socfpga_driver);
+diff --git a/drivers/rtc/rtc-rv3032.c b/drivers/rtc/rtc-rv3032.c
+index 3e67f71f42614..9e6166864bd73 100644
+--- a/drivers/rtc/rtc-rv3032.c
++++ b/drivers/rtc/rtc-rv3032.c
+@@ -617,11 +617,11 @@ static int rv3032_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ 	ret = rv3032_enter_eerd(rv3032, &eerd);
+ 	if (ret)
+-		goto exit_eerd;
++		return ret;
+ 
+ 	ret = regmap_write(rv3032->regmap, RV3032_CLKOUT1, hfd & 0xff);
+ 	if (ret)
+-		return ret;
++		goto exit_eerd;
+ 
+ 	ret = regmap_write(rv3032->regmap, RV3032_CLKOUT2, RV3032_CLKOUT2_OS |
+ 			    FIELD_PREP(RV3032_CLKOUT2_HFD_MSK, hfd >> 8));
+diff --git a/drivers/s390/char/tape_std.c b/drivers/s390/char/tape_std.c
+index 1f5fab617b679..f7e75d9fedf61 100644
+--- a/drivers/s390/char/tape_std.c
++++ b/drivers/s390/char/tape_std.c
+@@ -53,7 +53,6 @@ int
+ tape_std_assign(struct tape_device *device)
+ {
+ 	int                  rc;
+-	struct timer_list    timeout;
+ 	struct tape_request *request;
+ 
+ 	request = tape_alloc_request(2, 11);
+@@ -70,7 +69,7 @@ tape_std_assign(struct tape_device *device)
+ 	 * So we set up a timeout for this call.
+ 	 */
+ 	timer_setup(&request->timer, tape_std_assign_timeout, 0);
+-	mod_timer(&timeout, jiffies + 2 * HZ);
++	mod_timer(&request->timer, jiffies + msecs_to_jiffies(2000));
+ 
+ 	rc = tape_do_io_interruptible(device, request);
+ 
+diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
+index 305db4173dcf3..cf2c3c4c590f9 100644
+--- a/drivers/s390/cio/css.c
++++ b/drivers/s390/cio/css.c
+@@ -433,8 +433,8 @@ static ssize_t dev_busid_show(struct device *dev,
+ 	struct subchannel *sch = to_subchannel(dev);
+ 	struct pmcw *pmcw = &sch->schib.pmcw;
+ 
+-	if ((pmcw->st == SUBCHANNEL_TYPE_IO ||
+-	     pmcw->st == SUBCHANNEL_TYPE_MSG) && pmcw->dnv)
++	if ((pmcw->st == SUBCHANNEL_TYPE_IO && pmcw->dnv) ||
++	    (pmcw->st == SUBCHANNEL_TYPE_MSG && pmcw->w))
+ 		return sysfs_emit(buf, "0.%x.%04x\n", sch->schid.ssid,
+ 				  pmcw->dev);
+ 	else
+diff --git a/drivers/s390/cio/device_ops.c b/drivers/s390/cio/device_ops.c
+index 0fe7b2f2e7f52..c533d1dadc6bb 100644
+--- a/drivers/s390/cio/device_ops.c
++++ b/drivers/s390/cio/device_ops.c
+@@ -825,13 +825,23 @@ EXPORT_SYMBOL_GPL(ccw_device_get_chid);
+  */
+ void *ccw_device_dma_zalloc(struct ccw_device *cdev, size_t size)
+ {
+-	return cio_gp_dma_zalloc(cdev->private->dma_pool, &cdev->dev, size);
++	void *addr;
++
++	if (!get_device(&cdev->dev))
++		return NULL;
++	addr = cio_gp_dma_zalloc(cdev->private->dma_pool, &cdev->dev, size);
++	if (IS_ERR_OR_NULL(addr))
++		put_device(&cdev->dev);
++	return addr;
+ }
+ EXPORT_SYMBOL(ccw_device_dma_zalloc);
+ 
+ void ccw_device_dma_free(struct ccw_device *cdev, void *cpu_addr, size_t size)
+ {
++	if (!cpu_addr)
++		return;
+ 	cio_gp_dma_free(cdev->private->dma_pool, cpu_addr, size);
++	put_device(&cdev->dev);
+ }
+ EXPORT_SYMBOL(ccw_device_dma_free);
+ 
+diff --git a/drivers/s390/crypto/ap_queue.c b/drivers/s390/crypto/ap_queue.c
+index 639f8d25679c3..ff0018f5bbe5e 100644
+--- a/drivers/s390/crypto/ap_queue.c
++++ b/drivers/s390/crypto/ap_queue.c
+@@ -142,6 +142,8 @@ static struct ap_queue_status ap_sm_recv(struct ap_queue *aq)
+ 	switch (status.response_code) {
+ 	case AP_RESPONSE_NORMAL:
+ 		aq->queue_count = max_t(int, 0, aq->queue_count - 1);
++		if (!status.queue_empty && !aq->queue_count)
++			aq->queue_count++;
+ 		if (aq->queue_count > 0)
+ 			mod_timer(&aq->timeout,
+ 				  jiffies + aq->request_timeout);
+diff --git a/drivers/scsi/csiostor/csio_lnode.c b/drivers/scsi/csiostor/csio_lnode.c
+index dc98f51f466fb..d5ac938970232 100644
+--- a/drivers/scsi/csiostor/csio_lnode.c
++++ b/drivers/scsi/csiostor/csio_lnode.c
+@@ -619,7 +619,7 @@ csio_ln_vnp_read_cbfn(struct csio_hw *hw, struct csio_mb *mbp)
+ 	struct fc_els_csp *csp;
+ 	struct fc_els_cssp *clsp;
+ 	enum fw_retval retval;
+-	__be32 nport_id;
++	__be32 nport_id = 0;
+ 
+ 	retval = FW_CMD_RETVAL_G(ntohl(rsp->alloc_to_len16));
+ 	if (retval != FW_SUCCESS) {
+diff --git a/drivers/scsi/dc395x.c b/drivers/scsi/dc395x.c
+index fa16894d8758c..6cb48ae8e1241 100644
+--- a/drivers/scsi/dc395x.c
++++ b/drivers/scsi/dc395x.c
+@@ -4658,6 +4658,7 @@ static int dc395x_init_one(struct pci_dev *dev, const struct pci_device_id *id)
+ 	/* initialise the adapter and everything we need */
+  	if (adapter_init(acb, io_port_base, io_port_len, irq)) {
+ 		dprintkl(KERN_INFO, "adapter init failed\n");
++		acb = NULL;
+ 		goto fail;
+ 	}
+ 
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 2114d2dd3501a..5d751628a6340 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -3107,7 +3107,7 @@ pm8001_mpi_get_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	 * fw_control_context->usrAddr
+ 	 */
+ 	complete(pm8001_ha->nvmd_completion);
+-	pm8001_dbg(pm8001_ha, MSG, "Set nvm data complete!\n");
++	pm8001_dbg(pm8001_ha, MSG, "Get nvmd data complete!\n");
+ 	ccb->task = NULL;
+ 	ccb->ccb_tag = 0xFFFFFFFF;
+ 	pm8001_tag_free(pm8001_ha, tag);
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 6a2c4a6fcded8..e40a37236aa10 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -1862,6 +1862,18 @@ qla2x00_port_speed_store(struct device *dev, struct device_attribute *attr,
+ 	return strlen(buf);
+ }
+ 
++static const struct {
++	u16 rate;
++	char *str;
++} port_speed_str[] = {
++	{ PORT_SPEED_4GB, "4" },
++	{ PORT_SPEED_8GB, "8" },
++	{ PORT_SPEED_16GB, "16" },
++	{ PORT_SPEED_32GB, "32" },
++	{ PORT_SPEED_64GB, "64" },
++	{ PORT_SPEED_10GB, "10" },
++};
++
+ static ssize_t
+ qla2x00_port_speed_show(struct device *dev, struct device_attribute *attr,
+     char *buf)
+@@ -1869,7 +1881,8 @@ qla2x00_port_speed_show(struct device *dev, struct device_attribute *attr,
+ 	struct scsi_qla_host *vha = shost_priv(dev_to_shost(dev));
+ 	struct qla_hw_data *ha = vha->hw;
+ 	ssize_t rval;
+-	char *spd[7] = {"0", "0", "0", "4", "8", "16", "32"};
++	u16 i;
++	char *speed = "Unknown";
+ 
+ 	rval = qla2x00_get_data_rate(vha);
+ 	if (rval != QLA_SUCCESS) {
+@@ -1878,7 +1891,14 @@ qla2x00_port_speed_show(struct device *dev, struct device_attribute *attr,
+ 		return -EINVAL;
+ 	}
+ 
+-	return scnprintf(buf, PAGE_SIZE, "%s\n", spd[ha->link_data_rate]);
++	for (i = 0; i < ARRAY_SIZE(port_speed_str); i++) {
++		if (port_speed_str[i].rate != ha->link_data_rate)
++			continue;
++		speed = port_speed_str[i].str;
++		break;
++	}
++
++	return scnprintf(buf, PAGE_SIZE, "%s\n", speed);
+ }
+ 
+ /* ----- */
+diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c
+index 144a893e7335b..3a20bf8ce5ab9 100644
+--- a/drivers/scsi/qla2xxx/qla_dbg.c
++++ b/drivers/scsi/qla2xxx/qla_dbg.c
+@@ -12,8 +12,7 @@
+  * ----------------------------------------------------------------------
+  * | Module Init and Probe        |       0x0199       |                |
+  * | Mailbox commands             |       0x1206       | 0x11a5-0x11ff	|
+- * | Device Discovery             |       0x2134       | 0x210e-0x2116  |
+- * |				  | 		       | 0x211a         |
++ * | Device Discovery             |       0x2134       | 0x210e-0x2115  |
+  * |                              |                    | 0x211c-0x2128  |
+  * |                              |                    | 0x212c-0x2134  |
+  * | Queue Command and IO tracing |       0x3074       | 0x300b         |
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index e39b4f2da73a0..3bc1850273421 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -158,7 +158,6 @@ extern int ql2xasynctmfenable;
+ extern int ql2xgffidenable;
+ extern int ql2xenabledif;
+ extern int ql2xenablehba_err_chk;
+-extern int ql2xtargetreset;
+ extern int ql2xdontresethba;
+ extern uint64_t ql2xmaxlun;
+ extern int ql2xmdcapmask;
+@@ -791,7 +790,6 @@ extern void qlafx00_abort_iocb(srb_t *, struct abort_iocb_entry_fx00 *);
+ extern void qlafx00_fxdisc_iocb(srb_t *, struct fxdisc_entry_fx00 *);
+ extern void qlafx00_timer_routine(scsi_qla_host_t *);
+ extern int qlafx00_rescan_isp(scsi_qla_host_t *);
+-extern int qlafx00_loop_reset(scsi_qla_host_t *vha);
+ 
+ /* qla82xx related functions */
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index b7aac3116f2db..fdae25ec554d9 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -976,8 +976,6 @@ static void qla24xx_async_gnl_sp_done(srb_t *sp, int res)
+ 	    sp->name, res, sp->u.iocb_cmd.u.mbx.in_mb[1],
+ 	    sp->u.iocb_cmd.u.mbx.in_mb[2]);
+ 
+-	if (res == QLA_FUNCTION_TIMEOUT)
+-		return;
+ 
+ 	sp->fcport->flags &= ~(FCF_ASYNC_SENT|FCF_ASYNC_ACTIVE);
+ 	memset(&ea, 0, sizeof(ea));
+@@ -1015,8 +1013,8 @@ static void qla24xx_async_gnl_sp_done(srb_t *sp, int res)
+ 	spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+ 
+ 	list_for_each_entry_safe(fcport, tf, &h, gnl_entry) {
+-		list_del_init(&fcport->gnl_entry);
+ 		spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
++		list_del_init(&fcport->gnl_entry);
+ 		fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ 		spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+ 		ea.fcport = fcport;
+@@ -1708,10 +1706,52 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ 	fc_port_t *fcport;
+ 	unsigned long flags;
+ 
+-	fcport = qla2x00_find_fcport_by_nportid(vha, &ea->id, 1);
+-	if (fcport) {
+-		fcport->scan_needed = 1;
+-		fcport->rscn_gen++;
++	switch (ea->id.b.rsvd_1) {
++	case RSCN_PORT_ADDR:
++		fcport = qla2x00_find_fcport_by_nportid(vha, &ea->id, 1);
++		if (fcport) {
++			if (fcport->flags & FCF_FCP2_DEVICE) {
++				ql_dbg(ql_dbg_disc, vha, 0x2115,
++				       "Delaying session delete for FCP2 portid=%06x %8phC ",
++					fcport->d_id.b24, fcport->port_name);
++				return;
++			}
++			fcport->scan_needed = 1;
++			fcport->rscn_gen++;
++		}
++		break;
++	case RSCN_AREA_ADDR:
++		list_for_each_entry(fcport, &vha->vp_fcports, list) {
++			if (fcport->flags & FCF_FCP2_DEVICE)
++				continue;
++
++			if ((ea->id.b24 & 0xffff00) == (fcport->d_id.b24 & 0xffff00)) {
++				fcport->scan_needed = 1;
++				fcport->rscn_gen++;
++			}
++		}
++		break;
++	case RSCN_DOM_ADDR:
++		list_for_each_entry(fcport, &vha->vp_fcports, list) {
++			if (fcport->flags & FCF_FCP2_DEVICE)
++				continue;
++
++			if ((ea->id.b24 & 0xff0000) == (fcport->d_id.b24 & 0xff0000)) {
++				fcport->scan_needed = 1;
++				fcport->rscn_gen++;
++			}
++		}
++		break;
++	case RSCN_FAB_ADDR:
++	default:
++		list_for_each_entry(fcport, &vha->vp_fcports, list) {
++			if (fcport->flags & FCF_FCP2_DEVICE)
++				continue;
++
++			fcport->scan_needed = 1;
++			fcport->rscn_gen++;
++		}
++		break;
+ 	}
+ 
+ 	spin_lock_irqsave(&vha->work_lock, flags);
+diff --git a/drivers/scsi/qla2xxx/qla_mr.c b/drivers/scsi/qla2xxx/qla_mr.c
+index ca73066853255..7178646ee0f06 100644
+--- a/drivers/scsi/qla2xxx/qla_mr.c
++++ b/drivers/scsi/qla2xxx/qla_mr.c
+@@ -738,29 +738,6 @@ qlafx00_lun_reset(fc_port_t *fcport, uint64_t l, int tag)
+ 	return qla2x00_async_tm_cmd(fcport, TCF_LUN_RESET, l, tag);
+ }
+ 
+-int
+-qlafx00_loop_reset(scsi_qla_host_t *vha)
+-{
+-	int ret;
+-	struct fc_port *fcport;
+-	struct qla_hw_data *ha = vha->hw;
+-
+-	if (ql2xtargetreset) {
+-		list_for_each_entry(fcport, &vha->vp_fcports, list) {
+-			if (fcport->port_type != FCT_TARGET)
+-				continue;
+-
+-			ret = ha->isp_ops->target_reset(fcport, 0, 0);
+-			if (ret != QLA_SUCCESS) {
+-				ql_dbg(ql_dbg_taskm, vha, 0x803d,
+-				    "Bus Reset failed: Reset=%d "
+-				    "d_id=%x.\n", ret, fcport->d_id.b24);
+-			}
+-		}
+-	}
+-	return QLA_SUCCESS;
+-}
+-
+ int
+ qlafx00_iospace_config(struct qla_hw_data *ha)
+ {
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 4af794c46d175..e7f73a167fbd6 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -197,12 +197,6 @@ MODULE_PARM_DESC(ql2xdbwr,
+ 		" 0 -- Regular doorbell.\n"
+ 		" 1 -- CAMRAM doorbell (faster).\n");
+ 
+-int ql2xtargetreset = 1;
+-module_param(ql2xtargetreset, int, S_IRUGO);
+-MODULE_PARM_DESC(ql2xtargetreset,
+-		 "Enable target reset."
+-		 "Default is 1 - use hw defaults.");
+-
+ int ql2xgffidenable;
+ module_param(ql2xgffidenable, int, S_IRUGO);
+ MODULE_PARM_DESC(ql2xgffidenable,
+@@ -1254,6 +1248,7 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd)
+ 	uint32_t ratov_j;
+ 	struct qla_qpair *qpair;
+ 	unsigned long flags;
++	int fast_fail_status = SUCCESS;
+ 
+ 	if (qla2x00_isp_reg_stat(ha)) {
+ 		ql_log(ql_log_info, vha, 0x8042,
+@@ -1261,15 +1256,16 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd)
+ 		return FAILED;
+ 	}
+ 
++	/* Save any FAST_IO_FAIL value to return later if abort succeeds */
+ 	ret = fc_block_scsi_eh(cmd);
+ 	if (ret != 0)
+-		return ret;
++		fast_fail_status = ret;
+ 
+ 	sp = scsi_cmd_priv(cmd);
+ 	qpair = sp->qpair;
+ 
+ 	if ((sp->fcport && sp->fcport->deleted) || !qpair)
+-		return SUCCESS;
++		return fast_fail_status != SUCCESS ? fast_fail_status : FAILED;
+ 
+ 	spin_lock_irqsave(qpair->qp_lock_ptr, flags);
+ 	sp->comp = &comp;
+@@ -1304,7 +1300,7 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd)
+ 			    __func__, ha->r_a_tov/10);
+ 			ret = FAILED;
+ 		} else {
+-			ret = SUCCESS;
++			ret = fast_fail_status;
+ 		}
+ 		break;
+ 	default:
+@@ -1650,27 +1646,10 @@ int
+ qla2x00_loop_reset(scsi_qla_host_t *vha)
+ {
+ 	int ret;
+-	struct fc_port *fcport;
+ 	struct qla_hw_data *ha = vha->hw;
+ 
+-	if (IS_QLAFX00(ha)) {
+-		return qlafx00_loop_reset(vha);
+-	}
+-
+-	if (ql2xtargetreset == 1 && ha->flags.enable_target_reset) {
+-		list_for_each_entry(fcport, &vha->vp_fcports, list) {
+-			if (fcport->port_type != FCT_TARGET)
+-				continue;
+-
+-			ret = ha->isp_ops->target_reset(fcport, 0, 0);
+-			if (ret != QLA_SUCCESS) {
+-				ql_dbg(ql_dbg_taskm, vha, 0x802c,
+-				    "Bus Reset failed: Reset=%d "
+-				    "d_id=%x.\n", ret, fcport->d_id.b24);
+-			}
+-		}
+-	}
+-
++	if (IS_QLAFX00(ha))
++		return QLA_SUCCESS;
+ 
+ 	if (ha->flags.enable_lip_full_login && !IS_CNA_CAPABLE(ha)) {
+ 		atomic_set(&vha->loop_state, LOOP_DOWN);
+@@ -3953,6 +3932,16 @@ qla2x00_mark_all_devices_lost(scsi_qla_host_t *vha)
+ 	    "Mark all dev lost\n");
+ 
+ 	list_for_each_entry(fcport, &vha->vp_fcports, list) {
++		if (fcport->loop_id != FC_NO_LOOP_ID &&
++		    (fcport->flags & FCF_FCP2_DEVICE) &&
++		    fcport->port_type == FCT_TARGET &&
++		    !qla2x00_reset_active(vha)) {
++			ql_dbg(ql_dbg_disc, vha, 0x211a,
++			       "Delaying session delete for FCP2 flags 0x%x port_type = 0x%x port_id=%06x %phC",
++			       fcport->flags, fcport->port_type,
++			       fcport->d_id.b24, fcport->port_name);
++			continue;
++		}
+ 		fcport->scan_state = 0;
+ 		qlt_schedule_sess_for_deletion(fcport);
+ 	}
+@@ -4077,7 +4066,7 @@ qla2x00_mem_alloc(struct qla_hw_data *ha, uint16_t req_len, uint16_t rsp_len,
+ 					ql_dbg_pci(ql_dbg_init, ha->pdev,
+ 					    0xe0ee, "%s: failed alloc dsd\n",
+ 					    __func__);
+-					return 1;
++					return -ENOMEM;
+ 				}
+ 				ha->dif_bundle_kallocs++;
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 8d4976725a75a..ebed14bed7835 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -3256,8 +3256,7 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+ 			"RESET-RSP online/active/old-count/new-count = %d/%d/%d/%d.\n",
+ 			vha->flags.online, qla2x00_reset_active(vha),
+ 			cmd->reset_count, qpair->chip_reset);
+-		spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+-		return 0;
++		goto out_unmap_unlock;
+ 	}
+ 
+ 	/* Does F/W have an IOCBs for this request */
+@@ -3380,10 +3379,6 @@ int qlt_rdy_to_xfer(struct qla_tgt_cmd *cmd)
+ 	prm.sg = NULL;
+ 	prm.req_cnt = 1;
+ 
+-	/* Calculate number of entries and segments required */
+-	if (qlt_pci_map_calc_cnt(&prm) != 0)
+-		return -EAGAIN;
+-
+ 	if (!qpair->fw_started || (cmd->reset_count != qpair->chip_reset) ||
+ 	    (cmd->sess && cmd->sess->deleted)) {
+ 		/*
+@@ -3401,6 +3396,10 @@ int qlt_rdy_to_xfer(struct qla_tgt_cmd *cmd)
+ 		return 0;
+ 	}
+ 
++	/* Calculate number of entries and segments required */
++	if (qlt_pci_map_calc_cnt(&prm) != 0)
++		return -EAGAIN;
++
+ 	spin_lock_irqsave(qpair->qp_lock_ptr, flags);
+ 	/* Does F/W have an IOCBs for this request */
+ 	res = qlt_check_reserve_free_req(qpair, prm.req_cnt);
+@@ -3805,9 +3804,6 @@ void qlt_free_cmd(struct qla_tgt_cmd *cmd)
+ 
+ 	BUG_ON(cmd->cmd_in_wq);
+ 
+-	if (cmd->sg_mapped)
+-		qlt_unmap_sg(cmd->vha, cmd);
+-
+ 	if (!cmd->q_full)
+ 		qlt_decr_num_pend_cmds(cmd->vha);
+ 
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index d89db29fa829c..6f3d29d16d1f4 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1193,8 +1193,6 @@ static blk_status_t scsi_setup_scsi_cmnd(struct scsi_device *sdev,
+ 	}
+ 
+ 	cmd->cmd_len = scsi_req(req)->cmd_len;
+-	if (cmd->cmd_len == 0)
+-		cmd->cmd_len = scsi_command_size(cmd->cmnd);
+ 	cmd->cmnd = scsi_req(req)->cmd;
+ 	cmd->transfersize = blk_rq_bytes(req);
+ 	cmd->allowed = scsi_req(req)->retries;
+diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
+index 24927cf485b47..8c92d1bde64be 100644
+--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
++++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
+@@ -91,7 +91,9 @@ static int ufshcd_parse_clock_info(struct ufs_hba *hba)
+ 
+ 		clki->min_freq = clkfreq[i];
+ 		clki->max_freq = clkfreq[i+1];
+-		clki->name = kstrdup(name, GFP_KERNEL);
++		clki->name = devm_kstrdup(dev, name, GFP_KERNEL);
++		if (!strcmp(name, "ref_clk"))
++			clki->keep_link_active = true;
+ 		dev_dbg(dev, "%s: min %u max %u name %s\n", "freq-table-hz",
+ 				clki->min_freq, clki->max_freq, clki->name);
+ 		list_add_tail(&clki->list, &hba->clk_list_head);
+@@ -125,7 +127,7 @@ static int ufshcd_populate_vreg(struct device *dev, const char *name,
+ 	if (!vreg)
+ 		return -ENOMEM;
+ 
+-	vreg->name = kstrdup(name, GFP_KERNEL);
++	vreg->name = devm_kstrdup(dev, name, GFP_KERNEL);
+ 
+ 	snprintf(prop_name, MAX_PROP_SIZE, "%s-max-microamp", name);
+ 	if (of_property_read_u32(np, prop_name, &vreg->max_uA)) {
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 3139d9df6f320..930f35863cbb5 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -221,8 +221,6 @@ static int ufshcd_eh_host_reset_handler(struct scsi_cmnd *cmd);
+ static int ufshcd_clear_tm_cmd(struct ufs_hba *hba, int tag);
+ static void ufshcd_hba_exit(struct ufs_hba *hba);
+ static int ufshcd_probe_hba(struct ufs_hba *hba, bool async);
+-static int __ufshcd_setup_clocks(struct ufs_hba *hba, bool on,
+-				 bool skip_ref_clk);
+ static int ufshcd_setup_clocks(struct ufs_hba *hba, bool on);
+ static int ufshcd_uic_hibern8_enter(struct ufs_hba *hba);
+ static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba);
+@@ -1714,11 +1712,7 @@ static void ufshcd_gate_work(struct work_struct *work)
+ 
+ 	ufshcd_disable_irq(hba);
+ 
+-	if (!ufshcd_is_link_active(hba))
+-		ufshcd_setup_clocks(hba, false);
+-	else
+-		/* If link is active, device ref_clk can't be switched off */
+-		__ufshcd_setup_clocks(hba, false, true);
++	ufshcd_setup_clocks(hba, false);
+ 
+ 	/*
+ 	 * In case you are here to cancel this work the gating state
+@@ -8055,8 +8049,7 @@ static int ufshcd_init_hba_vreg(struct ufs_hba *hba)
+ 	return 0;
+ }
+ 
+-static int __ufshcd_setup_clocks(struct ufs_hba *hba, bool on,
+-					bool skip_ref_clk)
++static int ufshcd_setup_clocks(struct ufs_hba *hba, bool on)
+ {
+ 	int ret = 0;
+ 	struct ufs_clk_info *clki;
+@@ -8074,7 +8067,12 @@ static int __ufshcd_setup_clocks(struct ufs_hba *hba, bool on,
+ 
+ 	list_for_each_entry(clki, head, list) {
+ 		if (!IS_ERR_OR_NULL(clki->clk)) {
+-			if (skip_ref_clk && !strcmp(clki->name, "ref_clk"))
++			/*
++			 * Don't disable clocks which are needed
++			 * to keep the link active.
++			 */
++			if (ufshcd_is_link_active(hba) &&
++			    clki->keep_link_active)
+ 				continue;
+ 
+ 			clk_state_changed = on ^ clki->enabled;
+@@ -8119,11 +8117,6 @@ out:
+ 	return ret;
+ }
+ 
+-static int ufshcd_setup_clocks(struct ufs_hba *hba, bool on)
+-{
+-	return  __ufshcd_setup_clocks(hba, on, false);
+-}
+-
+ static int ufshcd_init_clocks(struct ufs_hba *hba)
+ {
+ 	int ret = 0;
+@@ -8642,11 +8635,7 @@ disable_clks:
+ 	 */
+ 	ufshcd_disable_irq(hba);
+ 
+-	if (!ufshcd_is_link_active(hba))
+-		ufshcd_setup_clocks(hba, false);
+-	else
+-		/* If link is active, device ref_clk can't be switched off */
+-		__ufshcd_setup_clocks(hba, false, true);
++	ufshcd_setup_clocks(hba, false);
+ 
+ 	if (ufshcd_is_clkgating_allowed(hba)) {
+ 		hba->clk_gating.state = CLKS_OFF;
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index 812aa348751eb..1ba9c786feb6d 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -229,6 +229,8 @@ struct ufs_dev_cmd {
+  * @max_freq: maximum frequency supported by the clock
+  * @min_freq: min frequency that can be used for clock scaling
+  * @curr_freq: indicates the current frequency that it is set to
++ * @keep_link_active: indicates that the clk should not be disabled if
++		      link is active
+  * @enabled: variable to check against multiple enable/disable
+  */
+ struct ufs_clk_info {
+@@ -238,6 +240,7 @@ struct ufs_clk_info {
+ 	u32 max_freq;
+ 	u32 min_freq;
+ 	u32 curr_freq;
++	bool keep_link_active;
+ 	bool enabled;
+ };
+ 
+diff --git a/drivers/soc/fsl/dpaa2-console.c b/drivers/soc/fsl/dpaa2-console.c
+index 27243f706f376..53917410f2bdb 100644
+--- a/drivers/soc/fsl/dpaa2-console.c
++++ b/drivers/soc/fsl/dpaa2-console.c
+@@ -231,6 +231,7 @@ static ssize_t dpaa2_console_read(struct file *fp, char __user *buf,
+ 	cd->cur_ptr += bytes;
+ 	written += bytes;
+ 
++	kfree(kbuf);
+ 	return written;
+ 
+ err_free_buf:
+diff --git a/drivers/soc/fsl/dpio/dpio-service.c b/drivers/soc/fsl/dpio/dpio-service.c
+index 7351f30305506..779c319a4b820 100644
+--- a/drivers/soc/fsl/dpio/dpio-service.c
++++ b/drivers/soc/fsl/dpio/dpio-service.c
+@@ -59,7 +59,7 @@ static inline struct dpaa2_io *service_select_by_cpu(struct dpaa2_io *d,
+ 	 * potentially being migrated away.
+ 	 */
+ 	if (cpu < 0)
+-		cpu = smp_processor_id();
++		cpu = raw_smp_processor_id();
+ 
+ 	/* If a specific cpu was requested, pick it up immediately */
+ 	return dpio_by_cpu[cpu];
+diff --git a/drivers/soc/fsl/dpio/qbman-portal.c b/drivers/soc/fsl/dpio/qbman-portal.c
+index 659b4a570d5b5..3d69f56d9b9f4 100644
+--- a/drivers/soc/fsl/dpio/qbman-portal.c
++++ b/drivers/soc/fsl/dpio/qbman-portal.c
+@@ -732,8 +732,7 @@ int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
+ 	int i, num_enqueued = 0;
+ 	unsigned long irq_flags;
+ 
+-	spin_lock(&s->access_spinlock);
+-	local_irq_save(irq_flags);
++	spin_lock_irqsave(&s->access_spinlock, irq_flags);
+ 
+ 	half_mask = (s->eqcr.pi_ci_mask>>1);
+ 	full_mask = s->eqcr.pi_ci_mask;
+@@ -744,8 +743,7 @@ int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
+ 		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+ 					eqcr_ci, s->eqcr.ci);
+ 		if (!s->eqcr.available) {
+-			local_irq_restore(irq_flags);
+-			spin_unlock(&s->access_spinlock);
++			spin_unlock_irqrestore(&s->access_spinlock, irq_flags);
+ 			return 0;
+ 		}
+ 	}
+@@ -784,8 +782,7 @@ int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
+ 	dma_wmb();
+ 	qbman_write_register(s, QBMAN_CINH_SWP_EQCR_PI,
+ 				(QB_RT_BIT)|(s->eqcr.pi)|s->eqcr.pi_vb);
+-	local_irq_restore(irq_flags);
+-	spin_unlock(&s->access_spinlock);
++	spin_unlock_irqrestore(&s->access_spinlock, irq_flags);
+ 
+ 	return num_enqueued;
+ }
+diff --git a/drivers/soc/qcom/apr.c b/drivers/soc/qcom/apr.c
+index 7abfc8c4fdc72..f736d208362c9 100644
+--- a/drivers/soc/qcom/apr.c
++++ b/drivers/soc/qcom/apr.c
+@@ -323,12 +323,14 @@ static int of_apr_add_pd_lookups(struct device *dev)
+ 						    1, &service_path);
+ 		if (ret < 0) {
+ 			dev_err(dev, "pdr service path missing: %d\n", ret);
++			of_node_put(node);
+ 			return ret;
+ 		}
+ 
+ 		pds = pdr_add_lookup(apr->pdr, service_name, service_path);
+ 		if (IS_ERR(pds) && PTR_ERR(pds) != -EALREADY) {
+ 			dev_err(dev, "pdr add lookup failed: %ld\n", PTR_ERR(pds));
++			of_node_put(node);
+ 			return PTR_ERR(pds);
+ 		}
+ 	}
+diff --git a/drivers/soc/qcom/rpmhpd.c b/drivers/soc/qcom/rpmhpd.c
+index c8b584d0c8fb4..436ec79122ed2 100644
+--- a/drivers/soc/qcom/rpmhpd.c
++++ b/drivers/soc/qcom/rpmhpd.c
+@@ -24,9 +24,13 @@
+  * struct rpmhpd - top level RPMh power domain resource data structure
+  * @dev:		rpmh power domain controller device
+  * @pd:			generic_pm_domain corrresponding to the power domain
++ * @parent:		generic_pm_domain corrresponding to the parent's power domain
+  * @peer:		A peer power domain in case Active only Voting is
+  *			supported
+  * @active_only:	True if it represents an Active only peer
++ * @corner:		current corner
++ * @active_corner:	current active corner
++ * @enable_corner:	lowest non-zero corner
+  * @level:		An array of level (vlvl) to corner (hlvl) mappings
+  *			derived from cmd-db
+  * @level_count:	Number of levels supported by the power domain. max
+@@ -44,6 +48,7 @@ struct rpmhpd {
+ 	const bool	active_only;
+ 	unsigned int	corner;
+ 	unsigned int	active_corner;
++	unsigned int	enable_corner;
+ 	u32		level[RPMH_ARC_MAX_LEVELS];
+ 	size_t		level_count;
+ 	bool		enabled;
+@@ -292,13 +297,13 @@ static int rpmhpd_aggregate_corner(struct rpmhpd *pd, unsigned int corner)
+ static int rpmhpd_power_on(struct generic_pm_domain *domain)
+ {
+ 	struct rpmhpd *pd = domain_to_rpmhpd(domain);
+-	int ret = 0;
++	unsigned int corner;
++	int ret;
+ 
+ 	mutex_lock(&rpmhpd_lock);
+ 
+-	if (pd->corner)
+-		ret = rpmhpd_aggregate_corner(pd, pd->corner);
+-
++	corner = max(pd->corner, pd->enable_corner);
++	ret = rpmhpd_aggregate_corner(pd, corner);
+ 	if (!ret)
+ 		pd->enabled = true;
+ 
+@@ -343,6 +348,10 @@ static int rpmhpd_set_performance_state(struct generic_pm_domain *domain,
+ 		i--;
+ 
+ 	if (pd->enabled) {
++		/* Ensure that the domain isn't turn off */
++		if (i < pd->enable_corner)
++			i = pd->enable_corner;
++
+ 		ret = rpmhpd_aggregate_corner(pd, i);
+ 		if (ret)
+ 			goto out;
+@@ -379,6 +388,10 @@ static int rpmhpd_update_level_mapping(struct rpmhpd *rpmhpd)
+ 	for (i = 0; i < rpmhpd->level_count; i++) {
+ 		rpmhpd->level[i] = buf[i];
+ 
++		/* Remember the first corner with non-zero level */
++		if (!rpmhpd->level[rpmhpd->enable_corner] && rpmhpd->level[i])
++			rpmhpd->enable_corner = i;
++
+ 		/*
+ 		 * The AUX data may be zero padded.  These 0 valued entries at
+ 		 * the end of the map must be ignored.
+diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
+index 0118bd986f902..5726c232e61d5 100644
+--- a/drivers/soc/tegra/pmc.c
++++ b/drivers/soc/tegra/pmc.c
+@@ -693,7 +693,7 @@ static int tegra_powergate_power_up(struct tegra_powergate *pg,
+ 
+ 	err = tegra_powergate_enable_clocks(pg);
+ 	if (err)
+-		goto disable_clks;
++		goto powergate_off;
+ 
+ 	usleep_range(10, 20);
+ 
+@@ -705,7 +705,7 @@ static int tegra_powergate_power_up(struct tegra_powergate *pg,
+ 
+ 	err = reset_control_deassert(pg->reset);
+ 	if (err)
+-		goto powergate_off;
++		goto disable_clks;
+ 
+ 	usleep_range(10, 20);
+ 
+diff --git a/drivers/soundwire/debugfs.c b/drivers/soundwire/debugfs.c
+index b6cad0d59b7b9..49900cd207bc7 100644
+--- a/drivers/soundwire/debugfs.c
++++ b/drivers/soundwire/debugfs.c
+@@ -19,7 +19,7 @@ void sdw_bus_debugfs_init(struct sdw_bus *bus)
+ 		return;
+ 
+ 	/* create the debugfs master-N */
+-	snprintf(name, sizeof(name), "master-%d", bus->link_id);
++	snprintf(name, sizeof(name), "master-%d-%d", bus->id, bus->link_id);
+ 	bus->debugfs = debugfs_create_dir(name, sdw_debugfs_root);
+ }
+ 
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index b4d5930be2a95..3c0ae6dbc43e2 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -1460,7 +1460,7 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 					       &qspi->dev_ids[val]);
+ 			if (ret < 0) {
+ 				dev_err(&pdev->dev, "IRQ %s not found\n", name);
+-				goto qspi_probe_err;
++				goto qspi_unprepare_err;
+ 			}
+ 
+ 			qspi->dev_ids[val].dev = qspi;
+@@ -1475,7 +1475,7 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 	if (!num_ints) {
+ 		dev_err(&pdev->dev, "no IRQs registered, cannot init driver\n");
+ 		ret = -EINVAL;
+-		goto qspi_probe_err;
++		goto qspi_unprepare_err;
+ 	}
+ 
+ 	bcm_qspi_hw_init(qspi);
+@@ -1499,6 +1499,7 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 
+ qspi_reg_err:
+ 	bcm_qspi_hw_uninit(qspi);
++qspi_unprepare_err:
+ 	clk_disable_unprepare(qspi->clk);
+ qspi_probe_err:
+ 	kfree(qspi->dev_ids);
+diff --git a/drivers/spi/spi-pl022.c b/drivers/spi/spi-pl022.c
+index d1776fea287e5..e4ee8b0847993 100644
+--- a/drivers/spi/spi-pl022.c
++++ b/drivers/spi/spi-pl022.c
+@@ -1723,12 +1723,13 @@ static int verify_controller_parameters(struct pl022 *pl022,
+ 				return -EINVAL;
+ 			}
+ 		} else {
+-			if (chip_info->duplex != SSP_MICROWIRE_CHANNEL_FULL_DUPLEX)
++			if (chip_info->duplex != SSP_MICROWIRE_CHANNEL_FULL_DUPLEX) {
+ 				dev_err(&pl022->adev->dev,
+ 					"Microwire half duplex mode requested,"
+ 					" but this is only available in the"
+ 					" ST version of PL022\n");
+-			return -EINVAL;
++				return -EINVAL;
++			}
+ 		}
+ 	}
+ 	return 0;
+diff --git a/drivers/spi/spi-rpc-if.c b/drivers/spi/spi-rpc-if.c
+index 3579675485a5e..727d7cf0a6ad8 100644
+--- a/drivers/spi/spi-rpc-if.c
++++ b/drivers/spi/spi-rpc-if.c
+@@ -139,7 +139,9 @@ static int rpcif_spi_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	rpc = spi_controller_get_devdata(ctlr);
+-	rpcif_sw_init(rpc, parent);
++	error = rpcif_sw_init(rpc, parent);
++	if (error)
++		return error;
+ 
+ 	platform_set_drvdata(pdev, ctlr);
+ 
+diff --git a/drivers/staging/ks7010/Kconfig b/drivers/staging/ks7010/Kconfig
+index 0987fdc2f70db..8ea6c09286798 100644
+--- a/drivers/staging/ks7010/Kconfig
++++ b/drivers/staging/ks7010/Kconfig
+@@ -5,6 +5,9 @@ config KS7010
+ 	select WIRELESS_EXT
+ 	select WEXT_PRIV
+ 	select FW_LOADER
++	select CRYPTO
++	select CRYPTO_HASH
++	select CRYPTO_MICHAEL_MIC
+ 	help
+ 	  This is a driver for KeyStream KS7010 based SDIO WIFI cards. It is
+ 	  found on at least later Spectec SDW-821 (FCC-ID "S2Y-WLAN-11G-K" only,
+diff --git a/drivers/staging/media/allegro-dvt/allegro-core.c b/drivers/staging/media/allegro-dvt/allegro-core.c
+index 640451134072b..28b6ba895ccd5 100644
+--- a/drivers/staging/media/allegro-dvt/allegro-core.c
++++ b/drivers/staging/media/allegro-dvt/allegro-core.c
+@@ -1802,6 +1802,15 @@ static irqreturn_t allegro_irq_thread(int irq, void *data)
+ {
+ 	struct allegro_dev *dev = data;
+ 
++	/*
++	 * The firmware is initialized after the mailbox is setup. We further
++	 * check the AL5_ITC_CPU_IRQ_STA register, if the firmware actually
++	 * triggered the interrupt. Although this should not happen, make sure
++	 * that we ignore interrupts, if the mailbox is not initialized.
++	 */
++	if (!dev->mbox_status)
++		return IRQ_NONE;
++
+ 	allegro_mbox_notify(dev->mbox_status);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c b/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
+index 0ab67b2aec671..8739f0874103e 100644
+--- a/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
++++ b/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
+@@ -836,7 +836,6 @@ static int lm3554_probe(struct i2c_client *client)
+ 	int err = 0;
+ 	struct lm3554 *flash;
+ 	unsigned int i;
+-	int ret;
+ 
+ 	flash = kzalloc(sizeof(*flash), GFP_KERNEL);
+ 	if (!flash)
+@@ -845,7 +844,7 @@ static int lm3554_probe(struct i2c_client *client)
+ 	flash->pdata = lm3554_platform_data_func(client);
+ 	if (IS_ERR(flash->pdata)) {
+ 		err = PTR_ERR(flash->pdata);
+-		goto fail1;
++		goto free_flash;
+ 	}
+ 
+ 	v4l2_i2c_subdev_init(&flash->sd, client, &lm3554_ops);
+@@ -853,12 +852,12 @@ static int lm3554_probe(struct i2c_client *client)
+ 	flash->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+ 	flash->mode = ATOMISP_FLASH_MODE_OFF;
+ 	flash->timeout = LM3554_MAX_TIMEOUT / LM3554_TIMEOUT_STEPSIZE - 1;
+-	ret =
++	err =
+ 	    v4l2_ctrl_handler_init(&flash->ctrl_handler,
+ 				   ARRAY_SIZE(lm3554_controls));
+-	if (ret) {
++	if (err) {
+ 		dev_err(&client->dev, "error initialize a ctrl_handler.\n");
+-		goto fail3;
++		goto unregister_subdev;
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(lm3554_controls); i++)
+@@ -867,14 +866,15 @@ static int lm3554_probe(struct i2c_client *client)
+ 
+ 	if (flash->ctrl_handler.error) {
+ 		dev_err(&client->dev, "ctrl_handler error.\n");
+-		goto fail3;
++		err = flash->ctrl_handler.error;
++		goto free_handler;
+ 	}
+ 
+ 	flash->sd.ctrl_handler = &flash->ctrl_handler;
+ 	err = media_entity_pads_init(&flash->sd.entity, 0, NULL);
+ 	if (err) {
+ 		dev_err(&client->dev, "error initialize a media entity.\n");
+-		goto fail2;
++		goto free_handler;
+ 	}
+ 
+ 	flash->sd.entity.function = MEDIA_ENT_F_FLASH;
+@@ -885,16 +885,27 @@ static int lm3554_probe(struct i2c_client *client)
+ 
+ 	err = lm3554_gpio_init(client);
+ 	if (err) {
+-		dev_err(&client->dev, "gpio request/direction_output fail");
+-		goto fail3;
++		dev_err(&client->dev, "gpio request/direction_output fail.\n");
++		goto cleanup_media;
++	}
++
++	err = atomisp_register_i2c_module(&flash->sd, NULL, LED_FLASH);
++	if (err) {
++		dev_err(&client->dev, "fail to register atomisp i2c module.\n");
++		goto uninit_gpio;
+ 	}
+-	return atomisp_register_i2c_module(&flash->sd, NULL, LED_FLASH);
+-fail3:
++
++	return 0;
++
++uninit_gpio:
++	lm3554_gpio_uninit(client);
++cleanup_media:
+ 	media_entity_cleanup(&flash->sd.entity);
++free_handler:
+ 	v4l2_ctrl_handler_free(&flash->ctrl_handler);
+-fail2:
++unregister_subdev:
+ 	v4l2_device_unregister_subdev(&flash->sd);
+-fail1:
++free_flash:
+ 	kfree(flash);
+ 
+ 	return err;
+diff --git a/drivers/staging/media/imx/imx-media-dev-common.c b/drivers/staging/media/imx/imx-media-dev-common.c
+index 5fe4b22ab8473..7e0d769566bdd 100644
+--- a/drivers/staging/media/imx/imx-media-dev-common.c
++++ b/drivers/staging/media/imx/imx-media-dev-common.c
+@@ -363,6 +363,8 @@ struct imx_media_dev *imx_media_dev_init(struct device *dev,
+ 	imxmd->v4l2_dev.notify = imx_media_notify;
+ 	strscpy(imxmd->v4l2_dev.name, "imx-media",
+ 		sizeof(imxmd->v4l2_dev.name));
++	snprintf(imxmd->md.bus_info, sizeof(imxmd->md.bus_info),
++		 "platform:%s", dev_name(imxmd->md.dev));
+ 
+ 	media_device_init(&imxmd->md);
+ 
+diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c
+index e0179616a29cf..103f84466f6fc 100644
+--- a/drivers/staging/media/ipu3/ipu3-v4l2.c
++++ b/drivers/staging/media/ipu3/ipu3-v4l2.c
+@@ -592,11 +592,12 @@ static const struct imgu_fmt *find_format(struct v4l2_format *f, u32 type)
+ static int imgu_vidioc_querycap(struct file *file, void *fh,
+ 				struct v4l2_capability *cap)
+ {
+-	struct imgu_video_device *node = file_to_intel_imgu_node(file);
++	struct imgu_device *imgu = video_drvdata(file);
+ 
+ 	strscpy(cap->driver, IMGU_NAME, sizeof(cap->driver));
+ 	strscpy(cap->card, IMGU_NAME, sizeof(cap->card));
+-	snprintf(cap->bus_info, sizeof(cap->bus_info), "PCI:%s", node->name);
++	snprintf(cap->bus_info, sizeof(cap->bus_info), "PCI:%s",
++		 pci_name(imgu->pci_dev));
+ 
+ 	return 0;
+ }
+@@ -696,7 +697,7 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
+ 
+ 		/* CSS expects some format on OUT queue */
+ 		if (i != IPU3_CSS_QUEUE_OUT &&
+-		    !imgu_pipe->nodes[inode].enabled) {
++		    !imgu_pipe->nodes[inode].enabled && !try) {
+ 			fmts[i] = NULL;
+ 			continue;
+ 		}
+diff --git a/drivers/staging/media/rkvdec/rkvdec-h264.c b/drivers/staging/media/rkvdec/rkvdec-h264.c
+index 7cc3b478a5f49..5487f6d0bcb63 100644
+--- a/drivers/staging/media/rkvdec/rkvdec-h264.c
++++ b/drivers/staging/media/rkvdec/rkvdec-h264.c
+@@ -1015,8 +1015,9 @@ static int rkvdec_h264_adjust_fmt(struct rkvdec_ctx *ctx,
+ 	struct v4l2_pix_format_mplane *fmt = &f->fmt.pix_mp;
+ 
+ 	fmt->num_planes = 1;
+-	fmt->plane_fmt[0].sizeimage = fmt->width * fmt->height *
+-				      RKVDEC_H264_MAX_DEPTH_IN_BYTES;
++	if (!fmt->plane_fmt[0].sizeimage)
++		fmt->plane_fmt[0].sizeimage = fmt->width * fmt->height *
++					      RKVDEC_H264_MAX_DEPTH_IN_BYTES;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index e68303e2b3907..a7788e7a9542a 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -270,31 +270,20 @@ static int rkvdec_try_output_fmt(struct file *file, void *priv,
+ 	return 0;
+ }
+ 
+-static int rkvdec_s_fmt(struct file *file, void *priv,
+-			struct v4l2_format *f,
+-			int (*try_fmt)(struct file *, void *,
+-				       struct v4l2_format *))
++static int rkvdec_s_capture_fmt(struct file *file, void *priv,
++				struct v4l2_format *f)
+ {
+ 	struct rkvdec_ctx *ctx = fh_to_rkvdec_ctx(priv);
+ 	struct vb2_queue *vq;
++	int ret;
+ 
+-	if (!try_fmt)
+-		return -EINVAL;
+-
+-	vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type);
++	/* Change not allowed if queue is busy */
++	vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx,
++			     V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
+ 	if (vb2_is_busy(vq))
+ 		return -EBUSY;
+ 
+-	return try_fmt(file, priv, f);
+-}
+-
+-static int rkvdec_s_capture_fmt(struct file *file, void *priv,
+-				struct v4l2_format *f)
+-{
+-	struct rkvdec_ctx *ctx = fh_to_rkvdec_ctx(priv);
+-	int ret;
+-
+-	ret = rkvdec_s_fmt(file, priv, f, rkvdec_try_capture_fmt);
++	ret = rkvdec_try_capture_fmt(file, priv, f);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -309,9 +298,20 @@ static int rkvdec_s_output_fmt(struct file *file, void *priv,
+ 	struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
+ 	const struct rkvdec_coded_fmt_desc *desc;
+ 	struct v4l2_format *cap_fmt;
+-	struct vb2_queue *peer_vq;
++	struct vb2_queue *peer_vq, *vq;
+ 	int ret;
+ 
++	/*
++	 * In order to support dynamic resolution change, the decoder admits
++	 * a resolution change, as long as the pixelformat remains. Can't be
++	 * done if streaming.
++	 */
++	vq = v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
++	if (vb2_is_streaming(vq) ||
++	    (vb2_is_busy(vq) &&
++	     f->fmt.pix_mp.pixelformat != ctx->coded_fmt.fmt.pix_mp.pixelformat))
++		return -EBUSY;
++
+ 	/*
+ 	 * Since format change on the OUTPUT queue will reset the CAPTURE
+ 	 * queue, we can't allow doing so when the CAPTURE queue has buffers
+@@ -321,7 +321,7 @@ static int rkvdec_s_output_fmt(struct file *file, void *priv,
+ 	if (vb2_is_busy(peer_vq))
+ 		return -EBUSY;
+ 
+-	ret = rkvdec_s_fmt(file, priv, f, rkvdec_try_output_fmt);
++	ret = rkvdec_try_output_fmt(file, priv, f);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/staging/most/dim2/Makefile b/drivers/staging/most/dim2/Makefile
+index 861adacf6c729..5f9612af3fa3c 100644
+--- a/drivers/staging/most/dim2/Makefile
++++ b/drivers/staging/most/dim2/Makefile
+@@ -1,4 +1,4 @@
+ # SPDX-License-Identifier: GPL-2.0
+ obj-$(CONFIG_MOST_DIM2) += most_dim2.o
+ 
+-most_dim2-objs := dim2.o hal.o sysfs.o
++most_dim2-objs := dim2.o hal.o
+diff --git a/drivers/staging/most/dim2/dim2.c b/drivers/staging/most/dim2/dim2.c
+index b34e3c130f53f..8c2f384233aab 100644
+--- a/drivers/staging/most/dim2/dim2.c
++++ b/drivers/staging/most/dim2/dim2.c
+@@ -115,7 +115,8 @@ struct dim2_platform_data {
+ 	(((p)[1] == 0x18) && ((p)[2] == 0x05) && ((p)[3] == 0x0C) && \
+ 	 ((p)[13] == 0x3C) && ((p)[14] == 0x00) && ((p)[15] == 0x0A))
+ 
+-bool dim2_sysfs_get_state_cb(void)
++static ssize_t state_show(struct device *dev, struct device_attribute *attr,
++			  char *buf)
+ {
+ 	bool state;
+ 	unsigned long flags;
+@@ -124,9 +125,18 @@ bool dim2_sysfs_get_state_cb(void)
+ 	state = dim_get_lock_state();
+ 	spin_unlock_irqrestore(&dim_lock, flags);
+ 
+-	return state;
++	return sysfs_emit(buf, "%s\n", state ? "locked" : "");
+ }
+ 
++static DEVICE_ATTR_RO(state);
++
++static struct attribute *dim2_attrs[] = {
++	&dev_attr_state.attr,
++	NULL,
++};
++
++ATTRIBUTE_GROUPS(dim2);
++
+ /**
+  * dimcb_on_error - callback from HAL to report miscommunication between
+  * HDM and HAL
+@@ -863,16 +873,8 @@ static int dim2_probe(struct platform_device *pdev)
+ 		goto err_stop_thread;
+ 	}
+ 
+-	ret = dim2_sysfs_probe(&dev->dev);
+-	if (ret) {
+-		dev_err(&pdev->dev, "failed to create sysfs attribute\n");
+-		goto err_unreg_iface;
+-	}
+-
+ 	return 0;
+ 
+-err_unreg_iface:
+-	most_deregister_interface(&dev->most_iface);
+ err_stop_thread:
+ 	kthread_stop(dev->netinfo_task);
+ err_shutdown_dim:
+@@ -895,7 +897,6 @@ static int dim2_remove(struct platform_device *pdev)
+ 	struct dim2_hdm *dev = platform_get_drvdata(pdev);
+ 	unsigned long flags;
+ 
+-	dim2_sysfs_destroy(&dev->dev);
+ 	most_deregister_interface(&dev->most_iface);
+ 	kthread_stop(dev->netinfo_task);
+ 
+@@ -1079,6 +1080,7 @@ static struct platform_driver dim2_driver = {
+ 	.driver = {
+ 		.name = "hdm_dim2",
+ 		.of_match_table = dim2_of_match,
++		.dev_groups = dim2_groups,
+ 	},
+ };
+ 
+diff --git a/drivers/staging/most/dim2/sysfs.c b/drivers/staging/most/dim2/sysfs.c
+deleted file mode 100644
+index c85b2cdcdca3d..0000000000000
+--- a/drivers/staging/most/dim2/sysfs.c
++++ /dev/null
+@@ -1,49 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * sysfs.c - MediaLB sysfs information
+- *
+- * Copyright (C) 2015, Microchip Technology Germany II GmbH & Co. KG
+- */
+-
+-/* Author: Andrey Shvetsov <andrey.shvetsov@k2l.de> */
+-
+-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+-
+-#include <linux/kernel.h>
+-#include "sysfs.h"
+-#include <linux/device.h>
+-
+-static ssize_t state_show(struct device *dev, struct device_attribute *attr,
+-			  char *buf)
+-{
+-	bool state = dim2_sysfs_get_state_cb();
+-
+-	return sprintf(buf, "%s\n", state ? "locked" : "");
+-}
+-
+-static DEVICE_ATTR_RO(state);
+-
+-static struct attribute *dev_attrs[] = {
+-	&dev_attr_state.attr,
+-	NULL,
+-};
+-
+-static struct attribute_group dev_attr_group = {
+-	.attrs = dev_attrs,
+-};
+-
+-static const struct attribute_group *dev_attr_groups[] = {
+-	&dev_attr_group,
+-	NULL,
+-};
+-
+-int dim2_sysfs_probe(struct device *dev)
+-{
+-	dev->groups = dev_attr_groups;
+-	return device_register(dev);
+-}
+-
+-void dim2_sysfs_destroy(struct device *dev)
+-{
+-	device_unregister(dev);
+-}
+diff --git a/drivers/staging/most/dim2/sysfs.h b/drivers/staging/most/dim2/sysfs.h
+index 24277a17cff3d..09115cf4ed00e 100644
+--- a/drivers/staging/most/dim2/sysfs.h
++++ b/drivers/staging/most/dim2/sysfs.h
+@@ -16,15 +16,4 @@ struct medialb_bus {
+ 	struct kobject kobj_group;
+ };
+ 
+-struct device;
+-
+-int dim2_sysfs_probe(struct device *dev);
+-void dim2_sysfs_destroy(struct device *dev);
+-
+-/*
+- * callback,
+- * must deliver MediaLB state as true if locked or false if unlocked
+- */
+-bool dim2_sysfs_get_state_cb(void);
+-
+ #endif	/* DIM2_SYSFS_H */
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index a3a0154da567d..49559731bbcf1 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -726,7 +726,7 @@ static struct platform_driver dw8250_platform_driver = {
+ 		.name		= "dw-apb-uart",
+ 		.pm		= &dw8250_pm_ops,
+ 		.of_match_table	= dw8250_of_match,
+-		.acpi_match_table = ACPI_PTR(dw8250_acpi_match),
++		.acpi_match_table = dw8250_acpi_match,
+ 	},
+ 	.probe			= dw8250_probe,
+ 	.remove			= dw8250_remove,
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 5d40f1010fbfd..110a19c5138e8 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2675,21 +2675,32 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
+ void serial8250_update_uartclk(struct uart_port *port, unsigned int uartclk)
+ {
+ 	struct uart_8250_port *up = up_to_u8250p(port);
++	struct tty_port *tport = &port->state->port;
+ 	unsigned int baud, quot, frac = 0;
+ 	struct ktermios *termios;
++	struct tty_struct *tty;
+ 	unsigned long flags;
+ 
+-	mutex_lock(&port->state->port.mutex);
++	tty = tty_port_tty_get(tport);
++	if (!tty) {
++		mutex_lock(&tport->mutex);
++		port->uartclk = uartclk;
++		mutex_unlock(&tport->mutex);
++		return;
++	}
++
++	down_write(&tty->termios_rwsem);
++	mutex_lock(&tport->mutex);
+ 
+ 	if (port->uartclk == uartclk)
+ 		goto out_lock;
+ 
+ 	port->uartclk = uartclk;
+ 
+-	if (!tty_port_initialized(&port->state->port))
++	if (!tty_port_initialized(tport))
+ 		goto out_lock;
+ 
+-	termios = &port->state->port.tty->termios;
++	termios = &tty->termios;
+ 
+ 	baud = serial8250_get_baud_rate(port, termios, NULL);
+ 	quot = serial8250_get_divisor(port, baud, &frac);
+@@ -2706,7 +2717,9 @@ void serial8250_update_uartclk(struct uart_port *port, unsigned int uartclk)
+ 	serial8250_rpm_put(up);
+ 
+ out_lock:
+-	mutex_unlock(&port->state->port.mutex);
++	mutex_unlock(&tport->mutex);
++	up_write(&tty->termios_rwsem);
++	tty_kref_put(tty);
+ }
+ EXPORT_SYMBOL_GPL(serial8250_update_uartclk);
+ 
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index cacf7266a262d..28cc328ddb6eb 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -2049,7 +2049,7 @@ imx_uart_console_write(struct console *co, const char *s, unsigned int count)
+  * If the port was already initialised (eg, by a boot loader),
+  * try to determine the current setup.
+  */
+-static void __init
++static void
+ imx_uart_console_get_options(struct imx_port *sport, int *baud,
+ 			     int *parity, int *bits)
+ {
+@@ -2108,7 +2108,7 @@ imx_uart_console_get_options(struct imx_port *sport, int *baud,
+ 	}
+ }
+ 
+-static int __init
++static int
+ imx_uart_console_setup(struct console *co, char *options)
+ {
+ 	struct imx_port *sport;
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 68a0ff6054761..e6fb5077fe349 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -222,7 +222,11 @@ static int uart_port_startup(struct tty_struct *tty, struct uart_state *state,
+ 	if (retval == 0) {
+ 		if (uart_console(uport) && uport->cons->cflag) {
+ 			tty->termios.c_cflag = uport->cons->cflag;
++			tty->termios.c_ispeed = uport->cons->ispeed;
++			tty->termios.c_ospeed = uport->cons->ospeed;
+ 			uport->cons->cflag = 0;
++			uport->cons->ispeed = 0;
++			uport->cons->ospeed = 0;
+ 		}
+ 		/*
+ 		 * Initialise the hardware port settings.
+@@ -290,8 +294,11 @@ static void uart_shutdown(struct tty_struct *tty, struct uart_state *state)
+ 		/*
+ 		 * Turn off DTR and RTS early.
+ 		 */
+-		if (uport && uart_console(uport) && tty)
++		if (uport && uart_console(uport) && tty) {
+ 			uport->cons->cflag = tty->termios.c_cflag;
++			uport->cons->ispeed = tty->termios.c_ispeed;
++			uport->cons->ospeed = tty->termios.c_ospeed;
++		}
+ 
+ 		if (!tty || C_HUPCL(tty))
+ 			uart_port_dtr_rts(uport, 0);
+@@ -2123,8 +2130,11 @@ uart_set_options(struct uart_port *port, struct console *co,
+ 	 * Allow the setting of the UART parameters with a NULL console
+ 	 * too:
+ 	 */
+-	if (co)
++	if (co) {
+ 		co->cflag = termios.c_cflag;
++		co->ispeed = termios.c_ispeed;
++		co->ospeed = termios.c_ospeed;
++	}
+ 
+ 	return 0;
+ }
+@@ -2258,6 +2268,8 @@ int uart_resume_port(struct uart_driver *drv, struct uart_port *uport)
+ 		 */
+ 		memset(&termios, 0, sizeof(struct ktermios));
+ 		termios.c_cflag = uport->cons->cflag;
++		termios.c_ispeed = uport->cons->ispeed;
++		termios.c_ospeed = uport->cons->ospeed;
+ 
+ 		/*
+ 		 * If that's unset, use the tty termios setting.
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index a9b1ee27183a7..b5a8afbc452ba 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -601,9 +601,10 @@ static void cdns_uart_start_tx(struct uart_port *port)
+ 	if (uart_circ_empty(&port->state->xmit))
+ 		return;
+ 
++	writel(CDNS_UART_IXR_TXEMPTY, port->membase + CDNS_UART_ISR);
++
+ 	cdns_uart_handle_tx(port);
+ 
+-	writel(CDNS_UART_IXR_TXEMPTY, port->membase + CDNS_UART_ISR);
+ 	/* Enable the TX Empty interrupt */
+ 	writel(CDNS_UART_IXR_TXEMPTY, port->membase + CDNS_UART_IER);
+ }
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index aa40e510b806e..127b1a62b1bf4 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -509,7 +509,7 @@ int hw_device_reset(struct ci_hdrc *ci)
+ 	return 0;
+ }
+ 
+-static irqreturn_t ci_irq(int irq, void *data)
++static irqreturn_t ci_irq_handler(int irq, void *data)
+ {
+ 	struct ci_hdrc *ci = data;
+ 	irqreturn_t ret = IRQ_NONE;
+@@ -562,6 +562,15 @@ static irqreturn_t ci_irq(int irq, void *data)
+ 	return ret;
+ }
+ 
++static void ci_irq(struct ci_hdrc *ci)
++{
++	unsigned long flags;
++
++	local_irq_save(flags);
++	ci_irq_handler(ci->irq, ci);
++	local_irq_restore(flags);
++}
++
+ static int ci_cable_notifier(struct notifier_block *nb, unsigned long event,
+ 			     void *ptr)
+ {
+@@ -571,7 +580,7 @@ static int ci_cable_notifier(struct notifier_block *nb, unsigned long event,
+ 	cbl->connected = event;
+ 	cbl->changed = true;
+ 
+-	ci_irq(ci->irq, ci);
++	ci_irq(ci);
+ 	return NOTIFY_DONE;
+ }
+ 
+@@ -612,7 +621,7 @@ static int ci_usb_role_switch_set(struct usb_role_switch *sw,
+ 	if (cable) {
+ 		cable->changed = true;
+ 		cable->connected = false;
+-		ci_irq(ci->irq, ci);
++		ci_irq(ci);
+ 		spin_unlock_irqrestore(&ci->lock, flags);
+ 		if (ci->wq && role != USB_ROLE_NONE)
+ 			flush_workqueue(ci->wq);
+@@ -630,7 +639,7 @@ static int ci_usb_role_switch_set(struct usb_role_switch *sw,
+ 	if (cable) {
+ 		cable->changed = true;
+ 		cable->connected = true;
+-		ci_irq(ci->irq, ci);
++		ci_irq(ci);
+ 	}
+ 	spin_unlock_irqrestore(&ci->lock, flags);
+ 	pm_runtime_put_sync(ci->dev);
+@@ -1166,7 +1175,7 @@ static int ci_hdrc_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	ret = devm_request_irq(dev, ci->irq, ci_irq, IRQF_SHARED,
++	ret = devm_request_irq(dev, ci->irq, ci_irq_handler, IRQF_SHARED,
+ 			ci->platdata->name, ci);
+ 	if (ret)
+ 		goto stop;
+@@ -1287,11 +1296,11 @@ static void ci_extcon_wakeup_int(struct ci_hdrc *ci)
+ 
+ 	if (!IS_ERR(cable_id->edev) && ci->is_otg &&
+ 		(otgsc & OTGSC_IDIE) && (otgsc & OTGSC_IDIS))
+-		ci_irq(ci->irq, ci);
++		ci_irq(ci);
+ 
+ 	if (!IS_ERR(cable_vbus->edev) && ci->is_otg &&
+ 		(otgsc & OTGSC_BSVIE) && (otgsc & OTGSC_BSVIS))
+-		ci_irq(ci->irq, ci);
++		ci_irq(ci);
+ }
+ 
+ static int ci_controller_resume(struct device *dev)
+diff --git a/drivers/usb/dwc2/drd.c b/drivers/usb/dwc2/drd.c
+index 2d4176f5788eb..aa6eb76f64ddc 100644
+--- a/drivers/usb/dwc2/drd.c
++++ b/drivers/usb/dwc2/drd.c
+@@ -7,6 +7,7 @@
+  * Author(s): Amelie Delaunay <amelie.delaunay@st.com>
+  */
+ 
++#include <linux/clk.h>
+ #include <linux/iopoll.h>
+ #include <linux/platform_device.h>
+ #include <linux/usb/role.h>
+@@ -25,9 +26,9 @@ static void dwc2_ovr_init(struct dwc2_hsotg *hsotg)
+ 	gotgctl &= ~(GOTGCTL_BVALOVAL | GOTGCTL_AVALOVAL | GOTGCTL_VBVALOVAL);
+ 	dwc2_writel(hsotg, gotgctl, GOTGCTL);
+ 
+-	dwc2_force_mode(hsotg, false);
+-
+ 	spin_unlock_irqrestore(&hsotg->lock, flags);
++
++	dwc2_force_mode(hsotg, (hsotg->dr_mode == USB_DR_MODE_HOST));
+ }
+ 
+ static int dwc2_ovr_avalid(struct dwc2_hsotg *hsotg, bool valid)
+@@ -39,6 +40,7 @@ static int dwc2_ovr_avalid(struct dwc2_hsotg *hsotg, bool valid)
+ 	    (!valid && !(gotgctl & GOTGCTL_ASESVLD)))
+ 		return -EALREADY;
+ 
++	gotgctl &= ~GOTGCTL_BVALOVAL;
+ 	if (valid)
+ 		gotgctl |= GOTGCTL_AVALOVAL | GOTGCTL_VBVALOVAL;
+ 	else
+@@ -57,6 +59,7 @@ static int dwc2_ovr_bvalid(struct dwc2_hsotg *hsotg, bool valid)
+ 	    (!valid && !(gotgctl & GOTGCTL_BSESVLD)))
+ 		return -EALREADY;
+ 
++	gotgctl &= ~GOTGCTL_AVALOVAL;
+ 	if (valid)
+ 		gotgctl |= GOTGCTL_BVALOVAL | GOTGCTL_VBVALOVAL;
+ 	else
+@@ -86,6 +89,20 @@ static int dwc2_drd_role_sw_set(struct usb_role_switch *sw, enum usb_role role)
+ 	}
+ #endif
+ 
++	/*
++	 * In case of USB_DR_MODE_PERIPHERAL, clock is disabled at the end of
++	 * the probe and enabled on udc_start.
++	 * If role-switch set is called before the udc_start, we need to enable
++	 * the clock to read/write GOTGCTL and GUSBCFG registers to override
++	 * mode and sessions. It is the case if cable is plugged at boot.
++	 */
++	if (!hsotg->ll_hw_enabled && hsotg->clk) {
++		int ret = clk_prepare_enable(hsotg->clk);
++
++		if (ret)
++			return ret;
++	}
++
+ 	spin_lock_irqsave(&hsotg->lock, flags);
+ 
+ 	if (role == USB_ROLE_HOST) {
+@@ -110,6 +127,9 @@ static int dwc2_drd_role_sw_set(struct usb_role_switch *sw, enum usb_role role)
+ 		/* This will raise a Connector ID Status Change Interrupt */
+ 		dwc2_force_mode(hsotg, role == USB_ROLE_HOST);
+ 
++	if (!hsotg->ll_hw_enabled && hsotg->clk)
++		clk_disable_unprepare(hsotg->clk);
++
+ 	dev_dbg(hsotg->dev, "%s-session valid\n",
+ 		role == USB_ROLE_NONE ? "No" :
+ 		role == USB_ROLE_HOST ? "A" : "B");
+diff --git a/drivers/usb/gadget/legacy/hid.c b/drivers/usb/gadget/legacy/hid.c
+index 5b27d289443fe..3912cc805f3af 100644
+--- a/drivers/usb/gadget/legacy/hid.c
++++ b/drivers/usb/gadget/legacy/hid.c
+@@ -99,8 +99,10 @@ static int do_config(struct usb_configuration *c)
+ 
+ 	list_for_each_entry(e, &hidg_func_list, node) {
+ 		e->f = usb_get_function(e->fi);
+-		if (IS_ERR(e->f))
++		if (IS_ERR(e->f)) {
++			status = PTR_ERR(e->f);
+ 			goto put;
++		}
+ 		status = usb_add_function(c, e->f);
+ 		if (status < 0) {
+ 			usb_put_function(e->f);
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 8466527eb2462..41d5a46c1dc1a 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -171,7 +171,6 @@ static void xhci_common_hub_descriptor(struct xhci_hcd *xhci,
+ {
+ 	u16 temp;
+ 
+-	desc->bPwrOn2PwrGood = 10;	/* xhci section 5.4.9 says 20ms max */
+ 	desc->bHubContrCurrent = 0;
+ 
+ 	desc->bNbrPorts = ports;
+@@ -206,6 +205,7 @@ static void xhci_usb2_hub_descriptor(struct usb_hcd *hcd, struct xhci_hcd *xhci,
+ 	desc->bDescriptorType = USB_DT_HUB;
+ 	temp = 1 + (ports / 8);
+ 	desc->bDescLength = USB_DT_HUB_NONVAR_SIZE + 2 * temp;
++	desc->bPwrOn2PwrGood = 10;	/* xhci section 5.4.8 says 20ms */
+ 
+ 	/* The Device Removable bits are reported on a byte granularity.
+ 	 * If the port doesn't exist within that byte, the bit is set to 0.
+@@ -258,6 +258,7 @@ static void xhci_usb3_hub_descriptor(struct usb_hcd *hcd, struct xhci_hcd *xhci,
+ 	xhci_common_hub_descriptor(xhci, desc, ports);
+ 	desc->bDescriptorType = USB_DT_SS_HUB;
+ 	desc->bDescLength = USB_DT_SS_HUB_SIZE;
++	desc->bPwrOn2PwrGood = 50;	/* usb 3.1 may fail if less than 100ms */
+ 
+ 	/* header decode latency should be zero for roothubs,
+ 	 * see section 4.23.5.2.
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 8c65e9476b41f..80251a2579fda 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -64,6 +64,13 @@
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3			0x43ba
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_2			0x43bb
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_1			0x43bc
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_1		0x161a
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_2		0x161b
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3		0x161d
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4		0x161e
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5		0x15d6
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6		0x15d7
++
+ #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI			0x1042
+ #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI		0x1142
+ #define PCI_DEVICE_ID_ASMEDIA_1142_XHCI			0x1242
+@@ -312,6 +319,15 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	     pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_4))
+ 		xhci->quirks |= XHCI_NO_SOFT_RETRY;
+ 
++	if (pdev->vendor == PCI_VENDOR_ID_AMD &&
++	    (pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_1 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_2 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6))
++		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
++
+ 	if (xhci->quirks & XHCI_RESET_ON_RESUME)
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
+ 				"QUIRK: Resetting on resume");
+diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
+index 70ec296815268..72a06af250812 100644
+--- a/drivers/usb/misc/iowarrior.c
++++ b/drivers/usb/misc/iowarrior.c
+@@ -99,10 +99,6 @@ struct iowarrior {
+ /*    globals   */
+ /*--------------*/
+ 
+-/*
+- *  USB spec identifies 5 second timeouts.
+- */
+-#define GET_TIMEOUT 5
+ #define USB_REQ_GET_REPORT  0x01
+ //#if 0
+ static int usb_get_report(struct usb_device *dev,
+@@ -114,7 +110,7 @@ static int usb_get_report(struct usb_device *dev,
+ 			       USB_DIR_IN | USB_TYPE_CLASS |
+ 			       USB_RECIP_INTERFACE, (type << 8) + id,
+ 			       inter->desc.bInterfaceNumber, buf, size,
+-			       GET_TIMEOUT*HZ);
++			       USB_CTRL_GET_TIMEOUT);
+ }
+ //#endif
+ 
+@@ -129,7 +125,7 @@ static int usb_set_report(struct usb_interface *intf, unsigned char type,
+ 			       USB_TYPE_CLASS | USB_RECIP_INTERFACE,
+ 			       (type << 8) + id,
+ 			       intf->cur_altsetting->desc.bInterfaceNumber, buf,
+-			       size, HZ);
++			       size, 1000);
+ }
+ 
+ /*---------------------*/
+diff --git a/drivers/usb/musb/Kconfig b/drivers/usb/musb/Kconfig
+index 8de143807c1ae..4d61df6a9b5c8 100644
+--- a/drivers/usb/musb/Kconfig
++++ b/drivers/usb/musb/Kconfig
+@@ -120,7 +120,7 @@ config USB_MUSB_MEDIATEK
+ 	tristate "MediaTek platforms"
+ 	depends on ARCH_MEDIATEK || COMPILE_TEST
+ 	depends on NOP_USB_XCEIV
+-	depends on GENERIC_PHY
++	select GENERIC_PHY
+ 	select USB_ROLE_SWITCH
+ 
+ comment "MUSB DMA mode"
+diff --git a/drivers/usb/serial/keyspan.c b/drivers/usb/serial/keyspan.c
+index aa3dbce22cfbe..451759f38b573 100644
+--- a/drivers/usb/serial/keyspan.c
++++ b/drivers/usb/serial/keyspan.c
+@@ -2910,22 +2910,22 @@ static int keyspan_port_probe(struct usb_serial_port *port)
+ 	for (i = 0; i < ARRAY_SIZE(p_priv->in_buffer); ++i) {
+ 		p_priv->in_buffer[i] = kzalloc(IN_BUFLEN, GFP_KERNEL);
+ 		if (!p_priv->in_buffer[i])
+-			goto err_in_buffer;
++			goto err_free_in_buffer;
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(p_priv->out_buffer); ++i) {
+ 		p_priv->out_buffer[i] = kzalloc(OUT_BUFLEN, GFP_KERNEL);
+ 		if (!p_priv->out_buffer[i])
+-			goto err_out_buffer;
++			goto err_free_out_buffer;
+ 	}
+ 
+ 	p_priv->inack_buffer = kzalloc(INACK_BUFLEN, GFP_KERNEL);
+ 	if (!p_priv->inack_buffer)
+-		goto err_inack_buffer;
++		goto err_free_out_buffer;
+ 
+ 	p_priv->outcont_buffer = kzalloc(OUTCONT_BUFLEN, GFP_KERNEL);
+ 	if (!p_priv->outcont_buffer)
+-		goto err_outcont_buffer;
++		goto err_free_inack_buffer;
+ 
+ 	p_priv->device_details = d_details;
+ 
+@@ -2971,15 +2971,14 @@ static int keyspan_port_probe(struct usb_serial_port *port)
+ 
+ 	return 0;
+ 
+-err_outcont_buffer:
++err_free_inack_buffer:
+ 	kfree(p_priv->inack_buffer);
+-err_inack_buffer:
++err_free_out_buffer:
+ 	for (i = 0; i < ARRAY_SIZE(p_priv->out_buffer); ++i)
+ 		kfree(p_priv->out_buffer[i]);
+-err_out_buffer:
++err_free_in_buffer:
+ 	for (i = 0; i < ARRAY_SIZE(p_priv->in_buffer); ++i)
+ 		kfree(p_priv->in_buffer[i]);
+-err_in_buffer:
+ 	kfree(p_priv);
+ 
+ 	return -ENOMEM;
+diff --git a/drivers/usb/typec/Kconfig b/drivers/usb/typec/Kconfig
+index e7f120874c483..0d953c6805f0f 100644
+--- a/drivers/usb/typec/Kconfig
++++ b/drivers/usb/typec/Kconfig
+@@ -75,9 +75,9 @@ config TYPEC_TPS6598X
+ 
+ config TYPEC_STUSB160X
+ 	tristate "STMicroelectronics STUSB160x Type-C controller driver"
+-	depends on I2C
+-	depends on REGMAP_I2C
+ 	depends on USB_ROLE_SWITCH || !USB_ROLE_SWITCH
++	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  Say Y or M here if your system has STMicroelectronics STUSB160x
+ 	  Type-C port controller.
+diff --git a/drivers/video/backlight/backlight.c b/drivers/video/backlight/backlight.c
+index 537fe1b376ad7..fc990e576340b 100644
+--- a/drivers/video/backlight/backlight.c
++++ b/drivers/video/backlight/backlight.c
+@@ -688,12 +688,6 @@ static struct backlight_device *of_find_backlight(struct device *dev)
+ 			of_node_put(np);
+ 			if (!bd)
+ 				return ERR_PTR(-EPROBE_DEFER);
+-			/*
+-			 * Note: gpio_backlight uses brightness as
+-			 * power state during probe
+-			 */
+-			if (!bd->props.brightness)
+-				bd->props.brightness = bd->props.max_brightness;
+ 		}
+ 	}
+ 
+diff --git a/drivers/video/fbdev/chipsfb.c b/drivers/video/fbdev/chipsfb.c
+index 998067b701fa0..393894af26f84 100644
+--- a/drivers/video/fbdev/chipsfb.c
++++ b/drivers/video/fbdev/chipsfb.c
+@@ -331,7 +331,7 @@ static const struct fb_var_screeninfo chipsfb_var = {
+ 
+ static void init_chips(struct fb_info *p, unsigned long addr)
+ {
+-	memset(p->screen_base, 0, 0x100000);
++	fb_memset(p->screen_base, 0, 0x100000);
+ 
+ 	p->fix = chipsfb_fix;
+ 	p->fix.smem_start = addr;
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 6c730d6d50f71..e9432dbbec0a7 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -992,6 +992,8 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
+ 
+ 	head = vq->packed.next_avail_idx;
+ 	desc = alloc_indirect_packed(total_sg, gfp);
++	if (!desc)
++		return -ENOMEM;
+ 
+ 	if (unlikely(vq->vq.num_free < 1)) {
+ 		pr_debug("Can't add buf len 1 - avail = 0\n");
+@@ -1103,6 +1105,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
+ 	unsigned int i, n, c, descs_used, err_idx;
+ 	__le16 head_flags, flags;
+ 	u16 head, id, prev, curr, avail_used_flags;
++	int err;
+ 
+ 	START_USE(vq);
+ 
+@@ -1118,9 +1121,14 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
+ 
+ 	BUG_ON(total_sg == 0);
+ 
+-	if (virtqueue_use_indirect(_vq, total_sg))
+-		return virtqueue_add_indirect_packed(vq, sgs, total_sg,
+-				out_sgs, in_sgs, data, gfp);
++	if (virtqueue_use_indirect(_vq, total_sg)) {
++		err = virtqueue_add_indirect_packed(vq, sgs, total_sg, out_sgs,
++						    in_sgs, data, gfp);
++		if (err != -ENOMEM)
++			return err;
++
++		/* fall back on direct */
++	}
+ 
+ 	head = vq->packed.next_avail_idx;
+ 	avail_used_flags = vq->packed.avail_used_flags;
+diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
+index db935d6b10c27..01ce3f41cc219 100644
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -1723,7 +1723,7 @@ config SIBYTE_WDOG
+ 
+ config AR7_WDT
+ 	tristate "TI AR7 Watchdog Timer"
+-	depends on AR7 || (MIPS && COMPILE_TEST)
++	depends on AR7 || (MIPS && 32BIT && COMPILE_TEST)
+ 	help
+ 	  Hardware driver for the TI AR7 Watchdog Timer.
+ 
+diff --git a/drivers/watchdog/f71808e_wdt.c b/drivers/watchdog/f71808e_wdt.c
+index f60beec1bbaea..f7d82d2619133 100644
+--- a/drivers/watchdog/f71808e_wdt.c
++++ b/drivers/watchdog/f71808e_wdt.c
+@@ -228,15 +228,17 @@ static int watchdog_set_timeout(int timeout)
+ 
+ 	mutex_lock(&watchdog.lock);
+ 
+-	watchdog.timeout = timeout;
+ 	if (timeout > 0xff) {
+ 		watchdog.timer_val = DIV_ROUND_UP(timeout, 60);
+ 		watchdog.minutes_mode = true;
++		timeout = watchdog.timer_val * 60;
+ 	} else {
+ 		watchdog.timer_val = timeout;
+ 		watchdog.minutes_mode = false;
+ 	}
+ 
++	watchdog.timeout = timeout;
++
+ 	mutex_unlock(&watchdog.lock);
+ 
+ 	return 0;
+diff --git a/drivers/watchdog/omap_wdt.c b/drivers/watchdog/omap_wdt.c
+index 1616f93dfad7f..74d785b2b478f 100644
+--- a/drivers/watchdog/omap_wdt.c
++++ b/drivers/watchdog/omap_wdt.c
+@@ -268,8 +268,12 @@ static int omap_wdt_probe(struct platform_device *pdev)
+ 			wdev->wdog.bootstatus = WDIOF_CARDRESET;
+ 	}
+ 
+-	if (!early_enable)
++	if (early_enable) {
++		omap_wdt_start(&wdev->wdog);
++		set_bit(WDOG_HW_RUNNING, &wdev->wdog.status);
++	} else {
+ 		omap_wdt_disable(wdev);
++	}
+ 
+ 	ret = watchdog_register_device(&wdev->wdog);
+ 	if (ret) {
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index 1911a62a6d9c1..c5b02365a5fe7 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -58,6 +58,7 @@
+ #include <linux/percpu-defs.h>
+ #include <linux/slab.h>
+ #include <linux/sysctl.h>
++#include <linux/moduleparam.h>
+ 
+ #include <asm/page.h>
+ #include <asm/tlb.h>
+@@ -73,6 +74,12 @@
+ #include <xen/page.h>
+ #include <xen/mem-reservation.h>
+ 
++#undef MODULE_PARAM_PREFIX
++#define MODULE_PARAM_PREFIX "xen."
++
++static uint __read_mostly balloon_boot_timeout = 180;
++module_param(balloon_boot_timeout, uint, 0444);
++
+ static int xen_hotplug_unpopulated;
+ 
+ #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG
+@@ -125,12 +132,12 @@ static struct ctl_table xen_root[] = {
+  * BP_ECANCELED: error, balloon operation canceled.
+  */
+ 
+-enum bp_state {
++static enum bp_state {
+ 	BP_DONE,
+ 	BP_WAIT,
+ 	BP_EAGAIN,
+ 	BP_ECANCELED
+-};
++} balloon_state = BP_DONE;
+ 
+ /* Main waiting point for xen-balloon thread. */
+ static DECLARE_WAIT_QUEUE_HEAD(balloon_thread_wq);
+@@ -199,18 +206,15 @@ static struct page *balloon_next_page(struct page *page)
+ 	return list_entry(next, struct page, lru);
+ }
+ 
+-static enum bp_state update_schedule(enum bp_state state)
++static void update_schedule(void)
+ {
+-	if (state == BP_WAIT)
+-		return BP_WAIT;
+-
+-	if (state == BP_ECANCELED)
+-		return BP_ECANCELED;
++	if (balloon_state == BP_WAIT || balloon_state == BP_ECANCELED)
++		return;
+ 
+-	if (state == BP_DONE) {
++	if (balloon_state == BP_DONE) {
+ 		balloon_stats.schedule_delay = 1;
+ 		balloon_stats.retry_count = 1;
+-		return BP_DONE;
++		return;
+ 	}
+ 
+ 	++balloon_stats.retry_count;
+@@ -219,7 +223,8 @@ static enum bp_state update_schedule(enum bp_state state)
+ 			balloon_stats.retry_count > balloon_stats.max_retry_count) {
+ 		balloon_stats.schedule_delay = 1;
+ 		balloon_stats.retry_count = 1;
+-		return BP_ECANCELED;
++		balloon_state = BP_ECANCELED;
++		return;
+ 	}
+ 
+ 	balloon_stats.schedule_delay <<= 1;
+@@ -227,7 +232,7 @@ static enum bp_state update_schedule(enum bp_state state)
+ 	if (balloon_stats.schedule_delay > balloon_stats.max_schedule_delay)
+ 		balloon_stats.schedule_delay = balloon_stats.max_schedule_delay;
+ 
+-	return BP_EAGAIN;
++	balloon_state = BP_EAGAIN;
+ }
+ 
+ #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG
+@@ -494,9 +499,9 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
+  * Stop waiting if either state is BP_DONE and ballooning action is
+  * needed, or if the credit has changed while state is not BP_DONE.
+  */
+-static bool balloon_thread_cond(enum bp_state state, long credit)
++static bool balloon_thread_cond(long credit)
+ {
+-	if (state == BP_DONE)
++	if (balloon_state == BP_DONE)
+ 		credit = 0;
+ 
+ 	return current_credit() != credit || kthread_should_stop();
+@@ -510,13 +515,12 @@ static bool balloon_thread_cond(enum bp_state state, long credit)
+  */
+ static int balloon_thread(void *unused)
+ {
+-	enum bp_state state = BP_DONE;
+ 	long credit;
+ 	unsigned long timeout;
+ 
+ 	set_freezable();
+ 	for (;;) {
+-		switch (state) {
++		switch (balloon_state) {
+ 		case BP_DONE:
+ 		case BP_ECANCELED:
+ 			timeout = 3600 * HZ;
+@@ -532,7 +536,7 @@ static int balloon_thread(void *unused)
+ 		credit = current_credit();
+ 
+ 		wait_event_freezable_timeout(balloon_thread_wq,
+-			balloon_thread_cond(state, credit), timeout);
++			balloon_thread_cond(credit), timeout);
+ 
+ 		if (kthread_should_stop())
+ 			return 0;
+@@ -543,22 +547,23 @@ static int balloon_thread(void *unused)
+ 
+ 		if (credit > 0) {
+ 			if (balloon_is_inflated())
+-				state = increase_reservation(credit);
++				balloon_state = increase_reservation(credit);
+ 			else
+-				state = reserve_additional_memory();
++				balloon_state = reserve_additional_memory();
+ 		}
+ 
+ 		if (credit < 0) {
+ 			long n_pages;
+ 
+ 			n_pages = min(-credit, si_mem_available());
+-			state = decrease_reservation(n_pages, GFP_BALLOON);
+-			if (state == BP_DONE && n_pages != -credit &&
++			balloon_state = decrease_reservation(n_pages,
++							     GFP_BALLOON);
++			if (balloon_state == BP_DONE && n_pages != -credit &&
+ 			    n_pages < totalreserve_pages)
+-				state = BP_EAGAIN;
++				balloon_state = BP_EAGAIN;
+ 		}
+ 
+-		state = update_schedule(state);
++		update_schedule();
+ 
+ 		mutex_unlock(&balloon_mutex);
+ 
+@@ -765,3 +770,38 @@ static int __init balloon_init(void)
+ 	return 0;
+ }
+ subsys_initcall(balloon_init);
++
++static int __init balloon_wait_finish(void)
++{
++	long credit, last_credit = 0;
++	unsigned long last_changed = 0;
++
++	if (!xen_domain())
++		return -ENODEV;
++
++	/* PV guests don't need to wait. */
++	if (xen_pv_domain() || !current_credit())
++		return 0;
++
++	pr_notice("Waiting for initial ballooning down having finished.\n");
++
++	while ((credit = current_credit()) < 0) {
++		if (credit != last_credit) {
++			last_changed = jiffies;
++			last_credit = credit;
++		}
++		if (balloon_state == BP_ECANCELED) {
++			pr_warn_once("Initial ballooning failed, %ld pages need to be freed.\n",
++				     -credit);
++			if (jiffies - last_changed >= HZ * balloon_boot_timeout)
++				panic("Initial ballooning failed!\n");
++		}
++
++		schedule_timeout_interruptible(HZ / 10);
++	}
++
++	pr_notice("Initial ballooning down finished.\n");
++
++	return 0;
++}
++late_initcall_sync(balloon_wait_finish);
+diff --git a/drivers/xen/xen-pciback/conf_space_capability.c b/drivers/xen/xen-pciback/conf_space_capability.c
+index 22f13abbe9130..5e53b4817f167 100644
+--- a/drivers/xen/xen-pciback/conf_space_capability.c
++++ b/drivers/xen/xen-pciback/conf_space_capability.c
+@@ -160,7 +160,7 @@ static void *pm_ctrl_init(struct pci_dev *dev, int offset)
+ 	}
+ 
+ out:
+-	return ERR_PTR(err);
++	return err ? ERR_PTR(err) : NULL;
+ }
+ 
+ static const struct config_field caplist_pm[] = {
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index ef7df2141f34f..9051bb47cbdd9 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3223,7 +3223,8 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 		goto fail_sysfs;
+ 	}
+ 
+-	if (!sb_rdonly(sb) && !btrfs_check_rw_degradable(fs_info, NULL)) {
++	if (!sb_rdonly(sb) && fs_info->fs_devices->missing_devices &&
++	    !btrfs_check_rw_degradable(fs_info, NULL)) {
+ 		btrfs_warn(fs_info,
+ 		"writable mount is not allowed due to too many missing devices");
+ 		goto fail_sysfs;
+diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
+index 96ef9fed9a656..3a3102bc15a05 100644
+--- a/fs/btrfs/reflink.c
++++ b/fs/btrfs/reflink.c
+@@ -634,7 +634,7 @@ static int btrfs_extent_same_range(struct inode *src, u64 loff, u64 len,
+ static int btrfs_extent_same(struct inode *src, u64 loff, u64 olen,
+ 			     struct inode *dst, u64 dst_loff)
+ {
+-	int ret;
++	int ret = 0;
+ 	u64 i, tail_len, chunk_count;
+ 	struct btrfs_root *root_dst = BTRFS_I(dst)->root;
+ 
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 3b93a98fd5449..4a5a3ae0acaae 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -2466,7 +2466,9 @@ again:
+ 		else {
+ 			ret = find_dir_range(log, path, dirid, key_type,
+ 					     &range_start, &range_end);
+-			if (ret != 0)
++			if (ret < 0)
++				goto out;
++			else if (ret > 0)
+ 				break;
+ 		}
+ 
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 593e0c6d6b44e..d9e582e40b5b7 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1133,8 +1133,10 @@ static void btrfs_close_one_device(struct btrfs_device *device)
+ 	if (device->devid == BTRFS_DEV_REPLACE_DEVID)
+ 		clear_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state);
+ 
+-	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state))
++	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state)) {
++		clear_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state);
+ 		fs_devices->missing_devices--;
++	}
+ 
+ 	btrfs_close_bdev(device);
+ 	if (device->bdev) {
+@@ -2067,8 +2069,11 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 	u64 num_devices;
+ 	int ret = 0;
+ 
+-	mutex_lock(&uuid_mutex);
+-
++	/*
++	 * The device list in fs_devices is accessed without locks (neither
++	 * uuid_mutex nor device_list_mutex) as it won't change on a mounted
++	 * filesystem and another device rm cannot run.
++	 */
+ 	num_devices = btrfs_num_devices(fs_info);
+ 
+ 	ret = btrfs_check_raid_min_devices(fs_info, num_devices - 1);
+@@ -2112,11 +2117,9 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 		mutex_unlock(&fs_info->chunk_mutex);
+ 	}
+ 
+-	mutex_unlock(&uuid_mutex);
+ 	ret = btrfs_shrink_device(device, 0);
+ 	if (!ret)
+ 		btrfs_reada_remove_dev(device);
+-	mutex_lock(&uuid_mutex);
+ 	if (ret)
+ 		goto error_undo;
+ 
+@@ -2192,7 +2195,6 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 	}
+ 
+ out:
+-	mutex_unlock(&uuid_mutex);
+ 	return ret;
+ 
+ error_undo:
+diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
+index 322ecae9a7580..052ad40ecdb28 100644
+--- a/fs/crypto/fscrypt_private.h
++++ b/fs/crypto/fscrypt_private.h
+@@ -557,8 +557,9 @@ int __init fscrypt_init_keyring(void);
+ struct fscrypt_mode {
+ 	const char *friendly_name;
+ 	const char *cipher_str;
+-	int keysize;
+-	int ivsize;
++	int keysize;		/* key size in bytes */
++	int security_strength;	/* security strength in bytes */
++	int ivsize;		/* IV size in bytes */
+ 	int logged_impl_name;
+ 	enum blk_crypto_mode_num blk_crypto_mode;
+ };
+diff --git a/fs/crypto/hkdf.c b/fs/crypto/hkdf.c
+index 0cba7928446d3..24172bf3e8c6f 100644
+--- a/fs/crypto/hkdf.c
++++ b/fs/crypto/hkdf.c
+@@ -16,9 +16,14 @@
+ 
+ /*
+  * HKDF supports any unkeyed cryptographic hash algorithm, but fscrypt uses
+- * SHA-512 because it is reasonably secure and efficient; and since it produces
+- * a 64-byte digest, deriving an AES-256-XTS key preserves all 64 bytes of
+- * entropy from the master key and requires only one iteration of HKDF-Expand.
++ * SHA-512 because it is well-established, secure, and reasonably efficient.
++ *
++ * HKDF-SHA256 was also considered, as its 256-bit security strength would be
++ * sufficient here.  A 512-bit security strength is "nice to have", though.
++ * Also, on 64-bit CPUs, SHA-512 is usually just as fast as SHA-256.  In the
++ * common case of deriving an AES-256-XTS key (512 bits), that can result in
++ * HKDF-SHA512 being much faster than HKDF-SHA256, as the longer digest size of
++ * SHA-512 causes HKDF-Expand to only need to do one iteration rather than two.
+  */
+ #define HKDF_HMAC_ALG		"hmac(sha512)"
+ #define HKDF_HASHLEN		SHA512_DIGEST_SIZE
+diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
+index 9a6f9a188efb9..73d96e35d9ae4 100644
+--- a/fs/crypto/keysetup.c
++++ b/fs/crypto/keysetup.c
+@@ -19,6 +19,7 @@ struct fscrypt_mode fscrypt_modes[] = {
+ 		.friendly_name = "AES-256-XTS",
+ 		.cipher_str = "xts(aes)",
+ 		.keysize = 64,
++		.security_strength = 32,
+ 		.ivsize = 16,
+ 		.blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
+ 	},
+@@ -26,12 +27,14 @@ struct fscrypt_mode fscrypt_modes[] = {
+ 		.friendly_name = "AES-256-CTS-CBC",
+ 		.cipher_str = "cts(cbc(aes))",
+ 		.keysize = 32,
++		.security_strength = 32,
+ 		.ivsize = 16,
+ 	},
+ 	[FSCRYPT_MODE_AES_128_CBC] = {
+ 		.friendly_name = "AES-128-CBC-ESSIV",
+ 		.cipher_str = "essiv(cbc(aes),sha256)",
+ 		.keysize = 16,
++		.security_strength = 16,
+ 		.ivsize = 16,
+ 		.blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
+ 	},
+@@ -39,12 +42,14 @@ struct fscrypt_mode fscrypt_modes[] = {
+ 		.friendly_name = "AES-128-CTS-CBC",
+ 		.cipher_str = "cts(cbc(aes))",
+ 		.keysize = 16,
++		.security_strength = 16,
+ 		.ivsize = 16,
+ 	},
+ 	[FSCRYPT_MODE_ADIANTUM] = {
+ 		.friendly_name = "Adiantum",
+ 		.cipher_str = "adiantum(xchacha12,aes)",
+ 		.keysize = 32,
++		.security_strength = 32,
+ 		.ivsize = 32,
+ 		.blk_crypto_mode = BLK_ENCRYPTION_MODE_ADIANTUM,
+ 	},
+@@ -357,6 +362,45 @@ static int fscrypt_setup_v2_file_key(struct fscrypt_info *ci,
+ 	return 0;
+ }
+ 
++/*
++ * Check whether the size of the given master key (@mk) is appropriate for the
++ * encryption settings which a particular file will use (@ci).
++ *
++ * If the file uses a v1 encryption policy, then the master key must be at least
++ * as long as the derived key, as this is a requirement of the v1 KDF.
++ *
++ * Otherwise, the KDF can accept any size key, so we enforce a slightly looser
++ * requirement: we require that the size of the master key be at least the
++ * maximum security strength of any algorithm whose key will be derived from it
++ * (but in practice we only need to consider @ci->ci_mode, since any other
++ * possible subkeys such as DIRHASH and INODE_HASH will never increase the
++ * required key size over @ci->ci_mode).  This allows AES-256-XTS keys to be
++ * derived from a 256-bit master key, which is cryptographically sufficient,
++ * rather than requiring a 512-bit master key which is unnecessarily long.  (We
++ * still allow 512-bit master keys if the user chooses to use them, though.)
++ */
++static bool fscrypt_valid_master_key_size(const struct fscrypt_master_key *mk,
++					  const struct fscrypt_info *ci)
++{
++	unsigned int min_keysize;
++
++	if (ci->ci_policy.version == FSCRYPT_POLICY_V1)
++		min_keysize = ci->ci_mode->keysize;
++	else
++		min_keysize = ci->ci_mode->security_strength;
++
++	if (mk->mk_secret.size < min_keysize) {
++		fscrypt_warn(NULL,
++			     "key with %s %*phN is too short (got %u bytes, need %u+ bytes)",
++			     master_key_spec_type(&mk->mk_spec),
++			     master_key_spec_len(&mk->mk_spec),
++			     (u8 *)&mk->mk_spec.u,
++			     mk->mk_secret.size, min_keysize);
++		return false;
++	}
++	return true;
++}
++
+ /*
+  * Find the master key, then set up the inode's actual encryption key.
+  *
+@@ -422,18 +466,7 @@ static int setup_file_encryption_key(struct fscrypt_info *ci,
+ 		goto out_release_key;
+ 	}
+ 
+-	/*
+-	 * Require that the master key be at least as long as the derived key.
+-	 * Otherwise, the derived key cannot possibly contain as much entropy as
+-	 * that required by the encryption mode it will be used for.  For v1
+-	 * policies it's also required for the KDF to work at all.
+-	 */
+-	if (mk->mk_secret.size < ci->ci_mode->keysize) {
+-		fscrypt_warn(NULL,
+-			     "key with %s %*phN is too short (got %u bytes, need %u+ bytes)",
+-			     master_key_spec_type(&mk_spec),
+-			     master_key_spec_len(&mk_spec), (u8 *)&mk_spec.u,
+-			     mk->mk_secret.size, ci->ci_mode->keysize);
++	if (!fscrypt_valid_master_key_size(mk, ci)) {
+ 		err = -ENOKEY;
+ 		goto out_release_key;
+ 	}
+diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
+index cbadbf55c6c20..8a6260aac26cb 100644
+--- a/fs/erofs/decompressor.c
++++ b/fs/erofs/decompressor.c
+@@ -170,7 +170,6 @@ static int z_erofs_lz4_decompress(struct z_erofs_decompress_req *rq, u8 *out)
+ 		erofs_err(rq->sb, "failed to decompress %d in[%u, %u] out[%u]",
+ 			  ret, inlen, inputmargin, rq->outputsize);
+ 
+-		WARN_ON(1);
+ 		print_hex_dump(KERN_DEBUG, "[ in]: ", DUMP_PREFIX_OFFSET,
+ 			       16, 1, src + inputmargin, inlen, true);
+ 		print_hex_dump(KERN_DEBUG, "[out]: ", DUMP_PREFIX_OFFSET,
+diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c
+index 730373e0965af..8b0288f70e93d 100644
+--- a/fs/exfat/inode.c
++++ b/fs/exfat/inode.c
+@@ -602,7 +602,7 @@ static int exfat_fill_inode(struct inode *inode, struct exfat_dir_entry *info)
+ 	exfat_save_attr(inode, info->attr);
+ 
+ 	inode->i_blocks = ((i_size_read(inode) + (sbi->cluster_size - 1)) &
+-		~(sbi->cluster_size - 1)) >> inode->i_blkbits;
++		~((loff_t)sbi->cluster_size - 1)) >> inode->i_blkbits;
+ 	inode->i_mtime = info->mtime;
+ 	inode->i_ctime = info->mtime;
+ 	ei->i_crtime = info->crtime;
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index aa4d74f9d1623..c2f237653f68e 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4970,36 +4970,6 @@ int ext4_get_es_cache(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 	return ext4_fill_es_cache_info(inode, start_blk, len_blks, fieinfo);
+ }
+ 
+-/*
+- * ext4_access_path:
+- * Function to access the path buffer for marking it dirty.
+- * It also checks if there are sufficient credits left in the journal handle
+- * to update path.
+- */
+-static int
+-ext4_access_path(handle_t *handle, struct inode *inode,
+-		struct ext4_ext_path *path)
+-{
+-	int credits, err;
+-
+-	if (!ext4_handle_valid(handle))
+-		return 0;
+-
+-	/*
+-	 * Check if need to extend journal credits
+-	 * 3 for leaf, sb, and inode plus 2 (bmap and group
+-	 * descriptor) for each block group; assume two block
+-	 * groups
+-	 */
+-	credits = ext4_writepage_trans_blocks(inode);
+-	err = ext4_datasem_ensure_credits(handle, inode, 7, credits, 0);
+-	if (err < 0)
+-		return err;
+-
+-	err = ext4_ext_get_access(handle, inode, path);
+-	return err;
+-}
+-
+ /*
+  * ext4_ext_shift_path_extents:
+  * Shift the extents of a path structure lying between path[depth].p_ext
+@@ -5014,6 +4984,7 @@ ext4_ext_shift_path_extents(struct ext4_ext_path *path, ext4_lblk_t shift,
+ 	int depth, err = 0;
+ 	struct ext4_extent *ex_start, *ex_last;
+ 	bool update = false;
++	int credits, restart_credits;
+ 	depth = path->p_depth;
+ 
+ 	while (depth >= 0) {
+@@ -5023,13 +4994,26 @@ ext4_ext_shift_path_extents(struct ext4_ext_path *path, ext4_lblk_t shift,
+ 				return -EFSCORRUPTED;
+ 
+ 			ex_last = EXT_LAST_EXTENT(path[depth].p_hdr);
++			/* leaf + sb + inode */
++			credits = 3;
++			if (ex_start == EXT_FIRST_EXTENT(path[depth].p_hdr)) {
++				update = true;
++				/* extent tree + sb + inode */
++				credits = depth + 2;
++			}
+ 
+-			err = ext4_access_path(handle, inode, path + depth);
+-			if (err)
++			restart_credits = ext4_writepage_trans_blocks(inode);
++			err = ext4_datasem_ensure_credits(handle, inode, credits,
++					restart_credits, 0);
++			if (err) {
++				if (err > 0)
++					err = -EAGAIN;
+ 				goto out;
++			}
+ 
+-			if (ex_start == EXT_FIRST_EXTENT(path[depth].p_hdr))
+-				update = true;
++			err = ext4_ext_get_access(handle, inode, path + depth);
++			if (err)
++				goto out;
+ 
+ 			while (ex_start <= ex_last) {
+ 				if (SHIFT == SHIFT_LEFT) {
+@@ -5060,7 +5044,7 @@ ext4_ext_shift_path_extents(struct ext4_ext_path *path, ext4_lblk_t shift,
+ 		}
+ 
+ 		/* Update index too */
+-		err = ext4_access_path(handle, inode, path + depth);
++		err = ext4_ext_get_access(handle, inode, path + depth);
+ 		if (err)
+ 			goto out;
+ 
+@@ -5099,6 +5083,7 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ 	int ret = 0, depth;
+ 	struct ext4_extent *extent;
+ 	ext4_lblk_t stop, *iterator, ex_start, ex_end;
++	ext4_lblk_t tmp = EXT_MAX_BLOCKS;
+ 
+ 	/* Let path point to the last extent */
+ 	path = ext4_find_extent(inode, EXT_MAX_BLOCKS - 1, NULL,
+@@ -5152,11 +5137,15 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ 	 * till we reach stop. In case of right shift, iterator points to stop
+ 	 * and it is decreased till we reach start.
+ 	 */
++again:
+ 	if (SHIFT == SHIFT_LEFT)
+ 		iterator = &start;
+ 	else
+ 		iterator = &stop;
+ 
++	if (tmp != EXT_MAX_BLOCKS)
++		*iterator = tmp;
++
+ 	/*
+ 	 * Its safe to start updating extents.  Start and stop are unsigned, so
+ 	 * in case of right shift if extent with 0 block is reached, iterator
+@@ -5185,6 +5174,7 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ 			}
+ 		}
+ 
++		tmp = *iterator;
+ 		if (SHIFT == SHIFT_LEFT) {
+ 			extent = EXT_LAST_EXTENT(path[depth].p_hdr);
+ 			*iterator = le32_to_cpu(extent->ee_block) +
+@@ -5203,6 +5193,9 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ 		}
+ 		ret = ext4_ext_shift_path_extents(path, shift, inode,
+ 				handle, SHIFT);
++		/* iterator can be NULL which means we should break */
++		if (ret == -EAGAIN)
++			goto again;
+ 		if (ret)
+ 			break;
+ 	}
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 953c7e8757831..b1af6588bad01 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3436,9 +3436,9 @@ static int ext4_run_li_request(struct ext4_li_request *elr)
+ 	struct super_block *sb = elr->lr_super;
+ 	ext4_group_t ngroups = EXT4_SB(sb)->s_groups_count;
+ 	ext4_group_t group = elr->lr_next_group;
+-	unsigned long timeout = 0;
+ 	unsigned int prefetch_ios = 0;
+ 	int ret = 0;
++	u64 start_time;
+ 
+ 	if (elr->lr_mode == EXT4_LI_MODE_PREFETCH_BBITMAP) {
+ 		elr->lr_next_group = ext4_mb_prefetch(sb, group,
+@@ -3475,14 +3475,13 @@ static int ext4_run_li_request(struct ext4_li_request *elr)
+ 		ret = 1;
+ 
+ 	if (!ret) {
+-		timeout = jiffies;
++		start_time = ktime_get_real_ns();
+ 		ret = ext4_init_inode_table(sb, group,
+ 					    elr->lr_timeout ? 0 : 1);
+ 		trace_ext4_lazy_itable_init(sb, group);
+ 		if (elr->lr_timeout == 0) {
+-			timeout = (jiffies - timeout) *
+-				EXT4_SB(elr->lr_super)->s_li_wait_mult;
+-			elr->lr_timeout = timeout;
++			elr->lr_timeout = nsecs_to_jiffies((ktime_get_real_ns() - start_time) *
++				EXT4_SB(elr->lr_super)->s_li_wait_mult);
+ 		}
+ 		elr->lr_next_sched = jiffies + elr->lr_timeout;
+ 		elr->lr_next_group = group + 1;
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 657db2fb6739f..a35fcf43ad5a3 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -511,7 +511,7 @@ make_now:
+ 		inode->i_op = &f2fs_dir_inode_operations;
+ 		inode->i_fop = &f2fs_dir_operations;
+ 		inode->i_mapping->a_ops = &f2fs_dblock_aops;
+-		inode_nohighmem(inode);
++		mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
+ 	} else if (S_ISLNK(inode->i_mode)) {
+ 		if (file_is_encrypt(inode))
+ 			inode->i_op = &f2fs_encrypted_symlink_inode_operations;
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 710a6f73a6858..6ae2beabe578d 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -744,7 +744,7 @@ static int f2fs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+ 	inode->i_op = &f2fs_dir_inode_operations;
+ 	inode->i_fop = &f2fs_dir_operations;
+ 	inode->i_mapping->a_ops = &f2fs_dblock_aops;
+-	inode_nohighmem(inode);
++	mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
+ 
+ 	set_inode_flag(inode, FI_INC_LINK);
+ 	f2fs_lock_op(sbi);
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index f943eea9fe4e1..fb1917730e0e4 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -851,6 +851,12 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
+ 		goto out_put_old;
+ 	}
+ 
++	/*
++	 * Release while we have extra ref on stolen page.  Otherwise
++	 * anon_pipe_buf_release() might think the page can be reused.
++	 */
++	pipe_buf_release(cs->pipe, buf);
++
+ 	get_page(newpage);
+ 
+ 	if (!(buf->flags & PIPE_BUF_FLAG_LRU))
+@@ -2035,8 +2041,12 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
+ 
+ 	pipe_lock(pipe);
+ out_free:
+-	for (idx = 0; idx < nbuf; idx++)
+-		pipe_buf_release(pipe, &bufs[idx]);
++	for (idx = 0; idx < nbuf; idx++) {
++		struct pipe_buffer *buf = &bufs[idx];
++
++		if (buf->ops)
++			pipe_buf_release(pipe, buf);
++	}
+ 	pipe_unlock(pipe);
+ 
+ 	kvfree(bufs);
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 03c3407c8e26f..dd052101e2266 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -1885,10 +1885,10 @@ static void glock_hash_walk(glock_examiner examiner, const struct gfs2_sbd *sdp)
+ 	do {
+ 		rhashtable_walk_start(&iter);
+ 
+-		while ((gl = rhashtable_walk_next(&iter)) && !IS_ERR(gl))
+-			if (gl->gl_name.ln_sbd == sdp &&
+-			    lockref_get_not_dead(&gl->gl_lockref))
++		while ((gl = rhashtable_walk_next(&iter)) && !IS_ERR(gl)) {
++			if (gl->gl_name.ln_sbd == sdp)
+ 				examiner(gl);
++		}
+ 
+ 		rhashtable_walk_stop(&iter);
+ 	} while (cond_resched(), gl == ERR_PTR(-EAGAIN));
+@@ -1911,7 +1911,7 @@ bool gfs2_queue_delete_work(struct gfs2_glock *gl, unsigned long delay)
+ 
+ void gfs2_cancel_delete_work(struct gfs2_glock *gl)
+ {
+-	if (cancel_delayed_work_sync(&gl->gl_delete)) {
++	if (cancel_delayed_work(&gl->gl_delete)) {
+ 		clear_bit(GLF_PENDING_DELETE, &gl->gl_flags);
+ 		gfs2_glock_put(gl);
+ 	}
+@@ -1930,7 +1930,6 @@ static void flush_delete_work(struct gfs2_glock *gl)
+ 					   &gl->gl_delete, 0);
+ 		}
+ 	}
+-	gfs2_glock_queue_work(gl, 0);
+ }
+ 
+ void gfs2_flush_delete_work(struct gfs2_sbd *sdp)
+@@ -1947,10 +1946,10 @@ void gfs2_flush_delete_work(struct gfs2_sbd *sdp)
+ 
+ static void thaw_glock(struct gfs2_glock *gl)
+ {
+-	if (!test_and_clear_bit(GLF_FROZEN, &gl->gl_flags)) {
+-		gfs2_glock_put(gl);
++	if (!test_and_clear_bit(GLF_FROZEN, &gl->gl_flags))
++		return;
++	if (!lockref_get_not_dead(&gl->gl_lockref))
+ 		return;
+-	}
+ 	set_bit(GLF_REPLY_PENDING, &gl->gl_flags);
+ 	gfs2_glock_queue_work(gl, 0);
+ }
+@@ -1966,9 +1965,12 @@ static void clear_glock(struct gfs2_glock *gl)
+ 	gfs2_glock_remove_from_lru(gl);
+ 
+ 	spin_lock(&gl->gl_lockref.lock);
+-	if (gl->gl_state != LM_ST_UNLOCKED)
+-		handle_callback(gl, LM_ST_UNLOCKED, 0, false);
+-	__gfs2_glock_queue_work(gl, 0);
++	if (!__lockref_is_dead(&gl->gl_lockref)) {
++		gl->gl_lockref.count++;
++		if (gl->gl_state != LM_ST_UNLOCKED)
++			handle_callback(gl, LM_ST_UNLOCKED, 0, false);
++		__gfs2_glock_queue_work(gl, 0);
++	}
+ 	spin_unlock(&gl->gl_lockref.lock);
+ }
+ 
+diff --git a/fs/jfs/jfs_mount.c b/fs/jfs/jfs_mount.c
+index 5d7d7170c03c0..aa4ff7bcaff23 100644
+--- a/fs/jfs/jfs_mount.c
++++ b/fs/jfs/jfs_mount.c
+@@ -81,14 +81,14 @@ int jfs_mount(struct super_block *sb)
+ 	 * (initialize mount inode from the superblock)
+ 	 */
+ 	if ((rc = chkSuper(sb))) {
+-		goto errout20;
++		goto out;
+ 	}
+ 
+ 	ipaimap = diReadSpecial(sb, AGGREGATE_I, 0);
+ 	if (ipaimap == NULL) {
+ 		jfs_err("jfs_mount: Failed to read AGGREGATE_I");
+ 		rc = -EIO;
+-		goto errout20;
++		goto out;
+ 	}
+ 	sbi->ipaimap = ipaimap;
+ 
+@@ -99,7 +99,7 @@ int jfs_mount(struct super_block *sb)
+ 	 */
+ 	if ((rc = diMount(ipaimap))) {
+ 		jfs_err("jfs_mount: diMount(ipaimap) failed w/rc = %d", rc);
+-		goto errout21;
++		goto err_ipaimap;
+ 	}
+ 
+ 	/*
+@@ -108,7 +108,7 @@ int jfs_mount(struct super_block *sb)
+ 	ipbmap = diReadSpecial(sb, BMAP_I, 0);
+ 	if (ipbmap == NULL) {
+ 		rc = -EIO;
+-		goto errout22;
++		goto err_umount_ipaimap;
+ 	}
+ 
+ 	jfs_info("jfs_mount: ipbmap:0x%p", ipbmap);
+@@ -120,7 +120,7 @@ int jfs_mount(struct super_block *sb)
+ 	 */
+ 	if ((rc = dbMount(ipbmap))) {
+ 		jfs_err("jfs_mount: dbMount failed w/rc = %d", rc);
+-		goto errout22;
++		goto err_ipbmap;
+ 	}
+ 
+ 	/*
+@@ -139,7 +139,7 @@ int jfs_mount(struct super_block *sb)
+ 		if (!ipaimap2) {
+ 			jfs_err("jfs_mount: Failed to read AGGREGATE_I");
+ 			rc = -EIO;
+-			goto errout35;
++			goto err_umount_ipbmap;
+ 		}
+ 		sbi->ipaimap2 = ipaimap2;
+ 
+@@ -151,7 +151,7 @@ int jfs_mount(struct super_block *sb)
+ 		if ((rc = diMount(ipaimap2))) {
+ 			jfs_err("jfs_mount: diMount(ipaimap2) failed, rc = %d",
+ 				rc);
+-			goto errout35;
++			goto err_ipaimap2;
+ 		}
+ 	} else
+ 		/* Secondary aggregate inode table is not valid */
+@@ -168,7 +168,7 @@ int jfs_mount(struct super_block *sb)
+ 		jfs_err("jfs_mount: Failed to read FILESYSTEM_I");
+ 		/* open fileset secondary inode allocation map */
+ 		rc = -EIO;
+-		goto errout40;
++		goto err_umount_ipaimap2;
+ 	}
+ 	jfs_info("jfs_mount: ipimap:0x%p", ipimap);
+ 
+@@ -178,41 +178,34 @@ int jfs_mount(struct super_block *sb)
+ 	/* initialize fileset inode allocation map */
+ 	if ((rc = diMount(ipimap))) {
+ 		jfs_err("jfs_mount: diMount failed w/rc = %d", rc);
+-		goto errout41;
++		goto err_ipimap;
+ 	}
+ 
+-	goto out;
++	return rc;
+ 
+ 	/*
+ 	 *	unwind on error
+ 	 */
+-      errout41:		/* close fileset inode allocation map inode */
++err_ipimap:
++	/* close fileset inode allocation map inode */
+ 	diFreeSpecial(ipimap);
+-
+-      errout40:		/* fileset closed */
+-
++err_umount_ipaimap2:
+ 	/* close secondary aggregate inode allocation map */
+-	if (ipaimap2) {
++	if (ipaimap2)
+ 		diUnmount(ipaimap2, 1);
++err_ipaimap2:
++	/* close aggregate inodes */
++	if (ipaimap2)
+ 		diFreeSpecial(ipaimap2);
+-	}
+-
+-      errout35:
+-
+-	/* close aggregate block allocation map */
++err_umount_ipbmap:	/* close aggregate block allocation map */
+ 	dbUnmount(ipbmap, 1);
++err_ipbmap:		/* close aggregate inodes */
+ 	diFreeSpecial(ipbmap);
+-
+-      errout22:		/* close aggregate inode allocation map */
+-
++err_umount_ipaimap:	/* close aggregate inode allocation map */
+ 	diUnmount(ipaimap, 1);
+-
+-      errout21:		/* close aggregate inodes */
++err_ipaimap:		/* close aggregate inodes */
+ 	diFreeSpecial(ipaimap);
+-      errout20:		/* aggregate closed */
+-
+-      out:
+-
++out:
+ 	if (rc)
+ 		jfs_err("Mount JFS Failure: %d", rc);
+ 
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index c837675cd395a..8b963c72dd3b1 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1061,13 +1061,12 @@ static bool nfs_verifier_is_delegated(struct dentry *dentry)
+ static void nfs_set_verifier_locked(struct dentry *dentry, unsigned long verf)
+ {
+ 	struct inode *inode = d_inode(dentry);
++	struct inode *dir = d_inode(dentry->d_parent);
+ 
+-	if (!nfs_verifier_is_delegated(dentry) &&
+-	    !nfs_verify_change_attribute(d_inode(dentry->d_parent), verf))
+-		goto out;
++	if (!nfs_verify_change_attribute(dir, verf))
++		return;
+ 	if (inode && NFS_PROTO(inode)->have_delegation(inode, FMODE_READ))
+ 		nfs_set_verifier_delegated(&verf);
+-out:
+ 	dentry->d_time = verf;
+ }
+ 
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 2e894fec036b0..3c0335c15a730 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -620,7 +620,7 @@ static void nfs_direct_commit_complete(struct nfs_commit_data *data)
+ 		nfs_unlock_and_release_request(req);
+ 	}
+ 
+-	if (atomic_dec_and_test(&cinfo.mds->rpcs_out))
++	if (nfs_commit_end(cinfo.mds))
+ 		nfs_direct_write_complete(dreq);
+ }
+ 
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index 3eda40a320a53..1f12297109b41 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -378,10 +378,10 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg,
+ 		goto noconnect;
+ 
+ 	ds = mirror->mirror_ds->ds;
++	if (READ_ONCE(ds->ds_clp))
++		goto out;
+ 	/* matching smp_wmb() in _nfs4_pnfs_v3/4_ds_connect */
+ 	smp_rmb();
+-	if (ds->ds_clp)
+-		goto out;
+ 
+ 	/* FIXME: For now we assume the server sent only one version of NFS
+ 	 * to use for the DS.
+diff --git a/fs/nfs/nfs4idmap.c b/fs/nfs/nfs4idmap.c
+index 8d8aba305ecca..f331866dd4182 100644
+--- a/fs/nfs/nfs4idmap.c
++++ b/fs/nfs/nfs4idmap.c
+@@ -487,7 +487,7 @@ nfs_idmap_new(struct nfs_client *clp)
+ err_destroy_pipe:
+ 	rpc_destroy_pipe_data(idmap->idmap_pipe);
+ err:
+-	get_user_ns(idmap->user_ns);
++	put_user_ns(idmap->user_ns);
+ 	kfree(idmap);
+ 	return error;
+ }
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 5365000e83bd6..3106bd28b1132 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1590,15 +1590,16 @@ static bool nfs_stateid_is_sequential(struct nfs4_state *state,
+ {
+ 	if (test_bit(NFS_OPEN_STATE, &state->flags)) {
+ 		/* The common case - we're updating to a new sequence number */
+-		if (nfs4_stateid_match_other(stateid, &state->open_stateid) &&
+-			nfs4_stateid_is_next(&state->open_stateid, stateid)) {
+-			return true;
++		if (nfs4_stateid_match_other(stateid, &state->open_stateid)) {
++			if (nfs4_stateid_is_next(&state->open_stateid, stateid))
++				return true;
++			return false;
+ 		}
+-	} else {
+-		/* This is the first OPEN in this generation */
+-		if (stateid->seqid == cpu_to_be32(1))
+-			return true;
++		/* The server returned a new stateid */
+ 	}
++	/* This is the first OPEN in this generation */
++	if (stateid->seqid == cpu_to_be32(1))
++		return true;
+ 	return false;
+ }
+ 
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index 132a345e93731..0212fe32e63aa 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -515,7 +515,7 @@ pnfs_mark_request_commit(struct nfs_page *req, struct pnfs_layout_segment *lseg,
+ {
+ 	struct pnfs_ds_commit_info *fl_cinfo = cinfo->ds;
+ 
+-	if (!lseg || !fl_cinfo->ops->mark_request_commit)
++	if (!lseg || !fl_cinfo->ops || !fl_cinfo->ops->mark_request_commit)
+ 		return false;
+ 	fl_cinfo->ops->mark_request_commit(req, lseg, cinfo, ds_commit_idx);
+ 	return true;
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index 251c4a3aef9a6..7b9d701bef016 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -468,7 +468,6 @@ pnfs_bucket_alloc_ds_commits(struct list_head *list,
+ 				goto out_error;
+ 			data->ds_commit_index = i;
+ 			list_add_tail(&data->list, list);
+-			atomic_inc(&cinfo->mds->rpcs_out);
+ 			nreq++;
+ 		}
+ 		mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+@@ -520,7 +519,6 @@ pnfs_generic_commit_pagelist(struct inode *inode, struct list_head *mds_pages,
+ 		data->ds_commit_index = -1;
+ 		list_splice_init(mds_pages, &data->pages);
+ 		list_add_tail(&data->list, &list);
+-		atomic_inc(&cinfo->mds->rpcs_out);
+ 		nreq++;
+ 	}
+ 
+@@ -876,7 +874,7 @@ static int _nfs4_pnfs_v3_ds_connect(struct nfs_server *mds_srv,
+ 	}
+ 
+ 	smp_wmb();
+-	ds->ds_clp = clp;
++	WRITE_ONCE(ds->ds_clp, clp);
+ 	dprintk("%s [new] addr: %s\n", __func__, ds->ds_remotestr);
+ out:
+ 	return status;
+@@ -949,7 +947,7 @@ static int _nfs4_pnfs_v4_ds_connect(struct nfs_server *mds_srv,
+ 	}
+ 
+ 	smp_wmb();
+-	ds->ds_clp = clp;
++	WRITE_ONCE(ds->ds_clp, clp);
+ 	dprintk("%s [new] addr: %s\n", __func__, ds->ds_remotestr);
+ out:
+ 	return status;
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 639c34fec04a8..bde4c362841f0 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -1034,25 +1034,11 @@ nfs_scan_commit_list(struct list_head *src, struct list_head *dst,
+ 	struct nfs_page *req, *tmp;
+ 	int ret = 0;
+ 
+-restart:
+ 	list_for_each_entry_safe(req, tmp, src, wb_list) {
+ 		kref_get(&req->wb_kref);
+ 		if (!nfs_lock_request(req)) {
+-			int status;
+-
+-			/* Prevent deadlock with nfs_lock_and_join_requests */
+-			if (!list_empty(dst)) {
+-				nfs_release_request(req);
+-				continue;
+-			}
+-			/* Ensure we make progress to prevent livelock */
+-			mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+-			status = nfs_wait_on_request(req);
+ 			nfs_release_request(req);
+-			mutex_lock(&NFS_I(cinfo->inode)->commit_mutex);
+-			if (status < 0)
+-				break;
+-			goto restart;
++			continue;
+ 		}
+ 		nfs_request_remove_commit_list(req, cinfo);
+ 		clear_bit(PG_COMMIT_TO_DS, &req->wb_flags);
+@@ -1663,10 +1649,13 @@ static void nfs_commit_begin(struct nfs_mds_commit_info *cinfo)
+ 	atomic_inc(&cinfo->rpcs_out);
+ }
+ 
+-static void nfs_commit_end(struct nfs_mds_commit_info *cinfo)
++bool nfs_commit_end(struct nfs_mds_commit_info *cinfo)
+ {
+-	if (atomic_dec_and_test(&cinfo->rpcs_out))
++	if (atomic_dec_and_test(&cinfo->rpcs_out)) {
+ 		wake_up_var(&cinfo->rpcs_out);
++		return true;
++	}
++	return false;
+ }
+ 
+ void nfs_commitdata_release(struct nfs_commit_data *data)
+@@ -1766,6 +1755,7 @@ void nfs_init_commit(struct nfs_commit_data *data,
+ 	data->res.fattr   = &data->fattr;
+ 	data->res.verf    = &data->verf;
+ 	nfs_fattr_init(&data->fattr);
++	nfs_commit_begin(cinfo->mds);
+ }
+ EXPORT_SYMBOL_GPL(nfs_init_commit);
+ 
+@@ -1811,7 +1801,6 @@ nfs_commit_list(struct inode *inode, struct list_head *head, int how,
+ 
+ 	/* Set up the argument struct */
+ 	nfs_init_commit(data, head, NULL, cinfo);
+-	atomic_inc(&cinfo->mds->rpcs_out);
+ 	return nfs_initiate_commit(NFS_CLIENT(inode), data, NFS_PROTO(inode),
+ 				   data->mds_ops, how, RPC_TASK_CRED_NOREF);
+ }
+@@ -1924,6 +1913,7 @@ static int __nfs_commit_inode(struct inode *inode, int how,
+ 	int may_wait = how & FLUSH_SYNC;
+ 	int ret, nscan;
+ 
++	how &= ~FLUSH_SYNC;
+ 	nfs_init_cinfo_from_inode(&cinfo, inode);
+ 	nfs_commit_begin(cinfo.mds);
+ 	for (;;) {
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index 919e552abf637..1470b49adb2db 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -478,10 +478,11 @@ int ocfs2_truncate_file(struct inode *inode,
+ 	 * greater than page size, so we have to truncate them
+ 	 * anyway.
+ 	 */
+-	unmap_mapping_range(inode->i_mapping, new_i_size + PAGE_SIZE - 1, 0, 1);
+-	truncate_inode_pages(inode->i_mapping, new_i_size);
+ 
+ 	if (OCFS2_I(inode)->ip_dyn_features & OCFS2_INLINE_DATA_FL) {
++		unmap_mapping_range(inode->i_mapping,
++				    new_i_size + PAGE_SIZE - 1, 0, 1);
++		truncate_inode_pages(inode->i_mapping, new_i_size);
+ 		status = ocfs2_truncate_inline(inode, di_bh, new_i_size,
+ 					       i_size_read(inode), 1);
+ 		if (status)
+@@ -500,6 +501,9 @@ int ocfs2_truncate_file(struct inode *inode,
+ 		goto bail_unlock_sem;
+ 	}
+ 
++	unmap_mapping_range(inode->i_mapping, new_i_size + PAGE_SIZE - 1, 0, 1);
++	truncate_inode_pages(inode->i_mapping, new_i_size);
++
+ 	status = ocfs2_commit_truncate(osb, inode, di_bh);
+ 	if (status < 0) {
+ 		mlog_errno(status);
+diff --git a/fs/orangefs/dcache.c b/fs/orangefs/dcache.c
+index fe484cf93e5cd..8bbe9486e3a62 100644
+--- a/fs/orangefs/dcache.c
++++ b/fs/orangefs/dcache.c
+@@ -26,8 +26,10 @@ static int orangefs_revalidate_lookup(struct dentry *dentry)
+ 	gossip_debug(GOSSIP_DCACHE_DEBUG, "%s: attempting lookup.\n", __func__);
+ 
+ 	new_op = op_alloc(ORANGEFS_VFS_OP_LOOKUP);
+-	if (!new_op)
++	if (!new_op) {
++		ret = -ENOMEM;
+ 		goto out_put_parent;
++	}
+ 
+ 	new_op->upcall.req.lookup.sym_follow = ORANGEFS_LOOKUP_LINK_NO_FOLLOW;
+ 	new_op->upcall.req.lookup.parent_refn = parent->refn;
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index f7135777cb4eb..d1ae643a999a5 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -17,6 +17,7 @@
+ 
+ struct ovl_aio_req {
+ 	struct kiocb iocb;
++	refcount_t ref;
+ 	struct kiocb *orig_iocb;
+ 	struct fd fd;
+ };
+@@ -257,6 +258,14 @@ static rwf_t ovl_iocb_to_rwf(int ifl)
+ 	return flags;
+ }
+ 
++static inline void ovl_aio_put(struct ovl_aio_req *aio_req)
++{
++	if (refcount_dec_and_test(&aio_req->ref)) {
++		fdput(aio_req->fd);
++		kmem_cache_free(ovl_aio_request_cachep, aio_req);
++	}
++}
++
+ static void ovl_aio_cleanup_handler(struct ovl_aio_req *aio_req)
+ {
+ 	struct kiocb *iocb = &aio_req->iocb;
+@@ -273,8 +282,7 @@ static void ovl_aio_cleanup_handler(struct ovl_aio_req *aio_req)
+ 	}
+ 
+ 	orig_iocb->ki_pos = iocb->ki_pos;
+-	fdput(aio_req->fd);
+-	kmem_cache_free(ovl_aio_request_cachep, aio_req);
++	ovl_aio_put(aio_req);
+ }
+ 
+ static void ovl_aio_rw_complete(struct kiocb *iocb, long res, long res2)
+@@ -324,7 +332,9 @@ static ssize_t ovl_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 		aio_req->orig_iocb = iocb;
+ 		kiocb_clone(&aio_req->iocb, iocb, real.file);
+ 		aio_req->iocb.ki_complete = ovl_aio_rw_complete;
++		refcount_set(&aio_req->ref, 2);
+ 		ret = vfs_iocb_iter_read(real.file, &aio_req->iocb, iter);
++		ovl_aio_put(aio_req);
+ 		if (ret != -EIOCBQUEUED)
+ 			ovl_aio_cleanup_handler(aio_req);
+ 	}
+@@ -395,7 +405,9 @@ static ssize_t ovl_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 		kiocb_clone(&aio_req->iocb, iocb, real.file);
+ 		aio_req->iocb.ki_flags = ifl;
+ 		aio_req->iocb.ki_complete = ovl_aio_rw_complete;
++		refcount_set(&aio_req->ref, 2);
+ 		ret = vfs_iocb_iter_write(real.file, &aio_req->iocb, iter);
++		ovl_aio_put(aio_req);
+ 		if (ret != -EIOCBQUEUED)
+ 			ovl_aio_cleanup_handler(aio_req);
+ 	}
+diff --git a/fs/proc/stat.c b/fs/proc/stat.c
+index 4695b6de31512..3bed48d8228b4 100644
+--- a/fs/proc/stat.c
++++ b/fs/proc/stat.c
+@@ -23,7 +23,7 @@
+ 
+ #ifdef arch_idle_time
+ 
+-static u64 get_idle_time(struct kernel_cpustat *kcs, int cpu)
++u64 get_idle_time(struct kernel_cpustat *kcs, int cpu)
+ {
+ 	u64 idle;
+ 
+@@ -45,7 +45,7 @@ static u64 get_iowait_time(struct kernel_cpustat *kcs, int cpu)
+ 
+ #else
+ 
+-static u64 get_idle_time(struct kernel_cpustat *kcs, int cpu)
++u64 get_idle_time(struct kernel_cpustat *kcs, int cpu)
+ {
+ 	u64 idle, idle_usecs = -1ULL;
+ 
+diff --git a/fs/proc/uptime.c b/fs/proc/uptime.c
+index 5a1b228964fb7..deb99bc9b7e6b 100644
+--- a/fs/proc/uptime.c
++++ b/fs/proc/uptime.c
+@@ -12,18 +12,22 @@ static int uptime_proc_show(struct seq_file *m, void *v)
+ {
+ 	struct timespec64 uptime;
+ 	struct timespec64 idle;
+-	u64 nsec;
++	u64 idle_nsec;
+ 	u32 rem;
+ 	int i;
+ 
+-	nsec = 0;
+-	for_each_possible_cpu(i)
+-		nsec += (__force u64) kcpustat_cpu(i).cpustat[CPUTIME_IDLE];
++	idle_nsec = 0;
++	for_each_possible_cpu(i) {
++		struct kernel_cpustat kcs;
++
++		kcpustat_cpu_fetch(&kcs, i);
++		idle_nsec += get_idle_time(&kcs, i);
++	}
+ 
+ 	ktime_get_boottime_ts64(&uptime);
+ 	timens_add_boottime(&uptime);
+ 
+-	idle.tv_sec = div_u64_rem(nsec, NSEC_PER_SEC, &rem);
++	idle.tv_sec = div_u64_rem(idle_nsec, NSEC_PER_SEC, &rem);
+ 	idle.tv_nsec = rem;
+ 	seq_printf(m, "%lu.%02lu %lu.%02lu\n",
+ 			(unsigned long) uptime.tv_sec,
+diff --git a/fs/quota/quota_tree.c b/fs/quota/quota_tree.c
+index c5562c871c8be..1a188fbdf34e5 100644
+--- a/fs/quota/quota_tree.c
++++ b/fs/quota/quota_tree.c
+@@ -423,6 +423,7 @@ static int free_dqentry(struct qtree_mem_dqinfo *info, struct dquot *dquot,
+ 		quota_error(dquot->dq_sb, "Quota structure has offset to "
+ 			"other block (%u) than it should (%u)", blk,
+ 			(uint)(dquot->dq_off >> info->dqi_blocksize_bits));
++		ret = -EIO;
+ 		goto out_buf;
+ 	}
+ 	ret = read_blk(info, blk, buf);
+@@ -488,6 +489,13 @@ static int remove_tree(struct qtree_mem_dqinfo *info, struct dquot *dquot,
+ 		goto out_buf;
+ 	}
+ 	newblk = le32_to_cpu(ref[get_index(info, dquot->dq_id, depth)]);
++	if (newblk < QT_TREEOFF || newblk >= info->dqi_blocks) {
++		quota_error(dquot->dq_sb, "Getting block too big (%u >= %u)",
++			    newblk, info->dqi_blocks);
++		ret = -EUCLEAN;
++		goto out_buf;
++	}
++
+ 	if (depth == info->dqi_qtree_depth - 1) {
+ 		ret = free_dqentry(info, dquot, newblk);
+ 		newblk = 0;
+@@ -587,6 +595,13 @@ static loff_t find_tree_dqentry(struct qtree_mem_dqinfo *info,
+ 	blk = le32_to_cpu(ref[get_index(info, dquot->dq_id, depth)]);
+ 	if (!blk)	/* No reference? */
+ 		goto out_buf;
++	if (blk < QT_TREEOFF || blk >= info->dqi_blocks) {
++		quota_error(dquot->dq_sb, "Getting block too big (%u >= %u)",
++			    blk, info->dqi_blocks);
++		ret = -EUCLEAN;
++		goto out_buf;
++	}
++
+ 	if (depth < info->dqi_qtree_depth - 1)
+ 		ret = find_tree_dqentry(info, dquot, blk, depth+1);
+ 	else
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index 0ee8c6dfb0364..bf58ae6f984fe 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -430,7 +430,8 @@ static struct dentry *__create_dir(const char *name, struct dentry *parent,
+ 	if (unlikely(!inode))
+ 		return failed_creating(dentry);
+ 
+-	inode->i_mode = S_IFDIR | S_IRWXU | S_IRUGO | S_IXUGO;
++	/* Do not set bits for OTH */
++	inode->i_mode = S_IFDIR | S_IRWXU | S_IRUSR| S_IRGRP | S_IXUSR | S_IXGRP;
+ 	inode->i_op = ops;
+ 	inode->i_fop = &simple_dir_operations;
+ 
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 8aae375864b6b..4ba17736b614f 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -1248,8 +1248,6 @@ struct blk_plug {
+ 	bool multiple_queues;
+ 	bool nowait;
+ };
+-#define BLK_MAX_REQUEST_COUNT 16
+-#define BLK_PLUG_FLUSH_SIZE (128 * 1024)
+ 
+ struct blk_plug_cb;
+ typedef void (*blk_plug_cb_fn)(struct blk_plug_cb *, bool);
+diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h
+new file mode 100644
+index 0000000000000..a075b70b9a70c
+--- /dev/null
++++ b/include/linux/cc_platform.h
+@@ -0,0 +1,88 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Confidential Computing Platform Capability checks
++ *
++ * Copyright (C) 2021 Advanced Micro Devices, Inc.
++ *
++ * Author: Tom Lendacky <thomas.lendacky@amd.com>
++ */
++
++#ifndef _LINUX_CC_PLATFORM_H
++#define _LINUX_CC_PLATFORM_H
++
++#include <linux/types.h>
++#include <linux/stddef.h>
++
++/**
++ * enum cc_attr - Confidential computing attributes
++ *
++ * These attributes represent confidential computing features that are
++ * currently active.
++ */
++enum cc_attr {
++	/**
++	 * @CC_ATTR_MEM_ENCRYPT: Memory encryption is active
++	 *
++	 * The platform/OS is running with active memory encryption. This
++	 * includes running either as a bare-metal system or a hypervisor
++	 * and actively using memory encryption or as a guest/virtual machine
++	 * and actively using memory encryption.
++	 *
++	 * Examples include SME, SEV and SEV-ES.
++	 */
++	CC_ATTR_MEM_ENCRYPT,
++
++	/**
++	 * @CC_ATTR_HOST_MEM_ENCRYPT: Host memory encryption is active
++	 *
++	 * The platform/OS is running as a bare-metal system or a hypervisor
++	 * and actively using memory encryption.
++	 *
++	 * Examples include SME.
++	 */
++	CC_ATTR_HOST_MEM_ENCRYPT,
++
++	/**
++	 * @CC_ATTR_GUEST_MEM_ENCRYPT: Guest memory encryption is active
++	 *
++	 * The platform/OS is running as a guest/virtual machine and actively
++	 * using memory encryption.
++	 *
++	 * Examples include SEV and SEV-ES.
++	 */
++	CC_ATTR_GUEST_MEM_ENCRYPT,
++
++	/**
++	 * @CC_ATTR_GUEST_STATE_ENCRYPT: Guest state encryption is active
++	 *
++	 * The platform/OS is running as a guest/virtual machine and actively
++	 * using memory encryption and register state encryption.
++	 *
++	 * Examples include SEV-ES.
++	 */
++	CC_ATTR_GUEST_STATE_ENCRYPT,
++};
++
++#ifdef CONFIG_ARCH_HAS_CC_PLATFORM
++
++/**
++ * cc_platform_has() - Checks if the specified cc_attr attribute is active
++ * @attr: Confidential computing attribute to check
++ *
++ * The cc_platform_has() function will return an indicator as to whether the
++ * specified Confidential Computing attribute is currently active.
++ *
++ * Context: Any context
++ * Return:
++ * * TRUE  - Specified Confidential Computing attribute is active
++ * * FALSE - Specified Confidential Computing attribute is not active
++ */
++bool cc_platform_has(enum cc_attr attr);
++
++#else	/* !CONFIG_ARCH_HAS_CC_PLATFORM */
++
++static inline bool cc_platform_has(enum cc_attr attr) { return false; }
++
++#endif	/* CONFIG_ARCH_HAS_CC_PLATFORM */
++
++#endif	/* _LINUX_CC_PLATFORM_H */
+diff --git a/include/linux/console.h b/include/linux/console.h
+index 4b1e26c4cb428..bc2a749e6f0d4 100644
+--- a/include/linux/console.h
++++ b/include/linux/console.h
+@@ -150,6 +150,8 @@ struct console {
+ 	short	flags;
+ 	short	index;
+ 	int	cflag;
++	uint	ispeed;
++	uint	ospeed;
+ 	void	*data;
+ 	struct	 console *next;
+ };
+diff --git a/include/linux/ethtool_netlink.h b/include/linux/ethtool_netlink.h
+index 1e7bf78cb3829..aba348d58ff61 100644
+--- a/include/linux/ethtool_netlink.h
++++ b/include/linux/ethtool_netlink.h
+@@ -10,6 +10,9 @@
+ #define __ETHTOOL_LINK_MODE_MASK_NWORDS \
+ 	DIV_ROUND_UP(__ETHTOOL_LINK_MODE_MASK_NBITS, 32)
+ 
++#define ETHTOOL_PAUSE_STAT_CNT	(__ETHTOOL_A_PAUSE_STAT_CNT -		\
++				 ETHTOOL_A_PAUSE_STAT_TX_FRAMES)
++
+ enum ethtool_multicast_groups {
+ 	ETHNL_MCGRP_MONITOR,
+ };
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index 822b701c803d1..bc6ce4b202a80 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -998,6 +998,7 @@ extern int bpf_jit_enable;
+ extern int bpf_jit_harden;
+ extern int bpf_jit_kallsyms;
+ extern long bpf_jit_limit;
++extern long bpf_jit_limit_max;
+ 
+ typedef void (*bpf_jit_fill_hole_t)(void *area, unsigned int size);
+ 
+diff --git a/include/linux/kernel_stat.h b/include/linux/kernel_stat.h
+index 89f0745c096d4..8fff3500d50ee 100644
+--- a/include/linux/kernel_stat.h
++++ b/include/linux/kernel_stat.h
+@@ -103,6 +103,7 @@ extern void account_system_index_time(struct task_struct *, u64,
+ 				      enum cpu_usage_stat);
+ extern void account_steal_time(u64);
+ extern void account_idle_time(u64);
++extern u64 get_idle_time(struct kernel_cpustat *kcs, int cpu);
+ 
+ #ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
+ static inline void account_process_tick(struct task_struct *tsk, int user)
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 57dffa0d58702..d3600fc7f7c4d 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -390,7 +390,7 @@ enum {
+ 	/* This should match the actual table size of
+ 	 * ata_eh_cmd_timeout_table in libata-eh.c.
+ 	 */
+-	ATA_EH_CMD_TIMEOUT_TABLE_SIZE = 6,
++	ATA_EH_CMD_TIMEOUT_TABLE_SIZE = 7,
+ 
+ 	/* Horkage types. May be set by libata or controller on drives
+ 	   (some horkage may be drive/controller pair dependent */
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index 32a940117e7af..a6a3d4ddfc2df 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -26,13 +26,13 @@
+  *   #undef LSM_HOOK
+  * };
+  */
+-LSM_HOOK(int, 0, binder_set_context_mgr, struct task_struct *mgr)
+-LSM_HOOK(int, 0, binder_transaction, struct task_struct *from,
+-	 struct task_struct *to)
+-LSM_HOOK(int, 0, binder_transfer_binder, struct task_struct *from,
+-	 struct task_struct *to)
+-LSM_HOOK(int, 0, binder_transfer_file, struct task_struct *from,
+-	 struct task_struct *to, struct file *file)
++LSM_HOOK(int, 0, binder_set_context_mgr, const struct cred *mgr)
++LSM_HOOK(int, 0, binder_transaction, const struct cred *from,
++	 const struct cred *to)
++LSM_HOOK(int, 0, binder_transfer_binder, const struct cred *from,
++	 const struct cred *to)
++LSM_HOOK(int, 0, binder_transfer_file, const struct cred *from,
++	 const struct cred *to, struct file *file)
+ LSM_HOOK(int, 0, ptrace_access_check, struct task_struct *child,
+ 	 unsigned int mode)
+ LSM_HOOK(int, 0, ptrace_traceme, struct task_struct *parent)
+diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
+index c503f7ab8afb6..a8531b37e6f52 100644
+--- a/include/linux/lsm_hooks.h
++++ b/include/linux/lsm_hooks.h
+@@ -1288,22 +1288,22 @@
+  *
+  * @binder_set_context_mgr:
+  *	Check whether @mgr is allowed to be the binder context manager.
+- *	@mgr contains the task_struct for the task being registered.
++ *	@mgr contains the struct cred for the current binder process.
+  *	Return 0 if permission is granted.
+  * @binder_transaction:
+  *	Check whether @from is allowed to invoke a binder transaction call
+  *	to @to.
+- *	@from contains the task_struct for the sending task.
+- *	@to contains the task_struct for the receiving task.
++ *	@from contains the struct cred for the sending process.
++ *	@to contains the struct cred for the receiving process.
+  * @binder_transfer_binder:
+  *	Check whether @from is allowed to transfer a binder reference to @to.
+- *	@from contains the task_struct for the sending task.
+- *	@to contains the task_struct for the receiving task.
++ *	@from contains the struct cred for the sending process.
++ *	@to contains the struct cred for the receiving process.
+  * @binder_transfer_file:
+  *	Check whether @from is allowed to transfer @file to @to.
+- *	@from contains the task_struct for the sending task.
++ *	@from contains the struct cred for the sending process.
+  *	@file contains the struct file being transferred.
+- *	@to contains the task_struct for the receiving task.
++ *	@to contains the struct cred for the receiving process.
+  *
+  * @ptrace_access_check:
+  *	Check permission before allowing the current process to trace the
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index 91a6525a98cb7..aff5cd382fef5 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -553,6 +553,7 @@ extern int nfs_wb_page_cancel(struct inode *inode, struct page* page);
+ extern int  nfs_commit_inode(struct inode *, int);
+ extern struct nfs_commit_data *nfs_commitdata_alloc(bool never_fail);
+ extern void nfs_commit_free(struct nfs_commit_data *data);
++bool nfs_commit_end(struct nfs_mds_commit_info *cinfo);
+ 
+ static inline int
+ nfs_have_writebacks(struct inode *inode)
+diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h
+index 896c16d2c5fb2..913aa60228b16 100644
+--- a/include/linux/posix-timers.h
++++ b/include/linux/posix-timers.h
+@@ -177,8 +177,10 @@ static inline void posix_cputimers_group_init(struct posix_cputimers *pct,
+ #endif
+ 
+ #ifdef CONFIG_POSIX_CPU_TIMERS_TASK_WORK
++void clear_posix_cputimers_work(struct task_struct *p);
+ void posix_cputimers_init_work(void);
+ #else
++static inline void clear_posix_cputimers_work(struct task_struct *p) { }
+ static inline void posix_cputimers_init_work(void) { }
+ #endif
+ 
+diff --git a/include/linux/rpmsg.h b/include/linux/rpmsg.h
+index 9fe156d1c018e..a68972b097b72 100644
+--- a/include/linux/rpmsg.h
++++ b/include/linux/rpmsg.h
+@@ -177,7 +177,7 @@ static inline struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_device *rpdev
+ 	/* This shouldn't be possible */
+ 	WARN_ON(1);
+ 
+-	return ERR_PTR(-ENXIO);
++	return NULL;
+ }
+ 
+ static inline int rpmsg_send(struct rpmsg_endpoint *ept, void *data, int len)
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index 85fb2f34c59b7..24cacb1ca654d 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -55,7 +55,8 @@ extern asmlinkage void schedule_tail(struct task_struct *prev);
+ extern void init_idle(struct task_struct *idle, int cpu);
+ 
+ extern int sched_fork(unsigned long clone_flags, struct task_struct *p);
+-extern void sched_post_fork(struct task_struct *p);
++extern void sched_post_fork(struct task_struct *p,
++			    struct kernel_clone_args *kargs);
+ extern void sched_dead(struct task_struct *p);
+ 
+ void __noreturn do_task_dead(void);
+diff --git a/include/linux/sched/task_stack.h b/include/linux/sched/task_stack.h
+index 2413427e439c7..d10150587d819 100644
+--- a/include/linux/sched/task_stack.h
++++ b/include/linux/sched/task_stack.h
+@@ -25,7 +25,11 @@ static inline void *task_stack_page(const struct task_struct *task)
+ 
+ static inline unsigned long *end_of_stack(const struct task_struct *task)
+ {
++#ifdef CONFIG_STACK_GROWSUP
++	return (unsigned long *)((unsigned long)task->stack + THREAD_SIZE) - 1;
++#else
+ 	return task->stack;
++#endif
+ }
+ 
+ #elif !defined(__HAVE_THREAD_FUNCTIONS)
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 7ef74d01b8e74..35355429648e3 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -254,13 +254,13 @@ extern int security_init(void);
+ extern int early_security_init(void);
+ 
+ /* Security operations */
+-int security_binder_set_context_mgr(struct task_struct *mgr);
+-int security_binder_transaction(struct task_struct *from,
+-				struct task_struct *to);
+-int security_binder_transfer_binder(struct task_struct *from,
+-				    struct task_struct *to);
+-int security_binder_transfer_file(struct task_struct *from,
+-				  struct task_struct *to, struct file *file);
++int security_binder_set_context_mgr(const struct cred *mgr);
++int security_binder_transaction(const struct cred *from,
++				const struct cred *to);
++int security_binder_transfer_binder(const struct cred *from,
++				    const struct cred *to);
++int security_binder_transfer_file(const struct cred *from,
++				  const struct cred *to, struct file *file);
+ int security_ptrace_access_check(struct task_struct *child, unsigned int mode);
+ int security_ptrace_traceme(struct task_struct *parent);
+ int security_capget(struct task_struct *target,
+@@ -493,25 +493,25 @@ static inline int early_security_init(void)
+ 	return 0;
+ }
+ 
+-static inline int security_binder_set_context_mgr(struct task_struct *mgr)
++static inline int security_binder_set_context_mgr(const struct cred *mgr)
+ {
+ 	return 0;
+ }
+ 
+-static inline int security_binder_transaction(struct task_struct *from,
+-					      struct task_struct *to)
++static inline int security_binder_transaction(const struct cred *from,
++					      const struct cred *to)
+ {
+ 	return 0;
+ }
+ 
+-static inline int security_binder_transfer_binder(struct task_struct *from,
+-						  struct task_struct *to)
++static inline int security_binder_transfer_binder(const struct cred *from,
++						  const struct cred *to)
+ {
+ 	return 0;
+ }
+ 
+-static inline int security_binder_transfer_file(struct task_struct *from,
+-						struct task_struct *to,
++static inline int security_binder_transfer_file(const struct cred *from,
++						const struct cred *to,
+ 						struct file *file)
+ {
+ 	return 0;
+@@ -1003,6 +1003,11 @@ static inline void security_transfer_creds(struct cred *new,
+ {
+ }
+ 
++static inline void security_cred_getsecid(const struct cred *c, u32 *secid)
++{
++	*secid = 0;
++}
++
+ static inline int security_kernel_act_as(struct cred *cred, u32 secid)
+ {
+ 	return 0;
+diff --git a/include/linux/seq_file.h b/include/linux/seq_file.h
+index b83b3ae3c877f..662a8cfa1bcd3 100644
+--- a/include/linux/seq_file.h
++++ b/include/linux/seq_file.h
+@@ -182,7 +182,7 @@ static const struct file_operations __name ## _fops = {			\
+ #define DEFINE_PROC_SHOW_ATTRIBUTE(__name)				\
+ static int __name ## _open(struct inode *inode, struct file *file)	\
+ {									\
+-	return single_open(file, __name ## _show, inode->i_private);	\
++	return single_open(file, __name ## _show, PDE_DATA(inode));	\
+ }									\
+ 									\
+ static const struct proc_ops __name ## _proc_ops = {			\
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index 804a3f69bbd93..95c3069823f9b 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -262,6 +262,7 @@ enum tpm2_cc_attrs {
+ #define TPM_VID_INTEL    0x8086
+ #define TPM_VID_WINBOND  0x1050
+ #define TPM_VID_STM      0x104A
++#define TPM_VID_ATML     0x1114
+ 
+ enum tpm_chip_flags {
+ 	TPM_CHIP_FLAG_TPM2		= BIT(1),
+diff --git a/include/memory/renesas-rpc-if.h b/include/memory/renesas-rpc-if.h
+index 9ad136682c476..aceb2c360d3fa 100644
+--- a/include/memory/renesas-rpc-if.h
++++ b/include/memory/renesas-rpc-if.h
+@@ -58,6 +58,7 @@ struct	rpcif_op {
+ 
+ struct	rpcif {
+ 	struct device *dev;
++	void __iomem *base;
+ 	void __iomem *dirmap;
+ 	struct regmap *regmap;
+ 	struct reset_control *rstc;
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index aa92af3dd444d..0b1864a82d4ad 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -291,7 +291,7 @@ static inline void inet_csk_prepare_for_destroy_sock(struct sock *sk)
+ {
+ 	/* The below has to be done to allow calling inet_csk_destroy_sock */
+ 	sock_set_flag(sk, SOCK_DEAD);
+-	percpu_counter_inc(sk->sk_prot->orphan_count);
++	this_cpu_inc(*sk->sk_prot->orphan_count);
+ }
+ 
+ void inet_csk_destroy_sock(struct sock *sk);
+diff --git a/include/net/llc.h b/include/net/llc.h
+index df282d9b40170..9c10b121b49b0 100644
+--- a/include/net/llc.h
++++ b/include/net/llc.h
+@@ -72,7 +72,9 @@ struct llc_sap {
+ static inline
+ struct hlist_head *llc_sk_dev_hash(struct llc_sap *sap, int ifindex)
+ {
+-	return &sap->sk_dev_hash[ifindex % LLC_SK_DEV_HASH_ENTRIES];
++	u32 bucket = hash_32(ifindex, LLC_SK_DEV_HASH_BITS);
++
++	return &sap->sk_dev_hash[bucket];
+ }
+ 
+ static inline
+diff --git a/include/net/neighbour.h b/include/net/neighbour.h
+index 22ced1381ede5..d5767e25509cc 100644
+--- a/include/net/neighbour.h
++++ b/include/net/neighbour.h
+@@ -253,6 +253,7 @@ static inline void *neighbour_priv(const struct neighbour *n)
+ #define NEIGH_UPDATE_F_OVERRIDE			0x00000001
+ #define NEIGH_UPDATE_F_WEAK_OVERRIDE		0x00000002
+ #define NEIGH_UPDATE_F_OVERRIDE_ISROUTER	0x00000004
++#define NEIGH_UPDATE_F_USE			0x10000000
+ #define NEIGH_UPDATE_F_EXT_LEARNED		0x20000000
+ #define NEIGH_UPDATE_F_ISROUTER			0x40000000
+ #define NEIGH_UPDATE_F_ADMIN			0x80000000
+@@ -504,10 +505,15 @@ static inline int neigh_output(struct neighbour *n, struct sk_buff *skb,
+ {
+ 	const struct hh_cache *hh = &n->hh;
+ 
+-	if ((n->nud_state & NUD_CONNECTED) && hh->hh_len && !skip_cache)
++	/* n->nud_state and hh->hh_len could be changed under us.
++	 * neigh_hh_output() is taking care of the race later.
++	 */
++	if (!skip_cache &&
++	    (READ_ONCE(n->nud_state) & NUD_CONNECTED) &&
++	    READ_ONCE(hh->hh_len))
+ 		return neigh_hh_output(hh, skb);
+-	else
+-		return n->output(n, skb);
++
++	return n->output(n, skb);
+ }
+ 
+ static inline struct neighbour *
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index f8631ad3c8686..9226a84dcc14d 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -302,6 +302,8 @@ struct Qdisc_ops {
+ 					  struct netlink_ext_ack *extack);
+ 	void			(*attach)(struct Qdisc *sch);
+ 	int			(*change_tx_queue_len)(struct Qdisc *, unsigned int);
++	void			(*change_real_num_tx)(struct Qdisc *sch,
++						      unsigned int new_real_tx);
+ 
+ 	int			(*dump)(struct Qdisc *, struct sk_buff *);
+ 	int			(*dump_stats)(struct Qdisc *, struct gnet_dump *);
+@@ -683,6 +685,8 @@ void qdisc_class_hash_grow(struct Qdisc *, struct Qdisc_class_hash *);
+ void qdisc_class_hash_destroy(struct Qdisc_class_hash *);
+ 
+ int dev_qdisc_change_tx_queue_len(struct net_device *dev);
++void dev_qdisc_change_real_num_tx(struct net_device *dev,
++				  unsigned int new_real_tx);
+ void dev_init_scheduler(struct net_device *dev);
+ void dev_shutdown(struct net_device *dev);
+ void dev_activate(struct net_device *dev);
+diff --git a/include/net/sock.h b/include/net/sock.h
+index cdca984f36305..6270d1d9436b0 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1214,7 +1214,7 @@ struct proto {
+ 	unsigned int		useroffset;	/* Usercopy region offset */
+ 	unsigned int		usersize;	/* Usercopy region size */
+ 
+-	struct percpu_counter	*orphan_count;
++	unsigned int __percpu	*orphan_count;
+ 
+ 	struct request_sock_ops	*rsk_prot;
+ 	struct timewait_sock_ops *twsk_prot;
+diff --git a/include/net/strparser.h b/include/net/strparser.h
+index 1d20b98493a10..bec1439bd3be6 100644
+--- a/include/net/strparser.h
++++ b/include/net/strparser.h
+@@ -54,10 +54,24 @@ struct strp_msg {
+ 	int offset;
+ };
+ 
++struct _strp_msg {
++	/* Internal cb structure. struct strp_msg must be first for passing
++	 * to upper layer.
++	 */
++	struct strp_msg strp;
++	int accum_len;
++};
++
++struct sk_skb_cb {
++#define SK_SKB_CB_PRIV_LEN 20
++	unsigned char data[SK_SKB_CB_PRIV_LEN];
++	struct _strp_msg strp;
++};
++
+ static inline struct strp_msg *strp_msg(struct sk_buff *skb)
+ {
+ 	return (struct strp_msg *)((void *)skb->cb +
+-		offsetof(struct qdisc_skb_cb, data));
++		offsetof(struct sk_skb_cb, strp));
+ }
+ 
+ /* Structure for an attached lower socket */
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index eff611da5780b..334b8d1b54429 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -48,7 +48,9 @@
+ 
+ extern struct inet_hashinfo tcp_hashinfo;
+ 
+-extern struct percpu_counter tcp_orphan_count;
++DECLARE_PER_CPU(unsigned int, tcp_orphan_count);
++int tcp_orphan_count_sum(void);
++
+ void tcp_time_wait(struct sock *sk, int state, int timeo);
+ 
+ #define MAX_TCP_HEADER	L1_CACHE_ALIGN(128 + MAX_HEADER)
+@@ -290,19 +292,6 @@ static inline bool tcp_out_of_memory(struct sock *sk)
+ 
+ void sk_forced_mem_schedule(struct sock *sk, int size);
+ 
+-static inline bool tcp_too_many_orphans(struct sock *sk, int shift)
+-{
+-	struct percpu_counter *ocp = sk->sk_prot->orphan_count;
+-	int orphans = percpu_counter_read_positive(ocp);
+-
+-	if (orphans << shift > sysctl_tcp_max_orphans) {
+-		orphans = percpu_counter_sum_positive(ocp);
+-		if (orphans << shift > sysctl_tcp_max_orphans)
+-			return true;
+-	}
+-	return false;
+-}
+-
+ bool tcp_check_oom(struct sock *sk, int shift);
+ 
+ 
+diff --git a/include/net/udp.h b/include/net/udp.h
+index 949ae14a54250..435cc009e6eaa 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -488,8 +488,9 @@ static inline struct sk_buff *udp_rcv_segment(struct sock *sk,
+ 	 * CHECKSUM_NONE in __udp_gso_segment. UDP GRO indeed builds partial
+ 	 * packets in udp_gro_complete_segment. As does UDP GSO, verified by
+ 	 * udp_send_skb. But when those packets are looped in dev_loopback_xmit
+-	 * their ip_summed is set to CHECKSUM_UNNECESSARY. Reset in this
+-	 * specific case, where PARTIAL is both correct and required.
++	 * their ip_summed CHECKSUM_NONE is changed to CHECKSUM_UNNECESSARY.
++	 * Reset in this specific case, where PARTIAL is both correct and
++	 * required.
+ 	 */
+ 	if (skb->pkt_type == PACKET_LOOPBACK)
+ 		skb->ip_summed = CHECKSUM_PARTIAL;
+diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h
+index e2bf36e6964b6..c94fa29415021 100644
+--- a/include/uapi/linux/ethtool_netlink.h
++++ b/include/uapi/linux/ethtool_netlink.h
+@@ -394,7 +394,9 @@ enum {
+ 	ETHTOOL_A_PAUSE_STAT_TX_FRAMES,
+ 	ETHTOOL_A_PAUSE_STAT_RX_FRAMES,
+ 
+-	/* add new constants above here */
++	/* add new constants above here
++	 * adjust ETHTOOL_PAUSE_STAT_CNT if adding non-stats!
++	 */
+ 	__ETHTOOL_A_PAUSE_STAT_CNT,
+ 	ETHTOOL_A_PAUSE_STAT_MAX = (__ETHTOOL_A_PAUSE_STAT_CNT - 1)
+ };
+diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
+index a95d55f9f2576..142b184eca8b4 100644
+--- a/include/uapi/linux/pci_regs.h
++++ b/include/uapi/linux/pci_regs.h
+@@ -504,6 +504,12 @@
+ #define  PCI_EXP_DEVCTL_URRE	0x0008	/* Unsupported Request Reporting En. */
+ #define  PCI_EXP_DEVCTL_RELAX_EN 0x0010 /* Enable relaxed ordering */
+ #define  PCI_EXP_DEVCTL_PAYLOAD	0x00e0	/* Max_Payload_Size */
++#define  PCI_EXP_DEVCTL_PAYLOAD_128B 0x0000 /* 128 Bytes */
++#define  PCI_EXP_DEVCTL_PAYLOAD_256B 0x0020 /* 256 Bytes */
++#define  PCI_EXP_DEVCTL_PAYLOAD_512B 0x0040 /* 512 Bytes */
++#define  PCI_EXP_DEVCTL_PAYLOAD_1024B 0x0060 /* 1024 Bytes */
++#define  PCI_EXP_DEVCTL_PAYLOAD_2048B 0x0080 /* 2048 Bytes */
++#define  PCI_EXP_DEVCTL_PAYLOAD_4096B 0x00a0 /* 4096 Bytes */
+ #define  PCI_EXP_DEVCTL_EXT_TAG	0x0100	/* Extended Tag Field Enable */
+ #define  PCI_EXP_DEVCTL_PHANTOM	0x0200	/* Phantom Functions Enable */
+ #define  PCI_EXP_DEVCTL_AUX_PME	0x0400	/* Auxiliary Power PM Enable */
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 72e4bf0ee5460..d3a1f25f8ec2e 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -528,6 +528,7 @@ int bpf_jit_enable   __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_DEFAULT_ON);
+ int bpf_jit_kallsyms __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_DEFAULT_ON);
+ int bpf_jit_harden   __read_mostly;
+ long bpf_jit_limit   __read_mostly;
++long bpf_jit_limit_max __read_mostly;
+ 
+ static void
+ bpf_prog_ksym_set_addr(struct bpf_prog *prog)
+@@ -821,7 +822,8 @@ u64 __weak bpf_jit_alloc_exec_limit(void)
+ static int __init bpf_jit_charge_init(void)
+ {
+ 	/* Only used as heuristic here to derive limit. */
+-	bpf_jit_limit = min_t(u64, round_up(bpf_jit_alloc_exec_limit() >> 2,
++	bpf_jit_limit_max = bpf_jit_alloc_exec_limit();
++	bpf_jit_limit = min_t(u64, round_up(bpf_jit_limit_max >> 2,
+ 					    PAGE_SIZE), LONG_MAX);
+ 	return 0;
+ }
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 0c26757ea7fbb..a15826a9a644f 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1298,12 +1298,12 @@ static void __reg_combine_32_into_64(struct bpf_reg_state *reg)
+ 
+ static bool __reg64_bound_s32(s64 a)
+ {
+-	return a > S32_MIN && a < S32_MAX;
++	return a >= S32_MIN && a <= S32_MAX;
+ }
+ 
+ static bool __reg64_bound_u32(u64 a)
+ {
+-	return a > U32_MIN && a < U32_MAX;
++	return a >= U32_MIN && a <= U32_MAX;
+ }
+ 
+ static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 60d38e2f69dd8..a86857edaa571 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -1711,6 +1711,7 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ 	struct cgroup *dcgrp = &dst_root->cgrp;
+ 	struct cgroup_subsys *ss;
+ 	int ssid, i, ret;
++	u16 dfl_disable_ss_mask = 0;
+ 
+ 	lockdep_assert_held(&cgroup_mutex);
+ 
+@@ -1727,8 +1728,28 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ 		/* can't move between two non-dummy roots either */
+ 		if (ss->root != &cgrp_dfl_root && dst_root != &cgrp_dfl_root)
+ 			return -EBUSY;
++
++		/*
++		 * Collect ssid's that need to be disabled from default
++		 * hierarchy.
++		 */
++		if (ss->root == &cgrp_dfl_root)
++			dfl_disable_ss_mask |= 1 << ssid;
++
+ 	} while_each_subsys_mask();
+ 
++	if (dfl_disable_ss_mask) {
++		struct cgroup *scgrp = &cgrp_dfl_root.cgrp;
++
++		/*
++		 * Controllers from default hierarchy that need to be rebound
++		 * are all disabled together in one go.
++		 */
++		cgrp_dfl_root.subsys_mask &= ~dfl_disable_ss_mask;
++		WARN_ON(cgroup_apply_control(scgrp));
++		cgroup_finalize_control(scgrp, 0);
++	}
++
+ 	do_each_subsys_mask(ss, ssid, ss_mask) {
+ 		struct cgroup_root *src_root = ss->root;
+ 		struct cgroup *scgrp = &src_root->cgrp;
+@@ -1737,10 +1758,12 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ 
+ 		WARN_ON(!css || cgroup_css(dcgrp, ss));
+ 
+-		/* disable from the source */
+-		src_root->subsys_mask &= ~(1 << ssid);
+-		WARN_ON(cgroup_apply_control(scgrp));
+-		cgroup_finalize_control(scgrp, 0);
++		if (src_root != &cgrp_dfl_root) {
++			/* disable from the source */
++			src_root->subsys_mask &= ~(1 << ssid);
++			WARN_ON(cgroup_apply_control(scgrp));
++			cgroup_finalize_control(scgrp, 0);
++		}
+ 
+ 		/* rebind */
+ 		RCU_INIT_POINTER(scgrp->subsys[ssid], NULL);
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index d51175cedfca4..89ca9b61aa0d9 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -421,8 +421,6 @@ static void root_cgroup_cputime(struct task_cputime *cputime)
+ 		cputime->sum_exec_runtime += user;
+ 		cputime->sum_exec_runtime += sys;
+ 		cputime->sum_exec_runtime += cpustat[CPUTIME_STEAL];
+-		cputime->sum_exec_runtime += cpustat[CPUTIME_GUEST];
+-		cputime->sum_exec_runtime += cpustat[CPUTIME_GUEST_NICE];
+ 	}
+ }
+ 
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 3f96400a0ac61..e465903abed9e 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2191,6 +2191,7 @@ static __latent_entropy struct task_struct *copy_process(
+ 	p->pdeath_signal = 0;
+ 	INIT_LIST_HEAD(&p->thread_group);
+ 	p->task_works = NULL;
++	clear_posix_cputimers_work(p);
+ 
+ 	/*
+ 	 * Ensure that the cgroup subsystem policies allow the new process to be
+@@ -2310,7 +2311,7 @@ static __latent_entropy struct task_struct *copy_process(
+ 	write_unlock_irq(&tasklist_lock);
+ 
+ 	proc_fork_connector(p);
+-	sched_post_fork(p);
++	sched_post_fork(p, args);
+ 	cgroup_post_fork(p, args);
+ 	perf_event_fork(p);
+ 
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index f590e9ff37062..66a6ba81edb1e 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -2943,13 +2943,12 @@ static const struct file_operations fops_kp = {
+ static int __init debugfs_kprobe_init(void)
+ {
+ 	struct dentry *dir;
+-	unsigned int value = 1;
+ 
+ 	dir = debugfs_create_dir("kprobes", NULL);
+ 
+ 	debugfs_create_file("list", 0400, dir, NULL, &kprobes_fops);
+ 
+-	debugfs_create_file("enabled", 0600, dir, &value, &fops_kp);
++	debugfs_create_file("enabled", 0600, dir, NULL, &fops_kp);
+ 
+ 	debugfs_create_file("blacklist", 0400, dir, NULL,
+ 			    &kprobe_blacklist_fops);
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 5184f68968158..1f6a2f1226fa9 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -887,7 +887,7 @@ look_up_lock_class(const struct lockdep_map *lock, unsigned int subclass)
+ 	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+ 		return NULL;
+ 
+-	hlist_for_each_entry_rcu(class, hash_head, hash_entry) {
++	hlist_for_each_entry_rcu_notrace(class, hash_head, hash_entry) {
+ 		if (class->key == key) {
+ 			/*
+ 			 * Huh! same key, different name? Did someone trample
+@@ -5303,7 +5303,7 @@ int __lock_is_held(const struct lockdep_map *lock, int read)
+ 		struct held_lock *hlock = curr->held_locks + i;
+ 
+ 		if (match_held_lock(hlock, lock)) {
+-			if (read == -1 || hlock->read == read)
++			if (read == -1 || !!hlock->read == read)
+ 				return 1;
+ 
+ 			return 0;
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index be381eb6116a1..119b929dcff0f 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -94,8 +94,7 @@ static void em_debug_remove_pd(struct device *dev) {}
+ static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,
+ 				int nr_states, struct em_data_callback *cb)
+ {
+-	unsigned long opp_eff, prev_opp_eff = ULONG_MAX;
+-	unsigned long power, freq, prev_freq = 0;
++	unsigned long power, freq, prev_freq = 0, prev_cost = ULONG_MAX;
+ 	struct em_perf_state *table;
+ 	int i, ret;
+ 	u64 fmax;
+@@ -140,27 +139,21 @@ static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,
+ 
+ 		table[i].power = power;
+ 		table[i].frequency = prev_freq = freq;
+-
+-		/*
+-		 * The hertz/watts efficiency ratio should decrease as the
+-		 * frequency grows on sane platforms. But this isn't always
+-		 * true in practice so warn the user if a higher OPP is more
+-		 * power efficient than a lower one.
+-		 */
+-		opp_eff = freq / power;
+-		if (opp_eff >= prev_opp_eff)
+-			dev_dbg(dev, "EM: hertz/watts ratio non-monotonically decreasing: em_perf_state %d >= em_perf_state%d\n",
+-					i, i - 1);
+-		prev_opp_eff = opp_eff;
+ 	}
+ 
+ 	/* Compute the cost of each performance state. */
+ 	fmax = (u64) table[nr_states - 1].frequency;
+-	for (i = 0; i < nr_states; i++) {
++	for (i = nr_states - 1; i >= 0; i--) {
+ 		unsigned long power_res = em_scale_power(table[i].power);
+ 
+ 		table[i].cost = div64_u64(fmax * power_res,
+ 					  table[i].frequency);
++		if (table[i].cost >= prev_cost) {
++			dev_dbg(dev, "EM: OPP:%lu is inefficient\n",
++				table[i].frequency);
++		} else {
++			prev_cost = table[i].cost;
++		}
+ 	}
+ 
+ 	pd->table = table;
+diff --git a/kernel/power/swap.c b/kernel/power/swap.c
+index 72e33054a2e1b..25e7cb96bb884 100644
+--- a/kernel/power/swap.c
++++ b/kernel/power/swap.c
+@@ -299,7 +299,7 @@ static int hib_submit_io(int op, int op_flags, pgoff_t page_off, void *addr,
+ 	return error;
+ }
+ 
+-static blk_status_t hib_wait_io(struct hib_bio_batch *hb)
++static int hib_wait_io(struct hib_bio_batch *hb)
+ {
+ 	/*
+ 	 * We are relying on the behavior of blk_plug that a thread with
+@@ -1521,9 +1521,10 @@ end:
+ int swsusp_check(void)
+ {
+ 	int error;
++	void *holder;
+ 
+ 	hib_resume_bdev = blkdev_get_by_dev(swsusp_resume_device,
+-					    FMODE_READ, NULL);
++					    FMODE_READ | FMODE_EXCL, &holder);
+ 	if (!IS_ERR(hib_resume_bdev)) {
+ 		set_blocksize(hib_resume_bdev, PAGE_SIZE);
+ 		clear_page(swsusp_header);
+@@ -1545,7 +1546,7 @@ int swsusp_check(void)
+ 
+ put:
+ 		if (error)
+-			blkdev_put(hib_resume_bdev, FMODE_READ);
++			blkdev_put(hib_resume_bdev, FMODE_READ | FMODE_EXCL);
+ 		else
+ 			pr_debug("Image signature found, resuming\n");
+ 	} else {
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 916ea4f66e4b2..6c1aea48a79a1 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -1238,28 +1238,34 @@ static void rcutorture_one_extend(int *readstate, int newstate,
+ 	/* First, put new protection in place to avoid critical-section gap. */
+ 	if (statesnew & RCUTORTURE_RDR_BH)
+ 		local_bh_disable();
++	if (statesnew & RCUTORTURE_RDR_RBH)
++		rcu_read_lock_bh();
+ 	if (statesnew & RCUTORTURE_RDR_IRQ)
+ 		local_irq_disable();
+ 	if (statesnew & RCUTORTURE_RDR_PREEMPT)
+ 		preempt_disable();
+-	if (statesnew & RCUTORTURE_RDR_RBH)
+-		rcu_read_lock_bh();
+ 	if (statesnew & RCUTORTURE_RDR_SCHED)
+ 		rcu_read_lock_sched();
+ 	if (statesnew & RCUTORTURE_RDR_RCU)
+ 		idxnew = cur_ops->readlock() << RCUTORTURE_RDR_SHIFT;
+ 
+-	/* Next, remove old protection, irq first due to bh conflict. */
++	/*
++	 * Next, remove old protection, in decreasing order of strength
++	 * to avoid unlock paths that aren't safe in the stronger
++	 * context. Namely: BH can not be enabled with disabled interrupts.
++	 * Additionally PREEMPT_RT requires that BH is enabled in preemptible
++	 * context.
++	 */
+ 	if (statesold & RCUTORTURE_RDR_IRQ)
+ 		local_irq_enable();
+-	if (statesold & RCUTORTURE_RDR_BH)
+-		local_bh_enable();
+ 	if (statesold & RCUTORTURE_RDR_PREEMPT)
+ 		preempt_enable();
+-	if (statesold & RCUTORTURE_RDR_RBH)
+-		rcu_read_unlock_bh();
+ 	if (statesold & RCUTORTURE_RDR_SCHED)
+ 		rcu_read_unlock_sched();
++	if (statesold & RCUTORTURE_RDR_BH)
++		local_bh_enable();
++	if (statesold & RCUTORTURE_RDR_RBH)
++		rcu_read_unlock_bh();
+ 	if (statesold & RCUTORTURE_RDR_RCU) {
+ 		bool lockit = !statesnew && !(torture_random(trsp) & 0xffff);
+ 
+@@ -1302,6 +1308,9 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
+ 	int mask = rcutorture_extend_mask_max();
+ 	unsigned long randmask1 = torture_random(trsp) >> 8;
+ 	unsigned long randmask2 = randmask1 >> 3;
++	unsigned long preempts = RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED;
++	unsigned long preempts_irq = preempts | RCUTORTURE_RDR_IRQ;
++	unsigned long bhs = RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
+ 
+ 	WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT);
+ 	/* Mostly only one bit (need preemption!), sometimes lots of bits. */
+@@ -1309,11 +1318,26 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
+ 		mask = mask & randmask2;
+ 	else
+ 		mask = mask & (1 << (randmask2 % RCUTORTURE_RDR_NBITS));
+-	/* Can't enable bh w/irq disabled. */
+-	if ((mask & RCUTORTURE_RDR_IRQ) &&
+-	    ((!(mask & RCUTORTURE_RDR_BH) && (oldmask & RCUTORTURE_RDR_BH)) ||
+-	     (!(mask & RCUTORTURE_RDR_RBH) && (oldmask & RCUTORTURE_RDR_RBH))))
+-		mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
++
++	/*
++	 * Can't enable bh w/irq disabled.
++	 */
++	if (mask & RCUTORTURE_RDR_IRQ)
++		mask |= oldmask & bhs;
++
++	/*
++	 * Ideally these sequences would be detected in debug builds
++	 * (regardless of RT), but until then don't stop testing
++	 * them on non-RT.
++	 */
++	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
++		/* Can't modify BH in atomic context */
++		if (oldmask & preempts_irq)
++			mask &= ~bhs;
++		if ((oldmask | mask) & preempts_irq)
++			mask |= oldmask & bhs;
++	}
++
+ 	return mask ?: RCUTORTURE_RDR_RCU;
+ }
+ 
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index b338f514ee5aa..7c05c5ab78653 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -197,6 +197,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
+ 	 * This loop is terminated by the system going down.  ;-)
+ 	 */
+ 	for (;;) {
++		set_tasks_gp_state(rtp, RTGS_WAIT_CBS);
+ 
+ 		/* Pick up any new callbacks. */
+ 		raw_spin_lock_irqsave(&rtp->cbs_lock, flags);
+@@ -236,8 +237,6 @@ static int __noreturn rcu_tasks_kthread(void *arg)
+ 		}
+ 		/* Paranoid sleep to keep this from entering a tight loop */
+ 		schedule_timeout_idle(rtp->gp_sleep);
+-
+-		set_tasks_gp_state(rtp, RTGS_WAIT_CBS);
+ 	}
+ }
+ 
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index 8760b6ead770a..0ffe185c1f46a 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -759,7 +759,7 @@ static void sync_sched_exp_online_cleanup(int cpu)
+ 	my_cpu = get_cpu();
+ 	/* Quiescent state either not needed or already requested, leave. */
+ 	if (!(READ_ONCE(rnp->expmask) & rdp->grpmask) ||
+-	    __this_cpu_read(rcu_data.cpu_no_qs.b.exp)) {
++	    rdp->cpu_no_qs.b.exp) {
+ 		put_cpu();
+ 		return;
+ 	}
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index c5091aeaa37bb..6ed153f226b39 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -2573,7 +2573,7 @@ static void rcu_bind_gp_kthread(void)
+ }
+ 
+ /* Record the current task on dyntick-idle entry. */
+-static void noinstr rcu_dynticks_task_enter(void)
++static __always_inline void rcu_dynticks_task_enter(void)
+ {
+ #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL)
+ 	WRITE_ONCE(current->rcu_tasks_idle_cpu, smp_processor_id());
+@@ -2581,7 +2581,7 @@ static void noinstr rcu_dynticks_task_enter(void)
+ }
+ 
+ /* Record no current task on dyntick-idle exit. */
+-static void noinstr rcu_dynticks_task_exit(void)
++static __always_inline void rcu_dynticks_task_exit(void)
+ {
+ #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL)
+ 	WRITE_ONCE(current->rcu_tasks_idle_cpu, -1);
+@@ -2589,7 +2589,7 @@ static void noinstr rcu_dynticks_task_exit(void)
+ }
+ 
+ /* Turn on heavyweight RCU tasks trace readers on idle/user entry. */
+-static void rcu_dynticks_task_trace_enter(void)
++static __always_inline void rcu_dynticks_task_trace_enter(void)
+ {
+ #ifdef CONFIG_TASKS_TRACE_RCU
+ 	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
+@@ -2598,7 +2598,7 @@ static void rcu_dynticks_task_trace_enter(void)
+ }
+ 
+ /* Turn off heavyweight RCU tasks trace readers on idle/user exit. */
+-static void rcu_dynticks_task_trace_exit(void)
++static __always_inline void rcu_dynticks_task_trace_exit(void)
+ {
+ #ifdef CONFIG_TASKS_TRACE_RCU
+ 	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index e4551d1736fa3..bc8ff11e60242 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -3231,8 +3231,6 @@ static inline void init_schedstats(void) {}
+  */
+ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ {
+-	unsigned long flags;
+-
+ 	__sched_fork(clone_flags, p);
+ 	/*
+ 	 * We mark the process as NEW here. This guarantees that
+@@ -3278,24 +3276,6 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ 
+ 	init_entity_runnable_average(&p->se);
+ 
+-	/*
+-	 * The child is not yet in the pid-hash so no cgroup attach races,
+-	 * and the cgroup is pinned to this child due to cgroup_fork()
+-	 * is ran before sched_fork().
+-	 *
+-	 * Silence PROVE_RCU.
+-	 */
+-	raw_spin_lock_irqsave(&p->pi_lock, flags);
+-	rseq_migrate(p);
+-	/*
+-	 * We're setting the CPU for the first time, we don't migrate,
+-	 * so use __set_task_cpu().
+-	 */
+-	__set_task_cpu(p, smp_processor_id());
+-	if (p->sched_class->task_fork)
+-		p->sched_class->task_fork(p);
+-	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+-
+ #ifdef CONFIG_SCHED_INFO
+ 	if (likely(sched_info_on()))
+ 		memset(&p->sched_info, 0, sizeof(p->sched_info));
+@@ -3311,8 +3291,29 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ 	return 0;
+ }
+ 
+-void sched_post_fork(struct task_struct *p)
++void sched_post_fork(struct task_struct *p, struct kernel_clone_args *kargs)
+ {
++	unsigned long flags;
++#ifdef CONFIG_CGROUP_SCHED
++	struct task_group *tg;
++#endif
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++#ifdef CONFIG_CGROUP_SCHED
++	tg = container_of(kargs->cset->subsys[cpu_cgrp_id],
++			  struct task_group, css);
++	p->sched_task_group = autogroup_task_group(p, tg);
++#endif
++	rseq_migrate(p);
++	/*
++	 * We're setting the CPU for the first time, we don't migrate,
++	 * so use __set_task_cpu().
++	 */
++	__set_task_cpu(p, smp_processor_id());
++	if (p->sched_class->task_fork)
++		p->sched_class->task_fork(p);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
+ 	uclamp_post_fork(p);
+ }
+ 
+diff --git a/kernel/signal.c b/kernel/signal.c
+index ef8f2a28d37c5..6bb2df4f6109d 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -2097,15 +2097,6 @@ static inline bool may_ptrace_stop(void)
+ 	return true;
+ }
+ 
+-/*
+- * Return non-zero if there is a SIGKILL that should be waking us up.
+- * Called with the siglock held.
+- */
+-static bool sigkill_pending(struct task_struct *tsk)
+-{
+-	return sigismember(&tsk->pending.signal, SIGKILL) ||
+-	       sigismember(&tsk->signal->shared_pending.signal, SIGKILL);
+-}
+ 
+ /*
+  * This must be called with current->sighand->siglock held.
+@@ -2132,17 +2123,16 @@ static void ptrace_stop(int exit_code, int why, int clear_code, kernel_siginfo_t
+ 		 * calling arch_ptrace_stop, so we must release it now.
+ 		 * To preserve proper semantics, we must do this before
+ 		 * any signal bookkeeping like checking group_stop_count.
+-		 * Meanwhile, a SIGKILL could come in before we retake the
+-		 * siglock.  That must prevent us from sleeping in TASK_TRACED.
+-		 * So after regaining the lock, we must check for SIGKILL.
+ 		 */
+ 		spin_unlock_irq(&current->sighand->siglock);
+ 		arch_ptrace_stop(exit_code, info);
+ 		spin_lock_irq(&current->sighand->siglock);
+-		if (sigkill_pending(current))
+-			return;
+ 	}
+ 
++	/*
++	 * schedule() will not sleep if there is a pending signal that
++	 * can awaken the task.
++	 */
+ 	set_special_state(TASK_TRACED);
+ 
+ 	/*
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 08c033b802569..5d76edd0ad9c2 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -1100,14 +1100,29 @@ static void posix_cpu_timers_work(struct callback_head *work)
+ 	handle_posix_cpu_timers(current);
+ }
+ 
++/*
++ * Clear existing posix CPU timers task work.
++ */
++void clear_posix_cputimers_work(struct task_struct *p)
++{
++	/*
++	 * A copied work entry from the old task is not meaningful, clear it.
++	 * N.B. init_task_work will not do this.
++	 */
++	memset(&p->posix_cputimers_work.work, 0,
++	       sizeof(p->posix_cputimers_work.work));
++	init_task_work(&p->posix_cputimers_work.work,
++		       posix_cpu_timers_work);
++	p->posix_cputimers_work.scheduled = false;
++}
++
+ /*
+  * Initialize posix CPU timers task work in init task. Out of line to
+  * keep the callback static and to avoid header recursion hell.
+  */
+ void __init posix_cputimers_init_work(void)
+ {
+-	init_task_work(&current->posix_cputimers_work.work,
+-		       posix_cpu_timers_work);
++	clear_posix_cputimers_work(current);
+ }
+ 
+ /*
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 39d4d9b25d1a1..6deac666ba3e4 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -5000,6 +5000,9 @@ void ring_buffer_reset(struct trace_buffer *buffer)
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	int cpu;
+ 
++	/* prevent another thread from changing buffer sizes */
++	mutex_lock(&buffer->mutex);
++
+ 	for_each_buffer_cpu(buffer, cpu) {
+ 		cpu_buffer = buffer->buffers[cpu];
+ 
+@@ -5018,6 +5021,8 @@ void ring_buffer_reset(struct trace_buffer *buffer)
+ 		atomic_dec(&cpu_buffer->record_disabled);
+ 		atomic_dec(&cpu_buffer->resize_disabled);
+ 	}
++
++	mutex_unlock(&buffer->mutex);
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_reset);
+ 
+diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
+index 4b50fc0cb12c7..d63e51dde0d24 100644
+--- a/kernel/trace/tracing_map.c
++++ b/kernel/trace/tracing_map.c
+@@ -834,29 +834,35 @@ int tracing_map_init(struct tracing_map *map)
+ 	return err;
+ }
+ 
+-static int cmp_entries_dup(const struct tracing_map_sort_entry **a,
+-			   const struct tracing_map_sort_entry **b)
++static int cmp_entries_dup(const void *A, const void *B)
+ {
++	const struct tracing_map_sort_entry *a, *b;
+ 	int ret = 0;
+ 
+-	if (memcmp((*a)->key, (*b)->key, (*a)->elt->map->key_size))
++	a = *(const struct tracing_map_sort_entry **)A;
++	b = *(const struct tracing_map_sort_entry **)B;
++
++	if (memcmp(a->key, b->key, a->elt->map->key_size))
+ 		ret = 1;
+ 
+ 	return ret;
+ }
+ 
+-static int cmp_entries_sum(const struct tracing_map_sort_entry **a,
+-			   const struct tracing_map_sort_entry **b)
++static int cmp_entries_sum(const void *A, const void *B)
+ {
+ 	const struct tracing_map_elt *elt_a, *elt_b;
++	const struct tracing_map_sort_entry *a, *b;
+ 	struct tracing_map_sort_key *sort_key;
+ 	struct tracing_map_field *field;
+ 	tracing_map_cmp_fn_t cmp_fn;
+ 	void *val_a, *val_b;
+ 	int ret = 0;
+ 
+-	elt_a = (*a)->elt;
+-	elt_b = (*b)->elt;
++	a = *(const struct tracing_map_sort_entry **)A;
++	b = *(const struct tracing_map_sort_entry **)B;
++
++	elt_a = a->elt;
++	elt_b = b->elt;
+ 
+ 	sort_key = &elt_a->map->sort_key;
+ 
+@@ -873,18 +879,21 @@ static int cmp_entries_sum(const struct tracing_map_sort_entry **a,
+ 	return ret;
+ }
+ 
+-static int cmp_entries_key(const struct tracing_map_sort_entry **a,
+-			   const struct tracing_map_sort_entry **b)
++static int cmp_entries_key(const void *A, const void *B)
+ {
+ 	const struct tracing_map_elt *elt_a, *elt_b;
++	const struct tracing_map_sort_entry *a, *b;
+ 	struct tracing_map_sort_key *sort_key;
+ 	struct tracing_map_field *field;
+ 	tracing_map_cmp_fn_t cmp_fn;
+ 	void *val_a, *val_b;
+ 	int ret = 0;
+ 
+-	elt_a = (*a)->elt;
+-	elt_b = (*b)->elt;
++	a = *(const struct tracing_map_sort_entry **)A;
++	b = *(const struct tracing_map_sort_entry **)B;
++
++	elt_a = a->elt;
++	elt_b = b->elt;
+ 
+ 	sort_key = &elt_a->map->sort_key;
+ 
+@@ -989,10 +998,8 @@ static void sort_secondary(struct tracing_map *map,
+ 			   struct tracing_map_sort_key *primary_key,
+ 			   struct tracing_map_sort_key *secondary_key)
+ {
+-	int (*primary_fn)(const struct tracing_map_sort_entry **,
+-			  const struct tracing_map_sort_entry **);
+-	int (*secondary_fn)(const struct tracing_map_sort_entry **,
+-			    const struct tracing_map_sort_entry **);
++	int (*primary_fn)(const void *, const void *);
++	int (*secondary_fn)(const void *, const void *);
+ 	unsigned i, start = 0, n_sub = 1;
+ 
+ 	if (is_key(map, primary_key->field_idx))
+@@ -1061,8 +1068,7 @@ int tracing_map_sort_entries(struct tracing_map *map,
+ 			     unsigned int n_sort_keys,
+ 			     struct tracing_map_sort_entry ***sort_entries)
+ {
+-	int (*cmp_entries_fn)(const struct tracing_map_sort_entry **,
+-			      const struct tracing_map_sort_entry **);
++	int (*cmp_entries_fn)(const void *, const void *);
+ 	struct tracing_map_sort_entry *sort_entry, **entries;
+ 	int i, n_entries, ret;
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 4cb622b2661b5..d02073b9d56e2 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -5326,9 +5326,6 @@ int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)
+ 	int ret = -EINVAL;
+ 	cpumask_var_t saved_cpumask;
+ 
+-	if (!zalloc_cpumask_var(&saved_cpumask, GFP_KERNEL))
+-		return -ENOMEM;
+-
+ 	/*
+ 	 * Not excluding isolated cpus on purpose.
+ 	 * If the user wishes to include them, we allow that.
+@@ -5336,6 +5333,15 @@ int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)
+ 	cpumask_and(cpumask, cpumask, cpu_possible_mask);
+ 	if (!cpumask_empty(cpumask)) {
+ 		apply_wqattrs_lock();
++		if (cpumask_equal(cpumask, wq_unbound_cpumask)) {
++			ret = 0;
++			goto out_unlock;
++		}
++
++		if (!zalloc_cpumask_var(&saved_cpumask, GFP_KERNEL)) {
++			ret = -ENOMEM;
++			goto out_unlock;
++		}
+ 
+ 		/* save the old wq_unbound_cpumask. */
+ 		cpumask_copy(saved_cpumask, wq_unbound_cpumask);
+@@ -5348,10 +5354,11 @@ int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)
+ 		if (ret < 0)
+ 			cpumask_copy(wq_unbound_cpumask, saved_cpumask);
+ 
++		free_cpumask_var(saved_cpumask);
++out_unlock:
+ 		apply_wqattrs_unlock();
+ 	}
+ 
+-	free_cpumask_var(saved_cpumask);
+ 	return ret;
+ }
+ 
+diff --git a/lib/decompress_unxz.c b/lib/decompress_unxz.c
+index 25d59a95bd668..abea25310ac73 100644
+--- a/lib/decompress_unxz.c
++++ b/lib/decompress_unxz.c
+@@ -167,7 +167,7 @@
+  * memeq and memzero are not used much and any remotely sane implementation
+  * is fast enough. memcpy/memmove speed matters in multi-call mode, but
+  * the kernel image is decompressed in single-call mode, in which only
+- * memcpy speed can matter and only if there is a lot of uncompressible data
++ * memmove speed can matter and only if there is a lot of uncompressible data
+  * (LZMA2 stores uncompressible chunks in uncompressed form). Thus, the
+  * functions below should just be kept small; it's probably not worth
+  * optimizing for speed.
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 537bfdc8cd095..b364231b5fc8c 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -1343,7 +1343,7 @@ ssize_t iov_iter_get_pages(struct iov_iter *i,
+ 		res = get_user_pages_fast(addr, n,
+ 				iov_iter_rw(i) != WRITE ?  FOLL_WRITE : 0,
+ 				pages);
+-		if (unlikely(res < 0))
++		if (unlikely(res <= 0))
+ 			return res;
+ 		return (res == n ? len : res * PAGE_SIZE) - *start;
+ 	0;}),({
+@@ -1424,8 +1424,9 @@ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
+ 			return -ENOMEM;
+ 		res = get_user_pages_fast(addr, n,
+ 				iov_iter_rw(i) != WRITE ?  FOLL_WRITE : 0, p);
+-		if (unlikely(res < 0)) {
++		if (unlikely(res <= 0)) {
+ 			kvfree(p);
++			*pages = NULL;
+ 			return res;
+ 		}
+ 		*pages = p;
+diff --git a/lib/xz/xz_dec_lzma2.c b/lib/xz/xz_dec_lzma2.c
+index 65a1aad8c223b..a18b52759fd91 100644
+--- a/lib/xz/xz_dec_lzma2.c
++++ b/lib/xz/xz_dec_lzma2.c
+@@ -387,7 +387,14 @@ static void dict_uncompressed(struct dictionary *dict, struct xz_buf *b,
+ 
+ 		*left -= copy_size;
+ 
+-		memcpy(dict->buf + dict->pos, b->in + b->in_pos, copy_size);
++		/*
++		 * If doing in-place decompression in single-call mode and the
++		 * uncompressed size of the file is larger than the caller
++		 * thought (i.e. it is invalid input!), the buffers below may
++		 * overlap and cause undefined behavior with memcpy().
++		 * With valid inputs memcpy() would be fine here.
++		 */
++		memmove(dict->buf + dict->pos, b->in + b->in_pos, copy_size);
+ 		dict->pos += copy_size;
+ 
+ 		if (dict->full < dict->pos)
+@@ -397,7 +404,11 @@ static void dict_uncompressed(struct dictionary *dict, struct xz_buf *b,
+ 			if (dict->pos == dict->end)
+ 				dict->pos = 0;
+ 
+-			memcpy(b->out + b->out_pos, b->in + b->in_pos,
++			/*
++			 * Like above but for multi-call mode: use memmove()
++			 * to avoid undefined behavior with invalid input.
++			 */
++			memmove(b->out + b->out_pos, b->in + b->in_pos,
+ 					copy_size);
+ 		}
+ 
+@@ -421,6 +432,12 @@ static uint32_t dict_flush(struct dictionary *dict, struct xz_buf *b)
+ 		if (dict->pos == dict->end)
+ 			dict->pos = 0;
+ 
++		/*
++		 * These buffers cannot overlap even if doing in-place
++		 * decompression because in multi-call mode dict->buf
++		 * has been allocated by us in this file; it's not
++		 * provided by the caller like in single-call mode.
++		 */
+ 		memcpy(b->out + b->out_pos, dict->buf + dict->start,
+ 				copy_size);
+ 	}
+diff --git a/lib/xz/xz_dec_stream.c b/lib/xz/xz_dec_stream.c
+index 32ab2a08b7cbc..a30e3308035fa 100644
+--- a/lib/xz/xz_dec_stream.c
++++ b/lib/xz/xz_dec_stream.c
+@@ -402,12 +402,12 @@ static enum xz_ret dec_stream_header(struct xz_dec *s)
+ 	 * we will accept other check types too, but then the check won't
+ 	 * be verified and a warning (XZ_UNSUPPORTED_CHECK) will be given.
+ 	 */
++	if (s->temp.buf[HEADER_MAGIC_SIZE + 1] > XZ_CHECK_MAX)
++		return XZ_OPTIONS_ERROR;
++
+ 	s->check_type = s->temp.buf[HEADER_MAGIC_SIZE + 1];
+ 
+ #ifdef XZ_DEC_ANY_CHECK
+-	if (s->check_type > XZ_CHECK_MAX)
+-		return XZ_OPTIONS_ERROR;
+-
+ 	if (s->check_type > XZ_CHECK_CRC32)
+ 		return XZ_UNSUPPORTED_CHECK;
+ #else
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 92bf987d0a410..4bb2a4c593f73 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -230,7 +230,7 @@ enum res_type {
+ 	     iter != NULL;				\
+ 	     iter = mem_cgroup_iter(NULL, iter, NULL))
+ 
+-static inline bool should_force_charge(void)
++static inline bool task_is_dying(void)
+ {
+ 	return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
+ 		(current->flags & PF_EXITING);
+@@ -1729,7 +1729,7 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
+ 	 * A few threads which were not waiting at mutex_lock_killable() can
+ 	 * fail to bail out. Therefore, check again after holding oom_lock.
+ 	 */
+-	ret = should_force_charge() || out_of_memory(&oc);
++	ret = task_is_dying() || out_of_memory(&oc);
+ 
+ unlock:
+ 	mutex_unlock(&oom_lock);
+@@ -2683,6 +2683,7 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
+ 	struct page_counter *counter;
+ 	enum oom_status oom_status;
+ 	unsigned long nr_reclaimed;
++	bool passed_oom = false;
+ 	bool may_swap = true;
+ 	bool drained = false;
+ 	unsigned long pflags;
+@@ -2719,15 +2720,6 @@ retry:
+ 	if (gfp_mask & __GFP_ATOMIC)
+ 		goto force;
+ 
+-	/*
+-	 * Unlike in global OOM situations, memcg is not in a physical
+-	 * memory shortage.  Allow dying and OOM-killed tasks to
+-	 * bypass the last charges so that they can exit quickly and
+-	 * free their memory.
+-	 */
+-	if (unlikely(should_force_charge()))
+-		goto force;
+-
+ 	/*
+ 	 * Prevent unbounded recursion when reclaim operations need to
+ 	 * allocate memory. This might exceed the limits temporarily,
+@@ -2788,8 +2780,9 @@ retry:
+ 	if (gfp_mask & __GFP_NOFAIL)
+ 		goto force;
+ 
+-	if (fatal_signal_pending(current))
+-		goto force;
++	/* Avoid endless loop for tasks bypassed by the oom killer */
++	if (passed_oom && task_is_dying())
++		goto nomem;
+ 
+ 	/*
+ 	 * keep retrying as long as the memcg oom killer is able to make
+@@ -2798,14 +2791,10 @@ retry:
+ 	 */
+ 	oom_status = mem_cgroup_oom(mem_over_limit, gfp_mask,
+ 		       get_order(nr_pages * PAGE_SIZE));
+-	switch (oom_status) {
+-	case OOM_SUCCESS:
++	if (oom_status == OOM_SUCCESS) {
++		passed_oom = true;
+ 		nr_retries = MAX_RECLAIM_RETRIES;
+ 		goto retry;
+-	case OOM_FAILED:
+-		goto force;
+-	default:
+-		goto nomem;
+ 	}
+ nomem:
+ 	if (!(gfp_mask & __GFP_NOFAIL))
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index 8b84661a64109..419a814f467e0 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -1118,25 +1118,22 @@ bool out_of_memory(struct oom_control *oc)
+ }
+ 
+ /*
+- * The pagefault handler calls here because it is out of memory, so kill a
+- * memory-hogging task. If oom_lock is held by somebody else, a parallel oom
+- * killing is already in progress so do nothing.
++ * The pagefault handler calls here because some allocation has failed. We have
++ * to take care of the memcg OOM here because this is the only safe context without
++ * any locks held but let the oom killer triggered from the allocation context care
++ * about the global OOM.
+  */
+ void pagefault_out_of_memory(void)
+ {
+-	struct oom_control oc = {
+-		.zonelist = NULL,
+-		.nodemask = NULL,
+-		.memcg = NULL,
+-		.gfp_mask = 0,
+-		.order = 0,
+-	};
++	static DEFINE_RATELIMIT_STATE(pfoom_rs, DEFAULT_RATELIMIT_INTERVAL,
++				      DEFAULT_RATELIMIT_BURST);
+ 
+ 	if (mem_cgroup_oom_synchronize(true))
+ 		return;
+ 
+-	if (!mutex_trylock(&oom_lock))
++	if (fatal_signal_pending(current))
+ 		return;
+-	out_of_memory(&oc);
+-	mutex_unlock(&oom_lock);
++
++	if (__ratelimit(&pfoom_rs))
++		pr_warn("Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF\n");
+ }
+diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
+index 7a0b79b0a6899..73cd50735df29 100644
+--- a/mm/zsmalloc.c
++++ b/mm/zsmalloc.c
+@@ -1835,10 +1835,11 @@ static inline void zs_pool_dec_isolated(struct zs_pool *pool)
+ 	VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
+ 	atomic_long_dec(&pool->isolated_pages);
+ 	/*
+-	 * There's no possibility of racing, since wait_for_isolated_drain()
+-	 * checks the isolated count under &class->lock after enqueuing
+-	 * on migration_wait.
++	 * Checking pool->destroying must happen after atomic_long_dec()
++	 * for pool->isolated_pages above. Paired with the smp_mb() in
++	 * zs_unregister_migration().
+ 	 */
++	smp_mb__after_atomic();
+ 	if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
+ 		wake_up_all(&pool->migration_wait);
+ }
+diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
+index 15bbfaf943fd1..ad3780067a7d8 100644
+--- a/net/8021q/vlan.c
++++ b/net/8021q/vlan.c
+@@ -120,9 +120,6 @@ void unregister_vlan_dev(struct net_device *dev, struct list_head *head)
+ 	}
+ 
+ 	vlan_vid_del(real_dev, vlan->vlan_proto, vlan_id);
+-
+-	/* Get rid of the vlan's reference to real_dev */
+-	dev_put(real_dev);
+ }
+ 
+ int vlan_check_real_dev(struct net_device *real_dev,
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index ec8408d1638fb..c7eba7dab0938 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -813,6 +813,9 @@ static void vlan_dev_free(struct net_device *dev)
+ 
+ 	free_percpu(vlan->vlan_pcpu_stats);
+ 	vlan->vlan_pcpu_stats = NULL;
++
++	/* Get rid of the vlan's reference to real_dev */
++	dev_put(vlan->real_dev);
+ }
+ 
+ void vlan_setup(struct net_device *dev)
+diff --git a/net/9p/client.c b/net/9p/client.c
+index eb42bbb72f523..bf6ed00d7c37d 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -538,6 +538,8 @@ static int p9_check_errors(struct p9_client *c, struct p9_req_t *req)
+ 		kfree(ename);
+ 	} else {
+ 		err = p9pdu_readf(&req->rc, c->proto_version, "d", &ecode);
++		if (err)
++			goto out_err;
+ 		err = -ecode;
+ 
+ 		p9_debug(P9_DEBUG_9P, "<<< RLERROR (%d)\n", -ecode);
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index c99d65ef13b1e..160c016a5dfb9 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1508,6 +1508,9 @@ static void l2cap_sock_close_cb(struct l2cap_chan *chan)
+ {
+ 	struct sock *sk = chan->data;
+ 
++	if (!sk)
++		return;
++
+ 	l2cap_sock_kill(sk);
+ }
+ 
+@@ -1516,6 +1519,9 @@ static void l2cap_sock_teardown_cb(struct l2cap_chan *chan, int err)
+ 	struct sock *sk = chan->data;
+ 	struct sock *parent;
+ 
++	if (!sk)
++		return;
++
+ 	BT_DBG("chan %p state %s", chan, state_to_string(chan->state));
+ 
+ 	/* This callback can be called both for server (BT_LISTEN)
+@@ -1707,8 +1713,10 @@ static void l2cap_sock_destruct(struct sock *sk)
+ {
+ 	BT_DBG("sk %p", sk);
+ 
+-	if (l2cap_pi(sk)->chan)
++	if (l2cap_pi(sk)->chan) {
++		l2cap_pi(sk)->chan->data = NULL;
+ 		l2cap_chan_put(l2cap_pi(sk)->chan);
++	}
+ 
+ 	if (l2cap_pi(sk)->rx_busy_skb) {
+ 		kfree_skb(l2cap_pi(sk)->rx_busy_skb);
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 7c24a9acbc459..2f2b8ddc4dd5d 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -134,6 +134,7 @@ static struct sco_conn *sco_conn_add(struct hci_conn *hcon)
+ 		return NULL;
+ 
+ 	spin_lock_init(&conn->lock);
++	INIT_DELAYED_WORK(&conn->timeout_work, sco_sock_timeout);
+ 
+ 	hcon->sco_data = conn;
+ 	conn->hcon = hcon;
+@@ -197,11 +198,11 @@ static void sco_conn_del(struct hci_conn *hcon, int err)
+ 		sco_chan_del(sk, err);
+ 		bh_unlock_sock(sk);
+ 		sock_put(sk);
+-
+-		/* Ensure no more work items will run before freeing conn. */
+-		cancel_delayed_work_sync(&conn->timeout_work);
+ 	}
+ 
++	/* Ensure no more work items will run before freeing conn. */
++	cancel_delayed_work_sync(&conn->timeout_work);
++
+ 	hcon->sco_data = NULL;
+ 	kfree(conn);
+ }
+@@ -214,8 +215,6 @@ static void __sco_chan_add(struct sco_conn *conn, struct sock *sk,
+ 	sco_pi(sk)->conn = conn;
+ 	conn->sk = sk;
+ 
+-	INIT_DELAYED_WORK(&conn->timeout_work, sco_sock_timeout);
+-
+ 	if (parent)
+ 		bt_accept_enqueue(parent, sk, true);
+ }
+@@ -281,7 +280,8 @@ static int sco_connect(struct hci_dev *hdev, struct sock *sk)
+ 	return err;
+ }
+ 
+-static int sco_send_frame(struct sock *sk, struct msghdr *msg, int len)
++static int sco_send_frame(struct sock *sk, void *buf, int len,
++			  unsigned int msg_flags)
+ {
+ 	struct sco_conn *conn = sco_pi(sk)->conn;
+ 	struct sk_buff *skb;
+@@ -293,15 +293,11 @@ static int sco_send_frame(struct sock *sk, struct msghdr *msg, int len)
+ 
+ 	BT_DBG("sk %p len %d", sk, len);
+ 
+-	skb = bt_skb_send_alloc(sk, len, msg->msg_flags & MSG_DONTWAIT, &err);
++	skb = bt_skb_send_alloc(sk, len, msg_flags & MSG_DONTWAIT, &err);
+ 	if (!skb)
+ 		return err;
+ 
+-	if (memcpy_from_msg(skb_put(skb, len), msg, len)) {
+-		kfree_skb(skb);
+-		return -EFAULT;
+-	}
+-
++	memcpy(skb_put(skb, len), buf, len);
+ 	hci_send_sco(conn->hcon, skb);
+ 
+ 	return len;
+@@ -726,6 +722,7 @@ static int sco_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 			    size_t len)
+ {
+ 	struct sock *sk = sock->sk;
++	void *buf;
+ 	int err;
+ 
+ 	BT_DBG("sock %p, sk %p", sock, sk);
+@@ -737,14 +734,24 @@ static int sco_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 	if (msg->msg_flags & MSG_OOB)
+ 		return -EOPNOTSUPP;
+ 
++	buf = kmalloc(len, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	if (memcpy_from_msg(buf, msg, len)) {
++		kfree(buf);
++		return -EFAULT;
++	}
++
+ 	lock_sock(sk);
+ 
+ 	if (sk->sk_state == BT_CONNECTED)
+-		err = sco_send_frame(sk, msg, len);
++		err = sco_send_frame(sk, buf, len, msg->msg_flags);
+ 	else
+ 		err = -ENOTCONN;
+ 
+ 	release_sock(sk);
++	kfree(buf);
+ 	return err;
+ }
+ 
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index 266c189f1e809..ca75d1b8f415c 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -75,6 +75,13 @@ static void j1939_can_recv(struct sk_buff *iskb, void *data)
+ 	skcb->addr.pgn = (cf->can_id >> 8) & J1939_PGN_MAX;
+ 	/* set default message type */
+ 	skcb->addr.type = J1939_TP;
++
++	if (!j1939_address_is_valid(skcb->addr.sa)) {
++		netdev_err_once(priv->ndev, "%s: sa is broadcast address, ignoring!\n",
++				__func__);
++		goto done;
++	}
++
+ 	if (j1939_pgn_is_pdu1(skcb->addr.pgn)) {
+ 		/* Type 1: with destination address */
+ 		skcb->addr.da = skcb->addr.pgn;
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index e59fbbffa31ce..fe35fdad35c9b 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -2065,6 +2065,12 @@ static void j1939_tp_cmd_recv(struct j1939_priv *priv, struct sk_buff *skb)
+ 		break;
+ 
+ 	case J1939_ETP_CMD_ABORT: /* && J1939_TP_CMD_ABORT */
++		if (j1939_cb_is_broadcast(skcb)) {
++			netdev_err_once(priv->ndev, "%s: abort to broadcast (%02x), ignoring!\n",
++					__func__, skcb->addr.sa);
++			return;
++		}
++
+ 		if (j1939_tp_im_transmitter(skcb))
+ 			j1939_xtp_rx_abort(priv, skb, true);
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 6a4e0e3c59fec..7dd7b9fb600c8 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2973,6 +2973,8 @@ int netif_set_real_num_tx_queues(struct net_device *dev, unsigned int txq)
+ 		if (dev->num_tc)
+ 			netif_setup_tc(dev, txq);
+ 
++		dev_qdisc_change_real_num_tx(dev, txq);
++
+ 		dev->real_num_tx_queues = txq;
+ 
+ 		if (disabling) {
+@@ -3867,7 +3869,8 @@ int dev_loopback_xmit(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 	skb_reset_mac_header(skb);
+ 	__skb_pull(skb, skb_network_offset(skb));
+ 	skb->pkt_type = PACKET_LOOPBACK;
+-	skb->ip_summed = CHECKSUM_UNNECESSARY;
++	if (skb->ip_summed == CHECKSUM_NONE)
++		skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 	WARN_ON(!skb_dst(skb));
+ 	skb_dst_force(skb);
+ 	netif_rx_ni(skb);
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 7ea752af7894d..abd58dce49bbc 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -9493,6 +9493,27 @@ static u32 sk_skb_convert_ctx_access(enum bpf_access_type type,
+ 		*insn++ = BPF_LDX_MEM(BPF_SIZEOF(void *), si->dst_reg,
+ 				      si->src_reg, off);
+ 		break;
++	case offsetof(struct __sk_buff, cb[0]) ...
++	     offsetofend(struct __sk_buff, cb[4]) - 1:
++		BUILD_BUG_ON(sizeof_field(struct sk_skb_cb, data) < 20);
++		BUILD_BUG_ON((offsetof(struct sk_buff, cb) +
++			      offsetof(struct sk_skb_cb, data)) %
++			     sizeof(__u64));
++
++		prog->cb_access = 1;
++		off  = si->off;
++		off -= offsetof(struct __sk_buff, cb[0]);
++		off += offsetof(struct sk_buff, cb);
++		off += offsetof(struct sk_skb_cb, data);
++		if (type == BPF_WRITE)
++			*insn++ = BPF_STX_MEM(BPF_SIZE(si->code), si->dst_reg,
++					      si->src_reg, off);
++		else
++			*insn++ = BPF_LDX_MEM(BPF_SIZE(si->code), si->dst_reg,
++					      si->src_reg, off);
++		break;
++
++
+ 	default:
+ 		return bpf_convert_ctx_access(type, si, insn_buf, prog,
+ 					      target_size);
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index c452ebf209394..8eec7667aa761 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -380,7 +380,7 @@ EXPORT_SYMBOL(neigh_ifdown);
+ 
+ static struct neighbour *neigh_alloc(struct neigh_table *tbl,
+ 				     struct net_device *dev,
+-				     bool exempt_from_gc)
++				     u8 flags, bool exempt_from_gc)
+ {
+ 	struct neighbour *n = NULL;
+ 	unsigned long now = jiffies;
+@@ -413,6 +413,7 @@ do_alloc:
+ 	n->updated	  = n->used = now;
+ 	n->nud_state	  = NUD_NONE;
+ 	n->output	  = neigh_blackhole;
++	n->flags	  = flags;
+ 	seqlock_init(&n->hh.hh_lock);
+ 	n->parms	  = neigh_parms_clone(&tbl->parms);
+ 	timer_setup(&n->timer, neigh_timer_handler, 0);
+@@ -576,19 +577,18 @@ struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net,
+ }
+ EXPORT_SYMBOL(neigh_lookup_nodev);
+ 
+-static struct neighbour *___neigh_create(struct neigh_table *tbl,
+-					 const void *pkey,
+-					 struct net_device *dev,
+-					 bool exempt_from_gc, bool want_ref)
++static struct neighbour *
++___neigh_create(struct neigh_table *tbl, const void *pkey,
++		struct net_device *dev, u8 flags,
++		bool exempt_from_gc, bool want_ref)
+ {
+-	struct neighbour *n1, *rc, *n = neigh_alloc(tbl, dev, exempt_from_gc);
+-	u32 hash_val;
+-	unsigned int key_len = tbl->key_len;
+-	int error;
++	u32 hash_val, key_len = tbl->key_len;
++	struct neighbour *n1, *rc, *n;
+ 	struct neigh_hash_table *nht;
++	int error;
+ 
++	n = neigh_alloc(tbl, dev, flags, exempt_from_gc);
+ 	trace_neigh_create(tbl, dev, pkey, n, exempt_from_gc);
+-
+ 	if (!n) {
+ 		rc = ERR_PTR(-ENOBUFS);
+ 		goto out;
+@@ -675,7 +675,7 @@ out_neigh_release:
+ struct neighbour *__neigh_create(struct neigh_table *tbl, const void *pkey,
+ 				 struct net_device *dev, bool want_ref)
+ {
+-	return ___neigh_create(tbl, pkey, dev, false, want_ref);
++	return ___neigh_create(tbl, pkey, dev, 0, false, want_ref);
+ }
+ EXPORT_SYMBOL(__neigh_create);
+ 
+@@ -1222,7 +1222,7 @@ static void neigh_update_hhs(struct neighbour *neigh)
+ 				lladdr instead of overriding it
+ 				if it is different.
+ 	NEIGH_UPDATE_F_ADMIN	means that the change is administrative.
+-
++	NEIGH_UPDATE_F_USE	means that the entry is user triggered.
+ 	NEIGH_UPDATE_F_OVERRIDE_ISROUTER allows to override existing
+ 				NTF_ROUTER flag.
+ 	NEIGH_UPDATE_F_ISROUTER	indicates if the neighbour is known as
+@@ -1260,6 +1260,12 @@ static int __neigh_update(struct neighbour *neigh, const u8 *lladdr,
+ 		goto out;
+ 
+ 	ext_learn_change = neigh_update_ext_learned(neigh, flags, &notify);
++	if (flags & NEIGH_UPDATE_F_USE) {
++		new = old & ~NUD_PERMANENT;
++		neigh->nud_state = new;
++		err = 0;
++		goto out;
++	}
+ 
+ 	if (!(new & NUD_VALID)) {
+ 		neigh_del_timer(neigh);
+@@ -1950,7 +1956,9 @@ static int neigh_add(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 		exempt_from_gc = ndm->ndm_state & NUD_PERMANENT ||
+ 				 ndm->ndm_flags & NTF_EXT_LEARNED;
+-		neigh = ___neigh_create(tbl, dst, dev, exempt_from_gc, true);
++		neigh = ___neigh_create(tbl, dst, dev,
++					ndm->ndm_flags & NTF_EXT_LEARNED,
++					exempt_from_gc, true);
+ 		if (IS_ERR(neigh)) {
+ 			err = PTR_ERR(neigh);
+ 			goto out;
+@@ -1969,22 +1977,20 @@ static int neigh_add(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 	if (protocol)
+ 		neigh->protocol = protocol;
+-
+ 	if (ndm->ndm_flags & NTF_EXT_LEARNED)
+ 		flags |= NEIGH_UPDATE_F_EXT_LEARNED;
+-
+ 	if (ndm->ndm_flags & NTF_ROUTER)
+ 		flags |= NEIGH_UPDATE_F_ISROUTER;
++	if (ndm->ndm_flags & NTF_USE)
++		flags |= NEIGH_UPDATE_F_USE;
+ 
+-	if (ndm->ndm_flags & NTF_USE) {
++	err = __neigh_update(neigh, lladdr, ndm->ndm_state, flags,
++			     NETLINK_CB(skb).portid, extack);
++	if (!err && ndm->ndm_flags & NTF_USE) {
+ 		neigh_event_send(neigh, NULL);
+ 		err = 0;
+-	} else
+-		err = __neigh_update(neigh, lladdr, ndm->ndm_state, flags,
+-				     NETLINK_CB(skb).portid, extack);
+-
++	}
+ 	neigh_release(neigh);
+-
+ out:
+ 	return err;
+ }
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index cc5f760c78250..af59123601055 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -175,6 +175,14 @@ static int change_carrier(struct net_device *dev, unsigned long new_carrier)
+ static ssize_t carrier_store(struct device *dev, struct device_attribute *attr,
+ 			     const char *buf, size_t len)
+ {
++	struct net_device *netdev = to_net_dev(dev);
++
++	/* The check is also done in change_carrier; this helps returning early
++	 * without hitting the trylock/restart in netdev_store.
++	 */
++	if (!netdev->netdev_ops->ndo_change_carrier)
++		return -EOPNOTSUPP;
++
+ 	return netdev_store(dev, attr, buf, len, change_carrier);
+ }
+ 
+@@ -196,6 +204,12 @@ static ssize_t speed_show(struct device *dev,
+ 	struct net_device *netdev = to_net_dev(dev);
+ 	int ret = -EINVAL;
+ 
++	/* The check is also done in __ethtool_get_link_ksettings; this helps
++	 * returning early without hitting the trylock/restart below.
++	 */
++	if (!netdev->ethtool_ops->get_link_ksettings)
++		return ret;
++
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+@@ -216,6 +230,12 @@ static ssize_t duplex_show(struct device *dev,
+ 	struct net_device *netdev = to_net_dev(dev);
+ 	int ret = -EINVAL;
+ 
++	/* The check is also done in __ethtool_get_link_ksettings; this helps
++	 * returning early without hitting the trylock/restart below.
++	 */
++	if (!netdev->ethtool_ops->get_link_ksettings)
++		return ret;
++
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+@@ -468,6 +488,14 @@ static ssize_t proto_down_store(struct device *dev,
+ 				struct device_attribute *attr,
+ 				const char *buf, size_t len)
+ {
++	struct net_device *netdev = to_net_dev(dev);
++
++	/* The check is also done in change_proto_down; this helps returning
++	 * early without hitting the trylock/restart in netdev_store.
++	 */
++	if (!netdev->netdev_ops->ndo_change_proto_down)
++		return -EOPNOTSUPP;
++
+ 	return netdev_store(dev, attr, buf, len, change_proto_down);
+ }
+ NETDEVICE_SHOW_RW(proto_down, fmt_dec);
+@@ -478,6 +506,12 @@ static ssize_t phys_port_id_show(struct device *dev,
+ 	struct net_device *netdev = to_net_dev(dev);
+ 	ssize_t ret = -EINVAL;
+ 
++	/* The check is also done in dev_get_phys_port_id; this helps returning
++	 * early without hitting the trylock/restart below.
++	 */
++	if (!netdev->netdev_ops->ndo_get_phys_port_id)
++		return -EOPNOTSUPP;
++
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+@@ -500,6 +534,13 @@ static ssize_t phys_port_name_show(struct device *dev,
+ 	struct net_device *netdev = to_net_dev(dev);
+ 	ssize_t ret = -EINVAL;
+ 
++	/* The checks are also done in dev_get_phys_port_name; this helps
++	 * returning early without hitting the trylock/restart below.
++	 */
++	if (!netdev->netdev_ops->ndo_get_phys_port_name &&
++	    !netdev->netdev_ops->ndo_get_devlink_port)
++		return -EOPNOTSUPP;
++
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+@@ -522,6 +563,14 @@ static ssize_t phys_switch_id_show(struct device *dev,
+ 	struct net_device *netdev = to_net_dev(dev);
+ 	ssize_t ret = -EINVAL;
+ 
++	/* The checks are also done in dev_get_phys_port_name; this helps
++	 * returning early without hitting the trylock/restart below. This works
++	 * because recurse is false when calling dev_get_port_parent_id.
++	 */
++	if (!netdev->netdev_ops->ndo_get_port_parent_id &&
++	    !netdev->netdev_ops->ndo_get_devlink_port)
++		return -EOPNOTSUPP;
++
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+@@ -1179,6 +1228,12 @@ static ssize_t tx_maxrate_store(struct netdev_queue *queue,
+ 	if (!capable(CAP_NET_ADMIN))
+ 		return -EPERM;
+ 
++	/* The check is also done later; this helps returning early without
++	 * hitting the trylock/restart below.
++	 */
++	if (!dev->netdev_ops->ndo_set_tx_maxrate)
++		return -EOPNOTSUPP;
++
+ 	err = kstrtou32(buf, 10, &rate);
+ 	if (err < 0)
+ 		return err;
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index 5c9d95f30be60..ac852db83de9f 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -486,7 +486,9 @@ struct net *copy_net_ns(unsigned long flags,
+ 
+ 	if (rv < 0) {
+ put_userns:
++#ifdef CONFIG_KEYS
+ 		key_remove_domain(net->key_domain);
++#endif
+ 		put_user_ns(user_ns);
+ 		net_drop_ns(net);
+ dec_ucounts:
+@@ -618,7 +620,9 @@ static void cleanup_net(struct work_struct *work)
+ 	list_for_each_entry_safe(net, tmp, &net_exit_list, exit_list) {
+ 		list_del_init(&net->exit_list);
+ 		dec_net_namespaces(net->ucounts);
++#ifdef CONFIG_KEYS
+ 		key_remove_domain(net->key_domain);
++#endif
+ 		put_user_ns(net->user_ns);
+ 		net_drop_ns(net);
+ 	}
+diff --git a/net/core/stream.c b/net/core/stream.c
+index 4f1d4aa5fb38d..a166a32b411fa 100644
+--- a/net/core/stream.c
++++ b/net/core/stream.c
+@@ -195,9 +195,6 @@ void sk_stream_kill_queues(struct sock *sk)
+ 	/* First the read buffer. */
+ 	__skb_queue_purge(&sk->sk_receive_queue);
+ 
+-	/* Next, the error queue. */
+-	__skb_queue_purge(&sk->sk_error_queue);
+-
+ 	/* Next, the write queue. */
+ 	WARN_ON(!skb_queue_empty(&sk->sk_write_queue));
+ 
+diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
+index d86d8d11cfe4a..2e0a4378e778a 100644
+--- a/net/core/sysctl_net_core.c
++++ b/net/core/sysctl_net_core.c
+@@ -419,7 +419,7 @@ static struct ctl_table net_core_table[] = {
+ 		.mode		= 0600,
+ 		.proc_handler	= proc_dolongvec_minmax_bpf_restricted,
+ 		.extra1		= &long_one,
+-		.extra2		= &long_max,
++		.extra2		= &bpf_jit_limit_max,
+ 	},
+ #endif
+ 	{
+diff --git a/net/dccp/dccp.h b/net/dccp/dccp.h
+index c5c1d2b8045e8..5183e627468d8 100644
+--- a/net/dccp/dccp.h
++++ b/net/dccp/dccp.h
+@@ -48,7 +48,7 @@ extern bool dccp_debug;
+ 
+ extern struct inet_hashinfo dccp_hashinfo;
+ 
+-extern struct percpu_counter dccp_orphan_count;
++DECLARE_PER_CPU(unsigned int, dccp_orphan_count);
+ 
+ void dccp_time_wait(struct sock *sk, int state, int timeo);
+ 
+diff --git a/net/dccp/proto.c b/net/dccp/proto.c
+index 6d705d90c6149..548cf0135647d 100644
+--- a/net/dccp/proto.c
++++ b/net/dccp/proto.c
+@@ -42,8 +42,8 @@ DEFINE_SNMP_STAT(struct dccp_mib, dccp_statistics) __read_mostly;
+ 
+ EXPORT_SYMBOL_GPL(dccp_statistics);
+ 
+-struct percpu_counter dccp_orphan_count;
+-EXPORT_SYMBOL_GPL(dccp_orphan_count);
++DEFINE_PER_CPU(unsigned int, dccp_orphan_count);
++EXPORT_PER_CPU_SYMBOL_GPL(dccp_orphan_count);
+ 
+ struct inet_hashinfo dccp_hashinfo;
+ EXPORT_SYMBOL_GPL(dccp_hashinfo);
+@@ -1055,7 +1055,7 @@ adjudge_to_death:
+ 	bh_lock_sock(sk);
+ 	WARN_ON(sock_owned_by_user(sk));
+ 
+-	percpu_counter_inc(sk->sk_prot->orphan_count);
++	this_cpu_inc(dccp_orphan_count);
+ 
+ 	/* Have we already been destroyed by a softirq or backlog? */
+ 	if (state != DCCP_CLOSED && sk->sk_state == DCCP_CLOSED)
+@@ -1115,13 +1115,10 @@ static int __init dccp_init(void)
+ 
+ 	BUILD_BUG_ON(sizeof(struct dccp_skb_cb) >
+ 		     sizeof_field(struct sk_buff, cb));
+-	rc = percpu_counter_init(&dccp_orphan_count, 0, GFP_KERNEL);
+-	if (rc)
+-		goto out_fail;
+ 	inet_hashinfo_init(&dccp_hashinfo);
+ 	rc = inet_hashinfo2_init_mod(&dccp_hashinfo);
+ 	if (rc)
+-		goto out_free_percpu;
++		goto out_fail;
+ 	rc = -ENOBUFS;
+ 	dccp_hashinfo.bind_bucket_cachep =
+ 		kmem_cache_create("dccp_bind_bucket",
+@@ -1226,8 +1223,6 @@ out_free_bind_bucket_cachep:
+ 	kmem_cache_destroy(dccp_hashinfo.bind_bucket_cachep);
+ out_free_hashinfo2:
+ 	inet_hashinfo2_free_mod(&dccp_hashinfo);
+-out_free_percpu:
+-	percpu_counter_destroy(&dccp_orphan_count);
+ out_fail:
+ 	dccp_hashinfo.bhash = NULL;
+ 	dccp_hashinfo.ehash = NULL;
+@@ -1250,7 +1245,6 @@ static void __exit dccp_fini(void)
+ 	dccp_ackvec_exit();
+ 	dccp_sysctl_exit();
+ 	inet_hashinfo2_free_mod(&dccp_hashinfo);
+-	percpu_counter_destroy(&dccp_orphan_count);
+ }
+ 
+ module_init(dccp_init);
+diff --git a/net/ethtool/pause.c b/net/ethtool/pause.c
+index d4ac02718b72a..c7bc704c8862a 100644
+--- a/net/ethtool/pause.c
++++ b/net/ethtool/pause.c
+@@ -62,8 +62,7 @@ static int pause_reply_size(const struct ethnl_req_info *req_base,
+ 
+ 	if (req_base->flags & ETHTOOL_FLAG_STATS)
+ 		n += nla_total_size(0) +	/* _PAUSE_STATS */
+-			nla_total_size_64bit(sizeof(u64)) *
+-				(ETHTOOL_A_PAUSE_STAT_MAX - 2);
++		     nla_total_size_64bit(sizeof(u64)) * ETHTOOL_PAUSE_STAT_CNT;
+ 	return n;
+ }
+ 
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 1dfa561e8f981..addd595bb3fe6 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -892,7 +892,7 @@ void inet_csk_destroy_sock(struct sock *sk)
+ 
+ 	sk_refcnt_debug_release(sk);
+ 
+-	percpu_counter_dec(sk->sk_prot->orphan_count);
++	this_cpu_dec(*sk->sk_prot->orphan_count);
+ 
+ 	sock_put(sk);
+ }
+@@ -951,7 +951,7 @@ static void inet_child_forget(struct sock *sk, struct request_sock *req,
+ 
+ 	sock_orphan(child);
+ 
+-	percpu_counter_inc(sk->sk_prot->orphan_count);
++	this_cpu_inc(*sk->sk_prot->orphan_count);
+ 
+ 	if (sk->sk_protocol == IPPROTO_TCP && tcp_rsk(req)->tfo_listener) {
+ 		BUG_ON(rcu_access_pointer(tcp_sk(child)->fastopen_rsk) != req);
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index f3fd5c911ed09..e093847c334da 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -598,7 +598,7 @@ bool inet_ehash_nolisten(struct sock *sk, struct sock *osk, bool *found_dup_sk)
+ 	if (ok) {
+ 		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+ 	} else {
+-		percpu_counter_inc(sk->sk_prot->orphan_count);
++		this_cpu_inc(*sk->sk_prot->orphan_count);
+ 		inet_sk_set_state(sk, TCP_CLOSE);
+ 		sock_set_flag(sk, SOCK_DEAD);
+ 		inet_csk_destroy_sock(sk);
+diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
+index 8d5e1695b9aa8..80d13d8f982dc 100644
+--- a/net/ipv4/proc.c
++++ b/net/ipv4/proc.c
+@@ -53,7 +53,7 @@ static int sockstat_seq_show(struct seq_file *seq, void *v)
+ 	struct net *net = seq->private;
+ 	int orphans, sockets;
+ 
+-	orphans = percpu_counter_sum_positive(&tcp_orphan_count);
++	orphans = tcp_orphan_count_sum();
+ 	sockets = proto_sockets_allocated_sum_positive(&tcp_prot);
+ 
+ 	socket_seq_show(seq);
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 54230852e5f95..e8aca226c4ae3 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -280,8 +280,8 @@
+ #include <asm/ioctls.h>
+ #include <net/busy_poll.h>
+ 
+-struct percpu_counter tcp_orphan_count;
+-EXPORT_SYMBOL_GPL(tcp_orphan_count);
++DEFINE_PER_CPU(unsigned int, tcp_orphan_count);
++EXPORT_PER_CPU_SYMBOL_GPL(tcp_orphan_count);
+ 
+ long sysctl_tcp_mem[3] __read_mostly;
+ EXPORT_SYMBOL(sysctl_tcp_mem);
+@@ -956,7 +956,7 @@ int tcp_send_mss(struct sock *sk, int *size_goal, int flags)
+  */
+ static void tcp_remove_empty_skb(struct sock *sk, struct sk_buff *skb)
+ {
+-	if (skb && !skb->len) {
++	if (skb && TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq) {
+ 		tcp_unlink_write_queue(skb, sk);
+ 		if (tcp_write_queue_empty(sk))
+ 			tcp_chrono_stop(sk, TCP_CHRONO_BUSY);
+@@ -2394,11 +2394,36 @@ void tcp_shutdown(struct sock *sk, int how)
+ }
+ EXPORT_SYMBOL(tcp_shutdown);
+ 
++int tcp_orphan_count_sum(void)
++{
++	int i, total = 0;
++
++	for_each_possible_cpu(i)
++		total += per_cpu(tcp_orphan_count, i);
++
++	return max(total, 0);
++}
++
++static int tcp_orphan_cache;
++static struct timer_list tcp_orphan_timer;
++#define TCP_ORPHAN_TIMER_PERIOD msecs_to_jiffies(100)
++
++static void tcp_orphan_update(struct timer_list *unused)
++{
++	WRITE_ONCE(tcp_orphan_cache, tcp_orphan_count_sum());
++	mod_timer(&tcp_orphan_timer, jiffies + TCP_ORPHAN_TIMER_PERIOD);
++}
++
++static bool tcp_too_many_orphans(int shift)
++{
++	return READ_ONCE(tcp_orphan_cache) << shift > sysctl_tcp_max_orphans;
++}
++
+ bool tcp_check_oom(struct sock *sk, int shift)
+ {
+ 	bool too_many_orphans, out_of_socket_memory;
+ 
+-	too_many_orphans = tcp_too_many_orphans(sk, shift);
++	too_many_orphans = tcp_too_many_orphans(shift);
+ 	out_of_socket_memory = tcp_out_of_memory(sk);
+ 
+ 	if (too_many_orphans)
+@@ -2508,7 +2533,7 @@ adjudge_to_death:
+ 	/* remove backlog if any, without releasing ownership. */
+ 	__release_sock(sk);
+ 
+-	percpu_counter_inc(sk->sk_prot->orphan_count);
++	this_cpu_inc(tcp_orphan_count);
+ 
+ 	/* Have we already been destroyed by a softirq or backlog? */
+ 	if (state != TCP_CLOSE && sk->sk_state == TCP_CLOSE)
+@@ -4145,7 +4170,10 @@ void __init tcp_init(void)
+ 		     sizeof_field(struct sk_buff, cb));
+ 
+ 	percpu_counter_init(&tcp_sockets_allocated, 0, GFP_KERNEL);
+-	percpu_counter_init(&tcp_orphan_count, 0, GFP_KERNEL);
++
++	timer_setup(&tcp_orphan_timer, tcp_orphan_update, TIMER_DEFERRABLE);
++	mod_timer(&tcp_orphan_timer, jiffies + TCP_ORPHAN_TIMER_PERIOD);
++
+ 	inet_hashinfo_init(&tcp_hashinfo);
+ 	inet_hashinfo2_init(&tcp_hashinfo, "tcp_listen_portaddr_hash",
+ 			    thash_entries, 21,  /* one slot per 2 MB*/
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 9194070c67250..6b745ce4108c8 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -573,7 +573,6 @@ static void tcp_bpf_rebuild_protos(struct proto prot[TCP_BPF_NUM_CFGS],
+ 				   struct proto *base)
+ {
+ 	prot[TCP_BPF_BASE]			= *base;
+-	prot[TCP_BPF_BASE].unhash		= sock_map_unhash;
+ 	prot[TCP_BPF_BASE].close		= sock_map_close;
+ 	prot[TCP_BPF_BASE].recvmsg		= tcp_bpf_recvmsg;
+ 	prot[TCP_BPF_BASE].stream_memory_read	= tcp_bpf_stream_read;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 884d430e23cb3..29526937077b3 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3097,6 +3097,9 @@ static void sit_add_v4_addrs(struct inet6_dev *idev)
+ 	memcpy(&addr.s6_addr32[3], idev->dev->dev_addr, 4);
+ 
+ 	if (idev->dev->flags&IFF_POINTOPOINT) {
++		if (idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_NONE)
++			return;
++
+ 		addr.s6_addr32[0] = htonl(0xfe800000);
+ 		scope = IFA_LINK;
+ 		plen = 64;
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index bae6b51a9bd46..8a1863146f34c 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1420,7 +1420,6 @@ do_udp_sendmsg:
+ 	if (!fl6.flowi6_oif)
+ 		fl6.flowi6_oif = np->sticky_pktinfo.ipi6_ifindex;
+ 
+-	fl6.flowi6_mark = ipc6.sockc.mark;
+ 	fl6.flowi6_uid = sk->sk_uid;
+ 
+ 	if (msg->msg_controllen) {
+@@ -1456,6 +1455,7 @@ do_udp_sendmsg:
+ 	ipc6.opt = opt;
+ 
+ 	fl6.flowi6_proto = sk->sk_protocol;
++	fl6.flowi6_mark = ipc6.sockc.mark;
+ 	fl6.daddr = *daddr;
+ 	if (ipv6_addr_any(&fl6.saddr) && !ipv6_addr_any(&np->saddr))
+ 		fl6.saddr = np->saddr;
+diff --git a/net/netfilter/nf_conntrack_proto_udp.c b/net/netfilter/nf_conntrack_proto_udp.c
+index af402f458ee02..0528e9c26cebd 100644
+--- a/net/netfilter/nf_conntrack_proto_udp.c
++++ b/net/netfilter/nf_conntrack_proto_udp.c
+@@ -105,10 +105,13 @@ int nf_conntrack_udp_packet(struct nf_conn *ct,
+ 	 */
+ 	if (test_bit(IPS_SEEN_REPLY_BIT, &ct->status)) {
+ 		unsigned long extra = timeouts[UDP_CT_UNREPLIED];
++		bool stream = false;
+ 
+ 		/* Still active after two seconds? Extend timeout. */
+-		if (time_after(jiffies, ct->proto.udp.stream_ts))
++		if (time_after(jiffies, ct->proto.udp.stream_ts)) {
+ 			extra = timeouts[UDP_CT_REPLIED];
++			stream = true;
++		}
+ 
+ 		nf_ct_refresh_acct(ct, ctinfo, skb, extra);
+ 
+@@ -117,7 +120,7 @@ int nf_conntrack_udp_packet(struct nf_conn *ct,
+ 			return NF_ACCEPT;
+ 
+ 		/* Also, more likely to be important, and not a probe */
+-		if (!test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
++		if (stream && !test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
+ 			nf_conntrack_event_cache(IPCT_ASSURED, ct);
+ 	} else {
+ 		nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[UDP_CT_UNREPLIED]);
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index d1d8bca03b4f0..98994fe677fe9 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -562,7 +562,7 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue,
+ 		goto nla_put_failure;
+ 
+ 	if (indev && entskb->dev &&
+-	    entskb->mac_header != entskb->network_header) {
++	    skb_mac_header_was_set(entskb)) {
+ 		struct nfqnl_msg_packet_hw phw;
+ 		int len;
+ 
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 5c84a968dae29..58904bee1a0df 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -141,17 +141,8 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ 		return -EBUSY;
+ 
+ 	priv->op = ntohl(nla_get_be32(tb[NFTA_DYNSET_OP]));
+-	switch (priv->op) {
+-	case NFT_DYNSET_OP_ADD:
+-	case NFT_DYNSET_OP_DELETE:
+-		break;
+-	case NFT_DYNSET_OP_UPDATE:
+-		if (!(set->flags & NFT_SET_TIMEOUT))
+-			return -EOPNOTSUPP;
+-		break;
+-	default:
++	if (priv->op > NFT_DYNSET_OP_DELETE)
+ 		return -EOPNOTSUPP;
+-	}
+ 
+ 	timeout = 0;
+ 	if (tb[NFTA_DYNSET_TIMEOUT] != NULL) {
+diff --git a/net/rds/ib.c b/net/rds/ib.c
+index deecbdcdae84e..24c9a9005a6fb 100644
+--- a/net/rds/ib.c
++++ b/net/rds/ib.c
+@@ -30,7 +30,6 @@
+  * SOFTWARE.
+  *
+  */
+-#include <linux/dmapool.h>
+ #include <linux/kernel.h>
+ #include <linux/in.h>
+ #include <linux/if.h>
+@@ -108,7 +107,6 @@ static void rds_ib_dev_free(struct work_struct *work)
+ 		rds_ib_destroy_mr_pool(rds_ibdev->mr_1m_pool);
+ 	if (rds_ibdev->pd)
+ 		ib_dealloc_pd(rds_ibdev->pd);
+-	dma_pool_destroy(rds_ibdev->rid_hdrs_pool);
+ 
+ 	list_for_each_entry_safe(i_ipaddr, i_next, &rds_ibdev->ipaddr_list, list) {
+ 		list_del(&i_ipaddr->list);
+@@ -191,14 +189,6 @@ static int rds_ib_add_one(struct ib_device *device)
+ 		rds_ibdev->pd = NULL;
+ 		goto put_dev;
+ 	}
+-	rds_ibdev->rid_hdrs_pool = dma_pool_create(device->name,
+-						   device->dma_device,
+-						   sizeof(struct rds_header),
+-						   L1_CACHE_BYTES, 0);
+-	if (!rds_ibdev->rid_hdrs_pool) {
+-		ret = -ENOMEM;
+-		goto put_dev;
+-	}
+ 
+ 	rds_ibdev->mr_1m_pool =
+ 		rds_ib_create_mr_pool(rds_ibdev, RDS_IB_MR_1M_POOL);
+diff --git a/net/rds/ib.h b/net/rds/ib.h
+index c23a11d9ad362..2ba71102b1f1f 100644
+--- a/net/rds/ib.h
++++ b/net/rds/ib.h
+@@ -246,7 +246,6 @@ struct rds_ib_device {
+ 	struct list_head	conn_list;
+ 	struct ib_device	*dev;
+ 	struct ib_pd		*pd;
+-	struct dma_pool		*rid_hdrs_pool; /* RDS headers DMA pool */
+ 	u8			odp_capable:1;
+ 
+ 	unsigned int		max_mrs;
+@@ -380,11 +379,6 @@ int rds_ib_cm_handle_connect(struct rdma_cm_id *cm_id,
+ int rds_ib_cm_initiate_connect(struct rdma_cm_id *cm_id, bool isv6);
+ void rds_ib_cm_connect_complete(struct rds_connection *conn,
+ 				struct rdma_cm_event *event);
+-struct rds_header **rds_dma_hdrs_alloc(struct ib_device *ibdev,
+-				       struct dma_pool *pool,
+-				       dma_addr_t **dma_addrs, u32 num_hdrs);
+-void rds_dma_hdrs_free(struct dma_pool *pool, struct rds_header **hdrs,
+-		       dma_addr_t *dma_addrs, u32 num_hdrs);
+ 
+ #define rds_ib_conn_error(conn, fmt...) \
+ 	__rds_ib_conn_error(conn, KERN_WARNING "RDS/IB: " fmt)
+diff --git a/net/rds/ib_cm.c b/net/rds/ib_cm.c
+index b36b60668b1da..f5cbe963cd8f7 100644
+--- a/net/rds/ib_cm.c
++++ b/net/rds/ib_cm.c
+@@ -30,7 +30,6 @@
+  * SOFTWARE.
+  *
+  */
+-#include <linux/dmapool.h>
+ #include <linux/kernel.h>
+ #include <linux/in.h>
+ #include <linux/slab.h>
+@@ -441,42 +440,87 @@ static inline void ibdev_put_vector(struct rds_ib_device *rds_ibdev, int index)
+ 	rds_ibdev->vector_load[index]--;
+ }
+ 
++static void rds_dma_hdr_free(struct ib_device *dev, struct rds_header *hdr,
++		dma_addr_t dma_addr, enum dma_data_direction dir)
++{
++	ib_dma_unmap_single(dev, dma_addr, sizeof(*hdr), dir);
++	kfree(hdr);
++}
++
++static struct rds_header *rds_dma_hdr_alloc(struct ib_device *dev,
++		dma_addr_t *dma_addr, enum dma_data_direction dir)
++{
++	struct rds_header *hdr;
++
++	hdr = kzalloc_node(sizeof(*hdr), GFP_KERNEL, ibdev_to_node(dev));
++	if (!hdr)
++		return NULL;
++
++	*dma_addr = ib_dma_map_single(dev, hdr, sizeof(*hdr),
++				      DMA_BIDIRECTIONAL);
++	if (ib_dma_mapping_error(dev, *dma_addr)) {
++		kfree(hdr);
++		return NULL;
++	}
++
++	return hdr;
++}
++
++/* Free the DMA memory used to store struct rds_header.
++ *
++ * @dev: the RDS IB device
++ * @hdrs: pointer to the array storing DMA memory pointers
++ * @dma_addrs: pointer to the array storing DMA addresses
++ * @num_hdars: number of headers to free.
++ */
++static void rds_dma_hdrs_free(struct rds_ib_device *dev,
++		struct rds_header **hdrs, dma_addr_t *dma_addrs, u32 num_hdrs,
++		enum dma_data_direction dir)
++{
++	u32 i;
++
++	for (i = 0; i < num_hdrs; i++)
++		rds_dma_hdr_free(dev->dev, hdrs[i], dma_addrs[i], dir);
++	kvfree(hdrs);
++	kvfree(dma_addrs);
++}
++
++
+ /* Allocate DMA coherent memory to be used to store struct rds_header for
+  * sending/receiving packets.  The pointers to the DMA memory and the
+  * associated DMA addresses are stored in two arrays.
+  *
+- * @ibdev: the IB device
+- * @pool: the DMA memory pool
++ * @dev: the RDS IB device
+  * @dma_addrs: pointer to the array for storing DMA addresses
+  * @num_hdrs: number of headers to allocate
+  *
+  * It returns the pointer to the array storing the DMA memory pointers.  On
+  * error, NULL pointer is returned.
+  */
+-struct rds_header **rds_dma_hdrs_alloc(struct ib_device *ibdev,
+-				       struct dma_pool *pool,
+-				       dma_addr_t **dma_addrs, u32 num_hdrs)
++static struct rds_header **rds_dma_hdrs_alloc(struct rds_ib_device *dev,
++		dma_addr_t **dma_addrs, u32 num_hdrs,
++		enum dma_data_direction dir)
+ {
+ 	struct rds_header **hdrs;
+ 	dma_addr_t *hdr_daddrs;
+ 	u32 i;
+ 
+ 	hdrs = kvmalloc_node(sizeof(*hdrs) * num_hdrs, GFP_KERNEL,
+-			     ibdev_to_node(ibdev));
++			     ibdev_to_node(dev->dev));
+ 	if (!hdrs)
+ 		return NULL;
+ 
+ 	hdr_daddrs = kvmalloc_node(sizeof(*hdr_daddrs) * num_hdrs, GFP_KERNEL,
+-				   ibdev_to_node(ibdev));
++				   ibdev_to_node(dev->dev));
+ 	if (!hdr_daddrs) {
+ 		kvfree(hdrs);
+ 		return NULL;
+ 	}
+ 
+ 	for (i = 0; i < num_hdrs; i++) {
+-		hdrs[i] = dma_pool_zalloc(pool, GFP_KERNEL, &hdr_daddrs[i]);
++		hdrs[i] = rds_dma_hdr_alloc(dev->dev, &hdr_daddrs[i], dir);
+ 		if (!hdrs[i]) {
+-			rds_dma_hdrs_free(pool, hdrs, hdr_daddrs, i);
++			rds_dma_hdrs_free(dev, hdrs, hdr_daddrs, i, dir);
+ 			return NULL;
+ 		}
+ 	}
+@@ -485,24 +529,6 @@ struct rds_header **rds_dma_hdrs_alloc(struct ib_device *ibdev,
+ 	return hdrs;
+ }
+ 
+-/* Free the DMA memory used to store struct rds_header.
+- *
+- * @pool: the DMA memory pool
+- * @hdrs: pointer to the array storing DMA memory pointers
+- * @dma_addrs: pointer to the array storing DMA addresses
+- * @num_hdars: number of headers to free.
+- */
+-void rds_dma_hdrs_free(struct dma_pool *pool, struct rds_header **hdrs,
+-		       dma_addr_t *dma_addrs, u32 num_hdrs)
+-{
+-	u32 i;
+-
+-	for (i = 0; i < num_hdrs; i++)
+-		dma_pool_free(pool, hdrs[i], dma_addrs[i]);
+-	kvfree(hdrs);
+-	kvfree(dma_addrs);
+-}
+-
+ /*
+  * This needs to be very careful to not leave IS_ERR pointers around for
+  * cleanup to trip over.
+@@ -516,7 +542,6 @@ static int rds_ib_setup_qp(struct rds_connection *conn)
+ 	struct rds_ib_device *rds_ibdev;
+ 	unsigned long max_wrs;
+ 	int ret, fr_queue_space;
+-	struct dma_pool *pool;
+ 
+ 	/*
+ 	 * It's normal to see a null device if an incoming connection races
+@@ -612,25 +637,26 @@ static int rds_ib_setup_qp(struct rds_connection *conn)
+ 		goto recv_cq_out;
+ 	}
+ 
+-	pool = rds_ibdev->rid_hdrs_pool;
+-	ic->i_send_hdrs = rds_dma_hdrs_alloc(dev, pool, &ic->i_send_hdrs_dma,
+-					     ic->i_send_ring.w_nr);
++	ic->i_send_hdrs = rds_dma_hdrs_alloc(rds_ibdev, &ic->i_send_hdrs_dma,
++					     ic->i_send_ring.w_nr,
++					     DMA_TO_DEVICE);
+ 	if (!ic->i_send_hdrs) {
+ 		ret = -ENOMEM;
+ 		rdsdebug("DMA send hdrs alloc failed\n");
+ 		goto qp_out;
+ 	}
+ 
+-	ic->i_recv_hdrs = rds_dma_hdrs_alloc(dev, pool, &ic->i_recv_hdrs_dma,
+-					     ic->i_recv_ring.w_nr);
++	ic->i_recv_hdrs = rds_dma_hdrs_alloc(rds_ibdev, &ic->i_recv_hdrs_dma,
++					     ic->i_recv_ring.w_nr,
++					     DMA_FROM_DEVICE);
+ 	if (!ic->i_recv_hdrs) {
+ 		ret = -ENOMEM;
+ 		rdsdebug("DMA recv hdrs alloc failed\n");
+ 		goto send_hdrs_dma_out;
+ 	}
+ 
+-	ic->i_ack = dma_pool_zalloc(pool, GFP_KERNEL,
+-				    &ic->i_ack_dma);
++	ic->i_ack = rds_dma_hdr_alloc(rds_ibdev->dev, &ic->i_ack_dma,
++				      DMA_TO_DEVICE);
+ 	if (!ic->i_ack) {
+ 		ret = -ENOMEM;
+ 		rdsdebug("DMA ack header alloc failed\n");
+@@ -666,18 +692,19 @@ sends_out:
+ 	vfree(ic->i_sends);
+ 
+ ack_dma_out:
+-	dma_pool_free(pool, ic->i_ack, ic->i_ack_dma);
++	rds_dma_hdr_free(rds_ibdev->dev, ic->i_ack, ic->i_ack_dma,
++			 DMA_TO_DEVICE);
+ 	ic->i_ack = NULL;
+ 
+ recv_hdrs_dma_out:
+-	rds_dma_hdrs_free(pool, ic->i_recv_hdrs, ic->i_recv_hdrs_dma,
+-			  ic->i_recv_ring.w_nr);
++	rds_dma_hdrs_free(rds_ibdev, ic->i_recv_hdrs, ic->i_recv_hdrs_dma,
++			  ic->i_recv_ring.w_nr, DMA_FROM_DEVICE);
+ 	ic->i_recv_hdrs = NULL;
+ 	ic->i_recv_hdrs_dma = NULL;
+ 
+ send_hdrs_dma_out:
+-	rds_dma_hdrs_free(pool, ic->i_send_hdrs, ic->i_send_hdrs_dma,
+-			  ic->i_send_ring.w_nr);
++	rds_dma_hdrs_free(rds_ibdev, ic->i_send_hdrs, ic->i_send_hdrs_dma,
++			  ic->i_send_ring.w_nr, DMA_TO_DEVICE);
+ 	ic->i_send_hdrs = NULL;
+ 	ic->i_send_hdrs_dma = NULL;
+ 
+@@ -1110,29 +1137,30 @@ void rds_ib_conn_path_shutdown(struct rds_conn_path *cp)
+ 		}
+ 
+ 		if (ic->rds_ibdev) {
+-			struct dma_pool *pool;
+-
+-			pool = ic->rds_ibdev->rid_hdrs_pool;
+-
+ 			/* then free the resources that ib callbacks use */
+ 			if (ic->i_send_hdrs) {
+-				rds_dma_hdrs_free(pool, ic->i_send_hdrs,
++				rds_dma_hdrs_free(ic->rds_ibdev,
++						  ic->i_send_hdrs,
+ 						  ic->i_send_hdrs_dma,
+-						  ic->i_send_ring.w_nr);
++						  ic->i_send_ring.w_nr,
++						  DMA_TO_DEVICE);
+ 				ic->i_send_hdrs = NULL;
+ 				ic->i_send_hdrs_dma = NULL;
+ 			}
+ 
+ 			if (ic->i_recv_hdrs) {
+-				rds_dma_hdrs_free(pool, ic->i_recv_hdrs,
++				rds_dma_hdrs_free(ic->rds_ibdev,
++						  ic->i_recv_hdrs,
+ 						  ic->i_recv_hdrs_dma,
+-						  ic->i_recv_ring.w_nr);
++						  ic->i_recv_ring.w_nr,
++						  DMA_FROM_DEVICE);
+ 				ic->i_recv_hdrs = NULL;
+ 				ic->i_recv_hdrs_dma = NULL;
+ 			}
+ 
+ 			if (ic->i_ack) {
+-				dma_pool_free(pool, ic->i_ack, ic->i_ack_dma);
++				rds_dma_hdr_free(ic->rds_ibdev->dev, ic->i_ack,
++						 ic->i_ack_dma, DMA_TO_DEVICE);
+ 				ic->i_ack = NULL;
+ 			}
+ 		} else {
+diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
+index 3cffcec5fb371..6fdedd9dbbc28 100644
+--- a/net/rds/ib_recv.c
++++ b/net/rds/ib_recv.c
+@@ -662,10 +662,16 @@ static void rds_ib_send_ack(struct rds_ib_connection *ic, unsigned int adv_credi
+ 	seq = rds_ib_get_ack(ic);
+ 
+ 	rdsdebug("send_ack: ic %p ack %llu\n", ic, (unsigned long long) seq);
++
++	ib_dma_sync_single_for_cpu(ic->rds_ibdev->dev, ic->i_ack_dma,
++				   sizeof(*hdr), DMA_TO_DEVICE);
+ 	rds_message_populate_header(hdr, 0, 0, 0);
+ 	hdr->h_ack = cpu_to_be64(seq);
+ 	hdr->h_credit = adv_credits;
+ 	rds_message_make_checksum(hdr);
++	ib_dma_sync_single_for_device(ic->rds_ibdev->dev, ic->i_ack_dma,
++				      sizeof(*hdr), DMA_TO_DEVICE);
++
+ 	ic->i_ack_queued = jiffies;
+ 
+ 	ret = ib_post_send(ic->i_cm_id->qp, &ic->i_ack_wr, NULL);
+@@ -845,6 +851,7 @@ static void rds_ib_process_recv(struct rds_connection *conn,
+ 	struct rds_ib_connection *ic = conn->c_transport_data;
+ 	struct rds_ib_incoming *ibinc = ic->i_ibinc;
+ 	struct rds_header *ihdr, *hdr;
++	dma_addr_t dma_addr = ic->i_recv_hdrs_dma[recv - ic->i_recvs];
+ 
+ 	/* XXX shut down the connection if port 0,0 are seen? */
+ 
+@@ -863,6 +870,8 @@ static void rds_ib_process_recv(struct rds_connection *conn,
+ 
+ 	ihdr = ic->i_recv_hdrs[recv - ic->i_recvs];
+ 
++	ib_dma_sync_single_for_cpu(ic->rds_ibdev->dev, dma_addr,
++				   sizeof(*ihdr), DMA_FROM_DEVICE);
+ 	/* Validate the checksum. */
+ 	if (!rds_message_verify_checksum(ihdr)) {
+ 		rds_ib_conn_error(conn, "incoming message "
+@@ -870,7 +879,7 @@ static void rds_ib_process_recv(struct rds_connection *conn,
+ 		       "forcing a reconnect\n",
+ 		       &conn->c_faddr);
+ 		rds_stats_inc(s_recv_drop_bad_checksum);
+-		return;
++		goto done;
+ 	}
+ 
+ 	/* Process the ACK sequence which comes with every packet */
+@@ -899,7 +908,7 @@ static void rds_ib_process_recv(struct rds_connection *conn,
+ 		 */
+ 		rds_ib_frag_free(ic, recv->r_frag);
+ 		recv->r_frag = NULL;
+-		return;
++		goto done;
+ 	}
+ 
+ 	/*
+@@ -933,7 +942,7 @@ static void rds_ib_process_recv(struct rds_connection *conn,
+ 		    hdr->h_dport != ihdr->h_dport) {
+ 			rds_ib_conn_error(conn,
+ 				"fragment header mismatch; forcing reconnect\n");
+-			return;
++			goto done;
+ 		}
+ 	}
+ 
+@@ -965,6 +974,9 @@ static void rds_ib_process_recv(struct rds_connection *conn,
+ 
+ 		rds_inc_put(&ibinc->ii_inc);
+ 	}
++done:
++	ib_dma_sync_single_for_device(ic->rds_ibdev->dev, dma_addr,
++				      sizeof(*ihdr), DMA_FROM_DEVICE);
+ }
+ 
+ void rds_ib_recv_cqe_handler(struct rds_ib_connection *ic,
+diff --git a/net/rds/ib_send.c b/net/rds/ib_send.c
+index dfe778220657a..92b4a8689aae7 100644
+--- a/net/rds/ib_send.c
++++ b/net/rds/ib_send.c
+@@ -638,6 +638,10 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
+ 		send->s_sge[0].length = sizeof(struct rds_header);
+ 		send->s_sge[0].lkey = ic->i_pd->local_dma_lkey;
+ 
++		ib_dma_sync_single_for_cpu(ic->rds_ibdev->dev,
++					   ic->i_send_hdrs_dma[pos],
++					   sizeof(struct rds_header),
++					   DMA_TO_DEVICE);
+ 		memcpy(ic->i_send_hdrs[pos], &rm->m_inc.i_hdr,
+ 		       sizeof(struct rds_header));
+ 
+@@ -688,6 +692,10 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
+ 			adv_credits = 0;
+ 			rds_ib_stats_inc(s_ib_tx_credit_updates);
+ 		}
++		ib_dma_sync_single_for_device(ic->rds_ibdev->dev,
++					      ic->i_send_hdrs_dma[pos],
++					      sizeof(struct rds_header),
++					      DMA_TO_DEVICE);
+ 
+ 		if (prev)
+ 			prev->s_wr.next = &send->s_wr;
+diff --git a/net/rxrpc/rtt.c b/net/rxrpc/rtt.c
+index 4e565eeab4260..be61d6f5be8d1 100644
+--- a/net/rxrpc/rtt.c
++++ b/net/rxrpc/rtt.c
+@@ -22,7 +22,7 @@ static u32 rxrpc_rto_min_us(struct rxrpc_peer *peer)
+ 
+ static u32 __rxrpc_set_rto(const struct rxrpc_peer *peer)
+ {
+-	return _usecs_to_jiffies((peer->srtt_us >> 3) + peer->rttvar_us);
++	return usecs_to_jiffies((peer->srtt_us >> 3) + peer->rttvar_us);
+ }
+ 
+ static u32 rxrpc_bound_rto(u32 rto)
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 05aa2571a4095..6a9c1a39874a0 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -1303,6 +1303,15 @@ static int qdisc_change_tx_queue_len(struct net_device *dev,
+ 	return 0;
+ }
+ 
++void dev_qdisc_change_real_num_tx(struct net_device *dev,
++				  unsigned int new_real_tx)
++{
++	struct Qdisc *qdisc = dev->qdisc;
++
++	if (qdisc->ops->change_real_num_tx)
++		qdisc->ops->change_real_num_tx(qdisc, new_real_tx);
++}
++
+ int dev_qdisc_change_tx_queue_len(struct net_device *dev)
+ {
+ 	bool up = dev->flags & IFF_UP;
+diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c
+index e79f1afe0cfd6..db18d8a860f9c 100644
+--- a/net/sched/sch_mq.c
++++ b/net/sched/sch_mq.c
+@@ -125,6 +125,29 @@ static void mq_attach(struct Qdisc *sch)
+ 	priv->qdiscs = NULL;
+ }
+ 
++static void mq_change_real_num_tx(struct Qdisc *sch, unsigned int new_real_tx)
++{
++#ifdef CONFIG_NET_SCHED
++	struct net_device *dev = qdisc_dev(sch);
++	struct Qdisc *qdisc;
++	unsigned int i;
++
++	for (i = new_real_tx; i < dev->real_num_tx_queues; i++) {
++		qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping;
++		/* Only update the default qdiscs we created,
++		 * qdiscs with handles are always hashed.
++		 */
++		if (qdisc != &noop_qdisc && !qdisc->handle)
++			qdisc_hash_del(qdisc);
++	}
++	for (i = dev->real_num_tx_queues; i < new_real_tx; i++) {
++		qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping;
++		if (qdisc != &noop_qdisc && !qdisc->handle)
++			qdisc_hash_add(qdisc, false);
++	}
++#endif
++}
++
+ static int mq_dump(struct Qdisc *sch, struct sk_buff *skb)
+ {
+ 	struct net_device *dev = qdisc_dev(sch);
+@@ -288,6 +311,7 @@ struct Qdisc_ops mq_qdisc_ops __read_mostly = {
+ 	.init		= mq_init,
+ 	.destroy	= mq_destroy,
+ 	.attach		= mq_attach,
++	.change_real_num_tx = mq_change_real_num_tx,
+ 	.dump		= mq_dump,
+ 	.owner		= THIS_MODULE,
+ };
+diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
+index 5eb3b1b7ae5e7..50e15add6068f 100644
+--- a/net/sched/sch_mqprio.c
++++ b/net/sched/sch_mqprio.c
+@@ -306,6 +306,28 @@ static void mqprio_attach(struct Qdisc *sch)
+ 	priv->qdiscs = NULL;
+ }
+ 
++static void mqprio_change_real_num_tx(struct Qdisc *sch,
++				      unsigned int new_real_tx)
++{
++	struct net_device *dev = qdisc_dev(sch);
++	struct Qdisc *qdisc;
++	unsigned int i;
++
++	for (i = new_real_tx; i < dev->real_num_tx_queues; i++) {
++		qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping;
++		/* Only update the default qdiscs we created,
++		 * qdiscs with handles are always hashed.
++		 */
++		if (qdisc != &noop_qdisc && !qdisc->handle)
++			qdisc_hash_del(qdisc);
++	}
++	for (i = dev->real_num_tx_queues; i < new_real_tx; i++) {
++		qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping;
++		if (qdisc != &noop_qdisc && !qdisc->handle)
++			qdisc_hash_add(qdisc, false);
++	}
++}
++
+ static struct netdev_queue *mqprio_queue_get(struct Qdisc *sch,
+ 					     unsigned long cl)
+ {
+@@ -629,6 +651,7 @@ static struct Qdisc_ops mqprio_qdisc_ops __read_mostly = {
+ 	.init		= mqprio_init,
+ 	.destroy	= mqprio_destroy,
+ 	.attach		= mqprio_attach,
++	.change_real_num_tx = mqprio_change_real_num_tx,
+ 	.dump		= mqprio_dump,
+ 	.owner		= THIS_MODULE,
+ };
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 93899559ba6d2..806babdd838d2 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -94,18 +94,22 @@ static ktime_t sched_base_time(const struct sched_gate_list *sched)
+ 	return ns_to_ktime(sched->base_time);
+ }
+ 
+-static ktime_t taprio_get_time(struct taprio_sched *q)
++static ktime_t taprio_mono_to_any(const struct taprio_sched *q, ktime_t mono)
+ {
+-	ktime_t mono = ktime_get();
++	/* This pairs with WRITE_ONCE() in taprio_parse_clockid() */
++	enum tk_offsets tk_offset = READ_ONCE(q->tk_offset);
+ 
+-	switch (q->tk_offset) {
++	switch (tk_offset) {
+ 	case TK_OFFS_MAX:
+ 		return mono;
+ 	default:
+-		return ktime_mono_to_any(mono, q->tk_offset);
++		return ktime_mono_to_any(mono, tk_offset);
+ 	}
++}
+ 
+-	return KTIME_MAX;
++static ktime_t taprio_get_time(const struct taprio_sched *q)
++{
++	return taprio_mono_to_any(q, ktime_get());
+ }
+ 
+ static void taprio_free_sched_cb(struct rcu_head *head)
+@@ -321,7 +325,7 @@ static ktime_t get_tcp_tstamp(struct taprio_sched *q, struct sk_buff *skb)
+ 		return 0;
+ 	}
+ 
+-	return ktime_mono_to_any(skb->skb_mstamp_ns, q->tk_offset);
++	return taprio_mono_to_any(q, skb->skb_mstamp_ns);
+ }
+ 
+ /* There are a few scenarios where we will have to modify the txtime from
+@@ -1341,6 +1345,7 @@ static int taprio_parse_clockid(struct Qdisc *sch, struct nlattr **tb,
+ 		}
+ 	} else if (tb[TCA_TAPRIO_ATTR_SCHED_CLOCKID]) {
+ 		int clockid = nla_get_s32(tb[TCA_TAPRIO_ATTR_SCHED_CLOCKID]);
++		enum tk_offsets tk_offset;
+ 
+ 		/* We only support static clockids and we don't allow
+ 		 * for it to be modified after the first init.
+@@ -1355,22 +1360,24 @@ static int taprio_parse_clockid(struct Qdisc *sch, struct nlattr **tb,
+ 
+ 		switch (clockid) {
+ 		case CLOCK_REALTIME:
+-			q->tk_offset = TK_OFFS_REAL;
++			tk_offset = TK_OFFS_REAL;
+ 			break;
+ 		case CLOCK_MONOTONIC:
+-			q->tk_offset = TK_OFFS_MAX;
++			tk_offset = TK_OFFS_MAX;
+ 			break;
+ 		case CLOCK_BOOTTIME:
+-			q->tk_offset = TK_OFFS_BOOT;
++			tk_offset = TK_OFFS_BOOT;
+ 			break;
+ 		case CLOCK_TAI:
+-			q->tk_offset = TK_OFFS_TAI;
++			tk_offset = TK_OFFS_TAI;
+ 			break;
+ 		default:
+ 			NL_SET_ERR_MSG(extack, "Invalid 'clockid'");
+ 			err = -EINVAL;
+ 			goto out;
+ 		}
++		/* This pairs with READ_ONCE() in taprio_mono_to_any */
++		WRITE_ONCE(q->tk_offset, tk_offset);
+ 
+ 		q->clockid = clockid;
+ 	} else {
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 030d7f30b13fe..cfb5b9be0569d 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -146,14 +146,18 @@ static int __smc_release(struct smc_sock *smc)
+ 		sock_set_flag(sk, SOCK_DEAD);
+ 		sk->sk_shutdown |= SHUTDOWN_MASK;
+ 	} else {
+-		if (sk->sk_state != SMC_LISTEN && sk->sk_state != SMC_INIT)
+-			sock_put(sk); /* passive closing */
+-		if (sk->sk_state == SMC_LISTEN) {
+-			/* wake up clcsock accept */
+-			rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);
++		if (sk->sk_state != SMC_CLOSED) {
++			if (sk->sk_state != SMC_LISTEN &&
++			    sk->sk_state != SMC_INIT)
++				sock_put(sk); /* passive closing */
++			if (sk->sk_state == SMC_LISTEN) {
++				/* wake up clcsock accept */
++				rc = kernel_sock_shutdown(smc->clcsock,
++							  SHUT_RDWR);
++			}
++			sk->sk_state = SMC_CLOSED;
++			sk->sk_state_change(sk);
+ 		}
+-		sk->sk_state = SMC_CLOSED;
+-		sk->sk_state_change(sk);
+ 		smc_restore_fallback_changes(smc);
+ 	}
+ 
+@@ -1018,7 +1022,7 @@ static void smc_connect_work(struct work_struct *work)
+ 	if (smc->clcsock->sk->sk_err) {
+ 		smc->sk.sk_err = smc->clcsock->sk->sk_err;
+ 	} else if ((1 << smc->clcsock->sk->sk_state) &
+-					(TCPF_SYN_SENT | TCP_SYN_RECV)) {
++					(TCPF_SYN_SENT | TCPF_SYN_RECV)) {
+ 		rc = sk_stream_wait_connect(smc->clcsock->sk, &timeo);
+ 		if ((rc == -EPIPE) &&
+ 		    ((1 << smc->clcsock->sk->sk_state) &
+diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
+index 2e7560eba9812..d8fe4e1f24d1f 100644
+--- a/net/smc/smc_llc.c
++++ b/net/smc/smc_llc.c
+@@ -1787,7 +1787,7 @@ void smc_llc_link_active(struct smc_link *link)
+ 			    link->smcibdev->ibdev->name, link->ibport);
+ 	link->state = SMC_LNK_ACTIVE;
+ 	if (link->lgr->llc_testlink_time) {
+-		link->llc_testlink_time = link->lgr->llc_testlink_time * HZ;
++		link->llc_testlink_time = link->lgr->llc_testlink_time;
+ 		schedule_delayed_work(&link->llc_testlink_wrk,
+ 				      link->llc_testlink_time);
+ 	}
+diff --git a/net/strparser/strparser.c b/net/strparser/strparser.c
+index b3815c1e8f2ea..cd9954c4ad808 100644
+--- a/net/strparser/strparser.c
++++ b/net/strparser/strparser.c
+@@ -27,18 +27,10 @@
+ 
+ static struct workqueue_struct *strp_wq;
+ 
+-struct _strp_msg {
+-	/* Internal cb structure. struct strp_msg must be first for passing
+-	 * to upper layer.
+-	 */
+-	struct strp_msg strp;
+-	int accum_len;
+-};
+-
+ static inline struct _strp_msg *_strp_msg(struct sk_buff *skb)
+ {
+ 	return (struct _strp_msg *)((void *)skb->cb +
+-		offsetof(struct qdisc_skb_cb, data));
++		offsetof(struct sk_skb_cb, strp));
+ }
+ 
+ /* Lower lock held */
+diff --git a/net/sunrpc/addr.c b/net/sunrpc/addr.c
+index 6e4dbd577a39f..d435bffc61999 100644
+--- a/net/sunrpc/addr.c
++++ b/net/sunrpc/addr.c
+@@ -162,8 +162,10 @@ static int rpc_parse_scope_id(struct net *net, const char *buf,
+ 			      const size_t buflen, const char *delim,
+ 			      struct sockaddr_in6 *sin6)
+ {
+-	char *p;
++	char p[IPV6_SCOPE_ID_LEN + 1];
+ 	size_t len;
++	u32 scope_id = 0;
++	struct net_device *dev;
+ 
+ 	if ((buf + buflen) == delim)
+ 		return 1;
+@@ -175,29 +177,23 @@ static int rpc_parse_scope_id(struct net *net, const char *buf,
+ 		return 0;
+ 
+ 	len = (buf + buflen) - delim - 1;
+-	p = kmemdup_nul(delim + 1, len, GFP_KERNEL);
+-	if (p) {
+-		u32 scope_id = 0;
+-		struct net_device *dev;
+-
+-		dev = dev_get_by_name(net, p);
+-		if (dev != NULL) {
+-			scope_id = dev->ifindex;
+-			dev_put(dev);
+-		} else {
+-			if (kstrtou32(p, 10, &scope_id) != 0) {
+-				kfree(p);
+-				return 0;
+-			}
+-		}
+-
+-		kfree(p);
+-
+-		sin6->sin6_scope_id = scope_id;
+-		return 1;
++	if (len > IPV6_SCOPE_ID_LEN)
++		return 0;
++
++	memcpy(p, delim + 1, len);
++	p[len] = 0;
++
++	dev = dev_get_by_name(net, p);
++	if (dev != NULL) {
++		scope_id = dev->ifindex;
++		dev_put(dev);
++	} else {
++		if (kstrtou32(p, 10, &scope_id) != 0)
++			return 0;
+ 	}
+ 
+-	return 0;
++	sin6->sin6_scope_id = scope_id;
++	return 1;
+ }
+ 
+ static size_t rpc_pton6(struct net *net, const char *buf, const size_t buflen,
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 8201531ce5d97..04aaca4b8bf93 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -1552,15 +1552,14 @@ xprt_transmit(struct rpc_task *task)
+ {
+ 	struct rpc_rqst *next, *req = task->tk_rqstp;
+ 	struct rpc_xprt	*xprt = req->rq_xprt;
+-	int counter, status;
++	int status;
+ 
+ 	spin_lock(&xprt->queue_lock);
+-	counter = 0;
+-	while (!list_empty(&xprt->xmit_queue)) {
+-		if (++counter == 20)
++	for (;;) {
++		next = list_first_entry_or_null(&xprt->xmit_queue,
++						struct rpc_rqst, rq_xmit);
++		if (!next)
+ 			break;
+-		next = list_first_entry(&xprt->xmit_queue,
+-				struct rpc_rqst, rq_xmit);
+ 		xprt_pin_rqst(next);
+ 		spin_unlock(&xprt->queue_lock);
+ 		status = xprt_request_transmit(next, task);
+@@ -1568,13 +1567,16 @@ xprt_transmit(struct rpc_task *task)
+ 			status = 0;
+ 		spin_lock(&xprt->queue_lock);
+ 		xprt_unpin_rqst(next);
+-		if (status == 0) {
+-			if (!xprt_request_data_received(task) ||
+-			    test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
+-				continue;
+-		} else if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
+-			task->tk_status = status;
+-		break;
++		if (status < 0) {
++			if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
++				task->tk_status = status;
++			break;
++		}
++		/* Was @task transmitted, and has it received a reply? */
++		if (xprt_request_data_received(task) &&
++		    !test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
++			break;
++		cond_resched_lock(&xprt->queue_lock);
+ 	}
+ 	spin_unlock(&xprt->queue_lock);
+ }
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 326250513570e..7fe36dbcbe187 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1279,6 +1279,8 @@ static int vsock_stream_connect(struct socket *sock, struct sockaddr *addr,
+ 		 * non-blocking call.
+ 		 */
+ 		err = -EALREADY;
++		if (flags & O_NONBLOCK)
++			goto out;
+ 		break;
+ 	default:
+ 		if ((sk->sk_state == TCP_LISTEN) ||
+diff --git a/samples/kprobes/kretprobe_example.c b/samples/kprobes/kretprobe_example.c
+index 5dc1bf3baa98b..228321ecb1616 100644
+--- a/samples/kprobes/kretprobe_example.c
++++ b/samples/kprobes/kretprobe_example.c
+@@ -86,7 +86,7 @@ static int __init kretprobe_init(void)
+ 	ret = register_kretprobe(&my_kretprobe);
+ 	if (ret < 0) {
+ 		pr_err("register_kretprobe failed, returned %d\n", ret);
+-		return -1;
++		return ret;
+ 	}
+ 	pr_info("Planted return probe at %s: %p\n",
+ 			my_kretprobe.kp.symbol_name, my_kretprobe.kp.addr);
+diff --git a/scripts/leaking_addresses.pl b/scripts/leaking_addresses.pl
+index b2d8b8aa2d99e..8f636a23bc3f2 100755
+--- a/scripts/leaking_addresses.pl
++++ b/scripts/leaking_addresses.pl
+@@ -455,8 +455,9 @@ sub parse_file
+ 
+ 	open my $fh, "<", $file or return;
+ 	while ( <$fh> ) {
++		chomp;
+ 		if (may_leak_address($_)) {
+-			print $file . ': ' . $_;
++			printf("$file: $_\n");
+ 		}
+ 	}
+ 	close $fh;
+diff --git a/security/apparmor/label.c b/security/apparmor/label.c
+index e68bcedca976b..6222fdfebe4e5 100644
+--- a/security/apparmor/label.c
++++ b/security/apparmor/label.c
+@@ -1454,7 +1454,7 @@ bool aa_update_label_name(struct aa_ns *ns, struct aa_label *label, gfp_t gfp)
+ 	if (label->hname || labels_ns(label) != ns)
+ 		return res;
+ 
+-	if (aa_label_acntsxprint(&name, ns, label, FLAGS_NONE, gfp) == -1)
++	if (aa_label_acntsxprint(&name, ns, label, FLAGS_NONE, gfp) < 0)
+ 		return res;
+ 
+ 	ls = labels_set(label);
+@@ -1704,7 +1704,7 @@ int aa_label_asxprint(char **strp, struct aa_ns *ns, struct aa_label *label,
+ 
+ /**
+  * aa_label_acntsxprint - allocate a __counted string buffer and print label
+- * @strp: buffer to write to. (MAY BE NULL if @size == 0)
++ * @strp: buffer to write to.
+  * @ns: namespace profile is being viewed from
+  * @label: label to view (NOT NULL)
+  * @flags: flags controlling what label info is printed
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index f1ca3cac9b861..b929c683aba12 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -54,7 +54,7 @@ static struct xattr_list evm_config_default_xattrnames[] = {
+ 
+ LIST_HEAD(evm_config_xattrnames);
+ 
+-static int evm_fixmode;
++static int evm_fixmode __ro_after_init;
+ static int __init evm_set_fixmode(char *str)
+ {
+ 	if (strncmp(str, "fix", 3) == 0)
+diff --git a/security/security.c b/security/security.c
+index 1c696bce8e1c9..a864ff824dd3b 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -723,25 +723,25 @@ static void __init lsm_early_task(struct task_struct *task)
+ 
+ /* Security operations */
+ 
+-int security_binder_set_context_mgr(struct task_struct *mgr)
++int security_binder_set_context_mgr(const struct cred *mgr)
+ {
+ 	return call_int_hook(binder_set_context_mgr, 0, mgr);
+ }
+ 
+-int security_binder_transaction(struct task_struct *from,
+-				struct task_struct *to)
++int security_binder_transaction(const struct cred *from,
++				const struct cred *to)
+ {
+ 	return call_int_hook(binder_transaction, 0, from, to);
+ }
+ 
+-int security_binder_transfer_binder(struct task_struct *from,
+-				    struct task_struct *to)
++int security_binder_transfer_binder(const struct cred *from,
++				    const struct cred *to)
+ {
+ 	return call_int_hook(binder_transfer_binder, 0, from, to);
+ }
+ 
+-int security_binder_transfer_file(struct task_struct *from,
+-				  struct task_struct *to, struct file *file)
++int security_binder_transfer_file(const struct cred *from,
++				  const struct cred *to, struct file *file)
+ {
+ 	return call_int_hook(binder_transfer_file, 0, from, to, file);
+ }
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 227eb89679637..f32026bc96b42 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -2004,22 +2004,19 @@ static inline u32 open_file_to_av(struct file *file)
+ 
+ /* Hook functions begin here. */
+ 
+-static int selinux_binder_set_context_mgr(struct task_struct *mgr)
++static int selinux_binder_set_context_mgr(const struct cred *mgr)
+ {
+-	u32 mysid = current_sid();
+-	u32 mgrsid = task_sid(mgr);
+-
+ 	return avc_has_perm(&selinux_state,
+-			    mysid, mgrsid, SECCLASS_BINDER,
++			    current_sid(), cred_sid(mgr), SECCLASS_BINDER,
+ 			    BINDER__SET_CONTEXT_MGR, NULL);
+ }
+ 
+-static int selinux_binder_transaction(struct task_struct *from,
+-				      struct task_struct *to)
++static int selinux_binder_transaction(const struct cred *from,
++				      const struct cred *to)
+ {
+ 	u32 mysid = current_sid();
+-	u32 fromsid = task_sid(from);
+-	u32 tosid = task_sid(to);
++	u32 fromsid = cred_sid(from);
++	u32 tosid = cred_sid(to);
+ 	int rc;
+ 
+ 	if (mysid != fromsid) {
+@@ -2030,27 +2027,24 @@ static int selinux_binder_transaction(struct task_struct *from,
+ 			return rc;
+ 	}
+ 
+-	return avc_has_perm(&selinux_state,
+-			    fromsid, tosid, SECCLASS_BINDER, BINDER__CALL,
+-			    NULL);
++	return avc_has_perm(&selinux_state, fromsid, tosid,
++			    SECCLASS_BINDER, BINDER__CALL, NULL);
+ }
+ 
+-static int selinux_binder_transfer_binder(struct task_struct *from,
+-					  struct task_struct *to)
++static int selinux_binder_transfer_binder(const struct cred *from,
++					  const struct cred *to)
+ {
+-	u32 fromsid = task_sid(from);
+-	u32 tosid = task_sid(to);
+-
+ 	return avc_has_perm(&selinux_state,
+-			    fromsid, tosid, SECCLASS_BINDER, BINDER__TRANSFER,
++			    cred_sid(from), cred_sid(to),
++			    SECCLASS_BINDER, BINDER__TRANSFER,
+ 			    NULL);
+ }
+ 
+-static int selinux_binder_transfer_file(struct task_struct *from,
+-					struct task_struct *to,
++static int selinux_binder_transfer_file(const struct cred *from,
++					const struct cred *to,
+ 					struct file *file)
+ {
+-	u32 sid = task_sid(to);
++	u32 sid = cred_sid(to);
+ 	struct file_security_struct *fsec = selinux_file(file);
+ 	struct dentry *dentry = file->f_path.dentry;
+ 	struct inode_security_struct *isec;
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index fbdbfb5aa3707..31d631fa846ef 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -2368,6 +2368,43 @@ err_policy:
+ 	return rc;
+ }
+ 
++/**
++ * ocontext_to_sid - Helper to safely get sid for an ocontext
++ * @sidtab: SID table
++ * @c: ocontext structure
++ * @index: index of the context entry (0 or 1)
++ * @out_sid: pointer to the resulting SID value
++ *
++ * For all ocontexts except OCON_ISID the SID fields are populated
++ * on-demand when needed. Since updating the SID value is an SMP-sensitive
++ * operation, this helper must be used to do that safely.
++ *
++ * WARNING: This function may return -ESTALE, indicating that the caller
++ * must retry the operation after re-acquiring the policy pointer!
++ */
++static int ocontext_to_sid(struct sidtab *sidtab, struct ocontext *c,
++			   size_t index, u32 *out_sid)
++{
++	int rc;
++	u32 sid;
++
++	/* Ensure the associated sidtab entry is visible to this thread. */
++	sid = smp_load_acquire(&c->sid[index]);
++	if (!sid) {
++		rc = sidtab_context_to_sid(sidtab, &c->context[index], &sid);
++		if (rc)
++			return rc;
++
++		/*
++		 * Ensure the new sidtab entry is visible to other threads
++		 * when they see the SID.
++		 */
++		smp_store_release(&c->sid[index], sid);
++	}
++	*out_sid = sid;
++	return 0;
++}
++
+ /**
+  * security_port_sid - Obtain the SID for a port.
+  * @protocol: protocol number
+@@ -2405,17 +2442,13 @@ retry:
+ 	}
+ 
+ 	if (c) {
+-		if (!c->sid[0]) {
+-			rc = sidtab_context_to_sid(sidtab, &c->context[0],
+-						   &c->sid[0]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
++		rc = ocontext_to_sid(sidtab, c, 0, out_sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
+ 		}
+-		*out_sid = c->sid[0];
++		if (rc)
++			goto out;
+ 	} else {
+ 		*out_sid = SECINITSID_PORT;
+ 	}
+@@ -2463,18 +2496,13 @@ retry:
+ 	}
+ 
+ 	if (c) {
+-		if (!c->sid[0]) {
+-			rc = sidtab_context_to_sid(sidtab,
+-						   &c->context[0],
+-						   &c->sid[0]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
++		rc = ocontext_to_sid(sidtab, c, 0, out_sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
+ 		}
+-		*out_sid = c->sid[0];
++		if (rc)
++			goto out;
+ 	} else
+ 		*out_sid = SECINITSID_UNLABELED;
+ 
+@@ -2522,17 +2550,13 @@ retry:
+ 	}
+ 
+ 	if (c) {
+-		if (!c->sid[0]) {
+-			rc = sidtab_context_to_sid(sidtab, &c->context[0],
+-						   &c->sid[0]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
++		rc = ocontext_to_sid(sidtab, c, 0, out_sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
+ 		}
+-		*out_sid = c->sid[0];
++		if (rc)
++			goto out;
+ 	} else
+ 		*out_sid = SECINITSID_UNLABELED;
+ 
+@@ -2575,25 +2599,13 @@ retry:
+ 	}
+ 
+ 	if (c) {
+-		if (!c->sid[0] || !c->sid[1]) {
+-			rc = sidtab_context_to_sid(sidtab, &c->context[0],
+-						   &c->sid[0]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
+-			rc = sidtab_context_to_sid(sidtab, &c->context[1],
+-						   &c->sid[1]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
++		rc = ocontext_to_sid(sidtab, c, 0, if_sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
+ 		}
+-		*if_sid = c->sid[0];
++		if (rc)
++			goto out;
+ 	} else
+ 		*if_sid = SECINITSID_NETIF;
+ 
+@@ -2684,18 +2696,13 @@ retry:
+ 	}
+ 
+ 	if (c) {
+-		if (!c->sid[0]) {
+-			rc = sidtab_context_to_sid(sidtab,
+-						   &c->context[0],
+-						   &c->sid[0]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
++		rc = ocontext_to_sid(sidtab, c, 0, out_sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
+ 		}
+-		*out_sid = c->sid[0];
++		if (rc)
++			goto out;
+ 	} else {
+ 		*out_sid = SECINITSID_NODE;
+ 	}
+@@ -2859,7 +2866,7 @@ static inline int __security_genfs_sid(struct selinux_policy *policy,
+ 	u16 sclass;
+ 	struct genfs *genfs;
+ 	struct ocontext *c;
+-	int rc, cmp = 0;
++	int cmp = 0;
+ 
+ 	while (path[0] == '/' && path[1] == '/')
+ 		path++;
+@@ -2873,9 +2880,8 @@ static inline int __security_genfs_sid(struct selinux_policy *policy,
+ 			break;
+ 	}
+ 
+-	rc = -ENOENT;
+ 	if (!genfs || cmp)
+-		goto out;
++		return -ENOENT;
+ 
+ 	for (c = genfs->head; c; c = c->next) {
+ 		len = strlen(c->u.name);
+@@ -2884,20 +2890,10 @@ static inline int __security_genfs_sid(struct selinux_policy *policy,
+ 			break;
+ 	}
+ 
+-	rc = -ENOENT;
+ 	if (!c)
+-		goto out;
+-
+-	if (!c->sid[0]) {
+-		rc = sidtab_context_to_sid(sidtab, &c->context[0], &c->sid[0]);
+-		if (rc)
+-			goto out;
+-	}
++		return -ENOENT;
+ 
+-	*sid = c->sid[0];
+-	rc = 0;
+-out:
+-	return rc;
++	return ocontext_to_sid(sidtab, c, 0, sid);
+ }
+ 
+ /**
+@@ -2980,17 +2976,13 @@ retry:
+ 
+ 	if (c) {
+ 		sbsec->behavior = c->v.behavior;
+-		if (!c->sid[0]) {
+-			rc = sidtab_context_to_sid(sidtab, &c->context[0],
+-						   &c->sid[0]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
++		rc = ocontext_to_sid(sidtab, c, 0, &sbsec->sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
+ 		}
+-		sbsec->sid = c->sid[0];
++		if (rc)
++			goto out;
+ 	} else {
+ 		rc = __security_genfs_sid(policy, fstype, "/",
+ 					SECCLASS_DIR, &sbsec->sid);
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index b88c1a9538334..3eabcc469669e 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -693,9 +693,7 @@ static void smk_cipso_doi(void)
+ 		printk(KERN_WARNING "%s:%d remove rc = %d\n",
+ 		       __func__, __LINE__, rc);
+ 
+-	doip = kmalloc(sizeof(struct cipso_v4_doi), GFP_KERNEL);
+-	if (doip == NULL)
+-		panic("smack:  Failed to initialize cipso DOI.\n");
++	doip = kmalloc(sizeof(struct cipso_v4_doi), GFP_KERNEL | __GFP_NOFAIL);
+ 	doip->map.std = NULL;
+ 	doip->doi = smk_cipso_doi_value;
+ 	doip->type = CIPSO_V4_MAP_PASS;
+@@ -714,7 +712,7 @@ static void smk_cipso_doi(void)
+ 	if (rc != 0) {
+ 		printk(KERN_WARNING "%s:%d map add rc = %d\n",
+ 		       __func__, __LINE__, rc);
+-		kfree(doip);
++		netlbl_cfg_cipsov4_del(doip->doi, &nai);
+ 		return;
+ 	}
+ }
+@@ -831,6 +829,7 @@ static int smk_open_cipso(struct inode *inode, struct file *file)
+ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ 				size_t count, loff_t *ppos, int format)
+ {
++	struct netlbl_lsm_catmap *old_cat;
+ 	struct smack_known *skp;
+ 	struct netlbl_lsm_secattr ncats;
+ 	char mapcatset[SMK_CIPSOLEN];
+@@ -920,9 +919,11 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ 
+ 	rc = smk_netlbl_mls(maplevel, mapcatset, &ncats, SMK_CIPSOLEN);
+ 	if (rc >= 0) {
+-		netlbl_catmap_free(skp->smk_netlabel.attr.mls.cat);
++		old_cat = skp->smk_netlabel.attr.mls.cat;
+ 		skp->smk_netlabel.attr.mls.cat = ncats.attr.mls.cat;
+ 		skp->smk_netlabel.attr.mls.lvl = ncats.attr.mls.lvl;
++		synchronize_rcu();
++		netlbl_catmap_free(old_cat);
+ 		rc = count;
+ 		/*
+ 		 * This mapping may have been cached, so clear the cache.
+diff --git a/sound/core/oss/mixer_oss.c b/sound/core/oss/mixer_oss.c
+index f702c96a7478f..bfed82a3a1881 100644
+--- a/sound/core/oss/mixer_oss.c
++++ b/sound/core/oss/mixer_oss.c
+@@ -130,11 +130,13 @@ static int snd_mixer_oss_devmask(struct snd_mixer_oss_file *fmixer)
+ 
+ 	if (mixer == NULL)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	for (chn = 0; chn < 31; chn++) {
+ 		pslot = &mixer->slots[chn];
+ 		if (pslot->put_volume || pslot->put_recsrc)
+ 			result |= 1 << chn;
+ 	}
++	mutex_unlock(&mixer->reg_mutex);
+ 	return result;
+ }
+ 
+@@ -146,11 +148,13 @@ static int snd_mixer_oss_stereodevs(struct snd_mixer_oss_file *fmixer)
+ 
+ 	if (mixer == NULL)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	for (chn = 0; chn < 31; chn++) {
+ 		pslot = &mixer->slots[chn];
+ 		if (pslot->put_volume && pslot->stereo)
+ 			result |= 1 << chn;
+ 	}
++	mutex_unlock(&mixer->reg_mutex);
+ 	return result;
+ }
+ 
+@@ -161,6 +165,7 @@ static int snd_mixer_oss_recmask(struct snd_mixer_oss_file *fmixer)
+ 
+ 	if (mixer == NULL)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	if (mixer->put_recsrc && mixer->get_recsrc) {	/* exclusive */
+ 		result = mixer->mask_recsrc;
+ 	} else {
+@@ -172,6 +177,7 @@ static int snd_mixer_oss_recmask(struct snd_mixer_oss_file *fmixer)
+ 				result |= 1 << chn;
+ 		}
+ 	}
++	mutex_unlock(&mixer->reg_mutex);
+ 	return result;
+ }
+ 
+@@ -182,11 +188,12 @@ static int snd_mixer_oss_get_recsrc(struct snd_mixer_oss_file *fmixer)
+ 
+ 	if (mixer == NULL)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	if (mixer->put_recsrc && mixer->get_recsrc) {	/* exclusive */
+-		int err;
+ 		unsigned int index;
+-		if ((err = mixer->get_recsrc(fmixer, &index)) < 0)
+-			return err;
++		result = mixer->get_recsrc(fmixer, &index);
++		if (result < 0)
++			goto unlock;
+ 		result = 1 << index;
+ 	} else {
+ 		struct snd_mixer_oss_slot *pslot;
+@@ -201,7 +208,10 @@ static int snd_mixer_oss_get_recsrc(struct snd_mixer_oss_file *fmixer)
+ 			}
+ 		}
+ 	}
+-	return mixer->oss_recsrc = result;
++	mixer->oss_recsrc = result;
++ unlock:
++	mutex_unlock(&mixer->reg_mutex);
++	return result;
+ }
+ 
+ static int snd_mixer_oss_set_recsrc(struct snd_mixer_oss_file *fmixer, int recsrc)
+@@ -214,6 +224,7 @@ static int snd_mixer_oss_set_recsrc(struct snd_mixer_oss_file *fmixer, int recsr
+ 
+ 	if (mixer == NULL)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	if (mixer->get_recsrc && mixer->put_recsrc) {	/* exclusive input */
+ 		if (recsrc & ~mixer->oss_recsrc)
+ 			recsrc &= ~mixer->oss_recsrc;
+@@ -239,6 +250,7 @@ static int snd_mixer_oss_set_recsrc(struct snd_mixer_oss_file *fmixer, int recsr
+ 			}
+ 		}
+ 	}
++	mutex_unlock(&mixer->reg_mutex);
+ 	return result;
+ }
+ 
+@@ -250,6 +262,7 @@ static int snd_mixer_oss_get_volume(struct snd_mixer_oss_file *fmixer, int slot)
+ 
+ 	if (mixer == NULL || slot > 30)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	pslot = &mixer->slots[slot];
+ 	left = pslot->volume[0];
+ 	right = pslot->volume[1];
+@@ -257,15 +270,21 @@ static int snd_mixer_oss_get_volume(struct snd_mixer_oss_file *fmixer, int slot)
+ 		result = pslot->get_volume(fmixer, pslot, &left, &right);
+ 	if (!pslot->stereo)
+ 		right = left;
+-	if (snd_BUG_ON(left < 0 || left > 100))
+-		return -EIO;
+-	if (snd_BUG_ON(right < 0 || right > 100))
+-		return -EIO;
++	if (snd_BUG_ON(left < 0 || left > 100)) {
++		result = -EIO;
++		goto unlock;
++	}
++	if (snd_BUG_ON(right < 0 || right > 100)) {
++		result = -EIO;
++		goto unlock;
++	}
+ 	if (result >= 0) {
+ 		pslot->volume[0] = left;
+ 		pslot->volume[1] = right;
+ 	 	result = (left & 0xff) | ((right & 0xff) << 8);
+ 	}
++ unlock:
++	mutex_unlock(&mixer->reg_mutex);
+ 	return result;
+ }
+ 
+@@ -278,6 +297,7 @@ static int snd_mixer_oss_set_volume(struct snd_mixer_oss_file *fmixer,
+ 
+ 	if (mixer == NULL || slot > 30)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	pslot = &mixer->slots[slot];
+ 	if (left > 100)
+ 		left = 100;
+@@ -288,10 +308,13 @@ static int snd_mixer_oss_set_volume(struct snd_mixer_oss_file *fmixer,
+ 	if (pslot->put_volume)
+ 		result = pslot->put_volume(fmixer, pslot, left, right);
+ 	if (result < 0)
+-		return result;
++		goto unlock;
+ 	pslot->volume[0] = left;
+ 	pslot->volume[1] = right;
+- 	return (left & 0xff) | ((right & 0xff) << 8);
++	result = (left & 0xff) | ((right & 0xff) << 8);
++ unlock:
++	mutex_unlock(&mixer->reg_mutex);
++	return result;
+ }
+ 
+ static int snd_mixer_oss_ioctl1(struct snd_mixer_oss_file *fmixer, unsigned int cmd, unsigned long arg)
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index c15c8314671b7..04cd8953605ab 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -624,13 +624,13 @@ static int snd_timer_stop1(struct snd_timer_instance *timeri, bool stop)
+ 	if (!timer)
+ 		return -EINVAL;
+ 	spin_lock_irqsave(&timer->lock, flags);
++	list_del_init(&timeri->ack_list);
++	list_del_init(&timeri->active_list);
+ 	if (!(timeri->flags & (SNDRV_TIMER_IFLG_RUNNING |
+ 			       SNDRV_TIMER_IFLG_START))) {
+ 		result = -EBUSY;
+ 		goto unlock;
+ 	}
+-	list_del_init(&timeri->ack_list);
+-	list_del_init(&timeri->active_list);
+ 	if (timer->card && timer->card->shutdown)
+ 		goto unlock;
+ 	if (stop) {
+@@ -665,23 +665,22 @@ static int snd_timer_stop1(struct snd_timer_instance *timeri, bool stop)
+ static int snd_timer_stop_slave(struct snd_timer_instance *timeri, bool stop)
+ {
+ 	unsigned long flags;
++	bool running;
+ 
+ 	spin_lock_irqsave(&slave_active_lock, flags);
+-	if (!(timeri->flags & SNDRV_TIMER_IFLG_RUNNING)) {
+-		spin_unlock_irqrestore(&slave_active_lock, flags);
+-		return -EBUSY;
+-	}
++	running = timeri->flags & SNDRV_TIMER_IFLG_RUNNING;
+ 	timeri->flags &= ~SNDRV_TIMER_IFLG_RUNNING;
+ 	if (timeri->timer) {
+ 		spin_lock(&timeri->timer->lock);
+ 		list_del_init(&timeri->ack_list);
+ 		list_del_init(&timeri->active_list);
+-		snd_timer_notify1(timeri, stop ? SNDRV_TIMER_EVENT_STOP :
+-				  SNDRV_TIMER_EVENT_PAUSE);
++		if (running)
++			snd_timer_notify1(timeri, stop ? SNDRV_TIMER_EVENT_STOP :
++					  SNDRV_TIMER_EVENT_PAUSE);
+ 		spin_unlock(&timeri->timer->lock);
+ 	}
+ 	spin_unlock_irqrestore(&slave_active_lock, flags);
+-	return 0;
++	return running ? 0 : -EBUSY;
+ }
+ 
+ /*
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 8bc27e7c05905..64115a796af06 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -672,13 +672,17 @@ static int azx_position_check(struct azx *chip, struct azx_dev *azx_dev)
+  * the update-IRQ timing.  The IRQ is issued before actually the
+  * data is processed.  So, we need to process it afterwords in a
+  * workqueue.
++ *
++ * Returns 1 if OK to proceed, 0 for delay handling, -1 for skipping update
+  */
+ static int azx_position_ok(struct azx *chip, struct azx_dev *azx_dev)
+ {
+ 	struct snd_pcm_substream *substream = azx_dev->core.substream;
++	struct snd_pcm_runtime *runtime = substream->runtime;
+ 	int stream = substream->stream;
+ 	u32 wallclk;
+ 	unsigned int pos;
++	snd_pcm_uframes_t hwptr, target;
+ 
+ 	wallclk = azx_readl(chip, WALLCLK) - azx_dev->core.start_wallclk;
+ 	if (wallclk < (azx_dev->core.period_wallclk * 2) / 3)
+@@ -715,6 +719,24 @@ static int azx_position_ok(struct azx *chip, struct azx_dev *azx_dev)
+ 		/* NG - it's below the first next period boundary */
+ 		return chip->bdl_pos_adj ? 0 : -1;
+ 	azx_dev->core.start_wallclk += wallclk;
++
++	if (azx_dev->core.no_period_wakeup)
++		return 1; /* OK, no need to check period boundary */
++
++	if (runtime->hw_ptr_base != runtime->hw_ptr_interrupt)
++		return 1; /* OK, already in hwptr updating process */
++
++	/* check whether the period gets really elapsed */
++	pos = bytes_to_frames(runtime, pos);
++	hwptr = runtime->hw_ptr_base + pos;
++	if (hwptr < runtime->status->hw_ptr)
++		hwptr += runtime->buffer_size;
++	target = runtime->hw_ptr_interrupt + runtime->period_size;
++	if (hwptr < target) {
++		/* too early wakeup, process it later */
++		return chip->bdl_pos_adj ? 0 : -1;
++	}
++
+ 	return 1; /* OK, it's fine */
+ }
+ 
+@@ -893,35 +915,24 @@ static int azx_get_delay_from_fifo(struct azx *chip, struct azx_dev *azx_dev,
+ 	return substream->runtime->delay;
+ }
+ 
+-static unsigned int azx_skl_get_dpib_pos(struct azx *chip,
+-					 struct azx_dev *azx_dev)
++static void __azx_shutdown_chip(struct azx *chip, bool skip_link_reset)
+ {
+-	return _snd_hdac_chip_readl(azx_bus(chip),
+-				    AZX_REG_VS_SDXDPIB_XBASE +
+-				    (AZX_REG_VS_SDXDPIB_XINTERVAL *
+-				     azx_dev->core.index));
+-}
+-
+-/* get the current DMA position with correction on SKL+ chips */
+-static unsigned int azx_get_pos_skl(struct azx *chip, struct azx_dev *azx_dev)
+-{
+-	/* DPIB register gives a more accurate position for playback */
+-	if (azx_dev->core.substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		return azx_skl_get_dpib_pos(chip, azx_dev);
+-
+-	/* For capture, we need to read posbuf, but it requires a delay
+-	 * for the possible boundary overlap; the read of DPIB fetches the
+-	 * actual posbuf
+-	 */
+-	udelay(20);
+-	azx_skl_get_dpib_pos(chip, azx_dev);
+-	return azx_get_pos_posbuf(chip, azx_dev);
++	azx_stop_chip(chip);
++	if (!skip_link_reset)
++		azx_enter_link_reset(chip);
++	azx_clear_irq_pending(chip);
++	display_power(chip, false);
+ }
+ 
+ #ifdef CONFIG_PM
+ static DEFINE_MUTEX(card_list_lock);
+ static LIST_HEAD(card_list);
+ 
++static void azx_shutdown_chip(struct azx *chip)
++{
++	__azx_shutdown_chip(chip, false);
++}
++
+ static void azx_add_card_list(struct azx *chip)
+ {
+ 	struct hda_intel *hda = container_of(chip, struct hda_intel, chip);
+@@ -977,14 +988,6 @@ static bool azx_is_pm_ready(struct snd_card *card)
+ 	return true;
+ }
+ 
+-static void __azx_runtime_suspend(struct azx *chip)
+-{
+-	azx_stop_chip(chip);
+-	azx_enter_link_reset(chip);
+-	azx_clear_irq_pending(chip);
+-	display_power(chip, false);
+-}
+-
+ static void __azx_runtime_resume(struct azx *chip)
+ {
+ 	struct hda_intel *hda = container_of(chip, struct hda_intel, chip);
+@@ -1063,7 +1066,7 @@ static int azx_suspend(struct device *dev)
+ 
+ 	chip = card->private_data;
+ 	bus = azx_bus(chip);
+-	__azx_runtime_suspend(chip);
++	azx_shutdown_chip(chip);
+ 	if (bus->irq >= 0) {
+ 		free_irq(bus->irq, chip);
+ 		bus->irq = -1;
+@@ -1142,7 +1145,7 @@ static int azx_runtime_suspend(struct device *dev)
+ 	/* enable controller wake up event */
+ 	azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) | STATESTS_INT_MASK);
+ 
+-	__azx_runtime_suspend(chip);
++	azx_shutdown_chip(chip);
+ 	trace_azx_runtime_suspend(chip);
+ 	return 0;
+ }
+@@ -1608,7 +1611,7 @@ static void assign_position_fix(struct azx *chip, int fix)
+ 		[POS_FIX_POSBUF] = azx_get_pos_posbuf,
+ 		[POS_FIX_VIACOMBO] = azx_via_get_position,
+ 		[POS_FIX_COMBO] = azx_get_pos_lpib,
+-		[POS_FIX_SKL] = azx_get_pos_skl,
++		[POS_FIX_SKL] = azx_get_pos_posbuf,
+ 		[POS_FIX_FIFO] = azx_get_pos_fifo,
+ 	};
+ 
+@@ -2392,7 +2395,8 @@ static int azx_probe_continue(struct azx *chip)
+ 
+ out_free:
+ 	if (err < 0) {
+-		azx_free(chip);
++		pci_set_drvdata(pci, NULL);
++		snd_card_free(chip->card);
+ 		return err;
+ 	}
+ 
+@@ -2442,7 +2446,7 @@ static void azx_shutdown(struct pci_dev *pci)
+ 		return;
+ 	chip = card->private_data;
+ 	if (chip && chip->running)
+-		azx_stop_chip(chip);
++		__azx_shutdown_chip(chip, true);
+ }
+ 
+ /* PCI IDs */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f511ae66bc8aa..2eb06351de1fb 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2551,6 +2551,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x67f1, "Clevo PC70H[PRS]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170SM", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x7715, "Clevo X170KM-G", ALC1220_FIXUP_CLEVO_PB51ED),
+@@ -4300,6 +4301,16 @@ static void alc287_fixup_hp_gpio_led(struct hda_codec *codec,
+ 	alc_fixup_hp_gpio_led(codec, action, 0x10, 0);
+ }
+ 
++static void alc245_fixup_hp_gpio_led(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE)
++		spec->micmute_led_polarity = 1;
++	alc_fixup_hp_gpio_led(codec, action, 0, 0x04);
++}
++
+ /* turn on/off mic-mute LED per capture hook via VREF change */
+ static int vref_micmute_led_set(struct led_classdev *led_cdev,
+ 				enum led_brightness brightness)
+@@ -6352,6 +6363,44 @@ static void alc_fixup_no_int_mic(struct hda_codec *codec,
+ 	}
+ }
+ 
++/* GPIO1 = amplifier on/off
++ * GPIO3 = mic mute LED
++ */
++static void alc285_fixup_hp_spectre_x360_eb1(struct hda_codec *codec,
++					  const struct hda_fixup *fix, int action)
++{
++	static const hda_nid_t conn[] = { 0x02 };
++
++	struct alc_spec *spec = codec->spec;
++	static const struct hda_pintbl pincfgs[] = {
++		{ 0x14, 0x90170110 },  /* front/high speakers */
++		{ 0x17, 0x90170130 },  /* back/bass speakers */
++		{ }
++	};
++
++	//enable micmute led
++	alc_fixup_hp_gpio_led(codec, action, 0x00, 0x04);
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		spec->micmute_led_polarity = 1;
++		/* needed for amp of back speakers */
++		spec->gpio_mask |= 0x01;
++		spec->gpio_dir |= 0x01;
++		snd_hda_apply_pincfgs(codec, pincfgs);
++		/* share DAC to have unified volume control */
++		snd_hda_override_conn_list(codec, 0x14, ARRAY_SIZE(conn), conn);
++		snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn);
++		break;
++	case HDA_FIXUP_ACT_INIT:
++		/* need to toggle GPIO to enable the amp of back speakers */
++		alc_update_gpio_data(codec, 0x01, true);
++		msleep(100);
++		alc_update_gpio_data(codec, 0x01, false);
++		break;
++	}
++}
++
+ static void alc285_fixup_hp_spectre_x360(struct hda_codec *codec,
+ 					  const struct hda_fixup *fix, int action)
+ {
+@@ -6504,6 +6553,7 @@ enum {
+ 	ALC269_FIXUP_HP_DOCK_GPIO_MIC1_LED,
+ 	ALC280_FIXUP_HP_9480M,
+ 	ALC245_FIXUP_HP_X360_AMP,
++	ALC285_FIXUP_HP_SPECTRE_X360_EB1,
+ 	ALC288_FIXUP_DELL_HEADSET_MODE,
+ 	ALC288_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 	ALC288_FIXUP_DELL_XPS_13,
+@@ -6616,6 +6666,7 @@ enum {
+ 	ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+ 	ALC287_FIXUP_HP_GPIO_LED,
+ 	ALC256_FIXUP_HP_HEADSET_MIC,
++	ALC245_FIXUP_HP_GPIO_LED,
+ 	ALC236_FIXUP_DELL_AIO_HEADSET_MIC,
+ 	ALC282_FIXUP_ACER_DISABLE_LINEOUT,
+ 	ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST,
+@@ -6633,6 +6684,7 @@ enum {
+ 	ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
+ 	ALC287_FIXUP_13S_GEN2_SPEAKERS,
+ 	ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS,
++	ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7239,6 +7291,8 @@ static const struct hda_fixup alc269_fixups[] = {
+ 	[ALC245_FIXUP_HP_X360_AMP] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc245_fixup_hp_x360_amp,
++		.chained = true,
++		.chain_id = ALC245_FIXUP_HP_GPIO_LED
+ 	},
+ 	[ALC288_FIXUP_DELL_HEADSET_MODE] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -8190,6 +8244,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_hp_spectre_x360,
+ 	},
++	[ALC285_FIXUP_HP_SPECTRE_X360_EB1] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_hp_spectre_x360_eb1
++	},
+ 	[ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_ideapad_s740_coef,
+@@ -8328,6 +8386,19 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc256_fixup_tongfang_reset_persistent_settings,
+ 	},
++	[ALC245_FIXUP_HP_GPIO_LED] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc245_fixup_hp_gpio_led,
++	},
++	[ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x03a11120 }, /* use as headset mic, without its own jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8364,6 +8435,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x141f, "Acer Spin SP513-54N", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x142b, "Acer Swift SF314-42", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC),
+@@ -8503,6 +8575,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8720, "HP EliteBook x360 1040 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8728, "HP EliteBook 840 G7", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8730, "HP ProBook 445 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+@@ -8513,6 +8586,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8783, "HP ZBook Fury 15 G7 Mobile Workstation",
+ 		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
++	SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e7, "HP ProBook 450 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+@@ -8524,6 +8598,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8811, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
++	SND_PCI_QUIRK(0x103c, 0x8812, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ 	SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+@@ -8560,6 +8636,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x194e, "ASUS UX563FD", ALC294_FIXUP_ASUS_HPE),
++	SND_PCI_QUIRK(0x1043, 0x1970, "ASUS UX550VE", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1982, "ASUS B1400CEPE", ALC256_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x19ce, "ASUS B9450FA", ALC294_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x19e1, "ASUS UX581LV", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE),
+@@ -8623,11 +8700,15 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x40a1, "Clevo NL40GU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x40c1, "Clevo NL40[CZ]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x40d1, "Clevo NL41DU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x5015, "Clevo NH5[58]H[HJK]Q", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x5017, "Clevo NH7[79]H[HJK]Q", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50a3, "Clevo NJ51GU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50b3, "Clevo NK50S[BEZ]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50b6, "Clevo NK50S5", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50b8, "Clevo NK50SZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50d5, "Clevo NP50D5", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x50e1, "Clevo NH5[58]HPQ", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x50e2, "Clevo NH7[79]HPQ", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50f0, "Clevo NH50A[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50f2, "Clevo NH50E[PR]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50f3, "Clevo NH58DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -8943,6 +9024,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"},
+ 	{.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"},
+ 	{.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"},
++	{.id = ALC285_FIXUP_HP_SPECTRE_X360_EB1, .name = "alc285-hp-spectre-x360-eb1"},
+ 	{.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"},
+ 	{.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"},
+ 	{.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"},
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index 828dc78202e8b..54c1ede59b8b7 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -20,10 +20,9 @@
+ #include <linux/regmap.h>
+ #include <linux/slab.h>
+ #include <linux/platform_device.h>
++#include <linux/property.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/gpio/consumer.h>
+-#include <linux/of.h>
+-#include <linux/of_gpio.h>
+ #include <linux/of_device.h>
+ #include <sound/core.h>
+ #include <sound/pcm.h>
+@@ -91,7 +90,7 @@ static const struct reg_default cs42l42_reg_defaults[] = {
+ 	{ CS42L42_ASP_RX_INT_MASK,		0x1F },
+ 	{ CS42L42_ASP_TX_INT_MASK,		0x0F },
+ 	{ CS42L42_CODEC_INT_MASK,		0x03 },
+-	{ CS42L42_SRCPL_INT_MASK,		0xFF },
++	{ CS42L42_SRCPL_INT_MASK,		0x7F },
+ 	{ CS42L42_VPMON_INT_MASK,		0x01 },
+ 	{ CS42L42_PLL_LOCK_INT_MASK,		0x01 },
+ 	{ CS42L42_TSRS_PLUG_INT_MASK,		0x0F },
+@@ -128,7 +127,7 @@ static const struct reg_default cs42l42_reg_defaults[] = {
+ 	{ CS42L42_MIXER_CHA_VOL,		0x3F },
+ 	{ CS42L42_MIXER_ADC_VOL,		0x3F },
+ 	{ CS42L42_MIXER_CHB_VOL,		0x3F },
+-	{ CS42L42_EQ_COEF_IN0,			0x22 },
++	{ CS42L42_EQ_COEF_IN0,			0x00 },
+ 	{ CS42L42_EQ_COEF_IN1,			0x00 },
+ 	{ CS42L42_EQ_COEF_IN2,			0x00 },
+ 	{ CS42L42_EQ_COEF_IN3,			0x00 },
+@@ -1530,12 +1529,15 @@ static void cs42l42_setup_hs_type_detect(struct cs42l42_private *cs42l42)
+ 			(1 << CS42L42_HS_CLAMP_DISABLE_SHIFT));
+ 
+ 	/* Enable the tip sense circuit */
++	regmap_update_bits(cs42l42->regmap, CS42L42_TSENSE_CTL,
++			   CS42L42_TS_INV_MASK, CS42L42_TS_INV_MASK);
++
+ 	regmap_update_bits(cs42l42->regmap, CS42L42_TIPSENSE_CTL,
+ 			CS42L42_TIP_SENSE_CTRL_MASK |
+ 			CS42L42_TIP_SENSE_INV_MASK |
+ 			CS42L42_TIP_SENSE_DEBOUNCE_MASK,
+ 			(3 << CS42L42_TIP_SENSE_CTRL_SHIFT) |
+-			(0 << CS42L42_TIP_SENSE_INV_SHIFT) |
++			(!cs42l42->ts_inv << CS42L42_TIP_SENSE_INV_SHIFT) |
+ 			(2 << CS42L42_TIP_SENSE_DEBOUNCE_SHIFT));
+ 
+ 	/* Save the initial status of the tip sense */
+@@ -1554,17 +1556,15 @@ static const unsigned int threshold_defaults[] = {
+ 	CS42L42_HS_DET_LEVEL_1
+ };
+ 
+-static int cs42l42_handle_device_data(struct i2c_client *i2c_client,
++static int cs42l42_handle_device_data(struct device *dev,
+ 					struct cs42l42_private *cs42l42)
+ {
+-	struct device_node *np = i2c_client->dev.of_node;
+ 	unsigned int val;
+-	unsigned int thresholds[CS42L42_NUM_BIASES];
++	u32 thresholds[CS42L42_NUM_BIASES];
+ 	int ret;
+ 	int i;
+ 
+-	ret = of_property_read_u32(np, "cirrus,ts-inv", &val);
+-
++	ret = device_property_read_u32(dev, "cirrus,ts-inv", &val);
+ 	if (!ret) {
+ 		switch (val) {
+ 		case CS42L42_TS_INV_EN:
+@@ -1572,7 +1572,7 @@ static int cs42l42_handle_device_data(struct i2c_client *i2c_client,
+ 			cs42l42->ts_inv = val;
+ 			break;
+ 		default:
+-			dev_err(&i2c_client->dev,
++			dev_err(dev,
+ 				"Wrong cirrus,ts-inv DT value %d\n",
+ 				val);
+ 			cs42l42->ts_inv = CS42L42_TS_INV_DIS;
+@@ -1581,12 +1581,7 @@ static int cs42l42_handle_device_data(struct i2c_client *i2c_client,
+ 		cs42l42->ts_inv = CS42L42_TS_INV_DIS;
+ 	}
+ 
+-	regmap_update_bits(cs42l42->regmap, CS42L42_TSENSE_CTL,
+-			CS42L42_TS_INV_MASK,
+-			(cs42l42->ts_inv << CS42L42_TS_INV_SHIFT));
+-
+-	ret = of_property_read_u32(np, "cirrus,ts-dbnc-rise", &val);
+-
++	ret = device_property_read_u32(dev, "cirrus,ts-dbnc-rise", &val);
+ 	if (!ret) {
+ 		switch (val) {
+ 		case CS42L42_TS_DBNCE_0:
+@@ -1600,7 +1595,7 @@ static int cs42l42_handle_device_data(struct i2c_client *i2c_client,
+ 			cs42l42->ts_dbnc_rise = val;
+ 			break;
+ 		default:
+-			dev_err(&i2c_client->dev,
++			dev_err(dev,
+ 				"Wrong cirrus,ts-dbnc-rise DT value %d\n",
+ 				val);
+ 			cs42l42->ts_dbnc_rise = CS42L42_TS_DBNCE_1000;
+@@ -1614,8 +1609,7 @@ static int cs42l42_handle_device_data(struct i2c_client *i2c_client,
+ 			(cs42l42->ts_dbnc_rise <<
+ 			CS42L42_TS_RISE_DBNCE_TIME_SHIFT));
+ 
+-	ret = of_property_read_u32(np, "cirrus,ts-dbnc-fall", &val);
+-
++	ret = device_property_read_u32(dev, "cirrus,ts-dbnc-fall", &val);
+ 	if (!ret) {
+ 		switch (val) {
+ 		case CS42L42_TS_DBNCE_0:
+@@ -1629,7 +1623,7 @@ static int cs42l42_handle_device_data(struct i2c_client *i2c_client,
+ 			cs42l42->ts_dbnc_fall = val;
+ 			break;
+ 		default:
+-			dev_err(&i2c_client->dev,
++			dev_err(dev,
+ 				"Wrong cirrus,ts-dbnc-fall DT value %d\n",
+ 				val);
+ 			cs42l42->ts_dbnc_fall = CS42L42_TS_DBNCE_0;
+@@ -1643,13 +1637,12 @@ static int cs42l42_handle_device_data(struct i2c_client *i2c_client,
+ 			(cs42l42->ts_dbnc_fall <<
+ 			CS42L42_TS_FALL_DBNCE_TIME_SHIFT));
+ 
+-	ret = of_property_read_u32(np, "cirrus,btn-det-init-dbnce", &val);
+-
++	ret = device_property_read_u32(dev, "cirrus,btn-det-init-dbnce", &val);
+ 	if (!ret) {
+ 		if (val <= CS42L42_BTN_DET_INIT_DBNCE_MAX)
+ 			cs42l42->btn_det_init_dbnce = val;
+ 		else {
+-			dev_err(&i2c_client->dev,
++			dev_err(dev,
+ 				"Wrong cirrus,btn-det-init-dbnce DT value %d\n",
+ 				val);
+ 			cs42l42->btn_det_init_dbnce =
+@@ -1660,14 +1653,13 @@ static int cs42l42_handle_device_data(struct i2c_client *i2c_client,
+ 			CS42L42_BTN_DET_INIT_DBNCE_DEFAULT;
+ 	}
+ 
+-	ret = of_property_read_u32(np, "cirrus,btn-det-event-dbnce", &val);
+-
++	ret = device_property_read_u32(dev, "cirrus,btn-det-event-dbnce", &val);
+ 	if (!ret) {
+ 		if (val <= CS42L42_BTN_DET_EVENT_DBNCE_MAX)
+ 			cs42l42->btn_det_event_dbnce = val;
+ 		else {
+-			dev_err(&i2c_client->dev,
+-			"Wrong cirrus,btn-det-event-dbnce DT value %d\n", val);
++			dev_err(dev,
++				"Wrong cirrus,btn-det-event-dbnce DT value %d\n", val);
+ 			cs42l42->btn_det_event_dbnce =
+ 				CS42L42_BTN_DET_EVENT_DBNCE_DEFAULT;
+ 		}
+@@ -1676,19 +1668,17 @@ static int cs42l42_handle_device_data(struct i2c_client *i2c_client,
+ 			CS42L42_BTN_DET_EVENT_DBNCE_DEFAULT;
+ 	}
+ 
+-	ret = of_property_read_u32_array(np, "cirrus,bias-lvls",
+-				   (u32 *)thresholds, CS42L42_NUM_BIASES);
+-
++	ret = device_property_read_u32_array(dev, "cirrus,bias-lvls",
++					     thresholds, ARRAY_SIZE(thresholds));
+ 	if (!ret) {
+ 		for (i = 0; i < CS42L42_NUM_BIASES; i++) {
+ 			if (thresholds[i] <= CS42L42_HS_DET_LEVEL_MAX)
+ 				cs42l42->bias_thresholds[i] = thresholds[i];
+ 			else {
+-				dev_err(&i2c_client->dev,
+-				"Wrong cirrus,bias-lvls[%d] DT value %d\n", i,
++				dev_err(dev,
++					"Wrong cirrus,bias-lvls[%d] DT value %d\n", i,
+ 					thresholds[i]);
+-				cs42l42->bias_thresholds[i] =
+-					threshold_defaults[i];
++				cs42l42->bias_thresholds[i] = threshold_defaults[i];
+ 			}
+ 		}
+ 	} else {
+@@ -1696,8 +1686,7 @@ static int cs42l42_handle_device_data(struct i2c_client *i2c_client,
+ 			cs42l42->bias_thresholds[i] = threshold_defaults[i];
+ 	}
+ 
+-	ret = of_property_read_u32(np, "cirrus,hs-bias-ramp-rate", &val);
+-
++	ret = device_property_read_u32(dev, "cirrus,hs-bias-ramp-rate", &val);
+ 	if (!ret) {
+ 		switch (val) {
+ 		case CS42L42_HSBIAS_RAMP_FAST_RISE_SLOW_FALL:
+@@ -1717,7 +1706,7 @@ static int cs42l42_handle_device_data(struct i2c_client *i2c_client,
+ 			cs42l42->hs_bias_ramp_time = CS42L42_HSBIAS_RAMP_TIME3;
+ 			break;
+ 		default:
+-			dev_err(&i2c_client->dev,
++			dev_err(dev,
+ 				"Wrong cirrus,hs-bias-ramp-rate DT value %d\n",
+ 				val);
+ 			cs42l42->hs_bias_ramp_rate = CS42L42_HSBIAS_RAMP_SLOW;
+@@ -1781,8 +1770,10 @@ static int cs42l42_i2c_probe(struct i2c_client *i2c_client,
+ 	/* Reset the Device */
+ 	cs42l42->reset_gpio = devm_gpiod_get_optional(&i2c_client->dev,
+ 		"reset", GPIOD_OUT_LOW);
+-	if (IS_ERR(cs42l42->reset_gpio))
+-		return PTR_ERR(cs42l42->reset_gpio);
++	if (IS_ERR(cs42l42->reset_gpio)) {
++		ret = PTR_ERR(cs42l42->reset_gpio);
++		goto err_disable;
++	}
+ 
+ 	if (cs42l42->reset_gpio) {
+ 		dev_dbg(&i2c_client->dev, "Found reset GPIO\n");
+@@ -1796,8 +1787,9 @@ static int cs42l42_i2c_probe(struct i2c_client *i2c_client,
+ 			NULL, cs42l42_irq_thread,
+ 			IRQF_ONESHOT | IRQF_TRIGGER_LOW,
+ 			"cs42l42", cs42l42);
+-
+-	if (ret != 0)
++	if (ret == -EPROBE_DEFER)
++		goto err_disable;
++	else if (ret != 0)
+ 		dev_err(&i2c_client->dev,
+ 			"Failed to request IRQ: %d\n", ret);
+ 
+@@ -1816,13 +1808,13 @@ static int cs42l42_i2c_probe(struct i2c_client *i2c_client,
+ 		dev_err(&i2c_client->dev,
+ 			"CS42L42 Device ID (%X). Expected %X\n",
+ 			devid, CS42L42_CHIP_ID);
+-		return ret;
++		goto err_disable;
+ 	}
+ 
+ 	ret = regmap_read(cs42l42->regmap, CS42L42_REVID, &reg);
+ 	if (ret < 0) {
+ 		dev_err(&i2c_client->dev, "Get Revision ID failed\n");
+-		return ret;
++		goto err_disable;
+ 	}
+ 
+ 	dev_info(&i2c_client->dev,
+@@ -1845,11 +1837,9 @@ static int cs42l42_i2c_probe(struct i2c_client *i2c_client,
+ 			(1 << CS42L42_ADC_PDN_SHIFT) |
+ 			(0 << CS42L42_PDN_ALL_SHIFT));
+ 
+-	if (i2c_client->dev.of_node) {
+-		ret = cs42l42_handle_device_data(i2c_client, cs42l42);
+-		if (ret != 0)
+-			return ret;
+-	}
++	ret = cs42l42_handle_device_data(&i2c_client->dev, cs42l42);
++	if (ret != 0)
++		goto err_disable;
+ 
+ 	/* Setup headset detection */
+ 	cs42l42_setup_hs_type_detect(cs42l42);
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index e677422c10585..1332965968646 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -2454,6 +2454,7 @@ int snd_soc_component_initialize(struct snd_soc_component *component,
+ 	INIT_LIST_HEAD(&component->dai_list);
+ 	INIT_LIST_HEAD(&component->dobj_list);
+ 	INIT_LIST_HEAD(&component->card_list);
++	INIT_LIST_HEAD(&component->list);
+ 	mutex_init(&component->io_mutex);
+ 
+ 	component->name = fmt_single_name(dev, &component->id);
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index 69313fbdb636a..b6327c30c2b5a 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -2590,6 +2590,15 @@ static int sof_widget_unload(struct snd_soc_component *scomp,
+ 
+ 		/* power down the pipeline schedule core */
+ 		pipeline = swidget->private;
++
++		/*
++		 * Runtime PM should still function normally if topology loading fails and
++		 * it's components are unloaded. Do not power down the primary core so that the
++		 * CTX_SAVE IPC can succeed during runtime suspend.
++		 */
++		if (pipeline->core == SOF_DSP_PRIMARY_CORE)
++			break;
++
+ 		ret = snd_sof_dsp_core_power_down(sdev, 1 << pipeline->core);
+ 		if (ret < 0)
+ 			dev_err(scomp->dev, "error: powering down pipeline schedule core %d\n",
+diff --git a/sound/synth/emux/emux.c b/sound/synth/emux/emux.c
+index f65e6c7b139f5..6695530bba9b3 100644
+--- a/sound/synth/emux/emux.c
++++ b/sound/synth/emux/emux.c
+@@ -88,7 +88,7 @@ int snd_emux_register(struct snd_emux *emu, struct snd_card *card, int index, ch
+ 	emu->name = kstrdup(name, GFP_KERNEL);
+ 	emu->voices = kcalloc(emu->max_voices, sizeof(struct snd_emux_voice),
+ 			      GFP_KERNEL);
+-	if (emu->voices == NULL)
++	if (emu->name == NULL || emu->voices == NULL)
+ 		return -ENOMEM;
+ 
+ 	/* create soundfont list */
+diff --git a/sound/usb/6fire/comm.c b/sound/usb/6fire/comm.c
+index 43a2a62d66f7e..49629d4bb327a 100644
+--- a/sound/usb/6fire/comm.c
++++ b/sound/usb/6fire/comm.c
+@@ -95,7 +95,7 @@ static int usb6fire_comm_send_buffer(u8 *buffer, struct usb_device *dev)
+ 	int actual_len;
+ 
+ 	ret = usb_interrupt_msg(dev, usb_sndintpipe(dev, COMM_EP),
+-			buffer, buffer[1] + 2, &actual_len, HZ);
++			buffer, buffer[1] + 2, &actual_len, 1000);
+ 	if (ret < 0)
+ 		return ret;
+ 	else if (actual_len != buffer[1] + 2)
+diff --git a/sound/usb/6fire/firmware.c b/sound/usb/6fire/firmware.c
+index 8981e61f2da4a..c51abc54d2f84 100644
+--- a/sound/usb/6fire/firmware.c
++++ b/sound/usb/6fire/firmware.c
+@@ -160,7 +160,7 @@ static int usb6fire_fw_ezusb_write(struct usb_device *device,
+ {
+ 	return usb_control_msg_send(device, 0, type,
+ 				    USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-				    value, 0, data, len, HZ, GFP_KERNEL);
++				    value, 0, data, len, 1000, GFP_KERNEL);
+ }
+ 
+ static int usb6fire_fw_ezusb_read(struct usb_device *device,
+@@ -168,7 +168,7 @@ static int usb6fire_fw_ezusb_read(struct usb_device *device,
+ {
+ 	return usb_control_msg_recv(device, 0, type,
+ 				    USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-				    value, 0, data, len, HZ, GFP_KERNEL);
++				    value, 0, data, len, 1000, GFP_KERNEL);
+ }
+ 
+ static int usb6fire_fw_fpga_write(struct usb_device *device,
+@@ -178,7 +178,7 @@ static int usb6fire_fw_fpga_write(struct usb_device *device,
+ 	int ret;
+ 
+ 	ret = usb_bulk_msg(device, usb_sndbulkpipe(device, FPGA_EP), data, len,
+-			&actual_len, HZ);
++			&actual_len, 1000);
+ 	if (ret < 0)
+ 		return ret;
+ 	else if (actual_len != len)
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 5c5b76c611480..4693384db0695 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -410,6 +410,7 @@ static int line6_parse_audio_format_rates_quirk(struct snd_usb_audio *chip,
+ 	case USB_ID(0x0e41, 0x4242): /* Line6 Helix Rack */
+ 	case USB_ID(0x0e41, 0x4244): /* Line6 Helix LT */
+ 	case USB_ID(0x0e41, 0x4246): /* Line6 HX-Stomp */
++	case USB_ID(0x0e41, 0x4253): /* Line6 HX-Stomp XL */
+ 	case USB_ID(0x0e41, 0x4247): /* Line6 Pod Go */
+ 	case USB_ID(0x0e41, 0x4248): /* Line6 Helix >= fw 2.82 */
+ 	case USB_ID(0x0e41, 0x4249): /* Line6 Helix Rack >= fw 2.82 */
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index 9602929b7de90..59faa5a9a7141 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -113,12 +113,12 @@ int line6_send_raw_message(struct usb_line6 *line6, const char *buffer,
+ 			retval = usb_interrupt_msg(line6->usbdev,
+ 						usb_sndintpipe(line6->usbdev, properties->ep_ctrl_w),
+ 						(char *)frag_buf, frag_size,
+-						&partial, LINE6_TIMEOUT * HZ);
++						&partial, LINE6_TIMEOUT);
+ 		} else {
+ 			retval = usb_bulk_msg(line6->usbdev,
+ 						usb_sndbulkpipe(line6->usbdev, properties->ep_ctrl_w),
+ 						(char *)frag_buf, frag_size,
+-						&partial, LINE6_TIMEOUT * HZ);
++						&partial, LINE6_TIMEOUT);
+ 		}
+ 
+ 		if (retval) {
+@@ -347,7 +347,7 @@ int line6_read_data(struct usb_line6 *line6, unsigned address, void *data,
+ 	ret = usb_control_msg_send(usbdev, 0, 0x67,
+ 				   USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+ 				   (datalen << 8) | 0x21, address, NULL, 0,
+-				   LINE6_TIMEOUT * HZ, GFP_KERNEL);
++				   LINE6_TIMEOUT, GFP_KERNEL);
+ 	if (ret) {
+ 		dev_err(line6->ifcdev, "read request failed (error %d)\n", ret);
+ 		goto exit;
+@@ -360,7 +360,7 @@ int line6_read_data(struct usb_line6 *line6, unsigned address, void *data,
+ 		ret = usb_control_msg_recv(usbdev, 0, 0x67,
+ 					   USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+ 					   0x0012, 0x0000, &len, 1,
+-					   LINE6_TIMEOUT * HZ, GFP_KERNEL);
++					   LINE6_TIMEOUT, GFP_KERNEL);
+ 		if (ret) {
+ 			dev_err(line6->ifcdev,
+ 				"receive length failed (error %d)\n", ret);
+@@ -387,7 +387,7 @@ int line6_read_data(struct usb_line6 *line6, unsigned address, void *data,
+ 	/* receive the result: */
+ 	ret = usb_control_msg_recv(usbdev, 0, 0x67,
+ 				   USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-				   0x0013, 0x0000, data, datalen, LINE6_TIMEOUT * HZ,
++				   0x0013, 0x0000, data, datalen, LINE6_TIMEOUT,
+ 				   GFP_KERNEL);
+ 	if (ret)
+ 		dev_err(line6->ifcdev, "read failed (error %d)\n", ret);
+@@ -417,7 +417,7 @@ int line6_write_data(struct usb_line6 *line6, unsigned address, void *data,
+ 
+ 	ret = usb_control_msg_send(usbdev, 0, 0x67,
+ 				   USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+-				   0x0022, address, data, datalen, LINE6_TIMEOUT * HZ,
++				   0x0022, address, data, datalen, LINE6_TIMEOUT,
+ 				   GFP_KERNEL);
+ 	if (ret) {
+ 		dev_err(line6->ifcdev,
+@@ -430,7 +430,7 @@ int line6_write_data(struct usb_line6 *line6, unsigned address, void *data,
+ 
+ 		ret = usb_control_msg_recv(usbdev, 0, 0x67,
+ 					   USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-					   0x0012, 0x0000, status, 1, LINE6_TIMEOUT * HZ,
++					   0x0012, 0x0000, status, 1, LINE6_TIMEOUT,
+ 					   GFP_KERNEL);
+ 		if (ret) {
+ 			dev_err(line6->ifcdev,
+diff --git a/sound/usb/line6/driver.h b/sound/usb/line6/driver.h
+index 71d3da1db8c81..ecf3a2b39c7eb 100644
+--- a/sound/usb/line6/driver.h
++++ b/sound/usb/line6/driver.h
+@@ -27,7 +27,7 @@
+ #define LINE6_FALLBACK_INTERVAL 10
+ #define LINE6_FALLBACK_MAXPACKETSIZE 16
+ 
+-#define LINE6_TIMEOUT 1
++#define LINE6_TIMEOUT 1000
+ #define LINE6_BUFSIZE_LISTEN 64
+ #define LINE6_MIDI_MESSAGE_MAXLEN 256
+ 
+diff --git a/sound/usb/line6/podhd.c b/sound/usb/line6/podhd.c
+index 28794a35949d4..b24bc82f89e37 100644
+--- a/sound/usb/line6/podhd.c
++++ b/sound/usb/line6/podhd.c
+@@ -190,7 +190,7 @@ static int podhd_dev_start(struct usb_line6_podhd *pod)
+ 	ret = usb_control_msg_send(usbdev, 0,
+ 					0x67, USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+ 					0x11, 0,
+-					NULL, 0, LINE6_TIMEOUT * HZ, GFP_KERNEL);
++					NULL, 0, LINE6_TIMEOUT, GFP_KERNEL);
+ 	if (ret) {
+ 		dev_err(pod->line6.ifcdev, "read request failed (error %d)\n", ret);
+ 		goto exit;
+@@ -200,7 +200,7 @@ static int podhd_dev_start(struct usb_line6_podhd *pod)
+ 	ret = usb_control_msg_recv(usbdev, 0, 0x67,
+ 					USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+ 					0x11, 0x0,
+-					init_bytes, 3, LINE6_TIMEOUT * HZ, GFP_KERNEL);
++					init_bytes, 3, LINE6_TIMEOUT, GFP_KERNEL);
+ 	if (ret) {
+ 		dev_err(pod->line6.ifcdev,
+ 			"receive length failed (error %d)\n", ret);
+@@ -220,7 +220,7 @@ static int podhd_dev_start(struct usb_line6_podhd *pod)
+ 					USB_REQ_SET_FEATURE,
+ 					USB_TYPE_STANDARD | USB_RECIP_DEVICE | USB_DIR_OUT,
+ 					1, 0,
+-					NULL, 0, LINE6_TIMEOUT * HZ, GFP_KERNEL);
++					NULL, 0, LINE6_TIMEOUT, GFP_KERNEL);
+ exit:
+ 	return ret;
+ }
+diff --git a/sound/usb/line6/toneport.c b/sound/usb/line6/toneport.c
+index 4e5693c97aa42..e33df58740a91 100644
+--- a/sound/usb/line6/toneport.c
++++ b/sound/usb/line6/toneport.c
+@@ -128,7 +128,7 @@ static int toneport_send_cmd(struct usb_device *usbdev, int cmd1, int cmd2)
+ 
+ 	ret = usb_control_msg_send(usbdev, 0, 0x67,
+ 				   USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+-				   cmd1, cmd2, NULL, 0, LINE6_TIMEOUT * HZ,
++				   cmd1, cmd2, NULL, 0, LINE6_TIMEOUT,
+ 				   GFP_KERNEL);
+ 
+ 	if (ret) {
+diff --git a/sound/usb/misc/ua101.c b/sound/usb/misc/ua101.c
+index 6b30155964ec0..5dd1cbe29d12b 100644
+--- a/sound/usb/misc/ua101.c
++++ b/sound/usb/misc/ua101.c
+@@ -1001,7 +1001,7 @@ static int detect_usb_format(struct ua101 *ua)
+ 		fmt_playback->bSubframeSize * ua->playback.channels;
+ 
+ 	epd = &ua->intf[INTF_CAPTURE]->altsetting[1].endpoint[0].desc;
+-	if (!usb_endpoint_is_isoc_in(epd)) {
++	if (!usb_endpoint_is_isoc_in(epd) || usb_endpoint_maxp(epd) == 0) {
+ 		dev_err(&ua->dev->dev, "invalid capture endpoint\n");
+ 		return -ENXIO;
+ 	}
+@@ -1009,7 +1009,7 @@ static int detect_usb_format(struct ua101 *ua)
+ 	ua->capture.max_packet_bytes = usb_endpoint_maxp(epd);
+ 
+ 	epd = &ua->intf[INTF_PLAYBACK]->altsetting[1].endpoint[0].desc;
+-	if (!usb_endpoint_is_isoc_out(epd)) {
++	if (!usb_endpoint_is_isoc_out(epd) || usb_endpoint_maxp(epd) == 0) {
+ 		dev_err(&ua->dev->dev, "invalid playback endpoint\n");
+ 		return -ENXIO;
+ 	}
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index a45b27a2ed4ec..75d4d317b34b6 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1897,6 +1897,7 @@ static const struct registration_quirk registration_quirks[] = {
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ea, 2),	/* Kingston HyperX Cloud Flight S */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x1f46, 2),	/* JBL Quantum 600 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x1f47, 2),	/* JBL Quantum 800 */
++	REG_QUIRK_ENTRY(0x0ecb, 0x1f4c, 2),	/* JBL Quantum 400 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x2039, 2),	/* JBL Quantum 400 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x203c, 2),	/* JBL Quantum 600 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x203e, 2),	/* JBL Quantum 800 */
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index 14237ffb90bae..592536904dde2 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -304,18 +304,12 @@ static void show_prog_metadata(int fd, __u32 num_maps)
+ 		if (printed_header)
+ 			jsonw_end_object(json_wtr);
+ 	} else {
+-		json_writer_t *btf_wtr = jsonw_new(stdout);
++		json_writer_t *btf_wtr;
+ 		struct btf_dumper d = {
+ 			.btf = btf,
+-			.jw = btf_wtr,
+ 			.is_plain_text = true,
+ 		};
+ 
+-		if (!btf_wtr) {
+-			p_err("jsonw alloc failed");
+-			goto out_free;
+-		}
+-
+ 		for (i = 0; i < vlen; i++, vsi++) {
+ 			t_var = btf__type_by_id(btf, vsi->type);
+ 			name = btf__name_by_offset(btf, t_var->name_off);
+@@ -325,6 +319,14 @@ static void show_prog_metadata(int fd, __u32 num_maps)
+ 
+ 			if (!printed_header) {
+ 				printf("\tmetadata:");
++
++				btf_wtr = jsonw_new(stdout);
++				if (!btf_wtr) {
++					p_err("jsonw alloc failed");
++					goto out_free;
++				}
++				d.jw = btf_wtr,
++
+ 				printed_header = true;
+ 			}
+ 
+diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
+index 4538ed762a209..f05cfc082915d 100644
+--- a/tools/lib/bpf/bpf_core_read.h
++++ b/tools/lib/bpf/bpf_core_read.h
+@@ -40,7 +40,7 @@ enum bpf_enum_value_kind {
+ #define __CORE_RELO(src, field, info)					      \
+ 	__builtin_preserve_field_info((src)->field, BPF_FIELD_##info)
+ 
+-#if __BYTE_ORDER == __LITTLE_ENDIAN
++#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ #define __CORE_BITFIELD_PROBE_READ(dst, src, fld)			      \
+ 	bpf_probe_read_kernel(						      \
+ 			(void *)dst,				      \
+diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
+index 231b07203e3d2..e6f644cdc9f15 100644
+--- a/tools/lib/bpf/btf.c
++++ b/tools/lib/bpf/btf.c
+@@ -205,32 +205,29 @@ static int btf_parse_hdr(struct btf *btf)
+ 		}
+ 		btf_bswap_hdr(hdr);
+ 	} else if (hdr->magic != BTF_MAGIC) {
+-		pr_debug("Invalid BTF magic:%x\n", hdr->magic);
++		pr_debug("Invalid BTF magic: %x\n", hdr->magic);
+ 		return -EINVAL;
+ 	}
+ 
+-	meta_left = btf->raw_size - sizeof(*hdr);
+-	if (!meta_left) {
+-		pr_debug("BTF has no data\n");
++	if (btf->raw_size < hdr->hdr_len) {
++		pr_debug("BTF header len %u larger than data size %u\n",
++			 hdr->hdr_len, btf->raw_size);
+ 		return -EINVAL;
+ 	}
+ 
+-	if (meta_left < hdr->type_off) {
+-		pr_debug("Invalid BTF type section offset:%u\n", hdr->type_off);
++	meta_left = btf->raw_size - hdr->hdr_len;
++	if (meta_left < (long long)hdr->str_off + hdr->str_len) {
++		pr_debug("Invalid BTF total size: %u\n", btf->raw_size);
+ 		return -EINVAL;
+ 	}
+ 
+-	if (meta_left < hdr->str_off) {
+-		pr_debug("Invalid BTF string section offset:%u\n", hdr->str_off);
++	if ((long long)hdr->type_off + hdr->type_len > hdr->str_off) {
++		pr_debug("Invalid BTF data sections layout: type data at %u + %u, strings data at %u + %u\n",
++			 hdr->type_off, hdr->type_len, hdr->str_off, hdr->str_len);
+ 		return -EINVAL;
+ 	}
+ 
+-	if (hdr->type_off >= hdr->str_off) {
+-		pr_debug("BTF type section offset >= string section offset. No type?\n");
+-		return -EINVAL;
+-	}
+-
+-	if (hdr->type_off & 0x02) {
++	if (hdr->type_off % 4) {
+ 		pr_debug("BTF type section is not aligned to 4 bytes\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 5c83f73ad6687..8932f41c387ff 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -156,6 +156,8 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
+ 		"machine_real_restart",
+ 		"rewind_stack_do_exit",
+ 		"kunit_try_catch_throw",
++		"xen_start_kernel",
++		"cpu_bringup_and_idle",
+ 	};
+ 
+ 	if (!func)
+@@ -952,6 +954,11 @@ static int add_call_destinations(struct objtool_file *file)
+ 		} else
+ 			insn->call_dest = reloc->sym;
+ 
++		if (insn->call_dest && insn->call_dest->static_call_tramp) {
++			list_add_tail(&insn->static_call_node,
++				      &file->static_call_list);
++		}
++
+ 		/*
+ 		 * Many compilers cannot disable KCOV with a function attribute
+ 		 * so they need a little help, NOP out any KCOV calls from noinstr
+@@ -1666,6 +1673,9 @@ static int decode_sections(struct objtool_file *file)
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * Must be before add_{jump_call}_destination.
++	 */
+ 	ret = read_static_call_tramps(file);
+ 	if (ret)
+ 		return ret;
+@@ -1678,6 +1688,10 @@ static int decode_sections(struct objtool_file *file)
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * Must be before add_call_destination(); it changes INSN_CALL to
++	 * INSN_JUMP.
++	 */
+ 	ret = read_intra_function_calls(file);
+ 	if (ret)
+ 		return ret;
+@@ -2532,11 +2546,6 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ 			if (dead_end_function(file, insn->call_dest))
+ 				return 0;
+ 
+-			if (insn->type == INSN_CALL && insn->call_dest->static_call_tramp) {
+-				list_add_tail(&insn->static_call_node,
+-					      &file->static_call_list);
+-			}
+-
+ 			break;
+ 
+ 		case INSN_JUMP_CONDITIONAL:
+diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
+index 3742511a08d15..c8101575dbf45 100644
+--- a/tools/perf/util/bpf-event.c
++++ b/tools/perf/util/bpf-event.c
+@@ -557,7 +557,7 @@ void bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
+ 		synthesize_bpf_prog_name(name, KSYM_NAME_LEN, info, btf, 0);
+ 		fprintf(fp, "# bpf_prog_info %u: %s addr 0x%llx size %u\n",
+ 			info->id, name, prog_addrs[0], prog_lens[0]);
+-		return;
++		goto out;
+ 	}
+ 
+ 	fprintf(fp, "# bpf_prog_info %u:\n", info->id);
+@@ -567,4 +567,6 @@ void bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
+ 		fprintf(fp, "# \tsub_prog %u: %s addr 0x%llx size %u\n",
+ 			i, name, prog_addrs[i], prog_lens[i]);
+ 	}
++out:
++	btf__free(btf);
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/perf_buffer.c b/tools/testing/selftests/bpf/prog_tests/perf_buffer.c
+index ca9f0895ec84e..8d75475408f57 100644
+--- a/tools/testing/selftests/bpf/prog_tests/perf_buffer.c
++++ b/tools/testing/selftests/bpf/prog_tests/perf_buffer.c
+@@ -107,8 +107,8 @@ void test_perf_buffer(void)
+ 		  "expect %d, seen %d\n", nr_on_cpus, CPU_COUNT(&cpu_seen)))
+ 		goto out_free_pb;
+ 
+-	if (CHECK(perf_buffer__buffer_cnt(pb) != nr_cpus, "buf_cnt",
+-		  "got %zu, expected %d\n", perf_buffer__buffer_cnt(pb), nr_cpus))
++	if (CHECK(perf_buffer__buffer_cnt(pb) != nr_on_cpus, "buf_cnt",
++		  "got %zu, expected %d\n", perf_buffer__buffer_cnt(pb), nr_on_cpus))
+ 		goto out_close;
+ 
+ 	for (i = 0; i < nr_cpus; i++) {
+diff --git a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+index 9ff0412e1fd38..b4c9f4a96ae4d 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
++++ b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+@@ -241,6 +241,48 @@ fail:
+ 	return -1;
+ }
+ 
++static __u64 socket_cookie(int fd)
++{
++	__u64 cookie;
++	socklen_t cookie_len = sizeof(cookie);
++
++	if (CHECK(getsockopt(fd, SOL_SOCKET, SO_COOKIE, &cookie, &cookie_len) < 0,
++		  "getsockopt(SO_COOKIE)", "%s\n", strerror(errno)))
++		return 0;
++	return cookie;
++}
++
++static int fill_sk_lookup_ctx(struct bpf_sk_lookup *ctx, const char *local_ip, __u16 local_port,
++			      const char *remote_ip, __u16 remote_port)
++{
++	void *local, *remote;
++	int err;
++
++	memset(ctx, 0, sizeof(*ctx));
++	ctx->local_port = local_port;
++	ctx->remote_port = htons(remote_port);
++
++	if (is_ipv6(local_ip)) {
++		ctx->family = AF_INET6;
++		local = &ctx->local_ip6[0];
++		remote = &ctx->remote_ip6[0];
++	} else {
++		ctx->family = AF_INET;
++		local = &ctx->local_ip4;
++		remote = &ctx->remote_ip4;
++	}
++
++	err = inet_pton(ctx->family, local_ip, local);
++	if (CHECK(err != 1, "inet_pton", "local_ip failed\n"))
++		return 1;
++
++	err = inet_pton(ctx->family, remote_ip, remote);
++	if (CHECK(err != 1, "inet_pton", "remote_ip failed\n"))
++		return 1;
++
++	return 0;
++}
++
+ static int send_byte(int fd)
+ {
+ 	ssize_t n;
+@@ -556,7 +598,7 @@ close:
+ 
+ static void run_lookup_prog(const struct test *t)
+ {
+-	int server_fds[MAX_SERVERS] = { -1 };
++	int server_fds[] = { [0 ... MAX_SERVERS - 1] = -1 };
+ 	int client_fd, reuse_conn_fd = -1;
+ 	struct bpf_link *lookup_link;
+ 	int i, err;
+@@ -1009,18 +1051,27 @@ static void test_drop_on_reuseport(struct test_sk_lookup *skel)
+ 
+ static void run_sk_assign(struct test_sk_lookup *skel,
+ 			  struct bpf_program *lookup_prog,
+-			  const char *listen_ip, const char *connect_ip)
++			  const char *remote_ip, const char *local_ip)
+ {
+-	int client_fd, peer_fd, server_fds[MAX_SERVERS] = { -1 };
+-	struct bpf_link *lookup_link;
++	int server_fds[] = { [0 ... MAX_SERVERS - 1] = -1 };
++	struct bpf_sk_lookup ctx;
++	__u64 server_cookie;
+ 	int i, err;
+ 
+-	lookup_link = attach_lookup_prog(lookup_prog);
+-	if (!lookup_link)
++	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
++		.ctx_in = &ctx,
++		.ctx_size_in = sizeof(ctx),
++		.ctx_out = &ctx,
++		.ctx_size_out = sizeof(ctx),
++	);
++
++	if (fill_sk_lookup_ctx(&ctx, local_ip, EXT_PORT, remote_ip, INT_PORT))
+ 		return;
+ 
++	ctx.protocol = IPPROTO_TCP;
++
+ 	for (i = 0; i < ARRAY_SIZE(server_fds); i++) {
+-		server_fds[i] = make_server(SOCK_STREAM, listen_ip, 0, NULL);
++		server_fds[i] = make_server(SOCK_STREAM, local_ip, 0, NULL);
+ 		if (server_fds[i] < 0)
+ 			goto close_servers;
+ 
+@@ -1030,23 +1081,25 @@ static void run_sk_assign(struct test_sk_lookup *skel,
+ 			goto close_servers;
+ 	}
+ 
+-	client_fd = make_client(SOCK_STREAM, connect_ip, EXT_PORT);
+-	if (client_fd < 0)
++	server_cookie = socket_cookie(server_fds[SERVER_B]);
++	if (!server_cookie)
++		return;
++
++	err = bpf_prog_test_run_opts(bpf_program__fd(lookup_prog), &opts);
++	if (CHECK(err, "test_run", "failed with error %d\n", errno))
++		goto close_servers;
++
++	if (CHECK(ctx.cookie == 0, "ctx.cookie", "no socket selected\n"))
+ 		goto close_servers;
+ 
+-	peer_fd = accept(server_fds[SERVER_B], NULL, NULL);
+-	if (CHECK(peer_fd < 0, "accept", "failed\n"))
+-		goto close_client;
++	CHECK(ctx.cookie != server_cookie, "ctx.cookie",
++	      "selected sk %llu instead of %llu\n", ctx.cookie, server_cookie);
+ 
+-	close(peer_fd);
+-close_client:
+-	close(client_fd);
+ close_servers:
+ 	for (i = 0; i < ARRAY_SIZE(server_fds); i++) {
+ 		if (server_fds[i] != -1)
+ 			close(server_fds[i]);
+ 	}
+-	bpf_link__destroy(lookup_link);
+ }
+ 
+ static void run_sk_assign_v4(struct test_sk_lookup *skel,
+diff --git a/tools/testing/selftests/bpf/progs/strobemeta.h b/tools/testing/selftests/bpf/progs/strobemeta.h
+index 7de534f38c3f1..60c93aee2f4ad 100644
+--- a/tools/testing/selftests/bpf/progs/strobemeta.h
++++ b/tools/testing/selftests/bpf/progs/strobemeta.h
+@@ -358,7 +358,7 @@ static __always_inline uint64_t read_str_var(struct strobemeta_cfg *cfg,
+ 					     void *payload)
+ {
+ 	void *location;
+-	uint32_t len;
++	uint64_t len;
+ 
+ 	data->str_lens[idx] = 0;
+ 	location = calc_location(&cfg->str_locs[idx], tls_base);
+@@ -390,7 +390,7 @@ static __always_inline void *read_map_var(struct strobemeta_cfg *cfg,
+ 	struct strobe_map_descr* descr = &data->map_descrs[idx];
+ 	struct strobe_map_raw map;
+ 	void *location;
+-	uint32_t len;
++	uint64_t len;
+ 	int i;
+ 
+ 	descr->tag_len = 0; /* presume no tag is set */
+diff --git a/tools/testing/selftests/bpf/progs/test_sk_lookup.c b/tools/testing/selftests/bpf/progs/test_sk_lookup.c
+index 1032b292af5b7..ac6f7f205e25d 100644
+--- a/tools/testing/selftests/bpf/progs/test_sk_lookup.c
++++ b/tools/testing/selftests/bpf/progs/test_sk_lookup.c
+@@ -64,6 +64,10 @@ static const int PROG_DONE = 1;
+ static const __u32 KEY_SERVER_A = SERVER_A;
+ static const __u32 KEY_SERVER_B = SERVER_B;
+ 
++static const __u16 SRC_PORT = bpf_htons(8008);
++static const __u32 SRC_IP4 = IP4(127, 0, 0, 2);
++static const __u32 SRC_IP6[] = IP6(0xfd000000, 0x0, 0x0, 0x00000002);
++
+ static const __u16 DST_PORT = 7007; /* Host byte order */
+ static const __u32 DST_IP4 = IP4(127, 0, 0, 1);
+ static const __u32 DST_IP6[] = IP6(0xfd000000, 0x0, 0x0, 0x00000001);
+@@ -398,11 +402,12 @@ int ctx_narrow_access(struct bpf_sk_lookup *ctx)
+ 	if (LSW(ctx->protocol, 0) != IPPROTO_TCP)
+ 		return SK_DROP;
+ 
+-	/* Narrow loads from remote_port field. Expect non-0 value. */
+-	if (LSB(ctx->remote_port, 0) == 0 && LSB(ctx->remote_port, 1) == 0 &&
+-	    LSB(ctx->remote_port, 2) == 0 && LSB(ctx->remote_port, 3) == 0)
++	/* Narrow loads from remote_port field. Expect SRC_PORT. */
++	if (LSB(ctx->remote_port, 0) != ((SRC_PORT >> 0) & 0xff) ||
++	    LSB(ctx->remote_port, 1) != ((SRC_PORT >> 8) & 0xff) ||
++	    LSB(ctx->remote_port, 2) != 0 || LSB(ctx->remote_port, 3) != 0)
+ 		return SK_DROP;
+-	if (LSW(ctx->remote_port, 0) == 0)
++	if (LSW(ctx->remote_port, 0) != SRC_PORT)
+ 		return SK_DROP;
+ 
+ 	/* Narrow loads from local_port field. Expect DST_PORT. */
+@@ -415,11 +420,14 @@ int ctx_narrow_access(struct bpf_sk_lookup *ctx)
+ 
+ 	/* Narrow loads from IPv4 fields */
+ 	if (v4) {
+-		/* Expect non-0.0.0.0 in remote_ip4 */
+-		if (LSB(ctx->remote_ip4, 0) == 0 && LSB(ctx->remote_ip4, 1) == 0 &&
+-		    LSB(ctx->remote_ip4, 2) == 0 && LSB(ctx->remote_ip4, 3) == 0)
++		/* Expect SRC_IP4 in remote_ip4 */
++		if (LSB(ctx->remote_ip4, 0) != ((SRC_IP4 >> 0) & 0xff) ||
++		    LSB(ctx->remote_ip4, 1) != ((SRC_IP4 >> 8) & 0xff) ||
++		    LSB(ctx->remote_ip4, 2) != ((SRC_IP4 >> 16) & 0xff) ||
++		    LSB(ctx->remote_ip4, 3) != ((SRC_IP4 >> 24) & 0xff))
+ 			return SK_DROP;
+-		if (LSW(ctx->remote_ip4, 0) == 0 && LSW(ctx->remote_ip4, 1) == 0)
++		if (LSW(ctx->remote_ip4, 0) != ((SRC_IP4 >> 0) & 0xffff) ||
++		    LSW(ctx->remote_ip4, 1) != ((SRC_IP4 >> 16) & 0xffff))
+ 			return SK_DROP;
+ 
+ 		/* Expect DST_IP4 in local_ip4 */
+@@ -448,20 +456,32 @@ int ctx_narrow_access(struct bpf_sk_lookup *ctx)
+ 
+ 	/* Narrow loads from IPv6 fields */
+ 	if (!v4) {
+-		/* Expect non-:: IP in remote_ip6 */
+-		if (LSB(ctx->remote_ip6[0], 0) == 0 && LSB(ctx->remote_ip6[0], 1) == 0 &&
+-		    LSB(ctx->remote_ip6[0], 2) == 0 && LSB(ctx->remote_ip6[0], 3) == 0 &&
+-		    LSB(ctx->remote_ip6[1], 0) == 0 && LSB(ctx->remote_ip6[1], 1) == 0 &&
+-		    LSB(ctx->remote_ip6[1], 2) == 0 && LSB(ctx->remote_ip6[1], 3) == 0 &&
+-		    LSB(ctx->remote_ip6[2], 0) == 0 && LSB(ctx->remote_ip6[2], 1) == 0 &&
+-		    LSB(ctx->remote_ip6[2], 2) == 0 && LSB(ctx->remote_ip6[2], 3) == 0 &&
+-		    LSB(ctx->remote_ip6[3], 0) == 0 && LSB(ctx->remote_ip6[3], 1) == 0 &&
+-		    LSB(ctx->remote_ip6[3], 2) == 0 && LSB(ctx->remote_ip6[3], 3) == 0)
++		/* Expect SRC_IP6 in remote_ip6 */
++		if (LSB(ctx->remote_ip6[0], 0) != ((SRC_IP6[0] >> 0) & 0xff) ||
++		    LSB(ctx->remote_ip6[0], 1) != ((SRC_IP6[0] >> 8) & 0xff) ||
++		    LSB(ctx->remote_ip6[0], 2) != ((SRC_IP6[0] >> 16) & 0xff) ||
++		    LSB(ctx->remote_ip6[0], 3) != ((SRC_IP6[0] >> 24) & 0xff) ||
++		    LSB(ctx->remote_ip6[1], 0) != ((SRC_IP6[1] >> 0) & 0xff) ||
++		    LSB(ctx->remote_ip6[1], 1) != ((SRC_IP6[1] >> 8) & 0xff) ||
++		    LSB(ctx->remote_ip6[1], 2) != ((SRC_IP6[1] >> 16) & 0xff) ||
++		    LSB(ctx->remote_ip6[1], 3) != ((SRC_IP6[1] >> 24) & 0xff) ||
++		    LSB(ctx->remote_ip6[2], 0) != ((SRC_IP6[2] >> 0) & 0xff) ||
++		    LSB(ctx->remote_ip6[2], 1) != ((SRC_IP6[2] >> 8) & 0xff) ||
++		    LSB(ctx->remote_ip6[2], 2) != ((SRC_IP6[2] >> 16) & 0xff) ||
++		    LSB(ctx->remote_ip6[2], 3) != ((SRC_IP6[2] >> 24) & 0xff) ||
++		    LSB(ctx->remote_ip6[3], 0) != ((SRC_IP6[3] >> 0) & 0xff) ||
++		    LSB(ctx->remote_ip6[3], 1) != ((SRC_IP6[3] >> 8) & 0xff) ||
++		    LSB(ctx->remote_ip6[3], 2) != ((SRC_IP6[3] >> 16) & 0xff) ||
++		    LSB(ctx->remote_ip6[3], 3) != ((SRC_IP6[3] >> 24) & 0xff))
+ 			return SK_DROP;
+-		if (LSW(ctx->remote_ip6[0], 0) == 0 && LSW(ctx->remote_ip6[0], 1) == 0 &&
+-		    LSW(ctx->remote_ip6[1], 0) == 0 && LSW(ctx->remote_ip6[1], 1) == 0 &&
+-		    LSW(ctx->remote_ip6[2], 0) == 0 && LSW(ctx->remote_ip6[2], 1) == 0 &&
+-		    LSW(ctx->remote_ip6[3], 0) == 0 && LSW(ctx->remote_ip6[3], 1) == 0)
++		if (LSW(ctx->remote_ip6[0], 0) != ((SRC_IP6[0] >> 0) & 0xffff) ||
++		    LSW(ctx->remote_ip6[0], 1) != ((SRC_IP6[0] >> 16) & 0xffff) ||
++		    LSW(ctx->remote_ip6[1], 0) != ((SRC_IP6[1] >> 0) & 0xffff) ||
++		    LSW(ctx->remote_ip6[1], 1) != ((SRC_IP6[1] >> 16) & 0xffff) ||
++		    LSW(ctx->remote_ip6[2], 0) != ((SRC_IP6[2] >> 0) & 0xffff) ||
++		    LSW(ctx->remote_ip6[2], 1) != ((SRC_IP6[2] >> 16) & 0xffff) ||
++		    LSW(ctx->remote_ip6[3], 0) != ((SRC_IP6[3] >> 0) & 0xffff) ||
++		    LSW(ctx->remote_ip6[3], 1) != ((SRC_IP6[3] >> 16) & 0xffff))
+ 			return SK_DROP;
+ 		/* Expect DST_IP6 in local_ip6 */
+ 		if (LSB(ctx->local_ip6[0], 0) != ((DST_IP6[0] >> 0) & 0xff) ||
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index 22943b58d752a..4a13477aef9dd 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -347,7 +347,7 @@ int extract_build_id(char *build_id, size_t size)
+ 
+ 	if (getline(&line, &len, fp) == -1)
+ 		goto err;
+-	fclose(fp);
++	pclose(fp);
+ 
+ 	if (len > size)
+ 		len = size;
+@@ -356,7 +356,7 @@ int extract_build_id(char *build_id, size_t size)
+ 	free(line);
+ 	return 0;
+ err:
+-	fclose(fp);
++	pclose(fp);
+ 	return -1;
+ }
+ 
+diff --git a/tools/testing/selftests/bpf/verifier/array_access.c b/tools/testing/selftests/bpf/verifier/array_access.c
+index 1b1c798e92489..1b138cd2b187d 100644
+--- a/tools/testing/selftests/bpf/verifier/array_access.c
++++ b/tools/testing/selftests/bpf/verifier/array_access.c
+@@ -186,7 +186,7 @@
+ 	},
+ 	.fixup_map_hash_48b = { 3 },
+ 	.errstr_unpriv = "R0 leaks addr",
+-	.errstr = "R0 unbounded memory access",
++	.errstr = "invalid access to map value, value_size=48 off=44 size=8",
+ 	.result_unpriv = REJECT,
+ 	.result = REJECT,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+diff --git a/tools/testing/selftests/core/close_range_test.c b/tools/testing/selftests/core/close_range_test.c
+index 575b391ddc78d..0a26795842f6f 100644
+--- a/tools/testing/selftests/core/close_range_test.c
++++ b/tools/testing/selftests/core/close_range_test.c
+@@ -33,7 +33,7 @@ static inline int sys_close_range(unsigned int fd, unsigned int max_fd,
+ #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+ #endif
+ 
+-TEST(close_range)
++TEST(core_close_range)
+ {
+ 	int i, ret;
+ 	int open_fds[101];
+diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c
+index 3a5c72ed2b792..a58507a7b5d6d 100644
+--- a/tools/testing/selftests/kvm/lib/x86_64/svm.c
++++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c
+@@ -57,6 +57,18 @@ static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
+ 	seg->base = base;
+ }
+ 
++/*
++ * Avoid using memset to clear the vmcb, since libc may not be
++ * available in L1 (and, even if it is, features that libc memset may
++ * want to use, like AVX, may not be enabled).
++ */
++static void clear_vmcb(struct vmcb *vmcb)
++{
++	int n = sizeof(*vmcb) / sizeof(u32);
++
++	asm volatile ("rep stosl" : "+c"(n), "+D"(vmcb) : "a"(0) : "memory");
++}
++
+ void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_rsp)
+ {
+ 	struct vmcb *vmcb = svm->vmcb;
+@@ -73,8 +85,8 @@ void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_r
+ 	wrmsr(MSR_EFER, efer | EFER_SVME);
+ 	wrmsr(MSR_VM_HSAVE_PA, svm->save_area_gpa);
+ 
+-	memset(vmcb, 0, sizeof(*vmcb));
+-	asm volatile ("vmsave\n\t" : : "a" (vmcb_gpa) : "memory");
++	clear_vmcb(vmcb);
++	asm volatile ("vmsave %0\n\t" : : "a" (vmcb_gpa) : "memory");
+ 	vmcb_set_seg(&save->es, get_es(), 0, -1U, data_seg_attr);
+ 	vmcb_set_seg(&save->cs, get_cs(), 0, -1U, code_seg_attr);
+ 	vmcb_set_seg(&save->ss, get_ss(), 0, -1U, data_seg_attr);
+@@ -131,19 +143,19 @@ void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_r
+ void run_guest(struct vmcb *vmcb, uint64_t vmcb_gpa)
+ {
+ 	asm volatile (
+-		"vmload\n\t"
++		"vmload %[vmcb_gpa]\n\t"
+ 		"mov rflags, %%r15\n\t"	// rflags
+ 		"mov %%r15, 0x170(%[vmcb])\n\t"
+ 		"mov guest_regs, %%r15\n\t"	// rax
+ 		"mov %%r15, 0x1f8(%[vmcb])\n\t"
+ 		LOAD_GPR_C
+-		"vmrun\n\t"
++		"vmrun %[vmcb_gpa]\n\t"
+ 		SAVE_GPR_C
+ 		"mov 0x170(%[vmcb]), %%r15\n\t"	// rflags
+ 		"mov %%r15, rflags\n\t"
+ 		"mov 0x1f8(%[vmcb]), %%r15\n\t"	// rax
+ 		"mov %%r15, guest_regs\n\t"
+-		"vmsave\n\t"
++		"vmsave %[vmcb_gpa]\n\t"
+ 		: : [vmcb] "r" (vmcb), [vmcb_gpa] "a" (vmcb_gpa)
+ 		: "r15", "memory");
+ }
+diff --git a/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c b/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c
+index 8039e1eff9388..9f55ccd169a13 100644
+--- a/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c
++++ b/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c
+@@ -84,7 +84,7 @@ int get_warnings_count(void)
+ 	f = popen("dmesg | grep \"WARNING:\" | wc -l", "r");
+ 	if (fscanf(f, "%d", &warnings) < 1)
+ 		warnings = 0;
+-	fclose(f);
++	pclose(f);
+ 
+ 	return warnings;
+ }
+diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
+index 02b0b9ead40b9..225440f5f99eb 100755
+--- a/tools/testing/selftests/net/fcnal-test.sh
++++ b/tools/testing/selftests/net/fcnal-test.sh
+@@ -436,10 +436,13 @@ cleanup()
+ 		ip -netns ${NSA} link set dev ${NSA_DEV} down
+ 		ip -netns ${NSA} link del dev ${NSA_DEV}
+ 
++		ip netns pids ${NSA} | xargs kill 2>/dev/null
+ 		ip netns del ${NSA}
+ 	fi
+ 
++	ip netns pids ${NSB} | xargs kill 2>/dev/null
+ 	ip netns del ${NSB}
++	ip netns pids ${NSC} | xargs kill 2>/dev/null
+ 	ip netns del ${NSC} >/dev/null 2>&1
+ }
+ 
+diff --git a/tools/testing/selftests/net/udpgso_bench_rx.c b/tools/testing/selftests/net/udpgso_bench_rx.c
+index 76a24052f4b47..6a193425c367f 100644
+--- a/tools/testing/selftests/net/udpgso_bench_rx.c
++++ b/tools/testing/selftests/net/udpgso_bench_rx.c
+@@ -293,19 +293,17 @@ static void usage(const char *filepath)
+ 
+ static void parse_opts(int argc, char **argv)
+ {
++	const char *bind_addr = NULL;
+ 	int c;
+ 
+-	/* bind to any by default */
+-	setup_sockaddr(PF_INET6, "::", &cfg_bind_addr);
+ 	while ((c = getopt(argc, argv, "4b:C:Gl:n:p:rR:S:tv")) != -1) {
+ 		switch (c) {
+ 		case '4':
+ 			cfg_family = PF_INET;
+ 			cfg_alen = sizeof(struct sockaddr_in);
+-			setup_sockaddr(PF_INET, "0.0.0.0", &cfg_bind_addr);
+ 			break;
+ 		case 'b':
+-			setup_sockaddr(cfg_family, optarg, &cfg_bind_addr);
++			bind_addr = optarg;
+ 			break;
+ 		case 'C':
+ 			cfg_connect_timeout_ms = strtoul(optarg, NULL, 0);
+@@ -341,6 +339,11 @@ static void parse_opts(int argc, char **argv)
+ 		}
+ 	}
+ 
++	if (!bind_addr)
++		bind_addr = cfg_family == PF_INET6 ? "::" : "0.0.0.0";
++
++	setup_sockaddr(cfg_family, bind_addr, &cfg_bind_addr);
++
+ 	if (optind != argc)
+ 		usage(argv[0]);
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-11-21 20:42 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-11-21 20:42 UTC (permalink / raw
  To: gentoo-commits

commit:     ffcf9f68f9a2e99641b96e1629e477e770e99f48
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 21 20:42:40 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Nov 21 20:42:40 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ffcf9f68

Linux patch 5.10.81

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1080_linux-5.10.81.patch | 1173 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1177 insertions(+)

diff --git a/0000_README b/0000_README
index 22873d23..f4fa5656 100644
--- a/0000_README
+++ b/0000_README
@@ -363,6 +363,10 @@ Patch:  1079_linux-5.10.80.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.80
 
+Patch:  1080_linux-5.10.81.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.81
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1080_linux-5.10.81.patch b/1080_linux-5.10.81.patch
new file mode 100644
index 00000000..0a03b38e
--- /dev/null
+++ b/1080_linux-5.10.81.patch
@@ -0,0 +1,1173 @@
+diff --git a/Makefile b/Makefile
+index 71fdc74801e0a..1baeadb574f1c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 80
++SUBLEVEL = 81
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index 9c76d50a5654b..3da39140babcf 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -1849,7 +1849,7 @@ syscall_restore:
+ 
+ 	/* Are we being ptraced? */
+ 	LDREG	TI_FLAGS-THREAD_SZ_ALGN-FRAME_SIZE(%r30),%r19
+-	ldi	_TIF_SYSCALL_TRACE_MASK,%r2
++	ldi	_TIF_SINGLESTEP|_TIF_BLOCKSTEP,%r2
+ 	and,COND(=)	%r19,%r2,%r0
+ 	b,n	syscall_restore_rfi
+ 
+diff --git a/arch/x86/include/asm/insn-eval.h b/arch/x86/include/asm/insn-eval.h
+index 98b4dae5e8bc8..c1438f9239e46 100644
+--- a/arch/x86/include/asm/insn-eval.h
++++ b/arch/x86/include/asm/insn-eval.h
+@@ -21,6 +21,7 @@ int insn_get_modrm_rm_off(struct insn *insn, struct pt_regs *regs);
+ int insn_get_modrm_reg_off(struct insn *insn, struct pt_regs *regs);
+ unsigned long insn_get_seg_base(struct pt_regs *regs, int seg_reg_idx);
+ int insn_get_code_seg_params(struct pt_regs *regs);
++unsigned long insn_get_effective_ip(struct pt_regs *regs);
+ int insn_fetch_from_user(struct pt_regs *regs,
+ 			 unsigned char buf[MAX_INSN_SIZE]);
+ int insn_fetch_from_user_inatomic(struct pt_regs *regs,
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 50d02db723177..d428d611a43a9 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -534,6 +534,7 @@ struct thread_struct {
+ 	 */
+ 	unsigned long		iopl_emul;
+ 
++	unsigned int		iopl_warn:1;
+ 	unsigned int		sig_on_uaccess_err:1;
+ 
+ 	/* Floating point and extended processor state */
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 145a7ac0c19aa..0aa1baf9a3afc 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -138,6 +138,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
+ 	frame->ret_addr = (unsigned long) ret_from_fork;
+ 	p->thread.sp = (unsigned long) fork_frame;
+ 	p->thread.io_bitmap = NULL;
++	p->thread.iopl_warn = 0;
+ 	memset(p->thread.ptrace_bps, 0, sizeof(p->thread.ptrace_bps));
+ 
+ #ifdef CONFIG_X86_64
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 143fcb8af38f4..2d4ecd50e69b8 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -523,6 +523,37 @@ static enum kernel_gp_hint get_kernel_gp_address(struct pt_regs *regs,
+ 
+ #define GPFSTR "general protection fault"
+ 
++static bool fixup_iopl_exception(struct pt_regs *regs)
++{
++	struct thread_struct *t = &current->thread;
++	unsigned char byte;
++	unsigned long ip;
++
++	if (!IS_ENABLED(CONFIG_X86_IOPL_IOPERM) || t->iopl_emul != 3)
++		return false;
++
++	ip = insn_get_effective_ip(regs);
++	if (!ip)
++		return false;
++
++	if (get_user(byte, (const char __user *)ip))
++		return false;
++
++	if (byte != 0xfa && byte != 0xfb)
++		return false;
++
++	if (!t->iopl_warn && printk_ratelimit()) {
++		pr_err("%s[%d] attempts to use CLI/STI, pretending it's a NOP, ip:%lx",
++		       current->comm, task_pid_nr(current), ip);
++		print_vma_addr(KERN_CONT " in ", ip);
++		pr_cont("\n");
++		t->iopl_warn = 1;
++	}
++
++	regs->ip += 1;
++	return true;
++}
++
+ DEFINE_IDTENTRY_ERRORCODE(exc_general_protection)
+ {
+ 	char desc[sizeof(GPFSTR) + 50 + 2*sizeof(unsigned long) + 1] = GPFSTR;
+@@ -548,6 +579,9 @@ DEFINE_IDTENTRY_ERRORCODE(exc_general_protection)
+ 	tsk = current;
+ 
+ 	if (user_mode(regs)) {
++		if (fixup_iopl_exception(regs))
++			goto exit;
++
+ 		tsk->thread.error_code = error_code;
+ 		tsk->thread.trap_nr = X86_TRAP_GP;
+ 
+diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
+index bb0b3fe1e0a02..c6a19c88af547 100644
+--- a/arch/x86/lib/insn-eval.c
++++ b/arch/x86/lib/insn-eval.c
+@@ -1415,7 +1415,7 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
+ 	}
+ }
+ 
+-static unsigned long insn_get_effective_ip(struct pt_regs *regs)
++unsigned long insn_get_effective_ip(struct pt_regs *regs)
+ {
+ 	unsigned long seg_base = 0;
+ 
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index f0fa0c8e7ec60..ee537a9f1d1a4 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -228,19 +228,6 @@ static void __loop_update_dio(struct loop_device *lo, bool dio)
+ 		blk_mq_unfreeze_queue(lo->lo_queue);
+ }
+ 
+-/**
+- * loop_validate_block_size() - validates the passed in block size
+- * @bsize: size to validate
+- */
+-static int
+-loop_validate_block_size(unsigned short bsize)
+-{
+-	if (bsize < 512 || bsize > PAGE_SIZE || !is_power_of_2(bsize))
+-		return -EINVAL;
+-
+-	return 0;
+-}
+-
+ /**
+  * loop_set_size() - sets device size and notifies userspace
+  * @lo: struct loop_device to set the size for
+@@ -1121,7 +1108,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
+ 	}
+ 
+ 	if (config->block_size) {
+-		error = loop_validate_block_size(config->block_size);
++		error = blk_validate_block_size(config->block_size);
+ 		if (error)
+ 			goto out_unlock;
+ 	}
+@@ -1617,7 +1604,7 @@ static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
+ 	if (lo->lo_state != Lo_bound)
+ 		return -ENXIO;
+ 
+-	err = loop_validate_block_size(arg);
++	err = blk_validate_block_size(arg);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+index 6ef30252bfe0a..143b2cb13bf94 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+@@ -21,7 +21,6 @@
+ #include <linux/delay.h>
+ #include <linux/mfd/syscon.h>
+ #include <linux/regmap.h>
+-#include <linux/pm_runtime.h>
+ 
+ #include "stmmac_platform.h"
+ 
+@@ -1336,9 +1335,6 @@ static int rk_gmac_powerup(struct rk_priv_data *bsp_priv)
+ 		return ret;
+ 	}
+ 
+-	pm_runtime_enable(dev);
+-	pm_runtime_get_sync(dev);
+-
+ 	if (bsp_priv->integrated_phy)
+ 		rk_gmac_integrated_phy_powerup(bsp_priv);
+ 
+@@ -1347,14 +1343,9 @@ static int rk_gmac_powerup(struct rk_priv_data *bsp_priv)
+ 
+ static void rk_gmac_powerdown(struct rk_priv_data *gmac)
+ {
+-	struct device *dev = &gmac->pdev->dev;
+-
+ 	if (gmac->integrated_phy)
+ 		rk_gmac_integrated_phy_powerdown(gmac);
+ 
+-	pm_runtime_put_sync(dev);
+-	pm_runtime_disable(dev);
+-
+ 	phy_power_on(gmac, false);
+ 	gmac_clk_enable(gmac, false);
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index 727e68dfaf1c2..a4ca283e02284 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -270,6 +270,7 @@ void stmmac_disable_eee_mode(struct stmmac_priv *priv);
+ bool stmmac_eee_init(struct stmmac_priv *priv);
+ int stmmac_reinit_queues(struct net_device *dev, u32 rx_cnt, u32 tx_cnt);
+ int stmmac_reinit_ringparam(struct net_device *dev, u32 rx_size, u32 tx_size);
++int stmmac_bus_clks_config(struct stmmac_priv *priv, bool enabled);
+ 
+ #if IS_ENABLED(CONFIG_STMMAC_SELFTESTS)
+ void stmmac_selftest_run(struct net_device *dev,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 0ac61e7ab43cd..4a75e73f06bbd 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -28,6 +28,7 @@
+ #include <linux/if_vlan.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/slab.h>
++#include <linux/pm_runtime.h>
+ #include <linux/prefetch.h>
+ #include <linux/pinctrl/consumer.h>
+ #ifdef CONFIG_DEBUG_FS
+@@ -113,6 +114,28 @@ static void stmmac_exit_fs(struct net_device *dev);
+ 
+ #define STMMAC_COAL_TIMER(x) (jiffies + usecs_to_jiffies(x))
+ 
++int stmmac_bus_clks_config(struct stmmac_priv *priv, bool enabled)
++{
++	int ret = 0;
++
++	if (enabled) {
++		ret = clk_prepare_enable(priv->plat->stmmac_clk);
++		if (ret)
++			return ret;
++		ret = clk_prepare_enable(priv->plat->pclk);
++		if (ret) {
++			clk_disable_unprepare(priv->plat->stmmac_clk);
++			return ret;
++		}
++	} else {
++		clk_disable_unprepare(priv->plat->stmmac_clk);
++		clk_disable_unprepare(priv->plat->pclk);
++	}
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(stmmac_bus_clks_config);
++
+ /**
+  * stmmac_verify_args - verify the driver parameters.
+  * Description: it checks the driver parameters and set a default in case of
+@@ -2792,6 +2815,12 @@ static int stmmac_open(struct net_device *dev)
+ 	u32 chan;
+ 	int ret;
+ 
++	ret = pm_runtime_get_sync(priv->device);
++	if (ret < 0) {
++		pm_runtime_put_noidle(priv->device);
++		return ret;
++	}
++
+ 	if (priv->hw->pcs != STMMAC_PCS_TBI &&
+ 	    priv->hw->pcs != STMMAC_PCS_RTBI &&
+ 	    priv->hw->xpcs == NULL) {
+@@ -2800,7 +2829,7 @@ static int stmmac_open(struct net_device *dev)
+ 			netdev_err(priv->dev,
+ 				   "%s: Cannot attach to PHY (error: %d)\n",
+ 				   __func__, ret);
+-			return ret;
++			goto init_phy_error;
+ 		}
+ 	}
+ 
+@@ -2915,6 +2944,8 @@ init_error:
+ 	free_dma_desc_resources(priv);
+ dma_desc_error:
+ 	phylink_disconnect_phy(priv->phylink);
++init_phy_error:
++	pm_runtime_put(priv->device);
+ 	return ret;
+ }
+ 
+@@ -2965,6 +2996,8 @@ static int stmmac_release(struct net_device *dev)
+ 
+ 	stmmac_release_ptp(priv);
+ 
++	pm_runtime_put(priv->device);
++
+ 	return 0;
+ }
+ 
+@@ -4291,12 +4324,21 @@ static int stmmac_set_mac_address(struct net_device *ndev, void *addr)
+ 	struct stmmac_priv *priv = netdev_priv(ndev);
+ 	int ret = 0;
+ 
++	ret = pm_runtime_get_sync(priv->device);
++	if (ret < 0) {
++		pm_runtime_put_noidle(priv->device);
++		return ret;
++	}
++
+ 	ret = eth_mac_addr(ndev, addr);
+ 	if (ret)
+-		return ret;
++		goto set_mac_error;
+ 
+ 	stmmac_set_umac_addr(priv, priv->hw, ndev->dev_addr, 0);
+ 
++set_mac_error:
++	pm_runtime_put(priv->device);
++
+ 	return ret;
+ }
+ 
+@@ -4616,6 +4658,12 @@ static int stmmac_vlan_rx_kill_vid(struct net_device *ndev, __be16 proto, u16 vi
+ 	bool is_double = false;
+ 	int ret;
+ 
++	ret = pm_runtime_get_sync(priv->device);
++	if (ret < 0) {
++		pm_runtime_put_noidle(priv->device);
++		return ret;
++	}
++
+ 	if (be16_to_cpu(proto) == ETH_P_8021AD)
+ 		is_double = true;
+ 
+@@ -4624,10 +4672,15 @@ static int stmmac_vlan_rx_kill_vid(struct net_device *ndev, __be16 proto, u16 vi
+ 	if (priv->hw->num_vlan) {
+ 		ret = stmmac_del_hw_vlan_rx_fltr(priv, ndev, priv->hw, proto, vid);
+ 		if (ret)
+-			return ret;
++			goto del_vlan_error;
+ 	}
+ 
+-	return stmmac_vlan_update(priv, is_double);
++	ret = stmmac_vlan_update(priv, is_double);
++
++del_vlan_error:
++	pm_runtime_put(priv->device);
++
++	return ret;
+ }
+ 
+ static const struct net_device_ops stmmac_netdev_ops = {
+@@ -5066,6 +5119,10 @@ int stmmac_dvr_probe(struct device *device,
+ 
+ 	stmmac_check_pcs_mode(priv);
+ 
++	pm_runtime_get_noresume(device);
++	pm_runtime_set_active(device);
++	pm_runtime_enable(device);
++
+ 	if (priv->hw->pcs != STMMAC_PCS_TBI &&
+ 	    priv->hw->pcs != STMMAC_PCS_RTBI) {
+ 		/* MDIO bus Registration */
+@@ -5103,6 +5160,11 @@ int stmmac_dvr_probe(struct device *device,
+ 	stmmac_init_fs(ndev);
+ #endif
+ 
++	/* Let pm_runtime_put() disable the clocks.
++	 * If CONFIG_PM is not enabled, the clocks will stay powered.
++	 */
++	pm_runtime_put(device);
++
+ 	return ret;
+ 
+ error_serdes_powerup:
+@@ -5152,8 +5214,8 @@ int stmmac_dvr_remove(struct device *dev)
+ 	phylink_destroy(priv->phylink);
+ 	if (priv->plat->stmmac_rst)
+ 		reset_control_assert(priv->plat->stmmac_rst);
+-	clk_disable_unprepare(priv->plat->pclk);
+-	clk_disable_unprepare(priv->plat->stmmac_clk);
++	pm_runtime_put(dev);
++	pm_runtime_disable(dev);
+ 	if (priv->hw->pcs != STMMAC_PCS_TBI &&
+ 	    priv->hw->pcs != STMMAC_PCS_RTBI)
+ 		stmmac_mdio_unregister(ndev);
+@@ -5176,6 +5238,7 @@ int stmmac_suspend(struct device *dev)
+ 	struct net_device *ndev = dev_get_drvdata(dev);
+ 	struct stmmac_priv *priv = netdev_priv(ndev);
+ 	u32 chan;
++	int ret;
+ 
+ 	if (!ndev || !netif_running(ndev))
+ 		return 0;
+@@ -5219,8 +5282,11 @@ int stmmac_suspend(struct device *dev)
+ 		pinctrl_pm_select_sleep_state(priv->device);
+ 		/* Disable clock in case of PWM is off */
+ 		clk_disable_unprepare(priv->plat->clk_ptp_ref);
+-		clk_disable_unprepare(priv->plat->pclk);
+-		clk_disable_unprepare(priv->plat->stmmac_clk);
++		ret = pm_runtime_force_suspend(dev);
++		if (ret) {
++			mutex_unlock(&priv->lock);
++			return ret;
++		}
+ 	}
+ 	mutex_unlock(&priv->lock);
+ 
+@@ -5286,8 +5352,9 @@ int stmmac_resume(struct device *dev)
+ 	} else {
+ 		pinctrl_pm_select_default_state(priv->device);
+ 		/* enable the clk previously disabled */
+-		clk_prepare_enable(priv->plat->stmmac_clk);
+-		clk_prepare_enable(priv->plat->pclk);
++		ret = pm_runtime_force_resume(dev);
++		if (ret)
++			return ret;
+ 		if (priv->plat->clk_ptp_ref)
+ 			clk_prepare_enable(priv->plat->clk_ptp_ref);
+ 		/* reset the phy so that it's ready */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+index 678726c62a8af..7c1a14b256da3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+@@ -15,6 +15,7 @@
+ #include <linux/iopoll.h>
+ #include <linux/mii.h>
+ #include <linux/of_mdio.h>
++#include <linux/pm_runtime.h>
+ #include <linux/phy.h>
+ #include <linux/property.h>
+ #include <linux/slab.h>
+@@ -87,21 +88,29 @@ static int stmmac_xgmac2_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
+ 	u32 tmp, addr, value = MII_XGMAC_BUSY;
+ 	int ret;
+ 
++	ret = pm_runtime_get_sync(priv->device);
++	if (ret < 0) {
++		pm_runtime_put_noidle(priv->device);
++		return ret;
++	}
++
+ 	/* Wait until any existing MII operation is complete */
+ 	if (readl_poll_timeout(priv->ioaddr + mii_data, tmp,
+-			       !(tmp & MII_XGMAC_BUSY), 100, 10000))
+-		return -EBUSY;
++			       !(tmp & MII_XGMAC_BUSY), 100, 10000)) {
++		ret = -EBUSY;
++		goto err_disable_clks;
++	}
+ 
+ 	if (phyreg & MII_ADDR_C45) {
+ 		phyreg &= ~MII_ADDR_C45;
+ 
+ 		ret = stmmac_xgmac2_c45_format(priv, phyaddr, phyreg, &addr);
+ 		if (ret)
+-			return ret;
++			goto err_disable_clks;
+ 	} else {
+ 		ret = stmmac_xgmac2_c22_format(priv, phyaddr, phyreg, &addr);
+ 		if (ret)
+-			return ret;
++			goto err_disable_clks;
+ 
+ 		value |= MII_XGMAC_SADDR;
+ 	}
+@@ -112,8 +121,10 @@ static int stmmac_xgmac2_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
+ 
+ 	/* Wait until any existing MII operation is complete */
+ 	if (readl_poll_timeout(priv->ioaddr + mii_data, tmp,
+-			       !(tmp & MII_XGMAC_BUSY), 100, 10000))
+-		return -EBUSY;
++			       !(tmp & MII_XGMAC_BUSY), 100, 10000)) {
++		ret = -EBUSY;
++		goto err_disable_clks;
++	}
+ 
+ 	/* Set the MII address register to read */
+ 	writel(addr, priv->ioaddr + mii_address);
+@@ -121,11 +132,18 @@ static int stmmac_xgmac2_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
+ 
+ 	/* Wait until any existing MII operation is complete */
+ 	if (readl_poll_timeout(priv->ioaddr + mii_data, tmp,
+-			       !(tmp & MII_XGMAC_BUSY), 100, 10000))
+-		return -EBUSY;
++			       !(tmp & MII_XGMAC_BUSY), 100, 10000)) {
++		ret = -EBUSY;
++		goto err_disable_clks;
++	}
+ 
+ 	/* Read the data from the MII data register */
+-	return readl(priv->ioaddr + mii_data) & GENMASK(15, 0);
++	ret = (int)readl(priv->ioaddr + mii_data) & GENMASK(15, 0);
++
++err_disable_clks:
++	pm_runtime_put(priv->device);
++
++	return ret;
+ }
+ 
+ static int stmmac_xgmac2_mdio_write(struct mii_bus *bus, int phyaddr,
+@@ -138,21 +156,29 @@ static int stmmac_xgmac2_mdio_write(struct mii_bus *bus, int phyaddr,
+ 	u32 addr, tmp, value = MII_XGMAC_BUSY;
+ 	int ret;
+ 
++	ret = pm_runtime_get_sync(priv->device);
++	if (ret < 0) {
++		pm_runtime_put_noidle(priv->device);
++		return ret;
++	}
++
+ 	/* Wait until any existing MII operation is complete */
+ 	if (readl_poll_timeout(priv->ioaddr + mii_data, tmp,
+-			       !(tmp & MII_XGMAC_BUSY), 100, 10000))
+-		return -EBUSY;
++			       !(tmp & MII_XGMAC_BUSY), 100, 10000)) {
++		ret = -EBUSY;
++		goto err_disable_clks;
++	}
+ 
+ 	if (phyreg & MII_ADDR_C45) {
+ 		phyreg &= ~MII_ADDR_C45;
+ 
+ 		ret = stmmac_xgmac2_c45_format(priv, phyaddr, phyreg, &addr);
+ 		if (ret)
+-			return ret;
++			goto err_disable_clks;
+ 	} else {
+ 		ret = stmmac_xgmac2_c22_format(priv, phyaddr, phyreg, &addr);
+ 		if (ret)
+-			return ret;
++			goto err_disable_clks;
+ 
+ 		value |= MII_XGMAC_SADDR;
+ 	}
+@@ -164,16 +190,23 @@ static int stmmac_xgmac2_mdio_write(struct mii_bus *bus, int phyaddr,
+ 
+ 	/* Wait until any existing MII operation is complete */
+ 	if (readl_poll_timeout(priv->ioaddr + mii_data, tmp,
+-			       !(tmp & MII_XGMAC_BUSY), 100, 10000))
+-		return -EBUSY;
++			       !(tmp & MII_XGMAC_BUSY), 100, 10000)) {
++		ret = -EBUSY;
++		goto err_disable_clks;
++	}
+ 
+ 	/* Set the MII address register to write */
+ 	writel(addr, priv->ioaddr + mii_address);
+ 	writel(value, priv->ioaddr + mii_data);
+ 
+ 	/* Wait until any existing MII operation is complete */
+-	return readl_poll_timeout(priv->ioaddr + mii_data, tmp,
+-				  !(tmp & MII_XGMAC_BUSY), 100, 10000);
++	ret = readl_poll_timeout(priv->ioaddr + mii_data, tmp,
++				 !(tmp & MII_XGMAC_BUSY), 100, 10000);
++
++err_disable_clks:
++	pm_runtime_put(priv->device);
++
++	return ret;
+ }
+ 
+ /**
+@@ -196,6 +229,12 @@ static int stmmac_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
+ 	int data = 0;
+ 	u32 v;
+ 
++	data = pm_runtime_get_sync(priv->device);
++	if (data < 0) {
++		pm_runtime_put_noidle(priv->device);
++		return data;
++	}
++
+ 	value |= (phyaddr << priv->hw->mii.addr_shift)
+ 		& priv->hw->mii.addr_mask;
+ 	value |= (phyreg << priv->hw->mii.reg_shift) & priv->hw->mii.reg_mask;
+@@ -216,19 +255,26 @@ static int stmmac_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
+ 	}
+ 
+ 	if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
+-			       100, 10000))
+-		return -EBUSY;
++			       100, 10000)) {
++		data = -EBUSY;
++		goto err_disable_clks;
++	}
+ 
+ 	writel(data, priv->ioaddr + mii_data);
+ 	writel(value, priv->ioaddr + mii_address);
+ 
+ 	if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
+-			       100, 10000))
+-		return -EBUSY;
++			       100, 10000)) {
++		data = -EBUSY;
++		goto err_disable_clks;
++	}
+ 
+ 	/* Read the data from the MII data register */
+ 	data = (int)readl(priv->ioaddr + mii_data) & MII_DATA_MASK;
+ 
++err_disable_clks:
++	pm_runtime_put(priv->device);
++
+ 	return data;
+ }
+ 
+@@ -247,10 +293,16 @@ static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
+ 	struct stmmac_priv *priv = netdev_priv(ndev);
+ 	unsigned int mii_address = priv->hw->mii.addr;
+ 	unsigned int mii_data = priv->hw->mii.data;
++	int ret, data = phydata;
+ 	u32 value = MII_BUSY;
+-	int data = phydata;
+ 	u32 v;
+ 
++	ret = pm_runtime_get_sync(priv->device);
++	if (ret < 0) {
++		pm_runtime_put_noidle(priv->device);
++		return ret;
++	}
++
+ 	value |= (phyaddr << priv->hw->mii.addr_shift)
+ 		& priv->hw->mii.addr_mask;
+ 	value |= (phyreg << priv->hw->mii.reg_shift) & priv->hw->mii.reg_mask;
+@@ -275,16 +327,23 @@ static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
+ 
+ 	/* Wait until any existing MII operation is complete */
+ 	if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
+-			       100, 10000))
+-		return -EBUSY;
++			       100, 10000)) {
++		ret = -EBUSY;
++		goto err_disable_clks;
++	}
+ 
+ 	/* Set the MII address register to write */
+ 	writel(data, priv->ioaddr + mii_data);
+ 	writel(value, priv->ioaddr + mii_address);
+ 
+ 	/* Wait until any existing MII operation is complete */
+-	return readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
+-				  100, 10000);
++	ret = readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
++				 100, 10000);
++
++err_disable_clks:
++	pm_runtime_put(priv->device);
++
++	return ret;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 48186cd32ce10..035f9aef4308f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -720,7 +720,6 @@ int stmmac_pltfr_remove(struct platform_device *pdev)
+ }
+ EXPORT_SYMBOL_GPL(stmmac_pltfr_remove);
+ 
+-#ifdef CONFIG_PM_SLEEP
+ /**
+  * stmmac_pltfr_suspend
+  * @dev: device pointer
+@@ -728,7 +727,7 @@ EXPORT_SYMBOL_GPL(stmmac_pltfr_remove);
+  * call the main suspend function and then, if required, on some platform, it
+  * can call an exit helper.
+  */
+-static int stmmac_pltfr_suspend(struct device *dev)
++static int __maybe_unused stmmac_pltfr_suspend(struct device *dev)
+ {
+ 	int ret;
+ 	struct net_device *ndev = dev_get_drvdata(dev);
+@@ -749,7 +748,7 @@ static int stmmac_pltfr_suspend(struct device *dev)
+  * the main resume function, on some platforms, it can call own init helper
+  * if required.
+  */
+-static int stmmac_pltfr_resume(struct device *dev)
++static int __maybe_unused stmmac_pltfr_resume(struct device *dev)
+ {
+ 	struct net_device *ndev = dev_get_drvdata(dev);
+ 	struct stmmac_priv *priv = netdev_priv(ndev);
+@@ -760,10 +759,29 @@ static int stmmac_pltfr_resume(struct device *dev)
+ 
+ 	return stmmac_resume(dev);
+ }
+-#endif /* CONFIG_PM_SLEEP */
+ 
+-SIMPLE_DEV_PM_OPS(stmmac_pltfr_pm_ops, stmmac_pltfr_suspend,
+-				       stmmac_pltfr_resume);
++static int __maybe_unused stmmac_runtime_suspend(struct device *dev)
++{
++	struct net_device *ndev = dev_get_drvdata(dev);
++	struct stmmac_priv *priv = netdev_priv(ndev);
++
++	stmmac_bus_clks_config(priv, false);
++
++	return 0;
++}
++
++static int __maybe_unused stmmac_runtime_resume(struct device *dev)
++{
++	struct net_device *ndev = dev_get_drvdata(dev);
++	struct stmmac_priv *priv = netdev_priv(ndev);
++
++	return stmmac_bus_clks_config(priv, true);
++}
++
++const struct dev_pm_ops stmmac_pltfr_pm_ops = {
++	SET_SYSTEM_SLEEP_PM_OPS(stmmac_pltfr_suspend, stmmac_pltfr_resume)
++	SET_RUNTIME_PM_OPS(stmmac_runtime_suspend, stmmac_runtime_resume, NULL)
++};
+ EXPORT_SYMBOL_GPL(stmmac_pltfr_pm_ops);
+ 
+ MODULE_DESCRIPTION("STMMAC 10/100/1000 Ethernet platform support");
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index a7a1c74113483..db7475dc601f5 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -396,18 +396,6 @@ static void free_msi_irqs(struct pci_dev *dev)
+ 			for (i = 0; i < entry->nvec_used; i++)
+ 				BUG_ON(irq_has_action(entry->irq + i));
+ 
+-	pci_msi_teardown_msi_irqs(dev);
+-
+-	list_for_each_entry_safe(entry, tmp, msi_list, list) {
+-		if (entry->msi_attrib.is_msix) {
+-			if (list_is_last(&entry->list, msi_list))
+-				iounmap(entry->mask_base);
+-		}
+-
+-		list_del(&entry->list);
+-		free_msi_entry(entry);
+-	}
+-
+ 	if (dev->msi_irq_groups) {
+ 		sysfs_remove_groups(&dev->dev.kobj, dev->msi_irq_groups);
+ 		msi_attrs = dev->msi_irq_groups[0]->attrs;
+@@ -423,6 +411,18 @@ static void free_msi_irqs(struct pci_dev *dev)
+ 		kfree(dev->msi_irq_groups);
+ 		dev->msi_irq_groups = NULL;
+ 	}
++
++	pci_msi_teardown_msi_irqs(dev);
++
++	list_for_each_entry_safe(entry, tmp, msi_list, list) {
++		if (entry->msi_attrib.is_msix) {
++			if (list_is_last(&entry->list, msi_list))
++				iounmap(entry->mask_base);
++		}
++
++		list_del(&entry->list);
++		free_msi_entry(entry);
++	}
+ }
+ 
+ static void pci_intx_for_msi(struct pci_dev *dev, int enable)
+@@ -592,6 +592,9 @@ msi_setup_entry(struct pci_dev *dev, int nvec, struct irq_affinity *affd)
+ 		goto out;
+ 
+ 	pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control);
++	/* Lies, damned lies, and MSIs */
++	if (dev->dev_flags & PCI_DEV_FLAGS_HAS_MSI_MASKING)
++		control |= PCI_MSI_FLAGS_MASKBIT;
+ 
+ 	entry->msi_attrib.is_msix	= 0;
+ 	entry->msi_attrib.is_64		= !!(control & PCI_MSI_FLAGS_64BIT);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index fb91b2d7b1c59..bb863ddb59bfc 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5756,3 +5756,9 @@ static void apex_pci_fixup_class(struct pci_dev *pdev)
+ }
+ DECLARE_PCI_FIXUP_CLASS_HEADER(0x1ac1, 0x089a,
+ 			       PCI_CLASS_NOT_DEFINED, 8, apex_pci_fixup_class);
++
++static void nvidia_ion_ahci_fixup(struct pci_dev *pdev)
++{
++	pdev->dev_flags |= PCI_DEV_FLAGS_HAS_MSI_MASKING;
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0ab8, nvidia_ion_ahci_fixup);
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index 6379f26a335f6..9233f7e744544 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -89,7 +89,7 @@ static int of_thermal_get_temp(struct thermal_zone_device *tz,
+ {
+ 	struct __thermal_zone *data = tz->devdata;
+ 
+-	if (!data->ops->get_temp)
++	if (!data->ops || !data->ops->get_temp)
+ 		return -EINVAL;
+ 
+ 	return data->ops->get_temp(data->sensor_data, temp);
+@@ -186,6 +186,9 @@ static int of_thermal_set_emul_temp(struct thermal_zone_device *tz,
+ {
+ 	struct __thermal_zone *data = tz->devdata;
+ 
++	if (!data->ops || !data->ops->set_emul_temp)
++		return -EINVAL;
++
+ 	return data->ops->set_emul_temp(data->sensor_data, temp);
+ }
+ 
+@@ -194,7 +197,7 @@ static int of_thermal_get_trend(struct thermal_zone_device *tz, int trip,
+ {
+ 	struct __thermal_zone *data = tz->devdata;
+ 
+-	if (!data->ops->get_trend)
++	if (!data->ops || !data->ops->get_trend)
+ 		return -EINVAL;
+ 
+ 	return data->ops->get_trend(data->sensor_data, trip, trend);
+@@ -301,7 +304,7 @@ static int of_thermal_set_trip_temp(struct thermal_zone_device *tz, int trip,
+ 	if (trip >= data->ntrips || trip < 0)
+ 		return -EDOM;
+ 
+-	if (data->ops->set_trip_temp) {
++	if (data->ops && data->ops->set_trip_temp) {
+ 		int ret;
+ 
+ 		ret = data->ops->set_trip_temp(data->sensor_data, trip, temp);
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 86fd3bf62af61..8cb2cf612e49b 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -278,11 +278,10 @@ static inline bool z_erofs_try_inplace_io(struct z_erofs_collector *clt,
+ 
+ /* callers must be with collection lock held */
+ static int z_erofs_attach_page(struct z_erofs_collector *clt,
+-			       struct page *page,
+-			       enum z_erofs_page_type type)
++			       struct page *page, enum z_erofs_page_type type,
++			       bool pvec_safereuse)
+ {
+ 	int ret;
+-	bool occupied;
+ 
+ 	/* give priority for inplaceio */
+ 	if (clt->mode >= COLLECT_PRIMARY &&
+@@ -290,10 +289,9 @@ static int z_erofs_attach_page(struct z_erofs_collector *clt,
+ 	    z_erofs_try_inplace_io(clt, page))
+ 		return 0;
+ 
+-	ret = z_erofs_pagevec_enqueue(&clt->vector,
+-				      page, type, &occupied);
++	ret = z_erofs_pagevec_enqueue(&clt->vector, page, type,
++				      pvec_safereuse);
+ 	clt->cl->vcnt += (unsigned int)ret;
+-
+ 	return ret ? 0 : -EAGAIN;
+ }
+ 
+@@ -647,7 +645,8 @@ hitted:
+ 		tight &= (clt->mode >= COLLECT_PRIMARY_FOLLOWED);
+ 
+ retry:
+-	err = z_erofs_attach_page(clt, page, page_type);
++	err = z_erofs_attach_page(clt, page, page_type,
++				  clt->mode >= COLLECT_PRIMARY_FOLLOWED);
+ 	/* should allocate an additional staging page for pagevec */
+ 	if (err == -EAGAIN) {
+ 		struct page *const newpage =
+@@ -655,7 +654,7 @@ retry:
+ 
+ 		newpage->mapping = Z_EROFS_MAPPING_STAGING;
+ 		err = z_erofs_attach_page(clt, newpage,
+-					  Z_EROFS_PAGE_TYPE_EXCLUSIVE);
++					  Z_EROFS_PAGE_TYPE_EXCLUSIVE, true);
+ 		if (!err)
+ 			goto retry;
+ 	}
+diff --git a/fs/erofs/zpvec.h b/fs/erofs/zpvec.h
+index 1d67cbd387042..52898176ef31d 100644
+--- a/fs/erofs/zpvec.h
++++ b/fs/erofs/zpvec.h
+@@ -108,12 +108,17 @@ static inline void z_erofs_pagevec_ctor_init(struct z_erofs_pagevec_ctor *ctor,
+ static inline bool z_erofs_pagevec_enqueue(struct z_erofs_pagevec_ctor *ctor,
+ 					   struct page *page,
+ 					   enum z_erofs_page_type type,
+-					   bool *occupied)
++					   bool pvec_safereuse)
+ {
+-	*occupied = false;
+-	if (!ctor->next && type)
+-		if (ctor->index + 1 == ctor->nr)
++	if (!ctor->next) {
++		/* some pages cannot be reused as pvec safely without I/O */
++		if (type == Z_EROFS_PAGE_TYPE_EXCLUSIVE && !pvec_safereuse)
++			type = Z_EROFS_VLE_PAGE_TYPE_TAIL_SHARED;
++
++		if (type != Z_EROFS_PAGE_TYPE_EXCLUSIVE &&
++		    ctor->index + 1 == ctor->nr)
+ 			return false;
++	}
+ 
+ 	if (ctor->index >= ctor->nr)
+ 		z_erofs_pagevec_ctor_pagedown(ctor, false);
+@@ -125,7 +130,6 @@ static inline bool z_erofs_pagevec_enqueue(struct z_erofs_pagevec_ctor *ctor,
+ 	/* should remind that collector->next never equal to 1, 2 */
+ 	if (type == (uintptr_t)ctor->next) {
+ 		ctor->next = page;
+-		*occupied = true;
+ 	}
+ 	ctor->pages[ctor->index++] = tagptr_fold(erofs_vtptr_t, page, type);
+ 	return true;
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 4ba17736b614f..98fdf5a31fd66 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -59,6 +59,14 @@ struct blk_keyslot_manager;
+  */
+ #define BLKCG_MAX_POLS		5
+ 
++static inline int blk_validate_block_size(unsigned int bsize)
++{
++	if (bsize < 512 || bsize > PAGE_SIZE || !is_power_of_2(bsize))
++		return -EINVAL;
++
++	return 0;
++}
++
+ typedef void (rq_end_io_fn)(struct request *, blk_status_t);
+ 
+ /*
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index a55097b4d9927..4519bd12643f6 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -227,6 +227,8 @@ enum pci_dev_flags {
+ 	PCI_DEV_FLAGS_NO_FLR_RESET = (__force pci_dev_flags_t) (1 << 10),
+ 	/* Don't use Relaxed Ordering for TLPs directed at this device */
+ 	PCI_DEV_FLAGS_NO_RELAXED_ORDERING = (__force pci_dev_flags_t) (1 << 11),
++	/* Device does honor MSI masking despite saying otherwise */
++	PCI_DEV_FLAGS_HAS_MSI_MASKING = (__force pci_dev_flags_t) (1 << 12),
+ };
+ 
+ enum pci_irq_reroute_variant {
+diff --git a/init/main.c b/init/main.c
+index dd26a42e80a87..4fe58ed4aca7b 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -380,6 +380,7 @@ static char * __init xbc_make_cmdline(const char *key)
+ 	ret = xbc_snprint_cmdline(new_cmdline, len + 1, root);
+ 	if (ret < 0 || ret > len) {
+ 		pr_err("Failed to print extra kernel cmdline.\n");
++		memblock_free(__pa(new_cmdline), len + 1);
+ 		return NULL;
+ 	}
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index c811519261710..908417736f4e9 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -7036,7 +7036,6 @@ void perf_output_sample(struct perf_output_handle *handle,
+ static u64 perf_virt_to_phys(u64 virt)
+ {
+ 	u64 phys_addr = 0;
+-	struct page *p = NULL;
+ 
+ 	if (!virt)
+ 		return 0;
+@@ -7055,14 +7054,15 @@ static u64 perf_virt_to_phys(u64 virt)
+ 		 * If failed, leave phys_addr as 0.
+ 		 */
+ 		if (current->mm != NULL) {
++			struct page *p;
++
+ 			pagefault_disable();
+-			if (get_user_page_fast_only(virt, 0, &p))
++			if (get_user_page_fast_only(virt, 0, &p)) {
+ 				phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
++				put_page(p);
++			}
+ 			pagefault_enable();
+ 		}
+-
+-		if (p)
+-			put_page(p);
+ 	}
+ 
+ 	return phys_addr;
+diff --git a/scripts/lld-version.sh b/scripts/lld-version.sh
+index d70edb4d8a4f2..f1eeee450a23c 100755
+--- a/scripts/lld-version.sh
++++ b/scripts/lld-version.sh
+@@ -6,15 +6,32 @@
+ # Print the linker version of `ld.lld' in a 5 or 6-digit form
+ # such as `100001' for ld.lld 10.0.1 etc.
+ 
+-linker_string="$($* --version)"
++set -e
+ 
+-if ! ( echo $linker_string | grep -q LLD ); then
++# Convert the version string x.y.z to a canonical 5 or 6-digit form.
++get_canonical_version()
++{
++	IFS=.
++	set -- $1
++
++	# If the 2nd or 3rd field is missing, fill it with a zero.
++	echo $((10000 * $1 + 100 * ${2:-0} + ${3:-0}))
++}
++
++# Get the first line of the --version output.
++IFS='
++'
++set -- $(LC_ALL=C "$@" --version)
++
++# Split the line on spaces.
++IFS=' '
++set -- $1
++
++while [ $# -gt 1 -a "$1" != "LLD" ]; do
++	shift
++done
++if [ "$1" = LLD ]; then
++	echo $(get_canonical_version ${2%-*})
++else
+ 	echo 0
+-	exit 1
+ fi
+-
+-VERSION=$(echo $linker_string | cut -d ' ' -f 2)
+-MAJOR=$(echo $VERSION | cut -d . -f 1)
+-MINOR=$(echo $VERSION | cut -d . -f 2)
+-PATCHLEVEL=$(echo $VERSION | cut -d . -f 3)
+-printf "%d%02d%02d\\n" $MAJOR $MINOR $PATCHLEVEL
+diff --git a/security/Kconfig b/security/Kconfig
+index 7561f6f99f1d2..0548db16c49dc 100644
+--- a/security/Kconfig
++++ b/security/Kconfig
+@@ -191,6 +191,9 @@ config HARDENED_USERCOPY_PAGESPAN
+ config FORTIFY_SOURCE
+ 	bool "Harden common str/mem functions against buffer overflows"
+ 	depends on ARCH_HAS_FORTIFY_SOURCE
++	# https://bugs.llvm.org/show_bug.cgi?id=50322
++	# https://bugs.llvm.org/show_bug.cgi?id=41459
++	depends on !CC_IS_CLANG
+ 	help
+ 	  Detect overflows of buffers in common string and memory functions
+ 	  where the compiler can determine and validate the buffer sizes.
+diff --git a/tools/testing/selftests/x86/iopl.c b/tools/testing/selftests/x86/iopl.c
+index bab2f6e06b63d..7e3e09c1abac6 100644
+--- a/tools/testing/selftests/x86/iopl.c
++++ b/tools/testing/selftests/x86/iopl.c
+@@ -85,48 +85,88 @@ static void expect_gp_outb(unsigned short port)
+ 	printf("[OK]\toutb to 0x%02hx failed\n", port);
+ }
+ 
+-static bool try_cli(void)
++#define RET_FAULTED	0
++#define RET_FAIL	1
++#define RET_EMUL	2
++
++static int try_cli(void)
+ {
++	unsigned long flags;
++
+ 	sethandler(SIGSEGV, sigsegv, SA_RESETHAND);
+ 	if (sigsetjmp(jmpbuf, 1) != 0) {
+-		return false;
++		return RET_FAULTED;
+ 	} else {
+-		asm volatile ("cli");
+-		return true;
++		asm volatile("cli; pushf; pop %[flags]"
++				: [flags] "=rm" (flags));
++
++		/* X86_FLAGS_IF */
++		if (!(flags & (1 << 9)))
++			return RET_FAIL;
++		else
++			return RET_EMUL;
+ 	}
+ 	clearhandler(SIGSEGV);
+ }
+ 
+-static bool try_sti(void)
++static int try_sti(bool irqs_off)
+ {
++	unsigned long flags;
++
+ 	sethandler(SIGSEGV, sigsegv, SA_RESETHAND);
+ 	if (sigsetjmp(jmpbuf, 1) != 0) {
+-		return false;
++		return RET_FAULTED;
+ 	} else {
+-		asm volatile ("sti");
+-		return true;
++		asm volatile("sti; pushf; pop %[flags]"
++				: [flags] "=rm" (flags));
++
++		/* X86_FLAGS_IF */
++		if (irqs_off && (flags & (1 << 9)))
++			return RET_FAIL;
++		else
++			return RET_EMUL;
+ 	}
+ 	clearhandler(SIGSEGV);
+ }
+ 
+-static void expect_gp_sti(void)
++static void expect_gp_sti(bool irqs_off)
+ {
+-	if (try_sti()) {
++	int ret = try_sti(irqs_off);
++
++	switch (ret) {
++	case RET_FAULTED:
++		printf("[OK]\tSTI faulted\n");
++		break;
++	case RET_EMUL:
++		printf("[OK]\tSTI NOPped\n");
++		break;
++	default:
+ 		printf("[FAIL]\tSTI worked\n");
+ 		nerrs++;
+-	} else {
+-		printf("[OK]\tSTI faulted\n");
+ 	}
+ }
+ 
+-static void expect_gp_cli(void)
++/*
++ * Returns whether it managed to disable interrupts.
++ */
++static bool test_cli(void)
+ {
+-	if (try_cli()) {
++	int ret = try_cli();
++
++	switch (ret) {
++	case RET_FAULTED:
++		printf("[OK]\tCLI faulted\n");
++		break;
++	case RET_EMUL:
++		printf("[OK]\tCLI NOPped\n");
++		break;
++	default:
+ 		printf("[FAIL]\tCLI worked\n");
+ 		nerrs++;
+-	} else {
+-		printf("[OK]\tCLI faulted\n");
++		return true;
+ 	}
++
++	return false;
+ }
+ 
+ int main(void)
+@@ -152,8 +192,7 @@ int main(void)
+ 	}
+ 
+ 	/* Make sure that CLI/STI are blocked even with IOPL level 3 */
+-	expect_gp_cli();
+-	expect_gp_sti();
++	expect_gp_sti(test_cli());
+ 	expect_ok_outb(0x80);
+ 
+ 	/* Establish an I/O bitmap to test the restore */
+@@ -204,8 +243,7 @@ int main(void)
+ 	printf("[RUN]\tparent: write to 0x80 (should fail)\n");
+ 
+ 	expect_gp_outb(0x80);
+-	expect_gp_cli();
+-	expect_gp_sti();
++	expect_gp_sti(test_cli());
+ 
+ 	/* Test the capability checks. */
+ 	printf("\tiopl(3)\n");


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-11-26 11:57 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-11-26 11:57 UTC (permalink / raw
  To: gentoo-commits

commit:     622e893ac3faa125fb76d52b63229cee5b8c0fae
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 26 11:57:27 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 26 11:57:27 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=622e893a

Linux patch 5.10.82

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1081_linux-5.10.82.patch | 6378 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6382 insertions(+)

diff --git a/0000_README b/0000_README
index f4fa5656..db2b4487 100644
--- a/0000_README
+++ b/0000_README
@@ -367,6 +367,10 @@ Patch:  1080_linux-5.10.81.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.81
 
+Patch:  1081_linux-5.10.82.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.82
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1081_linux-5.10.82.patch b/1081_linux-5.10.82.patch
new file mode 100644
index 00000000..3518a7c9
--- /dev/null
+++ b/1081_linux-5.10.82.patch
@@ -0,0 +1,6378 @@
+diff --git a/Makefile b/Makefile
+index 1baeadb574f1c..84b15766ad66f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 81
++SUBLEVEL = 82
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/bcm-nsp.dtsi b/arch/arm/boot/dts/bcm-nsp.dtsi
+index 605b6d2f4a569..1dae02bb82c2d 100644
+--- a/arch/arm/boot/dts/bcm-nsp.dtsi
++++ b/arch/arm/boot/dts/bcm-nsp.dtsi
+@@ -77,7 +77,7 @@
+ 		interrupt-affinity = <&cpu0>, <&cpu1>;
+ 	};
+ 
+-	mpcore@19000000 {
++	mpcore-bus@19000000 {
+ 		compatible = "simple-bus";
+ 		ranges = <0x00000000 0x19000000 0x00023000>;
+ 		#address-cells = <1>;
+@@ -219,7 +219,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		sdio: sdhci@21000 {
++		sdio: mmc@21000 {
+ 			compatible = "brcm,sdhci-iproc-cygnus";
+ 			reg = <0x21000 0x100>;
+ 			interrupts = <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/bcm53016-meraki-mr32.dts b/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
+index 612d61852bfb9..577a4dc604d93 100644
+--- a/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
++++ b/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
+@@ -195,3 +195,25 @@
+ 		};
+ 	};
+ };
++
++&srab {
++	status = "okay";
++
++	ports {
++		port@0 {
++			reg = <0>;
++			label = "poe";
++		};
++
++		port@5 {
++			reg = <5>;
++			label = "cpu";
++			ethernet = <&gmac0>;
++
++			fixed-link {
++				speed = <1000>;
++				duplex-full;
++			};
++		};
++	};
++};
+diff --git a/arch/arm/boot/dts/ls1021a-tsn.dts b/arch/arm/boot/dts/ls1021a-tsn.dts
+index 9d8f0c2a8aba3..aca78b5eddf20 100644
+--- a/arch/arm/boot/dts/ls1021a-tsn.dts
++++ b/arch/arm/boot/dts/ls1021a-tsn.dts
+@@ -251,7 +251,7 @@
+ 
+ 	flash@0 {
+ 		/* Rev. A uses 64MB flash, Rev. B & C use 32MB flash */
+-		compatible = "jedec,spi-nor", "s25fl256s1", "s25fl512s";
++		compatible = "jedec,spi-nor";
+ 		spi-max-frequency = <20000000>;
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+diff --git a/arch/arm/boot/dts/ls1021a.dtsi b/arch/arm/boot/dts/ls1021a.dtsi
+index 827373ef1a547..37026b2fa6497 100644
+--- a/arch/arm/boot/dts/ls1021a.dtsi
++++ b/arch/arm/boot/dts/ls1021a.dtsi
+@@ -331,39 +331,6 @@
+ 			#thermal-sensor-cells = <1>;
+ 		};
+ 
+-		thermal-zones {
+-			cpu_thermal: cpu-thermal {
+-				polling-delay-passive = <1000>;
+-				polling-delay = <5000>;
+-
+-				thermal-sensors = <&tmu 0>;
+-
+-				trips {
+-					cpu_alert: cpu-alert {
+-						temperature = <85000>;
+-						hysteresis = <2000>;
+-						type = "passive";
+-					};
+-					cpu_crit: cpu-crit {
+-						temperature = <95000>;
+-						hysteresis = <2000>;
+-						type = "critical";
+-					};
+-				};
+-
+-				cooling-maps {
+-					map0 {
+-						trip = <&cpu_alert>;
+-						cooling-device =
+-							<&cpu0 THERMAL_NO_LIMIT
+-							THERMAL_NO_LIMIT>,
+-							<&cpu1 THERMAL_NO_LIMIT
+-							THERMAL_NO_LIMIT>;
+-					};
+-				};
+-			};
+-		};
+-
+ 		dspi0: spi@2100000 {
+ 			compatible = "fsl,ls1021a-v1.0-dspi";
+ 			#address-cells = <1>;
+@@ -1018,4 +985,37 @@
+ 			big-endian;
+ 		};
+ 	};
++
++	thermal-zones {
++		cpu_thermal: cpu-thermal {
++			polling-delay-passive = <1000>;
++			polling-delay = <5000>;
++
++			thermal-sensors = <&tmu 0>;
++
++			trips {
++				cpu_alert: cpu-alert {
++					temperature = <85000>;
++					hysteresis = <2000>;
++					type = "passive";
++				};
++				cpu_crit: cpu-crit {
++					temperature = <95000>;
++					hysteresis = <2000>;
++					type = "critical";
++				};
++			};
++
++			cooling-maps {
++				map0 {
++					trip = <&cpu_alert>;
++					cooling-device =
++						<&cpu0 THERMAL_NO_LIMIT
++						THERMAL_NO_LIMIT>,
++						<&cpu1 THERMAL_NO_LIMIT
++						THERMAL_NO_LIMIT>;
++				};
++			};
++		};
++	};
+ };
+diff --git a/arch/arm/boot/dts/omap-gpmc-smsc9221.dtsi b/arch/arm/boot/dts/omap-gpmc-smsc9221.dtsi
+index 7f6aefd134514..e7534fe9c53cf 100644
+--- a/arch/arm/boot/dts/omap-gpmc-smsc9221.dtsi
++++ b/arch/arm/boot/dts/omap-gpmc-smsc9221.dtsi
+@@ -29,7 +29,7 @@
+ 		compatible = "smsc,lan9221","smsc,lan9115";
+ 		bank-width = <2>;
+ 
+-		gpmc,mux-add-data;
++		gpmc,mux-add-data = <0>;
+ 		gpmc,cs-on-ns = <0>;
+ 		gpmc,cs-rd-off-ns = <42>;
+ 		gpmc,cs-wr-off-ns = <36>;
+diff --git a/arch/arm/boot/dts/omap3-overo-tobiduo-common.dtsi b/arch/arm/boot/dts/omap3-overo-tobiduo-common.dtsi
+index e5da3bc6f1050..218a10c0d8159 100644
+--- a/arch/arm/boot/dts/omap3-overo-tobiduo-common.dtsi
++++ b/arch/arm/boot/dts/omap3-overo-tobiduo-common.dtsi
+@@ -22,7 +22,7 @@
+ 		compatible = "smsc,lan9221","smsc,lan9115";
+ 		bank-width = <2>;
+ 
+-		gpmc,mux-add-data;
++		gpmc,mux-add-data = <0>;
+ 		gpmc,cs-on-ns = <0>;
+ 		gpmc,cs-rd-off-ns = <42>;
+ 		gpmc,cs-wr-off-ns = <36>;
+diff --git a/arch/arm/boot/dts/qcom-ipq8064-rb3011.dts b/arch/arm/boot/dts/qcom-ipq8064-rb3011.dts
+index 282b89ce3d451..33545cf40f3ab 100644
+--- a/arch/arm/boot/dts/qcom-ipq8064-rb3011.dts
++++ b/arch/arm/boot/dts/qcom-ipq8064-rb3011.dts
+@@ -19,12 +19,12 @@
+ 		stdout-path = "serial0:115200n8";
+ 	};
+ 
+-	memory@0 {
++	memory@42000000 {
+ 		reg = <0x42000000 0x3e000000>;
+ 		device_type = "memory";
+ 	};
+ 
+-	mdio0: mdio@0 {
++	mdio0: mdio-0 {
+ 		status = "okay";
+ 		compatible = "virtual,mdio-gpio";
+ 		gpios = <&qcom_pinmux 1 GPIO_ACTIVE_HIGH>,
+@@ -91,7 +91,7 @@
+ 		};
+ 	};
+ 
+-	mdio1: mdio@1 {
++	mdio1: mdio-1 {
+ 		status = "okay";
+ 		compatible = "virtual,mdio-gpio";
+ 		gpios = <&qcom_pinmux 11 GPIO_ACTIVE_HIGH>,
+diff --git a/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts b/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts
+index 27722c42b61c4..08bddbf0336da 100644
+--- a/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts
++++ b/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts
+@@ -262,10 +262,10 @@
+ 					};
+ 
+ 					ab8500_ldo_aux2 {
+-						/* Supplies the Cypress TMA140 touchscreen only with 3.3V */
++						/* Supplies the Cypress TMA140 touchscreen only with 3.0V */
+ 						regulator-name = "AUX2";
+-						regulator-min-microvolt = <3300000>;
+-						regulator-max-microvolt = <3300000>;
++						regulator-min-microvolt = <3000000>;
++						regulator-max-microvolt = <3000000>;
+ 					};
+ 
+ 					ab8500_ldo_aux3 {
+@@ -284,9 +284,9 @@
+ 
+ 					ab8500_ldo_aux5 {
+ 						regulator-name = "AUX5";
++						/* Intended for 1V8 for touchscreen but actually left unused */
+ 						regulator-min-microvolt = <1050000>;
+ 						regulator-max-microvolt = <2790000>;
+-						regulator-always-on;
+ 					};
+ 
+ 					ab8500_ldo_aux6 {
+diff --git a/arch/arm/boot/dts/sun8i-a33.dtsi b/arch/arm/boot/dts/sun8i-a33.dtsi
+index c458f5fb124fb..46f4242e9f95d 100644
+--- a/arch/arm/boot/dts/sun8i-a33.dtsi
++++ b/arch/arm/boot/dts/sun8i-a33.dtsi
+@@ -46,7 +46,7 @@
+ #include <dt-bindings/thermal/thermal.h>
+ 
+ / {
+-	cpu0_opp_table: opp_table0 {
++	cpu0_opp_table: opp-table-cpu {
+ 		compatible = "operating-points-v2";
+ 		opp-shared;
+ 
+@@ -164,7 +164,7 @@
+ 		io-channels = <&ths>;
+ 	};
+ 
+-	mali_opp_table: gpu-opp-table {
++	mali_opp_table: opp-table-gpu {
+ 		compatible = "operating-points-v2";
+ 
+ 		opp-144000000 {
+diff --git a/arch/arm/boot/dts/sun8i-a83t.dtsi b/arch/arm/boot/dts/sun8i-a83t.dtsi
+index c010b27fdb6a6..a746e449b0bae 100644
+--- a/arch/arm/boot/dts/sun8i-a83t.dtsi
++++ b/arch/arm/boot/dts/sun8i-a83t.dtsi
+@@ -200,7 +200,7 @@
+ 		status = "disabled";
+ 	};
+ 
+-	cpu0_opp_table: opp_table0 {
++	cpu0_opp_table: opp-table-cluster0 {
+ 		compatible = "operating-points-v2";
+ 		opp-shared;
+ 
+@@ -253,7 +253,7 @@
+ 		};
+ 	};
+ 
+-	cpu1_opp_table: opp_table1 {
++	cpu1_opp_table: opp-table-cluster1 {
+ 		compatible = "operating-points-v2";
+ 		opp-shared;
+ 
+diff --git a/arch/arm/boot/dts/sun8i-h3.dtsi b/arch/arm/boot/dts/sun8i-h3.dtsi
+index 4e89701df91f8..ae4f933abb895 100644
+--- a/arch/arm/boot/dts/sun8i-h3.dtsi
++++ b/arch/arm/boot/dts/sun8i-h3.dtsi
+@@ -44,7 +44,7 @@
+ #include <dt-bindings/thermal/thermal.h>
+ 
+ / {
+-	cpu0_opp_table: opp_table0 {
++	cpu0_opp_table: opp-table-cpu {
+ 		compatible = "operating-points-v2";
+ 		opp-shared;
+ 
+@@ -112,7 +112,7 @@
+ 		};
+ 	};
+ 
+-	gpu_opp_table: gpu-opp-table {
++	gpu_opp_table: opp-table-gpu {
+ 		compatible = "operating-points-v2";
+ 
+ 		opp-120000000 {
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a100.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a100.dtsi
+index cc321c04f1219..f6d7d7f7fdabe 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a100.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a100.dtsi
+@@ -343,19 +343,19 @@
+ 	};
+ 
+ 	thermal-zones {
+-		cpu-thermal-zone {
++		cpu-thermal {
+ 			polling-delay-passive = <0>;
+ 			polling-delay = <0>;
+ 			thermal-sensors = <&ths 0>;
+ 		};
+ 
+-		ddr-thermal-zone {
++		ddr-thermal {
+ 			polling-delay-passive = <0>;
+ 			polling-delay = <0>;
+ 			thermal-sensors = <&ths 2>;
+ 		};
+ 
+-		gpu-thermal-zone {
++		gpu-thermal {
+ 			polling-delay-passive = <0>;
+ 			polling-delay = <0>;
+ 			thermal-sensors = <&ths 1>;
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-cpu-opp.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64-cpu-opp.dtsi
+index 578c37490d901..e39db51eb4489 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-cpu-opp.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-cpu-opp.dtsi
+@@ -4,7 +4,7 @@
+  */
+ 
+ / {
+-	cpu0_opp_table: opp_table0 {
++	cpu0_opp_table: opp-table-cpu {
+ 		compatible = "operating-points-v2";
+ 		opp-shared;
+ 
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h5-cpu-opp.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h5-cpu-opp.dtsi
+index b2657201957eb..1afad8b437d72 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h5-cpu-opp.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h5-cpu-opp.dtsi
+@@ -2,7 +2,7 @@
+ // Copyright (C) 2020 Chen-Yu Tsai <wens@csie.org>
+ 
+ / {
+-	cpu_opp_table: cpu-opp-table {
++	cpu_opp_table: opp-table-cpu {
+ 		compatible = "operating-points-v2";
+ 		opp-shared;
+ 
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
+index 10489e5086956..0ee8a5adf02b0 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
+@@ -204,7 +204,7 @@
+ 			};
+ 		};
+ 
+-		gpu_thermal {
++		gpu-thermal {
+ 			polling-delay-passive = <0>;
+ 			polling-delay = <0>;
+ 			thermal-sensors = <&ths 1>;
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-cpu-opp.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h6-cpu-opp.dtsi
+index 1a5eddc5a40f3..653452926d857 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6-cpu-opp.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-cpu-opp.dtsi
+@@ -3,7 +3,7 @@
+ // Copyright (C) 2020 Clément Péron <peron.clem@gmail.com>
+ 
+ / {
+-	cpu_opp_table: cpu-opp-table {
++	cpu_opp_table: opp-table-cpu {
+ 		compatible = "allwinner,sun50i-h6-operating-points";
+ 		nvmem-cells = <&cpu_speed_grade>;
+ 		opp-shared;
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1088a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1088a.dtsi
+index 692d8f4a206da..334af263d7b5d 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1088a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1088a.dtsi
+@@ -673,56 +673,56 @@
+ 		};
+ 
+ 		cluster1_core0_watchdog: wdt@c000000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc000000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 15>, <&clockgen 4 15>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster1_core1_watchdog: wdt@c010000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc010000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 15>, <&clockgen 4 15>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster1_core2_watchdog: wdt@c020000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc020000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 15>, <&clockgen 4 15>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster1_core3_watchdog: wdt@c030000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc030000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 15>, <&clockgen 4 15>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster2_core0_watchdog: wdt@c100000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc100000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 15>, <&clockgen 4 15>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster2_core1_watchdog: wdt@c110000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc110000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 15>, <&clockgen 4 15>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster2_core2_watchdog: wdt@c120000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc120000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 15>, <&clockgen 4 15>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster2_core3_watchdog: wdt@c130000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc130000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 15>, <&clockgen 4 15>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
+index 4d34d82b898a4..eb6641a3566e1 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
+@@ -351,56 +351,56 @@
+ 		};
+ 
+ 		cluster1_core0_watchdog: wdt@c000000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc000000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster1_core1_watchdog: wdt@c010000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc010000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster2_core0_watchdog: wdt@c100000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc100000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster2_core1_watchdog: wdt@c110000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc110000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster3_core0_watchdog: wdt@c200000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc200000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster3_core1_watchdog: wdt@c210000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc210000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster4_core0_watchdog: wdt@c300000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc300000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+ 		};
+ 
+ 		cluster4_core1_watchdog: wdt@c310000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xc310000 0x0 0x1000>;
+ 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
+ 			clock-names = "wdog_clk", "apb_pclk";
+diff --git a/arch/arm64/boot/dts/hisilicon/hi3660.dtsi b/arch/arm64/boot/dts/hisilicon/hi3660.dtsi
+index 994140fbc916e..fe4dce23ef7e1 100644
+--- a/arch/arm64/boot/dts/hisilicon/hi3660.dtsi
++++ b/arch/arm64/boot/dts/hisilicon/hi3660.dtsi
+@@ -1086,7 +1086,7 @@
+ 		};
+ 
+ 		watchdog0: watchdog@e8a06000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xe8a06000 0x0 0x1000>;
+ 			interrupts = <GIC_SPI 44 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&crg_ctrl HI3660_OSC32K>,
+@@ -1095,7 +1095,7 @@
+ 		};
+ 
+ 		watchdog1: watchdog@e8a07000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xe8a07000 0x0 0x1000>;
+ 			interrupts = <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&crg_ctrl HI3660_OSC32K>,
+diff --git a/arch/arm64/boot/dts/hisilicon/hi6220.dtsi b/arch/arm64/boot/dts/hisilicon/hi6220.dtsi
+index 014735a9bc731..fbce014bdc270 100644
+--- a/arch/arm64/boot/dts/hisilicon/hi6220.dtsi
++++ b/arch/arm64/boot/dts/hisilicon/hi6220.dtsi
+@@ -840,7 +840,7 @@
+ 		};
+ 
+ 		watchdog0: watchdog@f8005000 {
+-			compatible = "arm,sp805-wdt", "arm,primecell";
++			compatible = "arm,sp805", "arm,primecell";
+ 			reg = <0x0 0xf8005000 0x0 0x1000>;
+ 			interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&ao_ctrl HI6220_WDT0_PCLK>,
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 3ceb36cac512f..9cb8f7a052df9 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -200,7 +200,7 @@
+ 			clock-names = "bam_clk";
+ 			#dma-cells = <1>;
+ 			qcom,ee = <1>;
+-			qcom,controlled-remotely = <1>;
++			qcom,controlled-remotely;
+ 			qcom,config-pipe-trust-reg = <0>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8998.dtsi b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+index c45870600909f..9e04ac3f596d0 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+@@ -300,38 +300,42 @@
+ 			LITTLE_CPU_SLEEP_0: cpu-sleep-0-0 {
+ 				compatible = "arm,idle-state";
+ 				idle-state-name = "little-retention";
++				/* CPU Retention (C2D), L2 Active */
+ 				arm,psci-suspend-param = <0x00000002>;
+ 				entry-latency-us = <81>;
+ 				exit-latency-us = <86>;
+-				min-residency-us = <200>;
++				min-residency-us = <504>;
+ 			};
+ 
+ 			LITTLE_CPU_SLEEP_1: cpu-sleep-0-1 {
+ 				compatible = "arm,idle-state";
+ 				idle-state-name = "little-power-collapse";
++				/* CPU + L2 Power Collapse (C3, D4) */
+ 				arm,psci-suspend-param = <0x40000003>;
+-				entry-latency-us = <273>;
+-				exit-latency-us = <612>;
+-				min-residency-us = <1000>;
++				entry-latency-us = <814>;
++				exit-latency-us = <4562>;
++				min-residency-us = <9183>;
+ 				local-timer-stop;
+ 			};
+ 
+ 			BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
+ 				compatible = "arm,idle-state";
+ 				idle-state-name = "big-retention";
++				/* CPU Retention (C2D), L2 Active */
+ 				arm,psci-suspend-param = <0x00000002>;
+ 				entry-latency-us = <79>;
+ 				exit-latency-us = <82>;
+-				min-residency-us = <200>;
++				min-residency-us = <1302>;
+ 			};
+ 
+ 			BIG_CPU_SLEEP_1: cpu-sleep-1-1 {
+ 				compatible = "arm,idle-state";
+ 				idle-state-name = "big-power-collapse";
++				/* CPU + L2 Power Collapse (C3, D4) */
+ 				arm,psci-suspend-param = <0x40000003>;
+-				entry-latency-us = <336>;
+-				exit-latency-us = <525>;
+-				min-residency-us = <1000>;
++				entry-latency-us = <724>;
++				exit-latency-us = <2027>;
++				min-residency-us = <9419>;
+ 				local-timer-stop;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+index 219b7507a10fb..4297c1db5a413 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+@@ -379,10 +379,6 @@
+ 	};
+ };
+ 
+-&cdn_dp {
+-	status = "okay";
+-};
+-
+ &cpu_b0 {
+ 	cpu-supply = <&vdd_cpu_b>;
+ };
+diff --git a/arch/arm64/boot/dts/xilinx/zynqmp-zc1751-xm016-dc2.dts b/arch/arm64/boot/dts/xilinx/zynqmp-zc1751-xm016-dc2.dts
+index 4a86efa32d687..f7124e15f0ff6 100644
+--- a/arch/arm64/boot/dts/xilinx/zynqmp-zc1751-xm016-dc2.dts
++++ b/arch/arm64/boot/dts/xilinx/zynqmp-zc1751-xm016-dc2.dts
+@@ -131,7 +131,7 @@
+ 		reg = <0>;
+ 
+ 		partition@0 {
+-			label = "data";
++			label = "spi0-data";
+ 			reg = <0x0 0x100000>;
+ 		};
+ 	};
+@@ -149,7 +149,7 @@
+ 		reg = <0>;
+ 
+ 		partition@0 {
+-			label = "data";
++			label = "spi1-data";
+ 			reg = <0x0 0x84000>;
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
+index 771f60e0346d0..9e198cacc37dd 100644
+--- a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
++++ b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
+@@ -688,7 +688,7 @@
+ 		};
+ 
+ 		uart0: serial@ff000000 {
+-			compatible = "cdns,uart-r1p12", "xlnx,xuartps";
++			compatible = "xlnx,zynqmp-uart", "cdns,uart-r1p12";
+ 			status = "disabled";
+ 			interrupt-parent = <&gic>;
+ 			interrupts = <0 21 4>;
+@@ -698,7 +698,7 @@
+ 		};
+ 
+ 		uart1: serial@ff010000 {
+-			compatible = "cdns,uart-r1p12", "xlnx,xuartps";
++			compatible = "xlnx,zynqmp-uart", "cdns,uart-r1p12";
+ 			status = "disabled";
+ 			interrupt-parent = <&gic>;
+ 			interrupts = <0 22 4>;
+diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile
+index 79280c53b9a61..a463b9bceed41 100644
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -48,7 +48,8 @@ cc32-as-instr = $(call try-run,\
+ # As a result we set our own flags here.
+ 
+ # KBUILD_CPPFLAGS and NOSTDINC_FLAGS from top-level Makefile
+-VDSO_CPPFLAGS := -D__KERNEL__ -nostdinc -isystem $(shell $(CC_COMPAT) -print-file-name=include)
++VDSO_CPPFLAGS := -D__KERNEL__ -nostdinc
++VDSO_CPPFLAGS += -isystem $(shell $(CC_COMPAT) -print-file-name=include 2>/dev/null)
+ VDSO_CPPFLAGS += $(LINUXINCLUDE)
+ 
+ # Common C and assembly flags
+diff --git a/arch/hexagon/include/asm/timer-regs.h b/arch/hexagon/include/asm/timer-regs.h
+deleted file mode 100644
+index ee6c61423a058..0000000000000
+--- a/arch/hexagon/include/asm/timer-regs.h
++++ /dev/null
+@@ -1,26 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-only */
+-/*
+- * Timer support for Hexagon
+- *
+- * Copyright (c) 2010-2011, The Linux Foundation. All rights reserved.
+- */
+-
+-#ifndef _ASM_TIMER_REGS_H
+-#define _ASM_TIMER_REGS_H
+-
+-/*  This stuff should go into a platform specific file  */
+-#define TCX0_CLK_RATE		19200
+-#define TIMER_ENABLE		0
+-#define TIMER_CLR_ON_MATCH	1
+-
+-/*
+- * 8x50 HDD Specs 5-8.  Simulator co-sim not fixed until
+- * release 1.1, and then it's "adjustable" and probably not defaulted.
+- */
+-#define RTOS_TIMER_INT		3
+-#ifdef CONFIG_HEXAGON_COMET
+-#define RTOS_TIMER_REGS_ADDR	0xAB000000UL
+-#endif
+-#define SLEEP_CLK_RATE		32000
+-
+-#endif
+diff --git a/arch/hexagon/include/asm/timex.h b/arch/hexagon/include/asm/timex.h
+index 8d4ec76fceb45..dfe69e118b2be 100644
+--- a/arch/hexagon/include/asm/timex.h
++++ b/arch/hexagon/include/asm/timex.h
+@@ -7,11 +7,10 @@
+ #define _ASM_TIMEX_H
+ 
+ #include <asm-generic/timex.h>
+-#include <asm/timer-regs.h>
+ #include <asm/hexagon_vm.h>
+ 
+ /* Using TCX0 as our clock.  CLOCK_TICK_RATE scheduled to be removed. */
+-#define CLOCK_TICK_RATE              TCX0_CLK_RATE
++#define CLOCK_TICK_RATE              19200
+ 
+ #define ARCH_HAS_READ_CURRENT_TIMER
+ 
+diff --git a/arch/hexagon/kernel/time.c b/arch/hexagon/kernel/time.c
+index feffe527ac929..febc95714d756 100644
+--- a/arch/hexagon/kernel/time.c
++++ b/arch/hexagon/kernel/time.c
+@@ -17,9 +17,10 @@
+ #include <linux/of_irq.h>
+ #include <linux/module.h>
+ 
+-#include <asm/timer-regs.h>
+ #include <asm/hexagon_vm.h>
+ 
++#define TIMER_ENABLE		BIT(0)
++
+ /*
+  * For the clocksource we need:
+  *	pcycle frequency (600MHz)
+@@ -33,6 +34,13 @@ cycles_t	pcycle_freq_mhz;
+ cycles_t	thread_freq_mhz;
+ cycles_t	sleep_clk_freq;
+ 
++/*
++ * 8x50 HDD Specs 5-8.  Simulator co-sim not fixed until
++ * release 1.1, and then it's "adjustable" and probably not defaulted.
++ */
++#define RTOS_TIMER_INT		3
++#define RTOS_TIMER_REGS_ADDR	0xAB000000UL
++
+ static struct resource rtos_timer_resources[] = {
+ 	{
+ 		.start	= RTOS_TIMER_REGS_ADDR,
+@@ -80,7 +88,7 @@ static int set_next_event(unsigned long delta, struct clock_event_device *evt)
+ 	iowrite32(0, &rtos_timer->clear);
+ 
+ 	iowrite32(delta, &rtos_timer->match);
+-	iowrite32(1 << TIMER_ENABLE, &rtos_timer->enable);
++	iowrite32(TIMER_ENABLE, &rtos_timer->enable);
+ 	return 0;
+ }
+ 
+diff --git a/arch/hexagon/lib/io.c b/arch/hexagon/lib/io.c
+index d35d69d6588c4..55f75392857b0 100644
+--- a/arch/hexagon/lib/io.c
++++ b/arch/hexagon/lib/io.c
+@@ -27,6 +27,7 @@ void __raw_readsw(const void __iomem *addr, void *data, int len)
+ 		*dst++ = *src;
+ 
+ }
++EXPORT_SYMBOL(__raw_readsw);
+ 
+ /*
+  * __raw_writesw - read words a short at a time
+@@ -47,6 +48,7 @@ void __raw_writesw(void __iomem *addr, const void *data, int len)
+ 
+ 
+ }
++EXPORT_SYMBOL(__raw_writesw);
+ 
+ /*  Pretty sure len is pre-adjusted for the length of the access already */
+ void __raw_readsl(const void __iomem *addr, void *data, int len)
+@@ -62,6 +64,7 @@ void __raw_readsl(const void __iomem *addr, void *data, int len)
+ 
+ 
+ }
++EXPORT_SYMBOL(__raw_readsl);
+ 
+ void __raw_writesl(void __iomem *addr, const void *data, int len)
+ {
+@@ -76,3 +79,4 @@ void __raw_writesl(void __iomem *addr, const void *data, int len)
+ 
+ 
+ }
++EXPORT_SYMBOL(__raw_writesl);
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 5c6e9ed9b2a75..94a748e95231b 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -320,6 +320,9 @@ config BCM63XX
+ 	select SYS_SUPPORTS_32BIT_KERNEL
+ 	select SYS_SUPPORTS_BIG_ENDIAN
+ 	select SYS_HAS_EARLY_PRINTK
++	select SYS_HAS_CPU_BMIPS32_3300
++	select SYS_HAS_CPU_BMIPS4350
++	select SYS_HAS_CPU_BMIPS4380
+ 	select SWAP_IO_SPACE
+ 	select GPIOLIB
+ 	select MIPS_L1_CACHE_SHIFT_4
+diff --git a/arch/mips/bcm63xx/clk.c b/arch/mips/bcm63xx/clk.c
+index 164115944a7fd..aba6e2d6a736c 100644
+--- a/arch/mips/bcm63xx/clk.c
++++ b/arch/mips/bcm63xx/clk.c
+@@ -381,6 +381,12 @@ void clk_disable(struct clk *clk)
+ 
+ EXPORT_SYMBOL(clk_disable);
+ 
++struct clk *clk_get_parent(struct clk *clk)
++{
++	return NULL;
++}
++EXPORT_SYMBOL(clk_get_parent);
++
+ unsigned long clk_get_rate(struct clk *clk)
+ {
+ 	if (!clk)
+diff --git a/arch/mips/generic/yamon-dt.c b/arch/mips/generic/yamon-dt.c
+index a3aa22c77cadc..a07a5edbcda78 100644
+--- a/arch/mips/generic/yamon-dt.c
++++ b/arch/mips/generic/yamon-dt.c
+@@ -75,7 +75,7 @@ static unsigned int __init gen_fdt_mem_array(
+ __init int yamon_dt_append_memory(void *fdt,
+ 				  const struct yamon_mem_region *regions)
+ {
+-	unsigned long phys_memsize, memsize;
++	unsigned long phys_memsize = 0, memsize;
+ 	__be32 mem_array[2 * MAX_MEM_ARRAY_ENTRIES];
+ 	unsigned int mem_entries;
+ 	int i, err, mem_off;
+diff --git a/arch/mips/lantiq/clk.c b/arch/mips/lantiq/clk.c
+index dd819e31fcbbf..4916cccf378fd 100644
+--- a/arch/mips/lantiq/clk.c
++++ b/arch/mips/lantiq/clk.c
+@@ -158,6 +158,12 @@ void clk_deactivate(struct clk *clk)
+ }
+ EXPORT_SYMBOL(clk_deactivate);
+ 
++struct clk *clk_get_parent(struct clk *clk)
++{
++	return NULL;
++}
++EXPORT_SYMBOL(clk_get_parent);
++
+ static inline u32 get_counter_resolution(void)
+ {
+ 	u32 res;
+diff --git a/arch/mips/sni/time.c b/arch/mips/sni/time.c
+index 240bb68ec2478..ff3ba7e778901 100644
+--- a/arch/mips/sni/time.c
++++ b/arch/mips/sni/time.c
+@@ -18,14 +18,14 @@ static int a20r_set_periodic(struct clock_event_device *evt)
+ {
+ 	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 12) = 0x34;
+ 	wmb();
+-	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 0) = SNI_COUNTER0_DIV;
++	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 0) = SNI_COUNTER0_DIV & 0xff;
+ 	wmb();
+ 	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 0) = SNI_COUNTER0_DIV >> 8;
+ 	wmb();
+ 
+ 	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 12) = 0xb4;
+ 	wmb();
+-	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 8) = SNI_COUNTER2_DIV;
++	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 8) = SNI_COUNTER2_DIV & 0xff;
+ 	wmb();
+ 	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 8) = SNI_COUNTER2_DIV >> 8;
+ 	wmb();
+diff --git a/arch/powerpc/boot/dts/charon.dts b/arch/powerpc/boot/dts/charon.dts
+index 408b486b13dff..cd589539f313f 100644
+--- a/arch/powerpc/boot/dts/charon.dts
++++ b/arch/powerpc/boot/dts/charon.dts
+@@ -35,7 +35,7 @@
+ 		};
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>;	// 128MB
+ 	};
+diff --git a/arch/powerpc/boot/dts/digsy_mtc.dts b/arch/powerpc/boot/dts/digsy_mtc.dts
+index 0e5e9d3acf79f..19a14e62e65f4 100644
+--- a/arch/powerpc/boot/dts/digsy_mtc.dts
++++ b/arch/powerpc/boot/dts/digsy_mtc.dts
+@@ -16,7 +16,7 @@
+ 	model = "intercontrol,digsy-mtc";
+ 	compatible = "intercontrol,digsy-mtc";
+ 
+-	memory {
++	memory@0 {
+ 		reg = <0x00000000 0x02000000>;	// 32MB
+ 	};
+ 
+diff --git a/arch/powerpc/boot/dts/lite5200.dts b/arch/powerpc/boot/dts/lite5200.dts
+index cb2782dd6132c..e7b194775d783 100644
+--- a/arch/powerpc/boot/dts/lite5200.dts
++++ b/arch/powerpc/boot/dts/lite5200.dts
+@@ -32,7 +32,7 @@
+ 		};
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x04000000>;	// 64MB
+ 	};
+diff --git a/arch/powerpc/boot/dts/lite5200b.dts b/arch/powerpc/boot/dts/lite5200b.dts
+index 2b86c81f90485..547cbe726ff23 100644
+--- a/arch/powerpc/boot/dts/lite5200b.dts
++++ b/arch/powerpc/boot/dts/lite5200b.dts
+@@ -31,7 +31,7 @@
+ 		led4 { gpios = <&gpio_simple 2 1>; };
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		reg = <0x00000000 0x10000000>;	// 256MB
+ 	};
+ 
+diff --git a/arch/powerpc/boot/dts/media5200.dts b/arch/powerpc/boot/dts/media5200.dts
+index 61cae9dcddef4..f3188018faceb 100644
+--- a/arch/powerpc/boot/dts/media5200.dts
++++ b/arch/powerpc/boot/dts/media5200.dts
+@@ -32,7 +32,7 @@
+ 		};
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		reg = <0x00000000 0x08000000>;	// 128MB RAM
+ 	};
+ 
+diff --git a/arch/powerpc/boot/dts/mpc5200b.dtsi b/arch/powerpc/boot/dts/mpc5200b.dtsi
+index 648fe31795f49..8b796f3b11da7 100644
+--- a/arch/powerpc/boot/dts/mpc5200b.dtsi
++++ b/arch/powerpc/boot/dts/mpc5200b.dtsi
+@@ -33,7 +33,7 @@
+ 		};
+ 	};
+ 
+-	memory: memory {
++	memory: memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x04000000>;	// 64MB
+ 	};
+diff --git a/arch/powerpc/boot/dts/o2d.dts b/arch/powerpc/boot/dts/o2d.dts
+index 24a46f65e5299..e0a8d3034417f 100644
+--- a/arch/powerpc/boot/dts/o2d.dts
++++ b/arch/powerpc/boot/dts/o2d.dts
+@@ -12,7 +12,7 @@
+ 	model = "ifm,o2d";
+ 	compatible = "ifm,o2d";
+ 
+-	memory {
++	memory@0 {
+ 		reg = <0x00000000 0x08000000>;  // 128MB
+ 	};
+ 
+diff --git a/arch/powerpc/boot/dts/o2d.dtsi b/arch/powerpc/boot/dts/o2d.dtsi
+index 6661955a2be47..b55a9e5bd828c 100644
+--- a/arch/powerpc/boot/dts/o2d.dtsi
++++ b/arch/powerpc/boot/dts/o2d.dtsi
+@@ -19,7 +19,7 @@
+ 	model = "ifm,o2d";
+ 	compatible = "ifm,o2d";
+ 
+-	memory {
++	memory@0 {
+ 		reg = <0x00000000 0x04000000>;	// 64MB
+ 	};
+ 
+diff --git a/arch/powerpc/boot/dts/o2dnt2.dts b/arch/powerpc/boot/dts/o2dnt2.dts
+index eeba7f5507d5d..c2eedbd1f5fcb 100644
+--- a/arch/powerpc/boot/dts/o2dnt2.dts
++++ b/arch/powerpc/boot/dts/o2dnt2.dts
+@@ -12,7 +12,7 @@
+ 	model = "ifm,o2dnt2";
+ 	compatible = "ifm,o2d";
+ 
+-	memory {
++	memory@0 {
+ 		reg = <0x00000000 0x08000000>;  // 128MB
+ 	};
+ 
+diff --git a/arch/powerpc/boot/dts/o3dnt.dts b/arch/powerpc/boot/dts/o3dnt.dts
+index fd00396b0593e..e4c1bdd412716 100644
+--- a/arch/powerpc/boot/dts/o3dnt.dts
++++ b/arch/powerpc/boot/dts/o3dnt.dts
+@@ -12,7 +12,7 @@
+ 	model = "ifm,o3dnt";
+ 	compatible = "ifm,o2d";
+ 
+-	memory {
++	memory@0 {
+ 		reg = <0x00000000 0x04000000>;  // 64MB
+ 	};
+ 
+diff --git a/arch/powerpc/boot/dts/pcm032.dts b/arch/powerpc/boot/dts/pcm032.dts
+index 780e13d99e7b8..1895bc95900cc 100644
+--- a/arch/powerpc/boot/dts/pcm032.dts
++++ b/arch/powerpc/boot/dts/pcm032.dts
+@@ -20,7 +20,7 @@
+ 	model = "phytec,pcm032";
+ 	compatible = "phytec,pcm032";
+ 
+-	memory {
++	memory@0 {
+ 		reg = <0x00000000 0x08000000>;	// 128MB
+ 	};
+ 
+diff --git a/arch/powerpc/boot/dts/tqm5200.dts b/arch/powerpc/boot/dts/tqm5200.dts
+index 9ed0bc78967e1..5bb25a9e40a01 100644
+--- a/arch/powerpc/boot/dts/tqm5200.dts
++++ b/arch/powerpc/boot/dts/tqm5200.dts
+@@ -32,7 +32,7 @@
+ 		};
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x04000000>;	// 64MB
+ 	};
+diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
+index ce5fd93499a74..a61b4ff3b7102 100644
+--- a/arch/powerpc/kernel/head_8xx.S
++++ b/arch/powerpc/kernel/head_8xx.S
+@@ -766,6 +766,7 @@ _GLOBAL(mmu_pin_tlb)
+ #ifdef CONFIG_PIN_TLB_DATA
+ 	LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET)
+ 	LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG | _PMD_ACCESSED)
++	li	r8, 0
+ #ifdef CONFIG_PIN_TLB_IMMR
+ 	li	r0, 3
+ #else
+@@ -774,26 +775,26 @@ _GLOBAL(mmu_pin_tlb)
+ 	mtctr	r0
+ 	cmpwi	r4, 0
+ 	beq	4f
+-	LOAD_REG_IMMEDIATE(r8, 0xf0 | _PAGE_RO | _PAGE_SPS | _PAGE_SH | _PAGE_PRESENT)
+ 	LOAD_REG_ADDR(r9, _sinittext)
+ 
+ 2:	ori	r0, r6, MD_EVALID
++	ori	r12, r8, 0xf0 | _PAGE_RO | _PAGE_SPS | _PAGE_SH | _PAGE_PRESENT
+ 	mtspr	SPRN_MD_CTR, r5
+ 	mtspr	SPRN_MD_EPN, r0
+ 	mtspr	SPRN_MD_TWC, r7
+-	mtspr	SPRN_MD_RPN, r8
++	mtspr	SPRN_MD_RPN, r12
+ 	addi	r5, r5, 0x100
+ 	addis	r6, r6, SZ_8M@h
+ 	addis	r8, r8, SZ_8M@h
+ 	cmplw	r6, r9
+ 	bdnzt	lt, 2b
+-
+-4:	LOAD_REG_IMMEDIATE(r8, 0xf0 | _PAGE_SPS | _PAGE_SH | _PAGE_PRESENT)
++4:
+ 2:	ori	r0, r6, MD_EVALID
++	ori	r12, r8, 0xf0 | _PAGE_DIRTY | _PAGE_SPS | _PAGE_SH | _PAGE_PRESENT
+ 	mtspr	SPRN_MD_CTR, r5
+ 	mtspr	SPRN_MD_EPN, r0
+ 	mtspr	SPRN_MD_TWC, r7
+-	mtspr	SPRN_MD_RPN, r8
++	mtspr	SPRN_MD_RPN, r12
+ 	addi	r5, r5, 0x100
+ 	addis	r6, r6, SZ_8M@h
+ 	addis	r8, r8, SZ_8M@h
+@@ -814,7 +815,7 @@ _GLOBAL(mmu_pin_tlb)
+ #endif
+ #if defined(CONFIG_PIN_TLB_IMMR) || defined(CONFIG_PIN_TLB_DATA)
+ 	lis	r0, (MD_RSV4I | MD_TWAM)@h
+-	mtspr	SPRN_MI_CTR, r0
++	mtspr	SPRN_MD_CTR, r0
+ #endif
+ 	mtspr	SPRN_SRR1, r10
+ 	mtspr	SPRN_SRR0, r11
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index db78123166a8b..b1d9afffd8419 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -2539,7 +2539,7 @@ hcall_real_table:
+ 	.globl	hcall_real_table_end
+ hcall_real_table_end:
+ 
+-_GLOBAL(kvmppc_h_set_xdabr)
++_GLOBAL_TOC(kvmppc_h_set_xdabr)
+ EXPORT_SYMBOL_GPL(kvmppc_h_set_xdabr)
+ 	andi.	r0, r5, DABRX_USER | DABRX_KERNEL
+ 	beq	6f
+@@ -2549,7 +2549,7 @@ EXPORT_SYMBOL_GPL(kvmppc_h_set_xdabr)
+ 6:	li	r3, H_PARAMETER
+ 	blr
+ 
+-_GLOBAL(kvmppc_h_set_dabr)
++_GLOBAL_TOC(kvmppc_h_set_dabr)
+ EXPORT_SYMBOL_GPL(kvmppc_h_set_dabr)
+ 	li	r5, DABRX_USER | DABRX_KERNEL
+ 3:
+diff --git a/arch/powerpc/sysdev/dcr-low.S b/arch/powerpc/sysdev/dcr-low.S
+index efeeb1b885a17..329b9c4ae5429 100644
+--- a/arch/powerpc/sysdev/dcr-low.S
++++ b/arch/powerpc/sysdev/dcr-low.S
+@@ -11,7 +11,7 @@
+ #include <asm/export.h>
+ 
+ #define DCR_ACCESS_PROLOG(table) \
+-	cmpli	cr0,r3,1024;	 \
++	cmplwi	cr0,r3,1024;	 \
+ 	rlwinm  r3,r3,4,18,27;   \
+ 	lis     r5,table@h;      \
+ 	ori     r5,r5,table@l;   \
+diff --git a/arch/s390/include/asm/kexec.h b/arch/s390/include/asm/kexec.h
+index ea398a05f6432..7f3c9ac34bd8d 100644
+--- a/arch/s390/include/asm/kexec.h
++++ b/arch/s390/include/asm/kexec.h
+@@ -74,6 +74,12 @@ void *kexec_file_add_components(struct kimage *image,
+ int arch_kexec_do_relocs(int r_type, void *loc, unsigned long val,
+ 			 unsigned long addr);
+ 
++#define ARCH_HAS_KIMAGE_ARCH
++
++struct kimage_arch {
++	void *ipl_buf;
++};
++
+ extern const struct kexec_file_ops s390_kexec_image_ops;
+ extern const struct kexec_file_ops s390_kexec_elf_ops;
+ 
+diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
+index 98b3aca1de8e1..6da06905ddce5 100644
+--- a/arch/s390/kernel/ipl.c
++++ b/arch/s390/kernel/ipl.c
+@@ -2156,7 +2156,7 @@ void *ipl_report_finish(struct ipl_report *report)
+ 
+ 	buf = vzalloc(report->size);
+ 	if (!buf)
+-		return ERR_PTR(-ENOMEM);
++		goto out;
+ 	ptr = buf;
+ 
+ 	memcpy(ptr, report->ipib, report->ipib->hdr.len);
+@@ -2195,6 +2195,7 @@ void *ipl_report_finish(struct ipl_report *report)
+ 	}
+ 
+ 	BUG_ON(ptr > buf + report->size);
++out:
+ 	return buf;
+ }
+ 
+diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c
+index f9e4baa64b675..e7435f3a3d2d2 100644
+--- a/arch/s390/kernel/machine_kexec_file.c
++++ b/arch/s390/kernel/machine_kexec_file.c
+@@ -12,6 +12,7 @@
+ #include <linux/kexec.h>
+ #include <linux/module_signature.h>
+ #include <linux/verification.h>
++#include <linux/vmalloc.h>
+ #include <asm/boot_data.h>
+ #include <asm/ipl.h>
+ #include <asm/setup.h>
+@@ -170,6 +171,7 @@ static int kexec_file_add_ipl_report(struct kimage *image,
+ 	struct kexec_buf buf;
+ 	unsigned long addr;
+ 	void *ptr, *end;
++	int ret;
+ 
+ 	buf.image = image;
+ 
+@@ -199,9 +201,13 @@ static int kexec_file_add_ipl_report(struct kimage *image,
+ 		ptr += len;
+ 	}
+ 
++	ret = -ENOMEM;
+ 	buf.buffer = ipl_report_finish(data->report);
++	if (!buf.buffer)
++		goto out;
+ 	buf.bufsz = data->report->size;
+ 	buf.memsz = buf.bufsz;
++	image->arch.ipl_buf = buf.buffer;
+ 
+ 	data->memsz += buf.memsz;
+ 
+@@ -209,7 +215,9 @@ static int kexec_file_add_ipl_report(struct kimage *image,
+ 		data->kernel_buf + offsetof(struct lowcore, ipl_parmblock_ptr);
+ 	*lc_ipl_parmblock_ptr = (__u32)buf.mem;
+ 
+-	return kexec_add_buffer(&buf);
++	ret = kexec_add_buffer(&buf);
++out:
++	return ret;
+ }
+ 
+ void *kexec_file_add_components(struct kimage *image,
+@@ -321,3 +329,11 @@ int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
+ 
+ 	return kexec_image_probe_default(image, buf, buf_len);
+ }
++
++int arch_kimage_file_post_load_cleanup(struct kimage *image)
++{
++	vfree(image->arch.ipl_buf);
++	image->arch.ipl_buf = NULL;
++
++	return kexec_image_post_load_cleanup_default(image);
++}
+diff --git a/arch/sh/Kconfig.debug b/arch/sh/Kconfig.debug
+index 28a43d63bde1f..97b0e26cf05a1 100644
+--- a/arch/sh/Kconfig.debug
++++ b/arch/sh/Kconfig.debug
+@@ -57,6 +57,7 @@ config DUMP_CODE
+ 
+ config DWARF_UNWINDER
+ 	bool "Enable the DWARF unwinder for stacktraces"
++	depends on DEBUG_KERNEL
+ 	select FRAME_POINTER
+ 	default n
+ 	help
+diff --git a/arch/sh/include/asm/sfp-machine.h b/arch/sh/include/asm/sfp-machine.h
+index cbc7cf8c97ce6..2d2423478b71d 100644
+--- a/arch/sh/include/asm/sfp-machine.h
++++ b/arch/sh/include/asm/sfp-machine.h
+@@ -13,6 +13,14 @@
+ #ifndef _SFP_MACHINE_H
+ #define _SFP_MACHINE_H
+ 
++#ifdef __BIG_ENDIAN__
++#define __BYTE_ORDER __BIG_ENDIAN
++#define __LITTLE_ENDIAN 0
++#else
++#define __BYTE_ORDER __LITTLE_ENDIAN
++#define __BIG_ENDIAN 0
++#endif
++
+ #define _FP_W_TYPE_SIZE		32
+ #define _FP_W_TYPE		unsigned long
+ #define _FP_WS_TYPE		signed long
+diff --git a/arch/sh/kernel/cpu/sh4a/smp-shx3.c b/arch/sh/kernel/cpu/sh4a/smp-shx3.c
+index f8a2bec0f260b..1261dc7b84e8b 100644
+--- a/arch/sh/kernel/cpu/sh4a/smp-shx3.c
++++ b/arch/sh/kernel/cpu/sh4a/smp-shx3.c
+@@ -73,8 +73,9 @@ static void shx3_prepare_cpus(unsigned int max_cpus)
+ 	BUILD_BUG_ON(SMP_MSG_NR >= 8);
+ 
+ 	for (i = 0; i < SMP_MSG_NR; i++)
+-		request_irq(104 + i, ipi_interrupt_handler,
+-			    IRQF_PERCPU, "IPI", (void *)(long)i);
++		if (request_irq(104 + i, ipi_interrupt_handler,
++			    IRQF_PERCPU, "IPI", (void *)(long)i))
++			pr_err("Failed to request irq %d\n", i);
+ 
+ 	for (i = 0; i < max_cpus; i++)
+ 		set_cpu_present(i, true);
+diff --git a/arch/sh/math-emu/math.c b/arch/sh/math-emu/math.c
+index e8be0eca0444a..615ba932c398e 100644
+--- a/arch/sh/math-emu/math.c
++++ b/arch/sh/math-emu/math.c
+@@ -467,109 +467,6 @@ static int fpu_emulate(u16 code, struct sh_fpu_soft_struct *fregs, struct pt_reg
+ 		return id_sys(fregs, regs, code);
+ }
+ 
+-/**
+- *	denormal_to_double - Given denormalized float number,
+- *	                     store double float
+- *
+- *	@fpu: Pointer to sh_fpu_soft structure
+- *	@n: Index to FP register
+- */
+-static void denormal_to_double(struct sh_fpu_soft_struct *fpu, int n)
+-{
+-	unsigned long du, dl;
+-	unsigned long x = fpu->fpul;
+-	int exp = 1023 - 126;
+-
+-	if (x != 0 && (x & 0x7f800000) == 0) {
+-		du = (x & 0x80000000);
+-		while ((x & 0x00800000) == 0) {
+-			x <<= 1;
+-			exp--;
+-		}
+-		x &= 0x007fffff;
+-		du |= (exp << 20) | (x >> 3);
+-		dl = x << 29;
+-
+-		fpu->fp_regs[n] = du;
+-		fpu->fp_regs[n+1] = dl;
+-	}
+-}
+-
+-/**
+- *	ieee_fpe_handler - Handle denormalized number exception
+- *
+- *	@regs: Pointer to register structure
+- *
+- *	Returns 1 when it's handled (should not cause exception).
+- */
+-static int ieee_fpe_handler(struct pt_regs *regs)
+-{
+-	unsigned short insn = *(unsigned short *)regs->pc;
+-	unsigned short finsn;
+-	unsigned long nextpc;
+-	int nib[4] = {
+-		(insn >> 12) & 0xf,
+-		(insn >> 8) & 0xf,
+-		(insn >> 4) & 0xf,
+-		insn & 0xf};
+-
+-	if (nib[0] == 0xb ||
+-	    (nib[0] == 0x4 && nib[2] == 0x0 && nib[3] == 0xb)) /* bsr & jsr */
+-		regs->pr = regs->pc + 4;
+-
+-	if (nib[0] == 0xa || nib[0] == 0xb) { /* bra & bsr */
+-		nextpc = regs->pc + 4 + ((short) ((insn & 0xfff) << 4) >> 3);
+-		finsn = *(unsigned short *) (regs->pc + 2);
+-	} else if (nib[0] == 0x8 && nib[1] == 0xd) { /* bt/s */
+-		if (regs->sr & 1)
+-			nextpc = regs->pc + 4 + ((char) (insn & 0xff) << 1);
+-		else
+-			nextpc = regs->pc + 4;
+-		finsn = *(unsigned short *) (regs->pc + 2);
+-	} else if (nib[0] == 0x8 && nib[1] == 0xf) { /* bf/s */
+-		if (regs->sr & 1)
+-			nextpc = regs->pc + 4;
+-		else
+-			nextpc = regs->pc + 4 + ((char) (insn & 0xff) << 1);
+-		finsn = *(unsigned short *) (regs->pc + 2);
+-	} else if (nib[0] == 0x4 && nib[3] == 0xb &&
+-		 (nib[2] == 0x0 || nib[2] == 0x2)) { /* jmp & jsr */
+-		nextpc = regs->regs[nib[1]];
+-		finsn = *(unsigned short *) (regs->pc + 2);
+-	} else if (nib[0] == 0x0 && nib[3] == 0x3 &&
+-		 (nib[2] == 0x0 || nib[2] == 0x2)) { /* braf & bsrf */
+-		nextpc = regs->pc + 4 + regs->regs[nib[1]];
+-		finsn = *(unsigned short *) (regs->pc + 2);
+-	} else if (insn == 0x000b) { /* rts */
+-		nextpc = regs->pr;
+-		finsn = *(unsigned short *) (regs->pc + 2);
+-	} else {
+-		nextpc = regs->pc + 2;
+-		finsn = insn;
+-	}
+-
+-	if ((finsn & 0xf1ff) == 0xf0ad) { /* fcnvsd */
+-		struct task_struct *tsk = current;
+-
+-		if ((tsk->thread.xstate->softfpu.fpscr & (1 << 17))) {
+-			/* FPU error */
+-			denormal_to_double (&tsk->thread.xstate->softfpu,
+-					    (finsn >> 8) & 0xf);
+-			tsk->thread.xstate->softfpu.fpscr &=
+-				~(FPSCR_CAUSE_MASK | FPSCR_FLAG_MASK);
+-			task_thread_info(tsk)->status |= TS_USEDFPU;
+-		} else {
+-			force_sig_fault(SIGFPE, FPE_FLTINV,
+-					(void __user *)regs->pc);
+-		}
+-
+-		regs->pc = nextpc;
+-		return 1;
+-	}
+-
+-	return 0;
+-}
+-
+ /**
+  * fpu_init - Initialize FPU registers
+  * @fpu: Pointer to software emulated FPU registers.
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 2b5957b27a3d6..a853ed7240eef 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1266,7 +1266,8 @@ config TOSHIBA
+ 
+ config I8K
+ 	tristate "Dell i8k legacy laptop support"
+-	select HWMON
++	depends on HWMON
++	depends on PROC_FS
+ 	select SENSORS_DELL_SMM
+ 	help
+ 	  This option enables legacy /proc/i8k userspace interface in hwmon
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 4684bf9fcc428..a521135247eb6 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -2879,8 +2879,10 @@ intel_vlbr_constraints(struct perf_event *event)
+ {
+ 	struct event_constraint *c = &vlbr_constraint;
+ 
+-	if (unlikely(constraint_match(c, event->hw.config)))
++	if (unlikely(constraint_match(c, event->hw.config))) {
++		event->hw.flags |= c->flags;
+ 		return c;
++	}
+ 
+ 	return NULL;
+ }
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index c01b51d1cbdff..ba26792d96731 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -3545,6 +3545,9 @@ static int skx_cha_hw_config(struct intel_uncore_box *box, struct perf_event *ev
+ 	struct hw_perf_event_extra *reg1 = &event->hw.extra_reg;
+ 	struct extra_reg *er;
+ 	int idx = 0;
++	/* Any of the CHA events may be filtered by Thread/Core-ID.*/
++	if (event->hw.config & SNBEP_CBO_PMON_CTL_TID_EN)
++		idx = SKX_CHA_MSR_PMON_BOX_FILTER_TID;
+ 
+ 	for (er = skx_uncore_cha_extra_regs; er->msr; er++) {
+ 		if (er->event != (event->hw.config & er->config_mask))
+@@ -3612,6 +3615,7 @@ static struct event_constraint skx_uncore_iio_constraints[] = {
+ 	UNCORE_EVENT_CONSTRAINT(0xc0, 0xc),
+ 	UNCORE_EVENT_CONSTRAINT(0xc5, 0xc),
+ 	UNCORE_EVENT_CONSTRAINT(0xd4, 0xc),
++	UNCORE_EVENT_CONSTRAINT(0xd5, 0xc),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index 3cf4030232590..01860c0d324d7 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -176,6 +176,9 @@ void set_hv_tscchange_cb(void (*cb)(void))
+ 		return;
+ 	}
+ 
++	if (!hv_vp_index)
++		return;
++
+ 	hv_reenlightenment_cb = cb;
+ 
+ 	/* Make sure callback is registered before we write to MSRs */
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index d5f24a2f3e916..257ec2cbf69a4 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2851,6 +2851,17 @@ static int nested_vmx_check_controls(struct kvm_vcpu *vcpu,
+ 	return 0;
+ }
+ 
++static int nested_vmx_check_address_space_size(struct kvm_vcpu *vcpu,
++				       struct vmcs12 *vmcs12)
++{
++#ifdef CONFIG_X86_64
++	if (CC(!!(vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE) !=
++		!!(vcpu->arch.efer & EFER_LMA)))
++		return -EINVAL;
++#endif
++	return 0;
++}
++
+ static int nested_vmx_check_host_state(struct kvm_vcpu *vcpu,
+ 				       struct vmcs12 *vmcs12)
+ {
+@@ -2875,18 +2886,16 @@ static int nested_vmx_check_host_state(struct kvm_vcpu *vcpu,
+ 		return -EINVAL;
+ 
+ #ifdef CONFIG_X86_64
+-	ia32e = !!(vcpu->arch.efer & EFER_LMA);
++	ia32e = !!(vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE);
+ #else
+ 	ia32e = false;
+ #endif
+ 
+ 	if (ia32e) {
+-		if (CC(!(vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE)) ||
+-		    CC(!(vmcs12->host_cr4 & X86_CR4_PAE)))
++		if (CC(!(vmcs12->host_cr4 & X86_CR4_PAE)))
+ 			return -EINVAL;
+ 	} else {
+-		if (CC(vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE) ||
+-		    CC(vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) ||
++		if (CC(vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) ||
+ 		    CC(vmcs12->host_cr4 & X86_CR4_PCIDE) ||
+ 		    CC((vmcs12->host_rip) >> 32))
+ 			return -EINVAL;
+@@ -3555,6 +3564,9 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
+ 	if (nested_vmx_check_controls(vcpu, vmcs12))
+ 		return nested_vmx_fail(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);
+ 
++	if (nested_vmx_check_address_space_size(vcpu, vmcs12))
++		return nested_vmx_fail(vcpu, VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
++
+ 	if (nested_vmx_check_host_state(vcpu, vmcs12))
+ 		return nested_vmx_fail(vcpu, VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
+ 
+diff --git a/block/blk-core.c b/block/blk-core.c
+index fbc39756f37de..26664f2a139eb 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -897,10 +897,8 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
+ 	if (unlikely(!current->io_context))
+ 		create_task_io_context(current, GFP_ATOMIC, q->node);
+ 
+-	if (blk_throtl_bio(bio)) {
+-		blkcg_bio_issue_init(bio);
++	if (blk_throtl_bio(bio))
+ 		return false;
+-	}
+ 
+ 	blk_cgroup_bio_start(bio);
+ 	blkcg_bio_issue_init(bio);
+diff --git a/block/ioprio.c b/block/ioprio.c
+index 364d2294ba904..84da6c71b2ccb 100644
+--- a/block/ioprio.c
++++ b/block/ioprio.c
+@@ -69,7 +69,14 @@ int ioprio_check_cap(int ioprio)
+ 
+ 	switch (class) {
+ 		case IOPRIO_CLASS_RT:
+-			if (!capable(CAP_SYS_NICE) && !capable(CAP_SYS_ADMIN))
++			/*
++			 * Originally this only checked for CAP_SYS_ADMIN,
++			 * which was implicitly allowed for pid 0 by security
++			 * modules such as SELinux. Make sure we check
++			 * CAP_SYS_ADMIN first to avoid a denial/avc for
++			 * possibly missing CAP_SYS_NICE permission.
++			 */
++			if (!capable(CAP_SYS_ADMIN) && !capable(CAP_SYS_NICE))
+ 				return -EPERM;
+ 			fallthrough;
+ 			/* rt has prio field too */
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index f41e4e4993d37..1372f40d0371f 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -99,12 +99,15 @@ static struct firmware_cache fw_cache;
+ extern struct builtin_fw __start_builtin_fw[];
+ extern struct builtin_fw __end_builtin_fw[];
+ 
+-static void fw_copy_to_prealloc_buf(struct firmware *fw,
++static bool fw_copy_to_prealloc_buf(struct firmware *fw,
+ 				    void *buf, size_t size)
+ {
+-	if (!buf || size < fw->size)
+-		return;
++	if (!buf)
++		return true;
++	if (size < fw->size)
++		return false;
+ 	memcpy(buf, fw->data, fw->size);
++	return true;
+ }
+ 
+ static bool fw_get_builtin_firmware(struct firmware *fw, const char *name,
+@@ -116,9 +119,7 @@ static bool fw_get_builtin_firmware(struct firmware *fw, const char *name,
+ 		if (strcmp(name, b_fw->name) == 0) {
+ 			fw->size = b_fw->size;
+ 			fw->data = b_fw->data;
+-			fw_copy_to_prealloc_buf(fw, buf, size);
+-
+-			return true;
++			return fw_copy_to_prealloc_buf(fw, buf, size);
+ 		}
+ 	}
+ 
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 2ff437e5c7051..43603dc9da430 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -6,6 +6,7 @@
+ #include <linux/io.h>
+ #include <linux/clk.h>
+ #include <linux/clkdev.h>
++#include <linux/cpu_pm.h>
+ #include <linux/delay.h>
+ #include <linux/list.h>
+ #include <linux/module.h>
+@@ -52,11 +53,18 @@ struct sysc_address {
+ 	struct list_head node;
+ };
+ 
++struct sysc_module {
++	struct sysc *ddata;
++	struct list_head node;
++};
++
+ struct sysc_soc_info {
+ 	unsigned long general_purpose:1;
+ 	enum sysc_soc soc;
+-	struct mutex list_lock;			/* disabled modules list lock */
++	struct mutex list_lock;	/* disabled and restored modules list lock */
+ 	struct list_head disabled_modules;
++	struct list_head restored_modules;
++	struct notifier_block nb;
+ };
+ 
+ enum sysc_clocks {
+@@ -1555,7 +1563,7 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ 		   0xffffffff, SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
+ 	SYSC_QUIRK("usb_otg_hs", 0, 0, 0x10, -ENODEV, 0x4ea2080d, 0xffffffff,
+ 		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY |
+-		   SYSC_QUIRK_REINIT_ON_RESUME),
++		   SYSC_QUIRK_REINIT_ON_CTX_LOST),
+ 	SYSC_QUIRK("wdt", 0, 0, 0x10, 0x14, 0x502a0500, 0xfffff0f0,
+ 		   SYSC_MODULE_QUIRK_WDT),
+ 	/* PRUSS on am3, am4 and am5 */
+@@ -2429,6 +2437,79 @@ static struct dev_pm_domain sysc_child_pm_domain = {
+ 	}
+ };
+ 
++/* Caller needs to take list_lock if ever used outside of cpu_pm */
++static void sysc_reinit_modules(struct sysc_soc_info *soc)
++{
++	struct sysc_module *module;
++	struct list_head *pos;
++	struct sysc *ddata;
++	int error = 0;
++
++	list_for_each(pos, &sysc_soc->restored_modules) {
++		module = list_entry(pos, struct sysc_module, node);
++		ddata = module->ddata;
++		error = sysc_reinit_module(ddata, ddata->enabled);
++	}
++}
++
++/**
++ * sysc_context_notifier - optionally reset and restore module after idle
++ * @nb: notifier block
++ * @cmd: unused
++ * @v: unused
++ *
++ * Some interconnect target modules need to be restored, or reset and restored
++ * on CPU_PM CPU_PM_CLUSTER_EXIT notifier. This is needed at least for am335x
++ * OTG and GPMC target modules even if the modules are unused.
++ */
++static int sysc_context_notifier(struct notifier_block *nb, unsigned long cmd,
++				 void *v)
++{
++	struct sysc_soc_info *soc;
++
++	soc = container_of(nb, struct sysc_soc_info, nb);
++
++	switch (cmd) {
++	case CPU_CLUSTER_PM_ENTER:
++		break;
++	case CPU_CLUSTER_PM_ENTER_FAILED:	/* No need to restore context */
++		break;
++	case CPU_CLUSTER_PM_EXIT:
++		sysc_reinit_modules(soc);
++		break;
++	}
++
++	return NOTIFY_OK;
++}
++
++/**
++ * sysc_add_restored - optionally add reset and restore quirk hanlling
++ * @ddata: device data
++ */
++static void sysc_add_restored(struct sysc *ddata)
++{
++	struct sysc_module *restored_module;
++
++	restored_module = kzalloc(sizeof(*restored_module), GFP_KERNEL);
++	if (!restored_module)
++		return;
++
++	restored_module->ddata = ddata;
++
++	mutex_lock(&sysc_soc->list_lock);
++
++	list_add(&restored_module->node, &sysc_soc->restored_modules);
++
++	if (sysc_soc->nb.notifier_call)
++		goto out_unlock;
++
++	sysc_soc->nb.notifier_call = sysc_context_notifier;
++	cpu_pm_register_notifier(&sysc_soc->nb);
++
++out_unlock:
++	mutex_unlock(&sysc_soc->list_lock);
++}
++
+ /**
+  * sysc_legacy_idle_quirk - handle children in omap_device compatible way
+  * @ddata: device driver data
+@@ -2928,12 +3009,14 @@ static int sysc_add_disabled(unsigned long base)
+ }
+ 
+ /*
+- * One time init to detect the booted SoC and disable unavailable features.
++ * One time init to detect the booted SoC, disable unavailable features
++ * and initialize list for optional cpu_pm notifier.
++ *
+  * Note that we initialize static data shared across all ti-sysc instances
+  * so ddata is only used for SoC type. This can be called from module_init
+  * once we no longer need to rely on platform data.
+  */
+-static int sysc_init_soc(struct sysc *ddata)
++static int sysc_init_static_data(struct sysc *ddata)
+ {
+ 	const struct soc_device_attribute *match;
+ 	struct ti_sysc_platform_data *pdata;
+@@ -2948,6 +3031,7 @@ static int sysc_init_soc(struct sysc *ddata)
+ 
+ 	mutex_init(&sysc_soc->list_lock);
+ 	INIT_LIST_HEAD(&sysc_soc->disabled_modules);
++	INIT_LIST_HEAD(&sysc_soc->restored_modules);
+ 	sysc_soc->general_purpose = true;
+ 
+ 	pdata = dev_get_platdata(ddata->dev);
+@@ -2994,15 +3078,24 @@ static int sysc_init_soc(struct sysc *ddata)
+ 	return 0;
+ }
+ 
+-static void sysc_cleanup_soc(void)
++static void sysc_cleanup_static_data(void)
+ {
++	struct sysc_module *restored_module;
+ 	struct sysc_address *disabled_module;
+ 	struct list_head *pos, *tmp;
+ 
+ 	if (!sysc_soc)
+ 		return;
+ 
++	if (sysc_soc->nb.notifier_call)
++		cpu_pm_unregister_notifier(&sysc_soc->nb);
++
+ 	mutex_lock(&sysc_soc->list_lock);
++	list_for_each_safe(pos, tmp, &sysc_soc->restored_modules) {
++		restored_module = list_entry(pos, struct sysc_module, node);
++		list_del(pos);
++		kfree(restored_module);
++	}
+ 	list_for_each_safe(pos, tmp, &sysc_soc->disabled_modules) {
+ 		disabled_module = list_entry(pos, struct sysc_address, node);
+ 		list_del(pos);
+@@ -3067,7 +3160,7 @@ static int sysc_probe(struct platform_device *pdev)
+ 	ddata->dev = &pdev->dev;
+ 	platform_set_drvdata(pdev, ddata);
+ 
+-	error = sysc_init_soc(ddata);
++	error = sysc_init_static_data(ddata);
+ 	if (error)
+ 		return error;
+ 
+@@ -3166,6 +3259,9 @@ static int sysc_probe(struct platform_device *pdev)
+ 		pm_runtime_put(&pdev->dev);
+ 	}
+ 
++	if (ddata->cfg.quirks & SYSC_QUIRK_REINIT_ON_CTX_LOST)
++		sysc_add_restored(ddata);
++
+ 	return 0;
+ 
+ err:
+@@ -3248,7 +3344,7 @@ static void __exit sysc_exit(void)
+ {
+ 	bus_unregister_notifier(&platform_bus_type, &sysc_nb);
+ 	platform_driver_unregister(&sysc_driver);
+-	sysc_cleanup_soc();
++	sysc_cleanup_static_data();
+ }
+ module_exit(sysc_exit);
+ 
+diff --git a/drivers/clk/clk-ast2600.c b/drivers/clk/clk-ast2600.c
+index bc3be5f3eae15..24dab2312bc6f 100644
+--- a/drivers/clk/clk-ast2600.c
++++ b/drivers/clk/clk-ast2600.c
+@@ -51,6 +51,8 @@ static DEFINE_SPINLOCK(aspeed_g6_clk_lock);
+ static struct clk_hw_onecell_data *aspeed_g6_clk_data;
+ 
+ static void __iomem *scu_g6_base;
++/* AST2600 revision: A0, A1, A2, etc */
++static u8 soc_rev;
+ 
+ /*
+  * Clocks marked with CLK_IS_CRITICAL:
+@@ -191,9 +193,8 @@ static struct clk_hw *ast2600_calc_pll(const char *name, u32 val)
+ static struct clk_hw *ast2600_calc_apll(const char *name, u32 val)
+ {
+ 	unsigned int mult, div;
+-	u32 chip_id = readl(scu_g6_base + ASPEED_G6_SILICON_REV);
+ 
+-	if (((chip_id & CHIP_REVISION_ID) >> 16) >= 2) {
++	if (soc_rev >= 2) {
+ 		if (val & BIT(24)) {
+ 			/* Pass through mode */
+ 			mult = div = 1;
+@@ -707,7 +708,7 @@ static const u32 ast2600_a1_axi_ahb200_tbl[] = {
+ static void __init aspeed_g6_cc(struct regmap *map)
+ {
+ 	struct clk_hw *hw;
+-	u32 val, div, divbits, chip_id, axi_div, ahb_div;
++	u32 val, div, divbits, axi_div, ahb_div;
+ 
+ 	clk_hw_register_fixed_rate(NULL, "clkin", NULL, 0, 25000000);
+ 
+@@ -738,8 +739,7 @@ static void __init aspeed_g6_cc(struct regmap *map)
+ 		axi_div = 2;
+ 
+ 	divbits = (val >> 11) & 0x3;
+-	regmap_read(map, ASPEED_G6_SILICON_REV, &chip_id);
+-	if (chip_id & BIT(16)) {
++	if (soc_rev >= 1) {
+ 		if (!divbits) {
+ 			ahb_div = ast2600_a1_axi_ahb200_tbl[(val >> 8) & 0x3];
+ 			if (val & BIT(16))
+@@ -784,6 +784,8 @@ static void __init aspeed_g6_cc_init(struct device_node *np)
+ 	if (!scu_g6_base)
+ 		return;
+ 
++	soc_rev = (readl(scu_g6_base + ASPEED_G6_SILICON_REV) & CHIP_REVISION_ID) >> 16;
++
+ 	aspeed_g6_clk_data = kzalloc(struct_size(aspeed_g6_clk_data, hws,
+ 				      ASPEED_G6_NUM_CLKS), GFP_KERNEL);
+ 	if (!aspeed_g6_clk_data)
+diff --git a/drivers/clk/imx/clk-imx6ul.c b/drivers/clk/imx/clk-imx6ul.c
+index 5dbb6a9377324..206e4c43f68f8 100644
+--- a/drivers/clk/imx/clk-imx6ul.c
++++ b/drivers/clk/imx/clk-imx6ul.c
+@@ -161,7 +161,6 @@ static void __init imx6ul_clocks_init(struct device_node *ccm_node)
+ 	hws[IMX6UL_PLL5_BYPASS] = imx_clk_hw_mux_flags("pll5_bypass", base + 0xa0, 16, 1, pll5_bypass_sels, ARRAY_SIZE(pll5_bypass_sels), CLK_SET_RATE_PARENT);
+ 	hws[IMX6UL_PLL6_BYPASS] = imx_clk_hw_mux_flags("pll6_bypass", base + 0xe0, 16, 1, pll6_bypass_sels, ARRAY_SIZE(pll6_bypass_sels), CLK_SET_RATE_PARENT);
+ 	hws[IMX6UL_PLL7_BYPASS] = imx_clk_hw_mux_flags("pll7_bypass", base + 0x20, 16, 1, pll7_bypass_sels, ARRAY_SIZE(pll7_bypass_sels), CLK_SET_RATE_PARENT);
+-	hws[IMX6UL_CLK_CSI_SEL] = imx_clk_hw_mux_flags("csi_sel", base + 0x3c, 9, 2, csi_sels, ARRAY_SIZE(csi_sels), CLK_SET_RATE_PARENT);
+ 
+ 	/* Do not bypass PLLs initially */
+ 	clk_set_parent(hws[IMX6UL_PLL1_BYPASS]->clk, hws[IMX6UL_CLK_PLL1]->clk);
+@@ -270,6 +269,7 @@ static void __init imx6ul_clocks_init(struct device_node *ccm_node)
+ 	hws[IMX6UL_CLK_ECSPI_SEL]	  = imx_clk_hw_mux("ecspi_sel",	base + 0x38, 18, 1, ecspi_sels, ARRAY_SIZE(ecspi_sels));
+ 	hws[IMX6UL_CLK_LCDIF_PRE_SEL]	  = imx_clk_hw_mux_flags("lcdif_pre_sel", base + 0x38, 15, 3, lcdif_pre_sels, ARRAY_SIZE(lcdif_pre_sels), CLK_SET_RATE_PARENT);
+ 	hws[IMX6UL_CLK_LCDIF_SEL]	  = imx_clk_hw_mux("lcdif_sel",	base + 0x38, 9, 3, lcdif_sels, ARRAY_SIZE(lcdif_sels));
++	hws[IMX6UL_CLK_CSI_SEL]		  = imx_clk_hw_mux("csi_sel", base + 0x3c, 9, 2, csi_sels, ARRAY_SIZE(csi_sels));
+ 
+ 	hws[IMX6UL_CLK_LDB_DI0_DIV_SEL]  = imx_clk_hw_mux("ldb_di0", base + 0x20, 10, 1, ldb_di0_div_sels, ARRAY_SIZE(ldb_di0_div_sels));
+ 	hws[IMX6UL_CLK_LDB_DI1_DIV_SEL]  = imx_clk_hw_mux("ldb_di1", base + 0x20, 11, 1, ldb_di1_div_sels, ARRAY_SIZE(ldb_di1_div_sels));
+diff --git a/drivers/clk/ingenic/cgu.c b/drivers/clk/ingenic/cgu.c
+index c8e9cb6c8e39c..2b9bb7d55efc8 100644
+--- a/drivers/clk/ingenic/cgu.c
++++ b/drivers/clk/ingenic/cgu.c
+@@ -425,15 +425,15 @@ ingenic_clk_calc_div(const struct ingenic_cgu_clk_info *clk_info,
+ 	}
+ 
+ 	/* Impose hardware constraints */
+-	div = min_t(unsigned, div, 1 << clk_info->div.bits);
+-	div = max_t(unsigned, div, 1);
++	div = clamp_t(unsigned int, div, clk_info->div.div,
++		      clk_info->div.div << clk_info->div.bits);
+ 
+ 	/*
+ 	 * If the divider value itself must be divided before being written to
+ 	 * the divider register, we must ensure we don't have any bits set that
+ 	 * would be lost as a result of doing so.
+ 	 */
+-	div /= clk_info->div.div;
++	div = DIV_ROUND_UP(div, clk_info->div.div);
+ 	div *= clk_info->div.div;
+ 
+ 	return div;
+diff --git a/drivers/clk/qcom/gcc-msm8996.c b/drivers/clk/qcom/gcc-msm8996.c
+index 3c3a7ff045621..9b1674b28d45d 100644
+--- a/drivers/clk/qcom/gcc-msm8996.c
++++ b/drivers/clk/qcom/gcc-msm8996.c
+@@ -2937,20 +2937,6 @@ static struct clk_branch gcc_smmu_aggre0_ahb_clk = {
+ 	},
+ };
+ 
+-static struct clk_branch gcc_aggre1_pnoc_ahb_clk = {
+-	.halt_reg = 0x82014,
+-	.clkr = {
+-		.enable_reg = 0x82014,
+-		.enable_mask = BIT(0),
+-		.hw.init = &(struct clk_init_data){
+-			.name = "gcc_aggre1_pnoc_ahb_clk",
+-			.parent_names = (const char *[]){ "periph_noc_clk_src" },
+-			.num_parents = 1,
+-			.ops = &clk_branch2_ops,
+-		},
+-	},
+-};
+-
+ static struct clk_branch gcc_aggre2_ufs_axi_clk = {
+ 	.halt_reg = 0x83014,
+ 	.clkr = {
+@@ -3474,7 +3460,6 @@ static struct clk_regmap *gcc_msm8996_clocks[] = {
+ 	[GCC_AGGRE0_CNOC_AHB_CLK] = &gcc_aggre0_cnoc_ahb_clk.clkr,
+ 	[GCC_SMMU_AGGRE0_AXI_CLK] = &gcc_smmu_aggre0_axi_clk.clkr,
+ 	[GCC_SMMU_AGGRE0_AHB_CLK] = &gcc_smmu_aggre0_ahb_clk.clkr,
+-	[GCC_AGGRE1_PNOC_AHB_CLK] = &gcc_aggre1_pnoc_ahb_clk.clkr,
+ 	[GCC_AGGRE2_UFS_AXI_CLK] = &gcc_aggre2_ufs_axi_clk.clkr,
+ 	[GCC_AGGRE2_USB3_AXI_CLK] = &gcc_aggre2_usb3_axi_clk.clkr,
+ 	[GCC_QSPI_AHB_CLK] = &gcc_qspi_ahb_clk.clkr,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+index b9c11c2b2885a..0de66f59adb8a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+@@ -827,6 +827,7 @@ static int amdgpu_connector_vga_get_modes(struct drm_connector *connector)
+ 
+ 	amdgpu_connector_get_edid(connector);
+ 	ret = amdgpu_connector_ddc_get_modes(connector);
++	amdgpu_get_native_mode(connector);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 3121816546467..53ac826935328 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -1852,7 +1852,9 @@ static void swizzle_to_dml_params(
+ 	case DC_SW_VAR_D_X:
+ 		*sw_mode = dm_sw_var_d_x;
+ 		break;
+-
++	case DC_SW_VAR_R_X:
++		*sw_mode = dm_sw_var_r_x;
++		break;
+ 	default:
+ 		ASSERT(0); /* Not supported */
+ 		break;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h
+index 64f9c735f74d8..e73cee275729c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h
++++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h
+@@ -80,11 +80,11 @@ enum dm_swizzle_mode {
+ 	dm_sw_SPARE_13 = 24,
+ 	dm_sw_64kb_s_x = 25,
+ 	dm_sw_64kb_d_x = 26,
+-	dm_sw_SPARE_14 = 27,
++	dm_sw_64kb_r_x = 27,
+ 	dm_sw_SPARE_15 = 28,
+ 	dm_sw_var_s_x = 29,
+ 	dm_sw_var_d_x = 30,
+-	dm_sw_64kb_r_x,
++	dm_sw_var_r_x = 31,
+ 	dm_sw_gfx7_2d_thin_l_vp,
+ 	dm_sw_gfx7_2d_thin_gl,
+ };
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 65d73eb5e155c..1c1931f5c958b 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -154,6 +154,12 @@ static void vlv_steal_power_sequencer(struct drm_i915_private *dev_priv,
+ 				      enum pipe pipe);
+ static void intel_dp_unset_edid(struct intel_dp *intel_dp);
+ 
++static void intel_dp_set_default_sink_rates(struct intel_dp *intel_dp)
++{
++	intel_dp->sink_rates[0] = 162000;
++	intel_dp->num_sink_rates = 1;
++}
++
+ /* update sink rates from dpcd */
+ static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
+ {
+@@ -4678,6 +4684,9 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
+ 	 */
+ 	intel_psr_init_dpcd(intel_dp);
+ 
++	/* Clear the default sink rates */
++	intel_dp->num_sink_rates = 0;
++
+ 	/* Read the eDP 1.4+ supported link rates. */
+ 	if (intel_dp->edp_dpcd[0] >= DP_EDP_14) {
+ 		__le16 sink_rates[DP_MAX_SUPPORTED_RATES];
+@@ -7779,6 +7788,8 @@ intel_dp_init_connector(struct intel_digital_port *dig_port,
+ 		return false;
+ 
+ 	intel_dp_set_source_rates(intel_dp);
++	intel_dp_set_default_sink_rates(intel_dp);
++	intel_dp_set_common_rates(intel_dp);
+ 
+ 	intel_dp->reset_link_params = true;
+ 	intel_dp->pps_pipe = INVALID_PIPE;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index 42fc5c813a9bb..ac96b6ab44c07 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -557,6 +557,7 @@ nouveau_drm_device_init(struct drm_device *dev)
+ 		nvkm_dbgopt(nouveau_debug, "DRM");
+ 
+ 	INIT_LIST_HEAD(&drm->clients);
++	mutex_init(&drm->clients_lock);
+ 	spin_lock_init(&drm->tile.lock);
+ 
+ 	/* workaround an odd issue on nvc1 by disabling the device's
+@@ -627,6 +628,7 @@ fail_alloc:
+ static void
+ nouveau_drm_device_fini(struct drm_device *dev)
+ {
++	struct nouveau_cli *cli, *temp_cli;
+ 	struct nouveau_drm *drm = nouveau_drm(dev);
+ 
+ 	if (nouveau_pmops_runtime()) {
+@@ -651,9 +653,28 @@ nouveau_drm_device_fini(struct drm_device *dev)
+ 	nouveau_ttm_fini(drm);
+ 	nouveau_vga_fini(drm);
+ 
++	/*
++	 * There may be existing clients from as-yet unclosed files. For now,
++	 * clean them up here rather than deferring until the file is closed,
++	 * but this likely not correct if we want to support hot-unplugging
++	 * properly.
++	 */
++	mutex_lock(&drm->clients_lock);
++	list_for_each_entry_safe(cli, temp_cli, &drm->clients, head) {
++		list_del(&cli->head);
++		mutex_lock(&cli->mutex);
++		if (cli->abi16)
++			nouveau_abi16_fini(cli->abi16);
++		mutex_unlock(&cli->mutex);
++		nouveau_cli_fini(cli);
++		kfree(cli);
++	}
++	mutex_unlock(&drm->clients_lock);
++
+ 	nouveau_cli_fini(&drm->client);
+ 	nouveau_cli_fini(&drm->master);
+ 	nvif_parent_dtor(&drm->parent);
++	mutex_destroy(&drm->clients_lock);
+ 	kfree(drm);
+ }
+ 
+@@ -792,7 +813,7 @@ nouveau_drm_device_remove(struct drm_device *dev)
+ 	struct nvkm_client *client;
+ 	struct nvkm_device *device;
+ 
+-	drm_dev_unregister(dev);
++	drm_dev_unplug(dev);
+ 
+ 	dev->irq_enabled = false;
+ 	client = nvxx_client(&drm->client.base);
+@@ -1086,9 +1107,9 @@ nouveau_drm_open(struct drm_device *dev, struct drm_file *fpriv)
+ 
+ 	fpriv->driver_priv = cli;
+ 
+-	mutex_lock(&drm->client.mutex);
++	mutex_lock(&drm->clients_lock);
+ 	list_add(&cli->head, &drm->clients);
+-	mutex_unlock(&drm->client.mutex);
++	mutex_unlock(&drm->clients_lock);
+ 
+ done:
+ 	if (ret && cli) {
+@@ -1106,6 +1127,16 @@ nouveau_drm_postclose(struct drm_device *dev, struct drm_file *fpriv)
+ {
+ 	struct nouveau_cli *cli = nouveau_cli(fpriv);
+ 	struct nouveau_drm *drm = nouveau_drm(dev);
++	int dev_index;
++
++	/*
++	 * The device is gone, and as it currently stands all clients are
++	 * cleaned up in the removal codepath. In the future this may change
++	 * so that we can support hot-unplugging, but for now we immediately
++	 * return to avoid a double-free situation.
++	 */
++	if (!drm_dev_enter(dev, &dev_index))
++		return;
+ 
+ 	pm_runtime_get_sync(dev->dev);
+ 
+@@ -1114,14 +1145,15 @@ nouveau_drm_postclose(struct drm_device *dev, struct drm_file *fpriv)
+ 		nouveau_abi16_fini(cli->abi16);
+ 	mutex_unlock(&cli->mutex);
+ 
+-	mutex_lock(&drm->client.mutex);
++	mutex_lock(&drm->clients_lock);
+ 	list_del(&cli->head);
+-	mutex_unlock(&drm->client.mutex);
++	mutex_unlock(&drm->clients_lock);
+ 
+ 	nouveau_cli_fini(cli);
+ 	kfree(cli);
+ 	pm_runtime_mark_last_busy(dev->dev);
+ 	pm_runtime_put_autosuspend(dev->dev);
++	drm_dev_exit(dev_index);
+ }
+ 
+ static const struct drm_ioctl_desc
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h
+index b8025507a9e4c..8b252dca0fc3e 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drv.h
++++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
+@@ -142,6 +142,11 @@ struct nouveau_drm {
+ 
+ 	struct list_head clients;
+ 
++	/**
++	 * @clients_lock: Protects access to the @clients list of &struct nouveau_cli.
++	 */
++	struct mutex clients_lock;
++
+ 	u8 old_pm_cap;
+ 
+ 	struct {
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigv100.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigv100.c
+index 6e3c450eaacef..3ff49344abc77 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigv100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigv100.c
+@@ -62,7 +62,6 @@ gv100_hdmi_ctrl(struct nvkm_ior *ior, int head, bool enable, u8 max_ac_packet,
+ 		nvkm_wr32(device, 0x6f0108 + hdmi, vendor_infoframe.header);
+ 		nvkm_wr32(device, 0x6f010c + hdmi, vendor_infoframe.subpack0_low);
+ 		nvkm_wr32(device, 0x6f0110 + hdmi, vendor_infoframe.subpack0_high);
+-		nvkm_wr32(device, 0x6f0110 + hdmi, 0x00000000);
+ 		nvkm_wr32(device, 0x6f0114 + hdmi, 0x00000000);
+ 		nvkm_wr32(device, 0x6f0118 + hdmi, 0x00000000);
+ 		nvkm_wr32(device, 0x6f011c + hdmi, 0x00000000);
+diff --git a/drivers/gpu/drm/udl/udl_connector.c b/drivers/gpu/drm/udl/udl_connector.c
+index cdc1c42e16695..aac41a809924e 100644
+--- a/drivers/gpu/drm/udl/udl_connector.c
++++ b/drivers/gpu/drm/udl/udl_connector.c
+@@ -30,7 +30,7 @@ static int udl_get_edid_block(void *data, u8 *buf, unsigned int block,
+ 		ret = usb_control_msg(udl->udev,
+ 				      usb_rcvctrlpipe(udl->udev, 0),
+ 					  (0x02), (0x80 | (0x02 << 5)), bval,
+-					  0xA1, read_buff, 2, HZ);
++					  0xA1, read_buff, 2, 1000);
+ 		if (ret < 1) {
+ 			DRM_ERROR("Read EDID byte %d failed err %x\n", i, ret);
+ 			kfree(read_buff);
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 2ab1ac5a2412f..558ca3843bb95 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -1465,6 +1465,8 @@ st_lsm6dsx_set_odr(struct st_lsm6dsx_sensor *sensor, u32 req_odr)
+ 	int err;
+ 
+ 	switch (sensor->id) {
++	case ST_LSM6DSX_ID_GYRO:
++		break;
+ 	case ST_LSM6DSX_ID_EXT0:
+ 	case ST_LSM6DSX_ID_EXT1:
+ 	case ST_LSM6DSX_ID_EXT2:
+@@ -1490,8 +1492,8 @@ st_lsm6dsx_set_odr(struct st_lsm6dsx_sensor *sensor, u32 req_odr)
+ 		}
+ 		break;
+ 	}
+-	default:
+-		break;
++	default: /* should never occur */
++		return -EINVAL;
+ 	}
+ 
+ 	if (req_odr > 0) {
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 441952a5eca4a..10d77f50f818b 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -3368,8 +3368,11 @@ static void bnxt_re_process_res_ud_wc(struct bnxt_re_qp *qp,
+ 				      struct ib_wc *wc,
+ 				      struct bnxt_qplib_cqe *cqe)
+ {
++	struct bnxt_re_dev *rdev;
++	u16 vlan_id = 0;
+ 	u8 nw_type;
+ 
++	rdev = qp->rdev;
+ 	wc->opcode = IB_WC_RECV;
+ 	wc->status = __rc_to_ib_wc_status(cqe->status);
+ 
+@@ -3381,9 +3384,12 @@ static void bnxt_re_process_res_ud_wc(struct bnxt_re_qp *qp,
+ 		memcpy(wc->smac, cqe->smac, ETH_ALEN);
+ 		wc->wc_flags |= IB_WC_WITH_SMAC;
+ 		if (cqe->flags & CQ_RES_UD_FLAGS_META_FORMAT_VLAN) {
+-			wc->vlan_id = (cqe->cfa_meta & 0xFFF);
+-			if (wc->vlan_id < 0x1000)
+-				wc->wc_flags |= IB_WC_WITH_VLAN;
++			vlan_id = (cqe->cfa_meta & 0xFFF);
++		}
++		/* Mark only if vlan_id is non zero */
++		if (vlan_id && bnxt_re_check_if_vlan_valid(rdev, vlan_id)) {
++			wc->vlan_id = vlan_id;
++			wc->wc_flags |= IB_WC_WITH_VLAN;
+ 		}
+ 		nw_type = (cqe->flags & CQ_RES_UD_FLAGS_ROCE_IP_VER_MASK) >>
+ 			   CQ_RES_UD_FLAGS_ROCE_IP_VER_SFT;
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_init_ops.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_init_ops.h
+index 1835d2e451c01..fc7fce642666c 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_init_ops.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_init_ops.h
+@@ -635,11 +635,13 @@ static int bnx2x_ilt_client_mem_op(struct bnx2x *bp, int cli_num,
+ {
+ 	int i, rc;
+ 	struct bnx2x_ilt *ilt = BP_ILT(bp);
+-	struct ilt_client_info *ilt_cli = &ilt->clients[cli_num];
++	struct ilt_client_info *ilt_cli;
+ 
+ 	if (!ilt || !ilt->lines)
+ 		return -1;
+ 
++	ilt_cli = &ilt->clients[cli_num];
++
+ 	if (ilt_cli->flags & (ILT_CLIENT_SKIP_INIT | ILT_CLIENT_SKIP_MEM))
+ 		return 0;
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+index 2186706cf9130..3e9b1f59e381d 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+@@ -1854,7 +1854,7 @@ static int bnxt_tc_setup_indr_block_cb(enum tc_setup_type type,
+ 	struct flow_cls_offload *flower = type_data;
+ 	struct bnxt *bp = priv->bp;
+ 
+-	if (flower->common.chain_index)
++	if (!tc_cls_can_offload_and_chain0(bp->dev, type_data))
+ 		return -EOPNOTSUPP;
+ 
+ 	switch (type) {
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index f91c67489e629..a4ef35216e2f7 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -4432,10 +4432,10 @@ static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
+ 
+ 	fsl_mc_portal_free(priv->mc_io);
+ 
+-	free_netdev(net_dev);
+-
+ 	dev_dbg(net_dev->dev.parent, "Removed interface %s\n", net_dev->name);
+ 
++	free_netdev(net_dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/e100.c b/drivers/net/ethernet/intel/e100.c
+index ee86ea12fa379..9295a9a1efc73 100644
+--- a/drivers/net/ethernet/intel/e100.c
++++ b/drivers/net/ethernet/intel/e100.c
+@@ -2997,9 +2997,10 @@ static void __e100_shutdown(struct pci_dev *pdev, bool *enable_wake)
+ 	struct net_device *netdev = pci_get_drvdata(pdev);
+ 	struct nic *nic = netdev_priv(netdev);
+ 
++	netif_device_detach(netdev);
++
+ 	if (netif_running(netdev))
+ 		e100_down(nic);
+-	netif_device_detach(netdev);
+ 
+ 	if ((nic->flags & wol_magic) | e100_asf(nic)) {
+ 		/* enable reverse auto-negotiation */
+@@ -3016,7 +3017,7 @@ static void __e100_shutdown(struct pci_dev *pdev, bool *enable_wake)
+ 		*enable_wake = false;
+ 	}
+ 
+-	pci_clear_master(pdev);
++	pci_disable_device(pdev);
+ }
+ 
+ static int __e100_power_off(struct pci_dev *pdev, bool wake)
+@@ -3036,8 +3037,6 @@ static int __maybe_unused e100_suspend(struct device *dev_d)
+ 
+ 	__e100_shutdown(to_pci_dev(dev_d), &wake);
+ 
+-	device_wakeup_disable(dev_d);
+-
+ 	return 0;
+ }
+ 
+@@ -3045,6 +3044,14 @@ static int __maybe_unused e100_resume(struct device *dev_d)
+ {
+ 	struct net_device *netdev = dev_get_drvdata(dev_d);
+ 	struct nic *nic = netdev_priv(netdev);
++	int err;
++
++	err = pci_enable_device(to_pci_dev(dev_d));
++	if (err) {
++		netdev_err(netdev, "Resume cannot enable PCI device, aborting\n");
++		return err;
++	}
++	pci_set_master(to_pci_dev(dev_d));
+ 
+ 	/* disable reverse auto-negotiation */
+ 	if (nic->phy == phy_82552_v) {
+@@ -3056,10 +3063,11 @@ static int __maybe_unused e100_resume(struct device *dev_d)
+ 		           smartspeed & ~(E100_82552_REV_ANEG));
+ 	}
+ 
+-	netif_device_attach(netdev);
+ 	if (netif_running(netdev))
+ 		e100_up(nic);
+ 
++	netif_device_attach(netdev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index fe1258778cbc4..5b83d1bc0e74d 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -159,6 +159,7 @@ enum i40e_vsi_state_t {
+ 	__I40E_VSI_OVERFLOW_PROMISC,
+ 	__I40E_VSI_REINIT_REQUESTED,
+ 	__I40E_VSI_DOWN_REQUESTED,
++	__I40E_VSI_RELEASING,
+ 	/* This must be last as it determines the size of the BITMAP */
+ 	__I40E_VSI_STATE_SIZE__,
+ };
+@@ -1144,6 +1145,7 @@ void i40e_ptp_save_hw_time(struct i40e_pf *pf);
+ void i40e_ptp_restore_hw_time(struct i40e_pf *pf);
+ void i40e_ptp_init(struct i40e_pf *pf);
+ void i40e_ptp_stop(struct i40e_pf *pf);
++int i40e_update_adq_vsi_queues(struct i40e_vsi *vsi, int vsi_offset);
+ int i40e_is_vsi_uplink_mode_veb(struct i40e_vsi *vsi);
+ i40e_status i40e_get_partition_bw_setting(struct i40e_pf *pf);
+ i40e_status i40e_set_partition_bw_setting(struct i40e_pf *pf);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 52c2d6fdeb7a0..583eae71cda4b 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -1789,6 +1789,7 @@ static void i40e_vsi_setup_queue_map(struct i40e_vsi *vsi,
+ 				     bool is_add)
+ {
+ 	struct i40e_pf *pf = vsi->back;
++	u16 num_tc_qps = 0;
+ 	u16 sections = 0;
+ 	u8 netdev_tc = 0;
+ 	u16 numtc = 1;
+@@ -1796,13 +1797,33 @@ static void i40e_vsi_setup_queue_map(struct i40e_vsi *vsi,
+ 	u8 offset;
+ 	u16 qmap;
+ 	int i;
+-	u16 num_tc_qps = 0;
+ 
+ 	sections = I40E_AQ_VSI_PROP_QUEUE_MAP_VALID;
+ 	offset = 0;
++	/* zero out queue mapping, it will get updated on the end of the function */
++	memset(ctxt->info.queue_mapping, 0, sizeof(ctxt->info.queue_mapping));
++
++	if (vsi->type == I40E_VSI_MAIN) {
++		/* This code helps add more queue to the VSI if we have
++		 * more cores than RSS can support, the higher cores will
++		 * be served by ATR or other filters. Furthermore, the
++		 * non-zero req_queue_pairs says that user requested a new
++		 * queue count via ethtool's set_channels, so use this
++		 * value for queues distribution across traffic classes
++		 */
++		if (vsi->req_queue_pairs > 0)
++			vsi->num_queue_pairs = vsi->req_queue_pairs;
++		else if (pf->flags & I40E_FLAG_MSIX_ENABLED)
++			vsi->num_queue_pairs = pf->num_lan_msix;
++	}
+ 
+ 	/* Number of queues per enabled TC */
+-	num_tc_qps = vsi->alloc_queue_pairs;
++	if (vsi->type == I40E_VSI_MAIN ||
++	    (vsi->type == I40E_VSI_SRIOV && vsi->num_queue_pairs != 0))
++		num_tc_qps = vsi->num_queue_pairs;
++	else
++		num_tc_qps = vsi->alloc_queue_pairs;
++
+ 	if (enabled_tc && (vsi->back->flags & I40E_FLAG_DCB_ENABLED)) {
+ 		/* Find numtc from enabled TC bitmap */
+ 		for (i = 0, numtc = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+@@ -1880,15 +1901,11 @@ static void i40e_vsi_setup_queue_map(struct i40e_vsi *vsi,
+ 		}
+ 		ctxt->info.tc_mapping[i] = cpu_to_le16(qmap);
+ 	}
+-
+-	/* Set actual Tx/Rx queue pairs */
+-	vsi->num_queue_pairs = offset;
+-	if ((vsi->type == I40E_VSI_MAIN) && (numtc == 1)) {
+-		if (vsi->req_queue_pairs > 0)
+-			vsi->num_queue_pairs = vsi->req_queue_pairs;
+-		else if (pf->flags & I40E_FLAG_MSIX_ENABLED)
+-			vsi->num_queue_pairs = pf->num_lan_msix;
+-	}
++	/* Do not change previously set num_queue_pairs for PFs and VFs*/
++	if ((vsi->type == I40E_VSI_MAIN && numtc != 1) ||
++	    (vsi->type == I40E_VSI_SRIOV && vsi->num_queue_pairs == 0) ||
++	    (vsi->type != I40E_VSI_MAIN && vsi->type != I40E_VSI_SRIOV))
++		vsi->num_queue_pairs = offset;
+ 
+ 	/* Scheduler section valid can only be set for ADD VSI */
+ 	if (is_add) {
+@@ -2622,7 +2639,8 @@ static void i40e_sync_filters_subtask(struct i40e_pf *pf)
+ 
+ 	for (v = 0; v < pf->num_alloc_vsi; v++) {
+ 		if (pf->vsi[v] &&
+-		    (pf->vsi[v]->flags & I40E_VSI_FLAG_FILTER_CHANGED)) {
++		    (pf->vsi[v]->flags & I40E_VSI_FLAG_FILTER_CHANGED) &&
++		    !test_bit(__I40E_VSI_RELEASING, pf->vsi[v]->state)) {
+ 			int ret = i40e_sync_vsi_filters(pf->vsi[v]);
+ 
+ 			if (ret) {
+@@ -5393,6 +5411,58 @@ static void i40e_vsi_update_queue_map(struct i40e_vsi *vsi,
+ 	       sizeof(vsi->info.tc_mapping));
+ }
+ 
++/**
++ * i40e_update_adq_vsi_queues - update queue mapping for ADq VSI
++ * @vsi: the VSI being reconfigured
++ * @vsi_offset: offset from main VF VSI
++ */
++int i40e_update_adq_vsi_queues(struct i40e_vsi *vsi, int vsi_offset)
++{
++	struct i40e_vsi_context ctxt = {};
++	struct i40e_pf *pf;
++	struct i40e_hw *hw;
++	int ret;
++
++	if (!vsi)
++		return I40E_ERR_PARAM;
++	pf = vsi->back;
++	hw = &pf->hw;
++
++	ctxt.seid = vsi->seid;
++	ctxt.pf_num = hw->pf_id;
++	ctxt.vf_num = vsi->vf_id + hw->func_caps.vf_base_id + vsi_offset;
++	ctxt.uplink_seid = vsi->uplink_seid;
++	ctxt.connection_type = I40E_AQ_VSI_CONN_TYPE_NORMAL;
++	ctxt.flags = I40E_AQ_VSI_TYPE_VF;
++	ctxt.info = vsi->info;
++
++	i40e_vsi_setup_queue_map(vsi, &ctxt, vsi->tc_config.enabled_tc,
++				 false);
++	if (vsi->reconfig_rss) {
++		vsi->rss_size = min_t(int, pf->alloc_rss_size,
++				      vsi->num_queue_pairs);
++		ret = i40e_vsi_config_rss(vsi);
++		if (ret) {
++			dev_info(&pf->pdev->dev, "Failed to reconfig rss for num_queues\n");
++			return ret;
++		}
++		vsi->reconfig_rss = false;
++	}
++
++	ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
++	if (ret) {
++		dev_info(&pf->pdev->dev, "Update vsi config failed, err %s aq_err %s\n",
++			 i40e_stat_str(hw, ret),
++			 i40e_aq_str(hw, hw->aq.asq_last_status));
++		return ret;
++	}
++	/* update the local VSI info with updated queue map */
++	i40e_vsi_update_queue_map(vsi, &ctxt);
++	vsi->info.valid_sections = 0;
++
++	return ret;
++}
++
+ /**
+  * i40e_vsi_config_tc - Configure VSI Tx Scheduler for given TC map
+  * @vsi: VSI to be configured
+@@ -5683,24 +5753,6 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
+ 	INIT_LIST_HEAD(&vsi->ch_list);
+ }
+ 
+-/**
+- * i40e_is_any_channel - channel exist or not
+- * @vsi: ptr to VSI to which channels are associated with
+- *
+- * Returns true or false if channel(s) exist for associated VSI or not
+- **/
+-static bool i40e_is_any_channel(struct i40e_vsi *vsi)
+-{
+-	struct i40e_channel *ch, *ch_tmp;
+-
+-	list_for_each_entry_safe(ch, ch_tmp, &vsi->ch_list, list) {
+-		if (ch->initialized)
+-			return true;
+-	}
+-
+-	return false;
+-}
+-
+ /**
+  * i40e_get_max_queues_for_channel
+  * @vsi: ptr to VSI to which channels are associated with
+@@ -6206,26 +6258,15 @@ int i40e_create_queue_channel(struct i40e_vsi *vsi,
+ 	/* By default we are in VEPA mode, if this is the first VF/VMDq
+ 	 * VSI to be added switch to VEB mode.
+ 	 */
+-	if ((!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) ||
+-	    (!i40e_is_any_channel(vsi))) {
+-		if (!is_power_of_2(vsi->tc_config.tc_info[0].qcount)) {
+-			dev_dbg(&pf->pdev->dev,
+-				"Failed to create channel. Override queues (%u) not power of 2\n",
+-				vsi->tc_config.tc_info[0].qcount);
+-			return -EINVAL;
+-		}
+ 
+-		if (!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) {
+-			pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
++	if (!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) {
++		pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
+ 
+-			if (vsi->type == I40E_VSI_MAIN) {
+-				if (pf->flags & I40E_FLAG_TC_MQPRIO)
+-					i40e_do_reset(pf, I40E_PF_RESET_FLAG,
+-						      true);
+-				else
+-					i40e_do_reset_safe(pf,
+-							   I40E_PF_RESET_FLAG);
+-			}
++		if (vsi->type == I40E_VSI_MAIN) {
++			if (pf->flags & I40E_FLAG_TC_MQPRIO)
++				i40e_do_reset(pf, I40E_PF_RESET_FLAG, true);
++			else
++				i40e_do_reset_safe(pf, I40E_PF_RESET_FLAG);
+ 		}
+ 		/* now onwards for main VSI, number of queues will be value
+ 		 * of TC0's queue count
+@@ -7552,12 +7593,20 @@ config_tc:
+ 			    vsi->seid);
+ 		need_reset = true;
+ 		goto exit;
+-	} else {
+-		dev_info(&vsi->back->pdev->dev,
+-			 "Setup channel (id:%u) utilizing num_queues %d\n",
+-			 vsi->seid, vsi->tc_config.tc_info[0].qcount);
++	} else if (enabled_tc &&
++		   (!is_power_of_2(vsi->tc_config.tc_info[0].qcount))) {
++		netdev_info(netdev,
++			    "Failed to create channel. Override queues (%u) not power of 2\n",
++			    vsi->tc_config.tc_info[0].qcount);
++		ret = -EINVAL;
++		need_reset = true;
++		goto exit;
+ 	}
+ 
++	dev_info(&vsi->back->pdev->dev,
++		 "Setup channel (id:%u) utilizing num_queues %d\n",
++		 vsi->seid, vsi->tc_config.tc_info[0].qcount);
++
+ 	if (pf->flags & I40E_FLAG_TC_MQPRIO) {
+ 		if (vsi->mqprio_qopt.max_rate[0]) {
+ 			u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];
+@@ -8122,9 +8171,8 @@ static int i40e_configure_clsflower(struct i40e_vsi *vsi,
+ 		err = i40e_add_del_cloud_filter(vsi, filter, true);
+ 
+ 	if (err) {
+-		dev_err(&pf->pdev->dev,
+-			"Failed to add cloud filter, err %s\n",
+-			i40e_stat_str(&pf->hw, err));
++		dev_err(&pf->pdev->dev, "Failed to add cloud filter, err %d\n",
++			err);
+ 		goto err;
+ 	}
+ 
+@@ -13308,7 +13356,7 @@ int i40e_vsi_release(struct i40e_vsi *vsi)
+ 		dev_info(&pf->pdev->dev, "Can't remove PF VSI\n");
+ 		return -ENODEV;
+ 	}
+-
++	set_bit(__I40E_VSI_RELEASING, vsi->state);
+ 	uplink_seid = vsi->uplink_seid;
+ 	if (vsi->type != I40E_VSI_SRIOV) {
+ 		if (vsi->netdev_registered) {
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index a02167cce81e1..41c0a103119c1 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -130,17 +130,18 @@ void i40e_vc_notify_vf_reset(struct i40e_vf *vf)
+ /***********************misc routines*****************************/
+ 
+ /**
+- * i40e_vc_disable_vf
++ * i40e_vc_reset_vf
+  * @vf: pointer to the VF info
+- *
+- * Disable the VF through a SW reset.
++ * @notify_vf: notify vf about reset or not
++ * Reset VF handler.
+  **/
+-static inline void i40e_vc_disable_vf(struct i40e_vf *vf)
++static void i40e_vc_reset_vf(struct i40e_vf *vf, bool notify_vf)
+ {
+ 	struct i40e_pf *pf = vf->pf;
+ 	int i;
+ 
+-	i40e_vc_notify_vf_reset(vf);
++	if (notify_vf)
++		i40e_vc_notify_vf_reset(vf);
+ 
+ 	/* We want to ensure that an actual reset occurs initiated after this
+ 	 * function was called. However, we do not want to wait forever, so
+@@ -158,9 +159,14 @@ static inline void i40e_vc_disable_vf(struct i40e_vf *vf)
+ 		usleep_range(10000, 20000);
+ 	}
+ 
+-	dev_warn(&vf->pf->pdev->dev,
+-		 "Failed to initiate reset for VF %d after 200 milliseconds\n",
+-		 vf->vf_id);
++	if (notify_vf)
++		dev_warn(&vf->pf->pdev->dev,
++			 "Failed to initiate reset for VF %d after 200 milliseconds\n",
++			 vf->vf_id);
++	else
++		dev_dbg(&vf->pf->pdev->dev,
++			"Failed to initiate reset for VF %d after 200 milliseconds\n",
++			vf->vf_id);
+ }
+ 
+ /**
+@@ -621,14 +627,13 @@ static int i40e_config_vsi_rx_queue(struct i40e_vf *vf, u16 vsi_id,
+ 				    u16 vsi_queue_id,
+ 				    struct virtchnl_rxq_info *info)
+ {
++	u16 pf_queue_id = i40e_vc_get_pf_queue_id(vf, vsi_id, vsi_queue_id);
+ 	struct i40e_pf *pf = vf->pf;
++	struct i40e_vsi *vsi = pf->vsi[vf->lan_vsi_idx];
+ 	struct i40e_hw *hw = &pf->hw;
+ 	struct i40e_hmc_obj_rxq rx_ctx;
+-	u16 pf_queue_id;
+ 	int ret = 0;
+ 
+-	pf_queue_id = i40e_vc_get_pf_queue_id(vf, vsi_id, vsi_queue_id);
+-
+ 	/* clear the context structure first */
+ 	memset(&rx_ctx, 0, sizeof(struct i40e_hmc_obj_rxq));
+ 
+@@ -666,6 +671,10 @@ static int i40e_config_vsi_rx_queue(struct i40e_vf *vf, u16 vsi_id,
+ 	}
+ 	rx_ctx.rxmax = info->max_pkt_size;
+ 
++	/* if port VLAN is configured increase the max packet size */
++	if (vsi->info.pvid)
++		rx_ctx.rxmax += VLAN_HLEN;
++
+ 	/* enable 32bytes desc always */
+ 	rx_ctx.dsize = 1;
+ 
+@@ -2051,20 +2060,6 @@ err:
+ 	return ret;
+ }
+ 
+-/**
+- * i40e_vc_reset_vf_msg
+- * @vf: pointer to the VF info
+- *
+- * called from the VF to reset itself,
+- * unlike other virtchnl messages, PF driver
+- * doesn't send the response back to the VF
+- **/
+-static void i40e_vc_reset_vf_msg(struct i40e_vf *vf)
+-{
+-	if (test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states))
+-		i40e_reset_vf(vf, false);
+-}
+-
+ /**
+  * i40e_vc_config_promiscuous_mode_msg
+  * @vf: pointer to the VF info
+@@ -2163,11 +2158,12 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 	struct virtchnl_vsi_queue_config_info *qci =
+ 	    (struct virtchnl_vsi_queue_config_info *)msg;
+ 	struct virtchnl_queue_pair_info *qpi;
+-	struct i40e_pf *pf = vf->pf;
+ 	u16 vsi_id, vsi_queue_id = 0;
+-	u16 num_qps_all = 0;
++	struct i40e_pf *pf = vf->pf;
+ 	i40e_status aq_ret = 0;
+ 	int i, j = 0, idx = 0;
++	struct i40e_vsi *vsi;
++	u16 num_qps_all = 0;
+ 
+ 	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+ 		aq_ret = I40E_ERR_PARAM;
+@@ -2256,9 +2252,15 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 		pf->vsi[vf->lan_vsi_idx]->num_queue_pairs =
+ 			qci->num_queue_pairs;
+ 	} else {
+-		for (i = 0; i < vf->num_tc; i++)
+-			pf->vsi[vf->ch[i].vsi_idx]->num_queue_pairs =
+-			       vf->ch[i].num_qps;
++		for (i = 0; i < vf->num_tc; i++) {
++			vsi = pf->vsi[vf->ch[i].vsi_idx];
++			vsi->num_queue_pairs = vf->ch[i].num_qps;
++
++			if (i40e_update_adq_vsi_queues(vsi, i)) {
++				aq_ret = I40E_ERR_CONFIG;
++				goto error_param;
++			}
++		}
+ 	}
+ 
+ error_param:
+@@ -2553,8 +2555,7 @@ static int i40e_vc_request_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 	} else {
+ 		/* successful request */
+ 		vf->num_req_queues = req_pairs;
+-		i40e_vc_notify_vf_reset(vf);
+-		i40e_reset_vf(vf, false);
++		i40e_vc_reset_vf(vf, true);
+ 		return 0;
+ 	}
+ 
+@@ -3767,8 +3768,7 @@ static int i40e_vc_add_qch_msg(struct i40e_vf *vf, u8 *msg)
+ 	vf->num_req_queues = 0;
+ 
+ 	/* reset the VF in order to allocate resources */
+-	i40e_vc_notify_vf_reset(vf);
+-	i40e_reset_vf(vf, false);
++	i40e_vc_reset_vf(vf, true);
+ 
+ 	return I40E_SUCCESS;
+ 
+@@ -3808,8 +3808,7 @@ static int i40e_vc_del_qch_msg(struct i40e_vf *vf, u8 *msg)
+ 	}
+ 
+ 	/* reset the VF in order to allocate resources */
+-	i40e_vc_notify_vf_reset(vf);
+-	i40e_reset_vf(vf, false);
++	i40e_vc_reset_vf(vf, true);
+ 
+ 	return I40E_SUCCESS;
+ 
+@@ -3871,7 +3870,7 @@ int i40e_vc_process_vf_msg(struct i40e_pf *pf, s16 vf_id, u32 v_opcode,
+ 		i40e_vc_notify_vf_link_state(vf);
+ 		break;
+ 	case VIRTCHNL_OP_RESET_VF:
+-		i40e_vc_reset_vf_msg(vf);
++		i40e_vc_reset_vf(vf, false);
+ 		ret = 0;
+ 		break;
+ 	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+@@ -4125,7 +4124,7 @@ int i40e_ndo_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
+ 	/* Force the VF interface down so it has to bring up with new MAC
+ 	 * address
+ 	 */
+-	i40e_vc_disable_vf(vf);
++	i40e_vc_reset_vf(vf, true);
+ 	dev_info(&pf->pdev->dev, "Bring down and up the VF interface to make this change effective.\n");
+ 
+ error_param:
+@@ -4133,34 +4132,6 @@ error_param:
+ 	return ret;
+ }
+ 
+-/**
+- * i40e_vsi_has_vlans - True if VSI has configured VLANs
+- * @vsi: pointer to the vsi
+- *
+- * Check if a VSI has configured any VLANs. False if we have a port VLAN or if
+- * we have no configured VLANs. Do not call while holding the
+- * mac_filter_hash_lock.
+- */
+-static bool i40e_vsi_has_vlans(struct i40e_vsi *vsi)
+-{
+-	bool have_vlans;
+-
+-	/* If we have a port VLAN, then the VSI cannot have any VLANs
+-	 * configured, as all MAC/VLAN filters will be assigned to the PVID.
+-	 */
+-	if (vsi->info.pvid)
+-		return false;
+-
+-	/* Since we don't have a PVID, we know that if the device is in VLAN
+-	 * mode it must be because of a VLAN filter configured on this VSI.
+-	 */
+-	spin_lock_bh(&vsi->mac_filter_hash_lock);
+-	have_vlans = i40e_is_vsi_in_vlan(vsi);
+-	spin_unlock_bh(&vsi->mac_filter_hash_lock);
+-
+-	return have_vlans;
+-}
+-
+ /**
+  * i40e_ndo_set_vf_port_vlan
+  * @netdev: network interface device structure
+@@ -4217,19 +4188,9 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id,
+ 		/* duplicate request, so just return success */
+ 		goto error_pvid;
+ 
+-	if (i40e_vsi_has_vlans(vsi)) {
+-		dev_err(&pf->pdev->dev,
+-			"VF %d has already configured VLAN filters and the administrator is requesting a port VLAN override.\nPlease unload and reload the VF driver for this change to take effect.\n",
+-			vf_id);
+-		/* Administrator Error - knock the VF offline until he does
+-		 * the right thing by reconfiguring his network correctly
+-		 * and then reloading the VF driver.
+-		 */
+-		i40e_vc_disable_vf(vf);
+-		/* During reset the VF got a new VSI, so refresh the pointer. */
+-		vsi = pf->vsi[vf->lan_vsi_idx];
+-	}
+-
++	i40e_vc_reset_vf(vf, true);
++	/* During reset the VF got a new VSI, so refresh a pointer. */
++	vsi = pf->vsi[vf->lan_vsi_idx];
+ 	/* Locked once because multiple functions below iterate list */
+ 	spin_lock_bh(&vsi->mac_filter_hash_lock);
+ 
+@@ -4610,7 +4571,7 @@ int i40e_ndo_set_vf_trust(struct net_device *netdev, int vf_id, bool setting)
+ 		goto out;
+ 
+ 	vf->trusted = setting;
+-	i40e_vc_disable_vf(vf);
++	i40e_vc_reset_vf(vf, true);
+ 	dev_info(&pf->pdev->dev, "VF %u is now %strusted\n",
+ 		 vf_id, setting ? "" : "un");
+ 
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index c93567f4d0f79..ea85b06857fa2 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -892,6 +892,7 @@ static int iavf_set_channels(struct net_device *netdev,
+ {
+ 	struct iavf_adapter *adapter = netdev_priv(netdev);
+ 	u32 num_req = ch->combined_count;
++	int i;
+ 
+ 	if ((adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_ADQ) &&
+ 	    adapter->num_tc) {
+@@ -902,7 +903,7 @@ static int iavf_set_channels(struct net_device *netdev,
+ 	/* All of these should have already been checked by ethtool before this
+ 	 * even gets to us, but just to be sure.
+ 	 */
+-	if (num_req > adapter->vsi_res->num_queue_pairs)
++	if (num_req == 0 || num_req > adapter->vsi_res->num_queue_pairs)
+ 		return -EINVAL;
+ 
+ 	if (num_req == adapter->num_active_queues)
+@@ -914,6 +915,20 @@ static int iavf_set_channels(struct net_device *netdev,
+ 	adapter->num_req_queues = num_req;
+ 	adapter->flags |= IAVF_FLAG_REINIT_ITR_NEEDED;
+ 	iavf_schedule_reset(adapter);
++
++	/* wait for the reset is done */
++	for (i = 0; i < IAVF_RESET_WAIT_COMPLETE_COUNT; i++) {
++		msleep(IAVF_RESET_WAIT_MS);
++		if (adapter->flags & IAVF_FLAG_RESET_PENDING)
++			continue;
++		break;
++	}
++	if (i == IAVF_RESET_WAIT_COMPLETE_COUNT) {
++		adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
++		adapter->num_active_queues = num_req;
++		return -EOPNOTSUPP;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -960,14 +975,13 @@ static int iavf_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
+ 
+ 	if (hfunc)
+ 		*hfunc = ETH_RSS_HASH_TOP;
+-	if (!indir)
+-		return 0;
+-
+-	memcpy(key, adapter->rss_key, adapter->rss_key_size);
++	if (key)
++		memcpy(key, adapter->rss_key, adapter->rss_key_size);
+ 
+-	/* Each 32 bits pointed by 'indir' is stored with a lut entry */
+-	for (i = 0; i < adapter->rss_lut_size; i++)
+-		indir[i] = (u32)adapter->rss_lut[i];
++	if (indir)
++		/* Each 32 bits pointed by 'indir' is stored with a lut entry */
++		for (i = 0; i < adapter->rss_lut_size; i++)
++			indir[i] = (u32)adapter->rss_lut[i];
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index f06c079e812ec..643679cad8657 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1616,8 +1616,7 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter)
+ 		iavf_set_promiscuous(adapter, FLAG_VF_MULTICAST_PROMISC);
+ 		return 0;
+ 	}
+-
+-	if ((adapter->aq_required & IAVF_FLAG_AQ_RELEASE_PROMISC) &&
++	if ((adapter->aq_required & IAVF_FLAG_AQ_RELEASE_PROMISC) ||
+ 	    (adapter->aq_required & IAVF_FLAG_AQ_RELEASE_ALLMULTI)) {
+ 		iavf_set_promiscuous(adapter, 0);
+ 		return 0;
+@@ -2047,8 +2046,8 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
+ 
+ 	iavf_free_misc_irq(adapter);
+ 	iavf_reset_interrupt_capability(adapter);
+-	iavf_free_queues(adapter);
+ 	iavf_free_q_vectors(adapter);
++	iavf_free_queues(adapter);
+ 	memset(adapter->vf_res, 0, IAVF_VIRTCHNL_VF_RESOURCE_SIZE);
+ 	iavf_shutdown_adminq(&adapter->hw);
+ 	adapter->netdev->flags &= ~IFF_UP;
+@@ -2330,7 +2329,7 @@ static void iavf_adminq_task(struct work_struct *work)
+ 
+ 	/* check for error indications */
+ 	val = rd32(hw, hw->aq.arq.len);
+-	if (val == 0xdeadbeef) /* indicates device in reset */
++	if (val == 0xdeadbeef || val == 0xffffffff) /* device in reset */
+ 		goto freedom;
+ 	oldval = val;
+ 	if (val & IAVF_VF_ARQLEN1_ARQVFE_MASK) {
+@@ -3028,11 +3027,11 @@ static int iavf_configure_clsflower(struct iavf_adapter *adapter,
+ 	/* start out with flow type and eth type IPv4 to begin with */
+ 	filter->f.flow_type = VIRTCHNL_TCP_V4_FLOW;
+ 	err = iavf_parse_cls_flower(adapter, cls_flower, filter);
+-	if (err < 0)
++	if (err)
+ 		goto err;
+ 
+ 	err = iavf_handle_tclass(adapter, tc, filter);
+-	if (err < 0)
++	if (err)
+ 		goto err;
+ 
+ 	/* add filter to the list */
+@@ -3419,7 +3418,8 @@ static netdev_features_t iavf_fix_features(struct net_device *netdev,
+ {
+ 	struct iavf_adapter *adapter = netdev_priv(netdev);
+ 
+-	if (!(adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
++	if (adapter->vf_res &&
++	    !(adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ 		features &= ~(NETIF_F_HW_VLAN_CTAG_TX |
+ 			      NETIF_F_HW_VLAN_CTAG_RX |
+ 			      NETIF_F_HW_VLAN_CTAG_FILTER);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 66d92a0cfef35..5b67d24b2b5ed 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -4361,9 +4361,6 @@ static void ice_remove(struct pci_dev *pdev)
+ 	struct ice_pf *pf = pci_get_drvdata(pdev);
+ 	int i;
+ 
+-	if (!pf)
+-		return;
+-
+ 	for (i = 0; i < ICE_MAX_RESET_WAIT; i++) {
+ 		if (!ice_is_reset_in_progress(pf->state))
+ 			break;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cq.c b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+index c74600be570ed..68d7ca17b6f51 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+@@ -163,13 +163,14 @@ int mlx5_core_destroy_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq)
+ 	MLX5_SET(destroy_cq_in, in, cqn, cq->cqn);
+ 	MLX5_SET(destroy_cq_in, in, uid, cq->uid);
+ 	err = mlx5_cmd_exec_in(dev, destroy_cq, in);
++	if (err)
++		return err;
+ 
+ 	synchronize_irq(cq->irqn);
+-
+ 	mlx5_cq_put(cq);
+ 	wait_for_completion(&cq->free);
+ 
+-	return err;
++	return 0;
+ }
+ EXPORT_SYMBOL(mlx5_core_destroy_cq);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
+index 07c8d9811bc81..10d195042ab55 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
+@@ -507,6 +507,8 @@ void mlx5_debug_cq_remove(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq)
+ 	if (!mlx5_debugfs_root)
+ 		return;
+ 
+-	if (cq->dbg)
++	if (cq->dbg) {
+ 		rem_res_tree(cq->dbg);
++		cq->dbg = NULL;
++	}
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 401b2f5128dd4..78cc6f0bbc72b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1663,7 +1663,7 @@ int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs)
+ 	if (!ESW_ALLOWED(esw))
+ 		return 0;
+ 
+-	mutex_lock(&esw->mode_lock);
++	down_write(&esw->mode_lock);
+ 	if (esw->mode == MLX5_ESWITCH_NONE) {
+ 		ret = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_LEGACY, num_vfs);
+ 	} else {
+@@ -1675,7 +1675,7 @@ int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs)
+ 		if (!ret)
+ 			esw->esw_funcs.num_vfs = num_vfs;
+ 	}
+-	mutex_unlock(&esw->mode_lock);
++	up_write(&esw->mode_lock);
+ 	return ret;
+ }
+ 
+@@ -1719,10 +1719,10 @@ void mlx5_eswitch_disable(struct mlx5_eswitch *esw, bool clear_vf)
+ 	if (!ESW_ALLOWED(esw))
+ 		return;
+ 
+-	mutex_lock(&esw->mode_lock);
++	down_write(&esw->mode_lock);
+ 	mlx5_eswitch_disable_locked(esw, clear_vf);
+ 	esw->esw_funcs.num_vfs = 0;
+-	mutex_unlock(&esw->mode_lock);
++	up_write(&esw->mode_lock);
+ }
+ 
+ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
+@@ -1778,7 +1778,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
+ 	atomic64_set(&esw->offloads.num_flows, 0);
+ 	ida_init(&esw->offloads.vport_metadata_ida);
+ 	mutex_init(&esw->state_lock);
+-	mutex_init(&esw->mode_lock);
++	init_rwsem(&esw->mode_lock);
+ 
+ 	mlx5_esw_for_all_vports(esw, i, vport) {
+ 		vport->vport = mlx5_eswitch_index_to_vport_num(esw, i);
+@@ -1813,7 +1813,6 @@ void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw)
+ 	esw->dev->priv.eswitch = NULL;
+ 	destroy_workqueue(esw->work_queue);
+ 	esw_offloads_cleanup_reps(esw);
+-	mutex_destroy(&esw->mode_lock);
+ 	mutex_destroy(&esw->state_lock);
+ 	ida_destroy(&esw->offloads.vport_metadata_ida);
+ 	mlx5e_mod_hdr_tbl_destroy(&esw->offloads.mod_hdr);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+index cf87de94418ff..59c674f157a8c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+@@ -262,7 +262,7 @@ struct mlx5_eswitch {
+ 	/* Protects eswitch mode change that occurs via one or more
+ 	 * user commands, i.e. sriov state change, devlink commands.
+ 	 */
+-	struct mutex mode_lock;
++	struct rw_semaphore mode_lock;
+ 
+ 	struct {
+ 		bool            enabled;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 5801f55ff0771..e06b1ba7d2349 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -2508,7 +2508,7 @@ int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,
+ 	if (esw_mode_from_devlink(mode, &mlx5_mode))
+ 		return -EINVAL;
+ 
+-	mutex_lock(&esw->mode_lock);
++	down_write(&esw->mode_lock);
+ 	cur_mlx5_mode = esw->mode;
+ 	if (cur_mlx5_mode == mlx5_mode)
+ 		goto unlock;
+@@ -2521,7 +2521,7 @@ int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,
+ 		err = -EINVAL;
+ 
+ unlock:
+-	mutex_unlock(&esw->mode_lock);
++	up_write(&esw->mode_lock);
+ 	return err;
+ }
+ 
+@@ -2534,14 +2534,14 @@ int mlx5_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode)
+ 	if (IS_ERR(esw))
+ 		return PTR_ERR(esw);
+ 
+-	mutex_lock(&esw->mode_lock);
++	down_write(&esw->mode_lock);
+ 	err = eswitch_devlink_esw_mode_check(esw);
+ 	if (err)
+ 		goto unlock;
+ 
+ 	err = esw_mode_to_devlink(esw->mode, mode);
+ unlock:
+-	mutex_unlock(&esw->mode_lock);
++	up_write(&esw->mode_lock);
+ 	return err;
+ }
+ 
+@@ -2557,7 +2557,7 @@ int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode,
+ 	if (IS_ERR(esw))
+ 		return PTR_ERR(esw);
+ 
+-	mutex_lock(&esw->mode_lock);
++	down_write(&esw->mode_lock);
+ 	err = eswitch_devlink_esw_mode_check(esw);
+ 	if (err)
+ 		goto out;
+@@ -2599,7 +2599,7 @@ int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode,
+ 	}
+ 
+ 	esw->offloads.inline_mode = mlx5_mode;
+-	mutex_unlock(&esw->mode_lock);
++	up_write(&esw->mode_lock);
+ 	return 0;
+ 
+ revert_inline_mode:
+@@ -2609,7 +2609,7 @@ revert_inline_mode:
+ 						 vport,
+ 						 esw->offloads.inline_mode);
+ out:
+-	mutex_unlock(&esw->mode_lock);
++	up_write(&esw->mode_lock);
+ 	return err;
+ }
+ 
+@@ -2622,14 +2622,14 @@ int mlx5_devlink_eswitch_inline_mode_get(struct devlink *devlink, u8 *mode)
+ 	if (IS_ERR(esw))
+ 		return PTR_ERR(esw);
+ 
+-	mutex_lock(&esw->mode_lock);
++	down_write(&esw->mode_lock);
+ 	err = eswitch_devlink_esw_mode_check(esw);
+ 	if (err)
+ 		goto unlock;
+ 
+ 	err = esw_inline_mode_to_devlink(esw->offloads.inline_mode, mode);
+ unlock:
+-	mutex_unlock(&esw->mode_lock);
++	up_write(&esw->mode_lock);
+ 	return err;
+ }
+ 
+@@ -2645,7 +2645,7 @@ int mlx5_devlink_eswitch_encap_mode_set(struct devlink *devlink,
+ 	if (IS_ERR(esw))
+ 		return PTR_ERR(esw);
+ 
+-	mutex_lock(&esw->mode_lock);
++	down_write(&esw->mode_lock);
+ 	err = eswitch_devlink_esw_mode_check(esw);
+ 	if (err)
+ 		goto unlock;
+@@ -2691,7 +2691,7 @@ int mlx5_devlink_eswitch_encap_mode_set(struct devlink *devlink,
+ 	}
+ 
+ unlock:
+-	mutex_unlock(&esw->mode_lock);
++	up_write(&esw->mode_lock);
+ 	return err;
+ }
+ 
+@@ -2706,15 +2706,15 @@ int mlx5_devlink_eswitch_encap_mode_get(struct devlink *devlink,
+ 		return PTR_ERR(esw);
+ 
+ 
+-	mutex_lock(&esw->mode_lock);
++	down_write(&esw->mode_lock);
+ 	err = eswitch_devlink_esw_mode_check(esw);
+ 	if (err)
+ 		goto unlock;
+ 
+ 	*encap = esw->offloads.encap;
+ unlock:
+-	mutex_unlock(&esw->mode_lock);
+-	return 0;
++	up_write(&esw->mode_lock);
++	return err;
+ }
+ 
+ static bool
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+index fe5476a76464f..11cc3ea5010aa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+@@ -365,6 +365,7 @@ static int mlx5_handle_changeupper_event(struct mlx5_lag *ldev,
+ 	bool is_bonded, is_in_lag, mode_supported;
+ 	int bond_status = 0;
+ 	int num_slaves = 0;
++	int changed = 0;
+ 	int idx;
+ 
+ 	if (!netif_is_lag_master(upper))
+@@ -401,27 +402,27 @@ static int mlx5_handle_changeupper_event(struct mlx5_lag *ldev,
+ 	 */
+ 	is_in_lag = num_slaves == MLX5_MAX_PORTS && bond_status == 0x3;
+ 
+-	if (!mlx5_lag_is_ready(ldev) && is_in_lag) {
+-		NL_SET_ERR_MSG_MOD(info->info.extack,
+-				   "Can't activate LAG offload, PF is configured with more than 64 VFs");
+-		return 0;
+-	}
+-
+ 	/* Lag mode must be activebackup or hash. */
+ 	mode_supported = tracker->tx_type == NETDEV_LAG_TX_TYPE_ACTIVEBACKUP ||
+ 			 tracker->tx_type == NETDEV_LAG_TX_TYPE_HASH;
+ 
+-	if (is_in_lag && !mode_supported)
+-		NL_SET_ERR_MSG_MOD(info->info.extack,
+-				   "Can't activate LAG offload, TX type isn't supported");
+-
+ 	is_bonded = is_in_lag && mode_supported;
+ 	if (tracker->is_bonded != is_bonded) {
+ 		tracker->is_bonded = is_bonded;
+-		return 1;
++		changed = 1;
+ 	}
+ 
+-	return 0;
++	if (!is_in_lag)
++		return changed;
++
++	if (!mlx5_lag_is_ready(ldev))
++		NL_SET_ERR_MSG_MOD(info->info.extack,
++				   "Can't activate LAG offload, PF is configured with more than 64 VFs");
++	else if (!mode_supported)
++		NL_SET_ERR_MSG_MOD(info->info.extack,
++				   "Can't activate LAG offload, TX type isn't supported");
++
++	return changed;
+ }
+ 
+ static int mlx5_handle_changelowerstate_event(struct mlx5_lag *ldev,
+@@ -464,9 +465,6 @@ static int mlx5_lag_netdev_event(struct notifier_block *this,
+ 
+ 	ldev    = container_of(this, struct mlx5_lag, nb);
+ 
+-	if (!mlx5_lag_is_ready(ldev) && event == NETDEV_CHANGELOWERSTATE)
+-		return NOTIFY_DONE;
+-
+ 	tracker = ldev->tracker;
+ 
+ 	switch (event) {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+index 143b2cb13bf94..e7fbc9b30bf96 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+@@ -21,6 +21,7 @@
+ #include <linux/delay.h>
+ #include <linux/mfd/syscon.h>
+ #include <linux/regmap.h>
++#include <linux/pm_runtime.h>
+ 
+ #include "stmmac_platform.h"
+ 
+@@ -1335,6 +1336,8 @@ static int rk_gmac_powerup(struct rk_priv_data *bsp_priv)
+ 		return ret;
+ 	}
+ 
++	pm_runtime_get_sync(dev);
++
+ 	if (bsp_priv->integrated_phy)
+ 		rk_gmac_integrated_phy_powerup(bsp_priv);
+ 
+@@ -1346,6 +1349,8 @@ static void rk_gmac_powerdown(struct rk_priv_data *gmac)
+ 	if (gmac->integrated_phy)
+ 		rk_gmac_integrated_phy_powerdown(gmac);
+ 
++	pm_runtime_put_sync(&gmac->pdev->dev);
++
+ 	phy_power_on(gmac, false);
+ 	gmac_clk_enable(gmac, false);
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+index 70d41783329dd..f37b6d57b2fe2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+@@ -485,8 +485,28 @@ static int socfpga_dwmac_resume(struct device *dev)
+ }
+ #endif /* CONFIG_PM_SLEEP */
+ 
+-static SIMPLE_DEV_PM_OPS(socfpga_dwmac_pm_ops, stmmac_suspend,
+-					       socfpga_dwmac_resume);
++static int __maybe_unused socfpga_dwmac_runtime_suspend(struct device *dev)
++{
++	struct net_device *ndev = dev_get_drvdata(dev);
++	struct stmmac_priv *priv = netdev_priv(ndev);
++
++	stmmac_bus_clks_config(priv, false);
++
++	return 0;
++}
++
++static int __maybe_unused socfpga_dwmac_runtime_resume(struct device *dev)
++{
++	struct net_device *ndev = dev_get_drvdata(dev);
++	struct stmmac_priv *priv = netdev_priv(ndev);
++
++	return stmmac_bus_clks_config(priv, true);
++}
++
++static const struct dev_pm_ops socfpga_dwmac_pm_ops = {
++	SET_SYSTEM_SLEEP_PM_OPS(stmmac_suspend, socfpga_dwmac_resume)
++	SET_RUNTIME_PM_OPS(socfpga_dwmac_runtime_suspend, socfpga_dwmac_runtime_resume, NULL)
++};
+ 
+ static const struct socfpga_dwmac_ops socfpga_gen5_ops = {
+ 	.set_phy_mode = socfpga_gen5_set_phy_mode,
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index b40b711cf4bd5..a37aae00e128f 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -703,6 +703,7 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
+ 	u32 offset;
+ 	u32 val;
+ 
++	/* This should only be changed when HOL_BLOCK_EN is disabled */
+ 	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(endpoint_id);
+ 	val = ipa_reg_init_hol_block_timer_val(ipa, microseconds);
+ 	iowrite32(val, ipa->reg_virt + offset);
+@@ -730,6 +731,7 @@ void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa)
+ 		if (endpoint->toward_ipa || endpoint->ee_id != GSI_EE_MODEM)
+ 			continue;
+ 
++		ipa_endpoint_init_hol_block_enable(endpoint, false);
+ 		ipa_endpoint_init_hol_block_timer(endpoint, 0);
+ 		ipa_endpoint_init_hol_block_enable(endpoint, true);
+ 	}
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index c671d8e257741..ffbc7eda95eed 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1021,6 +1021,7 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct tun_struct *tun = netdev_priv(dev);
+ 	int txq = skb->queue_mapping;
++	struct netdev_queue *queue;
+ 	struct tun_file *tfile;
+ 	int len = skb->len;
+ 
+@@ -1065,6 +1066,10 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	if (ptr_ring_produce(&tfile->tx_ring, skb))
+ 		goto drop;
+ 
++	/* NETIF_F_LLTX requires to do our own update of trans_start */
++	queue = netdev_get_tx_queue(dev, txq);
++	queue->trans_start = jiffies;
++
+ 	/* Notify and wake up reader process */
+ 	if (tfile->flags & TUN_FASYNC)
+ 		kill_fasync(&tfile->fasync, SIGIO, POLL_IN);
+diff --git a/drivers/pinctrl/qcom/pinctrl-sdm845.c b/drivers/pinctrl/qcom/pinctrl-sdm845.c
+index c51793f6546f1..fdfd7b8f3a76d 100644
+--- a/drivers/pinctrl/qcom/pinctrl-sdm845.c
++++ b/drivers/pinctrl/qcom/pinctrl-sdm845.c
+@@ -1310,6 +1310,7 @@ static const struct msm_pinctrl_soc_data sdm845_pinctrl = {
+ 	.ngpios = 151,
+ 	.wakeirq_map = sdm845_pdc_map,
+ 	.nwakeirq_map = ARRAY_SIZE(sdm845_pdc_map),
++	.wakeirq_dual_edge_errata = true,
+ };
+ 
+ static const struct msm_pinctrl_soc_data sdm845_acpi_pinctrl = {
+diff --git a/drivers/platform/x86/hp_accel.c b/drivers/platform/x86/hp_accel.c
+index 8c0867bda8280..0dfaa1a43b674 100644
+--- a/drivers/platform/x86/hp_accel.c
++++ b/drivers/platform/x86/hp_accel.c
+@@ -372,9 +372,11 @@ static int lis3lv02d_add(struct acpi_device *device)
+ 	INIT_WORK(&hpled_led.work, delayed_set_status_worker);
+ 	ret = led_classdev_register(NULL, &hpled_led.led_classdev);
+ 	if (ret) {
++		i8042_remove_filter(hp_accel_i8042_filter);
+ 		lis3lv02d_joystick_disable(&lis3_dev);
+ 		lis3lv02d_poweroff(&lis3_dev);
+ 		flush_work(&hpled_led.work);
++		lis3lv02d_remove_fs(&lis3_dev);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/scsi/advansys.c b/drivers/scsi/advansys.c
+index c2c7850ff7b42..727d8f019eddd 100644
+--- a/drivers/scsi/advansys.c
++++ b/drivers/scsi/advansys.c
+@@ -3366,8 +3366,8 @@ static void asc_prt_adv_board_info(struct seq_file *m, struct Scsi_Host *shost)
+ 		   shost->host_no);
+ 
+ 	seq_printf(m,
+-		   " iop_base 0x%lx, cable_detect: %X, err_code %u\n",
+-		   (unsigned long)v->iop_base,
++		   " iop_base 0x%p, cable_detect: %X, err_code %u\n",
++		   v->iop_base,
+ 		   AdvReadWordRegister(iop_base,IOPW_SCSI_CFG1) & CABLE_DETECT,
+ 		   v->err_code);
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 990b700de6892..06a23718a7c7f 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -20080,6 +20080,7 @@ lpfc_drain_txq(struct lpfc_hba *phba)
+ 					fail_msg,
+ 					piocbq->iotag, piocbq->sli4_xritag);
+ 			list_add_tail(&piocbq->list, &completions);
++			fail_msg = NULL;
+ 		}
+ 		spin_unlock_irqrestore(&pring->ring_lock, iflags);
+ 	}
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 4ebd8851a0c9f..734745f450211 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -1650,10 +1650,8 @@ qla2x00_get_adapter_id(scsi_qla_host_t *vha, uint16_t *id, uint8_t *al_pa,
+ 		mcp->in_mb |= MBX_13|MBX_12|MBX_11|MBX_10;
+ 	if (IS_FWI2_CAPABLE(vha->hw))
+ 		mcp->in_mb |= MBX_19|MBX_18|MBX_17|MBX_16;
+-	if (IS_QLA27XX(vha->hw) || IS_QLA28XX(vha->hw)) {
+-		mcp->in_mb |= MBX_15;
+-		mcp->out_mb |= MBX_7|MBX_21|MBX_22|MBX_23;
+-	}
++	if (IS_QLA27XX(vha->hw) || IS_QLA28XX(vha->hw))
++		mcp->in_mb |= MBX_15|MBX_21|MBX_22|MBX_23;
+ 
+ 	mcp->tov = MBX_TOV_SECONDS;
+ 	mcp->flags = 0;
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index b6540b92f5661..3fc7c2a31c191 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -1855,7 +1855,7 @@ static int resp_readcap16(struct scsi_cmnd *scp,
+ {
+ 	unsigned char *cmd = scp->cmnd;
+ 	unsigned char arr[SDEBUG_READCAP16_ARR_SZ];
+-	int alloc_len;
++	u32 alloc_len;
+ 
+ 	alloc_len = get_unaligned_be32(cmd + 10);
+ 	/* following just in case virtual_gb changed */
+@@ -1884,7 +1884,7 @@ static int resp_readcap16(struct scsi_cmnd *scp,
+ 	}
+ 
+ 	return fill_from_dev_buffer(scp, arr,
+-			    min_t(int, alloc_len, SDEBUG_READCAP16_ARR_SZ));
++			    min_t(u32, alloc_len, SDEBUG_READCAP16_ARR_SZ));
+ }
+ 
+ #define SDEBUG_MAX_TGTPGS_ARR_SZ 1412
+@@ -1895,8 +1895,9 @@ static int resp_report_tgtpgs(struct scsi_cmnd *scp,
+ 	unsigned char *cmd = scp->cmnd;
+ 	unsigned char *arr;
+ 	int host_no = devip->sdbg_host->shost->host_no;
+-	int n, ret, alen, rlen;
+ 	int port_group_a, port_group_b, port_a, port_b;
++	u32 alen, n, rlen;
++	int ret;
+ 
+ 	alen = get_unaligned_be32(cmd + 6);
+ 	arr = kzalloc(SDEBUG_MAX_TGTPGS_ARR_SZ, GFP_ATOMIC);
+@@ -1958,9 +1959,9 @@ static int resp_report_tgtpgs(struct scsi_cmnd *scp,
+ 	 * - The constructed command length
+ 	 * - The maximum array size
+ 	 */
+-	rlen = min_t(int, alen, n);
++	rlen = min(alen, n);
+ 	ret = fill_from_dev_buffer(scp, arr,
+-			   min_t(int, rlen, SDEBUG_MAX_TGTPGS_ARR_SZ));
++			   min_t(u32, rlen, SDEBUG_MAX_TGTPGS_ARR_SZ));
+ 	kfree(arr);
+ 	return ret;
+ }
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 1378bb1a7371c..8de67679a8782 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -796,6 +796,7 @@ store_state_field(struct device *dev, struct device_attribute *attr,
+ 	int i, ret;
+ 	struct scsi_device *sdev = to_scsi_device(dev);
+ 	enum scsi_device_state state = 0;
++	bool rescan_dev = false;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(sdev_states); i++) {
+ 		const int len = strlen(sdev_states[i].name);
+@@ -814,20 +815,27 @@ store_state_field(struct device *dev, struct device_attribute *attr,
+ 	}
+ 
+ 	mutex_lock(&sdev->state_mutex);
+-	ret = scsi_device_set_state(sdev, state);
+-	/*
+-	 * If the device state changes to SDEV_RUNNING, we need to
+-	 * run the queue to avoid I/O hang, and rescan the device
+-	 * to revalidate it. Running the queue first is necessary
+-	 * because another thread may be waiting inside
+-	 * blk_mq_freeze_queue_wait() and because that call may be
+-	 * waiting for pending I/O to finish.
+-	 */
+-	if (ret == 0 && state == SDEV_RUNNING) {
++	if (sdev->sdev_state == SDEV_RUNNING && state == SDEV_RUNNING) {
++		ret = count;
++	} else {
++		ret = scsi_device_set_state(sdev, state);
++		if (ret == 0 && state == SDEV_RUNNING)
++			rescan_dev = true;
++	}
++	mutex_unlock(&sdev->state_mutex);
++
++	if (rescan_dev) {
++		/*
++		 * If the device state changes to SDEV_RUNNING, we need to
++		 * run the queue to avoid I/O hang, and rescan the device
++		 * to revalidate it. Running the queue first is necessary
++		 * because another thread may be waiting inside
++		 * blk_mq_freeze_queue_wait() and because that call may be
++		 * waiting for pending I/O to finish.
++		 */
+ 		blk_mq_run_hw_queues(sdev->request_queue, true);
+ 		scsi_rescan_device(dev);
+ 	}
+-	mutex_unlock(&sdev->state_mutex);
+ 
+ 	return ret == 0 ? count : -EINVAL;
+ }
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 930f35863cbb5..e3a9a02cadf5a 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -6099,27 +6099,6 @@ static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba)
+ 	return retval;
+ }
+ 
+-struct ctm_info {
+-	struct ufs_hba	*hba;
+-	unsigned long	pending;
+-	unsigned int	ncpl;
+-};
+-
+-static bool ufshcd_compl_tm(struct request *req, void *priv, bool reserved)
+-{
+-	struct ctm_info *const ci = priv;
+-	struct completion *c;
+-
+-	WARN_ON_ONCE(reserved);
+-	if (test_bit(req->tag, &ci->pending))
+-		return true;
+-	ci->ncpl++;
+-	c = req->end_io_data;
+-	if (c)
+-		complete(c);
+-	return true;
+-}
+-
+ /**
+  * ufshcd_tmc_handler - handle task management function completion
+  * @hba: per adapter instance
+@@ -6130,14 +6109,22 @@ static bool ufshcd_compl_tm(struct request *req, void *priv, bool reserved)
+  */
+ static irqreturn_t ufshcd_tmc_handler(struct ufs_hba *hba)
+ {
+-	struct request_queue *q = hba->tmf_queue;
+-	struct ctm_info ci = {
+-		.hba	 = hba,
+-		.pending = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL),
+-	};
++	unsigned long pending, issued;
++	irqreturn_t ret = IRQ_NONE;
++	int tag;
+ 
+-	blk_mq_tagset_busy_iter(q->tag_set, ufshcd_compl_tm, &ci);
+-	return ci.ncpl ? IRQ_HANDLED : IRQ_NONE;
++	pending = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL);
++
++	issued = hba->outstanding_tasks & ~pending;
++	for_each_set_bit(tag, &issued, hba->nutmrs) {
++		struct request *req = hba->tmf_rqs[tag];
++		struct completion *c = req->end_io_data;
++
++		complete(c);
++		ret = IRQ_HANDLED;
++	}
++
++	return ret;
+ }
+ 
+ /**
+@@ -6267,9 +6254,9 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
+ 	ufshcd_hold(hba, false);
+ 
+ 	spin_lock_irqsave(host->host_lock, flags);
+-	blk_mq_start_request(req);
+ 
+ 	task_tag = req->tag;
++	hba->tmf_rqs[req->tag] = req;
+ 	treq->req_header.dword_0 |= cpu_to_be32(task_tag);
+ 
+ 	memcpy(hba->utmrdl_base_addr + task_tag, treq, sizeof(*treq));
+@@ -6293,11 +6280,6 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
+ 	err = wait_for_completion_io_timeout(&wait,
+ 			msecs_to_jiffies(TM_CMD_TIMEOUT));
+ 	if (!err) {
+-		/*
+-		 * Make sure that ufshcd_compl_tm() does not trigger a
+-		 * use-after-free.
+-		 */
+-		req->end_io_data = NULL;
+ 		ufshcd_add_tm_upiu_trace(hba, task_tag, "tm_complete_err");
+ 		dev_err(hba->dev, "%s: task management cmd 0x%.2x timed-out\n",
+ 				__func__, tm_function);
+@@ -6313,6 +6295,7 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
+ 	}
+ 
+ 	spin_lock_irqsave(hba->host->host_lock, flags);
++	hba->tmf_rqs[req->tag] = NULL;
+ 	__clear_bit(task_tag, &hba->outstanding_tasks);
+ 	spin_unlock_irqrestore(hba->host->host_lock, flags);
+ 
+@@ -9235,6 +9218,12 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ 		err = PTR_ERR(hba->tmf_queue);
+ 		goto free_tmf_tag_set;
+ 	}
++	hba->tmf_rqs = devm_kcalloc(hba->dev, hba->nutmrs,
++				    sizeof(*hba->tmf_rqs), GFP_KERNEL);
++	if (!hba->tmf_rqs) {
++		err = -ENOMEM;
++		goto free_tmf_queue;
++	}
+ 
+ 	/* Reset the attached device */
+ 	ufshcd_vops_device_reset(hba);
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index 1ba9c786feb6d..35dd5197ccb96 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -734,6 +734,7 @@ struct ufs_hba {
+ 
+ 	struct blk_mq_tag_set tmf_tag_set;
+ 	struct request_queue *tmf_queue;
++	struct request **tmf_rqs;
+ 
+ 	struct uic_command *active_uic_cmd;
+ 	struct mutex uic_cmd_mutex;
+diff --git a/drivers/sh/maple/maple.c b/drivers/sh/maple/maple.c
+index e5d7fb81ad665..44a931d41a132 100644
+--- a/drivers/sh/maple/maple.c
++++ b/drivers/sh/maple/maple.c
+@@ -835,8 +835,10 @@ static int __init maple_bus_init(void)
+ 
+ 	maple_queue_cache = KMEM_CACHE(maple_buffer, SLAB_HWCACHE_ALIGN);
+ 
+-	if (!maple_queue_cache)
++	if (!maple_queue_cache) {
++		retval = -ENOMEM;
+ 		goto cleanup_bothirqs;
++	}
+ 
+ 	INIT_LIST_HEAD(&maple_waitq);
+ 	INIT_LIST_HEAD(&maple_sentq);
+@@ -849,6 +851,7 @@ static int __init maple_bus_init(void)
+ 		if (!mdev[i]) {
+ 			while (i-- > 0)
+ 				maple_free_dev(mdev[i]);
++			retval = -ENOMEM;
+ 			goto cleanup_cache;
+ 		}
+ 		baseunits[i] = mdev[i];
+diff --git a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
+index b912ad2f4b720..4df6d04315e39 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
++++ b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
+@@ -6679,7 +6679,6 @@ u8 chk_bmc_sleepq_hdl(struct adapter *padapter, unsigned char *pbuf)
+ 	struct sta_info *psta_bmc;
+ 	struct list_head	*xmitframe_plist, *xmitframe_phead;
+ 	struct xmit_frame *pxmitframe = NULL;
+-	struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ 	struct sta_priv  *pstapriv = &padapter->stapriv;
+ 
+ 	/* for BC/MC Frames */
+@@ -6690,8 +6689,7 @@ u8 chk_bmc_sleepq_hdl(struct adapter *padapter, unsigned char *pbuf)
+ 	if ((pstapriv->tim_bitmap&BIT(0)) && (psta_bmc->sleepq_len > 0)) {
+ 		msleep(10);/*  10ms, ATIM(HIQ) Windows */
+ 
+-		/* spin_lock_bh(&psta_bmc->sleep_q.lock); */
+-		spin_lock_bh(&pxmitpriv->lock);
++		spin_lock_bh(&psta_bmc->sleep_q.lock);
+ 
+ 		xmitframe_phead = get_list_head(&psta_bmc->sleep_q);
+ 		xmitframe_plist = get_next(xmitframe_phead);
+@@ -6717,8 +6715,7 @@ u8 chk_bmc_sleepq_hdl(struct adapter *padapter, unsigned char *pbuf)
+ 			rtw_hal_xmitframe_enqueue(padapter, pxmitframe);
+ 		}
+ 
+-		/* spin_unlock_bh(&psta_bmc->sleep_q.lock); */
+-		spin_unlock_bh(&pxmitpriv->lock);
++		spin_unlock_bh(&psta_bmc->sleep_q.lock);
+ 
+ 		/* check hi queue and bmc_sleepq */
+ 		rtw_chk_hi_queue_cmd(padapter);
+diff --git a/drivers/staging/rtl8723bs/core/rtw_recv.c b/drivers/staging/rtl8723bs/core/rtw_recv.c
+index 6979f8dbccb84..0d47e6e121777 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_recv.c
++++ b/drivers/staging/rtl8723bs/core/rtw_recv.c
+@@ -1144,10 +1144,8 @@ sint validate_recv_ctrl_frame(struct adapter *padapter, union recv_frame *precv_
+ 		if ((psta->state&WIFI_SLEEP_STATE) && (pstapriv->sta_dz_bitmap&BIT(psta->aid))) {
+ 			struct list_head	*xmitframe_plist, *xmitframe_phead;
+ 			struct xmit_frame *pxmitframe = NULL;
+-			struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ 
+-			/* spin_lock_bh(&psta->sleep_q.lock); */
+-			spin_lock_bh(&pxmitpriv->lock);
++			spin_lock_bh(&psta->sleep_q.lock);
+ 
+ 			xmitframe_phead = get_list_head(&psta->sleep_q);
+ 			xmitframe_plist = get_next(xmitframe_phead);
+@@ -1182,12 +1180,10 @@ sint validate_recv_ctrl_frame(struct adapter *padapter, union recv_frame *precv_
+ 					update_beacon(padapter, _TIM_IE_, NULL, true);
+ 				}
+ 
+-				/* spin_unlock_bh(&psta->sleep_q.lock); */
+-				spin_unlock_bh(&pxmitpriv->lock);
++				spin_unlock_bh(&psta->sleep_q.lock);
+ 
+ 			} else {
+-				/* spin_unlock_bh(&psta->sleep_q.lock); */
+-				spin_unlock_bh(&pxmitpriv->lock);
++				spin_unlock_bh(&psta->sleep_q.lock);
+ 
+ 				/* DBG_871X("no buffered packets to xmit\n"); */
+ 				if (pstapriv->tim_bitmap&BIT(psta->aid)) {
+diff --git a/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c b/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c
+index e3f56c6cc882e..b1784b4e466f3 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c
++++ b/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c
+@@ -330,46 +330,48 @@ u32 rtw_free_stainfo(struct adapter *padapter, struct sta_info *psta)
+ 
+ 	/* list_del_init(&psta->wakeup_list); */
+ 
+-	spin_lock_bh(&pxmitpriv->lock);
+-
++	spin_lock_bh(&psta->sleep_q.lock);
+ 	rtw_free_xmitframe_queue(pxmitpriv, &psta->sleep_q);
+ 	psta->sleepq_len = 0;
++	spin_unlock_bh(&psta->sleep_q.lock);
++
++	spin_lock_bh(&pxmitpriv->lock);
+ 
+ 	/* vo */
+-	/* spin_lock_bh(&(pxmitpriv->vo_pending.lock)); */
++	spin_lock_bh(&pstaxmitpriv->vo_q.sta_pending.lock);
+ 	rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->vo_q.sta_pending);
+ 	list_del_init(&(pstaxmitpriv->vo_q.tx_pending));
+ 	phwxmit = pxmitpriv->hwxmits;
+ 	phwxmit->accnt -= pstaxmitpriv->vo_q.qcnt;
+ 	pstaxmitpriv->vo_q.qcnt = 0;
+-	/* spin_unlock_bh(&(pxmitpriv->vo_pending.lock)); */
++	spin_unlock_bh(&pstaxmitpriv->vo_q.sta_pending.lock);
+ 
+ 	/* vi */
+-	/* spin_lock_bh(&(pxmitpriv->vi_pending.lock)); */
++	spin_lock_bh(&pstaxmitpriv->vi_q.sta_pending.lock);
+ 	rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->vi_q.sta_pending);
+ 	list_del_init(&(pstaxmitpriv->vi_q.tx_pending));
+ 	phwxmit = pxmitpriv->hwxmits+1;
+ 	phwxmit->accnt -= pstaxmitpriv->vi_q.qcnt;
+ 	pstaxmitpriv->vi_q.qcnt = 0;
+-	/* spin_unlock_bh(&(pxmitpriv->vi_pending.lock)); */
++	spin_unlock_bh(&pstaxmitpriv->vi_q.sta_pending.lock);
+ 
+ 	/* be */
+-	/* spin_lock_bh(&(pxmitpriv->be_pending.lock)); */
++	spin_lock_bh(&pstaxmitpriv->be_q.sta_pending.lock);
+ 	rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->be_q.sta_pending);
+ 	list_del_init(&(pstaxmitpriv->be_q.tx_pending));
+ 	phwxmit = pxmitpriv->hwxmits+2;
+ 	phwxmit->accnt -= pstaxmitpriv->be_q.qcnt;
+ 	pstaxmitpriv->be_q.qcnt = 0;
+-	/* spin_unlock_bh(&(pxmitpriv->be_pending.lock)); */
++	spin_unlock_bh(&pstaxmitpriv->be_q.sta_pending.lock);
+ 
+ 	/* bk */
+-	/* spin_lock_bh(&(pxmitpriv->bk_pending.lock)); */
++	spin_lock_bh(&pstaxmitpriv->bk_q.sta_pending.lock);
+ 	rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->bk_q.sta_pending);
+ 	list_del_init(&(pstaxmitpriv->bk_q.tx_pending));
+ 	phwxmit = pxmitpriv->hwxmits+3;
+ 	phwxmit->accnt -= pstaxmitpriv->bk_q.qcnt;
+ 	pstaxmitpriv->bk_q.qcnt = 0;
+-	/* spin_unlock_bh(&(pxmitpriv->bk_pending.lock)); */
++	spin_unlock_bh(&pstaxmitpriv->bk_q.sta_pending.lock);
+ 
+ 	spin_unlock_bh(&pxmitpriv->lock);
+ 
+diff --git a/drivers/staging/rtl8723bs/core/rtw_xmit.c b/drivers/staging/rtl8723bs/core/rtw_xmit.c
+index 6ecaff9728fd4..d78cff7ed6a01 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_xmit.c
++++ b/drivers/staging/rtl8723bs/core/rtw_xmit.c
+@@ -1871,8 +1871,6 @@ void rtw_free_xmitframe_queue(struct xmit_priv *pxmitpriv, struct __queue *pfram
+ 	struct list_head	*plist, *phead;
+ 	struct	xmit_frame	*pxmitframe;
+ 
+-	spin_lock_bh(&pframequeue->lock);
+-
+ 	phead = get_list_head(pframequeue);
+ 	plist = get_next(phead);
+ 
+@@ -1883,7 +1881,6 @@ void rtw_free_xmitframe_queue(struct xmit_priv *pxmitpriv, struct __queue *pfram
+ 
+ 		rtw_free_xmitframe(pxmitpriv, pxmitframe);
+ 	}
+-	spin_unlock_bh(&pframequeue->lock);
+ }
+ 
+ s32 rtw_xmitframe_enqueue(struct adapter *padapter, struct xmit_frame *pxmitframe)
+@@ -1946,6 +1943,7 @@ s32 rtw_xmit_classifier(struct adapter *padapter, struct xmit_frame *pxmitframe)
+ 	struct sta_info *psta;
+ 	struct tx_servq	*ptxservq;
+ 	struct pkt_attrib	*pattrib = &pxmitframe->attrib;
++	struct xmit_priv *xmit_priv = &padapter->xmitpriv;
+ 	struct hw_xmit	*phwxmits =  padapter->xmitpriv.hwxmits;
+ 	sint res = _SUCCESS;
+ 
+@@ -1974,12 +1972,14 @@ s32 rtw_xmit_classifier(struct adapter *padapter, struct xmit_frame *pxmitframe)
+ 
+ 	ptxservq = rtw_get_sta_pending(padapter, psta, pattrib->priority, (u8 *)(&ac_index));
+ 
++	spin_lock_bh(&xmit_priv->lock);
+ 	if (list_empty(&ptxservq->tx_pending))
+ 		list_add_tail(&ptxservq->tx_pending, get_list_head(phwxmits[ac_index].sta_queue));
+ 
+ 	list_add_tail(&pxmitframe->list, get_list_head(&ptxservq->sta_pending));
+ 	ptxservq->qcnt++;
+ 	phwxmits[ac_index].accnt++;
++	spin_unlock_bh(&xmit_priv->lock);
+ 
+ exit:
+ 
+@@ -2397,11 +2397,10 @@ void wakeup_sta_to_xmit(struct adapter *padapter, struct sta_info *psta)
+ 	struct list_head	*xmitframe_plist, *xmitframe_phead;
+ 	struct xmit_frame *pxmitframe = NULL;
+ 	struct sta_priv *pstapriv = &padapter->stapriv;
+-	struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ 
+ 	psta_bmc = rtw_get_bcmc_stainfo(padapter);
+ 
+-	spin_lock_bh(&pxmitpriv->lock);
++	spin_lock_bh(&psta->sleep_q.lock);
+ 
+ 	xmitframe_phead = get_list_head(&psta->sleep_q);
+ 	xmitframe_plist = get_next(xmitframe_phead);
+@@ -2509,7 +2508,7 @@ void wakeup_sta_to_xmit(struct adapter *padapter, struct sta_info *psta)
+ 
+ _exit:
+ 
+-	spin_unlock_bh(&pxmitpriv->lock);
++	spin_unlock_bh(&psta->sleep_q.lock);
+ 
+ 	if (update_mask)
+ 		update_beacon(padapter, _TIM_IE_, NULL, true);
+@@ -2521,9 +2520,8 @@ void xmit_delivery_enabled_frames(struct adapter *padapter, struct sta_info *pst
+ 	struct list_head	*xmitframe_plist, *xmitframe_phead;
+ 	struct xmit_frame *pxmitframe = NULL;
+ 	struct sta_priv *pstapriv = &padapter->stapriv;
+-	struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ 
+-	spin_lock_bh(&pxmitpriv->lock);
++	spin_lock_bh(&psta->sleep_q.lock);
+ 
+ 	xmitframe_phead = get_list_head(&psta->sleep_q);
+ 	xmitframe_plist = get_next(xmitframe_phead);
+@@ -2579,7 +2577,7 @@ void xmit_delivery_enabled_frames(struct adapter *padapter, struct sta_info *pst
+ 		}
+ 	}
+ 
+-	spin_unlock_bh(&pxmitpriv->lock);
++	spin_unlock_bh(&psta->sleep_q.lock);
+ }
+ 
+ void enqueue_pending_xmitbuf(
+diff --git a/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c b/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c
+index 44799c4a9f35b..ce5bf2861d0c1 100644
+--- a/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c
++++ b/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c
+@@ -572,9 +572,7 @@ s32 rtl8723bs_hal_xmit(
+ 			rtw_issue_addbareq_cmd(padapter, pxmitframe);
+ 	}
+ 
+-	spin_lock_bh(&pxmitpriv->lock);
+ 	err = rtw_xmitframe_enqueue(padapter, pxmitframe);
+-	spin_unlock_bh(&pxmitpriv->lock);
+ 	if (err != _SUCCESS) {
+ 		RT_TRACE(_module_hal_xmit_c_, _drv_err_, ("rtl8723bs_hal_xmit: enqueue xmitframe fail\n"));
+ 		rtw_free_xmitframe(pxmitpriv, pxmitframe);
+diff --git a/drivers/staging/wfx/bus_sdio.c b/drivers/staging/wfx/bus_sdio.c
+index e06d7e1ebe9c3..61b8cc05f2935 100644
+--- a/drivers/staging/wfx/bus_sdio.c
++++ b/drivers/staging/wfx/bus_sdio.c
+@@ -120,19 +120,22 @@ static int wfx_sdio_irq_subscribe(void *priv)
+ 		return ret;
+ 	}
+ 
++	flags = irq_get_trigger_type(bus->of_irq);
++	if (!flags)
++		flags = IRQF_TRIGGER_HIGH;
++	flags |= IRQF_ONESHOT;
++	ret = devm_request_threaded_irq(&bus->func->dev, bus->of_irq, NULL,
++					wfx_sdio_irq_handler_ext, flags,
++					"wfx", bus);
++	if (ret)
++		return ret;
+ 	sdio_claim_host(bus->func);
+ 	cccr = sdio_f0_readb(bus->func, SDIO_CCCR_IENx, NULL);
+ 	cccr |= BIT(0);
+ 	cccr |= BIT(bus->func->num);
+ 	sdio_f0_writeb(bus->func, cccr, SDIO_CCCR_IENx, NULL);
+ 	sdio_release_host(bus->func);
+-	flags = irq_get_trigger_type(bus->of_irq);
+-	if (!flags)
+-		flags = IRQF_TRIGGER_HIGH;
+-	flags |= IRQF_ONESHOT;
+-	return devm_request_threaded_irq(&bus->func->dev, bus->of_irq, NULL,
+-					 wfx_sdio_irq_handler_ext, flags,
+-					 "wfx", bus);
++	return 0;
+ }
+ 
+ static int wfx_sdio_irq_unsubscribe(void *priv)
+diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
+index 6b72afee2f8b7..b240bd1ccb71d 100644
+--- a/drivers/target/target_core_alua.c
++++ b/drivers/target/target_core_alua.c
+@@ -1702,7 +1702,6 @@ int core_alua_set_tg_pt_gp_id(
+ 		pr_err("Maximum ALUA alua_tg_pt_gps_count:"
+ 			" 0x0000ffff reached\n");
+ 		spin_unlock(&dev->t10_alua.tg_pt_gps_lock);
+-		kmem_cache_free(t10_alua_tg_pt_gp_cache, tg_pt_gp);
+ 		return -ENOSPC;
+ 	}
+ again:
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index 405d82d447176..109f019d21480 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -758,6 +758,8 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
+ 	INIT_LIST_HEAD(&dev->t10_alua.lba_map_list);
+ 	spin_lock_init(&dev->t10_alua.lba_map_lock);
+ 
++	INIT_WORK(&dev->delayed_cmd_work, target_do_delayed_work);
++
+ 	dev->t10_wwn.t10_dev = dev;
+ 	dev->t10_alua.t10_dev = dev;
+ 
+diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h
+index e7b3c6e5d5744..e4f072a680d41 100644
+--- a/drivers/target/target_core_internal.h
++++ b/drivers/target/target_core_internal.h
+@@ -150,6 +150,7 @@ int	transport_dump_vpd_ident(struct t10_vpd *, unsigned char *, int);
+ void	transport_clear_lun_ref(struct se_lun *);
+ sense_reason_t	target_cmd_size_check(struct se_cmd *cmd, unsigned int size);
+ void	target_qf_do_work(struct work_struct *work);
++void	target_do_delayed_work(struct work_struct *work);
+ bool	target_check_wce(struct se_device *dev);
+ bool	target_check_fua(struct se_device *dev);
+ void	__target_execute_cmd(struct se_cmd *, bool);
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 61b79804d462c..bca3a32a4bfb7 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -2065,32 +2065,35 @@ static bool target_handle_task_attr(struct se_cmd *cmd)
+ 	 */
+ 	switch (cmd->sam_task_attr) {
+ 	case TCM_HEAD_TAG:
++		atomic_inc_mb(&dev->non_ordered);
+ 		pr_debug("Added HEAD_OF_QUEUE for CDB: 0x%02x\n",
+ 			 cmd->t_task_cdb[0]);
+ 		return false;
+ 	case TCM_ORDERED_TAG:
+-		atomic_inc_mb(&dev->dev_ordered_sync);
++		atomic_inc_mb(&dev->delayed_cmd_count);
+ 
+ 		pr_debug("Added ORDERED for CDB: 0x%02x to ordered list\n",
+ 			 cmd->t_task_cdb[0]);
+-
+-		/*
+-		 * Execute an ORDERED command if no other older commands
+-		 * exist that need to be completed first.
+-		 */
+-		if (!atomic_read(&dev->simple_cmds))
+-			return false;
+ 		break;
+ 	default:
+ 		/*
+ 		 * For SIMPLE and UNTAGGED Task Attribute commands
+ 		 */
+-		atomic_inc_mb(&dev->simple_cmds);
++		atomic_inc_mb(&dev->non_ordered);
++
++		if (atomic_read(&dev->delayed_cmd_count) == 0)
++			return false;
+ 		break;
+ 	}
+ 
+-	if (atomic_read(&dev->dev_ordered_sync) == 0)
+-		return false;
++	if (cmd->sam_task_attr != TCM_ORDERED_TAG) {
++		atomic_inc_mb(&dev->delayed_cmd_count);
++		/*
++		 * We will account for this when we dequeue from the delayed
++		 * list.
++		 */
++		atomic_dec_mb(&dev->non_ordered);
++	}
+ 
+ 	spin_lock(&dev->delayed_cmd_lock);
+ 	list_add_tail(&cmd->se_delayed_node, &dev->delayed_cmd_list);
+@@ -2098,6 +2101,12 @@ static bool target_handle_task_attr(struct se_cmd *cmd)
+ 
+ 	pr_debug("Added CDB: 0x%02x Task Attr: 0x%02x to delayed CMD listn",
+ 		cmd->t_task_cdb[0], cmd->sam_task_attr);
++	/*
++	 * We may have no non ordered cmds when this function started or we
++	 * could have raced with the last simple/head cmd completing, so kick
++	 * the delayed handler here.
++	 */
++	schedule_work(&dev->delayed_cmd_work);
+ 	return true;
+ }
+ 
+@@ -2135,29 +2144,48 @@ EXPORT_SYMBOL(target_execute_cmd);
+  * Process all commands up to the last received ORDERED task attribute which
+  * requires another blocking boundary
+  */
+-static void target_restart_delayed_cmds(struct se_device *dev)
++void target_do_delayed_work(struct work_struct *work)
+ {
+-	for (;;) {
++	struct se_device *dev = container_of(work, struct se_device,
++					     delayed_cmd_work);
++
++	spin_lock(&dev->delayed_cmd_lock);
++	while (!dev->ordered_sync_in_progress) {
+ 		struct se_cmd *cmd;
+ 
+-		spin_lock(&dev->delayed_cmd_lock);
+-		if (list_empty(&dev->delayed_cmd_list)) {
+-			spin_unlock(&dev->delayed_cmd_lock);
++		if (list_empty(&dev->delayed_cmd_list))
+ 			break;
+-		}
+ 
+ 		cmd = list_entry(dev->delayed_cmd_list.next,
+ 				 struct se_cmd, se_delayed_node);
++
++		if (cmd->sam_task_attr == TCM_ORDERED_TAG) {
++			/*
++			 * Check if we started with:
++			 * [ordered] [simple] [ordered]
++			 * and we are now at the last ordered so we have to wait
++			 * for the simple cmd.
++			 */
++			if (atomic_read(&dev->non_ordered) > 0)
++				break;
++
++			dev->ordered_sync_in_progress = true;
++		}
++
+ 		list_del(&cmd->se_delayed_node);
++		atomic_dec_mb(&dev->delayed_cmd_count);
+ 		spin_unlock(&dev->delayed_cmd_lock);
+ 
++		if (cmd->sam_task_attr != TCM_ORDERED_TAG)
++			atomic_inc_mb(&dev->non_ordered);
++
+ 		cmd->transport_state |= CMD_T_SENT;
+ 
+ 		__target_execute_cmd(cmd, true);
+ 
+-		if (cmd->sam_task_attr == TCM_ORDERED_TAG)
+-			break;
++		spin_lock(&dev->delayed_cmd_lock);
+ 	}
++	spin_unlock(&dev->delayed_cmd_lock);
+ }
+ 
+ /*
+@@ -2175,14 +2203,17 @@ static void transport_complete_task_attr(struct se_cmd *cmd)
+ 		goto restart;
+ 
+ 	if (cmd->sam_task_attr == TCM_SIMPLE_TAG) {
+-		atomic_dec_mb(&dev->simple_cmds);
++		atomic_dec_mb(&dev->non_ordered);
+ 		dev->dev_cur_ordered_id++;
+ 	} else if (cmd->sam_task_attr == TCM_HEAD_TAG) {
++		atomic_dec_mb(&dev->non_ordered);
+ 		dev->dev_cur_ordered_id++;
+ 		pr_debug("Incremented dev_cur_ordered_id: %u for HEAD_OF_QUEUE\n",
+ 			 dev->dev_cur_ordered_id);
+ 	} else if (cmd->sam_task_attr == TCM_ORDERED_TAG) {
+-		atomic_dec_mb(&dev->dev_ordered_sync);
++		spin_lock(&dev->delayed_cmd_lock);
++		dev->ordered_sync_in_progress = false;
++		spin_unlock(&dev->delayed_cmd_lock);
+ 
+ 		dev->dev_cur_ordered_id++;
+ 		pr_debug("Incremented dev_cur_ordered_id: %u for ORDERED\n",
+@@ -2191,7 +2222,8 @@ static void transport_complete_task_attr(struct se_cmd *cmd)
+ 	cmd->se_cmd_flags &= ~SCF_TASK_ATTR_SET;
+ 
+ restart:
+-	target_restart_delayed_cmds(dev);
++	if (atomic_read(&dev->delayed_cmd_count) > 0)
++		schedule_work(&dev->delayed_cmd_work);
+ }
+ 
+ static void transport_complete_qf(struct se_cmd *cmd)
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index bd2d91546e327..0fc473321d3e3 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -534,6 +534,9 @@ static void flush_to_ldisc(struct work_struct *work)
+ 		if (!count)
+ 			break;
+ 		head->read += count;
++
++		if (need_resched())
++			cond_resched();
+ 	}
+ 
+ 	mutex_unlock(&buf->lock);
+diff --git a/drivers/usb/host/max3421-hcd.c b/drivers/usb/host/max3421-hcd.c
+index c86d413226eb9..b875da01c5309 100644
+--- a/drivers/usb/host/max3421-hcd.c
++++ b/drivers/usb/host/max3421-hcd.c
+@@ -125,8 +125,6 @@ struct max3421_hcd {
+ 
+ 	struct task_struct *spi_thread;
+ 
+-	struct max3421_hcd *next;
+-
+ 	enum max3421_rh_state rh_state;
+ 	/* lower 16 bits contain port status, upper 16 bits the change mask: */
+ 	u32 port_status;
+@@ -174,8 +172,6 @@ struct max3421_ep {
+ 	u8 retransmit;			/* packet needs retransmission */
+ };
+ 
+-static struct max3421_hcd *max3421_hcd_list;
+-
+ #define MAX3421_FIFO_SIZE	64
+ 
+ #define MAX3421_SPI_DIR_RD	0	/* read register from MAX3421 */
+@@ -1882,9 +1878,8 @@ max3421_probe(struct spi_device *spi)
+ 	}
+ 	set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+ 	max3421_hcd = hcd_to_max3421(hcd);
+-	max3421_hcd->next = max3421_hcd_list;
+-	max3421_hcd_list = max3421_hcd;
+ 	INIT_LIST_HEAD(&max3421_hcd->ep_list);
++	spi_set_drvdata(spi, max3421_hcd);
+ 
+ 	max3421_hcd->tx = kmalloc(sizeof(*max3421_hcd->tx), GFP_KERNEL);
+ 	if (!max3421_hcd->tx)
+@@ -1934,28 +1929,18 @@ error:
+ static int
+ max3421_remove(struct spi_device *spi)
+ {
+-	struct max3421_hcd *max3421_hcd = NULL, **prev;
+-	struct usb_hcd *hcd = NULL;
++	struct max3421_hcd *max3421_hcd;
++	struct usb_hcd *hcd;
+ 	unsigned long flags;
+ 
+-	for (prev = &max3421_hcd_list; *prev; prev = &(*prev)->next) {
+-		max3421_hcd = *prev;
+-		hcd = max3421_to_hcd(max3421_hcd);
+-		if (hcd->self.controller == &spi->dev)
+-			break;
+-	}
+-	if (!max3421_hcd) {
+-		dev_err(&spi->dev, "no MAX3421 HCD found for SPI device %p\n",
+-			spi);
+-		return -ENODEV;
+-	}
++	max3421_hcd = spi_get_drvdata(spi);
++	hcd = max3421_to_hcd(max3421_hcd);
+ 
+ 	usb_remove_hcd(hcd);
+ 
+ 	spin_lock_irqsave(&max3421_hcd->lock, flags);
+ 
+ 	kthread_stop(max3421_hcd->spi_thread);
+-	*prev = max3421_hcd->next;
+ 
+ 	spin_unlock_irqrestore(&max3421_hcd->lock, flags);
+ 
+diff --git a/drivers/usb/host/ohci-tmio.c b/drivers/usb/host/ohci-tmio.c
+index 08ec2ab0d95a5..3f3d62dc06746 100644
+--- a/drivers/usb/host/ohci-tmio.c
++++ b/drivers/usb/host/ohci-tmio.c
+@@ -199,7 +199,7 @@ static int ohci_hcd_tmio_drv_probe(struct platform_device *dev)
+ 	if (usb_disabled())
+ 		return -ENODEV;
+ 
+-	if (!cell)
++	if (!cell || !regs || !config || !sram)
+ 		return -EINVAL;
+ 
+ 	if (irq < 0)
+diff --git a/drivers/usb/musb/tusb6010.c b/drivers/usb/musb/tusb6010.c
+index 0c2afed4131bc..038307f661985 100644
+--- a/drivers/usb/musb/tusb6010.c
++++ b/drivers/usb/musb/tusb6010.c
+@@ -1103,6 +1103,11 @@ static int tusb_musb_init(struct musb *musb)
+ 
+ 	/* dma address for async dma */
+ 	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!mem) {
++		pr_debug("no async dma resource?\n");
++		ret = -ENODEV;
++		goto done;
++	}
+ 	musb->async = mem->start;
+ 
+ 	/* dma address for sync dma */
+diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
+index 30bfc314b743c..6cb5c8e2c8535 100644
+--- a/drivers/usb/typec/tps6598x.c
++++ b/drivers/usb/typec/tps6598x.c
+@@ -109,7 +109,7 @@ tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len)
+ 	u8 data[TPS_MAX_LEN + 1];
+ 	int ret;
+ 
+-	if (WARN_ON(len + 1 > sizeof(data)))
++	if (len + 1 > sizeof(data))
+ 		return -EINVAL;
+ 
+ 	if (!tps->i2c_protocol)
+diff --git a/drivers/video/console/sticon.c b/drivers/video/console/sticon.c
+index 1b451165311c9..40496e9e9b438 100644
+--- a/drivers/video/console/sticon.c
++++ b/drivers/video/console/sticon.c
+@@ -332,13 +332,13 @@ static u8 sticon_build_attr(struct vc_data *conp, u8 color,
+ 			    bool blink, bool underline, bool reverse,
+ 			    bool italic)
+ {
+-    u8 attr = ((color & 0x70) >> 1) | ((color & 7));
++	u8 fg = color & 7;
++	u8 bg = (color & 0x70) >> 4;
+ 
+-    if (reverse) {
+-	color = ((color >> 3) & 0x7) | ((color & 0x7) << 3);
+-    }
+-
+-    return attr;
++	if (reverse)
++		return (fg << 3) | bg;
++	else
++		return (bg << 3) | fg;
+ }
+ 
+ static void sticon_invert_region(struct vc_data *conp, u16 *p, int count)
+diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c
+index 309516e6a9682..43c89952b7d25 100644
+--- a/fs/btrfs/async-thread.c
++++ b/fs/btrfs/async-thread.c
+@@ -234,6 +234,13 @@ static void run_ordered_work(struct __btrfs_workqueue *wq,
+ 				  ordered_list);
+ 		if (!test_bit(WORK_DONE_BIT, &work->flags))
+ 			break;
++		/*
++		 * Orders all subsequent loads after reading WORK_DONE_BIT,
++		 * paired with the smp_mb__before_atomic in btrfs_work_helper
++		 * this guarantees that the ordered function will see all
++		 * updates from ordinary work function.
++		 */
++		smp_rmb();
+ 
+ 		/*
+ 		 * we are going to call the ordered done function, but
+@@ -317,6 +324,13 @@ static void btrfs_work_helper(struct work_struct *normal_work)
+ 	thresh_exec_hook(wq);
+ 	work->func(work);
+ 	if (need_order) {
++		/*
++		 * Ensures all memory accesses done in the work function are
++		 * ordered before setting the WORK_DONE_BIT. Ensuring the thread
++		 * which is going to executed the ordered work sees them.
++		 * Pairs with the smp_rmb in run_ordered_work.
++		 */
++		smp_mb__before_atomic();
+ 		set_bit(WORK_DONE_BIT, &work->flags);
+ 		run_ordered_work(wq, work);
+ 	} else {
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index d9e582e40b5b7..e462de9917237 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -14,6 +14,7 @@
+ #include <linux/semaphore.h>
+ #include <linux/uuid.h>
+ #include <linux/list_sort.h>
++#include <linux/namei.h>
+ #include "misc.h"
+ #include "ctree.h"
+ #include "extent_map.h"
+@@ -1871,18 +1872,22 @@ out:
+ /*
+  * Function to update ctime/mtime for a given device path.
+  * Mainly used for ctime/mtime based probe like libblkid.
++ *
++ * We don't care about errors here, this is just to be kind to userspace.
+  */
+-static void update_dev_time(struct block_device *bdev)
++static void update_dev_time(const char *device_path)
+ {
+-	struct inode *inode = bdev->bd_inode;
++	struct path path;
+ 	struct timespec64 now;
++	int ret;
+ 
+-	/* Shouldn't happen but just in case. */
+-	if (!inode)
++	ret = kern_path(device_path, LOOKUP_FOLLOW, &path);
++	if (ret)
+ 		return;
+ 
+-	now = current_time(inode);
+-	generic_update_time(inode, &now, S_MTIME | S_CTIME);
++	now = current_time(d_inode(path.dentry));
++	inode_update_time(d_inode(path.dentry), &now, S_MTIME | S_CTIME);
++	path_put(&path);
+ }
+ 
+ static int btrfs_rm_dev_item(struct btrfs_device *device)
+@@ -2057,7 +2062,7 @@ void btrfs_scratch_superblocks(struct btrfs_fs_info *fs_info,
+ 	btrfs_kobject_uevent(bdev, KOBJ_CHANGE);
+ 
+ 	/* Update ctime/mtime for device path for libblkid */
+-	update_dev_time(bdev);
++	update_dev_time(device_path);
+ }
+ 
+ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+@@ -2700,7 +2705,7 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ 	btrfs_forget_devices(device_path);
+ 
+ 	/* Update ctime/mtime for blkid or udev */
+-	update_dev_time(bdev);
++	update_dev_time(device_path);
+ 
+ 	return ret;
+ 
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 2d7799bd30b10..bc488a7d01903 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3908,8 +3908,7 @@ static inline bool f2fs_disable_compressed_file(struct inode *inode)
+ 
+ 	if (!f2fs_compressed_file(inode))
+ 		return true;
+-	if (S_ISREG(inode->i_mode) &&
+-		(get_dirty_pages(inode) || atomic_read(&fi->i_compr_blocks)))
++	if (S_ISREG(inode->i_mode) && F2FS_HAS_BLOCKS(inode))
+ 		return false;
+ 
+ 	fi->i_flags &= ~F2FS_COMPR_FL;
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index de543168b3708..b7287b722e9e1 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1020,7 +1020,7 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
+ 	/* Not pass down write hints if the number of active logs is lesser
+ 	 * than NR_CURSEG_PERSIST_TYPE.
+ 	 */
+-	if (F2FS_OPTION(sbi).active_logs != NR_CURSEG_TYPE)
++	if (F2FS_OPTION(sbi).active_logs != NR_CURSEG_PERSIST_TYPE)
+ 		F2FS_OPTION(sbi).whint_mode = WHINT_MODE_OFF;
+ 	return 0;
+ }
+@@ -3081,7 +3081,7 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 		NR_CURSEG_PERSIST_TYPE + nat_bits_blocks >= blocks_per_seg)) {
+ 		f2fs_warn(sbi, "Insane cp_payload: %u, nat_bits_blocks: %u)",
+ 			  cp_payload, nat_bits_blocks);
+-		return -EFSCORRUPTED;
++		return 1;
+ 	}
+ 
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+diff --git a/fs/inode.c b/fs/inode.c
+index 5eea9912a0b9d..638d5d5bf42df 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -1772,12 +1772,13 @@ EXPORT_SYMBOL(generic_update_time);
+  * This does the actual work of updating an inodes time or version.  Must have
+  * had called mnt_want_write() before calling this.
+  */
+-static int update_time(struct inode *inode, struct timespec64 *time, int flags)
++int inode_update_time(struct inode *inode, struct timespec64 *time, int flags)
+ {
+ 	if (inode->i_op->update_time)
+ 		return inode->i_op->update_time(inode, time, flags);
+ 	return generic_update_time(inode, time, flags);
+ }
++EXPORT_SYMBOL(inode_update_time);
+ 
+ /**
+  *	touch_atime	-	update the access time
+@@ -1847,7 +1848,7 @@ void touch_atime(const struct path *path)
+ 	 * of the fs read only, e.g. subvolumes in Btrfs.
+ 	 */
+ 	now = current_time(inode);
+-	update_time(inode, &now, S_ATIME);
++	inode_update_time(inode, &now, S_ATIME);
+ 	__mnt_drop_write(mnt);
+ skip_update:
+ 	sb_end_write(inode->i_sb);
+@@ -1991,7 +1992,7 @@ int file_update_time(struct file *file)
+ 	if (__mnt_want_write_file(file))
+ 		return 0;
+ 
+-	ret = update_time(inode, &now, sync_it);
++	ret = inode_update_time(inode, &now, sync_it);
+ 	__mnt_drop_write_file(file);
+ 
+ 	return ret;
+diff --git a/fs/udf/dir.c b/fs/udf/dir.c
+index c19dba45aa209..d0f92a52e3bab 100644
+--- a/fs/udf/dir.c
++++ b/fs/udf/dir.c
+@@ -31,6 +31,7 @@
+ #include <linux/mm.h>
+ #include <linux/slab.h>
+ #include <linux/bio.h>
++#include <linux/iversion.h>
+ 
+ #include "udf_i.h"
+ #include "udf_sb.h"
+@@ -44,7 +45,7 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)
+ 	struct fileIdentDesc *fi = NULL;
+ 	struct fileIdentDesc cfi;
+ 	udf_pblk_t block, iblock;
+-	loff_t nf_pos;
++	loff_t nf_pos, emit_pos = 0;
+ 	int flen;
+ 	unsigned char *fname = NULL, *copy_name = NULL;
+ 	unsigned char *nameptr;
+@@ -58,6 +59,7 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)
+ 	int i, num, ret = 0;
+ 	struct extent_position epos = { NULL, 0, {0, 0} };
+ 	struct super_block *sb = dir->i_sb;
++	bool pos_valid = false;
+ 
+ 	if (ctx->pos == 0) {
+ 		if (!dir_emit_dot(file, ctx))
+@@ -68,6 +70,21 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)
+ 	if (nf_pos >= size)
+ 		goto out;
+ 
++	/*
++	 * Something changed since last readdir (either lseek was called or dir
++	 * changed)?  We need to verify the position correctly points at the
++	 * beginning of some dir entry so that the directory parsing code does
++	 * not get confused. Since UDF does not have any reliable way of
++	 * identifying beginning of dir entry (names are under user control),
++	 * we need to scan the directory from the beginning.
++	 */
++	if (!inode_eq_iversion(dir, file->f_version)) {
++		emit_pos = nf_pos;
++		nf_pos = 0;
++	} else {
++		pos_valid = true;
++	}
++
+ 	fname = kmalloc(UDF_NAME_LEN, GFP_NOFS);
+ 	if (!fname) {
+ 		ret = -ENOMEM;
+@@ -123,13 +140,21 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)
+ 
+ 	while (nf_pos < size) {
+ 		struct kernel_lb_addr tloc;
++		loff_t cur_pos = nf_pos;
+ 
+-		ctx->pos = (nf_pos >> 2) + 1;
++		/* Update file position only if we got past the current one */
++		if (nf_pos >= emit_pos) {
++			ctx->pos = (nf_pos >> 2) + 1;
++			pos_valid = true;
++		}
+ 
+ 		fi = udf_fileident_read(dir, &nf_pos, &fibh, &cfi, &epos, &eloc,
+ 					&elen, &offset);
+ 		if (!fi)
+ 			goto out;
++		/* Still not at offset where user asked us to read from? */
++		if (cur_pos < emit_pos)
++			continue;
+ 
+ 		liu = le16_to_cpu(cfi.lengthOfImpUse);
+ 		lfi = cfi.lengthFileIdent;
+@@ -187,8 +212,11 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)
+ 	} /* end while */
+ 
+ 	ctx->pos = (nf_pos >> 2) + 1;
++	pos_valid = true;
+ 
+ out:
++	if (pos_valid)
++		file->f_version = inode_query_iversion(dir);
+ 	if (fibh.sbh != fibh.ebh)
+ 		brelse(fibh.ebh);
+ 	brelse(fibh.sbh);
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index f4a72ff8cf959..9f3aced46c68f 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -30,6 +30,7 @@
+ #include <linux/sched.h>
+ #include <linux/crc-itu-t.h>
+ #include <linux/exportfs.h>
++#include <linux/iversion.h>
+ 
+ static inline int udf_match(int len1, const unsigned char *name1, int len2,
+ 			    const unsigned char *name2)
+@@ -135,6 +136,8 @@ int udf_write_fi(struct inode *inode, struct fileIdentDesc *cfi,
+ 			mark_buffer_dirty_inode(fibh->ebh, inode);
+ 		mark_buffer_dirty_inode(fibh->sbh, inode);
+ 	}
++	inode_inc_iversion(inode);
++
+ 	return 0;
+ }
+ 
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 5d2b820ef303a..3448098e54768 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -57,6 +57,7 @@
+ #include <linux/crc-itu-t.h>
+ #include <linux/log2.h>
+ #include <asm/byteorder.h>
++#include <linux/iversion.h>
+ 
+ #include "udf_sb.h"
+ #include "udf_i.h"
+@@ -149,6 +150,7 @@ static struct inode *udf_alloc_inode(struct super_block *sb)
+ 	init_rwsem(&ei->i_data_sem);
+ 	ei->cached_extent.lstart = -1;
+ 	spin_lock_init(&ei->i_extent_cache_lock);
++	inode_set_iversion(&ei->vfs_inode, 1);
+ 
+ 	return &ei->vfs_inode;
+ }
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 43bb6a51e42d9..42d246a942283 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2214,6 +2214,8 @@ enum file_time_flags {
+ 
+ extern bool atime_needs_update(const struct path *, struct inode *);
+ extern void touch_atime(const struct path *);
++int inode_update_time(struct inode *inode, struct timespec64 *time, int flags);
++
+ static inline void file_accessed(struct file *file)
+ {
+ 	if (!(file->f_flags & O_NOATIME))
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index c095e713cf08f..ce14fb2772b5b 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -607,7 +607,6 @@ struct swevent_hlist {
+ #define PERF_ATTACH_TASK_DATA	0x08
+ #define PERF_ATTACH_ITRACE	0x10
+ #define PERF_ATTACH_SCHED_CB	0x20
+-#define PERF_ATTACH_CHILD	0x40
+ 
+ struct perf_cgroup;
+ struct perf_buffer;
+diff --git a/include/linux/platform_data/ti-sysc.h b/include/linux/platform_data/ti-sysc.h
+index 9837fb011f2fb..989aa30c598dc 100644
+--- a/include/linux/platform_data/ti-sysc.h
++++ b/include/linux/platform_data/ti-sysc.h
+@@ -50,6 +50,7 @@ struct sysc_regbits {
+ 	s8 emufree_shift;
+ };
+ 
++#define SYSC_QUIRK_REINIT_ON_CTX_LOST	BIT(28)
+ #define SYSC_QUIRK_REINIT_ON_RESUME	BIT(27)
+ #define SYSC_QUIRK_GPMC_DEBUG		BIT(26)
+ #define SYSC_MODULE_QUIRK_ENA_RESETDONE	BIT(25)
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index d321fe5ad1a14..c57b79301a75e 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -571,7 +571,7 @@ struct trace_event_file {
+ 
+ #define PERF_MAX_TRACE_SIZE	2048
+ 
+-#define MAX_FILTER_STR_VAL	256	/* Should handle KSYM_SYMBOL_LEN */
++#define MAX_FILTER_STR_VAL	256U	/* Should handle KSYM_SYMBOL_LEN */
+ 
+ enum event_trigger_type {
+ 	ETT_NONE		= (0),
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index b465f8f3e554f..04e87f4b9417c 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -120,10 +120,15 @@ retry:
+ 
+ 	if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
+ 		u16 gso_size = __virtio16_to_cpu(little_endian, hdr->gso_size);
++		unsigned int nh_off = p_off;
+ 		struct skb_shared_info *shinfo = skb_shinfo(skb);
+ 
++		/* UFO may not include transport header in gso_size. */
++		if (gso_type & SKB_GSO_UDP)
++			nh_off -= thlen;
++
+ 		/* Too small packets are not really GSO ones. */
+-		if (skb->len - p_off > gso_size) {
++		if (skb->len - nh_off > gso_size) {
+ 			shinfo->gso_size = gso_size;
+ 			shinfo->gso_type = gso_type;
+ 
+diff --git a/include/net/nfc/nci_core.h b/include/net/nfc/nci_core.h
+index 33979017b7824..004e49f748419 100644
+--- a/include/net/nfc/nci_core.h
++++ b/include/net/nfc/nci_core.h
+@@ -30,6 +30,7 @@ enum nci_flag {
+ 	NCI_UP,
+ 	NCI_DATA_EXCHANGE,
+ 	NCI_DATA_EXCHANGE_TO,
++	NCI_UNREG,
+ };
+ 
+ /* NCI device states */
+diff --git a/include/rdma/rdma_netlink.h b/include/rdma/rdma_netlink.h
+index 2758d9df71ee9..c2a79aeee113c 100644
+--- a/include/rdma/rdma_netlink.h
++++ b/include/rdma/rdma_netlink.h
+@@ -30,7 +30,7 @@ enum rdma_nl_flags {
+  * constant as well and the compiler checks they are the same.
+  */
+ #define MODULE_ALIAS_RDMA_NETLINK(_index, _val)                                \
+-	static inline void __chk_##_index(void)                                \
++	static inline void __maybe_unused __chk_##_index(void)                 \
+ 	{                                                                      \
+ 		BUILD_BUG_ON(_index != _val);                                  \
+ 	}                                                                      \
+diff --git a/include/sound/hdaudio_ext.h b/include/sound/hdaudio_ext.h
+index 7abf74c1c4740..75048ea178f62 100644
+--- a/include/sound/hdaudio_ext.h
++++ b/include/sound/hdaudio_ext.h
+@@ -88,6 +88,8 @@ struct hdac_ext_stream *snd_hdac_ext_stream_assign(struct hdac_bus *bus,
+ 					   struct snd_pcm_substream *substream,
+ 					   int type);
+ void snd_hdac_ext_stream_release(struct hdac_ext_stream *azx_dev, int type);
++void snd_hdac_ext_stream_decouple_locked(struct hdac_bus *bus,
++				  struct hdac_ext_stream *azx_dev, bool decouple);
+ void snd_hdac_ext_stream_decouple(struct hdac_bus *bus,
+ 				struct hdac_ext_stream *azx_dev, bool decouple);
+ void snd_hdac_ext_stop_streams(struct hdac_bus *bus);
+diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
+index 549947d407cfd..18a5dcd275f88 100644
+--- a/include/target/target_core_base.h
++++ b/include/target/target_core_base.h
+@@ -788,8 +788,9 @@ struct se_device {
+ 	atomic_long_t		read_bytes;
+ 	atomic_long_t		write_bytes;
+ 	/* Active commands on this virtual SE device */
+-	atomic_t		simple_cmds;
+-	atomic_t		dev_ordered_sync;
++	atomic_t		non_ordered;
++	bool			ordered_sync_in_progress;
++	atomic_t		delayed_cmd_count;
+ 	atomic_t		dev_qf_count;
+ 	u32			export_count;
+ 	spinlock_t		delayed_cmd_lock;
+@@ -811,6 +812,7 @@ struct se_device {
+ 	struct list_head	dev_sep_list;
+ 	struct list_head	dev_tmr_list;
+ 	struct work_struct	qf_work_queue;
++	struct work_struct	delayed_cmd_work;
+ 	struct list_head	delayed_cmd_list;
+ 	struct list_head	state_list;
+ 	struct list_head	qf_cmd_list;
+diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
+index 56b113e3cd6aa..df293bc7f03b8 100644
+--- a/include/trace/events/f2fs.h
++++ b/include/trace/events/f2fs.h
+@@ -807,20 +807,20 @@ TRACE_EVENT(f2fs_lookup_start,
+ 	TP_STRUCT__entry(
+ 		__field(dev_t,	dev)
+ 		__field(ino_t,	ino)
+-		__field(const char *,	name)
++		__string(name,	dentry->d_name.name)
+ 		__field(unsigned int, flags)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->dev	= dir->i_sb->s_dev;
+ 		__entry->ino	= dir->i_ino;
+-		__entry->name	= dentry->d_name.name;
++		__assign_str(name, dentry->d_name.name);
+ 		__entry->flags	= flags;
+ 	),
+ 
+ 	TP_printk("dev = (%d,%d), pino = %lu, name:%s, flags:%u",
+ 		show_dev_ino(__entry),
+-		__entry->name,
++		__get_str(name),
+ 		__entry->flags)
+ );
+ 
+@@ -834,7 +834,7 @@ TRACE_EVENT(f2fs_lookup_end,
+ 	TP_STRUCT__entry(
+ 		__field(dev_t,	dev)
+ 		__field(ino_t,	ino)
+-		__field(const char *,	name)
++		__string(name,	dentry->d_name.name)
+ 		__field(nid_t,	cino)
+ 		__field(int,	err)
+ 	),
+@@ -842,14 +842,14 @@ TRACE_EVENT(f2fs_lookup_end,
+ 	TP_fast_assign(
+ 		__entry->dev	= dir->i_sb->s_dev;
+ 		__entry->ino	= dir->i_ino;
+-		__entry->name	= dentry->d_name.name;
++		__assign_str(name, dentry->d_name.name);
+ 		__entry->cino	= ino;
+ 		__entry->err	= err;
+ 	),
+ 
+ 	TP_printk("dev = (%d,%d), pino = %lu, name:%s, ino:%u, err:%d",
+ 		show_dev_ino(__entry),
+-		__entry->name,
++		__get_str(name),
+ 		__entry->cino,
+ 		__entry->err)
+ );
+diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h
+index cfcb10b754838..62db78b9c1a0a 100644
+--- a/include/uapi/linux/tcp.h
++++ b/include/uapi/linux/tcp.h
+@@ -349,5 +349,7 @@ struct tcp_zerocopy_receive {
+ 	__u32 recv_skip_hint;	/* out: amount of bytes to skip */
+ 	__u32 inq; /* out: amount of bytes in read queue */
+ 	__s32 err; /* out: socket error */
++	__u64 copybuf_address;	/* in: copybuf address (small reads) */
++	__s32 copybuf_len; /* in/out: copybuf bytes avail/used or error */
+ };
+ #endif /* _UAPI_LINUX_TCP_H */
+diff --git a/ipc/util.c b/ipc/util.c
+index cfa0045e748d5..bbb5190af6d9f 100644
+--- a/ipc/util.c
++++ b/ipc/util.c
+@@ -446,8 +446,8 @@ static int ipcget_public(struct ipc_namespace *ns, struct ipc_ids *ids,
+ static void ipc_kht_remove(struct ipc_ids *ids, struct kern_ipc_perm *ipcp)
+ {
+ 	if (ipcp->key != IPC_PRIVATE)
+-		rhashtable_remove_fast(&ids->key_ht, &ipcp->khtnode,
+-				       ipc_kht_params);
++		WARN_ON_ONCE(rhashtable_remove_fast(&ids->key_ht, &ipcp->khtnode,
++				       ipc_kht_params));
+ }
+ 
+ /**
+@@ -462,7 +462,7 @@ void ipc_rmid(struct ipc_ids *ids, struct kern_ipc_perm *ipcp)
+ {
+ 	int idx = ipcid_to_idx(ipcp->id);
+ 
+-	idr_remove(&ids->ipcs_idr, idx);
++	WARN_ON_ONCE(idr_remove(&ids->ipcs_idr, idx) != ipcp);
+ 	ipc_kht_remove(ids, ipcp);
+ 	ids->in_use--;
+ 	ipcp->deleted = true;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 908417736f4e9..639b99a318db1 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -2209,26 +2209,6 @@ out:
+ 	perf_event__header_size(leader);
+ }
+ 
+-static void sync_child_event(struct perf_event *child_event);
+-
+-static void perf_child_detach(struct perf_event *event)
+-{
+-	struct perf_event *parent_event = event->parent;
+-
+-	if (!(event->attach_state & PERF_ATTACH_CHILD))
+-		return;
+-
+-	event->attach_state &= ~PERF_ATTACH_CHILD;
+-
+-	if (WARN_ON_ONCE(!parent_event))
+-		return;
+-
+-	lockdep_assert_held(&parent_event->child_mutex);
+-
+-	sync_child_event(event);
+-	list_del_init(&event->child_list);
+-}
+-
+ static bool is_orphaned_event(struct perf_event *event)
+ {
+ 	return event->state == PERF_EVENT_STATE_DEAD;
+@@ -2336,7 +2316,6 @@ group_sched_out(struct perf_event *group_event,
+ }
+ 
+ #define DETACH_GROUP	0x01UL
+-#define DETACH_CHILD	0x02UL
+ 
+ /*
+  * Cross CPU call to remove a performance event
+@@ -2360,8 +2339,6 @@ __perf_remove_from_context(struct perf_event *event,
+ 	event_sched_out(event, cpuctx, ctx);
+ 	if (flags & DETACH_GROUP)
+ 		perf_group_detach(event);
+-	if (flags & DETACH_CHILD)
+-		perf_child_detach(event);
+ 	list_del_event(event, ctx);
+ 
+ 	if (!ctx->nr_events && ctx->is_active) {
+@@ -2390,21 +2367,25 @@ static void perf_remove_from_context(struct perf_event *event, unsigned long fla
+ 
+ 	lockdep_assert_held(&ctx->mutex);
+ 
++	event_function_call(event, __perf_remove_from_context, (void *)flags);
++
+ 	/*
+-	 * Because of perf_event_exit_task(), perf_remove_from_context() ought
+-	 * to work in the face of TASK_TOMBSTONE, unlike every other
+-	 * event_function_call() user.
++	 * The above event_function_call() can NO-OP when it hits
++	 * TASK_TOMBSTONE. In that case we must already have been detached
++	 * from the context (by perf_event_exit_event()) but the grouping
++	 * might still be in-tact.
+ 	 */
+-	raw_spin_lock_irq(&ctx->lock);
+-	if (!ctx->is_active) {
+-		__perf_remove_from_context(event, __get_cpu_context(ctx),
+-					   ctx, (void *)flags);
++	WARN_ON_ONCE(event->attach_state & PERF_ATTACH_CONTEXT);
++	if ((flags & DETACH_GROUP) &&
++	    (event->attach_state & PERF_ATTACH_GROUP)) {
++		/*
++		 * Since in that case we cannot possibly be scheduled, simply
++		 * detach now.
++		 */
++		raw_spin_lock_irq(&ctx->lock);
++		perf_group_detach(event);
+ 		raw_spin_unlock_irq(&ctx->lock);
+-		return;
+ 	}
+-	raw_spin_unlock_irq(&ctx->lock);
+-
+-	event_function_call(event, __perf_remove_from_context, (void *)flags);
+ }
+ 
+ /*
+@@ -12296,17 +12277,14 @@ void perf_pmu_migrate_context(struct pmu *pmu, int src_cpu, int dst_cpu)
+ }
+ EXPORT_SYMBOL_GPL(perf_pmu_migrate_context);
+ 
+-static void sync_child_event(struct perf_event *child_event)
++static void sync_child_event(struct perf_event *child_event,
++			       struct task_struct *child)
+ {
+ 	struct perf_event *parent_event = child_event->parent;
+ 	u64 child_val;
+ 
+-	if (child_event->attr.inherit_stat) {
+-		struct task_struct *task = child_event->ctx->task;
+-
+-		if (task && task != TASK_TOMBSTONE)
+-			perf_event_read_event(child_event, task);
+-	}
++	if (child_event->attr.inherit_stat)
++		perf_event_read_event(child_event, child);
+ 
+ 	child_val = perf_event_count(child_event);
+ 
+@@ -12321,53 +12299,60 @@ static void sync_child_event(struct perf_event *child_event)
+ }
+ 
+ static void
+-perf_event_exit_event(struct perf_event *event, struct perf_event_context *ctx)
++perf_event_exit_event(struct perf_event *child_event,
++		      struct perf_event_context *child_ctx,
++		      struct task_struct *child)
+ {
+-	struct perf_event *parent_event = event->parent;
+-	unsigned long detach_flags = 0;
+-
+-	if (parent_event) {
+-		/*
+-		 * Do not destroy the 'original' grouping; because of the
+-		 * context switch optimization the original events could've
+-		 * ended up in a random child task.
+-		 *
+-		 * If we were to destroy the original group, all group related
+-		 * operations would cease to function properly after this
+-		 * random child dies.
+-		 *
+-		 * Do destroy all inherited groups, we don't care about those
+-		 * and being thorough is better.
+-		 */
+-		detach_flags = DETACH_GROUP | DETACH_CHILD;
+-		mutex_lock(&parent_event->child_mutex);
+-	}
++	struct perf_event *parent_event = child_event->parent;
+ 
+-	perf_remove_from_context(event, detach_flags);
++	/*
++	 * Do not destroy the 'original' grouping; because of the context
++	 * switch optimization the original events could've ended up in a
++	 * random child task.
++	 *
++	 * If we were to destroy the original group, all group related
++	 * operations would cease to function properly after this random
++	 * child dies.
++	 *
++	 * Do destroy all inherited groups, we don't care about those
++	 * and being thorough is better.
++	 */
++	raw_spin_lock_irq(&child_ctx->lock);
++	WARN_ON_ONCE(child_ctx->is_active);
+ 
+-	raw_spin_lock_irq(&ctx->lock);
+-	if (event->state > PERF_EVENT_STATE_EXIT)
+-		perf_event_set_state(event, PERF_EVENT_STATE_EXIT);
+-	raw_spin_unlock_irq(&ctx->lock);
++	if (parent_event)
++		perf_group_detach(child_event);
++	list_del_event(child_event, child_ctx);
++	perf_event_set_state(child_event, PERF_EVENT_STATE_EXIT); /* is_event_hup() */
++	raw_spin_unlock_irq(&child_ctx->lock);
+ 
+ 	/*
+-	 * Child events can be freed.
++	 * Parent events are governed by their filedesc, retain them.
+ 	 */
+-	if (parent_event) {
+-		mutex_unlock(&parent_event->child_mutex);
+-		/*
+-		 * Kick perf_poll() for is_event_hup();
+-		 */
+-		perf_event_wakeup(parent_event);
+-		free_event(event);
+-		put_event(parent_event);
++	if (!parent_event) {
++		perf_event_wakeup(child_event);
+ 		return;
+ 	}
++	/*
++	 * Child events can be cleaned up.
++	 */
++
++	sync_child_event(child_event, child);
+ 
+ 	/*
+-	 * Parent events are governed by their filedesc, retain them.
++	 * Remove this event from the parent's list
++	 */
++	WARN_ON_ONCE(parent_event->ctx->parent_ctx);
++	mutex_lock(&parent_event->child_mutex);
++	list_del_init(&child_event->child_list);
++	mutex_unlock(&parent_event->child_mutex);
++
++	/*
++	 * Kick perf_poll() for is_event_hup().
+ 	 */
+-	perf_event_wakeup(event);
++	perf_event_wakeup(parent_event);
++	free_event(child_event);
++	put_event(parent_event);
+ }
+ 
+ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
+@@ -12424,7 +12409,7 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
+ 	perf_event_task(child, child_ctx, 0);
+ 
+ 	list_for_each_entry_safe(child_event, next, &child_ctx->event_list, event_entry)
+-		perf_event_exit_event(child_event, child_ctx);
++		perf_event_exit_event(child_event, child_ctx, child);
+ 
+ 	mutex_unlock(&child_ctx->mutex);
+ 
+@@ -12684,7 +12669,6 @@ inherit_event(struct perf_event *parent_event,
+ 	 */
+ 	raw_spin_lock_irqsave(&child_ctx->lock, flags);
+ 	add_event_to_ctx(child_event, child_ctx);
+-	child_event->attach_state |= PERF_ATTACH_CHILD;
+ 	raw_spin_unlock_irqrestore(&child_ctx->lock, flags);
+ 
+ 	/*
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index bc8ff11e60242..e456cce772a3a 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -2650,6 +2650,9 @@ out:
+ 
+ bool cpus_share_cache(int this_cpu, int that_cpu)
+ {
++	if (this_cpu == that_cpu)
++		return true;
++
+ 	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
+ }
+ 
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 1b7f90e00eb05..c2ec467a5766b 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1684,9 +1684,10 @@ static struct hist_field *create_hist_field(struct hist_trigger_data *hist_data,
+ 		if (!hist_field->type)
+ 			goto free;
+ 
+-		if (field->filter_type == FILTER_STATIC_STRING)
++		if (field->filter_type == FILTER_STATIC_STRING) {
+ 			hist_field->fn = hist_field_string;
+-		else if (field->filter_type == FILTER_DYN_STRING)
++			hist_field->size = field->size;
++		} else if (field->filter_type == FILTER_DYN_STRING)
+ 			hist_field->fn = hist_field_dynstring;
+ 		else
+ 			hist_field->fn = hist_field_pstring;
+@@ -2623,8 +2624,10 @@ static inline void __update_field_vars(struct tracing_map_elt *elt,
+ 		if (val->flags & HIST_FIELD_FL_STRING) {
+ 			char *str = elt_data->field_var_str[j++];
+ 			char *val_str = (char *)(uintptr_t)var_val;
++			unsigned int size;
+ 
+-			strscpy(str, val_str, STR_VAR_LEN_MAX);
++			size = min(val->size, STR_VAR_LEN_MAX);
++			strscpy(str, val_str, size);
+ 			var_val = (u64)(uintptr_t)str;
+ 		}
+ 		tracing_map_set_var(elt, var_idx, var_val);
+@@ -4464,6 +4467,7 @@ static void hist_trigger_elt_update(struct hist_trigger_data *hist_data,
+ 			if (hist_field->flags & HIST_FIELD_FL_STRING) {
+ 				unsigned int str_start, var_str_idx, idx;
+ 				char *str, *val_str;
++				unsigned int size;
+ 
+ 				str_start = hist_data->n_field_var_str +
+ 					hist_data->n_save_var_str;
+@@ -4472,7 +4476,9 @@ static void hist_trigger_elt_update(struct hist_trigger_data *hist_data,
+ 
+ 				str = elt_data->field_var_str[idx];
+ 				val_str = (char *)(uintptr_t)hist_val;
+-				strscpy(str, val_str, STR_VAR_LEN_MAX);
++
++				size = min(hist_field->size, STR_VAR_LEN_MAX);
++				strscpy(str, val_str, size);
+ 
+ 				hist_val = (u64)(uintptr_t)str;
+ 			}
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 6e92ab0ae070f..fce705fc2848a 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3913,6 +3913,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 	struct hstate *h = hstate_vma(vma);
+ 	unsigned long sz = huge_page_size(h);
+ 	struct mmu_notifier_range range;
++	bool force_flush = false;
+ 
+ 	WARN_ON(!is_vm_hugetlb_page(vma));
+ 	BUG_ON(start & ~huge_page_mask(h));
+@@ -3941,10 +3942,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 		ptl = huge_pte_lock(h, mm, ptep);
+ 		if (huge_pmd_unshare(mm, vma, &address, ptep)) {
+ 			spin_unlock(ptl);
+-			/*
+-			 * We just unmapped a page of PMDs by clearing a PUD.
+-			 * The caller's TLB flush range should cover this area.
+-			 */
++			tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE);
++			force_flush = true;
+ 			continue;
+ 		}
+ 
+@@ -4001,6 +4000,22 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 	}
+ 	mmu_notifier_invalidate_range_end(&range);
+ 	tlb_end_vma(tlb, vma);
++
++	/*
++	 * If we unshared PMDs, the TLB flush was not recorded in mmu_gather. We
++	 * could defer the flush until now, since by holding i_mmap_rwsem we
++	 * guaranteed that the last refernece would not be dropped. But we must
++	 * do the flushing before we return, as otherwise i_mmap_rwsem will be
++	 * dropped and the last reference to the shared PMDs page might be
++	 * dropped as well.
++	 *
++	 * In theory we could defer the freeing of the PMD pages as well, but
++	 * huge_pmd_unshare() relies on the exact page_count for the PMD page to
++	 * detect sharing, so we cannot defer the release of the page either.
++	 * Instead, do flush now.
++	 */
++	if (force_flush)
++		tlb_flush_mmu_tlbonly(tlb);
+ }
+ 
+ void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+diff --git a/mm/slab.h b/mm/slab.h
+index 944e8b2040ae2..6952e10cf33b4 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -147,7 +147,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
+ #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
+ 			  SLAB_TEMPORARY | SLAB_ACCOUNT)
+ #else
+-#define SLAB_CACHE_FLAGS (0)
++#define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
+ #endif
+ 
+ /* Common flags available with current configuration */
+diff --git a/net/core/sock.c b/net/core/sock.c
+index f9c835167391d..6d9af4ef93d7a 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1883,123 +1883,120 @@ static void sk_init_common(struct sock *sk)
+ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
+ {
+ 	struct proto *prot = READ_ONCE(sk->sk_prot);
+-	struct sock *newsk;
++	struct sk_filter *filter;
+ 	bool is_charged = true;
++	struct sock *newsk;
+ 
+ 	newsk = sk_prot_alloc(prot, priority, sk->sk_family);
+-	if (newsk != NULL) {
+-		struct sk_filter *filter;
++	if (!newsk)
++		goto out;
+ 
+-		sock_copy(newsk, sk);
++	sock_copy(newsk, sk);
+ 
+-		newsk->sk_prot_creator = prot;
++	newsk->sk_prot_creator = prot;
+ 
+-		/* SANITY */
+-		if (likely(newsk->sk_net_refcnt))
+-			get_net(sock_net(newsk));
+-		sk_node_init(&newsk->sk_node);
+-		sock_lock_init(newsk);
+-		bh_lock_sock(newsk);
+-		newsk->sk_backlog.head	= newsk->sk_backlog.tail = NULL;
+-		newsk->sk_backlog.len = 0;
++	/* SANITY */
++	if (likely(newsk->sk_net_refcnt)) {
++		get_net(sock_net(newsk));
++		sock_inuse_add(sock_net(newsk), 1);
++	}
++	sk_node_init(&newsk->sk_node);
++	sock_lock_init(newsk);
++	bh_lock_sock(newsk);
++	newsk->sk_backlog.head	= newsk->sk_backlog.tail = NULL;
++	newsk->sk_backlog.len = 0;
+ 
+-		atomic_set(&newsk->sk_rmem_alloc, 0);
+-		/*
+-		 * sk_wmem_alloc set to one (see sk_free() and sock_wfree())
+-		 */
+-		refcount_set(&newsk->sk_wmem_alloc, 1);
+-		atomic_set(&newsk->sk_omem_alloc, 0);
+-		sk_init_common(newsk);
++	atomic_set(&newsk->sk_rmem_alloc, 0);
+ 
+-		newsk->sk_dst_cache	= NULL;
+-		newsk->sk_dst_pending_confirm = 0;
+-		newsk->sk_wmem_queued	= 0;
+-		newsk->sk_forward_alloc = 0;
+-		atomic_set(&newsk->sk_drops, 0);
+-		newsk->sk_send_head	= NULL;
+-		newsk->sk_userlocks	= sk->sk_userlocks & ~SOCK_BINDPORT_LOCK;
+-		atomic_set(&newsk->sk_zckey, 0);
++	/* sk_wmem_alloc set to one (see sk_free() and sock_wfree()) */
++	refcount_set(&newsk->sk_wmem_alloc, 1);
+ 
+-		sock_reset_flag(newsk, SOCK_DONE);
++	atomic_set(&newsk->sk_omem_alloc, 0);
++	sk_init_common(newsk);
+ 
+-		/* sk->sk_memcg will be populated at accept() time */
+-		newsk->sk_memcg = NULL;
++	newsk->sk_dst_cache	= NULL;
++	newsk->sk_dst_pending_confirm = 0;
++	newsk->sk_wmem_queued	= 0;
++	newsk->sk_forward_alloc = 0;
++	atomic_set(&newsk->sk_drops, 0);
++	newsk->sk_send_head	= NULL;
++	newsk->sk_userlocks	= sk->sk_userlocks & ~SOCK_BINDPORT_LOCK;
++	atomic_set(&newsk->sk_zckey, 0);
+ 
+-		cgroup_sk_clone(&newsk->sk_cgrp_data);
++	sock_reset_flag(newsk, SOCK_DONE);
+ 
+-		rcu_read_lock();
+-		filter = rcu_dereference(sk->sk_filter);
+-		if (filter != NULL)
+-			/* though it's an empty new sock, the charging may fail
+-			 * if sysctl_optmem_max was changed between creation of
+-			 * original socket and cloning
+-			 */
+-			is_charged = sk_filter_charge(newsk, filter);
+-		RCU_INIT_POINTER(newsk->sk_filter, filter);
+-		rcu_read_unlock();
++	/* sk->sk_memcg will be populated at accept() time */
++	newsk->sk_memcg = NULL;
+ 
+-		if (unlikely(!is_charged || xfrm_sk_clone_policy(newsk, sk))) {
+-			/* We need to make sure that we don't uncharge the new
+-			 * socket if we couldn't charge it in the first place
+-			 * as otherwise we uncharge the parent's filter.
+-			 */
+-			if (!is_charged)
+-				RCU_INIT_POINTER(newsk->sk_filter, NULL);
+-			sk_free_unlock_clone(newsk);
+-			newsk = NULL;
+-			goto out;
+-		}
+-		RCU_INIT_POINTER(newsk->sk_reuseport_cb, NULL);
++	cgroup_sk_clone(&newsk->sk_cgrp_data);
+ 
+-		if (bpf_sk_storage_clone(sk, newsk)) {
+-			sk_free_unlock_clone(newsk);
+-			newsk = NULL;
+-			goto out;
+-		}
++	rcu_read_lock();
++	filter = rcu_dereference(sk->sk_filter);
++	if (filter != NULL)
++		/* though it's an empty new sock, the charging may fail
++		 * if sysctl_optmem_max was changed between creation of
++		 * original socket and cloning
++		 */
++		is_charged = sk_filter_charge(newsk, filter);
++	RCU_INIT_POINTER(newsk->sk_filter, filter);
++	rcu_read_unlock();
+ 
+-		/* Clear sk_user_data if parent had the pointer tagged
+-		 * as not suitable for copying when cloning.
++	if (unlikely(!is_charged || xfrm_sk_clone_policy(newsk, sk))) {
++		/* We need to make sure that we don't uncharge the new
++		 * socket if we couldn't charge it in the first place
++		 * as otherwise we uncharge the parent's filter.
+ 		 */
+-		if (sk_user_data_is_nocopy(newsk))
+-			newsk->sk_user_data = NULL;
++		if (!is_charged)
++			RCU_INIT_POINTER(newsk->sk_filter, NULL);
++		sk_free_unlock_clone(newsk);
++		newsk = NULL;
++		goto out;
++	}
++	RCU_INIT_POINTER(newsk->sk_reuseport_cb, NULL);
+ 
+-		newsk->sk_err	   = 0;
+-		newsk->sk_err_soft = 0;
+-		newsk->sk_priority = 0;
+-		newsk->sk_incoming_cpu = raw_smp_processor_id();
+-		if (likely(newsk->sk_net_refcnt))
+-			sock_inuse_add(sock_net(newsk), 1);
++	if (bpf_sk_storage_clone(sk, newsk)) {
++		sk_free_unlock_clone(newsk);
++		newsk = NULL;
++		goto out;
++	}
+ 
+-		/*
+-		 * Before updating sk_refcnt, we must commit prior changes to memory
+-		 * (Documentation/RCU/rculist_nulls.rst for details)
+-		 */
+-		smp_wmb();
+-		refcount_set(&newsk->sk_refcnt, 2);
++	/* Clear sk_user_data if parent had the pointer tagged
++	 * as not suitable for copying when cloning.
++	 */
++	if (sk_user_data_is_nocopy(newsk))
++		newsk->sk_user_data = NULL;
+ 
+-		/*
+-		 * Increment the counter in the same struct proto as the master
+-		 * sock (sk_refcnt_debug_inc uses newsk->sk_prot->socks, that
+-		 * is the same as sk->sk_prot->socks, as this field was copied
+-		 * with memcpy).
+-		 *
+-		 * This _changes_ the previous behaviour, where
+-		 * tcp_create_openreq_child always was incrementing the
+-		 * equivalent to tcp_prot->socks (inet_sock_nr), so this have
+-		 * to be taken into account in all callers. -acme
+-		 */
+-		sk_refcnt_debug_inc(newsk);
+-		sk_set_socket(newsk, NULL);
+-		sk_tx_queue_clear(newsk);
+-		RCU_INIT_POINTER(newsk->sk_wq, NULL);
++	newsk->sk_err	   = 0;
++	newsk->sk_err_soft = 0;
++	newsk->sk_priority = 0;
++	newsk->sk_incoming_cpu = raw_smp_processor_id();
+ 
+-		if (newsk->sk_prot->sockets_allocated)
+-			sk_sockets_allocated_inc(newsk);
++	/* Before updating sk_refcnt, we must commit prior changes to memory
++	 * (Documentation/RCU/rculist_nulls.rst for details)
++	 */
++	smp_wmb();
++	refcount_set(&newsk->sk_refcnt, 2);
+ 
+-		if (sock_needs_netstamp(sk) &&
+-		    newsk->sk_flags & SK_FLAGS_TIMESTAMP)
+-			net_enable_timestamp();
+-	}
++	/* Increment the counter in the same struct proto as the master
++	 * sock (sk_refcnt_debug_inc uses newsk->sk_prot->socks, that
++	 * is the same as sk->sk_prot->socks, as this field was copied
++	 * with memcpy).
++	 *
++	 * This _changes_ the previous behaviour, where
++	 * tcp_create_openreq_child always was incrementing the
++	 * equivalent to tcp_prot->socks (inet_sock_nr), so this have
++	 * to be taken into account in all callers. -acme
++	 */
++	sk_refcnt_debug_inc(newsk);
++	sk_set_socket(newsk, NULL);
++	sk_tx_queue_clear(newsk);
++	RCU_INIT_POINTER(newsk->sk_wq, NULL);
++
++	if (newsk->sk_prot->sockets_allocated)
++		sk_sockets_allocated_inc(newsk);
++
++	if (sock_needs_netstamp(sk) && newsk->sk_flags & SK_FLAGS_TIMESTAMP)
++		net_enable_timestamp();
+ out:
+ 	return newsk;
+ }
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index e8aca226c4ae3..bb16c88f58a3c 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1746,6 +1746,77 @@ int tcp_mmap(struct file *file, struct socket *sock,
+ }
+ EXPORT_SYMBOL(tcp_mmap);
+ 
++static skb_frag_t *skb_advance_to_frag(struct sk_buff *skb, u32 offset_skb,
++				       u32 *offset_frag)
++{
++	skb_frag_t *frag;
++
++	if (unlikely(offset_skb >= skb->len))
++		return NULL;
++
++	offset_skb -= skb_headlen(skb);
++	if ((int)offset_skb < 0 || skb_has_frag_list(skb))
++		return NULL;
++
++	frag = skb_shinfo(skb)->frags;
++	while (offset_skb) {
++		if (skb_frag_size(frag) > offset_skb) {
++			*offset_frag = offset_skb;
++			return frag;
++		}
++		offset_skb -= skb_frag_size(frag);
++		++frag;
++	}
++	*offset_frag = 0;
++	return frag;
++}
++
++static int tcp_copy_straggler_data(struct tcp_zerocopy_receive *zc,
++				   struct sk_buff *skb, u32 copylen,
++				   u32 *offset, u32 *seq)
++{
++	unsigned long copy_address = (unsigned long)zc->copybuf_address;
++	struct msghdr msg = {};
++	struct iovec iov;
++	int err;
++
++	if (copy_address != zc->copybuf_address)
++		return -EINVAL;
++
++	err = import_single_range(READ, (void __user *)copy_address,
++				  copylen, &iov, &msg.msg_iter);
++	if (err)
++		return err;
++	err = skb_copy_datagram_msg(skb, *offset, &msg, copylen);
++	if (err)
++		return err;
++	zc->recv_skip_hint -= copylen;
++	*offset += copylen;
++	*seq += copylen;
++	return (__s32)copylen;
++}
++
++static int tcp_zerocopy_handle_leftover_data(struct tcp_zerocopy_receive *zc,
++					     struct sock *sk,
++					     struct sk_buff *skb,
++					     u32 *seq,
++					     s32 copybuf_len)
++{
++	u32 offset, copylen = min_t(u32, copybuf_len, zc->recv_skip_hint);
++
++	if (!copylen)
++		return 0;
++	/* skb is null if inq < PAGE_SIZE. */
++	if (skb)
++		offset = *seq - TCP_SKB_CB(skb)->seq;
++	else
++		skb = tcp_recv_skb(sk, *seq, &offset);
++
++	zc->copybuf_len = tcp_copy_straggler_data(zc, skb, copylen, &offset,
++						  seq);
++	return zc->copybuf_len < 0 ? 0 : copylen;
++}
++
+ static int tcp_zerocopy_vm_insert_batch(struct vm_area_struct *vma,
+ 					struct page **pages,
+ 					unsigned long pages_to_map,
+@@ -1779,8 +1850,10 @@ static int tcp_zerocopy_vm_insert_batch(struct vm_area_struct *vma,
+ static int tcp_zerocopy_receive(struct sock *sk,
+ 				struct tcp_zerocopy_receive *zc)
+ {
++	u32 length = 0, offset, vma_len, avail_len, aligned_len, copylen = 0;
+ 	unsigned long address = (unsigned long)zc->address;
+-	u32 length = 0, seq, offset, zap_len;
++	s32 copybuf_len = zc->copybuf_len;
++	struct tcp_sock *tp = tcp_sk(sk);
+ 	#define PAGE_BATCH_SIZE 8
+ 	struct page *pages[PAGE_BATCH_SIZE];
+ 	const skb_frag_t *frags = NULL;
+@@ -1788,10 +1861,12 @@ static int tcp_zerocopy_receive(struct sock *sk,
+ 	struct sk_buff *skb = NULL;
+ 	unsigned long pg_idx = 0;
+ 	unsigned long curr_addr;
+-	struct tcp_sock *tp;
+-	int inq;
++	u32 seq = tp->copied_seq;
++	int inq = tcp_inq(sk);
+ 	int ret;
+ 
++	zc->copybuf_len = 0;
++
+ 	if (address & (PAGE_SIZE - 1) || address != zc->address)
+ 		return -EINVAL;
+ 
+@@ -1800,8 +1875,6 @@ static int tcp_zerocopy_receive(struct sock *sk,
+ 
+ 	sock_rps_record_flow(sk);
+ 
+-	tp = tcp_sk(sk);
+-
+ 	mmap_read_lock(current->mm);
+ 
+ 	vma = find_vma(current->mm, address);
+@@ -1809,22 +1882,23 @@ static int tcp_zerocopy_receive(struct sock *sk,
+ 		mmap_read_unlock(current->mm);
+ 		return -EINVAL;
+ 	}
+-	zc->length = min_t(unsigned long, zc->length, vma->vm_end - address);
+-
+-	seq = tp->copied_seq;
+-	inq = tcp_inq(sk);
+-	zc->length = min_t(u32, zc->length, inq);
+-	zap_len = zc->length & ~(PAGE_SIZE - 1);
+-	if (zap_len) {
+-		zap_page_range(vma, address, zap_len);
++	vma_len = min_t(unsigned long, zc->length, vma->vm_end - address);
++	avail_len = min_t(u32, vma_len, inq);
++	aligned_len = avail_len & ~(PAGE_SIZE - 1);
++	if (aligned_len) {
++		zap_page_range(vma, address, aligned_len);
++		zc->length = aligned_len;
+ 		zc->recv_skip_hint = 0;
+ 	} else {
+-		zc->recv_skip_hint = zc->length;
++		zc->length = avail_len;
++		zc->recv_skip_hint = avail_len;
+ 	}
+ 	ret = 0;
+ 	curr_addr = address;
+ 	while (length + PAGE_SIZE <= zc->length) {
+ 		if (zc->recv_skip_hint < PAGE_SIZE) {
++			u32 offset_frag;
++
+ 			/* If we're here, finish the current batch. */
+ 			if (pg_idx) {
+ 				ret = tcp_zerocopy_vm_insert_batch(vma, pages,
+@@ -1845,16 +1919,9 @@ static int tcp_zerocopy_receive(struct sock *sk,
+ 				skb = tcp_recv_skb(sk, seq, &offset);
+ 			}
+ 			zc->recv_skip_hint = skb->len - offset;
+-			offset -= skb_headlen(skb);
+-			if ((int)offset < 0 || skb_has_frag_list(skb))
++			frags = skb_advance_to_frag(skb, offset, &offset_frag);
++			if (!frags || offset_frag)
+ 				break;
+-			frags = skb_shinfo(skb)->frags;
+-			while (offset) {
+-				if (skb_frag_size(frags) > offset)
+-					goto out;
+-				offset -= skb_frag_size(frags);
+-				frags++;
+-			}
+ 		}
+ 		if (skb_frag_size(frags) != PAGE_SIZE || skb_frag_off(frags)) {
+ 			int remaining = zc->recv_skip_hint;
+@@ -1888,13 +1955,18 @@ static int tcp_zerocopy_receive(struct sock *sk,
+ 	}
+ out:
+ 	mmap_read_unlock(current->mm);
+-	if (length) {
++	/* Try to copy straggler data. */
++	if (!ret)
++		copylen = tcp_zerocopy_handle_leftover_data(zc, sk, skb, &seq,
++							    copybuf_len);
++
++	if (length + copylen) {
+ 		WRITE_ONCE(tp->copied_seq, seq);
+ 		tcp_rcv_space_adjust(sk);
+ 
+ 		/* Clean up data we have read: This will do ACK frames. */
+ 		tcp_recv_skb(sk, seq, &offset);
+-		tcp_cleanup_rbuf(sk, length);
++		tcp_cleanup_rbuf(sk, length + copylen);
+ 		ret = 0;
+ 		if (length == zc->length)
+ 			zc->recv_skip_hint = 0;
+diff --git a/net/nfc/core.c b/net/nfc/core.c
+index eb377f87bcae8..6800470dd6df7 100644
+--- a/net/nfc/core.c
++++ b/net/nfc/core.c
+@@ -94,13 +94,13 @@ int nfc_dev_up(struct nfc_dev *dev)
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (dev->rfkill && rfkill_blocked(dev->rfkill)) {
+-		rc = -ERFKILL;
++	if (!device_is_registered(&dev->dev)) {
++		rc = -ENODEV;
+ 		goto error;
+ 	}
+ 
+-	if (!device_is_registered(&dev->dev)) {
+-		rc = -ENODEV;
++	if (dev->rfkill && rfkill_blocked(dev->rfkill)) {
++		rc = -ERFKILL;
+ 		goto error;
+ 	}
+ 
+@@ -1117,11 +1117,7 @@ int nfc_register_device(struct nfc_dev *dev)
+ 	if (rc)
+ 		pr_err("Could not register llcp device\n");
+ 
+-	rc = nfc_genl_device_added(dev);
+-	if (rc)
+-		pr_debug("The userspace won't be notified that the device %s was added\n",
+-			 dev_name(&dev->dev));
+-
++	device_lock(&dev->dev);
+ 	dev->rfkill = rfkill_alloc(dev_name(&dev->dev), &dev->dev,
+ 				   RFKILL_TYPE_NFC, &nfc_rfkill_ops, dev);
+ 	if (dev->rfkill) {
+@@ -1130,6 +1126,12 @@ int nfc_register_device(struct nfc_dev *dev)
+ 			dev->rfkill = NULL;
+ 		}
+ 	}
++	device_unlock(&dev->dev);
++
++	rc = nfc_genl_device_added(dev);
++	if (rc)
++		pr_debug("The userspace won't be notified that the device %s was added\n",
++			 dev_name(&dev->dev));
+ 
+ 	return 0;
+ }
+@@ -1146,10 +1148,17 @@ void nfc_unregister_device(struct nfc_dev *dev)
+ 
+ 	pr_debug("dev_name=%s\n", dev_name(&dev->dev));
+ 
++	rc = nfc_genl_device_removed(dev);
++	if (rc)
++		pr_debug("The userspace won't be notified that the device %s "
++			 "was removed\n", dev_name(&dev->dev));
++
++	device_lock(&dev->dev);
+ 	if (dev->rfkill) {
+ 		rfkill_unregister(dev->rfkill);
+ 		rfkill_destroy(dev->rfkill);
+ 	}
++	device_unlock(&dev->dev);
+ 
+ 	if (dev->ops->check_presence) {
+ 		device_lock(&dev->dev);
+@@ -1159,11 +1168,6 @@ void nfc_unregister_device(struct nfc_dev *dev)
+ 		cancel_work_sync(&dev->check_pres_work);
+ 	}
+ 
+-	rc = nfc_genl_device_removed(dev);
+-	if (rc)
+-		pr_debug("The userspace won't be notified that the device %s "
+-			 "was removed\n", dev_name(&dev->dev));
+-
+ 	nfc_llcp_unregister_device(dev);
+ 
+ 	mutex_lock(&nfc_devlist_mutex);
+diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
+index 32e8154363cab..e38719e2ee582 100644
+--- a/net/nfc/nci/core.c
++++ b/net/nfc/nci/core.c
+@@ -144,12 +144,15 @@ inline int nci_request(struct nci_dev *ndev,
+ {
+ 	int rc;
+ 
+-	if (!test_bit(NCI_UP, &ndev->flags))
+-		return -ENETDOWN;
+-
+ 	/* Serialize all requests */
+ 	mutex_lock(&ndev->req_lock);
+-	rc = __nci_request(ndev, req, opt, timeout);
++	/* check the state after obtaing the lock against any races
++	 * from nci_close_device when the device gets removed.
++	 */
++	if (test_bit(NCI_UP, &ndev->flags))
++		rc = __nci_request(ndev, req, opt, timeout);
++	else
++		rc = -ENETDOWN;
+ 	mutex_unlock(&ndev->req_lock);
+ 
+ 	return rc;
+@@ -470,6 +473,11 @@ static int nci_open_device(struct nci_dev *ndev)
+ 
+ 	mutex_lock(&ndev->req_lock);
+ 
++	if (test_bit(NCI_UNREG, &ndev->flags)) {
++		rc = -ENODEV;
++		goto done;
++	}
++
+ 	if (test_bit(NCI_UP, &ndev->flags)) {
+ 		rc = -EALREADY;
+ 		goto done;
+@@ -533,6 +541,10 @@ done:
+ static int nci_close_device(struct nci_dev *ndev)
+ {
+ 	nci_req_cancel(ndev, ENODEV);
++
++	/* This mutex needs to be held as a barrier for
++	 * caller nci_unregister_device
++	 */
+ 	mutex_lock(&ndev->req_lock);
+ 
+ 	if (!test_and_clear_bit(NCI_UP, &ndev->flags)) {
+@@ -565,13 +577,13 @@ static int nci_close_device(struct nci_dev *ndev)
+ 
+ 	clear_bit(NCI_INIT, &ndev->flags);
+ 
+-	del_timer_sync(&ndev->cmd_timer);
+-
+ 	/* Flush cmd wq */
+ 	flush_workqueue(ndev->cmd_wq);
+ 
+-	/* Clear flags */
+-	ndev->flags = 0;
++	del_timer_sync(&ndev->cmd_timer);
++
++	/* Clear flags except NCI_UNREG */
++	ndev->flags &= BIT(NCI_UNREG);
+ 
+ 	mutex_unlock(&ndev->req_lock);
+ 
+@@ -1256,6 +1268,12 @@ void nci_unregister_device(struct nci_dev *ndev)
+ {
+ 	struct nci_conn_info    *conn_info, *n;
+ 
++	/* This set_bit is not protected with specialized barrier,
++	 * However, it is fine because the mutex_lock(&ndev->req_lock);
++	 * in nci_close_device() will help to emit one.
++	 */
++	set_bit(NCI_UNREG, &ndev->flags);
++
+ 	nci_close_device(ndev);
+ 
+ 	destroy_workqueue(ndev->cmd_wq);
+diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
+index 0b0eb18919c09..24d561d8d9c97 100644
+--- a/net/sched/act_mirred.c
++++ b/net/sched/act_mirred.c
+@@ -19,6 +19,7 @@
+ #include <linux/if_arp.h>
+ #include <net/net_namespace.h>
+ #include <net/netlink.h>
++#include <net/dst.h>
+ #include <net/pkt_sched.h>
+ #include <net/pkt_cls.h>
+ #include <linux/tc_act/tc_mirred.h>
+@@ -218,6 +219,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
+ 	bool want_ingress;
+ 	bool is_redirect;
+ 	bool expects_nh;
++	bool at_ingress;
+ 	int m_eaction;
+ 	int mac_len;
+ 	bool at_nh;
+@@ -253,7 +255,8 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
+ 	 * ingress - that covers the TC S/W datapath.
+ 	 */
+ 	is_redirect = tcf_mirred_is_act_redirect(m_eaction);
+-	use_reinsert = skb_at_tc_ingress(skb) && is_redirect &&
++	at_ingress = skb_at_tc_ingress(skb);
++	use_reinsert = at_ingress && is_redirect &&
+ 		       tcf_mirred_can_reinsert(retval);
+ 	if (!use_reinsert) {
+ 		skb2 = skb_clone(skb, GFP_ATOMIC);
+@@ -261,10 +264,12 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
+ 			goto out;
+ 	}
+ 
++	want_ingress = tcf_mirred_act_wants_ingress(m_eaction);
++
+ 	/* All mirred/redirected skbs should clear previous ct info */
+ 	nf_reset_ct(skb2);
+-
+-	want_ingress = tcf_mirred_act_wants_ingress(m_eaction);
++	if (want_ingress && !at_ingress) /* drop dst for egress -> ingress */
++		skb_dst_drop(skb2);
+ 
+ 	expects_nh = want_ingress || !m_mac_header_xmit;
+ 	at_nh = skb->data == skb_network_header(skb);
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index c491dd8e67cda..109d790eaebe2 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -287,13 +287,14 @@ static u8 smcr_next_link_id(struct smc_link_group *lgr)
+ 	int i;
+ 
+ 	while (1) {
++again:
+ 		link_id = ++lgr->next_link_id;
+ 		if (!link_id)	/* skip zero as link_id */
+ 			link_id = ++lgr->next_link_id;
+ 		for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
+ 			if (smc_link_usable(&lgr->lnk[i]) &&
+ 			    lgr->lnk[i].link_id == link_id)
+-				continue;
++				goto again;
+ 		}
+ 		break;
+ 	}
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 23b100f36ee48..d8a2f424786fc 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -590,6 +590,10 @@ static int tipc_aead_init(struct tipc_aead **aead, struct tipc_aead_key *ukey,
+ 	tmp->cloned = NULL;
+ 	tmp->authsize = TIPC_AES_GCM_TAG_SIZE;
+ 	tmp->key = kmemdup(ukey, tipc_aead_key_size(ukey), GFP_KERNEL);
++	if (!tmp->key) {
++		tipc_aead_free(&tmp->rcu);
++		return -ENOMEM;
++	}
+ 	memcpy(&tmp->salt, ukey->key + keylen, TIPC_AES_GCM_SALT_SIZE);
+ 	atomic_set(&tmp->users, 0);
+ 	atomic64_set(&tmp->seqno, 0);
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index c92e6984933cb..29591955d08a5 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -1258,8 +1258,11 @@ static bool tipc_data_input(struct tipc_link *l, struct sk_buff *skb,
+ 		return false;
+ #ifdef CONFIG_TIPC_CRYPTO
+ 	case MSG_CRYPTO:
+-		tipc_crypto_msg_rcv(l->net, skb);
+-		return true;
++		if (TIPC_SKB_CB(skb)->decrypted) {
++			tipc_crypto_msg_rcv(l->net, skb);
++			return true;
++		}
++		fallthrough;
+ #endif
+ 	default:
+ 		pr_warn("Dropping received illegal msg type\n");
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 3f8c46bb6d9a4..4b32e85c2d9a1 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1044,6 +1044,7 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
+ 
+ 		switch (otype) {
+ 		case NL80211_IFTYPE_AP:
++		case NL80211_IFTYPE_P2P_GO:
+ 			cfg80211_stop_ap(rdev, dev, true);
+ 			break;
+ 		case NL80211_IFTYPE_ADHOC:
+diff --git a/security/selinux/ss/hashtab.c b/security/selinux/ss/hashtab.c
+index dab8c25c739b9..7335f67ce54eb 100644
+--- a/security/selinux/ss/hashtab.c
++++ b/security/selinux/ss/hashtab.c
+@@ -30,13 +30,20 @@ static u32 hashtab_compute_size(u32 nel)
+ 
+ int hashtab_init(struct hashtab *h, u32 nel_hint)
+ {
+-	h->size = hashtab_compute_size(nel_hint);
++	u32 size = hashtab_compute_size(nel_hint);
++
++	/* should already be zeroed, but better be safe */
+ 	h->nel = 0;
+-	if (!h->size)
+-		return 0;
++	h->size = 0;
++	h->htable = NULL;
+ 
+-	h->htable = kcalloc(h->size, sizeof(*h->htable), GFP_KERNEL);
+-	return h->htable ? 0 : -ENOMEM;
++	if (size) {
++		h->htable = kcalloc(size, sizeof(*h->htable), GFP_KERNEL);
++		if (!h->htable)
++			return -ENOMEM;
++		h->size = size;
++	}
++	return 0;
+ }
+ 
+ int __hashtab_insert(struct hashtab *h, struct hashtab_node **dst,
+diff --git a/sound/core/Makefile b/sound/core/Makefile
+index ee4a4a6b99ba7..d123587c0fd8f 100644
+--- a/sound/core/Makefile
++++ b/sound/core/Makefile
+@@ -9,7 +9,9 @@ ifneq ($(CONFIG_SND_PROC_FS),)
+ snd-y += info.o
+ snd-$(CONFIG_SND_OSSEMUL) += info_oss.o
+ endif
++ifneq ($(CONFIG_M68K),y)
+ snd-$(CONFIG_ISA_DMA_API) += isadma.o
++endif
+ snd-$(CONFIG_SND_OSSEMUL) += sound_oss.o
+ snd-$(CONFIG_SND_VMASTER) += vmaster.o
+ snd-$(CONFIG_SND_JACK)	  += ctljack.o jack.o
+diff --git a/sound/hda/ext/hdac_ext_stream.c b/sound/hda/ext/hdac_ext_stream.c
+index c4d54a838773c..1e6e4cf428cda 100644
+--- a/sound/hda/ext/hdac_ext_stream.c
++++ b/sound/hda/ext/hdac_ext_stream.c
+@@ -106,20 +106,14 @@ void snd_hdac_stream_free_all(struct hdac_bus *bus)
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_stream_free_all);
+ 
+-/**
+- * snd_hdac_ext_stream_decouple - decouple the hdac stream
+- * @bus: HD-audio core bus
+- * @stream: HD-audio ext core stream object to initialize
+- * @decouple: flag to decouple
+- */
+-void snd_hdac_ext_stream_decouple(struct hdac_bus *bus,
+-				struct hdac_ext_stream *stream, bool decouple)
++void snd_hdac_ext_stream_decouple_locked(struct hdac_bus *bus,
++					 struct hdac_ext_stream *stream,
++					 bool decouple)
+ {
+ 	struct hdac_stream *hstream = &stream->hstream;
+ 	u32 val;
+ 	int mask = AZX_PPCTL_PROCEN(hstream->index);
+ 
+-	spin_lock_irq(&bus->reg_lock);
+ 	val = readw(bus->ppcap + AZX_REG_PP_PPCTL) & mask;
+ 
+ 	if (decouple && !val)
+@@ -128,6 +122,20 @@ void snd_hdac_ext_stream_decouple(struct hdac_bus *bus,
+ 		snd_hdac_updatel(bus->ppcap, AZX_REG_PP_PPCTL, mask, 0);
+ 
+ 	stream->decoupled = decouple;
++}
++EXPORT_SYMBOL_GPL(snd_hdac_ext_stream_decouple_locked);
++
++/**
++ * snd_hdac_ext_stream_decouple - decouple the hdac stream
++ * @bus: HD-audio core bus
++ * @stream: HD-audio ext core stream object to initialize
++ * @decouple: flag to decouple
++ */
++void snd_hdac_ext_stream_decouple(struct hdac_bus *bus,
++				  struct hdac_ext_stream *stream, bool decouple)
++{
++	spin_lock_irq(&bus->reg_lock);
++	snd_hdac_ext_stream_decouple_locked(bus, stream, decouple);
+ 	spin_unlock_irq(&bus->reg_lock);
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_ext_stream_decouple);
+@@ -252,6 +260,7 @@ hdac_ext_link_stream_assign(struct hdac_bus *bus,
+ 		return NULL;
+ 	}
+ 
++	spin_lock_irq(&bus->reg_lock);
+ 	list_for_each_entry(stream, &bus->stream_list, list) {
+ 		struct hdac_ext_stream *hstream = container_of(stream,
+ 						struct hdac_ext_stream,
+@@ -266,17 +275,16 @@ hdac_ext_link_stream_assign(struct hdac_bus *bus,
+ 		}
+ 
+ 		if (!hstream->link_locked) {
+-			snd_hdac_ext_stream_decouple(bus, hstream, true);
++			snd_hdac_ext_stream_decouple_locked(bus, hstream, true);
+ 			res = hstream;
+ 			break;
+ 		}
+ 	}
+ 	if (res) {
+-		spin_lock_irq(&bus->reg_lock);
+ 		res->link_locked = 1;
+ 		res->link_substream = substream;
+-		spin_unlock_irq(&bus->reg_lock);
+ 	}
++	spin_unlock_irq(&bus->reg_lock);
+ 	return res;
+ }
+ 
+@@ -292,6 +300,7 @@ hdac_ext_host_stream_assign(struct hdac_bus *bus,
+ 		return NULL;
+ 	}
+ 
++	spin_lock_irq(&bus->reg_lock);
+ 	list_for_each_entry(stream, &bus->stream_list, list) {
+ 		struct hdac_ext_stream *hstream = container_of(stream,
+ 						struct hdac_ext_stream,
+@@ -301,18 +310,17 @@ hdac_ext_host_stream_assign(struct hdac_bus *bus,
+ 
+ 		if (!stream->opened) {
+ 			if (!hstream->decoupled)
+-				snd_hdac_ext_stream_decouple(bus, hstream, true);
++				snd_hdac_ext_stream_decouple_locked(bus, hstream, true);
+ 			res = hstream;
+ 			break;
+ 		}
+ 	}
+ 	if (res) {
+-		spin_lock_irq(&bus->reg_lock);
+ 		res->hstream.opened = 1;
+ 		res->hstream.running = 0;
+ 		res->hstream.substream = substream;
+-		spin_unlock_irq(&bus->reg_lock);
+ 	}
++	spin_unlock_irq(&bus->reg_lock);
+ 
+ 	return res;
+ }
+@@ -378,15 +386,17 @@ void snd_hdac_ext_stream_release(struct hdac_ext_stream *stream, int type)
+ 		break;
+ 
+ 	case HDAC_EXT_STREAM_TYPE_HOST:
++		spin_lock_irq(&bus->reg_lock);
+ 		if (stream->decoupled && !stream->link_locked)
+-			snd_hdac_ext_stream_decouple(bus, stream, false);
++			snd_hdac_ext_stream_decouple_locked(bus, stream, false);
++		spin_unlock_irq(&bus->reg_lock);
+ 		snd_hdac_stream_release(&stream->hstream);
+ 		break;
+ 
+ 	case HDAC_EXT_STREAM_TYPE_LINK:
+-		if (stream->decoupled && !stream->hstream.opened)
+-			snd_hdac_ext_stream_decouple(bus, stream, false);
+ 		spin_lock_irq(&bus->reg_lock);
++		if (stream->decoupled && !stream->hstream.opened)
++			snd_hdac_ext_stream_decouple_locked(bus, stream, false);
+ 		stream->link_locked = 0;
+ 		stream->link_substream = NULL;
+ 		spin_unlock_irq(&bus->reg_lock);
+diff --git a/sound/hda/hdac_stream.c b/sound/hda/hdac_stream.c
+index abe7a1b16fe1e..ce77a53201639 100644
+--- a/sound/hda/hdac_stream.c
++++ b/sound/hda/hdac_stream.c
+@@ -296,6 +296,7 @@ struct hdac_stream *snd_hdac_stream_assign(struct hdac_bus *bus,
+ 	int key = (substream->pcm->device << 16) | (substream->number << 2) |
+ 		(substream->stream + 1);
+ 
++	spin_lock_irq(&bus->reg_lock);
+ 	list_for_each_entry(azx_dev, &bus->stream_list, list) {
+ 		if (azx_dev->direction != substream->stream)
+ 			continue;
+@@ -309,13 +310,12 @@ struct hdac_stream *snd_hdac_stream_assign(struct hdac_bus *bus,
+ 			res = azx_dev;
+ 	}
+ 	if (res) {
+-		spin_lock_irq(&bus->reg_lock);
+ 		res->opened = 1;
+ 		res->running = 0;
+ 		res->assigned_key = key;
+ 		res->substream = substream;
+-		spin_unlock_irq(&bus->reg_lock);
+ 	}
++	spin_unlock_irq(&bus->reg_lock);
+ 	return res;
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_stream_assign);
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 61e1de6d7be0a..6cdb3db7507b1 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -30,6 +30,7 @@ struct config_entry {
+ 	u32 flags;
+ 	u16 device;
+ 	const struct dmi_system_id *dmi_table;
++	u8 codec_hid[ACPI_ID_LEN];
+ };
+ 
+ /*
+@@ -55,7 +56,7 @@ static const struct config_entry config_table[] = {
+ /*
+  * Apollolake (Broxton-P)
+  * the legacy HDAudio driver is used except on Up Squared (SOF) and
+- * Chromebooks (SST)
++ * Chromebooks (SST), as well as devices based on the ES8336 codec
+  */
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_APOLLOLAKE)
+ 	{
+@@ -72,6 +73,11 @@ static const struct config_entry config_table[] = {
+ 			{}
+ 		}
+ 	},
++	{
++		.flags = FLAG_SOF,
++		.device = 0x5a98,
++		.codec_hid = "ESSX8336",
++	},
+ #endif
+ #if IS_ENABLED(CONFIG_SND_SOC_INTEL_APL)
+ 	{
+@@ -136,7 +142,7 @@ static const struct config_entry config_table[] = {
+ 
+ /*
+  * Geminilake uses legacy HDAudio driver except for Google
+- * Chromebooks
++ * Chromebooks and devices based on the ES8336 codec
+  */
+ /* Geminilake */
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_GEMINILAKE)
+@@ -153,6 +159,11 @@ static const struct config_entry config_table[] = {
+ 			{}
+ 		}
+ 	},
++	{
++		.flags = FLAG_SOF,
++		.device = 0x3198,
++		.codec_hid = "ESSX8336",
++	},
+ #endif
+ 
+ /*
+@@ -310,6 +321,11 @@ static const struct config_entry config_table[] = {
+ 		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ 		.device = 0x43c8,
+ 	},
++	{
++		.flags = FLAG_SOF,
++		.device = 0xa0c8,
++		.codec_hid = "ESSX8336",
++	},
+ #endif
+ 
+ /* Elkhart Lake */
+@@ -337,6 +353,8 @@ static const struct config_entry *snd_intel_dsp_find_config
+ 			continue;
+ 		if (table->dmi_table && !dmi_check_system(table->dmi_table))
+ 			continue;
++		if (table->codec_hid[0] && !acpi_dev_present(table->codec_hid, NULL, -1))
++			continue;
+ 		return table;
+ 	}
+ 	return NULL;
+diff --git a/sound/isa/Kconfig b/sound/isa/Kconfig
+index 6ffa48dd59830..570b88e0b2018 100644
+--- a/sound/isa/Kconfig
++++ b/sound/isa/Kconfig
+@@ -22,7 +22,7 @@ config SND_SB16_DSP
+ menuconfig SND_ISA
+ 	bool "ISA sound devices"
+ 	depends on ISA || COMPILE_TEST
+-	depends on ISA_DMA_API
++	depends on ISA_DMA_API && !M68K
+ 	default y
+ 	help
+ 	  Support for sound devices connected via the ISA bus.
+diff --git a/sound/isa/gus/gus_dma.c b/sound/isa/gus/gus_dma.c
+index a1c770d826dda..6d664dd8dde0b 100644
+--- a/sound/isa/gus/gus_dma.c
++++ b/sound/isa/gus/gus_dma.c
+@@ -126,6 +126,8 @@ static void snd_gf1_dma_interrupt(struct snd_gus_card * gus)
+ 	}
+ 	block = snd_gf1_dma_next_block(gus);
+ 	spin_unlock(&gus->dma_lock);
++	if (!block)
++		return;
+ 	snd_gf1_dma_program(gus, block->addr, block->buf_addr, block->count, (unsigned short) block->cmd);
+ 	kfree(block);
+ #if 0
+diff --git a/sound/pci/Kconfig b/sound/pci/Kconfig
+index 93bc9bef7641f..41ce125971777 100644
+--- a/sound/pci/Kconfig
++++ b/sound/pci/Kconfig
+@@ -279,6 +279,7 @@ config SND_CS46XX_NEW_DSP
+ config SND_CS5530
+ 	tristate "CS5530 Audio"
+ 	depends on ISA_DMA_API && (X86_32 || COMPILE_TEST)
++	depends on !M68K
+ 	select SND_SB16_DSP
+ 	help
+ 	  Say Y here to include support for audio on Cyrix/NatSemi CS5530 chips.
+diff --git a/sound/soc/codecs/nau8824.c b/sound/soc/codecs/nau8824.c
+index 15bd8335f6678..c8ccfa2fff848 100644
+--- a/sound/soc/codecs/nau8824.c
++++ b/sound/soc/codecs/nau8824.c
+@@ -8,6 +8,7 @@
+ 
+ #include <linux/module.h>
+ #include <linux/delay.h>
++#include <linux/dmi.h>
+ #include <linux/init.h>
+ #include <linux/i2c.h>
+ #include <linux/regmap.h>
+@@ -27,6 +28,12 @@
+ 
+ #include "nau8824.h"
+ 
++#define NAU8824_JD_ACTIVE_HIGH			BIT(0)
++
++static int nau8824_quirk;
++static int quirk_override = -1;
++module_param_named(quirk, quirk_override, uint, 0444);
++MODULE_PARM_DESC(quirk, "Board-specific quirk override");
+ 
+ static int nau8824_config_sysclk(struct nau8824 *nau8824,
+ 	int clk_id, unsigned int freq);
+@@ -1875,6 +1882,34 @@ static int nau8824_read_device_properties(struct device *dev,
+ 	return 0;
+ }
+ 
++/* Please keep this list alphabetically sorted */
++static const struct dmi_system_id nau8824_quirk_table[] = {
++	{
++		/* Cyberbook T116 rugged tablet */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Default string"),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "20170531"),
++		},
++		.driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH),
++	},
++	{}
++};
++
++static void nau8824_check_quirks(void)
++{
++	const struct dmi_system_id *dmi_id;
++
++	if (quirk_override != -1) {
++		nau8824_quirk = quirk_override;
++		return;
++	}
++
++	dmi_id = dmi_first_match(nau8824_quirk_table);
++	if (dmi_id)
++		nau8824_quirk = (unsigned long)dmi_id->driver_data;
++}
++
+ static int nau8824_i2c_probe(struct i2c_client *i2c,
+ 	const struct i2c_device_id *id)
+ {
+@@ -1899,6 +1934,11 @@ static int nau8824_i2c_probe(struct i2c_client *i2c,
+ 	nau8824->irq = i2c->irq;
+ 	sema_init(&nau8824->jd_sem, 1);
+ 
++	nau8824_check_quirks();
++
++	if (nau8824_quirk & NAU8824_JD_ACTIVE_HIGH)
++		nau8824->jkdet_polarity = 0;
++
+ 	nau8824_print_device_properties(nau8824);
+ 
+ 	ret = regmap_read(nau8824->regmap, NAU8824_REG_I2C_DEVICE_ID, &value);
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 08960167d34f5..2924d89bf0daf 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -2555,8 +2555,13 @@ static struct snd_soc_dapm_widget *dapm_find_widget(
+ 	return NULL;
+ }
+ 
+-static int snd_soc_dapm_set_pin(struct snd_soc_dapm_context *dapm,
+-				const char *pin, int status)
++/*
++ * set the DAPM pin status:
++ * returns 1 when the value has been updated, 0 when unchanged, or a negative
++ * error code; called from kcontrol put callback
++ */
++static int __snd_soc_dapm_set_pin(struct snd_soc_dapm_context *dapm,
++				  const char *pin, int status)
+ {
+ 	struct snd_soc_dapm_widget *w = dapm_find_widget(dapm, pin, true);
+ 	int ret = 0;
+@@ -2582,6 +2587,18 @@ static int snd_soc_dapm_set_pin(struct snd_soc_dapm_context *dapm,
+ 	return ret;
+ }
+ 
++/*
++ * similar as __snd_soc_dapm_set_pin(), but returns 0 when successful;
++ * called from several API functions below
++ */
++static int snd_soc_dapm_set_pin(struct snd_soc_dapm_context *dapm,
++				const char *pin, int status)
++{
++	int ret = __snd_soc_dapm_set_pin(dapm, pin, status);
++
++	return ret < 0 ? ret : 0;
++}
++
+ /**
+  * snd_soc_dapm_sync_unlocked - scan and power dapm paths
+  * @dapm: DAPM context
+@@ -3586,10 +3603,10 @@ int snd_soc_dapm_put_pin_switch(struct snd_kcontrol *kcontrol,
+ 	const char *pin = (const char *)kcontrol->private_value;
+ 	int ret;
+ 
+-	if (ucontrol->value.integer.value[0])
+-		ret = snd_soc_dapm_enable_pin(&card->dapm, pin);
+-	else
+-		ret = snd_soc_dapm_disable_pin(&card->dapm, pin);
++	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
++	ret = __snd_soc_dapm_set_pin(&card->dapm, pin,
++				     !!ucontrol->value.integer.value[0]);
++	mutex_unlock(&card->dapm_mutex);
+ 
+ 	snd_soc_dapm_sync(&card->dapm);
+ 	return ret;
+diff --git a/sound/soc/sof/intel/hda-dai.c b/sound/soc/sof/intel/hda-dai.c
+index c6cb8c212eca5..ef316311e959a 100644
+--- a/sound/soc/sof/intel/hda-dai.c
++++ b/sound/soc/sof/intel/hda-dai.c
+@@ -68,6 +68,7 @@ static struct hdac_ext_stream *
+ 		return NULL;
+ 	}
+ 
++	spin_lock_irq(&bus->reg_lock);
+ 	list_for_each_entry(stream, &bus->stream_list, list) {
+ 		struct hdac_ext_stream *hstream =
+ 			stream_to_hdac_ext_stream(stream);
+@@ -107,12 +108,12 @@ static struct hdac_ext_stream *
+ 		 * is updated in snd_hdac_ext_stream_decouple().
+ 		 */
+ 		if (!res->decoupled)
+-			snd_hdac_ext_stream_decouple(bus, res, true);
+-		spin_lock_irq(&bus->reg_lock);
++			snd_hdac_ext_stream_decouple_locked(bus, res, true);
++
+ 		res->link_locked = 1;
+ 		res->link_substream = substream;
+-		spin_unlock_irq(&bus->reg_lock);
+ 	}
++	spin_unlock_irq(&bus->reg_lock);
+ 
+ 	return res;
+ }
+diff --git a/tools/perf/bench/futex-lock-pi.c b/tools/perf/bench/futex-lock-pi.c
+index bb25d8beb3b85..159bc89e6a79a 100644
+--- a/tools/perf/bench/futex-lock-pi.c
++++ b/tools/perf/bench/futex-lock-pi.c
+@@ -226,6 +226,7 @@ int bench_futex_lock_pi(int argc, const char **argv)
+ 	print_summary();
+ 
+ 	free(worker);
++	perf_cpu_map__put(cpu);
+ 	return ret;
+ err:
+ 	usage_with_options(bench_futex_lock_pi_usage, options);
+diff --git a/tools/perf/bench/futex-requeue.c b/tools/perf/bench/futex-requeue.c
+index 7a15c2e610228..105b36cdc42d3 100644
+--- a/tools/perf/bench/futex-requeue.c
++++ b/tools/perf/bench/futex-requeue.c
+@@ -216,6 +216,7 @@ int bench_futex_requeue(int argc, const char **argv)
+ 	print_summary();
+ 
+ 	free(worker);
++	perf_cpu_map__put(cpu);
+ 	return ret;
+ err:
+ 	usage_with_options(bench_futex_requeue_usage, options);
+diff --git a/tools/perf/bench/futex-wake-parallel.c b/tools/perf/bench/futex-wake-parallel.c
+index cd2b81a845acb..a129c94eb3fe1 100644
+--- a/tools/perf/bench/futex-wake-parallel.c
++++ b/tools/perf/bench/futex-wake-parallel.c
+@@ -320,6 +320,7 @@ int bench_futex_wake_parallel(int argc, const char **argv)
+ 	print_summary();
+ 
+ 	free(blocked_worker);
++	perf_cpu_map__put(cpu);
+ 	return ret;
+ }
+ #endif /* HAVE_PTHREAD_BARRIER */
+diff --git a/tools/perf/bench/futex-wake.c b/tools/perf/bench/futex-wake.c
+index 2dfcef3e371e4..507ff533612c6 100644
+--- a/tools/perf/bench/futex-wake.c
++++ b/tools/perf/bench/futex-wake.c
+@@ -210,5 +210,6 @@ int bench_futex_wake(int argc, const char **argv)
+ 	print_summary();
+ 
+ 	free(worker);
++	perf_cpu_map__put(cpu);
+ 	return ret;
+ }
+diff --git a/tools/perf/tests/shell/record+zstd_comp_decomp.sh b/tools/perf/tests/shell/record+zstd_comp_decomp.sh
+index 045723b3d9928..c62af807198de 100755
+--- a/tools/perf/tests/shell/record+zstd_comp_decomp.sh
++++ b/tools/perf/tests/shell/record+zstd_comp_decomp.sh
+@@ -12,7 +12,7 @@ skip_if_no_z_record() {
+ 
+ collect_z_record() {
+ 	echo "Collecting compressed record file:"
+-	[[ "$(uname -m)" != s390x ]] && gflag='-g'
++	[ "$(uname -m)" != s390x ] && gflag='-g'
+ 	$perf_tool record -o $trace_file $gflag -z -F 5000 -- \
+ 		dd count=500 if=/dev/urandom of=/dev/null
+ }
+diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
+index c8101575dbf45..4eb02762104ba 100644
+--- a/tools/perf/util/bpf-event.c
++++ b/tools/perf/util/bpf-event.c
+@@ -109,7 +109,11 @@ static int perf_env__fetch_btf(struct perf_env *env,
+ 	node->data_size = data_size;
+ 	memcpy(node->data, data, data_size);
+ 
+-	perf_env__insert_btf(env, node);
++	if (!perf_env__insert_btf(env, node)) {
++		/* Insertion failed because of a duplicate. */
++		free(node);
++		return -1;
++	}
+ 	return 0;
+ }
+ 
+diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
+index f0dceb527ca38..d81ed1bc14bdc 100644
+--- a/tools/perf/util/env.c
++++ b/tools/perf/util/env.c
+@@ -71,12 +71,13 @@ out:
+ 	return node;
+ }
+ 
+-void perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node)
++bool perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node)
+ {
+ 	struct rb_node *parent = NULL;
+ 	__u32 btf_id = btf_node->id;
+ 	struct btf_node *node;
+ 	struct rb_node **p;
++	bool ret = true;
+ 
+ 	down_write(&env->bpf_progs.lock);
+ 	p = &env->bpf_progs.btfs.rb_node;
+@@ -90,6 +91,7 @@ void perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node)
+ 			p = &(*p)->rb_right;
+ 		} else {
+ 			pr_debug("duplicated btf %u\n", btf_id);
++			ret = false;
+ 			goto out;
+ 		}
+ 	}
+@@ -99,6 +101,7 @@ void perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node)
+ 	env->bpf_progs.btfs_cnt++;
+ out:
+ 	up_write(&env->bpf_progs.lock);
++	return ret;
+ }
+ 
+ struct btf_node *perf_env__find_btf(struct perf_env *env, __u32 btf_id)
+diff --git a/tools/perf/util/env.h b/tools/perf/util/env.h
+index a129726520064..01378a955dd5e 100644
+--- a/tools/perf/util/env.h
++++ b/tools/perf/util/env.h
+@@ -143,7 +143,7 @@ void perf_env__insert_bpf_prog_info(struct perf_env *env,
+ 				    struct bpf_prog_info_node *info_node);
+ struct bpf_prog_info_node *perf_env__find_bpf_prog_info(struct perf_env *env,
+ 							__u32 prog_id);
+-void perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node);
++bool perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node);
+ struct btf_node *perf_env__find_btf(struct perf_env *env, __u32 btf_id);
+ 
+ int perf_env__numa_node(struct perf_env *env, int cpu);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-12-01 12:49 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-12-01 12:49 UTC (permalink / raw
  To: gentoo-commits

commit:     13a0ba67243adf18e654cb01e36e3016691fbdfc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec  1 12:49:07 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec  1 12:49:07 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=13a0ba67

Linux patch 5.10.83

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1082_linux-5.10.83.patch | 5729 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5733 insertions(+)

diff --git a/0000_README b/0000_README
index db2b4487..7050b7a7 100644
--- a/0000_README
+++ b/0000_README
@@ -371,6 +371,10 @@ Patch:  1081_linux-5.10.82.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.82
 
+Patch:  1082_linux-5.10.83.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.83
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1082_linux-5.10.83.patch b/1082_linux-5.10.83.patch
new file mode 100644
index 00000000..3905baee
--- /dev/null
+++ b/1082_linux-5.10.83.patch
@@ -0,0 +1,5729 @@
+diff --git a/Documentation/networking/ipvs-sysctl.rst b/Documentation/networking/ipvs-sysctl.rst
+index 2afccc63856ee..1cfbf1add2fc9 100644
+--- a/Documentation/networking/ipvs-sysctl.rst
++++ b/Documentation/networking/ipvs-sysctl.rst
+@@ -37,8 +37,7 @@ conn_reuse_mode - INTEGER
+ 
+ 	0: disable any special handling on port reuse. The new
+ 	connection will be delivered to the same real server that was
+-	servicing the previous connection. This will effectively
+-	disable expire_nodest_conn.
++	servicing the previous connection.
+ 
+ 	bit 1: enable rescheduling of new connections when it is safe.
+ 	That is, whenever expire_nodest_conn and for TCP sockets, when
+diff --git a/Makefile b/Makefile
+index 84b15766ad66f..4646baabfe783 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 82
++SUBLEVEL = 83
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/bcm2711.dtsi b/arch/arm/boot/dts/bcm2711.dtsi
+index 398ecd7b9b68b..4ade854bdcdaf 100644
+--- a/arch/arm/boot/dts/bcm2711.dtsi
++++ b/arch/arm/boot/dts/bcm2711.dtsi
+@@ -480,11 +480,17 @@
+ 			#address-cells = <3>;
+ 			#interrupt-cells = <1>;
+ 			#size-cells = <2>;
+-			interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
++			interrupts = <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "pcie", "msi";
+ 			interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+ 			interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143
++							IRQ_TYPE_LEVEL_HIGH>,
++					<0 0 0 2 &gicv2 GIC_SPI 144
++							IRQ_TYPE_LEVEL_HIGH>,
++					<0 0 0 3 &gicv2 GIC_SPI 145
++							IRQ_TYPE_LEVEL_HIGH>,
++					<0 0 0 4 &gicv2 GIC_SPI 146
+ 							IRQ_TYPE_LEVEL_HIGH>;
+ 			msi-controller;
+ 			msi-parent = <&pcie0>;
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index 72b0df6910bd5..9fdad20c40d17 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -242,6 +242,8 @@
+ 
+ 			gpio-controller;
+ 			#gpio-cells = <2>;
++			interrupt-controller;
++			#interrupt-cells = <2>;
+ 		};
+ 
+ 		pcie0: pcie@12000 {
+@@ -408,7 +410,7 @@
+ 	i2c0: i2c@18009000 {
+ 		compatible = "brcm,iproc-i2c";
+ 		reg = <0x18009000 0x50>;
+-		interrupts = <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>;
++		interrupts = <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>;
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 		clock-frequency = <100000>;
+diff --git a/arch/arm/mach-socfpga/core.h b/arch/arm/mach-socfpga/core.h
+index fc2608b18a0d0..18f01190dcfd4 100644
+--- a/arch/arm/mach-socfpga/core.h
++++ b/arch/arm/mach-socfpga/core.h
+@@ -33,7 +33,7 @@ extern void __iomem *sdr_ctl_base_addr;
+ u32 socfpga_sdram_self_refresh(u32 sdr_base);
+ extern unsigned int socfpga_sdram_self_refresh_sz;
+ 
+-extern char secondary_trampoline, secondary_trampoline_end;
++extern char secondary_trampoline[], secondary_trampoline_end[];
+ 
+ extern unsigned long socfpga_cpu1start_addr;
+ 
+diff --git a/arch/arm/mach-socfpga/platsmp.c b/arch/arm/mach-socfpga/platsmp.c
+index fbb80b883e5dd..201191cf68f32 100644
+--- a/arch/arm/mach-socfpga/platsmp.c
++++ b/arch/arm/mach-socfpga/platsmp.c
+@@ -20,14 +20,14 @@
+ 
+ static int socfpga_boot_secondary(unsigned int cpu, struct task_struct *idle)
+ {
+-	int trampoline_size = &secondary_trampoline_end - &secondary_trampoline;
++	int trampoline_size = secondary_trampoline_end - secondary_trampoline;
+ 
+ 	if (socfpga_cpu1start_addr) {
+ 		/* This will put CPU #1 into reset. */
+ 		writel(RSTMGR_MPUMODRST_CPU1,
+ 		       rst_manager_base_addr + SOCFPGA_RSTMGR_MODMPURST);
+ 
+-		memcpy(phys_to_virt(0), &secondary_trampoline, trampoline_size);
++		memcpy(phys_to_virt(0), secondary_trampoline, trampoline_size);
+ 
+ 		writel(__pa_symbol(secondary_startup),
+ 		       sys_manager_base_addr + (socfpga_cpu1start_addr & 0x000000ff));
+@@ -45,12 +45,12 @@ static int socfpga_boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 
+ static int socfpga_a10_boot_secondary(unsigned int cpu, struct task_struct *idle)
+ {
+-	int trampoline_size = &secondary_trampoline_end - &secondary_trampoline;
++	int trampoline_size = secondary_trampoline_end - secondary_trampoline;
+ 
+ 	if (socfpga_cpu1start_addr) {
+ 		writel(RSTMGR_MPUMODRST_CPU1, rst_manager_base_addr +
+ 		       SOCFPGA_A10_RSTMGR_MODMPURST);
+-		memcpy(phys_to_virt(0), &secondary_trampoline, trampoline_size);
++		memcpy(phys_to_virt(0), secondary_trampoline, trampoline_size);
+ 
+ 		writel(__pa_symbol(secondary_startup),
+ 		       sys_manager_base_addr + (socfpga_cpu1start_addr & 0x00000fff));
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 94a748e95231b..23d756fe0fd6c 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -3189,7 +3189,7 @@ config STACKTRACE_SUPPORT
+ config PGTABLE_LEVELS
+ 	int
+ 	default 4 if PAGE_SIZE_4KB && MIPS_VA_BITS_48
+-	default 3 if 64BIT && !PAGE_SIZE_64KB
++	default 3 if 64BIT && (!PAGE_SIZE_64KB || MIPS_VA_BITS_48)
+ 	default 2
+ 
+ config MIPS_AUTO_PFN_OFFSET
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index 067cb3eb16141..d120201910acf 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1721,8 +1721,6 @@ static inline void decode_cpucfg(struct cpuinfo_mips *c)
+ 
+ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ {
+-	decode_configs(c);
+-
+ 	/* All Loongson processors covered here define ExcCode 16 as GSExc. */
+ 	c->options |= MIPS_CPU_GSEXCEX;
+ 
+@@ -1783,6 +1781,8 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ 		panic("Unknown Loongson Processor ID!");
+ 		break;
+ 	}
++
++	decode_configs(c);
+ }
+ #else
+ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu) { }
+diff --git a/arch/parisc/kernel/vmlinux.lds.S b/arch/parisc/kernel/vmlinux.lds.S
+index 3d208afd15bc6..2769eb991f58d 100644
+--- a/arch/parisc/kernel/vmlinux.lds.S
++++ b/arch/parisc/kernel/vmlinux.lds.S
+@@ -57,8 +57,6 @@ SECTIONS
+ {
+ 	. = KERNEL_BINARY_TEXT_START;
+ 
+-	_stext = .;	/* start of kernel text, includes init code & data */
+-
+ 	__init_begin = .;
+ 	HEAD_TEXT_SECTION
+ 	MLONGCALL_DISCARD(INIT_TEXT_SECTION(8))
+@@ -82,6 +80,7 @@ SECTIONS
+ 	/* freed after init ends here */
+ 
+ 	_text = .;		/* Text and read-only data */
++	_stext = .;
+ 	MLONGCALL_KEEP(INIT_TEXT_SECTION(8))
+ 	.text ALIGN(PAGE_SIZE) : {
+ 		TEXT_TEXT
+diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
+index f8e3d15ddf694..abb057a86739b 100644
+--- a/arch/powerpc/kernel/head_32.h
++++ b/arch/powerpc/kernel/head_32.h
+@@ -333,11 +333,11 @@ label:
+ 	mfspr	r1, SPRN_SPRG_THREAD
+ 	lwz	r1, TASK_CPU - THREAD(r1)
+ 	slwi	r1, r1, 3
+-	addis	r1, r1, emergency_ctx@ha
++	addis	r1, r1, emergency_ctx-PAGE_OFFSET@ha
+ #else
+-	lis	r1, emergency_ctx@ha
++	lis	r1, emergency_ctx-PAGE_OFFSET@ha
+ #endif
+-	lwz	r1, emergency_ctx@l(r1)
++	lwz	r1, emergency_ctx-PAGE_OFFSET@l(r1)
+ 	addi	r1, r1, THREAD_SIZE - INT_FRAME_SIZE
+ 	EXCEPTION_PROLOG_2
+ 	SAVE_NVGPRS(r11)
+diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
+index 4621905bdd9ea..121fca2bcd82b 100644
+--- a/arch/powerpc/kvm/book3s_hv_builtin.c
++++ b/arch/powerpc/kvm/book3s_hv_builtin.c
+@@ -867,6 +867,7 @@ static void flush_guest_tlb(struct kvm *kvm)
+ 				       "r" (0) : "memory");
+ 		}
+ 		asm volatile("ptesync": : :"memory");
++		// POWER9 congruence-class TLBIEL leaves ERAT. Flush it now.
+ 		asm volatile(PPC_RADIX_INVALIDATE_ERAT_GUEST : : :"memory");
+ 	} else {
+ 		for (set = 0; set < kvm->arch.tlb_sets; ++set) {
+@@ -877,7 +878,9 @@ static void flush_guest_tlb(struct kvm *kvm)
+ 			rb += PPC_BIT(51);	/* increment set number */
+ 		}
+ 		asm volatile("ptesync": : :"memory");
+-		asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT : : :"memory");
++		// POWER9 congruence-class TLBIEL leaves ERAT. Flush it now.
++		if (cpu_has_feature(CPU_FTR_ARCH_300))
++			asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT : : :"memory");
+ 	}
+ }
+ 
+diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
+index 18205f851c247..fabaedddc90cb 100644
+--- a/arch/s390/mm/pgtable.c
++++ b/arch/s390/mm/pgtable.c
+@@ -988,6 +988,7 @@ EXPORT_SYMBOL(get_guest_storage_key);
+ int pgste_perform_essa(struct mm_struct *mm, unsigned long hva, int orc,
+ 			unsigned long *oldpte, unsigned long *oldpgste)
+ {
++	struct vm_area_struct *vma;
+ 	unsigned long pgstev;
+ 	spinlock_t *ptl;
+ 	pgste_t pgste;
+@@ -997,6 +998,10 @@ int pgste_perform_essa(struct mm_struct *mm, unsigned long hva, int orc,
+ 	WARN_ON_ONCE(orc > ESSA_MAX);
+ 	if (unlikely(orc > ESSA_MAX))
+ 		return -EINVAL;
++
++	vma = find_vma(mm, hva);
++	if (!vma || hva < vma->vm_start || is_vm_hugetlb_page(vma))
++		return -EFAULT;
+ 	ptep = get_locked_pte(mm, hva, &ptl);
+ 	if (unlikely(!ptep))
+ 		return -EFAULT;
+@@ -1089,10 +1094,14 @@ EXPORT_SYMBOL(pgste_perform_essa);
+ int set_pgste_bits(struct mm_struct *mm, unsigned long hva,
+ 			unsigned long bits, unsigned long value)
+ {
++	struct vm_area_struct *vma;
+ 	spinlock_t *ptl;
+ 	pgste_t new;
+ 	pte_t *ptep;
+ 
++	vma = find_vma(mm, hva);
++	if (!vma || hva < vma->vm_start || is_vm_hugetlb_page(vma))
++		return -EFAULT;
+ 	ptep = get_locked_pte(mm, hva, &ptl);
+ 	if (unlikely(!ptep))
+ 		return -EFAULT;
+@@ -1117,9 +1126,13 @@ EXPORT_SYMBOL(set_pgste_bits);
+  */
+ int get_pgste(struct mm_struct *mm, unsigned long hva, unsigned long *pgstep)
+ {
++	struct vm_area_struct *vma;
+ 	spinlock_t *ptl;
+ 	pte_t *ptep;
+ 
++	vma = find_vma(mm, hva);
++	if (!vma || hva < vma->vm_start || is_vm_hugetlb_page(vma))
++		return -EFAULT;
+ 	ptep = get_locked_pte(mm, hva, &ptl);
+ 	if (unlikely(!ptep))
+ 		return -EFAULT;
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index e3dd64aa43737..18bd428f11ac0 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -1110,15 +1110,10 @@ struct fwnode_handle *acpi_node_get_parent(const struct fwnode_handle *fwnode)
+ 		/* All data nodes have parent pointer so just return that */
+ 		return to_acpi_data_node(fwnode)->parent;
+ 	} else if (is_acpi_device_node(fwnode)) {
+-		acpi_handle handle, parent_handle;
++		struct device *dev = to_acpi_device_node(fwnode)->dev.parent;
+ 
+-		handle = to_acpi_device_node(fwnode)->handle;
+-		if (ACPI_SUCCESS(acpi_get_parent(handle, &parent_handle))) {
+-			struct acpi_device *adev;
+-
+-			if (!acpi_bus_get_device(parent_handle, &adev))
+-				return acpi_fwnode_handle(adev);
+-		}
++		if (dev)
++			return acpi_fwnode_handle(to_acpi_device(dev));
+ 	}
+ 
+ 	return NULL;
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 8f14ad7ab5bd8..a1255971e50ce 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -3091,7 +3091,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 		t->from = thread;
+ 	else
+ 		t->from = NULL;
+-	t->sender_euid = proc->cred->euid;
++	t->sender_euid = task_euid(proc->tsk);
+ 	t->to_proc = target_proc;
+ 	t->to_thread = target_thread;
+ 	t->code = tr->code;
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 10078a7435644..ff7b62597b525 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -80,6 +80,7 @@ enum blkif_state {
+ 	BLKIF_STATE_DISCONNECTED,
+ 	BLKIF_STATE_CONNECTED,
+ 	BLKIF_STATE_SUSPENDED,
++	BLKIF_STATE_ERROR,
+ };
+ 
+ struct grant {
+@@ -89,6 +90,7 @@ struct grant {
+ };
+ 
+ enum blk_req_status {
++	REQ_PROCESSING,
+ 	REQ_WAITING,
+ 	REQ_DONE,
+ 	REQ_ERROR,
+@@ -543,10 +545,10 @@ static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
+ 
+ 	id = get_id_from_freelist(rinfo);
+ 	rinfo->shadow[id].request = req;
+-	rinfo->shadow[id].status = REQ_WAITING;
++	rinfo->shadow[id].status = REQ_PROCESSING;
+ 	rinfo->shadow[id].associated_id = NO_ASSOCIATED_ID;
+ 
+-	(*ring_req)->u.rw.id = id;
++	rinfo->shadow[id].req.u.rw.id = id;
+ 
+ 	return id;
+ }
+@@ -554,11 +556,12 @@ static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
+ static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_info *rinfo)
+ {
+ 	struct blkfront_info *info = rinfo->dev_info;
+-	struct blkif_request *ring_req;
++	struct blkif_request *ring_req, *final_ring_req;
+ 	unsigned long id;
+ 
+ 	/* Fill out a communications ring structure. */
+-	id = blkif_ring_get_request(rinfo, req, &ring_req);
++	id = blkif_ring_get_request(rinfo, req, &final_ring_req);
++	ring_req = &rinfo->shadow[id].req;
+ 
+ 	ring_req->operation = BLKIF_OP_DISCARD;
+ 	ring_req->u.discard.nr_sectors = blk_rq_sectors(req);
+@@ -569,8 +572,9 @@ static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_inf
+ 	else
+ 		ring_req->u.discard.flag = 0;
+ 
+-	/* Keep a private copy so we can reissue requests when recovering. */
+-	rinfo->shadow[id].req = *ring_req;
++	/* Copy the request to the ring page. */
++	*final_ring_req = *ring_req;
++	rinfo->shadow[id].status = REQ_WAITING;
+ 
+ 	return 0;
+ }
+@@ -703,6 +707,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
+ {
+ 	struct blkfront_info *info = rinfo->dev_info;
+ 	struct blkif_request *ring_req, *extra_ring_req = NULL;
++	struct blkif_request *final_ring_req, *final_extra_ring_req = NULL;
+ 	unsigned long id, extra_id = NO_ASSOCIATED_ID;
+ 	bool require_extra_req = false;
+ 	int i;
+@@ -747,7 +752,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
+ 	}
+ 
+ 	/* Fill out a communications ring structure. */
+-	id = blkif_ring_get_request(rinfo, req, &ring_req);
++	id = blkif_ring_get_request(rinfo, req, &final_ring_req);
++	ring_req = &rinfo->shadow[id].req;
+ 
+ 	num_sg = blk_rq_map_sg(req->q, req, rinfo->shadow[id].sg);
+ 	num_grant = 0;
+@@ -798,7 +804,9 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
+ 		ring_req->u.rw.nr_segments = num_grant;
+ 		if (unlikely(require_extra_req)) {
+ 			extra_id = blkif_ring_get_request(rinfo, req,
+-							  &extra_ring_req);
++							  &final_extra_ring_req);
++			extra_ring_req = &rinfo->shadow[extra_id].req;
++
+ 			/*
+ 			 * Only the first request contains the scatter-gather
+ 			 * list.
+@@ -840,10 +848,13 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
+ 	if (setup.segments)
+ 		kunmap_atomic(setup.segments);
+ 
+-	/* Keep a private copy so we can reissue requests when recovering. */
+-	rinfo->shadow[id].req = *ring_req;
+-	if (unlikely(require_extra_req))
+-		rinfo->shadow[extra_id].req = *extra_ring_req;
++	/* Copy request(s) to the ring page. */
++	*final_ring_req = *ring_req;
++	rinfo->shadow[id].status = REQ_WAITING;
++	if (unlikely(require_extra_req)) {
++		*final_extra_ring_req = *extra_ring_req;
++		rinfo->shadow[extra_id].status = REQ_WAITING;
++	}
+ 
+ 	if (new_persistent_gnts)
+ 		gnttab_free_grant_references(setup.gref_head);
+@@ -1415,8 +1426,8 @@ static enum blk_req_status blkif_rsp_to_req_status(int rsp)
+ static int blkif_get_final_status(enum blk_req_status s1,
+ 				  enum blk_req_status s2)
+ {
+-	BUG_ON(s1 == REQ_WAITING);
+-	BUG_ON(s2 == REQ_WAITING);
++	BUG_ON(s1 < REQ_DONE);
++	BUG_ON(s2 < REQ_DONE);
+ 
+ 	if (s1 == REQ_ERROR || s2 == REQ_ERROR)
+ 		return BLKIF_RSP_ERROR;
+@@ -1449,7 +1460,7 @@ static bool blkif_completion(unsigned long *id,
+ 		s->status = blkif_rsp_to_req_status(bret->status);
+ 
+ 		/* Wait the second response if not yet here. */
+-		if (s2->status == REQ_WAITING)
++		if (s2->status < REQ_DONE)
+ 			return false;
+ 
+ 		bret->status = blkif_get_final_status(s->status,
+@@ -1557,7 +1568,7 @@ static bool blkif_completion(unsigned long *id,
+ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ {
+ 	struct request *req;
+-	struct blkif_response *bret;
++	struct blkif_response bret;
+ 	RING_IDX i, rp;
+ 	unsigned long flags;
+ 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
+@@ -1568,54 +1579,76 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 
+ 	spin_lock_irqsave(&rinfo->ring_lock, flags);
+  again:
+-	rp = rinfo->ring.sring->rsp_prod;
+-	rmb(); /* Ensure we see queued responses up to 'rp'. */
++	rp = READ_ONCE(rinfo->ring.sring->rsp_prod);
++	virt_rmb(); /* Ensure we see queued responses up to 'rp'. */
++	if (RING_RESPONSE_PROD_OVERFLOW(&rinfo->ring, rp)) {
++		pr_alert("%s: illegal number of responses %u\n",
++			 info->gd->disk_name, rp - rinfo->ring.rsp_cons);
++		goto err;
++	}
+ 
+ 	for (i = rinfo->ring.rsp_cons; i != rp; i++) {
+ 		unsigned long id;
++		unsigned int op;
++
++		RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
++		id = bret.id;
+ 
+-		bret = RING_GET_RESPONSE(&rinfo->ring, i);
+-		id   = bret->id;
+ 		/*
+ 		 * The backend has messed up and given us an id that we would
+ 		 * never have given to it (we stamp it up to BLK_RING_SIZE -
+ 		 * look in get_id_from_freelist.
+ 		 */
+ 		if (id >= BLK_RING_SIZE(info)) {
+-			WARN(1, "%s: response to %s has incorrect id (%ld)\n",
+-			     info->gd->disk_name, op_name(bret->operation), id);
+-			/* We can't safely get the 'struct request' as
+-			 * the id is busted. */
+-			continue;
++			pr_alert("%s: response has incorrect id (%ld)\n",
++				 info->gd->disk_name, id);
++			goto err;
+ 		}
++		if (rinfo->shadow[id].status != REQ_WAITING) {
++			pr_alert("%s: response references no pending request\n",
++				 info->gd->disk_name);
++			goto err;
++		}
++
++		rinfo->shadow[id].status = REQ_PROCESSING;
+ 		req  = rinfo->shadow[id].request;
+ 
+-		if (bret->operation != BLKIF_OP_DISCARD) {
++		op = rinfo->shadow[id].req.operation;
++		if (op == BLKIF_OP_INDIRECT)
++			op = rinfo->shadow[id].req.u.indirect.indirect_op;
++		if (bret.operation != op) {
++			pr_alert("%s: response has wrong operation (%u instead of %u)\n",
++				 info->gd->disk_name, bret.operation, op);
++			goto err;
++		}
++
++		if (bret.operation != BLKIF_OP_DISCARD) {
+ 			/*
+ 			 * We may need to wait for an extra response if the
+ 			 * I/O request is split in 2
+ 			 */
+-			if (!blkif_completion(&id, rinfo, bret))
++			if (!blkif_completion(&id, rinfo, &bret))
+ 				continue;
+ 		}
+ 
+ 		if (add_id_to_freelist(rinfo, id)) {
+ 			WARN(1, "%s: response to %s (id %ld) couldn't be recycled!\n",
+-			     info->gd->disk_name, op_name(bret->operation), id);
++			     info->gd->disk_name, op_name(bret.operation), id);
+ 			continue;
+ 		}
+ 
+-		if (bret->status == BLKIF_RSP_OKAY)
++		if (bret.status == BLKIF_RSP_OKAY)
+ 			blkif_req(req)->error = BLK_STS_OK;
+ 		else
+ 			blkif_req(req)->error = BLK_STS_IOERR;
+ 
+-		switch (bret->operation) {
++		switch (bret.operation) {
+ 		case BLKIF_OP_DISCARD:
+-			if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
++			if (unlikely(bret.status == BLKIF_RSP_EOPNOTSUPP)) {
+ 				struct request_queue *rq = info->rq;
+-				printk(KERN_WARNING "blkfront: %s: %s op failed\n",
+-					   info->gd->disk_name, op_name(bret->operation));
++
++				pr_warn_ratelimited("blkfront: %s: %s op failed\n",
++					   info->gd->disk_name, op_name(bret.operation));
+ 				blkif_req(req)->error = BLK_STS_NOTSUPP;
+ 				info->feature_discard = 0;
+ 				info->feature_secdiscard = 0;
+@@ -1625,15 +1658,15 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 			break;
+ 		case BLKIF_OP_FLUSH_DISKCACHE:
+ 		case BLKIF_OP_WRITE_BARRIER:
+-			if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
+-				printk(KERN_WARNING "blkfront: %s: %s op failed\n",
+-				       info->gd->disk_name, op_name(bret->operation));
++			if (unlikely(bret.status == BLKIF_RSP_EOPNOTSUPP)) {
++				pr_warn_ratelimited("blkfront: %s: %s op failed\n",
++				       info->gd->disk_name, op_name(bret.operation));
+ 				blkif_req(req)->error = BLK_STS_NOTSUPP;
+ 			}
+-			if (unlikely(bret->status == BLKIF_RSP_ERROR &&
++			if (unlikely(bret.status == BLKIF_RSP_ERROR &&
+ 				     rinfo->shadow[id].req.u.rw.nr_segments == 0)) {
+-				printk(KERN_WARNING "blkfront: %s: empty %s op failed\n",
+-				       info->gd->disk_name, op_name(bret->operation));
++				pr_warn_ratelimited("blkfront: %s: empty %s op failed\n",
++				       info->gd->disk_name, op_name(bret.operation));
+ 				blkif_req(req)->error = BLK_STS_NOTSUPP;
+ 			}
+ 			if (unlikely(blkif_req(req)->error)) {
+@@ -1646,9 +1679,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 			fallthrough;
+ 		case BLKIF_OP_READ:
+ 		case BLKIF_OP_WRITE:
+-			if (unlikely(bret->status != BLKIF_RSP_OKAY))
+-				dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
+-					"request: %x\n", bret->status);
++			if (unlikely(bret.status != BLKIF_RSP_OKAY))
++				dev_dbg_ratelimited(&info->xbdev->dev,
++					"Bad return from blkdev data request: %#x\n",
++					bret.status);
+ 
+ 			break;
+ 		default:
+@@ -1674,6 +1708,14 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+ 
+ 	return IRQ_HANDLED;
++
++ err:
++	info->connected = BLKIF_STATE_ERROR;
++
++	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
++
++	pr_alert("%s disabled for further use\n", info->gd->disk_name);
++	return IRQ_HANDLED;
+ }
+ 
+ 
+diff --git a/drivers/firmware/arm_scmi/scmi_pm_domain.c b/drivers/firmware/arm_scmi/scmi_pm_domain.c
+index 9e44479f02842..a4e4aa9a35426 100644
+--- a/drivers/firmware/arm_scmi/scmi_pm_domain.c
++++ b/drivers/firmware/arm_scmi/scmi_pm_domain.c
+@@ -106,9 +106,7 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
+ 	scmi_pd_data->domains = domains;
+ 	scmi_pd_data->num_domains = num_domains;
+ 
+-	of_genpd_add_provider_onecell(np, scmi_pd_data);
+-
+-	return 0;
++	return of_genpd_add_provider_onecell(np, scmi_pd_data);
+ }
+ 
+ static const struct scmi_device_id scmi_id_table[] = {
+diff --git a/drivers/firmware/smccc/soc_id.c b/drivers/firmware/smccc/soc_id.c
+index 581aa5e9b0778..dd7c3d5e8b0bb 100644
+--- a/drivers/firmware/smccc/soc_id.c
++++ b/drivers/firmware/smccc/soc_id.c
+@@ -50,7 +50,7 @@ static int __init smccc_soc_init(void)
+ 	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+ 			     ARM_SMCCC_ARCH_SOC_ID, &res);
+ 
+-	if (res.a0 == SMCCC_RET_NOT_SUPPORTED) {
++	if ((int)res.a0 == SMCCC_RET_NOT_SUPPORTED) {
+ 		pr_info("ARCH_SOC_ID not implemented, skipping ....\n");
+ 		return 0;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index c7d6a677d86d8..bea451a39d601 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -137,6 +137,11 @@ MODULE_FIRMWARE("amdgpu/green_sardine_rlc.bin");
+ #define mmTCP_CHAN_STEER_5_ARCT								0x0b0c
+ #define mmTCP_CHAN_STEER_5_ARCT_BASE_IDX							0
+ 
++#define mmGOLDEN_TSC_COUNT_UPPER_Renoir                0x0025
++#define mmGOLDEN_TSC_COUNT_UPPER_Renoir_BASE_IDX       1
++#define mmGOLDEN_TSC_COUNT_LOWER_Renoir                0x0026
++#define mmGOLDEN_TSC_COUNT_LOWER_Renoir_BASE_IDX       1
++
+ enum ta_ras_gfx_subblock {
+ 	/*CPC*/
+ 	TA_RAS_BLOCK__GFX_CPC_INDEX_START = 0,
+@@ -4147,19 +4152,38 @@ failed_kiq_read:
+ 
+ static uint64_t gfx_v9_0_get_gpu_clock_counter(struct amdgpu_device *adev)
+ {
+-	uint64_t clock;
++	uint64_t clock, clock_lo, clock_hi, hi_check;
+ 
+-	amdgpu_gfx_off_ctrl(adev, false);
+-	mutex_lock(&adev->gfx.gpu_clock_mutex);
+-	if (adev->asic_type == CHIP_VEGA10 && amdgpu_sriov_runtime(adev)) {
+-		clock = gfx_v9_0_kiq_read_clock(adev);
+-	} else {
+-		WREG32_SOC15(GC, 0, mmRLC_CAPTURE_GPU_CLOCK_COUNT, 1);
+-		clock = (uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_LSB) |
+-			((uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_MSB) << 32ULL);
++	switch (adev->asic_type) {
++	case CHIP_RENOIR:
++		preempt_disable();
++		clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER_Renoir);
++		clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER_Renoir);
++		hi_check = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER_Renoir);
++		/* The SMUIO TSC clock frequency is 100MHz, which sets 32-bit carry over
++		 * roughly every 42 seconds.
++		 */
++		if (hi_check != clock_hi) {
++			clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER_Renoir);
++			clock_hi = hi_check;
++		}
++		preempt_enable();
++		clock = clock_lo | (clock_hi << 32ULL);
++		break;
++	default:
++		amdgpu_gfx_off_ctrl(adev, false);
++		mutex_lock(&adev->gfx.gpu_clock_mutex);
++		if (adev->asic_type == CHIP_VEGA10 && amdgpu_sriov_runtime(adev)) {
++			clock = gfx_v9_0_kiq_read_clock(adev);
++		} else {
++			WREG32_SOC15(GC, 0, mmRLC_CAPTURE_GPU_CLOCK_COUNT, 1);
++			clock = (uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_LSB) |
++				((uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_MSB) << 32ULL);
++		}
++		mutex_unlock(&adev->gfx.gpu_clock_mutex);
++		amdgpu_gfx_off_ctrl(adev, true);
++		break;
+ 	}
+-	mutex_unlock(&adev->gfx.gpu_clock_mutex);
+-	amdgpu_gfx_off_ctrl(adev, true);
+ 	return clock;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index d9525fbedad2d..a5b6f36fe1d72 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1963,8 +1963,8 @@ static int dm_resume(void *handle)
+ 
+ 		for (i = 0; i < dc_state->stream_count; i++) {
+ 			dc_state->streams[i]->mode_changed = true;
+-			for (j = 0; j < dc_state->stream_status->plane_count; j++) {
+-				dc_state->stream_status->plane_states[j]->update_flags.raw
++			for (j = 0; j < dc_state->stream_status[i].plane_count; j++) {
++				dc_state->stream_status[i].plane_states[j]->update_flags.raw
+ 					= 0xffffffff;
+ 			}
+ 		}
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gm200.c
+index cd41b2e6cc879..18502fd6ebaa0 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gm200.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gm200.c
+@@ -207,11 +207,13 @@ int
+ gm200_acr_wpr_parse(struct nvkm_acr *acr)
+ {
+ 	const struct wpr_header *hdr = (void *)acr->wpr_fw->data;
++	struct nvkm_acr_lsfw *lsfw;
+ 
+ 	while (hdr->falcon_id != WPR_HEADER_V0_FALCON_ID_INVALID) {
+ 		wpr_header_dump(&acr->subdev, hdr);
+-		if (!nvkm_acr_lsfw_add(NULL, acr, NULL, (hdr++)->falcon_id))
+-			return -ENOMEM;
++		lsfw = nvkm_acr_lsfw_add(NULL, acr, NULL, (hdr++)->falcon_id);
++		if (IS_ERR(lsfw))
++			return PTR_ERR(lsfw);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp102.c
+index 80eb9d8dbc803..e5c8303a5b7b7 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp102.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp102.c
+@@ -161,11 +161,13 @@ int
+ gp102_acr_wpr_parse(struct nvkm_acr *acr)
+ {
+ 	const struct wpr_header_v1 *hdr = (void *)acr->wpr_fw->data;
++	struct nvkm_acr_lsfw *lsfw;
+ 
+ 	while (hdr->falcon_id != WPR_HEADER_V1_FALCON_ID_INVALID) {
+ 		wpr_header_v1_dump(&acr->subdev, hdr);
+-		if (!nvkm_acr_lsfw_add(NULL, acr, NULL, (hdr++)->falcon_id))
+-			return -ENOMEM;
++		lsfw = nvkm_acr_lsfw_add(NULL, acr, NULL, (hdr++)->falcon_id);
++		if (IS_ERR(lsfw))
++			return PTR_ERR(lsfw);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
+index cc74a3f3a07af..9006b9861c90c 100644
+--- a/drivers/gpu/drm/vc4/vc4_bo.c
++++ b/drivers/gpu/drm/vc4/vc4_bo.c
+@@ -389,7 +389,7 @@ struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size)
+ 
+ 	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
+ 	if (!bo)
+-		return ERR_PTR(-ENOMEM);
++		return NULL;
+ 
+ 	bo->madv = VC4_MADV_WILLNEED;
+ 	refcount_set(&bo->usecnt, 0);
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index b2719cf37aa52..c25274275258f 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2578,6 +2578,9 @@ static void wacom_wac_finger_event(struct hid_device *hdev,
+ 		return;
+ 
+ 	switch (equivalent_usage) {
++	case HID_DG_CONFIDENCE:
++		wacom_wac->hid_data.confidence = value;
++		break;
+ 	case HID_GD_X:
+ 		wacom_wac->hid_data.x = value;
+ 		break;
+@@ -2610,7 +2613,8 @@ static void wacom_wac_finger_event(struct hid_device *hdev,
+ 	}
+ 
+ 	if (usage->usage_index + 1 == field->report_count) {
+-		if (equivalent_usage == wacom_wac->hid_data.last_slot_field)
++		if (equivalent_usage == wacom_wac->hid_data.last_slot_field &&
++		    wacom_wac->hid_data.confidence)
+ 			wacom_wac_finger_slot(wacom_wac, wacom_wac->touch_input);
+ 	}
+ }
+@@ -2625,6 +2629,8 @@ static void wacom_wac_finger_pre_report(struct hid_device *hdev,
+ 
+ 	wacom_wac->is_invalid_bt_frame = false;
+ 
++	hid_data->confidence = true;
++
+ 	for (i = 0; i < report->maxfield; i++) {
+ 		struct hid_field *field = report->field[i];
+ 		int j;
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index e3835407e8d23..8dea7cb298e69 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -300,6 +300,7 @@ struct hid_data {
+ 	bool tipswitch;
+ 	bool barrelswitch;
+ 	bool barrelswitch2;
++	bool confidence;
+ 	int x;
+ 	int y;
+ 	int pressure;
+diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c
+index 5ecc0bc608ec6..fb61bdca4c2c1 100644
+--- a/drivers/iommu/amd/iommu_v2.c
++++ b/drivers/iommu/amd/iommu_v2.c
+@@ -927,10 +927,8 @@ static int __init amd_iommu_v2_init(void)
+ {
+ 	int ret;
+ 
+-	pr_info("AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>\n");
+-
+ 	if (!amd_iommu_v2_supported()) {
+-		pr_info("AMD IOMMUv2 functionality not available on this system\n");
++		pr_info("AMD IOMMUv2 functionality not available on this system - This is not a bug.\n");
+ 		/*
+ 		 * Load anyway to provide the symbols to other modules
+ 		 * which may use AMD IOMMUv2 optionally.
+@@ -947,6 +945,8 @@ static int __init amd_iommu_v2_init(void)
+ 
+ 	amd_iommu_register_ppr_notifier(&ppr_nb);
+ 
++	pr_info("AMD IOMMUv2 loaded and initialized\n");
++
+ 	return 0;
+ 
+ out:
+diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c
+index d5d5d28d0b36a..2e5698fbc3a87 100644
+--- a/drivers/media/cec/core/cec-adap.c
++++ b/drivers/media/cec/core/cec-adap.c
+@@ -1199,6 +1199,7 @@ void cec_received_msg_ts(struct cec_adapter *adap,
+ 			if (abort)
+ 				dst->rx_status |= CEC_RX_STATUS_FEATURE_ABORT;
+ 			msg->flags = dst->flags;
++			msg->sequence = dst->sequence;
+ 			/* Remove it from the wait_queue */
+ 			list_del_init(&data->list);
+ 
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 20cbd71cba9d9..a4bd85b200a3e 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -263,7 +263,6 @@ static struct esdhc_soc_data usdhc_imx8qxp_data = {
+ 	.flags = ESDHC_FLAG_USDHC | ESDHC_FLAG_STD_TUNING
+ 			| ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200
+ 			| ESDHC_FLAG_HS400 | ESDHC_FLAG_HS400_ES
+-			| ESDHC_FLAG_CQHCI
+ 			| ESDHC_FLAG_STATE_LOST_IN_LPMODE
+ 			| ESDHC_FLAG_CLK_RATE_LOST_IN_PM_RUNTIME,
+ };
+@@ -272,7 +271,6 @@ static struct esdhc_soc_data usdhc_imx8mm_data = {
+ 	.flags = ESDHC_FLAG_USDHC | ESDHC_FLAG_STD_TUNING
+ 			| ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200
+ 			| ESDHC_FLAG_HS400 | ESDHC_FLAG_HS400_ES
+-			| ESDHC_FLAG_CQHCI
+ 			| ESDHC_FLAG_STATE_LOST_IN_LPMODE,
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 07d131fac7606..d42e86cdff12e 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -772,7 +772,19 @@ static void sdhci_adma_table_pre(struct sdhci_host *host,
+ 			len -= offset;
+ 		}
+ 
+-		BUG_ON(len > 65536);
++		/*
++		 * The block layer forces a minimum segment size of PAGE_SIZE,
++		 * so 'len' can be too big here if PAGE_SIZE >= 64KiB. Write
++		 * multiple descriptors, noting that the ADMA table is sized
++		 * for 4KiB chunks anyway, so it will be big enough.
++		 */
++		while (len > host->max_adma) {
++			int n = 32 * 1024; /* 32KiB*/
++
++			__sdhci_adma_write_desc(host, &desc, addr, n, ADMA2_TRAN_VALID);
++			addr += n;
++			len -= n;
++		}
+ 
+ 		/* tran, valid */
+ 		if (len)
+@@ -3948,6 +3960,7 @@ struct sdhci_host *sdhci_alloc_host(struct device *dev,
+ 	 * descriptor for each segment, plus 1 for a nop end descriptor.
+ 	 */
+ 	host->adma_table_cnt = SDHCI_MAX_SEGS * 2 + 1;
++	host->max_adma = 65536;
+ 
+ 	return host;
+ }
+@@ -4611,10 +4624,12 @@ int sdhci_setup_host(struct sdhci_host *host)
+ 	 * be larger than 64 KiB though.
+ 	 */
+ 	if (host->flags & SDHCI_USE_ADMA) {
+-		if (host->quirks & SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC)
++		if (host->quirks & SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC) {
++			host->max_adma = 65532; /* 32-bit alignment */
+ 			mmc->max_seg_size = 65535;
+-		else
++		} else {
+ 			mmc->max_seg_size = 65536;
++		}
+ 	} else {
+ 		mmc->max_seg_size = mmc->max_req_size;
+ 	}
+diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
+index 960fed78529e1..8b1650f37fbba 100644
+--- a/drivers/mmc/host/sdhci.h
++++ b/drivers/mmc/host/sdhci.h
+@@ -338,7 +338,8 @@ struct sdhci_adma2_64_desc {
+ 
+ /*
+  * Maximum segments assuming a 512KiB maximum requisition size and a minimum
+- * 4KiB page size.
++ * 4KiB page size. Note this also allows enough for multiple descriptors in
++ * case of PAGE_SIZE >= 64KiB.
+  */
+ #define SDHCI_MAX_SEGS		128
+ 
+@@ -540,6 +541,7 @@ struct sdhci_host {
+ 	unsigned int blocks;	/* remaining PIO blocks */
+ 
+ 	int sg_count;		/* Mapped sg entries */
++	int max_adma;		/* Max. length in ADMA descriptor */
+ 
+ 	void *adma_table;	/* ADMA descriptor table */
+ 	void *align_buffer;	/* Bounce buffer */
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index e27af38f6b161..6e7da1dc2e8c3 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -679,9 +679,9 @@ static int hclgevf_set_rss_tc_mode(struct hclgevf_dev *hdev,  u16 rss_size)
+ 	roundup_size = ilog2(roundup_size);
+ 
+ 	for (i = 0; i < HCLGEVF_MAX_TC_NUM; i++) {
+-		tc_valid[i] = !!(hdev->hw_tc_map & BIT(i));
++		tc_valid[i] = 1;
+ 		tc_size[i] = roundup_size;
+-		tc_offset[i] = rss_size * i;
++		tc_offset[i] = (hdev->hw_tc_map & BIT(i)) ? rss_size * i : 0;
+ 	}
+ 
+ 	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_RSS_TC_MODE, false);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index ea85b06857fa2..90f5ec982d513 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -719,12 +719,31 @@ static int iavf_get_per_queue_coalesce(struct net_device *netdev, u32 queue,
+  *
+  * Change the ITR settings for a specific queue.
+  **/
+-static void iavf_set_itr_per_queue(struct iavf_adapter *adapter,
+-				   struct ethtool_coalesce *ec, int queue)
++static int iavf_set_itr_per_queue(struct iavf_adapter *adapter,
++				  struct ethtool_coalesce *ec, int queue)
+ {
+ 	struct iavf_ring *rx_ring = &adapter->rx_rings[queue];
+ 	struct iavf_ring *tx_ring = &adapter->tx_rings[queue];
+ 	struct iavf_q_vector *q_vector;
++	u16 itr_setting;
++
++	itr_setting = rx_ring->itr_setting & ~IAVF_ITR_DYNAMIC;
++
++	if (ec->rx_coalesce_usecs != itr_setting &&
++	    ec->use_adaptive_rx_coalesce) {
++		netif_info(adapter, drv, adapter->netdev,
++			   "Rx interrupt throttling cannot be changed if adaptive-rx is enabled\n");
++		return -EINVAL;
++	}
++
++	itr_setting = tx_ring->itr_setting & ~IAVF_ITR_DYNAMIC;
++
++	if (ec->tx_coalesce_usecs != itr_setting &&
++	    ec->use_adaptive_tx_coalesce) {
++		netif_info(adapter, drv, adapter->netdev,
++			   "Tx interrupt throttling cannot be changed if adaptive-tx is enabled\n");
++		return -EINVAL;
++	}
+ 
+ 	rx_ring->itr_setting = ITR_REG_ALIGN(ec->rx_coalesce_usecs);
+ 	tx_ring->itr_setting = ITR_REG_ALIGN(ec->tx_coalesce_usecs);
+@@ -747,6 +766,7 @@ static void iavf_set_itr_per_queue(struct iavf_adapter *adapter,
+ 	 * the Tx and Rx ITR values based on the values we have entered
+ 	 * into the q_vector, no need to write the values now.
+ 	 */
++	return 0;
+ }
+ 
+ /**
+@@ -788,9 +808,11 @@ static int __iavf_set_coalesce(struct net_device *netdev,
+ 	 */
+ 	if (queue < 0) {
+ 		for (i = 0; i < adapter->num_active_queues; i++)
+-			iavf_set_itr_per_queue(adapter, ec, i);
++			if (iavf_set_itr_per_queue(adapter, ec, i))
++				return -EINVAL;
+ 	} else if (queue < adapter->num_active_queues) {
+-		iavf_set_itr_per_queue(adapter, ec, queue);
++		if (iavf_set_itr_per_queue(adapter, ec, queue))
++			return -EINVAL;
+ 	} else {
+ 		netif_info(adapter, drv, netdev, "Invalid queue value, queue range is 0 - %d\n",
+ 			   adapter->num_active_queues - 1);
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index dc944d605a741..52ac6cc08e83e 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -83,8 +83,13 @@ static int ice_vsi_alloc_arrays(struct ice_vsi *vsi)
+ 	if (!vsi->rx_rings)
+ 		goto err_rings;
+ 
+-	/* XDP will have vsi->alloc_txq Tx queues as well, so double the size */
+-	vsi->txq_map = devm_kcalloc(dev, (2 * vsi->alloc_txq),
++	/* txq_map needs to have enough space to track both Tx (stack) rings
++	 * and XDP rings; at this point vsi->num_xdp_txq might not be set,
++	 * so use num_possible_cpus() as we want to always provide XDP ring
++	 * per CPU, regardless of queue count settings from user that might
++	 * have come from ethtool's set_channels() callback;
++	 */
++	vsi->txq_map = devm_kcalloc(dev, (vsi->alloc_txq + num_possible_cpus()),
+ 				    sizeof(*vsi->txq_map), GFP_KERNEL);
+ 
+ 	if (!vsi->txq_map)
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 5b67d24b2b5ed..746a5bd178d3b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2397,7 +2397,18 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog)
+ 			ice_stat_str(status));
+ 		goto clear_xdp_rings;
+ 	}
+-	ice_vsi_assign_bpf_prog(vsi, prog);
++
++	/* assign the prog only when it's not already present on VSI;
++	 * this flow is a subject of both ethtool -L and ndo_bpf flows;
++	 * VSI rebuild that happens under ethtool -L can expose us to
++	 * the bpf_prog refcount issues as we would be swapping same
++	 * bpf_prog pointers from vsi->xdp_prog and calling bpf_prog_put
++	 * on it as it would be treated as an 'old_prog'; for ndo_bpf
++	 * this is not harmful as dev_xdp_install bumps the refcount
++	 * before calling the op exposed by the driver;
++	 */
++	if (!ice_is_xdp_ena_vsi(vsi))
++		ice_vsi_assign_bpf_prog(vsi, prog);
+ 
+ 	return 0;
+ clear_xdp_rings:
+@@ -2527,6 +2538,11 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 		if (xdp_ring_err)
+ 			NL_SET_ERR_MSG_MOD(extack, "Freeing XDP Tx resources failed");
+ 	} else {
++		/* safe to call even when prog == vsi->xdp_prog as
++		 * dev_xdp_install in net/core/dev.c incremented prog's
++		 * refcount so corresponding bpf_prog_put won't cause
++		 * underflow
++		 */
+ 		ice_vsi_assign_bpf_prog(vsi, prog);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index e24fb122c03a2..d5432d1448c05 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -8032,7 +8032,7 @@ static int igb_poll(struct napi_struct *napi, int budget)
+ 	if (likely(napi_complete_done(napi, work_done)))
+ 		igb_ring_irq_enable(q_vector);
+ 
+-	return min(work_done, budget - 1);
++	return work_done;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index ec9b6c564300e..e220d44df2e65 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -4652,11 +4652,13 @@ static int mvpp2_change_mtu(struct net_device *dev, int mtu)
+ 		mtu = ALIGN(MVPP2_RX_PKT_SIZE(mtu), 8);
+ 	}
+ 
++	if (port->xdp_prog && mtu > MVPP2_MAX_RX_BUF_SIZE) {
++		netdev_err(dev, "Illegal MTU value %d (> %d) for XDP mode\n",
++			   mtu, (int)MVPP2_MAX_RX_BUF_SIZE);
++		return -EINVAL;
++	}
++
+ 	if (MVPP2_RX_PKT_SIZE(mtu) > MVPP2_BM_LONG_PKT_SIZE) {
+-		if (port->xdp_prog) {
+-			netdev_err(dev, "Jumbo frames are not supported with XDP\n");
+-			return -EINVAL;
+-		}
+ 		if (priv->percpu_pools) {
+ 			netdev_warn(dev, "mtu %d too high, switching to shared buffers", mtu);
+ 			mvpp2_bm_switch_buffers(priv, false);
+@@ -4942,8 +4944,8 @@ static int mvpp2_xdp_setup(struct mvpp2_port *port, struct netdev_bpf *bpf)
+ 	bool running = netif_running(port->dev);
+ 	bool reset = !prog != !port->xdp_prog;
+ 
+-	if (port->dev->mtu > ETH_DATA_LEN) {
+-		NL_SET_ERR_MSG_MOD(bpf->extack, "XDP is not supported with jumbo frames enabled");
++	if (port->dev->mtu > MVPP2_MAX_RX_BUF_SIZE) {
++		NL_SET_ERR_MSG_MOD(bpf->extack, "MTU too large for XDP");
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c b/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
+index 7d83e1f91ef17..9101d00e96b9d 100644
+--- a/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
++++ b/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
+@@ -439,8 +439,8 @@ static int prestera_port_bridge_join(struct prestera_port *port,
+ 
+ 	br_port = prestera_bridge_port_add(bridge, port->dev);
+ 	if (IS_ERR(br_port)) {
+-		err = PTR_ERR(br_port);
+-		goto err_brport_create;
++		prestera_bridge_put(bridge);
++		return PTR_ERR(br_port);
+ 	}
+ 
+ 	if (bridge->vlan_enabled)
+@@ -454,8 +454,6 @@ static int prestera_port_bridge_join(struct prestera_port *port,
+ 
+ err_port_join:
+ 	prestera_bridge_port_put(br_port);
+-err_brport_create:
+-	prestera_bridge_put(bridge);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/minimal.c b/drivers/net/ethernet/mellanox/mlxsw/minimal.c
+index c010db2c9dba9..443dc44452ef8 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/minimal.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/minimal.c
+@@ -234,6 +234,7 @@ static void mlxsw_m_port_remove(struct mlxsw_m *mlxsw_m, u8 local_port)
+ static int mlxsw_m_port_module_map(struct mlxsw_m *mlxsw_m, u8 local_port,
+ 				   u8 *last_module)
+ {
++	unsigned int max_ports = mlxsw_core_max_ports(mlxsw_m->core);
+ 	u8 module, width;
+ 	int err;
+ 
+@@ -249,6 +250,9 @@ static int mlxsw_m_port_module_map(struct mlxsw_m *mlxsw_m, u8 local_port,
+ 	if (module == *last_module)
+ 		return 0;
+ 	*last_module = module;
++
++	if (WARN_ON_ONCE(module >= max_ports))
++		return -EINVAL;
+ 	mlxsw_m->module_to_port[module] = ++mlxsw_m->max_ports;
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index b08853f71b2be..4110e15c22c79 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -2052,9 +2052,14 @@ static void mlxsw_sp_pude_event_func(const struct mlxsw_reg_info *reg,
+ 	struct mlxsw_sp *mlxsw_sp = priv;
+ 	struct mlxsw_sp_port *mlxsw_sp_port;
+ 	enum mlxsw_reg_pude_oper_status status;
++	unsigned int max_ports;
+ 	u8 local_port;
+ 
++	max_ports = mlxsw_core_max_ports(mlxsw_sp->core);
+ 	local_port = mlxsw_reg_pude_local_port_get(pude_pl);
++
++	if (WARN_ON_ONCE(!local_port || local_port >= max_ports))
++		return;
+ 	mlxsw_sp_port = mlxsw_sp->ports[local_port];
+ 	if (!mlxsw_sp_port)
+ 		return;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c
+index ca8090a28dec6..50eca2daad843 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c
+@@ -568,10 +568,13 @@ void mlxsw_sp1_ptp_got_timestamp(struct mlxsw_sp *mlxsw_sp, bool ingress,
+ 				 u8 domain_number, u16 sequence_id,
+ 				 u64 timestamp)
+ {
++	unsigned int max_ports = mlxsw_core_max_ports(mlxsw_sp->core);
+ 	struct mlxsw_sp_port *mlxsw_sp_port;
+ 	struct mlxsw_sp1_ptp_key key;
+ 	u8 types;
+ 
++	if (WARN_ON_ONCE(local_port >= max_ports))
++		return;
+ 	mlxsw_sp_port = mlxsw_sp->ports[local_port];
+ 	if (!mlxsw_sp_port)
+ 		return;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 4381f8c6c3fb7..53128382fc2e0 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -2177,6 +2177,7 @@ static void mlxsw_sp_router_neigh_ent_ipv4_process(struct mlxsw_sp *mlxsw_sp,
+ 						   char *rauhtd_pl,
+ 						   int ent_index)
+ {
++	u64 max_rifs = MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_RIFS);
+ 	struct net_device *dev;
+ 	struct neighbour *n;
+ 	__be32 dipn;
+@@ -2185,6 +2186,8 @@ static void mlxsw_sp_router_neigh_ent_ipv4_process(struct mlxsw_sp *mlxsw_sp,
+ 
+ 	mlxsw_reg_rauhtd_ent_ipv4_unpack(rauhtd_pl, ent_index, &rif, &dip);
+ 
++	if (WARN_ON_ONCE(rif >= max_rifs))
++		return;
+ 	if (!mlxsw_sp->router->rifs[rif]) {
+ 		dev_err_ratelimited(mlxsw_sp->bus_info->dev, "Incorrect RIF in neighbour entry\n");
+ 		return;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index 6501ce94ace58..368fa0e5ad315 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -2410,6 +2410,7 @@ static void mlxsw_sp_fdb_notify_mac_process(struct mlxsw_sp *mlxsw_sp,
+ 					    char *sfn_pl, int rec_index,
+ 					    bool adding)
+ {
++	unsigned int max_ports = mlxsw_core_max_ports(mlxsw_sp->core);
+ 	struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan;
+ 	struct mlxsw_sp_bridge_device *bridge_device;
+ 	struct mlxsw_sp_bridge_port *bridge_port;
+@@ -2422,6 +2423,9 @@ static void mlxsw_sp_fdb_notify_mac_process(struct mlxsw_sp *mlxsw_sp,
+ 	int err;
+ 
+ 	mlxsw_reg_sfn_mac_unpack(sfn_pl, rec_index, mac, &fid, &local_port);
++
++	if (WARN_ON_ONCE(local_port >= max_ports))
++		return;
+ 	mlxsw_sp_port = mlxsw_sp->ports[local_port];
+ 	if (!mlxsw_sp_port) {
+ 		dev_err_ratelimited(mlxsw_sp->bus_info->dev, "Incorrect local port in FDB notification\n");
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index 3eea8cf076c48..481f89d193f77 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -922,8 +922,7 @@ static int lan743x_phy_reset(struct lan743x_adapter *adapter)
+ }
+ 
+ static void lan743x_phy_update_flowcontrol(struct lan743x_adapter *adapter,
+-					   u8 duplex, u16 local_adv,
+-					   u16 remote_adv)
++					   u16 local_adv, u16 remote_adv)
+ {
+ 	struct lan743x_phy *phy = &adapter->phy;
+ 	u8 cap;
+@@ -951,7 +950,6 @@ static void lan743x_phy_link_status_change(struct net_device *netdev)
+ 
+ 	phy_print_status(phydev);
+ 	if (phydev->state == PHY_RUNNING) {
+-		struct ethtool_link_ksettings ksettings;
+ 		int remote_advertisement = 0;
+ 		int local_advertisement = 0;
+ 
+@@ -988,18 +986,14 @@ static void lan743x_phy_link_status_change(struct net_device *netdev)
+ 		}
+ 		lan743x_csr_write(adapter, MAC_CR, data);
+ 
+-		memset(&ksettings, 0, sizeof(ksettings));
+-		phy_ethtool_get_link_ksettings(netdev, &ksettings);
+ 		local_advertisement =
+ 			linkmode_adv_to_mii_adv_t(phydev->advertising);
+ 		remote_advertisement =
+ 			linkmode_adv_to_mii_adv_t(phydev->lp_advertising);
+ 
+-		lan743x_phy_update_flowcontrol(adapter,
+-					       ksettings.base.duplex,
+-					       local_advertisement,
++		lan743x_phy_update_flowcontrol(adapter, local_advertisement,
+ 					       remote_advertisement);
+-		lan743x_ptp_update_latency(adapter, ksettings.base.speed);
++		lan743x_ptp_update_latency(adapter, phydev->speed);
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 8c45b236649a9..52401915828a1 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -811,12 +811,6 @@ int ocelot_hwstamp_set(struct ocelot *ocelot, int port, struct ifreq *ifr)
+ 	switch (cfg.rx_filter) {
+ 	case HWTSTAMP_FILTER_NONE:
+ 		break;
+-	case HWTSTAMP_FILTER_ALL:
+-	case HWTSTAMP_FILTER_SOME:
+-	case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
+-	case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+-	case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+-	case HWTSTAMP_FILTER_NTP_ALL:
+ 	case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+ 	case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+ 	case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+@@ -935,7 +929,10 @@ int ocelot_get_ts_info(struct ocelot *ocelot, int port,
+ 				 SOF_TIMESTAMPING_RAW_HARDWARE;
+ 	info->tx_types = BIT(HWTSTAMP_TX_OFF) | BIT(HWTSTAMP_TX_ON) |
+ 			 BIT(HWTSTAMP_TX_ONESTEP_SYNC);
+-	info->rx_filters = BIT(HWTSTAMP_FILTER_NONE) | BIT(HWTSTAMP_FILTER_ALL);
++	info->rx_filters = BIT(HWTSTAMP_FILTER_NONE) |
++			   BIT(HWTSTAMP_FILTER_PTP_V2_EVENT) |
++			   BIT(HWTSTAMP_FILTER_PTP_V2_L2_EVENT) |
++			   BIT(HWTSTAMP_FILTER_PTP_V2_L4_EVENT);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h
+index df5b748be068c..cc2ce452000a3 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net.h
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h
+@@ -557,7 +557,6 @@ struct nfp_net_dp {
+  * @exn_name:           Name for Exception interrupt
+  * @shared_handler:     Handler for shared interrupts
+  * @shared_name:        Name for shared interrupt
+- * @me_freq_mhz:        ME clock_freq (MHz)
+  * @reconfig_lock:	Protects @reconfig_posted, @reconfig_timer_active,
+  *			@reconfig_sync_present and HW reconfiguration request
+  *			regs/machinery from async requests (sync must take
+@@ -640,8 +639,6 @@ struct nfp_net {
+ 	irq_handler_t shared_handler;
+ 	char shared_name[IFNAMSIZ + 8];
+ 
+-	u32 me_freq_mhz;
+-
+ 	bool link_up;
+ 	spinlock_t link_status_lock;
+ 
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index c036a1d0f8de6..cd0c9623f7dd2 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -1347,7 +1347,7 @@ static int nfp_net_set_coalesce(struct net_device *netdev,
+ 	 * ME timestamp ticks.  There are 16 ME clock cycles for each timestamp
+ 	 * count.
+ 	 */
+-	factor = nn->me_freq_mhz / 16;
++	factor = nn->tlv_caps.me_freq_mhz / 16;
+ 
+ 	/* Each pair of (usecs, max_frames) fields specifies that interrupts
+ 	 * should be coalesced until
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index a4ca283e02284..617c960cfb5a5 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -258,6 +258,7 @@ int stmmac_mdio_register(struct net_device *ndev);
+ int stmmac_mdio_reset(struct mii_bus *mii);
+ void stmmac_set_ethtool_ops(struct net_device *netdev);
+ 
++int stmmac_init_tstamp_counter(struct stmmac_priv *priv, u32 systime_flags);
+ void stmmac_ptp_register(struct stmmac_priv *priv);
+ void stmmac_ptp_unregister(struct stmmac_priv *priv);
+ int stmmac_resume(struct device *dev);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 4a75e73f06bbd..a8c5492cb39be 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -47,6 +47,13 @@
+ #include "dwxgmac2.h"
+ #include "hwif.h"
+ 
++/* As long as the interface is active, we keep the timestamping counter enabled
++ * with fine resolution and binary rollover. This avoid non-monotonic behavior
++ * (clock jumps) when changing timestamping settings at runtime.
++ */
++#define STMMAC_HWTS_ACTIVE	(PTP_TCR_TSENA | PTP_TCR_TSCFUPDT | \
++				 PTP_TCR_TSCTRLSSR)
++
+ #define	STMMAC_ALIGN(x)		ALIGN(ALIGN(x, SMP_CACHE_BYTES), 16)
+ #define	TSO_MAX_BUFF_SIZE	(SZ_16K - 1)
+ 
+@@ -508,8 +515,6 @@ static int stmmac_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+ 	struct hwtstamp_config config;
+-	struct timespec64 now;
+-	u64 temp = 0;
+ 	u32 ptp_v2 = 0;
+ 	u32 tstamp_all = 0;
+ 	u32 ptp_over_ipv4_udp = 0;
+@@ -518,11 +523,6 @@ static int stmmac_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
+ 	u32 snap_type_sel = 0;
+ 	u32 ts_master_en = 0;
+ 	u32 ts_event_en = 0;
+-	u32 sec_inc = 0;
+-	u32 value = 0;
+-	bool xmac;
+-
+-	xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
+ 
+ 	if (!(priv->dma_cap.time_stamp || priv->adv_ts)) {
+ 		netdev_alert(priv->dev, "No support for HW time stamping\n");
+@@ -684,42 +684,17 @@ static int stmmac_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
+ 	priv->hwts_rx_en = ((config.rx_filter == HWTSTAMP_FILTER_NONE) ? 0 : 1);
+ 	priv->hwts_tx_en = config.tx_type == HWTSTAMP_TX_ON;
+ 
+-	if (!priv->hwts_tx_en && !priv->hwts_rx_en)
+-		stmmac_config_hw_tstamping(priv, priv->ptpaddr, 0);
+-	else {
+-		value = (PTP_TCR_TSENA | PTP_TCR_TSCFUPDT | PTP_TCR_TSCTRLSSR |
+-			 tstamp_all | ptp_v2 | ptp_over_ethernet |
+-			 ptp_over_ipv6_udp | ptp_over_ipv4_udp | ts_event_en |
+-			 ts_master_en | snap_type_sel);
+-		stmmac_config_hw_tstamping(priv, priv->ptpaddr, value);
+-
+-		/* program Sub Second Increment reg */
+-		stmmac_config_sub_second_increment(priv,
+-				priv->ptpaddr, priv->plat->clk_ptp_rate,
+-				xmac, &sec_inc);
+-		temp = div_u64(1000000000ULL, sec_inc);
+-
+-		/* Store sub second increment and flags for later use */
+-		priv->sub_second_inc = sec_inc;
+-		priv->systime_flags = value;
+-
+-		/* calculate default added value:
+-		 * formula is :
+-		 * addend = (2^32)/freq_div_ratio;
+-		 * where, freq_div_ratio = 1e9ns/sec_inc
+-		 */
+-		temp = (u64)(temp << 32);
+-		priv->default_addend = div_u64(temp, priv->plat->clk_ptp_rate);
+-		stmmac_config_addend(priv, priv->ptpaddr, priv->default_addend);
+-
+-		/* initialize system time */
+-		ktime_get_real_ts64(&now);
++	priv->systime_flags = STMMAC_HWTS_ACTIVE;
+ 
+-		/* lower 32 bits of tv_sec are safe until y2106 */
+-		stmmac_init_systime(priv, priv->ptpaddr,
+-				(u32)now.tv_sec, now.tv_nsec);
++	if (priv->hwts_tx_en || priv->hwts_rx_en) {
++		priv->systime_flags |= tstamp_all | ptp_v2 |
++				       ptp_over_ethernet | ptp_over_ipv6_udp |
++				       ptp_over_ipv4_udp | ts_event_en |
++				       ts_master_en | snap_type_sel;
+ 	}
+ 
++	stmmac_config_hw_tstamping(priv, priv->ptpaddr, priv->systime_flags);
++
+ 	memcpy(&priv->tstamp_config, &config, sizeof(config));
+ 
+ 	return copy_to_user(ifr->ifr_data, &config,
+@@ -747,6 +722,66 @@ static int stmmac_hwtstamp_get(struct net_device *dev, struct ifreq *ifr)
+ 			    sizeof(*config)) ? -EFAULT : 0;
+ }
+ 
++/**
++ * stmmac_init_tstamp_counter - init hardware timestamping counter
++ * @priv: driver private structure
++ * @systime_flags: timestamping flags
++ * Description:
++ * Initialize hardware counter for packet timestamping.
++ * This is valid as long as the interface is open and not suspended.
++ * Will be rerun after resuming from suspend, case in which the timestamping
++ * flags updated by stmmac_hwtstamp_set() also need to be restored.
++ */
++int stmmac_init_tstamp_counter(struct stmmac_priv *priv, u32 systime_flags)
++{
++	bool xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
++	struct timespec64 now;
++	u32 sec_inc = 0;
++	u64 temp = 0;
++	int ret;
++
++	if (!(priv->dma_cap.time_stamp || priv->dma_cap.atime_stamp))
++		return -EOPNOTSUPP;
++
++	ret = clk_prepare_enable(priv->plat->clk_ptp_ref);
++	if (ret < 0) {
++		netdev_warn(priv->dev,
++			    "failed to enable PTP reference clock: %pe\n",
++			    ERR_PTR(ret));
++		return ret;
++	}
++
++	stmmac_config_hw_tstamping(priv, priv->ptpaddr, systime_flags);
++	priv->systime_flags = systime_flags;
++
++	/* program Sub Second Increment reg */
++	stmmac_config_sub_second_increment(priv, priv->ptpaddr,
++					   priv->plat->clk_ptp_rate,
++					   xmac, &sec_inc);
++	temp = div_u64(1000000000ULL, sec_inc);
++
++	/* Store sub second increment for later use */
++	priv->sub_second_inc = sec_inc;
++
++	/* calculate default added value:
++	 * formula is :
++	 * addend = (2^32)/freq_div_ratio;
++	 * where, freq_div_ratio = 1e9ns/sec_inc
++	 */
++	temp = (u64)(temp << 32);
++	priv->default_addend = div_u64(temp, priv->plat->clk_ptp_rate);
++	stmmac_config_addend(priv, priv->ptpaddr, priv->default_addend);
++
++	/* initialize system time */
++	ktime_get_real_ts64(&now);
++
++	/* lower 32 bits of tv_sec are safe until y2106 */
++	stmmac_init_systime(priv, priv->ptpaddr, (u32)now.tv_sec, now.tv_nsec);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(stmmac_init_tstamp_counter);
++
+ /**
+  * stmmac_init_ptp - init PTP
+  * @priv: driver private structure
+@@ -757,9 +792,11 @@ static int stmmac_hwtstamp_get(struct net_device *dev, struct ifreq *ifr)
+ static int stmmac_init_ptp(struct stmmac_priv *priv)
+ {
+ 	bool xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
++	int ret;
+ 
+-	if (!(priv->dma_cap.time_stamp || priv->dma_cap.atime_stamp))
+-		return -EOPNOTSUPP;
++	ret = stmmac_init_tstamp_counter(priv, STMMAC_HWTS_ACTIVE);
++	if (ret)
++		return ret;
+ 
+ 	priv->adv_ts = 0;
+ 	/* Check if adv_ts can be enabled for dwmac 4.x / xgmac core */
+@@ -2721,10 +2758,6 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
+ 	stmmac_mmc_setup(priv);
+ 
+ 	if (init_ptp) {
+-		ret = clk_prepare_enable(priv->plat->clk_ptp_ref);
+-		if (ret < 0)
+-			netdev_warn(priv->dev, "failed to enable PTP reference clock: %d\n", ret);
+-
+ 		ret = stmmac_init_ptp(priv);
+ 		if (ret == -EOPNOTSUPP)
+ 			netdev_warn(priv->dev, "PTP not supported by HW\n");
+@@ -5238,7 +5271,6 @@ int stmmac_suspend(struct device *dev)
+ 	struct net_device *ndev = dev_get_drvdata(dev);
+ 	struct stmmac_priv *priv = netdev_priv(ndev);
+ 	u32 chan;
+-	int ret;
+ 
+ 	if (!ndev || !netif_running(ndev))
+ 		return 0;
+@@ -5280,13 +5312,6 @@ int stmmac_suspend(struct device *dev)
+ 
+ 		stmmac_mac_set(priv, priv->ioaddr, false);
+ 		pinctrl_pm_select_sleep_state(priv->device);
+-		/* Disable clock in case of PWM is off */
+-		clk_disable_unprepare(priv->plat->clk_ptp_ref);
+-		ret = pm_runtime_force_suspend(dev);
+-		if (ret) {
+-			mutex_unlock(&priv->lock);
+-			return ret;
+-		}
+ 	}
+ 	mutex_unlock(&priv->lock);
+ 
+@@ -5351,12 +5376,6 @@ int stmmac_resume(struct device *dev)
+ 		priv->irq_wake = 0;
+ 	} else {
+ 		pinctrl_pm_select_default_state(priv->device);
+-		/* enable the clk previously disabled */
+-		ret = pm_runtime_force_resume(dev);
+-		if (ret)
+-			return ret;
+-		if (priv->plat->clk_ptp_ref)
+-			clk_prepare_enable(priv->plat->clk_ptp_ref);
+ 		/* reset the phy so that it's ready */
+ 		if (priv->mii)
+ 			stmmac_mdio_reset(priv->mii);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 035f9aef4308f..3183d8826981e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -9,6 +9,7 @@
+ *******************************************************************************/
+ 
+ #include <linux/platform_device.h>
++#include <linux/pm_runtime.h>
+ #include <linux/module.h>
+ #include <linux/io.h>
+ #include <linux/of.h>
+@@ -778,9 +779,52 @@ static int __maybe_unused stmmac_runtime_resume(struct device *dev)
+ 	return stmmac_bus_clks_config(priv, true);
+ }
+ 
++static int __maybe_unused stmmac_pltfr_noirq_suspend(struct device *dev)
++{
++	struct net_device *ndev = dev_get_drvdata(dev);
++	struct stmmac_priv *priv = netdev_priv(ndev);
++	int ret;
++
++	if (!netif_running(ndev))
++		return 0;
++
++	if (!device_may_wakeup(priv->device) || !priv->plat->pmt) {
++		/* Disable clock in case of PWM is off */
++		clk_disable_unprepare(priv->plat->clk_ptp_ref);
++
++		ret = pm_runtime_force_suspend(dev);
++		if (ret)
++			return ret;
++	}
++
++	return 0;
++}
++
++static int __maybe_unused stmmac_pltfr_noirq_resume(struct device *dev)
++{
++	struct net_device *ndev = dev_get_drvdata(dev);
++	struct stmmac_priv *priv = netdev_priv(ndev);
++	int ret;
++
++	if (!netif_running(ndev))
++		return 0;
++
++	if (!device_may_wakeup(priv->device) || !priv->plat->pmt) {
++		/* enable the clk previously disabled */
++		ret = pm_runtime_force_resume(dev);
++		if (ret)
++			return ret;
++
++		stmmac_init_tstamp_counter(priv, priv->systime_flags);
++	}
++
++	return 0;
++}
++
+ const struct dev_pm_ops stmmac_pltfr_pm_ops = {
+ 	SET_SYSTEM_SLEEP_PM_OPS(stmmac_pltfr_suspend, stmmac_pltfr_resume)
+ 	SET_RUNTIME_PM_OPS(stmmac_runtime_suspend, stmmac_runtime_resume, NULL)
++	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(stmmac_pltfr_noirq_suspend, stmmac_pltfr_noirq_resume)
+ };
+ EXPORT_SYMBOL_GPL(stmmac_pltfr_pm_ops);
+ 
+diff --git a/drivers/net/mdio/mdio-aspeed.c b/drivers/net/mdio/mdio-aspeed.c
+index cad820568f751..966c3b4ad59d1 100644
+--- a/drivers/net/mdio/mdio-aspeed.c
++++ b/drivers/net/mdio/mdio-aspeed.c
+@@ -61,6 +61,13 @@ static int aspeed_mdio_read(struct mii_bus *bus, int addr, int regnum)
+ 
+ 	iowrite32(ctrl, ctx->base + ASPEED_MDIO_CTRL);
+ 
++	rc = readl_poll_timeout(ctx->base + ASPEED_MDIO_CTRL, ctrl,
++				!(ctrl & ASPEED_MDIO_CTRL_FIRE),
++				ASPEED_MDIO_INTERVAL_US,
++				ASPEED_MDIO_TIMEOUT_US);
++	if (rc < 0)
++		return rc;
++
+ 	rc = readl_poll_timeout(ctx->base + ASPEED_MDIO_DATA, data,
+ 				data & ASPEED_MDIO_DATA_IDLE,
+ 				ASPEED_MDIO_INTERVAL_US,
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 899496f089d2e..57b1b138522e0 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -644,6 +644,7 @@ static void phylink_resolve(struct work_struct *w)
+ 	struct phylink_link_state link_state;
+ 	struct net_device *ndev = pl->netdev;
+ 	bool mac_config = false;
++	bool retrigger = false;
+ 	bool cur_link_state;
+ 
+ 	mutex_lock(&pl->state_mutex);
+@@ -657,6 +658,7 @@ static void phylink_resolve(struct work_struct *w)
+ 		link_state.link = false;
+ 	} else if (pl->mac_link_dropped) {
+ 		link_state.link = false;
++		retrigger = true;
+ 	} else {
+ 		switch (pl->cur_link_an_mode) {
+ 		case MLO_AN_PHY:
+@@ -673,6 +675,19 @@ static void phylink_resolve(struct work_struct *w)
+ 		case MLO_AN_INBAND:
+ 			phylink_mac_pcs_get_state(pl, &link_state);
+ 
++			/* The PCS may have a latching link-fail indicator.
++			 * If the link was up, bring the link down and
++			 * re-trigger the resolve. Otherwise, re-read the
++			 * PCS state to get the current status of the link.
++			 */
++			if (!link_state.link) {
++				if (cur_link_state)
++					retrigger = true;
++				else
++					phylink_mac_pcs_get_state(pl,
++								  &link_state);
++			}
++
+ 			/* If we have a phy, the "up" state is the union of
+ 			 * both the PHY and the MAC */
+ 			if (pl->phydev)
+@@ -680,6 +695,15 @@ static void phylink_resolve(struct work_struct *w)
+ 
+ 			/* Only update if the PHY link is up */
+ 			if (pl->phydev && pl->phy_state.link) {
++				/* If the interface has changed, force a
++				 * link down event if the link isn't already
++				 * down, and re-resolve.
++				 */
++				if (link_state.interface !=
++				    pl->phy_state.interface) {
++					retrigger = true;
++					link_state.link = false;
++				}
+ 				link_state.interface = pl->phy_state.interface;
+ 
+ 				/* If we have a PHY, we need to update with
+@@ -721,7 +745,7 @@ static void phylink_resolve(struct work_struct *w)
+ 		else
+ 			phylink_link_up(pl, link_state);
+ 	}
+-	if (!link_state.link && pl->mac_link_dropped) {
++	if (!link_state.link && retrigger) {
+ 		pl->mac_link_dropped = false;
+ 		queue_work(system_power_efficient_wq, &pl->resolve);
+ 	}
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index dd79534910b05..8505024b89e9e 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -126,21 +126,17 @@ struct netfront_queue {
+ 
+ 	/*
+ 	 * {tx,rx}_skbs store outstanding skbuffs. Free tx_skb entries
+-	 * are linked from tx_skb_freelist through skb_entry.link.
+-	 *
+-	 *  NB. Freelist index entries are always going to be less than
+-	 *  PAGE_OFFSET, whereas pointers to skbs will always be equal or
+-	 *  greater than PAGE_OFFSET: we use this property to distinguish
+-	 *  them.
++	 * are linked from tx_skb_freelist through tx_link.
+ 	 */
+-	union skb_entry {
+-		struct sk_buff *skb;
+-		unsigned long link;
+-	} tx_skbs[NET_TX_RING_SIZE];
++	struct sk_buff *tx_skbs[NET_TX_RING_SIZE];
++	unsigned short tx_link[NET_TX_RING_SIZE];
++#define TX_LINK_NONE 0xffff
++#define TX_PENDING   0xfffe
+ 	grant_ref_t gref_tx_head;
+ 	grant_ref_t grant_tx_ref[NET_TX_RING_SIZE];
+ 	struct page *grant_tx_page[NET_TX_RING_SIZE];
+ 	unsigned tx_skb_freelist;
++	unsigned int tx_pend_queue;
+ 
+ 	spinlock_t   rx_lock ____cacheline_aligned_in_smp;
+ 	struct xen_netif_rx_front_ring rx;
+@@ -173,6 +169,9 @@ struct netfront_info {
+ 	bool netback_has_xdp_headroom;
+ 	bool netfront_xdp_enabled;
+ 
++	/* Is device behaving sane? */
++	bool broken;
++
+ 	atomic_t rx_gso_checksum_fixup;
+ };
+ 
+@@ -181,33 +180,25 @@ struct netfront_rx_info {
+ 	struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX - 1];
+ };
+ 
+-static void skb_entry_set_link(union skb_entry *list, unsigned short id)
+-{
+-	list->link = id;
+-}
+-
+-static int skb_entry_is_link(const union skb_entry *list)
+-{
+-	BUILD_BUG_ON(sizeof(list->skb) != sizeof(list->link));
+-	return (unsigned long)list->skb < PAGE_OFFSET;
+-}
+-
+ /*
+  * Access macros for acquiring freeing slots in tx_skbs[].
+  */
+ 
+-static void add_id_to_freelist(unsigned *head, union skb_entry *list,
+-			       unsigned short id)
++static void add_id_to_list(unsigned *head, unsigned short *list,
++			   unsigned short id)
+ {
+-	skb_entry_set_link(&list[id], *head);
++	list[id] = *head;
+ 	*head = id;
+ }
+ 
+-static unsigned short get_id_from_freelist(unsigned *head,
+-					   union skb_entry *list)
++static unsigned short get_id_from_list(unsigned *head, unsigned short *list)
+ {
+ 	unsigned int id = *head;
+-	*head = list[id].link;
++
++	if (id != TX_LINK_NONE) {
++		*head = list[id];
++		list[id] = TX_LINK_NONE;
++	}
+ 	return id;
+ }
+ 
+@@ -363,7 +354,7 @@ static int xennet_open(struct net_device *dev)
+ 	unsigned int i = 0;
+ 	struct netfront_queue *queue = NULL;
+ 
+-	if (!np->queues)
++	if (!np->queues || np->broken)
+ 		return -ENODEV;
+ 
+ 	for (i = 0; i < num_queues; ++i) {
+@@ -391,27 +382,47 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
+ 	unsigned short id;
+ 	struct sk_buff *skb;
+ 	bool more_to_do;
++	const struct device *dev = &queue->info->netdev->dev;
+ 
+ 	BUG_ON(!netif_carrier_ok(queue->info->netdev));
+ 
+ 	do {
+ 		prod = queue->tx.sring->rsp_prod;
++		if (RING_RESPONSE_PROD_OVERFLOW(&queue->tx, prod)) {
++			dev_alert(dev, "Illegal number of responses %u\n",
++				  prod - queue->tx.rsp_cons);
++			goto err;
++		}
+ 		rmb(); /* Ensure we see responses up to 'rp'. */
+ 
+ 		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
+-			struct xen_netif_tx_response *txrsp;
++			struct xen_netif_tx_response txrsp;
+ 
+-			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
+-			if (txrsp->status == XEN_NETIF_RSP_NULL)
++			RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
++			if (txrsp.status == XEN_NETIF_RSP_NULL)
+ 				continue;
+ 
+-			id  = txrsp->id;
+-			skb = queue->tx_skbs[id].skb;
++			id = txrsp.id;
++			if (id >= RING_SIZE(&queue->tx)) {
++				dev_alert(dev,
++					  "Response has incorrect id (%u)\n",
++					  id);
++				goto err;
++			}
++			if (queue->tx_link[id] != TX_PENDING) {
++				dev_alert(dev,
++					  "Response for inactive request\n");
++				goto err;
++			}
++
++			queue->tx_link[id] = TX_LINK_NONE;
++			skb = queue->tx_skbs[id];
++			queue->tx_skbs[id] = NULL;
+ 			if (unlikely(gnttab_query_foreign_access(
+ 				queue->grant_tx_ref[id]) != 0)) {
+-				pr_alert("%s: warning -- grant still in use by backend domain\n",
+-					 __func__);
+-				BUG();
++				dev_alert(dev,
++					  "Grant still in use by backend domain\n");
++				goto err;
+ 			}
+ 			gnttab_end_foreign_access_ref(
+ 				queue->grant_tx_ref[id], GNTMAP_readonly);
+@@ -419,7 +430,7 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
+ 				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+ 			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+ 			queue->grant_tx_page[id] = NULL;
+-			add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
++			add_id_to_list(&queue->tx_skb_freelist, queue->tx_link, id);
+ 			dev_kfree_skb_irq(skb);
+ 		}
+ 
+@@ -429,13 +440,20 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
+ 	} while (more_to_do);
+ 
+ 	xennet_maybe_wake_tx(queue);
++
++	return;
++
++ err:
++	queue->info->broken = true;
++	dev_alert(dev, "Disabled for further use\n");
+ }
+ 
+ struct xennet_gnttab_make_txreq {
+ 	struct netfront_queue *queue;
+ 	struct sk_buff *skb;
+ 	struct page *page;
+-	struct xen_netif_tx_request *tx; /* Last request */
++	struct xen_netif_tx_request *tx;      /* Last request on ring page */
++	struct xen_netif_tx_request tx_local; /* Last request local copy*/
+ 	unsigned int size;
+ };
+ 
+@@ -451,7 +469,7 @@ static void xennet_tx_setup_grant(unsigned long gfn, unsigned int offset,
+ 	struct netfront_queue *queue = info->queue;
+ 	struct sk_buff *skb = info->skb;
+ 
+-	id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
++	id = get_id_from_list(&queue->tx_skb_freelist, queue->tx_link);
+ 	tx = RING_GET_REQUEST(&queue->tx, queue->tx.req_prod_pvt++);
+ 	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
+ 	WARN_ON_ONCE(IS_ERR_VALUE((unsigned long)(int)ref));
+@@ -459,34 +477,37 @@ static void xennet_tx_setup_grant(unsigned long gfn, unsigned int offset,
+ 	gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
+ 					gfn, GNTMAP_readonly);
+ 
+-	queue->tx_skbs[id].skb = skb;
++	queue->tx_skbs[id] = skb;
+ 	queue->grant_tx_page[id] = page;
+ 	queue->grant_tx_ref[id] = ref;
+ 
+-	tx->id = id;
+-	tx->gref = ref;
+-	tx->offset = offset;
+-	tx->size = len;
+-	tx->flags = 0;
++	info->tx_local.id = id;
++	info->tx_local.gref = ref;
++	info->tx_local.offset = offset;
++	info->tx_local.size = len;
++	info->tx_local.flags = 0;
++
++	*tx = info->tx_local;
++
++	/*
++	 * Put the request in the pending queue, it will be set to be pending
++	 * when the producer index is about to be raised.
++	 */
++	add_id_to_list(&queue->tx_pend_queue, queue->tx_link, id);
+ 
+ 	info->tx = tx;
+-	info->size += tx->size;
++	info->size += info->tx_local.size;
+ }
+ 
+ static struct xen_netif_tx_request *xennet_make_first_txreq(
+-	struct netfront_queue *queue, struct sk_buff *skb,
+-	struct page *page, unsigned int offset, unsigned int len)
++	struct xennet_gnttab_make_txreq *info,
++	unsigned int offset, unsigned int len)
+ {
+-	struct xennet_gnttab_make_txreq info = {
+-		.queue = queue,
+-		.skb = skb,
+-		.page = page,
+-		.size = 0,
+-	};
++	info->size = 0;
+ 
+-	gnttab_for_one_grant(page, offset, len, xennet_tx_setup_grant, &info);
++	gnttab_for_one_grant(info->page, offset, len, xennet_tx_setup_grant, info);
+ 
+-	return info.tx;
++	return info->tx;
+ }
+ 
+ static void xennet_make_one_txreq(unsigned long gfn, unsigned int offset,
+@@ -499,35 +520,27 @@ static void xennet_make_one_txreq(unsigned long gfn, unsigned int offset,
+ 	xennet_tx_setup_grant(gfn, offset, len, data);
+ }
+ 
+-static struct xen_netif_tx_request *xennet_make_txreqs(
+-	struct netfront_queue *queue, struct xen_netif_tx_request *tx,
+-	struct sk_buff *skb, struct page *page,
++static void xennet_make_txreqs(
++	struct xennet_gnttab_make_txreq *info,
++	struct page *page,
+ 	unsigned int offset, unsigned int len)
+ {
+-	struct xennet_gnttab_make_txreq info = {
+-		.queue = queue,
+-		.skb = skb,
+-		.tx = tx,
+-	};
+-
+ 	/* Skip unused frames from start of page */
+ 	page += offset >> PAGE_SHIFT;
+ 	offset &= ~PAGE_MASK;
+ 
+ 	while (len) {
+-		info.page = page;
+-		info.size = 0;
++		info->page = page;
++		info->size = 0;
+ 
+ 		gnttab_foreach_grant_in_range(page, offset, len,
+ 					      xennet_make_one_txreq,
+-					      &info);
++					      info);
+ 
+ 		page++;
+ 		offset = 0;
+-		len -= info.size;
++		len -= info->size;
+ 	}
+-
+-	return info.tx;
+ }
+ 
+ /*
+@@ -574,19 +587,34 @@ static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
+ 	return queue_idx;
+ }
+ 
++static void xennet_mark_tx_pending(struct netfront_queue *queue)
++{
++	unsigned int i;
++
++	while ((i = get_id_from_list(&queue->tx_pend_queue, queue->tx_link)) !=
++	       TX_LINK_NONE)
++		queue->tx_link[i] = TX_PENDING;
++}
++
+ static int xennet_xdp_xmit_one(struct net_device *dev,
+ 			       struct netfront_queue *queue,
+ 			       struct xdp_frame *xdpf)
+ {
+ 	struct netfront_info *np = netdev_priv(dev);
+ 	struct netfront_stats *tx_stats = this_cpu_ptr(np->tx_stats);
++	struct xennet_gnttab_make_txreq info = {
++		.queue = queue,
++		.skb = NULL,
++		.page = virt_to_page(xdpf->data),
++	};
+ 	int notify;
+ 
+-	xennet_make_first_txreq(queue, NULL,
+-				virt_to_page(xdpf->data),
++	xennet_make_first_txreq(&info,
+ 				offset_in_page(xdpf->data),
+ 				xdpf->len);
+ 
++	xennet_mark_tx_pending(queue);
++
+ 	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
+ 	if (notify)
+ 		notify_remote_via_irq(queue->tx_irq);
+@@ -611,6 +639,8 @@ static int xennet_xdp_xmit(struct net_device *dev, int n,
+ 	int drops = 0;
+ 	int i, err;
+ 
++	if (unlikely(np->broken))
++		return -ENODEV;
+ 	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+ 		return -EINVAL;
+ 
+@@ -640,7 +670,7 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ {
+ 	struct netfront_info *np = netdev_priv(dev);
+ 	struct netfront_stats *tx_stats = this_cpu_ptr(np->tx_stats);
+-	struct xen_netif_tx_request *tx, *first_tx;
++	struct xen_netif_tx_request *first_tx;
+ 	unsigned int i;
+ 	int notify;
+ 	int slots;
+@@ -649,6 +679,7 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ 	unsigned int len;
+ 	unsigned long flags;
+ 	struct netfront_queue *queue = NULL;
++	struct xennet_gnttab_make_txreq info = { };
+ 	unsigned int num_queues = dev->real_num_tx_queues;
+ 	u16 queue_index;
+ 	struct sk_buff *nskb;
+@@ -656,6 +687,8 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ 	/* Drop the packet if no queues are set up */
+ 	if (num_queues < 1)
+ 		goto drop;
++	if (unlikely(np->broken))
++		goto drop;
+ 	/* Determine which queue to transmit this SKB on */
+ 	queue_index = skb_get_queue_mapping(skb);
+ 	queue = &np->queues[queue_index];
+@@ -706,21 +739,24 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ 	}
+ 
+ 	/* First request for the linear area. */
+-	first_tx = tx = xennet_make_first_txreq(queue, skb,
+-						page, offset, len);
+-	offset += tx->size;
++	info.queue = queue;
++	info.skb = skb;
++	info.page = page;
++	first_tx = xennet_make_first_txreq(&info, offset, len);
++	offset += info.tx_local.size;
+ 	if (offset == PAGE_SIZE) {
+ 		page++;
+ 		offset = 0;
+ 	}
+-	len -= tx->size;
++	len -= info.tx_local.size;
+ 
+ 	if (skb->ip_summed == CHECKSUM_PARTIAL)
+ 		/* local packet? */
+-		tx->flags |= XEN_NETTXF_csum_blank | XEN_NETTXF_data_validated;
++		first_tx->flags |= XEN_NETTXF_csum_blank |
++				   XEN_NETTXF_data_validated;
+ 	else if (skb->ip_summed == CHECKSUM_UNNECESSARY)
+ 		/* remote but checksummed. */
+-		tx->flags |= XEN_NETTXF_data_validated;
++		first_tx->flags |= XEN_NETTXF_data_validated;
+ 
+ 	/* Optional extra info after the first request. */
+ 	if (skb_shinfo(skb)->gso_size) {
+@@ -729,7 +765,7 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ 		gso = (struct xen_netif_extra_info *)
+ 			RING_GET_REQUEST(&queue->tx, queue->tx.req_prod_pvt++);
+ 
+-		tx->flags |= XEN_NETTXF_extra_info;
++		first_tx->flags |= XEN_NETTXF_extra_info;
+ 
+ 		gso->u.gso.size = skb_shinfo(skb)->gso_size;
+ 		gso->u.gso.type = (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ?
+@@ -743,12 +779,12 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ 	}
+ 
+ 	/* Requests for the rest of the linear area. */
+-	tx = xennet_make_txreqs(queue, tx, skb, page, offset, len);
++	xennet_make_txreqs(&info, page, offset, len);
+ 
+ 	/* Requests for all the frags. */
+ 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ 		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+-		tx = xennet_make_txreqs(queue, tx, skb, skb_frag_page(frag),
++		xennet_make_txreqs(&info, skb_frag_page(frag),
+ 					skb_frag_off(frag),
+ 					skb_frag_size(frag));
+ 	}
+@@ -759,6 +795,8 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ 	/* timestamp packet in software */
+ 	skb_tx_timestamp(skb);
+ 
++	xennet_mark_tx_pending(queue);
++
+ 	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
+ 	if (notify)
+ 		notify_remote_via_irq(queue->tx_irq);
+@@ -816,7 +854,7 @@ static int xennet_get_extras(struct netfront_queue *queue,
+ 			     RING_IDX rp)
+ 
+ {
+-	struct xen_netif_extra_info *extra;
++	struct xen_netif_extra_info extra;
+ 	struct device *dev = &queue->info->netdev->dev;
+ 	RING_IDX cons = queue->rx.rsp_cons;
+ 	int err = 0;
+@@ -832,24 +870,22 @@ static int xennet_get_extras(struct netfront_queue *queue,
+ 			break;
+ 		}
+ 
+-		extra = (struct xen_netif_extra_info *)
+-			RING_GET_RESPONSE(&queue->rx, ++cons);
++		RING_COPY_RESPONSE(&queue->rx, ++cons, &extra);
+ 
+-		if (unlikely(!extra->type ||
+-			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
++		if (unlikely(!extra.type ||
++			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
+ 			if (net_ratelimit())
+ 				dev_warn(dev, "Invalid extra type: %d\n",
+-					extra->type);
++					 extra.type);
+ 			err = -EINVAL;
+ 		} else {
+-			memcpy(&extras[extra->type - 1], extra,
+-			       sizeof(*extra));
++			extras[extra.type - 1] = extra;
+ 		}
+ 
+ 		skb = xennet_get_rx_skb(queue, cons);
+ 		ref = xennet_get_rx_ref(queue, cons);
+ 		xennet_move_rx_slot(queue, skb, ref);
+-	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
++	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
+ 
+ 	queue->rx.rsp_cons = cons;
+ 	return err;
+@@ -907,7 +943,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
+ 				struct sk_buff_head *list,
+ 				bool *need_xdp_flush)
+ {
+-	struct xen_netif_rx_response *rx = &rinfo->rx;
++	struct xen_netif_rx_response *rx = &rinfo->rx, rx_local;
+ 	int max = XEN_NETIF_NR_SLOTS_MIN + (rx->status <= RX_COPY_THRESHOLD);
+ 	RING_IDX cons = queue->rx.rsp_cons;
+ 	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+@@ -991,7 +1027,8 @@ next:
+ 			break;
+ 		}
+ 
+-		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
++		RING_COPY_RESPONSE(&queue->rx, cons + slots, &rx_local);
++		rx = &rx_local;
+ 		skb = xennet_get_rx_skb(queue, cons + slots);
+ 		ref = xennet_get_rx_ref(queue, cons + slots);
+ 		slots++;
+@@ -1046,10 +1083,11 @@ static int xennet_fill_frags(struct netfront_queue *queue,
+ 	struct sk_buff *nskb;
+ 
+ 	while ((nskb = __skb_dequeue(list))) {
+-		struct xen_netif_rx_response *rx =
+-			RING_GET_RESPONSE(&queue->rx, ++cons);
++		struct xen_netif_rx_response rx;
+ 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
+ 
++		RING_COPY_RESPONSE(&queue->rx, ++cons, &rx);
++
+ 		if (skb_shinfo(skb)->nr_frags == MAX_SKB_FRAGS) {
+ 			unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
+ 
+@@ -1064,7 +1102,7 @@ static int xennet_fill_frags(struct netfront_queue *queue,
+ 
+ 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+ 				skb_frag_page(nfrag),
+-				rx->offset, rx->status, PAGE_SIZE);
++				rx.offset, rx.status, PAGE_SIZE);
+ 
+ 		skb_shinfo(nskb)->nr_frags = 0;
+ 		kfree_skb(nskb);
+@@ -1158,12 +1196,19 @@ static int xennet_poll(struct napi_struct *napi, int budget)
+ 	skb_queue_head_init(&tmpq);
+ 
+ 	rp = queue->rx.sring->rsp_prod;
++	if (RING_RESPONSE_PROD_OVERFLOW(&queue->rx, rp)) {
++		dev_alert(&dev->dev, "Illegal number of responses %u\n",
++			  rp - queue->rx.rsp_cons);
++		queue->info->broken = true;
++		spin_unlock(&queue->rx_lock);
++		return 0;
++	}
+ 	rmb(); /* Ensure we see queued responses up to 'rp'. */
+ 
+ 	i = queue->rx.rsp_cons;
+ 	work_done = 0;
+ 	while ((i != rp) && (work_done < budget)) {
+-		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
++		RING_COPY_RESPONSE(&queue->rx, i, rx);
+ 		memset(extras, 0, sizeof(rinfo.extras));
+ 
+ 		err = xennet_get_responses(queue, &rinfo, rp, &tmpq,
+@@ -1288,17 +1333,18 @@ static void xennet_release_tx_bufs(struct netfront_queue *queue)
+ 
+ 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
+ 		/* Skip over entries which are actually freelist references */
+-		if (skb_entry_is_link(&queue->tx_skbs[i]))
++		if (!queue->tx_skbs[i])
+ 			continue;
+ 
+-		skb = queue->tx_skbs[i].skb;
++		skb = queue->tx_skbs[i];
++		queue->tx_skbs[i] = NULL;
+ 		get_page(queue->grant_tx_page[i]);
+ 		gnttab_end_foreign_access(queue->grant_tx_ref[i],
+ 					  GNTMAP_readonly,
+ 					  (unsigned long)page_address(queue->grant_tx_page[i]));
+ 		queue->grant_tx_page[i] = NULL;
+ 		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+-		add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
++		add_id_to_list(&queue->tx_skb_freelist, queue->tx_link, i);
+ 		dev_kfree_skb_irq(skb);
+ 	}
+ }
+@@ -1378,6 +1424,9 @@ static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
+ 	struct netfront_queue *queue = dev_id;
+ 	unsigned long flags;
+ 
++	if (queue->info->broken)
++		return IRQ_HANDLED;
++
+ 	spin_lock_irqsave(&queue->tx_lock, flags);
+ 	xennet_tx_buf_gc(queue);
+ 	spin_unlock_irqrestore(&queue->tx_lock, flags);
+@@ -1390,6 +1439,9 @@ static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
+ 	struct netfront_queue *queue = dev_id;
+ 	struct net_device *dev = queue->info->netdev;
+ 
++	if (queue->info->broken)
++		return IRQ_HANDLED;
++
+ 	if (likely(netif_carrier_ok(dev) &&
+ 		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
+ 		napi_schedule(&queue->napi);
+@@ -1411,6 +1463,10 @@ static void xennet_poll_controller(struct net_device *dev)
+ 	struct netfront_info *info = netdev_priv(dev);
+ 	unsigned int num_queues = dev->real_num_tx_queues;
+ 	unsigned int i;
++
++	if (info->broken)
++		return;
++
+ 	for (i = 0; i < num_queues; ++i)
+ 		xennet_interrupt(0, &info->queues[i]);
+ }
+@@ -1482,6 +1538,11 @@ static int xennet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ 
+ static int xennet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
+ {
++	struct netfront_info *np = netdev_priv(dev);
++
++	if (np->broken)
++		return -ENODEV;
++
+ 	switch (xdp->command) {
+ 	case XDP_SETUP_PROG:
+ 		return xennet_xdp_set(dev, xdp->prog, xdp->extack);
+@@ -1859,13 +1920,15 @@ static int xennet_init_queue(struct netfront_queue *queue)
+ 	snprintf(queue->name, sizeof(queue->name), "vif%s-q%u",
+ 		 devid, queue->id);
+ 
+-	/* Initialise tx_skbs as a free chain containing every entry. */
++	/* Initialise tx_skb_freelist as a free chain containing every entry. */
+ 	queue->tx_skb_freelist = 0;
++	queue->tx_pend_queue = TX_LINK_NONE;
+ 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
+-		skb_entry_set_link(&queue->tx_skbs[i], i+1);
++		queue->tx_link[i] = i + 1;
+ 		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+ 		queue->grant_tx_page[i] = NULL;
+ 	}
++	queue->tx_link[NET_TX_RING_SIZE - 1] = TX_LINK_NONE;
+ 
+ 	/* Clear out rx_skbs */
+ 	for (i = 0; i < NET_RX_RING_SIZE; i++) {
+@@ -2134,6 +2197,9 @@ static int talk_to_netback(struct xenbus_device *dev,
+ 	if (info->queues)
+ 		xennet_destroy_queues(info);
+ 
++	/* For the case of a reconnect reset the "broken" indicator. */
++	info->broken = false;
++
+ 	err = xennet_create_queues(info, &num_queues);
+ 	if (err < 0) {
+ 		xenbus_dev_fatal(dev, err, "creating queues");
+diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
+index b575997244482..c81690b2a681b 100644
+--- a/drivers/nvme/target/io-cmd-file.c
++++ b/drivers/nvme/target/io-cmd-file.c
+@@ -8,6 +8,7 @@
+ #include <linux/uio.h>
+ #include <linux/falloc.h>
+ #include <linux/file.h>
++#include <linux/fs.h>
+ #include "nvmet.h"
+ 
+ #define NVMET_MAX_MPOOL_BVEC		16
+@@ -266,7 +267,8 @@ static void nvmet_file_execute_rw(struct nvmet_req *req)
+ 
+ 	if (req->ns->buffered_io) {
+ 		if (likely(!req->f.mpool_alloc) &&
+-				nvmet_file_execute_io(req, IOCB_NOWAIT))
++		    (req->ns->file->f_mode & FMODE_NOWAIT) &&
++		    nvmet_file_execute_io(req, IOCB_NOWAIT))
+ 			return;
+ 		nvmet_file_submit_buffered_io(req);
+ 	} else
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 1251fd6e92780..96b67a70cbbbd 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -688,10 +688,11 @@ static int nvmet_try_send_r2t(struct nvmet_tcp_cmd *cmd, bool last_in_batch)
+ static int nvmet_try_send_ddgst(struct nvmet_tcp_cmd *cmd, bool last_in_batch)
+ {
+ 	struct nvmet_tcp_queue *queue = cmd->queue;
++	int left = NVME_TCP_DIGEST_LENGTH - cmd->offset;
+ 	struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
+ 	struct kvec iov = {
+ 		.iov_base = (u8 *)&cmd->exp_ddgst + cmd->offset,
+-		.iov_len = NVME_TCP_DIGEST_LENGTH - cmd->offset
++		.iov_len = left
+ 	};
+ 	int ret;
+ 
+@@ -705,6 +706,10 @@ static int nvmet_try_send_ddgst(struct nvmet_tcp_cmd *cmd, bool last_in_batch)
+ 		return ret;
+ 
+ 	cmd->offset += ret;
++	left -= ret;
++
++	if (left)
++		return -EAGAIN;
+ 
+ 	if (queue->nvme_sq.sqhd_disabled) {
+ 		cmd->queue->snd_cmd = NULL;
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 434522465d983..604b294bb15c9 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -306,11 +306,6 @@ static inline u32 advk_readl(struct advk_pcie *pcie, u64 reg)
+ 	return readl(pcie->base + reg);
+ }
+ 
+-static inline u16 advk_read16(struct advk_pcie *pcie, u64 reg)
+-{
+-	return advk_readl(pcie, (reg & ~0x3)) >> ((reg & 0x3) * 8);
+-}
+-
+ static u8 advk_pcie_ltssm_state(struct advk_pcie *pcie)
+ {
+ 	u32 val;
+@@ -384,16 +379,9 @@ static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie)
+ 
+ static void advk_pcie_issue_perst(struct advk_pcie *pcie)
+ {
+-	u32 reg;
+-
+ 	if (!pcie->reset_gpio)
+ 		return;
+ 
+-	/* PERST does not work for some cards when link training is enabled */
+-	reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
+-	reg &= ~LINK_TRAINING_EN;
+-	advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+-
+ 	/* 10ms delay is needed for some cards */
+ 	dev_info(&pcie->pdev->dev, "issuing PERST via reset GPIO for 10ms\n");
+ 	gpiod_set_value_cansleep(pcie->reset_gpio, 1);
+@@ -401,53 +389,46 @@ static void advk_pcie_issue_perst(struct advk_pcie *pcie)
+ 	gpiod_set_value_cansleep(pcie->reset_gpio, 0);
+ }
+ 
+-static int advk_pcie_train_at_gen(struct advk_pcie *pcie, int gen)
++static void advk_pcie_train_link(struct advk_pcie *pcie)
+ {
+-	int ret, neg_gen;
++	struct device *dev = &pcie->pdev->dev;
+ 	u32 reg;
++	int ret;
+ 
+-	/* Setup link speed */
++	/*
++	 * Setup PCIe rev / gen compliance based on device tree property
++	 * 'max-link-speed' which also forces maximal link speed.
++	 */
+ 	reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
+ 	reg &= ~PCIE_GEN_SEL_MSK;
+-	if (gen == 3)
++	if (pcie->link_gen == 3)
+ 		reg |= SPEED_GEN_3;
+-	else if (gen == 2)
++	else if (pcie->link_gen == 2)
+ 		reg |= SPEED_GEN_2;
+ 	else
+ 		reg |= SPEED_GEN_1;
+ 	advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+ 
+ 	/*
+-	 * Enable link training. This is not needed in every call to this
+-	 * function, just once suffices, but it does not break anything either.
++	 * Set maximal link speed value also into PCIe Link Control 2 register.
++	 * Armada 3700 Functional Specification says that default value is based
++	 * on SPEED_GEN but tests showed that default value is always 8.0 GT/s.
+ 	 */
++	reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL2);
++	reg &= ~PCI_EXP_LNKCTL2_TLS;
++	if (pcie->link_gen == 3)
++		reg |= PCI_EXP_LNKCTL2_TLS_8_0GT;
++	else if (pcie->link_gen == 2)
++		reg |= PCI_EXP_LNKCTL2_TLS_5_0GT;
++	else
++		reg |= PCI_EXP_LNKCTL2_TLS_2_5GT;
++	advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL2);
++
++	/* Enable link training after selecting PCIe generation */
+ 	reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
+ 	reg |= LINK_TRAINING_EN;
+ 	advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+ 
+-	/*
+-	 * Start link training immediately after enabling it.
+-	 * This solves problems for some buggy cards.
+-	 */
+-	reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL);
+-	reg |= PCI_EXP_LNKCTL_RL;
+-	advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL);
+-
+-	ret = advk_pcie_wait_for_link(pcie);
+-	if (ret)
+-		return ret;
+-
+-	reg = advk_read16(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKSTA);
+-	neg_gen = reg & PCI_EXP_LNKSTA_CLS;
+-
+-	return neg_gen;
+-}
+-
+-static void advk_pcie_train_link(struct advk_pcie *pcie)
+-{
+-	struct device *dev = &pcie->pdev->dev;
+-	int neg_gen = -1, gen;
+-
+ 	/*
+ 	 * Reset PCIe card via PERST# signal. Some cards are not detected
+ 	 * during link training when they are in some non-initial state.
+@@ -458,41 +439,18 @@ static void advk_pcie_train_link(struct advk_pcie *pcie)
+ 	 * PERST# signal could have been asserted by pinctrl subsystem before
+ 	 * probe() callback has been called or issued explicitly by reset gpio
+ 	 * function advk_pcie_issue_perst(), making the endpoint going into
+-	 * fundamental reset. As required by PCI Express spec a delay for at
+-	 * least 100ms after such a reset before link training is needed.
+-	 */
+-	msleep(PCI_PM_D3COLD_WAIT);
+-
+-	/*
+-	 * Try link training at link gen specified by device tree property
+-	 * 'max-link-speed'. If this fails, iteratively train at lower gen.
+-	 */
+-	for (gen = pcie->link_gen; gen > 0; --gen) {
+-		neg_gen = advk_pcie_train_at_gen(pcie, gen);
+-		if (neg_gen > 0)
+-			break;
+-	}
+-
+-	if (neg_gen < 0)
+-		goto err;
+-
+-	/*
+-	 * After successful training if negotiated gen is lower than requested,
+-	 * train again on negotiated gen. This solves some stability issues for
+-	 * some buggy gen1 cards.
++	 * fundamental reset. As required by PCI Express spec (PCI Express
++	 * Base Specification, REV. 4.0 PCI Express, February 19 2014, 6.6.1
++	 * Conventional Reset) a delay for at least 100ms after such a reset
++	 * before sending a Configuration Request to the device is needed.
++	 * So wait until PCIe link is up. Function advk_pcie_wait_for_link()
++	 * waits for link at least 900ms.
+ 	 */
+-	if (neg_gen < gen) {
+-		gen = neg_gen;
+-		neg_gen = advk_pcie_train_at_gen(pcie, gen);
+-	}
+-
+-	if (neg_gen == gen) {
+-		dev_info(dev, "link up at gen %i\n", gen);
+-		return;
+-	}
+-
+-err:
+-	dev_err(dev, "link never came up\n");
++	ret = advk_pcie_wait_for_link(pcie);
++	if (ret < 0)
++		dev_err(dev, "link never came up\n");
++	else
++		dev_info(dev, "link up\n");
+ }
+ 
+ /*
+@@ -692,6 +650,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
+ 	u32 reg;
+ 	unsigned int status;
+ 	char *strcomp_status, *str_posted;
++	int ret;
+ 
+ 	reg = advk_readl(pcie, PIO_STAT);
+ 	status = (reg & PIO_COMPLETION_STATUS_MASK) >>
+@@ -716,6 +675,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
+ 	case PIO_COMPLETION_STATUS_OK:
+ 		if (reg & PIO_ERR_STATUS) {
+ 			strcomp_status = "COMP_ERR";
++			ret = -EFAULT;
+ 			break;
+ 		}
+ 		/* Get the read result */
+@@ -723,9 +683,11 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
+ 			*val = advk_readl(pcie, PIO_RD_DATA);
+ 		/* No error */
+ 		strcomp_status = NULL;
++		ret = 0;
+ 		break;
+ 	case PIO_COMPLETION_STATUS_UR:
+ 		strcomp_status = "UR";
++		ret = -EOPNOTSUPP;
+ 		break;
+ 	case PIO_COMPLETION_STATUS_CRS:
+ 		if (allow_crs && val) {
+@@ -743,6 +705,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
+ 			 */
+ 			*val = CFG_RD_CRS_VAL;
+ 			strcomp_status = NULL;
++			ret = 0;
+ 			break;
+ 		}
+ 		/* PCIe r4.0, sec 2.3.2, says:
+@@ -758,21 +721,24 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
+ 		 * Request and taking appropriate action, e.g., complete the
+ 		 * Request to the host as a failed transaction.
+ 		 *
+-		 * To simplify implementation do not re-issue the Configuration
+-		 * Request and complete the Request as a failed transaction.
++		 * So return -EAGAIN and caller (pci-aardvark.c driver) will
++		 * re-issue request again up to the PIO_RETRY_CNT retries.
+ 		 */
+ 		strcomp_status = "CRS";
++		ret = -EAGAIN;
+ 		break;
+ 	case PIO_COMPLETION_STATUS_CA:
+ 		strcomp_status = "CA";
++		ret = -ECANCELED;
+ 		break;
+ 	default:
+ 		strcomp_status = "Unknown";
++		ret = -EINVAL;
+ 		break;
+ 	}
+ 
+ 	if (!strcomp_status)
+-		return 0;
++		return ret;
+ 
+ 	if (reg & PIO_NON_POSTED_REQ)
+ 		str_posted = "Non-posted";
+@@ -782,7 +748,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
+ 	dev_dbg(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
+ 		str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS));
+ 
+-	return -EFAULT;
++	return ret;
+ }
+ 
+ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
+@@ -790,13 +756,13 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
+ 	struct device *dev = &pcie->pdev->dev;
+ 	int i;
+ 
+-	for (i = 0; i < PIO_RETRY_CNT; i++) {
++	for (i = 1; i <= PIO_RETRY_CNT; i++) {
+ 		u32 start, isr;
+ 
+ 		start = advk_readl(pcie, PIO_START);
+ 		isr = advk_readl(pcie, PIO_ISR);
+ 		if (!start && isr)
+-			return 0;
++			return i;
+ 		udelay(PIO_RETRY_DELAY);
+ 	}
+ 
+@@ -984,7 +950,6 @@ static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
+ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ {
+ 	struct pci_bridge_emul *bridge = &pcie->bridge;
+-	int ret;
+ 
+ 	bridge->conf.vendor =
+ 		cpu_to_le16(advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff);
+@@ -1004,19 +969,14 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ 	/* Support interrupt A for MSI feature */
+ 	bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE;
+ 
++	/* Indicates supports for Completion Retry Status */
++	bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
++
+ 	bridge->has_pcie = true;
+ 	bridge->data = pcie;
+ 	bridge->ops = &advk_pci_bridge_emul_ops;
+ 
+-	/* PCIe config space can be initialized after pci_bridge_emul_init() */
+-	ret = pci_bridge_emul_init(bridge, 0);
+-	if (ret < 0)
+-		return ret;
+-
+-	/* Indicates supports for Completion Retry Status */
+-	bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
+-
+-	return 0;
++	return pci_bridge_emul_init(bridge, 0);
+ }
+ 
+ static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
+@@ -1068,6 +1028,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 			     int where, int size, u32 *val)
+ {
+ 	struct advk_pcie *pcie = bus->sysdata;
++	int retry_count;
+ 	bool allow_crs;
+ 	u32 reg;
+ 	int ret;
+@@ -1090,18 +1051,8 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 		    (le16_to_cpu(pcie->bridge.pcie_conf.rootctl) &
+ 		     PCI_EXP_RTCTL_CRSSVE);
+ 
+-	if (advk_pcie_pio_is_running(pcie)) {
+-		/*
+-		 * If it is possible return Completion Retry Status so caller
+-		 * tries to issue the request again instead of failing.
+-		 */
+-		if (allow_crs) {
+-			*val = CFG_RD_CRS_VAL;
+-			return PCIBIOS_SUCCESSFUL;
+-		}
+-		*val = 0xffffffff;
+-		return PCIBIOS_SET_FAILED;
+-	}
++	if (advk_pcie_pio_is_running(pcie))
++		goto try_crs;
+ 
+ 	/* Program the control register */
+ 	reg = advk_readl(pcie, PIO_CTRL);
+@@ -1120,30 +1071,24 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 	/* Program the data strobe */
+ 	advk_writel(pcie, 0xf, PIO_WR_DATA_STRB);
+ 
+-	/* Clear PIO DONE ISR and start the transfer */
+-	advk_writel(pcie, 1, PIO_ISR);
+-	advk_writel(pcie, 1, PIO_START);
++	retry_count = 0;
++	do {
++		/* Clear PIO DONE ISR and start the transfer */
++		advk_writel(pcie, 1, PIO_ISR);
++		advk_writel(pcie, 1, PIO_START);
+ 
+-	ret = advk_pcie_wait_pio(pcie);
+-	if (ret < 0) {
+-		/*
+-		 * If it is possible return Completion Retry Status so caller
+-		 * tries to issue the request again instead of failing.
+-		 */
+-		if (allow_crs) {
+-			*val = CFG_RD_CRS_VAL;
+-			return PCIBIOS_SUCCESSFUL;
+-		}
+-		*val = 0xffffffff;
+-		return PCIBIOS_SET_FAILED;
+-	}
++		ret = advk_pcie_wait_pio(pcie);
++		if (ret < 0)
++			goto try_crs;
+ 
+-	/* Check PIO status and get the read result */
+-	ret = advk_pcie_check_pio_status(pcie, allow_crs, val);
+-	if (ret < 0) {
+-		*val = 0xffffffff;
+-		return PCIBIOS_SET_FAILED;
+-	}
++		retry_count += ret;
++
++		/* Check PIO status and get the read result */
++		ret = advk_pcie_check_pio_status(pcie, allow_crs, val);
++	} while (ret == -EAGAIN && retry_count < PIO_RETRY_CNT);
++
++	if (ret < 0)
++		goto fail;
+ 
+ 	if (size == 1)
+ 		*val = (*val >> (8 * (where & 3))) & 0xff;
+@@ -1151,6 +1096,20 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 		*val = (*val >> (8 * (where & 3))) & 0xffff;
+ 
+ 	return PCIBIOS_SUCCESSFUL;
++
++try_crs:
++	/*
++	 * If it is possible, return Completion Retry Status so that caller
++	 * tries to issue the request again instead of failing.
++	 */
++	if (allow_crs) {
++		*val = CFG_RD_CRS_VAL;
++		return PCIBIOS_SUCCESSFUL;
++	}
++
++fail:
++	*val = 0xffffffff;
++	return PCIBIOS_SET_FAILED;
+ }
+ 
+ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
+@@ -1159,6 +1118,7 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
+ 	struct advk_pcie *pcie = bus->sysdata;
+ 	u32 reg;
+ 	u32 data_strobe = 0x0;
++	int retry_count;
+ 	int offset;
+ 	int ret;
+ 
+@@ -1200,19 +1160,22 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
+ 	/* Program the data strobe */
+ 	advk_writel(pcie, data_strobe, PIO_WR_DATA_STRB);
+ 
+-	/* Clear PIO DONE ISR and start the transfer */
+-	advk_writel(pcie, 1, PIO_ISR);
+-	advk_writel(pcie, 1, PIO_START);
++	retry_count = 0;
++	do {
++		/* Clear PIO DONE ISR and start the transfer */
++		advk_writel(pcie, 1, PIO_ISR);
++		advk_writel(pcie, 1, PIO_START);
+ 
+-	ret = advk_pcie_wait_pio(pcie);
+-	if (ret < 0)
+-		return PCIBIOS_SET_FAILED;
++		ret = advk_pcie_wait_pio(pcie);
++		if (ret < 0)
++			return PCIBIOS_SET_FAILED;
+ 
+-	ret = advk_pcie_check_pio_status(pcie, false, NULL);
+-	if (ret < 0)
+-		return PCIBIOS_SET_FAILED;
++		retry_count += ret;
+ 
+-	return PCIBIOS_SUCCESSFUL;
++		ret = advk_pcie_check_pio_status(pcie, false, NULL);
++	} while (ret == -EAGAIN && retry_count < PIO_RETRY_CNT);
++
++	return ret < 0 ? PCIBIOS_SET_FAILED : PCIBIOS_SUCCESSFUL;
+ }
+ 
+ static struct pci_ops advk_pcie_ops = {
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 31c384108bc9c..8418b59b3743b 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -3675,7 +3675,7 @@ _scsih_ublock_io_device(struct MPT3SAS_ADAPTER *ioc, u64 sas_address)
+ 
+ 	shost_for_each_device(sdev, ioc->shost) {
+ 		sas_device_priv_data = sdev->hostdata;
+-		if (!sas_device_priv_data)
++		if (!sas_device_priv_data || !sas_device_priv_data->sas_target)
+ 			continue;
+ 		if (sas_device_priv_data->sas_target->sas_address
+ 		    != sas_address)
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 3fc7c2a31c191..1a3f5adc68849 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -4628,6 +4628,7 @@ static void zbc_rwp_zone(struct sdebug_dev_info *devip,
+ 			 struct sdeb_zone_state *zsp)
+ {
+ 	enum sdebug_z_cond zc;
++	struct sdeb_store_info *sip = devip2sip(devip, false);
+ 
+ 	if (zbc_zone_is_conv(zsp))
+ 		return;
+@@ -4639,6 +4640,10 @@ static void zbc_rwp_zone(struct sdebug_dev_info *devip,
+ 	if (zsp->z_cond == ZC4_CLOSED)
+ 		devip->nr_closed--;
+ 
++	if (zsp->z_wp > zsp->z_start)
++		memset(sip->storep + zsp->z_start * sdebug_sector_size, 0,
++		       (zsp->z_wp - zsp->z_start) * sdebug_sector_size);
++
+ 	zsp->z_non_seq_resource = false;
+ 	zsp->z_wp = zsp->z_start;
+ 	zsp->z_cond = ZC1_EMPTY;
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 8de67679a8782..42db9c52208e6 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -816,7 +816,7 @@ store_state_field(struct device *dev, struct device_attribute *attr,
+ 
+ 	mutex_lock(&sdev->state_mutex);
+ 	if (sdev->sdev_state == SDEV_RUNNING && state == SDEV_RUNNING) {
+-		ret = count;
++		ret = 0;
+ 	} else {
+ 		ret = scsi_device_set_state(sdev, state);
+ 		if (ret == 0 && state == SDEV_RUNNING)
+diff --git a/drivers/staging/fbtft/fb_ssd1351.c b/drivers/staging/fbtft/fb_ssd1351.c
+index cf263a58a1489..6fd549a424d53 100644
+--- a/drivers/staging/fbtft/fb_ssd1351.c
++++ b/drivers/staging/fbtft/fb_ssd1351.c
+@@ -187,7 +187,6 @@ static struct fbtft_display display = {
+ 	},
+ };
+ 
+-#ifdef CONFIG_FB_BACKLIGHT
+ static int update_onboard_backlight(struct backlight_device *bd)
+ {
+ 	struct fbtft_par *par = bl_get_data(bd);
+@@ -231,9 +230,6 @@ static void register_onboard_backlight(struct fbtft_par *par)
+ 	if (!par->fbtftops.unregister_backlight)
+ 		par->fbtftops.unregister_backlight = fbtft_unregister_backlight;
+ }
+-#else
+-static void register_onboard_backlight(struct fbtft_par *par) { };
+-#endif
+ 
+ FBTFT_REGISTER_DRIVER(DRVNAME, "solomon,ssd1351", &display);
+ 
+diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c
+index 3723269890d5f..d0c8d85f3db0f 100644
+--- a/drivers/staging/fbtft/fbtft-core.c
++++ b/drivers/staging/fbtft/fbtft-core.c
+@@ -128,7 +128,6 @@ static int fbtft_request_gpios(struct fbtft_par *par)
+ 	return 0;
+ }
+ 
+-#ifdef CONFIG_FB_BACKLIGHT
+ static int fbtft_backlight_update_status(struct backlight_device *bd)
+ {
+ 	struct fbtft_par *par = bl_get_data(bd);
+@@ -161,6 +160,7 @@ void fbtft_unregister_backlight(struct fbtft_par *par)
+ 		par->info->bl_dev = NULL;
+ 	}
+ }
++EXPORT_SYMBOL(fbtft_unregister_backlight);
+ 
+ static const struct backlight_ops fbtft_bl_ops = {
+ 	.get_brightness	= fbtft_backlight_get_brightness,
+@@ -198,12 +198,7 @@ void fbtft_register_backlight(struct fbtft_par *par)
+ 	if (!par->fbtftops.unregister_backlight)
+ 		par->fbtftops.unregister_backlight = fbtft_unregister_backlight;
+ }
+-#else
+-void fbtft_register_backlight(struct fbtft_par *par) { };
+-void fbtft_unregister_backlight(struct fbtft_par *par) { };
+-#endif
+ EXPORT_SYMBOL(fbtft_register_backlight);
+-EXPORT_SYMBOL(fbtft_unregister_backlight);
+ 
+ static void fbtft_set_addr_win(struct fbtft_par *par, int xs, int ys, int xe,
+ 			       int ye)
+@@ -853,13 +848,11 @@ int fbtft_register_framebuffer(struct fb_info *fb_info)
+ 		 fb_info->fix.smem_len >> 10, text1,
+ 		 HZ / fb_info->fbdefio->delay, text2);
+ 
+-#ifdef CONFIG_FB_BACKLIGHT
+ 	/* Turn on backlight if available */
+ 	if (fb_info->bl_dev) {
+ 		fb_info->bl_dev->props.power = FB_BLANK_UNBLANK;
+ 		fb_info->bl_dev->ops->update_status(fb_info->bl_dev);
+ 	}
+-#endif
+ 
+ 	return 0;
+ 
+diff --git a/drivers/staging/greybus/audio_helper.c b/drivers/staging/greybus/audio_helper.c
+index 3011b8abce389..a9576f92efaa4 100644
+--- a/drivers/staging/greybus/audio_helper.c
++++ b/drivers/staging/greybus/audio_helper.c
+@@ -192,7 +192,11 @@ int gbaudio_remove_component_controls(struct snd_soc_component *component,
+ 				      unsigned int num_controls)
+ {
+ 	struct snd_card *card = component->card->snd_card;
++	int err;
+ 
+-	return gbaudio_remove_controls(card, component->dev, controls,
+-				       num_controls, component->name_prefix);
++	down_write(&card->controls_rwsem);
++	err = gbaudio_remove_controls(card, component->dev, controls,
++				      num_controls, component->name_prefix);
++	up_write(&card->controls_rwsem);
++	return err;
+ }
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+index 663675efcfe4c..99c27d6b42333 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+@@ -2551,13 +2551,14 @@ static void _rtl92e_pci_disconnect(struct pci_dev *pdev)
+ 			free_irq(dev->irq, dev);
+ 			priv->irq = 0;
+ 		}
+-		free_rtllib(dev);
+ 
+ 		if (dev->mem_start != 0) {
+ 			iounmap((void __iomem *)dev->mem_start);
+ 			release_mem_region(pci_resource_start(pdev, 1),
+ 					pci_resource_len(pdev, 1));
+ 		}
++
++		free_rtllib(dev);
+ 	} else {
+ 		priv = rtllib_priv(dev);
+ 	}
+diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
+index 92c9a476defc9..8f143c09a1696 100644
+--- a/drivers/tty/hvc/hvc_xen.c
++++ b/drivers/tty/hvc/hvc_xen.c
+@@ -86,7 +86,11 @@ static int __write_console(struct xencons_info *xencons,
+ 	cons = intf->out_cons;
+ 	prod = intf->out_prod;
+ 	mb();			/* update queue values before going on */
+-	BUG_ON((prod - cons) > sizeof(intf->out));
++
++	if ((prod - cons) > sizeof(intf->out)) {
++		pr_err_once("xencons: Illegal ring page indices");
++		return -EINVAL;
++	}
+ 
+ 	while ((sent < len) && ((prod - cons) < sizeof(intf->out)))
+ 		intf->out[MASK_XENCONS_IDX(prod++, intf->out)] = data[sent++];
+@@ -114,7 +118,10 @@ static int domU_write_console(uint32_t vtermno, const char *data, int len)
+ 	 */
+ 	while (len) {
+ 		int sent = __write_console(cons, data, len);
+-		
++
++		if (sent < 0)
++			return sent;
++
+ 		data += sent;
+ 		len -= sent;
+ 
+@@ -138,7 +145,11 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
+ 	cons = intf->in_cons;
+ 	prod = intf->in_prod;
+ 	mb();			/* get pointers before reading ring */
+-	BUG_ON((prod - cons) > sizeof(intf->in));
++
++	if ((prod - cons) > sizeof(intf->in)) {
++		pr_err_once("xencons: Illegal ring page indices");
++		return -EINVAL;
++	}
+ 
+ 	while (cons != prod && recv < len)
+ 		buf[recv++] = intf->in[MASK_XENCONS_IDX(cons++, intf->in)];
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index b4c6527fe5f66..f798455942844 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -425,15 +425,15 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ 	data->phy = devm_usb_get_phy_by_phandle(dev, "fsl,usbphy", 0);
+ 	if (IS_ERR(data->phy)) {
+ 		ret = PTR_ERR(data->phy);
+-		if (ret == -ENODEV) {
+-			data->phy = devm_usb_get_phy_by_phandle(dev, "phys", 0);
+-			if (IS_ERR(data->phy)) {
+-				ret = PTR_ERR(data->phy);
+-				if (ret == -ENODEV)
+-					data->phy = NULL;
+-				else
+-					goto err_clk;
+-			}
++		if (ret != -ENODEV)
++			goto err_clk;
++		data->phy = devm_usb_get_phy_by_phandle(dev, "phys", 0);
++		if (IS_ERR(data->phy)) {
++			ret = PTR_ERR(data->phy);
++			if (ret == -ENODEV)
++				data->phy = NULL;
++			else
++				goto err_clk;
+ 		}
+ 	}
+ 
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 95a9bae72f135..3f406519da58d 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -4628,8 +4628,6 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 	if (oldspeed == USB_SPEED_LOW)
+ 		delay = HUB_LONG_RESET_TIME;
+ 
+-	mutex_lock(hcd->address0_mutex);
+-
+ 	/* Reset the device; full speed may morph to high speed */
+ 	/* FIXME a USB 2.0 device may morph into SuperSpeed on reset. */
+ 	retval = hub_port_reset(hub, port1, udev, delay, false);
+@@ -4940,7 +4938,6 @@ fail:
+ 		hub_port_disable(hub, port1, 0);
+ 		update_devnum(udev, devnum);	/* for disconnect processing */
+ 	}
+-	mutex_unlock(hcd->address0_mutex);
+ 	return retval;
+ }
+ 
+@@ -5115,6 +5112,7 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
+ 	struct usb_port *port_dev = hub->ports[port1 - 1];
+ 	struct usb_device *udev = port_dev->child;
+ 	static int unreliable_port = -1;
++	bool retry_locked;
+ 
+ 	/* Disconnect any existing devices under this port */
+ 	if (udev) {
+@@ -5170,8 +5168,11 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
+ 		unit_load = 100;
+ 
+ 	status = 0;
+-	for (i = 0; i < PORT_INIT_TRIES; i++) {
+ 
++	for (i = 0; i < PORT_INIT_TRIES; i++) {
++		usb_lock_port(port_dev);
++		mutex_lock(hcd->address0_mutex);
++		retry_locked = true;
+ 		/* reallocate for each attempt, since references
+ 		 * to the previous one can escape in various ways
+ 		 */
+@@ -5179,6 +5180,8 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
+ 		if (!udev) {
+ 			dev_err(&port_dev->dev,
+ 					"couldn't allocate usb_device\n");
++			mutex_unlock(hcd->address0_mutex);
++			usb_unlock_port(port_dev);
+ 			goto done;
+ 		}
+ 
+@@ -5200,12 +5203,14 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
+ 		}
+ 
+ 		/* reset (non-USB 3.0 devices) and get descriptor */
+-		usb_lock_port(port_dev);
+ 		status = hub_port_init(hub, udev, port1, i);
+-		usb_unlock_port(port_dev);
+ 		if (status < 0)
+ 			goto loop;
+ 
++		mutex_unlock(hcd->address0_mutex);
++		usb_unlock_port(port_dev);
++		retry_locked = false;
++
+ 		if (udev->quirks & USB_QUIRK_DELAY_INIT)
+ 			msleep(2000);
+ 
+@@ -5298,6 +5303,10 @@ loop:
+ 		usb_ep0_reinit(udev);
+ 		release_devnum(udev);
+ 		hub_free_dev(udev);
++		if (retry_locked) {
++			mutex_unlock(hcd->address0_mutex);
++			usb_unlock_port(port_dev);
++		}
+ 		usb_put_dev(udev);
+ 		if ((status == -ENOTCONN) || (status == -ENOTSUPP))
+ 			break;
+@@ -5839,6 +5848,8 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 	bos = udev->bos;
+ 	udev->bos = NULL;
+ 
++	mutex_lock(hcd->address0_mutex);
++
+ 	for (i = 0; i < PORT_INIT_TRIES; ++i) {
+ 
+ 		/* ep0 maxpacket size may change; let the HCD know about it.
+@@ -5848,6 +5859,7 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 		if (ret >= 0 || ret == -ENOTCONN || ret == -ENODEV)
+ 			break;
+ 	}
++	mutex_unlock(hcd->address0_mutex);
+ 
+ 	if (ret < 0)
+ 		goto re_enumerate;
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 7207a36c6e26b..449f19c3633c2 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -1198,6 +1198,8 @@ static void dwc2_hsotg_start_req(struct dwc2_hsotg *hsotg,
+ 			}
+ 			ctrl |= DXEPCTL_CNAK;
+ 		} else {
++			hs_req->req.frame_number = hs_ep->target_frame;
++			hs_req->req.actual = 0;
+ 			dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, -ENODATA);
+ 			return;
+ 		}
+@@ -2856,9 +2858,12 @@ static void dwc2_gadget_handle_ep_disabled(struct dwc2_hsotg_ep *hs_ep)
+ 
+ 	do {
+ 		hs_req = get_ep_head(hs_ep);
+-		if (hs_req)
++		if (hs_req) {
++			hs_req->req.frame_number = hs_ep->target_frame;
++			hs_req->req.actual = 0;
+ 			dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req,
+ 						    -ENODATA);
++		}
+ 		dwc2_gadget_incr_frame_num(hs_ep);
+ 		/* Update current frame number value. */
+ 		hsotg->frame_number = dwc2_hsotg_read_frameno(hsotg);
+@@ -2911,8 +2916,11 @@ static void dwc2_gadget_handle_out_token_ep_disabled(struct dwc2_hsotg_ep *ep)
+ 
+ 	while (dwc2_gadget_target_frame_elapsed(ep)) {
+ 		hs_req = get_ep_head(ep);
+-		if (hs_req)
++		if (hs_req) {
++			hs_req->req.frame_number = ep->target_frame;
++			hs_req->req.actual = 0;
+ 			dwc2_hsotg_complete_request(hsotg, ep, hs_req, -ENODATA);
++		}
+ 
+ 		dwc2_gadget_incr_frame_num(ep);
+ 		/* Update current frame number value. */
+@@ -3001,8 +3009,11 @@ static void dwc2_gadget_handle_nak(struct dwc2_hsotg_ep *hs_ep)
+ 
+ 	while (dwc2_gadget_target_frame_elapsed(hs_ep)) {
+ 		hs_req = get_ep_head(hs_ep);
+-		if (hs_req)
++		if (hs_req) {
++			hs_req->req.frame_number = hs_ep->target_frame;
++			hs_req->req.actual = 0;
+ 			dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, -ENODATA);
++		}
+ 
+ 		dwc2_gadget_incr_frame_num(hs_ep);
+ 		/* Update current frame number value. */
+diff --git a/drivers/usb/dwc2/hcd_queue.c b/drivers/usb/dwc2/hcd_queue.c
+index 68bbac64b7536..94af71e9856f2 100644
+--- a/drivers/usb/dwc2/hcd_queue.c
++++ b/drivers/usb/dwc2/hcd_queue.c
+@@ -59,7 +59,7 @@
+ #define DWC2_UNRESERVE_DELAY (msecs_to_jiffies(5))
+ 
+ /* If we get a NAK, wait this long before retrying */
+-#define DWC2_RETRY_WAIT_DELAY 1*1E6L
++#define DWC2_RETRY_WAIT_DELAY (1 * NSEC_PER_MSEC)
+ 
+ /**
+  * dwc2_periodic_channel_available() - Checks that a channel is available for a
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index b75fe568096f9..e9a87e1f49508 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -310,13 +310,24 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
+ 	if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_STARTTRANSFER) {
+ 		int link_state;
+ 
++		/*
++		 * Initiate remote wakeup if the link state is in U3 when
++		 * operating in SS/SSP or L1/L2 when operating in HS/FS. If the
++		 * link state is in U1/U2, no remote wakeup is needed. The Start
++		 * Transfer command will initiate the link recovery.
++		 */
+ 		link_state = dwc3_gadget_get_link_state(dwc);
+-		if (link_state == DWC3_LINK_STATE_U1 ||
+-		    link_state == DWC3_LINK_STATE_U2 ||
+-		    link_state == DWC3_LINK_STATE_U3) {
++		switch (link_state) {
++		case DWC3_LINK_STATE_U2:
++			if (dwc->gadget->speed >= USB_SPEED_SUPER)
++				break;
++
++			fallthrough;
++		case DWC3_LINK_STATE_U3:
+ 			ret = __dwc3_gadget_wakeup(dwc);
+ 			dev_WARN_ONCE(dwc->dev, ret, "wakeup failed --> %d\n",
+ 					ret);
++			break;
+ 		}
+ 	}
+ 
+@@ -2907,6 +2918,9 @@ static bool dwc3_gadget_endpoint_trbs_complete(struct dwc3_ep *dep,
+ 	struct dwc3		*dwc = dep->dwc;
+ 	bool			no_started_trb = true;
+ 
++	if (!dep->endpoint.desc)
++		return no_started_trb;
++
+ 	dwc3_gadget_ep_cleanup_completed_requests(dep, event, status);
+ 
+ 	if (dep->flags & DWC3_EP_END_TRANSFER_PENDING)
+@@ -2954,6 +2968,9 @@ static void dwc3_gadget_endpoint_transfer_in_progress(struct dwc3_ep *dep,
+ {
+ 	int status = 0;
+ 
++	if (!dep->endpoint.desc)
++		return;
++
+ 	if (usb_endpoint_xfer_isoc(dep->endpoint.desc))
+ 		dwc3_gadget_endpoint_frame_from_event(dep, event);
+ 
+@@ -3007,6 +3024,14 @@ static void dwc3_gadget_endpoint_command_complete(struct dwc3_ep *dep,
+ 	if (cmd != DWC3_DEPCMD_ENDTRANSFER)
+ 		return;
+ 
++	/*
++	 * The END_TRANSFER command will cause the controller to generate a
++	 * NoStream Event, and it's not due to the host DP NoStream rejection.
++	 * Ignore the next NoStream event.
++	 */
++	if (dep->stream_capable)
++		dep->flags |= DWC3_EP_IGNORE_NEXT_NOSTREAM;
++
+ 	dep->flags &= ~DWC3_EP_END_TRANSFER_PENDING;
+ 	dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
+ 	dwc3_gadget_ep_cleanup_cancelled_requests(dep);
+@@ -3229,14 +3254,6 @@ static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force,
+ 	WARN_ON_ONCE(ret);
+ 	dep->resource_index = 0;
+ 
+-	/*
+-	 * The END_TRANSFER command will cause the controller to generate a
+-	 * NoStream Event, and it's not due to the host DP NoStream rejection.
+-	 * Ignore the next NoStream event.
+-	 */
+-	if (dep->stream_capable)
+-		dep->flags |= DWC3_EP_IGNORE_NEXT_NOSTREAM;
+-
+ 	if (!interrupt)
+ 		dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
+ 	else
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index c7356718a7c66..28ffe4e358b77 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1267,6 +1267,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(2) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9010),				/* Telit SBL FN980 flashing device */
+ 	  .driver_info = NCTRL(0) | ZLP },
++	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9200),				/* Telit LE910S1 flashing device */
++	  .driver_info = NCTRL(0) | ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0002, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) },
+@@ -2094,6 +2096,9 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) },	/* Fibocom FG150 Diag */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) },		/* Fibocom FG150 AT */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) },			/* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a2, 0xff) },			/* Fibocom FM101-GL (laptop MBIM) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a4, 0xff),			/* Fibocom FM101-GL (laptop MBIM) */
++	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) },			/* LongSung M5710 */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) },			/* GosunCn GM500 RNDIS */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) },			/* GosunCn GM500 MBIM */
+diff --git a/drivers/usb/typec/tcpm/fusb302.c b/drivers/usb/typec/tcpm/fusb302.c
+index 99562cc65ca69..700e38e921523 100644
+--- a/drivers/usb/typec/tcpm/fusb302.c
++++ b/drivers/usb/typec/tcpm/fusb302.c
+@@ -669,25 +669,27 @@ static int tcpm_set_cc(struct tcpc_dev *dev, enum typec_cc_status cc)
+ 		ret = fusb302_i2c_mask_write(chip, FUSB_REG_MASK,
+ 					     FUSB_REG_MASK_BC_LVL |
+ 					     FUSB_REG_MASK_COMP_CHNG,
+-					     FUSB_REG_MASK_COMP_CHNG);
++					     FUSB_REG_MASK_BC_LVL);
+ 		if (ret < 0) {
+ 			fusb302_log(chip, "cannot set SRC interrupt, ret=%d",
+ 				    ret);
+ 			goto done;
+ 		}
+ 		chip->intr_comp_chng = true;
++		chip->intr_bc_lvl = false;
+ 		break;
+ 	case TYPEC_CC_RD:
+ 		ret = fusb302_i2c_mask_write(chip, FUSB_REG_MASK,
+ 					     FUSB_REG_MASK_BC_LVL |
+ 					     FUSB_REG_MASK_COMP_CHNG,
+-					     FUSB_REG_MASK_BC_LVL);
++					     FUSB_REG_MASK_COMP_CHNG);
+ 		if (ret < 0) {
+ 			fusb302_log(chip, "cannot set SRC interrupt, ret=%d",
+ 				    ret);
+ 			goto done;
+ 		}
+ 		chip->intr_bc_lvl = true;
++		chip->intr_comp_chng = false;
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index a483cec31d5cb..5cd1ee66d2326 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -494,7 +494,7 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
+ 			virtio_transport_free_pkt(pkt);
+ 
+ 		len += sizeof(pkt->hdr);
+-		vhost_add_used(vq, head, len);
++		vhost_add_used(vq, head, 0);
+ 		total_len += len;
+ 		added = true;
+ 	} while(likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));
+diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
+index 8a75092bb148b..98d870672dc5e 100644
+--- a/drivers/xen/xenbus/xenbus_probe.c
++++ b/drivers/xen/xenbus/xenbus_probe.c
+@@ -846,7 +846,7 @@ static struct notifier_block xenbus_resume_nb = {
+ 
+ static int __init xenbus_init(void)
+ {
+-	int err = 0;
++	int err;
+ 	uint64_t v = 0;
+ 	xen_store_domain_type = XS_UNKNOWN;
+ 
+@@ -886,6 +886,29 @@ static int __init xenbus_init(void)
+ 		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
+ 		if (err)
+ 			goto out_error;
++		/*
++		 * Uninitialized hvm_params are zero and return no error.
++		 * Although it is theoretically possible to have
++		 * HVM_PARAM_STORE_PFN set to zero on purpose, in reality it is
++		 * not zero when valid. If zero, it means that Xenstore hasn't
++		 * been properly initialized. Instead of attempting to map a
++		 * wrong guest physical address return error.
++		 *
++		 * Also recognize all bits set as an invalid value.
++		 */
++		if (!v || !~v) {
++			err = -ENOENT;
++			goto out_error;
++		}
++		/* Avoid truncation on 32-bit. */
++#if BITS_PER_LONG == 32
++		if (v > ULONG_MAX) {
++			pr_err("%s: cannot handle HVM_PARAM_STORE_PFN=%llx > ULONG_MAX\n",
++			       __func__, v);
++			err = -EINVAL;
++			goto out_error;
++		}
++#endif
+ 		xen_store_gfn = (unsigned long)v;
+ 		xen_store_interface =
+ 			xen_remap(xen_store_gfn << XEN_PAGE_SHIFT,
+@@ -920,8 +943,10 @@ static int __init xenbus_init(void)
+ 	 */
+ 	proc_create_mount_point("xen");
+ #endif
++	return 0;
+ 
+ out_error:
++	xen_store_domain_type = XS_UNKNOWN;
+ 	return err;
+ }
+ 
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index f33bfb255db8f..08c8d34c98091 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -52,8 +52,7 @@ static int ceph_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	struct ceph_fs_client *fsc = ceph_inode_to_client(d_inode(dentry));
+ 	struct ceph_mon_client *monc = &fsc->client->monc;
+ 	struct ceph_statfs st;
+-	u64 fsid;
+-	int err;
++	int i, err;
+ 	u64 data_pool;
+ 
+ 	if (fsc->mdsc->mdsmap->m_num_data_pg_pools == 1) {
+@@ -99,12 +98,14 @@ static int ceph_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	buf->f_namelen = NAME_MAX;
+ 
+ 	/* Must convert the fsid, for consistent values across arches */
++	buf->f_fsid.val[0] = 0;
+ 	mutex_lock(&monc->mutex);
+-	fsid = le64_to_cpu(*(__le64 *)(&monc->monmap->fsid)) ^
+-	       le64_to_cpu(*((__le64 *)&monc->monmap->fsid + 1));
++	for (i = 0 ; i < sizeof(monc->monmap->fsid) / sizeof(__le32) ; ++i)
++		buf->f_fsid.val[0] ^= le32_to_cpu(((__le32 *)&monc->monmap->fsid)[i]);
+ 	mutex_unlock(&monc->mutex);
+ 
+-	buf->f_fsid = u64_to_fsid(fsid);
++	/* fold the fs_cluster_id into the upper bits */
++	buf->f_fsid.val[1] = monc->fs_cluster_id;
+ 
+ 	return 0;
+ }
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 67139f9d583f2..6c06870f90184 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -2618,12 +2618,23 @@ int cifs_strict_fsync(struct file *file, loff_t start, loff_t end,
+ 	tcon = tlink_tcon(smbfile->tlink);
+ 	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC)) {
+ 		server = tcon->ses->server;
+-		if (server->ops->flush)
+-			rc = server->ops->flush(xid, tcon, &smbfile->fid);
+-		else
++		if (server->ops->flush == NULL) {
+ 			rc = -ENOSYS;
++			goto strict_fsync_exit;
++		}
++
++		if ((OPEN_FMODE(smbfile->f_flags) & FMODE_WRITE) == 0) {
++			smbfile = find_writable_file(CIFS_I(inode), FIND_WR_ANY);
++			if (smbfile) {
++				rc = server->ops->flush(xid, tcon, &smbfile->fid);
++				cifsFileInfo_put(smbfile);
++			} else
++				cifs_dbg(FYI, "ignore fsync for file not open for write\n");
++		} else
++			rc = server->ops->flush(xid, tcon, &smbfile->fid);
+ 	}
+ 
++strict_fsync_exit:
+ 	free_xid(xid);
+ 	return rc;
+ }
+@@ -2635,6 +2646,7 @@ int cifs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ 	struct cifs_tcon *tcon;
+ 	struct TCP_Server_Info *server;
+ 	struct cifsFileInfo *smbfile = file->private_data;
++	struct inode *inode = file_inode(file);
+ 	struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);
+ 
+ 	rc = file_write_and_wait_range(file, start, end);
+@@ -2651,12 +2663,23 @@ int cifs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ 	tcon = tlink_tcon(smbfile->tlink);
+ 	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC)) {
+ 		server = tcon->ses->server;
+-		if (server->ops->flush)
+-			rc = server->ops->flush(xid, tcon, &smbfile->fid);
+-		else
++		if (server->ops->flush == NULL) {
+ 			rc = -ENOSYS;
++			goto fsync_exit;
++		}
++
++		if ((OPEN_FMODE(smbfile->f_flags) & FMODE_WRITE) == 0) {
++			smbfile = find_writable_file(CIFS_I(inode), FIND_WR_ANY);
++			if (smbfile) {
++				rc = server->ops->flush(xid, tcon, &smbfile->fid);
++				cifsFileInfo_put(smbfile);
++			} else
++				cifs_dbg(FYI, "ignore fsync for file not open for write\n");
++		} else
++			rc = server->ops->flush(xid, tcon, &smbfile->fid);
+ 	}
+ 
++fsync_exit:
+ 	free_xid(xid);
+ 	return rc;
+ }
+diff --git a/fs/erofs/utils.c b/fs/erofs/utils.c
+index de9986d2f82fd..5c11199d753a6 100644
+--- a/fs/erofs/utils.c
++++ b/fs/erofs/utils.c
+@@ -154,7 +154,7 @@ static bool erofs_try_to_release_workgroup(struct erofs_sb_info *sbi,
+ 	 * however in order to avoid some race conditions, add a
+ 	 * DBG_BUGON to observe this in advance.
+ 	 */
+-	DBG_BUGON(xa_erase(&sbi->managed_pslots, grp->index) != grp);
++	DBG_BUGON(__xa_erase(&sbi->managed_pslots, grp->index) != grp);
+ 
+ 	/* last refcount should be connected with its managed pslot.  */
+ 	erofs_workgroup_unfreeze(grp, 0);
+@@ -169,15 +169,19 @@ static unsigned long erofs_shrink_workstation(struct erofs_sb_info *sbi,
+ 	unsigned int freed = 0;
+ 	unsigned long index;
+ 
++	xa_lock(&sbi->managed_pslots);
+ 	xa_for_each(&sbi->managed_pslots, index, grp) {
+ 		/* try to shrink each valid workgroup */
+ 		if (!erofs_try_to_release_workgroup(sbi, grp))
+ 			continue;
++		xa_unlock(&sbi->managed_pslots);
+ 
+ 		++freed;
+ 		if (!--nr_shrink)
+-			break;
++			return freed;
++		xa_lock(&sbi->managed_pslots);
+ 	}
++	xa_unlock(&sbi->managed_pslots);
+ 	return freed;
+ }
+ 
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 597a145c08ef5..7e625806bd4a2 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1389,6 +1389,7 @@ page_hit:
+ 			  nid, nid_of_node(page), ino_of_node(page),
+ 			  ofs_of_node(page), cpver_of_node(page),
+ 			  next_blkaddr_of_node(page));
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 		err = -EINVAL;
+ out_err:
+ 		ClearPageUptodate(page);
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index fb1917730e0e4..d100b5dfedbd2 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -851,17 +851,17 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
+ 		goto out_put_old;
+ 	}
+ 
++	get_page(newpage);
++
++	if (!(buf->flags & PIPE_BUF_FLAG_LRU))
++		lru_cache_add(newpage);
++
+ 	/*
+ 	 * Release while we have extra ref on stolen page.  Otherwise
+ 	 * anon_pipe_buf_release() might think the page can be reused.
+ 	 */
+ 	pipe_buf_release(cs->pipe, buf);
+ 
+-	get_page(newpage);
+-
+-	if (!(buf->flags & PIPE_BUF_FLAG_LRU))
+-		lru_cache_add(newpage);
+-
+ 	err = 0;
+ 	spin_lock(&cs->req->waitq.lock);
+ 	if (test_bit(FR_ABORTED, &cs->req->flags))
+diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
+index c078f88552695..f2248d9d4db51 100644
+--- a/fs/nfs/nfs42xdr.c
++++ b/fs/nfs/nfs42xdr.c
+@@ -1396,8 +1396,7 @@ static int nfs4_xdr_dec_clone(struct rpc_rqst *rqstp,
+ 	status = decode_clone(xdr);
+ 	if (status)
+ 		goto out;
+-	status = decode_getfattr(xdr, res->dst_fattr, res->server);
+-
++	decode_getfattr(xdr, res->dst_fattr, res->server);
+ out:
+ 	res->rpc_status = status;
+ 	return status;
+diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
+index c3a345c28a933..0e4278d4a7691 100644
+--- a/fs/proc/vmcore.c
++++ b/fs/proc/vmcore.c
+@@ -124,9 +124,13 @@ ssize_t read_from_oldmem(char *buf, size_t count,
+ 			nr_bytes = count;
+ 
+ 		/* If pfn is not ram, return zeros for sparse dump files */
+-		if (pfn_is_ram(pfn) == 0)
+-			memset(buf, 0, nr_bytes);
+-		else {
++		if (pfn_is_ram(pfn) == 0) {
++			tmp = 0;
++			if (!userbuf)
++				memset(buf, 0, nr_bytes);
++			else if (clear_user(buf, nr_bytes))
++				tmp = -EFAULT;
++		} else {
+ 			if (encrypted)
+ 				tmp = copy_oldmem_page_encrypted(pfn, buf,
+ 								 nr_bytes,
+@@ -135,10 +139,10 @@ ssize_t read_from_oldmem(char *buf, size_t count,
+ 			else
+ 				tmp = copy_oldmem_page(pfn, buf, nr_bytes,
+ 						       offset, userbuf);
+-
+-			if (tmp < 0)
+-				return tmp;
+ 		}
++		if (tmp < 0)
++			return tmp;
++
+ 		*ppos += nr_bytes;
+ 		count -= nr_bytes;
+ 		buf += nr_bytes;
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 1f62a4eec283c..474a0d852614f 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -173,7 +173,7 @@ struct bpf_map {
+ 	atomic64_t usercnt;
+ 	struct work_struct work;
+ 	struct mutex freeze_mutex;
+-	u64 writecnt; /* writable mmap cnt; protected by freeze_mutex */
++	atomic64_t writecnt;
+ };
+ 
+ static inline bool map_value_has_spin_lock(const struct bpf_map *map)
+@@ -1252,6 +1252,7 @@ void bpf_map_charge_move(struct bpf_map_memory *dst,
+ void *bpf_map_area_alloc(u64 size, int numa_node);
+ void *bpf_map_area_mmapable_alloc(u64 size, int numa_node);
+ void bpf_map_area_free(void *base);
++bool bpf_map_write_active(const struct bpf_map *map);
+ void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr);
+ int  generic_map_lookup_batch(struct bpf_map *map,
+ 			      const union bpf_attr *attr,
+diff --git a/include/linux/ipc_namespace.h b/include/linux/ipc_namespace.h
+index a06a78c67f19f..08325105131a2 100644
+--- a/include/linux/ipc_namespace.h
++++ b/include/linux/ipc_namespace.h
+@@ -132,6 +132,16 @@ static inline struct ipc_namespace *get_ipc_ns(struct ipc_namespace *ns)
+ 	return ns;
+ }
+ 
++static inline struct ipc_namespace *get_ipc_ns_not_zero(struct ipc_namespace *ns)
++{
++	if (ns) {
++		if (refcount_inc_not_zero(&ns->count))
++			return ns;
++	}
++
++	return NULL;
++}
++
+ extern void put_ipc_ns(struct ipc_namespace *ns);
+ #else
+ static inline struct ipc_namespace *copy_ipcs(unsigned long flags,
+@@ -148,6 +158,11 @@ static inline struct ipc_namespace *get_ipc_ns(struct ipc_namespace *ns)
+ 	return ns;
+ }
+ 
++static inline struct ipc_namespace *get_ipc_ns_not_zero(struct ipc_namespace *ns)
++{
++	return ns;
++}
++
+ static inline void put_ipc_ns(struct ipc_namespace *ns)
+ {
+ }
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index 24cacb1ca654d..fa75f325dad53 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -158,7 +158,7 @@ static inline struct vm_struct *task_stack_vm_area(const struct task_struct *t)
+  * Protects ->fs, ->files, ->mm, ->group_info, ->comm, keyring
+  * subscriptions and synchronises with wait4().  Also used in procfs.  Also
+  * pins the final release of task.io_context.  Also protects ->cpuset and
+- * ->cgroup.subsys[]. And ->vfork_done.
++ * ->cgroup.subsys[]. And ->vfork_done. And ->sysvshm.shm_clist.
+  *
+  * Nests both inside and outside of read_lock(&tasklist_lock).
+  * It must not be nested with write_lock_irq(&tasklist_lock),
+diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
+index ac5ff3c3afb14..88bc66b8d02b0 100644
+--- a/include/net/ip6_fib.h
++++ b/include/net/ip6_fib.h
+@@ -491,6 +491,7 @@ int fib6_nh_init(struct net *net, struct fib6_nh *fib6_nh,
+ 		 struct fib6_config *cfg, gfp_t gfp_flags,
+ 		 struct netlink_ext_ack *extack);
+ void fib6_nh_release(struct fib6_nh *fib6_nh);
++void fib6_nh_release_dsts(struct fib6_nh *fib6_nh);
+ 
+ int call_fib6_entry_notifiers(struct net *net,
+ 			      enum fib_event_type event_type,
+diff --git a/include/net/ipv6_stubs.h b/include/net/ipv6_stubs.h
+index 8fce558b5fea3..14a43111ffc6a 100644
+--- a/include/net/ipv6_stubs.h
++++ b/include/net/ipv6_stubs.h
+@@ -47,6 +47,7 @@ struct ipv6_stub {
+ 			    struct fib6_config *cfg, gfp_t gfp_flags,
+ 			    struct netlink_ext_ack *extack);
+ 	void (*fib6_nh_release)(struct fib6_nh *fib6_nh);
++	void (*fib6_nh_release_dsts)(struct fib6_nh *fib6_nh);
+ 	void (*fib6_update_sernum)(struct net *net, struct fib6_info *rt);
+ 	int (*ip6_del_rt)(struct net *net, struct fib6_info *rt, bool skip_notify);
+ 	void (*fib6_rt_update)(struct net *net, struct fib6_info *rt,
+diff --git a/include/net/nl802154.h b/include/net/nl802154.h
+index ddcee128f5d9a..145acb8f25095 100644
+--- a/include/net/nl802154.h
++++ b/include/net/nl802154.h
+@@ -19,6 +19,8 @@
+  *
+  */
+ 
++#include <linux/types.h>
++
+ #define NL802154_GENL_NAME "nl802154"
+ 
+ enum nl802154_commands {
+@@ -150,10 +152,9 @@ enum nl802154_attrs {
+ };
+ 
+ enum nl802154_iftype {
+-	/* for backwards compatibility TODO */
+-	NL802154_IFTYPE_UNSPEC = -1,
++	NL802154_IFTYPE_UNSPEC = (~(__u32)0),
+ 
+-	NL802154_IFTYPE_NODE,
++	NL802154_IFTYPE_NODE = 0,
+ 	NL802154_IFTYPE_MONITOR,
+ 	NL802154_IFTYPE_COORD,
+ 
+diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
+index 2af7a1cd66589..b39cdbc522ec7 100644
+--- a/include/xen/interface/io/ring.h
++++ b/include/xen/interface/io/ring.h
+@@ -1,21 +1,53 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+ /******************************************************************************
+  * ring.h
+  *
+  * Shared producer-consumer ring macros.
+  *
++ * Permission is hereby granted, free of charge, to any person obtaining a copy
++ * of this software and associated documentation files (the "Software"), to
++ * deal in the Software without restriction, including without limitation the
++ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
++ * sell copies of the Software, and to permit persons to whom the Software is
++ * furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
++ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
++ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
++ * DEALINGS IN THE SOFTWARE.
++ *
+  * Tim Deegan and Andrew Warfield November 2004.
+  */
+ 
+ #ifndef __XEN_PUBLIC_IO_RING_H__
+ #define __XEN_PUBLIC_IO_RING_H__
+ 
++/*
++ * When #include'ing this header, you need to provide the following
++ * declaration upfront:
++ * - standard integers types (uint8_t, uint16_t, etc)
++ * They are provided by stdint.h of the standard headers.
++ *
++ * In addition, if you intend to use the FLEX macros, you also need to
++ * provide the following, before invoking the FLEX macros:
++ * - size_t
++ * - memcpy
++ * - grant_ref_t
++ * These declarations are provided by string.h of the standard headers,
++ * and grant_table.h from the Xen public headers.
++ */
++
+ #include <xen/interface/grant_table.h>
+ 
+ typedef unsigned int RING_IDX;
+ 
+ /* Round a 32-bit unsigned constant down to the nearest power of two. */
+-#define __RD2(_x)  (((_x) & 0x00000002) ? 0x2		       : ((_x) & 0x1))
++#define __RD2(_x)  (((_x) & 0x00000002) ? 0x2                  : ((_x) & 0x1))
+ #define __RD4(_x)  (((_x) & 0x0000000c) ? __RD2((_x)>>2)<<2    : __RD2(_x))
+ #define __RD8(_x)  (((_x) & 0x000000f0) ? __RD4((_x)>>4)<<4    : __RD4(_x))
+ #define __RD16(_x) (((_x) & 0x0000ff00) ? __RD8((_x)>>8)<<8    : __RD8(_x))
+@@ -27,82 +59,79 @@ typedef unsigned int RING_IDX;
+  * A ring contains as many entries as will fit, rounded down to the nearest
+  * power of two (so we can mask with (size-1) to loop around).
+  */
+-#define __CONST_RING_SIZE(_s, _sz)				\
+-	(__RD32(((_sz) - offsetof(struct _s##_sring, ring)) /	\
+-		sizeof(((struct _s##_sring *)0)->ring[0])))
+-
++#define __CONST_RING_SIZE(_s, _sz) \
++    (__RD32(((_sz) - offsetof(struct _s##_sring, ring)) / \
++	    sizeof(((struct _s##_sring *)0)->ring[0])))
+ /*
+  * The same for passing in an actual pointer instead of a name tag.
+  */
+-#define __RING_SIZE(_s, _sz)						\
+-	(__RD32(((_sz) - (long)&(_s)->ring + (long)(_s)) / sizeof((_s)->ring[0])))
++#define __RING_SIZE(_s, _sz) \
++    (__RD32(((_sz) - (long)(_s)->ring + (long)(_s)) / sizeof((_s)->ring[0])))
+ 
+ /*
+  * Macros to make the correct C datatypes for a new kind of ring.
+  *
+  * To make a new ring datatype, you need to have two message structures,
+- * let's say struct request, and struct response already defined.
++ * let's say request_t, and response_t already defined.
+  *
+  * In a header where you want the ring datatype declared, you then do:
+  *
+- *     DEFINE_RING_TYPES(mytag, struct request, struct response);
++ *     DEFINE_RING_TYPES(mytag, request_t, response_t);
+  *
+  * These expand out to give you a set of types, as you can see below.
+  * The most important of these are:
+  *
+- *     struct mytag_sring      - The shared ring.
+- *     struct mytag_front_ring - The 'front' half of the ring.
+- *     struct mytag_back_ring  - The 'back' half of the ring.
++ *     mytag_sring_t      - The shared ring.
++ *     mytag_front_ring_t - The 'front' half of the ring.
++ *     mytag_back_ring_t  - The 'back' half of the ring.
+  *
+  * To initialize a ring in your code you need to know the location and size
+  * of the shared memory area (PAGE_SIZE, for instance). To initialise
+  * the front half:
+  *
+- *     struct mytag_front_ring front_ring;
+- *     SHARED_RING_INIT((struct mytag_sring *)shared_page);
+- *     FRONT_RING_INIT(&front_ring, (struct mytag_sring *)shared_page,
+- *		       PAGE_SIZE);
++ *     mytag_front_ring_t front_ring;
++ *     SHARED_RING_INIT((mytag_sring_t *)shared_page);
++ *     FRONT_RING_INIT(&front_ring, (mytag_sring_t *)shared_page, PAGE_SIZE);
+  *
+  * Initializing the back follows similarly (note that only the front
+  * initializes the shared ring):
+  *
+- *     struct mytag_back_ring back_ring;
+- *     BACK_RING_INIT(&back_ring, (struct mytag_sring *)shared_page,
+- *		      PAGE_SIZE);
++ *     mytag_back_ring_t back_ring;
++ *     BACK_RING_INIT(&back_ring, (mytag_sring_t *)shared_page, PAGE_SIZE);
+  */
+ 
+-#define DEFINE_RING_TYPES(__name, __req_t, __rsp_t)			\
+-									\
+-/* Shared ring entry */							\
+-union __name##_sring_entry {						\
+-    __req_t req;							\
+-    __rsp_t rsp;							\
+-};									\
+-									\
+-/* Shared ring page */							\
+-struct __name##_sring {							\
+-    RING_IDX req_prod, req_event;					\
+-    RING_IDX rsp_prod, rsp_event;					\
+-    uint8_t  pad[48];							\
+-    union __name##_sring_entry ring[1]; /* variable-length */		\
+-};									\
+-									\
+-/* "Front" end's private variables */					\
+-struct __name##_front_ring {						\
+-    RING_IDX req_prod_pvt;						\
+-    RING_IDX rsp_cons;							\
+-    unsigned int nr_ents;						\
+-    struct __name##_sring *sring;					\
+-};									\
+-									\
+-/* "Back" end's private variables */					\
+-struct __name##_back_ring {						\
+-    RING_IDX rsp_prod_pvt;						\
+-    RING_IDX req_cons;							\
+-    unsigned int nr_ents;						\
+-    struct __name##_sring *sring;					\
+-};
+-
++#define DEFINE_RING_TYPES(__name, __req_t, __rsp_t)                     \
++                                                                        \
++/* Shared ring entry */                                                 \
++union __name##_sring_entry {                                            \
++    __req_t req;                                                        \
++    __rsp_t rsp;                                                        \
++};                                                                      \
++                                                                        \
++/* Shared ring page */                                                  \
++struct __name##_sring {                                                 \
++    RING_IDX req_prod, req_event;                                       \
++    RING_IDX rsp_prod, rsp_event;                                       \
++    uint8_t __pad[48];                                                  \
++    union __name##_sring_entry ring[1]; /* variable-length */           \
++};                                                                      \
++                                                                        \
++/* "Front" end's private variables */                                   \
++struct __name##_front_ring {                                            \
++    RING_IDX req_prod_pvt;                                              \
++    RING_IDX rsp_cons;                                                  \
++    unsigned int nr_ents;                                               \
++    struct __name##_sring *sring;                                       \
++};                                                                      \
++                                                                        \
++/* "Back" end's private variables */                                    \
++struct __name##_back_ring {                                             \
++    RING_IDX rsp_prod_pvt;                                              \
++    RING_IDX req_cons;                                                  \
++    unsigned int nr_ents;                                               \
++    struct __name##_sring *sring;                                       \
++};                                                                      \
++                                                                        \
+ /*
+  * Macros for manipulating rings.
+  *
+@@ -119,94 +148,99 @@ struct __name##_back_ring {						\
+  */
+ 
+ /* Initialising empty rings */
+-#define SHARED_RING_INIT(_s) do {					\
+-    (_s)->req_prod  = (_s)->rsp_prod  = 0;				\
+-    (_s)->req_event = (_s)->rsp_event = 1;				\
+-    memset((_s)->pad, 0, sizeof((_s)->pad));				\
++#define SHARED_RING_INIT(_s) do {                                       \
++    (_s)->req_prod  = (_s)->rsp_prod  = 0;                              \
++    (_s)->req_event = (_s)->rsp_event = 1;                              \
++    (void)memset((_s)->__pad, 0, sizeof((_s)->__pad));                  \
+ } while(0)
+ 
+-#define FRONT_RING_ATTACH(_r, _s, _i, __size) do {			\
+-    (_r)->req_prod_pvt = (_i);						\
+-    (_r)->rsp_cons = (_i);						\
+-    (_r)->nr_ents = __RING_SIZE(_s, __size);				\
+-    (_r)->sring = (_s);							\
++#define FRONT_RING_ATTACH(_r, _s, _i, __size) do {                      \
++    (_r)->req_prod_pvt = (_i);                                          \
++    (_r)->rsp_cons = (_i);                                              \
++    (_r)->nr_ents = __RING_SIZE(_s, __size);                            \
++    (_r)->sring = (_s);                                                 \
+ } while (0)
+ 
+ #define FRONT_RING_INIT(_r, _s, __size) FRONT_RING_ATTACH(_r, _s, 0, __size)
+ 
+-#define BACK_RING_ATTACH(_r, _s, _i, __size) do {			\
+-    (_r)->rsp_prod_pvt = (_i);						\
+-    (_r)->req_cons = (_i);						\
+-    (_r)->nr_ents = __RING_SIZE(_s, __size);				\
+-    (_r)->sring = (_s);							\
++#define BACK_RING_ATTACH(_r, _s, _i, __size) do {                       \
++    (_r)->rsp_prod_pvt = (_i);                                          \
++    (_r)->req_cons = (_i);                                              \
++    (_r)->nr_ents = __RING_SIZE(_s, __size);                            \
++    (_r)->sring = (_s);                                                 \
+ } while (0)
+ 
+ #define BACK_RING_INIT(_r, _s, __size) BACK_RING_ATTACH(_r, _s, 0, __size)
+ 
+ /* How big is this ring? */
+-#define RING_SIZE(_r)							\
++#define RING_SIZE(_r)                                                   \
+     ((_r)->nr_ents)
+ 
+ /* Number of free requests (for use on front side only). */
+-#define RING_FREE_REQUESTS(_r)						\
++#define RING_FREE_REQUESTS(_r)                                          \
+     (RING_SIZE(_r) - ((_r)->req_prod_pvt - (_r)->rsp_cons))
+ 
+ /* Test if there is an empty slot available on the front ring.
+  * (This is only meaningful from the front. )
+  */
+-#define RING_FULL(_r)							\
++#define RING_FULL(_r)                                                   \
+     (RING_FREE_REQUESTS(_r) == 0)
+ 
+ /* Test if there are outstanding messages to be processed on a ring. */
+-#define RING_HAS_UNCONSUMED_RESPONSES(_r)				\
++#define RING_HAS_UNCONSUMED_RESPONSES(_r)                               \
+     ((_r)->sring->rsp_prod - (_r)->rsp_cons)
+ 
+-#define RING_HAS_UNCONSUMED_REQUESTS(_r)				\
+-    ({									\
+-	unsigned int req = (_r)->sring->req_prod - (_r)->req_cons;	\
+-	unsigned int rsp = RING_SIZE(_r) -				\
+-			   ((_r)->req_cons - (_r)->rsp_prod_pvt);	\
+-	req < rsp ? req : rsp;						\
+-    })
++#define RING_HAS_UNCONSUMED_REQUESTS(_r) ({                             \
++    unsigned int req = (_r)->sring->req_prod - (_r)->req_cons;          \
++    unsigned int rsp = RING_SIZE(_r) -                                  \
++        ((_r)->req_cons - (_r)->rsp_prod_pvt);                          \
++    req < rsp ? req : rsp;                                              \
++})
+ 
+ /* Direct access to individual ring elements, by index. */
+-#define RING_GET_REQUEST(_r, _idx)					\
++#define RING_GET_REQUEST(_r, _idx)                                      \
+     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))
+ 
++#define RING_GET_RESPONSE(_r, _idx)                                     \
++    (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
++
+ /*
+- * Get a local copy of a request.
++ * Get a local copy of a request/response.
+  *
+- * Use this in preference to RING_GET_REQUEST() so all processing is
++ * Use this in preference to RING_GET_{REQUEST,RESPONSE}() so all processing is
+  * done on a local copy that cannot be modified by the other end.
+  *
+  * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this
+- * to be ineffective where _req is a struct which consists of only bitfields.
++ * to be ineffective where dest is a struct which consists of only bitfields.
+  */
+-#define RING_COPY_REQUEST(_r, _idx, _req) do {				\
+-	/* Use volatile to force the copy into _req. */			\
+-	*(_req) = *(volatile typeof(_req))RING_GET_REQUEST(_r, _idx);	\
++#define RING_COPY_(type, r, idx, dest) do {				\
++	/* Use volatile to force the copy into dest. */			\
++	*(dest) = *(volatile typeof(dest))RING_GET_##type(r, idx);	\
+ } while (0)
+ 
+-#define RING_GET_RESPONSE(_r, _idx)					\
+-    (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
++#define RING_COPY_REQUEST(r, idx, req)  RING_COPY_(REQUEST, r, idx, req)
++#define RING_COPY_RESPONSE(r, idx, rsp) RING_COPY_(RESPONSE, r, idx, rsp)
+ 
+ /* Loop termination condition: Would the specified index overflow the ring? */
+-#define RING_REQUEST_CONS_OVERFLOW(_r, _cons)				\
++#define RING_REQUEST_CONS_OVERFLOW(_r, _cons)                           \
+     (((_cons) - (_r)->rsp_prod_pvt) >= RING_SIZE(_r))
+ 
+ /* Ill-behaved frontend determination: Can there be this many requests? */
+-#define RING_REQUEST_PROD_OVERFLOW(_r, _prod)               \
++#define RING_REQUEST_PROD_OVERFLOW(_r, _prod)                           \
+     (((_prod) - (_r)->rsp_prod_pvt) > RING_SIZE(_r))
+ 
++/* Ill-behaved backend determination: Can there be this many responses? */
++#define RING_RESPONSE_PROD_OVERFLOW(_r, _prod)                          \
++    (((_prod) - (_r)->rsp_cons) > RING_SIZE(_r))
+ 
+-#define RING_PUSH_REQUESTS(_r) do {					\
+-    virt_wmb(); /* back sees requests /before/ updated producer index */	\
+-    (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
++#define RING_PUSH_REQUESTS(_r) do {                                     \
++    virt_wmb(); /* back sees requests /before/ updated producer index */\
++    (_r)->sring->req_prod = (_r)->req_prod_pvt;                         \
+ } while (0)
+ 
+-#define RING_PUSH_RESPONSES(_r) do {					\
+-    virt_wmb(); /* front sees responses /before/ updated producer index */	\
+-    (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
++#define RING_PUSH_RESPONSES(_r) do {                                    \
++    virt_wmb(); /* front sees resps /before/ updated producer index */  \
++    (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;                         \
+ } while (0)
+ 
+ /*
+@@ -239,40 +273,40 @@ struct __name##_back_ring {						\
+  *  field appropriately.
+  */
+ 
+-#define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
+-    RING_IDX __old = (_r)->sring->req_prod;				\
+-    RING_IDX __new = (_r)->req_prod_pvt;				\
+-    virt_wmb(); /* back sees requests /before/ updated producer index */	\
+-    (_r)->sring->req_prod = __new;					\
+-    virt_mb(); /* back sees new requests /before/ we check req_event */	\
+-    (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
+-		 (RING_IDX)(__new - __old));				\
++#define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {           \
++    RING_IDX __old = (_r)->sring->req_prod;                             \
++    RING_IDX __new = (_r)->req_prod_pvt;                                \
++    virt_wmb(); /* back sees requests /before/ updated producer index */\
++    (_r)->sring->req_prod = __new;                                      \
++    virt_mb(); /* back sees new requests /before/ we check req_event */ \
++    (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <           \
++                 (RING_IDX)(__new - __old));                            \
+ } while (0)
+ 
+-#define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
+-    RING_IDX __old = (_r)->sring->rsp_prod;				\
+-    RING_IDX __new = (_r)->rsp_prod_pvt;				\
+-    virt_wmb(); /* front sees responses /before/ updated producer index */	\
+-    (_r)->sring->rsp_prod = __new;					\
+-    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
+-    (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
+-		 (RING_IDX)(__new - __old));				\
++#define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {          \
++    RING_IDX __old = (_r)->sring->rsp_prod;                             \
++    RING_IDX __new = (_r)->rsp_prod_pvt;                                \
++    virt_wmb(); /* front sees resps /before/ updated producer index */  \
++    (_r)->sring->rsp_prod = __new;                                      \
++    virt_mb(); /* front sees new resps /before/ we check rsp_event */   \
++    (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <           \
++                 (RING_IDX)(__new - __old));                            \
+ } while (0)
+ 
+-#define RING_FINAL_CHECK_FOR_REQUESTS(_r, _work_to_do) do {		\
+-    (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
+-    if (_work_to_do) break;						\
+-    (_r)->sring->req_event = (_r)->req_cons + 1;			\
+-    virt_mb();								\
+-    (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
++#define RING_FINAL_CHECK_FOR_REQUESTS(_r, _work_to_do) do {             \
++    (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);                   \
++    if (_work_to_do) break;                                             \
++    (_r)->sring->req_event = (_r)->req_cons + 1;                        \
++    virt_mb();                                                          \
++    (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);                   \
+ } while (0)
+ 
+-#define RING_FINAL_CHECK_FOR_RESPONSES(_r, _work_to_do) do {		\
+-    (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
+-    if (_work_to_do) break;						\
+-    (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
+-    virt_mb();								\
+-    (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
++#define RING_FINAL_CHECK_FOR_RESPONSES(_r, _work_to_do) do {            \
++    (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);                  \
++    if (_work_to_do) break;                                             \
++    (_r)->sring->rsp_event = (_r)->rsp_cons + 1;                        \
++    virt_mb();                                                          \
++    (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);                  \
+ } while (0)
+ 
+ 
+diff --git a/ipc/shm.c b/ipc/shm.c
+index e25c7c6106bcf..471ac3e7498d5 100644
+--- a/ipc/shm.c
++++ b/ipc/shm.c
+@@ -62,9 +62,18 @@ struct shmid_kernel /* private to the kernel */
+ 	struct pid		*shm_lprid;
+ 	struct user_struct	*mlock_user;
+ 
+-	/* The task created the shm object.  NULL if the task is dead. */
++	/*
++	 * The task created the shm object, for
++	 * task_lock(shp->shm_creator)
++	 */
+ 	struct task_struct	*shm_creator;
+-	struct list_head	shm_clist;	/* list by creator */
++
++	/*
++	 * List by creator. task_lock(->shm_creator) required for read/write.
++	 * If list_empty(), then the creator is dead already.
++	 */
++	struct list_head	shm_clist;
++	struct ipc_namespace	*ns;
+ } __randomize_layout;
+ 
+ /* shm_mode upper byte flags */
+@@ -115,6 +124,7 @@ static void do_shm_rmid(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
+ 	struct shmid_kernel *shp;
+ 
+ 	shp = container_of(ipcp, struct shmid_kernel, shm_perm);
++	WARN_ON(ns != shp->ns);
+ 
+ 	if (shp->shm_nattch) {
+ 		shp->shm_perm.mode |= SHM_DEST;
+@@ -225,10 +235,43 @@ static void shm_rcu_free(struct rcu_head *head)
+ 	kvfree(shp);
+ }
+ 
+-static inline void shm_rmid(struct ipc_namespace *ns, struct shmid_kernel *s)
++/*
++ * It has to be called with shp locked.
++ * It must be called before ipc_rmid()
++ */
++static inline void shm_clist_rm(struct shmid_kernel *shp)
+ {
+-	list_del(&s->shm_clist);
+-	ipc_rmid(&shm_ids(ns), &s->shm_perm);
++	struct task_struct *creator;
++
++	/* ensure that shm_creator does not disappear */
++	rcu_read_lock();
++
++	/*
++	 * A concurrent exit_shm may do a list_del_init() as well.
++	 * Just do nothing if exit_shm already did the work
++	 */
++	if (!list_empty(&shp->shm_clist)) {
++		/*
++		 * shp->shm_creator is guaranteed to be valid *only*
++		 * if shp->shm_clist is not empty.
++		 */
++		creator = shp->shm_creator;
++
++		task_lock(creator);
++		/*
++		 * list_del_init() is a nop if the entry was already removed
++		 * from the list.
++		 */
++		list_del_init(&shp->shm_clist);
++		task_unlock(creator);
++	}
++	rcu_read_unlock();
++}
++
++static inline void shm_rmid(struct shmid_kernel *s)
++{
++	shm_clist_rm(s);
++	ipc_rmid(&shm_ids(s->ns), &s->shm_perm);
+ }
+ 
+ 
+@@ -283,7 +326,7 @@ static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp)
+ 	shm_file = shp->shm_file;
+ 	shp->shm_file = NULL;
+ 	ns->shm_tot -= (shp->shm_segsz + PAGE_SIZE - 1) >> PAGE_SHIFT;
+-	shm_rmid(ns, shp);
++	shm_rmid(shp);
+ 	shm_unlock(shp);
+ 	if (!is_file_hugepages(shm_file))
+ 		shmem_lock(shm_file, 0, shp->mlock_user);
+@@ -306,10 +349,10 @@ static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp)
+  *
+  * 2) sysctl kernel.shm_rmid_forced is set to 1.
+  */
+-static bool shm_may_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp)
++static bool shm_may_destroy(struct shmid_kernel *shp)
+ {
+ 	return (shp->shm_nattch == 0) &&
+-	       (ns->shm_rmid_forced ||
++	       (shp->ns->shm_rmid_forced ||
+ 		(shp->shm_perm.mode & SHM_DEST));
+ }
+ 
+@@ -340,7 +383,7 @@ static void shm_close(struct vm_area_struct *vma)
+ 	ipc_update_pid(&shp->shm_lprid, task_tgid(current));
+ 	shp->shm_dtim = ktime_get_real_seconds();
+ 	shp->shm_nattch--;
+-	if (shm_may_destroy(ns, shp))
++	if (shm_may_destroy(shp))
+ 		shm_destroy(ns, shp);
+ 	else
+ 		shm_unlock(shp);
+@@ -361,10 +404,10 @@ static int shm_try_destroy_orphaned(int id, void *p, void *data)
+ 	 *
+ 	 * As shp->* are changed under rwsem, it's safe to skip shp locking.
+ 	 */
+-	if (shp->shm_creator != NULL)
++	if (!list_empty(&shp->shm_clist))
+ 		return 0;
+ 
+-	if (shm_may_destroy(ns, shp)) {
++	if (shm_may_destroy(shp)) {
+ 		shm_lock_by_ptr(shp);
+ 		shm_destroy(ns, shp);
+ 	}
+@@ -382,48 +425,97 @@ void shm_destroy_orphaned(struct ipc_namespace *ns)
+ /* Locking assumes this will only be called with task == current */
+ void exit_shm(struct task_struct *task)
+ {
+-	struct ipc_namespace *ns = task->nsproxy->ipc_ns;
+-	struct shmid_kernel *shp, *n;
++	for (;;) {
++		struct shmid_kernel *shp;
++		struct ipc_namespace *ns;
+ 
+-	if (list_empty(&task->sysvshm.shm_clist))
+-		return;
++		task_lock(task);
++
++		if (list_empty(&task->sysvshm.shm_clist)) {
++			task_unlock(task);
++			break;
++		}
++
++		shp = list_first_entry(&task->sysvshm.shm_clist, struct shmid_kernel,
++				shm_clist);
+ 
+-	/*
+-	 * If kernel.shm_rmid_forced is not set then only keep track of
+-	 * which shmids are orphaned, so that a later set of the sysctl
+-	 * can clean them up.
+-	 */
+-	if (!ns->shm_rmid_forced) {
+-		down_read(&shm_ids(ns).rwsem);
+-		list_for_each_entry(shp, &task->sysvshm.shm_clist, shm_clist)
+-			shp->shm_creator = NULL;
+ 		/*
+-		 * Only under read lock but we are only called on current
+-		 * so no entry on the list will be shared.
++		 * 1) Get pointer to the ipc namespace. It is worth to say
++		 * that this pointer is guaranteed to be valid because
++		 * shp lifetime is always shorter than namespace lifetime
++		 * in which shp lives.
++		 * We taken task_lock it means that shp won't be freed.
+ 		 */
+-		list_del(&task->sysvshm.shm_clist);
+-		up_read(&shm_ids(ns).rwsem);
+-		return;
+-	}
++		ns = shp->ns;
+ 
+-	/*
+-	 * Destroy all already created segments, that were not yet mapped,
+-	 * and mark any mapped as orphan to cover the sysctl toggling.
+-	 * Destroy is skipped if shm_may_destroy() returns false.
+-	 */
+-	down_write(&shm_ids(ns).rwsem);
+-	list_for_each_entry_safe(shp, n, &task->sysvshm.shm_clist, shm_clist) {
+-		shp->shm_creator = NULL;
++		/*
++		 * 2) If kernel.shm_rmid_forced is not set then only keep track of
++		 * which shmids are orphaned, so that a later set of the sysctl
++		 * can clean them up.
++		 */
++		if (!ns->shm_rmid_forced)
++			goto unlink_continue;
+ 
+-		if (shm_may_destroy(ns, shp)) {
+-			shm_lock_by_ptr(shp);
+-			shm_destroy(ns, shp);
++		/*
++		 * 3) get a reference to the namespace.
++		 *    The refcount could be already 0. If it is 0, then
++		 *    the shm objects will be free by free_ipc_work().
++		 */
++		ns = get_ipc_ns_not_zero(ns);
++		if (!ns) {
++unlink_continue:
++			list_del_init(&shp->shm_clist);
++			task_unlock(task);
++			continue;
+ 		}
+-	}
+ 
+-	/* Remove the list head from any segments still attached. */
+-	list_del(&task->sysvshm.shm_clist);
+-	up_write(&shm_ids(ns).rwsem);
++		/*
++		 * 4) get a reference to shp.
++		 *   This cannot fail: shm_clist_rm() is called before
++		 *   ipc_rmid(), thus the refcount cannot be 0.
++		 */
++		WARN_ON(!ipc_rcu_getref(&shp->shm_perm));
++
++		/*
++		 * 5) unlink the shm segment from the list of segments
++		 *    created by current.
++		 *    This must be done last. After unlinking,
++		 *    only the refcounts obtained above prevent IPC_RMID
++		 *    from destroying the segment or the namespace.
++		 */
++		list_del_init(&shp->shm_clist);
++
++		task_unlock(task);
++
++		/*
++		 * 6) we have all references
++		 *    Thus lock & if needed destroy shp.
++		 */
++		down_write(&shm_ids(ns).rwsem);
++		shm_lock_by_ptr(shp);
++		/*
++		 * rcu_read_lock was implicitly taken in shm_lock_by_ptr, it's
++		 * safe to call ipc_rcu_putref here
++		 */
++		ipc_rcu_putref(&shp->shm_perm, shm_rcu_free);
++
++		if (ipc_valid_object(&shp->shm_perm)) {
++			if (shm_may_destroy(shp))
++				shm_destroy(ns, shp);
++			else
++				shm_unlock(shp);
++		} else {
++			/*
++			 * Someone else deleted the shp from namespace
++			 * idr/kht while we have waited.
++			 * Just unlock and continue.
++			 */
++			shm_unlock(shp);
++		}
++
++		up_write(&shm_ids(ns).rwsem);
++		put_ipc_ns(ns); /* paired with get_ipc_ns_not_zero */
++	}
+ }
+ 
+ static vm_fault_t shm_fault(struct vm_fault *vmf)
+@@ -680,7 +772,11 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
+ 	if (error < 0)
+ 		goto no_id;
+ 
++	shp->ns = ns;
++
++	task_lock(current);
+ 	list_add(&shp->shm_clist, &current->sysvshm.shm_clist);
++	task_unlock(current);
+ 
+ 	/*
+ 	 * shmid gets reported as "inode#" in /proc/pid/maps.
+@@ -1573,7 +1669,8 @@ out_nattch:
+ 	down_write(&shm_ids(ns).rwsem);
+ 	shp = shm_lock(ns, shmid);
+ 	shp->shm_nattch--;
+-	if (shm_may_destroy(ns, shp))
++
++	if (shm_may_destroy(shp))
+ 		shm_destroy(ns, shp);
+ 	else
+ 		shm_unlock(shp);
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 5b6da64da46d7..bb9a9cb1f321e 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -127,6 +127,21 @@ static struct bpf_map *find_and_alloc_map(union bpf_attr *attr)
+ 	return map;
+ }
+ 
++static void bpf_map_write_active_inc(struct bpf_map *map)
++{
++	atomic64_inc(&map->writecnt);
++}
++
++static void bpf_map_write_active_dec(struct bpf_map *map)
++{
++	atomic64_dec(&map->writecnt);
++}
++
++bool bpf_map_write_active(const struct bpf_map *map)
++{
++	return atomic64_read(&map->writecnt) != 0;
++}
++
+ static u32 bpf_map_value_size(struct bpf_map *map)
+ {
+ 	if (map->map_type == BPF_MAP_TYPE_PERCPU_HASH ||
+@@ -588,11 +603,8 @@ static void bpf_map_mmap_open(struct vm_area_struct *vma)
+ {
+ 	struct bpf_map *map = vma->vm_file->private_data;
+ 
+-	if (vma->vm_flags & VM_MAYWRITE) {
+-		mutex_lock(&map->freeze_mutex);
+-		map->writecnt++;
+-		mutex_unlock(&map->freeze_mutex);
+-	}
++	if (vma->vm_flags & VM_MAYWRITE)
++		bpf_map_write_active_inc(map);
+ }
+ 
+ /* called for all unmapped memory region (including initial) */
+@@ -600,11 +612,8 @@ static void bpf_map_mmap_close(struct vm_area_struct *vma)
+ {
+ 	struct bpf_map *map = vma->vm_file->private_data;
+ 
+-	if (vma->vm_flags & VM_MAYWRITE) {
+-		mutex_lock(&map->freeze_mutex);
+-		map->writecnt--;
+-		mutex_unlock(&map->freeze_mutex);
+-	}
++	if (vma->vm_flags & VM_MAYWRITE)
++		bpf_map_write_active_dec(map);
+ }
+ 
+ static const struct vm_operations_struct bpf_map_default_vmops = {
+@@ -654,7 +663,7 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
+ 		goto out;
+ 
+ 	if (vma->vm_flags & VM_MAYWRITE)
+-		map->writecnt++;
++		bpf_map_write_active_inc(map);
+ out:
+ 	mutex_unlock(&map->freeze_mutex);
+ 	return err;
+@@ -1086,6 +1095,7 @@ static int map_update_elem(union bpf_attr *attr)
+ 	map = __bpf_map_get(f);
+ 	if (IS_ERR(map))
+ 		return PTR_ERR(map);
++	bpf_map_write_active_inc(map);
+ 	if (!(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
+ 		err = -EPERM;
+ 		goto err_put;
+@@ -1127,6 +1137,7 @@ free_value:
+ free_key:
+ 	kfree(key);
+ err_put:
++	bpf_map_write_active_dec(map);
+ 	fdput(f);
+ 	return err;
+ }
+@@ -1149,6 +1160,7 @@ static int map_delete_elem(union bpf_attr *attr)
+ 	map = __bpf_map_get(f);
+ 	if (IS_ERR(map))
+ 		return PTR_ERR(map);
++	bpf_map_write_active_inc(map);
+ 	if (!(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
+ 		err = -EPERM;
+ 		goto err_put;
+@@ -1179,6 +1191,7 @@ static int map_delete_elem(union bpf_attr *attr)
+ out:
+ 	kfree(key);
+ err_put:
++	bpf_map_write_active_dec(map);
+ 	fdput(f);
+ 	return err;
+ }
+@@ -1483,6 +1496,7 @@ static int map_lookup_and_delete_elem(union bpf_attr *attr)
+ 	map = __bpf_map_get(f);
+ 	if (IS_ERR(map))
+ 		return PTR_ERR(map);
++	bpf_map_write_active_inc(map);
+ 	if (!(map_get_sys_perms(map, f) & FMODE_CAN_READ) ||
+ 	    !(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
+ 		err = -EPERM;
+@@ -1524,6 +1538,7 @@ free_value:
+ free_key:
+ 	kfree(key);
+ err_put:
++	bpf_map_write_active_dec(map);
+ 	fdput(f);
+ 	return err;
+ }
+@@ -1550,8 +1565,7 @@ static int map_freeze(const union bpf_attr *attr)
+ 	}
+ 
+ 	mutex_lock(&map->freeze_mutex);
+-
+-	if (map->writecnt) {
++	if (bpf_map_write_active(map)) {
+ 		err = -EBUSY;
+ 		goto err_put;
+ 	}
+@@ -3976,6 +3990,9 @@ static int bpf_map_do_batch(const union bpf_attr *attr,
+ 			    union bpf_attr __user *uattr,
+ 			    int cmd)
+ {
++	bool has_read  = cmd == BPF_MAP_LOOKUP_BATCH ||
++			 cmd == BPF_MAP_LOOKUP_AND_DELETE_BATCH;
++	bool has_write = cmd != BPF_MAP_LOOKUP_BATCH;
+ 	struct bpf_map *map;
+ 	int err, ufd;
+ 	struct fd f;
+@@ -3988,16 +4005,13 @@ static int bpf_map_do_batch(const union bpf_attr *attr,
+ 	map = __bpf_map_get(f);
+ 	if (IS_ERR(map))
+ 		return PTR_ERR(map);
+-
+-	if ((cmd == BPF_MAP_LOOKUP_BATCH ||
+-	     cmd == BPF_MAP_LOOKUP_AND_DELETE_BATCH) &&
+-	    !(map_get_sys_perms(map, f) & FMODE_CAN_READ)) {
++	if (has_write)
++		bpf_map_write_active_inc(map);
++	if (has_read && !(map_get_sys_perms(map, f) & FMODE_CAN_READ)) {
+ 		err = -EPERM;
+ 		goto err_put;
+ 	}
+-
+-	if (cmd != BPF_MAP_LOOKUP_BATCH &&
+-	    !(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
++	if (has_write && !(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
+ 		err = -EPERM;
+ 		goto err_put;
+ 	}
+@@ -4010,8 +4024,9 @@ static int bpf_map_do_batch(const union bpf_attr *attr,
+ 		BPF_DO_BATCH(map->ops->map_update_batch);
+ 	else
+ 		BPF_DO_BATCH(map->ops->map_delete_batch);
+-
+ err_put:
++	if (has_write)
++		bpf_map_write_active_dec(map);
+ 	fdput(f);
+ 	return err;
+ }
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index a15826a9a644f..5a2b28e6816ee 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3486,7 +3486,22 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
+ 
+ static bool bpf_map_is_rdonly(const struct bpf_map *map)
+ {
+-	return (map->map_flags & BPF_F_RDONLY_PROG) && map->frozen;
++	/* A map is considered read-only if the following condition are true:
++	 *
++	 * 1) BPF program side cannot change any of the map content. The
++	 *    BPF_F_RDONLY_PROG flag is throughout the lifetime of a map
++	 *    and was set at map creation time.
++	 * 2) The map value(s) have been initialized from user space by a
++	 *    loader and then "frozen", such that no new map update/delete
++	 *    operations from syscall side are possible for the rest of
++	 *    the map's lifetime from that point onwards.
++	 * 3) Any parallel/pending map update/delete operations from syscall
++	 *    side have been completed. Only after that point, it's safe to
++	 *    assume that map value(s) are immutable.
++	 */
++	return (map->map_flags & BPF_F_RDONLY_PROG) &&
++	       READ_ONCE(map->frozen) &&
++	       !bpf_map_write_active(map);
+ }
+ 
+ static int bpf_map_direct_read(struct bpf_map *map, int off, int size, u64 *val)
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 67c22941b5f27..c06ced18f78ad 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -31,6 +31,7 @@
+ #include <linux/smpboot.h>
+ #include <linux/relay.h>
+ #include <linux/slab.h>
++#include <linux/scs.h>
+ #include <linux/percpu-rwsem.h>
+ #include <linux/cpuset.h>
+ 
+@@ -551,6 +552,12 @@ static int bringup_cpu(unsigned int cpu)
+ 	struct task_struct *idle = idle_thread_get(cpu);
+ 	int ret;
+ 
++	/*
++	 * Reset stale stack state from the last time this CPU was online.
++	 */
++	scs_task_reset(idle);
++	kasan_unpoison_task_stack(idle);
++
+ 	/*
+ 	 * Some architectures have to walk the irq descriptors to
+ 	 * setup the vector space for the cpu which comes online.
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index 2fc7d509a34fc..bf640fd6142a0 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -688,7 +688,7 @@ static int load_image_and_restore(void)
+ 		goto Unlock;
+ 
+ 	error = swsusp_read(&flags);
+-	swsusp_close(FMODE_READ);
++	swsusp_close(FMODE_READ | FMODE_EXCL);
+ 	if (!error)
+ 		error = hibernation_restore(flags & SF_PLATFORM_MODE);
+ 
+@@ -978,7 +978,7 @@ static int software_resume(void)
+ 	/* The snapshot device should not be opened while we're running */
+ 	if (!hibernate_acquire()) {
+ 		error = -EBUSY;
+-		swsusp_close(FMODE_READ);
++		swsusp_close(FMODE_READ | FMODE_EXCL);
+ 		goto Unlock;
+ 	}
+ 
+@@ -1013,7 +1013,7 @@ static int software_resume(void)
+ 	pm_pr_dbg("Hibernation image not present or could not be loaded.\n");
+ 	return error;
+  Close_Finish:
+-	swsusp_close(FMODE_READ);
++	swsusp_close(FMODE_READ | FMODE_EXCL);
+ 	goto Finish;
+ }
+ 
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index e456cce772a3a..304aad997da11 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -6523,9 +6523,6 @@ void __init init_idle(struct task_struct *idle, int cpu)
+ 	idle->se.exec_start = sched_clock();
+ 	idle->flags |= PF_IDLE;
+ 
+-	scs_task_reset(idle);
+-	kasan_unpoison_task_stack(idle);
+-
+ #ifdef CONFIG_SMP
+ 	/*
+ 	 * Its possible that init_idle() gets called multiple times on a task,
+@@ -6681,7 +6678,6 @@ void idle_task_exit(void)
+ 		finish_arch_post_lock_switch();
+ 	}
+ 
+-	scs_task_reset(current);
+ 	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
+ }
+ 
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 15a811d34cd82..8d67f7f448400 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -1506,14 +1506,26 @@ __event_trigger_test_discard(struct trace_event_file *file,
+ 	if (eflags & EVENT_FILE_FL_TRIGGER_COND)
+ 		*tt = event_triggers_call(file, entry, event);
+ 
+-	if (test_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags) ||
+-	    (unlikely(file->flags & EVENT_FILE_FL_FILTERED) &&
+-	     !filter_match_preds(file->filter, entry))) {
+-		__trace_event_discard_commit(buffer, event);
+-		return true;
+-	}
++	if (likely(!(file->flags & (EVENT_FILE_FL_SOFT_DISABLED |
++				    EVENT_FILE_FL_FILTERED |
++				    EVENT_FILE_FL_PID_FILTER))))
++		return false;
++
++	if (file->flags & EVENT_FILE_FL_SOFT_DISABLED)
++		goto discard;
++
++	if (file->flags & EVENT_FILE_FL_FILTERED &&
++	    !filter_match_preds(file->filter, entry))
++		goto discard;
++
++	if ((file->flags & EVENT_FILE_FL_PID_FILTER) &&
++	    trace_event_ignore_this_pid(file))
++		goto discard;
+ 
+ 	return false;
++ discard:
++	__trace_event_discard_commit(buffer, event);
++	return true;
+ }
+ 
+ /**
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index ab3cb67b869e5..7cc5f0a77c3cc 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -2462,12 +2462,22 @@ static struct trace_event_file *
+ trace_create_new_event(struct trace_event_call *call,
+ 		       struct trace_array *tr)
+ {
++	struct trace_pid_list *no_pid_list;
++	struct trace_pid_list *pid_list;
+ 	struct trace_event_file *file;
+ 
+ 	file = kmem_cache_alloc(file_cachep, GFP_TRACE);
+ 	if (!file)
+ 		return NULL;
+ 
++	pid_list = rcu_dereference_protected(tr->filtered_pids,
++					     lockdep_is_held(&event_mutex));
++	no_pid_list = rcu_dereference_protected(tr->filtered_no_pids,
++					     lockdep_is_held(&event_mutex));
++
++	if (pid_list || no_pid_list)
++		file->flags |= EVENT_FILE_FL_PID_FILTER;
++
+ 	file->event_call = call;
+ 	file->tr = tr;
+ 	atomic_set(&file->sm_ref, 0);
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index 0dd6e286e5196..9900d4e3808cc 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -1312,6 +1312,7 @@ static int uprobe_perf_open(struct trace_event_call *call,
+ 		return 0;
+ 
+ 	list_for_each_entry(pos, trace_probe_probe_list(tp), list) {
++		tu = container_of(pos, struct trace_uprobe, tp);
+ 		err = uprobe_apply(tu->inode, tu->offset, &tu->consumer, true);
+ 		if (err) {
+ 			uprobe_perf_close(call, event);
+diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
+index ad3780067a7d8..d12c9a8a9953e 100644
+--- a/net/8021q/vlan.c
++++ b/net/8021q/vlan.c
+@@ -181,9 +181,6 @@ int register_vlan_dev(struct net_device *dev, struct netlink_ext_ack *extack)
+ 	if (err)
+ 		goto out_unregister_netdev;
+ 
+-	/* Account for reference in struct vlan_dev_priv */
+-	dev_hold(real_dev);
+-
+ 	vlan_stacked_transfer_operstate(real_dev, dev, vlan);
+ 	linkwatch_fire_event(dev); /* _MUST_ call rfc2863_policy() */
+ 
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index c7eba7dab0938..86a1c99025ea0 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -606,6 +606,9 @@ static int vlan_dev_init(struct net_device *dev)
+ 	if (!vlan->vlan_pcpu_stats)
+ 		return -ENOMEM;
+ 
++	/* Get vlan's reference to real_dev */
++	dev_hold(real_dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 1075cc2136ac6..8bd3f5e3c0e7a 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -924,15 +924,36 @@ static void remove_nexthop(struct net *net, struct nexthop *nh,
+ /* if any FIB entries reference this nexthop, any dst entries
+  * need to be regenerated
+  */
+-static void nh_rt_cache_flush(struct net *net, struct nexthop *nh)
++static void nh_rt_cache_flush(struct net *net, struct nexthop *nh,
++			      struct nexthop *replaced_nh)
+ {
+ 	struct fib6_info *f6i;
++	struct nh_group *nhg;
++	int i;
+ 
+ 	if (!list_empty(&nh->fi_list))
+ 		rt_cache_flush(net);
+ 
+ 	list_for_each_entry(f6i, &nh->f6i_list, nh_list)
+ 		ipv6_stub->fib6_update_sernum(net, f6i);
++
++	/* if an IPv6 group was replaced, we have to release all old
++	 * dsts to make sure all refcounts are released
++	 */
++	if (!replaced_nh->is_group)
++		return;
++
++	/* new dsts must use only the new nexthop group */
++	synchronize_net();
++
++	nhg = rtnl_dereference(replaced_nh->nh_grp);
++	for (i = 0; i < nhg->num_nh; i++) {
++		struct nh_grp_entry *nhge = &nhg->nh_entries[i];
++		struct nh_info *nhi = rtnl_dereference(nhge->nh->nh_info);
++
++		if (nhi->family == AF_INET6)
++			ipv6_stub->fib6_nh_release_dsts(&nhi->fib6_nh);
++	}
+ }
+ 
+ static int replace_nexthop_grp(struct net *net, struct nexthop *old,
+@@ -1111,7 +1132,7 @@ static int replace_nexthop(struct net *net, struct nexthop *old,
+ 		err = replace_nexthop_single(net, old, new, extack);
+ 
+ 	if (!err) {
+-		nh_rt_cache_flush(net, old);
++		nh_rt_cache_flush(net, old, new);
+ 
+ 		__remove_nexthop(net, new, NULL);
+ 		nexthop_put(new);
+@@ -1355,11 +1376,15 @@ static int nh_create_ipv6(struct net *net,  struct nexthop *nh,
+ 	/* sets nh_dev if successful */
+ 	err = ipv6_stub->fib6_nh_init(net, fib6_nh, &fib6_cfg, GFP_KERNEL,
+ 				      extack);
+-	if (err)
++	if (err) {
++		/* IPv6 is not enabled, don't call fib6_nh_release */
++		if (err == -EAFNOSUPPORT)
++			goto out;
+ 		ipv6_stub->fib6_nh_release(fib6_nh);
+-	else
++	} else {
+ 		nh->nh_flags = fib6_nh->fib_nh_flags;
+-
++	}
++out:
+ 	return err;
+ }
+ 
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index bb16c88f58a3c..63c81af41b43e 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3931,7 +3931,7 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
+ 	}
+ #ifdef CONFIG_MMU
+ 	case TCP_ZEROCOPY_RECEIVE: {
+-		struct tcp_zerocopy_receive zc;
++		struct tcp_zerocopy_receive zc = {};
+ 		int err;
+ 
+ 		if (get_user(len, optlen))
+@@ -3949,7 +3949,7 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
+ 		lock_sock(sk);
+ 		err = tcp_zerocopy_receive(sk, &zc);
+ 		release_sock(sk);
+-		if (len == sizeof(zc))
++		if (len >= offsetofend(struct tcp_zerocopy_receive, err))
+ 			goto zerocopy_rcv_sk_err;
+ 		switch (len) {
+ 		case offsetofend(struct tcp_zerocopy_receive, err):
+diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
+index c7bf5b26bf0c2..fffa011a007d4 100644
+--- a/net/ipv4/tcp_cubic.c
++++ b/net/ipv4/tcp_cubic.c
+@@ -337,8 +337,6 @@ static void bictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ 		return;
+ 
+ 	if (tcp_in_slow_start(tp)) {
+-		if (hystart && after(ack, ca->end_seq))
+-			bictcp_hystart_reset(sk);
+ 		acked = tcp_slow_start(tp, acked);
+ 		if (!acked)
+ 			return;
+@@ -398,6 +396,9 @@ static void hystart_update(struct sock *sk, u32 delay)
+ 	struct bictcp *ca = inet_csk_ca(sk);
+ 	u32 threshold;
+ 
++	if (after(tp->snd_una, ca->end_seq))
++		bictcp_hystart_reset(sk);
++
+ 	if (hystart_detect & HYSTART_ACK_TRAIN) {
+ 		u32 now = bictcp_clock_us(sk);
+ 
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index e648fbebb1670..090575346daf6 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -1016,6 +1016,7 @@ static const struct ipv6_stub ipv6_stub_impl = {
+ 	.ip6_mtu_from_fib6 = ip6_mtu_from_fib6,
+ 	.fib6_nh_init	   = fib6_nh_init,
+ 	.fib6_nh_release   = fib6_nh_release,
++	.fib6_nh_release_dsts = fib6_nh_release_dsts,
+ 	.fib6_update_sernum = fib6_update_sernum_stub,
+ 	.fib6_rt_update	   = fib6_rt_update,
+ 	.ip6_del_rt	   = ip6_del_rt,
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index c2f8e69d7d7a0..54cabf1c2ae15 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -193,7 +193,7 @@ static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff
+ #if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM)
+ 	/* Policy lookup after SNAT yielded a new policy */
+ 	if (skb_dst(skb)->xfrm) {
+-		IPCB(skb)->flags |= IPSKB_REROUTED;
++		IP6CB(skb)->flags |= IP6SKB_REROUTED;
+ 		return dst_output(net, sk, skb);
+ 	}
+ #endif
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index a68a7d7c07280..6fef0d7586bf6 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3570,6 +3570,25 @@ void fib6_nh_release(struct fib6_nh *fib6_nh)
+ 	fib_nh_common_release(&fib6_nh->nh_common);
+ }
+ 
++void fib6_nh_release_dsts(struct fib6_nh *fib6_nh)
++{
++	int cpu;
++
++	if (!fib6_nh->rt6i_pcpu)
++		return;
++
++	for_each_possible_cpu(cpu) {
++		struct rt6_info *pcpu_rt, **ppcpu_rt;
++
++		ppcpu_rt = per_cpu_ptr(fib6_nh->rt6i_pcpu, cpu);
++		pcpu_rt = xchg(ppcpu_rt, NULL);
++		if (pcpu_rt) {
++			dst_dev_put(&pcpu_rt->dst);
++			dst_release(&pcpu_rt->dst);
++		}
++	}
++}
++
+ static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
+ 					      gfp_t gfp_flags,
+ 					      struct netlink_ext_ack *extack)
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index ac0233c9cd349..64afe71e2129a 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -368,9 +368,10 @@ static void schedule_3rdack_retransmission(struct sock *sk)
+ 
+ 	/* reschedule with a timeout above RTT, as we must look only for drop */
+ 	if (tp->srtt_us)
+-		timeout = tp->srtt_us << 1;
++		timeout = usecs_to_jiffies(tp->srtt_us >> (3 - 1));
+ 	else
+ 		timeout = TCP_TIMEOUT_INIT;
++	timeout += jiffies;
+ 
+ 	WARN_ON_ONCE(icsk->icsk_ack.pending & ICSK_ACK_TIMER);
+ 	icsk->icsk_ack.pending |= ICSK_ACK_SCHED | ICSK_ACK_TIMER;
+diff --git a/net/ncsi/ncsi-cmd.c b/net/ncsi/ncsi-cmd.c
+index ba9ae482141b0..dda8b76b77988 100644
+--- a/net/ncsi/ncsi-cmd.c
++++ b/net/ncsi/ncsi-cmd.c
+@@ -18,6 +18,8 @@
+ #include "internal.h"
+ #include "ncsi-pkt.h"
+ 
++static const int padding_bytes = 26;
++
+ u32 ncsi_calculate_checksum(unsigned char *data, int len)
+ {
+ 	u32 checksum = 0;
+@@ -213,12 +215,17 @@ static int ncsi_cmd_handler_oem(struct sk_buff *skb,
+ {
+ 	struct ncsi_cmd_oem_pkt *cmd;
+ 	unsigned int len;
++	int payload;
++	/* NC-SI spec DSP_0222_1.2.0, section 8.2.2.2
++	 * requires payload to be padded with 0 to
++	 * 32-bit boundary before the checksum field.
++	 * Ensure the padding bytes are accounted for in
++	 * skb allocation
++	 */
+ 
++	payload = ALIGN(nca->payload, 4);
+ 	len = sizeof(struct ncsi_cmd_pkt_hdr) + 4;
+-	if (nca->payload < 26)
+-		len += 26;
+-	else
+-		len += nca->payload;
++	len += max(payload, padding_bytes);
+ 
+ 	cmd = skb_put_zero(skb, len);
+ 	memcpy(&cmd->mfr_id, nca->data, nca->payload);
+@@ -272,6 +279,7 @@ static struct ncsi_request *ncsi_alloc_command(struct ncsi_cmd_arg *nca)
+ 	struct net_device *dev = nd->dev;
+ 	int hlen = LL_RESERVED_SPACE(dev);
+ 	int tlen = dev->needed_tailroom;
++	int payload;
+ 	int len = hlen + tlen;
+ 	struct sk_buff *skb;
+ 	struct ncsi_request *nr;
+@@ -281,14 +289,14 @@ static struct ncsi_request *ncsi_alloc_command(struct ncsi_cmd_arg *nca)
+ 		return NULL;
+ 
+ 	/* NCSI command packet has 16-bytes header, payload, 4 bytes checksum.
++	 * Payload needs padding so that the checksum field following payload is
++	 * aligned to 32-bit boundary.
+ 	 * The packet needs padding if its payload is less than 26 bytes to
+ 	 * meet 64 bytes minimal ethernet frame length.
+ 	 */
+ 	len += sizeof(struct ncsi_cmd_pkt_hdr) + 4;
+-	if (nca->payload < 26)
+-		len += 26;
+-	else
+-		len += nca->payload;
++	payload = ALIGN(nca->payload, 4);
++	len += max(payload, padding_bytes);
+ 
+ 	/* Allocate skb */
+ 	skb = alloc_skb(len, GFP_ATOMIC);
+diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
+index c0b8215ab3d47..3a76da58d88bb 100644
+--- a/net/netfilter/ipvs/ip_vs_core.c
++++ b/net/netfilter/ipvs/ip_vs_core.c
+@@ -1976,7 +1976,6 @@ ip_vs_in(struct netns_ipvs *ipvs, unsigned int hooknum, struct sk_buff *skb, int
+ 	struct ip_vs_proto_data *pd;
+ 	struct ip_vs_conn *cp;
+ 	int ret, pkts;
+-	int conn_reuse_mode;
+ 	struct sock *sk;
+ 
+ 	/* Already marked as IPVS request or reply? */
+@@ -2053,15 +2052,16 @@ ip_vs_in(struct netns_ipvs *ipvs, unsigned int hooknum, struct sk_buff *skb, int
+ 	cp = INDIRECT_CALL_1(pp->conn_in_get, ip_vs_conn_in_get_proto,
+ 			     ipvs, af, skb, &iph);
+ 
+-	conn_reuse_mode = sysctl_conn_reuse_mode(ipvs);
+-	if (conn_reuse_mode && !iph.fragoffs && is_new_conn(skb, &iph) && cp) {
++	if (!iph.fragoffs && is_new_conn(skb, &iph) && cp) {
++		int conn_reuse_mode = sysctl_conn_reuse_mode(ipvs);
+ 		bool old_ct = false, resched = false;
+ 
+ 		if (unlikely(sysctl_expire_nodest_conn(ipvs)) && cp->dest &&
+ 		    unlikely(!atomic_read(&cp->dest->weight))) {
+ 			resched = true;
+ 			old_ct = ip_vs_conn_uses_old_conntrack(cp, skb);
+-		} else if (is_new_conn_expected(cp, conn_reuse_mode)) {
++		} else if (conn_reuse_mode &&
++			   is_new_conn_expected(cp, conn_reuse_mode)) {
+ 			old_ct = ip_vs_conn_uses_old_conntrack(cp, skb);
+ 			if (!atomic_read(&cp->n_control)) {
+ 				resched = true;
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index cb4cfa4f61a8d..60a1a666e797a 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -973,11 +973,9 @@ ctnetlink_alloc_filter(const struct nlattr * const cda[], u8 family)
+ 						   CTA_TUPLE_REPLY,
+ 						   filter->family,
+ 						   &filter->zone,
+-						   filter->orig_flags);
+-		if (err < 0) {
+-			err = -EINVAL;
++						   filter->reply_flags);
++		if (err < 0)
+ 			goto err_filter;
+-		}
+ 	}
+ 
+ 	return filter;
+diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
+index a6b654b028dd4..d1862782be450 100644
+--- a/net/netfilter/nf_flow_table_offload.c
++++ b/net/netfilter/nf_flow_table_offload.c
+@@ -63,11 +63,11 @@ static void nf_flow_rule_lwt_match(struct nf_flow_match *match,
+ 		       sizeof(struct in6_addr));
+ 		if (memcmp(&key->enc_ipv6.src, &in6addr_any,
+ 			   sizeof(struct in6_addr)))
+-			memset(&key->enc_ipv6.src, 0xff,
++			memset(&mask->enc_ipv6.src, 0xff,
+ 			       sizeof(struct in6_addr));
+ 		if (memcmp(&key->enc_ipv6.dst, &in6addr_any,
+ 			   sizeof(struct in6_addr)))
+-			memset(&key->enc_ipv6.dst, 0xff,
++			memset(&mask->enc_ipv6.dst, 0xff,
+ 			       sizeof(struct in6_addr));
+ 		enc_keys |= BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS);
+ 		key->enc_control.addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index c76701ac35abf..c34cb6e81d855 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -667,12 +667,14 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ 			q->classes[i].deficit = quanta[i];
+ 		}
+ 	}
++	for (i = q->nbands; i < oldbands; i++) {
++		qdisc_tree_flush_backlog(q->classes[i].qdisc);
++		if (i >= q->nstrict)
++			list_del(&q->classes[i].alist);
++	}
+ 	q->nstrict = nstrict;
+ 	memcpy(q->prio2band, priomap, sizeof(priomap));
+ 
+-	for (i = q->nbands; i < oldbands; i++)
+-		qdisc_tree_flush_backlog(q->classes[i].qdisc);
+-
+ 	for (i = 0; i < q->nbands; i++)
+ 		q->classes[i].quantum = quanta[i];
+ 
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index cfb5b9be0569d..ac8265e35b2d2 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1864,8 +1864,10 @@ static int smc_listen(struct socket *sock, int backlog)
+ 	smc->clcsock->sk->sk_user_data =
+ 		(void *)((uintptr_t)smc | SK_USER_DATA_NOCOPY);
+ 	rc = kernel_listen(smc->clcsock, backlog);
+-	if (rc)
++	if (rc) {
++		smc->clcsock->sk->sk_data_ready = smc->clcsk_data_ready;
+ 		goto out;
++	}
+ 	sk->sk_max_ack_backlog = backlog;
+ 	sk->sk_ack_backlog = 0;
+ 	sk->sk_state = SMC_LISTEN;
+@@ -2096,8 +2098,10 @@ static __poll_t smc_poll(struct file *file, struct socket *sock,
+ static int smc_shutdown(struct socket *sock, int how)
+ {
+ 	struct sock *sk = sock->sk;
++	bool do_shutdown = true;
+ 	struct smc_sock *smc;
+ 	int rc = -EINVAL;
++	int old_state;
+ 	int rc1 = 0;
+ 
+ 	smc = smc_sk(sk);
+@@ -2124,7 +2128,11 @@ static int smc_shutdown(struct socket *sock, int how)
+ 	}
+ 	switch (how) {
+ 	case SHUT_RDWR:		/* shutdown in both directions */
++		old_state = sk->sk_state;
+ 		rc = smc_close_active(smc);
++		if (old_state == SMC_ACTIVE &&
++		    sk->sk_state == SMC_PEERCLOSEWAIT1)
++			do_shutdown = false;
+ 		break;
+ 	case SHUT_WR:
+ 		rc = smc_close_shutdown_write(smc);
+@@ -2134,7 +2142,7 @@ static int smc_shutdown(struct socket *sock, int how)
+ 		/* nothing more to do because peer is not involved */
+ 		break;
+ 	}
+-	if (smc->clcsock)
++	if (do_shutdown && smc->clcsock)
+ 		rc1 = kernel_sock_shutdown(smc->clcsock, how);
+ 	/* map sock_shutdown_cmd constants to sk_shutdown value range */
+ 	sk->sk_shutdown |= how + 1;
+diff --git a/net/smc/smc_close.c b/net/smc/smc_close.c
+index 0f9ffba07d268..04620b53b74a7 100644
+--- a/net/smc/smc_close.c
++++ b/net/smc/smc_close.c
+@@ -228,6 +228,12 @@ again:
+ 			/* send close request */
+ 			rc = smc_close_final(conn);
+ 			sk->sk_state = SMC_PEERCLOSEWAIT1;
++
++			/* actively shutdown clcsock before peer close it,
++			 * prevent peer from entering TIME_WAIT state.
++			 */
++			if (smc->clcsock && smc->clcsock->sk)
++				rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);
+ 		} else {
+ 			/* peer event has changed the state */
+ 			goto again;
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index 109d790eaebe2..cd625b672429f 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1209,14 +1209,26 @@ static void smc_link_down_work(struct work_struct *work)
+ 	mutex_unlock(&lgr->llc_conf_mutex);
+ }
+ 
+-/* Determine vlan of internal TCP socket.
+- * @vlan_id: address to store the determined vlan id into
+- */
++static int smc_vlan_by_tcpsk_walk(struct net_device *lower_dev,
++				  struct netdev_nested_priv *priv)
++{
++	unsigned short *vlan_id = (unsigned short *)priv->data;
++
++	if (is_vlan_dev(lower_dev)) {
++		*vlan_id = vlan_dev_vlan_id(lower_dev);
++		return 1;
++	}
++
++	return 0;
++}
++
++/* Determine vlan of internal TCP socket. */
+ int smc_vlan_by_tcpsk(struct socket *clcsock, struct smc_init_info *ini)
+ {
+ 	struct dst_entry *dst = sk_dst_get(clcsock->sk);
++	struct netdev_nested_priv priv;
+ 	struct net_device *ndev;
+-	int i, nest_lvl, rc = 0;
++	int rc = 0;
+ 
+ 	ini->vlan_id = 0;
+ 	if (!dst) {
+@@ -1234,20 +1246,9 @@ int smc_vlan_by_tcpsk(struct socket *clcsock, struct smc_init_info *ini)
+ 		goto out_rel;
+ 	}
+ 
++	priv.data = (void *)&ini->vlan_id;
+ 	rtnl_lock();
+-	nest_lvl = ndev->lower_level;
+-	for (i = 0; i < nest_lvl; i++) {
+-		struct list_head *lower = &ndev->adj_list.lower;
+-
+-		if (list_empty(lower))
+-			break;
+-		lower = lower->next;
+-		ndev = (struct net_device *)netdev_lower_get_next(ndev, &lower);
+-		if (is_vlan_dev(ndev)) {
+-			ini->vlan_id = vlan_dev_vlan_id(ndev);
+-			break;
+-		}
+-	}
++	netdev_walk_all_lower_dev(ndev, smc_vlan_by_tcpsk_walk, &priv);
+ 	rtnl_unlock();
+ 
+ out_rel:
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 32a51b20509c9..58d22d6b86ae6 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -61,7 +61,7 @@ static DEFINE_MUTEX(tcpv6_prot_mutex);
+ static const struct proto *saved_tcpv4_prot;
+ static DEFINE_MUTEX(tcpv4_prot_mutex);
+ static struct proto tls_prots[TLS_NUM_PROTS][TLS_NUM_CONFIG][TLS_NUM_CONFIG];
+-static struct proto_ops tls_sw_proto_ops;
++static struct proto_ops tls_proto_ops[TLS_NUM_PROTS][TLS_NUM_CONFIG][TLS_NUM_CONFIG];
+ static void build_protos(struct proto prot[TLS_NUM_CONFIG][TLS_NUM_CONFIG],
+ 			 const struct proto *base);
+ 
+@@ -71,6 +71,8 @@ void update_sk_prot(struct sock *sk, struct tls_context *ctx)
+ 
+ 	WRITE_ONCE(sk->sk_prot,
+ 		   &tls_prots[ip_ver][ctx->tx_conf][ctx->rx_conf]);
++	WRITE_ONCE(sk->sk_socket->ops,
++		   &tls_proto_ops[ip_ver][ctx->tx_conf][ctx->rx_conf]);
+ }
+ 
+ int wait_on_pending_writer(struct sock *sk, long *timeo)
+@@ -578,8 +580,6 @@ static int do_tls_setsockopt_conf(struct sock *sk, sockptr_t optval,
+ 	if (tx) {
+ 		ctx->sk_write_space = sk->sk_write_space;
+ 		sk->sk_write_space = tls_write_space;
+-	} else {
+-		sk->sk_socket->ops = &tls_sw_proto_ops;
+ 	}
+ 	goto out;
+ 
+@@ -637,6 +637,39 @@ struct tls_context *tls_ctx_create(struct sock *sk)
+ 	return ctx;
+ }
+ 
++static void build_proto_ops(struct proto_ops ops[TLS_NUM_CONFIG][TLS_NUM_CONFIG],
++			    const struct proto_ops *base)
++{
++	ops[TLS_BASE][TLS_BASE] = *base;
++
++	ops[TLS_SW  ][TLS_BASE] = ops[TLS_BASE][TLS_BASE];
++	ops[TLS_SW  ][TLS_BASE].sendpage_locked	= tls_sw_sendpage_locked;
++
++	ops[TLS_BASE][TLS_SW  ] = ops[TLS_BASE][TLS_BASE];
++	ops[TLS_BASE][TLS_SW  ].splice_read	= tls_sw_splice_read;
++
++	ops[TLS_SW  ][TLS_SW  ] = ops[TLS_SW  ][TLS_BASE];
++	ops[TLS_SW  ][TLS_SW  ].splice_read	= tls_sw_splice_read;
++
++#ifdef CONFIG_TLS_DEVICE
++	ops[TLS_HW  ][TLS_BASE] = ops[TLS_BASE][TLS_BASE];
++	ops[TLS_HW  ][TLS_BASE].sendpage_locked	= NULL;
++
++	ops[TLS_HW  ][TLS_SW  ] = ops[TLS_BASE][TLS_SW  ];
++	ops[TLS_HW  ][TLS_SW  ].sendpage_locked	= NULL;
++
++	ops[TLS_BASE][TLS_HW  ] = ops[TLS_BASE][TLS_SW  ];
++
++	ops[TLS_SW  ][TLS_HW  ] = ops[TLS_SW  ][TLS_SW  ];
++
++	ops[TLS_HW  ][TLS_HW  ] = ops[TLS_HW  ][TLS_SW  ];
++	ops[TLS_HW  ][TLS_HW  ].sendpage_locked	= NULL;
++#endif
++#ifdef CONFIG_TLS_TOE
++	ops[TLS_HW_RECORD][TLS_HW_RECORD] = *base;
++#endif
++}
++
+ static void tls_build_proto(struct sock *sk)
+ {
+ 	int ip_ver = sk->sk_family == AF_INET6 ? TLSV6 : TLSV4;
+@@ -648,6 +681,8 @@ static void tls_build_proto(struct sock *sk)
+ 		mutex_lock(&tcpv6_prot_mutex);
+ 		if (likely(prot != saved_tcpv6_prot)) {
+ 			build_protos(tls_prots[TLSV6], prot);
++			build_proto_ops(tls_proto_ops[TLSV6],
++					sk->sk_socket->ops);
+ 			smp_store_release(&saved_tcpv6_prot, prot);
+ 		}
+ 		mutex_unlock(&tcpv6_prot_mutex);
+@@ -658,6 +693,8 @@ static void tls_build_proto(struct sock *sk)
+ 		mutex_lock(&tcpv4_prot_mutex);
+ 		if (likely(prot != saved_tcpv4_prot)) {
+ 			build_protos(tls_prots[TLSV4], prot);
++			build_proto_ops(tls_proto_ops[TLSV4],
++					sk->sk_socket->ops);
+ 			smp_store_release(&saved_tcpv4_prot, prot);
+ 		}
+ 		mutex_unlock(&tcpv4_prot_mutex);
+@@ -868,10 +905,6 @@ static int __init tls_register(void)
+ 	if (err)
+ 		return err;
+ 
+-	tls_sw_proto_ops = inet_stream_ops;
+-	tls_sw_proto_ops.splice_read = tls_sw_splice_read;
+-	tls_sw_proto_ops.sendpage_locked   = tls_sw_sendpage_locked;
+-
+ 	tls_device_init();
+ 	tcp_register_ulp(&tcp_tls_ulp_ops);
+ 
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 14cce61160a58..122d5daed8b61 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -2007,21 +2007,18 @@ ssize_t tls_sw_splice_read(struct socket *sock,  loff_t *ppos,
+ 	if (!skb)
+ 		goto splice_read_end;
+ 
+-	if (!ctx->decrypted) {
+-		err = decrypt_skb_update(sk, skb, NULL, &chunk, &zc, false);
+-
+-		/* splice does not support reading control messages */
+-		if (ctx->control != TLS_RECORD_TYPE_DATA) {
+-			err = -EINVAL;
+-			goto splice_read_end;
+-		}
++	err = decrypt_skb_update(sk, skb, NULL, &chunk, &zc, false);
++	if (err < 0) {
++		tls_err_abort(sk, -EBADMSG);
++		goto splice_read_end;
++	}
+ 
+-		if (err < 0) {
+-			tls_err_abort(sk, -EBADMSG);
+-			goto splice_read_end;
+-		}
+-		ctx->decrypted = 1;
++	/* splice does not support reading control messages */
++	if (ctx->control != TLS_RECORD_TYPE_DATA) {
++		err = -EINVAL;
++		goto splice_read_end;
+ 	}
++
+ 	rxm = strp_msg(skb);
+ 
+ 	chunk = min_t(unsigned int, rxm->full_len, len);
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 6cdb3db7507b1..fc61571a3ac73 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -298,6 +298,15 @@ static const struct config_entry config_table[] = {
+ 	},
+ #endif
+ 
++/* JasperLake */
++#if IS_ENABLED(CONFIG_SND_SOC_SOF_JASPERLAKE)
++	{
++		.flags = FLAG_SOF,
++		.device = 0x4dc8,
++		.codec_hid = "ESSX8336",
++	},
++#endif
++
+ /* Tigerlake */
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_TIGERLAKE)
+ 	{
+diff --git a/sound/pci/ctxfi/ctamixer.c b/sound/pci/ctxfi/ctamixer.c
+index d4ff377eb3a34..6d636bdcaa5a3 100644
+--- a/sound/pci/ctxfi/ctamixer.c
++++ b/sound/pci/ctxfi/ctamixer.c
+@@ -23,16 +23,15 @@
+ 
+ #define BLANK_SLOT		4094
+ 
+-static int amixer_master(struct rsc *rsc)
++static void amixer_master(struct rsc *rsc)
+ {
+ 	rsc->conj = 0;
+-	return rsc->idx = container_of(rsc, struct amixer, rsc)->idx[0];
++	rsc->idx = container_of(rsc, struct amixer, rsc)->idx[0];
+ }
+ 
+-static int amixer_next_conj(struct rsc *rsc)
++static void amixer_next_conj(struct rsc *rsc)
+ {
+ 	rsc->conj++;
+-	return container_of(rsc, struct amixer, rsc)->idx[rsc->conj];
+ }
+ 
+ static int amixer_index(const struct rsc *rsc)
+@@ -331,16 +330,15 @@ int amixer_mgr_destroy(struct amixer_mgr *amixer_mgr)
+ 
+ /* SUM resource management */
+ 
+-static int sum_master(struct rsc *rsc)
++static void sum_master(struct rsc *rsc)
+ {
+ 	rsc->conj = 0;
+-	return rsc->idx = container_of(rsc, struct sum, rsc)->idx[0];
++	rsc->idx = container_of(rsc, struct sum, rsc)->idx[0];
+ }
+ 
+-static int sum_next_conj(struct rsc *rsc)
++static void sum_next_conj(struct rsc *rsc)
+ {
+ 	rsc->conj++;
+-	return container_of(rsc, struct sum, rsc)->idx[rsc->conj];
+ }
+ 
+ static int sum_index(const struct rsc *rsc)
+diff --git a/sound/pci/ctxfi/ctdaio.c b/sound/pci/ctxfi/ctdaio.c
+index 4cb47b5a792c5..aae544dff8868 100644
+--- a/sound/pci/ctxfi/ctdaio.c
++++ b/sound/pci/ctxfi/ctdaio.c
+@@ -51,12 +51,12 @@ static const struct daio_rsc_idx idx_20k2[NUM_DAIOTYP] = {
+ 	[SPDIFIO] = {.left = 0x05, .right = 0x85},
+ };
+ 
+-static int daio_master(struct rsc *rsc)
++static void daio_master(struct rsc *rsc)
+ {
+ 	/* Actually, this is not the resource index of DAIO.
+ 	 * For DAO, it is the input mapper index. And, for DAI,
+ 	 * it is the output time-slot index. */
+-	return rsc->conj = rsc->idx;
++	rsc->conj = rsc->idx;
+ }
+ 
+ static int daio_index(const struct rsc *rsc)
+@@ -64,19 +64,19 @@ static int daio_index(const struct rsc *rsc)
+ 	return rsc->conj;
+ }
+ 
+-static int daio_out_next_conj(struct rsc *rsc)
++static void daio_out_next_conj(struct rsc *rsc)
+ {
+-	return rsc->conj += 2;
++	rsc->conj += 2;
+ }
+ 
+-static int daio_in_next_conj_20k1(struct rsc *rsc)
++static void daio_in_next_conj_20k1(struct rsc *rsc)
+ {
+-	return rsc->conj += 0x200;
++	rsc->conj += 0x200;
+ }
+ 
+-static int daio_in_next_conj_20k2(struct rsc *rsc)
++static void daio_in_next_conj_20k2(struct rsc *rsc)
+ {
+-	return rsc->conj += 0x100;
++	rsc->conj += 0x100;
+ }
+ 
+ static const struct rsc_ops daio_out_rsc_ops = {
+diff --git a/sound/pci/ctxfi/ctresource.c b/sound/pci/ctxfi/ctresource.c
+index 61e51e35ba168..edf9d9ef9b848 100644
+--- a/sound/pci/ctxfi/ctresource.c
++++ b/sound/pci/ctxfi/ctresource.c
+@@ -109,18 +109,17 @@ static int audio_ring_slot(const struct rsc *rsc)
+     return (rsc->conj << 4) + offset_in_audio_slot_block[rsc->type];
+ }
+ 
+-static int rsc_next_conj(struct rsc *rsc)
++static void rsc_next_conj(struct rsc *rsc)
+ {
+ 	unsigned int i;
+ 	for (i = 0; (i < 8) && (!(rsc->msr & (0x1 << i))); )
+ 		i++;
+ 	rsc->conj += (AUDIO_SLOT_BLOCK_NUM >> i);
+-	return rsc->conj;
+ }
+ 
+-static int rsc_master(struct rsc *rsc)
++static void rsc_master(struct rsc *rsc)
+ {
+-	return rsc->conj = rsc->idx;
++	rsc->conj = rsc->idx;
+ }
+ 
+ static const struct rsc_ops rsc_generic_ops = {
+diff --git a/sound/pci/ctxfi/ctresource.h b/sound/pci/ctxfi/ctresource.h
+index 93e47488a1c1c..92146054af582 100644
+--- a/sound/pci/ctxfi/ctresource.h
++++ b/sound/pci/ctxfi/ctresource.h
+@@ -39,8 +39,8 @@ struct rsc {
+ };
+ 
+ struct rsc_ops {
+-	int (*master)(struct rsc *rsc);	/* Move to master resource */
+-	int (*next_conj)(struct rsc *rsc); /* Move to next conjugate resource */
++	void (*master)(struct rsc *rsc); /* Move to master resource */
++	void (*next_conj)(struct rsc *rsc); /* Move to next conjugate resource */
+ 	int (*index)(const struct rsc *rsc); /* Return the index of resource */
+ 	/* Return the output slot number */
+ 	int (*output_slot)(const struct rsc *rsc);
+diff --git a/sound/pci/ctxfi/ctsrc.c b/sound/pci/ctxfi/ctsrc.c
+index 37c18ce84974a..7d2bda0c3d3de 100644
+--- a/sound/pci/ctxfi/ctsrc.c
++++ b/sound/pci/ctxfi/ctsrc.c
+@@ -590,16 +590,15 @@ int src_mgr_destroy(struct src_mgr *src_mgr)
+ 
+ /* SRCIMP resource manager operations */
+ 
+-static int srcimp_master(struct rsc *rsc)
++static void srcimp_master(struct rsc *rsc)
+ {
+ 	rsc->conj = 0;
+-	return rsc->idx = container_of(rsc, struct srcimp, rsc)->idx[0];
++	rsc->idx = container_of(rsc, struct srcimp, rsc)->idx[0];
+ }
+ 
+-static int srcimp_next_conj(struct rsc *rsc)
++static void srcimp_next_conj(struct rsc *rsc)
+ {
+ 	rsc->conj++;
+-	return container_of(rsc, struct srcimp, rsc)->idx[rsc->conj];
+ }
+ 
+ static int srcimp_index(const struct rsc *rsc)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2eb06351de1fb..b980fa617229e 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6467,6 +6467,27 @@ static void alc256_fixup_tongfang_reset_persistent_settings(struct hda_codec *co
+ 	alc_write_coef_idx(codec, 0x45, 0x5089);
+ }
+ 
++static const struct coef_fw alc233_fixup_no_audio_jack_coefs[] = {
++	WRITE_COEF(0x1a, 0x9003), WRITE_COEF(0x1b, 0x0e2b), WRITE_COEF(0x37, 0xfe06),
++	WRITE_COEF(0x38, 0x4981), WRITE_COEF(0x45, 0xd489), WRITE_COEF(0x46, 0x0074),
++	WRITE_COEF(0x49, 0x0149),
++	{}
++};
++
++static void alc233_fixup_no_audio_jack(struct hda_codec *codec,
++				       const struct hda_fixup *fix,
++				       int action)
++{
++	/*
++	 * The audio jack input and output is not detected on the ASRock NUC Box
++	 * 1100 series when cold booting without this fix. Warm rebooting from a
++	 * certain other OS makes the audio functional, as COEF settings are
++	 * preserved in this case. This fix sets these altered COEF values as
++	 * the default.
++	 */
++	alc_process_coef_fw(codec, alc233_fixup_no_audio_jack_coefs);
++}
++
+ enum {
+ 	ALC269_FIXUP_GPIO2,
+ 	ALC269_FIXUP_SONY_VAIO,
+@@ -6685,6 +6706,7 @@ enum {
+ 	ALC287_FIXUP_13S_GEN2_SPEAKERS,
+ 	ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS,
+ 	ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
++	ALC233_FIXUP_NO_AUDIO_JACK,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8399,6 +8421,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC,
+ 	},
++	[ALC233_FIXUP_NO_AUDIO_JACK] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc233_fixup_no_audio_jack,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8578,6 +8604,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8728, "HP EliteBook 840 G7", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8730, "HP ProBook 445 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8735, "HP ProBook 435 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED),
+@@ -8831,6 +8858,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x511e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
++	SND_PCI_QUIRK(0x1849, 0x1233, "ASRock NUC Box 1100", ALC233_FIXUP_NO_AUDIO_JACK),
+ 	SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
+ 	SND_PCI_QUIRK(0x1b35, 0x1235, "CZC B20", ALC269_FIXUP_CZC_B20),
+ 	SND_PCI_QUIRK(0x1b35, 0x1236, "CZC TMI", ALC269_FIXUP_CZC_TMI),
+diff --git a/sound/soc/codecs/wcd934x.c b/sound/soc/codecs/wcd934x.c
+index d18ae5e3ee809..699b59cd389c0 100644
+--- a/sound/soc/codecs/wcd934x.c
++++ b/sound/soc/codecs/wcd934x.c
+@@ -1812,9 +1812,8 @@ static int wcd934x_hw_params(struct snd_pcm_substream *substream,
+ 	}
+ 
+ 	wcd->dai[dai->id].sconfig.rate = params_rate(params);
+-	wcd934x_slim_set_hw_params(wcd, &wcd->dai[dai->id], substream->stream);
+ 
+-	return 0;
++	return wcd934x_slim_set_hw_params(wcd, &wcd->dai[dai->id], substream->stream);
+ }
+ 
+ static int wcd934x_hw_free(struct snd_pcm_substream *substream,
+diff --git a/sound/soc/qcom/qdsp6/q6asm-dai.c b/sound/soc/qcom/qdsp6/q6asm-dai.c
+index 9766725c29166..84cf190aa01a6 100644
+--- a/sound/soc/qcom/qdsp6/q6asm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6asm-dai.c
+@@ -269,9 +269,7 @@ static int q6asm_dai_prepare(struct snd_soc_component *component,
+ 
+ 	if (ret < 0) {
+ 		dev_err(dev, "%s: q6asm_open_write failed\n", __func__);
+-		q6asm_audio_client_free(prtd->audio_client);
+-		prtd->audio_client = NULL;
+-		return -ENOMEM;
++		goto open_err;
+ 	}
+ 
+ 	prtd->session_id = q6asm_get_session_id(prtd->audio_client);
+@@ -279,7 +277,7 @@ static int q6asm_dai_prepare(struct snd_soc_component *component,
+ 			      prtd->session_id, substream->stream);
+ 	if (ret) {
+ 		dev_err(dev, "%s: stream reg failed ret:%d\n", __func__, ret);
+-		return ret;
++		goto routing_err;
+ 	}
+ 
+ 	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+@@ -301,10 +299,19 @@ static int q6asm_dai_prepare(struct snd_soc_component *component,
+ 	}
+ 	if (ret < 0)
+ 		dev_info(dev, "%s: CMD Format block failed\n", __func__);
++	else
++		prtd->state = Q6ASM_STREAM_RUNNING;
+ 
+-	prtd->state = Q6ASM_STREAM_RUNNING;
++	return ret;
+ 
+-	return 0;
++routing_err:
++	q6asm_cmd(prtd->audio_client, prtd->stream_id,  CMD_CLOSE);
++open_err:
++	q6asm_unmap_memory_regions(substream->stream, prtd->audio_client);
++	q6asm_audio_client_free(prtd->audio_client);
++	prtd->audio_client = NULL;
++
++	return ret;
+ }
+ 
+ static int q6asm_dai_trigger(struct snd_soc_component *component,
+diff --git a/sound/soc/qcom/qdsp6/q6routing.c b/sound/soc/qcom/qdsp6/q6routing.c
+index 0a6b9433f6acf..934b3f282bccd 100644
+--- a/sound/soc/qcom/qdsp6/q6routing.c
++++ b/sound/soc/qcom/qdsp6/q6routing.c
+@@ -491,7 +491,11 @@ static int msm_routing_put_audio_mixer(struct snd_kcontrol *kcontrol,
+ 		session->port_id = be_id;
+ 		snd_soc_dapm_mixer_update_power(dapm, kcontrol, 1, update);
+ 	} else {
+-		session->port_id = -1;
++		if (session->port_id == be_id) {
++			session->port_id = -1;
++			return 0;
++		}
++
+ 		snd_soc_dapm_mixer_update_power(dapm, kcontrol, 0, update);
+ 	}
+ 
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index 1030e11017b27..4d24ac255d253 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -2873,6 +2873,7 @@ EXPORT_SYMBOL_GPL(snd_soc_tplg_widget_remove_all);
+ /* remove dynamic controls from the component driver */
+ int snd_soc_tplg_component_remove(struct snd_soc_component *comp, u32 index)
+ {
++	struct snd_card *card = comp->card->snd_card;
+ 	struct snd_soc_dobj *dobj, *next_dobj;
+ 	int pass = SOC_TPLG_PASS_END;
+ 
+@@ -2880,6 +2881,7 @@ int snd_soc_tplg_component_remove(struct snd_soc_component *comp, u32 index)
+ 	while (pass >= SOC_TPLG_PASS_START) {
+ 
+ 		/* remove mixer controls */
++		down_write(&card->controls_rwsem);
+ 		list_for_each_entry_safe(dobj, next_dobj, &comp->dobj_list,
+ 			list) {
+ 
+@@ -2923,6 +2925,7 @@ int snd_soc_tplg_component_remove(struct snd_soc_component *comp, u32 index)
+ 				break;
+ 			}
+ 		}
++		up_write(&card->controls_rwsem);
+ 		pass--;
+ 	}
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-12-08 12:53 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-12-08 12:53 UTC (permalink / raw
  To: gentoo-commits

commit:     5318e7bb842e6ae387c1a931286d5c867c4cefda
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec  8 12:53:39 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec  8 12:53:39 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5318e7bb

Linux 5.10.84

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1083_linux-5.10.84.patch | 5206 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5210 insertions(+)

diff --git a/0000_README b/0000_README
index 7050b7a7..19b3a8b7 100644
--- a/0000_README
+++ b/0000_README
@@ -375,6 +375,10 @@ Patch:  1082_linux-5.10.83.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.83
 
+Patch:  1083_linux-5.10.84.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.84
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1083_linux-5.10.84.patch b/1083_linux-5.10.84.patch
new file mode 100644
index 00000000..8e86a38a
--- /dev/null
+++ b/1083_linux-5.10.84.patch
@@ -0,0 +1,5206 @@
+diff --git a/Makefile b/Makefile
+index 4646baabfe783..cce1e5dac06fb 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 83
++SUBLEVEL = 84
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
+index 395cb22d9f7d3..7f532b2d7bbc6 100644
+--- a/arch/arm64/include/asm/kvm_arm.h
++++ b/arch/arm64/include/asm/kvm_arm.h
+@@ -83,7 +83,7 @@
+ #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
+ 
+ /* TCR_EL2 Registers bits */
+-#define TCR_EL2_RES1		((1 << 31) | (1 << 23))
++#define TCR_EL2_RES1		((1U << 31) | (1 << 23))
+ #define TCR_EL2_TBI		(1 << 20)
+ #define TCR_EL2_PS_SHIFT	16
+ #define TCR_EL2_PS_MASK		(7 << TCR_EL2_PS_SHIFT)
+@@ -268,7 +268,7 @@
+ #define CPTR_EL2_TFP_SHIFT 10
+ 
+ /* Hyp Coprocessor Trap Register */
+-#define CPTR_EL2_TCPAC	(1 << 31)
++#define CPTR_EL2_TCPAC	(1U << 31)
+ #define CPTR_EL2_TAM	(1 << 30)
+ #define CPTR_EL2_TTA	(1 << 20)
+ #define CPTR_EL2_TFP	(1 << CPTR_EL2_TFP_SHIFT)
+diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
+index a338f40e64d39..67f68c9ef94c4 100644
+--- a/arch/arm64/kernel/entry-ftrace.S
++++ b/arch/arm64/kernel/entry-ftrace.S
+@@ -77,11 +77,17 @@
+ 	.endm
+ 
+ SYM_CODE_START(ftrace_regs_caller)
++#ifdef BTI_C
++	BTI_C
++#endif
+ 	ftrace_regs_entry	1
+ 	b	ftrace_common
+ SYM_CODE_END(ftrace_regs_caller)
+ 
+ SYM_CODE_START(ftrace_caller)
++#ifdef BTI_C
++	BTI_C
++#endif
+ 	ftrace_regs_entry	0
+ 	b	ftrace_common
+ SYM_CODE_END(ftrace_caller)
+diff --git a/arch/parisc/Makefile b/arch/parisc/Makefile
+index 5140c602207f6..0cf86ed2b7c17 100644
+--- a/arch/parisc/Makefile
++++ b/arch/parisc/Makefile
+@@ -17,7 +17,12 @@
+ # Mike Shaver, Helge Deller and Martin K. Petersen
+ #
+ 
++ifdef CONFIG_PARISC_SELF_EXTRACT
++boot := arch/parisc/boot
++KBUILD_IMAGE := $(boot)/bzImage
++else
+ KBUILD_IMAGE := vmlinuz
++endif
+ 
+ NM		= sh $(srctree)/arch/parisc/nm
+ CHECKFLAGS	+= -D__hppa__=1
+diff --git a/arch/parisc/install.sh b/arch/parisc/install.sh
+index 056d588befdd6..70d3cffb02515 100644
+--- a/arch/parisc/install.sh
++++ b/arch/parisc/install.sh
+@@ -39,6 +39,7 @@ verify "$3"
+ if [ -n "${INSTALLKERNEL}" ]; then
+   if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
+   if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
++  if [ -x /usr/sbin/${INSTALLKERNEL} ]; then exec /usr/sbin/${INSTALLKERNEL} "$@"; fi
+ fi
+ 
+ # Default install
+diff --git a/arch/parisc/kernel/time.c b/arch/parisc/kernel/time.c
+index 13d94f0f94a08..42be3ff1315e5 100644
+--- a/arch/parisc/kernel/time.c
++++ b/arch/parisc/kernel/time.c
+@@ -252,27 +252,13 @@ void __init time_init(void)
+ static int __init init_cr16_clocksource(void)
+ {
+ 	/*
+-	 * The cr16 interval timers are not syncronized across CPUs on
+-	 * different sockets, so mark them unstable and lower rating on
+-	 * multi-socket SMP systems.
++	 * The cr16 interval timers are not syncronized across CPUs, even if
++	 * they share the same socket.
+ 	 */
+ 	if (num_online_cpus() > 1 && !running_on_qemu) {
+-		int cpu;
+-		unsigned long cpu0_loc;
+-		cpu0_loc = per_cpu(cpu_data, 0).cpu_loc;
+-
+-		for_each_online_cpu(cpu) {
+-			if (cpu == 0)
+-				continue;
+-			if ((cpu0_loc != 0) &&
+-			    (cpu0_loc == per_cpu(cpu_data, cpu).cpu_loc))
+-				continue;
+-
+-			clocksource_cr16.name = "cr16_unstable";
+-			clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
+-			clocksource_cr16.rating = 0;
+-			break;
+-		}
++		clocksource_cr16.name = "cr16_unstable";
++		clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
++		clocksource_cr16.rating = 0;
+ 	}
+ 
+ 	/* XXX: We may want to mark sched_clock stable here if cr16 clocks are
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index e4198700ed1a3..245f1f8df6563 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -1034,15 +1034,6 @@ static phys_addr_t ddw_memory_hotplug_max(void)
+ 	phys_addr_t max_addr = memory_hotplug_max();
+ 	struct device_node *memory;
+ 
+-	/*
+-	 * The "ibm,pmemory" can appear anywhere in the address space.
+-	 * Assuming it is still backed by page structs, set the upper limit
+-	 * for the huge DMA window as MAX_PHYSMEM_BITS.
+-	 */
+-	if (of_find_node_by_type(NULL, "ibm,pmemory"))
+-		return (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
+-			(phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS);
+-
+ 	for_each_node_by_type(memory, "memory") {
+ 		unsigned long start, size;
+ 		int n_mem_addr_cells, n_mem_size_cells, len;
+diff --git a/arch/s390/include/asm/pci_io.h b/arch/s390/include/asm/pci_io.h
+index e4dc64cc9c555..287bb88f76986 100644
+--- a/arch/s390/include/asm/pci_io.h
++++ b/arch/s390/include/asm/pci_io.h
+@@ -14,12 +14,13 @@
+ 
+ /* I/O Map */
+ #define ZPCI_IOMAP_SHIFT		48
+-#define ZPCI_IOMAP_ADDR_BASE		0x8000000000000000UL
++#define ZPCI_IOMAP_ADDR_SHIFT		62
++#define ZPCI_IOMAP_ADDR_BASE		(1UL << ZPCI_IOMAP_ADDR_SHIFT)
+ #define ZPCI_IOMAP_ADDR_OFF_MASK	((1UL << ZPCI_IOMAP_SHIFT) - 1)
+ #define ZPCI_IOMAP_MAX_ENTRIES							\
+-	((ULONG_MAX - ZPCI_IOMAP_ADDR_BASE + 1) / (1UL << ZPCI_IOMAP_SHIFT))
++	(1UL << (ZPCI_IOMAP_ADDR_SHIFT - ZPCI_IOMAP_SHIFT))
+ #define ZPCI_IOMAP_ADDR_IDX_MASK						\
+-	(~ZPCI_IOMAP_ADDR_OFF_MASK - ZPCI_IOMAP_ADDR_BASE)
++	((ZPCI_IOMAP_ADDR_BASE - 1) & ~ZPCI_IOMAP_ADDR_OFF_MASK)
+ 
+ struct zpci_iomap_entry {
+ 	u32 fh;
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 5cd9d20af31e9..f9f8721dc5321 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -845,9 +845,6 @@ static void __init setup_memory(void)
+ 		storage_key_init_range(start, end);
+ 
+ 	psw_set_key(PAGE_DEFAULT_KEY);
+-
+-	/* Only cosmetics */
+-	memblock_enforce_memory_limit(memblock_end_of_DRAM());
+ }
+ 
+ /*
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index f18f3932e971a..a24ce5905ab82 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -575,6 +575,10 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
+ 	ud2
+ 1:
+ #endif
++#ifdef CONFIG_XEN_PV
++	ALTERNATIVE "", "jmp xenpv_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
++#endif
++
+ 	POP_REGS pop_rdi=0
+ 
+ 	/*
+@@ -669,7 +673,7 @@ native_irq_return_ldt:
+ 	 */
+ 
+ 	pushq	%rdi				/* Stash user RDI */
+-	SWAPGS					/* to kernel GS */
++	swapgs					/* to kernel GS */
+ 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi	/* to kernel CR3 */
+ 
+ 	movq	PER_CPU_VAR(espfix_waddr), %rdi
+@@ -699,7 +703,7 @@ native_irq_return_ldt:
+ 	orq	PER_CPU_VAR(espfix_stack), %rax
+ 
+ 	SWITCH_TO_USER_CR3_STACK scratch_reg=%rdi
+-	SWAPGS					/* to user GS */
++	swapgs					/* to user GS */
+ 	popq	%rdi				/* Restore user RDI */
+ 
+ 	movq	%rax, %rsp
+@@ -932,6 +936,7 @@ SYM_CODE_START_LOCAL(paranoid_entry)
+ .Lparanoid_entry_checkgs:
+ 	/* EBX = 1 -> kernel GSBASE active, no restore required */
+ 	movl	$1, %ebx
++
+ 	/*
+ 	 * The kernel-enforced convention is a negative GSBASE indicates
+ 	 * a kernel value. No SWAPGS needed on entry and exit.
+@@ -939,21 +944,14 @@ SYM_CODE_START_LOCAL(paranoid_entry)
+ 	movl	$MSR_GS_BASE, %ecx
+ 	rdmsr
+ 	testl	%edx, %edx
+-	jns	.Lparanoid_entry_swapgs
+-	ret
+-
+-.Lparanoid_entry_swapgs:
+-	SWAPGS
+-
+-	/*
+-	 * The above SAVE_AND_SWITCH_TO_KERNEL_CR3 macro doesn't do an
+-	 * unconditional CR3 write, even in the PTI case.  So do an lfence
+-	 * to prevent GS speculation, regardless of whether PTI is enabled.
+-	 */
+-	FENCE_SWAPGS_KERNEL_ENTRY
++	js	.Lparanoid_kernel_gsbase
+ 
+ 	/* EBX = 0 -> SWAPGS required on exit */
+ 	xorl	%ebx, %ebx
++	swapgs
++.Lparanoid_kernel_gsbase:
++
++	FENCE_SWAPGS_KERNEL_ENTRY
+ 	ret
+ SYM_CODE_END(paranoid_entry)
+ 
+@@ -1001,7 +999,7 @@ SYM_CODE_START_LOCAL(paranoid_exit)
+ 	jnz		restore_regs_and_return_to_kernel
+ 
+ 	/* We are returning to a context with user GSBASE */
+-	SWAPGS_UNSAFE_STACK
++	swapgs
+ 	jmp		restore_regs_and_return_to_kernel
+ SYM_CODE_END(paranoid_exit)
+ 
+@@ -1035,11 +1033,6 @@ SYM_CODE_START_LOCAL(error_entry)
+ 	pushq	%r12
+ 	ret
+ 
+-.Lerror_entry_done_lfence:
+-	FENCE_SWAPGS_KERNEL_ENTRY
+-.Lerror_entry_done:
+-	ret
+-
+ 	/*
+ 	 * There are two places in the kernel that can potentially fault with
+ 	 * usergs. Handle them here.  B stepping K8s sometimes report a
+@@ -1062,8 +1055,14 @@ SYM_CODE_START_LOCAL(error_entry)
+ 	 * .Lgs_change's error handler with kernel gsbase.
+ 	 */
+ 	SWAPGS
+-	FENCE_SWAPGS_USER_ENTRY
+-	jmp .Lerror_entry_done
++
++	/*
++	 * Issue an LFENCE to prevent GS speculation, regardless of whether it is a
++	 * kernel or user gsbase.
++	 */
++.Lerror_entry_done_lfence:
++	FENCE_SWAPGS_KERNEL_ENTRY
++	ret
+ 
+ .Lbstep_iret:
+ 	/* Fix truncated RIP */
+@@ -1426,7 +1425,7 @@ nmi_no_fsgsbase:
+ 	jnz	nmi_restore
+ 
+ nmi_swapgs:
+-	SWAPGS_UNSAFE_STACK
++	swapgs
+ 
+ nmi_restore:
+ 	POP_REGS
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index 2dfc8d380dab1..8c86edefa1150 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -131,18 +131,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
+ #define SAVE_FLAGS(x)		pushfq; popq %rax
+ #endif
+ 
+-#define SWAPGS	swapgs
+-/*
+- * Currently paravirt can't handle swapgs nicely when we
+- * don't have a stack we can rely on (such as a user space
+- * stack).  So we either find a way around these or just fault
+- * and emulate if a guest tries to call swapgs directly.
+- *
+- * Either way, this is a good way to document that we don't
+- * have a reliable stack. x86_64 only.
+- */
+-#define SWAPGS_UNSAFE_STACK	swapgs
+-
+ #define INTERRUPT_RETURN	jmp native_iret
+ #define USERGS_SYSRET64				\
+ 	swapgs;					\
+@@ -170,6 +158,14 @@ static __always_inline int arch_irqs_disabled(void)
+ 
+ 	return arch_irqs_disabled_flags(flags);
+ }
++#else
++#ifdef CONFIG_X86_64
++#ifdef CONFIG_XEN_PV
++#define SWAPGS	ALTERNATIVE "swapgs", "", X86_FEATURE_XENPV
++#else
++#define SWAPGS	swapgs
++#endif
++#endif
+ #endif /* !__ASSEMBLY__ */
+ 
+ #endif
+diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
+index d25cc6830e895..5647bcdba776e 100644
+--- a/arch/x86/include/asm/paravirt.h
++++ b/arch/x86/include/asm/paravirt.h
+@@ -776,26 +776,6 @@ extern void default_banner(void);
+ 
+ #ifdef CONFIG_X86_64
+ #ifdef CONFIG_PARAVIRT_XXL
+-/*
+- * If swapgs is used while the userspace stack is still current,
+- * there's no way to call a pvop.  The PV replacement *must* be
+- * inlined, or the swapgs instruction must be trapped and emulated.
+- */
+-#define SWAPGS_UNSAFE_STACK						\
+-	PARA_SITE(PARA_PATCH(PV_CPU_swapgs), swapgs)
+-
+-/*
+- * Note: swapgs is very special, and in practise is either going to be
+- * implemented with a single "swapgs" instruction or something very
+- * special.  Either way, we don't need to save any registers for
+- * it.
+- */
+-#define SWAPGS								\
+-	PARA_SITE(PARA_PATCH(PV_CPU_swapgs),				\
+-		  ANNOTATE_RETPOLINE_SAFE;				\
+-		  call PARA_INDIRECT(pv_ops+PV_CPU_swapgs);		\
+-		 )
+-
+ #define USERGS_SYSRET64							\
+ 	PARA_SITE(PARA_PATCH(PV_CPU_usergs_sysret64),			\
+ 		  ANNOTATE_RETPOLINE_SAFE;				\
+diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
+index 0fad9f61c76ab..903d71884fa25 100644
+--- a/arch/x86/include/asm/paravirt_types.h
++++ b/arch/x86/include/asm/paravirt_types.h
+@@ -169,8 +169,6 @@ struct pv_cpu_ops {
+ 	   frame set up. */
+ 	void (*iret)(void);
+ 
+-	void (*swapgs)(void);
+-
+ 	void (*start_context_switch)(struct task_struct *prev);
+ 	void (*end_context_switch)(struct task_struct *next);
+ #endif
+diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
+index 828be792231e9..1354bc30614d7 100644
+--- a/arch/x86/kernel/asm-offsets_64.c
++++ b/arch/x86/kernel/asm-offsets_64.c
+@@ -15,7 +15,6 @@ int main(void)
+ #ifdef CONFIG_PARAVIRT_XXL
+ 	OFFSET(PV_CPU_usergs_sysret64, paravirt_patch_template,
+ 	       cpu.usergs_sysret64);
+-	OFFSET(PV_CPU_swapgs, paravirt_patch_template, cpu.swapgs);
+ #ifdef CONFIG_DEBUG_ENTRY
+ 	OFFSET(PV_IRQ_save_fl, paravirt_patch_template, irq.save_fl);
+ #endif
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 6c3407ba6ee98..5e5fcf5c376de 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -312,7 +312,6 @@ struct paravirt_patch_template pv_ops = {
+ 
+ 	.cpu.usergs_sysret64	= native_usergs_sysret64,
+ 	.cpu.iret		= native_iret,
+-	.cpu.swapgs		= native_swapgs,
+ 
+ #ifdef CONFIG_X86_IOPL_IOPERM
+ 	.cpu.invalidate_io_bitmap	= native_tss_invalidate_io_bitmap,
+diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
+index ace6e334cb393..7c518b08aa3c5 100644
+--- a/arch/x86/kernel/paravirt_patch.c
++++ b/arch/x86/kernel/paravirt_patch.c
+@@ -28,7 +28,6 @@ struct patch_xxl {
+ 	const unsigned char	irq_restore_fl[2];
+ 	const unsigned char	cpu_wbinvd[2];
+ 	const unsigned char	cpu_usergs_sysret64[6];
+-	const unsigned char	cpu_swapgs[3];
+ 	const unsigned char	mov64[3];
+ };
+ 
+@@ -43,7 +42,6 @@ static const struct patch_xxl patch_data_xxl = {
+ 	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
+ 	.cpu_usergs_sysret64	= { 0x0f, 0x01, 0xf8,
+ 				    0x48, 0x0f, 0x07 },	// swapgs; sysretq
+-	.cpu_swapgs		= { 0x0f, 0x01, 0xf8 },	// swapgs
+ 	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
+ };
+ 
+@@ -86,7 +84,6 @@ unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
+ 	PATCH_CASE(mmu, write_cr3, xxl, insn_buff, len);
+ 
+ 	PATCH_CASE(cpu, usergs_sysret64, xxl, insn_buff, len);
+-	PATCH_CASE(cpu, swapgs, xxl, insn_buff, len);
+ 	PATCH_CASE(cpu, wbinvd, xxl, insn_buff, len);
+ #endif
+ 
+diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
+index 865e234ea24bd..c222fab112cbd 100644
+--- a/arch/x86/kernel/sev-es.c
++++ b/arch/x86/kernel/sev-es.c
+@@ -260,11 +260,6 @@ static enum es_result vc_write_mem(struct es_em_ctxt *ctxt,
+ 				   char *dst, char *buf, size_t size)
+ {
+ 	unsigned long error_code = X86_PF_PROT | X86_PF_WRITE;
+-	char __user *target = (char __user *)dst;
+-	u64 d8;
+-	u32 d4;
+-	u16 d2;
+-	u8  d1;
+ 
+ 	/*
+ 	 * This function uses __put_user() independent of whether kernel or user
+@@ -286,26 +281,42 @@ static enum es_result vc_write_mem(struct es_em_ctxt *ctxt,
+ 	 * instructions here would cause infinite nesting.
+ 	 */
+ 	switch (size) {
+-	case 1:
++	case 1: {
++		u8 d1;
++		u8 __user *target = (u8 __user *)dst;
++
+ 		memcpy(&d1, buf, 1);
+ 		if (__put_user(d1, target))
+ 			goto fault;
+ 		break;
+-	case 2:
++	}
++	case 2: {
++		u16 d2;
++		u16 __user *target = (u16 __user *)dst;
++
+ 		memcpy(&d2, buf, 2);
+ 		if (__put_user(d2, target))
+ 			goto fault;
+ 		break;
+-	case 4:
++	}
++	case 4: {
++		u32 d4;
++		u32 __user *target = (u32 __user *)dst;
++
+ 		memcpy(&d4, buf, 4);
+ 		if (__put_user(d4, target))
+ 			goto fault;
+ 		break;
+-	case 8:
++	}
++	case 8: {
++		u64 d8;
++		u64 __user *target = (u64 __user *)dst;
++
+ 		memcpy(&d8, buf, 8);
+ 		if (__put_user(d8, target))
+ 			goto fault;
+ 		break;
++	}
+ 	default:
+ 		WARN_ONCE(1, "%s: Invalid size: %zu\n", __func__, size);
+ 		return ES_UNSUPPORTED;
+@@ -328,11 +339,6 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
+ 				  char *src, char *buf, size_t size)
+ {
+ 	unsigned long error_code = X86_PF_PROT;
+-	char __user *s = (char __user *)src;
+-	u64 d8;
+-	u32 d4;
+-	u16 d2;
+-	u8  d1;
+ 
+ 	/*
+ 	 * This function uses __get_user() independent of whether kernel or user
+@@ -354,26 +360,41 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
+ 	 * instructions here would cause infinite nesting.
+ 	 */
+ 	switch (size) {
+-	case 1:
++	case 1: {
++		u8 d1;
++		u8 __user *s = (u8 __user *)src;
++
+ 		if (__get_user(d1, s))
+ 			goto fault;
+ 		memcpy(buf, &d1, 1);
+ 		break;
+-	case 2:
++	}
++	case 2: {
++		u16 d2;
++		u16 __user *s = (u16 __user *)src;
++
+ 		if (__get_user(d2, s))
+ 			goto fault;
+ 		memcpy(buf, &d2, 2);
+ 		break;
+-	case 4:
++	}
++	case 4: {
++		u32 d4;
++		u32 __user *s = (u32 __user *)src;
++
+ 		if (__get_user(d4, s))
+ 			goto fault;
+ 		memcpy(buf, &d4, 4);
+ 		break;
+-	case 8:
++	}
++	case 8: {
++		u64 d8;
++		u64 __user *s = (u64 __user *)src;
+ 		if (__get_user(d8, s))
+ 			goto fault;
+ 		memcpy(buf, &d8, 8);
+ 		break;
++	}
+ 	default:
+ 		WARN_ONCE(1, "%s: Invalid size: %zu\n", __func__, size);
+ 		return ES_UNSUPPORTED;
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index 56289170753c5..f9f1b45e5ddc4 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -1178,6 +1178,12 @@ void mark_tsc_unstable(char *reason)
+ 
+ EXPORT_SYMBOL_GPL(mark_tsc_unstable);
+ 
++static void __init tsc_disable_clocksource_watchdog(void)
++{
++	clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
++	clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
++}
++
+ static void __init check_system_tsc_reliable(void)
+ {
+ #if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC)
+@@ -1194,6 +1200,23 @@ static void __init check_system_tsc_reliable(void)
+ #endif
+ 	if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE))
+ 		tsc_clocksource_reliable = 1;
++
++	/*
++	 * Disable the clocksource watchdog when the system has:
++	 *  - TSC running at constant frequency
++	 *  - TSC which does not stop in C-States
++	 *  - the TSC_ADJUST register which allows to detect even minimal
++	 *    modifications
++	 *  - not more than two sockets. As the number of sockets cannot be
++	 *    evaluated at the early boot stage where this has to be
++	 *    invoked, check the number of online memory nodes as a
++	 *    fallback solution which is an reasonable estimate.
++	 */
++	if (boot_cpu_has(X86_FEATURE_CONSTANT_TSC) &&
++	    boot_cpu_has(X86_FEATURE_NONSTOP_TSC) &&
++	    boot_cpu_has(X86_FEATURE_TSC_ADJUST) &&
++	    nr_online_nodes <= 2)
++		tsc_disable_clocksource_watchdog();
+ }
+ 
+ /*
+@@ -1385,9 +1408,6 @@ static int __init init_tsc_clocksource(void)
+ 	if (tsc_unstable)
+ 		goto unreg;
+ 
+-	if (tsc_clocksource_reliable || no_tsc_watchdog)
+-		clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+-
+ 	if (boot_cpu_has(X86_FEATURE_NONSTOP_TSC_S3))
+ 		clocksource_tsc.flags |= CLOCK_SOURCE_SUSPEND_NONSTOP;
+ 
+@@ -1525,7 +1545,7 @@ void __init tsc_init(void)
+ 	}
+ 
+ 	if (tsc_clocksource_reliable || no_tsc_watchdog)
+-		clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
++		tsc_disable_clocksource_watchdog();
+ 
+ 	clocksource_register_khz(&clocksource_tsc_early, tsc_khz);
+ 	detect_art();
+diff --git a/arch/x86/kernel/tsc_sync.c b/arch/x86/kernel/tsc_sync.c
+index 3d3c761eb74a6..923660057f363 100644
+--- a/arch/x86/kernel/tsc_sync.c
++++ b/arch/x86/kernel/tsc_sync.c
+@@ -30,6 +30,7 @@ struct tsc_adjust {
+ };
+ 
+ static DEFINE_PER_CPU(struct tsc_adjust, tsc_adjust);
++static struct timer_list tsc_sync_check_timer;
+ 
+ /*
+  * TSC's on different sockets may be reset asynchronously.
+@@ -77,6 +78,46 @@ void tsc_verify_tsc_adjust(bool resume)
+ 	}
+ }
+ 
++/*
++ * Normally the tsc_sync will be checked every time system enters idle
++ * state, but there is still caveat that a system won't enter idle,
++ * either because it's too busy or configured purposely to not enter
++ * idle.
++ *
++ * So setup a periodic timer (every 10 minutes) to make sure the check
++ * is always on.
++ */
++
++#define SYNC_CHECK_INTERVAL		(HZ * 600)
++
++static void tsc_sync_check_timer_fn(struct timer_list *unused)
++{
++	int next_cpu;
++
++	tsc_verify_tsc_adjust(false);
++
++	/* Run the check for all onlined CPUs in turn */
++	next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask);
++	if (next_cpu >= nr_cpu_ids)
++		next_cpu = cpumask_first(cpu_online_mask);
++
++	tsc_sync_check_timer.expires += SYNC_CHECK_INTERVAL;
++	add_timer_on(&tsc_sync_check_timer, next_cpu);
++}
++
++static int __init start_sync_check_timer(void)
++{
++	if (!cpu_feature_enabled(X86_FEATURE_TSC_ADJUST) || tsc_clocksource_reliable)
++		return 0;
++
++	timer_setup(&tsc_sync_check_timer, tsc_sync_check_timer_fn, 0);
++	tsc_sync_check_timer.expires = jiffies + SYNC_CHECK_INTERVAL;
++	add_timer(&tsc_sync_check_timer);
++
++	return 0;
++}
++late_initcall(start_sync_check_timer);
++
+ static void tsc_sanitize_first_cpu(struct tsc_adjust *cur, s64 bootval,
+ 				   unsigned int cpu, bool bootcpu)
+ {
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 770d18dc46509..c2516ddc3cbec 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5152,7 +5152,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_invalidate_gva);
+ 
+ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
+ {
+-	kvm_mmu_invalidate_gva(vcpu, vcpu->arch.mmu, gva, INVALID_PAGE);
++	kvm_mmu_invalidate_gva(vcpu, vcpu->arch.walk_mmu, gva, INVALID_PAGE);
+ 	++vcpu->stat.invlpg;
+ }
+ EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
+diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
+index 035da07500e8b..5a5c165a30ed1 100644
+--- a/arch/x86/kvm/svm/pmu.c
++++ b/arch/x86/kvm/svm/pmu.c
+@@ -274,7 +274,7 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
+ 		pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS;
+ 
+ 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
+-	pmu->reserved_bits = 0xffffffff00200000ull;
++	pmu->reserved_bits = 0xfffffff000280000ull;
+ 	pmu->version = 1;
+ 	/* not applicable to AMD; but clean them to prevent any fall out */
+ 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 257ec2cbf69a4..36661b15c3d04 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2619,8 +2619,10 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ 
+ 	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) &&
+ 	    WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
+-				     vmcs12->guest_ia32_perf_global_ctrl)))
++				     vmcs12->guest_ia32_perf_global_ctrl))) {
++		*entry_failure_code = ENTRY_FAIL_DEFAULT;
+ 		return -EINVAL;
++	}
+ 
+ 	kvm_rsp_write(vcpu, vmcs12->guest_rsp);
+ 	kvm_rip_write(vcpu, vmcs12->guest_rip);
+diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
+index f02962dcc72cb..fbd9b10354790 100644
+--- a/arch/x86/kvm/vmx/posted_intr.c
++++ b/arch/x86/kvm/vmx/posted_intr.c
+@@ -5,6 +5,7 @@
+ #include <asm/cpu.h>
+ 
+ #include "lapic.h"
++#include "irq.h"
+ #include "posted_intr.h"
+ #include "trace.h"
+ #include "vmx.h"
+@@ -77,13 +78,18 @@ after_clear_sn:
+ 		pi_set_on(pi_desc);
+ }
+ 
++static bool vmx_can_use_vtd_pi(struct kvm *kvm)
++{
++	return irqchip_in_kernel(kvm) && enable_apicv &&
++		kvm_arch_has_assigned_device(kvm) &&
++		irq_remapping_cap(IRQ_POSTING_CAP);
++}
++
+ void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
+ {
+ 	struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
+ 
+-	if (!kvm_arch_has_assigned_device(vcpu->kvm) ||
+-		!irq_remapping_cap(IRQ_POSTING_CAP)  ||
+-		!kvm_vcpu_apicv_active(vcpu))
++	if (!vmx_can_use_vtd_pi(vcpu->kvm))
+ 		return;
+ 
+ 	/* Set SN when the vCPU is preempted */
+@@ -141,9 +147,7 @@ int pi_pre_block(struct kvm_vcpu *vcpu)
+ 	struct pi_desc old, new;
+ 	struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
+ 
+-	if (!kvm_arch_has_assigned_device(vcpu->kvm) ||
+-		!irq_remapping_cap(IRQ_POSTING_CAP)  ||
+-		!kvm_vcpu_apicv_active(vcpu))
++	if (!vmx_can_use_vtd_pi(vcpu->kvm))
+ 		return 0;
+ 
+ 	WARN_ON(irqs_disabled());
+@@ -256,9 +260,7 @@ int pi_update_irte(struct kvm *kvm, unsigned int host_irq, uint32_t guest_irq,
+ 	struct vcpu_data vcpu_info;
+ 	int idx, ret = 0;
+ 
+-	if (!kvm_arch_has_assigned_device(kvm) ||
+-	    !irq_remapping_cap(IRQ_POSTING_CAP) ||
+-	    !kvm_vcpu_apicv_active(kvm->vcpus[0]))
++	if (!vmx_can_use_vtd_pi(kvm))
+ 		return 0;
+ 
+ 	idx = srcu_read_lock(&kvm->irq_srcu);
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index baa4244f3e6a1..5b7664d51dc2b 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -2908,6 +2908,13 @@ static void vmx_flush_tlb_all(struct kvm_vcpu *vcpu)
+ 	}
+ }
+ 
++static inline int vmx_get_current_vpid(struct kvm_vcpu *vcpu)
++{
++	if (is_guest_mode(vcpu))
++		return nested_get_vpid02(vcpu);
++	return to_vmx(vcpu)->vpid;
++}
++
+ static void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
+ {
+ 	struct kvm_mmu *mmu = vcpu->arch.mmu;
+@@ -2920,31 +2927,29 @@ static void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
+ 	if (enable_ept)
+ 		ept_sync_context(construct_eptp(vcpu, root_hpa,
+ 						mmu->shadow_root_level));
+-	else if (!is_guest_mode(vcpu))
+-		vpid_sync_context(to_vmx(vcpu)->vpid);
+ 	else
+-		vpid_sync_context(nested_get_vpid02(vcpu));
++		vpid_sync_context(vmx_get_current_vpid(vcpu));
+ }
+ 
+ static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
+ {
+ 	/*
+-	 * vpid_sync_vcpu_addr() is a nop if vmx->vpid==0, see the comment in
++	 * vpid_sync_vcpu_addr() is a nop if vpid==0, see the comment in
+ 	 * vmx_flush_tlb_guest() for an explanation of why this is ok.
+ 	 */
+-	vpid_sync_vcpu_addr(to_vmx(vcpu)->vpid, addr);
++	vpid_sync_vcpu_addr(vmx_get_current_vpid(vcpu), addr);
+ }
+ 
+ static void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
+ {
+ 	/*
+-	 * vpid_sync_context() is a nop if vmx->vpid==0, e.g. if enable_vpid==0
+-	 * or a vpid couldn't be allocated for this vCPU.  VM-Enter and VM-Exit
+-	 * are required to flush GVA->{G,H}PA mappings from the TLB if vpid is
++	 * vpid_sync_context() is a nop if vpid==0, e.g. if enable_vpid==0 or a
++	 * vpid couldn't be allocated for this vCPU.  VM-Enter and VM-Exit are
++	 * required to flush GVA->{G,H}PA mappings from the TLB if vpid is
+ 	 * disabled (VM-Enter with vpid enabled and vpid==0 is disallowed),
+ 	 * i.e. no explicit INVVPID is necessary.
+ 	 */
+-	vpid_sync_context(to_vmx(vcpu)->vpid);
++	vpid_sync_context(vmx_get_current_vpid(vcpu));
+ }
+ 
+ void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
+index 22fda7d991590..3313bffbecd4d 100644
+--- a/arch/x86/realmode/init.c
++++ b/arch/x86/realmode/init.c
+@@ -70,6 +70,7 @@ static void __init setup_real_mode(void)
+ #ifdef CONFIG_X86_64
+ 	u64 *trampoline_pgd;
+ 	u64 efer;
++	int i;
+ #endif
+ 
+ 	base = (unsigned char *)real_mode_header;
+@@ -126,8 +127,17 @@ static void __init setup_real_mode(void)
+ 	trampoline_header->flags = 0;
+ 
+ 	trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);
++
++	/* Map the real mode stub as virtual == physical */
+ 	trampoline_pgd[0] = trampoline_pgd_entry.pgd;
+-	trampoline_pgd[511] = init_top_pgt[511].pgd;
++
++	/*
++	 * Include the entirety of the kernel mapping into the trampoline
++	 * PGD.  This way, all mappings present in the normal kernel page
++	 * tables are usable while running on trampoline_pgd.
++	 */
++	for (i = pgd_index(__PAGE_OFFSET); i < PTRS_PER_PGD; i++)
++		trampoline_pgd[i] = init_top_pgt[i].pgd;
+ #endif
+ 
+ 	sme_sev_setup_real_mode(trampoline_header);
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 5af0421ef74ba..16ff25d6935e7 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1083,9 +1083,6 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
+ #endif
+ 	.io_delay = xen_io_delay,
+ 
+-	/* Xen takes care of %gs when switching to usermode for us */
+-	.swapgs = paravirt_nop,
+-
+ 	.start_context_switch = paravirt_start_context_switch,
+ 	.end_context_switch = xen_end_context_switch,
+ };
+diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
+index 53cf8aa35032d..011ec649f3886 100644
+--- a/arch/x86/xen/xen-asm.S
++++ b/arch/x86/xen/xen-asm.S
+@@ -19,6 +19,7 @@
+ 
+ #include <linux/init.h>
+ #include <linux/linkage.h>
++#include <../entry/calling.h>
+ 
+ /*
+  * Enable events.  This clears the event mask and tests the pending
+@@ -235,6 +236,25 @@ SYM_CODE_START(xen_sysret64)
+ 	jmp hypercall_iret
+ SYM_CODE_END(xen_sysret64)
+ 
++/*
++ * XEN pv doesn't use trampoline stack, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is
++ * also the kernel stack.  Reusing swapgs_restore_regs_and_return_to_usermode()
++ * in XEN pv would cause %rsp to move up to the top of the kernel stack and
++ * leave the IRET frame below %rsp, which is dangerous to be corrupted if #NMI
++ * interrupts. And swapgs_restore_regs_and_return_to_usermode() pushing the IRET
++ * frame at the same address is useless.
++ */
++SYM_CODE_START(xenpv_restore_regs_and_return_to_usermode)
++	UNWIND_HINT_REGS
++	POP_REGS
++
++	/* stackleak_erase() can work safely on the kernel stack. */
++	STACKLEAK_ERASE_NOCLOBBER
++
++	addq	$8, %rsp	/* skip regs->orig_ax */
++	jmp xen_iret
++SYM_CODE_END(xenpv_restore_regs_and_return_to_usermode)
++
+ /*
+  * Xen handles syscall callbacks much like ordinary exceptions, which
+  * means we have:
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 33192a8f687d6..ff2add0101fe5 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -442,6 +442,7 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 	/* AMD */
+ 	{ PCI_VDEVICE(AMD, 0x7800), board_ahci }, /* AMD Hudson-2 */
+ 	{ PCI_VDEVICE(AMD, 0x7900), board_ahci }, /* AMD CZ */
++	{ PCI_VDEVICE(AMD, 0x7901), board_ahci_mobile }, /* AMD Green Sardine */
+ 	/* AMD is using RAID class only for ahci controllers */
+ 	{ PCI_VENDOR_ID_AMD, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+ 	  PCI_CLASS_STORAGE_RAID << 8, 0xffffff, board_ahci },
+diff --git a/drivers/ata/sata_fsl.c b/drivers/ata/sata_fsl.c
+index d55ee244d6931..0ba231e80b191 100644
+--- a/drivers/ata/sata_fsl.c
++++ b/drivers/ata/sata_fsl.c
+@@ -1394,6 +1394,14 @@ static int sata_fsl_init_controller(struct ata_host *host)
+ 	return 0;
+ }
+ 
++static void sata_fsl_host_stop(struct ata_host *host)
++{
++        struct sata_fsl_host_priv *host_priv = host->private_data;
++
++        iounmap(host_priv->hcr_base);
++        kfree(host_priv);
++}
++
+ /*
+  * scsi mid-layer and libata interface structures
+  */
+@@ -1426,6 +1434,8 @@ static struct ata_port_operations sata_fsl_ops = {
+ 	.port_start = sata_fsl_port_start,
+ 	.port_stop = sata_fsl_port_stop,
+ 
++	.host_stop      = sata_fsl_host_stop,
++
+ 	.pmp_attach = sata_fsl_pmp_attach,
+ 	.pmp_detach = sata_fsl_pmp_detach,
+ };
+@@ -1480,9 +1490,9 @@ static int sata_fsl_probe(struct platform_device *ofdev)
+ 	host_priv->ssr_base = ssr_base;
+ 	host_priv->csr_base = csr_base;
+ 
+-	irq = irq_of_parse_and_map(ofdev->dev.of_node, 0);
+-	if (!irq) {
+-		dev_err(&ofdev->dev, "invalid irq from platform\n");
++	irq = platform_get_irq(ofdev, 0);
++	if (irq < 0) {
++		retval = irq;
+ 		goto error_exit_with_cleanup;
+ 	}
+ 	host_priv->irq = irq;
+@@ -1557,10 +1567,6 @@ static int sata_fsl_remove(struct platform_device *ofdev)
+ 
+ 	ata_host_detach(host);
+ 
+-	irq_dispose_mapping(host_priv->irq);
+-	iounmap(host_priv->hcr_base);
+-	kfree(host_priv);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index abb865b1dff29..38b545bef05a3 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -203,6 +203,8 @@ struct ipmi_user {
+ 	struct work_struct remove_work;
+ };
+ 
++static struct workqueue_struct *remove_work_wq;
++
+ static struct ipmi_user *acquire_ipmi_user(struct ipmi_user *user, int *index)
+ 	__acquires(user->release_barrier)
+ {
+@@ -1272,7 +1274,7 @@ static void free_user(struct kref *ref)
+ 	struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);
+ 
+ 	/* SRCU cleanup must happen in task context. */
+-	schedule_work(&user->remove_work);
++	queue_work(remove_work_wq, &user->remove_work);
+ }
+ 
+ static void _ipmi_destroy_user(struct ipmi_user *user)
+@@ -5166,6 +5168,13 @@ static int ipmi_init_msghandler(void)
+ 
+ 	atomic_notifier_chain_register(&panic_notifier_list, &panic_block);
+ 
++	remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
++	if (!remove_work_wq) {
++		pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
++		rv = -ENOMEM;
++		goto out;
++	}
++
+ 	initialized = true;
+ 
+ out:
+@@ -5191,6 +5200,8 @@ static void __exit cleanup_ipmi(void)
+ 	int count;
+ 
+ 	if (initialized) {
++		destroy_workqueue(remove_work_wq);
++
+ 		atomic_notifier_chain_unregister(&panic_notifier_list,
+ 						 &panic_block);
+ 
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index ebee0ad559fad..8e159fb6af9cd 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1004,10 +1004,9 @@ static struct kobj_type ktype_cpufreq = {
+ 	.release	= cpufreq_sysfs_release,
+ };
+ 
+-static void add_cpu_dev_symlink(struct cpufreq_policy *policy, unsigned int cpu)
++static void add_cpu_dev_symlink(struct cpufreq_policy *policy, unsigned int cpu,
++				struct device *dev)
+ {
+-	struct device *dev = get_cpu_device(cpu);
+-
+ 	if (unlikely(!dev))
+ 		return;
+ 
+@@ -1391,7 +1390,7 @@ static int cpufreq_online(unsigned int cpu)
+ 	if (new_policy) {
+ 		for_each_cpu(j, policy->related_cpus) {
+ 			per_cpu(cpufreq_cpu_data, j) = policy;
+-			add_cpu_dev_symlink(policy, j);
++			add_cpu_dev_symlink(policy, j, get_cpu_device(j));
+ 		}
+ 
+ 		policy->min_freq_req = kzalloc(2 * sizeof(*policy->min_freq_req),
+@@ -1553,7 +1552,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
+ 	/* Create sysfs link on CPU registration */
+ 	policy = per_cpu(cpufreq_cpu_data, cpu);
+ 	if (policy)
+-		add_cpu_dev_symlink(policy, cpu);
++		add_cpu_dev_symlink(policy, cpu, dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+index 0526dec1d736e..042c85fc528bb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+@@ -358,6 +358,7 @@ struct amdgpu_hive_info *amdgpu_get_xgmi_hive(struct amdgpu_device *adev)
+ 			"%s", "xgmi_hive_info");
+ 	if (ret) {
+ 		dev_err(adev->dev, "XGMI: failed initializing kobject for xgmi hive\n");
++		kobject_put(&hive->kobj);
+ 		kfree(hive);
+ 		hive = NULL;
+ 		goto pro_end;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index 352a32dc609b2..2645ebc63a14d 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -1207,6 +1207,11 @@ static int stop_cpsch(struct device_queue_manager *dqm)
+ 	bool hanging;
+ 
+ 	dqm_lock(dqm);
++	if (!dqm->sched_running) {
++		dqm_unlock(dqm);
++		return 0;
++	}
++
+ 	if (!dqm->is_hws_hang)
+ 		unmap_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES, 0);
+ 	hanging = dqm->is_hws_hang || dqm->is_resetting;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 955a055bd9800..d617e98afb76d 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -36,6 +36,8 @@
+ #include "dm_helpers.h"
+ 
+ #include "dc_link_ddc.h"
++#include "ddc_service_types.h"
++#include "dpcd_defs.h"
+ 
+ #include "i2caux_interface.h"
+ #if defined(CONFIG_DEBUG_FS)
+@@ -152,6 +154,16 @@ static const struct drm_connector_funcs dm_dp_mst_connector_funcs = {
+ };
+ 
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
++static bool needs_dsc_aux_workaround(struct dc_link *link)
++{
++	if (link->dpcd_caps.branch_dev_id == DP_BRANCH_DEVICE_ID_90CC24 &&
++	    (link->dpcd_caps.dpcd_rev.raw == DPCD_REV_14 || link->dpcd_caps.dpcd_rev.raw == DPCD_REV_12) &&
++	    link->dpcd_caps.sink_count.bits.SINK_COUNT >= 2)
++		return true;
++
++	return false;
++}
++
+ static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnector)
+ {
+ 	struct dc_sink *dc_sink = aconnector->dc_sink;
+@@ -159,7 +171,7 @@ static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnecto
+ 	u8 dsc_caps[16] = { 0 };
+ 
+ 	aconnector->dsc_aux = drm_dp_mst_dsc_aux_for_port(port);
+-#if defined(CONFIG_HP_HOOK_WORKAROUND)
++
+ 	/*
+ 	 * drm_dp_mst_dsc_aux_for_port() will return NULL for certain configs
+ 	 * because it only check the dsc/fec caps of the "port variable" and not the dock
+@@ -169,10 +181,10 @@ static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnecto
+ 	 * Workaround: explicitly check the use case above and use the mst dock's aux as dsc_aux
+ 	 *
+ 	 */
+-
+-	if (!aconnector->dsc_aux && !port->parent->port_parent)
++	if (!aconnector->dsc_aux && !port->parent->port_parent &&
++	    needs_dsc_aux_workaround(aconnector->dc_link))
+ 		aconnector->dsc_aux = &aconnector->mst_port->dm_dp_aux.aux;
+-#endif
++
+ 	if (!aconnector->dsc_aux)
+ 		return false;
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+index e9ede19193b0e..87d845d6d3bc3 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+@@ -777,12 +777,12 @@ static void a6xx_get_gmu_registers(struct msm_gpu *gpu,
+ 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+ 
+ 	a6xx_state->gmu_registers = state_kcalloc(a6xx_state,
+-		2, sizeof(*a6xx_state->gmu_registers));
++		3, sizeof(*a6xx_state->gmu_registers));
+ 
+ 	if (!a6xx_state->gmu_registers)
+ 		return;
+ 
+-	a6xx_state->nr_gmu_registers = 2;
++	a6xx_state->nr_gmu_registers = 3;
+ 
+ 	/* Get the CX GMU registers from AHB */
+ 	_a6xx_get_gmu_registers(gpu, a6xx_state, &a6xx_gmu_reglist[0],
+diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c
+index ee2e270f464c1..7a7ccad65c922 100644
+--- a/drivers/gpu/drm/msm/msm_debugfs.c
++++ b/drivers/gpu/drm/msm/msm_debugfs.c
+@@ -77,6 +77,7 @@ static int msm_gpu_open(struct inode *inode, struct file *file)
+ 		goto free_priv;
+ 
+ 	pm_runtime_get_sync(&gpu->pdev->dev);
++	msm_gpu_hw_init(gpu);
+ 	show_priv->state = gpu->funcs->gpu_state_get(gpu);
+ 	pm_runtime_put_sync(&gpu->pdev->dev);
+ 
+diff --git a/drivers/gpu/drm/sun4i/Kconfig b/drivers/gpu/drm/sun4i/Kconfig
+index 5755f0432e774..8c796de53222c 100644
+--- a/drivers/gpu/drm/sun4i/Kconfig
++++ b/drivers/gpu/drm/sun4i/Kconfig
+@@ -46,6 +46,7 @@ config DRM_SUN6I_DSI
+ 	default MACH_SUN8I
+ 	select CRC_CCITT
+ 	select DRM_MIPI_DSI
++	select RESET_CONTROLLER
+ 	select PHY_SUN6I_MIPI_DPHY
+ 	help
+ 	  Choose this option if you want have an Allwinner SoC with
+diff --git a/drivers/i2c/busses/i2c-cbus-gpio.c b/drivers/i2c/busses/i2c-cbus-gpio.c
+index 72df563477b1c..f8639a4457d23 100644
+--- a/drivers/i2c/busses/i2c-cbus-gpio.c
++++ b/drivers/i2c/busses/i2c-cbus-gpio.c
+@@ -195,8 +195,9 @@ static u32 cbus_i2c_func(struct i2c_adapter *adapter)
+ }
+ 
+ static const struct i2c_algorithm cbus_i2c_algo = {
+-	.smbus_xfer	= cbus_i2c_smbus_xfer,
+-	.functionality	= cbus_i2c_func,
++	.smbus_xfer		= cbus_i2c_smbus_xfer,
++	.smbus_xfer_atomic	= cbus_i2c_smbus_xfer,
++	.functionality		= cbus_i2c_func,
+ };
+ 
+ static int cbus_i2c_remove(struct platform_device *pdev)
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index 1e800b65e20a0..0f4c0282028b0 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -1472,6 +1472,7 @@ static irqreturn_t stm32f7_i2c_isr_event(int irq, void *data)
+ {
+ 	struct stm32f7_i2c_dev *i2c_dev = data;
+ 	struct stm32f7_i2c_msg *f7_msg = &i2c_dev->f7_msg;
++	struct stm32_i2c_dma *dma = i2c_dev->dma;
+ 	void __iomem *base = i2c_dev->base;
+ 	u32 status, mask;
+ 	int ret = IRQ_HANDLED;
+@@ -1497,6 +1498,10 @@ static irqreturn_t stm32f7_i2c_isr_event(int irq, void *data)
+ 		dev_dbg(i2c_dev->dev, "<%s>: Receive NACK (addr %x)\n",
+ 			__func__, f7_msg->addr);
+ 		writel_relaxed(STM32F7_I2C_ICR_NACKCF, base + STM32F7_I2C_ICR);
++		if (i2c_dev->use_dma) {
++			stm32f7_i2c_disable_dma_req(i2c_dev);
++			dmaengine_terminate_all(dma->chan_using);
++		}
+ 		f7_msg->result = -ENXIO;
+ 	}
+ 
+@@ -1512,7 +1517,7 @@ static irqreturn_t stm32f7_i2c_isr_event(int irq, void *data)
+ 		/* Clear STOP flag */
+ 		writel_relaxed(STM32F7_I2C_ICR_STOPCF, base + STM32F7_I2C_ICR);
+ 
+-		if (i2c_dev->use_dma) {
++		if (i2c_dev->use_dma && !f7_msg->result) {
+ 			ret = IRQ_WAKE_THREAD;
+ 		} else {
+ 			i2c_dev->master_mode = false;
+@@ -1525,7 +1530,7 @@ static irqreturn_t stm32f7_i2c_isr_event(int irq, void *data)
+ 		if (f7_msg->stop) {
+ 			mask = STM32F7_I2C_CR2_STOP;
+ 			stm32f7_i2c_set_bits(base + STM32F7_I2C_CR2, mask);
+-		} else if (i2c_dev->use_dma) {
++		} else if (i2c_dev->use_dma && !f7_msg->result) {
+ 			ret = IRQ_WAKE_THREAD;
+ 		} else if (f7_msg->smbus) {
+ 			stm32f7_i2c_smbus_rep_start(i2c_dev);
+@@ -1665,12 +1670,23 @@ static int stm32f7_i2c_xfer(struct i2c_adapter *i2c_adap,
+ 	time_left = wait_for_completion_timeout(&i2c_dev->complete,
+ 						i2c_dev->adap.timeout);
+ 	ret = f7_msg->result;
++	if (ret) {
++		/*
++		 * It is possible that some unsent data have already been
++		 * written into TXDR. To avoid sending old data in a
++		 * further transfer, flush TXDR in case of any error
++		 */
++		writel_relaxed(STM32F7_I2C_ISR_TXE,
++			       i2c_dev->base + STM32F7_I2C_ISR);
++		goto pm_free;
++	}
+ 
+ 	if (!time_left) {
+ 		dev_dbg(i2c_dev->dev, "Access to slave 0x%x timed out\n",
+ 			i2c_dev->msg->addr);
+ 		if (i2c_dev->use_dma)
+ 			dmaengine_terminate_all(dma->chan_using);
++		stm32f7_i2c_wait_free_bus(i2c_dev);
+ 		ret = -ETIMEDOUT;
+ 	}
+ 
+@@ -1713,13 +1729,22 @@ static int stm32f7_i2c_smbus_xfer(struct i2c_adapter *adapter, u16 addr,
+ 	timeout = wait_for_completion_timeout(&i2c_dev->complete,
+ 					      i2c_dev->adap.timeout);
+ 	ret = f7_msg->result;
+-	if (ret)
++	if (ret) {
++		/*
++		 * It is possible that some unsent data have already been
++		 * written into TXDR. To avoid sending old data in a
++		 * further transfer, flush TXDR in case of any error
++		 */
++		writel_relaxed(STM32F7_I2C_ISR_TXE,
++			       i2c_dev->base + STM32F7_I2C_ISR);
+ 		goto pm_free;
++	}
+ 
+ 	if (!timeout) {
+ 		dev_dbg(dev, "Access to slave 0x%x timed out\n", f7_msg->addr);
+ 		if (i2c_dev->use_dma)
+ 			dmaengine_terminate_all(dma->chan_using);
++		stm32f7_i2c_wait_free_bus(i2c_dev);
+ 		ret = -ETIMEDOUT;
+ 		goto pm_free;
+ 	}
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_common.h b/drivers/net/ethernet/aquantia/atlantic/aq_common.h
+index 23b2d390fcdda..ace691d7cd759 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_common.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_common.h
+@@ -40,10 +40,12 @@
+ 
+ #define AQ_DEVICE_ID_AQC113DEV	0x00C0
+ #define AQ_DEVICE_ID_AQC113CS	0x94C0
++#define AQ_DEVICE_ID_AQC113CA	0x34C0
+ #define AQ_DEVICE_ID_AQC114CS	0x93C0
+ #define AQ_DEVICE_ID_AQC113	0x04C0
+ #define AQ_DEVICE_ID_AQC113C	0x14C0
+ #define AQ_DEVICE_ID_AQC115C	0x12C0
++#define AQ_DEVICE_ID_AQC116C	0x11C0
+ 
+ #define HW_ATL_NIC_NAME "Marvell (aQuantia) AQtion 10Gbit Network Adapter"
+ 
+@@ -53,20 +55,19 @@
+ 
+ #define AQ_NIC_RATE_10G		BIT(0)
+ #define AQ_NIC_RATE_5G		BIT(1)
+-#define AQ_NIC_RATE_5GSR	BIT(2)
+-#define AQ_NIC_RATE_2G5		BIT(3)
+-#define AQ_NIC_RATE_1G		BIT(4)
+-#define AQ_NIC_RATE_100M	BIT(5)
+-#define AQ_NIC_RATE_10M		BIT(6)
+-#define AQ_NIC_RATE_1G_HALF	BIT(7)
+-#define AQ_NIC_RATE_100M_HALF	BIT(8)
+-#define AQ_NIC_RATE_10M_HALF	BIT(9)
++#define AQ_NIC_RATE_2G5		BIT(2)
++#define AQ_NIC_RATE_1G		BIT(3)
++#define AQ_NIC_RATE_100M	BIT(4)
++#define AQ_NIC_RATE_10M		BIT(5)
++#define AQ_NIC_RATE_1G_HALF	BIT(6)
++#define AQ_NIC_RATE_100M_HALF	BIT(7)
++#define AQ_NIC_RATE_10M_HALF	BIT(8)
+ 
+-#define AQ_NIC_RATE_EEE_10G	BIT(10)
+-#define AQ_NIC_RATE_EEE_5G	BIT(11)
+-#define AQ_NIC_RATE_EEE_2G5	BIT(12)
+-#define AQ_NIC_RATE_EEE_1G	BIT(13)
+-#define AQ_NIC_RATE_EEE_100M	BIT(14)
++#define AQ_NIC_RATE_EEE_10G	BIT(9)
++#define AQ_NIC_RATE_EEE_5G	BIT(10)
++#define AQ_NIC_RATE_EEE_2G5	BIT(11)
++#define AQ_NIC_RATE_EEE_1G	BIT(12)
++#define AQ_NIC_RATE_EEE_100M	BIT(13)
+ #define AQ_NIC_RATE_EEE_MSK     (AQ_NIC_RATE_EEE_10G |\
+ 				 AQ_NIC_RATE_EEE_5G |\
+ 				 AQ_NIC_RATE_EEE_2G5 |\
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_hw.h b/drivers/net/ethernet/aquantia/atlantic/aq_hw.h
+index bed481816ea31..7442850ca95f0 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_hw.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_hw.h
+@@ -80,6 +80,8 @@ struct aq_hw_link_status_s {
+ };
+ 
+ struct aq_stats_s {
++	u64 brc;
++	u64 btc;
+ 	u64 uprc;
+ 	u64 mprc;
+ 	u64 bprc;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index bf5e0e9bd0e2b..0cf8ae8aeac83 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -903,8 +903,14 @@ u64 *aq_nic_get_stats(struct aq_nic_s *self, u64 *data)
+ 	data[++i] = stats->mbtc;
+ 	data[++i] = stats->bbrc;
+ 	data[++i] = stats->bbtc;
+-	data[++i] = stats->ubrc + stats->mbrc + stats->bbrc;
+-	data[++i] = stats->ubtc + stats->mbtc + stats->bbtc;
++	if (stats->brc)
++		data[++i] = stats->brc;
++	else
++		data[++i] = stats->ubrc + stats->mbrc + stats->bbrc;
++	if (stats->btc)
++		data[++i] = stats->btc;
++	else
++		data[++i] = stats->ubtc + stats->mbtc + stats->bbtc;
+ 	data[++i] = stats->dma_pkt_rc;
+ 	data[++i] = stats->dma_pkt_tc;
+ 	data[++i] = stats->dma_oct_rc;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index 5b996330f228b..1826253f97dc4 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -49,6 +49,8 @@ static const struct pci_device_id aq_pci_tbl[] = {
+ 	{ PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC113), },
+ 	{ PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC113C), },
+ 	{ PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC115C), },
++	{ PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC113CA), },
++	{ PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC116C), },
+ 
+ 	{}
+ };
+@@ -85,7 +87,10 @@ static const struct aq_board_revision_s hw_atl_boards[] = {
+ 	{ AQ_DEVICE_ID_AQC113CS,	AQ_HWREV_ANY,	&hw_atl2_ops, &hw_atl2_caps_aqc113, },
+ 	{ AQ_DEVICE_ID_AQC114CS,	AQ_HWREV_ANY,	&hw_atl2_ops, &hw_atl2_caps_aqc113, },
+ 	{ AQ_DEVICE_ID_AQC113C,		AQ_HWREV_ANY,	&hw_atl2_ops, &hw_atl2_caps_aqc113, },
+-	{ AQ_DEVICE_ID_AQC115C,		AQ_HWREV_ANY,	&hw_atl2_ops, &hw_atl2_caps_aqc113, },
++	{ AQ_DEVICE_ID_AQC115C,		AQ_HWREV_ANY,	&hw_atl2_ops, &hw_atl2_caps_aqc115c, },
++	{ AQ_DEVICE_ID_AQC113CA,	AQ_HWREV_ANY,	&hw_atl2_ops, &hw_atl2_caps_aqc113, },
++	{ AQ_DEVICE_ID_AQC116C,		AQ_HWREV_ANY,	&hw_atl2_ops, &hw_atl2_caps_aqc116c, },
++
+ };
+ 
+ MODULE_DEVICE_TABLE(pci, aq_pci_tbl);
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_vec.c b/drivers/net/ethernet/aquantia/atlantic/aq_vec.c
+index d281322d7dd29..f4774cf051c97 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_vec.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_vec.c
+@@ -362,9 +362,6 @@ unsigned int aq_vec_get_sw_stats(struct aq_vec_s *self, const unsigned int tc, u
+ {
+ 	unsigned int count;
+ 
+-	WARN_ONCE(!aq_vec_is_valid_tc(self, tc),
+-		  "Invalid tc %u (#rx=%u, #tx=%u)\n",
+-		  tc, self->rx_rings, self->tx_rings);
+ 	if (!aq_vec_is_valid_tc(self, tc))
+ 		return 0;
+ 
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c
+index 404cbf60d3f2f..65b9e5846be45 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c
+@@ -559,6 +559,11 @@ int hw_atl_utils_fw_rpc_wait(struct aq_hw_s *self,
+ 			goto err_exit;
+ 
+ 		if (fw.len == 0xFFFFU) {
++			if (sw.len > sizeof(self->rpc)) {
++				printk(KERN_INFO "Invalid sw len: %x\n", sw.len);
++				err = -EINVAL;
++				goto err_exit;
++			}
+ 			err = hw_atl_utils_fw_rpc_call(self, sw.len);
+ 			if (err < 0)
+ 				goto err_exit;
+@@ -567,6 +572,11 @@ int hw_atl_utils_fw_rpc_wait(struct aq_hw_s *self,
+ 
+ 	if (rpc) {
+ 		if (fw.len) {
++			if (fw.len > sizeof(self->rpc)) {
++				printk(KERN_INFO "Invalid fw len: %x\n", fw.len);
++				err = -EINVAL;
++				goto err_exit;
++			}
+ 			err =
+ 			hw_atl_utils_fw_downld_dwords(self,
+ 						      self->rpc_addr,
+@@ -857,12 +867,20 @@ static int hw_atl_fw1x_deinit(struct aq_hw_s *self)
+ int hw_atl_utils_update_stats(struct aq_hw_s *self)
+ {
+ 	struct aq_stats_s *cs = &self->curr_stats;
++	struct aq_stats_s curr_stats = *cs;
+ 	struct hw_atl_utils_mbox mbox;
++	bool corrupted_stats = false;
+ 
+ 	hw_atl_utils_mpi_read_stats(self, &mbox);
+ 
+-#define AQ_SDELTA(_N_) (self->curr_stats._N_ += \
+-			mbox.stats._N_ - self->last_stats._N_)
++#define AQ_SDELTA(_N_)  \
++do { \
++	if (!corrupted_stats && \
++	    ((s64)(mbox.stats._N_ - self->last_stats._N_)) >= 0) \
++		curr_stats._N_ += mbox.stats._N_ - self->last_stats._N_; \
++	else \
++		corrupted_stats = true; \
++} while (0)
+ 
+ 	if (self->aq_link_status.mbps) {
+ 		AQ_SDELTA(uprc);
+@@ -882,6 +900,9 @@ int hw_atl_utils_update_stats(struct aq_hw_s *self)
+ 		AQ_SDELTA(bbrc);
+ 		AQ_SDELTA(bbtc);
+ 		AQ_SDELTA(dpc);
++
++		if (!corrupted_stats)
++			*cs = curr_stats;
+ 	}
+ #undef AQ_SDELTA
+ 
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c
+index ee0c22d049354..05086f0040fd9 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c
+@@ -132,9 +132,6 @@ static enum hw_atl_fw2x_rate link_speed_mask_2fw2x_ratemask(u32 speed)
+ 	if (speed & AQ_NIC_RATE_5G)
+ 		rate |= FW2X_RATE_5G;
+ 
+-	if (speed & AQ_NIC_RATE_5GSR)
+-		rate |= FW2X_RATE_5G;
+-
+ 	if (speed & AQ_NIC_RATE_2G5)
+ 		rate |= FW2X_RATE_2G5;
+ 
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2.c
+index 92f64048bf691..c76ccdc77ba60 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2.c
+@@ -65,11 +65,25 @@ const struct aq_hw_caps_s hw_atl2_caps_aqc113 = {
+ 			  AQ_NIC_RATE_5G  |
+ 			  AQ_NIC_RATE_2G5 |
+ 			  AQ_NIC_RATE_1G  |
+-			  AQ_NIC_RATE_1G_HALF   |
+ 			  AQ_NIC_RATE_100M      |
+-			  AQ_NIC_RATE_100M_HALF |
+-			  AQ_NIC_RATE_10M       |
+-			  AQ_NIC_RATE_10M_HALF,
++			  AQ_NIC_RATE_10M,
++};
++
++const struct aq_hw_caps_s hw_atl2_caps_aqc115c = {
++	DEFAULT_BOARD_BASIC_CAPABILITIES,
++	.media_type = AQ_HW_MEDIA_TYPE_TP,
++	.link_speed_msk = AQ_NIC_RATE_2G5 |
++			  AQ_NIC_RATE_1G  |
++			  AQ_NIC_RATE_100M      |
++			  AQ_NIC_RATE_10M,
++};
++
++const struct aq_hw_caps_s hw_atl2_caps_aqc116c = {
++	DEFAULT_BOARD_BASIC_CAPABILITIES,
++	.media_type = AQ_HW_MEDIA_TYPE_TP,
++	.link_speed_msk = AQ_NIC_RATE_1G  |
++			  AQ_NIC_RATE_100M      |
++			  AQ_NIC_RATE_10M,
+ };
+ 
+ static u32 hw_atl2_sem_act_rslvr_get(struct aq_hw_s *self)
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2.h b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2.h
+index de8723f1c28a1..346f0dc9912e5 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2.h
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2.h
+@@ -9,6 +9,8 @@
+ #include "aq_common.h"
+ 
+ extern const struct aq_hw_caps_s hw_atl2_caps_aqc113;
++extern const struct aq_hw_caps_s hw_atl2_caps_aqc115c;
++extern const struct aq_hw_caps_s hw_atl2_caps_aqc116c;
+ extern const struct aq_hw_ops hw_atl2_ops;
+ 
+ #endif /* HW_ATL2_H */
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils.h b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils.h
+index b66fa346581ce..6bad64c77b87c 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils.h
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils.h
+@@ -239,7 +239,8 @@ struct version_s {
+ 		u8 minor;
+ 		u16 build;
+ 	} phy;
+-	u32 rsvd;
++	u32 drv_iface_ver:4;
++	u32 rsvd:28;
+ };
+ 
+ struct link_status_s {
+@@ -424,7 +425,7 @@ struct cable_diag_status_s {
+ 	u16 rsvd2;
+ };
+ 
+-struct statistics_s {
++struct statistics_a0_s {
+ 	struct {
+ 		u32 link_up;
+ 		u32 link_down;
+@@ -457,6 +458,33 @@ struct statistics_s {
+ 	u32 reserve_fw_gap;
+ };
+ 
++struct __packed statistics_b0_s {
++	u64 rx_good_octets;
++	u64 rx_pause_frames;
++	u64 rx_good_frames;
++	u64 rx_errors;
++	u64 rx_unicast_frames;
++	u64 rx_multicast_frames;
++	u64 rx_broadcast_frames;
++
++	u64 tx_good_octets;
++	u64 tx_pause_frames;
++	u64 tx_good_frames;
++	u64 tx_errors;
++	u64 tx_unicast_frames;
++	u64 tx_multicast_frames;
++	u64 tx_broadcast_frames;
++
++	u32 main_loop_cycles;
++};
++
++struct __packed statistics_s {
++	union __packed {
++		struct statistics_a0_s a0;
++		struct statistics_b0_s b0;
++	};
++};
++
+ struct filter_caps_s {
+ 	u8 l2_filters_base_index:6;
+ 	u8 flexible_filter_mask:2;
+@@ -545,7 +573,7 @@ struct management_status_s {
+ 	u32 rsvd5;
+ };
+ 
+-struct fw_interface_out {
++struct __packed fw_interface_out {
+ 	struct transaction_counter_s transaction_id;
+ 	struct version_s version;
+ 	struct link_status_s link_status;
+@@ -569,7 +597,6 @@ struct fw_interface_out {
+ 	struct core_dump_s core_dump;
+ 	u32 rsvd11;
+ 	struct statistics_s stats;
+-	u32 rsvd12;
+ 	struct filter_caps_s filter_caps;
+ 	struct device_caps_s device_caps;
+ 	u32 rsvd13;
+@@ -592,6 +619,9 @@ struct fw_interface_out {
+ #define  AQ_HOST_MODE_LOW_POWER    3U
+ #define  AQ_HOST_MODE_SHUTDOWN     4U
+ 
++#define  AQ_A2_FW_INTERFACE_A0     0
++#define  AQ_A2_FW_INTERFACE_B0     1
++
+ int hw_atl2_utils_initfw(struct aq_hw_s *self, const struct aq_fw_ops **fw_ops);
+ 
+ int hw_atl2_utils_soft_reset(struct aq_hw_s *self);
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c
+index dd259c8f2f4f3..58d426dda3edb 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c
+@@ -84,7 +84,7 @@ static int hw_atl2_shared_buffer_read_block(struct aq_hw_s *self,
+ 			if (cnt > AQ_A2_FW_READ_TRY_MAX)
+ 				return -ETIME;
+ 			if (tid1.transaction_cnt_a != tid1.transaction_cnt_b)
+-				udelay(1);
++				mdelay(1);
+ 		} while (tid1.transaction_cnt_a != tid1.transaction_cnt_b);
+ 
+ 		hw_atl2_mif_shared_buf_read(self, offset, (u32 *)data, dwords);
+@@ -154,7 +154,7 @@ static void a2_link_speed_mask2fw(u32 speed,
+ {
+ 	link_options->rate_10G = !!(speed & AQ_NIC_RATE_10G);
+ 	link_options->rate_5G = !!(speed & AQ_NIC_RATE_5G);
+-	link_options->rate_N5G = !!(speed & AQ_NIC_RATE_5GSR);
++	link_options->rate_N5G = link_options->rate_5G;
+ 	link_options->rate_2P5G = !!(speed & AQ_NIC_RATE_2G5);
+ 	link_options->rate_N2P5G = link_options->rate_2P5G;
+ 	link_options->rate_1G = !!(speed & AQ_NIC_RATE_1G);
+@@ -192,8 +192,6 @@ static u32 a2_fw_lkp_to_mask(struct lkp_link_caps_s *lkp_link_caps)
+ 		rate |= AQ_NIC_RATE_10G;
+ 	if (lkp_link_caps->rate_5G)
+ 		rate |= AQ_NIC_RATE_5G;
+-	if (lkp_link_caps->rate_N5G)
+-		rate |= AQ_NIC_RATE_5GSR;
+ 	if (lkp_link_caps->rate_2P5G)
+ 		rate |= AQ_NIC_RATE_2G5;
+ 	if (lkp_link_caps->rate_1G)
+@@ -335,15 +333,22 @@ static int aq_a2_fw_get_mac_permanent(struct aq_hw_s *self, u8 *mac)
+ 	return 0;
+ }
+ 
+-static int aq_a2_fw_update_stats(struct aq_hw_s *self)
++static void aq_a2_fill_a0_stats(struct aq_hw_s *self,
++				struct statistics_s *stats)
+ {
+ 	struct hw_atl2_priv *priv = (struct hw_atl2_priv *)self->priv;
+-	struct statistics_s stats;
+-
+-	hw_atl2_shared_buffer_read_safe(self, stats, &stats);
+-
+-#define AQ_SDELTA(_N_, _F_) (self->curr_stats._N_ += \
+-			stats.msm._F_ - priv->last_stats.msm._F_)
++	struct aq_stats_s *cs = &self->curr_stats;
++	struct aq_stats_s curr_stats = *cs;
++	bool corrupted_stats = false;
++
++#define AQ_SDELTA(_N, _F)  \
++do { \
++	if (!corrupted_stats && \
++	    ((s64)(stats->a0.msm._F - priv->last_stats.a0.msm._F)) >= 0) \
++		curr_stats._N += stats->a0.msm._F - priv->last_stats.a0.msm._F;\
++	else \
++		corrupted_stats = true; \
++} while (0)
+ 
+ 	if (self->aq_link_status.mbps) {
+ 		AQ_SDELTA(uprc, rx_unicast_frames);
+@@ -362,17 +367,76 @@ static int aq_a2_fw_update_stats(struct aq_hw_s *self)
+ 		AQ_SDELTA(mbtc, tx_multicast_octets);
+ 		AQ_SDELTA(bbrc, rx_broadcast_octets);
+ 		AQ_SDELTA(bbtc, tx_broadcast_octets);
++
++		if (!corrupted_stats)
++			*cs = curr_stats;
+ 	}
+ #undef AQ_SDELTA
+-	self->curr_stats.dma_pkt_rc =
+-		hw_atl_stats_rx_dma_good_pkt_counter_get(self);
+-	self->curr_stats.dma_pkt_tc =
+-		hw_atl_stats_tx_dma_good_pkt_counter_get(self);
+-	self->curr_stats.dma_oct_rc =
+-		hw_atl_stats_rx_dma_good_octet_counter_get(self);
+-	self->curr_stats.dma_oct_tc =
+-		hw_atl_stats_tx_dma_good_octet_counter_get(self);
+-	self->curr_stats.dpc = hw_atl_rpb_rx_dma_drop_pkt_cnt_get(self);
++
++}
++
++static void aq_a2_fill_b0_stats(struct aq_hw_s *self,
++				struct statistics_s *stats)
++{
++	struct hw_atl2_priv *priv = (struct hw_atl2_priv *)self->priv;
++	struct aq_stats_s *cs = &self->curr_stats;
++	struct aq_stats_s curr_stats = *cs;
++	bool corrupted_stats = false;
++
++#define AQ_SDELTA(_N, _F)  \
++do { \
++	if (!corrupted_stats && \
++	    ((s64)(stats->b0._F - priv->last_stats.b0._F)) >= 0) \
++		curr_stats._N += stats->b0._F - priv->last_stats.b0._F; \
++	else \
++		corrupted_stats = true; \
++} while (0)
++
++	if (self->aq_link_status.mbps) {
++		AQ_SDELTA(uprc, rx_unicast_frames);
++		AQ_SDELTA(mprc, rx_multicast_frames);
++		AQ_SDELTA(bprc, rx_broadcast_frames);
++		AQ_SDELTA(erpr, rx_errors);
++		AQ_SDELTA(brc, rx_good_octets);
++
++		AQ_SDELTA(uptc, tx_unicast_frames);
++		AQ_SDELTA(mptc, tx_multicast_frames);
++		AQ_SDELTA(bptc, tx_broadcast_frames);
++		AQ_SDELTA(erpt, tx_errors);
++		AQ_SDELTA(btc, tx_good_octets);
++
++		if (!corrupted_stats)
++			*cs = curr_stats;
++	}
++#undef AQ_SDELTA
++}
++
++static int aq_a2_fw_update_stats(struct aq_hw_s *self)
++{
++	struct hw_atl2_priv *priv = (struct hw_atl2_priv *)self->priv;
++	struct aq_stats_s *cs = &self->curr_stats;
++	struct statistics_s stats;
++	struct version_s version;
++	int err;
++
++	err = hw_atl2_shared_buffer_read_safe(self, version, &version);
++	if (err)
++		return err;
++
++	err = hw_atl2_shared_buffer_read_safe(self, stats, &stats);
++	if (err)
++		return err;
++
++	if (version.drv_iface_ver == AQ_A2_FW_INTERFACE_A0)
++		aq_a2_fill_a0_stats(self, &stats);
++	else
++		aq_a2_fill_b0_stats(self, &stats);
++
++	cs->dma_pkt_rc = hw_atl_stats_rx_dma_good_pkt_counter_get(self);
++	cs->dma_pkt_tc = hw_atl_stats_tx_dma_good_pkt_counter_get(self);
++	cs->dma_oct_rc = hw_atl_stats_rx_dma_good_octet_counter_get(self);
++	cs->dma_oct_tc = hw_atl_stats_tx_dma_good_octet_counter_get(self);
++	cs->dpc = hw_atl_rpb_rx_dma_drop_pkt_cnt_get(self);
+ 
+ 	memcpy(&priv->last_stats, &stats, sizeof(stats));
+ 
+@@ -499,9 +563,9 @@ u32 hw_atl2_utils_get_fw_version(struct aq_hw_s *self)
+ 	hw_atl2_shared_buffer_read_safe(self, version, &version);
+ 
+ 	/* A2 FW version is stored in reverse order */
+-	return version.mac.major << 24 |
+-	       version.mac.minor << 16 |
+-	       version.mac.build;
++	return version.bundle.major << 24 |
++	       version.bundle.minor << 16 |
++	       version.bundle.build;
+ }
+ 
+ int hw_atl2_utils_get_action_resolve_table_caps(struct aq_hw_s *self,
+diff --git a/drivers/net/ethernet/dec/tulip/de4x5.c b/drivers/net/ethernet/dec/tulip/de4x5.c
+index 683e328b5461d..8edd394bc3358 100644
+--- a/drivers/net/ethernet/dec/tulip/de4x5.c
++++ b/drivers/net/ethernet/dec/tulip/de4x5.c
+@@ -4706,6 +4706,10 @@ type3_infoblock(struct net_device *dev, u_char count, u_char *p)
+         lp->ibn = 3;
+         lp->active = *p++;
+ 	if (MOTO_SROM_BUG) lp->active = 0;
++	/* if (MOTO_SROM_BUG) statement indicates lp->active could
++	 * be 8 (i.e. the size of array lp->phy) */
++	if (WARN_ON(lp->active >= ARRAY_SIZE(lp->phy)))
++		return -EINVAL;
+ 	lp->phy[lp->active].gep = (*p ? p : NULL); p += (2 * (*p) + 1);
+ 	lp->phy[lp->active].rst = (*p ? p : NULL); p += (2 * (*p) + 1);
+ 	lp->phy[lp->active].mc  = get_unaligned_le16(p); p += 2;
+@@ -4997,19 +5001,23 @@ mii_get_phy(struct net_device *dev)
+ 	}
+ 	if ((j == limit) && (i < DE4X5_MAX_MII)) {
+ 	    for (k=0; k < DE4X5_MAX_PHY && lp->phy[k].id; k++);
+-	    lp->phy[k].addr = i;
+-	    lp->phy[k].id = id;
+-	    lp->phy[k].spd.reg = GENERIC_REG;      /* ANLPA register         */
+-	    lp->phy[k].spd.mask = GENERIC_MASK;    /* 100Mb/s technologies   */
+-	    lp->phy[k].spd.value = GENERIC_VALUE;  /* TX & T4, H/F Duplex    */
+-	    lp->mii_cnt++;
+-	    lp->active++;
+-	    printk("%s: Using generic MII device control. If the board doesn't operate,\nplease mail the following dump to the author:\n", dev->name);
+-	    j = de4x5_debug;
+-	    de4x5_debug |= DEBUG_MII;
+-	    de4x5_dbg_mii(dev, k);
+-	    de4x5_debug = j;
+-	    printk("\n");
++	    if (k < DE4X5_MAX_PHY) {
++		lp->phy[k].addr = i;
++		lp->phy[k].id = id;
++		lp->phy[k].spd.reg = GENERIC_REG;      /* ANLPA register         */
++		lp->phy[k].spd.mask = GENERIC_MASK;    /* 100Mb/s technologies   */
++		lp->phy[k].spd.value = GENERIC_VALUE;  /* TX & T4, H/F Duplex    */
++		lp->mii_cnt++;
++		lp->active++;
++		printk("%s: Using generic MII device control. If the board doesn't operate,\nplease mail the following dump to the author:\n", dev->name);
++		j = de4x5_debug;
++		de4x5_debug |= DEBUG_MII;
++		de4x5_dbg_mii(dev, k);
++		de4x5_debug = j;
++		printk("\n");
++	    } else {
++		goto purgatory;
++	    }
+ 	}
+     }
+   purgatory:
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index a4ef35216e2f7..f06d88c471d0f 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -4432,6 +4432,8 @@ static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
+ 
+ 	fsl_mc_portal_free(priv->mc_io);
+ 
++	destroy_workqueue(priv->dpaa2_ptp_wq);
++
+ 	dev_dbg(net_dev->dev.parent, "Removed interface %s\n", net_dev->name);
+ 
+ 	free_netdev(net_dev);
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
+index a9aca8c24e90d..aa87e4d121532 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
+@@ -400,6 +400,10 @@ static void hns_dsaf_ge_srst_by_port(struct dsaf_device *dsaf_dev, u32 port,
+ 		return;
+ 
+ 	if (!HNS_DSAF_IS_DEBUG(dsaf_dev)) {
++		/* DSAF_MAX_PORT_NUM is 6, but DSAF_GE_NUM is 8.
++		   We need check to prevent array overflow */
++		if (port >= DSAF_MAX_PORT_NUM)
++			return;
+ 		reg_val_1  = 0x1 << port;
+ 		port_rst_off = dsaf_dev->mac_cb[port]->port_rst_off;
+ 		/* there is difference between V1 and V2 in register.*/
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index e220d44df2e65..f39841b0a248c 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -6918,7 +6918,7 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 
+ 	shared = num_present_cpus() - priv->nthreads;
+ 	if (shared > 0)
+-		bitmap_fill(&priv->lock_map,
++		bitmap_set(&priv->lock_map, 0,
+ 			    min_t(int, shared, MVPP2_MAX_THREADS));
+ 
+ 	for (i = 0; i < MVPP2_MAX_THREADS; i++) {
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+index 8999e9ce4f08e..a0551d6c2ffd6 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+@@ -2276,9 +2276,14 @@ int mlx4_en_try_alloc_resources(struct mlx4_en_priv *priv,
+ 				bool carry_xdp_prog)
+ {
+ 	struct bpf_prog *xdp_prog;
+-	int i, t;
++	int i, t, ret;
+ 
+-	mlx4_en_copy_priv(tmp, priv, prof);
++	ret = mlx4_en_copy_priv(tmp, priv, prof);
++	if (ret) {
++		en_warn(priv, "%s: mlx4_en_copy_priv() failed, return\n",
++			__func__);
++		return ret;
++	}
+ 
+ 	if (mlx4_en_alloc_resources(tmp)) {
+ 		en_warn(priv,
+diff --git a/drivers/net/ethernet/natsemi/xtsonic.c b/drivers/net/ethernet/natsemi/xtsonic.c
+index 28d9e98db81a8..33f0014b3c4bb 100644
+--- a/drivers/net/ethernet/natsemi/xtsonic.c
++++ b/drivers/net/ethernet/natsemi/xtsonic.c
+@@ -120,7 +120,7 @@ static const struct net_device_ops xtsonic_netdev_ops = {
+ 	.ndo_set_mac_address	= eth_mac_addr,
+ };
+ 
+-static int __init sonic_probe1(struct net_device *dev)
++static int sonic_probe1(struct net_device *dev)
+ {
+ 	unsigned int silicon_revision;
+ 	struct sonic_local *lp = netdev_priv(dev);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+index d51bac7ba5afa..bd06076803295 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+@@ -1077,8 +1077,14 @@ static int qlcnic_83xx_add_rings(struct qlcnic_adapter *adapter)
+ 	sds_mbx_size = sizeof(struct qlcnic_sds_mbx);
+ 	context_id = recv_ctx->context_id;
+ 	num_sds = adapter->drv_sds_rings - QLCNIC_MAX_SDS_RINGS;
+-	ahw->hw_ops->alloc_mbx_args(&cmd, adapter,
+-				    QLCNIC_CMD_ADD_RCV_RINGS);
++	err = ahw->hw_ops->alloc_mbx_args(&cmd, adapter,
++					QLCNIC_CMD_ADD_RCV_RINGS);
++	if (err) {
++		dev_err(&adapter->pdev->dev,
++			"Failed to alloc mbx args %d\n", err);
++		return err;
++	}
++
+ 	cmd.req.arg[1] = 0 | (num_sds << 8) | (context_id << 16);
+ 
+ 	/* set up status rings, mbx 2-81 */
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index a4f9912b36366..c666e990900b9 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -2128,7 +2128,7 @@ static int lan78xx_phy_init(struct lan78xx_net *dev)
+ 	if (dev->domain_data.phyirq > 0)
+ 		phydev->irq = dev->domain_data.phyirq;
+ 	else
+-		phydev->irq = 0;
++		phydev->irq = PHY_POLL;
+ 	netdev_dbg(dev->net, "phydev->irq = %d\n", phydev->irq);
+ 
+ 	/* set to AUTOMDIX */
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 71902706234cc..570e1d11ddc7a 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -497,6 +497,7 @@ static netdev_tx_t vrf_process_v6_outbound(struct sk_buff *skb,
+ 	/* strip the ethernet header added for pass through VRF device */
+ 	__skb_pull(skb, skb_network_offset(skb));
+ 
++	memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
+ 	ret = vrf_ip6_local_out(net, skb->sk, skb);
+ 	if (unlikely(net_xmit_eval(ret)))
+ 		dev->stats.tx_errors++;
+@@ -580,6 +581,7 @@ static netdev_tx_t vrf_process_v4_outbound(struct sk_buff *skb,
+ 					       RT_SCOPE_LINK);
+ 	}
+ 
++	memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+ 	ret = vrf_ip_local_out(dev_net(skb_dst(skb)->dev), skb->sk, skb);
+ 	if (unlikely(net_xmit_eval(ret)))
+ 		vrf_dev->stats.tx_errors++;
+diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
+index b7197e80f2264..9a4c8ff32d9dd 100644
+--- a/drivers/net/wireguard/allowedips.c
++++ b/drivers/net/wireguard/allowedips.c
+@@ -163,7 +163,7 @@ static bool node_placement(struct allowedips_node __rcu *trie, const u8 *key,
+ 	return exact;
+ }
+ 
+-static inline void connect_node(struct allowedips_node **parent, u8 bit, struct allowedips_node *node)
++static inline void connect_node(struct allowedips_node __rcu **parent, u8 bit, struct allowedips_node *node)
+ {
+ 	node->parent_bit_packed = (unsigned long)parent | bit;
+ 	rcu_assign_pointer(*parent, node);
+diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
+index e01ab0742738a..e189eb95678d8 100644
+--- a/drivers/net/wireguard/device.c
++++ b/drivers/net/wireguard/device.c
+@@ -98,6 +98,7 @@ static int wg_stop(struct net_device *dev)
+ {
+ 	struct wg_device *wg = netdev_priv(dev);
+ 	struct wg_peer *peer;
++	struct sk_buff *skb;
+ 
+ 	mutex_lock(&wg->device_update_lock);
+ 	list_for_each_entry(peer, &wg->peer_list, peer_list) {
+@@ -108,7 +109,9 @@ static int wg_stop(struct net_device *dev)
+ 		wg_noise_reset_last_sent_handshake(&peer->last_sent_handshake);
+ 	}
+ 	mutex_unlock(&wg->device_update_lock);
+-	skb_queue_purge(&wg->incoming_handshakes);
++	while ((skb = ptr_ring_consume(&wg->handshake_queue.ring)) != NULL)
++		kfree_skb(skb);
++	atomic_set(&wg->handshake_queue_len, 0);
+ 	wg_socket_reinit(wg, NULL, NULL);
+ 	return 0;
+ }
+@@ -235,14 +238,13 @@ static void wg_destruct(struct net_device *dev)
+ 	destroy_workqueue(wg->handshake_receive_wq);
+ 	destroy_workqueue(wg->handshake_send_wq);
+ 	destroy_workqueue(wg->packet_crypt_wq);
+-	wg_packet_queue_free(&wg->decrypt_queue);
+-	wg_packet_queue_free(&wg->encrypt_queue);
++	wg_packet_queue_free(&wg->handshake_queue, true);
++	wg_packet_queue_free(&wg->decrypt_queue, false);
++	wg_packet_queue_free(&wg->encrypt_queue, false);
+ 	rcu_barrier(); /* Wait for all the peers to be actually freed. */
+ 	wg_ratelimiter_uninit();
+ 	memzero_explicit(&wg->static_identity, sizeof(wg->static_identity));
+-	skb_queue_purge(&wg->incoming_handshakes);
+ 	free_percpu(dev->tstats);
+-	free_percpu(wg->incoming_handshakes_worker);
+ 	kvfree(wg->index_hashtable);
+ 	kvfree(wg->peer_hashtable);
+ 	mutex_unlock(&wg->device_update_lock);
+@@ -298,7 +300,6 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
+ 	init_rwsem(&wg->static_identity.lock);
+ 	mutex_init(&wg->socket_update_lock);
+ 	mutex_init(&wg->device_update_lock);
+-	skb_queue_head_init(&wg->incoming_handshakes);
+ 	wg_allowedips_init(&wg->peer_allowedips);
+ 	wg_cookie_checker_init(&wg->cookie_checker, wg);
+ 	INIT_LIST_HEAD(&wg->peer_list);
+@@ -316,16 +317,10 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
+ 	if (!dev->tstats)
+ 		goto err_free_index_hashtable;
+ 
+-	wg->incoming_handshakes_worker =
+-		wg_packet_percpu_multicore_worker_alloc(
+-				wg_packet_handshake_receive_worker, wg);
+-	if (!wg->incoming_handshakes_worker)
+-		goto err_free_tstats;
+-
+ 	wg->handshake_receive_wq = alloc_workqueue("wg-kex-%s",
+ 			WQ_CPU_INTENSIVE | WQ_FREEZABLE, 0, dev->name);
+ 	if (!wg->handshake_receive_wq)
+-		goto err_free_incoming_handshakes;
++		goto err_free_tstats;
+ 
+ 	wg->handshake_send_wq = alloc_workqueue("wg-kex-%s",
+ 			WQ_UNBOUND | WQ_FREEZABLE, 0, dev->name);
+@@ -347,10 +342,15 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
+ 	if (ret < 0)
+ 		goto err_free_encrypt_queue;
+ 
+-	ret = wg_ratelimiter_init();
++	ret = wg_packet_queue_init(&wg->handshake_queue, wg_packet_handshake_receive_worker,
++				   MAX_QUEUED_INCOMING_HANDSHAKES);
+ 	if (ret < 0)
+ 		goto err_free_decrypt_queue;
+ 
++	ret = wg_ratelimiter_init();
++	if (ret < 0)
++		goto err_free_handshake_queue;
++
+ 	ret = register_netdevice(dev);
+ 	if (ret < 0)
+ 		goto err_uninit_ratelimiter;
+@@ -367,18 +367,18 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
+ 
+ err_uninit_ratelimiter:
+ 	wg_ratelimiter_uninit();
++err_free_handshake_queue:
++	wg_packet_queue_free(&wg->handshake_queue, false);
+ err_free_decrypt_queue:
+-	wg_packet_queue_free(&wg->decrypt_queue);
++	wg_packet_queue_free(&wg->decrypt_queue, false);
+ err_free_encrypt_queue:
+-	wg_packet_queue_free(&wg->encrypt_queue);
++	wg_packet_queue_free(&wg->encrypt_queue, false);
+ err_destroy_packet_crypt:
+ 	destroy_workqueue(wg->packet_crypt_wq);
+ err_destroy_handshake_send:
+ 	destroy_workqueue(wg->handshake_send_wq);
+ err_destroy_handshake_receive:
+ 	destroy_workqueue(wg->handshake_receive_wq);
+-err_free_incoming_handshakes:
+-	free_percpu(wg->incoming_handshakes_worker);
+ err_free_tstats:
+ 	free_percpu(dev->tstats);
+ err_free_index_hashtable:
+@@ -398,6 +398,7 @@ static struct rtnl_link_ops link_ops __read_mostly = {
+ static void wg_netns_pre_exit(struct net *net)
+ {
+ 	struct wg_device *wg;
++	struct wg_peer *peer;
+ 
+ 	rtnl_lock();
+ 	list_for_each_entry(wg, &device_list, device_list) {
+@@ -407,6 +408,8 @@ static void wg_netns_pre_exit(struct net *net)
+ 			mutex_lock(&wg->device_update_lock);
+ 			rcu_assign_pointer(wg->creating_net, NULL);
+ 			wg_socket_reinit(wg, NULL, NULL);
++			list_for_each_entry(peer, &wg->peer_list, peer_list)
++				wg_socket_clear_peer_endpoint_src(peer);
+ 			mutex_unlock(&wg->device_update_lock);
+ 		}
+ 	}
+diff --git a/drivers/net/wireguard/device.h b/drivers/net/wireguard/device.h
+index 854bc3d97150e..43c7cebbf50b0 100644
+--- a/drivers/net/wireguard/device.h
++++ b/drivers/net/wireguard/device.h
+@@ -39,21 +39,18 @@ struct prev_queue {
+ 
+ struct wg_device {
+ 	struct net_device *dev;
+-	struct crypt_queue encrypt_queue, decrypt_queue;
++	struct crypt_queue encrypt_queue, decrypt_queue, handshake_queue;
+ 	struct sock __rcu *sock4, *sock6;
+ 	struct net __rcu *creating_net;
+ 	struct noise_static_identity static_identity;
+-	struct workqueue_struct *handshake_receive_wq, *handshake_send_wq;
+-	struct workqueue_struct *packet_crypt_wq;
+-	struct sk_buff_head incoming_handshakes;
+-	int incoming_handshake_cpu;
+-	struct multicore_worker __percpu *incoming_handshakes_worker;
++	struct workqueue_struct *packet_crypt_wq,*handshake_receive_wq, *handshake_send_wq;
+ 	struct cookie_checker cookie_checker;
+ 	struct pubkey_hashtable *peer_hashtable;
+ 	struct index_hashtable *index_hashtable;
+ 	struct allowedips peer_allowedips;
+ 	struct mutex device_update_lock, socket_update_lock;
+ 	struct list_head device_list, peer_list;
++	atomic_t handshake_queue_len;
+ 	unsigned int num_peers, device_update_gen;
+ 	u32 fwmark;
+ 	u16 incoming_port;
+diff --git a/drivers/net/wireguard/queueing.c b/drivers/net/wireguard/queueing.c
+index 48e7b982a3073..1de413b19e342 100644
+--- a/drivers/net/wireguard/queueing.c
++++ b/drivers/net/wireguard/queueing.c
+@@ -38,11 +38,11 @@ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
+ 	return 0;
+ }
+ 
+-void wg_packet_queue_free(struct crypt_queue *queue)
++void wg_packet_queue_free(struct crypt_queue *queue, bool purge)
+ {
+ 	free_percpu(queue->worker);
+-	WARN_ON(!__ptr_ring_empty(&queue->ring));
+-	ptr_ring_cleanup(&queue->ring, NULL);
++	WARN_ON(!purge && !__ptr_ring_empty(&queue->ring));
++	ptr_ring_cleanup(&queue->ring, purge ? (void(*)(void*))kfree_skb : NULL);
+ }
+ 
+ #define NEXT(skb) ((skb)->prev)
+diff --git a/drivers/net/wireguard/queueing.h b/drivers/net/wireguard/queueing.h
+index 4ef2944a68bc9..e2388107f7fdc 100644
+--- a/drivers/net/wireguard/queueing.h
++++ b/drivers/net/wireguard/queueing.h
+@@ -23,7 +23,7 @@ struct sk_buff;
+ /* queueing.c APIs: */
+ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
+ 			 unsigned int len);
+-void wg_packet_queue_free(struct crypt_queue *queue);
++void wg_packet_queue_free(struct crypt_queue *queue, bool purge);
+ struct multicore_worker __percpu *
+ wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr);
+ 
+diff --git a/drivers/net/wireguard/ratelimiter.c b/drivers/net/wireguard/ratelimiter.c
+index 3fedd1d21f5ee..dd55e5c26f468 100644
+--- a/drivers/net/wireguard/ratelimiter.c
++++ b/drivers/net/wireguard/ratelimiter.c
+@@ -176,12 +176,12 @@ int wg_ratelimiter_init(void)
+ 			(1U << 14) / sizeof(struct hlist_head)));
+ 	max_entries = table_size * 8;
+ 
+-	table_v4 = kvzalloc(table_size * sizeof(*table_v4), GFP_KERNEL);
++	table_v4 = kvcalloc(table_size, sizeof(*table_v4), GFP_KERNEL);
+ 	if (unlikely(!table_v4))
+ 		goto err_kmemcache;
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+-	table_v6 = kvzalloc(table_size * sizeof(*table_v6), GFP_KERNEL);
++	table_v6 = kvcalloc(table_size, sizeof(*table_v6), GFP_KERNEL);
+ 	if (unlikely(!table_v6)) {
+ 		kvfree(table_v4);
+ 		goto err_kmemcache;
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
+index 7dc84bcca2613..7b8df406c7737 100644
+--- a/drivers/net/wireguard/receive.c
++++ b/drivers/net/wireguard/receive.c
+@@ -116,8 +116,8 @@ static void wg_receive_handshake_packet(struct wg_device *wg,
+ 		return;
+ 	}
+ 
+-	under_load = skb_queue_len(&wg->incoming_handshakes) >=
+-		     MAX_QUEUED_INCOMING_HANDSHAKES / 8;
++	under_load = atomic_read(&wg->handshake_queue_len) >=
++			MAX_QUEUED_INCOMING_HANDSHAKES / 8;
+ 	if (under_load) {
+ 		last_under_load = ktime_get_coarse_boottime_ns();
+ 	} else if (last_under_load) {
+@@ -212,13 +212,14 @@ static void wg_receive_handshake_packet(struct wg_device *wg,
+ 
+ void wg_packet_handshake_receive_worker(struct work_struct *work)
+ {
+-	struct wg_device *wg = container_of(work, struct multicore_worker,
+-					    work)->ptr;
++	struct crypt_queue *queue = container_of(work, struct multicore_worker, work)->ptr;
++	struct wg_device *wg = container_of(queue, struct wg_device, handshake_queue);
+ 	struct sk_buff *skb;
+ 
+-	while ((skb = skb_dequeue(&wg->incoming_handshakes)) != NULL) {
++	while ((skb = ptr_ring_consume_bh(&queue->ring)) != NULL) {
+ 		wg_receive_handshake_packet(wg, skb);
+ 		dev_kfree_skb(skb);
++		atomic_dec(&wg->handshake_queue_len);
+ 		cond_resched();
+ 	}
+ }
+@@ -553,22 +554,28 @@ void wg_packet_receive(struct wg_device *wg, struct sk_buff *skb)
+ 	case cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION):
+ 	case cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE):
+ 	case cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE): {
+-		int cpu;
+-
+-		if (skb_queue_len(&wg->incoming_handshakes) >
+-			    MAX_QUEUED_INCOMING_HANDSHAKES ||
+-		    unlikely(!rng_is_initialized())) {
++		int cpu, ret = -EBUSY;
++
++		if (unlikely(!rng_is_initialized()))
++			goto drop;
++		if (atomic_read(&wg->handshake_queue_len) > MAX_QUEUED_INCOMING_HANDSHAKES / 2) {
++			if (spin_trylock_bh(&wg->handshake_queue.ring.producer_lock)) {
++				ret = __ptr_ring_produce(&wg->handshake_queue.ring, skb);
++				spin_unlock_bh(&wg->handshake_queue.ring.producer_lock);
++			}
++		} else
++			ret = ptr_ring_produce_bh(&wg->handshake_queue.ring, skb);
++		if (ret) {
++	drop:
+ 			net_dbg_skb_ratelimited("%s: Dropping handshake packet from %pISpfsc\n",
+ 						wg->dev->name, skb);
+ 			goto err;
+ 		}
+-		skb_queue_tail(&wg->incoming_handshakes, skb);
+-		/* Queues up a call to packet_process_queued_handshake_
+-		 * packets(skb):
+-		 */
+-		cpu = wg_cpumask_next_online(&wg->incoming_handshake_cpu);
++		atomic_inc(&wg->handshake_queue_len);
++		cpu = wg_cpumask_next_online(&wg->handshake_queue.last_cpu);
++		/* Queues up a call to packet_process_queued_handshake_packets(skb): */
+ 		queue_work_on(cpu, wg->handshake_receive_wq,
+-			&per_cpu_ptr(wg->incoming_handshakes_worker, cpu)->work);
++			      &per_cpu_ptr(wg->handshake_queue.worker, cpu)->work);
+ 		break;
+ 	}
+ 	case cpu_to_le32(MESSAGE_DATA):
+diff --git a/drivers/net/wireguard/socket.c b/drivers/net/wireguard/socket.c
+index c8cd385d233b6..52b9bc83abcbc 100644
+--- a/drivers/net/wireguard/socket.c
++++ b/drivers/net/wireguard/socket.c
+@@ -308,7 +308,7 @@ void wg_socket_clear_peer_endpoint_src(struct wg_peer *peer)
+ {
+ 	write_lock_bh(&peer->endpoint_lock);
+ 	memset(&peer->endpoint.src6, 0, sizeof(peer->endpoint.src6));
+-	dst_cache_reset(&peer->endpoint_cache);
++	dst_cache_reset_now(&peer->endpoint_cache);
+ 	write_unlock_bh(&peer->endpoint_lock);
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index 9dcd2e990c9c1..be214f39f52be 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -1303,23 +1303,31 @@ _iwl_op_mode_start(struct iwl_drv *drv, struct iwlwifi_opmode_table *op)
+ 	const struct iwl_op_mode_ops *ops = op->ops;
+ 	struct dentry *dbgfs_dir = NULL;
+ 	struct iwl_op_mode *op_mode = NULL;
++	int retry, max_retry = !!iwlwifi_mod_params.fw_restart * IWL_MAX_INIT_RETRY;
++
++	for (retry = 0; retry <= max_retry; retry++) {
+ 
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+-	drv->dbgfs_op_mode = debugfs_create_dir(op->name,
+-						drv->dbgfs_drv);
+-	dbgfs_dir = drv->dbgfs_op_mode;
++		drv->dbgfs_op_mode = debugfs_create_dir(op->name,
++							drv->dbgfs_drv);
++		dbgfs_dir = drv->dbgfs_op_mode;
+ #endif
+ 
+-	op_mode = ops->start(drv->trans, drv->trans->cfg, &drv->fw, dbgfs_dir);
++		op_mode = ops->start(drv->trans, drv->trans->cfg,
++				     &drv->fw, dbgfs_dir);
++
++		if (op_mode)
++			return op_mode;
++
++		IWL_ERR(drv, "retry init count %d\n", retry);
+ 
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+-	if (!op_mode) {
+ 		debugfs_remove_recursive(drv->dbgfs_op_mode);
+ 		drv->dbgfs_op_mode = NULL;
+-	}
+ #endif
++	}
+ 
+-	return op_mode;
++	return NULL;
+ }
+ 
+ static void _iwl_op_mode_stop(struct iwl_drv *drv)
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.h b/drivers/net/wireless/intel/iwlwifi/iwl-drv.h
+index 8938a64679963..a6e9bc56f7ddd 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.h
+@@ -144,4 +144,7 @@ void iwl_drv_stop(struct iwl_drv *drv);
+ #define IWL_EXPORT_SYMBOL(sym)
+ #endif
+ 
++/* max retry for init flow */
++#define IWL_MAX_INIT_RETRY 2
++
+ #endif /* __iwl_drv_h__ */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 6f301ac8cce20..81cc85a97eb20 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -71,6 +71,7 @@
+ #include <net/ieee80211_radiotap.h>
+ #include <net/tcp.h>
+ 
++#include "iwl-drv.h"
+ #include "iwl-op-mode.h"
+ #include "iwl-io.h"
+ #include "mvm.h"
+@@ -1163,9 +1164,30 @@ static int iwl_mvm_mac_start(struct ieee80211_hw *hw)
+ {
+ 	struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ 	int ret;
++	int retry, max_retry = 0;
+ 
+ 	mutex_lock(&mvm->mutex);
+-	ret = __iwl_mvm_mac_start(mvm);
++
++	/* we are starting the mac not in error flow, and restart is enabled */
++	if (!test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status) &&
++	    iwlwifi_mod_params.fw_restart) {
++		max_retry = IWL_MAX_INIT_RETRY;
++		/*
++		 * This will prevent mac80211 recovery flows to trigger during
++		 * init failures
++		 */
++		set_bit(IWL_MVM_STATUS_STARTING, &mvm->status);
++	}
++
++	for (retry = 0; retry <= max_retry; retry++) {
++		ret = __iwl_mvm_mac_start(mvm);
++		if (!ret)
++			break;
++
++		IWL_ERR(mvm, "mac start retry %d\n", retry);
++	}
++	clear_bit(IWL_MVM_STATUS_STARTING, &mvm->status);
++
+ 	mutex_unlock(&mvm->mutex);
+ 
+ 	return ret;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+index 7159d1da3e772..64f5a4cb3d3ac 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+@@ -1162,6 +1162,8 @@ struct iwl_mvm {
+  * @IWL_MVM_STATUS_FIRMWARE_RUNNING: firmware is running
+  * @IWL_MVM_STATUS_NEED_FLUSH_P2P: need to flush P2P bcast STA
+  * @IWL_MVM_STATUS_IN_D3: in D3 (or at least about to go into it)
++ * @IWL_MVM_STATUS_STARTING: starting mac,
++ *	used to disable restart flow while in STARTING state
+  */
+ enum iwl_mvm_status {
+ 	IWL_MVM_STATUS_HW_RFKILL,
+@@ -1173,6 +1175,7 @@ enum iwl_mvm_status {
+ 	IWL_MVM_STATUS_FIRMWARE_RUNNING,
+ 	IWL_MVM_STATUS_NEED_FLUSH_P2P,
+ 	IWL_MVM_STATUS_IN_D3,
++	IWL_MVM_STATUS_STARTING,
+ };
+ 
+ /* Keep track of completed init configuration */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 0be8ff30b13e6..7c61d179895b3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -1295,6 +1295,9 @@ void iwl_mvm_nic_restart(struct iwl_mvm *mvm, bool fw_error)
+ 	 */
+ 	if (!mvm->fw_restart && fw_error) {
+ 		iwl_fw_error_collect(&mvm->fwrt);
++	} else if (test_bit(IWL_MVM_STATUS_STARTING,
++			    &mvm->status)) {
++		IWL_ERR(mvm, "Starting mac, retry will be triggered anyway\n");
+ 	} else if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
+ 		struct iwl_mvm_reprobe *reprobe;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 7b6e9a5352b35..9a7f317a098fc 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -182,7 +182,7 @@ mt7915_get_phy_mode(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+ 		if (ht_cap->ht_supported)
+ 			mode |= PHY_MODE_GN;
+ 
+-		if (he_cap->has_he)
++		if (he_cap && he_cap->has_he)
+ 			mode |= PHY_MODE_AX_24G;
+ 	} else if (band == NL80211_BAND_5GHZ) {
+ 		mode |= PHY_MODE_A;
+@@ -193,7 +193,7 @@ mt7915_get_phy_mode(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+ 		if (vht_cap->vht_supported)
+ 			mode |= PHY_MODE_AC;
+ 
+-		if (he_cap->has_he)
++		if (he_cap && he_cap->has_he)
+ 			mode |= PHY_MODE_AX_5G;
+ 	}
+ 
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c b/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
+index e4473a5512415..74c3d8cb31002 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
+@@ -25,6 +25,9 @@ static bool rt2x00usb_check_usb_error(struct rt2x00_dev *rt2x00dev, int status)
+ 	if (status == -ENODEV || status == -ENOENT)
+ 		return true;
+ 
++	if (!test_bit(DEVICE_STATE_STARTED, &rt2x00dev->flags))
++		return false;
++
+ 	if (status == -EPROTO || status == -ETIMEDOUT)
+ 		rt2x00dev->num_proto_errs++;
+ 	else
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 2a313643e0388..d8d241344d22d 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -1170,15 +1170,6 @@ static int tpacpi_rfk_update_swstate(const struct tpacpi_rfk *tp_rfk)
+ 	return status;
+ }
+ 
+-/* Query FW and update rfkill sw state for all rfkill switches */
+-static void tpacpi_rfk_update_swstate_all(void)
+-{
+-	unsigned int i;
+-
+-	for (i = 0; i < TPACPI_RFK_SW_MAX; i++)
+-		tpacpi_rfk_update_swstate(tpacpi_rfkill_switches[i]);
+-}
+-
+ /*
+  * Sync the HW-blocking state of all rfkill switches,
+  * do notice it causes the rfkill core to schedule uevents
+@@ -3121,9 +3112,6 @@ static void tpacpi_send_radiosw_update(void)
+ 	if (wlsw == TPACPI_RFK_RADIO_OFF)
+ 		tpacpi_rfk_update_hwblock_state(true);
+ 
+-	/* Sync sw blocking state */
+-	tpacpi_rfk_update_swstate_all();
+-
+ 	/* Sync hw blocking state last if it is hw-unblocked */
+ 	if (wlsw == TPACPI_RFK_RADIO_ON)
+ 		tpacpi_rfk_update_hwblock_state(false);
+@@ -8805,6 +8793,7 @@ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
+ 	TPACPI_Q_LNV3('N', '2', 'E', TPACPI_FAN_2CTL),	/* P1 / X1 Extreme (1st gen) */
+ 	TPACPI_Q_LNV3('N', '2', 'O', TPACPI_FAN_2CTL),	/* P1 / X1 Extreme (2nd gen) */
+ 	TPACPI_Q_LNV3('N', '2', 'V', TPACPI_FAN_2CTL),	/* P1 / X1 Extreme (3nd gen) */
++	TPACPI_Q_LNV3('N', '4', '0', TPACPI_FAN_2CTL),	/* P1 / X1 Extreme (4nd gen) */
+ 	TPACPI_Q_LNV3('N', '3', '0', TPACPI_FAN_2CTL),	/* P15 (1st gen) / P15v (1st gen) */
+ 	TPACPI_Q_LNV3('N', '3', '2', TPACPI_FAN_2CTL),	/* X1 Carbon (9th gen) */
+ };
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 3f7fa8de36427..a5759d0e388a8 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -1909,12 +1909,12 @@ static void session_recovery_timedout(struct work_struct *work)
+ 	}
+ 	spin_unlock_irqrestore(&session->lock, flags);
+ 
+-	if (session->transport->session_recovery_timedout)
+-		session->transport->session_recovery_timedout(session);
+-
+ 	ISCSI_DBG_TRANS_SESSION(session, "Unblocking SCSI target\n");
+ 	scsi_target_unblock(&session->dev, SDEV_TRANSPORT_OFFLINE);
+ 	ISCSI_DBG_TRANS_SESSION(session, "Completed unblocking SCSI target\n");
++
++	if (session->transport->session_recovery_timedout)
++		session->transport->session_recovery_timedout(session);
+ }
+ 
+ static void __iscsi_unblock_session(struct work_struct *work)
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 17de8a9b991e9..d9e34ac376626 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -474,6 +474,8 @@ static void thermal_zone_device_init(struct thermal_zone_device *tz)
+ {
+ 	struct thermal_instance *pos;
+ 	tz->temperature = THERMAL_TEMP_INVALID;
++	tz->prev_low_trip = -INT_MAX;
++	tz->prev_high_trip = INT_MAX;
+ 	list_for_each_entry(pos, &tz->thermal_instances, tz_node)
+ 		pos->initialized = false;
+ }
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 58f718ed1bb98..019328d644d8b 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -1349,29 +1349,33 @@ pericom_do_set_divisor(struct uart_port *port, unsigned int baud,
+ {
+ 	int scr;
+ 	int lcr;
+-	int actual_baud;
+-	int tolerance;
+ 
+-	for (scr = 5 ; scr <= 15 ; scr++) {
+-		actual_baud = 921600 * 16 / scr;
+-		tolerance = actual_baud / 50;
++	for (scr = 16; scr > 4; scr--) {
++		unsigned int maxrate = port->uartclk / scr;
++		unsigned int divisor = max(maxrate / baud, 1U);
++		int delta = maxrate / divisor - baud;
+ 
+-		if ((baud < actual_baud + tolerance) &&
+-			(baud > actual_baud - tolerance)) {
++		if (baud > maxrate + baud / 50)
++			continue;
+ 
++		if (delta > baud / 50)
++			divisor++;
++
++		if (divisor > 0xffff)
++			continue;
++
++		/* Update delta due to possible divisor change */
++		delta = maxrate / divisor - baud;
++		if (abs(delta) < baud / 50) {
+ 			lcr = serial_port_in(port, UART_LCR);
+ 			serial_port_out(port, UART_LCR, lcr | 0x80);
+-
+-			serial_port_out(port, UART_DLL, 1);
+-			serial_port_out(port, UART_DLM, 0);
++			serial_port_out(port, UART_DLL, divisor & 0xff);
++			serial_port_out(port, UART_DLM, divisor >> 8 & 0xff);
+ 			serial_port_out(port, 2, 16 - scr);
+ 			serial_port_out(port, UART_LCR, lcr);
+ 			return;
+-		} else if (baud > actual_baud) {
+-			break;
+ 		}
+ 	}
+-	serial8250_do_set_divisor(port, baud, quot, quot_frac);
+ }
+ static int pci_pericom_setup(struct serial_private *priv,
+ 		  const struct pciserial_board *board,
+@@ -2317,12 +2321,19 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
+ 		.setup      = pci_pericom_setup_four_at_eight,
+ 	},
+ 	{
+-		.vendor     = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S,
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
+ 		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4,
+ 		.subvendor  = PCI_ANY_ID,
+ 		.subdevice  = PCI_ANY_ID,
+ 		.setup      = pci_pericom_setup_four_at_eight,
+ 	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup_four_at_eight,
++	},
+ 	{
+ 		.vendor     = PCI_VENDOR_ID_ACCESIO,
+ 		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_4,
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 110a19c5138e8..7c07ebb37b1b9 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2029,13 +2029,6 @@ void serial8250_do_set_mctrl(struct uart_port *port, unsigned int mctrl)
+ 	struct uart_8250_port *up = up_to_u8250p(port);
+ 	unsigned char mcr;
+ 
+-	if (port->rs485.flags & SER_RS485_ENABLED) {
+-		if (serial8250_in_MCR(up) & UART_MCR_RTS)
+-			mctrl |= TIOCM_RTS;
+-		else
+-			mctrl &= ~TIOCM_RTS;
+-	}
+-
+ 	mcr = serial8250_TIOCM_to_MCR(mctrl);
+ 
+ 	mcr = (mcr & up->mcr_mask) | up->mcr_force | up->mcr;
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 87dc3fc15694a..b3cddcdcbdad0 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -2791,6 +2791,7 @@ MODULE_DEVICE_TABLE(of, sbsa_uart_of_match);
+ 
+ static const struct acpi_device_id sbsa_uart_acpi_match[] = {
+ 	{ "ARMH0011", 0 },
++	{ "ARMHB000", 0 },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(acpi, sbsa_uart_acpi_match);
+diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c
+index 87f005e5d2aff..26bcbec5422e2 100644
+--- a/drivers/tty/serial/msm_serial.c
++++ b/drivers/tty/serial/msm_serial.c
+@@ -599,6 +599,9 @@ static void msm_start_rx_dma(struct msm_port *msm_port)
+ 	u32 val;
+ 	int ret;
+ 
++	if (IS_ENABLED(CONFIG_CONSOLE_POLL))
++		return;
++
+ 	if (!dma->chan)
+ 		return;
+ 
+diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
+index 26fa69609ee5b..c2be22c3b7d1b 100644
+--- a/drivers/tty/serial/serial-tegra.c
++++ b/drivers/tty/serial/serial-tegra.c
+@@ -1501,7 +1501,7 @@ static struct tegra_uart_chip_data tegra20_uart_chip_data = {
+ 	.fifo_mode_enable_status	= false,
+ 	.uart_max_port			= 5,
+ 	.max_dma_burst_bytes		= 4,
+-	.error_tolerance_low_range	= 0,
++	.error_tolerance_low_range	= -4,
+ 	.error_tolerance_high_range	= 4,
+ };
+ 
+@@ -1512,7 +1512,7 @@ static struct tegra_uart_chip_data tegra30_uart_chip_data = {
+ 	.fifo_mode_enable_status	= false,
+ 	.uart_max_port			= 5,
+ 	.max_dma_burst_bytes		= 4,
+-	.error_tolerance_low_range	= 0,
++	.error_tolerance_low_range	= -4,
+ 	.error_tolerance_high_range	= 4,
+ };
+ 
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index e6fb5077fe349..046bedca7b8f5 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1102,6 +1102,11 @@ uart_tiocmset(struct tty_struct *tty, unsigned int set, unsigned int clear)
+ 		goto out;
+ 
+ 	if (!tty_io_error(tty)) {
++		if (uport->rs485.flags & SER_RS485_ENABLED) {
++			set &= ~TIOCM_RTS;
++			clear &= ~TIOCM_RTS;
++		}
++
+ 		uart_update_mctrl(uport, set, clear);
+ 		ret = 0;
+ 	}
+@@ -1576,6 +1581,7 @@ static void uart_tty_port_shutdown(struct tty_port *port)
+ {
+ 	struct uart_state *state = container_of(port, struct uart_state, port);
+ 	struct uart_port *uport = uart_port_check(state);
++	char *buf;
+ 
+ 	/*
+ 	 * At this point, we stop accepting input.  To do this, we
+@@ -1597,8 +1603,18 @@ static void uart_tty_port_shutdown(struct tty_port *port)
+ 	 */
+ 	tty_port_set_suspended(port, 0);
+ 
+-	uart_change_pm(state, UART_PM_STATE_OFF);
++	/*
++	 * Free the transmit buffer.
++	 */
++	spin_lock_irq(&uport->lock);
++	buf = state->xmit.buf;
++	state->xmit.buf = NULL;
++	spin_unlock_irq(&uport->lock);
+ 
++	if (buf)
++		free_page((unsigned long)buf);
++
++	uart_change_pm(state, UART_PM_STATE_OFF);
+ }
+ 
+ static void uart_wait_until_sent(struct tty_struct *tty, int timeout)
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index a54a735b63843..61f686c5bd9c6 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -435,6 +435,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x1532, 0x0116), .driver_info =
+ 			USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL },
+ 
++	/* Lenovo Powered USB-C Travel Hub (4X90S92381, RTL8153 GigE) */
++	{ USB_DEVICE(0x17ef, 0x721e), .driver_info = USB_QUIRK_NO_LPM },
++
+ 	/* Lenovo ThinkCenter A630Z TI024Gen3 usb-audio */
+ 	{ USB_DEVICE(0x17ef, 0xa012), .driver_info =
+ 			USB_QUIRK_DISCONNECT_SUSPEND },
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 4512c4223392a..667a37f509828 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -342,7 +342,9 @@ static void xhci_handle_stopped_cmd_ring(struct xhci_hcd *xhci,
+ /* Must be called with xhci->lock held, releases and aquires lock back */
+ static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags)
+ {
+-	u32 temp_32;
++	struct xhci_segment *new_seg	= xhci->cmd_ring->deq_seg;
++	union xhci_trb *new_deq		= xhci->cmd_ring->dequeue;
++	u64 crcr;
+ 	int ret;
+ 
+ 	xhci_dbg(xhci, "Abort command ring\n");
+@@ -351,13 +353,18 @@ static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags)
+ 
+ 	/*
+ 	 * The control bits like command stop, abort are located in lower
+-	 * dword of the command ring control register. Limit the write
+-	 * to the lower dword to avoid corrupting the command ring pointer
+-	 * in case if the command ring is stopped by the time upper dword
+-	 * is written.
++	 * dword of the command ring control register.
++	 * Some controllers require all 64 bits to be written to abort the ring.
++	 * Make sure the upper dword is valid, pointing to the next command,
++	 * avoiding corrupting the command ring pointer in case the command ring
++	 * is stopped by the time the upper dword is written.
+ 	 */
+-	temp_32 = readl(&xhci->op_regs->cmd_ring);
+-	writel(temp_32 | CMD_RING_ABORT, &xhci->op_regs->cmd_ring);
++	next_trb(xhci, NULL, &new_seg, &new_deq);
++	if (trb_is_link(new_deq))
++		next_trb(xhci, NULL, &new_seg, &new_deq);
++
++	crcr = xhci_trb_virt_to_dma(new_seg, new_deq);
++	xhci_write_64(xhci, crcr | CMD_RING_ABORT, &xhci->op_regs->cmd_ring);
+ 
+ 	/* Section 4.6.1.2 of xHCI 1.0 spec says software should also time the
+ 	 * completion of the Command Abort operation. If CRR is not negated in 5
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 291d020427924..721d9c4ddc81f 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -3293,11 +3293,7 @@ static void run_state_machine(struct tcpm_port *port)
+ 				       tcpm_try_src(port) ? SRC_TRY
+ 							  : SNK_ATTACHED,
+ 				       0);
+-		else
+-			/* Wait for VBUS, but not forever */
+-			tcpm_set_state(port, PORT_RESET, PD_T_PS_SOURCE_ON);
+ 		break;
+-
+ 	case SRC_TRY:
+ 		port->try_src_count++;
+ 		tcpm_set_cc(port, tcpm_rp_cc(port));
+diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c
+index 5dc88a914b349..042e166d92682 100644
+--- a/drivers/video/console/vgacon.c
++++ b/drivers/video/console/vgacon.c
+@@ -370,11 +370,17 @@ static void vgacon_init(struct vc_data *c, int init)
+ 	struct uni_pagedir *p;
+ 
+ 	/*
+-	 * We cannot be loaded as a module, therefore init is always 1,
+-	 * but vgacon_init can be called more than once, and init will
+-	 * not be 1.
++	 * We cannot be loaded as a module, therefore init will be 1
++	 * if we are the default console, however if we are a fallback
++	 * console, for example if fbcon has failed registration, then
++	 * init will be 0, so we need to make sure our boot parameters
++	 * have been copied to the console structure for vgacon_resize
++	 * ultimately called by vc_resize.  Any subsequent calls to
++	 * vgacon_init init will have init set to 0 too.
+ 	 */
+ 	c->vc_can_do_color = vga_can_do_color;
++	c->vc_scan_lines = vga_scan_lines;
++	c->vc_font.height = c->vc_cell_height = vga_video_font_height;
+ 
+ 	/* set dimensions manually if init != 0 since vc_resize() will fail */
+ 	if (init) {
+@@ -383,8 +389,6 @@ static void vgacon_init(struct vc_data *c, int init)
+ 	} else
+ 		vc_resize(c, vga_video_num_columns, vga_video_num_lines);
+ 
+-	c->vc_scan_lines = vga_scan_lines;
+-	c->vc_font.height = c->vc_cell_height = vga_video_font_height;
+ 	c->vc_complement_mask = 0x7700;
+ 	if (vga_512_chars)
+ 		c->vc_hi_font_mask = 0x0800;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 9051bb47cbdd9..bab2091c81683 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3692,11 +3692,23 @@ static void btrfs_end_empty_barrier(struct bio *bio)
+  */
+ static void write_dev_flush(struct btrfs_device *device)
+ {
+-	struct request_queue *q = bdev_get_queue(device->bdev);
+ 	struct bio *bio = device->flush_bio;
+ 
++#ifndef CONFIG_BTRFS_FS_CHECK_INTEGRITY
++	/*
++	 * When a disk has write caching disabled, we skip submission of a bio
++	 * with flush and sync requests before writing the superblock, since
++	 * it's not needed. However when the integrity checker is enabled, this
++	 * results in reports that there are metadata blocks referred by a
++	 * superblock that were not properly flushed. So don't skip the bio
++	 * submission only when the integrity checker is enabled for the sake
++	 * of simplicity, since this is a debug tool and not meant for use in
++	 * non-debug builds.
++	 */
++	struct request_queue *q = bdev_get_queue(device->bdev);
+ 	if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags))
+ 		return;
++#endif
+ 
+ 	bio_reset(bio);
+ 	bio->bi_end_io = btrfs_end_empty_barrier;
+diff --git a/fs/file.c b/fs/file.c
+index 21c0893f2f1df..9d02352fa18c3 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -834,6 +834,10 @@ loop:
+ 			file = NULL;
+ 		else if (!get_file_rcu_many(file, refs))
+ 			goto loop;
++		else if (__fcheck_files(files, fd) != file) {
++			fput_many(file, refs);
++			goto loop;
++		}
+ 	}
+ 	rcu_read_unlock();
+ 
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index a1f9dde33058f..b34c02985d9d2 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -940,7 +940,7 @@ do_alloc:
+ 		else if (height == ip->i_height)
+ 			ret = gfs2_hole_size(inode, lblock, len, mp, iomap);
+ 		else
+-			iomap->length = size - pos;
++			iomap->length = size - iomap->offset;
+ 	} else if (flags & IOMAP_WRITE) {
+ 		u64 alloc_size;
+ 
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 6a355e1347d7f..d2b7ecbd1b150 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1438,13 +1438,6 @@ out:
+ 	gfs2_ordered_del_inode(ip);
+ 	clear_inode(inode);
+ 	gfs2_dir_hash_inval(ip);
+-	if (ip->i_gl) {
+-		glock_clear_object(ip->i_gl, ip);
+-		wait_on_bit_io(&ip->i_flags, GIF_GLOP_PENDING, TASK_UNINTERRUPTIBLE);
+-		gfs2_glock_add_to_lru(ip->i_gl);
+-		gfs2_glock_put_eventually(ip->i_gl);
+-		ip->i_gl = NULL;
+-	}
+ 	if (gfs2_holder_initialized(&ip->i_iopen_gh)) {
+ 		struct gfs2_glock *gl = ip->i_iopen_gh.gh_gl;
+ 
+@@ -1457,6 +1450,13 @@ out:
+ 		gfs2_holder_uninit(&ip->i_iopen_gh);
+ 		gfs2_glock_put_eventually(gl);
+ 	}
++	if (ip->i_gl) {
++		glock_clear_object(ip->i_gl, ip);
++		wait_on_bit_io(&ip->i_flags, GIF_GLOP_PENDING, TASK_UNINTERRUPTIBLE);
++		gfs2_glock_add_to_lru(ip->i_gl);
++		gfs2_glock_put_eventually(ip->i_gl);
++		ip->i_gl = NULL;
++	}
+ }
+ 
+ static struct inode *gfs2_alloc_inode(struct super_block *sb)
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 4ebcd9dd15352..2587b1b8e2ef7 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -362,8 +362,9 @@ static ssize_t _nfs42_proc_copy(struct file *src,
+ 			goto out;
+ 	}
+ 
+-	truncate_pagecache_range(dst_inode, pos_dst,
+-				 pos_dst + res->write_res.count);
++	WARN_ON_ONCE(invalidate_inode_pages2_range(dst_inode->i_mapping,
++					pos_dst >> PAGE_SHIFT,
++					(pos_dst + res->write_res.count - 1) >> PAGE_SHIFT));
+ 	spin_lock(&dst_inode->i_lock);
+ 	NFS_I(dst_inode)->cache_validity |= (NFS_INO_REVAL_PAGECACHE |
+ 			NFS_INO_REVAL_FORCED | NFS_INO_INVALID_SIZE |
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index d1ae643a999a5..b019f27c13601 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -422,45 +422,48 @@ out_unlock:
+ 	return ret;
+ }
+ 
+-static ssize_t ovl_splice_read(struct file *in, loff_t *ppos,
+-			 struct pipe_inode_info *pipe, size_t len,
+-			 unsigned int flags)
++/*
++ * Calling iter_file_splice_write() directly from overlay's f_op may deadlock
++ * due to lock order inversion between pipe->mutex in iter_file_splice_write()
++ * and file_start_write(real.file) in ovl_write_iter().
++ *
++ * So do everything ovl_write_iter() does and call iter_file_splice_write() on
++ * the real file.
++ */
++static ssize_t ovl_splice_write(struct pipe_inode_info *pipe, struct file *out,
++				loff_t *ppos, size_t len, unsigned int flags)
+ {
+-	ssize_t ret;
+ 	struct fd real;
+ 	const struct cred *old_cred;
++	struct inode *inode = file_inode(out);
++	struct inode *realinode = ovl_inode_real(inode);
++	ssize_t ret;
+ 
+-	ret = ovl_real_fdget(in, &real);
++	inode_lock(inode);
++	/* Update mode */
++	ovl_copyattr(realinode, inode);
++	ret = file_remove_privs(out);
+ 	if (ret)
+-		return ret;
+-
+-	old_cred = ovl_override_creds(file_inode(in)->i_sb);
+-	ret = generic_file_splice_read(real.file, ppos, pipe, len, flags);
+-	revert_creds(old_cred);
+-
+-	ovl_file_accessed(in);
+-	fdput(real);
+-	return ret;
+-}
+-
+-static ssize_t
+-ovl_splice_write(struct pipe_inode_info *pipe, struct file *out,
+-			  loff_t *ppos, size_t len, unsigned int flags)
+-{
+-	struct fd real;
+-	const struct cred *old_cred;
+-	ssize_t ret;
++		goto out_unlock;
+ 
+ 	ret = ovl_real_fdget(out, &real);
+ 	if (ret)
+-		return ret;
++		goto out_unlock;
++
++	old_cred = ovl_override_creds(inode->i_sb);
++	file_start_write(real.file);
+ 
+-	old_cred = ovl_override_creds(file_inode(out)->i_sb);
+ 	ret = iter_file_splice_write(pipe, real.file, ppos, len, flags);
+-	revert_creds(old_cred);
+ 
+-	ovl_file_accessed(out);
++	file_end_write(real.file);
++	/* Update size */
++	ovl_copyattr(realinode, inode);
++	revert_creds(old_cred);
+ 	fdput(real);
++
++out_unlock:
++	inode_unlock(inode);
++
+ 	return ret;
+ }
+ 
+@@ -772,7 +775,7 @@ const struct file_operations ovl_file_operations = {
+ #ifdef CONFIG_COMPAT
+ 	.compat_ioctl	= ovl_compat_ioctl,
+ #endif
+-	.splice_read    = ovl_splice_read,
++	.splice_read    = generic_file_splice_read,
+ 	.splice_write   = ovl_splice_write,
+ 
+ 	.copy_file_range	= ovl_copy_file_range,
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index fdb1d5262ce84..96d69404a54ff 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -953,6 +953,15 @@ static inline struct acpi_device *acpi_resource_consumer(struct resource *res)
+ 	return NULL;
+ }
+ 
++static inline int acpi_register_wakeup_handler(int wake_irq,
++	bool (*wakeup)(void *context), void *context)
++{
++	return -ENXIO;
++}
++
++static inline void acpi_unregister_wakeup_handler(
++	bool (*wakeup)(void *context), void *context) { }
++
+ #endif	/* !CONFIG_ACPI */
+ 
+ #ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
+diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
+index 21f21f7f878ce..4dbebd319b6f5 100644
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -155,6 +155,8 @@ struct kretprobe {
+ 	raw_spinlock_t lock;
+ };
+ 
++#define KRETPROBE_MAX_DATA_SIZE	4096
++
+ struct kretprobe_instance {
+ 	union {
+ 		struct hlist_node hlist;
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 4fdeccf223784..3476d20b75d49 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -4212,7 +4212,8 @@ static inline u32 netif_msg_init(int debug_value, int default_msg_enable_bits)
+ static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu)
+ {
+ 	spin_lock(&txq->_xmit_lock);
+-	txq->xmit_lock_owner = cpu;
++	/* Pairs with READ_ONCE() in __dev_queue_xmit() */
++	WRITE_ONCE(txq->xmit_lock_owner, cpu);
+ }
+ 
+ static inline bool __netif_tx_acquire(struct netdev_queue *txq)
+@@ -4229,26 +4230,32 @@ static inline void __netif_tx_release(struct netdev_queue *txq)
+ static inline void __netif_tx_lock_bh(struct netdev_queue *txq)
+ {
+ 	spin_lock_bh(&txq->_xmit_lock);
+-	txq->xmit_lock_owner = smp_processor_id();
++	/* Pairs with READ_ONCE() in __dev_queue_xmit() */
++	WRITE_ONCE(txq->xmit_lock_owner, smp_processor_id());
+ }
+ 
+ static inline bool __netif_tx_trylock(struct netdev_queue *txq)
+ {
+ 	bool ok = spin_trylock(&txq->_xmit_lock);
+-	if (likely(ok))
+-		txq->xmit_lock_owner = smp_processor_id();
++
++	if (likely(ok)) {
++		/* Pairs with READ_ONCE() in __dev_queue_xmit() */
++		WRITE_ONCE(txq->xmit_lock_owner, smp_processor_id());
++	}
+ 	return ok;
+ }
+ 
+ static inline void __netif_tx_unlock(struct netdev_queue *txq)
+ {
+-	txq->xmit_lock_owner = -1;
++	/* Pairs with READ_ONCE() in __dev_queue_xmit() */
++	WRITE_ONCE(txq->xmit_lock_owner, -1);
+ 	spin_unlock(&txq->_xmit_lock);
+ }
+ 
+ static inline void __netif_tx_unlock_bh(struct netdev_queue *txq)
+ {
+-	txq->xmit_lock_owner = -1;
++	/* Pairs with READ_ONCE() in __dev_queue_xmit() */
++	WRITE_ONCE(txq->xmit_lock_owner, -1);
+ 	spin_unlock_bh(&txq->_xmit_lock);
+ }
+ 
+diff --git a/include/linux/siphash.h b/include/linux/siphash.h
+index bf21591a9e5e6..0cda61855d907 100644
+--- a/include/linux/siphash.h
++++ b/include/linux/siphash.h
+@@ -27,9 +27,7 @@ static inline bool siphash_key_is_zero(const siphash_key_t *key)
+ }
+ 
+ u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key);
+-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key);
+-#endif
+ 
+ u64 siphash_1u64(const u64 a, const siphash_key_t *key);
+ u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t *key);
+@@ -82,10 +80,9 @@ static inline u64 ___siphash_aligned(const __le64 *data, size_t len,
+ static inline u64 siphash(const void *data, size_t len,
+ 			  const siphash_key_t *key)
+ {
+-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+-	if (!IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT))
++	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
++	    !IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT))
+ 		return __siphash_unaligned(data, len, key);
+-#endif
+ 	return ___siphash_aligned(data, len, key);
+ }
+ 
+@@ -96,10 +93,8 @@ typedef struct {
+ 
+ u32 __hsiphash_aligned(const void *data, size_t len,
+ 		       const hsiphash_key_t *key);
+-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ u32 __hsiphash_unaligned(const void *data, size_t len,
+ 			 const hsiphash_key_t *key);
+-#endif
+ 
+ u32 hsiphash_1u32(const u32 a, const hsiphash_key_t *key);
+ u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t *key);
+@@ -135,10 +130,9 @@ static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len,
+ static inline u32 hsiphash(const void *data, size_t len,
+ 			   const hsiphash_key_t *key)
+ {
+-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+-	if (!IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT))
++	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
++	    !IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT))
+ 		return __hsiphash_unaligned(data, len, key);
+-#endif
+ 	return ___hsiphash_aligned(data, len, key);
+ }
+ 
+diff --git a/include/net/dst_cache.h b/include/net/dst_cache.h
+index 67634675e9197..df6622a5fe98f 100644
+--- a/include/net/dst_cache.h
++++ b/include/net/dst_cache.h
+@@ -79,6 +79,17 @@ static inline void dst_cache_reset(struct dst_cache *dst_cache)
+ 	dst_cache->reset_ts = jiffies;
+ }
+ 
++/**
++ *	dst_cache_reset_now - invalidate the cache contents immediately
++ *	@dst_cache: the cache
++ *
++ *	The caller must be sure there are no concurrent users, as this frees
++ *	all dst_cache users immediately, rather than waiting for the next
++ *	per-cpu usage like dst_cache_reset does. Most callers should use the
++ *	higher speed lazily-freed dst_cache_reset function instead.
++ */
++void dst_cache_reset_now(struct dst_cache *dst_cache);
++
+ /**
+  *	dst_cache_init - initialize the cache, allocating the required storage
+  *	@dst_cache: the cache
+diff --git a/include/net/fib_rules.h b/include/net/fib_rules.h
+index 4b10676c69d19..bd07484ab9dd5 100644
+--- a/include/net/fib_rules.h
++++ b/include/net/fib_rules.h
+@@ -69,7 +69,7 @@ struct fib_rules_ops {
+ 	int			(*action)(struct fib_rule *,
+ 					  struct flowi *, int,
+ 					  struct fib_lookup_arg *);
+-	bool			(*suppress)(struct fib_rule *,
++	bool			(*suppress)(struct fib_rule *, int,
+ 					    struct fib_lookup_arg *);
+ 	int			(*match)(struct fib_rule *,
+ 					 struct flowi *, int);
+@@ -218,7 +218,9 @@ INDIRECT_CALLABLE_DECLARE(int fib4_rule_action(struct fib_rule *rule,
+ 			    struct fib_lookup_arg *arg));
+ 
+ INDIRECT_CALLABLE_DECLARE(bool fib6_rule_suppress(struct fib_rule *rule,
++						int flags,
+ 						struct fib_lookup_arg *arg));
+ INDIRECT_CALLABLE_DECLARE(bool fib4_rule_suppress(struct fib_rule *rule,
++						int flags,
+ 						struct fib_lookup_arg *arg));
+ #endif
+diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
+index 4d431d7b4415a..088f257cd6fb3 100644
+--- a/include/net/ip_fib.h
++++ b/include/net/ip_fib.h
+@@ -437,7 +437,7 @@ int fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
+ #ifdef CONFIG_IP_ROUTE_CLASSID
+ static inline int fib_num_tclassid_users(struct net *net)
+ {
+-	return net->ipv4.fib_num_tclassid_users;
++	return atomic_read(&net->ipv4.fib_num_tclassid_users);
+ }
+ #else
+ static inline int fib_num_tclassid_users(struct net *net)
+diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
+index 8e4fcac4df72f..75484f425e558 100644
+--- a/include/net/netns/ipv4.h
++++ b/include/net/netns/ipv4.h
+@@ -61,7 +61,7 @@ struct netns_ipv4 {
+ #endif
+ 	bool			fib_has_custom_local_routes;
+ #ifdef CONFIG_IP_ROUTE_CLASSID
+-	int			fib_num_tclassid_users;
++	atomic_t		fib_num_tclassid_users;
+ #endif
+ 	struct hlist_head	*fib_table_hash;
+ 	bool			fib_offload_disabled;
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 6270d1d9436b0..bb40d4de545ca 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2322,19 +2322,22 @@ struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp,
+  * @sk: socket
+  *
+  * Use the per task page_frag instead of the per socket one for
+- * optimization when we know that we're in the normal context and owns
++ * optimization when we know that we're in process context and own
+  * everything that's associated with %current.
+  *
+- * gfpflags_allow_blocking() isn't enough here as direct reclaim may nest
+- * inside other socket operations and end up recursing into sk_page_frag()
+- * while it's already in use.
++ * Both direct reclaim and page faults can nest inside other
++ * socket operations and end up recursing into sk_page_frag()
++ * while it's already in use: explicitly avoid task page_frag
++ * usage if the caller is potentially doing any of them.
++ * This assumes that page fault handlers use the GFP_NOFS flags.
+  *
+  * Return: a per task page_frag if context allows that,
+  * otherwise a per socket one.
+  */
+ static inline struct page_frag *sk_page_frag(struct sock *sk)
+ {
+-	if (gfpflags_normal_context(sk->sk_allocation))
++	if ((sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==
++	    (__GFP_DIRECT_RECLAIM | __GFP_FS))
+ 		return &current->task_frag;
+ 
+ 	return &sk->sk_frag;
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 66a6ba81edb1e..cdea59acd66bf 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -2137,6 +2137,9 @@ int register_kretprobe(struct kretprobe *rp)
+ 		}
+ 	}
+ 
++	if (rp->data_size > KRETPROBE_MAX_DATA_SIZE)
++		return -E2BIG;
++
+ 	rp->kp.pre_handler = pre_handler_kretprobe;
+ 	rp->kp.post_handler = NULL;
+ 	rp->kp.fault_handler = NULL;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 304aad997da11..0a5f9fad45e4b 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1526,7 +1526,7 @@ static void __init init_uclamp_rq(struct rq *rq)
+ 		};
+ 	}
+ 
+-	rq->uclamp_flags = 0;
++	rq->uclamp_flags = UCLAMP_FLAG_IDLE;
+ }
+ 
+ static void __init init_uclamp(void)
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index c2ec467a5766b..003e5f37861e3 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -3344,7 +3344,7 @@ static int check_synth_field(struct synth_event *event,
+ 
+ 	if (strcmp(field->type, hist_field->type) != 0) {
+ 		if (field->size != hist_field->size ||
+-		    field->is_signed != hist_field->is_signed)
++		    (!field->is_string && field->is_signed != hist_field->is_signed))
+ 			return -EINVAL;
+ 	}
+ 
+diff --git a/lib/siphash.c b/lib/siphash.c
+index c47bb6ff21499..025f0cbf6d7a7 100644
+--- a/lib/siphash.c
++++ b/lib/siphash.c
+@@ -49,6 +49,7 @@
+ 	SIPROUND; \
+ 	return (v0 ^ v1) ^ (v2 ^ v3);
+ 
++#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key)
+ {
+ 	const u8 *end = data + len - (len % sizeof(u64));
+@@ -80,8 +81,8 @@ u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key)
+ 	POSTAMBLE
+ }
+ EXPORT_SYMBOL(__siphash_aligned);
++#endif
+ 
+-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key)
+ {
+ 	const u8 *end = data + len - (len % sizeof(u64));
+@@ -113,7 +114,6 @@ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key)
+ 	POSTAMBLE
+ }
+ EXPORT_SYMBOL(__siphash_unaligned);
+-#endif
+ 
+ /**
+  * siphash_1u64 - compute 64-bit siphash PRF value of a u64
+@@ -250,6 +250,7 @@ EXPORT_SYMBOL(siphash_3u32);
+ 	HSIPROUND; \
+ 	return (v0 ^ v1) ^ (v2 ^ v3);
+ 
++#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
+ {
+ 	const u8 *end = data + len - (len % sizeof(u64));
+@@ -280,8 +281,8 @@ u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
+ 	HPOSTAMBLE
+ }
+ EXPORT_SYMBOL(__hsiphash_aligned);
++#endif
+ 
+-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ u32 __hsiphash_unaligned(const void *data, size_t len,
+ 			 const hsiphash_key_t *key)
+ {
+@@ -313,7 +314,6 @@ u32 __hsiphash_unaligned(const void *data, size_t len,
+ 	HPOSTAMBLE
+ }
+ EXPORT_SYMBOL(__hsiphash_unaligned);
+-#endif
+ 
+ /**
+  * hsiphash_1u32 - compute 64-bit hsiphash PRF value of a u32
+@@ -418,6 +418,7 @@ EXPORT_SYMBOL(hsiphash_4u32);
+ 	HSIPROUND; \
+ 	return v1 ^ v3;
+ 
++#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
+ {
+ 	const u8 *end = data + len - (len % sizeof(u32));
+@@ -438,8 +439,8 @@ u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
+ 	HPOSTAMBLE
+ }
+ EXPORT_SYMBOL(__hsiphash_aligned);
++#endif
+ 
+-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ u32 __hsiphash_unaligned(const void *data, size_t len,
+ 			 const hsiphash_key_t *key)
+ {
+@@ -461,7 +462,6 @@ u32 __hsiphash_unaligned(const void *data, size_t len,
+ 	HPOSTAMBLE
+ }
+ EXPORT_SYMBOL(__hsiphash_unaligned);
+-#endif
+ 
+ /**
+  * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index fe35fdad35c9b..9c39b0f5d6e07 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -2004,6 +2004,12 @@ static void j1939_tp_cmd_recv(struct j1939_priv *priv, struct sk_buff *skb)
+ 		extd = J1939_ETP;
+ 		fallthrough;
+ 	case J1939_TP_CMD_BAM:
++		if (cmd == J1939_TP_CMD_BAM && !j1939_cb_is_broadcast(skcb)) {
++			netdev_err_once(priv->ndev, "%s: BAM to unicast (%02x), ignoring!\n",
++					__func__, skcb->addr.sa);
++			return;
++		}
++		fallthrough;
+ 	case J1939_TP_CMD_RTS: /* fall through */
+ 		if (skcb->addr.type != extd)
+ 			return;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 7dd7b9fb600c8..60cf3cd0c282f 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4147,7 +4147,10 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
+ 	if (dev->flags & IFF_UP) {
+ 		int cpu = smp_processor_id(); /* ok because BHs are off */
+ 
+-		if (txq->xmit_lock_owner != cpu) {
++		/* Other cpus might concurrently change txq->xmit_lock_owner
++		 * to -1 or to their cpu id, but not to our id.
++		 */
++		if (READ_ONCE(txq->xmit_lock_owner) != cpu) {
+ 			if (dev_xmit_recursion())
+ 				goto recursion_alert;
+ 
+diff --git a/net/core/dst_cache.c b/net/core/dst_cache.c
+index be74ab4551c20..0ccfd5fa5cb9b 100644
+--- a/net/core/dst_cache.c
++++ b/net/core/dst_cache.c
+@@ -162,3 +162,22 @@ void dst_cache_destroy(struct dst_cache *dst_cache)
+ 	free_percpu(dst_cache->cache);
+ }
+ EXPORT_SYMBOL_GPL(dst_cache_destroy);
++
++void dst_cache_reset_now(struct dst_cache *dst_cache)
++{
++	int i;
++
++	if (!dst_cache->cache)
++		return;
++
++	dst_cache->reset_ts = jiffies;
++	for_each_possible_cpu(i) {
++		struct dst_cache_pcpu *idst = per_cpu_ptr(dst_cache->cache, i);
++		struct dst_entry *dst = idst->dst;
++
++		idst->cookie = 0;
++		idst->dst = NULL;
++		dst_release(dst);
++	}
++}
++EXPORT_SYMBOL_GPL(dst_cache_reset_now);
+diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c
+index 9258ffc4ebffc..a6159624eec04 100644
+--- a/net/core/fib_rules.c
++++ b/net/core/fib_rules.c
+@@ -323,7 +323,7 @@ jumped:
+ 		if (!err && ops->suppress && INDIRECT_CALL_MT(ops->suppress,
+ 							      fib6_rule_suppress,
+ 							      fib4_rule_suppress,
+-							      rule, arg))
++							      rule, flags, arg))
+ 			continue;
+ 
+ 		if (err != -EAGAIN) {
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index 7c18597774297..148ef484a66ce 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -2582,7 +2582,7 @@ static int __devinet_sysctl_register(struct net *net, char *dev_name,
+ free:
+ 	kfree(t);
+ out:
+-	return -ENOBUFS;
++	return -ENOMEM;
+ }
+ 
+ static void __devinet_sysctl_unregister(struct net *net,
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 647bceab56c2d..917ea953dfad8 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1578,7 +1578,7 @@ static int __net_init fib_net_init(struct net *net)
+ 	int error;
+ 
+ #ifdef CONFIG_IP_ROUTE_CLASSID
+-	net->ipv4.fib_num_tclassid_users = 0;
++	atomic_set(&net->ipv4.fib_num_tclassid_users, 0);
+ #endif
+ 	error = ip_fib_net_init(net);
+ 	if (error < 0)
+diff --git a/net/ipv4/fib_rules.c b/net/ipv4/fib_rules.c
+index ce54a30c2ef1e..d279cb8ac1584 100644
+--- a/net/ipv4/fib_rules.c
++++ b/net/ipv4/fib_rules.c
+@@ -141,6 +141,7 @@ INDIRECT_CALLABLE_SCOPE int fib4_rule_action(struct fib_rule *rule,
+ }
+ 
+ INDIRECT_CALLABLE_SCOPE bool fib4_rule_suppress(struct fib_rule *rule,
++						int flags,
+ 						struct fib_lookup_arg *arg)
+ {
+ 	struct fib_result *result = (struct fib_result *) arg->result;
+@@ -263,7 +264,7 @@ static int fib4_rule_configure(struct fib_rule *rule, struct sk_buff *skb,
+ 	if (tb[FRA_FLOW]) {
+ 		rule4->tclassid = nla_get_u32(tb[FRA_FLOW]);
+ 		if (rule4->tclassid)
+-			net->ipv4.fib_num_tclassid_users++;
++			atomic_inc(&net->ipv4.fib_num_tclassid_users);
+ 	}
+ #endif
+ 
+@@ -295,7 +296,7 @@ static int fib4_rule_delete(struct fib_rule *rule)
+ 
+ #ifdef CONFIG_IP_ROUTE_CLASSID
+ 	if (((struct fib4_rule *)rule)->tclassid)
+-		net->ipv4.fib_num_tclassid_users--;
++		atomic_dec(&net->ipv4.fib_num_tclassid_users);
+ #endif
+ 	net->ipv4.fib_has_custom_rules = true;
+ 
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 642503e89924b..36f34977dda19 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -222,7 +222,7 @@ void fib_nh_release(struct net *net, struct fib_nh *fib_nh)
+ {
+ #ifdef CONFIG_IP_ROUTE_CLASSID
+ 	if (fib_nh->nh_tclassid)
+-		net->ipv4.fib_num_tclassid_users--;
++		atomic_dec(&net->ipv4.fib_num_tclassid_users);
+ #endif
+ 	fib_nh_common_release(&fib_nh->nh_common);
+ }
+@@ -633,7 +633,7 @@ int fib_nh_init(struct net *net, struct fib_nh *nh,
+ #ifdef CONFIG_IP_ROUTE_CLASSID
+ 	nh->nh_tclassid = cfg->fc_flow;
+ 	if (nh->nh_tclassid)
+-		net->ipv4.fib_num_tclassid_users++;
++		atomic_inc(&net->ipv4.fib_num_tclassid_users);
+ #endif
+ #ifdef CONFIG_IP_ROUTE_MULTIPATH
+ 	nh->fib_nh_weight = nh_weight;
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 8d001f665fb15..7f2ffc7b1f75a 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -808,6 +808,12 @@ int esp6_input_done2(struct sk_buff *skb, int err)
+ 		struct tcphdr *th;
+ 
+ 		offset = ipv6_skip_exthdr(skb, offset, &nexthdr, &frag_off);
++
++		if (offset < 0) {
++			err = -EINVAL;
++			goto out;
++		}
++
+ 		uh = (void *)(skb->data + offset);
+ 		th = (void *)(skb->data + offset);
+ 		hdr_len += offset;
+diff --git a/net/ipv6/fib6_rules.c b/net/ipv6/fib6_rules.c
+index 8f9a83314de7d..3e4c87b29b115 100644
+--- a/net/ipv6/fib6_rules.c
++++ b/net/ipv6/fib6_rules.c
+@@ -267,6 +267,7 @@ INDIRECT_CALLABLE_SCOPE int fib6_rule_action(struct fib_rule *rule,
+ }
+ 
+ INDIRECT_CALLABLE_SCOPE bool fib6_rule_suppress(struct fib_rule *rule,
++						int flags,
+ 						struct fib_lookup_arg *arg)
+ {
+ 	struct fib6_result *res = arg->result;
+@@ -294,8 +295,7 @@ INDIRECT_CALLABLE_SCOPE bool fib6_rule_suppress(struct fib_rule *rule,
+ 	return false;
+ 
+ suppress_route:
+-	if (!(arg->flags & FIB_LOOKUP_NOREF))
+-		ip6_rt_put(rt);
++	ip6_rt_put_flags(rt, flags);
+ 	return true;
+ }
+ 
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index b7979c0bffd0f..6a24431b90095 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -1945,7 +1945,8 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
+ 		int keyid = rx->sta->ptk_idx;
+ 		sta_ptk = rcu_dereference(rx->sta->ptk[keyid]);
+ 
+-		if (ieee80211_has_protected(fc)) {
++		if (ieee80211_has_protected(fc) &&
++		    !(status->flag & RX_FLAG_IV_STRIPPED)) {
+ 			cs = rx->sta->cipher_scheme;
+ 			keyid = ieee80211_get_keyid(rx->skb, cs);
+ 
+diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
+index f2868a8a50c30..9c047c148a112 100644
+--- a/net/mpls/af_mpls.c
++++ b/net/mpls/af_mpls.c
+@@ -1490,22 +1490,52 @@ static void mpls_dev_destroy_rcu(struct rcu_head *head)
+ 	kfree(mdev);
+ }
+ 
+-static void mpls_ifdown(struct net_device *dev, int event)
++static int mpls_ifdown(struct net_device *dev, int event)
+ {
+ 	struct mpls_route __rcu **platform_label;
+ 	struct net *net = dev_net(dev);
+-	u8 alive, deleted;
+ 	unsigned index;
+ 
+ 	platform_label = rtnl_dereference(net->mpls.platform_label);
+ 	for (index = 0; index < net->mpls.platform_labels; index++) {
+ 		struct mpls_route *rt = rtnl_dereference(platform_label[index]);
++		bool nh_del = false;
++		u8 alive = 0;
+ 
+ 		if (!rt)
+ 			continue;
+ 
+-		alive = 0;
+-		deleted = 0;
++		if (event == NETDEV_UNREGISTER) {
++			u8 deleted = 0;
++
++			for_nexthops(rt) {
++				struct net_device *nh_dev =
++					rtnl_dereference(nh->nh_dev);
++
++				if (!nh_dev || nh_dev == dev)
++					deleted++;
++				if (nh_dev == dev)
++					nh_del = true;
++			} endfor_nexthops(rt);
++
++			/* if there are no more nexthops, delete the route */
++			if (deleted == rt->rt_nhn) {
++				mpls_route_update(net, index, NULL, NULL);
++				continue;
++			}
++
++			if (nh_del) {
++				size_t size = sizeof(*rt) + rt->rt_nhn *
++					rt->rt_nh_size;
++				struct mpls_route *orig = rt;
++
++				rt = kmalloc(size, GFP_KERNEL);
++				if (!rt)
++					return -ENOMEM;
++				memcpy(rt, orig, size);
++			}
++		}
++
+ 		change_nexthops(rt) {
+ 			unsigned int nh_flags = nh->nh_flags;
+ 
+@@ -1529,16 +1559,15 @@ static void mpls_ifdown(struct net_device *dev, int event)
+ next:
+ 			if (!(nh_flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN)))
+ 				alive++;
+-			if (!rtnl_dereference(nh->nh_dev))
+-				deleted++;
+ 		} endfor_nexthops(rt);
+ 
+ 		WRITE_ONCE(rt->rt_nhn_alive, alive);
+ 
+-		/* if there are no more nexthops, delete the route */
+-		if (event == NETDEV_UNREGISTER && deleted == rt->rt_nhn)
+-			mpls_route_update(net, index, NULL, NULL);
++		if (nh_del)
++			mpls_route_update(net, index, rt, NULL);
+ 	}
++
++	return 0;
+ }
+ 
+ static void mpls_ifup(struct net_device *dev, unsigned int flags)
+@@ -1596,8 +1625,12 @@ static int mpls_dev_notify(struct notifier_block *this, unsigned long event,
+ 		return NOTIFY_OK;
+ 
+ 	switch (event) {
++		int err;
++
+ 	case NETDEV_DOWN:
+-		mpls_ifdown(dev, event);
++		err = mpls_ifdown(dev, event);
++		if (err)
++			return notifier_from_errno(err);
+ 		break;
+ 	case NETDEV_UP:
+ 		flags = dev_get_flags(dev);
+@@ -1608,13 +1641,18 @@ static int mpls_dev_notify(struct notifier_block *this, unsigned long event,
+ 		break;
+ 	case NETDEV_CHANGE:
+ 		flags = dev_get_flags(dev);
+-		if (flags & (IFF_RUNNING | IFF_LOWER_UP))
++		if (flags & (IFF_RUNNING | IFF_LOWER_UP)) {
+ 			mpls_ifup(dev, RTNH_F_DEAD | RTNH_F_LINKDOWN);
+-		else
+-			mpls_ifdown(dev, event);
++		} else {
++			err = mpls_ifdown(dev, event);
++			if (err)
++				return notifier_from_errno(err);
++		}
+ 		break;
+ 	case NETDEV_UNREGISTER:
+-		mpls_ifdown(dev, event);
++		err = mpls_ifdown(dev, event);
++		if (err)
++			return notifier_from_errno(err);
+ 		mdev = mpls_dev_get(dev);
+ 		if (mdev) {
+ 			mpls_dev_sysctl_unregister(dev, mdev);
+@@ -1625,8 +1663,6 @@ static int mpls_dev_notify(struct notifier_block *this, unsigned long event,
+ 	case NETDEV_CHANGENAME:
+ 		mdev = mpls_dev_get(dev);
+ 		if (mdev) {
+-			int err;
+-
+ 			mpls_dev_sysctl_unregister(dev, mdev);
+ 			err = mpls_dev_sysctl_register(dev, mdev);
+ 			if (err)
+diff --git a/net/rds/tcp.c b/net/rds/tcp.c
+index abf19c0e3ba0b..5327d130c4b56 100644
+--- a/net/rds/tcp.c
++++ b/net/rds/tcp.c
+@@ -500,7 +500,7 @@ void rds_tcp_tune(struct socket *sock)
+ 		sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
+ 	}
+ 	if (rtn->rcvbuf_size > 0) {
+-		sk->sk_sndbuf = rtn->rcvbuf_size;
++		sk->sk_rcvbuf = rtn->rcvbuf_size;
+ 		sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
+ 	}
+ 	release_sock(sk);
+diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
+index 7e574c75be8e1..f5fb223aba82a 100644
+--- a/net/rxrpc/conn_client.c
++++ b/net/rxrpc/conn_client.c
+@@ -135,16 +135,20 @@ struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle)
+ 	return bundle;
+ }
+ 
++static void rxrpc_free_bundle(struct rxrpc_bundle *bundle)
++{
++	rxrpc_put_peer(bundle->params.peer);
++	kfree(bundle);
++}
++
+ void rxrpc_put_bundle(struct rxrpc_bundle *bundle)
+ {
+ 	unsigned int d = bundle->debug_id;
+ 	unsigned int u = atomic_dec_return(&bundle->usage);
+ 
+ 	_debug("PUT B=%x %u", d, u);
+-	if (u == 0) {
+-		rxrpc_put_peer(bundle->params.peer);
+-		kfree(bundle);
+-	}
++	if (u == 0)
++		rxrpc_free_bundle(bundle);
+ }
+ 
+ /*
+@@ -334,7 +338,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c
+ 	return candidate;
+ 
+ found_bundle_free:
+-	kfree(candidate);
++	rxrpc_free_bundle(candidate);
+ found_bundle:
+ 	rxrpc_get_bundle(bundle);
+ 	spin_unlock(&local->client_bundles_lock);
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index 68396d0520525..0298fe2ad6d32 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -299,6 +299,12 @@ static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_sock *rx,
+ 	return peer;
+ }
+ 
++static void rxrpc_free_peer(struct rxrpc_peer *peer)
++{
++	rxrpc_put_local(peer->local);
++	kfree_rcu(peer, rcu);
++}
++
+ /*
+  * Set up a new incoming peer.  There shouldn't be any other matching peers
+  * since we've already done a search in the list from the non-reentrant context
+@@ -365,7 +371,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx,
+ 		spin_unlock_bh(&rxnet->peer_hash_lock);
+ 
+ 		if (peer)
+-			kfree(candidate);
++			rxrpc_free_peer(candidate);
+ 		else
+ 			peer = candidate;
+ 	}
+@@ -420,8 +426,7 @@ static void __rxrpc_put_peer(struct rxrpc_peer *peer)
+ 	list_del_init(&peer->keepalive_link);
+ 	spin_unlock_bh(&rxnet->peer_hash_lock);
+ 
+-	rxrpc_put_local(peer->local);
+-	kfree_rcu(peer, rcu);
++	rxrpc_free_peer(peer);
+ }
+ 
+ /*
+@@ -457,8 +462,7 @@ void rxrpc_put_peer_locked(struct rxrpc_peer *peer)
+ 	if (n == 0) {
+ 		hash_del_rcu(&peer->hash_link);
+ 		list_del_init(&peer->keepalive_link);
+-		rxrpc_put_local(peer->local);
+-		kfree_rcu(peer, rcu);
++		rxrpc_free_peer(peer);
+ 	}
+ }
+ 
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index ac8265e35b2d2..d324a12c26cd9 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -513,12 +513,26 @@ static void smc_link_save_peer_info(struct smc_link *link,
+ 
+ static void smc_switch_to_fallback(struct smc_sock *smc)
+ {
++	wait_queue_head_t *smc_wait = sk_sleep(&smc->sk);
++	wait_queue_head_t *clc_wait = sk_sleep(smc->clcsock->sk);
++	unsigned long flags;
++
+ 	smc->use_fallback = true;
+ 	if (smc->sk.sk_socket && smc->sk.sk_socket->file) {
+ 		smc->clcsock->file = smc->sk.sk_socket->file;
+ 		smc->clcsock->file->private_data = smc->clcsock;
+ 		smc->clcsock->wq.fasync_list =
+ 			smc->sk.sk_socket->wq.fasync_list;
++
++		/* There may be some entries remaining in
++		 * smc socket->wq, which should be removed
++		 * to clcsocket->wq during the fallback.
++		 */
++		spin_lock_irqsave(&smc_wait->lock, flags);
++		spin_lock_nested(&clc_wait->lock, SINGLE_DEPTH_NESTING);
++		list_splice_init(&smc_wait->head, &clc_wait->head);
++		spin_unlock(&clc_wait->lock);
++		spin_unlock_irqrestore(&smc_wait->lock, flags);
+ 	}
+ }
+ 
+diff --git a/net/smc/smc_close.c b/net/smc/smc_close.c
+index 04620b53b74a7..84102db5bb314 100644
+--- a/net/smc/smc_close.c
++++ b/net/smc/smc_close.c
+@@ -195,6 +195,7 @@ int smc_close_active(struct smc_sock *smc)
+ 	int old_state;
+ 	long timeout;
+ 	int rc = 0;
++	int rc1 = 0;
+ 
+ 	timeout = current->flags & PF_EXITING ?
+ 		  0 : sock_flag(sk, SOCK_LINGER) ?
+@@ -232,8 +233,11 @@ again:
+ 			/* actively shutdown clcsock before peer close it,
+ 			 * prevent peer from entering TIME_WAIT state.
+ 			 */
+-			if (smc->clcsock && smc->clcsock->sk)
+-				rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);
++			if (smc->clcsock && smc->clcsock->sk) {
++				rc1 = kernel_sock_shutdown(smc->clcsock,
++							   SHUT_RDWR);
++				rc = rc ? rc : rc1;
++			}
+ 		} else {
+ 			/* peer event has changed the state */
+ 			goto again;
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index cd625b672429f..3f1343dfa16ba 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -204,18 +204,17 @@ static void smc_lgr_unregister_conn(struct smc_connection *conn)
+ void smc_lgr_cleanup_early(struct smc_connection *conn)
+ {
+ 	struct smc_link_group *lgr = conn->lgr;
+-	struct list_head *lgr_list;
+ 	spinlock_t *lgr_lock;
+ 
+ 	if (!lgr)
+ 		return;
+ 
+ 	smc_conn_free(conn);
+-	lgr_list = smc_lgr_list_head(lgr, &lgr_lock);
++	smc_lgr_list_head(lgr, &lgr_lock);
+ 	spin_lock_bh(lgr_lock);
+ 	/* do not use this link group for new connections */
+-	if (!list_empty(lgr_list))
+-		list_del_init(lgr_list);
++	if (!list_empty(&lgr->list))
++		list_del_init(&lgr->list);
+ 	spin_unlock_bh(lgr_lock);
+ 	__smc_lgr_terminate(lgr, true);
+ }
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 122d5daed8b61..8cd011ea9fbb8 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -515,7 +515,7 @@ static int tls_do_encryption(struct sock *sk,
+ 	memcpy(&rec->iv_data[iv_offset], tls_ctx->tx.iv,
+ 	       prot->iv_size + prot->salt_size);
+ 
+-	xor_iv_with_seq(prot->version, rec->iv_data, tls_ctx->tx.rec_seq);
++	xor_iv_with_seq(prot->version, rec->iv_data + iv_offset, tls_ctx->tx.rec_seq);
+ 
+ 	sge->offset += prot->prepend_size;
+ 	sge->length -= prot->prepend_size;
+@@ -1487,7 +1487,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
+ 	else
+ 		memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size);
+ 
+-	xor_iv_with_seq(prot->version, iv, tls_ctx->rx.rec_seq);
++	xor_iv_with_seq(prot->version, iv + iv_offset, tls_ctx->rx.rec_seq);
+ 
+ 	/* Prepare AAD */
+ 	tls_make_aad(aad, rxm->full_len - prot->overhead_size +
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index fc61571a3ac73..2a5ba9dca6b08 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -251,6 +251,11 @@ static const struct config_entry config_table[] = {
+ 		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ 		.device = 0x02c8,
+ 	},
++	{
++		.flags = FLAG_SOF,
++		.device = 0x02c8,
++		.codec_hid = "ESSX8336",
++	},
+ /* Cometlake-H */
+ 	{
+ 		.flags = FLAG_SOF,
+@@ -275,6 +280,11 @@ static const struct config_entry config_table[] = {
+ 		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ 		.device = 0x06c8,
+ 	},
++		{
++		.flags = FLAG_SOF,
++		.device = 0x06c8,
++		.codec_hid = "ESSX8336",
++	},
+ #endif
+ 
+ /* Icelake */
+diff --git a/sound/soc/tegra/tegra186_dspk.c b/sound/soc/tegra/tegra186_dspk.c
+index 0cbe31e2c7e9c..373189e5907b9 100644
+--- a/sound/soc/tegra/tegra186_dspk.c
++++ b/sound/soc/tegra/tegra186_dspk.c
+@@ -26,51 +26,162 @@ static const struct reg_default tegra186_dspk_reg_defaults[] = {
+ 	{ TEGRA186_DSPK_CODEC_CTRL,  0x03000000 },
+ };
+ 
+-static int tegra186_dspk_get_control(struct snd_kcontrol *kcontrol,
++static int tegra186_dspk_get_fifo_th(struct snd_kcontrol *kcontrol,
+ 				     struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
+ 	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
+ 
+-	if (strstr(kcontrol->id.name, "FIFO Threshold"))
+-		ucontrol->value.integer.value[0] = dspk->rx_fifo_th;
+-	else if (strstr(kcontrol->id.name, "OSR Value"))
+-		ucontrol->value.integer.value[0] = dspk->osr_val;
+-	else if (strstr(kcontrol->id.name, "LR Polarity Select"))
+-		ucontrol->value.integer.value[0] = dspk->lrsel;
+-	else if (strstr(kcontrol->id.name, "Channel Select"))
+-		ucontrol->value.integer.value[0] = dspk->ch_sel;
+-	else if (strstr(kcontrol->id.name, "Mono To Stereo"))
+-		ucontrol->value.integer.value[0] = dspk->mono_to_stereo;
+-	else if (strstr(kcontrol->id.name, "Stereo To Mono"))
+-		ucontrol->value.integer.value[0] = dspk->stereo_to_mono;
++	ucontrol->value.integer.value[0] = dspk->rx_fifo_th;
+ 
+ 	return 0;
+ }
+ 
+-static int tegra186_dspk_put_control(struct snd_kcontrol *kcontrol,
++static int tegra186_dspk_put_fifo_th(struct snd_kcontrol *kcontrol,
+ 				     struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
+ 	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
+-	int val = ucontrol->value.integer.value[0];
+-
+-	if (strstr(kcontrol->id.name, "FIFO Threshold"))
+-		dspk->rx_fifo_th = val;
+-	else if (strstr(kcontrol->id.name, "OSR Value"))
+-		dspk->osr_val = val;
+-	else if (strstr(kcontrol->id.name, "LR Polarity Select"))
+-		dspk->lrsel = val;
+-	else if (strstr(kcontrol->id.name, "Channel Select"))
+-		dspk->ch_sel = val;
+-	else if (strstr(kcontrol->id.name, "Mono To Stereo"))
+-		dspk->mono_to_stereo = val;
+-	else if (strstr(kcontrol->id.name, "Stereo To Mono"))
+-		dspk->stereo_to_mono = val;
++	int value = ucontrol->value.integer.value[0];
++
++	if (value == dspk->rx_fifo_th)
++		return 0;
++
++	dspk->rx_fifo_th = value;
++
++	return 1;
++}
++
++static int tegra186_dspk_get_osr_val(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
++	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
++
++	ucontrol->value.enumerated.item[0] = dspk->osr_val;
+ 
+ 	return 0;
+ }
+ 
++static int tegra186_dspk_put_osr_val(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
++	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == dspk->osr_val)
++		return 0;
++
++	dspk->osr_val = value;
++
++	return 1;
++}
++
++static int tegra186_dspk_get_pol_sel(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
++	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
++
++	ucontrol->value.enumerated.item[0] = dspk->lrsel;
++
++	return 0;
++}
++
++static int tegra186_dspk_put_pol_sel(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
++	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == dspk->lrsel)
++		return 0;
++
++	dspk->lrsel = value;
++
++	return 1;
++}
++
++static int tegra186_dspk_get_ch_sel(struct snd_kcontrol *kcontrol,
++				    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
++	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
++
++	ucontrol->value.enumerated.item[0] = dspk->ch_sel;
++
++	return 0;
++}
++
++static int tegra186_dspk_put_ch_sel(struct snd_kcontrol *kcontrol,
++				    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
++	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == dspk->ch_sel)
++		return 0;
++
++	dspk->ch_sel = value;
++
++	return 1;
++}
++
++static int tegra186_dspk_get_mono_to_stereo(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
++	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
++
++	ucontrol->value.enumerated.item[0] = dspk->mono_to_stereo;
++
++	return 0;
++}
++
++static int tegra186_dspk_put_mono_to_stereo(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
++	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == dspk->mono_to_stereo)
++		return 0;
++
++	dspk->mono_to_stereo = value;
++
++	return 1;
++}
++
++static int tegra186_dspk_get_stereo_to_mono(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
++	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
++
++	ucontrol->value.enumerated.item[0] = dspk->stereo_to_mono;
++
++	return 0;
++}
++
++static int tegra186_dspk_put_stereo_to_mono(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
++	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == dspk->stereo_to_mono)
++		return 0;
++
++	dspk->stereo_to_mono = value;
++
++	return 1;
++}
++
+ static int __maybe_unused tegra186_dspk_runtime_suspend(struct device *dev)
+ {
+ 	struct tegra186_dspk *dspk = dev_get_drvdata(dev);
+@@ -279,17 +390,19 @@ static const struct soc_enum tegra186_dspk_lrsel_enum =
+ static const struct snd_kcontrol_new tegrat186_dspk_controls[] = {
+ 	SOC_SINGLE_EXT("FIFO Threshold", SND_SOC_NOPM, 0,
+ 		       TEGRA186_DSPK_RX_FIFO_DEPTH - 1, 0,
+-		       tegra186_dspk_get_control, tegra186_dspk_put_control),
++		       tegra186_dspk_get_fifo_th, tegra186_dspk_put_fifo_th),
+ 	SOC_ENUM_EXT("OSR Value", tegra186_dspk_osr_enum,
+-		     tegra186_dspk_get_control, tegra186_dspk_put_control),
++		     tegra186_dspk_get_osr_val, tegra186_dspk_put_osr_val),
+ 	SOC_ENUM_EXT("LR Polarity Select", tegra186_dspk_lrsel_enum,
+-		     tegra186_dspk_get_control, tegra186_dspk_put_control),
++		     tegra186_dspk_get_pol_sel, tegra186_dspk_put_pol_sel),
+ 	SOC_ENUM_EXT("Channel Select", tegra186_dspk_ch_sel_enum,
+-		     tegra186_dspk_get_control, tegra186_dspk_put_control),
++		     tegra186_dspk_get_ch_sel, tegra186_dspk_put_ch_sel),
+ 	SOC_ENUM_EXT("Mono To Stereo", tegra186_dspk_mono_conv_enum,
+-		     tegra186_dspk_get_control, tegra186_dspk_put_control),
++		     tegra186_dspk_get_mono_to_stereo,
++		     tegra186_dspk_put_mono_to_stereo),
+ 	SOC_ENUM_EXT("Stereo To Mono", tegra186_dspk_stereo_conv_enum,
+-		     tegra186_dspk_get_control, tegra186_dspk_put_control),
++		     tegra186_dspk_get_stereo_to_mono,
++		     tegra186_dspk_put_stereo_to_mono),
+ };
+ 
+ static const struct snd_soc_component_driver tegra186_dspk_cmpnt = {
+diff --git a/sound/soc/tegra/tegra210_admaif.c b/sound/soc/tegra/tegra210_admaif.c
+index 1268046b345d9..d610cbe4a7c4a 100644
+--- a/sound/soc/tegra/tegra210_admaif.c
++++ b/sound/soc/tegra/tegra210_admaif.c
+@@ -424,46 +424,122 @@ static const struct snd_soc_dai_ops tegra_admaif_dai_ops = {
+ 	.trigger	= tegra_admaif_trigger,
+ };
+ 
+-static int tegra_admaif_get_control(struct snd_kcontrol *kcontrol,
+-				    struct snd_ctl_elem_value *ucontrol)
++static int tegra210_admaif_pget_mono_to_stereo(struct snd_kcontrol *kcontrol,
++	struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
++	struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
++
++	ucontrol->value.enumerated.item[0] =
++		admaif->mono_to_stereo[ADMAIF_TX_PATH][ec->reg];
++
++	return 0;
++}
++
++static int tegra210_admaif_pput_mono_to_stereo(struct snd_kcontrol *kcontrol,
++	struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
++	struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == admaif->mono_to_stereo[ADMAIF_TX_PATH][ec->reg])
++		return 0;
++
++	admaif->mono_to_stereo[ADMAIF_TX_PATH][ec->reg] = value;
++
++	return 1;
++}
++
++static int tegra210_admaif_cget_mono_to_stereo(struct snd_kcontrol *kcontrol,
++	struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
++	struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
++
++	ucontrol->value.enumerated.item[0] =
++		admaif->mono_to_stereo[ADMAIF_RX_PATH][ec->reg];
++
++	return 0;
++}
++
++static int tegra210_admaif_cput_mono_to_stereo(struct snd_kcontrol *kcontrol,
++	struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
+ 	struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == admaif->mono_to_stereo[ADMAIF_RX_PATH][ec->reg])
++		return 0;
++
++	admaif->mono_to_stereo[ADMAIF_RX_PATH][ec->reg] = value;
++
++	return 1;
++}
++
++static int tegra210_admaif_pget_stereo_to_mono(struct snd_kcontrol *kcontrol,
++	struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
+ 	struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
+-	long *uctl_val = &ucontrol->value.integer.value[0];
++	struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
+ 
+-	if (strstr(kcontrol->id.name, "Playback Mono To Stereo"))
+-		*uctl_val = admaif->mono_to_stereo[ADMAIF_TX_PATH][ec->reg];
+-	else if (strstr(kcontrol->id.name, "Capture Mono To Stereo"))
+-		*uctl_val = admaif->mono_to_stereo[ADMAIF_RX_PATH][ec->reg];
+-	else if (strstr(kcontrol->id.name, "Playback Stereo To Mono"))
+-		*uctl_val = admaif->stereo_to_mono[ADMAIF_TX_PATH][ec->reg];
+-	else if (strstr(kcontrol->id.name, "Capture Stereo To Mono"))
+-		*uctl_val = admaif->stereo_to_mono[ADMAIF_RX_PATH][ec->reg];
++	ucontrol->value.enumerated.item[0] =
++		admaif->stereo_to_mono[ADMAIF_TX_PATH][ec->reg];
+ 
+ 	return 0;
+ }
+ 
+-static int tegra_admaif_put_control(struct snd_kcontrol *kcontrol,
+-				    struct snd_ctl_elem_value *ucontrol)
++static int tegra210_admaif_pput_stereo_to_mono(struct snd_kcontrol *kcontrol,
++	struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
+ 	struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == admaif->stereo_to_mono[ADMAIF_TX_PATH][ec->reg])
++		return 0;
++
++	admaif->stereo_to_mono[ADMAIF_TX_PATH][ec->reg] = value;
++
++	return 1;
++}
++
++static int tegra210_admaif_cget_stereo_to_mono(struct snd_kcontrol *kcontrol,
++	struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
+ 	struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
+-	int value = ucontrol->value.integer.value[0];
++	struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
+ 
+-	if (strstr(kcontrol->id.name, "Playback Mono To Stereo"))
+-		admaif->mono_to_stereo[ADMAIF_TX_PATH][ec->reg] = value;
+-	else if (strstr(kcontrol->id.name, "Capture Mono To Stereo"))
+-		admaif->mono_to_stereo[ADMAIF_RX_PATH][ec->reg] = value;
+-	else if (strstr(kcontrol->id.name, "Playback Stereo To Mono"))
+-		admaif->stereo_to_mono[ADMAIF_TX_PATH][ec->reg] = value;
+-	else if (strstr(kcontrol->id.name, "Capture Stereo To Mono"))
+-		admaif->stereo_to_mono[ADMAIF_RX_PATH][ec->reg] = value;
++	ucontrol->value.enumerated.item[0] =
++		admaif->stereo_to_mono[ADMAIF_RX_PATH][ec->reg];
+ 
+ 	return 0;
+ }
+ 
++static int tegra210_admaif_cput_stereo_to_mono(struct snd_kcontrol *kcontrol,
++	struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
++	struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == admaif->stereo_to_mono[ADMAIF_RX_PATH][ec->reg])
++		return 0;
++
++	admaif->stereo_to_mono[ADMAIF_RX_PATH][ec->reg] = value;
++
++	return 1;
++}
++
+ static int tegra_admaif_dai_probe(struct snd_soc_dai *dai)
+ {
+ 	struct tegra_admaif *admaif = snd_soc_dai_get_drvdata(dai);
+@@ -559,17 +635,21 @@ static const char * const tegra_admaif_mono_conv_text[] = {
+ }
+ 
+ #define TEGRA_ADMAIF_CIF_CTRL(reg)					       \
+-	NV_SOC_ENUM_EXT("ADMAIF" #reg " Playback Mono To Stereo", reg - 1,\
+-			tegra_admaif_get_control, tegra_admaif_put_control,    \
++	NV_SOC_ENUM_EXT("ADMAIF" #reg " Playback Mono To Stereo", reg - 1,     \
++			tegra210_admaif_pget_mono_to_stereo,		       \
++			tegra210_admaif_pput_mono_to_stereo,		       \
+ 			tegra_admaif_mono_conv_text),			       \
+-	NV_SOC_ENUM_EXT("ADMAIF" #reg " Playback Stereo To Mono", reg - 1,\
+-			tegra_admaif_get_control, tegra_admaif_put_control,    \
++	NV_SOC_ENUM_EXT("ADMAIF" #reg " Playback Stereo To Mono", reg - 1,     \
++			tegra210_admaif_pget_stereo_to_mono,		       \
++			tegra210_admaif_pput_stereo_to_mono,		       \
+ 			tegra_admaif_stereo_conv_text),			       \
+-	NV_SOC_ENUM_EXT("ADMAIF" #reg " Capture Mono To Stereo", reg - 1, \
+-			tegra_admaif_get_control, tegra_admaif_put_control,    \
++	NV_SOC_ENUM_EXT("ADMAIF" #reg " Capture Mono To Stereo", reg - 1,      \
++			tegra210_admaif_cget_mono_to_stereo,		       \
++			tegra210_admaif_cput_mono_to_stereo,		       \
+ 			tegra_admaif_mono_conv_text),			       \
+-	NV_SOC_ENUM_EXT("ADMAIF" #reg " Capture Stereo To Mono", reg - 1, \
+-			tegra_admaif_get_control, tegra_admaif_put_control,    \
++	NV_SOC_ENUM_EXT("ADMAIF" #reg " Capture Stereo To Mono", reg - 1,      \
++			tegra210_admaif_cget_stereo_to_mono,		       \
++			tegra210_admaif_cput_stereo_to_mono,		       \
+ 			tegra_admaif_stereo_conv_text)
+ 
+ static struct snd_kcontrol_new tegra210_admaif_controls[] = {
+diff --git a/sound/soc/tegra/tegra210_ahub.c b/sound/soc/tegra/tegra210_ahub.c
+index 66287a7c9865d..1b2f7cb8c6adc 100644
+--- a/sound/soc/tegra/tegra210_ahub.c
++++ b/sound/soc/tegra/tegra210_ahub.c
+@@ -62,6 +62,7 @@ static int tegra_ahub_put_value_enum(struct snd_kcontrol *kctl,
+ 	unsigned int *item = uctl->value.enumerated.item;
+ 	unsigned int value = e->values[item[0]];
+ 	unsigned int i, bit_pos, reg_idx = 0, reg_val = 0;
++	int change = 0;
+ 
+ 	if (item[0] >= e->items)
+ 		return -EINVAL;
+@@ -86,12 +87,14 @@ static int tegra_ahub_put_value_enum(struct snd_kcontrol *kctl,
+ 
+ 		/* Update widget power if state has changed */
+ 		if (snd_soc_component_test_bits(cmpnt, update[i].reg,
+-						update[i].mask, update[i].val))
+-			snd_soc_dapm_mux_update_power(dapm, kctl, item[0], e,
+-						      &update[i]);
++						update[i].mask,
++						update[i].val))
++			change |= snd_soc_dapm_mux_update_power(dapm, kctl,
++								item[0], e,
++								&update[i]);
+ 	}
+ 
+-	return 0;
++	return change;
+ }
+ 
+ static struct snd_soc_dai_driver tegra210_ahub_dais[] = {
+diff --git a/sound/soc/tegra/tegra210_dmic.c b/sound/soc/tegra/tegra210_dmic.c
+index a661f40bc41c7..dd3481ae3372f 100644
+--- a/sound/soc/tegra/tegra210_dmic.c
++++ b/sound/soc/tegra/tegra210_dmic.c
+@@ -156,51 +156,162 @@ static int tegra210_dmic_hw_params(struct snd_pcm_substream *substream,
+ 	return 0;
+ }
+ 
+-static int tegra210_dmic_get_control(struct snd_kcontrol *kcontrol,
++static int tegra210_dmic_get_boost_gain(struct snd_kcontrol *kcontrol,
++					struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
++
++	ucontrol->value.integer.value[0] = dmic->boost_gain;
++
++	return 0;
++}
++
++static int tegra210_dmic_put_boost_gain(struct snd_kcontrol *kcontrol,
++					struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
++	int value = ucontrol->value.integer.value[0];
++
++	if (value == dmic->boost_gain)
++		return 0;
++
++	dmic->boost_gain = value;
++
++	return 1;
++}
++
++static int tegra210_dmic_get_ch_select(struct snd_kcontrol *kcontrol,
++				       struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
++
++	ucontrol->value.enumerated.item[0] = dmic->ch_select;
++
++	return 0;
++}
++
++static int tegra210_dmic_put_ch_select(struct snd_kcontrol *kcontrol,
++				       struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == dmic->ch_select)
++		return 0;
++
++	dmic->ch_select = value;
++
++	return 1;
++}
++
++static int tegra210_dmic_get_mono_to_stereo(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
++
++	ucontrol->value.enumerated.item[0] = dmic->mono_to_stereo;
++
++	return 0;
++}
++
++static int tegra210_dmic_put_mono_to_stereo(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == dmic->mono_to_stereo)
++		return 0;
++
++	dmic->mono_to_stereo = value;
++
++	return 1;
++}
++
++static int tegra210_dmic_get_stereo_to_mono(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
++
++	ucontrol->value.enumerated.item[0] = dmic->stereo_to_mono;
++
++	return 0;
++}
++
++static int tegra210_dmic_put_stereo_to_mono(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == dmic->stereo_to_mono)
++		return 0;
++
++	dmic->stereo_to_mono = value;
++
++	return 1;
++}
++
++static int tegra210_dmic_get_osr_val(struct snd_kcontrol *kcontrol,
+ 				     struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+ 	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
+ 
+-	if (strstr(kcontrol->id.name, "Boost Gain Volume"))
+-		ucontrol->value.integer.value[0] = dmic->boost_gain;
+-	else if (strstr(kcontrol->id.name, "Channel Select"))
+-		ucontrol->value.integer.value[0] = dmic->ch_select;
+-	else if (strstr(kcontrol->id.name, "Mono To Stereo"))
+-		ucontrol->value.integer.value[0] = dmic->mono_to_stereo;
+-	else if (strstr(kcontrol->id.name, "Stereo To Mono"))
+-		ucontrol->value.integer.value[0] = dmic->stereo_to_mono;
+-	else if (strstr(kcontrol->id.name, "OSR Value"))
+-		ucontrol->value.integer.value[0] = dmic->osr_val;
+-	else if (strstr(kcontrol->id.name, "LR Polarity Select"))
+-		ucontrol->value.integer.value[0] = dmic->lrsel;
++	ucontrol->value.enumerated.item[0] = dmic->osr_val;
+ 
+ 	return 0;
+ }
+ 
+-static int tegra210_dmic_put_control(struct snd_kcontrol *kcontrol,
++static int tegra210_dmic_put_osr_val(struct snd_kcontrol *kcontrol,
+ 				     struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+ 	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
+-	int value = ucontrol->value.integer.value[0];
++	unsigned int value = ucontrol->value.enumerated.item[0];
+ 
+-	if (strstr(kcontrol->id.name, "Boost Gain Volume"))
+-		dmic->boost_gain = value;
+-	else if (strstr(kcontrol->id.name, "Channel Select"))
+-		dmic->ch_select = ucontrol->value.integer.value[0];
+-	else if (strstr(kcontrol->id.name, "Mono To Stereo"))
+-		dmic->mono_to_stereo = value;
+-	else if (strstr(kcontrol->id.name, "Stereo To Mono"))
+-		dmic->stereo_to_mono = value;
+-	else if (strstr(kcontrol->id.name, "OSR Value"))
+-		dmic->osr_val = value;
+-	else if (strstr(kcontrol->id.name, "LR Polarity Select"))
+-		dmic->lrsel = value;
++	if (value == dmic->osr_val)
++		return 0;
++
++	dmic->osr_val = value;
++
++	return 1;
++}
++
++static int tegra210_dmic_get_pol_sel(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
++
++	ucontrol->value.enumerated.item[0] = dmic->lrsel;
+ 
+ 	return 0;
+ }
+ 
++static int tegra210_dmic_put_pol_sel(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == dmic->lrsel)
++		return 0;
++
++	dmic->lrsel = value;
++
++	return 1;
++}
++
+ static const struct snd_soc_dai_ops tegra210_dmic_dai_ops = {
+ 	.hw_params	= tegra210_dmic_hw_params,
+ };
+@@ -287,19 +398,22 @@ static const struct soc_enum tegra210_dmic_lrsel_enum =
+ 
+ static const struct snd_kcontrol_new tegra210_dmic_controls[] = {
+ 	SOC_SINGLE_EXT("Boost Gain Volume", 0, 0, MAX_BOOST_GAIN, 0,
+-		       tegra210_dmic_get_control, tegra210_dmic_put_control),
++		       tegra210_dmic_get_boost_gain,
++		       tegra210_dmic_put_boost_gain),
+ 	SOC_ENUM_EXT("Channel Select", tegra210_dmic_ch_enum,
+-		     tegra210_dmic_get_control, tegra210_dmic_put_control),
++		     tegra210_dmic_get_ch_select, tegra210_dmic_put_ch_select),
+ 	SOC_ENUM_EXT("Mono To Stereo",
+-		     tegra210_dmic_mono_conv_enum, tegra210_dmic_get_control,
+-		     tegra210_dmic_put_control),
++		     tegra210_dmic_mono_conv_enum,
++		     tegra210_dmic_get_mono_to_stereo,
++		     tegra210_dmic_put_mono_to_stereo),
+ 	SOC_ENUM_EXT("Stereo To Mono",
+-		     tegra210_dmic_stereo_conv_enum, tegra210_dmic_get_control,
+-		     tegra210_dmic_put_control),
++		     tegra210_dmic_stereo_conv_enum,
++		     tegra210_dmic_get_stereo_to_mono,
++		     tegra210_dmic_put_stereo_to_mono),
+ 	SOC_ENUM_EXT("OSR Value", tegra210_dmic_osr_enum,
+-		     tegra210_dmic_get_control, tegra210_dmic_put_control),
++		     tegra210_dmic_get_osr_val, tegra210_dmic_put_osr_val),
+ 	SOC_ENUM_EXT("LR Polarity Select", tegra210_dmic_lrsel_enum,
+-		     tegra210_dmic_get_control, tegra210_dmic_put_control),
++		     tegra210_dmic_get_pol_sel, tegra210_dmic_put_pol_sel),
+ };
+ 
+ static const struct snd_soc_component_driver tegra210_dmic_compnt = {
+diff --git a/sound/soc/tegra/tegra210_i2s.c b/sound/soc/tegra/tegra210_i2s.c
+index a383bd5c51cd4..33fb37ad1391d 100644
+--- a/sound/soc/tegra/tegra210_i2s.c
++++ b/sound/soc/tegra/tegra210_i2s.c
+@@ -302,85 +302,235 @@ static int tegra210_i2s_set_tdm_slot(struct snd_soc_dai *dai,
+ 	return 0;
+ }
+ 
+-static int tegra210_i2s_set_dai_bclk_ratio(struct snd_soc_dai *dai,
+-					   unsigned int ratio)
++static int tegra210_i2s_get_loopback(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
+ {
+-	struct tegra210_i2s *i2s = snd_soc_dai_get_drvdata(dai);
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+ 
+-	i2s->bclk_ratio = ratio;
++	ucontrol->value.integer.value[0] = i2s->loopback;
+ 
+ 	return 0;
+ }
+ 
+-static int tegra210_i2s_get_control(struct snd_kcontrol *kcontrol,
+-				    struct snd_ctl_elem_value *ucontrol)
++static int tegra210_i2s_put_loopback(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++	int value = ucontrol->value.integer.value[0];
++
++	if (value == i2s->loopback)
++		return 0;
++
++	i2s->loopback = value;
++
++	regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL, I2S_CTRL_LPBK_MASK,
++			   i2s->loopback << I2S_CTRL_LPBK_SHIFT);
++
++	return 1;
++}
++
++static int tegra210_i2s_get_fsync_width(struct snd_kcontrol *kcontrol,
++					struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+ 	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+-	long *uctl_val = &ucontrol->value.integer.value[0];
+-
+-	if (strstr(kcontrol->id.name, "Loopback"))
+-		*uctl_val = i2s->loopback;
+-	else if (strstr(kcontrol->id.name, "FSYNC Width"))
+-		*uctl_val = i2s->fsync_width;
+-	else if (strstr(kcontrol->id.name, "Capture Stereo To Mono"))
+-		*uctl_val = i2s->stereo_to_mono[I2S_TX_PATH];
+-	else if (strstr(kcontrol->id.name, "Capture Mono To Stereo"))
+-		*uctl_val = i2s->mono_to_stereo[I2S_TX_PATH];
+-	else if (strstr(kcontrol->id.name, "Playback Stereo To Mono"))
+-		*uctl_val = i2s->stereo_to_mono[I2S_RX_PATH];
+-	else if (strstr(kcontrol->id.name, "Playback Mono To Stereo"))
+-		*uctl_val = i2s->mono_to_stereo[I2S_RX_PATH];
+-	else if (strstr(kcontrol->id.name, "Playback FIFO Threshold"))
+-		*uctl_val = i2s->rx_fifo_th;
+-	else if (strstr(kcontrol->id.name, "BCLK Ratio"))
+-		*uctl_val = i2s->bclk_ratio;
++
++	ucontrol->value.integer.value[0] = i2s->fsync_width;
+ 
+ 	return 0;
+ }
+ 
+-static int tegra210_i2s_put_control(struct snd_kcontrol *kcontrol,
+-				    struct snd_ctl_elem_value *ucontrol)
++static int tegra210_i2s_put_fsync_width(struct snd_kcontrol *kcontrol,
++					struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+ 	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+ 	int value = ucontrol->value.integer.value[0];
+ 
+-	if (strstr(kcontrol->id.name, "Loopback")) {
+-		i2s->loopback = value;
++	if (value == i2s->fsync_width)
++		return 0;
+ 
+-		regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL,
+-				   I2S_CTRL_LPBK_MASK,
+-				   i2s->loopback << I2S_CTRL_LPBK_SHIFT);
++	i2s->fsync_width = value;
+ 
+-	} else if (strstr(kcontrol->id.name, "FSYNC Width")) {
+-		/*
+-		 * Frame sync width is used only for FSYNC modes and not
+-		 * applicable for LRCK modes. Reset value for this field is "0",
+-		 * which means the width is one bit clock wide.
+-		 * The width requirement may depend on the codec and in such
+-		 * cases mixer control is used to update custom values. A value
+-		 * of "N" here means, width is "N + 1" bit clock wide.
+-		 */
+-		i2s->fsync_width = value;
+-
+-		regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL,
+-				   I2S_CTRL_FSYNC_WIDTH_MASK,
+-				   i2s->fsync_width << I2S_FSYNC_WIDTH_SHIFT);
+-
+-	} else if (strstr(kcontrol->id.name, "Capture Stereo To Mono")) {
+-		i2s->stereo_to_mono[I2S_TX_PATH] = value;
+-	} else if (strstr(kcontrol->id.name, "Capture Mono To Stereo")) {
+-		i2s->mono_to_stereo[I2S_TX_PATH] = value;
+-	} else if (strstr(kcontrol->id.name, "Playback Stereo To Mono")) {
+-		i2s->stereo_to_mono[I2S_RX_PATH] = value;
+-	} else if (strstr(kcontrol->id.name, "Playback Mono To Stereo")) {
+-		i2s->mono_to_stereo[I2S_RX_PATH] = value;
+-	} else if (strstr(kcontrol->id.name, "Playback FIFO Threshold")) {
+-		i2s->rx_fifo_th = value;
+-	} else if (strstr(kcontrol->id.name, "BCLK Ratio")) {
+-		i2s->bclk_ratio = value;
+-	}
++	/*
++	 * Frame sync width is used only for FSYNC modes and not
++	 * applicable for LRCK modes. Reset value for this field is "0",
++	 * which means the width is one bit clock wide.
++	 * The width requirement may depend on the codec and in such
++	 * cases mixer control is used to update custom values. A value
++	 * of "N" here means, width is "N + 1" bit clock wide.
++	 */
++	regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL,
++			   I2S_CTRL_FSYNC_WIDTH_MASK,
++			   i2s->fsync_width << I2S_FSYNC_WIDTH_SHIFT);
++
++	return 1;
++}
++
++static int tegra210_i2s_cget_stereo_to_mono(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++
++	ucontrol->value.enumerated.item[0] = i2s->stereo_to_mono[I2S_TX_PATH];
++
++	return 0;
++}
++
++static int tegra210_i2s_cput_stereo_to_mono(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == i2s->stereo_to_mono[I2S_TX_PATH])
++		return 0;
++
++	i2s->stereo_to_mono[I2S_TX_PATH] = value;
++
++	return 1;
++}
++
++static int tegra210_i2s_cget_mono_to_stereo(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++
++	ucontrol->value.enumerated.item[0] = i2s->mono_to_stereo[I2S_TX_PATH];
++
++	return 0;
++}
++
++static int tegra210_i2s_cput_mono_to_stereo(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == i2s->mono_to_stereo[I2S_TX_PATH])
++		return 0;
++
++	i2s->mono_to_stereo[I2S_TX_PATH] = value;
++
++	return 1;
++}
++
++static int tegra210_i2s_pget_stereo_to_mono(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++
++	ucontrol->value.enumerated.item[0] = i2s->stereo_to_mono[I2S_RX_PATH];
++
++	return 0;
++}
++
++static int tegra210_i2s_pput_stereo_to_mono(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == i2s->stereo_to_mono[I2S_RX_PATH])
++		return 0;
++
++	i2s->stereo_to_mono[I2S_RX_PATH] = value;
++
++	return 1;
++}
++
++static int tegra210_i2s_pget_mono_to_stereo(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++
++	ucontrol->value.enumerated.item[0] = i2s->mono_to_stereo[I2S_RX_PATH];
++
++	return 0;
++}
++
++static int tegra210_i2s_pput_mono_to_stereo(struct snd_kcontrol *kcontrol,
++					    struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++	unsigned int value = ucontrol->value.enumerated.item[0];
++
++	if (value == i2s->mono_to_stereo[I2S_RX_PATH])
++		return 0;
++
++	i2s->mono_to_stereo[I2S_RX_PATH] = value;
++
++	return 1;
++}
++
++static int tegra210_i2s_pget_fifo_th(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++
++	ucontrol->value.integer.value[0] = i2s->rx_fifo_th;
++
++	return 0;
++}
++
++static int tegra210_i2s_pput_fifo_th(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++	int value = ucontrol->value.integer.value[0];
++
++	if (value == i2s->rx_fifo_th)
++		return 0;
++
++	i2s->rx_fifo_th = value;
++
++	return 1;
++}
++
++static int tegra210_i2s_get_bclk_ratio(struct snd_kcontrol *kcontrol,
++				       struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++
++	ucontrol->value.integer.value[0] = i2s->bclk_ratio;
++
++	return 0;
++}
++
++static int tegra210_i2s_put_bclk_ratio(struct snd_kcontrol *kcontrol,
++				       struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
++	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
++	int value = ucontrol->value.integer.value[0];
++
++	if (value == i2s->bclk_ratio)
++		return 0;
++
++	i2s->bclk_ratio = value;
++
++	return 1;
++}
++
++static int tegra210_i2s_set_dai_bclk_ratio(struct snd_soc_dai *dai,
++					   unsigned int ratio)
++{
++	struct tegra210_i2s *i2s = snd_soc_dai_get_drvdata(dai);
++
++	i2s->bclk_ratio = ratio;
+ 
+ 	return 0;
+ }
+@@ -598,22 +748,28 @@ static const struct soc_enum tegra210_i2s_stereo_conv_enum =
+ 			tegra210_i2s_stereo_conv_text);
+ 
+ static const struct snd_kcontrol_new tegra210_i2s_controls[] = {
+-	SOC_SINGLE_EXT("Loopback", 0, 0, 1, 0, tegra210_i2s_get_control,
+-		       tegra210_i2s_put_control),
+-	SOC_SINGLE_EXT("FSYNC Width", 0, 0, 255, 0, tegra210_i2s_get_control,
+-		       tegra210_i2s_put_control),
++	SOC_SINGLE_EXT("Loopback", 0, 0, 1, 0, tegra210_i2s_get_loopback,
++		       tegra210_i2s_put_loopback),
++	SOC_SINGLE_EXT("FSYNC Width", 0, 0, 255, 0,
++		       tegra210_i2s_get_fsync_width,
++		       tegra210_i2s_put_fsync_width),
+ 	SOC_ENUM_EXT("Capture Stereo To Mono", tegra210_i2s_stereo_conv_enum,
+-		     tegra210_i2s_get_control, tegra210_i2s_put_control),
++		     tegra210_i2s_cget_stereo_to_mono,
++		     tegra210_i2s_cput_stereo_to_mono),
+ 	SOC_ENUM_EXT("Capture Mono To Stereo", tegra210_i2s_mono_conv_enum,
+-		     tegra210_i2s_get_control, tegra210_i2s_put_control),
++		     tegra210_i2s_cget_mono_to_stereo,
++		     tegra210_i2s_cput_mono_to_stereo),
+ 	SOC_ENUM_EXT("Playback Stereo To Mono", tegra210_i2s_stereo_conv_enum,
+-		     tegra210_i2s_get_control, tegra210_i2s_put_control),
++		     tegra210_i2s_pget_mono_to_stereo,
++		     tegra210_i2s_pput_mono_to_stereo),
+ 	SOC_ENUM_EXT("Playback Mono To Stereo", tegra210_i2s_mono_conv_enum,
+-		     tegra210_i2s_get_control, tegra210_i2s_put_control),
++		     tegra210_i2s_pget_stereo_to_mono,
++		     tegra210_i2s_pput_stereo_to_mono),
+ 	SOC_SINGLE_EXT("Playback FIFO Threshold", 0, 0, I2S_RX_FIFO_DEPTH - 1,
+-		       0, tegra210_i2s_get_control, tegra210_i2s_put_control),
+-	SOC_SINGLE_EXT("BCLK Ratio", 0, 0, INT_MAX, 0, tegra210_i2s_get_control,
+-		       tegra210_i2s_put_control),
++		       0, tegra210_i2s_pget_fifo_th, tegra210_i2s_pput_fifo_th),
++	SOC_SINGLE_EXT("BCLK Ratio", 0, 0, INT_MAX, 0,
++		       tegra210_i2s_get_bclk_ratio,
++		       tegra210_i2s_put_bclk_ratio),
+ };
+ 
+ static const struct snd_soc_dapm_widget tegra210_i2s_widgets[] = {
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 5824aa24acfcc..91cab5cdfbc16 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -610,14 +610,17 @@ static int report__browse_hists(struct report *rep)
+ 	int ret;
+ 	struct perf_session *session = rep->session;
+ 	struct evlist *evlist = session->evlist;
+-	const char *help = perf_tip(system_path(TIPDIR));
++	char *help = NULL, *path = NULL;
+ 
+-	if (help == NULL) {
++	path = system_path(TIPDIR);
++	if (perf_tip(&help, path) || help == NULL) {
+ 		/* fallback for people who don't install perf ;-) */
+-		help = perf_tip(DOCDIR);
+-		if (help == NULL)
+-			help = "Cannot load tips.txt file, please install perf!";
++		free(path);
++		path = system_path(DOCDIR);
++		if (perf_tip(&help, path) || help == NULL)
++			help = strdup("Cannot load tips.txt file, please install perf!");
+ 	}
++	free(path);
+ 
+ 	switch (use_browser) {
+ 	case 1:
+@@ -644,7 +647,7 @@ static int report__browse_hists(struct report *rep)
+ 		ret = perf_evlist__tty_browse_hists(evlist, rep, help);
+ 		break;
+ 	}
+-
++	free(help);
+ 	return ret;
+ }
+ 
+diff --git a/tools/perf/ui/hist.c b/tools/perf/ui/hist.c
+index c1f24d0048527..5075ecead5f3d 100644
+--- a/tools/perf/ui/hist.c
++++ b/tools/perf/ui/hist.c
+@@ -535,6 +535,18 @@ struct perf_hpp_list perf_hpp_list = {
+ #undef __HPP_SORT_ACC_FN
+ #undef __HPP_SORT_RAW_FN
+ 
++static void fmt_free(struct perf_hpp_fmt *fmt)
++{
++	/*
++	 * At this point fmt should be completely
++	 * unhooked, if not it's a bug.
++	 */
++	BUG_ON(!list_empty(&fmt->list));
++	BUG_ON(!list_empty(&fmt->sort_list));
++
++	if (fmt->free)
++		fmt->free(fmt);
++}
+ 
+ void perf_hpp__init(void)
+ {
+@@ -598,9 +610,10 @@ void perf_hpp_list__prepend_sort_field(struct perf_hpp_list *list,
+ 	list_add(&format->sort_list, &list->sorts);
+ }
+ 
+-void perf_hpp__column_unregister(struct perf_hpp_fmt *format)
++static void perf_hpp__column_unregister(struct perf_hpp_fmt *format)
+ {
+ 	list_del_init(&format->list);
++	fmt_free(format);
+ }
+ 
+ void perf_hpp__cancel_cumulate(void)
+@@ -672,19 +685,6 @@ next:
+ }
+ 
+ 
+-static void fmt_free(struct perf_hpp_fmt *fmt)
+-{
+-	/*
+-	 * At this point fmt should be completely
+-	 * unhooked, if not it's a bug.
+-	 */
+-	BUG_ON(!list_empty(&fmt->list));
+-	BUG_ON(!list_empty(&fmt->sort_list));
+-
+-	if (fmt->free)
+-		fmt->free(fmt);
+-}
+-
+ void perf_hpp__reset_output_field(struct perf_hpp_list *list)
+ {
+ 	struct perf_hpp_fmt *fmt, *tmp;
+diff --git a/tools/perf/util/arm-spe.c b/tools/perf/util/arm-spe.c
+index 3882a5360ada4..0350020acb96f 100644
+--- a/tools/perf/util/arm-spe.c
++++ b/tools/perf/util/arm-spe.c
+@@ -48,6 +48,7 @@ struct arm_spe {
+ 	u8				timeless_decoding;
+ 	u8				data_queued;
+ 
++	u64				sample_type;
+ 	u8				sample_flc;
+ 	u8				sample_llc;
+ 	u8				sample_tlb;
+@@ -244,6 +245,12 @@ static void arm_spe_prep_sample(struct arm_spe *spe,
+ 	event->sample.header.size = sizeof(struct perf_event_header);
+ }
+ 
++static int arm_spe__inject_event(union perf_event *event, struct perf_sample *sample, u64 type)
++{
++	event->header.size = perf_event__sample_event_size(sample, type, 0);
++	return perf_event__synthesize_sample(event, type, 0, sample);
++}
++
+ static inline int
+ arm_spe_deliver_synth_event(struct arm_spe *spe,
+ 			    struct arm_spe_queue *speq __maybe_unused,
+@@ -252,6 +259,12 @@ arm_spe_deliver_synth_event(struct arm_spe *spe,
+ {
+ 	int ret;
+ 
++	if (spe->synth_opts.inject) {
++		ret = arm_spe__inject_event(event, sample, spe->sample_type);
++		if (ret)
++			return ret;
++	}
++
+ 	ret = perf_session__deliver_synth_event(spe->session, event, sample);
+ 	if (ret)
+ 		pr_err("ARM SPE: failed to deliver event, error %d\n", ret);
+@@ -809,6 +822,8 @@ arm_spe_synth_events(struct arm_spe *spe, struct perf_session *session)
+ 	else
+ 		attr.sample_type |= PERF_SAMPLE_TIME;
+ 
++	spe->sample_type = attr.sample_type;
++
+ 	attr.exclude_user = evsel->core.attr.exclude_user;
+ 	attr.exclude_kernel = evsel->core.attr.exclude_kernel;
+ 	attr.exclude_hv = evsel->core.attr.exclude_hv;
+diff --git a/tools/perf/util/hist.h b/tools/perf/util/hist.h
+index 96b1c13bbccc5..919f2c6c48142 100644
+--- a/tools/perf/util/hist.h
++++ b/tools/perf/util/hist.h
+@@ -362,7 +362,6 @@ enum {
+ };
+ 
+ void perf_hpp__init(void);
+-void perf_hpp__column_unregister(struct perf_hpp_fmt *format);
+ void perf_hpp__cancel_cumulate(void);
+ void perf_hpp__setup_output_field(struct perf_hpp_list *list);
+ void perf_hpp__reset_output_field(struct perf_hpp_list *list);
+diff --git a/tools/perf/util/util.c b/tools/perf/util/util.c
+index 37a9492edb3eb..df3c4671be72a 100644
+--- a/tools/perf/util/util.c
++++ b/tools/perf/util/util.c
+@@ -379,32 +379,32 @@ fetch_kernel_version(unsigned int *puint, char *str,
+ 	return 0;
+ }
+ 
+-const char *perf_tip(const char *dirpath)
++int perf_tip(char **strp, const char *dirpath)
+ {
+ 	struct strlist *tips;
+ 	struct str_node *node;
+-	char *tip = NULL;
+ 	struct strlist_config conf = {
+ 		.dirname = dirpath,
+ 		.file_only = true,
+ 	};
++	int ret = 0;
+ 
++	*strp = NULL;
+ 	tips = strlist__new("tips.txt", &conf);
+ 	if (tips == NULL)
+-		return errno == ENOENT ? NULL :
+-			"Tip: check path of tips.txt or get more memory! ;-p";
++		return -errno;
+ 
+ 	if (strlist__nr_entries(tips) == 0)
+ 		goto out;
+ 
+ 	node = strlist__entry(tips, random() % strlist__nr_entries(tips));
+-	if (asprintf(&tip, "Tip: %s", node->s) < 0)
+-		tip = (char *)"Tip: get more memory! ;-)";
++	if (asprintf(strp, "Tip: %s", node->s) < 0)
++		ret = -ENOMEM;
+ 
+ out:
+ 	strlist__delete(tips);
+ 
+-	return tip;
++	return ret;
+ }
+ 
+ char *perf_exe(char *buf, int len)
+diff --git a/tools/perf/util/util.h b/tools/perf/util/util.h
+index ad737052e5977..9f0d36ba77f2d 100644
+--- a/tools/perf/util/util.h
++++ b/tools/perf/util/util.h
+@@ -39,7 +39,7 @@ int fetch_kernel_version(unsigned int *puint,
+ #define KVER_FMT	"%d.%d.%d"
+ #define KVER_PARAM(x)	KVER_VERSION(x), KVER_PATCHLEVEL(x), KVER_SUBLEVEL(x)
+ 
+-const char *perf_tip(const char *dirpath);
++int perf_tip(char **strp, const char *dirpath);
+ 
+ #ifndef HAVE_SCHED_GETCPU_SUPPORT
+ int sched_getcpu(void);
+diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
+index 225440f5f99eb..7c9ace9d15991 100755
+--- a/tools/testing/selftests/net/fcnal-test.sh
++++ b/tools/testing/selftests/net/fcnal-test.sh
+@@ -3911,8 +3911,8 @@ EOF
+ ################################################################################
+ # main
+ 
+-TESTS_IPV4="ipv4_ping ipv4_tcp ipv4_udp ipv4_addr_bind ipv4_runtime ipv4_netfilter"
+-TESTS_IPV6="ipv6_ping ipv6_tcp ipv6_udp ipv6_addr_bind ipv6_runtime ipv6_netfilter"
++TESTS_IPV4="ipv4_ping ipv4_tcp ipv4_udp ipv4_bind ipv4_runtime ipv4_netfilter"
++TESTS_IPV6="ipv6_ping ipv6_tcp ipv6_udp ipv6_bind ipv6_runtime ipv6_netfilter"
+ TESTS_OTHER="use_cases"
+ 
+ PAUSE_ON_FAIL=no
+diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh
+index ebc4ee0fe179f..8a9461aa0878a 100755
+--- a/tools/testing/selftests/wireguard/netns.sh
++++ b/tools/testing/selftests/wireguard/netns.sh
+@@ -276,7 +276,11 @@ n0 ping -W 1 -c 1 192.168.241.2
+ n1 wg set wg0 peer "$pub2" endpoint 192.168.241.2:7
+ ip2 link del wg0
+ ip2 link del wg1
+-! n0 ping -W 1 -c 10 -f 192.168.241.2 || false # Should not crash kernel
++read _ _ tx_bytes_before < <(n0 wg show wg1 transfer)
++! n0 ping -W 1 -c 10 -f 192.168.241.2 || false
++sleep 1
++read _ _ tx_bytes_after < <(n0 wg show wg1 transfer)
++(( tx_bytes_after - tx_bytes_before < 70000 ))
+ 
+ ip0 link del wg1
+ ip1 link del wg0
+@@ -609,6 +613,28 @@ ip0 link set wg0 up
+ kill $ncat_pid
+ ip0 link del wg0
+ 
++# Ensure that dst_cache references don't outlive netns lifetime
++ip1 link add dev wg0 type wireguard
++ip2 link add dev wg0 type wireguard
++configure_peers
++ip1 link add veth1 type veth peer name veth2
++ip1 link set veth2 netns $netns2
++ip1 addr add fd00:aa::1/64 dev veth1
++ip2 addr add fd00:aa::2/64 dev veth2
++ip1 link set veth1 up
++ip2 link set veth2 up
++waitiface $netns1 veth1
++waitiface $netns2 veth2
++ip1 -6 route add default dev veth1 via fd00:aa::2
++ip2 -6 route add default dev veth2 via fd00:aa::1
++n1 wg set wg0 peer "$pub2" endpoint [fd00:aa::2]:2
++n2 wg set wg0 peer "$pub1" endpoint [fd00:aa::1]:1
++n1 ping6 -c 1 fd00::2
++pp ip netns delete $netns1
++pp ip netns delete $netns2
++pp ip netns add $netns1
++pp ip netns add $netns2
++
+ # Ensure there aren't circular reference loops
+ ip1 link add wg1 type wireguard
+ ip2 link add wg2 type wireguard
+@@ -627,7 +653,7 @@ while read -t 0.1 -r line 2>/dev/null || [[ $? -ne 142 ]]; do
+ done < /dev/kmsg
+ alldeleted=1
+ for object in "${!objects[@]}"; do
+-	if [[ ${objects["$object"]} != *createddestroyed ]]; then
++	if [[ ${objects["$object"]} != *createddestroyed && ${objects["$object"]} != *createdcreateddestroyeddestroyed ]]; then
+ 		echo "Error: $object: merely ${objects["$object"]}" >&3
+ 		alldeleted=0
+ 	fi
+diff --git a/tools/testing/selftests/wireguard/qemu/debug.config b/tools/testing/selftests/wireguard/qemu/debug.config
+index b50c2085c1ac0..a92c5590e87ca 100644
+--- a/tools/testing/selftests/wireguard/qemu/debug.config
++++ b/tools/testing/selftests/wireguard/qemu/debug.config
+@@ -48,7 +48,7 @@ CONFIG_DEBUG_ATOMIC_SLEEP=y
+ CONFIG_TRACE_IRQFLAGS=y
+ CONFIG_DEBUG_BUGVERBOSE=y
+ CONFIG_DEBUG_LIST=y
+-CONFIG_DEBUG_PI_LIST=y
++CONFIG_DEBUG_PLIST=y
+ CONFIG_PROVE_RCU=y
+ CONFIG_SPARSE_RCU_POINTER=y
+ CONFIG_RCU_CPU_STALL_TIMEOUT=21
+diff --git a/tools/testing/selftests/wireguard/qemu/kernel.config b/tools/testing/selftests/wireguard/qemu/kernel.config
+index 74db83a0aedd8..a9b5a520a1d22 100644
+--- a/tools/testing/selftests/wireguard/qemu/kernel.config
++++ b/tools/testing/selftests/wireguard/qemu/kernel.config
+@@ -66,6 +66,7 @@ CONFIG_PROC_SYSCTL=y
+ CONFIG_SYSFS=y
+ CONFIG_TMPFS=y
+ CONFIG_CONSOLE_LOGLEVEL_DEFAULT=15
++CONFIG_LOG_BUF_SHIFT=18
+ CONFIG_PRINTK_TIME=y
+ CONFIG_BLK_DEV_INITRD=y
+ CONFIG_LEGACY_VSYSCALL_NONE=y
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 57c0c3b18bde7..97ac3c6fd4441 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1297,7 +1297,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
+ 	id = (u16)mem->slot;
+ 
+ 	/* General sanity checks */
+-	if (mem->memory_size & (PAGE_SIZE - 1))
++	if ((mem->memory_size & (PAGE_SIZE - 1)) ||
++	    (mem->memory_size != (unsigned long)mem->memory_size))
+ 		return -EINVAL;
+ 	if (mem->guest_phys_addr & (PAGE_SIZE - 1))
+ 		return -EINVAL;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-12-14 12:12 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-12-14 12:12 UTC (permalink / raw
  To: gentoo-commits

commit:     7bc0d05e4d1b0680ef4e6889c40344b985aeacfc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Dec 14 12:12:03 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Dec 14 12:12:03 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7bc0d05e

Linux patch 5.10.85

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1084_linux-5.10.85.patch | 6226 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6230 insertions(+)

diff --git a/0000_README b/0000_README
index 19b3a8b7..9b6b1174 100644
--- a/0000_README
+++ b/0000_README
@@ -379,6 +379,10 @@ Patch:  1083_linux-5.10.84.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.84
 
+Patch:  1084_linux-5.10.85.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.85
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1084_linux-5.10.85.patch b/1084_linux-5.10.85.patch
new file mode 100644
index 00000000..80c28595
--- /dev/null
+++ b/1084_linux-5.10.85.patch
@@ -0,0 +1,6226 @@
+diff --git a/Documentation/devicetree/bindings/net/ethernet-phy.yaml b/Documentation/devicetree/bindings/net/ethernet-phy.yaml
+index 6dd72faebd896..a054accc20bd4 100644
+--- a/Documentation/devicetree/bindings/net/ethernet-phy.yaml
++++ b/Documentation/devicetree/bindings/net/ethernet-phy.yaml
+@@ -91,6 +91,14 @@ properties:
+       compensate for the board being designed with the lanes
+       swapped.
+ 
++  enet-phy-lane-no-swap:
++    $ref: /schemas/types.yaml#/definitions/flag
++    description:
++      If set, indicates that PHY will disable swap of the
++      TX/RX lanes. This property allows the PHY to work correcly after
++      e.g. wrong bootstrap configuration caused by issues in PCB
++      layout design.
++
+   eee-broken-100tx:
+     $ref: /schemas/types.yaml#definitions/flag
+     description:
+diff --git a/Documentation/kbuild/gcc-plugins.rst b/Documentation/kbuild/gcc-plugins.rst
+index 4b1c10f88e304..3349966f213dc 100644
+--- a/Documentation/kbuild/gcc-plugins.rst
++++ b/Documentation/kbuild/gcc-plugins.rst
+@@ -11,16 +11,13 @@ compiler [1]_. They are useful for runtime instrumentation and static analysis.
+ We can analyse, change and add further code during compilation via
+ callbacks [2]_, GIMPLE [3]_, IPA [4]_ and RTL passes [5]_.
+ 
+-The GCC plugin infrastructure of the kernel supports all gcc versions from
+-4.5 to 6.0, building out-of-tree modules, cross-compilation and building in a
+-separate directory.
+-Plugin source files have to be compilable by both a C and a C++ compiler as well
+-because gcc versions 4.5 and 4.6 are compiled by a C compiler,
+-gcc-4.7 can be compiled by a C or a C++ compiler,
+-and versions 4.8+ can only be compiled by a C++ compiler.
++The GCC plugin infrastructure of the kernel supports building out-of-tree
++modules, cross-compilation and building in a separate directory.
++Plugin source files have to be compilable by a C++ compiler.
+ 
+-Currently the GCC plugin infrastructure supports only the x86, arm, arm64 and
+-powerpc architectures.
++Currently the GCC plugin infrastructure supports only some architectures.
++Grep "select HAVE_GCC_PLUGINS" to find out which architectures support
++GCC plugins.
+ 
+ This infrastructure was ported from grsecurity [6]_ and PaX [7]_.
+ 
+@@ -47,20 +44,13 @@ Files
+ 	This is a compatibility header for GCC plugins.
+ 	It should be always included instead of individual gcc headers.
+ 
+-**$(src)/scripts/gcc-plugin.sh**
+-
+-	This script checks the availability of the included headers in
+-	gcc-common.h and chooses the proper host compiler to build the plugins
+-	(gcc-4.7 can be built by either gcc or g++).
+-
+ **$(src)/scripts/gcc-plugins/gcc-generate-gimple-pass.h,
+ $(src)/scripts/gcc-plugins/gcc-generate-ipa-pass.h,
+ $(src)/scripts/gcc-plugins/gcc-generate-simple_ipa-pass.h,
+ $(src)/scripts/gcc-plugins/gcc-generate-rtl-pass.h**
+ 
+ 	These headers automatically generate the registration structures for
+-	GIMPLE, SIMPLE_IPA, IPA and RTL passes. They support all gcc versions
+-	from 4.5 to 6.0.
++	GIMPLE, SIMPLE_IPA, IPA and RTL passes.
+ 	They should be preferred to creating the structures by hand.
+ 
+ 
+@@ -68,21 +58,25 @@ Usage
+ =====
+ 
+ You must install the gcc plugin headers for your gcc version,
+-e.g., on Ubuntu for gcc-4.9::
++e.g., on Ubuntu for gcc-10::
+ 
+-	apt-get install gcc-4.9-plugin-dev
++	apt-get install gcc-10-plugin-dev
+ 
+ Or on Fedora::
+ 
+ 	dnf install gcc-plugin-devel
+ 
+-Enable a GCC plugin based feature in the kernel config::
++Enable the GCC plugin infrastructure and some plugin(s) you want to use
++in the kernel config::
+ 
+-	CONFIG_GCC_PLUGIN_CYC_COMPLEXITY = y
++	CONFIG_GCC_PLUGINS=y
++	CONFIG_GCC_PLUGIN_CYC_COMPLEXITY=y
++	CONFIG_GCC_PLUGIN_LATENT_ENTROPY=y
++	...
+ 
+-To compile only the plugin(s)::
++To compile the minimum tool set including the plugin(s)::
+ 
+-	make gcc-plugins
++	make scripts
+ 
+ or just run the kernel make and compile the whole kernel with
+ the cyclomatic complexity GCC plugin.
+@@ -91,7 +85,8 @@ the cyclomatic complexity GCC plugin.
+ 4. How to add a new GCC plugin
+ ==============================
+ 
+-The GCC plugins are in $(src)/scripts/gcc-plugins/. You can use a file or a directory
+-here. It must be added to $(src)/scripts/gcc-plugins/Makefile,
+-$(src)/scripts/Makefile.gcc-plugins and $(src)/arch/Kconfig.
++The GCC plugins are in scripts/gcc-plugins/. You need to put plugin source files
++right under scripts/gcc-plugins/. Creating subdirectories is not supported.
++It must be added to scripts/gcc-plugins/Makefile, scripts/Makefile.gcc-plugins
++and a relevant Kconfig file.
+ See the cyc_complexity_plugin.c (CONFIG_GCC_PLUGIN_CYC_COMPLEXITY) GCC plugin.
+diff --git a/Documentation/locking/locktypes.rst b/Documentation/locking/locktypes.rst
+index ddada4a537493..4fd7b70fcde19 100644
+--- a/Documentation/locking/locktypes.rst
++++ b/Documentation/locking/locktypes.rst
+@@ -439,11 +439,9 @@ preemption. The following substitution works on both kernels::
+   spin_lock(&p->lock);
+   p->count += this_cpu_read(var2);
+ 
+-On a non-PREEMPT_RT kernel migrate_disable() maps to preempt_disable()
+-which makes the above code fully equivalent. On a PREEMPT_RT kernel
+ migrate_disable() ensures that the task is pinned on the current CPU which
+ in turn guarantees that the per-CPU access to var1 and var2 are staying on
+-the same CPU.
++the same CPU while the task remains preemptible.
+ 
+ The migrate_disable() substitution is not valid for the following
+ scenario::
+@@ -456,9 +454,8 @@ scenario::
+     p = this_cpu_ptr(&var1);
+     p->val = func2();
+ 
+-While correct on a non-PREEMPT_RT kernel, this breaks on PREEMPT_RT because
+-here migrate_disable() does not protect against reentrancy from a
+-preempting task. A correct substitution for this case is::
++This breaks because migrate_disable() does not protect against reentrancy from
++a preempting task. A correct substitution for this case is::
+ 
+   func()
+   {
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 4fef10dd29753..c64c9354c287f 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -7310,7 +7310,6 @@ L:	linux-hardening@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/kbuild/gcc-plugins.rst
+ F:	scripts/Makefile.gcc-plugins
+-F:	scripts/gcc-plugin.sh
+ F:	scripts/gcc-plugins/
+ 
+ GCOV BASED KERNEL PROFILING
+diff --git a/Makefile b/Makefile
+index cce1e5dac06fb..8de80228df3f6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 84
++SUBLEVEL = 85
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/csky/kernel/traps.c b/arch/csky/kernel/traps.c
+index 959a917c989d8..22721468a04b2 100644
+--- a/arch/csky/kernel/traps.c
++++ b/arch/csky/kernel/traps.c
+@@ -211,7 +211,7 @@ asmlinkage void do_trap_illinsn(struct pt_regs *regs)
+ 
+ asmlinkage void do_trap_fpe(struct pt_regs *regs)
+ {
+-#ifdef CONFIG_CPU_HAS_FP
++#ifdef CONFIG_CPU_HAS_FPU
+ 	return fpu_fpe(regs);
+ #else
+ 	do_trap_error(regs, SIGILL, ILL_ILLOPC, regs->pc,
+@@ -221,7 +221,7 @@ asmlinkage void do_trap_fpe(struct pt_regs *regs)
+ 
+ asmlinkage void do_trap_priv(struct pt_regs *regs)
+ {
+-#ifdef CONFIG_CPU_HAS_FP
++#ifdef CONFIG_CPU_HAS_FPU
+ 	if (user_mode(regs) && fpu_libc_helper(regs))
+ 		return;
+ #endif
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index a853ed7240eef..fb873a7bb65c8 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1939,6 +1939,7 @@ config EFI
+ 	depends on ACPI
+ 	select UCS2_STRING
+ 	select EFI_RUNTIME_WRAPPERS
++	select ARCH_USE_MEMREMAP_PROT
+ 	help
+ 	  This enables the kernel to use EFI runtime services that are
+ 	  available (such as the EFI variable services).
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index b1cd8334db11a..e8fb4b0394af2 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -85,7 +85,7 @@
+ 	KVM_ARCH_REQ_FLAGS(25, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+ #define KVM_REQ_TLB_FLUSH_CURRENT	KVM_ARCH_REQ(26)
+ #define KVM_REQ_TLB_FLUSH_GUEST \
+-	KVM_ARCH_REQ_FLAGS(27, KVM_REQUEST_NO_WAKEUP)
++	KVM_ARCH_REQ_FLAGS(27, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+ #define KVM_REQ_APF_READY		KVM_ARCH_REQ(28)
+ #define KVM_REQ_MSR_FILTER_CHANGED	KVM_ARCH_REQ(29)
+ 
+diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
+index 5a40fe411ebda..c1eec019dcee6 100644
+--- a/arch/x86/platform/efi/quirks.c
++++ b/arch/x86/platform/efi/quirks.c
+@@ -277,7 +277,8 @@ void __init efi_arch_mem_reserve(phys_addr_t addr, u64 size)
+ 		return;
+ 	}
+ 
+-	new = early_memremap(data.phys_map, data.size);
++	new = early_memremap_prot(data.phys_map, data.size,
++				  pgprot_val(pgprot_encrypted(FIXMAP_PAGE_NORMAL)));
+ 	if (!new) {
+ 		pr_err("Failed to map new boot services memmap\n");
+ 		return;
+diff --git a/block/ioprio.c b/block/ioprio.c
+index 84da6c71b2ccb..c8878647de400 100644
+--- a/block/ioprio.c
++++ b/block/ioprio.c
+@@ -214,6 +214,7 @@ SYSCALL_DEFINE2(ioprio_get, int, which, int, who)
+ 				pgrp = task_pgrp(current);
+ 			else
+ 				pgrp = find_vpid(who);
++			read_lock(&tasklist_lock);
+ 			do_each_pid_thread(pgrp, PIDTYPE_PGID, p) {
+ 				tmpio = get_task_ioprio(p);
+ 				if (tmpio < 0)
+@@ -223,6 +224,8 @@ SYSCALL_DEFINE2(ioprio_get, int, which, int, who)
+ 				else
+ 					ret = ioprio_best(ret, tmpio);
+ 			} while_each_pid_thread(pgrp, PIDTYPE_PGID, p);
++			read_unlock(&tasklist_lock);
++
+ 			break;
+ 		case IOPRIO_WHO_USER:
+ 			uid = make_kuid(current_user_ns(), who);
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index a1255971e50ce..80e2bbb36422e 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -4784,23 +4784,20 @@ static int binder_thread_release(struct binder_proc *proc,
+ 	__release(&t->lock);
+ 
+ 	/*
+-	 * If this thread used poll, make sure we remove the waitqueue
+-	 * from any epoll data structures holding it with POLLFREE.
+-	 * waitqueue_active() is safe to use here because we're holding
+-	 * the inner lock.
++	 * If this thread used poll, make sure we remove the waitqueue from any
++	 * poll data structures holding it.
+ 	 */
+-	if ((thread->looper & BINDER_LOOPER_STATE_POLL) &&
+-	    waitqueue_active(&thread->wait)) {
+-		wake_up_poll(&thread->wait, EPOLLHUP | POLLFREE);
+-	}
++	if (thread->looper & BINDER_LOOPER_STATE_POLL)
++		wake_up_pollfree(&thread->wait);
+ 
+ 	binder_inner_proc_unlock(thread->proc);
+ 
+ 	/*
+-	 * This is needed to avoid races between wake_up_poll() above and
+-	 * and ep_remove_waitqueue() called for other reasons (eg the epoll file
+-	 * descriptor being closed); ep_remove_waitqueue() holds an RCU read
+-	 * lock, so we can be sure it's done after calling synchronize_rcu().
++	 * This is needed to avoid races between wake_up_pollfree() above and
++	 * someone else removing the last entry from the queue for other reasons
++	 * (e.g. ep_remove_wait_queue() being called due to an epoll file
++	 * descriptor being closed).  Such other users hold an RCU read lock, so
++	 * we can be sure they're done after we call synchronize_rcu().
+ 	 */
+ 	if (thread->looper & BINDER_LOOPER_STATE_POLL)
+ 		synchronize_rcu();
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 8acf99b88b21e..1f54f82d22d61 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -3831,6 +3831,8 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	{ "VRFDFC22048UCHC-TE*", NULL,		ATA_HORKAGE_NODMA },
+ 	/* Odd clown on sil3726/4726 PMPs */
+ 	{ "Config  Disk",	NULL,		ATA_HORKAGE_DISABLE },
++	/* Similar story with ASMedia 1092 */
++	{ "ASMT109x- Config",	NULL,		ATA_HORKAGE_DISABLE },
+ 
+ 	/* Weird ATAPI devices */
+ 	{ "TORiSAN DVD-ROM DRD-N216", NULL,	ATA_HORKAGE_MAX_SEC_128 },
+diff --git a/drivers/clk/imx/clk-imx8qxp-lpcg.c b/drivers/clk/imx/clk-imx8qxp-lpcg.c
+index e947a70054acd..522c03a12b693 100644
+--- a/drivers/clk/imx/clk-imx8qxp-lpcg.c
++++ b/drivers/clk/imx/clk-imx8qxp-lpcg.c
+@@ -231,7 +231,7 @@ static struct platform_driver imx8qxp_lpcg_clk_driver = {
+ 	.probe = imx8qxp_lpcg_clk_probe,
+ };
+ 
+-builtin_platform_driver(imx8qxp_lpcg_clk_driver);
++module_platform_driver(imx8qxp_lpcg_clk_driver);
+ 
+ MODULE_AUTHOR("Aisheng Dong <aisheng.dong@nxp.com>");
+ MODULE_DESCRIPTION("NXP i.MX8QXP LPCG clock driver");
+diff --git a/drivers/clk/imx/clk-imx8qxp.c b/drivers/clk/imx/clk-imx8qxp.c
+index d650ca33cdc80..8c14e0bbe1a24 100644
+--- a/drivers/clk/imx/clk-imx8qxp.c
++++ b/drivers/clk/imx/clk-imx8qxp.c
+@@ -151,7 +151,7 @@ static struct platform_driver imx8qxp_clk_driver = {
+ 	},
+ 	.probe = imx8qxp_clk_probe,
+ };
+-builtin_platform_driver(imx8qxp_clk_driver);
++module_platform_driver(imx8qxp_clk_driver);
+ 
+ MODULE_AUTHOR("Aisheng Dong <aisheng.dong@nxp.com>");
+ MODULE_DESCRIPTION("NXP i.MX8QXP clock driver");
+diff --git a/drivers/clk/qcom/clk-regmap-mux.c b/drivers/clk/qcom/clk-regmap-mux.c
+index b2d00b4519634..45d9cca28064f 100644
+--- a/drivers/clk/qcom/clk-regmap-mux.c
++++ b/drivers/clk/qcom/clk-regmap-mux.c
+@@ -28,7 +28,7 @@ static u8 mux_get_parent(struct clk_hw *hw)
+ 	val &= mask;
+ 
+ 	if (mux->parent_map)
+-		return qcom_find_src_index(hw, mux->parent_map, val);
++		return qcom_find_cfg_index(hw, mux->parent_map, val);
+ 
+ 	return val;
+ }
+diff --git a/drivers/clk/qcom/common.c b/drivers/clk/qcom/common.c
+index 60d2a78d13950..2af04fc4abfa9 100644
+--- a/drivers/clk/qcom/common.c
++++ b/drivers/clk/qcom/common.c
+@@ -69,6 +69,18 @@ int qcom_find_src_index(struct clk_hw *hw, const struct parent_map *map, u8 src)
+ }
+ EXPORT_SYMBOL_GPL(qcom_find_src_index);
+ 
++int qcom_find_cfg_index(struct clk_hw *hw, const struct parent_map *map, u8 cfg)
++{
++	int i, num_parents = clk_hw_get_num_parents(hw);
++
++	for (i = 0; i < num_parents; i++)
++		if (cfg == map[i].cfg)
++			return i;
++
++	return -ENOENT;
++}
++EXPORT_SYMBOL_GPL(qcom_find_cfg_index);
++
+ struct regmap *
+ qcom_cc_map(struct platform_device *pdev, const struct qcom_cc_desc *desc)
+ {
+diff --git a/drivers/clk/qcom/common.h b/drivers/clk/qcom/common.h
+index bb39a7e106d8a..9c8f7b798d9fc 100644
+--- a/drivers/clk/qcom/common.h
++++ b/drivers/clk/qcom/common.h
+@@ -49,6 +49,8 @@ extern void
+ qcom_pll_set_fsm_mode(struct regmap *m, u32 reg, u8 bias_count, u8 lock_count);
+ extern int qcom_find_src_index(struct clk_hw *hw, const struct parent_map *map,
+ 			       u8 src);
++extern int qcom_find_cfg_index(struct clk_hw *hw, const struct parent_map *map,
++			       u8 cfg);
+ 
+ extern int qcom_cc_register_board_clk(struct device *dev, const char *path,
+ 				      const char *name, unsigned long rate);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index 0544460653b95..fb6230c62daad 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -47,12 +47,8 @@ int amdgpu_amdkfd_init(void)
+ 	amdgpu_amdkfd_total_mem_size = si.totalram - si.totalhigh;
+ 	amdgpu_amdkfd_total_mem_size *= si.mem_unit;
+ 
+-#ifdef CONFIG_HSA_AMD
+ 	ret = kgd2kfd_init();
+ 	amdgpu_amdkfd_gpuvm_init_mem_limits();
+-#else
+-	ret = -ENOENT;
+-#endif
+ 	kfd_initialized = !ret;
+ 
+ 	return ret;
+@@ -194,6 +190,16 @@ void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool run_pm)
+ 		kgd2kfd_suspend(adev->kfd.dev, run_pm);
+ }
+ 
++int amdgpu_amdkfd_resume_iommu(struct amdgpu_device *adev)
++{
++	int r = 0;
++
++	if (adev->kfd.dev)
++		r = kgd2kfd_resume_iommu(adev->kfd.dev);
++
++	return r;
++}
++
+ int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool run_pm)
+ {
+ 	int r = 0;
+@@ -695,86 +701,3 @@ bool amdgpu_amdkfd_have_atomics_support(struct kgd_dev *kgd)
+ 
+ 	return adev->have_atomics_support;
+ }
+-
+-#ifndef CONFIG_HSA_AMD
+-bool amdkfd_fence_check_mm(struct dma_fence *f, struct mm_struct *mm)
+-{
+-	return false;
+-}
+-
+-void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo)
+-{
+-}
+-
+-int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo)
+-{
+-	return 0;
+-}
+-
+-void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
+-					struct amdgpu_vm *vm)
+-{
+-}
+-
+-struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f)
+-{
+-	return NULL;
+-}
+-
+-int amdgpu_amdkfd_evict_userptr(struct kgd_mem *mem, struct mm_struct *mm)
+-{
+-	return 0;
+-}
+-
+-struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev,
+-			      unsigned int asic_type, bool vf)
+-{
+-	return NULL;
+-}
+-
+-bool kgd2kfd_device_init(struct kfd_dev *kfd,
+-			 struct drm_device *ddev,
+-			 const struct kgd2kfd_shared_resources *gpu_resources)
+-{
+-	return false;
+-}
+-
+-void kgd2kfd_device_exit(struct kfd_dev *kfd)
+-{
+-}
+-
+-void kgd2kfd_exit(void)
+-{
+-}
+-
+-void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm)
+-{
+-}
+-
+-int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm)
+-{
+-	return 0;
+-}
+-
+-int kgd2kfd_pre_reset(struct kfd_dev *kfd)
+-{
+-	return 0;
+-}
+-
+-int kgd2kfd_post_reset(struct kfd_dev *kfd)
+-{
+-	return 0;
+-}
+-
+-void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry)
+-{
+-}
+-
+-void kgd2kfd_set_sram_ecc_flag(struct kfd_dev *kfd)
+-{
+-}
+-
+-void kgd2kfd_smi_event_throttle(struct kfd_dev *kfd, uint32_t throttle_bitmask)
+-{
+-}
+-#endif
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+index ea391ca7f2f1c..32e385f287cb6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+@@ -94,11 +94,6 @@ enum kgd_engine_type {
+ 	KGD_ENGINE_MAX
+ };
+ 
+-struct amdgpu_amdkfd_fence *amdgpu_amdkfd_fence_create(u64 context,
+-						       struct mm_struct *mm);
+-bool amdkfd_fence_check_mm(struct dma_fence *f, struct mm_struct *mm);
+-struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f);
+-int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo);
+ 
+ struct amdkfd_process_info {
+ 	/* List head of all VMs that belong to a KFD process */
+@@ -126,14 +121,13 @@ int amdgpu_amdkfd_init(void);
+ void amdgpu_amdkfd_fini(void);
+ 
+ void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool run_pm);
++int amdgpu_amdkfd_resume_iommu(struct amdgpu_device *adev);
+ int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool run_pm);
+ void amdgpu_amdkfd_interrupt(struct amdgpu_device *adev,
+ 			const void *ih_ring_entry);
+ void amdgpu_amdkfd_device_probe(struct amdgpu_device *adev);
+ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev);
+ void amdgpu_amdkfd_device_fini(struct amdgpu_device *adev);
+-
+-int amdgpu_amdkfd_evict_userptr(struct kgd_mem *mem, struct mm_struct *mm);
+ int amdgpu_amdkfd_submit_ib(struct kgd_dev *kgd, enum kgd_engine_type engine,
+ 				uint32_t vmid, uint64_t gpu_addr,
+ 				uint32_t *ib_cmd, uint32_t ib_len);
+@@ -153,6 +147,38 @@ void amdgpu_amdkfd_gpu_reset(struct kgd_dev *kgd);
+ int amdgpu_queue_mask_bit_to_set_resource_bit(struct amdgpu_device *adev,
+ 					int queue_bit);
+ 
++struct amdgpu_amdkfd_fence *amdgpu_amdkfd_fence_create(u64 context,
++								struct mm_struct *mm);
++#if IS_ENABLED(CONFIG_HSA_AMD)
++bool amdkfd_fence_check_mm(struct dma_fence *f, struct mm_struct *mm);
++struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f);
++int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo);
++int amdgpu_amdkfd_evict_userptr(struct kgd_mem *mem, struct mm_struct *mm);
++#else
++static inline
++bool amdkfd_fence_check_mm(struct dma_fence *f, struct mm_struct *mm)
++{
++	return false;
++}
++
++static inline
++struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f)
++{
++	return NULL;
++}
++
++static inline
++int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo)
++{
++	return 0;
++}
++
++static inline
++int amdgpu_amdkfd_evict_userptr(struct kgd_mem *mem, struct mm_struct *mm)
++{
++	return 0;
++}
++#endif
+ /* Shared API */
+ int amdgpu_amdkfd_alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
+ 				void **mem_obj, uint64_t *gpu_addr,
+@@ -215,8 +241,6 @@ int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct kgd_dev *kgd,
+ 					struct file *filp, u32 pasid,
+ 					void **vm, void **process_info,
+ 					struct dma_fence **ef);
+-void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
+-				struct amdgpu_vm *vm);
+ void amdgpu_amdkfd_gpuvm_destroy_process_vm(struct kgd_dev *kgd, void *vm);
+ void amdgpu_amdkfd_gpuvm_release_process_vm(struct kgd_dev *kgd, void *vm);
+ uint64_t amdgpu_amdkfd_gpuvm_get_process_page_dir(void *vm);
+@@ -236,23 +260,43 @@ int amdgpu_amdkfd_gpuvm_map_gtt_bo_to_kernel(struct kgd_dev *kgd,
+ 		struct kgd_mem *mem, void **kptr, uint64_t *size);
+ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *process_info,
+ 					    struct dma_fence **ef);
+-
+ int amdgpu_amdkfd_gpuvm_get_vm_fault_info(struct kgd_dev *kgd,
+ 					      struct kfd_vm_fault_info *info);
+-
+ int amdgpu_amdkfd_gpuvm_import_dmabuf(struct kgd_dev *kgd,
+ 				      struct dma_buf *dmabuf,
+ 				      uint64_t va, void *vm,
+ 				      struct kgd_mem **mem, uint64_t *size,
+ 				      uint64_t *mmap_offset);
+-
+-void amdgpu_amdkfd_gpuvm_init_mem_limits(void);
+-void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo);
+-
+ int amdgpu_amdkfd_get_tile_config(struct kgd_dev *kgd,
+ 				struct tile_config *config);
++#if IS_ENABLED(CONFIG_HSA_AMD)
++void amdgpu_amdkfd_gpuvm_init_mem_limits(void);
++void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
++				struct amdgpu_vm *vm);
++void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo);
++#else
++static inline
++void amdgpu_amdkfd_gpuvm_init_mem_limits(void)
++{
++}
+ 
++static inline
++void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
++					struct amdgpu_vm *vm)
++{
++}
++
++static inline
++void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo)
++{
++}
++#endif
+ /* KGD2KFD callbacks */
++int kgd2kfd_quiesce_mm(struct mm_struct *mm);
++int kgd2kfd_resume_mm(struct mm_struct *mm);
++int kgd2kfd_schedule_evict_and_restore_process(struct mm_struct *mm,
++						struct dma_fence *fence);
++#if IS_ENABLED(CONFIG_HSA_AMD)
+ int kgd2kfd_init(void);
+ void kgd2kfd_exit(void);
+ struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev,
+@@ -262,15 +306,78 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 			 const struct kgd2kfd_shared_resources *gpu_resources);
+ void kgd2kfd_device_exit(struct kfd_dev *kfd);
+ void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm);
++int kgd2kfd_resume_iommu(struct kfd_dev *kfd);
+ int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm);
+ int kgd2kfd_pre_reset(struct kfd_dev *kfd);
+ int kgd2kfd_post_reset(struct kfd_dev *kfd);
+ void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry);
+-int kgd2kfd_quiesce_mm(struct mm_struct *mm);
+-int kgd2kfd_resume_mm(struct mm_struct *mm);
+-int kgd2kfd_schedule_evict_and_restore_process(struct mm_struct *mm,
+-					       struct dma_fence *fence);
+ void kgd2kfd_set_sram_ecc_flag(struct kfd_dev *kfd);
+ void kgd2kfd_smi_event_throttle(struct kfd_dev *kfd, uint32_t throttle_bitmask);
++#else
++static inline int kgd2kfd_init(void)
++{
++	return -ENOENT;
++}
+ 
++static inline void kgd2kfd_exit(void)
++{
++}
++
++static inline
++struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev,
++					unsigned int asic_type, bool vf)
++{
++	return NULL;
++}
++
++static inline
++bool kgd2kfd_device_init(struct kfd_dev *kfd, struct drm_device *ddev,
++				const struct kgd2kfd_shared_resources *gpu_resources)
++{
++	return false;
++}
++
++static inline void kgd2kfd_device_exit(struct kfd_dev *kfd)
++{
++}
++
++static inline void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm)
++{
++}
++
++static int __maybe_unused kgd2kfd_resume_iommu(struct kfd_dev *kfd)
++{
++	return 0;
++}
++
++static inline int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm)
++{
++	return 0;
++}
++
++static inline int kgd2kfd_pre_reset(struct kfd_dev *kfd)
++{
++	return 0;
++}
++
++static inline int kgd2kfd_post_reset(struct kfd_dev *kfd)
++{
++	return 0;
++}
++
++static inline
++void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry)
++{
++}
++
++static inline
++void kgd2kfd_set_sram_ecc_flag(struct kfd_dev *kfd)
++{
++}
++
++static inline
++void kgd2kfd_smi_event_throttle(struct kfd_dev *kfd, uint32_t throttle_bitmask)
++{
++}
++#endif
+ #endif /* AMDGPU_AMDKFD_H_INCLUDED */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 97723f2b5ece7..f262c4e7a48a2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2913,6 +2913,10 @@ static int amdgpu_device_ip_resume(struct amdgpu_device *adev)
+ {
+ 	int r;
+ 
++	r = amdgpu_amdkfd_resume_iommu(adev);
++	if (r)
++		return r;
++
+ 	r = amdgpu_device_ip_resume_phase1(adev);
+ 	if (r)
+ 		return r;
+@@ -4296,6 +4300,10 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,
+ 
+ 			if (!r) {
+ 				dev_info(tmp_adev->dev, "GPU reset succeeded, trying to resume\n");
++				r = amdgpu_amdkfd_resume_iommu(tmp_adev);
++				if (r)
++					goto out;
++
+ 				r = amdgpu_device_ip_resume_phase1(tmp_adev);
+ 				if (r)
+ 					goto out;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 5751bddc9cadd..84313135c2eae 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -751,6 +751,9 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 
+ 	kfd_cwsr_init(kfd);
+ 
++	if(kgd2kfd_resume_iommu(kfd))
++		goto device_iommu_error;
++
+ 	if (kfd_resume(kfd))
+ 		goto kfd_resume_error;
+ 
+@@ -896,17 +899,21 @@ int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm)
+ 	return ret;
+ }
+ 
+-static int kfd_resume(struct kfd_dev *kfd)
++int kgd2kfd_resume_iommu(struct kfd_dev *kfd)
+ {
+ 	int err = 0;
+ 
+ 	err = kfd_iommu_resume(kfd);
+-	if (err) {
++	if (err)
+ 		dev_err(kfd_device,
+ 			"Failed to resume IOMMU for device %x:%x\n",
+ 			kfd->pdev->vendor, kfd->pdev->device);
+-		return err;
+-	}
++	return err;
++}
++
++static int kfd_resume(struct kfd_dev *kfd)
++{
++	int err = 0;
+ 
+ 	err = kfd->dqm->ops.start(kfd->dqm);
+ 	if (err) {
+diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
+index 3491460498491..993898a2c7791 100644
+--- a/drivers/gpu/drm/drm_syncobj.c
++++ b/drivers/gpu/drm/drm_syncobj.c
+@@ -391,8 +391,17 @@ int drm_syncobj_find_fence(struct drm_file *file_private,
+ 
+ 	if (*fence) {
+ 		ret = dma_fence_chain_find_seqno(fence, point);
+-		if (!ret)
++		if (!ret) {
++			/* If the requested seqno is already signaled
++			 * drm_syncobj_find_fence may return a NULL
++			 * fence. To make sure the recipient gets
++			 * signalled, use a new fence instead.
++			 */
++			if (!*fence)
++				*fence = dma_fence_get_stub();
++
+ 			goto out;
++		}
+ 		dma_fence_put(*fence);
+ 	} else {
+ 		ret = -EINVAL;
+diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
+index 54bc563a8dff1..f8ad3b2be0bfb 100644
+--- a/drivers/hid/Kconfig
++++ b/drivers/hid/Kconfig
+@@ -207,14 +207,14 @@ config HID_CHERRY
+ 
+ config HID_CHICONY
+ 	tristate "Chicony devices"
+-	depends on HID
++	depends on USB_HID
+ 	default !EXPERT
+ 	help
+ 	Support for Chicony Tactical pad and special keys on Chicony keyboards.
+ 
+ config HID_CORSAIR
+ 	tristate "Corsair devices"
+-	depends on HID && USB && LEDS_CLASS
++	depends on USB_HID && LEDS_CLASS
+ 	help
+ 	Support for Corsair devices that are not fully compliant with the
+ 	HID standard.
+@@ -245,7 +245,7 @@ config HID_MACALLY
+ 
+ config HID_PRODIKEYS
+ 	tristate "Prodikeys PC-MIDI Keyboard support"
+-	depends on HID && SND
++	depends on USB_HID && SND
+ 	select SND_RAWMIDI
+ 	help
+ 	Support for Prodikeys PC-MIDI Keyboard device support.
+@@ -541,7 +541,7 @@ config HID_LENOVO
+ 
+ config HID_LOGITECH
+ 	tristate "Logitech devices"
+-	depends on HID
++	depends on USB_HID
+ 	depends on LEDS_CLASS
+ 	default !EXPERT
+ 	help
+@@ -889,7 +889,7 @@ config HID_SAITEK
+ 
+ config HID_SAMSUNG
+ 	tristate "Samsung InfraRed remote control or keyboards"
+-	depends on HID
++	depends on USB_HID
+ 	help
+ 	Support for Samsung InfraRed remote control or keyboards.
+ 
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index c183caf89d492..f85c6e3309a09 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -918,8 +918,7 @@ static int asus_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	if (drvdata->quirks & QUIRK_IS_MULTITOUCH)
+ 		drvdata->tp = &asus_i2c_tp;
+ 
+-	if ((drvdata->quirks & QUIRK_T100_KEYBOARD) &&
+-	    hid_is_using_ll_driver(hdev, &usb_hid_driver)) {
++	if ((drvdata->quirks & QUIRK_T100_KEYBOARD) && hid_is_usb(hdev)) {
+ 		struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
+ 
+ 		if (intf->altsetting->desc.bInterfaceNumber == T100_TPAD_INTF) {
+@@ -947,8 +946,7 @@ static int asus_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 		drvdata->tp = &asus_t100chi_tp;
+ 	}
+ 
+-	if ((drvdata->quirks & QUIRK_MEDION_E1239T) &&
+-	    hid_is_using_ll_driver(hdev, &usb_hid_driver)) {
++	if ((drvdata->quirks & QUIRK_MEDION_E1239T) && hid_is_usb(hdev)) {
+ 		struct usb_host_interface *alt =
+ 			to_usb_interface(hdev->dev.parent)->altsetting;
+ 
+diff --git a/drivers/hid/hid-bigbenff.c b/drivers/hid/hid-bigbenff.c
+index db6da21ade063..74ad8bf98bfd5 100644
+--- a/drivers/hid/hid-bigbenff.c
++++ b/drivers/hid/hid-bigbenff.c
+@@ -191,7 +191,7 @@ static void bigben_worker(struct work_struct *work)
+ 		struct bigben_device, worker);
+ 	struct hid_field *report_field = bigben->report->field[0];
+ 
+-	if (bigben->removed)
++	if (bigben->removed || !report_field)
+ 		return;
+ 
+ 	if (bigben->work_led) {
+diff --git a/drivers/hid/hid-chicony.c b/drivers/hid/hid-chicony.c
+index 3f0ed6a952234..e19e2b5973396 100644
+--- a/drivers/hid/hid-chicony.c
++++ b/drivers/hid/hid-chicony.c
+@@ -58,8 +58,12 @@ static int ch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ static __u8 *ch_switch12_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 		unsigned int *rsize)
+ {
+-	struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
+-	
++	struct usb_interface *intf;
++
++	if (!hid_is_usb(hdev))
++		return rdesc;
++
++	intf = to_usb_interface(hdev->dev.parent);
+ 	if (intf->cur_altsetting->desc.bInterfaceNumber == 1) {
+ 		/* Change usage maximum and logical maximum from 0x7fff to
+ 		 * 0x2fff, so they don't exceed HID_MAX_USAGES */
+diff --git a/drivers/hid/hid-corsair.c b/drivers/hid/hid-corsair.c
+index 902a60e249ed2..8c895c820b672 100644
+--- a/drivers/hid/hid-corsair.c
++++ b/drivers/hid/hid-corsair.c
+@@ -553,7 +553,12 @@ static int corsair_probe(struct hid_device *dev, const struct hid_device_id *id)
+ 	int ret;
+ 	unsigned long quirks = id->driver_data;
+ 	struct corsair_drvdata *drvdata;
+-	struct usb_interface *usbif = to_usb_interface(dev->dev.parent);
++	struct usb_interface *usbif;
++
++	if (!hid_is_usb(dev))
++		return -EINVAL;
++
++	usbif = to_usb_interface(dev->dev.parent);
+ 
+ 	drvdata = devm_kzalloc(&dev->dev, sizeof(struct corsair_drvdata),
+ 			       GFP_KERNEL);
+diff --git a/drivers/hid/hid-elan.c b/drivers/hid/hid-elan.c
+index dae193749d443..0e8f424025fea 100644
+--- a/drivers/hid/hid-elan.c
++++ b/drivers/hid/hid-elan.c
+@@ -50,7 +50,7 @@ struct elan_drvdata {
+ 
+ static int is_not_elan_touchpad(struct hid_device *hdev)
+ {
+-	if (hdev->bus == BUS_USB) {
++	if (hid_is_usb(hdev)) {
+ 		struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
+ 
+ 		return (intf->altsetting->desc.bInterfaceNumber !=
+diff --git a/drivers/hid/hid-elo.c b/drivers/hid/hid-elo.c
+index 0d22713a38742..2876cb6a7dcab 100644
+--- a/drivers/hid/hid-elo.c
++++ b/drivers/hid/hid-elo.c
+@@ -229,6 +229,9 @@ static int elo_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	struct elo_priv *priv;
+ 	int ret;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
+diff --git a/drivers/hid/hid-google-hammer.c b/drivers/hid/hid-google-hammer.c
+index 2a176f77b32e9..0476301983964 100644
+--- a/drivers/hid/hid-google-hammer.c
++++ b/drivers/hid/hid-google-hammer.c
+@@ -528,6 +528,8 @@ static void hammer_remove(struct hid_device *hdev)
+ static const struct hid_device_id hammer_devices[] = {
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ 		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_DON) },
++	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
++		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_EEL) },
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ 		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) },
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+diff --git a/drivers/hid/hid-holtek-kbd.c b/drivers/hid/hid-holtek-kbd.c
+index 0a38e8e9bc783..403506b9697e7 100644
+--- a/drivers/hid/hid-holtek-kbd.c
++++ b/drivers/hid/hid-holtek-kbd.c
+@@ -140,12 +140,17 @@ static int holtek_kbd_input_event(struct input_dev *dev, unsigned int type,
+ static int holtek_kbd_probe(struct hid_device *hdev,
+ 		const struct hid_device_id *id)
+ {
+-	struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
+-	int ret = hid_parse(hdev);
++	struct usb_interface *intf;
++	int ret;
++
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
+ 
++	ret = hid_parse(hdev);
+ 	if (!ret)
+ 		ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT);
+ 
++	intf = to_usb_interface(hdev->dev.parent);
+ 	if (!ret && intf->cur_altsetting->desc.bInterfaceNumber == 1) {
+ 		struct hid_input *hidinput;
+ 		list_for_each_entry(hidinput, &hdev->inputs, list) {
+diff --git a/drivers/hid/hid-holtek-mouse.c b/drivers/hid/hid-holtek-mouse.c
+index 195b735b001d0..b7172c48ef9f0 100644
+--- a/drivers/hid/hid-holtek-mouse.c
++++ b/drivers/hid/hid-holtek-mouse.c
+@@ -62,6 +62,14 @@ static __u8 *holtek_mouse_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 	return rdesc;
+ }
+ 
++static int holtek_mouse_probe(struct hid_device *hdev,
++			      const struct hid_device_id *id)
++{
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++	return 0;
++}
++
+ static const struct hid_device_id holtek_mouse_devices[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT,
+ 			USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) },
+@@ -83,6 +91,7 @@ static struct hid_driver holtek_mouse_driver = {
+ 	.name = "holtek_mouse",
+ 	.id_table = holtek_mouse_devices,
+ 	.report_fixup = holtek_mouse_report_fixup,
++	.probe = holtek_mouse_probe,
+ };
+ 
+ module_hid_driver(holtek_mouse_driver);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 136b58a91c04c..370ec4402ebe3 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -488,6 +488,7 @@
+ #define USB_DEVICE_ID_GOOGLE_MAGNEMITE	0x503d
+ #define USB_DEVICE_ID_GOOGLE_MOONBALL	0x5044
+ #define USB_DEVICE_ID_GOOGLE_DON	0x5050
++#define USB_DEVICE_ID_GOOGLE_EEL	0x5057
+ 
+ #define USB_VENDOR_ID_GOTOP		0x08f2
+ #define USB_DEVICE_ID_SUPER_Q2		0x007f
+@@ -865,6 +866,7 @@
+ #define USB_DEVICE_ID_MS_TOUCH_COVER_2   0x07a7
+ #define USB_DEVICE_ID_MS_TYPE_COVER_2    0x07a9
+ #define USB_DEVICE_ID_MS_POWER_COVER     0x07da
++#define USB_DEVICE_ID_MS_SURFACE3_COVER		0x07de
+ #define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER	0x02fd
+ #define USB_DEVICE_ID_MS_PIXART_MOUSE    0x00cb
+ #define USB_DEVICE_ID_8BITDO_SN30_PRO_PLUS      0x02e0
+diff --git a/drivers/hid/hid-lg.c b/drivers/hid/hid-lg.c
+index 0dc7cdfc56f77..2c7e7c089bf99 100644
+--- a/drivers/hid/hid-lg.c
++++ b/drivers/hid/hid-lg.c
+@@ -769,12 +769,18 @@ static int lg_raw_event(struct hid_device *hdev, struct hid_report *report,
+ 
+ static int lg_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ {
+-	struct usb_interface *iface = to_usb_interface(hdev->dev.parent);
+-	__u8 iface_num = iface->cur_altsetting->desc.bInterfaceNumber;
++	struct usb_interface *iface;
++	__u8 iface_num;
+ 	unsigned int connect_mask = HID_CONNECT_DEFAULT;
+ 	struct lg_drv_data *drv_data;
+ 	int ret;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
++	iface = to_usb_interface(hdev->dev.parent);
++	iface_num = iface->cur_altsetting->desc.bInterfaceNumber;
++
+ 	/* G29 only work with the 1st interface */
+ 	if ((hdev->product == USB_DEVICE_ID_LOGITECH_G29_WHEEL) &&
+ 	    (iface_num != 0)) {
+diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
+index 271bd8d243395..a311b0a33eba7 100644
+--- a/drivers/hid/hid-logitech-dj.c
++++ b/drivers/hid/hid-logitech-dj.c
+@@ -1693,7 +1693,7 @@ static int logi_dj_probe(struct hid_device *hdev,
+ 	case recvr_type_27mhz:		no_dj_interfaces = 2; break;
+ 	case recvr_type_bluetooth:	no_dj_interfaces = 2; break;
+ 	}
+-	if (hid_is_using_ll_driver(hdev, &usb_hid_driver)) {
++	if (hid_is_usb(hdev)) {
+ 		intf = to_usb_interface(hdev->dev.parent);
+ 		if (intf && intf->altsetting->desc.bInterfaceNumber >=
+ 							no_dj_interfaces) {
+diff --git a/drivers/hid/hid-prodikeys.c b/drivers/hid/hid-prodikeys.c
+index 2666af02d5c1a..e4e9471d0f1e9 100644
+--- a/drivers/hid/hid-prodikeys.c
++++ b/drivers/hid/hid-prodikeys.c
+@@ -798,12 +798,18 @@ static int pk_raw_event(struct hid_device *hdev, struct hid_report *report,
+ static int pk_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ {
+ 	int ret;
+-	struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
+-	unsigned short ifnum = intf->cur_altsetting->desc.bInterfaceNumber;
++	struct usb_interface *intf;
++	unsigned short ifnum;
+ 	unsigned long quirks = id->driver_data;
+ 	struct pk_device *pk;
+ 	struct pcmidi_snd *pm = NULL;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
++	intf = to_usb_interface(hdev->dev.parent);
++	ifnum = intf->cur_altsetting->desc.bInterfaceNumber;
++
+ 	pk = kzalloc(sizeof(*pk), GFP_KERNEL);
+ 	if (pk == NULL) {
+ 		hid_err(hdev, "can't alloc descriptor\n");
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index be53c723c729d..84a30202e3dbe 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -124,6 +124,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MCS, USB_DEVICE_ID_MCS_GAMEPADBLOCK), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PIXART_MOUSE), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER), HID_QUIRK_NO_INIT_REPORTS },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE3_COVER), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE_PRO_2), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TOUCH_COVER_2), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_2), HID_QUIRK_NO_INIT_REPORTS },
+diff --git a/drivers/hid/hid-roccat-arvo.c b/drivers/hid/hid-roccat-arvo.c
+index ffcd444ae2ba6..4b18e1a4fc7ac 100644
+--- a/drivers/hid/hid-roccat-arvo.c
++++ b/drivers/hid/hid-roccat-arvo.c
+@@ -344,6 +344,9 @@ static int arvo_probe(struct hid_device *hdev,
+ {
+ 	int retval;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	retval = hid_parse(hdev);
+ 	if (retval) {
+ 		hid_err(hdev, "parse failed\n");
+diff --git a/drivers/hid/hid-roccat-isku.c b/drivers/hid/hid-roccat-isku.c
+index ce5f22519956a..e95d59cd8d075 100644
+--- a/drivers/hid/hid-roccat-isku.c
++++ b/drivers/hid/hid-roccat-isku.c
+@@ -324,6 +324,9 @@ static int isku_probe(struct hid_device *hdev,
+ {
+ 	int retval;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	retval = hid_parse(hdev);
+ 	if (retval) {
+ 		hid_err(hdev, "parse failed\n");
+diff --git a/drivers/hid/hid-roccat-kone.c b/drivers/hid/hid-roccat-kone.c
+index 1ca64481145ee..e8522eacf7973 100644
+--- a/drivers/hid/hid-roccat-kone.c
++++ b/drivers/hid/hid-roccat-kone.c
+@@ -749,6 +749,9 @@ static int kone_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ {
+ 	int retval;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	retval = hid_parse(hdev);
+ 	if (retval) {
+ 		hid_err(hdev, "parse failed\n");
+diff --git a/drivers/hid/hid-roccat-koneplus.c b/drivers/hid/hid-roccat-koneplus.c
+index 0316edf8c5bb4..1896c69ea512f 100644
+--- a/drivers/hid/hid-roccat-koneplus.c
++++ b/drivers/hid/hid-roccat-koneplus.c
+@@ -431,6 +431,9 @@ static int koneplus_probe(struct hid_device *hdev,
+ {
+ 	int retval;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	retval = hid_parse(hdev);
+ 	if (retval) {
+ 		hid_err(hdev, "parse failed\n");
+diff --git a/drivers/hid/hid-roccat-konepure.c b/drivers/hid/hid-roccat-konepure.c
+index 5248b3c7cf785..cf8eeb33a1257 100644
+--- a/drivers/hid/hid-roccat-konepure.c
++++ b/drivers/hid/hid-roccat-konepure.c
+@@ -133,6 +133,9 @@ static int konepure_probe(struct hid_device *hdev,
+ {
+ 	int retval;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	retval = hid_parse(hdev);
+ 	if (retval) {
+ 		hid_err(hdev, "parse failed\n");
+diff --git a/drivers/hid/hid-roccat-kovaplus.c b/drivers/hid/hid-roccat-kovaplus.c
+index 9600128815705..6fb9b9563769d 100644
+--- a/drivers/hid/hid-roccat-kovaplus.c
++++ b/drivers/hid/hid-roccat-kovaplus.c
+@@ -501,6 +501,9 @@ static int kovaplus_probe(struct hid_device *hdev,
+ {
+ 	int retval;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	retval = hid_parse(hdev);
+ 	if (retval) {
+ 		hid_err(hdev, "parse failed\n");
+diff --git a/drivers/hid/hid-roccat-lua.c b/drivers/hid/hid-roccat-lua.c
+index 4a88a76d5c622..d5ddf0d68346b 100644
+--- a/drivers/hid/hid-roccat-lua.c
++++ b/drivers/hid/hid-roccat-lua.c
+@@ -160,6 +160,9 @@ static int lua_probe(struct hid_device *hdev,
+ {
+ 	int retval;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	retval = hid_parse(hdev);
+ 	if (retval) {
+ 		hid_err(hdev, "parse failed\n");
+diff --git a/drivers/hid/hid-roccat-pyra.c b/drivers/hid/hid-roccat-pyra.c
+index 989927defe8db..4fcc8e7d276f2 100644
+--- a/drivers/hid/hid-roccat-pyra.c
++++ b/drivers/hid/hid-roccat-pyra.c
+@@ -449,6 +449,9 @@ static int pyra_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ {
+ 	int retval;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	retval = hid_parse(hdev);
+ 	if (retval) {
+ 		hid_err(hdev, "parse failed\n");
+diff --git a/drivers/hid/hid-roccat-ryos.c b/drivers/hid/hid-roccat-ryos.c
+index 3956a6c9c5217..5bf1971a2b14d 100644
+--- a/drivers/hid/hid-roccat-ryos.c
++++ b/drivers/hid/hid-roccat-ryos.c
+@@ -141,6 +141,9 @@ static int ryos_probe(struct hid_device *hdev,
+ {
+ 	int retval;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	retval = hid_parse(hdev);
+ 	if (retval) {
+ 		hid_err(hdev, "parse failed\n");
+diff --git a/drivers/hid/hid-roccat-savu.c b/drivers/hid/hid-roccat-savu.c
+index 818701f7a0281..a784bb4ee6512 100644
+--- a/drivers/hid/hid-roccat-savu.c
++++ b/drivers/hid/hid-roccat-savu.c
+@@ -113,6 +113,9 @@ static int savu_probe(struct hid_device *hdev,
+ {
+ 	int retval;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	retval = hid_parse(hdev);
+ 	if (retval) {
+ 		hid_err(hdev, "parse failed\n");
+diff --git a/drivers/hid/hid-samsung.c b/drivers/hid/hid-samsung.c
+index 2e1c31156eca0..cf5992e970940 100644
+--- a/drivers/hid/hid-samsung.c
++++ b/drivers/hid/hid-samsung.c
+@@ -152,6 +152,9 @@ static int samsung_probe(struct hid_device *hdev,
+ 	int ret;
+ 	unsigned int cmask = HID_CONNECT_DEFAULT;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	ret = hid_parse(hdev);
+ 	if (ret) {
+ 		hid_err(hdev, "parse failed\n");
+diff --git a/drivers/hid/hid-u2fzero.c b/drivers/hid/hid-u2fzero.c
+index 67ae2b18e33ac..ac3fd870673d2 100644
+--- a/drivers/hid/hid-u2fzero.c
++++ b/drivers/hid/hid-u2fzero.c
+@@ -290,7 +290,7 @@ static int u2fzero_probe(struct hid_device *hdev,
+ 	unsigned int minor;
+ 	int ret;
+ 
+-	if (!hid_is_using_ll_driver(hdev, &usb_hid_driver))
++	if (!hid_is_usb(hdev))
+ 		return -EINVAL;
+ 
+ 	dev = devm_kzalloc(&hdev->dev, sizeof(*dev), GFP_KERNEL);
+diff --git a/drivers/hid/hid-uclogic-core.c b/drivers/hid/hid-uclogic-core.c
+index 8e9c9e646cb7d..4edb241957040 100644
+--- a/drivers/hid/hid-uclogic-core.c
++++ b/drivers/hid/hid-uclogic-core.c
+@@ -164,6 +164,9 @@ static int uclogic_probe(struct hid_device *hdev,
+ 	struct uclogic_drvdata *drvdata = NULL;
+ 	bool params_initialized = false;
+ 
++	if (!hid_is_usb(hdev))
++		return -EINVAL;
++
+ 	/*
+ 	 * libinput requires the pad interface to be on a different node
+ 	 * than the pen, so use QUIRK_MULTI_INPUT for all tablets.
+diff --git a/drivers/hid/hid-uclogic-params.c b/drivers/hid/hid-uclogic-params.c
+index d26d8cd98efcf..dd05bed4ca53a 100644
+--- a/drivers/hid/hid-uclogic-params.c
++++ b/drivers/hid/hid-uclogic-params.c
+@@ -841,8 +841,7 @@ int uclogic_params_init(struct uclogic_params *params,
+ 	struct uclogic_params p = {0, };
+ 
+ 	/* Check arguments */
+-	if (params == NULL || hdev == NULL ||
+-	    !hid_is_using_ll_driver(hdev, &usb_hid_driver)) {
++	if (params == NULL || hdev == NULL || !hid_is_usb(hdev)) {
+ 		rc = -EINVAL;
+ 		goto cleanup;
+ 	}
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 73dafa60080f1..329bb1a46f90e 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -726,7 +726,7 @@ static void wacom_retrieve_hid_descriptor(struct hid_device *hdev,
+ 	 * Skip the query for this type and modify defaults based on
+ 	 * interface number.
+ 	 */
+-	if (features->type == WIRELESS) {
++	if (features->type == WIRELESS && intf) {
+ 		if (intf->cur_altsetting->desc.bInterfaceNumber == 0)
+ 			features->device_type = WACOM_DEVICETYPE_WL_MONITOR;
+ 		else
+@@ -2217,7 +2217,7 @@ static void wacom_update_name(struct wacom *wacom, const char *suffix)
+ 	if ((features->type == HID_GENERIC) && !strcmp("Wacom HID", features->name)) {
+ 		char *product_name = wacom->hdev->name;
+ 
+-		if (hid_is_using_ll_driver(wacom->hdev, &usb_hid_driver)) {
++		if (hid_is_usb(wacom->hdev)) {
+ 			struct usb_interface *intf = to_usb_interface(wacom->hdev->dev.parent);
+ 			struct usb_device *dev = interface_to_usbdev(intf);
+ 			product_name = dev->product;
+@@ -2448,6 +2448,9 @@ static void wacom_wireless_work(struct work_struct *work)
+ 
+ 	wacom_destroy_battery(wacom);
+ 
++	if (!usbdev)
++		return;
++
+ 	/* Stylus interface */
+ 	hdev1 = usb_get_intfdata(usbdev->config->interface[1]);
+ 	wacom1 = hid_get_drvdata(hdev1);
+@@ -2727,8 +2730,6 @@ static void wacom_mode_change_work(struct work_struct *work)
+ static int wacom_probe(struct hid_device *hdev,
+ 		const struct hid_device_id *id)
+ {
+-	struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
+-	struct usb_device *dev = interface_to_usbdev(intf);
+ 	struct wacom *wacom;
+ 	struct wacom_wac *wacom_wac;
+ 	struct wacom_features *features;
+@@ -2763,8 +2764,14 @@ static int wacom_probe(struct hid_device *hdev,
+ 	wacom_wac->hid_data.inputmode = -1;
+ 	wacom_wac->mode_report = -1;
+ 
+-	wacom->usbdev = dev;
+-	wacom->intf = intf;
++	if (hid_is_usb(hdev)) {
++		struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
++		struct usb_device *dev = interface_to_usbdev(intf);
++
++		wacom->usbdev = dev;
++		wacom->intf = intf;
++	}
++
+ 	mutex_init(&wacom->lock);
+ 	INIT_DELAYED_WORK(&wacom->init_work, wacom_init_work);
+ 	INIT_WORK(&wacom->wireless_work, wacom_wireless_work);
+diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c
+index c99e90469a245..2eaf85b6e39f4 100644
+--- a/drivers/iio/accel/kxcjk-1013.c
++++ b/drivers/iio/accel/kxcjk-1013.c
+@@ -1435,8 +1435,7 @@ static int kxcjk1013_probe(struct i2c_client *client,
+ 	return 0;
+ 
+ err_buffer_cleanup:
+-	if (data->dready_trig)
+-		iio_triggered_buffer_cleanup(indio_dev);
++	iio_triggered_buffer_cleanup(indio_dev);
+ err_trigger_unregister:
+ 	if (data->dready_trig)
+ 		iio_trigger_unregister(data->dready_trig);
+@@ -1459,8 +1458,8 @@ static int kxcjk1013_remove(struct i2c_client *client)
+ 	pm_runtime_set_suspended(&client->dev);
+ 	pm_runtime_put_noidle(&client->dev);
+ 
++	iio_triggered_buffer_cleanup(indio_dev);
+ 	if (data->dready_trig) {
+-		iio_triggered_buffer_cleanup(indio_dev);
+ 		iio_trigger_unregister(data->dready_trig);
+ 		iio_trigger_unregister(data->motion_trig);
+ 	}
+diff --git a/drivers/iio/accel/kxsd9.c b/drivers/iio/accel/kxsd9.c
+index 0e18b92e20992..a51568ba8b7d2 100644
+--- a/drivers/iio/accel/kxsd9.c
++++ b/drivers/iio/accel/kxsd9.c
+@@ -224,14 +224,14 @@ static irqreturn_t kxsd9_trigger_handler(int irq, void *p)
+ 			       hw_values.chan,
+ 			       sizeof(hw_values.chan));
+ 	if (ret) {
+-		dev_err(st->dev,
+-			"error reading data\n");
+-		return ret;
++		dev_err(st->dev, "error reading data: %d\n", ret);
++		goto out;
+ 	}
+ 
+ 	iio_push_to_buffers_with_timestamp(indio_dev,
+ 					   &hw_values,
+ 					   iio_get_time_ns(indio_dev));
++out:
+ 	iio_trigger_notify_done(indio_dev->trig);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c
+index bf1d2c8afdbd7..a7208704d31c9 100644
+--- a/drivers/iio/accel/mma8452.c
++++ b/drivers/iio/accel/mma8452.c
+@@ -1473,7 +1473,7 @@ static int mma8452_trigger_setup(struct iio_dev *indio_dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	indio_dev->trig = trig;
++	indio_dev->trig = iio_trigger_get(trig);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iio/adc/ad7768-1.c b/drivers/iio/adc/ad7768-1.c
+index c7e15c45140ad..4afa50e5c058a 100644
+--- a/drivers/iio/adc/ad7768-1.c
++++ b/drivers/iio/adc/ad7768-1.c
+@@ -470,8 +470,8 @@ static irqreturn_t ad7768_trigger_handler(int irq, void *p)
+ 	iio_push_to_buffers_with_timestamp(indio_dev, &st->data.scan,
+ 					   iio_get_time_ns(indio_dev));
+ 
+-	iio_trigger_notify_done(indio_dev->trig);
+ err_unlock:
++	iio_trigger_notify_done(indio_dev->trig);
+ 	mutex_unlock(&st->lock);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/iio/adc/at91-sama5d2_adc.c b/drivers/iio/adc/at91-sama5d2_adc.c
+index b8139c435a4b0..4ede7e766765e 100644
+--- a/drivers/iio/adc/at91-sama5d2_adc.c
++++ b/drivers/iio/adc/at91-sama5d2_adc.c
+@@ -1401,7 +1401,8 @@ static int at91_adc_read_info_raw(struct iio_dev *indio_dev,
+ 		*val = st->conversion_value;
+ 		ret = at91_adc_adjust_val_osr(st, val);
+ 		if (chan->scan_type.sign == 's')
+-			*val = sign_extend32(*val, 11);
++			*val = sign_extend32(*val,
++					     chan->scan_type.realbits - 1);
+ 		st->conversion_done = false;
+ 	}
+ 
+diff --git a/drivers/iio/adc/axp20x_adc.c b/drivers/iio/adc/axp20x_adc.c
+index 3e0c0233b4315..df99f1365c398 100644
+--- a/drivers/iio/adc/axp20x_adc.c
++++ b/drivers/iio/adc/axp20x_adc.c
+@@ -251,19 +251,8 @@ static int axp22x_adc_raw(struct iio_dev *indio_dev,
+ 			  struct iio_chan_spec const *chan, int *val)
+ {
+ 	struct axp20x_adc_iio *info = iio_priv(indio_dev);
+-	int size;
+ 
+-	/*
+-	 * N.B.: Unlike the Chinese datasheets tell, the charging current is
+-	 * stored on 12 bits, not 13 bits. Only discharging current is on 13
+-	 * bits.
+-	 */
+-	if (chan->type == IIO_CURRENT && chan->channel == AXP22X_BATT_DISCHRG_I)
+-		size = 13;
+-	else
+-		size = 12;
+-
+-	*val = axp20x_read_variable_width(info->regmap, chan->address, size);
++	*val = axp20x_read_variable_width(info->regmap, chan->address, 12);
+ 	if (*val < 0)
+ 		return *val;
+ 
+@@ -386,9 +375,8 @@ static int axp22x_adc_scale(struct iio_chan_spec const *chan, int *val,
+ 		return IIO_VAL_INT_PLUS_MICRO;
+ 
+ 	case IIO_CURRENT:
+-		*val = 0;
+-		*val2 = 500000;
+-		return IIO_VAL_INT_PLUS_MICRO;
++		*val = 1;
++		return IIO_VAL_INT;
+ 
+ 	case IIO_TEMP:
+ 		*val = 100;
+diff --git a/drivers/iio/adc/dln2-adc.c b/drivers/iio/adc/dln2-adc.c
+index 0d53ef18e0459..58b3f6065db0d 100644
+--- a/drivers/iio/adc/dln2-adc.c
++++ b/drivers/iio/adc/dln2-adc.c
+@@ -248,7 +248,6 @@ static int dln2_adc_set_chan_period(struct dln2_adc *dln2,
+ static int dln2_adc_read(struct dln2_adc *dln2, unsigned int channel)
+ {
+ 	int ret, i;
+-	struct iio_dev *indio_dev = platform_get_drvdata(dln2->pdev);
+ 	u16 conflict;
+ 	__le16 value;
+ 	int olen = sizeof(value);
+@@ -257,13 +256,9 @@ static int dln2_adc_read(struct dln2_adc *dln2, unsigned int channel)
+ 		.chan = channel,
+ 	};
+ 
+-	ret = iio_device_claim_direct_mode(indio_dev);
+-	if (ret < 0)
+-		return ret;
+-
+ 	ret = dln2_adc_set_chan_enabled(dln2, channel, true);
+ 	if (ret < 0)
+-		goto release_direct;
++		return ret;
+ 
+ 	ret = dln2_adc_set_port_enabled(dln2, true, &conflict);
+ 	if (ret < 0) {
+@@ -300,8 +295,6 @@ disable_port:
+ 	dln2_adc_set_port_enabled(dln2, false, NULL);
+ disable_chan:
+ 	dln2_adc_set_chan_enabled(dln2, channel, false);
+-release_direct:
+-	iio_device_release_direct_mode(indio_dev);
+ 
+ 	return ret;
+ }
+@@ -337,10 +330,16 @@ static int dln2_adc_read_raw(struct iio_dev *indio_dev,
+ 
+ 	switch (mask) {
+ 	case IIO_CHAN_INFO_RAW:
++		ret = iio_device_claim_direct_mode(indio_dev);
++		if (ret < 0)
++			return ret;
++
+ 		mutex_lock(&dln2->mutex);
+ 		ret = dln2_adc_read(dln2, chan->channel);
+ 		mutex_unlock(&dln2->mutex);
+ 
++		iio_device_release_direct_mode(indio_dev);
++
+ 		if (ret < 0)
+ 			return ret;
+ 
+@@ -655,7 +654,11 @@ static int dln2_adc_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 	}
+ 	iio_trigger_set_drvdata(dln2->trig, dln2);
+-	devm_iio_trigger_register(dev, dln2->trig);
++	ret = devm_iio_trigger_register(dev, dln2->trig);
++	if (ret) {
++		dev_err(dev, "failed to register trigger: %d\n", ret);
++		return ret;
++	}
+ 	iio_trigger_set_immutable(indio_dev, dln2->trig);
+ 
+ 	ret = devm_iio_triggered_buffer_setup(dev, indio_dev, NULL,
+diff --git a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c
+index 16c02c30dec75..9939dee017433 100644
+--- a/drivers/iio/adc/stm32-adc.c
++++ b/drivers/iio/adc/stm32-adc.c
+@@ -979,6 +979,7 @@ static void stm32h7_adc_unprepare(struct iio_dev *indio_dev)
+ {
+ 	struct stm32_adc *adc = iio_priv(indio_dev);
+ 
++	stm32_adc_writel(adc, STM32H7_ADC_PCSEL, 0);
+ 	stm32h7_adc_disable(indio_dev);
+ 	stm32h7_adc_enter_pwr_down(adc);
+ }
+diff --git a/drivers/iio/gyro/adxrs290.c b/drivers/iio/gyro/adxrs290.c
+index ca6fc234076ee..c7f9963391558 100644
+--- a/drivers/iio/gyro/adxrs290.c
++++ b/drivers/iio/gyro/adxrs290.c
+@@ -7,6 +7,7 @@
+  */
+ 
+ #include <linux/bitfield.h>
++#include <linux/bitops.h>
+ #include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/kernel.h>
+@@ -124,7 +125,7 @@ static int adxrs290_get_rate_data(struct iio_dev *indio_dev, const u8 cmd, int *
+ 		goto err_unlock;
+ 	}
+ 
+-	*val = temp;
++	*val = sign_extend32(temp, 15);
+ 
+ err_unlock:
+ 	mutex_unlock(&st->lock);
+@@ -146,7 +147,7 @@ static int adxrs290_get_temp_data(struct iio_dev *indio_dev, int *val)
+ 	}
+ 
+ 	/* extract lower 12 bits temperature reading */
+-	*val = temp & 0x0FFF;
++	*val = sign_extend32(temp, 11);
+ 
+ err_unlock:
+ 	mutex_unlock(&st->lock);
+diff --git a/drivers/iio/gyro/itg3200_buffer.c b/drivers/iio/gyro/itg3200_buffer.c
+index 1c3c1bd53374a..98b3f021f0bec 100644
+--- a/drivers/iio/gyro/itg3200_buffer.c
++++ b/drivers/iio/gyro/itg3200_buffer.c
+@@ -61,9 +61,9 @@ static irqreturn_t itg3200_trigger_handler(int irq, void *p)
+ 
+ 	iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp);
+ 
++error_ret:
+ 	iio_trigger_notify_done(indio_dev->trig);
+ 
+-error_ret:
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/iio/industrialio-trigger.c b/drivers/iio/industrialio-trigger.c
+index 583bb51f65a75..6bcc562d7857b 100644
+--- a/drivers/iio/industrialio-trigger.c
++++ b/drivers/iio/industrialio-trigger.c
+@@ -550,7 +550,6 @@ struct iio_trigger *viio_trigger_alloc(const char *fmt, va_list vargs)
+ 		irq_modify_status(trig->subirq_base + i,
+ 				  IRQ_NOREQUEST | IRQ_NOAUTOEN, IRQ_NOPROBE);
+ 	}
+-	get_device(&trig->dev);
+ 
+ 	return trig;
+ 
+diff --git a/drivers/iio/light/ltr501.c b/drivers/iio/light/ltr501.c
+index 74ed2d88a3ed3..379236a806fac 100644
+--- a/drivers/iio/light/ltr501.c
++++ b/drivers/iio/light/ltr501.c
+@@ -1273,7 +1273,7 @@ static irqreturn_t ltr501_trigger_handler(int irq, void *p)
+ 		ret = regmap_bulk_read(data->regmap, LTR501_ALS_DATA1,
+ 				       als_buf, sizeof(als_buf));
+ 		if (ret < 0)
+-			return ret;
++			goto done;
+ 		if (test_bit(0, indio_dev->active_scan_mask))
+ 			scan.channels[j++] = le16_to_cpu(als_buf[1]);
+ 		if (test_bit(1, indio_dev->active_scan_mask))
+diff --git a/drivers/iio/light/stk3310.c b/drivers/iio/light/stk3310.c
+index a2827d03ab0fb..6654573a887c9 100644
+--- a/drivers/iio/light/stk3310.c
++++ b/drivers/iio/light/stk3310.c
+@@ -546,9 +546,8 @@ static irqreturn_t stk3310_irq_event_handler(int irq, void *private)
+ 	mutex_lock(&data->lock);
+ 	ret = regmap_field_read(data->reg_flag_nf, &dir);
+ 	if (ret < 0) {
+-		dev_err(&data->client->dev, "register read failed\n");
+-		mutex_unlock(&data->lock);
+-		return ret;
++		dev_err(&data->client->dev, "register read failed: %d\n", ret);
++		goto out;
+ 	}
+ 	event = IIO_UNMOD_EVENT_CODE(IIO_PROXIMITY, 1,
+ 				     IIO_EV_TYPE_THRESH,
+@@ -560,6 +559,7 @@ static irqreturn_t stk3310_irq_event_handler(int irq, void *private)
+ 	ret = regmap_field_write(data->reg_flag_psint, 0);
+ 	if (ret < 0)
+ 		dev_err(&data->client->dev, "failed to reset interrupts\n");
++out:
+ 	mutex_unlock(&data->lock);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/iio/trigger/stm32-timer-trigger.c b/drivers/iio/trigger/stm32-timer-trigger.c
+index 3aa9e8bba005f..e38671b41df74 100644
+--- a/drivers/iio/trigger/stm32-timer-trigger.c
++++ b/drivers/iio/trigger/stm32-timer-trigger.c
+@@ -912,6 +912,6 @@ static struct platform_driver stm32_timer_trigger_driver = {
+ };
+ module_platform_driver(stm32_timer_trigger_driver);
+ 
+-MODULE_ALIAS("platform: stm32-timer-trigger");
++MODULE_ALIAS("platform:stm32-timer-trigger");
+ MODULE_DESCRIPTION("STMicroelectronics STM32 Timer Trigger driver");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
+index c87b94ea29397..88476a1a601a4 100644
+--- a/drivers/infiniband/hw/hfi1/chip.c
++++ b/drivers/infiniband/hw/hfi1/chip.c
+@@ -8456,6 +8456,8 @@ static void receive_interrupt_common(struct hfi1_ctxtdata *rcd)
+  */
+ static void __hfi1_rcd_eoi_intr(struct hfi1_ctxtdata *rcd)
+ {
++	if (!rcd->rcvhdrq)
++		return;
+ 	clear_recv_intr(rcd);
+ 	if (check_packet_present(rcd))
+ 		force_recv_intr(rcd);
+diff --git a/drivers/infiniband/hw/hfi1/driver.c b/drivers/infiniband/hw/hfi1/driver.c
+index a40701a6e1b64..808571855ed15 100644
+--- a/drivers/infiniband/hw/hfi1/driver.c
++++ b/drivers/infiniband/hw/hfi1/driver.c
+@@ -1053,6 +1053,8 @@ int handle_receive_interrupt(struct hfi1_ctxtdata *rcd, int thread)
+ 	struct hfi1_packet packet;
+ 	int skip_pkt = 0;
+ 
++	if (!rcd->rcvhdrq)
++		return RCV_PKT_OK;
+ 	/* Control context will always use the slow path interrupt handler */
+ 	needset = (rcd->ctxt == HFI1_CTRL_CTXT) ? 0 : 1;
+ 
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index b6e453e9ba236..fa2cd76747ff4 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -154,7 +154,6 @@ static int hfi1_create_kctxt(struct hfi1_devdata *dd,
+ 	rcd->fast_handler = get_dma_rtail_setting(rcd) ?
+ 				handle_receive_interrupt_dma_rtail :
+ 				handle_receive_interrupt_nodma_rtail;
+-	rcd->slow_handler = handle_receive_interrupt;
+ 
+ 	hfi1_set_seq_cnt(rcd, 1);
+ 
+@@ -375,6 +374,8 @@ int hfi1_create_ctxtdata(struct hfi1_pportdata *ppd, int numa,
+ 		rcd->numa_id = numa;
+ 		rcd->rcv_array_groups = dd->rcv_entries.ngroups;
+ 		rcd->rhf_rcv_function_map = normal_rhf_rcv_functions;
++		rcd->slow_handler = handle_receive_interrupt;
++		rcd->do_interrupt = rcd->slow_handler;
+ 		rcd->msix_intr = CCE_NUM_MSIX_VECTORS;
+ 
+ 		mutex_init(&rcd->exp_mutex);
+@@ -915,18 +916,6 @@ int hfi1_init(struct hfi1_devdata *dd, int reinit)
+ 	if (ret)
+ 		goto done;
+ 
+-	/* allocate dummy tail memory for all receive contexts */
+-	dd->rcvhdrtail_dummy_kvaddr = dma_alloc_coherent(&dd->pcidev->dev,
+-							 sizeof(u64),
+-							 &dd->rcvhdrtail_dummy_dma,
+-							 GFP_KERNEL);
+-
+-	if (!dd->rcvhdrtail_dummy_kvaddr) {
+-		dd_dev_err(dd, "cannot allocate dummy tail memory\n");
+-		ret = -ENOMEM;
+-		goto done;
+-	}
+-
+ 	/* dd->rcd can be NULL if early initialization failed */
+ 	for (i = 0; dd->rcd && i < dd->first_dyn_alloc_ctxt; ++i) {
+ 		/*
+@@ -939,8 +928,6 @@ int hfi1_init(struct hfi1_devdata *dd, int reinit)
+ 		if (!rcd)
+ 			continue;
+ 
+-		rcd->do_interrupt = &handle_receive_interrupt;
+-
+ 		lastfail = hfi1_create_rcvhdrq(dd, rcd);
+ 		if (!lastfail)
+ 			lastfail = hfi1_setup_eagerbufs(rcd);
+@@ -1161,7 +1148,7 @@ void hfi1_free_ctxtdata(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd)
+ 	rcd->egrbufs.rcvtids = NULL;
+ 
+ 	for (e = 0; e < rcd->egrbufs.alloced; e++) {
+-		if (rcd->egrbufs.buffers[e].dma)
++		if (rcd->egrbufs.buffers[e].addr)
+ 			dma_free_coherent(&dd->pcidev->dev,
+ 					  rcd->egrbufs.buffers[e].len,
+ 					  rcd->egrbufs.buffers[e].addr,
+@@ -1242,6 +1229,11 @@ void hfi1_free_devdata(struct hfi1_devdata *dd)
+ 	dd->tx_opstats    = NULL;
+ 	kfree(dd->comp_vect);
+ 	dd->comp_vect = NULL;
++	if (dd->rcvhdrtail_dummy_kvaddr)
++		dma_free_coherent(&dd->pcidev->dev, sizeof(u64),
++				  (void *)dd->rcvhdrtail_dummy_kvaddr,
++				  dd->rcvhdrtail_dummy_dma);
++	dd->rcvhdrtail_dummy_kvaddr = NULL;
+ 	sdma_clean(dd, dd->num_sdma);
+ 	rvt_dealloc_device(&dd->verbs_dev.rdi);
+ }
+@@ -1339,6 +1331,15 @@ static struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev,
+ 		goto bail;
+ 	}
+ 
++	/* allocate dummy tail memory for all receive contexts */
++	dd->rcvhdrtail_dummy_kvaddr =
++		dma_alloc_coherent(&dd->pcidev->dev, sizeof(u64),
++				   &dd->rcvhdrtail_dummy_dma, GFP_KERNEL);
++	if (!dd->rcvhdrtail_dummy_kvaddr) {
++		ret = -ENOMEM;
++		goto bail;
++	}
++
+ 	atomic_set(&dd->ipoib_rsm_usr_num, 0);
+ 	return dd;
+ 
+@@ -1546,13 +1547,6 @@ static void cleanup_device_data(struct hfi1_devdata *dd)
+ 
+ 	free_credit_return(dd);
+ 
+-	if (dd->rcvhdrtail_dummy_kvaddr) {
+-		dma_free_coherent(&dd->pcidev->dev, sizeof(u64),
+-				  (void *)dd->rcvhdrtail_dummy_kvaddr,
+-				  dd->rcvhdrtail_dummy_dma);
+-		dd->rcvhdrtail_dummy_kvaddr = NULL;
+-	}
+-
+ 	/*
+ 	 * Free any resources still in use (usually just kernel contexts)
+ 	 * at unload; we do for ctxtcnt, because that's what we allocate.
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index ac6f87137b637..0b73dc7847aae 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -880,8 +880,8 @@ struct sdma_engine *sdma_select_user_engine(struct hfi1_devdata *dd,
+ 	if (current->nr_cpus_allowed != 1)
+ 		goto out;
+ 
+-	cpu_id = smp_processor_id();
+ 	rcu_read_lock();
++	cpu_id = smp_processor_id();
+ 	rht_node = rhashtable_lookup(dd->sdma_rht, &cpu_id,
+ 				     sdma_rht_params);
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index c29ba8ee51e29..abe882ec1bae7 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -33,6 +33,7 @@
+ #include <linux/acpi.h>
+ #include <linux/etherdevice.h>
+ #include <linux/interrupt.h>
++#include <linux/iopoll.h>
+ #include <linux/kernel.h>
+ #include <linux/types.h>
+ #include <net/addrconf.h>
+@@ -964,9 +965,14 @@ static int hns_roce_v2_cmd_hw_resetting(struct hns_roce_dev *hr_dev,
+ 					unsigned long instance_stage,
+ 					unsigned long reset_stage)
+ {
++#define HW_RESET_TIMEOUT_US 1000000
++#define HW_RESET_SLEEP_US 1000
++
+ 	struct hns_roce_v2_priv *priv = hr_dev->priv;
+ 	struct hnae3_handle *handle = priv->handle;
+ 	const struct hnae3_ae_ops *ops = handle->ae_algo->ops;
++	unsigned long val;
++	int ret;
+ 
+ 	/* When hardware reset is detected, we should stop sending mailbox&cmq&
+ 	 * doorbell to hardware. If now in .init_instance() function, we should
+@@ -978,7 +984,11 @@ static int hns_roce_v2_cmd_hw_resetting(struct hns_roce_dev *hr_dev,
+ 	 * again.
+ 	 */
+ 	hr_dev->dis_db = true;
+-	if (!ops->get_hw_reset_stat(handle))
++
++	ret = read_poll_timeout(ops->ae_dev_reset_cnt, val,
++				val > hr_dev->reset_cnt, HW_RESET_SLEEP_US,
++				HW_RESET_TIMEOUT_US, false, handle);
++	if (!ret)
+ 		hr_dev->is_reset = true;
+ 
+ 	if (!hr_dev->is_reset || reset_stage == HNS_ROCE_STATE_RST_INIT ||
+@@ -6342,10 +6352,8 @@ static int hns_roce_hw_v2_reset_notify_down(struct hnae3_handle *handle)
+ 	if (!hr_dev)
+ 		return 0;
+ 
+-	hr_dev->is_reset = true;
+ 	hr_dev->active = false;
+ 	hr_dev->dis_db = true;
+-
+ 	hr_dev->state = HNS_ROCE_DEVICE_STATE_RST_DOWN;
+ 
+ 	return 0;
+diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
+index d7eb2e93db8f5..84f2741aaac6a 100644
+--- a/drivers/irqchip/irq-armada-370-xp.c
++++ b/drivers/irqchip/irq-armada-370-xp.c
+@@ -232,16 +232,12 @@ static int armada_370_xp_msi_alloc(struct irq_domain *domain, unsigned int virq,
+ 	int hwirq, i;
+ 
+ 	mutex_lock(&msi_used_lock);
++	hwirq = bitmap_find_free_region(msi_used, PCI_MSI_DOORBELL_NR,
++					order_base_2(nr_irqs));
++	mutex_unlock(&msi_used_lock);
+ 
+-	hwirq = bitmap_find_next_zero_area(msi_used, PCI_MSI_DOORBELL_NR,
+-					   0, nr_irqs, 0);
+-	if (hwirq >= PCI_MSI_DOORBELL_NR) {
+-		mutex_unlock(&msi_used_lock);
++	if (hwirq < 0)
+ 		return -ENOSPC;
+-	}
+-
+-	bitmap_set(msi_used, hwirq, nr_irqs);
+-	mutex_unlock(&msi_used_lock);
+ 
+ 	for (i = 0; i < nr_irqs; i++) {
+ 		irq_domain_set_info(domain, virq + i, hwirq + i,
+@@ -250,7 +246,7 @@ static int armada_370_xp_msi_alloc(struct irq_domain *domain, unsigned int virq,
+ 				    NULL, NULL);
+ 	}
+ 
+-	return hwirq;
++	return 0;
+ }
+ 
+ static void armada_370_xp_msi_free(struct irq_domain *domain,
+@@ -259,7 +255,7 @@ static void armada_370_xp_msi_free(struct irq_domain *domain,
+ 	struct irq_data *d = irq_domain_get_irq_data(domain, virq);
+ 
+ 	mutex_lock(&msi_used_lock);
+-	bitmap_clear(msi_used, d->hwirq, nr_irqs);
++	bitmap_release_region(msi_used, d->hwirq, order_base_2(nr_irqs));
+ 	mutex_unlock(&msi_used_lock);
+ }
+ 
+diff --git a/drivers/irqchip/irq-aspeed-scu-ic.c b/drivers/irqchip/irq-aspeed-scu-ic.c
+index c90a3346b9857..0f0aac7cc1140 100644
+--- a/drivers/irqchip/irq-aspeed-scu-ic.c
++++ b/drivers/irqchip/irq-aspeed-scu-ic.c
+@@ -78,8 +78,8 @@ static void aspeed_scu_ic_irq_handler(struct irq_desc *desc)
+ 				       bit - scu_ic->irq_shift);
+ 		generic_handle_irq(irq);
+ 
+-		regmap_update_bits(scu_ic->scu, scu_ic->reg, mask,
+-				   BIT(bit + ASPEED_SCU_IC_STATUS_SHIFT));
++		regmap_write_bits(scu_ic->scu, scu_ic->reg, mask,
++				  BIT(bit + ASPEED_SCU_IC_STATUS_SHIFT));
+ 	}
+ 
+ 	chained_irq_exit(chip, desc);
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 95e0b82b6c661..42b295337bafb 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -742,7 +742,7 @@ static struct its_collection *its_build_invall_cmd(struct its_node *its,
+ 
+ 	its_fixup_cmd(cmd);
+ 
+-	return NULL;
++	return desc->its_invall_cmd.col;
+ }
+ 
+ static struct its_vpe *its_build_vinvall_cmd(struct its_node *its,
+diff --git a/drivers/irqchip/irq-nvic.c b/drivers/irqchip/irq-nvic.c
+index f747e2209ea99..21cb31ff2bbf2 100644
+--- a/drivers/irqchip/irq-nvic.c
++++ b/drivers/irqchip/irq-nvic.c
+@@ -26,7 +26,7 @@
+ 
+ #define NVIC_ISER		0x000
+ #define NVIC_ICER		0x080
+-#define NVIC_IPR		0x300
++#define NVIC_IPR		0x400
+ 
+ #define NVIC_MAX_BANKS		16
+ /*
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 7871e7dcd4836..2069b16b50eca 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -2252,6 +2252,7 @@ super_1_rdev_size_change(struct md_rdev *rdev, sector_t num_sectors)
+ 
+ 		if (!num_sectors || num_sectors > max_sectors)
+ 			num_sectors = max_sectors;
++		rdev->sb_start = sb_start;
+ 	}
+ 	sb = page_address(rdev->sb_page);
+ 	sb->data_size = cpu_to_le64(num_sectors);
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index a9c9d86eef4bc..ef49ac8d91019 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -717,16 +717,18 @@ static int fastrpc_get_meta_size(struct fastrpc_invoke_ctx *ctx)
+ static u64 fastrpc_get_payload_size(struct fastrpc_invoke_ctx *ctx, int metalen)
+ {
+ 	u64 size = 0;
+-	int i;
++	int oix;
+ 
+ 	size = ALIGN(metalen, FASTRPC_ALIGN);
+-	for (i = 0; i < ctx->nscalars; i++) {
++	for (oix = 0; oix < ctx->nbufs; oix++) {
++		int i = ctx->olaps[oix].raix;
++
+ 		if (ctx->args[i].fd == 0 || ctx->args[i].fd == -1) {
+ 
+-			if (ctx->olaps[i].offset == 0)
++			if (ctx->olaps[oix].offset == 0)
+ 				size = ALIGN(size, FASTRPC_ALIGN);
+ 
+-			size += (ctx->olaps[i].mend - ctx->olaps[i].mstart);
++			size += (ctx->olaps[oix].mend - ctx->olaps[oix].mstart);
+ 		}
+ 	}
+ 
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index addaaf2810e26..782879d46ff48 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -660,7 +660,7 @@ static int renesas_sdhi_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ 
+ 	/* Issue CMD19 twice for each tap */
+ 	for (i = 0; i < 2 * priv->tap_num; i++) {
+-		int cmd_error;
++		int cmd_error = 0;
+ 
+ 		/* Set sampling clock position */
+ 		sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_TAPSET, i % priv->tap_num);
+diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c
+index ce05dd4088e9d..663ff5300ad99 100644
+--- a/drivers/mtd/nand/raw/fsmc_nand.c
++++ b/drivers/mtd/nand/raw/fsmc_nand.c
+@@ -15,6 +15,7 @@
+ 
+ #include <linux/clk.h>
+ #include <linux/completion.h>
++#include <linux/delay.h>
+ #include <linux/dmaengine.h>
+ #include <linux/dma-direction.h>
+ #include <linux/dma-mapping.h>
+@@ -93,6 +94,14 @@
+ 
+ #define FSMC_BUSY_WAIT_TIMEOUT	(1 * HZ)
+ 
++/*
++ * According to SPEAr300 Reference Manual (RM0082)
++ *  TOUDEL = 7ns (Output delay from the flip-flops to the board)
++ *  TINDEL = 5ns (Input delay from the board to the flipflop)
++ */
++#define TOUTDEL	7000
++#define TINDEL	5000
++
+ struct fsmc_nand_timings {
+ 	u8 tclr;
+ 	u8 tar;
+@@ -277,7 +286,7 @@ static int fsmc_calc_timings(struct fsmc_nand_data *host,
+ {
+ 	unsigned long hclk = clk_get_rate(host->clk);
+ 	unsigned long hclkn = NSEC_PER_SEC / hclk;
+-	u32 thiz, thold, twait, tset;
++	u32 thiz, thold, twait, tset, twait_min;
+ 
+ 	if (sdrt->tRC_min < 30000)
+ 		return -EOPNOTSUPP;
+@@ -309,13 +318,6 @@ static int fsmc_calc_timings(struct fsmc_nand_data *host,
+ 	else if (tims->thold > FSMC_THOLD_MASK)
+ 		tims->thold = FSMC_THOLD_MASK;
+ 
+-	twait = max(sdrt->tRP_min, sdrt->tWP_min);
+-	tims->twait = DIV_ROUND_UP(twait / 1000, hclkn) - 1;
+-	if (tims->twait == 0)
+-		tims->twait = 1;
+-	else if (tims->twait > FSMC_TWAIT_MASK)
+-		tims->twait = FSMC_TWAIT_MASK;
+-
+ 	tset = max(sdrt->tCS_min - sdrt->tWP_min,
+ 		   sdrt->tCEA_max - sdrt->tREA_max);
+ 	tims->tset = DIV_ROUND_UP(tset / 1000, hclkn) - 1;
+@@ -324,6 +326,21 @@ static int fsmc_calc_timings(struct fsmc_nand_data *host,
+ 	else if (tims->tset > FSMC_TSET_MASK)
+ 		tims->tset = FSMC_TSET_MASK;
+ 
++	/*
++	 * According to SPEAr300 Reference Manual (RM0082) which gives more
++	 * information related to FSMSC timings than the SPEAr600 one (RM0305),
++	 *   twait >= tCEA - (tset * TCLK) + TOUTDEL + TINDEL
++	 */
++	twait_min = sdrt->tCEA_max - ((tims->tset + 1) * hclkn * 1000)
++		    + TOUTDEL + TINDEL;
++	twait = max3(sdrt->tRP_min, sdrt->tWP_min, twait_min);
++
++	tims->twait = DIV_ROUND_UP(twait / 1000, hclkn) - 1;
++	if (tims->twait == 0)
++		tims->twait = 1;
++	else if (tims->twait > FSMC_TWAIT_MASK)
++		tims->twait = FSMC_TWAIT_MASK;
++
+ 	return 0;
+ }
+ 
+@@ -653,6 +670,9 @@ static int fsmc_exec_op(struct nand_chip *chip, const struct nand_operation *op,
+ 						instr->ctx.waitrdy.timeout_ms);
+ 			break;
+ 		}
++
++		if (instr->delay_ns)
++			ndelay(instr->delay_ns);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
+index c3091e00dd5fb..0436aef9c9ef5 100644
+--- a/drivers/net/bonding/bond_alb.c
++++ b/drivers/net/bonding/bond_alb.c
+@@ -1531,14 +1531,14 @@ void bond_alb_monitor(struct work_struct *work)
+ 	struct slave *slave;
+ 
+ 	if (!bond_has_slaves(bond)) {
+-		bond_info->tx_rebalance_counter = 0;
++		atomic_set(&bond_info->tx_rebalance_counter, 0);
+ 		bond_info->lp_counter = 0;
+ 		goto re_arm;
+ 	}
+ 
+ 	rcu_read_lock();
+ 
+-	bond_info->tx_rebalance_counter++;
++	atomic_inc(&bond_info->tx_rebalance_counter);
+ 	bond_info->lp_counter++;
+ 
+ 	/* send learning packets */
+@@ -1560,7 +1560,7 @@ void bond_alb_monitor(struct work_struct *work)
+ 	}
+ 
+ 	/* rebalance tx traffic */
+-	if (bond_info->tx_rebalance_counter >= BOND_TLB_REBALANCE_TICKS) {
++	if (atomic_read(&bond_info->tx_rebalance_counter) >= BOND_TLB_REBALANCE_TICKS) {
+ 		bond_for_each_slave_rcu(bond, slave, iter) {
+ 			tlb_clear_slave(bond, slave, 1);
+ 			if (slave == rcu_access_pointer(bond->curr_active_slave)) {
+@@ -1570,7 +1570,7 @@ void bond_alb_monitor(struct work_struct *work)
+ 				bond_info->unbalanced_load = 0;
+ 			}
+ 		}
+-		bond_info->tx_rebalance_counter = 0;
++		atomic_set(&bond_info->tx_rebalance_counter, 0);
+ 	}
+ 
+ 	if (bond_info->rlb_enabled) {
+@@ -1640,7 +1640,8 @@ int bond_alb_init_slave(struct bonding *bond, struct slave *slave)
+ 	tlb_init_slave(slave);
+ 
+ 	/* order a rebalance ASAP */
+-	bond->alb_info.tx_rebalance_counter = BOND_TLB_REBALANCE_TICKS;
++	atomic_set(&bond->alb_info.tx_rebalance_counter,
++		   BOND_TLB_REBALANCE_TICKS);
+ 
+ 	if (bond->alb_info.rlb_enabled)
+ 		bond->alb_info.rlb_rebalance = 1;
+@@ -1677,7 +1678,8 @@ void bond_alb_handle_link_change(struct bonding *bond, struct slave *slave, char
+ 			rlb_clear_slave(bond, slave);
+ 	} else if (link == BOND_LINK_UP) {
+ 		/* order a rebalance ASAP */
+-		bond_info->tx_rebalance_counter = BOND_TLB_REBALANCE_TICKS;
++		atomic_set(&bond_info->tx_rebalance_counter,
++			   BOND_TLB_REBALANCE_TICKS);
+ 		if (bond->alb_info.rlb_enabled) {
+ 			bond->alb_info.rlb_rebalance = 1;
+ 			/* If the updelay module parameter is smaller than the
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index 99323c273aa56..9d7445f6ef143 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -248,6 +248,9 @@ MODULE_DESCRIPTION("CAN driver for Kvaser CAN/PCIe devices");
+ #define KVASER_PCIEFD_SPACK_EWLR BIT(23)
+ #define KVASER_PCIEFD_SPACK_EPLR BIT(24)
+ 
++/* Kvaser KCAN_EPACK second word */
++#define KVASER_PCIEFD_EPACK_DIR_TX BIT(0)
++
+ struct kvaser_pciefd;
+ 
+ struct kvaser_pciefd_can {
+@@ -1285,7 +1288,10 @@ static int kvaser_pciefd_rx_error_frame(struct kvaser_pciefd_can *can,
+ 
+ 	can->err_rep_cnt++;
+ 	can->can.can_stats.bus_error++;
+-	stats->rx_errors++;
++	if (p->header[1] & KVASER_PCIEFD_EPACK_DIR_TX)
++		stats->tx_errors++;
++	else
++		stats->rx_errors++;
+ 
+ 	can->bec.txerr = bec.txerr;
+ 	can->bec.rxerr = bec.rxerr;
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 62bcef4bb95fd..19a7e4adb9338 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -207,15 +207,15 @@ enum m_can_reg {
+ 
+ /* Interrupts for version 3.0.x */
+ #define IR_ERR_LEC_30X	(IR_STE	| IR_FOE | IR_ACKE | IR_BE | IR_CRCE)
+-#define IR_ERR_BUS_30X	(IR_ERR_LEC_30X | IR_WDI | IR_ELO | IR_BEU | \
+-			 IR_BEC | IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | \
+-			 IR_RF1L | IR_RF0L)
++#define IR_ERR_BUS_30X	(IR_ERR_LEC_30X | IR_WDI | IR_BEU | IR_BEC | \
++			 IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | IR_RF1L | \
++			 IR_RF0L)
+ #define IR_ERR_ALL_30X	(IR_ERR_STATE | IR_ERR_BUS_30X)
+ /* Interrupts for version >= 3.1.x */
+ #define IR_ERR_LEC_31X	(IR_PED | IR_PEA)
+-#define IR_ERR_BUS_31X      (IR_ERR_LEC_31X | IR_WDI | IR_ELO | IR_BEU | \
+-			 IR_BEC | IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | \
+-			 IR_RF1L | IR_RF0L)
++#define IR_ERR_BUS_31X      (IR_ERR_LEC_31X | IR_WDI | IR_BEU | IR_BEC | \
++			 IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | IR_RF1L | \
++			 IR_RF0L)
+ #define IR_ERR_ALL_31X	(IR_ERR_STATE | IR_ERR_BUS_31X)
+ 
+ /* Interrupt Line Select (ILS) */
+@@ -752,8 +752,6 @@ static void m_can_handle_other_err(struct net_device *dev, u32 irqstatus)
+ {
+ 	if (irqstatus & IR_WDI)
+ 		netdev_err(dev, "Message RAM Watchdog event due to missing READY\n");
+-	if (irqstatus & IR_ELO)
+-		netdev_err(dev, "Error Logging Overflow\n");
+ 	if (irqstatus & IR_BEU)
+ 		netdev_err(dev, "Bit Error Uncorrected\n");
+ 	if (irqstatus & IR_BEC)
+diff --git a/drivers/net/can/pch_can.c b/drivers/net/can/pch_can.c
+index 5c180d2f3c3c1..79d9abdcc65aa 100644
+--- a/drivers/net/can/pch_can.c
++++ b/drivers/net/can/pch_can.c
+@@ -692,11 +692,11 @@ static int pch_can_rx_normal(struct net_device *ndev, u32 obj_num, int quota)
+ 			cf->data[i + 1] = data_reg >> 8;
+ 		}
+ 
+-		netif_receive_skb(skb);
+ 		rcv_pkts++;
+ 		stats->rx_packets++;
+ 		quota--;
+ 		stats->rx_bytes += cf->can_dlc;
++		netif_receive_skb(skb);
+ 
+ 		pch_fifo_thresh(priv, obj_num);
+ 		obj_num++;
+diff --git a/drivers/net/can/sja1000/ems_pcmcia.c b/drivers/net/can/sja1000/ems_pcmcia.c
+index 770304eaef950..80b30768d9c6c 100644
+--- a/drivers/net/can/sja1000/ems_pcmcia.c
++++ b/drivers/net/can/sja1000/ems_pcmcia.c
+@@ -235,7 +235,12 @@ static int ems_pcmcia_add_card(struct pcmcia_device *pdev, unsigned long base)
+ 			free_sja1000dev(dev);
+ 	}
+ 
+-	err = request_irq(dev->irq, &ems_pcmcia_interrupt, IRQF_SHARED,
++	if (!card->channels) {
++		err = -ENODEV;
++		goto failure_cleanup;
++	}
++
++	err = request_irq(pdev->irq, &ems_pcmcia_interrupt, IRQF_SHARED,
+ 			  DRV_NAME, card);
+ 	if (!err)
+ 		return 0;
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+index 1b9957f12459a..8b5d1add899a6 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+@@ -28,10 +28,6 @@
+ 
+ #include "kvaser_usb.h"
+ 
+-/* Forward declaration */
+-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg;
+-
+-#define CAN_USB_CLOCK			8000000
+ #define MAX_USBCAN_NET_DEVICES		2
+ 
+ /* Command header size */
+@@ -80,6 +76,12 @@ static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg;
+ 
+ #define CMD_LEAF_LOG_MESSAGE		106
+ 
++/* Leaf frequency options */
++#define KVASER_USB_LEAF_SWOPTION_FREQ_MASK 0x60
++#define KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK 0
++#define KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK BIT(5)
++#define KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK BIT(6)
++
+ /* error factors */
+ #define M16C_EF_ACKE			BIT(0)
+ #define M16C_EF_CRCE			BIT(1)
+@@ -340,6 +342,50 @@ struct kvaser_usb_err_summary {
+ 	};
+ };
+ 
++static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = {
++	.name = "kvaser_usb",
++	.tseg1_min = KVASER_USB_TSEG1_MIN,
++	.tseg1_max = KVASER_USB_TSEG1_MAX,
++	.tseg2_min = KVASER_USB_TSEG2_MIN,
++	.tseg2_max = KVASER_USB_TSEG2_MAX,
++	.sjw_max = KVASER_USB_SJW_MAX,
++	.brp_min = KVASER_USB_BRP_MIN,
++	.brp_max = KVASER_USB_BRP_MAX,
++	.brp_inc = KVASER_USB_BRP_INC,
++};
++
++static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_8mhz = {
++	.clock = {
++		.freq = 8000000,
++	},
++	.timestamp_freq = 1,
++	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
++};
++
++static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_16mhz = {
++	.clock = {
++		.freq = 16000000,
++	},
++	.timestamp_freq = 1,
++	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
++};
++
++static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_24mhz = {
++	.clock = {
++		.freq = 24000000,
++	},
++	.timestamp_freq = 1,
++	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
++};
++
++static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_32mhz = {
++	.clock = {
++		.freq = 32000000,
++	},
++	.timestamp_freq = 1,
++	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
++};
++
+ static void *
+ kvaser_usb_leaf_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
+ 			     const struct sk_buff *skb, int *frame_len,
+@@ -471,6 +517,27 @@ static int kvaser_usb_leaf_send_simple_cmd(const struct kvaser_usb *dev,
+ 	return rc;
+ }
+ 
++static void kvaser_usb_leaf_get_software_info_leaf(struct kvaser_usb *dev,
++						   const struct leaf_cmd_softinfo *softinfo)
++{
++	u32 sw_options = le32_to_cpu(softinfo->sw_options);
++
++	dev->fw_version = le32_to_cpu(softinfo->fw_version);
++	dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx);
++
++	switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) {
++	case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK:
++		dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz;
++		break;
++	case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK:
++		dev->cfg = &kvaser_usb_leaf_dev_cfg_24mhz;
++		break;
++	case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK:
++		dev->cfg = &kvaser_usb_leaf_dev_cfg_32mhz;
++		break;
++	}
++}
++
+ static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev)
+ {
+ 	struct kvaser_cmd cmd;
+@@ -486,14 +553,13 @@ static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev)
+ 
+ 	switch (dev->card_data.leaf.family) {
+ 	case KVASER_LEAF:
+-		dev->fw_version = le32_to_cpu(cmd.u.leaf.softinfo.fw_version);
+-		dev->max_tx_urbs =
+-			le16_to_cpu(cmd.u.leaf.softinfo.max_outstanding_tx);
++		kvaser_usb_leaf_get_software_info_leaf(dev, &cmd.u.leaf.softinfo);
+ 		break;
+ 	case KVASER_USBCAN:
+ 		dev->fw_version = le32_to_cpu(cmd.u.usbcan.softinfo.fw_version);
+ 		dev->max_tx_urbs =
+ 			le16_to_cpu(cmd.u.usbcan.softinfo.max_outstanding_tx);
++		dev->cfg = &kvaser_usb_leaf_dev_cfg_8mhz;
+ 		break;
+ 	}
+ 
+@@ -1225,24 +1291,11 @@ static int kvaser_usb_leaf_init_card(struct kvaser_usb *dev)
+ {
+ 	struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
+ 
+-	dev->cfg = &kvaser_usb_leaf_dev_cfg;
+ 	card_data->ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;
+ 
+ 	return 0;
+ }
+ 
+-static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = {
+-	.name = "kvaser_usb",
+-	.tseg1_min = KVASER_USB_TSEG1_MIN,
+-	.tseg1_max = KVASER_USB_TSEG1_MAX,
+-	.tseg2_min = KVASER_USB_TSEG2_MIN,
+-	.tseg2_max = KVASER_USB_TSEG2_MAX,
+-	.sjw_max = KVASER_USB_SJW_MAX,
+-	.brp_min = KVASER_USB_BRP_MIN,
+-	.brp_max = KVASER_USB_BRP_MAX,
+-	.brp_inc = KVASER_USB_BRP_INC,
+-};
+-
+ static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
+ {
+ 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
+@@ -1348,11 +1401,3 @@ const struct kvaser_usb_dev_ops kvaser_usb_leaf_dev_ops = {
+ 	.dev_read_bulk_callback = kvaser_usb_leaf_read_bulk_callback,
+ 	.dev_frame_to_cmd = kvaser_usb_leaf_frame_to_cmd,
+ };
+-
+-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg = {
+-	.clock = {
+-		.freq = CAN_USB_CLOCK,
+-	},
+-	.timestamp_freq = 1,
+-	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+-};
+diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c
+index 907125abef2c1..a7d8d45e0e941 100644
+--- a/drivers/net/ethernet/altera/altera_tse_main.c
++++ b/drivers/net/ethernet/altera/altera_tse_main.c
+@@ -1431,16 +1431,19 @@ static int altera_tse_probe(struct platform_device *pdev)
+ 		priv->rxdescmem_busaddr = dma_res->start;
+ 
+ 	} else {
++		ret = -ENODEV;
+ 		goto err_free_netdev;
+ 	}
+ 
+-	if (!dma_set_mask(priv->device, DMA_BIT_MASK(priv->dmaops->dmamask)))
++	if (!dma_set_mask(priv->device, DMA_BIT_MASK(priv->dmaops->dmamask))) {
+ 		dma_set_coherent_mask(priv->device,
+ 				      DMA_BIT_MASK(priv->dmaops->dmamask));
+-	else if (!dma_set_mask(priv->device, DMA_BIT_MASK(32)))
++	} else if (!dma_set_mask(priv->device, DMA_BIT_MASK(32))) {
+ 		dma_set_coherent_mask(priv->device, DMA_BIT_MASK(32));
+-	else
++	} else {
++		ret = -EIO;
+ 		goto err_free_netdev;
++	}
+ 
+ 	/* MAC address space */
+ 	ret = request_and_map(pdev, "control_port", &control_port,
+diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
+index c527f4ee1d3ae..6ea98af63b341 100644
+--- a/drivers/net/ethernet/freescale/fec.h
++++ b/drivers/net/ethernet/freescale/fec.h
+@@ -373,6 +373,9 @@ struct bufdesc_ex {
+ #define FEC_ENET_WAKEUP	((uint)0x00020000)	/* Wakeup request */
+ #define FEC_ENET_TXF	(FEC_ENET_TXF_0 | FEC_ENET_TXF_1 | FEC_ENET_TXF_2)
+ #define FEC_ENET_RXF	(FEC_ENET_RXF_0 | FEC_ENET_RXF_1 | FEC_ENET_RXF_2)
++#define FEC_ENET_RXF_GET(X)	(((X) == 0) ? FEC_ENET_RXF_0 :	\
++				(((X) == 1) ? FEC_ENET_RXF_1 :	\
++				FEC_ENET_RXF_2))
+ #define FEC_ENET_TS_AVAIL       ((uint)0x00010000)
+ #define FEC_ENET_TS_TIMER       ((uint)0x00008000)
+ 
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 94eb838a01760..166bc3f3b34cc 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1439,7 +1439,7 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
+ 			break;
+ 		pkt_received++;
+ 
+-		writel(FEC_ENET_RXF, fep->hwp + FEC_IEVENT);
++		writel(FEC_ENET_RXF_GET(queue_id), fep->hwp + FEC_IEVENT);
+ 
+ 		/* Check for errors. */
+ 		status ^= BD_ENET_RX_LAST;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+index d627b59ad4465..714b578b2b49c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+@@ -553,6 +553,14 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
+ 		dev_info(&pf->pdev->dev, "vsi %d not found\n", vsi_seid);
+ 		return;
+ 	}
++	if (vsi->type != I40E_VSI_MAIN &&
++	    vsi->type != I40E_VSI_FDIR &&
++	    vsi->type != I40E_VSI_VMDQ2) {
++		dev_info(&pf->pdev->dev,
++			 "vsi %d type %d descriptor rings not available\n",
++			 vsi_seid, vsi->type);
++		return;
++	}
+ 	if (type == RING_TYPE_XDP && !i40e_enabled_xdp_vsi(vsi)) {
+ 		dev_info(&pf->pdev->dev, "XDP not enabled on VSI %d\n", vsi_seid);
+ 		return;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 41c0a103119c1..5a58edba4adfc 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1895,6 +1895,32 @@ static int i40e_vc_send_resp_to_vf(struct i40e_vf *vf,
+ 	return i40e_vc_send_msg_to_vf(vf, opcode, retval, NULL, 0);
+ }
+ 
++/**
++ * i40e_sync_vf_state
++ * @vf: pointer to the VF info
++ * @state: VF state
++ *
++ * Called from a VF message to synchronize the service with a potential
++ * VF reset state
++ **/
++static bool i40e_sync_vf_state(struct i40e_vf *vf, enum i40e_vf_states state)
++{
++	int i;
++
++	/* When handling some messages, it needs VF state to be set.
++	 * It is possible that this flag is cleared during VF reset,
++	 * so there is a need to wait until the end of the reset to
++	 * handle the request message correctly.
++	 */
++	for (i = 0; i < I40E_VF_STATE_WAIT_COUNT; i++) {
++		if (test_bit(state, &vf->vf_states))
++			return true;
++		usleep_range(10000, 20000);
++	}
++
++	return test_bit(state, &vf->vf_states);
++}
++
+ /**
+  * i40e_vc_get_version_msg
+  * @vf: pointer to the VF info
+@@ -1955,7 +1981,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
+ 	size_t len = 0;
+ 	int ret;
+ 
+-	if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_INIT)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto err;
+ 	}
+@@ -2077,7 +2103,7 @@ static int i40e_vc_config_promiscuous_mode_msg(struct i40e_vf *vf, u8 *msg)
+ 	bool allmulti = false;
+ 	bool alluni = false;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto err_out;
+ 	}
+@@ -2165,7 +2191,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 	struct i40e_vsi *vsi;
+ 	u16 num_qps_all = 0;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto error_param;
+ 	}
+@@ -2314,7 +2340,7 @@ static int i40e_vc_config_irq_map_msg(struct i40e_vf *vf, u8 *msg)
+ 	i40e_status aq_ret = 0;
+ 	int i;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto error_param;
+ 	}
+@@ -2486,7 +2512,7 @@ static int i40e_vc_disable_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 	struct i40e_pf *pf = vf->pf;
+ 	i40e_status aq_ret = 0;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto error_param;
+ 	}
+@@ -2536,7 +2562,7 @@ static int i40e_vc_request_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 	u8 cur_pairs = vf->num_queue_pairs;
+ 	struct i40e_pf *pf = vf->pf;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states))
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE))
+ 		return -EINVAL;
+ 
+ 	if (req_pairs > I40E_MAX_VF_QUEUES) {
+@@ -2581,7 +2607,7 @@ static int i40e_vc_get_stats_msg(struct i40e_vf *vf, u8 *msg)
+ 
+ 	memset(&stats, 0, sizeof(struct i40e_eth_stats));
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto error_param;
+ 	}
+@@ -2698,7 +2724,7 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 	i40e_status ret = 0;
+ 	int i;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
+ 	    !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) {
+ 		ret = I40E_ERR_PARAM;
+ 		goto error_param;
+@@ -2770,7 +2796,7 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 	i40e_status ret = 0;
+ 	int i;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
+ 	    !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) {
+ 		ret = I40E_ERR_PARAM;
+ 		goto error_param;
+@@ -2914,7 +2940,7 @@ static int i40e_vc_remove_vlan_msg(struct i40e_vf *vf, u8 *msg)
+ 	i40e_status aq_ret = 0;
+ 	int i;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
+ 	    !i40e_vc_isvalid_vsi_id(vf, vfl->vsi_id)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto error_param;
+@@ -3034,9 +3060,9 @@ static int i40e_vc_config_rss_key(struct i40e_vf *vf, u8 *msg)
+ 	struct i40e_vsi *vsi = NULL;
+ 	i40e_status aq_ret = 0;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
+ 	    !i40e_vc_isvalid_vsi_id(vf, vrk->vsi_id) ||
+-	    (vrk->key_len != I40E_HKEY_ARRAY_SIZE)) {
++	    vrk->key_len != I40E_HKEY_ARRAY_SIZE) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto err;
+ 	}
+@@ -3065,9 +3091,9 @@ static int i40e_vc_config_rss_lut(struct i40e_vf *vf, u8 *msg)
+ 	i40e_status aq_ret = 0;
+ 	u16 i;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
+ 	    !i40e_vc_isvalid_vsi_id(vf, vrl->vsi_id) ||
+-	    (vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE)) {
++	    vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto err;
+ 	}
+@@ -3100,7 +3126,7 @@ static int i40e_vc_get_rss_hena(struct i40e_vf *vf, u8 *msg)
+ 	i40e_status aq_ret = 0;
+ 	int len = 0;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto err;
+ 	}
+@@ -3136,7 +3162,7 @@ static int i40e_vc_set_rss_hena(struct i40e_vf *vf, u8 *msg)
+ 	struct i40e_hw *hw = &pf->hw;
+ 	i40e_status aq_ret = 0;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto err;
+ 	}
+@@ -3161,7 +3187,7 @@ static int i40e_vc_enable_vlan_stripping(struct i40e_vf *vf, u8 *msg)
+ 	i40e_status aq_ret = 0;
+ 	struct i40e_vsi *vsi;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto err;
+ 	}
+@@ -3187,7 +3213,7 @@ static int i40e_vc_disable_vlan_stripping(struct i40e_vf *vf, u8 *msg)
+ 	i40e_status aq_ret = 0;
+ 	struct i40e_vsi *vsi;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto err;
+ 	}
+@@ -3414,7 +3440,7 @@ static int i40e_vc_del_cloud_filter(struct i40e_vf *vf, u8 *msg)
+ 	i40e_status aq_ret = 0;
+ 	int i, ret;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto err;
+ 	}
+@@ -3545,7 +3571,7 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
+ 	i40e_status aq_ret = 0;
+ 	int i, ret;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto err_out;
+ 	}
+@@ -3654,7 +3680,7 @@ static int i40e_vc_add_qch_msg(struct i40e_vf *vf, u8 *msg)
+ 	i40e_status aq_ret = 0;
+ 	u64 speed = 0;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto err;
+ 	}
+@@ -3761,11 +3787,6 @@ static int i40e_vc_add_qch_msg(struct i40e_vf *vf, u8 *msg)
+ 
+ 	/* set this flag only after making sure all inputs are sane */
+ 	vf->adq_enabled = true;
+-	/* num_req_queues is set when user changes number of queues via ethtool
+-	 * and this causes issue for default VSI(which depends on this variable)
+-	 * when ADq is enabled, hence reset it.
+-	 */
+-	vf->num_req_queues = 0;
+ 
+ 	/* reset the VF in order to allocate resources */
+ 	i40e_vc_reset_vf(vf, true);
+@@ -3788,7 +3809,7 @@ static int i40e_vc_del_qch_msg(struct i40e_vf *vf, u8 *msg)
+ 	struct i40e_pf *pf = vf->pf;
+ 	i40e_status aq_ret = 0;
+ 
+-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
++	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto err;
+ 	}
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index 091e32c1bb46f..49575a640a84c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -18,6 +18,8 @@
+ 
+ #define I40E_MAX_VF_PROMISC_FLAGS	3
+ 
++#define I40E_VF_STATE_WAIT_COUNT	20
++
+ /* Various queue ctrls */
+ enum i40e_queue_ctrl {
+ 	I40E_QUEUE_CTRL_UNKNOWN = 0,
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index 90f5ec982d513..4680a2fe6d3cc 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -612,23 +612,44 @@ static int iavf_set_ringparam(struct net_device *netdev,
+ 	if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
+ 		return -EINVAL;
+ 
+-	new_tx_count = clamp_t(u32, ring->tx_pending,
+-			       IAVF_MIN_TXD,
+-			       IAVF_MAX_TXD);
+-	new_tx_count = ALIGN(new_tx_count, IAVF_REQ_DESCRIPTOR_MULTIPLE);
++	if (ring->tx_pending > IAVF_MAX_TXD ||
++	    ring->tx_pending < IAVF_MIN_TXD ||
++	    ring->rx_pending > IAVF_MAX_RXD ||
++	    ring->rx_pending < IAVF_MIN_RXD) {
++		netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d] (increment %d)\n",
++			   ring->tx_pending, ring->rx_pending, IAVF_MIN_TXD,
++			   IAVF_MAX_RXD, IAVF_REQ_DESCRIPTOR_MULTIPLE);
++		return -EINVAL;
++	}
+ 
+-	new_rx_count = clamp_t(u32, ring->rx_pending,
+-			       IAVF_MIN_RXD,
+-			       IAVF_MAX_RXD);
+-	new_rx_count = ALIGN(new_rx_count, IAVF_REQ_DESCRIPTOR_MULTIPLE);
++	new_tx_count = ALIGN(ring->tx_pending, IAVF_REQ_DESCRIPTOR_MULTIPLE);
++	if (new_tx_count != ring->tx_pending)
++		netdev_info(netdev, "Requested Tx descriptor count rounded up to %d\n",
++			    new_tx_count);
++
++	new_rx_count = ALIGN(ring->rx_pending, IAVF_REQ_DESCRIPTOR_MULTIPLE);
++	if (new_rx_count != ring->rx_pending)
++		netdev_info(netdev, "Requested Rx descriptor count rounded up to %d\n",
++			    new_rx_count);
+ 
+ 	/* if nothing to do return success */
+ 	if ((new_tx_count == adapter->tx_desc_count) &&
+-	    (new_rx_count == adapter->rx_desc_count))
++	    (new_rx_count == adapter->rx_desc_count)) {
++		netdev_dbg(netdev, "Nothing to change, descriptor count is same as requested\n");
+ 		return 0;
++	}
+ 
+-	adapter->tx_desc_count = new_tx_count;
+-	adapter->rx_desc_count = new_rx_count;
++	if (new_tx_count != adapter->tx_desc_count) {
++		netdev_dbg(netdev, "Changing Tx descriptor count from %d to %d\n",
++			   adapter->tx_desc_count, new_tx_count);
++		adapter->tx_desc_count = new_tx_count;
++	}
++
++	if (new_rx_count != adapter->rx_desc_count) {
++		netdev_dbg(netdev, "Changing Rx descriptor count from %d to %d\n",
++			   adapter->rx_desc_count, new_rx_count);
++		adapter->rx_desc_count = new_rx_count;
++	}
+ 
+ 	if (netif_running(netdev)) {
+ 		adapter->flags |= IAVF_FLAG_RESET_NEEDED;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 643679cad8657..7aa49d4eaa87c 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -2139,6 +2139,7 @@ static void iavf_reset_task(struct work_struct *work)
+ 	}
+ 
+ 	pci_set_master(adapter->pdev);
++	pci_restore_msi_state(adapter->pdev);
+ 
+ 	if (i == IAVF_RESET_WAIT_COMPLETE_COUNT) {
+ 		dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 746a5bd178d3b..4c7d1720113a0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -5267,6 +5267,9 @@ static int ice_up_complete(struct ice_vsi *vsi)
+ 		netif_carrier_on(vsi->netdev);
+ 	}
+ 
++	/* clear this now, and the first stats read will be used as baseline */
++	vsi->stat_offsets_loaded = false;
++
+ 	ice_service_task_schedule(pf);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index f39841b0a248c..542cd6f2c9bd4 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -2607,11 +2607,11 @@ static int mvpp2_rxq_init(struct mvpp2_port *port,
+ 	mvpp2_rxq_status_update(port, rxq->id, 0, rxq->size);
+ 
+ 	if (priv->percpu_pools) {
+-		err = xdp_rxq_info_reg(&rxq->xdp_rxq_short, port->dev, rxq->id);
++		err = xdp_rxq_info_reg(&rxq->xdp_rxq_short, port->dev, rxq->logic_rxq);
+ 		if (err < 0)
+ 			goto err_free_dma;
+ 
+-		err = xdp_rxq_info_reg(&rxq->xdp_rxq_long, port->dev, rxq->id);
++		err = xdp_rxq_info_reg(&rxq->xdp_rxq_long, port->dev, rxq->logic_rxq);
+ 		if (err < 0)
+ 			goto err_unregister_rxq_short;
+ 
+diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c
+index 94994a939277b..6ef48eb3a77d4 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c
++++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c
+@@ -803,8 +803,10 @@ int nfp_cpp_area_cache_add(struct nfp_cpp *cpp, size_t size)
+ 		return -ENOMEM;
+ 
+ 	cache = kzalloc(sizeof(*cache), GFP_KERNEL);
+-	if (!cache)
++	if (!cache) {
++		nfp_cpp_area_free(area);
+ 		return -ENOMEM;
++	}
+ 
+ 	cache->id = 0;
+ 	cache->addr = 0;
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c
+index ca0ee29a57b50..21c906200e791 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_fp.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c
+@@ -1659,6 +1659,13 @@ netdev_tx_t qede_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 			data_split = true;
+ 		}
+ 	} else {
++		if (unlikely(skb->len > ETH_TX_MAX_NON_LSO_PKT_LEN)) {
++			DP_ERR(edev, "Unexpected non LSO skb length = 0x%x\n", skb->len);
++			qede_free_failed_tx_pkt(txq, first_bd, 0, false);
++			qede_update_tx_producer(txq);
++			return NETDEV_TX_OK;
++		}
++
+ 		val |= ((skb->len & ETH_TX_DATA_1ST_BD_PKT_LEN_MASK) <<
+ 			 ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT);
+ 	}
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index c7923e22a4c42..c9f32fc50254f 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -3494,20 +3494,19 @@ static int ql_adapter_up(struct ql3_adapter *qdev)
+ 
+ 	spin_lock_irqsave(&qdev->hw_lock, hw_flags);
+ 
+-	err = ql_wait_for_drvr_lock(qdev);
+-	if (err) {
+-		err = ql_adapter_initialize(qdev);
+-		if (err) {
+-			netdev_err(ndev, "Unable to initialize adapter\n");
+-			goto err_init;
+-		}
+-		netdev_err(ndev, "Releasing driver lock\n");
+-		ql_sem_unlock(qdev, QL_DRVR_SEM_MASK);
+-	} else {
++	if (!ql_wait_for_drvr_lock(qdev)) {
+ 		netdev_err(ndev, "Could not acquire driver lock\n");
++		err = -ENODEV;
+ 		goto err_lock;
+ 	}
+ 
++	err = ql_adapter_initialize(qdev);
++	if (err) {
++		netdev_err(ndev, "Unable to initialize adapter\n");
++		goto err_init;
++	}
++	ql_sem_unlock(qdev, QL_DRVR_SEM_MASK);
++
+ 	spin_unlock_irqrestore(&qdev->hw_lock, hw_flags);
+ 
+ 	set_bit(QL_ADAPTER_UP, &qdev->flags);
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index 04c4f1570bc8c..eaaa5aee58251 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -181,6 +181,8 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
+ 		min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
+ 
+ 	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
++	if (max == 0)
++		max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */
+ 
+ 	/* some devices set dwNtbOutMaxSize too low for the above default */
+ 	min = min(min, max);
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 570e1d11ddc7a..8ab0b5a8dfeff 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -771,8 +771,6 @@ static struct sk_buff *vrf_ip6_out_direct(struct net_device *vrf_dev,
+ 
+ 	skb->dev = vrf_dev;
+ 
+-	vrf_nf_set_untracked(skb);
+-
+ 	err = nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk,
+ 		      skb, NULL, vrf_dev, vrf_ip6_out_direct_finish);
+ 
+@@ -793,6 +791,8 @@ static struct sk_buff *vrf_ip6_out(struct net_device *vrf_dev,
+ 	if (rt6_need_strict(&ipv6_hdr(skb)->daddr))
+ 		return skb;
+ 
++	vrf_nf_set_untracked(skb);
++
+ 	if (qdisc_tx_is_default(vrf_dev) ||
+ 	    IP6CB(skb)->flags & IP6SKB_XFRM_TRANSFORMED)
+ 		return vrf_ip6_out_direct(vrf_dev, sk, skb);
+@@ -1008,8 +1008,6 @@ static struct sk_buff *vrf_ip_out_direct(struct net_device *vrf_dev,
+ 
+ 	skb->dev = vrf_dev;
+ 
+-	vrf_nf_set_untracked(skb);
+-
+ 	err = nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT, net, sk,
+ 		      skb, NULL, vrf_dev, vrf_ip_out_direct_finish);
+ 
+@@ -1031,6 +1029,8 @@ static struct sk_buff *vrf_ip_out(struct net_device *vrf_dev,
+ 	    ipv4_is_lbcast(ip_hdr(skb)->daddr))
+ 		return skb;
+ 
++	vrf_nf_set_untracked(skb);
++
+ 	if (qdisc_tx_is_default(vrf_dev) ||
+ 	    IPCB(skb)->flags & IPSKB_XFRM_TRANSFORMED)
+ 		return vrf_ip_out_direct(vrf_dev, sk, skb);
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 604b294bb15c9..0f6a6685ab5b5 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -31,7 +31,6 @@
+ #define PCIE_CORE_DEV_ID_REG					0x0
+ #define PCIE_CORE_CMD_STATUS_REG				0x4
+ #define PCIE_CORE_DEV_REV_REG					0x8
+-#define PCIE_CORE_EXP_ROM_BAR_REG				0x30
+ #define PCIE_CORE_PCIEXP_CAP					0xc0
+ #define PCIE_CORE_ERR_CAPCTL_REG				0x118
+ #define     PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX			BIT(5)
+@@ -781,10 +780,6 @@ advk_pci_bridge_emul_base_conf_read(struct pci_bridge_emul *bridge,
+ 		*value = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 
+-	case PCI_ROM_ADDRESS1:
+-		*value = advk_readl(pcie, PCIE_CORE_EXP_ROM_BAR_REG);
+-		return PCI_BRIDGE_EMUL_HANDLED;
+-
+ 	case PCI_INTERRUPT_LINE: {
+ 		/*
+ 		 * From the whole 32bit register we support reading from HW only
+@@ -817,10 +812,6 @@ advk_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
+ 		advk_writel(pcie, new, PCIE_CORE_CMD_STATUS_REG);
+ 		break;
+ 
+-	case PCI_ROM_ADDRESS1:
+-		advk_writel(pcie, new, PCIE_CORE_EXP_ROM_BAR_REG);
+-		break;
+-
+ 	case PCI_INTERRUPT_LINE:
+ 		if (mask & (PCI_BRIDGE_CTL_BUS_RESET << 16)) {
+ 			u32 val = advk_readl(pcie, PCIE_CORE_CTRL1_REG);
+diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
+index 13b8ddec61894..01eb2ade20709 100644
+--- a/drivers/scsi/pm8001/pm8001_init.c
++++ b/drivers/scsi/pm8001/pm8001_init.c
+@@ -280,12 +280,12 @@ static int pm8001_alloc(struct pm8001_hba_info *pm8001_ha,
+ 	if (rc) {
+ 		pm8001_dbg(pm8001_ha, FAIL,
+ 			   "pm8001_setup_irq failed [ret: %d]\n", rc);
+-		goto err_out_shost;
++		goto err_out;
+ 	}
+ 	/* Request Interrupt */
+ 	rc = pm8001_request_irq(pm8001_ha);
+ 	if (rc)
+-		goto err_out_shost;
++		goto err_out;
+ 
+ 	count = pm8001_ha->max_q_num;
+ 	/* Queues are chosen based on the number of cores/msix availability */
+@@ -419,8 +419,6 @@ static int pm8001_alloc(struct pm8001_hba_info *pm8001_ha,
+ 	pm8001_tag_init(pm8001_ha);
+ 	return 0;
+ 
+-err_out_shost:
+-	scsi_remove_host(pm8001_ha->shost);
+ err_out_nodev:
+ 	for (i = 0; i < pm8001_ha->max_memcnt; i++) {
+ 		if (pm8001_ha->memoryMap.region[i].virt_ptr != NULL) {
+diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c
+index 3a20bf8ce5ab9..00b4d033b07a9 100644
+--- a/drivers/scsi/qla2xxx/qla_dbg.c
++++ b/drivers/scsi/qla2xxx/qla_dbg.c
+@@ -2477,6 +2477,9 @@ ql_dbg(uint level, scsi_qla_host_t *vha, uint id, const char *fmt, ...)
+ 	struct va_format vaf;
+ 	char pbuf[64];
+ 
++	if (!ql_mask_match(level) && !trace_ql_dbg_log_enabled())
++		return;
++
+ 	va_start(va, fmt);
+ 
+ 	vaf.fmt = fmt;
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 1a3f5adc68849..9188191433439 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -4313,7 +4313,7 @@ static int resp_report_zones(struct scsi_cmnd *scp,
+ 	rep_max_zones = min((alloc_len - 64) >> ilog2(RZONES_DESC_HD),
+ 			    max_zones);
+ 
+-	arr = kcalloc(RZONES_DESC_HD, alloc_len, GFP_ATOMIC);
++	arr = kzalloc(alloc_len, GFP_ATOMIC);
+ 	if (!arr) {
+ 		mk_sense_buffer(scp, ILLEGAL_REQUEST, INSUFF_RES_ASC,
+ 				INSUFF_RES_ASCQ);
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index 562a730befdac..39f1eca60a714 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -406,7 +406,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 	 * the USB-2 spec requires such endpoints to have wMaxPacketSize = 0
+ 	 * (see the end of section 5.6.3), so don't warn about them.
+ 	 */
+-	maxp = usb_endpoint_maxp(&endpoint->desc);
++	maxp = le16_to_cpu(endpoint->desc.wMaxPacketSize);
+ 	if (maxp == 0 && !(usb_endpoint_xfer_isoc(d) && asnum == 0)) {
+ 		dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has invalid wMaxPacketSize 0\n",
+ 		    cfgno, inum, asnum, d->bEndpointAddress);
+@@ -422,9 +422,9 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 		maxpacket_maxes = full_speed_maxpacket_maxes;
+ 		break;
+ 	case USB_SPEED_HIGH:
+-		/* Bits 12..11 are allowed only for HS periodic endpoints */
++		/* Multiple-transactions bits are allowed only for HS periodic endpoints */
+ 		if (usb_endpoint_xfer_int(d) || usb_endpoint_xfer_isoc(d)) {
+-			i = maxp & (BIT(12) | BIT(11));
++			i = maxp & USB_EP_MAXP_MULT_MASK;
+ 			maxp &= ~i;
+ 		}
+ 		fallthrough;
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 3ffa939678d77..426132988512d 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -1648,6 +1648,18 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ 	struct usb_function		*f = NULL;
+ 	u8				endp;
+ 
++	if (w_length > USB_COMP_EP0_BUFSIZ) {
++		if (ctrl->bRequestType == USB_DIR_OUT) {
++			goto done;
++		} else {
++			/* Cast away the const, we are going to overwrite on purpose. */
++			__le16 *temp = (__le16 *)&ctrl->wLength;
++
++			*temp = cpu_to_le16(USB_COMP_EP0_BUFSIZ);
++			w_length = USB_COMP_EP0_BUFSIZ;
++		}
++	}
++
+ 	/* partial re-init of the response message; the function or the
+ 	 * gadget might need to intercept e.g. a control-OUT completion
+ 	 * when we delegate to it.
+@@ -2161,7 +2173,7 @@ int composite_dev_prepare(struct usb_composite_driver *composite,
+ 	if (!cdev->req)
+ 		return -ENOMEM;
+ 
+-	cdev->req->buf = kmalloc(USB_COMP_EP0_BUFSIZ, GFP_KERNEL);
++	cdev->req->buf = kzalloc(USB_COMP_EP0_BUFSIZ, GFP_KERNEL);
+ 	if (!cdev->req->buf)
+ 		goto fail;
+ 
+diff --git a/drivers/usb/gadget/function/uvc.h b/drivers/usb/gadget/function/uvc.h
+index 23ee25383c1f7..893aaa70f81a4 100644
+--- a/drivers/usb/gadget/function/uvc.h
++++ b/drivers/usb/gadget/function/uvc.h
+@@ -117,6 +117,7 @@ struct uvc_device {
+ 	enum uvc_state state;
+ 	struct usb_function func;
+ 	struct uvc_video video;
++	bool func_connected;
+ 
+ 	/* Descriptors */
+ 	struct {
+@@ -147,6 +148,7 @@ static inline struct uvc_device *to_uvc(struct usb_function *f)
+ struct uvc_file_handle {
+ 	struct v4l2_fh vfh;
+ 	struct uvc_video *device;
++	bool is_uvc_app_handle;
+ };
+ 
+ #define to_uvc_file_handle(handle) \
+diff --git a/drivers/usb/gadget/function/uvc_v4l2.c b/drivers/usb/gadget/function/uvc_v4l2.c
+index 4ca89eab61590..197c26f7aec63 100644
+--- a/drivers/usb/gadget/function/uvc_v4l2.c
++++ b/drivers/usb/gadget/function/uvc_v4l2.c
+@@ -227,17 +227,55 @@ static int
+ uvc_v4l2_subscribe_event(struct v4l2_fh *fh,
+ 			 const struct v4l2_event_subscription *sub)
+ {
++	struct uvc_device *uvc = video_get_drvdata(fh->vdev);
++	struct uvc_file_handle *handle = to_uvc_file_handle(fh);
++	int ret;
++
+ 	if (sub->type < UVC_EVENT_FIRST || sub->type > UVC_EVENT_LAST)
+ 		return -EINVAL;
+ 
+-	return v4l2_event_subscribe(fh, sub, 2, NULL);
++	if (sub->type == UVC_EVENT_SETUP && uvc->func_connected)
++		return -EBUSY;
++
++	ret = v4l2_event_subscribe(fh, sub, 2, NULL);
++	if (ret < 0)
++		return ret;
++
++	if (sub->type == UVC_EVENT_SETUP) {
++		uvc->func_connected = true;
++		handle->is_uvc_app_handle = true;
++		uvc_function_connect(uvc);
++	}
++
++	return 0;
++}
++
++static void uvc_v4l2_disable(struct uvc_device *uvc)
++{
++	uvc->func_connected = false;
++	uvc_function_disconnect(uvc);
++	uvcg_video_enable(&uvc->video, 0);
++	uvcg_free_buffers(&uvc->video.queue);
+ }
+ 
+ static int
+ uvc_v4l2_unsubscribe_event(struct v4l2_fh *fh,
+ 			   const struct v4l2_event_subscription *sub)
+ {
+-	return v4l2_event_unsubscribe(fh, sub);
++	struct uvc_device *uvc = video_get_drvdata(fh->vdev);
++	struct uvc_file_handle *handle = to_uvc_file_handle(fh);
++	int ret;
++
++	ret = v4l2_event_unsubscribe(fh, sub);
++	if (ret < 0)
++		return ret;
++
++	if (sub->type == UVC_EVENT_SETUP && handle->is_uvc_app_handle) {
++		uvc_v4l2_disable(uvc);
++		handle->is_uvc_app_handle = false;
++	}
++
++	return 0;
+ }
+ 
+ static long
+@@ -292,7 +330,6 @@ uvc_v4l2_open(struct file *file)
+ 	handle->device = &uvc->video;
+ 	file->private_data = &handle->vfh;
+ 
+-	uvc_function_connect(uvc);
+ 	return 0;
+ }
+ 
+@@ -304,11 +341,9 @@ uvc_v4l2_release(struct file *file)
+ 	struct uvc_file_handle *handle = to_uvc_file_handle(file->private_data);
+ 	struct uvc_video *video = handle->device;
+ 
+-	uvc_function_disconnect(uvc);
+-
+ 	mutex_lock(&video->mutex);
+-	uvcg_video_enable(video, 0);
+-	uvcg_free_buffers(&video->queue);
++	if (handle->is_uvc_app_handle)
++		uvc_v4l2_disable(uvc);
+ 	mutex_unlock(&video->mutex);
+ 
+ 	file->private_data = NULL;
+diff --git a/drivers/usb/gadget/legacy/dbgp.c b/drivers/usb/gadget/legacy/dbgp.c
+index e1d566c9918ae..355bc7dab9d5f 100644
+--- a/drivers/usb/gadget/legacy/dbgp.c
++++ b/drivers/usb/gadget/legacy/dbgp.c
+@@ -137,7 +137,7 @@ static int dbgp_enable_ep_req(struct usb_ep *ep)
+ 		goto fail_1;
+ 	}
+ 
+-	req->buf = kmalloc(DBGP_REQ_LEN, GFP_KERNEL);
++	req->buf = kzalloc(DBGP_REQ_LEN, GFP_KERNEL);
+ 	if (!req->buf) {
+ 		err = -ENOMEM;
+ 		stp = 2;
+@@ -345,6 +345,19 @@ static int dbgp_setup(struct usb_gadget *gadget,
+ 	void *data = NULL;
+ 	u16 len = 0;
+ 
++	if (length > DBGP_REQ_LEN) {
++		if (ctrl->bRequestType == USB_DIR_OUT) {
++			return err;
++		} else {
++			/* Cast away the const, we are going to overwrite on purpose. */
++			__le16 *temp = (__le16 *)&ctrl->wLength;
++
++			*temp = cpu_to_le16(DBGP_REQ_LEN);
++			length = DBGP_REQ_LEN;
++		}
++	}
++
++
+ 	if (request == USB_REQ_GET_DESCRIPTOR) {
+ 		switch (value>>8) {
+ 		case USB_DT_DEVICE:
+diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
+index 71e7d10dd76b9..04b9c4f5f129d 100644
+--- a/drivers/usb/gadget/legacy/inode.c
++++ b/drivers/usb/gadget/legacy/inode.c
+@@ -110,6 +110,8 @@ enum ep0_state {
+ /* enough for the whole queue: most events invalidate others */
+ #define	N_EVENT			5
+ 
++#define RBUF_SIZE		256
++
+ struct dev_data {
+ 	spinlock_t			lock;
+ 	refcount_t			count;
+@@ -144,7 +146,7 @@ struct dev_data {
+ 	struct dentry			*dentry;
+ 
+ 	/* except this scratch i/o buffer for ep0 */
+-	u8				rbuf [256];
++	u8				rbuf[RBUF_SIZE];
+ };
+ 
+ static inline void get_dev (struct dev_data *data)
+@@ -1333,6 +1335,18 @@ gadgetfs_setup (struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ 	u16				w_value = le16_to_cpu(ctrl->wValue);
+ 	u16				w_length = le16_to_cpu(ctrl->wLength);
+ 
++	if (w_length > RBUF_SIZE) {
++		if (ctrl->bRequestType == USB_DIR_OUT) {
++			return value;
++		} else {
++			/* Cast away the const, we are going to overwrite on purpose. */
++			__le16 *temp = (__le16 *)&ctrl->wLength;
++
++			*temp = cpu_to_le16(RBUF_SIZE);
++			w_length = RBUF_SIZE;
++		}
++	}
++
+ 	spin_lock (&dev->lock);
+ 	dev->setup_abort = 0;
+ 	if (dev->state == STATE_DEV_UNCONNECTED) {
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 41d5a46c1dc1a..71b018e9a5735 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -631,6 +631,7 @@ static int xhci_enter_test_mode(struct xhci_hcd *xhci,
+ 			continue;
+ 
+ 		retval = xhci_disable_slot(xhci, i);
++		xhci_free_virt_device(xhci, i);
+ 		if (retval)
+ 			xhci_err(xhci, "Failed to disable slot %d, %d. Enter test mode anyway\n",
+ 				 i, retval);
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 667a37f509828..76389c0dda8bc 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1298,7 +1298,6 @@ static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id)
+ 	if (xhci->quirks & XHCI_EP_LIMIT_QUIRK)
+ 		/* Delete default control endpoint resources */
+ 		xhci_free_device_endpoint_resources(xhci, virt_dev, true);
+-	xhci_free_virt_device(xhci, slot_id);
+ }
+ 
+ static void xhci_handle_cmd_config_ep(struct xhci_hcd *xhci, int slot_id,
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index bf42ba3e4415a..325eb1609f8c5 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -3893,7 +3893,6 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 	struct xhci_slot_ctx *slot_ctx;
+ 	int i, ret;
+ 
+-#ifndef CONFIG_USB_DEFAULT_PERSIST
+ 	/*
+ 	 * We called pm_runtime_get_noresume when the device was attached.
+ 	 * Decrement the counter here to allow controller to runtime suspend
+@@ -3901,7 +3900,6 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 	 */
+ 	if (xhci->quirks & XHCI_RESET_ON_RESUME)
+ 		pm_runtime_put_noidle(hcd->self.controller);
+-#endif
+ 
+ 	ret = xhci_check_args(hcd, udev, NULL, 0, true, __func__);
+ 	/* If the host is halted due to driver unload, we still need to free the
+@@ -3920,9 +3918,8 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 		del_timer_sync(&virt_dev->eps[i].stop_cmd_timer);
+ 	}
+ 	virt_dev->udev = NULL;
+-	ret = xhci_disable_slot(xhci, udev->slot_id);
+-	if (ret)
+-		xhci_free_virt_device(xhci, udev->slot_id);
++	xhci_disable_slot(xhci, udev->slot_id);
++	xhci_free_virt_device(xhci, udev->slot_id);
+ }
+ 
+ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)
+@@ -3932,7 +3929,7 @@ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)
+ 	u32 state;
+ 	int ret = 0;
+ 
+-	command = xhci_alloc_command(xhci, false, GFP_KERNEL);
++	command = xhci_alloc_command(xhci, true, GFP_KERNEL);
+ 	if (!command)
+ 		return -ENOMEM;
+ 
+@@ -3957,6 +3954,15 @@ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)
+ 	}
+ 	xhci_ring_cmd_db(xhci);
+ 	spin_unlock_irqrestore(&xhci->lock, flags);
++
++	wait_for_completion(command->completion);
++
++	if (command->status != COMP_SUCCESS)
++		xhci_warn(xhci, "Unsuccessful disable slot %u command, status %d\n",
++			  slot_id, command->status);
++
++	xhci_free_command(xhci, command);
++
+ 	return ret;
+ }
+ 
+@@ -4053,23 +4059,20 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 
+ 	xhci_debugfs_create_slot(xhci, slot_id);
+ 
+-#ifndef CONFIG_USB_DEFAULT_PERSIST
+ 	/*
+ 	 * If resetting upon resume, we can't put the controller into runtime
+ 	 * suspend if there is a device attached.
+ 	 */
+ 	if (xhci->quirks & XHCI_RESET_ON_RESUME)
+ 		pm_runtime_get_noresume(hcd->self.controller);
+-#endif
+ 
+ 	/* Is this a LS or FS device under a HS hub? */
+ 	/* Hub or peripherial? */
+ 	return 1;
+ 
+ disable_slot:
+-	ret = xhci_disable_slot(xhci, udev->slot_id);
+-	if (ret)
+-		xhci_free_virt_device(xhci, udev->slot_id);
++	xhci_disable_slot(xhci, udev->slot_id);
++	xhci_free_virt_device(xhci, udev->slot_id);
+ 
+ 	return 0;
+ }
+@@ -4199,6 +4202,7 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 
+ 		mutex_unlock(&xhci->mutex);
+ 		ret = xhci_disable_slot(xhci, udev->slot_id);
++		xhci_free_virt_device(xhci, udev->slot_id);
+ 		if (!ret)
+ 			xhci_alloc_dev(hcd, udev);
+ 		kfree(command->completion);
+diff --git a/fs/aio.c b/fs/aio.c
+index 6a21d8919409c..2a9dfa58ec3ab 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -182,8 +182,9 @@ struct poll_iocb {
+ 	struct file		*file;
+ 	struct wait_queue_head	*head;
+ 	__poll_t		events;
+-	bool			done;
+ 	bool			cancelled;
++	bool			work_scheduled;
++	bool			work_need_resched;
+ 	struct wait_queue_entry	wait;
+ 	struct work_struct	work;
+ };
+@@ -1621,6 +1622,51 @@ static void aio_poll_put_work(struct work_struct *work)
+ 	iocb_put(iocb);
+ }
+ 
++/*
++ * Safely lock the waitqueue which the request is on, synchronizing with the
++ * case where the ->poll() provider decides to free its waitqueue early.
++ *
++ * Returns true on success, meaning that req->head->lock was locked, req->wait
++ * is on req->head, and an RCU read lock was taken.  Returns false if the
++ * request was already removed from its waitqueue (which might no longer exist).
++ */
++static bool poll_iocb_lock_wq(struct poll_iocb *req)
++{
++	wait_queue_head_t *head;
++
++	/*
++	 * While we hold the waitqueue lock and the waitqueue is nonempty,
++	 * wake_up_pollfree() will wait for us.  However, taking the waitqueue
++	 * lock in the first place can race with the waitqueue being freed.
++	 *
++	 * We solve this as eventpoll does: by taking advantage of the fact that
++	 * all users of wake_up_pollfree() will RCU-delay the actual free.  If
++	 * we enter rcu_read_lock() and see that the pointer to the queue is
++	 * non-NULL, we can then lock it without the memory being freed out from
++	 * under us, then check whether the request is still on the queue.
++	 *
++	 * Keep holding rcu_read_lock() as long as we hold the queue lock, in
++	 * case the caller deletes the entry from the queue, leaving it empty.
++	 * In that case, only RCU prevents the queue memory from being freed.
++	 */
++	rcu_read_lock();
++	head = smp_load_acquire(&req->head);
++	if (head) {
++		spin_lock(&head->lock);
++		if (!list_empty(&req->wait.entry))
++			return true;
++		spin_unlock(&head->lock);
++	}
++	rcu_read_unlock();
++	return false;
++}
++
++static void poll_iocb_unlock_wq(struct poll_iocb *req)
++{
++	spin_unlock(&req->head->lock);
++	rcu_read_unlock();
++}
++
+ static void aio_poll_complete_work(struct work_struct *work)
+ {
+ 	struct poll_iocb *req = container_of(work, struct poll_iocb, work);
+@@ -1640,14 +1686,27 @@ static void aio_poll_complete_work(struct work_struct *work)
+ 	 * avoid further branches in the fast path.
+ 	 */
+ 	spin_lock_irq(&ctx->ctx_lock);
+-	if (!mask && !READ_ONCE(req->cancelled)) {
+-		add_wait_queue(req->head, &req->wait);
+-		spin_unlock_irq(&ctx->ctx_lock);
+-		return;
+-	}
++	if (poll_iocb_lock_wq(req)) {
++		if (!mask && !READ_ONCE(req->cancelled)) {
++			/*
++			 * The request isn't actually ready to be completed yet.
++			 * Reschedule completion if another wakeup came in.
++			 */
++			if (req->work_need_resched) {
++				schedule_work(&req->work);
++				req->work_need_resched = false;
++			} else {
++				req->work_scheduled = false;
++			}
++			poll_iocb_unlock_wq(req);
++			spin_unlock_irq(&ctx->ctx_lock);
++			return;
++		}
++		list_del_init(&req->wait.entry);
++		poll_iocb_unlock_wq(req);
++	} /* else, POLLFREE has freed the waitqueue, so we must complete */
+ 	list_del_init(&iocb->ki_list);
+ 	iocb->ki_res.res = mangle_poll(mask);
+-	req->done = true;
+ 	spin_unlock_irq(&ctx->ctx_lock);
+ 
+ 	iocb_put(iocb);
+@@ -1659,13 +1718,14 @@ static int aio_poll_cancel(struct kiocb *iocb)
+ 	struct aio_kiocb *aiocb = container_of(iocb, struct aio_kiocb, rw);
+ 	struct poll_iocb *req = &aiocb->poll;
+ 
+-	spin_lock(&req->head->lock);
+-	WRITE_ONCE(req->cancelled, true);
+-	if (!list_empty(&req->wait.entry)) {
+-		list_del_init(&req->wait.entry);
+-		schedule_work(&aiocb->poll.work);
+-	}
+-	spin_unlock(&req->head->lock);
++	if (poll_iocb_lock_wq(req)) {
++		WRITE_ONCE(req->cancelled, true);
++		if (!req->work_scheduled) {
++			schedule_work(&aiocb->poll.work);
++			req->work_scheduled = true;
++		}
++		poll_iocb_unlock_wq(req);
++	} /* else, the request was force-cancelled by POLLFREE already */
+ 
+ 	return 0;
+ }
+@@ -1682,20 +1742,26 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ 	if (mask && !(mask & req->events))
+ 		return 0;
+ 
+-	list_del_init(&req->wait.entry);
+-
+-	if (mask && spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {
++	/*
++	 * Complete the request inline if possible.  This requires that three
++	 * conditions be met:
++	 *   1. An event mask must have been passed.  If a plain wakeup was done
++	 *	instead, then mask == 0 and we have to call vfs_poll() to get
++	 *	the events, so inline completion isn't possible.
++	 *   2. The completion work must not have already been scheduled.
++	 *   3. ctx_lock must not be busy.  We have to use trylock because we
++	 *	already hold the waitqueue lock, so this inverts the normal
++	 *	locking order.  Use irqsave/irqrestore because not all
++	 *	filesystems (e.g. fuse) call this function with IRQs disabled,
++	 *	yet IRQs have to be disabled before ctx_lock is obtained.
++	 */
++	if (mask && !req->work_scheduled &&
++	    spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {
+ 		struct kioctx *ctx = iocb->ki_ctx;
+ 
+-		/*
+-		 * Try to complete the iocb inline if we can. Use
+-		 * irqsave/irqrestore because not all filesystems (e.g. fuse)
+-		 * call this function with IRQs disabled and because IRQs
+-		 * have to be disabled before ctx_lock is obtained.
+-		 */
++		list_del_init(&req->wait.entry);
+ 		list_del(&iocb->ki_list);
+ 		iocb->ki_res.res = mangle_poll(mask);
+-		req->done = true;
+ 		if (iocb->ki_eventfd && eventfd_signal_count()) {
+ 			iocb = NULL;
+ 			INIT_WORK(&req->work, aio_poll_put_work);
+@@ -1705,7 +1771,43 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ 		if (iocb)
+ 			iocb_put(iocb);
+ 	} else {
+-		schedule_work(&req->work);
++		/*
++		 * Schedule the completion work if needed.  If it was already
++		 * scheduled, record that another wakeup came in.
++		 *
++		 * Don't remove the request from the waitqueue here, as it might
++		 * not actually be complete yet (we won't know until vfs_poll()
++		 * is called), and we must not miss any wakeups.  POLLFREE is an
++		 * exception to this; see below.
++		 */
++		if (req->work_scheduled) {
++			req->work_need_resched = true;
++		} else {
++			schedule_work(&req->work);
++			req->work_scheduled = true;
++		}
++
++		/*
++		 * If the waitqueue is being freed early but we can't complete
++		 * the request inline, we have to tear down the request as best
++		 * we can.  That means immediately removing the request from its
++		 * waitqueue and preventing all further accesses to the
++		 * waitqueue via the request.  We also need to schedule the
++		 * completion work (done above).  Also mark the request as
++		 * cancelled, to potentially skip an unneeded call to ->poll().
++		 */
++		if (mask & POLLFREE) {
++			WRITE_ONCE(req->cancelled, true);
++			list_del_init(&req->wait.entry);
++
++			/*
++			 * Careful: this *must* be the last step, since as soon
++			 * as req->head is NULL'ed out, the request can be
++			 * completed and freed, since aio_poll_complete_work()
++			 * will no longer need to take the waitqueue lock.
++			 */
++			smp_store_release(&req->head, NULL);
++		}
+ 	}
+ 	return 1;
+ }
+@@ -1713,6 +1815,7 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ struct aio_poll_table {
+ 	struct poll_table_struct	pt;
+ 	struct aio_kiocb		*iocb;
++	bool				queued;
+ 	int				error;
+ };
+ 
+@@ -1723,11 +1826,12 @@ aio_poll_queue_proc(struct file *file, struct wait_queue_head *head,
+ 	struct aio_poll_table *pt = container_of(p, struct aio_poll_table, pt);
+ 
+ 	/* multiple wait queues per file are not supported */
+-	if (unlikely(pt->iocb->poll.head)) {
++	if (unlikely(pt->queued)) {
+ 		pt->error = -EINVAL;
+ 		return;
+ 	}
+ 
++	pt->queued = true;
+ 	pt->error = 0;
+ 	pt->iocb->poll.head = head;
+ 	add_wait_queue(head, &pt->iocb->poll.wait);
+@@ -1752,12 +1856,14 @@ static int aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
+ 	req->events = demangle_poll(iocb->aio_buf) | EPOLLERR | EPOLLHUP;
+ 
+ 	req->head = NULL;
+-	req->done = false;
+ 	req->cancelled = false;
++	req->work_scheduled = false;
++	req->work_need_resched = false;
+ 
+ 	apt.pt._qproc = aio_poll_queue_proc;
+ 	apt.pt._key = req->events;
+ 	apt.iocb = aiocb;
++	apt.queued = false;
+ 	apt.error = -EINVAL; /* same as no support for IOCB_CMD_POLL */
+ 
+ 	/* initialized the list so that we can do list_empty checks */
+@@ -1766,23 +1872,35 @@ static int aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
+ 
+ 	mask = vfs_poll(req->file, &apt.pt) & req->events;
+ 	spin_lock_irq(&ctx->ctx_lock);
+-	if (likely(req->head)) {
+-		spin_lock(&req->head->lock);
+-		if (unlikely(list_empty(&req->wait.entry))) {
+-			if (apt.error)
++	if (likely(apt.queued)) {
++		bool on_queue = poll_iocb_lock_wq(req);
++
++		if (!on_queue || req->work_scheduled) {
++			/*
++			 * aio_poll_wake() already either scheduled the async
++			 * completion work, or completed the request inline.
++			 */
++			if (apt.error) /* unsupported case: multiple queues */
+ 				cancel = true;
+ 			apt.error = 0;
+ 			mask = 0;
+ 		}
+ 		if (mask || apt.error) {
++			/* Steal to complete synchronously. */
+ 			list_del_init(&req->wait.entry);
+ 		} else if (cancel) {
++			/* Cancel if possible (may be too late though). */
+ 			WRITE_ONCE(req->cancelled, true);
+-		} else if (!req->done) { /* actually waiting for an event */
++		} else if (on_queue) {
++			/*
++			 * Actually waiting for an event, so add the request to
++			 * active_reqs so that it can be cancelled if needed.
++			 */
+ 			list_add_tail(&aiocb->ki_list, &ctx->active_reqs);
+ 			aiocb->ki_cancel = aio_poll_cancel;
+ 		}
+-		spin_unlock(&req->head->lock);
++		if (on_queue)
++			poll_iocb_unlock_wq(req);
+ 	}
+ 	if (mask) { /* no async, we'd stolen it */
+ 		aiocb->ki_res.res = mangle_poll(mask);
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 81e98a457130f..5fc65a780f83b 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -3769,6 +3769,12 @@ static void set_btree_ioerr(struct page *page)
+ 	if (test_and_set_bit(EXTENT_BUFFER_WRITE_ERR, &eb->bflags))
+ 		return;
+ 
++	/*
++	 * A read may stumble upon this buffer later, make sure that it gets an
++	 * error and knows there was an error.
++	 */
++	clear_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags);
++
+ 	/*
+ 	 * If we error out, we should add back the dirty_metadata_bytes
+ 	 * to make it consistent.
+diff --git a/fs/btrfs/root-tree.c b/fs/btrfs/root-tree.c
+index 702dc5441f039..db37a37996497 100644
+--- a/fs/btrfs/root-tree.c
++++ b/fs/btrfs/root-tree.c
+@@ -336,7 +336,8 @@ int btrfs_del_root_ref(struct btrfs_trans_handle *trans, u64 root_id,
+ 	key.offset = ref_id;
+ again:
+ 	ret = btrfs_search_slot(trans, tree_root, &key, path, -1, 1);
+-	BUG_ON(ret < 0);
++	if (ret < 0)
++		goto out;
+ 	if (ret == 0) {
+ 		leaf = path->nodes[0];
+ 		ref = btrfs_item_ptr(leaf, path->slots[0],
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index 186fa2c2c6ba6..f9b730c43192d 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -2156,6 +2156,7 @@ static struct notifier_block nfsd4_cld_block = {
+ int
+ register_cld_notifier(void)
+ {
++	WARN_ON(!nfsd_net_id);
+ 	return rpc_pipefs_notifier_register(&nfsd4_cld_block);
+ }
+ 
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 1cdf7e0a5c22d..210147960c52e 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1089,6 +1089,11 @@ hash_delegation_locked(struct nfs4_delegation *dp, struct nfs4_file *fp)
+ 	return 0;
+ }
+ 
++static bool delegation_hashed(struct nfs4_delegation *dp)
++{
++	return !(list_empty(&dp->dl_perfile));
++}
++
+ static bool
+ unhash_delegation_locked(struct nfs4_delegation *dp)
+ {
+@@ -1096,7 +1101,7 @@ unhash_delegation_locked(struct nfs4_delegation *dp)
+ 
+ 	lockdep_assert_held(&state_lock);
+ 
+-	if (list_empty(&dp->dl_perfile))
++	if (!delegation_hashed(dp))
+ 		return false;
+ 
+ 	dp->dl_stid.sc_type = NFS4_CLOSED_DELEG_STID;
+@@ -4512,7 +4517,7 @@ static void nfsd4_cb_recall_prepare(struct nfsd4_callback *cb)
+ 	 * queued for a lease break. Don't queue it again.
+ 	 */
+ 	spin_lock(&state_lock);
+-	if (dp->dl_time == 0) {
++	if (delegation_hashed(dp) && dp->dl_time == 0) {
+ 		dp->dl_time = ktime_get_boottime_seconds();
+ 		list_add_tail(&dp->dl_recall_lru, &nn->del_recall_lru);
+ 	}
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 21c4ffda5f943..a8f954bbde4f5 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1525,12 +1525,9 @@ static int __init init_nfsd(void)
+ 	int retval;
+ 	printk(KERN_INFO "Installing knfsd (copyright (C) 1996 okir@monad.swb.de).\n");
+ 
+-	retval = register_cld_notifier();
+-	if (retval)
+-		return retval;
+ 	retval = nfsd4_init_slabs();
+ 	if (retval)
+-		goto out_unregister_notifier;
++		return retval;
+ 	retval = nfsd4_init_pnfs();
+ 	if (retval)
+ 		goto out_free_slabs;
+@@ -1547,9 +1544,14 @@ static int __init init_nfsd(void)
+ 		goto out_free_exports;
+ 	retval = register_pernet_subsys(&nfsd_net_ops);
+ 	if (retval < 0)
++		goto out_free_filesystem;
++	retval = register_cld_notifier();
++	if (retval)
+ 		goto out_free_all;
+ 	return 0;
+ out_free_all:
++	unregister_pernet_subsys(&nfsd_net_ops);
++out_free_filesystem:
+ 	unregister_filesystem(&nfsd_fs_type);
+ out_free_exports:
+ 	remove_proc_entry("fs/nfs/exports", NULL);
+@@ -1562,13 +1564,12 @@ out_free_stat:
+ 	nfsd4_exit_pnfs();
+ out_free_slabs:
+ 	nfsd4_free_slabs();
+-out_unregister_notifier:
+-	unregister_cld_notifier();
+ 	return retval;
+ }
+ 
+ static void __exit exit_nfsd(void)
+ {
++	unregister_cld_notifier();
+ 	unregister_pernet_subsys(&nfsd_net_ops);
+ 	nfsd_drc_slab_free();
+ 	remove_proc_entry("fs/nfs/exports", NULL);
+@@ -1578,7 +1579,6 @@ static void __exit exit_nfsd(void)
+ 	nfsd4_free_slabs();
+ 	nfsd4_exit_pnfs();
+ 	unregister_filesystem(&nfsd_fs_type);
+-	unregister_cld_notifier();
+ }
+ 
+ MODULE_AUTHOR("Olaf Kirch <okir@monad.swb.de>");
+diff --git a/fs/signalfd.c b/fs/signalfd.c
+index 456046e158737..b94fb5f81797a 100644
+--- a/fs/signalfd.c
++++ b/fs/signalfd.c
+@@ -35,17 +35,7 @@
+ 
+ void signalfd_cleanup(struct sighand_struct *sighand)
+ {
+-	wait_queue_head_t *wqh = &sighand->signalfd_wqh;
+-	/*
+-	 * The lockless check can race with remove_wait_queue() in progress,
+-	 * but in this case its caller should run under rcu_read_lock() and
+-	 * sighand_cachep is SLAB_TYPESAFE_BY_RCU, we can safely return.
+-	 */
+-	if (likely(!waitqueue_active(wqh)))
+-		return;
+-
+-	/* wait_queue_entry_t->func(POLLFREE) should do remove_wait_queue() */
+-	wake_up_poll(wqh, EPOLLHUP | POLLFREE);
++	wake_up_pollfree(&sighand->signalfd_wqh);
+ }
+ 
+ struct signalfd_ctx {
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index bf58ae6f984fe..ade05887070dd 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -159,6 +159,77 @@ struct tracefs_fs_info {
+ 	struct tracefs_mount_opts mount_opts;
+ };
+ 
++static void change_gid(struct dentry *dentry, kgid_t gid)
++{
++	if (!dentry->d_inode)
++		return;
++	dentry->d_inode->i_gid = gid;
++}
++
++/*
++ * Taken from d_walk, but without he need for handling renames.
++ * Nothing can be renamed while walking the list, as tracefs
++ * does not support renames. This is only called when mounting
++ * or remounting the file system, to set all the files to
++ * the given gid.
++ */
++static void set_gid(struct dentry *parent, kgid_t gid)
++{
++	struct dentry *this_parent;
++	struct list_head *next;
++
++	this_parent = parent;
++	spin_lock(&this_parent->d_lock);
++
++	change_gid(this_parent, gid);
++repeat:
++	next = this_parent->d_subdirs.next;
++resume:
++	while (next != &this_parent->d_subdirs) {
++		struct list_head *tmp = next;
++		struct dentry *dentry = list_entry(tmp, struct dentry, d_child);
++		next = tmp->next;
++
++		spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
++
++		change_gid(dentry, gid);
++
++		if (!list_empty(&dentry->d_subdirs)) {
++			spin_unlock(&this_parent->d_lock);
++			spin_release(&dentry->d_lock.dep_map, _RET_IP_);
++			this_parent = dentry;
++			spin_acquire(&this_parent->d_lock.dep_map, 0, 1, _RET_IP_);
++			goto repeat;
++		}
++		spin_unlock(&dentry->d_lock);
++	}
++	/*
++	 * All done at this level ... ascend and resume the search.
++	 */
++	rcu_read_lock();
++ascend:
++	if (this_parent != parent) {
++		struct dentry *child = this_parent;
++		this_parent = child->d_parent;
++
++		spin_unlock(&child->d_lock);
++		spin_lock(&this_parent->d_lock);
++
++		/* go into the first sibling still alive */
++		do {
++			next = child->d_child.next;
++			if (next == &this_parent->d_subdirs)
++				goto ascend;
++			child = list_entry(next, struct dentry, d_child);
++		} while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED));
++		rcu_read_unlock();
++		goto resume;
++	}
++	rcu_read_unlock();
++	spin_unlock(&this_parent->d_lock);
++	return;
++}
++
+ static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts)
+ {
+ 	substring_t args[MAX_OPT_ARGS];
+@@ -191,6 +262,7 @@ static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts)
+ 			if (!gid_valid(gid))
+ 				return -EINVAL;
+ 			opts->gid = gid;
++			set_gid(tracefs_mount->mnt_root, gid);
+ 			break;
+ 		case Opt_mode:
+ 			if (match_octal(&args[0], &option))
+@@ -412,6 +484,8 @@ struct dentry *tracefs_create_file(const char *name, umode_t mode,
+ 	inode->i_mode = mode;
+ 	inode->i_fop = fops ? fops : &tracefs_file_operations;
+ 	inode->i_private = data;
++	inode->i_uid = d_inode(dentry->d_parent)->i_uid;
++	inode->i_gid = d_inode(dentry->d_parent)->i_gid;
+ 	d_instantiate(dentry, inode);
+ 	fsnotify_create(dentry->d_parent->d_inode, dentry);
+ 	return end_creating(dentry);
+@@ -434,6 +508,8 @@ static struct dentry *__create_dir(const char *name, struct dentry *parent,
+ 	inode->i_mode = S_IFDIR | S_IRWXU | S_IRUSR| S_IRGRP | S_IXUSR | S_IXGRP;
+ 	inode->i_op = ops;
+ 	inode->i_fop = &simple_dir_operations;
++	inode->i_uid = d_inode(dentry->d_parent)->i_uid;
++	inode->i_gid = d_inode(dentry->d_parent)->i_gid;
+ 
+ 	/* directory inodes start off with i_nlink == 2 (for "." entry) */
+ 	inc_nlink(inode);
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 474a0d852614f..e6ddf5a3beaf8 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -666,6 +666,7 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
+ struct bpf_trampoline *bpf_trampoline_get(u64 key,
+ 					  struct bpf_attach_target_info *tgt_info);
+ void bpf_trampoline_put(struct bpf_trampoline *tr);
++int arch_prepare_bpf_dispatcher(void *image, s64 *funcs, int num_funcs);
+ #define BPF_DISPATCHER_INIT(_name) {				\
+ 	.mutex = __MUTEX_INITIALIZER(_name.mutex),		\
+ 	.func = &_name##_func,					\
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 6ed2a97eb55f1..fc56d53cc68bf 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -833,6 +833,11 @@ static inline bool hid_is_using_ll_driver(struct hid_device *hdev,
+ 	return hdev->ll_driver == driver;
+ }
+ 
++static inline bool hid_is_usb(struct hid_device *hdev)
++{
++	return hid_is_using_ll_driver(hdev, &usb_hid_driver);
++}
++
+ #define	PM_HINT_FULLON	1<<5
+ #define PM_HINT_NORMAL	1<<1
+ 
+diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
+index 6c08a085367bf..161acd4ede448 100644
+--- a/include/linux/pm_runtime.h
++++ b/include/linux/pm_runtime.h
+@@ -127,7 +127,7 @@ static inline bool pm_runtime_suspended(struct device *dev)
+  * pm_runtime_active - Check whether or not a device is runtime-active.
+  * @dev: Target device.
+  *
+- * Return %true if runtime PM is enabled for @dev and its runtime PM status is
++ * Return %true if runtime PM is disabled for @dev or its runtime PM status is
+  * %RPM_ACTIVE, or %false otherwise.
+  *
+  * Note that the return value of this function can only be trusted if it is
+diff --git a/include/linux/wait.h b/include/linux/wait.h
+index f8b0704968a1e..9b8b0833100a0 100644
+--- a/include/linux/wait.h
++++ b/include/linux/wait.h
+@@ -207,6 +207,7 @@ void __wake_up_sync_key(struct wait_queue_head *wq_head, unsigned int mode, void
+ void __wake_up_locked_sync_key(struct wait_queue_head *wq_head, unsigned int mode, void *key);
+ void __wake_up_locked(struct wait_queue_head *wq_head, unsigned int mode, int nr);
+ void __wake_up_sync(struct wait_queue_head *wq_head, unsigned int mode);
++void __wake_up_pollfree(struct wait_queue_head *wq_head);
+ 
+ #define wake_up(x)			__wake_up(x, TASK_NORMAL, 1, NULL)
+ #define wake_up_nr(x, nr)		__wake_up(x, TASK_NORMAL, nr, NULL)
+@@ -235,6 +236,31 @@ void __wake_up_sync(struct wait_queue_head *wq_head, unsigned int mode);
+ #define wake_up_interruptible_sync_poll_locked(x, m)				\
+ 	__wake_up_locked_sync_key((x), TASK_INTERRUPTIBLE, poll_to_key(m))
+ 
++/**
++ * wake_up_pollfree - signal that a polled waitqueue is going away
++ * @wq_head: the wait queue head
++ *
++ * In the very rare cases where a ->poll() implementation uses a waitqueue whose
++ * lifetime is tied to a task rather than to the 'struct file' being polled,
++ * this function must be called before the waitqueue is freed so that
++ * non-blocking polls (e.g. epoll) are notified that the queue is going away.
++ *
++ * The caller must also RCU-delay the freeing of the wait_queue_head, e.g. via
++ * an explicit synchronize_rcu() or call_rcu(), or via SLAB_TYPESAFE_BY_RCU.
++ */
++static inline void wake_up_pollfree(struct wait_queue_head *wq_head)
++{
++	/*
++	 * For performance reasons, we don't always take the queue lock here.
++	 * Therefore, we might race with someone removing the last entry from
++	 * the queue, and proceed while they still hold the queue lock.
++	 * However, rcu_read_lock() is required to be held in such cases, so we
++	 * can safely proceed with an RCU-delayed free.
++	 */
++	if (waitqueue_active(wq_head))
++		__wake_up_pollfree(wq_head);
++}
++
+ #define ___wait_cond_timeout(condition)						\
+ ({										\
+ 	bool __cond = (condition);						\
+diff --git a/include/net/bond_alb.h b/include/net/bond_alb.h
+index f6af76c87a6c3..191c36afa1f4a 100644
+--- a/include/net/bond_alb.h
++++ b/include/net/bond_alb.h
+@@ -126,7 +126,7 @@ struct tlb_slave_info {
+ struct alb_bond_info {
+ 	struct tlb_client_info	*tx_hashtbl; /* Dynamically allocated */
+ 	u32			unbalanced_load;
+-	int			tx_rebalance_counter;
++	atomic_t		tx_rebalance_counter;
+ 	int			lp_counter;
+ 	/* -------- rlb parameters -------- */
+ 	int rlb_enabled;
+diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h
+index 439379ca9ffac..5cf23a23f3c44 100644
+--- a/include/net/netfilter/nf_conntrack.h
++++ b/include/net/netfilter/nf_conntrack.h
+@@ -262,14 +262,14 @@ static inline bool nf_is_loopback_packet(const struct sk_buff *skb)
+ /* jiffies until ct expires, 0 if already expired */
+ static inline unsigned long nf_ct_expires(const struct nf_conn *ct)
+ {
+-	s32 timeout = ct->timeout - nfct_time_stamp;
++	s32 timeout = READ_ONCE(ct->timeout) - nfct_time_stamp;
+ 
+ 	return timeout > 0 ? timeout : 0;
+ }
+ 
+ static inline bool nf_ct_is_expired(const struct nf_conn *ct)
+ {
+-	return (__s32)(ct->timeout - nfct_time_stamp) <= 0;
++	return (__s32)(READ_ONCE(ct->timeout) - nfct_time_stamp) <= 0;
+ }
+ 
+ /* use after obtaining a reference count */
+@@ -288,7 +288,7 @@ static inline bool nf_ct_should_gc(const struct nf_conn *ct)
+ static inline void nf_ct_offload_timeout(struct nf_conn *ct)
+ {
+ 	if (nf_ct_expires(ct) < NF_CT_DAY / 2)
+-		ct->timeout = nfct_time_stamp + NF_CT_DAY;
++		WRITE_ONCE(ct->timeout, nfct_time_stamp + NF_CT_DAY);
+ }
+ 
+ struct kernel_param;
+diff --git a/include/uapi/asm-generic/poll.h b/include/uapi/asm-generic/poll.h
+index 41b509f410bf9..f9c520ce4bf4e 100644
+--- a/include/uapi/asm-generic/poll.h
++++ b/include/uapi/asm-generic/poll.h
+@@ -29,7 +29,7 @@
+ #define POLLRDHUP       0x2000
+ #endif
+ 
+-#define POLLFREE	(__force __poll_t)0x4000	/* currently only for epoll */
++#define POLLFREE	(__force __poll_t)0x4000
+ 
+ #define POLL_BUSY_LOOP	(__force __poll_t)0x8000
+ 
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 5a2b28e6816ee..95ab3f243acde 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -7240,7 +7240,7 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *vstate,
+ 
+ 	new_range = dst_reg->off;
+ 	if (range_right_open)
+-		new_range--;
++		new_range++;
+ 
+ 	/* Examples for register markings:
+ 	 *
+diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
+index 21005b980a6b7..a55642aa3f68b 100644
+--- a/kernel/sched/wait.c
++++ b/kernel/sched/wait.c
+@@ -223,6 +223,13 @@ void __wake_up_sync(struct wait_queue_head *wq_head, unsigned int mode)
+ }
+ EXPORT_SYMBOL_GPL(__wake_up_sync);	/* For internal use only */
+ 
++void __wake_up_pollfree(struct wait_queue_head *wq_head)
++{
++	__wake_up(wq_head, TASK_NORMAL, 0, poll_to_key(EPOLLHUP | POLLFREE));
++	/* POLLFREE must have cleared the queue. */
++	WARN_ON_ONCE(waitqueue_active(wq_head));
++}
++
+ /*
+  * Note: we use "set_current_state()" _after_ the wait-queue add,
+  * because we need a memory barrier there on SMP, so that any
+diff --git a/mm/backing-dev.c b/mm/backing-dev.c
+index 408d5051d05b3..ca770a783a9f9 100644
+--- a/mm/backing-dev.c
++++ b/mm/backing-dev.c
+@@ -872,6 +872,13 @@ void bdi_unregister(struct backing_dev_info *bdi)
+ 	wb_shutdown(&bdi->wb);
+ 	cgwb_bdi_unregister(bdi);
+ 
++	/*
++	 * If this BDI's min ratio has been set, use bdi_set_min_ratio() to
++	 * update the global bdi_min_ratio.
++	 */
++	if (bdi->min_ratio)
++		bdi_set_min_ratio(bdi, 0);
++
+ 	if (bdi->dev) {
+ 		bdi_debug_unregister(bdi);
+ 		device_unregister(bdi->dev);
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 96cf4bc1f9585..442b67c044a9f 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -3265,14 +3265,6 @@ static int devlink_nl_cmd_reload(struct sk_buff *skb, struct genl_info *info)
+ 		return err;
+ 	}
+ 
+-	if (info->attrs[DEVLINK_ATTR_NETNS_PID] ||
+-	    info->attrs[DEVLINK_ATTR_NETNS_FD] ||
+-	    info->attrs[DEVLINK_ATTR_NETNS_ID]) {
+-		dest_net = devlink_netns_get(skb, info);
+-		if (IS_ERR(dest_net))
+-			return PTR_ERR(dest_net);
+-	}
+-
+ 	if (info->attrs[DEVLINK_ATTR_RELOAD_ACTION])
+ 		action = nla_get_u8(info->attrs[DEVLINK_ATTR_RELOAD_ACTION]);
+ 	else
+@@ -3315,6 +3307,14 @@ static int devlink_nl_cmd_reload(struct sk_buff *skb, struct genl_info *info)
+ 			return -EINVAL;
+ 		}
+ 	}
++	if (info->attrs[DEVLINK_ATTR_NETNS_PID] ||
++	    info->attrs[DEVLINK_ATTR_NETNS_FD] ||
++	    info->attrs[DEVLINK_ATTR_NETNS_ID]) {
++		dest_net = devlink_netns_get(skb, info);
++		if (IS_ERR(dest_net))
++			return PTR_ERR(dest_net);
++	}
++
+ 	err = devlink_reload(devlink, dest_net, action, limit, &actions_performed, info->extack);
+ 
+ 	if (dest_net)
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 8eec7667aa761..52a1c8725337b 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -734,11 +734,10 @@ struct pneigh_entry * pneigh_lookup(struct neigh_table *tbl,
+ 
+ 	ASSERT_RTNL();
+ 
+-	n = kmalloc(sizeof(*n) + key_len, GFP_KERNEL);
++	n = kzalloc(sizeof(*n) + key_len, GFP_KERNEL);
+ 	if (!n)
+ 		goto out;
+ 
+-	n->protocol = 0;
+ 	write_pnet(&n->net, net);
+ 	memcpy(n->key, pkey, key_len);
+ 	n->dev = dev;
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 655f0d8a13d36..86ed2afbee302 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -899,7 +899,7 @@ static int udp_send_skb(struct sk_buff *skb, struct flowi4 *fl4,
+ 			kfree_skb(skb);
+ 			return -EINVAL;
+ 		}
+-		if (skb->len > cork->gso_size * UDP_MAX_SEGMENTS) {
++		if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) {
+ 			kfree_skb(skb);
+ 			return -EINVAL;
+ 		}
+diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c
+index 897fa59c47de7..4d4399c5c5ea9 100644
+--- a/net/ipv6/seg6_iptunnel.c
++++ b/net/ipv6/seg6_iptunnel.c
+@@ -160,6 +160,14 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
+ 		hdr->hop_limit = ip6_dst_hoplimit(skb_dst(skb));
+ 
+ 		memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
++
++		/* the control block has been erased, so we have to set the
++		 * iif once again.
++		 * We read the receiving interface index directly from the
++		 * skb->skb_iif as it is done in the IPv4 receiving path (i.e.:
++		 * ip_rcv_core(...)).
++		 */
++		IP6CB(skb)->iif = skb->skb_iif;
+ 	}
+ 
+ 	hdr->nexthdr = NEXTHDR_ROUTING;
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 6a66e99459351..f4cf26b606f92 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -660,7 +660,7 @@ bool nf_ct_delete(struct nf_conn *ct, u32 portid, int report)
+ 
+ 	tstamp = nf_conn_tstamp_find(ct);
+ 	if (tstamp) {
+-		s32 timeout = ct->timeout - nfct_time_stamp;
++		s32 timeout = READ_ONCE(ct->timeout) - nfct_time_stamp;
+ 
+ 		tstamp->stop = ktime_get_real_ns();
+ 		if (timeout < 0)
+@@ -980,7 +980,7 @@ static int nf_ct_resolve_clash_harder(struct sk_buff *skb, u32 repl_idx)
+ 	}
+ 
+ 	/* We want the clashing entry to go away real soon: 1 second timeout. */
+-	loser_ct->timeout = nfct_time_stamp + HZ;
++	WRITE_ONCE(loser_ct->timeout, nfct_time_stamp + HZ);
+ 
+ 	/* IPS_NAT_CLASH removes the entry automatically on the first
+ 	 * reply.  Also prevents UDP tracker from moving the entry to
+@@ -1487,7 +1487,7 @@ __nf_conntrack_alloc(struct net *net,
+ 	/* save hash for reusing when confirming */
+ 	*(unsigned long *)(&ct->tuplehash[IP_CT_DIR_REPLY].hnnode.pprev) = hash;
+ 	ct->status = 0;
+-	ct->timeout = 0;
++	WRITE_ONCE(ct->timeout, 0);
+ 	write_pnet(&ct->ct_net, net);
+ 	memset(&ct->__nfct_init_offset, 0,
+ 	       offsetof(struct nf_conn, proto) -
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 60a1a666e797a..c6bcc28ae3387 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -1971,7 +1971,7 @@ static int ctnetlink_change_timeout(struct nf_conn *ct,
+ 
+ 	if (timeout > INT_MAX)
+ 		timeout = INT_MAX;
+-	ct->timeout = nfct_time_stamp + (u32)timeout;
++	WRITE_ONCE(ct->timeout, nfct_time_stamp + (u32)timeout);
+ 
+ 	if (test_bit(IPS_DYING_BIT, &ct->status))
+ 		return -ETIME;
+diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
+index f4029fc2c8846..d091d51b5e19f 100644
+--- a/net/netfilter/nf_flow_table_core.c
++++ b/net/netfilter/nf_flow_table_core.c
+@@ -151,8 +151,8 @@ static void flow_offload_fixup_ct_timeout(struct nf_conn *ct)
+ 	else
+ 		return;
+ 
+-	if (nf_flow_timeout_delta(ct->timeout) > (__s32)timeout)
+-		ct->timeout = nfct_time_stamp + timeout;
++	if (nf_flow_timeout_delta(READ_ONCE(ct->timeout)) > (__s32)timeout)
++		WRITE_ONCE(ct->timeout, nfct_time_stamp + timeout);
+ }
+ 
+ static void flow_offload_fixup_ct_state(struct nf_conn *ct)
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index eabdb8d552eef..10332178da8c5 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -887,7 +887,7 @@ static int nft_pipapo_avx2_lookup_8b_6(unsigned long *map, unsigned long *fill,
+ 			NFT_PIPAPO_AVX2_BUCKET_LOAD8(4,  lt, 4, pkt[4], bsize);
+ 
+ 			NFT_PIPAPO_AVX2_AND(5, 0, 1);
+-			NFT_PIPAPO_AVX2_BUCKET_LOAD8(6,  lt, 6, pkt[5], bsize);
++			NFT_PIPAPO_AVX2_BUCKET_LOAD8(6,  lt, 5, pkt[5], bsize);
+ 			NFT_PIPAPO_AVX2_AND(7, 2, 3);
+ 
+ 			/* Stall */
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index bec7847f8eaac..0767404636c14 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -1392,8 +1392,10 @@ static int nfc_genl_dump_ses_done(struct netlink_callback *cb)
+ {
+ 	struct class_dev_iter *iter = (struct class_dev_iter *) cb->args[0];
+ 
+-	nfc_device_iter_exit(iter);
+-	kfree(iter);
++	if (iter) {
++		nfc_device_iter_exit(iter);
++		kfree(iter);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
+index cac684952edc5..c70802785518f 100644
+--- a/net/sched/sch_fq_pie.c
++++ b/net/sched/sch_fq_pie.c
+@@ -531,6 +531,7 @@ static void fq_pie_destroy(struct Qdisc *sch)
+ 	struct fq_pie_sched_data *q = qdisc_priv(sch);
+ 
+ 	tcf_block_put(q->block);
++	q->p_params.tupdate = 0;
+ 	del_timer_sync(&q->adapt_timer);
+ 	kvfree(q->flows);
+ }
+diff --git a/scripts/dummy-tools/gcc b/scripts/dummy-tools/gcc
+index 11c9f045ee4b9..0d0589cf8184e 100755
+--- a/scripts/dummy-tools/gcc
++++ b/scripts/dummy-tools/gcc
+@@ -75,16 +75,12 @@ if arg_contain -S "$@"; then
+ 	fi
+ fi
+ 
+-# For scripts/gcc-plugin.sh
++# To set GCC_PLUGINS
+ if arg_contain -print-file-name=plugin "$@"; then
+ 	plugin_dir=$(mktemp -d)
+ 
+-	sed -n 's/.*#include "\(.*\)"/\1/p' $(dirname $0)/../gcc-plugins/gcc-common.h |
+-	while read header
+-	do
+-		mkdir -p $plugin_dir/include/$(dirname $header)
+-		touch $plugin_dir/include/$header
+-	done
++	mkdir -p $plugin_dir/include
++	touch $plugin_dir/include/plugin-version.h
+ 
+ 	echo $plugin_dir
+ 	exit 0
+diff --git a/scripts/gcc-plugin.sh b/scripts/gcc-plugin.sh
+deleted file mode 100755
+index b79fd0bea8384..0000000000000
+--- a/scripts/gcc-plugin.sh
++++ /dev/null
+@@ -1,19 +0,0 @@
+-#!/bin/sh
+-# SPDX-License-Identifier: GPL-2.0
+-
+-set -e
+-
+-srctree=$(dirname "$0")
+-
+-gccplugins_dir=$($* -print-file-name=plugin)
+-
+-# we need a c++ compiler that supports the designated initializer GNU extension
+-$HOSTCC -c -x c++ -std=gnu++98 - -fsyntax-only -I $srctree/gcc-plugins -I $gccplugins_dir/include 2>/dev/null <<EOF
+-#include "gcc-common.h"
+-class test {
+-public:
+-	int test;
+-} test = {
+-	.test = 1
+-};
+-EOF
+diff --git a/scripts/gcc-plugins/Kconfig b/scripts/gcc-plugins/Kconfig
+index ae19fb0243b9b..ab9eb4cbe33a6 100644
+--- a/scripts/gcc-plugins/Kconfig
++++ b/scripts/gcc-plugins/Kconfig
+@@ -9,7 +9,7 @@ menuconfig GCC_PLUGINS
+ 	bool "GCC plugins"
+ 	depends on HAVE_GCC_PLUGINS
+ 	depends on CC_IS_GCC
+-	depends on $(success,$(srctree)/scripts/gcc-plugin.sh $(CC))
++	depends on $(success,test -e $(shell,$(CC) -print-file-name=plugin)/include/plugin-version.h)
+ 	default y
+ 	help
+ 	  GCC plugins are loadable modules that provide extra features to the
+diff --git a/scripts/gcc-plugins/Makefile b/scripts/gcc-plugins/Makefile
+index d66949bfeba45..b5487cce69e8e 100644
+--- a/scripts/gcc-plugins/Makefile
++++ b/scripts/gcc-plugins/Makefile
+@@ -22,9 +22,9 @@ always-y += $(GCC_PLUGIN)
+ GCC_PLUGINS_DIR = $(shell $(CC) -print-file-name=plugin)
+ 
+ plugin_cxxflags	= -Wp,-MMD,$(depfile) $(KBUILD_HOSTCXXFLAGS) -fPIC \
+-		   -I $(GCC_PLUGINS_DIR)/include -I $(obj) -std=gnu++98 \
++		   -I $(GCC_PLUGINS_DIR)/include -I $(obj) -std=gnu++11 \
+ 		   -fno-rtti -fno-exceptions -fasynchronous-unwind-tables \
+-		   -ggdb -Wno-narrowing -Wno-unused-variable -Wno-c++11-compat \
++		   -ggdb -Wno-narrowing -Wno-unused-variable \
+ 		   -Wno-format-diag
+ 
+ plugin_ldflags	= -shared
+diff --git a/sound/core/control_compat.c b/sound/core/control_compat.c
+index 1d708aab9c98e..97467f6a32a13 100644
+--- a/sound/core/control_compat.c
++++ b/sound/core/control_compat.c
+@@ -264,6 +264,7 @@ static int copy_ctl_value_to_user(void __user *userdata,
+ 				  struct snd_ctl_elem_value *data,
+ 				  int type, int count)
+ {
++	struct snd_ctl_elem_value32 __user *data32 = userdata;
+ 	int i, size;
+ 
+ 	if (type == SNDRV_CTL_ELEM_TYPE_BOOLEAN ||
+@@ -280,6 +281,8 @@ static int copy_ctl_value_to_user(void __user *userdata,
+ 		if (copy_to_user(valuep, data->value.bytes.data, size))
+ 			return -EFAULT;
+ 	}
++	if (copy_to_user(&data32->id, &data->id, sizeof(data32->id)))
++		return -EFAULT;
+ 	return 0;
+ }
+ 
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 142fc751a8477..77727a69c3c4e 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -147,7 +147,7 @@ snd_pcm_hw_param_value_min(const struct snd_pcm_hw_params *params,
+  *
+  * Return the maximum value for field PAR.
+  */
+-static unsigned int
++static int
+ snd_pcm_hw_param_value_max(const struct snd_pcm_hw_params *params,
+ 			   snd_pcm_hw_param_t var, int *dir)
+ {
+@@ -682,18 +682,24 @@ static int snd_pcm_oss_period_size(struct snd_pcm_substream *substream,
+ 				   struct snd_pcm_hw_params *oss_params,
+ 				   struct snd_pcm_hw_params *slave_params)
+ {
+-	size_t s;
+-	size_t oss_buffer_size, oss_period_size, oss_periods;
+-	size_t min_period_size, max_period_size;
++	ssize_t s;
++	ssize_t oss_buffer_size;
++	ssize_t oss_period_size, oss_periods;
++	ssize_t min_period_size, max_period_size;
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+ 	size_t oss_frame_size;
+ 
+ 	oss_frame_size = snd_pcm_format_physical_width(params_format(oss_params)) *
+ 			 params_channels(oss_params) / 8;
+ 
++	oss_buffer_size = snd_pcm_hw_param_value_max(slave_params,
++						     SNDRV_PCM_HW_PARAM_BUFFER_SIZE,
++						     NULL);
++	if (oss_buffer_size <= 0)
++		return -EINVAL;
+ 	oss_buffer_size = snd_pcm_plug_client_size(substream,
+-						   snd_pcm_hw_param_value_max(slave_params, SNDRV_PCM_HW_PARAM_BUFFER_SIZE, NULL)) * oss_frame_size;
+-	if (!oss_buffer_size)
++						   oss_buffer_size * oss_frame_size);
++	if (oss_buffer_size <= 0)
+ 		return -EINVAL;
+ 	oss_buffer_size = rounddown_pow_of_two(oss_buffer_size);
+ 	if (atomic_read(&substream->mmap_count)) {
+@@ -730,7 +736,7 @@ static int snd_pcm_oss_period_size(struct snd_pcm_substream *substream,
+ 
+ 	min_period_size = snd_pcm_plug_client_size(substream,
+ 						   snd_pcm_hw_param_value_min(slave_params, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, NULL));
+-	if (min_period_size) {
++	if (min_period_size > 0) {
+ 		min_period_size *= oss_frame_size;
+ 		min_period_size = roundup_pow_of_two(min_period_size);
+ 		if (oss_period_size < min_period_size)
+@@ -739,7 +745,7 @@ static int snd_pcm_oss_period_size(struct snd_pcm_substream *substream,
+ 
+ 	max_period_size = snd_pcm_plug_client_size(substream,
+ 						   snd_pcm_hw_param_value_max(slave_params, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, NULL));
+-	if (max_period_size) {
++	if (max_period_size > 0) {
+ 		max_period_size *= oss_frame_size;
+ 		max_period_size = rounddown_pow_of_two(max_period_size);
+ 		if (oss_period_size > max_period_size)
+@@ -752,7 +758,7 @@ static int snd_pcm_oss_period_size(struct snd_pcm_substream *substream,
+ 		oss_periods = substream->oss.setup.periods;
+ 
+ 	s = snd_pcm_hw_param_value_max(slave_params, SNDRV_PCM_HW_PARAM_PERIODS, NULL);
+-	if (runtime->oss.maxfrags && s > runtime->oss.maxfrags)
++	if (s > 0 && runtime->oss.maxfrags && s > runtime->oss.maxfrags)
+ 		s = runtime->oss.maxfrags;
+ 	if (oss_periods > s)
+ 		oss_periods = s;
+@@ -878,8 +884,15 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
+ 		err = -EINVAL;
+ 		goto failure;
+ 	}
+-	choose_rate(substream, sparams, runtime->oss.rate);
+-	snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_CHANNELS, runtime->oss.channels, NULL);
++
++	err = choose_rate(substream, sparams, runtime->oss.rate);
++	if (err < 0)
++		goto failure;
++	err = snd_pcm_hw_param_near(substream, sparams,
++				    SNDRV_PCM_HW_PARAM_CHANNELS,
++				    runtime->oss.channels, NULL);
++	if (err < 0)
++		goto failure;
+ 
+ 	format = snd_pcm_oss_format_from(runtime->oss.format);
+ 
+@@ -1947,7 +1960,7 @@ static int snd_pcm_oss_set_fragment1(struct snd_pcm_substream *substream, unsign
+ 	if (runtime->oss.subdivision || runtime->oss.fragshift)
+ 		return -EINVAL;
+ 	fragshift = val & 0xffff;
+-	if (fragshift >= 31)
++	if (fragshift >= 25) /* should be large enough */
+ 		return -EINVAL;
+ 	runtime->oss.fragshift = fragshift;
+ 	runtime->oss.maxfrags = (val >> 16) & 0xffff;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index b980fa617229e..bed2a93001635 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6449,22 +6449,26 @@ static void alc287_fixup_legion_15imhg05_speakers(struct hda_codec *codec,
+ /* for alc285_fixup_ideapad_s740_coef() */
+ #include "ideapad_s740_helper.c"
+ 
+-static void alc256_fixup_tongfang_reset_persistent_settings(struct hda_codec *codec,
+-							    const struct hda_fixup *fix,
+-							    int action)
++static const struct coef_fw alc256_fixup_set_coef_defaults_coefs[] = {
++	WRITE_COEF(0x10, 0x0020), WRITE_COEF(0x24, 0x0000),
++	WRITE_COEF(0x26, 0x0000), WRITE_COEF(0x29, 0x3000),
++	WRITE_COEF(0x37, 0xfe05), WRITE_COEF(0x45, 0x5089),
++	{}
++};
++
++static void alc256_fixup_set_coef_defaults(struct hda_codec *codec,
++					   const struct hda_fixup *fix,
++					   int action)
+ {
+ 	/*
+-	* A certain other OS sets these coeffs to different values. On at least one TongFang
+-	* barebone these settings might survive even a cold reboot. So to restore a clean slate the
+-	* values are explicitly reset to default here. Without this, the external microphone is
+-	* always in a plugged-in state, while the internal microphone is always in an unplugged
+-	* state, breaking the ability to use the internal microphone.
+-	*/
+-	alc_write_coef_idx(codec, 0x24, 0x0000);
+-	alc_write_coef_idx(codec, 0x26, 0x0000);
+-	alc_write_coef_idx(codec, 0x29, 0x3000);
+-	alc_write_coef_idx(codec, 0x37, 0xfe05);
+-	alc_write_coef_idx(codec, 0x45, 0x5089);
++	 * A certain other OS sets these coeffs to different values. On at least
++	 * one TongFang barebone these settings might survive even a cold
++	 * reboot. So to restore a clean slate the values are explicitly reset
++	 * to default here. Without this, the external microphone is always in a
++	 * plugged-in state, while the internal microphone is always in an
++	 * unplugged state, breaking the ability to use the internal microphone.
++	 */
++	alc_process_coef_fw(codec, alc256_fixup_set_coef_defaults_coefs);
+ }
+ 
+ static const struct coef_fw alc233_fixup_no_audio_jack_coefs[] = {
+@@ -6704,7 +6708,7 @@ enum {
+ 	ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
+ 	ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
+ 	ALC287_FIXUP_13S_GEN2_SPEAKERS,
+-	ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS,
++	ALC256_FIXUP_SET_COEF_DEFAULTS,
+ 	ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+ 	ALC233_FIXUP_NO_AUDIO_JACK,
+ };
+@@ -8404,9 +8408,9 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_HEADSET_MODE,
+ 	},
+-	[ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS] = {
++	[ALC256_FIXUP_SET_COEF_DEFAULTS] = {
+ 		.type = HDA_FIXUP_FUNC,
+-		.v.func = alc256_fixup_tongfang_reset_persistent_settings,
++		.v.func = alc256_fixup_set_coef_defaults,
+ 	},
+ 	[ALC245_FIXUP_HP_GPIO_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -8866,7 +8870,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
+ 	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+ 	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+-	SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS),
++	SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_SET_COEF_DEFAULTS),
+ 	SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+@@ -10163,6 +10167,27 @@ static void alc671_fixup_hp_headset_mic2(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc897_hp_automute_hook(struct hda_codec *codec,
++					 struct hda_jack_callback *jack)
++{
++	struct alc_spec *spec = codec->spec;
++	int vref;
++
++	snd_hda_gen_hp_automute(codec, jack);
++	vref = spec->gen.hp_jack_present ? (PIN_HP | AC_PINCTL_VREF_100) : PIN_HP;
++	snd_hda_codec_write(codec, 0x1b, 0, AC_VERB_SET_PIN_WIDGET_CONTROL,
++			    vref);
++}
++
++static void alc897_fixup_lenovo_headset_mic(struct hda_codec *codec,
++				     const struct hda_fixup *fix, int action)
++{
++	struct alc_spec *spec = codec->spec;
++	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++		spec->gen.hp_automute_hook = alc897_hp_automute_hook;
++	}
++}
++
+ static const struct coef_fw alc668_coefs[] = {
+ 	WRITE_COEF(0x01, 0xbebe), WRITE_COEF(0x02, 0xaaaa), WRITE_COEF(0x03,    0x0),
+ 	WRITE_COEF(0x04, 0x0180), WRITE_COEF(0x06,    0x0), WRITE_COEF(0x07, 0x0f80),
+@@ -10243,6 +10268,8 @@ enum {
+ 	ALC668_FIXUP_ASUS_NO_HEADSET_MIC,
+ 	ALC668_FIXUP_HEADSET_MIC,
+ 	ALC668_FIXUP_MIC_DET_COEF,
++	ALC897_FIXUP_LENOVO_HEADSET_MIC,
++	ALC897_FIXUP_HEADSET_MIC_PIN,
+ };
+ 
+ static const struct hda_fixup alc662_fixups[] = {
+@@ -10649,6 +10676,19 @@ static const struct hda_fixup alc662_fixups[] = {
+ 			{}
+ 		},
+ 	},
++	[ALC897_FIXUP_LENOVO_HEADSET_MIC] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc897_fixup_lenovo_headset_mic,
++	},
++	[ALC897_FIXUP_HEADSET_MIC_PIN] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x1a, 0x03a11050 },
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC897_FIXUP_LENOVO_HEADSET_MIC
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+@@ -10693,6 +10733,10 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD),
+ 	SND_PCI_QUIRK(0x14cd, 0x5003, "USI", ALC662_FIXUP_USI_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC662_FIXUP_LENOVO_MULTI_CODECS),
++	SND_PCI_QUIRK(0x17aa, 0x32ca, "Lenovo ThinkCentre M80", ALC897_FIXUP_HEADSET_MIC_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x32f7, "Lenovo ThinkCentre M90", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo Ideapad Y550P", ALC662_FIXUP_IDEAPAD),
+ 	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Ideapad Y550", ALC662_FIXUP_IDEAPAD),
+ 	SND_PCI_QUIRK(0x1849, 0x5892, "ASRock B150M", ALC892_FIXUP_ASROCK_MOBO),
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index 0486b14697994..41827cdf26a39 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -2797,6 +2797,8 @@ static int rt5682_register_dai_clks(struct snd_soc_component *component)
+ 
+ 	for (i = 0; i < RT5682_DAI_NUM_CLKS; ++i) {
+ 		struct clk_init_data init = { };
++		struct clk_parent_data parent_data;
++		const struct clk_hw *parent;
+ 
+ 		dai_clk_hw = &rt5682->dai_clks_hw[i];
+ 
+@@ -2804,17 +2806,17 @@ static int rt5682_register_dai_clks(struct snd_soc_component *component)
+ 		case RT5682_DAI_WCLK_IDX:
+ 			/* Make MCLK the parent of WCLK */
+ 			if (rt5682->mclk) {
+-				init.parent_data = &(struct clk_parent_data){
++				parent_data = (struct clk_parent_data){
+ 					.fw_name = "mclk",
+ 				};
++				init.parent_data = &parent_data;
+ 				init.num_parents = 1;
+ 			}
+ 			break;
+ 		case RT5682_DAI_BCLK_IDX:
+ 			/* Make WCLK the parent of BCLK */
+-			init.parent_hws = &(const struct clk_hw *){
+-				&rt5682->dai_clks_hw[RT5682_DAI_WCLK_IDX]
+-			};
++			parent = &rt5682->dai_clks_hw[RT5682_DAI_WCLK_IDX];
++			init.parent_hws = &parent;
+ 			init.num_parents = 1;
+ 			break;
+ 		default:
+diff --git a/sound/soc/codecs/wcd934x.c b/sound/soc/codecs/wcd934x.c
+index 699b59cd389c0..01df3f4e045a9 100644
+--- a/sound/soc/codecs/wcd934x.c
++++ b/sound/soc/codecs/wcd934x.c
+@@ -2470,6 +2470,9 @@ static int wcd934x_compander_set(struct snd_kcontrol *kc,
+ 	int value = ucontrol->value.integer.value[0];
+ 	int sel;
+ 
++	if (wcd->comp_enabled[comp] == value)
++		return 0;
++
+ 	wcd->comp_enabled[comp] = value;
+ 	sel = value ? WCD934X_HPH_GAIN_SRC_SEL_COMPANDER :
+ 		WCD934X_HPH_GAIN_SRC_SEL_REGISTER;
+@@ -2493,10 +2496,10 @@ static int wcd934x_compander_set(struct snd_kcontrol *kc,
+ 	case COMPANDER_8:
+ 		break;
+ 	default:
+-		break;
++		return 0;
+ 	}
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static int wcd934x_rx_hph_mode_get(struct snd_kcontrol *kc,
+@@ -2540,6 +2543,31 @@ static int slim_rx_mux_get(struct snd_kcontrol *kc,
+ 	return 0;
+ }
+ 
++static int slim_rx_mux_to_dai_id(int mux)
++{
++	int aif_id;
++
++	switch (mux) {
++	case 1:
++		aif_id = AIF1_PB;
++		break;
++	case 2:
++		aif_id = AIF2_PB;
++		break;
++	case 3:
++		aif_id = AIF3_PB;
++		break;
++	case 4:
++		aif_id = AIF4_PB;
++		break;
++	default:
++		aif_id = -1;
++		break;
++	}
++
++	return aif_id;
++}
++
+ static int slim_rx_mux_put(struct snd_kcontrol *kc,
+ 			   struct snd_ctl_elem_value *ucontrol)
+ {
+@@ -2547,43 +2575,59 @@ static int slim_rx_mux_put(struct snd_kcontrol *kc,
+ 	struct wcd934x_codec *wcd = dev_get_drvdata(w->dapm->dev);
+ 	struct soc_enum *e = (struct soc_enum *)kc->private_value;
+ 	struct snd_soc_dapm_update *update = NULL;
++	struct wcd934x_slim_ch *ch, *c;
+ 	u32 port_id = w->shift;
++	bool found = false;
++	int mux_idx;
++	int prev_mux_idx = wcd->rx_port_value[port_id];
++	int aif_id;
+ 
+-	if (wcd->rx_port_value[port_id] == ucontrol->value.enumerated.item[0])
+-		return 0;
++	mux_idx = ucontrol->value.enumerated.item[0];
+ 
+-	wcd->rx_port_value[port_id] = ucontrol->value.enumerated.item[0];
++	if (mux_idx == prev_mux_idx)
++		return 0;
+ 
+-	switch (wcd->rx_port_value[port_id]) {
++	switch(mux_idx) {
+ 	case 0:
+-		list_del_init(&wcd->rx_chs[port_id].list);
+-		break;
+-	case 1:
+-		list_add_tail(&wcd->rx_chs[port_id].list,
+-			      &wcd->dai[AIF1_PB].slim_ch_list);
+-		break;
+-	case 2:
+-		list_add_tail(&wcd->rx_chs[port_id].list,
+-			      &wcd->dai[AIF2_PB].slim_ch_list);
+-		break;
+-	case 3:
+-		list_add_tail(&wcd->rx_chs[port_id].list,
+-			      &wcd->dai[AIF3_PB].slim_ch_list);
++		aif_id = slim_rx_mux_to_dai_id(prev_mux_idx);
++		if (aif_id < 0)
++			return 0;
++
++		list_for_each_entry_safe(ch, c, &wcd->dai[aif_id].slim_ch_list, list) {
++			if (ch->port == port_id + WCD934X_RX_START) {
++				found = true;
++				list_del_init(&ch->list);
++				break;
++			}
++		}
++		if (!found)
++			return 0;
++
+ 		break;
+-	case 4:
+-		list_add_tail(&wcd->rx_chs[port_id].list,
+-			      &wcd->dai[AIF4_PB].slim_ch_list);
++	case 1 ... 4:
++		aif_id = slim_rx_mux_to_dai_id(mux_idx);
++		if (aif_id < 0)
++			return 0;
++
++		if (list_empty(&wcd->rx_chs[port_id].list)) {
++			list_add_tail(&wcd->rx_chs[port_id].list,
++				      &wcd->dai[aif_id].slim_ch_list);
++		} else {
++			dev_err(wcd->dev ,"SLIM_RX%d PORT is busy\n", port_id);
++			return 0;
++		}
+ 		break;
++
+ 	default:
+-		dev_err(wcd->dev, "Unknown AIF %d\n",
+-			wcd->rx_port_value[port_id]);
++		dev_err(wcd->dev, "Unknown AIF %d\n", mux_idx);
+ 		goto err;
+ 	}
+ 
++	wcd->rx_port_value[port_id] = mux_idx;
+ 	snd_soc_dapm_mux_update_power(w->dapm, kc, wcd->rx_port_value[port_id],
+ 				      e, update);
+ 
+-	return 0;
++	return 1;
+ err:
+ 	return -EINVAL;
+ }
+@@ -3029,6 +3073,7 @@ static int slim_tx_mixer_put(struct snd_kcontrol *kc,
+ 	struct soc_mixer_control *mixer =
+ 			(struct soc_mixer_control *)kc->private_value;
+ 	int enable = ucontrol->value.integer.value[0];
++	struct wcd934x_slim_ch *ch, *c;
+ 	int dai_id = widget->shift;
+ 	int port_id = mixer->shift;
+ 
+@@ -3036,17 +3081,32 @@ static int slim_tx_mixer_put(struct snd_kcontrol *kc,
+ 	if (enable == wcd->tx_port_value[port_id])
+ 		return 0;
+ 
+-	wcd->tx_port_value[port_id] = enable;
+-
+-	if (enable)
+-		list_add_tail(&wcd->tx_chs[port_id].list,
+-			      &wcd->dai[dai_id].slim_ch_list);
+-	else
+-		list_del_init(&wcd->tx_chs[port_id].list);
++	if (enable) {
++		if (list_empty(&wcd->tx_chs[port_id].list)) {
++			list_add_tail(&wcd->tx_chs[port_id].list,
++				      &wcd->dai[dai_id].slim_ch_list);
++		} else {
++			dev_err(wcd->dev ,"SLIM_TX%d PORT is busy\n", port_id);
++			return 0;
++		}
++	 } else {
++		bool found = false;
++
++		list_for_each_entry_safe(ch, c, &wcd->dai[dai_id].slim_ch_list, list) {
++			if (ch->port == port_id) {
++				found = true;
++				list_del_init(&wcd->tx_chs[port_id].list);
++				break;
++			}
++		}
++		if (!found)
++			return 0;
++	 }
+ 
++	wcd->tx_port_value[port_id] = enable;
+ 	snd_soc_dapm_mixer_update_power(widget->dapm, kc, enable, update);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static const struct snd_kcontrol_new aif1_slim_cap_mixer[] = {
+diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c
+index db87e07b11c94..601525c77bbaf 100644
+--- a/sound/soc/codecs/wsa881x.c
++++ b/sound/soc/codecs/wsa881x.c
+@@ -772,7 +772,8 @@ static int wsa881x_put_pa_gain(struct snd_kcontrol *kc,
+ 
+ 		usleep_range(1000, 1010);
+ 	}
+-	return 0;
++
++	return 1;
+ }
+ 
+ static int wsa881x_get_port(struct snd_kcontrol *kcontrol,
+@@ -816,15 +817,22 @@ static int wsa881x_set_port(struct snd_kcontrol *kcontrol,
+ 		(struct soc_mixer_control *)kcontrol->private_value;
+ 	int portidx = mixer->reg;
+ 
+-	if (ucontrol->value.integer.value[0])
++	if (ucontrol->value.integer.value[0]) {
++		if (data->port_enable[portidx])
++			return 0;
++
+ 		data->port_enable[portidx] = true;
+-	else
++	} else {
++		if (!data->port_enable[portidx])
++			return 0;
++
+ 		data->port_enable[portidx] = false;
++	}
+ 
+ 	if (portidx == WSA881X_PORT_BOOST) /* Boost Switch */
+ 		wsa881x_boost_ctrl(comp, data->port_enable[portidx]);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static const char * const smart_boost_lvl_text[] = {
+diff --git a/sound/soc/qcom/qdsp6/q6routing.c b/sound/soc/qcom/qdsp6/q6routing.c
+index 934b3f282bccd..2026fa5902927 100644
+--- a/sound/soc/qcom/qdsp6/q6routing.c
++++ b/sound/soc/qcom/qdsp6/q6routing.c
+@@ -488,14 +488,16 @@ static int msm_routing_put_audio_mixer(struct snd_kcontrol *kcontrol,
+ 	struct session_data *session = &data->sessions[session_id];
+ 
+ 	if (ucontrol->value.integer.value[0]) {
++		if (session->port_id == be_id)
++			return 0;
++
+ 		session->port_id = be_id;
+ 		snd_soc_dapm_mixer_update_power(dapm, kcontrol, 1, update);
+ 	} else {
+-		if (session->port_id == be_id) {
+-			session->port_id = -1;
++		if (session->port_id == -1 || session->port_id != be_id)
+ 			return 0;
+-		}
+ 
++		session->port_id = -1;
+ 		snd_soc_dapm_mixer_update_power(dapm, kcontrol, 0, update);
+ 	}
+ 
+diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
+index 97cbfb31b7625..e1d2c255669e6 100644
+--- a/tools/build/Makefile.feature
++++ b/tools/build/Makefile.feature
+@@ -49,7 +49,6 @@ FEATURE_TESTS_BASIC :=                  \
+         numa_num_possible_cpus          \
+         libperl                         \
+         libpython                       \
+-        libpython-version               \
+         libslang                        \
+         libslang-include-subdir         \
+         libcrypto                       \
+diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
+index 89ba522e377dc..22ea350dab588 100644
+--- a/tools/build/feature/Makefile
++++ b/tools/build/feature/Makefile
+@@ -31,7 +31,6 @@ FILES=                                          \
+          test-numa_num_possible_cpus.bin        \
+          test-libperl.bin                       \
+          test-libpython.bin                     \
+-         test-libpython-version.bin             \
+          test-libslang.bin                      \
+          test-libslang-include-subdir.bin       \
+          test-libcrypto.bin                     \
+@@ -220,9 +219,6 @@ $(OUTPUT)test-libperl.bin:
+ $(OUTPUT)test-libpython.bin:
+ 	$(BUILD) $(FLAGS_PYTHON_EMBED)
+ 
+-$(OUTPUT)test-libpython-version.bin:
+-	$(BUILD)
+-
+ $(OUTPUT)test-libbfd.bin:
+ 	$(BUILD) -DPACKAGE='"perf"' -lbfd -ldl
+ 
+diff --git a/tools/build/feature/test-all.c b/tools/build/feature/test-all.c
+index 464873883396e..09517ff2fad56 100644
+--- a/tools/build/feature/test-all.c
++++ b/tools/build/feature/test-all.c
+@@ -14,10 +14,6 @@
+ # include "test-libpython.c"
+ #undef main
+ 
+-#define main main_test_libpython_version
+-# include "test-libpython-version.c"
+-#undef main
+-
+ #define main main_test_libperl
+ # include "test-libperl.c"
+ #undef main
+@@ -181,7 +177,6 @@
+ int main(int argc, char *argv[])
+ {
+ 	main_test_libpython();
+-	main_test_libpython_version();
+ 	main_test_libperl();
+ 	main_test_hello();
+ 	main_test_libelf();
+diff --git a/tools/build/feature/test-libpython-version.c b/tools/build/feature/test-libpython-version.c
+deleted file mode 100644
+index 47714b942d4d3..0000000000000
+--- a/tools/build/feature/test-libpython-version.c
++++ /dev/null
+@@ -1,11 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-#include <Python.h>
+-
+-#if PY_VERSION_HEX >= 0x03000000
+-	#error
+-#endif
+-
+-int main(void)
+-{
+-	return 0;
+-}
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 014b959575cae..68408a5ecfed2 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -259,8 +259,6 @@ endif
+ 
+ FEATURE_CHECK_CFLAGS-libpython := $(PYTHON_EMBED_CCOPTS)
+ FEATURE_CHECK_LDFLAGS-libpython := $(PYTHON_EMBED_LDOPTS)
+-FEATURE_CHECK_CFLAGS-libpython-version := $(PYTHON_EMBED_CCOPTS)
+-FEATURE_CHECK_LDFLAGS-libpython-version := $(PYTHON_EMBED_LDOPTS)
+ 
+ FEATURE_CHECK_LDFLAGS-libaio = -lrt
+ 
+diff --git a/tools/perf/util/smt.c b/tools/perf/util/smt.c
+index 20bacd5972ade..34f1b1b1176c7 100644
+--- a/tools/perf/util/smt.c
++++ b/tools/perf/util/smt.c
+@@ -15,7 +15,7 @@ int smt_on(void)
+ 	if (cached)
+ 		return cached_result;
+ 
+-	if (sysfs__read_int("devices/system/cpu/smt/active", &cached_result) > 0)
++	if (sysfs__read_int("devices/system/cpu/smt/active", &cached_result) >= 0)
+ 		goto done;
+ 
+ 	ncpu = sysconf(_SC_NPROCESSORS_CONF);
+diff --git a/tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c b/tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c
+index bfb97383e6b5a..b4ec228eb95d0 100644
+--- a/tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c
++++ b/tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c
+@@ -35,7 +35,7 @@
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ },
+ {
+-	"XDP pkt read, pkt_data' > pkt_end, good access",
++	"XDP pkt read, pkt_data' > pkt_end, corner case, good access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+@@ -87,6 +87,41 @@
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
++{
++	"XDP pkt read, pkt_data' > pkt_end, corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
++	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_data' > pkt_end, corner case -1, bad access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.errstr = "R1 offset is outside of the packet",
++	.result = REJECT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
+ {
+ 	"XDP pkt read, pkt_end > pkt_data', good access",
+ 	.insns = {
+@@ -106,16 +141,16 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_end > pkt_data', bad access 1",
++	"XDP pkt read, pkt_end > pkt_data', corner case -1, bad access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+ 		    offsetof(struct xdp_md, data_end)),
+ 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
+ 	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
+ 	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+@@ -142,6 +177,42 @@
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
++{
++	"XDP pkt read, pkt_end > pkt_data', corner case, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_end > pkt_data', corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
+ {
+ 	"XDP pkt read, pkt_data' < pkt_end, good access",
+ 	.insns = {
+@@ -161,16 +232,16 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_data' < pkt_end, bad access 1",
++	"XDP pkt read, pkt_data' < pkt_end, corner case -1, bad access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+ 		    offsetof(struct xdp_md, data_end)),
+ 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
+ 	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
+ 	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+@@ -198,7 +269,43 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_end < pkt_data', good access",
++	"XDP pkt read, pkt_data' < pkt_end, corner case, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_data' < pkt_end, corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_end < pkt_data', corner case, good access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+@@ -250,6 +357,41 @@
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
++{
++	"XDP pkt read, pkt_end < pkt_data', corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
++	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_end < pkt_data', corner case -1, bad access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.errstr = "R1 offset is outside of the packet",
++	.result = REJECT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
+ {
+ 	"XDP pkt read, pkt_data' >= pkt_end, good access",
+ 	.insns = {
+@@ -268,15 +410,15 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_data' >= pkt_end, bad access 1",
++	"XDP pkt read, pkt_data' >= pkt_end, corner case -1, bad access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+ 		    offsetof(struct xdp_md, data_end)),
+ 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
+ 	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+@@ -304,7 +446,41 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_end >= pkt_data', good access",
++	"XDP pkt read, pkt_data' >= pkt_end, corner case, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_data' >= pkt_end, corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_end >= pkt_data', corner case, good access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+@@ -359,7 +535,44 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_data' <= pkt_end, good access",
++	"XDP pkt read, pkt_end >= pkt_data', corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
++	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_end >= pkt_data', corner case -1, bad access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.errstr = "R1 offset is outside of the packet",
++	.result = REJECT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_data' <= pkt_end, corner case, good access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+@@ -413,6 +626,43 @@
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
++{
++	"XDP pkt read, pkt_data' <= pkt_end, corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
++	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_data' <= pkt_end, corner case -1, bad access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.errstr = "R1 offset is outside of the packet",
++	.result = REJECT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
+ {
+ 	"XDP pkt read, pkt_end <= pkt_data', good access",
+ 	.insns = {
+@@ -431,15 +681,15 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_end <= pkt_data', bad access 1",
++	"XDP pkt read, pkt_end <= pkt_data', corner case -1, bad access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+ 		    offsetof(struct xdp_md, data_end)),
+ 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
+ 	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+@@ -467,7 +717,41 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_meta' > pkt_data, good access",
++	"XDP pkt read, pkt_end <= pkt_data', corner case, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_end <= pkt_data', corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct xdp_md, data_end)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_meta' > pkt_data, corner case, good access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+ 		    offsetof(struct xdp_md, data_meta)),
+@@ -519,6 +803,41 @@
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
++{
++	"XDP pkt read, pkt_meta' > pkt_data, corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
++	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_meta' > pkt_data, corner case -1, bad access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.errstr = "R1 offset is outside of the packet",
++	.result = REJECT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
+ {
+ 	"XDP pkt read, pkt_data > pkt_meta', good access",
+ 	.insns = {
+@@ -538,16 +857,16 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_data > pkt_meta', bad access 1",
++	"XDP pkt read, pkt_data > pkt_meta', corner case -1, bad access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+ 		    offsetof(struct xdp_md, data_meta)),
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
+ 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
+ 	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
+ 	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+@@ -574,6 +893,42 @@
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
++{
++	"XDP pkt read, pkt_data > pkt_meta', corner case, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_data > pkt_meta', corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
+ {
+ 	"XDP pkt read, pkt_meta' < pkt_data, good access",
+ 	.insns = {
+@@ -593,16 +948,16 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_meta' < pkt_data, bad access 1",
++	"XDP pkt read, pkt_meta' < pkt_data, corner case -1, bad access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+ 		    offsetof(struct xdp_md, data_meta)),
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
+ 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
+ 	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
+ 	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+@@ -630,7 +985,43 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_data < pkt_meta', good access",
++	"XDP pkt read, pkt_meta' < pkt_data, corner case, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_meta' < pkt_data, corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_data < pkt_meta', corner case, good access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+ 		    offsetof(struct xdp_md, data_meta)),
+@@ -682,6 +1073,41 @@
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
++{
++	"XDP pkt read, pkt_data < pkt_meta', corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
++	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_data < pkt_meta', corner case -1, bad access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.errstr = "R1 offset is outside of the packet",
++	.result = REJECT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
+ {
+ 	"XDP pkt read, pkt_meta' >= pkt_data, good access",
+ 	.insns = {
+@@ -700,15 +1126,15 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_meta' >= pkt_data, bad access 1",
++	"XDP pkt read, pkt_meta' >= pkt_data, corner case -1, bad access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+ 		    offsetof(struct xdp_md, data_meta)),
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
+ 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
+ 	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+@@ -736,7 +1162,41 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_data >= pkt_meta', good access",
++	"XDP pkt read, pkt_meta' >= pkt_data, corner case, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_meta' >= pkt_data, corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_data >= pkt_meta', corner case, good access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+ 		    offsetof(struct xdp_md, data_meta)),
+@@ -791,7 +1251,44 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_meta' <= pkt_data, good access",
++	"XDP pkt read, pkt_data >= pkt_meta', corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
++	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_data >= pkt_meta', corner case -1, bad access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.errstr = "R1 offset is outside of the packet",
++	.result = REJECT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_meta' <= pkt_data, corner case, good access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+ 		    offsetof(struct xdp_md, data_meta)),
+@@ -845,6 +1342,43 @@
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
++{
++	"XDP pkt read, pkt_meta' <= pkt_data, corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
++	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_meta' <= pkt_data, corner case -1, bad access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
++	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.errstr = "R1 offset is outside of the packet",
++	.result = REJECT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
+ {
+ 	"XDP pkt read, pkt_data <= pkt_meta', good access",
+ 	.insns = {
+@@ -863,15 +1397,15 @@
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+-	"XDP pkt read, pkt_data <= pkt_meta', bad access 1",
++	"XDP pkt read, pkt_data <= pkt_meta', corner case -1, bad access",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+ 		    offsetof(struct xdp_md, data_meta)),
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
+ 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
+ 	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+@@ -898,3 +1432,37 @@
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
++{
++	"XDP pkt read, pkt_data <= pkt_meta', corner case, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
++	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
++{
++	"XDP pkt read, pkt_data <= pkt_meta', corner case +1, good access",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct xdp_md, data_meta)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_XDP,
++	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index 6fad54c7ecb4a..a7f53c2a9580a 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -444,24 +444,63 @@ fib_rp_filter_test()
+ 	setup
+ 
+ 	set -e
++	ip netns add ns2
++	ip netns set ns2 auto
++
++	ip -netns ns2 link set dev lo up
++
++	$IP link add name veth1 type veth peer name veth2
++	$IP link set dev veth2 netns ns2
++	$IP address add 192.0.2.1/24 dev veth1
++	ip -netns ns2 address add 192.0.2.1/24 dev veth2
++	$IP link set dev veth1 up
++	ip -netns ns2 link set dev veth2 up
++
+ 	$IP link set dev lo address 52:54:00:6a:c7:5e
+-	$IP link set dummy0 address 52:54:00:6a:c7:5e
+-	$IP link add dummy1 type dummy
+-	$IP link set dummy1 address 52:54:00:6a:c7:5e
+-	$IP link set dev dummy1 up
++	$IP link set dev veth1 address 52:54:00:6a:c7:5e
++	ip -netns ns2 link set dev lo address 52:54:00:6a:c7:5e
++	ip -netns ns2 link set dev veth2 address 52:54:00:6a:c7:5e
++
++	# 1. (ns2) redirect lo's egress to veth2's egress
++	ip netns exec ns2 tc qdisc add dev lo parent root handle 1: fq_codel
++	ip netns exec ns2 tc filter add dev lo parent 1: protocol arp basic \
++		action mirred egress redirect dev veth2
++	ip netns exec ns2 tc filter add dev lo parent 1: protocol ip basic \
++		action mirred egress redirect dev veth2
++
++	# 2. (ns1) redirect veth1's ingress to lo's ingress
++	$NS_EXEC tc qdisc add dev veth1 ingress
++	$NS_EXEC tc filter add dev veth1 ingress protocol arp basic \
++		action mirred ingress redirect dev lo
++	$NS_EXEC tc filter add dev veth1 ingress protocol ip basic \
++		action mirred ingress redirect dev lo
++
++	# 3. (ns1) redirect lo's egress to veth1's egress
++	$NS_EXEC tc qdisc add dev lo parent root handle 1: fq_codel
++	$NS_EXEC tc filter add dev lo parent 1: protocol arp basic \
++		action mirred egress redirect dev veth1
++	$NS_EXEC tc filter add dev lo parent 1: protocol ip basic \
++		action mirred egress redirect dev veth1
++
++	# 4. (ns2) redirect veth2's ingress to lo's ingress
++	ip netns exec ns2 tc qdisc add dev veth2 ingress
++	ip netns exec ns2 tc filter add dev veth2 ingress protocol arp basic \
++		action mirred ingress redirect dev lo
++	ip netns exec ns2 tc filter add dev veth2 ingress protocol ip basic \
++		action mirred ingress redirect dev lo
++
+ 	$NS_EXEC sysctl -qw net.ipv4.conf.all.rp_filter=1
+ 	$NS_EXEC sysctl -qw net.ipv4.conf.all.accept_local=1
+ 	$NS_EXEC sysctl -qw net.ipv4.conf.all.route_localnet=1
+-
+-	$NS_EXEC tc qd add dev dummy1 parent root handle 1: fq_codel
+-	$NS_EXEC tc filter add dev dummy1 parent 1: protocol arp basic action mirred egress redirect dev lo
+-	$NS_EXEC tc filter add dev dummy1 parent 1: protocol ip basic action mirred egress redirect dev lo
++	ip netns exec ns2 sysctl -qw net.ipv4.conf.all.rp_filter=1
++	ip netns exec ns2 sysctl -qw net.ipv4.conf.all.accept_local=1
++	ip netns exec ns2 sysctl -qw net.ipv4.conf.all.route_localnet=1
+ 	set +e
+ 
+-	run_cmd "ip netns exec ns1 ping -I dummy1 -w1 -c1 198.51.100.1"
++	run_cmd "ip netns exec ns2 ping -w1 -c1 192.0.2.1"
+ 	log_test $? 0 "rp_filter passes local packets"
+ 
+-	run_cmd "ip netns exec ns1 ping -I dummy1 -w1 -c1 127.0.0.1"
++	run_cmd "ip netns exec ns2 ping -w1 -c1 127.0.0.1"
+ 	log_test $? 0 "rp_filter passes loopback packets"
+ 
+ 	cleanup
+diff --git a/tools/testing/selftests/netfilter/Makefile b/tools/testing/selftests/netfilter/Makefile
+index a374e10ef5065..a56cfc4f2f3fa 100644
+--- a/tools/testing/selftests/netfilter/Makefile
++++ b/tools/testing/selftests/netfilter/Makefile
+@@ -4,7 +4,8 @@
+ TEST_PROGS := nft_trans_stress.sh nft_nat.sh bridge_brouter.sh \
+ 	conntrack_icmp_related.sh nft_flowtable.sh ipvs.sh \
+ 	nft_concat_range.sh nft_conntrack_helper.sh \
+-	nft_queue.sh nft_meta.sh
++	nft_queue.sh nft_meta.sh \
++	conntrack_vrf.sh
+ 
+ LDLIBS = -lmnl
+ TEST_GEN_FILES =  nf-queue
+diff --git a/tools/testing/selftests/netfilter/conntrack_vrf.sh b/tools/testing/selftests/netfilter/conntrack_vrf.sh
+new file mode 100644
+index 0000000000000..8b5ea92345882
+--- /dev/null
++++ b/tools/testing/selftests/netfilter/conntrack_vrf.sh
+@@ -0,0 +1,241 @@
++#!/bin/sh
++
++# This script demonstrates interaction of conntrack and vrf.
++# The vrf driver calls the netfilter hooks again, with oif/iif
++# pointing at the VRF device.
++#
++# For ingress, this means first iteration has iifname of lower/real
++# device.  In this script, thats veth0.
++# Second iteration is iifname set to vrf device, tvrf in this script.
++#
++# For egress, this is reversed: first iteration has the vrf device,
++# second iteration is done with the lower/real/veth0 device.
++#
++# test_ct_zone_in demonstrates unexpected change of nftables
++# behavior # caused by commit 09e856d54bda5f28 "vrf: Reset skb conntrack
++# connection on VRF rcv"
++#
++# It was possible to assign conntrack zone to a packet (or mark it for
++# `notracking`) in the prerouting chain before conntrack, based on real iif.
++#
++# After the change, the zone assignment is lost and the zone is assigned based
++# on the VRF master interface (in case such a rule exists).
++# assignment is lost. Instead, assignment based on the `iif` matching
++# Thus it is impossible to distinguish packets based on the original
++# interface.
++#
++# test_masquerade_vrf and test_masquerade_veth0 demonstrate the problem
++# that was supposed to be fixed by the commit mentioned above to make sure
++# that any fix to test case 1 won't break masquerade again.
++
++ksft_skip=4
++
++IP0=172.30.30.1
++IP1=172.30.30.2
++PFXL=30
++ret=0
++
++sfx=$(mktemp -u "XXXXXXXX")
++ns0="ns0-$sfx"
++ns1="ns1-$sfx"
++
++cleanup()
++{
++	ip netns pids $ns0 | xargs kill 2>/dev/null
++	ip netns pids $ns1 | xargs kill 2>/dev/null
++
++	ip netns del $ns0 $ns1
++}
++
++nft --version > /dev/null 2>&1
++if [ $? -ne 0 ];then
++	echo "SKIP: Could not run test without nft tool"
++	exit $ksft_skip
++fi
++
++ip -Version > /dev/null 2>&1
++if [ $? -ne 0 ];then
++	echo "SKIP: Could not run test without ip tool"
++	exit $ksft_skip
++fi
++
++ip netns add "$ns0"
++if [ $? -ne 0 ];then
++	echo "SKIP: Could not create net namespace $ns0"
++	exit $ksft_skip
++fi
++ip netns add "$ns1"
++
++trap cleanup EXIT
++
++ip netns exec $ns0 sysctl -q -w net.ipv4.conf.default.rp_filter=0
++ip netns exec $ns0 sysctl -q -w net.ipv4.conf.all.rp_filter=0
++ip netns exec $ns0 sysctl -q -w net.ipv4.conf.all.rp_filter=0
++
++ip link add veth0 netns "$ns0" type veth peer name veth0 netns "$ns1" > /dev/null 2>&1
++if [ $? -ne 0 ];then
++	echo "SKIP: Could not add veth device"
++	exit $ksft_skip
++fi
++
++ip -net $ns0 li add tvrf type vrf table 9876
++if [ $? -ne 0 ];then
++	echo "SKIP: Could not add vrf device"
++	exit $ksft_skip
++fi
++
++ip -net $ns0 li set lo up
++
++ip -net $ns0 li set veth0 master tvrf
++ip -net $ns0 li set tvrf up
++ip -net $ns0 li set veth0 up
++ip -net $ns1 li set veth0 up
++
++ip -net $ns0 addr add $IP0/$PFXL dev veth0
++ip -net $ns1 addr add $IP1/$PFXL dev veth0
++
++ip netns exec $ns1 iperf3 -s > /dev/null 2>&1&
++if [ $? -ne 0 ];then
++	echo "SKIP: Could not start iperf3"
++	exit $ksft_skip
++fi
++
++# test vrf ingress handling.
++# The incoming connection should be placed in conntrack zone 1,
++# as decided by the first iteration of the ruleset.
++test_ct_zone_in()
++{
++ip netns exec $ns0 nft -f - <<EOF
++table testct {
++	chain rawpre {
++		type filter hook prerouting priority raw;
++
++		iif { veth0, tvrf } counter meta nftrace set 1
++		iif veth0 counter ct zone set 1 counter return
++		iif tvrf counter ct zone set 2 counter return
++		ip protocol icmp counter
++		notrack counter
++	}
++
++	chain rawout {
++		type filter hook output priority raw;
++
++		oif veth0 counter ct zone set 1 counter return
++		oif tvrf counter ct zone set 2 counter return
++		notrack counter
++	}
++}
++EOF
++	ip netns exec $ns1 ping -W 1 -c 1 -I veth0 $IP0 > /dev/null
++
++	# should be in zone 1, not zone 2
++	count=$(ip netns exec $ns0 conntrack -L -s $IP1 -d $IP0 -p icmp --zone 1 2>/dev/null | wc -l)
++	if [ $count -eq 1 ]; then
++		echo "PASS: entry found in conntrack zone 1"
++	else
++		echo "FAIL: entry not found in conntrack zone 1"
++		count=$(ip netns exec $ns0 conntrack -L -s $IP1 -d $IP0 -p icmp --zone 2 2> /dev/null | wc -l)
++		if [ $count -eq 1 ]; then
++			echo "FAIL: entry found in zone 2 instead"
++		else
++			echo "FAIL: entry not in zone 1 or 2, dumping table"
++			ip netns exec $ns0 conntrack -L
++			ip netns exec $ns0 nft list ruleset
++		fi
++	fi
++}
++
++# add masq rule that gets evaluated w. outif set to vrf device.
++# This tests the first iteration of the packet through conntrack,
++# oifname is the vrf device.
++test_masquerade_vrf()
++{
++	local qdisc=$1
++
++	if [ "$qdisc" != "default" ]; then
++		tc -net $ns0 qdisc add dev tvrf root $qdisc
++	fi
++
++	ip netns exec $ns0 conntrack -F 2>/dev/null
++
++ip netns exec $ns0 nft -f - <<EOF
++flush ruleset
++table ip nat {
++	chain rawout {
++		type filter hook output priority raw;
++
++		oif tvrf ct state untracked counter
++	}
++	chain postrouting2 {
++		type filter hook postrouting priority mangle;
++
++		oif tvrf ct state untracked counter
++	}
++	chain postrouting {
++		type nat hook postrouting priority 0;
++		# NB: masquerade should always be combined with 'oif(name) bla',
++		# lack of this is intentional here, we want to exercise double-snat.
++		ip saddr 172.30.30.0/30 counter masquerade random
++	}
++}
++EOF
++	ip netns exec $ns0 ip vrf exec tvrf iperf3 -t 1 -c $IP1 >/dev/null
++	if [ $? -ne 0 ]; then
++		echo "FAIL: iperf3 connect failure with masquerade + sport rewrite on vrf device"
++		ret=1
++		return
++	fi
++
++	# must also check that nat table was evaluated on second (lower device) iteration.
++	ip netns exec $ns0 nft list table ip nat |grep -q 'counter packets 2' &&
++	ip netns exec $ns0 nft list table ip nat |grep -q 'untracked counter packets [1-9]'
++	if [ $? -eq 0 ]; then
++		echo "PASS: iperf3 connect with masquerade + sport rewrite on vrf device ($qdisc qdisc)"
++	else
++		echo "FAIL: vrf rules have unexpected counter value"
++		ret=1
++	fi
++
++	if [ "$qdisc" != "default" ]; then
++		tc -net $ns0 qdisc del dev tvrf root
++	fi
++}
++
++# add masq rule that gets evaluated w. outif set to veth device.
++# This tests the 2nd iteration of the packet through conntrack,
++# oifname is the lower device (veth0 in this case).
++test_masquerade_veth()
++{
++	ip netns exec $ns0 conntrack -F 2>/dev/null
++ip netns exec $ns0 nft -f - <<EOF
++flush ruleset
++table ip nat {
++	chain postrouting {
++		type nat hook postrouting priority 0;
++		meta oif veth0 ip saddr 172.30.30.0/30 counter masquerade random
++	}
++}
++EOF
++	ip netns exec $ns0 ip vrf exec tvrf iperf3 -t 1 -c $IP1 > /dev/null
++	if [ $? -ne 0 ]; then
++		echo "FAIL: iperf3 connect failure with masquerade + sport rewrite on veth device"
++		ret=1
++		return
++	fi
++
++	# must also check that nat table was evaluated on second (lower device) iteration.
++	ip netns exec $ns0 nft list table ip nat |grep -q 'counter packets 2'
++	if [ $? -eq 0 ]; then
++		echo "PASS: iperf3 connect with masquerade + sport rewrite on veth device"
++	else
++		echo "FAIL: vrf masq rule has unexpected counter value"
++		ret=1
++	fi
++}
++
++test_ct_zone_in
++test_masquerade_vrf "default"
++test_masquerade_vrf "pfifo"
++test_masquerade_veth
++
++exit $ret


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-12-14 12:51 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-12-14 12:51 UTC (permalink / raw
  To: gentoo-commits

commit:     3db6d426b361f4832dd25eee71351bd9a3ac6da1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Dec 14 12:51:09 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Dec 14 12:51:09 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3db6d426

Remove redundant patch

Removed:
2910_fix-gcc-detection-method.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                         |  4 ---
 2910_fix-gcc-detection-method.patch | 54 -------------------------------------
 2 files changed, 58 deletions(-)

diff --git a/0000_README b/0000_README
index 9b6b1174..0052fe2c 100644
--- a/0000_README
+++ b/0000_README
@@ -399,10 +399,6 @@ Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
 
-Patch:  2910_fix-gcc-detection-method.patch
-From:   https://bugs.gentoo.org/814200
-Desc:   Fix how gcc version is detected to make visible CONFIG_GCC_PLUGINS. Thanks to Kerin Millar.
-
 Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL

diff --git a/2910_fix-gcc-detection-method.patch b/2910_fix-gcc-detection-method.patch
deleted file mode 100644
index 470f3afa..00000000
--- a/2910_fix-gcc-detection-method.patch
+++ /dev/null
@@ -1,54 +0,0 @@
-diff --git a/scripts/gcc-plugin.sh b/scripts/gcc-plugin.sh
-deleted file mode 100755
-index b79fd0bea838..000000000000
---- a/scripts/gcc-plugin.sh
-+++ /dev/null
-@@ -1,19 +0,0 @@
--#!/bin/sh
--# SPDX-License-Identifier: GPL-2.0
--
--set -e
--
--srctree=$(dirname "$0")
--
--gccplugins_dir=$($* -print-file-name=plugin)
--
--# we need a c++ compiler that supports the designated initializer GNU extension
--$HOSTCC -c -x c++ -std=gnu++98 - -fsyntax-only -I $srctree/gcc-plugins -I $gccplugins_dir/include 2>/dev/null <<EOF
--#include "gcc-common.h"
--class test {
--public:
--	int test;
--} test = {
--	.test = 1
--};
--EOF
-diff --git a/scripts/gcc-plugins/Kconfig b/scripts/gcc-plugins/Kconfig
-index ae19fb0243b9..ab9eb4cbe33a 100644
---- a/scripts/gcc-plugins/Kconfig
-+++ b/scripts/gcc-plugins/Kconfig
-@@ -9,7 +9,7 @@ menuconfig GCC_PLUGINS
- 	bool "GCC plugins"
- 	depends on HAVE_GCC_PLUGINS
- 	depends on CC_IS_GCC
--	depends on $(success,$(srctree)/scripts/gcc-plugin.sh $(CC))
-+	depends on $(success,test -e $(shell,$(CC) -print-file-name=plugin)/include/plugin-version.h)
- 	default y
- 	help
- 	  GCC plugins are loadable modules that provide extra features to the
-diff --git a/scripts/gcc-plugins/Makefile b/scripts/gcc-plugins/Makefile
-index d66949bfeba4..b5487cce69e8 100644
---- a/scripts/gcc-plugins/Makefile
-+++ b/scripts/gcc-plugins/Makefile
-@@ -22,9 +22,9 @@ always-y += $(GCC_PLUGIN)
- GCC_PLUGINS_DIR = $(shell $(CC) -print-file-name=plugin)
- 
- plugin_cxxflags	= -Wp,-MMD,$(depfile) $(KBUILD_HOSTCXXFLAGS) -fPIC \
--		   -I $(GCC_PLUGINS_DIR)/include -I $(obj) -std=gnu++98 \
-+		   -I $(GCC_PLUGINS_DIR)/include -I $(obj) -std=gnu++11 \
- 		   -fno-rtti -fno-exceptions -fasynchronous-unwind-tables \
--		   -ggdb -Wno-narrowing -Wno-unused-variable -Wno-c++11-compat \
-+		   -ggdb -Wno-narrowing -Wno-unused-variable \
- 		   -Wno-format-diag
- 
- plugin_ldflags	= -shared


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-12-16 16:04 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-12-16 16:04 UTC (permalink / raw
  To: gentoo-commits

commit:     480d15c730822be6b4d0cbf7d11f5794c6ca6659
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 16 16:04:31 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Dec 16 16:04:31 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=480d15c7

Linux patch 5.10.86

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |  4 ++++
 1085_linux-5.10.86.patch | 16 ++++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/0000_README b/0000_README
index 0052fe2c..333a9c3a 100644
--- a/0000_README
+++ b/0000_README
@@ -383,6 +383,10 @@ Patch:  1084_linux-5.10.85.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.85
 
+Patch:  1085_linux-5.10.86.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.86
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1085_linux-5.10.86.patch b/1085_linux-5.10.86.patch
new file mode 100644
index 00000000..911d13d7
--- /dev/null
+++ b/1085_linux-5.10.86.patch
@@ -0,0 +1,16 @@
+diff --git a/Makefile b/Makefile
+index 8de80228df3f6..5c1a33f1ecad9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 85
++SUBLEVEL = 86
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/tools/testing/selftests/netfilter/conntrack_vrf.sh b/tools/testing/selftests/netfilter/conntrack_vrf.sh
+old mode 100644
+new mode 100755


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-12-17 11:55 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-12-17 11:55 UTC (permalink / raw
  To: gentoo-commits

commit:     960e1ee2758bcfea460e98357888176a0a157f60
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 17 11:54:48 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Dec 17 11:54:48 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=960e1ee2

Linux patch 5.10.87

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1086_linux-5.10.87.patch | 974 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 978 insertions(+)

diff --git a/0000_README b/0000_README
index 333a9c3a..b743d7ac 100644
--- a/0000_README
+++ b/0000_README
@@ -387,6 +387,10 @@ Patch:  1085_linux-5.10.86.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.86
 
+Patch:  1086_linux-5.10.87.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.87
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1086_linux-5.10.87.patch b/1086_linux-5.10.87.patch
new file mode 100644
index 00000000..6b219b9f
--- /dev/null
+++ b/1086_linux-5.10.87.patch
@@ -0,0 +1,974 @@
+diff --git a/Makefile b/Makefile
+index 5c1a33f1ecad9..d627f4ae5af56 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 86
++SUBLEVEL = 87
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
+index 75f3ab531bdf4..15af4dd524262 100644
+--- a/arch/arm/mm/init.c
++++ b/arch/arm/mm/init.c
+@@ -125,11 +125,22 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
+ int pfn_valid(unsigned long pfn)
+ {
+ 	phys_addr_t addr = __pfn_to_phys(pfn);
++	unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;
+ 
+ 	if (__phys_to_pfn(addr) != pfn)
+ 		return 0;
+ 
+-	return memblock_is_map_memory(addr);
++	/*
++	 * If address less than pageblock_size bytes away from a present
++	 * memory chunk there still will be a memory map entry for it
++	 * because we round freed memory map to the pageblock boundaries.
++	 */
++	if (memblock_overlaps_region(&memblock.memory,
++				     ALIGN_DOWN(addr, pageblock_size),
++				     pageblock_size))
++		return 1;
++
++	return 0;
+ }
+ EXPORT_SYMBOL(pfn_valid);
+ #endif
+@@ -313,14 +324,14 @@ static void __init free_unused_memmap(void)
+ 		 */
+ 		start = min(start,
+ 				 ALIGN(prev_end, PAGES_PER_SECTION));
+-#else
++#endif
+ 		/*
+-		 * Align down here since the VM subsystem insists that the
+-		 * memmap entries are valid from the bank start aligned to
+-		 * MAX_ORDER_NR_PAGES.
++		 * Align down here since many operations in VM subsystem
++		 * presume that there are no holes in the memory map inside
++		 * a pageblock
+ 		 */
+-		start = round_down(start, MAX_ORDER_NR_PAGES);
+-#endif
++		start = round_down(start, pageblock_nr_pages);
++
+ 		/*
+ 		 * If we had a previous bank, and there is a space
+ 		 * between the current bank and the previous, free it.
+@@ -329,17 +340,19 @@ static void __init free_unused_memmap(void)
+ 			free_memmap(prev_end, start);
+ 
+ 		/*
+-		 * Align up here since the VM subsystem insists that the
+-		 * memmap entries are valid from the bank end aligned to
+-		 * MAX_ORDER_NR_PAGES.
++		 * Align up here since many operations in VM subsystem
++		 * presume that there are no holes in the memory map inside
++		 * a pageblock
+ 		 */
+-		prev_end = ALIGN(end, MAX_ORDER_NR_PAGES);
++		prev_end = ALIGN(end, pageblock_nr_pages);
+ 	}
+ 
+ #ifdef CONFIG_SPARSEMEM
+-	if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION))
++	if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION)) {
++		prev_end = ALIGN(end, pageblock_nr_pages);
+ 		free_memmap(prev_end,
+ 			    ALIGN(prev_end, PAGES_PER_SECTION));
++	}
+ #endif
+ }
+ 
+diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
+index 000e8210000bd..80fb5a4a5c050 100644
+--- a/arch/arm/mm/ioremap.c
++++ b/arch/arm/mm/ioremap.c
+@@ -27,6 +27,7 @@
+ #include <linux/vmalloc.h>
+ #include <linux/io.h>
+ #include <linux/sizes.h>
++#include <linux/memblock.h>
+ 
+ #include <asm/cp15.h>
+ #include <asm/cputype.h>
+@@ -284,7 +285,8 @@ static void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn,
+ 	 * Don't allow RAM to be mapped with mismatched attributes - this
+ 	 * causes problems with ARMv6+
+ 	 */
+-	if (WARN_ON(pfn_valid(pfn) && mtype != MT_MEMORY_RW))
++	if (WARN_ON(memblock_is_map_memory(PFN_PHYS(pfn)) &&
++		    mtype != MT_MEMORY_RW))
+ 		return NULL;
+ 
+ 	area = get_vm_area_caller(size, VM_IOREMAP, caller);
+diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
+index 1f875a8f20c47..8116ae1e636a2 100644
+--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
++++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
+@@ -406,6 +406,12 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
+  */
+ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+ {
++	/*
++	 * Save PSTATE early so that we can evaluate the vcpu mode
++	 * early on.
++	 */
++	vcpu->arch.ctxt.regs.pstate = read_sysreg_el2(SYS_SPSR);
++
+ 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
+ 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
+ 
+diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+index cce43bfe158fa..0eacfb9d17b02 100644
+--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
++++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+@@ -54,7 +54,12 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
+ static inline void __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt)
+ {
+ 	ctxt->regs.pc			= read_sysreg_el2(SYS_ELR);
+-	ctxt->regs.pstate		= read_sysreg_el2(SYS_SPSR);
++	/*
++	 * Guest PSTATE gets saved at guest fixup time in all
++	 * cases. We still need to handle the nVHE host side here.
++	 */
++	if (!has_vhe() && ctxt->__hyp_running_vcpu)
++		ctxt->regs.pstate	= read_sysreg_el2(SYS_SPSR);
+ 
+ 	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
+ 		ctxt_sys_reg(ctxt, DISR_EL1) = read_sysreg_s(SYS_VDISR_EL2);
+diff --git a/arch/s390/lib/test_unwind.c b/arch/s390/lib/test_unwind.c
+index 6bad84c372dcb..b0b67e6d1f6e2 100644
+--- a/arch/s390/lib/test_unwind.c
++++ b/arch/s390/lib/test_unwind.c
+@@ -171,10 +171,11 @@ static noinline int unwindme_func4(struct unwindme *u)
+ 		}
+ 
+ 		/*
+-		 * trigger specification exception
++		 * Trigger operation exception; use insn notation to bypass
++		 * llvm's integrated assembler sanity checks.
+ 		 */
+ 		asm volatile(
+-			"	mvcl	%%r1,%%r1\n"
++			"	.insn	e,0x0000\n"	/* illegal opcode */
+ 			"0:	nopr	%%r7\n"
+ 			EX_TABLE(0b, 0b)
+ 			:);
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index bb39f493447cf..328f37e4fd3a7 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -1641,11 +1641,13 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *current_vcpu, u64 ingpa, u64 outgpa,
+ 
+ 		all_cpus = send_ipi_ex.vp_set.format == HV_GENERIC_SET_ALL;
+ 
++		if (all_cpus)
++			goto check_and_send_ipi;
++
+ 		if (!sparse_banks_len)
+ 			goto ret_success;
+ 
+-		if (!all_cpus &&
+-		    kvm_read_guest(kvm,
++		if (kvm_read_guest(kvm,
+ 				   ingpa + offsetof(struct hv_send_ipi_ex,
+ 						    vp_set.bank_contents),
+ 				   sparse_banks,
+@@ -1653,6 +1655,7 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *current_vcpu, u64 ingpa, u64 outgpa,
+ 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
+ 	}
+ 
++check_and_send_ipi:
+ 	if ((vector < HV_IPI_LOW_VECTOR) || (vector > HV_IPI_HIGH_VECTOR))
+ 		return HV_STATUS_INVALID_HYPERCALL_INPUT;
+ 
+diff --git a/drivers/char/agp/parisc-agp.c b/drivers/char/agp/parisc-agp.c
+index ed3c4c42fc23b..d68d05d5d3838 100644
+--- a/drivers/char/agp/parisc-agp.c
++++ b/drivers/char/agp/parisc-agp.c
+@@ -281,7 +281,7 @@ agp_ioc_init(void __iomem *ioc_regs)
+         return 0;
+ }
+ 
+-static int
++static int __init
+ lba_find_capability(int cap)
+ {
+ 	struct _parisc_agp_info *info = &parisc_agp_info;
+@@ -366,7 +366,7 @@ fail:
+ 	return error;
+ }
+ 
+-static int
++static int __init
+ find_quicksilver(struct device *dev, void *data)
+ {
+ 	struct parisc_device **lba = data;
+@@ -378,7 +378,7 @@ find_quicksilver(struct device *dev, void *data)
+ 	return 0;
+ }
+ 
+-static int
++static int __init
+ parisc_agp_init(void)
+ {
+ 	extern struct sba_device *sba_list;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+index e00a30e7d2529..04c20ce6e94df 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+@@ -226,6 +226,14 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
+ 			ret = -EINVAL;
+ 			goto cleanup;
+ 		}
++
++		if ((aconn->base.connector_type != DRM_MODE_CONNECTOR_DisplayPort) &&
++				(aconn->base.connector_type != DRM_MODE_CONNECTOR_eDP)) {
++			DRM_DEBUG_DRIVER("No DP connector available for CRC source\n");
++			ret = -EINVAL;
++			goto cleanup;
++		}
++
+ 	}
+ 
+ 	if (amdgpu_dm_crtc_configure_crc_source(crtc, crtc_state, source)) {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 59d48cf819ea8..5f4cdb05c4db9 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1698,6 +1698,10 @@ bool dc_is_stream_unchanged(
+ 	if (old_stream->ignore_msa_timing_param != stream->ignore_msa_timing_param)
+ 		return false;
+ 
++	// Only Have Audio left to check whether it is same or not. This is a corner case for Tiled sinks
++	if (old_stream->audio_info.mode_count != stream->audio_info.mode_count)
++		return false;
++
+ 	return true;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 96b5dcf8e4540..64454a63bbacf 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -1692,6 +1692,8 @@ static int dsi_host_parse_lane_data(struct msm_dsi_host *msm_host,
+ 	if (!prop) {
+ 		DRM_DEV_DEBUG(dev,
+ 			"failed to find data lane mapping, using default\n");
++		/* Set the number of date lanes to 4 by default. */
++		msm_host->num_data_lanes = 4;
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/hwmon/dell-smm-hwmon.c b/drivers/hwmon/dell-smm-hwmon.c
+index 63b74e781c5d9..87f401100466d 100644
+--- a/drivers/hwmon/dell-smm-hwmon.c
++++ b/drivers/hwmon/dell-smm-hwmon.c
+@@ -603,15 +603,18 @@ static const struct proc_ops i8k_proc_ops = {
+ 	.proc_ioctl	= i8k_ioctl,
+ };
+ 
++static struct proc_dir_entry *entry;
++
+ static void __init i8k_init_procfs(void)
+ {
+ 	/* Register the proc entry */
+-	proc_create("i8k", 0, NULL, &i8k_proc_ops);
++	entry = proc_create("i8k", 0, NULL, &i8k_proc_ops);
+ }
+ 
+ static void __exit i8k_exit_procfs(void)
+ {
+-	remove_proc_entry("i8k", NULL);
++	if (entry)
++		remove_proc_entry("i8k", NULL);
+ }
+ 
+ #else
+diff --git a/drivers/i2c/busses/i2c-rk3x.c b/drivers/i2c/busses/i2c-rk3x.c
+index 819ab4ee517e1..02ddb237f69af 100644
+--- a/drivers/i2c/busses/i2c-rk3x.c
++++ b/drivers/i2c/busses/i2c-rk3x.c
+@@ -423,8 +423,8 @@ static void rk3x_i2c_handle_read(struct rk3x_i2c *i2c, unsigned int ipd)
+ 	if (!(ipd & REG_INT_MBRF))
+ 		return;
+ 
+-	/* ack interrupt */
+-	i2c_writel(i2c, REG_INT_MBRF, REG_IPD);
++	/* ack interrupt (read also produces a spurious START flag, clear it too) */
++	i2c_writel(i2c, REG_INT_MBRF | REG_INT_START, REG_IPD);
+ 
+ 	/* Can only handle a maximum of 32 bytes at a time */
+ 	if (len > 32)
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+index 3616b77caa0ad..01275c376721c 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+@@ -663,7 +663,7 @@ void __init mlx4_en_init_ptys2ethtool_map(void)
+ 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_1000BASE_T, SPEED_1000,
+ 				       ETHTOOL_LINK_MODE_1000baseT_Full_BIT);
+ 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_1000BASE_CX_SGMII, SPEED_1000,
+-				       ETHTOOL_LINK_MODE_1000baseKX_Full_BIT);
++				       ETHTOOL_LINK_MODE_1000baseX_Full_BIT);
+ 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_1000BASE_KX, SPEED_1000,
+ 				       ETHTOOL_LINK_MODE_1000baseKX_Full_BIT);
+ 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_10GBASE_T, SPEED_10000,
+@@ -675,9 +675,9 @@ void __init mlx4_en_init_ptys2ethtool_map(void)
+ 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_10GBASE_KR, SPEED_10000,
+ 				       ETHTOOL_LINK_MODE_10000baseKR_Full_BIT);
+ 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_10GBASE_CR, SPEED_10000,
+-				       ETHTOOL_LINK_MODE_10000baseKR_Full_BIT);
++				       ETHTOOL_LINK_MODE_10000baseCR_Full_BIT);
+ 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_10GBASE_SR, SPEED_10000,
+-				       ETHTOOL_LINK_MODE_10000baseKR_Full_BIT);
++				       ETHTOOL_LINK_MODE_10000baseSR_Full_BIT);
+ 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_20GBASE_KR2, SPEED_20000,
+ 				       ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT,
+ 				       ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT);
+diff --git a/drivers/staging/most/dim2/dim2.c b/drivers/staging/most/dim2/dim2.c
+index 8c2f384233aab..2fd6886f7728c 100644
+--- a/drivers/staging/most/dim2/dim2.c
++++ b/drivers/staging/most/dim2/dim2.c
+@@ -723,6 +723,23 @@ static int get_dim2_clk_speed(const char *clock_speed, u8 *val)
+ 	return -EINVAL;
+ }
+ 
++static void dim2_release(struct device *d)
++{
++	struct dim2_hdm *dev = container_of(d, struct dim2_hdm, dev);
++	unsigned long flags;
++
++	kthread_stop(dev->netinfo_task);
++
++	spin_lock_irqsave(&dim_lock, flags);
++	dim_shutdown();
++	spin_unlock_irqrestore(&dim_lock, flags);
++
++	if (dev->disable_platform)
++		dev->disable_platform(to_platform_device(d->parent));
++
++	kfree(dev);
++}
++
+ /*
+  * dim2_probe - dim2 probe handler
+  * @pdev: platform device structure
+@@ -743,7 +760,7 @@ static int dim2_probe(struct platform_device *pdev)
+ 
+ 	enum { MLB_INT_IDX, AHB0_INT_IDX };
+ 
+-	dev = devm_kzalloc(&pdev->dev, sizeof(*dev), GFP_KERNEL);
++	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ 	if (!dev)
+ 		return -ENOMEM;
+ 
+@@ -755,25 +772,27 @@ static int dim2_probe(struct platform_device *pdev)
+ 				      "microchip,clock-speed", &clock_speed);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "missing dt property clock-speed\n");
+-		return ret;
++		goto err_free_dev;
+ 	}
+ 
+ 	ret = get_dim2_clk_speed(clock_speed, &dev->clk_speed);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "bad dt property clock-speed\n");
+-		return ret;
++		goto err_free_dev;
+ 	}
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	dev->io_base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(dev->io_base))
+-		return PTR_ERR(dev->io_base);
++	if (IS_ERR(dev->io_base)) {
++		ret = PTR_ERR(dev->io_base);
++		goto err_free_dev;
++	}
+ 
+ 	of_id = of_match_node(dim2_of_match, pdev->dev.of_node);
+ 	pdata = of_id->data;
+ 	ret = pdata && pdata->enable ? pdata->enable(pdev) : 0;
+ 	if (ret)
+-		return ret;
++		goto err_free_dev;
+ 
+ 	dev->disable_platform = pdata ? pdata->disable : NULL;
+ 
+@@ -864,24 +883,19 @@ static int dim2_probe(struct platform_device *pdev)
+ 	dev->most_iface.request_netinfo = request_netinfo;
+ 	dev->most_iface.driver_dev = &pdev->dev;
+ 	dev->most_iface.dev = &dev->dev;
+-	dev->dev.init_name = "dim2_state";
++	dev->dev.init_name = dev->name;
+ 	dev->dev.parent = &pdev->dev;
++	dev->dev.release = dim2_release;
+ 
+-	ret = most_register_interface(&dev->most_iface);
+-	if (ret) {
+-		dev_err(&pdev->dev, "failed to register MOST interface\n");
+-		goto err_stop_thread;
+-	}
+-
+-	return 0;
++	return most_register_interface(&dev->most_iface);
+ 
+-err_stop_thread:
+-	kthread_stop(dev->netinfo_task);
+ err_shutdown_dim:
+ 	dim_shutdown();
+ err_disable_platform:
+ 	if (dev->disable_platform)
+ 		dev->disable_platform(pdev);
++err_free_dev:
++	kfree(dev);
+ 
+ 	return ret;
+ }
+@@ -895,17 +909,8 @@ err_disable_platform:
+ static int dim2_remove(struct platform_device *pdev)
+ {
+ 	struct dim2_hdm *dev = platform_get_drvdata(pdev);
+-	unsigned long flags;
+ 
+ 	most_deregister_interface(&dev->most_iface);
+-	kthread_stop(dev->netinfo_task);
+-
+-	spin_lock_irqsave(&dim_lock, flags);
+-	dim_shutdown();
+-	spin_unlock_irqrestore(&dim_lock, flags);
+-
+-	if (dev->disable_platform)
+-		dev->disable_platform(pdev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index a70911a227a84..b9f8add284e33 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2559,6 +2559,7 @@ OF_EARLYCON_DECLARE(lpuart, "fsl,vf610-lpuart", lpuart_early_console_setup);
+ OF_EARLYCON_DECLARE(lpuart32, "fsl,ls1021a-lpuart", lpuart32_early_console_setup);
+ OF_EARLYCON_DECLARE(lpuart32, "fsl,ls1028a-lpuart", ls1028a_early_console_setup);
+ OF_EARLYCON_DECLARE(lpuart32, "fsl,imx7ulp-lpuart", lpuart32_imx_early_console_setup);
++OF_EARLYCON_DECLARE(lpuart32, "fsl,imx8qxp-lpuart", lpuart32_imx_early_console_setup);
+ EARLYCON_DECLARE(lpuart, lpuart_early_console_setup);
+ EARLYCON_DECLARE(lpuart32, lpuart32_early_console_setup);
+ 
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 2e300176cb889..e7667497b6b77 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -791,11 +791,19 @@ static int fuse_symlink(struct inode *dir, struct dentry *entry,
+ 	return create_new_entry(fm, &args, dir, entry, S_IFLNK);
+ }
+ 
++void fuse_flush_time_update(struct inode *inode)
++{
++	int err = sync_inode_metadata(inode, 1);
++
++	mapping_set_error(inode->i_mapping, err);
++}
++
+ void fuse_update_ctime(struct inode *inode)
+ {
+ 	if (!IS_NOCMTIME(inode)) {
+ 		inode->i_ctime = current_time(inode);
+ 		mark_inode_dirty_sync(inode);
++		fuse_flush_time_update(inode);
+ 	}
+ }
+ 
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index c9606f2d2864d..4dd70b53df81a 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1849,6 +1849,17 @@ int fuse_write_inode(struct inode *inode, struct writeback_control *wbc)
+ 	struct fuse_file *ff;
+ 	int err;
+ 
++	/*
++	 * Inode is always written before the last reference is dropped and
++	 * hence this should not be reached from reclaim.
++	 *
++	 * Writing back the inode from reclaim can deadlock if the request
++	 * processing itself needs an allocation.  Allocations triggering
++	 * reclaim while serving a request can't be prevented, because it can
++	 * involve any number of unrelated userspace processes.
++	 */
++	WARN_ON(wbc->for_reclaim);
++
+ 	ff = __fuse_write_file_get(fc, fi);
+ 	err = fuse_flush_times(inode, ff);
+ 	if (ff)
+@@ -3338,6 +3349,8 @@ out:
+ 	if (lock_inode)
+ 		inode_unlock(inode);
+ 
++	fuse_flush_time_update(inode);
++
+ 	return err;
+ }
+ 
+@@ -3447,6 +3460,8 @@ out:
+ 	inode_unlock(inode_out);
+ 	file_accessed(file_in);
+ 
++	fuse_flush_time_update(inode_out);
++
+ 	return err;
+ }
+ 
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index ff94da6840176..b159d8b5e8937 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -1113,6 +1113,7 @@ int fuse_allow_current_process(struct fuse_conn *fc);
+ 
+ u64 fuse_lock_owner_id(struct fuse_conn *fc, fl_owner_t id);
+ 
++void fuse_flush_time_update(struct inode *inode);
+ void fuse_update_ctime(struct inode *inode);
+ 
+ int fuse_update_attributes(struct inode *inode, struct file *file);
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 053c56af3b6f3..5e484676343eb 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -119,6 +119,9 @@ static void fuse_evict_inode(struct inode *inode)
+ {
+ 	struct fuse_inode *fi = get_fuse_inode(inode);
+ 
++	/* Will write inode on close/munmap and in all other dirtiers */
++	WARN_ON(inode->i_state & I_DIRTY_INODE);
++
+ 	truncate_inode_pages_final(&inode->i_data);
+ 	clear_inode(inode);
+ 	if (inode->i_sb->s_flags & SB_ACTIVE) {
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index b5be9659ab590..01149821ded91 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -92,7 +92,7 @@ static struct hlist_head *dev_map_create_hash(unsigned int entries,
+ 	int i;
+ 	struct hlist_head *hash;
+ 
+-	hash = bpf_map_area_alloc(entries * sizeof(*hash), numa_node);
++	hash = bpf_map_area_alloc((u64) entries * sizeof(*hash), numa_node);
+ 	if (hash != NULL)
+ 		for (i = 0; i < entries; i++)
+ 			INIT_HLIST_HEAD(&hash[i]);
+@@ -153,7 +153,7 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
+ 
+ 		spin_lock_init(&dtab->index_lock);
+ 	} else {
+-		dtab->netdev_map = bpf_map_area_alloc(dtab->map.max_entries *
++		dtab->netdev_map = bpf_map_area_alloc((u64) dtab->map.max_entries *
+ 						      sizeof(struct bpf_dtab_netdev *),
+ 						      dtab->map.numa_node);
+ 		if (!dtab->netdev_map)
+diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
+index d63e51dde0d24..51a9d1185033b 100644
+--- a/kernel/trace/tracing_map.c
++++ b/kernel/trace/tracing_map.c
+@@ -15,6 +15,7 @@
+ #include <linux/jhash.h>
+ #include <linux/slab.h>
+ #include <linux/sort.h>
++#include <linux/kmemleak.h>
+ 
+ #include "tracing_map.h"
+ #include "trace.h"
+@@ -307,6 +308,7 @@ static void tracing_map_array_free(struct tracing_map_array *a)
+ 	for (i = 0; i < a->n_pages; i++) {
+ 		if (!a->pages[i])
+ 			break;
++		kmemleak_free(a->pages[i]);
+ 		free_page((unsigned long)a->pages[i]);
+ 	}
+ 
+@@ -342,6 +344,7 @@ static struct tracing_map_array *tracing_map_array_alloc(unsigned int n_elts,
+ 		a->pages[i] = (void *)get_zeroed_page(GFP_KERNEL);
+ 		if (!a->pages[i])
+ 			goto free;
++		kmemleak_alloc(a->pages[i], PAGE_SIZE, 1, GFP_KERNEL);
+ 	}
+  out:
+ 	return a;
+diff --git a/mm/memblock.c b/mm/memblock.c
+index c337df03b6a17..faa4de579b3db 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -182,6 +182,8 @@ bool __init_memblock memblock_overlaps_region(struct memblock_type *type,
+ {
+ 	unsigned long i;
+ 
++	memblock_cap_size(base, &size);
++
+ 	for (i = 0; i < type->cnt; i++)
+ 		if (memblock_addrs_overlap(base, size, type->regions[i].base,
+ 					   type->regions[i].size))
+@@ -1792,7 +1794,6 @@ bool __init_memblock memblock_is_region_memory(phys_addr_t base, phys_addr_t siz
+  */
+ bool __init_memblock memblock_is_region_reserved(phys_addr_t base, phys_addr_t size)
+ {
+-	memblock_cap_size(base, &size);
+ 	return memblock_overlaps_region(&memblock.reserved, base, size);
+ }
+ 
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index ddc899e83313a..4ea5bc65848f2 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -52,7 +52,7 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
+ 	if (err)
+ 		goto free_stab;
+ 
+-	stab->sks = bpf_map_area_alloc(stab->map.max_entries *
++	stab->sks = bpf_map_area_alloc((u64) stab->map.max_entries *
+ 				       sizeof(struct sock *),
+ 				       stab->map.numa_node);
+ 	if (stab->sks)
+diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h
+index d8efec516d868..979dee6bb88c5 100644
+--- a/net/ethtool/netlink.h
++++ b/net/ethtool/netlink.h
+@@ -249,6 +249,9 @@ struct ethnl_reply_data {
+ 
+ static inline int ethnl_ops_begin(struct net_device *dev)
+ {
++	if (dev && dev->reg_state == NETREG_UNREGISTERING)
++		return -ENODEV;
++
+ 	if (dev && dev->ethtool_ops->begin)
+ 		return dev->ethtool_ops->begin(dev);
+ 	else
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 0886267ea81ef..e55af5c078ac0 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1863,6 +1863,11 @@ static int netlink_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 	if (msg->msg_flags & MSG_OOB)
+ 		return -EOPNOTSUPP;
+ 
++	if (len == 0) {
++		pr_warn_once("Zero length message leads to an empty skb\n");
++		return -ENODATA;
++	}
++
+ 	err = scm_send(sock, msg, &scm, true);
+ 	if (err < 0)
+ 		return err;
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index 0767404636c14..78acc4e9ac932 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -636,8 +636,10 @@ static int nfc_genl_dump_devices_done(struct netlink_callback *cb)
+ {
+ 	struct class_dev_iter *iter = (struct class_dev_iter *) cb->args[0];
+ 
+-	nfc_device_iter_exit(iter);
+-	kfree(iter);
++	if (iter) {
++		nfc_device_iter_exit(iter);
++		kfree(iter);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 64115a796af06..3cc936f2cbf8d 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -369,7 +369,10 @@ enum {
+ 					((pci)->device == 0x0c0c) || \
+ 					((pci)->device == 0x0d0c) || \
+ 					((pci)->device == 0x160c) || \
+-					((pci)->device == 0x490d))
++					((pci)->device == 0x490d) || \
++					((pci)->device == 0x4f90) || \
++					((pci)->device == 0x4f91) || \
++					((pci)->device == 0x4f92))
+ 
+ #define IS_BXT(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x5a98)
+ 
+@@ -2540,6 +2543,13 @@ static const struct pci_device_id azx_ids[] = {
+ 	/* DG1 */
+ 	{ PCI_DEVICE(0x8086, 0x490d),
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++	/* DG2 */
++	{ PCI_DEVICE(0x8086, 0x4f90),
++	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++	{ PCI_DEVICE(0x8086, 0x4f91),
++	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++	{ PCI_DEVICE(0x8086, 0x4f92),
++	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+ 	/* Alderlake-S */
+ 	{ PCI_DEVICE(0x8086, 0x7ad0),
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index c65144715af78..fe725f0f09312 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -4362,10 +4362,11 @@ HDA_CODEC_ENTRY(0x8086280f, "Icelake HDMI",	patch_i915_icl_hdmi),
+ HDA_CODEC_ENTRY(0x80862812, "Tigerlake HDMI",	patch_i915_tgl_hdmi),
+ HDA_CODEC_ENTRY(0x80862814, "DG1 HDMI",	patch_i915_tgl_hdmi),
+ HDA_CODEC_ENTRY(0x80862815, "Alderlake HDMI",	patch_i915_tgl_hdmi),
+-HDA_CODEC_ENTRY(0x8086281c, "Alderlake-P HDMI", patch_i915_tgl_hdmi),
+ HDA_CODEC_ENTRY(0x80862816, "Rocketlake HDMI",	patch_i915_tgl_hdmi),
++HDA_CODEC_ENTRY(0x80862819, "DG2 HDMI",	patch_i915_tgl_hdmi),
+ HDA_CODEC_ENTRY(0x8086281a, "Jasperlake HDMI",	patch_i915_icl_hdmi),
+ HDA_CODEC_ENTRY(0x8086281b, "Elkhartlake HDMI",	patch_i915_icl_hdmi),
++HDA_CODEC_ENTRY(0x8086281c, "Alderlake-P HDMI", patch_i915_tgl_hdmi),
+ HDA_CODEC_ENTRY(0x80862880, "CedarTrail HDMI",	patch_generic_hdmi),
+ HDA_CODEC_ENTRY(0x80862882, "Valleyview2 HDMI",	patch_i915_byt_hdmi),
+ HDA_CODEC_ENTRY(0x80862883, "Braswell HDMI",	patch_i915_byt_hdmi),
+diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
+index 5378a14e38368..8f1a99e2fcd7c 100644
+--- a/tools/perf/builtin-inject.c
++++ b/tools/perf/builtin-inject.c
+@@ -752,7 +752,7 @@ static int __cmd_inject(struct perf_inject *inject)
+ 		inject->tool.ordered_events = true;
+ 		inject->tool.ordering_requires_timestamps = true;
+ 		/* Allow space in the header for new attributes */
+-		output_data_offset = 4096;
++		output_data_offset = roundup(8192 + session->header.data_offset, 4096);
+ 		if (inject->strip)
+ 			strip_init(inject);
+ 	}
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index e6029d4c096fb..e4c485f92c028 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -1114,61 +1114,69 @@ out_no_progress:
+ 
+ static bool intel_pt_fup_event(struct intel_pt_decoder *decoder)
+ {
++	enum intel_pt_sample_type type = decoder->state.type;
+ 	bool ret = false;
+ 
++	decoder->state.type &= ~INTEL_PT_BRANCH;
++
+ 	if (decoder->set_fup_tx_flags) {
+ 		decoder->set_fup_tx_flags = false;
+ 		decoder->tx_flags = decoder->fup_tx_flags;
+-		decoder->state.type = INTEL_PT_TRANSACTION;
++		decoder->state.type |= INTEL_PT_TRANSACTION;
+ 		if (decoder->fup_tx_flags & INTEL_PT_ABORT_TX)
+ 			decoder->state.type |= INTEL_PT_BRANCH;
+-		decoder->state.from_ip = decoder->ip;
+-		decoder->state.to_ip = 0;
+ 		decoder->state.flags = decoder->fup_tx_flags;
+-		return true;
++		ret = true;
+ 	}
+ 	if (decoder->set_fup_ptw) {
+ 		decoder->set_fup_ptw = false;
+-		decoder->state.type = INTEL_PT_PTW;
++		decoder->state.type |= INTEL_PT_PTW;
+ 		decoder->state.flags |= INTEL_PT_FUP_IP;
+-		decoder->state.from_ip = decoder->ip;
+-		decoder->state.to_ip = 0;
+ 		decoder->state.ptw_payload = decoder->fup_ptw_payload;
+-		return true;
++		ret = true;
+ 	}
+ 	if (decoder->set_fup_mwait) {
+ 		decoder->set_fup_mwait = false;
+-		decoder->state.type = INTEL_PT_MWAIT_OP;
+-		decoder->state.from_ip = decoder->ip;
+-		decoder->state.to_ip = 0;
++		decoder->state.type |= INTEL_PT_MWAIT_OP;
+ 		decoder->state.mwait_payload = decoder->fup_mwait_payload;
+ 		ret = true;
+ 	}
+ 	if (decoder->set_fup_pwre) {
+ 		decoder->set_fup_pwre = false;
+ 		decoder->state.type |= INTEL_PT_PWR_ENTRY;
+-		decoder->state.type &= ~INTEL_PT_BRANCH;
+-		decoder->state.from_ip = decoder->ip;
+-		decoder->state.to_ip = 0;
+ 		decoder->state.pwre_payload = decoder->fup_pwre_payload;
+ 		ret = true;
+ 	}
+ 	if (decoder->set_fup_exstop) {
+ 		decoder->set_fup_exstop = false;
+ 		decoder->state.type |= INTEL_PT_EX_STOP;
+-		decoder->state.type &= ~INTEL_PT_BRANCH;
+ 		decoder->state.flags |= INTEL_PT_FUP_IP;
+-		decoder->state.from_ip = decoder->ip;
+-		decoder->state.to_ip = 0;
+ 		ret = true;
+ 	}
+ 	if (decoder->set_fup_bep) {
+ 		decoder->set_fup_bep = false;
+ 		decoder->state.type |= INTEL_PT_BLK_ITEMS;
+-		decoder->state.type &= ~INTEL_PT_BRANCH;
++		ret = true;
++	}
++	if (decoder->overflow) {
++		decoder->overflow = false;
++		if (!ret && !decoder->pge) {
++			if (decoder->hop) {
++				decoder->state.type = 0;
++				decoder->pkt_state = INTEL_PT_STATE_RESAMPLE;
++			}
++			decoder->pge = true;
++			decoder->state.type |= INTEL_PT_BRANCH | INTEL_PT_TRACE_BEGIN;
++			decoder->state.from_ip = 0;
++			decoder->state.to_ip = decoder->ip;
++			return true;
++		}
++	}
++	if (ret) {
+ 		decoder->state.from_ip = decoder->ip;
+ 		decoder->state.to_ip = 0;
+-		ret = true;
++	} else {
++		decoder->state.type = type;
+ 	}
+ 	return ret;
+ }
+@@ -1486,7 +1494,16 @@ static int intel_pt_overflow(struct intel_pt_decoder *decoder)
+ 	intel_pt_log("ERROR: Buffer overflow\n");
+ 	intel_pt_clear_tx_flags(decoder);
+ 	decoder->timestamp_insn_cnt = 0;
+-	decoder->pkt_state = INTEL_PT_STATE_ERR_RESYNC;
++	decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
++	decoder->state.from_ip = decoder->ip;
++	decoder->ip = 0;
++	decoder->pge = false;
++	decoder->set_fup_tx_flags = false;
++	decoder->set_fup_ptw = false;
++	decoder->set_fup_mwait = false;
++	decoder->set_fup_pwre = false;
++	decoder->set_fup_exstop = false;
++	decoder->set_fup_bep = false;
+ 	decoder->overflow = true;
+ 	return -EOVERFLOW;
+ }
+@@ -1937,6 +1954,8 @@ static int intel_pt_scan_for_psb(struct intel_pt_decoder *decoder);
+ /* Hop mode: Ignore TNT, do not walk code, but get ip from FUPs and TIPs */
+ static int intel_pt_hop_trace(struct intel_pt_decoder *decoder, bool *no_tip, int *err)
+ {
++	*err = 0;
++
+ 	/* Leap from PSB to PSB, getting ip from FUP within PSB+ */
+ 	if (decoder->leap && !decoder->in_psb && decoder->packet.type != INTEL_PT_PSB) {
+ 		*err = intel_pt_scan_for_psb(decoder);
+@@ -1949,6 +1968,7 @@ static int intel_pt_hop_trace(struct intel_pt_decoder *decoder, bool *no_tip, in
+ 		return HOP_IGNORE;
+ 
+ 	case INTEL_PT_TIP_PGD:
++		decoder->pge = false;
+ 		if (!decoder->packet.count)
+ 			return HOP_IGNORE;
+ 		intel_pt_set_ip(decoder);
+@@ -1970,18 +1990,21 @@ static int intel_pt_hop_trace(struct intel_pt_decoder *decoder, bool *no_tip, in
+ 		if (!decoder->packet.count)
+ 			return HOP_IGNORE;
+ 		intel_pt_set_ip(decoder);
+-		if (intel_pt_fup_event(decoder))
+-			return HOP_RETURN;
+-		if (!decoder->branch_enable)
++		if (decoder->set_fup_mwait || decoder->set_fup_pwre)
++			*no_tip = true;
++		if (!decoder->branch_enable || !decoder->pge)
+ 			*no_tip = true;
+ 		if (*no_tip) {
+ 			decoder->state.type = INTEL_PT_INSTRUCTION;
+ 			decoder->state.from_ip = decoder->ip;
+ 			decoder->state.to_ip = 0;
++			intel_pt_fup_event(decoder);
+ 			return HOP_RETURN;
+ 		}
++		intel_pt_fup_event(decoder);
++		decoder->state.type |= INTEL_PT_INSTRUCTION | INTEL_PT_BRANCH;
+ 		*err = intel_pt_walk_fup_tip(decoder);
+-		if (!*err)
++		if (!*err && decoder->state.to_ip)
+ 			decoder->pkt_state = INTEL_PT_STATE_RESAMPLE;
+ 		return HOP_RETURN;
+ 
+@@ -2050,6 +2073,7 @@ static int intel_pt_walk_trace(struct intel_pt_decoder *decoder)
+ 		if (err)
+ 			return err;
+ next:
++		err = 0;
+ 		if (decoder->cyc_threshold) {
+ 			if (decoder->sample_cyc && last_packet_type != INTEL_PT_CYC)
+ 				decoder->sample_cyc = false;
+@@ -2088,6 +2112,7 @@ next:
+ 
+ 		case INTEL_PT_TIP_PGE: {
+ 			decoder->pge = true;
++			decoder->overflow = false;
+ 			intel_pt_mtc_cyc_cnt_pge(decoder);
+ 			if (decoder->packet.count == 0) {
+ 				intel_pt_log_at("Skipping zero TIP.PGE",
+@@ -2124,7 +2149,7 @@ next:
+ 				break;
+ 			}
+ 			intel_pt_set_last_ip(decoder);
+-			if (!decoder->branch_enable) {
++			if (!decoder->branch_enable || !decoder->pge) {
+ 				decoder->ip = decoder->last_ip;
+ 				if (intel_pt_fup_event(decoder))
+ 					return 0;
+@@ -2601,10 +2626,10 @@ static int intel_pt_sync_ip(struct intel_pt_decoder *decoder)
+ 	decoder->set_fup_pwre = false;
+ 	decoder->set_fup_exstop = false;
+ 	decoder->set_fup_bep = false;
++	decoder->overflow = false;
+ 
+ 	if (!decoder->branch_enable) {
+ 		decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+-		decoder->overflow = false;
+ 		decoder->state.type = 0; /* Do not have a sample */
+ 		return 0;
+ 	}
+@@ -2619,7 +2644,6 @@ static int intel_pt_sync_ip(struct intel_pt_decoder *decoder)
+ 		decoder->pkt_state = INTEL_PT_STATE_RESAMPLE;
+ 	else
+ 		decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+-	decoder->overflow = false;
+ 
+ 	decoder->state.from_ip = 0;
+ 	decoder->state.to_ip = decoder->ip;
+@@ -2732,7 +2756,7 @@ leap:
+ 		return err;
+ 
+ 	decoder->have_last_ip = true;
+-	decoder->pkt_state = INTEL_PT_STATE_NO_IP;
++	decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ 
+ 	err = intel_pt_walk_psb(decoder);
+ 	if (err)
+@@ -2828,7 +2852,8 @@ const struct intel_pt_state *intel_pt_decode(struct intel_pt_decoder *decoder)
+ 
+ 	if (err) {
+ 		decoder->state.err = intel_pt_ext_err(err);
+-		decoder->state.from_ip = decoder->ip;
++		if (err != -EOVERFLOW)
++			decoder->state.from_ip = decoder->ip;
+ 		intel_pt_update_sample_time(decoder);
+ 		decoder->sample_tot_cyc_cnt = decoder->tot_cyc_cnt;
+ 	} else {
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index e5aaf1337be98..5163d2ffea70d 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -2271,6 +2271,7 @@ static int intel_pt_run_decoder(struct intel_pt_queue *ptq, u64 *timestamp)
+ 				ptq->sync_switch = false;
+ 				intel_pt_next_tid(pt, ptq);
+ 			}
++			ptq->timestamp = state->est_timestamp;
+ 			if (pt->synth_opts.errors) {
+ 				err = intel_ptq_synth_error(ptq, state);
+ 				if (err)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-12-21 19:37 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-12-21 19:37 UTC (permalink / raw
  To: gentoo-commits

commit:     5b73bbee19e8b9b38023f7b2c6a0178498e62b93
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Dec 21 19:37:22 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Dec 21 19:37:22 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5b73bbee

Move X86 and ARM only config settings to their respective sections

Thanks to gyakovlev

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 95a64aa2..91c92074 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-08-24 15:34:24.700702871 -0400
-+++ b/distro/Kconfig	2021-08-24 15:49:16.965525424 -0400
-@@ -0,0 +1,281 @@
+--- /dev/null	2021-12-21 08:57:43.779324794 -0500
++++ b/distro/Kconfig	2021-12-21 14:33:33.759225728 -0500
+@@ -0,0 +1,283 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -211,7 +211,6 @@
 +	select PAGE_POISONING_ZERO
 +	select INIT_ON_ALLOC_DEFAULT_ON
 +	select INIT_ON_FREE_DEFAULT_ON
-+	select VMAP_STACK
 +	select REFCOUNT_FULL
 +	select FORTIFY_SOURCE
 +	select SECURITY_DMESG_RESTRICT
@@ -219,7 +218,6 @@
 +	select GCC_PLUGIN_LATENT_ENTROPY
 +	select GCC_PLUGIN_STRUCTLEAK
 +	select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL
-+	select GCC_PLUGIN_STACKLEAK
 +	select GCC_PLUGIN_RANDSTRUCT
 +	select GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
 +
@@ -239,6 +237,8 @@
 +	select RELOCATABLE
 +	select LEGACY_VSYSCALL_NONE
 + 	select PAGE_TABLE_ISOLATION
++	select GCC_PLUGIN_STACKLEAK
++	select VMAP_STACK
 +
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_ARM64
@@ -251,6 +251,8 @@
 +	select RELOCATABLE
 +	select ARM64_SW_TTBR0_PAN
 +	select CONFIG_UNMAP_KERNEL_AT_EL0
++	select GCC_PLUGIN_STACKLEAK
++	select VMAP_STACK
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_X86_32
 +	bool "X86_32 KSPP Settings"


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-12-22 14:05 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-12-22 14:05 UTC (permalink / raw
  To: gentoo-commits

commit:     a65240864c0cc8dd0f787643c5f01b65e5dd5685
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 22 14:05:30 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec 22 14:05:30 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a6524086

Linux patch 5.10.88

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1087_linux-5.10.88.patch | 3012 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3016 insertions(+)

diff --git a/0000_README b/0000_README
index b743d7ac..a6cd68ff 100644
--- a/0000_README
+++ b/0000_README
@@ -391,6 +391,10 @@ Patch:  1086_linux-5.10.87.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.87
 
+Patch:  1087_linux-5.10.88.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.88
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1087_linux-5.10.88.patch b/1087_linux-5.10.88.patch
new file mode 100644
index 00000000..2deec06c
--- /dev/null
+++ b/1087_linux-5.10.88.patch
@@ -0,0 +1,3012 @@
+diff --git a/Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst b/Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst
+index f1d5233e5e510..0a233b17c664e 100644
+--- a/Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst
++++ b/Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst
+@@ -440,6 +440,22 @@ NOTE: For 82599-based network connections, if you are enabling jumbo frames in
+ a virtual function (VF), jumbo frames must first be enabled in the physical
+ function (PF). The VF MTU setting cannot be larger than the PF MTU.
+ 
++NBASE-T Support
++---------------
++The ixgbe driver supports NBASE-T on some devices. However, the advertisement
++of NBASE-T speeds is suppressed by default, to accommodate broken network
++switches which cannot cope with advertised NBASE-T speeds. Use the ethtool
++command to enable advertising NBASE-T speeds on devices which support it::
++
++  ethtool -s eth? advertise 0x1800000001028
++
++On Linux systems with INTERFACES(5), this can be specified as a pre-up command
++in /etc/network/interfaces so that the interface is always brought up with
++NBASE-T support, e.g.::
++
++  iface eth? inet dhcp
++       pre-up ethtool -s eth? advertise 0x1800000001028 || true
++
+ Generic Receive Offload, aka GRO
+ --------------------------------
+ The driver supports the in-kernel software implementation of GRO. GRO has
+diff --git a/Makefile b/Makefile
+index d627f4ae5af56..0b74b414f4e57 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 87
++SUBLEVEL = 88
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx6ull-pinfunc.h b/arch/arm/boot/dts/imx6ull-pinfunc.h
+index eb025a9d47592..7328d4ef8559f 100644
+--- a/arch/arm/boot/dts/imx6ull-pinfunc.h
++++ b/arch/arm/boot/dts/imx6ull-pinfunc.h
+@@ -82,6 +82,6 @@
+ #define MX6ULL_PAD_CSI_DATA04__ESAI_TX_FS                         0x01F4 0x0480 0x0000 0x9 0x0
+ #define MX6ULL_PAD_CSI_DATA05__ESAI_TX_CLK                        0x01F8 0x0484 0x0000 0x9 0x0
+ #define MX6ULL_PAD_CSI_DATA06__ESAI_TX5_RX0                       0x01FC 0x0488 0x0000 0x9 0x0
+-#define MX6ULL_PAD_CSI_DATA07__ESAI_T0                            0x0200 0x048C 0x0000 0x9 0x0
++#define MX6ULL_PAD_CSI_DATA07__ESAI_TX0                           0x0200 0x048C 0x0000 0x9 0x0
+ 
+ #endif /* __DTS_IMX6ULL_PINFUNC_H */
+diff --git a/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts b/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts
+index 2b645642b9352..2a745522404d6 100644
+--- a/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts
++++ b/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts
+@@ -12,7 +12,7 @@
+ 	flash0: n25q00@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q00aa";
++		compatible = "micron,mt25qu02g", "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <100000000>;
+ 
+diff --git a/arch/arm/boot/dts/socfpga_arria5_socdk.dts b/arch/arm/boot/dts/socfpga_arria5_socdk.dts
+index 90e676e7019f2..1b02d46496a85 100644
+--- a/arch/arm/boot/dts/socfpga_arria5_socdk.dts
++++ b/arch/arm/boot/dts/socfpga_arria5_socdk.dts
+@@ -119,7 +119,7 @@
+ 	flash: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q256a";
++		compatible = "micron,n25q256a", "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <100000000>;
+ 
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts b/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts
+index 6f138b2b26163..51bb436784e24 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts
+@@ -124,7 +124,7 @@
+ 	flash0: n25q00@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q00";
++		compatible = "micron,mt25qu02g", "jedec,spi-nor";
+ 		reg = <0>;	/* chip select */
+ 		spi-max-frequency = <100000000>;
+ 
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts b/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
+index c155ff02eb6e0..cae9ddd5ed38b 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
+@@ -169,7 +169,7 @@
+ 	flash: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q00";
++		compatible = "micron,mt25qu02g", "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <100000000>;
+ 
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts b/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
+index 8d5d3996f6f27..ca18b959e6559 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
+@@ -80,7 +80,7 @@
+ 	flash: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q256a";
++		compatible = "micron,n25q256a", "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <100000000>;
+ 		m25p,fast-read;
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts b/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts
+index 99a71757cdf46..3f7aa7bf0863a 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts
+@@ -116,7 +116,7 @@
+ 	flash0: n25q512a@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q512a";
++		compatible = "micron,n25q512a", "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <100000000>;
+ 
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts b/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
+index a060718758b67..25874e1b9c829 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
+@@ -224,7 +224,7 @@
+ 	n25q128@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q128";
++		compatible = "micron,n25q128", "jedec,spi-nor";
+ 		reg = <0>;		/* chip select */
+ 		spi-max-frequency = <100000000>;
+ 		m25p,fast-read;
+@@ -241,7 +241,7 @@
+ 	n25q00@1 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q00";
++		compatible = "micron,mt25qu02g", "jedec,spi-nor";
+ 		reg = <1>;		/* chip select */
+ 		spi-max-frequency = <100000000>;
+ 		m25p,fast-read;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+index 05ee062548e4f..f4d7bb75707df 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+@@ -866,11 +866,12 @@
+ 				assigned-clocks = <&clk IMX8MM_CLK_ENET_AXI>,
+ 						  <&clk IMX8MM_CLK_ENET_TIMER>,
+ 						  <&clk IMX8MM_CLK_ENET_REF>,
+-						  <&clk IMX8MM_CLK_ENET_TIMER>;
++						  <&clk IMX8MM_CLK_ENET_PHY_REF>;
+ 				assigned-clock-parents = <&clk IMX8MM_SYS_PLL1_266M>,
+ 							 <&clk IMX8MM_SYS_PLL2_100M>,
+-							 <&clk IMX8MM_SYS_PLL2_125M>;
+-				assigned-clock-rates = <0>, <0>, <125000000>, <100000000>;
++							 <&clk IMX8MM_SYS_PLL2_125M>,
++							 <&clk IMX8MM_SYS_PLL2_50M>;
++				assigned-clock-rates = <0>, <100000000>, <125000000>, <0>;
+ 				fsl,num-tx-queues = <3>;
+ 				fsl,num-rx-queues = <3>;
+ 				status = "disabled";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn.dtsi b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+index 16c7202885d70..aea723eb2ba3f 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+@@ -753,11 +753,12 @@
+ 				assigned-clocks = <&clk IMX8MN_CLK_ENET_AXI>,
+ 						  <&clk IMX8MN_CLK_ENET_TIMER>,
+ 						  <&clk IMX8MN_CLK_ENET_REF>,
+-						  <&clk IMX8MN_CLK_ENET_TIMER>;
++						  <&clk IMX8MN_CLK_ENET_PHY_REF>;
+ 				assigned-clock-parents = <&clk IMX8MN_SYS_PLL1_266M>,
+ 							 <&clk IMX8MN_SYS_PLL2_100M>,
+-							 <&clk IMX8MN_SYS_PLL2_125M>;
+-				assigned-clock-rates = <0>, <0>, <125000000>, <100000000>;
++							 <&clk IMX8MN_SYS_PLL2_125M>,
++							 <&clk IMX8MN_SYS_PLL2_50M>;
++				assigned-clock-rates = <0>, <100000000>, <125000000>, <0>;
+ 				fsl,num-tx-queues = <3>;
+ 				fsl,num-rx-queues = <3>;
+ 				status = "disabled";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
+index ad66f1286d95c..c13b4a02d12f8 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
+@@ -62,6 +62,8 @@
+ 			reg = <1>;
+ 			eee-broken-1000t;
+ 			reset-gpios = <&gpio4 2 GPIO_ACTIVE_LOW>;
++			reset-assert-us = <10000>;
++			reset-deassert-us = <80000>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index 03ef0e5f909e4..acee71ca32d83 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -725,11 +725,12 @@
+ 				assigned-clocks = <&clk IMX8MP_CLK_ENET_AXI>,
+ 						  <&clk IMX8MP_CLK_ENET_TIMER>,
+ 						  <&clk IMX8MP_CLK_ENET_REF>,
+-						  <&clk IMX8MP_CLK_ENET_TIMER>;
++						  <&clk IMX8MP_CLK_ENET_PHY_REF>;
+ 				assigned-clock-parents = <&clk IMX8MP_SYS_PLL1_266M>,
+ 							 <&clk IMX8MP_SYS_PLL2_100M>,
+-							 <&clk IMX8MP_SYS_PLL2_125M>;
+-				assigned-clock-rates = <0>, <0>, <125000000>, <100000000>;
++							 <&clk IMX8MP_SYS_PLL2_125M>,
++							 <&clk IMX8MP_SYS_PLL2_50M>;
++				assigned-clock-rates = <0>, <100000000>, <125000000>, <0>;
+ 				fsl,num-tx-queues = <3>;
+ 				fsl,num-rx-queues = <3>;
+ 				status = "disabled";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+index bce6f8b7db436..fbcb9531cc70d 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+@@ -91,7 +91,7 @@
+ 		regulator-max-microvolt = <3300000>;
+ 		regulator-always-on;
+ 		regulator-boot-on;
+-		vim-supply = <&vcc_io>;
++		vin-supply = <&vcc_io>;
+ 	};
+ 
+ 	vdd_core: vdd-core {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi
+index 635afdd99122f..2c644ac1f84b9 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi
+@@ -699,7 +699,6 @@
+ &sdhci {
+ 	bus-width = <8>;
+ 	mmc-hs400-1_8v;
+-	mmc-hs400-enhanced-strobe;
+ 	non-removable;
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts b/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts
+index 1fa80ac15464b..88984b5e67b6e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts
+@@ -49,7 +49,7 @@
+ 		regulator-boot-on;
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
+-		vim-supply = <&vcc3v3_sys>;
++		vin-supply = <&vcc3v3_sys>;
+ 	};
+ 
+ 	vcc3v3_sys: vcc3v3-sys {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
+index 678a336010bf8..f121203081b97 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
+@@ -459,7 +459,7 @@
+ 	status = "okay";
+ 
+ 	bt656-supply = <&vcc_3v0>;
+-	audio-supply = <&vcc_3v0>;
++	audio-supply = <&vcc1v8_codec>;
+ 	sdmmc-supply = <&vcc_sdio>;
+ 	gpio1830-supply = <&vcc_3v0>;
+ };
+diff --git a/arch/powerpc/platforms/85xx/smp.c b/arch/powerpc/platforms/85xx/smp.c
+index 83f4a6389a282..d7081e9af65c7 100644
+--- a/arch/powerpc/platforms/85xx/smp.c
++++ b/arch/powerpc/platforms/85xx/smp.c
+@@ -220,7 +220,7 @@ static int smp_85xx_start_cpu(int cpu)
+ 	local_irq_save(flags);
+ 	hard_irq_disable();
+ 
+-	if (qoriq_pm_ops)
++	if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
+ 		qoriq_pm_ops->cpu_up_prepare(cpu);
+ 
+ 	/* if cpu is not spinning, reset it */
+@@ -292,7 +292,7 @@ static int smp_85xx_kick_cpu(int nr)
+ 		booting_thread_hwid = cpu_thread_in_core(nr);
+ 		primary = cpu_first_thread_sibling(nr);
+ 
+-		if (qoriq_pm_ops)
++		if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
+ 			qoriq_pm_ops->cpu_up_prepare(nr);
+ 
+ 		/*
+diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c
+index e7435f3a3d2d2..76cd09879eaf4 100644
+--- a/arch/s390/kernel/machine_kexec_file.c
++++ b/arch/s390/kernel/machine_kexec_file.c
+@@ -277,6 +277,7 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+ {
+ 	Elf_Rela *relas;
+ 	int i, r_type;
++	int ret;
+ 
+ 	relas = (void *)pi->ehdr + relsec->sh_offset;
+ 
+@@ -311,7 +312,11 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+ 		addr = section->sh_addr + relas[i].r_offset;
+ 
+ 		r_type = ELF64_R_TYPE(relas[i].r_info);
+-		arch_kexec_do_relocs(r_type, loc, val, addr);
++		ret = arch_kexec_do_relocs(r_type, loc, val, addr);
++		if (ret) {
++			pr_err("Unknown rela relocation: %d\n", r_type);
++			return -ENOEXEC;
++		}
+ 	}
+ 	return 0;
+ }
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index b885063dc393f..4f828cac0273e 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3065,7 +3065,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 
+ 		if (!msr_info->host_initiated)
+ 			return 1;
+-		if (guest_cpuid_has(vcpu, X86_FEATURE_PDCM) && kvm_get_msr_feature(&msr_ent))
++		if (kvm_get_msr_feature(&msr_ent))
+ 			return 1;
+ 		if (data & ~msr_ent.data)
+ 			return 1;
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index e95b93f72bd5c..9af32b44b7173 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2246,7 +2246,14 @@ static void ioc_timer_fn(struct timer_list *timer)
+ 			hwm = current_hweight_max(iocg);
+ 			new_hwi = hweight_after_donation(iocg, old_hwi, hwm,
+ 							 usage, &now);
+-			if (new_hwi < hwm) {
++			/*
++			 * Donation calculation assumes hweight_after_donation
++			 * to be positive, a condition that a donor w/ hwa < 2
++			 * can't meet. Don't bother with donation if hwa is
++			 * below 2. It's not gonna make a meaningful difference
++			 * anyway.
++			 */
++			if (new_hwi < hwm && hwa >= 2) {
+ 				iocg->hweight_donating = hwa;
+ 				iocg->hweight_after_donation = new_hwi;
+ 				list_add(&iocg->surplus_list, &surpluses);
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 48b8934970f36..a0e788b648214 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -2870,8 +2870,19 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
+ 		goto invalid_fld;
+ 	}
+ 
+-	if (ata_is_ncq(tf->protocol) && (cdb[2 + cdb_offset] & 0x3) == 0)
+-		tf->protocol = ATA_PROT_NCQ_NODATA;
++	if ((cdb[2 + cdb_offset] & 0x3) == 0) {
++		/*
++		 * When T_LENGTH is zero (No data is transferred), dir should
++		 * be DMA_NONE.
++		 */
++		if (scmd->sc_data_direction != DMA_NONE) {
++			fp = 2 + cdb_offset;
++			goto invalid_fld;
++		}
++
++		if (ata_is_ncq(tf->protocol))
++			tf->protocol = ATA_PROT_NCQ_NODATA;
++	}
+ 
+ 	/* enable LBA */
+ 	tf->flags |= ATA_TFLAG_LBA;
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index ff7b62597b525..22842d2938c28 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1573,9 +1573,12 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 	unsigned long flags;
+ 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
+ 	struct blkfront_info *info = rinfo->dev_info;
++	unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
+ 
+-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
++	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
++		xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
+ 		return IRQ_HANDLED;
++	}
+ 
+ 	spin_lock_irqsave(&rinfo->ring_lock, flags);
+  again:
+@@ -1591,6 +1594,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 		unsigned long id;
+ 		unsigned int op;
+ 
++		eoiflag = 0;
++
+ 		RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
+ 		id = bret.id;
+ 
+@@ -1707,6 +1712,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 
+ 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+ 
++	xen_irq_lateeoi(irq, eoiflag);
++
+ 	return IRQ_HANDLED;
+ 
+  err:
+@@ -1714,6 +1721,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 
+ 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+ 
++	/* No EOI in order to avoid further interrupts. */
++
+ 	pr_alert("%s disabled for further use\n", info->gd->disk_name);
+ 	return IRQ_HANDLED;
+ }
+@@ -1753,8 +1762,8 @@ static int setup_blkring(struct xenbus_device *dev,
+ 	if (err)
+ 		goto fail;
+ 
+-	err = bind_evtchn_to_irqhandler(rinfo->evtchn, blkif_interrupt, 0,
+-					"blkif", rinfo);
++	err = bind_evtchn_to_irqhandler_lateeoi(rinfo->evtchn, blkif_interrupt,
++						0, "blkif", rinfo);
+ 	if (err <= 0) {
+ 		xenbus_dev_fatal(dev, err,
+ 				 "bind_evtchn_to_irqhandler failed");
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 43603dc9da430..18f0650c5d405 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -2443,12 +2443,11 @@ static void sysc_reinit_modules(struct sysc_soc_info *soc)
+ 	struct sysc_module *module;
+ 	struct list_head *pos;
+ 	struct sysc *ddata;
+-	int error = 0;
+ 
+ 	list_for_each(pos, &sysc_soc->restored_modules) {
+ 		module = list_entry(pos, struct sysc_module, node);
+ 		ddata = module->ddata;
+-		error = sysc_reinit_module(ddata, ddata->enabled);
++		sysc_reinit_module(ddata, ddata->enabled);
+ 	}
+ }
+ 
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 61c78714c0957..515ef39c4610c 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3389,6 +3389,14 @@ static int __clk_core_init(struct clk_core *core)
+ 
+ 	clk_prepare_lock();
+ 
++	/*
++	 * Set hw->core after grabbing the prepare_lock to synchronize with
++	 * callers of clk_core_fill_parent_index() where we treat hw->core
++	 * being NULL as the clk not being registered yet. This is crucial so
++	 * that clks aren't parented until their parent is fully registered.
++	 */
++	core->hw->core = core;
++
+ 	ret = clk_pm_runtime_get(core);
+ 	if (ret)
+ 		goto unlock;
+@@ -3557,8 +3565,10 @@ static int __clk_core_init(struct clk_core *core)
+ out:
+ 	clk_pm_runtime_put(core);
+ unlock:
+-	if (ret)
++	if (ret) {
+ 		hlist_del_init(&core->child_node);
++		core->hw->core = NULL;
++	}
+ 
+ 	clk_prepare_unlock();
+ 
+@@ -3804,7 +3814,6 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ 	core->num_parents = init->num_parents;
+ 	core->min_rate = 0;
+ 	core->max_rate = ULONG_MAX;
+-	hw->core = core;
+ 
+ 	ret = clk_core_populate_parent_map(core, init);
+ 	if (ret)
+@@ -3822,7 +3831,7 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ 		goto fail_create_clk;
+ 	}
+ 
+-	clk_core_link_consumer(hw->core, hw->clk);
++	clk_core_link_consumer(core, hw->clk);
+ 
+ 	ret = __clk_core_init(core);
+ 	if (!ret)
+diff --git a/drivers/dma/st_fdma.c b/drivers/dma/st_fdma.c
+index 962b6e05287b5..d95c421877fb7 100644
+--- a/drivers/dma/st_fdma.c
++++ b/drivers/dma/st_fdma.c
+@@ -874,4 +874,4 @@ MODULE_LICENSE("GPL v2");
+ MODULE_DESCRIPTION("STMicroelectronics FDMA engine driver");
+ MODULE_AUTHOR("Ludovic.barre <Ludovic.barre@st.com>");
+ MODULE_AUTHOR("Peter Griffin <peter.griffin@linaro.org>");
+-MODULE_ALIAS("platform: " DRIVER_NAME);
++MODULE_ALIAS("platform:" DRIVER_NAME);
+diff --git a/drivers/firmware/scpi_pm_domain.c b/drivers/firmware/scpi_pm_domain.c
+index 51201600d789b..800673910b511 100644
+--- a/drivers/firmware/scpi_pm_domain.c
++++ b/drivers/firmware/scpi_pm_domain.c
+@@ -16,7 +16,6 @@ struct scpi_pm_domain {
+ 	struct generic_pm_domain genpd;
+ 	struct scpi_ops *ops;
+ 	u32 domain;
+-	char name[30];
+ };
+ 
+ /*
+@@ -110,8 +109,13 @@ static int scpi_pm_domain_probe(struct platform_device *pdev)
+ 
+ 		scpi_pd->domain = i;
+ 		scpi_pd->ops = scpi_ops;
+-		sprintf(scpi_pd->name, "%pOFn.%d", np, i);
+-		scpi_pd->genpd.name = scpi_pd->name;
++		scpi_pd->genpd.name = devm_kasprintf(dev, GFP_KERNEL,
++						     "%pOFn.%d", np, i);
++		if (!scpi_pd->genpd.name) {
++			dev_err(dev, "Failed to allocate genpd name:%pOFn.%d\n",
++				np, i);
++			continue;
++		}
+ 		scpi_pd->genpd.power_off = scpi_pd_power_off;
+ 		scpi_pd->genpd.power_on = scpi_pd_power_on;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index bea451a39d601..b19f7bd37781f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3002,8 +3002,8 @@ static void gfx_v9_0_init_pg(struct amdgpu_device *adev)
+ 			      AMD_PG_SUPPORT_CP |
+ 			      AMD_PG_SUPPORT_GDS |
+ 			      AMD_PG_SUPPORT_RLC_SMU_HS)) {
+-		WREG32(mmRLC_JUMP_TABLE_RESTORE,
+-		       adev->gfx.rlc.cp_table_gpu_addr >> 8);
++		WREG32_SOC15(GC, 0, mmRLC_JUMP_TABLE_RESTORE,
++			     adev->gfx.rlc.cp_table_gpu_addr >> 8);
+ 		gfx_v9_0_init_gfx_power_gating(adev);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
+index 7907c9e0b5dec..b938fd12da4d5 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
+@@ -187,6 +187,9 @@ int smu_v12_0_fini_smc_tables(struct smu_context *smu)
+ 	kfree(smu_table->watermarks_table);
+ 	smu_table->watermarks_table = NULL;
+ 
++	kfree(smu_table->gpu_metrics_table);
++	smu_table->gpu_metrics_table = NULL;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index a3c2f76668abe..d27f2840b9555 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -857,7 +857,10 @@ static void ast_crtc_reset(struct drm_crtc *crtc)
+ 	if (crtc->state)
+ 		crtc->funcs->atomic_destroy_state(crtc, crtc->state);
+ 
+-	__drm_atomic_helper_crtc_reset(crtc, &ast_state->base);
++	if (ast_state)
++		__drm_atomic_helper_crtc_reset(crtc, &ast_state->base);
++	else
++		__drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+ 
+ static struct drm_crtc_state *
+diff --git a/drivers/input/touchscreen/of_touchscreen.c b/drivers/input/touchscreen/of_touchscreen.c
+index 97342e14b4f18..8719a8b0e8682 100644
+--- a/drivers/input/touchscreen/of_touchscreen.c
++++ b/drivers/input/touchscreen/of_touchscreen.c
+@@ -79,27 +79,27 @@ void touchscreen_parse_properties(struct input_dev *input, bool multitouch,
+ 
+ 	data_present = touchscreen_get_prop_u32(dev, "touchscreen-min-x",
+ 						input_abs_get_min(input, axis_x),
+-						&minimum) |
+-		       touchscreen_get_prop_u32(dev, "touchscreen-size-x",
+-						input_abs_get_max(input,
+-								  axis_x) + 1,
+-						&maximum) |
+-		       touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x",
+-						input_abs_get_fuzz(input, axis_x),
+-						&fuzz);
++						&minimum);
++	data_present |= touchscreen_get_prop_u32(dev, "touchscreen-size-x",
++						 input_abs_get_max(input,
++								   axis_x) + 1,
++						 &maximum);
++	data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x",
++						 input_abs_get_fuzz(input, axis_x),
++						 &fuzz);
+ 	if (data_present)
+ 		touchscreen_set_params(input, axis_x, minimum, maximum - 1, fuzz);
+ 
+ 	data_present = touchscreen_get_prop_u32(dev, "touchscreen-min-y",
+ 						input_abs_get_min(input, axis_y),
+-						&minimum) |
+-		       touchscreen_get_prop_u32(dev, "touchscreen-size-y",
+-						input_abs_get_max(input,
+-								  axis_y) + 1,
+-						&maximum) |
+-		       touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y",
+-						input_abs_get_fuzz(input, axis_y),
+-						&fuzz);
++						&minimum);
++	data_present |= touchscreen_get_prop_u32(dev, "touchscreen-size-y",
++						 input_abs_get_max(input,
++								   axis_y) + 1,
++						 &maximum);
++	data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y",
++						 input_abs_get_fuzz(input, axis_y),
++						 &fuzz);
+ 	if (data_present)
+ 		touchscreen_set_params(input, axis_y, minimum, maximum - 1, fuzz);
+ 
+@@ -107,11 +107,11 @@ void touchscreen_parse_properties(struct input_dev *input, bool multitouch,
+ 	data_present = touchscreen_get_prop_u32(dev,
+ 						"touchscreen-max-pressure",
+ 						input_abs_get_max(input, axis),
+-						&maximum) |
+-		       touchscreen_get_prop_u32(dev,
+-						"touchscreen-fuzz-pressure",
+-						input_abs_get_fuzz(input, axis),
+-						&fuzz);
++						&maximum);
++	data_present |= touchscreen_get_prop_u32(dev,
++						 "touchscreen-fuzz-pressure",
++						 input_abs_get_fuzz(input, axis),
++						 &fuzz);
+ 	if (data_present)
+ 		touchscreen_set_params(input, axis, 0, maximum, fuzz);
+ 
+diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c
+index 9e4d1212f4c16..63f2baed3c8a6 100644
+--- a/drivers/md/persistent-data/dm-btree-remove.c
++++ b/drivers/md/persistent-data/dm-btree-remove.c
+@@ -423,9 +423,9 @@ static int rebalance_children(struct shadow_spine *s,
+ 
+ 		memcpy(n, dm_block_data(child),
+ 		       dm_bm_block_size(dm_tm_get_bm(info->tm)));
+-		dm_tm_unlock(info->tm, child);
+ 
+ 		dm_tm_dec(info->tm, dm_block_location(child));
++		dm_tm_unlock(info->tm, child);
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/media/usb/dvb-usb-v2/mxl111sf.c b/drivers/media/usb/dvb-usb-v2/mxl111sf.c
+index 7865fa0a82957..cd5861a30b6f8 100644
+--- a/drivers/media/usb/dvb-usb-v2/mxl111sf.c
++++ b/drivers/media/usb/dvb-usb-v2/mxl111sf.c
+@@ -931,8 +931,6 @@ static int mxl111sf_init(struct dvb_usb_device *d)
+ 		  .len = sizeof(eeprom), .buf = eeprom },
+ 	};
+ 
+-	mutex_init(&state->msg_lock);
+-
+ 	ret = get_chip_info(state);
+ 	if (mxl_fail(ret))
+ 		pr_err("failed to get chip info during probe");
+@@ -1074,6 +1072,14 @@ static int mxl111sf_get_stream_config_dvbt(struct dvb_frontend *fe,
+ 	return 0;
+ }
+ 
++static int mxl111sf_probe(struct dvb_usb_device *dev)
++{
++	struct mxl111sf_state *state = d_to_priv(dev);
++
++	mutex_init(&state->msg_lock);
++	return 0;
++}
++
+ static struct dvb_usb_device_properties mxl111sf_props_dvbt = {
+ 	.driver_name = KBUILD_MODNAME,
+ 	.owner = THIS_MODULE,
+@@ -1083,6 +1089,7 @@ static struct dvb_usb_device_properties mxl111sf_props_dvbt = {
+ 	.generic_bulk_ctrl_endpoint = 0x02,
+ 	.generic_bulk_ctrl_endpoint_response = 0x81,
+ 
++	.probe             = mxl111sf_probe,
+ 	.i2c_algo          = &mxl111sf_i2c_algo,
+ 	.frontend_attach   = mxl111sf_frontend_attach_dvbt,
+ 	.tuner_attach      = mxl111sf_attach_tuner,
+@@ -1124,6 +1131,7 @@ static struct dvb_usb_device_properties mxl111sf_props_atsc = {
+ 	.generic_bulk_ctrl_endpoint = 0x02,
+ 	.generic_bulk_ctrl_endpoint_response = 0x81,
+ 
++	.probe             = mxl111sf_probe,
+ 	.i2c_algo          = &mxl111sf_i2c_algo,
+ 	.frontend_attach   = mxl111sf_frontend_attach_atsc,
+ 	.tuner_attach      = mxl111sf_attach_tuner,
+@@ -1165,6 +1173,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mh = {
+ 	.generic_bulk_ctrl_endpoint = 0x02,
+ 	.generic_bulk_ctrl_endpoint_response = 0x81,
+ 
++	.probe             = mxl111sf_probe,
+ 	.i2c_algo          = &mxl111sf_i2c_algo,
+ 	.frontend_attach   = mxl111sf_frontend_attach_mh,
+ 	.tuner_attach      = mxl111sf_attach_tuner,
+@@ -1233,6 +1242,7 @@ static struct dvb_usb_device_properties mxl111sf_props_atsc_mh = {
+ 	.generic_bulk_ctrl_endpoint = 0x02,
+ 	.generic_bulk_ctrl_endpoint_response = 0x81,
+ 
++	.probe             = mxl111sf_probe,
+ 	.i2c_algo          = &mxl111sf_i2c_algo,
+ 	.frontend_attach   = mxl111sf_frontend_attach_atsc_mh,
+ 	.tuner_attach      = mxl111sf_attach_tuner,
+@@ -1311,6 +1321,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mercury = {
+ 	.generic_bulk_ctrl_endpoint = 0x02,
+ 	.generic_bulk_ctrl_endpoint_response = 0x81,
+ 
++	.probe             = mxl111sf_probe,
+ 	.i2c_algo          = &mxl111sf_i2c_algo,
+ 	.frontend_attach   = mxl111sf_frontend_attach_mercury,
+ 	.tuner_attach      = mxl111sf_attach_tuner,
+@@ -1381,6 +1392,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mercury_mh = {
+ 	.generic_bulk_ctrl_endpoint = 0x02,
+ 	.generic_bulk_ctrl_endpoint_response = 0x81,
+ 
++	.probe             = mxl111sf_probe,
+ 	.i2c_algo          = &mxl111sf_i2c_algo,
+ 	.frontend_attach   = mxl111sf_frontend_attach_mercury_mh,
+ 	.tuner_attach      = mxl111sf_attach_tuner,
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index 0404aafd5ce56..1a703b95208b0 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -1304,11 +1304,11 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
+ 	struct bcm_sysport_priv *priv = netdev_priv(dev);
+ 	struct device *kdev = &priv->pdev->dev;
+ 	struct bcm_sysport_tx_ring *ring;
++	unsigned long flags, desc_flags;
+ 	struct bcm_sysport_cb *cb;
+ 	struct netdev_queue *txq;
+ 	u32 len_status, addr_lo;
+ 	unsigned int skb_len;
+-	unsigned long flags;
+ 	dma_addr_t mapping;
+ 	u16 queue;
+ 	int ret;
+@@ -1368,8 +1368,10 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
+ 	ring->desc_count--;
+ 
+ 	/* Ports are latched, so write upper address first */
++	spin_lock_irqsave(&priv->desc_lock, desc_flags);
+ 	tdma_writel(priv, len_status, TDMA_WRITE_PORT_HI(ring->index));
+ 	tdma_writel(priv, addr_lo, TDMA_WRITE_PORT_LO(ring->index));
++	spin_unlock_irqrestore(&priv->desc_lock, desc_flags);
+ 
+ 	/* Check ring space and update SW control flow */
+ 	if (ring->desc_count == 0)
+@@ -2008,6 +2010,7 @@ static int bcm_sysport_open(struct net_device *dev)
+ 	}
+ 
+ 	/* Initialize both hardware and software ring */
++	spin_lock_init(&priv->desc_lock);
+ 	for (i = 0; i < dev->num_tx_queues; i++) {
+ 		ret = bcm_sysport_init_tx_ring(priv, i);
+ 		if (ret) {
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.h b/drivers/net/ethernet/broadcom/bcmsysport.h
+index 3a5cb6f128f57..1276e330e9d03 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.h
++++ b/drivers/net/ethernet/broadcom/bcmsysport.h
+@@ -742,6 +742,7 @@ struct bcm_sysport_priv {
+ 	int			wol_irq;
+ 
+ 	/* Transmit rings */
++	spinlock_t		desc_lock;
+ 	struct bcm_sysport_tx_ring *tx_rings;
+ 
+ 	/* Receive queue */
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+index 5b2dcd97c1078..b8e5ca6700ed5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+@@ -109,7 +109,8 @@ int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev,
+ 
+ 	memcpy(&req->msg, send_msg, sizeof(struct hclge_vf_to_pf_msg));
+ 
+-	trace_hclge_vf_mbx_send(hdev, req);
++	if (test_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state))
++		trace_hclge_vf_mbx_send(hdev, req);
+ 
+ 	/* synchronous send */
+ 	if (need_resp) {
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index d5432d1448c05..1662c0985eca4 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -7654,6 +7654,20 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
+ 	struct vf_mac_filter *entry = NULL;
+ 	int ret = 0;
+ 
++	if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
++	    !vf_data->trusted) {
++		dev_warn(&pdev->dev,
++			 "VF %d requested MAC filter but is administratively denied\n",
++			  vf);
++		return -EINVAL;
++	}
++	if (!is_valid_ether_addr(addr)) {
++		dev_warn(&pdev->dev,
++			 "VF %d attempted to set invalid MAC filter\n",
++			  vf);
++		return -EINVAL;
++	}
++
+ 	switch (info) {
+ 	case E1000_VF_MAC_FILTER_CLR:
+ 		/* remove all unicast MAC filters related to the current VF */
+@@ -7667,20 +7681,6 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
+ 		}
+ 		break;
+ 	case E1000_VF_MAC_FILTER_ADD:
+-		if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
+-		    !vf_data->trusted) {
+-			dev_warn(&pdev->dev,
+-				 "VF %d requested MAC filter but is administratively denied\n",
+-				 vf);
+-			return -EINVAL;
+-		}
+-		if (!is_valid_ether_addr(addr)) {
+-			dev_warn(&pdev->dev,
+-				 "VF %d attempted to set invalid MAC filter\n",
+-				 vf);
+-			return -EINVAL;
+-		}
+-
+ 		/* try to find empty slot in the list */
+ 		list_for_each(pos, &adapter->vf_macs.l) {
+ 			entry = list_entry(pos, struct vf_mac_filter, l);
+diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c
+index 07c9e9e0546f5..fe8c0a26b7201 100644
+--- a/drivers/net/ethernet/intel/igbvf/netdev.c
++++ b/drivers/net/ethernet/intel/igbvf/netdev.c
+@@ -2873,6 +2873,7 @@ static int igbvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	return 0;
+ 
+ err_hw_init:
++	netif_napi_del(&adapter->rx_ring->napi);
+ 	kfree(adapter->tx_ring);
+ 	kfree(adapter->rx_ring);
+ err_sw_init:
+diff --git a/drivers/net/ethernet/intel/igc/igc_i225.c b/drivers/net/ethernet/intel/igc/igc_i225.c
+index 7ec04e48860c6..553d6bc78e6bd 100644
+--- a/drivers/net/ethernet/intel/igc/igc_i225.c
++++ b/drivers/net/ethernet/intel/igc/igc_i225.c
+@@ -636,7 +636,7 @@ s32 igc_set_ltr_i225(struct igc_hw *hw, bool link)
+ 		ltrv = rd32(IGC_LTRMAXV);
+ 		if (ltr_max != (ltrv & IGC_LTRMAXV_LTRV_MASK)) {
+ 			ltrv = IGC_LTRMAXV_LSNP_REQ | ltr_max |
+-			       (scale_min << IGC_LTRMAXV_SCALE_SHIFT);
++			       (scale_max << IGC_LTRMAXV_SCALE_SHIFT);
+ 			wr32(IGC_LTRMAXV, ltrv);
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index ffe322136c584..a3a02e2f92f64 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -5532,6 +5532,10 @@ static int ixgbe_non_sfp_link_config(struct ixgbe_hw *hw)
+ 	if (!speed && hw->mac.ops.get_link_capabilities) {
+ 		ret = hw->mac.ops.get_link_capabilities(hw, &speed,
+ 							&autoneg);
++		/* remove NBASE-T speeds from default autonegotiation
++		 * to accommodate broken network switches in the field
++		 * which cannot cope with advertised NBASE-T speeds
++		 */
+ 		speed &= ~(IXGBE_LINK_SPEED_5GB_FULL |
+ 			   IXGBE_LINK_SPEED_2_5GB_FULL);
+ 	}
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+index 5e339afa682a6..37f2bc6de4b65 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+@@ -3405,6 +3405,9 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
+ 	/* flush pending Tx transactions */
+ 	ixgbe_clear_tx_pending(hw);
+ 
++	/* set MDIO speed before talking to the PHY in case it's the 1st time */
++	ixgbe_set_mdio_speed(hw);
++
+ 	/* PHY ops must be identified and initialized prior to reset */
+ 	status = hw->phy.ops.init(hw);
+ 	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED ||
+diff --git a/drivers/net/ethernet/sfc/ef100_nic.c b/drivers/net/ethernet/sfc/ef100_nic.c
+index 3148fe7703564..cb6897c2193c2 100644
+--- a/drivers/net/ethernet/sfc/ef100_nic.c
++++ b/drivers/net/ethernet/sfc/ef100_nic.c
+@@ -597,6 +597,9 @@ static size_t ef100_update_stats(struct efx_nic *efx,
+ 	ef100_common_stat_mask(mask);
+ 	ef100_ethtool_stat_mask(mask);
+ 
++	if (!mc_stats)
++		return 0;
++
+ 	efx_nic_copy_stats(efx, mc_stats);
+ 	efx_nic_update_stats(ef100_stat_desc, EF100_STAT_COUNT, mask,
+ 			     stats, mc_stats, false);
+diff --git a/drivers/net/netdevsim/bpf.c b/drivers/net/netdevsim/bpf.c
+index 90aafb56f1409..a438202129323 100644
+--- a/drivers/net/netdevsim/bpf.c
++++ b/drivers/net/netdevsim/bpf.c
+@@ -514,6 +514,7 @@ nsim_bpf_map_alloc(struct netdevsim *ns, struct bpf_offloaded_map *offmap)
+ 				goto err_free;
+ 			key = nmap->entry[i].key;
+ 			*key = i;
++			memset(nmap->entry[i].value, 0, offmap->map.value_size);
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
+index 8ee24e351bdc2..6a9178896c909 100644
+--- a/drivers/net/xen-netback/common.h
++++ b/drivers/net/xen-netback/common.h
+@@ -203,6 +203,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
+ 	unsigned int rx_queue_max;
+ 	unsigned int rx_queue_len;
+ 	unsigned long last_rx_time;
++	unsigned int rx_slots_needed;
+ 	bool stalled;
+ 
+ 	struct xenvif_copy_state rx_copy;
+diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
+index accc991d153f7..dbac4c03d21a1 100644
+--- a/drivers/net/xen-netback/rx.c
++++ b/drivers/net/xen-netback/rx.c
+@@ -33,28 +33,36 @@
+ #include <xen/xen.h>
+ #include <xen/events.h>
+ 
+-static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
++/*
++ * Update the needed ring page slots for the first SKB queued.
++ * Note that any call sequence outside the RX thread calling this function
++ * needs to wake up the RX thread via a call of xenvif_kick_thread()
++ * afterwards in order to avoid a race with putting the thread to sleep.
++ */
++static void xenvif_update_needed_slots(struct xenvif_queue *queue,
++				       const struct sk_buff *skb)
+ {
+-	RING_IDX prod, cons;
+-	struct sk_buff *skb;
+-	int needed;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&queue->rx_queue.lock, flags);
++	unsigned int needed = 0;
+ 
+-	skb = skb_peek(&queue->rx_queue);
+-	if (!skb) {
+-		spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
+-		return false;
++	if (skb) {
++		needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
++		if (skb_is_gso(skb))
++			needed++;
++		if (skb->sw_hash)
++			needed++;
+ 	}
+ 
+-	needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
+-	if (skb_is_gso(skb))
+-		needed++;
+-	if (skb->sw_hash)
+-		needed++;
++	WRITE_ONCE(queue->rx_slots_needed, needed);
++}
+ 
+-	spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
++static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
++{
++	RING_IDX prod, cons;
++	unsigned int needed;
++
++	needed = READ_ONCE(queue->rx_slots_needed);
++	if (!needed)
++		return false;
+ 
+ 	do {
+ 		prod = queue->rx.sring->req_prod;
+@@ -80,13 +88,19 @@ void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
+ 
+ 	spin_lock_irqsave(&queue->rx_queue.lock, flags);
+ 
+-	__skb_queue_tail(&queue->rx_queue, skb);
+-
+-	queue->rx_queue_len += skb->len;
+-	if (queue->rx_queue_len > queue->rx_queue_max) {
++	if (queue->rx_queue_len >= queue->rx_queue_max) {
+ 		struct net_device *dev = queue->vif->dev;
+ 
+ 		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
++		kfree_skb(skb);
++		queue->vif->dev->stats.rx_dropped++;
++	} else {
++		if (skb_queue_empty(&queue->rx_queue))
++			xenvif_update_needed_slots(queue, skb);
++
++		__skb_queue_tail(&queue->rx_queue, skb);
++
++		queue->rx_queue_len += skb->len;
+ 	}
+ 
+ 	spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
+@@ -100,6 +114,8 @@ static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
+ 
+ 	skb = __skb_dequeue(&queue->rx_queue);
+ 	if (skb) {
++		xenvif_update_needed_slots(queue, skb_peek(&queue->rx_queue));
++
+ 		queue->rx_queue_len -= skb->len;
+ 		if (queue->rx_queue_len < queue->rx_queue_max) {
+ 			struct netdev_queue *txq;
+@@ -134,6 +150,7 @@ static void xenvif_rx_queue_drop_expired(struct xenvif_queue *queue)
+ 			break;
+ 		xenvif_rx_dequeue(queue);
+ 		kfree_skb(skb);
++		queue->vif->dev->stats.rx_dropped++;
+ 	}
+ }
+ 
+@@ -487,27 +504,31 @@ void xenvif_rx_action(struct xenvif_queue *queue)
+ 	xenvif_rx_copy_flush(queue);
+ }
+ 
+-static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue)
++static RING_IDX xenvif_rx_queue_slots(const struct xenvif_queue *queue)
+ {
+ 	RING_IDX prod, cons;
+ 
+ 	prod = queue->rx.sring->req_prod;
+ 	cons = queue->rx.req_cons;
+ 
++	return prod - cons;
++}
++
++static bool xenvif_rx_queue_stalled(const struct xenvif_queue *queue)
++{
++	unsigned int needed = READ_ONCE(queue->rx_slots_needed);
++
+ 	return !queue->stalled &&
+-		prod - cons < 1 &&
++		xenvif_rx_queue_slots(queue) < needed &&
+ 		time_after(jiffies,
+ 			   queue->last_rx_time + queue->vif->stall_timeout);
+ }
+ 
+ static bool xenvif_rx_queue_ready(struct xenvif_queue *queue)
+ {
+-	RING_IDX prod, cons;
+-
+-	prod = queue->rx.sring->req_prod;
+-	cons = queue->rx.req_cons;
++	unsigned int needed = READ_ONCE(queue->rx_slots_needed);
+ 
+-	return queue->stalled && prod - cons >= 1;
++	return queue->stalled && xenvif_rx_queue_slots(queue) >= needed;
+ }
+ 
+ bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread)
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 8505024b89e9e..fce3a90a335cb 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -148,6 +148,9 @@ struct netfront_queue {
+ 	grant_ref_t gref_rx_head;
+ 	grant_ref_t grant_rx_ref[NET_RX_RING_SIZE];
+ 
++	unsigned int rx_rsp_unconsumed;
++	spinlock_t rx_cons_lock;
++
+ 	struct page_pool *page_pool;
+ 	struct xdp_rxq_info xdp_rxq;
+ };
+@@ -376,12 +379,13 @@ static int xennet_open(struct net_device *dev)
+ 	return 0;
+ }
+ 
+-static void xennet_tx_buf_gc(struct netfront_queue *queue)
++static bool xennet_tx_buf_gc(struct netfront_queue *queue)
+ {
+ 	RING_IDX cons, prod;
+ 	unsigned short id;
+ 	struct sk_buff *skb;
+ 	bool more_to_do;
++	bool work_done = false;
+ 	const struct device *dev = &queue->info->netdev->dev;
+ 
+ 	BUG_ON(!netif_carrier_ok(queue->info->netdev));
+@@ -398,6 +402,8 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
+ 		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
+ 			struct xen_netif_tx_response txrsp;
+ 
++			work_done = true;
++
+ 			RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
+ 			if (txrsp.status == XEN_NETIF_RSP_NULL)
+ 				continue;
+@@ -441,11 +447,13 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
+ 
+ 	xennet_maybe_wake_tx(queue);
+ 
+-	return;
++	return work_done;
+ 
+  err:
+ 	queue->info->broken = true;
+ 	dev_alert(dev, "Disabled for further use\n");
++
++	return work_done;
+ }
+ 
+ struct xennet_gnttab_make_txreq {
+@@ -836,6 +844,16 @@ static int xennet_close(struct net_device *dev)
+ 	return 0;
+ }
+ 
++static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&queue->rx_cons_lock, flags);
++	queue->rx.rsp_cons = val;
++	queue->rx_rsp_unconsumed = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
++	spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
++}
++
+ static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
+ 				grant_ref_t ref)
+ {
+@@ -887,7 +905,7 @@ static int xennet_get_extras(struct netfront_queue *queue,
+ 		xennet_move_rx_slot(queue, skb, ref);
+ 	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
+ 
+-	queue->rx.rsp_cons = cons;
++	xennet_set_rx_rsp_cons(queue, cons);
+ 	return err;
+ }
+ 
+@@ -1041,7 +1059,7 @@ next:
+ 	}
+ 
+ 	if (unlikely(err))
+-		queue->rx.rsp_cons = cons + slots;
++		xennet_set_rx_rsp_cons(queue, cons + slots);
+ 
+ 	return err;
+ }
+@@ -1095,7 +1113,8 @@ static int xennet_fill_frags(struct netfront_queue *queue,
+ 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
+ 		}
+ 		if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
+-			queue->rx.rsp_cons = ++cons + skb_queue_len(list);
++			xennet_set_rx_rsp_cons(queue,
++					       ++cons + skb_queue_len(list));
+ 			kfree_skb(nskb);
+ 			return -ENOENT;
+ 		}
+@@ -1108,7 +1127,7 @@ static int xennet_fill_frags(struct netfront_queue *queue,
+ 		kfree_skb(nskb);
+ 	}
+ 
+-	queue->rx.rsp_cons = cons;
++	xennet_set_rx_rsp_cons(queue, cons);
+ 
+ 	return 0;
+ }
+@@ -1231,7 +1250,9 @@ err:
+ 
+ 			if (unlikely(xennet_set_skb_gso(skb, gso))) {
+ 				__skb_queue_head(&tmpq, skb);
+-				queue->rx.rsp_cons += skb_queue_len(&tmpq);
++				xennet_set_rx_rsp_cons(queue,
++						       queue->rx.rsp_cons +
++						       skb_queue_len(&tmpq));
+ 				goto err;
+ 			}
+ 		}
+@@ -1255,7 +1276,8 @@ err:
+ 
+ 		__skb_queue_tail(&rxq, skb);
+ 
+-		i = ++queue->rx.rsp_cons;
++		i = queue->rx.rsp_cons + 1;
++		xennet_set_rx_rsp_cons(queue, i);
+ 		work_done++;
+ 	}
+ 	if (need_xdp_flush)
+@@ -1419,40 +1441,79 @@ static int xennet_set_features(struct net_device *dev,
+ 	return 0;
+ }
+ 
+-static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
++static bool xennet_handle_tx(struct netfront_queue *queue, unsigned int *eoi)
+ {
+-	struct netfront_queue *queue = dev_id;
+ 	unsigned long flags;
+ 
+-	if (queue->info->broken)
+-		return IRQ_HANDLED;
++	if (unlikely(queue->info->broken))
++		return false;
+ 
+ 	spin_lock_irqsave(&queue->tx_lock, flags);
+-	xennet_tx_buf_gc(queue);
++	if (xennet_tx_buf_gc(queue))
++		*eoi = 0;
+ 	spin_unlock_irqrestore(&queue->tx_lock, flags);
+ 
++	return true;
++}
++
++static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
++{
++	unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
++
++	if (likely(xennet_handle_tx(dev_id, &eoiflag)))
++		xen_irq_lateeoi(irq, eoiflag);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+-static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
++static bool xennet_handle_rx(struct netfront_queue *queue, unsigned int *eoi)
+ {
+-	struct netfront_queue *queue = dev_id;
+-	struct net_device *dev = queue->info->netdev;
++	unsigned int work_queued;
++	unsigned long flags;
+ 
+-	if (queue->info->broken)
+-		return IRQ_HANDLED;
++	if (unlikely(queue->info->broken))
++		return false;
++
++	spin_lock_irqsave(&queue->rx_cons_lock, flags);
++	work_queued = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
++	if (work_queued > queue->rx_rsp_unconsumed) {
++		queue->rx_rsp_unconsumed = work_queued;
++		*eoi = 0;
++	} else if (unlikely(work_queued < queue->rx_rsp_unconsumed)) {
++		const struct device *dev = &queue->info->netdev->dev;
++
++		spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
++		dev_alert(dev, "RX producer index going backwards\n");
++		dev_alert(dev, "Disabled for further use\n");
++		queue->info->broken = true;
++		return false;
++	}
++	spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
+ 
+-	if (likely(netif_carrier_ok(dev) &&
+-		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
++	if (likely(netif_carrier_ok(queue->info->netdev) && work_queued))
+ 		napi_schedule(&queue->napi);
+ 
++	return true;
++}
++
++static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
++{
++	unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
++
++	if (likely(xennet_handle_rx(dev_id, &eoiflag)))
++		xen_irq_lateeoi(irq, eoiflag);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
+ {
+-	xennet_tx_interrupt(irq, dev_id);
+-	xennet_rx_interrupt(irq, dev_id);
++	unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
++
++	if (xennet_handle_tx(dev_id, &eoiflag) &&
++	    xennet_handle_rx(dev_id, &eoiflag))
++		xen_irq_lateeoi(irq, eoiflag);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -1770,9 +1831,10 @@ static int setup_netfront_single(struct netfront_queue *queue)
+ 	if (err < 0)
+ 		goto fail;
+ 
+-	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
+-					xennet_interrupt,
+-					0, queue->info->netdev->name, queue);
++	err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn,
++						xennet_interrupt, 0,
++						queue->info->netdev->name,
++						queue);
+ 	if (err < 0)
+ 		goto bind_fail;
+ 	queue->rx_evtchn = queue->tx_evtchn;
+@@ -1800,18 +1862,18 @@ static int setup_netfront_split(struct netfront_queue *queue)
+ 
+ 	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+ 		 "%s-tx", queue->name);
+-	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
+-					xennet_tx_interrupt,
+-					0, queue->tx_irq_name, queue);
++	err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn,
++						xennet_tx_interrupt, 0,
++						queue->tx_irq_name, queue);
+ 	if (err < 0)
+ 		goto bind_tx_fail;
+ 	queue->tx_irq = err;
+ 
+ 	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+ 		 "%s-rx", queue->name);
+-	err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
+-					xennet_rx_interrupt,
+-					0, queue->rx_irq_name, queue);
++	err = bind_evtchn_to_irqhandler_lateeoi(queue->rx_evtchn,
++						xennet_rx_interrupt, 0,
++						queue->rx_irq_name, queue);
+ 	if (err < 0)
+ 		goto bind_rx_fail;
+ 	queue->rx_irq = err;
+@@ -1913,6 +1975,7 @@ static int xennet_init_queue(struct netfront_queue *queue)
+ 
+ 	spin_lock_init(&queue->tx_lock);
+ 	spin_lock_init(&queue->rx_lock);
++	spin_lock_init(&queue->rx_cons_lock);
+ 
+ 	timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0);
+ 
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index db7475dc601f5..57314fec2261b 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -828,9 +828,6 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
+ 		goto out_disable;
+ 	}
+ 
+-	/* Ensure that all table entries are masked. */
+-	msix_mask_all(base, tsize);
+-
+ 	ret = msix_setup_entries(dev, base, entries, nvec, affd);
+ 	if (ret)
+ 		goto out_disable;
+@@ -853,6 +850,16 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
+ 	/* Set MSI-X enabled bits and unmask the function */
+ 	pci_intx_for_msi(dev, 0);
+ 	dev->msix_enabled = 1;
++
++	/*
++	 * Ensure that all table entries are masked to prevent
++	 * stale entries from firing in a crash kernel.
++	 *
++	 * Done late to deal with a broken Marvell NVME device
++	 * which takes the MSI-X mask bits into account even
++	 * when MSI-X is disabled, which prevents MSI delivery.
++	 */
++	msix_mask_all(base, tsize);
+ 	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
+ 
+ 	pcibios_free_irq(dev);
+@@ -879,7 +886,7 @@ out_free:
+ 	free_msi_irqs(dev);
+ 
+ out_disable:
+-	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
++	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE, 0);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 9188191433439..6b00de6b6f0ef 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -1188,7 +1188,7 @@ static int p_fill_from_dev_buffer(struct scsi_cmnd *scp, const void *arr,
+ 		 __func__, off_dst, scsi_bufflen(scp), act_len,
+ 		 scsi_get_resid(scp));
+ 	n = scsi_bufflen(scp) - (off_dst + act_len);
+-	scsi_set_resid(scp, min_t(int, scsi_get_resid(scp), n));
++	scsi_set_resid(scp, min_t(u32, scsi_get_resid(scp), n));
+ 	return 0;
+ }
+ 
+@@ -1561,7 +1561,8 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+ 	unsigned char pq_pdt;
+ 	unsigned char *arr;
+ 	unsigned char *cmd = scp->cmnd;
+-	int alloc_len, n, ret;
++	u32 alloc_len, n;
++	int ret;
+ 	bool have_wlun, is_disk, is_zbc, is_disk_zbc;
+ 
+ 	alloc_len = get_unaligned_be16(cmd + 3);
+@@ -1584,7 +1585,8 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+ 		kfree(arr);
+ 		return check_condition_result;
+ 	} else if (0x1 & cmd[1]) {  /* EVPD bit set */
+-		int lu_id_num, port_group_id, target_dev_id, len;
++		int lu_id_num, port_group_id, target_dev_id;
++		u32 len;
+ 		char lu_id_str[6];
+ 		int host_no = devip->sdbg_host->shost->host_no;
+ 		
+@@ -1675,9 +1677,9 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+ 			kfree(arr);
+ 			return check_condition_result;
+ 		}
+-		len = min(get_unaligned_be16(arr + 2) + 4, alloc_len);
++		len = min_t(u32, get_unaligned_be16(arr + 2) + 4, alloc_len);
+ 		ret = fill_from_dev_buffer(scp, arr,
+-			    min(len, SDEBUG_MAX_INQ_ARR_SZ));
++			    min_t(u32, len, SDEBUG_MAX_INQ_ARR_SZ));
+ 		kfree(arr);
+ 		return ret;
+ 	}
+@@ -1713,7 +1715,7 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+ 	}
+ 	put_unaligned_be16(0x2100, arr + n);	/* SPL-4 no version claimed */
+ 	ret = fill_from_dev_buffer(scp, arr,
+-			    min_t(int, alloc_len, SDEBUG_LONG_INQ_SZ));
++			    min_t(u32, alloc_len, SDEBUG_LONG_INQ_SZ));
+ 	kfree(arr);
+ 	return ret;
+ }
+@@ -1728,8 +1730,8 @@ static int resp_requests(struct scsi_cmnd *scp,
+ 	unsigned char *cmd = scp->cmnd;
+ 	unsigned char arr[SCSI_SENSE_BUFFERSIZE];	/* assume >= 18 bytes */
+ 	bool dsense = !!(cmd[1] & 1);
+-	int alloc_len = cmd[4];
+-	int len = 18;
++	u32 alloc_len = cmd[4];
++	u32 len = 18;
+ 	int stopped_state = atomic_read(&devip->stopped);
+ 
+ 	memset(arr, 0, sizeof(arr));
+@@ -1773,7 +1775,7 @@ static int resp_requests(struct scsi_cmnd *scp,
+ 			arr[7] = 0xa;
+ 		}
+ 	}
+-	return fill_from_dev_buffer(scp, arr, min_t(int, len, alloc_len));
++	return fill_from_dev_buffer(scp, arr, min_t(u32, len, alloc_len));
+ }
+ 
+ static int resp_start_stop(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+@@ -2311,7 +2313,8 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
+ {
+ 	int pcontrol, pcode, subpcode, bd_len;
+ 	unsigned char dev_spec;
+-	int alloc_len, offset, len, target_dev_id;
++	u32 alloc_len, offset, len;
++	int target_dev_id;
+ 	int target = scp->device->id;
+ 	unsigned char *ap;
+ 	unsigned char arr[SDEBUG_MAX_MSENSE_SZ];
+@@ -2467,7 +2470,7 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
+ 		arr[0] = offset - 1;
+ 	else
+ 		put_unaligned_be16((offset - 2), arr + 0);
+-	return fill_from_dev_buffer(scp, arr, min_t(int, alloc_len, offset));
++	return fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, offset));
+ }
+ 
+ #define SDEBUG_MAX_MSELECT_SZ 512
+@@ -2498,11 +2501,11 @@ static int resp_mode_select(struct scsi_cmnd *scp,
+ 			    __func__, param_len, res);
+ 	md_len = mselect6 ? (arr[0] + 1) : (get_unaligned_be16(arr + 0) + 2);
+ 	bd_len = mselect6 ? arr[3] : get_unaligned_be16(arr + 6);
+-	if (md_len > 2) {
++	off = bd_len + (mselect6 ? 4 : 8);
++	if (md_len > 2 || off >= res) {
+ 		mk_sense_invalid_fld(scp, SDEB_IN_DATA, 0, -1);
+ 		return check_condition_result;
+ 	}
+-	off = bd_len + (mselect6 ? 4 : 8);
+ 	mpage = arr[off] & 0x3f;
+ 	ps = !!(arr[off] & 0x80);
+ 	if (ps) {
+@@ -2582,7 +2585,8 @@ static int resp_ie_l_pg(unsigned char *arr)
+ static int resp_log_sense(struct scsi_cmnd *scp,
+ 			  struct sdebug_dev_info *devip)
+ {
+-	int ppc, sp, pcode, subpcode, alloc_len, len, n;
++	int ppc, sp, pcode, subpcode;
++	u32 alloc_len, len, n;
+ 	unsigned char arr[SDEBUG_MAX_LSENSE_SZ];
+ 	unsigned char *cmd = scp->cmnd;
+ 
+@@ -2652,9 +2656,9 @@ static int resp_log_sense(struct scsi_cmnd *scp,
+ 		mk_sense_invalid_fld(scp, SDEB_IN_CDB, 3, -1);
+ 		return check_condition_result;
+ 	}
+-	len = min_t(int, get_unaligned_be16(arr + 2) + 4, alloc_len);
++	len = min_t(u32, get_unaligned_be16(arr + 2) + 4, alloc_len);
+ 	return fill_from_dev_buffer(scp, arr,
+-		    min_t(int, len, SDEBUG_MAX_INQ_ARR_SZ));
++		    min_t(u32, len, SDEBUG_MAX_INQ_ARR_SZ));
+ }
+ 
+ static inline bool sdebug_dev_is_zoned(struct sdebug_dev_info *devip)
+@@ -4238,6 +4242,8 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+ 		mk_sense_invalid_opcode(scp);
+ 		return check_condition_result;
+ 	}
++	if (vnum == 0)
++		return 0;	/* not an error */
+ 	a_num = is_bytchk3 ? 1 : vnum;
+ 	/* Treat following check like one for read (i.e. no write) access */
+ 	ret = check_device_access_params(scp, lba, a_num, false);
+@@ -4301,6 +4307,8 @@ static int resp_report_zones(struct scsi_cmnd *scp,
+ 	}
+ 	zs_lba = get_unaligned_be64(cmd + 2);
+ 	alloc_len = get_unaligned_be32(cmd + 10);
++	if (alloc_len == 0)
++		return 0;	/* not an error */
+ 	rep_opts = cmd[14] & 0x3f;
+ 	partial = cmd[14] & 0x80;
+ 
+@@ -4405,7 +4413,7 @@ static int resp_report_zones(struct scsi_cmnd *scp,
+ 	put_unaligned_be64(sdebug_capacity - 1, arr + 8);
+ 
+ 	rep_len = (unsigned long)desc - (unsigned long)arr;
+-	ret = fill_from_dev_buffer(scp, arr, min_t(int, alloc_len, rep_len));
++	ret = fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, rep_len));
+ 
+ fini:
+ 	read_unlock(macc_lckp);
+diff --git a/drivers/soc/imx/soc-imx.c b/drivers/soc/imx/soc-imx.c
+index 01bfea1cb64a8..1e8780299d5c4 100644
+--- a/drivers/soc/imx/soc-imx.c
++++ b/drivers/soc/imx/soc-imx.c
+@@ -33,6 +33,10 @@ static int __init imx_soc_device_init(void)
+ 	u32 val;
+ 	int ret;
+ 
++	/* Return early if this is running on devices with different SoCs */
++	if (!__mxc_cpu_type)
++		return 0;
++
+ 	if (of_machine_is_compatible("fsl,ls1021a"))
+ 		return 0;
+ 
+diff --git a/drivers/soc/tegra/fuse/fuse-tegra.c b/drivers/soc/tegra/fuse/fuse-tegra.c
+index 94b60a692b515..4388a4a5e0919 100644
+--- a/drivers/soc/tegra/fuse/fuse-tegra.c
++++ b/drivers/soc/tegra/fuse/fuse-tegra.c
+@@ -260,7 +260,7 @@ static struct platform_driver tegra_fuse_driver = {
+ };
+ builtin_platform_driver(tegra_fuse_driver);
+ 
+-bool __init tegra_fuse_read_spare(unsigned int spare)
++u32 __init tegra_fuse_read_spare(unsigned int spare)
+ {
+ 	unsigned int offset = fuse->soc->info->spare + spare * 4;
+ 
+diff --git a/drivers/soc/tegra/fuse/fuse.h b/drivers/soc/tegra/fuse/fuse.h
+index e057a58e20603..21887a57cf2c2 100644
+--- a/drivers/soc/tegra/fuse/fuse.h
++++ b/drivers/soc/tegra/fuse/fuse.h
+@@ -63,7 +63,7 @@ struct tegra_fuse {
+ void tegra_init_revision(void);
+ void tegra_init_apbmisc(void);
+ 
+-bool __init tegra_fuse_read_spare(unsigned int spare);
++u32 __init tegra_fuse_read_spare(unsigned int spare);
+ u32 __init tegra_fuse_read_early(unsigned int offset);
+ 
+ u8 tegra_get_major_rev(void);
+diff --git a/drivers/tee/amdtee/core.c b/drivers/tee/amdtee/core.c
+index da6b88e80dc07..297dc62bca298 100644
+--- a/drivers/tee/amdtee/core.c
++++ b/drivers/tee/amdtee/core.c
+@@ -203,9 +203,8 @@ static int copy_ta_binary(struct tee_context *ctx, void *ptr, void **ta,
+ 
+ 	*ta_size = roundup(fw->size, PAGE_SIZE);
+ 	*ta = (void *)__get_free_pages(GFP_KERNEL, get_order(*ta_size));
+-	if (IS_ERR(*ta)) {
+-		pr_err("%s: get_free_pages failed 0x%llx\n", __func__,
+-		       (u64)*ta);
++	if (!*ta) {
++		pr_err("%s: get_free_pages failed\n", __func__);
+ 		rc = -ENOMEM;
+ 		goto rel_fw;
+ 	}
+diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
+index 8f143c09a1696..7948660e042fd 100644
+--- a/drivers/tty/hvc/hvc_xen.c
++++ b/drivers/tty/hvc/hvc_xen.c
+@@ -37,6 +37,8 @@ struct xencons_info {
+ 	struct xenbus_device *xbdev;
+ 	struct xencons_interface *intf;
+ 	unsigned int evtchn;
++	XENCONS_RING_IDX out_cons;
++	unsigned int out_cons_same;
+ 	struct hvc_struct *hvc;
+ 	int irq;
+ 	int vtermno;
+@@ -138,6 +140,8 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
+ 	XENCONS_RING_IDX cons, prod;
+ 	int recv = 0;
+ 	struct xencons_info *xencons = vtermno_to_xencons(vtermno);
++	unsigned int eoiflag = 0;
++
+ 	if (xencons == NULL)
+ 		return -EINVAL;
+ 	intf = xencons->intf;
+@@ -157,7 +161,27 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
+ 	mb();			/* read ring before consuming */
+ 	intf->in_cons = cons;
+ 
+-	notify_daemon(xencons);
++	/*
++	 * When to mark interrupt having been spurious:
++	 * - there was no new data to be read, and
++	 * - the backend did not consume some output bytes, and
++	 * - the previous round with no read data didn't see consumed bytes
++	 *   (we might have a race with an interrupt being in flight while
++	 *   updating xencons->out_cons, so account for that by allowing one
++	 *   round without any visible reason)
++	 */
++	if (intf->out_cons != xencons->out_cons) {
++		xencons->out_cons = intf->out_cons;
++		xencons->out_cons_same = 0;
++	}
++	if (recv) {
++		notify_daemon(xencons);
++	} else if (xencons->out_cons_same++ > 1) {
++		eoiflag = XEN_EOI_FLAG_SPURIOUS;
++	}
++
++	xen_irq_lateeoi(xencons->irq, eoiflag);
++
+ 	return recv;
+ }
+ 
+@@ -386,7 +410,7 @@ static int xencons_connect_backend(struct xenbus_device *dev,
+ 	if (ret)
+ 		return ret;
+ 	info->evtchn = evtchn;
+-	irq = bind_evtchn_to_irq(evtchn);
++	irq = bind_interdomain_evtchn_to_irq_lateeoi(dev->otherend_id, evtchn);
+ 	if (irq < 0)
+ 		return irq;
+ 	info->irq = irq;
+@@ -550,7 +574,7 @@ static int __init xen_hvc_init(void)
+ 			return r;
+ 
+ 		info = vtermno_to_xencons(HVC_COOKIE);
+-		info->irq = bind_evtchn_to_irq(info->evtchn);
++		info->irq = bind_evtchn_to_irq_lateeoi(info->evtchn);
+ 	}
+ 	if (info->irq < 0)
+ 		info->irq = 0; /* NO_IRQ */
+diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c
+index 1363e659dc1db..48c64e68017cd 100644
+--- a/drivers/tty/n_hdlc.c
++++ b/drivers/tty/n_hdlc.c
+@@ -139,6 +139,8 @@ struct n_hdlc {
+ 	struct n_hdlc_buf_list	rx_buf_list;
+ 	struct n_hdlc_buf_list	tx_free_buf_list;
+ 	struct n_hdlc_buf_list	rx_free_buf_list;
++	struct work_struct	write_work;
++	struct tty_struct	*tty_for_write_work;
+ };
+ 
+ /*
+@@ -153,6 +155,7 @@ static struct n_hdlc_buf *n_hdlc_buf_get(struct n_hdlc_buf_list *list);
+ /* Local functions */
+ 
+ static struct n_hdlc *n_hdlc_alloc(void);
++static void n_hdlc_tty_write_work(struct work_struct *work);
+ 
+ /* max frame size for memory allocations */
+ static int maxframe = 4096;
+@@ -209,6 +212,8 @@ static void n_hdlc_tty_close(struct tty_struct *tty)
+ 	wake_up_interruptible(&tty->read_wait);
+ 	wake_up_interruptible(&tty->write_wait);
+ 
++	cancel_work_sync(&n_hdlc->write_work);
++
+ 	n_hdlc_free_buf_list(&n_hdlc->rx_free_buf_list);
+ 	n_hdlc_free_buf_list(&n_hdlc->tx_free_buf_list);
+ 	n_hdlc_free_buf_list(&n_hdlc->rx_buf_list);
+@@ -240,6 +245,8 @@ static int n_hdlc_tty_open(struct tty_struct *tty)
+ 		return -ENFILE;
+ 	}
+ 
++	INIT_WORK(&n_hdlc->write_work, n_hdlc_tty_write_work);
++	n_hdlc->tty_for_write_work = tty;
+ 	tty->disc_data = n_hdlc;
+ 	tty->receive_room = 65536;
+ 
+@@ -333,6 +340,20 @@ check_again:
+ 		goto check_again;
+ }	/* end of n_hdlc_send_frames() */
+ 
++/**
++ * n_hdlc_tty_write_work - Asynchronous callback for transmit wakeup
++ * @work: pointer to work_struct
++ *
++ * Called when low level device driver can accept more send data.
++ */
++static void n_hdlc_tty_write_work(struct work_struct *work)
++{
++	struct n_hdlc *n_hdlc = container_of(work, struct n_hdlc, write_work);
++	struct tty_struct *tty = n_hdlc->tty_for_write_work;
++
++	n_hdlc_send_frames(n_hdlc, tty);
++}	/* end of n_hdlc_tty_write_work() */
++
+ /**
+  * n_hdlc_tty_wakeup - Callback for transmit wakeup
+  * @tty: pointer to associated tty instance data
+@@ -343,7 +364,7 @@ static void n_hdlc_tty_wakeup(struct tty_struct *tty)
+ {
+ 	struct n_hdlc *n_hdlc = tty->disc_data;
+ 
+-	n_hdlc_send_frames(n_hdlc, tty);
++	schedule_work(&n_hdlc->write_work);
+ }	/* end of n_hdlc_tty_wakeup() */
+ 
+ /**
+diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c
+index 31c9e83ea3cb2..251f0018ae8ca 100644
+--- a/drivers/tty/serial/8250/8250_fintek.c
++++ b/drivers/tty/serial/8250/8250_fintek.c
+@@ -290,25 +290,6 @@ static void fintek_8250_set_max_fifo(struct fintek_8250 *pdata)
+ 	}
+ }
+ 
+-static void fintek_8250_goto_highspeed(struct uart_8250_port *uart,
+-			      struct fintek_8250 *pdata)
+-{
+-	sio_write_reg(pdata, LDN, pdata->index);
+-
+-	switch (pdata->pid) {
+-	case CHIP_ID_F81966:
+-	case CHIP_ID_F81866: /* set uart clock for high speed serial mode */
+-		sio_write_mask_reg(pdata, F81866_UART_CLK,
+-			F81866_UART_CLK_MASK,
+-			F81866_UART_CLK_14_769MHZ);
+-
+-		uart->port.uartclk = 921600 * 16;
+-		break;
+-	default: /* leave clock speed untouched */
+-		break;
+-	}
+-}
+-
+ static void fintek_8250_set_termios(struct uart_port *port,
+ 				    struct ktermios *termios,
+ 				    struct ktermios *old)
+@@ -430,7 +411,6 @@ static int probe_setup_port(struct fintek_8250 *pdata,
+ 
+ 				fintek_8250_set_irq_mode(pdata, level_mode);
+ 				fintek_8250_set_max_fifo(pdata);
+-				fintek_8250_goto_highspeed(uart, pdata);
+ 
+ 				fintek_8250_exit_key(addr[i]);
+ 
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 61f686c5bd9c6..baf80e2ac7d8e 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -435,6 +435,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x1532, 0x0116), .driver_info =
+ 			USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL },
+ 
++	/* Lenovo USB-C to Ethernet Adapter RTL8153-04 */
++	{ USB_DEVICE(0x17ef, 0x720c), .driver_info = USB_QUIRK_NO_LPM },
++
+ 	/* Lenovo Powered USB-C Travel Hub (4X90S92381, RTL8153 GigE) */
+ 	{ USB_DEVICE(0x17ef, 0x721e), .driver_info = USB_QUIRK_NO_LPM },
+ 
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index 5f18acac74068..49d333f02af4e 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -542,6 +542,9 @@ static int dwc2_driver_probe(struct platform_device *dev)
+ 		ggpio |= GGPIO_STM32_OTG_GCCFG_IDEN;
+ 		ggpio |= GGPIO_STM32_OTG_GCCFG_VBDEN;
+ 		dwc2_writel(hsotg, ggpio, GGPIO);
++
++		/* ID/VBUS detection startup time */
++		usleep_range(5000, 7000);
+ 	}
+ 
+ 	retval = dwc2_drd_init(hsotg);
+diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
+index be4ecbabdd586..6c0434100e38c 100644
+--- a/drivers/usb/early/xhci-dbc.c
++++ b/drivers/usb/early/xhci-dbc.c
+@@ -14,7 +14,6 @@
+ #include <linux/pci_ids.h>
+ #include <linux/memblock.h>
+ #include <linux/io.h>
+-#include <linux/iopoll.h>
+ #include <asm/pci-direct.h>
+ #include <asm/fixmap.h>
+ #include <linux/bcd.h>
+@@ -136,9 +135,17 @@ static int handshake(void __iomem *ptr, u32 mask, u32 done, int wait, int delay)
+ {
+ 	u32 result;
+ 
+-	return readl_poll_timeout_atomic(ptr, result,
+-					 ((result & mask) == done),
+-					 delay, wait);
++	/* Can not use readl_poll_timeout_atomic() for early boot things */
++	do {
++		result = readl(ptr);
++		result &= mask;
++		if (result == done)
++			return 0;
++		udelay(delay);
++		wait -= delay;
++	} while (wait > 0);
++
++	return -ETIMEDOUT;
+ }
+ 
+ static void __init xdbc_bios_handoff(void)
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 426132988512d..8bec0cbf844ed 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -1649,14 +1649,14 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ 	u8				endp;
+ 
+ 	if (w_length > USB_COMP_EP0_BUFSIZ) {
+-		if (ctrl->bRequestType == USB_DIR_OUT) {
+-			goto done;
+-		} else {
++		if (ctrl->bRequestType & USB_DIR_IN) {
+ 			/* Cast away the const, we are going to overwrite on purpose. */
+ 			__le16 *temp = (__le16 *)&ctrl->wLength;
+ 
+ 			*temp = cpu_to_le16(USB_COMP_EP0_BUFSIZ);
+ 			w_length = USB_COMP_EP0_BUFSIZ;
++		} else {
++			goto done;
+ 		}
+ 	}
+ 
+diff --git a/drivers/usb/gadget/legacy/dbgp.c b/drivers/usb/gadget/legacy/dbgp.c
+index 355bc7dab9d5f..6bcbad3825802 100644
+--- a/drivers/usb/gadget/legacy/dbgp.c
++++ b/drivers/usb/gadget/legacy/dbgp.c
+@@ -346,14 +346,14 @@ static int dbgp_setup(struct usb_gadget *gadget,
+ 	u16 len = 0;
+ 
+ 	if (length > DBGP_REQ_LEN) {
+-		if (ctrl->bRequestType == USB_DIR_OUT) {
+-			return err;
+-		} else {
++		if (ctrl->bRequestType & USB_DIR_IN) {
+ 			/* Cast away the const, we are going to overwrite on purpose. */
+ 			__le16 *temp = (__le16 *)&ctrl->wLength;
+ 
+ 			*temp = cpu_to_le16(DBGP_REQ_LEN);
+ 			length = DBGP_REQ_LEN;
++		} else {
++			return err;
+ 		}
+ 	}
+ 
+diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
+index 04b9c4f5f129d..217d2b66fa514 100644
+--- a/drivers/usb/gadget/legacy/inode.c
++++ b/drivers/usb/gadget/legacy/inode.c
+@@ -1336,14 +1336,14 @@ gadgetfs_setup (struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ 	u16				w_length = le16_to_cpu(ctrl->wLength);
+ 
+ 	if (w_length > RBUF_SIZE) {
+-		if (ctrl->bRequestType == USB_DIR_OUT) {
+-			return value;
+-		} else {
++		if (ctrl->bRequestType & USB_DIR_IN) {
+ 			/* Cast away the const, we are going to overwrite on purpose. */
+ 			__le16 *temp = (__le16 *)&ctrl->wLength;
+ 
+ 			*temp = cpu_to_le16(RBUF_SIZE);
+ 			w_length = RBUF_SIZE;
++		} else {
++			return value;
+ 		}
+ 	}
+ 
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 80251a2579fda..c9133df71e52b 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -70,6 +70,8 @@
+ #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4		0x161e
+ #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5		0x15d6
+ #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6		0x15d7
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7		0x161c
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8		0x161f
+ 
+ #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI			0x1042
+ #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI		0x1142
+@@ -325,7 +327,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 ||
+ 	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 ||
+ 	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 ||
+-	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6))
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8))
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+ 
+ 	if (xhci->quirks & XHCI_RESET_ON_RESUME)
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 6d858bdaf33ce..f906c1308f9f9 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -1750,6 +1750,8 @@ static int cp2105_gpioconf_init(struct usb_serial *serial)
+ 
+ 	/*  2 banks of GPIO - One for the pins taken from each serial port */
+ 	if (intf_num == 0) {
++		priv->gc.ngpio = 2;
++
+ 		if (mode.eci == CP210X_PIN_MODE_MODEM) {
+ 			/* mark all GPIOs of this interface as reserved */
+ 			priv->gpio_altfunc = 0xff;
+@@ -1760,8 +1762,9 @@ static int cp2105_gpioconf_init(struct usb_serial *serial)
+ 		priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) &
+ 						CP210X_ECI_GPIO_MODE_MASK) >>
+ 						CP210X_ECI_GPIO_MODE_OFFSET);
+-		priv->gc.ngpio = 2;
+ 	} else if (intf_num == 1) {
++		priv->gc.ngpio = 3;
++
+ 		if (mode.sci == CP210X_PIN_MODE_MODEM) {
+ 			/* mark all GPIOs of this interface as reserved */
+ 			priv->gpio_altfunc = 0xff;
+@@ -1772,7 +1775,6 @@ static int cp2105_gpioconf_init(struct usb_serial *serial)
+ 		priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) &
+ 						CP210X_SCI_GPIO_MODE_MASK) >>
+ 						CP210X_SCI_GPIO_MODE_OFFSET);
+-		priv->gc.ngpio = 3;
+ 	} else {
+ 		return -ENODEV;
+ 	}
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 28ffe4e358b77..21b1488fe4461 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1219,6 +1219,14 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1063, 0xff),	/* Telit LN920 (ECM) */
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1070, 0xff),	/* Telit FN990 (rmnet) */
++	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1071, 0xff),	/* Telit FN990 (MBIM) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1072, 0xff),	/* Telit FN990 (RNDIS) */
++	  .driver_info = NCTRL(2) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff),	/* Telit FN990 (ECM) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index fdeb20f2f174c..e4d60009d9083 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -196,7 +196,7 @@ static int vhost_vdpa_config_validate(struct vhost_vdpa *v,
+ 		break;
+ 	}
+ 
+-	if (c->len == 0)
++	if (c->len == 0 || c->off > size)
+ 		return -EINVAL;
+ 
+ 	if (c->len > size - c->off)
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index e9432dbbec0a7..cce75d3b3ba05 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -263,7 +263,7 @@ size_t virtio_max_dma_size(struct virtio_device *vdev)
+ 	size_t max_segment_size = SIZE_MAX;
+ 
+ 	if (vring_use_dma_api(vdev))
+-		max_segment_size = dma_max_mapping_size(&vdev->dev);
++		max_segment_size = dma_max_mapping_size(vdev->dev.parent);
+ 
+ 	return max_segment_size;
+ }
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index bab2091c81683..a5bcad0278835 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1603,6 +1603,14 @@ again:
+ 	}
+ 	return root;
+ fail:
++	/*
++	 * If our caller provided us an anonymous device, then it's his
++	 * responsability to free it in case we fail. So we have to set our
++	 * root's anon_dev to 0 to avoid a double free, once by btrfs_put_root()
++	 * and once again by our caller.
++	 */
++	if (anon_dev)
++		root->anon_dev = 0;
+ 	btrfs_put_root(root);
+ 	return ERR_PTR(ret);
+ }
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 4a5a3ae0acaae..09ef6419e890a 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1109,6 +1109,7 @@ again:
+ 					     parent_objectid, victim_name,
+ 					     victim_name_len);
+ 			if (ret < 0) {
++				kfree(victim_name);
+ 				return ret;
+ 			} else if (!ret) {
+ 				ret = -ENOENT;
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 676f551953060..d3f67271d3c72 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -4359,7 +4359,7 @@ void ceph_get_fmode(struct ceph_inode_info *ci, int fmode, int count)
+ {
+ 	struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(ci->vfs_inode.i_sb);
+ 	int bits = (fmode << 1) | 1;
+-	bool is_opened = false;
++	bool already_opened = false;
+ 	int i;
+ 
+ 	if (count == 1)
+@@ -4367,19 +4367,19 @@ void ceph_get_fmode(struct ceph_inode_info *ci, int fmode, int count)
+ 
+ 	spin_lock(&ci->i_ceph_lock);
+ 	for (i = 0; i < CEPH_FILE_MODE_BITS; i++) {
+-		if (bits & (1 << i))
+-			ci->i_nr_by_mode[i] += count;
+-
+ 		/*
+-		 * If any of the mode ref is larger than 1,
++		 * If any of the mode ref is larger than 0,
+ 		 * that means it has been already opened by
+ 		 * others. Just skip checking the PIN ref.
+ 		 */
+-		if (i && ci->i_nr_by_mode[i] > 1)
+-			is_opened = true;
++		if (i && ci->i_nr_by_mode[i])
++			already_opened = true;
++
++		if (bits & (1 << i))
++			ci->i_nr_by_mode[i] += count;
+ 	}
+ 
+-	if (!is_opened)
++	if (!already_opened)
+ 		percpu_counter_inc(&mdsc->metric.opened_inodes);
+ 	spin_unlock(&ci->i_ceph_lock);
+ }
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 76e347a8cf088..981a915906314 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -3696,7 +3696,7 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 	struct ceph_pagelist *pagelist = recon_state->pagelist;
+ 	struct dentry *dentry;
+ 	char *path;
+-	int pathlen, err;
++	int pathlen = 0, err;
+ 	u64 pathbase;
+ 	u64 snap_follows;
+ 
+@@ -3716,7 +3716,6 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 		}
+ 	} else {
+ 		path = NULL;
+-		pathlen = 0;
+ 		pathbase = 0;
+ 	}
+ 
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index e7667497b6b77..8e95a75a4559c 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1132,7 +1132,7 @@ int fuse_reverse_inval_entry(struct fuse_conn *fc, u64 parent_nodeid,
+ 	if (!parent)
+ 		return -ENOENT;
+ 
+-	inode_lock(parent);
++	inode_lock_nested(parent, I_MUTEX_PARENT);
+ 	if (!S_ISDIR(parent->i_mode))
+ 		goto unlock;
+ 
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 16955a307dcd9..d0e5cde277022 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -137,8 +137,7 @@ kill_whiteout:
+ 	goto out;
+ }
+ 
+-static int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry,
+-			  umode_t mode)
++int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry, umode_t mode)
+ {
+ 	int err;
+ 	struct dentry *d, *dentry = *newdentry;
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index e43dc68bd1b54..898de3bf884e4 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -519,6 +519,7 @@ struct ovl_cattr {
+ 
+ #define OVL_CATTR(m) (&(struct ovl_cattr) { .mode = (m) })
+ 
++int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry, umode_t mode);
+ struct dentry *ovl_create_real(struct inode *dir, struct dentry *newdentry,
+ 			       struct ovl_cattr *attr);
+ int ovl_cleanup(struct inode *dir, struct dentry *dentry);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 77f08ac04d1f3..45c596dfe3a36 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -743,10 +743,14 @@ retry:
+ 			goto retry;
+ 		}
+ 
+-		work = ovl_create_real(dir, work, OVL_CATTR(attr.ia_mode));
+-		err = PTR_ERR(work);
+-		if (IS_ERR(work))
+-			goto out_err;
++		err = ovl_mkdir_real(dir, &work, attr.ia_mode);
++		if (err)
++			goto out_dput;
++
++		/* Weird filesystem returning with hashed negative (kernfs)? */
++		err = -EINVAL;
++		if (d_really_is_negative(work))
++			goto out_dput;
+ 
+ 		/*
+ 		 * Try to remove POSIX ACL xattrs from workdir.  We are good if:
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index 2243dc1fb48fe..e60759d8bb5fb 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -1799,5 +1799,6 @@ static void __exit zonefs_exit(void)
+ MODULE_AUTHOR("Damien Le Moal");
+ MODULE_DESCRIPTION("Zone file system for zoned block devices");
+ MODULE_LICENSE("GPL");
++MODULE_ALIAS_FS("zonefs");
+ module_init(zonefs_init);
+ module_exit(zonefs_exit);
+diff --git a/kernel/audit.c b/kernel/audit.c
+index 68cee3bc8cfe6..d784000921da3 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -718,7 +718,7 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
+ {
+ 	int rc = 0;
+ 	struct sk_buff *skb;
+-	static unsigned int failed = 0;
++	unsigned int failed = 0;
+ 
+ 	/* NOTE: kauditd_thread takes care of all our locking, we just use
+ 	 *       the netlink info passed to us (e.g. sk and portid) */
+@@ -735,32 +735,30 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
+ 			continue;
+ 		}
+ 
++retry:
+ 		/* grab an extra skb reference in case of error */
+ 		skb_get(skb);
+ 		rc = netlink_unicast(sk, skb, portid, 0);
+ 		if (rc < 0) {
+-			/* fatal failure for our queue flush attempt? */
++			/* send failed - try a few times unless fatal error */
+ 			if (++failed >= retry_limit ||
+ 			    rc == -ECONNREFUSED || rc == -EPERM) {
+-				/* yes - error processing for the queue */
+ 				sk = NULL;
+ 				if (err_hook)
+ 					(*err_hook)(skb);
+-				if (!skb_hook)
+-					goto out;
+-				/* keep processing with the skb_hook */
++				if (rc == -EAGAIN)
++					rc = 0;
++				/* continue to drain the queue */
+ 				continue;
+ 			} else
+-				/* no - requeue to preserve ordering */
+-				skb_queue_head(queue, skb);
++				goto retry;
+ 		} else {
+-			/* it worked - drop the extra reference and continue */
++			/* skb sent - drop the extra reference and continue */
+ 			consume_skb(skb);
+ 			failed = 0;
+ 		}
+ 	}
+ 
+-out:
+ 	return (rc >= 0 ? 0 : rc);
+ }
+ 
+@@ -1609,7 +1607,8 @@ static int __net_init audit_net_init(struct net *net)
+ 		audit_panic("cannot initialize netlink socket in namespace");
+ 		return -ENOMEM;
+ 	}
+-	aunet->sk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT;
++	/* limit the timeout in case auditd is blocked/stopped */
++	aunet->sk->sk_sndtimeo = HZ / 10;
+ 
+ 	return 0;
+ }
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 95ab3f243acde..4e28961cfa53e 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1249,22 +1249,28 @@ static void __reg_bound_offset(struct bpf_reg_state *reg)
+ 	reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);
+ }
+ 
++static bool __reg32_bound_s64(s32 a)
++{
++	return a >= 0 && a <= S32_MAX;
++}
++
+ static void __reg_assign_32_into_64(struct bpf_reg_state *reg)
+ {
+ 	reg->umin_value = reg->u32_min_value;
+ 	reg->umax_value = reg->u32_max_value;
+-	/* Attempt to pull 32-bit signed bounds into 64-bit bounds
+-	 * but must be positive otherwise set to worse case bounds
+-	 * and refine later from tnum.
++
++	/* Attempt to pull 32-bit signed bounds into 64-bit bounds but must
++	 * be positive otherwise set to worse case bounds and refine later
++	 * from tnum.
+ 	 */
+-	if (reg->s32_min_value >= 0 && reg->s32_max_value >= 0)
+-		reg->smax_value = reg->s32_max_value;
+-	else
+-		reg->smax_value = U32_MAX;
+-	if (reg->s32_min_value >= 0)
++	if (__reg32_bound_s64(reg->s32_min_value) &&
++	    __reg32_bound_s64(reg->s32_max_value)) {
+ 		reg->smin_value = reg->s32_min_value;
+-	else
++		reg->smax_value = reg->s32_max_value;
++	} else {
+ 		reg->smin_value = 0;
++		reg->smax_value = U32_MAX;
++	}
+ }
+ 
+ static void __reg_combine_32_into_64(struct bpf_reg_state *reg)
+@@ -7125,6 +7131,10 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ 							 insn->dst_reg);
+ 				}
+ 				zext_32_to_64(dst_reg);
++
++				__update_reg_bounds(dst_reg);
++				__reg_deduce_bounds(dst_reg);
++				__reg_bound_offset(dst_reg);
+ 			}
+ 		} else {
+ 			/* case: R = imm
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 8c81c05c4236a..b74e7ace4376b 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1888,7 +1888,7 @@ static void rcu_gp_fqs(bool first_time)
+ 	struct rcu_node *rnp = rcu_get_root();
+ 
+ 	WRITE_ONCE(rcu_state.gp_activity, jiffies);
+-	rcu_state.n_force_qs++;
++	WRITE_ONCE(rcu_state.n_force_qs, rcu_state.n_force_qs + 1);
+ 	if (first_time) {
+ 		/* Collect dyntick-idle snapshots. */
+ 		force_qs_rnp(dyntick_save_progress_counter);
+@@ -2530,7 +2530,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
+ 	/* Reset ->qlen_last_fqs_check trigger if enough CBs have drained. */
+ 	if (count == 0 && rdp->qlen_last_fqs_check != 0) {
+ 		rdp->qlen_last_fqs_check = 0;
+-		rdp->n_force_qs_snap = rcu_state.n_force_qs;
++		rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs);
+ 	} else if (count < rdp->qlen_last_fqs_check - qhimark)
+ 		rdp->qlen_last_fqs_check = count;
+ 
+@@ -2876,10 +2876,10 @@ static void __call_rcu_core(struct rcu_data *rdp, struct rcu_head *head,
+ 		} else {
+ 			/* Give the grace period a kick. */
+ 			rdp->blimit = DEFAULT_MAX_RCU_BLIMIT;
+-			if (rcu_state.n_force_qs == rdp->n_force_qs_snap &&
++			if (READ_ONCE(rcu_state.n_force_qs) == rdp->n_force_qs_snap &&
+ 			    rcu_segcblist_first_pend_cb(&rdp->cblist) != head)
+ 				rcu_force_quiescent_state();
+-			rdp->n_force_qs_snap = rcu_state.n_force_qs;
++			rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs);
+ 			rdp->qlen_last_fqs_check = rcu_segcblist_n_cbs(&rdp->cblist);
+ 		}
+ 	}
+@@ -3986,7 +3986,7 @@ int rcutree_prepare_cpu(unsigned int cpu)
+ 	/* Set up local state, ensuring consistent view of global state. */
+ 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
+ 	rdp->qlen_last_fqs_check = 0;
+-	rdp->n_force_qs_snap = rcu_state.n_force_qs;
++	rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs);
+ 	rdp->blimit = blimit;
+ 	if (rcu_segcblist_empty(&rdp->cblist) && /* No early-boot CBs? */
+ 	    !rcu_segcblist_is_offloaded(&rdp->cblist))
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 6858a31364b64..cc4dc2857a870 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -1310,8 +1310,7 @@ int do_settimeofday64(const struct timespec64 *ts)
+ 	timekeeping_forward_now(tk);
+ 
+ 	xt = tk_xtime(tk);
+-	ts_delta.tv_sec = ts->tv_sec - xt.tv_sec;
+-	ts_delta.tv_nsec = ts->tv_nsec - xt.tv_nsec;
++	ts_delta = timespec64_sub(*ts, xt);
+ 
+ 	if (timespec64_compare(&tk->wall_to_monotonic, &ts_delta) > 0) {
+ 		ret = -EINVAL;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 825e6b9880030..0215ae898e836 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -769,7 +769,7 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt)
+ 	       ntohs(skb->protocol), skb->pkt_type, skb->skb_iif);
+ 
+ 	if (dev)
+-		printk("%sdev name=%s feat=0x%pNF\n",
++		printk("%sdev name=%s feat=%pNF\n",
+ 		       level, dev->name, &dev->features);
+ 	if (sk)
+ 		printk("%ssk family=%hu type=%u proto=%u\n",
+diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c
+index 93474b1bea4e0..fa9f1de58df46 100644
+--- a/net/ipv4/inet_diag.c
++++ b/net/ipv4/inet_diag.c
+@@ -261,6 +261,7 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk,
+ 	r->idiag_state = sk->sk_state;
+ 	r->idiag_timer = 0;
+ 	r->idiag_retrans = 0;
++	r->idiag_expires = 0;
+ 
+ 	if (inet_diag_msg_attrs_fill(sk, skb, r, ext,
+ 				     sk_user_ns(NETLINK_CB(cb->skb).sk),
+@@ -314,9 +315,6 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk,
+ 		r->idiag_retrans = icsk->icsk_probes_out;
+ 		r->idiag_expires =
+ 			jiffies_delta_to_msecs(sk->sk_timer.expires - jiffies);
+-	} else {
+-		r->idiag_timer = 0;
+-		r->idiag_expires = 0;
+ 	}
+ 
+ 	if ((ext & (1 << (INET_DIAG_INFO - 1))) && handler->idiag_info_size) {
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index a6a3d759246ec..bab0e99f6e356 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -1924,7 +1924,6 @@ static int __net_init sit_init_net(struct net *net)
+ 	return 0;
+ 
+ err_reg_dev:
+-	ipip6_dev_free(sitn->fb_tunnel_dev);
+ 	free_netdev(sitn->fb_tunnel_dev);
+ err_alloc_dev:
+ 	return err;
+diff --git a/net/mac80211/agg-rx.c b/net/mac80211/agg-rx.c
+index cd4cf84a7f99f..6ef8ded4ec764 100644
+--- a/net/mac80211/agg-rx.c
++++ b/net/mac80211/agg-rx.c
+@@ -9,7 +9,7 @@
+  * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+  * Copyright 2007-2010, Intel Corporation
+  * Copyright(c) 2015-2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2020 Intel Corporation
++ * Copyright (C) 2018-2021 Intel Corporation
+  */
+ 
+ /**
+@@ -191,7 +191,8 @@ static void ieee80211_add_addbaext(struct ieee80211_sub_if_data *sdata,
+ 	sband = ieee80211_get_sband(sdata);
+ 	if (!sband)
+ 		return;
+-	he_cap = ieee80211_get_he_iftype_cap(sband, sdata->vif.type);
++	he_cap = ieee80211_get_he_iftype_cap(sband,
++					     ieee80211_vif_type_p2p(&sdata->vif));
+ 	if (!he_cap)
+ 		return;
+ 
+diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
+index b37c8a983d88d..190f300d8923c 100644
+--- a/net/mac80211/agg-tx.c
++++ b/net/mac80211/agg-tx.c
+@@ -9,7 +9,7 @@
+  * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+  * Copyright 2007-2010, Intel Corporation
+  * Copyright(c) 2015-2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2020 Intel Corporation
++ * Copyright (C) 2018 - 2021 Intel Corporation
+  */
+ 
+ #include <linux/ieee80211.h>
+@@ -106,7 +106,7 @@ static void ieee80211_send_addba_request(struct ieee80211_sub_if_data *sdata,
+ 	mgmt->u.action.u.addba_req.start_seq_num =
+ 					cpu_to_le16(start_seq_num << 4);
+ 
+-	ieee80211_tx_skb(sdata, skb);
++	ieee80211_tx_skb_tid(sdata, skb, tid);
+ }
+ 
+ void ieee80211_send_bar(struct ieee80211_vif *vif, u8 *ra, u16 tid, u16 ssn)
+@@ -213,6 +213,8 @@ ieee80211_agg_start_txq(struct sta_info *sta, int tid, bool enable)
+ 	struct ieee80211_txq *txq = sta->sta.txq[tid];
+ 	struct txq_info *txqi;
+ 
++	lockdep_assert_held(&sta->ampdu_mlme.mtx);
++
+ 	if (!txq)
+ 		return;
+ 
+@@ -290,7 +292,6 @@ static void ieee80211_remove_tid_tx(struct sta_info *sta, int tid)
+ 	ieee80211_assign_tid_tx(sta, tid, NULL);
+ 
+ 	ieee80211_agg_splice_finish(sta->sdata, tid);
+-	ieee80211_agg_start_txq(sta, tid, false);
+ 
+ 	kfree_rcu(tid_tx, rcu_head);
+ }
+@@ -480,8 +481,7 @@ static void ieee80211_send_addba_with_timeout(struct sta_info *sta,
+ 
+ 	/* send AddBA request */
+ 	ieee80211_send_addba_request(sdata, sta->sta.addr, tid,
+-				     tid_tx->dialog_token,
+-				     sta->tid_seq[tid] >> 4,
++				     tid_tx->dialog_token, tid_tx->ssn,
+ 				     buf_size, tid_tx->timeout);
+ 
+ 	WARN_ON(test_and_set_bit(HT_AGG_STATE_SENT_ADDBA, &tid_tx->state));
+@@ -523,6 +523,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
+ 
+ 	params.ssn = sta->tid_seq[tid] >> 4;
+ 	ret = drv_ampdu_action(local, sdata, &params);
++	tid_tx->ssn = params.ssn;
+ 	if (ret == IEEE80211_AMPDU_TX_START_DELAY_ADDBA) {
+ 		return;
+ 	} else if (ret == IEEE80211_AMPDU_TX_START_IMMEDIATE) {
+@@ -889,6 +890,7 @@ void ieee80211_stop_tx_ba_cb(struct sta_info *sta, int tid,
+ {
+ 	struct ieee80211_sub_if_data *sdata = sta->sdata;
+ 	bool send_delba = false;
++	bool start_txq = false;
+ 
+ 	ht_dbg(sdata, "Stopping Tx BA session for %pM tid %d\n",
+ 	       sta->sta.addr, tid);
+@@ -906,10 +908,14 @@ void ieee80211_stop_tx_ba_cb(struct sta_info *sta, int tid,
+ 		send_delba = true;
+ 
+ 	ieee80211_remove_tid_tx(sta, tid);
++	start_txq = true;
+ 
+  unlock_sta:
+ 	spin_unlock_bh(&sta->lock);
+ 
++	if (start_txq)
++		ieee80211_agg_start_txq(sta, tid, false);
++
+ 	if (send_delba)
+ 		ieee80211_send_delba(sdata, sta->sta.addr, tid,
+ 			WLAN_BACK_INITIATOR, WLAN_REASON_QSTA_NOT_USE);
+diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
+index bcdfd19a596be..a172f69c71123 100644
+--- a/net/mac80211/driver-ops.h
++++ b/net/mac80211/driver-ops.h
+@@ -1201,8 +1201,11 @@ static inline void drv_wake_tx_queue(struct ieee80211_local *local,
+ {
+ 	struct ieee80211_sub_if_data *sdata = vif_to_sdata(txq->txq.vif);
+ 
+-	if (local->in_reconfig)
++	/* In reconfig don't transmit now, but mark for waking later */
++	if (local->in_reconfig) {
++		set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txq->flags);
+ 		return;
++	}
+ 
+ 	if (!check_sdata_in_driver(sdata))
+ 		return;
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 32bc30ec50ec9..7bd42827540ae 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -2493,11 +2493,18 @@ static void ieee80211_sta_tx_wmm_ac_notify(struct ieee80211_sub_if_data *sdata,
+ 					   u16 tx_time)
+ {
+ 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+-	u16 tid = ieee80211_get_tid(hdr);
+-	int ac = ieee80211_ac_from_tid(tid);
+-	struct ieee80211_sta_tx_tspec *tx_tspec = &ifmgd->tx_tspec[ac];
++	u16 tid;
++	int ac;
++	struct ieee80211_sta_tx_tspec *tx_tspec;
+ 	unsigned long now = jiffies;
+ 
++	if (!ieee80211_is_data_qos(hdr->frame_control))
++		return;
++
++	tid = ieee80211_get_tid(hdr);
++	ac = ieee80211_ac_from_tid(tid);
++	tx_tspec = &ifmgd->tx_tspec[ac];
++
+ 	if (likely(!tx_tspec->admitted_time))
+ 		return;
+ 
+diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
+index 355e006432ccc..b9e5f8e8f29cc 100644
+--- a/net/mac80211/sta_info.h
++++ b/net/mac80211/sta_info.h
+@@ -190,6 +190,7 @@ struct tid_ampdu_tx {
+ 	u8 stop_initiator;
+ 	bool tx_stop;
+ 	u16 buf_size;
++	u16 ssn;
+ 
+ 	u16 failed_bar_ssn;
+ 	bool bar_pending;
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index fbf56a203c0e8..a1f129292ad88 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -950,7 +950,12 @@ static void ieee80211_parse_extension_element(u32 *crc,
+ 					      struct ieee802_11_elems *elems)
+ {
+ 	const void *data = elem->data + 1;
+-	u8 len = elem->datalen - 1;
++	u8 len;
++
++	if (!elem->datalen)
++		return;
++
++	len = elem->datalen - 1;
+ 
+ 	switch (elem->data[0]) {
+ 	case WLAN_EID_EXT_HE_MU_EDCA:
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 3ca8b359e399a..8123c79e27913 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2149,7 +2149,7 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err,
+ 		 */
+ 		if (WARN_ON_ONCE(!new_mptcp_sock)) {
+ 			tcp_sk(newsk)->is_mptcp = 0;
+-			return newsk;
++			goto out;
+ 		}
+ 
+ 		/* acquire the 2nd reference for the owning socket */
+@@ -2174,6 +2174,8 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err,
+ 				MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK);
+ 	}
+ 
++out:
++	newsk->sk_kern_sock = kern;
+ 	return newsk;
+ }
+ 
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 08144559eed56..f78097aa403a8 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4461,9 +4461,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 	}
+ 
+ out_free_pg_vec:
+-	bitmap_free(rx_owner_map);
+-	if (pg_vec)
++	if (pg_vec) {
++		bitmap_free(rx_owner_map);
+ 		free_pg_vec(pg_vec, order, req->tp_block_nr);
++	}
+ out:
+ 	return err;
+ }
+diff --git a/net/rds/connection.c b/net/rds/connection.c
+index a3bc4b54d4910..b4cc699c5fad3 100644
+--- a/net/rds/connection.c
++++ b/net/rds/connection.c
+@@ -253,6 +253,7 @@ static struct rds_connection *__rds_conn_create(struct net *net,
+ 				 * should end up here, but if it
+ 				 * does, reset/destroy the connection.
+ 				 */
++				kfree(conn->c_path);
+ 				kmem_cache_free(rds_conn_slab, conn);
+ 				conn = ERR_PTR(-EOPNOTSUPP);
+ 				goto out;
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 8073657a0fd25..cb1331b357451 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -3703,6 +3703,7 @@ int tc_setup_flow_action(struct flow_action *flow_action,
+ 				entry->mpls_mangle.ttl = tcf_mpls_ttl(act);
+ 				break;
+ 			default:
++				err = -EOPNOTSUPP;
+ 				goto err_out_locked;
+ 			}
+ 		} else if (is_tcf_skbedit_ptype(act)) {
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index c2c37ffd94f22..c580139fcedec 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -2736,7 +2736,7 @@ static int cake_init(struct Qdisc *sch, struct nlattr *opt,
+ 	q->tins = kvcalloc(CAKE_MAX_TINS, sizeof(struct cake_tin_data),
+ 			   GFP_KERNEL);
+ 	if (!q->tins)
+-		goto nomem;
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < CAKE_MAX_TINS; i++) {
+ 		struct cake_tin_data *b = q->tins + i;
+@@ -2766,10 +2766,6 @@ static int cake_init(struct Qdisc *sch, struct nlattr *opt,
+ 	q->min_netlen = ~0;
+ 	q->min_adjlen = ~0;
+ 	return 0;
+-
+-nomem:
+-	cake_destroy(sch);
+-	return -ENOMEM;
+ }
+ 
+ static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index c34cb6e81d855..9c224872ef035 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -668,9 +668,9 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ 		}
+ 	}
+ 	for (i = q->nbands; i < oldbands; i++) {
+-		qdisc_tree_flush_backlog(q->classes[i].qdisc);
+-		if (i >= q->nstrict)
++		if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
+ 			list_del(&q->classes[i].alist);
++		qdisc_tree_flush_backlog(q->classes[i].qdisc);
+ 	}
+ 	q->nstrict = nstrict;
+ 	memcpy(q->prio2band, priomap, sizeof(priomap));
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index d324a12c26cd9..99b902e410c49 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -191,7 +191,9 @@ static int smc_release(struct socket *sock)
+ 	/* cleanup for a dangling non-blocking connect */
+ 	if (smc->connect_nonblock && sk->sk_state == SMC_INIT)
+ 		tcp_abort(smc->clcsock->sk, ECONNABORTED);
+-	flush_work(&smc->connect_work);
++
++	if (cancel_work_sync(&smc->connect_work))
++		sock_put(&smc->sk); /* sock_hold in smc_connect for passive closing */
+ 
+ 	if (sk->sk_state == SMC_LISTEN)
+ 		/* smc_close_non_accepted() is called and acquires
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 902cb6dd710bd..d6d3a05c008a4 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -1153,7 +1153,8 @@ void virtio_transport_recv_pkt(struct virtio_transport *t,
+ 	space_available = virtio_transport_space_update(sk, pkt);
+ 
+ 	/* Update CID in case it has changed after a transport reset event */
+-	vsk->local_addr.svm_cid = dst.svm_cid;
++	if (vsk->local_addr.svm_cid != VMADDR_CID_ANY)
++		vsk->local_addr.svm_cid = dst.svm_cid;
+ 
+ 	if (space_available)
+ 		sk->sk_write_space(sk);
+diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
+index f459ae883a0a6..a4ca050815aba 100755
+--- a/scripts/recordmcount.pl
++++ b/scripts/recordmcount.pl
+@@ -252,7 +252,7 @@ if ($arch eq "x86_64") {
+ 
+ } elsif ($arch eq "s390" && $bits == 64) {
+     if ($cc =~ /-DCC_USING_HOTPATCH/) {
+-	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*brcl\\s*0,[0-9a-f]+ <([^\+]*)>\$";
++	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*(bcrl\\s*0,|jgnop\\s*)[0-9a-f]+ <([^\+]*)>\$";
+ 	$mcount_adjust = 0;
+     } else {
+ 	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*R_390_(PC|PLT)32DBL\\s+_mcount\\+0x2\$";
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c b/tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c
+index 86ccf37e26b3f..d16fd888230a5 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c
+@@ -90,7 +90,7 @@ static void print_err_line(void)
+ 
+ static void test_conn(void)
+ {
+-	int listen_fd = -1, cli_fd = -1, err;
++	int listen_fd = -1, cli_fd = -1, srv_fd = -1, err;
+ 	socklen_t addrlen = sizeof(srv_sa6);
+ 	int srv_port;
+ 
+@@ -112,6 +112,10 @@ static void test_conn(void)
+ 	if (CHECK_FAIL(cli_fd == -1))
+ 		goto done;
+ 
++	srv_fd = accept(listen_fd, NULL, NULL);
++	if (CHECK_FAIL(srv_fd == -1))
++		goto done;
++
+ 	if (CHECK(skel->bss->listen_tp_sport != srv_port ||
+ 		  skel->bss->req_sk_sport != srv_port,
+ 		  "Unexpected sk src port",
+@@ -134,11 +138,13 @@ done:
+ 		close(listen_fd);
+ 	if (cli_fd != -1)
+ 		close(cli_fd);
++	if (srv_fd != -1)
++		close(srv_fd);
+ }
+ 
+ static void test_syncookie(void)
+ {
+-	int listen_fd = -1, cli_fd = -1, err;
++	int listen_fd = -1, cli_fd = -1, srv_fd = -1, err;
+ 	socklen_t addrlen = sizeof(srv_sa6);
+ 	int srv_port;
+ 
+@@ -161,6 +167,10 @@ static void test_syncookie(void)
+ 	if (CHECK_FAIL(cli_fd == -1))
+ 		goto done;
+ 
++	srv_fd = accept(listen_fd, NULL, NULL);
++	if (CHECK_FAIL(srv_fd == -1))
++		goto done;
++
+ 	if (CHECK(skel->bss->listen_tp_sport != srv_port,
+ 		  "Unexpected tp src port",
+ 		  "listen_tp_sport:%u expected:%u\n",
+@@ -188,6 +198,8 @@ done:
+ 		close(listen_fd);
+ 	if (cli_fd != -1)
+ 		close(cli_fd);
++	if (srv_fd != -1)
++		close(srv_fd);
+ }
+ 
+ struct test {
+diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+index a3e593ddfafc9..d8765a4d5bc6b 100644
+--- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
++++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+@@ -848,6 +848,29 @@
+ 	.errstr = "R0 invalid mem access 'inv'",
+ 	.errstr_unpriv = "R0 pointer -= pointer prohibited",
+ },
++{
++	"map access: trying to leak tained dst reg",
++	.insns = {
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
++	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
++	BPF_LD_MAP_FD(BPF_REG_1, 0),
++	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
++	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
++	BPF_EXIT_INSN(),
++	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
++	BPF_MOV32_IMM(BPF_REG_1, 0xFFFFFFFF),
++	BPF_MOV32_REG(BPF_REG_1, BPF_REG_1),
++	BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1),
++	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.fixup_map_array_48b = { 4 },
++	.result = REJECT,
++	.errstr = "math between map_value pointer and 4294967295 is not allowed",
++},
+ {
+ 	"32bit pkt_ptr -= scalar",
+ 	.insns = {
+diff --git a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
+index 0299cd81b8ba2..aa3795cd7bd3d 100644
+--- a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
++++ b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
+@@ -12,6 +12,7 @@
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
++#include <sys/resource.h>
+ 
+ #include "test_util.h"
+ 
+@@ -40,10 +41,39 @@ int main(int argc, char *argv[])
+ {
+ 	int kvm_max_vcpu_id = kvm_check_cap(KVM_CAP_MAX_VCPU_ID);
+ 	int kvm_max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
++	/*
++	 * Number of file descriptors reqired, KVM_CAP_MAX_VCPUS for vCPU fds +
++	 * an arbitrary number for everything else.
++	 */
++	int nr_fds_wanted = kvm_max_vcpus + 100;
++	struct rlimit rl;
+ 
+ 	pr_info("KVM_CAP_MAX_VCPU_ID: %d\n", kvm_max_vcpu_id);
+ 	pr_info("KVM_CAP_MAX_VCPUS: %d\n", kvm_max_vcpus);
+ 
++	/*
++	 * Check that we're allowed to open nr_fds_wanted file descriptors and
++	 * try raising the limits if needed.
++	 */
++	TEST_ASSERT(!getrlimit(RLIMIT_NOFILE, &rl), "getrlimit() failed!");
++
++	if (rl.rlim_cur < nr_fds_wanted) {
++		rl.rlim_cur = nr_fds_wanted;
++		if (rl.rlim_max < nr_fds_wanted) {
++			int old_rlim_max = rl.rlim_max;
++			rl.rlim_max = nr_fds_wanted;
++
++			int r = setrlimit(RLIMIT_NOFILE, &rl);
++			if (r < 0) {
++				printf("RLIMIT_NOFILE hard limit is too low (%d, wanted %d)\n",
++				       old_rlim_max, nr_fds_wanted);
++				exit(KSFT_SKIP);
++			}
++		} else {
++			TEST_ASSERT(!setrlimit(RLIMIT_NOFILE, &rl), "setrlimit() failed!");
++		}
++	}
++
+ 	/*
+ 	 * Upstream KVM prior to 4.8 does not support KVM_CAP_MAX_VCPU_ID.
+ 	 * Userspace is supposed to use KVM_CAP_MAX_VCPUS as the maximum ID
+diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
+index 7c9ace9d15991..ace976d891252 100755
+--- a/tools/testing/selftests/net/fcnal-test.sh
++++ b/tools/testing/selftests/net/fcnal-test.sh
+@@ -446,6 +446,22 @@ cleanup()
+ 	ip netns del ${NSC} >/dev/null 2>&1
+ }
+ 
++cleanup_vrf_dup()
++{
++	ip link del ${NSA_DEV2} >/dev/null 2>&1
++	ip netns pids ${NSC} | xargs kill 2>/dev/null
++	ip netns del ${NSC} >/dev/null 2>&1
++}
++
++setup_vrf_dup()
++{
++	# some VRF tests use ns-C which has the same config as
++	# ns-B but for a device NOT in the VRF
++	create_ns ${NSC} "-" "-"
++	connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \
++		   ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64
++}
++
+ setup()
+ {
+ 	local with_vrf=${1}
+@@ -475,12 +491,6 @@ setup()
+ 
+ 		ip -netns ${NSB} ro add ${VRF_IP}/32 via ${NSA_IP} dev ${NSB_DEV}
+ 		ip -netns ${NSB} -6 ro add ${VRF_IP6}/128 via ${NSA_IP6} dev ${NSB_DEV}
+-
+-		# some VRF tests use ns-C which has the same config as
+-		# ns-B but for a device NOT in the VRF
+-		create_ns ${NSC} "-" "-"
+-		connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \
+-			   ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64
+ 	else
+ 		ip -netns ${NSA} ro add ${NSB_LO_IP}/32 via ${NSB_IP} dev ${NSA_DEV}
+ 		ip -netns ${NSA} ro add ${NSB_LO_IP6}/128 via ${NSB_IP6} dev ${NSA_DEV}
+@@ -1177,7 +1187,9 @@ ipv4_tcp_vrf()
+ 	log_test_addr ${a} $? 1 "Global server, local connection"
+ 
+ 	# run MD5 tests
++	setup_vrf_dup
+ 	ipv4_tcp_md5
++	cleanup_vrf_dup
+ 
+ 	#
+ 	# enable VRF global server
+@@ -1735,8 +1747,9 @@ ipv4_addr_bind_vrf()
+ 	for a in ${NSA_IP} ${VRF_IP}
+ 	do
+ 		log_start
++		show_hint "Socket not bound to VRF, but address is in VRF"
+ 		run_cmd nettest -s -R -P icmp -l ${a} -b
+-		log_test_addr ${a} $? 0 "Raw socket bind to local address"
++		log_test_addr ${a} $? 1 "Raw socket bind to local address"
+ 
+ 		log_start
+ 		run_cmd nettest -s -R -P icmp -l ${a} -d ${NSA_DEV} -b
+@@ -2128,7 +2141,7 @@ ipv6_ping_vrf()
+ 		log_start
+ 		show_hint "Fails since VRF device does not support linklocal or multicast"
+ 		run_cmd ${ping6} -c1 -w1 ${a}
+-		log_test_addr ${a} $? 2 "ping out, VRF bind"
++		log_test_addr ${a} $? 1 "ping out, VRF bind"
+ 	done
+ 
+ 	for a in ${NSB_IP6} ${NSB_LO_IP6} ${NSB_LINKIP6}%${NSA_DEV} ${MCAST}%${NSA_DEV}
+@@ -2656,7 +2669,9 @@ ipv6_tcp_vrf()
+ 	log_test_addr ${a} $? 1 "Global server, local connection"
+ 
+ 	# run MD5 tests
++	setup_vrf_dup
+ 	ipv6_tcp_md5
++	cleanup_vrf_dup
+ 
+ 	#
+ 	# enable VRF global server
+@@ -3351,11 +3366,14 @@ ipv6_addr_bind_novrf()
+ 	run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
+ 	log_test_addr ${a} $? 0 "TCP socket bind to local address after device bind"
+ 
++	# Sadly, the kernel allows binding a socket to a device and then
++	# binding to an address not on the device. So this test passes
++	# when it really should not
+ 	a=${NSA_LO_IP6}
+ 	log_start
+-	show_hint "Should fail with 'Cannot assign requested address'"
+-	run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
+-	log_test_addr ${a} $? 1 "TCP socket bind to out of scope local address"
++	show_hint "Tecnically should fail since address is not on device but kernel allows"
++	run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
++	log_test_addr ${a} $? 0 "TCP socket bind to out of scope local address"
+ }
+ 
+ ipv6_addr_bind_vrf()
+@@ -3396,10 +3414,15 @@ ipv6_addr_bind_vrf()
+ 	run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
+ 	log_test_addr ${a} $? 0 "TCP socket bind to local address with device bind"
+ 
++	# Sadly, the kernel allows binding a socket to a device and then
++	# binding to an address not on the device. The only restriction
++	# is that the address is valid in the L3 domain. So this test
++	# passes when it really should not
+ 	a=${VRF_IP6}
+ 	log_start
+-	run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
+-	log_test_addr ${a} $? 1 "TCP socket bind to VRF address with device bind"
++	show_hint "Tecnically should fail since address is not on device but kernel allows"
++	run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
++	log_test_addr ${a} $? 0 "TCP socket bind to VRF address with device bind"
+ 
+ 	a=${NSA_LO_IP6}
+ 	log_start
+diff --git a/tools/testing/selftests/net/forwarding/forwarding.config.sample b/tools/testing/selftests/net/forwarding/forwarding.config.sample
+index e5e2fbeca22ec..e51def39fd801 100644
+--- a/tools/testing/selftests/net/forwarding/forwarding.config.sample
++++ b/tools/testing/selftests/net/forwarding/forwarding.config.sample
+@@ -13,6 +13,8 @@ NETIFS[p5]=veth4
+ NETIFS[p6]=veth5
+ NETIFS[p7]=veth6
+ NETIFS[p8]=veth7
++NETIFS[p9]=veth8
++NETIFS[p10]=veth9
+ 
+ # Port that does not have a cable connected.
+ NETIF_NO_CABLE=eth8
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 97ac3c6fd4441..4a7d377b3a500 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2590,7 +2590,8 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
+ 	int r;
+ 	gpa_t gpa = ghc->gpa + offset;
+ 
+-	BUG_ON(len + offset > ghc->len);
++	if (WARN_ON_ONCE(len + offset > ghc->len))
++		return -EINVAL;
+ 
+ 	if (slots->generation != ghc->generation) {
+ 		if (__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len))
+@@ -2627,7 +2628,8 @@ int kvm_read_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
+ 	int r;
+ 	gpa_t gpa = ghc->gpa + offset;
+ 
+-	BUG_ON(len + offset > ghc->len);
++	if (WARN_ON_ONCE(len + offset > ghc->len))
++		return -EINVAL;
+ 
+ 	if (slots->generation != ghc->generation) {
+ 		if (__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len))


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2021-12-29 13:06 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2021-12-29 13:06 UTC (permalink / raw
  To: gentoo-commits

commit:     bd0c2b6eb76e3e0c39aa3f953a7cd3546013b9d8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 29 13:06:24 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec 29 13:06:24 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bd0c2b6e

Linux patch 5.10.89

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1088_linux-5.10.89.patch | 2684 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2688 insertions(+)

diff --git a/0000_README b/0000_README
index a6cd68ff..aa52e9d4 100644
--- a/0000_README
+++ b/0000_README
@@ -395,6 +395,10 @@ Patch:  1087_linux-5.10.88.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.88
 
+Patch:  1088_linux-5.10.89.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.89
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1088_linux-5.10.89.patch b/1088_linux-5.10.89.patch
new file mode 100644
index 00000000..80aaf5db
--- /dev/null
+++ b/1088_linux-5.10.89.patch
@@ -0,0 +1,2684 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 516499f9ccae4..ccaa72562538e 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -2294,8 +2294,12 @@
+ 			Default is 1 (enabled)
+ 
+ 	kvm-intel.emulate_invalid_guest_state=
+-			[KVM,Intel] Enable emulation of invalid guest states
+-			Default is 0 (disabled)
++			[KVM,Intel] Disable emulation of invalid guest state.
++			Ignored if kvm-intel.enable_unrestricted_guest=1, as
++			guest state is never invalid for unrestricted guests.
++			This param doesn't apply to nested guests (L2), as KVM
++			never emulates invalid L2 guest state.
++			Default is 1 (enabled)
+ 
+ 	kvm-intel.flexpriority=
+ 			[KVM,Intel] Disable FlexPriority feature (TPR shadow).
+diff --git a/Documentation/hwmon/lm90.rst b/Documentation/hwmon/lm90.rst
+index 3da8c6e06a365..05391fb4042d9 100644
+--- a/Documentation/hwmon/lm90.rst
++++ b/Documentation/hwmon/lm90.rst
+@@ -265,6 +265,16 @@ Supported chips:
+ 
+ 	       https://www.ti.com/litv/pdf/sbos686
+ 
++  * Texas Instruments TMP461
++
++    Prefix: 'tmp461'
++
++    Addresses scanned: I2C 0x48 through 0x4F
++
++    Datasheet: Publicly available at TI website
++
++	       https://www.ti.com/lit/gpn/tmp461
++
+ Author: Jean Delvare <jdelvare@suse.de>
+ 
+ 
+diff --git a/Documentation/networking/bonding.rst b/Documentation/networking/bonding.rst
+index adc314639085b..413dca513e1db 100644
+--- a/Documentation/networking/bonding.rst
++++ b/Documentation/networking/bonding.rst
+@@ -196,11 +196,12 @@ ad_actor_sys_prio
+ ad_actor_system
+ 
+ 	In an AD system, this specifies the mac-address for the actor in
+-	protocol packet exchanges (LACPDUs). The value cannot be NULL or
+-	multicast. It is preferred to have the local-admin bit set for this
+-	mac but driver does not enforce it. If the value is not given then
+-	system defaults to using the masters' mac address as actors' system
+-	address.
++	protocol packet exchanges (LACPDUs). The value cannot be a multicast
++	address. If the all-zeroes MAC is specified, bonding will internally
++	use the MAC of the bond itself. It is preferred to have the
++	local-admin bit set for this mac but driver does not enforce it. If
++	the value is not given then system defaults to using the masters'
++	mac address as actors' system address.
+ 
+ 	This parameter has effect only in 802.3ad mode and is available through
+ 	SysFs interface.
+diff --git a/Documentation/sound/hd-audio/models.rst b/Documentation/sound/hd-audio/models.rst
+index 0ea967d345838..d25335993e553 100644
+--- a/Documentation/sound/hd-audio/models.rst
++++ b/Documentation/sound/hd-audio/models.rst
+@@ -326,6 +326,8 @@ usi-headset
+     Headset support on USI machines
+ dual-codecs
+     Lenovo laptops with dual codecs
++alc285-hp-amp-init
++    HP laptops which require speaker amplifier initialization (ALC285)
+ 
+ ALC680
+ ======
+diff --git a/Makefile b/Makefile
+index 0b74b414f4e57..1500ea340424d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 88
++SUBLEVEL = 89
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-wandboard.dtsi b/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
+index c070893c509ee..5bad982bc5a05 100644
+--- a/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
+@@ -289,6 +289,7 @@
+ 
+ 		ethphy: ethernet-phy@1 {
+ 			reg = <1>;
++			qca,clk-out-frequency = <125000000>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
+index 1c9e6d1452c5b..63fbcdc97ded9 100644
+--- a/arch/arm/kernel/entry-armv.S
++++ b/arch/arm/kernel/entry-armv.S
+@@ -596,11 +596,9 @@ call_fpe:
+ 	tstne	r0, #0x04000000			@ bit 26 set on both ARM and Thumb-2
+ 	reteq	lr
+ 	and	r8, r0, #0x00000f00		@ mask out CP number
+- THUMB(	lsr	r8, r8, #8		)
+ 	mov	r7, #1
+-	add	r6, r10, #TI_USED_CP
+- ARM(	strb	r7, [r6, r8, lsr #8]	)	@ set appropriate used_cp[]
+- THUMB(	strb	r7, [r6, r8]		)	@ set appropriate used_cp[]
++	add	r6, r10, r8, lsr #8		@ add used_cp[] array offset first
++	strb	r7, [r6, #TI_USED_CP]		@ set appropriate used_cp[]
+ #ifdef CONFIG_IWMMXT
+ 	@ Test if we need to give access to iWMMXt coprocessors
+ 	ldr	r5, [r10, #TI_FLAGS]
+@@ -609,7 +607,7 @@ call_fpe:
+ 	bcs	iwmmxt_task_enable
+ #endif
+  ARM(	add	pc, pc, r8, lsr #6	)
+- THUMB(	lsl	r8, r8, #2		)
++ THUMB(	lsr	r8, r8, #6		)
+  THUMB(	add	pc, r8			)
+ 	nop
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 5e5cf3af63515..3da71fe56b922 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1265,7 +1265,8 @@ config KUSER_HELPERS
+ 
+ config COMPAT_VDSO
+ 	bool "Enable vDSO for 32-bit applications"
+-	depends on !CPU_BIG_ENDIAN && "$(CROSS_COMPILE_COMPAT)" != ""
++	depends on !CPU_BIG_ENDIAN
++	depends on (CC_IS_CLANG && LD_IS_LLD) || "$(CROSS_COMPILE_COMPAT)" != ""
+ 	select GENERIC_COMPAT_VDSO
+ 	default y
+ 	help
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h5-orangepi-zero-plus.dts b/arch/arm64/boot/dts/allwinner/sun50i-h5-orangepi-zero-plus.dts
+index ef5ca64442203..de448ca51e216 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h5-orangepi-zero-plus.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h5-orangepi-zero-plus.dts
+@@ -69,7 +69,7 @@
+ 	pinctrl-0 = <&emac_rgmii_pins>;
+ 	phy-supply = <&reg_gmac_3v3>;
+ 	phy-handle = <&ext_rgmii_phy>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile
+index a463b9bceed41..abad38c576e1d 100644
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -10,26 +10,15 @@ include $(srctree)/lib/vdso/Makefile
+ 
+ # Same as cc-*option, but using CC_COMPAT instead of CC
+ ifeq ($(CONFIG_CC_IS_CLANG), y)
+-COMPAT_GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE_COMPAT)elfedit))
+-COMPAT_GCC_TOOLCHAIN := $(realpath $(COMPAT_GCC_TOOLCHAIN_DIR)/..)
+-
+-CC_COMPAT_CLANG_FLAGS := --target=$(notdir $(CROSS_COMPILE_COMPAT:%-=%))
+-CC_COMPAT_CLANG_FLAGS += --prefix=$(COMPAT_GCC_TOOLCHAIN_DIR)$(notdir $(CROSS_COMPILE_COMPAT))
+-CC_COMPAT_CLANG_FLAGS += -no-integrated-as -Qunused-arguments
+-ifneq ($(COMPAT_GCC_TOOLCHAIN),)
+-CC_COMPAT_CLANG_FLAGS += --gcc-toolchain=$(COMPAT_GCC_TOOLCHAIN)
+-endif
+-
+ CC_COMPAT ?= $(CC)
+-CC_COMPAT += $(CC_COMPAT_CLANG_FLAGS)
+-
+-ifneq ($(LLVM),)
+-LD_COMPAT ?= $(LD)
++CC_COMPAT += --target=arm-linux-gnueabi
+ else
+-LD_COMPAT ?= $(CROSS_COMPILE_COMPAT)ld
++CC_COMPAT ?= $(CROSS_COMPILE_COMPAT)gcc
+ endif
++
++ifeq ($(CONFIG_LD_IS_LLD), y)
++LD_COMPAT ?= $(LD)
+ else
+-CC_COMPAT ?= $(CROSS_COMPILE_COMPAT)gcc
+ LD_COMPAT ?= $(CROSS_COMPILE_COMPAT)ld
+ endif
+ 
+@@ -55,10 +44,6 @@ VDSO_CPPFLAGS += $(LINUXINCLUDE)
+ # Common C and assembly flags
+ # From top-level Makefile
+ VDSO_CAFLAGS := $(VDSO_CPPFLAGS)
+-ifneq ($(shell $(CC_COMPAT) --version 2>&1 | head -n 1 | grep clang),)
+-VDSO_CAFLAGS += --target=$(notdir $(CROSS_COMPILE_COMPAT:%-=%))
+-endif
+-
+ VDSO_CAFLAGS += $(call cc32-option,-fno-PIE)
+ ifdef CONFIG_DEBUG_INFO
+ VDSO_CAFLAGS += -g
+diff --git a/arch/parisc/include/asm/futex.h b/arch/parisc/include/asm/futex.h
+index fceb9cf02fb3a..71aa0921d6c72 100644
+--- a/arch/parisc/include/asm/futex.h
++++ b/arch/parisc/include/asm/futex.h
+@@ -16,7 +16,7 @@ static inline void
+ _futex_spin_lock_irqsave(u32 __user *uaddr, unsigned long int *flags)
+ {
+ 	extern u32 lws_lock_start[];
+-	long index = ((long)uaddr & 0x3f8) >> 1;
++	long index = ((long)uaddr & 0x7f8) >> 1;
+ 	arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
+ 	local_irq_save(*flags);
+ 	arch_spin_lock(s);
+@@ -26,7 +26,7 @@ static inline void
+ _futex_spin_unlock_irqrestore(u32 __user *uaddr, unsigned long int *flags)
+ {
+ 	extern u32 lws_lock_start[];
+-	long index = ((long)uaddr & 0x3f8) >> 1;
++	long index = ((long)uaddr & 0x7f8) >> 1;
+ 	arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
+ 	arch_spin_unlock(s);
+ 	local_irq_restore(*flags);
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index 322503780db61..4e53515cf81f1 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -478,7 +478,7 @@ lws_start:
+ 	extrd,u	%r1,PSW_W_BIT,1,%r1
+ 	/* sp must be aligned on 4, so deposit the W bit setting into
+ 	 * the bottom of sp temporarily */
+-	or,ev	%r1,%r30,%r30
++	or,od	%r1,%r30,%r30
+ 
+ 	/* Clip LWS number to a 32-bit value for 32-bit processes */
+ 	depdi	0, 31, 32, %r20
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index a02c67291cfcb..87de9f2d71cf2 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -1360,8 +1360,8 @@ static inline pmd_t pmd_swp_clear_uffd_wp(pmd_t pmd)
+ }
+ #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_WP */
+ 
+-#define PKRU_AD_BIT 0x1
+-#define PKRU_WD_BIT 0x2
++#define PKRU_AD_BIT 0x1u
++#define PKRU_WD_BIT 0x2u
+ #define PKRU_BITS_PER_PKEY 2
+ 
+ #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 5b7664d51dc2b..38c453f28f1f0 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -4007,8 +4007,7 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
+ 	if (pi_test_and_set_on(&vmx->pi_desc))
+ 		return 0;
+ 
+-	if (vcpu != kvm_get_running_vcpu() &&
+-	    !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
++	if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
+ 		kvm_vcpu_kick(vcpu);
+ 
+ 	return 0;
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index 38b545bef05a3..8f147274f826a 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -2945,7 +2945,7 @@ cleanup_bmc_device(struct kref *ref)
+ 	 * with removing the device attributes while reading a device
+ 	 * attribute.
+ 	 */
+-	schedule_work(&bmc->remove_work);
++	queue_work(remove_work_wq, &bmc->remove_work);
+ }
+ 
+ /*
+@@ -5161,22 +5161,27 @@ static int ipmi_init_msghandler(void)
+ 	if (initialized)
+ 		goto out;
+ 
+-	init_srcu_struct(&ipmi_interfaces_srcu);
+-
+-	timer_setup(&ipmi_timer, ipmi_timeout, 0);
+-	mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES);
+-
+-	atomic_notifier_chain_register(&panic_notifier_list, &panic_block);
++	rv = init_srcu_struct(&ipmi_interfaces_srcu);
++	if (rv)
++		goto out;
+ 
+ 	remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
+ 	if (!remove_work_wq) {
+ 		pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
+ 		rv = -ENOMEM;
+-		goto out;
++		goto out_wq;
+ 	}
+ 
++	timer_setup(&ipmi_timer, ipmi_timeout, 0);
++	mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES);
++
++	atomic_notifier_chain_register(&panic_notifier_list, &panic_block);
++
+ 	initialized = true;
+ 
++out_wq:
++	if (rv)
++		cleanup_srcu_struct(&ipmi_interfaces_srcu);
+ out:
+ 	mutex_unlock(&ipmi_interfaces_mutex);
+ 	return rv;
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 0416b9c9d4105..3de679723648b 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -1700,6 +1700,9 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 		}
+ 	}
+ 
++	ssif_info->client = client;
++	i2c_set_clientdata(client, ssif_info);
++
+ 	rv = ssif_check_and_remove(client, ssif_info);
+ 	/* If rv is 0 and addr source is not SI_ACPI, continue probing */
+ 	if (!rv && ssif_info->addr_source == SI_ACPI) {
+@@ -1720,9 +1723,6 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 		ipmi_addr_src_to_str(ssif_info->addr_source),
+ 		client->addr, client->adapter->name, slave_addr);
+ 
+-	ssif_info->client = client;
+-	i2c_set_clientdata(client, ssif_info);
+-
+ 	/* Now check for system interface capabilities */
+ 	msg[0] = IPMI_NETFN_APP_REQUEST << 2;
+ 	msg[1] = IPMI_GET_SYSTEM_INTERFACE_CAPABILITIES_CMD;
+@@ -1922,6 +1922,7 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 
+ 		dev_err(&ssif_info->client->dev,
+ 			"Unable to start IPMI SSIF: %d\n", rv);
++		i2c_set_clientdata(client, NULL);
+ 		kfree(ssif_info);
+ 	}
+ 	kfree(resp);
+diff --git a/drivers/gpio/gpio-dln2.c b/drivers/gpio/gpio-dln2.c
+index 4c5f6d0c8d745..22f11dd5210db 100644
+--- a/drivers/gpio/gpio-dln2.c
++++ b/drivers/gpio/gpio-dln2.c
+@@ -46,6 +46,7 @@
+ struct dln2_gpio {
+ 	struct platform_device *pdev;
+ 	struct gpio_chip gpio;
++	struct irq_chip irqchip;
+ 
+ 	/*
+ 	 * Cache pin direction to save us one transfer, since the hardware has
+@@ -383,15 +384,6 @@ static void dln2_irq_bus_unlock(struct irq_data *irqd)
+ 	mutex_unlock(&dln2->irq_lock);
+ }
+ 
+-static struct irq_chip dln2_gpio_irqchip = {
+-	.name = "dln2-irq",
+-	.irq_mask = dln2_irq_mask,
+-	.irq_unmask = dln2_irq_unmask,
+-	.irq_set_type = dln2_irq_set_type,
+-	.irq_bus_lock = dln2_irq_bus_lock,
+-	.irq_bus_sync_unlock = dln2_irq_bus_unlock,
+-};
+-
+ static void dln2_gpio_event(struct platform_device *pdev, u16 echo,
+ 			    const void *data, int len)
+ {
+@@ -477,8 +469,15 @@ static int dln2_gpio_probe(struct platform_device *pdev)
+ 	dln2->gpio.direction_output = dln2_gpio_direction_output;
+ 	dln2->gpio.set_config = dln2_gpio_set_config;
+ 
++	dln2->irqchip.name = "dln2-irq",
++	dln2->irqchip.irq_mask = dln2_irq_mask,
++	dln2->irqchip.irq_unmask = dln2_irq_unmask,
++	dln2->irqchip.irq_set_type = dln2_irq_set_type,
++	dln2->irqchip.irq_bus_lock = dln2_irq_bus_lock,
++	dln2->irqchip.irq_bus_sync_unlock = dln2_irq_bus_unlock,
++
+ 	girq = &dln2->gpio.irq;
+-	girq->chip = &dln2_gpio_irqchip;
++	girq->chip = &dln2->irqchip;
+ 	/* The event comes from the outside so no parent handler */
+ 	girq->parent_handler = NULL;
+ 	girq->num_parents = 0;
+diff --git a/drivers/hid/hid-holtek-mouse.c b/drivers/hid/hid-holtek-mouse.c
+index b7172c48ef9f0..7c907939bfae1 100644
+--- a/drivers/hid/hid-holtek-mouse.c
++++ b/drivers/hid/hid-holtek-mouse.c
+@@ -65,8 +65,23 @@ static __u8 *holtek_mouse_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ static int holtek_mouse_probe(struct hid_device *hdev,
+ 			      const struct hid_device_id *id)
+ {
++	int ret;
++
+ 	if (!hid_is_usb(hdev))
+ 		return -EINVAL;
++
++	ret = hid_parse(hdev);
++	if (ret) {
++		hid_err(hdev, "hid parse failed: %d\n", ret);
++		return ret;
++	}
++
++	ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT);
++	if (ret) {
++		hid_err(hdev, "hw start failed: %d\n", ret);
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hid/hid-vivaldi.c b/drivers/hid/hid-vivaldi.c
+index cd7ada48b1d9f..72957a9f71170 100644
+--- a/drivers/hid/hid-vivaldi.c
++++ b/drivers/hid/hid-vivaldi.c
+@@ -57,6 +57,9 @@ static int vivaldi_probe(struct hid_device *hdev,
+ 	int ret;
+ 
+ 	drvdata = devm_kzalloc(&hdev->dev, sizeof(*drvdata), GFP_KERNEL);
++	if (!drvdata)
++		return -ENOMEM;
++
+ 	hid_set_drvdata(hdev, drvdata);
+ 
+ 	ret = hid_parse(hdev);
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index a850e4f0e0bde..0c2b032ee6176 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -1275,7 +1275,7 @@ config SENSORS_LM90
+ 	  Maxim MAX6646, MAX6647, MAX6648, MAX6649, MAX6654, MAX6657, MAX6658,
+ 	  MAX6659, MAX6680, MAX6681, MAX6692, MAX6695, MAX6696,
+ 	  ON Semiconductor NCT1008, Winbond/Nuvoton W83L771W/G/AWG/ASG,
+-	  Philips SA56004, GMT G781, and Texas Instruments TMP451
++	  Philips SA56004, GMT G781, Texas Instruments TMP451 and TMP461
+ 	  sensor chips.
+ 
+ 	  This driver can also be built as a module. If so, the module
+diff --git a/drivers/hwmon/lm90.c b/drivers/hwmon/lm90.c
+index ebbfd5f352c06..959446b0137bc 100644
+--- a/drivers/hwmon/lm90.c
++++ b/drivers/hwmon/lm90.c
+@@ -35,13 +35,14 @@
+  * explicitly as max6659, or if its address is not 0x4c.
+  * These chips lack the remote temperature offset feature.
+  *
+- * This driver also supports the MAX6654 chip made by Maxim. This chip can
+- * be at 9 different addresses, similar to MAX6680/MAX6681. The MAX6654 is
+- * otherwise similar to MAX6657/MAX6658/MAX6659. Extended range is available
+- * by setting the configuration register accordingly, and is done during
+- * initialization. Extended precision is only available at conversion rates
+- * of 1 Hz and slower. Note that extended precision is not enabled by
+- * default, as this driver initializes all chips to 2 Hz by design.
++ * This driver also supports the MAX6654 chip made by Maxim. This chip can be
++ * at 9 different addresses, similar to MAX6680/MAX6681. The MAX6654 is similar
++ * to MAX6657/MAX6658/MAX6659, but does not support critical temperature
++ * limits. Extended range is available by setting the configuration register
++ * accordingly, and is done during initialization. Extended precision is only
++ * available at conversion rates of 1 Hz and slower. Note that extended
++ * precision is not enabled by default, as this driver initializes all chips
++ * to 2 Hz by design.
+  *
+  * This driver also supports the MAX6646, MAX6647, MAX6648, MAX6649 and
+  * MAX6692 chips made by Maxim.  These are again similar to the LM86,
+@@ -69,10 +70,10 @@
+  * This driver also supports the G781 from GMT. This device is compatible
+  * with the ADM1032.
+  *
+- * This driver also supports TMP451 from Texas Instruments. This device is
+- * supported in both compatibility and extended mode. It's mostly compatible
+- * with ADT7461 except for local temperature low byte register and max
+- * conversion rate.
++ * This driver also supports TMP451 and TMP461 from Texas Instruments.
++ * Those devices are supported in both compatibility and extended mode.
++ * They are mostly compatible with ADT7461 except for local temperature
++ * low byte register and max conversion rate.
+  *
+  * Since the LM90 was the first chipset supported by this driver, most
+  * comments will refer to this chipset, but are actually general and
+@@ -112,7 +113,7 @@ static const unsigned short normal_i2c[] = {
+ 	0x4d, 0x4e, 0x4f, I2C_CLIENT_END };
+ 
+ enum chips { lm90, adm1032, lm99, lm86, max6657, max6659, adt7461, max6680,
+-	max6646, w83l771, max6696, sa56004, g781, tmp451, max6654 };
++	max6646, w83l771, max6696, sa56004, g781, tmp451, tmp461, max6654 };
+ 
+ /*
+  * The LM90 registers
+@@ -168,8 +169,12 @@ enum chips { lm90, adm1032, lm99, lm86, max6657, max6659, adt7461, max6680,
+ 
+ #define LM90_MAX_CONVRATE_MS	16000	/* Maximum conversion rate in ms */
+ 
+-/* TMP451 registers */
++/* TMP451/TMP461 registers */
+ #define TMP451_REG_R_LOCAL_TEMPL	0x15
++#define TMP451_REG_CONALERT		0x22
++
++#define TMP461_REG_CHEN			0x16
++#define TMP461_REG_DFC			0x24
+ 
+ /*
+  * Device flags
+@@ -182,7 +187,10 @@ enum chips { lm90, adm1032, lm99, lm86, max6657, max6659, adt7461, max6680,
+ #define LM90_HAVE_EMERGENCY_ALARM (1 << 5)/* emergency alarm		*/
+ #define LM90_HAVE_TEMP3		(1 << 6) /* 3rd temperature sensor	*/
+ #define LM90_HAVE_BROKEN_ALERT	(1 << 7) /* Broken alert		*/
+-#define LM90_PAUSE_FOR_CONFIG	(1 << 8) /* Pause conversion for config	*/
++#define LM90_HAVE_EXTENDED_TEMP	(1 << 8) /* extended temperature support*/
++#define LM90_PAUSE_FOR_CONFIG	(1 << 9) /* Pause conversion for config	*/
++#define LM90_HAVE_CRIT		(1 << 10)/* Chip supports CRIT/OVERT register	*/
++#define LM90_HAVE_CRIT_ALRM_SWP	(1 << 11)/* critical alarm bits swapped	*/
+ 
+ /* LM90 status */
+ #define LM90_STATUS_LTHRM	(1 << 0) /* local THERM limit tripped */
+@@ -192,6 +200,7 @@ enum chips { lm90, adm1032, lm99, lm86, max6657, max6659, adt7461, max6680,
+ #define LM90_STATUS_RHIGH	(1 << 4) /* remote high temp limit tripped */
+ #define LM90_STATUS_LLOW	(1 << 5) /* local low temp limit tripped */
+ #define LM90_STATUS_LHIGH	(1 << 6) /* local high temp limit tripped */
++#define LM90_STATUS_BUSY	(1 << 7) /* conversion is ongoing */
+ 
+ #define MAX6696_STATUS2_R2THRM	(1 << 1) /* remote2 THERM limit tripped */
+ #define MAX6696_STATUS2_R2OPEN	(1 << 2) /* remote2 is an open circuit */
+@@ -229,6 +238,7 @@ static const struct i2c_device_id lm90_id[] = {
+ 	{ "w83l771", w83l771 },
+ 	{ "sa56004", sa56004 },
+ 	{ "tmp451", tmp451 },
++	{ "tmp461", tmp461 },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(i2c, lm90_id);
+@@ -326,6 +336,10 @@ static const struct of_device_id __maybe_unused lm90_of_match[] = {
+ 		.compatible = "ti,tmp451",
+ 		.data = (void *)tmp451
+ 	},
++	{
++		.compatible = "ti,tmp461",
++		.data = (void *)tmp461
++	},
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(of, lm90_of_match);
+@@ -344,38 +358,43 @@ struct lm90_params {
+ static const struct lm90_params lm90_params[] = {
+ 	[adm1032] = {
+ 		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
+-		  | LM90_HAVE_BROKEN_ALERT,
++		  | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 10,
+ 	},
+ 	[adt7461] = {
+ 		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
+-		  | LM90_HAVE_BROKEN_ALERT,
++		  | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP
++		  | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 10,
+ 	},
+ 	[g781] = {
+ 		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
+-		  | LM90_HAVE_BROKEN_ALERT,
++		  | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 8,
+ 	},
+ 	[lm86] = {
+-		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT,
++		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
++		  | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7b,
+ 		.max_convrate = 9,
+ 	},
+ 	[lm90] = {
+-		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT,
++		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
++		  | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7b,
+ 		.max_convrate = 9,
+ 	},
+ 	[lm99] = {
+-		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT,
++		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
++		  | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7b,
+ 		.max_convrate = 9,
+ 	},
+ 	[max6646] = {
++		.flags = LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 6,
+ 		.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
+@@ -386,43 +405,51 @@ static const struct lm90_params lm90_params[] = {
+ 		.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
+ 	},
+ 	[max6657] = {
+-		.flags = LM90_PAUSE_FOR_CONFIG,
++		.flags = LM90_PAUSE_FOR_CONFIG | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 8,
+ 		.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
+ 	},
+ 	[max6659] = {
+-		.flags = LM90_HAVE_EMERGENCY,
++		.flags = LM90_HAVE_EMERGENCY | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 8,
+ 		.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
+ 	},
+ 	[max6680] = {
+-		.flags = LM90_HAVE_OFFSET,
++		.flags = LM90_HAVE_OFFSET | LM90_HAVE_CRIT
++		  | LM90_HAVE_CRIT_ALRM_SWP,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 7,
+ 	},
+ 	[max6696] = {
+ 		.flags = LM90_HAVE_EMERGENCY
+-		  | LM90_HAVE_EMERGENCY_ALARM | LM90_HAVE_TEMP3,
++		  | LM90_HAVE_EMERGENCY_ALARM | LM90_HAVE_TEMP3 | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x1c7c,
+ 		.max_convrate = 6,
+ 		.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
+ 	},
+ 	[w83l771] = {
+-		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT,
++		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 8,
+ 	},
+ 	[sa56004] = {
+-		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT,
++		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7b,
+ 		.max_convrate = 9,
+ 		.reg_local_ext = SA56004_REG_R_LOCAL_TEMPL,
+ 	},
+ 	[tmp451] = {
+ 		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
+-		  | LM90_HAVE_BROKEN_ALERT,
++		  | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP | LM90_HAVE_CRIT,
++		.alert_alarms = 0x7c,
++		.max_convrate = 9,
++		.reg_local_ext = TMP451_REG_R_LOCAL_TEMPL,
++	},
++	[tmp461] = {
++		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
++		  | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 9,
+ 		.reg_local_ext = TMP451_REG_R_LOCAL_TEMPL,
+@@ -650,20 +677,22 @@ static int lm90_update_limits(struct device *dev)
+ 	struct i2c_client *client = data->client;
+ 	int val;
+ 
+-	val = lm90_read_reg(client, LM90_REG_R_LOCAL_CRIT);
+-	if (val < 0)
+-		return val;
+-	data->temp8[LOCAL_CRIT] = val;
++	if (data->flags & LM90_HAVE_CRIT) {
++		val = lm90_read_reg(client, LM90_REG_R_LOCAL_CRIT);
++		if (val < 0)
++			return val;
++		data->temp8[LOCAL_CRIT] = val;
+ 
+-	val = lm90_read_reg(client, LM90_REG_R_REMOTE_CRIT);
+-	if (val < 0)
+-		return val;
+-	data->temp8[REMOTE_CRIT] = val;
++		val = lm90_read_reg(client, LM90_REG_R_REMOTE_CRIT);
++		if (val < 0)
++			return val;
++		data->temp8[REMOTE_CRIT] = val;
+ 
+-	val = lm90_read_reg(client, LM90_REG_R_TCRIT_HYST);
+-	if (val < 0)
+-		return val;
+-	data->temp_hyst = val;
++		val = lm90_read_reg(client, LM90_REG_R_TCRIT_HYST);
++		if (val < 0)
++			return val;
++		data->temp_hyst = val;
++	}
+ 
+ 	val = lm90_read_reg(client, LM90_REG_R_REMOTE_LOWH);
+ 	if (val < 0)
+@@ -791,7 +820,7 @@ static int lm90_update_device(struct device *dev)
+ 		val = lm90_read_reg(client, LM90_REG_R_STATUS);
+ 		if (val < 0)
+ 			return val;
+-		data->alarms = val;	/* lower 8 bit of alarms */
++		data->alarms = val & ~LM90_STATUS_BUSY;
+ 
+ 		if (data->kind == max6696) {
+ 			val = lm90_select_remote_channel(data, 1);
+@@ -997,7 +1026,7 @@ static int lm90_get_temp11(struct lm90_data *data, int index)
+ 	s16 temp11 = data->temp11[index];
+ 	int temp;
+ 
+-	if (data->kind == adt7461 || data->kind == tmp451)
++	if (data->flags & LM90_HAVE_EXTENDED_TEMP)
+ 		temp = temp_from_u16_adt7461(data, temp11);
+ 	else if (data->kind == max6646)
+ 		temp = temp_from_u16(temp11);
+@@ -1031,7 +1060,7 @@ static int lm90_set_temp11(struct lm90_data *data, int index, long val)
+ 	if (data->kind == lm99 && index <= 2)
+ 		val -= 16000;
+ 
+-	if (data->kind == adt7461 || data->kind == tmp451)
++	if (data->flags & LM90_HAVE_EXTENDED_TEMP)
+ 		data->temp11[index] = temp_to_u16_adt7461(data, val);
+ 	else if (data->kind == max6646)
+ 		data->temp11[index] = temp_to_u8(val) << 8;
+@@ -1058,7 +1087,7 @@ static int lm90_get_temp8(struct lm90_data *data, int index)
+ 	s8 temp8 = data->temp8[index];
+ 	int temp;
+ 
+-	if (data->kind == adt7461 || data->kind == tmp451)
++	if (data->flags & LM90_HAVE_EXTENDED_TEMP)
+ 		temp = temp_from_u8_adt7461(data, temp8);
+ 	else if (data->kind == max6646)
+ 		temp = temp_from_u8(temp8);
+@@ -1091,7 +1120,7 @@ static int lm90_set_temp8(struct lm90_data *data, int index, long val)
+ 	if (data->kind == lm99 && index == 3)
+ 		val -= 16000;
+ 
+-	if (data->kind == adt7461 || data->kind == tmp451)
++	if (data->flags & LM90_HAVE_EXTENDED_TEMP)
+ 		data->temp8[index] = temp_to_u8_adt7461(data, val);
+ 	else if (data->kind == max6646)
+ 		data->temp8[index] = temp_to_u8(val);
+@@ -1109,7 +1138,7 @@ static int lm90_get_temphyst(struct lm90_data *data, int index)
+ {
+ 	int temp;
+ 
+-	if (data->kind == adt7461 || data->kind == tmp451)
++	if (data->flags & LM90_HAVE_EXTENDED_TEMP)
+ 		temp = temp_from_u8_adt7461(data, data->temp8[index]);
+ 	else if (data->kind == max6646)
+ 		temp = temp_from_u8(data->temp8[index]);
+@@ -1129,7 +1158,7 @@ static int lm90_set_temphyst(struct lm90_data *data, long val)
+ 	int temp;
+ 	int err;
+ 
+-	if (data->kind == adt7461 || data->kind == tmp451)
++	if (data->flags & LM90_HAVE_EXTENDED_TEMP)
+ 		temp = temp_from_u8_adt7461(data, data->temp8[LOCAL_CRIT]);
+ 	else if (data->kind == max6646)
+ 		temp = temp_from_u8(data->temp8[LOCAL_CRIT]);
+@@ -1165,6 +1194,7 @@ static const u8 lm90_temp_emerg_index[3] = {
+ static const u8 lm90_min_alarm_bits[3] = { 5, 3, 11 };
+ static const u8 lm90_max_alarm_bits[3] = { 6, 4, 12 };
+ static const u8 lm90_crit_alarm_bits[3] = { 0, 1, 9 };
++static const u8 lm90_crit_alarm_bits_swapped[3] = { 1, 0, 9 };
+ static const u8 lm90_emergency_alarm_bits[3] = { 15, 13, 14 };
+ static const u8 lm90_fault_bits[3] = { 0, 2, 10 };
+ 
+@@ -1190,7 +1220,10 @@ static int lm90_temp_read(struct device *dev, u32 attr, int channel, long *val)
+ 		*val = (data->alarms >> lm90_max_alarm_bits[channel]) & 1;
+ 		break;
+ 	case hwmon_temp_crit_alarm:
+-		*val = (data->alarms >> lm90_crit_alarm_bits[channel]) & 1;
++		if (data->flags & LM90_HAVE_CRIT_ALRM_SWP)
++			*val = (data->alarms >> lm90_crit_alarm_bits_swapped[channel]) & 1;
++		else
++			*val = (data->alarms >> lm90_crit_alarm_bits[channel]) & 1;
+ 		break;
+ 	case hwmon_temp_emergency_alarm:
+ 		*val = (data->alarms >> lm90_emergency_alarm_bits[channel]) & 1;
+@@ -1438,12 +1471,11 @@ static int lm90_detect(struct i2c_client *client,
+ 	if (man_id < 0 || chip_id < 0 || config1 < 0 || convrate < 0)
+ 		return -ENODEV;
+ 
+-	if (man_id == 0x01 || man_id == 0x5C || man_id == 0x41) {
++	if (man_id == 0x01 || man_id == 0x5C || man_id == 0xA1) {
+ 		config2 = i2c_smbus_read_byte_data(client, LM90_REG_R_CONFIG2);
+ 		if (config2 < 0)
+ 			return -ENODEV;
+-	} else
+-		config2 = 0;		/* Make compiler happy */
++	}
+ 
+ 	if ((address == 0x4C || address == 0x4D)
+ 	 && man_id == 0x01) { /* National Semiconductor */
+@@ -1617,18 +1649,26 @@ static int lm90_detect(struct i2c_client *client,
+ 		 && convrate <= 0x08)
+ 			name = "g781";
+ 	} else
+-	if (address == 0x4C
+-	 && man_id == 0x55) { /* Texas Instruments */
+-		int local_ext;
++	if (man_id == 0x55 && chip_id == 0x00 &&
++	    (config1 & 0x1B) == 0x00 && convrate <= 0x09) {
++		int local_ext, conalert, chen, dfc;
+ 
+ 		local_ext = i2c_smbus_read_byte_data(client,
+ 						     TMP451_REG_R_LOCAL_TEMPL);
+-
+-		if (chip_id == 0x00 /* TMP451 */
+-		 && (config1 & 0x1B) == 0x00
+-		 && convrate <= 0x09
+-		 && (local_ext & 0x0F) == 0x00)
+-			name = "tmp451";
++		conalert = i2c_smbus_read_byte_data(client,
++						    TMP451_REG_CONALERT);
++		chen = i2c_smbus_read_byte_data(client, TMP461_REG_CHEN);
++		dfc = i2c_smbus_read_byte_data(client, TMP461_REG_DFC);
++
++		if ((local_ext & 0x0F) == 0x00 &&
++		    (conalert & 0xf1) == 0x01 &&
++		    (chen & 0xfc) == 0x00 &&
++		    (dfc & 0xfc) == 0x00) {
++			if (address == 0x4c && !(chen & 0x03))
++				name = "tmp451";
++			else if (address >= 0x48 && address <= 0x4f)
++				name = "tmp461";
++		}
+ 	}
+ 
+ 	if (!name) { /* identification failed */
+@@ -1675,7 +1715,7 @@ static int lm90_init_client(struct i2c_client *client, struct lm90_data *data)
+ 	lm90_set_convrate(client, data, 500); /* 500ms; 2Hz conversion rate */
+ 
+ 	/* Check Temperature Range Select */
+-	if (data->kind == adt7461 || data->kind == tmp451) {
++	if (data->flags & LM90_HAVE_EXTENDED_TEMP) {
+ 		if (config & 0x04)
+ 			data->flags |= LM90_FLAG_ADT7461_EXT;
+ 	}
+@@ -1842,11 +1882,14 @@ static int lm90_probe(struct i2c_client *client)
+ 	info->config = data->channel_config;
+ 
+ 	data->channel_config[0] = HWMON_T_INPUT | HWMON_T_MIN | HWMON_T_MAX |
+-		HWMON_T_CRIT | HWMON_T_CRIT_HYST | HWMON_T_MIN_ALARM |
+-		HWMON_T_MAX_ALARM | HWMON_T_CRIT_ALARM;
++		HWMON_T_MIN_ALARM | HWMON_T_MAX_ALARM;
+ 	data->channel_config[1] = HWMON_T_INPUT | HWMON_T_MIN | HWMON_T_MAX |
+-		HWMON_T_CRIT | HWMON_T_CRIT_HYST | HWMON_T_MIN_ALARM |
+-		HWMON_T_MAX_ALARM | HWMON_T_CRIT_ALARM | HWMON_T_FAULT;
++		HWMON_T_MIN_ALARM | HWMON_T_MAX_ALARM | HWMON_T_FAULT;
++
++	if (data->flags & LM90_HAVE_CRIT) {
++		data->channel_config[0] |= HWMON_T_CRIT | HWMON_T_CRIT_ALARM | HWMON_T_CRIT_HYST;
++		data->channel_config[1] |= HWMON_T_CRIT | HWMON_T_CRIT_ALARM | HWMON_T_CRIT_HYST;
++	}
+ 
+ 	if (data->flags & LM90_HAVE_OFFSET)
+ 		data->channel_config[1] |= HWMON_T_OFFSET;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
+index f27523e1a12d7..08df97e0a6654 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
+@@ -277,7 +277,7 @@ static int alloc_srq_wrid(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq)
+ 
+ static void free_srq_wrid(struct hns_roce_srq *srq)
+ {
+-	kfree(srq->wrid);
++	kvfree(srq->wrid);
+ 	srq->wrid = NULL;
+ }
+ 
+diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c
+index ac11943a5ddb0..bf2f30d67949d 100644
+--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
++++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
+@@ -941,7 +941,7 @@ static int qib_user_sdma_queue_pkts(const struct qib_devdata *dd,
+ 					       &addrlimit) ||
+ 			    addrlimit > type_max(typeof(pkt->addrlimit))) {
+ 				ret = -EINVAL;
+-				goto free_pbc;
++				goto free_pkt;
+ 			}
+ 			pkt->addrlimit = addrlimit;
+ 
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 4357d30c15c56..2e53ea261e014 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -1588,7 +1588,13 @@ static const struct dmi_system_id no_hw_res_dmi_table[] = {
+  */
+ static int elantech_change_report_id(struct psmouse *psmouse)
+ {
+-	unsigned char param[2] = { 0x10, 0x03 };
++	/*
++	 * NOTE: the code is expecting to receive param[] as an array of 3
++	 * items (see __ps2_command()), even if in this case only 2 are
++	 * actually needed. Make sure the array size is 3 to avoid potential
++	 * stack out-of-bound accesses.
++	 */
++	unsigned char param[3] = { 0x10, 0x03 };
+ 
+ 	if (elantech_write_reg_params(psmouse, 0x7, param) ||
+ 	    elantech_read_reg_params(psmouse, 0x7, param) ||
+diff --git a/drivers/input/touchscreen/atmel_mxt_ts.c b/drivers/input/touchscreen/atmel_mxt_ts.c
+index b6f75367a284a..8df402a1ed446 100644
+--- a/drivers/input/touchscreen/atmel_mxt_ts.c
++++ b/drivers/input/touchscreen/atmel_mxt_ts.c
+@@ -1839,7 +1839,7 @@ static int mxt_read_info_block(struct mxt_data *data)
+ 	if (error) {
+ 		dev_err(&client->dev, "Error %d parsing object table\n", error);
+ 		mxt_free_object_table(data);
+-		goto err_free_mem;
++		return error;
+ 	}
+ 
+ 	data->object_table = (struct mxt_object *)(id_buf + MXT_OBJECT_START);
+diff --git a/drivers/input/touchscreen/elants_i2c.c b/drivers/input/touchscreen/elants_i2c.c
+index 03a4825359448..c09aefa2661dd 100644
+--- a/drivers/input/touchscreen/elants_i2c.c
++++ b/drivers/input/touchscreen/elants_i2c.c
+@@ -109,6 +109,19 @@
+ #define ELAN_POWERON_DELAY_USEC	500
+ #define ELAN_RESET_DELAY_MSEC	20
+ 
++/* FW boot code version */
++#define BC_VER_H_BYTE_FOR_EKTH3900x1_I2C        0x72
++#define BC_VER_H_BYTE_FOR_EKTH3900x2_I2C        0x82
++#define BC_VER_H_BYTE_FOR_EKTH3900x3_I2C        0x92
++#define BC_VER_H_BYTE_FOR_EKTH5312x1_I2C        0x6D
++#define BC_VER_H_BYTE_FOR_EKTH5312x2_I2C        0x6E
++#define BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C       0x77
++#define BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C       0x78
++#define BC_VER_H_BYTE_FOR_EKTH5312x1_I2C_USB    0x67
++#define BC_VER_H_BYTE_FOR_EKTH5312x2_I2C_USB    0x68
++#define BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C_USB   0x74
++#define BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C_USB   0x75
++
+ enum elants_state {
+ 	ELAN_STATE_NORMAL,
+ 	ELAN_WAIT_QUEUE_HEADER,
+@@ -663,6 +676,37 @@ static int elants_i2c_validate_remark_id(struct elants_data *ts,
+ 	return 0;
+ }
+ 
++static bool elants_i2c_should_check_remark_id(struct elants_data *ts)
++{
++	struct i2c_client *client = ts->client;
++	const u8 bootcode_version = ts->iap_version;
++	bool check;
++
++	/* I2C eKTH3900 and eKTH5312 are NOT support Remark ID */
++	if ((bootcode_version == BC_VER_H_BYTE_FOR_EKTH3900x1_I2C) ||
++	    (bootcode_version == BC_VER_H_BYTE_FOR_EKTH3900x2_I2C) ||
++	    (bootcode_version == BC_VER_H_BYTE_FOR_EKTH3900x3_I2C) ||
++	    (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x1_I2C) ||
++	    (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x2_I2C) ||
++	    (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C) ||
++	    (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C) ||
++	    (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x1_I2C_USB) ||
++	    (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x2_I2C_USB) ||
++	    (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C_USB) ||
++	    (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C_USB)) {
++		dev_dbg(&client->dev,
++			"eKTH3900/eKTH5312(0x%02x) are not support remark id\n",
++			bootcode_version);
++		check = false;
++	} else if (bootcode_version >= 0x60) {
++		check = true;
++	} else {
++		check = false;
++	}
++
++	return check;
++}
++
+ static int elants_i2c_do_update_firmware(struct i2c_client *client,
+ 					 const struct firmware *fw,
+ 					 bool force)
+@@ -676,7 +720,7 @@ static int elants_i2c_do_update_firmware(struct i2c_client *client,
+ 	u16 send_id;
+ 	int page, n_fw_pages;
+ 	int error;
+-	bool check_remark_id = ts->iap_version >= 0x60;
++	bool check_remark_id = elants_i2c_should_check_remark_id(ts);
+ 
+ 	/* Recovery mode detection! */
+ 	if (force) {
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index a06385c55af2a..5fc789f717c8a 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -162,6 +162,7 @@ static const struct goodix_chip_id goodix_chip_ids[] = {
+ 	{ .id = "911", .data = &gt911_chip_data },
+ 	{ .id = "9271", .data = &gt911_chip_data },
+ 	{ .id = "9110", .data = &gt911_chip_data },
++	{ .id = "9111", .data = &gt911_chip_data },
+ 	{ .id = "927", .data = &gt911_chip_data },
+ 	{ .id = "928", .data = &gt911_chip_data },
+ 
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index b5f3f160c8420..eb82f6aac951f 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -2327,7 +2327,7 @@ void mmc_start_host(struct mmc_host *host)
+ 	_mmc_detect_change(host, 0, false);
+ }
+ 
+-void mmc_stop_host(struct mmc_host *host)
++void __mmc_stop_host(struct mmc_host *host)
+ {
+ 	if (host->slot.cd_irq >= 0) {
+ 		mmc_gpio_set_cd_wake(host, false);
+@@ -2336,6 +2336,11 @@ void mmc_stop_host(struct mmc_host *host)
+ 
+ 	host->rescan_disable = 1;
+ 	cancel_delayed_work_sync(&host->detect);
++}
++
++void mmc_stop_host(struct mmc_host *host)
++{
++	__mmc_stop_host(host);
+ 
+ 	/* clear pm flags now and let card drivers set them as needed */
+ 	host->pm_flags = 0;
+diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h
+index db3c9c68875d8..a6c814fdbf0a9 100644
+--- a/drivers/mmc/core/core.h
++++ b/drivers/mmc/core/core.h
+@@ -69,6 +69,7 @@ static inline void mmc_delay(unsigned int ms)
+ 
+ void mmc_rescan(struct work_struct *work);
+ void mmc_start_host(struct mmc_host *host);
++void __mmc_stop_host(struct mmc_host *host);
+ void mmc_stop_host(struct mmc_host *host);
+ 
+ void _mmc_detect_change(struct mmc_host *host, unsigned long delay,
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index 58112999a69ab..864c8c205ff78 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -79,9 +79,18 @@ static void mmc_host_classdev_release(struct device *dev)
+ 	kfree(host);
+ }
+ 
++static int mmc_host_classdev_shutdown(struct device *dev)
++{
++	struct mmc_host *host = cls_dev_to_mmc_host(dev);
++
++	__mmc_stop_host(host);
++	return 0;
++}
++
+ static struct class mmc_host_class = {
+ 	.name		= "mmc_host",
+ 	.dev_release	= mmc_host_classdev_release,
++	.shutdown_pre	= mmc_host_classdev_shutdown,
+ 	.pm		= MMC_HOST_CLASS_DEV_PM_OPS,
+ };
+ 
+diff --git a/drivers/mmc/host/meson-mx-sdhc-mmc.c b/drivers/mmc/host/meson-mx-sdhc-mmc.c
+index 7cd9c0ec2fcfe..8fdd0bbbfa21f 100644
+--- a/drivers/mmc/host/meson-mx-sdhc-mmc.c
++++ b/drivers/mmc/host/meson-mx-sdhc-mmc.c
+@@ -135,6 +135,7 @@ static void meson_mx_sdhc_start_cmd(struct mmc_host *mmc,
+ 				    struct mmc_command *cmd)
+ {
+ 	struct meson_mx_sdhc_host *host = mmc_priv(mmc);
++	bool manual_stop = false;
+ 	u32 ictl, send;
+ 	int pack_len;
+ 
+@@ -172,12 +173,27 @@ static void meson_mx_sdhc_start_cmd(struct mmc_host *mmc,
+ 		else
+ 			/* software flush: */
+ 			ictl |= MESON_SDHC_ICTL_DATA_XFER_OK;
++
++		/*
++		 * Mimic the logic from the vendor driver where (only)
++		 * SD_IO_RW_EXTENDED commands with more than one block set the
++		 * MESON_SDHC_MISC_MANUAL_STOP bit. This fixes the firmware
++		 * download in the brcmfmac driver for a BCM43362/1 card.
++		 * Without this sdio_memcpy_toio() (with a size of 219557
++		 * bytes) times out if MESON_SDHC_MISC_MANUAL_STOP is not set.
++		 */
++		manual_stop = cmd->data->blocks > 1 &&
++			      cmd->opcode == SD_IO_RW_EXTENDED;
+ 	} else {
+ 		pack_len = 0;
+ 
+ 		ictl |= MESON_SDHC_ICTL_RESP_OK;
+ 	}
+ 
++	regmap_update_bits(host->regmap, MESON_SDHC_MISC,
++			   MESON_SDHC_MISC_MANUAL_STOP,
++			   manual_stop ? MESON_SDHC_MISC_MANUAL_STOP : 0);
++
+ 	if (cmd->opcode == MMC_STOP_TRANSMISSION)
+ 		send |= MESON_SDHC_SEND_DATA_STOP;
+ 
+diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
+index fdaa11f92fe6f..a75d3dd34d18c 100644
+--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
++++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
+@@ -441,6 +441,8 @@ static int sdmmc_dlyb_phase_tuning(struct mmci_host *host, u32 opcode)
+ 		return -EINVAL;
+ 	}
+ 
++	writel_relaxed(0, dlyb->base + DLYB_CR);
++
+ 	phase = end_of_len - max_len / 2;
+ 	sdmmc_dlyb_set_cfgr(dlyb, dlyb->unit, phase, false);
+ 
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 8ea9132ebca4e..d50b691f6c44e 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -354,23 +354,6 @@ static void tegra_sdhci_set_tap(struct sdhci_host *host, unsigned int tap)
+ 	}
+ }
+ 
+-static void tegra_sdhci_hs400_enhanced_strobe(struct mmc_host *mmc,
+-					      struct mmc_ios *ios)
+-{
+-	struct sdhci_host *host = mmc_priv(mmc);
+-	u32 val;
+-
+-	val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);
+-
+-	if (ios->enhanced_strobe)
+-		val |= SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;
+-	else
+-		val &= ~SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;
+-
+-	sdhci_writel(host, val, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);
+-
+-}
+-
+ static void tegra_sdhci_reset(struct sdhci_host *host, u8 mask)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+@@ -791,6 +774,32 @@ static void tegra_sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
+ 	}
+ }
+ 
++static void tegra_sdhci_hs400_enhanced_strobe(struct mmc_host *mmc,
++					      struct mmc_ios *ios)
++{
++	struct sdhci_host *host = mmc_priv(mmc);
++	u32 val;
++
++	val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);
++
++	if (ios->enhanced_strobe) {
++		val |= SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;
++		/*
++		 * When CMD13 is sent from mmc_select_hs400es() after
++		 * switching to HS400ES mode, the bus is operating at
++		 * either MMC_HIGH_26_MAX_DTR or MMC_HIGH_52_MAX_DTR.
++		 * To meet Tegra SDHCI requirement at HS400ES mode, force SDHCI
++		 * interface clock to MMC_HS200_MAX_DTR (200 MHz) so that host
++		 * controller CAR clock and the interface clock are rate matched.
++		 */
++		tegra_sdhci_set_clock(host, MMC_HS200_MAX_DTR);
++	} else {
++		val &= ~SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;
++	}
++
++	sdhci_writel(host, val, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);
++}
++
+ static unsigned int tegra_sdhci_get_max_clock(struct sdhci_host *host)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index a4e4e15f574df..fe55c81608daa 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -1466,7 +1466,7 @@ static int bond_option_ad_actor_system_set(struct bonding *bond,
+ 		mac = (u8 *)&newval->value;
+ 	}
+ 
+-	if (!is_valid_ether_addr(mac))
++	if (is_multicast_ether_addr(mac))
+ 		goto err;
+ 
+ 	netdev_dbg(bond->dev, "Setting ad_actor_system to %pM\n", mac);
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 1662c0985eca4..f854d41c6c94d 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -9260,7 +9260,7 @@ static int __maybe_unused igb_suspend(struct device *dev)
+ 	return __igb_shutdown(to_pci_dev(dev), NULL, 0);
+ }
+ 
+-static int __maybe_unused igb_resume(struct device *dev)
++static int __maybe_unused __igb_resume(struct device *dev, bool rpm)
+ {
+ 	struct pci_dev *pdev = to_pci_dev(dev);
+ 	struct net_device *netdev = pci_get_drvdata(pdev);
+@@ -9303,17 +9303,24 @@ static int __maybe_unused igb_resume(struct device *dev)
+ 
+ 	wr32(E1000_WUS, ~0);
+ 
+-	rtnl_lock();
++	if (!rpm)
++		rtnl_lock();
+ 	if (!err && netif_running(netdev))
+ 		err = __igb_open(netdev, true);
+ 
+ 	if (!err)
+ 		netif_device_attach(netdev);
+-	rtnl_unlock();
++	if (!rpm)
++		rtnl_unlock();
+ 
+ 	return err;
+ }
+ 
++static int __maybe_unused igb_resume(struct device *dev)
++{
++	return __igb_resume(dev, false);
++}
++
+ static int __maybe_unused igb_runtime_idle(struct device *dev)
+ {
+ 	struct net_device *netdev = dev_get_drvdata(dev);
+@@ -9332,7 +9339,7 @@ static int __maybe_unused igb_runtime_suspend(struct device *dev)
+ 
+ static int __maybe_unused igb_runtime_resume(struct device *dev)
+ {
+-	return igb_resume(dev);
++	return __igb_resume(dev, true);
+ }
+ 
+ static void igb_shutdown(struct pci_dev *pdev)
+@@ -9448,7 +9455,7 @@ static pci_ers_result_t igb_io_error_detected(struct pci_dev *pdev,
+  *  @pdev: Pointer to PCI device
+  *
+  *  Restart the card from scratch, as if from a cold-boot. Implementation
+- *  resembles the first-half of the igb_resume routine.
++ *  resembles the first-half of the __igb_resume routine.
+  **/
+ static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev)
+ {
+@@ -9488,7 +9495,7 @@ static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev)
+  *
+  *  This callback is called when the error recovery driver tells us that
+  *  its OK to resume normal operation. Implementation resembles the
+- *  second-half of the igb_resume routine.
++ *  second-half of the __igb_resume routine.
+  */
+ static void igb_io_resume(struct pci_dev *pdev)
+ {
+diff --git a/drivers/net/ethernet/marvell/prestera/prestera_main.c b/drivers/net/ethernet/marvell/prestera/prestera_main.c
+index feb69fcd908e3..f406f5b517b02 100644
+--- a/drivers/net/ethernet/marvell/prestera/prestera_main.c
++++ b/drivers/net/ethernet/marvell/prestera/prestera_main.c
+@@ -50,12 +50,14 @@ int prestera_port_pvid_set(struct prestera_port *port, u16 vid)
+ struct prestera_port *prestera_port_find_by_hwid(struct prestera_switch *sw,
+ 						 u32 dev_id, u32 hw_id)
+ {
+-	struct prestera_port *port = NULL;
++	struct prestera_port *port = NULL, *tmp;
+ 
+ 	read_lock(&sw->port_list_lock);
+-	list_for_each_entry(port, &sw->port_list, list) {
+-		if (port->dev_id == dev_id && port->hw_id == hw_id)
++	list_for_each_entry(tmp, &sw->port_list, list) {
++		if (tmp->dev_id == dev_id && tmp->hw_id == hw_id) {
++			port = tmp;
+ 			break;
++		}
+ 	}
+ 	read_unlock(&sw->port_list_lock);
+ 
+@@ -64,12 +66,14 @@ struct prestera_port *prestera_port_find_by_hwid(struct prestera_switch *sw,
+ 
+ struct prestera_port *prestera_find_port(struct prestera_switch *sw, u32 id)
+ {
+-	struct prestera_port *port = NULL;
++	struct prestera_port *port = NULL, *tmp;
+ 
+ 	read_lock(&sw->port_list_lock);
+-	list_for_each_entry(port, &sw->port_list, list) {
+-		if (port->id == id)
++	list_for_each_entry(tmp, &sw->port_list, list) {
++		if (tmp->id == id) {
++			port = tmp;
+ 			break;
++		}
+ 	}
+ 	read_unlock(&sw->port_list_lock);
+ 
+diff --git a/drivers/net/ethernet/micrel/ks8851_par.c b/drivers/net/ethernet/micrel/ks8851_par.c
+index 3bab0cb2b1a56..c7c99cc54ca11 100644
+--- a/drivers/net/ethernet/micrel/ks8851_par.c
++++ b/drivers/net/ethernet/micrel/ks8851_par.c
+@@ -323,6 +323,8 @@ static int ks8851_probe_par(struct platform_device *pdev)
+ 		return ret;
+ 
+ 	netdev->irq = platform_get_irq(pdev, 0);
++	if (netdev->irq < 0)
++		return netdev->irq;
+ 
+ 	return ks8851_probe_common(netdev, dev, msg_enable);
+ }
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov.h
+index 7160b42f51ddd..d0111cb3b40e1 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov.h
+@@ -201,7 +201,7 @@ int qlcnic_sriov_get_vf_vport_info(struct qlcnic_adapter *,
+ 				   struct qlcnic_info *, u16);
+ int qlcnic_sriov_cfg_vf_guest_vlan(struct qlcnic_adapter *, u16, u8);
+ void qlcnic_sriov_free_vlans(struct qlcnic_adapter *);
+-void qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *);
++int qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *);
+ bool qlcnic_sriov_check_any_vlan(struct qlcnic_vf_info *);
+ void qlcnic_sriov_del_vlan_id(struct qlcnic_sriov *,
+ 			      struct qlcnic_vf_info *, u16);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
+index 30e52f9697593..8367891bfb139 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
+@@ -432,7 +432,7 @@ static int qlcnic_sriov_set_guest_vlan_mode(struct qlcnic_adapter *adapter,
+ 					    struct qlcnic_cmd_args *cmd)
+ {
+ 	struct qlcnic_sriov *sriov = adapter->ahw->sriov;
+-	int i, num_vlans;
++	int i, num_vlans, ret;
+ 	u16 *vlans;
+ 
+ 	if (sriov->allowed_vlans)
+@@ -443,7 +443,9 @@ static int qlcnic_sriov_set_guest_vlan_mode(struct qlcnic_adapter *adapter,
+ 	dev_info(&adapter->pdev->dev, "Number of allowed Guest VLANs = %d\n",
+ 		 sriov->num_allowed_vlans);
+ 
+-	qlcnic_sriov_alloc_vlans(adapter);
++	ret = qlcnic_sriov_alloc_vlans(adapter);
++	if (ret)
++		return ret;
+ 
+ 	if (!sriov->any_vlan)
+ 		return 0;
+@@ -2159,7 +2161,7 @@ static int qlcnic_sriov_vf_resume(struct qlcnic_adapter *adapter)
+ 	return err;
+ }
+ 
+-void qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter)
++int qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter)
+ {
+ 	struct qlcnic_sriov *sriov = adapter->ahw->sriov;
+ 	struct qlcnic_vf_info *vf;
+@@ -2169,7 +2171,11 @@ void qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter)
+ 		vf = &sriov->vf_info[i];
+ 		vf->sriov_vlans = kcalloc(sriov->num_allowed_vlans,
+ 					  sizeof(*vf->sriov_vlans), GFP_KERNEL);
++		if (!vf->sriov_vlans)
++			return -ENOMEM;
+ 	}
++
++	return 0;
+ }
+ 
+ void qlcnic_sriov_free_vlans(struct qlcnic_adapter *adapter)
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
+index 447720b93e5ab..e90fa97c0ae6c 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
+@@ -597,7 +597,9 @@ static int __qlcnic_pci_sriov_enable(struct qlcnic_adapter *adapter,
+ 	if (err)
+ 		goto del_flr_queue;
+ 
+-	qlcnic_sriov_alloc_vlans(adapter);
++	err = qlcnic_sriov_alloc_vlans(adapter);
++	if (err)
++		goto del_flr_queue;
+ 
+ 	return err;
+ 
+diff --git a/drivers/net/ethernet/sfc/falcon/rx.c b/drivers/net/ethernet/sfc/falcon/rx.c
+index 966f13e7475dd..11a6aee852e92 100644
+--- a/drivers/net/ethernet/sfc/falcon/rx.c
++++ b/drivers/net/ethernet/sfc/falcon/rx.c
+@@ -728,7 +728,10 @@ static void ef4_init_rx_recycle_ring(struct ef4_nic *efx,
+ 					    efx->rx_bufs_per_page);
+ 	rx_queue->page_ring = kcalloc(page_ring_size,
+ 				      sizeof(*rx_queue->page_ring), GFP_KERNEL);
+-	rx_queue->page_ptr_mask = page_ring_size - 1;
++	if (!rx_queue->page_ring)
++		rx_queue->page_ptr_mask = 0;
++	else
++		rx_queue->page_ptr_mask = page_ring_size - 1;
+ }
+ 
+ void ef4_init_rx_queue(struct ef4_rx_queue *rx_queue)
+diff --git a/drivers/net/ethernet/sfc/rx_common.c b/drivers/net/ethernet/sfc/rx_common.c
+index 19cf7cac1e6e9..8834bcb12fa97 100644
+--- a/drivers/net/ethernet/sfc/rx_common.c
++++ b/drivers/net/ethernet/sfc/rx_common.c
+@@ -150,7 +150,10 @@ static void efx_init_rx_recycle_ring(struct efx_rx_queue *rx_queue)
+ 					    efx->rx_bufs_per_page);
+ 	rx_queue->page_ring = kcalloc(page_ring_size,
+ 				      sizeof(*rx_queue->page_ring), GFP_KERNEL);
+-	rx_queue->page_ptr_mask = page_ring_size - 1;
++	if (!rx_queue->page_ring)
++		rx_queue->page_ptr_mask = 0;
++	else
++		rx_queue->page_ptr_mask = page_ring_size - 1;
+ }
+ 
+ static void efx_fini_rx_recycle_ring(struct efx_rx_queue *rx_queue)
+diff --git a/drivers/net/ethernet/smsc/smc911x.c b/drivers/net/ethernet/smsc/smc911x.c
+index 01069dfaf75c9..288b420f88d42 100644
+--- a/drivers/net/ethernet/smsc/smc911x.c
++++ b/drivers/net/ethernet/smsc/smc911x.c
+@@ -2069,6 +2069,11 @@ static int smc911x_drv_probe(struct platform_device *pdev)
+ 
+ 	ndev->dma = (unsigned char)-1;
+ 	ndev->irq = platform_get_irq(pdev, 0);
++	if (ndev->irq < 0) {
++		ret = ndev->irq;
++		goto release_both;
++	}
++
+ 	lp = netdev_priv(ndev);
+ 	lp->netdev = ndev;
+ #ifdef SMC_DYNAMIC_BUS_CONFIG
+diff --git a/drivers/net/fjes/fjes_main.c b/drivers/net/fjes/fjes_main.c
+index e449d94661225..2a569eea4ee8f 100644
+--- a/drivers/net/fjes/fjes_main.c
++++ b/drivers/net/fjes/fjes_main.c
+@@ -1269,6 +1269,11 @@ static int fjes_probe(struct platform_device *plat_dev)
+ 	hw->hw_res.start = res->start;
+ 	hw->hw_res.size = resource_size(res);
+ 	hw->hw_res.irq = platform_get_irq(plat_dev, 0);
++	if (hw->hw_res.irq < 0) {
++		err = hw->hw_res.irq;
++		goto err_free_control_wq;
++	}
++
+ 	err = fjes_hw_init(&adapter->hw);
+ 	if (err)
+ 		goto err_free_control_wq;
+diff --git a/drivers/net/hamradio/mkiss.c b/drivers/net/hamradio/mkiss.c
+index 920e9f888cc35..63502a85a9751 100644
+--- a/drivers/net/hamradio/mkiss.c
++++ b/drivers/net/hamradio/mkiss.c
+@@ -792,13 +792,14 @@ static void mkiss_close(struct tty_struct *tty)
+ 	 */
+ 	netif_stop_queue(ax->dev);
+ 
+-	/* Free all AX25 frame buffers. */
++	unregister_netdev(ax->dev);
++
++	/* Free all AX25 frame buffers after unreg. */
+ 	kfree(ax->rbuff);
+ 	kfree(ax->xbuff);
+ 
+ 	ax->tty = NULL;
+ 
+-	unregister_netdev(ax->dev);
+ 	free_netdev(ax->dev);
+ }
+ 
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index c666e990900b9..6f7b70522d926 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -64,6 +64,8 @@
+ #define LAN7801_USB_PRODUCT_ID		(0x7801)
+ #define LAN78XX_EEPROM_MAGIC		(0x78A5)
+ #define LAN78XX_OTP_MAGIC		(0x78F3)
++#define AT29M2AF_USB_VENDOR_ID		(0x07C9)
++#define AT29M2AF_USB_PRODUCT_ID	(0x0012)
+ 
+ #define	MII_READ			1
+ #define	MII_WRITE			0
+@@ -4142,6 +4144,10 @@ static const struct usb_device_id products[] = {
+ 	/* LAN7801 USB Gigabit Ethernet Device */
+ 	USB_DEVICE(LAN78XX_USB_VENDOR_ID, LAN7801_USB_PRODUCT_ID),
+ 	},
++	{
++	/* ATM2-AF USB Gigabit Ethernet Device */
++	USB_DEVICE(AT29M2AF_USB_VENDOR_ID, AT29M2AF_USB_PRODUCT_ID),
++	},
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(usb, products);
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index 1d21129f7751c..40ce18a0d0190 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -1244,6 +1244,18 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
+ 		raw_spin_lock_init(&pc->irq_lock[i]);
+ 	}
+ 
++	pc->pctl_desc = *pdata->pctl_desc;
++	pc->pctl_dev = devm_pinctrl_register(dev, &pc->pctl_desc, pc);
++	if (IS_ERR(pc->pctl_dev)) {
++		gpiochip_remove(&pc->gpio_chip);
++		return PTR_ERR(pc->pctl_dev);
++	}
++
++	pc->gpio_range = *pdata->gpio_range;
++	pc->gpio_range.base = pc->gpio_chip.base;
++	pc->gpio_range.gc = &pc->gpio_chip;
++	pinctrl_add_gpio_range(pc->pctl_dev, &pc->gpio_range);
++
+ 	girq = &pc->gpio_chip.irq;
+ 	girq->chip = &bcm2835_gpio_irq_chip;
+ 	girq->parent_handler = bcm2835_gpio_irq_handler;
+@@ -1251,8 +1263,10 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
+ 	girq->parents = devm_kcalloc(dev, BCM2835_NUM_IRQS,
+ 				     sizeof(*girq->parents),
+ 				     GFP_KERNEL);
+-	if (!girq->parents)
++	if (!girq->parents) {
++		pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
+ 		return -ENOMEM;
++	}
+ 
+ 	if (is_7211) {
+ 		pc->wake_irq = devm_kcalloc(dev, BCM2835_NUM_IRQS,
+@@ -1303,21 +1317,10 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
+ 	err = gpiochip_add_data(&pc->gpio_chip, pc);
+ 	if (err) {
+ 		dev_err(dev, "could not add GPIO chip\n");
++		pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
+ 		return err;
+ 	}
+ 
+-	pc->pctl_desc = *pdata->pctl_desc;
+-	pc->pctl_dev = devm_pinctrl_register(dev, &pc->pctl_desc, pc);
+-	if (IS_ERR(pc->pctl_dev)) {
+-		gpiochip_remove(&pc->gpio_chip);
+-		return PTR_ERR(pc->pctl_dev);
+-	}
+-
+-	pc->gpio_range = *pdata->gpio_range;
+-	pc->gpio_range.base = pc->gpio_chip.base;
+-	pc->gpio_range.gc = &pc->gpio_chip;
+-	pinctrl_add_gpio_range(pc->pctl_dev, &pc->gpio_range);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+index 10002b8497fea..fbb7807e0da29 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+@@ -280,8 +280,12 @@ static int mtk_xt_get_gpio_n(void *data, unsigned long eint_n,
+ 	desc = (const struct mtk_pin_desc *)hw->soc->pins;
+ 	*gpio_chip = &hw->chip;
+ 
+-	/* Be greedy to guess first gpio_n is equal to eint_n */
+-	if (desc[eint_n].eint.eint_n == eint_n)
++	/*
++	 * Be greedy to guess first gpio_n is equal to eint_n.
++	 * Only eint virtual eint number is greater than gpio number.
++	 */
++	if (hw->soc->npins > eint_n &&
++	    desc[eint_n].eint.eint_n == eint_n)
+ 		*gpio_n = eint_n;
+ 	else
+ 		*gpio_n = mtk_xt_find_eint_num(hw, eint_n);
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index a5f1f6ba74392..e13723bb2be41 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -1255,10 +1255,10 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl,
+ 		bank_nr = args.args[1] / STM32_GPIO_PINS_PER_BANK;
+ 		bank->gpio_chip.base = args.args[1];
+ 
+-		npins = args.args[2];
+-		while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3,
+-							 ++i, &args))
+-			npins += args.args[2];
++		/* get the last defined gpio line (offset + nb of pins) */
++		npins = args.args[0] + args.args[2];
++		while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, ++i, &args))
++			npins = max(npins, (int)(args.args[0] + args.args[2]));
+ 	} else {
+ 		bank_nr = pctl->nbanks;
+ 		bank->gpio_chip.base = bank_nr * STM32_GPIO_PINS_PER_BANK;
+diff --git a/drivers/platform/x86/intel_pmc_core_pltdrv.c b/drivers/platform/x86/intel_pmc_core_pltdrv.c
+index 73797680b895c..15ca8afdd973d 100644
+--- a/drivers/platform/x86/intel_pmc_core_pltdrv.c
++++ b/drivers/platform/x86/intel_pmc_core_pltdrv.c
+@@ -65,7 +65,7 @@ static int __init pmc_core_platform_init(void)
+ 
+ 	retval = platform_device_register(pmc_core_device);
+ 	if (retval)
+-		kfree(pmc_core_device);
++		platform_device_put(pmc_core_device);
+ 
+ 	return retval;
+ }
+diff --git a/drivers/spi/spi-armada-3700.c b/drivers/spi/spi-armada-3700.c
+index 46feafe4e201c..d8cc4b270644a 100644
+--- a/drivers/spi/spi-armada-3700.c
++++ b/drivers/spi/spi-armada-3700.c
+@@ -901,7 +901,7 @@ static int a3700_spi_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ error_clk:
+-	clk_disable_unprepare(spi->clk);
++	clk_unprepare(spi->clk);
+ error:
+ 	spi_master_put(master);
+ out:
+diff --git a/drivers/tee/optee/shm_pool.c b/drivers/tee/optee/shm_pool.c
+index c41a9a501a6e9..fa75024f16f7f 100644
+--- a/drivers/tee/optee/shm_pool.c
++++ b/drivers/tee/optee/shm_pool.c
+@@ -41,10 +41,8 @@ static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
+ 			goto err;
+ 		}
+ 
+-		for (i = 0; i < nr_pages; i++) {
+-			pages[i] = page;
+-			page++;
+-		}
++		for (i = 0; i < nr_pages; i++)
++			pages[i] = page + i;
+ 
+ 		shm->flags |= TEE_SHM_REGISTER;
+ 		rc = optee_shm_register(shm->ctx, shm, pages, nr_pages,
+diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
+index 8a9384a64f3e2..499fccba3d74b 100644
+--- a/drivers/tee/tee_shm.c
++++ b/drivers/tee/tee_shm.c
+@@ -1,11 +1,11 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2015-2016, Linaro Limited
++ * Copyright (c) 2015-2017, 2019-2021 Linaro Limited
+  */
++#include <linux/anon_inodes.h>
+ #include <linux/device.h>
+-#include <linux/dma-buf.h>
+-#include <linux/fdtable.h>
+ #include <linux/idr.h>
++#include <linux/mm.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/tee_drv.h>
+@@ -28,16 +28,8 @@ static void release_registered_pages(struct tee_shm *shm)
+ 	}
+ }
+ 
+-static void tee_shm_release(struct tee_shm *shm)
++static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)
+ {
+-	struct tee_device *teedev = shm->ctx->teedev;
+-
+-	if (shm->flags & TEE_SHM_DMA_BUF) {
+-		mutex_lock(&teedev->mutex);
+-		idr_remove(&teedev->idr, shm->id);
+-		mutex_unlock(&teedev->mutex);
+-	}
+-
+ 	if (shm->flags & TEE_SHM_POOL) {
+ 		struct tee_shm_pool_mgr *poolm;
+ 
+@@ -64,45 +56,6 @@ static void tee_shm_release(struct tee_shm *shm)
+ 	tee_device_put(teedev);
+ }
+ 
+-static struct sg_table *tee_shm_op_map_dma_buf(struct dma_buf_attachment
+-			*attach, enum dma_data_direction dir)
+-{
+-	return NULL;
+-}
+-
+-static void tee_shm_op_unmap_dma_buf(struct dma_buf_attachment *attach,
+-				     struct sg_table *table,
+-				     enum dma_data_direction dir)
+-{
+-}
+-
+-static void tee_shm_op_release(struct dma_buf *dmabuf)
+-{
+-	struct tee_shm *shm = dmabuf->priv;
+-
+-	tee_shm_release(shm);
+-}
+-
+-static int tee_shm_op_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+-{
+-	struct tee_shm *shm = dmabuf->priv;
+-	size_t size = vma->vm_end - vma->vm_start;
+-
+-	/* Refuse sharing shared memory provided by application */
+-	if (shm->flags & TEE_SHM_USER_MAPPED)
+-		return -EINVAL;
+-
+-	return remap_pfn_range(vma, vma->vm_start, shm->paddr >> PAGE_SHIFT,
+-			       size, vma->vm_page_prot);
+-}
+-
+-static const struct dma_buf_ops tee_shm_dma_buf_ops = {
+-	.map_dma_buf = tee_shm_op_map_dma_buf,
+-	.unmap_dma_buf = tee_shm_op_unmap_dma_buf,
+-	.release = tee_shm_op_release,
+-	.mmap = tee_shm_op_mmap,
+-};
+-
+ struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
+ {
+ 	struct tee_device *teedev = ctx->teedev;
+@@ -137,6 +90,7 @@ struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
+ 		goto err_dev_put;
+ 	}
+ 
++	refcount_set(&shm->refcount, 1);
+ 	shm->flags = flags | TEE_SHM_POOL;
+ 	shm->ctx = ctx;
+ 	if (flags & TEE_SHM_DMA_BUF)
+@@ -150,10 +104,7 @@ struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
+ 		goto err_kfree;
+ 	}
+ 
+-
+ 	if (flags & TEE_SHM_DMA_BUF) {
+-		DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+-
+ 		mutex_lock(&teedev->mutex);
+ 		shm->id = idr_alloc(&teedev->idr, shm, 1, 0, GFP_KERNEL);
+ 		mutex_unlock(&teedev->mutex);
+@@ -161,28 +112,11 @@ struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
+ 			ret = ERR_PTR(shm->id);
+ 			goto err_pool_free;
+ 		}
+-
+-		exp_info.ops = &tee_shm_dma_buf_ops;
+-		exp_info.size = shm->size;
+-		exp_info.flags = O_RDWR;
+-		exp_info.priv = shm;
+-
+-		shm->dmabuf = dma_buf_export(&exp_info);
+-		if (IS_ERR(shm->dmabuf)) {
+-			ret = ERR_CAST(shm->dmabuf);
+-			goto err_rem;
+-		}
+ 	}
+ 
+ 	teedev_ctx_get(ctx);
+ 
+ 	return shm;
+-err_rem:
+-	if (flags & TEE_SHM_DMA_BUF) {
+-		mutex_lock(&teedev->mutex);
+-		idr_remove(&teedev->idr, shm->id);
+-		mutex_unlock(&teedev->mutex);
+-	}
+ err_pool_free:
+ 	poolm->ops->free(poolm, shm);
+ err_kfree:
+@@ -243,6 +177,7 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
+ 		goto err;
+ 	}
+ 
++	refcount_set(&shm->refcount, 1);
+ 	shm->flags = flags | TEE_SHM_REGISTER;
+ 	shm->ctx = ctx;
+ 	shm->id = -1;
+@@ -303,22 +238,6 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
+ 		goto err;
+ 	}
+ 
+-	if (flags & TEE_SHM_DMA_BUF) {
+-		DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+-
+-		exp_info.ops = &tee_shm_dma_buf_ops;
+-		exp_info.size = shm->size;
+-		exp_info.flags = O_RDWR;
+-		exp_info.priv = shm;
+-
+-		shm->dmabuf = dma_buf_export(&exp_info);
+-		if (IS_ERR(shm->dmabuf)) {
+-			ret = ERR_CAST(shm->dmabuf);
+-			teedev->desc->ops->shm_unregister(ctx, shm);
+-			goto err;
+-		}
+-	}
+-
+ 	return shm;
+ err:
+ 	if (shm) {
+@@ -336,6 +255,35 @@ err:
+ }
+ EXPORT_SYMBOL_GPL(tee_shm_register);
+ 
++static int tee_shm_fop_release(struct inode *inode, struct file *filp)
++{
++	tee_shm_put(filp->private_data);
++	return 0;
++}
++
++static int tee_shm_fop_mmap(struct file *filp, struct vm_area_struct *vma)
++{
++	struct tee_shm *shm = filp->private_data;
++	size_t size = vma->vm_end - vma->vm_start;
++
++	/* Refuse sharing shared memory provided by application */
++	if (shm->flags & TEE_SHM_USER_MAPPED)
++		return -EINVAL;
++
++	/* check for overflowing the buffer's size */
++	if (vma->vm_pgoff + vma_pages(vma) > shm->size >> PAGE_SHIFT)
++		return -EINVAL;
++
++	return remap_pfn_range(vma, vma->vm_start, shm->paddr >> PAGE_SHIFT,
++			       size, vma->vm_page_prot);
++}
++
++static const struct file_operations tee_shm_fops = {
++	.owner = THIS_MODULE,
++	.release = tee_shm_fop_release,
++	.mmap = tee_shm_fop_mmap,
++};
++
+ /**
+  * tee_shm_get_fd() - Increase reference count and return file descriptor
+  * @shm:	Shared memory handle
+@@ -348,10 +296,11 @@ int tee_shm_get_fd(struct tee_shm *shm)
+ 	if (!(shm->flags & TEE_SHM_DMA_BUF))
+ 		return -EINVAL;
+ 
+-	get_dma_buf(shm->dmabuf);
+-	fd = dma_buf_fd(shm->dmabuf, O_CLOEXEC);
++	/* matched by tee_shm_put() in tee_shm_op_release() */
++	refcount_inc(&shm->refcount);
++	fd = anon_inode_getfd("tee_shm", &tee_shm_fops, shm, O_RDWR);
+ 	if (fd < 0)
+-		dma_buf_put(shm->dmabuf);
++		tee_shm_put(shm);
+ 	return fd;
+ }
+ 
+@@ -361,17 +310,7 @@ int tee_shm_get_fd(struct tee_shm *shm)
+  */
+ void tee_shm_free(struct tee_shm *shm)
+ {
+-	/*
+-	 * dma_buf_put() decreases the dmabuf reference counter and will
+-	 * call tee_shm_release() when the last reference is gone.
+-	 *
+-	 * In the case of driver private memory we call tee_shm_release
+-	 * directly instead as it doesn't have a reference counter.
+-	 */
+-	if (shm->flags & TEE_SHM_DMA_BUF)
+-		dma_buf_put(shm->dmabuf);
+-	else
+-		tee_shm_release(shm);
++	tee_shm_put(shm);
+ }
+ EXPORT_SYMBOL_GPL(tee_shm_free);
+ 
+@@ -478,10 +417,15 @@ struct tee_shm *tee_shm_get_from_id(struct tee_context *ctx, int id)
+ 	teedev = ctx->teedev;
+ 	mutex_lock(&teedev->mutex);
+ 	shm = idr_find(&teedev->idr, id);
++	/*
++	 * If the tee_shm was found in the IDR it must have a refcount
++	 * larger than 0 due to the guarantee in tee_shm_put() below. So
++	 * it's safe to use refcount_inc().
++	 */
+ 	if (!shm || shm->ctx != ctx)
+ 		shm = ERR_PTR(-EINVAL);
+-	else if (shm->flags & TEE_SHM_DMA_BUF)
+-		get_dma_buf(shm->dmabuf);
++	else
++		refcount_inc(&shm->refcount);
+ 	mutex_unlock(&teedev->mutex);
+ 	return shm;
+ }
+@@ -493,7 +437,24 @@ EXPORT_SYMBOL_GPL(tee_shm_get_from_id);
+  */
+ void tee_shm_put(struct tee_shm *shm)
+ {
+-	if (shm->flags & TEE_SHM_DMA_BUF)
+-		dma_buf_put(shm->dmabuf);
++	struct tee_device *teedev = shm->ctx->teedev;
++	bool do_release = false;
++
++	mutex_lock(&teedev->mutex);
++	if (refcount_dec_and_test(&shm->refcount)) {
++		/*
++		 * refcount has reached 0, we must now remove it from the
++		 * IDR before releasing the mutex. This will guarantee that
++		 * the refcount_inc() in tee_shm_get_from_id() never starts
++		 * from 0.
++		 */
++		if (shm->flags & TEE_SHM_DMA_BUF)
++			idr_remove(&teedev->idr, shm->id);
++		do_release = true;
++	}
++	mutex_unlock(&teedev->mutex);
++
++	if (do_release)
++		tee_shm_release(teedev, shm);
+ }
+ EXPORT_SYMBOL_GPL(tee_shm_put);
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index a9cb647bac6fb..a40be8b448c24 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -860,19 +860,23 @@ int gether_register_netdev(struct net_device *net)
+ {
+ 	struct eth_dev *dev;
+ 	struct usb_gadget *g;
+-	struct sockaddr sa;
+ 	int status;
+ 
+ 	if (!net->dev.parent)
+ 		return -EINVAL;
+ 	dev = netdev_priv(net);
+ 	g = dev->gadget;
++
++	memcpy(net->dev_addr, dev->dev_mac, ETH_ALEN);
++	net->addr_assign_type = NET_ADDR_RANDOM;
++
+ 	status = register_netdev(net);
+ 	if (status < 0) {
+ 		dev_dbg(&g->dev, "register_netdev failed, %d\n", status);
+ 		return status;
+ 	} else {
+ 		INFO(dev, "HOST MAC %pM\n", dev->host_mac);
++		INFO(dev, "MAC %pM\n", dev->dev_mac);
+ 
+ 		/* two kinds of host-initiated state changes:
+ 		 *  - iff DATA transfer is active, carrier is "on"
+@@ -880,15 +884,6 @@ int gether_register_netdev(struct net_device *net)
+ 		 */
+ 		netif_carrier_off(net);
+ 	}
+-	sa.sa_family = net->type;
+-	memcpy(sa.sa_data, dev->dev_mac, ETH_ALEN);
+-	rtnl_lock();
+-	status = dev_set_mac_address(net, &sa, NULL);
+-	rtnl_unlock();
+-	if (status)
+-		pr_warn("cannot set self ethernet address: %d\n", status);
+-	else
+-		INFO(dev, "MAC %pM\n", dev->dev_mac);
+ 
+ 	return status;
+ }
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 8e6855e7ed836..8ed881fd7440d 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -603,13 +603,25 @@ static int ceph_finish_async_create(struct inode *dir, struct dentry *dentry,
+ 	in.cap.realm = cpu_to_le64(ci->i_snap_realm->ino);
+ 	in.cap.flags = CEPH_CAP_FLAG_AUTH;
+ 	in.ctime = in.mtime = in.atime = iinfo.btime;
+-	in.mode = cpu_to_le32((u32)mode);
+ 	in.truncate_seq = cpu_to_le32(1);
+ 	in.truncate_size = cpu_to_le64(-1ULL);
+ 	in.xattr_version = cpu_to_le64(1);
+ 	in.uid = cpu_to_le32(from_kuid(&init_user_ns, current_fsuid()));
+-	in.gid = cpu_to_le32(from_kgid(&init_user_ns, dir->i_mode & S_ISGID ?
+-				dir->i_gid : current_fsgid()));
++	if (dir->i_mode & S_ISGID) {
++		in.gid = cpu_to_le32(from_kgid(&init_user_ns, dir->i_gid));
++
++		/* Directories always inherit the setgid bit. */
++		if (S_ISDIR(mode))
++			mode |= S_ISGID;
++		else if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP) &&
++			 !in_group_p(dir->i_gid) &&
++			 !capable_wrt_inode_uidgid(dir, CAP_FSETID))
++			mode &= ~S_ISGID;
++	} else {
++		in.gid = cpu_to_le32(from_kgid(&init_user_ns, current_fsgid()));
++	}
++	in.mode = cpu_to_le32((u32)mode);
++
+ 	in.nlink = cpu_to_le32(1);
+ 	in.max_size = cpu_to_le64(lo->stripe_unit);
+ 
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index c2f237653f68e..b8c9df6ab67f5 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -136,14 +136,24 @@ int ext4_datasem_ensure_credits(handle_t *handle, struct inode *inode,
+ static int ext4_ext_get_access(handle_t *handle, struct inode *inode,
+ 				struct ext4_ext_path *path)
+ {
++	int err = 0;
++
+ 	if (path->p_bh) {
+ 		/* path points to block */
+ 		BUFFER_TRACE(path->p_bh, "get_write_access");
+-		return ext4_journal_get_write_access(handle, path->p_bh);
++		err = ext4_journal_get_write_access(handle, path->p_bh);
++		/*
++		 * The extent buffer's verified bit will be set again in
++		 * __ext4_ext_dirty(). We could leave an inconsistent
++		 * buffer if the extents updating procudure break off du
++		 * to some error happens, force to check it again.
++		 */
++		if (!err)
++			clear_buffer_verified(path->p_bh);
+ 	}
+ 	/* path points to leaf/index in inode body */
+ 	/* we use in-core data, no need to protect them */
+-	return 0;
++	return err;
+ }
+ 
+ /*
+@@ -164,6 +174,9 @@ static int __ext4_ext_dirty(const char *where, unsigned int line,
+ 		/* path points to block */
+ 		err = __ext4_handle_dirty_metadata(where, line, handle,
+ 						   inode, path->p_bh);
++		/* Extents updating done, re-set verified flag */
++		if (!err)
++			set_buffer_verified(path->p_bh);
+ 	} else {
+ 		/* path points to leaf/index in inode body */
+ 		err = ext4_mark_inode_dirty(handle, inode);
+@@ -353,9 +366,13 @@ static int ext4_valid_extent_idx(struct inode *inode,
+ 
+ static int ext4_valid_extent_entries(struct inode *inode,
+ 				     struct ext4_extent_header *eh,
+-				     ext4_fsblk_t *pblk, int depth)
++				     ext4_lblk_t lblk, ext4_fsblk_t *pblk,
++				     int depth)
+ {
+ 	unsigned short entries;
++	ext4_lblk_t lblock = 0;
++	ext4_lblk_t prev = 0;
++
+ 	if (eh->eh_entries == 0)
+ 		return 1;
+ 
+@@ -364,31 +381,51 @@ static int ext4_valid_extent_entries(struct inode *inode,
+ 	if (depth == 0) {
+ 		/* leaf entries */
+ 		struct ext4_extent *ext = EXT_FIRST_EXTENT(eh);
+-		ext4_lblk_t lblock = 0;
+-		ext4_lblk_t prev = 0;
+-		int len = 0;
++
++		/*
++		 * The logical block in the first entry should equal to
++		 * the number in the index block.
++		 */
++		if (depth != ext_depth(inode) &&
++		    lblk != le32_to_cpu(ext->ee_block))
++			return 0;
+ 		while (entries) {
+ 			if (!ext4_valid_extent(inode, ext))
+ 				return 0;
+ 
+ 			/* Check for overlapping extents */
+ 			lblock = le32_to_cpu(ext->ee_block);
+-			len = ext4_ext_get_actual_len(ext);
+ 			if ((lblock <= prev) && prev) {
+ 				*pblk = ext4_ext_pblock(ext);
+ 				return 0;
+ 			}
++			prev = lblock + ext4_ext_get_actual_len(ext) - 1;
+ 			ext++;
+ 			entries--;
+-			prev = lblock + len - 1;
+ 		}
+ 	} else {
+ 		struct ext4_extent_idx *ext_idx = EXT_FIRST_INDEX(eh);
++
++		/*
++		 * The logical block in the first entry should equal to
++		 * the number in the parent index block.
++		 */
++		if (depth != ext_depth(inode) &&
++		    lblk != le32_to_cpu(ext_idx->ei_block))
++			return 0;
+ 		while (entries) {
+ 			if (!ext4_valid_extent_idx(inode, ext_idx))
+ 				return 0;
++
++			/* Check for overlapping index extents */
++			lblock = le32_to_cpu(ext_idx->ei_block);
++			if ((lblock <= prev) && prev) {
++				*pblk = ext4_idx_pblock(ext_idx);
++				return 0;
++			}
+ 			ext_idx++;
+ 			entries--;
++			prev = lblock;
+ 		}
+ 	}
+ 	return 1;
+@@ -396,7 +433,7 @@ static int ext4_valid_extent_entries(struct inode *inode,
+ 
+ static int __ext4_ext_check(const char *function, unsigned int line,
+ 			    struct inode *inode, struct ext4_extent_header *eh,
+-			    int depth, ext4_fsblk_t pblk)
++			    int depth, ext4_fsblk_t pblk, ext4_lblk_t lblk)
+ {
+ 	const char *error_msg;
+ 	int max = 0, err = -EFSCORRUPTED;
+@@ -422,7 +459,7 @@ static int __ext4_ext_check(const char *function, unsigned int line,
+ 		error_msg = "invalid eh_entries";
+ 		goto corrupted;
+ 	}
+-	if (!ext4_valid_extent_entries(inode, eh, &pblk, depth)) {
++	if (!ext4_valid_extent_entries(inode, eh, lblk, &pblk, depth)) {
+ 		error_msg = "invalid extent entries";
+ 		goto corrupted;
+ 	}
+@@ -452,7 +489,7 @@ corrupted:
+ }
+ 
+ #define ext4_ext_check(inode, eh, depth, pblk)			\
+-	__ext4_ext_check(__func__, __LINE__, (inode), (eh), (depth), (pblk))
++	__ext4_ext_check(__func__, __LINE__, (inode), (eh), (depth), (pblk), 0)
+ 
+ int ext4_ext_check_inode(struct inode *inode)
+ {
+@@ -485,16 +522,18 @@ static void ext4_cache_extents(struct inode *inode,
+ 
+ static struct buffer_head *
+ __read_extent_tree_block(const char *function, unsigned int line,
+-			 struct inode *inode, ext4_fsblk_t pblk, int depth,
+-			 int flags)
++			 struct inode *inode, struct ext4_extent_idx *idx,
++			 int depth, int flags)
+ {
+ 	struct buffer_head		*bh;
+ 	int				err;
+ 	gfp_t				gfp_flags = __GFP_MOVABLE | GFP_NOFS;
++	ext4_fsblk_t			pblk;
+ 
+ 	if (flags & EXT4_EX_NOFAIL)
+ 		gfp_flags |= __GFP_NOFAIL;
+ 
++	pblk = ext4_idx_pblock(idx);
+ 	bh = sb_getblk_gfp(inode->i_sb, pblk, gfp_flags);
+ 	if (unlikely(!bh))
+ 		return ERR_PTR(-ENOMEM);
+@@ -507,8 +546,8 @@ __read_extent_tree_block(const char *function, unsigned int line,
+ 	}
+ 	if (buffer_verified(bh) && !(flags & EXT4_EX_FORCE_CACHE))
+ 		return bh;
+-	err = __ext4_ext_check(function, line, inode,
+-			       ext_block_hdr(bh), depth, pblk);
++	err = __ext4_ext_check(function, line, inode, ext_block_hdr(bh),
++			       depth, pblk, le32_to_cpu(idx->ei_block));
+ 	if (err)
+ 		goto errout;
+ 	set_buffer_verified(bh);
+@@ -526,8 +565,8 @@ errout:
+ 
+ }
+ 
+-#define read_extent_tree_block(inode, pblk, depth, flags)		\
+-	__read_extent_tree_block(__func__, __LINE__, (inode), (pblk),   \
++#define read_extent_tree_block(inode, idx, depth, flags)		\
++	__read_extent_tree_block(__func__, __LINE__, (inode), (idx),	\
+ 				 (depth), (flags))
+ 
+ /*
+@@ -577,8 +616,7 @@ int ext4_ext_precache(struct inode *inode)
+ 			i--;
+ 			continue;
+ 		}
+-		bh = read_extent_tree_block(inode,
+-					    ext4_idx_pblock(path[i].p_idx++),
++		bh = read_extent_tree_block(inode, path[i].p_idx++,
+ 					    depth - i - 1,
+ 					    EXT4_EX_FORCE_CACHE);
+ 		if (IS_ERR(bh)) {
+@@ -883,8 +921,7 @@ ext4_find_extent(struct inode *inode, ext4_lblk_t block,
+ 		path[ppos].p_depth = i;
+ 		path[ppos].p_ext = NULL;
+ 
+-		bh = read_extent_tree_block(inode, path[ppos].p_block, --i,
+-					    flags);
++		bh = read_extent_tree_block(inode, path[ppos].p_idx, --i, flags);
+ 		if (IS_ERR(bh)) {
+ 			ret = PTR_ERR(bh);
+ 			goto err;
+@@ -1489,7 +1526,6 @@ static int ext4_ext_search_right(struct inode *inode,
+ 	struct ext4_extent_header *eh;
+ 	struct ext4_extent_idx *ix;
+ 	struct ext4_extent *ex;
+-	ext4_fsblk_t block;
+ 	int depth;	/* Note, NOT eh_depth; depth from top of tree */
+ 	int ee_len;
+ 
+@@ -1556,20 +1592,17 @@ got_index:
+ 	 * follow it and find the closest allocated
+ 	 * block to the right */
+ 	ix++;
+-	block = ext4_idx_pblock(ix);
+ 	while (++depth < path->p_depth) {
+ 		/* subtract from p_depth to get proper eh_depth */
+-		bh = read_extent_tree_block(inode, block,
+-					    path->p_depth - depth, 0);
++		bh = read_extent_tree_block(inode, ix, path->p_depth - depth, 0);
+ 		if (IS_ERR(bh))
+ 			return PTR_ERR(bh);
+ 		eh = ext_block_hdr(bh);
+ 		ix = EXT_FIRST_INDEX(eh);
+-		block = ext4_idx_pblock(ix);
+ 		put_bh(bh);
+ 	}
+ 
+-	bh = read_extent_tree_block(inode, block, path->p_depth - depth, 0);
++	bh = read_extent_tree_block(inode, ix, path->p_depth - depth, 0);
+ 	if (IS_ERR(bh))
+ 		return PTR_ERR(bh);
+ 	eh = ext_block_hdr(bh);
+@@ -2948,9 +2981,9 @@ again:
+ 			ext_debug(inode, "move to level %d (block %llu)\n",
+ 				  i + 1, ext4_idx_pblock(path[i].p_idx));
+ 			memset(path + i + 1, 0, sizeof(*path));
+-			bh = read_extent_tree_block(inode,
+-				ext4_idx_pblock(path[i].p_idx), depth - i - 1,
+-				EXT4_EX_NOCACHE);
++			bh = read_extent_tree_block(inode, path[i].p_idx,
++						    depth - i - 1,
++						    EXT4_EX_NOCACHE);
+ 			if (IS_ERR(bh)) {
+ 				/* should we reset i_size? */
+ 				err = PTR_ERR(bh);
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index 65afcc3cc68a0..f44c60114379e 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -680,8 +680,17 @@ static int __f2fs_setxattr(struct inode *inode, int index,
+ 	}
+ 
+ 	last = here;
+-	while (!IS_XATTR_LAST_ENTRY(last))
++	while (!IS_XATTR_LAST_ENTRY(last)) {
++		if ((void *)(last) + sizeof(__u32) > last_base_addr ||
++			(void *)XATTR_NEXT_ENTRY(last) > last_base_addr) {
++			f2fs_err(F2FS_I_SB(inode), "inode (%lu) has invalid last xattr entry, entry_size: %zu",
++					inode->i_ino, ENTRY_SIZE(last));
++			set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_FSCK);
++			error = -EFSCORRUPTED;
++			goto exit;
++		}
+ 		last = XATTR_NEXT_ENTRY(last);
++	}
+ 
+ 	newsize = XATTR_ALIGN(sizeof(struct f2fs_xattr_entry) + len + size);
+ 
+diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
+index 459e9a76d7e6f..0c6c1de6f3b77 100644
+--- a/include/linux/tee_drv.h
++++ b/include/linux/tee_drv.h
+@@ -195,7 +195,7 @@ int tee_session_calc_client_uuid(uuid_t *uuid, u32 connection_method,
+  * @offset:	offset of buffer in user space
+  * @pages:	locked pages from userspace
+  * @num_pages:	number of locked pages
+- * @dmabuf:	dmabuf used to for exporting to user space
++ * @refcount:	reference counter
+  * @flags:	defined by TEE_SHM_* in tee_drv.h
+  * @id:		unique id of a shared memory object on this device
+  *
+@@ -210,7 +210,7 @@ struct tee_shm {
+ 	unsigned int offset;
+ 	struct page **pages;
+ 	size_t num_pages;
+-	struct dma_buf *dmabuf;
++	refcount_t refcount;
+ 	u32 flags;
+ 	int id;
+ };
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 04e87f4b9417c..a960de68ac69e 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -7,9 +7,27 @@
+ #include <uapi/linux/udp.h>
+ #include <uapi/linux/virtio_net.h>
+ 
++static inline bool virtio_net_hdr_match_proto(__be16 protocol, __u8 gso_type)
++{
++	switch (gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
++	case VIRTIO_NET_HDR_GSO_TCPV4:
++		return protocol == cpu_to_be16(ETH_P_IP);
++	case VIRTIO_NET_HDR_GSO_TCPV6:
++		return protocol == cpu_to_be16(ETH_P_IPV6);
++	case VIRTIO_NET_HDR_GSO_UDP:
++		return protocol == cpu_to_be16(ETH_P_IP) ||
++		       protocol == cpu_to_be16(ETH_P_IPV6);
++	default:
++		return false;
++	}
++}
++
+ static inline int virtio_net_hdr_set_proto(struct sk_buff *skb,
+ 					   const struct virtio_net_hdr *hdr)
+ {
++	if (skb->protocol)
++		return 0;
++
+ 	switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
+ 	case VIRTIO_NET_HDR_GSO_TCPV4:
+ 	case VIRTIO_NET_HDR_GSO_UDP:
+@@ -88,9 +106,12 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 			if (!skb->protocol) {
+ 				__be16 protocol = dev_parse_header_protocol(skb);
+ 
+-				virtio_net_hdr_set_proto(skb, hdr);
+-				if (protocol && protocol != skb->protocol)
++				if (!protocol)
++					virtio_net_hdr_set_proto(skb, hdr);
++				else if (!virtio_net_hdr_match_proto(protocol, hdr->gso_type))
+ 					return -EINVAL;
++				else
++					skb->protocol = protocol;
+ 			}
+ retry:
+ 			if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 01445ddff58d8..aef267c6a7246 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1938,6 +1938,7 @@ retry:
+ 	else if (ret == 0)
+ 		if (soft_offline_free_page(page) && try_again) {
+ 			try_again = false;
++			flags &= ~MF_COUNT_INCREASED;
+ 			goto retry;
+ 		}
+ 
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 3ca4898f3f249..c8b1592dff73d 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -2222,8 +2222,8 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
+ 			 * memory with both reclaim and compact as well.
+ 			 */
+ 			if (!page && (gfp & __GFP_DIRECT_RECLAIM))
+-				page = __alloc_pages_node(hpage_node,
+-								gfp, order);
++				page = __alloc_pages_nodemask(gfp, order,
++							hpage_node, nmask);
+ 
+ 			goto out;
+ 		}
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 269ee89d2c2be..22278807b3f36 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -85,8 +85,10 @@ static void ax25_kill_by_device(struct net_device *dev)
+ again:
+ 	ax25_for_each(s, &ax25_list) {
+ 		if (s->ax25_dev == ax25_dev) {
+-			s->ax25_dev = NULL;
+ 			spin_unlock_bh(&ax25_list_lock);
++			lock_sock(s->sk);
++			s->ax25_dev = NULL;
++			release_sock(s->sk);
+ 			ax25_disconnect(s, ENETUNREACH);
+ 			spin_lock_bh(&ax25_list_lock);
+ 
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index e429dbb10df71..d46ed4cbe7717 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -1217,7 +1217,10 @@ static int ieee80211_start_ap(struct wiphy *wiphy, struct net_device *dev,
+ 	return 0;
+ 
+ error:
++	mutex_lock(&local->mtx);
+ 	ieee80211_vif_release_channel(sdata);
++	mutex_unlock(&local->mtx);
++
+ 	return err;
+ }
+ 
+diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c
+index b35e8d9a5b37e..33c13edbca4bb 100644
+--- a/net/netfilter/nfnetlink_log.c
++++ b/net/netfilter/nfnetlink_log.c
+@@ -557,7 +557,8 @@ __build_packet_message(struct nfnl_log_net *log,
+ 		goto nla_put_failure;
+ 
+ 	if (indev && skb->dev &&
+-	    skb->mac_header != skb->network_header) {
++	    skb_mac_header_was_set(skb) &&
++	    skb_mac_header_len(skb) != 0) {
+ 		struct nfulnl_msg_packet_hw phw;
+ 		int len;
+ 
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index 98994fe677fe9..b0358f30947ea 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -562,7 +562,8 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue,
+ 		goto nla_put_failure;
+ 
+ 	if (indev && entskb->dev &&
+-	    skb_mac_header_was_set(entskb)) {
++	    skb_mac_header_was_set(entskb) &&
++	    skb_mac_header_len(entskb) != 0) {
+ 		struct nfqnl_msg_packet_hw phw;
+ 		int len;
+ 
+diff --git a/net/phonet/pep.c b/net/phonet/pep.c
+index a1525916885ae..72018e5e4d8ef 100644
+--- a/net/phonet/pep.c
++++ b/net/phonet/pep.c
+@@ -946,6 +946,8 @@ static int pep_ioctl(struct sock *sk, int cmd, unsigned long arg)
+ 			ret =  -EBUSY;
+ 		else if (sk->sk_state == TCP_ESTABLISHED)
+ 			ret = -EISCONN;
++		else if (!pn->pn_sk.sobject)
++			ret = -EADDRNOTAVAIL;
+ 		else
+ 			ret = pep_sock_enable(sk, NULL, 0);
+ 		release_sock(sk);
+diff --git a/sound/core/jack.c b/sound/core/jack.c
+index 503c8af79d550..d6502dff247a8 100644
+--- a/sound/core/jack.c
++++ b/sound/core/jack.c
+@@ -220,6 +220,10 @@ int snd_jack_new(struct snd_card *card, const char *id, int type,
+ 		return -ENOMEM;
+ 
+ 	jack->id = kstrdup(id, GFP_KERNEL);
++	if (jack->id == NULL) {
++		kfree(jack);
++		return -ENOMEM;
++	}
+ 
+ 	/* don't creat input device for phantom jack */
+ 	if (!phantom_jack) {
+diff --git a/sound/drivers/opl3/opl3_midi.c b/sound/drivers/opl3/opl3_midi.c
+index eb23c55323ae1..a2583a30c45ad 100644
+--- a/sound/drivers/opl3/opl3_midi.c
++++ b/sound/drivers/opl3/opl3_midi.c
+@@ -398,7 +398,7 @@ void snd_opl3_note_on(void *p, int note, int vel, struct snd_midi_channel *chan)
+ 	}
+ 	if (instr_4op) {
+ 		vp2 = &opl3->voices[voice + 3];
+-		if (vp->state > 0) {
++		if (vp2->state > 0) {
+ 			opl3_reg = reg_side | (OPL3_REG_KEYON_BLOCK +
+ 					       voice_offset + 3);
+ 			reg_val = vp->keyon_reg & ~OPL3_KEYON_BIT;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index bed2a93001635..14ce48f1a8e47 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6492,6 +6492,23 @@ static void alc233_fixup_no_audio_jack(struct hda_codec *codec,
+ 	alc_process_coef_fw(codec, alc233_fixup_no_audio_jack_coefs);
+ }
+ 
++static void alc256_fixup_mic_no_presence_and_resume(struct hda_codec *codec,
++						    const struct hda_fixup *fix,
++						    int action)
++{
++	/*
++	 * The Clevo NJ51CU comes either with the ALC293 or the ALC256 codec,
++	 * but uses the 0x8686 subproduct id in both cases. The ALC256 codec
++	 * needs an additional quirk for sound working after suspend and resume.
++	 */
++	if (codec->core.vendor_id == 0x10ec0256) {
++		alc_update_coef_idx(codec, 0x10, 1<<9, 0);
++		snd_hda_codec_set_pincfg(codec, 0x19, 0x04a11120);
++	} else {
++		snd_hda_codec_set_pincfg(codec, 0x1a, 0x04a1113c);
++	}
++}
++
+ enum {
+ 	ALC269_FIXUP_GPIO2,
+ 	ALC269_FIXUP_SONY_VAIO,
+@@ -6711,6 +6728,7 @@ enum {
+ 	ALC256_FIXUP_SET_COEF_DEFAULTS,
+ 	ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+ 	ALC233_FIXUP_NO_AUDIO_JACK,
++	ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8429,6 +8447,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc233_fixup_no_audio_jack,
+ 	},
++	[ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc256_fixup_mic_no_presence_and_resume,
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8599,6 +8623,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
++	SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
+@@ -8766,7 +8791,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x8562, "Clevo NH[5|7][0-9]RZ[Q]", ALC269_FIXUP_DMIC),
+ 	SND_PCI_QUIRK(0x1558, 0x8668, "Clevo NP50B[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8680, "Clevo NJ50LU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME),
+ 	SND_PCI_QUIRK(0x1558, 0x8a20, "Clevo NH55DCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8a51, "Clevo NH70RCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8d50, "Clevo NH55RCQ-M", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -9060,6 +9085,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"},
+ 	{.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"},
+ 	{.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"},
++	{.id = ALC285_FIXUP_HP_GPIO_AMP_INIT, .name = "alc285-hp-amp-init"},
+ 	{}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index 41827cdf26a39..aaef76cc151fa 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -924,6 +924,8 @@ int rt5682_headset_detect(struct snd_soc_component *component, int jack_insert)
+ 	unsigned int val, count;
+ 
+ 	if (jack_insert) {
++		snd_soc_dapm_mutex_lock(dapm);
++
+ 		snd_soc_component_update_bits(component, RT5682_PWR_ANLG_1,
+ 			RT5682_PWR_VREF2 | RT5682_PWR_MB,
+ 			RT5682_PWR_VREF2 | RT5682_PWR_MB);
+@@ -968,6 +970,8 @@ int rt5682_headset_detect(struct snd_soc_component *component, int jack_insert)
+ 		snd_soc_component_update_bits(component, RT5682_MICBIAS_2,
+ 			RT5682_PWR_CLK25M_MASK | RT5682_PWR_CLK1M_MASK,
+ 			RT5682_PWR_CLK25M_PU | RT5682_PWR_CLK1M_PU);
++
++		snd_soc_dapm_mutex_unlock(dapm);
+ 	} else {
+ 		rt5682_enable_push_button_irq(component, false);
+ 		snd_soc_component_update_bits(component, RT5682_CBJ_CTRL_1,
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index a91a0a31e74d1..61c3238bc2656 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -291,11 +291,11 @@ static int tas2770_set_samplerate(struct tas2770_priv *tas2770, int samplerate)
+ 		ramp_rate_val = TAS2770_TDM_CFG_REG0_SMP_44_1KHZ |
+ 				TAS2770_TDM_CFG_REG0_31_88_2_96KHZ;
+ 		break;
+-	case 19200:
++	case 192000:
+ 		ramp_rate_val = TAS2770_TDM_CFG_REG0_SMP_48KHZ |
+ 				TAS2770_TDM_CFG_REG0_31_176_4_192KHZ;
+ 		break;
+-	case 17640:
++	case 176400:
+ 		ramp_rate_val = TAS2770_TDM_CFG_REG0_SMP_44_1KHZ |
+ 				TAS2770_TDM_CFG_REG0_31_176_4_192KHZ;
+ 		break;
+diff --git a/sound/soc/meson/aiu-encoder-i2s.c b/sound/soc/meson/aiu-encoder-i2s.c
+index 9322245521463..67729de41a73e 100644
+--- a/sound/soc/meson/aiu-encoder-i2s.c
++++ b/sound/soc/meson/aiu-encoder-i2s.c
+@@ -18,7 +18,6 @@
+ #define AIU_RST_SOFT_I2S_FAST		BIT(0)
+ 
+ #define AIU_I2S_DAC_CFG_MSB_FIRST	BIT(2)
+-#define AIU_I2S_MISC_HOLD_EN		BIT(2)
+ #define AIU_CLK_CTRL_I2S_DIV_EN		BIT(0)
+ #define AIU_CLK_CTRL_I2S_DIV		GENMASK(3, 2)
+ #define AIU_CLK_CTRL_AOCLK_INVERT	BIT(6)
+@@ -36,37 +35,6 @@ static void aiu_encoder_i2s_divider_enable(struct snd_soc_component *component,
+ 				      enable ? AIU_CLK_CTRL_I2S_DIV_EN : 0);
+ }
+ 
+-static void aiu_encoder_i2s_hold(struct snd_soc_component *component,
+-				 bool enable)
+-{
+-	snd_soc_component_update_bits(component, AIU_I2S_MISC,
+-				      AIU_I2S_MISC_HOLD_EN,
+-				      enable ? AIU_I2S_MISC_HOLD_EN : 0);
+-}
+-
+-static int aiu_encoder_i2s_trigger(struct snd_pcm_substream *substream, int cmd,
+-				   struct snd_soc_dai *dai)
+-{
+-	struct snd_soc_component *component = dai->component;
+-
+-	switch (cmd) {
+-	case SNDRV_PCM_TRIGGER_START:
+-	case SNDRV_PCM_TRIGGER_RESUME:
+-	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+-		aiu_encoder_i2s_hold(component, false);
+-		return 0;
+-
+-	case SNDRV_PCM_TRIGGER_STOP:
+-	case SNDRV_PCM_TRIGGER_SUSPEND:
+-	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+-		aiu_encoder_i2s_hold(component, true);
+-		return 0;
+-
+-	default:
+-		return -EINVAL;
+-	}
+-}
+-
+ static int aiu_encoder_i2s_setup_desc(struct snd_soc_component *component,
+ 				      struct snd_pcm_hw_params *params)
+ {
+@@ -353,7 +321,6 @@ static void aiu_encoder_i2s_shutdown(struct snd_pcm_substream *substream,
+ }
+ 
+ const struct snd_soc_dai_ops aiu_encoder_i2s_dai_ops = {
+-	.trigger	= aiu_encoder_i2s_trigger,
+ 	.hw_params	= aiu_encoder_i2s_hw_params,
+ 	.hw_free	= aiu_encoder_i2s_hw_free,
+ 	.set_fmt	= aiu_encoder_i2s_set_fmt,
+diff --git a/sound/soc/meson/aiu-fifo-i2s.c b/sound/soc/meson/aiu-fifo-i2s.c
+index d91b0d874342f..2cbd127101d35 100644
+--- a/sound/soc/meson/aiu-fifo-i2s.c
++++ b/sound/soc/meson/aiu-fifo-i2s.c
+@@ -20,6 +20,8 @@
+ #define AIU_MEM_I2S_CONTROL_MODE_16BIT	BIT(6)
+ #define AIU_MEM_I2S_BUF_CNTL_INIT	BIT(0)
+ #define AIU_RST_SOFT_I2S_FAST		BIT(0)
++#define AIU_I2S_MISC_HOLD_EN		BIT(2)
++#define AIU_I2S_MISC_FORCE_LEFT_RIGHT	BIT(4)
+ 
+ #define AIU_FIFO_I2S_BLOCK		256
+ 
+@@ -90,6 +92,10 @@ static int aiu_fifo_i2s_hw_params(struct snd_pcm_substream *substream,
+ 	unsigned int val;
+ 	int ret;
+ 
++	snd_soc_component_update_bits(component, AIU_I2S_MISC,
++				      AIU_I2S_MISC_HOLD_EN,
++				      AIU_I2S_MISC_HOLD_EN);
++
+ 	ret = aiu_fifo_hw_params(substream, params, dai);
+ 	if (ret)
+ 		return ret;
+@@ -117,6 +123,19 @@ static int aiu_fifo_i2s_hw_params(struct snd_pcm_substream *substream,
+ 	snd_soc_component_update_bits(component, AIU_MEM_I2S_MASKS,
+ 				      AIU_MEM_I2S_MASKS_IRQ_BLOCK, val);
+ 
++	/*
++	 * Most (all?) supported SoCs have this bit set by default. The vendor
++	 * driver however sets it manually (depending on the version either
++	 * while un-setting AIU_I2S_MISC_HOLD_EN or right before that). Follow
++	 * the same approach for consistency with the vendor driver.
++	 */
++	snd_soc_component_update_bits(component, AIU_I2S_MISC,
++				      AIU_I2S_MISC_FORCE_LEFT_RIGHT,
++				      AIU_I2S_MISC_FORCE_LEFT_RIGHT);
++
++	snd_soc_component_update_bits(component, AIU_I2S_MISC,
++				      AIU_I2S_MISC_HOLD_EN, 0);
++
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/meson/aiu-fifo.c b/sound/soc/meson/aiu-fifo.c
+index aa88aae8e517d..3efc3cad0b4ec 100644
+--- a/sound/soc/meson/aiu-fifo.c
++++ b/sound/soc/meson/aiu-fifo.c
+@@ -5,6 +5,7 @@
+ 
+ #include <linux/bitfield.h>
+ #include <linux/clk.h>
++#include <linux/dma-mapping.h>
+ #include <sound/pcm_params.h>
+ #include <sound/soc.h>
+ #include <sound/soc-dai.h>
+@@ -192,6 +193,11 @@ int aiu_fifo_pcm_new(struct snd_soc_pcm_runtime *rtd,
+ 	struct snd_card *card = rtd->card->snd_card;
+ 	struct aiu_fifo *fifo = dai->playback_dma_data;
+ 	size_t size = fifo->pcm->buffer_bytes_max;
++	int ret;
++
++	ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
++	if (ret)
++		return ret;
+ 
+ 	snd_pcm_lib_preallocate_pages(substream,
+ 				      SNDRV_DMA_TYPE_DEV,


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-01-05 12:53 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-01-05 12:53 UTC (permalink / raw
  To: gentoo-commits

commit:     0efaded8af6a285602c5eafa7a14f16b15c8e93b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jan  5 12:53:35 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jan  5 12:53:35 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0efaded8

Linux patch 5.10.90

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1089_linux-5.10.90.patch | 2487 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2491 insertions(+)

diff --git a/0000_README b/0000_README
index aa52e9d4..46422e5d 100644
--- a/0000_README
+++ b/0000_README
@@ -399,6 +399,10 @@ Patch:  1088_linux-5.10.89.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.89
 
+Patch:  1089_linux-5.10.90.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.90
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1089_linux-5.10.90.patch b/1089_linux-5.10.90.patch
new file mode 100644
index 00000000..30c61e72
--- /dev/null
+++ b/1089_linux-5.10.90.patch
@@ -0,0 +1,2487 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index ccaa72562538e..d00618967854d 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1617,6 +1617,8 @@
+ 			architectures force reset to be always executed
+ 	i8042.unlock	[HW] Unlock (ignore) the keylock
+ 	i8042.kbdreset	[HW] Reset device connected to KBD port
++	i8042.probe_defer
++			[HW] Allow deferred probing upon i8042 probe errors
+ 
+ 	i810=		[HW,DRM]
+ 
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index d4b32cc32bb79..7d5e8a67c775f 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1457,11 +1457,22 @@ unprivileged_bpf_disabled
+ =========================
+ 
+ Writing 1 to this entry will disable unprivileged calls to ``bpf()``;
+-once disabled, calling ``bpf()`` without ``CAP_SYS_ADMIN`` will return
+-``-EPERM``.
++once disabled, calling ``bpf()`` without ``CAP_SYS_ADMIN`` or ``CAP_BPF``
++will return ``-EPERM``. Once set to 1, this can't be cleared from the
++running kernel anymore.
+ 
+-Once set, this can't be cleared.
++Writing 2 to this entry will also disable unprivileged calls to ``bpf()``,
++however, an admin can still change this setting later on, if needed, by
++writing 0 or 1 to this entry.
+ 
++If ``BPF_UNPRIV_DEFAULT_OFF`` is enabled in the kernel config, then this
++entry will default to 2 instead of 0.
++
++= =============================================================
++0 Unprivileged calls to ``bpf()`` are enabled
++1 Unprivileged calls to ``bpf()`` are disabled without recovery
++2 Unprivileged calls to ``bpf()`` are disabled
++= =============================================================
+ 
+ watchdog
+ ========
+diff --git a/Makefile b/Makefile
+index 1500ea340424d..556241a10821f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 89
++SUBLEVEL = 90
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index a52c7abf2ca49..43f56335759a4 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -729,6 +729,8 @@ void notrace handle_interruption(int code, struct pt_regs *regs)
+ 			}
+ 			mmap_read_unlock(current->mm);
+ 		}
++		/* CPU could not fetch instruction, so clear stale IIR value. */
++		regs->iir = 0xbaadf00d;
+ 		fallthrough;
+ 	case 27: 
+ 		/* Data memory protection ID trap */
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 7caf74ad24053..95ca4f934d283 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -662,7 +662,7 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
+ 	BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size);
+ 
+ 	if (buffer->async_transaction) {
+-		alloc->free_async_space += size + sizeof(struct binder_buffer);
++		alloc->free_async_space += buffer_size + sizeof(struct binder_buffer);
+ 
+ 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
+ 			     "%d: binder_free_buf size %zd async free %zd\n",
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index bfb95143ba5e8..ec6bfa316daa3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -372,10 +372,15 @@ int amdgpu_discovery_get_ip_version(struct amdgpu_device *adev, int hw_id,
+ 	return -EINVAL;
+ }
+ 
++union gc_info {
++	struct gc_info_v1_0 v1;
++	struct gc_info_v2_0 v2;
++};
++
+ int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
+ {
+ 	struct binary_header *bhdr;
+-	struct gc_info_v1_0 *gc_info;
++	union gc_info *gc_info;
+ 
+ 	if (!adev->mman.discovery_bin) {
+ 		DRM_ERROR("ip discovery uninitialized\n");
+@@ -383,27 +388,54 @@ int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
+ 	}
+ 
+ 	bhdr = (struct binary_header *)adev->mman.discovery_bin;
+-	gc_info = (struct gc_info_v1_0 *)(adev->mman.discovery_bin +
++	gc_info = (union gc_info *)(adev->mman.discovery_bin +
+ 			le16_to_cpu(bhdr->table_list[GC].offset));
+-
+-	adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->gc_num_se);
+-	adev->gfx.config.max_cu_per_sh = 2 * (le32_to_cpu(gc_info->gc_num_wgp0_per_sa) +
+-					      le32_to_cpu(gc_info->gc_num_wgp1_per_sa));
+-	adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->gc_num_sa_per_se);
+-	adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->gc_num_rb_per_se);
+-	adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->gc_num_gl2c);
+-	adev->gfx.config.max_gprs = le32_to_cpu(gc_info->gc_num_gprs);
+-	adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->gc_num_max_gs_thds);
+-	adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->gc_gs_table_depth);
+-	adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->gc_gsprim_buff_depth);
+-	adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->gc_double_offchip_lds_buffer);
+-	adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->gc_wave_size);
+-	adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->gc_max_waves_per_simd);
+-	adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->gc_max_scratch_slots_per_cu);
+-	adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->gc_lds_size);
+-	adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->gc_num_sc_per_se) /
+-					 le32_to_cpu(gc_info->gc_num_sa_per_se);
+-	adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->gc_num_packer_per_sc);
+-
++	switch (gc_info->v1.header.version_major) {
++	case 1:
++		adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->v1.gc_num_se);
++		adev->gfx.config.max_cu_per_sh = 2 * (le32_to_cpu(gc_info->v1.gc_num_wgp0_per_sa) +
++						      le32_to_cpu(gc_info->v1.gc_num_wgp1_per_sa));
++		adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->v1.gc_num_sa_per_se);
++		adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->v1.gc_num_rb_per_se);
++		adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->v1.gc_num_gl2c);
++		adev->gfx.config.max_gprs = le32_to_cpu(gc_info->v1.gc_num_gprs);
++		adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->v1.gc_num_max_gs_thds);
++		adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->v1.gc_gs_table_depth);
++		adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->v1.gc_gsprim_buff_depth);
++		adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->v1.gc_double_offchip_lds_buffer);
++		adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->v1.gc_wave_size);
++		adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->v1.gc_max_waves_per_simd);
++		adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->v1.gc_max_scratch_slots_per_cu);
++		adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->v1.gc_lds_size);
++		adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->v1.gc_num_sc_per_se) /
++			le32_to_cpu(gc_info->v1.gc_num_sa_per_se);
++		adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->v1.gc_num_packer_per_sc);
++		break;
++	case 2:
++		adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->v2.gc_num_se);
++		adev->gfx.config.max_cu_per_sh = le32_to_cpu(gc_info->v2.gc_num_cu_per_sh);
++		adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->v2.gc_num_sh_per_se);
++		adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->v2.gc_num_rb_per_se);
++		adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->v2.gc_num_tccs);
++		adev->gfx.config.max_gprs = le32_to_cpu(gc_info->v2.gc_num_gprs);
++		adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->v2.gc_num_max_gs_thds);
++		adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->v2.gc_gs_table_depth);
++		adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->v2.gc_gsprim_buff_depth);
++		adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->v2.gc_double_offchip_lds_buffer);
++		adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->v2.gc_wave_size);
++		adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->v2.gc_max_waves_per_simd);
++		adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->v2.gc_max_scratch_slots_per_cu);
++		adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->v2.gc_lds_size);
++		adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->v2.gc_num_sc_per_se) /
++			le32_to_cpu(gc_info->v2.gc_num_sh_per_se);
++		adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->v2.gc_num_packer_per_sc);
++		break;
++	default:
++		dev_err(adev->dev,
++			"Unhandled GC info table %d.%d\n",
++			gc_info->v1.header.version_major,
++			gc_info->v1.header.version_minor);
++		return -EINVAL;
++	}
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+index e8737fa438f06..7115f6dbb1372 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+@@ -254,6 +254,13 @@ static int vcn_v1_0_suspend(void *handle)
+ {
+ 	int r;
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
++	bool idle_work_unexecuted;
++
++	idle_work_unexecuted = cancel_delayed_work_sync(&adev->vcn.idle_work);
++	if (idle_work_unexecuted) {
++		if (adev->pm.dpm_enabled)
++			amdgpu_dpm_enable_uvd(adev, false);
++	}
+ 
+ 	r = vcn_v1_0_hw_fini(adev);
+ 	if (r)
+diff --git a/drivers/gpu/drm/amd/include/discovery.h b/drivers/gpu/drm/amd/include/discovery.h
+index 7ec4331e67f26..a486769b66c6a 100644
+--- a/drivers/gpu/drm/amd/include/discovery.h
++++ b/drivers/gpu/drm/amd/include/discovery.h
+@@ -143,6 +143,55 @@ struct gc_info_v1_0 {
+ 	uint32_t gc_num_gl2a;
+ };
+ 
++struct gc_info_v1_1 {
++	struct gpu_info_header header;
++
++	uint32_t gc_num_se;
++	uint32_t gc_num_wgp0_per_sa;
++	uint32_t gc_num_wgp1_per_sa;
++	uint32_t gc_num_rb_per_se;
++	uint32_t gc_num_gl2c;
++	uint32_t gc_num_gprs;
++	uint32_t gc_num_max_gs_thds;
++	uint32_t gc_gs_table_depth;
++	uint32_t gc_gsprim_buff_depth;
++	uint32_t gc_parameter_cache_depth;
++	uint32_t gc_double_offchip_lds_buffer;
++	uint32_t gc_wave_size;
++	uint32_t gc_max_waves_per_simd;
++	uint32_t gc_max_scratch_slots_per_cu;
++	uint32_t gc_lds_size;
++	uint32_t gc_num_sc_per_se;
++	uint32_t gc_num_sa_per_se;
++	uint32_t gc_num_packer_per_sc;
++	uint32_t gc_num_gl2a;
++	uint32_t gc_num_tcp_per_sa;
++	uint32_t gc_num_sdp_interface;
++	uint32_t gc_num_tcps;
++};
++
++struct gc_info_v2_0 {
++	struct gpu_info_header header;
++
++	uint32_t gc_num_se;
++	uint32_t gc_num_cu_per_sh;
++	uint32_t gc_num_sh_per_se;
++	uint32_t gc_num_rb_per_se;
++	uint32_t gc_num_tccs;
++	uint32_t gc_num_gprs;
++	uint32_t gc_num_max_gs_thds;
++	uint32_t gc_gs_table_depth;
++	uint32_t gc_gsprim_buff_depth;
++	uint32_t gc_parameter_cache_depth;
++	uint32_t gc_double_offchip_lds_buffer;
++	uint32_t gc_wave_size;
++	uint32_t gc_max_waves_per_simd;
++	uint32_t gc_max_scratch_slots_per_cu;
++	uint32_t gc_lds_size;
++	uint32_t gc_num_sc_per_se;
++	uint32_t gc_num_packer_per_sc;
++};
++
+ typedef struct harvest_info_header {
+ 	uint32_t signature; /* Table Signature */
+ 	uint32_t version;   /* Table Version */
+diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
+index f358120d59b38..dafad891998ec 100644
+--- a/drivers/i2c/i2c-dev.c
++++ b/drivers/i2c/i2c-dev.c
+@@ -536,6 +536,9 @@ static long compat_i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned lo
+ 				   sizeof(rdwr_arg)))
+ 			return -EFAULT;
+ 
++		if (!rdwr_arg.msgs || rdwr_arg.nmsgs == 0)
++			return -EINVAL;
++
+ 		if (rdwr_arg.nmsgs > I2C_RDWR_IOCTL_MAX_MSGS)
+ 			return -EINVAL;
+ 
+diff --git a/drivers/input/joystick/spaceball.c b/drivers/input/joystick/spaceball.c
+index 429411c6c0a8e..a85a4f33aea8c 100644
+--- a/drivers/input/joystick/spaceball.c
++++ b/drivers/input/joystick/spaceball.c
+@@ -19,6 +19,7 @@
+ #include <linux/module.h>
+ #include <linux/input.h>
+ #include <linux/serio.h>
++#include <asm/unaligned.h>
+ 
+ #define DRIVER_DESC	"SpaceTec SpaceBall 2003/3003/4000 FLX driver"
+ 
+@@ -75,9 +76,15 @@ static void spaceball_process_packet(struct spaceball* spaceball)
+ 
+ 		case 'D':					/* Ball data */
+ 			if (spaceball->idx != 15) return;
+-			for (i = 0; i < 6; i++)
++			/*
++			 * Skip first three bytes; read six axes worth of data.
++			 * Axis values are signed 16-bit big-endian.
++			 */
++			data += 3;
++			for (i = 0; i < ARRAY_SIZE(spaceball_axes); i++) {
+ 				input_report_abs(dev, spaceball_axes[i],
+-					(__s16)((data[2 * i + 3] << 8) | data[2 * i + 2]));
++					(__s16)get_unaligned_be16(&data[i * 2]));
++			}
+ 			break;
+ 
+ 		case 'K':					/* Button data */
+diff --git a/drivers/input/mouse/appletouch.c b/drivers/input/mouse/appletouch.c
+index bfa26651c0be7..627048bc6a12e 100644
+--- a/drivers/input/mouse/appletouch.c
++++ b/drivers/input/mouse/appletouch.c
+@@ -916,6 +916,8 @@ static int atp_probe(struct usb_interface *iface,
+ 	set_bit(BTN_TOOL_TRIPLETAP, input_dev->keybit);
+ 	set_bit(BTN_LEFT, input_dev->keybit);
+ 
++	INIT_WORK(&dev->work, atp_reinit);
++
+ 	error = input_register_device(dev->input);
+ 	if (error)
+ 		goto err_free_buffer;
+@@ -923,8 +925,6 @@ static int atp_probe(struct usb_interface *iface,
+ 	/* save our data pointer in this interface device */
+ 	usb_set_intfdata(iface, dev);
+ 
+-	INIT_WORK(&dev->work, atp_reinit);
+-
+ 	return 0;
+ 
+  err_free_buffer:
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index aedd055410443..148a7c5fd0e22 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -995,6 +995,24 @@ static const struct dmi_system_id __initconst i8042_dmi_kbdreset_table[] = {
+ 	{ }
+ };
+ 
++static const struct dmi_system_id i8042_dmi_probe_defer_table[] __initconst = {
++	{
++		/* ASUS ZenBook UX425UA */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX425UA"),
++		},
++	},
++	{
++		/* ASUS ZenBook UM325UA */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),
++		},
++	},
++	{ }
++};
++
+ #endif /* CONFIG_X86 */
+ 
+ #ifdef CONFIG_PNP
+@@ -1315,6 +1333,9 @@ static int __init i8042_platform_init(void)
+ 	if (dmi_check_system(i8042_dmi_kbdreset_table))
+ 		i8042_kbdreset = true;
+ 
++	if (dmi_check_system(i8042_dmi_probe_defer_table))
++		i8042_probe_defer = true;
++
+ 	/*
+ 	 * A20 was already enabled during early kernel init. But some buggy
+ 	 * BIOSes (in MSI Laptops) require A20 to be enabled using 8042 to
+diff --git a/drivers/input/serio/i8042.c b/drivers/input/serio/i8042.c
+index abae23af0791e..a9f68f535b727 100644
+--- a/drivers/input/serio/i8042.c
++++ b/drivers/input/serio/i8042.c
+@@ -45,6 +45,10 @@ static bool i8042_unlock;
+ module_param_named(unlock, i8042_unlock, bool, 0);
+ MODULE_PARM_DESC(unlock, "Ignore keyboard lock.");
+ 
++static bool i8042_probe_defer;
++module_param_named(probe_defer, i8042_probe_defer, bool, 0);
++MODULE_PARM_DESC(probe_defer, "Allow deferred probing.");
++
+ enum i8042_controller_reset_mode {
+ 	I8042_RESET_NEVER,
+ 	I8042_RESET_ALWAYS,
+@@ -711,7 +715,7 @@ static int i8042_set_mux_mode(bool multiplex, unsigned char *mux_version)
+  * LCS/Telegraphics.
+  */
+ 
+-static int __init i8042_check_mux(void)
++static int i8042_check_mux(void)
+ {
+ 	unsigned char mux_version;
+ 
+@@ -740,10 +744,10 @@ static int __init i8042_check_mux(void)
+ /*
+  * The following is used to test AUX IRQ delivery.
+  */
+-static struct completion i8042_aux_irq_delivered __initdata;
+-static bool i8042_irq_being_tested __initdata;
++static struct completion i8042_aux_irq_delivered;
++static bool i8042_irq_being_tested;
+ 
+-static irqreturn_t __init i8042_aux_test_irq(int irq, void *dev_id)
++static irqreturn_t i8042_aux_test_irq(int irq, void *dev_id)
+ {
+ 	unsigned long flags;
+ 	unsigned char str, data;
+@@ -770,7 +774,7 @@ static irqreturn_t __init i8042_aux_test_irq(int irq, void *dev_id)
+  * verifies success by readinng CTR. Used when testing for presence of AUX
+  * port.
+  */
+-static int __init i8042_toggle_aux(bool on)
++static int i8042_toggle_aux(bool on)
+ {
+ 	unsigned char param;
+ 	int i;
+@@ -798,7 +802,7 @@ static int __init i8042_toggle_aux(bool on)
+  * the presence of an AUX interface.
+  */
+ 
+-static int __init i8042_check_aux(void)
++static int i8042_check_aux(void)
+ {
+ 	int retval = -1;
+ 	bool irq_registered = false;
+@@ -1005,7 +1009,7 @@ static int i8042_controller_init(void)
+ 
+ 		if (i8042_command(&ctr[n++ % 2], I8042_CMD_CTL_RCTR)) {
+ 			pr_err("Can't read CTR while initializing i8042\n");
+-			return -EIO;
++			return i8042_probe_defer ? -EPROBE_DEFER : -EIO;
+ 		}
+ 
+ 	} while (n < 2 || ctr[0] != ctr[1]);
+@@ -1320,7 +1324,7 @@ static void i8042_shutdown(struct platform_device *dev)
+ 	i8042_controller_reset(false);
+ }
+ 
+-static int __init i8042_create_kbd_port(void)
++static int i8042_create_kbd_port(void)
+ {
+ 	struct serio *serio;
+ 	struct i8042_port *port = &i8042_ports[I8042_KBD_PORT_NO];
+@@ -1349,7 +1353,7 @@ static int __init i8042_create_kbd_port(void)
+ 	return 0;
+ }
+ 
+-static int __init i8042_create_aux_port(int idx)
++static int i8042_create_aux_port(int idx)
+ {
+ 	struct serio *serio;
+ 	int port_no = idx < 0 ? I8042_AUX_PORT_NO : I8042_MUX_PORT_NO + idx;
+@@ -1386,13 +1390,13 @@ static int __init i8042_create_aux_port(int idx)
+ 	return 0;
+ }
+ 
+-static void __init i8042_free_kbd_port(void)
++static void i8042_free_kbd_port(void)
+ {
+ 	kfree(i8042_ports[I8042_KBD_PORT_NO].serio);
+ 	i8042_ports[I8042_KBD_PORT_NO].serio = NULL;
+ }
+ 
+-static void __init i8042_free_aux_ports(void)
++static void i8042_free_aux_ports(void)
+ {
+ 	int i;
+ 
+@@ -1402,7 +1406,7 @@ static void __init i8042_free_aux_ports(void)
+ 	}
+ }
+ 
+-static void __init i8042_register_ports(void)
++static void i8042_register_ports(void)
+ {
+ 	int i;
+ 
+@@ -1443,7 +1447,7 @@ static void i8042_free_irqs(void)
+ 	i8042_aux_irq_registered = i8042_kbd_irq_registered = false;
+ }
+ 
+-static int __init i8042_setup_aux(void)
++static int i8042_setup_aux(void)
+ {
+ 	int (*aux_enable)(void);
+ 	int error;
+@@ -1485,7 +1489,7 @@ static int __init i8042_setup_aux(void)
+ 	return error;
+ }
+ 
+-static int __init i8042_setup_kbd(void)
++static int i8042_setup_kbd(void)
+ {
+ 	int error;
+ 
+@@ -1535,7 +1539,7 @@ static int i8042_kbd_bind_notifier(struct notifier_block *nb,
+ 	return 0;
+ }
+ 
+-static int __init i8042_probe(struct platform_device *dev)
++static int i8042_probe(struct platform_device *dev)
+ {
+ 	int error;
+ 
+@@ -1600,6 +1604,7 @@ static struct platform_driver i8042_driver = {
+ 		.pm	= &i8042_pm_ops,
+ #endif
+ 	},
++	.probe		= i8042_probe,
+ 	.remove		= i8042_remove,
+ 	.shutdown	= i8042_shutdown,
+ };
+@@ -1610,7 +1615,6 @@ static struct notifier_block i8042_kbd_bind_notifier_block = {
+ 
+ static int __init i8042_init(void)
+ {
+-	struct platform_device *pdev;
+ 	int err;
+ 
+ 	dbg_init();
+@@ -1626,17 +1630,29 @@ static int __init i8042_init(void)
+ 	/* Set this before creating the dev to allow i8042_command to work right away */
+ 	i8042_present = true;
+ 
+-	pdev = platform_create_bundle(&i8042_driver, i8042_probe, NULL, 0, NULL, 0);
+-	if (IS_ERR(pdev)) {
+-		err = PTR_ERR(pdev);
++	err = platform_driver_register(&i8042_driver);
++	if (err)
+ 		goto err_platform_exit;
++
++	i8042_platform_device = platform_device_alloc("i8042", -1);
++	if (!i8042_platform_device) {
++		err = -ENOMEM;
++		goto err_unregister_driver;
+ 	}
+ 
++	err = platform_device_add(i8042_platform_device);
++	if (err)
++		goto err_free_device;
++
+ 	bus_register_notifier(&serio_bus, &i8042_kbd_bind_notifier_block);
+ 	panic_blink = i8042_panic_blink;
+ 
+ 	return 0;
+ 
++err_free_device:
++	platform_device_put(i8042_platform_device);
++err_unregister_driver:
++	platform_driver_unregister(&i8042_driver);
+  err_platform_exit:
+ 	i8042_platform_exit();
+ 	return err;
+diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c
+index a60ce90305819..c26c9b0c00d8f 100644
+--- a/drivers/net/ethernet/atheros/ag71xx.c
++++ b/drivers/net/ethernet/atheros/ag71xx.c
+@@ -1904,15 +1904,12 @@ static int ag71xx_probe(struct platform_device *pdev)
+ 	ag->mac_reset = devm_reset_control_get(&pdev->dev, "mac");
+ 	if (IS_ERR(ag->mac_reset)) {
+ 		netif_err(ag, probe, ndev, "missing mac reset\n");
+-		err = PTR_ERR(ag->mac_reset);
+-		goto err_free;
++		return PTR_ERR(ag->mac_reset);
+ 	}
+ 
+ 	ag->mac_base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
+-	if (!ag->mac_base) {
+-		err = -ENOMEM;
+-		goto err_free;
+-	}
++	if (!ag->mac_base)
++		return -ENOMEM;
+ 
+ 	ndev->irq = platform_get_irq(pdev, 0);
+ 	err = devm_request_irq(&pdev->dev, ndev->irq, ag71xx_interrupt,
+@@ -1920,7 +1917,7 @@ static int ag71xx_probe(struct platform_device *pdev)
+ 	if (err) {
+ 		netif_err(ag, probe, ndev, "unable to request IRQ %d\n",
+ 			  ndev->irq);
+-		goto err_free;
++		return err;
+ 	}
+ 
+ 	ndev->netdev_ops = &ag71xx_netdev_ops;
+@@ -1948,10 +1945,8 @@ static int ag71xx_probe(struct platform_device *pdev)
+ 	ag->stop_desc = dmam_alloc_coherent(&pdev->dev,
+ 					    sizeof(struct ag71xx_desc),
+ 					    &ag->stop_desc_dma, GFP_KERNEL);
+-	if (!ag->stop_desc) {
+-		err = -ENOMEM;
+-		goto err_free;
+-	}
++	if (!ag->stop_desc)
++		return -ENOMEM;
+ 
+ 	ag->stop_desc->data = 0;
+ 	ag->stop_desc->ctrl = 0;
+@@ -1968,7 +1963,7 @@ static int ag71xx_probe(struct platform_device *pdev)
+ 	err = of_get_phy_mode(np, &ag->phy_if_mode);
+ 	if (err) {
+ 		netif_err(ag, probe, ndev, "missing phy-mode property in DT\n");
+-		goto err_free;
++		return err;
+ 	}
+ 
+ 	netif_napi_add(ndev, &ag->napi, ag71xx_poll, AG71XX_NAPI_WEIGHT);
+@@ -1976,7 +1971,7 @@ static int ag71xx_probe(struct platform_device *pdev)
+ 	err = clk_prepare_enable(ag->clk_eth);
+ 	if (err) {
+ 		netif_err(ag, probe, ndev, "Failed to enable eth clk.\n");
+-		goto err_free;
++		return err;
+ 	}
+ 
+ 	ag71xx_wr(ag, AG71XX_REG_MAC_CFG1, 0);
+@@ -2012,8 +2007,6 @@ err_mdio_remove:
+ 	ag71xx_mdio_remove(ag);
+ err_put_clk:
+ 	clk_disable_unprepare(ag->clk_eth);
+-err_free:
+-	free_netdev(ndev);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/fman/fman_port.c b/drivers/net/ethernet/freescale/fman/fman_port.c
+index d9baac0dbc7d0..4c9d05c45c033 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_port.c
++++ b/drivers/net/ethernet/freescale/fman/fman_port.c
+@@ -1805,7 +1805,7 @@ static int fman_port_probe(struct platform_device *of_dev)
+ 	fman = dev_get_drvdata(&fm_pdev->dev);
+ 	if (!fman) {
+ 		err = -EINVAL;
+-		goto return_err;
++		goto put_device;
+ 	}
+ 
+ 	err = of_property_read_u32(port_node, "cell-index", &val);
+@@ -1813,7 +1813,7 @@ static int fman_port_probe(struct platform_device *of_dev)
+ 		dev_err(port->dev, "%s: reading cell-index for %pOF failed\n",
+ 			__func__, port_node);
+ 		err = -EINVAL;
+-		goto return_err;
++		goto put_device;
+ 	}
+ 	port_id = (u8)val;
+ 	port->dts_params.id = port_id;
+@@ -1847,7 +1847,7 @@ static int fman_port_probe(struct platform_device *of_dev)
+ 	}  else {
+ 		dev_err(port->dev, "%s: Illegal port type\n", __func__);
+ 		err = -EINVAL;
+-		goto return_err;
++		goto put_device;
+ 	}
+ 
+ 	port->dts_params.type = port_type;
+@@ -1861,7 +1861,7 @@ static int fman_port_probe(struct platform_device *of_dev)
+ 			dev_err(port->dev, "%s: incorrect qman-channel-id\n",
+ 				__func__);
+ 			err = -EINVAL;
+-			goto return_err;
++			goto put_device;
+ 		}
+ 		port->dts_params.qman_channel_id = qman_channel_id;
+ 	}
+@@ -1871,7 +1871,7 @@ static int fman_port_probe(struct platform_device *of_dev)
+ 		dev_err(port->dev, "%s: of_address_to_resource() failed\n",
+ 			__func__);
+ 		err = -ENOMEM;
+-		goto return_err;
++		goto put_device;
+ 	}
+ 
+ 	port->dts_params.fman = fman;
+@@ -1896,6 +1896,8 @@ static int fman_port_probe(struct platform_device *of_dev)
+ 
+ 	return 0;
+ 
++put_device:
++	put_device(&fm_pdev->dev);
+ return_err:
+ 	of_node_put(port_node);
+ free_port:
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index cae090a072524..61cebb7df6bcb 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -4422,6 +4422,9 @@ static irqreturn_t igc_intr_msi(int irq, void *data)
+ 			mod_timer(&adapter->watchdog_timer, jiffies + 1);
+ 	}
+ 
++	if (icr & IGC_ICR_TS)
++		igc_tsync_interrupt(adapter);
++
+ 	napi_schedule(&q_vector->napi);
+ 
+ 	return IRQ_HANDLED;
+@@ -4465,6 +4468,9 @@ static irqreturn_t igc_intr(int irq, void *data)
+ 			mod_timer(&adapter->watchdog_timer, jiffies + 1);
+ 	}
+ 
++	if (icr & IGC_ICR_TS)
++		igc_tsync_interrupt(adapter);
++
+ 	napi_schedule(&q_vector->napi);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/net/ethernet/lantiq_xrx200.c b/drivers/net/ethernet/lantiq_xrx200.c
+index 072075bc60ee9..500511b72ac60 100644
+--- a/drivers/net/ethernet/lantiq_xrx200.c
++++ b/drivers/net/ethernet/lantiq_xrx200.c
+@@ -209,7 +209,7 @@ static int xrx200_hw_receive(struct xrx200_chan *ch)
+ 	skb->protocol = eth_type_trans(skb, net_dev);
+ 	netif_receive_skb(skb);
+ 	net_dev->stats.rx_packets++;
+-	net_dev->stats.rx_bytes += len - ETH_FCS_LEN;
++	net_dev->stats.rx_bytes += len;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 9da34f82d4668..73060b30fece3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -916,9 +916,6 @@ void mlx5e_deactivate_rq(struct mlx5e_rq *rq);
+ void mlx5e_close_rq(struct mlx5e_rq *rq);
+ 
+ struct mlx5e_sq_param;
+-int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params,
+-		     struct mlx5e_sq_param *param, struct mlx5e_icosq *sq);
+-void mlx5e_close_icosq(struct mlx5e_icosq *sq);
+ int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_params *params,
+ 		     struct mlx5e_sq_param *param, struct xsk_buff_pool *xsk_pool,
+ 		     struct mlx5e_xdpsq *sq, bool is_redirect);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+index 8be6eaa3eeb14..13dd34c571b9f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+@@ -335,6 +335,14 @@ static int mlx5e_tx_reporter_dump_sq(struct mlx5e_priv *priv, struct devlink_fms
+ 	return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ }
+ 
++static int mlx5e_tx_reporter_timeout_dump(struct mlx5e_priv *priv, struct devlink_fmsg *fmsg,
++					  void *ctx)
++{
++	struct mlx5e_tx_timeout_ctx *to_ctx = ctx;
++
++	return mlx5e_tx_reporter_dump_sq(priv, fmsg, to_ctx->sq);
++}
++
+ static int mlx5e_tx_reporter_dump_all_sqs(struct mlx5e_priv *priv,
+ 					  struct devlink_fmsg *fmsg)
+ {
+@@ -418,7 +426,7 @@ int mlx5e_reporter_tx_timeout(struct mlx5e_txqsq *sq)
+ 	to_ctx.sq = sq;
+ 	err_ctx.ctx = &to_ctx;
+ 	err_ctx.recover = mlx5e_tx_reporter_timeout_recover;
+-	err_ctx.dump = mlx5e_tx_reporter_dump_sq;
++	err_ctx.dump = mlx5e_tx_reporter_timeout_dump;
+ 	snprintf(err_str, sizeof(err_str),
+ 		 "TX timeout on queue: %d, SQ: 0x%x, CQ: 0x%x, SQ Cons: 0x%x SQ Prod: 0x%x, usecs since last trans: %u",
+ 		 sq->channel->ix, sq->sqn, sq->cq.mcq.cqn, sq->cc, sq->pc,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 6ec4b96497ffb..2f6c3a5813ed1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -1051,9 +1051,20 @@ static void mlx5e_icosq_err_cqe_work(struct work_struct *recover_work)
+ 	mlx5e_reporter_icosq_cqe_err(sq);
+ }
+ 
++static void mlx5e_async_icosq_err_cqe_work(struct work_struct *recover_work)
++{
++	struct mlx5e_icosq *sq = container_of(recover_work, struct mlx5e_icosq,
++					      recover_work);
++
++	/* Not implemented yet. */
++
++	netdev_warn(sq->channel->netdev, "async_icosq recovery is not implemented\n");
++}
++
+ static int mlx5e_alloc_icosq(struct mlx5e_channel *c,
+ 			     struct mlx5e_sq_param *param,
+-			     struct mlx5e_icosq *sq)
++			     struct mlx5e_icosq *sq,
++			     work_func_t recover_work_func)
+ {
+ 	void *sqc_wq               = MLX5_ADDR_OF(sqc, param->sqc, wq);
+ 	struct mlx5_core_dev *mdev = c->mdev;
+@@ -1073,7 +1084,7 @@ static int mlx5e_alloc_icosq(struct mlx5e_channel *c,
+ 	if (err)
+ 		goto err_sq_wq_destroy;
+ 
+-	INIT_WORK(&sq->recover_work, mlx5e_icosq_err_cqe_work);
++	INIT_WORK(&sq->recover_work, recover_work_func);
+ 
+ 	return 0;
+ 
+@@ -1423,13 +1434,14 @@ static void mlx5e_tx_err_cqe_work(struct work_struct *recover_work)
+ 	mlx5e_reporter_tx_err_cqe(sq);
+ }
+ 
+-int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params,
+-		     struct mlx5e_sq_param *param, struct mlx5e_icosq *sq)
++static int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params,
++			    struct mlx5e_sq_param *param, struct mlx5e_icosq *sq,
++			    work_func_t recover_work_func)
+ {
+ 	struct mlx5e_create_sq_param csp = {};
+ 	int err;
+ 
+-	err = mlx5e_alloc_icosq(c, param, sq);
++	err = mlx5e_alloc_icosq(c, param, sq, recover_work_func);
+ 	if (err)
+ 		return err;
+ 
+@@ -1459,7 +1471,7 @@ void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq)
+ 	synchronize_net(); /* Sync with NAPI. */
+ }
+ 
+-void mlx5e_close_icosq(struct mlx5e_icosq *sq)
++static void mlx5e_close_icosq(struct mlx5e_icosq *sq)
+ {
+ 	struct mlx5e_channel *c = sq->channel;
+ 
+@@ -1862,11 +1874,13 @@ static int mlx5e_open_queues(struct mlx5e_channel *c,
+ 
+ 	spin_lock_init(&c->async_icosq_lock);
+ 
+-	err = mlx5e_open_icosq(c, params, &cparam->async_icosq, &c->async_icosq);
++	err = mlx5e_open_icosq(c, params, &cparam->async_icosq, &c->async_icosq,
++			       mlx5e_async_icosq_err_cqe_work);
+ 	if (err)
+ 		goto err_disable_napi;
+ 
+-	err = mlx5e_open_icosq(c, params, &cparam->icosq, &c->icosq);
++	err = mlx5e_open_icosq(c, params, &cparam->icosq, &c->icosq,
++			       mlx5e_icosq_err_cqe_work);
+ 	if (err)
+ 		goto err_close_async_icosq;
+ 
+@@ -3921,12 +3935,11 @@ static int set_feature_arfs(struct net_device *netdev, bool enable)
+ 
+ static int mlx5e_handle_feature(struct net_device *netdev,
+ 				netdev_features_t *features,
+-				netdev_features_t wanted_features,
+ 				netdev_features_t feature,
+ 				mlx5e_feature_handler feature_handler)
+ {
+-	netdev_features_t changes = wanted_features ^ netdev->features;
+-	bool enable = !!(wanted_features & feature);
++	netdev_features_t changes = *features ^ netdev->features;
++	bool enable = !!(*features & feature);
+ 	int err;
+ 
+ 	if (!(changes & feature))
+@@ -3934,22 +3947,22 @@ static int mlx5e_handle_feature(struct net_device *netdev,
+ 
+ 	err = feature_handler(netdev, enable);
+ 	if (err) {
++		MLX5E_SET_FEATURE(features, feature, !enable);
+ 		netdev_err(netdev, "%s feature %pNF failed, err %d\n",
+ 			   enable ? "Enable" : "Disable", &feature, err);
+ 		return err;
+ 	}
+ 
+-	MLX5E_SET_FEATURE(features, feature, enable);
+ 	return 0;
+ }
+ 
+ int mlx5e_set_features(struct net_device *netdev, netdev_features_t features)
+ {
+-	netdev_features_t oper_features = netdev->features;
++	netdev_features_t oper_features = features;
+ 	int err = 0;
+ 
+ #define MLX5E_HANDLE_FEATURE(feature, handler) \
+-	mlx5e_handle_feature(netdev, &oper_features, features, feature, handler)
++	mlx5e_handle_feature(netdev, &oper_features, feature, handler)
+ 
+ 	err |= MLX5E_HANDLE_FEATURE(NETIF_F_LRO, set_feature_lro);
+ 	err |= MLX5E_HANDLE_FEATURE(NETIF_F_HW_VLAN_CTAG_FILTER,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
+index 00d861361428f..16a7c7ec5e138 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
+@@ -2,6 +2,7 @@
+ /* Copyright (c) 2019 Mellanox Technologies. */
+ 
+ #include <linux/mlx5/eswitch.h>
++#include <linux/err.h>
+ #include "dr_types.h"
+ 
+ #define DR_DOMAIN_SW_STEERING_SUPPORTED(dmn, dmn_type)	\
+@@ -69,9 +70,9 @@ static int dr_domain_init_resources(struct mlx5dr_domain *dmn)
+ 	}
+ 
+ 	dmn->uar = mlx5_get_uars_page(dmn->mdev);
+-	if (!dmn->uar) {
++	if (IS_ERR(dmn->uar)) {
+ 		mlx5dr_err(dmn, "Couldn't allocate UAR\n");
+-		ret = -ENOMEM;
++		ret = PTR_ERR(dmn->uar);
+ 		goto clean_pd;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 1b44155fa24b2..e95c09dc2c30d 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -2836,7 +2836,7 @@ int ionic_lif_init(struct ionic_lif *lif)
+ 		return -EINVAL;
+ 	}
+ 
+-	lif->dbid_inuse = bitmap_alloc(lif->dbid_count, GFP_KERNEL);
++	lif->dbid_inuse = bitmap_zalloc(lif->dbid_count, GFP_KERNEL);
+ 	if (!lif->dbid_inuse) {
+ 		dev_err(dev, "Failed alloc doorbell id bitmap, aborting\n");
+ 		return -ENOMEM;
+diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c
+index 2a748a924f838..138279bbb544b 100644
+--- a/drivers/net/usb/pegasus.c
++++ b/drivers/net/usb/pegasus.c
+@@ -518,11 +518,11 @@ static void read_bulk_callback(struct urb *urb)
+ 		goto goon;
+ 
+ 	rx_status = buf[count - 2];
+-	if (rx_status & 0x1e) {
++	if (rx_status & 0x1c) {
+ 		netif_dbg(pegasus, rx_err, net,
+ 			  "RX packet error %x\n", rx_status);
+ 		net->stats.rx_errors++;
+-		if (rx_status & 0x06)	/* long or runt	*/
++		if (rx_status & 0x04)	/* runt	*/
+ 			net->stats.rx_length_errors++;
+ 		if (rx_status & 0x08)
+ 			net->stats.rx_crc_errors++;
+diff --git a/drivers/nfc/st21nfca/i2c.c b/drivers/nfc/st21nfca/i2c.c
+index 23ed11f91213d..6ea59426ab0bf 100644
+--- a/drivers/nfc/st21nfca/i2c.c
++++ b/drivers/nfc/st21nfca/i2c.c
+@@ -533,7 +533,8 @@ static int st21nfca_hci_i2c_probe(struct i2c_client *client,
+ 	phy->gpiod_ena = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW);
+ 	if (IS_ERR(phy->gpiod_ena)) {
+ 		nfc_err(dev, "Unable to get ENABLE GPIO\n");
+-		return PTR_ERR(phy->gpiod_ena);
++		r = PTR_ERR(phy->gpiod_ena);
++		goto out_free;
+ 	}
+ 
+ 	phy->se_status.is_ese_present =
+@@ -544,7 +545,7 @@ static int st21nfca_hci_i2c_probe(struct i2c_client *client,
+ 	r = st21nfca_hci_platform_init(phy);
+ 	if (r < 0) {
+ 		nfc_err(&client->dev, "Unable to reboot st21nfca\n");
+-		return r;
++		goto out_free;
+ 	}
+ 
+ 	r = devm_request_threaded_irq(&client->dev, client->irq, NULL,
+@@ -553,15 +554,23 @@ static int st21nfca_hci_i2c_probe(struct i2c_client *client,
+ 				ST21NFCA_HCI_DRIVER_NAME, phy);
+ 	if (r < 0) {
+ 		nfc_err(&client->dev, "Unable to register IRQ handler\n");
+-		return r;
++		goto out_free;
+ 	}
+ 
+-	return st21nfca_hci_probe(phy, &i2c_phy_ops, LLC_SHDLC_NAME,
+-					ST21NFCA_FRAME_HEADROOM,
+-					ST21NFCA_FRAME_TAILROOM,
+-					ST21NFCA_HCI_LLC_MAX_PAYLOAD,
+-					&phy->hdev,
+-					&phy->se_status);
++	r = st21nfca_hci_probe(phy, &i2c_phy_ops, LLC_SHDLC_NAME,
++			       ST21NFCA_FRAME_HEADROOM,
++			       ST21NFCA_FRAME_TAILROOM,
++			       ST21NFCA_HCI_LLC_MAX_PAYLOAD,
++			       &phy->hdev,
++			       &phy->se_status);
++	if (r)
++		goto out_free;
++
++	return 0;
++
++out_free:
++	kfree_skb(phy->pending_skb);
++	return r;
+ }
+ 
+ static int st21nfca_hci_i2c_remove(struct i2c_client *client)
+@@ -574,6 +583,8 @@ static int st21nfca_hci_i2c_remove(struct i2c_client *client)
+ 
+ 	if (phy->powered)
+ 		st21nfca_hci_i2c_disable(phy);
++	if (phy->pending_skb)
++		kfree_skb(phy->pending_skb);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/platform/x86/apple-gmux.c b/drivers/platform/x86/apple-gmux.c
+index 9aae45a452002..57553f9b4d1dc 100644
+--- a/drivers/platform/x86/apple-gmux.c
++++ b/drivers/platform/x86/apple-gmux.c
+@@ -625,7 +625,7 @@ static int gmux_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
+ 	}
+ 
+ 	gmux_data->iostart = res->start;
+-	gmux_data->iolen = res->end - res->start;
++	gmux_data->iolen = resource_size(res);
+ 
+ 	if (gmux_data->iolen < GMUX_MIN_IO_LEN) {
+ 		pr_err("gmux I/O region too small (%lu < %u)\n",
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index b89c5513243e8..beaf3a8d206f8 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -2956,8 +2956,8 @@ lpfc_debugfs_nvmeio_trc_write(struct file *file, const char __user *buf,
+ 	char mybuf[64];
+ 	char *pbuf;
+ 
+-	if (nbytes > 64)
+-		nbytes = 64;
++	if (nbytes > 63)
++		nbytes = 63;
+ 
+ 	memset(mybuf, 0, sizeof(mybuf));
+ 
+diff --git a/drivers/scsi/vmw_pvscsi.c b/drivers/scsi/vmw_pvscsi.c
+index 1421b1394d816..7d51ff4672d75 100644
+--- a/drivers/scsi/vmw_pvscsi.c
++++ b/drivers/scsi/vmw_pvscsi.c
+@@ -591,9 +591,12 @@ static void pvscsi_complete_request(struct pvscsi_adapter *adapter,
+ 			 * Commands like INQUIRY may transfer less data than
+ 			 * requested by the initiator via bufflen. Set residual
+ 			 * count to make upper layer aware of the actual amount
+-			 * of data returned.
++			 * of data returned. There are cases when controller
++			 * returns zero dataLen with non zero data - do not set
++			 * residual count in that case.
+ 			 */
+-			scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen);
++			if (e->dataLen && (e->dataLen < scsi_bufflen(cmd)))
++				scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen);
+ 			cmd->result = (DID_OK << 16);
+ 			break;
+ 
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 725e35167837e..cbb7947f366f9 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1772,11 +1772,15 @@ static void ffs_data_clear(struct ffs_data *ffs)
+ 
+ 	BUG_ON(ffs->gadget);
+ 
+-	if (ffs->epfiles)
++	if (ffs->epfiles) {
+ 		ffs_epfiles_destroy(ffs->epfiles, ffs->eps_count);
++		ffs->epfiles = NULL;
++	}
+ 
+-	if (ffs->ffs_eventfd)
++	if (ffs->ffs_eventfd) {
+ 		eventfd_ctx_put(ffs->ffs_eventfd);
++		ffs->ffs_eventfd = NULL;
++	}
+ 
+ 	kfree(ffs->raw_descs_data);
+ 	kfree(ffs->raw_strings);
+@@ -1789,7 +1793,6 @@ static void ffs_data_reset(struct ffs_data *ffs)
+ 
+ 	ffs_data_clear(ffs);
+ 
+-	ffs->epfiles = NULL;
+ 	ffs->raw_descs_data = NULL;
+ 	ffs->raw_descs = NULL;
+ 	ffs->raw_strings = NULL;
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index c9133df71e52b..dafb58f05c9fb 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -122,7 +122,6 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	/* Look for vendor-specific quirks */
+ 	if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC &&
+ 			(pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK ||
+-			 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 ||
+ 			 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1400)) {
+ 		if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK &&
+ 				pdev->revision == 0x0) {
+@@ -157,6 +156,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 			pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1009)
+ 		xhci->quirks |= XHCI_BROKEN_STREAMS;
+ 
++	if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC &&
++			pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1100)
++		xhci->quirks |= XHCI_TRUST_TX_LENGTH;
++
+ 	if (pdev->vendor == PCI_VENDOR_ID_NEC)
+ 		xhci->quirks |= XHCI_NEC_HOST;
+ 
+diff --git a/drivers/usb/mtu3/mtu3_gadget.c b/drivers/usb/mtu3/mtu3_gadget.c
+index 0b3aa7c65857a..a3e1105c5c662 100644
+--- a/drivers/usb/mtu3/mtu3_gadget.c
++++ b/drivers/usb/mtu3/mtu3_gadget.c
+@@ -92,6 +92,13 @@ static int mtu3_ep_enable(struct mtu3_ep *mep)
+ 			interval = clamp_val(interval, 1, 16) - 1;
+ 			mult = usb_endpoint_maxp_mult(desc) - 1;
+ 		}
++		break;
++	case USB_SPEED_FULL:
++		if (usb_endpoint_xfer_isoc(desc))
++			interval = clamp_val(desc->bInterval, 1, 16);
++		else if (usb_endpoint_xfer_int(desc))
++			interval = clamp_val(desc->bInterval, 1, 255);
++
+ 		break;
+ 	default:
+ 		break; /*others are ignored */
+@@ -235,6 +242,7 @@ struct usb_request *mtu3_alloc_request(struct usb_ep *ep, gfp_t gfp_flags)
+ 	mreq->request.dma = DMA_ADDR_INVALID;
+ 	mreq->epnum = mep->epnum;
+ 	mreq->mep = mep;
++	INIT_LIST_HEAD(&mreq->list);
+ 	trace_mtu3_alloc_request(mreq);
+ 
+ 	return &mreq->request;
+diff --git a/drivers/usb/mtu3/mtu3_qmu.c b/drivers/usb/mtu3/mtu3_qmu.c
+index 3f414f91b5899..2ea3157ddb6e2 100644
+--- a/drivers/usb/mtu3/mtu3_qmu.c
++++ b/drivers/usb/mtu3/mtu3_qmu.c
+@@ -273,6 +273,8 @@ static int mtu3_prepare_tx_gpd(struct mtu3_ep *mep, struct mtu3_request *mreq)
+ 			gpd->dw3_info |= cpu_to_le32(GPD_EXT_FLAG_ZLP);
+ 	}
+ 
++	/* prevent reorder, make sure GPD's HWO is set last */
++	mb();
+ 	gpd->dw0_info |= cpu_to_le32(GPD_FLAGS_IOC | GPD_FLAGS_HWO);
+ 
+ 	mreq->gpd = gpd;
+@@ -306,6 +308,8 @@ static int mtu3_prepare_rx_gpd(struct mtu3_ep *mep, struct mtu3_request *mreq)
+ 	gpd->next_gpd = cpu_to_le32(lower_32_bits(enq_dma));
+ 	ext_addr |= GPD_EXT_NGP(mtu, upper_32_bits(enq_dma));
+ 	gpd->dw3_info = cpu_to_le32(ext_addr);
++	/* prevent reorder, make sure GPD's HWO is set last */
++	mb();
+ 	gpd->dw0_info |= cpu_to_le32(GPD_FLAGS_IOC | GPD_FLAGS_HWO);
+ 
+ 	mreq->gpd = gpd;
+@@ -445,7 +449,8 @@ static void qmu_tx_zlp_error_handler(struct mtu3 *mtu, u8 epnum)
+ 		return;
+ 	}
+ 	mtu3_setbits(mbase, MU3D_EP_TXCR0(mep->epnum), TX_TXPKTRDY);
+-
++	/* prevent reorder, make sure GPD's HWO is set last */
++	mb();
+ 	/* by pass the current GDP */
+ 	gpd_current->dw0_info |= cpu_to_le32(GPD_FLAGS_BPS | GPD_FLAGS_HWO);
+ 
+diff --git a/include/linux/memblock.h b/include/linux/memblock.h
+index 1a8d25f2e0412..3baea2ef33fbb 100644
+--- a/include/linux/memblock.h
++++ b/include/linux/memblock.h
+@@ -387,8 +387,8 @@ phys_addr_t memblock_alloc_range_nid(phys_addr_t size,
+ 				      phys_addr_t end, int nid, bool exact_nid);
+ phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid);
+ 
+-static inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
+-					      phys_addr_t align)
++static __always_inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
++						       phys_addr_t align)
+ {
+ 	return memblock_phys_alloc_range(size, align, 0,
+ 					 MEMBLOCK_ALLOC_ACCESSIBLE);
+diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h
+index 4fc747b778eb6..33475d061823e 100644
+--- a/include/net/sctp/sctp.h
++++ b/include/net/sctp/sctp.h
+@@ -103,6 +103,7 @@ extern struct percpu_counter sctp_sockets_allocated;
+ int sctp_asconf_mgmt(struct sctp_sock *, struct sctp_sockaddr_entry *);
+ struct sk_buff *sctp_skb_recv_datagram(struct sock *, int, int, int *);
+ 
++typedef int (*sctp_callback_t)(struct sctp_endpoint *, struct sctp_transport *, void *);
+ void sctp_transport_walk_start(struct rhashtable_iter *iter);
+ void sctp_transport_walk_stop(struct rhashtable_iter *iter);
+ struct sctp_transport *sctp_transport_get_next(struct net *net,
+@@ -113,9 +114,8 @@ int sctp_transport_lookup_process(int (*cb)(struct sctp_transport *, void *),
+ 				  struct net *net,
+ 				  const union sctp_addr *laddr,
+ 				  const union sctp_addr *paddr, void *p);
+-int sctp_for_each_transport(int (*cb)(struct sctp_transport *, void *),
+-			    int (*cb_done)(struct sctp_transport *, void *),
+-			    struct net *net, int *pos, void *p);
++int sctp_transport_traverse_process(sctp_callback_t cb, sctp_callback_t cb_done,
++				    struct net *net, int *pos, void *p);
+ int sctp_for_each_endpoint(int (*cb)(struct sctp_endpoint *, void *), void *p);
+ int sctp_get_sctp_info(struct sock *sk, struct sctp_association *asoc,
+ 		       struct sctp_info *info);
+diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
+index 51d698f2656fc..be9ff0422c162 100644
+--- a/include/net/sctp/structs.h
++++ b/include/net/sctp/structs.h
+@@ -1339,6 +1339,7 @@ struct sctp_endpoint {
+ 
+ 	u32 secid;
+ 	u32 peer_secid;
++	struct rcu_head rcu;
+ };
+ 
+ /* Recover the outter endpoint structure. */
+@@ -1354,7 +1355,7 @@ static inline struct sctp_endpoint *sctp_ep(struct sctp_ep_common *base)
+ struct sctp_endpoint *sctp_endpoint_new(struct sock *, gfp_t);
+ void sctp_endpoint_free(struct sctp_endpoint *);
+ void sctp_endpoint_put(struct sctp_endpoint *);
+-void sctp_endpoint_hold(struct sctp_endpoint *);
++int sctp_endpoint_hold(struct sctp_endpoint *ep);
+ void sctp_endpoint_add_asoc(struct sctp_endpoint *, struct sctp_association *);
+ struct sctp_association *sctp_endpoint_lookup_assoc(
+ 	const struct sctp_endpoint *ep,
+diff --git a/include/uapi/linux/nfc.h b/include/uapi/linux/nfc.h
+index f6e3c8c9c7449..4fa4e979e948a 100644
+--- a/include/uapi/linux/nfc.h
++++ b/include/uapi/linux/nfc.h
+@@ -263,7 +263,7 @@ enum nfc_sdp_attr {
+ #define NFC_SE_ENABLED  0x1
+ 
+ struct sockaddr_nfc {
+-	sa_family_t sa_family;
++	__kernel_sa_family_t sa_family;
+ 	__u32 dev_idx;
+ 	__u32 target_idx;
+ 	__u32 nfc_protocol;
+@@ -271,14 +271,14 @@ struct sockaddr_nfc {
+ 
+ #define NFC_LLCP_MAX_SERVICE_NAME 63
+ struct sockaddr_nfc_llcp {
+-	sa_family_t sa_family;
++	__kernel_sa_family_t sa_family;
+ 	__u32 dev_idx;
+ 	__u32 target_idx;
+ 	__u32 nfc_protocol;
+ 	__u8 dsap; /* Destination SAP, if known */
+ 	__u8 ssap; /* Source SAP to be bound to */
+ 	char service_name[NFC_LLCP_MAX_SERVICE_NAME]; /* Service name URI */;
+-	size_t service_name_len;
++	__kernel_size_t service_name_len;
+ };
+ 
+ /* NFC socket protocols */
+diff --git a/init/Kconfig b/init/Kconfig
+index fc4c9f416fadb..13685bffef370 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -1722,6 +1722,16 @@ config BPF_JIT_DEFAULT_ON
+ 	def_bool ARCH_WANT_DEFAULT_BPF_JIT || BPF_JIT_ALWAYS_ON
+ 	depends on HAVE_EBPF_JIT && BPF_JIT
+ 
++config BPF_UNPRIV_DEFAULT_OFF
++	bool "Disable unprivileged BPF by default"
++	depends on BPF_SYSCALL
++	help
++	  Disables unprivileged BPF by default by setting the corresponding
++	  /proc/sys/kernel/unprivileged_bpf_disabled knob to 2. An admin can
++	  still reenable it by setting it to 0 later on, or permanently
++	  disable it by setting it to 1 (from which no other transition to
++	  0 is possible anymore).
++
+ source "kernel/bpf/preload/Kconfig"
+ 
+ config USERFAULTFD
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index bb9a9cb1f321e..209e6567cdab0 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -50,7 +50,8 @@ static DEFINE_SPINLOCK(map_idr_lock);
+ static DEFINE_IDR(link_idr);
+ static DEFINE_SPINLOCK(link_idr_lock);
+ 
+-int sysctl_unprivileged_bpf_disabled __read_mostly;
++int sysctl_unprivileged_bpf_disabled __read_mostly =
++	IS_BUILTIN(CONFIG_BPF_UNPRIV_DEFAULT_OFF) ? 2 : 0;
+ 
+ static const struct bpf_map_ops * const bpf_map_types[] = {
+ #define BPF_PROG_TYPE(_id, _name, prog_ctx_type, kern_ctx_type)
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index b9306d2bb4269..72ceb19574d0c 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -233,7 +233,27 @@ static int bpf_stats_handler(struct ctl_table *table, int write,
+ 	mutex_unlock(&bpf_stats_enabled_mutex);
+ 	return ret;
+ }
+-#endif
++
++static int bpf_unpriv_handler(struct ctl_table *table, int write,
++			      void *buffer, size_t *lenp, loff_t *ppos)
++{
++	int ret, unpriv_enable = *(int *)table->data;
++	bool locked_state = unpriv_enable == 1;
++	struct ctl_table tmp = *table;
++
++	if (write && !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
++	tmp.data = &unpriv_enable;
++	ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos);
++	if (write && !ret) {
++		if (locked_state && unpriv_enable != 1)
++			return -EPERM;
++		*(int *)table->data = unpriv_enable;
++	}
++	return ret;
++}
++#endif /* CONFIG_BPF_SYSCALL && CONFIG_SYSCTL */
+ 
+ /*
+  * /proc/sys support
+@@ -2626,10 +2646,9 @@ static struct ctl_table kern_table[] = {
+ 		.data		= &sysctl_unprivileged_bpf_disabled,
+ 		.maxlen		= sizeof(sysctl_unprivileged_bpf_disabled),
+ 		.mode		= 0644,
+-		/* only handle a transition from default "0" to "1" */
+-		.proc_handler	= proc_dointvec_minmax,
+-		.extra1		= SYSCTL_ONE,
+-		.extra2		= SYSCTL_ONE,
++		.proc_handler	= bpf_unpriv_handler,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= &two,
+ 	},
+ 	{
+ 		.procname	= "bpf_stats_enabled",
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 8267349afe231..e2f85a16fad9b 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -2003,6 +2003,10 @@ static int __init inet_init(void)
+ 
+ 	ip_init();
+ 
++	/* Initialise per-cpu ipv4 mibs */
++	if (init_ipv4_mibs())
++		panic("%s: Cannot init ipv4 mibs\n", __func__);
++
+ 	/* Setup TCP slab cache for open requests. */
+ 	tcp_init();
+ 
+@@ -2033,12 +2037,6 @@ static int __init inet_init(void)
+ 
+ 	if (init_inet_pernet_ops())
+ 		pr_crit("%s: Cannot init ipv4 inet pernet ops\n", __func__);
+-	/*
+-	 *	Initialise per-cpu ipv4 mibs
+-	 */
+-
+-	if (init_ipv4_mibs())
+-		pr_crit("%s: Cannot init ipv4 mibs\n", __func__);
+ 
+ 	ipv4_proc_init();
+ 
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 8a1863146f34c..069551a04369e 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1189,7 +1189,7 @@ static int udp_v6_send_skb(struct sk_buff *skb, struct flowi6 *fl6,
+ 			kfree_skb(skb);
+ 			return -EINVAL;
+ 		}
+-		if (skb->len > cork->gso_size * UDP_MAX_SEGMENTS) {
++		if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) {
+ 			kfree_skb(skb);
+ 			return -EINVAL;
+ 		}
+diff --git a/net/ncsi/ncsi-netlink.c b/net/ncsi/ncsi-netlink.c
+index bb5f1650f11cb..c189b4c8a1823 100644
+--- a/net/ncsi/ncsi-netlink.c
++++ b/net/ncsi/ncsi-netlink.c
+@@ -112,7 +112,11 @@ static int ncsi_write_package_info(struct sk_buff *skb,
+ 		pnest = nla_nest_start_noflag(skb, NCSI_PKG_ATTR);
+ 		if (!pnest)
+ 			return -ENOMEM;
+-		nla_put_u32(skb, NCSI_PKG_ATTR_ID, np->id);
++		rc = nla_put_u32(skb, NCSI_PKG_ATTR_ID, np->id);
++		if (rc) {
++			nla_nest_cancel(skb, pnest);
++			return rc;
++		}
+ 		if ((0x1 << np->id) == ndp->package_whitelist)
+ 			nla_put_flag(skb, NCSI_PKG_ATTR_FORCED);
+ 		cnest = nla_nest_start_noflag(skb, NCSI_PKG_ATTR_CHANNEL_LIST);
+diff --git a/net/sctp/diag.c b/net/sctp/diag.c
+index 493fc01e5d2b7..babadd6720a2b 100644
+--- a/net/sctp/diag.c
++++ b/net/sctp/diag.c
+@@ -292,9 +292,8 @@ out:
+ 	return err;
+ }
+ 
+-static int sctp_sock_dump(struct sctp_transport *tsp, void *p)
++static int sctp_sock_dump(struct sctp_endpoint *ep, struct sctp_transport *tsp, void *p)
+ {
+-	struct sctp_endpoint *ep = tsp->asoc->ep;
+ 	struct sctp_comm_param *commp = p;
+ 	struct sock *sk = ep->base.sk;
+ 	struct sk_buff *skb = commp->skb;
+@@ -304,6 +303,8 @@ static int sctp_sock_dump(struct sctp_transport *tsp, void *p)
+ 	int err = 0;
+ 
+ 	lock_sock(sk);
++	if (ep != tsp->asoc->ep)
++		goto release;
+ 	list_for_each_entry(assoc, &ep->asocs, asocs) {
+ 		if (cb->args[4] < cb->args[1])
+ 			goto next;
+@@ -346,9 +347,8 @@ release:
+ 	return err;
+ }
+ 
+-static int sctp_sock_filter(struct sctp_transport *tsp, void *p)
++static int sctp_sock_filter(struct sctp_endpoint *ep, struct sctp_transport *tsp, void *p)
+ {
+-	struct sctp_endpoint *ep = tsp->asoc->ep;
+ 	struct sctp_comm_param *commp = p;
+ 	struct sock *sk = ep->base.sk;
+ 	const struct inet_diag_req_v2 *r = commp->r;
+@@ -507,8 +507,8 @@ skip:
+ 	if (!(idiag_states & ~(TCPF_LISTEN | TCPF_CLOSE)))
+ 		goto done;
+ 
+-	sctp_for_each_transport(sctp_sock_filter, sctp_sock_dump,
+-				net, &pos, &commp);
++	sctp_transport_traverse_process(sctp_sock_filter, sctp_sock_dump,
++					net, &pos, &commp);
+ 	cb->args[2] = pos;
+ 
+ done:
+diff --git a/net/sctp/endpointola.c b/net/sctp/endpointola.c
+index 48c9c2c7602f7..efffde7f2328e 100644
+--- a/net/sctp/endpointola.c
++++ b/net/sctp/endpointola.c
+@@ -184,6 +184,18 @@ void sctp_endpoint_free(struct sctp_endpoint *ep)
+ }
+ 
+ /* Final destructor for endpoint.  */
++static void sctp_endpoint_destroy_rcu(struct rcu_head *head)
++{
++	struct sctp_endpoint *ep = container_of(head, struct sctp_endpoint, rcu);
++	struct sock *sk = ep->base.sk;
++
++	sctp_sk(sk)->ep = NULL;
++	sock_put(sk);
++
++	kfree(ep);
++	SCTP_DBG_OBJCNT_DEC(ep);
++}
++
+ static void sctp_endpoint_destroy(struct sctp_endpoint *ep)
+ {
+ 	struct sock *sk;
+@@ -213,18 +225,13 @@ static void sctp_endpoint_destroy(struct sctp_endpoint *ep)
+ 	if (sctp_sk(sk)->bind_hash)
+ 		sctp_put_port(sk);
+ 
+-	sctp_sk(sk)->ep = NULL;
+-	/* Give up our hold on the sock */
+-	sock_put(sk);
+-
+-	kfree(ep);
+-	SCTP_DBG_OBJCNT_DEC(ep);
++	call_rcu(&ep->rcu, sctp_endpoint_destroy_rcu);
+ }
+ 
+ /* Hold a reference to an endpoint. */
+-void sctp_endpoint_hold(struct sctp_endpoint *ep)
++int sctp_endpoint_hold(struct sctp_endpoint *ep)
+ {
+-	refcount_inc(&ep->base.refcnt);
++	return refcount_inc_not_zero(&ep->base.refcnt);
+ }
+ 
+ /* Release a reference to an endpoint and clean up if there are
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index e872bc50bbe61..0a9e2c7d8e5f5 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -5223,11 +5223,12 @@ int sctp_transport_lookup_process(int (*cb)(struct sctp_transport *, void *),
+ }
+ EXPORT_SYMBOL_GPL(sctp_transport_lookup_process);
+ 
+-int sctp_for_each_transport(int (*cb)(struct sctp_transport *, void *),
+-			    int (*cb_done)(struct sctp_transport *, void *),
+-			    struct net *net, int *pos, void *p) {
++int sctp_transport_traverse_process(sctp_callback_t cb, sctp_callback_t cb_done,
++				    struct net *net, int *pos, void *p)
++{
+ 	struct rhashtable_iter hti;
+ 	struct sctp_transport *tsp;
++	struct sctp_endpoint *ep;
+ 	int ret;
+ 
+ again:
+@@ -5236,26 +5237,32 @@ again:
+ 
+ 	tsp = sctp_transport_get_idx(net, &hti, *pos + 1);
+ 	for (; !IS_ERR_OR_NULL(tsp); tsp = sctp_transport_get_next(net, &hti)) {
+-		ret = cb(tsp, p);
+-		if (ret)
+-			break;
++		ep = tsp->asoc->ep;
++		if (sctp_endpoint_hold(ep)) { /* asoc can be peeled off */
++			ret = cb(ep, tsp, p);
++			if (ret)
++				break;
++			sctp_endpoint_put(ep);
++		}
+ 		(*pos)++;
+ 		sctp_transport_put(tsp);
+ 	}
+ 	sctp_transport_walk_stop(&hti);
+ 
+ 	if (ret) {
+-		if (cb_done && !cb_done(tsp, p)) {
++		if (cb_done && !cb_done(ep, tsp, p)) {
+ 			(*pos)++;
++			sctp_endpoint_put(ep);
+ 			sctp_transport_put(tsp);
+ 			goto again;
+ 		}
++		sctp_endpoint_put(ep);
+ 		sctp_transport_put(tsp);
+ 	}
+ 
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(sctp_for_each_transport);
++EXPORT_SYMBOL_GPL(sctp_transport_traverse_process);
+ 
+ /* 7.2.1 Association Status (SCTP_STATUS)
+ 
+diff --git a/net/smc/smc.h b/net/smc/smc.h
+index d65e15f0c944c..e6919fe31617b 100644
+--- a/net/smc/smc.h
++++ b/net/smc/smc.h
+@@ -170,6 +170,11 @@ struct smc_connection {
+ 	u16			tx_cdc_seq;	/* sequence # for CDC send */
+ 	u16			tx_cdc_seq_fin;	/* sequence # - tx completed */
+ 	spinlock_t		send_lock;	/* protect wr_sends */
++	atomic_t		cdc_pend_tx_wr; /* number of pending tx CDC wqe
++						 * - inc when post wqe,
++						 * - dec on polled tx cqe
++						 */
++	wait_queue_head_t	cdc_pend_tx_wq; /* wakeup on no cdc_pend_tx_wr*/
+ 	struct delayed_work	tx_work;	/* retry of smc_cdc_msg_send */
+ 	u32			tx_off;		/* base offset in peer rmb */
+ 
+diff --git a/net/smc/smc_cdc.c b/net/smc/smc_cdc.c
+index b1ce6ccbfaec8..0c490cdde6a49 100644
+--- a/net/smc/smc_cdc.c
++++ b/net/smc/smc_cdc.c
+@@ -31,10 +31,6 @@ static void smc_cdc_tx_handler(struct smc_wr_tx_pend_priv *pnd_snd,
+ 	struct smc_sock *smc;
+ 	int diff;
+ 
+-	if (!conn)
+-		/* already dismissed */
+-		return;
+-
+ 	smc = container_of(conn, struct smc_sock, conn);
+ 	bh_lock_sock(&smc->sk);
+ 	if (!wc_status) {
+@@ -51,6 +47,12 @@ static void smc_cdc_tx_handler(struct smc_wr_tx_pend_priv *pnd_snd,
+ 			      conn);
+ 		conn->tx_cdc_seq_fin = cdcpend->ctrl_seq;
+ 	}
++
++	if (atomic_dec_and_test(&conn->cdc_pend_tx_wr) &&
++	    unlikely(wq_has_sleeper(&conn->cdc_pend_tx_wq)))
++		wake_up(&conn->cdc_pend_tx_wq);
++	WARN_ON(atomic_read(&conn->cdc_pend_tx_wr) < 0);
++
+ 	smc_tx_sndbuf_nonfull(smc);
+ 	bh_unlock_sock(&smc->sk);
+ }
+@@ -107,6 +109,10 @@ int smc_cdc_msg_send(struct smc_connection *conn,
+ 	conn->tx_cdc_seq++;
+ 	conn->local_tx_ctrl.seqno = conn->tx_cdc_seq;
+ 	smc_host_msg_to_cdc((struct smc_cdc_msg *)wr_buf, conn, &cfed);
++
++	atomic_inc(&conn->cdc_pend_tx_wr);
++	smp_mb__after_atomic(); /* Make sure cdc_pend_tx_wr added before post */
++
+ 	rc = smc_wr_tx_send(link, (struct smc_wr_tx_pend_priv *)pend);
+ 	if (!rc) {
+ 		smc_curs_copy(&conn->rx_curs_confirmed, &cfed, conn);
+@@ -114,6 +120,7 @@ int smc_cdc_msg_send(struct smc_connection *conn,
+ 	} else {
+ 		conn->tx_cdc_seq--;
+ 		conn->local_tx_ctrl.seqno = conn->tx_cdc_seq;
++		atomic_dec(&conn->cdc_pend_tx_wr);
+ 	}
+ 
+ 	return rc;
+@@ -136,7 +143,18 @@ int smcr_cdc_msg_send_validation(struct smc_connection *conn,
+ 	peer->token = htonl(local->token);
+ 	peer->prod_flags.failover_validation = 1;
+ 
++	/* We need to set pend->conn here to make sure smc_cdc_tx_handler()
++	 * can handle properly
++	 */
++	smc_cdc_add_pending_send(conn, pend);
++
++	atomic_inc(&conn->cdc_pend_tx_wr);
++	smp_mb__after_atomic(); /* Make sure cdc_pend_tx_wr added before post */
++
+ 	rc = smc_wr_tx_send(link, (struct smc_wr_tx_pend_priv *)pend);
++	if (unlikely(rc))
++		atomic_dec(&conn->cdc_pend_tx_wr);
++
+ 	return rc;
+ }
+ 
+@@ -150,9 +168,11 @@ static int smcr_cdc_get_slot_and_msg_send(struct smc_connection *conn)
+ 
+ again:
+ 	link = conn->lnk;
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_cdc_get_free_slot(conn, link, &wr_buf, NULL, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 
+ 	spin_lock_bh(&conn->send_lock);
+ 	if (link != conn->lnk) {
+@@ -160,6 +180,7 @@ again:
+ 		spin_unlock_bh(&conn->send_lock);
+ 		smc_wr_tx_put_slot(link,
+ 				   (struct smc_wr_tx_pend_priv *)pend);
++		smc_wr_tx_link_put(link);
+ 		if (again)
+ 			return -ENOLINK;
+ 		again = true;
+@@ -167,6 +188,8 @@ again:
+ 	}
+ 	rc = smc_cdc_msg_send(conn, wr_buf, pend);
+ 	spin_unlock_bh(&conn->send_lock);
++put_out:
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+@@ -188,31 +211,9 @@ int smc_cdc_get_slot_and_msg_send(struct smc_connection *conn)
+ 	return rc;
+ }
+ 
+-static bool smc_cdc_tx_filter(struct smc_wr_tx_pend_priv *tx_pend,
+-			      unsigned long data)
++void smc_cdc_wait_pend_tx_wr(struct smc_connection *conn)
+ {
+-	struct smc_connection *conn = (struct smc_connection *)data;
+-	struct smc_cdc_tx_pend *cdc_pend =
+-		(struct smc_cdc_tx_pend *)tx_pend;
+-
+-	return cdc_pend->conn == conn;
+-}
+-
+-static void smc_cdc_tx_dismisser(struct smc_wr_tx_pend_priv *tx_pend)
+-{
+-	struct smc_cdc_tx_pend *cdc_pend =
+-		(struct smc_cdc_tx_pend *)tx_pend;
+-
+-	cdc_pend->conn = NULL;
+-}
+-
+-void smc_cdc_tx_dismiss_slots(struct smc_connection *conn)
+-{
+-	struct smc_link *link = conn->lnk;
+-
+-	smc_wr_tx_dismiss_slots(link, SMC_CDC_MSG_TYPE,
+-				smc_cdc_tx_filter, smc_cdc_tx_dismisser,
+-				(unsigned long)conn);
++	wait_event(conn->cdc_pend_tx_wq, !atomic_read(&conn->cdc_pend_tx_wr));
+ }
+ 
+ /* Send a SMC-D CDC header.
+diff --git a/net/smc/smc_cdc.h b/net/smc/smc_cdc.h
+index 0a0a89abd38b2..696cc11f2303b 100644
+--- a/net/smc/smc_cdc.h
++++ b/net/smc/smc_cdc.h
+@@ -291,7 +291,7 @@ int smc_cdc_get_free_slot(struct smc_connection *conn,
+ 			  struct smc_wr_buf **wr_buf,
+ 			  struct smc_rdma_wr **wr_rdma_buf,
+ 			  struct smc_cdc_tx_pend **pend);
+-void smc_cdc_tx_dismiss_slots(struct smc_connection *conn);
++void smc_cdc_wait_pend_tx_wr(struct smc_connection *conn);
+ int smc_cdc_msg_send(struct smc_connection *conn, struct smc_wr_buf *wr_buf,
+ 		     struct smc_cdc_tx_pend *pend);
+ int smc_cdc_get_slot_and_msg_send(struct smc_connection *conn);
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index 3f1343dfa16ba..2a22dc85951ee 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -226,7 +226,7 @@ static void smcr_lgr_link_deactivate_all(struct smc_link_group *lgr)
+ 	for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
+ 		struct smc_link *lnk = &lgr->lnk[i];
+ 
+-		if (smc_link_usable(lnk))
++		if (smc_link_sendable(lnk))
+ 			lnk->state = SMC_LNK_INACTIVE;
+ 	}
+ 	wake_up_all(&lgr->llc_msg_waiter);
+@@ -550,7 +550,7 @@ struct smc_link *smc_switch_conns(struct smc_link_group *lgr,
+ 		to_lnk = &lgr->lnk[i];
+ 		break;
+ 	}
+-	if (!to_lnk) {
++	if (!to_lnk || !smc_wr_tx_link_hold(to_lnk)) {
+ 		smc_lgr_terminate_sched(lgr);
+ 		return NULL;
+ 	}
+@@ -582,24 +582,26 @@ again:
+ 		read_unlock_bh(&lgr->conns_lock);
+ 		/* pre-fetch buffer outside of send_lock, might sleep */
+ 		rc = smc_cdc_get_free_slot(conn, to_lnk, &wr_buf, NULL, &pend);
+-		if (rc) {
+-			smcr_link_down_cond_sched(to_lnk);
+-			return NULL;
+-		}
++		if (rc)
++			goto err_out;
+ 		/* avoid race with smcr_tx_sndbuf_nonempty() */
+ 		spin_lock_bh(&conn->send_lock);
+ 		conn->lnk = to_lnk;
+ 		rc = smc_switch_cursor(smc, pend, wr_buf);
+ 		spin_unlock_bh(&conn->send_lock);
+ 		sock_put(&smc->sk);
+-		if (rc) {
+-			smcr_link_down_cond_sched(to_lnk);
+-			return NULL;
+-		}
++		if (rc)
++			goto err_out;
+ 		goto again;
+ 	}
+ 	read_unlock_bh(&lgr->conns_lock);
++	smc_wr_tx_link_put(to_lnk);
+ 	return to_lnk;
++
++err_out:
++	smcr_link_down_cond_sched(to_lnk);
++	smc_wr_tx_link_put(to_lnk);
++	return NULL;
+ }
+ 
+ static void smcr_buf_unuse(struct smc_buf_desc *rmb_desc,
+@@ -655,7 +657,7 @@ void smc_conn_free(struct smc_connection *conn)
+ 			smc_ism_unset_conn(conn);
+ 		tasklet_kill(&conn->rx_tsklet);
+ 	} else {
+-		smc_cdc_tx_dismiss_slots(conn);
++		smc_cdc_wait_pend_tx_wr(conn);
+ 		if (current_work() != &conn->abort_work)
+ 			cancel_work_sync(&conn->abort_work);
+ 	}
+@@ -732,7 +734,7 @@ void smcr_link_clear(struct smc_link *lnk, bool log)
+ 	smc_llc_link_clear(lnk, log);
+ 	smcr_buf_unmap_lgr(lnk);
+ 	smcr_rtoken_clear_link(lnk);
+-	smc_ib_modify_qp_reset(lnk);
++	smc_ib_modify_qp_error(lnk);
+ 	smc_wr_free_link(lnk);
+ 	smc_ib_destroy_queue_pair(lnk);
+ 	smc_ib_dealloc_protection_domain(lnk);
+@@ -876,7 +878,7 @@ static void smc_conn_kill(struct smc_connection *conn, bool soft)
+ 		else
+ 			tasklet_unlock_wait(&conn->rx_tsklet);
+ 	} else {
+-		smc_cdc_tx_dismiss_slots(conn);
++		smc_cdc_wait_pend_tx_wr(conn);
+ 	}
+ 	smc_lgr_unregister_conn(conn);
+ 	smc_close_active_abort(smc);
+@@ -1000,11 +1002,16 @@ void smc_smcd_terminate_all(struct smcd_dev *smcd)
+ /* Called when an SMCR device is removed or the smc module is unloaded.
+  * If smcibdev is given, all SMCR link groups using this device are terminated.
+  * If smcibdev is NULL, all SMCR link groups are terminated.
++ *
++ * We must wait here for QPs been destroyed before we destroy the CQs,
++ * or we won't received any CQEs and cdc_pend_tx_wr cannot reach 0 thus
++ * smc_sock cannot be released.
+  */
+ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
+ {
+ 	struct smc_link_group *lgr, *lg;
+ 	LIST_HEAD(lgr_free_list);
++	LIST_HEAD(lgr_linkdown_list);
+ 	int i;
+ 
+ 	spin_lock_bh(&smc_lgr_list.lock);
+@@ -1016,7 +1023,7 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
+ 		list_for_each_entry_safe(lgr, lg, &smc_lgr_list.list, list) {
+ 			for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
+ 				if (lgr->lnk[i].smcibdev == smcibdev)
+-					smcr_link_down_cond_sched(&lgr->lnk[i]);
++					list_move_tail(&lgr->list, &lgr_linkdown_list);
+ 			}
+ 		}
+ 	}
+@@ -1028,6 +1035,16 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
+ 		__smc_lgr_terminate(lgr, false);
+ 	}
+ 
++	list_for_each_entry_safe(lgr, lg, &lgr_linkdown_list, list) {
++		for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
++			if (lgr->lnk[i].smcibdev == smcibdev) {
++				mutex_lock(&lgr->llc_conf_mutex);
++				smcr_link_down_cond(&lgr->lnk[i]);
++				mutex_unlock(&lgr->llc_conf_mutex);
++			}
++		}
++	}
++
+ 	if (smcibdev) {
+ 		if (atomic_read(&smcibdev->lnk_cnt))
+ 			wait_event(smcibdev->lnks_deleted,
+@@ -1127,7 +1144,6 @@ static void smcr_link_down(struct smc_link *lnk)
+ 	if (!lgr || lnk->state == SMC_LNK_UNUSED || list_empty(&lgr->list))
+ 		return;
+ 
+-	smc_ib_modify_qp_reset(lnk);
+ 	to_lnk = smc_switch_conns(lgr, lnk, true);
+ 	if (!to_lnk) { /* no backup link available */
+ 		smcr_link_clear(lnk, true);
+@@ -1355,6 +1371,7 @@ create:
+ 	conn->local_tx_ctrl.common.type = SMC_CDC_MSG_TYPE;
+ 	conn->local_tx_ctrl.len = SMC_WR_TX_SIZE;
+ 	conn->urg_state = SMC_URG_READ;
++	init_waitqueue_head(&conn->cdc_pend_tx_wq);
+ 	INIT_WORK(&smc->conn.abort_work, smc_conn_abort_work);
+ 	if (ini->is_smcd) {
+ 		conn->rx_off = sizeof(struct smcd_cdc_msg);
+diff --git a/net/smc/smc_core.h b/net/smc/smc_core.h
+index 4745a9a5a28f5..9364d0f35ccec 100644
+--- a/net/smc/smc_core.h
++++ b/net/smc/smc_core.h
+@@ -359,6 +359,12 @@ static inline bool smc_link_usable(struct smc_link *lnk)
+ 	return true;
+ }
+ 
++static inline bool smc_link_sendable(struct smc_link *lnk)
++{
++	return smc_link_usable(lnk) &&
++		lnk->qp_attr.cur_qp_state == IB_QPS_RTS;
++}
++
+ static inline bool smc_link_active(struct smc_link *lnk)
+ {
+ 	return lnk->state == SMC_LNK_ACTIVE;
+diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c
+index fc766b537ac7a..f1ffbd414602e 100644
+--- a/net/smc/smc_ib.c
++++ b/net/smc/smc_ib.c
+@@ -100,12 +100,12 @@ int smc_ib_modify_qp_rts(struct smc_link *lnk)
+ 			    IB_QP_MAX_QP_RD_ATOMIC);
+ }
+ 
+-int smc_ib_modify_qp_reset(struct smc_link *lnk)
++int smc_ib_modify_qp_error(struct smc_link *lnk)
+ {
+ 	struct ib_qp_attr qp_attr;
+ 
+ 	memset(&qp_attr, 0, sizeof(qp_attr));
+-	qp_attr.qp_state = IB_QPS_RESET;
++	qp_attr.qp_state = IB_QPS_ERR;
+ 	return ib_modify_qp(lnk->roce_qp, &qp_attr, IB_QP_STATE);
+ }
+ 
+diff --git a/net/smc/smc_ib.h b/net/smc/smc_ib.h
+index 2ce481187dd0b..f90d15eae2aab 100644
+--- a/net/smc/smc_ib.h
++++ b/net/smc/smc_ib.h
+@@ -74,6 +74,7 @@ int smc_ib_create_queue_pair(struct smc_link *lnk);
+ int smc_ib_ready_link(struct smc_link *lnk);
+ int smc_ib_modify_qp_rts(struct smc_link *lnk);
+ int smc_ib_modify_qp_reset(struct smc_link *lnk);
++int smc_ib_modify_qp_error(struct smc_link *lnk);
+ long smc_ib_setup_per_ibdev(struct smc_ib_device *smcibdev);
+ int smc_ib_get_memory_region(struct ib_pd *pd, int access_flags,
+ 			     struct smc_buf_desc *buf_slot, u8 link_idx);
+diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
+index d8fe4e1f24d1f..ee1f0fdba0855 100644
+--- a/net/smc/smc_llc.c
++++ b/net/smc/smc_llc.c
+@@ -383,9 +383,11 @@ int smc_llc_send_confirm_link(struct smc_link *link,
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	confllc = (struct smc_llc_msg_confirm_link *)wr_buf;
+ 	memset(confllc, 0, sizeof(*confllc));
+ 	confllc->hd.common.type = SMC_LLC_CONFIRM_LINK;
+@@ -402,6 +404,8 @@ int smc_llc_send_confirm_link(struct smc_link *link,
+ 	confllc->max_links = SMC_LLC_ADD_LNK_MAX_LINKS;
+ 	/* send llc message */
+ 	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+@@ -415,9 +419,11 @@ static int smc_llc_send_confirm_rkey(struct smc_link *send_link,
+ 	struct smc_link *link;
+ 	int i, rc, rtok_ix;
+ 
++	if (!smc_wr_tx_link_hold(send_link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(send_link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	rkeyllc = (struct smc_llc_msg_confirm_rkey *)wr_buf;
+ 	memset(rkeyllc, 0, sizeof(*rkeyllc));
+ 	rkeyllc->hd.common.type = SMC_LLC_CONFIRM_RKEY;
+@@ -444,6 +450,8 @@ static int smc_llc_send_confirm_rkey(struct smc_link *send_link,
+ 		(u64)sg_dma_address(rmb_desc->sgt[send_link->link_idx].sgl));
+ 	/* send llc message */
+ 	rc = smc_wr_tx_send(send_link, pend);
++put_out:
++	smc_wr_tx_link_put(send_link);
+ 	return rc;
+ }
+ 
+@@ -456,9 +464,11 @@ static int smc_llc_send_delete_rkey(struct smc_link *link,
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	rkeyllc = (struct smc_llc_msg_delete_rkey *)wr_buf;
+ 	memset(rkeyllc, 0, sizeof(*rkeyllc));
+ 	rkeyllc->hd.common.type = SMC_LLC_DELETE_RKEY;
+@@ -467,6 +477,8 @@ static int smc_llc_send_delete_rkey(struct smc_link *link,
+ 	rkeyllc->rkey[0] = htonl(rmb_desc->mr_rx[link->link_idx]->rkey);
+ 	/* send llc message */
+ 	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+@@ -480,9 +492,11 @@ int smc_llc_send_add_link(struct smc_link *link, u8 mac[], u8 gid[],
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	addllc = (struct smc_llc_msg_add_link *)wr_buf;
+ 
+ 	memset(addllc, 0, sizeof(*addllc));
+@@ -504,6 +518,8 @@ int smc_llc_send_add_link(struct smc_link *link, u8 mac[], u8 gid[],
+ 	}
+ 	/* send llc message */
+ 	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+@@ -517,9 +533,11 @@ int smc_llc_send_delete_link(struct smc_link *link, u8 link_del_id,
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	delllc = (struct smc_llc_msg_del_link *)wr_buf;
+ 
+ 	memset(delllc, 0, sizeof(*delllc));
+@@ -536,6 +554,8 @@ int smc_llc_send_delete_link(struct smc_link *link, u8 link_del_id,
+ 	delllc->reason = htonl(reason);
+ 	/* send llc message */
+ 	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+@@ -547,9 +567,11 @@ static int smc_llc_send_test_link(struct smc_link *link, u8 user_data[16])
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	testllc = (struct smc_llc_msg_test_link *)wr_buf;
+ 	memset(testllc, 0, sizeof(*testllc));
+ 	testllc->hd.common.type = SMC_LLC_TEST_LINK;
+@@ -557,6 +579,8 @@ static int smc_llc_send_test_link(struct smc_link *link, u8 user_data[16])
+ 	memcpy(testllc->user_data, user_data, sizeof(testllc->user_data));
+ 	/* send llc message */
+ 	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+@@ -567,13 +591,16 @@ static int smc_llc_send_message(struct smc_link *link, void *llcbuf)
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
+-	if (!smc_link_usable(link))
++	if (!smc_wr_tx_link_hold(link))
+ 		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	memcpy(wr_buf, llcbuf, sizeof(union smc_llc_msg));
+-	return smc_wr_tx_send(link, pend);
++	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
++	return rc;
+ }
+ 
+ /* schedule an llc send on link, may wait for buffers,
+@@ -586,13 +613,16 @@ static int smc_llc_send_message_wait(struct smc_link *link, void *llcbuf)
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
+-	if (!smc_link_usable(link))
++	if (!smc_wr_tx_link_hold(link))
+ 		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	memcpy(wr_buf, llcbuf, sizeof(union smc_llc_msg));
+-	return smc_wr_tx_send_wait(link, pend, SMC_LLC_WAIT_TIME);
++	rc = smc_wr_tx_send_wait(link, pend, SMC_LLC_WAIT_TIME);
++put_out:
++	smc_wr_tx_link_put(link);
++	return rc;
+ }
+ 
+ /********************************* receive ***********************************/
+@@ -672,9 +702,11 @@ static int smc_llc_add_link_cont(struct smc_link *link,
+ 	struct smc_buf_desc *rmb;
+ 	u8 n;
+ 
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	addc_llc = (struct smc_llc_msg_add_link_cont *)wr_buf;
+ 	memset(addc_llc, 0, sizeof(*addc_llc));
+ 
+@@ -706,7 +738,10 @@ static int smc_llc_add_link_cont(struct smc_link *link,
+ 	addc_llc->hd.length = sizeof(struct smc_llc_msg_add_link_cont);
+ 	if (lgr->role == SMC_CLNT)
+ 		addc_llc->hd.flags |= SMC_LLC_FLAG_RESP;
+-	return smc_wr_tx_send(link, pend);
++	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
++	return rc;
+ }
+ 
+ static int smc_llc_cli_rkey_exchange(struct smc_link *link,
+@@ -1323,7 +1358,7 @@ void smc_llc_send_link_delete_all(struct smc_link_group *lgr, bool ord, u32 rsn)
+ 	delllc.reason = htonl(rsn);
+ 
+ 	for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
+-		if (!smc_link_usable(&lgr->lnk[i]))
++		if (!smc_link_sendable(&lgr->lnk[i]))
+ 			continue;
+ 		if (!smc_llc_send_message_wait(&lgr->lnk[i], &delllc))
+ 			break;
+diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
+index ff02952b3d03e..52ef1fca0b604 100644
+--- a/net/smc/smc_tx.c
++++ b/net/smc/smc_tx.c
+@@ -479,7 +479,7 @@ static int smc_tx_rdma_writes(struct smc_connection *conn,
+ /* Wakeup sndbuf consumers from any context (IRQ or process)
+  * since there is more data to transmit; usable snd_wnd as max transmit
+  */
+-static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
++static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
+ {
+ 	struct smc_cdc_producer_flags *pflags = &conn->local_tx_ctrl.prod_flags;
+ 	struct smc_link *link = conn->lnk;
+@@ -488,8 +488,11 @@ static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
++	if (!link || !smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_cdc_get_free_slot(conn, link, &wr_buf, &wr_rdma_buf, &pend);
+ 	if (rc < 0) {
++		smc_wr_tx_link_put(link);
+ 		if (rc == -EBUSY) {
+ 			struct smc_sock *smc =
+ 				container_of(conn, struct smc_sock, conn);
+@@ -530,22 +533,7 @@ static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
+ 
+ out_unlock:
+ 	spin_unlock_bh(&conn->send_lock);
+-	return rc;
+-}
+-
+-static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
+-{
+-	struct smc_link *link = conn->lnk;
+-	int rc = -ENOLINK;
+-
+-	if (!link)
+-		return rc;
+-
+-	atomic_inc(&link->wr_tx_refcnt);
+-	if (smc_link_usable(link))
+-		rc = _smcr_tx_sndbuf_nonempty(conn);
+-	if (atomic_dec_and_test(&link->wr_tx_refcnt))
+-		wake_up_all(&link->wr_tx_wait);
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+diff --git a/net/smc/smc_wr.c b/net/smc/smc_wr.c
+index 9dbe4804853e0..5a81f8c9ebf90 100644
+--- a/net/smc/smc_wr.c
++++ b/net/smc/smc_wr.c
+@@ -62,13 +62,9 @@ static inline bool smc_wr_is_tx_pend(struct smc_link *link)
+ }
+ 
+ /* wait till all pending tx work requests on the given link are completed */
+-int smc_wr_tx_wait_no_pending_sends(struct smc_link *link)
++void smc_wr_tx_wait_no_pending_sends(struct smc_link *link)
+ {
+-	if (wait_event_timeout(link->wr_tx_wait, !smc_wr_is_tx_pend(link),
+-			       SMC_WR_TX_WAIT_PENDING_TIME))
+-		return 0;
+-	else /* timeout */
+-		return -EPIPE;
++	wait_event(link->wr_tx_wait, !smc_wr_is_tx_pend(link));
+ }
+ 
+ static inline int smc_wr_tx_find_pending_index(struct smc_link *link, u64 wr_id)
+@@ -87,7 +83,6 @@ static inline void smc_wr_tx_process_cqe(struct ib_wc *wc)
+ 	struct smc_wr_tx_pend pnd_snd;
+ 	struct smc_link *link;
+ 	u32 pnd_snd_idx;
+-	int i;
+ 
+ 	link = wc->qp->qp_context;
+ 
+@@ -115,14 +110,6 @@ static inline void smc_wr_tx_process_cqe(struct ib_wc *wc)
+ 	if (!test_and_clear_bit(pnd_snd_idx, link->wr_tx_mask))
+ 		return;
+ 	if (wc->status) {
+-		for_each_set_bit(i, link->wr_tx_mask, link->wr_tx_cnt) {
+-			/* clear full struct smc_wr_tx_pend including .priv */
+-			memset(&link->wr_tx_pends[i], 0,
+-			       sizeof(link->wr_tx_pends[i]));
+-			memset(&link->wr_tx_bufs[i], 0,
+-			       sizeof(link->wr_tx_bufs[i]));
+-			clear_bit(i, link->wr_tx_mask);
+-		}
+ 		/* terminate link */
+ 		smcr_link_down_cond_sched(link);
+ 	}
+@@ -169,7 +156,7 @@ void smc_wr_tx_cq_handler(struct ib_cq *ib_cq, void *cq_context)
+ static inline int smc_wr_tx_get_free_slot_index(struct smc_link *link, u32 *idx)
+ {
+ 	*idx = link->wr_tx_cnt;
+-	if (!smc_link_usable(link))
++	if (!smc_link_sendable(link))
+ 		return -ENOLINK;
+ 	for_each_clear_bit(*idx, link->wr_tx_mask, link->wr_tx_cnt) {
+ 		if (!test_and_set_bit(*idx, link->wr_tx_mask))
+@@ -212,7 +199,7 @@ int smc_wr_tx_get_free_slot(struct smc_link *link,
+ 	} else {
+ 		rc = wait_event_interruptible_timeout(
+ 			link->wr_tx_wait,
+-			!smc_link_usable(link) ||
++			!smc_link_sendable(link) ||
+ 			lgr->terminating ||
+ 			(smc_wr_tx_get_free_slot_index(link, &idx) != -EBUSY),
+ 			SMC_WR_TX_WAIT_FREE_SLOT_TIME);
+@@ -288,18 +275,20 @@ int smc_wr_tx_send_wait(struct smc_link *link, struct smc_wr_tx_pend_priv *priv,
+ 			unsigned long timeout)
+ {
+ 	struct smc_wr_tx_pend *pend;
++	u32 pnd_idx;
+ 	int rc;
+ 
+ 	pend = container_of(priv, struct smc_wr_tx_pend, priv);
+ 	pend->compl_requested = 1;
+-	init_completion(&link->wr_tx_compl[pend->idx]);
++	pnd_idx = pend->idx;
++	init_completion(&link->wr_tx_compl[pnd_idx]);
+ 
+ 	rc = smc_wr_tx_send(link, priv);
+ 	if (rc)
+ 		return rc;
+ 	/* wait for completion by smc_wr_tx_process_cqe() */
+ 	rc = wait_for_completion_interruptible_timeout(
+-					&link->wr_tx_compl[pend->idx], timeout);
++					&link->wr_tx_compl[pnd_idx], timeout);
+ 	if (rc <= 0)
+ 		rc = -ENODATA;
+ 	if (rc > 0)
+@@ -349,25 +338,6 @@ int smc_wr_reg_send(struct smc_link *link, struct ib_mr *mr)
+ 	return rc;
+ }
+ 
+-void smc_wr_tx_dismiss_slots(struct smc_link *link, u8 wr_tx_hdr_type,
+-			     smc_wr_tx_filter filter,
+-			     smc_wr_tx_dismisser dismisser,
+-			     unsigned long data)
+-{
+-	struct smc_wr_tx_pend_priv *tx_pend;
+-	struct smc_wr_rx_hdr *wr_tx;
+-	int i;
+-
+-	for_each_set_bit(i, link->wr_tx_mask, link->wr_tx_cnt) {
+-		wr_tx = (struct smc_wr_rx_hdr *)&link->wr_tx_bufs[i];
+-		if (wr_tx->type != wr_tx_hdr_type)
+-			continue;
+-		tx_pend = &link->wr_tx_pends[i].priv;
+-		if (filter(tx_pend, data))
+-			dismisser(tx_pend);
+-	}
+-}
+-
+ /****************************** receive queue ********************************/
+ 
+ int smc_wr_rx_register_handler(struct smc_wr_rx_handler *handler)
+@@ -572,10 +542,7 @@ void smc_wr_free_link(struct smc_link *lnk)
+ 	smc_wr_wakeup_reg_wait(lnk);
+ 	smc_wr_wakeup_tx_wait(lnk);
+ 
+-	if (smc_wr_tx_wait_no_pending_sends(lnk))
+-		memset(lnk->wr_tx_mask, 0,
+-		       BITS_TO_LONGS(SMC_WR_BUF_CNT) *
+-						sizeof(*lnk->wr_tx_mask));
++	smc_wr_tx_wait_no_pending_sends(lnk);
+ 	wait_event(lnk->wr_reg_wait, (!atomic_read(&lnk->wr_reg_refcnt)));
+ 	wait_event(lnk->wr_tx_wait, (!atomic_read(&lnk->wr_tx_refcnt)));
+ 
+diff --git a/net/smc/smc_wr.h b/net/smc/smc_wr.h
+index 423b8709f1c9e..cb58e60078f57 100644
+--- a/net/smc/smc_wr.h
++++ b/net/smc/smc_wr.h
+@@ -22,7 +22,6 @@
+ #define SMC_WR_BUF_CNT 16	/* # of ctrl buffers per link */
+ 
+ #define SMC_WR_TX_WAIT_FREE_SLOT_TIME	(10 * HZ)
+-#define SMC_WR_TX_WAIT_PENDING_TIME	(5 * HZ)
+ 
+ #define SMC_WR_TX_SIZE 44 /* actual size of wr_send data (<=SMC_WR_BUF_SIZE) */
+ 
+@@ -60,6 +59,20 @@ static inline void smc_wr_tx_set_wr_id(atomic_long_t *wr_tx_id, long val)
+ 	atomic_long_set(wr_tx_id, val);
+ }
+ 
++static inline bool smc_wr_tx_link_hold(struct smc_link *link)
++{
++	if (!smc_link_sendable(link))
++		return false;
++	atomic_inc(&link->wr_tx_refcnt);
++	return true;
++}
++
++static inline void smc_wr_tx_link_put(struct smc_link *link)
++{
++	if (atomic_dec_and_test(&link->wr_tx_refcnt))
++		wake_up_all(&link->wr_tx_wait);
++}
++
+ static inline void smc_wr_wakeup_tx_wait(struct smc_link *lnk)
+ {
+ 	wake_up_all(&lnk->wr_tx_wait);
+@@ -108,7 +121,7 @@ void smc_wr_tx_dismiss_slots(struct smc_link *lnk, u8 wr_rx_hdr_type,
+ 			     smc_wr_tx_filter filter,
+ 			     smc_wr_tx_dismisser dismisser,
+ 			     unsigned long data);
+-int smc_wr_tx_wait_no_pending_sends(struct smc_link *link);
++void smc_wr_tx_wait_no_pending_sends(struct smc_link *link);
+ 
+ int smc_wr_rx_register_handler(struct smc_wr_rx_handler *handler);
+ int smc_wr_rx_post_init(struct smc_link *link);
+diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
+index a4ca050815aba..dc1d3696af6b8 100755
+--- a/scripts/recordmcount.pl
++++ b/scripts/recordmcount.pl
+@@ -252,7 +252,7 @@ if ($arch eq "x86_64") {
+ 
+ } elsif ($arch eq "s390" && $bits == 64) {
+     if ($cc =~ /-DCC_USING_HOTPATCH/) {
+-	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*(bcrl\\s*0,|jgnop\\s*)[0-9a-f]+ <([^\+]*)>\$";
++	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*(brcl\\s*0,|jgnop\\s*)[0-9a-f]+ <([^\+]*)>\$";
+ 	$mcount_adjust = 0;
+     } else {
+ 	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*R_390_(PC|PLT)32DBL\\s+_mcount\\+0x2\$";
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index f32026bc96b42..ff2191ae53528 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -5665,7 +5665,7 @@ static unsigned int selinux_ip_postroute_compat(struct sk_buff *skb,
+ 	struct common_audit_data ad;
+ 	struct lsm_network_audit net = {0,};
+ 	char *addrp;
+-	u8 proto;
++	u8 proto = 0;
+ 
+ 	if (sk == NULL)
+ 		return NF_ACCEPT;
+diff --git a/security/tomoyo/util.c b/security/tomoyo/util.c
+index cd458e10cf2af..11dd8260c9cc7 100644
+--- a/security/tomoyo/util.c
++++ b/security/tomoyo/util.c
+@@ -1046,10 +1046,11 @@ bool tomoyo_domain_quota_is_ok(struct tomoyo_request_info *r)
+ 		return false;
+ 	if (!domain)
+ 		return true;
++	if (READ_ONCE(domain->flags[TOMOYO_DIF_QUOTA_WARNED]))
++		return false;
+ 	list_for_each_entry_rcu(ptr, &domain->acl_info_list, list,
+ 				srcu_read_lock_held(&tomoyo_ss)) {
+ 		u16 perm;
+-		u8 i;
+ 
+ 		if (ptr->is_deleted)
+ 			continue;
+@@ -1060,23 +1061,23 @@ bool tomoyo_domain_quota_is_ok(struct tomoyo_request_info *r)
+ 		 */
+ 		switch (ptr->type) {
+ 		case TOMOYO_TYPE_PATH_ACL:
+-			data_race(perm = container_of(ptr, struct tomoyo_path_acl, head)->perm);
++			perm = data_race(container_of(ptr, struct tomoyo_path_acl, head)->perm);
+ 			break;
+ 		case TOMOYO_TYPE_PATH2_ACL:
+-			data_race(perm = container_of(ptr, struct tomoyo_path2_acl, head)->perm);
++			perm = data_race(container_of(ptr, struct tomoyo_path2_acl, head)->perm);
+ 			break;
+ 		case TOMOYO_TYPE_PATH_NUMBER_ACL:
+-			data_race(perm = container_of(ptr, struct tomoyo_path_number_acl, head)
++			perm = data_race(container_of(ptr, struct tomoyo_path_number_acl, head)
+ 				  ->perm);
+ 			break;
+ 		case TOMOYO_TYPE_MKDEV_ACL:
+-			data_race(perm = container_of(ptr, struct tomoyo_mkdev_acl, head)->perm);
++			perm = data_race(container_of(ptr, struct tomoyo_mkdev_acl, head)->perm);
+ 			break;
+ 		case TOMOYO_TYPE_INET_ACL:
+-			data_race(perm = container_of(ptr, struct tomoyo_inet_acl, head)->perm);
++			perm = data_race(container_of(ptr, struct tomoyo_inet_acl, head)->perm);
+ 			break;
+ 		case TOMOYO_TYPE_UNIX_ACL:
+-			data_race(perm = container_of(ptr, struct tomoyo_unix_acl, head)->perm);
++			perm = data_race(container_of(ptr, struct tomoyo_unix_acl, head)->perm);
+ 			break;
+ 		case TOMOYO_TYPE_MANUAL_TASK_ACL:
+ 			perm = 0;
+@@ -1084,21 +1085,17 @@ bool tomoyo_domain_quota_is_ok(struct tomoyo_request_info *r)
+ 		default:
+ 			perm = 1;
+ 		}
+-		for (i = 0; i < 16; i++)
+-			if (perm & (1 << i))
+-				count++;
++		count += hweight16(perm);
+ 	}
+ 	if (count < tomoyo_profile(domain->ns, domain->profile)->
+ 	    pref[TOMOYO_PREF_MAX_LEARNING_ENTRY])
+ 		return true;
+-	if (!domain->flags[TOMOYO_DIF_QUOTA_WARNED]) {
+-		domain->flags[TOMOYO_DIF_QUOTA_WARNED] = true;
+-		/* r->granted = false; */
+-		tomoyo_write_log(r, "%s", tomoyo_dif[TOMOYO_DIF_QUOTA_WARNED]);
++	WRITE_ONCE(domain->flags[TOMOYO_DIF_QUOTA_WARNED], true);
++	/* r->granted = false; */
++	tomoyo_write_log(r, "%s", tomoyo_dif[TOMOYO_DIF_QUOTA_WARNED]);
+ #ifndef CONFIG_SECURITY_TOMOYO_INSECURE_BUILTIN_SETTING
+-		pr_warn("WARNING: Domain '%s' has too many ACLs to hold. Stopped learning mode.\n",
+-			domain->domainname->name);
++	pr_warn("WARNING: Domain '%s' has too many ACLs to hold. Stopped learning mode.\n",
++		domain->domainname->name);
+ #endif
+-	}
+ 	return false;
+ }
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index 1d727387cb205..5109d01619eed 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -2354,7 +2354,7 @@ static int process_switch_event(struct perf_tool *tool,
+ 	if (perf_event__process_switch(tool, event, sample, machine) < 0)
+ 		return -1;
+ 
+-	if (scripting_ops && scripting_ops->process_switch)
++	if (scripting_ops && scripting_ops->process_switch && !filter_cpu(sample))
+ 		scripting_ops->process_switch(event, sample, machine);
+ 
+ 	if (!script->show_switch_events)
+diff --git a/tools/testing/selftests/net/udpgso.c b/tools/testing/selftests/net/udpgso.c
+index c66da6ffd6d8d..7badaf215de28 100644
+--- a/tools/testing/selftests/net/udpgso.c
++++ b/tools/testing/selftests/net/udpgso.c
+@@ -156,13 +156,13 @@ struct testcase testcases_v4[] = {
+ 	},
+ 	{
+ 		/* send max number of min sized segments */
+-		.tlen = UDP_MAX_SEGMENTS - CONST_HDRLEN_V4,
++		.tlen = UDP_MAX_SEGMENTS,
+ 		.gso_len = 1,
+-		.r_num_mss = UDP_MAX_SEGMENTS - CONST_HDRLEN_V4,
++		.r_num_mss = UDP_MAX_SEGMENTS,
+ 	},
+ 	{
+ 		/* send max number + 1 of min sized segments: fail */
+-		.tlen = UDP_MAX_SEGMENTS - CONST_HDRLEN_V4 + 1,
++		.tlen = UDP_MAX_SEGMENTS + 1,
+ 		.gso_len = 1,
+ 		.tfail = true,
+ 	},
+@@ -259,13 +259,13 @@ struct testcase testcases_v6[] = {
+ 	},
+ 	{
+ 		/* send max number of min sized segments */
+-		.tlen = UDP_MAX_SEGMENTS - CONST_HDRLEN_V6,
++		.tlen = UDP_MAX_SEGMENTS,
+ 		.gso_len = 1,
+-		.r_num_mss = UDP_MAX_SEGMENTS - CONST_HDRLEN_V6,
++		.r_num_mss = UDP_MAX_SEGMENTS,
+ 	},
+ 	{
+ 		/* send max number + 1 of min sized segments: fail */
+-		.tlen = UDP_MAX_SEGMENTS - CONST_HDRLEN_V6 + 1,
++		.tlen = UDP_MAX_SEGMENTS + 1,
+ 		.gso_len = 1,
+ 		.tfail = true,
+ 	},
+diff --git a/tools/testing/selftests/net/udpgso_bench_tx.c b/tools/testing/selftests/net/udpgso_bench_tx.c
+index 17512a43885e7..f1fdaa2702913 100644
+--- a/tools/testing/selftests/net/udpgso_bench_tx.c
++++ b/tools/testing/selftests/net/udpgso_bench_tx.c
+@@ -419,6 +419,7 @@ static void usage(const char *filepath)
+ 
+ static void parse_opts(int argc, char **argv)
+ {
++	const char *bind_addr = NULL;
+ 	int max_len, hdrlen;
+ 	int c;
+ 
+@@ -446,7 +447,7 @@ static void parse_opts(int argc, char **argv)
+ 			cfg_cpu = strtol(optarg, NULL, 0);
+ 			break;
+ 		case 'D':
+-			setup_sockaddr(cfg_family, optarg, &cfg_dst_addr);
++			bind_addr = optarg;
+ 			break;
+ 		case 'l':
+ 			cfg_runtime_ms = strtoul(optarg, NULL, 10) * 1000;
+@@ -492,6 +493,11 @@ static void parse_opts(int argc, char **argv)
+ 		}
+ 	}
+ 
++	if (!bind_addr)
++		bind_addr = cfg_family == PF_INET6 ? "::" : "0.0.0.0";
++
++	setup_sockaddr(cfg_family, bind_addr, &cfg_dst_addr);
++
+ 	if (optind != argc)
+ 		usage(argv[0]);
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-01-11 14:50 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-01-11 14:50 UTC (permalink / raw
  To: gentoo-commits

commit:     b5b810b63ae2eef210cd178f28871681852474e3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 11 14:50:23 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jan 11 14:50:23 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b5b810b6

Linuxpatch 5.10.91

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1090_linux-5.10.91.patch | 1279 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1283 insertions(+)

diff --git a/0000_README b/0000_README
index 46422e5d..9e878b9b 100644
--- a/0000_README
+++ b/0000_README
@@ -403,6 +403,10 @@ Patch:  1089_linux-5.10.90.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.90
 
+Patch:  1090_linux-5.10.91.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.91
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1090_linux-5.10.91.patch b/1090_linux-5.10.91.patch
new file mode 100644
index 00000000..8527a5d8
--- /dev/null
+++ b/1090_linux-5.10.91.patch
@@ -0,0 +1,1279 @@
+diff --git a/Makefile b/Makefile
+index 556241a10821f..c8d677c7eaa11 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 90
++SUBLEVEL = 91
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/bcm2711.dtsi b/arch/arm/boot/dts/bcm2711.dtsi
+index 4ade854bdcdaf..55ec83bde5a61 100644
+--- a/arch/arm/boot/dts/bcm2711.dtsi
++++ b/arch/arm/boot/dts/bcm2711.dtsi
+@@ -555,6 +555,8 @@
+ 		     <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>,
+ 		     <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
+ 
++	gpio-ranges = <&gpio 0 0 58>;
++
+ 	gpclk0_gpio49: gpclk0_gpio49 {
+ 		pin-gpclk {
+ 			pins = "gpio49";
+diff --git a/arch/arm/boot/dts/bcm283x.dtsi b/arch/arm/boot/dts/bcm283x.dtsi
+index 0f3be55201a5b..ffdf7c4fba465 100644
+--- a/arch/arm/boot/dts/bcm283x.dtsi
++++ b/arch/arm/boot/dts/bcm283x.dtsi
+@@ -126,6 +126,8 @@
+ 			interrupt-controller;
+ 			#interrupt-cells = <2>;
+ 
++			gpio-ranges = <&gpio 0 0 54>;
++
+ 			/* Defines common pin muxing groups
+ 			 *
+ 			 * While each pin can have its mux selected
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
+index b24c8ae8b1ece..7e228c181b298 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
+@@ -77,6 +77,7 @@ static const struct hw_sequencer_funcs dcn10_funcs = {
+ 	.get_clock = dcn10_get_clock,
+ 	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
+ 	.calc_vupdate_position = dcn10_calc_vupdate_position,
++	.power_down = dce110_power_down,
+ 	.set_backlight_level = dce110_set_backlight_level,
+ 	.set_abm_immediate_disable = dce110_set_abm_immediate_disable,
+ 	.set_pipe = dce110_set_pipe,
+diff --git a/drivers/infiniband/core/uverbs_marshall.c b/drivers/infiniband/core/uverbs_marshall.c
+index b8d715c68ca44..11a0806469162 100644
+--- a/drivers/infiniband/core/uverbs_marshall.c
++++ b/drivers/infiniband/core/uverbs_marshall.c
+@@ -66,7 +66,7 @@ void ib_copy_ah_attr_to_user(struct ib_device *device,
+ 	struct rdma_ah_attr *src = ah_attr;
+ 	struct rdma_ah_attr conv_ah;
+ 
+-	memset(&dst->grh.reserved, 0, sizeof(dst->grh.reserved));
++	memset(&dst->grh, 0, sizeof(dst->grh));
+ 
+ 	if ((ah_attr->type == RDMA_AH_ATTR_TYPE_OPA) &&
+ 	    (rdma_ah_get_dlid(ah_attr) > be16_to_cpu(IB_LID_PERMISSIVE)) &&
+diff --git a/drivers/infiniband/core/uverbs_uapi.c b/drivers/infiniband/core/uverbs_uapi.c
+index 5addc8fae3f3b..91dbcb3c252d6 100644
+--- a/drivers/infiniband/core/uverbs_uapi.c
++++ b/drivers/infiniband/core/uverbs_uapi.c
+@@ -450,6 +450,9 @@ static int uapi_finalize(struct uverbs_api *uapi)
+ 	uapi->num_write_ex = max_write_ex + 1;
+ 	data = kmalloc_array(uapi->num_write + uapi->num_write_ex,
+ 			     sizeof(*uapi->write_methods), GFP_KERNEL);
++	if (!data)
++		return -ENOMEM;
++
+ 	for (i = 0; i != uapi->num_write + uapi->num_write_ex; i++)
+ 		data[i] = &uapi->notsupp_method;
+ 	uapi->write_methods = data;
+diff --git a/drivers/input/touchscreen/zinitix.c b/drivers/input/touchscreen/zinitix.c
+index fd8b4e9f08a21..6df6f07f1ac66 100644
+--- a/drivers/input/touchscreen/zinitix.c
++++ b/drivers/input/touchscreen/zinitix.c
+@@ -488,6 +488,15 @@ static int zinitix_ts_probe(struct i2c_client *client)
+ 		return error;
+ 	}
+ 
++	error = devm_request_threaded_irq(&client->dev, client->irq,
++					  NULL, zinitix_ts_irq_handler,
++					  IRQF_ONESHOT,
++					  client->name, bt541);
++	if (error) {
++		dev_err(&client->dev, "Failed to request IRQ: %d\n", error);
++		return error;
++	}
++
+ 	error = zinitix_init_input_dev(bt541);
+ 	if (error) {
+ 		dev_err(&client->dev,
+@@ -514,13 +523,6 @@ static int zinitix_ts_probe(struct i2c_client *client)
+ 	}
+ 
+ 	irq_set_status_flags(client->irq, IRQ_NOAUTOEN);
+-	error = devm_request_threaded_irq(&client->dev, client->irq,
+-					  NULL, zinitix_ts_irq_handler,
+-					  IRQF_ONESHOT, client->name, bt541);
+-	if (error) {
+-		dev_err(&client->dev, "Failed to request IRQ: %d\n", error);
+-		return error;
+-	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/isdn/mISDN/core.c b/drivers/isdn/mISDN/core.c
+index 55891e4204460..a41b4b2645941 100644
+--- a/drivers/isdn/mISDN/core.c
++++ b/drivers/isdn/mISDN/core.c
+@@ -381,7 +381,7 @@ mISDNInit(void)
+ 	err = mISDN_inittimer(&debug);
+ 	if (err)
+ 		goto error2;
+-	err = l1_init(&debug);
++	err = Isdnl1_Init(&debug);
+ 	if (err)
+ 		goto error3;
+ 	err = Isdnl2_Init(&debug);
+@@ -395,7 +395,7 @@ mISDNInit(void)
+ error5:
+ 	Isdnl2_cleanup();
+ error4:
+-	l1_cleanup();
++	Isdnl1_cleanup();
+ error3:
+ 	mISDN_timer_cleanup();
+ error2:
+@@ -408,7 +408,7 @@ static void mISDN_cleanup(void)
+ {
+ 	misdn_sock_cleanup();
+ 	Isdnl2_cleanup();
+-	l1_cleanup();
++	Isdnl1_cleanup();
+ 	mISDN_timer_cleanup();
+ 	class_unregister(&mISDN_class);
+ 
+diff --git a/drivers/isdn/mISDN/core.h b/drivers/isdn/mISDN/core.h
+index 23b44d3033279..42599f49c189d 100644
+--- a/drivers/isdn/mISDN/core.h
++++ b/drivers/isdn/mISDN/core.h
+@@ -60,8 +60,8 @@ struct Bprotocol	*get_Bprotocol4id(u_int);
+ extern int	mISDN_inittimer(u_int *);
+ extern void	mISDN_timer_cleanup(void);
+ 
+-extern int	l1_init(u_int *);
+-extern void	l1_cleanup(void);
++extern int	Isdnl1_Init(u_int *);
++extern void	Isdnl1_cleanup(void);
+ extern int	Isdnl2_Init(u_int *);
+ extern void	Isdnl2_cleanup(void);
+ 
+diff --git a/drivers/isdn/mISDN/layer1.c b/drivers/isdn/mISDN/layer1.c
+index 98a3bc6c17009..7b31c25a550e3 100644
+--- a/drivers/isdn/mISDN/layer1.c
++++ b/drivers/isdn/mISDN/layer1.c
+@@ -398,7 +398,7 @@ create_l1(struct dchannel *dch, dchannel_l1callback *dcb) {
+ EXPORT_SYMBOL(create_l1);
+ 
+ int
+-l1_init(u_int *deb)
++Isdnl1_Init(u_int *deb)
+ {
+ 	debug = deb;
+ 	l1fsm_s.state_count = L1S_STATE_COUNT;
+@@ -409,7 +409,7 @@ l1_init(u_int *deb)
+ }
+ 
+ void
+-l1_cleanup(void)
++Isdnl1_cleanup(void)
+ {
+ 	mISDN_FsmFree(&l1fsm_s);
+ }
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index df1884d57d1a0..52414ac2c901a 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -1199,26 +1199,22 @@ static int handle_invalid_req_id(struct ena_ring *ring, u16 req_id,
+ 
+ static int validate_tx_req_id(struct ena_ring *tx_ring, u16 req_id)
+ {
+-	struct ena_tx_buffer *tx_info = NULL;
++	struct ena_tx_buffer *tx_info;
+ 
+-	if (likely(req_id < tx_ring->ring_size)) {
+-		tx_info = &tx_ring->tx_buffer_info[req_id];
+-		if (likely(tx_info->skb))
+-			return 0;
+-	}
++	tx_info = &tx_ring->tx_buffer_info[req_id];
++	if (likely(tx_info->skb))
++		return 0;
+ 
+ 	return handle_invalid_req_id(tx_ring, req_id, tx_info, false);
+ }
+ 
+ static int validate_xdp_req_id(struct ena_ring *xdp_ring, u16 req_id)
+ {
+-	struct ena_tx_buffer *tx_info = NULL;
++	struct ena_tx_buffer *tx_info;
+ 
+-	if (likely(req_id < xdp_ring->ring_size)) {
+-		tx_info = &xdp_ring->tx_buffer_info[req_id];
+-		if (likely(tx_info->xdpf))
+-			return 0;
+-	}
++	tx_info = &xdp_ring->tx_buffer_info[req_id];
++	if (likely(tx_info->xdpf))
++		return 0;
+ 
+ 	return handle_invalid_req_id(xdp_ring, req_id, tx_info, true);
+ }
+@@ -1243,9 +1239,14 @@ static int ena_clean_tx_irq(struct ena_ring *tx_ring, u32 budget)
+ 
+ 		rc = ena_com_tx_comp_req_id_get(tx_ring->ena_com_io_cq,
+ 						&req_id);
+-		if (rc)
++		if (rc) {
++			if (unlikely(rc == -EINVAL))
++				handle_invalid_req_id(tx_ring, req_id, NULL,
++						      false);
+ 			break;
++		}
+ 
++		/* validate that the request id points to a valid skb */
+ 		rc = validate_tx_req_id(tx_ring, req_id);
+ 		if (rc)
+ 			break;
+@@ -1801,9 +1802,14 @@ static int ena_clean_xdp_irq(struct ena_ring *xdp_ring, u32 budget)
+ 
+ 		rc = ena_com_tx_comp_req_id_get(xdp_ring->ena_com_io_cq,
+ 						&req_id);
+-		if (rc)
++		if (rc) {
++			if (unlikely(rc == -EINVAL))
++				handle_invalid_req_id(xdp_ring, req_id, NULL,
++						      true);
+ 			break;
++		}
+ 
++		/* validate that the request id points to a valid xdp_frame */
+ 		rc = validate_xdp_req_id(xdp_ring, req_id);
+ 		if (rc)
+ 			break;
+@@ -3921,10 +3927,6 @@ static u32 ena_calc_max_io_queue_num(struct pci_dev *pdev,
+ 	max_num_io_queues = min_t(u32, max_num_io_queues, io_tx_cq_num);
+ 	/* 1 IRQ for for mgmnt and 1 IRQs for each IO direction */
+ 	max_num_io_queues = min_t(u32, max_num_io_queues, pci_msix_vec_count(pdev) - 1);
+-	if (unlikely(!max_num_io_queues)) {
+-		dev_err(&pdev->dev, "The device doesn't have io queues\n");
+-		return -EFAULT;
+-	}
+ 
+ 	return max_num_io_queues;
+ }
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index 24122ccda614c..72f8751784c31 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -365,6 +365,10 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 		if (!buff->is_eop) {
+ 			buff_ = buff;
+ 			do {
++				if (buff_->next >= self->size) {
++					err = -EIO;
++					goto err_exit;
++				}
+ 				next_ = buff_->next,
+ 				buff_ = &self->buff_ring[next_];
+ 				is_rsc_completed =
+@@ -388,6 +392,10 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 			    (buff->is_lro && buff->is_cso_err)) {
+ 				buff_ = buff;
+ 				do {
++					if (buff_->next >= self->size) {
++						err = -EIO;
++						goto err_exit;
++					}
+ 					next_ = buff_->next,
+ 					buff_ = &self->buff_ring[next_];
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 583eae71cda4b..f888a443a067b 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -98,6 +98,24 @@ MODULE_LICENSE("GPL v2");
+ 
+ static struct workqueue_struct *i40e_wq;
+ 
++static void netdev_hw_addr_refcnt(struct i40e_mac_filter *f,
++				  struct net_device *netdev, int delta)
++{
++	struct netdev_hw_addr *ha;
++
++	if (!f || !netdev)
++		return;
++
++	netdev_for_each_mc_addr(ha, netdev) {
++		if (ether_addr_equal(ha->addr, f->macaddr)) {
++			ha->refcount += delta;
++			if (ha->refcount <= 0)
++				ha->refcount = 1;
++			break;
++		}
++	}
++}
++
+ /**
+  * i40e_allocate_dma_mem_d - OS specific memory alloc for shared code
+  * @hw:   pointer to the HW structure
+@@ -2035,6 +2053,7 @@ static void i40e_undo_add_filter_entries(struct i40e_vsi *vsi,
+ 	hlist_for_each_entry_safe(new, h, from, hlist) {
+ 		/* We can simply free the wrapper structure */
+ 		hlist_del(&new->hlist);
++		netdev_hw_addr_refcnt(new->f, vsi->netdev, -1);
+ 		kfree(new);
+ 	}
+ }
+@@ -2382,6 +2401,10 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
+ 						       &tmp_add_list,
+ 						       &tmp_del_list,
+ 						       vlan_filters);
++
++		hlist_for_each_entry(new, &tmp_add_list, hlist)
++			netdev_hw_addr_refcnt(new->f, vsi->netdev, 1);
++
+ 		if (retval)
+ 			goto err_no_memory_locked;
+ 
+@@ -2514,6 +2537,7 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
+ 			if (new->f->state == I40E_FILTER_NEW)
+ 				new->f->state = new->state;
+ 			hlist_del(&new->hlist);
++			netdev_hw_addr_refcnt(new->f, vsi->netdev, -1);
+ 			kfree(new);
+ 		}
+ 		spin_unlock_bh(&vsi->mac_filter_hash_lock);
+@@ -8357,6 +8381,27 @@ int i40e_open(struct net_device *netdev)
+ 	return 0;
+ }
+ 
++/**
++ * i40e_netif_set_realnum_tx_rx_queues - Update number of tx/rx queues
++ * @vsi: vsi structure
++ *
++ * This updates netdev's number of tx/rx queues
++ *
++ * Returns status of setting tx/rx queues
++ **/
++static int i40e_netif_set_realnum_tx_rx_queues(struct i40e_vsi *vsi)
++{
++	int ret;
++
++	ret = netif_set_real_num_rx_queues(vsi->netdev,
++					   vsi->num_queue_pairs);
++	if (ret)
++		return ret;
++
++	return netif_set_real_num_tx_queues(vsi->netdev,
++					    vsi->num_queue_pairs);
++}
++
+ /**
+  * i40e_vsi_open -
+  * @vsi: the VSI to open
+@@ -8393,13 +8438,7 @@ int i40e_vsi_open(struct i40e_vsi *vsi)
+ 			goto err_setup_rx;
+ 
+ 		/* Notify the stack of the actual queue counts. */
+-		err = netif_set_real_num_tx_queues(vsi->netdev,
+-						   vsi->num_queue_pairs);
+-		if (err)
+-			goto err_set_queues;
+-
+-		err = netif_set_real_num_rx_queues(vsi->netdev,
+-						   vsi->num_queue_pairs);
++		err = i40e_netif_set_realnum_tx_rx_queues(vsi);
+ 		if (err)
+ 			goto err_set_queues;
+ 
+@@ -13686,6 +13725,9 @@ struct i40e_vsi *i40e_vsi_setup(struct i40e_pf *pf, u8 type,
+ 	case I40E_VSI_MAIN:
+ 	case I40E_VSI_VMDQ2:
+ 		ret = i40e_config_netdev(vsi);
++		if (ret)
++			goto err_netdev;
++		ret = i40e_netif_set_realnum_tx_rx_queues(vsi);
+ 		if (ret)
+ 			goto err_netdev;
+ 		ret = register_netdev(vsi->netdev);
+@@ -14963,8 +15005,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	if (hw->aq.api_maj_ver == I40E_FW_API_VERSION_MAJOR &&
+ 	    hw->aq.api_min_ver > I40E_FW_MINOR_VERSION(hw))
+-		dev_info(&pdev->dev,
+-			 "The driver for the device detected a newer version of the NVM image v%u.%u than expected v%u.%u. Please install the most recent version of the network driver.\n",
++		dev_dbg(&pdev->dev,
++			"The driver for the device detected a newer version of the NVM image v%u.%u than v%u.%u.\n",
+ 			 hw->aq.api_maj_ver,
+ 			 hw->aq.api_min_ver,
+ 			 I40E_FW_API_VERSION_MAJOR,
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 5a58edba4adfc..65c4c4fd359fa 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1824,17 +1824,19 @@ sriov_configure_out:
+ /***********************virtual channel routines******************/
+ 
+ /**
+- * i40e_vc_send_msg_to_vf
++ * i40e_vc_send_msg_to_vf_ex
+  * @vf: pointer to the VF info
+  * @v_opcode: virtual channel opcode
+  * @v_retval: virtual channel return value
+  * @msg: pointer to the msg buffer
+  * @msglen: msg length
++ * @is_quiet: true for not printing unsuccessful return values, false otherwise
+  *
+  * send msg to VF
+  **/
+-static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
+-				  u32 v_retval, u8 *msg, u16 msglen)
++static int i40e_vc_send_msg_to_vf_ex(struct i40e_vf *vf, u32 v_opcode,
++				     u32 v_retval, u8 *msg, u16 msglen,
++				     bool is_quiet)
+ {
+ 	struct i40e_pf *pf;
+ 	struct i40e_hw *hw;
+@@ -1850,7 +1852,7 @@ static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
+ 	abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+ 
+ 	/* single place to detect unsuccessful return values */
+-	if (v_retval) {
++	if (v_retval && !is_quiet) {
+ 		vf->num_invalid_msgs++;
+ 		dev_info(&pf->pdev->dev, "VF %d failed opcode %d, retval: %d\n",
+ 			 vf->vf_id, v_opcode, v_retval);
+@@ -1880,6 +1882,23 @@ static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
+ 	return 0;
+ }
+ 
++/**
++ * i40e_vc_send_msg_to_vf
++ * @vf: pointer to the VF info
++ * @v_opcode: virtual channel opcode
++ * @v_retval: virtual channel return value
++ * @msg: pointer to the msg buffer
++ * @msglen: msg length
++ *
++ * send msg to VF
++ **/
++static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
++				  u32 v_retval, u8 *msg, u16 msglen)
++{
++	return i40e_vc_send_msg_to_vf_ex(vf, v_opcode, v_retval,
++					 msg, msglen, false);
++}
++
+ /**
+  * i40e_vc_send_resp_to_vf
+  * @vf: pointer to the VF info
+@@ -2641,6 +2660,7 @@ error_param:
+  * i40e_check_vf_permission
+  * @vf: pointer to the VF info
+  * @al: MAC address list from virtchnl
++ * @is_quiet: set true for printing msg without opcode info, false otherwise
+  *
+  * Check that the given list of MAC addresses is allowed. Will return -EPERM
+  * if any address in the list is not valid. Checks the following conditions:
+@@ -2655,13 +2675,15 @@ error_param:
+  * addresses might not be accurate.
+  **/
+ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+-					   struct virtchnl_ether_addr_list *al)
++					   struct virtchnl_ether_addr_list *al,
++					   bool *is_quiet)
+ {
+ 	struct i40e_pf *pf = vf->pf;
+ 	struct i40e_vsi *vsi = pf->vsi[vf->lan_vsi_idx];
+ 	int mac2add_cnt = 0;
+ 	int i;
+ 
++	*is_quiet = false;
+ 	for (i = 0; i < al->num_elements; i++) {
+ 		struct i40e_mac_filter *f;
+ 		u8 *addr = al->list[i].addr;
+@@ -2685,6 +2707,7 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 		    !ether_addr_equal(addr, vf->default_lan_addr.addr)) {
+ 			dev_err(&pf->pdev->dev,
+ 				"VF attempting to override administratively set MAC address, bring down and up the VF interface to resume normal operation\n");
++			*is_quiet = true;
+ 			return -EPERM;
+ 		}
+ 
+@@ -2721,6 +2744,7 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 	    (struct virtchnl_ether_addr_list *)msg;
+ 	struct i40e_pf *pf = vf->pf;
+ 	struct i40e_vsi *vsi = NULL;
++	bool is_quiet = false;
+ 	i40e_status ret = 0;
+ 	int i;
+ 
+@@ -2737,7 +2761,7 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 	 */
+ 	spin_lock_bh(&vsi->mac_filter_hash_lock);
+ 
+-	ret = i40e_check_vf_permission(vf, al);
++	ret = i40e_check_vf_permission(vf, al, &is_quiet);
+ 	if (ret) {
+ 		spin_unlock_bh(&vsi->mac_filter_hash_lock);
+ 		goto error_param;
+@@ -2775,8 +2799,8 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 
+ error_param:
+ 	/* send the response to the VF */
+-	return i40e_vc_send_resp_to_vf(vf, VIRTCHNL_OP_ADD_ETH_ADDR,
+-				       ret);
++	return i40e_vc_send_msg_to_vf_ex(vf, VIRTCHNL_OP_ADD_ETH_ADDR,
++				       ret, NULL, 0, is_quiet);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 7aa49d4eaa87c..de7794ebc7e73 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -2598,8 +2598,11 @@ static int iavf_validate_ch_config(struct iavf_adapter *adapter,
+ 		total_max_rate += tx_rate;
+ 		num_qps += mqprio_qopt->qopt.count[i];
+ 	}
+-	if (num_qps > IAVF_MAX_REQ_QUEUES)
++	if (num_qps > adapter->num_active_queues) {
++		dev_err(&adapter->pdev->dev,
++			"Cannot support requested number of queues\n");
+ 		return -EINVAL;
++	}
+ 
+ 	ret = iavf_validate_tx_bandwidth(adapter, total_max_rate);
+ 	return ret;
+diff --git a/drivers/net/ethernet/sfc/falcon/rx.c b/drivers/net/ethernet/sfc/falcon/rx.c
+index 11a6aee852e92..0c6cc21913693 100644
+--- a/drivers/net/ethernet/sfc/falcon/rx.c
++++ b/drivers/net/ethernet/sfc/falcon/rx.c
+@@ -110,6 +110,8 @@ static struct page *ef4_reuse_page(struct ef4_rx_queue *rx_queue)
+ 	struct ef4_rx_page_state *state;
+ 	unsigned index;
+ 
++	if (unlikely(!rx_queue->page_ring))
++		return NULL;
+ 	index = rx_queue->page_remove & rx_queue->page_ptr_mask;
+ 	page = rx_queue->page_ring[index];
+ 	if (page == NULL)
+@@ -293,6 +295,9 @@ static void ef4_recycle_rx_pages(struct ef4_channel *channel,
+ {
+ 	struct ef4_rx_queue *rx_queue = ef4_channel_get_rx_queue(channel);
+ 
++	if (unlikely(!rx_queue->page_ring))
++		return;
++
+ 	do {
+ 		ef4_recycle_rx_page(channel, rx_buf);
+ 		rx_buf = ef4_rx_buf_next(rx_queue, rx_buf);
+diff --git a/drivers/net/ethernet/sfc/rx_common.c b/drivers/net/ethernet/sfc/rx_common.c
+index 8834bcb12fa97..e423b17e2a148 100644
+--- a/drivers/net/ethernet/sfc/rx_common.c
++++ b/drivers/net/ethernet/sfc/rx_common.c
+@@ -45,6 +45,8 @@ static struct page *efx_reuse_page(struct efx_rx_queue *rx_queue)
+ 	unsigned int index;
+ 	struct page *page;
+ 
++	if (unlikely(!rx_queue->page_ring))
++		return NULL;
+ 	index = rx_queue->page_remove & rx_queue->page_ptr_mask;
+ 	page = rx_queue->page_ring[index];
+ 	if (page == NULL)
+@@ -114,6 +116,9 @@ void efx_recycle_rx_pages(struct efx_channel *channel,
+ {
+ 	struct efx_rx_queue *rx_queue = efx_channel_get_rx_queue(channel);
+ 
++	if (unlikely(!rx_queue->page_ring))
++		return;
++
+ 	do {
+ 		efx_recycle_rx_page(channel, rx_buf);
+ 		rx_buf = efx_rx_buf_next(rx_queue, rx_buf);
+diff --git a/drivers/net/ieee802154/atusb.c b/drivers/net/ieee802154/atusb.c
+index 23ee0b14cbfa1..2f5e7b31032aa 100644
+--- a/drivers/net/ieee802154/atusb.c
++++ b/drivers/net/ieee802154/atusb.c
+@@ -93,7 +93,9 @@ static int atusb_control_msg(struct atusb *atusb, unsigned int pipe,
+ 
+ 	ret = usb_control_msg(usb_dev, pipe, request, requesttype,
+ 			      value, index, data, size, timeout);
+-	if (ret < 0) {
++	if (ret < size) {
++		ret = ret < 0 ? ret : -ENODATA;
++
+ 		atusb->err = ret;
+ 		dev_err(&usb_dev->dev,
+ 			"%s: req 0x%02x val 0x%x idx 0x%x, error %d\n",
+@@ -861,9 +863,9 @@ static int atusb_get_and_show_build(struct atusb *atusb)
+ 	if (!build)
+ 		return -ENOMEM;
+ 
+-	ret = atusb_control_msg(atusb, usb_rcvctrlpipe(usb_dev, 0),
+-				ATUSB_BUILD, ATUSB_REQ_FROM_DEV, 0, 0,
+-				build, ATUSB_BUILD_SIZE, 1000);
++	/* We cannot call atusb_control_msg() here, since this request may read various length data */
++	ret = usb_control_msg(atusb->usb_dev, usb_rcvctrlpipe(usb_dev, 0), ATUSB_BUILD,
++			      ATUSB_REQ_FROM_DEV, 0, 0, build, ATUSB_BUILD_SIZE, 1000);
+ 	if (ret >= 0) {
+ 		build[ret] = 0;
+ 		dev_info(&usb_dev->dev, "Firmware: build %s\n", build);
+diff --git a/drivers/net/usb/rndis_host.c b/drivers/net/usb/rndis_host.c
+index f9b359d4e2939..1505fe3f87ed3 100644
+--- a/drivers/net/usb/rndis_host.c
++++ b/drivers/net/usb/rndis_host.c
+@@ -608,6 +608,11 @@ static const struct usb_device_id	products [] = {
+ 	USB_DEVICE_AND_INTERFACE_INFO(0x1630, 0x0042,
+ 				      USB_CLASS_COMM, 2 /* ACM */, 0x0ff),
+ 	.driver_info = (unsigned long) &rndis_poll_status_info,
++}, {
++	/* Hytera Communications DMR radios' "Radio to PC Network" */
++	USB_VENDOR_AND_INTERFACE_INFO(0x238b,
++				      USB_CLASS_COMM, 2 /* ACM */, 0x0ff),
++	.driver_info = (unsigned long)&rndis_info,
+ }, {
+ 	/* RNDIS is MSFT's un-official variant of CDC ACM */
+ 	USB_INTERFACE_INFO(USB_CLASS_COMM, 2 /* ACM */, 0x0ff),
+diff --git a/drivers/power/reset/ltc2952-poweroff.c b/drivers/power/reset/ltc2952-poweroff.c
+index 318927938b05b..d3844ae9eca4a 100644
+--- a/drivers/power/reset/ltc2952-poweroff.c
++++ b/drivers/power/reset/ltc2952-poweroff.c
+@@ -159,8 +159,8 @@ static void ltc2952_poweroff_kill(void)
+ 
+ static void ltc2952_poweroff_default(struct ltc2952_poweroff *data)
+ {
+-	data->wde_interval = 300L * 1E6L;
+-	data->trigger_delay = ktime_set(2, 500L*1E6L);
++	data->wde_interval = 300L * NSEC_PER_MSEC;
++	data->trigger_delay = ktime_set(2, 500L * NSEC_PER_MSEC);
+ 
+ 	hrtimer_init(&data->timer_trigger, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	data->timer_trigger.function = ltc2952_poweroff_timer_trigger;
+diff --git a/drivers/power/supply/bq25890_charger.c b/drivers/power/supply/bq25890_charger.c
+index 945c3257ca931..fe814805c68b5 100644
+--- a/drivers/power/supply/bq25890_charger.c
++++ b/drivers/power/supply/bq25890_charger.c
+@@ -581,12 +581,12 @@ static irqreturn_t __bq25890_handle_irq(struct bq25890_device *bq)
+ 
+ 	if (!new_state.online && bq->state.online) {	    /* power removed */
+ 		/* disable ADC */
+-		ret = bq25890_field_write(bq, F_CONV_START, 0);
++		ret = bq25890_field_write(bq, F_CONV_RATE, 0);
+ 		if (ret < 0)
+ 			goto error;
+ 	} else if (new_state.online && !bq->state.online) { /* power inserted */
+ 		/* enable ADC, to have control of charge current/voltage */
+-		ret = bq25890_field_write(bq, F_CONV_START, 1);
++		ret = bq25890_field_write(bq, F_CONV_RATE, 1);
+ 		if (ret < 0)
+ 			goto error;
+ 	}
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index 38e3aa642131d..280c54c23e37e 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -834,6 +834,10 @@ power_supply_find_ocv2cap_table(struct power_supply_battery_info *info,
+ 		return NULL;
+ 
+ 	for (i = 0; i < POWER_SUPPLY_OCV_TEMP_MAX; i++) {
++		/* Out of capacity tables */
++		if (!info->ocv_table[i])
++			break;
++
+ 		temp_diff = abs(info->ocv_temp[i] - temp);
+ 
+ 		if (temp_diff < best_temp_diff) {
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 30d27b6706746..d4e66c595eb87 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -2950,6 +2950,8 @@ void iscsi_conn_teardown(struct iscsi_cls_conn *cls_conn)
+ {
+ 	struct iscsi_conn *conn = cls_conn->dd_data;
+ 	struct iscsi_session *session = conn->session;
++	char *tmp_persistent_address = conn->persistent_address;
++	char *tmp_local_ipaddr = conn->local_ipaddr;
+ 
+ 	del_timer_sync(&conn->transport_timer);
+ 
+@@ -2971,8 +2973,6 @@ void iscsi_conn_teardown(struct iscsi_cls_conn *cls_conn)
+ 	spin_lock_bh(&session->frwd_lock);
+ 	free_pages((unsigned long) conn->data,
+ 		   get_order(ISCSI_DEF_MAX_RECV_SEG_LEN));
+-	kfree(conn->persistent_address);
+-	kfree(conn->local_ipaddr);
+ 	/* regular RX path uses back_lock */
+ 	spin_lock_bh(&session->back_lock);
+ 	kfifo_in(&session->cmdpool.queue, (void*)&conn->login_task,
+@@ -2984,6 +2984,8 @@ void iscsi_conn_teardown(struct iscsi_cls_conn *cls_conn)
+ 	mutex_unlock(&session->eh_mutex);
+ 
+ 	iscsi_destroy_conn(cls_conn);
++	kfree(tmp_persistent_address);
++	kfree(tmp_local_ipaddr);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_conn_teardown);
+ 
+diff --git a/drivers/usb/mtu3/mtu3_gadget.c b/drivers/usb/mtu3/mtu3_gadget.c
+index a3e1105c5c662..b7a6363f387aa 100644
+--- a/drivers/usb/mtu3/mtu3_gadget.c
++++ b/drivers/usb/mtu3/mtu3_gadget.c
+@@ -77,7 +77,7 @@ static int mtu3_ep_enable(struct mtu3_ep *mep)
+ 		if (usb_endpoint_xfer_int(desc) ||
+ 				usb_endpoint_xfer_isoc(desc)) {
+ 			interval = desc->bInterval;
+-			interval = clamp_val(interval, 1, 16) - 1;
++			interval = clamp_val(interval, 1, 16);
+ 			if (usb_endpoint_xfer_isoc(desc) && comp_desc)
+ 				mult = comp_desc->bmAttributes;
+ 		}
+@@ -89,7 +89,7 @@ static int mtu3_ep_enable(struct mtu3_ep *mep)
+ 		if (usb_endpoint_xfer_isoc(desc) ||
+ 				usb_endpoint_xfer_int(desc)) {
+ 			interval = desc->bInterval;
+-			interval = clamp_val(interval, 1, 16) - 1;
++			interval = clamp_val(interval, 1, 16);
+ 			mult = usb_endpoint_maxp_mult(desc) - 1;
+ 		}
+ 		break;
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index b39bf416d5114..9bcd77db980df 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -1147,7 +1147,8 @@ static bool __need_flush_quota(struct f2fs_sb_info *sbi)
+ 	if (!is_journalled_quota(sbi))
+ 		return false;
+ 
+-	down_write(&sbi->quota_sem);
++	if (!down_write_trylock(&sbi->quota_sem))
++		return true;
+ 	if (is_sbi_flag_set(sbi, SBI_QUOTA_SKIP_FLUSH)) {
+ 		ret = false;
+ 	} else if (is_sbi_flag_set(sbi, SBI_QUOTA_NEED_REPAIR)) {
+diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c
+index 3fbd98f61ea5c..646735aad45df 100644
+--- a/fs/xfs/xfs_ioctl.c
++++ b/fs/xfs/xfs_ioctl.c
+@@ -686,7 +686,8 @@ xfs_ioc_space(
+ 
+ 	if (bf->l_start > XFS_ISIZE(ip)) {
+ 		error = xfs_alloc_file_space(ip, XFS_ISIZE(ip),
+-				bf->l_start - XFS_ISIZE(ip), 0);
++				bf->l_start - XFS_ISIZE(ip),
++				XFS_BMAPI_PREALLOC);
+ 		if (error)
+ 			goto out_unlock;
+ 	}
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index e4f154119e52c..cd2d094b9f820 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3134,7 +3134,7 @@ struct trace_buffer_struct {
+ 	char buffer[4][TRACE_BUF_SIZE];
+ };
+ 
+-static struct trace_buffer_struct *trace_percpu_buffer;
++static struct trace_buffer_struct __percpu *trace_percpu_buffer;
+ 
+ /*
+  * Thise allows for lockless recording.  If we're nested too deeply, then
+@@ -3144,7 +3144,7 @@ static char *get_trace_buf(void)
+ {
+ 	struct trace_buffer_struct *buffer = this_cpu_ptr(trace_percpu_buffer);
+ 
+-	if (!buffer || buffer->nesting >= 4)
++	if (!trace_percpu_buffer || buffer->nesting >= 4)
+ 		return NULL;
+ 
+ 	buffer->nesting++;
+@@ -3163,7 +3163,7 @@ static void put_trace_buf(void)
+ 
+ static int alloc_percpu_trace_buffer(void)
+ {
+-	struct trace_buffer_struct *buffers;
++	struct trace_buffer_struct __percpu *buffers;
+ 
+ 	if (trace_percpu_buffer)
+ 		return 0;
+diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c
+index 9af99c39b9fd9..139894ca788b9 100644
+--- a/net/batman-adv/multicast.c
++++ b/net/batman-adv/multicast.c
+@@ -1373,6 +1373,7 @@ batadv_mcast_forw_rtr_node_get(struct batadv_priv *bat_priv,
+  * @bat_priv: the bat priv with all the soft interface information
+  * @skb: The multicast packet to check
+  * @orig: an originator to be set to forward the skb to
++ * @is_routable: stores whether the destination is routable
+  *
+  * Return: the forwarding mode as enum batadv_forw_mode and in case of
+  * BATADV_FORW_SINGLE set the orig to the single originator the skb
+@@ -1380,17 +1381,16 @@ batadv_mcast_forw_rtr_node_get(struct batadv_priv *bat_priv,
+  */
+ enum batadv_forw_mode
+ batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb,
+-		       struct batadv_orig_node **orig)
++		       struct batadv_orig_node **orig, int *is_routable)
+ {
+ 	int ret, tt_count, ip_count, unsnoop_count, total_count;
+ 	bool is_unsnoopable = false;
+ 	unsigned int mcast_fanout;
+ 	struct ethhdr *ethhdr;
+-	int is_routable = 0;
+ 	int rtr_count = 0;
+ 
+ 	ret = batadv_mcast_forw_mode_check(bat_priv, skb, &is_unsnoopable,
+-					   &is_routable);
++					   is_routable);
+ 	if (ret == -ENOMEM)
+ 		return BATADV_FORW_NONE;
+ 	else if (ret < 0)
+@@ -1403,7 +1403,7 @@ batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ 	ip_count = batadv_mcast_forw_want_all_ip_count(bat_priv, ethhdr);
+ 	unsnoop_count = !is_unsnoopable ? 0 :
+ 			atomic_read(&bat_priv->mcast.num_want_all_unsnoopables);
+-	rtr_count = batadv_mcast_forw_rtr_count(bat_priv, is_routable);
++	rtr_count = batadv_mcast_forw_rtr_count(bat_priv, *is_routable);
+ 
+ 	total_count = tt_count + ip_count + unsnoop_count + rtr_count;
+ 
+@@ -1723,6 +1723,7 @@ batadv_mcast_forw_want_rtr(struct batadv_priv *bat_priv,
+  * @bat_priv: the bat priv with all the soft interface information
+  * @skb: the multicast packet to transmit
+  * @vid: the vlan identifier
++ * @is_routable: stores whether the destination is routable
+  *
+  * Sends copies of a frame with multicast destination to any node that signaled
+  * interest in it, that is either via the translation table or the according
+@@ -1735,7 +1736,7 @@ batadv_mcast_forw_want_rtr(struct batadv_priv *bat_priv,
+  * is neither IPv4 nor IPv6. NET_XMIT_SUCCESS otherwise.
+  */
+ int batadv_mcast_forw_send(struct batadv_priv *bat_priv, struct sk_buff *skb,
+-			   unsigned short vid)
++			   unsigned short vid, int is_routable)
+ {
+ 	int ret;
+ 
+@@ -1751,12 +1752,16 @@ int batadv_mcast_forw_send(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ 		return ret;
+ 	}
+ 
++	if (!is_routable)
++		goto skip_mc_router;
++
+ 	ret = batadv_mcast_forw_want_rtr(bat_priv, skb, vid);
+ 	if (ret != NET_XMIT_SUCCESS) {
+ 		kfree_skb(skb);
+ 		return ret;
+ 	}
+ 
++skip_mc_router:
+ 	consume_skb(skb);
+ 	return ret;
+ }
+diff --git a/net/batman-adv/multicast.h b/net/batman-adv/multicast.h
+index 3e114bc5ca3bb..1e787b522e69c 100644
+--- a/net/batman-adv/multicast.h
++++ b/net/batman-adv/multicast.h
+@@ -44,7 +44,8 @@ enum batadv_forw_mode {
+ 
+ enum batadv_forw_mode
+ batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb,
+-		       struct batadv_orig_node **mcast_single_orig);
++		       struct batadv_orig_node **mcast_single_orig,
++		       int *is_routable);
+ 
+ int batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv,
+ 				struct sk_buff *skb,
+@@ -52,7 +53,7 @@ int batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv,
+ 				struct batadv_orig_node *orig_node);
+ 
+ int batadv_mcast_forw_send(struct batadv_priv *bat_priv, struct sk_buff *skb,
+-			   unsigned short vid);
++			   unsigned short vid, int is_routable);
+ 
+ void batadv_mcast_init(struct batadv_priv *bat_priv);
+ 
+@@ -71,7 +72,8 @@ void batadv_mcast_purge_orig(struct batadv_orig_node *orig_node);
+ 
+ static inline enum batadv_forw_mode
+ batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb,
+-		       struct batadv_orig_node **mcast_single_orig)
++		       struct batadv_orig_node **mcast_single_orig,
++		       int *is_routable)
+ {
+ 	return BATADV_FORW_ALL;
+ }
+@@ -88,7 +90,7 @@ batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv,
+ 
+ static inline int
+ batadv_mcast_forw_send(struct batadv_priv *bat_priv, struct sk_buff *skb,
+-		       unsigned short vid)
++		       unsigned short vid, int is_routable)
+ {
+ 	kfree_skb(skb);
+ 	return NET_XMIT_DROP;
+diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
+index 82e7ca886605a..7496047b318a4 100644
+--- a/net/batman-adv/soft-interface.c
++++ b/net/batman-adv/soft-interface.c
+@@ -200,6 +200,7 @@ static netdev_tx_t batadv_interface_tx(struct sk_buff *skb,
+ 	int gw_mode;
+ 	enum batadv_forw_mode forw_mode = BATADV_FORW_SINGLE;
+ 	struct batadv_orig_node *mcast_single_orig = NULL;
++	int mcast_is_routable = 0;
+ 	int network_offset = ETH_HLEN;
+ 	__be16 proto;
+ 
+@@ -302,7 +303,8 @@ static netdev_tx_t batadv_interface_tx(struct sk_buff *skb,
+ send:
+ 		if (do_bcast && !is_broadcast_ether_addr(ethhdr->h_dest)) {
+ 			forw_mode = batadv_mcast_forw_mode(bat_priv, skb,
+-							   &mcast_single_orig);
++							   &mcast_single_orig,
++							   &mcast_is_routable);
+ 			if (forw_mode == BATADV_FORW_NONE)
+ 				goto dropped;
+ 
+@@ -367,7 +369,8 @@ send:
+ 			ret = batadv_mcast_forw_send_orig(bat_priv, skb, vid,
+ 							  mcast_single_orig);
+ 		} else if (forw_mode == BATADV_FORW_SOME) {
+-			ret = batadv_mcast_forw_send(bat_priv, skb, vid);
++			ret = batadv_mcast_forw_send(bat_priv, skb, vid,
++						     mcast_is_routable);
+ 		} else {
+ 			if (batadv_dat_snoop_outgoing_arp_request(bat_priv,
+ 								  skb))
+diff --git a/net/core/lwtunnel.c b/net/core/lwtunnel.c
+index 8ec7d13d28608..f590b0e672a9b 100644
+--- a/net/core/lwtunnel.c
++++ b/net/core/lwtunnel.c
+@@ -192,6 +192,10 @@ int lwtunnel_valid_encap_type_attr(struct nlattr *attr, int remaining,
+ 			nla_entype = nla_find(attrs, attrlen, RTA_ENCAP_TYPE);
+ 
+ 			if (nla_entype) {
++				if (nla_len(nla_entype) < sizeof(u16)) {
++					NL_SET_ERR_MSG(extack, "Invalid RTA_ENCAP_TYPE");
++					return -EINVAL;
++				}
+ 				encap_type = nla_get_u16(nla_entype);
+ 
+ 				if (lwtunnel_valid_encap_type(encap_type,
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 36f34977dda19..ab6a8f35d369d 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -663,6 +663,19 @@ static int fib_count_nexthops(struct rtnexthop *rtnh, int remaining,
+ 	return nhs;
+ }
+ 
++static int fib_gw_from_attr(__be32 *gw, struct nlattr *nla,
++			    struct netlink_ext_ack *extack)
++{
++	if (nla_len(nla) < sizeof(*gw)) {
++		NL_SET_ERR_MSG(extack, "Invalid IPv4 address in RTA_GATEWAY");
++		return -EINVAL;
++	}
++
++	*gw = nla_get_in_addr(nla);
++
++	return 0;
++}
++
+ /* only called when fib_nh is integrated into fib_info */
+ static int fib_get_nhs(struct fib_info *fi, struct rtnexthop *rtnh,
+ 		       int remaining, struct fib_config *cfg,
+@@ -705,7 +718,11 @@ static int fib_get_nhs(struct fib_info *fi, struct rtnexthop *rtnh,
+ 				return -EINVAL;
+ 			}
+ 			if (nla) {
+-				fib_cfg.fc_gw4 = nla_get_in_addr(nla);
++				ret = fib_gw_from_attr(&fib_cfg.fc_gw4, nla,
++						       extack);
++				if (ret)
++					goto errout;
++
+ 				if (fib_cfg.fc_gw4)
+ 					fib_cfg.fc_gw_family = AF_INET;
+ 			} else if (nlav) {
+@@ -715,10 +732,18 @@ static int fib_get_nhs(struct fib_info *fi, struct rtnexthop *rtnh,
+ 			}
+ 
+ 			nla = nla_find(attrs, attrlen, RTA_FLOW);
+-			if (nla)
++			if (nla) {
++				if (nla_len(nla) < sizeof(u32)) {
++					NL_SET_ERR_MSG(extack, "Invalid RTA_FLOW");
++					return -EINVAL;
++				}
+ 				fib_cfg.fc_flow = nla_get_u32(nla);
++			}
+ 
+ 			fib_cfg.fc_encap = nla_find(attrs, attrlen, RTA_ENCAP);
++			/* RTA_ENCAP_TYPE length checked in
++			 * lwtunnel_valid_encap_type_attr
++			 */
+ 			nla = nla_find(attrs, attrlen, RTA_ENCAP_TYPE);
+ 			if (nla)
+ 				fib_cfg.fc_encap_type = nla_get_u16(nla);
+@@ -903,6 +928,7 @@ int fib_nh_match(struct net *net, struct fib_config *cfg, struct fib_info *fi,
+ 		attrlen = rtnh_attrlen(rtnh);
+ 		if (attrlen > 0) {
+ 			struct nlattr *nla, *nlav, *attrs = rtnh_attrs(rtnh);
++			int err;
+ 
+ 			nla = nla_find(attrs, attrlen, RTA_GATEWAY);
+ 			nlav = nla_find(attrs, attrlen, RTA_VIA);
+@@ -913,12 +939,17 @@ int fib_nh_match(struct net *net, struct fib_config *cfg, struct fib_info *fi,
+ 			}
+ 
+ 			if (nla) {
++				__be32 gw;
++
++				err = fib_gw_from_attr(&gw, nla, extack);
++				if (err)
++					return err;
++
+ 				if (nh->fib_nh_gw_family != AF_INET ||
+-				    nla_get_in_addr(nla) != nh->fib_nh_gw4)
++				    gw != nh->fib_nh_gw4)
+ 					return 1;
+ 			} else if (nlav) {
+ 				struct fib_config cfg2;
+-				int err;
+ 
+ 				err = fib_gw_from_via(&cfg2, nlav, extack);
+ 				if (err)
+@@ -941,8 +972,14 @@ int fib_nh_match(struct net *net, struct fib_config *cfg, struct fib_info *fi,
+ 
+ #ifdef CONFIG_IP_ROUTE_CLASSID
+ 			nla = nla_find(attrs, attrlen, RTA_FLOW);
+-			if (nla && nla_get_u32(nla) != nh->nh_tclassid)
+-				return 1;
++			if (nla) {
++				if (nla_len(nla) < sizeof(u32)) {
++					NL_SET_ERR_MSG(extack, "Invalid RTA_FLOW");
++					return -EINVAL;
++				}
++				if (nla_get_u32(nla) != nh->nh_tclassid)
++					return 1;
++			}
+ #endif
+ 		}
+ 
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 86ed2afbee302..ef2068a60d4ad 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -3006,7 +3006,7 @@ int udp4_seq_show(struct seq_file *seq, void *v)
+ {
+ 	seq_setwidth(seq, 127);
+ 	if (v == SEQ_START_TOKEN)
+-		seq_puts(seq, "  sl  local_address rem_address   st tx_queue "
++		seq_puts(seq, "   sl  local_address rem_address   st tx_queue "
+ 			   "rx_queue tr tm->when retrnsmt   uid  timeout "
+ 			   "inode ref pointer drops");
+ 	else {
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index 23aeeb46f99fc..99f2dc802e366 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -804,6 +804,8 @@ vti6_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 	struct net *net = dev_net(dev);
+ 	struct vti6_net *ip6n = net_generic(net, vti6_net_id);
+ 
++	memset(&p1, 0, sizeof(p1));
++
+ 	switch (cmd) {
+ 	case SIOCGETTUNNEL:
+ 		if (dev == ip6n->fb_tnl_dev) {
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 00f133a55ef7c..38349054e361e 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -1020,6 +1020,9 @@ static int do_rawv6_setsockopt(struct sock *sk, int level, int optname,
+ 	struct raw6_sock *rp = raw6_sk(sk);
+ 	int val;
+ 
++	if (optlen < sizeof(val))
++		return -EINVAL;
++
+ 	if (copy_from_sockptr(&val, optval, sizeof(val)))
+ 		return -EFAULT;
+ 
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 6fef0d7586bf6..654bf4ca61260 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -5113,6 +5113,19 @@ out:
+ 	return should_notify;
+ }
+ 
++static int fib6_gw_from_attr(struct in6_addr *gw, struct nlattr *nla,
++			     struct netlink_ext_ack *extack)
++{
++	if (nla_len(nla) < sizeof(*gw)) {
++		NL_SET_ERR_MSG(extack, "Invalid IPv6 address in RTA_GATEWAY");
++		return -EINVAL;
++	}
++
++	*gw = nla_get_in6_addr(nla);
++
++	return 0;
++}
++
+ static int ip6_route_multipath_add(struct fib6_config *cfg,
+ 				   struct netlink_ext_ack *extack)
+ {
+@@ -5153,10 +5166,18 @@ static int ip6_route_multipath_add(struct fib6_config *cfg,
+ 
+ 			nla = nla_find(attrs, attrlen, RTA_GATEWAY);
+ 			if (nla) {
+-				r_cfg.fc_gateway = nla_get_in6_addr(nla);
++				err = fib6_gw_from_attr(&r_cfg.fc_gateway, nla,
++							extack);
++				if (err)
++					goto cleanup;
++
+ 				r_cfg.fc_flags |= RTF_GATEWAY;
+ 			}
+ 			r_cfg.fc_encap = nla_find(attrs, attrlen, RTA_ENCAP);
++
++			/* RTA_ENCAP_TYPE length checked in
++			 * lwtunnel_valid_encap_type_attr
++			 */
+ 			nla = nla_find(attrs, attrlen, RTA_ENCAP_TYPE);
+ 			if (nla)
+ 				r_cfg.fc_encap_type = nla_get_u16(nla);
+@@ -5323,7 +5344,13 @@ static int ip6_route_multipath_del(struct fib6_config *cfg,
+ 
+ 			nla = nla_find(attrs, attrlen, RTA_GATEWAY);
+ 			if (nla) {
+-				nla_memcpy(&r_cfg.fc_gateway, nla, 16);
++				err = fib6_gw_from_attr(&r_cfg.fc_gateway, nla,
++							extack);
++				if (err) {
++					last_err = err;
++					goto next_rtnh;
++				}
++
+ 				r_cfg.fc_flags |= RTF_GATEWAY;
+ 			}
+ 		}
+@@ -5331,6 +5358,7 @@ static int ip6_route_multipath_del(struct fib6_config *cfg,
+ 		if (err)
+ 			last_err = err;
+ 
++next_rtnh:
+ 		rtnh = rtnh_next(rtnh, &remaining);
+ 	}
+ 
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 7bd42827540ae..778bf262418b5 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -5194,7 +5194,7 @@ static int ieee80211_prep_connection(struct ieee80211_sub_if_data *sdata,
+ 	 */
+ 	if (new_sta) {
+ 		u32 rates = 0, basic_rates = 0;
+-		bool have_higher_than_11mbit;
++		bool have_higher_than_11mbit = false;
+ 		int min_rate = INT_MAX, min_rate_index = -1;
+ 		const struct cfg80211_bss_ies *ies;
+ 		int shift = ieee80211_vif_get_shift(&sdata->vif);
+diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
+index 6d16e1ab1a8ab..eef0e3f2f25b0 100644
+--- a/net/netrom/af_netrom.c
++++ b/net/netrom/af_netrom.c
+@@ -306,7 +306,7 @@ static int nr_setsockopt(struct socket *sock, int level, int optname,
+ 	if (optlen < sizeof(unsigned int))
+ 		return -EINVAL;
+ 
+-	if (copy_from_sockptr(&opt, optval, sizeof(unsigned int)))
++	if (copy_from_sockptr(&opt, optval, sizeof(unsigned long)))
+ 		return -EFAULT;
+ 
+ 	switch (optname) {
+diff --git a/net/phonet/pep.c b/net/phonet/pep.c
+index 72018e5e4d8ef..65d463ad87707 100644
+--- a/net/phonet/pep.c
++++ b/net/phonet/pep.c
+@@ -868,6 +868,7 @@ static struct sock *pep_sock_accept(struct sock *sk, int flags, int *errp,
+ 
+ 	err = pep_accept_conn(newsk, skb);
+ 	if (err) {
++		__sock_put(sk);
+ 		sock_put(newsk);
+ 		newsk = NULL;
+ 		goto drop;
+diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
+index ade2d6ddc9148..af8c63a9ec18c 100644
+--- a/net/sched/sch_qfq.c
++++ b/net/sched/sch_qfq.c
+@@ -1421,10 +1421,8 @@ static int qfq_init_qdisc(struct Qdisc *sch, struct nlattr *opt,
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (qdisc_dev(sch)->tx_queue_len + 1 > QFQ_MAX_AGG_CLASSES)
+-		max_classes = QFQ_MAX_AGG_CLASSES;
+-	else
+-		max_classes = qdisc_dev(sch)->tx_queue_len + 1;
++	max_classes = min_t(u64, (u64)qdisc_dev(sch)->tx_queue_len + 1,
++			    QFQ_MAX_AGG_CLASSES);
+ 	/* max_cl_shift = floor(log_2(max_classes)) */
+ 	max_cl_shift = __fls(max_classes);
+ 	q->max_agg_classes = 1<<max_cl_shift;
+diff --git a/samples/ftrace/ftrace-direct-modify.c b/samples/ftrace/ftrace-direct-modify.c
+index 5b9a09957c6e0..89e6bf27cd9f6 100644
+--- a/samples/ftrace/ftrace-direct-modify.c
++++ b/samples/ftrace/ftrace-direct-modify.c
+@@ -3,6 +3,9 @@
+ #include <linux/kthread.h>
+ #include <linux/ftrace.h>
+ 
++extern void my_direct_func1(void);
++extern void my_direct_func2(void);
++
+ void my_direct_func1(void)
+ {
+ 	trace_printk("my direct func1\n");
+diff --git a/samples/ftrace/ftrace-direct-too.c b/samples/ftrace/ftrace-direct-too.c
+index 3f0079c9bd6fa..11b99325f3dbf 100644
+--- a/samples/ftrace/ftrace-direct-too.c
++++ b/samples/ftrace/ftrace-direct-too.c
+@@ -4,6 +4,9 @@
+ #include <linux/mm.h> /* for handle_mm_fault() */
+ #include <linux/ftrace.h>
+ 
++extern void my_direct_func(struct vm_area_struct *vma,
++			   unsigned long address, unsigned int flags);
++
+ void my_direct_func(struct vm_area_struct *vma,
+ 			unsigned long address, unsigned int flags)
+ {
+diff --git a/samples/ftrace/ftrace-direct.c b/samples/ftrace/ftrace-direct.c
+index a2729d1ef17f5..642c50b5f7166 100644
+--- a/samples/ftrace/ftrace-direct.c
++++ b/samples/ftrace/ftrace-direct.c
+@@ -4,6 +4,8 @@
+ #include <linux/sched.h> /* for wake_up_process() */
+ #include <linux/ftrace.h>
+ 
++extern void my_direct_func(struct task_struct *p);
++
+ void my_direct_func(struct task_struct *p)
+ {
+ 	trace_printk("waking up %s-%d\n", p->comm, p->pid);
+diff --git a/tools/testing/selftests/x86/test_vsyscall.c b/tools/testing/selftests/x86/test_vsyscall.c
+index 65c141ebfbbde..5b45e6986aeab 100644
+--- a/tools/testing/selftests/x86/test_vsyscall.c
++++ b/tools/testing/selftests/x86/test_vsyscall.c
+@@ -497,7 +497,7 @@ static int test_process_vm_readv(void)
+ 	}
+ 
+ 	if (vsyscall_map_r) {
+-		if (!memcmp(buf, (const void *)0xffffffffff600000, 4096)) {
++		if (!memcmp(buf, remote.iov_base, sizeof(buf))) {
+ 			printf("[OK]\tIt worked and read correct data\n");
+ 		} else {
+ 			printf("[FAIL]\tIt worked but returned incorrect data\n");


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-01-16 10:21 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-01-16 10:21 UTC (permalink / raw
  To: gentoo-commits

commit:     dd4931786451e78297909a9b991c25020953d0fd
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jan 16 10:21:45 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jan 16 10:21:45 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=dd493178

Linux patch 5.10.92

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1091_linux-5.10.92.patch | 873 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 877 insertions(+)

diff --git a/0000_README b/0000_README
index 9e878b9b..6d8b55d4 100644
--- a/0000_README
+++ b/0000_README
@@ -407,6 +407,10 @@ Patch:  1090_linux-5.10.91.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.91
 
+Patch:  1091_linux-5.10.92.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.92
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1091_linux-5.10.92.patch b/1091_linux-5.10.92.patch
new file mode 100644
index 00000000..ea857dbe
--- /dev/null
+++ b/1091_linux-5.10.92.patch
@@ -0,0 +1,873 @@
+diff --git a/Makefile b/Makefile
+index c8d677c7eaa11..a113a29545bdb 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 91
++SUBLEVEL = 92
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/exynos4210-i9100.dts b/arch/arm/boot/dts/exynos4210-i9100.dts
+index 7777bf51a6e64..ecc9d4dc707e4 100644
+--- a/arch/arm/boot/dts/exynos4210-i9100.dts
++++ b/arch/arm/boot/dts/exynos4210-i9100.dts
+@@ -765,7 +765,7 @@
+ 		compatible = "brcm,bcm4330-bt";
+ 
+ 		shutdown-gpios = <&gpl0 4 GPIO_ACTIVE_HIGH>;
+-		reset-gpios = <&gpl1 0 GPIO_ACTIVE_HIGH>;
++		reset-gpios = <&gpl1 0 GPIO_ACTIVE_LOW>;
+ 		device-wakeup-gpios = <&gpx3 1 GPIO_ACTIVE_HIGH>;
+ 		host-wakeup-gpios = <&gpx2 6 GPIO_ACTIVE_HIGH>;
+ 	};
+diff --git a/drivers/bluetooth/bfusb.c b/drivers/bluetooth/bfusb.c
+index 5a321b4076aab..cab93935cc7f1 100644
+--- a/drivers/bluetooth/bfusb.c
++++ b/drivers/bluetooth/bfusb.c
+@@ -628,6 +628,9 @@ static int bfusb_probe(struct usb_interface *intf, const struct usb_device_id *i
+ 	data->bulk_out_ep   = bulk_out_ep->desc.bEndpointAddress;
+ 	data->bulk_pkt_size = le16_to_cpu(bulk_out_ep->desc.wMaxPacketSize);
+ 
++	if (!data->bulk_pkt_size)
++		goto done;
++
+ 	rwlock_init(&data->lock);
+ 
+ 	data->reassembly = NULL;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index e0859f4e28073..538232b4c42ac 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -379,6 +379,15 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x8087, 0x0aaa), .driver_info = BTUSB_INTEL_NEW |
+ 						     BTUSB_WIDEBAND_SPEECH |
+ 						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x10ab, 0x9309), .driver_info = BTUSB_QCA_WCN6855 |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x10ab, 0x9409), .driver_info = BTUSB_QCA_WCN6855 |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x0489, 0xe0d0), .driver_info = BTUSB_QCA_WCN6855 |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
+ 
+ 	/* Other Intel Bluetooth devices */
+ 	{ USB_VENDOR_AND_INTERFACE_INFO(0x8087, 0xe0, 0x01, 0x01),
+@@ -400,6 +409,14 @@ static const struct usb_device_id blacklist_table[] = {
+ 			 BTUSB_WIDEBAND_SPEECH |
+ 			 BTUSB_VALID_LE_STATES },
+ 
++	/* MediaTek MT7922A Bluetooth devices */
++	{ USB_DEVICE(0x0489, 0xe0d8), .driver_info = BTUSB_MEDIATEK |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x0489, 0xe0d9), .driver_info = BTUSB_MEDIATEK |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
++
+ 	/* Additional Realtek 8723AE Bluetooth devices */
+ 	{ USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
+ 	{ USB_DEVICE(0x13d3, 0x3394), .driver_info = BTUSB_REALTEK },
+@@ -2845,6 +2862,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 		skb = bt_skb_alloc(HCI_WMT_MAX_EVENT_SIZE, GFP_ATOMIC);
+ 		if (!skb) {
+ 			hdev->stat.err_rx++;
++			kfree(urb->setup_packet);
+ 			return;
+ 		}
+ 
+@@ -2865,6 +2883,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 			data->evt_skb = skb_clone(skb, GFP_ATOMIC);
+ 			if (!data->evt_skb) {
+ 				kfree_skb(skb);
++				kfree(urb->setup_packet);
+ 				return;
+ 			}
+ 		}
+@@ -2873,6 +2892,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 		if (err < 0) {
+ 			kfree_skb(data->evt_skb);
+ 			data->evt_skb = NULL;
++			kfree(urb->setup_packet);
+ 			return;
+ 		}
+ 
+@@ -2883,6 +2903,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 			wake_up_bit(&data->flags,
+ 				    BTUSB_TX_WAIT_VND_EVT);
+ 		}
++		kfree(urb->setup_packet);
+ 		return;
+ 	} else if (urb->status == -ENOENT) {
+ 		/* Avoid suspend failed when usb_kill_urb */
+@@ -2903,6 +2924,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 	usb_anchor_urb(urb, &data->ctrl_anchor);
+ 	err = usb_submit_urb(urb, GFP_ATOMIC);
+ 	if (err < 0) {
++		kfree(urb->setup_packet);
+ 		/* -EPERM: urb is being killed;
+ 		 * -ENODEV: device got disconnected
+ 		 */
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 340ad21491e28..8c94380e7a463 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -461,6 +461,7 @@ static struct crng_state primary_crng = {
+  * its value (from 0->1->2).
+  */
+ static int crng_init = 0;
++static bool crng_need_final_init = false;
+ #define crng_ready() (likely(crng_init > 1))
+ static int crng_init_cnt = 0;
+ static unsigned long crng_global_init_time = 0;
+@@ -838,6 +839,36 @@ static void __init crng_initialize_primary(struct crng_state *crng)
+ 	crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
+ }
+ 
++static void crng_finalize_init(struct crng_state *crng)
++{
++	if (crng != &primary_crng || crng_init >= 2)
++		return;
++	if (!system_wq) {
++		/* We can't call numa_crng_init until we have workqueues,
++		 * so mark this for processing later. */
++		crng_need_final_init = true;
++		return;
++	}
++
++	invalidate_batched_entropy();
++	numa_crng_init();
++	crng_init = 2;
++	process_random_ready_list();
++	wake_up_interruptible(&crng_init_wait);
++	kill_fasync(&fasync, SIGIO, POLL_IN);
++	pr_notice("crng init done\n");
++	if (unseeded_warning.missed) {
++		pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
++			  unseeded_warning.missed);
++		unseeded_warning.missed = 0;
++	}
++	if (urandom_warning.missed) {
++		pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
++			  urandom_warning.missed);
++		urandom_warning.missed = 0;
++	}
++}
++
+ #ifdef CONFIG_NUMA
+ static void do_numa_crng_init(struct work_struct *work)
+ {
+@@ -853,8 +884,8 @@ static void do_numa_crng_init(struct work_struct *work)
+ 		crng_initialize_secondary(crng);
+ 		pool[i] = crng;
+ 	}
+-	mb();
+-	if (cmpxchg(&crng_node_pool, NULL, pool)) {
++	/* pairs with READ_ONCE() in select_crng() */
++	if (cmpxchg_release(&crng_node_pool, NULL, pool) != NULL) {
+ 		for_each_node(i)
+ 			kfree(pool[i]);
+ 		kfree(pool);
+@@ -867,8 +898,26 @@ static void numa_crng_init(void)
+ {
+ 	schedule_work(&numa_crng_init_work);
+ }
++
++static struct crng_state *select_crng(void)
++{
++	struct crng_state **pool;
++	int nid = numa_node_id();
++
++	/* pairs with cmpxchg_release() in do_numa_crng_init() */
++	pool = READ_ONCE(crng_node_pool);
++	if (pool && pool[nid])
++		return pool[nid];
++
++	return &primary_crng;
++}
+ #else
+ static void numa_crng_init(void) {}
++
++static struct crng_state *select_crng(void)
++{
++	return &primary_crng;
++}
+ #endif
+ 
+ /*
+@@ -972,38 +1021,23 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
+ 		crng->state[i+4] ^= buf.key[i] ^ rv;
+ 	}
+ 	memzero_explicit(&buf, sizeof(buf));
+-	crng->init_time = jiffies;
++	WRITE_ONCE(crng->init_time, jiffies);
+ 	spin_unlock_irqrestore(&crng->lock, flags);
+-	if (crng == &primary_crng && crng_init < 2) {
+-		invalidate_batched_entropy();
+-		numa_crng_init();
+-		crng_init = 2;
+-		process_random_ready_list();
+-		wake_up_interruptible(&crng_init_wait);
+-		kill_fasync(&fasync, SIGIO, POLL_IN);
+-		pr_notice("crng init done\n");
+-		if (unseeded_warning.missed) {
+-			pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
+-				  unseeded_warning.missed);
+-			unseeded_warning.missed = 0;
+-		}
+-		if (urandom_warning.missed) {
+-			pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
+-				  urandom_warning.missed);
+-			urandom_warning.missed = 0;
+-		}
+-	}
++	crng_finalize_init(crng);
+ }
+ 
+ static void _extract_crng(struct crng_state *crng,
+ 			  __u8 out[CHACHA_BLOCK_SIZE])
+ {
+-	unsigned long v, flags;
+-
+-	if (crng_ready() &&
+-	    (time_after(crng_global_init_time, crng->init_time) ||
+-	     time_after(jiffies, crng->init_time + CRNG_RESEED_INTERVAL)))
+-		crng_reseed(crng, crng == &primary_crng ? &input_pool : NULL);
++	unsigned long v, flags, init_time;
++
++	if (crng_ready()) {
++		init_time = READ_ONCE(crng->init_time);
++		if (time_after(READ_ONCE(crng_global_init_time), init_time) ||
++		    time_after(jiffies, init_time + CRNG_RESEED_INTERVAL))
++			crng_reseed(crng, crng == &primary_crng ?
++				    &input_pool : NULL);
++	}
+ 	spin_lock_irqsave(&crng->lock, flags);
+ 	if (arch_get_random_long(&v))
+ 		crng->state[14] ^= v;
+@@ -1015,15 +1049,7 @@ static void _extract_crng(struct crng_state *crng,
+ 
+ static void extract_crng(__u8 out[CHACHA_BLOCK_SIZE])
+ {
+-	struct crng_state *crng = NULL;
+-
+-#ifdef CONFIG_NUMA
+-	if (crng_node_pool)
+-		crng = crng_node_pool[numa_node_id()];
+-	if (crng == NULL)
+-#endif
+-		crng = &primary_crng;
+-	_extract_crng(crng, out);
++	_extract_crng(select_crng(), out);
+ }
+ 
+ /*
+@@ -1052,15 +1078,7 @@ static void _crng_backtrack_protect(struct crng_state *crng,
+ 
+ static void crng_backtrack_protect(__u8 tmp[CHACHA_BLOCK_SIZE], int used)
+ {
+-	struct crng_state *crng = NULL;
+-
+-#ifdef CONFIG_NUMA
+-	if (crng_node_pool)
+-		crng = crng_node_pool[numa_node_id()];
+-	if (crng == NULL)
+-#endif
+-		crng = &primary_crng;
+-	_crng_backtrack_protect(crng, tmp, used);
++	_crng_backtrack_protect(select_crng(), tmp, used);
+ }
+ 
+ static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
+@@ -1799,6 +1817,8 @@ static void __init init_std_data(struct entropy_store *r)
+ int __init rand_initialize(void)
+ {
+ 	init_std_data(&input_pool);
++	if (crng_need_final_init)
++		crng_finalize_init(&primary_crng);
+ 	crng_initialize_primary(&primary_crng);
+ 	crng_global_init_time = jiffies;
+ 	if (ratelimit_disable) {
+@@ -1973,7 +1993,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+ 		if (crng_init < 2)
+ 			return -ENODATA;
+ 		crng_reseed(&primary_crng, &input_pool);
+-		crng_global_init_time = jiffies - 1;
++		WRITE_ONCE(crng_global_init_time, jiffies - 1);
+ 		return 0;
+ 	default:
+ 		return -EINVAL;
+@@ -2307,7 +2327,8 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
+ 	 * We'll be woken up again once below random_write_wakeup_thresh,
+ 	 * or when the calling thread is about to terminate.
+ 	 */
+-	wait_event_interruptible(random_write_wait, kthread_should_stop() ||
++	wait_event_interruptible(random_write_wait,
++			!system_wq || kthread_should_stop() ||
+ 			ENTROPY_BITS(&input_pool) <= random_write_wakeup_bits);
+ 	mix_pool_bytes(poolp, buffer, count);
+ 	credit_entropy_bits(poolp, entropy);
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index 1f23cb6ece588..e51ca7ca0a2a7 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -3044,9 +3044,9 @@ static void snb_wm_latency_quirk(struct drm_i915_private *dev_priv)
+ 	 * The BIOS provided WM memory latency values are often
+ 	 * inadequate for high resolution displays. Adjust them.
+ 	 */
+-	changed = ilk_increase_wm_latency(dev_priv, dev_priv->wm.pri_latency, 12) |
+-		ilk_increase_wm_latency(dev_priv, dev_priv->wm.spr_latency, 12) |
+-		ilk_increase_wm_latency(dev_priv, dev_priv->wm.cur_latency, 12);
++	changed = ilk_increase_wm_latency(dev_priv, dev_priv->wm.pri_latency, 12);
++	changed |= ilk_increase_wm_latency(dev_priv, dev_priv->wm.spr_latency, 12);
++	changed |= ilk_increase_wm_latency(dev_priv, dev_priv->wm.cur_latency, 12);
+ 
+ 	if (!changed)
+ 		return;
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 2069b16b50eca..cc3876500c4b2 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -459,34 +459,12 @@ check_suspended:
+ }
+ EXPORT_SYMBOL(md_handle_request);
+ 
+-struct md_io {
+-	struct mddev *mddev;
+-	bio_end_io_t *orig_bi_end_io;
+-	void *orig_bi_private;
+-	unsigned long start_time;
+-	struct hd_struct *part;
+-};
+-
+-static void md_end_io(struct bio *bio)
+-{
+-	struct md_io *md_io = bio->bi_private;
+-	struct mddev *mddev = md_io->mddev;
+-
+-	part_end_io_acct(md_io->part, bio, md_io->start_time);
+-
+-	bio->bi_end_io = md_io->orig_bi_end_io;
+-	bio->bi_private = md_io->orig_bi_private;
+-
+-	mempool_free(md_io, &mddev->md_io_pool);
+-
+-	if (bio->bi_end_io)
+-		bio->bi_end_io(bio);
+-}
+-
+ static blk_qc_t md_submit_bio(struct bio *bio)
+ {
+ 	const int rw = bio_data_dir(bio);
++	const int sgrp = op_stat_group(bio_op(bio));
+ 	struct mddev *mddev = bio->bi_disk->private_data;
++	unsigned int sectors;
+ 
+ 	if (mddev == NULL || mddev->pers == NULL) {
+ 		bio_io_error(bio);
+@@ -507,26 +485,21 @@ static blk_qc_t md_submit_bio(struct bio *bio)
+ 		return BLK_QC_T_NONE;
+ 	}
+ 
+-	if (bio->bi_end_io != md_end_io) {
+-		struct md_io *md_io;
+-
+-		md_io = mempool_alloc(&mddev->md_io_pool, GFP_NOIO);
+-		md_io->mddev = mddev;
+-		md_io->orig_bi_end_io = bio->bi_end_io;
+-		md_io->orig_bi_private = bio->bi_private;
+-
+-		bio->bi_end_io = md_end_io;
+-		bio->bi_private = md_io;
+-
+-		md_io->start_time = part_start_io_acct(mddev->gendisk,
+-						       &md_io->part, bio);
+-	}
+-
++	/*
++	 * save the sectors now since our bio can
++	 * go away inside make_request
++	 */
++	sectors = bio_sectors(bio);
+ 	/* bio could be mergeable after passing to underlayer */
+ 	bio->bi_opf &= ~REQ_NOMERGE;
+ 
+ 	md_handle_request(mddev, bio);
+ 
++	part_stat_lock();
++	part_stat_inc(&mddev->gendisk->part0, ios[sgrp]);
++	part_stat_add(&mddev->gendisk->part0, sectors[sgrp], sectors);
++	part_stat_unlock();
++
+ 	return BLK_QC_T_NONE;
+ }
+ 
+@@ -5636,7 +5609,6 @@ static void md_free(struct kobject *ko)
+ 
+ 	bioset_exit(&mddev->bio_set);
+ 	bioset_exit(&mddev->sync_set);
+-	mempool_exit(&mddev->md_io_pool);
+ 	kfree(mddev);
+ }
+ 
+@@ -5732,11 +5704,6 @@ static int md_alloc(dev_t dev, char *name)
+ 		 */
+ 		mddev->hold_active = UNTIL_STOP;
+ 
+-	error = mempool_init_kmalloc_pool(&mddev->md_io_pool, BIO_POOL_SIZE,
+-					  sizeof(struct md_io));
+-	if (error)
+-		goto abort;
+-
+ 	error = -ENOMEM;
+ 	mddev->queue = blk_alloc_queue(NUMA_NO_NODE);
+ 	if (!mddev->queue)
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index 2175a5ac4f7c6..c94811cf26004 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -487,7 +487,6 @@ struct mddev {
+ 	struct bio_set			sync_set; /* for sync operations like
+ 						   * metadata and bitmap writes
+ 						   */
+-	mempool_t			md_io_pool;
+ 
+ 	/* Generic flush handling.
+ 	 * The last to finish preflush schedules a worker to submit
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 447b6a198926e..282f3d2388cc2 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -2065,7 +2065,6 @@ int uvc_register_video_device(struct uvc_device *dev,
+ 			      const struct v4l2_file_operations *fops,
+ 			      const struct v4l2_ioctl_ops *ioctl_ops)
+ {
+-	const char *name;
+ 	int ret;
+ 
+ 	/* Initialize the video buffers queue. */
+@@ -2094,20 +2093,16 @@ int uvc_register_video_device(struct uvc_device *dev,
+ 	case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+ 	default:
+ 		vdev->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+-		name = "Video Capture";
+ 		break;
+ 	case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+ 		vdev->device_caps = V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_STREAMING;
+-		name = "Video Output";
+ 		break;
+ 	case V4L2_BUF_TYPE_META_CAPTURE:
+ 		vdev->device_caps = V4L2_CAP_META_CAPTURE | V4L2_CAP_STREAMING;
+-		name = "Metadata";
+ 		break;
+ 	}
+ 
+-	snprintf(vdev->name, sizeof(vdev->name), "%s %u", name,
+-		 stream->header.bTerminalLink);
++	strscpy(vdev->name, dev->name, sizeof(vdev->name));
+ 
+ 	/*
+ 	 * Set the driver data before calling video_register_device, otherwise
+diff --git a/drivers/mfd/intel-lpss-acpi.c b/drivers/mfd/intel-lpss-acpi.c
+index c8fe334b5fe8b..045cbf0cbe53a 100644
+--- a/drivers/mfd/intel-lpss-acpi.c
++++ b/drivers/mfd/intel-lpss-acpi.c
+@@ -102,6 +102,7 @@ static int intel_lpss_acpi_probe(struct platform_device *pdev)
+ {
+ 	struct intel_lpss_platform_info *info;
+ 	const struct acpi_device_id *id;
++	int ret;
+ 
+ 	id = acpi_match_device(intel_lpss_acpi_ids, &pdev->dev);
+ 	if (!id)
+@@ -115,10 +116,14 @@ static int intel_lpss_acpi_probe(struct platform_device *pdev)
+ 	info->mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	info->irq = platform_get_irq(pdev, 0);
+ 
++	ret = intel_lpss_probe(&pdev->dev, info);
++	if (ret)
++		return ret;
++
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
+ 
+-	return intel_lpss_probe(&pdev->dev, info);
++	return 0;
+ }
+ 
+ static int intel_lpss_acpi_remove(struct platform_device *pdev)
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index bf04a08eeba13..a78b060ce8471 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -1932,6 +1932,7 @@ static const struct pci_device_id pci_ids[] = {
+ 	SDHCI_PCI_DEVICE(INTEL, JSL_SD,    intel_byt_sd),
+ 	SDHCI_PCI_DEVICE(INTEL, LKF_EMMC,  intel_glk_emmc),
+ 	SDHCI_PCI_DEVICE(INTEL, LKF_SD,    intel_byt_sd),
++	SDHCI_PCI_DEVICE(INTEL, ADL_EMMC,  intel_glk_emmc),
+ 	SDHCI_PCI_DEVICE(O2, 8120,     o2),
+ 	SDHCI_PCI_DEVICE(O2, 8220,     o2),
+ 	SDHCI_PCI_DEVICE(O2, 8221,     o2),
+diff --git a/drivers/mmc/host/sdhci-pci.h b/drivers/mmc/host/sdhci-pci.h
+index 8f90c4163bb5c..dcd99d5057ee1 100644
+--- a/drivers/mmc/host/sdhci-pci.h
++++ b/drivers/mmc/host/sdhci-pci.h
+@@ -59,6 +59,7 @@
+ #define PCI_DEVICE_ID_INTEL_JSL_SD	0x4df8
+ #define PCI_DEVICE_ID_INTEL_LKF_EMMC	0x98c4
+ #define PCI_DEVICE_ID_INTEL_LKF_SD	0x98f8
++#define PCI_DEVICE_ID_INTEL_ADL_EMMC	0x54c4
+ 
+ #define PCI_DEVICE_ID_SYSKONNECT_8000	0x8000
+ #define PCI_DEVICE_ID_VIA_95D0		0x95d0
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index 018ca3b057a3b..3f759fae81fe2 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -320,7 +320,7 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
+ 
+ 	/* device reports out of range channel id */
+ 	if (hf->channel >= GS_MAX_INTF)
+-		goto resubmit_urb;
++		goto device_detach;
+ 
+ 	dev = usbcan->canch[hf->channel];
+ 
+@@ -405,6 +405,7 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
+ 
+ 	/* USB failure take down all interfaces */
+ 	if (rc == -ENODEV) {
++ device_detach:
+ 		for (rc = 0; rc < GS_MAX_INTF; rc++) {
+ 			if (usbcan->canch[rc])
+ 				netif_device_detach(usbcan->canch[rc]->netdev);
+@@ -506,6 +507,8 @@ static netdev_tx_t gs_can_start_xmit(struct sk_buff *skb,
+ 
+ 	hf->echo_id = idx;
+ 	hf->channel = dev->channel;
++	hf->flags = 0;
++	hf->reserved = 0;
+ 
+ 	cf = (struct can_frame *)skb->data;
+ 
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index be18b243642f0..aef66f8eecee1 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -301,7 +301,6 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	if (rxq < rcv->real_num_rx_queues) {
+ 		rq = &rcv_priv->rq[rxq];
+ 		rcv_xdp = rcu_access_pointer(rq->xdp_prog);
+-		skb_record_rx_queue(skb, rxq);
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 74ebe8e7d1d81..e84127165d858 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -2036,7 +2036,7 @@ int ath11k_wmi_send_scan_start_cmd(struct ath11k *ar,
+ 	void *ptr;
+ 	int i, ret, len;
+ 	u32 *tmp_ptr;
+-	u8 extraie_len_with_pad = 0;
++	u16 extraie_len_with_pad = 0;
+ 	struct hint_short_ssid *s_ssid = NULL;
+ 	struct hint_bssid *hint_bssid = NULL;
+ 
+@@ -2055,7 +2055,7 @@ int ath11k_wmi_send_scan_start_cmd(struct ath11k *ar,
+ 		len += sizeof(*bssid) * params->num_bssid;
+ 
+ 	len += TLV_HDR_SIZE;
+-	if (params->extraie.len)
++	if (params->extraie.len && params->extraie.len <= 0xFFFF)
+ 		extraie_len_with_pad =
+ 			roundup(params->extraie.len, sizeof(u32));
+ 	len += extraie_len_with_pad;
+@@ -2162,7 +2162,7 @@ int ath11k_wmi_send_scan_start_cmd(struct ath11k *ar,
+ 		      FIELD_PREP(WMI_TLV_LEN, len);
+ 	ptr += TLV_HDR_SIZE;
+ 
+-	if (params->extraie.len)
++	if (extraie_len_with_pad)
+ 		memcpy(ptr, params->extraie.ptr,
+ 		       params->extraie.len);
+ 
+diff --git a/drivers/staging/greybus/audio_topology.c b/drivers/staging/greybus/audio_topology.c
+index 662e3e8b4b634..2bb8e7b60e8d5 100644
+--- a/drivers/staging/greybus/audio_topology.c
++++ b/drivers/staging/greybus/audio_topology.c
+@@ -974,6 +974,44 @@ static int gbaudio_widget_event(struct snd_soc_dapm_widget *w,
+ 	return ret;
+ }
+ 
++static const struct snd_soc_dapm_widget gbaudio_widgets[] = {
++	[snd_soc_dapm_spk]	= SND_SOC_DAPM_SPK(NULL, gbcodec_event_spk),
++	[snd_soc_dapm_hp]	= SND_SOC_DAPM_HP(NULL, gbcodec_event_hp),
++	[snd_soc_dapm_mic]	= SND_SOC_DAPM_MIC(NULL, gbcodec_event_int_mic),
++	[snd_soc_dapm_output]	= SND_SOC_DAPM_OUTPUT(NULL),
++	[snd_soc_dapm_input]	= SND_SOC_DAPM_INPUT(NULL),
++	[snd_soc_dapm_switch]	= SND_SOC_DAPM_SWITCH_E(NULL, SND_SOC_NOPM,
++					0, 0, NULL,
++					gbaudio_widget_event,
++					SND_SOC_DAPM_PRE_PMU |
++					SND_SOC_DAPM_POST_PMD),
++	[snd_soc_dapm_pga]	= SND_SOC_DAPM_PGA_E(NULL, SND_SOC_NOPM,
++					0, 0, NULL, 0,
++					gbaudio_widget_event,
++					SND_SOC_DAPM_PRE_PMU |
++					SND_SOC_DAPM_POST_PMD),
++	[snd_soc_dapm_mixer]	= SND_SOC_DAPM_MIXER_E(NULL, SND_SOC_NOPM,
++					0, 0, NULL, 0,
++					gbaudio_widget_event,
++					SND_SOC_DAPM_PRE_PMU |
++					SND_SOC_DAPM_POST_PMD),
++	[snd_soc_dapm_mux]	= SND_SOC_DAPM_MUX_E(NULL, SND_SOC_NOPM,
++					0, 0, NULL,
++					gbaudio_widget_event,
++					SND_SOC_DAPM_PRE_PMU |
++					SND_SOC_DAPM_POST_PMD),
++	[snd_soc_dapm_aif_in]	= SND_SOC_DAPM_AIF_IN_E(NULL, NULL, 0,
++					SND_SOC_NOPM, 0, 0,
++					gbaudio_widget_event,
++					SND_SOC_DAPM_PRE_PMU |
++					SND_SOC_DAPM_POST_PMD),
++	[snd_soc_dapm_aif_out]	= SND_SOC_DAPM_AIF_OUT_E(NULL, NULL, 0,
++					SND_SOC_NOPM, 0, 0,
++					gbaudio_widget_event,
++					SND_SOC_DAPM_PRE_PMU |
++					SND_SOC_DAPM_POST_PMD),
++};
++
+ static int gbaudio_tplg_create_widget(struct gbaudio_module_info *module,
+ 				      struct snd_soc_dapm_widget *dw,
+ 				      struct gb_audio_widget *w, int *w_size)
+@@ -1052,77 +1090,37 @@ static int gbaudio_tplg_create_widget(struct gbaudio_module_info *module,
+ 
+ 	switch (w->type) {
+ 	case snd_soc_dapm_spk:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_SPK(w->name, gbcodec_event_spk);
++		*dw = gbaudio_widgets[w->type];
+ 		module->op_devices |= GBAUDIO_DEVICE_OUT_SPEAKER;
+ 		break;
+ 	case snd_soc_dapm_hp:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_HP(w->name, gbcodec_event_hp);
++		*dw = gbaudio_widgets[w->type];
+ 		module->op_devices |= (GBAUDIO_DEVICE_OUT_WIRED_HEADSET
+ 					| GBAUDIO_DEVICE_OUT_WIRED_HEADPHONE);
+ 		module->ip_devices |= GBAUDIO_DEVICE_IN_WIRED_HEADSET;
+ 		break;
+ 	case snd_soc_dapm_mic:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_MIC(w->name, gbcodec_event_int_mic);
++		*dw = gbaudio_widgets[w->type];
+ 		module->ip_devices |= GBAUDIO_DEVICE_IN_BUILTIN_MIC;
+ 		break;
+ 	case snd_soc_dapm_output:
+-		*dw = (struct snd_soc_dapm_widget)SND_SOC_DAPM_OUTPUT(w->name);
+-		break;
+ 	case snd_soc_dapm_input:
+-		*dw = (struct snd_soc_dapm_widget)SND_SOC_DAPM_INPUT(w->name);
+-		break;
+ 	case snd_soc_dapm_switch:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_SWITCH_E(w->name, SND_SOC_NOPM, 0, 0,
+-					      widget_kctls,
+-					      gbaudio_widget_event,
+-					      SND_SOC_DAPM_PRE_PMU |
+-					      SND_SOC_DAPM_POST_PMD);
+-		break;
+ 	case snd_soc_dapm_pga:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_PGA_E(w->name, SND_SOC_NOPM, 0, 0, NULL, 0,
+-					   gbaudio_widget_event,
+-					   SND_SOC_DAPM_PRE_PMU |
+-					   SND_SOC_DAPM_POST_PMD);
+-		break;
+ 	case snd_soc_dapm_mixer:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_MIXER_E(w->name, SND_SOC_NOPM, 0, 0, NULL,
+-					     0, gbaudio_widget_event,
+-					     SND_SOC_DAPM_PRE_PMU |
+-					     SND_SOC_DAPM_POST_PMD);
+-		break;
+ 	case snd_soc_dapm_mux:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_MUX_E(w->name, SND_SOC_NOPM, 0, 0,
+-					   widget_kctls, gbaudio_widget_event,
+-					   SND_SOC_DAPM_PRE_PMU |
+-					   SND_SOC_DAPM_POST_PMD);
++		*dw = gbaudio_widgets[w->type];
+ 		break;
+ 	case snd_soc_dapm_aif_in:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_AIF_IN_E(w->name, w->sname, 0,
+-					      SND_SOC_NOPM,
+-					      0, 0, gbaudio_widget_event,
+-					      SND_SOC_DAPM_PRE_PMU |
+-					      SND_SOC_DAPM_POST_PMD);
+-		break;
+ 	case snd_soc_dapm_aif_out:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_AIF_OUT_E(w->name, w->sname, 0,
+-					       SND_SOC_NOPM,
+-					       0, 0, gbaudio_widget_event,
+-					       SND_SOC_DAPM_PRE_PMU |
+-					       SND_SOC_DAPM_POST_PMD);
++		*dw = gbaudio_widgets[w->type];
++		dw->sname = w->sname;
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+ 		goto error;
+ 	}
++	dw->name = w->name;
+ 
+ 	dev_dbg(module->dev, "%s: widget of type %d created\n", dw->name,
+ 		dw->id);
+diff --git a/drivers/staging/wlan-ng/hfa384x_usb.c b/drivers/staging/wlan-ng/hfa384x_usb.c
+index f2a0e16b0318c..fac3f34d4a1f5 100644
+--- a/drivers/staging/wlan-ng/hfa384x_usb.c
++++ b/drivers/staging/wlan-ng/hfa384x_usb.c
+@@ -3779,18 +3779,18 @@ static void hfa384x_usb_throttlefn(struct timer_list *t)
+ 
+ 	spin_lock_irqsave(&hw->ctlxq.lock, flags);
+ 
+-	/*
+-	 * We need to check BOTH the RX and the TX throttle controls,
+-	 * so we use the bitwise OR instead of the logical OR.
+-	 */
+ 	pr_debug("flags=0x%lx\n", hw->usb_flags);
+-	if (!hw->wlandev->hwremoved &&
+-	    ((test_and_clear_bit(THROTTLE_RX, &hw->usb_flags) &&
+-	      !test_and_set_bit(WORK_RX_RESUME, &hw->usb_flags)) |
+-	     (test_and_clear_bit(THROTTLE_TX, &hw->usb_flags) &&
+-	      !test_and_set_bit(WORK_TX_RESUME, &hw->usb_flags))
+-	    )) {
+-		schedule_work(&hw->usb_work);
++	if (!hw->wlandev->hwremoved) {
++		bool rx_throttle = test_and_clear_bit(THROTTLE_RX, &hw->usb_flags) &&
++				   !test_and_set_bit(WORK_RX_RESUME, &hw->usb_flags);
++		bool tx_throttle = test_and_clear_bit(THROTTLE_TX, &hw->usb_flags) &&
++				   !test_and_set_bit(WORK_TX_RESUME, &hw->usb_flags);
++		/*
++		 * We need to check BOTH the RX and the TX throttle controls,
++		 * so we use the bitwise OR instead of the logical OR.
++		 */
++		if (rx_throttle | tx_throttle)
++			schedule_work(&hw->usb_work);
+ 	}
+ 
+ 	spin_unlock_irqrestore(&hw->ctlxq.lock, flags);
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 99908d8d2dd36..b2710015493a5 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -754,6 +754,7 @@ void usb_hcd_poll_rh_status(struct usb_hcd *hcd)
+ {
+ 	struct urb	*urb;
+ 	int		length;
++	int		status;
+ 	unsigned long	flags;
+ 	char		buffer[6];	/* Any root hubs with > 31 ports? */
+ 
+@@ -771,11 +772,17 @@ void usb_hcd_poll_rh_status(struct usb_hcd *hcd)
+ 		if (urb) {
+ 			clear_bit(HCD_FLAG_POLL_PENDING, &hcd->flags);
+ 			hcd->status_urb = NULL;
++			if (urb->transfer_buffer_length >= length) {
++				status = 0;
++			} else {
++				status = -EOVERFLOW;
++				length = urb->transfer_buffer_length;
++			}
+ 			urb->actual_length = length;
+ 			memcpy(urb->transfer_buffer, buffer, length);
+ 
+ 			usb_hcd_unlink_urb_from_ep(hcd, urb);
+-			usb_hcd_giveback_urb(hcd, urb, 0);
++			usb_hcd_giveback_urb(hcd, urb, status);
+ 		} else {
+ 			length = 0;
+ 			set_bit(HCD_FLAG_POLL_PENDING, &hcd->flags);
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 3f406519da58d..af15dbe6bb141 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -1224,7 +1224,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ 			 */
+ 			if (portchange || (hub_is_superspeed(hub->hdev) &&
+ 						port_resumed))
+-				set_bit(port1, hub->change_bits);
++				set_bit(port1, hub->event_bits);
+ 
+ 		} else if (udev->persist_enabled) {
+ #ifdef CONFIG_PM
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 4e28961cfa53e..b43c9de34a2c2 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -6037,16 +6037,16 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		fallthrough;
+ 	case PTR_TO_PACKET_END:
+ 	case PTR_TO_SOCKET:
+-	case PTR_TO_SOCKET_OR_NULL:
+ 	case PTR_TO_SOCK_COMMON:
+-	case PTR_TO_SOCK_COMMON_OR_NULL:
+ 	case PTR_TO_TCP_SOCK:
+-	case PTR_TO_TCP_SOCK_OR_NULL:
+ 	case PTR_TO_XDP_SOCK:
++reject:
+ 		verbose(env, "R%d pointer arithmetic on %s prohibited\n",
+ 			dst, reg_type_str[ptr_reg->type]);
+ 		return -EACCES;
+ 	default:
++		if (reg_type_may_be_null(ptr_reg->type))
++			goto reject;
+ 		break;
+ 	}
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index d02073b9d56e2..fdf5fa4bf4448 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -850,8 +850,17 @@ void wq_worker_running(struct task_struct *task)
+ 
+ 	if (!worker->sleeping)
+ 		return;
++
++	/*
++	 * If preempted by unbind_workers() between the WORKER_NOT_RUNNING check
++	 * and the nr_running increment below, we may ruin the nr_running reset
++	 * and leave with an unexpected pool->nr_running == 1 on the newly unbound
++	 * pool. Protect against such race.
++	 */
++	preempt_disable();
+ 	if (!(worker->flags & WORKER_NOT_RUNNING))
+ 		atomic_inc(&worker->pool->nr_running);
++	preempt_enable();
+ 	worker->sleeping = 0;
+ }
+ 
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 8ee580538d876..53ce5b6448a5d 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -119,8 +119,8 @@ enum {
+ };
+ 
+ struct tpcon {
+-	int idx;
+-	int len;
++	unsigned int idx;
++	unsigned int len;
+ 	u32 state;
+ 	u8 bs;
+ 	u8 sn;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-01-20 10:00 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-01-20 10:00 UTC (permalink / raw
  To: gentoo-commits

commit:     c53b80c8793617ab6bc636c77c557a0bbaf2c35a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 20 09:59:59 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 20 09:59:59 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c53b80c8

Linux patch 5.10.93

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1092_linux-5.10.93.patch | 997 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1001 insertions(+)

diff --git a/0000_README b/0000_README
index 6d8b55d4..ababebf8 100644
--- a/0000_README
+++ b/0000_README
@@ -411,6 +411,10 @@ Patch:  1091_linux-5.10.92.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.92
 
+Patch:  1092_linux-5.10.93.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.93
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1092_linux-5.10.93.patch b/1092_linux-5.10.93.patch
new file mode 100644
index 00000000..52fb82c6
--- /dev/null
+++ b/1092_linux-5.10.93.patch
@@ -0,0 +1,997 @@
+diff --git a/Makefile b/Makefile
+index a113a29545bdb..993559750df9d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 92
++SUBLEVEL = 93
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -1073,7 +1073,7 @@ export mod_sign_cmd
+ HOST_LIBELF_LIBS = $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
+ 
+ has_libelf = $(call try-run,\
+-               echo "int main() {}" | $(HOSTCC) -xc -o /dev/null $(HOST_LIBELF_LIBS) -,1,0)
++               echo "int main() {}" | $(HOSTCC) $(KBUILD_HOSTLDFLAGS) -xc -o /dev/null $(HOST_LIBELF_LIBS) -,1,0)
+ 
+ ifdef CONFIG_STACK_VALIDATION
+   ifeq ($(has_libelf),1)
+diff --git a/arch/arm/kernel/perf_callchain.c b/arch/arm/kernel/perf_callchain.c
+index 3b69a76d341e7..1626dfc6f6ce6 100644
+--- a/arch/arm/kernel/perf_callchain.c
++++ b/arch/arm/kernel/perf_callchain.c
+@@ -62,9 +62,10 @@ user_backtrace(struct frame_tail __user *tail,
+ void
+ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct frame_tail __user *tail;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -98,9 +99,10 @@ callchain_trace(struct stackframe *fr,
+ void
+ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct stackframe fr;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -111,18 +113,21 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
+ 
+ unsigned long perf_instruction_pointer(struct pt_regs *regs)
+ {
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
+-		return perf_guest_cbs->get_guest_ip();
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
++
++	if (guest_cbs && guest_cbs->is_in_guest())
++		return guest_cbs->get_guest_ip();
+ 
+ 	return instruction_pointer(regs);
+ }
+ 
+ unsigned long perf_misc_flags(struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	int misc = 0;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+-		if (perf_guest_cbs->is_user_mode())
++	if (guest_cbs && guest_cbs->is_in_guest()) {
++		if (guest_cbs->is_user_mode())
+ 			misc |= PERF_RECORD_MISC_GUEST_USER;
+ 		else
+ 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
+diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
+index 88ff471b0bce5..58ae55d78a203 100644
+--- a/arch/arm64/kernel/perf_callchain.c
++++ b/arch/arm64/kernel/perf_callchain.c
+@@ -102,7 +102,9 @@ compat_user_backtrace(struct compat_frame_tail __user *tail,
+ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 			 struct pt_regs *regs)
+ {
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
++
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -147,9 +149,10 @@ static bool callchain_trace(void *data, unsigned long pc)
+ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ 			   struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct stackframe frame;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -160,18 +163,21 @@ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ 
+ unsigned long perf_instruction_pointer(struct pt_regs *regs)
+ {
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
+-		return perf_guest_cbs->get_guest_ip();
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
++
++	if (guest_cbs && guest_cbs->is_in_guest())
++		return guest_cbs->get_guest_ip();
+ 
+ 	return instruction_pointer(regs);
+ }
+ 
+ unsigned long perf_misc_flags(struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	int misc = 0;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+-		if (perf_guest_cbs->is_user_mode())
++	if (guest_cbs && guest_cbs->is_in_guest()) {
++		if (guest_cbs->is_user_mode())
+ 			misc |= PERF_RECORD_MISC_GUEST_USER;
+ 		else
+ 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
+diff --git a/arch/csky/kernel/perf_callchain.c b/arch/csky/kernel/perf_callchain.c
+index ab55e98ee8f62..35318a635a5fa 100644
+--- a/arch/csky/kernel/perf_callchain.c
++++ b/arch/csky/kernel/perf_callchain.c
+@@ -86,10 +86,11 @@ static unsigned long user_backtrace(struct perf_callchain_entry_ctx *entry,
+ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 			 struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	unsigned long fp = 0;
+ 
+ 	/* C-SKY does not support virtualization. */
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
++	if (guest_cbs && guest_cbs->is_in_guest())
+ 		return;
+ 
+ 	fp = regs->regs[4];
+@@ -110,10 +111,11 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ 			   struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct stackframe fr;
+ 
+ 	/* C-SKY does not support virtualization. */
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		pr_warn("C-SKY does not support perf in guest mode!");
+ 		return;
+ 	}
+diff --git a/arch/nds32/kernel/perf_event_cpu.c b/arch/nds32/kernel/perf_event_cpu.c
+index 0ce6f9f307e6a..f387919607813 100644
+--- a/arch/nds32/kernel/perf_event_cpu.c
++++ b/arch/nds32/kernel/perf_event_cpu.c
+@@ -1363,6 +1363,7 @@ void
+ perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 		    struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	unsigned long fp = 0;
+ 	unsigned long gp = 0;
+ 	unsigned long lp = 0;
+@@ -1371,7 +1372,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 
+ 	leaf_fp = 0;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -1479,9 +1480,10 @@ void
+ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ 		      struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct stackframe fr;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -1493,20 +1495,23 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ 
+ unsigned long perf_instruction_pointer(struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
++
+ 	/* However, NDS32 does not support virtualization */
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
+-		return perf_guest_cbs->get_guest_ip();
++	if (guest_cbs && guest_cbs->is_in_guest())
++		return guest_cbs->get_guest_ip();
+ 
+ 	return instruction_pointer(regs);
+ }
+ 
+ unsigned long perf_misc_flags(struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	int misc = 0;
+ 
+ 	/* However, NDS32 does not support virtualization */
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+-		if (perf_guest_cbs->is_user_mode())
++	if (guest_cbs && guest_cbs->is_in_guest()) {
++		if (guest_cbs->is_user_mode())
+ 			misc |= PERF_RECORD_MISC_GUEST_USER;
+ 		else
+ 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
+diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
+index 3e8e19f5746c7..00c8cda1c9c31 100644
+--- a/arch/powerpc/include/asm/hvcall.h
++++ b/arch/powerpc/include/asm/hvcall.h
+@@ -382,6 +382,8 @@
+ #define H_CPU_BEHAV_BNDS_CHK_SPEC_BAR	(1ull << 61) // IBM bit 2
+ #define H_CPU_BEHAV_FLUSH_COUNT_CACHE	(1ull << 58) // IBM bit 5
+ #define H_CPU_BEHAV_FLUSH_LINK_STACK	(1ull << 57) // IBM bit 6
++#define H_CPU_BEHAV_NO_L1D_FLUSH_ENTRY	(1ull << 56) // IBM bit 7
++#define H_CPU_BEHAV_NO_L1D_FLUSH_UACCESS (1ull << 55) // IBM bit 8
+ 
+ /* Flag values used in H_REGISTER_PROC_TBL hcall */
+ #define PROC_TABLE_OP_MASK	0x18
+diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
+index 5f0d446a2325e..47dfada140e19 100644
+--- a/arch/powerpc/platforms/pseries/setup.c
++++ b/arch/powerpc/platforms/pseries/setup.c
+@@ -538,6 +538,12 @@ static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
+ 	if (!(result->behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
+ 		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
+ 
++	if (result->behaviour & H_CPU_BEHAV_NO_L1D_FLUSH_ENTRY)
++		security_ftr_clear(SEC_FTR_L1D_FLUSH_ENTRY);
++
++	if (result->behaviour & H_CPU_BEHAV_NO_L1D_FLUSH_UACCESS)
++		security_ftr_clear(SEC_FTR_L1D_FLUSH_UACCESS);
++
+ 	if (!(result->behaviour & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
+ 		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
+ }
+diff --git a/arch/riscv/kernel/perf_callchain.c b/arch/riscv/kernel/perf_callchain.c
+index cf190197a22f6..ad3001cbdf618 100644
+--- a/arch/riscv/kernel/perf_callchain.c
++++ b/arch/riscv/kernel/perf_callchain.c
+@@ -60,10 +60,11 @@ static unsigned long user_backtrace(struct perf_callchain_entry_ctx *entry,
+ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 			 struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	unsigned long fp = 0;
+ 
+ 	/* RISC-V does not support perf in guest mode. */
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
++	if (guest_cbs && guest_cbs->is_in_guest())
+ 		return;
+ 
+ 	fp = regs->s0;
+@@ -84,8 +85,10 @@ void notrace walk_stackframe(struct task_struct *task,
+ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ 			   struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
++
+ 	/* RISC-V does not support perf in guest mode. */
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		pr_warn("RISC-V does not support perf in guest mode!");
+ 		return;
+ 	}
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index e6c4f29fc6956..b51ab19eb9721 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -2115,6 +2115,13 @@ int kvm_s390_is_stop_irq_pending(struct kvm_vcpu *vcpu)
+ 	return test_bit(IRQ_PEND_SIGP_STOP, &li->pending_irqs);
+ }
+ 
++int kvm_s390_is_restart_irq_pending(struct kvm_vcpu *vcpu)
++{
++	struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int;
++
++	return test_bit(IRQ_PEND_RESTART, &li->pending_irqs);
++}
++
+ void kvm_s390_clear_stop_irq(struct kvm_vcpu *vcpu)
+ {
+ 	struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int;
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 00f03f363c9b0..07a04f3926009 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -4588,10 +4588,15 @@ int kvm_s390_vcpu_stop(struct kvm_vcpu *vcpu)
+ 		}
+ 	}
+ 
+-	/* SIGP STOP and SIGP STOP AND STORE STATUS has been fully processed */
++	/*
++	 * Set the VCPU to STOPPED and THEN clear the interrupt flag,
++	 * now that the SIGP STOP and SIGP STOP AND STORE STATUS orders
++	 * have been fully processed. This will ensure that the VCPU
++	 * is kept BUSY if another VCPU is inquiring with SIGP SENSE.
++	 */
++	kvm_s390_set_cpuflags(vcpu, CPUSTAT_STOPPED);
+ 	kvm_s390_clear_stop_irq(vcpu);
+ 
+-	kvm_s390_set_cpuflags(vcpu, CPUSTAT_STOPPED);
+ 	__disable_ibs_on_vcpu(vcpu);
+ 
+ 	for (i = 0; i < online_vcpus; i++) {
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index 2d134833bca69..a3e9b71d426f9 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -418,6 +418,7 @@ void kvm_s390_destroy_adapters(struct kvm *kvm);
+ int kvm_s390_ext_call_pending(struct kvm_vcpu *vcpu);
+ extern struct kvm_device_ops kvm_flic_ops;
+ int kvm_s390_is_stop_irq_pending(struct kvm_vcpu *vcpu);
++int kvm_s390_is_restart_irq_pending(struct kvm_vcpu *vcpu);
+ void kvm_s390_clear_stop_irq(struct kvm_vcpu *vcpu);
+ int kvm_s390_set_irq_state(struct kvm_vcpu *vcpu,
+ 			   void __user *buf, int len);
+diff --git a/arch/s390/kvm/sigp.c b/arch/s390/kvm/sigp.c
+index 683036c1c92a8..3dc921e853b6e 100644
+--- a/arch/s390/kvm/sigp.c
++++ b/arch/s390/kvm/sigp.c
+@@ -288,6 +288,34 @@ static int handle_sigp_dst(struct kvm_vcpu *vcpu, u8 order_code,
+ 	if (!dst_vcpu)
+ 		return SIGP_CC_NOT_OPERATIONAL;
+ 
++	/*
++	 * SIGP RESTART, SIGP STOP, and SIGP STOP AND STORE STATUS orders
++	 * are processed asynchronously. Until the affected VCPU finishes
++	 * its work and calls back into KVM to clear the (RESTART or STOP)
++	 * interrupt, we need to return any new non-reset orders "busy".
++	 *
++	 * This is important because a single VCPU could issue:
++	 *  1) SIGP STOP $DESTINATION
++	 *  2) SIGP SENSE $DESTINATION
++	 *
++	 * If the SIGP SENSE would not be rejected as "busy", it could
++	 * return an incorrect answer as to whether the VCPU is STOPPED
++	 * or OPERATING.
++	 */
++	if (order_code != SIGP_INITIAL_CPU_RESET &&
++	    order_code != SIGP_CPU_RESET) {
++		/*
++		 * Lockless check. Both SIGP STOP and SIGP (RE)START
++		 * properly synchronize everything while processing
++		 * their orders, while the guest cannot observe a
++		 * difference when issuing other orders from two
++		 * different VCPUs.
++		 */
++		if (kvm_s390_is_stop_irq_pending(dst_vcpu) ||
++		    kvm_s390_is_restart_irq_pending(dst_vcpu))
++			return SIGP_CC_BUSY;
++	}
++
+ 	switch (order_code) {
+ 	case SIGP_SENSE:
+ 		vcpu->stat.instruction_sigp_sense++;
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 1f5d96ba4866d..b79b9f21cbb3b 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -2545,10 +2545,11 @@ static bool perf_hw_regs(struct pt_regs *regs)
+ void
+ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct unwind_state state;
+ 	unsigned long addr;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* TODO: We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -2648,10 +2649,11 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
+ void
+ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct stack_frame frame;
+ 	const struct stack_frame __user *fp;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* TODO: We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -2728,18 +2730,21 @@ static unsigned long code_segment_base(struct pt_regs *regs)
+ 
+ unsigned long perf_instruction_pointer(struct pt_regs *regs)
+ {
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
+-		return perf_guest_cbs->get_guest_ip();
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
++
++	if (guest_cbs && guest_cbs->is_in_guest())
++		return guest_cbs->get_guest_ip();
+ 
+ 	return regs->ip + code_segment_base(regs);
+ }
+ 
+ unsigned long perf_misc_flags(struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	int misc = 0;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+-		if (perf_guest_cbs->is_user_mode())
++	if (guest_cbs && guest_cbs->is_in_guest()) {
++		if (guest_cbs->is_user_mode())
+ 			misc |= PERF_RECORD_MISC_GUEST_USER;
+ 		else
+ 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index a521135247eb6..6525693e7aeaa 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -2586,6 +2586,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
+ {
+ 	struct perf_sample_data data;
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++	struct perf_guest_info_callbacks *guest_cbs;
+ 	int bit;
+ 	int handled = 0;
+ 
+@@ -2651,9 +2652,11 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
+ 	 */
+ 	if (__test_and_clear_bit(GLOBAL_STATUS_TRACE_TOPAPMI_BIT, (unsigned long *)&status)) {
+ 		handled++;
+-		if (unlikely(perf_guest_cbs && perf_guest_cbs->is_in_guest() &&
+-			perf_guest_cbs->handle_intel_pt_intr))
+-			perf_guest_cbs->handle_intel_pt_intr();
++
++		guest_cbs = perf_get_guest_cbs();
++		if (unlikely(guest_cbs && guest_cbs->is_in_guest() &&
++			     guest_cbs->handle_intel_pt_intr))
++			guest_cbs->handle_intel_pt_intr();
+ 		else
+ 			intel_pt_interrupt();
+ 	}
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index e8fb4b0394af2..13e10b970ac83 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1306,6 +1306,7 @@ struct kvm_x86_init_ops {
+ 	int (*disabled_by_bios)(void);
+ 	int (*check_processor_compatibility)(void);
+ 	int (*hardware_setup)(void);
++	bool (*intel_pt_intr_in_guest)(void);
+ 
+ 	struct kvm_x86_ops *runtime_ops;
+ };
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 38c453f28f1f0..351ef5cf1436a 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -7915,6 +7915,7 @@ static struct kvm_x86_init_ops vmx_init_ops __initdata = {
+ 	.disabled_by_bios = vmx_disabled_by_bios,
+ 	.check_processor_compatibility = vmx_check_processor_compat,
+ 	.hardware_setup = hardware_setup,
++	.intel_pt_intr_in_guest = vmx_pt_mode_is_host_guest,
+ 
+ 	.runtime_ops = &vmx_x86_ops,
+ };
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 4f828cac0273e..271669dc8d90a 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1229,7 +1229,7 @@ static const u32 msrs_to_save_all[] = {
+ 	MSR_IA32_UMWAIT_CONTROL,
+ 
+ 	MSR_ARCH_PERFMON_FIXED_CTR0, MSR_ARCH_PERFMON_FIXED_CTR1,
+-	MSR_ARCH_PERFMON_FIXED_CTR0 + 2, MSR_ARCH_PERFMON_FIXED_CTR0 + 3,
++	MSR_ARCH_PERFMON_FIXED_CTR0 + 2,
+ 	MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS,
+ 	MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
+ 	MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1,
+@@ -7882,7 +7882,7 @@ static struct perf_guest_info_callbacks kvm_guest_cbs = {
+ 	.is_in_guest		= kvm_is_in_guest,
+ 	.is_user_mode		= kvm_is_user_mode,
+ 	.get_guest_ip		= kvm_get_guest_ip,
+-	.handle_intel_pt_intr	= kvm_handle_intel_pt_intr,
++	.handle_intel_pt_intr	= NULL,
+ };
+ 
+ #ifdef CONFIG_X86_64
+@@ -8005,6 +8005,8 @@ int kvm_arch_init(void *opaque)
+ 			PT_PRESENT_MASK, 0, sme_me_mask);
+ 	kvm_timer_init();
+ 
++	if (ops->intel_pt_intr_in_guest && ops->intel_pt_intr_in_guest())
++		kvm_guest_cbs.handle_intel_pt_intr = kvm_handle_intel_pt_intr;
+ 	perf_register_guest_info_callbacks(&kvm_guest_cbs);
+ 
+ 	if (boot_cpu_has(X86_FEATURE_XSAVE)) {
+@@ -8042,6 +8044,7 @@ void kvm_arch_exit(void)
+ #endif
+ 	kvm_lapic_exit();
+ 	perf_unregister_guest_info_callbacks(&kvm_guest_cbs);
++	kvm_guest_cbs.handle_intel_pt_intr = NULL;
+ 
+ 	if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
+ 		cpufreq_unregister_notifier(&kvmclock_cpufreq_notifier_block,
+diff --git a/drivers/base/devtmpfs.c b/drivers/base/devtmpfs.c
+index a71d141179439..b5cbaa61cbea7 100644
+--- a/drivers/base/devtmpfs.c
++++ b/drivers/base/devtmpfs.c
+@@ -59,8 +59,15 @@ static struct dentry *public_dev_mount(struct file_system_type *fs_type, int fla
+ 		      const char *dev_name, void *data)
+ {
+ 	struct super_block *s = mnt->mnt_sb;
++	int err;
++
+ 	atomic_inc(&s->s_active);
+ 	down_write(&s->s_umount);
++	err = reconfigure_single(s, flags, data);
++	if (err < 0) {
++		deactivate_locked_super(s);
++		return ERR_PTR(err);
++	}
+ 	return dget(s->s_root);
+ }
+ 
+diff --git a/drivers/firmware/qemu_fw_cfg.c b/drivers/firmware/qemu_fw_cfg.c
+index 172c751a4f6c2..f08e056ed0ae4 100644
+--- a/drivers/firmware/qemu_fw_cfg.c
++++ b/drivers/firmware/qemu_fw_cfg.c
+@@ -388,9 +388,7 @@ static void fw_cfg_sysfs_cache_cleanup(void)
+ 	struct fw_cfg_sysfs_entry *entry, *next;
+ 
+ 	list_for_each_entry_safe(entry, next, &fw_cfg_entry_cache, list) {
+-		/* will end up invoking fw_cfg_sysfs_cache_delist()
+-		 * via each object's release() method (i.e. destructor)
+-		 */
++		fw_cfg_sysfs_cache_delist(entry);
+ 		kobject_put(&entry->kobj);
+ 	}
+ }
+@@ -448,7 +446,6 @@ static void fw_cfg_sysfs_release_entry(struct kobject *kobj)
+ {
+ 	struct fw_cfg_sysfs_entry *entry = to_entry(kobj);
+ 
+-	fw_cfg_sysfs_cache_delist(entry);
+ 	kfree(entry);
+ }
+ 
+@@ -601,20 +598,18 @@ static int fw_cfg_register_file(const struct fw_cfg_file *f)
+ 	/* set file entry information */
+ 	entry->size = be32_to_cpu(f->size);
+ 	entry->select = be16_to_cpu(f->select);
+-	memcpy(entry->name, f->name, FW_CFG_MAX_FILE_PATH);
++	strscpy(entry->name, f->name, FW_CFG_MAX_FILE_PATH);
+ 
+ 	/* register entry under "/sys/firmware/qemu_fw_cfg/by_key/" */
+ 	err = kobject_init_and_add(&entry->kobj, &fw_cfg_sysfs_entry_ktype,
+ 				   fw_cfg_sel_ko, "%d", entry->select);
+-	if (err) {
+-		kobject_put(&entry->kobj);
+-		return err;
+-	}
++	if (err)
++		goto err_put_entry;
+ 
+ 	/* add raw binary content access */
+ 	err = sysfs_create_bin_file(&entry->kobj, &fw_cfg_sysfs_attr_raw);
+ 	if (err)
+-		goto err_add_raw;
++		goto err_del_entry;
+ 
+ 	/* try adding "/sys/firmware/qemu_fw_cfg/by_name/" symlink */
+ 	fw_cfg_build_symlink(fw_cfg_fname_kset, &entry->kobj, entry->name);
+@@ -623,9 +618,10 @@ static int fw_cfg_register_file(const struct fw_cfg_file *f)
+ 	fw_cfg_sysfs_cache_enlist(entry);
+ 	return 0;
+ 
+-err_add_raw:
++err_del_entry:
+ 	kobject_del(&entry->kobj);
+-	kfree(entry);
++err_put_entry:
++	kobject_put(&entry->kobj);
+ 	return err;
+ }
+ 
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index b8477fa93b7d7..f6373d678d256 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -1915,6 +1915,10 @@ static int uvc_video_start_transfer(struct uvc_streaming *stream,
+ 		if (ep == NULL)
+ 			return -EIO;
+ 
++		/* Reject broken descriptors. */
++		if (usb_endpoint_maxp(&ep->desc) == 0)
++			return -EIO;
++
+ 		ret = uvc_init_video_bulk(stream, ep, gfp_flags);
+ 	}
+ 
+diff --git a/drivers/mtd/chips/Kconfig b/drivers/mtd/chips/Kconfig
+index aef14990e5f7c..19726ebd973d0 100644
+--- a/drivers/mtd/chips/Kconfig
++++ b/drivers/mtd/chips/Kconfig
+@@ -55,12 +55,14 @@ choice
+ 	  LITTLE_ENDIAN_BYTE, if the bytes are reversed.
+ 
+ config MTD_CFI_NOSWAP
++	depends on !ARCH_IXP4XX || CPU_BIG_ENDIAN
+ 	bool "NO"
+ 
+ config MTD_CFI_BE_BYTE_SWAP
+ 	bool "BIG_ENDIAN_BYTE"
+ 
+ config MTD_CFI_LE_BYTE_SWAP
++	depends on !ARCH_IXP4XX
+ 	bool "LITTLE_ENDIAN_BYTE"
+ 
+ endchoice
+diff --git a/drivers/mtd/maps/Kconfig b/drivers/mtd/maps/Kconfig
+index 6650acbc961e9..fc0aaa03c5242 100644
+--- a/drivers/mtd/maps/Kconfig
++++ b/drivers/mtd/maps/Kconfig
+@@ -325,7 +325,7 @@ config MTD_DC21285
+ 
+ config MTD_IXP4XX
+ 	tristate "CFI Flash device mapped on Intel IXP4xx based systems"
+-	depends on MTD_CFI && MTD_COMPLEX_MAPPINGS && ARCH_IXP4XX
++	depends on MTD_CFI && MTD_COMPLEX_MAPPINGS && ARCH_IXP4XX && MTD_CFI_ADV_OPTIONS
+ 	help
+ 	  This enables MTD access to flash devices on platforms based
+ 	  on Intel's IXP4xx family of network processors such as the
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c
+index 6312fddd9c00a..eaba661133280 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c
+@@ -1000,6 +1000,7 @@ int rtl92cu_hw_init(struct ieee80211_hw *hw)
+ 	_initpabias(hw);
+ 	rtl92c_dm_init(hw);
+ exit:
++	local_irq_disable();
+ 	local_irq_restore(flags);
+ 	return err;
+ }
+diff --git a/drivers/remoteproc/qcom_pil_info.c b/drivers/remoteproc/qcom_pil_info.c
+index 7c007dd7b2000..aca21560e20b8 100644
+--- a/drivers/remoteproc/qcom_pil_info.c
++++ b/drivers/remoteproc/qcom_pil_info.c
+@@ -104,7 +104,7 @@ int qcom_pil_info_store(const char *image, phys_addr_t base, size_t size)
+ 	return -ENOMEM;
+ 
+ found_unused:
+-	memcpy_toio(entry, image, PIL_RELOC_NAME_LEN);
++	memcpy_toio(entry, image, strnlen(image, PIL_RELOC_NAME_LEN));
+ found_existing:
+ 	/* Use two writel() as base is only aligned to 4 bytes on odd entries */
+ 	writel(base, entry + PIL_RELOC_NAME_LEN);
+diff --git a/drivers/video/fbdev/vga16fb.c b/drivers/video/fbdev/vga16fb.c
+index 1e8a38a7967d8..5c6e9dc88060b 100644
+--- a/drivers/video/fbdev/vga16fb.c
++++ b/drivers/video/fbdev/vga16fb.c
+@@ -184,6 +184,25 @@ static inline void setindex(int index)
+ 	vga_io_w(VGA_GFX_I, index);
+ }
+ 
++/* Check if the video mode is supported by the driver */
++static inline int check_mode_supported(void)
++{
++	/* non-x86 architectures treat orig_video_isVGA as a boolean flag */
++#if defined(CONFIG_X86)
++	/* only EGA and VGA in 16 color graphic mode are supported */
++	if (screen_info.orig_video_isVGA != VIDEO_TYPE_EGAC &&
++	    screen_info.orig_video_isVGA != VIDEO_TYPE_VGAC)
++		return -ENODEV;
++
++	if (screen_info.orig_video_mode != 0x0D &&	/* 320x200/4 (EGA) */
++	    screen_info.orig_video_mode != 0x0E &&	/* 640x200/4 (EGA) */
++	    screen_info.orig_video_mode != 0x10 &&	/* 640x350/4 (EGA) */
++	    screen_info.orig_video_mode != 0x12)	/* 640x480/4 (VGA) */
++		return -ENODEV;
++#endif
++	return 0;
++}
++
+ static void vga16fb_pan_var(struct fb_info *info, 
+ 			    struct fb_var_screeninfo *var)
+ {
+@@ -1422,6 +1441,11 @@ static int __init vga16fb_init(void)
+ 
+ 	vga16fb_setup(option);
+ #endif
++
++	ret = check_mode_supported();
++	if (ret)
++		return ret;
++
+ 	ret = platform_driver_register(&vga16fb_driver);
+ 
+ 	if (!ret) {
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index 72b67d810b8c2..a13ef836fe4e1 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -541,7 +541,10 @@ int v9fs_vfs_setattr_dotl(struct dentry *dentry, struct iattr *iattr)
+ {
+ 	int retval;
+ 	struct p9_fid *fid = NULL;
+-	struct p9_iattr_dotl p9attr;
++	struct p9_iattr_dotl p9attr = {
++		.uid = INVALID_UID,
++		.gid = INVALID_GID,
++	};
+ 	struct inode *inode = d_inode(dentry);
+ 
+ 	p9_debug(P9_DEBUG_VFS, "\n");
+@@ -551,14 +554,22 @@ int v9fs_vfs_setattr_dotl(struct dentry *dentry, struct iattr *iattr)
+ 		return retval;
+ 
+ 	p9attr.valid = v9fs_mapped_iattr_valid(iattr->ia_valid);
+-	p9attr.mode = iattr->ia_mode;
+-	p9attr.uid = iattr->ia_uid;
+-	p9attr.gid = iattr->ia_gid;
+-	p9attr.size = iattr->ia_size;
+-	p9attr.atime_sec = iattr->ia_atime.tv_sec;
+-	p9attr.atime_nsec = iattr->ia_atime.tv_nsec;
+-	p9attr.mtime_sec = iattr->ia_mtime.tv_sec;
+-	p9attr.mtime_nsec = iattr->ia_mtime.tv_nsec;
++	if (iattr->ia_valid & ATTR_MODE)
++		p9attr.mode = iattr->ia_mode;
++	if (iattr->ia_valid & ATTR_UID)
++		p9attr.uid = iattr->ia_uid;
++	if (iattr->ia_valid & ATTR_GID)
++		p9attr.gid = iattr->ia_gid;
++	if (iattr->ia_valid & ATTR_SIZE)
++		p9attr.size = iattr->ia_size;
++	if (iattr->ia_valid & ATTR_ATIME_SET) {
++		p9attr.atime_sec = iattr->ia_atime.tv_sec;
++		p9attr.atime_nsec = iattr->ia_atime.tv_nsec;
++	}
++	if (iattr->ia_valid & ATTR_MTIME_SET) {
++		p9attr.mtime_sec = iattr->ia_mtime.tv_sec;
++		p9attr.mtime_nsec = iattr->ia_mtime.tv_nsec;
++	}
+ 
+ 	if (iattr->ia_valid & ATTR_FILE) {
+ 		fid = iattr->ia_file->private_data;
+diff --git a/fs/fs_context.c b/fs/fs_context.c
+index 2834d1afa6e80..b11677802ee13 100644
+--- a/fs/fs_context.c
++++ b/fs/fs_context.c
+@@ -530,7 +530,7 @@ static int legacy_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 			      param->key);
+ 	}
+ 
+-	if (len > PAGE_SIZE - 2 - size)
++	if (size + len + 2 > PAGE_SIZE)
+ 		return invalf(fc, "VFS: Legacy: Cumulative options too large");
+ 	if (strchr(param->key, ',') ||
+ 	    (param->type == fs_value_is_string &&
+diff --git a/fs/orangefs/orangefs-bufmap.c b/fs/orangefs/orangefs-bufmap.c
+index 538e839590ef5..b501dc07f9222 100644
+--- a/fs/orangefs/orangefs-bufmap.c
++++ b/fs/orangefs/orangefs-bufmap.c
+@@ -176,7 +176,7 @@ orangefs_bufmap_free(struct orangefs_bufmap *bufmap)
+ {
+ 	kfree(bufmap->page_array);
+ 	kfree(bufmap->desc_array);
+-	kfree(bufmap->buffer_index_array);
++	bitmap_free(bufmap->buffer_index_array);
+ 	kfree(bufmap);
+ }
+ 
+@@ -226,8 +226,7 @@ orangefs_bufmap_alloc(struct ORANGEFS_dev_map_desc *user_desc)
+ 	bufmap->desc_size = user_desc->size;
+ 	bufmap->desc_shift = ilog2(bufmap->desc_size);
+ 
+-	bufmap->buffer_index_array =
+-		kzalloc(DIV_ROUND_UP(bufmap->desc_count, BITS_PER_LONG), GFP_KERNEL);
++	bufmap->buffer_index_array = bitmap_zalloc(bufmap->desc_count, GFP_KERNEL);
+ 	if (!bufmap->buffer_index_array)
+ 		goto out_free_bufmap;
+ 
+@@ -250,7 +249,7 @@ orangefs_bufmap_alloc(struct ORANGEFS_dev_map_desc *user_desc)
+ out_free_desc_array:
+ 	kfree(bufmap->desc_array);
+ out_free_index_array:
+-	kfree(bufmap->buffer_index_array);
++	bitmap_free(bufmap->buffer_index_array);
+ out_free_bufmap:
+ 	kfree(bufmap);
+ out:
+diff --git a/fs/super.c b/fs/super.c
+index 98bb0629ee108..20f1707807bbd 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -1472,8 +1472,8 @@ struct dentry *mount_nodev(struct file_system_type *fs_type,
+ }
+ EXPORT_SYMBOL(mount_nodev);
+ 
+-static int reconfigure_single(struct super_block *s,
+-			      int flags, void *data)
++int reconfigure_single(struct super_block *s,
++		       int flags, void *data)
+ {
+ 	struct fs_context *fc;
+ 	int ret;
+diff --git a/include/linux/fs_context.h b/include/linux/fs_context.h
+index 5b44b0195a28a..e869ce3ae6600 100644
+--- a/include/linux/fs_context.h
++++ b/include/linux/fs_context.h
+@@ -140,6 +140,8 @@ extern int generic_parse_monolithic(struct fs_context *fc, void *data);
+ extern int vfs_get_tree(struct fs_context *fc);
+ extern void put_fs_context(struct fs_context *fc);
+ extern void fc_drop_locked(struct fs_context *fc);
++int reconfigure_single(struct super_block *s,
++		       int flags, void *data);
+ 
+ /*
+  * sget() wrappers to be called from the ->get_tree() op.
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index ce14fb2772b5b..c94551091dad3 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -1235,7 +1235,18 @@ extern void perf_event_bpf_event(struct bpf_prog *prog,
+ 				 enum perf_bpf_event_type type,
+ 				 u16 flags);
+ 
+-extern struct perf_guest_info_callbacks *perf_guest_cbs;
++extern struct perf_guest_info_callbacks __rcu *perf_guest_cbs;
++static inline struct perf_guest_info_callbacks *perf_get_guest_cbs(void)
++{
++	/*
++	 * Callbacks are RCU-protected and must be READ_ONCE to avoid reloading
++	 * the callbacks between a !NULL check and dereferences, to ensure
++	 * pending stores/changes to the callback pointers are visible before a
++	 * non-NULL perf_guest_cbs is visible to readers, and to prevent a
++	 * module from unloading callbacks while readers are active.
++	 */
++	return rcu_dereference(perf_guest_cbs);
++}
+ extern int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks);
+ extern int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks);
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 639b99a318db1..e2d774cc470ee 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6395,18 +6395,25 @@ static void perf_pending_event(struct irq_work *entry)
+  * Later on, we might change it to a list if there is
+  * another virtualization implementation supporting the callbacks.
+  */
+-struct perf_guest_info_callbacks *perf_guest_cbs;
++struct perf_guest_info_callbacks __rcu *perf_guest_cbs;
+ 
+ int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs)
+ {
+-	perf_guest_cbs = cbs;
++	if (WARN_ON_ONCE(rcu_access_pointer(perf_guest_cbs)))
++		return -EBUSY;
++
++	rcu_assign_pointer(perf_guest_cbs, cbs);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(perf_register_guest_info_callbacks);
+ 
+ int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *cbs)
+ {
+-	perf_guest_cbs = NULL;
++	if (WARN_ON_ONCE(rcu_access_pointer(perf_guest_cbs) != cbs))
++		return -EINVAL;
++
++	rcu_assign_pointer(perf_guest_cbs, NULL);
++	synchronize_rcu();
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(perf_unregister_guest_info_callbacks);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 14ce48f1a8e47..a858bb9e99270 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -1936,6 +1936,7 @@ enum {
+ 	ALC887_FIXUP_ASUS_BASS,
+ 	ALC887_FIXUP_BASS_CHMAP,
+ 	ALC1220_FIXUP_GB_DUAL_CODECS,
++	ALC1220_FIXUP_GB_X570,
+ 	ALC1220_FIXUP_CLEVO_P950,
+ 	ALC1220_FIXUP_CLEVO_PB51ED,
+ 	ALC1220_FIXUP_CLEVO_PB51ED_PINS,
+@@ -2125,6 +2126,29 @@ static void alc1220_fixup_gb_dual_codecs(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc1220_fixup_gb_x570(struct hda_codec *codec,
++				     const struct hda_fixup *fix,
++				     int action)
++{
++	static const hda_nid_t conn1[] = { 0x0c };
++	static const struct coef_fw gb_x570_coefs[] = {
++		WRITE_COEF(0x1a, 0x01c1),
++		WRITE_COEF(0x1b, 0x0202),
++		WRITE_COEF(0x43, 0x3005),
++		{}
++	};
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		snd_hda_override_conn_list(codec, 0x14, ARRAY_SIZE(conn1), conn1);
++		snd_hda_override_conn_list(codec, 0x1b, ARRAY_SIZE(conn1), conn1);
++		break;
++	case HDA_FIXUP_ACT_INIT:
++		alc_process_coef_fw(codec, gb_x570_coefs);
++		break;
++	}
++}
++
+ static void alc1220_fixup_clevo_p950(struct hda_codec *codec,
+ 				     const struct hda_fixup *fix,
+ 				     int action)
+@@ -2427,6 +2451,10 @@ static const struct hda_fixup alc882_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc1220_fixup_gb_dual_codecs,
+ 	},
++	[ALC1220_FIXUP_GB_X570] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc1220_fixup_gb_x570,
++	},
+ 	[ALC1220_FIXUP_CLEVO_P950] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc1220_fixup_clevo_p950,
+@@ -2529,7 +2557,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x13fe, 0x1009, "Advantech MIT-W101", ALC886_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+-	SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_CLEVO_P950),
++	SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_GB_X570),
+ 	SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x11f7, "MSI-GE63", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950),
+@@ -6729,6 +6757,8 @@ enum {
+ 	ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+ 	ALC233_FIXUP_NO_AUDIO_JACK,
+ 	ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME,
++	ALC285_FIXUP_LEGION_Y9000X_SPEAKERS,
++	ALC285_FIXUP_LEGION_Y9000X_AUTOMUTE,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8319,6 +8349,18 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
+ 	},
++	[ALC285_FIXUP_LEGION_Y9000X_SPEAKERS] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_ideapad_s740_coef,
++		.chained = true,
++		.chain_id = ALC285_FIXUP_LEGION_Y9000X_AUTOMUTE,
++	},
++	[ALC285_FIXUP_LEGION_Y9000X_AUTOMUTE] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc287_fixup_legion_15imhg05_speakers,
++		.chained = true,
++		.chain_id = ALC269_FIXUP_THINKPAD_ACPI,
++	},
+ 	[ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS] = {
+ 		.type = HDA_FIXUP_VERBS,
+ 		//.v.verbs = legion_15imhg05_coefs,
+@@ -8857,13 +8899,16 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
++	SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
++	SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
++	SND_PCI_QUIRK(0x17aa, 0x3834, "Lenovo IdeaPad Slim 9i 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP),
+-	SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x384a, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3852, "Lenovo Yoga 7 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3853, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+-	SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-01-27 11:37 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-01-27 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     53b59dae9c32fb59a8c7d1d56391b196ba0ec67e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 27 11:37:37 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 27 11:37:37 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=53b59dae

Linux patch 5.10.94

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1093_linux-5.10.94.patch | 21291 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 21295 insertions(+)

diff --git a/0000_README b/0000_README
index ababebf8..8c30f470 100644
--- a/0000_README
+++ b/0000_README
@@ -415,6 +415,10 @@ Patch:  1092_linux-5.10.93.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.93
 
+Patch:  1093_linux-5.10.94.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.94
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1093_linux-5.10.94.patch b/1093_linux-5.10.94.patch
new file mode 100644
index 00000000..8bbbc313
--- /dev/null
+++ b/1093_linux-5.10.94.patch
@@ -0,0 +1,21291 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32 b/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32
+deleted file mode 100644
+index 73498ff666bd7..0000000000000
+--- a/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32
++++ /dev/null
+@@ -1,62 +0,0 @@
+-What:		/sys/bus/iio/devices/iio:deviceX/in_count0_preset
+-KernelVersion:	4.13
+-Contact:	fabrice.gasnier@st.com
+-Description:
+-		Reading returns the current preset value. Writing sets the
+-		preset value. Encoder counts continuously from 0 to preset
+-		value, depending on direction (up/down).
+-
+-What:		/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
+-KernelVersion:	4.13
+-Contact:	fabrice.gasnier@st.com
+-Description:
+-		Reading returns the list possible quadrature modes.
+-
+-What:		/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
+-KernelVersion:	4.13
+-Contact:	fabrice.gasnier@st.com
+-Description:
+-		Configure the device counter quadrature modes:
+-
+-		- non-quadrature:
+-			Encoder IN1 input servers as the count input (up
+-			direction).
+-
+-		- quadrature:
+-			Encoder IN1 and IN2 inputs are mixed to get direction
+-			and count.
+-
+-What:		/sys/bus/iio/devices/iio:deviceX/in_count_polarity_available
+-KernelVersion:	4.13
+-Contact:	fabrice.gasnier@st.com
+-Description:
+-		Reading returns the list possible active edges.
+-
+-What:		/sys/bus/iio/devices/iio:deviceX/in_count0_polarity
+-KernelVersion:	4.13
+-Contact:	fabrice.gasnier@st.com
+-Description:
+-		Configure the device encoder/counter active edge:
+-
+-		- rising-edge
+-		- falling-edge
+-		- both-edges
+-
+-		In non-quadrature mode, device counts up on active edge.
+-
+-		In quadrature mode, encoder counting scenarios are as follows:
+-
+-		+---------+----------+--------------------+--------------------+
+-		| Active  | Level on |      IN1 signal    |     IN2 signal     |
+-		| edge    | opposite +----------+---------+----------+---------+
+-		|         | signal   |  Rising  | Falling |  Rising  | Falling |
+-		+---------+----------+----------+---------+----------+---------+
+-		| Rising  | High ->  |   Down   |    -    |   Up     |    -    |
+-		| edge    | Low  ->  |   Up     |    -    |   Down   |    -    |
+-		+---------+----------+----------+---------+----------+---------+
+-		| Falling | High ->  |    -     |   Up    |    -     |   Down  |
+-		| edge    | Low  ->  |    -     |   Down  |    -     |   Up    |
+-		+---------+----------+----------+---------+----------+---------+
+-		| Both    | High ->  |   Down   |   Up    |   Up     |   Down  |
+-		| edges   | Low  ->  |   Up     |   Down  |   Down   |   Up    |
+-		+---------+----------+----------+---------+----------+---------+
+diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
+index e05e581af5cfe..985181dba0bac 100644
+--- a/Documentation/admin-guide/hw-vuln/spectre.rst
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -468,7 +468,7 @@ Spectre variant 2
+    before invoking any firmware code to prevent Spectre variant 2 exploits
+    using the firmware.
+ 
+-   Using kernel address space randomization (CONFIG_RANDOMIZE_SLAB=y
++   Using kernel address space randomization (CONFIG_RANDOMIZE_BASE=y
+    and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration) makes
+    attacks on the kernel generally more difficult.
+ 
+diff --git a/Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml b/Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml
+index 0da42ab8fd3a5..8a67bb889f18a 100644
+--- a/Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml
++++ b/Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml
+@@ -10,6 +10,9 @@ title: Amlogic specific extensions to the Synopsys Designware HDMI Controller
+ maintainers:
+   - Neil Armstrong <narmstrong@baylibre.com>
+ 
++allOf:
++  - $ref: /schemas/sound/name-prefix.yaml#
++
+ description: |
+   The Amlogic Meson Synopsys Designware Integration is composed of
+   - A Synopsys DesignWare HDMI Controller IP
+@@ -99,6 +102,8 @@ properties:
+   "#sound-dai-cells":
+     const: 0
+ 
++  sound-name-prefix: true
++
+ required:
+   - compatible
+   - reg
+diff --git a/Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml b/Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml
+index a8d202c9d004c..b8cb1b4dae1ff 100644
+--- a/Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml
++++ b/Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml
+@@ -78,6 +78,10 @@ properties:
+   interrupts:
+     maxItems: 1
+ 
++  amlogic,canvas:
++    description: should point to a canvas provider node
++    $ref: /schemas/types.yaml#/definitions/phandle
++
+   power-domains:
+     maxItems: 1
+     description: phandle to the associated power domain
+@@ -106,6 +110,7 @@ required:
+   - port@1
+   - "#address-cells"
+   - "#size-cells"
++  - amlogic,canvas
+ 
+ additionalProperties: false
+ 
+@@ -118,6 +123,7 @@ examples:
+         interrupts = <3>;
+         #address-cells = <1>;
+         #size-cells = <0>;
++        amlogic,canvas = <&canvas>;
+ 
+         /* CVBS VDAC output port */
+         port@0 {
+diff --git a/Documentation/devicetree/bindings/thermal/thermal-zones.yaml b/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
+index 164f71598c595..1b3954aa71c15 100644
+--- a/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
++++ b/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
+@@ -199,12 +199,11 @@ patternProperties:
+ 
+               contribution:
+                 $ref: /schemas/types.yaml#/definitions/uint32
+-                minimum: 0
+-                maximum: 100
+                 description:
+-                  The percentage contribution of the cooling devices at the
+-                  specific trip temperature referenced in this map
+-                  to this thermal zone
++                  The cooling contribution to the thermal zone of the referred
++                  cooling device at the referred trip point. The contribution is
++                  a ratio of the sum of all cooling contributions within a
++                  thermal zone.
+ 
+             required:
+               - trip
+diff --git a/Documentation/devicetree/bindings/watchdog/samsung-wdt.yaml b/Documentation/devicetree/bindings/watchdog/samsung-wdt.yaml
+index 76cb9586ee00c..93cd77a6e92c0 100644
+--- a/Documentation/devicetree/bindings/watchdog/samsung-wdt.yaml
++++ b/Documentation/devicetree/bindings/watchdog/samsung-wdt.yaml
+@@ -39,8 +39,8 @@ properties:
+   samsung,syscon-phandle:
+     $ref: /schemas/types.yaml#/definitions/phandle
+     description:
+-      Phandle to the PMU system controller node (in case of Exynos5250
+-      and Exynos5420).
++      Phandle to the PMU system controller node (in case of Exynos5250,
++      Exynos5420 and Exynos7).
+ 
+ required:
+   - compatible
+@@ -58,6 +58,7 @@ allOf:
+             enum:
+               - samsung,exynos5250-wdt
+               - samsung,exynos5420-wdt
++              - samsung,exynos7-wdt
+     then:
+       required:
+         - samsung,syscon-phandle
+diff --git a/Documentation/driver-api/dmaengine/dmatest.rst b/Documentation/driver-api/dmaengine/dmatest.rst
+index ee268d445d38b..d2e1d8b58e7dc 100644
+--- a/Documentation/driver-api/dmaengine/dmatest.rst
++++ b/Documentation/driver-api/dmaengine/dmatest.rst
+@@ -143,13 +143,14 @@ Part 5 - Handling channel allocation
+ Allocating Channels
+ -------------------
+ 
+-Channels are required to be configured prior to starting the test run.
+-Attempting to run the test without configuring the channels will fail.
++Channels do not need to be configured prior to starting a test run. Attempting
++to run the test without configuring the channels will result in testing any
++channels that are available.
+ 
+ Example::
+ 
+     % echo 1 > /sys/module/dmatest/parameters/run
+-    dmatest: Could not start test, no channels configured
++    dmatest: No channels configured, continue with any
+ 
+ Channels are registered using the "channel" parameter. Channels can be requested by their
+ name, once requested, the channel is registered and a pending thread is added to the test list.
+diff --git a/Documentation/driver-api/firewire.rst b/Documentation/driver-api/firewire.rst
+index 94a2d7f01d999..d3cfa73cbb2b4 100644
+--- a/Documentation/driver-api/firewire.rst
++++ b/Documentation/driver-api/firewire.rst
+@@ -19,7 +19,7 @@ of kernel interfaces is available via exported symbols in `firewire-core` module
+ Firewire char device data structures
+ ====================================
+ 
+-.. include:: /ABI/stable/firewire-cdev
++.. include:: ../ABI/stable/firewire-cdev
+     :literal:
+ 
+ .. kernel-doc:: include/uapi/linux/firewire-cdev.h
+@@ -28,7 +28,7 @@ Firewire char device data structures
+ Firewire device probing and sysfs interfaces
+ ============================================
+ 
+-.. include:: /ABI/stable/sysfs-bus-firewire
++.. include:: ../ABI/stable/sysfs-bus-firewire
+     :literal:
+ 
+ .. kernel-doc:: drivers/firewire/core-device.c
+diff --git a/Documentation/firmware-guide/acpi/dsd/data-node-references.rst b/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
+index 9b17dc77d18c5..da0e46496fc4d 100644
+--- a/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
++++ b/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
+@@ -5,7 +5,7 @@
+ Referencing hierarchical data nodes
+ ===================================
+ 
+-:Copyright: |copy| 2018 Intel Corporation
++:Copyright: |copy| 2018, 2021 Intel Corporation
+ :Author: Sakari Ailus <sakari.ailus@linux.intel.com>
+ 
+ ACPI in general allows referring to device objects in the tree only.
+@@ -52,12 +52,14 @@ the ANOD object which is also the final target node of the reference.
+ 	    Name (NOD0, Package() {
+ 		ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ 		Package () {
++		    Package () { "reg", 0 },
+ 		    Package () { "random-property", 3 },
+ 		}
+ 	    })
+ 	    Name (NOD1, Package() {
+ 		ToUUID("dbb8e3e6-5886-4ba6-8795-1319f52a966b"),
+ 		Package () {
++		    Package () { "reg", 1 },
+ 		    Package () { "anothernode", "ANOD" },
+ 		}
+ 	    })
+@@ -74,7 +76,11 @@ the ANOD object which is also the final target node of the reference.
+ 	    Name (_DSD, Package () {
+ 		ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ 		Package () {
+-		    Package () { "reference", ^DEV0, "node@1", "anothernode" },
++		    Package () {
++			"reference", Package () {
++			    ^DEV0, "node@1", "anothernode"
++			}
++		    },
+ 		}
+ 	    })
+ 	}
+diff --git a/Makefile b/Makefile
+index 993559750df9d..1071ec486aa5b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 93
++SUBLEVEL = 94
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/Kconfig.debug b/arch/arm/Kconfig.debug
+index 8986a91a6f31b..dd1cf70353986 100644
+--- a/arch/arm/Kconfig.debug
++++ b/arch/arm/Kconfig.debug
+@@ -400,12 +400,12 @@ choice
+ 		  Say Y here if you want kernel low-level debugging support
+ 		  on i.MX25.
+ 
+-	config DEBUG_IMX21_IMX27_UART
+-		bool "i.MX21 and i.MX27 Debug UART"
+-		depends on SOC_IMX21 || SOC_IMX27
++	config DEBUG_IMX27_UART
++		bool "i.MX27 Debug UART"
++		depends on SOC_IMX27
+ 		help
+ 		  Say Y here if you want kernel low-level debugging support
+-		  on i.MX21 or i.MX27.
++		  on i.MX27.
+ 
+ 	config DEBUG_IMX28_UART
+ 		bool "i.MX28 Debug UART"
+@@ -1523,7 +1523,7 @@ config DEBUG_IMX_UART_PORT
+ 	int "i.MX Debug UART Port Selection"
+ 	depends on DEBUG_IMX1_UART || \
+ 		   DEBUG_IMX25_UART || \
+-		   DEBUG_IMX21_IMX27_UART || \
++		   DEBUG_IMX27_UART || \
+ 		   DEBUG_IMX31_UART || \
+ 		   DEBUG_IMX35_UART || \
+ 		   DEBUG_IMX50_UART || \
+@@ -1591,12 +1591,12 @@ config DEBUG_LL_INCLUDE
+ 	default "debug/icedcc.S" if DEBUG_ICEDCC
+ 	default "debug/imx.S" if DEBUG_IMX1_UART || \
+ 				 DEBUG_IMX25_UART || \
+-				 DEBUG_IMX21_IMX27_UART || \
++				 DEBUG_IMX27_UART || \
+ 				 DEBUG_IMX31_UART || \
+ 				 DEBUG_IMX35_UART || \
+ 				 DEBUG_IMX50_UART || \
+ 				 DEBUG_IMX51_UART || \
+-				 DEBUG_IMX53_UART ||\
++				 DEBUG_IMX53_UART || \
+ 				 DEBUG_IMX6Q_UART || \
+ 				 DEBUG_IMX6SL_UART || \
+ 				 DEBUG_IMX6SX_UART || \
+diff --git a/arch/arm/boot/compressed/efi-header.S b/arch/arm/boot/compressed/efi-header.S
+index c0e7a745103e2..230030c130853 100644
+--- a/arch/arm/boot/compressed/efi-header.S
++++ b/arch/arm/boot/compressed/efi-header.S
+@@ -9,16 +9,22 @@
+ #include <linux/sizes.h>
+ 
+ 		.macro	__nop
+-#ifdef CONFIG_EFI_STUB
+-		@ This is almost but not quite a NOP, since it does clobber the
+-		@ condition flags. But it is the best we can do for EFI, since
+-		@ PE/COFF expects the magic string "MZ" at offset 0, while the
+-		@ ARM/Linux boot protocol expects an executable instruction
+-		@ there.
+-		.inst	MZ_MAGIC | (0x1310 << 16)	@ tstne r0, #0x4d000
+-#else
+  AR_CLASS(	mov	r0, r0		)
+   M_CLASS(	nop.w			)
++		.endm
++
++		.macro __initial_nops
++#ifdef CONFIG_EFI_STUB
++		@ This is a two-instruction NOP, which happens to bear the
++		@ PE/COFF signature "MZ" in the first two bytes, so the kernel
++		@ is accepted as an EFI binary. Booting via the UEFI stub
++		@ will not execute those instructions, but the ARM/Linux
++		@ boot protocol does, so we need some NOPs here.
++		.inst	MZ_MAGIC | (0xe225 << 16)	@ eor r5, r5, 0x4d000
++		eor	r5, r5, 0x4d000			@ undo previous insn
++#else
++		__nop
++		__nop
+ #endif
+ 		.endm
+ 
+diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
+index 247ce90559901..7a38c63b62bf0 100644
+--- a/arch/arm/boot/compressed/head.S
++++ b/arch/arm/boot/compressed/head.S
+@@ -190,7 +190,8 @@ start:
+ 		 * were patching the initial instructions of the kernel, i.e
+ 		 * had started to exploit this "patch area".
+ 		 */
+-		.rept	7
++		__initial_nops
++		.rept	5
+ 		__nop
+ 		.endr
+ #ifndef CONFIG_THUMB2_KERNEL
+diff --git a/arch/arm/boot/dts/armada-38x.dtsi b/arch/arm/boot/dts/armada-38x.dtsi
+index 9b1a24cc5e91f..df3c8d1d8f641 100644
+--- a/arch/arm/boot/dts/armada-38x.dtsi
++++ b/arch/arm/boot/dts/armada-38x.dtsi
+@@ -168,7 +168,7 @@
+ 			};
+ 
+ 			uart0: serial@12000 {
+-				compatible = "marvell,armada-38x-uart";
++				compatible = "marvell,armada-38x-uart", "ns16550a";
+ 				reg = <0x12000 0x100>;
+ 				reg-shift = <2>;
+ 				interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+@@ -178,7 +178,7 @@
+ 			};
+ 
+ 			uart1: serial@12100 {
+-				compatible = "marvell,armada-38x-uart";
++				compatible = "marvell,armada-38x-uart", "ns16550a";
+ 				reg = <0x12100 0x100>;
+ 				reg-shift = <2>;
+ 				interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/gemini-nas4220b.dts b/arch/arm/boot/dts/gemini-nas4220b.dts
+index 13112a8a5dd88..6544c730340fa 100644
+--- a/arch/arm/boot/dts/gemini-nas4220b.dts
++++ b/arch/arm/boot/dts/gemini-nas4220b.dts
+@@ -84,7 +84,7 @@
+ 			partitions {
+ 				compatible = "redboot-fis";
+ 				/* Eraseblock at 0xfe0000 */
+-				fis-index-block = <0x1fc>;
++				fis-index-block = <0x7f>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts
+index 32335d4ce478b..d40c3d2c4914e 100644
+--- a/arch/arm/boot/dts/omap3-n900.dts
++++ b/arch/arm/boot/dts/omap3-n900.dts
+@@ -8,6 +8,7 @@
+ 
+ #include "omap34xx.dtsi"
+ #include <dt-bindings/input/input.h>
++#include <dt-bindings/leds/common.h>
+ 
+ /*
+  * Default secure signed bootloader (Nokia X-Loader) does not enable L3 firewall
+@@ -630,63 +631,92 @@
+ 	};
+ 
+ 	lp5523: lp5523@32 {
++		#address-cells = <1>;
++		#size-cells = <0>;
+ 		compatible = "national,lp5523";
+ 		reg = <0x32>;
+ 		clock-mode = /bits/ 8 <0>; /* LP55XX_CLOCK_AUTO */
+-		enable-gpio = <&gpio2 9 GPIO_ACTIVE_HIGH>; /* 41 */
++		enable-gpios = <&gpio2 9 GPIO_ACTIVE_HIGH>; /* 41 */
+ 
+-		chan0 {
++		led@0 {
++			reg = <0>;
+ 			chan-name = "lp5523:kb1";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_WHITE>;
++			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 		};
+ 
+-		chan1 {
++		led@1 {
++			reg = <1>;
+ 			chan-name = "lp5523:kb2";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_WHITE>;
++			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 		};
+ 
+-		chan2 {
++		led@2 {
++			reg = <2>;
+ 			chan-name = "lp5523:kb3";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_WHITE>;
++			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 		};
+ 
+-		chan3 {
++		led@3 {
++			reg = <3>;
+ 			chan-name = "lp5523:kb4";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_WHITE>;
++			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 		};
+ 
+-		chan4 {
++		led@4 {
++			reg = <4>;
+ 			chan-name = "lp5523:b";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_BLUE>;
++			function = LED_FUNCTION_STATUS;
+ 		};
+ 
+-		chan5 {
++		led@5 {
++			reg = <5>;
+ 			chan-name = "lp5523:g";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_GREEN>;
++			function = LED_FUNCTION_STATUS;
+ 		};
+ 
+-		chan6 {
++		led@6 {
++			reg = <6>;
+ 			chan-name = "lp5523:r";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_RED>;
++			function = LED_FUNCTION_STATUS;
+ 		};
+ 
+-		chan7 {
++		led@7 {
++			reg = <7>;
+ 			chan-name = "lp5523:kb5";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_WHITE>;
++			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 		};
+ 
+-		chan8 {
++		led@8 {
++			reg = <8>;
+ 			chan-name = "lp5523:kb6";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_WHITE>;
++			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/stm32f429-disco.dts b/arch/arm/boot/dts/stm32f429-disco.dts
+index 075ac57d0bf4a..6435e099c6326 100644
+--- a/arch/arm/boot/dts/stm32f429-disco.dts
++++ b/arch/arm/boot/dts/stm32f429-disco.dts
+@@ -192,7 +192,7 @@
+ 
+ 	display: display@1{
+ 		/* Connect panel-ilitek-9341 to ltdc */
+-		compatible = "st,sf-tc240t-9370-t";
++		compatible = "st,sf-tc240t-9370-t", "ilitek,ili9341";
+ 		reg = <1>;
+ 		spi-3wire;
+ 		spi-max-frequency = <10000000>;
+diff --git a/arch/arm/include/debug/imx-uart.h b/arch/arm/include/debug/imx-uart.h
+index c8eb83d4b8964..3edbb3c5b42bf 100644
+--- a/arch/arm/include/debug/imx-uart.h
++++ b/arch/arm/include/debug/imx-uart.h
+@@ -11,13 +11,6 @@
+ #define IMX1_UART_BASE_ADDR(n)	IMX1_UART##n##_BASE_ADDR
+ #define IMX1_UART_BASE(n)	IMX1_UART_BASE_ADDR(n)
+ 
+-#define IMX21_UART1_BASE_ADDR	0x1000a000
+-#define IMX21_UART2_BASE_ADDR	0x1000b000
+-#define IMX21_UART3_BASE_ADDR	0x1000c000
+-#define IMX21_UART4_BASE_ADDR	0x1000d000
+-#define IMX21_UART_BASE_ADDR(n)	IMX21_UART##n##_BASE_ADDR
+-#define IMX21_UART_BASE(n)	IMX21_UART_BASE_ADDR(n)
+-
+ #define IMX25_UART1_BASE_ADDR	0x43f90000
+ #define IMX25_UART2_BASE_ADDR	0x43f94000
+ #define IMX25_UART3_BASE_ADDR	0x5000c000
+@@ -26,6 +19,13 @@
+ #define IMX25_UART_BASE_ADDR(n)	IMX25_UART##n##_BASE_ADDR
+ #define IMX25_UART_BASE(n)	IMX25_UART_BASE_ADDR(n)
+ 
++#define IMX27_UART1_BASE_ADDR	0x1000a000
++#define IMX27_UART2_BASE_ADDR	0x1000b000
++#define IMX27_UART3_BASE_ADDR	0x1000c000
++#define IMX27_UART4_BASE_ADDR	0x1000d000
++#define IMX27_UART_BASE_ADDR(n)	IMX27_UART##n##_BASE_ADDR
++#define IMX27_UART_BASE(n)	IMX27_UART_BASE_ADDR(n)
++
+ #define IMX31_UART1_BASE_ADDR	0x43f90000
+ #define IMX31_UART2_BASE_ADDR	0x43f94000
+ #define IMX31_UART3_BASE_ADDR	0x5000c000
+@@ -112,10 +112,10 @@
+ 
+ #ifdef CONFIG_DEBUG_IMX1_UART
+ #define UART_PADDR	IMX_DEBUG_UART_BASE(IMX1)
+-#elif defined(CONFIG_DEBUG_IMX21_IMX27_UART)
+-#define UART_PADDR	IMX_DEBUG_UART_BASE(IMX21)
+ #elif defined(CONFIG_DEBUG_IMX25_UART)
+ #define UART_PADDR	IMX_DEBUG_UART_BASE(IMX25)
++#elif defined(CONFIG_DEBUG_IMX27_UART)
++#define UART_PADDR	IMX_DEBUG_UART_BASE(IMX27)
+ #elif defined(CONFIG_DEBUG_IMX31_UART)
+ #define UART_PADDR	IMX_DEBUG_UART_BASE(IMX31)
+ #elif defined(CONFIG_DEBUG_IMX35_UART)
+diff --git a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+index ee949255ced3f..09ef73b99dd86 100644
+--- a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
++++ b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+@@ -154,8 +154,10 @@ static int __init rcar_gen2_regulator_quirk(void)
+ 		return -ENODEV;
+ 
+ 	for_each_matching_node_and_match(np, rcar_gen2_quirk_match, &id) {
+-		if (!of_device_is_available(np))
++		if (!of_device_is_available(np)) {
++			of_node_put(np);
+ 			break;
++		}
+ 
+ 		ret = of_property_read_u32(np, "reg", &addr);
+ 		if (ret)	/* Skip invalid entry and continue */
+@@ -164,6 +166,7 @@ static int __init rcar_gen2_regulator_quirk(void)
+ 		quirk = kzalloc(sizeof(*quirk), GFP_KERNEL);
+ 		if (!quirk) {
+ 			ret = -ENOMEM;
++			of_node_put(np);
+ 			goto err_mem;
+ 		}
+ 
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index 959b299344e54..7342c8a2b322d 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -52,7 +52,7 @@
+ 		secure-monitor = <&sm>;
+ 	};
+ 
+-	gpu_opp_table: gpu-opp-table {
++	gpu_opp_table: opp-table-gpu {
+ 		compatible = "operating-points-v2";
+ 
+ 		opp-124999998 {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+index 59b5f39088757..b9b8cd4b5ba9d 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+@@ -543,7 +543,7 @@
+ 	pinctrl-0 = <&nor_pins>;
+ 	pinctrl-names = "default";
+ 
+-	mx25u64: spi-flash@0 {
++	mx25u64: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "mxicy,mx25u6435f", "jedec,spi-nor";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
+index a350fee1264d7..a4d34398da358 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
+@@ -6,6 +6,7 @@
+  */
+ 
+ #include "meson-gxbb.dtsi"
++#include <dt-bindings/gpio/gpio.h>
+ 
+ / {
+ 	aliases {
+@@ -64,6 +65,7 @@
+ 		regulator-name = "VDDIO_AO18";
+ 		regulator-min-microvolt = <1800000>;
+ 		regulator-max-microvolt = <1800000>;
++		regulator-always-on;
+ 	};
+ 
+ 	vcc_3v3: regulator-vcc_3v3 {
+@@ -161,6 +163,7 @@
+ 	status = "okay";
+ 	pinctrl-0 = <&hdmi_hpd_pins>, <&hdmi_i2c_pins>;
+ 	pinctrl-names = "default";
++	hdmi-supply = <&vddio_ao18>;
+ };
+ 
+ &hdmi_tx_tmds_port {
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts b/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts
+index 13cdc958ba3ea..71858c9376c25 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts
+@@ -261,11 +261,6 @@
+ 				vcc-supply = <&sb_3v3>;
+ 			};
+ 
+-			rtc@51 {
+-				compatible = "nxp,pcf2129";
+-				reg = <0x51>;
+-			};
+-
+ 			eeprom@56 {
+ 				compatible = "atmel,24c512";
+ 				reg = <0x56>;
+@@ -307,6 +302,15 @@
+ 
+ };
+ 
++&i2c1 {
++	status = "okay";
++
++	rtc@51 {
++		compatible = "nxp,pcf2129";
++		reg = <0x51>;
++	};
++};
++
+ &enetc_port1 {
+ 	phy-handle = <&qds_phy1>;
+ 	phy-connection-type = "rgmii-id";
+diff --git a/arch/arm64/boot/dts/marvell/cn9130.dtsi b/arch/arm64/boot/dts/marvell/cn9130.dtsi
+index a2b7e5ec979d3..327b04134134f 100644
+--- a/arch/arm64/boot/dts/marvell/cn9130.dtsi
++++ b/arch/arm64/boot/dts/marvell/cn9130.dtsi
+@@ -11,6 +11,13 @@
+ 	model = "Marvell Armada CN9130 SoC";
+ 	compatible = "marvell,cn9130", "marvell,armada-ap807-quad",
+ 		     "marvell,armada-ap807";
++
++	aliases {
++		gpio1 = &cp0_gpio1;
++		gpio2 = &cp0_gpio2;
++		spi1 = &cp0_spi0;
++		spi2 = &cp0_spi1;
++	};
+ };
+ 
+ /*
+@@ -35,3 +42,11 @@
+ #undef CP11X_PCIE0_BASE
+ #undef CP11X_PCIE1_BASE
+ #undef CP11X_PCIE2_BASE
++
++&cp0_gpio1 {
++	status = "okay";
++};
++
++&cp0_gpio2 {
++	status = "okay";
++};
+diff --git a/arch/arm64/boot/dts/nvidia/tegra186.dtsi b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+index 0c46ab7bbbf37..eec6418ecdb1a 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra186.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+@@ -985,7 +985,7 @@
+ 
+ 	ccplex@e000000 {
+ 		compatible = "nvidia,tegra186-ccplex-cluster";
+-		reg = <0x0 0x0e000000 0x0 0x3fffff>;
++		reg = <0x0 0x0e000000 0x0 0x400000>;
+ 
+ 		nvidia,bpmp = <&bpmp>;
+ 	};
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index 9b5007e5f790f..05cf606b85c9f 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -782,13 +782,12 @@
+ 			reg = <0x3510000 0x10000>;
+ 			interrupts = <GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_HDA>,
+-				 <&bpmp TEGRA194_CLK_HDA2CODEC_2X>,
+-				 <&bpmp TEGRA194_CLK_HDA2HDMICODEC>;
+-			clock-names = "hda", "hda2codec_2x", "hda2hdmi";
++				 <&bpmp TEGRA194_CLK_HDA2HDMICODEC>,
++				 <&bpmp TEGRA194_CLK_HDA2CODEC_2X>;
++			clock-names = "hda", "hda2hdmi", "hda2codec_2x";
+ 			resets = <&bpmp TEGRA194_RESET_HDA>,
+-				 <&bpmp TEGRA194_RESET_HDA2CODEC_2X>,
+ 				 <&bpmp TEGRA194_RESET_HDA2HDMICODEC>;
+-			reset-names = "hda", "hda2codec_2x", "hda2hdmi";
++			reset-names = "hda", "hda2hdmi";
+ 			power-domains = <&bpmp TEGRA194_POWER_DOMAIN_DISP>;
+ 			interconnects = <&mc TEGRA194_MEMORY_CLIENT_HDAR &emc>,
+ 					<&mc TEGRA194_MEMORY_CLIENT_HDAW &emc>;
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 9cb8f7a052df9..2a1f03cdb52c7 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -221,7 +221,7 @@
+ 			interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
+ 			gpio-controller;
+ 			#gpio-cells = <2>;
+-			gpio-ranges = <&tlmm 0 80>;
++			gpio-ranges = <&tlmm 0 0 80>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <2>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index b1ffc056eea0b..291276a38d7cd 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -18,8 +18,8 @@
+ 	#size-cells = <2>;
+ 
+ 	aliases {
+-		sdhc1 = &sdhc_1; /* SDC1 eMMC slot */
+-		sdhc2 = &sdhc_2; /* SDC2 SD card slot */
++		mmc0 = &sdhc_1; /* SDC1 eMMC slot */
++		mmc1 = &sdhc_2; /* SDC2 SD card slot */
+ 	};
+ 
+ 	chosen { };
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index eef17434d12ae..ef5d03a150693 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -645,9 +645,6 @@
+ 			nvmem-cells = <&gpu_speed_bin>;
+ 			nvmem-cell-names = "speed_bin";
+ 
+-			qcom,gpu-quirk-two-pass-use-wfi;
+-			qcom,gpu-quirk-fault-detect-mask;
+-
+ 			operating-points-v2 = <&gpu_opp_table>;
+ 
+ 			gpu_opp_table: opp-table {
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index ad6561843ba28..e080c317b5e3d 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -365,6 +365,10 @@
+ 	dai@1 {
+ 		reg = <1>;
+ 	};
++
++	dai@2 {
++		reg = <2>;
++	};
+ };
+ 
+ &sound {
+@@ -377,6 +381,7 @@
+ 		"SpkrLeft IN", "SPK1 OUT",
+ 		"SpkrRight IN", "SPK2 OUT",
+ 		"MM_DL1",  "MultiMedia1 Playback",
++		"MM_DL3",  "MultiMedia3 Playback",
+ 		"MultiMedia2 Capture", "MM_UL2";
+ 
+ 	mm1-dai-link {
+@@ -393,6 +398,13 @@
+ 		};
+ 	};
+ 
++	mm3-dai-link {
++		link-name = "MultiMedia3";
++		cpu {
++			sound-dai = <&q6asmdai  MSM_FRONTEND_DAI_MULTIMEDIA3>;
++		};
++	};
++
+ 	slim-dai-link {
+ 		link-name = "SLIM Playback";
+ 		cpu {
+@@ -422,6 +434,21 @@
+ 			sound-dai = <&wcd9340 1>;
+ 		};
+ 	};
++
++	slim-wcd-dai-link {
++		link-name = "SLIM WCD Playback";
++		cpu {
++			sound-dai = <&q6afedai SLIMBUS_1_RX>;
++		};
++
++		platform {
++			sound-dai = <&q6routing>;
++		};
++
++		codec {
++			sound-dai =  <&wcd9340 2>;
++		};
++	};
+ };
+ 
+ &tlmm {
+diff --git a/arch/arm64/boot/dts/renesas/cat875.dtsi b/arch/arm64/boot/dts/renesas/cat875.dtsi
+index 801ea54b027c4..20f8adc635e72 100644
+--- a/arch/arm64/boot/dts/renesas/cat875.dtsi
++++ b/arch/arm64/boot/dts/renesas/cat875.dtsi
+@@ -18,6 +18,7 @@
+ 	pinctrl-names = "default";
+ 	renesas,no-ether-link;
+ 	phy-handle = <&phy0>;
++	phy-mode = "rgmii-id";
+ 	status = "okay";
+ 
+ 	phy0: ethernet-phy@0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index 5832ad830ed14..1ab9f9604af6c 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -25,7 +25,7 @@
+ 		#size-cells = <1>;
+ 		ranges = <0x00 0x00 0x00100000 0x1c000>;
+ 
+-		serdes_ln_ctrl: serdes-ln-ctrl@4080 {
++		serdes_ln_ctrl: mux-controller@4080 {
+ 			compatible = "mmio-mux";
+ 			#mux-control-cells = <1>;
+ 			mux-reg-masks = <0x4080 0x3>, <0x4084 0x3>, /* SERDES0 lane0/1 select */
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200.dtsi b/arch/arm64/boot/dts/ti/k3-j7200.dtsi
+index 66169bcf7c9a4..03a9623f0f956 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200.dtsi
+@@ -60,7 +60,7 @@
+ 			i-cache-sets = <256>;
+ 			d-cache-size = <0x8000>;
+ 			d-cache-line-size = <64>;
+-			d-cache-sets = <128>;
++			d-cache-sets = <256>;
+ 			next-level-cache = <&L2_0>;
+ 		};
+ 
+@@ -74,7 +74,7 @@
+ 			i-cache-sets = <256>;
+ 			d-cache-size = <0x8000>;
+ 			d-cache-line-size = <64>;
+-			d-cache-sets = <128>;
++			d-cache-sets = <256>;
+ 			next-level-cache = <&L2_0>;
+ 		};
+ 	};
+@@ -84,7 +84,7 @@
+ 		cache-level = <2>;
+ 		cache-size = <0x100000>;
+ 		cache-line-size = <64>;
+-		cache-sets = <2048>;
++		cache-sets = <1024>;
+ 		next-level-cache = <&msmc_l3>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e.dtsi b/arch/arm64/boot/dts/ti/k3-j721e.dtsi
+index cc483f7344af3..a199227327ed2 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e.dtsi
+@@ -61,7 +61,7 @@
+ 			i-cache-sets = <256>;
+ 			d-cache-size = <0x8000>;
+ 			d-cache-line-size = <64>;
+-			d-cache-sets = <128>;
++			d-cache-sets = <256>;
+ 			next-level-cache = <&L2_0>;
+ 		};
+ 
+@@ -75,7 +75,7 @@
+ 			i-cache-sets = <256>;
+ 			d-cache-size = <0x8000>;
+ 			d-cache-line-size = <64>;
+-			d-cache-sets = <128>;
++			d-cache-sets = <256>;
+ 			next-level-cache = <&L2_0>;
+ 		};
+ 	};
+@@ -85,7 +85,7 @@
+ 		cache-level = <2>;
+ 		cache-size = <0x100000>;
+ 		cache-line-size = <64>;
+-		cache-sets = <2048>;
++		cache-sets = <1024>;
+ 		next-level-cache = <&msmc_l3>;
+ 	};
+ 
+diff --git a/arch/arm64/lib/clear_page.S b/arch/arm64/lib/clear_page.S
+index 073acbf02a7c8..1fd5d790ab800 100644
+--- a/arch/arm64/lib/clear_page.S
++++ b/arch/arm64/lib/clear_page.S
+@@ -14,8 +14,9 @@
+  * Parameters:
+  *	x0 - dest
+  */
+-SYM_FUNC_START(clear_page)
++SYM_FUNC_START_PI(clear_page)
+ 	mrs	x1, dczid_el0
++	tbnz	x1, #4, 2f	/* Branch if DC ZVA is prohibited */
+ 	and	w1, w1, #0xf
+ 	mov	x2, #4
+ 	lsl	x1, x2, x1
+@@ -25,5 +26,14 @@ SYM_FUNC_START(clear_page)
+ 	tst	x0, #(PAGE_SIZE - 1)
+ 	b.ne	1b
+ 	ret
+-SYM_FUNC_END(clear_page)
++
++2:	stnp	xzr, xzr, [x0]
++	stnp	xzr, xzr, [x0, #16]
++	stnp	xzr, xzr, [x0, #32]
++	stnp	xzr, xzr, [x0, #48]
++	add	x0, x0, #64
++	tst	x0, #(PAGE_SIZE - 1)
++	b.ne	2b
++	ret
++SYM_FUNC_END_PI(clear_page)
+ EXPORT_SYMBOL(clear_page)
+diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
+index e7a793961408d..29144f4cd4492 100644
+--- a/arch/arm64/lib/copy_page.S
++++ b/arch/arm64/lib/copy_page.S
+@@ -17,7 +17,7 @@
+  *	x0 - dest
+  *	x1 - src
+  */
+-SYM_FUNC_START(copy_page)
++SYM_FUNC_START_PI(copy_page)
+ alternative_if ARM64_HAS_NO_HW_PREFETCH
+ 	// Prefetch three cache lines ahead.
+ 	prfm	pldl1strm, [x1, #128]
+@@ -75,5 +75,5 @@ alternative_else_nop_endif
+ 	stnp	x16, x17, [x0, #112 - 256]
+ 
+ 	ret
+-SYM_FUNC_END(copy_page)
++SYM_FUNC_END_PI(copy_page)
+ EXPORT_SYMBOL(copy_page)
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 23d756fe0fd6c..3442bdd4314cb 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -1985,6 +1985,10 @@ config SYS_HAS_CPU_MIPS64_R1
+ config SYS_HAS_CPU_MIPS64_R2
+ 	bool
+ 
++config SYS_HAS_CPU_MIPS64_R5
++	bool
++	select ARCH_HAS_SYNC_DMA_FOR_CPU if DMA_NONCOHERENT
++
+ config SYS_HAS_CPU_MIPS64_R6
+ 	bool
+ 	select ARCH_HAS_SYNC_DMA_FOR_CPU if DMA_NONCOHERENT
+@@ -2146,7 +2150,7 @@ config CPU_SUPPORTS_ADDRWINCFG
+ 	bool
+ config CPU_SUPPORTS_HUGEPAGES
+ 	bool
+-	depends on !(32BIT && (ARCH_PHYS_ADDR_T_64BIT || EVA))
++	depends on !(32BIT && (PHYS_ADDR_T_64BIT || EVA))
+ config MIPS_PGD_C0_CONTEXT
+ 	bool
+ 	default y if 64BIT && (CPU_MIPSR2 || CPU_MIPSR6) && !CPU_XLP
+diff --git a/arch/mips/bcm63xx/clk.c b/arch/mips/bcm63xx/clk.c
+index aba6e2d6a736c..dcfa0ea912fe1 100644
+--- a/arch/mips/bcm63xx/clk.c
++++ b/arch/mips/bcm63xx/clk.c
+@@ -387,6 +387,12 @@ struct clk *clk_get_parent(struct clk *clk)
+ }
+ EXPORT_SYMBOL(clk_get_parent);
+ 
++int clk_set_parent(struct clk *clk, struct clk *parent)
++{
++	return 0;
++}
++EXPORT_SYMBOL(clk_set_parent);
++
+ unsigned long clk_get_rate(struct clk *clk)
+ {
+ 	if (!clk)
+diff --git a/arch/mips/cavium-octeon/octeon-platform.c b/arch/mips/cavium-octeon/octeon-platform.c
+index d56e9b9d2e434..a994022e32c9f 100644
+--- a/arch/mips/cavium-octeon/octeon-platform.c
++++ b/arch/mips/cavium-octeon/octeon-platform.c
+@@ -328,6 +328,7 @@ static int __init octeon_ehci_device_init(void)
+ 
+ 	pd->dev.platform_data = &octeon_ehci_pdata;
+ 	octeon_ehci_hw_start(&pd->dev);
++	put_device(&pd->dev);
+ 
+ 	return ret;
+ }
+@@ -391,6 +392,7 @@ static int __init octeon_ohci_device_init(void)
+ 
+ 	pd->dev.platform_data = &octeon_ohci_pdata;
+ 	octeon_ohci_hw_start(&pd->dev);
++	put_device(&pd->dev);
+ 
+ 	return ret;
+ }
+diff --git a/arch/mips/cavium-octeon/octeon-usb.c b/arch/mips/cavium-octeon/octeon-usb.c
+index 950e6c6e86297..fa87e5aa1811d 100644
+--- a/arch/mips/cavium-octeon/octeon-usb.c
++++ b/arch/mips/cavium-octeon/octeon-usb.c
+@@ -544,6 +544,7 @@ static int __init dwc3_octeon_device_init(void)
+ 			devm_iounmap(&pdev->dev, base);
+ 			devm_release_mem_region(&pdev->dev, res->start,
+ 						resource_size(res));
++			put_device(&pdev->dev);
+ 		}
+ 	} while (node != NULL);
+ 
+diff --git a/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h b/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h
+index 87a5bfbf8cfe9..28572ddfb004a 100644
+--- a/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h
++++ b/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h
+@@ -36,7 +36,7 @@
+ 	nop
+ 	/* Loongson-3A R2/R3 */
+ 	andi	t0, (PRID_IMP_MASK | PRID_REV_MASK)
+-	slti	t0, (PRID_IMP_LOONGSON_64C | PRID_REV_LOONGSON3A_R2_0)
++	slti	t0, t0, (PRID_IMP_LOONGSON_64C | PRID_REV_LOONGSON3A_R2_0)
+ 	bnez	t0, 2f
+ 	nop
+ 1:
+@@ -71,7 +71,7 @@
+ 	nop
+ 	/* Loongson-3A R2/R3 */
+ 	andi	t0, (PRID_IMP_MASK | PRID_REV_MASK)
+-	slti	t0, (PRID_IMP_LOONGSON_64C | PRID_REV_LOONGSON3A_R2_0)
++	slti	t0, t0, (PRID_IMP_LOONGSON_64C | PRID_REV_LOONGSON3A_R2_0)
+ 	bnez	t0, 2f
+ 	nop
+ 1:
+diff --git a/arch/mips/include/asm/octeon/cvmx-bootinfo.h b/arch/mips/include/asm/octeon/cvmx-bootinfo.h
+index c114a7ba0badd..e77e8b7c00838 100644
+--- a/arch/mips/include/asm/octeon/cvmx-bootinfo.h
++++ b/arch/mips/include/asm/octeon/cvmx-bootinfo.h
+@@ -317,7 +317,7 @@ enum cvmx_chip_types_enum {
+ 
+ /* Functions to return string based on type */
+ #define ENUM_BRD_TYPE_CASE(x) \
+-	case x: return(#x + 16);	/* Skip CVMX_BOARD_TYPE_ */
++	case x: return (&#x[16]);	/* Skip CVMX_BOARD_TYPE_ */
+ static inline const char *cvmx_board_type_to_string(enum
+ 						    cvmx_board_types_enum type)
+ {
+@@ -408,7 +408,7 @@ static inline const char *cvmx_board_type_to_string(enum
+ }
+ 
+ #define ENUM_CHIP_TYPE_CASE(x) \
+-	case x: return(#x + 15);	/* Skip CVMX_CHIP_TYPE */
++	case x: return (&#x[15]);	/* Skip CVMX_CHIP_TYPE */
+ static inline const char *cvmx_chip_type_to_string(enum
+ 						   cvmx_chip_types_enum type)
+ {
+diff --git a/arch/mips/lantiq/clk.c b/arch/mips/lantiq/clk.c
+index 4916cccf378fd..7a623684d9b5e 100644
+--- a/arch/mips/lantiq/clk.c
++++ b/arch/mips/lantiq/clk.c
+@@ -164,6 +164,12 @@ struct clk *clk_get_parent(struct clk *clk)
+ }
+ EXPORT_SYMBOL(clk_get_parent);
+ 
++int clk_set_parent(struct clk *clk, struct clk *parent)
++{
++	return 0;
++}
++EXPORT_SYMBOL(clk_set_parent);
++
+ static inline u32 get_counter_resolution(void)
+ {
+ 	u32 res;
+diff --git a/arch/openrisc/include/asm/syscalls.h b/arch/openrisc/include/asm/syscalls.h
+index 3a7eeae6f56a8..aa1c7e98722e3 100644
+--- a/arch/openrisc/include/asm/syscalls.h
++++ b/arch/openrisc/include/asm/syscalls.h
+@@ -22,9 +22,11 @@ asmlinkage long sys_or1k_atomic(unsigned long type, unsigned long *v1,
+ 
+ asmlinkage long __sys_clone(unsigned long clone_flags, unsigned long newsp,
+ 			void __user *parent_tid, void __user *child_tid, int tls);
++asmlinkage long __sys_clone3(struct clone_args __user *uargs, size_t size);
+ asmlinkage long __sys_fork(void);
+ 
+ #define sys_clone __sys_clone
++#define sys_clone3 __sys_clone3
+ #define sys_fork __sys_fork
+ 
+ #endif /* __ASM_OPENRISC_SYSCALLS_H */
+diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S
+index 98e4f97db5159..b42d32d79b2e6 100644
+--- a/arch/openrisc/kernel/entry.S
++++ b/arch/openrisc/kernel/entry.S
+@@ -1170,6 +1170,11 @@ ENTRY(__sys_clone)
+ 	l.j	_fork_save_extra_regs_and_call
+ 	 l.nop
+ 
++ENTRY(__sys_clone3)
++	l.movhi	r29,hi(sys_clone3)
++	l.j	_fork_save_extra_regs_and_call
++	 l.ori	r29,r29,lo(sys_clone3)
++
+ ENTRY(__sys_fork)
+ 	l.movhi	r29,hi(sys_fork)
+ 	l.ori	r29,r29,lo(sys_fork)
+diff --git a/arch/parisc/include/asm/special_insns.h b/arch/parisc/include/asm/special_insns.h
+index a303ae9a77f41..16ee41e77174f 100644
+--- a/arch/parisc/include/asm/special_insns.h
++++ b/arch/parisc/include/asm/special_insns.h
+@@ -2,28 +2,32 @@
+ #ifndef __PARISC_SPECIAL_INSNS_H
+ #define __PARISC_SPECIAL_INSNS_H
+ 
+-#define lpa(va)	({			\
+-	unsigned long pa;		\
+-	__asm__ __volatile__(		\
+-		"copy %%r0,%0\n\t"	\
+-		"lpa %%r0(%1),%0"	\
+-		: "=r" (pa)		\
+-		: "r" (va)		\
+-		: "memory"		\
+-	);				\
+-	pa;				\
++#define lpa(va)	({					\
++	unsigned long pa;				\
++	__asm__ __volatile__(				\
++		"copy %%r0,%0\n"			\
++		"8:\tlpa %%r0(%1),%0\n"			\
++		"9:\n"					\
++		ASM_EXCEPTIONTABLE_ENTRY(8b, 9b)	\
++		: "=&r" (pa)				\
++		: "r" (va)				\
++		: "memory"				\
++	);						\
++	pa;						\
+ })
+ 
+-#define lpa_user(va)	({		\
+-	unsigned long pa;		\
+-	__asm__ __volatile__(		\
+-		"copy %%r0,%0\n\t"	\
+-		"lpa %%r0(%%sr3,%1),%0"	\
+-		: "=r" (pa)		\
+-		: "r" (va)		\
+-		: "memory"		\
+-	);				\
+-	pa;				\
++#define lpa_user(va)	({				\
++	unsigned long pa;				\
++	__asm__ __volatile__(				\
++		"copy %%r0,%0\n"			\
++		"8:\tlpa %%r0(%%sr3,%1),%0\n"		\
++		"9:\n"					\
++		ASM_EXCEPTIONTABLE_ENTRY(8b, 9b)	\
++		: "=&r" (pa)				\
++		: "r" (va)				\
++		: "memory"				\
++	);						\
++	pa;						\
+ })
+ 
+ #define mfctl(reg)	({		\
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index 43f56335759a4..269b737d26299 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -784,7 +784,7 @@ void notrace handle_interruption(int code, struct pt_regs *regs)
+ 	     * unless pagefault_disable() was called before.
+ 	     */
+ 
+-	    if (fault_space == 0 && !faulthandler_disabled())
++	    if (faulthandler_disabled() || fault_space == 0)
+ 	    {
+ 		/* Clean up and return if in exception table. */
+ 		if (fixup_exception(regs))
+diff --git a/arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi b/arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi
+index c90702b04a530..48e5cd61599c6 100644
+--- a/arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi
++++ b/arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi
+@@ -79,6 +79,7 @@ fman0: fman@400000 {
+ 		#size-cells = <0>;
+ 		compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
+ 		reg = <0xfc000 0x1000>;
++		fsl,erratum-a009885;
+ 	};
+ 
+ 	xmdio0: mdio@fd000 {
+@@ -86,6 +87,7 @@ fman0: fman@400000 {
+ 		#size-cells = <0>;
+ 		compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
+ 		reg = <0xfd000 0x1000>;
++		fsl,erratum-a009885;
+ 	};
+ };
+ 
+diff --git a/arch/powerpc/include/asm/cpu_setup_power.h b/arch/powerpc/include/asm/cpu_setup_power.h
+new file mode 100644
+index 0000000000000..24be9131f8032
+--- /dev/null
++++ b/arch/powerpc/include/asm/cpu_setup_power.h
+@@ -0,0 +1,12 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++/*
++ * Copyright (C) 2020 IBM Corporation
++ */
++void __setup_cpu_power7(unsigned long offset, struct cpu_spec *spec);
++void __restore_cpu_power7(void);
++void __setup_cpu_power8(unsigned long offset, struct cpu_spec *spec);
++void __restore_cpu_power8(void);
++void __setup_cpu_power9(unsigned long offset, struct cpu_spec *spec);
++void __restore_cpu_power9(void);
++void __setup_cpu_power10(unsigned long offset, struct cpu_spec *spec);
++void __restore_cpu_power10(void);
+diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
+index 0363734ff56e0..0f2acbb966740 100644
+--- a/arch/powerpc/include/asm/hw_irq.h
++++ b/arch/powerpc/include/asm/hw_irq.h
+@@ -38,6 +38,8 @@
+ #define PACA_IRQ_MUST_HARD_MASK	(PACA_IRQ_EE)
+ #endif
+ 
++#endif /* CONFIG_PPC64 */
++
+ /*
+  * flags for paca->irq_soft_mask
+  */
+@@ -46,8 +48,6 @@
+ #define IRQS_PMI_DISABLED	2
+ #define IRQS_ALL_DISABLED	(IRQS_DISABLED | IRQS_PMI_DISABLED)
+ 
+-#endif /* CONFIG_PPC64 */
+-
+ #ifndef __ASSEMBLY__
+ 
+ extern void replay_system_reset(void);
+@@ -175,6 +175,42 @@ static inline bool arch_irqs_disabled(void)
+ 	return arch_irqs_disabled_flags(arch_local_save_flags());
+ }
+ 
++static inline void set_pmi_irq_pending(void)
++{
++	/*
++	 * Invoked from PMU callback functions to set PMI bit in the paca.
++	 * This has to be called with irq's disabled (via hard_irq_disable()).
++	 */
++	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
++		WARN_ON_ONCE(mfmsr() & MSR_EE);
++
++	get_paca()->irq_happened |= PACA_IRQ_PMI;
++}
++
++static inline void clear_pmi_irq_pending(void)
++{
++	/*
++	 * Invoked from PMU callback functions to clear the pending PMI bit
++	 * in the paca.
++	 */
++	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
++		WARN_ON_ONCE(mfmsr() & MSR_EE);
++
++	get_paca()->irq_happened &= ~PACA_IRQ_PMI;
++}
++
++static inline bool pmi_irq_pending(void)
++{
++	/*
++	 * Invoked from PMU callback functions to check if there is a pending
++	 * PMI bit in the paca.
++	 */
++	if (get_paca()->irq_happened & PACA_IRQ_PMI)
++		return true;
++
++	return false;
++}
++
+ #ifdef CONFIG_PPC_BOOK3S
+ /*
+  * To support disabling and enabling of irq with PMI, set of
+@@ -296,6 +332,10 @@ extern void irq_set_pending_from_srr1(unsigned long srr1);
+ 
+ extern void force_external_irq_replay(void);
+ 
++static inline void irq_soft_mask_regs_set_state(struct pt_regs *regs, unsigned long val)
++{
++	regs->softe = val;
++}
+ #else /* CONFIG_PPC64 */
+ 
+ static inline unsigned long arch_local_save_flags(void)
+@@ -364,6 +404,13 @@ static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
+ 
+ static inline void may_hard_irq_enable(void) { }
+ 
++static inline void clear_pmi_irq_pending(void) { }
++static inline void set_pmi_irq_pending(void) { }
++static inline bool pmi_irq_pending(void) { return false; }
++
++static inline void irq_soft_mask_regs_set_state(struct pt_regs *regs, unsigned long val)
++{
++}
+ #endif /* CONFIG_PPC64 */
+ 
+ #define ARCH_IRQ_INIT_FLAGS	IRQ_NOREQUEST
+diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
+index f4b98903064f5..6afb14b6bbc26 100644
+--- a/arch/powerpc/include/asm/reg.h
++++ b/arch/powerpc/include/asm/reg.h
+@@ -865,6 +865,7 @@
+ #define   MMCR0_BHRBA	0x00200000UL /* BHRB Access allowed in userspace */
+ #define   MMCR0_EBE	0x00100000UL /* Event based branch enable */
+ #define   MMCR0_PMCC	0x000c0000UL /* PMC control */
++#define   MMCR0_PMCCEXT	ASM_CONST(0x00000200) /* PMCCEXT control */
+ #define   MMCR0_PMCC_U6	0x00080000UL /* PMC1-6 are R/W by user (PR) */
+ #define   MMCR0_PMC1CE	0x00008000UL /* PMC1 count enable*/
+ #define   MMCR0_PMCjCE	ASM_CONST(0x00004000) /* PMCj count enable*/
+diff --git a/arch/powerpc/kernel/btext.c b/arch/powerpc/kernel/btext.c
+index 803c2a45b22ac..1cffb5e7c38d6 100644
+--- a/arch/powerpc/kernel/btext.c
++++ b/arch/powerpc/kernel/btext.c
+@@ -241,8 +241,10 @@ int __init btext_find_display(int allow_nonstdout)
+ 			rc = btext_initialize(np);
+ 			printk("result: %d\n", rc);
+ 		}
+-		if (rc == 0)
++		if (rc == 0) {
++			of_node_put(np);
+ 			break;
++		}
+ 	}
+ 	return rc;
+ }
+diff --git a/arch/powerpc/kernel/cpu_setup_power.S b/arch/powerpc/kernel/cpu_setup_power.S
+deleted file mode 100644
+index 704e8b9501eee..0000000000000
+--- a/arch/powerpc/kernel/cpu_setup_power.S
++++ /dev/null
+@@ -1,252 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-or-later */
+-/*
+- * This file contains low level CPU setup functions.
+- *    Copyright (C) 2003 Benjamin Herrenschmidt (benh@kernel.crashing.org)
+- */
+-
+-#include <asm/processor.h>
+-#include <asm/page.h>
+-#include <asm/cputable.h>
+-#include <asm/ppc_asm.h>
+-#include <asm/asm-offsets.h>
+-#include <asm/cache.h>
+-#include <asm/book3s/64/mmu-hash.h>
+-
+-/* Entry: r3 = crap, r4 = ptr to cputable entry
+- *
+- * Note that we can be called twice for pseudo-PVRs
+- */
+-_GLOBAL(__setup_cpu_power7)
+-	mflr	r11
+-	bl	__init_hvmode_206
+-	mtlr	r11
+-	beqlr
+-	li	r0,0
+-	mtspr	SPRN_LPID,r0
+-	LOAD_REG_IMMEDIATE(r0, PCR_MASK)
+-	mtspr	SPRN_PCR,r0
+-	mfspr	r3,SPRN_LPCR
+-	li	r4,(LPCR_LPES1 >> LPCR_LPES_SH)
+-	bl	__init_LPCR_ISA206
+-	mtlr	r11
+-	blr
+-
+-_GLOBAL(__restore_cpu_power7)
+-	mflr	r11
+-	mfmsr	r3
+-	rldicl.	r0,r3,4,63
+-	beqlr
+-	li	r0,0
+-	mtspr	SPRN_LPID,r0
+-	LOAD_REG_IMMEDIATE(r0, PCR_MASK)
+-	mtspr	SPRN_PCR,r0
+-	mfspr	r3,SPRN_LPCR
+-	li	r4,(LPCR_LPES1 >> LPCR_LPES_SH)
+-	bl	__init_LPCR_ISA206
+-	mtlr	r11
+-	blr
+-
+-_GLOBAL(__setup_cpu_power8)
+-	mflr	r11
+-	bl	__init_FSCR
+-	bl	__init_PMU
+-	bl	__init_PMU_ISA207
+-	bl	__init_hvmode_206
+-	mtlr	r11
+-	beqlr
+-	li	r0,0
+-	mtspr	SPRN_LPID,r0
+-	LOAD_REG_IMMEDIATE(r0, PCR_MASK)
+-	mtspr	SPRN_PCR,r0
+-	mfspr	r3,SPRN_LPCR
+-	ori	r3, r3, LPCR_PECEDH
+-	li	r4,0 /* LPES = 0 */
+-	bl	__init_LPCR_ISA206
+-	bl	__init_HFSCR
+-	bl	__init_PMU_HV
+-	bl	__init_PMU_HV_ISA207
+-	mtlr	r11
+-	blr
+-
+-_GLOBAL(__restore_cpu_power8)
+-	mflr	r11
+-	bl	__init_FSCR
+-	bl	__init_PMU
+-	bl	__init_PMU_ISA207
+-	mfmsr	r3
+-	rldicl.	r0,r3,4,63
+-	mtlr	r11
+-	beqlr
+-	li	r0,0
+-	mtspr	SPRN_LPID,r0
+-	LOAD_REG_IMMEDIATE(r0, PCR_MASK)
+-	mtspr	SPRN_PCR,r0
+-	mfspr   r3,SPRN_LPCR
+-	ori	r3, r3, LPCR_PECEDH
+-	li	r4,0 /* LPES = 0 */
+-	bl	__init_LPCR_ISA206
+-	bl	__init_HFSCR
+-	bl	__init_PMU_HV
+-	bl	__init_PMU_HV_ISA207
+-	mtlr	r11
+-	blr
+-
+-_GLOBAL(__setup_cpu_power10)
+-	mflr	r11
+-	bl	__init_FSCR_power10
+-	bl	__init_PMU
+-	bl	__init_PMU_ISA31
+-	b	1f
+-
+-_GLOBAL(__setup_cpu_power9)
+-	mflr	r11
+-	bl	__init_FSCR_power9
+-	bl	__init_PMU
+-1:	bl	__init_hvmode_206
+-	mtlr	r11
+-	beqlr
+-	li	r0,0
+-	mtspr	SPRN_PSSCR,r0
+-	mtspr	SPRN_LPID,r0
+-	mtspr	SPRN_PID,r0
+-	LOAD_REG_IMMEDIATE(r0, PCR_MASK)
+-	mtspr	SPRN_PCR,r0
+-	mfspr	r3,SPRN_LPCR
+-	LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE  | LPCR_HEIC)
+-	or	r3, r3, r4
+-	LOAD_REG_IMMEDIATE(r4, LPCR_UPRT | LPCR_HR)
+-	andc	r3, r3, r4
+-	li	r4,0 /* LPES = 0 */
+-	bl	__init_LPCR_ISA300
+-	bl	__init_HFSCR
+-	bl	__init_PMU_HV
+-	mtlr	r11
+-	blr
+-
+-_GLOBAL(__restore_cpu_power10)
+-	mflr	r11
+-	bl	__init_FSCR_power10
+-	bl	__init_PMU
+-	bl	__init_PMU_ISA31
+-	b	1f
+-
+-_GLOBAL(__restore_cpu_power9)
+-	mflr	r11
+-	bl	__init_FSCR_power9
+-	bl	__init_PMU
+-1:	mfmsr	r3
+-	rldicl.	r0,r3,4,63
+-	mtlr	r11
+-	beqlr
+-	li	r0,0
+-	mtspr	SPRN_PSSCR,r0
+-	mtspr	SPRN_LPID,r0
+-	mtspr	SPRN_PID,r0
+-	LOAD_REG_IMMEDIATE(r0, PCR_MASK)
+-	mtspr	SPRN_PCR,r0
+-	mfspr   r3,SPRN_LPCR
+-	LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE | LPCR_HEIC)
+-	or	r3, r3, r4
+-	LOAD_REG_IMMEDIATE(r4, LPCR_UPRT | LPCR_HR)
+-	andc	r3, r3, r4
+-	li	r4,0 /* LPES = 0 */
+-	bl	__init_LPCR_ISA300
+-	bl	__init_HFSCR
+-	bl	__init_PMU_HV
+-	mtlr	r11
+-	blr
+-
+-__init_hvmode_206:
+-	/* Disable CPU_FTR_HVMODE and exit if MSR:HV is not set */
+-	mfmsr	r3
+-	rldicl.	r0,r3,4,63
+-	bnelr
+-	ld	r5,CPU_SPEC_FEATURES(r4)
+-	LOAD_REG_IMMEDIATE(r6,CPU_FTR_HVMODE | CPU_FTR_P9_TM_HV_ASSIST)
+-	andc	r5,r5,r6
+-	std	r5,CPU_SPEC_FEATURES(r4)
+-	blr
+-
+-__init_LPCR_ISA206:
+-	/* Setup a sane LPCR:
+-	 *   Called with initial LPCR in R3 and desired LPES 2-bit value in R4
+-	 *
+-	 *   LPES = 0b01 (HSRR0/1 used for 0x500)
+-	 *   PECE = 0b111
+-	 *   DPFD = 4
+-	 *   HDICE = 0
+-	 *   VC = 0b100 (VPM0=1, VPM1=0, ISL=0)
+-	 *   VRMASD = 0b10000 (L=1, LP=00)
+-	 *
+-	 * Other bits untouched for now
+-	 */
+-	li	r5,0x10
+-	rldimi	r3,r5, LPCR_VRMASD_SH, 64-LPCR_VRMASD_SH-5
+-
+-	/* POWER9 has no VRMASD */
+-__init_LPCR_ISA300:
+-	rldimi	r3,r4, LPCR_LPES_SH, 64-LPCR_LPES_SH-2
+-	ori	r3,r3,(LPCR_PECE0|LPCR_PECE1|LPCR_PECE2)
+-	li	r5,4
+-	rldimi	r3,r5, LPCR_DPFD_SH, 64-LPCR_DPFD_SH-3
+-	clrrdi	r3,r3,1		/* clear HDICE */
+-	li	r5,4
+-	rldimi	r3,r5, LPCR_VC_SH, 0
+-	mtspr	SPRN_LPCR,r3
+-	isync
+-	blr
+-
+-__init_FSCR_power10:
+-	mfspr	r3, SPRN_FSCR
+-	ori	r3, r3, FSCR_PREFIX
+-	mtspr	SPRN_FSCR, r3
+-	// fall through
+-
+-__init_FSCR_power9:
+-	mfspr	r3, SPRN_FSCR
+-	ori	r3, r3, FSCR_SCV
+-	mtspr	SPRN_FSCR, r3
+-	// fall through
+-
+-__init_FSCR:
+-	mfspr	r3,SPRN_FSCR
+-	ori	r3,r3,FSCR_TAR|FSCR_EBB
+-	mtspr	SPRN_FSCR,r3
+-	blr
+-
+-__init_HFSCR:
+-	mfspr	r3,SPRN_HFSCR
+-	ori	r3,r3,HFSCR_TAR|HFSCR_TM|HFSCR_BHRB|HFSCR_PM|\
+-		      HFSCR_DSCR|HFSCR_VECVSX|HFSCR_FP|HFSCR_EBB|HFSCR_MSGP
+-	mtspr	SPRN_HFSCR,r3
+-	blr
+-
+-__init_PMU_HV:
+-	li	r5,0
+-	mtspr	SPRN_MMCRC,r5
+-	blr
+-
+-__init_PMU_HV_ISA207:
+-	li	r5,0
+-	mtspr	SPRN_MMCRH,r5
+-	blr
+-
+-__init_PMU:
+-	li	r5,0
+-	mtspr	SPRN_MMCRA,r5
+-	mtspr	SPRN_MMCR0,r5
+-	mtspr	SPRN_MMCR1,r5
+-	mtspr	SPRN_MMCR2,r5
+-	blr
+-
+-__init_PMU_ISA207:
+-	li	r5,0
+-	mtspr	SPRN_MMCRS,r5
+-	blr
+-
+-__init_PMU_ISA31:
+-	li	r5,0
+-	mtspr	SPRN_MMCR3,r5
+-	LOAD_REG_IMMEDIATE(r5, MMCRA_BHRB_DISABLE)
+-	mtspr	SPRN_MMCRA,r5
+-	blr
+diff --git a/arch/powerpc/kernel/cpu_setup_power.c b/arch/powerpc/kernel/cpu_setup_power.c
+new file mode 100644
+index 0000000000000..3cca88ee96d71
+--- /dev/null
++++ b/arch/powerpc/kernel/cpu_setup_power.c
+@@ -0,0 +1,272 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ * Copyright 2020, Jordan Niethe, IBM Corporation.
++ *
++ * This file contains low level CPU setup functions.
++ * Originally written in assembly by Benjamin Herrenschmidt & various other
++ * authors.
++ */
++
++#include <asm/reg.h>
++#include <asm/synch.h>
++#include <linux/bitops.h>
++#include <asm/cputable.h>
++#include <asm/cpu_setup_power.h>
++
++/* Disable CPU_FTR_HVMODE and return false if MSR:HV is not set */
++static bool init_hvmode_206(struct cpu_spec *t)
++{
++	u64 msr;
++
++	msr = mfmsr();
++	if (msr & MSR_HV)
++		return true;
++
++	t->cpu_features &= ~(CPU_FTR_HVMODE | CPU_FTR_P9_TM_HV_ASSIST);
++	return false;
++}
++
++static void init_LPCR_ISA300(u64 lpcr, u64 lpes)
++{
++	/* POWER9 has no VRMASD */
++	lpcr |= (lpes << LPCR_LPES_SH) & LPCR_LPES;
++	lpcr |= LPCR_PECE0|LPCR_PECE1|LPCR_PECE2;
++	lpcr |= (4ull << LPCR_DPFD_SH) & LPCR_DPFD;
++	lpcr &= ~LPCR_HDICE;	/* clear HDICE */
++	lpcr |= (4ull << LPCR_VC_SH);
++	mtspr(SPRN_LPCR, lpcr);
++	isync();
++}
++
++/*
++ * Setup a sane LPCR:
++ *   Called with initial LPCR and desired LPES 2-bit value
++ *
++ *   LPES = 0b01 (HSRR0/1 used for 0x500)
++ *   PECE = 0b111
++ *   DPFD = 4
++ *   HDICE = 0
++ *   VC = 0b100 (VPM0=1, VPM1=0, ISL=0)
++ *   VRMASD = 0b10000 (L=1, LP=00)
++ *
++ * Other bits untouched for now
++ */
++static void init_LPCR_ISA206(u64 lpcr, u64 lpes)
++{
++	lpcr |= (0x10ull << LPCR_VRMASD_SH) & LPCR_VRMASD;
++	init_LPCR_ISA300(lpcr, lpes);
++}
++
++static void init_FSCR(void)
++{
++	u64 fscr;
++
++	fscr = mfspr(SPRN_FSCR);
++	fscr |= FSCR_TAR|FSCR_EBB;
++	mtspr(SPRN_FSCR, fscr);
++}
++
++static void init_FSCR_power9(void)
++{
++	u64 fscr;
++
++	fscr = mfspr(SPRN_FSCR);
++	fscr |= FSCR_SCV;
++	mtspr(SPRN_FSCR, fscr);
++	init_FSCR();
++}
++
++static void init_FSCR_power10(void)
++{
++	u64 fscr;
++
++	fscr = mfspr(SPRN_FSCR);
++	fscr |= FSCR_PREFIX;
++	mtspr(SPRN_FSCR, fscr);
++	init_FSCR_power9();
++}
++
++static void init_HFSCR(void)
++{
++	u64 hfscr;
++
++	hfscr = mfspr(SPRN_HFSCR);
++	hfscr |= HFSCR_TAR|HFSCR_TM|HFSCR_BHRB|HFSCR_PM|HFSCR_DSCR|\
++		 HFSCR_VECVSX|HFSCR_FP|HFSCR_EBB|HFSCR_MSGP;
++	mtspr(SPRN_HFSCR, hfscr);
++}
++
++static void init_PMU_HV(void)
++{
++	mtspr(SPRN_MMCRC, 0);
++}
++
++static void init_PMU_HV_ISA207(void)
++{
++	mtspr(SPRN_MMCRH, 0);
++}
++
++static void init_PMU(void)
++{
++	mtspr(SPRN_MMCRA, 0);
++	mtspr(SPRN_MMCR0, 0);
++	mtspr(SPRN_MMCR1, 0);
++	mtspr(SPRN_MMCR2, 0);
++}
++
++static void init_PMU_ISA207(void)
++{
++	mtspr(SPRN_MMCRS, 0);
++}
++
++static void init_PMU_ISA31(void)
++{
++	mtspr(SPRN_MMCR3, 0);
++	mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE);
++	mtspr(SPRN_MMCR0, MMCR0_PMCCEXT);
++}
++
++/*
++ * Note that we can be called twice of pseudo-PVRs.
++ * The parameter offset is not used.
++ */
++
++void __setup_cpu_power7(unsigned long offset, struct cpu_spec *t)
++{
++	if (!init_hvmode_206(t))
++		return;
++
++	mtspr(SPRN_LPID, 0);
++	mtspr(SPRN_PCR, PCR_MASK);
++	init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH);
++}
++
++void __restore_cpu_power7(void)
++{
++	u64 msr;
++
++	msr = mfmsr();
++	if (!(msr & MSR_HV))
++		return;
++
++	mtspr(SPRN_LPID, 0);
++	mtspr(SPRN_PCR, PCR_MASK);
++	init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH);
++}
++
++void __setup_cpu_power8(unsigned long offset, struct cpu_spec *t)
++{
++	init_FSCR();
++	init_PMU();
++	init_PMU_ISA207();
++
++	if (!init_hvmode_206(t))
++		return;
++
++	mtspr(SPRN_LPID, 0);
++	mtspr(SPRN_PCR, PCR_MASK);
++	init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */
++	init_HFSCR();
++	init_PMU_HV();
++	init_PMU_HV_ISA207();
++}
++
++void __restore_cpu_power8(void)
++{
++	u64 msr;
++
++	init_FSCR();
++	init_PMU();
++	init_PMU_ISA207();
++
++	msr = mfmsr();
++	if (!(msr & MSR_HV))
++		return;
++
++	mtspr(SPRN_LPID, 0);
++	mtspr(SPRN_PCR, PCR_MASK);
++	init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */
++	init_HFSCR();
++	init_PMU_HV();
++	init_PMU_HV_ISA207();
++}
++
++void __setup_cpu_power9(unsigned long offset, struct cpu_spec *t)
++{
++	init_FSCR_power9();
++	init_PMU();
++
++	if (!init_hvmode_206(t))
++		return;
++
++	mtspr(SPRN_PSSCR, 0);
++	mtspr(SPRN_LPID, 0);
++	mtspr(SPRN_PID, 0);
++	mtspr(SPRN_PCR, PCR_MASK);
++	init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
++			 LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
++	init_HFSCR();
++	init_PMU_HV();
++}
++
++void __restore_cpu_power9(void)
++{
++	u64 msr;
++
++	init_FSCR_power9();
++	init_PMU();
++
++	msr = mfmsr();
++	if (!(msr & MSR_HV))
++		return;
++
++	mtspr(SPRN_PSSCR, 0);
++	mtspr(SPRN_LPID, 0);
++	mtspr(SPRN_PID, 0);
++	mtspr(SPRN_PCR, PCR_MASK);
++	init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
++			 LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
++	init_HFSCR();
++	init_PMU_HV();
++}
++
++void __setup_cpu_power10(unsigned long offset, struct cpu_spec *t)
++{
++	init_FSCR_power10();
++	init_PMU();
++	init_PMU_ISA31();
++
++	if (!init_hvmode_206(t))
++		return;
++
++	mtspr(SPRN_PSSCR, 0);
++	mtspr(SPRN_LPID, 0);
++	mtspr(SPRN_PID, 0);
++	mtspr(SPRN_PCR, PCR_MASK);
++	init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
++			 LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
++	init_HFSCR();
++	init_PMU_HV();
++}
++
++void __restore_cpu_power10(void)
++{
++	u64 msr;
++
++	init_FSCR_power10();
++	init_PMU();
++	init_PMU_ISA31();
++
++	msr = mfmsr();
++	if (!(msr & MSR_HV))
++		return;
++
++	mtspr(SPRN_PSSCR, 0);
++	mtspr(SPRN_LPID, 0);
++	mtspr(SPRN_PID, 0);
++	mtspr(SPRN_PCR, PCR_MASK);
++	init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
++			 LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
++	init_HFSCR();
++	init_PMU_HV();
++}
+diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
+index 29de58d4dfb76..8fdb40ee86d11 100644
+--- a/arch/powerpc/kernel/cputable.c
++++ b/arch/powerpc/kernel/cputable.c
+@@ -60,19 +60,15 @@ extern void __setup_cpu_7410(unsigned long offset, struct cpu_spec* spec);
+ extern void __setup_cpu_745x(unsigned long offset, struct cpu_spec* spec);
+ #endif /* CONFIG_PPC32 */
+ #ifdef CONFIG_PPC64
++#include <asm/cpu_setup_power.h>
+ extern void __setup_cpu_ppc970(unsigned long offset, struct cpu_spec* spec);
+ extern void __setup_cpu_ppc970MP(unsigned long offset, struct cpu_spec* spec);
+ extern void __setup_cpu_pa6t(unsigned long offset, struct cpu_spec* spec);
+ extern void __restore_cpu_pa6t(void);
+ extern void __restore_cpu_ppc970(void);
+-extern void __setup_cpu_power7(unsigned long offset, struct cpu_spec* spec);
+-extern void __restore_cpu_power7(void);
+-extern void __setup_cpu_power8(unsigned long offset, struct cpu_spec* spec);
+-extern void __restore_cpu_power8(void);
+-extern void __setup_cpu_power9(unsigned long offset, struct cpu_spec* spec);
+-extern void __restore_cpu_power9(void);
+-extern void __setup_cpu_power10(unsigned long offset, struct cpu_spec* spec);
+-extern void __restore_cpu_power10(void);
++extern long __machine_check_early_realmode_p7(struct pt_regs *regs);
++extern long __machine_check_early_realmode_p8(struct pt_regs *regs);
++extern long __machine_check_early_realmode_p9(struct pt_regs *regs);
+ #endif /* CONFIG_PPC64 */
+ #if defined(CONFIG_E500)
+ extern void __setup_cpu_e5500(unsigned long offset, struct cpu_spec* spec);
+diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
+index 1098863e17ee8..9d079659b24d3 100644
+--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
++++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
+@@ -454,6 +454,7 @@ static void init_pmu_power10(void)
+ 
+ 	mtspr(SPRN_MMCR3, 0);
+ 	mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE);
++	mtspr(SPRN_MMCR0, MMCR0_PMCCEXT);
+ }
+ 
+ static int __init feat_enable_pmu_power10(struct dt_cpu_feature *f)
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index eddf362caedce..c3bb800dc4352 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -1641,6 +1641,14 @@ int __init setup_fadump(void)
+ 	else if (fw_dump.reserve_dump_area_size)
+ 		fw_dump.ops->fadump_init_mem_struct(&fw_dump);
+ 
++	/*
++	 * In case of panic, fadump is triggered via ppc_panic_event()
++	 * panic notifier. Setting crash_kexec_post_notifiers to 'true'
++	 * lets panic() function take crash friendly path before panic
++	 * notifiers are invoked.
++	 */
++	crash_kexec_post_notifiers = true;
++
+ 	return 1;
+ }
+ subsys_initcall(setup_fadump);
+diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
+index a1ae00689e0f4..aeb9bc9958749 100644
+--- a/arch/powerpc/kernel/head_40x.S
++++ b/arch/powerpc/kernel/head_40x.S
+@@ -27,6 +27,7 @@
+ 
+ #include <linux/init.h>
+ #include <linux/pgtable.h>
++#include <linux/sizes.h>
+ #include <asm/processor.h>
+ #include <asm/page.h>
+ #include <asm/mmu.h>
+@@ -626,7 +627,7 @@ start_here:
+ 	b	.		/* prevent prefetch past rfi */
+ 
+ /* Set up the initial MMU state so we can do the first level of
+- * kernel initialization.  This maps the first 16 MBytes of memory 1:1
++ * kernel initialization.  This maps the first 32 MBytes of memory 1:1
+  * virtual to physical and more importantly sets the cache mode.
+  */
+ initial_mmu:
+@@ -663,6 +664,12 @@ initial_mmu:
+ 	tlbwe	r4,r0,TLB_DATA		/* Load the data portion of the entry */
+ 	tlbwe	r3,r0,TLB_TAG		/* Load the tag portion of the entry */
+ 
++	li	r0,62			/* TLB slot 62 */
++	addis	r4,r4,SZ_16M@h
++	addis	r3,r3,SZ_16M@h
++	tlbwe	r4,r0,TLB_DATA		/* Load the data portion of the entry */
++	tlbwe	r3,r0,TLB_TAG		/* Load the tag portion of the entry */
++
+ 	isync
+ 
+ 	/* Establish the exception vector base
+diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
+index 7e337c570ea6b..9e71c0739f08d 100644
+--- a/arch/powerpc/kernel/prom_init.c
++++ b/arch/powerpc/kernel/prom_init.c
+@@ -2956,7 +2956,7 @@ static void __init fixup_device_tree_efika_add_phy(void)
+ 
+ 	/* Check if the phy-handle property exists - bail if it does */
+ 	rv = prom_getprop(node, "phy-handle", prop, sizeof(prop));
+-	if (!rv)
++	if (rv <= 0)
+ 		return;
+ 
+ 	/*
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 452cbf98bfd71..cf99f57aed822 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -60,6 +60,7 @@
+ #include <asm/cpu_has_feature.h>
+ #include <asm/ftrace.h>
+ #include <asm/kup.h>
++#include <asm/fadump.h>
+ 
+ #ifdef DEBUG
+ #include <asm/udbg.h>
+@@ -594,6 +595,45 @@ void crash_send_ipi(void (*crash_ipi_callback)(struct pt_regs *))
+ }
+ #endif
+ 
++#ifdef CONFIG_NMI_IPI
++static void crash_stop_this_cpu(struct pt_regs *regs)
++#else
++static void crash_stop_this_cpu(void *dummy)
++#endif
++{
++	/*
++	 * Just busy wait here and avoid marking CPU as offline to ensure
++	 * register data is captured appropriately.
++	 */
++	while (1)
++		cpu_relax();
++}
++
++void crash_smp_send_stop(void)
++{
++	static bool stopped = false;
++
++	/*
++	 * In case of fadump, register data for all CPUs is captured by f/w
++	 * on ibm,os-term rtas call. Skip IPI callbacks to other CPUs before
++	 * this rtas call to avoid tricky post processing of those CPUs'
++	 * backtraces.
++	 */
++	if (should_fadump_crash())
++		return;
++
++	if (stopped)
++		return;
++
++	stopped = true;
++
++#ifdef CONFIG_NMI_IPI
++	smp_send_nmi_ipi(NMI_IPI_ALL_OTHERS, crash_stop_this_cpu, 1000000);
++#else
++	smp_call_function(crash_stop_this_cpu, NULL, 0);
++#endif /* CONFIG_NMI_IPI */
++}
++
+ #ifdef CONFIG_NMI_IPI
+ static void nmi_stop_this_cpu(struct pt_regs *regs)
+ {
+@@ -1488,10 +1528,12 @@ void start_secondary(void *unused)
+ 	BUG();
+ }
+ 
++#ifdef CONFIG_PROFILING
+ int setup_profiling_timer(unsigned int multiplier)
+ {
+ 	return 0;
+ }
++#endif
+ 
+ static void fixup_topology(void)
+ {
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index 77dffea3d5373..069d451240fa4 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -1922,11 +1922,40 @@ void vsx_unavailable_tm(struct pt_regs *regs)
+ }
+ #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+ 
+-void performance_monitor_exception(struct pt_regs *regs)
++static void performance_monitor_exception_nmi(struct pt_regs *regs)
++{
++	nmi_enter();
++
++	__this_cpu_inc(irq_stat.pmu_irqs);
++
++	perf_irq(regs);
++
++	nmi_exit();
++}
++
++static void performance_monitor_exception_async(struct pt_regs *regs)
+ {
++	irq_enter();
++
+ 	__this_cpu_inc(irq_stat.pmu_irqs);
+ 
+ 	perf_irq(regs);
++
++	irq_exit();
++}
++
++void performance_monitor_exception(struct pt_regs *regs)
++{
++	/*
++	 * On 64-bit, if perf interrupts hit in a local_irq_disable
++	 * (soft-masked) region, we consider them as NMIs. This is required to
++	 * prevent hash faults on user addresses when reading callchains (and
++	 * looks better from an irq tracing perspective).
++	 */
++	if (IS_ENABLED(CONFIG_PPC64) && unlikely(arch_irq_disabled_regs(regs)))
++		performance_monitor_exception_nmi(regs);
++	else
++		performance_monitor_exception_async(regs);
+ }
+ 
+ #ifdef CONFIG_PPC_ADV_DEBUG_REGS
+diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
+index af3c15a1d41eb..75b2a6c4db5a5 100644
+--- a/arch/powerpc/kernel/watchdog.c
++++ b/arch/powerpc/kernel/watchdog.c
+@@ -132,6 +132,10 @@ static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
+ {
+ 	cpumask_or(&wd_smp_cpus_stuck, &wd_smp_cpus_stuck, cpumask);
+ 	cpumask_andnot(&wd_smp_cpus_pending, &wd_smp_cpus_pending, cpumask);
++	/*
++	 * See wd_smp_clear_cpu_pending()
++	 */
++	smp_mb();
+ 	if (cpumask_empty(&wd_smp_cpus_pending)) {
+ 		wd_smp_last_reset_tb = tb;
+ 		cpumask_andnot(&wd_smp_cpus_pending,
+@@ -217,13 +221,44 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
+ 
+ 			cpumask_clear_cpu(cpu, &wd_smp_cpus_stuck);
+ 			wd_smp_unlock(&flags);
++		} else {
++			/*
++			 * The last CPU to clear pending should have reset the
++			 * watchdog so we generally should not find it empty
++			 * here if our CPU was clear. However it could happen
++			 * due to a rare race with another CPU taking the
++			 * last CPU out of the mask concurrently.
++			 *
++			 * We can't add a warning for it. But just in case
++			 * there is a problem with the watchdog that is causing
++			 * the mask to not be reset, try to kick it along here.
++			 */
++			if (unlikely(cpumask_empty(&wd_smp_cpus_pending)))
++				goto none_pending;
+ 		}
+ 		return;
+ 	}
++
+ 	cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
++
++	/*
++	 * Order the store to clear pending with the load(s) to check all
++	 * words in the pending mask to check they are all empty. This orders
++	 * with the same barrier on another CPU. This prevents two CPUs
++	 * clearing the last 2 pending bits, but neither seeing the other's
++	 * store when checking if the mask is empty, and missing an empty
++	 * mask, which ends with a false positive.
++	 */
++	smp_mb();
+ 	if (cpumask_empty(&wd_smp_cpus_pending)) {
+ 		unsigned long flags;
+ 
++none_pending:
++		/*
++		 * Double check under lock because more than one CPU could see
++		 * a clear mask with the lockless check after clearing their
++		 * pending bits.
++		 */
+ 		wd_smp_lock(&flags);
+ 		if (cpumask_empty(&wd_smp_cpus_pending)) {
+ 			wd_smp_last_reset_tb = tb;
+@@ -314,8 +349,12 @@ void arch_touch_nmi_watchdog(void)
+ {
+ 	unsigned long ticks = tb_ticks_per_usec * wd_timer_period_ms * 1000;
+ 	int cpu = smp_processor_id();
+-	u64 tb = get_tb();
++	u64 tb;
+ 
++	if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
++		return;
++
++	tb = get_tb();
+ 	if (tb - per_cpu(wd_timer_tb, cpu) >= ticks) {
+ 		per_cpu(wd_timer_tb, cpu) = tb;
+ 		wd_smp_clear_cpu_pending(cpu, tb);
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 175967a195c44..527c205d5a5f5 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4557,8 +4557,12 @@ static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
+ 	unsigned long npages = mem->memory_size >> PAGE_SHIFT;
+ 
+ 	if (change == KVM_MR_CREATE) {
+-		slot->arch.rmap = vzalloc(array_size(npages,
+-					  sizeof(*slot->arch.rmap)));
++		unsigned long size = array_size(npages, sizeof(*slot->arch.rmap));
++
++		if ((size >> PAGE_SHIFT) > totalram_pages())
++			return -ENOMEM;
++
++		slot->arch.rmap = vzalloc(size);
+ 		if (!slot->arch.rmap)
+ 			return -ENOMEM;
+ 	}
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index a5f1ae892ba68..d0b6c8c16c48a 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -510,7 +510,7 @@ long kvmhv_copy_tofrom_guest_nested(struct kvm_vcpu *vcpu)
+ 	if (eaddr & (0xFFFUL << 52))
+ 		return H_PARAMETER;
+ 
+-	buf = kzalloc(n, GFP_KERNEL);
++	buf = kzalloc(n, GFP_KERNEL | __GFP_NOWARN);
+ 	if (!buf)
+ 		return H_NO_MEM;
+ 
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index 1d5eec847b883..295959487b76d 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -1152,7 +1152,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
+ 
+ int pud_clear_huge(pud_t *pud)
+ {
+-	if (pud_huge(*pud)) {
++	if (pud_is_leaf(*pud)) {
+ 		pud_clear(pud);
+ 		return 1;
+ 	}
+@@ -1199,7 +1199,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
+ 
+ int pmd_clear_huge(pmd_t *pmd)
+ {
+-	if (pmd_huge(*pmd)) {
++	if (pmd_is_leaf(*pmd)) {
+ 		pmd_clear(pmd);
+ 		return 1;
+ 	}
+diff --git a/arch/powerpc/mm/kasan/book3s_32.c b/arch/powerpc/mm/kasan/book3s_32.c
+index 202bd260a0095..35b287b0a8da4 100644
+--- a/arch/powerpc/mm/kasan/book3s_32.c
++++ b/arch/powerpc/mm/kasan/book3s_32.c
+@@ -19,7 +19,8 @@ int __init kasan_init_region(void *start, size_t size)
+ 	block = memblock_alloc(k_size, k_size_base);
+ 
+ 	if (block && k_size_base >= SZ_128K && k_start == ALIGN(k_start, k_size_base)) {
+-		int k_size_more = 1 << (ffs(k_size - k_size_base) - 1);
++		int shift = ffs(k_size - k_size_base);
++		int k_size_more = shift ? 1 << (shift - 1) : 0;
+ 
+ 		setbat(-1, k_start, __pa(block), k_size_base, PAGE_KERNEL);
+ 		if (k_size_more >= SZ_128K)
+diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
+index cc6e2f94517fc..aefc2bfdf1049 100644
+--- a/arch/powerpc/mm/pgtable_64.c
++++ b/arch/powerpc/mm/pgtable_64.c
+@@ -102,7 +102,8 @@ EXPORT_SYMBOL(__pte_frag_size_shift);
+ struct page *p4d_page(p4d_t p4d)
+ {
+ 	if (p4d_is_leaf(p4d)) {
+-		VM_WARN_ON(!p4d_huge(p4d));
++		if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
++			VM_WARN_ON(!p4d_huge(p4d));
+ 		return pte_page(p4d_pte(p4d));
+ 	}
+ 	return virt_to_page(p4d_page_vaddr(p4d));
+@@ -112,7 +113,8 @@ struct page *p4d_page(p4d_t p4d)
+ struct page *pud_page(pud_t pud)
+ {
+ 	if (pud_is_leaf(pud)) {
+-		VM_WARN_ON(!pud_huge(pud));
++		if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
++			VM_WARN_ON(!pud_huge(pud));
+ 		return pte_page(pud_pte(pud));
+ 	}
+ 	return virt_to_page(pud_page_vaddr(pud));
+@@ -125,7 +127,13 @@ struct page *pud_page(pud_t pud)
+ struct page *pmd_page(pmd_t pmd)
+ {
+ 	if (pmd_is_leaf(pmd)) {
+-		VM_WARN_ON(!(pmd_large(pmd) || pmd_huge(pmd)));
++		/*
++		 * vmalloc_to_page may be called on any vmap address (not only
++		 * vmalloc), and it uses pmd_page() etc., when huge vmap is
++		 * enabled so these checks can't be used.
++		 */
++		if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
++			VM_WARN_ON(!(pmd_large(pmd) || pmd_huge(pmd)));
+ 		return pte_page(pmd_pte(pmd));
+ 	}
+ 	return virt_to_page(pmd_page_vaddr(pmd));
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index 91452313489f1..bd34e062bd290 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -95,6 +95,7 @@ static unsigned int freeze_events_kernel = MMCR0_FCS;
+ #define SPRN_SIER3		0
+ #define MMCRA_SAMPLE_ENABLE	0
+ #define MMCRA_BHRB_DISABLE     0
++#define MMCR0_PMCCEXT		0
+ 
+ static inline unsigned long perf_ip_adjust(struct pt_regs *regs)
+ {
+@@ -109,10 +110,6 @@ static inline void perf_read_regs(struct pt_regs *regs)
+ {
+ 	regs->result = 0;
+ }
+-static inline int perf_intr_is_nmi(struct pt_regs *regs)
+-{
+-	return 0;
+-}
+ 
+ static inline int siar_valid(struct pt_regs *regs)
+ {
+@@ -331,15 +328,6 @@ static inline void perf_read_regs(struct pt_regs *regs)
+ 	regs->result = use_siar;
+ }
+ 
+-/*
+- * If interrupts were soft-disabled when a PMU interrupt occurs, treat
+- * it as an NMI.
+- */
+-static inline int perf_intr_is_nmi(struct pt_regs *regs)
+-{
+-	return (regs->softe & IRQS_DISABLED);
+-}
+-
+ /*
+  * On processors like P7+ that have the SIAR-Valid bit, marked instructions
+  * must be sampled only if the SIAR-valid bit is set.
+@@ -817,6 +805,19 @@ static void write_pmc(int idx, unsigned long val)
+ 	}
+ }
+ 
++static int any_pmc_overflown(struct cpu_hw_events *cpuhw)
++{
++	int i, idx;
++
++	for (i = 0; i < cpuhw->n_events; i++) {
++		idx = cpuhw->event[i]->hw.idx;
++		if ((idx) && ((int)read_pmc(idx) < 0))
++			return idx;
++	}
++
++	return 0;
++}
++
+ /* Called from sysrq_handle_showregs() */
+ void perf_event_print_debug(void)
+ {
+@@ -1240,11 +1241,16 @@ static void power_pmu_disable(struct pmu *pmu)
+ 
+ 		/*
+ 		 * Set the 'freeze counters' bit, clear EBE/BHRBA/PMCC/PMAO/FC56
++		 * Also clear PMXE to disable PMI's getting triggered in some
++		 * corner cases during PMU disable.
+ 		 */
+ 		val  = mmcr0 = mfspr(SPRN_MMCR0);
+ 		val |= MMCR0_FC;
+ 		val &= ~(MMCR0_EBE | MMCR0_BHRBA | MMCR0_PMCC | MMCR0_PMAO |
+-			 MMCR0_FC56);
++			 MMCR0_PMXE | MMCR0_FC56);
++		/* Set mmcr0 PMCCEXT for p10 */
++		if (ppmu->flags & PPMU_ARCH_31)
++			val |= MMCR0_PMCCEXT;
+ 
+ 		/*
+ 		 * The barrier is to make sure the mtspr has been
+@@ -1255,6 +1261,23 @@ static void power_pmu_disable(struct pmu *pmu)
+ 		mb();
+ 		isync();
+ 
++		/*
++		 * Some corner cases could clear the PMU counter overflow
++		 * while a masked PMI is pending. One such case is when
++		 * a PMI happens during interrupt replay and perf counter
++		 * values are cleared by PMU callbacks before replay.
++		 *
++		 * If any PMC corresponding to the active PMU events are
++		 * overflown, disable the interrupt by clearing the paca
++		 * bit for PMI since we are disabling the PMU now.
++		 * Otherwise provide a warning if there is PMI pending, but
++		 * no counter is found overflown.
++		 */
++		if (any_pmc_overflown(cpuhw))
++			clear_pmi_irq_pending();
++		else
++			WARN_ON(pmi_irq_pending());
++
+ 		val = mmcra = cpuhw->mmcr.mmcra;
+ 
+ 		/*
+@@ -1346,6 +1369,15 @@ static void power_pmu_enable(struct pmu *pmu)
+ 	 * (possibly updated for removal of events).
+ 	 */
+ 	if (!cpuhw->n_added) {
++		/*
++		 * If there is any active event with an overflown PMC
++		 * value, set back PACA_IRQ_PMI which would have been
++		 * cleared in power_pmu_disable().
++		 */
++		hard_irq_disable();
++		if (any_pmc_overflown(cpuhw))
++			set_pmi_irq_pending();
++
+ 		mtspr(SPRN_MMCRA, cpuhw->mmcr.mmcra & ~MMCRA_SAMPLE_ENABLE);
+ 		mtspr(SPRN_MMCR1, cpuhw->mmcr.mmcr1);
+ 		if (ppmu->flags & PPMU_ARCH_31)
+@@ -2250,7 +2282,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
+ 	struct perf_event *event;
+ 	unsigned long val[8];
+ 	int found, active;
+-	int nmi;
+ 
+ 	if (cpuhw->n_limited)
+ 		freeze_limited_counters(cpuhw, mfspr(SPRN_PMC5),
+@@ -2258,18 +2289,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
+ 
+ 	perf_read_regs(regs);
+ 
+-	/*
+-	 * If perf interrupts hit in a local_irq_disable (soft-masked) region,
+-	 * we consider them as NMIs. This is required to prevent hash faults on
+-	 * user addresses when reading callchains. See the NMI test in
+-	 * do_hash_page.
+-	 */
+-	nmi = perf_intr_is_nmi(regs);
+-	if (nmi)
+-		nmi_enter();
+-	else
+-		irq_enter();
+-
+ 	/* Read all the PMCs since we'll need them a bunch of times */
+ 	for (i = 0; i < ppmu->n_counter; ++i)
+ 		val[i] = read_pmc(i + 1);
+@@ -2296,6 +2315,14 @@ static void __perf_event_interrupt(struct pt_regs *regs)
+ 				break;
+ 			}
+ 		}
++
++		/*
++		 * Clear PACA_IRQ_PMI in case it was set by
++		 * set_pmi_irq_pending() when PMU was enabled
++		 * after accounting for interrupts.
++		 */
++		clear_pmi_irq_pending();
++
+ 		if (!active)
+ 			/* reset non active counters that have overflowed */
+ 			write_pmc(i + 1, 0);
+@@ -2315,8 +2342,15 @@ static void __perf_event_interrupt(struct pt_regs *regs)
+ 			}
+ 		}
+ 	}
+-	if (!found && !nmi && printk_ratelimit())
+-		printk(KERN_WARNING "Can't find PMC that caused IRQ\n");
++
++	/*
++	 * During system wide profling or while specific CPU is monitored for an
++	 * event, some corner cases could cause PMC to overflow in idle path. This
++	 * will trigger a PMI after waking up from idle. Since counter values are _not_
++	 * saved/restored in idle path, can lead to below "Can't find PMC" message.
++	 */
++	if (unlikely(!found) && !arch_irq_disabled_regs(regs))
++		printk_ratelimited(KERN_WARNING "Can't find PMC that caused IRQ\n");
+ 
+ 	/*
+ 	 * Reset MMCR0 to its normal value.  This will set PMXE and
+@@ -2326,11 +2360,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
+ 	 * we get back out of this interrupt.
+ 	 */
+ 	write_mmcr0(cpuhw, cpuhw->mmcr.mmcr0);
+-
+-	if (nmi)
+-		nmi_exit();
+-	else
+-		irq_exit();
+ }
+ 
+ static void perf_event_interrupt(struct pt_regs *regs)
+diff --git a/arch/powerpc/perf/core-fsl-emb.c b/arch/powerpc/perf/core-fsl-emb.c
+index e0e7e276bfd25..ee721f420a7ba 100644
+--- a/arch/powerpc/perf/core-fsl-emb.c
++++ b/arch/powerpc/perf/core-fsl-emb.c
+@@ -31,19 +31,6 @@ static atomic_t num_events;
+ /* Used to avoid races in calling reserve/release_pmc_hardware */
+ static DEFINE_MUTEX(pmc_reserve_mutex);
+ 
+-/*
+- * If interrupts were soft-disabled when a PMU interrupt occurs, treat
+- * it as an NMI.
+- */
+-static inline int perf_intr_is_nmi(struct pt_regs *regs)
+-{
+-#ifdef __powerpc64__
+-	return (regs->softe & IRQS_DISABLED);
+-#else
+-	return 0;
+-#endif
+-}
+-
+ static void perf_event_interrupt(struct pt_regs *regs);
+ 
+ /*
+@@ -659,13 +646,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
+ 	struct perf_event *event;
+ 	unsigned long val;
+ 	int found = 0;
+-	int nmi;
+-
+-	nmi = perf_intr_is_nmi(regs);
+-	if (nmi)
+-		nmi_enter();
+-	else
+-		irq_enter();
+ 
+ 	for (i = 0; i < ppmu->n_counter; ++i) {
+ 		event = cpuhw->event[i];
+@@ -690,11 +670,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
+ 	mtmsr(mfmsr() | MSR_PMM);
+ 	mtpmr(PMRN_PMGC0, PMGC0_PMIE | PMGC0_FCECE);
+ 	isync();
+-
+-	if (nmi)
+-		nmi_exit();
+-	else
+-		irq_exit();
+ }
+ 
+ void hw_perf_event_setup(int cpu)
+diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
+index 5e8eedda45d39..58448f0e47213 100644
+--- a/arch/powerpc/perf/isa207-common.c
++++ b/arch/powerpc/perf/isa207-common.c
+@@ -561,6 +561,14 @@ int isa207_compute_mmcr(u64 event[], int n_ev,
+ 	if (!(pmc_inuse & 0x60))
+ 		mmcr->mmcr0 |= MMCR0_FC56;
+ 
++	/*
++	 * Set mmcr0 (PMCCEXT) for p10 which
++	 * will restrict access to group B registers
++	 * when MMCR0 PMCC=0b00.
++	 */
++	if (cpu_has_feature(CPU_FTR_ARCH_31))
++		mmcr->mmcr0 |= MMCR0_PMCCEXT;
++
+ 	mmcr->mmcr1 = mmcr1;
+ 	mmcr->mmcra = mmcra;
+ 	mmcr->mmcr2 = mmcr2;
+diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
+index 2124831cf57c0..d04079b34d7c2 100644
+--- a/arch/powerpc/platforms/cell/iommu.c
++++ b/arch/powerpc/platforms/cell/iommu.c
+@@ -976,6 +976,7 @@ static int __init cell_iommu_fixed_mapping_init(void)
+ 			if (hbase < dbase || (hend > (dbase + dsize))) {
+ 				pr_debug("iommu: hash window doesn't fit in"
+ 					 "real DMA window\n");
++				of_node_put(np);
+ 				return -1;
+ 			}
+ 		}
+diff --git a/arch/powerpc/platforms/cell/pervasive.c b/arch/powerpc/platforms/cell/pervasive.c
+index 9068edef71f78..59999902e4a6a 100644
+--- a/arch/powerpc/platforms/cell/pervasive.c
++++ b/arch/powerpc/platforms/cell/pervasive.c
+@@ -77,6 +77,7 @@ static int cbe_system_reset_exception(struct pt_regs *regs)
+ 	switch (regs->msr & SRR1_WAKEMASK) {
+ 	case SRR1_WAKEDEC:
+ 		set_dec(1);
++		break;
+ 	case SRR1_WAKEEE:
+ 		/*
+ 		 * Handle these when interrupts get re-enabled and we take
+diff --git a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
+index a1b7f79a8a152..de10c13de15c6 100644
+--- a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
++++ b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
+@@ -215,6 +215,7 @@ void hlwd_pic_probe(void)
+ 			irq_set_chained_handler(cascade_virq,
+ 						hlwd_pic_irq_cascade);
+ 			hlwd_irq_host = host;
++			of_node_put(np);
+ 			break;
+ 		}
+ 	}
+diff --git a/arch/powerpc/platforms/powermac/low_i2c.c b/arch/powerpc/platforms/powermac/low_i2c.c
+index f77a59b5c2e1a..df89d916236d9 100644
+--- a/arch/powerpc/platforms/powermac/low_i2c.c
++++ b/arch/powerpc/platforms/powermac/low_i2c.c
+@@ -582,6 +582,7 @@ static void __init kw_i2c_add(struct pmac_i2c_host_kw *host,
+ 	bus->close = kw_i2c_close;
+ 	bus->xfer = kw_i2c_xfer;
+ 	mutex_init(&bus->mutex);
++	lockdep_register_key(&bus->lock_key);
+ 	lockdep_set_class(&bus->mutex, &bus->lock_key);
+ 	if (controller == busnode)
+ 		bus->flags = pmac_i2c_multibus;
+@@ -810,6 +811,7 @@ static void __init pmu_i2c_probe(void)
+ 		bus->hostdata = bus + 1;
+ 		bus->xfer = pmu_i2c_xfer;
+ 		mutex_init(&bus->mutex);
++		lockdep_register_key(&bus->lock_key);
+ 		lockdep_set_class(&bus->mutex, &bus->lock_key);
+ 		bus->flags = pmac_i2c_multibus;
+ 		list_add(&bus->link, &pmac_i2c_busses);
+@@ -933,6 +935,7 @@ static void __init smu_i2c_probe(void)
+ 		bus->hostdata = bus + 1;
+ 		bus->xfer = smu_i2c_xfer;
+ 		mutex_init(&bus->mutex);
++		lockdep_register_key(&bus->lock_key);
+ 		lockdep_set_class(&bus->mutex, &bus->lock_key);
+ 		bus->flags = 0;
+ 		list_add(&bus->link, &pmac_i2c_busses);
+diff --git a/arch/powerpc/platforms/powernv/opal-lpc.c b/arch/powerpc/platforms/powernv/opal-lpc.c
+index 608569082ba0b..123a0e799b7bd 100644
+--- a/arch/powerpc/platforms/powernv/opal-lpc.c
++++ b/arch/powerpc/platforms/powernv/opal-lpc.c
+@@ -396,6 +396,7 @@ void __init opal_lpc_init(void)
+ 		if (!of_get_property(np, "primary", NULL))
+ 			continue;
+ 		opal_lpc_chip_id = of_get_ibm_chip_id(np);
++		of_node_put(np);
+ 		break;
+ 	}
+ 	if (opal_lpc_chip_id < 0)
+diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
+index 1e3674d7ea7bc..b57eeaff7bb33 100644
+--- a/arch/powerpc/sysdev/xive/spapr.c
++++ b/arch/powerpc/sysdev/xive/spapr.c
+@@ -658,6 +658,9 @@ static int xive_spapr_debug_show(struct seq_file *m, void *private)
+ 	struct xive_irq_bitmap *xibm;
+ 	char *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ 
++	if (!buf)
++		return -ENOMEM;
++
+ 	list_for_each_entry(xibm, &xive_irq_bitmaps, list) {
+ 		memset(buf, 0, PAGE_SIZE);
+ 		bitmap_print_to_pagebuf(true, buf, xibm->bitmap, xibm->count);
+diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
+index 11d2c8395e2ae..6d99b1be0082f 100644
+--- a/arch/s390/mm/pgalloc.c
++++ b/arch/s390/mm/pgalloc.c
+@@ -253,13 +253,15 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
+ 		/* Free 2K page table fragment of a 4K page */
+ 		bit = (__pa(table) & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
+ 		spin_lock_bh(&mm->context.lock);
+-		mask = atomic_xor_bits(&page->_refcount, 1U << (bit + 24));
++		mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
+ 		mask >>= 24;
+ 		if (mask & 3)
+ 			list_add(&page->lru, &mm->context.pgtable_list);
+ 		else
+ 			list_del(&page->lru);
+ 		spin_unlock_bh(&mm->context.lock);
++		mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
++		mask >>= 24;
+ 		if (mask != 0)
+ 			return;
+ 	} else {
+diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
+index d11b3d41c3785..d5d768188b3ba 100644
+--- a/arch/um/drivers/virtio_uml.c
++++ b/arch/um/drivers/virtio_uml.c
+@@ -1076,6 +1076,8 @@ static void virtio_uml_release_dev(struct device *d)
+ 			container_of(d, struct virtio_device, dev);
+ 	struct virtio_uml_device *vu_dev = to_virtio_uml_device(vdev);
+ 
++	time_travel_propagate_time();
++
+ 	/* might not have been opened due to not negotiating the feature */
+ 	if (vu_dev->req_fd >= 0) {
+ 		um_free_irq(VIRTIO_IRQ, vu_dev);
+@@ -1109,6 +1111,8 @@ static int virtio_uml_probe(struct platform_device *pdev)
+ 	vu_dev->pdev = pdev;
+ 	vu_dev->req_fd = -1;
+ 
++	time_travel_propagate_time();
++
+ 	do {
+ 		rc = os_connect_socket(pdata->socket_path);
+ 	} while (rc == -EINTR);
+diff --git a/arch/um/include/asm/delay.h b/arch/um/include/asm/delay.h
+index 56fc2b8f2dd01..e79b2ab6f40c8 100644
+--- a/arch/um/include/asm/delay.h
++++ b/arch/um/include/asm/delay.h
+@@ -14,7 +14,7 @@ static inline void um_ndelay(unsigned long nsecs)
+ 	ndelay(nsecs);
+ }
+ #undef ndelay
+-#define ndelay um_ndelay
++#define ndelay(n) um_ndelay(n)
+ 
+ static inline void um_udelay(unsigned long usecs)
+ {
+@@ -26,5 +26,5 @@ static inline void um_udelay(unsigned long usecs)
+ 	udelay(usecs);
+ }
+ #undef udelay
+-#define udelay um_udelay
++#define udelay(n) um_udelay(n)
+ #endif /* __UM_DELAY_H */
+diff --git a/arch/um/include/shared/registers.h b/arch/um/include/shared/registers.h
+index 0c50fa6e8a55b..fbb709a222839 100644
+--- a/arch/um/include/shared/registers.h
++++ b/arch/um/include/shared/registers.h
+@@ -16,8 +16,8 @@ extern int restore_fp_registers(int pid, unsigned long *fp_regs);
+ extern int save_fpx_registers(int pid, unsigned long *fp_regs);
+ extern int restore_fpx_registers(int pid, unsigned long *fp_regs);
+ extern int save_registers(int pid, struct uml_pt_regs *regs);
+-extern int restore_registers(int pid, struct uml_pt_regs *regs);
+-extern int init_registers(int pid);
++extern int restore_pid_registers(int pid, struct uml_pt_regs *regs);
++extern int init_pid_registers(int pid);
+ extern void get_safe_registers(unsigned long *regs, unsigned long *fp_regs);
+ extern unsigned long get_thread_reg(int reg, jmp_buf *buf);
+ extern int get_fp_registers(int pid, unsigned long *regs);
+diff --git a/arch/um/os-Linux/registers.c b/arch/um/os-Linux/registers.c
+index 2d9270508e156..b123955be7acc 100644
+--- a/arch/um/os-Linux/registers.c
++++ b/arch/um/os-Linux/registers.c
+@@ -21,7 +21,7 @@ int save_registers(int pid, struct uml_pt_regs *regs)
+ 	return 0;
+ }
+ 
+-int restore_registers(int pid, struct uml_pt_regs *regs)
++int restore_pid_registers(int pid, struct uml_pt_regs *regs)
+ {
+ 	int err;
+ 
+@@ -36,7 +36,7 @@ int restore_registers(int pid, struct uml_pt_regs *regs)
+ static unsigned long exec_regs[MAX_REG_NR];
+ static unsigned long exec_fp_regs[FP_SIZE];
+ 
+-int init_registers(int pid)
++int init_pid_registers(int pid)
+ {
+ 	int err;
+ 
+diff --git a/arch/um/os-Linux/start_up.c b/arch/um/os-Linux/start_up.c
+index f79dc338279e6..b28373a2b8d2d 100644
+--- a/arch/um/os-Linux/start_up.c
++++ b/arch/um/os-Linux/start_up.c
+@@ -336,7 +336,7 @@ void __init os_early_checks(void)
+ 	check_tmpexec();
+ 
+ 	pid = start_ptraced_child();
+-	if (init_registers(pid))
++	if (init_pid_registers(pid))
+ 		fatal("Failed to initialize default registers");
+ 	stop_ptraced_child(pid, 1, 1);
+ }
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index 6004047d25fdd..bf91e0a36d77f 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -28,7 +28,11 @@ KCOV_INSTRUMENT		:= n
+ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
+ 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4 vmlinux.bin.zst
+ 
+-KBUILD_CFLAGS := -m$(BITS) -O2
++# CLANG_FLAGS must come before any cc-disable-warning or cc-option calls in
++# case of cross compiling, as it has the '--target=' flag, which is needed to
++# avoid errors with '-march=i386', and future flags may depend on the target to
++# be valid.
++KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS)
+ KBUILD_CFLAGS += -fno-strict-aliasing -fPIE
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+ cflags-$(CONFIG_X86_32) := -march=i386
+@@ -46,7 +50,6 @@ KBUILD_CFLAGS += -D__DISABLE_EXPORTS
+ # Disable relocation relaxation in case the link is not PIE.
+ KBUILD_CFLAGS += $(call as-option,-Wa$(comma)-mrelax-relocations=no)
+ KBUILD_CFLAGS += -include $(srctree)/include/linux/hidden.h
+-KBUILD_CFLAGS += $(CLANG_FLAGS)
+ 
+ # sev-es.c indirectly inludes inat-table.h which is generated during
+ # compilation and stored in $(objtree). Add the directory to the includes so
+diff --git a/arch/x86/configs/i386_defconfig b/arch/x86/configs/i386_defconfig
+index 78210793d357c..38d7acb9610cc 100644
+--- a/arch/x86/configs/i386_defconfig
++++ b/arch/x86/configs/i386_defconfig
+@@ -264,3 +264,4 @@ CONFIG_BLK_DEV_IO_TRACE=y
+ CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
+ CONFIG_EARLY_PRINTK_DBGP=y
+ CONFIG_DEBUG_BOOT_PARAMS=y
++CONFIG_KALLSYMS_ALL=y
+diff --git a/arch/x86/configs/x86_64_defconfig b/arch/x86/configs/x86_64_defconfig
+index 9936528e19393..c6e587a9a6f85 100644
+--- a/arch/x86/configs/x86_64_defconfig
++++ b/arch/x86/configs/x86_64_defconfig
+@@ -260,3 +260,4 @@ CONFIG_BLK_DEV_IO_TRACE=y
+ CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
+ CONFIG_EARLY_PRINTK_DBGP=y
+ CONFIG_DEBUG_BOOT_PARAMS=y
++CONFIG_KALLSYMS_ALL=y
+diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
+index 5db5d083c8732..331474b150f16 100644
+--- a/arch/x86/include/asm/realmode.h
++++ b/arch/x86/include/asm/realmode.h
+@@ -89,6 +89,7 @@ static inline void set_real_mode_mem(phys_addr_t mem)
+ }
+ 
+ void reserve_real_mode(void);
++void load_trampoline_pgtable(void);
+ 
+ #endif /* __ASSEMBLY__ */
+ 
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index 5c95d242f38d7..bb1430283c726 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -314,11 +314,12 @@ do {									\
+ do {									\
+ 	__chk_user_ptr(ptr);						\
+ 	switch (size) {							\
+-	unsigned char x_u8__;						\
+-	case 1:								\
++	case 1:	{							\
++		unsigned char x_u8__;					\
+ 		__get_user_asm(x_u8__, ptr, "b", "=q", label);		\
+ 		(x) = x_u8__;						\
+ 		break;							\
++	}								\
+ 	case 2:								\
+ 		__get_user_asm(x, ptr, "w", "=r", label);		\
+ 		break;							\
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 14b34963eb1f7..5cf1a024408bf 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -295,11 +295,17 @@ static void wait_for_panic(void)
+ 	panic("Panicing machine check CPU died");
+ }
+ 
+-static void mce_panic(const char *msg, struct mce *final, char *exp)
++static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
+ {
+-	int apei_err = 0;
+ 	struct llist_node *pending;
+ 	struct mce_evt_llist *l;
++	int apei_err = 0;
++
++	/*
++	 * Allow instrumentation around external facilities usage. Not that it
++	 * matters a whole lot since the machine is going to panic anyway.
++	 */
++	instrumentation_begin();
+ 
+ 	if (!fake_panic) {
+ 		/*
+@@ -314,7 +320,7 @@ static void mce_panic(const char *msg, struct mce *final, char *exp)
+ 	} else {
+ 		/* Don't log too much for fake panic */
+ 		if (atomic_inc_return(&mce_fake_panicked) > 1)
+-			return;
++			goto out;
+ 	}
+ 	pending = mce_gen_pool_prepare_records();
+ 	/* First print corrected ones that are still unlogged */
+@@ -352,6 +358,9 @@ static void mce_panic(const char *msg, struct mce *final, char *exp)
+ 		panic(msg);
+ 	} else
+ 		pr_emerg(HW_ERR "Fake kernel panic: %s\n", msg);
++
++out:
++	instrumentation_end();
+ }
+ 
+ /* Support code for software error injection */
+@@ -682,7 +691,7 @@ static struct notifier_block mce_default_nb = {
+ /*
+  * Read ADDR and MISC registers.
+  */
+-static void mce_read_aux(struct mce *m, int i)
++static noinstr void mce_read_aux(struct mce *m, int i)
+ {
+ 	if (m->status & MCI_STATUS_MISCV)
+ 		m->misc = mce_rdmsrl(msr_ops.misc(i));
+@@ -1061,10 +1070,13 @@ static int mce_start(int *no_way_out)
+  * Synchronize between CPUs after main scanning loop.
+  * This invokes the bulk of the Monarch processing.
+  */
+-static int mce_end(int order)
++static noinstr int mce_end(int order)
+ {
+-	int ret = -1;
+ 	u64 timeout = (u64)mca_cfg.monarch_timeout * NSEC_PER_USEC;
++	int ret = -1;
++
++	/* Allow instrumentation around external facilities. */
++	instrumentation_begin();
+ 
+ 	if (!timeout)
+ 		goto reset;
+@@ -1108,7 +1120,8 @@ static int mce_end(int order)
+ 		/*
+ 		 * Don't reset anything. That's done by the Monarch.
+ 		 */
+-		return 0;
++		ret = 0;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -1123,6 +1136,10 @@ reset:
+ 	 * Let others run again.
+ 	 */
+ 	atomic_set(&mce_executing, 0);
++
++out:
++	instrumentation_end();
++
+ 	return ret;
+ }
+ 
+@@ -1443,6 +1460,14 @@ noinstr void do_machine_check(struct pt_regs *regs)
+ 	if (worst != MCE_AR_SEVERITY && !kill_it)
+ 		goto out;
+ 
++	/*
++	 * Enable instrumentation around the external facilities like
++	 * task_work_add() (via queue_task_work()), fixup_exception() etc.
++	 * For now, that is. Fixing this properly would need a lot more involved
++	 * reorganization.
++	 */
++	instrumentation_begin();
++
+ 	/* Fault was in user mode and we need to take some action */
+ 	if ((m.cs & 3) == 3) {
+ 		/* If this triggers there is no way to recover. Die hard. */
+@@ -1468,6 +1493,9 @@ noinstr void do_machine_check(struct pt_regs *regs)
+ 		if (m.kflags & MCE_IN_KERNEL_COPYIN)
+ 			queue_task_work(&m, msg, kill_it);
+ 	}
++
++	instrumentation_end();
++
+ out:
+ 	mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
+ }
+diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
+index 3a44346f22766..e7808309d4710 100644
+--- a/arch/x86/kernel/cpu/mce/inject.c
++++ b/arch/x86/kernel/cpu/mce/inject.c
+@@ -347,7 +347,7 @@ static ssize_t flags_write(struct file *filp, const char __user *ubuf,
+ 	char buf[MAX_FLAG_OPT_SIZE], *__buf;
+ 	int err;
+ 
+-	if (cnt > MAX_FLAG_OPT_SIZE)
++	if (!cnt || cnt > MAX_FLAG_OPT_SIZE)
+ 		return -EINVAL;
+ 
+ 	if (copy_from_user(&buf, ubuf, cnt))
+diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
+index 0c6d1dc59fa21..8e27cbefaa4bf 100644
+--- a/arch/x86/kernel/early-quirks.c
++++ b/arch/x86/kernel/early-quirks.c
+@@ -515,6 +515,7 @@ static const struct intel_early_ops gen11_early_ops __initconst = {
+ 	.stolen_size = gen9_stolen_size,
+ };
+ 
++/* Intel integrated GPUs for which we need to reserve "stolen memory" */
+ static const struct pci_device_id intel_early_ids[] __initconst = {
+ 	INTEL_I830_IDS(&i830_early_ops),
+ 	INTEL_I845G_IDS(&i845_early_ops),
+@@ -588,6 +589,13 @@ static void __init intel_graphics_quirks(int num, int slot, int func)
+ 	u16 device;
+ 	int i;
+ 
++	/*
++	 * Reserve "stolen memory" for an integrated GPU.  If we've already
++	 * found one, there's nothing to do for other (discrete) GPUs.
++	 */
++	if (resource_size(&intel_graphics_stolen_res))
++		return;
++
+ 	device = read_pci_config_16(num, slot, func, PCI_DEVICE_ID);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(intel_early_ids); i++) {
+@@ -700,7 +708,7 @@ static struct chipset early_qrk[] __initdata = {
+ 	{ PCI_VENDOR_ID_INTEL, 0x3406, PCI_CLASS_BRIDGE_HOST,
+ 	  PCI_BASE_CLASS_BRIDGE, 0, intel_remapping_check },
+ 	{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID, PCI_CLASS_DISPLAY_VGA, PCI_ANY_ID,
+-	  QFLAG_APPLY_ONCE, intel_graphics_quirks },
++	  0, intel_graphics_quirks },
+ 	/*
+ 	 * HPET on the current version of the Baytrail platform has accuracy
+ 	 * problems: it will halt in deep idle state - so we disable it.
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index 798a6f73f8946..df3514835b356 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -113,17 +113,9 @@ void __noreturn machine_real_restart(unsigned int type)
+ 	spin_unlock(&rtc_lock);
+ 
+ 	/*
+-	 * Switch back to the initial page table.
++	 * Switch to the trampoline page table.
+ 	 */
+-#ifdef CONFIG_X86_32
+-	load_cr3(initial_page_table);
+-#else
+-	write_cr3(real_mode_header->trampoline_pgd);
+-
+-	/* Exiting long mode will fail if CR4.PCIDE is set. */
+-	if (boot_cpu_has(X86_FEATURE_PCID))
+-		cr4_clear_bits(X86_CR4_PCIDE);
+-#endif
++	load_trampoline_pgtable();
+ 
+ 	/* Jump to the identity-mapped low memory code */
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index f9f1b45e5ddc4..13d1a0ac8916a 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -1127,6 +1127,7 @@ static int tsc_cs_enable(struct clocksource *cs)
+ static struct clocksource clocksource_tsc_early = {
+ 	.name			= "tsc-early",
+ 	.rating			= 299,
++	.uncertainty_margin	= 32 * NSEC_PER_MSEC,
+ 	.read			= read_tsc,
+ 	.mask			= CLOCKSOURCE_MASK(64),
+ 	.flags			= CLOCK_SOURCE_IS_CONTINUOUS |
+diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
+index fbd9b10354790..5f8acd2faa7c1 100644
+--- a/arch/x86/kvm/vmx/posted_intr.c
++++ b/arch/x86/kvm/vmx/posted_intr.c
+@@ -15,7 +15,7 @@
+  * can find which vCPU should be waken up.
+  */
+ static DEFINE_PER_CPU(struct list_head, blocked_vcpu_on_cpu);
+-static DEFINE_PER_CPU(spinlock_t, blocked_vcpu_on_cpu_lock);
++static DEFINE_PER_CPU(raw_spinlock_t, blocked_vcpu_on_cpu_lock);
+ 
+ static inline struct pi_desc *vcpu_to_pi_desc(struct kvm_vcpu *vcpu)
+ {
+@@ -121,9 +121,9 @@ static void __pi_post_block(struct kvm_vcpu *vcpu)
+ 			   new.control) != old.control);
+ 
+ 	if (!WARN_ON_ONCE(vcpu->pre_pcpu == -1)) {
+-		spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
++		raw_spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
+ 		list_del(&vcpu->blocked_vcpu_list);
+-		spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
++		raw_spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
+ 		vcpu->pre_pcpu = -1;
+ 	}
+ }
+@@ -154,11 +154,11 @@ int pi_pre_block(struct kvm_vcpu *vcpu)
+ 	local_irq_disable();
+ 	if (!WARN_ON_ONCE(vcpu->pre_pcpu != -1)) {
+ 		vcpu->pre_pcpu = vcpu->cpu;
+-		spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
++		raw_spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
+ 		list_add_tail(&vcpu->blocked_vcpu_list,
+ 			      &per_cpu(blocked_vcpu_on_cpu,
+ 				       vcpu->pre_pcpu));
+-		spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
++		raw_spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
+ 	}
+ 
+ 	do {
+@@ -215,7 +215,7 @@ void pi_wakeup_handler(void)
+ 	struct kvm_vcpu *vcpu;
+ 	int cpu = smp_processor_id();
+ 
+-	spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
++	raw_spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
+ 	list_for_each_entry(vcpu, &per_cpu(blocked_vcpu_on_cpu, cpu),
+ 			blocked_vcpu_list) {
+ 		struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
+@@ -223,13 +223,13 @@ void pi_wakeup_handler(void)
+ 		if (pi_test_on(pi_desc) == 1)
+ 			kvm_vcpu_kick(vcpu);
+ 	}
+-	spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
++	raw_spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
+ }
+ 
+ void __init pi_init_cpu(int cpu)
+ {
+ 	INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
+-	spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
++	raw_spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
+ }
+ 
+ bool pi_has_pending_interrupt(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
+index 3313bffbecd4d..1a702c6a226ec 100644
+--- a/arch/x86/realmode/init.c
++++ b/arch/x86/realmode/init.c
+@@ -17,6 +17,32 @@ u32 *trampoline_cr4_features;
+ /* Hold the pgd entry used on booting additional CPUs */
+ pgd_t trampoline_pgd_entry;
+ 
++void load_trampoline_pgtable(void)
++{
++#ifdef CONFIG_X86_32
++	load_cr3(initial_page_table);
++#else
++	/*
++	 * This function is called before exiting to real-mode and that will
++	 * fail with CR4.PCIDE still set.
++	 */
++	if (boot_cpu_has(X86_FEATURE_PCID))
++		cr4_clear_bits(X86_CR4_PCIDE);
++
++	write_cr3(real_mode_header->trampoline_pgd);
++#endif
++
++	/*
++	 * The CR3 write above will not flush global TLB entries.
++	 * Stale, global entries from previous page tables may still be
++	 * present.  Flush those stale entries.
++	 *
++	 * This ensures that memory accessed while running with
++	 * trampoline_pgd is *actually* mapped into trampoline_pgd.
++	 */
++	__flush_tlb_all();
++}
++
+ void __init reserve_real_mode(void)
+ {
+ 	phys_addr_t mem;
+diff --git a/arch/x86/um/syscalls_64.c b/arch/x86/um/syscalls_64.c
+index 58f51667e2e4b..8249685b40960 100644
+--- a/arch/x86/um/syscalls_64.c
++++ b/arch/x86/um/syscalls_64.c
+@@ -11,6 +11,7 @@
+ #include <linux/uaccess.h>
+ #include <asm/prctl.h> /* XXX This should get the constants from libc */
+ #include <os.h>
++#include <registers.h>
+ 
+ long arch_prctl(struct task_struct *task, int option,
+ 		unsigned long __user *arg2)
+@@ -35,7 +36,7 @@ long arch_prctl(struct task_struct *task, int option,
+ 	switch (option) {
+ 	case ARCH_SET_FS:
+ 	case ARCH_SET_GS:
+-		ret = restore_registers(pid, &current->thread.regs.regs);
++		ret = restore_pid_registers(pid, &current->thread.regs.regs);
+ 		if (ret)
+ 			return ret;
+ 		break;
+diff --git a/block/blk-flush.c b/block/blk-flush.c
+index 70f1d02135ed6..33b487b5cbf78 100644
+--- a/block/blk-flush.c
++++ b/block/blk-flush.c
+@@ -236,8 +236,10 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
+ 	 * avoiding use-after-free.
+ 	 */
+ 	WRITE_ONCE(flush_rq->state, MQ_RQ_IDLE);
+-	if (fq->rq_status != BLK_STS_OK)
++	if (fq->rq_status != BLK_STS_OK) {
+ 		error = fq->rq_status;
++		fq->rq_status = BLK_STS_OK;
++	}
+ 
+ 	if (!q->elevator) {
+ 		flush_rq->tag = BLK_MQ_NO_TAG;
+diff --git a/block/blk-pm.c b/block/blk-pm.c
+index 17bd020268d42..2dad62cc15727 100644
+--- a/block/blk-pm.c
++++ b/block/blk-pm.c
+@@ -163,27 +163,19 @@ EXPORT_SYMBOL(blk_pre_runtime_resume);
+ /**
+  * blk_post_runtime_resume - Post runtime resume processing
+  * @q: the queue of the device
+- * @err: return value of the device's runtime_resume function
+  *
+  * Description:
+- *    Update the queue's runtime status according to the return value of the
+- *    device's runtime_resume function. If the resume was successful, call
+- *    blk_set_runtime_active() to do the real work of restarting the queue.
++ *    For historical reasons, this routine merely calls blk_set_runtime_active()
++ *    to do the real work of restarting the queue.  It does this regardless of
++ *    whether the device's runtime-resume succeeded; even if it failed the
++ *    driver or error handler will need to communicate with the device.
+  *
+  *    This function should be called near the end of the device's
+  *    runtime_resume callback.
+  */
+-void blk_post_runtime_resume(struct request_queue *q, int err)
++void blk_post_runtime_resume(struct request_queue *q)
+ {
+-	if (!q->dev)
+-		return;
+-	if (!err) {
+-		blk_set_runtime_active(q);
+-	} else {
+-		spin_lock_irq(&q->queue_lock);
+-		q->rpm_status = RPM_SUSPENDED;
+-		spin_unlock_irq(&q->queue_lock);
+-	}
++	blk_set_runtime_active(q);
+ }
+ EXPORT_SYMBOL(blk_post_runtime_resume);
+ 
+@@ -201,7 +193,7 @@ EXPORT_SYMBOL(blk_post_runtime_resume);
+  * runtime PM status and re-enable peeking requests from the queue. It
+  * should be called before first request is added to the queue.
+  *
+- * This function is also called by blk_post_runtime_resume() for successful
++ * This function is also called by blk_post_runtime_resume() for
+  * runtime resumes.  It does everything necessary to restart the queue.
+  */
+ void blk_set_runtime_active(struct request_queue *q)
+diff --git a/crypto/jitterentropy.c b/crypto/jitterentropy.c
+index 6e147c43fc186..37c4c308339e4 100644
+--- a/crypto/jitterentropy.c
++++ b/crypto/jitterentropy.c
+@@ -265,7 +265,6 @@ static int jent_stuck(struct rand_data *ec, __u64 current_delta)
+ {
+ 	__u64 delta2 = jent_delta(ec->last_delta, current_delta);
+ 	__u64 delta3 = jent_delta(ec->last_delta2, delta2);
+-	unsigned int delta_masked = current_delta & JENT_APT_WORD_MASK;
+ 
+ 	ec->last_delta = current_delta;
+ 	ec->last_delta2 = delta2;
+@@ -274,7 +273,7 @@ static int jent_stuck(struct rand_data *ec, __u64 current_delta)
+ 	 * Insert the result of the comparison of two back-to-back time
+ 	 * deltas.
+ 	 */
+-	jent_apt_insert(ec, delta_masked);
++	jent_apt_insert(ec, current_delta);
+ 
+ 	if (!current_delta || !delta2 || !delta3) {
+ 		/* RCT with a stuck bit */
+diff --git a/drivers/acpi/acpica/exfield.c b/drivers/acpi/acpica/exfield.c
+index 3323a2ba6a313..b3230e511870a 100644
+--- a/drivers/acpi/acpica/exfield.c
++++ b/drivers/acpi/acpica/exfield.c
+@@ -326,12 +326,7 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
+ 		       obj_desc->field.base_byte_offset,
+ 		       source_desc->buffer.pointer, data_length);
+ 
+-		if ((obj_desc->field.region_obj->region.address ==
+-		     PCC_MASTER_SUBSPACE
+-		     && MASTER_SUBSPACE_COMMAND(obj_desc->field.
+-						base_byte_offset))
+-		    || GENERIC_SUBSPACE_COMMAND(obj_desc->field.
+-						base_byte_offset)) {
++		if (MASTER_SUBSPACE_COMMAND(obj_desc->field.base_byte_offset)) {
+ 
+ 			/* Perform the write */
+ 
+diff --git a/drivers/acpi/acpica/exoparg1.c b/drivers/acpi/acpica/exoparg1.c
+index a46d685a3ffcf..9d67dfd93d5b6 100644
+--- a/drivers/acpi/acpica/exoparg1.c
++++ b/drivers/acpi/acpica/exoparg1.c
+@@ -1007,7 +1007,8 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state)
+ 						    (walk_state, return_desc,
+ 						     &temp_desc);
+ 						if (ACPI_FAILURE(status)) {
+-							goto cleanup;
++							return_ACPI_STATUS
++							    (status);
+ 						}
+ 
+ 						return_desc = temp_desc;
+diff --git a/drivers/acpi/acpica/hwesleep.c b/drivers/acpi/acpica/hwesleep.c
+index 4836a4b8b38b8..142a755be6881 100644
+--- a/drivers/acpi/acpica/hwesleep.c
++++ b/drivers/acpi/acpica/hwesleep.c
+@@ -104,7 +104,9 @@ acpi_status acpi_hw_extended_sleep(u8 sleep_state)
+ 
+ 	/* Flush caches, as per ACPI specification */
+ 
+-	ACPI_FLUSH_CPU_CACHE();
++	if (sleep_state < ACPI_STATE_S4) {
++		ACPI_FLUSH_CPU_CACHE();
++	}
+ 
+ 	status = acpi_os_enter_sleep(sleep_state, sleep_control, 0);
+ 	if (status == AE_CTRL_TERMINATE) {
+diff --git a/drivers/acpi/acpica/hwsleep.c b/drivers/acpi/acpica/hwsleep.c
+index fcc84d196238a..6a20bb5059c1d 100644
+--- a/drivers/acpi/acpica/hwsleep.c
++++ b/drivers/acpi/acpica/hwsleep.c
+@@ -110,7 +110,9 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state)
+ 
+ 	/* Flush caches, as per ACPI specification */
+ 
+-	ACPI_FLUSH_CPU_CACHE();
++	if (sleep_state < ACPI_STATE_S4) {
++		ACPI_FLUSH_CPU_CACHE();
++	}
+ 
+ 	status = acpi_os_enter_sleep(sleep_state, pm1a_control, pm1b_control);
+ 	if (status == AE_CTRL_TERMINATE) {
+diff --git a/drivers/acpi/acpica/hwxfsleep.c b/drivers/acpi/acpica/hwxfsleep.c
+index f1645d87864c3..3948c34d85830 100644
+--- a/drivers/acpi/acpica/hwxfsleep.c
++++ b/drivers/acpi/acpica/hwxfsleep.c
+@@ -162,8 +162,6 @@ acpi_status acpi_enter_sleep_state_s4bios(void)
+ 		return_ACPI_STATUS(status);
+ 	}
+ 
+-	ACPI_FLUSH_CPU_CACHE();
+-
+ 	status = acpi_hw_write_port(acpi_gbl_FADT.smi_command,
+ 				    (u32)acpi_gbl_FADT.s4_bios_request, 8);
+ 	if (ACPI_FAILURE(status)) {
+diff --git a/drivers/acpi/acpica/utdelete.c b/drivers/acpi/acpica/utdelete.c
+index 72d2c0b656339..cb1750e7a6281 100644
+--- a/drivers/acpi/acpica/utdelete.c
++++ b/drivers/acpi/acpica/utdelete.c
+@@ -422,6 +422,7 @@ acpi_ut_update_ref_count(union acpi_operand_object *object, u32 action)
+ 			ACPI_WARNING((AE_INFO,
+ 				      "Obj %p, Reference Count is already zero, cannot decrement\n",
+ 				      object));
++			return;
+ 		}
+ 
+ 		ACPI_DEBUG_PRINT_RAW((ACPI_DB_ALLOCATIONS,
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index e04352c1dc2ce..2376f57b3617a 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -59,6 +59,7 @@ static int battery_bix_broken_package;
+ static int battery_notification_delay_ms;
+ static int battery_ac_is_broken;
+ static int battery_check_pmic = 1;
++static int battery_quirk_notcharging;
+ static unsigned int cache_time = 1000;
+ module_param(cache_time, uint, 0644);
+ MODULE_PARM_DESC(cache_time, "cache time in milliseconds");
+@@ -222,6 +223,8 @@ static int acpi_battery_get_property(struct power_supply *psy,
+ 			val->intval = POWER_SUPPLY_STATUS_CHARGING;
+ 		else if (acpi_battery_is_charged(battery))
+ 			val->intval = POWER_SUPPLY_STATUS_FULL;
++		else if (battery_quirk_notcharging)
++			val->intval = POWER_SUPPLY_STATUS_NOT_CHARGING;
+ 		else
+ 			val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
+ 		break;
+@@ -1105,6 +1108,12 @@ battery_do_not_check_pmic_quirk(const struct dmi_system_id *d)
+ 	return 0;
+ }
+ 
++static int __init battery_quirk_not_charging(const struct dmi_system_id *d)
++{
++	battery_quirk_notcharging = 1;
++	return 0;
++}
++
+ static const struct dmi_system_id bat_dmi_table[] __initconst = {
+ 	{
+ 		/* NEC LZ750/LS */
+@@ -1149,6 +1158,19 @@ static const struct dmi_system_id bat_dmi_table[] __initconst = {
+ 			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"),
+ 		},
+ 	},
++	{
++		/*
++		 * On Lenovo ThinkPads the BIOS specification defines
++		 * a state when the bits for charging and discharging
++		 * are both set to 0. That state is "Not Charging".
++		 */
++		.callback = battery_quirk_not_charging,
++		.ident = "Lenovo ThinkPad",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad"),
++		},
++	},
+ 	{},
+ };
+ 
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index e317214aabec5..5e14288fcabe9 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -98,8 +98,8 @@ int acpi_bus_get_status(struct acpi_device *device)
+ 	acpi_status status;
+ 	unsigned long long sta;
+ 
+-	if (acpi_device_always_present(device)) {
+-		acpi_set_device_status(device, ACPI_STA_DEFAULT);
++	if (acpi_device_override_status(device, &sta)) {
++		acpi_set_device_status(device, sta);
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index be3e0921a6c00..3f2e5ea9ab6b7 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -166,6 +166,7 @@ struct acpi_ec_query {
+ 	struct transaction transaction;
+ 	struct work_struct work;
+ 	struct acpi_ec_query_handler *handler;
++	struct acpi_ec *ec;
+ };
+ 
+ static int acpi_ec_query(struct acpi_ec *ec, u8 *data);
+@@ -469,6 +470,7 @@ static void acpi_ec_submit_query(struct acpi_ec *ec)
+ 		ec_dbg_evt("Command(%s) submitted/blocked",
+ 			   acpi_ec_cmd_string(ACPI_EC_COMMAND_QUERY));
+ 		ec->nr_pending_queries++;
++		ec->events_in_progress++;
+ 		queue_work(ec_wq, &ec->work);
+ 	}
+ }
+@@ -535,7 +537,7 @@ static void acpi_ec_enable_event(struct acpi_ec *ec)
+ #ifdef CONFIG_PM_SLEEP
+ static void __acpi_ec_flush_work(void)
+ {
+-	drain_workqueue(ec_wq); /* flush ec->work */
++	flush_workqueue(ec_wq); /* flush ec->work */
+ 	flush_workqueue(ec_query_wq); /* flush queries */
+ }
+ 
+@@ -1116,7 +1118,7 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit)
+ }
+ EXPORT_SYMBOL_GPL(acpi_ec_remove_query_handler);
+ 
+-static struct acpi_ec_query *acpi_ec_create_query(u8 *pval)
++static struct acpi_ec_query *acpi_ec_create_query(struct acpi_ec *ec, u8 *pval)
+ {
+ 	struct acpi_ec_query *q;
+ 	struct transaction *t;
+@@ -1124,11 +1126,13 @@ static struct acpi_ec_query *acpi_ec_create_query(u8 *pval)
+ 	q = kzalloc(sizeof (struct acpi_ec_query), GFP_KERNEL);
+ 	if (!q)
+ 		return NULL;
++
+ 	INIT_WORK(&q->work, acpi_ec_event_processor);
+ 	t = &q->transaction;
+ 	t->command = ACPI_EC_COMMAND_QUERY;
+ 	t->rdata = pval;
+ 	t->rlen = 1;
++	q->ec = ec;
+ 	return q;
+ }
+ 
+@@ -1145,13 +1149,21 @@ static void acpi_ec_event_processor(struct work_struct *work)
+ {
+ 	struct acpi_ec_query *q = container_of(work, struct acpi_ec_query, work);
+ 	struct acpi_ec_query_handler *handler = q->handler;
++	struct acpi_ec *ec = q->ec;
+ 
+ 	ec_dbg_evt("Query(0x%02x) started", handler->query_bit);
++
+ 	if (handler->func)
+ 		handler->func(handler->data);
+ 	else if (handler->handle)
+ 		acpi_evaluate_object(handler->handle, NULL, NULL, NULL);
++
+ 	ec_dbg_evt("Query(0x%02x) stopped", handler->query_bit);
++
++	spin_lock_irq(&ec->lock);
++	ec->queries_in_progress--;
++	spin_unlock_irq(&ec->lock);
++
+ 	acpi_ec_delete_query(q);
+ }
+ 
+@@ -1161,7 +1173,7 @@ static int acpi_ec_query(struct acpi_ec *ec, u8 *data)
+ 	int result;
+ 	struct acpi_ec_query *q;
+ 
+-	q = acpi_ec_create_query(&value);
++	q = acpi_ec_create_query(ec, &value);
+ 	if (!q)
+ 		return -ENOMEM;
+ 
+@@ -1183,19 +1195,20 @@ static int acpi_ec_query(struct acpi_ec *ec, u8 *data)
+ 	}
+ 
+ 	/*
+-	 * It is reported that _Qxx are evaluated in a parallel way on
+-	 * Windows:
++	 * It is reported that _Qxx are evaluated in a parallel way on Windows:
+ 	 * https://bugzilla.kernel.org/show_bug.cgi?id=94411
+ 	 *
+-	 * Put this log entry before schedule_work() in order to make
+-	 * it appearing before any other log entries occurred during the
+-	 * work queue execution.
++	 * Put this log entry before queue_work() to make it appear in the log
++	 * before any other messages emitted during workqueue handling.
+ 	 */
+ 	ec_dbg_evt("Query(0x%02x) scheduled", value);
+-	if (!queue_work(ec_query_wq, &q->work)) {
+-		ec_dbg_evt("Query(0x%02x) overlapped", value);
+-		result = -EBUSY;
+-	}
++
++	spin_lock_irq(&ec->lock);
++
++	ec->queries_in_progress++;
++	queue_work(ec_query_wq, &q->work);
++
++	spin_unlock_irq(&ec->lock);
+ 
+ err_exit:
+ 	if (result)
+@@ -1253,6 +1266,10 @@ static void acpi_ec_event_handler(struct work_struct *work)
+ 	ec_dbg_evt("Event stopped");
+ 
+ 	acpi_ec_check_event(ec);
++
++	spin_lock_irqsave(&ec->lock, flags);
++	ec->events_in_progress--;
++	spin_unlock_irqrestore(&ec->lock, flags);
+ }
+ 
+ static void acpi_ec_handle_interrupt(struct acpi_ec *ec)
+@@ -2034,6 +2051,7 @@ void acpi_ec_set_gpe_wake_mask(u8 action)
+ 
+ bool acpi_ec_dispatch_gpe(void)
+ {
++	bool work_in_progress;
+ 	u32 ret;
+ 
+ 	if (!first_ec)
+@@ -2054,8 +2072,19 @@ bool acpi_ec_dispatch_gpe(void)
+ 	if (ret == ACPI_INTERRUPT_HANDLED)
+ 		pm_pr_dbg("ACPI EC GPE dispatched\n");
+ 
+-	/* Flush the event and query workqueues. */
+-	acpi_ec_flush_work();
++	/* Drain EC work. */
++	do {
++		acpi_ec_flush_work();
++
++		pm_pr_dbg("ACPI EC work flushed\n");
++
++		spin_lock_irq(&first_ec->lock);
++
++		work_in_progress = first_ec->events_in_progress +
++			first_ec->queries_in_progress > 0;
++
++		spin_unlock_irq(&first_ec->lock);
++	} while (work_in_progress && !pm_wakeup_pending());
+ 
+ 	return false;
+ }
+diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
+index a958ad60a3394..125e4901c9b47 100644
+--- a/drivers/acpi/internal.h
++++ b/drivers/acpi/internal.h
+@@ -184,6 +184,8 @@ struct acpi_ec {
+ 	struct work_struct work;
+ 	unsigned long timestamp;
+ 	unsigned long nr_pending_queries;
++	unsigned int events_in_progress;
++	unsigned int queries_in_progress;
+ 	bool busy_polling;
+ 	unsigned int polling_guard;
+ };
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index de0533bd4e086..67a5ee2fedfd3 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -1577,6 +1577,7 @@ static bool acpi_device_enumeration_by_parent(struct acpi_device *device)
+ {
+ 	struct list_head resource_list;
+ 	bool is_serial_bus_slave = false;
++	static const struct acpi_device_id ignore_serial_bus_ids[] = {
+ 	/*
+ 	 * These devices have multiple I2cSerialBus resources and an i2c-client
+ 	 * must be instantiated for each, each with its own i2c_device_id.
+@@ -1585,11 +1586,18 @@ static bool acpi_device_enumeration_by_parent(struct acpi_device *device)
+ 	 * drivers/platform/x86/i2c-multi-instantiate.c driver, which knows
+ 	 * which i2c_device_id to use for each resource.
+ 	 */
+-	static const struct acpi_device_id i2c_multi_instantiate_ids[] = {
+ 		{"BSG1160", },
+ 		{"BSG2150", },
+ 		{"INT33FE", },
+ 		{"INT3515", },
++	/*
++	 * HIDs of device with an UartSerialBusV2 resource for which userspace
++	 * expects a regular tty cdev to be created (instead of the in kernel
++	 * serdev) and which have a kernel driver which expects a platform_dev
++	 * such as the rfkill-gpio driver.
++	 */
++		{"BCM4752", },
++		{"LNV4752", },
+ 		{}
+ 	};
+ 
+@@ -1603,8 +1611,7 @@ static bool acpi_device_enumeration_by_parent(struct acpi_device *device)
+ 	     fwnode_property_present(&device->fwnode, "baud")))
+ 		return true;
+ 
+-	/* Instantiate a pdev for the i2c-multi-instantiate drv to bind to */
+-	if (!acpi_match_device_ids(device, i2c_multi_instantiate_ids))
++	if (!acpi_match_device_ids(device, ignore_serial_bus_ids))
+ 		return false;
+ 
+ 	INIT_LIST_HEAD(&resource_list);
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index bdc1ba00aee9f..3f9a162be84e3 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -22,58 +22,71 @@
+  * Some BIOS-es (temporarily) hide specific APCI devices to work around Windows
+  * driver bugs. We use DMI matching to match known cases of this.
+  *
+- * We work around this by always reporting ACPI_STA_DEFAULT for these
+- * devices. Note this MUST only be done for devices where this is safe.
++ * Likewise sometimes some not-actually present devices are sometimes
++ * reported as present, which may cause issues.
+  *
+- * This forcing of devices to be present is limited to specific CPU (SoC)
+- * models both to avoid potentially causing trouble on other models and
+- * because some HIDs are re-used on different SoCs for completely
+- * different devices.
++ * We work around this by using the below quirk list to override the status
++ * reported by the _STA method with a fixed value (ACPI_STA_DEFAULT or 0).
++ * Note this MUST only be done for devices where this is safe.
++ *
++ * This status overriding is limited to specific CPU (SoC) models both to
++ * avoid potentially causing trouble on other models and because some HIDs
++ * are re-used on different SoCs for completely different devices.
+  */
+-struct always_present_id {
++struct override_status_id {
+ 	struct acpi_device_id hid[2];
+ 	struct x86_cpu_id cpu_ids[2];
+ 	struct dmi_system_id dmi_ids[2]; /* Optional */
+ 	const char *uid;
++	const char *path;
++	unsigned long long status;
+ };
+ 
+-#define X86_MATCH(model)	X86_MATCH_INTEL_FAM6_MODEL(model, NULL)
+-
+-#define ENTRY(hid, uid, cpu_models, dmi...) {				\
++#define ENTRY(status, hid, uid, path, cpu_model, dmi...) {		\
+ 	{ { hid, }, {} },						\
+-	{ cpu_models, {} },						\
++	{ X86_MATCH_INTEL_FAM6_MODEL(cpu_model, NULL), {} },		\
+ 	{ { .matches = dmi }, {} },					\
+ 	uid,								\
++	path,								\
++	status,								\
+ }
+ 
+-static const struct always_present_id always_present_ids[] = {
++#define PRESENT_ENTRY_HID(hid, uid, cpu_model, dmi...) \
++	ENTRY(ACPI_STA_DEFAULT, hid, uid, NULL, cpu_model, dmi)
++
++#define NOT_PRESENT_ENTRY_HID(hid, uid, cpu_model, dmi...) \
++	ENTRY(0, hid, uid, NULL, cpu_model, dmi)
++
++#define PRESENT_ENTRY_PATH(path, cpu_model, dmi...) \
++	ENTRY(ACPI_STA_DEFAULT, "", NULL, path, cpu_model, dmi)
++
++#define NOT_PRESENT_ENTRY_PATH(path, cpu_model, dmi...) \
++	ENTRY(0, "", NULL, path, cpu_model, dmi)
++
++static const struct override_status_id override_status_ids[] = {
+ 	/*
+ 	 * Bay / Cherry Trail PWM directly poked by GPU driver in win10,
+ 	 * but Linux uses a separate PWM driver, harmless if not used.
+ 	 */
+-	ENTRY("80860F09", "1", X86_MATCH(ATOM_SILVERMONT), {}),
+-	ENTRY("80862288", "1", X86_MATCH(ATOM_AIRMONT), {}),
++	PRESENT_ENTRY_HID("80860F09", "1", ATOM_SILVERMONT, {}),
++	PRESENT_ENTRY_HID("80862288", "1", ATOM_AIRMONT, {}),
+ 
+-	/* Lenovo Yoga Book uses PWM2 for keyboard backlight control */
+-	ENTRY("80862289", "2", X86_MATCH(ATOM_AIRMONT), {
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X9"),
+-		}),
+ 	/*
+ 	 * The INT0002 device is necessary to clear wakeup interrupt sources
+ 	 * on Cherry Trail devices, without it we get nobody cared IRQ msgs.
+ 	 */
+-	ENTRY("INT0002", "1", X86_MATCH(ATOM_AIRMONT), {}),
++	PRESENT_ENTRY_HID("INT0002", "1", ATOM_AIRMONT, {}),
+ 	/*
+ 	 * On the Dell Venue 11 Pro 7130 and 7139, the DSDT hides
+ 	 * the touchscreen ACPI device until a certain time
+ 	 * after _SB.PCI0.GFX0.LCD.LCD1._ON gets called has passed
+ 	 * *and* _STA has been called at least 3 times since.
+ 	 */
+-	ENTRY("SYNA7500", "1", X86_MATCH(HASWELL_L), {
++	PRESENT_ENTRY_HID("SYNA7500", "1", HASWELL_L, {
+ 		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Venue 11 Pro 7130"),
+ 	      }),
+-	ENTRY("SYNA7500", "1", X86_MATCH(HASWELL_L), {
++	PRESENT_ENTRY_HID("SYNA7500", "1", HASWELL_L, {
+ 		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Venue 11 Pro 7139"),
+ 	      }),
+@@ -81,54 +94,83 @@ static const struct always_present_id always_present_ids[] = {
+ 	/*
+ 	 * The GPD win BIOS dated 20170221 has disabled the accelerometer, the
+ 	 * drivers sometimes cause crashes under Windows and this is how the
+-	 * manufacturer has solved this :| Note that the the DMI data is less
+-	 * generic then it seems, a board_vendor of "AMI Corporation" is quite
+-	 * rare and a board_name of "Default String" also is rare.
++	 * manufacturer has solved this :|  The DMI match may not seem unique,
++	 * but it is. In the 67000+ DMI decode dumps from linux-hardware.org
++	 * only 116 have board_vendor set to "AMI Corporation" and of those 116
++	 * only the GPD win and pocket entries' board_name is "Default string".
+ 	 *
+ 	 * Unfortunately the GPD pocket also uses these strings and its BIOS
+ 	 * was copy-pasted from the GPD win, so it has a disabled KIOX000A
+ 	 * node which we should not enable, thus we also check the BIOS date.
+ 	 */
+-	ENTRY("KIOX000A", "1", X86_MATCH(ATOM_AIRMONT), {
++	PRESENT_ENTRY_HID("KIOX000A", "1", ATOM_AIRMONT, {
+ 		DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+ 		DMI_MATCH(DMI_BOARD_NAME, "Default string"),
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
+ 		DMI_MATCH(DMI_BIOS_DATE, "02/21/2017")
+ 	      }),
+-	ENTRY("KIOX000A", "1", X86_MATCH(ATOM_AIRMONT), {
++	PRESENT_ENTRY_HID("KIOX000A", "1", ATOM_AIRMONT, {
+ 		DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+ 		DMI_MATCH(DMI_BOARD_NAME, "Default string"),
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
+ 		DMI_MATCH(DMI_BIOS_DATE, "03/20/2017")
+ 	      }),
+-	ENTRY("KIOX000A", "1", X86_MATCH(ATOM_AIRMONT), {
++	PRESENT_ENTRY_HID("KIOX000A", "1", ATOM_AIRMONT, {
+ 		DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+ 		DMI_MATCH(DMI_BOARD_NAME, "Default string"),
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
+ 		DMI_MATCH(DMI_BIOS_DATE, "05/25/2017")
+ 	      }),
++
++	/*
++	 * The GPD win/pocket have a PCI wifi card, but its DSDT has the SDIO
++	 * mmc controller enabled and that has a child-device which _PS3
++	 * method sets a GPIO causing the PCI wifi card to turn off.
++	 * See above remark about uniqueness of the DMI match.
++	 */
++	NOT_PRESENT_ENTRY_PATH("\\_SB_.PCI0.SDHB.BRC1", ATOM_AIRMONT, {
++		DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
++		DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
++		DMI_EXACT_MATCH(DMI_BOARD_SERIAL, "Default string"),
++		DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Default string"),
++	      }),
+ };
+ 
+-bool acpi_device_always_present(struct acpi_device *adev)
++bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *status)
+ {
+ 	bool ret = false;
+ 	unsigned int i;
+ 
+-	for (i = 0; i < ARRAY_SIZE(always_present_ids); i++) {
+-		if (acpi_match_device_ids(adev, always_present_ids[i].hid))
++	for (i = 0; i < ARRAY_SIZE(override_status_ids); i++) {
++		if (!x86_match_cpu(override_status_ids[i].cpu_ids))
+ 			continue;
+ 
+-		if (!adev->pnp.unique_id ||
+-		    strcmp(adev->pnp.unique_id, always_present_ids[i].uid))
++		if (override_status_ids[i].dmi_ids[0].matches[0].slot &&
++		    !dmi_check_system(override_status_ids[i].dmi_ids))
+ 			continue;
+ 
+-		if (!x86_match_cpu(always_present_ids[i].cpu_ids))
+-			continue;
++		if (override_status_ids[i].path) {
++			struct acpi_buffer path = { ACPI_ALLOCATE_BUFFER, NULL };
++			bool match;
+ 
+-		if (always_present_ids[i].dmi_ids[0].matches[0].slot &&
+-		    !dmi_check_system(always_present_ids[i].dmi_ids))
+-			continue;
++			if (acpi_get_name(adev->handle, ACPI_FULL_PATHNAME, &path))
++				continue;
++
++			match = strcmp((char *)path.pointer, override_status_ids[i].path) == 0;
++			kfree(path.pointer);
++
++			if (!match)
++				continue;
++		} else {
++			if (acpi_match_device_ids(adev, override_status_ids[i].hid))
++				continue;
++
++			if (!adev->pnp.unique_id ||
++			    strcmp(adev->pnp.unique_id, override_status_ids[i].uid))
++				continue;
++		}
+ 
++		*status = override_status_ids[i].status;
+ 		ret = true;
+ 		break;
+ 	}
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 80e2bbb36422e..366b124057081 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -2657,8 +2657,8 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
+ 		if (!ret)
+ 			ret = binder_translate_fd(fd, offset, t, thread,
+ 						  in_reply_to);
+-		if (ret < 0)
+-			return ret;
++		if (ret)
++			return ret > 0 ? -EINVAL : ret;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 389d13616d1df..c0566aff53551 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -348,8 +348,7 @@ static void device_link_release_fn(struct work_struct *work)
+ 	/* Ensure that all references to the link object have been dropped. */
+ 	device_link_synchronize_removal();
+ 
+-	while (refcount_dec_not_one(&link->rpm_active))
+-		pm_runtime_put(link->supplier);
++	pm_runtime_release_supplier(link, true);
+ 
+ 	put_device(link->consumer);
+ 	put_device(link->supplier);
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index bc649da4899a0..1573319404888 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -305,19 +305,40 @@ static int rpm_get_suppliers(struct device *dev)
+ 	return 0;
+ }
+ 
++/**
++ * pm_runtime_release_supplier - Drop references to device link's supplier.
++ * @link: Target device link.
++ * @check_idle: Whether or not to check if the supplier device is idle.
++ *
++ * Drop all runtime PM references associated with @link to its supplier device
++ * and if @check_idle is set, check if that device is idle (and so it can be
++ * suspended).
++ */
++void pm_runtime_release_supplier(struct device_link *link, bool check_idle)
++{
++	struct device *supplier = link->supplier;
++
++	/*
++	 * The additional power.usage_count check is a safety net in case
++	 * the rpm_active refcount becomes saturated, in which case
++	 * refcount_dec_not_one() would return true forever, but it is not
++	 * strictly necessary.
++	 */
++	while (refcount_dec_not_one(&link->rpm_active) &&
++	       atomic_read(&supplier->power.usage_count) > 0)
++		pm_runtime_put_noidle(supplier);
++
++	if (check_idle)
++		pm_request_idle(supplier);
++}
++
+ static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)
+ {
+ 	struct device_link *link;
+ 
+ 	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
+-				device_links_read_lock_held()) {
+-
+-		while (refcount_dec_not_one(&link->rpm_active))
+-			pm_runtime_put_noidle(link->supplier);
+-
+-		if (try_to_suspend)
+-			pm_request_idle(link->supplier);
+-	}
++				device_links_read_lock_held())
++		pm_runtime_release_supplier(link, try_to_suspend);
+ }
+ 
+ static void rpm_put_suppliers(struct device *dev)
+@@ -1755,9 +1776,7 @@ void pm_runtime_drop_link(struct device_link *link)
+ 		return;
+ 
+ 	pm_runtime_drop_link_count(link->consumer);
+-
+-	while (refcount_dec_not_one(&link->rpm_active))
+-		pm_runtime_put(link->supplier);
++	pm_runtime_release_supplier(link, true);
+ }
+ 
+ static bool pm_runtime_need_not_resume(struct device *dev)
+diff --git a/drivers/base/property.c b/drivers/base/property.c
+index 4c43d30145c6b..cf88a5554d9c5 100644
+--- a/drivers/base/property.c
++++ b/drivers/base/property.c
+@@ -1195,8 +1195,10 @@ fwnode_graph_devcon_match(struct fwnode_handle *fwnode, const char *con_id,
+ 
+ 	fwnode_graph_for_each_endpoint(fwnode, ep) {
+ 		node = fwnode_graph_get_remote_port_parent(ep);
+-		if (!fwnode_device_is_available(node))
++		if (!fwnode_device_is_available(node)) {
++			fwnode_handle_put(node);
+ 			continue;
++		}
+ 
+ 		ret = match(node, con_id, data);
+ 		fwnode_handle_put(node);
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 456a1787e18d0..55a30afc14a00 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -620,6 +620,7 @@ int regmap_attach_dev(struct device *dev, struct regmap *map,
+ 	if (ret)
+ 		return ret;
+ 
++	regmap_debugfs_exit(map);
+ 	regmap_debugfs_init(map);
+ 
+ 	/* Add a devres resource for dev_get_regmap() */
+diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
+index 206bd4d7d7e23..d2fb3eb5816c3 100644
+--- a/drivers/base/swnode.c
++++ b/drivers/base/swnode.c
+@@ -519,7 +519,7 @@ software_node_get_reference_args(const struct fwnode_handle *fwnode,
+ 		return -ENOENT;
+ 
+ 	if (nargs_prop) {
+-		error = property_entry_read_int_array(swnode->node->properties,
++		error = property_entry_read_int_array(ref->node->properties,
+ 						      nargs_prop, sizeof(u32),
+ 						      &nargs_prop_val, 1);
+ 		if (error)
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 7df79ae6b0a1e..aaee15058d181 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -1015,7 +1015,7 @@ static DECLARE_DELAYED_WORK(fd_timer, fd_timer_workfn);
+ static void cancel_activity(void)
+ {
+ 	do_floppy = NULL;
+-	cancel_delayed_work_sync(&fd_timer);
++	cancel_delayed_work(&fd_timer);
+ 	cancel_work_sync(&floppy_work);
+ }
+ 
+@@ -3169,6 +3169,8 @@ static void raw_cmd_free(struct floppy_raw_cmd **ptr)
+ 	}
+ }
+ 
++#define MAX_LEN (1UL << MAX_ORDER << PAGE_SHIFT)
++
+ static int raw_cmd_copyin(int cmd, void __user *param,
+ 				 struct floppy_raw_cmd **rcmd)
+ {
+@@ -3198,7 +3200,7 @@ loop:
+ 	ptr->resultcode = 0;
+ 
+ 	if (ptr->flags & (FD_RAW_READ | FD_RAW_WRITE)) {
+-		if (ptr->length <= 0)
++		if (ptr->length <= 0 || ptr->length >= MAX_LEN)
+ 			return -EINVAL;
+ 		ptr->kernel_data = (char *)fd_dma_mem_alloc(ptr->length);
+ 		fallback_on_nodma_alloc(&ptr->kernel_data, ptr->length);
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index 5f9f027956317..74856a5862162 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -1042,6 +1042,8 @@ static int btmtksdio_runtime_suspend(struct device *dev)
+ 	if (!bdev)
+ 		return 0;
+ 
++	sdio_set_host_pm_flags(func, MMC_PM_KEEP_POWER);
++
+ 	sdio_claim_host(bdev->func);
+ 
+ 	sdio_writel(bdev->func, C_FW_OWN_REQ_SET, MTK_REG_CHLPCR, &err);
+diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
+index 8ea5ca8d71d6d..259a643377c24 100644
+--- a/drivers/bluetooth/hci_bcm.c
++++ b/drivers/bluetooth/hci_bcm.c
+@@ -1164,7 +1164,12 @@ static int bcm_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	dev->dev = &pdev->dev;
+-	dev->irq = platform_get_irq(pdev, 0);
++
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		return ret;
++
++	dev->irq = ret;
+ 
+ 	/* Initialize routing field to an unused value */
+ 	dev->pcm_int_params[0] = 0xff;
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 4184faef9f169..dc7ee5dd2eeca 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -1844,6 +1844,9 @@ static int qca_power_off(struct hci_dev *hdev)
+ 	hu->hdev->hw_error = NULL;
+ 	hu->hdev->cmd_timeout = NULL;
+ 
++	del_timer_sync(&qca->wake_retrans_timer);
++	del_timer_sync(&qca->tx_idle_timer);
++
+ 	/* Stop sending shutdown command if soc crashes. */
+ 	if (soc_type != QCA_ROME
+ 		&& qca->memdump_state == QCA_MEMDUMP_IDLE) {
+@@ -1987,7 +1990,7 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ 
+ 		qcadev->bt_en = devm_gpiod_get_optional(&serdev->dev, "enable",
+ 					       GPIOD_OUT_LOW);
+-		if (!qcadev->bt_en) {
++		if (IS_ERR_OR_NULL(qcadev->bt_en)) {
+ 			dev_warn(&serdev->dev, "failed to acquire enable gpio\n");
+ 			power_ctrl_enabled = false;
+ 		}
+diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c
+index 8ab26dec5f6e8..8469f9876dd26 100644
+--- a/drivers/bluetooth/hci_vhci.c
++++ b/drivers/bluetooth/hci_vhci.c
+@@ -121,6 +121,8 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
+ 	if (opcode & 0x80)
+ 		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
+ 
++	set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
++
+ 	if (hci_register_dev(hdev) < 0) {
+ 		BT_ERR("Can't register HCI device");
+ 		hci_free_dev(hdev);
+diff --git a/drivers/char/mwave/3780i.h b/drivers/char/mwave/3780i.h
+index 9ccb6b270b071..95164246afd1a 100644
+--- a/drivers/char/mwave/3780i.h
++++ b/drivers/char/mwave/3780i.h
+@@ -68,7 +68,7 @@ typedef struct {
+ 	unsigned char ClockControl:1;	/* RW: Clock control: 0=normal, 1=stop 3780i clocks */
+ 	unsigned char SoftReset:1;	/* RW: Soft reset 0=normal, 1=soft reset active */
+ 	unsigned char ConfigMode:1;	/* RW: Configuration mode, 0=normal, 1=config mode */
+-	unsigned char Reserved:5;	/* 0: Reserved */
++	unsigned short Reserved:13;	/* 0: Reserved */
+ } DSP_ISA_SLAVE_CONTROL;
+ 
+ 
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 8c94380e7a463..5444206f35e22 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -922,12 +922,14 @@ static struct crng_state *select_crng(void)
+ 
+ /*
+  * crng_fast_load() can be called by code in the interrupt service
+- * path.  So we can't afford to dilly-dally.
++ * path.  So we can't afford to dilly-dally. Returns the number of
++ * bytes processed from cp.
+  */
+-static int crng_fast_load(const char *cp, size_t len)
++static size_t crng_fast_load(const char *cp, size_t len)
+ {
+ 	unsigned long flags;
+ 	char *p;
++	size_t ret = 0;
+ 
+ 	if (!spin_trylock_irqsave(&primary_crng.lock, flags))
+ 		return 0;
+@@ -938,7 +940,7 @@ static int crng_fast_load(const char *cp, size_t len)
+ 	p = (unsigned char *) &primary_crng.state[4];
+ 	while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
+ 		p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp;
+-		cp++; crng_init_cnt++; len--;
++		cp++; crng_init_cnt++; len--; ret++;
+ 	}
+ 	spin_unlock_irqrestore(&primary_crng.lock, flags);
+ 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
+@@ -946,7 +948,7 @@ static int crng_fast_load(const char *cp, size_t len)
+ 		crng_init = 1;
+ 		pr_notice("fast init done\n");
+ 	}
+-	return 1;
++	return ret;
+ }
+ 
+ /*
+@@ -1299,7 +1301,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
+ 	if (unlikely(crng_init == 0)) {
+ 		if ((fast_pool->count >= 64) &&
+ 		    crng_fast_load((char *) fast_pool->pool,
+-				   sizeof(fast_pool->pool))) {
++				   sizeof(fast_pool->pool)) > 0) {
+ 			fast_pool->count = 0;
+ 			fast_pool->last = now;
+ 		}
+@@ -2319,8 +2321,11 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
+ 	struct entropy_store *poolp = &input_pool;
+ 
+ 	if (unlikely(crng_init == 0)) {
+-		crng_fast_load(buffer, count);
+-		return;
++		size_t ret = crng_fast_load(buffer, count);
++		count -= ret;
++		buffer += ret;
++		if (!count || crng_init == 0)
++			return;
+ 	}
+ 
+ 	/* Suspend writing if we're above the trickle threshold.
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index b2659a4c40168..dc56b976d8162 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -950,9 +950,11 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	priv->timeout_max = TPM_TIMEOUT_USECS_MAX;
+ 	priv->phy_ops = phy_ops;
+ 
++	dev_set_drvdata(&chip->dev, priv);
++
+ 	rc = tpm_tis_read32(priv, TPM_DID_VID(0), &vendor);
+ 	if (rc < 0)
+-		goto out_err;
++		return rc;
+ 
+ 	priv->manufacturer_id = vendor;
+ 
+@@ -962,8 +964,6 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 		priv->timeout_max = TIS_TIMEOUT_MAX_ATML;
+ 	}
+ 
+-	dev_set_drvdata(&chip->dev, priv);
+-
+ 	if (is_bsw()) {
+ 		priv->ilb_base_addr = ioremap(INTEL_LEGACY_BLK_BASE_ADDR,
+ 					ILB_REMAP_SIZE);
+@@ -994,7 +994,15 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	intmask |= TPM_INTF_CMD_READY_INT | TPM_INTF_LOCALITY_CHANGE_INT |
+ 		   TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT;
+ 	intmask &= ~TPM_GLOBAL_INT_ENABLE;
++
++	rc = request_locality(chip, 0);
++	if (rc < 0) {
++		rc = -ENODEV;
++		goto out_err;
++	}
++
+ 	tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
++	release_locality(chip, 0);
+ 
+ 	rc = tpm_chip_start(chip);
+ 	if (rc)
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index 1ac803e14fa3e..178886823b90c 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -933,8 +933,7 @@ static int bcm2835_clock_is_on(struct clk_hw *hw)
+ 
+ static u32 bcm2835_clock_choose_div(struct clk_hw *hw,
+ 				    unsigned long rate,
+-				    unsigned long parent_rate,
+-				    bool round_up)
++				    unsigned long parent_rate)
+ {
+ 	struct bcm2835_clock *clock = bcm2835_clock_from_hw(hw);
+ 	const struct bcm2835_clock_data *data = clock->data;
+@@ -946,10 +945,6 @@ static u32 bcm2835_clock_choose_div(struct clk_hw *hw,
+ 
+ 	rem = do_div(temp, rate);
+ 	div = temp;
+-
+-	/* Round up and mask off the unused bits */
+-	if (round_up && ((div & unused_frac_mask) != 0 || rem != 0))
+-		div += unused_frac_mask + 1;
+ 	div &= ~unused_frac_mask;
+ 
+ 	/* different clamping limits apply for a mash clock */
+@@ -1080,7 +1075,7 @@ static int bcm2835_clock_set_rate(struct clk_hw *hw,
+ 	struct bcm2835_clock *clock = bcm2835_clock_from_hw(hw);
+ 	struct bcm2835_cprman *cprman = clock->cprman;
+ 	const struct bcm2835_clock_data *data = clock->data;
+-	u32 div = bcm2835_clock_choose_div(hw, rate, parent_rate, false);
++	u32 div = bcm2835_clock_choose_div(hw, rate, parent_rate);
+ 	u32 ctl;
+ 
+ 	spin_lock(&cprman->regs_lock);
+@@ -1131,7 +1126,7 @@ static unsigned long bcm2835_clock_choose_div_and_prate(struct clk_hw *hw,
+ 
+ 	if (!(BIT(parent_idx) & data->set_rate_parent)) {
+ 		*prate = clk_hw_get_rate(parent);
+-		*div = bcm2835_clock_choose_div(hw, rate, *prate, true);
++		*div = bcm2835_clock_choose_div(hw, rate, *prate);
+ 
+ 		*avgrate = bcm2835_clock_rate_from_divisor(clock, *prate, *div);
+ 
+@@ -1217,7 +1212,7 @@ static int bcm2835_clock_determine_rate(struct clk_hw *hw,
+ 		rate = bcm2835_clock_choose_div_and_prate(hw, i, req->rate,
+ 							  &div, &prate,
+ 							  &avgrate);
+-		if (rate > best_rate && rate <= req->rate) {
++		if (abs(req->rate - rate) < abs(req->rate - best_rate)) {
+ 			best_parent = parent;
+ 			best_prate = prate;
+ 			best_rate = rate;
+diff --git a/drivers/clk/clk-bm1880.c b/drivers/clk/clk-bm1880.c
+index e6d6599d310a1..fad78a22218e8 100644
+--- a/drivers/clk/clk-bm1880.c
++++ b/drivers/clk/clk-bm1880.c
+@@ -522,14 +522,6 @@ static struct clk_hw *bm1880_clk_register_pll(struct bm1880_pll_hw_clock *pll_cl
+ 	return hw;
+ }
+ 
+-static void bm1880_clk_unregister_pll(struct clk_hw *hw)
+-{
+-	struct bm1880_pll_hw_clock *pll_hw = to_bm1880_pll_clk(hw);
+-
+-	clk_hw_unregister(hw);
+-	kfree(pll_hw);
+-}
+-
+ static int bm1880_clk_register_plls(struct bm1880_pll_hw_clock *clks,
+ 				    int num_clks,
+ 				    struct bm1880_clock_data *data)
+@@ -555,7 +547,7 @@ static int bm1880_clk_register_plls(struct bm1880_pll_hw_clock *clks,
+ 
+ err_clk:
+ 	while (i--)
+-		bm1880_clk_unregister_pll(data->hw_data.hws[clks[i].pll.id]);
++		clk_hw_unregister(data->hw_data.hws[clks[i].pll.id]);
+ 
+ 	return PTR_ERR(hw);
+ }
+@@ -695,14 +687,6 @@ static struct clk_hw *bm1880_clk_register_div(struct bm1880_div_hw_clock *div_cl
+ 	return hw;
+ }
+ 
+-static void bm1880_clk_unregister_div(struct clk_hw *hw)
+-{
+-	struct bm1880_div_hw_clock *div_hw = to_bm1880_div_clk(hw);
+-
+-	clk_hw_unregister(hw);
+-	kfree(div_hw);
+-}
+-
+ static int bm1880_clk_register_divs(struct bm1880_div_hw_clock *clks,
+ 				    int num_clks,
+ 				    struct bm1880_clock_data *data)
+@@ -729,7 +713,7 @@ static int bm1880_clk_register_divs(struct bm1880_div_hw_clock *clks,
+ 
+ err_clk:
+ 	while (i--)
+-		bm1880_clk_unregister_div(data->hw_data.hws[clks[i].div.id]);
++		clk_hw_unregister(data->hw_data.hws[clks[i].div.id]);
+ 
+ 	return PTR_ERR(hw);
+ }
+diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
+index eb22f4fdbc6b4..772b48ad0cd78 100644
+--- a/drivers/clk/clk-si5341.c
++++ b/drivers/clk/clk-si5341.c
+@@ -1576,7 +1576,7 @@ static int si5341_probe(struct i2c_client *client,
+ 			clk_prepare(data->clk[i].hw.clk);
+ 	}
+ 
+-	err = of_clk_add_hw_provider(client->dev.of_node, of_clk_si5341_get,
++	err = devm_of_clk_add_hw_provider(&client->dev, of_clk_si5341_get,
+ 			data);
+ 	if (err) {
+ 		dev_err(&client->dev, "unable to add clk provider\n");
+diff --git a/drivers/clk/clk-stm32f4.c b/drivers/clk/clk-stm32f4.c
+index 5c75e3d906c20..682a18b392f08 100644
+--- a/drivers/clk/clk-stm32f4.c
++++ b/drivers/clk/clk-stm32f4.c
+@@ -129,7 +129,6 @@ static const struct stm32f4_gate_data stm32f429_gates[] __initconst = {
+ 	{ STM32F4_RCC_APB2ENR, 20,	"spi5",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 21,	"spi6",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 22,	"sai1",		"apb2_div" },
+-	{ STM32F4_RCC_APB2ENR, 26,	"ltdc",		"apb2_div" },
+ };
+ 
+ static const struct stm32f4_gate_data stm32f469_gates[] __initconst = {
+@@ -211,7 +210,6 @@ static const struct stm32f4_gate_data stm32f469_gates[] __initconst = {
+ 	{ STM32F4_RCC_APB2ENR, 20,	"spi5",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 21,	"spi6",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 22,	"sai1",		"apb2_div" },
+-	{ STM32F4_RCC_APB2ENR, 26,	"ltdc",		"apb2_div" },
+ };
+ 
+ static const struct stm32f4_gate_data stm32f746_gates[] __initconst = {
+@@ -286,7 +284,6 @@ static const struct stm32f4_gate_data stm32f746_gates[] __initconst = {
+ 	{ STM32F4_RCC_APB2ENR, 21,	"spi6",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 22,	"sai1",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 23,	"sai2",		"apb2_div" },
+-	{ STM32F4_RCC_APB2ENR, 26,	"ltdc",		"apb2_div" },
+ };
+ 
+ static const struct stm32f4_gate_data stm32f769_gates[] __initconst = {
+@@ -364,7 +361,6 @@ static const struct stm32f4_gate_data stm32f769_gates[] __initconst = {
+ 	{ STM32F4_RCC_APB2ENR, 21,	"spi6",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 22,	"sai1",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 23,	"sai2",		"apb2_div" },
+-	{ STM32F4_RCC_APB2ENR, 26,	"ltdc",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 30,	"mdio",		"apb2_div" },
+ };
+ 
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 515ef39c4610c..b8a0e3d23698c 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3314,6 +3314,24 @@ static int __init clk_debug_init(void)
+ {
+ 	struct clk_core *core;
+ 
++#ifdef CLOCK_ALLOW_WRITE_DEBUGFS
++	pr_warn("\n");
++	pr_warn("********************************************************************\n");
++	pr_warn("**     NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE           **\n");
++	pr_warn("**                                                                **\n");
++	pr_warn("**  WRITEABLE clk DebugFS SUPPORT HAS BEEN ENABLED IN THIS KERNEL **\n");
++	pr_warn("**                                                                **\n");
++	pr_warn("** This means that this kernel is built to expose clk operations  **\n");
++	pr_warn("** such as parent or rate setting, enabling, disabling, etc.      **\n");
++	pr_warn("** to userspace, which may compromise security on your system.    **\n");
++	pr_warn("**                                                                **\n");
++	pr_warn("** If you see this message and you are not debugging the          **\n");
++	pr_warn("** kernel, report this immediately to your vendor!                **\n");
++	pr_warn("**                                                                **\n");
++	pr_warn("**     NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE           **\n");
++	pr_warn("********************************************************************\n");
++#endif
++
+ 	rootdir = debugfs_create_dir("clk", NULL);
+ 
+ 	debugfs_create_file("clk_summary", 0444, rootdir, &all_lists,
+diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
+index 33a7ddc23cd24..db122d94db583 100644
+--- a/drivers/clk/imx/clk-imx8mn.c
++++ b/drivers/clk/imx/clk-imx8mn.c
+@@ -274,9 +274,9 @@ static const char * const imx8mn_pdm_sels[] = {"osc_24m", "sys_pll2_100m", "audi
+ 
+ static const char * const imx8mn_dram_core_sels[] = {"dram_pll_out", "dram_alt_root", };
+ 
+-static const char * const imx8mn_clko1_sels[] = {"osc_24m", "sys_pll1_800m", "osc_27m",
+-						 "sys_pll1_200m", "audio_pll2_out", "vpu_pll",
+-						 "sys_pll1_80m", };
++static const char * const imx8mn_clko1_sels[] = {"osc_24m", "sys_pll1_800m", "dummy",
++						 "sys_pll1_200m", "audio_pll2_out", "sys_pll2_500m",
++						 "dummy", "sys_pll1_80m", };
+ static const char * const imx8mn_clko2_sels[] = {"osc_24m", "sys_pll2_200m", "sys_pll1_400m",
+ 						 "sys_pll2_166m", "sys_pll3_out", "audio_pll1_out",
+ 						 "video_pll1_out", "osc_32k", };
+diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c
+index 0a68af6eec3dd..d42551a46ec91 100644
+--- a/drivers/clk/meson/gxbb.c
++++ b/drivers/clk/meson/gxbb.c
+@@ -712,6 +712,35 @@ static struct clk_regmap gxbb_mpll_prediv = {
+ };
+ 
+ static struct clk_regmap gxbb_mpll0_div = {
++	.data = &(struct meson_clk_mpll_data){
++		.sdm = {
++			.reg_off = HHI_MPLL_CNTL7,
++			.shift   = 0,
++			.width   = 14,
++		},
++		.sdm_en = {
++			.reg_off = HHI_MPLL_CNTL,
++			.shift   = 25,
++			.width	 = 1,
++		},
++		.n2 = {
++			.reg_off = HHI_MPLL_CNTL7,
++			.shift   = 16,
++			.width   = 9,
++		},
++		.lock = &meson_clk_lock,
++	},
++	.hw.init = &(struct clk_init_data){
++		.name = "mpll0_div",
++		.ops = &meson_clk_mpll_ops,
++		.parent_hws = (const struct clk_hw *[]) {
++			&gxbb_mpll_prediv.hw
++		},
++		.num_parents = 1,
++	},
++};
++
++static struct clk_regmap gxl_mpll0_div = {
+ 	.data = &(struct meson_clk_mpll_data){
+ 		.sdm = {
+ 			.reg_off = HHI_MPLL_CNTL7,
+@@ -748,7 +777,16 @@ static struct clk_regmap gxbb_mpll0 = {
+ 	.hw.init = &(struct clk_init_data){
+ 		.name = "mpll0",
+ 		.ops = &clk_regmap_gate_ops,
+-		.parent_hws = (const struct clk_hw *[]) { &gxbb_mpll0_div.hw },
++		.parent_data = &(const struct clk_parent_data) {
++			/*
++			 * Note:
++			 * GXL and GXBB have different SDM_EN registers. We
++			 * fallback to the global naming string mechanism so
++			 * mpll0_div picks up the appropriate one.
++			 */
++			.name = "mpll0_div",
++			.index = -1,
++		},
+ 		.num_parents = 1,
+ 		.flags = CLK_SET_RATE_PARENT,
+ 	},
+@@ -3043,7 +3081,7 @@ static struct clk_hw_onecell_data gxl_hw_onecell_data = {
+ 		[CLKID_VAPB_1]		    = &gxbb_vapb_1.hw,
+ 		[CLKID_VAPB_SEL]	    = &gxbb_vapb_sel.hw,
+ 		[CLKID_VAPB]		    = &gxbb_vapb.hw,
+-		[CLKID_MPLL0_DIV]	    = &gxbb_mpll0_div.hw,
++		[CLKID_MPLL0_DIV]	    = &gxl_mpll0_div.hw,
+ 		[CLKID_MPLL1_DIV]	    = &gxbb_mpll1_div.hw,
+ 		[CLKID_MPLL2_DIV]	    = &gxbb_mpll2_div.hw,
+ 		[CLKID_MPLL_PREDIV]	    = &gxbb_mpll_prediv.hw,
+@@ -3438,7 +3476,7 @@ static struct clk_regmap *const gxl_clk_regmaps[] = {
+ 	&gxbb_mpll0,
+ 	&gxbb_mpll1,
+ 	&gxbb_mpll2,
+-	&gxbb_mpll0_div,
++	&gxl_mpll0_div,
+ 	&gxbb_mpll1_div,
+ 	&gxbb_mpll2_div,
+ 	&gxbb_cts_amclk_div,
+diff --git a/drivers/counter/Kconfig b/drivers/counter/Kconfig
+index 2de53ab0dd252..cbdf84200e278 100644
+--- a/drivers/counter/Kconfig
++++ b/drivers/counter/Kconfig
+@@ -41,7 +41,7 @@ config STM32_TIMER_CNT
+ 
+ config STM32_LPTIMER_CNT
+ 	tristate "STM32 LP Timer encoder counter driver"
+-	depends on (MFD_STM32_LPTIMER || COMPILE_TEST) && IIO
++	depends on MFD_STM32_LPTIMER || COMPILE_TEST
+ 	help
+ 	  Select this option to enable STM32 Low-Power Timer quadrature encoder
+ 	  and counter driver.
+diff --git a/drivers/counter/stm32-lptimer-cnt.c b/drivers/counter/stm32-lptimer-cnt.c
+index fd6828e2d34f5..937439635d53f 100644
+--- a/drivers/counter/stm32-lptimer-cnt.c
++++ b/drivers/counter/stm32-lptimer-cnt.c
+@@ -12,8 +12,8 @@
+ 
+ #include <linux/bitfield.h>
+ #include <linux/counter.h>
+-#include <linux/iio/iio.h>
+ #include <linux/mfd/stm32-lptimer.h>
++#include <linux/mod_devicetable.h>
+ #include <linux/module.h>
+ #include <linux/pinctrl/consumer.h>
+ #include <linux/platform_device.h>
+@@ -107,249 +107,27 @@ static int stm32_lptim_setup(struct stm32_lptim_cnt *priv, int enable)
+ 	return regmap_update_bits(priv->regmap, STM32_LPTIM_CFGR, mask, val);
+ }
+ 
+-static int stm32_lptim_write_raw(struct iio_dev *indio_dev,
+-				 struct iio_chan_spec const *chan,
+-				 int val, int val2, long mask)
+-{
+-	struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
+-	int ret;
+-
+-	switch (mask) {
+-	case IIO_CHAN_INFO_ENABLE:
+-		if (val < 0 || val > 1)
+-			return -EINVAL;
+-
+-		/* Check nobody uses the timer, or already disabled/enabled */
+-		ret = stm32_lptim_is_enabled(priv);
+-		if ((ret < 0) || (!ret && !val))
+-			return ret;
+-		if (val && ret)
+-			return -EBUSY;
+-
+-		ret = stm32_lptim_setup(priv, val);
+-		if (ret)
+-			return ret;
+-		return stm32_lptim_set_enable_state(priv, val);
+-
+-	default:
+-		return -EINVAL;
+-	}
+-}
+-
+-static int stm32_lptim_read_raw(struct iio_dev *indio_dev,
+-				struct iio_chan_spec const *chan,
+-				int *val, int *val2, long mask)
+-{
+-	struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
+-	u32 dat;
+-	int ret;
+-
+-	switch (mask) {
+-	case IIO_CHAN_INFO_RAW:
+-		ret = regmap_read(priv->regmap, STM32_LPTIM_CNT, &dat);
+-		if (ret)
+-			return ret;
+-		*val = dat;
+-		return IIO_VAL_INT;
+-
+-	case IIO_CHAN_INFO_ENABLE:
+-		ret = stm32_lptim_is_enabled(priv);
+-		if (ret < 0)
+-			return ret;
+-		*val = ret;
+-		return IIO_VAL_INT;
+-
+-	case IIO_CHAN_INFO_SCALE:
+-		/* Non-quadrature mode: scale = 1 */
+-		*val = 1;
+-		*val2 = 0;
+-		if (priv->quadrature_mode) {
+-			/*
+-			 * Quadrature encoder mode:
+-			 * - both edges, quarter cycle, scale is 0.25
+-			 * - either rising/falling edge scale is 0.5
+-			 */
+-			if (priv->polarity > 1)
+-				*val2 = 2;
+-			else
+-				*val2 = 1;
+-		}
+-		return IIO_VAL_FRACTIONAL_LOG2;
+-
+-	default:
+-		return -EINVAL;
+-	}
+-}
+-
+-static const struct iio_info stm32_lptim_cnt_iio_info = {
+-	.read_raw = stm32_lptim_read_raw,
+-	.write_raw = stm32_lptim_write_raw,
+-};
+-
+-static const char *const stm32_lptim_quadrature_modes[] = {
+-	"non-quadrature",
+-	"quadrature",
+-};
+-
+-static int stm32_lptim_get_quadrature_mode(struct iio_dev *indio_dev,
+-					   const struct iio_chan_spec *chan)
+-{
+-	struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
+-
+-	return priv->quadrature_mode;
+-}
+-
+-static int stm32_lptim_set_quadrature_mode(struct iio_dev *indio_dev,
+-					   const struct iio_chan_spec *chan,
+-					   unsigned int type)
+-{
+-	struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
+-
+-	if (stm32_lptim_is_enabled(priv))
+-		return -EBUSY;
+-
+-	priv->quadrature_mode = type;
+-
+-	return 0;
+-}
+-
+-static const struct iio_enum stm32_lptim_quadrature_mode_en = {
+-	.items = stm32_lptim_quadrature_modes,
+-	.num_items = ARRAY_SIZE(stm32_lptim_quadrature_modes),
+-	.get = stm32_lptim_get_quadrature_mode,
+-	.set = stm32_lptim_set_quadrature_mode,
+-};
+-
+-static const char * const stm32_lptim_cnt_polarity[] = {
+-	"rising-edge", "falling-edge", "both-edges",
+-};
+-
+-static int stm32_lptim_cnt_get_polarity(struct iio_dev *indio_dev,
+-					const struct iio_chan_spec *chan)
+-{
+-	struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
+-
+-	return priv->polarity;
+-}
+-
+-static int stm32_lptim_cnt_set_polarity(struct iio_dev *indio_dev,
+-					const struct iio_chan_spec *chan,
+-					unsigned int type)
+-{
+-	struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
+-
+-	if (stm32_lptim_is_enabled(priv))
+-		return -EBUSY;
+-
+-	priv->polarity = type;
+-
+-	return 0;
+-}
+-
+-static const struct iio_enum stm32_lptim_cnt_polarity_en = {
+-	.items = stm32_lptim_cnt_polarity,
+-	.num_items = ARRAY_SIZE(stm32_lptim_cnt_polarity),
+-	.get = stm32_lptim_cnt_get_polarity,
+-	.set = stm32_lptim_cnt_set_polarity,
+-};
+-
+-static ssize_t stm32_lptim_cnt_get_ceiling(struct stm32_lptim_cnt *priv,
+-					   char *buf)
+-{
+-	return snprintf(buf, PAGE_SIZE, "%u\n", priv->ceiling);
+-}
+-
+-static ssize_t stm32_lptim_cnt_set_ceiling(struct stm32_lptim_cnt *priv,
+-					   const char *buf, size_t len)
+-{
+-	int ret;
+-
+-	if (stm32_lptim_is_enabled(priv))
+-		return -EBUSY;
+-
+-	ret = kstrtouint(buf, 0, &priv->ceiling);
+-	if (ret)
+-		return ret;
+-
+-	if (priv->ceiling > STM32_LPTIM_MAX_ARR)
+-		return -EINVAL;
+-
+-	return len;
+-}
+-
+-static ssize_t stm32_lptim_cnt_get_preset_iio(struct iio_dev *indio_dev,
+-					      uintptr_t private,
+-					      const struct iio_chan_spec *chan,
+-					      char *buf)
+-{
+-	struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
+-
+-	return stm32_lptim_cnt_get_ceiling(priv, buf);
+-}
+-
+-static ssize_t stm32_lptim_cnt_set_preset_iio(struct iio_dev *indio_dev,
+-					      uintptr_t private,
+-					      const struct iio_chan_spec *chan,
+-					      const char *buf, size_t len)
+-{
+-	struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
+-
+-	return stm32_lptim_cnt_set_ceiling(priv, buf, len);
+-}
+-
+-/* LP timer with encoder */
+-static const struct iio_chan_spec_ext_info stm32_lptim_enc_ext_info[] = {
+-	{
+-		.name = "preset",
+-		.shared = IIO_SEPARATE,
+-		.read = stm32_lptim_cnt_get_preset_iio,
+-		.write = stm32_lptim_cnt_set_preset_iio,
+-	},
+-	IIO_ENUM("polarity", IIO_SEPARATE, &stm32_lptim_cnt_polarity_en),
+-	IIO_ENUM_AVAILABLE("polarity", &stm32_lptim_cnt_polarity_en),
+-	IIO_ENUM("quadrature_mode", IIO_SEPARATE,
+-		 &stm32_lptim_quadrature_mode_en),
+-	IIO_ENUM_AVAILABLE("quadrature_mode", &stm32_lptim_quadrature_mode_en),
+-	{}
+-};
+-
+-static const struct iio_chan_spec stm32_lptim_enc_channels = {
+-	.type = IIO_COUNT,
+-	.channel = 0,
+-	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
+-			      BIT(IIO_CHAN_INFO_ENABLE) |
+-			      BIT(IIO_CHAN_INFO_SCALE),
+-	.ext_info = stm32_lptim_enc_ext_info,
+-	.indexed = 1,
+-};
+-
+-/* LP timer without encoder (counter only) */
+-static const struct iio_chan_spec_ext_info stm32_lptim_cnt_ext_info[] = {
+-	{
+-		.name = "preset",
+-		.shared = IIO_SEPARATE,
+-		.read = stm32_lptim_cnt_get_preset_iio,
+-		.write = stm32_lptim_cnt_set_preset_iio,
+-	},
+-	IIO_ENUM("polarity", IIO_SEPARATE, &stm32_lptim_cnt_polarity_en),
+-	IIO_ENUM_AVAILABLE("polarity", &stm32_lptim_cnt_polarity_en),
+-	{}
+-};
+-
+-static const struct iio_chan_spec stm32_lptim_cnt_channels = {
+-	.type = IIO_COUNT,
+-	.channel = 0,
+-	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
+-			      BIT(IIO_CHAN_INFO_ENABLE) |
+-			      BIT(IIO_CHAN_INFO_SCALE),
+-	.ext_info = stm32_lptim_cnt_ext_info,
+-	.indexed = 1,
+-};
+-
+ /**
+  * enum stm32_lptim_cnt_function - enumerates LPTimer counter & encoder modes
+  * @STM32_LPTIM_COUNTER_INCREASE: up count on IN1 rising, falling or both edges
+  * @STM32_LPTIM_ENCODER_BOTH_EDGE: count on both edges (IN1 & IN2 quadrature)
++ *
++ * In non-quadrature mode, device counts up on active edge.
++ * In quadrature mode, encoder counting scenarios are as follows:
++ * +---------+----------+--------------------+--------------------+
++ * | Active  | Level on |      IN1 signal    |     IN2 signal     |
++ * | edge    | opposite +----------+---------+----------+---------+
++ * |         | signal   |  Rising  | Falling |  Rising  | Falling |
++ * +---------+----------+----------+---------+----------+---------+
++ * | Rising  | High ->  |   Down   |    -    |   Up     |    -    |
++ * | edge    | Low  ->  |   Up     |    -    |   Down   |    -    |
++ * +---------+----------+----------+---------+----------+---------+
++ * | Falling | High ->  |    -     |   Up    |    -     |   Down  |
++ * | edge    | Low  ->  |    -     |   Down  |    -     |   Up    |
++ * +---------+----------+----------+---------+----------+---------+
++ * | Both    | High ->  |   Down   |   Up    |   Up     |   Down  |
++ * | edges   | Low  ->  |   Up     |   Down  |   Down   |   Up    |
++ * +---------+----------+----------+---------+----------+---------+
+  */
+ enum stm32_lptim_cnt_function {
+ 	STM32_LPTIM_COUNTER_INCREASE,
+@@ -484,7 +262,7 @@ static ssize_t stm32_lptim_cnt_ceiling_read(struct counter_device *counter,
+ {
+ 	struct stm32_lptim_cnt *const priv = counter->priv;
+ 
+-	return stm32_lptim_cnt_get_ceiling(priv, buf);
++	return snprintf(buf, PAGE_SIZE, "%u\n", priv->ceiling);
+ }
+ 
+ static ssize_t stm32_lptim_cnt_ceiling_write(struct counter_device *counter,
+@@ -493,8 +271,22 @@ static ssize_t stm32_lptim_cnt_ceiling_write(struct counter_device *counter,
+ 					     const char *buf, size_t len)
+ {
+ 	struct stm32_lptim_cnt *const priv = counter->priv;
++	unsigned int ceiling;
++	int ret;
++
++	if (stm32_lptim_is_enabled(priv))
++		return -EBUSY;
++
++	ret = kstrtouint(buf, 0, &ceiling);
++	if (ret)
++		return ret;
++
++	if (ceiling > STM32_LPTIM_MAX_ARR)
++		return -EINVAL;
++
++	priv->ceiling = ceiling;
+ 
+-	return stm32_lptim_cnt_set_ceiling(priv, buf, len);
++	return len;
+ }
+ 
+ static const struct counter_count_ext stm32_lptim_cnt_ext[] = {
+@@ -630,32 +422,19 @@ static int stm32_lptim_cnt_probe(struct platform_device *pdev)
+ {
+ 	struct stm32_lptimer *ddata = dev_get_drvdata(pdev->dev.parent);
+ 	struct stm32_lptim_cnt *priv;
+-	struct iio_dev *indio_dev;
+-	int ret;
+ 
+ 	if (IS_ERR_OR_NULL(ddata))
+ 		return -EINVAL;
+ 
+-	indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*priv));
+-	if (!indio_dev)
++	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
++	if (!priv)
+ 		return -ENOMEM;
+ 
+-	priv = iio_priv(indio_dev);
+ 	priv->dev = &pdev->dev;
+ 	priv->regmap = ddata->regmap;
+ 	priv->clk = ddata->clk;
+ 	priv->ceiling = STM32_LPTIM_MAX_ARR;
+ 
+-	/* Initialize IIO device */
+-	indio_dev->name = dev_name(&pdev->dev);
+-	indio_dev->dev.of_node = pdev->dev.of_node;
+-	indio_dev->info = &stm32_lptim_cnt_iio_info;
+-	if (ddata->has_encoder)
+-		indio_dev->channels = &stm32_lptim_enc_channels;
+-	else
+-		indio_dev->channels = &stm32_lptim_cnt_channels;
+-	indio_dev->num_channels = 1;
+-
+ 	/* Initialize Counter device */
+ 	priv->counter.name = dev_name(&pdev->dev);
+ 	priv->counter.parent = &pdev->dev;
+@@ -673,10 +452,6 @@ static int stm32_lptim_cnt_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, priv);
+ 
+-	ret = devm_iio_device_register(&pdev->dev, indio_dev);
+-	if (ret)
+-		return ret;
+-
+ 	return devm_counter_register(&pdev->dev, &priv->counter);
+ }
+ 
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 8e159fb6af9cd..30dafe8fc5054 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1400,7 +1400,7 @@ static int cpufreq_online(unsigned int cpu)
+ 
+ 		ret = freq_qos_add_request(&policy->constraints,
+ 					   policy->min_freq_req, FREQ_QOS_MIN,
+-					   policy->min);
++					   FREQ_QOS_MIN_DEFAULT_VALUE);
+ 		if (ret < 0) {
+ 			/*
+ 			 * So we don't call freq_qos_remove_request() for an
+@@ -1420,7 +1420,7 @@ static int cpufreq_online(unsigned int cpu)
+ 
+ 		ret = freq_qos_add_request(&policy->constraints,
+ 					   policy->max_freq_req, FREQ_QOS_MAX,
+-					   policy->max);
++					   FREQ_QOS_MAX_DEFAULT_VALUE);
+ 		if (ret < 0) {
+ 			policy->max_freq_req = NULL;
+ 			goto out_destroy_policy;
+diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
+index a780e627838ae..5a40c7d10cc9a 100644
+--- a/drivers/crypto/caam/caamalg_qi2.c
++++ b/drivers/crypto/caam/caamalg_qi2.c
+@@ -5467,7 +5467,7 @@ int dpaa2_caam_enqueue(struct device *dev, struct caam_request *req)
+ 	dpaa2_fd_set_len(&fd, dpaa2_fl_get_len(&req->fd_flt[1]));
+ 	dpaa2_fd_set_flc(&fd, req->flc_dma);
+ 
+-	ppriv = this_cpu_ptr(priv->ppriv);
++	ppriv = raw_cpu_ptr(priv->ppriv);
+ 	for (i = 0; i < (priv->dpseci_attr.num_tx_queues << 1); i++) {
+ 		err = dpaa2_io_service_enqueue_fq(ppriv->dpio, ppriv->req_fqid,
+ 						  &fd);
+diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
+index 9b968ac4ee7b6..a196bb8b17010 100644
+--- a/drivers/crypto/omap-aes.c
++++ b/drivers/crypto/omap-aes.c
+@@ -1302,7 +1302,7 @@ static int omap_aes_suspend(struct device *dev)
+ 
+ static int omap_aes_resume(struct device *dev)
+ {
+-	pm_runtime_resume_and_get(dev);
++	pm_runtime_get_sync(dev);
+ 	return 0;
+ }
+ #endif
+diff --git a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+index d7ca222f0df18..74afafc84c716 100644
+--- a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
++++ b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+@@ -111,37 +111,19 @@ static int __adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
+ 
+ 	mutex_lock(lock);
+ 
+-	/* Check if PF2VF CSR is in use by remote function */
++	/* Check if the PFVF CSR is in use by remote function */
+ 	val = ADF_CSR_RD(pmisc_bar_addr, pf2vf_offset);
+ 	if ((val & remote_in_use_mask) == remote_in_use_pattern) {
+ 		dev_dbg(&GET_DEV(accel_dev),
+-			"PF2VF CSR in use by remote function\n");
++			"PFVF CSR in use by remote function\n");
+ 		ret = -EBUSY;
+ 		goto out;
+ 	}
+ 
+-	/* Attempt to get ownership of PF2VF CSR */
+ 	msg &= ~local_in_use_mask;
+ 	msg |= local_in_use_pattern;
+-	ADF_CSR_WR(pmisc_bar_addr, pf2vf_offset, msg);
+ 
+-	/* Wait in case remote func also attempting to get ownership */
+-	msleep(ADF_IOV_MSG_COLLISION_DETECT_DELAY);
+-
+-	val = ADF_CSR_RD(pmisc_bar_addr, pf2vf_offset);
+-	if ((val & local_in_use_mask) != local_in_use_pattern) {
+-		dev_dbg(&GET_DEV(accel_dev),
+-			"PF2VF CSR in use by remote - collision detected\n");
+-		ret = -EBUSY;
+-		goto out;
+-	}
+-
+-	/*
+-	 * This function now owns the PV2VF CSR.  The IN_USE_BY pattern must
+-	 * remain in the PF2VF CSR for all writes including ACK from remote
+-	 * until this local function relinquishes the CSR.  Send the message
+-	 * by interrupting the remote.
+-	 */
++	/* Attempt to get ownership of the PFVF CSR */
+ 	ADF_CSR_WR(pmisc_bar_addr, pf2vf_offset, msg | int_bit);
+ 
+ 	/* Wait for confirmation from remote func it received the message */
+@@ -150,6 +132,12 @@ static int __adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
+ 		val = ADF_CSR_RD(pmisc_bar_addr, pf2vf_offset);
+ 	} while ((val & int_bit) && (count++ < ADF_IOV_MSG_ACK_MAX_RETRY));
+ 
++	if (val & int_bit) {
++		dev_dbg(&GET_DEV(accel_dev), "ACK not received from remote\n");
++		val &= ~int_bit;
++		ret = -EIO;
++	}
++
+ 	if (val != msg) {
+ 		dev_dbg(&GET_DEV(accel_dev),
+ 			"Collision - PFVF CSR overwritten by remote function\n");
+@@ -157,13 +145,7 @@ static int __adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
+ 		goto out;
+ 	}
+ 
+-	if (val & int_bit) {
+-		dev_dbg(&GET_DEV(accel_dev), "ACK not received from remote\n");
+-		val &= ~int_bit;
+-		ret = -EIO;
+-	}
+-
+-	/* Finished with PF2VF CSR; relinquish it and leave msg in CSR */
++	/* Finished with the PFVF CSR; relinquish it and leave msg in CSR */
+ 	ADF_CSR_WR(pmisc_bar_addr, pf2vf_offset, val & ~local_in_use_mask);
+ out:
+ 	mutex_unlock(lock);
+@@ -171,12 +153,13 @@ out:
+ }
+ 
+ /**
+- * adf_iov_putmsg() - send PF2VF message
++ * adf_iov_putmsg() - send PFVF message
+  * @accel_dev:  Pointer to acceleration device.
+  * @msg:	Message to send
+- * @vf_nr:	VF number to which the message will be sent
++ * @vf_nr:	VF number to which the message will be sent if on PF, ignored
++ *		otherwise
+  *
+- * Function sends a messge from the PF to a VF
++ * Function sends a message through the PFVF channel
+  *
+  * Return: 0 on success, error code otherwise.
+  */
+diff --git a/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c b/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
+index 54b738da829d8..3e25fac051b25 100644
+--- a/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
++++ b/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
+@@ -8,7 +8,7 @@
+  * adf_vf2pf_notify_init() - send init msg to PF
+  * @accel_dev:  Pointer to acceleration VF device.
+  *
+- * Function sends an init messge from the VF to a PF
++ * Function sends an init message from the VF to a PF
+  *
+  * Return: 0 on success, error code otherwise.
+  */
+@@ -31,7 +31,7 @@ EXPORT_SYMBOL_GPL(adf_vf2pf_notify_init);
+  * adf_vf2pf_notify_shutdown() - send shutdown msg to PF
+  * @accel_dev:  Pointer to acceleration VF device.
+  *
+- * Function sends a shutdown messge from the VF to a PF
++ * Function sends a shutdown message from the VF to a PF
+  *
+  * Return: void
+  */
+diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
+index 87be96a0b0bba..8b4e79d882af4 100644
+--- a/drivers/crypto/qce/sha.c
++++ b/drivers/crypto/qce/sha.c
+@@ -533,8 +533,8 @@ static int qce_ahash_register_one(const struct qce_ahash_def *def,
+ 
+ 	ret = crypto_register_ahash(alg);
+ 	if (ret) {
+-		kfree(tmpl);
+ 		dev_err(qce->dev, "%s registration failed\n", base->cra_name);
++		kfree(tmpl);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
+index d8053789c8828..89c7fc3efbd71 100644
+--- a/drivers/crypto/qce/skcipher.c
++++ b/drivers/crypto/qce/skcipher.c
+@@ -433,8 +433,8 @@ static int qce_skcipher_register_one(const struct qce_skcipher_def *def,
+ 
+ 	ret = crypto_register_skcipher(alg);
+ 	if (ret) {
+-		kfree(tmpl);
+ 		dev_err(qce->dev, "%s registration failed\n", alg->base.cra_name);
++		kfree(tmpl);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/crypto/stm32/stm32-crc32.c b/drivers/crypto/stm32/stm32-crc32.c
+index 75867c0b00172..be1bf39a317de 100644
+--- a/drivers/crypto/stm32/stm32-crc32.c
++++ b/drivers/crypto/stm32/stm32-crc32.c
+@@ -279,7 +279,7 @@ static struct shash_alg algs[] = {
+ 		.digestsize     = CHKSUM_DIGEST_SIZE,
+ 		.base           = {
+ 			.cra_name               = "crc32",
+-			.cra_driver_name        = DRIVER_NAME,
++			.cra_driver_name        = "stm32-crc32-crc32",
+ 			.cra_priority           = 200,
+ 			.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
+ 			.cra_blocksize          = CHKSUM_BLOCK_SIZE,
+@@ -301,7 +301,7 @@ static struct shash_alg algs[] = {
+ 		.digestsize     = CHKSUM_DIGEST_SIZE,
+ 		.base           = {
+ 			.cra_name               = "crc32c",
+-			.cra_driver_name        = DRIVER_NAME,
++			.cra_driver_name        = "stm32-crc32-crc32c",
+ 			.cra_priority           = 200,
+ 			.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
+ 			.cra_blocksize          = CHKSUM_BLOCK_SIZE,
+diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
+index 7999b26a16ed0..81eb136b6c11d 100644
+--- a/drivers/crypto/stm32/stm32-cryp.c
++++ b/drivers/crypto/stm32/stm32-cryp.c
+@@ -37,7 +37,6 @@
+ /* Mode mask = bits [15..0] */
+ #define FLG_MODE_MASK           GENMASK(15, 0)
+ /* Bit [31..16] status  */
+-#define FLG_CCM_PADDED_WA       BIT(16)
+ 
+ /* Registers */
+ #define CRYP_CR                 0x00000000
+@@ -105,8 +104,6 @@
+ /* Misc */
+ #define AES_BLOCK_32            (AES_BLOCK_SIZE / sizeof(u32))
+ #define GCM_CTR_INIT            2
+-#define _walked_in              (cryp->in_walk.offset - cryp->in_sg->offset)
+-#define _walked_out             (cryp->out_walk.offset - cryp->out_sg->offset)
+ #define CRYP_AUTOSUSPEND_DELAY	50
+ 
+ struct stm32_cryp_caps {
+@@ -144,26 +141,16 @@ struct stm32_cryp {
+ 	size_t                  authsize;
+ 	size_t                  hw_blocksize;
+ 
+-	size_t                  total_in;
+-	size_t                  total_in_save;
+-	size_t                  total_out;
+-	size_t                  total_out_save;
++	size_t                  payload_in;
++	size_t                  header_in;
++	size_t                  payload_out;
+ 
+-	struct scatterlist      *in_sg;
+ 	struct scatterlist      *out_sg;
+-	struct scatterlist      *out_sg_save;
+-
+-	struct scatterlist      in_sgl;
+-	struct scatterlist      out_sgl;
+-	bool                    sgs_copied;
+-
+-	int                     in_sg_len;
+-	int                     out_sg_len;
+ 
+ 	struct scatter_walk     in_walk;
+ 	struct scatter_walk     out_walk;
+ 
+-	u32                     last_ctr[4];
++	__be32                  last_ctr[4];
+ 	u32                     gcm_ctr;
+ };
+ 
+@@ -262,6 +249,7 @@ static inline int stm32_cryp_wait_output(struct stm32_cryp *cryp)
+ }
+ 
+ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp);
++static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err);
+ 
+ static struct stm32_cryp *stm32_cryp_find_dev(struct stm32_cryp_ctx *ctx)
+ {
+@@ -283,103 +271,6 @@ static struct stm32_cryp *stm32_cryp_find_dev(struct stm32_cryp_ctx *ctx)
+ 	return cryp;
+ }
+ 
+-static int stm32_cryp_check_aligned(struct scatterlist *sg, size_t total,
+-				    size_t align)
+-{
+-	int len = 0;
+-
+-	if (!total)
+-		return 0;
+-
+-	if (!IS_ALIGNED(total, align))
+-		return -EINVAL;
+-
+-	while (sg) {
+-		if (!IS_ALIGNED(sg->offset, sizeof(u32)))
+-			return -EINVAL;
+-
+-		if (!IS_ALIGNED(sg->length, align))
+-			return -EINVAL;
+-
+-		len += sg->length;
+-		sg = sg_next(sg);
+-	}
+-
+-	if (len != total)
+-		return -EINVAL;
+-
+-	return 0;
+-}
+-
+-static int stm32_cryp_check_io_aligned(struct stm32_cryp *cryp)
+-{
+-	int ret;
+-
+-	ret = stm32_cryp_check_aligned(cryp->in_sg, cryp->total_in,
+-				       cryp->hw_blocksize);
+-	if (ret)
+-		return ret;
+-
+-	ret = stm32_cryp_check_aligned(cryp->out_sg, cryp->total_out,
+-				       cryp->hw_blocksize);
+-
+-	return ret;
+-}
+-
+-static void sg_copy_buf(void *buf, struct scatterlist *sg,
+-			unsigned int start, unsigned int nbytes, int out)
+-{
+-	struct scatter_walk walk;
+-
+-	if (!nbytes)
+-		return;
+-
+-	scatterwalk_start(&walk, sg);
+-	scatterwalk_advance(&walk, start);
+-	scatterwalk_copychunks(buf, &walk, nbytes, out);
+-	scatterwalk_done(&walk, out, 0);
+-}
+-
+-static int stm32_cryp_copy_sgs(struct stm32_cryp *cryp)
+-{
+-	void *buf_in, *buf_out;
+-	int pages, total_in, total_out;
+-
+-	if (!stm32_cryp_check_io_aligned(cryp)) {
+-		cryp->sgs_copied = 0;
+-		return 0;
+-	}
+-
+-	total_in = ALIGN(cryp->total_in, cryp->hw_blocksize);
+-	pages = total_in ? get_order(total_in) : 1;
+-	buf_in = (void *)__get_free_pages(GFP_ATOMIC, pages);
+-
+-	total_out = ALIGN(cryp->total_out, cryp->hw_blocksize);
+-	pages = total_out ? get_order(total_out) : 1;
+-	buf_out = (void *)__get_free_pages(GFP_ATOMIC, pages);
+-
+-	if (!buf_in || !buf_out) {
+-		dev_err(cryp->dev, "Can't allocate pages when unaligned\n");
+-		cryp->sgs_copied = 0;
+-		return -EFAULT;
+-	}
+-
+-	sg_copy_buf(buf_in, cryp->in_sg, 0, cryp->total_in, 0);
+-
+-	sg_init_one(&cryp->in_sgl, buf_in, total_in);
+-	cryp->in_sg = &cryp->in_sgl;
+-	cryp->in_sg_len = 1;
+-
+-	sg_init_one(&cryp->out_sgl, buf_out, total_out);
+-	cryp->out_sg_save = cryp->out_sg;
+-	cryp->out_sg = &cryp->out_sgl;
+-	cryp->out_sg_len = 1;
+-
+-	cryp->sgs_copied = 1;
+-
+-	return 0;
+-}
+-
+ static void stm32_cryp_hw_write_iv(struct stm32_cryp *cryp, __be32 *iv)
+ {
+ 	if (!iv)
+@@ -481,16 +372,99 @@ static int stm32_cryp_gcm_init(struct stm32_cryp *cryp, u32 cfg)
+ 
+ 	/* Wait for end of processing */
+ 	ret = stm32_cryp_wait_enable(cryp);
+-	if (ret)
++	if (ret) {
+ 		dev_err(cryp->dev, "Timeout (gcm init)\n");
++		return ret;
++	}
+ 
+-	return ret;
++	/* Prepare next phase */
++	if (cryp->areq->assoclen) {
++		cfg |= CR_PH_HEADER;
++		stm32_cryp_write(cryp, CRYP_CR, cfg);
++	} else if (stm32_cryp_get_input_text_len(cryp)) {
++		cfg |= CR_PH_PAYLOAD;
++		stm32_cryp_write(cryp, CRYP_CR, cfg);
++	}
++
++	return 0;
++}
++
++static void stm32_crypt_gcmccm_end_header(struct stm32_cryp *cryp)
++{
++	u32 cfg;
++	int err;
++
++	/* Check if whole header written */
++	if (!cryp->header_in) {
++		/* Wait for completion */
++		err = stm32_cryp_wait_busy(cryp);
++		if (err) {
++			dev_err(cryp->dev, "Timeout (gcm/ccm header)\n");
++			stm32_cryp_write(cryp, CRYP_IMSCR, 0);
++			stm32_cryp_finish_req(cryp, err);
++			return;
++		}
++
++		if (stm32_cryp_get_input_text_len(cryp)) {
++			/* Phase 3 : payload */
++			cfg = stm32_cryp_read(cryp, CRYP_CR);
++			cfg &= ~CR_CRYPEN;
++			stm32_cryp_write(cryp, CRYP_CR, cfg);
++
++			cfg &= ~CR_PH_MASK;
++			cfg |= CR_PH_PAYLOAD | CR_CRYPEN;
++			stm32_cryp_write(cryp, CRYP_CR, cfg);
++		} else {
++			/*
++			 * Phase 4 : tag.
++			 * Nothing to read, nothing to write, caller have to
++			 * end request
++			 */
++		}
++	}
++}
++
++static void stm32_cryp_write_ccm_first_header(struct stm32_cryp *cryp)
++{
++	unsigned int i;
++	size_t written;
++	size_t len;
++	u32 alen = cryp->areq->assoclen;
++	u32 block[AES_BLOCK_32] = {0};
++	u8 *b8 = (u8 *)block;
++
++	if (alen <= 65280) {
++		/* Write first u32 of B1 */
++		b8[0] = (alen >> 8) & 0xFF;
++		b8[1] = alen & 0xFF;
++		len = 2;
++	} else {
++		/* Build the two first u32 of B1 */
++		b8[0] = 0xFF;
++		b8[1] = 0xFE;
++		b8[2] = (alen & 0xFF000000) >> 24;
++		b8[3] = (alen & 0x00FF0000) >> 16;
++		b8[4] = (alen & 0x0000FF00) >> 8;
++		b8[5] = alen & 0x000000FF;
++		len = 6;
++	}
++
++	written = min_t(size_t, AES_BLOCK_SIZE - len, alen);
++
++	scatterwalk_copychunks((char *)block + len, &cryp->in_walk, written, 0);
++	for (i = 0; i < AES_BLOCK_32; i++)
++		stm32_cryp_write(cryp, CRYP_DIN, block[i]);
++
++	cryp->header_in -= written;
++
++	stm32_crypt_gcmccm_end_header(cryp);
+ }
+ 
+ static int stm32_cryp_ccm_init(struct stm32_cryp *cryp, u32 cfg)
+ {
+ 	int ret;
+-	u8 iv[AES_BLOCK_SIZE], b0[AES_BLOCK_SIZE];
++	u32 iv_32[AES_BLOCK_32], b0_32[AES_BLOCK_32];
++	u8 *iv = (u8 *)iv_32, *b0 = (u8 *)b0_32;
+ 	__be32 *bd;
+ 	u32 *d;
+ 	unsigned int i, textlen;
+@@ -531,10 +505,24 @@ static int stm32_cryp_ccm_init(struct stm32_cryp *cryp, u32 cfg)
+ 
+ 	/* Wait for end of processing */
+ 	ret = stm32_cryp_wait_enable(cryp);
+-	if (ret)
++	if (ret) {
+ 		dev_err(cryp->dev, "Timeout (ccm init)\n");
++		return ret;
++	}
+ 
+-	return ret;
++	/* Prepare next phase */
++	if (cryp->areq->assoclen) {
++		cfg |= CR_PH_HEADER | CR_CRYPEN;
++		stm32_cryp_write(cryp, CRYP_CR, cfg);
++
++		/* Write first (special) block (may move to next phase [payload]) */
++		stm32_cryp_write_ccm_first_header(cryp);
++	} else if (stm32_cryp_get_input_text_len(cryp)) {
++		cfg |= CR_PH_PAYLOAD;
++		stm32_cryp_write(cryp, CRYP_CR, cfg);
++	}
++
++	return 0;
+ }
+ 
+ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
+@@ -542,7 +530,7 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
+ 	int ret;
+ 	u32 cfg, hw_mode;
+ 
+-	pm_runtime_resume_and_get(cryp->dev);
++	pm_runtime_get_sync(cryp->dev);
+ 
+ 	/* Disable interrupt */
+ 	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+@@ -605,16 +593,6 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
+ 		if (ret)
+ 			return ret;
+ 
+-		/* Phase 2 : header (authenticated data) */
+-		if (cryp->areq->assoclen) {
+-			cfg |= CR_PH_HEADER;
+-		} else if (stm32_cryp_get_input_text_len(cryp)) {
+-			cfg |= CR_PH_PAYLOAD;
+-			stm32_cryp_write(cryp, CRYP_CR, cfg);
+-		} else {
+-			cfg |= CR_PH_INIT;
+-		}
+-
+ 		break;
+ 
+ 	case CR_DES_CBC:
+@@ -633,8 +611,6 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
+ 
+ 	stm32_cryp_write(cryp, CRYP_CR, cfg);
+ 
+-	cryp->flags &= ~FLG_CCM_PADDED_WA;
+-
+ 	return 0;
+ }
+ 
+@@ -644,28 +620,9 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err)
+ 		/* Phase 4 : output tag */
+ 		err = stm32_cryp_read_auth_tag(cryp);
+ 
+-	if (!err && (!(is_gcm(cryp) || is_ccm(cryp))))
++	if (!err && (!(is_gcm(cryp) || is_ccm(cryp) || is_ecb(cryp))))
+ 		stm32_cryp_get_iv(cryp);
+ 
+-	if (cryp->sgs_copied) {
+-		void *buf_in, *buf_out;
+-		int pages, len;
+-
+-		buf_in = sg_virt(&cryp->in_sgl);
+-		buf_out = sg_virt(&cryp->out_sgl);
+-
+-		sg_copy_buf(buf_out, cryp->out_sg_save, 0,
+-			    cryp->total_out_save, 1);
+-
+-		len = ALIGN(cryp->total_in_save, cryp->hw_blocksize);
+-		pages = len ? get_order(len) : 1;
+-		free_pages((unsigned long)buf_in, pages);
+-
+-		len = ALIGN(cryp->total_out_save, cryp->hw_blocksize);
+-		pages = len ? get_order(len) : 1;
+-		free_pages((unsigned long)buf_out, pages);
+-	}
+-
+ 	pm_runtime_mark_last_busy(cryp->dev);
+ 	pm_runtime_put_autosuspend(cryp->dev);
+ 
+@@ -674,8 +631,6 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err)
+ 	else
+ 		crypto_finalize_skcipher_request(cryp->engine, cryp->req,
+ 						   err);
+-
+-	memset(cryp->ctx->key, 0, cryp->ctx->keylen);
+ }
+ 
+ static int stm32_cryp_cpu_start(struct stm32_cryp *cryp)
+@@ -801,7 +756,20 @@ static int stm32_cryp_aes_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ static int stm32_cryp_aes_gcm_setauthsize(struct crypto_aead *tfm,
+ 					  unsigned int authsize)
+ {
+-	return authsize == AES_BLOCK_SIZE ? 0 : -EINVAL;
++	switch (authsize) {
++	case 4:
++	case 8:
++	case 12:
++	case 13:
++	case 14:
++	case 15:
++	case 16:
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	return 0;
+ }
+ 
+ static int stm32_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm,
+@@ -825,31 +793,61 @@ static int stm32_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm,
+ 
+ static int stm32_cryp_aes_ecb_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % AES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_AES | FLG_ECB | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_aes_ecb_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % AES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_AES | FLG_ECB);
+ }
+ 
+ static int stm32_cryp_aes_cbc_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % AES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_AES | FLG_CBC | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_aes_cbc_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % AES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_AES | FLG_CBC);
+ }
+ 
+ static int stm32_cryp_aes_ctr_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_AES | FLG_CTR | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_aes_ctr_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_AES | FLG_CTR);
+ }
+ 
+@@ -863,53 +861,122 @@ static int stm32_cryp_aes_gcm_decrypt(struct aead_request *req)
+ 	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_GCM);
+ }
+ 
++static inline int crypto_ccm_check_iv(const u8 *iv)
++{
++	/* 2 <= L <= 8, so 1 <= L' <= 7. */
++	if (iv[0] < 1 || iv[0] > 7)
++		return -EINVAL;
++
++	return 0;
++}
++
+ static int stm32_cryp_aes_ccm_encrypt(struct aead_request *req)
+ {
++	int err;
++
++	err = crypto_ccm_check_iv(req->iv);
++	if (err)
++		return err;
++
+ 	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_aes_ccm_decrypt(struct aead_request *req)
+ {
++	int err;
++
++	err = crypto_ccm_check_iv(req->iv);
++	if (err)
++		return err;
++
+ 	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM);
+ }
+ 
+ static int stm32_cryp_des_ecb_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_DES | FLG_ECB | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_des_ecb_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_DES | FLG_ECB);
+ }
+ 
+ static int stm32_cryp_des_cbc_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_DES | FLG_CBC | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_des_cbc_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_DES | FLG_CBC);
+ }
+ 
+ static int stm32_cryp_tdes_ecb_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_tdes_ecb_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB);
+ }
+ 
+ static int stm32_cryp_tdes_cbc_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_tdes_cbc_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC);
+ }
+ 
+@@ -919,6 +986,7 @@ static int stm32_cryp_prepare_req(struct skcipher_request *req,
+ 	struct stm32_cryp_ctx *ctx;
+ 	struct stm32_cryp *cryp;
+ 	struct stm32_cryp_reqctx *rctx;
++	struct scatterlist *in_sg;
+ 	int ret;
+ 
+ 	if (!req && !areq)
+@@ -944,76 +1012,55 @@ static int stm32_cryp_prepare_req(struct skcipher_request *req,
+ 	if (req) {
+ 		cryp->req = req;
+ 		cryp->areq = NULL;
+-		cryp->total_in = req->cryptlen;
+-		cryp->total_out = cryp->total_in;
++		cryp->header_in = 0;
++		cryp->payload_in = req->cryptlen;
++		cryp->payload_out = req->cryptlen;
++		cryp->authsize = 0;
+ 	} else {
+ 		/*
+ 		 * Length of input and output data:
+ 		 * Encryption case:
+-		 *  INPUT  =   AssocData  ||   PlainText
++		 *  INPUT  = AssocData   ||     PlainText
+ 		 *          <- assoclen ->  <- cryptlen ->
+-		 *          <------- total_in ----------->
+ 		 *
+-		 *  OUTPUT =   AssocData  ||  CipherText  ||   AuthTag
+-		 *          <- assoclen ->  <- cryptlen ->  <- authsize ->
+-		 *          <---------------- total_out ----------------->
++		 *  OUTPUT = AssocData    ||   CipherText   ||      AuthTag
++		 *          <- assoclen ->  <-- cryptlen -->  <- authsize ->
+ 		 *
+ 		 * Decryption case:
+-		 *  INPUT  =   AssocData  ||  CipherText  ||  AuthTag
+-		 *          <- assoclen ->  <--------- cryptlen --------->
+-		 *                                          <- authsize ->
+-		 *          <---------------- total_in ------------------>
++		 *  INPUT  =  AssocData     ||    CipherTex   ||       AuthTag
++		 *          <- assoclen --->  <---------- cryptlen ---------->
+ 		 *
+-		 *  OUTPUT =   AssocData  ||   PlainText
+-		 *          <- assoclen ->  <- crypten - authsize ->
+-		 *          <---------- total_out ----------------->
++		 *  OUTPUT = AssocData    ||               PlainText
++		 *          <- assoclen ->  <- cryptlen - authsize ->
+ 		 */
+ 		cryp->areq = areq;
+ 		cryp->req = NULL;
+ 		cryp->authsize = crypto_aead_authsize(crypto_aead_reqtfm(areq));
+-		cryp->total_in = areq->assoclen + areq->cryptlen;
+-		if (is_encrypt(cryp))
+-			/* Append auth tag to output */
+-			cryp->total_out = cryp->total_in + cryp->authsize;
+-		else
+-			/* No auth tag in output */
+-			cryp->total_out = cryp->total_in - cryp->authsize;
++		if (is_encrypt(cryp)) {
++			cryp->payload_in = areq->cryptlen;
++			cryp->header_in = areq->assoclen;
++			cryp->payload_out = areq->cryptlen;
++		} else {
++			cryp->payload_in = areq->cryptlen - cryp->authsize;
++			cryp->header_in = areq->assoclen;
++			cryp->payload_out = cryp->payload_in;
++		}
+ 	}
+ 
+-	cryp->total_in_save = cryp->total_in;
+-	cryp->total_out_save = cryp->total_out;
++	in_sg = req ? req->src : areq->src;
++	scatterwalk_start(&cryp->in_walk, in_sg);
+ 
+-	cryp->in_sg = req ? req->src : areq->src;
+ 	cryp->out_sg = req ? req->dst : areq->dst;
+-	cryp->out_sg_save = cryp->out_sg;
+-
+-	cryp->in_sg_len = sg_nents_for_len(cryp->in_sg, cryp->total_in);
+-	if (cryp->in_sg_len < 0) {
+-		dev_err(cryp->dev, "Cannot get in_sg_len\n");
+-		ret = cryp->in_sg_len;
+-		return ret;
+-	}
+-
+-	cryp->out_sg_len = sg_nents_for_len(cryp->out_sg, cryp->total_out);
+-	if (cryp->out_sg_len < 0) {
+-		dev_err(cryp->dev, "Cannot get out_sg_len\n");
+-		ret = cryp->out_sg_len;
+-		return ret;
+-	}
+-
+-	ret = stm32_cryp_copy_sgs(cryp);
+-	if (ret)
+-		return ret;
+-
+-	scatterwalk_start(&cryp->in_walk, cryp->in_sg);
+ 	scatterwalk_start(&cryp->out_walk, cryp->out_sg);
+ 
+ 	if (is_gcm(cryp) || is_ccm(cryp)) {
+ 		/* In output, jump after assoc data */
+-		scatterwalk_advance(&cryp->out_walk, cryp->areq->assoclen);
+-		cryp->total_out -= cryp->areq->assoclen;
++		scatterwalk_copychunks(NULL, &cryp->out_walk, cryp->areq->assoclen, 2);
+ 	}
+ 
++	if (is_ctr(cryp))
++		memset(cryp->last_ctr, 0, sizeof(cryp->last_ctr));
++
+ 	ret = stm32_cryp_hw_init(cryp);
+ 	return ret;
+ }
+@@ -1061,8 +1108,7 @@ static int stm32_cryp_aead_one_req(struct crypto_engine *engine, void *areq)
+ 	if (!cryp)
+ 		return -ENODEV;
+ 
+-	if (unlikely(!cryp->areq->assoclen &&
+-		     !stm32_cryp_get_input_text_len(cryp))) {
++	if (unlikely(!cryp->payload_in && !cryp->header_in)) {
+ 		/* No input data to process: get tag and finish */
+ 		stm32_cryp_finish_req(cryp, 0);
+ 		return 0;
+@@ -1071,43 +1117,10 @@ static int stm32_cryp_aead_one_req(struct crypto_engine *engine, void *areq)
+ 	return stm32_cryp_cpu_start(cryp);
+ }
+ 
+-static u32 *stm32_cryp_next_out(struct stm32_cryp *cryp, u32 *dst,
+-				unsigned int n)
+-{
+-	scatterwalk_advance(&cryp->out_walk, n);
+-
+-	if (unlikely(cryp->out_sg->length == _walked_out)) {
+-		cryp->out_sg = sg_next(cryp->out_sg);
+-		if (cryp->out_sg) {
+-			scatterwalk_start(&cryp->out_walk, cryp->out_sg);
+-			return (sg_virt(cryp->out_sg) + _walked_out);
+-		}
+-	}
+-
+-	return (u32 *)((u8 *)dst + n);
+-}
+-
+-static u32 *stm32_cryp_next_in(struct stm32_cryp *cryp, u32 *src,
+-			       unsigned int n)
+-{
+-	scatterwalk_advance(&cryp->in_walk, n);
+-
+-	if (unlikely(cryp->in_sg->length == _walked_in)) {
+-		cryp->in_sg = sg_next(cryp->in_sg);
+-		if (cryp->in_sg) {
+-			scatterwalk_start(&cryp->in_walk, cryp->in_sg);
+-			return (sg_virt(cryp->in_sg) + _walked_in);
+-		}
+-	}
+-
+-	return (u32 *)((u8 *)src + n);
+-}
+-
+ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
+ {
+-	u32 cfg, size_bit, *dst, d32;
+-	u8 *d8;
+-	unsigned int i, j;
++	u32 cfg, size_bit;
++	unsigned int i;
+ 	int ret = 0;
+ 
+ 	/* Update Config */
+@@ -1130,7 +1143,7 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
+ 		stm32_cryp_write(cryp, CRYP_DIN, size_bit);
+ 
+ 		size_bit = is_encrypt(cryp) ? cryp->areq->cryptlen :
+-				cryp->areq->cryptlen - AES_BLOCK_SIZE;
++				cryp->areq->cryptlen - cryp->authsize;
+ 		size_bit *= 8;
+ 		if (cryp->caps->swap_final)
+ 			size_bit = (__force u32)cpu_to_be32(size_bit);
+@@ -1139,11 +1152,9 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
+ 		stm32_cryp_write(cryp, CRYP_DIN, size_bit);
+ 	} else {
+ 		/* CCM: write CTR0 */
+-		u8 iv[AES_BLOCK_SIZE];
+-		u32 *iv32 = (u32 *)iv;
+-		__be32 *biv;
+-
+-		biv = (void *)iv;
++		u32 iv32[AES_BLOCK_32];
++		u8 *iv = (u8 *)iv32;
++		__be32 *biv = (__be32 *)iv32;
+ 
+ 		memcpy(iv, cryp->areq->iv, AES_BLOCK_SIZE);
+ 		memset(iv + AES_BLOCK_SIZE - 1 - iv[0], 0, iv[0] + 1);
+@@ -1165,39 +1176,18 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
+ 	}
+ 
+ 	if (is_encrypt(cryp)) {
++		u32 out_tag[AES_BLOCK_32];
++
+ 		/* Get and write tag */
+-		dst = sg_virt(cryp->out_sg) + _walked_out;
++		for (i = 0; i < AES_BLOCK_32; i++)
++			out_tag[i] = stm32_cryp_read(cryp, CRYP_DOUT);
+ 
+-		for (i = 0; i < AES_BLOCK_32; i++) {
+-			if (cryp->total_out >= sizeof(u32)) {
+-				/* Read a full u32 */
+-				*dst = stm32_cryp_read(cryp, CRYP_DOUT);
+-
+-				dst = stm32_cryp_next_out(cryp, dst,
+-							  sizeof(u32));
+-				cryp->total_out -= sizeof(u32);
+-			} else if (!cryp->total_out) {
+-				/* Empty fifo out (data from input padding) */
+-				stm32_cryp_read(cryp, CRYP_DOUT);
+-			} else {
+-				/* Read less than an u32 */
+-				d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+-				d8 = (u8 *)&d32;
+-
+-				for (j = 0; j < cryp->total_out; j++) {
+-					*((u8 *)dst) = *(d8++);
+-					dst = stm32_cryp_next_out(cryp, dst, 1);
+-				}
+-				cryp->total_out = 0;
+-			}
+-		}
++		scatterwalk_copychunks(out_tag, &cryp->out_walk, cryp->authsize, 1);
+ 	} else {
+ 		/* Get and check tag */
+ 		u32 in_tag[AES_BLOCK_32], out_tag[AES_BLOCK_32];
+ 
+-		scatterwalk_map_and_copy(in_tag, cryp->in_sg,
+-					 cryp->total_in_save - cryp->authsize,
+-					 cryp->authsize, 0);
++		scatterwalk_copychunks(in_tag, &cryp->in_walk, cryp->authsize, 0);
+ 
+ 		for (i = 0; i < AES_BLOCK_32; i++)
+ 			out_tag[i] = stm32_cryp_read(cryp, CRYP_DOUT);
+@@ -1217,115 +1207,59 @@ static void stm32_cryp_check_ctr_counter(struct stm32_cryp *cryp)
+ {
+ 	u32 cr;
+ 
+-	if (unlikely(cryp->last_ctr[3] == 0xFFFFFFFF)) {
+-		cryp->last_ctr[3] = 0;
+-		cryp->last_ctr[2]++;
+-		if (!cryp->last_ctr[2]) {
+-			cryp->last_ctr[1]++;
+-			if (!cryp->last_ctr[1])
+-				cryp->last_ctr[0]++;
+-		}
++	if (unlikely(cryp->last_ctr[3] == cpu_to_be32(0xFFFFFFFF))) {
++		/*
++		 * In this case, we need to increment manually the ctr counter,
++		 * as HW doesn't handle the U32 carry.
++		 */
++		crypto_inc((u8 *)cryp->last_ctr, sizeof(cryp->last_ctr));
+ 
+ 		cr = stm32_cryp_read(cryp, CRYP_CR);
+ 		stm32_cryp_write(cryp, CRYP_CR, cr & ~CR_CRYPEN);
+ 
+-		stm32_cryp_hw_write_iv(cryp, (u32 *)cryp->last_ctr);
++		stm32_cryp_hw_write_iv(cryp, cryp->last_ctr);
+ 
+ 		stm32_cryp_write(cryp, CRYP_CR, cr);
+ 	}
+ 
+-	cryp->last_ctr[0] = stm32_cryp_read(cryp, CRYP_IV0LR);
+-	cryp->last_ctr[1] = stm32_cryp_read(cryp, CRYP_IV0RR);
+-	cryp->last_ctr[2] = stm32_cryp_read(cryp, CRYP_IV1LR);
+-	cryp->last_ctr[3] = stm32_cryp_read(cryp, CRYP_IV1RR);
++	/* The IV registers are BE  */
++	cryp->last_ctr[0] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV0LR));
++	cryp->last_ctr[1] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV0RR));
++	cryp->last_ctr[2] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV1LR));
++	cryp->last_ctr[3] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV1RR));
+ }
+ 
+-static bool stm32_cryp_irq_read_data(struct stm32_cryp *cryp)
++static void stm32_cryp_irq_read_data(struct stm32_cryp *cryp)
+ {
+-	unsigned int i, j;
+-	u32 d32, *dst;
+-	u8 *d8;
+-	size_t tag_size;
+-
+-	/* Do no read tag now (if any) */
+-	if (is_encrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp)))
+-		tag_size = cryp->authsize;
+-	else
+-		tag_size = 0;
+-
+-	dst = sg_virt(cryp->out_sg) + _walked_out;
++	unsigned int i;
++	u32 block[AES_BLOCK_32];
+ 
+-	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) {
+-		if (likely(cryp->total_out - tag_size >= sizeof(u32))) {
+-			/* Read a full u32 */
+-			*dst = stm32_cryp_read(cryp, CRYP_DOUT);
++	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
++		block[i] = stm32_cryp_read(cryp, CRYP_DOUT);
+ 
+-			dst = stm32_cryp_next_out(cryp, dst, sizeof(u32));
+-			cryp->total_out -= sizeof(u32);
+-		} else if (cryp->total_out == tag_size) {
+-			/* Empty fifo out (data from input padding) */
+-			d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+-		} else {
+-			/* Read less than an u32 */
+-			d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+-			d8 = (u8 *)&d32;
+-
+-			for (j = 0; j < cryp->total_out - tag_size; j++) {
+-				*((u8 *)dst) = *(d8++);
+-				dst = stm32_cryp_next_out(cryp, dst, 1);
+-			}
+-			cryp->total_out = tag_size;
+-		}
+-	}
+-
+-	return !(cryp->total_out - tag_size) || !cryp->total_in;
++	scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize,
++							     cryp->payload_out), 1);
++	cryp->payload_out -= min_t(size_t, cryp->hw_blocksize,
++				   cryp->payload_out);
+ }
+ 
+ static void stm32_cryp_irq_write_block(struct stm32_cryp *cryp)
+ {
+-	unsigned int i, j;
+-	u32 *src;
+-	u8 d8[4];
+-	size_t tag_size;
+-
+-	/* Do no write tag (if any) */
+-	if (is_decrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp)))
+-		tag_size = cryp->authsize;
+-	else
+-		tag_size = 0;
+-
+-	src = sg_virt(cryp->in_sg) + _walked_in;
++	unsigned int i;
++	u32 block[AES_BLOCK_32] = {0};
+ 
+-	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) {
+-		if (likely(cryp->total_in - tag_size >= sizeof(u32))) {
+-			/* Write a full u32 */
+-			stm32_cryp_write(cryp, CRYP_DIN, *src);
++	scatterwalk_copychunks(block, &cryp->in_walk, min_t(size_t, cryp->hw_blocksize,
++							    cryp->payload_in), 0);
++	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
++		stm32_cryp_write(cryp, CRYP_DIN, block[i]);
+ 
+-			src = stm32_cryp_next_in(cryp, src, sizeof(u32));
+-			cryp->total_in -= sizeof(u32);
+-		} else if (cryp->total_in == tag_size) {
+-			/* Write padding data */
+-			stm32_cryp_write(cryp, CRYP_DIN, 0);
+-		} else {
+-			/* Write less than an u32 */
+-			memset(d8, 0, sizeof(u32));
+-			for (j = 0; j < cryp->total_in - tag_size; j++) {
+-				d8[j] = *((u8 *)src);
+-				src = stm32_cryp_next_in(cryp, src, 1);
+-			}
+-
+-			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+-			cryp->total_in = tag_size;
+-		}
+-	}
++	cryp->payload_in -= min_t(size_t, cryp->hw_blocksize, cryp->payload_in);
+ }
+ 
+ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
+ {
+ 	int err;
+-	u32 cfg, tmp[AES_BLOCK_32];
+-	size_t total_in_ori = cryp->total_in;
+-	struct scatterlist *out_sg_ori = cryp->out_sg;
++	u32 cfg, block[AES_BLOCK_32] = {0};
+ 	unsigned int i;
+ 
+ 	/* 'Special workaround' procedure described in the datasheet */
+@@ -1350,18 +1284,25 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
+ 
+ 	/* b) pad and write the last block */
+ 	stm32_cryp_irq_write_block(cryp);
+-	cryp->total_in = total_in_ori;
++	/* wait end of process */
+ 	err = stm32_cryp_wait_output(cryp);
+ 	if (err) {
+-		dev_err(cryp->dev, "Timeout (write gcm header)\n");
++		dev_err(cryp->dev, "Timeout (write gcm last data)\n");
+ 		return stm32_cryp_finish_req(cryp, err);
+ 	}
+ 
+ 	/* c) get and store encrypted data */
+-	stm32_cryp_irq_read_data(cryp);
+-	scatterwalk_map_and_copy(tmp, out_sg_ori,
+-				 cryp->total_in_save - total_in_ori,
+-				 total_in_ori, 0);
++	/*
++	 * Same code as stm32_cryp_irq_read_data(), but we want to store
++	 * block value
++	 */
++	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
++		block[i] = stm32_cryp_read(cryp, CRYP_DOUT);
++
++	scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize,
++							     cryp->payload_out), 1);
++	cryp->payload_out -= min_t(size_t, cryp->hw_blocksize,
++				   cryp->payload_out);
+ 
+ 	/* d) change mode back to AES GCM */
+ 	cfg &= ~CR_ALGO_MASK;
+@@ -1374,19 +1315,13 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
+ 	stm32_cryp_write(cryp, CRYP_CR, cfg);
+ 
+ 	/* f) write padded data */
+-	for (i = 0; i < AES_BLOCK_32; i++) {
+-		if (cryp->total_in)
+-			stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
+-		else
+-			stm32_cryp_write(cryp, CRYP_DIN, 0);
+-
+-		cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
+-	}
++	for (i = 0; i < AES_BLOCK_32; i++)
++		stm32_cryp_write(cryp, CRYP_DIN, block[i]);
+ 
+ 	/* g) Empty fifo out */
+ 	err = stm32_cryp_wait_output(cryp);
+ 	if (err) {
+-		dev_err(cryp->dev, "Timeout (write gcm header)\n");
++		dev_err(cryp->dev, "Timeout (write gcm padded data)\n");
+ 		return stm32_cryp_finish_req(cryp, err);
+ 	}
+ 
+@@ -1399,16 +1334,14 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
+ 
+ static void stm32_cryp_irq_set_npblb(struct stm32_cryp *cryp)
+ {
+-	u32 cfg, payload_bytes;
++	u32 cfg;
+ 
+ 	/* disable ip, set NPBLB and reneable ip */
+ 	cfg = stm32_cryp_read(cryp, CRYP_CR);
+ 	cfg &= ~CR_CRYPEN;
+ 	stm32_cryp_write(cryp, CRYP_CR, cfg);
+ 
+-	payload_bytes = is_decrypt(cryp) ? cryp->total_in - cryp->authsize :
+-					   cryp->total_in;
+-	cfg |= (cryp->hw_blocksize - payload_bytes) << CR_NBPBL_SHIFT;
++	cfg |= (cryp->hw_blocksize - cryp->payload_in) << CR_NBPBL_SHIFT;
+ 	cfg |= CR_CRYPEN;
+ 	stm32_cryp_write(cryp, CRYP_CR, cfg);
+ }
+@@ -1417,13 +1350,11 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+ {
+ 	int err = 0;
+ 	u32 cfg, iv1tmp;
+-	u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32], tmp[AES_BLOCK_32];
+-	size_t last_total_out, total_in_ori = cryp->total_in;
+-	struct scatterlist *out_sg_ori = cryp->out_sg;
++	u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32];
++	u32 block[AES_BLOCK_32] = {0};
+ 	unsigned int i;
+ 
+ 	/* 'Special workaround' procedure described in the datasheet */
+-	cryp->flags |= FLG_CCM_PADDED_WA;
+ 
+ 	/* a) disable ip */
+ 	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+@@ -1453,7 +1384,7 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+ 
+ 	/* b) pad and write the last block */
+ 	stm32_cryp_irq_write_block(cryp);
+-	cryp->total_in = total_in_ori;
++	/* wait end of process */
+ 	err = stm32_cryp_wait_output(cryp);
+ 	if (err) {
+ 		dev_err(cryp->dev, "Timeout (wite ccm padded data)\n");
+@@ -1461,13 +1392,16 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+ 	}
+ 
+ 	/* c) get and store decrypted data */
+-	last_total_out = cryp->total_out;
+-	stm32_cryp_irq_read_data(cryp);
++	/*
++	 * Same code as stm32_cryp_irq_read_data(), but we want to store
++	 * block value
++	 */
++	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
++		block[i] = stm32_cryp_read(cryp, CRYP_DOUT);
+ 
+-	memset(tmp, 0, sizeof(tmp));
+-	scatterwalk_map_and_copy(tmp, out_sg_ori,
+-				 cryp->total_out_save - last_total_out,
+-				 last_total_out, 0);
++	scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize,
++							     cryp->payload_out), 1);
++	cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, cryp->payload_out);
+ 
+ 	/* d) Load again CRYP_CSGCMCCMxR */
+ 	for (i = 0; i < ARRAY_SIZE(cstmp2); i++)
+@@ -1484,10 +1418,10 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+ 	stm32_cryp_write(cryp, CRYP_CR, cfg);
+ 
+ 	/* g) XOR and write padded data */
+-	for (i = 0; i < ARRAY_SIZE(tmp); i++) {
+-		tmp[i] ^= cstmp1[i];
+-		tmp[i] ^= cstmp2[i];
+-		stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
++	for (i = 0; i < ARRAY_SIZE(block); i++) {
++		block[i] ^= cstmp1[i];
++		block[i] ^= cstmp2[i];
++		stm32_cryp_write(cryp, CRYP_DIN, block[i]);
+ 	}
+ 
+ 	/* h) wait for completion */
+@@ -1501,30 +1435,34 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+ 
+ static void stm32_cryp_irq_write_data(struct stm32_cryp *cryp)
+ {
+-	if (unlikely(!cryp->total_in)) {
++	if (unlikely(!cryp->payload_in)) {
+ 		dev_warn(cryp->dev, "No more data to process\n");
+ 		return;
+ 	}
+ 
+-	if (unlikely(cryp->total_in < AES_BLOCK_SIZE &&
++	if (unlikely(cryp->payload_in < AES_BLOCK_SIZE &&
+ 		     (stm32_cryp_get_hw_mode(cryp) == CR_AES_GCM) &&
+ 		     is_encrypt(cryp))) {
+ 		/* Padding for AES GCM encryption */
+-		if (cryp->caps->padding_wa)
++		if (cryp->caps->padding_wa) {
+ 			/* Special case 1 */
+-			return stm32_cryp_irq_write_gcm_padded_data(cryp);
++			stm32_cryp_irq_write_gcm_padded_data(cryp);
++			return;
++		}
+ 
+ 		/* Setting padding bytes (NBBLB) */
+ 		stm32_cryp_irq_set_npblb(cryp);
+ 	}
+ 
+-	if (unlikely((cryp->total_in - cryp->authsize < AES_BLOCK_SIZE) &&
++	if (unlikely((cryp->payload_in < AES_BLOCK_SIZE) &&
+ 		     (stm32_cryp_get_hw_mode(cryp) == CR_AES_CCM) &&
+ 		     is_decrypt(cryp))) {
+ 		/* Padding for AES CCM decryption */
+-		if (cryp->caps->padding_wa)
++		if (cryp->caps->padding_wa) {
+ 			/* Special case 2 */
+-			return stm32_cryp_irq_write_ccm_padded_data(cryp);
++			stm32_cryp_irq_write_ccm_padded_data(cryp);
++			return;
++		}
+ 
+ 		/* Setting padding bytes (NBBLB) */
+ 		stm32_cryp_irq_set_npblb(cryp);
+@@ -1536,192 +1474,60 @@ static void stm32_cryp_irq_write_data(struct stm32_cryp *cryp)
+ 	stm32_cryp_irq_write_block(cryp);
+ }
+ 
+-static void stm32_cryp_irq_write_gcm_header(struct stm32_cryp *cryp)
++static void stm32_cryp_irq_write_gcmccm_header(struct stm32_cryp *cryp)
+ {
+-	int err;
+-	unsigned int i, j;
+-	u32 cfg, *src;
+-
+-	src = sg_virt(cryp->in_sg) + _walked_in;
+-
+-	for (i = 0; i < AES_BLOCK_32; i++) {
+-		stm32_cryp_write(cryp, CRYP_DIN, *src);
+-
+-		src = stm32_cryp_next_in(cryp, src, sizeof(u32));
+-		cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
+-
+-		/* Check if whole header written */
+-		if ((cryp->total_in_save - cryp->total_in) ==
+-				cryp->areq->assoclen) {
+-			/* Write padding if needed */
+-			for (j = i + 1; j < AES_BLOCK_32; j++)
+-				stm32_cryp_write(cryp, CRYP_DIN, 0);
+-
+-			/* Wait for completion */
+-			err = stm32_cryp_wait_busy(cryp);
+-			if (err) {
+-				dev_err(cryp->dev, "Timeout (gcm header)\n");
+-				return stm32_cryp_finish_req(cryp, err);
+-			}
+-
+-			if (stm32_cryp_get_input_text_len(cryp)) {
+-				/* Phase 3 : payload */
+-				cfg = stm32_cryp_read(cryp, CRYP_CR);
+-				cfg &= ~CR_CRYPEN;
+-				stm32_cryp_write(cryp, CRYP_CR, cfg);
+-
+-				cfg &= ~CR_PH_MASK;
+-				cfg |= CR_PH_PAYLOAD;
+-				cfg |= CR_CRYPEN;
+-				stm32_cryp_write(cryp, CRYP_CR, cfg);
+-			} else {
+-				/* Phase 4 : tag */
+-				stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+-				stm32_cryp_finish_req(cryp, 0);
+-			}
+-
+-			break;
+-		}
+-
+-		if (!cryp->total_in)
+-			break;
+-	}
+-}
++	unsigned int i;
++	u32 block[AES_BLOCK_32] = {0};
++	size_t written;
+ 
+-static void stm32_cryp_irq_write_ccm_header(struct stm32_cryp *cryp)
+-{
+-	int err;
+-	unsigned int i = 0, j, k;
+-	u32 alen, cfg, *src;
+-	u8 d8[4];
+-
+-	src = sg_virt(cryp->in_sg) + _walked_in;
+-	alen = cryp->areq->assoclen;
+-
+-	if (!_walked_in) {
+-		if (cryp->areq->assoclen <= 65280) {
+-			/* Write first u32 of B1 */
+-			d8[0] = (alen >> 8) & 0xFF;
+-			d8[1] = alen & 0xFF;
+-			d8[2] = *((u8 *)src);
+-			src = stm32_cryp_next_in(cryp, src, 1);
+-			d8[3] = *((u8 *)src);
+-			src = stm32_cryp_next_in(cryp, src, 1);
+-
+-			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+-			i++;
+-
+-			cryp->total_in -= min_t(size_t, 2, cryp->total_in);
+-		} else {
+-			/* Build the two first u32 of B1 */
+-			d8[0] = 0xFF;
+-			d8[1] = 0xFE;
+-			d8[2] = alen & 0xFF000000;
+-			d8[3] = alen & 0x00FF0000;
+-
+-			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+-			i++;
+-
+-			d8[0] = alen & 0x0000FF00;
+-			d8[1] = alen & 0x000000FF;
+-			d8[2] = *((u8 *)src);
+-			src = stm32_cryp_next_in(cryp, src, 1);
+-			d8[3] = *((u8 *)src);
+-			src = stm32_cryp_next_in(cryp, src, 1);
+-
+-			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+-			i++;
+-
+-			cryp->total_in -= min_t(size_t, 2, cryp->total_in);
+-		}
+-	}
++	written = min_t(size_t, AES_BLOCK_SIZE, cryp->header_in);
+ 
+-	/* Write next u32 */
+-	for (; i < AES_BLOCK_32; i++) {
+-		/* Build an u32 */
+-		memset(d8, 0, sizeof(u32));
+-		for (k = 0; k < sizeof(u32); k++) {
+-			d8[k] = *((u8 *)src);
+-			src = stm32_cryp_next_in(cryp, src, 1);
+-
+-			cryp->total_in -= min_t(size_t, 1, cryp->total_in);
+-			if ((cryp->total_in_save - cryp->total_in) == alen)
+-				break;
+-		}
++	scatterwalk_copychunks(block, &cryp->in_walk, written, 0);
++	for (i = 0; i < AES_BLOCK_32; i++)
++		stm32_cryp_write(cryp, CRYP_DIN, block[i]);
+ 
+-		stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+-
+-		if ((cryp->total_in_save - cryp->total_in) == alen) {
+-			/* Write padding if needed */
+-			for (j = i + 1; j < AES_BLOCK_32; j++)
+-				stm32_cryp_write(cryp, CRYP_DIN, 0);
+-
+-			/* Wait for completion */
+-			err = stm32_cryp_wait_busy(cryp);
+-			if (err) {
+-				dev_err(cryp->dev, "Timeout (ccm header)\n");
+-				return stm32_cryp_finish_req(cryp, err);
+-			}
+-
+-			if (stm32_cryp_get_input_text_len(cryp)) {
+-				/* Phase 3 : payload */
+-				cfg = stm32_cryp_read(cryp, CRYP_CR);
+-				cfg &= ~CR_CRYPEN;
+-				stm32_cryp_write(cryp, CRYP_CR, cfg);
+-
+-				cfg &= ~CR_PH_MASK;
+-				cfg |= CR_PH_PAYLOAD;
+-				cfg |= CR_CRYPEN;
+-				stm32_cryp_write(cryp, CRYP_CR, cfg);
+-			} else {
+-				/* Phase 4 : tag */
+-				stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+-				stm32_cryp_finish_req(cryp, 0);
+-			}
++	cryp->header_in -= written;
+ 
+-			break;
+-		}
+-	}
++	stm32_crypt_gcmccm_end_header(cryp);
+ }
+ 
+ static irqreturn_t stm32_cryp_irq_thread(int irq, void *arg)
+ {
+ 	struct stm32_cryp *cryp = arg;
+ 	u32 ph;
++	u32 it_mask = stm32_cryp_read(cryp, CRYP_IMSCR);
+ 
+ 	if (cryp->irq_status & MISR_OUT)
+ 		/* Output FIFO IRQ: read data */
+-		if (unlikely(stm32_cryp_irq_read_data(cryp))) {
+-			/* All bytes processed, finish */
+-			stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+-			stm32_cryp_finish_req(cryp, 0);
+-			return IRQ_HANDLED;
+-		}
++		stm32_cryp_irq_read_data(cryp);
+ 
+ 	if (cryp->irq_status & MISR_IN) {
+-		if (is_gcm(cryp)) {
++		if (is_gcm(cryp) || is_ccm(cryp)) {
+ 			ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
+ 			if (unlikely(ph == CR_PH_HEADER))
+ 				/* Write Header */
+-				stm32_cryp_irq_write_gcm_header(cryp);
+-			else
+-				/* Input FIFO IRQ: write data */
+-				stm32_cryp_irq_write_data(cryp);
+-			cryp->gcm_ctr++;
+-		} else if (is_ccm(cryp)) {
+-			ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
+-			if (unlikely(ph == CR_PH_HEADER))
+-				/* Write Header */
+-				stm32_cryp_irq_write_ccm_header(cryp);
++				stm32_cryp_irq_write_gcmccm_header(cryp);
+ 			else
+ 				/* Input FIFO IRQ: write data */
+ 				stm32_cryp_irq_write_data(cryp);
++			if (is_gcm(cryp))
++				cryp->gcm_ctr++;
+ 		} else {
+ 			/* Input FIFO IRQ: write data */
+ 			stm32_cryp_irq_write_data(cryp);
+ 		}
+ 	}
+ 
++	/* Mask useless interrupts */
++	if (!cryp->payload_in && !cryp->header_in)
++		it_mask &= ~IMSCR_IN;
++	if (!cryp->payload_out)
++		it_mask &= ~IMSCR_OUT;
++	stm32_cryp_write(cryp, CRYP_IMSCR, it_mask);
++
++	if (!cryp->payload_in && !cryp->header_in && !cryp->payload_out)
++		stm32_cryp_finish_req(cryp, 0);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -1742,7 +1548,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= AES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1759,7 +1565,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= AES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1777,7 +1583,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= 1,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1795,7 +1601,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= DES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1812,7 +1618,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= DES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1830,7 +1636,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= DES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1847,7 +1653,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= DES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1877,7 +1683,7 @@ static struct aead_alg aead_algs[] = {
+ 		.cra_flags		= CRYPTO_ALG_ASYNC,
+ 		.cra_blocksize		= 1,
+ 		.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+-		.cra_alignmask		= 0xf,
++		.cra_alignmask		= 0,
+ 		.cra_module		= THIS_MODULE,
+ 	},
+ },
+@@ -1897,7 +1703,7 @@ static struct aead_alg aead_algs[] = {
+ 		.cra_flags		= CRYPTO_ALG_ASYNC,
+ 		.cra_blocksize		= 1,
+ 		.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+-		.cra_alignmask		= 0xf,
++		.cra_alignmask		= 0,
+ 		.cra_module		= THIS_MODULE,
+ 	},
+ },
+@@ -2025,8 +1831,6 @@ err_engine1:
+ 	list_del(&cryp->list);
+ 	spin_unlock(&cryp_list.lock);
+ 
+-	pm_runtime_disable(dev);
+-	pm_runtime_put_noidle(dev);
+ 	pm_runtime_disable(dev);
+ 	pm_runtime_put_noidle(dev);
+ 
+diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
+index ff5362da118d8..16bb52836b28d 100644
+--- a/drivers/crypto/stm32/stm32-hash.c
++++ b/drivers/crypto/stm32/stm32-hash.c
+@@ -812,7 +812,7 @@ static void stm32_hash_finish_req(struct ahash_request *req, int err)
+ static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
+ 			      struct stm32_hash_request_ctx *rctx)
+ {
+-	pm_runtime_resume_and_get(hdev->dev);
++	pm_runtime_get_sync(hdev->dev);
+ 
+ 	if (!(HASH_FLAGS_INIT & hdev->flags)) {
+ 		stm32_hash_write(hdev, HASH_CR, HASH_CR_INIT);
+@@ -961,7 +961,7 @@ static int stm32_hash_export(struct ahash_request *req, void *out)
+ 	u32 *preg;
+ 	unsigned int i;
+ 
+-	pm_runtime_resume_and_get(hdev->dev);
++	pm_runtime_get_sync(hdev->dev);
+ 
+ 	while ((stm32_hash_read(hdev, HASH_SR) & HASH_SR_BUSY))
+ 		cpu_relax();
+@@ -999,7 +999,7 @@ static int stm32_hash_import(struct ahash_request *req, const void *in)
+ 
+ 	preg = rctx->hw_context;
+ 
+-	pm_runtime_resume_and_get(hdev->dev);
++	pm_runtime_get_sync(hdev->dev);
+ 
+ 	stm32_hash_write(hdev, HASH_IMR, *preg++);
+ 	stm32_hash_write(hdev, HASH_STR, *preg++);
+diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
+index d3fbd950be944..3e07f961e2f3d 100644
+--- a/drivers/dma-buf/dma-fence-array.c
++++ b/drivers/dma-buf/dma-fence-array.c
+@@ -104,7 +104,11 @@ static bool dma_fence_array_signaled(struct dma_fence *fence)
+ {
+ 	struct dma_fence_array *array = to_dma_fence_array(fence);
+ 
+-	return atomic_read(&array->num_pending) <= 0;
++	if (atomic_read(&array->num_pending) > 0)
++		return false;
++
++	dma_fence_array_clear_pending_error(array);
++	return true;
+ }
+ 
+ static void dma_fence_array_release(struct dma_fence *fence)
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 627ad74c879fd..90afba0b36fe9 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -89,6 +89,7 @@
+ #define		AT_XDMAC_CNDC_NDE		(0x1 << 0)		/* Channel x Next Descriptor Enable */
+ #define		AT_XDMAC_CNDC_NDSUP		(0x1 << 1)		/* Channel x Next Descriptor Source Update */
+ #define		AT_XDMAC_CNDC_NDDUP		(0x1 << 2)		/* Channel x Next Descriptor Destination Update */
++#define		AT_XDMAC_CNDC_NDVIEW_MASK	GENMASK(28, 27)
+ #define		AT_XDMAC_CNDC_NDVIEW_NDV0	(0x0 << 3)		/* Channel x Next Descriptor View 0 */
+ #define		AT_XDMAC_CNDC_NDVIEW_NDV1	(0x1 << 3)		/* Channel x Next Descriptor View 1 */
+ #define		AT_XDMAC_CNDC_NDVIEW_NDV2	(0x2 << 3)		/* Channel x Next Descriptor View 2 */
+@@ -220,15 +221,15 @@ struct at_xdmac {
+ 
+ /* Linked List Descriptor */
+ struct at_xdmac_lld {
+-	dma_addr_t	mbr_nda;	/* Next Descriptor Member */
+-	u32		mbr_ubc;	/* Microblock Control Member */
+-	dma_addr_t	mbr_sa;		/* Source Address Member */
+-	dma_addr_t	mbr_da;		/* Destination Address Member */
+-	u32		mbr_cfg;	/* Configuration Register */
+-	u32		mbr_bc;		/* Block Control Register */
+-	u32		mbr_ds;		/* Data Stride Register */
+-	u32		mbr_sus;	/* Source Microblock Stride Register */
+-	u32		mbr_dus;	/* Destination Microblock Stride Register */
++	u32 mbr_nda;	/* Next Descriptor Member */
++	u32 mbr_ubc;	/* Microblock Control Member */
++	u32 mbr_sa;	/* Source Address Member */
++	u32 mbr_da;	/* Destination Address Member */
++	u32 mbr_cfg;	/* Configuration Register */
++	u32 mbr_bc;	/* Block Control Register */
++	u32 mbr_ds;	/* Data Stride Register */
++	u32 mbr_sus;	/* Source Microblock Stride Register */
++	u32 mbr_dus;	/* Destination Microblock Stride Register */
+ };
+ 
+ /* 64-bit alignment needed to update CNDA and CUBC registers in an atomic way. */
+@@ -338,9 +339,6 @@ static void at_xdmac_start_xfer(struct at_xdmac_chan *atchan,
+ 
+ 	dev_vdbg(chan2dev(&atchan->chan), "%s: desc 0x%p\n", __func__, first);
+ 
+-	if (at_xdmac_chan_is_enabled(atchan))
+-		return;
+-
+ 	/* Set transfer as active to not try to start it again. */
+ 	first->active_xfer = true;
+ 
+@@ -356,7 +354,8 @@ static void at_xdmac_start_xfer(struct at_xdmac_chan *atchan,
+ 	 */
+ 	if (at_xdmac_chan_is_cyclic(atchan))
+ 		reg = AT_XDMAC_CNDC_NDVIEW_NDV1;
+-	else if (first->lld.mbr_ubc & AT_XDMAC_MBR_UBC_NDV3)
++	else if ((first->lld.mbr_ubc &
++		  AT_XDMAC_CNDC_NDVIEW_MASK) == AT_XDMAC_MBR_UBC_NDV3)
+ 		reg = AT_XDMAC_CNDC_NDVIEW_NDV3;
+ 	else
+ 		reg = AT_XDMAC_CNDC_NDVIEW_NDV2;
+@@ -427,13 +426,12 @@ static dma_cookie_t at_xdmac_tx_submit(struct dma_async_tx_descriptor *tx)
+ 	spin_lock_irqsave(&atchan->lock, irqflags);
+ 	cookie = dma_cookie_assign(tx);
+ 
++	list_add_tail(&desc->xfer_node, &atchan->xfers_list);
++	spin_unlock_irqrestore(&atchan->lock, irqflags);
++
+ 	dev_vdbg(chan2dev(tx->chan), "%s: atchan 0x%p, add desc 0x%p to xfers_list\n",
+ 		 __func__, atchan, desc);
+-	list_add_tail(&desc->xfer_node, &atchan->xfers_list);
+-	if (list_is_singular(&atchan->xfers_list))
+-		at_xdmac_start_xfer(atchan, desc);
+ 
+-	spin_unlock_irqrestore(&atchan->lock, irqflags);
+ 	return cookie;
+ }
+ 
+@@ -1563,14 +1561,17 @@ static void at_xdmac_handle_cyclic(struct at_xdmac_chan *atchan)
+ 	struct at_xdmac_desc		*desc;
+ 	struct dma_async_tx_descriptor	*txd;
+ 
+-	if (!list_empty(&atchan->xfers_list)) {
+-		desc = list_first_entry(&atchan->xfers_list,
+-					struct at_xdmac_desc, xfer_node);
+-		txd = &desc->tx_dma_desc;
+-
+-		if (txd->flags & DMA_PREP_INTERRUPT)
+-			dmaengine_desc_get_callback_invoke(txd, NULL);
++	spin_lock_irq(&atchan->lock);
++	if (list_empty(&atchan->xfers_list)) {
++		spin_unlock_irq(&atchan->lock);
++		return;
+ 	}
++	desc = list_first_entry(&atchan->xfers_list, struct at_xdmac_desc,
++				xfer_node);
++	spin_unlock_irq(&atchan->lock);
++	txd = &desc->tx_dma_desc;
++	if (txd->flags & DMA_PREP_INTERRUPT)
++		dmaengine_desc_get_callback_invoke(txd, NULL);
+ }
+ 
+ static void at_xdmac_handle_error(struct at_xdmac_chan *atchan)
+@@ -1724,11 +1725,9 @@ static void at_xdmac_issue_pending(struct dma_chan *chan)
+ 
+ 	dev_dbg(chan2dev(&atchan->chan), "%s\n", __func__);
+ 
+-	if (!at_xdmac_chan_is_cyclic(atchan)) {
+-		spin_lock_irqsave(&atchan->lock, flags);
+-		at_xdmac_advance_work(atchan);
+-		spin_unlock_irqrestore(&atchan->lock, flags);
+-	}
++	spin_lock_irqsave(&atchan->lock, flags);
++	at_xdmac_advance_work(atchan);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+ 	return;
+ }
+diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
+index b84303be8edf5..4eb63f1ad2247 100644
+--- a/drivers/dma/mmp_pdma.c
++++ b/drivers/dma/mmp_pdma.c
+@@ -728,12 +728,6 @@ static int mmp_pdma_config_write(struct dma_chan *dchan,
+ 
+ 	chan->dir = direction;
+ 	chan->dev_addr = addr;
+-	/* FIXME: drivers should be ported over to use the filter
+-	 * function. Once that's done, the following two lines can
+-	 * be removed.
+-	 */
+-	if (cfg->slave_id)
+-		chan->drcmr = cfg->slave_id;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/dma/pxa_dma.c b/drivers/dma/pxa_dma.c
+index 349fb312c8725..b4ef4f19f7dec 100644
+--- a/drivers/dma/pxa_dma.c
++++ b/drivers/dma/pxa_dma.c
+@@ -911,13 +911,6 @@ static void pxad_get_config(struct pxad_chan *chan,
+ 		*dcmd |= PXA_DCMD_BURST16;
+ 	else if (maxburst == 32)
+ 		*dcmd |= PXA_DCMD_BURST32;
+-
+-	/* FIXME: drivers should be ported over to use the filter
+-	 * function. Once that's done, the following two lines can
+-	 * be removed.
+-	 */
+-	if (chan->cfg.slave_id)
+-		chan->drcmr = chan->cfg.slave_id;
+ }
+ 
+ static struct dma_async_tx_descriptor *
+diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c
+index 9d473923712ad..fe36738f2dd7e 100644
+--- a/drivers/dma/stm32-mdma.c
++++ b/drivers/dma/stm32-mdma.c
+@@ -184,7 +184,7 @@
+ #define STM32_MDMA_CTBR(x)		(0x68 + 0x40 * (x))
+ #define STM32_MDMA_CTBR_DBUS		BIT(17)
+ #define STM32_MDMA_CTBR_SBUS		BIT(16)
+-#define STM32_MDMA_CTBR_TSEL_MASK	GENMASK(7, 0)
++#define STM32_MDMA_CTBR_TSEL_MASK	GENMASK(5, 0)
+ #define STM32_MDMA_CTBR_TSEL(n)		STM32_MDMA_SET(n, \
+ 						      STM32_MDMA_CTBR_TSEL_MASK)
+ 
+diff --git a/drivers/dma/uniphier-xdmac.c b/drivers/dma/uniphier-xdmac.c
+index d6b8a202474f4..290836b7e1be2 100644
+--- a/drivers/dma/uniphier-xdmac.c
++++ b/drivers/dma/uniphier-xdmac.c
+@@ -131,8 +131,9 @@ uniphier_xdmac_next_desc(struct uniphier_xdmac_chan *xc)
+ static void uniphier_xdmac_chan_start(struct uniphier_xdmac_chan *xc,
+ 				      struct uniphier_xdmac_desc *xd)
+ {
+-	u32 src_mode, src_addr, src_width;
+-	u32 dst_mode, dst_addr, dst_width;
++	u32 src_mode, src_width;
++	u32 dst_mode, dst_width;
++	dma_addr_t src_addr, dst_addr;
+ 	u32 val, its, tnum;
+ 	enum dma_slave_buswidth buswidth;
+ 
+diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
+index 1a801a5d3b08b..92906b56b1a2b 100644
+--- a/drivers/edac/synopsys_edac.c
++++ b/drivers/edac/synopsys_edac.c
+@@ -1351,8 +1351,7 @@ static int mc_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	if (of_device_is_compatible(pdev->dev.of_node,
+-				    "xlnx,zynqmp-ddrc-2.40a"))
++	if (priv->p_data->quirks & DDR_ECC_INTR_SUPPORT)
+ 		setup_address_map(priv);
+ #endif
+ 
+diff --git a/drivers/firmware/google/Kconfig b/drivers/firmware/google/Kconfig
+index 97968aece54f8..931544c9f63d4 100644
+--- a/drivers/firmware/google/Kconfig
++++ b/drivers/firmware/google/Kconfig
+@@ -3,9 +3,9 @@ menuconfig GOOGLE_FIRMWARE
+ 	bool "Google Firmware Drivers"
+ 	default n
+ 	help
+-	  These firmware drivers are used by Google's servers.  They are
+-	  only useful if you are working directly on one of their
+-	  proprietary servers.  If in doubt, say "N".
++	  These firmware drivers are used by Google servers,
++	  Chromebooks and other devices using coreboot firmware.
++	  If in doubt, say "N".
+ 
+ if GOOGLE_FIRMWARE
+ 
+diff --git a/drivers/gpio/gpio-aspeed.c b/drivers/gpio/gpio-aspeed.c
+index b966f5e28ebff..e0d5d80ec8e0f 100644
+--- a/drivers/gpio/gpio-aspeed.c
++++ b/drivers/gpio/gpio-aspeed.c
+@@ -53,7 +53,7 @@ struct aspeed_gpio_config {
+ struct aspeed_gpio {
+ 	struct gpio_chip chip;
+ 	struct irq_chip irqc;
+-	spinlock_t lock;
++	raw_spinlock_t lock;
+ 	void __iomem *base;
+ 	int irq;
+ 	const struct aspeed_gpio_config *config;
+@@ -413,14 +413,14 @@ static void aspeed_gpio_set(struct gpio_chip *gc, unsigned int offset,
+ 	unsigned long flags;
+ 	bool copro;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 	copro = aspeed_gpio_copro_request(gpio, offset);
+ 
+ 	__aspeed_gpio_set(gc, offset, val);
+ 
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ }
+ 
+ static int aspeed_gpio_dir_in(struct gpio_chip *gc, unsigned int offset)
+@@ -435,7 +435,7 @@ static int aspeed_gpio_dir_in(struct gpio_chip *gc, unsigned int offset)
+ 	if (!have_input(gpio, offset))
+ 		return -ENOTSUPP;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	reg = ioread32(addr);
+ 	reg &= ~GPIO_BIT(offset);
+@@ -445,7 +445,7 @@ static int aspeed_gpio_dir_in(struct gpio_chip *gc, unsigned int offset)
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+ 
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -463,7 +463,7 @@ static int aspeed_gpio_dir_out(struct gpio_chip *gc,
+ 	if (!have_output(gpio, offset))
+ 		return -ENOTSUPP;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	reg = ioread32(addr);
+ 	reg |= GPIO_BIT(offset);
+@@ -474,7 +474,7 @@ static int aspeed_gpio_dir_out(struct gpio_chip *gc,
+ 
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -492,11 +492,11 @@ static int aspeed_gpio_get_direction(struct gpio_chip *gc, unsigned int offset)
+ 	if (!have_output(gpio, offset))
+ 		return GPIO_LINE_DIRECTION_IN;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	val = ioread32(bank_reg(gpio, bank, reg_dir)) & GPIO_BIT(offset);
+ 
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return val ? GPIO_LINE_DIRECTION_OUT : GPIO_LINE_DIRECTION_IN;
+ }
+@@ -539,14 +539,14 @@ static void aspeed_gpio_irq_ack(struct irq_data *d)
+ 
+ 	status_addr = bank_reg(gpio, bank, reg_irq_status);
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 	copro = aspeed_gpio_copro_request(gpio, offset);
+ 
+ 	iowrite32(bit, status_addr);
+ 
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ }
+ 
+ static void aspeed_gpio_irq_set_mask(struct irq_data *d, bool set)
+@@ -565,7 +565,7 @@ static void aspeed_gpio_irq_set_mask(struct irq_data *d, bool set)
+ 
+ 	addr = bank_reg(gpio, bank, reg_irq_enable);
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 	copro = aspeed_gpio_copro_request(gpio, offset);
+ 
+ 	reg = ioread32(addr);
+@@ -577,7 +577,7 @@ static void aspeed_gpio_irq_set_mask(struct irq_data *d, bool set)
+ 
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ }
+ 
+ static void aspeed_gpio_irq_mask(struct irq_data *d)
+@@ -629,7 +629,7 @@ static int aspeed_gpio_set_type(struct irq_data *d, unsigned int type)
+ 		return -EINVAL;
+ 	}
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 	copro = aspeed_gpio_copro_request(gpio, offset);
+ 
+ 	addr = bank_reg(gpio, bank, reg_irq_type0);
+@@ -649,7 +649,7 @@ static int aspeed_gpio_set_type(struct irq_data *d, unsigned int type)
+ 
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	irq_set_handler_locked(d, handler);
+ 
+@@ -719,7 +719,7 @@ static int aspeed_gpio_reset_tolerance(struct gpio_chip *chip,
+ 
+ 	treg = bank_reg(gpio, to_bank(offset), reg_tolerance);
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 	copro = aspeed_gpio_copro_request(gpio, offset);
+ 
+ 	val = readl(treg);
+@@ -733,7 +733,7 @@ static int aspeed_gpio_reset_tolerance(struct gpio_chip *chip,
+ 
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -859,7 +859,7 @@ static int enable_debounce(struct gpio_chip *chip, unsigned int offset,
+ 		return rc;
+ 	}
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	if (timer_allocation_registered(gpio, offset)) {
+ 		rc = unregister_allocated_timer(gpio, offset);
+@@ -919,7 +919,7 @@ static int enable_debounce(struct gpio_chip *chip, unsigned int offset,
+ 	configure_timer(gpio, offset, i);
+ 
+ out:
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return rc;
+ }
+@@ -930,13 +930,13 @@ static int disable_debounce(struct gpio_chip *chip, unsigned int offset)
+ 	unsigned long flags;
+ 	int rc;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	rc = unregister_allocated_timer(gpio, offset);
+ 	if (!rc)
+ 		configure_timer(gpio, offset, 0);
+ 
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return rc;
+ }
+@@ -1018,7 +1018,7 @@ int aspeed_gpio_copro_grab_gpio(struct gpio_desc *desc,
+ 		return -EINVAL;
+ 	bindex = offset >> 3;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	/* Sanity check, this shouldn't happen */
+ 	if (gpio->cf_copro_bankmap[bindex] == 0xff) {
+@@ -1039,7 +1039,7 @@ int aspeed_gpio_copro_grab_gpio(struct gpio_desc *desc,
+ 	if (bit)
+ 		*bit = GPIO_OFFSET(offset);
+  bail:
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 	return rc;
+ }
+ EXPORT_SYMBOL_GPL(aspeed_gpio_copro_grab_gpio);
+@@ -1063,7 +1063,7 @@ int aspeed_gpio_copro_release_gpio(struct gpio_desc *desc)
+ 		return -EINVAL;
+ 	bindex = offset >> 3;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	/* Sanity check, this shouldn't happen */
+ 	if (gpio->cf_copro_bankmap[bindex] == 0) {
+@@ -1077,7 +1077,7 @@ int aspeed_gpio_copro_release_gpio(struct gpio_desc *desc)
+ 		aspeed_gpio_change_cmd_source(gpio, bank, bindex,
+ 					      GPIO_CMDSRC_ARM);
+  bail:
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 	return rc;
+ }
+ EXPORT_SYMBOL_GPL(aspeed_gpio_copro_release_gpio);
+@@ -1151,7 +1151,7 @@ static int __init aspeed_gpio_probe(struct platform_device *pdev)
+ 	if (IS_ERR(gpio->base))
+ 		return PTR_ERR(gpio->base);
+ 
+-	spin_lock_init(&gpio->lock);
++	raw_spin_lock_init(&gpio->lock);
+ 
+ 	gpio_id = of_match_node(aspeed_gpio_of_table, pdev->dev.of_node);
+ 	if (!gpio_id)
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index 6f11714ce0239..55e4f402ec8b6 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -969,10 +969,17 @@ int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int ind
+ 			irq_flags = acpi_dev_get_irq_type(info.triggering,
+ 							  info.polarity);
+ 
+-			/* Set type if specified and different than the current one */
+-			if (irq_flags != IRQ_TYPE_NONE &&
+-			    irq_flags != irq_get_trigger_type(irq))
+-				irq_set_irq_type(irq, irq_flags);
++			/*
++			 * If the IRQ is not already in use then set type
++			 * if specified and different than the current one.
++			 */
++			if (can_request_irq(irq, irq_flags)) {
++				if (irq_flags != IRQ_TYPE_NONE &&
++				    irq_flags != irq_get_trigger_type(irq))
++					irq_set_irq_type(irq, irq_flags);
++			} else {
++				dev_dbg(&adev->dev, "IRQ %d already in use\n", irq);
++			}
+ 
+ 			return irq;
+ 		}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+index 0de66f59adb8a..df1f9b88a53f9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+@@ -387,6 +387,9 @@ amdgpu_connector_lcd_native_mode(struct drm_encoder *encoder)
+ 	    native_mode->vdisplay != 0 &&
+ 	    native_mode->clock != 0) {
+ 		mode = drm_mode_duplicate(dev, native_mode);
++		if (!mode)
++			return NULL;
++
+ 		mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
+ 		drm_mode_set_name(mode);
+ 
+@@ -401,6 +404,9 @@ amdgpu_connector_lcd_native_mode(struct drm_encoder *encoder)
+ 		 * simpler.
+ 		 */
+ 		mode = drm_cvt_mode(dev, native_mode->hdisplay, native_mode->vdisplay, 60, true, false, false);
++		if (!mode)
++			return NULL;
++
+ 		mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
+ 		DRM_DEBUG_KMS("Adding cvt approximation of native panel mode %s\n", mode->name);
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+index 2f70fdd6104f2..582055136cdbf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+@@ -267,7 +267,6 @@ int amdgpu_irq_init(struct amdgpu_device *adev)
+ 	if (!amdgpu_device_has_dc_support(adev)) {
+ 		if (!adev->enable_virtual_display)
+ 			/* Disable vblank IRQs aggressively for power-saving */
+-			/* XXX: can this be enabled for DC? */
+ 			adev_to_drm(adev)->vblank_disable_immediate = true;
+ 
+ 		r = drm_vblank_init(adev_to_drm(adev), adev->mode_info.num_crtc);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+index 9ab65ca7df777..873bc33912e23 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+@@ -524,10 +524,10 @@ static void gmc_v8_0_mc_program(struct amdgpu_device *adev)
+ static int gmc_v8_0_mc_init(struct amdgpu_device *adev)
+ {
+ 	int r;
++	u32 tmp;
+ 
+ 	adev->gmc.vram_width = amdgpu_atombios_get_vram_width(adev);
+ 	if (!adev->gmc.vram_width) {
+-		u32 tmp;
+ 		int chansize, numchan;
+ 
+ 		/* Get VRAM informations */
+@@ -571,8 +571,15 @@ static int gmc_v8_0_mc_init(struct amdgpu_device *adev)
+ 		adev->gmc.vram_width = numchan * chansize;
+ 	}
+ 	/* size in MB on si */
+-	adev->gmc.mc_vram_size = RREG32(mmCONFIG_MEMSIZE) * 1024ULL * 1024ULL;
+-	adev->gmc.real_vram_size = RREG32(mmCONFIG_MEMSIZE) * 1024ULL * 1024ULL;
++	tmp = RREG32(mmCONFIG_MEMSIZE);
++	/* some boards may have garbage in the upper 16 bits */
++	if (tmp & 0xffff0000) {
++		DRM_INFO("Probable bad vram size: 0x%08x\n", tmp);
++		if (tmp & 0xffff)
++			tmp &= 0xffff;
++	}
++	adev->gmc.mc_vram_size = tmp * 1024ULL * 1024ULL;
++	adev->gmc.real_vram_size = adev->gmc.mc_vram_size;
+ 
+ 	if (!(adev->flags & AMD_IS_APU)) {
+ 		r = amdgpu_device_resize_fb_bar(adev);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index a5b6f36fe1d72..6c8f141103da4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1069,6 +1069,9 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 	adev_to_drm(adev)->mode_config.cursor_width = adev->dm.dc->caps.max_cursor_size;
+ 	adev_to_drm(adev)->mode_config.cursor_height = adev->dm.dc->caps.max_cursor_size;
+ 
++	/* Disable vblank IRQs aggressively for power-saving */
++	adev_to_drm(adev)->vblank_disable_immediate = true;
++
+ 	if (drm_vblank_init(adev_to_drm(adev), adev->dm.display_indexes_num)) {
+ 		DRM_ERROR(
+ 		"amdgpu: failed to initialize sw for display support.\n");
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 284ed1c8a35ac..93f5229c303e7 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2436,7 +2436,8 @@ static void commit_planes_for_stream(struct dc *dc,
+ 	}
+ 
+ 	if ((update_type != UPDATE_TYPE_FAST) && stream->update_flags.bits.dsc_changed)
+-		if (top_pipe_to_program->stream_res.tg->funcs->lock_doublebuffer_enable) {
++		if (top_pipe_to_program &&
++			top_pipe_to_program->stream_res.tg->funcs->lock_doublebuffer_enable) {
+ 			if (should_use_dmub_lock(stream->link)) {
+ 				union dmub_hw_lock_flags hw_locks = { 0 };
+ 				struct dmub_hw_lock_inst_flags inst_flags = { 0 };
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index 9f383b9041d28..49109614510b8 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -2098,6 +2098,12 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
+ 		}
+ 	}
+ 
++	/* setting should not be allowed from VF */
++	if (amdgpu_sriov_vf(adev)) {
++		dev_attr->attr.mode &= ~S_IWUGO;
++		dev_attr->store = NULL;
++	}
++
+ #undef DEVICE_ATTR_IS
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
+index 914c569ab8c15..cab3f5c4e2fc8 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
+@@ -1086,11 +1086,21 @@ int analogix_dp_send_psr_spd(struct analogix_dp_device *dp,
+ 	if (!blocking)
+ 		return 0;
+ 
++	/*
++	 * db[1]!=0: entering PSR, wait for fully active remote frame buffer.
++	 * db[1]==0: exiting PSR, wait for either
++	 *  (a) ACTIVE_RESYNC - the sink "must display the
++	 *      incoming active frames from the Source device with no visible
++	 *      glitches and/or artifacts", even though timings may still be
++	 *      re-synchronizing; or
++	 *  (b) INACTIVE - the transition is fully complete.
++	 */
+ 	ret = readx_poll_timeout(analogix_dp_get_psr_status, dp, psr_status,
+ 		psr_status >= 0 &&
+ 		((vsc->db[1] && psr_status == DP_PSR_SINK_ACTIVE_RFB) ||
+-		(!vsc->db[1] && psr_status == DP_PSR_SINK_INACTIVE)), 1500,
+-		DP_TIMEOUT_PSR_LOOP_MS * 1000);
++		(!vsc->db[1] && (psr_status == DP_PSR_SINK_ACTIVE_RESYNC ||
++				 psr_status == DP_PSR_SINK_INACTIVE))),
++		1500, DP_TIMEOUT_PSR_LOOP_MS * 1000);
+ 	if (ret) {
+ 		dev_warn(dp->dev, "Failed to apply PSR %d\n", ret);
+ 		return ret;
+diff --git a/drivers/gpu/drm/bridge/display-connector.c b/drivers/gpu/drm/bridge/display-connector.c
+index 4d278573cdb99..544a47335cac4 100644
+--- a/drivers/gpu/drm/bridge/display-connector.c
++++ b/drivers/gpu/drm/bridge/display-connector.c
+@@ -104,7 +104,7 @@ static int display_connector_probe(struct platform_device *pdev)
+ {
+ 	struct display_connector *conn;
+ 	unsigned int type;
+-	const char *label;
++	const char *label = NULL;
+ 	int ret;
+ 
+ 	conn = devm_kzalloc(&pdev->dev, sizeof(*conn), GFP_KERNEL);
+diff --git a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+index d2808c4a6fb1c..cce98bf2a4e73 100644
+--- a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
++++ b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+@@ -306,19 +306,10 @@ out:
+ 	mutex_unlock(&ge_b850v3_lvds_dev_mutex);
+ }
+ 
+-static int stdp4028_ge_b850v3_fw_probe(struct i2c_client *stdp4028_i2c,
+-				       const struct i2c_device_id *id)
++static int ge_b850v3_register(void)
+ {
++	struct i2c_client *stdp4028_i2c = ge_b850v3_lvds_ptr->stdp4028_i2c;
+ 	struct device *dev = &stdp4028_i2c->dev;
+-	int ret;
+-
+-	ret = ge_b850v3_lvds_init(dev);
+-
+-	if (ret)
+-		return ret;
+-
+-	ge_b850v3_lvds_ptr->stdp4028_i2c = stdp4028_i2c;
+-	i2c_set_clientdata(stdp4028_i2c, ge_b850v3_lvds_ptr);
+ 
+ 	/* drm bridge initialization */
+ 	ge_b850v3_lvds_ptr->bridge.funcs = &ge_b850v3_lvds_funcs;
+@@ -343,6 +334,27 @@ static int stdp4028_ge_b850v3_fw_probe(struct i2c_client *stdp4028_i2c,
+ 			"ge-b850v3-lvds-dp", ge_b850v3_lvds_ptr);
+ }
+ 
++static int stdp4028_ge_b850v3_fw_probe(struct i2c_client *stdp4028_i2c,
++				       const struct i2c_device_id *id)
++{
++	struct device *dev = &stdp4028_i2c->dev;
++	int ret;
++
++	ret = ge_b850v3_lvds_init(dev);
++
++	if (ret)
++		return ret;
++
++	ge_b850v3_lvds_ptr->stdp4028_i2c = stdp4028_i2c;
++	i2c_set_clientdata(stdp4028_i2c, ge_b850v3_lvds_ptr);
++
++	/* Only register after both bridges are probed */
++	if (!ge_b850v3_lvds_ptr->stdp2690_i2c)
++		return 0;
++
++	return ge_b850v3_register();
++}
++
+ static int stdp4028_ge_b850v3_fw_remove(struct i2c_client *stdp4028_i2c)
+ {
+ 	ge_b850v3_lvds_remove();
+@@ -386,7 +398,11 @@ static int stdp2690_ge_b850v3_fw_probe(struct i2c_client *stdp2690_i2c,
+ 	ge_b850v3_lvds_ptr->stdp2690_i2c = stdp2690_i2c;
+ 	i2c_set_clientdata(stdp2690_i2c, ge_b850v3_lvds_ptr);
+ 
+-	return 0;
++	/* Only register after both bridges are probed */
++	if (!ge_b850v3_lvds_ptr->stdp4028_i2c)
++		return 0;
++
++	return ge_b850v3_register();
+ }
+ 
+ static int stdp2690_ge_b850v3_fw_remove(struct i2c_client *stdp2690_i2c)
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-ahb-audio.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-ahb-audio.c
+index d0db1acf11d73..7d2ed0ed2fe26 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-ahb-audio.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-ahb-audio.c
+@@ -320,13 +320,17 @@ static int dw_hdmi_open(struct snd_pcm_substream *substream)
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+ 	struct snd_dw_hdmi *dw = substream->private_data;
+ 	void __iomem *base = dw->data.base;
++	u8 *eld;
+ 	int ret;
+ 
+ 	runtime->hw = dw_hdmi_hw;
+ 
+-	ret = snd_pcm_hw_constraint_eld(runtime, dw->data.eld);
+-	if (ret < 0)
+-		return ret;
++	eld = dw->data.get_eld(dw->data.hdmi);
++	if (eld) {
++		ret = snd_pcm_hw_constraint_eld(runtime, eld);
++		if (ret < 0)
++			return ret;
++	}
+ 
+ 	ret = snd_pcm_limit_hw_rates(runtime);
+ 	if (ret < 0)
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-audio.h b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-audio.h
+index cb07dc0da5a70..f72d27208ebef 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-audio.h
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-audio.h
+@@ -9,15 +9,15 @@ struct dw_hdmi_audio_data {
+ 	void __iomem *base;
+ 	int irq;
+ 	struct dw_hdmi *hdmi;
+-	u8 *eld;
++	u8 *(*get_eld)(struct dw_hdmi *hdmi);
+ };
+ 
+ struct dw_hdmi_i2s_audio_data {
+ 	struct dw_hdmi *hdmi;
+-	u8 *eld;
+ 
+ 	void (*write)(struct dw_hdmi *hdmi, u8 val, int offset);
+ 	u8 (*read)(struct dw_hdmi *hdmi, int offset);
++	u8 *(*get_eld)(struct dw_hdmi *hdmi);
+ };
+ 
+ #endif
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c
+index 9fef6413741dc..9682416056ed6 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c
+@@ -135,8 +135,15 @@ static int dw_hdmi_i2s_get_eld(struct device *dev, void *data, uint8_t *buf,
+ 			       size_t len)
+ {
+ 	struct dw_hdmi_i2s_audio_data *audio = data;
++	u8 *eld;
++
++	eld = audio->get_eld(audio->hdmi);
++	if (eld)
++		memcpy(buf, eld, min_t(size_t, MAX_ELD_BYTES, len));
++	else
++		/* Pass en empty ELD if connector not available */
++		memset(buf, 0, len);
+ 
+-	memcpy(buf, audio->eld, min_t(size_t, MAX_ELD_BYTES, len));
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+index 0c79a9ba48bb6..29c0eb4bd7546 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+@@ -756,6 +756,14 @@ static void hdmi_enable_audio_clk(struct dw_hdmi *hdmi, bool enable)
+ 	hdmi_writeb(hdmi, hdmi->mc_clkdis, HDMI_MC_CLKDIS);
+ }
+ 
++static u8 *hdmi_audio_get_eld(struct dw_hdmi *hdmi)
++{
++	if (!hdmi->curr_conn)
++		return NULL;
++
++	return hdmi->curr_conn->eld;
++}
++
+ static void dw_hdmi_ahb_audio_enable(struct dw_hdmi *hdmi)
+ {
+ 	hdmi_set_cts_n(hdmi, hdmi->audio_cts, hdmi->audio_n);
+@@ -3395,7 +3403,7 @@ struct dw_hdmi *dw_hdmi_probe(struct platform_device *pdev,
+ 		audio.base = hdmi->regs;
+ 		audio.irq = irq;
+ 		audio.hdmi = hdmi;
+-		audio.eld = hdmi->connector.eld;
++		audio.get_eld = hdmi_audio_get_eld;
+ 		hdmi->enable_audio = dw_hdmi_ahb_audio_enable;
+ 		hdmi->disable_audio = dw_hdmi_ahb_audio_disable;
+ 
+@@ -3408,7 +3416,7 @@ struct dw_hdmi *dw_hdmi_probe(struct platform_device *pdev,
+ 		struct dw_hdmi_i2s_audio_data audio;
+ 
+ 		audio.hdmi	= hdmi;
+-		audio.eld	= hdmi->connector.eld;
++		audio.get_eld	= hdmi_audio_get_eld;
+ 		audio.write	= hdmi_writeb;
+ 		audio.read	= hdmi_readb;
+ 		hdmi->enable_audio = dw_hdmi_i2s_audio_enable;
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index ecdf9b01340f5..1a58481037b3f 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -171,6 +171,7 @@ static const struct regmap_config ti_sn_bridge_regmap_config = {
+ 	.val_bits = 8,
+ 	.volatile_table = &ti_sn_bridge_volatile_table,
+ 	.cache_type = REGCACHE_NONE,
++	.max_register = 0xFF,
+ };
+ 
+ static void ti_sn_bridge_write_u16(struct ti_sn_bridge *pdata,
+diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
+index cd162d406078a..006e3b896caea 100644
+--- a/drivers/gpu/drm/drm_drv.c
++++ b/drivers/gpu/drm/drm_drv.c
+@@ -577,6 +577,7 @@ static int drm_dev_init(struct drm_device *dev,
+ 			struct drm_driver *driver,
+ 			struct device *parent)
+ {
++	struct inode *inode;
+ 	int ret;
+ 
+ 	if (!drm_core_init_complete) {
+@@ -613,13 +614,15 @@ static int drm_dev_init(struct drm_device *dev,
+ 	if (ret)
+ 		return ret;
+ 
+-	dev->anon_inode = drm_fs_inode_new();
+-	if (IS_ERR(dev->anon_inode)) {
+-		ret = PTR_ERR(dev->anon_inode);
++	inode = drm_fs_inode_new();
++	if (IS_ERR(inode)) {
++		ret = PTR_ERR(inode);
+ 		DRM_ERROR("Cannot allocate anonymous inode: %d\n", ret);
+ 		goto err;
+ 	}
+ 
++	dev->anon_inode = inode;
++
+ 	if (drm_core_check_feature(dev, DRIVER_RENDER)) {
+ 		ret = drm_minor_alloc(dev, DRM_MINOR_RENDER);
+ 		if (ret)
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index a950d5db211c5..9d1bd8f491ad7 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -248,6 +248,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGM"),
+ 		},
+ 		.driver_data = (void *)&lcd1200x1920_rightside_up,
++	}, {	/* Lenovo Yoga Book X90F / X91F / X91L */
++		.matches = {
++		  /* Non exact match to match all versions */
++		  DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X9"),
++		},
++		.driver_data = (void *)&lcd1200x1920_rightside_up,
+ 	}, {	/* OneGX1 Pro */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SYSTEM_MANUFACTURER"),
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+index 5f24cc52c2878..ed2c50011d445 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+@@ -469,6 +469,12 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
+ 		return -EINVAL;
+ 	}
+ 
++	if (args->stream_size > SZ_64K || args->nr_relocs > SZ_64K ||
++	    args->nr_bos > SZ_64K || args->nr_pmrs > 128) {
++		DRM_ERROR("submit arguments out of size limits\n");
++		return -EINVAL;
++	}
++
+ 	/*
+ 	 * Copy the command submission and bo array to kernel space in
+ 	 * one go, and do this outside of any locks.
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
+index 1c75c8ed5bcea..85eddd492774d 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
+@@ -130,6 +130,7 @@ struct etnaviv_gpu {
+ 
+ 	/* hang detection */
+ 	u32 hangcheck_dma_addr;
++	u32 hangcheck_fence;
+ 
+ 	void __iomem *mmio;
+ 	int irq;
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+index cd46c882269cc..026b6c0731198 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+@@ -106,8 +106,10 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
+ 	 */
+ 	dma_addr = gpu_read(gpu, VIVS_FE_DMA_ADDRESS);
+ 	change = dma_addr - gpu->hangcheck_dma_addr;
+-	if (change < 0 || change > 16) {
++	if (gpu->completed_fence != gpu->hangcheck_fence ||
++	    change < 0 || change > 16) {
+ 		gpu->hangcheck_dma_addr = dma_addr;
++		gpu->hangcheck_fence = gpu->completed_fence;
+ 		goto out_no_timeout;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/lima/lima_device.c b/drivers/gpu/drm/lima/lima_device.c
+index 65fdca366e41f..36c9905894278 100644
+--- a/drivers/gpu/drm/lima/lima_device.c
++++ b/drivers/gpu/drm/lima/lima_device.c
+@@ -357,6 +357,7 @@ int lima_device_init(struct lima_device *ldev)
+ 	int err, i;
+ 
+ 	dma_set_coherent_mask(ldev->dev, DMA_BIT_MASK(32));
++	dma_set_max_seg_size(ldev->dev, UINT_MAX);
+ 
+ 	err = lima_clk_init(ldev);
+ 	if (err)
+diff --git a/drivers/gpu/drm/mediatek/mtk_mipi_tx.c b/drivers/gpu/drm/mediatek/mtk_mipi_tx.c
+index 8cee2591e7284..ccc742dc78bd9 100644
+--- a/drivers/gpu/drm/mediatek/mtk_mipi_tx.c
++++ b/drivers/gpu/drm/mediatek/mtk_mipi_tx.c
+@@ -147,6 +147,8 @@ static int mtk_mipi_tx_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	mipi_tx->driver_data = of_device_get_match_data(dev);
++	if (!mipi_tx->driver_data)
++		return -ENODEV;
+ 
+ 	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	mipi_tx->regs = devm_ioremap_resource(dev, mem);
+diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
+index dabb4a1ccdcf7..1aad34b5ffd7f 100644
+--- a/drivers/gpu/drm/msm/Kconfig
++++ b/drivers/gpu/drm/msm/Kconfig
+@@ -60,6 +60,7 @@ config DRM_MSM_HDMI_HDCP
+ config DRM_MSM_DP
+ 	bool "Enable DisplayPort support in MSM DRM driver"
+ 	depends on DRM_MSM
++	select RATIONAL
+ 	default y
+ 	help
+ 	  Compile in support for DP driver in MSM DRM driver. DP external
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index b4a2e8eb35dd2..08e082d0443af 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -71,8 +71,8 @@ static int _dpu_danger_signal_status(struct seq_file *s,
+ 					&status);
+ 	} else {
+ 		seq_puts(s, "\nSafe signal status:\n");
+-		if (kms->hw_mdp->ops.get_danger_status)
+-			kms->hw_mdp->ops.get_danger_status(kms->hw_mdp,
++		if (kms->hw_mdp->ops.get_safe_status)
++			kms->hw_mdp->ops.get_safe_status(kms->hw_mdp,
+ 					&status);
+ 	}
+ 	pm_runtime_put_sync(&kms->pdev->dev);
+diff --git a/drivers/gpu/drm/nouveau/dispnv04/disp.c b/drivers/gpu/drm/nouveau/dispnv04/disp.c
+index 7739f46470d3e..99fee4d8cd318 100644
+--- a/drivers/gpu/drm/nouveau/dispnv04/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv04/disp.c
+@@ -205,7 +205,7 @@ nv04_display_destroy(struct drm_device *dev)
+ 	nvif_notify_dtor(&disp->flip);
+ 
+ 	nouveau_display(dev)->priv = NULL;
+-	kfree(disp);
++	vfree(disp);
+ 
+ 	nvif_object_unmap(&drm->client.device.object);
+ }
+@@ -223,7 +223,7 @@ nv04_display_create(struct drm_device *dev)
+ 	struct nv04_display *disp;
+ 	int i, ret;
+ 
+-	disp = kzalloc(sizeof(*disp), GFP_KERNEL);
++	disp = vzalloc(sizeof(*disp));
+ 	if (!disp)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
+index a0fe607c9c07f..3bfc55c571b5e 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
+@@ -94,20 +94,13 @@ nvkm_pmu_fini(struct nvkm_subdev *subdev, bool suspend)
+ 	return 0;
+ }
+ 
+-static int
++static void
+ nvkm_pmu_reset(struct nvkm_pmu *pmu)
+ {
+ 	struct nvkm_device *device = pmu->subdev.device;
+ 
+ 	if (!pmu->func->enabled(pmu))
+-		return 0;
+-
+-	/* Inhibit interrupts, and wait for idle. */
+-	nvkm_wr32(device, 0x10a014, 0x0000ffff);
+-	nvkm_msec(device, 2000,
+-		if (!nvkm_rd32(device, 0x10a04c))
+-			break;
+-	);
++		return;
+ 
+ 	/* Reset. */
+ 	if (pmu->func->reset)
+@@ -118,25 +111,37 @@ nvkm_pmu_reset(struct nvkm_pmu *pmu)
+ 		if (!(nvkm_rd32(device, 0x10a10c) & 0x00000006))
+ 			break;
+ 	);
+-
+-	return 0;
+ }
+ 
+ static int
+ nvkm_pmu_preinit(struct nvkm_subdev *subdev)
+ {
+ 	struct nvkm_pmu *pmu = nvkm_pmu(subdev);
+-	return nvkm_pmu_reset(pmu);
++	nvkm_pmu_reset(pmu);
++	return 0;
+ }
+ 
+ static int
+ nvkm_pmu_init(struct nvkm_subdev *subdev)
+ {
+ 	struct nvkm_pmu *pmu = nvkm_pmu(subdev);
+-	int ret = nvkm_pmu_reset(pmu);
+-	if (ret == 0 && pmu->func->init)
+-		ret = pmu->func->init(pmu);
+-	return ret;
++	struct nvkm_device *device = pmu->subdev.device;
++
++	if (!pmu->func->init)
++		return 0;
++
++	if (pmu->func->enabled(pmu)) {
++		/* Inhibit interrupts, and wait for idle. */
++		nvkm_wr32(device, 0x10a014, 0x0000ffff);
++		nvkm_msec(device, 2000,
++			if (!nvkm_rd32(device, 0x10a04c))
++				break;
++		);
++
++		nvkm_pmu_reset(pmu);
++	}
++
++	return pmu->func->init(pmu);
+ }
+ 
+ static void *
+diff --git a/drivers/gpu/drm/panel/panel-innolux-p079zca.c b/drivers/gpu/drm/panel/panel-innolux-p079zca.c
+index aea3162253914..f194b62e290ca 100644
+--- a/drivers/gpu/drm/panel/panel-innolux-p079zca.c
++++ b/drivers/gpu/drm/panel/panel-innolux-p079zca.c
+@@ -484,6 +484,7 @@ static void innolux_panel_del(struct innolux_panel *innolux)
+ static int innolux_panel_probe(struct mipi_dsi_device *dsi)
+ {
+ 	const struct panel_desc *desc;
++	struct innolux_panel *innolux;
+ 	int err;
+ 
+ 	desc = of_device_get_match_data(&dsi->dev);
+@@ -495,7 +496,14 @@ static int innolux_panel_probe(struct mipi_dsi_device *dsi)
+ 	if (err < 0)
+ 		return err;
+ 
+-	return mipi_dsi_attach(dsi);
++	err = mipi_dsi_attach(dsi);
++	if (err < 0) {
++		innolux = mipi_dsi_get_drvdata(dsi);
++		innolux_panel_del(innolux);
++		return err;
++	}
++
++	return 0;
+ }
+ 
+ static int innolux_panel_remove(struct mipi_dsi_device *dsi)
+diff --git a/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c b/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
+index 86e4213e8bb13..daccb1fd5fdad 100644
+--- a/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
++++ b/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
+@@ -406,7 +406,13 @@ static int kingdisplay_panel_probe(struct mipi_dsi_device *dsi)
+ 	if (err < 0)
+ 		return err;
+ 
+-	return mipi_dsi_attach(dsi);
++	err = mipi_dsi_attach(dsi);
++	if (err < 0) {
++		kingdisplay_panel_del(kingdisplay);
++		return err;
++	}
++
++	return 0;
+ }
+ 
+ static int kingdisplay_panel_remove(struct mipi_dsi_device *dsi)
+diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
+index 8c0a572940e82..32070e94f6c49 100644
+--- a/drivers/gpu/drm/radeon/radeon_kms.c
++++ b/drivers/gpu/drm/radeon/radeon_kms.c
+@@ -634,6 +634,8 @@ void radeon_driver_lastclose_kms(struct drm_device *dev)
+ int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
+ {
+ 	struct radeon_device *rdev = dev->dev_private;
++	struct radeon_fpriv *fpriv;
++	struct radeon_vm *vm;
+ 	int r;
+ 
+ 	file_priv->driver_priv = NULL;
+@@ -646,48 +648,52 @@ int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
+ 
+ 	/* new gpu have virtual address space support */
+ 	if (rdev->family >= CHIP_CAYMAN) {
+-		struct radeon_fpriv *fpriv;
+-		struct radeon_vm *vm;
+ 
+ 		fpriv = kzalloc(sizeof(*fpriv), GFP_KERNEL);
+ 		if (unlikely(!fpriv)) {
+ 			r = -ENOMEM;
+-			goto out_suspend;
++			goto err_suspend;
+ 		}
+ 
+ 		if (rdev->accel_working) {
+ 			vm = &fpriv->vm;
+ 			r = radeon_vm_init(rdev, vm);
+-			if (r) {
+-				kfree(fpriv);
+-				goto out_suspend;
+-			}
++			if (r)
++				goto err_fpriv;
+ 
+ 			r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false);
+-			if (r) {
+-				radeon_vm_fini(rdev, vm);
+-				kfree(fpriv);
+-				goto out_suspend;
+-			}
++			if (r)
++				goto err_vm_fini;
+ 
+ 			/* map the ib pool buffer read only into
+ 			 * virtual address space */
+ 			vm->ib_bo_va = radeon_vm_bo_add(rdev, vm,
+ 							rdev->ring_tmp_bo.bo);
++			if (!vm->ib_bo_va) {
++				r = -ENOMEM;
++				goto err_vm_fini;
++			}
++
+ 			r = radeon_vm_bo_set_addr(rdev, vm->ib_bo_va,
+ 						  RADEON_VA_IB_OFFSET,
+ 						  RADEON_VM_PAGE_READABLE |
+ 						  RADEON_VM_PAGE_SNOOPED);
+-			if (r) {
+-				radeon_vm_fini(rdev, vm);
+-				kfree(fpriv);
+-				goto out_suspend;
+-			}
++			if (r)
++				goto err_vm_fini;
+ 		}
+ 		file_priv->driver_priv = fpriv;
+ 	}
+ 
+-out_suspend:
++	pm_runtime_mark_last_busy(dev->dev);
++	pm_runtime_put_autosuspend(dev->dev);
++	return 0;
++
++err_vm_fini:
++	radeon_vm_fini(rdev, vm);
++err_fpriv:
++	kfree(fpriv);
++
++err_suspend:
+ 	pm_runtime_mark_last_busy(dev->dev);
+ 	pm_runtime_put_autosuspend(dev->dev);
+ 	return r;
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
+index 1b9738e44909d..065604c5837de 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
+@@ -215,6 +215,7 @@ static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc)
+ 	const struct drm_display_mode *mode = &rcrtc->crtc.state->adjusted_mode;
+ 	struct rcar_du_device *rcdu = rcrtc->dev;
+ 	unsigned long mode_clock = mode->clock * 1000;
++	unsigned int hdse_offset;
+ 	u32 dsmr;
+ 	u32 escr;
+ 
+@@ -298,10 +299,15 @@ static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc)
+ 	     | DSMR_DIPM_DISP | DSMR_CSPM;
+ 	rcar_du_crtc_write(rcrtc, DSMR, dsmr);
+ 
++	hdse_offset = 19;
++	if (rcrtc->group->cmms_mask & BIT(rcrtc->index % 2))
++		hdse_offset += 25;
++
+ 	/* Display timings */
+-	rcar_du_crtc_write(rcrtc, HDSR, mode->htotal - mode->hsync_start - 19);
++	rcar_du_crtc_write(rcrtc, HDSR, mode->htotal - mode->hsync_start -
++					hdse_offset);
+ 	rcar_du_crtc_write(rcrtc, HDER, mode->htotal - mode->hsync_start +
+-					mode->hdisplay - 19);
++					mode->hdisplay - hdse_offset);
+ 	rcar_du_crtc_write(rcrtc, HSWR, mode->hsync_end -
+ 					mode->hsync_start - 1);
+ 	rcar_du_crtc_write(rcrtc, HCR,  mode->htotal - 1);
+@@ -831,6 +837,7 @@ rcar_du_crtc_mode_valid(struct drm_crtc *crtc,
+ 	struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
+ 	struct rcar_du_device *rcdu = rcrtc->dev;
+ 	bool interlaced = mode->flags & DRM_MODE_FLAG_INTERLACE;
++	unsigned int min_sync_porch;
+ 	unsigned int vbp;
+ 
+ 	if (interlaced && !rcar_du_has(rcdu, RCAR_DU_FEATURE_INTERLACED))
+@@ -838,9 +845,14 @@ rcar_du_crtc_mode_valid(struct drm_crtc *crtc,
+ 
+ 	/*
+ 	 * The hardware requires a minimum combined horizontal sync and back
+-	 * porch of 20 pixels and a minimum vertical back porch of 3 lines.
++	 * porch of 20 pixels (when CMM isn't used) or 45 pixels (when CMM is
++	 * used), and a minimum vertical back porch of 3 lines.
+ 	 */
+-	if (mode->htotal - mode->hsync_start < 20)
++	min_sync_porch = 20;
++	if (rcrtc->group->cmms_mask & BIT(rcrtc->index % 2))
++		min_sync_porch += 25;
++
++	if (mode->htotal - mode->hsync_start < min_sync_porch)
+ 		return MODE_HBLANK_NARROW;
+ 
+ 	vbp = (mode->vtotal - mode->vsync_end) / (interlaced ? 2 : 1);
+diff --git a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+index d0c9610ad2202..b0fb3c3cba596 100644
+--- a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+@@ -243,6 +243,8 @@ struct dw_mipi_dsi_rockchip {
+ 	struct dw_mipi_dsi *dmd;
+ 	const struct rockchip_dw_dsi_chip_data *cdata;
+ 	struct dw_mipi_dsi_plat_data pdata;
++
++	bool dsi_bound;
+ };
+ 
+ struct dphy_pll_parameter_map {
+@@ -753,10 +755,6 @@ static void dw_mipi_dsi_encoder_enable(struct drm_encoder *encoder)
+ 	if (mux < 0)
+ 		return;
+ 
+-	pm_runtime_get_sync(dsi->dev);
+-	if (dsi->slave)
+-		pm_runtime_get_sync(dsi->slave->dev);
+-
+ 	/*
+ 	 * For the RK3399, the clk of grf must be enabled before writing grf
+ 	 * register. And for RK3288 or other soc, this grf_clk must be NULL,
+@@ -775,20 +773,10 @@ static void dw_mipi_dsi_encoder_enable(struct drm_encoder *encoder)
+ 	clk_disable_unprepare(dsi->grf_clk);
+ }
+ 
+-static void dw_mipi_dsi_encoder_disable(struct drm_encoder *encoder)
+-{
+-	struct dw_mipi_dsi_rockchip *dsi = to_dsi(encoder);
+-
+-	if (dsi->slave)
+-		pm_runtime_put(dsi->slave->dev);
+-	pm_runtime_put(dsi->dev);
+-}
+-
+ static const struct drm_encoder_helper_funcs
+ dw_mipi_dsi_encoder_helper_funcs = {
+ 	.atomic_check = dw_mipi_dsi_encoder_atomic_check,
+ 	.enable = dw_mipi_dsi_encoder_enable,
+-	.disable = dw_mipi_dsi_encoder_disable,
+ };
+ 
+ static int rockchip_dsi_drm_create_encoder(struct dw_mipi_dsi_rockchip *dsi,
+@@ -918,10 +906,14 @@ static int dw_mipi_dsi_rockchip_bind(struct device *dev,
+ 		put_device(second);
+ 	}
+ 
++	pm_runtime_get_sync(dsi->dev);
++	if (dsi->slave)
++		pm_runtime_get_sync(dsi->slave->dev);
++
+ 	ret = clk_prepare_enable(dsi->pllref_clk);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "Failed to enable pllref_clk: %d\n", ret);
+-		return ret;
++		goto out_pm_runtime;
+ 	}
+ 
+ 	/*
+@@ -933,7 +925,7 @@ static int dw_mipi_dsi_rockchip_bind(struct device *dev,
+ 	ret = clk_prepare_enable(dsi->grf_clk);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dsi->dev, "Failed to enable grf_clk: %d\n", ret);
+-		return ret;
++		goto out_pll_clk;
+ 	}
+ 
+ 	dw_mipi_dsi_rockchip_config(dsi);
+@@ -945,16 +937,27 @@ static int dw_mipi_dsi_rockchip_bind(struct device *dev,
+ 	ret = rockchip_dsi_drm_create_encoder(dsi, drm_dev);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "Failed to create drm encoder\n");
+-		return ret;
++		goto out_pll_clk;
+ 	}
+ 
+ 	ret = dw_mipi_dsi_bind(dsi->dmd, &dsi->encoder);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "Failed to bind: %d\n", ret);
+-		return ret;
++		goto out_pll_clk;
+ 	}
+ 
++	dsi->dsi_bound = true;
++
+ 	return 0;
++
++out_pll_clk:
++	clk_disable_unprepare(dsi->pllref_clk);
++out_pm_runtime:
++	pm_runtime_put(dsi->dev);
++	if (dsi->slave)
++		pm_runtime_put(dsi->slave->dev);
++
++	return ret;
+ }
+ 
+ static void dw_mipi_dsi_rockchip_unbind(struct device *dev,
+@@ -966,9 +969,15 @@ static void dw_mipi_dsi_rockchip_unbind(struct device *dev,
+ 	if (dsi->is_slave)
+ 		return;
+ 
++	dsi->dsi_bound = false;
++
+ 	dw_mipi_dsi_unbind(dsi->dmd);
+ 
+ 	clk_disable_unprepare(dsi->pllref_clk);
++
++	pm_runtime_put(dsi->dev);
++	if (dsi->slave)
++		pm_runtime_put(dsi->slave->dev);
+ }
+ 
+ static const struct component_ops dw_mipi_dsi_rockchip_ops = {
+@@ -1026,6 +1035,36 @@ static const struct dw_mipi_dsi_host_ops dw_mipi_dsi_rockchip_host_ops = {
+ 	.detach = dw_mipi_dsi_rockchip_host_detach,
+ };
+ 
++static int __maybe_unused dw_mipi_dsi_rockchip_resume(struct device *dev)
++{
++	struct dw_mipi_dsi_rockchip *dsi = dev_get_drvdata(dev);
++	int ret;
++
++	/*
++	 * Re-configure DSI state, if we were previously initialized. We need
++	 * to do this before rockchip_drm_drv tries to re-enable() any panels.
++	 */
++	if (dsi->dsi_bound) {
++		ret = clk_prepare_enable(dsi->grf_clk);
++		if (ret) {
++			DRM_DEV_ERROR(dsi->dev, "Failed to enable grf_clk: %d\n", ret);
++			return ret;
++		}
++
++		dw_mipi_dsi_rockchip_config(dsi);
++		if (dsi->slave)
++			dw_mipi_dsi_rockchip_config(dsi->slave);
++
++		clk_disable_unprepare(dsi->grf_clk);
++	}
++
++	return 0;
++}
++
++static const struct dev_pm_ops dw_mipi_dsi_rockchip_pm_ops = {
++	SET_LATE_SYSTEM_SLEEP_PM_OPS(NULL, dw_mipi_dsi_rockchip_resume)
++};
++
+ static int dw_mipi_dsi_rockchip_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -1126,14 +1165,10 @@ static int dw_mipi_dsi_rockchip_probe(struct platform_device *pdev)
+ 		if (ret != -EPROBE_DEFER)
+ 			DRM_DEV_ERROR(dev,
+ 				      "Failed to probe dw_mipi_dsi: %d\n", ret);
+-		goto err_clkdisable;
++		return ret;
+ 	}
+ 
+ 	return 0;
+-
+-err_clkdisable:
+-	clk_disable_unprepare(dsi->pllref_clk);
+-	return ret;
+ }
+ 
+ static int dw_mipi_dsi_rockchip_remove(struct platform_device *pdev)
+@@ -1249,6 +1284,7 @@ struct platform_driver dw_mipi_dsi_rockchip_driver = {
+ 	.remove		= dw_mipi_dsi_rockchip_remove,
+ 	.driver		= {
+ 		.of_match_table = dw_mipi_dsi_rockchip_dt_ids,
++		.pm	= &dw_mipi_dsi_rockchip_pm_ops,
+ 		.name	= "dw-mipi-dsi-rockchip",
+ 	},
+ };
+diff --git a/drivers/gpu/drm/tegra/vic.c b/drivers/gpu/drm/tegra/vic.c
+index b77f726303d89..ec0e4d8f0aade 100644
+--- a/drivers/gpu/drm/tegra/vic.c
++++ b/drivers/gpu/drm/tegra/vic.c
+@@ -5,6 +5,7 @@
+ 
+ #include <linux/clk.h>
+ #include <linux/delay.h>
++#include <linux/dma-mapping.h>
+ #include <linux/host1x.h>
+ #include <linux/iommu.h>
+ #include <linux/module.h>
+@@ -265,10 +266,8 @@ static int vic_load_firmware(struct vic *vic)
+ 
+ 	if (!client->group) {
+ 		virt = dma_alloc_coherent(vic->dev, size, &iova, GFP_KERNEL);
+-
+-		err = dma_mapping_error(vic->dev, iova);
+-		if (err < 0)
+-			return err;
++		if (!virt)
++			return -ENOMEM;
+ 	} else {
+ 		virt = tegra_drm_alloc(tegra, size, &iova);
+ 	}
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index eb4b7df02ca03..f673292eec9db 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -789,6 +789,8 @@ int ttm_mem_evict_first(struct ttm_bo_device *bdev,
+ 	ret = ttm_bo_evict(bo, ctx);
+ 	if (locked)
+ 		ttm_bo_unreserve(bo);
++	else
++		ttm_bo_move_to_lru_tail_unlocked(bo);
+ 
+ 	ttm_bo_put(bo);
+ 	return ret;
+diff --git a/drivers/gpu/drm/vboxvideo/vbox_main.c b/drivers/gpu/drm/vboxvideo/vbox_main.c
+index d68d9bad76747..c5ea880d17b29 100644
+--- a/drivers/gpu/drm/vboxvideo/vbox_main.c
++++ b/drivers/gpu/drm/vboxvideo/vbox_main.c
+@@ -123,8 +123,8 @@ int vbox_hw_init(struct vbox_private *vbox)
+ 	/* Create guest-heap mem-pool use 2^4 = 16 byte chunks */
+ 	vbox->guest_pool = devm_gen_pool_create(vbox->ddev.dev, 4, -1,
+ 						"vboxvideo-accel");
+-	if (!vbox->guest_pool)
+-		return -ENOMEM;
++	if (IS_ERR(vbox->guest_pool))
++		return PTR_ERR(vbox->guest_pool);
+ 
+ 	ret = gen_pool_add_virt(vbox->guest_pool,
+ 				(unsigned long)vbox->guest_heap,
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index ee293f061f0a8..9392de2679a1d 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -79,6 +79,7 @@
+ # define VC4_HD_M_SW_RST			BIT(2)
+ # define VC4_HD_M_ENABLE			BIT(0)
+ 
++#define HSM_MIN_CLOCK_FREQ	120000000
+ #define CEC_CLOCK_FREQ 40000
+ #define VC4_HSM_MID_CLOCK 149985000
+ 
+@@ -1398,8 +1399,14 @@ static int vc4_hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 	struct vc4_hdmi *vc4_hdmi = cec_get_drvdata(adap);
+ 	/* clock period in microseconds */
+ 	const u32 usecs = 1000000 / CEC_CLOCK_FREQ;
+-	u32 val = HDMI_READ(HDMI_CEC_CNTRL_5);
++	u32 val;
++	int ret;
+ 
++	ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
++	if (ret)
++		return ret;
++
++	val = HDMI_READ(HDMI_CEC_CNTRL_5);
+ 	val &= ~(VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET |
+ 		 VC4_HDMI_CEC_CNT_TO_4700_US_MASK |
+ 		 VC4_HDMI_CEC_CNT_TO_4500_US_MASK);
+@@ -1524,6 +1531,8 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
+ 	if (ret < 0)
+ 		goto err_delete_cec_adap;
+ 
++	pm_runtime_put(&vc4_hdmi->pdev->dev);
++
+ 	return 0;
+ 
+ err_delete_cec_adap:
+@@ -1806,6 +1815,19 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ 	vc4_hdmi->disable_wifi_frequencies =
+ 		of_property_read_bool(dev->of_node, "wifi-2.4ghz-coexistence");
+ 
++	/*
++	 * If we boot without any cable connected to the HDMI connector,
++	 * the firmware will skip the HSM initialization and leave it
++	 * with a rate of 0, resulting in a bus lockup when we're
++	 * accessing the registers even if it's enabled.
++	 *
++	 * Let's put a sensible default at runtime_resume so that we
++	 * don't end up in this situation.
++	 */
++	ret = clk_set_min_rate(vc4_hdmi->hsm_clock, HSM_MIN_CLOCK_FREQ);
++	if (ret)
++		goto err_put_ddc;
++
+ 	if (vc4_hdmi->variant->reset)
+ 		vc4_hdmi->variant->reset(vc4_hdmi);
+ 
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index d0ebb70e2fdd6..a2c09dca4eef9 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -18,6 +18,10 @@
+ #include <trace/events/host1x.h>
+ #undef CREATE_TRACE_POINTS
+ 
++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
++#include <asm/dma-iommu.h>
++#endif
++
+ #include "bus.h"
+ #include "channel.h"
+ #include "debug.h"
+@@ -232,6 +236,17 @@ static struct iommu_domain *host1x_iommu_attach(struct host1x *host)
+ 	struct iommu_domain *domain = iommu_get_domain_for_dev(host->dev);
+ 	int err;
+ 
++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
++	if (host->dev->archdata.mapping) {
++		struct dma_iommu_mapping *mapping =
++				to_dma_iommu_mapping(host->dev);
++		arm_iommu_detach_device(host->dev);
++		arm_iommu_release_mapping(mapping);
++
++		domain = iommu_get_domain_for_dev(host->dev);
++	}
++#endif
++
+ 	/*
+ 	 * We may not always want to enable IOMMU support (for example if the
+ 	 * host1x firewall is already enabled and we don't support addressing
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index 5c1d33cda863b..e5d2e7e9541b8 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -415,7 +415,7 @@ static int apple_input_configured(struct hid_device *hdev,
+ 
+ 	if ((asc->quirks & APPLE_HAS_FN) && !asc->fn_found) {
+ 		hid_info(hdev, "Fn key not found (Apple Wireless Keyboard clone?), disabling Fn key handling\n");
+-		asc->quirks = 0;
++		asc->quirks &= ~APPLE_HAS_FN;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 580d378342c41..eb53855898c8d 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -1288,6 +1288,12 @@ void hidinput_hid_event(struct hid_device *hid, struct hid_field *field, struct
+ 
+ 	input = field->hidinput->input;
+ 
++	if (usage->type == EV_ABS &&
++	    (((*quirks & HID_QUIRK_X_INVERT) && usage->code == ABS_X) ||
++	     ((*quirks & HID_QUIRK_Y_INVERT) && usage->code == ABS_Y))) {
++		value = field->logical_maximum - value;
++	}
++
+ 	if (usage->hat_min < usage->hat_max || usage->hat_dir) {
+ 		int hat_dir = usage->hat_dir;
+ 		if (!hat_dir)
+diff --git a/drivers/hid/hid-uclogic-params.c b/drivers/hid/hid-uclogic-params.c
+index dd05bed4ca53a..38f9bbad81c17 100644
+--- a/drivers/hid/hid-uclogic-params.c
++++ b/drivers/hid/hid-uclogic-params.c
+@@ -65,7 +65,7 @@ static int uclogic_params_get_str_desc(__u8 **pbuf, struct hid_device *hdev,
+ 					__u8 idx, size_t len)
+ {
+ 	int rc;
+-	struct usb_device *udev = hid_to_usb_dev(hdev);
++	struct usb_device *udev;
+ 	__u8 *buf = NULL;
+ 
+ 	/* Check arguments */
+@@ -74,6 +74,8 @@ static int uclogic_params_get_str_desc(__u8 **pbuf, struct hid_device *hdev,
+ 		goto cleanup;
+ 	}
+ 
++	udev = hid_to_usb_dev(hdev);
++
+ 	buf = kmalloc(len, GFP_KERNEL);
+ 	if (buf == NULL) {
+ 		rc = -ENOMEM;
+@@ -449,7 +451,7 @@ static int uclogic_params_frame_init_v1_buttonpad(
+ {
+ 	int rc;
+ 	bool found = false;
+-	struct usb_device *usb_dev = hid_to_usb_dev(hdev);
++	struct usb_device *usb_dev;
+ 	char *str_buf = NULL;
+ 	const size_t str_len = 16;
+ 
+@@ -459,6 +461,8 @@ static int uclogic_params_frame_init_v1_buttonpad(
+ 		goto cleanup;
+ 	}
+ 
++	usb_dev = hid_to_usb_dev(hdev);
++
+ 	/*
+ 	 * Enable generic button mode
+ 	 */
+@@ -705,9 +709,9 @@ static int uclogic_params_huion_init(struct uclogic_params *params,
+ 				     struct hid_device *hdev)
+ {
+ 	int rc;
+-	struct usb_device *udev = hid_to_usb_dev(hdev);
+-	struct usb_interface *iface = to_usb_interface(hdev->dev.parent);
+-	__u8 bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
++	struct usb_device *udev;
++	struct usb_interface *iface;
++	__u8 bInterfaceNumber;
+ 	bool found;
+ 	/* The resulting parameters (noop) */
+ 	struct uclogic_params p = {0, };
+@@ -721,6 +725,10 @@ static int uclogic_params_huion_init(struct uclogic_params *params,
+ 		goto cleanup;
+ 	}
+ 
++	udev = hid_to_usb_dev(hdev);
++	iface = to_usb_interface(hdev->dev.parent);
++	bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
++
+ 	/* If it's not a pen interface */
+ 	if (bInterfaceNumber != 0) {
+ 		/* TODO: Consider marking the interface invalid */
+@@ -832,10 +840,10 @@ int uclogic_params_init(struct uclogic_params *params,
+ 			struct hid_device *hdev)
+ {
+ 	int rc;
+-	struct usb_device *udev = hid_to_usb_dev(hdev);
+-	__u8  bNumInterfaces = udev->config->desc.bNumInterfaces;
+-	struct usb_interface *iface = to_usb_interface(hdev->dev.parent);
+-	__u8 bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
++	struct usb_device *udev;
++	__u8  bNumInterfaces;
++	struct usb_interface *iface;
++	__u8 bInterfaceNumber;
+ 	bool found;
+ 	/* The resulting parameters (noop) */
+ 	struct uclogic_params p = {0, };
+@@ -846,6 +854,11 @@ int uclogic_params_init(struct uclogic_params *params,
+ 		goto cleanup;
+ 	}
+ 
++	udev = hid_to_usb_dev(hdev);
++	bNumInterfaces = udev->config->desc.bNumInterfaces;
++	iface = to_usb_interface(hdev->dev.parent);
++	bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
++
+ 	/*
+ 	 * Set replacement report descriptor if the original matches the
+ 	 * specified size. Otherwise keep interface unchanged.
+diff --git a/drivers/hid/hid-vivaldi.c b/drivers/hid/hid-vivaldi.c
+index 72957a9f71170..576518e704ee6 100644
+--- a/drivers/hid/hid-vivaldi.c
++++ b/drivers/hid/hid-vivaldi.c
+@@ -74,10 +74,11 @@ static void vivaldi_feature_mapping(struct hid_device *hdev,
+ 				    struct hid_usage *usage)
+ {
+ 	struct vivaldi_data *drvdata = hid_get_drvdata(hdev);
++	struct hid_report *report = field->report;
+ 	int fn_key;
+ 	int ret;
+ 	u32 report_len;
+-	u8 *buf;
++	u8 *report_data, *buf;
+ 
+ 	if (field->logical != HID_USAGE_FN_ROW_PHYSMAP ||
+ 	    (usage->hid & HID_USAGE_PAGE) != HID_UP_ORDINAL)
+@@ -89,12 +90,24 @@ static void vivaldi_feature_mapping(struct hid_device *hdev,
+ 	if (fn_key > drvdata->max_function_row_key)
+ 		drvdata->max_function_row_key = fn_key;
+ 
+-	buf = hid_alloc_report_buf(field->report, GFP_KERNEL);
+-	if (!buf)
++	report_data = buf = hid_alloc_report_buf(report, GFP_KERNEL);
++	if (!report_data)
+ 		return;
+ 
+-	report_len = hid_report_len(field->report);
+-	ret = hid_hw_raw_request(hdev, field->report->id, buf,
++	report_len = hid_report_len(report);
++	if (!report->id) {
++		/*
++		 * hid_hw_raw_request() will stuff report ID (which will be 0)
++		 * into the first byte of the buffer even for unnumbered
++		 * reports, so we need to account for this to avoid getting
++		 * -EOVERFLOW in return.
++		 * Note that hid_alloc_report_buf() adds 7 bytes to the size
++		 * so we can safely say that we have space for an extra byte.
++		 */
++		report_len++;
++	}
++
++	ret = hid_hw_raw_request(hdev, report->id, report_data,
+ 				 report_len, HID_FEATURE_REPORT,
+ 				 HID_REQ_GET_REPORT);
+ 	if (ret < 0) {
+@@ -103,7 +116,16 @@ static void vivaldi_feature_mapping(struct hid_device *hdev,
+ 		goto out;
+ 	}
+ 
+-	ret = hid_report_raw_event(hdev, HID_FEATURE_REPORT, buf,
++	if (!report->id) {
++		/*
++		 * Undo the damage from hid_hw_raw_request() for unnumbered
++		 * reports.
++		 */
++		report_data++;
++		report_len--;
++	}
++
++	ret = hid_report_raw_event(hdev, HID_FEATURE_REPORT, report_data,
+ 				   report_len, 0);
+ 	if (ret) {
+ 		dev_warn(&hdev->dev, "failed to report feature %d\n",
+diff --git a/drivers/hid/uhid.c b/drivers/hid/uhid.c
+index 8fe3efcb83271..fc06d8bb42e0f 100644
+--- a/drivers/hid/uhid.c
++++ b/drivers/hid/uhid.c
+@@ -28,11 +28,22 @@
+ 
+ struct uhid_device {
+ 	struct mutex devlock;
++
++	/* This flag tracks whether the HID device is usable for commands from
++	 * userspace. The flag is already set before hid_add_device(), which
++	 * runs in workqueue context, to allow hid_add_device() to communicate
++	 * with userspace.
++	 * However, if hid_add_device() fails, the flag is cleared without
++	 * holding devlock.
++	 * We guarantee that if @running changes from true to false while you're
++	 * holding @devlock, it's still fine to access @hid.
++	 */
+ 	bool running;
+ 
+ 	__u8 *rd_data;
+ 	uint rd_size;
+ 
++	/* When this is NULL, userspace may use UHID_CREATE/UHID_CREATE2. */
+ 	struct hid_device *hid;
+ 	struct uhid_event input_buf;
+ 
+@@ -63,9 +74,18 @@ static void uhid_device_add_worker(struct work_struct *work)
+ 	if (ret) {
+ 		hid_err(uhid->hid, "Cannot register HID device: error %d\n", ret);
+ 
+-		hid_destroy_device(uhid->hid);
+-		uhid->hid = NULL;
++		/* We used to call hid_destroy_device() here, but that's really
++		 * messy to get right because we have to coordinate with
++		 * concurrent writes from userspace that might be in the middle
++		 * of using uhid->hid.
++		 * Just leave uhid->hid as-is for now, and clean it up when
++		 * userspace tries to close or reinitialize the uhid instance.
++		 *
++		 * However, we do have to clear the ->running flag and do a
++		 * wakeup to make sure userspace knows that the device is gone.
++		 */
+ 		uhid->running = false;
++		wake_up_interruptible(&uhid->report_wait);
+ 	}
+ }
+ 
+@@ -474,7 +494,7 @@ static int uhid_dev_create2(struct uhid_device *uhid,
+ 	void *rd_data;
+ 	int ret;
+ 
+-	if (uhid->running)
++	if (uhid->hid)
+ 		return -EALREADY;
+ 
+ 	rd_size = ev->u.create2.rd_size;
+@@ -556,7 +576,7 @@ static int uhid_dev_create(struct uhid_device *uhid,
+ 
+ static int uhid_dev_destroy(struct uhid_device *uhid)
+ {
+-	if (!uhid->running)
++	if (!uhid->hid)
+ 		return -EINVAL;
+ 
+ 	uhid->running = false;
+@@ -565,6 +585,7 @@ static int uhid_dev_destroy(struct uhid_device *uhid)
+ 	cancel_work_sync(&uhid->worker);
+ 
+ 	hid_destroy_device(uhid->hid);
++	uhid->hid = NULL;
+ 	kfree(uhid->rd_data);
+ 
+ 	return 0;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index c25274275258f..d90bfa8b7313e 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2566,6 +2566,24 @@ static void wacom_wac_finger_slot(struct wacom_wac *wacom_wac,
+ 	}
+ }
+ 
++static bool wacom_wac_slot_is_active(struct input_dev *dev, int key)
++{
++	struct input_mt *mt = dev->mt;
++	struct input_mt_slot *s;
++
++	if (!mt)
++		return false;
++
++	for (s = mt->slots; s != mt->slots + mt->num_slots; s++) {
++		if (s->key == key &&
++			input_mt_get_value(s, ABS_MT_TRACKING_ID) >= 0) {
++			return true;
++		}
++	}
++
++	return false;
++}
++
+ static void wacom_wac_finger_event(struct hid_device *hdev,
+ 		struct hid_field *field, struct hid_usage *usage, __s32 value)
+ {
+@@ -2613,9 +2631,14 @@ static void wacom_wac_finger_event(struct hid_device *hdev,
+ 	}
+ 
+ 	if (usage->usage_index + 1 == field->report_count) {
+-		if (equivalent_usage == wacom_wac->hid_data.last_slot_field &&
+-		    wacom_wac->hid_data.confidence)
+-			wacom_wac_finger_slot(wacom_wac, wacom_wac->touch_input);
++		if (equivalent_usage == wacom_wac->hid_data.last_slot_field) {
++			bool touch_removed = wacom_wac_slot_is_active(wacom_wac->touch_input,
++				wacom_wac->hid_data.id) && !wacom_wac->hid_data.tipswitch;
++
++			if (wacom_wac->hid_data.confidence || touch_removed) {
++				wacom_wac_finger_slot(wacom_wac, wacom_wac->touch_input);
++			}
++		}
+ 	}
+ }
+ 
+@@ -2631,6 +2654,10 @@ static void wacom_wac_finger_pre_report(struct hid_device *hdev,
+ 
+ 	hid_data->confidence = true;
+ 
++	hid_data->cc_report = 0;
++	hid_data->cc_index = -1;
++	hid_data->cc_value_index = -1;
++
+ 	for (i = 0; i < report->maxfield; i++) {
+ 		struct hid_field *field = report->field[i];
+ 		int j;
+@@ -2664,11 +2691,14 @@ static void wacom_wac_finger_pre_report(struct hid_device *hdev,
+ 	    hid_data->cc_index >= 0) {
+ 		struct hid_field *field = report->field[hid_data->cc_index];
+ 		int value = field->value[hid_data->cc_value_index];
+-		if (value)
++		if (value) {
+ 			hid_data->num_expected = value;
++			hid_data->num_received = 0;
++		}
+ 	}
+ 	else {
+ 		hid_data->num_expected = wacom_wac->features.touch_max;
++		hid_data->num_received = 0;
+ 	}
+ }
+ 
+@@ -2692,6 +2722,7 @@ static void wacom_wac_finger_report(struct hid_device *hdev,
+ 
+ 	input_sync(input);
+ 	wacom_wac->hid_data.num_received = 0;
++	wacom_wac->hid_data.num_expected = 0;
+ 
+ 	/* keep touch state for pen event */
+ 	wacom_wac->shared->touch_down = wacom_wac_finger_count_touches(wacom_wac);
+diff --git a/drivers/hsi/hsi_core.c b/drivers/hsi/hsi_core.c
+index a5f92e2889cb8..a330f58d45fc6 100644
+--- a/drivers/hsi/hsi_core.c
++++ b/drivers/hsi/hsi_core.c
+@@ -102,6 +102,7 @@ struct hsi_client *hsi_new_client(struct hsi_port *port,
+ 	if (device_register(&cl->device) < 0) {
+ 		pr_err("hsi: failed to register client: %s\n", info->name);
+ 		put_device(&cl->device);
++		goto err;
+ 	}
+ 
+ 	return cl;
+diff --git a/drivers/hwmon/mr75203.c b/drivers/hwmon/mr75203.c
+index 18da5a25e89ab..046523d47c29b 100644
+--- a/drivers/hwmon/mr75203.c
++++ b/drivers/hwmon/mr75203.c
+@@ -93,7 +93,7 @@
+ #define VM_CH_REQ	BIT(21)
+ 
+ #define IP_TMR			0x05
+-#define POWER_DELAY_CYCLE_256	0x80
++#define POWER_DELAY_CYCLE_256	0x100
+ #define POWER_DELAY_CYCLE_64	0x40
+ 
+ #define PVT_POLL_DELAY_US	20
+diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c b/drivers/i2c/busses/i2c-designware-pcidrv.c
+index 55c83a7a24f36..56c87ade0e89d 100644
+--- a/drivers/i2c/busses/i2c-designware-pcidrv.c
++++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
+@@ -37,10 +37,10 @@ enum dw_pci_ctl_id_t {
+ };
+ 
+ struct dw_scl_sda_cfg {
+-	u32 ss_hcnt;
+-	u32 fs_hcnt;
+-	u32 ss_lcnt;
+-	u32 fs_lcnt;
++	u16 ss_hcnt;
++	u16 fs_hcnt;
++	u16 ss_lcnt;
++	u16 fs_lcnt;
+ 	u32 sda_hold;
+ };
+ 
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index eab6fd6b890eb..5618c1ff34dc3 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -797,6 +797,11 @@ static int i801_block_transaction(struct i801_priv *priv,
+ 	int result = 0;
+ 	unsigned char hostc;
+ 
++	if (read_write == I2C_SMBUS_READ && command == I2C_SMBUS_BLOCK_DATA)
++		data->block[0] = I2C_SMBUS_BLOCK_MAX;
++	else if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX)
++		return -EPROTO;
++
+ 	if (command == I2C_SMBUS_I2C_BLOCK_DATA) {
+ 		if (read_write == I2C_SMBUS_WRITE) {
+ 			/* set I2C_EN bit in configuration register */
+@@ -810,16 +815,6 @@ static int i801_block_transaction(struct i801_priv *priv,
+ 		}
+ 	}
+ 
+-	if (read_write == I2C_SMBUS_WRITE
+-	 || command == I2C_SMBUS_I2C_BLOCK_DATA) {
+-		if (data->block[0] < 1)
+-			data->block[0] = 1;
+-		if (data->block[0] > I2C_SMBUS_BLOCK_MAX)
+-			data->block[0] = I2C_SMBUS_BLOCK_MAX;
+-	} else {
+-		data->block[0] = 32;	/* max for SMBus block reads */
+-	}
+-
+ 	/* Experience has shown that the block buffer can only be used for
+ 	   SMBus (not I2C) block transactions, even though the datasheet
+ 	   doesn't mention this limitation. */
+diff --git a/drivers/i2c/busses/i2c-mpc.c b/drivers/i2c/busses/i2c-mpc.c
+index af349661fd769..8de8296d25831 100644
+--- a/drivers/i2c/busses/i2c-mpc.c
++++ b/drivers/i2c/busses/i2c-mpc.c
+@@ -105,23 +105,30 @@ static irqreturn_t mpc_i2c_isr(int irq, void *dev_id)
+ /* Sometimes 9th clock pulse isn't generated, and slave doesn't release
+  * the bus, because it wants to send ACK.
+  * Following sequence of enabling/disabling and sending start/stop generates
+- * the 9 pulses, so it's all OK.
++ * the 9 pulses, each with a START then ending with STOP, so it's all OK.
+  */
+ static void mpc_i2c_fixup(struct mpc_i2c *i2c)
+ {
+ 	int k;
+-	u32 delay_val = 1000000 / i2c->real_clk + 1;
+-
+-	if (delay_val < 2)
+-		delay_val = 2;
++	unsigned long flags;
+ 
+ 	for (k = 9; k; k--) {
+ 		writeccr(i2c, 0);
+-		writeccr(i2c, CCR_MSTA | CCR_MTX | CCR_MEN);
++		writeb(0, i2c->base + MPC_I2C_SR); /* clear any status bits */
++		writeccr(i2c, CCR_MEN | CCR_MSTA); /* START */
++		readb(i2c->base + MPC_I2C_DR); /* init xfer */
++		udelay(15); /* let it hit the bus */
++		local_irq_save(flags); /* should not be delayed further */
++		writeccr(i2c, CCR_MEN | CCR_MSTA | CCR_RSTA); /* delay SDA */
+ 		readb(i2c->base + MPC_I2C_DR);
+-		writeccr(i2c, CCR_MEN);
+-		udelay(delay_val << 1);
++		if (k != 1)
++			udelay(5);
++		local_irq_restore(flags);
+ 	}
++	writeccr(i2c, CCR_MEN); /* Initiate STOP */
++	readb(i2c->base + MPC_I2C_DR);
++	udelay(15); /* Let STOP propagate */
++	writeccr(i2c, 0);
+ }
+ 
+ static int i2c_wait(struct mpc_i2c *i2c, unsigned timeout, int writing)
+diff --git a/drivers/iio/adc/ti-adc081c.c b/drivers/iio/adc/ti-adc081c.c
+index b64718daa2017..c79cd88cd4231 100644
+--- a/drivers/iio/adc/ti-adc081c.c
++++ b/drivers/iio/adc/ti-adc081c.c
+@@ -19,6 +19,7 @@
+ #include <linux/i2c.h>
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
++#include <linux/property.h>
+ 
+ #include <linux/iio/iio.h>
+ #include <linux/iio/buffer.h>
+@@ -151,13 +152,16 @@ static int adc081c_probe(struct i2c_client *client,
+ {
+ 	struct iio_dev *iio;
+ 	struct adc081c *adc;
+-	struct adcxx1c_model *model;
++	const struct adcxx1c_model *model;
+ 	int err;
+ 
+ 	if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_WORD_DATA))
+ 		return -EOPNOTSUPP;
+ 
+-	model = &adcxx1c_models[id->driver_data];
++	if (dev_fwnode(&client->dev))
++		model = device_get_match_data(&client->dev);
++	else
++		model = &adcxx1c_models[id->driver_data];
+ 
+ 	iio = devm_iio_device_alloc(&client->dev, sizeof(*adc));
+ 	if (!iio)
+@@ -224,10 +228,17 @@ static const struct i2c_device_id adc081c_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, adc081c_id);
+ 
++static const struct acpi_device_id adc081c_acpi_match[] = {
++	/* Used on some AAEON boards */
++	{ "ADC081C", (kernel_ulong_t)&adcxx1c_models[ADC081C] },
++	{ }
++};
++MODULE_DEVICE_TABLE(acpi, adc081c_acpi_match);
++
+ static const struct of_device_id adc081c_of_match[] = {
+-	{ .compatible = "ti,adc081c" },
+-	{ .compatible = "ti,adc101c" },
+-	{ .compatible = "ti,adc121c" },
++	{ .compatible = "ti,adc081c", .data = &adcxx1c_models[ADC081C] },
++	{ .compatible = "ti,adc101c", .data = &adcxx1c_models[ADC101C] },
++	{ .compatible = "ti,adc121c", .data = &adcxx1c_models[ADC121C] },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, adc081c_of_match);
+@@ -236,6 +247,7 @@ static struct i2c_driver adc081c_driver = {
+ 	.driver = {
+ 		.name = "adc081c",
+ 		.of_match_table = adc081c_of_match,
++		.acpi_match_table = adc081c_acpi_match,
+ 	},
+ 	.probe = adc081c_probe,
+ 	.remove = adc081c_remove,
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 8e54184566f7f..4d4ba09f6cf93 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -775,6 +775,7 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
+ 	unsigned int p;
+ 	u16 pkey, index;
+ 	enum ib_port_state port_state;
++	int ret;
+ 	int i;
+ 
+ 	cma_dev = NULL;
+@@ -793,9 +794,14 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
+ 
+ 			if (ib_get_cached_port_state(cur_dev->device, p, &port_state))
+ 				continue;
+-			for (i = 0; !rdma_query_gid(cur_dev->device,
+-						    p, i, &gid);
+-			     i++) {
++
++			for (i = 0; i < cur_dev->device->port_data[p].immutable.gid_tbl_len;
++			     ++i) {
++				ret = rdma_query_gid(cur_dev->device, p, i,
++						     &gid);
++				if (ret)
++					continue;
++
+ 				if (!memcmp(&gid, dgid, sizeof(gid))) {
+ 					cma_dev = cur_dev;
+ 					sgid = gid;
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index 76b9c436edcd2..aa526c5ca0cf3 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -2411,7 +2411,8 @@ int ib_find_gid(struct ib_device *device, union ib_gid *gid,
+ 		     ++i) {
+ 			ret = rdma_query_gid(device, port, i, &tmp_gid);
+ 			if (ret)
+-				return ret;
++				continue;
++
+ 			if (!memcmp(&tmp_gid, gid, sizeof *gid)) {
+ 				*port_num = port;
+ 				if (index)
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index 441eb421e5e59..5759027914b01 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -614,8 +614,6 @@ int bnxt_qplib_alloc_rcfw_channel(struct bnxt_qplib_res *res,
+ 	if (!cmdq->cmdq_bitmap)
+ 		goto fail;
+ 
+-	cmdq->bmap_size = bmap_size;
+-
+ 	/* Allocate one extra to hold the QP1 entries */
+ 	rcfw->qp_tbl_size = qp_tbl_sz + 1;
+ 	rcfw->qp_tbl = kcalloc(rcfw->qp_tbl_size, sizeof(struct bnxt_qplib_qp_node),
+@@ -663,8 +661,8 @@ void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw)
+ 	iounmap(cmdq->cmdq_mbox.reg.bar_reg);
+ 	iounmap(creq->creq_db.reg.bar_reg);
+ 
+-	indx = find_first_bit(cmdq->cmdq_bitmap, cmdq->bmap_size);
+-	if (indx != cmdq->bmap_size)
++	indx = find_first_bit(cmdq->cmdq_bitmap, rcfw->cmdq_depth);
++	if (indx != rcfw->cmdq_depth)
+ 		dev_err(&rcfw->pdev->dev,
+ 			"disabling RCFW with pending cmd-bit %lx\n", indx);
+ 
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+index 5f2f0a5a3560f..6953f4e53dd20 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+@@ -150,7 +150,6 @@ struct bnxt_qplib_cmdq_ctx {
+ 	wait_queue_head_t		waitq;
+ 	unsigned long			flags;
+ 	unsigned long			*cmdq_bitmap;
+-	u32				bmap_size;
+ 	u32				seq_num;
+ };
+ 
+diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
+index 861e19fdfeb46..12e5461581cb4 100644
+--- a/drivers/infiniband/hw/cxgb4/qp.c
++++ b/drivers/infiniband/hw/cxgb4/qp.c
+@@ -2469,6 +2469,7 @@ int c4iw_ib_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 	memset(attr, 0, sizeof(*attr));
+ 	memset(init_attr, 0, sizeof(*init_attr));
+ 	attr->qp_state = to_ib_qp_state(qhp->attr.state);
++	attr->cur_qp_state = to_ib_qp_state(qhp->attr.state);
+ 	init_attr->cap.max_send_wr = qhp->attr.sq_num_entries;
+ 	init_attr->cap.max_recv_wr = qhp->attr.rq_num_entries;
+ 	init_attr->cap.max_send_sge = qhp->attr.sq_max_sges;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index ba65823a5c0bb..1e8b3e4ef1b17 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -279,6 +279,9 @@ static enum rdma_link_layer hns_roce_get_link_layer(struct ib_device *device,
+ static int hns_roce_query_pkey(struct ib_device *ib_dev, u8 port, u16 index,
+ 			       u16 *pkey)
+ {
++	if (index > 0)
++		return -EINVAL;
++
+ 	*pkey = PKEY_ID;
+ 
+ 	return 0;
+@@ -356,7 +359,7 @@ static int hns_roce_mmap(struct ib_ucontext *context,
+ 		return rdma_user_mmap_io(context, vma,
+ 					 to_hr_ucontext(context)->uar.pfn,
+ 					 PAGE_SIZE,
+-					 pgprot_noncached(vma->vm_page_prot),
++					 pgprot_device(vma->vm_page_prot),
+ 					 NULL);
+ 
+ 	/* vm_pgoff: 1 -- TPTR */
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index 16d5283651894..eeb87f31cd252 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -1918,6 +1918,7 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
+ 	/* db offset was calculated in copy_qp_uresp, now set in the user q */
+ 	if (qedr_qp_has_sq(qp)) {
+ 		qp->usq.db_addr = ctx->dpi_addr + uresp.sq_db_offset;
++		qp->sq.max_wr = attrs->cap.max_send_wr;
+ 		rc = qedr_db_recovery_add(dev, qp->usq.db_addr,
+ 					  &qp->usq.db_rec_data->db_data,
+ 					  DB_REC_WIDTH_32B,
+@@ -1928,6 +1929,7 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
+ 
+ 	if (qedr_qp_has_rq(qp)) {
+ 		qp->urq.db_addr = ctx->dpi_addr + uresp.rq_db_offset;
++		qp->rq.max_wr = attrs->cap.max_recv_wr;
+ 		rc = qedr_db_recovery_add(dev, qp->urq.db_addr,
+ 					  &qp->urq.db_rec_data->db_data,
+ 					  DB_REC_WIDTH_32B,
+diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
+index 0cb4b01fd9101..66ffb516bdaf0 100644
+--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
++++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
+@@ -110,7 +110,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
+ 		}
+ 	},
+ 	[IB_OPCODE_RC_SEND_MIDDLE]		= {
+-		.name	= "IB_OPCODE_RC_SEND_MIDDLE]",
++		.name	= "IB_OPCODE_RC_SEND_MIDDLE",
+ 		.mask	= RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_SEND_MASK
+ 				| RXE_MIDDLE_MASK,
+ 		.length = RXE_BTH_BYTES,
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 28de889aa5164..3f31a52f7044f 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -805,16 +805,27 @@ static int iommu_ga_log_enable(struct amd_iommu *iommu)
+ {
+ #ifdef CONFIG_IRQ_REMAP
+ 	u32 status, i;
++	u64 entry;
+ 
+ 	if (!iommu->ga_log)
+ 		return -EINVAL;
+ 
+-	status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
+-
+ 	/* Check if already running */
+-	if (status & (MMIO_STATUS_GALOG_RUN_MASK))
++	status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
++	if (WARN_ON(status & (MMIO_STATUS_GALOG_RUN_MASK)))
+ 		return 0;
+ 
++	entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512;
++	memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET,
++		    &entry, sizeof(entry));
++	entry = (iommu_virt_to_phys(iommu->ga_log_tail) &
++		 (BIT_ULL(52)-1)) & ~7ULL;
++	memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET,
++		    &entry, sizeof(entry));
++	writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
++	writel(0x00, iommu->mmio_base + MMIO_GA_TAIL_OFFSET);
++
++
+ 	iommu_feature_enable(iommu, CONTROL_GAINT_EN);
+ 	iommu_feature_enable(iommu, CONTROL_GALOG_EN);
+ 
+@@ -824,17 +835,15 @@ static int iommu_ga_log_enable(struct amd_iommu *iommu)
+ 			break;
+ 	}
+ 
+-	if (i >= LOOP_TIMEOUT)
++	if (WARN_ON(i >= LOOP_TIMEOUT))
+ 		return -EINVAL;
+ #endif /* CONFIG_IRQ_REMAP */
+ 	return 0;
+ }
+ 
+-#ifdef CONFIG_IRQ_REMAP
+ static int iommu_init_ga_log(struct amd_iommu *iommu)
+ {
+-	u64 entry;
+-
++#ifdef CONFIG_IRQ_REMAP
+ 	if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir))
+ 		return 0;
+ 
+@@ -848,32 +857,13 @@ static int iommu_init_ga_log(struct amd_iommu *iommu)
+ 	if (!iommu->ga_log_tail)
+ 		goto err_out;
+ 
+-	entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512;
+-	memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET,
+-		    &entry, sizeof(entry));
+-	entry = (iommu_virt_to_phys(iommu->ga_log_tail) &
+-		 (BIT_ULL(52)-1)) & ~7ULL;
+-	memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET,
+-		    &entry, sizeof(entry));
+-	writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
+-	writel(0x00, iommu->mmio_base + MMIO_GA_TAIL_OFFSET);
+-
+ 	return 0;
+ err_out:
+ 	free_ga_log(iommu);
+ 	return -EINVAL;
+-}
+-#endif /* CONFIG_IRQ_REMAP */
+-
+-static int iommu_init_ga(struct amd_iommu *iommu)
+-{
+-	int ret = 0;
+-
+-#ifdef CONFIG_IRQ_REMAP
+-	ret = iommu_init_ga_log(iommu);
++#else
++	return 0;
+ #endif /* CONFIG_IRQ_REMAP */
+-
+-	return ret;
+ }
+ 
+ static int __init alloc_cwwb_sem(struct amd_iommu *iommu)
+@@ -1860,7 +1850,7 @@ static int __init iommu_init_pci(struct amd_iommu *iommu)
+ 	if (iommu_feature(iommu, FEATURE_PPR) && alloc_ppr_log(iommu))
+ 		return -ENOMEM;
+ 
+-	ret = iommu_init_ga(iommu);
++	ret = iommu_init_ga_log(iommu);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index a688f22cbe3b5..3bcd3afe97783 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -242,13 +242,17 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ 			__GFP_ZERO | ARM_V7S_TABLE_GFP_DMA, get_order(size));
+ 	else if (lvl == 2)
+ 		table = kmem_cache_zalloc(data->l2_tables, gfp);
++
++	if (!table)
++		return NULL;
++
+ 	phys = virt_to_phys(table);
+ 	if (phys != (arm_v7s_iopte)phys) {
+ 		/* Doesn't fit in PTE */
+ 		dev_err(dev, "Page table does not fit in PTE: %pa", &phys);
+ 		goto out_free;
+ 	}
+-	if (table && !cfg->coherent_walk) {
++	if (!cfg->coherent_walk) {
+ 		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
+ 		if (dma_mapping_error(dev, dma))
+ 			goto out_free;
+diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
+index bcfbd0e44a4a0..e1cd31c0e3c19 100644
+--- a/drivers/iommu/io-pgtable-arm.c
++++ b/drivers/iommu/io-pgtable-arm.c
+@@ -302,11 +302,12 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
+ static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table,
+ 					     arm_lpae_iopte *ptep,
+ 					     arm_lpae_iopte curr,
+-					     struct io_pgtable_cfg *cfg)
++					     struct arm_lpae_io_pgtable *data)
+ {
+ 	arm_lpae_iopte old, new;
++	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+ 
+-	new = __pa(table) | ARM_LPAE_PTE_TYPE_TABLE;
++	new = paddr_to_iopte(__pa(table), data) | ARM_LPAE_PTE_TYPE_TABLE;
+ 	if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
+ 		new |= ARM_LPAE_PTE_NSTABLE;
+ 
+@@ -357,7 +358,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
+ 		if (!cptep)
+ 			return -ENOMEM;
+ 
+-		pte = arm_lpae_install_table(cptep, ptep, 0, cfg);
++		pte = arm_lpae_install_table(cptep, ptep, 0, data);
+ 		if (pte)
+ 			__arm_lpae_free_pages(cptep, tblsz, cfg);
+ 	} else if (!cfg->coherent_walk && !(pte & ARM_LPAE_PTE_SW_SYNC)) {
+@@ -546,7 +547,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
+ 		__arm_lpae_init_pte(data, blk_paddr, pte, lvl, &tablep[i]);
+ 	}
+ 
+-	pte = arm_lpae_install_table(tablep, ptep, blk_pte, cfg);
++	pte = arm_lpae_install_table(tablep, ptep, blk_pte, data);
+ 	if (pte != blk_pte) {
+ 		__arm_lpae_free_pages(tablep, tablesz, cfg);
+ 		/*
+diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
+index 30d969a4c5fde..1164d1a42cbc5 100644
+--- a/drivers/iommu/iova.c
++++ b/drivers/iommu/iova.c
+@@ -64,8 +64,7 @@ static void free_iova_flush_queue(struct iova_domain *iovad)
+ 	if (!has_iova_flush_queue(iovad))
+ 		return;
+ 
+-	if (timer_pending(&iovad->fq_timer))
+-		del_timer(&iovad->fq_timer);
++	del_timer_sync(&iovad->fq_timer);
+ 
+ 	fq_destroy_all_entries(iovad);
+ 
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 1bdb7acf445f4..04d1b3963b6ba 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -915,6 +915,22 @@ static int __gic_update_rdist_properties(struct redist_region *region,
+ {
+ 	u64 typer = gic_read_typer(ptr + GICR_TYPER);
+ 
++	/* Boot-time cleanip */
++	if ((typer & GICR_TYPER_VLPIS) && (typer & GICR_TYPER_RVPEID)) {
++		u64 val;
++
++		/* Deactivate any present vPE */
++		val = gicr_read_vpendbaser(ptr + SZ_128K + GICR_VPENDBASER);
++		if (val & GICR_VPENDBASER_Valid)
++			gicr_write_vpendbaser(GICR_VPENDBASER_PendingLast,
++					      ptr + SZ_128K + GICR_VPENDBASER);
++
++		/* Mark the VPE table as invalid */
++		val = gicr_read_vpropbaser(ptr + SZ_128K + GICR_VPROPBASER);
++		val &= ~GICR_VPROPBASER_4_1_VALID;
++		gicr_write_vpropbaser(val, ptr + SZ_128K + GICR_VPROPBASER);
++	}
++
+ 	gic_data.rdists.has_vlpis &= !!(typer & GICR_TYPER_VLPIS);
+ 
+ 	/* RVPEID implies some form of DirectLPI, no matter what the doc says... :-/ */
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 19a70f434029b..6030cba5b0382 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1894,8 +1894,10 @@ static struct mapped_device *alloc_dev(int minor)
+ 	if (IS_ENABLED(CONFIG_DAX_DRIVER)) {
+ 		md->dax_dev = alloc_dax(md, md->disk->disk_name,
+ 					&dm_dax_ops, 0);
+-		if (IS_ERR(md->dax_dev))
++		if (IS_ERR(md->dax_dev)) {
++			md->dax_dev = NULL;
+ 			goto bad;
++		}
+ 	}
+ 
+ 	add_disk_no_queue_reg(md->disk);
+diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c
+index ef6e78d45d5b8..ee3e63aa864bf 100644
+--- a/drivers/md/persistent-data/dm-btree.c
++++ b/drivers/md/persistent-data/dm-btree.c
+@@ -83,14 +83,16 @@ void inc_children(struct dm_transaction_manager *tm, struct btree_node *n,
+ }
+ 
+ static int insert_at(size_t value_size, struct btree_node *node, unsigned index,
+-		      uint64_t key, void *value)
+-		      __dm_written_to_disk(value)
++		     uint64_t key, void *value)
++	__dm_written_to_disk(value)
+ {
+ 	uint32_t nr_entries = le32_to_cpu(node->header.nr_entries);
++	uint32_t max_entries = le32_to_cpu(node->header.max_entries);
+ 	__le64 key_le = cpu_to_le64(key);
+ 
+ 	if (index > nr_entries ||
+-	    index >= le32_to_cpu(node->header.max_entries)) {
++	    index >= max_entries ||
++	    nr_entries >= max_entries) {
+ 		DMERR("too many entries in btree node for insert");
+ 		__dm_unbless_for_disk(value);
+ 		return -ENOMEM;
+diff --git a/drivers/md/persistent-data/dm-space-map-common.c b/drivers/md/persistent-data/dm-space-map-common.c
+index a213bf11738fb..85853ab629717 100644
+--- a/drivers/md/persistent-data/dm-space-map-common.c
++++ b/drivers/md/persistent-data/dm-space-map-common.c
+@@ -281,6 +281,11 @@ int sm_ll_lookup_bitmap(struct ll_disk *ll, dm_block_t b, uint32_t *result)
+ 	struct disk_index_entry ie_disk;
+ 	struct dm_block *blk;
+ 
++	if (b >= ll->nr_blocks) {
++		DMERR_LIMIT("metadata block out of bounds");
++		return -EINVAL;
++	}
++
+ 	b = do_div(index, ll->entries_per_block);
+ 	r = ll->load_ie(ll, index, &ie_disk);
+ 	if (r < 0)
+diff --git a/drivers/media/Kconfig b/drivers/media/Kconfig
+index a6d073f2e036a..d157af63be417 100644
+--- a/drivers/media/Kconfig
++++ b/drivers/media/Kconfig
+@@ -142,10 +142,10 @@ config MEDIA_TEST_SUPPORT
+ 	prompt "Test drivers" if MEDIA_SUPPORT_FILTER
+ 	default y if !MEDIA_SUPPORT_FILTER
+ 	help
+-	  Those drivers should not be used on production Kernels, but
+-	  can be useful on debug ones. It enables several dummy drivers
+-	  that simulate a real hardware. Very useful to test userspace
+-	  applications and to validate if the subsystem core is doesn't
++	  These drivers should not be used on production kernels, but
++	  can be useful on debug ones. This option enables several dummy drivers
++	  that simulate real hardware. Very useful to test userspace
++	  applications and to validate if the subsystem core doesn't
+ 	  have regressions.
+ 
+ 	  Say Y if you want to use some virtual test driver.
+diff --git a/drivers/media/cec/core/cec-pin.c b/drivers/media/cec/core/cec-pin.c
+index f006bd8eec63c..f8452a1f9fc6c 100644
+--- a/drivers/media/cec/core/cec-pin.c
++++ b/drivers/media/cec/core/cec-pin.c
+@@ -1033,6 +1033,7 @@ static int cec_pin_thread_func(void *_adap)
+ {
+ 	struct cec_adapter *adap = _adap;
+ 	struct cec_pin *pin = adap->pin;
++	bool irq_enabled = false;
+ 
+ 	for (;;) {
+ 		wait_event_interruptible(pin->kthread_waitq,
+@@ -1060,6 +1061,7 @@ static int cec_pin_thread_func(void *_adap)
+ 				ns_to_ktime(pin->work_rx_msg.rx_ts));
+ 			msg->len = 0;
+ 		}
++
+ 		if (pin->work_tx_status) {
+ 			unsigned int tx_status = pin->work_tx_status;
+ 
+@@ -1083,27 +1085,39 @@ static int cec_pin_thread_func(void *_adap)
+ 		switch (atomic_xchg(&pin->work_irq_change,
+ 				    CEC_PIN_IRQ_UNCHANGED)) {
+ 		case CEC_PIN_IRQ_DISABLE:
+-			pin->ops->disable_irq(adap);
++			if (irq_enabled) {
++				pin->ops->disable_irq(adap);
++				irq_enabled = false;
++			}
+ 			cec_pin_high(pin);
+ 			cec_pin_to_idle(pin);
+ 			hrtimer_start(&pin->timer, ns_to_ktime(0),
+ 				      HRTIMER_MODE_REL);
+ 			break;
+ 		case CEC_PIN_IRQ_ENABLE:
++			if (irq_enabled)
++				break;
+ 			pin->enable_irq_failed = !pin->ops->enable_irq(adap);
+ 			if (pin->enable_irq_failed) {
+ 				cec_pin_to_idle(pin);
+ 				hrtimer_start(&pin->timer, ns_to_ktime(0),
+ 					      HRTIMER_MODE_REL);
++			} else {
++				irq_enabled = true;
+ 			}
+ 			break;
+ 		default:
+ 			break;
+ 		}
+-
+ 		if (kthread_should_stop())
+ 			break;
+ 	}
++	if (pin->ops->disable_irq && irq_enabled)
++		pin->ops->disable_irq(adap);
++	hrtimer_cancel(&pin->timer);
++	cec_pin_read(pin);
++	cec_pin_to_idle(pin);
++	pin->state = CEC_ST_OFF;
+ 	return 0;
+ }
+ 
+@@ -1130,13 +1144,7 @@ static int cec_pin_adap_enable(struct cec_adapter *adap, bool enable)
+ 		hrtimer_start(&pin->timer, ns_to_ktime(0),
+ 			      HRTIMER_MODE_REL);
+ 	} else {
+-		if (pin->ops->disable_irq)
+-			pin->ops->disable_irq(adap);
+-		hrtimer_cancel(&pin->timer);
+ 		kthread_stop(pin->kthread);
+-		cec_pin_read(pin);
+-		cec_pin_to_idle(pin);
+-		pin->state = CEC_ST_OFF;
+ 	}
+ 	return 0;
+ }
+@@ -1157,11 +1165,8 @@ void cec_pin_start_timer(struct cec_pin *pin)
+ 	if (pin->state != CEC_ST_RX_IRQ)
+ 		return;
+ 
+-	atomic_set(&pin->work_irq_change, CEC_PIN_IRQ_UNCHANGED);
+-	pin->ops->disable_irq(pin->adap);
+-	cec_pin_high(pin);
+-	cec_pin_to_idle(pin);
+-	hrtimer_start(&pin->timer, ns_to_ktime(0), HRTIMER_MODE_REL);
++	atomic_set(&pin->work_irq_change, CEC_PIN_IRQ_DISABLE);
++	wake_up_interruptible(&pin->kthread_waitq);
+ }
+ 
+ static int cec_pin_adap_transmit(struct cec_adapter *adap, u8 attempts,
+diff --git a/drivers/media/common/saa7146/saa7146_fops.c b/drivers/media/common/saa7146/saa7146_fops.c
+index d6531874faa65..8047e305f3d01 100644
+--- a/drivers/media/common/saa7146/saa7146_fops.c
++++ b/drivers/media/common/saa7146/saa7146_fops.c
+@@ -523,7 +523,7 @@ int saa7146_vv_init(struct saa7146_dev* dev, struct saa7146_ext_vv *ext_vv)
+ 		ERR("out of memory. aborting.\n");
+ 		kfree(vv);
+ 		v4l2_ctrl_handler_free(hdl);
+-		return -1;
++		return -ENOMEM;
+ 	}
+ 
+ 	saa7146_video_uops.init(dev,vv);
+diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+index 2f3a5996d3fc9..fe626109ef4db 100644
+--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
++++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+@@ -150,7 +150,7 @@ static void *vb2_dc_alloc(struct device *dev, unsigned long attrs,
+ 	buf->cookie = dma_alloc_attrs(dev, size, &buf->dma_addr,
+ 					GFP_KERNEL | gfp_flags, buf->attrs);
+ 	if (!buf->cookie) {
+-		dev_err(dev, "dma_alloc_coherent of size %ld failed\n", size);
++		dev_err(dev, "dma_alloc_coherent of size %lu failed\n", size);
+ 		kfree(buf);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+@@ -196,9 +196,9 @@ static int vb2_dc_mmap(void *buf_priv, struct vm_area_struct *vma)
+ 
+ 	vma->vm_ops->open(vma);
+ 
+-	pr_debug("%s: mapped dma addr 0x%08lx at 0x%08lx, size %ld\n",
+-		__func__, (unsigned long)buf->dma_addr, vma->vm_start,
+-		buf->size);
++	pr_debug("%s: mapped dma addr 0x%08lx at 0x%08lx, size %lu\n",
++		 __func__, (unsigned long)buf->dma_addr, vma->vm_start,
++		 buf->size);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/dvb-core/dmxdev.c b/drivers/media/dvb-core/dmxdev.c
+index f14a872d12687..e58cb8434dafe 100644
+--- a/drivers/media/dvb-core/dmxdev.c
++++ b/drivers/media/dvb-core/dmxdev.c
+@@ -1413,7 +1413,7 @@ static const struct dvb_device dvbdev_dvr = {
+ };
+ int dvb_dmxdev_init(struct dmxdev *dmxdev, struct dvb_adapter *dvb_adapter)
+ {
+-	int i;
++	int i, ret;
+ 
+ 	if (dmxdev->demux->open(dmxdev->demux) < 0)
+ 		return -EUSERS;
+@@ -1432,14 +1432,26 @@ int dvb_dmxdev_init(struct dmxdev *dmxdev, struct dvb_adapter *dvb_adapter)
+ 					    DMXDEV_STATE_FREE);
+ 	}
+ 
+-	dvb_register_device(dvb_adapter, &dmxdev->dvbdev, &dvbdev_demux, dmxdev,
++	ret = dvb_register_device(dvb_adapter, &dmxdev->dvbdev, &dvbdev_demux, dmxdev,
+ 			    DVB_DEVICE_DEMUX, dmxdev->filternum);
+-	dvb_register_device(dvb_adapter, &dmxdev->dvr_dvbdev, &dvbdev_dvr,
++	if (ret < 0)
++		goto err_register_dvbdev;
++
++	ret = dvb_register_device(dvb_adapter, &dmxdev->dvr_dvbdev, &dvbdev_dvr,
+ 			    dmxdev, DVB_DEVICE_DVR, dmxdev->filternum);
++	if (ret < 0)
++		goto err_register_dvr_dvbdev;
+ 
+ 	dvb_ringbuffer_init(&dmxdev->dvr_buffer, NULL, 8192);
+ 
+ 	return 0;
++
++err_register_dvr_dvbdev:
++	dvb_unregister_device(dmxdev->dvbdev);
++err_register_dvbdev:
++	vfree(dmxdev->filter);
++	dmxdev->filter = NULL;
++	return ret;
+ }
+ 
+ EXPORT_SYMBOL(dvb_dmxdev_init);
+diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c
+index bb02354a48b81..d67f2dd997d06 100644
+--- a/drivers/media/dvb-frontends/dib8000.c
++++ b/drivers/media/dvb-frontends/dib8000.c
+@@ -4473,8 +4473,10 @@ static struct dvb_frontend *dib8000_init(struct i2c_adapter *i2c_adap, u8 i2c_ad
+ 
+ 	state->timf_default = cfg->pll->timf;
+ 
+-	if (dib8000_identify(&state->i2c) == 0)
++	if (dib8000_identify(&state->i2c) == 0) {
++		kfree(fe);
+ 		goto error;
++	}
+ 
+ 	dibx000_init_i2c_master(&state->i2c_master, DIB8000, state->i2c.adap, state->i2c.addr);
+ 
+diff --git a/drivers/media/pci/b2c2/flexcop-pci.c b/drivers/media/pci/b2c2/flexcop-pci.c
+index a9d9520a94c6d..c9e6c7d663768 100644
+--- a/drivers/media/pci/b2c2/flexcop-pci.c
++++ b/drivers/media/pci/b2c2/flexcop-pci.c
+@@ -185,6 +185,8 @@ static irqreturn_t flexcop_pci_isr(int irq, void *dev_id)
+ 		dma_addr_t cur_addr =
+ 			fc->read_ibi_reg(fc,dma1_008).dma_0x8.dma_cur_addr << 2;
+ 		u32 cur_pos = cur_addr - fc_pci->dma[0].dma_addr0;
++		if (cur_pos > fc_pci->dma[0].size * 2)
++			goto error;
+ 
+ 		deb_irq("%u irq: %08x cur_addr: %llx: cur_pos: %08x, last_cur_pos: %08x ",
+ 				jiffies_to_usecs(jiffies - fc_pci->last_irq),
+@@ -225,6 +227,7 @@ static irqreturn_t flexcop_pci_isr(int irq, void *dev_id)
+ 		ret = IRQ_NONE;
+ 	}
+ 
++error:
+ 	spin_unlock_irqrestore(&fc_pci->irq_lock, flags);
+ 	return ret;
+ }
+diff --git a/drivers/media/pci/saa7146/hexium_gemini.c b/drivers/media/pci/saa7146/hexium_gemini.c
+index 2214c74bbbf15..3947701cd6c7e 100644
+--- a/drivers/media/pci/saa7146/hexium_gemini.c
++++ b/drivers/media/pci/saa7146/hexium_gemini.c
+@@ -284,7 +284,12 @@ static int hexium_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_d
+ 	hexium_set_input(hexium, 0);
+ 	hexium->cur_input = 0;
+ 
+-	saa7146_vv_init(dev, &vv_data);
++	ret = saa7146_vv_init(dev, &vv_data);
++	if (ret) {
++		i2c_del_adapter(&hexium->i2c_adapter);
++		kfree(hexium);
++		return ret;
++	}
+ 
+ 	vv_data.vid_ops.vidioc_enum_input = vidioc_enum_input;
+ 	vv_data.vid_ops.vidioc_g_input = vidioc_g_input;
+diff --git a/drivers/media/pci/saa7146/hexium_orion.c b/drivers/media/pci/saa7146/hexium_orion.c
+index 39d14c179d229..2eb4bee16b71f 100644
+--- a/drivers/media/pci/saa7146/hexium_orion.c
++++ b/drivers/media/pci/saa7146/hexium_orion.c
+@@ -355,10 +355,16 @@ static struct saa7146_ext_vv vv_data;
+ static int hexium_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_data *info)
+ {
+ 	struct hexium *hexium = (struct hexium *) dev->ext_priv;
++	int ret;
+ 
+ 	DEB_EE("\n");
+ 
+-	saa7146_vv_init(dev, &vv_data);
++	ret = saa7146_vv_init(dev, &vv_data);
++	if (ret) {
++		pr_err("Error in saa7146_vv_init()\n");
++		return ret;
++	}
++
+ 	vv_data.vid_ops.vidioc_enum_input = vidioc_enum_input;
+ 	vv_data.vid_ops.vidioc_g_input = vidioc_g_input;
+ 	vv_data.vid_ops.vidioc_s_input = vidioc_s_input;
+diff --git a/drivers/media/pci/saa7146/mxb.c b/drivers/media/pci/saa7146/mxb.c
+index 73fc901ecf3db..bf0b9b0914cd5 100644
+--- a/drivers/media/pci/saa7146/mxb.c
++++ b/drivers/media/pci/saa7146/mxb.c
+@@ -683,10 +683,16 @@ static struct saa7146_ext_vv vv_data;
+ static int mxb_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_data *info)
+ {
+ 	struct mxb *mxb;
++	int ret;
+ 
+ 	DEB_EE("dev:%p\n", dev);
+ 
+-	saa7146_vv_init(dev, &vv_data);
++	ret = saa7146_vv_init(dev, &vv_data);
++	if (ret) {
++		ERR("Error in saa7146_vv_init()");
++		return ret;
++	}
++
+ 	if (mxb_probe(dev)) {
+ 		saa7146_vv_release(dev);
+ 		return -1;
+diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
+index 7bb6babdcade0..debc7509c173c 100644
+--- a/drivers/media/platform/aspeed-video.c
++++ b/drivers/media/platform/aspeed-video.c
+@@ -500,6 +500,10 @@ static void aspeed_video_enable_mode_detect(struct aspeed_video *video)
+ 	aspeed_video_update(video, VE_INTERRUPT_CTRL, 0,
+ 			    VE_INTERRUPT_MODE_DETECT);
+ 
++	/* Disable mode detect in order to re-trigger */
++	aspeed_video_update(video, VE_SEQ_CTRL,
++			    VE_SEQ_CTRL_TRIG_MODE_DET, 0);
++
+ 	/* Trigger mode detect */
+ 	aspeed_video_update(video, VE_SEQ_CTRL, 0, VE_SEQ_CTRL_TRIG_MODE_DET);
+ }
+@@ -552,6 +556,8 @@ static void aspeed_video_irq_res_change(struct aspeed_video *video, ulong delay)
+ 	set_bit(VIDEO_RES_CHANGE, &video->flags);
+ 	clear_bit(VIDEO_FRAME_INPRG, &video->flags);
+ 
++	video->v4l2_input_status = V4L2_IN_ST_NO_SIGNAL;
++
+ 	aspeed_video_off(video);
+ 	aspeed_video_bufs_done(video, VB2_BUF_STATE_ERROR);
+ 
+@@ -786,10 +792,6 @@ static void aspeed_video_get_resolution(struct aspeed_video *video)
+ 			return;
+ 		}
+ 
+-		/* Disable mode detect in order to re-trigger */
+-		aspeed_video_update(video, VE_SEQ_CTRL,
+-				    VE_SEQ_CTRL_TRIG_MODE_DET, 0);
+-
+ 		aspeed_video_check_and_set_polarity(video);
+ 
+ 		aspeed_video_enable_mode_detect(video);
+@@ -1337,7 +1339,6 @@ static void aspeed_video_resolution_work(struct work_struct *work)
+ 	struct delayed_work *dwork = to_delayed_work(work);
+ 	struct aspeed_video *video = container_of(dwork, struct aspeed_video,
+ 						  res_work);
+-	u32 input_status = video->v4l2_input_status;
+ 
+ 	aspeed_video_on(video);
+ 
+@@ -1350,8 +1351,7 @@ static void aspeed_video_resolution_work(struct work_struct *work)
+ 	aspeed_video_get_resolution(video);
+ 
+ 	if (video->detected_timings.width != video->active_timings.width ||
+-	    video->detected_timings.height != video->active_timings.height ||
+-	    input_status != video->v4l2_input_status) {
++	    video->detected_timings.height != video->active_timings.height) {
+ 		static const struct v4l2_event ev = {
+ 			.type = V4L2_EVENT_SOURCE_CHANGE,
+ 			.u.src_change.changes = V4L2_EVENT_SRC_CH_RESOLUTION,
+diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c
+index 87a2c706f7477..1eed69d29149f 100644
+--- a/drivers/media/platform/coda/coda-common.c
++++ b/drivers/media/platform/coda/coda-common.c
+@@ -1537,11 +1537,13 @@ static void coda_pic_run_work(struct work_struct *work)
+ 
+ 	if (!wait_for_completion_timeout(&ctx->completion,
+ 					 msecs_to_jiffies(1000))) {
+-		dev_err(dev->dev, "CODA PIC_RUN timeout\n");
++		if (ctx->use_bit) {
++			dev_err(dev->dev, "CODA PIC_RUN timeout\n");
+ 
+-		ctx->hold = true;
++			ctx->hold = true;
+ 
+-		coda_hw_reset(ctx);
++			coda_hw_reset(ctx);
++		}
+ 
+ 		if (ctx->ops->run_timeout)
+ 			ctx->ops->run_timeout(ctx);
+diff --git a/drivers/media/platform/coda/coda-jpeg.c b/drivers/media/platform/coda/coda-jpeg.c
+index b11cfbe166dd3..a72f4655e5ad5 100644
+--- a/drivers/media/platform/coda/coda-jpeg.c
++++ b/drivers/media/platform/coda/coda-jpeg.c
+@@ -1127,7 +1127,8 @@ static int coda9_jpeg_prepare_encode(struct coda_ctx *ctx)
+ 	coda_write(dev, 0, CODA9_REG_JPEG_GBU_BT_PTR);
+ 	coda_write(dev, 0, CODA9_REG_JPEG_GBU_WD_PTR);
+ 	coda_write(dev, 0, CODA9_REG_JPEG_GBU_BBSR);
+-	coda_write(dev, 0, CODA9_REG_JPEG_BBC_STRM_CTRL);
++	coda_write(dev, BIT(31) | ((end_addr - start_addr - header_len) / 256),
++		   CODA9_REG_JPEG_BBC_STRM_CTRL);
+ 	coda_write(dev, 0, CODA9_REG_JPEG_GBU_CTRL);
+ 	coda_write(dev, 0, CODA9_REG_JPEG_GBU_FF_RPTR);
+ 	coda_write(dev, 127, CODA9_REG_JPEG_GBU_BBER);
+@@ -1257,6 +1258,23 @@ static void coda9_jpeg_finish_encode(struct coda_ctx *ctx)
+ 	coda_hw_reset(ctx);
+ }
+ 
++static void coda9_jpeg_encode_timeout(struct coda_ctx *ctx)
++{
++	struct coda_dev *dev = ctx->dev;
++	u32 end_addr, wr_ptr;
++
++	/* Handle missing BBC overflow interrupt via timeout */
++	end_addr = coda_read(dev, CODA9_REG_JPEG_BBC_END_ADDR);
++	wr_ptr = coda_read(dev, CODA9_REG_JPEG_BBC_WR_PTR);
++	if (wr_ptr >= end_addr - 256) {
++		v4l2_err(&dev->v4l2_dev, "JPEG too large for capture buffer\n");
++		coda9_jpeg_finish_encode(ctx);
++		return;
++	}
++
++	coda_hw_reset(ctx);
++}
++
+ static void coda9_jpeg_release(struct coda_ctx *ctx)
+ {
+ 	int i;
+@@ -1276,6 +1294,7 @@ const struct coda_context_ops coda9_jpeg_encode_ops = {
+ 	.start_streaming = coda9_jpeg_start_encoding,
+ 	.prepare_run = coda9_jpeg_prepare_encode,
+ 	.finish_run = coda9_jpeg_finish_encode,
++	.run_timeout = coda9_jpeg_encode_timeout,
+ 	.release = coda9_jpeg_release,
+ };
+ 
+diff --git a/drivers/media/platform/coda/imx-vdoa.c b/drivers/media/platform/coda/imx-vdoa.c
+index 8bc0d83718193..dd6e2e320264e 100644
+--- a/drivers/media/platform/coda/imx-vdoa.c
++++ b/drivers/media/platform/coda/imx-vdoa.c
+@@ -287,7 +287,11 @@ static int vdoa_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	int ret;
+ 
+-	dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
++	ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
++	if (ret) {
++		dev_err(&pdev->dev, "DMA enable failed\n");
++		return ret;
++	}
+ 
+ 	vdoa = devm_kzalloc(&pdev->dev, sizeof(*vdoa), GFP_KERNEL);
+ 	if (!vdoa)
+diff --git a/drivers/media/platform/imx-pxp.c b/drivers/media/platform/imx-pxp.c
+index 08d76eb05ed1a..62356adebc39e 100644
+--- a/drivers/media/platform/imx-pxp.c
++++ b/drivers/media/platform/imx-pxp.c
+@@ -1664,6 +1664,8 @@ static int pxp_probe(struct platform_device *pdev)
+ 	if (irq < 0)
+ 		return irq;
+ 
++	spin_lock_init(&dev->irqlock);
++
+ 	ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, pxp_irq_handler,
+ 			IRQF_ONESHOT, dev_name(&pdev->dev), dev);
+ 	if (ret < 0) {
+@@ -1681,8 +1683,6 @@ static int pxp_probe(struct platform_device *pdev)
+ 		goto err_clk;
+ 	}
+ 
+-	spin_lock_init(&dev->irqlock);
+-
+ 	ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
+ 	if (ret)
+ 		goto err_clk;
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
+index 219c2c5b78efc..5f93bc670edb2 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
+@@ -237,11 +237,11 @@ static int fops_vcodec_release(struct file *file)
+ 	mtk_v4l2_debug(1, "[%d] encoder", ctx->id);
+ 	mutex_lock(&dev->dev_mutex);
+ 
++	v4l2_m2m_ctx_release(ctx->m2m_ctx);
+ 	mtk_vcodec_enc_release(ctx);
+ 	v4l2_fh_del(&ctx->fh);
+ 	v4l2_fh_exit(&ctx->fh);
+ 	v4l2_ctrl_handler_free(&ctx->ctrl_hdl);
+-	v4l2_m2m_ctx_release(ctx->m2m_ctx);
+ 
+ 	list_del_init(&ctx->list);
+ 	kfree(ctx);
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index 58ddebbb84468..1d621f7769035 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -222,7 +222,6 @@ static int venus_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	core->dev = dev;
+-	platform_set_drvdata(pdev, core);
+ 
+ 	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	core->base = devm_ioremap_resource(dev, r);
+@@ -252,7 +251,7 @@ static int venus_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 
+ 	if (core->pm_ops->core_get) {
+-		ret = core->pm_ops->core_get(dev);
++		ret = core->pm_ops->core_get(core);
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -277,6 +276,12 @@ static int venus_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_core_put;
+ 
++	ret = v4l2_device_register(dev, &core->v4l2_dev);
++	if (ret)
++		goto err_core_deinit;
++
++	platform_set_drvdata(pdev, core);
++
+ 	pm_runtime_enable(dev);
+ 
+ 	ret = pm_runtime_get_sync(dev);
+@@ -289,11 +294,11 @@ static int venus_probe(struct platform_device *pdev)
+ 
+ 	ret = venus_firmware_init(core);
+ 	if (ret)
+-		goto err_runtime_disable;
++		goto err_of_depopulate;
+ 
+ 	ret = venus_boot(core);
+ 	if (ret)
+-		goto err_runtime_disable;
++		goto err_firmware_deinit;
+ 
+ 	ret = hfi_core_resume(core, true);
+ 	if (ret)
+@@ -311,10 +316,6 @@ static int venus_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_venus_shutdown;
+ 
+-	ret = v4l2_device_register(dev, &core->v4l2_dev);
+-	if (ret)
+-		goto err_core_deinit;
+-
+ 	ret = pm_runtime_put_sync(dev);
+ 	if (ret) {
+ 		pm_runtime_get_noresume(dev);
+@@ -327,18 +328,22 @@ static int venus_probe(struct platform_device *pdev)
+ 
+ err_dev_unregister:
+ 	v4l2_device_unregister(&core->v4l2_dev);
+-err_core_deinit:
+-	hfi_core_deinit(core, false);
+ err_venus_shutdown:
+ 	venus_shutdown(core);
++err_firmware_deinit:
++	venus_firmware_deinit(core);
++err_of_depopulate:
++	of_platform_depopulate(dev);
+ err_runtime_disable:
+ 	pm_runtime_put_noidle(dev);
+ 	pm_runtime_set_suspended(dev);
+ 	pm_runtime_disable(dev);
+ 	hfi_destroy(core);
++err_core_deinit:
++	hfi_core_deinit(core, false);
+ err_core_put:
+ 	if (core->pm_ops->core_put)
+-		core->pm_ops->core_put(dev);
++		core->pm_ops->core_put(core);
+ 	return ret;
+ }
+ 
+@@ -364,11 +369,14 @@ static int venus_remove(struct platform_device *pdev)
+ 	pm_runtime_disable(dev);
+ 
+ 	if (pm_ops->core_put)
+-		pm_ops->core_put(dev);
++		pm_ops->core_put(core);
++
++	v4l2_device_unregister(&core->v4l2_dev);
+ 
+ 	hfi_destroy(core);
+ 
+ 	v4l2_device_unregister(&core->v4l2_dev);
++
+ 	mutex_destroy(&core->pm_lock);
+ 	mutex_destroy(&core->lock);
+ 	venus_dbgfs_deinit(core);
+@@ -387,7 +395,7 @@ static __maybe_unused int venus_runtime_suspend(struct device *dev)
+ 		return ret;
+ 
+ 	if (pm_ops->core_power) {
+-		ret = pm_ops->core_power(dev, POWER_OFF);
++		ret = pm_ops->core_power(core, POWER_OFF);
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -405,7 +413,8 @@ static __maybe_unused int venus_runtime_suspend(struct device *dev)
+ err_video_path:
+ 	icc_set_bw(core->cpucfg_path, kbps_to_icc(1000), 0);
+ err_cpucfg_path:
+-	pm_ops->core_power(dev, POWER_ON);
++	if (pm_ops->core_power)
++		pm_ops->core_power(core, POWER_ON);
+ 
+ 	return ret;
+ }
+@@ -425,7 +434,7 @@ static __maybe_unused int venus_runtime_resume(struct device *dev)
+ 		return ret;
+ 
+ 	if (pm_ops->core_power) {
+-		ret = pm_ops->core_power(dev, POWER_ON);
++		ret = pm_ops->core_power(core, POWER_ON);
+ 		if (ret)
+ 			return ret;
+ 	}
+diff --git a/drivers/media/platform/qcom/venus/core.h b/drivers/media/platform/qcom/venus/core.h
+index 05c9fbd51f0c0..f2a0ef9ee884e 100644
+--- a/drivers/media/platform/qcom/venus/core.h
++++ b/drivers/media/platform/qcom/venus/core.h
+@@ -123,7 +123,6 @@ struct venus_caps {
+  * @clks:	an array of struct clk pointers
+  * @vcodec0_clks: an array of vcodec0 struct clk pointers
+  * @vcodec1_clks: an array of vcodec1 struct clk pointers
+- * @pd_dl_venus: pmdomain device-link for venus domain
+  * @pmdomains:	an array of pmdomains struct device pointers
+  * @vdev_dec:	a reference to video device structure for decoder instances
+  * @vdev_enc:	a reference to video device structure for encoder instances
+@@ -161,7 +160,6 @@ struct venus_core {
+ 	struct icc_path *cpucfg_path;
+ 	struct opp_table *opp_table;
+ 	bool has_opp_table;
+-	struct device_link *pd_dl_venus;
+ 	struct device *pmdomains[VIDC_PMDOMAINS_NUM_MAX];
+ 	struct device_link *opp_dl_venus;
+ 	struct device *opp_pmdomain;
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
+index 2946547a0df4a..710f9a2b132b0 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.c
++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
+@@ -147,14 +147,12 @@ static u32 load_per_type(struct venus_core *core, u32 session_type)
+ 	struct venus_inst *inst = NULL;
+ 	u32 mbs_per_sec = 0;
+ 
+-	mutex_lock(&core->lock);
+ 	list_for_each_entry(inst, &core->instances, list) {
+ 		if (inst->session_type != session_type)
+ 			continue;
+ 
+ 		mbs_per_sec += load_per_instance(inst);
+ 	}
+-	mutex_unlock(&core->lock);
+ 
+ 	return mbs_per_sec;
+ }
+@@ -203,14 +201,12 @@ static int load_scale_bw(struct venus_core *core)
+ 	struct venus_inst *inst = NULL;
+ 	u32 mbs_per_sec, avg, peak, total_avg = 0, total_peak = 0;
+ 
+-	mutex_lock(&core->lock);
+ 	list_for_each_entry(inst, &core->instances, list) {
+ 		mbs_per_sec = load_per_instance(inst);
+ 		mbs_to_bw(inst, mbs_per_sec, &avg, &peak);
+ 		total_avg += avg;
+ 		total_peak += peak;
+ 	}
+-	mutex_unlock(&core->lock);
+ 
+ 	/*
+ 	 * keep minimum bandwidth vote for "video-mem" path,
+@@ -237,8 +233,9 @@ static int load_scale_v1(struct venus_inst *inst)
+ 	struct device *dev = core->dev;
+ 	u32 mbs_per_sec;
+ 	unsigned int i;
+-	int ret;
++	int ret = 0;
+ 
++	mutex_lock(&core->lock);
+ 	mbs_per_sec = load_per_type(core, VIDC_SESSION_TYPE_ENC) +
+ 		      load_per_type(core, VIDC_SESSION_TYPE_DEC);
+ 
+@@ -263,29 +260,28 @@ set_freq:
+ 	if (ret) {
+ 		dev_err(dev, "failed to set clock rate %lu (%d)\n",
+ 			freq, ret);
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	ret = load_scale_bw(core);
+ 	if (ret) {
+ 		dev_err(dev, "failed to set bandwidth (%d)\n",
+ 			ret);
+-		return ret;
++		goto exit;
+ 	}
+ 
+-	return 0;
++exit:
++	mutex_unlock(&core->lock);
++	return ret;
+ }
+ 
+-static int core_get_v1(struct device *dev)
++static int core_get_v1(struct venus_core *core)
+ {
+-	struct venus_core *core = dev_get_drvdata(dev);
+-
+ 	return core_clks_get(core);
+ }
+ 
+-static int core_power_v1(struct device *dev, int on)
++static int core_power_v1(struct venus_core *core, int on)
+ {
+-	struct venus_core *core = dev_get_drvdata(dev);
+ 	int ret = 0;
+ 
+ 	if (on == POWER_ON)
+@@ -752,12 +748,12 @@ static int venc_power_v4(struct device *dev, int on)
+ 	return ret;
+ }
+ 
+-static int vcodec_domains_get(struct device *dev)
++static int vcodec_domains_get(struct venus_core *core)
+ {
+ 	int ret;
+ 	struct opp_table *opp_table;
+ 	struct device **opp_virt_dev;
+-	struct venus_core *core = dev_get_drvdata(dev);
++	struct device *dev = core->dev;
+ 	const struct venus_resources *res = core->res;
+ 	struct device *pd;
+ 	unsigned int i;
+@@ -773,13 +769,6 @@ static int vcodec_domains_get(struct device *dev)
+ 		core->pmdomains[i] = pd;
+ 	}
+ 
+-	core->pd_dl_venus = device_link_add(dev, core->pmdomains[0],
+-					    DL_FLAG_PM_RUNTIME |
+-					    DL_FLAG_STATELESS |
+-					    DL_FLAG_RPM_ACTIVE);
+-	if (!core->pd_dl_venus)
+-		return -ENODEV;
+-
+ skip_pmdomains:
+ 	if (!core->has_opp_table)
+ 		return 0;
+@@ -806,29 +795,23 @@ skip_pmdomains:
+ opp_dl_add_err:
+ 	dev_pm_opp_detach_genpd(core->opp_table);
+ opp_attach_err:
+-	if (core->pd_dl_venus) {
+-		device_link_del(core->pd_dl_venus);
+-		for (i = 0; i < res->vcodec_pmdomains_num; i++) {
+-			if (IS_ERR_OR_NULL(core->pmdomains[i]))
+-				continue;
+-			dev_pm_domain_detach(core->pmdomains[i], true);
+-		}
++	for (i = 0; i < res->vcodec_pmdomains_num; i++) {
++		if (IS_ERR_OR_NULL(core->pmdomains[i]))
++			continue;
++		dev_pm_domain_detach(core->pmdomains[i], true);
+ 	}
++
+ 	return ret;
+ }
+ 
+-static void vcodec_domains_put(struct device *dev)
++static void vcodec_domains_put(struct venus_core *core)
+ {
+-	struct venus_core *core = dev_get_drvdata(dev);
+ 	const struct venus_resources *res = core->res;
+ 	unsigned int i;
+ 
+ 	if (!res->vcodec_pmdomains_num)
+ 		goto skip_pmdomains;
+ 
+-	if (core->pd_dl_venus)
+-		device_link_del(core->pd_dl_venus);
+-
+ 	for (i = 0; i < res->vcodec_pmdomains_num; i++) {
+ 		if (IS_ERR_OR_NULL(core->pmdomains[i]))
+ 			continue;
+@@ -845,9 +828,9 @@ skip_pmdomains:
+ 	dev_pm_opp_detach_genpd(core->opp_table);
+ }
+ 
+-static int core_get_v4(struct device *dev)
++static int core_get_v4(struct venus_core *core)
+ {
+-	struct venus_core *core = dev_get_drvdata(dev);
++	struct device *dev = core->dev;
+ 	const struct venus_resources *res = core->res;
+ 	int ret;
+ 
+@@ -886,7 +869,7 @@ static int core_get_v4(struct device *dev)
+ 		}
+ 	}
+ 
+-	ret = vcodec_domains_get(dev);
++	ret = vcodec_domains_get(core);
+ 	if (ret) {
+ 		if (core->has_opp_table)
+ 			dev_pm_opp_of_remove_table(dev);
+@@ -897,14 +880,14 @@ static int core_get_v4(struct device *dev)
+ 	return 0;
+ }
+ 
+-static void core_put_v4(struct device *dev)
++static void core_put_v4(struct venus_core *core)
+ {
+-	struct venus_core *core = dev_get_drvdata(dev);
++	struct device *dev = core->dev;
+ 
+ 	if (legacy_binding)
+ 		return;
+ 
+-	vcodec_domains_put(dev);
++	vcodec_domains_put(core);
+ 
+ 	if (core->has_opp_table)
+ 		dev_pm_opp_of_remove_table(dev);
+@@ -913,19 +896,33 @@ static void core_put_v4(struct device *dev)
+ 
+ }
+ 
+-static int core_power_v4(struct device *dev, int on)
++static int core_power_v4(struct venus_core *core, int on)
+ {
+-	struct venus_core *core = dev_get_drvdata(dev);
++	struct device *dev = core->dev;
++	struct device *pmctrl = core->pmdomains[0];
+ 	int ret = 0;
+ 
+ 	if (on == POWER_ON) {
++		if (pmctrl) {
++			ret = pm_runtime_get_sync(pmctrl);
++			if (ret < 0) {
++				pm_runtime_put_noidle(pmctrl);
++				return ret;
++			}
++		}
++
+ 		ret = core_clks_enable(core);
++		if (ret < 0 && pmctrl)
++			pm_runtime_put_sync(pmctrl);
+ 	} else {
+ 		/* Drop the performance state vote */
+ 		if (core->opp_pmdomain)
+ 			dev_pm_opp_set_rate(dev, 0);
+ 
+ 		core_clks_disable(core);
++
++		if (pmctrl)
++			pm_runtime_put_sync(pmctrl);
+ 	}
+ 
+ 	return ret;
+@@ -962,13 +959,13 @@ static int load_scale_v4(struct venus_inst *inst)
+ 	struct device *dev = core->dev;
+ 	unsigned long freq = 0, freq_core1 = 0, freq_core2 = 0;
+ 	unsigned long filled_len = 0;
+-	int i, ret;
++	int i, ret = 0;
+ 
+ 	for (i = 0; i < inst->num_input_bufs; i++)
+ 		filled_len = max(filled_len, inst->payloads[i]);
+ 
+ 	if (inst->session_type == VIDC_SESSION_TYPE_DEC && !filled_len)
+-		return 0;
++		return ret;
+ 
+ 	freq = calculate_inst_freq(inst, filled_len);
+ 	inst->clk_data.freq = freq;
+@@ -984,7 +981,6 @@ static int load_scale_v4(struct venus_inst *inst)
+ 			freq_core2 += inst->clk_data.freq;
+ 		}
+ 	}
+-	mutex_unlock(&core->lock);
+ 
+ 	freq = max(freq_core1, freq_core2);
+ 
+@@ -1008,17 +1004,19 @@ set_freq:
+ 	if (ret) {
+ 		dev_err(dev, "failed to set clock rate %lu (%d)\n",
+ 			freq, ret);
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	ret = load_scale_bw(core);
+ 	if (ret) {
+ 		dev_err(dev, "failed to set bandwidth (%d)\n",
+ 			ret);
+-		return ret;
++		goto exit;
+ 	}
+ 
+-	return 0;
++exit:
++	mutex_unlock(&core->lock);
++	return ret;
+ }
+ 
+ static const struct venus_pm_ops pm_ops_v4 = {
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.h b/drivers/media/platform/qcom/venus/pm_helpers.h
+index aa2f6afa23544..a492c50c5543c 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.h
++++ b/drivers/media/platform/qcom/venus/pm_helpers.h
+@@ -4,14 +4,15 @@
+ #define __VENUS_PM_HELPERS_H__
+ 
+ struct device;
++struct venus_core;
+ 
+ #define POWER_ON	1
+ #define POWER_OFF	0
+ 
+ struct venus_pm_ops {
+-	int (*core_get)(struct device *dev);
+-	void (*core_put)(struct device *dev);
+-	int (*core_power)(struct device *dev, int on);
++	int (*core_get)(struct venus_core *core);
++	void (*core_put)(struct venus_core *core);
++	int (*core_power)(struct venus_core *core, int on);
+ 
+ 	int (*vdec_get)(struct device *dev);
+ 	void (*vdec_put)(struct device *dev);
+diff --git a/drivers/media/platform/rcar-vin/rcar-csi2.c b/drivers/media/platform/rcar-vin/rcar-csi2.c
+index d2d87a204e918..5e8e48a721a04 100644
+--- a/drivers/media/platform/rcar-vin/rcar-csi2.c
++++ b/drivers/media/platform/rcar-vin/rcar-csi2.c
+@@ -436,16 +436,23 @@ static int rcsi2_wait_phy_start(struct rcar_csi2 *priv,
+ static int rcsi2_set_phypll(struct rcar_csi2 *priv, unsigned int mbps)
+ {
+ 	const struct rcsi2_mbps_reg *hsfreq;
++	const struct rcsi2_mbps_reg *hsfreq_prev = NULL;
+ 
+-	for (hsfreq = priv->info->hsfreqrange; hsfreq->mbps != 0; hsfreq++)
++	for (hsfreq = priv->info->hsfreqrange; hsfreq->mbps != 0; hsfreq++) {
+ 		if (hsfreq->mbps >= mbps)
+ 			break;
++		hsfreq_prev = hsfreq;
++	}
+ 
+ 	if (!hsfreq->mbps) {
+ 		dev_err(priv->dev, "Unsupported PHY speed (%u Mbps)", mbps);
+ 		return -ERANGE;
+ 	}
+ 
++	if (hsfreq_prev &&
++	    ((mbps - hsfreq_prev->mbps) <= (hsfreq->mbps - mbps)))
++		hsfreq = hsfreq_prev;
++
+ 	rcsi2_write(priv, PHYPLL_REG, PHYPLL_HSFREQRANGE(hsfreq->reg));
+ 
+ 	return 0;
+@@ -969,10 +976,17 @@ static int rcsi2_phtw_write_mbps(struct rcar_csi2 *priv, unsigned int mbps,
+ 				 const struct rcsi2_mbps_reg *values, u16 code)
+ {
+ 	const struct rcsi2_mbps_reg *value;
++	const struct rcsi2_mbps_reg *prev_value = NULL;
+ 
+-	for (value = values; value->mbps; value++)
++	for (value = values; value->mbps; value++) {
+ 		if (value->mbps >= mbps)
+ 			break;
++		prev_value = value;
++	}
++
++	if (prev_value &&
++	    ((mbps - prev_value->mbps) <= (value->mbps - mbps)))
++		value = prev_value;
+ 
+ 	if (!value->mbps) {
+ 		dev_err(priv->dev, "Unsupported PHY speed (%u Mbps)", mbps);
+diff --git a/drivers/media/platform/rcar-vin/rcar-v4l2.c b/drivers/media/platform/rcar-vin/rcar-v4l2.c
+index 3e7a3ae2a6b97..0bbe6f9f92062 100644
+--- a/drivers/media/platform/rcar-vin/rcar-v4l2.c
++++ b/drivers/media/platform/rcar-vin/rcar-v4l2.c
+@@ -175,20 +175,27 @@ static void rvin_format_align(struct rvin_dev *vin, struct v4l2_pix_format *pix)
+ 		break;
+ 	}
+ 
+-	/* HW limit width to a multiple of 32 (2^5) for NV12/16 else 2 (2^1) */
++	/* Hardware limits width alignment based on format. */
+ 	switch (pix->pixelformat) {
++	/* Multiple of 32 (2^5) for NV12/16. */
+ 	case V4L2_PIX_FMT_NV12:
+ 	case V4L2_PIX_FMT_NV16:
+ 		walign = 5;
+ 		break;
+-	default:
++	/* Multiple of 2 (2^1) for YUV. */
++	case V4L2_PIX_FMT_YUYV:
++	case V4L2_PIX_FMT_UYVY:
+ 		walign = 1;
+ 		break;
++	/* No multiple for RGB. */
++	default:
++		walign = 0;
++		break;
+ 	}
+ 
+ 	/* Limit to VIN capabilities */
+-	v4l_bound_align_image(&pix->width, 2, vin->info->max_width, walign,
+-			      &pix->height, 4, vin->info->max_height, 2, 0);
++	v4l_bound_align_image(&pix->width, 5, vin->info->max_width, walign,
++			      &pix->height, 2, vin->info->max_height, 0, 0);
+ 
+ 	pix->bytesperline = rvin_format_bytesperline(vin, pix);
+ 	pix->sizeimage = rvin_format_sizeimage(pix);
+diff --git a/drivers/media/radio/si470x/radio-si470x-i2c.c b/drivers/media/radio/si470x/radio-si470x-i2c.c
+index a972c0705ac79..76d39e2e87706 100644
+--- a/drivers/media/radio/si470x/radio-si470x-i2c.c
++++ b/drivers/media/radio/si470x/radio-si470x-i2c.c
+@@ -368,7 +368,7 @@ static int si470x_i2c_probe(struct i2c_client *client)
+ 	if (radio->hdl.error) {
+ 		retval = radio->hdl.error;
+ 		dev_err(&client->dev, "couldn't register control\n");
+-		goto err_dev;
++		goto err_all;
+ 	}
+ 
+ 	/* video device initialization */
+@@ -463,7 +463,6 @@ static int si470x_i2c_probe(struct i2c_client *client)
+ 	return 0;
+ err_all:
+ 	v4l2_ctrl_handler_free(&radio->hdl);
+-err_dev:
+ 	v4l2_device_unregister(&radio->v4l2_dev);
+ err_initial:
+ 	return retval;
+diff --git a/drivers/media/rc/igorplugusb.c b/drivers/media/rc/igorplugusb.c
+index effaa5751d6c9..3e9988ee785f0 100644
+--- a/drivers/media/rc/igorplugusb.c
++++ b/drivers/media/rc/igorplugusb.c
+@@ -64,9 +64,11 @@ static void igorplugusb_irdata(struct igorplugusb *ir, unsigned len)
+ 	if (start >= len) {
+ 		dev_err(ir->dev, "receive overflow invalid: %u", overflow);
+ 	} else {
+-		if (overflow > 0)
++		if (overflow > 0) {
+ 			dev_warn(ir->dev, "receive overflow, at least %u lost",
+ 								overflow);
++			ir_raw_event_reset(ir->rc);
++		}
+ 
+ 		do {
+ 			rawir.duration = ir->buf_in[i] * 85;
+diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
+index 8870c4e6c5f44..dbb5a4f44bda5 100644
+--- a/drivers/media/rc/mceusb.c
++++ b/drivers/media/rc/mceusb.c
+@@ -1430,7 +1430,7 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
+ 	 */
+ 	ret = usb_control_msg(ir->usbdev, usb_rcvctrlpipe(ir->usbdev, 0),
+ 			      USB_REQ_SET_ADDRESS, USB_TYPE_VENDOR, 0, 0,
+-			      data, USB_CTRL_MSG_SZ, HZ * 3);
++			      data, USB_CTRL_MSG_SZ, 3000);
+ 	dev_dbg(dev, "set address - ret = %d", ret);
+ 	dev_dbg(dev, "set address - data[0] = %d, data[1] = %d",
+ 						data[0], data[1]);
+@@ -1438,20 +1438,20 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
+ 	/* set feature: bit rate 38400 bps */
+ 	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
+ 			      USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
+-			      0xc04e, 0x0000, NULL, 0, HZ * 3);
++			      0xc04e, 0x0000, NULL, 0, 3000);
+ 
+ 	dev_dbg(dev, "set feature - ret = %d", ret);
+ 
+ 	/* bRequest 4: set char length to 8 bits */
+ 	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
+ 			      4, USB_TYPE_VENDOR,
+-			      0x0808, 0x0000, NULL, 0, HZ * 3);
++			      0x0808, 0x0000, NULL, 0, 3000);
+ 	dev_dbg(dev, "set char length - retB = %d", ret);
+ 
+ 	/* bRequest 2: set handshaking to use DTR/DSR */
+ 	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
+ 			      2, USB_TYPE_VENDOR,
+-			      0x0000, 0x0100, NULL, 0, HZ * 3);
++			      0x0000, 0x0100, NULL, 0, 3000);
+ 	dev_dbg(dev, "set handshake  - retC = %d", ret);
+ 
+ 	/* device resume */
+diff --git a/drivers/media/rc/redrat3.c b/drivers/media/rc/redrat3.c
+index 2cf3377ec63a7..a61f9820ade95 100644
+--- a/drivers/media/rc/redrat3.c
++++ b/drivers/media/rc/redrat3.c
+@@ -404,7 +404,7 @@ static int redrat3_send_cmd(int cmd, struct redrat3_dev *rr3)
+ 	udev = rr3->udev;
+ 	res = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), cmd,
+ 			      USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-			      0x0000, 0x0000, data, sizeof(u8), HZ * 10);
++			      0x0000, 0x0000, data, sizeof(u8), 10000);
+ 
+ 	if (res < 0) {
+ 		dev_err(rr3->dev, "%s: Error sending rr3 cmd res %d, data %d",
+@@ -480,7 +480,7 @@ static u32 redrat3_get_timeout(struct redrat3_dev *rr3)
+ 	pipe = usb_rcvctrlpipe(rr3->udev, 0);
+ 	ret = usb_control_msg(rr3->udev, pipe, RR3_GET_IR_PARAM,
+ 			      USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-			      RR3_IR_IO_SIG_TIMEOUT, 0, tmp, len, HZ * 5);
++			      RR3_IR_IO_SIG_TIMEOUT, 0, tmp, len, 5000);
+ 	if (ret != len)
+ 		dev_warn(rr3->dev, "Failed to read timeout from hardware\n");
+ 	else {
+@@ -510,7 +510,7 @@ static int redrat3_set_timeout(struct rc_dev *rc_dev, unsigned int timeoutus)
+ 	ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), RR3_SET_IR_PARAM,
+ 		     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+ 		     RR3_IR_IO_SIG_TIMEOUT, 0, timeout, sizeof(*timeout),
+-		     HZ * 25);
++		     25000);
+ 	dev_dbg(dev, "set ir parm timeout %d ret 0x%02x\n",
+ 						be32_to_cpu(*timeout), ret);
+ 
+@@ -542,32 +542,32 @@ static void redrat3_reset(struct redrat3_dev *rr3)
+ 	*val = 0x01;
+ 	rc = usb_control_msg(udev, rxpipe, RR3_RESET,
+ 			     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-			     RR3_CPUCS_REG_ADDR, 0, val, len, HZ * 25);
++			     RR3_CPUCS_REG_ADDR, 0, val, len, 25000);
+ 	dev_dbg(dev, "reset returned 0x%02x\n", rc);
+ 
+ 	*val = length_fuzz;
+ 	rc = usb_control_msg(udev, txpipe, RR3_SET_IR_PARAM,
+ 			     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+-			     RR3_IR_IO_LENGTH_FUZZ, 0, val, len, HZ * 25);
++			     RR3_IR_IO_LENGTH_FUZZ, 0, val, len, 25000);
+ 	dev_dbg(dev, "set ir parm len fuzz %d rc 0x%02x\n", *val, rc);
+ 
+ 	*val = (65536 - (minimum_pause * 2000)) / 256;
+ 	rc = usb_control_msg(udev, txpipe, RR3_SET_IR_PARAM,
+ 			     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+-			     RR3_IR_IO_MIN_PAUSE, 0, val, len, HZ * 25);
++			     RR3_IR_IO_MIN_PAUSE, 0, val, len, 25000);
+ 	dev_dbg(dev, "set ir parm min pause %d rc 0x%02x\n", *val, rc);
+ 
+ 	*val = periods_measure_carrier;
+ 	rc = usb_control_msg(udev, txpipe, RR3_SET_IR_PARAM,
+ 			     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+-			     RR3_IR_IO_PERIODS_MF, 0, val, len, HZ * 25);
++			     RR3_IR_IO_PERIODS_MF, 0, val, len, 25000);
+ 	dev_dbg(dev, "set ir parm periods measure carrier %d rc 0x%02x", *val,
+ 									rc);
+ 
+ 	*val = RR3_DRIVER_MAXLENS;
+ 	rc = usb_control_msg(udev, txpipe, RR3_SET_IR_PARAM,
+ 			     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+-			     RR3_IR_IO_MAX_LENGTHS, 0, val, len, HZ * 25);
++			     RR3_IR_IO_MAX_LENGTHS, 0, val, len, 25000);
+ 	dev_dbg(dev, "set ir parm max lens %d rc 0x%02x\n", *val, rc);
+ 
+ 	kfree(val);
+@@ -585,7 +585,7 @@ static void redrat3_get_firmware_rev(struct redrat3_dev *rr3)
+ 	rc = usb_control_msg(rr3->udev, usb_rcvctrlpipe(rr3->udev, 0),
+ 			     RR3_FW_VERSION,
+ 			     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-			     0, 0, buffer, RR3_FW_VERSION_LEN, HZ * 5);
++			     0, 0, buffer, RR3_FW_VERSION_LEN, 5000);
+ 
+ 	if (rc >= 0)
+ 		dev_info(rr3->dev, "Firmware rev: %s", buffer);
+@@ -825,14 +825,14 @@ static int redrat3_transmit_ir(struct rc_dev *rcdev, unsigned *txbuf,
+ 
+ 	pipe = usb_sndbulkpipe(rr3->udev, rr3->ep_out->bEndpointAddress);
+ 	ret = usb_bulk_msg(rr3->udev, pipe, irdata,
+-			    sendbuf_len, &ret_len, 10 * HZ);
++			    sendbuf_len, &ret_len, 10000);
+ 	dev_dbg(dev, "sent %d bytes, (ret %d)\n", ret_len, ret);
+ 
+ 	/* now tell the hardware to transmit what we sent it */
+ 	pipe = usb_rcvctrlpipe(rr3->udev, 0);
+ 	ret = usb_control_msg(rr3->udev, pipe, RR3_TX_SEND_SIGNAL,
+ 			      USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-			      0, 0, irdata, 2, HZ * 10);
++			      0, 0, irdata, 2, 10000);
+ 
+ 	if (ret < 0)
+ 		dev_err(dev, "Error: control msg send failed, rc %d\n", ret);
+diff --git a/drivers/media/tuners/msi001.c b/drivers/media/tuners/msi001.c
+index 78e6fd600d8ef..44247049a3190 100644
+--- a/drivers/media/tuners/msi001.c
++++ b/drivers/media/tuners/msi001.c
+@@ -442,6 +442,13 @@ static int msi001_probe(struct spi_device *spi)
+ 			V4L2_CID_RF_TUNER_BANDWIDTH_AUTO, 0, 1, 1, 1);
+ 	dev->bandwidth = v4l2_ctrl_new_std(&dev->hdl, &msi001_ctrl_ops,
+ 			V4L2_CID_RF_TUNER_BANDWIDTH, 200000, 8000000, 1, 200000);
++	if (dev->hdl.error) {
++		ret = dev->hdl.error;
++		dev_err(&spi->dev, "Could not initialize controls\n");
++		/* control init failed, free handler */
++		goto err_ctrl_handler_free;
++	}
++
+ 	v4l2_ctrl_auto_cluster(2, &dev->bandwidth_auto, 0, false);
+ 	dev->lna_gain = v4l2_ctrl_new_std(&dev->hdl, &msi001_ctrl_ops,
+ 			V4L2_CID_RF_TUNER_LNA_GAIN, 0, 1, 1, 1);
+diff --git a/drivers/media/tuners/si2157.c b/drivers/media/tuners/si2157.c
+index fefb2625f6558..75ddf7ed1faff 100644
+--- a/drivers/media/tuners/si2157.c
++++ b/drivers/media/tuners/si2157.c
+@@ -90,7 +90,7 @@ static int si2157_init(struct dvb_frontend *fe)
+ 	dev_dbg(&client->dev, "\n");
+ 
+ 	/* Try to get Xtal trim property, to verify tuner still running */
+-	memcpy(cmd.args, "\x15\x00\x04\x02", 4);
++	memcpy(cmd.args, "\x15\x00\x02\x04", 4);
+ 	cmd.wlen = 4;
+ 	cmd.rlen = 4;
+ 	ret = si2157_cmd_execute(client, &cmd);
+diff --git a/drivers/media/usb/b2c2/flexcop-usb.c b/drivers/media/usb/b2c2/flexcop-usb.c
+index e731243267e49..a2563c2540808 100644
+--- a/drivers/media/usb/b2c2/flexcop-usb.c
++++ b/drivers/media/usb/b2c2/flexcop-usb.c
+@@ -87,7 +87,7 @@ static int flexcop_usb_readwrite_dw(struct flexcop_device *fc, u16 wRegOffsPCI,
+ 			0,
+ 			fc_usb->data,
+ 			sizeof(u32),
+-			B2C2_WAIT_FOR_OPERATION_RDW * HZ);
++			B2C2_WAIT_FOR_OPERATION_RDW);
+ 
+ 	if (ret != sizeof(u32)) {
+ 		err("error while %s dword from %d (%d).", read ? "reading" :
+@@ -155,7 +155,7 @@ static int flexcop_usb_v8_memory_req(struct flexcop_usb *fc_usb,
+ 			wIndex,
+ 			fc_usb->data,
+ 			buflen,
+-			nWaitTime * HZ);
++			nWaitTime);
+ 	if (ret != buflen)
+ 		ret = -EIO;
+ 
+@@ -249,13 +249,13 @@ static int flexcop_usb_i2c_req(struct flexcop_i2c_adapter *i2c,
+ 		/* DKT 020208 - add this to support special case of DiSEqC */
+ 	case USB_FUNC_I2C_CHECKWRITE:
+ 		pipe = B2C2_USB_CTRL_PIPE_OUT;
+-		nWaitTime = 2;
++		nWaitTime = 2000;
+ 		request_type |= USB_DIR_OUT;
+ 		break;
+ 	case USB_FUNC_I2C_READ:
+ 	case USB_FUNC_I2C_REPEATREAD:
+ 		pipe = B2C2_USB_CTRL_PIPE_IN;
+-		nWaitTime = 2;
++		nWaitTime = 2000;
+ 		request_type |= USB_DIR_IN;
+ 		break;
+ 	default:
+@@ -282,7 +282,7 @@ static int flexcop_usb_i2c_req(struct flexcop_i2c_adapter *i2c,
+ 			wIndex,
+ 			fc_usb->data,
+ 			buflen,
+-			nWaitTime * HZ);
++			nWaitTime);
+ 
+ 	if (ret != buflen)
+ 		ret = -EIO;
+diff --git a/drivers/media/usb/b2c2/flexcop-usb.h b/drivers/media/usb/b2c2/flexcop-usb.h
+index 2f230bf72252b..c7cca1a5ee59d 100644
+--- a/drivers/media/usb/b2c2/flexcop-usb.h
++++ b/drivers/media/usb/b2c2/flexcop-usb.h
+@@ -91,13 +91,13 @@ typedef enum {
+ 	UTILITY_SRAM_TESTVERIFY     = 0x16,
+ } flexcop_usb_utility_function_t;
+ 
+-#define B2C2_WAIT_FOR_OPERATION_RW (1*HZ)
+-#define B2C2_WAIT_FOR_OPERATION_RDW (3*HZ)
+-#define B2C2_WAIT_FOR_OPERATION_WDW (1*HZ)
++#define B2C2_WAIT_FOR_OPERATION_RW 1000
++#define B2C2_WAIT_FOR_OPERATION_RDW 3000
++#define B2C2_WAIT_FOR_OPERATION_WDW 1000
+ 
+-#define B2C2_WAIT_FOR_OPERATION_V8READ (3*HZ)
+-#define B2C2_WAIT_FOR_OPERATION_V8WRITE (3*HZ)
+-#define B2C2_WAIT_FOR_OPERATION_V8FLASH (3*HZ)
++#define B2C2_WAIT_FOR_OPERATION_V8READ 3000
++#define B2C2_WAIT_FOR_OPERATION_V8WRITE 3000
++#define B2C2_WAIT_FOR_OPERATION_V8FLASH 3000
+ 
+ typedef enum {
+ 	V8_MEMORY_PAGE_DVB_CI = 0x20,
+diff --git a/drivers/media/usb/cpia2/cpia2_usb.c b/drivers/media/usb/cpia2/cpia2_usb.c
+index 76aac06f9fb8e..cba03b2864738 100644
+--- a/drivers/media/usb/cpia2/cpia2_usb.c
++++ b/drivers/media/usb/cpia2/cpia2_usb.c
+@@ -550,7 +550,7 @@ static int write_packet(struct usb_device *udev,
+ 			       0,	/* index */
+ 			       buf,	/* buffer */
+ 			       size,
+-			       HZ);
++			       1000);
+ 
+ 	kfree(buf);
+ 	return ret;
+@@ -582,7 +582,7 @@ static int read_packet(struct usb_device *udev,
+ 			       0,	/* index */
+ 			       buf,	/* buffer */
+ 			       size,
+-			       HZ);
++			       1000);
+ 
+ 	if (ret >= 0)
+ 		memcpy(registers, buf, size);
+diff --git a/drivers/media/usb/dvb-usb/dib0700_core.c b/drivers/media/usb/dvb-usb/dib0700_core.c
+index 70219b3e85666..7ea8f68b0f458 100644
+--- a/drivers/media/usb/dvb-usb/dib0700_core.c
++++ b/drivers/media/usb/dvb-usb/dib0700_core.c
+@@ -618,8 +618,6 @@ int dib0700_streaming_ctrl(struct dvb_usb_adapter *adap, int onoff)
+ 		deb_info("the endpoint number (%i) is not correct, use the adapter id instead", adap->fe_adap[0].stream.props.endpoint);
+ 		if (onoff)
+ 			st->channel_state |=	1 << (adap->id);
+-		else
+-			st->channel_state |=	1 << ~(adap->id);
+ 	} else {
+ 		if (onoff)
+ 			st->channel_state |=	1 << (adap->fe_adap[0].stream.props.endpoint-2);
+diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
+index a27a684403252..aa929db56db1f 100644
+--- a/drivers/media/usb/dvb-usb/dw2102.c
++++ b/drivers/media/usb/dvb-usb/dw2102.c
+@@ -2148,46 +2148,153 @@ static struct dvb_usb_device_properties s6x0_properties = {
+ 	}
+ };
+ 
+-static const struct dvb_usb_device_description d1100 = {
+-	"Prof 1100 USB ",
+-	{&dw2102_table[PROF_1100], NULL},
+-	{NULL},
+-};
++static struct dvb_usb_device_properties p1100_properties = {
++	.caps = DVB_USB_IS_AN_I2C_ADAPTER,
++	.usb_ctrl = DEVICE_SPECIFIC,
++	.size_of_priv = sizeof(struct dw2102_state),
++	.firmware = P1100_FIRMWARE,
++	.no_reconnect = 1,
+ 
+-static const struct dvb_usb_device_description d660 = {
+-	"TeVii S660 USB",
+-	{&dw2102_table[TEVII_S660], NULL},
+-	{NULL},
+-};
++	.i2c_algo = &s6x0_i2c_algo,
++	.rc.core = {
++		.rc_interval = 150,
++		.rc_codes = RC_MAP_TBS_NEC,
++		.module_name = "dw2102",
++		.allowed_protos   = RC_PROTO_BIT_NEC,
++		.rc_query = prof_rc_query,
++	},
+ 
+-static const struct dvb_usb_device_description d480_1 = {
+-	"TeVii S480.1 USB",
+-	{&dw2102_table[TEVII_S480_1], NULL},
+-	{NULL},
++	.generic_bulk_ctrl_endpoint = 0x81,
++	.num_adapters = 1,
++	.download_firmware = dw2102_load_firmware,
++	.read_mac_address = s6x0_read_mac_address,
++	.adapter = {
++		{
++			.num_frontends = 1,
++			.fe = {{
++				.frontend_attach = stv0288_frontend_attach,
++				.stream = {
++					.type = USB_BULK,
++					.count = 8,
++					.endpoint = 0x82,
++					.u = {
++						.bulk = {
++							.buffersize = 4096,
++						}
++					}
++				},
++			} },
++		}
++	},
++	.num_device_descs = 1,
++	.devices = {
++		{"Prof 1100 USB ",
++			{&dw2102_table[PROF_1100], NULL},
++			{NULL},
++		},
++	}
+ };
+ 
+-static const struct dvb_usb_device_description d480_2 = {
+-	"TeVii S480.2 USB",
+-	{&dw2102_table[TEVII_S480_2], NULL},
+-	{NULL},
+-};
++static struct dvb_usb_device_properties s660_properties = {
++	.caps = DVB_USB_IS_AN_I2C_ADAPTER,
++	.usb_ctrl = DEVICE_SPECIFIC,
++	.size_of_priv = sizeof(struct dw2102_state),
++	.firmware = S660_FIRMWARE,
++	.no_reconnect = 1,
+ 
+-static const struct dvb_usb_device_description d7500 = {
+-	"Prof 7500 USB DVB-S2",
+-	{&dw2102_table[PROF_7500], NULL},
+-	{NULL},
+-};
++	.i2c_algo = &s6x0_i2c_algo,
++	.rc.core = {
++		.rc_interval = 150,
++		.rc_codes = RC_MAP_TEVII_NEC,
++		.module_name = "dw2102",
++		.allowed_protos   = RC_PROTO_BIT_NEC,
++		.rc_query = dw2102_rc_query,
++	},
+ 
+-static const struct dvb_usb_device_description d421 = {
+-	"TeVii S421 PCI",
+-	{&dw2102_table[TEVII_S421], NULL},
+-	{NULL},
++	.generic_bulk_ctrl_endpoint = 0x81,
++	.num_adapters = 1,
++	.download_firmware = dw2102_load_firmware,
++	.read_mac_address = s6x0_read_mac_address,
++	.adapter = {
++		{
++			.num_frontends = 1,
++			.fe = {{
++				.frontend_attach = ds3000_frontend_attach,
++				.stream = {
++					.type = USB_BULK,
++					.count = 8,
++					.endpoint = 0x82,
++					.u = {
++						.bulk = {
++							.buffersize = 4096,
++						}
++					}
++				},
++			} },
++		}
++	},
++	.num_device_descs = 3,
++	.devices = {
++		{"TeVii S660 USB",
++			{&dw2102_table[TEVII_S660], NULL},
++			{NULL},
++		},
++		{"TeVii S480.1 USB",
++			{&dw2102_table[TEVII_S480_1], NULL},
++			{NULL},
++		},
++		{"TeVii S480.2 USB",
++			{&dw2102_table[TEVII_S480_2], NULL},
++			{NULL},
++		},
++	}
+ };
+ 
+-static const struct dvb_usb_device_description d632 = {
+-	"TeVii S632 USB",
+-	{&dw2102_table[TEVII_S632], NULL},
+-	{NULL},
++static struct dvb_usb_device_properties p7500_properties = {
++	.caps = DVB_USB_IS_AN_I2C_ADAPTER,
++	.usb_ctrl = DEVICE_SPECIFIC,
++	.size_of_priv = sizeof(struct dw2102_state),
++	.firmware = P7500_FIRMWARE,
++	.no_reconnect = 1,
++
++	.i2c_algo = &s6x0_i2c_algo,
++	.rc.core = {
++		.rc_interval = 150,
++		.rc_codes = RC_MAP_TBS_NEC,
++		.module_name = "dw2102",
++		.allowed_protos   = RC_PROTO_BIT_NEC,
++		.rc_query = prof_rc_query,
++	},
++
++	.generic_bulk_ctrl_endpoint = 0x81,
++	.num_adapters = 1,
++	.download_firmware = dw2102_load_firmware,
++	.read_mac_address = s6x0_read_mac_address,
++	.adapter = {
++		{
++			.num_frontends = 1,
++			.fe = {{
++				.frontend_attach = prof_7500_frontend_attach,
++				.stream = {
++					.type = USB_BULK,
++					.count = 8,
++					.endpoint = 0x82,
++					.u = {
++						.bulk = {
++							.buffersize = 4096,
++						}
++					}
++				},
++			} },
++		}
++	},
++	.num_device_descs = 1,
++	.devices = {
++		{"Prof 7500 USB DVB-S2",
++			{&dw2102_table[PROF_7500], NULL},
++			{NULL},
++		},
++	}
+ };
+ 
+ static struct dvb_usb_device_properties su3000_properties = {
+@@ -2267,6 +2374,59 @@ static struct dvb_usb_device_properties su3000_properties = {
+ 	}
+ };
+ 
++static struct dvb_usb_device_properties s421_properties = {
++	.caps = DVB_USB_IS_AN_I2C_ADAPTER,
++	.usb_ctrl = DEVICE_SPECIFIC,
++	.size_of_priv = sizeof(struct dw2102_state),
++	.power_ctrl = su3000_power_ctrl,
++	.num_adapters = 1,
++	.identify_state	= su3000_identify_state,
++	.i2c_algo = &su3000_i2c_algo,
++
++	.rc.core = {
++		.rc_interval = 150,
++		.rc_codes = RC_MAP_SU3000,
++		.module_name = "dw2102",
++		.allowed_protos   = RC_PROTO_BIT_RC5,
++		.rc_query = su3000_rc_query,
++	},
++
++	.read_mac_address = su3000_read_mac_address,
++
++	.generic_bulk_ctrl_endpoint = 0x01,
++
++	.adapter = {
++		{
++		.num_frontends = 1,
++		.fe = {{
++			.streaming_ctrl   = su3000_streaming_ctrl,
++			.frontend_attach  = m88rs2000_frontend_attach,
++			.stream = {
++				.type = USB_BULK,
++				.count = 8,
++				.endpoint = 0x82,
++				.u = {
++					.bulk = {
++						.buffersize = 4096,
++					}
++				}
++			}
++		} },
++		}
++	},
++	.num_device_descs = 2,
++	.devices = {
++		{ "TeVii S421 PCI",
++			{ &dw2102_table[TEVII_S421], NULL },
++			{ NULL },
++		},
++		{ "TeVii S632 USB",
++			{ &dw2102_table[TEVII_S632], NULL },
++			{ NULL },
++		},
++	}
++};
++
+ static struct dvb_usb_device_properties t220_properties = {
+ 	.caps = DVB_USB_IS_AN_I2C_ADAPTER,
+ 	.usb_ctrl = DEVICE_SPECIFIC,
+@@ -2384,101 +2544,33 @@ static struct dvb_usb_device_properties tt_s2_4600_properties = {
+ static int dw2102_probe(struct usb_interface *intf,
+ 		const struct usb_device_id *id)
+ {
+-	int retval = -ENOMEM;
+-	struct dvb_usb_device_properties *p1100;
+-	struct dvb_usb_device_properties *s660;
+-	struct dvb_usb_device_properties *p7500;
+-	struct dvb_usb_device_properties *s421;
+-
+-	p1100 = kmemdup(&s6x0_properties,
+-			sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
+-	if (!p1100)
+-		goto err0;
+-
+-	/* copy default structure */
+-	/* fill only different fields */
+-	p1100->firmware = P1100_FIRMWARE;
+-	p1100->devices[0] = d1100;
+-	p1100->rc.core.rc_query = prof_rc_query;
+-	p1100->rc.core.rc_codes = RC_MAP_TBS_NEC;
+-	p1100->adapter->fe[0].frontend_attach = stv0288_frontend_attach;
+-
+-	s660 = kmemdup(&s6x0_properties,
+-		       sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
+-	if (!s660)
+-		goto err1;
+-
+-	s660->firmware = S660_FIRMWARE;
+-	s660->num_device_descs = 3;
+-	s660->devices[0] = d660;
+-	s660->devices[1] = d480_1;
+-	s660->devices[2] = d480_2;
+-	s660->adapter->fe[0].frontend_attach = ds3000_frontend_attach;
+-
+-	p7500 = kmemdup(&s6x0_properties,
+-			sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
+-	if (!p7500)
+-		goto err2;
+-
+-	p7500->firmware = P7500_FIRMWARE;
+-	p7500->devices[0] = d7500;
+-	p7500->rc.core.rc_query = prof_rc_query;
+-	p7500->rc.core.rc_codes = RC_MAP_TBS_NEC;
+-	p7500->adapter->fe[0].frontend_attach = prof_7500_frontend_attach;
+-
+-
+-	s421 = kmemdup(&su3000_properties,
+-		       sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
+-	if (!s421)
+-		goto err3;
+-
+-	s421->num_device_descs = 2;
+-	s421->devices[0] = d421;
+-	s421->devices[1] = d632;
+-	s421->adapter->fe[0].frontend_attach = m88rs2000_frontend_attach;
+-
+-	if (0 == dvb_usb_device_init(intf, &dw2102_properties,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, &dw2104_properties,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, &dw3101_properties,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, &s6x0_properties,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, p1100,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, s660,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, p7500,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, s421,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, &su3000_properties,
+-			 THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, &t220_properties,
+-			 THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, &tt_s2_4600_properties,
+-			 THIS_MODULE, NULL, adapter_nr)) {
+-
+-		/* clean up copied properties */
+-		kfree(s421);
+-		kfree(p7500);
+-		kfree(s660);
+-		kfree(p1100);
++	if (!(dvb_usb_device_init(intf, &dw2102_properties,
++			          THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &dw2104_properties,
++				  THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &dw3101_properties,
++			          THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &s6x0_properties,
++			          THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &p1100_properties,
++			          THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &s660_properties,
++				  THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &p7500_properties,
++				  THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &s421_properties,
++				  THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &su3000_properties,
++				  THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &t220_properties,
++				  THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &tt_s2_4600_properties,
++				  THIS_MODULE, NULL, adapter_nr))) {
+ 
+ 		return 0;
+ 	}
+ 
+-	retval = -ENODEV;
+-	kfree(s421);
+-err3:
+-	kfree(p7500);
+-err2:
+-	kfree(s660);
+-err1:
+-	kfree(p1100);
+-err0:
+-	return retval;
++	return -ENODEV;
+ }
+ 
+ static void dw2102_disconnect(struct usb_interface *intf)
+diff --git a/drivers/media/usb/dvb-usb/m920x.c b/drivers/media/usb/dvb-usb/m920x.c
+index 4bb5b82599a79..691e05833db19 100644
+--- a/drivers/media/usb/dvb-usb/m920x.c
++++ b/drivers/media/usb/dvb-usb/m920x.c
+@@ -274,6 +274,13 @@ static int m920x_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int nu
+ 			/* Should check for ack here, if we knew how. */
+ 		}
+ 		if (msg[i].flags & I2C_M_RD) {
++			char *read = kmalloc(1, GFP_KERNEL);
++			if (!read) {
++				ret = -ENOMEM;
++				kfree(read);
++				goto unlock;
++			}
++
+ 			for (j = 0; j < msg[i].len; j++) {
+ 				/* Last byte of transaction?
+ 				 * Send STOP, otherwise send ACK. */
+@@ -281,9 +288,12 @@ static int m920x_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int nu
+ 
+ 				if ((ret = m920x_read(d->udev, M9206_I2C, 0x0,
+ 						      0x20 | stop,
+-						      &msg[i].buf[j], 1)) != 0)
++						      read, 1)) != 0)
+ 					goto unlock;
++				msg[i].buf[j] = read[0];
+ 			}
++
++			kfree(read);
+ 		} else {
+ 			for (j = 0; j < msg[i].len; j++) {
+ 				/* Last byte of transaction? Then send STOP. */
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index cf45cc566cbe2..87e375562dbb2 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -3575,8 +3575,10 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
+ 
+ 	if (dev->is_audio_only) {
+ 		retval = em28xx_audio_setup(dev);
+-		if (retval)
+-			return -ENODEV;
++		if (retval) {
++			retval = -ENODEV;
++			goto err_deinit_media;
++		}
+ 		em28xx_init_extension(dev);
+ 
+ 		return 0;
+@@ -3595,7 +3597,7 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
+ 		dev_err(&dev->intf->dev,
+ 			"%s: em28xx_i2c_register bus 0 - error [%d]!\n",
+ 		       __func__, retval);
+-		return retval;
++		goto err_deinit_media;
+ 	}
+ 
+ 	/* register i2c bus 1 */
+@@ -3611,9 +3613,7 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
+ 				"%s: em28xx_i2c_register bus 1 - error [%d]!\n",
+ 				__func__, retval);
+ 
+-			em28xx_i2c_unregister(dev, 0);
+-
+-			return retval;
++			goto err_unreg_i2c;
+ 		}
+ 	}
+ 
+@@ -3621,6 +3621,12 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
+ 	em28xx_card_setup(dev);
+ 
+ 	return 0;
++
++err_unreg_i2c:
++	em28xx_i2c_unregister(dev, 0);
++err_deinit_media:
++	em28xx_unregister_media_device(dev);
++	return retval;
+ }
+ 
+ static int em28xx_duplicate_dev(struct em28xx *dev)
+diff --git a/drivers/media/usb/em28xx/em28xx-core.c b/drivers/media/usb/em28xx/em28xx-core.c
+index af9216278024f..308bc029099d9 100644
+--- a/drivers/media/usb/em28xx/em28xx-core.c
++++ b/drivers/media/usb/em28xx/em28xx-core.c
+@@ -89,7 +89,7 @@ int em28xx_read_reg_req_len(struct em28xx *dev, u8 req, u16 reg,
+ 	mutex_lock(&dev->ctrl_urb_lock);
+ 	ret = usb_control_msg(udev, pipe, req,
+ 			      USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-			      0x0000, reg, dev->urb_buf, len, HZ);
++			      0x0000, reg, dev->urb_buf, len, 1000);
+ 	if (ret < 0) {
+ 		em28xx_regdbg("(pipe 0x%08x): IN:  %02x %02x %02x %02x %02x %02x %02x %02x  failed with error %i\n",
+ 			      pipe,
+@@ -158,7 +158,7 @@ int em28xx_write_regs_req(struct em28xx *dev, u8 req, u16 reg, char *buf,
+ 	memcpy(dev->urb_buf, buf, len);
+ 	ret = usb_control_msg(udev, pipe, req,
+ 			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-			      0x0000, reg, dev->urb_buf, len, HZ);
++			      0x0000, reg, dev->urb_buf, len, 1000);
+ 	mutex_unlock(&dev->ctrl_urb_lock);
+ 
+ 	if (ret < 0) {
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+index d38dee1792e41..3915d551d59e7 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+@@ -1467,7 +1467,7 @@ static int pvr2_upload_firmware1(struct pvr2_hdw *hdw)
+ 	for (address = 0; address < fwsize; address += 0x800) {
+ 		memcpy(fw_ptr, fw_entry->data + address, 0x800);
+ 		ret += usb_control_msg(hdw->usb_dev, pipe, 0xa0, 0x40, address,
+-				       0, fw_ptr, 0x800, HZ);
++				       0, fw_ptr, 0x800, 1000);
+ 	}
+ 
+ 	trace_firmware("Upload done, releasing device's CPU");
+@@ -1605,7 +1605,7 @@ int pvr2_upload_firmware2(struct pvr2_hdw *hdw)
+ 			((u32 *)fw_ptr)[icnt] = swab32(((u32 *)fw_ptr)[icnt]);
+ 
+ 		ret |= usb_bulk_msg(hdw->usb_dev, pipe, fw_ptr,bcnt,
+-				    &actual_length, HZ);
++				    &actual_length, 1000);
+ 		ret |= (actual_length != bcnt);
+ 		if (ret) break;
+ 		fw_done += bcnt;
+@@ -3438,7 +3438,7 @@ void pvr2_hdw_cpufw_set_enabled(struct pvr2_hdw *hdw,
+ 						      0xa0,0xc0,
+ 						      address,0,
+ 						      hdw->fw_buffer+address,
+-						      0x800,HZ);
++						      0x800,1000);
+ 				if (ret < 0) break;
+ 			}
+ 
+@@ -3977,7 +3977,7 @@ void pvr2_hdw_cpureset_assert(struct pvr2_hdw *hdw,int val)
+ 	/* Write the CPUCS register on the 8051.  The lsb of the register
+ 	   is the reset bit; a 1 asserts reset while a 0 clears it. */
+ 	pipe = usb_sndctrlpipe(hdw->usb_dev, 0);
+-	ret = usb_control_msg(hdw->usb_dev,pipe,0xa0,0x40,0xe600,0,da,1,HZ);
++	ret = usb_control_msg(hdw->usb_dev,pipe,0xa0,0x40,0xe600,0,da,1,1000);
+ 	if (ret < 0) {
+ 		pvr2_trace(PVR2_TRACE_ERROR_LEGS,
+ 			   "cpureset_assert(%d) error=%d",val,ret);
+diff --git a/drivers/media/usb/s2255/s2255drv.c b/drivers/media/usb/s2255/s2255drv.c
+index 4af55e2478be1..cb15eb32d2a6b 100644
+--- a/drivers/media/usb/s2255/s2255drv.c
++++ b/drivers/media/usb/s2255/s2255drv.c
+@@ -1884,7 +1884,7 @@ static long s2255_vendor_req(struct s2255_dev *dev, unsigned char Request,
+ 				    USB_TYPE_VENDOR | USB_RECIP_DEVICE |
+ 				    USB_DIR_IN,
+ 				    Value, Index, buf,
+-				    TransferBufferLength, HZ * 5);
++				    TransferBufferLength, USB_CTRL_SET_TIMEOUT);
+ 
+ 		if (r >= 0)
+ 			memcpy(TransferBuffer, buf, TransferBufferLength);
+@@ -1893,7 +1893,7 @@ static long s2255_vendor_req(struct s2255_dev *dev, unsigned char Request,
+ 		r = usb_control_msg(dev->udev, usb_sndctrlpipe(dev->udev, 0),
+ 				    Request, USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				    Value, Index, buf,
+-				    TransferBufferLength, HZ * 5);
++				    TransferBufferLength, USB_CTRL_SET_TIMEOUT);
+ 	}
+ 	kfree(buf);
+ 	return r;
+diff --git a/drivers/media/usb/stk1160/stk1160-core.c b/drivers/media/usb/stk1160/stk1160-core.c
+index b4f8bc5db1389..4e1698f788187 100644
+--- a/drivers/media/usb/stk1160/stk1160-core.c
++++ b/drivers/media/usb/stk1160/stk1160-core.c
+@@ -65,7 +65,7 @@ int stk1160_read_reg(struct stk1160 *dev, u16 reg, u8 *value)
+ 		return -ENOMEM;
+ 	ret = usb_control_msg(dev->udev, pipe, 0x00,
+ 			USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-			0x00, reg, buf, sizeof(u8), HZ);
++			0x00, reg, buf, sizeof(u8), 1000);
+ 	if (ret < 0) {
+ 		stk1160_err("read failed on reg 0x%x (%d)\n",
+ 			reg, ret);
+@@ -85,7 +85,7 @@ int stk1160_write_reg(struct stk1160 *dev, u16 reg, u16 value)
+ 
+ 	ret =  usb_control_msg(dev->udev, pipe, 0x01,
+ 			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-			value, reg, NULL, 0, HZ);
++			value, reg, NULL, 0, 1000);
+ 	if (ret < 0) {
+ 		stk1160_err("write failed on reg 0x%x (%d)\n",
+ 			reg, ret);
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index a3dfacf069c44..c884020b28784 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -183,7 +183,7 @@
+ /* Maximum status buffer size in bytes of interrupt URB. */
+ #define UVC_MAX_STATUS_SIZE	16
+ 
+-#define UVC_CTRL_CONTROL_TIMEOUT	500
++#define UVC_CTRL_CONTROL_TIMEOUT	5000
+ #define UVC_CTRL_STREAMING_TIMEOUT	5000
+ 
+ /* Maximum allowed number of control mappings per device */
+diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
+index 4ffa14e44efe4..6d6d30dbbe68b 100644
+--- a/drivers/media/v4l2-core/v4l2-ioctl.c
++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
+@@ -2127,6 +2127,7 @@ static int v4l_prepare_buf(const struct v4l2_ioctl_ops *ops,
+ static int v4l_g_parm(const struct v4l2_ioctl_ops *ops,
+ 				struct file *file, void *fh, void *arg)
+ {
++	struct video_device *vfd = video_devdata(file);
+ 	struct v4l2_streamparm *p = arg;
+ 	v4l2_std_id std;
+ 	int ret = check_fmt(file, p->type);
+@@ -2138,7 +2139,8 @@ static int v4l_g_parm(const struct v4l2_ioctl_ops *ops,
+ 	if (p->type != V4L2_BUF_TYPE_VIDEO_CAPTURE &&
+ 	    p->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
+ 		return -EINVAL;
+-	p->parm.capture.readbuffers = 2;
++	if (vfd->device_caps & V4L2_CAP_READWRITE)
++		p->parm.capture.readbuffers = 2;
+ 	ret = ops->vidioc_g_std(file, fh, &std);
+ 	if (ret == 0)
+ 		v4l2_video_std_frame_period(std, &p->parm.capture.timeperframe);
+diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
+index a760ab08256ff..9019121a80f53 100644
+--- a/drivers/memory/renesas-rpc-if.c
++++ b/drivers/memory/renesas-rpc-if.c
+@@ -245,7 +245,7 @@ int rpcif_sw_init(struct rpcif *rpc, struct device *dev)
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dirmap");
+ 	rpc->dirmap = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(rpc->dirmap))
+-		rpc->dirmap = NULL;
++		return PTR_ERR(rpc->dirmap);
+ 	rpc->size = resource_size(res);
+ 
+ 	rpc->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+diff --git a/drivers/mfd/atmel-flexcom.c b/drivers/mfd/atmel-flexcom.c
+index d2f5c073fdf31..559eb4d352b68 100644
+--- a/drivers/mfd/atmel-flexcom.c
++++ b/drivers/mfd/atmel-flexcom.c
+@@ -87,8 +87,7 @@ static const struct of_device_id atmel_flexcom_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, atmel_flexcom_of_match);
+ 
+-#ifdef CONFIG_PM_SLEEP
+-static int atmel_flexcom_resume(struct device *dev)
++static int __maybe_unused atmel_flexcom_resume_noirq(struct device *dev)
+ {
+ 	struct atmel_flexcom *ddata = dev_get_drvdata(dev);
+ 	int err;
+@@ -105,16 +104,16 @@ static int atmel_flexcom_resume(struct device *dev)
+ 
+ 	return 0;
+ }
+-#endif
+ 
+-static SIMPLE_DEV_PM_OPS(atmel_flexcom_pm_ops, NULL,
+-			 atmel_flexcom_resume);
++static const struct dev_pm_ops atmel_flexcom_pm_ops = {
++	.resume_noirq = atmel_flexcom_resume_noirq,
++};
+ 
+ static struct platform_driver atmel_flexcom_driver = {
+ 	.probe	= atmel_flexcom_probe,
+ 	.driver	= {
+ 		.name		= "atmel_flexcom",
+-		.pm		= &atmel_flexcom_pm_ops,
++		.pm		= pm_ptr(&atmel_flexcom_pm_ops),
+ 		.of_match_table	= atmel_flexcom_of_match,
+ 	},
+ };
+diff --git a/drivers/misc/lattice-ecp3-config.c b/drivers/misc/lattice-ecp3-config.c
+index 5eaf74447ca1e..556bb7d705f53 100644
+--- a/drivers/misc/lattice-ecp3-config.c
++++ b/drivers/misc/lattice-ecp3-config.c
+@@ -76,12 +76,12 @@ static void firmware_load(const struct firmware *fw, void *context)
+ 
+ 	if (fw == NULL) {
+ 		dev_err(&spi->dev, "Cannot load firmware, aborting\n");
+-		return;
++		goto out;
+ 	}
+ 
+ 	if (fw->size == 0) {
+ 		dev_err(&spi->dev, "Error: Firmware size is 0!\n");
+-		return;
++		goto out;
+ 	}
+ 
+ 	/* Fill dummy data (24 stuffing bits for commands) */
+@@ -103,7 +103,7 @@ static void firmware_load(const struct firmware *fw, void *context)
+ 		dev_err(&spi->dev,
+ 			"Error: No supported FPGA detected (JEDEC_ID=%08x)!\n",
+ 			jedec_id);
+-		return;
++		goto out;
+ 	}
+ 
+ 	dev_info(&spi->dev, "FPGA %s detected\n", ecp3_dev[i].name);
+@@ -116,7 +116,7 @@ static void firmware_load(const struct firmware *fw, void *context)
+ 	buffer = kzalloc(fw->size + 8, GFP_KERNEL);
+ 	if (!buffer) {
+ 		dev_err(&spi->dev, "Error: Can't allocate memory!\n");
+-		return;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -155,7 +155,7 @@ static void firmware_load(const struct firmware *fw, void *context)
+ 			"Error: Timeout waiting for FPGA to clear (status=%08x)!\n",
+ 			status);
+ 		kfree(buffer);
+-		return;
++		goto out;
+ 	}
+ 
+ 	dev_info(&spi->dev, "Configuring the FPGA...\n");
+@@ -181,7 +181,7 @@ static void firmware_load(const struct firmware *fw, void *context)
+ 	release_firmware(fw);
+ 
+ 	kfree(buffer);
+-
++out:
+ 	complete(&data->fw_loaded);
+ }
+ 
+diff --git a/drivers/misc/lkdtm/Makefile b/drivers/misc/lkdtm/Makefile
+index 30c8ac24635d4..4405fb2bc7a00 100644
+--- a/drivers/misc/lkdtm/Makefile
++++ b/drivers/misc/lkdtm/Makefile
+@@ -16,7 +16,7 @@ KCOV_INSTRUMENT_rodata.o	:= n
+ 
+ OBJCOPYFLAGS :=
+ OBJCOPYFLAGS_rodata_objcopy.o	:= \
+-			--rename-section .noinstr.text=.rodata,alloc,readonly,load
++			--rename-section .noinstr.text=.rodata,alloc,readonly,load,contents
+ targets += rodata.o rodata_objcopy.o
+ $(obj)/rodata_objcopy.o: $(obj)/rodata.o FORCE
+ 	$(call if_changed,objcopy)
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index 1b0853a82189a..99a4ce68d82f1 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -708,6 +708,8 @@ try_again:
+ 	if (host->ops->init_card)
+ 		host->ops->init_card(host, card);
+ 
++	card->ocr = ocr_card;
++
+ 	/*
+ 	 * If the host and card support UHS-I mode request the card
+ 	 * to switch to 1.8V signaling level.  No 1.8v signalling if
+@@ -820,7 +822,7 @@ try_again:
+ 			goto mismatch;
+ 		}
+ 	}
+-	card->ocr = ocr_card;
++
+ 	mmc_fixup_device(card, sdio_fixup_methods);
+ 
+ 	if (card->type == MMC_TYPE_SD_COMBO) {
+diff --git a/drivers/mmc/host/meson-mx-sdhc-mmc.c b/drivers/mmc/host/meson-mx-sdhc-mmc.c
+index 8fdd0bbbfa21f..28aa78aa08f3f 100644
+--- a/drivers/mmc/host/meson-mx-sdhc-mmc.c
++++ b/drivers/mmc/host/meson-mx-sdhc-mmc.c
+@@ -854,6 +854,11 @@ static int meson_mx_sdhc_probe(struct platform_device *pdev)
+ 		goto err_disable_pclk;
+ 
+ 	irq = platform_get_irq(pdev, 0);
++	if (irq < 0) {
++		ret = irq;
++		goto err_disable_pclk;
++	}
++
+ 	ret = devm_request_threaded_irq(dev, irq, meson_mx_sdhc_irq,
+ 					meson_mx_sdhc_irq_thread, IRQF_ONESHOT,
+ 					NULL, host);
+diff --git a/drivers/mmc/host/meson-mx-sdio.c b/drivers/mmc/host/meson-mx-sdio.c
+index 1c5299cd0cbe1..264aae2a2b0cf 100644
+--- a/drivers/mmc/host/meson-mx-sdio.c
++++ b/drivers/mmc/host/meson-mx-sdio.c
+@@ -663,6 +663,11 @@ static int meson_mx_mmc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
++	if (irq < 0) {
++		ret = irq;
++		goto error_free_mmc;
++	}
++
+ 	ret = devm_request_threaded_irq(host->controller_dev, irq,
+ 					meson_mx_mmc_irq,
+ 					meson_mx_mmc_irq_thread, IRQF_ONESHOT,
+diff --git a/drivers/mtd/hyperbus/rpc-if.c b/drivers/mtd/hyperbus/rpc-if.c
+index ecb050ba95cdf..dc164c18f8429 100644
+--- a/drivers/mtd/hyperbus/rpc-if.c
++++ b/drivers/mtd/hyperbus/rpc-if.c
+@@ -124,7 +124,9 @@ static int rpcif_hb_probe(struct platform_device *pdev)
+ 	if (!hyperbus)
+ 		return -ENOMEM;
+ 
+-	rpcif_sw_init(&hyperbus->rpc, pdev->dev.parent);
++	error = rpcif_sw_init(&hyperbus->rpc, pdev->dev.parent);
++	if (error)
++		return error;
+ 
+ 	platform_set_drvdata(pdev, hyperbus);
+ 
+@@ -150,9 +152,9 @@ static int rpcif_hb_remove(struct platform_device *pdev)
+ {
+ 	struct rpcif_hyperbus *hyperbus = platform_get_drvdata(pdev);
+ 	int error = hyperbus_unregister_device(&hyperbus->hbdev);
+-	struct rpcif *rpc = dev_get_drvdata(pdev->dev.parent);
+ 
+-	rpcif_disable_rpm(rpc);
++	rpcif_disable_rpm(&hyperbus->rpc);
++
+ 	return error;
+ }
+ 
+diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
+index 95d47422bbf20..5725818fa199f 100644
+--- a/drivers/mtd/mtdpart.c
++++ b/drivers/mtd/mtdpart.c
+@@ -313,7 +313,7 @@ static int __mtd_del_partition(struct mtd_info *mtd)
+ 	if (err)
+ 		return err;
+ 
+-	list_del(&child->part.node);
++	list_del(&mtd->part.node);
+ 	free_partition(mtd);
+ 
+ 	return 0;
+diff --git a/drivers/mtd/nand/bbt.c b/drivers/mtd/nand/bbt.c
+index 044adf9138546..64af6898131d6 100644
+--- a/drivers/mtd/nand/bbt.c
++++ b/drivers/mtd/nand/bbt.c
+@@ -123,7 +123,7 @@ int nanddev_bbt_set_block_status(struct nand_device *nand, unsigned int entry,
+ 		unsigned int rbits = bits_per_block + offs - BITS_PER_LONG;
+ 
+ 		pos[1] &= ~GENMASK(rbits - 1, 0);
+-		pos[1] |= val >> rbits;
++		pos[1] |= val >> (bits_per_block - rbits);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/mtd/nand/raw/davinci_nand.c b/drivers/mtd/nand/raw/davinci_nand.c
+index f8c36d19ab47f..bfd3f440aca57 100644
+--- a/drivers/mtd/nand/raw/davinci_nand.c
++++ b/drivers/mtd/nand/raw/davinci_nand.c
+@@ -372,17 +372,15 @@ correct:
+ }
+ 
+ /**
+- * nand_read_page_hwecc_oob_first - hw ecc, read oob first
++ * nand_davinci_read_page_hwecc_oob_first - Hardware ECC page read with ECC
++ *                                          data read from OOB area
+  * @chip: nand chip info structure
+  * @buf: buffer to store read data
+  * @oob_required: caller requires OOB data read to chip->oob_poi
+  * @page: page number to read
+  *
+- * Hardware ECC for large page chips, require OOB to be read first. For this
+- * ECC mode, the write_page method is re-used from ECC_HW. These methods
+- * read/write ECC from the OOB area, unlike the ECC_HW_SYNDROME support with
+- * multiple ECC steps, follows the "infix ECC" scheme and reads/writes ECC from
+- * the data area, by overwriting the NAND manufacturer bad block markings.
++ * Hardware ECC for large page chips, which requires the ECC data to be
++ * extracted from the OOB before the actual data is read.
+  */
+ static int nand_davinci_read_page_hwecc_oob_first(struct nand_chip *chip,
+ 						  uint8_t *buf,
+@@ -394,7 +392,6 @@ static int nand_davinci_read_page_hwecc_oob_first(struct nand_chip *chip,
+ 	int eccsteps = chip->ecc.steps;
+ 	uint8_t *p = buf;
+ 	uint8_t *ecc_code = chip->ecc.code_buf;
+-	uint8_t *ecc_calc = chip->ecc.calc_buf;
+ 	unsigned int max_bitflips = 0;
+ 
+ 	/* Read the OOB area first */
+@@ -402,7 +399,8 @@ static int nand_davinci_read_page_hwecc_oob_first(struct nand_chip *chip,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = nand_read_page_op(chip, page, 0, NULL, 0);
++	/* Move read cursor to start of page */
++	ret = nand_change_read_column_op(chip, 0, NULL, 0, false);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -420,8 +418,6 @@ static int nand_davinci_read_page_hwecc_oob_first(struct nand_chip *chip,
+ 		if (ret)
+ 			return ret;
+ 
+-		chip->ecc.calculate(chip, p, &ecc_calc[i]);
+-
+ 		stat = chip->ecc.correct(chip, p, &ecc_code[i], NULL);
+ 		if (stat == -EBADMSG &&
+ 		    (chip->ecc.options & NAND_ECC_GENERIC_ERASED_CHECK)) {
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index a6658567d55c0..226d527b6c6b7 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -711,14 +711,32 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
+ 			      (use_half_period ? BM_GPMI_CTRL1_HALF_PERIOD : 0);
+ }
+ 
+-static void gpmi_nfc_apply_timings(struct gpmi_nand_data *this)
++static int gpmi_nfc_apply_timings(struct gpmi_nand_data *this)
+ {
+ 	struct gpmi_nfc_hardware_timing *hw = &this->hw;
+ 	struct resources *r = &this->resources;
+ 	void __iomem *gpmi_regs = r->gpmi_regs;
+ 	unsigned int dll_wait_time_us;
++	int ret;
++
++	/* Clock dividers do NOT guarantee a clean clock signal on its output
++	 * during the change of the divide factor on i.MX6Q/UL/SX. On i.MX7/8,
++	 * all clock dividers provide these guarantee.
++	 */
++	if (GPMI_IS_MX6Q(this) || GPMI_IS_MX6SX(this))
++		clk_disable_unprepare(r->clock[0]);
+ 
+-	clk_set_rate(r->clock[0], hw->clk_rate);
++	ret = clk_set_rate(r->clock[0], hw->clk_rate);
++	if (ret) {
++		dev_err(this->dev, "cannot set clock rate to %lu Hz: %d\n", hw->clk_rate, ret);
++		return ret;
++	}
++
++	if (GPMI_IS_MX6Q(this) || GPMI_IS_MX6SX(this)) {
++		ret = clk_prepare_enable(r->clock[0]);
++		if (ret)
++			return ret;
++	}
+ 
+ 	writel(hw->timing0, gpmi_regs + HW_GPMI_TIMING0);
+ 	writel(hw->timing1, gpmi_regs + HW_GPMI_TIMING1);
+@@ -737,6 +755,8 @@ static void gpmi_nfc_apply_timings(struct gpmi_nand_data *this)
+ 
+ 	/* Wait for the DLL to settle. */
+ 	udelay(dll_wait_time_us);
++
++	return 0;
+ }
+ 
+ static int gpmi_setup_interface(struct nand_chip *chip, int chipnr,
+@@ -1032,15 +1052,6 @@ static int gpmi_get_clks(struct gpmi_nand_data *this)
+ 		r->clock[i] = clk;
+ 	}
+ 
+-	if (GPMI_IS_MX6(this))
+-		/*
+-		 * Set the default value for the gpmi clock.
+-		 *
+-		 * If you want to use the ONFI nand which is in the
+-		 * Synchronous Mode, you should change the clock as you need.
+-		 */
+-		clk_set_rate(r->clock[0], 22000000);
+-
+ 	return 0;
+ 
+ err_clock:
+@@ -2278,7 +2289,9 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
+ 	 */
+ 	if (this->hw.must_apply_timings) {
+ 		this->hw.must_apply_timings = false;
+-		gpmi_nfc_apply_timings(this);
++		ret = gpmi_nfc_apply_timings(this);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	dev_dbg(this->dev, "%s: %d instructions\n", __func__, op->ninstrs);
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 645c7cabcbe4d..99770b1671923 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1061,9 +1061,6 @@ static bool bond_should_notify_peers(struct bonding *bond)
+ 	slave = rcu_dereference(bond->curr_active_slave);
+ 	rcu_read_unlock();
+ 
+-	netdev_dbg(bond->dev, "bond_should_notify_peers: slave %s\n",
+-		   slave ? slave->dev->name : "NULL");
+-
+ 	if (!slave || !bond->send_peer_notif ||
+ 	    bond->send_peer_notif %
+ 	    max(1, bond->params.peer_notif_delay) != 0 ||
+@@ -1071,6 +1068,9 @@ static bool bond_should_notify_peers(struct bonding *bond)
+ 	    test_bit(__LINK_STATE_LINKWATCH_PENDING, &slave->dev->state))
+ 		return false;
+ 
++	netdev_dbg(bond->dev, "bond_should_notify_peers: slave %s\n",
++		   slave ? slave->dev->name : "NULL");
++
+ 	return true;
+ }
+ 
+@@ -4562,25 +4562,39 @@ static netdev_tx_t bond_xmit_broadcast(struct sk_buff *skb,
+ 	struct bonding *bond = netdev_priv(bond_dev);
+ 	struct slave *slave = NULL;
+ 	struct list_head *iter;
++	bool xmit_suc = false;
++	bool skb_used = false;
+ 
+ 	bond_for_each_slave_rcu(bond, slave, iter) {
+-		if (bond_is_last_slave(bond, slave))
+-			break;
+-		if (bond_slave_is_up(slave) && slave->link == BOND_LINK_UP) {
+-			struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC);
++		struct sk_buff *skb2;
++
++		if (!(bond_slave_is_up(slave) && slave->link == BOND_LINK_UP))
++			continue;
+ 
++		if (bond_is_last_slave(bond, slave)) {
++			skb2 = skb;
++			skb_used = true;
++		} else {
++			skb2 = skb_clone(skb, GFP_ATOMIC);
+ 			if (!skb2) {
+ 				net_err_ratelimited("%s: Error: %s: skb_clone() failed\n",
+ 						    bond_dev->name, __func__);
+ 				continue;
+ 			}
+-			bond_dev_queue_xmit(bond, skb2, slave->dev);
+ 		}
++
++		if (bond_dev_queue_xmit(bond, skb2, slave->dev) == NETDEV_TX_OK)
++			xmit_suc = true;
+ 	}
+-	if (slave && bond_slave_is_up(slave) && slave->link == BOND_LINK_UP)
+-		return bond_dev_queue_xmit(bond, skb, slave->dev);
+ 
+-	return bond_tx_drop(bond_dev, skb);
++	if (!skb_used)
++		dev_kfree_skb_any(skb);
++
++	if (xmit_suc)
++		return NETDEV_TX_OK;
++
++	atomic_long_inc(&bond_dev->tx_dropped);
++	return NET_XMIT_DROP;
+ }
+ 
+ /*------------------------- Device initialization ---------------------------*/
+diff --git a/drivers/net/can/softing/softing_cs.c b/drivers/net/can/softing/softing_cs.c
+index 2e93ee7923739..e5c939b63fa65 100644
+--- a/drivers/net/can/softing/softing_cs.c
++++ b/drivers/net/can/softing/softing_cs.c
+@@ -293,7 +293,7 @@ static int softingcs_probe(struct pcmcia_device *pcmcia)
+ 	return 0;
+ 
+ platform_failed:
+-	kfree(dev);
++	platform_device_put(pdev);
+ mem_failed:
+ pcmcia_bad:
+ pcmcia_failed:
+diff --git a/drivers/net/can/softing/softing_fw.c b/drivers/net/can/softing/softing_fw.c
+index ccd649a8e37bd..bad69a4abec10 100644
+--- a/drivers/net/can/softing/softing_fw.c
++++ b/drivers/net/can/softing/softing_fw.c
+@@ -565,18 +565,19 @@ int softing_startstop(struct net_device *dev, int up)
+ 		if (ret < 0)
+ 			goto failed;
+ 	}
+-	/* enable_error_frame */
+-	/*
++
++	/* enable_error_frame
++	 *
+ 	 * Error reporting is switched off at the moment since
+ 	 * the receiving of them is not yet 100% verified
+ 	 * This should be enabled sooner or later
+-	 *
+-	if (error_reporting) {
++	 */
++	if (0 && error_reporting) {
+ 		ret = softing_fct_cmd(card, 51, "enable_error_frame");
+ 		if (ret < 0)
+ 			goto failed;
+ 	}
+-	*/
++
+ 	/* initialize interface */
+ 	iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 2]);
+ 	iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 4]);
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index 4e13f6dfb91a2..abe00a085f6fc 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -1288,7 +1288,7 @@ mcp251xfd_tef_obj_read(const struct mcp251xfd_priv *priv,
+ 	     len > tx_ring->obj_num ||
+ 	     offset + len > tx_ring->obj_num)) {
+ 		netdev_err(priv->ndev,
+-			   "Trying to read to many TEF objects (max=%d, offset=%d, len=%d).\n",
++			   "Trying to read too many TEF objects (max=%d, offset=%d, len=%d).\n",
+ 			   tx_ring->obj_num, offset, len);
+ 		return -ERANGE;
+ 	}
+@@ -2497,7 +2497,7 @@ static int mcp251xfd_register_chip_detect(struct mcp251xfd_priv *priv)
+ 	if (!mcp251xfd_is_251X(priv) &&
+ 	    priv->devtype_data.model != devtype_data->model) {
+ 		netdev_info(ndev,
+-			    "Detected %s, but firmware specifies a %s. Fixing up.",
++			    "Detected %s, but firmware specifies a %s. Fixing up.\n",
+ 			    __mcp251xfd_get_model_str(devtype_data->model),
+ 			    mcp251xfd_get_model_str(priv));
+ 	}
+@@ -2534,7 +2534,7 @@ static int mcp251xfd_register_check_rx_int(struct mcp251xfd_priv *priv)
+ 		return 0;
+ 
+ 	netdev_info(priv->ndev,
+-		    "RX_INT active after softreset, disabling RX_INT support.");
++		    "RX_INT active after softreset, disabling RX_INT support.\n");
+ 	devm_gpiod_put(&priv->spi->dev, priv->rx_int);
+ 	priv->rx_int = NULL;
+ 
+diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
+index 48d746e18f302..375998263af7a 100644
+--- a/drivers/net/can/xilinx_can.c
++++ b/drivers/net/can/xilinx_can.c
+@@ -1762,7 +1762,12 @@ static int xcan_probe(struct platform_device *pdev)
+ 	spin_lock_init(&priv->tx_lock);
+ 
+ 	/* Get IRQ for the device */
+-	ndev->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		goto err_free;
++
++	ndev->irq = ret;
++
+ 	ndev->flags |= IFF_ECHO;	/* We support local echo */
+ 
+ 	platform_set_drvdata(pdev, ndev);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index db74241935ab4..e19cf020e5ae1 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -3962,10 +3962,12 @@ static int bcmgenet_probe(struct platform_device *pdev)
+ 
+ 	/* Request the WOL interrupt and advertise suspend if available */
+ 	priv->wol_irq_disabled = true;
+-	err = devm_request_irq(&pdev->dev, priv->wol_irq, bcmgenet_wol_isr, 0,
+-			       dev->name, priv);
+-	if (!err)
+-		device_set_wakeup_capable(&pdev->dev, 1);
++	if (priv->wol_irq > 0) {
++		err = devm_request_irq(&pdev->dev, priv->wol_irq,
++				       bcmgenet_wol_isr, 0, dev->name, priv);
++		if (!err)
++			device_set_wakeup_capable(&pdev->dev, 1);
++	}
+ 
+ 	/* Set the needed headroom to account for any possible
+ 	 * features enabling/disabling at runtime
+diff --git a/drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c b/drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c
+index d04a6c1634452..da8d10475a08e 100644
+--- a/drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c
++++ b/drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c
+@@ -32,6 +32,7 @@
+ 
+ #include <linux/tcp.h>
+ #include <linux/ipv6.h>
++#include <net/inet_ecn.h>
+ #include <net/route.h>
+ #include <net/ip6_route.h>
+ 
+@@ -99,7 +100,7 @@ cxgb_find_route(struct cxgb4_lld_info *lldi,
+ 
+ 	rt = ip_route_output_ports(&init_net, &fl4, NULL, peer_ip, local_ip,
+ 				   peer_port, local_port, IPPROTO_TCP,
+-				   tos, 0);
++				   tos & ~INET_ECN_MASK, 0);
+ 	if (IS_ERR(rt))
+ 		return NULL;
+ 	n = dst_neigh_lookup(&rt->dst, &peer_ip);
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 8df6f081f2447..d11fcfd927c0b 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -305,21 +305,21 @@ static void gmac_speed_set(struct net_device *netdev)
+ 	switch (phydev->speed) {
+ 	case 1000:
+ 		status.bits.speed = GMAC_SPEED_1000;
+-		if (phydev->interface == PHY_INTERFACE_MODE_RGMII)
++		if (phy_interface_mode_is_rgmii(phydev->interface))
+ 			status.bits.mii_rmii = GMAC_PHY_RGMII_1000;
+ 		netdev_dbg(netdev, "connect %s to RGMII @ 1Gbit\n",
+ 			   phydev_name(phydev));
+ 		break;
+ 	case 100:
+ 		status.bits.speed = GMAC_SPEED_100;
+-		if (phydev->interface == PHY_INTERFACE_MODE_RGMII)
++		if (phy_interface_mode_is_rgmii(phydev->interface))
+ 			status.bits.mii_rmii = GMAC_PHY_RGMII_100_10;
+ 		netdev_dbg(netdev, "connect %s to RGMII @ 100 Mbit\n",
+ 			   phydev_name(phydev));
+ 		break;
+ 	case 10:
+ 		status.bits.speed = GMAC_SPEED_10;
+-		if (phydev->interface == PHY_INTERFACE_MODE_RGMII)
++		if (phy_interface_mode_is_rgmii(phydev->interface))
+ 			status.bits.mii_rmii = GMAC_PHY_RGMII_100_10;
+ 		netdev_dbg(netdev, "connect %s to RGMII @ 10 Mbit\n",
+ 			   phydev_name(phydev));
+@@ -389,6 +389,9 @@ static int gmac_setup_phy(struct net_device *netdev)
+ 		status.bits.mii_rmii = GMAC_PHY_GMII;
+ 		break;
+ 	case PHY_INTERFACE_MODE_RGMII:
++	case PHY_INTERFACE_MODE_RGMII_ID:
++	case PHY_INTERFACE_MODE_RGMII_TXID:
++	case PHY_INTERFACE_MODE_RGMII_RXID:
+ 		netdev_dbg(netdev,
+ 			   "RGMII: set GMAC0 and GMAC1 to MII/RGMII mode\n");
+ 		status.bits.mii_rmii = GMAC_PHY_RGMII_100_10;
+diff --git a/drivers/net/ethernet/freescale/fman/mac.c b/drivers/net/ethernet/freescale/fman/mac.c
+index 901749a7a318b..6eeccc11b76ef 100644
+--- a/drivers/net/ethernet/freescale/fman/mac.c
++++ b/drivers/net/ethernet/freescale/fman/mac.c
+@@ -94,14 +94,17 @@ static void mac_exception(void *handle, enum fman_mac_exceptions ex)
+ 		__func__, ex);
+ }
+ 
+-static void set_fman_mac_params(struct mac_device *mac_dev,
+-				struct fman_mac_params *params)
++static int set_fman_mac_params(struct mac_device *mac_dev,
++			       struct fman_mac_params *params)
+ {
+ 	struct mac_priv_s *priv = mac_dev->priv;
+ 
+ 	params->base_addr = (typeof(params->base_addr))
+ 		devm_ioremap(priv->dev, mac_dev->res->start,
+ 			     resource_size(mac_dev->res));
++	if (!params->base_addr)
++		return -ENOMEM;
++
+ 	memcpy(&params->addr, mac_dev->addr, sizeof(mac_dev->addr));
+ 	params->max_speed	= priv->max_speed;
+ 	params->phy_if		= mac_dev->phy_if;
+@@ -112,6 +115,8 @@ static void set_fman_mac_params(struct mac_device *mac_dev,
+ 	params->event_cb	= mac_exception;
+ 	params->dev_id		= mac_dev;
+ 	params->internal_phy_node = priv->internal_phy_node;
++
++	return 0;
+ }
+ 
+ static int tgec_initialization(struct mac_device *mac_dev)
+@@ -123,7 +128,9 @@ static int tgec_initialization(struct mac_device *mac_dev)
+ 
+ 	priv = mac_dev->priv;
+ 
+-	set_fman_mac_params(mac_dev, &params);
++	err = set_fman_mac_params(mac_dev, &params);
++	if (err)
++		goto _return;
+ 
+ 	mac_dev->fman_mac = tgec_config(&params);
+ 	if (!mac_dev->fman_mac) {
+@@ -169,7 +176,9 @@ static int dtsec_initialization(struct mac_device *mac_dev)
+ 
+ 	priv = mac_dev->priv;
+ 
+-	set_fman_mac_params(mac_dev, &params);
++	err = set_fman_mac_params(mac_dev, &params);
++	if (err)
++		goto _return;
+ 
+ 	mac_dev->fman_mac = dtsec_config(&params);
+ 	if (!mac_dev->fman_mac) {
+@@ -218,7 +227,9 @@ static int memac_initialization(struct mac_device *mac_dev)
+ 
+ 	priv = mac_dev->priv;
+ 
+-	set_fman_mac_params(mac_dev, &params);
++	err = set_fman_mac_params(mac_dev, &params);
++	if (err)
++		goto _return;
+ 
+ 	if (priv->max_speed == SPEED_10000)
+ 		params.phy_if = PHY_INTERFACE_MODE_XGMII;
+diff --git a/drivers/net/ethernet/freescale/xgmac_mdio.c b/drivers/net/ethernet/freescale/xgmac_mdio.c
+index bfa2826c55454..b7984a772e12d 100644
+--- a/drivers/net/ethernet/freescale/xgmac_mdio.c
++++ b/drivers/net/ethernet/freescale/xgmac_mdio.c
+@@ -49,6 +49,7 @@ struct tgec_mdio_controller {
+ struct mdio_fsl_priv {
+ 	struct	tgec_mdio_controller __iomem *mdio_base;
+ 	bool	is_little_endian;
++	bool	has_a009885;
+ 	bool	has_a011043;
+ };
+ 
+@@ -184,10 +185,10 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
+ {
+ 	struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv;
+ 	struct tgec_mdio_controller __iomem *regs = priv->mdio_base;
++	unsigned long flags;
+ 	uint16_t dev_addr;
+ 	uint32_t mdio_stat;
+ 	uint32_t mdio_ctl;
+-	uint16_t value;
+ 	int ret;
+ 	bool endian = priv->is_little_endian;
+ 
+@@ -219,12 +220,18 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
+ 			return ret;
+ 	}
+ 
++	if (priv->has_a009885)
++		/* Once the operation completes, i.e. MDIO_STAT_BSY clears, we
++		 * must read back the data register within 16 MDC cycles.
++		 */
++		local_irq_save(flags);
++
+ 	/* Initiate the read */
+ 	xgmac_write32(mdio_ctl | MDIO_CTL_READ, &regs->mdio_ctl, endian);
+ 
+ 	ret = xgmac_wait_until_done(&bus->dev, regs, endian);
+ 	if (ret)
+-		return ret;
++		goto irq_restore;
+ 
+ 	/* Return all Fs if nothing was there */
+ 	if ((xgmac_read32(&regs->mdio_stat, endian) & MDIO_STAT_RD_ER) &&
+@@ -232,13 +239,17 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
+ 		dev_dbg(&bus->dev,
+ 			"Error while reading PHY%d reg at %d.%hhu\n",
+ 			phy_id, dev_addr, regnum);
+-		return 0xffff;
++		ret = 0xffff;
++	} else {
++		ret = xgmac_read32(&regs->mdio_data, endian) & 0xffff;
++		dev_dbg(&bus->dev, "read %04x\n", ret);
+ 	}
+ 
+-	value = xgmac_read32(&regs->mdio_data, endian) & 0xffff;
+-	dev_dbg(&bus->dev, "read %04x\n", value);
++irq_restore:
++	if (priv->has_a009885)
++		local_irq_restore(flags);
+ 
+-	return value;
++	return ret;
+ }
+ 
+ static int xgmac_mdio_probe(struct platform_device *pdev)
+@@ -282,6 +293,8 @@ static int xgmac_mdio_probe(struct platform_device *pdev)
+ 	priv->is_little_endian = device_property_read_bool(&pdev->dev,
+ 							   "little-endian");
+ 
++	priv->has_a009885 = device_property_read_bool(&pdev->dev,
++						      "fsl,erratum-a009885");
+ 	priv->has_a011043 = device_property_read_bool(&pdev->dev,
+ 						      "fsl,erratum-a011043");
+ 
+@@ -307,9 +320,10 @@ err_ioremap:
+ static int xgmac_mdio_remove(struct platform_device *pdev)
+ {
+ 	struct mii_bus *bus = platform_get_drvdata(pdev);
++	struct mdio_fsl_priv *priv = bus->priv;
+ 
+ 	mdiobus_unregister(bus);
+-	iounmap(bus->priv);
++	iounmap(priv->mdio_base);
+ 	mdiobus_free(bus);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/i825xx/sni_82596.c b/drivers/net/ethernet/i825xx/sni_82596.c
+index 27937c5d79567..daec9ce04531b 100644
+--- a/drivers/net/ethernet/i825xx/sni_82596.c
++++ b/drivers/net/ethernet/i825xx/sni_82596.c
+@@ -117,9 +117,10 @@ static int sni_82596_probe(struct platform_device *dev)
+ 	netdevice->dev_addr[5] = readb(eth_addr + 0x06);
+ 	iounmap(eth_addr);
+ 
+-	if (!netdevice->irq) {
++	if (netdevice->irq < 0) {
+ 		printk(KERN_ERR "%s: IRQ not found for i82596 at 0x%lx\n",
+ 			__FILE__, netdevice->base_addr);
++		retval = netdevice->irq;
+ 		goto probe_failed;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index a2d3f04a9ff22..7d7dc0754a3a1 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -215,7 +215,7 @@ static void mtk_mac_config(struct phylink_config *config, unsigned int mode,
+ 					   phylink_config);
+ 	struct mtk_eth *eth = mac->hw;
+ 	u32 mcr_cur, mcr_new, sid, i;
+-	int val, ge_mode, err;
++	int val, ge_mode, err = 0;
+ 
+ 	/* MT76x8 has no hardware settings between for the MAC */
+ 	if (!MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628) &&
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 2e55e00888715..6af0dd8471691 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -147,8 +147,12 @@ static void cmd_ent_put(struct mlx5_cmd_work_ent *ent)
+ 	if (!refcount_dec_and_test(&ent->refcnt))
+ 		return;
+ 
+-	if (ent->idx >= 0)
+-		cmd_free_index(ent->cmd, ent->idx);
++	if (ent->idx >= 0) {
++		struct mlx5_cmd *cmd = ent->cmd;
++
++		cmd_free_index(cmd, ent->idx);
++		up(ent->page_queue ? &cmd->pages_sem : &cmd->sem);
++	}
+ 
+ 	cmd_free_ent(ent);
+ }
+@@ -883,25 +887,6 @@ static bool opcode_allowed(struct mlx5_cmd *cmd, u16 opcode)
+ 	return cmd->allowed_opcode == opcode;
+ }
+ 
+-static int cmd_alloc_index_retry(struct mlx5_cmd *cmd)
+-{
+-	unsigned long alloc_end = jiffies + msecs_to_jiffies(1000);
+-	int idx;
+-
+-retry:
+-	idx = cmd_alloc_index(cmd);
+-	if (idx < 0 && time_before(jiffies, alloc_end)) {
+-		/* Index allocation can fail on heavy load of commands. This is a temporary
+-		 * situation as the current command already holds the semaphore, meaning that
+-		 * another command completion is being handled and it is expected to release
+-		 * the entry index soon.
+-		 */
+-		cpu_relax();
+-		goto retry;
+-	}
+-	return idx;
+-}
+-
+ bool mlx5_cmd_is_down(struct mlx5_core_dev *dev)
+ {
+ 	return pci_channel_offline(dev->pdev) ||
+@@ -926,7 +911,7 @@ static void cmd_work_handler(struct work_struct *work)
+ 	sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem;
+ 	down(sem);
+ 	if (!ent->page_queue) {
+-		alloc_ret = cmd_alloc_index_retry(cmd);
++		alloc_ret = cmd_alloc_index(cmd);
+ 		if (alloc_ret < 0) {
+ 			mlx5_core_err_rl(dev, "failed to allocate command entry\n");
+ 			if (ent->callback) {
+@@ -1582,8 +1567,6 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
+ 	vector = vec & 0xffffffff;
+ 	for (i = 0; i < (1 << cmd->log_sz); i++) {
+ 		if (test_bit(i, &vector)) {
+-			struct semaphore *sem;
+-
+ 			ent = cmd->ent_arr[i];
+ 
+ 			/* if we already completed the command, ignore it */
+@@ -1606,10 +1589,6 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
+ 			    dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
+ 				cmd_ent_put(ent);
+ 
+-			if (ent->page_queue)
+-				sem = &cmd->pages_sem;
+-			else
+-				sem = &cmd->sem;
+ 			ent->ts2 = ktime_get_ns();
+ 			memcpy(ent->out->first.data, ent->lay->out, sizeof(ent->lay->out));
+ 			dump_command(dev, ent, 0);
+@@ -1663,7 +1642,6 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
+ 				 */
+ 				complete(&ent->done);
+ 			}
+-			up(sem);
+ 		}
+ 	}
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
+index 71e8d66fa1509..6692bc8333f73 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
+@@ -11,13 +11,13 @@ static int mlx5e_xsk_map_pool(struct mlx5e_priv *priv,
+ {
+ 	struct device *dev = mlx5_core_dma_dev(priv->mdev);
+ 
+-	return xsk_pool_dma_map(pool, dev, 0);
++	return xsk_pool_dma_map(pool, dev, DMA_ATTR_SKIP_CPU_SYNC);
+ }
+ 
+ static void mlx5e_xsk_unmap_pool(struct mlx5e_priv *priv,
+ 				 struct xsk_buff_pool *pool)
+ {
+-	return xsk_pool_dma_unmap(pool, 0);
++	return xsk_pool_dma_unmap(pool, DMA_ATTR_SKIP_CPU_SYNC);
+ }
+ 
+ static int mlx5e_xsk_get_pools(struct mlx5e_xsk *xsk)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 2f6c3a5813ed1..16e98ac47624c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -5024,9 +5024,13 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ 	}
+ 
+ 	if (mlx5_vxlan_allowed(mdev->vxlan) || mlx5_geneve_tx_allowed(mdev)) {
+-		netdev->hw_features     |= NETIF_F_GSO_UDP_TUNNEL;
+-		netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL;
+-		netdev->vlan_features |= NETIF_F_GSO_UDP_TUNNEL;
++		netdev->hw_features     |= NETIF_F_GSO_UDP_TUNNEL |
++					   NETIF_F_GSO_UDP_TUNNEL_CSUM;
++		netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL |
++					   NETIF_F_GSO_UDP_TUNNEL_CSUM;
++		netdev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM;
++		netdev->vlan_features |= NETIF_F_GSO_UDP_TUNNEL |
++					 NETIF_F_GSO_UDP_TUNNEL_CSUM;
+ 	}
+ 
+ 	if (mlx5e_tunnel_proto_supported(mdev, IPPROTO_GRE)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 117a593414537..d384403d73f69 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -276,8 +276,8 @@ static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq,
+ 	if (unlikely(!dma_info->page))
+ 		return -ENOMEM;
+ 
+-	dma_info->addr = dma_map_page(rq->pdev, dma_info->page, 0,
+-				      PAGE_SIZE, rq->buff.map_dir);
++	dma_info->addr = dma_map_page_attrs(rq->pdev, dma_info->page, 0, PAGE_SIZE,
++					    rq->buff.map_dir, DMA_ATTR_SKIP_CPU_SYNC);
+ 	if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
+ 		page_pool_recycle_direct(rq->page_pool, dma_info->page);
+ 		dma_info->page = NULL;
+@@ -298,7 +298,8 @@ static inline int mlx5e_page_alloc(struct mlx5e_rq *rq,
+ 
+ void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info)
+ {
+-	dma_unmap_page(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir);
++	dma_unmap_page_attrs(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir,
++			     DMA_ATTR_SKIP_CPU_SYNC);
+ }
+ 
+ void mlx5e_page_release_dynamic(struct mlx5e_rq *rq,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+index 15c3a9058e728..0f0d250bbc150 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+@@ -265,10 +265,8 @@ static int mlx5_lag_fib_event(struct notifier_block *nb,
+ 		fen_info = container_of(info, struct fib_entry_notifier_info,
+ 					info);
+ 		fi = fen_info->fi;
+-		if (fi->nh) {
+-			NL_SET_ERR_MSG_MOD(info->extack, "IPv4 route with nexthop objects is not supported");
+-			return notifier_from_errno(-EINVAL);
+-		}
++		if (fi->nh)
++			return NOTIFY_DONE;
+ 		fib_dev = fib_info_nh(fen_info->fi, 0)->fib_nh_dev;
+ 		if (fib_dev != ldev->pf[MLX5_LAG_P1].netdev &&
+ 		    fib_dev != ldev->pf[MLX5_LAG_P2].netdev) {
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/cmd.h b/drivers/net/ethernet/mellanox/mlxsw/cmd.h
+index 5ffdfb532cb7f..91f68fb0b420a 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/cmd.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/cmd.h
+@@ -905,6 +905,18 @@ static inline int mlxsw_cmd_sw2hw_rdq(struct mlxsw_core *mlxsw_core,
+  */
+ MLXSW_ITEM32(cmd_mbox, sw2hw_dq, cq, 0x00, 24, 8);
+ 
++enum mlxsw_cmd_mbox_sw2hw_dq_sdq_lp {
++	MLXSW_CMD_MBOX_SW2HW_DQ_SDQ_LP_WQE,
++	MLXSW_CMD_MBOX_SW2HW_DQ_SDQ_LP_IGNORE_WQE,
++};
++
++/* cmd_mbox_sw2hw_dq_sdq_lp
++ * SDQ local Processing
++ * 0: local processing by wqe.lp
++ * 1: local processing (ignoring wqe.lp)
++ */
++MLXSW_ITEM32(cmd_mbox, sw2hw_dq, sdq_lp, 0x00, 23, 1);
++
+ /* cmd_mbox_sw2hw_dq_sdq_tclass
+  * SDQ: CPU Egress TClass
+  * RDQ: Reserved
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+index ffaeda75eec42..dbb16ce25bdf3 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+@@ -285,6 +285,7 @@ static int mlxsw_pci_sdq_init(struct mlxsw_pci *mlxsw_pci, char *mbox,
+ 			      struct mlxsw_pci_queue *q)
+ {
+ 	int tclass;
++	int lp;
+ 	int i;
+ 	int err;
+ 
+@@ -292,9 +293,12 @@ static int mlxsw_pci_sdq_init(struct mlxsw_pci *mlxsw_pci, char *mbox,
+ 	q->consumer_counter = 0;
+ 	tclass = q->num == MLXSW_PCI_SDQ_EMAD_INDEX ? MLXSW_PCI_SDQ_EMAD_TC :
+ 						      MLXSW_PCI_SDQ_CTL_TC;
++	lp = q->num == MLXSW_PCI_SDQ_EMAD_INDEX ? MLXSW_CMD_MBOX_SW2HW_DQ_SDQ_LP_IGNORE_WQE :
++						  MLXSW_CMD_MBOX_SW2HW_DQ_SDQ_LP_WQE;
+ 
+ 	/* Set CQ of same number of this SDQ. */
+ 	mlxsw_cmd_mbox_sw2hw_dq_cq_set(mbox, q->num);
++	mlxsw_cmd_mbox_sw2hw_dq_sdq_lp_set(mbox, lp);
+ 	mlxsw_cmd_mbox_sw2hw_dq_sdq_tclass_set(mbox, tclass);
+ 	mlxsw_cmd_mbox_sw2hw_dq_log2_dq_sz_set(mbox, 3); /* 8 pages */
+ 	for (i = 0; i < MLXSW_PCI_AQ_PAGES; i++) {
+@@ -1599,7 +1603,7 @@ static int mlxsw_pci_skb_transmit(void *bus_priv, struct sk_buff *skb,
+ 
+ 	wqe = elem_info->elem;
+ 	mlxsw_pci_wqe_c_set(wqe, 1); /* always report completion */
+-	mlxsw_pci_wqe_lp_set(wqe, !!tx_info->is_emad);
++	mlxsw_pci_wqe_lp_set(wqe, 0);
+ 	mlxsw_pci_wqe_type_set(wqe, MLXSW_PCI_WQE_TYPE_ETHERNET);
+ 
+ 	err = mlxsw_pci_wqe_frag_map(mlxsw_pci, wqe, 0, skb->data,
+@@ -1900,6 +1904,7 @@ int mlxsw_pci_driver_register(struct pci_driver *pci_driver)
+ {
+ 	pci_driver->probe = mlxsw_pci_probe;
+ 	pci_driver->remove = mlxsw_pci_remove;
++	pci_driver->shutdown = mlxsw_pci_remove;
+ 	return pci_register_driver(pci_driver);
+ }
+ EXPORT_SYMBOL(mlxsw_pci_driver_register);
+diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
+index 3655503352928..217e8333de6c6 100644
+--- a/drivers/net/ethernet/mscc/ocelot_flower.c
++++ b/drivers/net/ethernet/mscc/ocelot_flower.c
+@@ -462,13 +462,6 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
+ 			return -EOPNOTSUPP;
+ 		}
+ 
+-		if (filter->block_id == VCAP_IS1 &&
+-		    !is_zero_ether_addr(match.mask->dst)) {
+-			NL_SET_ERR_MSG_MOD(extack,
+-					   "Key type S1_NORMAL cannot match on destination MAC");
+-			return -EOPNOTSUPP;
+-		}
+-
+ 		/* The hw support mac matches only for MAC_ETYPE key,
+ 		 * therefore if other matches(port, tcp flags, etc) are added
+ 		 * then just bail out
+@@ -483,6 +476,14 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
+ 			return -EOPNOTSUPP;
+ 
+ 		flow_rule_match_eth_addrs(rule, &match);
++
++		if (filter->block_id == VCAP_IS1 &&
++		    !is_zero_ether_addr(match.mask->dst)) {
++			NL_SET_ERR_MSG_MOD(extack,
++					   "Key type S1_NORMAL cannot match on destination MAC");
++			return -EOPNOTSUPP;
++		}
++
+ 		filter->key_type = OCELOT_VCAP_KEY_ETYPE;
+ 		ether_addr_copy(filter->key.etype.dmac.value,
+ 				match.key->dst);
+diff --git a/drivers/net/ethernet/rocker/rocker_ofdpa.c b/drivers/net/ethernet/rocker/rocker_ofdpa.c
+index 7072b249c8bd6..8157666209798 100644
+--- a/drivers/net/ethernet/rocker/rocker_ofdpa.c
++++ b/drivers/net/ethernet/rocker/rocker_ofdpa.c
+@@ -2795,7 +2795,8 @@ static void ofdpa_fib4_abort(struct rocker *rocker)
+ 		if (!ofdpa_port)
+ 			continue;
+ 		nh->fib_nh_flags &= ~RTNH_F_OFFLOAD;
+-		ofdpa_flow_tbl_del(ofdpa_port, OFDPA_OP_FLAG_REMOVE,
++		ofdpa_flow_tbl_del(ofdpa_port,
++				   OFDPA_OP_FLAG_REMOVE | OFDPA_OP_FLAG_NOWAIT,
+ 				   flow_entry);
+ 	}
+ 	spin_unlock_irqrestore(&ofdpa->flow_tbl_lock, flags);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 69c79cc24e6e4..0baf85122f5ac 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -41,8 +41,9 @@
+ #include "xilinx_axienet.h"
+ 
+ /* Descriptors defines for Tx and Rx DMA */
+-#define TX_BD_NUM_DEFAULT		64
++#define TX_BD_NUM_DEFAULT		128
+ #define RX_BD_NUM_DEFAULT		1024
++#define TX_BD_NUM_MIN			(MAX_SKB_FRAGS + 1)
+ #define TX_BD_NUM_MAX			4096
+ #define RX_BD_NUM_MAX			4096
+ 
+@@ -496,7 +497,8 @@ static void axienet_setoptions(struct net_device *ndev, u32 options)
+ 
+ static int __axienet_device_reset(struct axienet_local *lp)
+ {
+-	u32 timeout;
++	u32 value;
++	int ret;
+ 
+ 	/* Reset Axi DMA. This would reset Axi Ethernet core as well. The reset
+ 	 * process of Axi DMA takes a while to complete as all pending
+@@ -506,15 +508,23 @@ static int __axienet_device_reset(struct axienet_local *lp)
+ 	 * they both reset the entire DMA core, so only one needs to be used.
+ 	 */
+ 	axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, XAXIDMA_CR_RESET_MASK);
+-	timeout = DELAY_OF_ONE_MILLISEC;
+-	while (axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET) &
+-				XAXIDMA_CR_RESET_MASK) {
+-		udelay(1);
+-		if (--timeout == 0) {
+-			netdev_err(lp->ndev, "%s: DMA reset timeout!\n",
+-				   __func__);
+-			return -ETIMEDOUT;
+-		}
++	ret = read_poll_timeout(axienet_dma_in32, value,
++				!(value & XAXIDMA_CR_RESET_MASK),
++				DELAY_OF_ONE_MILLISEC, 50000, false, lp,
++				XAXIDMA_TX_CR_OFFSET);
++	if (ret) {
++		dev_err(lp->dev, "%s: DMA reset timeout!\n", __func__);
++		return ret;
++	}
++
++	/* Wait for PhyRstCmplt bit to be set, indicating the PHY reset has finished */
++	ret = read_poll_timeout(axienet_ior, value,
++				value & XAE_INT_PHYRSTCMPLT_MASK,
++				DELAY_OF_ONE_MILLISEC, 50000, false, lp,
++				XAE_IS_OFFSET);
++	if (ret) {
++		dev_err(lp->dev, "%s: timeout waiting for PhyRstCmplt\n", __func__);
++		return ret;
+ 	}
+ 
+ 	return 0;
+@@ -623,6 +633,8 @@ static int axienet_free_tx_chain(struct net_device *ndev, u32 first_bd,
+ 		if (nr_bds == -1 && !(status & XAXIDMA_BD_STS_COMPLETE_MASK))
+ 			break;
+ 
++		/* Ensure we see complete descriptor update */
++		dma_rmb();
+ 		phys = desc_get_phys_addr(lp, cur_p);
+ 		dma_unmap_single(ndev->dev.parent, phys,
+ 				 (cur_p->cntrl & XAXIDMA_BD_CTRL_LENGTH_MASK),
+@@ -631,13 +643,15 @@ static int axienet_free_tx_chain(struct net_device *ndev, u32 first_bd,
+ 		if (cur_p->skb && (status & XAXIDMA_BD_STS_COMPLETE_MASK))
+ 			dev_consume_skb_irq(cur_p->skb);
+ 
+-		cur_p->cntrl = 0;
+ 		cur_p->app0 = 0;
+ 		cur_p->app1 = 0;
+ 		cur_p->app2 = 0;
+ 		cur_p->app4 = 0;
+-		cur_p->status = 0;
+ 		cur_p->skb = NULL;
++		/* ensure our transmit path and device don't prematurely see status cleared */
++		wmb();
++		cur_p->cntrl = 0;
++		cur_p->status = 0;
+ 
+ 		if (sizep)
+ 			*sizep += status & XAXIDMA_BD_STS_ACTUAL_LEN_MASK;
+@@ -646,6 +660,32 @@ static int axienet_free_tx_chain(struct net_device *ndev, u32 first_bd,
+ 	return i;
+ }
+ 
++/**
++ * axienet_check_tx_bd_space - Checks if a BD/group of BDs are currently busy
++ * @lp:		Pointer to the axienet_local structure
++ * @num_frag:	The number of BDs to check for
++ *
++ * Return: 0, on success
++ *	    NETDEV_TX_BUSY, if any of the descriptors are not free
++ *
++ * This function is invoked before BDs are allocated and transmission starts.
++ * This function returns 0 if a BD or group of BDs can be allocated for
++ * transmission. If the BD or any of the BDs are not free the function
++ * returns a busy status. This is invoked from axienet_start_xmit.
++ */
++static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
++					    int num_frag)
++{
++	struct axidma_bd *cur_p;
++
++	/* Ensure we see all descriptor updates from device or TX IRQ path */
++	rmb();
++	cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % lp->tx_bd_num];
++	if (cur_p->cntrl)
++		return NETDEV_TX_BUSY;
++	return 0;
++}
++
+ /**
+  * axienet_start_xmit_done - Invoked once a transmit is completed by the
+  * Axi DMA Tx channel.
+@@ -675,30 +715,8 @@ static void axienet_start_xmit_done(struct net_device *ndev)
+ 	/* Matches barrier in axienet_start_xmit */
+ 	smp_mb();
+ 
+-	netif_wake_queue(ndev);
+-}
+-
+-/**
+- * axienet_check_tx_bd_space - Checks if a BD/group of BDs are currently busy
+- * @lp:		Pointer to the axienet_local structure
+- * @num_frag:	The number of BDs to check for
+- *
+- * Return: 0, on success
+- *	    NETDEV_TX_BUSY, if any of the descriptors are not free
+- *
+- * This function is invoked before BDs are allocated and transmission starts.
+- * This function returns 0 if a BD or group of BDs can be allocated for
+- * transmission. If the BD or any of the BDs are not free the function
+- * returns a busy status. This is invoked from axienet_start_xmit.
+- */
+-static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
+-					    int num_frag)
+-{
+-	struct axidma_bd *cur_p;
+-	cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % lp->tx_bd_num];
+-	if (cur_p->status & XAXIDMA_BD_STS_ALL_MASK)
+-		return NETDEV_TX_BUSY;
+-	return 0;
++	if (!axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1))
++		netif_wake_queue(ndev);
+ }
+ 
+ /**
+@@ -730,20 +748,15 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	num_frag = skb_shinfo(skb)->nr_frags;
+ 	cur_p = &lp->tx_bd_v[lp->tx_bd_tail];
+ 
+-	if (axienet_check_tx_bd_space(lp, num_frag)) {
+-		if (netif_queue_stopped(ndev))
+-			return NETDEV_TX_BUSY;
+-
++	if (axienet_check_tx_bd_space(lp, num_frag + 1)) {
++		/* Should not happen as last start_xmit call should have
++		 * checked for sufficient space and queue should only be
++		 * woken when sufficient space is available.
++		 */
+ 		netif_stop_queue(ndev);
+-
+-		/* Matches barrier in axienet_start_xmit_done */
+-		smp_mb();
+-
+-		/* Space might have just been freed - check again */
+-		if (axienet_check_tx_bd_space(lp, num_frag))
+-			return NETDEV_TX_BUSY;
+-
+-		netif_wake_queue(ndev);
++		if (net_ratelimit())
++			netdev_warn(ndev, "TX ring unexpectedly full\n");
++		return NETDEV_TX_BUSY;
+ 	}
+ 
+ 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
+@@ -804,6 +817,18 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	if (++lp->tx_bd_tail >= lp->tx_bd_num)
+ 		lp->tx_bd_tail = 0;
+ 
++	/* Stop queue if next transmit may not have space */
++	if (axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1)) {
++		netif_stop_queue(ndev);
++
++		/* Matches barrier in axienet_start_xmit_done */
++		smp_mb();
++
++		/* Space might have just been freed - check again */
++		if (!axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1))
++			netif_wake_queue(ndev);
++	}
++
+ 	return NETDEV_TX_OK;
+ }
+ 
+@@ -834,6 +859,8 @@ static void axienet_recv(struct net_device *ndev)
+ 
+ 		tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
+ 
++		/* Ensure we see complete descriptor update */
++		dma_rmb();
+ 		phys = desc_get_phys_addr(lp, cur_p);
+ 		dma_unmap_single(ndev->dev.parent, phys, lp->max_frm_size,
+ 				 DMA_FROM_DEVICE);
+@@ -1355,7 +1382,8 @@ static int axienet_ethtools_set_ringparam(struct net_device *ndev,
+ 	if (ering->rx_pending > RX_BD_NUM_MAX ||
+ 	    ering->rx_mini_pending ||
+ 	    ering->rx_jumbo_pending ||
+-	    ering->rx_pending > TX_BD_NUM_MAX)
++	    ering->tx_pending < TX_BD_NUM_MIN ||
++	    ering->tx_pending > TX_BD_NUM_MAX)
+ 		return -EINVAL;
+ 
+ 	if (netif_running(ndev))
+@@ -2015,6 +2043,11 @@ static int axienet_probe(struct platform_device *pdev)
+ 	lp->coalesce_count_rx = XAXIDMA_DFT_RX_THRESHOLD;
+ 	lp->coalesce_count_tx = XAXIDMA_DFT_TX_THRESHOLD;
+ 
++	/* Reset core now that clocks are enabled, prior to accessing MDIO */
++	ret = __axienet_device_reset(lp);
++	if (ret)
++		goto cleanup_clk;
++
+ 	lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+ 	if (lp->phy_node) {
+ 		ret = axienet_mdio_setup(lp);
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index 91616182c311f..4dda2ab19c265 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -1090,6 +1090,12 @@ static int m88e1118_config_init(struct phy_device *phydev)
+ 	if (err < 0)
+ 		return err;
+ 
++	if (phy_interface_is_rgmii(phydev)) {
++		err = m88e1121_config_aneg_rgmii_delays(phydev);
++		if (err < 0)
++			return err;
++	}
++
+ 	/* Adjust LED Control */
+ 	if (phydev->dev_flags & MARVELL_PHY_M1118_DNS323_LEDS)
+ 		err = phy_write(phydev, 0x10, 0x1100);
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 2645ca35103c9..c416ab1d2b008 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -588,7 +588,7 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
+ 	mdiobus_setup_mdiodev_from_board_info(bus, mdiobus_create_device);
+ 
+ 	bus->state = MDIOBUS_REGISTERED;
+-	pr_info("%s: probed\n", bus->name);
++	dev_dbg(&bus->dev, "probed\n");
+ 	return 0;
+ 
+ error:
+diff --git a/drivers/net/phy/phy-core.c b/drivers/net/phy/phy-core.c
+index 8d333d3084ed3..cccb83dae673b 100644
+--- a/drivers/net/phy/phy-core.c
++++ b/drivers/net/phy/phy-core.c
+@@ -161,11 +161,11 @@ static const struct phy_setting settings[] = {
+ 	PHY_SETTING(   2500, FULL,   2500baseT_Full		),
+ 	PHY_SETTING(   2500, FULL,   2500baseX_Full		),
+ 	/* 1G */
+-	PHY_SETTING(   1000, FULL,   1000baseKX_Full		),
+ 	PHY_SETTING(   1000, FULL,   1000baseT_Full		),
+ 	PHY_SETTING(   1000, HALF,   1000baseT_Half		),
+ 	PHY_SETTING(   1000, FULL,   1000baseT1_Full		),
+ 	PHY_SETTING(   1000, FULL,   1000baseX_Full		),
++	PHY_SETTING(   1000, FULL,   1000baseKX_Full		),
+ 	/* 100M */
+ 	PHY_SETTING(    100, FULL,    100baseT_Full		),
+ 	PHY_SETTING(    100, FULL,    100baseT1_Full		),
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 32c34c728c7a1..efffa65f82143 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -1589,17 +1589,20 @@ static int sfp_sm_probe_for_phy(struct sfp *sfp)
+ static int sfp_module_parse_power(struct sfp *sfp)
+ {
+ 	u32 power_mW = 1000;
++	bool supports_a2;
+ 
+ 	if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_POWER_DECL))
+ 		power_mW = 1500;
+ 	if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_HIGH_POWER_LEVEL))
+ 		power_mW = 2000;
+ 
++	supports_a2 = sfp->id.ext.sff8472_compliance !=
++				SFP_SFF8472_COMPLIANCE_NONE ||
++		      sfp->id.ext.diagmon & SFP_DIAGMON_DDM;
++
+ 	if (power_mW > sfp->max_power_mW) {
+ 		/* Module power specification exceeds the allowed maximum. */
+-		if (sfp->id.ext.sff8472_compliance ==
+-			SFP_SFF8472_COMPLIANCE_NONE &&
+-		    !(sfp->id.ext.diagmon & SFP_DIAGMON_DDM)) {
++		if (!supports_a2) {
+ 			/* The module appears not to implement bus address
+ 			 * 0xa2, so assume that the module powers up in the
+ 			 * indicated mode.
+@@ -1616,11 +1619,25 @@ static int sfp_module_parse_power(struct sfp *sfp)
+ 		}
+ 	}
+ 
++	if (power_mW <= 1000) {
++		/* Modules below 1W do not require a power change sequence */
++		sfp->module_power_mW = power_mW;
++		return 0;
++	}
++
++	if (!supports_a2) {
++		/* The module power level is below the host maximum and the
++		 * module appears not to implement bus address 0xa2, so assume
++		 * that the module powers up in the indicated mode.
++		 */
++		return 0;
++	}
++
+ 	/* If the module requires a higher power mode, but also requires
+ 	 * an address change sequence, warn the user that the module may
+ 	 * not be functional.
+ 	 */
+-	if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE && power_mW > 1000) {
++	if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE) {
+ 		dev_warn(sfp->dev,
+ 			 "Address Change Sequence not supported but module requires %u.%uW, module may not be functional\n",
+ 			 power_mW / 1000, (power_mW / 100) % 10);
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index 33b2e0fb68bbb..2b9815ec4a622 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -69,6 +69,8 @@
+ #define MPHDRLEN	6	/* multilink protocol header length */
+ #define MPHDRLEN_SSN	4	/* ditto with short sequence numbers */
+ 
++#define PPP_PROTO_LEN	2
++
+ /*
+  * An instance of /dev/ppp can be associated with either a ppp
+  * interface unit or a ppp channel.  In both cases, file->private_data
+@@ -496,6 +498,9 @@ static ssize_t ppp_write(struct file *file, const char __user *buf,
+ 
+ 	if (!pf)
+ 		return -ENXIO;
++	/* All PPP packets should start with the 2-byte protocol */
++	if (count < PPP_PROTO_LEN)
++		return -EINVAL;
+ 	ret = -ENOMEM;
+ 	skb = alloc_skb(count + pf->hdrlen, GFP_KERNEL);
+ 	if (!skb)
+@@ -1632,7 +1637,7 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
+ 	}
+ 
+ 	++ppp->stats64.tx_packets;
+-	ppp->stats64.tx_bytes += skb->len - 2;
++	ppp->stats64.tx_bytes += skb->len - PPP_PROTO_LEN;
+ 
+ 	switch (proto) {
+ 	case PPP_IP:
+diff --git a/drivers/net/usb/mcs7830.c b/drivers/net/usb/mcs7830.c
+index 09bfa6a4dfbc1..7e40e2e2f3723 100644
+--- a/drivers/net/usb/mcs7830.c
++++ b/drivers/net/usb/mcs7830.c
+@@ -108,8 +108,16 @@ static const char driver_name[] = "MOSCHIP usb-ethernet driver";
+ 
+ static int mcs7830_get_reg(struct usbnet *dev, u16 index, u16 size, void *data)
+ {
+-	return usbnet_read_cmd(dev, MCS7830_RD_BREQ, MCS7830_RD_BMREQ,
+-				0x0000, index, data, size);
++	int ret;
++
++	ret = usbnet_read_cmd(dev, MCS7830_RD_BREQ, MCS7830_RD_BMREQ,
++			      0x0000, index, data, size);
++	if (ret < 0)
++		return ret;
++	else if (ret < size)
++		return -ENODATA;
++
++	return ret;
+ }
+ 
+ static int mcs7830_set_reg(struct usbnet *dev, u16 index, u16 size, const void *data)
+diff --git a/drivers/net/wireless/ath/ar5523/ar5523.c b/drivers/net/wireless/ath/ar5523/ar5523.c
+index 49cc4b7ed5163..1baec4b412c8d 100644
+--- a/drivers/net/wireless/ath/ar5523/ar5523.c
++++ b/drivers/net/wireless/ath/ar5523/ar5523.c
+@@ -153,6 +153,10 @@ static void ar5523_cmd_rx_cb(struct urb *urb)
+ 			ar5523_err(ar, "Invalid reply to WDCMSG_TARGET_START");
+ 			return;
+ 		}
++		if (!cmd->odata) {
++			ar5523_err(ar, "Unexpected WDCMSG_TARGET_START reply");
++			return;
++		}
+ 		memcpy(cmd->odata, hdr + 1, sizeof(u32));
+ 		cmd->olen = sizeof(u32);
+ 		cmd->res = 0;
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index d73ad60b571c2..d0967bb1f3871 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -89,6 +89,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = true,
+ 	},
+ 	{
+@@ -123,6 +124,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = true,
+ 	},
+ 	{
+@@ -158,6 +160,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 	},
+ 	{
+@@ -187,6 +190,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.num_wds_entries = 0x20,
+ 		.uart_pin_workaround = true,
+ 		.tx_stats_over_pktlog = false,
++		.credit_size_workaround = false,
+ 		.bmi_large_size_download = true,
+ 		.supports_peer_stats_info = true,
+ 	},
+@@ -222,6 +226,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 	},
+ 	{
+@@ -256,6 +261,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 	},
+ 	{
+@@ -290,6 +296,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 	},
+ 	{
+@@ -327,6 +334,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = true,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.supports_peer_stats_info = true,
+ 	},
+@@ -368,6 +376,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 	},
+ 	{
+@@ -415,6 +424,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 	},
+ 	{
+@@ -459,6 +469,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 	},
+ 	{
+@@ -493,6 +504,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 	},
+ 	{
+@@ -529,6 +541,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = true,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 	},
+ 	{
+@@ -557,6 +570,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.ast_skid_limit = 0x10,
+ 		.num_wds_entries = 0x20,
+ 		.uart_pin_workaround = true,
++		.credit_size_workaround = true,
+ 	},
+ 	{
+ 		.id = QCA4019_HW_1_0_DEV_VERSION,
+@@ -597,6 +611,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 	},
+ 	{
+@@ -624,6 +639,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = true,
+ 		.hw_filter_reset_required = false,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 	},
+ };
+@@ -697,6 +713,7 @@ static void ath10k_send_suspend_complete(struct ath10k *ar)
+ 
+ static int ath10k_init_sdio(struct ath10k *ar, enum ath10k_firmware_mode mode)
+ {
++	bool mtu_workaround = ar->hw_params.credit_size_workaround;
+ 	int ret;
+ 	u32 param = 0;
+ 
+@@ -714,7 +731,7 @@ static int ath10k_init_sdio(struct ath10k *ar, enum ath10k_firmware_mode mode)
+ 
+ 	param |= HI_ACS_FLAGS_SDIO_REDUCE_TX_COMPL_SET;
+ 
+-	if (mode == ATH10K_FIRMWARE_MODE_NORMAL)
++	if (mode == ATH10K_FIRMWARE_MODE_NORMAL && !mtu_workaround)
+ 		param |= HI_ACS_FLAGS_ALT_DATA_CREDIT_SIZE;
+ 	else
+ 		param &= ~HI_ACS_FLAGS_ALT_DATA_CREDIT_SIZE;
+diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
+index 1fc0a312ab587..5f67da47036cf 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_tx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
+@@ -147,6 +147,9 @@ void ath10k_htt_tx_dec_pending(struct ath10k_htt *htt)
+ 	htt->num_pending_tx--;
+ 	if (htt->num_pending_tx == htt->max_num_pending_tx - 1)
+ 		ath10k_mac_tx_unlock(htt->ar, ATH10K_TX_PAUSE_Q_FULL);
++
++	if (htt->num_pending_tx == 0)
++		wake_up(&htt->empty_tx_wq);
+ }
+ 
+ int ath10k_htt_tx_inc_pending(struct ath10k_htt *htt)
+diff --git a/drivers/net/wireless/ath/ath10k/hw.h b/drivers/net/wireless/ath/ath10k/hw.h
+index c6ded21f5ed69..d3ef83ad577da 100644
+--- a/drivers/net/wireless/ath/ath10k/hw.h
++++ b/drivers/net/wireless/ath/ath10k/hw.h
+@@ -618,6 +618,9 @@ struct ath10k_hw_params {
+ 	 */
+ 	bool uart_pin_workaround;
+ 
++	/* Workaround for the credit size calculation */
++	bool credit_size_workaround;
++
+ 	/* tx stats support over pktlog */
+ 	bool tx_stats_over_pktlog;
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/txrx.c b/drivers/net/wireless/ath/ath10k/txrx.c
+index aefe1f7f906c0..f51f1cf2c6a40 100644
+--- a/drivers/net/wireless/ath/ath10k/txrx.c
++++ b/drivers/net/wireless/ath/ath10k/txrx.c
+@@ -82,8 +82,6 @@ int ath10k_txrx_tx_unref(struct ath10k_htt *htt,
+ 	flags = skb_cb->flags;
+ 	ath10k_htt_tx_free_msdu_id(htt, tx_done->msdu_id);
+ 	ath10k_htt_tx_dec_pending(htt);
+-	if (htt->num_pending_tx == 0)
+-		wake_up(&htt->empty_tx_wq);
+ 	spin_unlock_bh(&htt->tx_lock);
+ 
+ 	rcu_read_lock();
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index 430723c64adce..9ff6e68533142 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -175,8 +175,11 @@ static void __ath11k_ahb_ext_irq_disable(struct ath11k_base *ab)
+ 
+ 		ath11k_ahb_ext_grp_disable(irq_grp);
+ 
+-		napi_synchronize(&irq_grp->napi);
+-		napi_disable(&irq_grp->napi);
++		if (irq_grp->napi_enabled) {
++			napi_synchronize(&irq_grp->napi);
++			napi_disable(&irq_grp->napi);
++			irq_grp->napi_enabled = false;
++		}
+ 	}
+ }
+ 
+@@ -206,13 +209,13 @@ static void ath11k_ahb_clearbit32(struct ath11k_base *ab, u8 bit, u32 offset)
+ 
+ static void ath11k_ahb_ce_irq_enable(struct ath11k_base *ab, u16 ce_id)
+ {
+-	const struct ce_pipe_config *ce_config;
++	const struct ce_attr *ce_attr;
+ 
+-	ce_config = &ab->hw_params.target_ce_config[ce_id];
+-	if (__le32_to_cpu(ce_config->pipedir) & PIPEDIR_OUT)
++	ce_attr = &ab->hw_params.host_ce_config[ce_id];
++	if (ce_attr->src_nentries)
+ 		ath11k_ahb_setbit32(ab, ce_id, CE_HOST_IE_ADDRESS);
+ 
+-	if (__le32_to_cpu(ce_config->pipedir) & PIPEDIR_IN) {
++	if (ce_attr->dest_nentries) {
+ 		ath11k_ahb_setbit32(ab, ce_id, CE_HOST_IE_2_ADDRESS);
+ 		ath11k_ahb_setbit32(ab, ce_id + CE_HOST_IE_3_SHIFT,
+ 				    CE_HOST_IE_3_ADDRESS);
+@@ -221,13 +224,13 @@ static void ath11k_ahb_ce_irq_enable(struct ath11k_base *ab, u16 ce_id)
+ 
+ static void ath11k_ahb_ce_irq_disable(struct ath11k_base *ab, u16 ce_id)
+ {
+-	const struct ce_pipe_config *ce_config;
++	const struct ce_attr *ce_attr;
+ 
+-	ce_config = &ab->hw_params.target_ce_config[ce_id];
+-	if (__le32_to_cpu(ce_config->pipedir) & PIPEDIR_OUT)
++	ce_attr = &ab->hw_params.host_ce_config[ce_id];
++	if (ce_attr->src_nentries)
+ 		ath11k_ahb_clearbit32(ab, ce_id, CE_HOST_IE_ADDRESS);
+ 
+-	if (__le32_to_cpu(ce_config->pipedir) & PIPEDIR_IN) {
++	if (ce_attr->dest_nentries) {
+ 		ath11k_ahb_clearbit32(ab, ce_id, CE_HOST_IE_2_ADDRESS);
+ 		ath11k_ahb_clearbit32(ab, ce_id + CE_HOST_IE_3_SHIFT,
+ 				      CE_HOST_IE_3_ADDRESS);
+@@ -300,7 +303,10 @@ static void ath11k_ahb_ext_irq_enable(struct ath11k_base *ab)
+ 	for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ 		struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+ 
+-		napi_enable(&irq_grp->napi);
++		if (!irq_grp->napi_enabled) {
++			napi_enable(&irq_grp->napi);
++			irq_grp->napi_enabled = true;
++		}
+ 		ath11k_ahb_ext_grp_enable(irq_grp);
+ 	}
+ }
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index c8e36251068c9..d2f2898d17b49 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -124,6 +124,7 @@ struct ath11k_ext_irq_grp {
+ 	u32 num_irq;
+ 	u32 grp_id;
+ 	u64 timestamp;
++	bool napi_enabled;
+ 	struct napi_struct napi;
+ 	struct net_device napi_ndev;
+ };
+@@ -687,7 +688,6 @@ struct ath11k_base {
+ 	u32 wlan_init_status;
+ 	int irq_num[ATH11K_IRQ_NUM_MAX];
+ 	struct ath11k_ext_irq_grp ext_irq_grp[ATH11K_EXT_IRQ_GRP_NUM_MAX];
+-	struct napi_struct *napi;
+ 	struct ath11k_targ_cap target_caps;
+ 	u32 ext_service_bitmap[WMI_SERVICE_EXT_BM_SIZE];
+ 	bool pdevs_macaddr_valid;
+diff --git a/drivers/net/wireless/ath/ath11k/dp.h b/drivers/net/wireless/ath/ath11k/dp.h
+index ee8db812589b3..c4972233149f4 100644
+--- a/drivers/net/wireless/ath/ath11k/dp.h
++++ b/drivers/net/wireless/ath/ath11k/dp.h
+@@ -514,7 +514,8 @@ struct htt_ppdu_stats_cfg_cmd {
+ } __packed;
+ 
+ #define HTT_PPDU_STATS_CFG_MSG_TYPE		GENMASK(7, 0)
+-#define HTT_PPDU_STATS_CFG_PDEV_ID		GENMASK(15, 8)
++#define HTT_PPDU_STATS_CFG_SOC_STATS		BIT(8)
++#define HTT_PPDU_STATS_CFG_PDEV_ID		GENMASK(15, 9)
+ #define HTT_PPDU_STATS_CFG_TLV_TYPE_BITMASK	GENMASK(31, 16)
+ 
+ enum htt_ppdu_stats_tag_type {
+diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.c b/drivers/net/wireless/ath/ath11k/dp_tx.c
+index 21dfd08d3debb..092eee735da29 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_tx.c
+@@ -894,7 +894,7 @@ int ath11k_dp_tx_htt_h2t_ppdu_stats_req(struct ath11k *ar, u32 mask)
+ 		cmd->msg = FIELD_PREP(HTT_PPDU_STATS_CFG_MSG_TYPE,
+ 				      HTT_H2T_MSG_TYPE_PPDU_STATS_CFG);
+ 
+-		pdev_mask = 1 << (i + 1);
++		pdev_mask = 1 << (ar->pdev_idx + i);
+ 		cmd->msg |= FIELD_PREP(HTT_PPDU_STATS_CFG_PDEV_ID, pdev_mask);
+ 		cmd->msg |= FIELD_PREP(HTT_PPDU_STATS_CFG_TLV_TYPE_BITMASK, mask);
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c
+index 9904c0eb75875..f3b9108ab6bd0 100644
+--- a/drivers/net/wireless/ath/ath11k/hal.c
++++ b/drivers/net/wireless/ath/ath11k/hal.c
+@@ -991,6 +991,7 @@ int ath11k_hal_srng_setup(struct ath11k_base *ab, enum hal_ring_type type,
+ 	srng->msi_data = params->msi_data;
+ 	srng->initialized = 1;
+ 	spin_lock_init(&srng->lock);
++	lockdep_set_class(&srng->lock, hal->srng_key + ring_id);
+ 
+ 	for (i = 0; i < HAL_SRNG_NUM_REG_GRP; i++) {
+ 		srng->hwreg_base[i] = srng_config->reg_start[i] +
+@@ -1237,6 +1238,24 @@ static int ath11k_hal_srng_create_config(struct ath11k_base *ab)
+ 	return 0;
+ }
+ 
++static void ath11k_hal_register_srng_key(struct ath11k_base *ab)
++{
++	struct ath11k_hal *hal = &ab->hal;
++	u32 ring_id;
++
++	for (ring_id = 0; ring_id < HAL_SRNG_RING_ID_MAX; ring_id++)
++		lockdep_register_key(hal->srng_key + ring_id);
++}
++
++static void ath11k_hal_unregister_srng_key(struct ath11k_base *ab)
++{
++	struct ath11k_hal *hal = &ab->hal;
++	u32 ring_id;
++
++	for (ring_id = 0; ring_id < HAL_SRNG_RING_ID_MAX; ring_id++)
++		lockdep_unregister_key(hal->srng_key + ring_id);
++}
++
+ int ath11k_hal_srng_init(struct ath11k_base *ab)
+ {
+ 	struct ath11k_hal *hal = &ab->hal;
+@@ -1256,6 +1275,8 @@ int ath11k_hal_srng_init(struct ath11k_base *ab)
+ 	if (ret)
+ 		goto err_free_cont_rdp;
+ 
++	ath11k_hal_register_srng_key(ab);
++
+ 	return 0;
+ 
+ err_free_cont_rdp:
+@@ -1270,6 +1291,7 @@ void ath11k_hal_srng_deinit(struct ath11k_base *ab)
+ {
+ 	struct ath11k_hal *hal = &ab->hal;
+ 
++	ath11k_hal_unregister_srng_key(ab);
+ 	ath11k_hal_free_cont_rdp(ab);
+ 	ath11k_hal_free_cont_wrp(ab);
+ 	kfree(hal->srng_config);
+diff --git a/drivers/net/wireless/ath/ath11k/hal.h b/drivers/net/wireless/ath/ath11k/hal.h
+index 1f1b29cd0aa39..5fbfded8d546c 100644
+--- a/drivers/net/wireless/ath/ath11k/hal.h
++++ b/drivers/net/wireless/ath/ath11k/hal.h
+@@ -888,6 +888,8 @@ struct ath11k_hal {
+ 	/* shadow register configuration */
+ 	u32 shadow_reg_addr[HAL_SHADOW_NUM_REGS];
+ 	int num_shadow_reg_configured;
++
++	struct lock_class_key srng_key[HAL_SRNG_RING_ID_MAX];
+ };
+ 
+ u32 ath11k_hal_reo_qdesc_size(u32 ba_window_size, u8 tid);
+diff --git a/drivers/net/wireless/ath/ath11k/hw.c b/drivers/net/wireless/ath/ath11k/hw.c
+index 66331da350129..f6282e8702923 100644
+--- a/drivers/net/wireless/ath/ath11k/hw.c
++++ b/drivers/net/wireless/ath/ath11k/hw.c
+@@ -246,8 +246,6 @@ const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_ipq8074 = {
+ const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_qca6390 = {
+ 	.tx  = {
+ 		ATH11K_TX_RING_MASK_0,
+-		ATH11K_TX_RING_MASK_1,
+-		ATH11K_TX_RING_MASK_2,
+ 	},
+ 	.rx_mon_status = {
+ 		0, 0, 0, 0,
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 0924bc8b35205..cc9122f420243 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
++ * Copyright (c) 2021 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <net/mac80211.h>
+@@ -792,11 +793,15 @@ static int ath11k_mac_setup_bcn_tmpl(struct ath11k_vif *arvif)
+ 
+ 	if (cfg80211_find_ie(WLAN_EID_RSN, ies, (skb_tail_pointer(bcn) - ies)))
+ 		arvif->rsnie_present = true;
++	else
++		arvif->rsnie_present = false;
+ 
+ 	if (cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
+ 				    WLAN_OUI_TYPE_MICROSOFT_WPA,
+ 				    ies, (skb_tail_pointer(bcn) - ies)))
+ 		arvif->wpaie_present = true;
++	else
++		arvif->wpaie_present = false;
+ 
+ 	ret = ath11k_wmi_bcn_tmpl(ar, arvif->vdev_id, &offs, bcn);
+ 
+@@ -2316,9 +2321,12 @@ static int ath11k_mac_op_hw_scan(struct ieee80211_hw *hw,
+ 	arg.scan_id = ATH11K_SCAN_ID;
+ 
+ 	if (req->ie_len) {
++		arg.extraie.ptr = kmemdup(req->ie, req->ie_len, GFP_KERNEL);
++		if (!arg.extraie.ptr) {
++			ret = -ENOMEM;
++			goto exit;
++		}
+ 		arg.extraie.len = req->ie_len;
+-		arg.extraie.ptr = kzalloc(req->ie_len, GFP_KERNEL);
+-		memcpy(arg.extraie.ptr, req->ie, req->ie_len);
+ 	}
+ 
+ 	if (req->n_ssids) {
+@@ -2395,9 +2403,7 @@ static int ath11k_install_key(struct ath11k_vif *arvif,
+ 		return 0;
+ 
+ 	if (cmd == DISABLE_KEY) {
+-		/* TODO: Check if FW expects  value other than NONE for del */
+-		/* arg.key_cipher = WMI_CIPHER_NONE; */
+-		arg.key_len = 0;
++		arg.key_cipher = WMI_CIPHER_NONE;
+ 		arg.key_data = NULL;
+ 		goto install;
+ 	}
+@@ -2529,7 +2535,7 @@ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 	/* flush the fragments cache during key (re)install to
+ 	 * ensure all frags in the new frag list belong to the same key.
+ 	 */
+-	if (peer && cmd == SET_KEY)
++	if (peer && sta && cmd == SET_KEY)
+ 		ath11k_peer_frags_flush(ar, peer);
+ 	spin_unlock_bh(&ab->base_lock);
+ 
+@@ -3878,23 +3884,32 @@ static int __ath11k_set_antenna(struct ath11k *ar, u32 tx_ant, u32 rx_ant)
+ 	return 0;
+ }
+ 
+-int ath11k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx)
++static void ath11k_mac_tx_mgmt_free(struct ath11k *ar, int buf_id)
+ {
+-	struct sk_buff *msdu = skb;
++	struct sk_buff *msdu;
+ 	struct ieee80211_tx_info *info;
+-	struct ath11k *ar = ctx;
+-	struct ath11k_base *ab = ar->ab;
+ 
+ 	spin_lock_bh(&ar->txmgmt_idr_lock);
+-	idr_remove(&ar->txmgmt_idr, buf_id);
++	msdu = idr_remove(&ar->txmgmt_idr, buf_id);
+ 	spin_unlock_bh(&ar->txmgmt_idr_lock);
+-	dma_unmap_single(ab->dev, ATH11K_SKB_CB(msdu)->paddr, msdu->len,
++
++	if (!msdu)
++		return;
++
++	dma_unmap_single(ar->ab->dev, ATH11K_SKB_CB(msdu)->paddr, msdu->len,
+ 			 DMA_TO_DEVICE);
+ 
+ 	info = IEEE80211_SKB_CB(msdu);
+ 	memset(&info->status, 0, sizeof(info->status));
+ 
+ 	ieee80211_free_txskb(ar->hw, msdu);
++}
++
++int ath11k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx)
++{
++	struct ath11k *ar = ctx;
++
++	ath11k_mac_tx_mgmt_free(ar, buf_id);
+ 
+ 	return 0;
+ }
+@@ -3903,17 +3918,10 @@ static int ath11k_mac_vif_txmgmt_idr_remove(int buf_id, void *skb, void *ctx)
+ {
+ 	struct ieee80211_vif *vif = ctx;
+ 	struct ath11k_skb_cb *skb_cb = ATH11K_SKB_CB((struct sk_buff *)skb);
+-	struct sk_buff *msdu = skb;
+ 	struct ath11k *ar = skb_cb->ar;
+-	struct ath11k_base *ab = ar->ab;
+ 
+-	if (skb_cb->vif == vif) {
+-		spin_lock_bh(&ar->txmgmt_idr_lock);
+-		idr_remove(&ar->txmgmt_idr, buf_id);
+-		spin_unlock_bh(&ar->txmgmt_idr_lock);
+-		dma_unmap_single(ab->dev, skb_cb->paddr, msdu->len,
+-				 DMA_TO_DEVICE);
+-	}
++	if (skb_cb->vif == vif)
++		ath11k_mac_tx_mgmt_free(ar, buf_id);
+ 
+ 	return 0;
+ }
+@@ -3928,6 +3936,8 @@ static int ath11k_mac_mgmt_tx_wmi(struct ath11k *ar, struct ath11k_vif *arvif,
+ 	int buf_id;
+ 	int ret;
+ 
++	ATH11K_SKB_CB(skb)->ar = ar;
++
+ 	spin_lock_bh(&ar->txmgmt_idr_lock);
+ 	buf_id = idr_alloc(&ar->txmgmt_idr, skb, 0,
+ 			   ATH11K_TX_MGMT_NUM_PENDING_MAX, GFP_ATOMIC);
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index d7eb6b7160bb4..105e344240c10 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -416,8 +416,11 @@ static void __ath11k_pci_ext_irq_disable(struct ath11k_base *sc)
+ 
+ 		ath11k_pci_ext_grp_disable(irq_grp);
+ 
+-		napi_synchronize(&irq_grp->napi);
+-		napi_disable(&irq_grp->napi);
++		if (irq_grp->napi_enabled) {
++			napi_synchronize(&irq_grp->napi);
++			napi_disable(&irq_grp->napi);
++			irq_grp->napi_enabled = false;
++		}
+ 	}
+ }
+ 
+@@ -436,7 +439,10 @@ static void ath11k_pci_ext_irq_enable(struct ath11k_base *ab)
+ 	for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ 		struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+ 
+-		napi_enable(&irq_grp->napi);
++		if (!irq_grp->napi_enabled) {
++			napi_enable(&irq_grp->napi);
++			irq_grp->napi_enabled = true;
++		}
+ 		ath11k_pci_ext_grp_enable(irq_grp);
+ 	}
+ }
+diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
+index b8f9f34408879..e34311516b958 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.c
++++ b/drivers/net/wireless/ath/ath11k/reg.c
+@@ -456,6 +456,9 @@ ath11k_reg_adjust_bw(u16 start_freq, u16 end_freq, u16 max_bw)
+ {
+ 	u16 bw;
+ 
++	if (end_freq <= start_freq)
++		return 0;
++
+ 	bw = end_freq - start_freq;
+ 	bw = min_t(u16, bw, max_bw);
+ 
+@@ -463,8 +466,10 @@ ath11k_reg_adjust_bw(u16 start_freq, u16 end_freq, u16 max_bw)
+ 		bw = 80;
+ 	else if (bw >= 40 && bw < 80)
+ 		bw = 40;
+-	else if (bw < 40)
++	else if (bw >= 20 && bw < 40)
+ 		bw = 20;
++	else
++		bw = 0;
+ 
+ 	return bw;
+ }
+@@ -488,73 +493,77 @@ ath11k_reg_update_weather_radar_band(struct ath11k_base *ab,
+ 				     struct cur_reg_rule *reg_rule,
+ 				     u8 *rule_idx, u32 flags, u16 max_bw)
+ {
++	u32 start_freq;
+ 	u32 end_freq;
+ 	u16 bw;
+ 	u8 i;
+ 
+ 	i = *rule_idx;
+ 
++	/* there might be situations when even the input rule must be dropped */
++	i--;
++
++	/* frequencies below weather radar */
+ 	bw = ath11k_reg_adjust_bw(reg_rule->start_freq,
+ 				  ETSI_WEATHER_RADAR_BAND_LOW, max_bw);
++	if (bw > 0) {
++		i++;
+ 
+-	ath11k_reg_update_rule(regd->reg_rules + i, reg_rule->start_freq,
+-			       ETSI_WEATHER_RADAR_BAND_LOW, bw,
+-			       reg_rule->ant_gain, reg_rule->reg_power,
+-			       flags);
++		ath11k_reg_update_rule(regd->reg_rules + i,
++				       reg_rule->start_freq,
++				       ETSI_WEATHER_RADAR_BAND_LOW, bw,
++				       reg_rule->ant_gain, reg_rule->reg_power,
++				       flags);
+ 
+-	ath11k_dbg(ab, ATH11K_DBG_REG,
+-		   "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
+-		   i + 1, reg_rule->start_freq, ETSI_WEATHER_RADAR_BAND_LOW,
+-		   bw, reg_rule->ant_gain, reg_rule->reg_power,
+-		   regd->reg_rules[i].dfs_cac_ms,
+-		   flags);
+-
+-	if (reg_rule->end_freq > ETSI_WEATHER_RADAR_BAND_HIGH)
+-		end_freq = ETSI_WEATHER_RADAR_BAND_HIGH;
+-	else
+-		end_freq = reg_rule->end_freq;
++		ath11k_dbg(ab, ATH11K_DBG_REG,
++			   "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
++			   i + 1, reg_rule->start_freq,
++			   ETSI_WEATHER_RADAR_BAND_LOW, bw, reg_rule->ant_gain,
++			   reg_rule->reg_power, regd->reg_rules[i].dfs_cac_ms,
++			   flags);
++	}
+ 
+-	bw = ath11k_reg_adjust_bw(ETSI_WEATHER_RADAR_BAND_LOW, end_freq,
+-				  max_bw);
++	/* weather radar frequencies */
++	start_freq = max_t(u32, reg_rule->start_freq,
++			   ETSI_WEATHER_RADAR_BAND_LOW);
++	end_freq = min_t(u32, reg_rule->end_freq, ETSI_WEATHER_RADAR_BAND_HIGH);
+ 
+-	i++;
++	bw = ath11k_reg_adjust_bw(start_freq, end_freq, max_bw);
++	if (bw > 0) {
++		i++;
+ 
+-	ath11k_reg_update_rule(regd->reg_rules + i,
+-			       ETSI_WEATHER_RADAR_BAND_LOW, end_freq, bw,
+-			       reg_rule->ant_gain, reg_rule->reg_power,
+-			       flags);
++		ath11k_reg_update_rule(regd->reg_rules + i, start_freq,
++				       end_freq, bw, reg_rule->ant_gain,
++				       reg_rule->reg_power, flags);
+ 
+-	regd->reg_rules[i].dfs_cac_ms = ETSI_WEATHER_RADAR_BAND_CAC_TIMEOUT;
++		regd->reg_rules[i].dfs_cac_ms = ETSI_WEATHER_RADAR_BAND_CAC_TIMEOUT;
+ 
+-	ath11k_dbg(ab, ATH11K_DBG_REG,
+-		   "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
+-		   i + 1, ETSI_WEATHER_RADAR_BAND_LOW, end_freq,
+-		   bw, reg_rule->ant_gain, reg_rule->reg_power,
+-		   regd->reg_rules[i].dfs_cac_ms,
+-		   flags);
+-
+-	if (end_freq == reg_rule->end_freq) {
+-		regd->n_reg_rules--;
+-		*rule_idx = i;
+-		return;
++		ath11k_dbg(ab, ATH11K_DBG_REG,
++			   "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
++			   i + 1, start_freq, end_freq, bw,
++			   reg_rule->ant_gain, reg_rule->reg_power,
++			   regd->reg_rules[i].dfs_cac_ms, flags);
+ 	}
+ 
++	/* frequencies above weather radar */
+ 	bw = ath11k_reg_adjust_bw(ETSI_WEATHER_RADAR_BAND_HIGH,
+ 				  reg_rule->end_freq, max_bw);
++	if (bw > 0) {
++		i++;
+ 
+-	i++;
+-
+-	ath11k_reg_update_rule(regd->reg_rules + i, ETSI_WEATHER_RADAR_BAND_HIGH,
+-			       reg_rule->end_freq, bw,
+-			       reg_rule->ant_gain, reg_rule->reg_power,
+-			       flags);
++		ath11k_reg_update_rule(regd->reg_rules + i,
++				       ETSI_WEATHER_RADAR_BAND_HIGH,
++				       reg_rule->end_freq, bw,
++				       reg_rule->ant_gain, reg_rule->reg_power,
++				       flags);
+ 
+-	ath11k_dbg(ab, ATH11K_DBG_REG,
+-		   "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
+-		   i + 1, ETSI_WEATHER_RADAR_BAND_HIGH, reg_rule->end_freq,
+-		   bw, reg_rule->ant_gain, reg_rule->reg_power,
+-		   regd->reg_rules[i].dfs_cac_ms,
+-		   flags);
++		ath11k_dbg(ab, ATH11K_DBG_REG,
++			   "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
++			   i + 1, ETSI_WEATHER_RADAR_BAND_HIGH,
++			   reg_rule->end_freq, bw, reg_rule->ant_gain,
++			   reg_rule->reg_power, regd->reg_rules[i].dfs_cac_ms,
++			   flags);
++	}
+ 
+ 	*rule_idx = i;
+ }
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index e84127165d858..53846dc9a5c5a 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -1665,7 +1665,8 @@ int ath11k_wmi_vdev_install_key(struct ath11k *ar,
+ 	tlv = (struct wmi_tlv *)(skb->data + sizeof(*cmd));
+ 	tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_BYTE) |
+ 		      FIELD_PREP(WMI_TLV_LEN, key_len_aligned);
+-	memcpy(tlv->value, (u8 *)arg->key_data, key_len_aligned);
++	if (arg->key_data)
++		memcpy(tlv->value, (u8 *)arg->key_data, key_len_aligned);
+ 
+ 	ret = ath11k_wmi_cmd_send(wmi, skb, WMI_VDEV_INSTALL_KEY_CMDID);
+ 	if (ret) {
+@@ -5421,7 +5422,7 @@ static int ath11k_reg_chan_list_event(struct ath11k_base *ab, struct sk_buff *sk
+ 		ar = ab->pdevs[pdev_idx].ar;
+ 		kfree(ab->new_regd[pdev_idx]);
+ 		ab->new_regd[pdev_idx] = regd;
+-		ieee80211_queue_work(ar->hw, &ar->regd_update_work);
++		queue_work(ab->workqueue, &ar->regd_update_work);
+ 	} else {
+ 		/* This regd would be applied during mac registration and is
+ 		 * held constant throughout for regd intersection purpose
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index 860da13bfb6ac..f06eec99de688 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -590,6 +590,13 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ 			return;
+ 		}
+ 
++		if (pkt_len > 2 * MAX_RX_BUF_SIZE) {
++			dev_err(&hif_dev->udev->dev,
++				"ath9k_htc: invalid pkt_len (%x)\n", pkt_len);
++			RX_STAT_INC(skb_dropped);
++			return;
++		}
++
+ 		pad_len = 4 - (pkt_len & 0x3);
+ 		if (pad_len == 4)
+ 			pad_len = 0;
+diff --git a/drivers/net/wireless/ath/wcn36xx/dxe.c b/drivers/net/wireless/ath/wcn36xx/dxe.c
+index cf4eb0fb28151..6c62ffc799a2b 100644
+--- a/drivers/net/wireless/ath/wcn36xx/dxe.c
++++ b/drivers/net/wireless/ath/wcn36xx/dxe.c
+@@ -272,6 +272,21 @@ static int wcn36xx_dxe_enable_ch_int(struct wcn36xx *wcn, u16 wcn_ch)
+ 	return 0;
+ }
+ 
++static void wcn36xx_dxe_disable_ch_int(struct wcn36xx *wcn, u16 wcn_ch)
++{
++	int reg_data = 0;
++
++	wcn36xx_dxe_read_register(wcn,
++				  WCN36XX_DXE_INT_MASK_REG,
++				  &reg_data);
++
++	reg_data &= ~wcn_ch;
++
++	wcn36xx_dxe_write_register(wcn,
++				   WCN36XX_DXE_INT_MASK_REG,
++				   (int)reg_data);
++}
++
+ static int wcn36xx_dxe_fill_skb(struct device *dev,
+ 				struct wcn36xx_dxe_ctl *ctl,
+ 				gfp_t gfp)
+@@ -869,7 +884,6 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
+ 		WCN36XX_DXE_WQ_TX_L);
+ 
+ 	wcn36xx_dxe_read_register(wcn, WCN36XX_DXE_REG_CH_EN, &reg_data);
+-	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_L);
+ 
+ 	/***************************************/
+ 	/* Init descriptors for TX HIGH channel */
+@@ -893,9 +907,6 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
+ 
+ 	wcn36xx_dxe_read_register(wcn, WCN36XX_DXE_REG_CH_EN, &reg_data);
+ 
+-	/* Enable channel interrupts */
+-	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_H);
+-
+ 	/***************************************/
+ 	/* Init descriptors for RX LOW channel */
+ 	/***************************************/
+@@ -905,7 +916,6 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
+ 		goto out_err_rxl_ch;
+ 	}
+ 
+-
+ 	/* For RX we need to preallocated buffers */
+ 	wcn36xx_dxe_ch_alloc_skb(wcn, &wcn->dxe_rx_l_ch);
+ 
+@@ -928,9 +938,6 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
+ 		WCN36XX_DXE_REG_CTL_RX_L,
+ 		WCN36XX_DXE_CH_DEFAULT_CTL_RX_L);
+ 
+-	/* Enable channel interrupts */
+-	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_L);
+-
+ 	/***************************************/
+ 	/* Init descriptors for RX HIGH channel */
+ 	/***************************************/
+@@ -962,15 +969,18 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
+ 		WCN36XX_DXE_REG_CTL_RX_H,
+ 		WCN36XX_DXE_CH_DEFAULT_CTL_RX_H);
+ 
+-	/* Enable channel interrupts */
+-	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_H);
+-
+ 	ret = wcn36xx_dxe_request_irqs(wcn);
+ 	if (ret < 0)
+ 		goto out_err_irq;
+ 
+ 	timer_setup(&wcn->tx_ack_timer, wcn36xx_dxe_tx_timer, 0);
+ 
++	/* Enable channel interrupts */
++	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_L);
++	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_H);
++	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_L);
++	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_H);
++
+ 	return 0;
+ 
+ out_err_irq:
+@@ -987,6 +997,14 @@ out_err_txh_ch:
+ 
+ void wcn36xx_dxe_deinit(struct wcn36xx *wcn)
+ {
++	int reg_data = 0;
++
++	/* Disable channel interrupts */
++	wcn36xx_dxe_disable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_H);
++	wcn36xx_dxe_disable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_L);
++	wcn36xx_dxe_disable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_H);
++	wcn36xx_dxe_disable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_L);
++
+ 	free_irq(wcn->tx_irq, wcn);
+ 	free_irq(wcn->rx_irq, wcn);
+ 	del_timer(&wcn->tx_ack_timer);
+@@ -996,6 +1014,15 @@ void wcn36xx_dxe_deinit(struct wcn36xx *wcn)
+ 		wcn->tx_ack_skb = NULL;
+ 	}
+ 
++	/* Put the DXE block into reset before freeing memory */
++	reg_data = WCN36XX_DXE_REG_RESET;
++	wcn36xx_dxe_write_register(wcn, WCN36XX_DXE_REG_CSR_RESET, reg_data);
++
+ 	wcn36xx_dxe_ch_free_skbs(wcn, &wcn->dxe_rx_l_ch);
+ 	wcn36xx_dxe_ch_free_skbs(wcn, &wcn->dxe_rx_h_ch);
++
++	wcn36xx_dxe_deinit_descs(wcn->dev, &wcn->dxe_tx_l_ch);
++	wcn36xx_dxe_deinit_descs(wcn->dev, &wcn->dxe_tx_h_ch);
++	wcn36xx_dxe_deinit_descs(wcn->dev, &wcn->dxe_rx_l_ch);
++	wcn36xx_dxe_deinit_descs(wcn->dev, &wcn->dxe_rx_h_ch);
+ }
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index 629ddfd74da1a..9aaf6f7473333 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -397,6 +397,7 @@ static void wcn36xx_change_opchannel(struct wcn36xx *wcn, int ch)
+ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
+ {
+ 	struct wcn36xx *wcn = hw->priv;
++	int ret;
+ 
+ 	wcn36xx_dbg(WCN36XX_DBG_MAC, "mac config changed 0x%08x\n", changed);
+ 
+@@ -412,17 +413,31 @@ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
+ 			 * want to receive/transmit regular data packets, then
+ 			 * simply stop the scan session and exit PS mode.
+ 			 */
+-			wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN,
+-						wcn->sw_scan_vif);
+-			wcn->sw_scan_channel = 0;
++			if (wcn->sw_scan_channel)
++				wcn36xx_smd_end_scan(wcn, wcn->sw_scan_channel);
++			if (wcn->sw_scan_init) {
++				wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN,
++							wcn->sw_scan_vif);
++			}
+ 		} else if (wcn->sw_scan) {
+ 			/* A scan is ongoing, do not change the operating
+ 			 * channel, but start a scan session on the channel.
+ 			 */
+-			wcn36xx_smd_init_scan(wcn, HAL_SYS_MODE_SCAN,
+-					      wcn->sw_scan_vif);
++			if (wcn->sw_scan_channel)
++				wcn36xx_smd_end_scan(wcn, wcn->sw_scan_channel);
++			if (!wcn->sw_scan_init) {
++				/* This can fail if we are unable to notify the
++				 * operating channel.
++				 */
++				ret = wcn36xx_smd_init_scan(wcn,
++							    HAL_SYS_MODE_SCAN,
++							    wcn->sw_scan_vif);
++				if (ret) {
++					mutex_unlock(&wcn->conf_mutex);
++					return -EIO;
++				}
++			}
+ 			wcn36xx_smd_start_scan(wcn, ch);
+-			wcn->sw_scan_channel = ch;
+ 		} else {
+ 			wcn36xx_change_opchannel(wcn, ch);
+ 		}
+@@ -709,7 +724,12 @@ static void wcn36xx_sw_scan_complete(struct ieee80211_hw *hw,
+ 	struct wcn36xx *wcn = hw->priv;
+ 
+ 	/* ensure that any scan session is finished */
+-	wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN, wcn->sw_scan_vif);
++	if (wcn->sw_scan_channel)
++		wcn36xx_smd_end_scan(wcn, wcn->sw_scan_channel);
++	if (wcn->sw_scan_init) {
++		wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN,
++					wcn->sw_scan_vif);
++	}
+ 	wcn->sw_scan = false;
+ 	wcn->sw_scan_opchannel = 0;
+ }
+diff --git a/drivers/net/wireless/ath/wcn36xx/smd.c b/drivers/net/wireless/ath/wcn36xx/smd.c
+index 3793907ace92e..7f00cb6f5e16b 100644
+--- a/drivers/net/wireless/ath/wcn36xx/smd.c
++++ b/drivers/net/wireless/ath/wcn36xx/smd.c
+@@ -730,6 +730,7 @@ int wcn36xx_smd_init_scan(struct wcn36xx *wcn, enum wcn36xx_hal_sys_mode mode,
+ 		wcn36xx_err("hal_init_scan response failed err=%d\n", ret);
+ 		goto out;
+ 	}
++	wcn->sw_scan_init = true;
+ out:
+ 	mutex_unlock(&wcn->hal_mutex);
+ 	return ret;
+@@ -760,6 +761,7 @@ int wcn36xx_smd_start_scan(struct wcn36xx *wcn, u8 scan_channel)
+ 		wcn36xx_err("hal_start_scan response failed err=%d\n", ret);
+ 		goto out;
+ 	}
++	wcn->sw_scan_channel = scan_channel;
+ out:
+ 	mutex_unlock(&wcn->hal_mutex);
+ 	return ret;
+@@ -790,6 +792,7 @@ int wcn36xx_smd_end_scan(struct wcn36xx *wcn, u8 scan_channel)
+ 		wcn36xx_err("hal_end_scan response failed err=%d\n", ret);
+ 		goto out;
+ 	}
++	wcn->sw_scan_channel = 0;
+ out:
+ 	mutex_unlock(&wcn->hal_mutex);
+ 	return ret;
+@@ -831,6 +834,7 @@ int wcn36xx_smd_finish_scan(struct wcn36xx *wcn,
+ 		wcn36xx_err("hal_finish_scan response failed err=%d\n", ret);
+ 		goto out;
+ 	}
++	wcn->sw_scan_init = false;
+ out:
+ 	mutex_unlock(&wcn->hal_mutex);
+ 	return ret;
+@@ -2603,7 +2607,7 @@ static int wcn36xx_smd_missed_beacon_ind(struct wcn36xx *wcn,
+ 			wcn36xx_dbg(WCN36XX_DBG_HAL, "beacon missed bss_index %d\n",
+ 				    tmp->bss_index);
+ 			vif = wcn36xx_priv_to_vif(tmp);
+-			ieee80211_connection_loss(vif);
++			ieee80211_beacon_loss(vif);
+ 		}
+ 		return 0;
+ 	}
+@@ -2618,7 +2622,7 @@ static int wcn36xx_smd_missed_beacon_ind(struct wcn36xx *wcn,
+ 			wcn36xx_dbg(WCN36XX_DBG_HAL, "beacon missed bss_index %d\n",
+ 				    rsp->bss_index);
+ 			vif = wcn36xx_priv_to_vif(tmp);
+-			ieee80211_connection_loss(vif);
++			ieee80211_beacon_loss(vif);
+ 			return 0;
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/ath/wcn36xx/txrx.c b/drivers/net/wireless/ath/wcn36xx/txrx.c
+index bbd7194c82e27..f33e7228a1010 100644
+--- a/drivers/net/wireless/ath/wcn36xx/txrx.c
++++ b/drivers/net/wireless/ath/wcn36xx/txrx.c
+@@ -237,7 +237,6 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 	const struct wcn36xx_rate *rate;
+ 	struct ieee80211_hdr *hdr;
+ 	struct wcn36xx_rx_bd *bd;
+-	struct ieee80211_supported_band *sband;
+ 	u16 fc, sn;
+ 
+ 	/*
+@@ -259,8 +258,6 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 	fc = __le16_to_cpu(hdr->frame_control);
+ 	sn = IEEE80211_SEQ_TO_SN(__le16_to_cpu(hdr->seq_ctrl));
+ 
+-	status.freq = WCN36XX_CENTER_FREQ(wcn);
+-	status.band = WCN36XX_BAND(wcn);
+ 	status.mactime = 10;
+ 	status.signal = -get_rssi0(bd);
+ 	status.antenna = 1;
+@@ -272,18 +269,36 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 
+ 	wcn36xx_dbg(WCN36XX_DBG_RX, "status.flags=%x\n", status.flag);
+ 
++	if (bd->scan_learn) {
++		/* If packet originate from hardware scanning, extract the
++		 * band/channel from bd descriptor.
++		 */
++		u8 hwch = (bd->reserved0 << 4) + bd->rx_ch;
++
++		if (bd->rf_band != 1 && hwch <= sizeof(ab_rx_ch_map) && hwch >= 1) {
++			status.band = NL80211_BAND_5GHZ;
++			status.freq = ieee80211_channel_to_frequency(ab_rx_ch_map[hwch - 1],
++								     status.band);
++		} else {
++			status.band = NL80211_BAND_2GHZ;
++			status.freq = ieee80211_channel_to_frequency(hwch, status.band);
++		}
++	} else {
++		status.band = WCN36XX_BAND(wcn);
++		status.freq = WCN36XX_CENTER_FREQ(wcn);
++	}
++
+ 	if (bd->rate_id < ARRAY_SIZE(wcn36xx_rate_table)) {
+ 		rate = &wcn36xx_rate_table[bd->rate_id];
+ 		status.encoding = rate->encoding;
+ 		status.enc_flags = rate->encoding_flags;
+ 		status.bw = rate->bw;
+ 		status.rate_idx = rate->mcs_or_legacy_index;
+-		sband = wcn->hw->wiphy->bands[status.band];
+ 		status.nss = 1;
+ 
+ 		if (status.band == NL80211_BAND_5GHZ &&
+ 		    status.encoding == RX_ENC_LEGACY &&
+-		    status.rate_idx >= sband->n_bitrates) {
++		    status.rate_idx >= 4) {
+ 			/* no dsss rates in 5Ghz rates table */
+ 			status.rate_idx -= 4;
+ 		}
+@@ -298,22 +313,6 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 	    ieee80211_is_probe_resp(hdr->frame_control))
+ 		status.boottime_ns = ktime_get_boottime_ns();
+ 
+-	if (bd->scan_learn) {
+-		/* If packet originates from hardware scanning, extract the
+-		 * band/channel from bd descriptor.
+-		 */
+-		u8 hwch = (bd->reserved0 << 4) + bd->rx_ch;
+-
+-		if (bd->rf_band != 1 && hwch <= sizeof(ab_rx_ch_map) && hwch >= 1) {
+-			status.band = NL80211_BAND_5GHZ;
+-			status.freq = ieee80211_channel_to_frequency(ab_rx_ch_map[hwch - 1],
+-								     status.band);
+-		} else {
+-			status.band = NL80211_BAND_2GHZ;
+-			status.freq = ieee80211_channel_to_frequency(hwch, status.band);
+-		}
+-	}
+-
+ 	memcpy(IEEE80211_SKB_RXCB(skb), &status, sizeof(status));
+ 
+ 	if (ieee80211_is_beacon(hdr->frame_control)) {
+diff --git a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+index 9b4dee2fc6483..5c40d0bdee245 100644
+--- a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
++++ b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+@@ -231,6 +231,7 @@ struct wcn36xx {
+ 	struct cfg80211_scan_request *scan_req;
+ 	bool			sw_scan;
+ 	u8			sw_scan_opchannel;
++	bool			sw_scan_init;
+ 	u8			sw_scan_channel;
+ 	struct ieee80211_vif	*sw_scan_vif;
+ 	struct mutex		scan_lock;
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index be214f39f52be..30c6d7b18599a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -185,6 +185,9 @@ static void iwl_dealloc_ucode(struct iwl_drv *drv)
+ 
+ 	for (i = 0; i < IWL_UCODE_TYPE_MAX; i++)
+ 		iwl_free_fw_img(drv, drv->fw.img + i);
++
++	/* clear the data for the aborted load case */
++	memset(&drv->fw, 0, sizeof(drv->fw));
+ }
+ 
+ static int iwl_alloc_fw_desc(struct iwl_drv *drv, struct fw_desc *desc,
+@@ -1365,6 +1368,7 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
+ 	int i;
+ 	bool load_module = false;
+ 	bool usniffer_images = false;
++	bool failure = true;
+ 
+ 	fw->ucode_capa.max_probe_length = IWL_DEFAULT_MAX_PROBE_LENGTH;
+ 	fw->ucode_capa.standard_phy_calibration_size =
+@@ -1625,15 +1629,9 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
+ 	 * else from proceeding if the module fails to load
+ 	 * or hangs loading.
+ 	 */
+-	if (load_module) {
++	if (load_module)
+ 		request_module("%s", op->name);
+-#ifdef CONFIG_IWLWIFI_OPMODE_MODULAR
+-		if (err)
+-			IWL_ERR(drv,
+-				"failed to load module %s (error %d), is dynamic loading enabled?\n",
+-				op->name, err);
+-#endif
+-	}
++	failure = false;
+ 	goto free;
+ 
+  try_again:
+@@ -1649,6 +1647,9 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
+ 	complete(&drv->request_firmware_complete);
+ 	device_release_driver(drv->trans->dev);
+  free:
++	if (failure)
++		iwl_dealloc_ucode(drv);
++
+ 	if (pieces) {
+ 		for (i = 0; i < ARRAY_SIZE(pieces->img); i++)
+ 			kfree(pieces->img[i].sec);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+index a0ce761d0c59b..b1335fe3b01a2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+@@ -967,7 +967,7 @@ static void iwl_mvm_ftm_rtt_smoothing(struct iwl_mvm *mvm,
+ 	overshoot = IWL_MVM_FTM_INITIATOR_SMOOTH_OVERSHOOT;
+ 	alpha = IWL_MVM_FTM_INITIATOR_SMOOTH_ALPHA;
+ 
+-	rtt_avg = (alpha * rtt + (100 - alpha) * resp->rtt_avg) / 100;
++	rtt_avg = div_s64(alpha * rtt + (100 - alpha) * resp->rtt_avg, 100);
+ 
+ 	IWL_DEBUG_INFO(mvm,
+ 		       "%pM: prev rtt_avg=%lld, new rtt_avg=%lld, rtt=%lld\n",
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 81cc85a97eb20..922a7ea0cd24e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1739,6 +1739,7 @@ static void iwl_mvm_recalc_multicast(struct iwl_mvm *mvm)
+ 	struct iwl_mvm_mc_iter_data iter_data = {
+ 		.mvm = mvm,
+ 	};
++	int ret;
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+@@ -1748,6 +1749,22 @@ static void iwl_mvm_recalc_multicast(struct iwl_mvm *mvm)
+ 	ieee80211_iterate_active_interfaces_atomic(
+ 		mvm->hw, IEEE80211_IFACE_ITER_NORMAL,
+ 		iwl_mvm_mc_iface_iterator, &iter_data);
++
++	/*
++	 * Send a (synchronous) ech command so that we wait for the
++	 * multiple asynchronous MCAST_FILTER_CMD commands sent by
++	 * the interface iterator. Otherwise, we might get here over
++	 * and over again (by userspace just sending a lot of these)
++	 * and the CPU can send them faster than the firmware can
++	 * process them.
++	 * Note that the CPU is still faster - but with this we'll
++	 * actually send fewer commands overall because the CPU will
++	 * not schedule the work in mac80211 as frequently if it's
++	 * still running when rescheduled (possibly multiple times).
++	 */
++	ret = iwl_mvm_send_cmd_pdu(mvm, ECHO_CMD, 0, 0, NULL);
++	if (ret)
++		IWL_ERR(mvm, "Failed to synchronize multicast groups update\n");
+ }
+ 
+ static u64 iwl_mvm_prepare_multicast(struct ieee80211_hw *hw,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index 838734fec5023..86b3fb321dfdd 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -177,12 +177,39 @@ static int iwl_mvm_create_skb(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 	struct iwl_rx_mpdu_desc *desc = (void *)pkt->data;
+ 	unsigned int headlen, fraglen, pad_len = 0;
+ 	unsigned int hdrlen = ieee80211_hdrlen(hdr->frame_control);
++	u8 mic_crc_len = u8_get_bits(desc->mac_flags1,
++				     IWL_RX_MPDU_MFLG1_MIC_CRC_LEN_MASK) << 1;
+ 
+ 	if (desc->mac_flags2 & IWL_RX_MPDU_MFLG2_PAD) {
+ 		len -= 2;
+ 		pad_len = 2;
+ 	}
+ 
++	/*
++	 * For non monitor interface strip the bytes the RADA might not have
++	 * removed. As monitor interface cannot exist with other interfaces
++	 * this removal is safe.
++	 */
++	if (mic_crc_len && !ieee80211_hw_check(mvm->hw, RX_INCLUDES_FCS)) {
++		u32 pkt_flags = le32_to_cpu(pkt->len_n_flags);
++
++		/*
++		 * If RADA was not enabled then decryption was not performed so
++		 * the MIC cannot be removed.
++		 */
++		if (!(pkt_flags & FH_RSCSR_RADA_EN)) {
++			if (WARN_ON(crypt_len > mic_crc_len))
++				return -EINVAL;
++
++			mic_crc_len -= crypt_len;
++		}
++
++		if (WARN_ON(mic_crc_len > len))
++			return -EINVAL;
++
++		len -= mic_crc_len;
++	}
++
+ 	/* If frame is small enough to fit in skb->head, pull it completely.
+ 	 * If not, only pull ieee80211_hdr (including crypto if present, and
+ 	 * an additional 8 bytes for SNAP/ethertype, see below) so that
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index a5d90e028833c..46255d2c555b6 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -2157,7 +2157,7 @@ static int iwl_mvm_check_running_scans(struct iwl_mvm *mvm, int type)
+ 	return -EIO;
+ }
+ 
+-#define SCAN_TIMEOUT 20000
++#define SCAN_TIMEOUT 30000
+ 
+ void iwl_mvm_scan_timeout_wk(struct work_struct *work)
+ {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index 394598b14a173..3f081cdea09ca 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -98,14 +98,13 @@ void iwl_mvm_roc_done_wk(struct work_struct *wk)
+ 	struct iwl_mvm *mvm = container_of(wk, struct iwl_mvm, roc_done_wk);
+ 
+ 	/*
+-	 * Clear the ROC_RUNNING /ROC_AUX_RUNNING status bit.
++	 * Clear the ROC_RUNNING status bit.
+ 	 * This will cause the TX path to drop offchannel transmissions.
+ 	 * That would also be done by mac80211, but it is racy, in particular
+ 	 * in the case that the time event actually completed in the firmware
+ 	 * (which is handled in iwl_mvm_te_handle_notif).
+ 	 */
+ 	clear_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status);
+-	clear_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status);
+ 
+ 	synchronize_net();
+ 
+@@ -131,9 +130,19 @@ void iwl_mvm_roc_done_wk(struct work_struct *wk)
+ 			mvmvif = iwl_mvm_vif_from_mac80211(mvm->p2p_device_vif);
+ 			iwl_mvm_flush_sta(mvm, &mvmvif->bcast_sta, true);
+ 		}
+-	} else {
++	}
++
++	/*
++	 * Clear the ROC_AUX_RUNNING status bit.
++	 * This will cause the TX path to drop offchannel transmissions.
++	 * That would also be done by mac80211, but it is racy, in particular
++	 * in the case that the time event actually completed in the firmware
++	 * (which is handled in iwl_mvm_te_handle_notif).
++	 */
++	if (test_and_clear_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status)) {
+ 		/* do the same in case of hot spot 2.0 */
+ 		iwl_mvm_flush_sta(mvm, &mvm->aux_sta, true);
++
+ 		/* In newer version of this command an aux station is added only
+ 		 * in cases of dedicated tx queue and need to be removed in end
+ 		 * of use */
+@@ -1157,15 +1166,10 @@ void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
+ 			cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
+ 							mvmvif->color)),
+ 		.action = cpu_to_le32(FW_CTXT_ACTION_ADD),
++		.conf_id = cpu_to_le32(SESSION_PROTECT_CONF_ASSOC),
+ 		.duration_tu = cpu_to_le32(MSEC_TO_TU(duration)),
+ 	};
+ 
+-	/* The time_event_data.id field is reused to save session
+-	 * protection's configuration.
+-	 */
+-	mvmvif->time_event_data.id = SESSION_PROTECT_CONF_ASSOC;
+-	cmd.conf_id = cpu_to_le32(mvmvif->time_event_data.id);
+-
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+ 	spin_lock_bh(&mvm->time_event_lock);
+@@ -1179,6 +1183,11 @@ void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
+ 	}
+ 
+ 	iwl_mvm_te_clear_data(mvm, te_data);
++	/*
++	 * The time_event_data.id field is reused to save session
++	 * protection's configuration.
++	 */
++	te_data->id = le32_to_cpu(cmd.conf_id);
+ 	te_data->duration = le32_to_cpu(cmd.duration_tu);
+ 	spin_unlock_bh(&mvm->time_event_lock);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index 2c13fa8f28200..6aedf5762571d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -2260,7 +2260,12 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
+ 		}
+ 	}
+ 
+-	if (inta_hw & MSIX_HW_INT_CAUSES_REG_WAKEUP) {
++	/*
++	 * In some rare cases when the HW is in a bad state, we may
++	 * get this interrupt too early, when prph_info is still NULL.
++	 * So make sure that it's not NULL to prevent crashing.
++	 */
++	if (inta_hw & MSIX_HW_INT_CAUSES_REG_WAKEUP && trans_pcie->prph_info) {
+ 		u32 sleep_notif =
+ 			le32_to_cpu(trans_pcie->prph_info->sleep_notif);
+ 		if (sleep_notif == IWL_D3_SLEEP_STATUS_SUSPEND ||
+diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.c b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
+index 9181221a2434d..0136df00ff6a6 100644
+--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
+@@ -1148,6 +1148,7 @@ int iwl_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
+ 	return 0;
+ err_free_tfds:
+ 	dma_free_coherent(trans->dev, tfd_sz, txq->tfds, txq->dma_addr);
++	txq->tfds = NULL;
+ error:
+ 	if (txq->entries && cmd_queue)
+ 		for (i = 0; i < slots_num; i++)
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_event.c b/drivers/net/wireless/marvell/mwifiex/sta_event.c
+index bc79ca4cb803c..753458628f86a 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_event.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_event.c
+@@ -364,10 +364,12 @@ static void mwifiex_process_uap_tx_pause(struct mwifiex_private *priv,
+ 		sta_ptr = mwifiex_get_sta_entry(priv, tp->peermac);
+ 		if (sta_ptr && sta_ptr->tx_pause != tp->tx_pause) {
+ 			sta_ptr->tx_pause = tp->tx_pause;
++			spin_unlock_bh(&priv->sta_list_spinlock);
+ 			mwifiex_update_ralist_tx_pause(priv, tp->peermac,
+ 						       tp->tx_pause);
++		} else {
++			spin_unlock_bh(&priv->sta_list_spinlock);
+ 		}
+-		spin_unlock_bh(&priv->sta_list_spinlock);
+ 	}
+ }
+ 
+@@ -399,11 +401,13 @@ static void mwifiex_process_sta_tx_pause(struct mwifiex_private *priv,
+ 			sta_ptr = mwifiex_get_sta_entry(priv, tp->peermac);
+ 			if (sta_ptr && sta_ptr->tx_pause != tp->tx_pause) {
+ 				sta_ptr->tx_pause = tp->tx_pause;
++				spin_unlock_bh(&priv->sta_list_spinlock);
+ 				mwifiex_update_ralist_tx_pause(priv,
+ 							       tp->peermac,
+ 							       tp->tx_pause);
++			} else {
++				spin_unlock_bh(&priv->sta_list_spinlock);
+ 			}
+-			spin_unlock_bh(&priv->sta_list_spinlock);
+ 		}
+ 	}
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/usb.c b/drivers/net/wireless/marvell/mwifiex/usb.c
+index 9736aa0ab7fd4..8f01fcbe93961 100644
+--- a/drivers/net/wireless/marvell/mwifiex/usb.c
++++ b/drivers/net/wireless/marvell/mwifiex/usb.c
+@@ -130,7 +130,8 @@ static int mwifiex_usb_recv(struct mwifiex_adapter *adapter,
+ 		default:
+ 			mwifiex_dbg(adapter, ERROR,
+ 				    "unknown recv_type %#x\n", recv_type);
+-			return -1;
++			ret = -1;
++			goto exit_restore_skb;
+ 		}
+ 		break;
+ 	case MWIFIEX_USB_EP_DATA:
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 565efd8806247..2ef1416899f03 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -1652,7 +1652,7 @@ int rtw_core_init(struct rtw_dev *rtwdev)
+ 
+ 	/* default rx filter setting */
+ 	rtwdev->hal.rcr = BIT_APP_FCS | BIT_APP_MIC | BIT_APP_ICV |
+-			  BIT_HTC_LOC_CTRL | BIT_APP_PHYSTS |
++			  BIT_PKTCTL_DLEN | BIT_HTC_LOC_CTRL | BIT_APP_PHYSTS |
+ 			  BIT_AB | BIT_AM | BIT_APM;
+ 
+ 	ret = rtw_load_firmware(rtwdev, RTW_NORMAL_FW);
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.h b/drivers/net/wireless/realtek/rtw88/rtw8821c.h
+index bd01e82b6bcd0..8d1e8ff71d7ef 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.h
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.h
+@@ -131,7 +131,7 @@ _rtw_write32s_mask(struct rtw_dev *rtwdev, u32 addr, u32 mask, u32 data)
+ #define WLAN_TX_FUNC_CFG2		0x30
+ #define WLAN_MAC_OPT_NORM_FUNC1		0x98
+ #define WLAN_MAC_OPT_LB_FUNC1		0x80
+-#define WLAN_MAC_OPT_FUNC2		0x30810041
++#define WLAN_MAC_OPT_FUNC2		0xb0810041
+ 
+ #define WLAN_SIFS_CFG	(WLAN_SIFS_CCK_CONT_TX | \
+ 			(WLAN_SIFS_OFDM_CONT_TX << BIT_SHIFT_SIFS_OFDM_CTX) | \
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822b.c b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+index 22d0dd640ac94..dbfd67c3f598c 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822b.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+@@ -204,7 +204,7 @@ static void rtw8822b_phy_set_param(struct rtw_dev *rtwdev)
+ #define WLAN_TX_FUNC_CFG2		0x30
+ #define WLAN_MAC_OPT_NORM_FUNC1		0x98
+ #define WLAN_MAC_OPT_LB_FUNC1		0x80
+-#define WLAN_MAC_OPT_FUNC2		0x30810041
++#define WLAN_MAC_OPT_FUNC2		0xb0810041
+ 
+ #define WLAN_SIFS_CFG	(WLAN_SIFS_CCK_CONT_TX | \
+ 			(WLAN_SIFS_OFDM_CONT_TX << BIT_SHIFT_SIFS_OFDM_CTX) | \
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index 79ad6232dce83..cee586335552d 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -1248,7 +1248,7 @@ static void rtw8822c_phy_set_param(struct rtw_dev *rtwdev)
+ #define WLAN_TX_FUNC_CFG2		0x30
+ #define WLAN_MAC_OPT_NORM_FUNC1		0x98
+ #define WLAN_MAC_OPT_LB_FUNC1		0x80
+-#define WLAN_MAC_OPT_FUNC2		0x30810041
++#define WLAN_MAC_OPT_FUNC2		0xb0810041
+ #define WLAN_MAC_INT_MIG_CFG		0x33330000
+ 
+ #define WLAN_SIFS_CFG	(WLAN_SIFS_CCK_CONT_TX | \
+diff --git a/drivers/net/wireless/rsi/rsi_91x_main.c b/drivers/net/wireless/rsi/rsi_91x_main.c
+index 8c638cfeac52f..fe8aed58ac088 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_main.c
++++ b/drivers/net/wireless/rsi/rsi_91x_main.c
+@@ -23,6 +23,7 @@
+ #include "rsi_common.h"
+ #include "rsi_coex.h"
+ #include "rsi_hal.h"
++#include "rsi_usb.h"
+ 
+ u32 rsi_zone_enabled = /* INFO_ZONE |
+ 			INIT_ZONE |
+@@ -168,6 +169,9 @@ int rsi_read_pkt(struct rsi_common *common, u8 *rx_pkt, s32 rcv_pkt_len)
+ 		frame_desc = &rx_pkt[index];
+ 		actual_length = *(u16 *)&frame_desc[0];
+ 		offset = *(u16 *)&frame_desc[2];
++		if (!rcv_pkt_len && offset >
++			RSI_MAX_RX_USB_PKT_SIZE - FRAME_DESC_SZ)
++			goto fail;
+ 
+ 		queueno = rsi_get_queueno(frame_desc, offset);
+ 		length = rsi_get_length(frame_desc, offset);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index d881df9ebd0c3..11388a1469621 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -269,8 +269,12 @@ static void rsi_rx_done_handler(struct urb *urb)
+ 	struct rsi_91x_usbdev *dev = (struct rsi_91x_usbdev *)rx_cb->data;
+ 	int status = -EINVAL;
+ 
++	if (!rx_cb->rx_skb)
++		return;
++
+ 	if (urb->status) {
+ 		dev_kfree_skb(rx_cb->rx_skb);
++		rx_cb->rx_skb = NULL;
+ 		return;
+ 	}
+ 
+@@ -294,8 +298,10 @@ out:
+ 	if (rsi_rx_urb_submit(dev->priv, rx_cb->ep_num, GFP_ATOMIC))
+ 		rsi_dbg(ERR_ZONE, "%s: Failed in urb submission", __func__);
+ 
+-	if (status)
++	if (status) {
+ 		dev_kfree_skb(rx_cb->rx_skb);
++		rx_cb->rx_skb = NULL;
++	}
+ }
+ 
+ static void rsi_rx_urb_kill(struct rsi_hw *adapter, u8 ep_num)
+@@ -322,7 +328,6 @@ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num, gfp_t mem_flags)
+ 	struct sk_buff *skb;
+ 	u8 dword_align_bytes = 0;
+ 
+-#define RSI_MAX_RX_USB_PKT_SIZE	3000
+ 	skb = dev_alloc_skb(RSI_MAX_RX_USB_PKT_SIZE);
+ 	if (!skb)
+ 		return -ENOMEM;
+diff --git a/drivers/net/wireless/rsi/rsi_usb.h b/drivers/net/wireless/rsi/rsi_usb.h
+index 8702f434b5699..ad88f8c70a351 100644
+--- a/drivers/net/wireless/rsi/rsi_usb.h
++++ b/drivers/net/wireless/rsi/rsi_usb.h
+@@ -44,6 +44,8 @@
+ #define RSI_USB_BUF_SIZE	     4096
+ #define RSI_USB_CTRL_BUF_SIZE	     0x04
+ 
++#define RSI_MAX_RX_USB_PKT_SIZE	3000
++
+ struct rx_usb_ctrl_block {
+ 	u8 *data;
+ 	struct urb *rx_urb;
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 6b170083cd248..21d89d80d0838 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -222,6 +222,8 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
+ 	struct device *dev = kobj_to_dev(kobj);
+ 	struct nvmem_device *nvmem = to_nvmem_device(dev);
+ 
++	attr->size = nvmem->size;
++
+ 	return nvmem_bin_attr_get_umode(nvmem);
+ }
+ 
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 161a23631472d..a44a0e7ba2510 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -1328,9 +1328,14 @@ int of_phandle_iterator_next(struct of_phandle_iterator *it)
+ 		 * property data length
+ 		 */
+ 		if (it->cur + count > it->list_end) {
+-			pr_err("%pOF: %s = %d found %d\n",
+-			       it->parent, it->cells_name,
+-			       count, it->cell_count);
++			if (it->cells_name)
++				pr_err("%pOF: %s = %d found %td\n",
++					it->parent, it->cells_name,
++					count, it->list_end - it->cur);
++			else
++				pr_err("%pOF: phandle %s needs %d, found %td\n",
++					it->parent, of_node_full_name(it->node),
++					count, it->list_end - it->cur);
+ 			goto err;
+ 		}
+ 	}
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 1d4b0b7d0cc10..5407bbdb64395 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -910,11 +910,18 @@ static void __init of_unittest_dma_ranges_one(const char *path,
+ 	if (!rc) {
+ 		phys_addr_t	paddr;
+ 		dma_addr_t	dma_addr;
+-		struct device	dev_bogus;
++		struct device	*dev_bogus;
+ 
+-		dev_bogus.dma_range_map = map;
+-		paddr = dma_to_phys(&dev_bogus, expect_dma_addr);
+-		dma_addr = phys_to_dma(&dev_bogus, expect_paddr);
++		dev_bogus = kzalloc(sizeof(struct device), GFP_KERNEL);
++		if (!dev_bogus) {
++			unittest(0, "kzalloc() failed\n");
++			kfree(map);
++			return;
++		}
++
++		dev_bogus->dma_range_map = map;
++		paddr = dma_to_phys(dev_bogus, expect_dma_addr);
++		dma_addr = phys_to_dma(dev_bogus, expect_paddr);
+ 
+ 		unittest(paddr == expect_paddr,
+ 			 "of_dma_get_range: wrong phys addr %pap (expecting %llx) on node %pOF\n",
+@@ -924,6 +931,7 @@ static void __init of_unittest_dma_ranges_one(const char *path,
+ 			 &dma_addr, expect_dma_addr, np);
+ 
+ 		kfree(map);
++		kfree(dev_bogus);
+ 	}
+ 	of_node_put(np);
+ #endif
+@@ -933,8 +941,9 @@ static void __init of_unittest_parse_dma_ranges(void)
+ {
+ 	of_unittest_dma_ranges_one("/testcase-data/address-tests/device@70000000",
+ 		0x0, 0x20000000);
+-	of_unittest_dma_ranges_one("/testcase-data/address-tests/bus@80000000/device@1000",
+-		0x100000000, 0x20000000);
++	if (IS_ENABLED(CONFIG_ARCH_DMA_ADDR_T_64BIT))
++		of_unittest_dma_ranges_one("/testcase-data/address-tests/bus@80000000/device@1000",
++			0x100000000, 0x20000000);
+ 	of_unittest_dma_ranges_one("/testcase-data/address-tests/pci@90000000",
+ 		0x80000000, 0x20000000);
+ }
+diff --git a/drivers/parisc/pdc_stable.c b/drivers/parisc/pdc_stable.c
+index e090978518f1a..4760f82def6ec 100644
+--- a/drivers/parisc/pdc_stable.c
++++ b/drivers/parisc/pdc_stable.c
+@@ -979,8 +979,10 @@ pdcs_register_pathentries(void)
+ 		entry->kobj.kset = paths_kset;
+ 		err = kobject_init_and_add(&entry->kobj, &ktype_pdcspath, NULL,
+ 					   "%s", entry->name);
+-		if (err)
++		if (err) {
++			kobject_put(&entry->kobj);
+ 			return err;
++		}
+ 
+ 		/* kobject is now registered */
+ 		write_lock(&entry->rw_lock);
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 0f6a6685ab5b5..f30144c8c0bd2 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -879,7 +879,6 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	}
+ 
+-	case PCI_CAP_LIST_ID:
+ 	case PCI_EXP_DEVCAP:
+ 	case PCI_EXP_DEVCTL:
+ 		*value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
+@@ -960,6 +959,9 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ 	/* Support interrupt A for MSI feature */
+ 	bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE;
+ 
++	/* Aardvark HW provides PCIe Capability structure in version 2 */
++	bridge->pcie_conf.cap = cpu_to_le16(2);
++
+ 	/* Indicates supports for Completion Retry Status */
+ 	bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
+ 
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index ed13e81cd691d..2dc6890dbcaa2 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -573,6 +573,8 @@ static struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = {
+ static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
+ {
+ 	struct pci_bridge_emul *bridge = &port->bridge;
++	u32 pcie_cap = mvebu_readl(port, PCIE_CAP_PCIEXP);
++	u8 pcie_cap_ver = ((pcie_cap >> 16) & PCI_EXP_FLAGS_VERS);
+ 
+ 	bridge->conf.vendor = PCI_VENDOR_ID_MARVELL;
+ 	bridge->conf.device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16;
+@@ -585,6 +587,12 @@ static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
+ 		bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32;
+ 	}
+ 
++	/*
++	 * Older mvebu hardware provides PCIe Capability structure only in
++	 * version 1. New hardware provides it in version 2.
++	 */
++	bridge->pcie_conf.cap = cpu_to_le16(pcie_cap_ver);
++
+ 	bridge->has_pcie = true;
+ 	bridge->data = port;
+ 	bridge->ops = &mvebu_pci_bridge_emul_ops;
+diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c
+index c33b385ac918e..b651b6f444691 100644
+--- a/drivers/pci/controller/pci-xgene.c
++++ b/drivers/pci/controller/pci-xgene.c
+@@ -467,7 +467,7 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
+ 		return 1;
+ 	}
+ 
+-	if ((size > SZ_1K) && (size < SZ_1T) && !(*ib_reg_mask & (1 << 0))) {
++	if ((size > SZ_1K) && (size < SZ_4G) && !(*ib_reg_mask & (1 << 0))) {
+ 		*ib_reg_mask |= (1 << 0);
+ 		return 0;
+ 	}
+diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h
+index 4fd200d8b0a9d..f1f789fe0637a 100644
+--- a/drivers/pci/hotplug/pciehp.h
++++ b/drivers/pci/hotplug/pciehp.h
+@@ -72,6 +72,8 @@ extern int pciehp_poll_time;
+  * @reset_lock: prevents access to the Data Link Layer Link Active bit in the
+  *	Link Status register and to the Presence Detect State bit in the Slot
+  *	Status register during a slot reset which may cause them to flap
++ * @depth: Number of additional hotplug ports in the path to the root bus,
++ *	used as lock subclass for @reset_lock
+  * @ist_running: flag to keep user request waiting while IRQ thread is running
+  * @request_result: result of last user request submitted to the IRQ thread
+  * @requester: wait queue to wake up on completion of user request,
+@@ -103,6 +105,7 @@ struct controller {
+ 
+ 	struct hotplug_slot hotplug_slot;	/* hotplug core interface */
+ 	struct rw_semaphore reset_lock;
++	unsigned int depth;
+ 	unsigned int ist_running;
+ 	int request_result;
+ 	wait_queue_head_t requester;
+diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
+index ad3393930ecb4..e7fe4b42f0394 100644
+--- a/drivers/pci/hotplug/pciehp_core.c
++++ b/drivers/pci/hotplug/pciehp_core.c
+@@ -166,7 +166,7 @@ static void pciehp_check_presence(struct controller *ctrl)
+ {
+ 	int occupied;
+ 
+-	down_read(&ctrl->reset_lock);
++	down_read_nested(&ctrl->reset_lock, ctrl->depth);
+ 	mutex_lock(&ctrl->state_lock);
+ 
+ 	occupied = pciehp_card_present_or_link_active(ctrl);
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 9d06939736c0f..90da17c6da664 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -583,7 +583,7 @@ static void pciehp_ignore_dpc_link_change(struct controller *ctrl,
+ 	 * the corresponding link change may have been ignored above.
+ 	 * Synthesize it to ensure that it is acted on.
+ 	 */
+-	down_read(&ctrl->reset_lock);
++	down_read_nested(&ctrl->reset_lock, ctrl->depth);
+ 	if (!pciehp_check_link_active(ctrl))
+ 		pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC);
+ 	up_read(&ctrl->reset_lock);
+@@ -746,7 +746,7 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
+ 	 * Disable requests have higher priority than Presence Detect Changed
+ 	 * or Data Link Layer State Changed events.
+ 	 */
+-	down_read(&ctrl->reset_lock);
++	down_read_nested(&ctrl->reset_lock, ctrl->depth);
+ 	if (events & DISABLE_SLOT)
+ 		pciehp_handle_disable_request(ctrl);
+ 	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
+@@ -880,7 +880,7 @@ int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe)
+ 	if (probe)
+ 		return 0;
+ 
+-	down_write(&ctrl->reset_lock);
++	down_write_nested(&ctrl->reset_lock, ctrl->depth);
+ 
+ 	if (!ATTN_BUTTN(ctrl)) {
+ 		ctrl_mask |= PCI_EXP_SLTCTL_PDCE;
+@@ -936,6 +936,20 @@ static inline void dbg_ctrl(struct controller *ctrl)
+ 
+ #define FLAG(x, y)	(((x) & (y)) ? '+' : '-')
+ 
++static inline int pcie_hotplug_depth(struct pci_dev *dev)
++{
++	struct pci_bus *bus = dev->bus;
++	int depth = 0;
++
++	while (bus->parent) {
++		bus = bus->parent;
++		if (bus->self && bus->self->is_hotplug_bridge)
++			depth++;
++	}
++
++	return depth;
++}
++
+ struct controller *pcie_init(struct pcie_device *dev)
+ {
+ 	struct controller *ctrl;
+@@ -949,6 +963,7 @@ struct controller *pcie_init(struct pcie_device *dev)
+ 		return NULL;
+ 
+ 	ctrl->pcie = dev;
++	ctrl->depth = pcie_hotplug_depth(dev->port);
+ 	pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap);
+ 
+ 	if (pdev->hotplug_user_indicators)
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index 57314fec2261b..3da69b26e6743 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -1291,19 +1291,24 @@ EXPORT_SYMBOL(pci_free_irq_vectors);
+ 
+ /**
+  * pci_irq_vector - return Linux IRQ number of a device vector
+- * @dev: PCI device to operate on
+- * @nr: device-relative interrupt vector index (0-based).
++ * @dev:	PCI device to operate on
++ * @nr:		Interrupt vector index (0-based)
++ *
++ * @nr has the following meanings depending on the interrupt mode:
++ *   MSI-X:	The index in the MSI-X vector table
++ *   MSI:	The index of the enabled MSI vectors
++ *   INTx:	Must be 0
++ *
++ * Return: The Linux interrupt number or -EINVAl if @nr is out of range.
+  */
+ int pci_irq_vector(struct pci_dev *dev, unsigned int nr)
+ {
+ 	if (dev->msix_enabled) {
+ 		struct msi_desc *entry;
+-		int i = 0;
+ 
+ 		for_each_pci_msi_entry(entry, dev) {
+-			if (i == nr)
++			if (entry->msi_attrib.entry_nr == nr)
+ 				return entry->irq;
+-			i++;
+ 		}
+ 		WARN_ON_ONCE(1);
+ 		return -EINVAL;
+@@ -1327,17 +1332,22 @@ EXPORT_SYMBOL(pci_irq_vector);
+  * pci_irq_get_affinity - return the affinity of a particular MSI vector
+  * @dev:	PCI device to operate on
+  * @nr:		device-relative interrupt vector index (0-based).
++ *
++ * @nr has the following meanings depending on the interrupt mode:
++ *   MSI-X:	The index in the MSI-X vector table
++ *   MSI:	The index of the enabled MSI vectors
++ *   INTx:	Must be 0
++ *
++ * Return: A cpumask pointer or NULL if @nr is out of range
+  */
+ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
+ {
+ 	if (dev->msix_enabled) {
+ 		struct msi_desc *entry;
+-		int i = 0;
+ 
+ 		for_each_pci_msi_entry(entry, dev) {
+-			if (i == nr)
++			if (entry->msi_attrib.entry_nr == nr)
+ 				return &entry->affinity->mask;
+-			i++;
+ 		}
+ 		WARN_ON_ONCE(1);
+ 		return NULL;
+diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
+index db97cddfc85e1..37504c2cce9b8 100644
+--- a/drivers/pci/pci-bridge-emul.c
++++ b/drivers/pci/pci-bridge-emul.c
+@@ -139,8 +139,13 @@ struct pci_bridge_reg_behavior pci_regs_behavior[PCI_STD_HEADER_SIZEOF / 4] = {
+ 		.ro = GENMASK(7, 0),
+ 	},
+ 
++	/*
++	 * If expansion ROM is unsupported then ROM Base Address register must
++	 * be implemented as read-only register that return 0 when read, same
++	 * as for unused Base Address registers.
++	 */
+ 	[PCI_ROM_ADDRESS1 / 4] = {
+-		.rw = GENMASK(31, 11) | BIT(0),
++		.ro = ~0,
+ 	},
+ 
+ 	/*
+@@ -171,41 +176,55 @@ struct pci_bridge_reg_behavior pcie_cap_regs_behavior[PCI_CAP_PCIE_SIZEOF / 4] =
+ 	[PCI_CAP_LIST_ID / 4] = {
+ 		/*
+ 		 * Capability ID, Next Capability Pointer and
+-		 * Capabilities register are all read-only.
++		 * bits [14:0] of Capabilities register are all read-only.
++		 * Bit 15 of Capabilities register is reserved.
+ 		 */
+-		.ro = ~0,
++		.ro = GENMASK(30, 0),
+ 	},
+ 
+ 	[PCI_EXP_DEVCAP / 4] = {
+-		.ro = ~0,
++		/*
++		 * Bits [31:29] and [17:16] are reserved.
++		 * Bits [27:18] are reserved for non-upstream ports.
++		 * Bits 28 and [14:6] are reserved for non-endpoint devices.
++		 * Other bits are read-only.
++		 */
++		.ro = BIT(15) | GENMASK(5, 0),
+ 	},
+ 
+ 	[PCI_EXP_DEVCTL / 4] = {
+-		/* Device control register is RW */
+-		.rw = GENMASK(15, 0),
++		/*
++		 * Device control register is RW, except bit 15 which is
++		 * reserved for non-endpoints or non-PCIe-to-PCI/X bridges.
++		 */
++		.rw = GENMASK(14, 0),
+ 
+ 		/*
+ 		 * Device status register has bits 6 and [3:0] W1C, [5:4] RO,
+-		 * the rest is reserved
++		 * the rest is reserved. Also bit 6 is reserved for non-upstream
++		 * ports.
+ 		 */
+-		.w1c = (BIT(6) | GENMASK(3, 0)) << 16,
++		.w1c = GENMASK(3, 0) << 16,
+ 		.ro = GENMASK(5, 4) << 16,
+ 	},
+ 
+ 	[PCI_EXP_LNKCAP / 4] = {
+-		/* All bits are RO, except bit 23 which is reserved */
+-		.ro = lower_32_bits(~BIT(23)),
++		/*
++		 * All bits are RO, except bit 23 which is reserved and
++		 * bit 18 which is reserved for non-upstream ports.
++		 */
++		.ro = lower_32_bits(~(BIT(23) | PCI_EXP_LNKCAP_CLKPM)),
+ 	},
+ 
+ 	[PCI_EXP_LNKCTL / 4] = {
+ 		/*
+ 		 * Link control has bits [15:14], [11:3] and [1:0] RW, the
+-		 * rest is reserved.
++		 * rest is reserved. Bit 8 is reserved for non-upstream ports.
+ 		 *
+ 		 * Link status has bits [13:0] RO, and bits [15:14]
+ 		 * W1C.
+ 		 */
+-		.rw = GENMASK(15, 14) | GENMASK(11, 3) | GENMASK(1, 0),
++		.rw = GENMASK(15, 14) | GENMASK(11, 9) | GENMASK(7, 3) | GENMASK(1, 0),
+ 		.ro = GENMASK(13, 0) << 16,
+ 		.w1c = GENMASK(15, 14) << 16,
+ 	},
+@@ -277,11 +296,9 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
+ 
+ 	if (bridge->has_pcie) {
+ 		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
++		bridge->conf.status |= cpu_to_le16(PCI_STATUS_CAP_LIST);
+ 		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
+-		/* Set PCIe v2, root port, slot support */
+-		bridge->pcie_conf.cap =
+-			cpu_to_le16(PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
+-				    PCI_EXP_FLAGS_SLOT);
++		bridge->pcie_conf.cap |= cpu_to_le16(PCI_EXP_TYPE_ROOT_PORT << 4);
+ 		bridge->pcie_cap_regs_behavior =
+ 			kmemdup(pcie_cap_regs_behavior,
+ 				sizeof(pcie_cap_regs_behavior),
+@@ -290,6 +307,27 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
+ 			kfree(bridge->pci_regs_behavior);
+ 			return -ENOMEM;
+ 		}
++		/* These bits are applicable only for PCI and reserved on PCIe */
++		bridge->pci_regs_behavior[PCI_CACHE_LINE_SIZE / 4].ro &=
++			~GENMASK(15, 8);
++		bridge->pci_regs_behavior[PCI_COMMAND / 4].ro &=
++			~((PCI_COMMAND_SPECIAL | PCI_COMMAND_INVALIDATE |
++			   PCI_COMMAND_VGA_PALETTE | PCI_COMMAND_WAIT |
++			   PCI_COMMAND_FAST_BACK) |
++			  (PCI_STATUS_66MHZ | PCI_STATUS_FAST_BACK |
++			   PCI_STATUS_DEVSEL_MASK) << 16);
++		bridge->pci_regs_behavior[PCI_PRIMARY_BUS / 4].ro &=
++			~GENMASK(31, 24);
++		bridge->pci_regs_behavior[PCI_IO_BASE / 4].ro &=
++			~((PCI_STATUS_66MHZ | PCI_STATUS_FAST_BACK |
++			   PCI_STATUS_DEVSEL_MASK) << 16);
++		bridge->pci_regs_behavior[PCI_INTERRUPT_LINE / 4].rw &=
++			~((PCI_BRIDGE_CTL_MASTER_ABORT |
++			   BIT(8) | BIT(9) | BIT(11)) << 16);
++		bridge->pci_regs_behavior[PCI_INTERRUPT_LINE / 4].ro &=
++			~((PCI_BRIDGE_CTL_FAST_BACK) << 16);
++		bridge->pci_regs_behavior[PCI_INTERRUPT_LINE / 4].w1c &=
++			~(BIT(10) << 16);
+ 	}
+ 
+ 	if (flags & PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR) {
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index bb863ddb59bfc..95fcc735c88e7 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4077,6 +4077,9 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9120,
+ 			 quirk_dma_func1_alias);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9123,
+ 			 quirk_dma_func1_alias);
++/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c136 */
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9125,
++			 quirk_dma_func1_alias);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9128,
+ 			 quirk_dma_func1_alias);
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c14 */
+diff --git a/drivers/pcmcia/cs.c b/drivers/pcmcia/cs.c
+index e211e2619680c..f70197154a362 100644
+--- a/drivers/pcmcia/cs.c
++++ b/drivers/pcmcia/cs.c
+@@ -666,18 +666,16 @@ static int pccardd(void *__skt)
+ 		if (events || sysfs_events)
+ 			continue;
+ 
++		set_current_state(TASK_INTERRUPTIBLE);
+ 		if (kthread_should_stop())
+ 			break;
+ 
+-		set_current_state(TASK_INTERRUPTIBLE);
+-
+ 		schedule();
+ 
+-		/* make sure we are running */
+-		__set_current_state(TASK_RUNNING);
+-
+ 		try_to_freeze();
+ 	}
++	/* make sure we are running before we exit */
++	__set_current_state(TASK_RUNNING);
+ 
+ 	/* shut down socket, if a device is still present */
+ 	if (skt->state & SOCKET_PRESENT) {
+diff --git a/drivers/pcmcia/rsrc_nonstatic.c b/drivers/pcmcia/rsrc_nonstatic.c
+index 3b05760e69d62..69a6e9a5d6d26 100644
+--- a/drivers/pcmcia/rsrc_nonstatic.c
++++ b/drivers/pcmcia/rsrc_nonstatic.c
+@@ -690,6 +690,9 @@ static struct resource *__nonstatic_find_io_region(struct pcmcia_socket *s,
+ 	unsigned long min = base;
+ 	int ret;
+ 
++	if (!res)
++		return NULL;
++
+ 	data.mask = align - 1;
+ 	data.offset = base & data.mask;
+ 	data.map = &s_data->io_db;
+@@ -809,6 +812,9 @@ static struct resource *nonstatic_find_mem_region(u_long base, u_long num,
+ 	unsigned long min, max;
+ 	int ret, i, j;
+ 
++	if (!res)
++		return NULL;
++
+ 	low = low || !(s->features & SS_CAP_PAGE_REGS);
+ 
+ 	data.mask = align - 1;
+diff --git a/drivers/phy/socionext/phy-uniphier-usb3ss.c b/drivers/phy/socionext/phy-uniphier-usb3ss.c
+index 6700645bcbe6b..3b5ffc16a6947 100644
+--- a/drivers/phy/socionext/phy-uniphier-usb3ss.c
++++ b/drivers/phy/socionext/phy-uniphier-usb3ss.c
+@@ -22,11 +22,13 @@
+ #include <linux/reset.h>
+ 
+ #define SSPHY_TESTI		0x0
+-#define SSPHY_TESTO		0x4
+ #define TESTI_DAT_MASK		GENMASK(13, 6)
+ #define TESTI_ADR_MASK		GENMASK(5, 1)
+ #define TESTI_WR_EN		BIT(0)
+ 
++#define SSPHY_TESTO		0x4
++#define TESTO_DAT_MASK		GENMASK(7, 0)
++
+ #define PHY_F(regno, msb, lsb) { (regno), (msb), (lsb) }
+ 
+ #define CDR_CPD_TRIM	PHY_F(7, 3, 0)	/* RxPLL charge pump current */
+@@ -84,12 +86,12 @@ static void uniphier_u3ssphy_set_param(struct uniphier_u3ssphy_priv *priv,
+ 	val  = FIELD_PREP(TESTI_DAT_MASK, 1);
+ 	val |= FIELD_PREP(TESTI_ADR_MASK, p->field.reg_no);
+ 	uniphier_u3ssphy_testio_write(priv, val);
+-	val = readl(priv->base + SSPHY_TESTO);
++	val = readl(priv->base + SSPHY_TESTO) & TESTO_DAT_MASK;
+ 
+ 	/* update value */
+-	val &= ~FIELD_PREP(TESTI_DAT_MASK, field_mask);
++	val &= ~field_mask;
+ 	data = field_mask & (p->value << p->field.lsb);
+-	val  = FIELD_PREP(TESTI_DAT_MASK, data);
++	val  = FIELD_PREP(TESTI_DAT_MASK, data | val);
+ 	val |= FIELD_PREP(TESTI_ADR_MASK, p->field.reg_no);
+ 	uniphier_u3ssphy_testio_write(priv, val);
+ 	uniphier_u3ssphy_testio_write(priv, val | TESTI_WR_EN);
+diff --git a/drivers/power/reset/mt6323-poweroff.c b/drivers/power/reset/mt6323-poweroff.c
+index 0532803e6cbc4..d90e76fcb9383 100644
+--- a/drivers/power/reset/mt6323-poweroff.c
++++ b/drivers/power/reset/mt6323-poweroff.c
+@@ -57,6 +57,9 @@ static int mt6323_pwrc_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -EINVAL;
++
+ 	pwrc->base = res->start;
+ 	pwrc->regmap = mt6397_chip->regmap;
+ 	pwrc->dev = &pdev->dev;
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index bb944ee5fe3b1..03e146e98abd5 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -9,6 +9,7 @@
+ #include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/regulator/driver.h>
++#include <linux/regulator/of_regulator.h>
+ #include <linux/soc/qcom/smd-rpm.h>
+ 
+ struct qcom_rpm_reg {
+@@ -1107,52 +1108,91 @@ static const struct of_device_id rpm_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, rpm_of_match);
+ 
+-static int rpm_reg_probe(struct platform_device *pdev)
++/**
++ * rpm_regulator_init_vreg() - initialize all attributes of a qcom_smd-regulator
++ * @vreg:		Pointer to the individual qcom_smd-regulator resource
++ * @dev:		Pointer to the top level qcom_smd-regulator PMIC device
++ * @node:		Pointer to the individual qcom_smd-regulator resource
++ *			device node
++ * @rpm:		Pointer to the rpm bus node
++ * @pmic_rpm_data:	Pointer to a null-terminated array of qcom_smd-regulator
++ *			resources defined for the top level PMIC device
++ *
++ * Return: 0 on success, errno on failure
++ */
++static int rpm_regulator_init_vreg(struct qcom_rpm_reg *vreg, struct device *dev,
++				   struct device_node *node, struct qcom_smd_rpm *rpm,
++				   const struct rpm_regulator_data *pmic_rpm_data)
+ {
+-	const struct rpm_regulator_data *reg;
+-	const struct of_device_id *match;
+-	struct regulator_config config = { };
++	struct regulator_config config = {};
++	const struct rpm_regulator_data *rpm_data;
+ 	struct regulator_dev *rdev;
++	int ret;
++
++	for (rpm_data = pmic_rpm_data; rpm_data->name; rpm_data++)
++		if (of_node_name_eq(node, rpm_data->name))
++			break;
++
++	if (!rpm_data->name) {
++		dev_err(dev, "Unknown regulator %pOFn\n", node);
++		return -EINVAL;
++	}
++
++	vreg->dev	= dev;
++	vreg->rpm	= rpm;
++	vreg->type	= rpm_data->type;
++	vreg->id	= rpm_data->id;
++
++	memcpy(&vreg->desc, rpm_data->desc, sizeof(vreg->desc));
++	vreg->desc.name = rpm_data->name;
++	vreg->desc.supply_name = rpm_data->supply;
++	vreg->desc.owner = THIS_MODULE;
++	vreg->desc.type = REGULATOR_VOLTAGE;
++	vreg->desc.of_match = rpm_data->name;
++
++	config.dev		= dev;
++	config.of_node		= node;
++	config.driver_data	= vreg;
++
++	rdev = devm_regulator_register(dev, &vreg->desc, &config);
++	if (IS_ERR(rdev)) {
++		ret = PTR_ERR(rdev);
++		dev_err(dev, "%pOFn: devm_regulator_register() failed, ret=%d\n", node, ret);
++		return ret;
++	}
++
++	return 0;
++}
++
++static int rpm_reg_probe(struct platform_device *pdev)
++{
++	struct device *dev = &pdev->dev;
++	const struct rpm_regulator_data *vreg_data;
++	struct device_node *node;
+ 	struct qcom_rpm_reg *vreg;
+ 	struct qcom_smd_rpm *rpm;
++	int ret;
+ 
+ 	rpm = dev_get_drvdata(pdev->dev.parent);
+ 	if (!rpm) {
+-		dev_err(&pdev->dev, "unable to retrieve handle to rpm\n");
++		dev_err(&pdev->dev, "Unable to retrieve handle to rpm\n");
+ 		return -ENODEV;
+ 	}
+ 
+-	match = of_match_device(rpm_of_match, &pdev->dev);
+-	if (!match) {
+-		dev_err(&pdev->dev, "failed to match device\n");
++	vreg_data = of_device_get_match_data(dev);
++	if (!vreg_data)
+ 		return -ENODEV;
+-	}
+ 
+-	for (reg = match->data; reg->name; reg++) {
++	for_each_available_child_of_node(dev->of_node, node) {
+ 		vreg = devm_kzalloc(&pdev->dev, sizeof(*vreg), GFP_KERNEL);
+ 		if (!vreg)
+ 			return -ENOMEM;
+ 
+-		vreg->dev = &pdev->dev;
+-		vreg->type = reg->type;
+-		vreg->id = reg->id;
+-		vreg->rpm = rpm;
+-
+-		memcpy(&vreg->desc, reg->desc, sizeof(vreg->desc));
+-
+-		vreg->desc.id = -1;
+-		vreg->desc.owner = THIS_MODULE;
+-		vreg->desc.type = REGULATOR_VOLTAGE;
+-		vreg->desc.name = reg->name;
+-		vreg->desc.supply_name = reg->supply;
+-		vreg->desc.of_match = reg->name;
+-
+-		config.dev = &pdev->dev;
+-		config.driver_data = vreg;
+-		rdev = devm_regulator_register(&pdev->dev, &vreg->desc, &config);
+-		if (IS_ERR(rdev)) {
+-			dev_err(&pdev->dev, "failed to register %s\n", reg->name);
+-			return PTR_ERR(rdev);
++		ret = rpm_regulator_init_vreg(vreg, dev, node, rpm, vreg_data);
++
++		if (ret < 0) {
++			of_node_put(node);
++			return ret;
+ 		}
+ 	}
+ 
+diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
+index 91de940896e3d..028ca5961bc2a 100644
+--- a/drivers/rpmsg/rpmsg_core.c
++++ b/drivers/rpmsg/rpmsg_core.c
+@@ -473,13 +473,25 @@ static int rpmsg_dev_probe(struct device *dev)
+ 	err = rpdrv->probe(rpdev);
+ 	if (err) {
+ 		dev_err(dev, "%s: failed: %d\n", __func__, err);
+-		if (ept)
+-			rpmsg_destroy_ept(ept);
+-		goto out;
++		goto destroy_ept;
+ 	}
+ 
+-	if (ept && rpdev->ops->announce_create)
++	if (ept && rpdev->ops->announce_create) {
+ 		err = rpdev->ops->announce_create(rpdev);
++		if (err) {
++			dev_err(dev, "failed to announce creation\n");
++			goto remove_rpdev;
++		}
++	}
++
++	return 0;
++
++remove_rpdev:
++	if (rpdrv->remove)
++		rpdrv->remove(rpdev);
++destroy_ept:
++	if (ept)
++		rpmsg_destroy_ept(ept);
+ out:
+ 	return err;
+ }
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index c633319cdb913..58c6382a2807c 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -463,7 +463,10 @@ static int cmos_set_alarm(struct device *dev, struct rtc_wkalrm *t)
+ 	min = t->time.tm_min;
+ 	sec = t->time.tm_sec;
+ 
++	spin_lock_irq(&rtc_lock);
+ 	rtc_control = CMOS_READ(RTC_CONTROL);
++	spin_unlock_irq(&rtc_lock);
++
+ 	if (!(rtc_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ 		/* Writing 0xff means "don't care" or "match all".  */
+ 		mon = (mon <= 12) ? bin2bcd(mon) : 0xff;
+diff --git a/drivers/rtc/rtc-pxa.c b/drivers/rtc/rtc-pxa.c
+index d2f1d8f754bf3..cf8119b6d3204 100644
+--- a/drivers/rtc/rtc-pxa.c
++++ b/drivers/rtc/rtc-pxa.c
+@@ -330,6 +330,10 @@ static int __init pxa_rtc_probe(struct platform_device *pdev)
+ 	if (sa1100_rtc->irq_alarm < 0)
+ 		return -ENXIO;
+ 
++	sa1100_rtc->rtc = devm_rtc_allocate_device(&pdev->dev);
++	if (IS_ERR(sa1100_rtc->rtc))
++		return PTR_ERR(sa1100_rtc->rtc);
++
+ 	pxa_rtc->base = devm_ioremap(dev, pxa_rtc->ress->start,
+ 				resource_size(pxa_rtc->ress));
+ 	if (!pxa_rtc->base) {
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 93e507677bdcb..0273bf3918ff3 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -763,7 +763,6 @@ struct lpfc_hba {
+ #define HBA_DEVLOSS_TMO         0x2000 /* HBA in devloss timeout */
+ #define HBA_RRQ_ACTIVE		0x4000 /* process the rrq active list */
+ #define HBA_IOQ_FLUSH		0x8000 /* FCP/NVME I/O queues being flushed */
+-#define HBA_FW_DUMP_OP		0x10000 /* Skips fn reset before FW dump */
+ #define HBA_RECOVERABLE_UE	0x20000 /* Firmware supports recoverable UE */
+ #define HBA_FORCED_LINK_SPEED	0x40000 /*
+ 					 * Firmware supports Forced Link Speed
+@@ -772,6 +771,7 @@ struct lpfc_hba {
+ #define HBA_FLOGI_ISSUED	0x100000 /* FLOGI was issued */
+ #define HBA_DEFER_FLOGI		0x800000 /* Defer FLOGI till read_sparm cmpl */
+ 
++	struct completion *fw_dump_cmpl; /* cmpl event tracker for fw_dump */
+ 	uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/
+ 	struct lpfc_dmabuf slim2p;
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index 2c59a5bf35390..727b7ba4d8f82 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -1536,25 +1536,25 @@ lpfc_sli4_pdev_reg_request(struct lpfc_hba *phba, uint32_t opcode)
+ 	before_fc_flag = phba->pport->fc_flag;
+ 	sriov_nr_virtfn = phba->cfg_sriov_nr_virtfn;
+ 
+-	/* Disable SR-IOV virtual functions if enabled */
+-	if (phba->cfg_sriov_nr_virtfn) {
+-		pci_disable_sriov(pdev);
+-		phba->cfg_sriov_nr_virtfn = 0;
+-	}
++	if (opcode == LPFC_FW_DUMP) {
++		init_completion(&online_compl);
++		phba->fw_dump_cmpl = &online_compl;
++	} else {
++		/* Disable SR-IOV virtual functions if enabled */
++		if (phba->cfg_sriov_nr_virtfn) {
++			pci_disable_sriov(pdev);
++			phba->cfg_sriov_nr_virtfn = 0;
++		}
+ 
+-	if (opcode == LPFC_FW_DUMP)
+-		phba->hba_flag |= HBA_FW_DUMP_OP;
++		status = lpfc_do_offline(phba, LPFC_EVT_OFFLINE);
+ 
+-	status = lpfc_do_offline(phba, LPFC_EVT_OFFLINE);
++		if (status != 0)
++			return status;
+ 
+-	if (status != 0) {
+-		phba->hba_flag &= ~HBA_FW_DUMP_OP;
+-		return status;
++		/* wait for the device to be quiesced before firmware reset */
++		msleep(100);
+ 	}
+ 
+-	/* wait for the device to be quiesced before firmware reset */
+-	msleep(100);
+-
+ 	reg_val = readl(phba->sli4_hba.conf_regs_memmap_p +
+ 			LPFC_CTL_PDEV_CTL_OFFSET);
+ 
+@@ -1583,24 +1583,42 @@ lpfc_sli4_pdev_reg_request(struct lpfc_hba *phba, uint32_t opcode)
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+ 				"3153 Fail to perform the requested "
+ 				"access: x%x\n", reg_val);
++		if (phba->fw_dump_cmpl)
++			phba->fw_dump_cmpl = NULL;
+ 		return rc;
+ 	}
+ 
+ 	/* keep the original port state */
+-	if (before_fc_flag & FC_OFFLINE_MODE)
+-		goto out;
+-
+-	init_completion(&online_compl);
+-	job_posted = lpfc_workq_post_event(phba, &status, &online_compl,
+-					   LPFC_EVT_ONLINE);
+-	if (!job_posted)
++	if (before_fc_flag & FC_OFFLINE_MODE) {
++		if (phba->fw_dump_cmpl)
++			phba->fw_dump_cmpl = NULL;
+ 		goto out;
++	}
+ 
+-	wait_for_completion(&online_compl);
++	/* Firmware dump will trigger an HA_ERATT event, and
++	 * lpfc_handle_eratt_s4 routine already handles bringing the port back
++	 * online.
++	 */
++	if (opcode == LPFC_FW_DUMP) {
++		wait_for_completion(phba->fw_dump_cmpl);
++	} else  {
++		init_completion(&online_compl);
++		job_posted = lpfc_workq_post_event(phba, &status, &online_compl,
++						   LPFC_EVT_ONLINE);
++		if (!job_posted)
++			goto out;
+ 
++		wait_for_completion(&online_compl);
++	}
+ out:
+ 	/* in any case, restore the virtual functions enabled as before */
+ 	if (sriov_nr_virtfn) {
++		/* If fw_dump was performed, first disable to clean up */
++		if (opcode == LPFC_FW_DUMP) {
++			pci_disable_sriov(pdev);
++			phba->cfg_sriov_nr_virtfn = 0;
++		}
++
+ 		sriov_err =
+ 			lpfc_sli_probe_sriov_nr_virtfn(phba, sriov_nr_virtfn);
+ 		if (!sriov_err)
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index f4a672e549716..68ff233f936e5 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -635,10 +635,16 @@ lpfc_work_done(struct lpfc_hba *phba)
+ 	if (phba->pci_dev_grp == LPFC_PCI_DEV_OC)
+ 		lpfc_sli4_post_async_mbox(phba);
+ 
+-	if (ha_copy & HA_ERATT)
++	if (ha_copy & HA_ERATT) {
+ 		/* Handle the error attention event */
+ 		lpfc_handle_eratt(phba);
+ 
++		if (phba->fw_dump_cmpl) {
++			complete(phba->fw_dump_cmpl);
++			phba->fw_dump_cmpl = NULL;
++		}
++	}
++
+ 	if (ha_copy & HA_MBATT)
+ 		lpfc_sli_handle_mb_event(phba);
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 06a23718a7c7f..1a9522baba484 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -4629,12 +4629,6 @@ lpfc_sli4_brdreset(struct lpfc_hba *phba)
+ 	phba->fcf.fcf_flag = 0;
+ 	spin_unlock_irq(&phba->hbalock);
+ 
+-	/* SLI4 INTF 2: if FW dump is being taken skip INIT_PORT */
+-	if (phba->hba_flag & HBA_FW_DUMP_OP) {
+-		phba->hba_flag &= ~HBA_FW_DUMP_OP;
+-		return rc;
+-	}
+-
+ 	/* Now physically reset the device */
+ 	lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+ 			"0389 Performing PCI function reset!\n");
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 5d751628a6340..9b318958d78cc 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -1323,7 +1323,9 @@ int pm8001_mpi_build_cmd(struct pm8001_hba_info *pm8001_ha,
+ 	int q_index = circularQ - pm8001_ha->inbnd_q_tbl;
+ 	int rv = -1;
+ 
+-	WARN_ON(q_index >= PM8001_MAX_INB_NUM);
++	if (WARN_ON(q_index >= pm8001_ha->max_q_num))
++		return -EINVAL;
++
+ 	spin_lock_irqsave(&circularQ->iq_lock, flags);
+ 	rv = pm8001_mpi_msg_free_get(circularQ, pm8001_ha->iomb_size,
+ 			&pMessage);
+diff --git a/drivers/scsi/scsi_debugfs.c b/drivers/scsi/scsi_debugfs.c
+index c19ea7ab54cbd..d9a18124cfc9d 100644
+--- a/drivers/scsi/scsi_debugfs.c
++++ b/drivers/scsi/scsi_debugfs.c
+@@ -10,6 +10,7 @@ static const char *const scsi_cmd_flags[] = {
+ 	SCSI_CMD_FLAG_NAME(TAGGED),
+ 	SCSI_CMD_FLAG_NAME(UNCHECKED_ISA_DMA),
+ 	SCSI_CMD_FLAG_NAME(INITIALIZED),
++	SCSI_CMD_FLAG_NAME(LAST),
+ };
+ #undef SCSI_CMD_FLAG_NAME
+ 
+diff --git a/drivers/scsi/scsi_pm.c b/drivers/scsi/scsi_pm.c
+index 3717eea37ecb3..e91a0a5bc7a3e 100644
+--- a/drivers/scsi/scsi_pm.c
++++ b/drivers/scsi/scsi_pm.c
+@@ -262,7 +262,7 @@ static int sdev_runtime_resume(struct device *dev)
+ 	blk_pre_runtime_resume(sdev->request_queue);
+ 	if (pm && pm->runtime_resume)
+ 		err = pm->runtime_resume(dev);
+-	blk_post_runtime_resume(sdev->request_queue, err);
++	blk_post_runtime_resume(sdev->request_queue);
+ 
+ 	return err;
+ }
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index 4cb4ab9c6137e..464418413ced0 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -917,7 +917,7 @@ static void get_capabilities(struct scsi_cd *cd)
+ 
+ 
+ 	/* allocate transfer buffer */
+-	buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
++	buffer = kmalloc(512, GFP_KERNEL);
+ 	if (!buffer) {
+ 		sr_printk(KERN_ERR, cd, "out of memory.\n");
+ 		return;
+diff --git a/drivers/scsi/sr_vendor.c b/drivers/scsi/sr_vendor.c
+index 1f988a1b9166f..a61635326ae0a 100644
+--- a/drivers/scsi/sr_vendor.c
++++ b/drivers/scsi/sr_vendor.c
+@@ -131,7 +131,7 @@ int sr_set_blocklength(Scsi_CD *cd, int blocklength)
+ 	if (cd->vendor == VENDOR_TOSHIBA)
+ 		density = (blocklength > 2048) ? 0x81 : 0x83;
+ 
+-	buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
++	buffer = kmalloc(512, GFP_KERNEL);
+ 	if (!buffer)
+ 		return -ENOMEM;
+ 
+@@ -179,7 +179,7 @@ int sr_cd_check(struct cdrom_device_info *cdi)
+ 	if (cd->cdi.mask & CDC_MULTI_SESSION)
+ 		return 0;
+ 
+-	buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
++	buffer = kmalloc(512, GFP_KERNEL);
+ 	if (!buffer)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/scsi/ufs/tc-dwc-g210-pci.c b/drivers/scsi/ufs/tc-dwc-g210-pci.c
+index 67a6a61154b71..4e471484539d2 100644
+--- a/drivers/scsi/ufs/tc-dwc-g210-pci.c
++++ b/drivers/scsi/ufs/tc-dwc-g210-pci.c
+@@ -135,7 +135,6 @@ tc_dwc_g210_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		return err;
+ 	}
+ 
+-	pci_set_drvdata(pdev, hba);
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_allow(&pdev->dev);
+ 
+diff --git a/drivers/scsi/ufs/ufshcd-pci.c b/drivers/scsi/ufs/ufshcd-pci.c
+index fadd566025b86..4bf8ec88676ee 100644
+--- a/drivers/scsi/ufs/ufshcd-pci.c
++++ b/drivers/scsi/ufs/ufshcd-pci.c
+@@ -347,8 +347,6 @@ ufshcd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		return err;
+ 	}
+ 
+-	pci_set_drvdata(pdev, hba);
+-
+ 	hba->vops = (struct ufs_hba_variant_ops *)id->driver_data;
+ 
+ 	err = ufshcd_init(hba, mmio_base, pdev->irq);
+diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
+index 8c92d1bde64be..e49505534d498 100644
+--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
++++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
+@@ -412,8 +412,6 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ 		goto dealloc_host;
+ 	}
+ 
+-	platform_set_drvdata(pdev, hba);
+-
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
+ 
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index e3a9a02cadf5a..bf302776340ce 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -9085,6 +9085,13 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ 	struct device *dev = hba->dev;
+ 	char eh_wq_name[sizeof("ufs_eh_wq_00")];
+ 
++	/*
++	 * dev_set_drvdata() must be called before any callbacks are registered
++	 * that use dev_get_drvdata() (frequency scaling, clock scaling, hwmon,
++	 * sysfs).
++	 */
++	dev_set_drvdata(dev, hba);
++
+ 	if (!mmio_base) {
+ 		dev_err(hba->dev,
+ 		"Invalid memory reference for mmio_base is NULL\n");
+diff --git a/drivers/soc/mediatek/mtk-scpsys.c b/drivers/soc/mediatek/mtk-scpsys.c
+index ca75b14931ec9..670cc82d17dc2 100644
+--- a/drivers/soc/mediatek/mtk-scpsys.c
++++ b/drivers/soc/mediatek/mtk-scpsys.c
+@@ -411,12 +411,17 @@ out:
+ 	return ret;
+ }
+ 
+-static void init_clks(struct platform_device *pdev, struct clk **clk)
++static int init_clks(struct platform_device *pdev, struct clk **clk)
+ {
+ 	int i;
+ 
+-	for (i = CLK_NONE + 1; i < CLK_MAX; i++)
++	for (i = CLK_NONE + 1; i < CLK_MAX; i++) {
+ 		clk[i] = devm_clk_get(&pdev->dev, clk_names[i]);
++		if (IS_ERR(clk[i]))
++			return PTR_ERR(clk[i]);
++	}
++
++	return 0;
+ }
+ 
+ static struct scp *init_scp(struct platform_device *pdev,
+@@ -426,7 +431,7 @@ static struct scp *init_scp(struct platform_device *pdev,
+ {
+ 	struct genpd_onecell_data *pd_data;
+ 	struct resource *res;
+-	int i, j;
++	int i, j, ret;
+ 	struct scp *scp;
+ 	struct clk *clk[CLK_MAX];
+ 
+@@ -481,7 +486,9 @@ static struct scp *init_scp(struct platform_device *pdev,
+ 
+ 	pd_data->num_domains = num;
+ 
+-	init_clks(pdev, clk);
++	ret = init_clks(pdev, clk);
++	if (ret)
++		return ERR_PTR(ret);
+ 
+ 	for (i = 0; i < num; i++) {
+ 		struct scp_domain *scpd = &scp->domains[i];
+diff --git a/drivers/soc/qcom/cpr.c b/drivers/soc/qcom/cpr.c
+index b24cc77d1889f..6298561bc29c9 100644
+--- a/drivers/soc/qcom/cpr.c
++++ b/drivers/soc/qcom/cpr.c
+@@ -1043,7 +1043,7 @@ static int cpr_interpolate(const struct corner *corner, int step_volt,
+ 		return corner->uV;
+ 
+ 	temp = f_diff * (uV_high - uV_low);
+-	do_div(temp, f_high - f_low);
++	temp = div64_ul(temp, f_high - f_low);
+ 
+ 	/*
+ 	 * max_volt_scale has units of uV/MHz while freq values
+diff --git a/drivers/soc/ti/pruss.c b/drivers/soc/ti/pruss.c
+index cc0b4ad7a3d34..30695172a508f 100644
+--- a/drivers/soc/ti/pruss.c
++++ b/drivers/soc/ti/pruss.c
+@@ -131,7 +131,7 @@ static int pruss_clk_init(struct pruss *pruss, struct device_node *cfg_node)
+ 
+ 	clks_np = of_get_child_by_name(cfg_node, "clocks");
+ 	if (!clks_np) {
+-		dev_err(dev, "%pOF is missing its 'clocks' node\n", clks_np);
++		dev_err(dev, "%pOF is missing its 'clocks' node\n", cfg_node);
+ 		return -ENODEV;
+ 	}
+ 
+diff --git a/drivers/spi/spi-meson-spifc.c b/drivers/spi/spi-meson-spifc.c
+index 8eca6f24cb799..c8ed7815c4ba6 100644
+--- a/drivers/spi/spi-meson-spifc.c
++++ b/drivers/spi/spi-meson-spifc.c
+@@ -349,6 +349,7 @@ static int meson_spifc_probe(struct platform_device *pdev)
+ 	return 0;
+ out_clk:
+ 	clk_disable_unprepare(spifc->clk);
++	pm_runtime_disable(spifc->dev);
+ out_err:
+ 	spi_master_put(master);
+ 	return ret;
+diff --git a/drivers/spi/spi-uniphier.c b/drivers/spi/spi-uniphier.c
+index 6a9ef8ee3cc90..e5c234aecf675 100644
+--- a/drivers/spi/spi-uniphier.c
++++ b/drivers/spi/spi-uniphier.c
+@@ -767,12 +767,13 @@ out_master_put:
+ 
+ static int uniphier_spi_remove(struct platform_device *pdev)
+ {
+-	struct uniphier_spi_priv *priv = platform_get_drvdata(pdev);
++	struct spi_master *master = platform_get_drvdata(pdev);
++	struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
+ 
+-	if (priv->master->dma_tx)
+-		dma_release_channel(priv->master->dma_tx);
+-	if (priv->master->dma_rx)
+-		dma_release_channel(priv->master->dma_rx);
++	if (master->dma_tx)
++		dma_release_channel(master->dma_tx);
++	if (master->dma_rx)
++		dma_release_channel(master->dma_rx);
+ 
+ 	clk_disable_unprepare(priv->clk);
+ 
+diff --git a/drivers/staging/greybus/audio_topology.c b/drivers/staging/greybus/audio_topology.c
+index 2bb8e7b60e8d5..e1579f356af5c 100644
+--- a/drivers/staging/greybus/audio_topology.c
++++ b/drivers/staging/greybus/audio_topology.c
+@@ -147,6 +147,9 @@ static const char **gb_generate_enum_strings(struct gbaudio_module_info *gb,
+ 
+ 	items = le32_to_cpu(gbenum->items);
+ 	strings = devm_kcalloc(gb->dev, items, sizeof(char *), GFP_KERNEL);
++	if (!strings)
++		return NULL;
++
+ 	data = gbenum->names;
+ 
+ 	for (i = 0; i < items; i++) {
+@@ -655,6 +658,8 @@ static int gbaudio_tplg_create_enum_kctl(struct gbaudio_module_info *gb,
+ 	/* since count=1, and reg is dummy */
+ 	gbe->items = le32_to_cpu(gb_enum->items);
+ 	gbe->texts = gb_generate_enum_strings(gb, gb_enum);
++	if (!gbe->texts)
++		return -ENOMEM;
+ 
+ 	/* debug enum info */
+ 	dev_dbg(gb->dev, "Max:%d, name_length:%d\n", gbe->items,
+@@ -862,6 +867,8 @@ static int gbaudio_tplg_create_enum_ctl(struct gbaudio_module_info *gb,
+ 	/* since count=1, and reg is dummy */
+ 	gbe->items = le32_to_cpu(gb_enum->items);
+ 	gbe->texts = gb_generate_enum_strings(gb, gb_enum);
++	if (!gbe->texts)
++		return -ENOMEM;
+ 
+ 	/* debug enum info */
+ 	dev_dbg(gb->dev, "Max:%d, name_length:%d\n", gbe->items,
+@@ -1072,6 +1079,10 @@ static int gbaudio_tplg_create_widget(struct gbaudio_module_info *module,
+ 			csize += le16_to_cpu(gbenum->names_length);
+ 			control->texts = (const char * const *)
+ 				gb_generate_enum_strings(module, gbenum);
++			if (!control->texts) {
++				ret = -ENOMEM;
++				goto error;
++			}
+ 			control->items = le32_to_cpu(gbenum->items);
+ 		} else {
+ 			csize = sizeof(struct gb_audio_control);
+@@ -1181,6 +1192,10 @@ static int gbaudio_tplg_process_kcontrols(struct gbaudio_module_info *module,
+ 			csize += le16_to_cpu(gbenum->names_length);
+ 			control->texts = (const char * const *)
+ 				gb_generate_enum_strings(module, gbenum);
++			if (!control->texts) {
++				ret = -ENOMEM;
++				goto error;
++			}
+ 			control->items = le32_to_cpu(gbenum->items);
+ 		} else {
+ 			csize = sizeof(struct gb_audio_control);
+diff --git a/drivers/staging/media/atomisp/i2c/ov2680.h b/drivers/staging/media/atomisp/i2c/ov2680.h
+index 49920245e0647..cafb798a71abe 100644
+--- a/drivers/staging/media/atomisp/i2c/ov2680.h
++++ b/drivers/staging/media/atomisp/i2c/ov2680.h
+@@ -289,8 +289,6 @@ static struct ov2680_reg const ov2680_global_setting[] = {
+  */
+ static struct ov2680_reg const ov2680_QCIF_30fps[] = {
+ 	{0x3086, 0x01},
+-	{0x3501, 0x24},
+-	{0x3502, 0x40},
+ 	{0x370a, 0x23},
+ 	{0x3801, 0xa0},
+ 	{0x3802, 0x00},
+@@ -334,8 +332,6 @@ static struct ov2680_reg const ov2680_QCIF_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_CIF_30fps[] = {
+ 	{0x3086, 0x01},
+-	{0x3501, 0x24},
+-	{0x3502, 0x40},
+ 	{0x370a, 0x23},
+ 	{0x3801, 0xa0},
+ 	{0x3802, 0x00},
+@@ -377,8 +373,6 @@ static struct ov2680_reg const ov2680_CIF_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_QVGA_30fps[] = {
+ 	{0x3086, 0x01},
+-	{0x3501, 0x24},
+-	{0x3502, 0x40},
+ 	{0x370a, 0x23},
+ 	{0x3801, 0xa0},
+ 	{0x3802, 0x00},
+@@ -420,8 +414,6 @@ static struct ov2680_reg const ov2680_QVGA_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_656x496_30fps[] = {
+ 	{0x3086, 0x01},
+-	{0x3501, 0x24},
+-	{0x3502, 0x40},
+ 	{0x370a, 0x23},
+ 	{0x3801, 0xa0},
+ 	{0x3802, 0x00},
+@@ -463,8 +455,6 @@ static struct ov2680_reg const ov2680_656x496_30fps[] = {
+ */
+ static struct ov2680_reg const ov2680_720x592_30fps[] = {
+ 	{0x3086, 0x01},
+-	{0x3501, 0x26},
+-	{0x3502, 0x40},
+ 	{0x370a, 0x23},
+ 	{0x3801, 0x00}, // X_ADDR_START;
+ 	{0x3802, 0x00},
+@@ -508,8 +498,6 @@ static struct ov2680_reg const ov2680_720x592_30fps[] = {
+ */
+ static struct ov2680_reg const ov2680_800x600_30fps[] = {
+ 	{0x3086, 0x01},
+-	{0x3501, 0x26},
+-	{0x3502, 0x40},
+ 	{0x370a, 0x23},
+ 	{0x3801, 0x00},
+ 	{0x3802, 0x00},
+@@ -551,8 +539,6 @@ static struct ov2680_reg const ov2680_800x600_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_720p_30fps[] = {
+ 	{0x3086, 0x00},
+-	{0x3501, 0x48},
+-	{0x3502, 0xe0},
+ 	{0x370a, 0x21},
+ 	{0x3801, 0xa0},
+ 	{0x3802, 0x00},
+@@ -594,8 +580,6 @@ static struct ov2680_reg const ov2680_720p_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_1296x976_30fps[] = {
+ 	{0x3086, 0x00},
+-	{0x3501, 0x48},
+-	{0x3502, 0xe0},
+ 	{0x370a, 0x21},
+ 	{0x3801, 0xa0},
+ 	{0x3802, 0x00},
+@@ -637,8 +621,6 @@ static struct ov2680_reg const ov2680_1296x976_30fps[] = {
+ */
+ static struct ov2680_reg const ov2680_1456x1096_30fps[] = {
+ 	{0x3086, 0x00},
+-	{0x3501, 0x48},
+-	{0x3502, 0xe0},
+ 	{0x370a, 0x21},
+ 	{0x3801, 0x90},
+ 	{0x3802, 0x00},
+@@ -682,8 +664,6 @@ static struct ov2680_reg const ov2680_1456x1096_30fps[] = {
+ 
+ static struct ov2680_reg const ov2680_1616x916_30fps[] = {
+ 	{0x3086, 0x00},
+-	{0x3501, 0x48},
+-	{0x3502, 0xe0},
+ 	{0x370a, 0x21},
+ 	{0x3801, 0x00},
+ 	{0x3802, 0x00},
+@@ -726,8 +706,6 @@ static struct ov2680_reg const ov2680_1616x916_30fps[] = {
+ #if 0
+ static struct ov2680_reg const ov2680_1616x1082_30fps[] = {
+ 	{0x3086, 0x00},
+-	{0x3501, 0x48},
+-	{0x3502, 0xe0},
+ 	{0x370a, 0x21},
+ 	{0x3801, 0x00},
+ 	{0x3802, 0x00},
+@@ -769,8 +747,6 @@ static struct ov2680_reg const ov2680_1616x1082_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_1616x1216_30fps[] = {
+ 	{0x3086, 0x00},
+-	{0x3501, 0x48},
+-	{0x3502, 0xe0},
+ 	{0x370a, 0x21},
+ 	{0x3801, 0x00},
+ 	{0x3802, 0x00},
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_cmd.c b/drivers/staging/media/atomisp/pci/atomisp_cmd.c
+index 592ea990d4ca4..90d50a693ce57 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_cmd.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_cmd.c
+@@ -1138,9 +1138,10 @@ void atomisp_buf_done(struct atomisp_sub_device *asd, int error,
+ 					asd->frame_status[vb->i] =
+ 					    ATOMISP_FRAME_STATUS_OK;
+ 				}
+-			} else
++			} else {
+ 				asd->frame_status[vb->i] =
+ 				    ATOMISP_FRAME_STATUS_OK;
++			}
+ 		} else {
+ 			asd->frame_status[vb->i] = ATOMISP_FRAME_STATUS_OK;
+ 		}
+@@ -1714,6 +1715,12 @@ void atomisp_wdt_refresh_pipe(struct atomisp_video_pipe *pipe,
+ {
+ 	unsigned long next;
+ 
++	if (!pipe->asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, pipe->vdev.name);
++		return;
++	}
++
+ 	if (delay != ATOMISP_WDT_KEEP_CURRENT_DELAY)
+ 		pipe->wdt_duration = delay;
+ 
+@@ -1776,6 +1783,12 @@ void atomisp_wdt_refresh(struct atomisp_sub_device *asd, unsigned int delay)
+ /* ISP2401 */
+ void atomisp_wdt_stop_pipe(struct atomisp_video_pipe *pipe, bool sync)
+ {
++	if (!pipe->asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, pipe->vdev.name);
++		return;
++	}
++
+ 	if (!atomisp_is_wdt_running(pipe))
+ 		return;
+ 
+@@ -4108,6 +4121,12 @@ void atomisp_handle_parameter_and_buffer(struct atomisp_video_pipe *pipe)
+ 	unsigned long irqflags;
+ 	bool need_to_enqueue_buffer = false;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, pipe->vdev.name);
++		return;
++	}
++
+ 	if (atomisp_is_vf_pipe(pipe))
+ 		return;
+ 
+@@ -4195,6 +4214,12 @@ int atomisp_set_parameters(struct video_device *vdev,
+ 	struct atomisp_css_params *css_param = &asd->params.css_param;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (!asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].stream) {
+ 		dev_err(asd->isp->dev, "%s: internal error!\n", __func__);
+ 		return -EINVAL;
+@@ -4855,6 +4880,12 @@ int atomisp_try_fmt(struct video_device *vdev, struct v4l2_format *f,
+ 	int source_pad = atomisp_subdev_source_pad(vdev);
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (!isp->inputs[asd->input_curr].camera)
+ 		return -EINVAL;
+ 
+@@ -4945,9 +4976,9 @@ atomisp_try_fmt_file(struct atomisp_device *isp, struct v4l2_format *f)
+ 
+ 	depth = get_pixel_depth(pixelformat);
+ 
+-	if (field == V4L2_FIELD_ANY)
++	if (field == V4L2_FIELD_ANY) {
+ 		field = V4L2_FIELD_NONE;
+-	else if (field != V4L2_FIELD_NONE) {
++	} else if (field != V4L2_FIELD_NONE) {
+ 		dev_err(isp->dev, "Wrong output field\n");
+ 		return -EINVAL;
+ 	}
+@@ -5201,6 +5232,12 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
+ 	const struct atomisp_in_fmt_conv *fc;
+ 	int ret, i;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	v4l2_fh_init(&fh.vfh, vdev);
+ 
+ 	isp_sink_crop = atomisp_subdev_get_rect(
+@@ -5512,6 +5549,7 @@ static int atomisp_set_fmt_to_snr(struct video_device *vdev,
+ 				  unsigned int dvs_env_w, unsigned int dvs_env_h)
+ {
+ 	struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
++	struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
+ 	const struct atomisp_format_bridge *format;
+ 	struct v4l2_subdev_pad_config pad_cfg;
+ 	struct v4l2_subdev_format vformat = {
+@@ -5527,6 +5565,12 @@ static int atomisp_set_fmt_to_snr(struct video_device *vdev,
+ 	struct v4l2_subdev_fh fh;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	v4l2_fh_init(&fh.vfh, vdev);
+ 
+ 	stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
+@@ -5617,6 +5661,12 @@ int atomisp_set_fmt(struct video_device *vdev, struct v4l2_format *f)
+ 	struct v4l2_subdev_fh fh;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (source_pad >= ATOMISP_SUBDEV_PADS_NUM)
+ 		return -EINVAL;
+ 
+@@ -6050,6 +6100,12 @@ int atomisp_set_fmt_file(struct video_device *vdev, struct v4l2_format *f)
+ 	struct v4l2_subdev_fh fh;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	v4l2_fh_init(&fh.vfh, vdev);
+ 
+ 	dev_dbg(isp->dev, "setting fmt %ux%u 0x%x for file inject\n",
+@@ -6374,6 +6430,12 @@ bool atomisp_is_vf_pipe(struct atomisp_video_pipe *pipe)
+ {
+ 	struct atomisp_sub_device *asd = pipe->asd;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, pipe->vdev.name);
++		return false;
++	}
++
+ 	if (pipe == &asd->video_out_vf)
+ 		return true;
+ 
+@@ -6587,17 +6649,23 @@ static int atomisp_get_pipe_id(struct atomisp_video_pipe *pipe)
+ {
+ 	struct atomisp_sub_device *asd = pipe->asd;
+ 
+-	if (ATOMISP_USE_YUVPP(asd))
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, pipe->vdev.name);
++		return -EINVAL;
++	}
++
++	if (ATOMISP_USE_YUVPP(asd)) {
+ 		return IA_CSS_PIPE_ID_YUVPP;
+-	else if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_SCALER)
++	} else if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_SCALER) {
+ 		return IA_CSS_PIPE_ID_VIDEO;
+-	else if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_LOWLAT)
++	} else if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_LOWLAT) {
+ 		return IA_CSS_PIPE_ID_CAPTURE;
+-	else if (pipe == &asd->video_out_video_capture)
++	} else if (pipe == &asd->video_out_video_capture) {
+ 		return IA_CSS_PIPE_ID_VIDEO;
+-	else if (pipe == &asd->video_out_vf)
++	} else if (pipe == &asd->video_out_vf) {
+ 		return IA_CSS_PIPE_ID_CAPTURE;
+-	else if (pipe == &asd->video_out_preview) {
++	} else if (pipe == &asd->video_out_preview) {
+ 		if (asd->run_mode->val == ATOMISP_RUN_MODE_VIDEO)
+ 			return IA_CSS_PIPE_ID_VIDEO;
+ 		else
+@@ -6624,6 +6692,12 @@ int atomisp_get_invalid_frame_num(struct video_device *vdev,
+ 	struct ia_css_pipe_info p_info;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (asd->isp->inputs[asd->input_curr].camera_caps->
+ 	    sensor[asd->sensor_curr].stream_num > 1) {
+ 		/* External ISP */
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_fops.c b/drivers/staging/media/atomisp/pci/atomisp_fops.c
+index f1e6b25978534..b751df31cc24c 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_fops.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_fops.c
+@@ -877,6 +877,11 @@ done:
+ 	else
+ 		pipe->users++;
+ 	rt_mutex_unlock(&isp->mutex);
++
++	/* Ensure that a mode is set */
++	if (asd)
++		v4l2_ctrl_s_ctrl(asd->run_mode, pipe->default_run_mode);
++
+ 	return 0;
+ 
+ css_error:
+@@ -1171,6 +1176,12 @@ static int atomisp_mmap(struct file *file, struct vm_area_struct *vma)
+ 	u32 origin_size, new_size;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (!(vma->vm_flags & (VM_WRITE | VM_READ)))
+ 		return -EACCES;
+ 
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+index 135994d44802c..34480ca164746 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+@@ -481,7 +481,7 @@ fail:
+ 
+ static u8 gmin_get_pmic_id_and_addr(struct device *dev)
+ {
+-	struct i2c_client *power;
++	struct i2c_client *power = NULL;
+ 	static u8 pmic_i2c_addr;
+ 
+ 	if (pmic_id)
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
+index 9da82855552de..8a0648fd7c813 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
+@@ -646,6 +646,12 @@ static int atomisp_g_input(struct file *file, void *fh, unsigned int *input)
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 	struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	rt_mutex_lock(&isp->mutex);
+ 	*input = asd->input_curr;
+ 	rt_mutex_unlock(&isp->mutex);
+@@ -665,6 +671,12 @@ static int atomisp_s_input(struct file *file, void *fh, unsigned int input)
+ 	struct v4l2_subdev *motor;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	rt_mutex_lock(&isp->mutex);
+ 	if (input >= ATOM_ISP_MAX_INPUTS || input >= isp->input_cnt) {
+ 		dev_dbg(isp->dev, "input_cnt: %d\n", isp->input_cnt);
+@@ -761,18 +773,33 @@ static int atomisp_enum_fmt_cap(struct file *file, void *fh,
+ 	struct video_device *vdev = video_devdata(file);
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 	struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
+-	struct v4l2_subdev_mbus_code_enum code = { 0 };
++	struct v4l2_subdev_mbus_code_enum code = {
++		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++	};
++	struct v4l2_subdev *camera;
+ 	unsigned int i, fi = 0;
+ 	int rval;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
++	camera = isp->inputs[asd->input_curr].camera;
++	if(!camera) {
++		dev_err(isp->dev, "%s(): camera is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	rt_mutex_lock(&isp->mutex);
+-	rval = v4l2_subdev_call(isp->inputs[asd->input_curr].camera, pad,
+-				enum_mbus_code, NULL, &code);
++
++	rval = v4l2_subdev_call(camera, pad, enum_mbus_code, NULL, &code);
+ 	if (rval == -ENOIOCTLCMD) {
+ 		dev_warn(isp->dev,
+-			 "enum_mbus_code pad op not supported. Please fix your sensor driver!\n");
+-		//	rval = v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
+-		//				video, enum_mbus_fmt, 0, &code.code);
++			 "enum_mbus_code pad op not supported by %s. Please fix your sensor driver!\n",
++			 camera->name);
+ 	}
+ 	rt_mutex_unlock(&isp->mutex);
+ 
+@@ -802,6 +829,8 @@ static int atomisp_enum_fmt_cap(struct file *file, void *fh,
+ 		f->pixelformat = format->pixelformat;
+ 		return 0;
+ 	}
++	dev_err(isp->dev, "%s(): format for code %x not found.\n",
++		__func__, code.code);
+ 
+ 	return -EINVAL;
+ }
+@@ -834,6 +863,72 @@ static int atomisp_g_fmt_file(struct file *file, void *fh,
+ 	return 0;
+ }
+ 
++static int atomisp_adjust_fmt(struct v4l2_format *f)
++{
++	const struct atomisp_format_bridge *format_bridge;
++	u32 padded_width;
++
++	format_bridge = atomisp_get_format_bridge(f->fmt.pix.pixelformat);
++
++	padded_width = f->fmt.pix.width + pad_w;
++
++	if (format_bridge->planar) {
++		f->fmt.pix.bytesperline = padded_width;
++		f->fmt.pix.sizeimage = PAGE_ALIGN(f->fmt.pix.height *
++						  DIV_ROUND_UP(format_bridge->depth *
++						  padded_width, 8));
++	} else {
++		f->fmt.pix.bytesperline = DIV_ROUND_UP(format_bridge->depth *
++						      padded_width, 8);
++		f->fmt.pix.sizeimage = PAGE_ALIGN(f->fmt.pix.height * f->fmt.pix.bytesperline);
++	}
++
++	if (f->fmt.pix.field == V4L2_FIELD_ANY)
++		f->fmt.pix.field = V4L2_FIELD_NONE;
++
++	format_bridge = atomisp_get_format_bridge(f->fmt.pix.pixelformat);
++	if (!format_bridge)
++		return -EINVAL;
++
++	/* Currently, raw formats are broken!!! */
++	if (format_bridge->sh_fmt == IA_CSS_FRAME_FORMAT_RAW) {
++		f->fmt.pix.pixelformat = V4L2_PIX_FMT_YUV420;
++
++		format_bridge = atomisp_get_format_bridge(f->fmt.pix.pixelformat);
++		if (!format_bridge)
++			return -EINVAL;
++	}
++
++	padded_width = f->fmt.pix.width + pad_w;
++
++	if (format_bridge->planar) {
++		f->fmt.pix.bytesperline = padded_width;
++		f->fmt.pix.sizeimage = PAGE_ALIGN(f->fmt.pix.height *
++						  DIV_ROUND_UP(format_bridge->depth *
++						  padded_width, 8));
++	} else {
++		f->fmt.pix.bytesperline = DIV_ROUND_UP(format_bridge->depth *
++						      padded_width, 8);
++		f->fmt.pix.sizeimage = PAGE_ALIGN(f->fmt.pix.height * f->fmt.pix.bytesperline);
++	}
++
++	if (f->fmt.pix.field == V4L2_FIELD_ANY)
++		f->fmt.pix.field = V4L2_FIELD_NONE;
++
++	/*
++	 * FIXME: do we need to setup this differently, depending on the
++	 * sensor or the pipeline?
++	 */
++	f->fmt.pix.colorspace = V4L2_COLORSPACE_REC709;
++	f->fmt.pix.ycbcr_enc = V4L2_YCBCR_ENC_709;
++	f->fmt.pix.xfer_func = V4L2_XFER_FUNC_709;
++
++	f->fmt.pix.width -= pad_w;
++	f->fmt.pix.height -= pad_h;
++
++	return 0;
++}
++
+ /* This function looks up the closest available resolution. */
+ static int atomisp_try_fmt_cap(struct file *file, void *fh,
+ 			       struct v4l2_format *f)
+@@ -845,7 +940,11 @@ static int atomisp_try_fmt_cap(struct file *file, void *fh,
+ 	rt_mutex_lock(&isp->mutex);
+ 	ret = atomisp_try_fmt(vdev, f, NULL);
+ 	rt_mutex_unlock(&isp->mutex);
+-	return ret;
++
++	if (ret)
++		return ret;
++
++	return atomisp_adjust_fmt(f);
+ }
+ 
+ static int atomisp_s_fmt_cap(struct file *file, void *fh,
+@@ -1027,6 +1126,12 @@ int __atomisp_reqbufs(struct file *file, void *fh,
+ 	u16 stream_id = atomisp_source_pad_to_stream_id(asd, source_pad);
+ 	int ret = 0, i = 0;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (req->count == 0) {
+ 		mutex_lock(&pipe->capq.vb_lock);
+ 		if (!list_empty(&pipe->capq.stream))
+@@ -1154,6 +1259,12 @@ static int atomisp_qbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
+ 	u32 pgnr;
+ 	int ret = 0;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	rt_mutex_lock(&isp->mutex);
+ 	if (isp->isp_fatal_error) {
+ 		ret = -EIO;
+@@ -1389,6 +1500,12 @@ static int atomisp_dqbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 	int ret = 0;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	rt_mutex_lock(&isp->mutex);
+ 
+ 	if (isp->isp_fatal_error) {
+@@ -1640,6 +1757,12 @@ static int atomisp_streamon(struct file *file, void *fh,
+ 	int ret = 0;
+ 	unsigned long irqflags;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	dev_dbg(isp->dev, "Start stream on pad %d for asd%d\n",
+ 		atomisp_subdev_source_pad(vdev), asd->index);
+ 
+@@ -1901,6 +2024,12 @@ int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
+ 	unsigned long flags;
+ 	bool first_streamoff = false;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	dev_dbg(isp->dev, "Stop stream on pad %d for asd%d\n",
+ 		atomisp_subdev_source_pad(vdev), asd->index);
+ 
+@@ -2150,6 +2279,12 @@ static int atomisp_g_ctrl(struct file *file, void *fh,
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 	int i, ret = -EINVAL;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	for (i = 0; i < ctrls_num; i++) {
+ 		if (ci_v4l2_controls[i].id == control->id) {
+ 			ret = 0;
+@@ -2229,6 +2364,12 @@ static int atomisp_s_ctrl(struct file *file, void *fh,
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 	int i, ret = -EINVAL;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	for (i = 0; i < ctrls_num; i++) {
+ 		if (ci_v4l2_controls[i].id == control->id) {
+ 			ret = 0;
+@@ -2310,6 +2451,12 @@ static int atomisp_queryctl(struct file *file, void *fh,
+ 	struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	switch (qc->id) {
+ 	case V4L2_CID_FOCUS_ABSOLUTE:
+ 	case V4L2_CID_FOCUS_RELATIVE:
+@@ -2355,6 +2502,12 @@ static int atomisp_camera_g_ext_ctrls(struct file *file, void *fh,
+ 	int i;
+ 	int ret = 0;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (!IS_ISP2401)
+ 		motor = isp->inputs[asd->input_curr].motor;
+ 	else
+@@ -2466,6 +2619,12 @@ static int atomisp_camera_s_ext_ctrls(struct file *file, void *fh,
+ 	int i;
+ 	int ret = 0;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (!IS_ISP2401)
+ 		motor = isp->inputs[asd->input_curr].motor;
+ 	else
+@@ -2591,6 +2750,12 @@ static int atomisp_g_parm(struct file *file, void *fh,
+ 	struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
+ 		dev_err(isp->dev, "unsupported v4l2 buf type\n");
+ 		return -EINVAL;
+@@ -2613,6 +2778,12 @@ static int atomisp_s_parm(struct file *file, void *fh,
+ 	int rval;
+ 	int fps;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
+ 		dev_err(isp->dev, "unsupported v4l2 buf type\n");
+ 		return -EINVAL;
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_subdev.c b/drivers/staging/media/atomisp/pci/atomisp_subdev.c
+index dcc2dd981ca60..628e85799274d 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_subdev.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_subdev.c
+@@ -1178,23 +1178,28 @@ static int isp_subdev_init_entities(struct atomisp_sub_device *asd)
+ 
+ 	atomisp_init_acc_pipe(asd, &asd->video_acc);
+ 
+-	ret = atomisp_video_init(&asd->video_in, "MEMORY");
++	ret = atomisp_video_init(&asd->video_in, "MEMORY",
++				 ATOMISP_RUN_MODE_SDV);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = atomisp_video_init(&asd->video_out_capture, "CAPTURE");
++	ret = atomisp_video_init(&asd->video_out_capture, "CAPTURE",
++				 ATOMISP_RUN_MODE_STILL_CAPTURE);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = atomisp_video_init(&asd->video_out_vf, "VIEWFINDER");
++	ret = atomisp_video_init(&asd->video_out_vf, "VIEWFINDER",
++				 ATOMISP_RUN_MODE_CONTINUOUS_CAPTURE);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = atomisp_video_init(&asd->video_out_preview, "PREVIEW");
++	ret = atomisp_video_init(&asd->video_out_preview, "PREVIEW",
++				 ATOMISP_RUN_MODE_PREVIEW);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = atomisp_video_init(&asd->video_out_video_capture, "VIDEO");
++	ret = atomisp_video_init(&asd->video_out_video_capture, "VIDEO",
++				 ATOMISP_RUN_MODE_VIDEO);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_subdev.h b/drivers/staging/media/atomisp/pci/atomisp_subdev.h
+index 330a77eed8aa6..12215d7406169 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_subdev.h
++++ b/drivers/staging/media/atomisp/pci/atomisp_subdev.h
+@@ -81,6 +81,9 @@ struct atomisp_video_pipe {
+ 	/* the link list to store per_frame parameters */
+ 	struct list_head per_frame_params;
+ 
++	/* Store here the initial run mode */
++	unsigned int default_run_mode;
++
+ 	unsigned int buffers_in_css;
+ 
+ 	/* irq_lock is used to protect video buffer state change operations and
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_v4l2.c b/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
+index fa1bd99cd6f17..8aeea74cfd06b 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
+@@ -447,7 +447,8 @@ const struct atomisp_dfs_config dfs_config_cht_soc = {
+ 	.dfs_table_size = ARRAY_SIZE(dfs_rules_cht_soc),
+ };
+ 
+-int atomisp_video_init(struct atomisp_video_pipe *video, const char *name)
++int atomisp_video_init(struct atomisp_video_pipe *video, const char *name,
++		       unsigned int run_mode)
+ {
+ 	int ret;
+ 	const char *direction;
+@@ -478,6 +479,7 @@ int atomisp_video_init(struct atomisp_video_pipe *video, const char *name)
+ 		 "ATOMISP ISP %s %s", name, direction);
+ 	video->vdev.release = video_device_release_empty;
+ 	video_set_drvdata(&video->vdev, video->isp);
++	video->default_run_mode = run_mode;
+ 
+ 	return 0;
+ }
+@@ -711,15 +713,15 @@ static int atomisp_mrfld_power(struct atomisp_device *isp, bool enable)
+ 
+ 	dev_dbg(isp->dev, "IUNIT power-%s.\n", enable ? "on" : "off");
+ 
+-	/*WA:Enable DVFS*/
++	/* WA for P-Unit, if DVFS enabled, ISP timeout observed */
+ 	if (IS_CHT && enable)
+-		punit_ddr_dvfs_enable(true);
++		punit_ddr_dvfs_enable(false);
+ 
+ 	/*
+ 	 * FIXME:WA for ECS28A, with this sleep, CTS
+ 	 * android.hardware.camera2.cts.CameraDeviceTest#testCameraDeviceAbort
+ 	 * PASS, no impact on other platforms
+-	*/
++	 */
+ 	if (IS_BYT && enable)
+ 		msleep(10);
+ 
+@@ -727,7 +729,7 @@ static int atomisp_mrfld_power(struct atomisp_device *isp, bool enable)
+ 	iosf_mbi_modify(BT_MBI_UNIT_PMC, MBI_REG_READ, MRFLD_ISPSSPM0,
+ 			val, MRFLD_ISPSSPM0_ISPSSC_MASK);
+ 
+-	/*WA:Enable DVFS*/
++	/* WA:Enable DVFS */
+ 	if (IS_CHT && !enable)
+ 		punit_ddr_dvfs_enable(true);
+ 
+@@ -1182,6 +1184,7 @@ static void atomisp_unregister_entities(struct atomisp_device *isp)
+ 
+ 	v4l2_device_unregister(&isp->v4l2_dev);
+ 	media_device_unregister(&isp->media_dev);
++	media_device_cleanup(&isp->media_dev);
+ }
+ 
+ static int atomisp_register_entities(struct atomisp_device *isp)
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_v4l2.h b/drivers/staging/media/atomisp/pci/atomisp_v4l2.h
+index 81bb356b81720..72611b8286a4a 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_v4l2.h
++++ b/drivers/staging/media/atomisp/pci/atomisp_v4l2.h
+@@ -27,7 +27,8 @@ struct v4l2_device;
+ struct atomisp_device;
+ struct firmware;
+ 
+-int atomisp_video_init(struct atomisp_video_pipe *video, const char *name);
++int atomisp_video_init(struct atomisp_video_pipe *video, const char *name,
++		       unsigned int run_mode);
+ void atomisp_acc_init(struct atomisp_acc_pipe *video, const char *name);
+ void atomisp_video_unregister(struct atomisp_video_pipe *video);
+ void atomisp_acc_unregister(struct atomisp_acc_pipe *video);
+diff --git a/drivers/staging/media/atomisp/pci/sh_css.c b/drivers/staging/media/atomisp/pci/sh_css.c
+index ddee04c8248d0..54a18921fbd15 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css.c
++++ b/drivers/staging/media/atomisp/pci/sh_css.c
+@@ -527,6 +527,7 @@ ia_css_stream_input_format_bits_per_pixel(struct ia_css_stream *stream)
+ 	return bpp;
+ }
+ 
++/* TODO: move define to proper file in tools */
+ #define GP_ISEL_TPG_MODE 0x90058
+ 
+ #if !defined(ISP2401)
+@@ -579,12 +580,8 @@ sh_css_config_input_network(struct ia_css_stream *stream) {
+ 		vblank_cycles = vblank_lines * (width + hblank_cycles);
+ 		sh_css_sp_configure_sync_gen(width, height, hblank_cycles,
+ 					     vblank_cycles);
+-		if (!IS_ISP2401) {
+-			if (pipe->stream->config.mode == IA_CSS_INPUT_MODE_TPG) {
+-				/* TODO: move define to proper file in tools */
+-				ia_css_device_store_uint32(GP_ISEL_TPG_MODE, 0);
+-			}
+-		}
++		if (pipe->stream->config.mode == IA_CSS_INPUT_MODE_TPG)
++			ia_css_device_store_uint32(GP_ISEL_TPG_MODE, 0);
+ 	}
+ 	ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
+ 			    "sh_css_config_input_network() leave:\n");
+@@ -1019,16 +1016,14 @@ static bool sh_css_translate_stream_cfg_to_isys_stream_descr(
+ 	 * ia_css_isys_stream_capture_indication() instead of
+ 	 * ia_css_pipeline_sp_wait_for_isys_stream_N() as isp processing of
+ 	 * capture takes longer than getting an ISYS frame
+-	 *
+-	 * Only 2401 relevant ??
+ 	 */
+-#if 0 // FIXME: NOT USED on Yocto Aero
+-	isys_stream_descr->polling_mode
+-	    = early_polling ? INPUT_SYSTEM_POLL_ON_CAPTURE_REQUEST
+-	      : INPUT_SYSTEM_POLL_ON_WAIT_FOR_FRAME;
+-	ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
+-			    "sh_css_translate_stream_cfg_to_isys_stream_descr() leave:\n");
+-#endif
++	if (IS_ISP2401) {
++		isys_stream_descr->polling_mode
++		    = early_polling ? INPUT_SYSTEM_POLL_ON_CAPTURE_REQUEST
++		      : INPUT_SYSTEM_POLL_ON_WAIT_FOR_FRAME;
++		ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
++				    "sh_css_translate_stream_cfg_to_isys_stream_descr() leave:\n");
++	}
+ 
+ 	return rc;
+ }
+@@ -1451,7 +1446,7 @@ static void start_pipe(
+ 
+ 	assert(me); /* all callers are in this file and call with non null argument */
+ 
+-	if (!IS_ISP2401) {
++	if (IS_ISP2401) {
+ 		coord = &me->config.internal_frame_origin_bqs_on_sctbl;
+ 		params = me->stream->isp_params_configs;
+ 	}
+diff --git a/drivers/staging/media/atomisp/pci/sh_css_mipi.c b/drivers/staging/media/atomisp/pci/sh_css_mipi.c
+index d5ae7f0b5864b..651eda0469b23 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css_mipi.c
++++ b/drivers/staging/media/atomisp/pci/sh_css_mipi.c
+@@ -389,17 +389,17 @@ static bool buffers_needed(struct ia_css_pipe *pipe)
+ {
+ 	if (!IS_ISP2401) {
+ 		if (pipe->stream->config.mode == IA_CSS_INPUT_MODE_BUFFERED_SENSOR)
+-			return false;
+-		else
+ 			return true;
++		else
++			return false;
+ 	}
+ 
+ 	if (pipe->stream->config.mode == IA_CSS_INPUT_MODE_BUFFERED_SENSOR ||
+ 	    pipe->stream->config.mode == IA_CSS_INPUT_MODE_TPG ||
+ 	    pipe->stream->config.mode == IA_CSS_INPUT_MODE_PRBS)
+-		return false;
++		return true;
+ 
+-	return true;
++	return false;
+ }
+ 
+ int
+@@ -439,14 +439,17 @@ allocate_mipi_frames(struct ia_css_pipe *pipe,
+ 		return 0; /* AM TODO: Check  */
+ 	}
+ 
+-	if (!IS_ISP2401)
++	if (!IS_ISP2401) {
+ 		port = (unsigned int)pipe->stream->config.source.port.port;
+-	else
+-		err = ia_css_mipi_is_source_port_valid(pipe, &port);
++	} else {
++		/* Returns true if port is valid. So, invert it */
++		err = !ia_css_mipi_is_source_port_valid(pipe, &port);
++	}
+ 
+ 	assert(port < N_CSI_PORTS);
+ 
+-	if (port >= N_CSI_PORTS || err) {
++	if ((!IS_ISP2401 && port >= N_CSI_PORTS) ||
++	    (IS_ISP2401 && err)) {
+ 		ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
+ 				    "allocate_mipi_frames(%p) exit: error: port is not correct (port=%d).\n",
+ 				    pipe, port);
+@@ -571,14 +574,17 @@ free_mipi_frames(struct ia_css_pipe *pipe) {
+ 			return err;
+ 		}
+ 
+-		if (!IS_ISP2401)
++		if (!IS_ISP2401) {
+ 			port = (unsigned int)pipe->stream->config.source.port.port;
+-		else
+-			err = ia_css_mipi_is_source_port_valid(pipe, &port);
++		} else {
++			/* Returns true if port is valid. So, invert it */
++			err = !ia_css_mipi_is_source_port_valid(pipe, &port);
++		}
+ 
+ 		assert(port < N_CSI_PORTS);
+ 
+-		if (port >= N_CSI_PORTS || err) {
++		if ((!IS_ISP2401 && port >= N_CSI_PORTS) ||
++		    (IS_ISP2401 && err)) {
+ 			ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
+ 					    "free_mipi_frames(%p, %d) exit: error: pipe port is not correct.\n",
+ 					    pipe, port);
+@@ -683,14 +689,17 @@ send_mipi_frames(struct ia_css_pipe *pipe) {
+ 		/* TODO: AM: maybe this should be returning an error. */
+ 	}
+ 
+-	if (!IS_ISP2401)
++	if (!IS_ISP2401) {
+ 		port = (unsigned int)pipe->stream->config.source.port.port;
+-	else
+-		err = ia_css_mipi_is_source_port_valid(pipe, &port);
++	} else {
++		/* Returns true if port is valid. So, invert it */
++		err = !ia_css_mipi_is_source_port_valid(pipe, &port);
++	}
+ 
+ 	assert(port < N_CSI_PORTS);
+ 
+-	if (port >= N_CSI_PORTS || err) {
++	if ((!IS_ISP2401 && port >= N_CSI_PORTS) ||
++	    (IS_ISP2401 && err)) {
+ 		IA_CSS_ERROR("send_mipi_frames(%p) exit: invalid port specified (port=%d).\n",
+ 			     pipe, port);
+ 		return err;
+diff --git a/drivers/staging/media/atomisp/pci/sh_css_params.c b/drivers/staging/media/atomisp/pci/sh_css_params.c
+index 24fc497bd4915..8d6514c45eeb6 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css_params.c
++++ b/drivers/staging/media/atomisp/pci/sh_css_params.c
+@@ -2437,7 +2437,7 @@ sh_css_create_isp_params(struct ia_css_stream *stream,
+ 	unsigned int i;
+ 	struct sh_css_ddr_address_map *ddr_ptrs;
+ 	struct sh_css_ddr_address_map_size *ddr_ptrs_size;
+-	int err = 0;
++	int err;
+ 	size_t params_size;
+ 	struct ia_css_isp_parameters *params =
+ 	kvmalloc(sizeof(struct ia_css_isp_parameters), GFP_KERNEL);
+@@ -2482,7 +2482,11 @@ sh_css_create_isp_params(struct ia_css_stream *stream,
+ 	succ &= (ddr_ptrs->macc_tbl != mmgr_NULL);
+ 
+ 	*isp_params_out = params;
+-	return err;
++
++	if (!succ)
++		return -ENOMEM;
++
++	return 0;
+ }
+ 
+ static bool
+diff --git a/drivers/staging/media/hantro/hantro_drv.c b/drivers/staging/media/hantro/hantro_drv.c
+index 7749ca9a8ebbf..bc97ec0a7e4af 100644
+--- a/drivers/staging/media/hantro/hantro_drv.c
++++ b/drivers/staging/media/hantro/hantro_drv.c
+@@ -829,7 +829,7 @@ static int hantro_probe(struct platform_device *pdev)
+ 	ret = clk_bulk_prepare(vpu->variant->num_clocks, vpu->clocks);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to prepare clocks\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	ret = v4l2_device_register(&pdev->dev, &vpu->v4l2_dev);
+@@ -885,6 +885,7 @@ err_v4l2_unreg:
+ 	v4l2_device_unregister(&vpu->v4l2_dev);
+ err_clk_unprepare:
+ 	clk_bulk_unprepare(vpu->variant->num_clocks, vpu->clocks);
++err_pm_disable:
+ 	pm_runtime_dont_use_autosuspend(vpu->dev);
+ 	pm_runtime_disable(vpu->dev);
+ 	return ret;
+diff --git a/drivers/staging/rtl8192e/rtllib.h b/drivers/staging/rtl8192e/rtllib.h
+index 4cabaf21c1ca0..367db4acc7852 100644
+--- a/drivers/staging/rtl8192e/rtllib.h
++++ b/drivers/staging/rtl8192e/rtllib.h
+@@ -1982,7 +1982,7 @@ void rtllib_softmac_xmit(struct rtllib_txb *txb, struct rtllib_device *ieee);
+ void rtllib_stop_send_beacons(struct rtllib_device *ieee);
+ void notify_wx_assoc_event(struct rtllib_device *ieee);
+ void rtllib_start_ibss(struct rtllib_device *ieee);
+-void rtllib_softmac_init(struct rtllib_device *ieee);
++int rtllib_softmac_init(struct rtllib_device *ieee);
+ void rtllib_softmac_free(struct rtllib_device *ieee);
+ void rtllib_disassociate(struct rtllib_device *ieee);
+ void rtllib_stop_scan(struct rtllib_device *ieee);
+diff --git a/drivers/staging/rtl8192e/rtllib_module.c b/drivers/staging/rtl8192e/rtllib_module.c
+index 64d9feee1f392..f00ac94b2639b 100644
+--- a/drivers/staging/rtl8192e/rtllib_module.c
++++ b/drivers/staging/rtl8192e/rtllib_module.c
+@@ -88,7 +88,7 @@ struct net_device *alloc_rtllib(int sizeof_priv)
+ 	err = rtllib_networks_allocate(ieee);
+ 	if (err) {
+ 		pr_err("Unable to allocate beacon storage: %d\n", err);
+-		goto failed;
++		goto free_netdev;
+ 	}
+ 	rtllib_networks_initialize(ieee);
+ 
+@@ -121,11 +121,13 @@ struct net_device *alloc_rtllib(int sizeof_priv)
+ 	ieee->hwsec_active = 0;
+ 
+ 	memset(ieee->swcamtable, 0, sizeof(struct sw_cam_table) * 32);
+-	rtllib_softmac_init(ieee);
++	err = rtllib_softmac_init(ieee);
++	if (err)
++		goto free_crypt_info;
+ 
+ 	ieee->pHTInfo = kzalloc(sizeof(struct rt_hi_throughput), GFP_KERNEL);
+ 	if (!ieee->pHTInfo)
+-		return NULL;
++		goto free_softmac;
+ 
+ 	HTUpdateDefaultSetting(ieee);
+ 	HTInitializeHTInfo(ieee);
+@@ -141,8 +143,14 @@ struct net_device *alloc_rtllib(int sizeof_priv)
+ 
+ 	return dev;
+ 
+- failed:
++free_softmac:
++	rtllib_softmac_free(ieee);
++free_crypt_info:
++	lib80211_crypt_info_free(&ieee->crypt_info);
++	rtllib_networks_free(ieee);
++free_netdev:
+ 	free_netdev(dev);
++
+ 	return NULL;
+ }
+ EXPORT_SYMBOL(alloc_rtllib);
+diff --git a/drivers/staging/rtl8192e/rtllib_softmac.c b/drivers/staging/rtl8192e/rtllib_softmac.c
+index 2c752ba5a802a..e8e72f79ca007 100644
+--- a/drivers/staging/rtl8192e/rtllib_softmac.c
++++ b/drivers/staging/rtl8192e/rtllib_softmac.c
+@@ -2953,7 +2953,7 @@ void rtllib_start_protocol(struct rtllib_device *ieee)
+ 	}
+ }
+ 
+-void rtllib_softmac_init(struct rtllib_device *ieee)
++int rtllib_softmac_init(struct rtllib_device *ieee)
+ {
+ 	int i;
+ 
+@@ -2964,7 +2964,8 @@ void rtllib_softmac_init(struct rtllib_device *ieee)
+ 		ieee->seq_ctrl[i] = 0;
+ 	ieee->dot11d_info = kzalloc(sizeof(struct rt_dot11d_info), GFP_ATOMIC);
+ 	if (!ieee->dot11d_info)
+-		netdev_err(ieee->dev, "Can't alloc memory for DOT11D\n");
++		return -ENOMEM;
++
+ 	ieee->LinkDetectInfo.SlotIndex = 0;
+ 	ieee->LinkDetectInfo.SlotNum = 2;
+ 	ieee->LinkDetectInfo.NumRecvBcnInPeriod = 0;
+@@ -3030,6 +3031,7 @@ void rtllib_softmac_init(struct rtllib_device *ieee)
+ 
+ 	tasklet_setup(&ieee->ps_task, rtllib_sta_ps);
+ 
++	return 0;
+ }
+ 
+ void rtllib_softmac_free(struct rtllib_device *ieee)
+diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
+index 6ade4a5c48407..dfc239c64ce3c 100644
+--- a/drivers/tee/tee_core.c
++++ b/drivers/tee/tee_core.c
+@@ -98,8 +98,10 @@ void teedev_ctx_put(struct tee_context *ctx)
+ 
+ static void teedev_close_context(struct tee_context *ctx)
+ {
+-	tee_device_put(ctx->teedev);
++	struct tee_device *teedev = ctx->teedev;
++
+ 	teedev_ctx_put(ctx);
++	tee_device_put(teedev);
+ }
+ 
+ static int tee_open(struct inode *inode, struct file *filp)
+diff --git a/drivers/thermal/imx8mm_thermal.c b/drivers/thermal/imx8mm_thermal.c
+index a1e4f9bb4cb01..0f4cabd2a8c62 100644
+--- a/drivers/thermal/imx8mm_thermal.c
++++ b/drivers/thermal/imx8mm_thermal.c
+@@ -21,6 +21,7 @@
+ #define TPS			0x4
+ #define TRITSR			0x20	/* TMU immediate temp */
+ 
++#define TER_ADC_PD		BIT(30)
+ #define TER_EN			BIT(31)
+ #define TRITSR_TEMP0_VAL_MASK	0xff
+ #define TRITSR_TEMP1_VAL_MASK	0xff0000
+@@ -113,6 +114,8 @@ static void imx8mm_tmu_enable(struct imx8mm_tmu *tmu, bool enable)
+ 
+ 	val = readl_relaxed(tmu->base + TER);
+ 	val = enable ? (val | TER_EN) : (val & ~TER_EN);
++	if (tmu->socdata->version == TMU_VER2)
++		val = enable ? (val & ~TER_ADC_PD) : (val | TER_ADC_PD);
+ 	writel_relaxed(val, tmu->base + TER);
+ }
+ 
+diff --git a/drivers/thermal/imx_thermal.c b/drivers/thermal/imx_thermal.c
+index 2c7473d86a59b..16663373b6829 100644
+--- a/drivers/thermal/imx_thermal.c
++++ b/drivers/thermal/imx_thermal.c
+@@ -15,6 +15,7 @@
+ #include <linux/regmap.h>
+ #include <linux/thermal.h>
+ #include <linux/nvmem-consumer.h>
++#include <linux/pm_runtime.h>
+ 
+ #define REG_SET		0x4
+ #define REG_CLR		0x8
+@@ -194,6 +195,7 @@ static struct thermal_soc_data thermal_imx7d_data = {
+ };
+ 
+ struct imx_thermal_data {
++	struct device *dev;
+ 	struct cpufreq_policy *policy;
+ 	struct thermal_zone_device *tz;
+ 	struct thermal_cooling_device *cdev;
+@@ -252,44 +254,15 @@ static int imx_get_temp(struct thermal_zone_device *tz, int *temp)
+ 	const struct thermal_soc_data *soc_data = data->socdata;
+ 	struct regmap *map = data->tempmon;
+ 	unsigned int n_meas;
+-	bool wait, run_measurement;
+ 	u32 val;
++	int ret;
+ 
+-	run_measurement = !data->irq_enabled;
+-	if (!run_measurement) {
+-		/* Check if a measurement is currently in progress */
+-		regmap_read(map, soc_data->temp_data, &val);
+-		wait = !(val & soc_data->temp_valid_mask);
+-	} else {
+-		/*
+-		 * Every time we measure the temperature, we will power on the
+-		 * temperature sensor, enable measurements, take a reading,
+-		 * disable measurements, power off the temperature sensor.
+-		 */
+-		regmap_write(map, soc_data->sensor_ctrl + REG_CLR,
+-			    soc_data->power_down_mask);
+-		regmap_write(map, soc_data->sensor_ctrl + REG_SET,
+-			    soc_data->measure_temp_mask);
+-
+-		wait = true;
+-	}
+-
+-	/*
+-	 * According to the temp sensor designers, it may require up to ~17us
+-	 * to complete a measurement.
+-	 */
+-	if (wait)
+-		usleep_range(20, 50);
++	ret = pm_runtime_resume_and_get(data->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	regmap_read(map, soc_data->temp_data, &val);
+ 
+-	if (run_measurement) {
+-		regmap_write(map, soc_data->sensor_ctrl + REG_CLR,
+-			     soc_data->measure_temp_mask);
+-		regmap_write(map, soc_data->sensor_ctrl + REG_SET,
+-			     soc_data->power_down_mask);
+-	}
+-
+ 	if ((val & soc_data->temp_valid_mask) == 0) {
+ 		dev_dbg(&tz->device, "temp measurement never finished\n");
+ 		return -EAGAIN;
+@@ -328,6 +301,8 @@ static int imx_get_temp(struct thermal_zone_device *tz, int *temp)
+ 		enable_irq(data->irq);
+ 	}
+ 
++	pm_runtime_put(data->dev);
++
+ 	return 0;
+ }
+ 
+@@ -335,24 +310,16 @@ static int imx_change_mode(struct thermal_zone_device *tz,
+ 			   enum thermal_device_mode mode)
+ {
+ 	struct imx_thermal_data *data = tz->devdata;
+-	struct regmap *map = data->tempmon;
+-	const struct thermal_soc_data *soc_data = data->socdata;
+ 
+ 	if (mode == THERMAL_DEVICE_ENABLED) {
+-		regmap_write(map, soc_data->sensor_ctrl + REG_CLR,
+-			     soc_data->power_down_mask);
+-		regmap_write(map, soc_data->sensor_ctrl + REG_SET,
+-			     soc_data->measure_temp_mask);
++		pm_runtime_get(data->dev);
+ 
+ 		if (!data->irq_enabled) {
+ 			data->irq_enabled = true;
+ 			enable_irq(data->irq);
+ 		}
+ 	} else {
+-		regmap_write(map, soc_data->sensor_ctrl + REG_CLR,
+-			     soc_data->measure_temp_mask);
+-		regmap_write(map, soc_data->sensor_ctrl + REG_SET,
+-			     soc_data->power_down_mask);
++		pm_runtime_put(data->dev);
+ 
+ 		if (data->irq_enabled) {
+ 			disable_irq(data->irq);
+@@ -393,6 +360,11 @@ static int imx_set_trip_temp(struct thermal_zone_device *tz, int trip,
+ 			     int temp)
+ {
+ 	struct imx_thermal_data *data = tz->devdata;
++	int ret;
++
++	ret = pm_runtime_resume_and_get(data->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* do not allow changing critical threshold */
+ 	if (trip == IMX_TRIP_CRITICAL)
+@@ -406,6 +378,8 @@ static int imx_set_trip_temp(struct thermal_zone_device *tz, int trip,
+ 
+ 	imx_set_alarm_temp(data, temp);
+ 
++	pm_runtime_put(data->dev);
++
+ 	return 0;
+ }
+ 
+@@ -681,6 +655,8 @@ static int imx_thermal_probe(struct platform_device *pdev)
+ 	if (!data)
+ 		return -ENOMEM;
+ 
++	data->dev = &pdev->dev;
++
+ 	map = syscon_regmap_lookup_by_phandle(pdev->dev.of_node, "fsl,tempmon");
+ 	if (IS_ERR(map)) {
+ 		ret = PTR_ERR(map);
+@@ -800,6 +776,16 @@ static int imx_thermal_probe(struct platform_device *pdev)
+ 		     data->socdata->power_down_mask);
+ 	regmap_write(map, data->socdata->sensor_ctrl + REG_SET,
+ 		     data->socdata->measure_temp_mask);
++	/* After power up, we need a delay before first access can be done. */
++	usleep_range(20, 50);
++
++	/* the core was configured and enabled just before */
++	pm_runtime_set_active(&pdev->dev);
++	pm_runtime_enable(data->dev);
++
++	ret = pm_runtime_resume_and_get(data->dev);
++	if (ret < 0)
++		goto disable_runtime_pm;
+ 
+ 	data->irq_enabled = true;
+ 	ret = thermal_zone_device_enable(data->tz);
+@@ -814,10 +800,15 @@ static int imx_thermal_probe(struct platform_device *pdev)
+ 		goto thermal_zone_unregister;
+ 	}
+ 
++	pm_runtime_put(data->dev);
++
+ 	return 0;
+ 
+ thermal_zone_unregister:
+ 	thermal_zone_device_unregister(data->tz);
++disable_runtime_pm:
++	pm_runtime_put_noidle(data->dev);
++	pm_runtime_disable(data->dev);
+ clk_disable:
+ 	clk_disable_unprepare(data->thermal_clk);
+ legacy_cleanup:
+@@ -829,13 +820,9 @@ legacy_cleanup:
+ static int imx_thermal_remove(struct platform_device *pdev)
+ {
+ 	struct imx_thermal_data *data = platform_get_drvdata(pdev);
+-	struct regmap *map = data->tempmon;
+ 
+-	/* Disable measurements */
+-	regmap_write(map, data->socdata->sensor_ctrl + REG_SET,
+-		     data->socdata->power_down_mask);
+-	if (!IS_ERR(data->thermal_clk))
+-		clk_disable_unprepare(data->thermal_clk);
++	pm_runtime_put_noidle(data->dev);
++	pm_runtime_disable(data->dev);
+ 
+ 	thermal_zone_device_unregister(data->tz);
+ 	imx_thermal_unregister_legacy_cooling(data);
+@@ -858,29 +845,79 @@ static int __maybe_unused imx_thermal_suspend(struct device *dev)
+ 	ret = thermal_zone_device_disable(data->tz);
+ 	if (ret)
+ 		return ret;
++
++	return pm_runtime_force_suspend(data->dev);
++}
++
++static int __maybe_unused imx_thermal_resume(struct device *dev)
++{
++	struct imx_thermal_data *data = dev_get_drvdata(dev);
++	int ret;
++
++	ret = pm_runtime_force_resume(data->dev);
++	if (ret)
++		return ret;
++	/* Enabled thermal sensor after resume */
++	return thermal_zone_device_enable(data->tz);
++}
++
++static int __maybe_unused imx_thermal_runtime_suspend(struct device *dev)
++{
++	struct imx_thermal_data *data = dev_get_drvdata(dev);
++	const struct thermal_soc_data *socdata = data->socdata;
++	struct regmap *map = data->tempmon;
++	int ret;
++
++	ret = regmap_write(map, socdata->sensor_ctrl + REG_CLR,
++			   socdata->measure_temp_mask);
++	if (ret)
++		return ret;
++
++	ret = regmap_write(map, socdata->sensor_ctrl + REG_SET,
++			   socdata->power_down_mask);
++	if (ret)
++		return ret;
++
+ 	clk_disable_unprepare(data->thermal_clk);
+ 
+ 	return 0;
+ }
+ 
+-static int __maybe_unused imx_thermal_resume(struct device *dev)
++static int __maybe_unused imx_thermal_runtime_resume(struct device *dev)
+ {
+ 	struct imx_thermal_data *data = dev_get_drvdata(dev);
++	const struct thermal_soc_data *socdata = data->socdata;
++	struct regmap *map = data->tempmon;
+ 	int ret;
+ 
+ 	ret = clk_prepare_enable(data->thermal_clk);
+ 	if (ret)
+ 		return ret;
+-	/* Enabled thermal sensor after resume */
+-	ret = thermal_zone_device_enable(data->tz);
++
++	ret = regmap_write(map, socdata->sensor_ctrl + REG_CLR,
++			   socdata->power_down_mask);
++	if (ret)
++		return ret;
++
++	ret = regmap_write(map, socdata->sensor_ctrl + REG_SET,
++			   socdata->measure_temp_mask);
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * According to the temp sensor designers, it may require up to ~17us
++	 * to complete a measurement.
++	 */
++	usleep_range(20, 50);
++
+ 	return 0;
+ }
+ 
+-static SIMPLE_DEV_PM_OPS(imx_thermal_pm_ops,
+-			 imx_thermal_suspend, imx_thermal_resume);
++static const struct dev_pm_ops imx_thermal_pm_ops = {
++	SET_SYSTEM_SLEEP_PM_OPS(imx_thermal_suspend, imx_thermal_resume)
++	SET_RUNTIME_PM_OPS(imx_thermal_runtime_suspend,
++			   imx_thermal_runtime_resume, NULL)
++};
+ 
+ static struct platform_driver imx_thermal = {
+ 	.driver = {
+diff --git a/drivers/thunderbolt/acpi.c b/drivers/thunderbolt/acpi.c
+index b5442f979b4d0..6355fdf7d71a3 100644
+--- a/drivers/thunderbolt/acpi.c
++++ b/drivers/thunderbolt/acpi.c
+@@ -7,6 +7,7 @@
+  */
+ 
+ #include <linux/acpi.h>
++#include <linux/pm_runtime.h>
+ 
+ #include "tb.h"
+ 
+@@ -74,8 +75,18 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
+ 		 pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM))) {
+ 		const struct device_link *link;
+ 
++		/*
++		 * Make them both active first to make sure the NHI does
++		 * not runtime suspend before the consumer. The
++		 * pm_runtime_put() below then allows the consumer to
++		 * runtime suspend again (which then allows NHI runtime
++		 * suspend too now that the device link is established).
++		 */
++		pm_runtime_get_sync(&pdev->dev);
++
+ 		link = device_link_add(&pdev->dev, &nhi->pdev->dev,
+ 				       DL_FLAG_AUTOREMOVE_SUPPLIER |
++				       DL_FLAG_RPM_ACTIVE |
+ 				       DL_FLAG_PM_RUNTIME);
+ 		if (link) {
+ 			dev_dbg(&nhi->pdev->dev, "created link from %s\n",
+@@ -84,6 +95,8 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
+ 			dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
+ 				 dev_name(&pdev->dev));
+ 		}
++
++		pm_runtime_put(&pdev->dev);
+ 	}
+ 
+ out_put:
+diff --git a/drivers/tty/serial/amba-pl010.c b/drivers/tty/serial/amba-pl010.c
+index 3284f34e9dfe1..75d61e038a775 100644
+--- a/drivers/tty/serial/amba-pl010.c
++++ b/drivers/tty/serial/amba-pl010.c
+@@ -448,14 +448,11 @@ pl010_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	if ((termios->c_cflag & CREAD) == 0)
+ 		uap->port.ignore_status_mask |= UART_DUMMY_RSR_RX;
+ 
+-	/* first, disable everything */
+ 	old_cr = readb(uap->port.membase + UART010_CR) & ~UART010_CR_MSIE;
+ 
+ 	if (UART_ENABLE_MS(port, termios->c_cflag))
+ 		old_cr |= UART010_CR_MSIE;
+ 
+-	writel(0, uap->port.membase + UART010_CR);
+-
+ 	/* Set baud rate */
+ 	quot -= 1;
+ 	writel((quot & 0xf00) >> 8, uap->port.membase + UART010_LCRM);
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index b3cddcdcbdad0..61183e7ff0097 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -2083,32 +2083,13 @@ static const char *pl011_type(struct uart_port *port)
+ 	return uap->port.type == PORT_AMBA ? uap->type : NULL;
+ }
+ 
+-/*
+- * Release the memory region(s) being used by 'port'
+- */
+-static void pl011_release_port(struct uart_port *port)
+-{
+-	release_mem_region(port->mapbase, SZ_4K);
+-}
+-
+-/*
+- * Request the memory region(s) being used by 'port'
+- */
+-static int pl011_request_port(struct uart_port *port)
+-{
+-	return request_mem_region(port->mapbase, SZ_4K, "uart-pl011")
+-			!= NULL ? 0 : -EBUSY;
+-}
+-
+ /*
+  * Configure/autoconfigure the port.
+  */
+ static void pl011_config_port(struct uart_port *port, int flags)
+ {
+-	if (flags & UART_CONFIG_TYPE) {
++	if (flags & UART_CONFIG_TYPE)
+ 		port->type = PORT_AMBA;
+-		pl011_request_port(port);
+-	}
+ }
+ 
+ /*
+@@ -2123,6 +2104,8 @@ static int pl011_verify_port(struct uart_port *port, struct serial_struct *ser)
+ 		ret = -EINVAL;
+ 	if (ser->baud_base < 9600)
+ 		ret = -EINVAL;
++	if (port->mapbase != (unsigned long) ser->iomem_base)
++		ret = -EINVAL;
+ 	return ret;
+ }
+ 
+@@ -2140,8 +2123,6 @@ static const struct uart_ops amba_pl011_pops = {
+ 	.flush_buffer	= pl011_dma_flush_buffer,
+ 	.set_termios	= pl011_set_termios,
+ 	.type		= pl011_type,
+-	.release_port	= pl011_release_port,
+-	.request_port	= pl011_request_port,
+ 	.config_port	= pl011_config_port,
+ 	.verify_port	= pl011_verify_port,
+ #ifdef CONFIG_CONSOLE_POLL
+@@ -2171,8 +2152,6 @@ static const struct uart_ops sbsa_uart_pops = {
+ 	.shutdown	= sbsa_uart_shutdown,
+ 	.set_termios	= sbsa_uart_set_termios,
+ 	.type		= pl011_type,
+-	.release_port	= pl011_release_port,
+-	.request_port	= pl011_request_port,
+ 	.config_port	= pl011_config_port,
+ 	.verify_port	= pl011_verify_port,
+ #ifdef CONFIG_CONSOLE_POLL
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index a24e5c2b30bc9..602065bfc9bb8 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -1004,6 +1004,13 @@ static void atmel_tx_dma(struct uart_port *port)
+ 		desc->callback = atmel_complete_tx_dma;
+ 		desc->callback_param = atmel_port;
+ 		atmel_port->cookie_tx = dmaengine_submit(desc);
++		if (dma_submit_error(atmel_port->cookie_tx)) {
++			dev_err(port->dev, "dma_submit_error %d\n",
++				atmel_port->cookie_tx);
++			return;
++		}
++
++		dma_async_issue_pending(chan);
+ 	}
+ 
+ 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+@@ -1264,6 +1271,13 @@ static int atmel_prepare_rx_dma(struct uart_port *port)
+ 	desc->callback_param = port;
+ 	atmel_port->desc_rx = desc;
+ 	atmel_port->cookie_rx = dmaengine_submit(desc);
++	if (dma_submit_error(atmel_port->cookie_rx)) {
++		dev_err(port->dev, "dma_submit_error %d\n",
++			atmel_port->cookie_rx);
++		goto chan_err;
++	}
++
++	dma_async_issue_pending(atmel_port->chan_rx);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 28cc328ddb6eb..93cd8ad57f385 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -508,18 +508,21 @@ static void imx_uart_stop_tx(struct uart_port *port)
+ static void imx_uart_stop_rx(struct uart_port *port)
+ {
+ 	struct imx_port *sport = (struct imx_port *)port;
+-	u32 ucr1, ucr2;
++	u32 ucr1, ucr2, ucr4;
+ 
+ 	ucr1 = imx_uart_readl(sport, UCR1);
+ 	ucr2 = imx_uart_readl(sport, UCR2);
++	ucr4 = imx_uart_readl(sport, UCR4);
+ 
+ 	if (sport->dma_is_enabled) {
+ 		ucr1 &= ~(UCR1_RXDMAEN | UCR1_ATDMAEN);
+ 	} else {
+ 		ucr1 &= ~UCR1_RRDYEN;
+ 		ucr2 &= ~UCR2_ATEN;
++		ucr4 &= ~UCR4_OREN;
+ 	}
+ 	imx_uart_writel(sport, ucr1, UCR1);
++	imx_uart_writel(sport, ucr4, UCR4);
+ 
+ 	ucr2 &= ~UCR2_RXEN;
+ 	imx_uart_writel(sport, ucr2, UCR2);
+@@ -1576,7 +1579,7 @@ static void imx_uart_shutdown(struct uart_port *port)
+ 	imx_uart_writel(sport, ucr1, UCR1);
+ 
+ 	ucr4 = imx_uart_readl(sport, UCR4);
+-	ucr4 &= ~(UCR4_OREN | UCR4_TCEN);
++	ucr4 &= ~UCR4_TCEN;
+ 	imx_uart_writel(sport, ucr4, UCR4);
+ 
+ 	spin_unlock_irqrestore(&sport->port.lock, flags);
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 046bedca7b8f5..be0d9922e320e 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -162,7 +162,7 @@ static void uart_port_dtr_rts(struct uart_port *uport, int raise)
+ 	int RTS_after_send = !!(uport->rs485.flags & SER_RS485_RTS_AFTER_SEND);
+ 
+ 	if (raise) {
+-		if (rs485_on && !RTS_after_send) {
++		if (rs485_on && RTS_after_send) {
+ 			uart_set_mctrl(uport, TIOCM_DTR);
+ 			uart_clear_mctrl(uport, TIOCM_RTS);
+ 		} else {
+@@ -171,7 +171,7 @@ static void uart_port_dtr_rts(struct uart_port *uport, int raise)
+ 	} else {
+ 		unsigned int clear = TIOCM_DTR;
+ 
+-		clear |= (!rs485_on || !RTS_after_send) ? TIOCM_RTS : 0;
++		clear |= (!rs485_on || RTS_after_send) ? TIOCM_RTS : 0;
+ 		uart_clear_mctrl(uport, clear);
+ 	}
+ }
+@@ -2414,7 +2414,8 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
+ 		 * We probably don't need a spinlock around this, but
+ 		 */
+ 		spin_lock_irqsave(&port->lock, flags);
+-		port->ops->set_mctrl(port, port->mctrl & TIOCM_DTR);
++		port->mctrl &= TIOCM_DTR;
++		port->ops->set_mctrl(port, port->mctrl);
+ 		spin_unlock_irqrestore(&port->lock, flags);
+ 
+ 		/*
+diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c
+index 7081ab322b402..48923cd8c07d1 100644
+--- a/drivers/tty/serial/uartlite.c
++++ b/drivers/tty/serial/uartlite.c
+@@ -615,7 +615,7 @@ static struct uart_driver ulite_uart_driver = {
+  *
+  * Returns: 0 on success, <0 otherwise
+  */
+-static int ulite_assign(struct device *dev, int id, u32 base, int irq,
++static int ulite_assign(struct device *dev, int id, phys_addr_t base, int irq,
+ 			struct uartlite_data *pdata)
+ {
+ 	struct uart_port *port;
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index af15dbe6bb141..18ee3914b4686 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -1109,7 +1109,10 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ 		} else {
+ 			hub_power_on(hub, true);
+ 		}
+-	}
++	/* Give some time on remote wakeup to let links to transit to U0 */
++	} else if (hub_is_superspeed(hub->hdev))
++		msleep(20);
++
+  init2:
+ 
+ 	/*
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 2a29e2f681fe6..504f8af4d0f80 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -764,9 +764,12 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 
+ 		if (qcom->acpi_pdata->is_urs) {
+ 			qcom->urs_usb = dwc3_qcom_create_urs_usb_platdev(dev);
+-			if (!qcom->urs_usb) {
++			if (IS_ERR_OR_NULL(qcom->urs_usb)) {
+ 				dev_err(dev, "failed to create URS USB platdev\n");
+-				return -ENODEV;
++				if (!qcom->urs_usb)
++					return -ENODEV;
++				else
++					return PTR_ERR(qcom->urs_usb);
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index cbb7947f366f9..d8652321e15e9 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -614,7 +614,7 @@ static int ffs_ep0_open(struct inode *inode, struct file *file)
+ 	file->private_data = ffs;
+ 	ffs_data_opened(ffs);
+ 
+-	return 0;
++	return stream_open(inode, file);
+ }
+ 
+ static int ffs_ep0_release(struct inode *inode, struct file *file)
+@@ -1152,7 +1152,7 @@ ffs_epfile_open(struct inode *inode, struct file *file)
+ 	file->private_data = epfile;
+ 	ffs_data_opened(epfile->ffs);
+ 
+-	return 0;
++	return stream_open(inode, file);
+ }
+ 
+ static int ffs_aio_cancel(struct kiocb *kiocb)
+diff --git a/drivers/usb/host/uhci-platform.c b/drivers/usb/host/uhci-platform.c
+index 70dbd95c3f063..be9e9db7cad10 100644
+--- a/drivers/usb/host/uhci-platform.c
++++ b/drivers/usb/host/uhci-platform.c
+@@ -113,7 +113,8 @@ static int uhci_hcd_platform_probe(struct platform_device *pdev)
+ 				num_ports);
+ 		}
+ 		if (of_device_is_compatible(np, "aspeed,ast2400-uhci") ||
+-		    of_device_is_compatible(np, "aspeed,ast2500-uhci")) {
++		    of_device_is_compatible(np, "aspeed,ast2500-uhci") ||
++		    of_device_is_compatible(np, "aspeed,ast2600-uhci")) {
+ 			uhci->is_aspeed = 1;
+ 			dev_info(&pdev->dev,
+ 				 "Enabled Aspeed implementation workarounds\n");
+diff --git a/drivers/usb/misc/ftdi-elan.c b/drivers/usb/misc/ftdi-elan.c
+index 8a3d9c0c8d8bc..157b31d354ac2 100644
+--- a/drivers/usb/misc/ftdi-elan.c
++++ b/drivers/usb/misc/ftdi-elan.c
+@@ -202,6 +202,7 @@ static void ftdi_elan_delete(struct kref *kref)
+ 	mutex_unlock(&ftdi_module_lock);
+ 	kfree(ftdi->bulk_in_buffer);
+ 	ftdi->bulk_in_buffer = NULL;
++	kfree(ftdi);
+ }
+ 
+ static void ftdi_elan_put_kref(struct usb_ftdi *ftdi)
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index fbdc9468818d3..65d6f8fd81e70 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -812,8 +812,6 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
+ 	MLX5_SET(virtio_q, vq_ctx, umem_3_id, mvq->umem3.id);
+ 	MLX5_SET(virtio_q, vq_ctx, umem_3_size, mvq->umem3.size);
+ 	MLX5_SET(virtio_q, vq_ctx, pd, ndev->mvdev.res.pdn);
+-	if (MLX5_CAP_DEV_VDPA_EMULATION(ndev->mvdev.mdev, eth_frame_offload_type))
+-		MLX5_SET(virtio_q, vq_ctx, virtio_version_1_0, 1);
+ 
+ 	err = mlx5_cmd_exec(ndev->mvdev.mdev, in, inlen, out, sizeof(out));
+ 	if (err)
+diff --git a/drivers/video/backlight/qcom-wled.c b/drivers/video/backlight/qcom-wled.c
+index cd11c57764381..486d35da01507 100644
+--- a/drivers/video/backlight/qcom-wled.c
++++ b/drivers/video/backlight/qcom-wled.c
+@@ -231,14 +231,14 @@ struct wled {
+ static int wled3_set_brightness(struct wled *wled, u16 brightness)
+ {
+ 	int rc, i;
+-	u8 v[2];
++	__le16 v;
+ 
+-	v[0] = brightness & 0xff;
+-	v[1] = (brightness >> 8) & 0xf;
++	v = cpu_to_le16(brightness & WLED3_SINK_REG_BRIGHT_MAX);
+ 
+ 	for (i = 0;  i < wled->cfg.num_strings; ++i) {
+ 		rc = regmap_bulk_write(wled->regmap, wled->ctrl_addr +
+-				       WLED3_SINK_REG_BRIGHT(i), v, 2);
++				       WLED3_SINK_REG_BRIGHT(wled->cfg.enabled_strings[i]),
++				       &v, sizeof(v));
+ 		if (rc < 0)
+ 			return rc;
+ 	}
+@@ -250,18 +250,18 @@ static int wled4_set_brightness(struct wled *wled, u16 brightness)
+ {
+ 	int rc, i;
+ 	u16 low_limit = wled->max_brightness * 4 / 1000;
+-	u8 v[2];
++	__le16 v;
+ 
+ 	/* WLED4's lower limit of operation is 0.4% */
+ 	if (brightness > 0 && brightness < low_limit)
+ 		brightness = low_limit;
+ 
+-	v[0] = brightness & 0xff;
+-	v[1] = (brightness >> 8) & 0xf;
++	v = cpu_to_le16(brightness & WLED3_SINK_REG_BRIGHT_MAX);
+ 
+ 	for (i = 0;  i < wled->cfg.num_strings; ++i) {
+ 		rc = regmap_bulk_write(wled->regmap, wled->sink_addr +
+-				       WLED4_SINK_REG_BRIGHT(i), v, 2);
++				       WLED4_SINK_REG_BRIGHT(wled->cfg.enabled_strings[i]),
++				       &v, sizeof(v));
+ 		if (rc < 0)
+ 			return rc;
+ 	}
+@@ -273,21 +273,20 @@ static int wled5_set_brightness(struct wled *wled, u16 brightness)
+ {
+ 	int rc, offset;
+ 	u16 low_limit = wled->max_brightness * 1 / 1000;
+-	u8 v[2];
++	__le16 v;
+ 
+ 	/* WLED5's lower limit is 0.1% */
+ 	if (brightness < low_limit)
+ 		brightness = low_limit;
+ 
+-	v[0] = brightness & 0xff;
+-	v[1] = (brightness >> 8) & 0x7f;
++	v = cpu_to_le16(brightness & WLED5_SINK_REG_BRIGHT_MAX_15B);
+ 
+ 	offset = (wled->cfg.mod_sel == MOD_A) ?
+ 		  WLED5_SINK_REG_MOD_A_BRIGHTNESS_LSB :
+ 		  WLED5_SINK_REG_MOD_B_BRIGHTNESS_LSB;
+ 
+ 	rc = regmap_bulk_write(wled->regmap, wled->sink_addr + offset,
+-			       v, 2);
++			       &v, sizeof(v));
+ 	return rc;
+ }
+ 
+@@ -572,7 +571,7 @@ unlock_mutex:
+ 
+ static void wled_auto_string_detection(struct wled *wled)
+ {
+-	int rc = 0, i, delay_time_us;
++	int rc = 0, i, j, delay_time_us;
+ 	u32 sink_config = 0;
+ 	u8 sink_test = 0, sink_valid = 0, val;
+ 	bool fault_set;
+@@ -619,14 +618,15 @@ static void wled_auto_string_detection(struct wled *wled)
+ 
+ 	/* Iterate through the strings one by one */
+ 	for (i = 0; i < wled->cfg.num_strings; i++) {
+-		sink_test = BIT((WLED4_SINK_REG_CURR_SINK_SHFT + i));
++		j = wled->cfg.enabled_strings[i];
++		sink_test = BIT((WLED4_SINK_REG_CURR_SINK_SHFT + j));
+ 
+ 		/* Enable feedback control */
+ 		rc = regmap_write(wled->regmap, wled->ctrl_addr +
+-				  WLED3_CTRL_REG_FEEDBACK_CONTROL, i + 1);
++				  WLED3_CTRL_REG_FEEDBACK_CONTROL, j + 1);
+ 		if (rc < 0) {
+ 			dev_err(wled->dev, "Failed to enable feedback for SINK %d rc = %d\n",
+-				i + 1, rc);
++				j + 1, rc);
+ 			goto failed_detect;
+ 		}
+ 
+@@ -635,7 +635,7 @@ static void wled_auto_string_detection(struct wled *wled)
+ 				  WLED4_SINK_REG_CURR_SINK, sink_test);
+ 		if (rc < 0) {
+ 			dev_err(wled->dev, "Failed to configure SINK %d rc=%d\n",
+-				i + 1, rc);
++				j + 1, rc);
+ 			goto failed_detect;
+ 		}
+ 
+@@ -662,7 +662,7 @@ static void wled_auto_string_detection(struct wled *wled)
+ 
+ 		if (fault_set)
+ 			dev_dbg(wled->dev, "WLED OVP fault detected with SINK %d\n",
+-				i + 1);
++				j + 1);
+ 		else
+ 			sink_valid |= sink_test;
+ 
+@@ -702,15 +702,16 @@ static void wled_auto_string_detection(struct wled *wled)
+ 	/* Enable valid sinks */
+ 	if (wled->version == 4) {
+ 		for (i = 0; i < wled->cfg.num_strings; i++) {
++			j = wled->cfg.enabled_strings[i];
+ 			if (sink_config &
+-			    BIT(WLED4_SINK_REG_CURR_SINK_SHFT + i))
++			    BIT(WLED4_SINK_REG_CURR_SINK_SHFT + j))
+ 				val = WLED4_SINK_REG_STR_MOD_MASK;
+ 			else
+ 				/* Disable modulator_en for unused sink */
+ 				val = 0;
+ 
+ 			rc = regmap_write(wled->regmap, wled->sink_addr +
+-					  WLED4_SINK_REG_STR_MOD_EN(i), val);
++					  WLED4_SINK_REG_STR_MOD_EN(j), val);
+ 			if (rc < 0) {
+ 				dev_err(wled->dev, "Failed to configure MODULATOR_EN rc=%d\n",
+ 					rc);
+@@ -1256,21 +1257,6 @@ static const struct wled_var_cfg wled5_ovp_cfg = {
+ 	.size = 16,
+ };
+ 
+-static u32 wled3_num_strings_values_fn(u32 idx)
+-{
+-	return idx + 1;
+-}
+-
+-static const struct wled_var_cfg wled3_num_strings_cfg = {
+-	.fn = wled3_num_strings_values_fn,
+-	.size = 3,
+-};
+-
+-static const struct wled_var_cfg wled4_num_strings_cfg = {
+-	.fn = wled3_num_strings_values_fn,
+-	.size = 4,
+-};
+-
+ static u32 wled3_switch_freq_values_fn(u32 idx)
+ {
+ 	return 19200 / (2 * (1 + idx));
+@@ -1344,11 +1330,6 @@ static int wled_configure(struct wled *wled)
+ 			.val_ptr = &cfg->switch_freq,
+ 			.cfg = &wled3_switch_freq_cfg,
+ 		},
+-		{
+-			.name = "qcom,num-strings",
+-			.val_ptr = &cfg->num_strings,
+-			.cfg = &wled3_num_strings_cfg,
+-		},
+ 	};
+ 
+ 	const struct wled_u32_opts wled4_opts[] = {
+@@ -1372,11 +1353,6 @@ static int wled_configure(struct wled *wled)
+ 			.val_ptr = &cfg->switch_freq,
+ 			.cfg = &wled3_switch_freq_cfg,
+ 		},
+-		{
+-			.name = "qcom,num-strings",
+-			.val_ptr = &cfg->num_strings,
+-			.cfg = &wled4_num_strings_cfg,
+-		},
+ 	};
+ 
+ 	const struct wled_u32_opts wled5_opts[] = {
+@@ -1400,11 +1376,6 @@ static int wled_configure(struct wled *wled)
+ 			.val_ptr = &cfg->switch_freq,
+ 			.cfg = &wled3_switch_freq_cfg,
+ 		},
+-		{
+-			.name = "qcom,num-strings",
+-			.val_ptr = &cfg->num_strings,
+-			.cfg = &wled4_num_strings_cfg,
+-		},
+ 		{
+ 			.name = "qcom,modulator-sel",
+ 			.val_ptr = &cfg->mod_sel,
+@@ -1523,16 +1494,57 @@ static int wled_configure(struct wled *wled)
+ 			*bool_opts[i].val_ptr = true;
+ 	}
+ 
+-	cfg->num_strings = cfg->num_strings + 1;
+-
+ 	string_len = of_property_count_elems_of_size(dev->of_node,
+ 						     "qcom,enabled-strings",
+ 						     sizeof(u32));
+-	if (string_len > 0)
+-		of_property_read_u32_array(dev->of_node,
++	if (string_len > 0) {
++		if (string_len > wled->max_string_count) {
++			dev_err(dev, "Cannot have more than %d strings\n",
++				wled->max_string_count);
++			return -EINVAL;
++		}
++
++		rc = of_property_read_u32_array(dev->of_node,
+ 						"qcom,enabled-strings",
+ 						wled->cfg.enabled_strings,
+-						sizeof(u32));
++						string_len);
++		if (rc) {
++			dev_err(dev, "Failed to read %d elements from qcom,enabled-strings: %d\n",
++				string_len, rc);
++			return rc;
++		}
++
++		for (i = 0; i < string_len; ++i) {
++			if (wled->cfg.enabled_strings[i] >= wled->max_string_count) {
++				dev_err(dev,
++					"qcom,enabled-strings index %d at %d is out of bounds\n",
++					wled->cfg.enabled_strings[i], i);
++				return -EINVAL;
++			}
++		}
++
++		cfg->num_strings = string_len;
++	}
++
++	rc = of_property_read_u32(dev->of_node, "qcom,num-strings", &val);
++	if (!rc) {
++		if (val < 1 || val > wled->max_string_count) {
++			dev_err(dev, "qcom,num-strings must be between 1 and %d\n",
++				wled->max_string_count);
++			return -EINVAL;
++		}
++
++		if (string_len > 0) {
++			dev_warn(dev, "Only one of qcom,num-strings or qcom,enabled-strings"
++				      " should be set\n");
++			if (val > string_len) {
++				dev_err(dev, "qcom,num-strings exceeds qcom,enabled-strings\n");
++				return -EINVAL;
++			}
++		}
++
++		cfg->num_strings = val;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index cce75d3b3ba05..3cc2a4ee7152c 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1124,8 +1124,10 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
+ 	if (virtqueue_use_indirect(_vq, total_sg)) {
+ 		err = virtqueue_add_indirect_packed(vq, sgs, total_sg, out_sgs,
+ 						    in_sgs, data, gfp);
+-		if (err != -ENOMEM)
++		if (err != -ENOMEM) {
++			END_USE(vq);
+ 			return err;
++		}
+ 
+ 		/* fall back on direct */
+ 	}
+diff --git a/drivers/w1/slaves/w1_ds28e04.c b/drivers/w1/slaves/w1_ds28e04.c
+index e4f336111edc6..6cef6e2edb892 100644
+--- a/drivers/w1/slaves/w1_ds28e04.c
++++ b/drivers/w1/slaves/w1_ds28e04.c
+@@ -32,7 +32,7 @@ static int w1_strong_pullup = 1;
+ module_param_named(strong_pullup, w1_strong_pullup, int, 0);
+ 
+ /* enable/disable CRC checking on DS28E04-100 memory accesses */
+-static char w1_enable_crccheck = 1;
++static bool w1_enable_crccheck = true;
+ 
+ #define W1_EEPROM_SIZE		512
+ #define W1_PAGE_COUNT		16
+@@ -339,32 +339,18 @@ static BIN_ATTR_RW(pio, 1);
+ static ssize_t crccheck_show(struct device *dev, struct device_attribute *attr,
+ 			     char *buf)
+ {
+-	if (put_user(w1_enable_crccheck + 0x30, buf))
+-		return -EFAULT;
+-
+-	return sizeof(w1_enable_crccheck);
++	return sysfs_emit(buf, "%d\n", w1_enable_crccheck);
+ }
+ 
+ static ssize_t crccheck_store(struct device *dev, struct device_attribute *attr,
+ 			      const char *buf, size_t count)
+ {
+-	char val;
+-
+-	if (count != 1 || !buf)
+-		return -EINVAL;
++	int err = kstrtobool(buf, &w1_enable_crccheck);
+ 
+-	if (get_user(val, buf))
+-		return -EFAULT;
++	if (err)
++		return err;
+ 
+-	/* convert to decimal */
+-	val = val - 0x30;
+-	if (val != 0 && val != 1)
+-		return -EINVAL;
+-
+-	/* set the new value */
+-	w1_enable_crccheck = val;
+-
+-	return sizeof(w1_enable_crccheck);
++	return count;
+ }
+ 
+ static DEVICE_ATTR_RW(crccheck);
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index b9651f797676c..54778aadf618d 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -240,13 +240,13 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
+ 	if (!refcount_dec_and_test(&map->users))
+ 		return;
+ 
++	if (map->pages && !use_ptemod)
++		unmap_grant_pages(map, 0, map->count);
++
+ 	if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
+ 		notify_remote_via_evtchn(map->notify.event);
+ 		evtchn_put(map->notify.event);
+ 	}
+-
+-	if (map->pages && !use_ptemod)
+-		unmap_grant_pages(map, 0, map->count);
+ 	gntdev_free_map(map);
+ }
+ 
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index 6e447bdaf9ec8..baff31a147e7d 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -1213,7 +1213,12 @@ again:
+ 	ret = btrfs_search_slot(trans, fs_info->extent_root, &key, path, 0, 0);
+ 	if (ret < 0)
+ 		goto out;
+-	BUG_ON(ret == 0);
++	if (ret == 0) {
++		/* This shouldn't happen, indicates a bug or fs corruption. */
++		ASSERT(ret != 0);
++		ret = -EUCLEAN;
++		goto out;
++	}
+ 
+ #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+ 	if (trans && likely(trans->type != __TRANS_DUMMY) &&
+@@ -1361,10 +1366,18 @@ again:
+ 				goto out;
+ 			if (!ret && extent_item_pos) {
+ 				/*
+-				 * we've recorded that parent, so we must extend
+-				 * its inode list here
++				 * We've recorded that parent, so we must extend
++				 * its inode list here.
++				 *
++				 * However if there was corruption we may not
++				 * have found an eie, return an error in this
++				 * case.
+ 				 */
+-				BUG_ON(!eie);
++				ASSERT(eie);
++				if (!eie) {
++					ret = -EUCLEAN;
++					goto out;
++				}
+ 				while (eie->next)
+ 					eie = eie->next;
+ 				eie->next = ref->inode_list;
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 519cf145f9bd1..5addd1e36a8ee 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -2589,12 +2589,9 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root,
+ {
+ 	struct btrfs_fs_info *fs_info = root->fs_info;
+ 	struct extent_buffer *b;
+-	int root_lock;
++	int root_lock = 0;
+ 	int level = 0;
+ 
+-	/* We try very hard to do read locks on the root */
+-	root_lock = BTRFS_READ_LOCK;
+-
+ 	if (p->search_commit_root) {
+ 		/*
+ 		 * The commit roots are read only so we always do read locks,
+@@ -2632,6 +2629,9 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root,
+ 		goto out;
+ 	}
+ 
++	/* We try very hard to do read locks on the root */
++	root_lock = BTRFS_READ_LOCK;
++
+ 	/*
+ 	 * If the level is set to maximum, we can skip trying to get the read
+ 	 * lock.
+@@ -2658,6 +2658,17 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root,
+ 	level = btrfs_header_level(b);
+ 
+ out:
++	/*
++	 * The root may have failed to write out at some point, and thus is no
++	 * longer valid, return an error in this case.
++	 */
++	if (!extent_buffer_uptodate(b)) {
++		if (root_lock)
++			btrfs_tree_unlock_rw(b, root_lock);
++		free_extent_buffer(b);
++		return ERR_PTR(-EIO);
++	}
++
+ 	p->nodes[level] = b;
+ 	if (!p->skip_locking)
+ 		p->locks[level] = root_lock;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index ff3f0638cdb90..1d9262a35473c 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -10094,9 +10094,19 @@ static int btrfs_add_swap_extent(struct swap_info_struct *sis,
+ 				 struct btrfs_swap_info *bsi)
+ {
+ 	unsigned long nr_pages;
++	unsigned long max_pages;
+ 	u64 first_ppage, first_ppage_reported, next_ppage;
+ 	int ret;
+ 
++	/*
++	 * Our swapfile may have had its size extended after the swap header was
++	 * written. In that case activating the swapfile should not go beyond
++	 * the max size set in the swap header.
++	 */
++	if (bsi->nr_pages >= sis->max)
++		return 0;
++
++	max_pages = sis->max - bsi->nr_pages;
+ 	first_ppage = ALIGN(bsi->block_start, PAGE_SIZE) >> PAGE_SHIFT;
+ 	next_ppage = ALIGN_DOWN(bsi->block_start + bsi->block_len,
+ 				PAGE_SIZE) >> PAGE_SHIFT;
+@@ -10104,6 +10114,7 @@ static int btrfs_add_swap_extent(struct swap_info_struct *sis,
+ 	if (first_ppage >= next_ppage)
+ 		return 0;
+ 	nr_pages = next_ppage - first_ppage;
++	nr_pages = min(nr_pages, max_pages);
+ 
+ 	first_ppage_reported = first_ppage;
+ 	if (bsi->start == 0)
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 4bac32a274ceb..f65aa4ed5ca1e 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -941,6 +941,14 @@ int btrfs_quota_enable(struct btrfs_fs_info *fs_info)
+ 	int ret = 0;
+ 	int slot;
+ 
++	/*
++	 * We need to have subvol_sem write locked, to prevent races between
++	 * concurrent tasks trying to enable quotas, because we will unlock
++	 * and relock qgroup_ioctl_lock before setting fs_info->quota_root
++	 * and before setting BTRFS_FS_QUOTA_ENABLED.
++	 */
++	lockdep_assert_held_write(&fs_info->subvol_sem);
++
+ 	mutex_lock(&fs_info->qgroup_ioctl_lock);
+ 	if (fs_info->quota_root)
+ 		goto out;
+@@ -1118,8 +1126,19 @@ out_add_root:
+ 		goto out_free_path;
+ 	}
+ 
++	mutex_unlock(&fs_info->qgroup_ioctl_lock);
++	/*
++	 * Commit the transaction while not holding qgroup_ioctl_lock, to avoid
++	 * a deadlock with tasks concurrently doing other qgroup operations, such
++	 * adding/removing qgroups or adding/deleting qgroup relations for example,
++	 * because all qgroup operations first start or join a transaction and then
++	 * lock the qgroup_ioctl_lock mutex.
++	 * We are safe from a concurrent task trying to enable quotas, by calling
++	 * this function, since we are serialized by fs_info->subvol_sem.
++	 */
+ 	ret = btrfs_commit_transaction(trans);
+ 	trans = NULL;
++	mutex_lock(&fs_info->qgroup_ioctl_lock);
+ 	if (ret)
+ 		goto out_free_path;
+ 
+diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
+index 3aa5eb9ce498e..96059af28f508 100644
+--- a/fs/debugfs/file.c
++++ b/fs/debugfs/file.c
+@@ -147,7 +147,7 @@ static int debugfs_locked_down(struct inode *inode,
+ 			       struct file *filp,
+ 			       const struct file_operations *real_fops)
+ {
+-	if ((inode->i_mode & 07777) == 0444 &&
++	if ((inode->i_mode & 07777 & ~0444) == 0 &&
+ 	    !(filp->f_mode & FMODE_WRITE) &&
+ 	    !real_fops->unlocked_ioctl &&
+ 	    !real_fops->compat_ioctl &&
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index 002123efc6b05..1e9d8999b9390 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -3975,6 +3975,14 @@ static int validate_message(struct dlm_lkb *lkb, struct dlm_message *ms)
+ 	int from = ms->m_header.h_nodeid;
+ 	int error = 0;
+ 
++	/* currently mixing of user/kernel locks are not supported */
++	if (ms->m_flags & DLM_IFL_USER && ~lkb->lkb_flags & DLM_IFL_USER) {
++		log_error(lkb->lkb_resource->res_ls,
++			  "got user dlm message for a kernel lock");
++		error = -EINVAL;
++		goto out;
++	}
++
+ 	switch (ms->m_type) {
+ 	case DLM_MSG_CONVERT:
+ 	case DLM_MSG_UNLOCK:
+@@ -4003,6 +4011,7 @@ static int validate_message(struct dlm_lkb *lkb, struct dlm_message *ms)
+ 		error = -EINVAL;
+ 	}
+ 
++out:
+ 	if (error)
+ 		log_error(lkb->lkb_resource->res_ls,
+ 			  "ignore invalid message %d from %d %x %x %x %d",
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index 0c78fdfb1f6fa..68b765369c928 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -471,8 +471,8 @@ int dlm_lowcomms_connect_node(int nodeid)
+ static void lowcomms_error_report(struct sock *sk)
+ {
+ 	struct connection *con;
+-	struct sockaddr_storage saddr;
+ 	void (*orig_report)(struct sock *) = NULL;
++	struct inet_sock *inet;
+ 
+ 	read_lock_bh(&sk->sk_callback_lock);
+ 	con = sock2con(sk);
+@@ -480,34 +480,33 @@ static void lowcomms_error_report(struct sock *sk)
+ 		goto out;
+ 
+ 	orig_report = listen_sock.sk_error_report;
+-	if (con->sock == NULL ||
+-	    kernel_getpeername(con->sock, (struct sockaddr *)&saddr) < 0) {
+-		printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
+-				   "sending to node %d, port %d, "
+-				   "sk_err=%d/%d\n", dlm_our_nodeid(),
+-				   con->nodeid, dlm_config.ci_tcp_port,
+-				   sk->sk_err, sk->sk_err_soft);
+-	} else if (saddr.ss_family == AF_INET) {
+-		struct sockaddr_in *sin4 = (struct sockaddr_in *)&saddr;
+ 
++	inet = inet_sk(sk);
++	switch (sk->sk_family) {
++	case AF_INET:
+ 		printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
+-				   "sending to node %d at %pI4, port %d, "
++				   "sending to node %d at %pI4, dport %d, "
+ 				   "sk_err=%d/%d\n", dlm_our_nodeid(),
+-				   con->nodeid, &sin4->sin_addr.s_addr,
+-				   dlm_config.ci_tcp_port, sk->sk_err,
++				   con->nodeid, &inet->inet_daddr,
++				   ntohs(inet->inet_dport), sk->sk_err,
+ 				   sk->sk_err_soft);
+-	} else {
+-		struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)&saddr;
+-
++		break;
++#if IS_ENABLED(CONFIG_IPV6)
++	case AF_INET6:
+ 		printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
+-				   "sending to node %d at %u.%u.%u.%u, "
+-				   "port %d, sk_err=%d/%d\n", dlm_our_nodeid(),
+-				   con->nodeid, sin6->sin6_addr.s6_addr32[0],
+-				   sin6->sin6_addr.s6_addr32[1],
+-				   sin6->sin6_addr.s6_addr32[2],
+-				   sin6->sin6_addr.s6_addr32[3],
+-				   dlm_config.ci_tcp_port, sk->sk_err,
++				   "sending to node %d at %pI6c, "
++				   "dport %d, sk_err=%d/%d\n", dlm_our_nodeid(),
++				   con->nodeid, &sk->sk_v6_daddr,
++				   ntohs(inet->inet_dport), sk->sk_err,
+ 				   sk->sk_err_soft);
++		break;
++#endif
++	default:
++		printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
++				   "invalid socket family %d set, "
++				   "sk_err=%d/%d\n", dlm_our_nodeid(),
++				   sk->sk_family, sk->sk_err, sk->sk_err_soft);
++		goto out;
+ 	}
+ out:
+ 	read_unlock_bh(&sk->sk_callback_lock);
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 115a77b96e5e1..99d98d1010217 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -2778,6 +2778,7 @@ bool ext4_fc_replay_check_excluded(struct super_block *sb, ext4_fsblk_t block);
+ void ext4_fc_replay_cleanup(struct super_block *sb);
+ int ext4_fc_commit(journal_t *journal, tid_t commit_tid);
+ int __init ext4_fc_init_dentry_cache(void);
++void ext4_fc_destroy_dentry_cache(void);
+ 
+ /* mballoc.c */
+ extern const struct seq_operations ext4_mb_seq_groups_ops;
+diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
+index 0fd0c42a4f7db..6ff7b4020df8a 100644
+--- a/fs/ext4/ext4_jbd2.c
++++ b/fs/ext4/ext4_jbd2.c
+@@ -162,6 +162,8 @@ int __ext4_journal_ensure_credits(handle_t *handle, int check_cred,
+ {
+ 	if (!ext4_handle_valid(handle))
+ 		return 0;
++	if (is_handle_aborted(handle))
++		return -EROFS;
+ 	if (jbd2_handle_buffer_credits(handle) >= check_cred &&
+ 	    handle->h_revoke_credits >= revoke_cred)
+ 		return 0;
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index b8c9df6ab67f5..b297b14de7509 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4638,8 +4638,6 @@ static long ext4_zero_range(struct file *file, loff_t offset,
+ 	ret = ext4_mark_inode_dirty(handle, inode);
+ 	if (unlikely(ret))
+ 		goto out_handle;
+-	ext4_fc_track_range(handle, inode, offset >> inode->i_sb->s_blocksize_bits,
+-			(offset + len - 1) >> inode->i_sb->s_blocksize_bits);
+ 	/* Zero out partial block at the edges of the range */
+ 	ret = ext4_zero_partial_blocks(handle, inode, offset, len);
+ 	if (ret >= 0)
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 08ca690f928bd..f483abcd5213a 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -1764,11 +1764,14 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
+ 		}
+ 	}
+ 
+-	ret = ext4_punch_hole(inode,
+-		le32_to_cpu(lrange.fc_lblk) << sb->s_blocksize_bits,
+-		le32_to_cpu(lrange.fc_len) <<  sb->s_blocksize_bits);
+-	if (ret)
+-		jbd_debug(1, "ext4_punch_hole returned %d", ret);
++	down_write(&EXT4_I(inode)->i_data_sem);
++	ret = ext4_ext_remove_space(inode, lrange.fc_lblk,
++				lrange.fc_lblk + lrange.fc_len - 1);
++	up_write(&EXT4_I(inode)->i_data_sem);
++	if (ret) {
++		iput(inode);
++		return 0;
++	}
+ 	ext4_ext_replay_shrink_inode(inode,
+ 		i_size_read(inode) >> sb->s_blocksize_bits);
+ 	ext4_mark_inode_dirty(NULL, inode);
+@@ -2166,3 +2169,8 @@ int __init ext4_fc_init_dentry_cache(void)
+ 
+ 	return 0;
+ }
++
++void ext4_fc_destroy_dentry_cache(void)
++{
++	kmem_cache_destroy(ext4_fc_dentry_cachep);
++}
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 317aa1b90fb95..d59474a541897 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -741,10 +741,11 @@ out_sem:
+ 			if (ret)
+ 				return ret;
+ 		}
+-		ext4_fc_track_range(handle, inode, map->m_lblk,
+-			    map->m_lblk + map->m_len - 1);
+ 	}
+-
++	if (retval > 0 && (map->m_flags & EXT4_MAP_UNWRITTEN ||
++				map->m_flags & EXT4_MAP_MAPPED))
++		ext4_fc_track_range(handle, inode, map->m_lblk,
++					map->m_lblk + map->m_len - 1);
+ 	if (retval < 0)
+ 		ext_debug(inode, "failed with err %d\n", retval);
+ 	return retval;
+@@ -4445,7 +4446,7 @@ has_buffer:
+ static int __ext4_get_inode_loc_noinmem(struct inode *inode,
+ 					struct ext4_iloc *iloc)
+ {
+-	ext4_fsblk_t err_blk;
++	ext4_fsblk_t err_blk = 0;
+ 	int ret;
+ 
+ 	ret = __ext4_get_inode_loc(inode->i_sb, inode->i_ino, iloc, 0,
+@@ -4460,7 +4461,7 @@ static int __ext4_get_inode_loc_noinmem(struct inode *inode,
+ 
+ int ext4_get_inode_loc(struct inode *inode, struct ext4_iloc *iloc)
+ {
+-	ext4_fsblk_t err_blk;
++	ext4_fsblk_t err_blk = 0;
+ 	int ret;
+ 
+ 	/* We have all inode data except xattrs in memory here. */
+@@ -5467,8 +5468,7 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ 				ext4_fc_track_range(handle, inode,
+ 					(attr->ia_size > 0 ? attr->ia_size - 1 : 0) >>
+ 					inode->i_sb->s_blocksize_bits,
+-					(oldsize > 0 ? oldsize - 1 : 0) >>
+-					inode->i_sb->s_blocksize_bits);
++					EXT_MAX_BLOCKS - 1);
+ 			else
+ 				ext4_fc_track_range(
+ 					handle, inode,
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index cb54ea6461fd8..413bf3d2f7844 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -1123,8 +1123,6 @@ resizefs_out:
+ 		    sizeof(range)))
+ 			return -EFAULT;
+ 
+-		range.minlen = max((unsigned int)range.minlen,
+-				   q->limits.discard_granularity);
+ 		ret = ext4_trim_fs(sb, &range);
+ 		if (ret < 0)
+ 			return ret;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index d7cb7d719ee58..e40f87d07783a 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -4234,7 +4234,7 @@ ext4_mb_release_group_pa(struct ext4_buddy *e4b,
+  */
+ static noinline_for_stack int
+ ext4_mb_discard_group_preallocations(struct super_block *sb,
+-					ext4_group_t group, int needed)
++				     ext4_group_t group, int *busy)
+ {
+ 	struct ext4_group_info *grp = ext4_get_group_info(sb, group);
+ 	struct buffer_head *bitmap_bh = NULL;
+@@ -4242,8 +4242,7 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
+ 	struct list_head list;
+ 	struct ext4_buddy e4b;
+ 	int err;
+-	int busy = 0;
+-	int free, free_total = 0;
++	int free = 0;
+ 
+ 	mb_debug(sb, "discard preallocation for group %u\n", group);
+ 	if (list_empty(&grp->bb_prealloc_list))
+@@ -4266,19 +4265,14 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
+ 		goto out_dbg;
+ 	}
+ 
+-	if (needed == 0)
+-		needed = EXT4_CLUSTERS_PER_GROUP(sb) + 1;
+-
+ 	INIT_LIST_HEAD(&list);
+-repeat:
+-	free = 0;
+ 	ext4_lock_group(sb, group);
+ 	list_for_each_entry_safe(pa, tmp,
+ 				&grp->bb_prealloc_list, pa_group_list) {
+ 		spin_lock(&pa->pa_lock);
+ 		if (atomic_read(&pa->pa_count)) {
+ 			spin_unlock(&pa->pa_lock);
+-			busy = 1;
++			*busy = 1;
+ 			continue;
+ 		}
+ 		if (pa->pa_deleted) {
+@@ -4318,22 +4312,13 @@ repeat:
+ 		call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
+ 	}
+ 
+-	free_total += free;
+-
+-	/* if we still need more blocks and some PAs were used, try again */
+-	if (free_total < needed && busy) {
+-		ext4_unlock_group(sb, group);
+-		cond_resched();
+-		busy = 0;
+-		goto repeat;
+-	}
+ 	ext4_unlock_group(sb, group);
+ 	ext4_mb_unload_buddy(&e4b);
+ 	put_bh(bitmap_bh);
+ out_dbg:
+ 	mb_debug(sb, "discarded (%d) blocks preallocated for group %u bb_free (%d)\n",
+-		 free_total, group, grp->bb_free);
+-	return free_total;
++		 free, group, grp->bb_free);
++	return free;
+ }
+ 
+ /*
+@@ -4875,13 +4860,24 @@ static int ext4_mb_discard_preallocations(struct super_block *sb, int needed)
+ {
+ 	ext4_group_t i, ngroups = ext4_get_groups_count(sb);
+ 	int ret;
+-	int freed = 0;
++	int freed = 0, busy = 0;
++	int retry = 0;
+ 
+ 	trace_ext4_mb_discard_preallocations(sb, needed);
++
++	if (needed == 0)
++		needed = EXT4_CLUSTERS_PER_GROUP(sb) + 1;
++ repeat:
+ 	for (i = 0; i < ngroups && needed > 0; i++) {
+-		ret = ext4_mb_discard_group_preallocations(sb, i, needed);
++		ret = ext4_mb_discard_group_preallocations(sb, i, &busy);
+ 		freed += ret;
+ 		needed -= ret;
++		cond_resched();
++	}
++
++	if (needed > 0 && busy && ++retry < 3) {
++		busy = 0;
++		goto repeat;
+ 	}
+ 
+ 	return freed;
+@@ -5815,6 +5811,7 @@ out:
+  */
+ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ {
++	struct request_queue *q = bdev_get_queue(sb->s_bdev);
+ 	struct ext4_group_info *grp;
+ 	ext4_group_t group, first_group, last_group;
+ 	ext4_grpblk_t cnt = 0, first_cluster, last_cluster;
+@@ -5833,6 +5830,13 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 	    start >= max_blks ||
+ 	    range->len < sb->s_blocksize)
+ 		return -EINVAL;
++	/* No point to try to trim less than discard granularity */
++	if (range->minlen < q->limits.discard_granularity) {
++		minlen = EXT4_NUM_B2C(EXT4_SB(sb),
++			q->limits.discard_granularity >> sb->s_blocksize_bits);
++		if (minlen > EXT4_CLUSTERS_PER_GROUP(sb))
++			goto out;
++	}
+ 	if (end >= max_blks)
+ 		end = max_blks - 1;
+ 	if (end <= first_data_blk)
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index c5e3fc998211a..49912814f3d8d 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -437,12 +437,12 @@ int ext4_ext_migrate(struct inode *inode)
+ 	percpu_down_write(&sbi->s_writepages_rwsem);
+ 
+ 	/*
+-	 * Worst case we can touch the allocation bitmaps, a bgd
+-	 * block, and a block to link in the orphan list.  We do need
+-	 * need to worry about credits for modifying the quota inode.
++	 * Worst case we can touch the allocation bitmaps and a block
++	 * group descriptor block.  We do need need to worry about
++	 * credits for modifying the quota inode.
+ 	 */
+ 	handle = ext4_journal_start(inode, EXT4_HT_MIGRATE,
+-		4 + EXT4_MAXQUOTAS_TRANS_BLOCKS(inode->i_sb));
++		3 + EXT4_MAXQUOTAS_TRANS_BLOCKS(inode->i_sb));
+ 
+ 	if (IS_ERR(handle)) {
+ 		retval = PTR_ERR(handle);
+@@ -459,6 +459,13 @@ int ext4_ext_migrate(struct inode *inode)
+ 		ext4_journal_stop(handle);
+ 		goto out_unlock;
+ 	}
++	/*
++	 * Use the correct seed for checksum (i.e. the seed from 'inode').  This
++	 * is so that the metadata blocks will have the correct checksum after
++	 * the migration.
++	 */
++	ei = EXT4_I(inode);
++	EXT4_I(tmp_inode)->i_csum_seed = ei->i_csum_seed;
+ 	i_size_write(tmp_inode, i_size_read(inode));
+ 	/*
+ 	 * Set the i_nlink to zero so it will be deleted later
+@@ -467,7 +474,6 @@ int ext4_ext_migrate(struct inode *inode)
+ 	clear_nlink(tmp_inode);
+ 
+ 	ext4_ext_tree_init(handle, tmp_inode);
+-	ext4_orphan_add(handle, tmp_inode);
+ 	ext4_journal_stop(handle);
+ 
+ 	/*
+@@ -492,17 +498,10 @@ int ext4_ext_migrate(struct inode *inode)
+ 
+ 	handle = ext4_journal_start(inode, EXT4_HT_MIGRATE, 1);
+ 	if (IS_ERR(handle)) {
+-		/*
+-		 * It is impossible to update on-disk structures without
+-		 * a handle, so just rollback in-core changes and live other
+-		 * work to orphan_list_cleanup()
+-		 */
+-		ext4_orphan_del(NULL, tmp_inode);
+ 		retval = PTR_ERR(handle);
+ 		goto out_tmp_inode;
+ 	}
+ 
+-	ei = EXT4_I(inode);
+ 	i_data = ei->i_data;
+ 	memset(&lb, 0, sizeof(lb));
+ 
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index b1af6588bad01..9e210bc85c817 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -6341,10 +6341,7 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
+ 
+ 	lockdep_set_quota_inode(path->dentry->d_inode, I_DATA_SEM_QUOTA);
+ 	err = dquot_quota_on(sb, type, format_id, path);
+-	if (err) {
+-		lockdep_set_quota_inode(path->dentry->d_inode,
+-					     I_DATA_SEM_NORMAL);
+-	} else {
++	if (!err) {
+ 		struct inode *inode = d_inode(path->dentry);
+ 		handle_t *handle;
+ 
+@@ -6364,7 +6361,12 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
+ 		ext4_journal_stop(handle);
+ 	unlock_inode:
+ 		inode_unlock(inode);
++		if (err)
++			dquot_quota_off(sb, type);
+ 	}
++	if (err)
++		lockdep_set_quota_inode(path->dentry->d_inode,
++					     I_DATA_SEM_NORMAL);
+ 	return err;
+ }
+ 
+@@ -6427,8 +6429,19 @@ static int ext4_enable_quotas(struct super_block *sb)
+ 					"Failed to enable quota tracking "
+ 					"(type=%d, err=%d). Please run "
+ 					"e2fsck to fix.", type, err);
+-				for (type--; type >= 0; type--)
++				for (type--; type >= 0; type--) {
++					struct inode *inode;
++
++					inode = sb_dqopt(sb)->files[type];
++					if (inode)
++						inode = igrab(inode);
+ 					dquot_quota_off(sb, type);
++					if (inode) {
++						lockdep_set_quota_inode(inode,
++							I_DATA_SEM_NORMAL);
++						iput(inode);
++					}
++				}
+ 
+ 				return err;
+ 			}
+@@ -6532,7 +6545,7 @@ static ssize_t ext4_quota_write(struct super_block *sb, int type,
+ 	struct buffer_head *bh;
+ 	handle_t *handle = journal_current_handle();
+ 
+-	if (EXT4_SB(sb)->s_journal && !handle) {
++	if (!handle) {
+ 		ext4_msg(sb, KERN_WARNING, "Quota write (off=%llu, len=%llu)"
+ 			" cancelled because transaction is not started",
+ 			(unsigned long long)off, (unsigned long long)len);
+@@ -6716,6 +6729,7 @@ static int __init ext4_init_fs(void)
+ out:
+ 	unregister_as_ext2();
+ 	unregister_as_ext3();
++	ext4_fc_destroy_dentry_cache();
+ out05:
+ 	destroy_inodecache();
+ out1:
+@@ -6742,6 +6756,7 @@ static void __exit ext4_exit_fs(void)
+ 	unregister_as_ext2();
+ 	unregister_as_ext3();
+ 	unregister_filesystem(&ext4_fs_type);
++	ext4_fc_destroy_dentry_cache();
+ 	destroy_inodecache();
+ 	ext4_exit_mballoc();
+ 	ext4_exit_sysfs();
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 30987ea011f1a..ec542e8c46cc9 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1362,25 +1362,38 @@ static int f2fs_write_raw_pages(struct compress_ctx *cc,
+ 					enum iostat_type io_type)
+ {
+ 	struct address_space *mapping = cc->inode->i_mapping;
+-	int _submitted, compr_blocks, ret;
+-	int i = -1, err = 0;
++	int _submitted, compr_blocks, ret, i;
+ 
+ 	compr_blocks = f2fs_compressed_blocks(cc);
+-	if (compr_blocks < 0) {
+-		err = compr_blocks;
+-		goto out_err;
++
++	for (i = 0; i < cc->cluster_size; i++) {
++		if (!cc->rpages[i])
++			continue;
++
++		redirty_page_for_writepage(wbc, cc->rpages[i]);
++		unlock_page(cc->rpages[i]);
+ 	}
+ 
++	if (compr_blocks < 0)
++		return compr_blocks;
++
+ 	for (i = 0; i < cc->cluster_size; i++) {
+ 		if (!cc->rpages[i])
+ 			continue;
+ retry_write:
++		lock_page(cc->rpages[i]);
++
+ 		if (cc->rpages[i]->mapping != mapping) {
++continue_unlock:
+ 			unlock_page(cc->rpages[i]);
+ 			continue;
+ 		}
+ 
+-		BUG_ON(!PageLocked(cc->rpages[i]));
++		if (!PageDirty(cc->rpages[i]))
++			goto continue_unlock;
++
++		if (!clear_page_dirty_for_io(cc->rpages[i]))
++			goto continue_unlock;
+ 
+ 		ret = f2fs_write_single_data_page(cc->rpages[i], &_submitted,
+ 						NULL, NULL, wbc, io_type,
+@@ -1395,26 +1408,15 @@ retry_write:
+ 				 * avoid deadlock caused by cluster update race
+ 				 * from foreground operation.
+ 				 */
+-				if (IS_NOQUOTA(cc->inode)) {
+-					err = 0;
+-					goto out_err;
+-				}
++				if (IS_NOQUOTA(cc->inode))
++					return 0;
+ 				ret = 0;
+ 				cond_resched();
+ 				congestion_wait(BLK_RW_ASYNC,
+ 						DEFAULT_IO_TIMEOUT);
+-				lock_page(cc->rpages[i]);
+-
+-				if (!PageDirty(cc->rpages[i])) {
+-					unlock_page(cc->rpages[i]);
+-					continue;
+-				}
+-
+-				clear_page_dirty_for_io(cc->rpages[i]);
+ 				goto retry_write;
+ 			}
+-			err = ret;
+-			goto out_err;
++			return ret;
+ 		}
+ 
+ 		*submitted += _submitted;
+@@ -1423,14 +1425,6 @@ retry_write:
+ 	f2fs_balance_fs(F2FS_M_SB(mapping), true);
+ 
+ 	return 0;
+-out_err:
+-	for (++i; i < cc->cluster_size; i++) {
+-		if (!cc->rpages[i])
+-			continue;
+-		redirty_page_for_writepage(wbc, cc->rpages[i]);
+-		unlock_page(cc->rpages[i]);
+-	}
+-	return err;
+ }
+ 
+ int f2fs_write_multi_pages(struct compress_ctx *cc,
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index bc488a7d01903..6c4bf22a3e83e 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -955,6 +955,7 @@ struct f2fs_sm_info {
+ 	unsigned int segment_count;	/* total # of segments */
+ 	unsigned int main_segments;	/* # of segments in main area */
+ 	unsigned int reserved_segments;	/* # of reserved segments */
++	unsigned int additional_reserved_segments;/* reserved segs for IO align feature */
+ 	unsigned int ovp_segments;	/* # of overprovision segments */
+ 
+ 	/* a threshold to reclaim prefree segments */
+@@ -1984,6 +1985,11 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
+ 
+ 	if (!__allow_reserved_blocks(sbi, inode, true))
+ 		avail_user_block_count -= F2FS_OPTION(sbi).root_reserved_blocks;
++
++	if (F2FS_IO_ALIGNED(sbi))
++		avail_user_block_count -= sbi->blocks_per_seg *
++				SM_I(sbi)->additional_reserved_segments;
++
+ 	if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
+ 		if (avail_user_block_count > sbi->unusable_block_count)
+ 			avail_user_block_count -= sbi->unusable_block_count;
+@@ -2229,6 +2235,11 @@ static inline int inc_valid_node_count(struct f2fs_sb_info *sbi,
+ 
+ 	if (!__allow_reserved_blocks(sbi, inode, false))
+ 		valid_block_count += F2FS_OPTION(sbi).root_reserved_blocks;
++
++	if (F2FS_IO_ALIGNED(sbi))
++		valid_block_count += sbi->blocks_per_seg *
++				SM_I(sbi)->additional_reserved_segments;
++
+ 	user_block_count = sbi->user_block_count;
+ 	if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
+ 		user_block_count -= sbi->unusable_block_count;
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 72f227f6ebad0..6b240b71d2e83 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -998,6 +998,9 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 	}
+ 
++	if (f2fs_check_nid_range(sbi, dni->ino))
++		return false;
++
+ 	*nofs = ofs_of_node(node_page);
+ 	source_blkaddr = data_blkaddr(NULL, node_page, ofs_in_node);
+ 	f2fs_put_page(node_page, 1);
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 1bf33fc27b8f8..beef833a69604 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -539,7 +539,8 @@ static inline unsigned int free_segments(struct f2fs_sb_info *sbi)
+ 
+ static inline unsigned int reserved_segments(struct f2fs_sb_info *sbi)
+ {
+-	return SM_I(sbi)->reserved_segments;
++	return SM_I(sbi)->reserved_segments +
++			SM_I(sbi)->additional_reserved_segments;
+ }
+ 
+ static inline unsigned int free_sections(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index b7287b722e9e1..af98abb17c272 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -289,6 +289,46 @@ static inline void limit_reserve_root(struct f2fs_sb_info *sbi)
+ 					   F2FS_OPTION(sbi).s_resgid));
+ }
+ 
++static inline int adjust_reserved_segment(struct f2fs_sb_info *sbi)
++{
++	unsigned int sec_blks = sbi->blocks_per_seg * sbi->segs_per_sec;
++	unsigned int avg_vblocks;
++	unsigned int wanted_reserved_segments;
++	block_t avail_user_block_count;
++
++	if (!F2FS_IO_ALIGNED(sbi))
++		return 0;
++
++	/* average valid block count in section in worst case */
++	avg_vblocks = sec_blks / F2FS_IO_SIZE(sbi);
++
++	/*
++	 * we need enough free space when migrating one section in worst case
++	 */
++	wanted_reserved_segments = (F2FS_IO_SIZE(sbi) / avg_vblocks) *
++						reserved_segments(sbi);
++	wanted_reserved_segments -= reserved_segments(sbi);
++
++	avail_user_block_count = sbi->user_block_count -
++				sbi->current_reserved_blocks -
++				F2FS_OPTION(sbi).root_reserved_blocks;
++
++	if (wanted_reserved_segments * sbi->blocks_per_seg >
++					avail_user_block_count) {
++		f2fs_err(sbi, "IO align feature can't grab additional reserved segment: %u, available segments: %u",
++			wanted_reserved_segments,
++			avail_user_block_count >> sbi->log_blocks_per_seg);
++		return -ENOSPC;
++	}
++
++	SM_I(sbi)->additional_reserved_segments = wanted_reserved_segments;
++
++	f2fs_info(sbi, "IO align feature needs additional reserved segment: %u",
++			 wanted_reserved_segments);
++
++	return 0;
++}
++
+ static inline void adjust_unusable_cap_perc(struct f2fs_sb_info *sbi)
+ {
+ 	if (!F2FS_OPTION(sbi).unusable_cap_perc)
+@@ -3736,6 +3776,10 @@ try_onemore:
+ 		goto free_nm;
+ 	}
+ 
++	err = adjust_reserved_segment(sbi);
++	if (err)
++		goto free_nm;
++
+ 	/* For write statistics */
+ 	if (sb->s_bdev->bd_part)
+ 		sbi->sectors_written_start =
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index b8850c81068a0..7ffd4bb398b0c 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -330,7 +330,9 @@ out:
+ 	if (a->struct_type == RESERVED_BLOCKS) {
+ 		spin_lock(&sbi->stat_lock);
+ 		if (t > (unsigned long)(sbi->user_block_count -
+-				F2FS_OPTION(sbi).root_reserved_blocks)) {
++				F2FS_OPTION(sbi).root_reserved_blocks -
++				sbi->blocks_per_seg *
++				SM_I(sbi)->additional_reserved_segments)) {
+ 			spin_unlock(&sbi->stat_lock);
+ 			return -EINVAL;
+ 		}
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 4dd70b53df81a..e81d1c3eb7e11 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -3251,7 +3251,7 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 
+ static int fuse_writeback_range(struct inode *inode, loff_t start, loff_t end)
+ {
+-	int err = filemap_write_and_wait_range(inode->i_mapping, start, -1);
++	int err = filemap_write_and_wait_range(inode->i_mapping, start, LLONG_MAX);
+ 
+ 	if (!err)
+ 		fuse_sync_writes(inode);
+diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
+index 4fc8cd698d1a4..bd7d58d27bfc6 100644
+--- a/fs/jffs2/file.c
++++ b/fs/jffs2/file.c
+@@ -136,20 +136,15 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 	struct page *pg;
+ 	struct inode *inode = mapping->host;
+ 	struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
++	struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
+ 	pgoff_t index = pos >> PAGE_SHIFT;
+ 	uint32_t pageofs = index << PAGE_SHIFT;
+ 	int ret = 0;
+ 
+-	pg = grab_cache_page_write_begin(mapping, index, flags);
+-	if (!pg)
+-		return -ENOMEM;
+-	*pagep = pg;
+-
+ 	jffs2_dbg(1, "%s()\n", __func__);
+ 
+ 	if (pageofs > inode->i_size) {
+ 		/* Make new hole frag from old EOF to new page */
+-		struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
+ 		struct jffs2_raw_inode ri;
+ 		struct jffs2_full_dnode *fn;
+ 		uint32_t alloc_len;
+@@ -160,7 +155,7 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 		ret = jffs2_reserve_space(c, sizeof(ri), &alloc_len,
+ 					  ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
+ 		if (ret)
+-			goto out_page;
++			goto out_err;
+ 
+ 		mutex_lock(&f->sem);
+ 		memset(&ri, 0, sizeof(ri));
+@@ -190,7 +185,7 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 			ret = PTR_ERR(fn);
+ 			jffs2_complete_reservation(c);
+ 			mutex_unlock(&f->sem);
+-			goto out_page;
++			goto out_err;
+ 		}
+ 		ret = jffs2_add_full_dnode_to_inode(c, f, fn);
+ 		if (f->metadata) {
+@@ -205,13 +200,26 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 			jffs2_free_full_dnode(fn);
+ 			jffs2_complete_reservation(c);
+ 			mutex_unlock(&f->sem);
+-			goto out_page;
++			goto out_err;
+ 		}
+ 		jffs2_complete_reservation(c);
+ 		inode->i_size = pageofs;
+ 		mutex_unlock(&f->sem);
+ 	}
+ 
++	/*
++	 * While getting a page and reading data in, lock c->alloc_sem until
++	 * the page is Uptodate. Otherwise GC task may attempt to read the same
++	 * page in read_cache_page(), which causes a deadlock.
++	 */
++	mutex_lock(&c->alloc_sem);
++	pg = grab_cache_page_write_begin(mapping, index, flags);
++	if (!pg) {
++		ret = -ENOMEM;
++		goto release_sem;
++	}
++	*pagep = pg;
++
+ 	/*
+ 	 * Read in the page if it wasn't already present. Cannot optimize away
+ 	 * the whole page write case until jffs2_write_end can handle the
+@@ -221,15 +229,17 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 		mutex_lock(&f->sem);
+ 		ret = jffs2_do_readpage_nolock(inode, pg);
+ 		mutex_unlock(&f->sem);
+-		if (ret)
+-			goto out_page;
++		if (ret) {
++			unlock_page(pg);
++			put_page(pg);
++			goto release_sem;
++		}
+ 	}
+ 	jffs2_dbg(1, "end write_begin(). pg->flags %lx\n", pg->flags);
+-	return ret;
+ 
+-out_page:
+-	unlock_page(pg);
+-	put_page(pg);
++release_sem:
++	mutex_unlock(&c->alloc_sem);
++out_err:
+ 	return ret;
+ }
+ 
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index cfd46753a6856..6a8f9efc2e2f0 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -1853,7 +1853,6 @@ out:
+ 		kthread_stop(c->bgt);
+ 		c->bgt = NULL;
+ 	}
+-	free_wbufs(c);
+ 	kfree(c->write_reserve_buf);
+ 	c->write_reserve_buf = NULL;
+ 	vfree(c->ileb_buf);
+diff --git a/fs/udf/ialloc.c b/fs/udf/ialloc.c
+index 84ed23edebfd3..87a77bf70ee19 100644
+--- a/fs/udf/ialloc.c
++++ b/fs/udf/ialloc.c
+@@ -77,6 +77,7 @@ struct inode *udf_new_inode(struct inode *dir, umode_t mode)
+ 					GFP_KERNEL);
+ 	}
+ 	if (!iinfo->i_data) {
++		make_bad_inode(inode);
+ 		iput(inode);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+@@ -86,6 +87,7 @@ struct inode *udf_new_inode(struct inode *dir, umode_t mode)
+ 			      dinfo->i_location.partitionReferenceNum,
+ 			      start, &err);
+ 	if (err) {
++		make_bad_inode(inode);
+ 		iput(inode);
+ 		return ERR_PTR(err);
+ 	}
+diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
+index 6ad3b89a8a2e0..0f5366792d22e 100644
+--- a/include/acpi/acpi_bus.h
++++ b/include/acpi/acpi_bus.h
+@@ -605,9 +605,10 @@ int acpi_enable_wakeup_device_power(struct acpi_device *dev, int state);
+ int acpi_disable_wakeup_device_power(struct acpi_device *dev);
+ 
+ #ifdef CONFIG_X86
+-bool acpi_device_always_present(struct acpi_device *adev);
++bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *status);
+ #else
+-static inline bool acpi_device_always_present(struct acpi_device *adev)
++static inline bool acpi_device_override_status(struct acpi_device *adev,
++					       unsigned long long *status)
+ {
+ 	return false;
+ }
+diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
+index 647cb11d0a0a3..7334037624c5c 100644
+--- a/include/acpi/actypes.h
++++ b/include/acpi/actypes.h
+@@ -536,8 +536,14 @@ typedef u64 acpi_integer;
+  * Can be used with access_width of struct acpi_generic_address and access_size of
+  * struct acpi_resource_generic_register.
+  */
+-#define ACPI_ACCESS_BIT_WIDTH(size)     (1 << ((size) + 2))
+-#define ACPI_ACCESS_BYTE_WIDTH(size)    (1 << ((size) - 1))
++#define ACPI_ACCESS_BIT_SHIFT		2
++#define ACPI_ACCESS_BYTE_SHIFT		-1
++#define ACPI_ACCESS_BIT_MAX		(31 - ACPI_ACCESS_BIT_SHIFT)
++#define ACPI_ACCESS_BYTE_MAX		(31 - ACPI_ACCESS_BYTE_SHIFT)
++#define ACPI_ACCESS_BIT_DEFAULT		(8 - ACPI_ACCESS_BIT_SHIFT)
++#define ACPI_ACCESS_BYTE_DEFAULT	(8 - ACPI_ACCESS_BYTE_SHIFT)
++#define ACPI_ACCESS_BIT_WIDTH(size)	(1 << ((size) + ACPI_ACCESS_BIT_SHIFT))
++#define ACPI_ACCESS_BYTE_WIDTH(size)	(1 << ((size) + ACPI_ACCESS_BYTE_SHIFT))
+ 
+ /*******************************************************************************
+  *
+diff --git a/include/linux/blk-pm.h b/include/linux/blk-pm.h
+index b80c65aba2493..2580e05a8ab67 100644
+--- a/include/linux/blk-pm.h
++++ b/include/linux/blk-pm.h
+@@ -14,7 +14,7 @@ extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev);
+ extern int blk_pre_runtime_suspend(struct request_queue *q);
+ extern void blk_post_runtime_suspend(struct request_queue *q, int err);
+ extern void blk_pre_runtime_resume(struct request_queue *q);
+-extern void blk_post_runtime_resume(struct request_queue *q, int err);
++extern void blk_post_runtime_resume(struct request_queue *q);
+ extern void blk_set_runtime_active(struct request_queue *q);
+ #else
+ static inline void blk_pm_runtime_init(struct request_queue *q,
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 6e330ff2f28df..391bc1480dfb1 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -367,6 +367,13 @@ static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log)
+ 		 log->level == BPF_LOG_KERNEL);
+ }
+ 
++static inline bool
++bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
++{
++	return log->len_total >= 128 && log->len_total <= UINT_MAX >> 2 &&
++	       log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK);
++}
++
+ #define BPF_MAX_SUBPROGS 256
+ 
+ struct bpf_subprog_info {
+diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
+index 83a3ebff74560..8f87c1a6f3231 100644
+--- a/include/linux/clocksource.h
++++ b/include/linux/clocksource.h
+@@ -42,6 +42,8 @@ struct module;
+  * @shift:		Cycle to nanosecond divisor (power of two)
+  * @max_idle_ns:	Maximum idle time permitted by the clocksource (nsecs)
+  * @maxadj:		Maximum adjustment value to mult (~11%)
++ * @uncertainty_margin:	Maximum uncertainty in nanoseconds per half second.
++ *			Zero says to use default WATCHDOG_THRESHOLD.
+  * @archdata:		Optional arch-specific data
+  * @max_cycles:		Maximum safe cycle value which won't overflow on
+  *			multiplication
+@@ -93,6 +95,7 @@ struct clocksource {
+ 	u32			shift;
+ 	u64			max_idle_ns;
+ 	u32			maxadj;
++	u32			uncertainty_margin;
+ #ifdef CONFIG_ARCH_CLOCKSOURCE_DATA
+ 	struct arch_clocksource_data archdata;
+ #endif
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index fc56d53cc68bf..2ba33d708942c 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -345,6 +345,8 @@ struct hid_item {
+ /* BIT(9) reserved for backward compatibility, was NO_INIT_INPUT_REPORTS */
+ #define HID_QUIRK_ALWAYS_POLL			BIT(10)
+ #define HID_QUIRK_INPUT_PER_APP			BIT(11)
++#define HID_QUIRK_X_INVERT			BIT(12)
++#define HID_QUIRK_Y_INVERT			BIT(13)
+ #define HID_QUIRK_SKIP_OUTPUT_REPORTS		BIT(16)
+ #define HID_QUIRK_SKIP_OUTPUT_REPORT_ID		BIT(17)
+ #define HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP	BIT(18)
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index 63b550403317a..c142a152d6a41 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -938,6 +938,15 @@ static inline int is_highmem_idx(enum zone_type idx)
+ #endif
+ }
+ 
++#ifdef CONFIG_ZONE_DMA
++bool has_managed_dma(void);
++#else
++static inline bool has_managed_dma(void)
++{
++	return false;
++}
++#endif
++
+ /**
+  * is_highmem - helper function to quickly check if a struct zone is a
+  *              highmem zone or not.  This is an attempt to keep references
+diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
+index 161acd4ede448..30091ab5de287 100644
+--- a/include/linux/pm_runtime.h
++++ b/include/linux/pm_runtime.h
+@@ -58,6 +58,7 @@ extern void pm_runtime_get_suppliers(struct device *dev);
+ extern void pm_runtime_put_suppliers(struct device *dev);
+ extern void pm_runtime_new_link(struct device *dev);
+ extern void pm_runtime_drop_link(struct device_link *link);
++extern void pm_runtime_release_supplier(struct device_link *link, bool check_idle);
+ 
+ /**
+  * pm_runtime_get_if_in_use - Conditionally bump up runtime PM usage counter.
+@@ -279,6 +280,8 @@ static inline void pm_runtime_get_suppliers(struct device *dev) {}
+ static inline void pm_runtime_put_suppliers(struct device *dev) {}
+ static inline void pm_runtime_new_link(struct device *dev) {}
+ static inline void pm_runtime_drop_link(struct device_link *link) {}
++static inline void pm_runtime_release_supplier(struct device_link *link,
++					       bool check_idle) {}
+ 
+ #endif /* !CONFIG_PM */
+ 
+diff --git a/include/net/inet_frag.h b/include/net/inet_frag.h
+index bac79e817776c..4cbd413e71a3f 100644
+--- a/include/net/inet_frag.h
++++ b/include/net/inet_frag.h
+@@ -116,8 +116,15 @@ int fqdir_init(struct fqdir **fqdirp, struct inet_frags *f, struct net *net);
+ 
+ static inline void fqdir_pre_exit(struct fqdir *fqdir)
+ {
+-	fqdir->high_thresh = 0; /* prevent creation of new frags */
+-	fqdir->dead = true;
++	/* Prevent creation of new frags.
++	 * Pairs with READ_ONCE() in inet_frag_find().
++	 */
++	WRITE_ONCE(fqdir->high_thresh, 0);
++
++	/* Pairs with READ_ONCE() in inet_frag_kill(), ip_expire()
++	 * and ip6frag_expire_frag_queue().
++	 */
++	WRITE_ONCE(fqdir->dead, true);
+ }
+ void fqdir_exit(struct fqdir *fqdir);
+ 
+diff --git a/include/net/ipv6_frag.h b/include/net/ipv6_frag.h
+index 851029ecff13c..0a4779175a523 100644
+--- a/include/net/ipv6_frag.h
++++ b/include/net/ipv6_frag.h
+@@ -67,7 +67,8 @@ ip6frag_expire_frag_queue(struct net *net, struct frag_queue *fq)
+ 	struct sk_buff *head;
+ 
+ 	rcu_read_lock();
+-	if (fq->q.fqdir->dead)
++	/* Paired with the WRITE_ONCE() in fqdir_pre_exit(). */
++	if (READ_ONCE(fq->q.fqdir->dead))
+ 		goto out_rcu_unlock;
+ 	spin_lock(&fq->q.lock);
+ 
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 9226a84dcc14d..1042c449e7db5 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -1261,6 +1261,7 @@ struct psched_ratecfg {
+ 	u64	rate_bytes_ps; /* bytes per second */
+ 	u32	mult;
+ 	u16	overhead;
++	u16	mpu;
+ 	u8	linklayer;
+ 	u8	shift;
+ };
+@@ -1270,6 +1271,9 @@ static inline u64 psched_l2t_ns(const struct psched_ratecfg *r,
+ {
+ 	len += r->overhead;
+ 
++	if (len < r->mpu)
++		len = r->mpu;
++
+ 	if (unlikely(r->linklayer == TC_LINKLAYER_ATM))
+ 		return ((u64)(DIV_ROUND_UP(len,48)*53) * r->mult) >> r->shift;
+ 
+@@ -1292,6 +1296,7 @@ static inline void psched_ratecfg_getrate(struct tc_ratespec *res,
+ 	res->rate = min_t(u64, r->rate_bytes_ps, ~0U);
+ 
+ 	res->overhead = r->overhead;
++	res->mpu = r->mpu;
+ 	res->linklayer = (r->linklayer & TC_LINKLAYER_MASK);
+ }
+ 
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 6232a5f048bde..337d29875e518 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -193,6 +193,11 @@ struct xfrm_state {
+ 	struct xfrm_algo_aead	*aead;
+ 	const char		*geniv;
+ 
++	/* mapping change rate limiting */
++	__be16 new_mapping_sport;
++	u32 new_mapping;	/* seconds */
++	u32 mapping_maxage;	/* seconds for input SA */
++
+ 	/* Data for encapsulator */
+ 	struct xfrm_encap_tmpl	*encap;
+ 	struct sock __rcu	*encap_sk;
+diff --git a/include/trace/events/cgroup.h b/include/trace/events/cgroup.h
+index 7f42a3de59e6b..dd7d7c9efecdf 100644
+--- a/include/trace/events/cgroup.h
++++ b/include/trace/events/cgroup.h
+@@ -59,8 +59,8 @@ DECLARE_EVENT_CLASS(cgroup,
+ 
+ 	TP_STRUCT__entry(
+ 		__field(	int,		root			)
+-		__field(	int,		id			)
+ 		__field(	int,		level			)
++		__field(	u64,		id			)
+ 		__string(	path,		path			)
+ 	),
+ 
+@@ -71,7 +71,7 @@ DECLARE_EVENT_CLASS(cgroup,
+ 		__assign_str(path, path);
+ 	),
+ 
+-	TP_printk("root=%d id=%d level=%d path=%s",
++	TP_printk("root=%d id=%llu level=%d path=%s",
+ 		  __entry->root, __entry->id, __entry->level, __get_str(path))
+ );
+ 
+@@ -126,8 +126,8 @@ DECLARE_EVENT_CLASS(cgroup_migrate,
+ 
+ 	TP_STRUCT__entry(
+ 		__field(	int,		dst_root		)
+-		__field(	int,		dst_id			)
+ 		__field(	int,		dst_level		)
++		__field(	u64,		dst_id			)
+ 		__field(	int,		pid			)
+ 		__string(	dst_path,	path			)
+ 		__string(	comm,		task->comm		)
+@@ -142,7 +142,7 @@ DECLARE_EVENT_CLASS(cgroup_migrate,
+ 		__assign_str(comm, task->comm);
+ 	),
+ 
+-	TP_printk("dst_root=%d dst_id=%d dst_level=%d dst_path=%s pid=%d comm=%s",
++	TP_printk("dst_root=%d dst_id=%llu dst_level=%d dst_path=%s pid=%d comm=%s",
+ 		  __entry->dst_root, __entry->dst_id, __entry->dst_level,
+ 		  __get_str(dst_path), __entry->pid, __get_str(comm))
+ );
+@@ -171,8 +171,8 @@ DECLARE_EVENT_CLASS(cgroup_event,
+ 
+ 	TP_STRUCT__entry(
+ 		__field(	int,		root			)
+-		__field(	int,		id			)
+ 		__field(	int,		level			)
++		__field(	u64,		id			)
+ 		__string(	path,		path			)
+ 		__field(	int,		val			)
+ 	),
+@@ -185,7 +185,7 @@ DECLARE_EVENT_CLASS(cgroup_event,
+ 		__entry->val = val;
+ 	),
+ 
+-	TP_printk("root=%d id=%d level=%d path=%s val=%d",
++	TP_printk("root=%d id=%llu level=%d path=%s val=%d",
+ 		  __entry->root, __entry->id, __entry->level, __get_str(path),
+ 		  __entry->val)
+ );
+diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
+index ffc6a5391bb7b..2290c98b47cf8 100644
+--- a/include/uapi/linux/xfrm.h
++++ b/include/uapi/linux/xfrm.h
+@@ -308,6 +308,7 @@ enum xfrm_attr_type_t {
+ 	XFRMA_SET_MARK,		/* __u32 */
+ 	XFRMA_SET_MARK_MASK,	/* __u32 */
+ 	XFRMA_IF_ID,		/* __u32 */
++	XFRMA_MTIMER_THRESH,	/* __u32 in seconds for input SA */
+ 	__XFRMA_MAX
+ 
+ #define XFRMA_OUTPUT_MARK XFRMA_SET_MARK	/* Compatibility */
+diff --git a/kernel/audit.c b/kernel/audit.c
+index d784000921da3..2a38cbaf3ddb7 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -1540,6 +1540,20 @@ static void audit_receive(struct sk_buff  *skb)
+ 		nlh = nlmsg_next(nlh, &len);
+ 	}
+ 	audit_ctl_unlock();
++
++	/* can't block with the ctrl lock, so penalize the sender now */
++	if (audit_backlog_limit &&
++	    (skb_queue_len(&audit_queue) > audit_backlog_limit)) {
++		DECLARE_WAITQUEUE(wait, current);
++
++		/* wake kauditd to try and flush the queue */
++		wake_up_interruptible(&kauditd_wait);
++
++		add_wait_queue_exclusive(&audit_backlog_wait, &wait);
++		set_current_state(TASK_UNINTERRUPTIBLE);
++		schedule_timeout(audit_backlog_wait_time);
++		remove_wait_queue(&audit_backlog_wait, &wait);
++	}
+ }
+ 
+ /* Log information about who is connecting to the audit multicast socket */
+@@ -1824,7 +1838,9 @@ struct audit_buffer *audit_log_start(struct audit_context *ctx, gfp_t gfp_mask,
+ 	 *    task_tgid_vnr() since auditd_pid is set in audit_receive_msg()
+ 	 *    using a PID anchored in the caller's namespace
+ 	 * 2. generator holding the audit_cmd_mutex - we don't want to block
+-	 *    while holding the mutex */
++	 *    while holding the mutex, although we do penalize the sender
++	 *    later in audit_receive() when it is safe to block
++	 */
+ 	if (!(auditd_test_task(current) || audit_ctl_owner_current())) {
+ 		long stime = audit_backlog_wait_time;
+ 
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index aaf2fbaa0cc76..dc497eaf22663 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -4135,8 +4135,7 @@ static struct btf *btf_parse(void __user *btf_data, u32 btf_data_size,
+ 		log->len_total = log_size;
+ 
+ 		/* log attributes have to be sane */
+-		if (log->len_total < 128 || log->len_total > UINT_MAX >> 8 ||
+-		    !log->level || !log->ubuf) {
++		if (!bpf_verifier_log_attr_valid(log)) {
+ 			err = -EINVAL;
+ 			goto errout;
+ 		}
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index b43c9de34a2c2..015bf2ba4a0b6 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -7725,15 +7725,15 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
+ {
+ 	if (reg_type_may_be_null(reg->type) && reg->id == id &&
+ 	    !WARN_ON_ONCE(!reg->id)) {
+-		/* Old offset (both fixed and variable parts) should
+-		 * have been known-zero, because we don't allow pointer
+-		 * arithmetic on pointers that might be NULL.
+-		 */
+ 		if (WARN_ON_ONCE(reg->smin_value || reg->smax_value ||
+ 				 !tnum_equals_const(reg->var_off, 0) ||
+ 				 reg->off)) {
+-			__mark_reg_known_zero(reg);
+-			reg->off = 0;
++			/* Old offset (both fixed and variable parts) should
++			 * have been known-zero, because we don't allow pointer
++			 * arithmetic on pointers that might be NULL. If we
++			 * see this happening, don't convert the register.
++			 */
++			return;
+ 		}
+ 		if (is_null) {
+ 			reg->type = SCALAR_VALUE;
+@@ -12349,11 +12349,11 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
+ 		log->ubuf = (char __user *) (unsigned long) attr->log_buf;
+ 		log->len_total = attr->log_size;
+ 
+-		ret = -EINVAL;
+ 		/* log attributes have to be sane */
+-		if (log->len_total < 128 || log->len_total > UINT_MAX >> 2 ||
+-		    !log->level || !log->ubuf || log->level & ~BPF_LOG_MASK)
++		if (!bpf_verifier_log_attr_valid(log)) {
++			ret = -EINVAL;
+ 			goto err_unlock;
++		}
+ 	}
+ 
+ 	if (IS_ERR(btf_vmlinux)) {
+diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
+index d4637f72239b4..b9082b572e0f8 100644
+--- a/kernel/dma/pool.c
++++ b/kernel/dma/pool.c
+@@ -206,7 +206,7 @@ static int __init dma_atomic_pool_init(void)
+ 						    GFP_KERNEL);
+ 	if (!atomic_pool_kernel)
+ 		ret = -ENOMEM;
+-	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
++	if (has_managed_dma()) {
+ 		atomic_pool_dma = __dma_atomic_pool_init(atomic_pool_size,
+ 						GFP_KERNEL | GFP_DMA);
+ 		if (!atomic_pool_dma)
+@@ -229,7 +229,7 @@ static inline struct gen_pool *dma_guess_pool(struct gen_pool *prev, gfp_t gfp)
+ 	if (prev == NULL) {
+ 		if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
+ 			return atomic_pool_dma32;
+-		if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
++		if (atomic_pool_dma && (gfp & GFP_DMA))
+ 			return atomic_pool_dma;
+ 		return atomic_pool_kernel;
+ 	}
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index 0ffe185c1f46a..0dc16345e668c 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -387,6 +387,7 @@ retry_ipi:
+ 			continue;
+ 		}
+ 		if (get_cpu() == cpu) {
++			mask_ofl_test |= mask;
+ 			put_cpu();
+ 			continue;
+ 		}
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 5a55d23004524..ca0eef7d3852b 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -147,10 +147,10 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ 
+ 	/* Add guest time to cpustat. */
+ 	if (task_nice(p) > 0) {
+-		cpustat[CPUTIME_NICE] += cputime;
++		task_group_account_field(p, CPUTIME_NICE, cputime);
+ 		cpustat[CPUTIME_GUEST_NICE] += cputime;
+ 	} else {
+-		cpustat[CPUTIME_USER] += cputime;
++		task_group_account_field(p, CPUTIME_USER, cputime);
+ 		cpustat[CPUTIME_GUEST] += cputime;
+ 	}
+ }
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index c004e3b89c324..2a33cb5a10e59 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -6284,8 +6284,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
+ 	 * pattern is IO completions.
+ 	 */
+ 	if (is_per_cpu_kthread(current) &&
++	    in_task() &&
+ 	    prev == smp_processor_id() &&
+-	    this_rq()->nr_running <= 1) {
++	    this_rq()->nr_running <= 1 &&
++	    asym_fits_capacity(task_util, prev)) {
+ 		return prev;
+ 	}
+ 
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index b5cf418e2e3fe..41b14d9242039 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -52,11 +52,8 @@ void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime)
+ 	rt_b->rt_period_timer.function = sched_rt_period_timer;
+ }
+ 
+-static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
++static inline void do_start_rt_bandwidth(struct rt_bandwidth *rt_b)
+ {
+-	if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
+-		return;
+-
+ 	raw_spin_lock(&rt_b->rt_runtime_lock);
+ 	if (!rt_b->rt_period_active) {
+ 		rt_b->rt_period_active = 1;
+@@ -75,6 +72,14 @@ static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
+ 	raw_spin_unlock(&rt_b->rt_runtime_lock);
+ }
+ 
++static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
++{
++	if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
++		return;
++
++	do_start_rt_bandwidth(rt_b);
++}
++
+ void init_rt_rq(struct rt_rq *rt_rq)
+ {
+ 	struct rt_prio_array *array;
+@@ -1022,13 +1027,17 @@ static void update_curr_rt(struct rq *rq)
+ 
+ 	for_each_sched_rt_entity(rt_se) {
+ 		struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
++		int exceeded;
+ 
+ 		if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
+ 			raw_spin_lock(&rt_rq->rt_runtime_lock);
+ 			rt_rq->rt_time += delta_exec;
+-			if (sched_rt_runtime_exceeded(rt_rq))
++			exceeded = sched_rt_runtime_exceeded(rt_rq);
++			if (exceeded)
+ 				resched_curr(rq);
+ 			raw_spin_unlock(&rt_rq->rt_runtime_lock);
++			if (exceeded)
++				do_start_rt_bandwidth(sched_rt_bandwidth(rt_rq));
+ 		}
+ 	}
+ }
+@@ -2727,8 +2736,12 @@ static int sched_rt_global_validate(void)
+ 
+ static void sched_rt_do_global(void)
+ {
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags);
+ 	def_rt_bandwidth.rt_runtime = global_rt_runtime();
+ 	def_rt_bandwidth.rt_period = ns_to_ktime(global_rt_period());
++	raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags);
+ }
+ 
+ int sched_rt_handler(struct ctl_table *table, int write, void *buffer,
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 74492f08660c4..e34ceb91f4c5a 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -93,6 +93,20 @@ static char override_name[CS_NAME_LEN];
+ static int finished_booting;
+ static u64 suspend_start;
+ 
++/*
++ * Threshold: 0.0312s, when doubled: 0.0625s.
++ * Also a default for cs->uncertainty_margin when registering clocks.
++ */
++#define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 5)
++
++/*
++ * Maximum permissible delay between two readouts of the watchdog
++ * clocksource surrounding a read of the clocksource being validated.
++ * This delay could be due to SMIs, NMIs, or to VCPU preemptions.  Used as
++ * a lower bound for cs->uncertainty_margin values when registering clocks.
++ */
++#define WATCHDOG_MAX_SKEW (100 * NSEC_PER_USEC)
++
+ #ifdef CONFIG_CLOCKSOURCE_WATCHDOG
+ static void clocksource_watchdog_work(struct work_struct *work);
+ static void clocksource_select(void);
+@@ -119,17 +133,9 @@ static int clocksource_watchdog_kthread(void *data);
+ static void __clocksource_change_rating(struct clocksource *cs, int rating);
+ 
+ /*
+- * Interval: 0.5sec Threshold: 0.0625s
++ * Interval: 0.5sec.
+  */
+ #define WATCHDOG_INTERVAL (HZ >> 1)
+-#define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 4)
+-
+-/*
+- * Maximum permissible delay between two readouts of the watchdog
+- * clocksource surrounding a read of the clocksource being validated.
+- * This delay could be due to SMIs, NMIs, or to VCPU preemptions.
+- */
+-#define WATCHDOG_MAX_SKEW (100 * NSEC_PER_USEC)
+ 
+ static void clocksource_watchdog_work(struct work_struct *work)
+ {
+@@ -194,17 +200,24 @@ void clocksource_mark_unstable(struct clocksource *cs)
+ static ulong max_cswd_read_retries = 3;
+ module_param(max_cswd_read_retries, ulong, 0644);
+ 
+-static bool cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
++enum wd_read_status {
++	WD_READ_SUCCESS,
++	WD_READ_UNSTABLE,
++	WD_READ_SKIP
++};
++
++static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
+ {
+ 	unsigned int nretries;
+-	u64 wd_end, wd_delta;
+-	int64_t wd_delay;
++	u64 wd_end, wd_end2, wd_delta;
++	int64_t wd_delay, wd_seq_delay;
+ 
+ 	for (nretries = 0; nretries <= max_cswd_read_retries; nretries++) {
+ 		local_irq_disable();
+ 		*wdnow = watchdog->read(watchdog);
+ 		*csnow = cs->read(cs);
+ 		wd_end = watchdog->read(watchdog);
++		wd_end2 = watchdog->read(watchdog);
+ 		local_irq_enable();
+ 
+ 		wd_delta = clocksource_delta(wd_end, *wdnow, watchdog->mask);
+@@ -215,13 +228,34 @@ static bool cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
+ 				pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
+ 					smp_processor_id(), watchdog->name, nretries);
+ 			}
+-			return true;
++			return WD_READ_SUCCESS;
+ 		}
++
++		/*
++		 * Now compute delay in consecutive watchdog read to see if
++		 * there is too much external interferences that cause
++		 * significant delay in reading both clocksource and watchdog.
++		 *
++		 * If consecutive WD read-back delay > WATCHDOG_MAX_SKEW/2,
++		 * report system busy, reinit the watchdog and skip the current
++		 * watchdog test.
++		 */
++		wd_delta = clocksource_delta(wd_end2, wd_end, watchdog->mask);
++		wd_seq_delay = clocksource_cyc2ns(wd_delta, watchdog->mult, watchdog->shift);
++		if (wd_seq_delay > WATCHDOG_MAX_SKEW/2)
++			goto skip_test;
+ 	}
+ 
+ 	pr_warn("timekeeping watchdog on CPU%d: %s read-back delay of %lldns, attempt %d, marking unstable\n",
+ 		smp_processor_id(), watchdog->name, wd_delay, nretries);
+-	return false;
++	return WD_READ_UNSTABLE;
++
++skip_test:
++	pr_info("timekeeping watchdog on CPU%d: %s wd-wd read-back delay of %lldns\n",
++		smp_processor_id(), watchdog->name, wd_seq_delay);
++	pr_info("wd-%s-wd read-back delay of %lldns, clock-skew test skipped!\n",
++		cs->name, wd_delay);
++	return WD_READ_SKIP;
+ }
+ 
+ static u64 csnow_mid;
+@@ -284,6 +318,8 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 	int next_cpu, reset_pending;
+ 	int64_t wd_nsec, cs_nsec;
+ 	struct clocksource *cs;
++	enum wd_read_status read_ret;
++	u32 md;
+ 
+ 	spin_lock(&watchdog_lock);
+ 	if (!watchdog_running)
+@@ -300,9 +336,12 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 			continue;
+ 		}
+ 
+-		if (!cs_watchdog_read(cs, &csnow, &wdnow)) {
+-			/* Clock readout unreliable, so give it up. */
+-			__clocksource_unstable(cs);
++		read_ret = cs_watchdog_read(cs, &csnow, &wdnow);
++
++		if (read_ret != WD_READ_SUCCESS) {
++			if (read_ret == WD_READ_UNSTABLE)
++				/* Clock readout unreliable, so give it up. */
++				__clocksource_unstable(cs);
+ 			continue;
+ 		}
+ 
+@@ -330,7 +369,8 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 			continue;
+ 
+ 		/* Check the deviation from the watchdog clocksource. */
+-		if (abs(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD) {
++		md = cs->uncertainty_margin + watchdog->uncertainty_margin;
++		if (abs(cs_nsec - wd_nsec) > md) {
+ 			pr_warn("timekeeping watchdog on CPU%d: Marking clocksource '%s' as unstable because the skew is too large:\n",
+ 				smp_processor_id(), cs->name);
+ 			pr_warn("                      '%s' wd_now: %llx wd_last: %llx mask: %llx\n",
+@@ -985,6 +1025,26 @@ void __clocksource_update_freq_scale(struct clocksource *cs, u32 scale, u32 freq
+ 		clocks_calc_mult_shift(&cs->mult, &cs->shift, freq,
+ 				       NSEC_PER_SEC / scale, sec * scale);
+ 	}
++
++	/*
++	 * If the uncertainty margin is not specified, calculate it.
++	 * If both scale and freq are non-zero, calculate the clock
++	 * period, but bound below at 2*WATCHDOG_MAX_SKEW.  However,
++	 * if either of scale or freq is zero, be very conservative and
++	 * take the tens-of-milliseconds WATCHDOG_THRESHOLD value for the
++	 * uncertainty margin.  Allow stupidly small uncertainty margins
++	 * to be specified by the caller for testing purposes, but warn
++	 * to discourage production use of this capability.
++	 */
++	if (scale && freq && !cs->uncertainty_margin) {
++		cs->uncertainty_margin = NSEC_PER_SEC / (scale * freq);
++		if (cs->uncertainty_margin < 2 * WATCHDOG_MAX_SKEW)
++			cs->uncertainty_margin = 2 * WATCHDOG_MAX_SKEW;
++	} else if (!cs->uncertainty_margin) {
++		cs->uncertainty_margin = WATCHDOG_THRESHOLD;
++	}
++	WARN_ON_ONCE(cs->uncertainty_margin < 2 * WATCHDOG_MAX_SKEW);
++
+ 	/*
+ 	 * Ensure clocksources that have large 'mult' values don't overflow
+ 	 * when adjusted.
+diff --git a/kernel/time/jiffies.c b/kernel/time/jiffies.c
+index eddcf49704445..65409abcca8e1 100644
+--- a/kernel/time/jiffies.c
++++ b/kernel/time/jiffies.c
+@@ -49,13 +49,14 @@ static u64 jiffies_read(struct clocksource *cs)
+  * for "tick-less" systems.
+  */
+ static struct clocksource clocksource_jiffies = {
+-	.name		= "jiffies",
+-	.rating		= 1, /* lowest valid rating*/
+-	.read		= jiffies_read,
+-	.mask		= CLOCKSOURCE_MASK(32),
+-	.mult		= TICK_NSEC << JIFFIES_SHIFT, /* details above */
+-	.shift		= JIFFIES_SHIFT,
+-	.max_cycles	= 10,
++	.name			= "jiffies",
++	.rating			= 1, /* lowest valid rating*/
++	.uncertainty_margin	= 32 * NSEC_PER_MSEC,
++	.read			= jiffies_read,
++	.mask			= CLOCKSOURCE_MASK(32),
++	.mult			= TICK_NSEC << JIFFIES_SHIFT, /* details above */
++	.shift			= JIFFIES_SHIFT,
++	.max_cycles		= 10,
+ };
+ 
+ __cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(jiffies_lock);
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index ba644760f5076..a9e074769881f 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1517,9 +1517,6 @@ static const struct bpf_func_proto bpf_perf_prog_read_value_proto = {
+ BPF_CALL_4(bpf_read_branch_records, struct bpf_perf_event_data_kern *, ctx,
+ 	   void *, buf, u32, size, u64, flags)
+ {
+-#ifndef CONFIG_X86
+-	return -ENOENT;
+-#else
+ 	static const u32 br_entry_size = sizeof(struct perf_branch_entry);
+ 	struct perf_branch_stack *br_stack = ctx->data->br_stack;
+ 	u32 to_copy;
+@@ -1528,7 +1525,7 @@ BPF_CALL_4(bpf_read_branch_records, struct bpf_perf_event_data_kern *, ctx,
+ 		return -EINVAL;
+ 
+ 	if (unlikely(!br_stack))
+-		return -EINVAL;
++		return -ENOENT;
+ 
+ 	if (flags & BPF_F_GET_BRANCH_RECORDS_SIZE)
+ 		return br_stack->nr * br_entry_size;
+@@ -1540,7 +1537,6 @@ BPF_CALL_4(bpf_read_branch_records, struct bpf_perf_event_data_kern *, ctx,
+ 	memcpy(buf, br_stack->entries, to_copy);
+ 
+ 	return to_copy;
+-#endif
+ }
+ 
+ static const struct bpf_func_proto bpf_read_branch_records_proto = {
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 552dbc9d52260..d8a9fc7941266 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -1183,15 +1183,18 @@ static int probes_profile_seq_show(struct seq_file *m, void *v)
+ {
+ 	struct dyn_event *ev = v;
+ 	struct trace_kprobe *tk;
++	unsigned long nmissed;
+ 
+ 	if (!is_trace_kprobe(ev))
+ 		return 0;
+ 
+ 	tk = to_trace_kprobe(ev);
++	nmissed = trace_kprobe_is_return(tk) ?
++		tk->rp.kp.nmissed + tk->rp.nmissed : tk->rp.kp.nmissed;
+ 	seq_printf(m, "  %-44s %15lu %15lu\n",
+ 		   trace_probe_name(&tk->tp),
+ 		   trace_kprobe_nhit(tk),
+-		   tk->rp.kp.nmissed);
++		   nmissed);
+ 
+ 	return 0;
+ }
+diff --git a/kernel/tsacct.c b/kernel/tsacct.c
+index 257ffb993ea23..fd2f7a052fdd9 100644
+--- a/kernel/tsacct.c
++++ b/kernel/tsacct.c
+@@ -38,11 +38,10 @@ void bacct_add_tsk(struct user_namespace *user_ns,
+ 	stats->ac_btime = clamp_t(time64_t, btime, 0, U32_MAX);
+ 	stats->ac_btime64 = btime;
+ 
+-	if (thread_group_leader(tsk)) {
++	if (tsk->flags & PF_EXITING)
+ 		stats->ac_exitcode = tsk->exit_code;
+-		if (tsk->flags & PF_FORKNOEXEC)
+-			stats->ac_flag |= AFORK;
+-	}
++	if (thread_group_leader(tsk) && (tsk->flags & PF_FORKNOEXEC))
++		stats->ac_flag |= AFORK;
+ 	if (tsk->flags & PF_SUPERPRIV)
+ 		stats->ac_flag |= ASU;
+ 	if (tsk->flags & PF_DUMPCORE)
+diff --git a/lib/mpi/mpi-mod.c b/lib/mpi/mpi-mod.c
+index 47bc59edd4ff9..54fcc01564d9d 100644
+--- a/lib/mpi/mpi-mod.c
++++ b/lib/mpi/mpi-mod.c
+@@ -40,6 +40,8 @@ mpi_barrett_t mpi_barrett_init(MPI m, int copy)
+ 
+ 	mpi_normalize(m);
+ 	ctx = kcalloc(1, sizeof(*ctx), GFP_KERNEL);
++	if (!ctx)
++		return NULL;
+ 
+ 	if (copy) {
+ 		ctx->m = mpi_copy(m);
+diff --git a/lib/test_hmm.c b/lib/test_hmm.c
+index 80a78877bd939..a85613068d601 100644
+--- a/lib/test_hmm.c
++++ b/lib/test_hmm.c
+@@ -965,9 +965,33 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
+ 	return 0;
+ }
+ 
++static int dmirror_fops_mmap(struct file *file, struct vm_area_struct *vma)
++{
++	unsigned long addr;
++
++	for (addr = vma->vm_start; addr < vma->vm_end; addr += PAGE_SIZE) {
++		struct page *page;
++		int ret;
++
++		page = alloc_page(GFP_KERNEL | __GFP_ZERO);
++		if (!page)
++			return -ENOMEM;
++
++		ret = vm_insert_page(vma, addr, page);
++		if (ret) {
++			__free_page(page);
++			return ret;
++		}
++		put_page(page);
++	}
++
++	return 0;
++}
++
+ static const struct file_operations dmirror_fops = {
+ 	.open		= dmirror_fops_open,
+ 	.release	= dmirror_fops_release,
++	.mmap		= dmirror_fops_mmap,
+ 	.unlocked_ioctl = dmirror_fops_unlocked_ioctl,
+ 	.llseek		= default_llseek,
+ 	.owner		= THIS_MODULE,
+diff --git a/lib/test_meminit.c b/lib/test_meminit.c
+index e4f706a404b3a..3ca717f113977 100644
+--- a/lib/test_meminit.c
++++ b/lib/test_meminit.c
+@@ -337,6 +337,7 @@ static int __init do_kmem_cache_size_bulk(int size, int *total_failures)
+ 		if (num)
+ 			kmem_cache_free_bulk(c, num, objects);
+ 	}
++	kmem_cache_destroy(c);
+ 	*total_failures += fail;
+ 	return 1;
+ }
+diff --git a/mm/hmm.c b/mm/hmm.c
+index fb617054f9631..cbe9d0c666504 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -296,7 +296,8 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
+ 	 * Since each architecture defines a struct page for the zero page, just
+ 	 * fall through and treat it like a normal page.
+ 	 */
+-	if (pte_special(pte) && !pte_devmap(pte) &&
++	if (!vm_normal_page(walk->vma, addr, pte) &&
++	    !pte_devmap(pte) &&
+ 	    !is_zero_pfn(pte_pfn(pte))) {
+ 		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
+ 			pte_unmap(ptep);
+@@ -514,7 +515,7 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
+ 	struct hmm_range *range = hmm_vma_walk->range;
+ 	struct vm_area_struct *vma = walk->vma;
+ 
+-	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP)) &&
++	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)) &&
+ 	    vma->vm_flags & VM_READ)
+ 		return 0;
+ 
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index e8e0f1cec8b04..c63656c42e288 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -3964,7 +3964,9 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
+ 	va_list args;
+ 	static DEFINE_RATELIMIT_STATE(nopage_rs, 10*HZ, 1);
+ 
+-	if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
++	if ((gfp_mask & __GFP_NOWARN) ||
++	     !__ratelimit(&nopage_rs) ||
++	     ((gfp_mask & __GFP_DMA) && !has_managed_dma()))
+ 		return;
+ 
+ 	va_start(args, fmt);
+@@ -8903,3 +8905,18 @@ bool take_page_off_buddy(struct page *page)
+ 	return ret;
+ }
+ #endif
++
++#ifdef CONFIG_ZONE_DMA
++bool has_managed_dma(void)
++{
++	struct pglist_data *pgdat;
++
++	for_each_online_pgdat(pgdat) {
++		struct zone *zone = &pgdat->node_zones[ZONE_DMA];
++
++		if (managed_zone(zone))
++			return true;
++	}
++	return false;
++}
++#endif /* CONFIG_ZONE_DMA */
+diff --git a/mm/shmem.c b/mm/shmem.c
+index ae8adca3b56d1..d3d8c5e7a296b 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -527,7 +527,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
+ 	struct shmem_inode_info *info;
+ 	struct page *page;
+ 	unsigned long batch = sc ? sc->nr_to_scan : 128;
+-	int removed = 0, split = 0;
++	int split = 0;
+ 
+ 	if (list_empty(&sbinfo->shrinklist))
+ 		return SHRINK_STOP;
+@@ -542,7 +542,6 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
+ 		/* inode is about to be evicted */
+ 		if (!inode) {
+ 			list_del_init(&info->shrinklist);
+-			removed++;
+ 			goto next;
+ 		}
+ 
+@@ -550,12 +549,12 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
+ 		if (round_up(inode->i_size, PAGE_SIZE) ==
+ 				round_up(inode->i_size, HPAGE_PMD_SIZE)) {
+ 			list_move(&info->shrinklist, &to_remove);
+-			removed++;
+ 			goto next;
+ 		}
+ 
+ 		list_move(&info->shrinklist, &list);
+ next:
++		sbinfo->shrinklist_len--;
+ 		if (!--batch)
+ 			break;
+ 	}
+@@ -575,7 +574,7 @@ next:
+ 		inode = &info->vfs_inode;
+ 
+ 		if (nr_to_split && split >= nr_to_split)
+-			goto leave;
++			goto move_back;
+ 
+ 		page = find_get_page(inode->i_mapping,
+ 				(inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT);
+@@ -589,38 +588,44 @@ next:
+ 		}
+ 
+ 		/*
+-		 * Leave the inode on the list if we failed to lock
+-		 * the page at this time.
++		 * Move the inode on the list back to shrinklist if we failed
++		 * to lock the page at this time.
+ 		 *
+ 		 * Waiting for the lock may lead to deadlock in the
+ 		 * reclaim path.
+ 		 */
+ 		if (!trylock_page(page)) {
+ 			put_page(page);
+-			goto leave;
++			goto move_back;
+ 		}
+ 
+ 		ret = split_huge_page(page);
+ 		unlock_page(page);
+ 		put_page(page);
+ 
+-		/* If split failed leave the inode on the list */
++		/* If split failed move the inode on the list back to shrinklist */
+ 		if (ret)
+-			goto leave;
++			goto move_back;
+ 
+ 		split++;
+ drop:
+ 		list_del_init(&info->shrinklist);
+-		removed++;
+-leave:
++		goto put;
++move_back:
++		/*
++		 * Make sure the inode is either on the global list or deleted
++		 * from any local list before iput() since it could be deleted
++		 * in another thread once we put the inode (then the local list
++		 * is corrupted).
++		 */
++		spin_lock(&sbinfo->shrinklist_lock);
++		list_move(&info->shrinklist, &sbinfo->shrinklist);
++		sbinfo->shrinklist_len++;
++		spin_unlock(&sbinfo->shrinklist_lock);
++put:
+ 		iput(inode);
+ 	}
+ 
+-	spin_lock(&sbinfo->shrinklist_lock);
+-	list_splice_tail(&list, &sbinfo->shrinklist);
+-	sbinfo->shrinklist_len -= removed;
+-	spin_unlock(&sbinfo->shrinklist_lock);
+-
+ 	return split;
+ }
+ 
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 22278807b3f36..5e84dce5ff7ae 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -536,7 +536,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ 	ax25_cb *ax25;
+ 	struct net_device *dev;
+ 	char devname[IFNAMSIZ];
+-	unsigned long opt;
++	unsigned int opt;
+ 	int res = 0;
+ 
+ 	if (level != SOL_AX25)
+@@ -568,7 +568,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case AX25_T1:
+-		if (opt < 1 || opt > ULONG_MAX / HZ) {
++		if (opt < 1 || opt > UINT_MAX / HZ) {
+ 			res = -EINVAL;
+ 			break;
+ 		}
+@@ -577,7 +577,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case AX25_T2:
+-		if (opt < 1 || opt > ULONG_MAX / HZ) {
++		if (opt < 1 || opt > UINT_MAX / HZ) {
+ 			res = -EINVAL;
+ 			break;
+ 		}
+@@ -593,7 +593,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case AX25_T3:
+-		if (opt < 1 || opt > ULONG_MAX / HZ) {
++		if (opt < 1 || opt > UINT_MAX / HZ) {
+ 			res = -EINVAL;
+ 			break;
+ 		}
+@@ -601,7 +601,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case AX25_IDLE:
+-		if (opt > ULONG_MAX / (60 * HZ)) {
++		if (opt > UINT_MAX / (60 * HZ)) {
+ 			res = -EINVAL;
+ 			break;
+ 		}
+diff --git a/net/batman-adv/netlink.c b/net/batman-adv/netlink.c
+index c7a55647b520e..121459704b069 100644
+--- a/net/batman-adv/netlink.c
++++ b/net/batman-adv/netlink.c
+@@ -1361,21 +1361,21 @@ static const struct genl_small_ops batadv_netlink_ops[] = {
+ 	{
+ 		.cmd = BATADV_CMD_TP_METER,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.doit = batadv_netlink_tp_meter_start,
+ 		.internal_flags = BATADV_FLAG_NEED_MESH,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_TP_METER_CANCEL,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.doit = batadv_netlink_tp_meter_cancel,
+ 		.internal_flags = BATADV_FLAG_NEED_MESH,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_ROUTING_ALGOS,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_algo_dump,
+ 	},
+ 	{
+@@ -1390,68 +1390,68 @@ static const struct genl_small_ops batadv_netlink_ops[] = {
+ 	{
+ 		.cmd = BATADV_CMD_GET_TRANSTABLE_LOCAL,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_tt_local_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_TRANSTABLE_GLOBAL,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_tt_global_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_ORIGINATORS,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_orig_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_NEIGHBORS,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_hardif_neigh_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_GATEWAYS,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_gw_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_BLA_CLAIM,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_bla_claim_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_BLA_BACKBONE,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_bla_backbone_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_DAT_CACHE,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_dat_cache_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_MCAST_FLAGS,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_mcast_flags_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_SET_MESH,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.doit = batadv_netlink_set_mesh,
+ 		.internal_flags = BATADV_FLAG_NEED_MESH,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_SET_HARDIF,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.doit = batadv_netlink_set_hardif,
+ 		.internal_flags = BATADV_FLAG_NEED_MESH |
+ 				  BATADV_FLAG_NEED_HARDIF,
+@@ -1467,7 +1467,7 @@ static const struct genl_small_ops batadv_netlink_ops[] = {
+ 	{
+ 		.cmd = BATADV_CMD_SET_VLAN,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.doit = batadv_netlink_set_vlan,
+ 		.internal_flags = BATADV_FLAG_NEED_MESH |
+ 				  BATADV_FLAG_NEED_VLAN,
+diff --git a/net/bluetooth/cmtp/core.c b/net/bluetooth/cmtp/core.c
+index 0a2d78e811cf5..83eb84e8e688f 100644
+--- a/net/bluetooth/cmtp/core.c
++++ b/net/bluetooth/cmtp/core.c
+@@ -501,9 +501,7 @@ static int __init cmtp_init(void)
+ {
+ 	BT_INFO("CMTP (CAPI Emulation) ver %s", VERSION);
+ 
+-	cmtp_init_sockets();
+-
+-	return 0;
++	return cmtp_init_sockets();
+ }
+ 
+ static void __exit cmtp_exit(void)
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 2ad66f64879f1..2e7998bad133b 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3810,6 +3810,7 @@ int hci_register_dev(struct hci_dev *hdev)
+ 	return id;
+ 
+ err_wqueue:
++	debugfs_remove_recursive(hdev->debugfs);
+ 	destroy_workqueue(hdev->workqueue);
+ 	destroy_workqueue(hdev->req_workqueue);
+ err:
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 9f52145bb7b76..7ffcca9ae82a1 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5661,7 +5661,8 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		struct hci_ev_le_advertising_info *ev = ptr;
+ 		s8 rssi;
+ 
+-		if (ev->length <= HCI_MAX_AD_LENGTH) {
++		if (ev->length <= HCI_MAX_AD_LENGTH &&
++		    ev->data + ev->length <= skb_tail_pointer(skb)) {
+ 			rssi = ev->data[ev->length];
+ 			process_adv_report(hdev, ev->evt_type, &ev->bdaddr,
+ 					   ev->bdaddr_type, NULL, 0, rssi,
+@@ -5671,6 +5672,11 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		}
+ 
+ 		ptr += sizeof(*ev) + ev->length + 1;
++
++		if (ptr > (void *) skb_tail_pointer(skb) - sizeof(*ev)) {
++			bt_dev_err(hdev, "Malicious advertising data. Stopping processing");
++			break;
++		}
+ 	}
+ 
+ 	hci_dev_unlock(hdev);
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index 1a94ed2f8a4f8..d965b7c66bd62 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -2118,7 +2118,7 @@ int __hci_req_enable_ext_advertising(struct hci_request *req, u8 instance)
+ 	/* Set duration per instance since controller is responsible for
+ 	 * scheduling it.
+ 	 */
+-	if (adv_instance && adv_instance->duration) {
++	if (adv_instance && adv_instance->timeout) {
+ 		u16 duration = adv_instance->timeout * MSEC_PER_SEC;
+ 
+ 		/* Time = N * 10 ms */
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 160c016a5dfb9..d2c6785205992 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -161,7 +161,11 @@ static int l2cap_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
+ 		break;
+ 	}
+ 
+-	if (chan->psm && bdaddr_type_is_le(chan->src_type))
++	/* Use L2CAP_MODE_LE_FLOWCTL (CoC) in case of LE address and
++	 * L2CAP_MODE_EXT_FLOWCTL (ECRED) has not been set.
++	 */
++	if (chan->psm && bdaddr_type_is_le(chan->src_type) &&
++	    chan->mode != L2CAP_MODE_EXT_FLOWCTL)
+ 		chan->mode = L2CAP_MODE_LE_FLOWCTL;
+ 
+ 	chan->state = BT_BOUND;
+@@ -172,6 +176,21 @@ done:
+ 	return err;
+ }
+ 
++static void l2cap_sock_init_pid(struct sock *sk)
++{
++	struct l2cap_chan *chan = l2cap_pi(sk)->chan;
++
++	/* Only L2CAP_MODE_EXT_FLOWCTL ever need to access the PID in order to
++	 * group the channels being requested.
++	 */
++	if (chan->mode != L2CAP_MODE_EXT_FLOWCTL)
++		return;
++
++	spin_lock(&sk->sk_peer_lock);
++	sk->sk_peer_pid = get_pid(task_tgid(current));
++	spin_unlock(&sk->sk_peer_lock);
++}
++
+ static int l2cap_sock_connect(struct socket *sock, struct sockaddr *addr,
+ 			      int alen, int flags)
+ {
+@@ -240,9 +259,15 @@ static int l2cap_sock_connect(struct socket *sock, struct sockaddr *addr,
+ 			return -EINVAL;
+ 	}
+ 
+-	if (chan->psm && bdaddr_type_is_le(chan->src_type) && !chan->mode)
++	/* Use L2CAP_MODE_LE_FLOWCTL (CoC) in case of LE address and
++	 * L2CAP_MODE_EXT_FLOWCTL (ECRED) has not been set.
++	 */
++	if (chan->psm && bdaddr_type_is_le(chan->src_type) &&
++	    chan->mode != L2CAP_MODE_EXT_FLOWCTL)
+ 		chan->mode = L2CAP_MODE_LE_FLOWCTL;
+ 
++	l2cap_sock_init_pid(sk);
++
+ 	err = l2cap_chan_connect(chan, la.l2_psm, __le16_to_cpu(la.l2_cid),
+ 				 &la.l2_bdaddr, la.l2_bdaddr_type);
+ 	if (err)
+@@ -298,6 +323,8 @@ static int l2cap_sock_listen(struct socket *sock, int backlog)
+ 		goto done;
+ 	}
+ 
++	l2cap_sock_init_pid(sk);
++
+ 	sk->sk_max_ack_backlog = backlog;
+ 	sk->sk_ack_backlog = 0;
+ 
+@@ -876,6 +903,8 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ 	struct l2cap_conn *conn;
+ 	int len, err = 0;
+ 	u32 opt;
++	u16 mtu;
++	u8 mode;
+ 
+ 	BT_DBG("sk %p", sk);
+ 
+@@ -1058,16 +1087,16 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ 			break;
+ 		}
+ 
+-		if (copy_from_sockptr(&opt, optval, sizeof(u16))) {
++		if (copy_from_sockptr(&mtu, optval, sizeof(u16))) {
+ 			err = -EFAULT;
+ 			break;
+ 		}
+ 
+ 		if (chan->mode == L2CAP_MODE_EXT_FLOWCTL &&
+ 		    sk->sk_state == BT_CONNECTED)
+-			err = l2cap_chan_reconfigure(chan, opt);
++			err = l2cap_chan_reconfigure(chan, mtu);
+ 		else
+-			chan->imtu = opt;
++			chan->imtu = mtu;
+ 
+ 		break;
+ 
+@@ -1089,14 +1118,14 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ 			break;
+ 		}
+ 
+-		if (copy_from_sockptr(&opt, optval, sizeof(u8))) {
++		if (copy_from_sockptr(&mode, optval, sizeof(u8))) {
+ 			err = -EFAULT;
+ 			break;
+ 		}
+ 
+-		BT_DBG("opt %u", opt);
++		BT_DBG("mode %u", mode);
+ 
+-		err = l2cap_set_mode(chan, opt);
++		err = l2cap_set_mode(chan, mode);
+ 		if (err)
+ 			break;
+ 
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 8edfb98ae1d58..68c0d0f928908 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -743,6 +743,9 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff
+ 	if (nf_bridge->frag_max_size && nf_bridge->frag_max_size < mtu)
+ 		mtu = nf_bridge->frag_max_size;
+ 
++	nf_bridge_update_protocol(skb);
++	nf_bridge_push_encap_header(skb);
++
+ 	if (skb_is_gso(skb) || skb->len + mtu_reserved <= mtu) {
+ 		nf_bridge_info_free(skb);
+ 		return br_dev_queue_push_xmit(net, sk, skb);
+@@ -760,8 +763,6 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff
+ 
+ 		IPCB(skb)->frag_max_size = nf_bridge->frag_max_size;
+ 
+-		nf_bridge_update_protocol(skb);
+-
+ 		data = this_cpu_ptr(&brnf_frag_data_storage);
+ 
+ 		if (skb_vlan_tag_present(skb)) {
+@@ -789,8 +790,6 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff
+ 
+ 		IP6CB(skb)->frag_max_size = nf_bridge->frag_max_size;
+ 
+-		nf_bridge_update_protocol(skb);
+-
+ 		data = this_cpu_ptr(&brnf_frag_data_storage);
+ 		data->encap_size = nf_bridge_encap_header_len(skb);
+ 		data->size = ETH_HLEN + data->encap_size;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 60cf3cd0c282f..0bab2aca07fd3 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -9339,6 +9339,12 @@ static int bpf_xdp_link_update(struct bpf_link *link, struct bpf_prog *new_prog,
+ 		goto out_unlock;
+ 	}
+ 	old_prog = link->prog;
++	if (old_prog->type != new_prog->type ||
++	    old_prog->expected_attach_type != new_prog->expected_attach_type) {
++		err = -EINVAL;
++		goto out_unlock;
++	}
++
+ 	if (old_prog == new_prog) {
+ 		/* no-op, don't disturb drivers */
+ 		bpf_prog_put(new_prog);
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 442b67c044a9f..646d90f63dafc 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -7852,8 +7852,6 @@ static const struct genl_small_ops devlink_nl_ops[] = {
+ 			    GENL_DONT_VALIDATE_DUMP_STRICT,
+ 		.dumpit = devlink_nl_cmd_health_reporter_dump_get_dumpit,
+ 		.flags = GENL_ADMIN_PERM,
+-		.internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK_OR_PORT |
+-				  DEVLINK_NL_FLAG_NO_LOCK,
+ 	},
+ 	{
+ 		.cmd = DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR,
+diff --git a/net/core/filter.c b/net/core/filter.c
+index abd58dce49bbc..7fa4283f2a8c0 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -4711,12 +4711,14 @@ static int _bpf_setsockopt(struct sock *sk, int level, int optname,
+ 		switch (optname) {
+ 		case SO_RCVBUF:
+ 			val = min_t(u32, val, sysctl_rmem_max);
++			val = min_t(int, val, INT_MAX / 2);
+ 			sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
+ 			WRITE_ONCE(sk->sk_rcvbuf,
+ 				   max_t(int, val * 2, SOCK_MIN_RCVBUF));
+ 			break;
+ 		case SO_SNDBUF:
+ 			val = min_t(u32, val, sysctl_wmem_max);
++			val = min_t(int, val, INT_MAX / 2);
+ 			sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
+ 			WRITE_ONCE(sk->sk_sndbuf,
+ 				   max_t(int, val * 2, SOCK_MIN_SNDBUF));
+@@ -7919,9 +7921,9 @@ void bpf_warn_invalid_xdp_action(u32 act)
+ {
+ 	const u32 act_max = XDP_REDIRECT;
+ 
+-	WARN_ONCE(1, "%s XDP return value %u, expect packet loss!\n",
+-		  act > act_max ? "Illegal" : "Driver unsupported",
+-		  act);
++	pr_warn_once("%s XDP return value %u, expect packet loss!\n",
++		     act > act_max ? "Illegal" : "Driver unsupported",
++		     act);
+ }
+ EXPORT_SYMBOL_GPL(bpf_warn_invalid_xdp_action);
+ 
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index af59123601055..99303897b7bb7 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -1804,6 +1804,9 @@ static void remove_queue_kobjects(struct net_device *dev)
+ 
+ 	net_rx_queue_update_kobjects(dev, real_rx, 0);
+ 	netdev_queue_update_kobjects(dev, real_tx, 0);
++
++	dev->real_num_rx_queues = 0;
++	dev->real_num_tx_queues = 0;
+ #ifdef CONFIG_SYSFS
+ 	kset_unregister(dev->queues_kset);
+ #endif
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index ac852db83de9f..cbff7d94b993e 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -183,8 +183,10 @@ static void ops_exit_list(const struct pernet_operations *ops,
+ {
+ 	struct net *net;
+ 	if (ops->exit) {
+-		list_for_each_entry(net, net_exit_list, exit_list)
++		list_for_each_entry(net, net_exit_list, exit_list) {
+ 			ops->exit(net);
++			cond_resched();
++		}
+ 	}
+ 	if (ops->exit_batch)
+ 		ops->exit_batch(net_exit_list);
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index ab6a8f35d369d..838a876c168ca 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -29,6 +29,7 @@
+ #include <linux/init.h>
+ #include <linux/slab.h>
+ #include <linux/netlink.h>
++#include <linux/hash.h>
+ 
+ #include <net/arp.h>
+ #include <net/ip.h>
+@@ -251,7 +252,6 @@ void free_fib_info(struct fib_info *fi)
+ 		pr_warn("Freeing alive fib_info %p\n", fi);
+ 		return;
+ 	}
+-	fib_info_cnt--;
+ 
+ 	call_rcu(&fi->rcu, free_fib_info_rcu);
+ }
+@@ -262,6 +262,10 @@ void fib_release_info(struct fib_info *fi)
+ 	spin_lock_bh(&fib_info_lock);
+ 	if (fi && --fi->fib_treeref == 0) {
+ 		hlist_del(&fi->fib_hash);
++
++		/* Paired with READ_ONCE() in fib_create_info(). */
++		WRITE_ONCE(fib_info_cnt, fib_info_cnt - 1);
++
+ 		if (fi->fib_prefsrc)
+ 			hlist_del(&fi->fib_lhash);
+ 		if (fi->nh) {
+@@ -318,11 +322,15 @@ static inline int nh_comp(struct fib_info *fi, struct fib_info *ofi)
+ 
+ static inline unsigned int fib_devindex_hashfn(unsigned int val)
+ {
+-	unsigned int mask = DEVINDEX_HASHSIZE - 1;
++	return hash_32(val, DEVINDEX_HASHBITS);
++}
++
++static struct hlist_head *
++fib_info_devhash_bucket(const struct net_device *dev)
++{
++	u32 val = net_hash_mix(dev_net(dev)) ^ dev->ifindex;
+ 
+-	return (val ^
+-		(val >> DEVINDEX_HASHBITS) ^
+-		(val >> (DEVINDEX_HASHBITS * 2))) & mask;
++	return &fib_info_devhash[fib_devindex_hashfn(val)];
+ }
+ 
+ static unsigned int fib_info_hashfn_1(int init_val, u8 protocol, u8 scope,
+@@ -432,12 +440,11 @@ int ip_fib_check_default(__be32 gw, struct net_device *dev)
+ {
+ 	struct hlist_head *head;
+ 	struct fib_nh *nh;
+-	unsigned int hash;
+ 
+ 	spin_lock(&fib_info_lock);
+ 
+-	hash = fib_devindex_hashfn(dev->ifindex);
+-	head = &fib_info_devhash[hash];
++	head = fib_info_devhash_bucket(dev);
++
+ 	hlist_for_each_entry(nh, head, nh_hash) {
+ 		if (nh->fib_nh_dev == dev &&
+ 		    nh->fib_nh_gw4 == gw &&
+@@ -1431,7 +1438,9 @@ struct fib_info *fib_create_info(struct fib_config *cfg,
+ #endif
+ 
+ 	err = -ENOBUFS;
+-	if (fib_info_cnt >= fib_info_hash_size) {
++
++	/* Paired with WRITE_ONCE() in fib_release_info() */
++	if (READ_ONCE(fib_info_cnt) >= fib_info_hash_size) {
+ 		unsigned int new_size = fib_info_hash_size << 1;
+ 		struct hlist_head *new_info_hash;
+ 		struct hlist_head *new_laddrhash;
+@@ -1463,7 +1472,6 @@ struct fib_info *fib_create_info(struct fib_config *cfg,
+ 		return ERR_PTR(err);
+ 	}
+ 
+-	fib_info_cnt++;
+ 	fi->fib_net = net;
+ 	fi->fib_protocol = cfg->fc_protocol;
+ 	fi->fib_scope = cfg->fc_scope;
+@@ -1590,6 +1598,7 @@ link_it:
+ 	fi->fib_treeref++;
+ 	refcount_set(&fi->fib_clntref, 1);
+ 	spin_lock_bh(&fib_info_lock);
++	fib_info_cnt++;
+ 	hlist_add_head(&fi->fib_hash,
+ 		       &fib_info_hash[fib_info_hashfn(fi)]);
+ 	if (fi->fib_prefsrc) {
+@@ -1603,12 +1612,10 @@ link_it:
+ 	} else {
+ 		change_nexthops(fi) {
+ 			struct hlist_head *head;
+-			unsigned int hash;
+ 
+ 			if (!nexthop_nh->fib_nh_dev)
+ 				continue;
+-			hash = fib_devindex_hashfn(nexthop_nh->fib_nh_dev->ifindex);
+-			head = &fib_info_devhash[hash];
++			head = fib_info_devhash_bucket(nexthop_nh->fib_nh_dev);
+ 			hlist_add_head(&nexthop_nh->nh_hash, head);
+ 		} endfor_nexthops(fi)
+ 	}
+@@ -1958,8 +1965,7 @@ void fib_nhc_update_mtu(struct fib_nh_common *nhc, u32 new, u32 orig)
+ 
+ void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
+ {
+-	unsigned int hash = fib_devindex_hashfn(dev->ifindex);
+-	struct hlist_head *head = &fib_info_devhash[hash];
++	struct hlist_head *head = fib_info_devhash_bucket(dev);
+ 	struct fib_nh *nh;
+ 
+ 	hlist_for_each_entry(nh, head, nh_hash) {
+@@ -1978,12 +1984,11 @@ void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
+  */
+ int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force)
+ {
+-	int ret = 0;
+-	int scope = RT_SCOPE_NOWHERE;
++	struct hlist_head *head = fib_info_devhash_bucket(dev);
+ 	struct fib_info *prev_fi = NULL;
+-	unsigned int hash = fib_devindex_hashfn(dev->ifindex);
+-	struct hlist_head *head = &fib_info_devhash[hash];
++	int scope = RT_SCOPE_NOWHERE;
+ 	struct fib_nh *nh;
++	int ret = 0;
+ 
+ 	if (force)
+ 		scope = -1;
+@@ -2128,7 +2133,6 @@ out:
+ int fib_sync_up(struct net_device *dev, unsigned char nh_flags)
+ {
+ 	struct fib_info *prev_fi;
+-	unsigned int hash;
+ 	struct hlist_head *head;
+ 	struct fib_nh *nh;
+ 	int ret;
+@@ -2144,8 +2148,7 @@ int fib_sync_up(struct net_device *dev, unsigned char nh_flags)
+ 	}
+ 
+ 	prev_fi = NULL;
+-	hash = fib_devindex_hashfn(dev->ifindex);
+-	head = &fib_info_devhash[hash];
++	head = fib_info_devhash_bucket(dev);
+ 	ret = 0;
+ 
+ 	hlist_for_each_entry(nh, head, nh_hash) {
+diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
+index 10d31733297d7..e0e8a65d561ec 100644
+--- a/net/ipv4/inet_fragment.c
++++ b/net/ipv4/inet_fragment.c
+@@ -204,9 +204,9 @@ void inet_frag_kill(struct inet_frag_queue *fq)
+ 		/* The RCU read lock provides a memory barrier
+ 		 * guaranteeing that if fqdir->dead is false then
+ 		 * the hash table destruction will not start until
+-		 * after we unlock.  Paired with inet_frags_exit_net().
++		 * after we unlock.  Paired with fqdir_pre_exit().
+ 		 */
+-		if (!fqdir->dead) {
++		if (!READ_ONCE(fqdir->dead)) {
+ 			rhashtable_remove_fast(&fqdir->rhashtable, &fq->node,
+ 					       fqdir->f->rhash_params);
+ 			refcount_dec(&fq->refcnt);
+@@ -321,9 +321,11 @@ static struct inet_frag_queue *inet_frag_create(struct fqdir *fqdir,
+ /* TODO : call from rcu_read_lock() and no longer use refcount_inc_not_zero() */
+ struct inet_frag_queue *inet_frag_find(struct fqdir *fqdir, void *key)
+ {
++	/* This pairs with WRITE_ONCE() in fqdir_pre_exit(). */
++	long high_thresh = READ_ONCE(fqdir->high_thresh);
+ 	struct inet_frag_queue *fq = NULL, *prev;
+ 
+-	if (!fqdir->high_thresh || frag_mem_limit(fqdir) > fqdir->high_thresh)
++	if (!high_thresh || frag_mem_limit(fqdir) > high_thresh)
+ 		return NULL;
+ 
+ 	rcu_read_lock();
+diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
+index cfeb8890f94ee..fad803d2d711e 100644
+--- a/net/ipv4/ip_fragment.c
++++ b/net/ipv4/ip_fragment.c
+@@ -144,7 +144,8 @@ static void ip_expire(struct timer_list *t)
+ 
+ 	rcu_read_lock();
+ 
+-	if (qp->q.fqdir->dead)
++	/* Paired with WRITE_ONCE() in fqdir_pre_exit(). */
++	if (READ_ONCE(qp->q.fqdir->dead))
+ 		goto out_rcu_unlock;
+ 
+ 	spin_lock(&qp->q.lock);
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index a9cc05043fa47..e4504dd510c6d 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -599,8 +599,9 @@ static int gre_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ 
+ 	key = &info->key;
+ 	ip_tunnel_init_flow(&fl4, IPPROTO_GRE, key->u.ipv4.dst, key->u.ipv4.src,
+-			    tunnel_id_to_key32(key->tun_id), key->tos, 0,
+-			    skb->mark, skb_get_hash(skb));
++			    tunnel_id_to_key32(key->tun_id),
++			    key->tos & ~INET_ECN_MASK, 0, skb->mark,
++			    skb_get_hash(skb));
+ 	rt = ip_route_output_key(dev_net(dev), &fl4);
+ 	if (IS_ERR(rt))
+ 		return PTR_ERR(rt);
+diff --git a/net/ipv4/netfilter/ipt_CLUSTERIP.c b/net/ipv4/netfilter/ipt_CLUSTERIP.c
+index a8b980ad11d4e..1088564d4dbcb 100644
+--- a/net/ipv4/netfilter/ipt_CLUSTERIP.c
++++ b/net/ipv4/netfilter/ipt_CLUSTERIP.c
+@@ -505,8 +505,11 @@ static int clusterip_tg_check(const struct xt_tgchk_param *par)
+ 			if (IS_ERR(config))
+ 				return PTR_ERR(config);
+ 		}
+-	} else if (memcmp(&config->clustermac, &cipinfo->clustermac, ETH_ALEN))
++	} else if (memcmp(&config->clustermac, &cipinfo->clustermac, ETH_ALEN)) {
++		clusterip_config_entry_put(config);
++		clusterip_config_put(config);
+ 		return -EINVAL;
++	}
+ 
+ 	ret = nf_ct_netns_get(par->net, par->family);
+ 	if (ret < 0) {
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 09fa49bbf617d..9a0263f252323 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -755,6 +755,7 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ 		fl6->daddr = key->u.ipv6.dst;
+ 		fl6->flowlabel = key->label;
+ 		fl6->flowi6_uid = sock_net_uid(dev_net(dev), NULL);
++		fl6->fl6_gre_key = tunnel_id_to_key32(key->tun_id);
+ 
+ 		dsfield = key->tos;
+ 		flags = key->tun_flags &
+@@ -990,6 +991,7 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 		fl6.daddr = key->u.ipv6.dst;
+ 		fl6.flowlabel = key->label;
+ 		fl6.flowi6_uid = sock_net_uid(dev_net(dev), NULL);
++		fl6.fl6_gre_key = tunnel_id_to_key32(key->tun_id);
+ 
+ 		dsfield = key->tos;
+ 		if (!(tun_info->key.tun_flags & TUNNEL_ERSPAN_OPT))
+@@ -1098,6 +1100,7 @@ static void ip6gre_tnl_link_config_common(struct ip6_tnl *t)
+ 	fl6->flowi6_oif = p->link;
+ 	fl6->flowlabel = 0;
+ 	fl6->flowi6_proto = IPPROTO_GRE;
++	fl6->fl6_gre_key = t->parms.o_key;
+ 
+ 	if (!(p->flags&IP6_TNL_F_USE_ORIG_TCLASS))
+ 		fl6->flowlabel |= IPV6_TCLASS_MASK & p->flowinfo;
+@@ -1543,7 +1546,7 @@ static void ip6gre_fb_tunnel_init(struct net_device *dev)
+ static struct inet6_protocol ip6gre_protocol __read_mostly = {
+ 	.handler     = gre_rcv,
+ 	.err_handler = ip6gre_err,
+-	.flags       = INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL,
++	.flags       = INET6_PROTO_FINAL,
+ };
+ 
+ static void ip6gre_destroy_tunnels(struct net *net, struct list_head *head)
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 6a24431b90095..d27c444a19ed1 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -4800,7 +4800,7 @@ void ieee80211_rx_list(struct ieee80211_hw *hw, struct ieee80211_sta *pubsta,
+ 				goto drop;
+ 			break;
+ 		case RX_ENC_VHT:
+-			if (WARN_ONCE(status->rate_idx > 9 ||
++			if (WARN_ONCE(status->rate_idx > 11 ||
+ 				      !status->nss ||
+ 				      status->nss > 8,
+ 				      "Rate marked as a VHT rate but data is invalid: MCS: %d, NSS: %d\n",
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 2d73f265b12c9..f67c4436c5d31 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1290,6 +1290,11 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ 	if (!new->scratch_aligned)
+ 		goto out_scratch;
+ #endif
++	for_each_possible_cpu(i)
++		*per_cpu_ptr(new->scratch, i) = NULL;
++
++	if (pipapo_realloc_scratch(new, old->bsize_max))
++		goto out_scratch_realloc;
+ 
+ 	rcu_head_init(&new->rcu);
+ 
+@@ -1334,6 +1339,9 @@ out_lt:
+ 		kvfree(dst->lt);
+ 		dst--;
+ 	}
++out_scratch_realloc:
++	for_each_possible_cpu(i)
++		kfree(*per_cpu_ptr(new->scratch, i));
+ #ifdef NFT_PIPAPO_ALIGN
+ 	free_percpu(new->scratch_aligned);
+ #endif
+diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
+index eef0e3f2f25b0..e5c8a295e6406 100644
+--- a/net/netrom/af_netrom.c
++++ b/net/netrom/af_netrom.c
+@@ -298,7 +298,7 @@ static int nr_setsockopt(struct socket *sock, int level, int optname,
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct nr_sock *nr = nr_sk(sk);
+-	unsigned long opt;
++	unsigned int opt;
+ 
+ 	if (level != SOL_NETROM)
+ 		return -ENOPROTOOPT;
+@@ -306,18 +306,18 @@ static int nr_setsockopt(struct socket *sock, int level, int optname,
+ 	if (optlen < sizeof(unsigned int))
+ 		return -EINVAL;
+ 
+-	if (copy_from_sockptr(&opt, optval, sizeof(unsigned long)))
++	if (copy_from_sockptr(&opt, optval, sizeof(opt)))
+ 		return -EFAULT;
+ 
+ 	switch (optname) {
+ 	case NETROM_T1:
+-		if (opt < 1 || opt > ULONG_MAX / HZ)
++		if (opt < 1 || opt > UINT_MAX / HZ)
+ 			return -EINVAL;
+ 		nr->t1 = opt * HZ;
+ 		return 0;
+ 
+ 	case NETROM_T2:
+-		if (opt < 1 || opt > ULONG_MAX / HZ)
++		if (opt < 1 || opt > UINT_MAX / HZ)
+ 			return -EINVAL;
+ 		nr->t2 = opt * HZ;
+ 		return 0;
+@@ -329,13 +329,13 @@ static int nr_setsockopt(struct socket *sock, int level, int optname,
+ 		return 0;
+ 
+ 	case NETROM_T4:
+-		if (opt < 1 || opt > ULONG_MAX / HZ)
++		if (opt < 1 || opt > UINT_MAX / HZ)
+ 			return -EINVAL;
+ 		nr->t4 = opt * HZ;
+ 		return 0;
+ 
+ 	case NETROM_IDLE:
+-		if (opt > ULONG_MAX / (60 * HZ))
++		if (opt > UINT_MAX / (60 * HZ))
+ 			return -EINVAL;
+ 		nr->idle = opt * 60 * HZ;
+ 		return 0;
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index 6cfd30fc07985..0b93a17b9f11f 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -789,6 +789,11 @@ static int llcp_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 
+ 	lock_sock(sk);
+ 
++	if (!llcp_sock->local) {
++		release_sock(sk);
++		return -ENODEV;
++	}
++
+ 	if (sk->sk_type == SOCK_DGRAM) {
+ 		DECLARE_SOCKADDR(struct sockaddr_nfc_llcp *, addr,
+ 				 msg->msg_name);
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 6a9c1a39874a0..b5005abc84ec2 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -1386,6 +1386,7 @@ void psched_ratecfg_precompute(struct psched_ratecfg *r,
+ {
+ 	memset(r, 0, sizeof(*r));
+ 	r->overhead = conf->overhead;
++	r->mpu = conf->mpu;
+ 	r->rate_bytes_ps = max_t(u64, conf->rate, rate64);
+ 	r->linklayer = (conf->linklayer & TC_LINKLAYER_MASK);
+ 	r->mult = 1;
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index 2a22dc85951ee..4eb9ef9c28003 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1002,16 +1002,11 @@ void smc_smcd_terminate_all(struct smcd_dev *smcd)
+ /* Called when an SMCR device is removed or the smc module is unloaded.
+  * If smcibdev is given, all SMCR link groups using this device are terminated.
+  * If smcibdev is NULL, all SMCR link groups are terminated.
+- *
+- * We must wait here for QPs been destroyed before we destroy the CQs,
+- * or we won't received any CQEs and cdc_pend_tx_wr cannot reach 0 thus
+- * smc_sock cannot be released.
+  */
+ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
+ {
+ 	struct smc_link_group *lgr, *lg;
+ 	LIST_HEAD(lgr_free_list);
+-	LIST_HEAD(lgr_linkdown_list);
+ 	int i;
+ 
+ 	spin_lock_bh(&smc_lgr_list.lock);
+@@ -1023,7 +1018,7 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
+ 		list_for_each_entry_safe(lgr, lg, &smc_lgr_list.list, list) {
+ 			for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
+ 				if (lgr->lnk[i].smcibdev == smcibdev)
+-					list_move_tail(&lgr->list, &lgr_linkdown_list);
++					smcr_link_down_cond_sched(&lgr->lnk[i]);
+ 			}
+ 		}
+ 	}
+@@ -1035,16 +1030,6 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
+ 		__smc_lgr_terminate(lgr, false);
+ 	}
+ 
+-	list_for_each_entry_safe(lgr, lg, &lgr_linkdown_list, list) {
+-		for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
+-			if (lgr->lnk[i].smcibdev == smcibdev) {
+-				mutex_lock(&lgr->llc_conf_mutex);
+-				smcr_link_down_cond(&lgr->lnk[i]);
+-				mutex_unlock(&lgr->llc_conf_mutex);
+-			}
+-		}
+-	}
+-
+ 	if (smcibdev) {
+ 		if (atomic_read(&smcibdev->lnk_cnt))
+ 			wait_event(smcibdev->lnks_deleted,
+diff --git a/net/unix/garbage.c b/net/unix/garbage.c
+index 12e2ddaf887f2..d45d5366115a7 100644
+--- a/net/unix/garbage.c
++++ b/net/unix/garbage.c
+@@ -192,8 +192,11 @@ void wait_for_unix_gc(void)
+ {
+ 	/* If number of inflight sockets is insane,
+ 	 * force a garbage collect right now.
++	 * Paired with the WRITE_ONCE() in unix_inflight(),
++	 * unix_notinflight() and gc_in_progress().
+ 	 */
+-	if (unix_tot_inflight > UNIX_INFLIGHT_TRIGGER_GC && !gc_in_progress)
++	if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC &&
++	    !READ_ONCE(gc_in_progress))
+ 		unix_gc();
+ 	wait_event(unix_gc_wait, gc_in_progress == false);
+ }
+@@ -213,7 +216,9 @@ void unix_gc(void)
+ 	if (gc_in_progress)
+ 		goto out;
+ 
+-	gc_in_progress = true;
++	/* Paired with READ_ONCE() in wait_for_unix_gc(). */
++	WRITE_ONCE(gc_in_progress, true);
++
+ 	/* First, select candidates for garbage collection.  Only
+ 	 * in-flight sockets are considered, and from those only ones
+ 	 * which don't have any external reference.
+@@ -299,7 +304,10 @@ void unix_gc(void)
+ 
+ 	/* All candidates should have been detached by now. */
+ 	BUG_ON(!list_empty(&gc_candidates));
+-	gc_in_progress = false;
++
++	/* Paired with READ_ONCE() in wait_for_unix_gc(). */
++	WRITE_ONCE(gc_in_progress, false);
++
+ 	wake_up(&unix_gc_wait);
+ 
+  out:
+diff --git a/net/unix/scm.c b/net/unix/scm.c
+index 052ae709ce289..aa27a02478dc1 100644
+--- a/net/unix/scm.c
++++ b/net/unix/scm.c
+@@ -60,7 +60,8 @@ void unix_inflight(struct user_struct *user, struct file *fp)
+ 		} else {
+ 			BUG_ON(list_empty(&u->link));
+ 		}
+-		unix_tot_inflight++;
++		/* Paired with READ_ONCE() in wait_for_unix_gc() */
++		WRITE_ONCE(unix_tot_inflight, unix_tot_inflight + 1);
+ 	}
+ 	user->unix_inflight++;
+ 	spin_unlock(&unix_gc_lock);
+@@ -80,7 +81,8 @@ void unix_notinflight(struct user_struct *user, struct file *fp)
+ 
+ 		if (atomic_long_dec_and_test(&u->inflight))
+ 			list_del_init(&u->link);
+-		unix_tot_inflight--;
++		/* Paired with READ_ONCE() in wait_for_unix_gc() */
++		WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - 1);
+ 	}
+ 	user->unix_inflight--;
+ 	spin_unlock(&unix_gc_lock);
+diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c
+index 2bf2693901631..a0f62fa02e06e 100644
+--- a/net/xfrm/xfrm_compat.c
++++ b/net/xfrm/xfrm_compat.c
+@@ -127,6 +127,7 @@ static const struct nla_policy compat_policy[XFRMA_MAX+1] = {
+ 	[XFRMA_SET_MARK]	= { .type = NLA_U32 },
+ 	[XFRMA_SET_MARK_MASK]	= { .type = NLA_U32 },
+ 	[XFRMA_IF_ID]		= { .type = NLA_U32 },
++	[XFRMA_MTIMER_THRESH]	= { .type = NLA_U32 },
+ };
+ 
+ static struct nlmsghdr *xfrm_nlmsg_put_compat(struct sk_buff *skb,
+@@ -274,9 +275,10 @@ static int xfrm_xlate64_attr(struct sk_buff *dst, const struct nlattr *src)
+ 	case XFRMA_SET_MARK:
+ 	case XFRMA_SET_MARK_MASK:
+ 	case XFRMA_IF_ID:
++	case XFRMA_MTIMER_THRESH:
+ 		return xfrm_nla_cpy(dst, src, nla_len(src));
+ 	default:
+-		BUILD_BUG_ON(XFRMA_MAX != XFRMA_IF_ID);
++		BUILD_BUG_ON(XFRMA_MAX != XFRMA_MTIMER_THRESH);
+ 		pr_warn_once("unsupported nla_type %d\n", src->nla_type);
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -431,7 +433,7 @@ static int xfrm_xlate32_attr(void *dst, const struct nlattr *nla,
+ 	int err;
+ 
+ 	if (type > XFRMA_MAX) {
+-		BUILD_BUG_ON(XFRMA_MAX != XFRMA_IF_ID);
++		BUILD_BUG_ON(XFRMA_MAX != XFRMA_MTIMER_THRESH);
+ 		NL_SET_ERR_MSG(extack, "Bad attribute");
+ 		return -EOPNOTSUPP;
+ 	}
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+index e9ce23343f5ca..e1fae61a5bb90 100644
+--- a/net/xfrm/xfrm_interface.c
++++ b/net/xfrm/xfrm_interface.c
+@@ -643,11 +643,16 @@ static int xfrmi_newlink(struct net *src_net, struct net_device *dev,
+ 			struct netlink_ext_ack *extack)
+ {
+ 	struct net *net = dev_net(dev);
+-	struct xfrm_if_parms p;
++	struct xfrm_if_parms p = {};
+ 	struct xfrm_if *xi;
+ 	int err;
+ 
+ 	xfrmi_netlink_parms(data, &p);
++	if (!p.if_id) {
++		NL_SET_ERR_MSG(extack, "if_id must be non zero");
++		return -EINVAL;
++	}
++
+ 	xi = xfrmi_locate(net, &p);
+ 	if (xi)
+ 		return -EEXIST;
+@@ -672,7 +677,12 @@ static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
+ {
+ 	struct xfrm_if *xi = netdev_priv(dev);
+ 	struct net *net = xi->net;
+-	struct xfrm_if_parms p;
++	struct xfrm_if_parms p = {};
++
++	if (!p.if_id) {
++		NL_SET_ERR_MSG(extack, "if_id must be non zero");
++		return -EINVAL;
++	}
+ 
+ 	xfrmi_netlink_parms(data, &p);
+ 	xi = xfrmi_locate(net, &p);
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 3a9831c05ec71..c4a195cb36817 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -31,8 +31,10 @@
+ #include <linux/if_tunnel.h>
+ #include <net/dst.h>
+ #include <net/flow.h>
++#include <net/inet_ecn.h>
+ #include <net/xfrm.h>
+ #include <net/ip.h>
++#include <net/gre.h>
+ #if IS_ENABLED(CONFIG_IPV6_MIP6)
+ #include <net/mip6.h>
+ #endif
+@@ -3293,7 +3295,7 @@ decode_session4(struct sk_buff *skb, struct flowi *fl, bool reverse)
+ 	fl4->flowi4_proto = iph->protocol;
+ 	fl4->daddr = reverse ? iph->saddr : iph->daddr;
+ 	fl4->saddr = reverse ? iph->daddr : iph->saddr;
+-	fl4->flowi4_tos = iph->tos;
++	fl4->flowi4_tos = iph->tos & ~INET_ECN_MASK;
+ 
+ 	if (!ip_is_fragment(iph)) {
+ 		switch (iph->protocol) {
+@@ -3455,6 +3457,26 @@ decode_session6(struct sk_buff *skb, struct flowi *fl, bool reverse)
+ 			}
+ 			fl6->flowi6_proto = nexthdr;
+ 			return;
++		case IPPROTO_GRE:
++			if (!onlyproto &&
++			    (nh + offset + 12 < skb->data ||
++			     pskb_may_pull(skb, nh + offset + 12 - skb->data))) {
++				struct gre_base_hdr *gre_hdr;
++				__be32 *gre_key;
++
++				nh = skb_network_header(skb);
++				gre_hdr = (struct gre_base_hdr *)(nh + offset);
++				gre_key = (__be32 *)(gre_hdr + 1);
++
++				if (gre_hdr->flags & GRE_KEY) {
++					if (gre_hdr->flags & GRE_CSUM)
++						gre_key++;
++					fl6->fl6_gre_key = *gre_key;
++				}
++			}
++			fl6->flowi6_proto = nexthdr;
++			return;
++
+ #if IS_ENABLED(CONFIG_IPV6_MIP6)
+ 		case IPPROTO_MH:
+ 			offset += ipv6_optlen(exthdr);
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index c158e70e8ae10..65e2805fa113a 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -1557,6 +1557,9 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
+ 	x->km.seq = orig->km.seq;
+ 	x->replay = orig->replay;
+ 	x->preplay = orig->preplay;
++	x->mapping_maxage = orig->mapping_maxage;
++	x->new_mapping = 0;
++	x->new_mapping_sport = 0;
+ 
+ 	return x;
+ 
+@@ -2208,7 +2211,7 @@ int km_query(struct xfrm_state *x, struct xfrm_tmpl *t, struct xfrm_policy *pol)
+ }
+ EXPORT_SYMBOL(km_query);
+ 
+-int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport)
++static int __km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport)
+ {
+ 	int err = -EINVAL;
+ 	struct xfrm_mgr *km;
+@@ -2223,6 +2226,24 @@ int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport)
+ 	rcu_read_unlock();
+ 	return err;
+ }
++
++int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport)
++{
++	int ret = 0;
++
++	if (x->mapping_maxage) {
++		if ((jiffies / HZ - x->new_mapping) > x->mapping_maxage ||
++		    x->new_mapping_sport != sport) {
++			x->new_mapping_sport = sport;
++			x->new_mapping = jiffies / HZ;
++			ret = __km_new_mapping(x, ipaddr, sport);
++		}
++	} else {
++		ret = __km_new_mapping(x, ipaddr, sport);
++	}
++
++	return ret;
++}
+ EXPORT_SYMBOL(km_new_mapping);
+ 
+ void km_policy_expired(struct xfrm_policy *pol, int dir, int hard, u32 portid)
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 6f97665b632ed..d0fdfbf4c5f72 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -282,6 +282,10 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
+ 
+ 	err = 0;
+ 
++	if (attrs[XFRMA_MTIMER_THRESH])
++		if (!attrs[XFRMA_ENCAP])
++			err = -EINVAL;
++
+ out:
+ 	return err;
+ }
+@@ -521,6 +525,7 @@ static void xfrm_update_ae_params(struct xfrm_state *x, struct nlattr **attrs,
+ 	struct nlattr *lt = attrs[XFRMA_LTIME_VAL];
+ 	struct nlattr *et = attrs[XFRMA_ETIMER_THRESH];
+ 	struct nlattr *rt = attrs[XFRMA_REPLAY_THRESH];
++	struct nlattr *mt = attrs[XFRMA_MTIMER_THRESH];
+ 
+ 	if (re) {
+ 		struct xfrm_replay_state_esn *replay_esn;
+@@ -552,6 +557,9 @@ static void xfrm_update_ae_params(struct xfrm_state *x, struct nlattr **attrs,
+ 
+ 	if (rt)
+ 		x->replay_maxdiff = nla_get_u32(rt);
++
++	if (mt)
++		x->mapping_maxage = nla_get_u32(mt);
+ }
+ 
+ static void xfrm_smark_init(struct nlattr **attrs, struct xfrm_mark *m)
+@@ -621,8 +629,13 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
+ 
+ 	xfrm_smark_init(attrs, &x->props.smark);
+ 
+-	if (attrs[XFRMA_IF_ID])
++	if (attrs[XFRMA_IF_ID]) {
+ 		x->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
++		if (!x->if_id) {
++			err = -EINVAL;
++			goto error;
++		}
++	}
+ 
+ 	err = __xfrm_init_state(x, false, attrs[XFRMA_OFFLOAD_DEV]);
+ 	if (err)
+@@ -964,8 +977,13 @@ static int copy_to_user_state_extra(struct xfrm_state *x,
+ 		if (ret)
+ 			goto out;
+ 	}
+-	if (x->security)
++	if (x->security) {
+ 		ret = copy_sec_ctx(x->security, skb);
++		if (ret)
++			goto out;
++	}
++	if (x->mapping_maxage)
++		ret = nla_put_u32(skb, XFRMA_MTIMER_THRESH, x->mapping_maxage);
+ out:
+ 	return ret;
+ }
+@@ -1353,8 +1371,13 @@ static int xfrm_alloc_userspi(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 	mark = xfrm_mark_get(attrs, &m);
+ 
+-	if (attrs[XFRMA_IF_ID])
++	if (attrs[XFRMA_IF_ID]) {
+ 		if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
++		if (!if_id) {
++			err = -EINVAL;
++			goto out_noput;
++		}
++	}
+ 
+ 	if (p->info.seq) {
+ 		x = xfrm_find_acq_byseq(net, mark, p->info.seq);
+@@ -1667,8 +1690,13 @@ static struct xfrm_policy *xfrm_policy_construct(struct net *net, struct xfrm_us
+ 
+ 	xfrm_mark_get(attrs, &xp->mark);
+ 
+-	if (attrs[XFRMA_IF_ID])
++	if (attrs[XFRMA_IF_ID]) {
+ 		xp->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
++		if (!xp->if_id) {
++			err = -EINVAL;
++			goto error;
++		}
++	}
+ 
+ 	return xp;
+  error:
+@@ -2898,7 +2926,7 @@ static inline unsigned int xfrm_sa_len(struct xfrm_state *x)
+ 	if (x->props.extra_flags)
+ 		l += nla_total_size(sizeof(x->props.extra_flags));
+ 	if (x->xso.dev)
+-		 l += nla_total_size(sizeof(x->xso));
++		 l += nla_total_size(sizeof(struct xfrm_user_offload));
+ 	if (x->props.smark.v | x->props.smark.m) {
+ 		l += nla_total_size(sizeof(x->props.smark.v));
+ 		l += nla_total_size(sizeof(x->props.smark.m));
+@@ -2909,6 +2937,9 @@ static inline unsigned int xfrm_sa_len(struct xfrm_state *x)
+ 	/* Must count x->lastused as it may become non-zero behind our back. */
+ 	l += nla_total_size_64bit(sizeof(u64));
+ 
++	if (x->mapping_maxage)
++		l += nla_total_size(sizeof(x->mapping_maxage));
++
+ 	return l;
+ }
+ 
+diff --git a/scripts/dtc/dtx_diff b/scripts/dtc/dtx_diff
+index d3422ee15e300..f2bbde4bba86b 100755
+--- a/scripts/dtc/dtx_diff
++++ b/scripts/dtc/dtx_diff
+@@ -59,12 +59,8 @@ Otherwise DTx is treated as a dts source file (aka .dts).
+    or '/include/' to be processed.
+ 
+    If DTx_1 and DTx_2 are in different architectures, then this script
+-   may not work since \${ARCH} is part of the include path.  Two possible
+-   workarounds:
+-
+-      `basename $0` \\
+-          <(ARCH=arch_of_dtx_1 `basename $0` DTx_1) \\
+-          <(ARCH=arch_of_dtx_2 `basename $0` DTx_2)
++   may not work since \${ARCH} is part of the include path.  The following
++   workaround can be used:
+ 
+       `basename $0` ARCH=arch_of_dtx_1 DTx_1 >tmp_dtx_1.dts
+       `basename $0` ARCH=arch_of_dtx_2 DTx_2 >tmp_dtx_2.dts
+diff --git a/scripts/sphinx-pre-install b/scripts/sphinx-pre-install
+index 828a8615a9181..8fcea769d44f5 100755
+--- a/scripts/sphinx-pre-install
++++ b/scripts/sphinx-pre-install
+@@ -76,6 +76,7 @@ my %texlive = (
+ 	'ucs.sty'            => 'texlive-ucs',
+ 	'upquote.sty'        => 'texlive-upquote',
+ 	'wrapfig.sty'        => 'texlive-wrapfig',
++	'ctexhook.sty'       => 'texlive-ctex',
+ );
+ 
+ #
+@@ -370,6 +371,9 @@ sub give_debian_hints()
+ 	);
+ 
+ 	if ($pdf) {
++		check_missing_file(["/usr/share/texlive/texmf-dist/tex/latex/ctex/ctexhook.sty"],
++				   "texlive-lang-chinese", 2);
++
+ 		check_missing_file(["/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf"],
+ 				   "fonts-dejavu", 2);
+ 
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index ff2191ae53528..86159b32921cc 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -947,18 +947,22 @@ out:
+ static int selinux_add_opt(int token, const char *s, void **mnt_opts)
+ {
+ 	struct selinux_mnt_opts *opts = *mnt_opts;
++	bool is_alloc_opts = false;
+ 
+ 	if (token == Opt_seclabel)	/* eaten and completely ignored */
+ 		return 0;
+ 
++	if (!s)
++		return -ENOMEM;
++
+ 	if (!opts) {
+ 		opts = kzalloc(sizeof(struct selinux_mnt_opts), GFP_KERNEL);
+ 		if (!opts)
+ 			return -ENOMEM;
+ 		*mnt_opts = opts;
++		is_alloc_opts = true;
+ 	}
+-	if (!s)
+-		return -ENOMEM;
++
+ 	switch (token) {
+ 	case Opt_context:
+ 		if (opts->context || opts->defcontext)
+@@ -983,6 +987,10 @@ static int selinux_add_opt(int token, const char *s, void **mnt_opts)
+ 	}
+ 	return 0;
+ Einval:
++	if (is_alloc_opts) {
++		kfree(opts);
++		*mnt_opts = NULL;
++	}
+ 	pr_warn(SEL_MOUNT_FAIL_MSG);
+ 	return -EINVAL;
+ }
+diff --git a/sound/core/jack.c b/sound/core/jack.c
+index d6502dff247a8..dc2e06ae24149 100644
+--- a/sound/core/jack.c
++++ b/sound/core/jack.c
+@@ -54,10 +54,13 @@ static int snd_jack_dev_free(struct snd_device *device)
+ 	struct snd_card *card = device->card;
+ 	struct snd_jack_kctl *jack_kctl, *tmp_jack_kctl;
+ 
++	down_write(&card->controls_rwsem);
+ 	list_for_each_entry_safe(jack_kctl, tmp_jack_kctl, &jack->kctl_list, list) {
+ 		list_del_init(&jack_kctl->list);
+ 		snd_ctl_remove(card, jack_kctl->kctl);
+ 	}
++	up_write(&card->controls_rwsem);
++
+ 	if (jack->private_free)
+ 		jack->private_free(jack);
+ 
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 77727a69c3c4e..d79febeebf0c5 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -2056,7 +2056,7 @@ static int snd_pcm_oss_set_trigger(struct snd_pcm_oss_file *pcm_oss_file, int tr
+ 	int err, cmd;
+ 
+ #ifdef OSS_DEBUG
+-	pcm_dbg(substream->pcm, "pcm_oss: trigger = 0x%x\n", trigger);
++	pr_debug("pcm_oss: trigger = 0x%x\n", trigger);
+ #endif
+ 	
+ 	psubstream = pcm_oss_file->streams[SNDRV_PCM_STREAM_PLAYBACK];
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index 41cbdac5b1cfa..a8ae5928decda 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -810,7 +810,11 @@ EXPORT_SYMBOL(snd_pcm_new_internal);
+ static void free_chmap(struct snd_pcm_str *pstr)
+ {
+ 	if (pstr->chmap_kctl) {
+-		snd_ctl_remove(pstr->pcm->card, pstr->chmap_kctl);
++		struct snd_card *card = pstr->pcm->card;
++
++		down_write(&card->controls_rwsem);
++		snd_ctl_remove(card, pstr->chmap_kctl);
++		up_write(&card->controls_rwsem);
+ 		pstr->chmap_kctl = NULL;
+ 	}
+ }
+diff --git a/sound/core/seq/seq_queue.c b/sound/core/seq/seq_queue.c
+index 71a6ea62c3be7..4ff0b927230c2 100644
+--- a/sound/core/seq/seq_queue.c
++++ b/sound/core/seq/seq_queue.c
+@@ -234,12 +234,15 @@ struct snd_seq_queue *snd_seq_queue_find_name(char *name)
+ 
+ /* -------------------------------------------------------- */
+ 
++#define MAX_CELL_PROCESSES_IN_QUEUE	1000
++
+ void snd_seq_check_queue(struct snd_seq_queue *q, int atomic, int hop)
+ {
+ 	unsigned long flags;
+ 	struct snd_seq_event_cell *cell;
+ 	snd_seq_tick_time_t cur_tick;
+ 	snd_seq_real_time_t cur_time;
++	int processed = 0;
+ 
+ 	if (q == NULL)
+ 		return;
+@@ -262,6 +265,8 @@ void snd_seq_check_queue(struct snd_seq_queue *q, int atomic, int hop)
+ 		if (!cell)
+ 			break;
+ 		snd_seq_dispatch_event(cell, atomic, hop);
++		if (++processed >= MAX_CELL_PROCESSES_IN_QUEUE)
++			goto out; /* the rest processed at the next batch */
+ 	}
+ 
+ 	/* Process time queue... */
+@@ -271,14 +276,19 @@ void snd_seq_check_queue(struct snd_seq_queue *q, int atomic, int hop)
+ 		if (!cell)
+ 			break;
+ 		snd_seq_dispatch_event(cell, atomic, hop);
++		if (++processed >= MAX_CELL_PROCESSES_IN_QUEUE)
++			goto out; /* the rest processed at the next batch */
+ 	}
+ 
++ out:
+ 	/* free lock */
+ 	spin_lock_irqsave(&q->check_lock, flags);
+ 	if (q->check_again) {
+ 		q->check_again = 0;
+-		spin_unlock_irqrestore(&q->check_lock, flags);
+-		goto __again;
++		if (processed < MAX_CELL_PROCESSES_IN_QUEUE) {
++			spin_unlock_irqrestore(&q->check_lock, flags);
++			goto __again;
++		}
+ 	}
+ 	q->check_blocked = 0;
+ 	spin_unlock_irqrestore(&q->check_lock, flags);
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 6dece719be669..39281106477eb 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -1727,8 +1727,11 @@ void snd_hda_ctls_clear(struct hda_codec *codec)
+ {
+ 	int i;
+ 	struct hda_nid_item *items = codec->mixers.list;
++
++	down_write(&codec->card->controls_rwsem);
+ 	for (i = 0; i < codec->mixers.used; i++)
+ 		snd_ctl_remove(codec->card, items[i].kctl);
++	up_write(&codec->card->controls_rwsem);
+ 	snd_array_free(&codec->mixers);
+ 	snd_array_free(&codec->nids);
+ }
+diff --git a/sound/soc/codecs/rt5663.c b/sound/soc/codecs/rt5663.c
+index 619fb9a031e39..db8a41aaa3859 100644
+--- a/sound/soc/codecs/rt5663.c
++++ b/sound/soc/codecs/rt5663.c
+@@ -3461,6 +3461,7 @@ static void rt5663_calibrate(struct rt5663_priv *rt5663)
+ static int rt5663_parse_dp(struct rt5663_priv *rt5663, struct device *dev)
+ {
+ 	int table_size;
++	int ret;
+ 
+ 	device_property_read_u32(dev, "realtek,dc_offset_l_manual",
+ 		&rt5663->pdata.dc_offset_l_manual);
+@@ -3477,9 +3478,11 @@ static int rt5663_parse_dp(struct rt5663_priv *rt5663, struct device *dev)
+ 		table_size = sizeof(struct impedance_mapping_table) *
+ 			rt5663->pdata.impedance_sensing_num;
+ 		rt5663->imp_table = devm_kzalloc(dev, table_size, GFP_KERNEL);
+-		device_property_read_u32_array(dev,
++		ret = device_property_read_u32_array(dev,
+ 			"realtek,impedance_sensing_table",
+ 			(u32 *)rt5663->imp_table, table_size);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	return 0;
+@@ -3504,8 +3507,11 @@ static int rt5663_i2c_probe(struct i2c_client *i2c,
+ 
+ 	if (pdata)
+ 		rt5663->pdata = *pdata;
+-	else
+-		rt5663_parse_dp(rt5663, &i2c->dev);
++	else {
++		ret = rt5663_parse_dp(rt5663, &i2c->dev);
++		if (ret)
++			return ret;
++	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(rt5663->supplies); i++)
+ 		rt5663->supplies[i].supply = rt5663_supply_names[i];
+diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
+index 02c81d2e34ad0..5e3c71f025f45 100644
+--- a/sound/soc/fsl/fsl_asrc.c
++++ b/sound/soc/fsl/fsl_asrc.c
+@@ -19,6 +19,7 @@
+ #include "fsl_asrc.h"
+ 
+ #define IDEAL_RATIO_DECIMAL_DEPTH 26
++#define DIVIDER_NUM  64
+ 
+ #define pair_err(fmt, ...) \
+ 	dev_err(&asrc->pdev->dev, "Pair %c: " fmt, 'A' + index, ##__VA_ARGS__)
+@@ -101,6 +102,55 @@ static unsigned char clk_map_imx8qxp[2][ASRC_CLK_MAP_LEN] = {
+ 	},
+ };
+ 
++/*
++ * According to RM, the divider range is 1 ~ 8,
++ * prescaler is power of 2 from 1 ~ 128.
++ */
++static int asrc_clk_divider[DIVIDER_NUM] = {
++	1,  2,  4,  8,  16,  32,  64,  128,  /* divider = 1 */
++	2,  4,  8, 16,  32,  64, 128,  256,  /* divider = 2 */
++	3,  6, 12, 24,  48,  96, 192,  384,  /* divider = 3 */
++	4,  8, 16, 32,  64, 128, 256,  512,  /* divider = 4 */
++	5, 10, 20, 40,  80, 160, 320,  640,  /* divider = 5 */
++	6, 12, 24, 48,  96, 192, 384,  768,  /* divider = 6 */
++	7, 14, 28, 56, 112, 224, 448,  896,  /* divider = 7 */
++	8, 16, 32, 64, 128, 256, 512, 1024,  /* divider = 8 */
++};
++
++/*
++ * Check if the divider is available for internal ratio mode
++ */
++static bool fsl_asrc_divider_avail(int clk_rate, int rate, int *div)
++{
++	u32 rem, i;
++	u64 n;
++
++	if (div)
++		*div = 0;
++
++	if (clk_rate == 0 || rate == 0)
++		return false;
++
++	n = clk_rate;
++	rem = do_div(n, rate);
++
++	if (div)
++		*div = n;
++
++	if (rem != 0)
++		return false;
++
++	for (i = 0; i < DIVIDER_NUM; i++) {
++		if (n == asrc_clk_divider[i])
++			break;
++	}
++
++	if (i == DIVIDER_NUM)
++		return false;
++
++	return true;
++}
++
+ /**
+  * fsl_asrc_sel_proc - Select the pre-processing and post-processing options
+  * @inrate: input sample rate
+@@ -330,12 +380,12 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
+ 	enum asrc_word_width input_word_width;
+ 	enum asrc_word_width output_word_width;
+ 	u32 inrate, outrate, indiv, outdiv;
+-	u32 clk_index[2], div[2], rem[2];
++	u32 clk_index[2], div[2];
+ 	u64 clk_rate;
+ 	int in, out, channels;
+ 	int pre_proc, post_proc;
+ 	struct clk *clk;
+-	bool ideal;
++	bool ideal, div_avail;
+ 
+ 	if (!config) {
+ 		pair_err("invalid pair config\n");
+@@ -415,8 +465,7 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
+ 	clk = asrc_priv->asrck_clk[clk_index[ideal ? OUT : IN]];
+ 
+ 	clk_rate = clk_get_rate(clk);
+-	rem[IN] = do_div(clk_rate, inrate);
+-	div[IN] = (u32)clk_rate;
++	div_avail = fsl_asrc_divider_avail(clk_rate, inrate, &div[IN]);
+ 
+ 	/*
+ 	 * The divider range is [1, 1024], defined by the hardware. For non-
+@@ -425,7 +474,7 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
+ 	 * only result in different converting speeds. So remainder does not
+ 	 * matter, as long as we keep the divider within its valid range.
+ 	 */
+-	if (div[IN] == 0 || (!ideal && (div[IN] > 1024 || rem[IN] != 0))) {
++	if (div[IN] == 0 || (!ideal && !div_avail)) {
+ 		pair_err("failed to support input sample rate %dHz by asrck_%x\n",
+ 				inrate, clk_index[ideal ? OUT : IN]);
+ 		return -EINVAL;
+@@ -436,13 +485,12 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
+ 	clk = asrc_priv->asrck_clk[clk_index[OUT]];
+ 	clk_rate = clk_get_rate(clk);
+ 	if (ideal && use_ideal_rate)
+-		rem[OUT] = do_div(clk_rate, IDEAL_RATIO_RATE);
++		div_avail = fsl_asrc_divider_avail(clk_rate, IDEAL_RATIO_RATE, &div[OUT]);
+ 	else
+-		rem[OUT] = do_div(clk_rate, outrate);
+-	div[OUT] = clk_rate;
++		div_avail = fsl_asrc_divider_avail(clk_rate, outrate, &div[OUT]);
+ 
+ 	/* Output divider has the same limitation as the input one */
+-	if (div[OUT] == 0 || (!ideal && (div[OUT] > 1024 || rem[OUT] != 0))) {
++	if (div[OUT] == 0 || (!ideal && !div_avail)) {
+ 		pair_err("failed to support output sample rate %dHz by asrck_%x\n",
+ 				outrate, clk_index[OUT]);
+ 		return -EINVAL;
+@@ -621,8 +669,7 @@ static void fsl_asrc_select_clk(struct fsl_asrc_priv *asrc_priv,
+ 			clk_index = asrc_priv->clk_map[j][i];
+ 			clk_rate = clk_get_rate(asrc_priv->asrck_clk[clk_index]);
+ 			/* Only match a perfect clock source with no remainder */
+-			if (clk_rate != 0 && (clk_rate / rate[j]) <= 1024 &&
+-			    (clk_rate % rate[j]) == 0)
++			if (fsl_asrc_divider_avail(clk_rate, rate[j], NULL))
+ 				break;
+ 		}
+ 
+diff --git a/sound/soc/fsl/fsl_mqs.c b/sound/soc/fsl/fsl_mqs.c
+index 69aeb0e71844d..0d4efbed41dab 100644
+--- a/sound/soc/fsl/fsl_mqs.c
++++ b/sound/soc/fsl/fsl_mqs.c
+@@ -337,4 +337,4 @@ module_platform_driver(fsl_mqs_driver);
+ MODULE_AUTHOR("Shengjiu Wang <Shengjiu.Wang@nxp.com>");
+ MODULE_DESCRIPTION("MQS codec driver");
+ MODULE_LICENSE("GPL v2");
+-MODULE_ALIAS("platform: fsl-mqs");
++MODULE_ALIAS("platform:fsl-mqs");
+diff --git a/sound/soc/intel/catpt/dsp.c b/sound/soc/intel/catpt/dsp.c
+index 9e807b9417321..38a92bbc1ed56 100644
+--- a/sound/soc/intel/catpt/dsp.c
++++ b/sound/soc/intel/catpt/dsp.c
+@@ -65,6 +65,7 @@ static int catpt_dma_memcpy(struct catpt_dev *cdev, struct dma_chan *chan,
+ {
+ 	struct dma_async_tx_descriptor *desc;
+ 	enum dma_status status;
++	int ret;
+ 
+ 	desc = dmaengine_prep_dma_memcpy(chan, dst_addr, src_addr, size,
+ 					 DMA_CTRL_ACK);
+@@ -77,13 +78,22 @@ static int catpt_dma_memcpy(struct catpt_dev *cdev, struct dma_chan *chan,
+ 	catpt_updatel_shim(cdev, HMDC,
+ 			   CATPT_HMDC_HDDA(CATPT_DMA_DEVID, chan->chan_id),
+ 			   CATPT_HMDC_HDDA(CATPT_DMA_DEVID, chan->chan_id));
+-	dmaengine_submit(desc);
++
++	ret = dma_submit_error(dmaengine_submit(desc));
++	if (ret) {
++		dev_err(cdev->dev, "submit tx failed: %d\n", ret);
++		goto clear_hdda;
++	}
++
+ 	status = dma_wait_for_async_tx(desc);
++	ret = (status == DMA_COMPLETE) ? 0 : -EPROTO;
++
++clear_hdda:
+ 	/* regardless of status, disable access to HOST memory in demand mode */
+ 	catpt_updatel_shim(cdev, HMDC,
+ 			   CATPT_HMDC_HDDA(CATPT_DMA_DEVID, chan->chan_id), 0);
+ 
+-	return (status == DMA_COMPLETE) ? 0 : -EPROTO;
++	return ret;
+ }
+ 
+ int catpt_dma_memcpy_todsp(struct catpt_dev *cdev, struct dma_chan *chan,
+diff --git a/sound/soc/mediatek/mt8173/mt8173-max98090.c b/sound/soc/mediatek/mt8173/mt8173-max98090.c
+index fc94314bfc02f..3bdd4931316cd 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-max98090.c
++++ b/sound/soc/mediatek/mt8173/mt8173-max98090.c
+@@ -180,6 +180,9 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
++
++	of_node_put(codec_node);
++	of_node_put(platform_node);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
+index 0f28dc2217c09..390da5bf727eb 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
+@@ -218,6 +218,8 @@ static int mt8173_rt5650_rt5514_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
++
++	of_node_put(platform_node);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
+index 077c6ee067806..c8e4e85e10575 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
+@@ -285,6 +285,8 @@ static int mt8173_rt5650_rt5676_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
++
++	of_node_put(platform_node);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650.c b/sound/soc/mediatek/mt8173/mt8173-rt5650.c
+index c28ebf891cb05..e168d31f44459 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-rt5650.c
++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650.c
+@@ -323,6 +323,8 @@ static int mt8173_rt5650_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
++
++	of_node_put(platform_node);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c b/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
+index 20d31b69a5c00..9cc0f26b08fbc 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
++++ b/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
+@@ -787,7 +787,11 @@ static int mt8183_da7219_max98357_dev_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	return devm_snd_soc_register_card(&pdev->dev, card);
++	ret = devm_snd_soc_register_card(&pdev->dev, card);
++
++	of_node_put(platform_node);
++	of_node_put(hdmi_codec);
++	return ret;
+ }
+ 
+ #ifdef CONFIG_OF
+diff --git a/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c b/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
+index 79ba2f2d84522..14ce8b93597f3 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
++++ b/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
+@@ -720,7 +720,12 @@ mt8183_mt6358_ts3a227_max98357_dev_probe(struct platform_device *pdev)
+ 				 __func__, ret);
+ 	}
+ 
+-	return devm_snd_soc_register_card(&pdev->dev, card);
++	ret = devm_snd_soc_register_card(&pdev->dev, card);
++
++	of_node_put(platform_node);
++	of_node_put(ec_codec);
++	of_node_put(hdmi_codec);
++	return ret;
+ }
+ 
+ #ifdef CONFIG_OF
+diff --git a/sound/soc/samsung/idma.c b/sound/soc/samsung/idma.c
+index 66bcc2f97544b..c3f1b054e2389 100644
+--- a/sound/soc/samsung/idma.c
++++ b/sound/soc/samsung/idma.c
+@@ -360,6 +360,8 @@ static int preallocate_idma_buffer(struct snd_pcm *pcm, int stream)
+ 	buf->addr = idma.lp_tx_addr;
+ 	buf->bytes = idma_hardware.buffer_bytes_max;
+ 	buf->area = (unsigned char * __force)ioremap(buf->addr, buf->bytes);
++	if (!buf->area)
++		return -ENOMEM;
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/uniphier/Kconfig b/sound/soc/uniphier/Kconfig
+index aa3592ee1358b..ddfa6424c656b 100644
+--- a/sound/soc/uniphier/Kconfig
++++ b/sound/soc/uniphier/Kconfig
+@@ -23,7 +23,6 @@ config SND_SOC_UNIPHIER_LD11
+ 	tristate "UniPhier LD11/LD20 Device Driver"
+ 	depends on SND_SOC_UNIPHIER
+ 	select SND_SOC_UNIPHIER_AIO
+-	select SND_SOC_UNIPHIER_AIO_DMA
+ 	help
+ 	  This adds ASoC driver for Socionext UniPhier LD11/LD20
+ 	  input and output that can be used with other codecs.
+@@ -34,7 +33,6 @@ config SND_SOC_UNIPHIER_PXS2
+ 	tristate "UniPhier PXs2 Device Driver"
+ 	depends on SND_SOC_UNIPHIER
+ 	select SND_SOC_UNIPHIER_AIO
+-	select SND_SOC_UNIPHIER_AIO_DMA
+ 	help
+ 	  This adds ASoC driver for Socionext UniPhier PXs2
+ 	  input and output that can be used with other codecs.
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 4693384db0695..e8a63ea2189d1 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -365,7 +365,7 @@ static int parse_uac2_sample_rate_range(struct snd_usb_audio *chip,
+ 		for (rate = min; rate <= max; rate += res) {
+ 
+ 			/* Filter out invalid rates on Presonus Studio 1810c */
+-			if (chip->usb_id == USB_ID(0x0194f, 0x010c) &&
++			if (chip->usb_id == USB_ID(0x194f, 0x010c) &&
+ 			    !s1810c_valid_sample_rate(fp, rate))
+ 				goto skip_rate;
+ 
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 8297117f4766e..86fdd669f3fd7 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -3033,7 +3033,7 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ 		err = snd_rme_controls_create(mixer);
+ 		break;
+ 
+-	case USB_ID(0x0194f, 0x010c): /* Presonus Studio 1810c */
++	case USB_ID(0x194f, 0x010c): /* Presonus Studio 1810c */
+ 		err = snd_sc1810_init_mixer(mixer);
+ 		break;
+ 	case USB_ID(0x2a39, 0x3fb0): /* RME Babyface Pro FS */
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 75d4d317b34b6..6333a2ecb848a 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1310,7 +1310,7 @@ int snd_usb_apply_interface_quirk(struct snd_usb_audio *chip,
+ 	if (chip->usb_id == USB_ID(0x0763, 0x2012))
+ 		return fasttrackpro_skip_setting_quirk(chip, iface, altno);
+ 	/* presonus studio 1810c: skip altsets incompatible with device_setup */
+-	if (chip->usb_id == USB_ID(0x0194f, 0x010c))
++	if (chip->usb_id == USB_ID(0x194f, 0x010c))
+ 		return s1810c_skip_setting_quirk(chip, iface, altno);
+ 
+ 
+diff --git a/tools/bpf/bpftool/Documentation/Makefile b/tools/bpf/bpftool/Documentation/Makefile
+index f33cb02de95cf..3601b1d1974ca 100644
+--- a/tools/bpf/bpftool/Documentation/Makefile
++++ b/tools/bpf/bpftool/Documentation/Makefile
+@@ -1,6 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ include ../../../scripts/Makefile.include
+-include ../../../scripts/utilities.mak
+ 
+ INSTALL ?= install
+ RM ?= rm -f
+diff --git a/tools/bpf/bpftool/Makefile b/tools/bpf/bpftool/Makefile
+index f60e6ad3a1dff..1896ef69b4492 100644
+--- a/tools/bpf/bpftool/Makefile
++++ b/tools/bpf/bpftool/Makefile
+@@ -1,6 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ include ../../scripts/Makefile.include
+-include ../../scripts/utilities.mak
+ 
+ ifeq ($(srctree),)
+ srctree := $(patsubst %/,%,$(dir $(CURDIR)))
+diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
+index c58a135dc355e..1854d6b978604 100644
+--- a/tools/bpf/bpftool/main.c
++++ b/tools/bpf/bpftool/main.c
+@@ -396,6 +396,8 @@ int main(int argc, char **argv)
+ 	};
+ 	int opt, ret;
+ 
++	setlinebuf(stdout);
++
+ 	last_do_help = do_help;
+ 	pretty_output = false;
+ 	json_output = false;
+diff --git a/tools/include/nolibc/nolibc.h b/tools/include/nolibc/nolibc.h
+index 2551e9b71167b..b8cecb66d28b7 100644
+--- a/tools/include/nolibc/nolibc.h
++++ b/tools/include/nolibc/nolibc.h
+@@ -422,16 +422,22 @@ struct stat {
+ })
+ 
+ /* startup code */
++/*
++ * x86-64 System V ABI mandates:
++ * 1) %rsp must be 16-byte aligned right before the function call.
++ * 2) The deepest stack frame should be zero (the %rbp).
++ *
++ */
+ asm(".section .text\n"
+     ".global _start\n"
+     "_start:\n"
+     "pop %rdi\n"                // argc   (first arg, %rdi)
+     "mov %rsp, %rsi\n"          // argv[] (second arg, %rsi)
+     "lea 8(%rsi,%rdi,8),%rdx\n" // then a NULL then envp (third arg, %rdx)
+-    "and $-16, %rsp\n"          // x86 ABI : esp must be 16-byte aligned when
+-    "sub $8, %rsp\n"            // entering the callee
++    "xor %ebp, %ebp\n"          // zero the stack frame
++    "and $-16, %rsp\n"          // x86 ABI : esp must be 16-byte aligned before call
+     "call main\n"               // main() returns the status code, we'll exit with it.
+-    "movzb %al, %rdi\n"         // retrieve exit code from 8 lower bits
++    "mov %eax, %edi\n"          // retrieve exit code (32 bit)
+     "mov $60, %rax\n"           // NR_exit == 60
+     "syscall\n"                 // really exit
+     "hlt\n"                     // ensure it does not return
+@@ -600,20 +606,28 @@ struct sys_stat_struct {
+ })
+ 
+ /* startup code */
++/*
++ * i386 System V ABI mandates:
++ * 1) last pushed argument must be 16-byte aligned.
++ * 2) The deepest stack frame should be set to zero
++ *
++ */
+ asm(".section .text\n"
+     ".global _start\n"
+     "_start:\n"
+     "pop %eax\n"                // argc   (first arg, %eax)
+     "mov %esp, %ebx\n"          // argv[] (second arg, %ebx)
+     "lea 4(%ebx,%eax,4),%ecx\n" // then a NULL then envp (third arg, %ecx)
+-    "and $-16, %esp\n"          // x86 ABI : esp must be 16-byte aligned when
++    "xor %ebp, %ebp\n"          // zero the stack frame
++    "and $-16, %esp\n"          // x86 ABI : esp must be 16-byte aligned before
++    "sub $4, %esp\n"            // the call instruction (args are aligned)
+     "push %ecx\n"               // push all registers on the stack so that we
+     "push %ebx\n"               // support both regparm and plain stack modes
+     "push %eax\n"
+     "call main\n"               // main() returns the status code in %eax
+-    "movzbl %al, %ebx\n"        // retrieve exit code from lower 8 bits
+-    "movl   $1, %eax\n"         // NR_exit == 1
+-    "int    $0x80\n"            // exit now
++    "mov %eax, %ebx\n"          // retrieve exit code (32-bit int)
++    "movl $1, %eax\n"           // NR_exit == 1
++    "int $0x80\n"               // exit now
+     "hlt\n"                     // ensure it does not
+     "");
+ 
+@@ -797,7 +811,6 @@ asm(".section .text\n"
+     "and %r3, %r1, $-8\n"         // AAPCS : sp must be 8-byte aligned in the
+     "mov %sp, %r3\n"              //         callee, an bl doesn't push (lr=pc)
+     "bl main\n"                   // main() returns the status code, we'll exit with it.
+-    "and %r0, %r0, $0xff\n"       // limit exit code to 8 bits
+     "movs r7, $1\n"               // NR_exit == 1
+     "svc $0x00\n"
+     "");
+@@ -994,7 +1007,6 @@ asm(".section .text\n"
+     "add x2, x2, x1\n"            //           + argv
+     "and sp, x1, -16\n"           // sp must be 16-byte aligned in the callee
+     "bl main\n"                   // main() returns the status code, we'll exit with it.
+-    "and x0, x0, 0xff\n"          // limit exit code to 8 bits
+     "mov x8, 93\n"                // NR_exit == 93
+     "svc #0\n"
+     "");
+@@ -1199,7 +1211,7 @@ asm(".section .text\n"
+     "addiu $sp,$sp,-16\n"         // the callee expects to save a0..a3 there!
+     "jal main\n"                  // main() returns the status code, we'll exit with it.
+     "nop\n"                       // delayed slot
+-    "and $a0, $v0, 0xff\n"        // limit exit code to 8 bits
++    "move $a0, $v0\n"             // retrieve 32-bit exit code from v0
+     "li $v0, 4001\n"              // NR_exit == 4001
+     "syscall\n"
+     ".end __start\n"
+@@ -1397,7 +1409,6 @@ asm(".section .text\n"
+     "add   a2,a2,a1\n"           //             + argv
+     "andi  sp,a1,-16\n"          // sp must be 16-byte aligned
+     "call  main\n"               // main() returns the status code, we'll exit with it.
+-    "andi  a0, a0, 0xff\n"       // limit exit code to 8 bits
+     "li a7, 93\n"                // NR_exit == 93
+     "ecall\n"
+     "");
+diff --git a/tools/perf/util/debug.c b/tools/perf/util/debug.c
+index 5cda5565777a0..0af163abaa62b 100644
+--- a/tools/perf/util/debug.c
++++ b/tools/perf/util/debug.c
+@@ -145,7 +145,7 @@ static int trace_event_printer(enum binary_printer_ops op,
+ 		break;
+ 	case BINARY_PRINT_CHAR_DATA:
+ 		printed += color_fprintf(fp, color, "%c",
+-			      isprint(ch) ? ch : '.');
++			      isprint(ch) && isascii(ch) ? ch : '.');
+ 		break;
+ 	case BINARY_PRINT_CHAR_PAD:
+ 		printed += color_fprintf(fp, color, " ");
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 1cad6051d8b08..1a1cbd16d76d4 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -1014,6 +1014,17 @@ struct evsel_config_term *__evsel__get_config_term(struct evsel *evsel, enum evs
+ 	return found_term;
+ }
+ 
++static void evsel__set_default_freq_period(struct record_opts *opts,
++					   struct perf_event_attr *attr)
++{
++	if (opts->freq) {
++		attr->freq = 1;
++		attr->sample_freq = opts->freq;
++	} else {
++		attr->sample_period = opts->default_interval;
++	}
++}
++
+ /*
+  * The enable_on_exec/disabled value strategy:
+  *
+@@ -1080,14 +1091,12 @@ void evsel__config(struct evsel *evsel, struct record_opts *opts,
+ 	 * We default some events to have a default interval. But keep
+ 	 * it a weak assumption overridable by the user.
+ 	 */
+-	if (!attr->sample_period) {
+-		if (opts->freq) {
+-			attr->freq		= 1;
+-			attr->sample_freq	= opts->freq;
+-		} else {
+-			attr->sample_period = opts->default_interval;
+-		}
+-	}
++	if ((evsel->is_libpfm_event && !attr->sample_period) ||
++	    (!evsel->is_libpfm_event && (!attr->sample_period ||
++					 opts->user_freq != UINT_MAX ||
++					 opts->user_interval != ULLONG_MAX)))
++		evsel__set_default_freq_period(opts, attr);
++
+ 	/*
+ 	 * If attr->freq was set (here or earlier), ask for period
+ 	 * to be sampled.
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index 07db6cfad65b9..d103084fcd56c 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -3035,6 +3035,9 @@ static int find_probe_trace_events_from_map(struct perf_probe_event *pev,
+ 	for (j = 0; j < num_matched_functions; j++) {
+ 		sym = syms[j];
+ 
++		if (sym->type != STT_FUNC)
++			continue;
++
+ 		/* There can be duplicated symbols in the map */
+ 		for (i = 0; i < j; i++)
+ 			if (sym->start == syms[i]->start) {
+diff --git a/tools/testing/selftests/bpf/prog_tests/skb_ctx.c b/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
+index fafeddaad6a99..23915be6172d6 100644
+--- a/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
++++ b/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
+@@ -105,4 +105,6 @@ void test_skb_ctx(void)
+ 		   "ctx_out_mark",
+ 		   "skb->mark == %u, expected %d\n",
+ 		   skb.mark, 10);
++
++	bpf_object__close(obj);
+ }
+diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
+index 42be3b9258301..076cf4325f783 100644
+--- a/tools/testing/selftests/clone3/clone3.c
++++ b/tools/testing/selftests/clone3/clone3.c
+@@ -52,6 +52,12 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
+ 		size = sizeof(struct __clone_args);
+ 
+ 	switch (test_mode) {
++	case CLONE3_ARGS_NO_TEST:
++		/*
++		 * Uses default 'flags' and 'SIGCHLD'
++		 * assignment.
++		 */
++		break;
+ 	case CLONE3_ARGS_ALL_0:
+ 		args.flags = 0;
+ 		args.exit_signal = 0;
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/profile.tc b/tools/testing/selftests/ftrace/test.d/kprobe/profile.tc
+index 98166fa3eb91c..34fb89b0c61fa 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/profile.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/profile.tc
+@@ -1,6 +1,6 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+-# description: Kprobe dynamic event - adding and removing
++# description: Kprobe profile
+ # requires: kprobe_events
+ 
+ ! grep -q 'myevent' kprobe_profile
+diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h
+index edce85420d193..5ecb9718e1616 100644
+--- a/tools/testing/selftests/kselftest_harness.h
++++ b/tools/testing/selftests/kselftest_harness.h
+@@ -965,7 +965,7 @@ void __run_test(struct __fixture_metadata *f,
+ 	t->passed = 1;
+ 	t->skip = 0;
+ 	t->trigger = 0;
+-	t->step = 0;
++	t->step = 1;
+ 	t->no_print = 0;
+ 	memset(t->results->reason, 0, sizeof(t->results->reason));
+ 
+diff --git a/tools/testing/selftests/powerpc/security/spectre_v2.c b/tools/testing/selftests/powerpc/security/spectre_v2.c
+index adc2b7294e5fd..83647b8277e7d 100644
+--- a/tools/testing/selftests/powerpc/security/spectre_v2.c
++++ b/tools/testing/selftests/powerpc/security/spectre_v2.c
+@@ -193,7 +193,7 @@ int spectre_v2_test(void)
+ 			 * We are not vulnerable and reporting otherwise, so
+ 			 * missing such a mismatch is safe.
+ 			 */
+-			if (state == VULNERABLE)
++			if (miss_percent > 95)
+ 				return 4;
+ 
+ 			return 1;
+diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
+index c9404ef9698e2..426dccc08f906 100644
+--- a/tools/testing/selftests/vm/hmm-tests.c
++++ b/tools/testing/selftests/vm/hmm-tests.c
+@@ -1242,6 +1242,48 @@ TEST_F(hmm, anon_teardown)
+ 	}
+ }
+ 
++/*
++ * Test memory snapshot without faulting in pages accessed by the device.
++ */
++TEST_F(hmm, mixedmap)
++{
++	struct hmm_buffer *buffer;
++	unsigned long npages;
++	unsigned long size;
++	unsigned char *m;
++	int ret;
++
++	npages = 1;
++	size = npages << self->page_shift;
++
++	buffer = malloc(sizeof(*buffer));
++	ASSERT_NE(buffer, NULL);
++
++	buffer->fd = -1;
++	buffer->size = size;
++	buffer->mirror = malloc(npages);
++	ASSERT_NE(buffer->mirror, NULL);
++
++
++	/* Reserve a range of addresses. */
++	buffer->ptr = mmap(NULL, size,
++			   PROT_READ | PROT_WRITE,
++			   MAP_PRIVATE,
++			   self->fd, 0);
++	ASSERT_NE(buffer->ptr, MAP_FAILED);
++
++	/* Simulate a device snapshotting CPU pagetables. */
++	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_SNAPSHOT, buffer, npages);
++	ASSERT_EQ(ret, 0);
++	ASSERT_EQ(buffer->cpages, npages);
++
++	/* Check what the device saw. */
++	m = buffer->mirror;
++	ASSERT_EQ(m[0], HMM_DMIRROR_PROT_READ);
++
++	hmm_buffer_free(buffer);
++}
++
+ /*
+  * Test memory snapshot without faulting in pages accessed by the device.
+  */


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-01-29 17:43 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-01-29 17:43 UTC (permalink / raw
  To: gentoo-commits

commit:     399535a40d9c13d6954ea337eab72175869f49e0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jan 29 17:43:13 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jan 29 17:43:13 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=399535a4

Linux patch 5.10.95

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1094_linux-5.10.95.patch | 823 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 827 insertions(+)

diff --git a/0000_README b/0000_README
index 8c30f470..5f3cbb9a 100644
--- a/0000_README
+++ b/0000_README
@@ -419,6 +419,10 @@ Patch:  1093_linux-5.10.94.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.94
 
+Patch:  1094_linux-5.10.95.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.95
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1094_linux-5.10.95.patch b/1094_linux-5.10.95.patch
new file mode 100644
index 00000000..a12f7b89
--- /dev/null
+++ b/1094_linux-5.10.95.patch
@@ -0,0 +1,823 @@
+diff --git a/Makefile b/Makefile
+index 1071ec486aa5b..fa98893aae615 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 94
++SUBLEVEL = 95
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index f2ddf663e72e9..7e08efb068393 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -1130,12 +1130,12 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
+ 	bool spte_set = false;
+ 
+ 	tdp_root_for_each_leaf_pte(iter, root, gfn, gfn + 1) {
+-		if (!is_writable_pte(iter.old_spte))
+-			break;
+-
+ 		new_spte = iter.old_spte &
+ 			~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
+ 
++		if (new_spte == iter.old_spte)
++			break;
++
+ 		tdp_mmu_set_spte(kvm, &iter, new_spte);
+ 		spte_set = true;
+ 	}
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+index d6711caa7f399..dbc88fc7136bf 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
++++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+@@ -159,6 +159,7 @@ struct drm_i915_gem_object {
+ #define I915_BO_ALLOC_VOLATILE   BIT(1)
+ #define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | I915_BO_ALLOC_VOLATILE)
+ #define I915_BO_READONLY         BIT(2)
++#define I915_BO_WAS_BOUND_BIT    3
+ 
+ 	/*
+ 	 * Is the object to be mapped as read-only to the GPU
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+index f60ca6dc911f2..27d24cb38c0d2 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+@@ -10,6 +10,8 @@
+ #include "i915_gem_lmem.h"
+ #include "i915_gem_mman.h"
+ 
++#include "gt/intel_gt.h"
++
+ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
+ 				 struct sg_table *pages,
+ 				 unsigned int sg_page_sizes)
+@@ -186,6 +188,14 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
+ 	__i915_gem_object_reset_page_iter(obj);
+ 	obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
+ 
++	if (test_and_clear_bit(I915_BO_WAS_BOUND_BIT, &obj->flags)) {
++		struct drm_i915_private *i915 = to_i915(obj->base.dev);
++		intel_wakeref_t wakeref;
++
++		with_intel_runtime_pm_if_active(&i915->runtime_pm, wakeref)
++			intel_gt_invalidate_tlbs(&i915->gt);
++	}
++
+ 	return pages;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
+index 39b428c5049c0..6615eb5147e23 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt.c
+@@ -26,6 +26,8 @@ void intel_gt_init_early(struct intel_gt *gt, struct drm_i915_private *i915)
+ 
+ 	spin_lock_init(&gt->irq_lock);
+ 
++	mutex_init(&gt->tlb_invalidate_lock);
++
+ 	INIT_LIST_HEAD(&gt->closed_vma);
+ 	spin_lock_init(&gt->closed_lock);
+ 
+@@ -661,3 +663,103 @@ void intel_gt_info_print(const struct intel_gt_info *info,
+ 
+ 	intel_sseu_dump(&info->sseu, p);
+ }
++
++struct reg_and_bit {
++	i915_reg_t reg;
++	u32 bit;
++};
++
++static struct reg_and_bit
++get_reg_and_bit(const struct intel_engine_cs *engine, const bool gen8,
++		const i915_reg_t *regs, const unsigned int num)
++{
++	const unsigned int class = engine->class;
++	struct reg_and_bit rb = { };
++
++	if (drm_WARN_ON_ONCE(&engine->i915->drm,
++			     class >= num || !regs[class].reg))
++		return rb;
++
++	rb.reg = regs[class];
++	if (gen8 && class == VIDEO_DECODE_CLASS)
++		rb.reg.reg += 4 * engine->instance; /* GEN8_M2TCR */
++	else
++		rb.bit = engine->instance;
++
++	rb.bit = BIT(rb.bit);
++
++	return rb;
++}
++
++void intel_gt_invalidate_tlbs(struct intel_gt *gt)
++{
++	static const i915_reg_t gen8_regs[] = {
++		[RENDER_CLASS]			= GEN8_RTCR,
++		[VIDEO_DECODE_CLASS]		= GEN8_M1TCR, /* , GEN8_M2TCR */
++		[VIDEO_ENHANCEMENT_CLASS]	= GEN8_VTCR,
++		[COPY_ENGINE_CLASS]		= GEN8_BTCR,
++	};
++	static const i915_reg_t gen12_regs[] = {
++		[RENDER_CLASS]			= GEN12_GFX_TLB_INV_CR,
++		[VIDEO_DECODE_CLASS]		= GEN12_VD_TLB_INV_CR,
++		[VIDEO_ENHANCEMENT_CLASS]	= GEN12_VE_TLB_INV_CR,
++		[COPY_ENGINE_CLASS]		= GEN12_BLT_TLB_INV_CR,
++	};
++	struct drm_i915_private *i915 = gt->i915;
++	struct intel_uncore *uncore = gt->uncore;
++	struct intel_engine_cs *engine;
++	enum intel_engine_id id;
++	const i915_reg_t *regs;
++	unsigned int num = 0;
++
++	if (I915_SELFTEST_ONLY(gt->awake == -ENODEV))
++		return;
++
++	if (INTEL_GEN(i915) == 12) {
++		regs = gen12_regs;
++		num = ARRAY_SIZE(gen12_regs);
++	} else if (INTEL_GEN(i915) >= 8 && INTEL_GEN(i915) <= 11) {
++		regs = gen8_regs;
++		num = ARRAY_SIZE(gen8_regs);
++	} else if (INTEL_GEN(i915) < 8) {
++		return;
++	}
++
++	if (drm_WARN_ONCE(&i915->drm, !num,
++			  "Platform does not implement TLB invalidation!"))
++		return;
++
++	GEM_TRACE("\n");
++
++	assert_rpm_wakelock_held(&i915->runtime_pm);
++
++	mutex_lock(&gt->tlb_invalidate_lock);
++	intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
++
++	for_each_engine(engine, gt, id) {
++		/*
++		 * HW architecture suggest typical invalidation time at 40us,
++		 * with pessimistic cases up to 100us and a recommendation to
++		 * cap at 1ms. We go a bit higher just in case.
++		 */
++		const unsigned int timeout_us = 100;
++		const unsigned int timeout_ms = 4;
++		struct reg_and_bit rb;
++
++		rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num);
++		if (!i915_mmio_reg_offset(rb.reg))
++			continue;
++
++		intel_uncore_write_fw(uncore, rb.reg, rb.bit);
++		if (__intel_wait_for_register_fw(uncore,
++						 rb.reg, rb.bit, 0,
++						 timeout_us, timeout_ms,
++						 NULL))
++			drm_err_ratelimited(&gt->i915->drm,
++					    "%s TLB invalidation did not complete in %ums!\n",
++					    engine->name, timeout_ms);
++	}
++
++	intel_uncore_forcewake_put_delayed(uncore, FORCEWAKE_ALL);
++	mutex_unlock(&gt->tlb_invalidate_lock);
++}
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h
+index 9157c7411f603..d9a1168172ae3 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.h
++++ b/drivers/gpu/drm/i915/gt/intel_gt.h
+@@ -77,4 +77,6 @@ static inline bool intel_gt_is_wedged(const struct intel_gt *gt)
+ void intel_gt_info_print(const struct intel_gt_info *info,
+ 			 struct drm_printer *p);
+ 
++void intel_gt_invalidate_tlbs(struct intel_gt *gt);
++
+ #endif /* __INTEL_GT_H__ */
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
+index 6d39a4a11bf39..78c061614d8bb 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
++++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
+@@ -36,6 +36,8 @@ struct intel_gt {
+ 
+ 	struct intel_uc uc;
+ 
++	struct mutex tlb_invalidate_lock;
++
+ 	struct intel_gt_timelines {
+ 		spinlock_t lock; /* protects active_list */
+ 		struct list_head active_list;
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index ce8c91c5fdd3b..12488996a7f4f 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -2639,6 +2639,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
+ #define   GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING	(1 << 28)
+ #define   GAMT_CHKN_DISABLE_I2M_CYCLE_ON_WR_PORT	(1 << 24)
+ 
++#define GEN8_RTCR	_MMIO(0x4260)
++#define GEN8_M1TCR	_MMIO(0x4264)
++#define GEN8_M2TCR	_MMIO(0x4268)
++#define GEN8_BTCR	_MMIO(0x426c)
++#define GEN8_VTCR	_MMIO(0x4270)
++
+ #if 0
+ #define PRB0_TAIL	_MMIO(0x2030)
+ #define PRB0_HEAD	_MMIO(0x2034)
+@@ -2728,6 +2734,11 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
+ #define   FAULT_VA_HIGH_BITS		(0xf << 0)
+ #define   FAULT_GTT_SEL			(1 << 4)
+ 
++#define GEN12_GFX_TLB_INV_CR	_MMIO(0xced8)
++#define GEN12_VD_TLB_INV_CR	_MMIO(0xcedc)
++#define GEN12_VE_TLB_INV_CR	_MMIO(0xcee0)
++#define GEN12_BLT_TLB_INV_CR	_MMIO(0xcee4)
++
+ #define GEN12_AUX_ERR_DBG		_MMIO(0x43f4)
+ 
+ #define FPGA_DBG		_MMIO(0x42300)
+diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
+index caa9b041616b0..50a86fd89d005 100644
+--- a/drivers/gpu/drm/i915/i915_vma.c
++++ b/drivers/gpu/drm/i915/i915_vma.c
+@@ -439,6 +439,9 @@ int i915_vma_bind(struct i915_vma *vma,
+ 		vma->ops->bind_vma(vma->vm, NULL, vma, cache_level, bind_flags);
+ 	}
+ 
++	if (vma->obj)
++		set_bit(I915_BO_WAS_BOUND_BIT, &vma->obj->flags);
++
+ 	atomic_or(bind_flags, &vma->flags);
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
+index 97ded2a59cf4c..01849840ac560 100644
+--- a/drivers/gpu/drm/i915/intel_uncore.c
++++ b/drivers/gpu/drm/i915/intel_uncore.c
+@@ -694,7 +694,8 @@ void intel_uncore_forcewake_get__locked(struct intel_uncore *uncore,
+ }
+ 
+ static void __intel_uncore_forcewake_put(struct intel_uncore *uncore,
+-					 enum forcewake_domains fw_domains)
++					 enum forcewake_domains fw_domains,
++					 bool delayed)
+ {
+ 	struct intel_uncore_forcewake_domain *domain;
+ 	unsigned int tmp;
+@@ -709,7 +710,11 @@ static void __intel_uncore_forcewake_put(struct intel_uncore *uncore,
+ 			continue;
+ 		}
+ 
+-		uncore->funcs.force_wake_put(uncore, domain->mask);
++		if (delayed &&
++		    !(domain->uncore->fw_domains_timer & domain->mask))
++			fw_domain_arm_timer(domain);
++		else
++			uncore->funcs.force_wake_put(uncore, domain->mask);
+ 	}
+ }
+ 
+@@ -730,7 +735,20 @@ void intel_uncore_forcewake_put(struct intel_uncore *uncore,
+ 		return;
+ 
+ 	spin_lock_irqsave(&uncore->lock, irqflags);
+-	__intel_uncore_forcewake_put(uncore, fw_domains);
++	__intel_uncore_forcewake_put(uncore, fw_domains, false);
++	spin_unlock_irqrestore(&uncore->lock, irqflags);
++}
++
++void intel_uncore_forcewake_put_delayed(struct intel_uncore *uncore,
++					enum forcewake_domains fw_domains)
++{
++	unsigned long irqflags;
++
++	if (!uncore->funcs.force_wake_put)
++		return;
++
++	spin_lock_irqsave(&uncore->lock, irqflags);
++	__intel_uncore_forcewake_put(uncore, fw_domains, true);
+ 	spin_unlock_irqrestore(&uncore->lock, irqflags);
+ }
+ 
+@@ -772,7 +790,7 @@ void intel_uncore_forcewake_put__locked(struct intel_uncore *uncore,
+ 	if (!uncore->funcs.force_wake_put)
+ 		return;
+ 
+-	__intel_uncore_forcewake_put(uncore, fw_domains);
++	__intel_uncore_forcewake_put(uncore, fw_domains, false);
+ }
+ 
+ void assert_forcewakes_inactive(struct intel_uncore *uncore)
+diff --git a/drivers/gpu/drm/i915/intel_uncore.h b/drivers/gpu/drm/i915/intel_uncore.h
+index c4b22d9d0b451..034f04e0de8b7 100644
+--- a/drivers/gpu/drm/i915/intel_uncore.h
++++ b/drivers/gpu/drm/i915/intel_uncore.h
+@@ -211,6 +211,8 @@ void intel_uncore_forcewake_get(struct intel_uncore *uncore,
+ 				enum forcewake_domains domains);
+ void intel_uncore_forcewake_put(struct intel_uncore *uncore,
+ 				enum forcewake_domains domains);
++void intel_uncore_forcewake_put_delayed(struct intel_uncore *uncore,
++					enum forcewake_domains domains);
+ void intel_uncore_forcewake_flush(struct intel_uncore *uncore,
+ 				  enum forcewake_domains fw_domains);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+index 1523b51a7284c..ad208a5f4ebe5 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+@@ -1088,15 +1088,14 @@ extern int vmw_execbuf_fence_commands(struct drm_file *file_priv,
+ 				      struct vmw_private *dev_priv,
+ 				      struct vmw_fence_obj **p_fence,
+ 				      uint32_t *p_handle);
+-extern void vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
++extern int vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
+ 					struct vmw_fpriv *vmw_fp,
+ 					int ret,
+ 					struct drm_vmw_fence_rep __user
+ 					*user_fence_rep,
+ 					struct vmw_fence_obj *fence,
+ 					uint32_t fence_handle,
+-					int32_t out_fence_fd,
+-					struct sync_file *sync_file);
++					int32_t out_fence_fd);
+ bool vmw_cmd_describe(const void *buf, u32 *size, char const **cmd);
+ 
+ /**
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index 83e1b54eb8647..739cbc77d8867 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -3816,17 +3816,17 @@ int vmw_execbuf_fence_commands(struct drm_file *file_priv,
+  * Also if copying fails, user-space will be unable to signal the fence object
+  * so we wait for it immediately, and then unreference the user-space reference.
+  */
+-void
++int
+ vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
+ 			    struct vmw_fpriv *vmw_fp, int ret,
+ 			    struct drm_vmw_fence_rep __user *user_fence_rep,
+ 			    struct vmw_fence_obj *fence, uint32_t fence_handle,
+-			    int32_t out_fence_fd, struct sync_file *sync_file)
++			    int32_t out_fence_fd)
+ {
+ 	struct drm_vmw_fence_rep fence_rep;
+ 
+ 	if (user_fence_rep == NULL)
+-		return;
++		return 0;
+ 
+ 	memset(&fence_rep, 0, sizeof(fence_rep));
+ 
+@@ -3854,20 +3854,14 @@ vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
+ 	 * handle.
+ 	 */
+ 	if (unlikely(ret != 0) && (fence_rep.error == 0)) {
+-		if (sync_file)
+-			fput(sync_file->file);
+-
+-		if (fence_rep.fd != -1) {
+-			put_unused_fd(fence_rep.fd);
+-			fence_rep.fd = -1;
+-		}
+-
+ 		ttm_ref_object_base_unref(vmw_fp->tfile, fence_handle,
+ 					  TTM_REF_USAGE);
+ 		VMW_DEBUG_USER("Fence copy error. Syncing.\n");
+ 		(void) vmw_fence_obj_wait(fence, false, false,
+ 					  VMW_FENCE_WAIT_TIMEOUT);
+ 	}
++
++	return ret ? -EFAULT : 0;
+ }
+ 
+ /**
+@@ -4209,16 +4203,23 @@ int vmw_execbuf_process(struct drm_file *file_priv,
+ 
+ 			(void) vmw_fence_obj_wait(fence, false, false,
+ 						  VMW_FENCE_WAIT_TIMEOUT);
++		}
++	}
++
++	ret = vmw_execbuf_copy_fence_user(dev_priv, vmw_fpriv(file_priv), ret,
++				    user_fence_rep, fence, handle, out_fence_fd);
++
++	if (sync_file) {
++		if (ret) {
++			/* usercopy of fence failed, put the file object */
++			fput(sync_file->file);
++			put_unused_fd(out_fence_fd);
+ 		} else {
+ 			/* Link the fence with the FD created earlier */
+ 			fd_install(out_fence_fd, sync_file->file);
+ 		}
+ 	}
+ 
+-	vmw_execbuf_copy_fence_user(dev_priv, vmw_fpriv(file_priv), ret,
+-				    user_fence_rep, fence, handle, out_fence_fd,
+-				    sync_file);
+-
+ 	/* Don't unreference when handing fence out */
+ 	if (unlikely(out_fence != NULL)) {
+ 		*out_fence = fence;
+@@ -4236,7 +4237,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
+ 	 */
+ 	vmw_validation_unref_lists(&val_ctx);
+ 
+-	return 0;
++	return ret;
+ 
+ out_unlock_binding:
+ 	mutex_unlock(&dev_priv->binding_mutex);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+index 0f8d293971576..8bc41ec97d71a 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+@@ -1171,7 +1171,7 @@ int vmw_fence_event_ioctl(struct drm_device *dev, void *data,
+ 	}
+ 
+ 	vmw_execbuf_copy_fence_user(dev_priv, vmw_fp, 0, user_fence_rep, fence,
+-				    handle, -1, NULL);
++				    handle, -1);
+ 	vmw_fence_obj_unreference(&fence);
+ 	return 0;
+ out_no_create:
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 312ed0881a99b..e58112997c881 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -2479,7 +2479,7 @@ void vmw_kms_helper_validation_finish(struct vmw_private *dev_priv,
+ 	if (file_priv)
+ 		vmw_execbuf_copy_fence_user(dev_priv, vmw_fpriv(file_priv),
+ 					    ret, user_fence_rep, fence,
+-					    handle, -1, NULL);
++					    handle, -1);
+ 	if (out_fence)
+ 		*out_fence = fence;
+ 	else
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+index d04994840b87d..bb3ba614fb174 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+@@ -1850,6 +1850,14 @@ struct bnx2x {
+ 
+ 	/* Vxlan/Geneve related information */
+ 	u16 udp_tunnel_ports[BNX2X_UDP_PORT_MAX];
++
++#define FW_CAP_INVALIDATE_VF_FP_HSI	BIT(0)
++	u32 fw_cap;
++
++	u32 fw_major;
++	u32 fw_minor;
++	u32 fw_rev;
++	u32 fw_eng;
+ };
+ 
+ /* Tx queues may be less or equal to Rx queues */
+@@ -2526,5 +2534,6 @@ void bnx2x_register_phc(struct bnx2x *bp);
+  * Meant for implicit re-load flows.
+  */
+ int bnx2x_vlan_reconfigure_vid(struct bnx2x *bp);
+-
++int bnx2x_init_firmware(struct bnx2x *bp);
++void bnx2x_release_firmware(struct bnx2x *bp);
+ #endif /* bnx2x.h */
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index b5d954cb409ae..41ebbb2c7d3ac 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -2364,10 +2364,8 @@ int bnx2x_compare_fw_ver(struct bnx2x *bp, u32 load_code, bool print_err)
+ 	if (load_code != FW_MSG_CODE_DRV_LOAD_COMMON_CHIP &&
+ 	    load_code != FW_MSG_CODE_DRV_LOAD_COMMON) {
+ 		/* build my FW version dword */
+-		u32 my_fw = (BCM_5710_FW_MAJOR_VERSION) +
+-			(BCM_5710_FW_MINOR_VERSION << 8) +
+-			(BCM_5710_FW_REVISION_VERSION << 16) +
+-			(BCM_5710_FW_ENGINEERING_VERSION << 24);
++		u32 my_fw = (bp->fw_major) + (bp->fw_minor << 8) +
++				(bp->fw_rev << 16) + (bp->fw_eng << 24);
+ 
+ 		/* read loaded FW from chip */
+ 		u32 loaded_fw = REG_RD(bp, XSEM_REG_PRAM);
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
+index 3f8435208bf49..a84d015da5dfa 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
+@@ -241,6 +241,8 @@
+ 	IRO[221].m2))
+ #define XSTORM_VF_TO_PF_OFFSET(funcId) \
+ 	(IRO[48].base + ((funcId) * IRO[48].m1))
++#define XSTORM_ETH_FUNCTION_INFO_FP_HSI_VALID_E2_OFFSET(fid)	\
++	(IRO[386].base + ((fid) * IRO[386].m1))
+ #define COMMON_ASM_INVALID_ASSERT_OPCODE 0x0
+ 
+ /* eth hsi version */
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
+index 622fadc50316e..611efee758340 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
+@@ -3024,7 +3024,8 @@ struct afex_stats {
+ 
+ #define BCM_5710_FW_MAJOR_VERSION			7
+ #define BCM_5710_FW_MINOR_VERSION			13
+-#define BCM_5710_FW_REVISION_VERSION		15
++#define BCM_5710_FW_REVISION_VERSION		21
++#define BCM_5710_FW_REVISION_VERSION_V15	15
+ #define BCM_5710_FW_ENGINEERING_VERSION		0
+ #define BCM_5710_FW_COMPILE_FLAGS			1
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+index 28069b2908625..9a86367a26369 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+@@ -74,9 +74,19 @@
+ 	__stringify(BCM_5710_FW_MINOR_VERSION) "."	\
+ 	__stringify(BCM_5710_FW_REVISION_VERSION) "."	\
+ 	__stringify(BCM_5710_FW_ENGINEERING_VERSION)
++
++#define FW_FILE_VERSION_V15				\
++	__stringify(BCM_5710_FW_MAJOR_VERSION) "."      \
++	__stringify(BCM_5710_FW_MINOR_VERSION) "."	\
++	__stringify(BCM_5710_FW_REVISION_VERSION_V15) "."	\
++	__stringify(BCM_5710_FW_ENGINEERING_VERSION)
++
+ #define FW_FILE_NAME_E1		"bnx2x/bnx2x-e1-" FW_FILE_VERSION ".fw"
+ #define FW_FILE_NAME_E1H	"bnx2x/bnx2x-e1h-" FW_FILE_VERSION ".fw"
+ #define FW_FILE_NAME_E2		"bnx2x/bnx2x-e2-" FW_FILE_VERSION ".fw"
++#define FW_FILE_NAME_E1_V15	"bnx2x/bnx2x-e1-" FW_FILE_VERSION_V15 ".fw"
++#define FW_FILE_NAME_E1H_V15	"bnx2x/bnx2x-e1h-" FW_FILE_VERSION_V15 ".fw"
++#define FW_FILE_NAME_E2_V15	"bnx2x/bnx2x-e2-" FW_FILE_VERSION_V15 ".fw"
+ 
+ /* Time in jiffies before concluding the transmitter is hung */
+ #define TX_TIMEOUT		(5*HZ)
+@@ -747,9 +757,7 @@ static int bnx2x_mc_assert(struct bnx2x *bp)
+ 		  CHIP_IS_E1(bp) ? "everest1" :
+ 		  CHIP_IS_E1H(bp) ? "everest1h" :
+ 		  CHIP_IS_E2(bp) ? "everest2" : "everest3",
+-		  BCM_5710_FW_MAJOR_VERSION,
+-		  BCM_5710_FW_MINOR_VERSION,
+-		  BCM_5710_FW_REVISION_VERSION);
++		  bp->fw_major, bp->fw_minor, bp->fw_rev);
+ 
+ 	return rc;
+ }
+@@ -12355,6 +12363,15 @@ static int bnx2x_init_bp(struct bnx2x *bp)
+ 
+ 	bnx2x_read_fwinfo(bp);
+ 
++	if (IS_PF(bp)) {
++		rc = bnx2x_init_firmware(bp);
++
++		if (rc) {
++			bnx2x_free_mem_bp(bp);
++			return rc;
++		}
++	}
++
+ 	func = BP_FUNC(bp);
+ 
+ 	/* need to reset chip if undi was active */
+@@ -12367,6 +12384,7 @@ static int bnx2x_init_bp(struct bnx2x *bp)
+ 
+ 		rc = bnx2x_prev_unload(bp);
+ 		if (rc) {
++			bnx2x_release_firmware(bp);
+ 			bnx2x_free_mem_bp(bp);
+ 			return rc;
+ 		}
+@@ -13366,16 +13384,11 @@ static int bnx2x_check_firmware(struct bnx2x *bp)
+ 	/* Check FW version */
+ 	offset = be32_to_cpu(fw_hdr->fw_version.offset);
+ 	fw_ver = firmware->data + offset;
+-	if ((fw_ver[0] != BCM_5710_FW_MAJOR_VERSION) ||
+-	    (fw_ver[1] != BCM_5710_FW_MINOR_VERSION) ||
+-	    (fw_ver[2] != BCM_5710_FW_REVISION_VERSION) ||
+-	    (fw_ver[3] != BCM_5710_FW_ENGINEERING_VERSION)) {
++	if (fw_ver[0] != bp->fw_major || fw_ver[1] != bp->fw_minor ||
++	    fw_ver[2] != bp->fw_rev || fw_ver[3] != bp->fw_eng) {
+ 		BNX2X_ERR("Bad FW version:%d.%d.%d.%d. Should be %d.%d.%d.%d\n",
+-		       fw_ver[0], fw_ver[1], fw_ver[2], fw_ver[3],
+-		       BCM_5710_FW_MAJOR_VERSION,
+-		       BCM_5710_FW_MINOR_VERSION,
+-		       BCM_5710_FW_REVISION_VERSION,
+-		       BCM_5710_FW_ENGINEERING_VERSION);
++			  fw_ver[0], fw_ver[1], fw_ver[2], fw_ver[3],
++			  bp->fw_major, bp->fw_minor, bp->fw_rev, bp->fw_eng);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -13453,34 +13466,51 @@ do {									\
+ 	     (u8 *)bp->arr, len);					\
+ } while (0)
+ 
+-static int bnx2x_init_firmware(struct bnx2x *bp)
++int bnx2x_init_firmware(struct bnx2x *bp)
+ {
+-	const char *fw_file_name;
++	const char *fw_file_name, *fw_file_name_v15;
+ 	struct bnx2x_fw_file_hdr *fw_hdr;
+ 	int rc;
+ 
+ 	if (bp->firmware)
+ 		return 0;
+ 
+-	if (CHIP_IS_E1(bp))
++	if (CHIP_IS_E1(bp)) {
+ 		fw_file_name = FW_FILE_NAME_E1;
+-	else if (CHIP_IS_E1H(bp))
++		fw_file_name_v15 = FW_FILE_NAME_E1_V15;
++	} else if (CHIP_IS_E1H(bp)) {
+ 		fw_file_name = FW_FILE_NAME_E1H;
+-	else if (!CHIP_IS_E1x(bp))
++		fw_file_name_v15 = FW_FILE_NAME_E1H_V15;
++	} else if (!CHIP_IS_E1x(bp)) {
+ 		fw_file_name = FW_FILE_NAME_E2;
+-	else {
++		fw_file_name_v15 = FW_FILE_NAME_E2_V15;
++	} else {
+ 		BNX2X_ERR("Unsupported chip revision\n");
+ 		return -EINVAL;
+ 	}
++
+ 	BNX2X_DEV_INFO("Loading %s\n", fw_file_name);
+ 
+ 	rc = request_firmware(&bp->firmware, fw_file_name, &bp->pdev->dev);
+ 	if (rc) {
+-		BNX2X_ERR("Can't load firmware file %s\n",
+-			  fw_file_name);
+-		goto request_firmware_exit;
++		BNX2X_DEV_INFO("Trying to load older fw %s\n", fw_file_name_v15);
++
++		/* try to load prev version */
++		rc = request_firmware(&bp->firmware, fw_file_name_v15, &bp->pdev->dev);
++
++		if (rc)
++			goto request_firmware_exit;
++
++		bp->fw_rev = BCM_5710_FW_REVISION_VERSION_V15;
++	} else {
++		bp->fw_cap |= FW_CAP_INVALIDATE_VF_FP_HSI;
++		bp->fw_rev = BCM_5710_FW_REVISION_VERSION;
+ 	}
+ 
++	bp->fw_major = BCM_5710_FW_MAJOR_VERSION;
++	bp->fw_minor = BCM_5710_FW_MINOR_VERSION;
++	bp->fw_eng = BCM_5710_FW_ENGINEERING_VERSION;
++
+ 	rc = bnx2x_check_firmware(bp);
+ 	if (rc) {
+ 		BNX2X_ERR("Corrupt firmware file %s\n", fw_file_name);
+@@ -13536,7 +13566,7 @@ request_firmware_exit:
+ 	return rc;
+ }
+ 
+-static void bnx2x_release_firmware(struct bnx2x *bp)
++void bnx2x_release_firmware(struct bnx2x *bp)
+ {
+ 	kfree(bp->init_ops_offsets);
+ 	kfree(bp->init_ops);
+@@ -14053,6 +14083,7 @@ static int bnx2x_init_one(struct pci_dev *pdev,
+ 	return 0;
+ 
+ init_one_freemem:
++	bnx2x_release_firmware(bp);
+ 	bnx2x_free_mem_bp(bp);
+ 
+ init_one_exit:
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+index 03eb0179ec008..08437eaacbb96 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+@@ -758,9 +758,18 @@ static void bnx2x_vf_igu_reset(struct bnx2x *bp, struct bnx2x_virtf *vf)
+ 
+ void bnx2x_vf_enable_access(struct bnx2x *bp, u8 abs_vfid)
+ {
++	u16 abs_fid;
++
++	abs_fid = FW_VF_HANDLE(abs_vfid);
++
+ 	/* set the VF-PF association in the FW */
+-	storm_memset_vf_to_pf(bp, FW_VF_HANDLE(abs_vfid), BP_FUNC(bp));
+-	storm_memset_func_en(bp, FW_VF_HANDLE(abs_vfid), 1);
++	storm_memset_vf_to_pf(bp, abs_fid, BP_FUNC(bp));
++	storm_memset_func_en(bp, abs_fid, 1);
++
++	/* Invalidate fp_hsi version for vfs */
++	if (bp->fw_cap & FW_CAP_INVALIDATE_VF_FP_HSI)
++		REG_WR8(bp, BAR_XSTRORM_INTMEM +
++			    XSTORM_ETH_FUNCTION_INFO_FP_HSI_VALID_E2_OFFSET(abs_fid), 0);
+ 
+ 	/* clear vf errors*/
+ 	bnx2x_vf_semi_clear_err(bp, abs_vfid);
+diff --git a/fs/select.c b/fs/select.c
+index 945896d0ac9e7..5edffee1162c2 100644
+--- a/fs/select.c
++++ b/fs/select.c
+@@ -458,9 +458,11 @@ get_max:
+ 	return max;
+ }
+ 
+-#define POLLIN_SET (EPOLLRDNORM | EPOLLRDBAND | EPOLLIN | EPOLLHUP | EPOLLERR)
+-#define POLLOUT_SET (EPOLLWRBAND | EPOLLWRNORM | EPOLLOUT | EPOLLERR)
+-#define POLLEX_SET (EPOLLPRI)
++#define POLLIN_SET (EPOLLRDNORM | EPOLLRDBAND | EPOLLIN | EPOLLHUP | EPOLLERR |\
++			EPOLLNVAL)
++#define POLLOUT_SET (EPOLLWRBAND | EPOLLWRNORM | EPOLLOUT | EPOLLERR |\
++			 EPOLLNVAL)
++#define POLLEX_SET (EPOLLPRI | EPOLLNVAL)
+ 
+ static inline void wait_key_set(poll_table *wait, unsigned long in,
+ 				unsigned long out, unsigned long bit,
+@@ -527,6 +529,7 @@ static int do_select(int n, fd_set_bits *fds, struct timespec64 *end_time)
+ 					break;
+ 				if (!(bit & all_bits))
+ 					continue;
++				mask = EPOLLNVAL;
+ 				f = fdget(i);
+ 				if (f.file) {
+ 					wait_key_set(wait, in, out, bit,
+@@ -534,34 +537,34 @@ static int do_select(int n, fd_set_bits *fds, struct timespec64 *end_time)
+ 					mask = vfs_poll(f.file, wait);
+ 
+ 					fdput(f);
+-					if ((mask & POLLIN_SET) && (in & bit)) {
+-						res_in |= bit;
+-						retval++;
+-						wait->_qproc = NULL;
+-					}
+-					if ((mask & POLLOUT_SET) && (out & bit)) {
+-						res_out |= bit;
+-						retval++;
+-						wait->_qproc = NULL;
+-					}
+-					if ((mask & POLLEX_SET) && (ex & bit)) {
+-						res_ex |= bit;
+-						retval++;
+-						wait->_qproc = NULL;
+-					}
+-					/* got something, stop busy polling */
+-					if (retval) {
+-						can_busy_loop = false;
+-						busy_flag = 0;
+-
+-					/*
+-					 * only remember a returned
+-					 * POLL_BUSY_LOOP if we asked for it
+-					 */
+-					} else if (busy_flag & mask)
+-						can_busy_loop = true;
+-
+ 				}
++				if ((mask & POLLIN_SET) && (in & bit)) {
++					res_in |= bit;
++					retval++;
++					wait->_qproc = NULL;
++				}
++				if ((mask & POLLOUT_SET) && (out & bit)) {
++					res_out |= bit;
++					retval++;
++					wait->_qproc = NULL;
++				}
++				if ((mask & POLLEX_SET) && (ex & bit)) {
++					res_ex |= bit;
++					retval++;
++					wait->_qproc = NULL;
++				}
++				/* got something, stop busy polling */
++				if (retval) {
++					can_busy_loop = false;
++					busy_flag = 0;
++
++				/*
++				 * only remember a returned
++				 * POLL_BUSY_LOOP if we asked for it
++				 */
++				} else if (busy_flag & mask)
++					can_busy_loop = true;
++
+ 			}
+ 			if (res_in)
+ 				*rinp = res_in;
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index b74e7ace4376b..844c35803739e 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1581,10 +1581,11 @@ static void __maybe_unused rcu_advance_cbs_nowake(struct rcu_node *rnp,
+ 						  struct rcu_data *rdp)
+ {
+ 	rcu_lockdep_assert_cblist_protected(rdp);
+-	if (!rcu_seq_state(rcu_seq_current(&rnp->gp_seq)) ||
+-	    !raw_spin_trylock_rcu_node(rnp))
++	if (!rcu_seq_state(rcu_seq_current(&rnp->gp_seq)) || !raw_spin_trylock_rcu_node(rnp))
+ 		return;
+-	WARN_ON_ONCE(rcu_advance_cbs(rnp, rdp));
++	// The grace period cannot end while we hold the rcu_node lock.
++	if (rcu_seq_state(rcu_seq_current(&rnp->gp_seq)))
++		WARN_ON_ONCE(rcu_advance_cbs(rnp, rdp));
+ 	raw_spin_unlock_rcu_node(rnp);
+ }
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-01-31 12:25 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-01-31 12:25 UTC (permalink / raw
  To: gentoo-commits

commit:     5112ec9980b2ccf92ff7e4920daca9c9ce1e91af
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jan 31 12:25:09 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jan 31 12:25:09 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5112ec99

Select CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL=y as default

Bug: https://bugs.gentoo.org/832224

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 91c92074..7e387d70 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-12-21 08:57:43.779324794 -0500
-+++ b/distro/Kconfig	2021-12-21 14:33:33.759225728 -0500
-@@ -0,0 +1,283 @@
+--- /dev/null	2022-01-30 08:12:05.041788304 -0500
++++ b/distro/Kconfig	2022-01-30 15:28:10.030352980 -0500
+@@ -0,0 +1,285 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -16,6 +16,8 @@
 +
 +	default y
 +
++	select CPU_FREQ_DEFAULT_GOV_SCHEDUTIL
++
 +	help
 +		In order to boot Gentoo Linux a minimal set of config settings needs to
 +		be enabled in the kernel; to avoid the users from having to enable them


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-02-01 17:23 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-02-01 17:23 UTC (permalink / raw
  To: gentoo-commits

commit:     819b0cffa158a73b6276046ea0cb831c15ae8314
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Feb  1 17:23:02 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Feb  1 17:23:02 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=819b0cff

Linux patch 5.10.96

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1095_linux-5.10.96.patch | 3881 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3885 insertions(+)

diff --git a/0000_README b/0000_README
index 5f3cbb9a..cc530626 100644
--- a/0000_README
+++ b/0000_README
@@ -423,6 +423,10 @@ Patch:  1094_linux-5.10.95.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.95
 
+Patch:  1095_linux-5.10.96.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.96
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1095_linux-5.10.96.patch b/1095_linux-5.10.96.patch
new file mode 100644
index 00000000..6d0571ac
--- /dev/null
+++ b/1095_linux-5.10.96.patch
@@ -0,0 +1,3881 @@
+diff --git a/Documentation/devicetree/bindings/net/can/tcan4x5x.txt b/Documentation/devicetree/bindings/net/can/tcan4x5x.txt
+index 0968b40aef1e8..e3501bfa22e90 100644
+--- a/Documentation/devicetree/bindings/net/can/tcan4x5x.txt
++++ b/Documentation/devicetree/bindings/net/can/tcan4x5x.txt
+@@ -31,7 +31,7 @@ tcan4x5x: tcan4x5x@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		spi-max-frequency = <10000000>;
+-		bosch,mram-cfg = <0x0 0 0 32 0 0 1 1>;
++		bosch,mram-cfg = <0x0 0 0 16 0 0 1 1>;
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <14 IRQ_TYPE_LEVEL_LOW>;
+ 		device-state-gpios = <&gpio3 21 GPIO_ACTIVE_HIGH>;
+diff --git a/Makefile b/Makefile
+index fa98893aae615..c43133c8a5b1f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 95
++SUBLEVEL = 96
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 4999caff32818..22275d8518eb3 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -511,34 +511,26 @@ static void entry_task_switch(struct task_struct *next)
+ 
+ /*
+  * ARM erratum 1418040 handling, affecting the 32bit view of CNTVCT.
+- * Assuming the virtual counter is enabled at the beginning of times:
+- *
+- * - disable access when switching from a 64bit task to a 32bit task
+- * - enable access when switching from a 32bit task to a 64bit task
++ * Ensure access is disabled when switching to a 32bit task, ensure
++ * access is enabled when switching to a 64bit task.
+  */
+-static void erratum_1418040_thread_switch(struct task_struct *prev,
+-					  struct task_struct *next)
++static void erratum_1418040_thread_switch(struct task_struct *next)
+ {
+-	bool prev32, next32;
+-	u64 val;
+-
+-	if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040))
+-		return;
+-
+-	prev32 = is_compat_thread(task_thread_info(prev));
+-	next32 = is_compat_thread(task_thread_info(next));
+-
+-	if (prev32 == next32 || !this_cpu_has_cap(ARM64_WORKAROUND_1418040))
++	if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040) ||
++	    !this_cpu_has_cap(ARM64_WORKAROUND_1418040))
+ 		return;
+ 
+-	val = read_sysreg(cntkctl_el1);
+-
+-	if (!next32)
+-		val |= ARCH_TIMER_USR_VCT_ACCESS_EN;
++	if (is_compat_thread(task_thread_info(next)))
++		sysreg_clear_set(cntkctl_el1, ARCH_TIMER_USR_VCT_ACCESS_EN, 0);
+ 	else
+-		val &= ~ARCH_TIMER_USR_VCT_ACCESS_EN;
++		sysreg_clear_set(cntkctl_el1, 0, ARCH_TIMER_USR_VCT_ACCESS_EN);
++}
+ 
+-	write_sysreg(val, cntkctl_el1);
++static void erratum_1418040_new_exec(void)
++{
++	preempt_disable();
++	erratum_1418040_thread_switch(current);
++	preempt_enable();
+ }
+ 
+ /*
+@@ -556,7 +548,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
+ 	entry_task_switch(next);
+ 	uao_thread_switch(next);
+ 	ssbs_thread_switch(next);
+-	erratum_1418040_thread_switch(prev, next);
++	erratum_1418040_thread_switch(next);
+ 
+ 	/*
+ 	 * Complete any pending TLB or cache maintenance on this CPU in case
+@@ -622,6 +614,7 @@ void arch_setup_new_exec(void)
+ 	current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
+ 
+ 	ptrauth_thread_init_user(current);
++	erratum_1418040_new_exec();
+ 
+ 	if (task_spec_ssb_noexec(current)) {
+ 		arch_prctl_spec_ctrl_set(current, PR_SPEC_STORE_BYPASS,
+diff --git a/arch/powerpc/include/asm/book3s/32/mmu-hash.h b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
+index a8982d52f6b1d..cbde06d0fb380 100644
+--- a/arch/powerpc/include/asm/book3s/32/mmu-hash.h
++++ b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
+@@ -102,6 +102,8 @@ extern s32 patch__hash_page_B, patch__hash_page_C;
+ extern s32 patch__flush_hash_A0, patch__flush_hash_A1, patch__flush_hash_A2;
+ extern s32 patch__flush_hash_B;
+ 
++int __init find_free_bat(void);
++unsigned int bat_block_size(unsigned long base, unsigned long top);
+ #endif /* !__ASSEMBLY__ */
+ 
+ /* We happily ignore the smaller BATs on 601, we don't actually use
+diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
+index a6e3700c4566a..f0c0816f57270 100644
+--- a/arch/powerpc/include/asm/ppc-opcode.h
++++ b/arch/powerpc/include/asm/ppc-opcode.h
+@@ -449,6 +449,7 @@
+ #define PPC_RAW_LDX(r, base, b)		(0x7c00002a | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
+ #define PPC_RAW_LHZ(r, base, i)		(0xa0000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_L(i))
+ #define PPC_RAW_LHBRX(r, base, b)	(0x7c00062c | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
++#define PPC_RAW_LWBRX(r, base, b)	(0x7c00042c | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
+ #define PPC_RAW_LDBRX(r, base, b)	(0x7c000428 | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
+ #define PPC_RAW_STWCX(s, a, b)		(0x7c00012d | ___PPC_RS(s) | ___PPC_RA(a) | ___PPC_RB(b))
+ #define PPC_RAW_CMPWI(a, i)		(0x2c000000 | ___PPC_RA(a) | IMM_L(i))
+diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
+index fe2ef598e2ead..376104c166fcf 100644
+--- a/arch/powerpc/kernel/Makefile
++++ b/arch/powerpc/kernel/Makefile
+@@ -11,6 +11,7 @@ CFLAGS_prom_init.o      += -fPIC
+ CFLAGS_btext.o		+= -fPIC
+ endif
+ 
++CFLAGS_early_32.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+ CFLAGS_cputable.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+ CFLAGS_prom_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+ CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
+index 58991233381ed..0697a0e014ae8 100644
+--- a/arch/powerpc/lib/Makefile
++++ b/arch/powerpc/lib/Makefile
+@@ -19,6 +19,9 @@ CFLAGS_code-patching.o += -DDISABLE_BRANCH_PROFILING
+ CFLAGS_feature-fixups.o += -DDISABLE_BRANCH_PROFILING
+ endif
+ 
++CFLAGS_code-patching.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
++CFLAGS_feature-fixups.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
++
+ obj-y += alloc.o code-patching.o feature-fixups.o pmem.o inst.o test_code-patching.o
+ 
+ ifndef CONFIG_KASAN
+diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
+index a59e7ec981803..602ab13127b40 100644
+--- a/arch/powerpc/mm/book3s32/mmu.c
++++ b/arch/powerpc/mm/book3s32/mmu.c
+@@ -72,7 +72,7 @@ unsigned long p_block_mapped(phys_addr_t pa)
+ 	return 0;
+ }
+ 
+-static int find_free_bat(void)
++int __init find_free_bat(void)
+ {
+ 	int b;
+ 	int n = mmu_has_feature(MMU_FTR_USE_HIGH_BATS) ? 8 : 4;
+@@ -96,7 +96,7 @@ static int find_free_bat(void)
+  * - block size has to be a power of two. This is calculated by finding the
+  *   highest bit set to 1.
+  */
+-static unsigned int block_size(unsigned long base, unsigned long top)
++unsigned int bat_block_size(unsigned long base, unsigned long top)
+ {
+ 	unsigned int max_size = SZ_256M;
+ 	unsigned int base_shift = (ffs(base) - 1) & 31;
+@@ -141,7 +141,7 @@ static unsigned long __init __mmu_mapin_ram(unsigned long base, unsigned long to
+ 	int idx;
+ 
+ 	while ((idx = find_free_bat()) != -1 && base != top) {
+-		unsigned int size = block_size(base, top);
++		unsigned int size = bat_block_size(base, top);
+ 
+ 		if (size < 128 << 10)
+ 			break;
+@@ -201,18 +201,17 @@ void mmu_mark_initmem_nx(void)
+ 	int nb = mmu_has_feature(MMU_FTR_USE_HIGH_BATS) ? 8 : 4;
+ 	int i;
+ 	unsigned long base = (unsigned long)_stext - PAGE_OFFSET;
+-	unsigned long top = (unsigned long)_etext - PAGE_OFFSET;
++	unsigned long top = ALIGN((unsigned long)_etext - PAGE_OFFSET, SZ_128K);
+ 	unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
+ 	unsigned long size;
+ 
+-	for (i = 0; i < nb - 1 && base < top && top - base > (128 << 10);) {
+-		size = block_size(base, top);
++	for (i = 0; i < nb - 1 && base < top;) {
++		size = bat_block_size(base, top);
+ 		setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
+ 		base += size;
+ 	}
+ 	if (base < top) {
+-		size = block_size(base, top);
+-		size = max(size, 128UL << 10);
++		size = bat_block_size(base, top);
+ 		if ((top - base) > size) {
+ 			size <<= 1;
+ 			if (strict_kernel_rwx_enabled() && base + size > border)
+diff --git a/arch/powerpc/mm/kasan/book3s_32.c b/arch/powerpc/mm/kasan/book3s_32.c
+index 35b287b0a8da4..450a67ef0bbe1 100644
+--- a/arch/powerpc/mm/kasan/book3s_32.c
++++ b/arch/powerpc/mm/kasan/book3s_32.c
+@@ -10,48 +10,51 @@ int __init kasan_init_region(void *start, size_t size)
+ {
+ 	unsigned long k_start = (unsigned long)kasan_mem_to_shadow(start);
+ 	unsigned long k_end = (unsigned long)kasan_mem_to_shadow(start + size);
+-	unsigned long k_cur = k_start;
+-	int k_size = k_end - k_start;
+-	int k_size_base = 1 << (ffs(k_size) - 1);
++	unsigned long k_nobat = k_start;
++	unsigned long k_cur;
++	phys_addr_t phys;
+ 	int ret;
+-	void *block;
+ 
+-	block = memblock_alloc(k_size, k_size_base);
+-
+-	if (block && k_size_base >= SZ_128K && k_start == ALIGN(k_start, k_size_base)) {
+-		int shift = ffs(k_size - k_size_base);
+-		int k_size_more = shift ? 1 << (shift - 1) : 0;
+-
+-		setbat(-1, k_start, __pa(block), k_size_base, PAGE_KERNEL);
+-		if (k_size_more >= SZ_128K)
+-			setbat(-1, k_start + k_size_base, __pa(block) + k_size_base,
+-			       k_size_more, PAGE_KERNEL);
+-		if (v_block_mapped(k_start))
+-			k_cur = k_start + k_size_base;
+-		if (v_block_mapped(k_start + k_size_base))
+-			k_cur = k_start + k_size_base + k_size_more;
+-
+-		update_bats();
++	while (k_nobat < k_end) {
++		unsigned int k_size = bat_block_size(k_nobat, k_end);
++		int idx = find_free_bat();
++
++		if (idx == -1)
++			break;
++		if (k_size < SZ_128K)
++			break;
++		phys = memblock_phys_alloc_range(k_size, k_size, 0,
++						 MEMBLOCK_ALLOC_ANYWHERE);
++		if (!phys)
++			break;
++
++		setbat(idx, k_nobat, phys, k_size, PAGE_KERNEL);
++		k_nobat += k_size;
+ 	}
++	if (k_nobat != k_start)
++		update_bats();
+ 
+-	if (!block)
+-		block = memblock_alloc(k_size, PAGE_SIZE);
+-	if (!block)
+-		return -ENOMEM;
++	if (k_nobat < k_end) {
++		phys = memblock_phys_alloc_range(k_end - k_nobat, PAGE_SIZE, 0,
++						 MEMBLOCK_ALLOC_ANYWHERE);
++		if (!phys)
++			return -ENOMEM;
++	}
+ 
+ 	ret = kasan_init_shadow_page_tables(k_start, k_end);
+ 	if (ret)
+ 		return ret;
+ 
+-	kasan_update_early_region(k_start, k_cur, __pte(0));
++	kasan_update_early_region(k_start, k_nobat, __pte(0));
+ 
+-	for (; k_cur < k_end; k_cur += PAGE_SIZE) {
++	for (k_cur = k_nobat; k_cur < k_end; k_cur += PAGE_SIZE) {
+ 		pmd_t *pmd = pmd_off_k(k_cur);
+-		void *va = block + k_cur - k_start;
+-		pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
++		pte_t pte = pfn_pte(PHYS_PFN(phys + k_cur - k_nobat), PAGE_KERNEL);
+ 
+ 		__set_pte_at(&init_mm, k_cur, pte_offset_kernel(pmd, k_cur), pte, 0);
+ 	}
+ 	flush_tlb_kernel_range(k_start, k_end);
++	memset(kasan_mem_to_shadow(start), 0, k_end - k_start);
++
+ 	return 0;
+ }
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 8936090acb579..0d47514e8870d 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -651,17 +651,21 @@ bpf_alu32_trunc:
+ 				EMIT(PPC_RAW_MR(dst_reg, b2p[TMP_REG_1]));
+ 				break;
+ 			case 64:
+-				/*
+-				 * Way easier and faster(?) to store the value
+-				 * into stack and then use ldbrx
+-				 *
+-				 * ctx->seen will be reliable in pass2, but
+-				 * the instructions generated will remain the
+-				 * same across all passes
+-				 */
++				/* Store the value to stack and then use byte-reverse loads */
+ 				PPC_BPF_STL(dst_reg, 1, bpf_jit_stack_local(ctx));
+ 				EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], 1, bpf_jit_stack_local(ctx)));
+-				EMIT(PPC_RAW_LDBRX(dst_reg, 0, b2p[TMP_REG_1]));
++				if (cpu_has_feature(CPU_FTR_ARCH_206)) {
++					EMIT(PPC_RAW_LDBRX(dst_reg, 0, b2p[TMP_REG_1]));
++				} else {
++					EMIT(PPC_RAW_LWBRX(dst_reg, 0, b2p[TMP_REG_1]));
++					if (IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
++						EMIT(PPC_RAW_SLDI(dst_reg, dst_reg, 32));
++					EMIT(PPC_RAW_LI(b2p[TMP_REG_2], 4));
++					EMIT(PPC_RAW_LWBRX(b2p[TMP_REG_2], b2p[TMP_REG_2], b2p[TMP_REG_1]));
++					if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
++						EMIT(PPC_RAW_SLDI(b2p[TMP_REG_2], b2p[TMP_REG_2], 32));
++					EMIT(PPC_RAW_OR(dst_reg, dst_reg, b2p[TMP_REG_2]));
++				}
+ 				break;
+ 			}
+ 			break;
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index bd34e062bd290..e49aa8fc6a491 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -1273,9 +1273,20 @@ static void power_pmu_disable(struct pmu *pmu)
+ 		 * Otherwise provide a warning if there is PMI pending, but
+ 		 * no counter is found overflown.
+ 		 */
+-		if (any_pmc_overflown(cpuhw))
+-			clear_pmi_irq_pending();
+-		else
++		if (any_pmc_overflown(cpuhw)) {
++			/*
++			 * Since power_pmu_disable runs under local_irq_save, it
++			 * could happen that code hits a PMC overflow without PMI
++			 * pending in paca. Hence only clear PMI pending if it was
++			 * set.
++			 *
++			 * If a PMI is pending, then MSR[EE] must be disabled (because
++			 * the masked PMI handler disabling EE). So it is safe to
++			 * call clear_pmi_irq_pending().
++			 */
++			if (pmi_irq_pending())
++				clear_pmi_irq_pending();
++		} else
+ 			WARN_ON(pmi_irq_pending());
+ 
+ 		val = mmcra = cpuhw->mmcr.mmcra;
+diff --git a/arch/s390/hypfs/hypfs_vm.c b/arch/s390/hypfs/hypfs_vm.c
+index e1fcc03159ef2..a927adccb4ba7 100644
+--- a/arch/s390/hypfs/hypfs_vm.c
++++ b/arch/s390/hypfs/hypfs_vm.c
+@@ -20,6 +20,7 @@
+ 
+ static char local_guest[] = "        ";
+ static char all_guests[] = "*       ";
++static char *all_groups = all_guests;
+ static char *guest_query;
+ 
+ struct diag2fc_data {
+@@ -62,10 +63,11 @@ static int diag2fc(int size, char* query, void *addr)
+ 
+ 	memcpy(parm_list.userid, query, NAME_LEN);
+ 	ASCEBC(parm_list.userid, NAME_LEN);
+-	parm_list.addr = (unsigned long) addr ;
++	memcpy(parm_list.aci_grp, all_groups, NAME_LEN);
++	ASCEBC(parm_list.aci_grp, NAME_LEN);
++	parm_list.addr = (unsigned long)addr;
+ 	parm_list.size = size;
+ 	parm_list.fmt = 0x02;
+-	memset(parm_list.aci_grp, 0x40, NAME_LEN);
+ 	rc = -1;
+ 
+ 	diag_stat_inc(DIAG_STAT_X2FC);
+diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
+index 4055f1c498147..b81bc96216b97 100644
+--- a/arch/s390/kernel/module.c
++++ b/arch/s390/kernel/module.c
+@@ -30,7 +30,7 @@
+ #define DEBUGP(fmt , ...)
+ #endif
+ 
+-#define PLT_ENTRY_SIZE 20
++#define PLT_ENTRY_SIZE 22
+ 
+ void *module_alloc(unsigned long size)
+ {
+@@ -330,27 +330,26 @@ static int apply_rela(Elf_Rela *rela, Elf_Addr base, Elf_Sym *symtab,
+ 	case R_390_PLTOFF32:	/* 32 bit offset from GOT to PLT. */
+ 	case R_390_PLTOFF64:	/* 16 bit offset from GOT to PLT. */
+ 		if (info->plt_initialized == 0) {
+-			unsigned int insn[5];
+-			unsigned int *ip = me->core_layout.base +
+-					   me->arch.plt_offset +
+-					   info->plt_offset;
+-
+-			insn[0] = 0x0d10e310;	/* basr 1,0  */
+-			insn[1] = 0x100a0004;	/* lg	1,10(1) */
++			unsigned char insn[PLT_ENTRY_SIZE];
++			char *plt_base;
++			char *ip;
++
++			plt_base = me->core_layout.base + me->arch.plt_offset;
++			ip = plt_base + info->plt_offset;
++			*(int *)insn = 0x0d10e310;	/* basr 1,0  */
++			*(int *)&insn[4] = 0x100c0004;	/* lg	1,12(1) */
+ 			if (IS_ENABLED(CONFIG_EXPOLINE) && !nospec_disable) {
+-				unsigned int *ij;
+-				ij = me->core_layout.base +
+-					me->arch.plt_offset +
+-					me->arch.plt_size - PLT_ENTRY_SIZE;
+-				insn[2] = 0xa7f40000 +	/* j __jump_r1 */
+-					(unsigned int)(u16)
+-					(((unsigned long) ij - 8 -
+-					  (unsigned long) ip) / 2);
++				char *jump_r1;
++
++				jump_r1 = plt_base + me->arch.plt_size -
++					PLT_ENTRY_SIZE;
++				/* brcl	0xf,__jump_r1 */
++				*(short *)&insn[8] = 0xc0f4;
++				*(int *)&insn[10] = (jump_r1 - (ip + 8)) / 2;
+ 			} else {
+-				insn[2] = 0x07f10000;	/* br %r1 */
++				*(int *)&insn[8] = 0x07f10000;	/* br %r1 */
+ 			}
+-			insn[3] = (unsigned int) (val >> 32);
+-			insn[4] = (unsigned int) val;
++			*(long *)&insn[14] = val;
+ 
+ 			write(ip, insn, sizeof(insn));
+ 			info->plt_initialized = 1;
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index ba26792d96731..03c8047bebb38 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -5239,7 +5239,7 @@ static struct intel_uncore_type icx_uncore_imc = {
+ 	.fixed_ctr_bits	= 48,
+ 	.fixed_ctr	= SNR_IMC_MMIO_PMON_FIXED_CTR,
+ 	.fixed_ctl	= SNR_IMC_MMIO_PMON_FIXED_CTL,
+-	.event_descs	= hswep_uncore_imc_events,
++	.event_descs	= snr_uncore_imc_events,
+ 	.perf_ctr	= SNR_IMC_MMIO_PMON_CTR0,
+ 	.event_ctl	= SNR_IMC_MMIO_PMON_CTL0,
+ 	.event_mask	= SNBEP_PMON_RAW_EVENT_MASK,
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index 0c6b02dd744c1..f73f1184b1c13 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -387,7 +387,7 @@ static void threshold_restart_bank(void *_tr)
+ 	u32 hi, lo;
+ 
+ 	/* sysfs write might race against an offline operation */
+-	if (this_cpu_read(threshold_banks))
++	if (!this_cpu_read(threshold_banks) && !tr->set_lvt_off)
+ 		return;
+ 
+ 	rdmsr(tr->b->address, lo, hi);
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 5e1d7396a6b8a..2e6332af98aba 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -4146,13 +4146,6 @@ static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, void *insn, int i
+ 	if (likely(!insn || insn_len))
+ 		return true;
+ 
+-	/*
+-	 * If RIP is invalid, go ahead with emulation which will cause an
+-	 * internal error exit.
+-	 */
+-	if (!kvm_vcpu_gfn_to_memslot(vcpu, kvm_rip_read(vcpu) >> PAGE_SHIFT))
+-		return true;
+-
+ 	cr4 = kvm_read_cr4(vcpu);
+ 	smep = cr4 & X86_CR4_SMEP;
+ 	smap = cr4 & X86_CR4_SMAP;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 271669dc8d90a..7871b8e84b368 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3171,6 +3171,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		if (data & ~supported_xss)
+ 			return 1;
+ 		vcpu->arch.ia32_xss = data;
++		kvm_update_cpuid_runtime(vcpu);
+ 		break;
+ 	case MSR_SMI_COUNT:
+ 		if (!msr_info->host_initiated)
+diff --git a/block/bio.c b/block/bio.c
+index 0703a208ca248..f8d26ce7b61b0 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -575,7 +575,8 @@ void bio_truncate(struct bio *bio, unsigned new_size)
+ 				offset = new_size - done;
+ 			else
+ 				offset = 0;
+-			zero_user(bv.bv_page, offset, bv.bv_len - offset);
++			zero_user(bv.bv_page, bv.bv_offset + offset,
++				  bv.bv_len - offset);
+ 			truncated = true;
+ 		}
+ 		done += bv.bv_len;
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 847f33ffc4aed..9fa86288b78a9 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -719,6 +719,13 @@ void __init efi_systab_report_header(const efi_table_hdr_t *systab_hdr,
+ 		systab_hdr->revision >> 16,
+ 		systab_hdr->revision & 0xffff,
+ 		vendor);
++
++	if (IS_ENABLED(CONFIG_X86_64) &&
++	    systab_hdr->revision > EFI_1_10_SYSTEM_TABLE_REVISION &&
++	    !strcmp(vendor, "Apple")) {
++		pr_info("Apple Mac detected, using EFI v1.10 runtime services only\n");
++		efi.runtime_version = EFI_1_10_SYSTEM_TABLE_REVISION;
++	}
+ }
+ 
+ static __initdata char memory_type_name[][13] = {
+diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
+index c1b57dfb12776..415a971e76947 100644
+--- a/drivers/firmware/efi/libstub/arm64-stub.c
++++ b/drivers/firmware/efi/libstub/arm64-stub.c
+@@ -119,9 +119,9 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
+ 	if (image->image_base != _text)
+ 		efi_err("FIRMWARE BUG: efi_loaded_image_t::image_base has bogus value\n");
+ 
+-	if (!IS_ALIGNED((u64)_text, EFI_KIMG_ALIGN))
+-		efi_err("FIRMWARE BUG: kernel image not aligned on %ldk boundary\n",
+-			EFI_KIMG_ALIGN >> 10);
++	if (!IS_ALIGNED((u64)_text, SEGMENT_ALIGN))
++		efi_err("FIRMWARE BUG: kernel image not aligned on %dk boundary\n",
++			SEGMENT_ALIGN >> 10);
+ 
+ 	kernel_size = _edata - _text;
+ 	kernel_memsize = kernel_size + (_end - _edata);
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+index ed2c50011d445..ddf539f26f2da 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+@@ -469,8 +469,8 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (args->stream_size > SZ_64K || args->nr_relocs > SZ_64K ||
+-	    args->nr_bos > SZ_64K || args->nr_pmrs > 128) {
++	if (args->stream_size > SZ_128K || args->nr_relocs > SZ_128K ||
++	    args->nr_bos > SZ_128K || args->nr_pmrs > 128) {
+ 		DRM_ERROR("submit arguments out of size limits\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c
+index a7a24539921f3..a6efc11eba93f 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c
+@@ -26,9 +26,16 @@ static void dpu_setup_dspp_pcc(struct dpu_hw_dspp *ctx,
+ 		struct dpu_hw_pcc_cfg *cfg)
+ {
+ 
+-	u32 base = ctx->cap->sblk->pcc.base;
++	u32 base;
+ 
+-	if (!ctx || !base) {
++	if (!ctx) {
++		DRM_ERROR("invalid ctx %pK\n", ctx);
++		return;
++	}
++
++	base = ctx->cap->sblk->pcc.base;
++
++	if (!base) {
+ 		DRM_ERROR("invalid ctx %pK pcc base 0x%x\n", ctx, base);
+ 		return;
+ 	}
+diff --git a/drivers/gpu/drm/msm/dsi/dsi.c b/drivers/gpu/drm/msm/dsi/dsi.c
+index 1adead764feed..f845333593daa 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi.c
++++ b/drivers/gpu/drm/msm/dsi/dsi.c
+@@ -33,7 +33,12 @@ static int dsi_get_phy(struct msm_dsi *msm_dsi)
+ 
+ 	of_node_put(phy_node);
+ 
+-	if (!phy_pdev || !msm_dsi->phy) {
++	if (!phy_pdev) {
++		DRM_DEV_ERROR(&pdev->dev, "%s: phy driver is not ready\n", __func__);
++		return -EPROBE_DEFER;
++	}
++	if (!msm_dsi->phy) {
++		put_device(&phy_pdev->dev);
+ 		DRM_DEV_ERROR(&pdev->dev, "%s: phy driver is not ready\n", __func__);
+ 		return -EPROBE_DEFER;
+ 	}
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+index e8c1a727179cc..e07986ab52c22 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+@@ -769,12 +769,14 @@ void __exit msm_dsi_phy_driver_unregister(void)
+ int msm_dsi_phy_enable(struct msm_dsi_phy *phy, int src_pll_id,
+ 			struct msm_dsi_phy_clk_request *clk_req)
+ {
+-	struct device *dev = &phy->pdev->dev;
++	struct device *dev;
+ 	int ret;
+ 
+ 	if (!phy || !phy->cfg->ops.enable)
+ 		return -EINVAL;
+ 
++	dev = &phy->pdev->dev;
++
+ 	ret = dsi_phy_enable_resource(phy);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "%s: resource enable failed, %d\n",
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
+index 737453b6e5966..94f948ef279d1 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
+@@ -97,10 +97,15 @@ static int msm_hdmi_get_phy(struct hdmi *hdmi)
+ 
+ 	of_node_put(phy_node);
+ 
+-	if (!phy_pdev || !hdmi->phy) {
++	if (!phy_pdev) {
+ 		DRM_DEV_ERROR(&pdev->dev, "phy driver is not ready\n");
+ 		return -EPROBE_DEFER;
+ 	}
++	if (!hdmi->phy) {
++		DRM_DEV_ERROR(&pdev->dev, "phy driver is not ready\n");
++		put_device(&phy_pdev->dev);
++		return -EPROBE_DEFER;
++	}
+ 
+ 	hdmi->phy_dev = get_device(&phy_pdev->dev);
+ 
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 33e42b2f9cfcb..e37e5afc680a2 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -350,7 +350,7 @@ static int msm_init_vram(struct drm_device *dev)
+ 		of_node_put(node);
+ 		if (ret)
+ 			return ret;
+-		size = r.end - r.start;
++		size = r.end - r.start + 1;
+ 		DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start);
+ 
+ 		/* if we have no IOMMU, then we need to use carveout allocator.
+diff --git a/drivers/hwmon/lm90.c b/drivers/hwmon/lm90.c
+index 959446b0137bc..a7142c32889c0 100644
+--- a/drivers/hwmon/lm90.c
++++ b/drivers/hwmon/lm90.c
+@@ -373,7 +373,7 @@ static const struct lm90_params lm90_params[] = {
+ 		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
+ 		  | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7c,
+-		.max_convrate = 8,
++		.max_convrate = 7,
+ 	},
+ 	[lm86] = {
+ 		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
+@@ -394,12 +394,13 @@ static const struct lm90_params lm90_params[] = {
+ 		.max_convrate = 9,
+ 	},
+ 	[max6646] = {
+-		.flags = LM90_HAVE_CRIT,
++		.flags = LM90_HAVE_CRIT | LM90_HAVE_BROKEN_ALERT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 6,
+ 		.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
+ 	},
+ 	[max6654] = {
++		.flags = LM90_HAVE_BROKEN_ALERT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 7,
+ 		.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
+@@ -418,7 +419,7 @@ static const struct lm90_params lm90_params[] = {
+ 	},
+ 	[max6680] = {
+ 		.flags = LM90_HAVE_OFFSET | LM90_HAVE_CRIT
+-		  | LM90_HAVE_CRIT_ALRM_SWP,
++		  | LM90_HAVE_CRIT_ALRM_SWP | LM90_HAVE_BROKEN_ALERT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 7,
+ 	},
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index 1d621f7769035..62d11c6e41d60 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -375,8 +375,6 @@ static int venus_remove(struct platform_device *pdev)
+ 
+ 	hfi_destroy(core);
+ 
+-	v4l2_device_unregister(&core->v4l2_dev);
+-
+ 	mutex_destroy(&core->pm_lock);
+ 	mutex_destroy(&core->lock);
+ 	venus_dbgfs_deinit(core);
+diff --git a/drivers/mtd/nand/raw/mpc5121_nfc.c b/drivers/mtd/nand/raw/mpc5121_nfc.c
+index cb293c50acb87..5b9271b9c3265 100644
+--- a/drivers/mtd/nand/raw/mpc5121_nfc.c
++++ b/drivers/mtd/nand/raw/mpc5121_nfc.c
+@@ -291,7 +291,6 @@ static int ads5121_chipselect_init(struct mtd_info *mtd)
+ /* Control chips select signal on ADS5121 board */
+ static void ads5121_select_chip(struct nand_chip *nand, int chip)
+ {
+-	struct mtd_info *mtd = nand_to_mtd(nand);
+ 	struct mpc5121_nfc_prv *prv = nand_get_controller_data(nand);
+ 	u8 v;
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 6e7da1dc2e8c3..d6580e942724d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2382,8 +2382,7 @@ static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data)
+ 		break;
+ 	}
+ 
+-	if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER)
+-		hclgevf_enable_vector(&hdev->misc_vector, true);
++	hclgevf_enable_vector(&hdev->misc_vector, true);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 4f99d97638248..c7be7ab131b19 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -3401,11 +3401,25 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
+ 	struct device *dev = &adapter->vdev->dev;
+ 	union ibmvnic_crq crq;
+ 	int max_entries;
++	int cap_reqs;
++
++	/* We send out 6 or 7 REQUEST_CAPABILITY CRQs below (depending on
++	 * the PROMISC flag). Initialize this count upfront. When the tasklet
++	 * receives a response to all of these, it will send the next protocol
++	 * message (QUERY_IP_OFFLOAD).
++	 */
++	if (!(adapter->netdev->flags & IFF_PROMISC) ||
++	    adapter->promisc_supported)
++		cap_reqs = 7;
++	else
++		cap_reqs = 6;
+ 
+ 	if (!retry) {
+ 		/* Sub-CRQ entries are 32 byte long */
+ 		int entries_page = 4 * PAGE_SIZE / (sizeof(u64) * 4);
+ 
++		atomic_set(&adapter->running_cap_crqs, cap_reqs);
++
+ 		if (adapter->min_tx_entries_per_subcrq > entries_page ||
+ 		    adapter->min_rx_add_entries_per_subcrq > entries_page) {
+ 			dev_err(dev, "Fatal, invalid entries per sub-crq\n");
+@@ -3466,44 +3480,45 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
+ 					adapter->opt_rx_comp_queues;
+ 
+ 		adapter->req_rx_add_queues = adapter->max_rx_add_queues;
++	} else {
++		atomic_add(cap_reqs, &adapter->running_cap_crqs);
+ 	}
+-
+ 	memset(&crq, 0, sizeof(crq));
+ 	crq.request_capability.first = IBMVNIC_CRQ_CMD;
+ 	crq.request_capability.cmd = REQUEST_CAPABILITY;
+ 
+ 	crq.request_capability.capability = cpu_to_be16(REQ_TX_QUEUES);
+ 	crq.request_capability.number = cpu_to_be64(adapter->req_tx_queues);
+-	atomic_inc(&adapter->running_cap_crqs);
++	cap_reqs--;
+ 	ibmvnic_send_crq(adapter, &crq);
+ 
+ 	crq.request_capability.capability = cpu_to_be16(REQ_RX_QUEUES);
+ 	crq.request_capability.number = cpu_to_be64(adapter->req_rx_queues);
+-	atomic_inc(&adapter->running_cap_crqs);
++	cap_reqs--;
+ 	ibmvnic_send_crq(adapter, &crq);
+ 
+ 	crq.request_capability.capability = cpu_to_be16(REQ_RX_ADD_QUEUES);
+ 	crq.request_capability.number = cpu_to_be64(adapter->req_rx_add_queues);
+-	atomic_inc(&adapter->running_cap_crqs);
++	cap_reqs--;
+ 	ibmvnic_send_crq(adapter, &crq);
+ 
+ 	crq.request_capability.capability =
+ 	    cpu_to_be16(REQ_TX_ENTRIES_PER_SUBCRQ);
+ 	crq.request_capability.number =
+ 	    cpu_to_be64(adapter->req_tx_entries_per_subcrq);
+-	atomic_inc(&adapter->running_cap_crqs);
++	cap_reqs--;
+ 	ibmvnic_send_crq(adapter, &crq);
+ 
+ 	crq.request_capability.capability =
+ 	    cpu_to_be16(REQ_RX_ADD_ENTRIES_PER_SUBCRQ);
+ 	crq.request_capability.number =
+ 	    cpu_to_be64(adapter->req_rx_add_entries_per_subcrq);
+-	atomic_inc(&adapter->running_cap_crqs);
++	cap_reqs--;
+ 	ibmvnic_send_crq(adapter, &crq);
+ 
+ 	crq.request_capability.capability = cpu_to_be16(REQ_MTU);
+ 	crq.request_capability.number = cpu_to_be64(adapter->req_mtu);
+-	atomic_inc(&adapter->running_cap_crqs);
++	cap_reqs--;
+ 	ibmvnic_send_crq(adapter, &crq);
+ 
+ 	if (adapter->netdev->flags & IFF_PROMISC) {
+@@ -3511,16 +3526,21 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
+ 			crq.request_capability.capability =
+ 			    cpu_to_be16(PROMISC_REQUESTED);
+ 			crq.request_capability.number = cpu_to_be64(1);
+-			atomic_inc(&adapter->running_cap_crqs);
++			cap_reqs--;
+ 			ibmvnic_send_crq(adapter, &crq);
+ 		}
+ 	} else {
+ 		crq.request_capability.capability =
+ 		    cpu_to_be16(PROMISC_REQUESTED);
+ 		crq.request_capability.number = cpu_to_be64(0);
+-		atomic_inc(&adapter->running_cap_crqs);
++		cap_reqs--;
+ 		ibmvnic_send_crq(adapter, &crq);
+ 	}
++
++	/* Keep at end to catch any discrepancy between expected and actual
++	 * CRQs sent.
++	 */
++	WARN_ON(cap_reqs != 0);
+ }
+ 
+ static int pending_scrq(struct ibmvnic_adapter *adapter,
+@@ -3953,118 +3973,132 @@ static void send_query_map(struct ibmvnic_adapter *adapter)
+ static void send_query_cap(struct ibmvnic_adapter *adapter)
+ {
+ 	union ibmvnic_crq crq;
++	int cap_reqs;
++
++	/* We send out 25 QUERY_CAPABILITY CRQs below.  Initialize this count
++	 * upfront. When the tasklet receives a response to all of these, it
++	 * can send out the next protocol messaage (REQUEST_CAPABILITY).
++	 */
++	cap_reqs = 25;
++
++	atomic_set(&adapter->running_cap_crqs, cap_reqs);
+ 
+-	atomic_set(&adapter->running_cap_crqs, 0);
+ 	memset(&crq, 0, sizeof(crq));
+ 	crq.query_capability.first = IBMVNIC_CRQ_CMD;
+ 	crq.query_capability.cmd = QUERY_CAPABILITY;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MIN_TX_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MIN_RX_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MIN_RX_ADD_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MAX_TX_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MAX_RX_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MAX_RX_ADD_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 	    cpu_to_be16(MIN_TX_ENTRIES_PER_SUBCRQ);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 	    cpu_to_be16(MIN_RX_ADD_ENTRIES_PER_SUBCRQ);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 	    cpu_to_be16(MAX_TX_ENTRIES_PER_SUBCRQ);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 	    cpu_to_be16(MAX_RX_ADD_ENTRIES_PER_SUBCRQ);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(TCP_IP_OFFLOAD);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(PROMISC_SUPPORTED);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MIN_MTU);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MAX_MTU);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MAX_MULTICAST_FILTERS);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(VLAN_HEADER_INSERTION);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(RX_VLAN_HEADER_INSERTION);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MAX_TX_SG_ENTRIES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(RX_SG_SUPPORTED);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(OPT_TX_COMP_SUB_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(OPT_RX_COMP_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 			cpu_to_be16(OPT_RX_BUFADD_Q_PER_RX_COMP_Q);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 			cpu_to_be16(OPT_TX_ENTRIES_PER_SUBCRQ);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 			cpu_to_be16(OPT_RXBA_ENTRIES_PER_SUBCRQ);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(TX_RX_DESC_REQ);
+-	atomic_inc(&adapter->running_cap_crqs);
++
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
++
++	/* Keep at end to catch any discrepancy between expected and actual
++	 * CRQs sent.
++	 */
++	WARN_ON(cap_reqs != 0);
+ }
+ 
+ static void send_query_ip_offload(struct ibmvnic_adapter *adapter)
+@@ -4369,6 +4403,8 @@ static void handle_request_cap_rsp(union ibmvnic_crq *crq,
+ 	char *name;
+ 
+ 	atomic_dec(&adapter->running_cap_crqs);
++	netdev_dbg(adapter->netdev, "Outstanding request-caps: %d\n",
++		   atomic_read(&adapter->running_cap_crqs));
+ 	switch (be16_to_cpu(crq->request_capability_rsp.capability)) {
+ 	case REQ_TX_QUEUES:
+ 		req_value = &adapter->req_tx_queues;
+@@ -5039,12 +5075,6 @@ static void ibmvnic_tasklet(struct tasklet_struct *t)
+ 			ibmvnic_handle_crq(crq, adapter);
+ 			crq->generic.first = 0;
+ 		}
+-
+-		/* remain in tasklet until all
+-		 * capabilities responses are received
+-		 */
+-		if (!adapter->wait_capability)
+-			done = true;
+ 	}
+ 	/* if capabilities CRQ's were sent in this tasklet, the following
+ 	 * tasklet must wait until all responses are received
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index 5b83d1bc0e74d..effdc3361266f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -172,7 +172,6 @@ enum i40e_interrupt_policy {
+ 
+ struct i40e_lump_tracking {
+ 	u16 num_entries;
+-	u16 search_hint;
+ 	u16 list[0];
+ #define I40E_PILE_VALID_BIT  0x8000
+ #define I40E_IWARP_IRQ_PILE_ID  (I40E_PILE_VALID_BIT - 2)
+@@ -755,12 +754,12 @@ struct i40e_vsi {
+ 	struct rtnl_link_stats64 net_stats_offsets;
+ 	struct i40e_eth_stats eth_stats;
+ 	struct i40e_eth_stats eth_stats_offsets;
+-	u32 tx_restart;
+-	u32 tx_busy;
++	u64 tx_restart;
++	u64 tx_busy;
+ 	u64 tx_linearize;
+ 	u64 tx_force_wb;
+-	u32 rx_buf_failed;
+-	u32 rx_page_failed;
++	u64 rx_buf_failed;
++	u64 rx_page_failed;
+ 
+ 	/* These are containers of ring pointers, allocated at run-time */
+ 	struct i40e_ring **rx_rings;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+index 714b578b2b49c..1114a15a9ce3c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+@@ -240,7 +240,7 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
+ 		 (unsigned long int)vsi->net_stats_offsets.rx_compressed,
+ 		 (unsigned long int)vsi->net_stats_offsets.tx_compressed);
+ 	dev_info(&pf->pdev->dev,
+-		 "    tx_restart = %d, tx_busy = %d, rx_buf_failed = %d, rx_page_failed = %d\n",
++		 "    tx_restart = %llu, tx_busy = %llu, rx_buf_failed = %llu, rx_page_failed = %llu\n",
+ 		 vsi->tx_restart, vsi->tx_busy,
+ 		 vsi->rx_buf_failed, vsi->rx_page_failed);
+ 	rcu_read_lock();
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index f888a443a067b..bd18a780a0008 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -195,10 +195,6 @@ int i40e_free_virt_mem_d(struct i40e_hw *hw, struct i40e_virt_mem *mem)
+  * @id: an owner id to stick on the items assigned
+  *
+  * Returns the base item index of the lump, or negative for error
+- *
+- * The search_hint trick and lack of advanced fit-finding only work
+- * because we're highly likely to have all the same size lump requests.
+- * Linear search time and any fragmentation should be minimal.
+  **/
+ static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
+ 			 u16 needed, u16 id)
+@@ -213,8 +209,21 @@ static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
+ 		return -EINVAL;
+ 	}
+ 
+-	/* start the linear search with an imperfect hint */
+-	i = pile->search_hint;
++	/* Allocate last queue in the pile for FDIR VSI queue
++	 * so it doesn't fragment the qp_pile
++	 */
++	if (pile == pf->qp_pile && pf->vsi[id]->type == I40E_VSI_FDIR) {
++		if (pile->list[pile->num_entries - 1] & I40E_PILE_VALID_BIT) {
++			dev_err(&pf->pdev->dev,
++				"Cannot allocate queue %d for I40E_VSI_FDIR\n",
++				pile->num_entries - 1);
++			return -ENOMEM;
++		}
++		pile->list[pile->num_entries - 1] = id | I40E_PILE_VALID_BIT;
++		return pile->num_entries - 1;
++	}
++
++	i = 0;
+ 	while (i < pile->num_entries) {
+ 		/* skip already allocated entries */
+ 		if (pile->list[i] & I40E_PILE_VALID_BIT) {
+@@ -233,7 +242,6 @@ static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
+ 			for (j = 0; j < needed; j++)
+ 				pile->list[i+j] = id | I40E_PILE_VALID_BIT;
+ 			ret = i;
+-			pile->search_hint = i + j;
+ 			break;
+ 		}
+ 
+@@ -256,7 +264,7 @@ static int i40e_put_lump(struct i40e_lump_tracking *pile, u16 index, u16 id)
+ {
+ 	int valid_id = (id | I40E_PILE_VALID_BIT);
+ 	int count = 0;
+-	int i;
++	u16 i;
+ 
+ 	if (!pile || index >= pile->num_entries)
+ 		return -EINVAL;
+@@ -268,8 +276,6 @@ static int i40e_put_lump(struct i40e_lump_tracking *pile, u16 index, u16 id)
+ 		count++;
+ 	}
+ 
+-	if (count && index < pile->search_hint)
+-		pile->search_hint = index;
+ 
+ 	return count;
+ }
+@@ -771,9 +777,9 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
+ 	struct rtnl_link_stats64 *ns;   /* netdev stats */
+ 	struct i40e_eth_stats *oes;
+ 	struct i40e_eth_stats *es;     /* device's eth stats */
+-	u32 tx_restart, tx_busy;
++	u64 tx_restart, tx_busy;
+ 	struct i40e_ring *p;
+-	u32 rx_page, rx_buf;
++	u64 rx_page, rx_buf;
+ 	u64 bytes, packets;
+ 	unsigned int start;
+ 	u64 tx_linearize;
+@@ -10130,15 +10136,9 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ 	}
+ 	i40e_get_oem_version(&pf->hw);
+ 
+-	if (test_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state) &&
+-	    ((hw->aq.fw_maj_ver == 4 && hw->aq.fw_min_ver <= 33) ||
+-	     hw->aq.fw_maj_ver < 4) && hw->mac.type == I40E_MAC_XL710) {
+-		/* The following delay is necessary for 4.33 firmware and older
+-		 * to recover after EMP reset. 200 ms should suffice but we
+-		 * put here 300 ms to be sure that FW is ready to operate
+-		 * after reset.
+-		 */
+-		mdelay(300);
++	if (test_and_clear_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state)) {
++		/* The following delay is necessary for firmware update. */
++		mdelay(1000);
+ 	}
+ 
+ 	/* re-verify the eeprom if we just had an EMP reset */
+@@ -11327,7 +11327,6 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
+ 		return -ENOMEM;
+ 
+ 	pf->irq_pile->num_entries = vectors;
+-	pf->irq_pile->search_hint = 0;
+ 
+ 	/* track first vector for misc interrupts, ignore return */
+ 	(void)i40e_get_lump(pf, pf->irq_pile, 1, I40E_PILE_VALID_BIT - 1);
+@@ -12130,7 +12129,6 @@ static int i40e_sw_init(struct i40e_pf *pf)
+ 		goto sw_init_done;
+ 	}
+ 	pf->qp_pile->num_entries = pf->hw.func_caps.num_tx_qp;
+-	pf->qp_pile->search_hint = 0;
+ 
+ 	pf->tx_timeout_recovery_level = 1;
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_register.h b/drivers/net/ethernet/intel/i40e/i40e_register.h
+index 564df22f3f463..8335f151ceefc 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_register.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_register.h
+@@ -279,6 +279,9 @@
+ #define I40E_VFINT_DYN_CTLN(_INTVF) (0x00024800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */
+ #define I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT 1
+ #define I40E_VFINT_DYN_CTLN_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT)
++#define I40E_VFINT_ICR0_ADMINQ_SHIFT 30
++#define I40E_VFINT_ICR0_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ADMINQ_SHIFT)
++#define I40E_VFINT_ICR0_ENA(_VF) (0x0002C000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
+ #define I40E_VPINT_AEQCTL(_VF) (0x0002B800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
+ #define I40E_VPINT_AEQCTL_MSIX_INDX_SHIFT 0
+ #define I40E_VPINT_AEQCTL_ITR_INDX_SHIFT 11
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 65c4c4fd359fa..f71b7334e2955 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1323,6 +1323,32 @@ static i40e_status i40e_config_vf_promiscuous_mode(struct i40e_vf *vf,
+ 	return aq_ret;
+ }
+ 
++/**
++ * i40e_sync_vfr_reset
++ * @hw: pointer to hw struct
++ * @vf_id: VF identifier
++ *
++ * Before trigger hardware reset, we need to know if no other process has
++ * reserved the hardware for any reset operations. This check is done by
++ * examining the status of the RSTAT1 register used to signal the reset.
++ **/
++static int i40e_sync_vfr_reset(struct i40e_hw *hw, int vf_id)
++{
++	u32 reg;
++	int i;
++
++	for (i = 0; i < I40E_VFR_WAIT_COUNT; i++) {
++		reg = rd32(hw, I40E_VFINT_ICR0_ENA(vf_id)) &
++			   I40E_VFINT_ICR0_ADMINQ_MASK;
++		if (reg)
++			return 0;
++
++		usleep_range(100, 200);
++	}
++
++	return -EAGAIN;
++}
++
+ /**
+  * i40e_trigger_vf_reset
+  * @vf: pointer to the VF structure
+@@ -1337,9 +1363,11 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr)
+ 	struct i40e_pf *pf = vf->pf;
+ 	struct i40e_hw *hw = &pf->hw;
+ 	u32 reg, reg_idx, bit_idx;
++	bool vf_active;
++	u32 radq;
+ 
+ 	/* warn the VF */
+-	clear_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states);
++	vf_active = test_and_clear_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states);
+ 
+ 	/* Disable VF's configuration API during reset. The flag is re-enabled
+ 	 * in i40e_alloc_vf_res(), when it's safe again to access VF's VSI.
+@@ -1353,7 +1381,19 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr)
+ 	 * just need to clean up, so don't hit the VFRTRIG register.
+ 	 */
+ 	if (!flr) {
+-		/* reset VF using VPGEN_VFRTRIG reg */
++		/* Sync VFR reset before trigger next one */
++		radq = rd32(hw, I40E_VFINT_ICR0_ENA(vf->vf_id)) &
++			    I40E_VFINT_ICR0_ADMINQ_MASK;
++		if (vf_active && !radq)
++			/* waiting for finish reset by virtual driver */
++			if (i40e_sync_vfr_reset(hw, vf->vf_id))
++				dev_info(&pf->pdev->dev,
++					 "Reset VF %d never finished\n",
++				vf->vf_id);
++
++		/* Reset VF using VPGEN_VFRTRIG reg. It is also setting
++		 * in progress state in rstat1 register.
++		 */
+ 		reg = rd32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id));
+ 		reg |= I40E_VPGEN_VFRTRIG_VFSWR_MASK;
+ 		wr32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id), reg);
+@@ -2563,6 +2603,59 @@ error_param:
+ 				       aq_ret);
+ }
+ 
++/**
++ * i40e_check_enough_queue - find big enough queue number
++ * @vf: pointer to the VF info
++ * @needed: the number of items needed
++ *
++ * Returns the base item index of the queue, or negative for error
++ **/
++static int i40e_check_enough_queue(struct i40e_vf *vf, u16 needed)
++{
++	unsigned int  i, cur_queues, more, pool_size;
++	struct i40e_lump_tracking *pile;
++	struct i40e_pf *pf = vf->pf;
++	struct i40e_vsi *vsi;
++
++	vsi = pf->vsi[vf->lan_vsi_idx];
++	cur_queues = vsi->alloc_queue_pairs;
++
++	/* if current allocated queues are enough for need */
++	if (cur_queues >= needed)
++		return vsi->base_queue;
++
++	pile = pf->qp_pile;
++	if (cur_queues > 0) {
++		/* if the allocated queues are not zero
++		 * just check if there are enough queues for more
++		 * behind the allocated queues.
++		 */
++		more = needed - cur_queues;
++		for (i = vsi->base_queue + cur_queues;
++			i < pile->num_entries; i++) {
++			if (pile->list[i] & I40E_PILE_VALID_BIT)
++				break;
++
++			if (more-- == 1)
++				/* there is enough */
++				return vsi->base_queue;
++		}
++	}
++
++	pool_size = 0;
++	for (i = 0; i < pile->num_entries; i++) {
++		if (pile->list[i] & I40E_PILE_VALID_BIT) {
++			pool_size = 0;
++			continue;
++		}
++		if (needed <= ++pool_size)
++			/* there is enough */
++			return i;
++	}
++
++	return -ENOMEM;
++}
++
+ /**
+  * i40e_vc_request_queues_msg
+  * @vf: pointer to the VF info
+@@ -2597,6 +2690,12 @@ static int i40e_vc_request_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 			 req_pairs - cur_pairs,
+ 			 pf->queues_left);
+ 		vfres->num_queue_pairs = pf->queues_left + cur_pairs;
++	} else if (i40e_check_enough_queue(vf, req_pairs) < 0) {
++		dev_warn(&pf->pdev->dev,
++			 "VF %d requested %d more queues, but there is not enough for it.\n",
++			 vf->vf_id,
++			 req_pairs - cur_pairs);
++		vfres->num_queue_pairs = cur_pairs;
+ 	} else {
+ 		/* successful request */
+ 		vf->num_req_queues = req_pairs;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index 49575a640a84c..03c42fd0fea19 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -19,6 +19,7 @@
+ #define I40E_MAX_VF_PROMISC_FLAGS	3
+ 
+ #define I40E_VF_STATE_WAIT_COUNT	20
++#define I40E_VFR_WAIT_COUNT		100
+ 
+ /* Various queue ctrls */
+ enum i40e_queue_ctrl {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 044a5b1196acb..161174be51c31 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -386,7 +386,12 @@ static int otx2_forward_vf_mbox_msgs(struct otx2_nic *pf,
+ 		dst_mdev->msg_size = mbox_hdr->msg_size;
+ 		dst_mdev->num_msgs = num_msgs;
+ 		err = otx2_sync_mbox_msg(dst_mbox);
+-		if (err) {
++		/* Error code -EIO indicate there is a communication failure
++		 * to the AF. Rest of the error codes indicate that AF processed
++		 * VF messages and set the error codes in response messages
++		 * (if any) so simply forward responses to VF.
++		 */
++		if (err == -EIO) {
+ 			dev_warn(pf->dev,
+ 				 "AF not responding to VF%d messages\n", vf);
+ 			/* restore PF mbase and exit */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index a8c5492cb39be..6d8a839fab22e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -816,8 +816,6 @@ static int stmmac_init_ptp(struct stmmac_priv *priv)
+ 	priv->hwts_tx_en = 0;
+ 	priv->hwts_rx_en = 0;
+ 
+-	stmmac_ptp_register(priv);
+-
+ 	return 0;
+ }
+ 
+@@ -2691,7 +2689,7 @@ static void stmmac_safety_feat_configuration(struct stmmac_priv *priv)
+ /**
+  * stmmac_hw_setup - setup mac in a usable state.
+  *  @dev : pointer to the device structure.
+- *  @init_ptp: initialize PTP if set
++ *  @ptp_register: register PTP if set
+  *  Description:
+  *  this is the main function to setup the HW in a usable state because the
+  *  dma engine is reset, the core registers are configured (e.g. AXI,
+@@ -2701,7 +2699,7 @@ static void stmmac_safety_feat_configuration(struct stmmac_priv *priv)
+  *  0 on success and an appropriate (-)ve integer as defined in errno.h
+  *  file on failure.
+  */
+-static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
++static int stmmac_hw_setup(struct net_device *dev, bool ptp_register)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+ 	u32 rx_cnt = priv->plat->rx_queues_to_use;
+@@ -2757,13 +2755,13 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
+ 
+ 	stmmac_mmc_setup(priv);
+ 
+-	if (init_ptp) {
+-		ret = stmmac_init_ptp(priv);
+-		if (ret == -EOPNOTSUPP)
+-			netdev_warn(priv->dev, "PTP not supported by HW\n");
+-		else if (ret)
+-			netdev_warn(priv->dev, "PTP init failed\n");
+-	}
++	ret = stmmac_init_ptp(priv);
++	if (ret == -EOPNOTSUPP)
++		netdev_warn(priv->dev, "PTP not supported by HW\n");
++	else if (ret)
++		netdev_warn(priv->dev, "PTP init failed\n");
++	else if (ptp_register)
++		stmmac_ptp_register(priv);
+ 
+ 	priv->eee_tw_timer = STMMAC_DEFAULT_TWT_LS;
+ 
+diff --git a/drivers/net/ethernet/ti/cpsw_priv.c b/drivers/net/ethernet/ti/cpsw_priv.c
+index 424e644724e46..e74f2e95a46eb 100644
+--- a/drivers/net/ethernet/ti/cpsw_priv.c
++++ b/drivers/net/ethernet/ti/cpsw_priv.c
+@@ -1144,7 +1144,7 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv)
+ static struct page_pool *cpsw_create_page_pool(struct cpsw_common *cpsw,
+ 					       int size)
+ {
+-	struct page_pool_params pp_params;
++	struct page_pool_params pp_params = {};
+ 	struct page_pool *pool;
+ 
+ 	pp_params.order = 0;
+diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
+index 5ab53e9942f30..5d30b3e1806ab 100644
+--- a/drivers/net/hamradio/yam.c
++++ b/drivers/net/hamradio/yam.c
+@@ -951,9 +951,7 @@ static int yam_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 				 sizeof(struct yamdrv_ioctl_mcs));
+ 		if (IS_ERR(ym))
+ 			return PTR_ERR(ym);
+-		if (ym->cmd != SIOCYAMSMCS)
+-			return -EINVAL;
+-		if (ym->bitrate > YAM_MAXBITRATE) {
++		if (ym->cmd != SIOCYAMSMCS || ym->bitrate > YAM_MAXBITRATE) {
+ 			kfree(ym);
+ 			return -EINVAL;
+ 		}
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index dbed15dc0fe77..644861366d544 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -789,6 +789,7 @@ static struct phy_driver broadcom_drivers[] = {
+ 	.phy_id_mask	= 0xfffffff0,
+ 	.name		= "Broadcom BCM54616S",
+ 	/* PHY_GBIT_FEATURES */
++	.soft_reset     = genphy_soft_reset,
+ 	.config_init	= bcm54xx_config_init,
+ 	.config_aneg	= bcm54616s_config_aneg,
+ 	.ack_interrupt	= bcm_phy_ack_intr,
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 85f3cde5ffd09..d2f6d8107595a 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1682,6 +1682,9 @@ void phy_detach(struct phy_device *phydev)
+ 	    phy_driver_is_genphy_10g(phydev))
+ 		device_release_driver(&phydev->mdio.dev);
+ 
++	/* Assert the reset signal */
++	phy_device_reset(phydev, 1);
++
+ 	/*
+ 	 * The phydev might go away on the put_device() below, so avoid
+ 	 * a use-after-free bug by reading the underlying bus first.
+@@ -1693,9 +1696,6 @@ void phy_detach(struct phy_device *phydev)
+ 		ndev_owner = dev->dev.parent->driver->owner;
+ 	if (ndev_owner != bus->owner)
+ 		module_put(bus->owner);
+-
+-	/* Assert the reset signal */
+-	phy_device_reset(phydev, 1);
+ }
+ EXPORT_SYMBOL(phy_detach);
+ 
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index 4cf874fb5c5b4..a05d8372669c1 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -609,6 +609,11 @@ struct sfp_bus *sfp_bus_find_fwnode(struct fwnode_handle *fwnode)
+ 	else if (ret < 0)
+ 		return ERR_PTR(ret);
+ 
++	if (!fwnode_device_is_available(ref.fwnode)) {
++		fwnode_handle_put(ref.fwnode);
++		return NULL;
++	}
++
+ 	bus = sfp_bus_get(ref.fwnode);
+ 	fwnode_handle_put(ref.fwnode);
+ 	if (!bus)
+diff --git a/drivers/rpmsg/rpmsg_char.c b/drivers/rpmsg/rpmsg_char.c
+index 4bbbacdbf3bb7..be90d77c5168d 100644
+--- a/drivers/rpmsg/rpmsg_char.c
++++ b/drivers/rpmsg/rpmsg_char.c
+@@ -92,7 +92,7 @@ static int rpmsg_eptdev_destroy(struct device *dev, void *data)
+ 	/* wake up any blocked readers */
+ 	wake_up_interruptible(&eptdev->readq);
+ 
+-	device_del(&eptdev->dev);
++	cdev_device_del(&eptdev->cdev, &eptdev->dev);
+ 	put_device(&eptdev->dev);
+ 
+ 	return 0;
+@@ -332,7 +332,6 @@ static void rpmsg_eptdev_release_device(struct device *dev)
+ 
+ 	ida_simple_remove(&rpmsg_ept_ida, dev->id);
+ 	ida_simple_remove(&rpmsg_minor_ida, MINOR(eptdev->dev.devt));
+-	cdev_del(&eptdev->cdev);
+ 	kfree(eptdev);
+ }
+ 
+@@ -377,19 +376,13 @@ static int rpmsg_eptdev_create(struct rpmsg_ctrldev *ctrldev,
+ 	dev->id = ret;
+ 	dev_set_name(dev, "rpmsg%d", ret);
+ 
+-	ret = cdev_add(&eptdev->cdev, dev->devt, 1);
++	ret = cdev_device_add(&eptdev->cdev, &eptdev->dev);
+ 	if (ret)
+ 		goto free_ept_ida;
+ 
+ 	/* We can now rely on the release function for cleanup */
+ 	dev->release = rpmsg_eptdev_release_device;
+ 
+-	ret = device_add(dev);
+-	if (ret) {
+-		dev_err(dev, "device_add failed: %d\n", ret);
+-		put_device(dev);
+-	}
+-
+ 	return ret;
+ 
+ free_ept_ida:
+@@ -458,7 +451,6 @@ static void rpmsg_ctrldev_release_device(struct device *dev)
+ 
+ 	ida_simple_remove(&rpmsg_ctrl_ida, dev->id);
+ 	ida_simple_remove(&rpmsg_minor_ida, MINOR(dev->devt));
+-	cdev_del(&ctrldev->cdev);
+ 	kfree(ctrldev);
+ }
+ 
+@@ -493,19 +485,13 @@ static int rpmsg_chrdev_probe(struct rpmsg_device *rpdev)
+ 	dev->id = ret;
+ 	dev_set_name(&ctrldev->dev, "rpmsg_ctrl%d", ret);
+ 
+-	ret = cdev_add(&ctrldev->cdev, dev->devt, 1);
++	ret = cdev_device_add(&ctrldev->cdev, &ctrldev->dev);
+ 	if (ret)
+ 		goto free_ctrl_ida;
+ 
+ 	/* We can now rely on the release function for cleanup */
+ 	dev->release = rpmsg_ctrldev_release_device;
+ 
+-	ret = device_add(dev);
+-	if (ret) {
+-		dev_err(&rpdev->dev, "device_add failed: %d\n", ret);
+-		put_device(dev);
+-	}
+-
+ 	dev_set_drvdata(&rpdev->dev, ctrldev);
+ 
+ 	return ret;
+@@ -531,7 +517,7 @@ static void rpmsg_chrdev_remove(struct rpmsg_device *rpdev)
+ 	if (ret)
+ 		dev_warn(&rpdev->dev, "failed to nuke endpoints: %d\n", ret);
+ 
+-	device_del(&ctrldev->dev);
++	cdev_device_del(&ctrldev->cdev, &ctrldev->dev);
+ 	put_device(&ctrldev->dev);
+ }
+ 
+diff --git a/drivers/s390/scsi/zfcp_fc.c b/drivers/s390/scsi/zfcp_fc.c
+index d24cafe02708f..511bf8e0a436c 100644
+--- a/drivers/s390/scsi/zfcp_fc.c
++++ b/drivers/s390/scsi/zfcp_fc.c
+@@ -521,6 +521,8 @@ static void zfcp_fc_adisc_handler(void *data)
+ 		goto out;
+ 	}
+ 
++	/* re-init to undo drop from zfcp_fc_adisc() */
++	port->d_id = ntoh24(adisc_resp->adisc_port_id);
+ 	/* port is good, unblock rport without going through erp */
+ 	zfcp_scsi_schedule_rport_register(port);
+  out:
+@@ -534,6 +536,7 @@ static int zfcp_fc_adisc(struct zfcp_port *port)
+ 	struct zfcp_fc_req *fc_req;
+ 	struct zfcp_adapter *adapter = port->adapter;
+ 	struct Scsi_Host *shost = adapter->scsi_host;
++	u32 d_id;
+ 	int ret;
+ 
+ 	fc_req = kmem_cache_zalloc(zfcp_fc_req_cache, GFP_ATOMIC);
+@@ -558,7 +561,15 @@ static int zfcp_fc_adisc(struct zfcp_port *port)
+ 	fc_req->u.adisc.req.adisc_cmd = ELS_ADISC;
+ 	hton24(fc_req->u.adisc.req.adisc_port_id, fc_host_port_id(shost));
+ 
+-	ret = zfcp_fsf_send_els(adapter, port->d_id, &fc_req->ct_els,
++	d_id = port->d_id; /* remember as destination for send els below */
++	/*
++	 * Force fresh GID_PN lookup on next port recovery.
++	 * Must happen after request setup and before sending request,
++	 * to prevent race with port->d_id re-init in zfcp_fc_adisc_handler().
++	 */
++	port->d_id = 0;
++
++	ret = zfcp_fsf_send_els(adapter, d_id, &fc_req->ct_els,
+ 				ZFCP_FC_CTELS_TMO);
+ 	if (ret)
+ 		kmem_cache_free(zfcp_fc_req_cache, fc_req);
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+index 6890bbe04a8c1..052e7879704a5 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+@@ -80,7 +80,7 @@ static int bnx2fc_bind_pcidev(struct bnx2fc_hba *hba);
+ static void bnx2fc_unbind_pcidev(struct bnx2fc_hba *hba);
+ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
+ 				  struct device *parent, int npiv);
+-static void bnx2fc_destroy_work(struct work_struct *work);
++static void bnx2fc_port_destroy(struct fcoe_port *port);
+ 
+ static struct bnx2fc_hba *bnx2fc_hba_lookup(struct net_device *phys_dev);
+ static struct bnx2fc_interface *bnx2fc_interface_lookup(struct net_device
+@@ -905,9 +905,6 @@ static void bnx2fc_indicate_netevent(void *context, unsigned long event,
+ 				__bnx2fc_destroy(interface);
+ 		}
+ 		mutex_unlock(&bnx2fc_dev_lock);
+-
+-		/* Ensure ALL destroy work has been completed before return */
+-		flush_workqueue(bnx2fc_wq);
+ 		return;
+ 
+ 	default:
+@@ -1213,8 +1210,8 @@ static int bnx2fc_vport_destroy(struct fc_vport *vport)
+ 	mutex_unlock(&n_port->lp_mutex);
+ 	bnx2fc_free_vport(interface->hba, port->lport);
+ 	bnx2fc_port_shutdown(port->lport);
++	bnx2fc_port_destroy(port);
+ 	bnx2fc_interface_put(interface);
+-	queue_work(bnx2fc_wq, &port->destroy_work);
+ 	return 0;
+ }
+ 
+@@ -1523,7 +1520,6 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
+ 	port->lport = lport;
+ 	port->priv = interface;
+ 	port->get_netdev = bnx2fc_netdev;
+-	INIT_WORK(&port->destroy_work, bnx2fc_destroy_work);
+ 
+ 	/* Configure fcoe_port */
+ 	rc = bnx2fc_lport_config(lport);
+@@ -1651,8 +1647,8 @@ static void __bnx2fc_destroy(struct bnx2fc_interface *interface)
+ 	bnx2fc_interface_cleanup(interface);
+ 	bnx2fc_stop(interface);
+ 	list_del(&interface->list);
++	bnx2fc_port_destroy(port);
+ 	bnx2fc_interface_put(interface);
+-	queue_work(bnx2fc_wq, &port->destroy_work);
+ }
+ 
+ /**
+@@ -1692,15 +1688,12 @@ netdev_err:
+ 	return rc;
+ }
+ 
+-static void bnx2fc_destroy_work(struct work_struct *work)
++static void bnx2fc_port_destroy(struct fcoe_port *port)
+ {
+-	struct fcoe_port *port;
+ 	struct fc_lport *lport;
+ 
+-	port = container_of(work, struct fcoe_port, destroy_work);
+ 	lport = port->lport;
+-
+-	BNX2FC_HBA_DBG(lport, "Entered bnx2fc_destroy_work\n");
++	BNX2FC_HBA_DBG(lport, "Entered %s, destroying lport %p\n", __func__, lport);
+ 
+ 	bnx2fc_if_destroy(lport);
+ }
+@@ -2554,9 +2547,6 @@ static void bnx2fc_ulp_exit(struct cnic_dev *dev)
+ 			__bnx2fc_destroy(interface);
+ 	mutex_unlock(&bnx2fc_dev_lock);
+ 
+-	/* Ensure ALL destroy work has been completed before return */
+-	flush_workqueue(bnx2fc_wq);
+-
+ 	bnx2fc_ulp_stop(hba);
+ 	/* unregister cnic device */
+ 	if (test_and_clear_bit(BNX2FC_CNIC_REGISTERED, &hba->reg_with_cnic))
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index d76880ae68c83..b8f8621537720 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -317,6 +317,7 @@ static struct tty_driver *gsm_tty_driver;
+ #define GSM1_ESCAPE_BITS	0x20
+ #define XON			0x11
+ #define XOFF			0x13
++#define ISO_IEC_646_MASK	0x7F
+ 
+ static const struct tty_port_operations gsm_port_ops;
+ 
+@@ -526,7 +527,8 @@ static int gsm_stuff_frame(const u8 *input, u8 *output, int len)
+ 	int olen = 0;
+ 	while (len--) {
+ 		if (*input == GSM1_SOF || *input == GSM1_ESCAPE
+-		    || *input == XON || *input == XOFF) {
++		    || (*input & ISO_IEC_646_MASK) == XON
++		    || (*input & ISO_IEC_646_MASK) == XOFF) {
+ 			*output++ = GSM1_ESCAPE;
+ 			*output++ = *input++ ^ GSM1_ESCAPE_BITS;
+ 			olen++;
+diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c
+index 65e9045dafe6d..5595c63c46eaf 100644
+--- a/drivers/tty/serial/8250/8250_of.c
++++ b/drivers/tty/serial/8250/8250_of.c
+@@ -83,8 +83,17 @@ static int of_platform_serial_setup(struct platform_device *ofdev,
+ 		port->mapsize = resource_size(&resource);
+ 
+ 		/* Check for shifted address mapping */
+-		if (of_property_read_u32(np, "reg-offset", &prop) == 0)
++		if (of_property_read_u32(np, "reg-offset", &prop) == 0) {
++			if (prop >= port->mapsize) {
++				dev_warn(&ofdev->dev, "reg-offset %u exceeds region size %pa\n",
++					 prop, &port->mapsize);
++				ret = -EINVAL;
++				goto err_unprepare;
++			}
++
+ 			port->mapbase += prop;
++			port->mapsize -= prop;
++		}
+ 
+ 		port->iotype = UPIO_MEM;
+ 		if (of_property_read_u32(np, "reg-io-width", &prop) == 0) {
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 019328d644d8b..3a985e953b8e9 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -5171,8 +5171,30 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 	{	PCI_VENDOR_ID_INTASHIELD, PCI_DEVICE_ID_INTASHIELD_IS400,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,    /* 135a.0dc0 */
+ 		pbn_b2_4_115200 },
++	/* Brainboxes Devices */
+ 	/*
+-	 * BrainBoxes UC-260
++	* Brainboxes UC-101
++	*/
++	{       PCI_VENDOR_ID_INTASHIELD, 0x0BA1,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-235/246
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0AA1,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_1_115200 },
++	/*
++	 * Brainboxes UC-257
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0861,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-260/271/701/756
+ 	 */
+ 	{	PCI_VENDOR_ID_INTASHIELD, 0x0D21,
+ 		PCI_ANY_ID, PCI_ANY_ID,
+@@ -5180,7 +5202,81 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_b2_4_115200 },
+ 	{	PCI_VENDOR_ID_INTASHIELD, 0x0E34,
+ 		PCI_ANY_ID, PCI_ANY_ID,
+-		 PCI_CLASS_COMMUNICATION_MULTISERIAL << 8, 0xffff00,
++		PCI_CLASS_COMMUNICATION_MULTISERIAL << 8, 0xffff00,
++		pbn_b2_4_115200 },
++	/*
++	 * Brainboxes UC-268
++	 */
++	{       PCI_VENDOR_ID_INTASHIELD, 0x0841,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_4_115200 },
++	/*
++	 * Brainboxes UC-275/279
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0881,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_8_115200 },
++	/*
++	 * Brainboxes UC-302
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x08E1,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-310
++	 */
++	{       PCI_VENDOR_ID_INTASHIELD, 0x08C1,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-313
++	 */
++	{       PCI_VENDOR_ID_INTASHIELD, 0x08A3,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-320/324
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0A61,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_1_115200 },
++	/*
++	 * Brainboxes UC-346
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0B02,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_4_115200 },
++	/*
++	 * Brainboxes UC-357
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0A81,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0A83,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-368
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0C41,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_4_115200 },
++	/*
++	 * Brainboxes UC-420/431
++	 */
++	{       PCI_VENDOR_ID_INTASHIELD, 0x0921,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
+ 		pbn_b2_4_115200 },
+ 	/*
+ 	 * Perle PCI-RAS cards
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 844059861f9e1..0eadf0547175c 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -574,7 +574,7 @@ static void stm32_usart_start_tx(struct uart_port *port)
+ 	struct serial_rs485 *rs485conf = &port->rs485;
+ 	struct circ_buf *xmit = &port->state->xmit;
+ 
+-	if (uart_circ_empty(xmit))
++	if (uart_circ_empty(xmit) && !port->x_char)
+ 		return;
+ 
+ 	if (rs485conf->flags & SER_RS485_ENABLED) {
+diff --git a/drivers/usb/common/ulpi.c b/drivers/usb/common/ulpi.c
+index a18d7c4222ddf..82fe8e00a96a3 100644
+--- a/drivers/usb/common/ulpi.c
++++ b/drivers/usb/common/ulpi.c
+@@ -39,8 +39,11 @@ static int ulpi_match(struct device *dev, struct device_driver *driver)
+ 	struct ulpi *ulpi = to_ulpi_dev(dev);
+ 	const struct ulpi_device_id *id;
+ 
+-	/* Some ULPI devices don't have a vendor id so rely on OF match */
+-	if (ulpi->id.vendor == 0)
++	/*
++	 * Some ULPI devices don't have a vendor id
++	 * or provide an id_table so rely on OF match.
++	 */
++	if (ulpi->id.vendor == 0 || !drv->id_table)
+ 		return of_driver_match_device(dev, driver);
+ 
+ 	for (id = drv->id_table; id->vendor; id++)
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index b2710015493a5..ddd1d3eef912b 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -1562,6 +1562,13 @@ int usb_hcd_submit_urb (struct urb *urb, gfp_t mem_flags)
+ 		urb->hcpriv = NULL;
+ 		INIT_LIST_HEAD(&urb->urb_list);
+ 		atomic_dec(&urb->use_count);
++		/*
++		 * Order the write of urb->use_count above before the read
++		 * of urb->reject below.  Pairs with the memory barriers in
++		 * usb_kill_urb() and usb_poison_urb().
++		 */
++		smp_mb__after_atomic();
++
+ 		atomic_dec(&urb->dev->urbnum);
+ 		if (atomic_read(&urb->reject))
+ 			wake_up(&usb_kill_urb_queue);
+@@ -1666,6 +1673,13 @@ static void __usb_hcd_giveback_urb(struct urb *urb)
+ 
+ 	usb_anchor_resume_wakeups(anchor);
+ 	atomic_dec(&urb->use_count);
++	/*
++	 * Order the write of urb->use_count above before the read
++	 * of urb->reject below.  Pairs with the memory barriers in
++	 * usb_kill_urb() and usb_poison_urb().
++	 */
++	smp_mb__after_atomic();
++
+ 	if (unlikely(atomic_read(&urb->reject)))
+ 		wake_up(&usb_kill_urb_queue);
+ 	usb_put_urb(urb);
+diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c
+index 357b149b20d3a..9c285026f8276 100644
+--- a/drivers/usb/core/urb.c
++++ b/drivers/usb/core/urb.c
+@@ -706,6 +706,12 @@ void usb_kill_urb(struct urb *urb)
+ 	if (!(urb && urb->dev && urb->ep))
+ 		return;
+ 	atomic_inc(&urb->reject);
++	/*
++	 * Order the write of urb->reject above before the read
++	 * of urb->use_count below.  Pairs with the barriers in
++	 * __usb_hcd_giveback_urb() and usb_hcd_submit_urb().
++	 */
++	smp_mb__after_atomic();
+ 
+ 	usb_hcd_unlink_urb(urb, -ENOENT);
+ 	wait_event(usb_kill_urb_queue, atomic_read(&urb->use_count) == 0);
+@@ -747,6 +753,12 @@ void usb_poison_urb(struct urb *urb)
+ 	if (!urb)
+ 		return;
+ 	atomic_inc(&urb->reject);
++	/*
++	 * Order the write of urb->reject above before the read
++	 * of urb->use_count below.  Pairs with the barriers in
++	 * __usb_hcd_giveback_urb() and usb_hcd_submit_urb().
++	 */
++	smp_mb__after_atomic();
+ 
+ 	if (!urb->dev || !urb->ep)
+ 		return;
+diff --git a/drivers/usb/gadget/function/f_sourcesink.c b/drivers/usb/gadget/function/f_sourcesink.c
+index 282737e4609ce..2c65a9bb3c81b 100644
+--- a/drivers/usb/gadget/function/f_sourcesink.c
++++ b/drivers/usb/gadget/function/f_sourcesink.c
+@@ -583,6 +583,7 @@ static int source_sink_start_ep(struct f_sourcesink *ss, bool is_in,
+ 
+ 	if (is_iso) {
+ 		switch (speed) {
++		case USB_SPEED_SUPER_PLUS:
+ 		case USB_SPEED_SUPER:
+ 			size = ss->isoc_maxpacket *
+ 					(ss->isoc_mult + 1) *
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index c1edcc9b13cec..dc570ce4e8319 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -437,6 +437,9 @@ static int __maybe_unused xhci_plat_suspend(struct device *dev)
+ 	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+ 	int ret;
+ 
++	if (pm_runtime_suspended(dev))
++		pm_runtime_resume(dev);
++
+ 	ret = xhci_priv_suspend_quirk(hcd);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 29191d33c0e3e..1a05e3dcfec8a 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2301,6 +2301,16 @@ UNUSUAL_DEV(  0x2027, 0xa001, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
+ 		US_FL_SCM_MULT_TARG ),
+ 
++/*
++ * Reported by DocMAX <mail@vacharakis.de>
++ * and Thomas Weißschuh <linux@weissschuh.net>
++ */
++UNUSUAL_DEV( 0x2109, 0x0715, 0x9999, 0x9999,
++		"VIA Labs, Inc.",
++		"VL817 SATA Bridge",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_IGNORE_UAS),
++
+ UNUSUAL_DEV( 0x2116, 0x0320, 0x0001, 0x0001,
+ 		"ST",
+ 		"2A",
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 721d9c4ddc81f..8333c80b5f7c1 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -4164,7 +4164,8 @@ static void _tcpm_pd_vbus_off(struct tcpm_port *port)
+ 	case SNK_TRYWAIT_DEBOUNCE:
+ 		break;
+ 	case SNK_ATTACH_WAIT:
+-		tcpm_set_state(port, SNK_UNATTACHED, 0);
++	case SNK_DEBOUNCED:
++		/* Do nothing, as TCPM is still waiting for vbus to reaach VSAFE5V to connect */
+ 		break;
+ 
+ 	case SNK_NEGOTIATE_CAPABILITIES:
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index bff96d64dddff..6db7c8ddd51cd 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -325,7 +325,7 @@ static int ucsi_ccg_init(struct ucsi_ccg *uc)
+ 		if (status < 0)
+ 			return status;
+ 
+-		if (!data)
++		if (!(data & DEV_INT))
+ 			return 0;
+ 
+ 		status = ccg_write(uc, CCGX_RAB_INTR_REG, &data, sizeof(data));
+diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
+index 4dc9077dd2ac0..3c309ab208874 100644
+--- a/drivers/video/fbdev/hyperv_fb.c
++++ b/drivers/video/fbdev/hyperv_fb.c
+@@ -286,8 +286,6 @@ struct hvfb_par {
+ 
+ static uint screen_width = HVFB_WIDTH;
+ static uint screen_height = HVFB_HEIGHT;
+-static uint screen_width_max = HVFB_WIDTH;
+-static uint screen_height_max = HVFB_HEIGHT;
+ static uint screen_depth;
+ static uint screen_fb_size;
+ static uint dio_fb_size; /* FB size for deferred IO */
+@@ -581,7 +579,6 @@ static int synthvid_get_supported_resolution(struct hv_device *hdev)
+ 	int ret = 0;
+ 	unsigned long t;
+ 	u8 index;
+-	int i;
+ 
+ 	memset(msg, 0, sizeof(struct synthvid_msg));
+ 	msg->vid_hdr.type = SYNTHVID_RESOLUTION_REQUEST;
+@@ -612,13 +609,6 @@ static int synthvid_get_supported_resolution(struct hv_device *hdev)
+ 		goto out;
+ 	}
+ 
+-	for (i = 0; i < msg->resolution_resp.resolution_count; i++) {
+-		screen_width_max = max_t(unsigned int, screen_width_max,
+-		    msg->resolution_resp.supported_resolution[i].width);
+-		screen_height_max = max_t(unsigned int, screen_height_max,
+-		    msg->resolution_resp.supported_resolution[i].height);
+-	}
+-
+ 	screen_width =
+ 		msg->resolution_resp.supported_resolution[index].width;
+ 	screen_height =
+@@ -940,7 +930,7 @@ static void hvfb_get_option(struct fb_info *info)
+ 
+ 	if (x < HVFB_WIDTH_MIN || y < HVFB_HEIGHT_MIN ||
+ 	    (synthvid_ver_ge(par->synthvid_version, SYNTHVID_VERSION_WIN10) &&
+-	    (x > screen_width_max || y > screen_height_max)) ||
++	    (x * y * screen_depth / 8 > screen_fb_size)) ||
+ 	    (par->synthvid_version == SYNTHVID_VERSION_WIN8 &&
+ 	     x * y * screen_depth / 8 > SYNTHVID_FB_SIZE_WIN8) ||
+ 	    (par->synthvid_version == SYNTHVID_VERSION_WIN7 &&
+@@ -1193,8 +1183,8 @@ static int hvfb_probe(struct hv_device *hdev,
+ 	}
+ 
+ 	hvfb_get_option(info);
+-	pr_info("Screen resolution: %dx%d, Color depth: %d\n",
+-		screen_width, screen_height, screen_depth);
++	pr_info("Screen resolution: %dx%d, Color depth: %d, Frame buffer size: %d\n",
++		screen_width, screen_height, screen_depth, screen_fb_size);
+ 
+ 	ret = hvfb_getmem(hdev, info);
+ 	if (ret) {
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 040db0dfba264..b5e9bfe884c4b 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3103,10 +3103,8 @@ static noinline int btrfs_ioctl_snap_destroy(struct file *file,
+ 	inode_lock(inode);
+ 	err = btrfs_delete_subvolume(dir, dentry);
+ 	inode_unlock(inode);
+-	if (!err) {
+-		fsnotify_rmdir(dir, dentry);
+-		d_delete(dentry);
+-	}
++	if (!err)
++		d_delete_notify(dir, dentry);
+ 
+ out_dput:
+ 	dput(dentry);
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 8ed881fd7440d..450050801f3b6 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -577,6 +577,7 @@ static int ceph_finish_async_create(struct inode *dir, struct dentry *dentry,
+ 	struct ceph_inode_info *ci = ceph_inode(dir);
+ 	struct inode *inode;
+ 	struct timespec64 now;
++	struct ceph_string *pool_ns;
+ 	struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(dir->i_sb);
+ 	struct ceph_vino vino = { .ino = req->r_deleg_ino,
+ 				  .snap = CEPH_NOSNAP };
+@@ -626,6 +627,12 @@ static int ceph_finish_async_create(struct inode *dir, struct dentry *dentry,
+ 	in.max_size = cpu_to_le64(lo->stripe_unit);
+ 
+ 	ceph_file_layout_to_legacy(lo, &in.layout);
++	/* lo is private, so pool_ns can't change */
++	pool_ns = rcu_dereference_raw(lo->pool_ns);
++	if (pool_ns) {
++		iinfo.pool_ns_len = pool_ns->len;
++		iinfo.pool_ns_data = pool_ns->str;
++	}
+ 
+ 	down_read(&mdsc->snap_rwsem);
+ 	ret = ceph_fill_inode(inode, NULL, &iinfo, NULL, req->r_session,
+@@ -743,8 +750,10 @@ retry:
+ 				restore_deleg_ino(dir, req->r_deleg_ino);
+ 				ceph_mdsc_put_request(req);
+ 				try_async = false;
++				ceph_put_string(rcu_dereference_raw(lo.pool_ns));
+ 				goto retry;
+ 			}
++			ceph_put_string(rcu_dereference_raw(lo.pool_ns));
+ 			goto out_req;
+ 		}
+ 	}
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index b0983e2a4e2c7..32ddad3ec5d53 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -1805,8 +1805,8 @@ void configfs_unregister_group(struct config_group *group)
+ 	configfs_detach_group(&group->cg_item);
+ 	d_inode(dentry)->i_flags |= S_DEAD;
+ 	dont_mount(dentry);
++	d_drop(dentry);
+ 	fsnotify_rmdir(d_inode(parent), dentry);
+-	d_delete(dentry);
+ 	inode_unlock(d_inode(parent));
+ 
+ 	dput(dentry);
+@@ -1947,10 +1947,10 @@ void configfs_unregister_subsystem(struct configfs_subsystem *subsys)
+ 	configfs_detach_group(&group->cg_item);
+ 	d_inode(dentry)->i_flags |= S_DEAD;
+ 	dont_mount(dentry);
+-	fsnotify_rmdir(d_inode(root), dentry);
+ 	inode_unlock(d_inode(dentry));
+ 
+-	d_delete(dentry);
++	d_drop(dentry);
++	fsnotify_rmdir(d_inode(root), dentry);
+ 
+ 	inode_unlock(d_inode(root));
+ 
+diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c
+index 42e5a766d33c7..4f25015aa5342 100644
+--- a/fs/devpts/inode.c
++++ b/fs/devpts/inode.c
+@@ -621,8 +621,8 @@ void devpts_pty_kill(struct dentry *dentry)
+ 
+ 	dentry->d_fsdata = NULL;
+ 	drop_nlink(dentry->d_inode);
+-	fsnotify_unlink(d_inode(dentry->d_parent), dentry);
+ 	d_drop(dentry);
++	fsnotify_unlink(d_inode(dentry->d_parent), dentry);
+ 	dput(dentry);	/* d_alloc_name() in devpts_pty_new() */
+ }
+ 
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 188f79d769881..b748329bb0bab 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -2795,6 +2795,7 @@ struct journal_head *jbd2_journal_grab_journal_head(struct buffer_head *bh)
+ 	jbd_unlock_bh_journal_head(bh);
+ 	return jh;
+ }
++EXPORT_SYMBOL(jbd2_journal_grab_journal_head);
+ 
+ static void __journal_remove_journal_head(struct buffer_head *bh)
+ {
+@@ -2847,6 +2848,7 @@ void jbd2_journal_put_journal_head(struct journal_head *jh)
+ 		jbd_unlock_bh_journal_head(bh);
+ 	}
+ }
++EXPORT_SYMBOL(jbd2_journal_put_journal_head);
+ 
+ /*
+  * Initialize jbd inode head
+diff --git a/fs/namei.c b/fs/namei.c
+index 4c9d0c36545d3..72f354b62dd5d 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -3709,13 +3709,12 @@ int vfs_rmdir(struct inode *dir, struct dentry *dentry)
+ 	dentry->d_inode->i_flags |= S_DEAD;
+ 	dont_mount(dentry);
+ 	detach_mounts(dentry);
+-	fsnotify_rmdir(dir, dentry);
+ 
+ out:
+ 	inode_unlock(dentry->d_inode);
+ 	dput(dentry);
+ 	if (!error)
+-		d_delete(dentry);
++		d_delete_notify(dir, dentry);
+ 	return error;
+ }
+ EXPORT_SYMBOL(vfs_rmdir);
+@@ -3825,7 +3824,6 @@ int vfs_unlink(struct inode *dir, struct dentry *dentry, struct inode **delegate
+ 			if (!error) {
+ 				dont_mount(dentry);
+ 				detach_mounts(dentry);
+-				fsnotify_unlink(dir, dentry);
+ 			}
+ 		}
+ 	}
+@@ -3833,9 +3831,11 @@ out:
+ 	inode_unlock(target);
+ 
+ 	/* We don't d_delete() NFS sillyrenamed files--they still exist. */
+-	if (!error && !(dentry->d_flags & DCACHE_NFSFS_RENAMED)) {
++	if (!error && dentry->d_flags & DCACHE_NFSFS_RENAMED) {
++		fsnotify_unlink(dir, dentry);
++	} else if (!error) {
+ 		fsnotify_link_count(target);
+-		d_delete(dentry);
++		d_delete_notify(dir, dentry);
+ 	}
+ 
+ 	return error;
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 8b963c72dd3b1..a23b7a5dec9ee 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1777,6 +1777,24 @@ out:
+ 
+ no_open:
+ 	res = nfs_lookup(dir, dentry, lookup_flags);
++	if (!res) {
++		inode = d_inode(dentry);
++		if ((lookup_flags & LOOKUP_DIRECTORY) && inode &&
++		    !S_ISDIR(inode->i_mode))
++			res = ERR_PTR(-ENOTDIR);
++		else if (inode && S_ISREG(inode->i_mode))
++			res = ERR_PTR(-EOPENSTALE);
++	} else if (!IS_ERR(res)) {
++		inode = d_inode(res);
++		if ((lookup_flags & LOOKUP_DIRECTORY) && inode &&
++		    !S_ISDIR(inode->i_mode)) {
++			dput(res);
++			res = ERR_PTR(-ENOTDIR);
++		} else if (inode && S_ISREG(inode->i_mode)) {
++			dput(res);
++			res = ERR_PTR(-EOPENSTALE);
++		}
++	}
+ 	if (switched) {
+ 		d_lookup_done(dentry);
+ 		if (!res)
+@@ -2174,6 +2192,8 @@ nfs_link(struct dentry *old_dentry, struct inode *dir, struct dentry *dentry)
+ 
+ 	trace_nfs_link_enter(inode, dir, dentry);
+ 	d_drop(dentry);
++	if (S_ISREG(inode->i_mode))
++		nfs_sync_inode(inode);
+ 	error = NFS_PROTO(dir)->link(inode, dir, &dentry->d_name);
+ 	if (error == 0) {
+ 		ihold(inode);
+@@ -2262,6 +2282,8 @@ int nfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		}
+ 	}
+ 
++	if (S_ISREG(old_inode->i_mode))
++		nfs_sync_inode(old_inode);
+ 	task = nfs_async_rename(old_dir, new_dir, old_dentry, new_dentry, NULL);
+ 	if (IS_ERR(task)) {
+ 		error = PTR_ERR(task);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index a8f954bbde4f5..dedec4771ecc2 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1247,7 +1247,8 @@ static void nfsdfs_remove_file(struct inode *dir, struct dentry *dentry)
+ 	clear_ncl(d_inode(dentry));
+ 	dget(dentry);
+ 	ret = simple_unlink(dir, dentry);
+-	d_delete(dentry);
++	d_drop(dentry);
++	fsnotify_unlink(dir, dentry);
+ 	dput(dentry);
+ 	WARN_ON_ONCE(ret);
+ }
+@@ -1336,8 +1337,8 @@ void nfsd_client_rmdir(struct dentry *dentry)
+ 	dget(dentry);
+ 	ret = simple_rmdir(dir, dentry);
+ 	WARN_ON_ONCE(ret);
++	d_drop(dentry);
+ 	fsnotify_rmdir(dir, dentry);
+-	d_delete(dentry);
+ 	dput(dentry);
+ 	inode_unlock(dir);
+ }
+diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
+index e7d04adb6cb87..4f48003e43271 100644
+--- a/fs/ocfs2/suballoc.c
++++ b/fs/ocfs2/suballoc.c
+@@ -1253,26 +1253,23 @@ static int ocfs2_test_bg_bit_allocatable(struct buffer_head *bg_bh,
+ {
+ 	struct ocfs2_group_desc *bg = (struct ocfs2_group_desc *) bg_bh->b_data;
+ 	struct journal_head *jh;
+-	int ret = 1;
++	int ret;
+ 
+ 	if (ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap))
+ 		return 0;
+ 
+-	if (!buffer_jbd(bg_bh))
++	jh = jbd2_journal_grab_journal_head(bg_bh);
++	if (!jh)
+ 		return 1;
+ 
+-	jbd_lock_bh_journal_head(bg_bh);
+-	if (buffer_jbd(bg_bh)) {
+-		jh = bh2jh(bg_bh);
+-		spin_lock(&jh->b_state_lock);
+-		bg = (struct ocfs2_group_desc *) jh->b_committed_data;
+-		if (bg)
+-			ret = !ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap);
+-		else
+-			ret = 1;
+-		spin_unlock(&jh->b_state_lock);
+-	}
+-	jbd_unlock_bh_journal_head(bg_bh);
++	spin_lock(&jh->b_state_lock);
++	bg = (struct ocfs2_group_desc *) jh->b_committed_data;
++	if (bg)
++		ret = !ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap);
++	else
++		ret = 1;
++	spin_unlock(&jh->b_state_lock);
++	jbd2_journal_put_journal_head(jh);
+ 
+ 	return ret;
+ }
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index 0dd2f93ac0480..d32b836f6ca74 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -257,10 +257,6 @@ int udf_expand_file_adinicb(struct inode *inode)
+ 	char *kaddr;
+ 	struct udf_inode_info *iinfo = UDF_I(inode);
+ 	int err;
+-	struct writeback_control udf_wbc = {
+-		.sync_mode = WB_SYNC_NONE,
+-		.nr_to_write = 1,
+-	};
+ 
+ 	WARN_ON_ONCE(!inode_is_locked(inode));
+ 	if (!iinfo->i_lenAlloc) {
+@@ -304,8 +300,10 @@ int udf_expand_file_adinicb(struct inode *inode)
+ 		iinfo->i_alloc_type = ICBTAG_FLAG_AD_LONG;
+ 	/* from now on we have normal address_space methods */
+ 	inode->i_data.a_ops = &udf_aops;
++	set_page_dirty(page);
++	unlock_page(page);
+ 	up_write(&iinfo->i_data_sem);
+-	err = inode->i_data.a_ops->writepage(page, &udf_wbc);
++	err = filemap_fdatawrite(inode->i_mapping);
+ 	if (err) {
+ 		/* Restore everything back so that we don't lose data... */
+ 		lock_page(page);
+@@ -316,6 +314,7 @@ int udf_expand_file_adinicb(struct inode *inode)
+ 		unlock_page(page);
+ 		iinfo->i_alloc_type = ICBTAG_FLAG_AD_IN_ICB;
+ 		inode->i_data.a_ops = &udf_adinicb_aops;
++		iinfo->i_lenAlloc = inode->i_size;
+ 		up_write(&iinfo->i_data_sem);
+ 	}
+ 	put_page(page);
+diff --git a/include/linux/fsnotify.h b/include/linux/fsnotify.h
+index f8acddcf54fb4..79add91eaa04e 100644
+--- a/include/linux/fsnotify.h
++++ b/include/linux/fsnotify.h
+@@ -203,6 +203,42 @@ static inline void fsnotify_link(struct inode *dir, struct inode *inode,
+ 	fsnotify_name(dir, FS_CREATE, inode, &new_dentry->d_name, 0);
+ }
+ 
++/*
++ * fsnotify_delete - @dentry was unlinked and unhashed
++ *
++ * Caller must make sure that dentry->d_name is stable.
++ *
++ * Note: unlike fsnotify_unlink(), we have to pass also the unlinked inode
++ * as this may be called after d_delete() and old_dentry may be negative.
++ */
++static inline void fsnotify_delete(struct inode *dir, struct inode *inode,
++				   struct dentry *dentry)
++{
++	__u32 mask = FS_DELETE;
++
++	if (S_ISDIR(inode->i_mode))
++		mask |= FS_ISDIR;
++
++	fsnotify_name(dir, mask, inode, &dentry->d_name, 0);
++}
++
++/**
++ * d_delete_notify - delete a dentry and call fsnotify_delete()
++ * @dentry: The dentry to delete
++ *
++ * This helper is used to guaranty that the unlinked inode cannot be found
++ * by lookup of this name after fsnotify_delete() event has been delivered.
++ */
++static inline void d_delete_notify(struct inode *dir, struct dentry *dentry)
++{
++	struct inode *inode = d_inode(dentry);
++
++	ihold(inode);
++	d_delete(dentry);
++	fsnotify_delete(dir, inode, dentry);
++	iput(inode);
++}
++
+ /*
+  * fsnotify_unlink - 'name' was unlinked
+  *
+@@ -210,10 +246,10 @@ static inline void fsnotify_link(struct inode *dir, struct inode *inode,
+  */
+ static inline void fsnotify_unlink(struct inode *dir, struct dentry *dentry)
+ {
+-	/* Expected to be called before d_delete() */
+-	WARN_ON_ONCE(d_is_negative(dentry));
++	if (WARN_ON_ONCE(d_is_negative(dentry)))
++		return;
+ 
+-	fsnotify_dirent(dir, dentry, FS_DELETE);
++	fsnotify_delete(dir, d_inode(dentry), dentry);
+ }
+ 
+ /*
+@@ -233,10 +269,10 @@ static inline void fsnotify_mkdir(struct inode *inode, struct dentry *dentry)
+  */
+ static inline void fsnotify_rmdir(struct inode *dir, struct dentry *dentry)
+ {
+-	/* Expected to be called before d_delete() */
+-	WARN_ON_ONCE(d_is_negative(dentry));
++	if (WARN_ON_ONCE(d_is_negative(dentry)))
++		return;
+ 
+-	fsnotify_dirent(dir, dentry, FS_DELETE | FS_ISDIR);
++	fsnotify_delete(dir, d_inode(dentry), dentry);
+ }
+ 
+ /*
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 3476d20b75d49..fe3155736d635 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2543,6 +2543,7 @@ struct packet_type {
+ 					      struct net_device *);
+ 	bool			(*id_match)(struct packet_type *ptype,
+ 					    struct sock *sk);
++	struct net		*af_packet_net;
+ 	void			*af_packet_priv;
+ 	struct list_head	list;
+ };
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index c94551091dad3..67a50c78232fe 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -678,18 +678,6 @@ struct perf_event {
+ 	u64				total_time_running;
+ 	u64				tstamp;
+ 
+-	/*
+-	 * timestamp shadows the actual context timing but it can
+-	 * be safely used in NMI interrupt context. It reflects the
+-	 * context time as it was when the event was last scheduled in,
+-	 * or when ctx_sched_in failed to schedule the event because we
+-	 * run out of PMC.
+-	 *
+-	 * ctx_time already accounts for ctx->timestamp. Therefore to
+-	 * compute ctx_time for a sample, simply add perf_clock().
+-	 */
+-	u64				shadow_ctx_time;
+-
+ 	struct perf_event_attr		attr;
+ 	u16				header_size;
+ 	u16				id_header_size;
+@@ -834,6 +822,7 @@ struct perf_event_context {
+ 	 */
+ 	u64				time;
+ 	u64				timestamp;
++	u64				timeoffset;
+ 
+ 	/*
+ 	 * These fields let us detect when two contexts have both
+@@ -916,6 +905,8 @@ struct bpf_perf_event_data_kern {
+ struct perf_cgroup_info {
+ 	u64				time;
+ 	u64				timestamp;
++	u64				timeoffset;
++	int				active;
+ };
+ 
+ struct perf_cgroup {
+diff --git a/include/linux/usb/role.h b/include/linux/usb/role.h
+index 0164fed31b06c..b9ccaeb8a4aef 100644
+--- a/include/linux/usb/role.h
++++ b/include/linux/usb/role.h
+@@ -90,6 +90,12 @@ fwnode_usb_role_switch_get(struct fwnode_handle *node)
+ 
+ static inline void usb_role_switch_put(struct usb_role_switch *sw) { }
+ 
++static inline struct usb_role_switch *
++usb_role_switch_find_by_fwnode(const struct fwnode_handle *fwnode)
++{
++	return NULL;
++}
++
+ static inline struct usb_role_switch *
+ usb_role_switch_register(struct device *parent,
+ 			 const struct usb_role_switch_desc *desc)
+diff --git a/include/net/addrconf.h b/include/net/addrconf.h
+index 78ea3e332688f..e7ce719838b5e 100644
+--- a/include/net/addrconf.h
++++ b/include/net/addrconf.h
+@@ -6,6 +6,8 @@
+ #define RTR_SOLICITATION_INTERVAL	(4*HZ)
+ #define RTR_SOLICITATION_MAX_INTERVAL	(3600*HZ)	/* 1 hour */
+ 
++#define MIN_VALID_LIFETIME		(2*3600)	/* 2 hours */
++
+ #define TEMP_VALID_LIFETIME		(7*86400)
+ #define TEMP_PREFERRED_LIFETIME		(86400)
+ #define REGEN_MAX_RETRY			(3)
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 5538e54d4620c..de2dc22a78f93 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -506,19 +506,18 @@ static inline void ip_select_ident_segs(struct net *net, struct sk_buff *skb,
+ {
+ 	struct iphdr *iph = ip_hdr(skb);
+ 
++	/* We had many attacks based on IPID, use the private
++	 * generator as much as we can.
++	 */
++	if (sk && inet_sk(sk)->inet_daddr) {
++		iph->id = htons(inet_sk(sk)->inet_id);
++		inet_sk(sk)->inet_id += segs;
++		return;
++	}
+ 	if ((iph->frag_off & htons(IP_DF)) && !skb->ignore_df) {
+-		/* This is only to work around buggy Windows95/2000
+-		 * VJ compression implementations.  If the ID field
+-		 * does not change, they drop every other packet in
+-		 * a TCP stream using header compression.
+-		 */
+-		if (sk && inet_sk(sk)->inet_daddr) {
+-			iph->id = htons(inet_sk(sk)->inet_id);
+-			inet_sk(sk)->inet_id += segs;
+-		} else {
+-			iph->id = 0;
+-		}
++		iph->id = 0;
+ 	} else {
++		/* Unfortunately we need the big hammer to get a suitable IPID */
+ 		__ip_select_ident(net, iph, segs);
+ 	}
+ }
+diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
+index 88bc66b8d02b0..95d93ecf07371 100644
+--- a/include/net/ip6_fib.h
++++ b/include/net/ip6_fib.h
+@@ -280,7 +280,7 @@ static inline bool fib6_get_cookie_safe(const struct fib6_info *f6i,
+ 	fn = rcu_dereference(f6i->fib6_node);
+ 
+ 	if (fn) {
+-		*cookie = fn->fn_sernum;
++		*cookie = READ_ONCE(fn->fn_sernum);
+ 		/* pairs with smp_wmb() in fib6_update_sernum_upto_root() */
+ 		smp_rmb();
+ 		status = true;
+diff --git a/include/net/route.h b/include/net/route.h
+index ff021cab657e5..a07c277cd33e8 100644
+--- a/include/net/route.h
++++ b/include/net/route.h
+@@ -369,7 +369,7 @@ static inline struct neighbour *ip_neigh_gw4(struct net_device *dev,
+ {
+ 	struct neighbour *neigh;
+ 
+-	neigh = __ipv4_neigh_lookup_noref(dev, daddr);
++	neigh = __ipv4_neigh_lookup_noref(dev, (__force u32)daddr);
+ 	if (unlikely(!neigh))
+ 		neigh = __neigh_create(&arp_tbl, &daddr, dev, false);
+ 
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index 4477873ac3a0b..56cd7e6589ff3 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -664,13 +664,14 @@ BPF_CALL_4(bpf_get_task_stack, struct task_struct *, task, void *, buf,
+ 	   u32, size, u64, flags)
+ {
+ 	struct pt_regs *regs;
+-	long res;
++	long res = -EINVAL;
+ 
+ 	if (!try_get_task_stack(task))
+ 		return -EFAULT;
+ 
+ 	regs = task_pt_regs(task);
+-	res = __bpf_get_stack(regs, task, NULL, buf, size, flags);
++	if (regs)
++		res = __bpf_get_stack(regs, task, NULL, buf, size, flags);
+ 	put_task_stack(task);
+ 
+ 	return res;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index e2d774cc470ee..c6493f7e02359 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -266,7 +266,7 @@ static void event_function_call(struct perf_event *event, event_f func, void *da
+ 	if (!event->parent) {
+ 		/*
+ 		 * If this is a !child event, we must hold ctx::mutex to
+-		 * stabilize the the event->ctx relation. See
++		 * stabilize the event->ctx relation. See
+ 		 * perf_event_ctx_lock().
+ 		 */
+ 		lockdep_assert_held(&ctx->mutex);
+@@ -673,6 +673,23 @@ perf_event_set_state(struct perf_event *event, enum perf_event_state state)
+ 	WRITE_ONCE(event->state, state);
+ }
+ 
++/*
++ * UP store-release, load-acquire
++ */
++
++#define __store_release(ptr, val)					\
++do {									\
++	barrier();							\
++	WRITE_ONCE(*(ptr), (val));					\
++} while (0)
++
++#define __load_acquire(ptr)						\
++({									\
++	__unqual_scalar_typeof(*(ptr)) ___p = READ_ONCE(*(ptr));	\
++	barrier();							\
++	___p;								\
++})
++
+ #ifdef CONFIG_CGROUP_PERF
+ 
+ static inline bool
+@@ -718,34 +735,51 @@ static inline u64 perf_cgroup_event_time(struct perf_event *event)
+ 	return t->time;
+ }
+ 
+-static inline void __update_cgrp_time(struct perf_cgroup *cgrp)
++static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now)
+ {
+-	struct perf_cgroup_info *info;
+-	u64 now;
+-
+-	now = perf_clock();
++	struct perf_cgroup_info *t;
+ 
+-	info = this_cpu_ptr(cgrp->info);
++	t = per_cpu_ptr(event->cgrp->info, event->cpu);
++	if (!__load_acquire(&t->active))
++		return t->time;
++	now += READ_ONCE(t->timeoffset);
++	return now;
++}
+ 
+-	info->time += now - info->timestamp;
++static inline void __update_cgrp_time(struct perf_cgroup_info *info, u64 now, bool adv)
++{
++	if (adv)
++		info->time += now - info->timestamp;
+ 	info->timestamp = now;
++	/*
++	 * see update_context_time()
++	 */
++	WRITE_ONCE(info->timeoffset, info->time - info->timestamp);
+ }
+ 
+-static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx)
++static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx, bool final)
+ {
+ 	struct perf_cgroup *cgrp = cpuctx->cgrp;
+ 	struct cgroup_subsys_state *css;
++	struct perf_cgroup_info *info;
+ 
+ 	if (cgrp) {
++		u64 now = perf_clock();
++
+ 		for (css = &cgrp->css; css; css = css->parent) {
+ 			cgrp = container_of(css, struct perf_cgroup, css);
+-			__update_cgrp_time(cgrp);
++			info = this_cpu_ptr(cgrp->info);
++
++			__update_cgrp_time(info, now, true);
++			if (final)
++				__store_release(&info->active, 0);
+ 		}
+ 	}
+ }
+ 
+ static inline void update_cgrp_time_from_event(struct perf_event *event)
+ {
++	struct perf_cgroup_info *info;
+ 	struct perf_cgroup *cgrp;
+ 
+ 	/*
+@@ -759,8 +793,10 @@ static inline void update_cgrp_time_from_event(struct perf_event *event)
+ 	/*
+ 	 * Do not update time when cgroup is not active
+ 	 */
+-	if (cgroup_is_descendant(cgrp->css.cgroup, event->cgrp->css.cgroup))
+-		__update_cgrp_time(event->cgrp);
++	if (cgroup_is_descendant(cgrp->css.cgroup, event->cgrp->css.cgroup)) {
++		info = this_cpu_ptr(event->cgrp->info);
++		__update_cgrp_time(info, perf_clock(), true);
++	}
+ }
+ 
+ static inline void
+@@ -784,7 +820,8 @@ perf_cgroup_set_timestamp(struct task_struct *task,
+ 	for (css = &cgrp->css; css; css = css->parent) {
+ 		cgrp = container_of(css, struct perf_cgroup, css);
+ 		info = this_cpu_ptr(cgrp->info);
+-		info->timestamp = ctx->timestamp;
++		__update_cgrp_time(info, ctx->timestamp, false);
++		__store_release(&info->active, 1);
+ 	}
+ }
+ 
+@@ -980,14 +1017,6 @@ out:
+ 	return ret;
+ }
+ 
+-static inline void
+-perf_cgroup_set_shadow_time(struct perf_event *event, u64 now)
+-{
+-	struct perf_cgroup_info *t;
+-	t = per_cpu_ptr(event->cgrp->info, event->cpu);
+-	event->shadow_ctx_time = now - t->timestamp;
+-}
+-
+ static inline void
+ perf_cgroup_event_enable(struct perf_event *event, struct perf_event_context *ctx)
+ {
+@@ -1065,7 +1094,8 @@ static inline void update_cgrp_time_from_event(struct perf_event *event)
+ {
+ }
+ 
+-static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx)
++static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx,
++						bool final)
+ {
+ }
+ 
+@@ -1097,12 +1127,12 @@ perf_cgroup_switch(struct task_struct *task, struct task_struct *next)
+ {
+ }
+ 
+-static inline void
+-perf_cgroup_set_shadow_time(struct perf_event *event, u64 now)
++static inline u64 perf_cgroup_event_time(struct perf_event *event)
+ {
++	return 0;
+ }
+ 
+-static inline u64 perf_cgroup_event_time(struct perf_event *event)
++static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now)
+ {
+ 	return 0;
+ }
+@@ -1300,7 +1330,7 @@ static void put_ctx(struct perf_event_context *ctx)
+  * life-time rules separate them. That is an exiting task cannot fork, and a
+  * spawning task cannot (yet) exit.
+  *
+- * But remember that that these are parent<->child context relations, and
++ * But remember that these are parent<->child context relations, and
+  * migration does not affect children, therefore these two orderings should not
+  * interact.
+  *
+@@ -1439,7 +1469,7 @@ static u64 primary_event_id(struct perf_event *event)
+ /*
+  * Get the perf_event_context for a task and lock it.
+  *
+- * This has to cope with with the fact that until it is locked,
++ * This has to cope with the fact that until it is locked,
+  * the context could get moved to another task.
+  */
+ static struct perf_event_context *
+@@ -1524,22 +1554,59 @@ static void perf_unpin_context(struct perf_event_context *ctx)
+ /*
+  * Update the record of the current time in a context.
+  */
+-static void update_context_time(struct perf_event_context *ctx)
++static void __update_context_time(struct perf_event_context *ctx, bool adv)
+ {
+ 	u64 now = perf_clock();
+ 
+-	ctx->time += now - ctx->timestamp;
++	if (adv)
++		ctx->time += now - ctx->timestamp;
+ 	ctx->timestamp = now;
++
++	/*
++	 * The above: time' = time + (now - timestamp), can be re-arranged
++	 * into: time` = now + (time - timestamp), which gives a single value
++	 * offset to compute future time without locks on.
++	 *
++	 * See perf_event_time_now(), which can be used from NMI context where
++	 * it's (obviously) not possible to acquire ctx->lock in order to read
++	 * both the above values in a consistent manner.
++	 */
++	WRITE_ONCE(ctx->timeoffset, ctx->time - ctx->timestamp);
++}
++
++static void update_context_time(struct perf_event_context *ctx)
++{
++	__update_context_time(ctx, true);
+ }
+ 
+ static u64 perf_event_time(struct perf_event *event)
+ {
+ 	struct perf_event_context *ctx = event->ctx;
+ 
++	if (unlikely(!ctx))
++		return 0;
++
+ 	if (is_cgroup_event(event))
+ 		return perf_cgroup_event_time(event);
+ 
+-	return ctx ? ctx->time : 0;
++	return ctx->time;
++}
++
++static u64 perf_event_time_now(struct perf_event *event, u64 now)
++{
++	struct perf_event_context *ctx = event->ctx;
++
++	if (unlikely(!ctx))
++		return 0;
++
++	if (is_cgroup_event(event))
++		return perf_cgroup_event_time_now(event, now);
++
++	if (!(__load_acquire(&ctx->is_active) & EVENT_TIME))
++		return ctx->time;
++
++	now += READ_ONCE(ctx->timeoffset);
++	return now;
+ }
+ 
+ static enum event_type_t get_event_type(struct perf_event *event)
+@@ -2333,7 +2400,7 @@ __perf_remove_from_context(struct perf_event *event,
+ 
+ 	if (ctx->is_active & EVENT_TIME) {
+ 		update_context_time(ctx);
+-		update_cgrp_time_from_cpuctx(cpuctx);
++		update_cgrp_time_from_cpuctx(cpuctx, false);
+ 	}
+ 
+ 	event_sched_out(event, cpuctx, ctx);
+@@ -2342,6 +2409,9 @@ __perf_remove_from_context(struct perf_event *event,
+ 	list_del_event(event, ctx);
+ 
+ 	if (!ctx->nr_events && ctx->is_active) {
++		if (ctx == &cpuctx->ctx)
++			update_cgrp_time_from_cpuctx(cpuctx, true);
++
+ 		ctx->is_active = 0;
+ 		ctx->rotate_necessary = 0;
+ 		if (ctx->task) {
+@@ -2467,40 +2537,6 @@ void perf_event_disable_inatomic(struct perf_event *event)
+ 	irq_work_queue(&event->pending);
+ }
+ 
+-static void perf_set_shadow_time(struct perf_event *event,
+-				 struct perf_event_context *ctx)
+-{
+-	/*
+-	 * use the correct time source for the time snapshot
+-	 *
+-	 * We could get by without this by leveraging the
+-	 * fact that to get to this function, the caller
+-	 * has most likely already called update_context_time()
+-	 * and update_cgrp_time_xx() and thus both timestamp
+-	 * are identical (or very close). Given that tstamp is,
+-	 * already adjusted for cgroup, we could say that:
+-	 *    tstamp - ctx->timestamp
+-	 * is equivalent to
+-	 *    tstamp - cgrp->timestamp.
+-	 *
+-	 * Then, in perf_output_read(), the calculation would
+-	 * work with no changes because:
+-	 * - event is guaranteed scheduled in
+-	 * - no scheduled out in between
+-	 * - thus the timestamp would be the same
+-	 *
+-	 * But this is a bit hairy.
+-	 *
+-	 * So instead, we have an explicit cgroup call to remain
+-	 * within the time time source all along. We believe it
+-	 * is cleaner and simpler to understand.
+-	 */
+-	if (is_cgroup_event(event))
+-		perf_cgroup_set_shadow_time(event, event->tstamp);
+-	else
+-		event->shadow_ctx_time = event->tstamp - ctx->timestamp;
+-}
+-
+ #define MAX_INTERRUPTS (~0ULL)
+ 
+ static void perf_log_throttle(struct perf_event *event, int enable);
+@@ -2541,8 +2577,6 @@ event_sched_in(struct perf_event *event,
+ 
+ 	perf_pmu_disable(event->pmu);
+ 
+-	perf_set_shadow_time(event, ctx);
+-
+ 	perf_log_itrace_start(event);
+ 
+ 	if (event->pmu->add(event, PERF_EF_START)) {
+@@ -3216,16 +3250,6 @@ static void ctx_sched_out(struct perf_event_context *ctx,
+ 		return;
+ 	}
+ 
+-	ctx->is_active &= ~event_type;
+-	if (!(ctx->is_active & EVENT_ALL))
+-		ctx->is_active = 0;
+-
+-	if (ctx->task) {
+-		WARN_ON_ONCE(cpuctx->task_ctx != ctx);
+-		if (!ctx->is_active)
+-			cpuctx->task_ctx = NULL;
+-	}
+-
+ 	/*
+ 	 * Always update time if it was set; not only when it changes.
+ 	 * Otherwise we can 'forget' to update time for any but the last
+@@ -3239,7 +3263,22 @@ static void ctx_sched_out(struct perf_event_context *ctx,
+ 	if (is_active & EVENT_TIME) {
+ 		/* update (and stop) ctx time */
+ 		update_context_time(ctx);
+-		update_cgrp_time_from_cpuctx(cpuctx);
++		update_cgrp_time_from_cpuctx(cpuctx, ctx == &cpuctx->ctx);
++		/*
++		 * CPU-release for the below ->is_active store,
++		 * see __load_acquire() in perf_event_time_now()
++		 */
++		barrier();
++	}
++
++	ctx->is_active &= ~event_type;
++	if (!(ctx->is_active & EVENT_ALL))
++		ctx->is_active = 0;
++
++	if (ctx->task) {
++		WARN_ON_ONCE(cpuctx->task_ctx != ctx);
++		if (!ctx->is_active)
++			cpuctx->task_ctx = NULL;
+ 	}
+ 
+ 	is_active ^= ctx->is_active; /* changed bits */
+@@ -3676,13 +3715,19 @@ static noinline int visit_groups_merge(struct perf_cpu_context *cpuctx,
+ 	return 0;
+ }
+ 
++/*
++ * Because the userpage is strictly per-event (there is no concept of context,
++ * so there cannot be a context indirection), every userpage must be updated
++ * when context time starts :-(
++ *
++ * IOW, we must not miss EVENT_TIME edges.
++ */
+ static inline bool event_update_userpage(struct perf_event *event)
+ {
+ 	if (likely(!atomic_read(&event->mmap_count)))
+ 		return false;
+ 
+ 	perf_event_update_time(event);
+-	perf_set_shadow_time(event, event->ctx);
+ 	perf_event_update_userpage(event);
+ 
+ 	return true;
+@@ -3766,13 +3811,23 @@ ctx_sched_in(struct perf_event_context *ctx,
+ 	     struct task_struct *task)
+ {
+ 	int is_active = ctx->is_active;
+-	u64 now;
+ 
+ 	lockdep_assert_held(&ctx->lock);
+ 
+ 	if (likely(!ctx->nr_events))
+ 		return;
+ 
++	if (is_active ^ EVENT_TIME) {
++		/* start ctx time */
++		__update_context_time(ctx, false);
++		perf_cgroup_set_timestamp(task, ctx);
++		/*
++		 * CPU-release for the below ->is_active store,
++		 * see __load_acquire() in perf_event_time_now()
++		 */
++		barrier();
++	}
++
+ 	ctx->is_active |= (event_type | EVENT_TIME);
+ 	if (ctx->task) {
+ 		if (!is_active)
+@@ -3783,13 +3838,6 @@ ctx_sched_in(struct perf_event_context *ctx,
+ 
+ 	is_active ^= ctx->is_active; /* changed bits */
+ 
+-	if (is_active & EVENT_TIME) {
+-		/* start ctx time */
+-		now = perf_clock();
+-		ctx->timestamp = now;
+-		perf_cgroup_set_timestamp(task, ctx);
+-	}
+-
+ 	/*
+ 	 * First go through the list and put on any pinned groups
+ 	 * in order to give them the best chance of going on.
+@@ -4325,6 +4373,18 @@ static inline u64 perf_event_count(struct perf_event *event)
+ 	return local64_read(&event->count) + atomic64_read(&event->child_count);
+ }
+ 
++static void calc_timer_values(struct perf_event *event,
++				u64 *now,
++				u64 *enabled,
++				u64 *running)
++{
++	u64 ctx_time;
++
++	*now = perf_clock();
++	ctx_time = perf_event_time_now(event, *now);
++	__perf_update_times(event, ctx_time, enabled, running);
++}
++
+ /*
+  * NMI-safe method to read a local event, that is an event that
+  * is:
+@@ -4384,10 +4444,9 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
+ 
+ 	*value = local64_read(&event->count);
+ 	if (enabled || running) {
+-		u64 now = event->shadow_ctx_time + perf_clock();
+-		u64 __enabled, __running;
++		u64 __enabled, __running, __now;;
+ 
+-		__perf_update_times(event, now, &__enabled, &__running);
++		calc_timer_values(event, &__now, &__enabled, &__running);
+ 		if (enabled)
+ 			*enabled = __enabled;
+ 		if (running)
+@@ -5694,18 +5753,6 @@ static int perf_event_index(struct perf_event *event)
+ 	return event->pmu->event_idx(event);
+ }
+ 
+-static void calc_timer_values(struct perf_event *event,
+-				u64 *now,
+-				u64 *enabled,
+-				u64 *running)
+-{
+-	u64 ctx_time;
+-
+-	*now = perf_clock();
+-	ctx_time = event->shadow_ctx_time + *now;
+-	__perf_update_times(event, ctx_time, enabled, running);
+-}
+-
+ static void perf_event_init_userpage(struct perf_event *event)
+ {
+ 	struct perf_event_mmap_page *userpg;
+@@ -6245,7 +6292,6 @@ accounting:
+ 		ring_buffer_attach(event, rb);
+ 
+ 		perf_event_update_time(event);
+-		perf_set_shadow_time(event, event->ctx);
+ 		perf_event_init_userpage(event);
+ 		perf_event_update_userpage(event);
+ 	} else {
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index 00b0358739ab3..e1bbb3b92921d 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -1735,7 +1735,7 @@ void uprobe_free_utask(struct task_struct *t)
+ }
+ 
+ /*
+- * Allocate a uprobe_task object for the task if if necessary.
++ * Allocate a uprobe_task object for the task if necessary.
+  * Called when the thread hits a breakpoint.
+  *
+  * Returns:
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 2f8cd616d3b29..f00dd928fc711 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -1438,7 +1438,7 @@ rt_mutex_fasttrylock(struct rt_mutex *lock,
+ }
+ 
+ /*
+- * Performs the wakeup of the the top-waiter and re-enables preemption.
++ * Performs the wakeup of the top-waiter and re-enables preemption.
+  */
+ void rt_mutex_postunlock(struct wake_q_head *wake_q)
+ {
+@@ -1832,7 +1832,7 @@ struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock)
+  *			been started.
+  * @waiter:		the pre-initialized rt_mutex_waiter
+  *
+- * Wait for the the lock acquisition started on our behalf by
++ * Wait for the lock acquisition started on our behalf by
+  * rt_mutex_start_proxy_lock(). Upon failure, the caller must call
+  * rt_mutex_cleanup_proxy_lock().
+  *
+diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
+index a163542d178ee..cc5cc889b5b7f 100644
+--- a/kernel/locking/rwsem.c
++++ b/kernel/locking/rwsem.c
+@@ -1177,7 +1177,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
+ 
+ 		/*
+ 		 * If there were already threads queued before us and:
+-		 *  1) there are no no active locks, wake the front
++		 *  1) there are no active locks, wake the front
+ 		 *     queued process(es) as the handoff bit might be set.
+ 		 *  2) there are no active writers and some readers, the lock
+ 		 *     must be read owned; so we try to wake any read lock
+diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c
+index d9dd94defc0a9..9aa855a96c4ae 100644
+--- a/kernel/locking/semaphore.c
++++ b/kernel/locking/semaphore.c
+@@ -119,7 +119,7 @@ EXPORT_SYMBOL(down_killable);
+  * @sem: the semaphore to be acquired
+  *
+  * Try to acquire the semaphore atomically.  Returns 0 if the semaphore has
+- * been acquired successfully or 1 if it it cannot be acquired.
++ * been acquired successfully or 1 if it cannot be acquired.
+  *
+  * NOTE: This return value is inverted from both spin_trylock and
+  * mutex_trylock!  Be careful about this when converting code.
+diff --git a/kernel/power/wakelock.c b/kernel/power/wakelock.c
+index 105df4dfc7839..52571dcad768b 100644
+--- a/kernel/power/wakelock.c
++++ b/kernel/power/wakelock.c
+@@ -39,23 +39,20 @@ ssize_t pm_show_wakelocks(char *buf, bool show_active)
+ {
+ 	struct rb_node *node;
+ 	struct wakelock *wl;
+-	char *str = buf;
+-	char *end = buf + PAGE_SIZE;
++	int len = 0;
+ 
+ 	mutex_lock(&wakelocks_lock);
+ 
+ 	for (node = rb_first(&wakelocks_tree); node; node = rb_next(node)) {
+ 		wl = rb_entry(node, struct wakelock, node);
+ 		if (wl->ws->active == show_active)
+-			str += scnprintf(str, end - str, "%s ", wl->name);
++			len += sysfs_emit_at(buf, len, "%s ", wl->name);
+ 	}
+-	if (str > buf)
+-		str--;
+ 
+-	str += scnprintf(str, end - str, "\n");
++	len += sysfs_emit_at(buf, len, "\n");
+ 
+ 	mutex_unlock(&wakelocks_lock);
+-	return (str - buf);
++	return len;
+ }
+ 
+ #if CONFIG_PM_WAKELOCKS_LIMIT > 0
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 2a33cb5a10e59..acd9833b8ec22 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3379,7 +3379,6 @@ void set_task_rq_fair(struct sched_entity *se,
+ 	se->avg.last_update_time = n_last_update_time;
+ }
+ 
+-
+ /*
+  * When on migration a sched_entity joins/leaves the PELT hierarchy, we need to
+  * propagate its contribution. The key to this propagation is the invariant
+@@ -3447,7 +3446,6 @@ void set_task_rq_fair(struct sched_entity *se,
+  * XXX: only do this for the part of runnable > running ?
+  *
+  */
+-
+ static inline void
+ update_tg_cfs_util(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq)
+ {
+@@ -3676,7 +3674,19 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ 
+ 		r = removed_util;
+ 		sub_positive(&sa->util_avg, r);
+-		sa->util_sum = sa->util_avg * divider;
++		sub_positive(&sa->util_sum, r * divider);
++		/*
++		 * Because of rounding, se->util_sum might ends up being +1 more than
++		 * cfs->util_sum. Although this is not a problem by itself, detaching
++		 * a lot of tasks with the rounding problem between 2 updates of
++		 * util_avg (~1ms) can make cfs->util_sum becoming null whereas
++		 * cfs_util_avg is not.
++		 * Check that util_sum is still above its lower bound for the new
++		 * util_avg. Given that period_contrib might have moved since the last
++		 * sync, we are only sure that util_sum must be above or equal to
++		 *    util_avg * minimum possible divider
++		 */
++		sa->util_sum = max_t(u32, sa->util_sum, sa->util_avg * PELT_MIN_DIVIDER);
+ 
+ 		r = removed_runnable;
+ 		sub_positive(&sa->runnable_avg, r);
+@@ -5149,7 +5159,7 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ /*
+  * When a group wakes up we want to make sure that its quota is not already
+  * expired/exceeded, otherwise it may be allowed to steal additional ticks of
+- * runtime as update_curr() throttling can not not trigger until it's on-rq.
++ * runtime as update_curr() throttling can not trigger until it's on-rq.
+  */
+ static void check_enqueue_throttle(struct cfs_rq *cfs_rq)
+ {
+diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
+index 16f57e71f9c44..cc7cd512e4e33 100644
+--- a/kernel/sched/membarrier.c
++++ b/kernel/sched/membarrier.c
+@@ -19,11 +19,11 @@
+ #endif
+ 
+ #ifdef CONFIG_RSEQ
+-#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ_BITMASK		\
++#define MEMBARRIER_PRIVATE_EXPEDITED_RSEQ_BITMASK		\
+ 	(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ			\
+-	| MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ_BITMASK)
++	| MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ)
+ #else
+-#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ_BITMASK	0
++#define MEMBARRIER_PRIVATE_EXPEDITED_RSEQ_BITMASK	0
+ #endif
+ 
+ #define MEMBARRIER_CMD_BITMASK						\
+@@ -31,7 +31,8 @@
+ 	| MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED			\
+ 	| MEMBARRIER_CMD_PRIVATE_EXPEDITED				\
+ 	| MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED			\
+-	| MEMBARRIER_PRIVATE_EXPEDITED_SYNC_CORE_BITMASK)
++	| MEMBARRIER_PRIVATE_EXPEDITED_SYNC_CORE_BITMASK		\
++	| MEMBARRIER_PRIVATE_EXPEDITED_RSEQ_BITMASK)
+ 
+ static void ipi_mb(void *info)
+ {
+@@ -315,7 +316,7 @@ static int sync_runqueues_membarrier_state(struct mm_struct *mm)
+ 
+ 	/*
+ 	 * For each cpu runqueue, if the task's mm match @mm, ensure that all
+-	 * @mm's membarrier state set bits are also set in in the runqueue's
++	 * @mm's membarrier state set bits are also set in the runqueue's
+ 	 * membarrier state. This ensures that a runqueue scheduling
+ 	 * between threads which are users of @mm has its membarrier state
+ 	 * updated.
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index 0b9aeebb9c325..45bf08e22207c 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -37,9 +37,11 @@ update_irq_load_avg(struct rq *rq, u64 running)
+ }
+ #endif
+ 
++#define PELT_MIN_DIVIDER	(LOAD_AVG_MAX - 1024)
++
+ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ {
+-	return LOAD_AVG_MAX - 1024 + avg->period_contrib;
++	return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+ 
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index cd2d094b9f820..a0729213f37be 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -7257,7 +7257,8 @@ static struct tracing_log_err *get_tracing_log_err(struct trace_array *tr)
+ 		err = kzalloc(sizeof(*err), GFP_KERNEL);
+ 		if (!err)
+ 			err = ERR_PTR(-ENOMEM);
+-		tr->n_err_log_entries++;
++		else
++			tr->n_err_log_entries++;
+ 
+ 		return err;
+ 	}
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 003e5f37861e3..1557a20b6500e 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -3506,6 +3506,7 @@ static int trace_action_create(struct hist_trigger_data *hist_data,
+ 
+ 			var_ref_idx = find_var_ref_idx(hist_data, var_ref);
+ 			if (WARN_ON(var_ref_idx < 0)) {
++				kfree(p);
+ 				ret = var_ref_idx;
+ 				goto err;
+ 			}
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 7ffcca9ae82a1..72b4127360c7f 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5661,6 +5661,11 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		struct hci_ev_le_advertising_info *ev = ptr;
+ 		s8 rssi;
+ 
++		if (ptr > (void *)skb_tail_pointer(skb) - sizeof(*ev)) {
++			bt_dev_err(hdev, "Malicious advertising data.");
++			break;
++		}
++
+ 		if (ev->length <= HCI_MAX_AD_LENGTH &&
+ 		    ev->data + ev->length <= skb_tail_pointer(skb)) {
+ 			rssi = ev->data[ev->length];
+@@ -5672,11 +5677,6 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		}
+ 
+ 		ptr += sizeof(*ev) + ev->length + 1;
+-
+-		if (ptr > (void *) skb_tail_pointer(skb) - sizeof(*ev)) {
+-			bt_dev_err(hdev, "Malicious advertising data. Stopping processing");
+-			break;
+-		}
+ 	}
+ 
+ 	hci_dev_unlock(hdev);
+diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c
+index 08c77418c687b..852f4b54e8811 100644
+--- a/net/bridge/br_vlan.c
++++ b/net/bridge/br_vlan.c
+@@ -543,10 +543,10 @@ static bool __allowed_ingress(const struct net_bridge *br,
+ 		if (!br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) {
+ 			if (*state == BR_STATE_FORWARDING) {
+ 				*state = br_vlan_get_pvid_state(vg);
+-				return br_vlan_state_allowed(*state, true);
+-			} else {
+-				return true;
++				if (!br_vlan_state_allowed(*state, true))
++					goto drop;
+ 			}
++			return true;
+ 		}
+ 	}
+ 	v = br_vlan_find(vg, *vid);
+@@ -1873,7 +1873,8 @@ static int br_vlan_rtm_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ 			goto out_err;
+ 		}
+ 		err = br_vlan_dump_dev(dev, skb, cb, dump_flags);
+-		if (err && err != -EMSGSIZE)
++		/* if the dump completed without an error we return 0 here */
++		if (err != -EMSGSIZE)
+ 			goto out_err;
+ 	} else {
+ 		for_each_netdev_rcu(net, dev) {
+diff --git a/net/core/net-procfs.c b/net/core/net-procfs.c
+index c714e6a9dad4c..eadb696360b48 100644
+--- a/net/core/net-procfs.c
++++ b/net/core/net-procfs.c
+@@ -193,12 +193,23 @@ static const struct seq_operations softnet_seq_ops = {
+ 	.show  = softnet_seq_show,
+ };
+ 
+-static void *ptype_get_idx(loff_t pos)
++static void *ptype_get_idx(struct seq_file *seq, loff_t pos)
+ {
++	struct list_head *ptype_list = NULL;
+ 	struct packet_type *pt = NULL;
++	struct net_device *dev;
+ 	loff_t i = 0;
+ 	int t;
+ 
++	for_each_netdev_rcu(seq_file_net(seq), dev) {
++		ptype_list = &dev->ptype_all;
++		list_for_each_entry_rcu(pt, ptype_list, list) {
++			if (i == pos)
++				return pt;
++			++i;
++		}
++	}
++
+ 	list_for_each_entry_rcu(pt, &ptype_all, list) {
+ 		if (i == pos)
+ 			return pt;
+@@ -219,22 +230,40 @@ static void *ptype_seq_start(struct seq_file *seq, loff_t *pos)
+ 	__acquires(RCU)
+ {
+ 	rcu_read_lock();
+-	return *pos ? ptype_get_idx(*pos - 1) : SEQ_START_TOKEN;
++	return *pos ? ptype_get_idx(seq, *pos - 1) : SEQ_START_TOKEN;
+ }
+ 
+ static void *ptype_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ {
++	struct net_device *dev;
+ 	struct packet_type *pt;
+ 	struct list_head *nxt;
+ 	int hash;
+ 
+ 	++*pos;
+ 	if (v == SEQ_START_TOKEN)
+-		return ptype_get_idx(0);
++		return ptype_get_idx(seq, 0);
+ 
+ 	pt = v;
+ 	nxt = pt->list.next;
++	if (pt->dev) {
++		if (nxt != &pt->dev->ptype_all)
++			goto found;
++
++		dev = pt->dev;
++		for_each_netdev_continue_rcu(seq_file_net(seq), dev) {
++			if (!list_empty(&dev->ptype_all)) {
++				nxt = dev->ptype_all.next;
++				goto found;
++			}
++		}
++
++		nxt = ptype_all.next;
++		goto ptype_all;
++	}
++
+ 	if (pt->type == htons(ETH_P_ALL)) {
++ptype_all:
+ 		if (nxt != &ptype_all)
+ 			goto found;
+ 		hash = 0;
+@@ -263,7 +292,8 @@ static int ptype_seq_show(struct seq_file *seq, void *v)
+ 
+ 	if (v == SEQ_START_TOKEN)
+ 		seq_puts(seq, "Type Device      Function\n");
+-	else if (pt->dev == NULL || dev_net(pt->dev) == seq_file_net(seq)) {
++	else if ((!pt->af_packet_net || net_eq(pt->af_packet_net, seq_file_net(seq))) &&
++		 (!pt->dev || net_eq(dev_net(pt->dev), seq_file_net(seq)))) {
+ 		if (pt->type == htons(ETH_P_ALL))
+ 			seq_puts(seq, "ALL ");
+ 		else
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 10d4cde31c6bf..5e48b3d3a00db 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -162,12 +162,19 @@ int ip_build_and_send_pkt(struct sk_buff *skb, const struct sock *sk,
+ 	iph->daddr    = (opt && opt->opt.srr ? opt->opt.faddr : daddr);
+ 	iph->saddr    = saddr;
+ 	iph->protocol = sk->sk_protocol;
+-	if (ip_dont_fragment(sk, &rt->dst)) {
++	/* Do not bother generating IPID for small packets (eg SYNACK) */
++	if (skb->len <= IPV4_MIN_MTU || ip_dont_fragment(sk, &rt->dst)) {
+ 		iph->frag_off = htons(IP_DF);
+ 		iph->id = 0;
+ 	} else {
+ 		iph->frag_off = 0;
+-		__ip_select_ident(net, iph, 1);
++		/* TCP packets here are SYNACK with fat IPv4/TCP options.
++		 * Avoid using the hashed IP ident generator.
++		 */
++		if (sk->sk_protocol == IPPROTO_TCP)
++			iph->id = (__force __be16)prandom_u32();
++		else
++			__ip_select_ident(net, iph, 1);
+ 	}
+ 
+ 	if (opt && opt->opt.optlen) {
+@@ -614,18 +621,6 @@ void ip_fraglist_init(struct sk_buff *skb, struct iphdr *iph,
+ }
+ EXPORT_SYMBOL(ip_fraglist_init);
+ 
+-static void ip_fraglist_ipcb_prepare(struct sk_buff *skb,
+-				     struct ip_fraglist_iter *iter)
+-{
+-	struct sk_buff *to = iter->frag;
+-
+-	/* Copy the flags to each fragment. */
+-	IPCB(to)->flags = IPCB(skb)->flags;
+-
+-	if (iter->offset == 0)
+-		ip_options_fragment(to);
+-}
+-
+ void ip_fraglist_prepare(struct sk_buff *skb, struct ip_fraglist_iter *iter)
+ {
+ 	unsigned int hlen = iter->hlen;
+@@ -671,7 +666,7 @@ void ip_frag_init(struct sk_buff *skb, unsigned int hlen,
+ EXPORT_SYMBOL(ip_frag_init);
+ 
+ static void ip_frag_ipcb(struct sk_buff *from, struct sk_buff *to,
+-			 bool first_frag, struct ip_frag_state *state)
++			 bool first_frag)
+ {
+ 	/* Copy the flags to each fragment. */
+ 	IPCB(to)->flags = IPCB(from)->flags;
+@@ -850,8 +845,20 @@ int ip_do_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 			/* Prepare header of the next frame,
+ 			 * before previous one went down. */
+ 			if (iter.frag) {
+-				ip_fraglist_ipcb_prepare(skb, &iter);
++				bool first_frag = (iter.offset == 0);
++
++				IPCB(iter.frag)->flags = IPCB(skb)->flags;
+ 				ip_fraglist_prepare(skb, &iter);
++				if (first_frag && IPCB(skb)->opt.optlen) {
++					/* ipcb->opt is not populated for frags
++					 * coming from __ip_make_skb(),
++					 * ip_options_fragment() needs optlen
++					 */
++					IPCB(iter.frag)->opt.optlen =
++						IPCB(skb)->opt.optlen;
++					ip_options_fragment(iter.frag);
++					ip_send_check(iter.iph);
++				}
+ 			}
+ 
+ 			skb->tstamp = tstamp;
+@@ -905,7 +912,7 @@ slow_path:
+ 			err = PTR_ERR(skb2);
+ 			goto fail;
+ 		}
+-		ip_frag_ipcb(skb, skb2, first_frag, &state);
++		ip_frag_ipcb(skb, skb2, first_frag);
+ 
+ 		/*
+ 		 *	Put this fragment into the sending queue.
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 8ce8b7300b9d3..a5722905456c2 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -220,7 +220,8 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
+ 			continue;
+ 		}
+ 
+-		if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif)
++		if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif &&
++		    sk->sk_bound_dev_if != inet_sdif(skb))
+ 			continue;
+ 
+ 		sock_hold(sk);
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index 7d26e0f8bdaeb..5d95f80314f95 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -721,6 +721,7 @@ static int raw_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 	int ret = -EINVAL;
+ 	int chk_addr_ret;
+ 
++	lock_sock(sk);
+ 	if (sk->sk_state != TCP_CLOSE || addr_len < sizeof(struct sockaddr_in))
+ 		goto out;
+ 
+@@ -740,7 +741,9 @@ static int raw_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 		inet->inet_saddr = 0;  /* Use device */
+ 	sk_dst_reset(sk);
+ 	ret = 0;
+-out:	return ret;
++out:
++	release_sock(sk);
++	return ret;
+ }
+ 
+ /*
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 29526937077b3..4dde49e628fab 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -2577,7 +2577,7 @@ int addrconf_prefix_rcv_add_addr(struct net *net, struct net_device *dev,
+ 				 __u32 valid_lft, u32 prefered_lft)
+ {
+ 	struct inet6_ifaddr *ifp = ipv6_get_ifaddr(net, addr, dev, 1);
+-	int create = 0;
++	int create = 0, update_lft = 0;
+ 
+ 	if (!ifp && valid_lft) {
+ 		int max_addresses = in6_dev->cnf.max_addresses;
+@@ -2621,19 +2621,32 @@ int addrconf_prefix_rcv_add_addr(struct net *net, struct net_device *dev,
+ 		unsigned long now;
+ 		u32 stored_lft;
+ 
+-		/* Update lifetime (RFC4862 5.5.3 e)
+-		 * We deviate from RFC4862 by honoring all Valid Lifetimes to
+-		 * improve the reaction of SLAAC to renumbering events
+-		 * (draft-gont-6man-slaac-renum-06, Section 4.2)
+-		 */
++		/* update lifetime (RFC2462 5.5.3 e) */
+ 		spin_lock_bh(&ifp->lock);
+ 		now = jiffies;
+ 		if (ifp->valid_lft > (now - ifp->tstamp) / HZ)
+ 			stored_lft = ifp->valid_lft - (now - ifp->tstamp) / HZ;
+ 		else
+ 			stored_lft = 0;
+-
+ 		if (!create && stored_lft) {
++			const u32 minimum_lft = min_t(u32,
++				stored_lft, MIN_VALID_LIFETIME);
++			valid_lft = max(valid_lft, minimum_lft);
++
++			/* RFC4862 Section 5.5.3e:
++			 * "Note that the preferred lifetime of the
++			 *  corresponding address is always reset to
++			 *  the Preferred Lifetime in the received
++			 *  Prefix Information option, regardless of
++			 *  whether the valid lifetime is also reset or
++			 *  ignored."
++			 *
++			 * So we should always update prefered_lft here.
++			 */
++			update_lft = 1;
++		}
++
++		if (update_lft) {
+ 			ifp->valid_lft = valid_lft;
+ 			ifp->prefered_lft = prefered_lft;
+ 			ifp->tstamp = now;
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index e43f1fbac28b6..c783b91231321 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -110,7 +110,7 @@ void fib6_update_sernum(struct net *net, struct fib6_info *f6i)
+ 	fn = rcu_dereference_protected(f6i->fib6_node,
+ 			lockdep_is_held(&f6i->fib6_table->tb6_lock));
+ 	if (fn)
+-		fn->fn_sernum = fib6_new_sernum(net);
++		WRITE_ONCE(fn->fn_sernum, fib6_new_sernum(net));
+ }
+ 
+ /*
+@@ -587,12 +587,13 @@ static int fib6_dump_table(struct fib6_table *table, struct sk_buff *skb,
+ 		spin_unlock_bh(&table->tb6_lock);
+ 		if (res > 0) {
+ 			cb->args[4] = 1;
+-			cb->args[5] = w->root->fn_sernum;
++			cb->args[5] = READ_ONCE(w->root->fn_sernum);
+ 		}
+ 	} else {
+-		if (cb->args[5] != w->root->fn_sernum) {
++		int sernum = READ_ONCE(w->root->fn_sernum);
++		if (cb->args[5] != sernum) {
+ 			/* Begin at the root if the tree changed */
+-			cb->args[5] = w->root->fn_sernum;
++			cb->args[5] = sernum;
+ 			w->state = FWS_INIT;
+ 			w->node = w->root;
+ 			w->skip = w->count;
+@@ -1342,7 +1343,7 @@ static void __fib6_update_sernum_upto_root(struct fib6_info *rt,
+ 	/* paired with smp_rmb() in rt6_get_cookie_safe() */
+ 	smp_wmb();
+ 	while (fn) {
+-		fn->fn_sernum = sernum;
++		WRITE_ONCE(fn->fn_sernum, sernum);
+ 		fn = rcu_dereference_protected(fn->parent,
+ 				lockdep_is_held(&rt->fib6_table->tb6_lock));
+ 	}
+@@ -2171,8 +2172,8 @@ static int fib6_clean_node(struct fib6_walker *w)
+ 	};
+ 
+ 	if (c->sernum != FIB6_NO_SERNUM_CHANGE &&
+-	    w->node->fn_sernum != c->sernum)
+-		w->node->fn_sernum = c->sernum;
++	    READ_ONCE(w->node->fn_sernum) != c->sernum)
++		WRITE_ONCE(w->node->fn_sernum, c->sernum);
+ 
+ 	if (!c->func) {
+ 		WARN_ON_ONCE(c->sernum == FIB6_NO_SERNUM_CHANGE);
+@@ -2536,7 +2537,7 @@ static void ipv6_route_seq_setup_walk(struct ipv6_route_iter *iter,
+ 	iter->w.state = FWS_INIT;
+ 	iter->w.node = iter->w.root;
+ 	iter->w.args = iter;
+-	iter->sernum = iter->w.root->fn_sernum;
++	iter->sernum = READ_ONCE(iter->w.root->fn_sernum);
+ 	INIT_LIST_HEAD(&iter->w.lh);
+ 	fib6_walker_link(net, &iter->w);
+ }
+@@ -2564,8 +2565,10 @@ static struct fib6_table *ipv6_route_seq_next_table(struct fib6_table *tbl,
+ 
+ static void ipv6_route_check_sernum(struct ipv6_route_iter *iter)
+ {
+-	if (iter->sernum != iter->w.root->fn_sernum) {
+-		iter->sernum = iter->w.root->fn_sernum;
++	int sernum = READ_ONCE(iter->w.root->fn_sernum);
++
++	if (iter->sernum != sernum) {
++		iter->sernum = sernum;
+ 		iter->w.state = FWS_INIT;
+ 		iter->w.node = iter->w.root;
+ 		WARN_ON(iter->w.skip);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 08441f06afd48..3a2741569b847 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1066,14 +1066,14 @@ int ip6_tnl_xmit_ctl(struct ip6_tnl *t,
+ 
+ 		if (unlikely(!ipv6_chk_addr_and_flags(net, laddr, ldev, false,
+ 						      0, IFA_F_TENTATIVE)))
+-			pr_warn("%s xmit: Local address not yet configured!\n",
+-				p->name);
++			pr_warn_ratelimited("%s xmit: Local address not yet configured!\n",
++					    p->name);
+ 		else if (!(p->flags & IP6_TNL_F_ALLOW_LOCAL_REMOTE) &&
+ 			 !ipv6_addr_is_multicast(raddr) &&
+ 			 unlikely(ipv6_chk_addr_and_flags(net, raddr, ldev,
+ 							  true, 0, IFA_F_TENTATIVE)))
+-			pr_warn("%s xmit: Routing loop! Remote address found on this node!\n",
+-				p->name);
++			pr_warn_ratelimited("%s xmit: Routing loop! Remote address found on this node!\n",
++					    p->name);
+ 		else
+ 			ret = 1;
+ 		rcu_read_unlock();
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 654bf4ca61260..352e645c546eb 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -2674,7 +2674,7 @@ static void ip6_link_failure(struct sk_buff *skb)
+ 			if (from) {
+ 				fn = rcu_dereference(from->fib6_node);
+ 				if (fn && (rt->rt6i_flags & RTF_DEFAULT))
+-					fn->fn_sernum = -1;
++					WRITE_ONCE(fn->fn_sernum, -1);
+ 			}
+ 		}
+ 		rcu_read_unlock();
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index f4cf26b606f92..8369af0c50eab 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -1832,15 +1832,17 @@ repeat:
+ 		pr_debug("nf_conntrack_in: Can't track with proto module\n");
+ 		nf_conntrack_put(&ct->ct_general);
+ 		skb->_nfct = 0;
+-		NF_CT_STAT_INC_ATOMIC(state->net, invalid);
+-		if (ret == -NF_DROP)
+-			NF_CT_STAT_INC_ATOMIC(state->net, drop);
+ 		/* Special case: TCP tracker reports an attempt to reopen a
+ 		 * closed/aborted connection. We have to go back and create a
+ 		 * fresh conntrack.
+ 		 */
+ 		if (ret == -NF_REPEAT)
+ 			goto repeat;
++
++		NF_CT_STAT_INC_ATOMIC(state->net, invalid);
++		if (ret == -NF_DROP)
++			NF_CT_STAT_INC_ATOMIC(state->net, drop);
++
+ 		ret = -ret;
+ 		goto out;
+ 	}
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index 1ebee25de6772..6a8495bd08bb2 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -502,6 +502,9 @@ static int nft_payload_l4csum_offset(const struct nft_pktinfo *pkt,
+ 				     struct sk_buff *skb,
+ 				     unsigned int *l4csum_offset)
+ {
++	if (pkt->xt.fragoff)
++		return -1;
++
+ 	switch (pkt->tprot) {
+ 	case IPPROTO_TCP:
+ 		*l4csum_offset = offsetof(struct tcphdr, check);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index f78097aa403a8..6ef035494f30d 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1735,6 +1735,7 @@ static int fanout_add(struct sock *sk, struct fanout_args *args)
+ 		match->prot_hook.dev = po->prot_hook.dev;
+ 		match->prot_hook.func = packet_rcv_fanout;
+ 		match->prot_hook.af_packet_priv = match;
++		match->prot_hook.af_packet_net = read_pnet(&match->net);
+ 		match->prot_hook.id_match = match_fanout_group;
+ 		match->max_num_members = args->max_num_members;
+ 		list_add(&match->list, &fanout_list);
+@@ -3323,6 +3324,7 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
+ 		po->prot_hook.func = packet_rcv_spkt;
+ 
+ 	po->prot_hook.af_packet_priv = sk;
++	po->prot_hook.af_packet_net = sock_net(sk);
+ 
+ 	if (proto) {
+ 		po->prot_hook.type = proto;
+diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
+index 6be2672a65eab..df864e6922679 100644
+--- a/net/rxrpc/call_event.c
++++ b/net/rxrpc/call_event.c
+@@ -157,7 +157,7 @@ static void rxrpc_congestion_timeout(struct rxrpc_call *call)
+ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
+ {
+ 	struct sk_buff *skb;
+-	unsigned long resend_at, rto_j;
++	unsigned long resend_at;
+ 	rxrpc_seq_t cursor, seq, top;
+ 	ktime_t now, max_age, oldest, ack_ts;
+ 	int ix;
+@@ -165,10 +165,8 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
+ 
+ 	_enter("{%d,%d}", call->tx_hard_ack, call->tx_top);
+ 
+-	rto_j = call->peer->rto_j;
+-
+ 	now = ktime_get_real();
+-	max_age = ktime_sub(now, jiffies_to_usecs(rto_j));
++	max_age = ktime_sub(now, jiffies_to_usecs(call->peer->rto_j));
+ 
+ 	spin_lock_bh(&call->lock);
+ 
+@@ -213,7 +211,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
+ 	}
+ 
+ 	resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest)));
+-	resend_at += jiffies + rto_j;
++	resend_at += jiffies + rxrpc_get_rto_backoff(call->peer, retrans);
+ 	WRITE_ONCE(call->resend_at, resend_at);
+ 
+ 	if (unacked)
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index 10f2bf2e9068a..a45c83f22236e 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -468,7 +468,7 @@ done:
+ 			if (call->peer->rtt_count > 1) {
+ 				unsigned long nowj = jiffies, ack_lost_at;
+ 
+-				ack_lost_at = rxrpc_get_rto_backoff(call->peer, retrans);
++				ack_lost_at = rxrpc_get_rto_backoff(call->peer, false);
+ 				ack_lost_at += nowj;
+ 				WRITE_ONCE(call->ack_lost_at, ack_lost_at);
+ 				rxrpc_reduce_call_timer(call, ack_lost_at, nowj,
+diff --git a/net/sunrpc/rpc_pipe.c b/net/sunrpc/rpc_pipe.c
+index eadc0ede928c3..5f854ffbab925 100644
+--- a/net/sunrpc/rpc_pipe.c
++++ b/net/sunrpc/rpc_pipe.c
+@@ -599,9 +599,9 @@ static int __rpc_rmdir(struct inode *dir, struct dentry *dentry)
+ 
+ 	dget(dentry);
+ 	ret = simple_rmdir(dir, dentry);
++	d_drop(dentry);
+ 	if (!ret)
+ 		fsnotify_rmdir(dir, dentry);
+-	d_delete(dentry);
+ 	dput(dentry);
+ 	return ret;
+ }
+@@ -612,9 +612,9 @@ static int __rpc_unlink(struct inode *dir, struct dentry *dentry)
+ 
+ 	dget(dentry);
+ 	ret = simple_unlink(dir, dentry);
++	d_drop(dentry);
+ 	if (!ret)
+ 		fsnotify_unlink(dir, dentry);
+-	d_delete(dentry);
+ 	dput(dentry);
+ 	return ret;
+ }
+diff --git a/usr/include/Makefile b/usr/include/Makefile
+index f6b3c85d900ed..703a255cddc63 100644
+--- a/usr/include/Makefile
++++ b/usr/include/Makefile
+@@ -34,7 +34,6 @@ no-header-test += linux/hdlc/ioctl.h
+ no-header-test += linux/ivtv.h
+ no-header-test += linux/kexec.h
+ no-header-test += linux/matroxfb.h
+-no-header-test += linux/nfc.h
+ no-header-test += linux/omap3isp.h
+ no-header-test += linux/omapfb.h
+ no-header-test += linux/patchkey.h
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 4a7d377b3a500..d22de43925076 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1691,7 +1691,6 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn
+ {
+ 	return __gfn_to_memslot(kvm_vcpu_memslots(vcpu), gfn);
+ }
+-EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_memslot);
+ 
+ bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
+ {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-02-05 12:13 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-02-05 12:13 UTC (permalink / raw
  To: gentoo-commits

commit:     65afc4ed817c7aadd9d005e195dca366d4fbd671
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb  5 12:13:03 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb  5 12:13:03 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=65afc4ed

Linux patch 5.10.97

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1096_linux-5.10.97.patch | 938 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 942 insertions(+)

diff --git a/0000_README b/0000_README
index cc530626..20548375 100644
--- a/0000_README
+++ b/0000_README
@@ -427,6 +427,10 @@ Patch:  1095_linux-5.10.96.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.96
 
+Patch:  1096_linux-5.10.97.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.97
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1096_linux-5.10.97.patch b/1096_linux-5.10.97.patch
new file mode 100644
index 00000000..a02dced7
--- /dev/null
+++ b/1096_linux-5.10.97.patch
@@ -0,0 +1,938 @@
+diff --git a/Documentation/accounting/psi.rst b/Documentation/accounting/psi.rst
+index f2b3439edcc2c..860fe651d6453 100644
+--- a/Documentation/accounting/psi.rst
++++ b/Documentation/accounting/psi.rst
+@@ -92,7 +92,8 @@ Triggers can be set on more than one psi metric and more than one trigger
+ for the same psi metric can be specified. However for each trigger a separate
+ file descriptor is required to be able to poll it separately from others,
+ therefore for each trigger a separate open() syscall should be made even
+-when opening the same psi interface file.
++when opening the same psi interface file. Write operations to a file descriptor
++with an already existing psi trigger will fail with EBUSY.
+ 
+ Monitors activate only when system enters stall state for the monitored
+ psi metric and deactivates upon exit from the stall state. While system is
+diff --git a/Makefile b/Makefile
+index c43133c8a5b1f..9f328bfcaf97d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 96
++SUBLEVEL = 97
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 13e10b970ac83..0eb41dce55da3 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1285,6 +1285,7 @@ struct kvm_x86_ops {
+ };
+ 
+ struct kvm_x86_nested_ops {
++	void (*leave_nested)(struct kvm_vcpu *vcpu);
+ 	int (*check_events)(struct kvm_vcpu *vcpu);
+ 	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
+ 	int (*get_state)(struct kvm_vcpu *vcpu,
+diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
+index 2577d78757810..886d4648c9dd4 100644
+--- a/arch/x86/kernel/cpu/mce/intel.c
++++ b/arch/x86/kernel/cpu/mce/intel.c
+@@ -486,6 +486,8 @@ static void intel_ppin_init(struct cpuinfo_x86 *c)
+ 	case INTEL_FAM6_BROADWELL_X:
+ 	case INTEL_FAM6_SKYLAKE_X:
+ 	case INTEL_FAM6_ICELAKE_X:
++	case INTEL_FAM6_ICELAKE_D:
++	case INTEL_FAM6_SAPPHIRERAPIDS_X:
+ 	case INTEL_FAM6_XEON_PHI_KNL:
+ 	case INTEL_FAM6_XEON_PHI_KNM:
+ 
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index f0946872f5e6d..23910e6a3f011 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -783,8 +783,10 @@ void svm_free_nested(struct vcpu_svm *svm)
+ /*
+  * Forcibly leave nested mode in order to be able to reset the VCPU later on.
+  */
+-void svm_leave_nested(struct vcpu_svm *svm)
++void svm_leave_nested(struct kvm_vcpu *vcpu)
+ {
++	struct vcpu_svm *svm = to_svm(vcpu);
++
+ 	if (is_guest_mode(&svm->vcpu)) {
+ 		struct vmcb *hsave = svm->nested.hsave;
+ 		struct vmcb *vmcb = svm->vmcb;
+@@ -1185,7 +1187,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
+ 		return -EINVAL;
+ 
+ 	if (!(kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE)) {
+-		svm_leave_nested(svm);
++		svm_leave_nested(vcpu);
+ 		svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET));
+ 		return 0;
+ 	}
+@@ -1238,6 +1240,9 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
+ 	copy_vmcb_control_area(&hsave->control, &svm->vmcb->control);
+ 	hsave->save = *save;
+ 
++	if (is_guest_mode(vcpu))
++		svm_leave_nested(vcpu);
++
+ 	svm->nested.vmcb12_gpa = kvm_state->hdr.svm.vmcb_pa;
+ 	load_nested_vmcb_control(svm, ctl);
+ 	nested_prepare_vmcb_control(svm);
+@@ -1252,6 +1257,7 @@ out_free:
+ }
+ 
+ struct kvm_x86_nested_ops svm_nested_ops = {
++	.leave_nested = svm_leave_nested,
+ 	.check_events = svm_check_nested_events,
+ 	.get_nested_state_pages = svm_get_nested_state_pages,
+ 	.get_state = svm_get_nested_state,
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 2e6332af98aba..fa543c355fbdb 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -279,7 +279,7 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+ 
+ 	if ((old_efer & EFER_SVME) != (efer & EFER_SVME)) {
+ 		if (!(efer & EFER_SVME)) {
+-			svm_leave_nested(svm);
++			svm_leave_nested(vcpu);
+ 			svm_set_gif(svm, true);
+ 
+ 			/*
+diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
+index be74e22b82ea7..2c007241fbf53 100644
+--- a/arch/x86/kvm/svm/svm.h
++++ b/arch/x86/kvm/svm/svm.h
+@@ -393,7 +393,7 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
+ 
+ int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
+ 			 struct vmcb *nested_vmcb);
+-void svm_leave_nested(struct vcpu_svm *svm);
++void svm_leave_nested(struct kvm_vcpu *vcpu);
+ void svm_free_nested(struct vcpu_svm *svm);
+ int svm_allocate_nested(struct vcpu_svm *svm);
+ int nested_svm_vmrun(struct vcpu_svm *svm);
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 36661b15c3d04..0c2389d0fdafe 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -6628,6 +6628,7 @@ __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *))
+ }
+ 
+ struct kvm_x86_nested_ops vmx_nested_ops = {
++	.leave_nested = vmx_leave_nested,
+ 	.check_events = vmx_check_nested_events,
+ 	.hv_timer_pending = nested_vmx_preemption_timer_pending,
+ 	.get_state = vmx_get_nested_state,
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 7871b8e84b368..a5d6d79b023bc 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4391,6 +4391,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
+ 				vcpu->arch.hflags |= HF_SMM_MASK;
+ 			else
+ 				vcpu->arch.hflags &= ~HF_SMM_MASK;
++
++			kvm_x86_ops.nested_ops->leave_nested(vcpu);
+ 			kvm_smm_changed(vcpu);
+ 		}
+ 
+diff --git a/drivers/bus/simple-pm-bus.c b/drivers/bus/simple-pm-bus.c
+index 244b8f3b38b40..c5eb46cbf388b 100644
+--- a/drivers/bus/simple-pm-bus.c
++++ b/drivers/bus/simple-pm-bus.c
+@@ -16,33 +16,7 @@
+ 
+ static int simple_pm_bus_probe(struct platform_device *pdev)
+ {
+-	const struct device *dev = &pdev->dev;
+-	struct device_node *np = dev->of_node;
+-	const struct of_device_id *match;
+-
+-	/*
+-	 * Allow user to use driver_override to bind this driver to a
+-	 * transparent bus device which has a different compatible string
+-	 * that's not listed in simple_pm_bus_of_match. We don't want to do any
+-	 * of the simple-pm-bus tasks for these devices, so return early.
+-	 */
+-	if (pdev->driver_override)
+-		return 0;
+-
+-	match = of_match_device(dev->driver->of_match_table, dev);
+-	/*
+-	 * These are transparent bus devices (not simple-pm-bus matches) that
+-	 * have their child nodes populated automatically.  So, don't need to
+-	 * do anything more. We only match with the device if this driver is
+-	 * the most specific match because we don't want to incorrectly bind to
+-	 * a device that has a more specific driver.
+-	 */
+-	if (match && match->data) {
+-		if (of_property_match_string(np, "compatible", match->compatible) == 0)
+-			return 0;
+-		else
+-			return -ENODEV;
+-	}
++	struct device_node *np = pdev->dev.of_node;
+ 
+ 	dev_dbg(&pdev->dev, "%s\n", __func__);
+ 
+@@ -56,25 +30,14 @@ static int simple_pm_bus_probe(struct platform_device *pdev)
+ 
+ static int simple_pm_bus_remove(struct platform_device *pdev)
+ {
+-	const void *data = of_device_get_match_data(&pdev->dev);
+-
+-	if (pdev->driver_override || data)
+-		return 0;
+-
+ 	dev_dbg(&pdev->dev, "%s\n", __func__);
+ 
+ 	pm_runtime_disable(&pdev->dev);
+ 	return 0;
+ }
+ 
+-#define ONLY_BUS	((void *) 1) /* Match if the device is only a bus. */
+-
+ static const struct of_device_id simple_pm_bus_of_match[] = {
+ 	{ .compatible = "simple-pm-bus", },
+-	{ .compatible = "simple-bus",	.data = ONLY_BUS },
+-	{ .compatible = "simple-mfd",	.data = ONLY_BUS },
+-	{ .compatible = "isa",		.data = ONLY_BUS },
+-	{ .compatible = "arm,amba-bus",	.data = ONLY_BUS },
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 9392de2679a1d..8eac7dc637b0f 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1402,18 +1402,18 @@ static int vc4_hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 	u32 val;
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
+-	if (ret)
+-		return ret;
++	if (enable) {
++		ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
++		if (ret)
++			return ret;
+ 
+-	val = HDMI_READ(HDMI_CEC_CNTRL_5);
+-	val &= ~(VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET |
+-		 VC4_HDMI_CEC_CNT_TO_4700_US_MASK |
+-		 VC4_HDMI_CEC_CNT_TO_4500_US_MASK);
+-	val |= ((4700 / usecs) << VC4_HDMI_CEC_CNT_TO_4700_US_SHIFT) |
+-	       ((4500 / usecs) << VC4_HDMI_CEC_CNT_TO_4500_US_SHIFT);
++		val = HDMI_READ(HDMI_CEC_CNTRL_5);
++		val &= ~(VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET |
++			 VC4_HDMI_CEC_CNT_TO_4700_US_MASK |
++			 VC4_HDMI_CEC_CNT_TO_4500_US_MASK);
++		val |= ((4700 / usecs) << VC4_HDMI_CEC_CNT_TO_4700_US_SHIFT) |
++			((4500 / usecs) << VC4_HDMI_CEC_CNT_TO_4500_US_SHIFT);
+ 
+-	if (enable) {
+ 		HDMI_WRITE(HDMI_CEC_CNTRL_5, val |
+ 			   VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET);
+ 		HDMI_WRITE(HDMI_CEC_CNTRL_5, val);
+@@ -1439,7 +1439,10 @@ static int vc4_hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 		HDMI_WRITE(HDMI_CEC_CPU_MASK_SET, VC4_HDMI_CPU_CEC);
+ 		HDMI_WRITE(HDMI_CEC_CNTRL_5, val |
+ 			   VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET);
++
++		pm_runtime_put(&vc4_hdmi->pdev->dev);
+ 	}
++
+ 	return 0;
+ }
+ 
+@@ -1531,8 +1534,6 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
+ 	if (ret < 0)
+ 		goto err_delete_cec_adap;
+ 
+-	pm_runtime_put(&vc4_hdmi->pdev->dev);
+-
+ 	return 0;
+ 
+ err_delete_cec_adap:
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+index 395eb0b526802..a816b30bca04c 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+@@ -721,7 +721,9 @@ static void xgbe_stop_timers(struct xgbe_prv_data *pdata)
+ 		if (!channel->tx_ring)
+ 			break;
+ 
++		/* Deactivate the Tx timer */
+ 		del_timer_sync(&channel->tx_timer);
++		channel->tx_timer_active = 0;
+ 	}
+ }
+ 
+@@ -2557,6 +2559,14 @@ read_again:
+ 			buf2_len = xgbe_rx_buf2_len(rdata, packet, len);
+ 			len += buf2_len;
+ 
++			if (buf2_len > rdata->rx.buf.dma_len) {
++				/* Hardware inconsistency within the descriptors
++				 * that has resulted in a length underflow.
++				 */
++				error = 1;
++				goto skip_data;
++			}
++
+ 			if (!skb) {
+ 				skb = xgbe_create_skb(pdata, napi, rdata,
+ 						      buf1_len);
+@@ -2586,8 +2596,10 @@ skip_data:
+ 		if (!last || context_next)
+ 			goto read_again;
+ 
+-		if (!skb)
++		if (!skb || error) {
++			dev_kfree_skb(skb);
+ 			goto next_packet;
++		}
+ 
+ 		/* Be sure we don't exceed the configured MTU */
+ 		max_len = netdev->mtu + ETH_HLEN;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
+index 9c076aa20306a..b6f5c1bcdbcd4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
+@@ -183,18 +183,7 @@ void mlx5e_rep_bond_unslave(struct mlx5_eswitch *esw,
+ 
+ static bool mlx5e_rep_is_lag_netdev(struct net_device *netdev)
+ {
+-	struct mlx5e_rep_priv *rpriv;
+-	struct mlx5e_priv *priv;
+-
+-	/* A given netdev is not a representor or not a slave of LAG configuration */
+-	if (!mlx5e_eswitch_rep(netdev) || !netif_is_lag_port(netdev))
+-		return false;
+-
+-	priv = netdev_priv(netdev);
+-	rpriv = priv->ppriv;
+-
+-	/* Egress acl forward to vport is supported only non-uplink representor */
+-	return rpriv->rep->vport != MLX5_VPORT_UPLINK;
++	return netif_is_lag_port(netdev) && mlx5e_eswitch_vf_rep(netdev);
+ }
+ 
+ static void mlx5e_rep_changelowerstate_event(struct net_device *netdev, void *ptr)
+@@ -210,9 +199,6 @@ static void mlx5e_rep_changelowerstate_event(struct net_device *netdev, void *pt
+ 	u16 fwd_vport_num;
+ 	int err;
+ 
+-	if (!mlx5e_rep_is_lag_netdev(netdev))
+-		return;
+-
+ 	info = ptr;
+ 	lag_info = info->lower_state_info;
+ 	/* This is not an event of a representor becoming active slave */
+@@ -266,9 +252,6 @@ static void mlx5e_rep_changeupper_event(struct net_device *netdev, void *ptr)
+ 	struct net_device *lag_dev;
+ 	struct mlx5e_priv *priv;
+ 
+-	if (!mlx5e_rep_is_lag_netdev(netdev))
+-		return;
+-
+ 	priv = netdev_priv(netdev);
+ 	rpriv = priv->ppriv;
+ 	lag_dev = info->upper_dev;
+@@ -293,6 +276,19 @@ static int mlx5e_rep_esw_bond_netevent(struct notifier_block *nb,
+ 				       unsigned long event, void *ptr)
+ {
+ 	struct net_device *netdev = netdev_notifier_info_to_dev(ptr);
++	struct mlx5e_rep_priv *rpriv;
++	struct mlx5e_rep_bond *bond;
++	struct mlx5e_priv *priv;
++
++	if (!mlx5e_rep_is_lag_netdev(netdev))
++		return NOTIFY_DONE;
++
++	bond = container_of(nb, struct mlx5e_rep_bond, nb);
++	priv = netdev_priv(netdev);
++	rpriv = mlx5_eswitch_get_uplink_priv(priv->mdev->priv.eswitch, REP_ETH);
++	/* Verify VF representor is on the same device of the bond handling the netevent. */
++	if (rpriv->uplink_priv.bond != bond)
++		return NOTIFY_DONE;
+ 
+ 	switch (event) {
+ 	case NETDEV_CHANGELOWERSTATE:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index ee710ce007950..9b472e793ee36 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -131,7 +131,7 @@ static void mlx5_stop_sync_reset_poll(struct mlx5_core_dev *dev)
+ {
+ 	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+ 
+-	del_timer(&fw_reset->timer);
++	del_timer_sync(&fw_reset->timer);
+ }
+ 
+ static void mlx5_sync_reset_clear_reset_requested(struct mlx5_core_dev *dev, bool poll_health)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+index 947f346bdc2d6..77c6287c90d55 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+@@ -292,7 +292,7 @@ static int
+ create_chain_restore(struct fs_chain *chain)
+ {
+ 	struct mlx5_eswitch *esw = chain->chains->dev->priv.eswitch;
+-	char modact[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)];
++	u8 modact[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {};
+ 	struct mlx5_fs_chains *chains = chain->chains;
+ 	enum mlx5e_tc_attr_to_reg chain_to_reg;
+ 	struct mlx5_modify_hdr *mod_hdr;
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index a37aae00e128f..621648ce750b7 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -901,27 +901,35 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, u32 count)
+ 	struct gsi *gsi;
+ 	u32 backlog;
+ 
+-	if (!endpoint->replenish_enabled) {
++	if (!test_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags)) {
+ 		if (count)
+ 			atomic_add(count, &endpoint->replenish_saved);
+ 		return;
+ 	}
+ 
++	/* If already active, just update the backlog */
++	if (test_and_set_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags)) {
++		if (count)
++			atomic_add(count, &endpoint->replenish_backlog);
++		return;
++	}
+ 
+ 	while (atomic_dec_not_zero(&endpoint->replenish_backlog))
+ 		if (ipa_endpoint_replenish_one(endpoint))
+ 			goto try_again_later;
++
++	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
++
+ 	if (count)
+ 		atomic_add(count, &endpoint->replenish_backlog);
+ 
+ 	return;
+ 
+ try_again_later:
+-	/* The last one didn't succeed, so fix the backlog */
+-	backlog = atomic_inc_return(&endpoint->replenish_backlog);
++	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
+ 
+-	if (count)
+-		atomic_add(count, &endpoint->replenish_backlog);
++	/* The last one didn't succeed, so fix the backlog */
++	backlog = atomic_add_return(count + 1, &endpoint->replenish_backlog);
+ 
+ 	/* Whenever a receive buffer transaction completes we'll try to
+ 	 * replenish again.  It's unlikely, but if we fail to supply even
+@@ -941,7 +949,7 @@ static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
+ 	u32 max_backlog;
+ 	u32 saved;
+ 
+-	endpoint->replenish_enabled = true;
++	set_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
+ 	while ((saved = atomic_xchg(&endpoint->replenish_saved, 0)))
+ 		atomic_add(saved, &endpoint->replenish_backlog);
+ 
+@@ -955,7 +963,7 @@ static void ipa_endpoint_replenish_disable(struct ipa_endpoint *endpoint)
+ {
+ 	u32 backlog;
+ 
+-	endpoint->replenish_enabled = false;
++	clear_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
+ 	while ((backlog = atomic_xchg(&endpoint->replenish_backlog, 0)))
+ 		atomic_add(backlog, &endpoint->replenish_saved);
+ }
+@@ -1472,7 +1480,8 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
+ 		/* RX transactions require a single TRE, so the maximum
+ 		 * backlog is the same as the maximum outstanding TREs.
+ 		 */
+-		endpoint->replenish_enabled = false;
++		clear_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
++		clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
+ 		atomic_set(&endpoint->replenish_saved,
+ 			   gsi_channel_tre_max(gsi, endpoint->channel_id));
+ 		atomic_set(&endpoint->replenish_backlog, 0);
+diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
+index 58a245de488e8..823c4a1296587 100644
+--- a/drivers/net/ipa/ipa_endpoint.h
++++ b/drivers/net/ipa/ipa_endpoint.h
+@@ -39,6 +39,19 @@ enum ipa_endpoint_name {
+ 
+ #define IPA_ENDPOINT_MAX		32	/* Max supported by driver */
+ 
++/**
++ * enum ipa_replenish_flag:	RX buffer replenish flags
++ *
++ * @IPA_REPLENISH_ENABLED:	Whether receive buffer replenishing is enabled
++ * @IPA_REPLENISH_ACTIVE:	Whether replenishing is underway
++ * @IPA_REPLENISH_COUNT:	Number of defined replenish flags
++ */
++enum ipa_replenish_flag {
++	IPA_REPLENISH_ENABLED,
++	IPA_REPLENISH_ACTIVE,
++	IPA_REPLENISH_COUNT,	/* Number of flags (must be last) */
++};
++
+ /**
+  * struct ipa_endpoint - IPA endpoint information
+  * @channel_id:	EP's GSI channel
+@@ -60,7 +73,7 @@ struct ipa_endpoint {
+ 	struct net_device *netdev;
+ 
+ 	/* Receive buffer replenishing for RX endpoints */
+-	bool replenish_enabled;
++	DECLARE_BITMAP(replenish_flags, IPA_REPLENISH_COUNT);
+ 	u32 replenish_ready;
+ 	atomic_t replenish_saved;
+ 	atomic_t replenish_backlog;
+diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c
+index 207e59e74935a..06d9f19ca142a 100644
+--- a/drivers/net/usb/ipheth.c
++++ b/drivers/net/usb/ipheth.c
+@@ -121,7 +121,7 @@ static int ipheth_alloc_urbs(struct ipheth_device *iphone)
+ 	if (tx_buf == NULL)
+ 		goto free_rx_urb;
+ 
+-	rx_buf = usb_alloc_coherent(iphone->udev, IPHETH_BUF_SIZE,
++	rx_buf = usb_alloc_coherent(iphone->udev, IPHETH_BUF_SIZE + IPHETH_IP_ALIGN,
+ 				    GFP_KERNEL, &rx_urb->transfer_dma);
+ 	if (rx_buf == NULL)
+ 		goto free_tx_buf;
+@@ -146,7 +146,7 @@ error_nomem:
+ 
+ static void ipheth_free_urbs(struct ipheth_device *iphone)
+ {
+-	usb_free_coherent(iphone->udev, IPHETH_BUF_SIZE, iphone->rx_buf,
++	usb_free_coherent(iphone->udev, IPHETH_BUF_SIZE + IPHETH_IP_ALIGN, iphone->rx_buf,
+ 			  iphone->rx_urb->transfer_dma);
+ 	usb_free_coherent(iphone->udev, IPHETH_BUF_SIZE, iphone->tx_buf,
+ 			  iphone->tx_urb->transfer_dma);
+@@ -317,7 +317,7 @@ static int ipheth_rx_submit(struct ipheth_device *dev, gfp_t mem_flags)
+ 
+ 	usb_fill_bulk_urb(dev->rx_urb, udev,
+ 			  usb_rcvbulkpipe(udev, dev->bulk_in),
+-			  dev->rx_buf, IPHETH_BUF_SIZE,
++			  dev->rx_buf, IPHETH_BUF_SIZE + IPHETH_IP_ALIGN,
+ 			  ipheth_rcvbulk_callback,
+ 			  dev);
+ 	dev->rx_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 90da17c6da664..30708af975adc 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -642,6 +642,8 @@ read_status:
+ 	 */
+ 	if (ctrl->power_fault_detected)
+ 		status &= ~PCI_EXP_SLTSTA_PFD;
++	else if (status & PCI_EXP_SLTSTA_PFD)
++		ctrl->power_fault_detected = true;
+ 
+ 	events |= status;
+ 	if (!events) {
+@@ -651,7 +653,7 @@ read_status:
+ 	}
+ 
+ 	if (status) {
+-		pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, events);
++		pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, status);
+ 
+ 		/*
+ 		 * In MSI mode, all event bits must be zero before the port
+@@ -725,8 +727,7 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
+ 	}
+ 
+ 	/* Check Power Fault Detected */
+-	if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) {
+-		ctrl->power_fault_detected = 1;
++	if (events & PCI_EXP_SLTSTA_PFD) {
+ 		ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl));
+ 		pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
+ 				      PCI_EXP_SLTCTL_ATTN_IND_ON);
+diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
+index 086b6bacbad17..18e014fa06480 100644
+--- a/fs/notify/fanotify/fanotify_user.c
++++ b/fs/notify/fanotify/fanotify_user.c
+@@ -366,9 +366,6 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ 	if (fanotify_is_perm_event(event->mask))
+ 		FANOTIFY_PERM(event)->fd = fd;
+ 
+-	if (f)
+-		fd_install(fd, f);
+-
+ 	/* Event info records order is: dir fid + name, child fid */
+ 	if (fanotify_event_dir_fh_len(event)) {
+ 		info_type = info->name_len ? FAN_EVENT_INFO_TYPE_DFID_NAME :
+@@ -432,6 +429,9 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ 		count -= ret;
+ 	}
+ 
++	if (f)
++		fd_install(fd, f);
++
+ 	return metadata.event_len;
+ 
+ out_close_fd:
+diff --git a/include/linux/psi.h b/include/linux/psi.h
+index 7361023f3fdd5..db4ecfaab8792 100644
+--- a/include/linux/psi.h
++++ b/include/linux/psi.h
+@@ -33,7 +33,7 @@ void cgroup_move_task(struct task_struct *p, struct css_set *to);
+ 
+ struct psi_trigger *psi_trigger_create(struct psi_group *group,
+ 			char *buf, size_t nbytes, enum psi_res res);
+-void psi_trigger_replace(void **trigger_ptr, struct psi_trigger *t);
++void psi_trigger_destroy(struct psi_trigger *t);
+ 
+ __poll_t psi_trigger_poll(void **trigger_ptr, struct file *file,
+ 			poll_table *wait);
+diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
+index b95f3211566a2..17d74f62c1818 100644
+--- a/include/linux/psi_types.h
++++ b/include/linux/psi_types.h
+@@ -128,9 +128,6 @@ struct psi_trigger {
+ 	 * events to one per window
+ 	 */
+ 	u64 last_event_time;
+-
+-	/* Refcounting to prevent premature destruction */
+-	struct kref refcount;
+ };
+ 
+ struct psi_group {
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index 7f71b54c06c5f..69fba563c810e 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -545,6 +545,14 @@ static ssize_t cgroup_release_agent_write(struct kernfs_open_file *of,
+ 
+ 	BUILD_BUG_ON(sizeof(cgrp->root->release_agent_path) < PATH_MAX);
+ 
++	/*
++	 * Release agent gets called with all capabilities,
++	 * require capabilities to set release agent.
++	 */
++	if ((of->file->f_cred->user_ns != &init_user_ns) ||
++	    !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
+ 	cgrp = cgroup_kn_lock_live(of->kn, false);
+ 	if (!cgrp)
+ 		return -ENODEV;
+@@ -958,6 +966,12 @@ int cgroup1_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 		/* Specifying two release agents is forbidden */
+ 		if (ctx->release_agent)
+ 			return invalfc(fc, "release_agent respecified");
++		/*
++		 * Release agent gets called with all capabilities,
++		 * require capabilities to set release agent.
++		 */
++		if ((fc->user_ns != &init_user_ns) || !capable(CAP_SYS_ADMIN))
++			return invalfc(fc, "Setting release_agent not allowed");
+ 		ctx->release_agent = param->string;
+ 		param->string = NULL;
+ 		break;
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index a86857edaa571..4927289a91a97 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -3601,6 +3601,12 @@ static ssize_t cgroup_pressure_write(struct kernfs_open_file *of, char *buf,
+ 	cgroup_get(cgrp);
+ 	cgroup_kn_unlock(of->kn);
+ 
++	/* Allow only one trigger per file descriptor */
++	if (of->priv) {
++		cgroup_put(cgrp);
++		return -EBUSY;
++	}
++
+ 	psi = cgroup_ino(cgrp) == 1 ? &psi_system : &cgrp->psi;
+ 	new = psi_trigger_create(psi, buf, nbytes, res);
+ 	if (IS_ERR(new)) {
+@@ -3608,8 +3614,7 @@ static ssize_t cgroup_pressure_write(struct kernfs_open_file *of, char *buf,
+ 		return PTR_ERR(new);
+ 	}
+ 
+-	psi_trigger_replace(&of->priv, new);
+-
++	smp_store_release(&of->priv, new);
+ 	cgroup_put(cgrp);
+ 
+ 	return nbytes;
+@@ -3644,7 +3649,7 @@ static __poll_t cgroup_pressure_poll(struct kernfs_open_file *of,
+ 
+ static void cgroup_pressure_release(struct kernfs_open_file *of)
+ {
+-	psi_trigger_replace(&of->priv, NULL);
++	psi_trigger_destroy(of->priv);
+ }
+ #endif /* CONFIG_PSI */
+ 
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 1999fcec45c71..7c7758a9e2c24 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1566,8 +1566,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ 	 * Make sure that subparts_cpus is a subset of cpus_allowed.
+ 	 */
+ 	if (cs->nr_subparts_cpus) {
+-		cpumask_andnot(cs->subparts_cpus, cs->subparts_cpus,
+-			       cs->cpus_allowed);
++		cpumask_and(cs->subparts_cpus, cs->subparts_cpus, cs->cpus_allowed);
+ 		cs->nr_subparts_cpus = cpumask_weight(cs->subparts_cpus);
+ 	}
+ 	spin_unlock_irq(&callback_lock);
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index d50a31ecedeec..b7f38f3ad42a2 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -1116,7 +1116,6 @@ struct psi_trigger *psi_trigger_create(struct psi_group *group,
+ 	t->event = 0;
+ 	t->last_event_time = 0;
+ 	init_waitqueue_head(&t->event_wait);
+-	kref_init(&t->refcount);
+ 
+ 	mutex_lock(&group->trigger_lock);
+ 
+@@ -1145,15 +1144,19 @@ struct psi_trigger *psi_trigger_create(struct psi_group *group,
+ 	return t;
+ }
+ 
+-static void psi_trigger_destroy(struct kref *ref)
++void psi_trigger_destroy(struct psi_trigger *t)
+ {
+-	struct psi_trigger *t = container_of(ref, struct psi_trigger, refcount);
+-	struct psi_group *group = t->group;
++	struct psi_group *group;
+ 	struct task_struct *task_to_destroy = NULL;
+ 
+-	if (static_branch_likely(&psi_disabled))
++	/*
++	 * We do not check psi_disabled since it might have been disabled after
++	 * the trigger got created.
++	 */
++	if (!t)
+ 		return;
+ 
++	group = t->group;
+ 	/*
+ 	 * Wakeup waiters to stop polling. Can happen if cgroup is deleted
+ 	 * from under a polling process.
+@@ -1189,9 +1192,9 @@ static void psi_trigger_destroy(struct kref *ref)
+ 	mutex_unlock(&group->trigger_lock);
+ 
+ 	/*
+-	 * Wait for both *trigger_ptr from psi_trigger_replace and
+-	 * poll_task RCUs to complete their read-side critical sections
+-	 * before destroying the trigger and optionally the poll_task
++	 * Wait for psi_schedule_poll_work RCU to complete its read-side
++	 * critical section before destroying the trigger and optionally the
++	 * poll_task.
+ 	 */
+ 	synchronize_rcu();
+ 	/*
+@@ -1208,18 +1211,6 @@ static void psi_trigger_destroy(struct kref *ref)
+ 	kfree(t);
+ }
+ 
+-void psi_trigger_replace(void **trigger_ptr, struct psi_trigger *new)
+-{
+-	struct psi_trigger *old = *trigger_ptr;
+-
+-	if (static_branch_likely(&psi_disabled))
+-		return;
+-
+-	rcu_assign_pointer(*trigger_ptr, new);
+-	if (old)
+-		kref_put(&old->refcount, psi_trigger_destroy);
+-}
+-
+ __poll_t psi_trigger_poll(void **trigger_ptr,
+ 				struct file *file, poll_table *wait)
+ {
+@@ -1229,24 +1220,15 @@ __poll_t psi_trigger_poll(void **trigger_ptr,
+ 	if (static_branch_likely(&psi_disabled))
+ 		return DEFAULT_POLLMASK | EPOLLERR | EPOLLPRI;
+ 
+-	rcu_read_lock();
+-
+-	t = rcu_dereference(*(void __rcu __force **)trigger_ptr);
+-	if (!t) {
+-		rcu_read_unlock();
++	t = smp_load_acquire(trigger_ptr);
++	if (!t)
+ 		return DEFAULT_POLLMASK | EPOLLERR | EPOLLPRI;
+-	}
+-	kref_get(&t->refcount);
+-
+-	rcu_read_unlock();
+ 
+ 	poll_wait(file, &t->event_wait, wait);
+ 
+ 	if (cmpxchg(&t->event, 1, 0) == 1)
+ 		ret |= EPOLLPRI;
+ 
+-	kref_put(&t->refcount, psi_trigger_destroy);
+-
+ 	return ret;
+ }
+ 
+@@ -1270,14 +1252,24 @@ static ssize_t psi_write(struct file *file, const char __user *user_buf,
+ 
+ 	buf[buf_size - 1] = '\0';
+ 
+-	new = psi_trigger_create(&psi_system, buf, nbytes, res);
+-	if (IS_ERR(new))
+-		return PTR_ERR(new);
+-
+ 	seq = file->private_data;
++
+ 	/* Take seq->lock to protect seq->private from concurrent writes */
+ 	mutex_lock(&seq->lock);
+-	psi_trigger_replace(&seq->private, new);
++
++	/* Allow only one trigger per file descriptor */
++	if (seq->private) {
++		mutex_unlock(&seq->lock);
++		return -EBUSY;
++	}
++
++	new = psi_trigger_create(&psi_system, buf, nbytes, res);
++	if (IS_ERR(new)) {
++		mutex_unlock(&seq->lock);
++		return PTR_ERR(new);
++	}
++
++	smp_store_release(&seq->private, new);
+ 	mutex_unlock(&seq->lock);
+ 
+ 	return nbytes;
+@@ -1312,7 +1304,7 @@ static int psi_fop_release(struct inode *inode, struct file *file)
+ {
+ 	struct seq_file *seq = file->private_data;
+ 
+-	psi_trigger_replace(&seq->private, NULL);
++	psi_trigger_destroy(seq->private);
+ 	return single_release(inode, file);
+ }
+ 
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 27ffa83ffeb3c..373564bf57acb 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3238,8 +3238,8 @@ static int __rtnl_newlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	struct nlattr *slave_attr[RTNL_SLAVE_MAX_TYPE + 1];
+ 	unsigned char name_assign_type = NET_NAME_USER;
+ 	struct nlattr *linkinfo[IFLA_INFO_MAX + 1];
+-	const struct rtnl_link_ops *m_ops = NULL;
+-	struct net_device *master_dev = NULL;
++	const struct rtnl_link_ops *m_ops;
++	struct net_device *master_dev;
+ 	struct net *net = sock_net(skb->sk);
+ 	const struct rtnl_link_ops *ops;
+ 	struct nlattr *tb[IFLA_MAX + 1];
+@@ -3277,6 +3277,8 @@ replay:
+ 	else
+ 		dev = NULL;
+ 
++	master_dev = NULL;
++	m_ops = NULL;
+ 	if (dev) {
+ 		master_dev = netdev_master_upper_dev_get(dev);
+ 		if (master_dev)
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 991e3434957b8..12dd08af12b5e 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -1620,6 +1620,8 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
+ 	    (mss != tcp_skb_seglen(skb)))
+ 		goto out;
+ 
++	if (!tcp_skb_can_collapse(prev, skb))
++		goto out;
+ 	len = skb->len;
+ 	pcount = tcp_skb_pcount(skb);
+ 	if (tcp_skb_shift(prev, skb, pcount, len))
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 6ef035494f30d..a31334b92be7e 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1750,7 +1750,10 @@ static int fanout_add(struct sock *sk, struct fanout_args *args)
+ 		err = -ENOSPC;
+ 		if (refcount_read(&match->sk_ref) < match->max_num_members) {
+ 			__dev_remove_pack(&po->prot_hook);
+-			po->fanout = match;
++
++			/* Paired with packet_setsockopt(PACKET_FANOUT_DATA) */
++			WRITE_ONCE(po->fanout, match);
++
+ 			po->rollover = rollover;
+ 			rollover = NULL;
+ 			refcount_set(&match->sk_ref, refcount_read(&match->sk_ref) + 1);
+@@ -3906,7 +3909,8 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ 	}
+ 	case PACKET_FANOUT_DATA:
+ 	{
+-		if (!po->fanout)
++		/* Paired with the WRITE_ONCE() in fanout_add() */
++		if (!READ_ONCE(po->fanout))
+ 			return -EINVAL;
+ 
+ 		return fanout_set_data(po, optval, optlen);
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index cb1331b357451..7993a692c7fda 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1954,9 +1954,9 @@ static int tc_new_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ 	bool prio_allocate;
+ 	u32 parent;
+ 	u32 chain_index;
+-	struct Qdisc *q = NULL;
++	struct Qdisc *q;
+ 	struct tcf_chain_info chain_info;
+-	struct tcf_chain *chain = NULL;
++	struct tcf_chain *chain;
+ 	struct tcf_block *block;
+ 	struct tcf_proto *tp;
+ 	unsigned long cl;
+@@ -1984,6 +1984,8 @@ replay:
+ 	tp = NULL;
+ 	cl = 0;
+ 	block = NULL;
++	q = NULL;
++	chain = NULL;
+ 
+ 	if (prio == 0) {
+ 		/* If no priority is provided by the user,
+@@ -2804,8 +2806,8 @@ static int tc_ctl_chain(struct sk_buff *skb, struct nlmsghdr *n,
+ 	struct tcmsg *t;
+ 	u32 parent;
+ 	u32 chain_index;
+-	struct Qdisc *q = NULL;
+-	struct tcf_chain *chain = NULL;
++	struct Qdisc *q;
++	struct tcf_chain *chain;
+ 	struct tcf_block *block;
+ 	unsigned long cl;
+ 	int err;
+@@ -2815,6 +2817,7 @@ static int tc_ctl_chain(struct sk_buff *skb, struct nlmsghdr *n,
+ 		return -EPERM;
+ 
+ replay:
++	q = NULL;
+ 	err = nlmsg_parse_deprecated(n, sizeof(*t), tca, TCA_MAX,
+ 				     rtm_tca_policy, extack);
+ 	if (err < 0)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-02-05 19:04 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-02-05 19:04 UTC (permalink / raw
  To: gentoo-commits

commit:     f0f4c5984ba72fa8546158dbcf334e91aecd2b80
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb  5 19:04:33 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb  5 19:04:33 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f0f4c598

Linux patch 5.10.98

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |  4 ++++
 1097_linux-5.10.98.patch | 57 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)

diff --git a/0000_README b/0000_README
index 20548375..f1c5090c 100644
--- a/0000_README
+++ b/0000_README
@@ -431,6 +431,10 @@ Patch:  1096_linux-5.10.97.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.97
 
+Patch:  1097_linux-5.10.98.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.98
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1097_linux-5.10.98.patch b/1097_linux-5.10.98.patch
new file mode 100644
index 00000000..f527499c
--- /dev/null
+++ b/1097_linux-5.10.98.patch
@@ -0,0 +1,57 @@
+diff --git a/Makefile b/Makefile
+index 9f328bfcaf97d..10827bec74d8f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 97
++SUBLEVEL = 98
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 8eac7dc637b0f..5d5c4e9a86218 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1399,21 +1399,15 @@ static int vc4_hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 	struct vc4_hdmi *vc4_hdmi = cec_get_drvdata(adap);
+ 	/* clock period in microseconds */
+ 	const u32 usecs = 1000000 / CEC_CLOCK_FREQ;
+-	u32 val;
+-	int ret;
+-
+-	if (enable) {
+-		ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
+-		if (ret)
+-			return ret;
++	u32 val = HDMI_READ(HDMI_CEC_CNTRL_5);
+ 
+-		val = HDMI_READ(HDMI_CEC_CNTRL_5);
+-		val &= ~(VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET |
+-			 VC4_HDMI_CEC_CNT_TO_4700_US_MASK |
+-			 VC4_HDMI_CEC_CNT_TO_4500_US_MASK);
+-		val |= ((4700 / usecs) << VC4_HDMI_CEC_CNT_TO_4700_US_SHIFT) |
+-			((4500 / usecs) << VC4_HDMI_CEC_CNT_TO_4500_US_SHIFT);
++	val &= ~(VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET |
++		 VC4_HDMI_CEC_CNT_TO_4700_US_MASK |
++		 VC4_HDMI_CEC_CNT_TO_4500_US_MASK);
++	val |= ((4700 / usecs) << VC4_HDMI_CEC_CNT_TO_4700_US_SHIFT) |
++	       ((4500 / usecs) << VC4_HDMI_CEC_CNT_TO_4500_US_SHIFT);
+ 
++	if (enable) {
+ 		HDMI_WRITE(HDMI_CEC_CNTRL_5, val |
+ 			   VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET);
+ 		HDMI_WRITE(HDMI_CEC_CNTRL_5, val);
+@@ -1439,10 +1433,7 @@ static int vc4_hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 		HDMI_WRITE(HDMI_CEC_CPU_MASK_SET, VC4_HDMI_CPU_CEC);
+ 		HDMI_WRITE(HDMI_CEC_CNTRL_5, val |
+ 			   VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET);
+-
+-		pm_runtime_put(&vc4_hdmi->pdev->dev);
+ 	}
+-
+ 	return 0;
+ }
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-02-08 17:54 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-02-08 17:54 UTC (permalink / raw
  To: gentoo-commits

commit:     277602a0cea72da681393c0720e62637f700b541
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Feb  8 17:54:31 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Feb  8 17:54:31 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=277602a0

Linux patch 5.10.99

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1098_linux-5.10.99.patch | 2812 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2816 insertions(+)

diff --git a/0000_README b/0000_README
index f1c5090c..c04d5d96 100644
--- a/0000_README
+++ b/0000_README
@@ -435,6 +435,10 @@ Patch:  1097_linux-5.10.98.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.98
 
+Patch:  1098_linux-5.10.99.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.99
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1098_linux-5.10.99.patch b/1098_linux-5.10.99.patch
new file mode 100644
index 00000000..9c87e134
--- /dev/null
+++ b/1098_linux-5.10.99.patch
@@ -0,0 +1,2812 @@
+diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
+index 7272a4bd74dd0..28841609aa4f8 100644
+--- a/Documentation/gpu/todo.rst
++++ b/Documentation/gpu/todo.rst
+@@ -273,24 +273,6 @@ Contact: Daniel Vetter, Noralf Tronnes
+ 
+ Level: Advanced
+ 
+-Garbage collect fbdev scrolling acceleration
+---------------------------------------------
+-
+-Scroll acceleration is disabled in fbcon by hard-wiring p->scrollmode =
+-SCROLL_REDRAW. There's a ton of code this will allow us to remove:
+-- lots of code in fbcon.c
+-- a bunch of the hooks in fbcon_ops, maybe the remaining hooks could be called
+-  directly instead of the function table (with a switch on p->rotate)
+-- fb_copyarea is unused after this, and can be deleted from all drivers
+-
+-Note that not all acceleration code can be deleted, since clearing and cursor
+-support is still accelerated, which might be good candidates for further
+-deletion projects.
+-
+-Contact: Daniel Vetter
+-
+-Level: Intermediate
+-
+ idr_init_base()
+ ---------------
+ 
+diff --git a/Makefile b/Makefile
+index 10827bec74d8f..593638785d293 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 98
++SUBLEVEL = 99
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 6525693e7aeaa..5ba13b00e3a71 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4353,6 +4353,19 @@ static __initconst const struct x86_pmu intel_pmu = {
+ 	.lbr_read		= intel_pmu_lbr_read_64,
+ 	.lbr_save		= intel_pmu_lbr_save,
+ 	.lbr_restore		= intel_pmu_lbr_restore,
++
++	/*
++	 * SMM has access to all 4 rings and while traditionally SMM code only
++	 * ran in CPL0, 2021-era firmware is starting to make use of CPL3 in SMM.
++	 *
++	 * Since the EVENTSEL.{USR,OS} CPL filtering makes no distinction
++	 * between SMM or not, this results in what should be pure userspace
++	 * counters including SMM data.
++	 *
++	 * This is a clear privilege issue, therefore globally disable
++	 * counting SMM by default.
++	 */
++	.attr_freeze_on_smi	= 1,
+ };
+ 
+ static __init void intel_clovertown_quirk(void)
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 37129b76135a1..c084899e95825 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -897,8 +897,9 @@ static void pt_handle_status(struct pt *pt)
+ 		 * means we are already losing data; need to let the decoder
+ 		 * know.
+ 		 */
+-		if (!intel_pt_validate_hw_cap(PT_CAP_topa_multiple_entries) ||
+-		    buf->output_off == pt_buffer_region_size(buf)) {
++		if (!buf->single &&
++		    (!intel_pt_validate_hw_cap(PT_CAP_topa_multiple_entries) ||
++		     buf->output_off == pt_buffer_region_size(buf))) {
+ 			perf_aux_output_flag(&pt->handle,
+ 			                     PERF_AUX_FLAG_TRUNCATED);
+ 			advance++;
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index 9ffd7e2895547..4f6f140a44e06 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -384,7 +384,7 @@ void bio_integrity_advance(struct bio *bio, unsigned int bytes_done)
+ 	struct blk_integrity *bi = blk_get_integrity(bio->bi_disk);
+ 	unsigned bytes = bio_integrity_bytes(bi, bytes_done >> 9);
+ 
+-	bip->bip_iter.bi_sector += bytes_done >> 9;
++	bip->bip_iter.bi_sector += bio_integrity_intervals(bi, bytes_done >> 9);
+ 	bvec_iter_advance(bip->bip_vec, &bip->bip_iter, bytes);
+ }
+ 
+diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
+index afd22c9dbdcfa..798f86fcd50fa 100644
+--- a/drivers/dma-buf/dma-heap.c
++++ b/drivers/dma-buf/dma-heap.c
+@@ -14,6 +14,7 @@
+ #include <linux/xarray.h>
+ #include <linux/list.h>
+ #include <linux/slab.h>
++#include <linux/nospec.h>
+ #include <linux/uaccess.h>
+ #include <linux/syscalls.h>
+ #include <linux/dma-heap.h>
+@@ -123,6 +124,7 @@ static long dma_heap_ioctl(struct file *file, unsigned int ucmd,
+ 	if (nr >= ARRAY_SIZE(dma_heap_ioctl_cmds))
+ 		return -EINVAL;
+ 
++	nr = array_index_nospec(nr, ARRAY_SIZE(dma_heap_ioctl_cmds));
+ 	/* Get the kernel ioctl cmd that matches */
+ 	kcmd = dma_heap_ioctl_cmds[nr];
+ 
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index e91cf1147a4e0..be38fd71f731a 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -349,7 +349,7 @@ static int altr_sdram_probe(struct platform_device *pdev)
+ 	if (irq < 0) {
+ 		edac_printk(KERN_ERR, EDAC_MC,
+ 			    "No irq %d in DT\n", irq);
+-		return -ENODEV;
++		return irq;
+ 	}
+ 
+ 	/* Arria10 has a 2nd IRQ */
+diff --git a/drivers/edac/xgene_edac.c b/drivers/edac/xgene_edac.c
+index 1d2c27a00a4a8..cd1eefeff1923 100644
+--- a/drivers/edac/xgene_edac.c
++++ b/drivers/edac/xgene_edac.c
+@@ -1919,7 +1919,7 @@ static int xgene_edac_probe(struct platform_device *pdev)
+ 			irq = platform_get_irq(pdev, i);
+ 			if (irq < 0) {
+ 				dev_err(&pdev->dev, "No IRQ resource\n");
+-				rc = -EINVAL;
++				rc = irq;
+ 				goto out_err;
+ 			}
+ 			rc = devm_request_irq(&pdev->dev, irq,
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index a7f8caf1086b9..0e359a299f9ec 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -3587,6 +3587,26 @@ static bool retrieve_link_cap(struct dc_link *link)
+ 		dp_hw_fw_revision.ieee_fw_rev,
+ 		sizeof(dp_hw_fw_revision.ieee_fw_rev));
+ 
++	/* Quirk for Apple MBP 2018 15" Retina panels: wrong DP_MAX_LINK_RATE */
++	{
++		uint8_t str_mbp_2018[] = { 101, 68, 21, 103, 98, 97 };
++		uint8_t fwrev_mbp_2018[] = { 7, 4 };
++		uint8_t fwrev_mbp_2018_vega[] = { 8, 4 };
++
++		/* We also check for the firmware revision as 16,1 models have an
++		 * identical device id and are incorrectly quirked otherwise.
++		 */
++		if ((link->dpcd_caps.sink_dev_id == 0x0010fa) &&
++		    !memcmp(link->dpcd_caps.sink_dev_id_str, str_mbp_2018,
++			     sizeof(str_mbp_2018)) &&
++		    (!memcmp(link->dpcd_caps.sink_fw_revision, fwrev_mbp_2018,
++			     sizeof(fwrev_mbp_2018)) ||
++		    !memcmp(link->dpcd_caps.sink_fw_revision, fwrev_mbp_2018_vega,
++			     sizeof(fwrev_mbp_2018_vega)))) {
++			link->reported_link_cap.link_rate = LINK_RATE_RBR2;
++		}
++	}
++
+ 	memset(&link->dpcd_caps.dsc_caps, '\0',
+ 			sizeof(link->dpcd_caps.dsc_caps));
+ 	memset(&link->dpcd_caps.fec_cap, '\0', sizeof(link->dpcd_caps.fec_cap));
+diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
+index 0e60aec0bb191..b561e9e00153e 100644
+--- a/drivers/gpu/drm/i915/display/intel_overlay.c
++++ b/drivers/gpu/drm/i915/display/intel_overlay.c
+@@ -932,6 +932,9 @@ static int check_overlay_dst(struct intel_overlay *overlay,
+ 	const struct intel_crtc_state *pipe_config =
+ 		overlay->crtc->config;
+ 
++	if (rec->dst_height == 0 || rec->dst_width == 0)
++		return -EINVAL;
++
+ 	if (rec->dst_x < pipe_config->pipe_src_w &&
+ 	    rec->dst_x + rec->dst_width <= pipe_config->pipe_src_w &&
+ 	    rec->dst_y < pipe_config->pipe_src_h &&
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
+index f3c30b2a788e8..8bff14ae16b0e 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
+@@ -38,7 +38,7 @@ nvbios_addr(struct nvkm_bios *bios, u32 *addr, u8 size)
+ 		*addr += bios->imaged_addr;
+ 	}
+ 
+-	if (unlikely(*addr + size >= bios->size)) {
++	if (unlikely(*addr + size > bios->size)) {
+ 		nvkm_error(&bios->subdev, "OOB %d %08x %08x\n", size, p, *addr);
+ 		return false;
+ 	}
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 4d4ba09f6cf93..ce492134c1e5c 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -68,8 +68,8 @@ static const char * const cma_events[] = {
+ 	[RDMA_CM_EVENT_TIMEWAIT_EXIT]	 = "timewait exit",
+ };
+ 
+-static void cma_set_mgid(struct rdma_id_private *id_priv, struct sockaddr *addr,
+-			 union ib_gid *mgid);
++static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid,
++			      enum ib_gid_type gid_type);
+ 
+ const char *__attribute_const__ rdma_event_msg(enum rdma_cm_event_type event)
+ {
+@@ -1840,17 +1840,19 @@ static void destroy_mc(struct rdma_id_private *id_priv,
+ 		if (dev_addr->bound_dev_if)
+ 			ndev = dev_get_by_index(dev_addr->net,
+ 						dev_addr->bound_dev_if);
+-		if (ndev) {
++		if (ndev && !send_only) {
++			enum ib_gid_type gid_type;
+ 			union ib_gid mgid;
+ 
+-			cma_set_mgid(id_priv, (struct sockaddr *)&mc->addr,
+-				     &mgid);
+-
+-			if (!send_only)
+-				cma_igmp_send(ndev, &mgid, false);
+-
+-			dev_put(ndev);
++			gid_type = id_priv->cma_dev->default_gid_type
++					   [id_priv->id.port_num -
++					    rdma_start_port(
++						    id_priv->cma_dev->device)];
++			cma_iboe_set_mgid((struct sockaddr *)&mc->addr, &mgid,
++					  gid_type);
++			cma_igmp_send(ndev, &mgid, false);
+ 		}
++		dev_put(ndev);
+ 
+ 		cancel_work_sync(&mc->iboe_join.work);
+ 	}
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index 2cc785c1970b4..d12018c4c86e9 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -95,6 +95,7 @@ struct ucma_context {
+ 	u64			uid;
+ 
+ 	struct list_head	list;
++	struct list_head	mc_list;
+ 	struct work_struct	close_work;
+ };
+ 
+@@ -105,6 +106,7 @@ struct ucma_multicast {
+ 
+ 	u64			uid;
+ 	u8			join_state;
++	struct list_head	list;
+ 	struct sockaddr_storage	addr;
+ };
+ 
+@@ -198,6 +200,7 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file)
+ 
+ 	INIT_WORK(&ctx->close_work, ucma_close_id);
+ 	init_completion(&ctx->comp);
++	INIT_LIST_HEAD(&ctx->mc_list);
+ 	/* So list_del() will work if we don't do ucma_finish_ctx() */
+ 	INIT_LIST_HEAD(&ctx->list);
+ 	ctx->file = file;
+@@ -484,19 +487,19 @@ err1:
+ 
+ static void ucma_cleanup_multicast(struct ucma_context *ctx)
+ {
+-	struct ucma_multicast *mc;
+-	unsigned long index;
++	struct ucma_multicast *mc, *tmp;
+ 
+-	xa_for_each(&multicast_table, index, mc) {
+-		if (mc->ctx != ctx)
+-			continue;
++	xa_lock(&multicast_table);
++	list_for_each_entry_safe(mc, tmp, &ctx->mc_list, list) {
++		list_del(&mc->list);
+ 		/*
+ 		 * At this point mc->ctx->ref is 0 so the mc cannot leave the
+ 		 * lock on the reader and this is enough serialization
+ 		 */
+-		xa_erase(&multicast_table, index);
++		__xa_erase(&multicast_table, mc->id);
+ 		kfree(mc);
+ 	}
++	xa_unlock(&multicast_table);
+ }
+ 
+ static void ucma_cleanup_mc_events(struct ucma_multicast *mc)
+@@ -1469,12 +1472,16 @@ static ssize_t ucma_process_join(struct ucma_file *file,
+ 	mc->uid = cmd->uid;
+ 	memcpy(&mc->addr, addr, cmd->addr_size);
+ 
+-	if (xa_alloc(&multicast_table, &mc->id, NULL, xa_limit_32b,
++	xa_lock(&multicast_table);
++	if (__xa_alloc(&multicast_table, &mc->id, NULL, xa_limit_32b,
+ 		     GFP_KERNEL)) {
+ 		ret = -ENOMEM;
+ 		goto err_free_mc;
+ 	}
+ 
++	list_add_tail(&mc->list, &ctx->mc_list);
++	xa_unlock(&multicast_table);
++
+ 	mutex_lock(&ctx->mutex);
+ 	ret = rdma_join_multicast(ctx->cm_id, (struct sockaddr *)&mc->addr,
+ 				  join_state, mc);
+@@ -1500,8 +1507,11 @@ err_leave_multicast:
+ 	mutex_unlock(&ctx->mutex);
+ 	ucma_cleanup_mc_events(mc);
+ err_xa_erase:
+-	xa_erase(&multicast_table, mc->id);
++	xa_lock(&multicast_table);
++	list_del(&mc->list);
++	__xa_erase(&multicast_table, mc->id);
+ err_free_mc:
++	xa_unlock(&multicast_table);
+ 	kfree(mc);
+ err_put_ctx:
+ 	ucma_put_ctx(ctx);
+@@ -1569,15 +1579,17 @@ static ssize_t ucma_leave_multicast(struct ucma_file *file,
+ 		mc = ERR_PTR(-EINVAL);
+ 	else if (!refcount_inc_not_zero(&mc->ctx->ref))
+ 		mc = ERR_PTR(-ENXIO);
+-	else
+-		__xa_erase(&multicast_table, mc->id);
+-	xa_unlock(&multicast_table);
+ 
+ 	if (IS_ERR(mc)) {
++		xa_unlock(&multicast_table);
+ 		ret = PTR_ERR(mc);
+ 		goto out;
+ 	}
+ 
++	list_del(&mc->list);
++	__xa_erase(&multicast_table, mc->id);
++	xa_unlock(&multicast_table);
++
+ 	mutex_lock(&mc->ctx->mutex);
+ 	rdma_leave_multicast(mc->ctx->cm_id, (struct sockaddr *) &mc->addr);
+ 	mutex_unlock(&mc->ctx->mutex);
+diff --git a/drivers/infiniband/hw/hfi1/ipoib_main.c b/drivers/infiniband/hw/hfi1/ipoib_main.c
+index 9f71b9d706bd9..22299b0b7df0e 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib_main.c
++++ b/drivers/infiniband/hw/hfi1/ipoib_main.c
+@@ -185,12 +185,6 @@ static void hfi1_ipoib_netdev_dtor(struct net_device *dev)
+ 	free_percpu(priv->netstats);
+ }
+ 
+-static void hfi1_ipoib_free_rdma_netdev(struct net_device *dev)
+-{
+-	hfi1_ipoib_netdev_dtor(dev);
+-	free_netdev(dev);
+-}
+-
+ static void hfi1_ipoib_set_id(struct net_device *dev, int id)
+ {
+ 	struct hfi1_ipoib_dev_priv *priv = hfi1_ipoib_priv(dev);
+@@ -227,24 +221,23 @@ static int hfi1_ipoib_setup_rn(struct ib_device *device,
+ 	priv->port_num = port_num;
+ 	priv->netdev_ops = netdev->netdev_ops;
+ 
+-	netdev->netdev_ops = &hfi1_ipoib_netdev_ops;
+-
+ 	ib_query_pkey(device, port_num, priv->pkey_index, &priv->pkey);
+ 
+ 	rc = hfi1_ipoib_txreq_init(priv);
+ 	if (rc) {
+ 		dd_dev_err(dd, "IPoIB netdev TX init - failed(%d)\n", rc);
+-		hfi1_ipoib_free_rdma_netdev(netdev);
+ 		return rc;
+ 	}
+ 
+ 	rc = hfi1_ipoib_rxq_init(netdev);
+ 	if (rc) {
+ 		dd_dev_err(dd, "IPoIB netdev RX init - failed(%d)\n", rc);
+-		hfi1_ipoib_free_rdma_netdev(netdev);
++		hfi1_ipoib_txreq_deinit(priv);
+ 		return rc;
+ 	}
+ 
++	netdev->netdev_ops = &hfi1_ipoib_netdev_ops;
++
+ 	netdev->priv_destructor = hfi1_ipoib_netdev_dtor;
+ 	netdev->needs_free_netdev = true;
+ 
+diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
+index 7b11aff8a5ea7..05c7200751e50 100644
+--- a/drivers/infiniband/hw/mlx4/main.c
++++ b/drivers/infiniband/hw/mlx4/main.c
+@@ -3273,7 +3273,7 @@ static void mlx4_ib_event(struct mlx4_dev *dev, void *ibdev_ptr,
+ 	case MLX4_DEV_EVENT_PORT_MGMT_CHANGE:
+ 		ew = kmalloc(sizeof *ew, GFP_ATOMIC);
+ 		if (!ew)
+-			break;
++			return;
+ 
+ 		INIT_WORK(&ew->work, handle_port_mgmt_change_event);
+ 		memcpy(&ew->ib_eqe, eqe, sizeof *eqe);
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index ee48befc89786..09f0dbf941c06 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -3124,6 +3124,8 @@ do_write:
+ 	case IB_WR_ATOMIC_FETCH_AND_ADD:
+ 		if (unlikely(!(qp->qp_access_flags & IB_ACCESS_REMOTE_ATOMIC)))
+ 			goto inv_err;
++		if (unlikely(wqe->atomic_wr.remote_addr & (sizeof(u64) - 1)))
++			goto inv_err;
+ 		if (unlikely(!rvt_rkey_ok(qp, &qp->r_sge.sge, sizeof(u64),
+ 					  wqe->atomic_wr.remote_addr,
+ 					  wqe->atomic_wr.rkey,
+diff --git a/drivers/infiniband/sw/siw/siw.h b/drivers/infiniband/sw/siw/siw.h
+index 368959ae9a8cc..df03d84c6868a 100644
+--- a/drivers/infiniband/sw/siw/siw.h
++++ b/drivers/infiniband/sw/siw/siw.h
+@@ -644,14 +644,9 @@ static inline struct siw_sqe *orq_get_current(struct siw_qp *qp)
+ 	return &qp->orq[qp->orq_get % qp->attrs.orq_size];
+ }
+ 
+-static inline struct siw_sqe *orq_get_tail(struct siw_qp *qp)
+-{
+-	return &qp->orq[qp->orq_put % qp->attrs.orq_size];
+-}
+-
+ static inline struct siw_sqe *orq_get_free(struct siw_qp *qp)
+ {
+-	struct siw_sqe *orq_e = orq_get_tail(qp);
++	struct siw_sqe *orq_e = &qp->orq[qp->orq_put % qp->attrs.orq_size];
+ 
+ 	if (READ_ONCE(orq_e->flags) == 0)
+ 		return orq_e;
+diff --git a/drivers/infiniband/sw/siw/siw_qp_rx.c b/drivers/infiniband/sw/siw/siw_qp_rx.c
+index 60116f20653c7..875ea6f1b04a2 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_rx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_rx.c
+@@ -1153,11 +1153,12 @@ static int siw_check_tx_fence(struct siw_qp *qp)
+ 
+ 	spin_lock_irqsave(&qp->orq_lock, flags);
+ 
+-	rreq = orq_get_current(qp);
+-
+ 	/* free current orq entry */
++	rreq = orq_get_current(qp);
+ 	WRITE_ONCE(rreq->flags, 0);
+ 
++	qp->orq_get++;
++
+ 	if (qp->tx_ctx.orq_fence) {
+ 		if (unlikely(tx_waiting->wr_status != SIW_WR_QUEUED)) {
+ 			pr_warn("siw: [QP %u]: fence resume: bad status %d\n",
+@@ -1165,10 +1166,12 @@ static int siw_check_tx_fence(struct siw_qp *qp)
+ 			rv = -EPROTO;
+ 			goto out;
+ 		}
+-		/* resume SQ processing */
++		/* resume SQ processing, if possible */
+ 		if (tx_waiting->sqe.opcode == SIW_OP_READ ||
+ 		    tx_waiting->sqe.opcode == SIW_OP_READ_LOCAL_INV) {
+-			rreq = orq_get_tail(qp);
++
++			/* SQ processing was stopped because of a full ORQ */
++			rreq = orq_get_free(qp);
+ 			if (unlikely(!rreq)) {
+ 				pr_warn("siw: [QP %u]: no ORQE\n", qp_id(qp));
+ 				rv = -EPROTO;
+@@ -1181,15 +1184,14 @@ static int siw_check_tx_fence(struct siw_qp *qp)
+ 			resume_tx = 1;
+ 
+ 		} else if (siw_orq_empty(qp)) {
++			/*
++			 * SQ processing was stopped by fenced work request.
++			 * Resume since all previous Read's are now completed.
++			 */
+ 			qp->tx_ctx.orq_fence = 0;
+ 			resume_tx = 1;
+-		} else {
+-			pr_warn("siw: [QP %u]: fence resume: orq idx: %d:%d\n",
+-				qp_id(qp), qp->orq_get, qp->orq_put);
+-			rv = -EPROTO;
+ 		}
+ 	}
+-	qp->orq_get++;
+ out:
+ 	spin_unlock_irqrestore(&qp->orq_lock, flags);
+ 
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 3f31a52f7044f..502e6532dd549 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -20,6 +20,7 @@
+ #include <linux/export.h>
+ #include <linux/kmemleak.h>
+ #include <linux/mem_encrypt.h>
++#include <linux/iopoll.h>
+ #include <asm/pci-direct.h>
+ #include <asm/iommu.h>
+ #include <asm/apic.h>
+@@ -833,6 +834,7 @@ static int iommu_ga_log_enable(struct amd_iommu *iommu)
+ 		status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
+ 		if (status & (MMIO_STATUS_GALOG_RUN_MASK))
+ 			break;
++		udelay(10);
+ 	}
+ 
+ 	if (WARN_ON(i >= LOOP_TIMEOUT))
+diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
+index aedaae4630bc8..b853888774e65 100644
+--- a/drivers/iommu/intel/irq_remapping.c
++++ b/drivers/iommu/intel/irq_remapping.c
+@@ -576,9 +576,8 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
+ 					    fn, &intel_ir_domain_ops,
+ 					    iommu);
+ 	if (!iommu->ir_domain) {
+-		irq_domain_free_fwnode(fn);
+ 		pr_err("IR%d: failed to allocate irqdomain\n", iommu->seq_id);
+-		goto out_free_bitmap;
++		goto out_free_fwnode;
+ 	}
+ 	iommu->ir_msi_domain =
+ 		arch_create_remap_msi_irq_domain(iommu->ir_domain,
+@@ -602,7 +601,7 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
+ 
+ 		if (dmar_enable_qi(iommu)) {
+ 			pr_err("Failed to enable queued invalidation\n");
+-			goto out_free_bitmap;
++			goto out_free_ir_domain;
+ 		}
+ 	}
+ 
+@@ -626,6 +625,14 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
+ 
+ 	return 0;
+ 
++out_free_ir_domain:
++	if (iommu->ir_msi_domain)
++		irq_domain_remove(iommu->ir_msi_domain);
++	iommu->ir_msi_domain = NULL;
++	irq_domain_remove(iommu->ir_domain);
++	iommu->ir_domain = NULL;
++out_free_fwnode:
++	irq_domain_free_fwnode(fn);
+ out_free_bitmap:
+ 	bitmap_free(bitmap);
+ out_free_pages:
+diff --git a/drivers/net/dsa/Kconfig b/drivers/net/dsa/Kconfig
+index 2451f61a38e4a..9e32ea9c11647 100644
+--- a/drivers/net/dsa/Kconfig
++++ b/drivers/net/dsa/Kconfig
+@@ -36,6 +36,7 @@ config NET_DSA_MT7530
+ 	tristate "MediaTek MT753x and MT7621 Ethernet switch support"
+ 	depends on NET_DSA
+ 	select NET_DSA_TAG_MTK
++	select MEDIATEK_GE_PHY
+ 	help
+ 	  This enables support for the MediaTek MT7530, MT7531, and MT7621
+ 	  Ethernet switch chips.
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
+index 6009d76e41fc4..67f2b9a61463a 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.c
++++ b/drivers/net/ethernet/google/gve/gve_adminq.c
+@@ -141,7 +141,7 @@ static int gve_adminq_parse_err(struct gve_priv *priv, u32 status)
+  */
+ static int gve_adminq_kick_and_wait(struct gve_priv *priv)
+ {
+-	u32 tail, head;
++	int tail, head;
+ 	int i;
+ 
+ 	tail = ioread32be(&priv->reg_bar0->adminq_event_counter);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h b/drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h
+index e5dbd0bc257e7..82889c363c777 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h
+@@ -130,6 +130,7 @@
+ 
+ #define NUM_DWMAC100_DMA_REGS	9
+ #define NUM_DWMAC1000_DMA_REGS	23
++#define NUM_DWMAC4_DMA_REGS	27
+ 
+ void dwmac_enable_dma_transmission(void __iomem *ioaddr);
+ void dwmac_enable_dma_irq(void __iomem *ioaddr, u32 chan, bool rx, bool tx);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+index 9e54f953634b7..0c0f01f490057 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+@@ -21,10 +21,18 @@
+ #include "dwxgmac2.h"
+ 
+ #define REG_SPACE_SIZE	0x1060
++#define GMAC4_REG_SPACE_SIZE	0x116C
+ #define MAC100_ETHTOOL_NAME	"st_mac100"
+ #define GMAC_ETHTOOL_NAME	"st_gmac"
+ #define XGMAC_ETHTOOL_NAME	"st_xgmac"
+ 
++/* Same as DMA_CHAN_BASE_ADDR defined in dwmac4_dma.h
++ *
++ * It is here because dwmac_dma.h and dwmac4_dam.h can not be included at the
++ * same time due to the conflicting macro names.
++ */
++#define GMAC4_DMA_CHAN_BASE_ADDR  0x00001100
++
+ #define ETHTOOL_DMA_OFFSET	55
+ 
+ struct stmmac_stats {
+@@ -413,6 +421,8 @@ static int stmmac_ethtool_get_regs_len(struct net_device *dev)
+ 
+ 	if (priv->plat->has_xgmac)
+ 		return XGMAC_REGSIZE * 4;
++	else if (priv->plat->has_gmac4)
++		return GMAC4_REG_SPACE_SIZE;
+ 	return REG_SPACE_SIZE;
+ }
+ 
+@@ -425,8 +435,13 @@ static void stmmac_ethtool_gregs(struct net_device *dev,
+ 	stmmac_dump_mac_regs(priv, priv->hw, reg_space);
+ 	stmmac_dump_dma_regs(priv, priv->ioaddr, reg_space);
+ 
+-	if (!priv->plat->has_xgmac) {
+-		/* Copy DMA registers to where ethtool expects them */
++	/* Copy DMA registers to where ethtool expects them */
++	if (priv->plat->has_gmac4) {
++		/* GMAC4 dumps its DMA registers at its DMA_CHAN_BASE_ADDR */
++		memcpy(&reg_space[ETHTOOL_DMA_OFFSET],
++		       &reg_space[GMAC4_DMA_CHAN_BASE_ADDR / 4],
++		       NUM_DWMAC4_DMA_REGS * 4);
++	} else if (!priv->plat->has_xgmac) {
+ 		memcpy(&reg_space[ETHTOOL_DMA_OFFSET],
+ 		       &reg_space[DMA_BUS_MODE / 4],
+ 		       NUM_DWMAC1000_DMA_REGS * 4);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+index d291612eeafb9..07b1b8374cd26 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+@@ -142,15 +142,20 @@ static int adjust_systime(void __iomem *ioaddr, u32 sec, u32 nsec,
+ 
+ static void get_systime(void __iomem *ioaddr, u64 *systime)
+ {
+-	u64 ns;
+-
+-	/* Get the TSSS value */
+-	ns = readl(ioaddr + PTP_STNSR);
+-	/* Get the TSS and convert sec time value to nanosecond */
+-	ns += readl(ioaddr + PTP_STSR) * 1000000000ULL;
++	u64 ns, sec0, sec1;
++
++	/* Get the TSS value */
++	sec1 = readl_relaxed(ioaddr + PTP_STSR);
++	do {
++		sec0 = sec1;
++		/* Get the TSSS value */
++		ns = readl_relaxed(ioaddr + PTP_STNSR);
++		/* Get the TSS value */
++		sec1 = readl_relaxed(ioaddr + PTP_STSR);
++	} while (sec0 != sec1);
+ 
+ 	if (systime)
+-		*systime = ns;
++		*systime = ns + (sec1 * 1000000000ULL);
+ }
+ 
+ const struct stmmac_hwtimestamp stmmac_ptp = {
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index 4eb64709d44cb..fea8b681f567c 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -1771,6 +1771,7 @@ static int ca8210_async_xmit_complete(
+ 			status
+ 		);
+ 		if (status != MAC_TRANSACTION_OVERFLOW) {
++			dev_kfree_skb_any(priv->tx_skb);
+ 			ieee802154_wake_queue(priv->hw);
+ 			return 0;
+ 		}
+diff --git a/drivers/net/ieee802154/mac802154_hwsim.c b/drivers/net/ieee802154/mac802154_hwsim.c
+index 080b15fc00601..97981cf7661ad 100644
+--- a/drivers/net/ieee802154/mac802154_hwsim.c
++++ b/drivers/net/ieee802154/mac802154_hwsim.c
+@@ -786,6 +786,7 @@ static int hwsim_add_one(struct genl_info *info, struct device *dev,
+ 		goto err_pib;
+ 	}
+ 
++	pib->channel = 13;
+ 	rcu_assign_pointer(phy->pib, pib);
+ 	phy->idx = idx;
+ 	INIT_LIST_HEAD(&phy->edges);
+diff --git a/drivers/net/ieee802154/mcr20a.c b/drivers/net/ieee802154/mcr20a.c
+index 8dc04e2590b18..383231b854642 100644
+--- a/drivers/net/ieee802154/mcr20a.c
++++ b/drivers/net/ieee802154/mcr20a.c
+@@ -976,8 +976,8 @@ static void mcr20a_hw_setup(struct mcr20a_local *lp)
+ 	dev_dbg(printdev(lp), "%s\n", __func__);
+ 
+ 	phy->symbol_duration = 16;
+-	phy->lifs_period = 40;
+-	phy->sifs_period = 12;
++	phy->lifs_period = 40 * phy->symbol_duration;
++	phy->sifs_period = 12 * phy->symbol_duration;
+ 
+ 	hw->flags = IEEE802154_HW_TX_OMIT_CKSUM |
+ 			IEEE802154_HW_AFILT |
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index c601d3df27220..789a124809e3c 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -3869,6 +3869,18 @@ static void macsec_common_dellink(struct net_device *dev, struct list_head *head
+ 	struct macsec_dev *macsec = macsec_priv(dev);
+ 	struct net_device *real_dev = macsec->real_dev;
+ 
++	/* If h/w offloading is available, propagate to the device */
++	if (macsec_is_offloaded(macsec)) {
++		const struct macsec_ops *ops;
++		struct macsec_context ctx;
++
++		ops = macsec_get_ops(netdev_priv(dev), &ctx);
++		if (ops) {
++			ctx.secy = &macsec->secy;
++			macsec_offload(ops->mdo_del_secy, &ctx);
++		}
++	}
++
+ 	unregister_netdevice_queue(dev, head);
+ 	list_del_rcu(&macsec->secys);
+ 	macsec_del_dev(macsec);
+@@ -3883,18 +3895,6 @@ static void macsec_dellink(struct net_device *dev, struct list_head *head)
+ 	struct net_device *real_dev = macsec->real_dev;
+ 	struct macsec_rxh_data *rxd = macsec_data_rtnl(real_dev);
+ 
+-	/* If h/w offloading is available, propagate to the device */
+-	if (macsec_is_offloaded(macsec)) {
+-		const struct macsec_ops *ops;
+-		struct macsec_context ctx;
+-
+-		ops = macsec_get_ops(netdev_priv(dev), &ctx);
+-		if (ops) {
+-			ctx.secy = &macsec->secy;
+-			macsec_offload(ops->mdo_del_secy, &ctx);
+-		}
+-	}
+-
+ 	macsec_common_dellink(dev, head);
+ 
+ 	if (list_empty(&rxd->secys)) {
+@@ -4017,6 +4017,15 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
+ 	    !macsec_check_offload(macsec->offload, macsec))
+ 		return -EOPNOTSUPP;
+ 
++	/* send_sci must be set to true when transmit sci explicitly is set */
++	if ((data && data[IFLA_MACSEC_SCI]) &&
++	    (data && data[IFLA_MACSEC_INC_SCI])) {
++		u8 send_sci = !!nla_get_u8(data[IFLA_MACSEC_INC_SCI]);
++
++		if (!send_sci)
++			return -EINVAL;
++	}
++
+ 	if (data && data[IFLA_MACSEC_ICV_LEN])
+ 		icv_len = nla_get_u8(data[IFLA_MACSEC_ICV_LEN]);
+ 	mtu = real_dev->mtu - icv_len - macsec_extra_len(true);
+diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
+index a9c1e3b4585ec..78467cb3f343e 100644
+--- a/drivers/nvme/host/fabrics.h
++++ b/drivers/nvme/host/fabrics.h
+@@ -153,6 +153,7 @@ nvmf_ctlr_matches_baseopts(struct nvme_ctrl *ctrl,
+ 			struct nvmf_ctrl_options *opts)
+ {
+ 	if (ctrl->state == NVME_CTRL_DELETING ||
++	    ctrl->state == NVME_CTRL_DELETING_NOIO ||
+ 	    ctrl->state == NVME_CTRL_DEAD ||
+ 	    strcmp(opts->subsysnqn, ctrl->opts->subsysnqn) ||
+ 	    strcmp(opts->host->nqn, ctrl->opts->host->nqn) ||
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index 40ce18a0d0190..6768b2f03d685 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -1264,16 +1264,18 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
+ 				     sizeof(*girq->parents),
+ 				     GFP_KERNEL);
+ 	if (!girq->parents) {
+-		pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
+-		return -ENOMEM;
++		err = -ENOMEM;
++		goto out_remove;
+ 	}
+ 
+ 	if (is_7211) {
+ 		pc->wake_irq = devm_kcalloc(dev, BCM2835_NUM_IRQS,
+ 					    sizeof(*pc->wake_irq),
+ 					    GFP_KERNEL);
+-		if (!pc->wake_irq)
+-			return -ENOMEM;
++		if (!pc->wake_irq) {
++			err = -ENOMEM;
++			goto out_remove;
++		}
+ 	}
+ 
+ 	/*
+@@ -1297,8 +1299,10 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
+ 
+ 		len = strlen(dev_name(pc->dev)) + 16;
+ 		name = devm_kzalloc(pc->dev, len, GFP_KERNEL);
+-		if (!name)
+-			return -ENOMEM;
++		if (!name) {
++			err = -ENOMEM;
++			goto out_remove;
++		}
+ 
+ 		snprintf(name, len, "%s:bank%d", dev_name(pc->dev), i);
+ 
+@@ -1317,11 +1321,14 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
+ 	err = gpiochip_add_data(&pc->gpio_chip, pc);
+ 	if (err) {
+ 		dev_err(dev, "could not add GPIO chip\n");
+-		pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
+-		return err;
++		goto out_remove;
+ 	}
+ 
+ 	return 0;
++
++out_remove:
++	pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
++	return err;
+ }
+ 
+ static struct platform_driver bcm2835_pinctrl_driver = {
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index b6ef1911c1dd1..348c670a7b07d 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -441,8 +441,8 @@ static void intel_gpio_set_gpio_mode(void __iomem *padcfg0)
+ 	value &= ~PADCFG0_PMODE_MASK;
+ 	value |= PADCFG0_PMODE_GPIO;
+ 
+-	/* Disable input and output buffers */
+-	value |= PADCFG0_GPIORXDIS;
++	/* Disable TX buffer and enable RX (this will be input) */
++	value &= ~PADCFG0_GPIORXDIS;
+ 	value |= PADCFG0_GPIOTXDIS;
+ 
+ 	/* Disable SCI/SMI/NMI generation */
+@@ -487,9 +487,6 @@ static int intel_gpio_request_enable(struct pinctrl_dev *pctldev,
+ 
+ 	intel_gpio_set_gpio_mode(padcfg0);
+ 
+-	/* Disable TX buffer and enable RX (this will be input) */
+-	__intel_gpio_set_direction(padcfg0, true);
+-
+ 	raw_spin_unlock_irqrestore(&pctrl->lock, flags);
+ 
+ 	return 0;
+@@ -1105,9 +1102,6 @@ static int intel_gpio_irq_type(struct irq_data *d, unsigned int type)
+ 
+ 	intel_gpio_set_gpio_mode(reg);
+ 
+-	/* Disable TX buffer and enable RX (this will be input) */
+-	__intel_gpio_set_direction(reg, true);
+-
+ 	value = readl(reg);
+ 
+ 	value &= ~(PADCFG0_RXEVCFG_MASK | PADCFG0_RXINV);
+@@ -1207,6 +1201,39 @@ static irqreturn_t intel_gpio_irq(int irq, void *data)
+ 	return IRQ_RETVAL(ret);
+ }
+ 
++static void intel_gpio_irq_init(struct intel_pinctrl *pctrl)
++{
++	int i;
++
++	for (i = 0; i < pctrl->ncommunities; i++) {
++		const struct intel_community *community;
++		void __iomem *base;
++		unsigned int gpp;
++
++		community = &pctrl->communities[i];
++		base = community->regs;
++
++		for (gpp = 0; gpp < community->ngpps; gpp++) {
++			/* Mask and clear all interrupts */
++			writel(0, base + community->ie_offset + gpp * 4);
++			writel(0xffff, base + community->is_offset + gpp * 4);
++		}
++	}
++}
++
++static int intel_gpio_irq_init_hw(struct gpio_chip *gc)
++{
++	struct intel_pinctrl *pctrl = gpiochip_get_data(gc);
++
++	/*
++	 * Make sure the interrupt lines are in a proper state before
++	 * further configuration.
++	 */
++	intel_gpio_irq_init(pctrl);
++
++	return 0;
++}
++
+ static int intel_gpio_add_community_ranges(struct intel_pinctrl *pctrl,
+ 				const struct intel_community *community)
+ {
+@@ -1311,6 +1338,7 @@ static int intel_gpio_probe(struct intel_pinctrl *pctrl, int irq)
+ 	girq->num_parents = 0;
+ 	girq->default_type = IRQ_TYPE_NONE;
+ 	girq->handler = handle_bad_irq;
++	girq->init_hw = intel_gpio_irq_init_hw;
+ 
+ 	ret = devm_gpiochip_add_data(pctrl->dev, &pctrl->chip, pctrl);
+ 	if (ret) {
+@@ -1640,26 +1668,6 @@ int intel_pinctrl_suspend_noirq(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(intel_pinctrl_suspend_noirq);
+ 
+-static void intel_gpio_irq_init(struct intel_pinctrl *pctrl)
+-{
+-	size_t i;
+-
+-	for (i = 0; i < pctrl->ncommunities; i++) {
+-		const struct intel_community *community;
+-		void __iomem *base;
+-		unsigned int gpp;
+-
+-		community = &pctrl->communities[i];
+-		base = community->regs;
+-
+-		for (gpp = 0; gpp < community->ngpps; gpp++) {
+-			/* Mask and clear all interrupts */
+-			writel(0, base + community->ie_offset + gpp * 4);
+-			writel(0xffff, base + community->is_offset + gpp * 4);
+-		}
+-	}
+-}
+-
+ static bool intel_gpio_update_reg(void __iomem *reg, u32 mask, u32 value)
+ {
+ 	u32 curr, updated;
+diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
+index 2ecd8752b088b..5add637c9ad23 100644
+--- a/drivers/rtc/rtc-mc146818-lib.c
++++ b/drivers/rtc/rtc-mc146818-lib.c
+@@ -83,7 +83,7 @@ unsigned int mc146818_get_time(struct rtc_time *time)
+ 	time->tm_year += real_year - 72;
+ #endif
+ 
+-	if (century > 20)
++	if (century > 19)
+ 		time->tm_year += (century - 19) * 100;
+ 
+ 	/*
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+index 052e7879704a5..8f47bf83694f6 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+@@ -506,7 +506,8 @@ static int bnx2fc_l2_rcv_thread(void *arg)
+ 
+ static void bnx2fc_recv_frame(struct sk_buff *skb)
+ {
+-	u32 fr_len;
++	u64 crc_err;
++	u32 fr_len, fr_crc;
+ 	struct fc_lport *lport;
+ 	struct fcoe_rcv_info *fr;
+ 	struct fc_stats *stats;
+@@ -540,6 +541,11 @@ static void bnx2fc_recv_frame(struct sk_buff *skb)
+ 	skb_pull(skb, sizeof(struct fcoe_hdr));
+ 	fr_len = skb->len - sizeof(struct fcoe_crc_eof);
+ 
++	stats = per_cpu_ptr(lport->stats, get_cpu());
++	stats->RxFrames++;
++	stats->RxWords += fr_len / FCOE_WORD_TO_BYTE;
++	put_cpu();
++
+ 	fp = (struct fc_frame *)skb;
+ 	fc_frame_init(fp);
+ 	fr_dev(fp) = lport;
+@@ -622,16 +628,15 @@ static void bnx2fc_recv_frame(struct sk_buff *skb)
+ 		return;
+ 	}
+ 
+-	stats = per_cpu_ptr(lport->stats, smp_processor_id());
+-	stats->RxFrames++;
+-	stats->RxWords += fr_len / FCOE_WORD_TO_BYTE;
++	fr_crc = le32_to_cpu(fr_crc(fp));
+ 
+-	if (le32_to_cpu(fr_crc(fp)) !=
+-			~crc32(~0, skb->data, fr_len)) {
+-		if (stats->InvalidCRCCount < 5)
++	if (unlikely(fr_crc != ~crc32(~0, skb->data, fr_len))) {
++		stats = per_cpu_ptr(lport->stats, get_cpu());
++		crc_err = (stats->InvalidCRCCount++);
++		put_cpu();
++		if (crc_err < 5)
+ 			printk(KERN_WARNING PFX "dropping frame with "
+ 			       "CRC error\n");
+-		stats->InvalidCRCCount++;
+ 		kfree_skb(skb);
+ 		return;
+ 	}
+diff --git a/drivers/soc/mediatek/mtk-scpsys.c b/drivers/soc/mediatek/mtk-scpsys.c
+index 670cc82d17dc2..ca75b14931ec9 100644
+--- a/drivers/soc/mediatek/mtk-scpsys.c
++++ b/drivers/soc/mediatek/mtk-scpsys.c
+@@ -411,17 +411,12 @@ out:
+ 	return ret;
+ }
+ 
+-static int init_clks(struct platform_device *pdev, struct clk **clk)
++static void init_clks(struct platform_device *pdev, struct clk **clk)
+ {
+ 	int i;
+ 
+-	for (i = CLK_NONE + 1; i < CLK_MAX; i++) {
++	for (i = CLK_NONE + 1; i < CLK_MAX; i++)
+ 		clk[i] = devm_clk_get(&pdev->dev, clk_names[i]);
+-		if (IS_ERR(clk[i]))
+-			return PTR_ERR(clk[i]);
+-	}
+-
+-	return 0;
+ }
+ 
+ static struct scp *init_scp(struct platform_device *pdev,
+@@ -431,7 +426,7 @@ static struct scp *init_scp(struct platform_device *pdev,
+ {
+ 	struct genpd_onecell_data *pd_data;
+ 	struct resource *res;
+-	int i, j, ret;
++	int i, j;
+ 	struct scp *scp;
+ 	struct clk *clk[CLK_MAX];
+ 
+@@ -486,9 +481,7 @@ static struct scp *init_scp(struct platform_device *pdev,
+ 
+ 	pd_data->num_domains = num;
+ 
+-	ret = init_clks(pdev, clk);
+-	if (ret)
+-		return ERR_PTR(ret);
++	init_clks(pdev, clk);
+ 
+ 	for (i = 0; i < num; i++) {
+ 		struct scp_domain *scpd = &scp->domains[i];
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 3c0ae6dbc43e2..4a80f043b7b17 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -551,7 +551,7 @@ static void bcm_qspi_chip_select(struct bcm_qspi *qspi, int cs)
+ 	u32 rd = 0;
+ 	u32 wr = 0;
+ 
+-	if (qspi->base[CHIP_SELECT]) {
++	if (cs >= 0 && qspi->base[CHIP_SELECT]) {
+ 		rd = bcm_qspi_read(qspi, CHIP_SELECT, 0);
+ 		wr = (rd & ~0xff) | (1 << cs);
+ 		if (rd == wr)
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index c208efeadd184..0bc7daa7afc83 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -693,6 +693,11 @@ static int meson_spicc_probe(struct platform_device *pdev)
+ 	writel_relaxed(0, spicc->base + SPICC_INTREG);
+ 
+ 	irq = platform_get_irq(pdev, 0);
++	if (irq < 0) {
++		ret = irq;
++		goto out_master;
++	}
++
+ 	ret = devm_request_irq(&pdev->dev, irq, meson_spicc_irq,
+ 			       0, NULL, spicc);
+ 	if (ret) {
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 83e56ee62649d..92a09dfb99a8e 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -540,7 +540,7 @@ static irqreturn_t mtk_spi_interrupt(int irq, void *dev_id)
+ 	else
+ 		mdata->state = MTK_SPI_IDLE;
+ 
+-	if (!master->can_dma(master, master->cur_msg->spi, trans)) {
++	if (!master->can_dma(master, NULL, trans)) {
+ 		if (trans->rx_buf) {
+ 			cnt = mdata->xfer_len / 4;
+ 			ioread32_rep(mdata->base + SPI_RX_DATA_REG,
+diff --git a/drivers/spi/spi-uniphier.c b/drivers/spi/spi-uniphier.c
+index e5c234aecf675..ad0088e394723 100644
+--- a/drivers/spi/spi-uniphier.c
++++ b/drivers/spi/spi-uniphier.c
+@@ -726,7 +726,7 @@ static int uniphier_spi_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to get TX DMA capacities: %d\n",
+ 				ret);
+-			goto out_disable_clk;
++			goto out_release_dma;
+ 		}
+ 		dma_tx_burst = caps.max_burst;
+ 	}
+@@ -735,7 +735,7 @@ static int uniphier_spi_probe(struct platform_device *pdev)
+ 	if (IS_ERR_OR_NULL(master->dma_rx)) {
+ 		if (PTR_ERR(master->dma_rx) == -EPROBE_DEFER) {
+ 			ret = -EPROBE_DEFER;
+-			goto out_disable_clk;
++			goto out_release_dma;
+ 		}
+ 		master->dma_rx = NULL;
+ 		dma_rx_burst = INT_MAX;
+@@ -744,7 +744,7 @@ static int uniphier_spi_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to get RX DMA capacities: %d\n",
+ 				ret);
+-			goto out_disable_clk;
++			goto out_release_dma;
+ 		}
+ 		dma_rx_burst = caps.max_burst;
+ 	}
+@@ -753,10 +753,20 @@ static int uniphier_spi_probe(struct platform_device *pdev)
+ 
+ 	ret = devm_spi_register_master(&pdev->dev, master);
+ 	if (ret)
+-		goto out_disable_clk;
++		goto out_release_dma;
+ 
+ 	return 0;
+ 
++out_release_dma:
++	if (!IS_ERR_OR_NULL(master->dma_rx)) {
++		dma_release_channel(master->dma_rx);
++		master->dma_rx = NULL;
++	}
++	if (!IS_ERR_OR_NULL(master->dma_tx)) {
++		dma_release_channel(master->dma_tx);
++		master->dma_tx = NULL;
++	}
++
+ out_disable_clk:
+ 	clk_disable_unprepare(priv->clk);
+ 
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index ee33b8ec62bb2..47c4939577725 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -78,6 +78,26 @@ config FRAMEBUFFER_CONSOLE
+ 	help
+ 	  Low-level framebuffer-based console driver.
+ 
++config FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
++	bool "Enable legacy fbcon hardware acceleration code"
++	depends on FRAMEBUFFER_CONSOLE
++	default y if PARISC
++	default n
++	help
++	  This option enables the fbcon (framebuffer text-based) hardware
++	  acceleration for graphics drivers which were written for the fbdev
++	  graphics interface.
++
++	  On modern machines, on mainstream machines (like x86-64) or when
++	  using a modern Linux distribution those fbdev drivers usually aren't used.
++	  So enabling this option wouldn't have any effect, which is why you want
++	  to disable this option on such newer machines.
++
++	  If you compile this kernel for older machines which still require the
++	  fbdev drivers, you may want to say Y.
++
++	  If unsure, select n.
++
+ config FRAMEBUFFER_CONSOLE_DETECT_PRIMARY
+        bool "Map the console to the primary display device"
+        depends on FRAMEBUFFER_CONSOLE
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 42c72d051158f..f102519ccefb4 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -1033,7 +1033,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 	struct vc_data *svc = *default_mode;
+ 	struct fbcon_display *t, *p = &fb_display[vc->vc_num];
+ 	int logo = 1, new_rows, new_cols, rows, cols, charcnt = 256;
+-	int ret;
++	int cap, ret;
+ 
+ 	if (WARN_ON(info_idx == -1))
+ 	    return;
+@@ -1042,6 +1042,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 		con2fb_map[vc->vc_num] = info_idx;
+ 
+ 	info = registered_fb[con2fb_map[vc->vc_num]];
++	cap = info->flags;
+ 
+ 	if (logo_shown < 0 && console_loglevel <= CONSOLE_LOGLEVEL_QUIET)
+ 		logo_shown = FBCON_LOGO_DONTSHOW;
+@@ -1146,13 +1147,13 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 
+ 	ops->graphics = 0;
+ 
+-	/*
+-	 * No more hw acceleration for fbcon.
+-	 *
+-	 * FIXME: Garbage collect all the now dead code after sufficient time
+-	 * has passed.
+-	 */
+-	p->scrollmode = SCROLL_REDRAW;
++#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
++	if ((cap & FBINFO_HWACCEL_COPYAREA) &&
++	    !(cap & FBINFO_HWACCEL_DISABLED))
++		p->scrollmode = SCROLL_MOVE;
++	else /* default to something safe */
++		p->scrollmode = SCROLL_REDRAW;
++#endif
+ 
+ 	/*
+ 	 *  ++guenther: console.c:vc_allocate() relies on initializing
+@@ -1718,7 +1719,7 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ 			count = vc->vc_rows;
+ 		if (logo_shown >= 0)
+ 			goto redraw_up;
+-		switch (p->scrollmode) {
++		switch (fb_scrollmode(p)) {
+ 		case SCROLL_MOVE:
+ 			fbcon_redraw_blit(vc, info, p, t, b - t - count,
+ 				     count);
+@@ -1808,7 +1809,7 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ 			count = vc->vc_rows;
+ 		if (logo_shown >= 0)
+ 			goto redraw_down;
+-		switch (p->scrollmode) {
++		switch (fb_scrollmode(p)) {
+ 		case SCROLL_MOVE:
+ 			fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
+ 				     -count);
+@@ -1959,6 +1960,48 @@ static void fbcon_bmove_rec(struct vc_data *vc, struct fbcon_display *p, int sy,
+ 		   height, width);
+ }
+ 
++static void updatescrollmode_accel(struct fbcon_display *p,
++					struct fb_info *info,
++					struct vc_data *vc)
++{
++#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
++	struct fbcon_ops *ops = info->fbcon_par;
++	int cap = info->flags;
++	u16 t = 0;
++	int ypan = FBCON_SWAP(ops->rotate, info->fix.ypanstep,
++				  info->fix.xpanstep);
++	int ywrap = FBCON_SWAP(ops->rotate, info->fix.ywrapstep, t);
++	int yres = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
++	int vyres = FBCON_SWAP(ops->rotate, info->var.yres_virtual,
++				   info->var.xres_virtual);
++	int good_pan = (cap & FBINFO_HWACCEL_YPAN) &&
++		divides(ypan, vc->vc_font.height) && vyres > yres;
++	int good_wrap = (cap & FBINFO_HWACCEL_YWRAP) &&
++		divides(ywrap, vc->vc_font.height) &&
++		divides(vc->vc_font.height, vyres) &&
++		divides(vc->vc_font.height, yres);
++	int reading_fast = cap & FBINFO_READS_FAST;
++	int fast_copyarea = (cap & FBINFO_HWACCEL_COPYAREA) &&
++		!(cap & FBINFO_HWACCEL_DISABLED);
++	int fast_imageblit = (cap & FBINFO_HWACCEL_IMAGEBLIT) &&
++		!(cap & FBINFO_HWACCEL_DISABLED);
++
++	if (good_wrap || good_pan) {
++		if (reading_fast || fast_copyarea)
++			p->scrollmode = good_wrap ?
++				SCROLL_WRAP_MOVE : SCROLL_PAN_MOVE;
++		else
++			p->scrollmode = good_wrap ? SCROLL_REDRAW :
++				SCROLL_PAN_REDRAW;
++	} else {
++		if (reading_fast || (fast_copyarea && !fast_imageblit))
++			p->scrollmode = SCROLL_MOVE;
++		else
++			p->scrollmode = SCROLL_REDRAW;
++	}
++#endif
++}
++
+ static void updatescrollmode(struct fbcon_display *p,
+ 					struct fb_info *info,
+ 					struct vc_data *vc)
+@@ -1974,6 +2017,9 @@ static void updatescrollmode(struct fbcon_display *p,
+ 		p->vrows -= (yres - (fh * vc->vc_rows)) / fh;
+ 	if ((yres % fh) && (vyres % fh < yres % fh))
+ 		p->vrows--;
++
++	/* update scrollmode in case hardware acceleration is used */
++	updatescrollmode_accel(p, info, vc);
+ }
+ 
+ #define PITCH(w) (((w) + 7) >> 3)
+@@ -2134,7 +2180,7 @@ static int fbcon_switch(struct vc_data *vc)
+ 
+ 	updatescrollmode(p, info, vc);
+ 
+-	switch (p->scrollmode) {
++	switch (fb_scrollmode(p)) {
+ 	case SCROLL_WRAP_MOVE:
+ 		scrollback_phys_max = p->vrows - vc->vc_rows;
+ 		break;
+diff --git a/drivers/video/fbdev/core/fbcon.h b/drivers/video/fbdev/core/fbcon.h
+index 9315b360c8981..0f16cbc99e6a4 100644
+--- a/drivers/video/fbdev/core/fbcon.h
++++ b/drivers/video/fbdev/core/fbcon.h
+@@ -29,7 +29,9 @@ struct fbcon_display {
+     /* Filled in by the low-level console driver */
+     const u_char *fontdata;
+     int userfont;                   /* != 0 if fontdata kmalloc()ed */
+-    u_short scrollmode;             /* Scroll Method */
++#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
++    u_short scrollmode;             /* Scroll Method, use fb_scrollmode() */
++#endif
+     u_short inverse;                /* != 0 text black on white as default */
+     short yscroll;                  /* Hardware scrolling */
+     int vrows;                      /* number of virtual rows */
+@@ -208,6 +210,17 @@ static inline int attr_col_ec(int shift, struct vc_data *vc,
+ #define SCROLL_REDRAW	   0x004
+ #define SCROLL_PAN_REDRAW  0x005
+ 
++static inline u_short fb_scrollmode(struct fbcon_display *fb)
++{
++#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
++	return fb->scrollmode;
++#else
++	/* hardcoded to SCROLL_REDRAW if acceleration was disabled. */
++	return SCROLL_REDRAW;
++#endif
++}
++
++
+ #ifdef CONFIG_FB_TILEBLITTING
+ extern void fbcon_set_tileops(struct vc_data *vc, struct fb_info *info);
+ #endif
+diff --git a/drivers/video/fbdev/core/fbcon_ccw.c b/drivers/video/fbdev/core/fbcon_ccw.c
+index bbd869efd03bc..f75b24c32d497 100644
+--- a/drivers/video/fbdev/core/fbcon_ccw.c
++++ b/drivers/video/fbdev/core/fbcon_ccw.c
+@@ -65,7 +65,7 @@ static void ccw_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ {
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	struct fb_copyarea area;
+-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
++	u32 vyres = GETVYRES(ops->p, info);
+ 
+ 	area.sx = sy * vc->vc_font.height;
+ 	area.sy = vyres - ((sx + width) * vc->vc_font.width);
+@@ -83,7 +83,7 @@ static void ccw_clear(struct vc_data *vc, struct fb_info *info, int sy,
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	struct fb_fillrect region;
+ 	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
++	u32 vyres = GETVYRES(ops->p, info);
+ 
+ 	region.color = attr_bgcol_ec(bgshift,vc,info);
+ 	region.dx = sy * vc->vc_font.height;
+@@ -140,7 +140,7 @@ static void ccw_putcs(struct vc_data *vc, struct fb_info *info,
+ 	u32 cnt, pitch, size;
+ 	u32 attribute = get_attribute(info, scr_readw(s));
+ 	u8 *dst, *buf = NULL;
+-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
++	u32 vyres = GETVYRES(ops->p, info);
+ 
+ 	if (!ops->fontbuffer)
+ 		return;
+@@ -229,7 +229,7 @@ static void ccw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ 	int attribute, use_sw = vc->vc_cursor_type & CUR_SW;
+ 	int err = 1, dx, dy;
+ 	char *src;
+-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
++	u32 vyres = GETVYRES(ops->p, info);
+ 
+ 	if (!ops->fontbuffer)
+ 		return;
+@@ -387,7 +387,7 @@ static int ccw_update_start(struct fb_info *info)
+ {
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	u32 yoffset;
+-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
++	u32 vyres = GETVYRES(ops->p, info);
+ 	int err;
+ 
+ 	yoffset = (vyres - info->var.yres) - ops->var.xoffset;
+diff --git a/drivers/video/fbdev/core/fbcon_cw.c b/drivers/video/fbdev/core/fbcon_cw.c
+index a34cbe8e98744..cf03dc62f35d3 100644
+--- a/drivers/video/fbdev/core/fbcon_cw.c
++++ b/drivers/video/fbdev/core/fbcon_cw.c
+@@ -50,7 +50,7 @@ static void cw_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ {
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	struct fb_copyarea area;
+-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	area.sx = vxres - ((sy + height) * vc->vc_font.height);
+ 	area.sy = sx * vc->vc_font.width;
+@@ -68,7 +68,7 @@ static void cw_clear(struct vc_data *vc, struct fb_info *info, int sy,
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	struct fb_fillrect region;
+ 	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	region.color = attr_bgcol_ec(bgshift,vc,info);
+ 	region.dx = vxres - ((sy + height) * vc->vc_font.height);
+@@ -125,7 +125,7 @@ static void cw_putcs(struct vc_data *vc, struct fb_info *info,
+ 	u32 cnt, pitch, size;
+ 	u32 attribute = get_attribute(info, scr_readw(s));
+ 	u8 *dst, *buf = NULL;
+-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	if (!ops->fontbuffer)
+ 		return;
+@@ -212,7 +212,7 @@ static void cw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ 	int attribute, use_sw = vc->vc_cursor_type & CUR_SW;
+ 	int err = 1, dx, dy;
+ 	char *src;
+-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	if (!ops->fontbuffer)
+ 		return;
+@@ -369,7 +369,7 @@ static void cw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ static int cw_update_start(struct fb_info *info)
+ {
+ 	struct fbcon_ops *ops = info->fbcon_par;
+-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 	u32 xoffset;
+ 	int err;
+ 
+diff --git a/drivers/video/fbdev/core/fbcon_rotate.h b/drivers/video/fbdev/core/fbcon_rotate.h
+index e233444cda664..01cbe303b8a29 100644
+--- a/drivers/video/fbdev/core/fbcon_rotate.h
++++ b/drivers/video/fbdev/core/fbcon_rotate.h
+@@ -12,11 +12,11 @@
+ #define _FBCON_ROTATE_H
+ 
+ #define GETVYRES(s,i) ({                           \
+-        (s == SCROLL_REDRAW || s == SCROLL_MOVE) ? \
++        (fb_scrollmode(s) == SCROLL_REDRAW || fb_scrollmode(s) == SCROLL_MOVE) ? \
+         (i)->var.yres : (i)->var.yres_virtual; })
+ 
+ #define GETVXRES(s,i) ({                           \
+-        (s == SCROLL_REDRAW || s == SCROLL_MOVE || !(i)->fix.xpanstep) ? \
++        (fb_scrollmode(s) == SCROLL_REDRAW || fb_scrollmode(s) == SCROLL_MOVE || !(i)->fix.xpanstep) ? \
+         (i)->var.xres : (i)->var.xres_virtual; })
+ 
+ 
+diff --git a/drivers/video/fbdev/core/fbcon_ud.c b/drivers/video/fbdev/core/fbcon_ud.c
+index 199cbc7abe353..c5d2da731d686 100644
+--- a/drivers/video/fbdev/core/fbcon_ud.c
++++ b/drivers/video/fbdev/core/fbcon_ud.c
+@@ -50,8 +50,8 @@ static void ud_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ {
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	struct fb_copyarea area;
+-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
++	u32 vyres = GETVYRES(ops->p, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	area.sy = vyres - ((sy + height) * vc->vc_font.height);
+ 	area.sx = vxres - ((sx + width) * vc->vc_font.width);
+@@ -69,8 +69,8 @@ static void ud_clear(struct vc_data *vc, struct fb_info *info, int sy,
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	struct fb_fillrect region;
+ 	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
++	u32 vyres = GETVYRES(ops->p, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	region.color = attr_bgcol_ec(bgshift,vc,info);
+ 	region.dy = vyres - ((sy + height) * vc->vc_font.height);
+@@ -162,8 +162,8 @@ static void ud_putcs(struct vc_data *vc, struct fb_info *info,
+ 	u32 mod = vc->vc_font.width % 8, cnt, pitch, size;
+ 	u32 attribute = get_attribute(info, scr_readw(s));
+ 	u8 *dst, *buf = NULL;
+-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
++	u32 vyres = GETVYRES(ops->p, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	if (!ops->fontbuffer)
+ 		return;
+@@ -259,8 +259,8 @@ static void ud_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ 	int attribute, use_sw = vc->vc_cursor_type & CUR_SW;
+ 	int err = 1, dx, dy;
+ 	char *src;
+-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
++	u32 vyres = GETVYRES(ops->p, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	if (!ops->fontbuffer)
+ 		return;
+@@ -410,8 +410,8 @@ static int ud_update_start(struct fb_info *info)
+ {
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	int xoffset, yoffset;
+-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
++	u32 vyres = GETVYRES(ops->p, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 	int err;
+ 
+ 	xoffset = vxres - info->var.xres - ops->var.xoffset;
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index f65aa4ed5ca1e..e39a12037b403 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1186,9 +1186,24 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 	struct btrfs_trans_handle *trans = NULL;
+ 	int ret = 0;
+ 
++	/*
++	 * We need to have subvol_sem write locked, to prevent races between
++	 * concurrent tasks trying to disable quotas, because we will unlock
++	 * and relock qgroup_ioctl_lock across BTRFS_FS_QUOTA_ENABLED changes.
++	 */
++	lockdep_assert_held_write(&fs_info->subvol_sem);
++
+ 	mutex_lock(&fs_info->qgroup_ioctl_lock);
+ 	if (!fs_info->quota_root)
+ 		goto out;
++
++	/*
++	 * Request qgroup rescan worker to complete and wait for it. This wait
++	 * must be done before transaction start for quota disable since it may
++	 * deadlock with transaction by the qgroup rescan worker.
++	 */
++	clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
++	btrfs_qgroup_wait_for_completion(fs_info, false);
+ 	mutex_unlock(&fs_info->qgroup_ioctl_lock);
+ 
+ 	/*
+@@ -1206,14 +1221,13 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 	if (IS_ERR(trans)) {
+ 		ret = PTR_ERR(trans);
+ 		trans = NULL;
++		set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
+ 		goto out;
+ 	}
+ 
+ 	if (!fs_info->quota_root)
+ 		goto out;
+ 
+-	clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
+-	btrfs_qgroup_wait_for_completion(fs_info, false);
+ 	spin_lock(&fs_info->qgroup_lock);
+ 	quota_root = fs_info->quota_root;
+ 	fs_info->quota_root = NULL;
+@@ -3390,6 +3404,9 @@ qgroup_rescan_init(struct btrfs_fs_info *fs_info, u64 progress_objectid,
+ 			btrfs_warn(fs_info,
+ 			"qgroup rescan init failed, qgroup is not enabled");
+ 			ret = -EINVAL;
++		} else if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) {
++			/* Quota disable is in progress */
++			ret = -EBUSY;
+ 		}
+ 
+ 		if (ret) {
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 99d98d1010217..455eb349c76f8 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -2779,6 +2779,9 @@ void ext4_fc_replay_cleanup(struct super_block *sb);
+ int ext4_fc_commit(journal_t *journal, tid_t commit_tid);
+ int __init ext4_fc_init_dentry_cache(void);
+ void ext4_fc_destroy_dentry_cache(void);
++int ext4_fc_record_regions(struct super_block *sb, int ino,
++			   ext4_lblk_t lblk, ext4_fsblk_t pblk,
++			   int len, int replay);
+ 
+ /* mballoc.c */
+ extern const struct seq_operations ext4_mb_seq_groups_ops;
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index b297b14de7509..0fda3051760d1 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -6088,11 +6088,15 @@ int ext4_ext_clear_bb(struct inode *inode)
+ 
+ 					ext4_mb_mark_bb(inode->i_sb,
+ 							path[j].p_block, 1, 0);
++					ext4_fc_record_regions(inode->i_sb, inode->i_ino,
++							0, path[j].p_block, 1, 1);
+ 				}
+ 				ext4_ext_drop_refs(path);
+ 				kfree(path);
+ 			}
+ 			ext4_mb_mark_bb(inode->i_sb, map.m_pblk, map.m_len, 0);
++			ext4_fc_record_regions(inode->i_sb, inode->i_ino,
++					map.m_lblk, map.m_pblk, map.m_len, 1);
+ 		}
+ 		cur = cur + map.m_len;
+ 	}
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index f483abcd5213a..501e60713010e 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -1388,14 +1388,15 @@ static int ext4_fc_record_modified_inode(struct super_block *sb, int ino)
+ 		if (state->fc_modified_inodes[i] == ino)
+ 			return 0;
+ 	if (state->fc_modified_inodes_used == state->fc_modified_inodes_size) {
+-		state->fc_modified_inodes_size +=
+-			EXT4_FC_REPLAY_REALLOC_INCREMENT;
+ 		state->fc_modified_inodes = krealloc(
+-					state->fc_modified_inodes, sizeof(int) *
+-					state->fc_modified_inodes_size,
+-					GFP_KERNEL);
++				state->fc_modified_inodes,
++				sizeof(int) * (state->fc_modified_inodes_size +
++				EXT4_FC_REPLAY_REALLOC_INCREMENT),
++				GFP_KERNEL);
+ 		if (!state->fc_modified_inodes)
+ 			return -ENOMEM;
++		state->fc_modified_inodes_size +=
++			EXT4_FC_REPLAY_REALLOC_INCREMENT;
+ 	}
+ 	state->fc_modified_inodes[state->fc_modified_inodes_used++] = ino;
+ 	return 0;
+@@ -1427,7 +1428,9 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl,
+ 	}
+ 	inode = NULL;
+ 
+-	ext4_fc_record_modified_inode(sb, ino);
++	ret = ext4_fc_record_modified_inode(sb, ino);
++	if (ret)
++		goto out;
+ 
+ 	raw_fc_inode = (struct ext4_inode *)
+ 		(val + offsetof(struct ext4_fc_inode, fc_raw_inode));
+@@ -1558,16 +1561,23 @@ out:
+ }
+ 
+ /*
+- * Record physical disk regions which are in use as per fast commit area. Our
+- * simple replay phase allocator excludes these regions from allocation.
++ * Record physical disk regions which are in use as per fast commit area,
++ * and used by inodes during replay phase. Our simple replay phase
++ * allocator excludes these regions from allocation.
+  */
+-static int ext4_fc_record_regions(struct super_block *sb, int ino,
+-		ext4_lblk_t lblk, ext4_fsblk_t pblk, int len)
++int ext4_fc_record_regions(struct super_block *sb, int ino,
++		ext4_lblk_t lblk, ext4_fsblk_t pblk, int len, int replay)
+ {
+ 	struct ext4_fc_replay_state *state;
+ 	struct ext4_fc_alloc_region *region;
+ 
+ 	state = &EXT4_SB(sb)->s_fc_replay_state;
++	/*
++	 * during replay phase, the fc_regions_valid may not same as
++	 * fc_regions_used, update it when do new additions.
++	 */
++	if (replay && state->fc_regions_used != state->fc_regions_valid)
++		state->fc_regions_used = state->fc_regions_valid;
+ 	if (state->fc_regions_used == state->fc_regions_size) {
+ 		state->fc_regions_size +=
+ 			EXT4_FC_REPLAY_REALLOC_INCREMENT;
+@@ -1585,6 +1595,9 @@ static int ext4_fc_record_regions(struct super_block *sb, int ino,
+ 	region->pblk = pblk;
+ 	region->len = len;
+ 
++	if (replay)
++		state->fc_regions_valid++;
++
+ 	return 0;
+ }
+ 
+@@ -1616,6 +1629,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 	}
+ 
+ 	ret = ext4_fc_record_modified_inode(sb, inode->i_ino);
++	if (ret)
++		goto out;
+ 
+ 	start = le32_to_cpu(ex->ee_block);
+ 	start_pblk = ext4_ext_pblock(ex);
+@@ -1633,18 +1648,14 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 		map.m_pblk = 0;
+ 		ret = ext4_map_blocks(NULL, inode, &map, 0);
+ 
+-		if (ret < 0) {
+-			iput(inode);
+-			return 0;
+-		}
++		if (ret < 0)
++			goto out;
+ 
+ 		if (ret == 0) {
+ 			/* Range is not mapped */
+ 			path = ext4_find_extent(inode, cur, NULL, 0);
+-			if (IS_ERR(path)) {
+-				iput(inode);
+-				return 0;
+-			}
++			if (IS_ERR(path))
++				goto out;
+ 			memset(&newex, 0, sizeof(newex));
+ 			newex.ee_block = cpu_to_le32(cur);
+ 			ext4_ext_store_pblock(
+@@ -1658,10 +1669,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 			up_write((&EXT4_I(inode)->i_data_sem));
+ 			ext4_ext_drop_refs(path);
+ 			kfree(path);
+-			if (ret) {
+-				iput(inode);
+-				return 0;
+-			}
++			if (ret)
++				goto out;
+ 			goto next;
+ 		}
+ 
+@@ -1674,10 +1683,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 			ret = ext4_ext_replay_update_ex(inode, cur, map.m_len,
+ 					ext4_ext_is_unwritten(ex),
+ 					start_pblk + cur - start);
+-			if (ret) {
+-				iput(inode);
+-				return 0;
+-			}
++			if (ret)
++				goto out;
+ 			/*
+ 			 * Mark the old blocks as free since they aren't used
+ 			 * anymore. We maintain an array of all the modified
+@@ -1697,10 +1704,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 			ext4_ext_is_unwritten(ex), map.m_pblk);
+ 		ret = ext4_ext_replay_update_ex(inode, cur, map.m_len,
+ 					ext4_ext_is_unwritten(ex), map.m_pblk);
+-		if (ret) {
+-			iput(inode);
+-			return 0;
+-		}
++		if (ret)
++			goto out;
+ 		/*
+ 		 * We may have split the extent tree while toggling the state.
+ 		 * Try to shrink the extent tree now.
+@@ -1712,6 +1717,7 @@ next:
+ 	}
+ 	ext4_ext_replay_shrink_inode(inode, i_size_read(inode) >>
+ 					sb->s_blocksize_bits);
++out:
+ 	iput(inode);
+ 	return 0;
+ }
+@@ -1741,6 +1747,8 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
+ 	}
+ 
+ 	ret = ext4_fc_record_modified_inode(sb, inode->i_ino);
++	if (ret)
++		goto out;
+ 
+ 	jbd_debug(1, "DEL_RANGE, inode %ld, lblk %d, len %d\n",
+ 			inode->i_ino, le32_to_cpu(lrange.fc_lblk),
+@@ -1750,10 +1758,8 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
+ 		map.m_len = remaining;
+ 
+ 		ret = ext4_map_blocks(NULL, inode, &map, 0);
+-		if (ret < 0) {
+-			iput(inode);
+-			return 0;
+-		}
++		if (ret < 0)
++			goto out;
+ 		if (ret > 0) {
+ 			remaining -= ret;
+ 			cur += ret;
+@@ -1765,18 +1771,17 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
+ 	}
+ 
+ 	down_write(&EXT4_I(inode)->i_data_sem);
+-	ret = ext4_ext_remove_space(inode, lrange.fc_lblk,
+-				lrange.fc_lblk + lrange.fc_len - 1);
++	ret = ext4_ext_remove_space(inode, le32_to_cpu(lrange.fc_lblk),
++				le32_to_cpu(lrange.fc_lblk) +
++				le32_to_cpu(lrange.fc_len) - 1);
+ 	up_write(&EXT4_I(inode)->i_data_sem);
+-	if (ret) {
+-		iput(inode);
+-		return 0;
+-	}
++	if (ret)
++		goto out;
+ 	ext4_ext_replay_shrink_inode(inode,
+ 		i_size_read(inode) >> sb->s_blocksize_bits);
+ 	ext4_mark_inode_dirty(NULL, inode);
++out:
+ 	iput(inode);
+-
+ 	return 0;
+ }
+ 
+@@ -1954,7 +1959,7 @@ static int ext4_fc_replay_scan(journal_t *journal,
+ 			ret = ext4_fc_record_regions(sb,
+ 				le32_to_cpu(ext.fc_ino),
+ 				le32_to_cpu(ex->ee_block), ext4_ext_pblock(ex),
+-				ext4_ext_get_actual_len(ex));
++				ext4_ext_get_actual_len(ex), 0);
+ 			if (ret < 0)
+ 				break;
+ 			ret = JBD2_FC_REPLAY_CONTINUE;
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index a96b688a0410f..ae1f0c57f54d2 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1120,7 +1120,15 @@ static void ext4_restore_inline_data(handle_t *handle, struct inode *inode,
+ 				     struct ext4_iloc *iloc,
+ 				     void *buf, int inline_size)
+ {
+-	ext4_create_inline_data(handle, inode, inline_size);
++	int ret;
++
++	ret = ext4_create_inline_data(handle, inode, inline_size);
++	if (ret) {
++		ext4_msg(inode->i_sb, KERN_EMERG,
++			"error restoring inline_data for inode -- potential data loss! (inode %lu, error %d)",
++			inode->i_ino, ret);
++		return;
++	}
+ 	ext4_write_inline_data(inode, iloc, buf, 0, inline_size);
+ 	ext4_set_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+ }
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index e40f87d07783a..110c25824a67f 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -5173,7 +5173,8 @@ static ext4_fsblk_t ext4_mb_new_blocks_simple(handle_t *handle,
+ 	struct super_block *sb = ar->inode->i_sb;
+ 	ext4_group_t group;
+ 	ext4_grpblk_t blkoff;
+-	int i = sb->s_blocksize;
++	ext4_grpblk_t max = EXT4_CLUSTERS_PER_GROUP(sb);
++	ext4_grpblk_t i = 0;
+ 	ext4_fsblk_t goal, block;
+ 	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
+ 
+@@ -5195,19 +5196,26 @@ static ext4_fsblk_t ext4_mb_new_blocks_simple(handle_t *handle,
+ 		ext4_get_group_no_and_offset(sb,
+ 			max(ext4_group_first_block_no(sb, group), goal),
+ 			NULL, &blkoff);
+-		i = mb_find_next_zero_bit(bitmap_bh->b_data, sb->s_blocksize,
++		while (1) {
++			i = mb_find_next_zero_bit(bitmap_bh->b_data, max,
+ 						blkoff);
++			if (i >= max)
++				break;
++			if (ext4_fc_replay_check_excluded(sb,
++				ext4_group_first_block_no(sb, group) + i)) {
++				blkoff = i + 1;
++			} else
++				break;
++		}
+ 		brelse(bitmap_bh);
+-		if (i >= sb->s_blocksize)
+-			continue;
+-		if (ext4_fc_replay_check_excluded(sb,
+-			ext4_group_first_block_no(sb, group) + i))
+-			continue;
+-		break;
++		if (i < max)
++			break;
+ 	}
+ 
+-	if (group >= ext4_get_groups_count(sb) && i >= sb->s_blocksize)
++	if (group >= ext4_get_groups_count(sb) || i >= max) {
++		*errp = -ENOSPC;
+ 		return 0;
++	}
+ 
+ 	block = ext4_group_first_block_no(sb, group) + i;
+ 	ext4_mb_mark_bb(sb, block, 1, 1);
+diff --git a/fs/fs_context.c b/fs/fs_context.c
+index b11677802ee13..740322dff4a30 100644
+--- a/fs/fs_context.c
++++ b/fs/fs_context.c
+@@ -231,7 +231,7 @@ static struct fs_context *alloc_fs_context(struct file_system_type *fs_type,
+ 	struct fs_context *fc;
+ 	int ret = -ENOMEM;
+ 
+-	fc = kzalloc(sizeof(struct fs_context), GFP_KERNEL);
++	fc = kzalloc(sizeof(struct fs_context), GFP_KERNEL_ACCOUNT);
+ 	if (!fc)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -631,7 +631,7 @@ const struct fs_context_operations legacy_fs_context_ops = {
+  */
+ static int legacy_init_fs_context(struct fs_context *fc)
+ {
+-	fc->fs_private = kzalloc(sizeof(struct legacy_fs_context), GFP_KERNEL);
++	fc->fs_private = kzalloc(sizeof(struct legacy_fs_context), GFP_KERNEL_ACCOUNT);
+ 	if (!fc->fs_private)
+ 		return -ENOMEM;
+ 	fc->ops = &legacy_fs_context_ops;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 210147960c52e..d01d7929753ef 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4047,8 +4047,10 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
+ 			status = nfserr_clid_inuse;
+ 			if (client_has_state(old)
+ 					&& !same_creds(&unconf->cl_cred,
+-							&old->cl_cred))
++							&old->cl_cred)) {
++				old = NULL;
+ 				goto out;
++			}
+ 			status = mark_client_expired_locked(old);
+ 			if (status) {
+ 				old = NULL;
+diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
+index 7c869ea8dffc8..9def1ac19546b 100644
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -44,6 +44,7 @@ static inline unsigned long pte_index(unsigned long address)
+ {
+ 	return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
+ }
++#define pte_index pte_index
+ 
+ #ifndef pmd_index
+ static inline unsigned long pmd_index(unsigned long address)
+diff --git a/kernel/audit.c b/kernel/audit.c
+index 2a38cbaf3ddb7..aeec86ed47088 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -541,20 +541,22 @@ static void kauditd_printk_skb(struct sk_buff *skb)
+ /**
+  * kauditd_rehold_skb - Handle a audit record send failure in the hold queue
+  * @skb: audit record
++ * @error: error code (unused)
+  *
+  * Description:
+  * This should only be used by the kauditd_thread when it fails to flush the
+  * hold queue.
+  */
+-static void kauditd_rehold_skb(struct sk_buff *skb)
++static void kauditd_rehold_skb(struct sk_buff *skb, __always_unused int error)
+ {
+-	/* put the record back in the queue at the same place */
+-	skb_queue_head(&audit_hold_queue, skb);
++	/* put the record back in the queue */
++	skb_queue_tail(&audit_hold_queue, skb);
+ }
+ 
+ /**
+  * kauditd_hold_skb - Queue an audit record, waiting for auditd
+  * @skb: audit record
++ * @error: error code
+  *
+  * Description:
+  * Queue the audit record, waiting for an instance of auditd.  When this
+@@ -564,19 +566,31 @@ static void kauditd_rehold_skb(struct sk_buff *skb)
+  * and queue it, if we have room.  If we want to hold on to the record, but we
+  * don't have room, record a record lost message.
+  */
+-static void kauditd_hold_skb(struct sk_buff *skb)
++static void kauditd_hold_skb(struct sk_buff *skb, int error)
+ {
+ 	/* at this point it is uncertain if we will ever send this to auditd so
+ 	 * try to send the message via printk before we go any further */
+ 	kauditd_printk_skb(skb);
+ 
+ 	/* can we just silently drop the message? */
+-	if (!audit_default) {
+-		kfree_skb(skb);
+-		return;
++	if (!audit_default)
++		goto drop;
++
++	/* the hold queue is only for when the daemon goes away completely,
++	 * not -EAGAIN failures; if we are in a -EAGAIN state requeue the
++	 * record on the retry queue unless it's full, in which case drop it
++	 */
++	if (error == -EAGAIN) {
++		if (!audit_backlog_limit ||
++		    skb_queue_len(&audit_retry_queue) < audit_backlog_limit) {
++			skb_queue_tail(&audit_retry_queue, skb);
++			return;
++		}
++		audit_log_lost("kauditd retry queue overflow");
++		goto drop;
+ 	}
+ 
+-	/* if we have room, queue the message */
++	/* if we have room in the hold queue, queue the message */
+ 	if (!audit_backlog_limit ||
+ 	    skb_queue_len(&audit_hold_queue) < audit_backlog_limit) {
+ 		skb_queue_tail(&audit_hold_queue, skb);
+@@ -585,24 +599,32 @@ static void kauditd_hold_skb(struct sk_buff *skb)
+ 
+ 	/* we have no other options - drop the message */
+ 	audit_log_lost("kauditd hold queue overflow");
++drop:
+ 	kfree_skb(skb);
+ }
+ 
+ /**
+  * kauditd_retry_skb - Queue an audit record, attempt to send again to auditd
+  * @skb: audit record
++ * @error: error code (unused)
+  *
+  * Description:
+  * Not as serious as kauditd_hold_skb() as we still have a connected auditd,
+  * but for some reason we are having problems sending it audit records so
+  * queue the given record and attempt to resend.
+  */
+-static void kauditd_retry_skb(struct sk_buff *skb)
++static void kauditd_retry_skb(struct sk_buff *skb, __always_unused int error)
+ {
+-	/* NOTE: because records should only live in the retry queue for a
+-	 * short period of time, before either being sent or moved to the hold
+-	 * queue, we don't currently enforce a limit on this queue */
+-	skb_queue_tail(&audit_retry_queue, skb);
++	if (!audit_backlog_limit ||
++	    skb_queue_len(&audit_retry_queue) < audit_backlog_limit) {
++		skb_queue_tail(&audit_retry_queue, skb);
++		return;
++	}
++
++	/* we have to drop the record, send it via printk as a last effort */
++	kauditd_printk_skb(skb);
++	audit_log_lost("kauditd retry queue overflow");
++	kfree_skb(skb);
+ }
+ 
+ /**
+@@ -640,7 +662,7 @@ static void auditd_reset(const struct auditd_connection *ac)
+ 	/* flush the retry queue to the hold queue, but don't touch the main
+ 	 * queue since we need to process that normally for multicast */
+ 	while ((skb = skb_dequeue(&audit_retry_queue)))
+-		kauditd_hold_skb(skb);
++		kauditd_hold_skb(skb, -ECONNREFUSED);
+ }
+ 
+ /**
+@@ -714,16 +736,18 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
+ 			      struct sk_buff_head *queue,
+ 			      unsigned int retry_limit,
+ 			      void (*skb_hook)(struct sk_buff *skb),
+-			      void (*err_hook)(struct sk_buff *skb))
++			      void (*err_hook)(struct sk_buff *skb, int error))
+ {
+ 	int rc = 0;
+-	struct sk_buff *skb;
++	struct sk_buff *skb = NULL;
++	struct sk_buff *skb_tail;
+ 	unsigned int failed = 0;
+ 
+ 	/* NOTE: kauditd_thread takes care of all our locking, we just use
+ 	 *       the netlink info passed to us (e.g. sk and portid) */
+ 
+-	while ((skb = skb_dequeue(queue))) {
++	skb_tail = skb_peek_tail(queue);
++	while ((skb != skb_tail) && (skb = skb_dequeue(queue))) {
+ 		/* call the skb_hook for each skb we touch */
+ 		if (skb_hook)
+ 			(*skb_hook)(skb);
+@@ -731,7 +755,7 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
+ 		/* can we send to anyone via unicast? */
+ 		if (!sk) {
+ 			if (err_hook)
+-				(*err_hook)(skb);
++				(*err_hook)(skb, -ECONNREFUSED);
+ 			continue;
+ 		}
+ 
+@@ -745,7 +769,7 @@ retry:
+ 			    rc == -ECONNREFUSED || rc == -EPERM) {
+ 				sk = NULL;
+ 				if (err_hook)
+-					(*err_hook)(skb);
++					(*err_hook)(skb, rc);
+ 				if (rc == -EAGAIN)
+ 					rc = 0;
+ 				/* continue to drain the queue */
+diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
+index f9913bc65ef8d..1e4bf23528a3d 100644
+--- a/kernel/bpf/ringbuf.c
++++ b/kernel/bpf/ringbuf.c
+@@ -108,7 +108,7 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
+ 	}
+ 
+ 	rb = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
+-		  VM_ALLOC | VM_USERMAP, PAGE_KERNEL);
++		  VM_MAP | VM_USERMAP, PAGE_KERNEL);
+ 	if (rb) {
+ 		kmemleak_not_leak(pages);
+ 		rb->pages = pages;
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 7c7758a9e2c24..ef6b3a7f31c17 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1481,10 +1481,15 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
+ 	struct cpuset *sibling;
+ 	struct cgroup_subsys_state *pos_css;
+ 
++	percpu_rwsem_assert_held(&cpuset_rwsem);
++
+ 	/*
+ 	 * Check all its siblings and call update_cpumasks_hier()
+ 	 * if their use_parent_ecpus flag is set in order for them
+ 	 * to use the right effective_cpus value.
++	 *
++	 * The update_cpumasks_hier() function may sleep. So we have to
++	 * release the RCU read lock before calling it.
+ 	 */
+ 	rcu_read_lock();
+ 	cpuset_for_each_child(sibling, pos_css, parent) {
+@@ -1492,8 +1497,13 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
+ 			continue;
+ 		if (!sibling->use_parent_ecpus)
+ 			continue;
++		if (!css_tryget_online(&sibling->css))
++			continue;
+ 
++		rcu_read_unlock();
+ 		update_cpumasks_hier(sibling, tmp);
++		rcu_read_lock();
++		css_put(&sibling->css);
+ 	}
+ 	rcu_read_unlock();
+ }
+diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
+index 12ebc97e8b435..d6fbf28ebf72c 100644
+--- a/mm/debug_vm_pgtable.c
++++ b/mm/debug_vm_pgtable.c
+@@ -128,6 +128,8 @@ static void __init pte_advanced_tests(struct mm_struct *mm,
+ 	ptep_test_and_clear_young(vma, vaddr, ptep);
+ 	pte = ptep_get(ptep);
+ 	WARN_ON(pte_young(pte));
++
++	ptep_get_and_clear_full(mm, vaddr, ptep, 1);
+ }
+ 
+ static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index c0014d3b91c10..56fcfcb8e6173 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -1401,7 +1401,8 @@ static void kmemleak_scan(void)
+ {
+ 	unsigned long flags;
+ 	struct kmemleak_object *object;
+-	int i;
++	struct zone *zone;
++	int __maybe_unused i;
+ 	int new_leaks = 0;
+ 
+ 	jiffies_last_scan = jiffies;
+@@ -1441,9 +1442,9 @@ static void kmemleak_scan(void)
+ 	 * Struct page scanning for each node.
+ 	 */
+ 	get_online_mems();
+-	for_each_online_node(i) {
+-		unsigned long start_pfn = node_start_pfn(i);
+-		unsigned long end_pfn = node_end_pfn(i);
++	for_each_populated_zone(zone) {
++		unsigned long start_pfn = zone->zone_start_pfn;
++		unsigned long end_pfn = zone_end_pfn(zone);
+ 		unsigned long pfn;
+ 
+ 		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+@@ -1452,8 +1453,8 @@ static void kmemleak_scan(void)
+ 			if (!page)
+ 				continue;
+ 
+-			/* only scan pages belonging to this node */
+-			if (page_to_nid(page) != i)
++			/* only scan pages belonging to this zone */
++			if (page_zone(page) != zone)
+ 				continue;
+ 			/* only scan if page is in use */
+ 			if (page_count(page) == 0)
+diff --git a/net/ieee802154/nl802154.c b/net/ieee802154/nl802154.c
+index b34e4f827e756..a493965f157f2 100644
+--- a/net/ieee802154/nl802154.c
++++ b/net/ieee802154/nl802154.c
+@@ -1441,7 +1441,7 @@ static int nl802154_send_key(struct sk_buff *msg, u32 cmd, u32 portid,
+ 
+ 	hdr = nl802154hdr_put(msg, portid, seq, flags, cmd);
+ 	if (!hdr)
+-		return -1;
++		return -ENOBUFS;
+ 
+ 	if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex))
+ 		goto nla_put_failure;
+@@ -1634,7 +1634,7 @@ static int nl802154_send_device(struct sk_buff *msg, u32 cmd, u32 portid,
+ 
+ 	hdr = nl802154hdr_put(msg, portid, seq, flags, cmd);
+ 	if (!hdr)
+-		return -1;
++		return -ENOBUFS;
+ 
+ 	if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex))
+ 		goto nla_put_failure;
+@@ -1812,7 +1812,7 @@ static int nl802154_send_devkey(struct sk_buff *msg, u32 cmd, u32 portid,
+ 
+ 	hdr = nl802154hdr_put(msg, portid, seq, flags, cmd);
+ 	if (!hdr)
+-		return -1;
++		return -ENOBUFS;
+ 
+ 	if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex))
+ 		goto nla_put_failure;
+@@ -1988,7 +1988,7 @@ static int nl802154_send_seclevel(struct sk_buff *msg, u32 cmd, u32 portid,
+ 
+ 	hdr = nl802154hdr_put(msg, portid, seq, flags, cmd);
+ 	if (!hdr)
+-		return -1;
++		return -ENOBUFS;
+ 
+ 	if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex))
+ 		goto nla_put_failure;
+diff --git a/security/selinux/ss/conditional.c b/security/selinux/ss/conditional.c
+index 1ef74c085f2b0..865611127357e 100644
+--- a/security/selinux/ss/conditional.c
++++ b/security/selinux/ss/conditional.c
+@@ -152,6 +152,8 @@ static void cond_list_destroy(struct policydb *p)
+ 	for (i = 0; i < p->cond_list_len; i++)
+ 		cond_node_destroy(&p->cond_list[i]);
+ 	kfree(p->cond_list);
++	p->cond_list = NULL;
++	p->cond_list_len = 0;
+ }
+ 
+ void cond_policydb_destroy(struct policydb *p)
+@@ -440,7 +442,6 @@ int cond_read_list(struct policydb *p, void *fp)
+ 	return 0;
+ err:
+ 	cond_list_destroy(p);
+-	p->cond_list = NULL;
+ 	return rc;
+ }
+ 
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index 323df011b94a3..8ee3be7bbd24e 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -91,6 +91,12 @@ static void snd_hda_gen_spec_free(struct hda_gen_spec *spec)
+ 	free_kctls(spec);
+ 	snd_array_free(&spec->paths);
+ 	snd_array_free(&spec->loopback_list);
++#ifdef CONFIG_SND_HDA_GENERIC_LEDS
++	if (spec->led_cdevs[LED_AUDIO_MUTE])
++		led_classdev_unregister(spec->led_cdevs[LED_AUDIO_MUTE]);
++	if (spec->led_cdevs[LED_AUDIO_MICMUTE])
++		led_classdev_unregister(spec->led_cdevs[LED_AUDIO_MICMUTE]);
++#endif
+ }
+ 
+ /*
+@@ -3911,7 +3917,10 @@ static int create_mute_led_cdev(struct hda_codec *codec,
+ 						enum led_brightness),
+ 				bool micmute)
+ {
++	struct hda_gen_spec *spec = codec->spec;
+ 	struct led_classdev *cdev;
++	int idx = micmute ? LED_AUDIO_MICMUTE : LED_AUDIO_MUTE;
++	int err;
+ 
+ 	cdev = devm_kzalloc(&codec->core.dev, sizeof(*cdev), GFP_KERNEL);
+ 	if (!cdev)
+@@ -3921,10 +3930,14 @@ static int create_mute_led_cdev(struct hda_codec *codec,
+ 	cdev->max_brightness = 1;
+ 	cdev->default_trigger = micmute ? "audio-micmute" : "audio-mute";
+ 	cdev->brightness_set_blocking = callback;
+-	cdev->brightness = ledtrig_audio_get(micmute ? LED_AUDIO_MICMUTE : LED_AUDIO_MUTE);
++	cdev->brightness = ledtrig_audio_get(idx);
+ 	cdev->flags = LED_CORE_SUSPENDRESUME;
+ 
+-	return devm_led_classdev_register(&codec->core.dev, cdev);
++	err = led_classdev_register(&codec->core.dev, cdev);
++	if (err < 0)
++		return err;
++	spec->led_cdevs[idx] = cdev;
++	return 0;
+ }
+ 
+ static void vmaster_update_mute_led(void *private_data, int enabled)
+diff --git a/sound/pci/hda/hda_generic.h b/sound/pci/hda/hda_generic.h
+index 0886bc81f40be..578faa9adcdcd 100644
+--- a/sound/pci/hda/hda_generic.h
++++ b/sound/pci/hda/hda_generic.h
+@@ -305,6 +305,9 @@ struct hda_gen_spec {
+ 				   struct hda_jack_callback *cb);
+ 	void (*mic_autoswitch_hook)(struct hda_codec *codec,
+ 				    struct hda_jack_callback *cb);
++
++	/* leds */
++	struct led_classdev *led_cdevs[NUM_AUDIO_LEDS];
+ };
+ 
+ /* values for add_stereo_mix_input flag */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index a858bb9e99270..aef017ba00708 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -97,6 +97,7 @@ struct alc_spec {
+ 	unsigned int gpio_mic_led_mask;
+ 	struct alc_coef_led mute_led_coef;
+ 	struct alc_coef_led mic_led_coef;
++	struct mutex coef_mutex;
+ 
+ 	hda_nid_t headset_mic_pin;
+ 	hda_nid_t headphone_mic_pin;
+@@ -133,8 +134,8 @@ struct alc_spec {
+  * COEF access helper functions
+  */
+ 
+-static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+-			       unsigned int coef_idx)
++static int __alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
++				 unsigned int coef_idx)
+ {
+ 	unsigned int val;
+ 
+@@ -143,28 +144,61 @@ static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ 	return val;
+ }
+ 
++static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
++			       unsigned int coef_idx)
++{
++	struct alc_spec *spec = codec->spec;
++	unsigned int val;
++
++	mutex_lock(&spec->coef_mutex);
++	val = __alc_read_coefex_idx(codec, nid, coef_idx);
++	mutex_unlock(&spec->coef_mutex);
++	return val;
++}
++
+ #define alc_read_coef_idx(codec, coef_idx) \
+ 	alc_read_coefex_idx(codec, 0x20, coef_idx)
+ 
+-static void alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+-				 unsigned int coef_idx, unsigned int coef_val)
++static void __alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
++				   unsigned int coef_idx, unsigned int coef_val)
+ {
+ 	snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_COEF_INDEX, coef_idx);
+ 	snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_PROC_COEF, coef_val);
+ }
+ 
++static void alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
++				 unsigned int coef_idx, unsigned int coef_val)
++{
++	struct alc_spec *spec = codec->spec;
++
++	mutex_lock(&spec->coef_mutex);
++	__alc_write_coefex_idx(codec, nid, coef_idx, coef_val);
++	mutex_unlock(&spec->coef_mutex);
++}
++
+ #define alc_write_coef_idx(codec, coef_idx, coef_val) \
+ 	alc_write_coefex_idx(codec, 0x20, coef_idx, coef_val)
+ 
++static void __alc_update_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
++				    unsigned int coef_idx, unsigned int mask,
++				    unsigned int bits_set)
++{
++	unsigned int val = __alc_read_coefex_idx(codec, nid, coef_idx);
++
++	if (val != -1)
++		__alc_write_coefex_idx(codec, nid, coef_idx,
++				       (val & ~mask) | bits_set);
++}
++
+ static void alc_update_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ 				  unsigned int coef_idx, unsigned int mask,
+ 				  unsigned int bits_set)
+ {
+-	unsigned int val = alc_read_coefex_idx(codec, nid, coef_idx);
++	struct alc_spec *spec = codec->spec;
+ 
+-	if (val != -1)
+-		alc_write_coefex_idx(codec, nid, coef_idx,
+-				     (val & ~mask) | bits_set);
++	mutex_lock(&spec->coef_mutex);
++	__alc_update_coefex_idx(codec, nid, coef_idx, mask, bits_set);
++	mutex_unlock(&spec->coef_mutex);
+ }
+ 
+ #define alc_update_coef_idx(codec, coef_idx, mask, bits_set)	\
+@@ -197,13 +231,17 @@ struct coef_fw {
+ static void alc_process_coef_fw(struct hda_codec *codec,
+ 				const struct coef_fw *fw)
+ {
++	struct alc_spec *spec = codec->spec;
++
++	mutex_lock(&spec->coef_mutex);
+ 	for (; fw->nid; fw++) {
+ 		if (fw->mask == (unsigned short)-1)
+-			alc_write_coefex_idx(codec, fw->nid, fw->idx, fw->val);
++			__alc_write_coefex_idx(codec, fw->nid, fw->idx, fw->val);
+ 		else
+-			alc_update_coefex_idx(codec, fw->nid, fw->idx,
+-					      fw->mask, fw->val);
++			__alc_update_coefex_idx(codec, fw->nid, fw->idx,
++						fw->mask, fw->val);
+ 	}
++	mutex_unlock(&spec->coef_mutex);
+ }
+ 
+ /*
+@@ -1160,6 +1198,7 @@ static int alc_alloc_spec(struct hda_codec *codec, hda_nid_t mixer_nid)
+ 	codec->spdif_status_reset = 1;
+ 	codec->forced_resume = 1;
+ 	codec->patch_ops = alc_patch_ops;
++	mutex_init(&spec->coef_mutex);
+ 
+ 	err = alc_codec_rename_from_preset(codec);
+ 	if (err < 0) {
+@@ -2132,6 +2171,7 @@ static void alc1220_fixup_gb_x570(struct hda_codec *codec,
+ {
+ 	static const hda_nid_t conn1[] = { 0x0c };
+ 	static const struct coef_fw gb_x570_coefs[] = {
++		WRITE_COEF(0x07, 0x03c0),
+ 		WRITE_COEF(0x1a, 0x01c1),
+ 		WRITE_COEF(0x1b, 0x0202),
+ 		WRITE_COEF(0x43, 0x3005),
+@@ -2558,7 +2598,8 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_GB_X570),
+-	SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_CLEVO_P950),
++	SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_GB_X570),
++	SND_PCI_QUIRK(0x1458, 0xa0d5, "Gigabyte X570S Aorus Master", ALC1220_FIXUP_GB_X570),
+ 	SND_PCI_QUIRK(0x1462, 0x11f7, "MSI-GE63", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1229, "MSI-GP73", ALC1220_FIXUP_CLEVO_P950),
+@@ -2633,6 +2674,7 @@ static const struct hda_model_fixup alc882_fixup_models[] = {
+ 	{.id = ALC882_FIXUP_NO_PRIMARY_HP, .name = "no-primary-hp"},
+ 	{.id = ALC887_FIXUP_ASUS_BASS, .name = "asus-bass"},
+ 	{.id = ALC1220_FIXUP_GB_DUAL_CODECS, .name = "dual-codecs"},
++	{.id = ALC1220_FIXUP_GB_X570, .name = "gb-x570"},
+ 	{.id = ALC1220_FIXUP_CLEVO_P950, .name = "clevo-p950"},
+ 	{}
+ };
+@@ -8750,6 +8792,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
++	SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ 	SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+diff --git a/sound/soc/codecs/cpcap.c b/sound/soc/codecs/cpcap.c
+index c0425e3707d9c..a3597137fee3e 100644
+--- a/sound/soc/codecs/cpcap.c
++++ b/sound/soc/codecs/cpcap.c
+@@ -1544,6 +1544,8 @@ static int cpcap_codec_probe(struct platform_device *pdev)
+ {
+ 	struct device_node *codec_node =
+ 		of_get_child_by_name(pdev->dev.parent->of_node, "audio-codec");
++	if (!codec_node)
++		return -ENODEV;
+ 
+ 	pdev->dev.of_node = codec_node;
+ 
+diff --git a/sound/soc/codecs/max9759.c b/sound/soc/codecs/max9759.c
+index 00e9d4fd1651f..0c261335c8a16 100644
+--- a/sound/soc/codecs/max9759.c
++++ b/sound/soc/codecs/max9759.c
+@@ -64,7 +64,8 @@ static int speaker_gain_control_put(struct snd_kcontrol *kcontrol,
+ 	struct snd_soc_component *c = snd_soc_kcontrol_component(kcontrol);
+ 	struct max9759 *priv = snd_soc_component_get_drvdata(c);
+ 
+-	if (ucontrol->value.integer.value[0] > 3)
++	if (ucontrol->value.integer.value[0] < 0 ||
++	    ucontrol->value.integer.value[0] > 3)
+ 		return -EINVAL;
+ 
+ 	priv->gain = ucontrol->value.integer.value[0];
+diff --git a/sound/soc/fsl/pcm030-audio-fabric.c b/sound/soc/fsl/pcm030-audio-fabric.c
+index af3c3b90c0aca..83b4a22bf15ac 100644
+--- a/sound/soc/fsl/pcm030-audio-fabric.c
++++ b/sound/soc/fsl/pcm030-audio-fabric.c
+@@ -93,16 +93,21 @@ static int pcm030_fabric_probe(struct platform_device *op)
+ 		dev_err(&op->dev, "platform_device_alloc() failed\n");
+ 
+ 	ret = platform_device_add(pdata->codec_device);
+-	if (ret)
++	if (ret) {
+ 		dev_err(&op->dev, "platform_device_add() failed: %d\n", ret);
++		platform_device_put(pdata->codec_device);
++	}
+ 
+ 	ret = snd_soc_register_card(card);
+-	if (ret)
++	if (ret) {
+ 		dev_err(&op->dev, "snd_soc_register_card() failed: %d\n", ret);
++		platform_device_del(pdata->codec_device);
++		platform_device_put(pdata->codec_device);
++	}
+ 
+ 	platform_set_drvdata(op, pdata);
+-
+ 	return ret;
++
+ }
+ 
+ static int pcm030_fabric_remove(struct platform_device *op)
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 10f48827bb0e0..f24f7354f46fe 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -316,13 +316,27 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+ 	if (sign_bit)
+ 		mask = BIT(sign_bit + 1) - 1;
+ 
+-	val = ((ucontrol->value.integer.value[0] + min) & mask);
++	val = ucontrol->value.integer.value[0];
++	if (mc->platform_max && val > mc->platform_max)
++		return -EINVAL;
++	if (val > max - min)
++		return -EINVAL;
++	if (val < 0)
++		return -EINVAL;
++	val = (val + min) & mask;
+ 	if (invert)
+ 		val = max - val;
+ 	val_mask = mask << shift;
+ 	val = val << shift;
+ 	if (snd_soc_volsw_is_stereo(mc)) {
+-		val2 = ((ucontrol->value.integer.value[1] + min) & mask);
++		val2 = ucontrol->value.integer.value[1];
++		if (mc->platform_max && val2 > mc->platform_max)
++			return -EINVAL;
++		if (val2 > max - min)
++			return -EINVAL;
++		if (val2 < 0)
++			return -EINVAL;
++		val2 = (val2 + min) & mask;
+ 		if (invert)
+ 			val2 = max - val2;
+ 		if (reg == reg2) {
+@@ -409,8 +423,15 @@ int snd_soc_put_volsw_sx(struct snd_kcontrol *kcontrol,
+ 	int err = 0;
+ 	unsigned int val, val_mask, val2 = 0;
+ 
++	val = ucontrol->value.integer.value[0];
++	if (mc->platform_max && val > mc->platform_max)
++		return -EINVAL;
++	if (val > max - min)
++		return -EINVAL;
++	if (val < 0)
++		return -EINVAL;
+ 	val_mask = mask << shift;
+-	val = (ucontrol->value.integer.value[0] + min) & mask;
++	val = (val + min) & mask;
+ 	val = val << shift;
+ 
+ 	err = snd_soc_component_update_bits(component, reg, val_mask, val);
+@@ -859,6 +880,8 @@ int snd_soc_put_xr_sx(struct snd_kcontrol *kcontrol,
+ 	unsigned int i, regval, regmask;
+ 	int err;
+ 
++	if (val < mc->min || val > mc->max)
++		return -EINVAL;
+ 	if (invert)
+ 		val = max - val;
+ 	val &= mask;
+diff --git a/sound/soc/xilinx/xlnx_formatter_pcm.c b/sound/soc/xilinx/xlnx_formatter_pcm.c
+index 91afea9d5de67..ce19a6058b279 100644
+--- a/sound/soc/xilinx/xlnx_formatter_pcm.c
++++ b/sound/soc/xilinx/xlnx_formatter_pcm.c
+@@ -37,6 +37,7 @@
+ #define XLNX_AUD_XFER_COUNT	0x28
+ #define XLNX_AUD_CH_STS_START	0x2C
+ #define XLNX_BYTES_PER_CH	0x44
++#define XLNX_AUD_ALIGN_BYTES	64
+ 
+ #define AUD_STS_IOC_IRQ_MASK	BIT(31)
+ #define AUD_STS_CH_STS_MASK	BIT(29)
+@@ -368,12 +369,32 @@ static int xlnx_formatter_pcm_open(struct snd_soc_component *component,
+ 	snd_soc_set_runtime_hwparams(substream, &xlnx_pcm_hardware);
+ 	runtime->private_data = stream_data;
+ 
+-	/* Resize the period size divisible by 64 */
++	/* Resize the period bytes as divisible by 64 */
+ 	err = snd_pcm_hw_constraint_step(runtime, 0,
+-					 SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 64);
++					 SNDRV_PCM_HW_PARAM_PERIOD_BYTES,
++					 XLNX_AUD_ALIGN_BYTES);
+ 	if (err) {
+ 		dev_err(component->dev,
+-			"unable to set constraint on period bytes\n");
++			"Unable to set constraint on period bytes\n");
++		return err;
++	}
++
++	/* Resize the buffer bytes as divisible by 64 */
++	err = snd_pcm_hw_constraint_step(runtime, 0,
++					 SNDRV_PCM_HW_PARAM_BUFFER_BYTES,
++					 XLNX_AUD_ALIGN_BYTES);
++	if (err) {
++		dev_err(component->dev,
++			"Unable to set constraint on buffer bytes\n");
++		return err;
++	}
++
++	/* Set periods as integer multiple */
++	err = snd_pcm_hw_constraint_integer(runtime,
++					    SNDRV_PCM_HW_PARAM_PERIODS);
++	if (err < 0) {
++		dev_err(component->dev,
++			"Unable to set constraint on periods to be integer\n");
+ 		return err;
+ 	}
+ 
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 949c6d129f2a9..aabd3a10ec5b4 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -84,7 +84,7 @@
+  * combination.
+  */
+ {
+-	USB_DEVICE(0x041e, 0x4095),
++	USB_AUDIO_DEVICE(0x041e, 0x4095),
+ 	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+ 		.ifnum = QUIRK_ANY_INTERFACE,
+ 		.type = QUIRK_COMPOSITE,
+diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile
+index bb9fa8de7e625..af9f9d3534c96 100644
+--- a/tools/bpf/resolve_btfids/Makefile
++++ b/tools/bpf/resolve_btfids/Makefile
+@@ -9,7 +9,11 @@ ifeq ($(V),1)
+   msg =
+ else
+   Q = @
+-  msg = @printf '  %-8s %s%s\n' "$(1)" "$(notdir $(2))" "$(if $(3), $(3))";
++  ifeq ($(silent),1)
++    msg =
++  else
++    msg = @printf '  %-8s %s%s\n' "$(1)" "$(notdir $(2))" "$(if $(3), $(3))";
++  endif
+   MAKEFLAGS=--no-print-directory
+ endif
+ 
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index a963b5b8eb724..96fe9c1af3364 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -555,15 +555,16 @@ static void collect_all_aliases(struct perf_stat_config *config, struct evsel *c
+ 
+ 	alias = list_prepare_entry(counter, &(evlist->core.entries), core.node);
+ 	list_for_each_entry_continue (alias, &evlist->core.entries, core.node) {
+-		if (strcmp(evsel__name(alias), evsel__name(counter)) ||
+-		    alias->scale != counter->scale ||
+-		    alias->cgrp != counter->cgrp ||
+-		    strcmp(alias->unit, counter->unit) ||
+-		    evsel__is_clock(alias) != evsel__is_clock(counter) ||
+-		    !strcmp(alias->pmu_name, counter->pmu_name))
+-			break;
+-		alias->merged_stat = true;
+-		cb(config, alias, data, false);
++		/* Merge events with the same name, etc. but on different PMUs. */
++		if (!strcmp(evsel__name(alias), evsel__name(counter)) &&
++			alias->scale == counter->scale &&
++			alias->cgrp == counter->cgrp &&
++			!strcmp(alias->unit, counter->unit) &&
++			evsel__is_clock(alias) == evsel__is_clock(counter) &&
++			strcmp(alias->pmu_name, counter->pmu_name)) {
++			alias->merged_stat = true;
++			cb(config, alias, data, false);
++		}
+ 	}
+ }
+ 
+diff --git a/tools/testing/selftests/exec/Makefile b/tools/testing/selftests/exec/Makefile
+index dd61118df66ed..12c5e27d32c16 100644
+--- a/tools/testing/selftests/exec/Makefile
++++ b/tools/testing/selftests/exec/Makefile
+@@ -5,7 +5,7 @@ CFLAGS += -D_GNU_SOURCE
+ 
+ TEST_PROGS := binfmt_script non-regular
+ TEST_GEN_PROGS := execveat load_address_4096 load_address_2097152 load_address_16777216
+-TEST_GEN_FILES := execveat.symlink execveat.denatured script subdir pipe
++TEST_GEN_FILES := execveat.symlink execveat.denatured script subdir
+ # Makefile is a run-time dependency, since it's accessed by the execveat test
+ TEST_FILES := Makefile
+ 
+diff --git a/tools/testing/selftests/futex/Makefile b/tools/testing/selftests/futex/Makefile
+index 12631f0076a10..11e157d7533b8 100644
+--- a/tools/testing/selftests/futex/Makefile
++++ b/tools/testing/selftests/futex/Makefile
+@@ -11,7 +11,7 @@ all:
+ 	@for DIR in $(SUBDIRS); do		\
+ 		BUILD_TARGET=$(OUTPUT)/$$DIR;	\
+ 		mkdir $$BUILD_TARGET  -p;	\
+-		make OUTPUT=$$BUILD_TARGET -C $$DIR $@;\
++		$(MAKE) OUTPUT=$$BUILD_TARGET -C $$DIR $@;\
+ 		if [ -e $$DIR/$(TEST_PROGS) ]; then \
+ 			rsync -a $$DIR/$(TEST_PROGS) $$BUILD_TARGET/; \
+ 		fi \
+@@ -32,6 +32,6 @@ override define CLEAN
+ 	@for DIR in $(SUBDIRS); do		\
+ 		BUILD_TARGET=$(OUTPUT)/$$DIR;	\
+ 		mkdir $$BUILD_TARGET  -p;	\
+-		make OUTPUT=$$BUILD_TARGET -C $$DIR $@;\
++		$(MAKE) OUTPUT=$$BUILD_TARGET -C $$DIR $@;\
+ 	done
+ endef
+diff --git a/tools/testing/selftests/netfilter/nft_concat_range.sh b/tools/testing/selftests/netfilter/nft_concat_range.sh
+index 5a4938d6dcf25..9313fa32bef13 100755
+--- a/tools/testing/selftests/netfilter/nft_concat_range.sh
++++ b/tools/testing/selftests/netfilter/nft_concat_range.sh
+@@ -27,7 +27,7 @@ TYPES="net_port port_net net6_port port_proto net6_port_mac net6_port_mac_proto
+        net_port_mac_proto_net"
+ 
+ # Reported bugs, also described by TYPE_ variables below
+-BUGS="flush_remove_add"
++BUGS="flush_remove_add reload"
+ 
+ # List of possible paths to pktgen script from kernel tree for performance tests
+ PKTGEN_SCRIPT_PATHS="
+@@ -337,6 +337,23 @@ TYPE_flush_remove_add="
+ display		Add two elements, flush, re-add
+ "
+ 
++TYPE_reload="
++display		net,mac with reload
++type_spec	ipv4_addr . ether_addr
++chain_spec	ip daddr . ether saddr
++dst		addr4
++src		mac
++start		1
++count		1
++src_delta	2000
++tools		sendip nc bash
++proto		udp
++
++race_repeat	0
++
++perf_duration	0
++"
++
+ # Set template for all tests, types and rules are filled in depending on test
+ set_template='
+ flush ruleset
+@@ -1455,6 +1472,59 @@ test_bug_flush_remove_add() {
+ 	nft flush ruleset
+ }
+ 
++# - add ranged element, check that packets match it
++# - reload the set, check packets still match
++test_bug_reload() {
++	setup veth send_"${proto}" set || return ${KSELFTEST_SKIP}
++	rstart=${start}
++
++	range_size=1
++	for i in $(seq "${start}" $((start + count))); do
++		end=$((start + range_size))
++
++		# Avoid negative or zero-sized port ranges
++		if [ $((end / 65534)) -gt $((start / 65534)) ]; then
++			start=${end}
++			end=$((end + 1))
++		fi
++		srcstart=$((start + src_delta))
++		srcend=$((end + src_delta))
++
++		add "$(format)" || return 1
++		range_size=$((range_size + 1))
++		start=$((end + range_size))
++	done
++
++	# check kernel does allocate pcpu sctrach map
++	# for reload with no elemet add/delete
++	( echo flush set inet filter test ;
++	  nft list set inet filter test ) | nft -f -
++
++	start=${rstart}
++	range_size=1
++
++	for i in $(seq "${start}" $((start + count))); do
++		end=$((start + range_size))
++
++		# Avoid negative or zero-sized port ranges
++		if [ $((end / 65534)) -gt $((start / 65534)) ]; then
++			start=${end}
++			end=$((end + 1))
++		fi
++		srcstart=$((start + src_delta))
++		srcend=$((end + src_delta))
++
++		for j in $(seq ${start} $((range_size / 2 + 1)) ${end}); do
++			send_match "${j}" $((j + src_delta)) || return 1
++		done
++
++		range_size=$((range_size + 1))
++		start=$((end + range_size))
++	done
++
++	nft flush ruleset
++}
++
+ test_reported_issues() {
+ 	eval test_bug_"${subtest}"
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-02-11 12:35 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-02-11 12:35 UTC (permalink / raw
  To: gentoo-commits

commit:     9a07ae45606af0e0736cf6963e6959a171e5bc9d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 11 12:35:14 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb 11 12:35:14 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9a07ae45

Linux patch 5.10.100

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 ++
 1099_linux-5.10.100.patch | 109 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/0000_README b/0000_README
index c04d5d96..0a475786 100644
--- a/0000_README
+++ b/0000_README
@@ -439,6 +439,10 @@ Patch:  1098_linux-5.10.99.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.99
 
+Patch:  1099_linux-5.10.100.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.100
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1099_linux-5.10.100.patch b/1099_linux-5.10.100.patch
new file mode 100644
index 00000000..4197eebf
--- /dev/null
+++ b/1099_linux-5.10.100.patch
@@ -0,0 +1,109 @@
+diff --git a/Makefile b/Makefile
+index 593638785d293..fb96cca42ddb5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 99
++SUBLEVEL = 100
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 07a04f3926009..d8e9239c24ffc 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -4654,6 +4654,8 @@ static long kvm_s390_guest_sida_op(struct kvm_vcpu *vcpu,
+ 		return -EINVAL;
+ 	if (mop->size + mop->sida_offset > sida_size(vcpu->arch.sie_block))
+ 		return -E2BIG;
++	if (!kvm_s390_pv_cpu_is_protected(vcpu))
++		return -EINVAL;
+ 
+ 	switch (mop->op) {
+ 	case KVM_S390_MEMOP_SIDA_READ:
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index fdabf2675b63f..9de27daa98b47 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -1295,3 +1295,4 @@ module_exit(crypto_algapi_exit);
+ 
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Cryptographic algorithms API");
++MODULE_SOFTDEP("pre: cryptomgr");
+diff --git a/crypto/api.c b/crypto/api.c
+index c4eda56cff891..5ffcd3ab4a753 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -603,4 +603,3 @@ EXPORT_SYMBOL_GPL(crypto_req_done);
+ 
+ MODULE_DESCRIPTION("Cryptographic core API");
+ MODULE_LICENSE("GPL");
+-MODULE_SOFTDEP("pre: cryptomgr");
+diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
+index 7697068ad9695..ea67a7ef2390c 100644
+--- a/drivers/mmc/host/moxart-mmc.c
++++ b/drivers/mmc/host/moxart-mmc.c
+@@ -708,12 +708,12 @@ static int moxart_remove(struct platform_device *pdev)
+ 	if (!IS_ERR_OR_NULL(host->dma_chan_rx))
+ 		dma_release_channel(host->dma_chan_rx);
+ 	mmc_remove_host(mmc);
+-	mmc_free_host(mmc);
+ 
+ 	writel(0, host->base + REG_INTERRUPT_MASK);
+ 	writel(0, host->base + REG_POWER_CONTROL);
+ 	writel(readl(host->base + REG_CLOCK_CONTROL) | CLK_OFF,
+ 	       host->base + REG_CLOCK_CONTROL);
++	mmc_free_host(mmc);
+ 
+ 	return 0;
+ }
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index 29591955d08a5..fb835a3822f49 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -2159,7 +2159,7 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 	struct tipc_msg *hdr = buf_msg(skb);
+ 	struct tipc_gap_ack_blks *ga = NULL;
+ 	bool reply = msg_probe(hdr), retransmitted = false;
+-	u16 dlen = msg_data_sz(hdr), glen = 0;
++	u32 dlen = msg_data_sz(hdr), glen = 0;
+ 	u16 peers_snd_nxt =  msg_next_sent(hdr);
+ 	u16 peers_tol = msg_link_tolerance(hdr);
+ 	u16 peers_prio = msg_linkprio(hdr);
+@@ -2173,6 +2173,10 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 	void *data;
+ 
+ 	trace_tipc_proto_rcv(skb, false, l->name);
++
++	if (dlen > U16_MAX)
++		goto exit;
++
+ 	if (tipc_link_is_blocked(l) || !xmitq)
+ 		goto exit;
+ 
+@@ -2268,7 +2272,8 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 
+ 		/* Receive Gap ACK blocks from peer if any */
+ 		glen = tipc_get_gap_ack_blks(&ga, l, hdr, true);
+-
++		if(glen > dlen)
++			break;
+ 		tipc_mon_rcv(l->net, data + glen, dlen - glen, l->addr,
+ 			     &l->mon_state, l->bearer_id);
+ 
+diff --git a/net/tipc/monitor.c b/net/tipc/monitor.c
+index 6dce2abf436ee..a37190da5a504 100644
+--- a/net/tipc/monitor.c
++++ b/net/tipc/monitor.c
+@@ -465,6 +465,8 @@ void tipc_mon_rcv(struct net *net, void *data, u16 dlen, u32 addr,
+ 	state->probing = false;
+ 
+ 	/* Sanity check received domain record */
++	if (new_member_cnt > MAX_MON_DOMAIN)
++		return;
+ 	if (dlen < dom_rec_len(arrv_dom, 0))
+ 		return;
+ 	if (dlen != dom_rec_len(arrv_dom, new_member_cnt))


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-02-16 12:46 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-02-16 12:46 UTC (permalink / raw
  To: gentoo-commits

commit:     2458a856d0cb0cd36442d24c5a3c0a43bf0d5ccc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 16 12:45:50 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Feb 16 12:45:50 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2458a856

Linux patch 5.10.101

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1100_linux-5.10.101.patch | 3452 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3456 insertions(+)

diff --git a/0000_README b/0000_README
index 0a475786..25df2085 100644
--- a/0000_README
+++ b/0000_README
@@ -443,6 +443,10 @@ Patch:  1099_linux-5.10.100.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.100
 
+Patch:  1100_linux-5.10.101.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.101
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1100_linux-5.10.101.patch b/1100_linux-5.10.101.patch
new file mode 100644
index 00000000..d853eaa9
--- /dev/null
+++ b/1100_linux-5.10.101.patch
@@ -0,0 +1,3452 @@
+diff --git a/Documentation/devicetree/bindings/arm/omap/omap.txt b/Documentation/devicetree/bindings/arm/omap/omap.txt
+index e77635c5422c6..fa8b31660cadd 100644
+--- a/Documentation/devicetree/bindings/arm/omap/omap.txt
++++ b/Documentation/devicetree/bindings/arm/omap/omap.txt
+@@ -119,6 +119,9 @@ Boards (incomplete list of examples):
+ - OMAP3 BeagleBoard : Low cost community board
+   compatible = "ti,omap3-beagle", "ti,omap3430", "ti,omap3"
+ 
++- OMAP3 BeagleBoard A to B4 : Early BeagleBoard revisions A to B4 with a timer quirk
++  compatible = "ti,omap3-beagle-ab4", "ti,omap3-beagle", "ti,omap3430", "ti,omap3"
++
+ - OMAP3 Tobi with Overo : Commercial expansion board with daughter board
+   compatible = "gumstix,omap3-overo-tobi", "gumstix,omap3-overo", "ti,omap3430", "ti,omap3"
+ 
+diff --git a/Makefile b/Makefile
+index fb96cca42ddb5..32d9ed44e1c47 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 100
++SUBLEVEL = 101
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/Makefile b/arch/arm/boot/dts/Makefile
+index ce66ffd5a1bbc..7e8151681597c 100644
+--- a/arch/arm/boot/dts/Makefile
++++ b/arch/arm/boot/dts/Makefile
+@@ -731,6 +731,7 @@ dtb-$(CONFIG_ARCH_OMAP3) += \
+ 	logicpd-som-lv-37xx-devkit.dtb \
+ 	omap3430-sdp.dtb \
+ 	omap3-beagle.dtb \
++	omap3-beagle-ab4.dtb \
+ 	omap3-beagle-xm.dtb \
+ 	omap3-beagle-xm-ab.dtb \
+ 	omap3-cm-t3517.dtb \
+diff --git a/arch/arm/boot/dts/imx23-evk.dts b/arch/arm/boot/dts/imx23-evk.dts
+index 8cbaf1c811745..3b609d987d883 100644
+--- a/arch/arm/boot/dts/imx23-evk.dts
++++ b/arch/arm/boot/dts/imx23-evk.dts
+@@ -79,7 +79,6 @@
+ 						MX23_PAD_LCD_RESET__GPIO_1_18
+ 						MX23_PAD_PWM3__GPIO_1_29
+ 						MX23_PAD_PWM4__GPIO_1_30
+-						MX23_PAD_SSP1_DETECT__SSP1_DETECT
+ 					>;
+ 					fsl,drive-strength = <MXS_DRIVE_4mA>;
+ 					fsl,voltage = <MXS_VOLTAGE_HIGH>;
+diff --git a/arch/arm/boot/dts/imx6qdl-udoo.dtsi b/arch/arm/boot/dts/imx6qdl-udoo.dtsi
+index d07d8f83456d2..ccfa8e320be62 100644
+--- a/arch/arm/boot/dts/imx6qdl-udoo.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-udoo.dtsi
+@@ -5,6 +5,8 @@
+  * Author: Fabio Estevam <fabio.estevam@freescale.com>
+  */
+ 
++#include <dt-bindings/gpio/gpio.h>
++
+ / {
+ 	aliases {
+ 		backlight = &backlight;
+@@ -226,6 +228,7 @@
+ 				MX6QDL_PAD_SD3_DAT1__SD3_DATA1		0x17059
+ 				MX6QDL_PAD_SD3_DAT2__SD3_DATA2		0x17059
+ 				MX6QDL_PAD_SD3_DAT3__SD3_DATA3		0x17059
++				MX6QDL_PAD_SD3_DAT5__GPIO7_IO00		0x1b0b0
+ 			>;
+ 		};
+ 
+@@ -304,7 +307,7 @@
+ &usdhc3 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_usdhc3>;
+-	non-removable;
++	cd-gpios = <&gpio7 0 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm/boot/dts/imx7ulp.dtsi b/arch/arm/boot/dts/imx7ulp.dtsi
+index b7ea37ad4e55c..bcec98b964114 100644
+--- a/arch/arm/boot/dts/imx7ulp.dtsi
++++ b/arch/arm/boot/dts/imx7ulp.dtsi
+@@ -259,7 +259,7 @@
+ 			interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&pcc2 IMX7ULP_CLK_WDG1>;
+ 			assigned-clocks = <&pcc2 IMX7ULP_CLK_WDG1>;
+-			assigned-clocks-parents = <&scg1 IMX7ULP_CLK_FIRC_BUS_CLK>;
++			assigned-clock-parents = <&scg1 IMX7ULP_CLK_FIRC_BUS_CLK>;
+ 			timeout-sec = <40>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/meson.dtsi b/arch/arm/boot/dts/meson.dtsi
+index 7649dd1e0b9ee..c928ae312e19c 100644
+--- a/arch/arm/boot/dts/meson.dtsi
++++ b/arch/arm/boot/dts/meson.dtsi
+@@ -42,14 +42,14 @@
+ 			};
+ 
+ 			uart_A: serial@84c0 {
+-				compatible = "amlogic,meson6-uart", "amlogic,meson-uart";
++				compatible = "amlogic,meson6-uart";
+ 				reg = <0x84c0 0x18>;
+ 				interrupts = <GIC_SPI 26 IRQ_TYPE_EDGE_RISING>;
+ 				status = "disabled";
+ 			};
+ 
+ 			uart_B: serial@84dc {
+-				compatible = "amlogic,meson6-uart", "amlogic,meson-uart";
++				compatible = "amlogic,meson6-uart";
+ 				reg = <0x84dc 0x18>;
+ 				interrupts = <GIC_SPI 75 IRQ_TYPE_EDGE_RISING>;
+ 				status = "disabled";
+@@ -87,7 +87,7 @@
+ 			};
+ 
+ 			uart_C: serial@8700 {
+-				compatible = "amlogic,meson6-uart", "amlogic,meson-uart";
++				compatible = "amlogic,meson6-uart";
+ 				reg = <0x8700 0x18>;
+ 				interrupts = <GIC_SPI 93 IRQ_TYPE_EDGE_RISING>;
+ 				status = "disabled";
+@@ -203,7 +203,7 @@
+ 			};
+ 
+ 			uart_AO: serial@4c0 {
+-				compatible = "amlogic,meson6-uart", "amlogic,meson-ao-uart", "amlogic,meson-uart";
++				compatible = "amlogic,meson6-uart", "amlogic,meson-ao-uart";
+ 				reg = <0x4c0 0x18>;
+ 				interrupts = <GIC_SPI 90 IRQ_TYPE_EDGE_RISING>;
+ 				status = "disabled";
+diff --git a/arch/arm/boot/dts/meson8.dtsi b/arch/arm/boot/dts/meson8.dtsi
+index 740a6c816266c..08533116a39ce 100644
+--- a/arch/arm/boot/dts/meson8.dtsi
++++ b/arch/arm/boot/dts/meson8.dtsi
+@@ -598,27 +598,27 @@
+ };
+ 
+ &uart_AO {
+-	compatible = "amlogic,meson8-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_CLK81>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8-uart", "amlogic,meson-ao-uart";
++	clocks = <&xtal>, <&clkc CLKID_CLK81>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_A {
+-	compatible = "amlogic,meson8-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_UART0>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8-uart";
++	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_B {
+-	compatible = "amlogic,meson8-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_UART1>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8-uart";
++	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_C {
+-	compatible = "amlogic,meson8-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_UART2>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8-uart";
++	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &usb0 {
+diff --git a/arch/arm/boot/dts/meson8b.dtsi b/arch/arm/boot/dts/meson8b.dtsi
+index 2401cdf5f7511..f6eb7c803174e 100644
+--- a/arch/arm/boot/dts/meson8b.dtsi
++++ b/arch/arm/boot/dts/meson8b.dtsi
+@@ -586,27 +586,27 @@
+ };
+ 
+ &uart_AO {
+-	compatible = "amlogic,meson8b-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_CLK81>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8b-uart", "amlogic,meson-ao-uart";
++	clocks = <&xtal>, <&clkc CLKID_CLK81>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_A {
+-	compatible = "amlogic,meson8b-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_UART0>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8b-uart";
++	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_B {
+-	compatible = "amlogic,meson8b-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_UART1>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8b-uart";
++	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_C {
+-	compatible = "amlogic,meson8b-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_UART2>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8b-uart";
++	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &usb0 {
+diff --git a/arch/arm/boot/dts/omap3-beagle-ab4.dts b/arch/arm/boot/dts/omap3-beagle-ab4.dts
+new file mode 100644
+index 0000000000000..990ff2d846868
+--- /dev/null
++++ b/arch/arm/boot/dts/omap3-beagle-ab4.dts
+@@ -0,0 +1,47 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/dts-v1/;
++
++#include "omap3-beagle.dts"
++
++/ {
++	model = "TI OMAP3 BeagleBoard A to B4";
++	compatible = "ti,omap3-beagle-ab4", "ti,omap3-beagle", "ti,omap3430", "ti,omap3";
++};
++
++/*
++ * Workaround for capacitor C70 issue, see "Boards revision A and < B5"
++ * section at https://elinux.org/BeagleBoard_Community
++ */
++
++/* Unusable as clocksource because of unreliable oscillator */
++&counter32k {
++	status = "disabled";
++};
++
++/* Unusable as clockevent because of unreliable oscillator, allow to idle */
++&timer1_target {
++	/delete-property/ti,no-reset-on-init;
++	/delete-property/ti,no-idle;
++	timer@0 {
++		/delete-property/ti,timer-alwon;
++	};
++};
++
++/* Preferred always-on timer for clocksource */
++&timer12_target {
++	ti,no-reset-on-init;
++	ti,no-idle;
++	timer@0 {
++		/* Always clocked by secure_32k_fck */
++	};
++};
++
++/* Preferred timer for clockevent */
++&timer2_target {
++	ti,no-reset-on-init;
++	ti,no-idle;
++	timer@0 {
++		assigned-clocks = <&gpt2_fck>;
++		assigned-clock-parents = <&sys_ck>;
++	};
++};
+diff --git a/arch/arm/boot/dts/omap3-beagle.dts b/arch/arm/boot/dts/omap3-beagle.dts
+index f9f34b8458e91..0548b391334fd 100644
+--- a/arch/arm/boot/dts/omap3-beagle.dts
++++ b/arch/arm/boot/dts/omap3-beagle.dts
+@@ -304,39 +304,6 @@
+ 	phys = <0 &hsusb2_phy>;
+ };
+ 
+-/* Unusable as clocksource because of unreliable oscillator */
+-&counter32k {
+-	status = "disabled";
+-};
+-
+-/* Unusable as clockevent because if unreliable oscillator, allow to idle */
+-&timer1_target {
+-	/delete-property/ti,no-reset-on-init;
+-	/delete-property/ti,no-idle;
+-	timer@0 {
+-		/delete-property/ti,timer-alwon;
+-	};
+-};
+-
+-/* Preferred always-on timer for clocksource */
+-&timer12_target {
+-	ti,no-reset-on-init;
+-	ti,no-idle;
+-	timer@0 {
+-		/* Always clocked by secure_32k_fck */
+-	};
+-};
+-
+-/* Preferred timer for clockevent */
+-&timer2_target {
+-	ti,no-reset-on-init;
+-	ti,no-idle;
+-	timer@0 {
+-		assigned-clocks = <&gpt2_fck>;
+-		assigned-clock-parents = <&sys_ck>;
+-	};
+-};
+-
+ &twl_gpio {
+ 	ti,use-leds;
+ 	/* pullups: BIT(1) */
+diff --git a/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts b/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts
+index 08bddbf0336da..446d93c1c7824 100644
+--- a/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts
++++ b/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts
+@@ -154,10 +154,6 @@
+ 			cap-sd-highspeed;
+ 			cap-mmc-highspeed;
+ 			/* All direction control is used */
+-			st,sig-dir-cmd;
+-			st,sig-dir-dat0;
+-			st,sig-dir-dat2;
+-			st,sig-dir-dat31;
+ 			st,sig-pin-fbclk;
+ 			full-pwr-cycle;
+ 			vmmc-supply = <&ab8500_ldo_aux3_reg>;
+diff --git a/arch/arm/mach-socfpga/Kconfig b/arch/arm/mach-socfpga/Kconfig
+index c3bb68d57cea2..b62ae4dafa2eb 100644
+--- a/arch/arm/mach-socfpga/Kconfig
++++ b/arch/arm/mach-socfpga/Kconfig
+@@ -2,6 +2,7 @@
+ menuconfig ARCH_SOCFPGA
+ 	bool "Altera SOCFPGA family"
+ 	depends on ARCH_MULTI_V7
++	select ARCH_HAS_RESET_CONTROLLER
+ 	select ARCH_SUPPORTS_BIG_ENDIAN
+ 	select ARM_AMBA
+ 	select ARM_GIC
+@@ -18,6 +19,7 @@ menuconfig ARCH_SOCFPGA
+ 	select PL310_ERRATA_727915
+ 	select PL310_ERRATA_753970 if PL310
+ 	select PL310_ERRATA_769419
++	select RESET_CONTROLLER
+ 
+ if ARCH_SOCFPGA
+ config SOCFPGA_SUSPEND
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+index b9b8cd4b5ba9d..87e8e64ad5cae 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+@@ -15,7 +15,7 @@
+ 		ethernet0 = &ethmac;
+ 	};
+ 
+-	dioo2133: audio-amplifier-0 {
++	dio2133: audio-amplifier-0 {
+ 		compatible = "simple-audio-amplifier";
+ 		enable-gpios = <&gpio_ao GPIOAO_2 GPIO_ACTIVE_HIGH>;
+ 		VCC-supply = <&vcc_5v>;
+@@ -215,7 +215,7 @@
+ 		audio-widgets = "Line", "Lineout";
+ 		audio-aux-devs = <&tdmout_b>, <&tdmout_c>, <&tdmin_a>,
+ 				 <&tdmin_b>, <&tdmin_c>, <&tdmin_lb>,
+-				 <&dioo2133>;
++				 <&dio2133>;
+ 		audio-routing = "TDMOUT_B IN 0", "FRDDR_A OUT 1",
+ 				"TDMOUT_B IN 1", "FRDDR_B OUT 1",
+ 				"TDMOUT_B IN 2", "FRDDR_C OUT 1",
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+index c86cf786f4061..8d0d41973ff54 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+@@ -524,7 +524,7 @@
+ 				assigned-clock-rates = <0>, <0>, <0>, <594000000>;
+ 				status = "disabled";
+ 
+-				port@0 {
++				port {
+ 					lcdif_mipi_dsi: endpoint {
+ 						remote-endpoint = <&mipi_dsi_lcdif_in>;
+ 					};
+diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
+index 523d3e6e24009..94c5c66231a8c 100644
+--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
+@@ -142,6 +142,7 @@ static inline bool pte_user(pte_t pte)
+ #ifndef __ASSEMBLY__
+ 
+ int map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot);
++void unmap_kernel_page(unsigned long va);
+ 
+ #endif /* !__ASSEMBLY__ */
+ 
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index 4a3dca0271f1e..71e2c524f1eea 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -1054,6 +1054,8 @@ static inline int map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t p
+ 	return hash__map_kernel_page(ea, pa, prot);
+ }
+ 
++void unmap_kernel_page(unsigned long va);
++
+ static inline int __meminit vmemmap_create_mapping(unsigned long start,
+ 						   unsigned long page_size,
+ 						   unsigned long phys)
+diff --git a/arch/powerpc/include/asm/fixmap.h b/arch/powerpc/include/asm/fixmap.h
+index 591b2f4deed53..897cc68758d44 100644
+--- a/arch/powerpc/include/asm/fixmap.h
++++ b/arch/powerpc/include/asm/fixmap.h
+@@ -111,8 +111,10 @@ static inline void __set_fixmap(enum fixed_addresses idx,
+ 		BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
+ 	else if (WARN_ON(idx >= __end_of_fixed_addresses))
+ 		return;
+-
+-	map_kernel_page(__fix_to_virt(idx), phys, flags);
++	if (pgprot_val(flags))
++		map_kernel_page(__fix_to_virt(idx), phys, flags);
++	else
++		unmap_kernel_page(__fix_to_virt(idx));
+ }
+ 
+ #define __early_set_fixmap	__set_fixmap
+diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
+index 96522f7f0618a..e53cc07e6b9ec 100644
+--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
+@@ -65,6 +65,7 @@ extern int icache_44x_need_flush;
+ #ifndef __ASSEMBLY__
+ 
+ int map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot);
++void unmap_kernel_page(unsigned long va);
+ 
+ #endif /* !__ASSEMBLY__ */
+ 
+diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
+index 57cd3892bfe05..1eacff0fff029 100644
+--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
+@@ -311,6 +311,7 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
+ #define __swp_entry_to_pte(x)		__pte((x).val)
+ 
+ int map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot);
++void unmap_kernel_page(unsigned long va);
+ extern int __meminit vmemmap_create_mapping(unsigned long start,
+ 					    unsigned long page_size,
+ 					    unsigned long phys);
+diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
+index 15555c95cebc7..faaf33e204de1 100644
+--- a/arch/powerpc/mm/pgtable.c
++++ b/arch/powerpc/mm/pgtable.c
+@@ -194,6 +194,15 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+ 	__set_pte_at(mm, addr, ptep, pte, 0);
+ }
+ 
++void unmap_kernel_page(unsigned long va)
++{
++	pmd_t *pmdp = pmd_off_k(va);
++	pte_t *ptep = pte_offset_kernel(pmdp, va);
++
++	pte_clear(&init_mm, va, ptep);
++	flush_tlb_kernel_range(va, va + PAGE_SIZE);
++}
++
+ /*
+  * This is called when relaxing access to a PTE. It's also called in the page
+  * fault path when we don't hit any of the major fault cases, ie, a minor
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index 226c366072da3..db9505c658eab 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -50,6 +50,12 @@ riscv-march-$(CONFIG_ARCH_RV32I)	:= rv32ima
+ riscv-march-$(CONFIG_ARCH_RV64I)	:= rv64ima
+ riscv-march-$(CONFIG_FPU)		:= $(riscv-march-y)fd
+ riscv-march-$(CONFIG_RISCV_ISA_C)	:= $(riscv-march-y)c
++
++# Newer binutils versions default to ISA spec version 20191213 which moves some
++# instructions from the I extension to the Zicsr and Zifencei extensions.
++toolchain-need-zicsr-zifencei := $(call cc-option-yn, -march=$(riscv-march-y)_zicsr_zifencei)
++riscv-march-$(toolchain-need-zicsr-zifencei) := $(riscv-march-y)_zicsr_zifencei
++
+ KBUILD_CFLAGS += -march=$(subst fd,,$(riscv-march-y))
+ KBUILD_AFLAGS += -march=$(riscv-march-y)
+ 
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index 9c1a013d56822..bd8516e6c353c 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -1734,6 +1734,9 @@ static bool is_arch_lbr_xsave_available(void)
+ 	 * Check the LBR state with the corresponding software structure.
+ 	 * Disable LBR XSAVES support if the size doesn't match.
+ 	 */
++	if (xfeature_size(XFEATURE_LBR) == 0)
++		return false;
++
+ 	if (WARN_ON(xfeature_size(XFEATURE_LBR) != get_lbr_state_size()))
+ 		return false;
+ 
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index fa543c355fbdb..d515c8e68314c 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -4155,7 +4155,21 @@ static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, void *insn, int i
+ 			return true;
+ 
+ 		pr_err_ratelimited("KVM: SEV Guest triggered AMD Erratum 1096\n");
+-		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
++
++		/*
++		 * If the fault occurred in userspace, arbitrarily inject #GP
++		 * to avoid killing the guest and to hopefully avoid confusing
++		 * the guest kernel too much, e.g. injecting #PF would not be
++		 * coherent with respect to the guest's page tables.  Request
++		 * triple fault if the fault occurred in the kernel as there's
++		 * no fault that KVM can inject without confusing the guest.
++		 * In practice, the triple fault is moot as no sane SEV kernel
++		 * will execute from user memory while also running with SMAP=1.
++		 */
++		if (is_user)
++			kvm_inject_gp(vcpu, 0);
++		else
++			kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
+ 	}
+ 
+ 	return false;
+diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
+index c0d6fee9225fe..5b68034ec5f9c 100644
+--- a/arch/x86/kvm/vmx/evmcs.c
++++ b/arch/x86/kvm/vmx/evmcs.c
+@@ -361,6 +361,7 @@ void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata)
+ 	case MSR_IA32_VMX_PROCBASED_CTLS2:
+ 		ctl_high &= ~EVMCS1_UNSUPPORTED_2NDEXEC;
+ 		break;
++	case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
+ 	case MSR_IA32_VMX_PINBASED_CTLS:
+ 		ctl_high &= ~EVMCS1_UNSUPPORTED_PINCTRL;
+ 		break;
+diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
+index bd41d9462355f..011929a638230 100644
+--- a/arch/x86/kvm/vmx/evmcs.h
++++ b/arch/x86/kvm/vmx/evmcs.h
+@@ -59,7 +59,9 @@ DECLARE_STATIC_KEY_FALSE(enable_evmcs);
+ 	 SECONDARY_EXEC_SHADOW_VMCS |					\
+ 	 SECONDARY_EXEC_TSC_SCALING |					\
+ 	 SECONDARY_EXEC_PAUSE_LOOP_EXITING)
+-#define EVMCS1_UNSUPPORTED_VMEXIT_CTRL (VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL)
++#define EVMCS1_UNSUPPORTED_VMEXIT_CTRL					\
++	(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL |				\
++	 VM_EXIT_SAVE_VMX_PREEMPTION_TIMER)
+ #define EVMCS1_UNSUPPORTED_VMENTRY_CTRL (VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
+ #define EVMCS1_UNSUPPORTED_VMFUNC (VMX_VMFUNC_EPTP_SWITCHING)
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 351ef5cf1436a..94f5f2129e3b4 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -4846,8 +4846,33 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
+ 		dr6 = vmx_get_exit_qual(vcpu);
+ 		if (!(vcpu->guest_debug &
+ 		      (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP))) {
++			/*
++			 * If the #DB was due to ICEBP, a.k.a. INT1, skip the
++			 * instruction.  ICEBP generates a trap-like #DB, but
++			 * despite its interception control being tied to #DB,
++			 * is an instruction intercept, i.e. the VM-Exit occurs
++			 * on the ICEBP itself.  Note, skipping ICEBP also
++			 * clears STI and MOVSS blocking.
++			 *
++			 * For all other #DBs, set vmcs.PENDING_DBG_EXCEPTIONS.BS
++			 * if single-step is enabled in RFLAGS and STI or MOVSS
++			 * blocking is active, as the CPU doesn't set the bit
++			 * on VM-Exit due to #DB interception.  VM-Entry has a
++			 * consistency check that a single-step #DB is pending
++			 * in this scenario as the previous instruction cannot
++			 * have toggled RFLAGS.TF 0=>1 (because STI and POP/MOV
++			 * don't modify RFLAGS), therefore the one instruction
++			 * delay when activating single-step breakpoints must
++			 * have already expired.  Note, the CPU sets/clears BS
++			 * as appropriate for all other VM-Exits types.
++			 */
+ 			if (is_icebp(intr_info))
+ 				WARN_ON(!skip_emulated_instruction(vcpu));
++			else if ((vmx_get_rflags(vcpu) & X86_EFLAGS_TF) &&
++				 (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
++				  (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS)))
++				vmcs_writel(GUEST_PENDING_DBG_EXCEPTIONS,
++					    vmcs_readl(GUEST_PENDING_DBG_EXCEPTIONS) | DR6_BS);
+ 
+ 			kvm_queue_exception_p(vcpu, DB_VECTOR, dr6);
+ 			return 1;
+diff --git a/drivers/accessibility/speakup/speakup_dectlk.c b/drivers/accessibility/speakup/speakup_dectlk.c
+index ab6d61e80b1cb..d689ec5e276f1 100644
+--- a/drivers/accessibility/speakup/speakup_dectlk.c
++++ b/drivers/accessibility/speakup/speakup_dectlk.c
+@@ -44,6 +44,7 @@ static struct var_t vars[] = {
+ 	{ CAPS_START, .u.s = {"[:dv ap 160] " } },
+ 	{ CAPS_STOP, .u.s = {"[:dv ap 100 ] " } },
+ 	{ RATE, .u.n = {"[:ra %d] ", 180, 75, 650, 0, 0, NULL } },
++	{ PITCH, .u.n = {"[:dv ap %d] ", 122, 50, 350, 0, 0, NULL } },
+ 	{ INFLECTION, .u.n = {"[:dv pr %d] ", 100, 0, 10000, 0, 0, NULL } },
+ 	{ VOL, .u.n = {"[:dv g5 %d] ", 86, 60, 86, 0, 0, NULL } },
+ 	{ PUNCT, .u.n = {"[:pu %c] ", 0, 0, 2, 0, 0, "nsa" } },
+diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
+index 2494138a6905e..50ed949dc1449 100644
+--- a/drivers/acpi/arm64/iort.c
++++ b/drivers/acpi/arm64/iort.c
+@@ -1454,9 +1454,17 @@ static void __init arm_smmu_v3_pmcg_init_resources(struct resource *res,
+ 	res[0].start = pmcg->page0_base_address;
+ 	res[0].end = pmcg->page0_base_address + SZ_4K - 1;
+ 	res[0].flags = IORESOURCE_MEM;
+-	res[1].start = pmcg->page1_base_address;
+-	res[1].end = pmcg->page1_base_address + SZ_4K - 1;
+-	res[1].flags = IORESOURCE_MEM;
++	/*
++	 * The initial version in DEN0049C lacked a way to describe register
++	 * page 1, which makes it broken for most PMCG implementations; in
++	 * that case, just let the driver fail gracefully if it expects to
++	 * find a second memory resource.
++	 */
++	if (node->revision > 0) {
++		res[1].start = pmcg->page1_base_address;
++		res[1].end = pmcg->page1_base_address + SZ_4K - 1;
++		res[1].flags = IORESOURCE_MEM;
++	}
+ 
+ 	if (pmcg->overflow_gsiv)
+ 		acpi_iort_register_irq(pmcg->overflow_gsiv, "overflow",
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 3f2e5ea9ab6b7..8347eaee679c8 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -2064,6 +2064,16 @@ bool acpi_ec_dispatch_gpe(void)
+ 	if (acpi_any_gpe_status_set(first_ec->gpe))
+ 		return true;
+ 
++	/*
++	 * Cancel the SCI wakeup and process all pending events in case there
++	 * are any wakeup ones in there.
++	 *
++	 * Note that if any non-EC GPEs are active at this point, the SCI will
++	 * retrigger after the rearming in acpi_s2idle_wake(), so no events
++	 * should be missed by canceling the wakeup here.
++	 */
++	pm_system_cancel_wakeup();
++
+ 	/*
+ 	 * Dispatch the EC GPE in-band, but do not report wakeup in any case
+ 	 * to allow the caller to process events properly after that.
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 31c9d0c8ae11f..e2614ea820bb8 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -1012,21 +1012,15 @@ static bool acpi_s2idle_wake(void)
+ 			return true;
+ 		}
+ 
+-		/* Check non-EC GPE wakeups and dispatch the EC GPE. */
++		/*
++		 * Check non-EC GPE wakeups and if there are none, cancel the
++		 * SCI-related wakeup and dispatch the EC GPE.
++		 */
+ 		if (acpi_ec_dispatch_gpe()) {
+ 			pm_pr_dbg("ACPI non-EC GPE wakeup\n");
+ 			return true;
+ 		}
+ 
+-		/*
+-		 * Cancel the SCI wakeup and process all pending events in case
+-		 * there are any wakeup ones in there.
+-		 *
+-		 * Note that if any non-EC GPEs are active at this point, the
+-		 * SCI will retrigger after the rearming below, so no events
+-		 * should be missed by canceling the wakeup here.
+-		 */
+-		pm_system_cancel_wakeup();
+ 		acpi_os_wait_events_complete();
+ 
+ 		/*
+@@ -1040,6 +1034,7 @@ static bool acpi_s2idle_wake(void)
+ 			return true;
+ 		}
+ 
++		pm_wakeup_clear(acpi_sci_irq);
+ 		rearm_wake_irq(acpi_sci_irq);
+ 	}
+ 
+diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
+index 92073ac68473c..8997e0227eb9d 100644
+--- a/drivers/base/power/wakeup.c
++++ b/drivers/base/power/wakeup.c
+@@ -34,7 +34,8 @@ suspend_state_t pm_suspend_target_state;
+ bool events_check_enabled __read_mostly;
+ 
+ /* First wakeup IRQ seen by the kernel in the last cycle. */
+-unsigned int pm_wakeup_irq __read_mostly;
++static unsigned int wakeup_irq[2] __read_mostly;
++static DEFINE_RAW_SPINLOCK(wakeup_irq_lock);
+ 
+ /* If greater than 0 and the system is suspending, terminate the suspend. */
+ static atomic_t pm_abort_suspend __read_mostly;
+@@ -941,19 +942,45 @@ void pm_system_cancel_wakeup(void)
+ 	atomic_dec_if_positive(&pm_abort_suspend);
+ }
+ 
+-void pm_wakeup_clear(bool reset)
++void pm_wakeup_clear(unsigned int irq_number)
+ {
+-	pm_wakeup_irq = 0;
+-	if (reset)
++	raw_spin_lock_irq(&wakeup_irq_lock);
++
++	if (irq_number && wakeup_irq[0] == irq_number)
++		wakeup_irq[0] = wakeup_irq[1];
++	else
++		wakeup_irq[0] = 0;
++
++	wakeup_irq[1] = 0;
++
++	raw_spin_unlock_irq(&wakeup_irq_lock);
++
++	if (!irq_number)
+ 		atomic_set(&pm_abort_suspend, 0);
+ }
+ 
+ void pm_system_irq_wakeup(unsigned int irq_number)
+ {
+-	if (pm_wakeup_irq == 0) {
+-		pm_wakeup_irq = irq_number;
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&wakeup_irq_lock, flags);
++
++	if (wakeup_irq[0] == 0)
++		wakeup_irq[0] = irq_number;
++	else if (wakeup_irq[1] == 0)
++		wakeup_irq[1] = irq_number;
++	else
++		irq_number = 0;
++
++	raw_spin_unlock_irqrestore(&wakeup_irq_lock, flags);
++
++	if (irq_number)
+ 		pm_system_wakeup();
+-	}
++}
++
++unsigned int pm_wakeup_irq(void)
++{
++	return wakeup_irq[0];
+ }
+ 
+ /**
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index b6f97960d8ee0..5c40ca1d4740e 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -241,7 +241,7 @@ static void __init dmtimer_systimer_assign_alwon(void)
+ 	bool quirk_unreliable_oscillator = false;
+ 
+ 	/* Quirk unreliable 32 KiHz oscillator with incomplete dts */
+-	if (of_machine_is_compatible("ti,omap3-beagle") ||
++	if (of_machine_is_compatible("ti,omap3-beagle-ab4") ||
+ 	    of_machine_is_compatible("timll,omap3-devkit8000")) {
+ 		quirk_unreliable_oscillator = true;
+ 		counter_32k = -ENODEV;
+diff --git a/drivers/gpio/gpio-aggregator.c b/drivers/gpio/gpio-aggregator.c
+index dfd8a4876a27a..d5f25246404d9 100644
+--- a/drivers/gpio/gpio-aggregator.c
++++ b/drivers/gpio/gpio-aggregator.c
+@@ -330,7 +330,8 @@ static int gpio_fwd_get(struct gpio_chip *chip, unsigned int offset)
+ {
+ 	struct gpiochip_fwd *fwd = gpiochip_get_data(chip);
+ 
+-	return gpiod_get_value(fwd->descs[offset]);
++	return chip->can_sleep ? gpiod_get_value_cansleep(fwd->descs[offset])
++			       : gpiod_get_value(fwd->descs[offset]);
+ }
+ 
+ static int gpio_fwd_get_multiple(struct gpiochip_fwd *fwd, unsigned long *mask,
+@@ -349,7 +350,10 @@ static int gpio_fwd_get_multiple(struct gpiochip_fwd *fwd, unsigned long *mask,
+ 	for_each_set_bit(i, mask, fwd->chip.ngpio)
+ 		descs[j++] = fwd->descs[i];
+ 
+-	error = gpiod_get_array_value(j, descs, NULL, values);
++	if (fwd->chip.can_sleep)
++		error = gpiod_get_array_value_cansleep(j, descs, NULL, values);
++	else
++		error = gpiod_get_array_value(j, descs, NULL, values);
+ 	if (error)
+ 		return error;
+ 
+@@ -384,7 +388,10 @@ static void gpio_fwd_set(struct gpio_chip *chip, unsigned int offset, int value)
+ {
+ 	struct gpiochip_fwd *fwd = gpiochip_get_data(chip);
+ 
+-	gpiod_set_value(fwd->descs[offset], value);
++	if (chip->can_sleep)
++		gpiod_set_value_cansleep(fwd->descs[offset], value);
++	else
++		gpiod_set_value(fwd->descs[offset], value);
+ }
+ 
+ static void gpio_fwd_set_multiple(struct gpiochip_fwd *fwd, unsigned long *mask,
+@@ -403,7 +410,10 @@ static void gpio_fwd_set_multiple(struct gpiochip_fwd *fwd, unsigned long *mask,
+ 		descs[j++] = fwd->descs[i];
+ 	}
+ 
+-	gpiod_set_array_value(j, descs, NULL, values);
++	if (fwd->chip.can_sleep)
++		gpiod_set_array_value_cansleep(j, descs, NULL, values);
++	else
++		gpiod_set_array_value(j, descs, NULL, values);
+ }
+ 
+ static void gpio_fwd_set_multiple_locked(struct gpio_chip *chip,
+diff --git a/drivers/gpio/gpio-sifive.c b/drivers/gpio/gpio-sifive.c
+index d5eb9ca119016..4f28fa73450c1 100644
+--- a/drivers/gpio/gpio-sifive.c
++++ b/drivers/gpio/gpio-sifive.c
+@@ -206,7 +206,7 @@ static int sifive_gpio_probe(struct platform_device *pdev)
+ 			 NULL,
+ 			 chip->base + SIFIVE_GPIO_OUTPUT_EN,
+ 			 chip->base + SIFIVE_GPIO_INPUT_EN,
+-			 0);
++			 BGPIOF_READ_OUTPUT_REG_SET);
+ 	if (ret) {
+ 		dev_err(dev, "unable to init generic GPIO\n");
+ 		return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index efda38349a032..917b94002f4b7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -766,9 +766,9 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 			dev_info.high_va_offset = AMDGPU_GMC_HOLE_END;
+ 			dev_info.high_va_max = AMDGPU_GMC_HOLE_END | vm_size;
+ 		}
+-		dev_info.virtual_address_alignment = max((int)PAGE_SIZE, AMDGPU_GPU_PAGE_SIZE);
++		dev_info.virtual_address_alignment = max_t(u32, PAGE_SIZE, AMDGPU_GPU_PAGE_SIZE);
+ 		dev_info.pte_fragment_size = (1 << adev->vm_manager.fragment_size) * AMDGPU_GPU_PAGE_SIZE;
+-		dev_info.gart_page_size = AMDGPU_GPU_PAGE_SIZE;
++		dev_info.gart_page_size = max_t(u32, PAGE_SIZE, AMDGPU_GPU_PAGE_SIZE);
+ 		dev_info.cu_active_number = adev->gfx.cu_info.number;
+ 		dev_info.cu_ao_mask = adev->gfx.cu_info.ao_cu_mask;
+ 		dev_info.ce_ram_size = adev->gfx.ce_ram_size;
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 9d1bd8f491ad7..448c2f2d803a6 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -115,6 +115,12 @@ static const struct drm_dmi_panel_orientation_data lcd1280x1920_rightside_up = {
+ 	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+ 
++static const struct drm_dmi_panel_orientation_data lcd1600x2560_leftside_up = {
++	.width = 1600,
++	.height = 2560,
++	.orientation = DRM_MODE_PANEL_ORIENTATION_LEFT_UP,
++};
++
+ static const struct dmi_system_id orientation_data[] = {
+ 	{	/* Acer One 10 (S1003) */
+ 		.matches = {
+@@ -261,6 +267,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Default string"),
+ 		},
+ 		.driver_data = (void *)&onegx1_pro,
++	}, {	/* OneXPlayer */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ONE-NETBOOK TECHNOLOGY CO., LTD."),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ONE XPLAYER"),
++		},
++		.driver_data = (void *)&lcd1600x2560_leftside_up,
+ 	}, {	/* Samsung GalaxyBook 10.6 */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 204674fccd646..7ffd2a04ab23a 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -557,6 +557,7 @@ static int panel_simple_probe(struct device *dev, const struct panel_desc *desc)
+ 		err = panel_dpi_probe(dev, panel);
+ 		if (err)
+ 			goto free_ddc;
++		desc = panel->desc;
+ 	} else {
+ 		if (!of_get_display_timing(dev->of_node, "panel-timing", &dt))
+ 			panel_simple_parse_panel_timing_node(dev, panel, &dt);
+diff --git a/drivers/gpu/drm/rockchip/rockchip_vop_reg.c b/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
+index a6fe03c3748aa..39e1e1ebea928 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
++++ b/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
+@@ -873,6 +873,7 @@ static const struct vop_win_phy rk3399_win01_data = {
+ 	.enable = VOP_REG(RK3288_WIN0_CTRL0, 0x1, 0),
+ 	.format = VOP_REG(RK3288_WIN0_CTRL0, 0x7, 1),
+ 	.rb_swap = VOP_REG(RK3288_WIN0_CTRL0, 0x1, 12),
++	.x_mir_en = VOP_REG(RK3288_WIN0_CTRL0, 0x1, 21),
+ 	.y_mir_en = VOP_REG(RK3288_WIN0_CTRL0, 0x1, 22),
+ 	.act_info = VOP_REG(RK3288_WIN0_ACT_INFO, 0x1fff1fff, 0),
+ 	.dsp_info = VOP_REG(RK3288_WIN0_DSP_INFO, 0x0fff0fff, 0),
+@@ -883,6 +884,7 @@ static const struct vop_win_phy rk3399_win01_data = {
+ 	.uv_vir = VOP_REG(RK3288_WIN0_VIR, 0x3fff, 16),
+ 	.src_alpha_ctl = VOP_REG(RK3288_WIN0_SRC_ALPHA_CTRL, 0xff, 0),
+ 	.dst_alpha_ctl = VOP_REG(RK3288_WIN0_DST_ALPHA_CTRL, 0xff, 0),
++	.channel = VOP_REG(RK3288_WIN0_CTRL2, 0xff, 0),
+ };
+ 
+ /*
+@@ -893,11 +895,11 @@ static const struct vop_win_phy rk3399_win01_data = {
+ static const struct vop_win_data rk3399_vop_win_data[] = {
+ 	{ .base = 0x00, .phy = &rk3399_win01_data,
+ 	  .type = DRM_PLANE_TYPE_PRIMARY },
+-	{ .base = 0x40, .phy = &rk3288_win01_data,
++	{ .base = 0x40, .phy = &rk3368_win01_data,
+ 	  .type = DRM_PLANE_TYPE_OVERLAY },
+-	{ .base = 0x00, .phy = &rk3288_win23_data,
++	{ .base = 0x00, .phy = &rk3368_win23_data,
+ 	  .type = DRM_PLANE_TYPE_OVERLAY },
+-	{ .base = 0x50, .phy = &rk3288_win23_data,
++	{ .base = 0x50, .phy = &rk3368_win23_data,
+ 	  .type = DRM_PLANE_TYPE_CURSOR },
+ };
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 5d5c4e9a86218..a308f2d05d173 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -800,6 +800,7 @@ static int vc4_hdmi_encoder_atomic_check(struct drm_encoder *encoder,
+ 	unsigned long long tmds_rate;
+ 
+ 	if (vc4_hdmi->variant->unsupported_odd_h_timings &&
++	    !(mode->flags & DRM_MODE_FLAG_DBLCLK) &&
+ 	    ((mode->hdisplay % 2) || (mode->hsync_start % 2) ||
+ 	     (mode->hsync_end % 2) || (mode->htotal % 2)))
+ 		return -EINVAL;
+@@ -834,6 +835,7 @@ vc4_hdmi_encoder_mode_valid(struct drm_encoder *encoder,
+ 	struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
+ 
+ 	if (vc4_hdmi->variant->unsupported_odd_h_timings &&
++	    !(mode->flags & DRM_MODE_FLAG_DBLCLK) &&
+ 	    ((mode->hdisplay % 2) || (mode->hsync_start % 2) ||
+ 	     (mode->hsync_end % 2) || (mode->htotal % 2)))
+ 		return MODE_H_ILLEGAL;
+diff --git a/drivers/hwmon/dell-smm-hwmon.c b/drivers/hwmon/dell-smm-hwmon.c
+index 87f401100466d..10c7b6295b02e 100644
+--- a/drivers/hwmon/dell-smm-hwmon.c
++++ b/drivers/hwmon/dell-smm-hwmon.c
+@@ -317,7 +317,7 @@ static int i8k_enable_fan_auto_mode(bool enable)
+ }
+ 
+ /*
+- * Set the fan speed (off, low, high). Returns the new fan status.
++ * Set the fan speed (off, low, high, ...).
+  */
+ static int i8k_set_fan(int fan, int speed)
+ {
+@@ -329,7 +329,7 @@ static int i8k_set_fan(int fan, int speed)
+ 	speed = (speed < 0) ? 0 : ((speed > i8k_fan_max) ? i8k_fan_max : speed);
+ 	regs.ebx = (fan & 0xff) | (speed << 8);
+ 
+-	return i8k_smm(&regs) ? : i8k_get_fan_status(fan);
++	return i8k_smm(&regs);
+ }
+ 
+ static int i8k_get_temp_type(int sensor)
+@@ -443,7 +443,7 @@ static int
+ i8k_ioctl_unlocked(struct file *fp, unsigned int cmd, unsigned long arg)
+ {
+ 	int val = 0;
+-	int speed;
++	int speed, err;
+ 	unsigned char buff[16];
+ 	int __user *argp = (int __user *)arg;
+ 
+@@ -504,7 +504,11 @@ i8k_ioctl_unlocked(struct file *fp, unsigned int cmd, unsigned long arg)
+ 		if (copy_from_user(&speed, argp + 1, sizeof(int)))
+ 			return -EFAULT;
+ 
+-		val = i8k_set_fan(val, speed);
++		err = i8k_set_fan(val, speed);
++		if (err < 0)
++			return err;
++
++		val = i8k_get_fan_status(val);
+ 		break;
+ 
+ 	default:
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index bcf060b5cf85b..9d65557dfb2ce 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -185,9 +185,14 @@ static struct dev_iommu *dev_iommu_get(struct device *dev)
+ 
+ static void dev_iommu_free(struct device *dev)
+ {
+-	iommu_fwspec_free(dev);
+-	kfree(dev->iommu);
++	struct dev_iommu *param = dev->iommu;
++
+ 	dev->iommu = NULL;
++	if (param->fwspec) {
++		fwnode_handle_put(param->fwspec->iommu_fwnode);
++		kfree(param->fwspec);
++	}
++	kfree(param);
+ }
+ 
+ static int __iommu_probe_device(struct device *dev, struct list_head *group_list)
+diff --git a/drivers/misc/eeprom/ee1004.c b/drivers/misc/eeprom/ee1004.c
+index 252e15ba65e11..d9f90332aaf65 100644
+--- a/drivers/misc/eeprom/ee1004.c
++++ b/drivers/misc/eeprom/ee1004.c
+@@ -82,6 +82,9 @@ static ssize_t ee1004_eeprom_read(struct i2c_client *client, char *buf,
+ 	if (unlikely(offset + count > EE1004_PAGE_SIZE))
+ 		count = EE1004_PAGE_SIZE - offset;
+ 
++	if (count > I2C_SMBUS_BLOCK_MAX)
++		count = I2C_SMBUS_BLOCK_MAX;
++
+ 	status = i2c_smbus_read_i2c_block_data_or_emulated(client, offset,
+ 							   count, buf);
+ 	dev_dbg(&client->dev, "read %zu@%d --> %d\n", count, offset, status);
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index ef49ac8d91019..d0471fec37fbb 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1284,7 +1284,14 @@ static int fastrpc_dmabuf_alloc(struct fastrpc_user *fl, char __user *argp)
+ 	}
+ 
+ 	if (copy_to_user(argp, &bp, sizeof(bp))) {
+-		dma_buf_put(buf->dmabuf);
++		/*
++		 * The usercopy failed, but we can't do much about it, as
++		 * dma_buf_fd() already called fd_install() and made the
++		 * file descriptor accessible for the current process. It
++		 * might already be closed and dmabuf no longer valid when
++		 * we reach this point. Therefore "leak" the fd and rely on
++		 * the process exit path to do any required cleanup.
++		 */
+ 		return -EFAULT;
+ 	}
+ 
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index ab5ab969f711d..343648fcbc31f 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -524,12 +524,16 @@ static void esdhc_of_adma_workaround(struct sdhci_host *host, u32 intmask)
+ 
+ static int esdhc_of_enable_dma(struct sdhci_host *host)
+ {
++	int ret;
+ 	u32 value;
+ 	struct device *dev = mmc_dev(host->mmc);
+ 
+ 	if (of_device_is_compatible(dev->of_node, "fsl,ls1043a-esdhc") ||
+-	    of_device_is_compatible(dev->of_node, "fsl,ls1046a-esdhc"))
+-		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
++	    of_device_is_compatible(dev->of_node, "fsl,ls1046a-esdhc")) {
++		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
++		if (ret)
++			return ret;
++	}
+ 
+ 	value = sdhci_readl(host, ESDHC_DMA_SYSCTL);
+ 
+diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
+index aa001b16765ae..ab8c833411654 100644
+--- a/drivers/net/bonding/bond_3ad.c
++++ b/drivers/net/bonding/bond_3ad.c
+@@ -1003,8 +1003,8 @@ static void ad_mux_machine(struct port *port, bool *update_slave_arr)
+ 				if (port->aggregator &&
+ 				    port->aggregator->is_active &&
+ 				    !__port_is_enabled(port)) {
+-
+ 					__enable_port(port);
++					*update_slave_arr = true;
+ 				}
+ 			}
+ 			break;
+@@ -1760,6 +1760,7 @@ static void ad_agg_selection_logic(struct aggregator *agg,
+ 			     port = port->next_port_in_aggregator) {
+ 				__enable_port(port);
+ 			}
++			*update_slave_arr = true;
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 690e9d9495e75..08a675a5328d7 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -504,7 +504,7 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
+ 	get_device(&priv->master_mii_bus->dev);
+ 	priv->master_mii_dn = dn;
+ 
+-	priv->slave_mii_bus = devm_mdiobus_alloc(ds->dev);
++	priv->slave_mii_bus = mdiobus_alloc();
+ 	if (!priv->slave_mii_bus) {
+ 		of_node_put(dn);
+ 		return -ENOMEM;
+@@ -564,8 +564,10 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
+ 	}
+ 
+ 	err = mdiobus_register(priv->slave_mii_bus);
+-	if (err && dn)
++	if (err && dn) {
++		mdiobus_free(priv->slave_mii_bus);
+ 		of_node_put(dn);
++	}
+ 
+ 	return err;
+ }
+@@ -573,6 +575,7 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
+ static void bcm_sf2_mdio_unregister(struct bcm_sf2_priv *priv)
+ {
+ 	mdiobus_unregister(priv->slave_mii_bus);
++	mdiobus_free(priv->slave_mii_bus);
+ 	of_node_put(priv->master_mii_dn);
+ }
+ 
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 4d23a7aba7961..ed517985ca88e 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -495,8 +495,9 @@ static int gswip_mdio_rd(struct mii_bus *bus, int addr, int reg)
+ static int gswip_mdio(struct gswip_priv *priv, struct device_node *mdio_np)
+ {
+ 	struct dsa_switch *ds = priv->ds;
++	int err;
+ 
+-	ds->slave_mii_bus = devm_mdiobus_alloc(priv->dev);
++	ds->slave_mii_bus = mdiobus_alloc();
+ 	if (!ds->slave_mii_bus)
+ 		return -ENOMEM;
+ 
+@@ -509,7 +510,11 @@ static int gswip_mdio(struct gswip_priv *priv, struct device_node *mdio_np)
+ 	ds->slave_mii_bus->parent = priv->dev;
+ 	ds->slave_mii_bus->phy_mask = ~ds->phys_mii_mask;
+ 
+-	return of_mdiobus_register(ds->slave_mii_bus, mdio_np);
++	err = of_mdiobus_register(ds->slave_mii_bus, mdio_np);
++	if (err)
++		mdiobus_free(ds->slave_mii_bus);
++
++	return err;
+ }
+ 
+ static int gswip_pce_table_entry_read(struct gswip_priv *priv,
+@@ -2086,8 +2091,10 @@ disable_switch:
+ 	gswip_mdio_mask(priv, GSWIP_MDIO_GLOB_ENABLE, 0, GSWIP_MDIO_GLOB);
+ 	dsa_unregister_switch(priv->ds);
+ mdio_bus:
+-	if (mdio_np)
++	if (mdio_np) {
+ 		mdiobus_unregister(priv->ds->slave_mii_bus);
++		mdiobus_free(priv->ds->slave_mii_bus);
++	}
+ put_mdio_node:
+ 	of_node_put(mdio_np);
+ 	for (i = 0; i < priv->num_gphy_fw; i++)
+@@ -2107,6 +2114,7 @@ static int gswip_remove(struct platform_device *pdev)
+ 
+ 	if (priv->ds->slave_mii_bus) {
+ 		mdiobus_unregister(priv->ds->slave_mii_bus);
++		mdiobus_free(priv->ds->slave_mii_bus);
+ 		of_node_put(priv->ds->slave_mii_bus->dev.of_node);
+ 	}
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index afc5500ef8ed9..1992be77522ac 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3072,7 +3072,7 @@ static int mv88e6xxx_mdio_register(struct mv88e6xxx_chip *chip,
+ 			return err;
+ 	}
+ 
+-	bus = devm_mdiobus_alloc_size(chip->dev, sizeof(*mdio_bus));
++	bus = mdiobus_alloc_size(sizeof(*mdio_bus));
+ 	if (!bus)
+ 		return -ENOMEM;
+ 
+@@ -3097,14 +3097,14 @@ static int mv88e6xxx_mdio_register(struct mv88e6xxx_chip *chip,
+ 	if (!external) {
+ 		err = mv88e6xxx_g2_irq_mdio_setup(chip, bus);
+ 		if (err)
+-			return err;
++			goto out;
+ 	}
+ 
+ 	err = of_mdiobus_register(bus, np);
+ 	if (err) {
+ 		dev_err(chip->dev, "Cannot register MDIO bus (%d)\n", err);
+ 		mv88e6xxx_g2_irq_mdio_free(chip, bus);
+-		return err;
++		goto out;
+ 	}
+ 
+ 	if (external)
+@@ -3113,21 +3113,26 @@ static int mv88e6xxx_mdio_register(struct mv88e6xxx_chip *chip,
+ 		list_add(&mdio_bus->list, &chip->mdios);
+ 
+ 	return 0;
++
++out:
++	mdiobus_free(bus);
++	return err;
+ }
+ 
+ static void mv88e6xxx_mdios_unregister(struct mv88e6xxx_chip *chip)
+ 
+ {
+-	struct mv88e6xxx_mdio_bus *mdio_bus;
++	struct mv88e6xxx_mdio_bus *mdio_bus, *p;
+ 	struct mii_bus *bus;
+ 
+-	list_for_each_entry(mdio_bus, &chip->mdios, list) {
++	list_for_each_entry_safe(mdio_bus, p, &chip->mdios, list) {
+ 		bus = mdio_bus->bus;
+ 
+ 		if (!mdio_bus->external)
+ 			mv88e6xxx_g2_irq_mdio_free(chip, bus);
+ 
+ 		mdiobus_unregister(bus);
++		mdiobus_free(bus);
+ 	}
+ }
+ 
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 2e5bbdca5ea47..cd8d9b0e0edb3 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -1050,7 +1050,7 @@ static int vsc9959_mdio_bus_alloc(struct ocelot *ocelot)
+ 		return PTR_ERR(hw);
+ 	}
+ 
+-	bus = devm_mdiobus_alloc_size(dev, sizeof(*mdio_priv));
++	bus = mdiobus_alloc_size(sizeof(*mdio_priv));
+ 	if (!bus)
+ 		return -ENOMEM;
+ 
+@@ -1070,6 +1070,7 @@ static int vsc9959_mdio_bus_alloc(struct ocelot *ocelot)
+ 	rc = mdiobus_register(bus);
+ 	if (rc < 0) {
+ 		dev_err(dev, "failed to register MDIO bus\n");
++		mdiobus_free(bus);
+ 		return rc;
+ 	}
+ 
+@@ -1119,6 +1120,7 @@ static void vsc9959_mdio_bus_free(struct ocelot *ocelot)
+ 		lynx_pcs_destroy(pcs);
+ 	}
+ 	mdiobus_unregister(felix->imdio);
++	mdiobus_free(felix->imdio);
+ }
+ 
+ static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
+diff --git a/drivers/net/dsa/qca/ar9331.c b/drivers/net/dsa/qca/ar9331.c
+index 661745932a539..c33bdcf7efc58 100644
+--- a/drivers/net/dsa/qca/ar9331.c
++++ b/drivers/net/dsa/qca/ar9331.c
+@@ -289,7 +289,7 @@ static int ar9331_sw_mbus_init(struct ar9331_sw_priv *priv)
+ 	if (!mnp)
+ 		return -ENODEV;
+ 
+-	ret = of_mdiobus_register(mbus, mnp);
++	ret = devm_of_mdiobus_register(dev, mbus, mnp);
+ 	of_node_put(mnp);
+ 	if (ret)
+ 		return ret;
+@@ -856,7 +856,6 @@ static void ar9331_sw_remove(struct mdio_device *mdiodev)
+ 	struct ar9331_sw_priv *priv = dev_get_drvdata(&mdiodev->dev);
+ 
+ 	irq_domain_remove(priv->irqdomain);
+-	mdiobus_unregister(priv->mbus);
+ 	dsa_unregister_switch(&priv->ds);
+ 
+ 	reset_control_assert(priv->sw_reset);
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
+index 90cb55eb54665..014513ce00a14 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
+@@ -418,6 +418,9 @@ static void xgbe_pci_remove(struct pci_dev *pdev)
+ 
+ 	pci_free_irq_vectors(pdata->pcidev);
+ 
++	/* Disable all interrupts in the hardware */
++	XP_IOWRITE(pdata, XP_INT_EN, 0x0);
++
+ 	xgbe_free_pdata(pdata);
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index f06d88c471d0f..f917bc9c87969 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -4405,12 +4405,12 @@ static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
+ #ifdef CONFIG_DEBUG_FS
+ 	dpaa2_dbg_remove(priv);
+ #endif
++
++	unregister_netdev(net_dev);
+ 	rtnl_lock();
+ 	dpaa2_eth_disconnect_mac(priv);
+ 	rtnl_unlock();
+ 
+-	unregister_netdev(net_dev);
+-
+ 	dpaa2_eth_dl_port_del(priv);
+ 	dpaa2_eth_dl_traps_unregister(priv);
+ 	dpaa2_eth_dl_unregister(priv);
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 64714757bd4f4..2b0d0373ab2c6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -3032,7 +3032,8 @@ ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
+ 	if (fec == ICE_FEC_AUTO && ice_fw_supports_link_override(pi->hw)) {
+ 		struct ice_link_default_override_tlv tlv;
+ 
+-		if (ice_get_link_default_override(&tlv, pi))
++		status = ice_get_link_default_override(&tlv, pi);
++		if (status)
+ 			goto out;
+ 
+ 		if (!(tlv.options & ICE_LINK_OVERRIDE_STRICT_MODE) &&
+diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+index c0ee0541e53fc..847e1ef8e1064 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
++++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+@@ -507,6 +507,7 @@ struct ice_tx_ctx_desc {
+ 			(0x3FFFFULL << ICE_TXD_CTX_QW1_TSO_LEN_S)
+ 
+ #define ICE_TXD_CTX_QW1_MSS_S	50
++#define ICE_TXD_CTX_MIN_MSS	64
+ 
+ enum ice_tx_ctx_desc_cmd_bits {
+ 	ICE_TX_CTX_DESC_TSO		= 0x01,
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 4c7d1720113a0..fb4656902634c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -6787,6 +6787,7 @@ ice_features_check(struct sk_buff *skb,
+ 		   struct net_device __always_unused *netdev,
+ 		   netdev_features_t features)
+ {
++	bool gso = skb_is_gso(skb);
+ 	size_t len;
+ 
+ 	/* No point in doing any of this if neither checksum nor GSO are
+@@ -6799,24 +6800,32 @@ ice_features_check(struct sk_buff *skb,
+ 	/* We cannot support GSO if the MSS is going to be less than
+ 	 * 64 bytes. If it is then we need to drop support for GSO.
+ 	 */
+-	if (skb_is_gso(skb) && (skb_shinfo(skb)->gso_size < 64))
++	if (gso && (skb_shinfo(skb)->gso_size < ICE_TXD_CTX_MIN_MSS))
+ 		features &= ~NETIF_F_GSO_MASK;
+ 
+-	len = skb_network_header(skb) - skb->data;
++	len = skb_network_offset(skb);
+ 	if (len > ICE_TXD_MACLEN_MAX || len & 0x1)
+ 		goto out_rm_features;
+ 
+-	len = skb_transport_header(skb) - skb_network_header(skb);
++	len = skb_network_header_len(skb);
+ 	if (len > ICE_TXD_IPLEN_MAX || len & 0x1)
+ 		goto out_rm_features;
+ 
+ 	if (skb->encapsulation) {
+-		len = skb_inner_network_header(skb) - skb_transport_header(skb);
+-		if (len > ICE_TXD_L4LEN_MAX || len & 0x1)
+-			goto out_rm_features;
++		/* this must work for VXLAN frames AND IPIP/SIT frames, and in
++		 * the case of IPIP frames, the transport header pointer is
++		 * after the inner header! So check to make sure that this
++		 * is a GRE or UDP_TUNNEL frame before doing that math.
++		 */
++		if (gso && (skb_shinfo(skb)->gso_type &
++			    (SKB_GSO_GRE | SKB_GSO_UDP_TUNNEL))) {
++			len = skb_inner_network_header(skb) -
++			      skb_transport_header(skb);
++			if (len > ICE_TXD_L4LEN_MAX || len & 0x1)
++				goto out_rm_features;
++		}
+ 
+-		len = skb_inner_transport_header(skb) -
+-		      skb_inner_network_header(skb);
++		len = skb_inner_network_header_len(skb);
+ 		if (len > ICE_TXD_IPLEN_MAX || len & 0x1)
+ 			goto out_rm_features;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+index a7d0a459969a2..2d6ac61d7a3e6 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+@@ -1992,14 +1992,15 @@ static void ixgbevf_set_rx_buffer_len(struct ixgbevf_adapter *adapter,
+ 	if (adapter->flags & IXGBEVF_FLAGS_LEGACY_RX)
+ 		return;
+ 
+-	set_ring_build_skb_enabled(rx_ring);
++	if (PAGE_SIZE < 8192)
++		if (max_frame > IXGBEVF_MAX_FRAME_BUILD_SKB)
++			set_ring_uses_large_buffer(rx_ring);
+ 
+-	if (PAGE_SIZE < 8192) {
+-		if (max_frame <= IXGBEVF_MAX_FRAME_BUILD_SKB)
+-			return;
++	/* 82599 can't rely on RXDCTL.RLPML to restrict the size of the frame */
++	if (adapter->hw.mac.type == ixgbe_mac_82599_vf && !ring_uses_large_buffer(rx_ring))
++		return;
+ 
+-		set_ring_uses_large_buffer(rx_ring);
+-	}
++	set_ring_build_skb_enabled(rx_ring);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 52401915828a1..a06466ecca12a 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -848,12 +848,11 @@ void ocelot_get_strings(struct ocelot *ocelot, int port, u32 sset, u8 *data)
+ }
+ EXPORT_SYMBOL(ocelot_get_strings);
+ 
++/* Caller must hold &ocelot->stats_lock */
+ static void ocelot_update_stats(struct ocelot *ocelot)
+ {
+ 	int i, j;
+ 
+-	mutex_lock(&ocelot->stats_lock);
+-
+ 	for (i = 0; i < ocelot->num_phys_ports; i++) {
+ 		/* Configure the port to read the stats from */
+ 		ocelot_write(ocelot, SYS_STAT_CFG_STAT_VIEW(i), SYS_STAT_CFG);
+@@ -872,8 +871,6 @@ static void ocelot_update_stats(struct ocelot *ocelot)
+ 					      ~(u64)U32_MAX) + val;
+ 		}
+ 	}
+-
+-	mutex_unlock(&ocelot->stats_lock);
+ }
+ 
+ static void ocelot_check_stats_work(struct work_struct *work)
+@@ -882,7 +879,9 @@ static void ocelot_check_stats_work(struct work_struct *work)
+ 	struct ocelot *ocelot = container_of(del_work, struct ocelot,
+ 					     stats_work);
+ 
++	mutex_lock(&ocelot->stats_lock);
+ 	ocelot_update_stats(ocelot);
++	mutex_unlock(&ocelot->stats_lock);
+ 
+ 	queue_delayed_work(ocelot->stats_queue, &ocelot->stats_work,
+ 			   OCELOT_STATS_CHECK_DELAY);
+@@ -892,12 +891,16 @@ void ocelot_get_ethtool_stats(struct ocelot *ocelot, int port, u64 *data)
+ {
+ 	int i;
+ 
++	mutex_lock(&ocelot->stats_lock);
++
+ 	/* check and update now */
+ 	ocelot_update_stats(ocelot);
+ 
+ 	/* Copy all counters */
+ 	for (i = 0; i < ocelot->num_stats; i++)
+ 		*data++ = ocelot->stats[port * ocelot->num_stats + i];
++
++	mutex_unlock(&ocelot->stats_lock);
+ }
+ EXPORT_SYMBOL(ocelot_get_ethtool_stats);
+ 
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+index d19c02e991145..d3d5b663a4a3c 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+@@ -1011,6 +1011,7 @@ nfp_tunnel_del_shared_mac(struct nfp_app *app, struct net_device *netdev,
+ 	struct nfp_flower_repr_priv *repr_priv;
+ 	struct nfp_tun_offloaded_mac *entry;
+ 	struct nfp_repr *repr;
++	u16 nfp_mac_idx;
+ 	int ida_idx;
+ 
+ 	entry = nfp_tunnel_lookup_offloaded_macs(app, mac);
+@@ -1029,8 +1030,6 @@ nfp_tunnel_del_shared_mac(struct nfp_app *app, struct net_device *netdev,
+ 		entry->bridge_count--;
+ 
+ 		if (!entry->bridge_count && entry->ref_count) {
+-			u16 nfp_mac_idx;
+-
+ 			nfp_mac_idx = entry->index & ~NFP_TUN_PRE_TUN_IDX_BIT;
+ 			if (__nfp_tunnel_offload_mac(app, mac, nfp_mac_idx,
+ 						     false)) {
+@@ -1046,7 +1045,6 @@ nfp_tunnel_del_shared_mac(struct nfp_app *app, struct net_device *netdev,
+ 
+ 	/* If MAC is now used by 1 repr set the offloaded MAC index to port. */
+ 	if (entry->ref_count == 1 && list_is_singular(&entry->repr_list)) {
+-		u16 nfp_mac_idx;
+ 		int port, err;
+ 
+ 		repr_priv = list_first_entry(&entry->repr_list,
+@@ -1074,8 +1072,14 @@ nfp_tunnel_del_shared_mac(struct nfp_app *app, struct net_device *netdev,
+ 	WARN_ON_ONCE(rhashtable_remove_fast(&priv->tun.offloaded_macs,
+ 					    &entry->ht_node,
+ 					    offloaded_macs_params));
++
++	if (nfp_flower_is_supported_bridge(netdev))
++		nfp_mac_idx = entry->index & ~NFP_TUN_PRE_TUN_IDX_BIT;
++	else
++		nfp_mac_idx = entry->index;
++
+ 	/* If MAC has global ID then extract and free the ida entry. */
+-	if (nfp_tunnel_is_mac_idx_global(entry->index)) {
++	if (nfp_tunnel_is_mac_idx_global(nfp_mac_idx)) {
+ 		ida_idx = nfp_tunnel_get_ida_from_global_mac_idx(entry->index);
+ 		ida_simple_remove(&priv->tun.mac_off_ids, ida_idx);
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index 9f5ccf1a0a540..cad6588840d8b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -734,7 +734,7 @@ static int sun8i_dwmac_reset(struct stmmac_priv *priv)
+ 
+ 	if (err) {
+ 		dev_err(priv->device, "EMAC reset timeout\n");
+-		return -EFAULT;
++		return err;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/net/mdio/mdio-aspeed.c b/drivers/net/mdio/mdio-aspeed.c
+index 966c3b4ad59d1..e2273588c75b6 100644
+--- a/drivers/net/mdio/mdio-aspeed.c
++++ b/drivers/net/mdio/mdio-aspeed.c
+@@ -148,6 +148,7 @@ static const struct of_device_id aspeed_mdio_of_match[] = {
+ 	{ .compatible = "aspeed,ast2600-mdio", },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, aspeed_mdio_of_match);
+ 
+ static struct platform_driver aspeed_mdio_driver = {
+ 	.driver = {
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index 4dda2ab19c265..cb9d1852a75c8 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -515,9 +515,9 @@ static int m88e1121_config_aneg_rgmii_delays(struct phy_device *phydev)
+ 	else
+ 		mscr = 0;
+ 
+-	return phy_modify_paged(phydev, MII_MARVELL_MSCR_PAGE,
+-				MII_88E1121_PHY_MSCR_REG,
+-				MII_88E1121_PHY_MSCR_DELAY_MASK, mscr);
++	return phy_modify_paged_changed(phydev, MII_MARVELL_MSCR_PAGE,
++					MII_88E1121_PHY_MSCR_REG,
++					MII_88E1121_PHY_MSCR_DELAY_MASK, mscr);
+ }
+ 
+ static int m88e1121_config_aneg(struct phy_device *phydev)
+@@ -531,11 +531,13 @@ static int m88e1121_config_aneg(struct phy_device *phydev)
+ 			return err;
+ 	}
+ 
++	changed = err;
++
+ 	err = marvell_set_polarity(phydev, phydev->mdix_ctrl);
+ 	if (err < 0)
+ 		return err;
+ 
+-	changed = err;
++	changed |= err;
+ 
+ 	err = genphy_config_aneg(phydev);
+ 	if (err < 0)
+@@ -1059,16 +1061,15 @@ static int m88e1118_config_aneg(struct phy_device *phydev)
+ {
+ 	int err;
+ 
+-	err = genphy_soft_reset(phydev);
++	err = marvell_set_polarity(phydev, phydev->mdix_ctrl);
+ 	if (err < 0)
+ 		return err;
+ 
+-	err = marvell_set_polarity(phydev, phydev->mdix_ctrl);
++	err = genphy_config_aneg(phydev);
+ 	if (err < 0)
+ 		return err;
+ 
+-	err = genphy_config_aneg(phydev);
+-	return 0;
++	return genphy_soft_reset(phydev);
+ }
+ 
+ static int m88e1118_config_init(struct phy_device *phydev)
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index b77b0a33d697d..0b0cbcee1920b 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -1467,58 +1467,68 @@ static int ax88179_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 	u16 hdr_off;
+ 	u32 *pkt_hdr;
+ 
+-	/* This check is no longer done by usbnet */
+-	if (skb->len < dev->net->hard_header_len)
++	/* At the end of the SKB, there's a header telling us how many packets
++	 * are bundled into this buffer and where we can find an array of
++	 * per-packet metadata (which contains elements encoded into u16).
++	 */
++	if (skb->len < 4)
+ 		return 0;
+-
+ 	skb_trim(skb, skb->len - 4);
+ 	rx_hdr = get_unaligned_le32(skb_tail_pointer(skb));
+-
+ 	pkt_cnt = (u16)rx_hdr;
+ 	hdr_off = (u16)(rx_hdr >> 16);
++
++	if (pkt_cnt == 0)
++		return 0;
++
++	/* Make sure that the bounds of the metadata array are inside the SKB
++	 * (and in front of the counter at the end).
++	 */
++	if (pkt_cnt * 2 + hdr_off > skb->len)
++		return 0;
+ 	pkt_hdr = (u32 *)(skb->data + hdr_off);
+ 
+-	while (pkt_cnt--) {
++	/* Packets must not overlap the metadata array */
++	skb_trim(skb, hdr_off);
++
++	for (; ; pkt_cnt--, pkt_hdr++) {
+ 		u16 pkt_len;
+ 
+ 		le32_to_cpus(pkt_hdr);
+ 		pkt_len = (*pkt_hdr >> 16) & 0x1fff;
+ 
+-		/* Check CRC or runt packet */
+-		if ((*pkt_hdr & AX_RXHDR_CRC_ERR) ||
+-		    (*pkt_hdr & AX_RXHDR_DROP_ERR)) {
+-			skb_pull(skb, (pkt_len + 7) & 0xFFF8);
+-			pkt_hdr++;
+-			continue;
+-		}
+-
+-		if (pkt_cnt == 0) {
+-			skb->len = pkt_len;
+-			/* Skip IP alignment pseudo header */
+-			skb_pull(skb, 2);
+-			skb_set_tail_pointer(skb, skb->len);
+-			skb->truesize = pkt_len + sizeof(struct sk_buff);
+-			ax88179_rx_checksum(skb, pkt_hdr);
+-			return 1;
+-		}
++		if (pkt_len > skb->len)
++			return 0;
+ 
+-		ax_skb = skb_clone(skb, GFP_ATOMIC);
+-		if (ax_skb) {
++		/* Check CRC or runt packet */
++		if (((*pkt_hdr & (AX_RXHDR_CRC_ERR | AX_RXHDR_DROP_ERR)) == 0) &&
++		    pkt_len >= 2 + ETH_HLEN) {
++			bool last = (pkt_cnt == 0);
++
++			if (last) {
++				ax_skb = skb;
++			} else {
++				ax_skb = skb_clone(skb, GFP_ATOMIC);
++				if (!ax_skb)
++					return 0;
++			}
+ 			ax_skb->len = pkt_len;
+ 			/* Skip IP alignment pseudo header */
+ 			skb_pull(ax_skb, 2);
+ 			skb_set_tail_pointer(ax_skb, ax_skb->len);
+ 			ax_skb->truesize = pkt_len + sizeof(struct sk_buff);
+ 			ax88179_rx_checksum(ax_skb, pkt_hdr);
++
++			if (last)
++				return 1;
++
+ 			usbnet_skb_return(dev, ax_skb);
+-		} else {
+-			return 0;
+ 		}
+ 
+-		skb_pull(skb, (pkt_len + 7) & 0xFFF8);
+-		pkt_hdr++;
++		/* Trim this packet away from the SKB */
++		if (!skb_pull(skb, (pkt_len + 7) & 0xFFF8))
++			return 0;
+ 	}
+-	return 1;
+ }
+ 
+ static struct sk_buff *
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index aef66f8eecee1..f7e3eb309a26e 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -256,9 +256,10 @@ static void __veth_xdp_flush(struct veth_rq *rq)
+ {
+ 	/* Write ptr_ring before reading rx_notify_masked */
+ 	smp_mb();
+-	if (!rq->rx_notify_masked) {
+-		rq->rx_notify_masked = true;
+-		napi_schedule(&rq->xdp_napi);
++	if (!READ_ONCE(rq->rx_notify_masked) &&
++	    napi_schedule_prep(&rq->xdp_napi)) {
++		WRITE_ONCE(rq->rx_notify_masked, true);
++		__napi_schedule(&rq->xdp_napi);
+ 	}
+ }
+ 
+@@ -852,8 +853,10 @@ static int veth_poll(struct napi_struct *napi, int budget)
+ 		/* Write rx_notify_masked before reading ptr_ring */
+ 		smp_store_mb(rq->rx_notify_masked, false);
+ 		if (unlikely(!__ptr_ring_empty(&rq->xdp_ring))) {
+-			rq->rx_notify_masked = true;
+-			napi_schedule(&rq->xdp_napi);
++			if (napi_schedule_prep(&rq->xdp_napi)) {
++				WRITE_ONCE(rq->rx_notify_masked, true);
++				__napi_schedule(&rq->xdp_napi);
++			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 1b85349f57af0..97afeb898b253 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3198,7 +3198,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 				NVME_QUIRK_DEALLOCATE_ZEROES, },
+ 	{ PCI_VDEVICE(INTEL, 0x0a54),	/* Intel P4500/P4600 */
+ 		.driver_data = NVME_QUIRK_STRIPE_SIZE |
+-				NVME_QUIRK_DEALLOCATE_ZEROES, },
++				NVME_QUIRK_DEALLOCATE_ZEROES |
++				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ 	{ PCI_VDEVICE(INTEL, 0x0a55),	/* Dell Express Flash P4600 */
+ 		.driver_data = NVME_QUIRK_STRIPE_SIZE |
+ 				NVME_QUIRK_DEALLOCATE_ZEROES, },
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index e99d439894187..662028d7a1c6a 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -904,7 +904,15 @@ static inline void nvme_tcp_done_send_req(struct nvme_tcp_queue *queue)
+ 
+ static void nvme_tcp_fail_request(struct nvme_tcp_request *req)
+ {
+-	nvme_tcp_end_request(blk_mq_rq_from_pdu(req), NVME_SC_HOST_PATH_ERROR);
++	if (nvme_tcp_async_req(req)) {
++		union nvme_result res = {};
++
++		nvme_complete_async_event(&req->queue->ctrl->ctrl,
++				cpu_to_le16(NVME_SC_HOST_PATH_ERROR), &res);
++	} else {
++		nvme_tcp_end_request(blk_mq_rq_from_pdu(req),
++				NVME_SC_HOST_PATH_ERROR);
++	}
+ }
+ 
+ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+diff --git a/drivers/phy/ti/phy-j721e-wiz.c b/drivers/phy/ti/phy-j721e-wiz.c
+index dceac77148721..5536b8f4bfd13 100644
+--- a/drivers/phy/ti/phy-j721e-wiz.c
++++ b/drivers/phy/ti/phy-j721e-wiz.c
+@@ -177,6 +177,7 @@ static const struct clk_div_table clk_div_table[] = {
+ 	{ .val = 1, .div = 2, },
+ 	{ .val = 2, .div = 4, },
+ 	{ .val = 3, .div = 8, },
++	{ /* sentinel */ },
+ };
+ 
+ static struct wiz_clk_div_sel clk_div_sel[] = {
+diff --git a/drivers/phy/xilinx/phy-zynqmp.c b/drivers/phy/xilinx/phy-zynqmp.c
+index 2b0f921b6ee3d..b8ccac6f31467 100644
+--- a/drivers/phy/xilinx/phy-zynqmp.c
++++ b/drivers/phy/xilinx/phy-zynqmp.c
+@@ -134,7 +134,8 @@
+ #define PROT_BUS_WIDTH_10		0x0
+ #define PROT_BUS_WIDTH_20		0x1
+ #define PROT_BUS_WIDTH_40		0x2
+-#define PROT_BUS_WIDTH_SHIFT		2
++#define PROT_BUS_WIDTH_SHIFT(n)		((n) * 2)
++#define PROT_BUS_WIDTH_MASK(n)		GENMASK((n) * 2 + 1, (n) * 2)
+ 
+ /* Number of GT lanes */
+ #define NUM_LANES			4
+@@ -443,12 +444,12 @@ static void xpsgtr_phy_init_sata(struct xpsgtr_phy *gtr_phy)
+ static void xpsgtr_phy_init_sgmii(struct xpsgtr_phy *gtr_phy)
+ {
+ 	struct xpsgtr_dev *gtr_dev = gtr_phy->dev;
++	u32 mask = PROT_BUS_WIDTH_MASK(gtr_phy->lane);
++	u32 val = PROT_BUS_WIDTH_10 << PROT_BUS_WIDTH_SHIFT(gtr_phy->lane);
+ 
+ 	/* Set SGMII protocol TX and RX bus width to 10 bits. */
+-	xpsgtr_write(gtr_dev, TX_PROT_BUS_WIDTH,
+-		     PROT_BUS_WIDTH_10 << (gtr_phy->lane * PROT_BUS_WIDTH_SHIFT));
+-	xpsgtr_write(gtr_dev, RX_PROT_BUS_WIDTH,
+-		     PROT_BUS_WIDTH_10 << (gtr_phy->lane * PROT_BUS_WIDTH_SHIFT));
++	xpsgtr_clr_set(gtr_dev, TX_PROT_BUS_WIDTH, mask, val);
++	xpsgtr_clr_set(gtr_dev, RX_PROT_BUS_WIDTH, mask, val);
+ 
+ 	xpsgtr_bypass_scrambler_8b10b(gtr_phy);
+ }
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 0273bf3918ff3..d1894539efc30 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -898,6 +898,16 @@ struct lpfc_hba {
+ 	uint32_t cfg_hostmem_hgp;
+ 	uint32_t cfg_log_verbose;
+ 	uint32_t cfg_enable_fc4_type;
++#define LPFC_ENABLE_FCP  1
++#define LPFC_ENABLE_NVME 2
++#define LPFC_ENABLE_BOTH 3
++#if (IS_ENABLED(CONFIG_NVME_FC))
++#define LPFC_MAX_ENBL_FC4_TYPE LPFC_ENABLE_BOTH
++#define LPFC_DEF_ENBL_FC4_TYPE LPFC_ENABLE_BOTH
++#else
++#define LPFC_MAX_ENBL_FC4_TYPE LPFC_ENABLE_FCP
++#define LPFC_DEF_ENBL_FC4_TYPE LPFC_ENABLE_FCP
++#endif
+ 	uint32_t cfg_aer_support;
+ 	uint32_t cfg_sriov_nr_virtfn;
+ 	uint32_t cfg_request_firmware_upgrade;
+@@ -918,9 +928,6 @@ struct lpfc_hba {
+ 	uint32_t cfg_ras_fwlog_func;
+ 	uint32_t cfg_enable_bbcr;	/* Enable BB Credit Recovery */
+ 	uint32_t cfg_enable_dpp;	/* Enable Direct Packet Push */
+-#define LPFC_ENABLE_FCP  1
+-#define LPFC_ENABLE_NVME 2
+-#define LPFC_ENABLE_BOTH 3
+ 	uint32_t cfg_enable_pbde;
+ 	struct nvmet_fc_target_port *targetport;
+ 	lpfc_vpd_t vpd;		/* vital product data */
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index 727b7ba4d8f82..b73d5d9494021 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -3797,8 +3797,8 @@ LPFC_ATTR_R(nvmet_mrq_post,
+  *                    3 - register both FCP and NVME
+  * Supported values are [1,3]. Default value is 3
+  */
+-LPFC_ATTR_R(enable_fc4_type, LPFC_ENABLE_BOTH,
+-	    LPFC_ENABLE_FCP, LPFC_ENABLE_BOTH,
++LPFC_ATTR_R(enable_fc4_type, LPFC_DEF_ENBL_FC4_TYPE,
++	    LPFC_ENABLE_FCP, LPFC_MAX_ENBL_FC4_TYPE,
+ 	    "Enable FC4 Protocol support - FCP / NVME");
+ 
+ /*
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 37612299a34a1..1149bfc42fe64 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -1998,7 +1998,7 @@ lpfc_handle_eratt_s4(struct lpfc_hba *phba)
+ 		}
+ 		if (reg_err1 == SLIPORT_ERR1_REG_ERR_CODE_2 &&
+ 		    reg_err2 == SLIPORT_ERR2_REG_FW_RESTART) {
+-			lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
++			lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+ 					"3143 Port Down: Firmware Update "
+ 					"Detected\n");
+ 			en_rn_msg = false;
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 1a9522baba484..4587127b67f7b 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -12402,6 +12402,7 @@ lpfc_sli4_eratt_read(struct lpfc_hba *phba)
+ 	uint32_t uerr_sta_hi, uerr_sta_lo;
+ 	uint32_t if_type, portsmphr;
+ 	struct lpfc_register portstat_reg;
++	u32 logmask;
+ 
+ 	/*
+ 	 * For now, use the SLI4 device internal unrecoverable error
+@@ -12452,7 +12453,12 @@ lpfc_sli4_eratt_read(struct lpfc_hba *phba)
+ 				readl(phba->sli4_hba.u.if_type2.ERR1regaddr);
+ 			phba->work_status[1] =
+ 				readl(phba->sli4_hba.u.if_type2.ERR2regaddr);
+-			lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
++			logmask = LOG_TRACE_EVENT;
++			if (phba->work_status[0] ==
++				SLIPORT_ERR1_REG_ERR_CODE_2 &&
++			    phba->work_status[1] == SLIPORT_ERR2_REG_FW_RESTART)
++				logmask = LOG_SLI;
++			lpfc_printf_log(phba, KERN_ERR, logmask,
+ 					"2885 Port Status Event: "
+ 					"port status reg 0x%x, "
+ 					"port smphr reg 0x%x, "
+diff --git a/drivers/scsi/myrs.c b/drivers/scsi/myrs.c
+index 78c41bbf67562..e6a6678967e52 100644
+--- a/drivers/scsi/myrs.c
++++ b/drivers/scsi/myrs.c
+@@ -2272,7 +2272,8 @@ static void myrs_cleanup(struct myrs_hba *cs)
+ 	myrs_unmap(cs);
+ 
+ 	if (cs->mmio_base) {
+-		cs->disable_intr(cs);
++		if (cs->disable_intr)
++			cs->disable_intr(cs);
+ 		iounmap(cs->mmio_base);
+ 		cs->mmio_base = NULL;
+ 	}
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index a203a4fc2674a..b22a8ab754faa 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -4057,10 +4057,22 @@ static int process_oq(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ 	unsigned long flags;
+ 	u32 regval;
+ 
++	/*
++	 * Fatal errors are programmed to be signalled in irq vector
++	 * pm8001_ha->max_q_num - 1 through pm8001_ha->main_cfg_tbl.pm80xx_tbl.
++	 * fatal_err_interrupt
++	 */
+ 	if (vec == (pm8001_ha->max_q_num - 1)) {
++		u32 mipsall_ready;
++
++		if (pm8001_ha->chip_id == chip_8008 ||
++		    pm8001_ha->chip_id == chip_8009)
++			mipsall_ready = SCRATCH_PAD_MIPSALL_READY_8PORT;
++		else
++			mipsall_ready = SCRATCH_PAD_MIPSALL_READY_16PORT;
++
+ 		regval = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1);
+-		if ((regval & SCRATCH_PAD_MIPSALL_READY) !=
+-					SCRATCH_PAD_MIPSALL_READY) {
++		if ((regval & mipsall_ready) != mipsall_ready) {
+ 			pm8001_ha->controller_fatal_error = true;
+ 			pm8001_dbg(pm8001_ha, FAIL,
+ 				   "Firmware Fatal error! Regval:0x%x\n",
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.h b/drivers/scsi/pm8001/pm80xx_hwi.h
+index 701951a0f715b..0dfe9034f7e7f 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.h
++++ b/drivers/scsi/pm8001/pm80xx_hwi.h
+@@ -1391,8 +1391,12 @@ typedef struct SASProtocolTimerConfig SASProtocolTimerConfig_t;
+ #define SCRATCH_PAD_BOOT_LOAD_SUCCESS	0x0
+ #define SCRATCH_PAD_IOP0_READY		0xC00
+ #define SCRATCH_PAD_IOP1_READY		0x3000
+-#define SCRATCH_PAD_MIPSALL_READY	(SCRATCH_PAD_IOP1_READY | \
++#define SCRATCH_PAD_MIPSALL_READY_16PORT	(SCRATCH_PAD_IOP1_READY | \
+ 					SCRATCH_PAD_IOP0_READY | \
++					SCRATCH_PAD_ILA_READY | \
++					SCRATCH_PAD_RAAE_READY)
++#define SCRATCH_PAD_MIPSALL_READY_8PORT	(SCRATCH_PAD_IOP0_READY | \
++					SCRATCH_PAD_ILA_READY | \
+ 					SCRATCH_PAD_RAAE_READY)
+ 
+ /* boot loader state */
+diff --git a/drivers/scsi/qedf/qedf_io.c b/drivers/scsi/qedf/qedf_io.c
+index 63f99f4eeed97..472374d83cede 100644
+--- a/drivers/scsi/qedf/qedf_io.c
++++ b/drivers/scsi/qedf/qedf_io.c
+@@ -2268,6 +2268,7 @@ process_els:
+ 	    io_req->tm_flags == FCP_TMF_TGT_RESET) {
+ 		clear_bit(QEDF_CMD_OUTSTANDING, &io_req->flags);
+ 		io_req->sc_cmd = NULL;
++		kref_put(&io_req->refcount, qedf_release_cmd);
+ 		complete(&io_req->tm_done);
+ 	}
+ 
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index c63dcc39f76c2..e64457f53da86 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -1859,6 +1859,7 @@ static int qedf_vport_create(struct fc_vport *vport, bool disabled)
+ 	vport_qedf->cmd_mgr = base_qedf->cmd_mgr;
+ 	init_completion(&vport_qedf->flogi_compl);
+ 	INIT_LIST_HEAD(&vport_qedf->fcports);
++	INIT_DELAYED_WORK(&vport_qedf->stag_work, qedf_stag_change_work);
+ 
+ 	rc = qedf_vport_libfc_config(vport, vn_port);
+ 	if (rc) {
+diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
+index e49505534d498..0f2430fb398db 100644
+--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
++++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
+@@ -92,6 +92,11 @@ static int ufshcd_parse_clock_info(struct ufs_hba *hba)
+ 		clki->min_freq = clkfreq[i];
+ 		clki->max_freq = clkfreq[i+1];
+ 		clki->name = devm_kstrdup(dev, name, GFP_KERNEL);
++		if (!clki->name) {
++			ret = -ENOMEM;
++			goto out;
++		}
++
+ 		if (!strcmp(name, "ref_clk"))
+ 			clki->keep_link_active = true;
+ 		dev_dbg(dev, "%s: min %u max %u name %s\n", "freq-table-hz",
+@@ -128,6 +133,8 @@ static int ufshcd_populate_vreg(struct device *dev, const char *name,
+ 		return -ENOMEM;
+ 
+ 	vreg->name = devm_kstrdup(dev, name, GFP_KERNEL);
++	if (!vreg->name)
++		return -ENOMEM;
+ 
+ 	snprintf(prop_name, MAX_PROP_SIZE, "%s-max-microamp", name);
+ 	if (of_property_read_u32(np, prop_name, &vreg->max_uA)) {
+diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
+index 6795e1f0e8f8c..1d999228efc85 100644
+--- a/drivers/scsi/ufs/ufshci.h
++++ b/drivers/scsi/ufs/ufshci.h
+@@ -138,7 +138,8 @@ enum {
+ #define INT_FATAL_ERRORS	(DEVICE_FATAL_ERROR |\
+ 				CONTROLLER_FATAL_ERROR |\
+ 				SYSTEM_BUS_FATAL_ERROR |\
+-				CRYPTO_ENGINE_FATAL_ERROR)
++				CRYPTO_ENGINE_FATAL_ERROR |\
++				UIC_LINK_LOST)
+ 
+ /* HCS - Host Controller Status 30h */
+ #define DEVICE_PRESENT				0x1
+diff --git a/drivers/staging/fbtft/fbtft.h b/drivers/staging/fbtft/fbtft.h
+index 76f8c090a8370..06afaa9d505ba 100644
+--- a/drivers/staging/fbtft/fbtft.h
++++ b/drivers/staging/fbtft/fbtft.h
+@@ -332,7 +332,10 @@ static int __init fbtft_driver_module_init(void)                           \
+ 	ret = spi_register_driver(&fbtft_driver_spi_driver);               \
+ 	if (ret < 0)                                                       \
+ 		return ret;                                                \
+-	return platform_driver_register(&fbtft_driver_platform_driver);    \
++	ret = platform_driver_register(&fbtft_driver_platform_driver);     \
++	if (ret < 0)                                                       \
++		spi_unregister_driver(&fbtft_driver_spi_driver);           \
++	return ret;                                                        \
+ }                                                                          \
+ 									   \
+ static void __exit fbtft_driver_module_exit(void)                          \
+diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
+index 8075f60fd02c3..2d5cf1714ae05 100644
+--- a/drivers/target/iscsi/iscsi_target_tpg.c
++++ b/drivers/target/iscsi/iscsi_target_tpg.c
+@@ -443,6 +443,9 @@ static bool iscsit_tpg_check_network_portal(
+ 				break;
+ 		}
+ 		spin_unlock(&tpg->tpg_np_lock);
++
++		if (match)
++			break;
+ 	}
+ 	spin_unlock(&tiqn->tiqn_tpg_lock);
+ 
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index e4f4b2186bcec..128461bd04bb9 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -1372,7 +1372,7 @@ handle_newline:
+ 			put_tty_queue(c, ldata);
+ 			smp_store_release(&ldata->canon_head, ldata->read_head);
+ 			kill_fasync(&tty->fasync, SIGIO, POLL_IN);
+-			wake_up_interruptible_poll(&tty->read_wait, EPOLLIN);
++			wake_up_interruptible_poll(&tty->read_wait, EPOLLIN | EPOLLRDNORM);
+ 			return 0;
+ 		}
+ 	}
+@@ -1653,7 +1653,7 @@ static void __receive_buf(struct tty_struct *tty, const unsigned char *cp,
+ 
+ 	if (read_cnt(ldata)) {
+ 		kill_fasync(&tty->fasync, SIGIO, POLL_IN);
+-		wake_up_interruptible_poll(&tty->read_wait, EPOLLIN);
++		wake_up_interruptible_poll(&tty->read_wait, EPOLLIN | EPOLLRDNORM);
+ 	}
+ }
+ 
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index 90e4fcd3dc39a..a9c6ea8986af0 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -699,8 +699,8 @@ static int vt_setactivate(struct vt_setactivate __user *sa)
+ 	if (vsa.console == 0 || vsa.console > MAX_NR_CONSOLES)
+ 		return -ENXIO;
+ 
+-	vsa.console = array_index_nospec(vsa.console, MAX_NR_CONSOLES + 1);
+ 	vsa.console--;
++	vsa.console = array_index_nospec(vsa.console, MAX_NR_CONSOLES);
+ 	console_lock();
+ 	ret = vc_allocate(vsa.console);
+ 	if (ret) {
+@@ -945,6 +945,7 @@ int vt_ioctl(struct tty_struct *tty,
+ 			return -ENXIO;
+ 
+ 		arg--;
++		arg = array_index_nospec(arg, MAX_NR_CONSOLES);
+ 		console_lock();
+ 		ret = vc_allocate(arg);
+ 		console_unlock();
+diff --git a/drivers/usb/common/ulpi.c b/drivers/usb/common/ulpi.c
+index 82fe8e00a96a3..3c705f1bead8c 100644
+--- a/drivers/usb/common/ulpi.c
++++ b/drivers/usb/common/ulpi.c
+@@ -132,6 +132,7 @@ static const struct attribute_group *ulpi_dev_attr_groups[] = {
+ 
+ static void ulpi_dev_release(struct device *dev)
+ {
++	of_node_put(dev->of_node);
+ 	kfree(to_ulpi_dev(dev));
+ }
+ 
+@@ -249,12 +250,16 @@ static int ulpi_register(struct device *dev, struct ulpi *ulpi)
+ 		return ret;
+ 
+ 	ret = ulpi_read_id(ulpi);
+-	if (ret)
++	if (ret) {
++		of_node_put(ulpi->dev.of_node);
+ 		return ret;
++	}
+ 
+ 	ret = device_register(&ulpi->dev);
+-	if (ret)
++	if (ret) {
++		put_device(&ulpi->dev);
+ 		return ret;
++	}
+ 
+ 	dev_dbg(&ulpi->dev, "registered ULPI PHY: vendor %04x, product %04x\n",
+ 		ulpi->id.vendor, ulpi->id.product);
+@@ -301,7 +306,6 @@ EXPORT_SYMBOL_GPL(ulpi_register_interface);
+  */
+ void ulpi_unregister_interface(struct ulpi *ulpi)
+ {
+-	of_node_put(ulpi->dev.of_node);
+ 	device_unregister(&ulpi->dev);
+ }
+ EXPORT_SYMBOL_GPL(ulpi_unregister_interface);
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 449f19c3633c2..ec54971063f8f 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -5032,7 +5032,7 @@ int dwc2_hsotg_suspend(struct dwc2_hsotg *hsotg)
+ 		hsotg->gadget.speed = USB_SPEED_UNKNOWN;
+ 		spin_unlock_irqrestore(&hsotg->lock, flags);
+ 
+-		for (ep = 0; ep < hsotg->num_of_eps; ep++) {
++		for (ep = 1; ep < hsotg->num_of_eps; ep++) {
+ 			if (hsotg->eps_in[ep])
+ 				dwc2_hsotg_ep_disable_lock(&hsotg->eps_in[ep]->ep);
+ 			if (hsotg->eps_out[ep])
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index e9a87e1f49508..9095ce52c28c6 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1072,6 +1072,19 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb,
+ 	if (usb_endpoint_xfer_bulk(dep->endpoint.desc) && dep->stream_capable)
+ 		trb->ctrl |= DWC3_TRB_CTRL_SID_SOFN(stream_id);
+ 
++	/*
++	 * As per data book 4.2.3.2TRB Control Bit Rules section
++	 *
++	 * The controller autonomously checks the HWO field of a TRB to determine if the
++	 * entire TRB is valid. Therefore, software must ensure that the rest of the TRB
++	 * is valid before setting the HWO field to '1'. In most systems, this means that
++	 * software must update the fourth DWORD of a TRB last.
++	 *
++	 * However there is a possibility of CPU re-ordering here which can cause
++	 * controller to observe the HWO bit set prematurely.
++	 * Add a write memory barrier to prevent CPU re-ordering.
++	 */
++	wmb();
+ 	trb->ctrl |= DWC3_TRB_CTRL_HWO;
+ 
+ 	dwc3_ep_inc_enq(dep);
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 8bec0cbf844ed..a980799900e71 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -1944,6 +1944,9 @@ unknown:
+ 				if (w_index != 0x5 || (w_value >> 8))
+ 					break;
+ 				interface = w_value & 0xFF;
++				if (interface >= MAX_CONFIG_INTERFACES ||
++				    !os_desc_cfg->interface[interface])
++					break;
+ 				buf[6] = w_index;
+ 				count = count_ext_prop(os_desc_cfg,
+ 					interface);
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index d8652321e15e9..bb0d92837f677 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1710,16 +1710,24 @@ static void ffs_data_put(struct ffs_data *ffs)
+ 
+ static void ffs_data_closed(struct ffs_data *ffs)
+ {
++	struct ffs_epfile *epfiles;
++	unsigned long flags;
++
+ 	ENTER();
+ 
+ 	if (atomic_dec_and_test(&ffs->opened)) {
+ 		if (ffs->no_disconnect) {
+ 			ffs->state = FFS_DEACTIVATED;
+-			if (ffs->epfiles) {
+-				ffs_epfiles_destroy(ffs->epfiles,
+-						   ffs->eps_count);
+-				ffs->epfiles = NULL;
+-			}
++			spin_lock_irqsave(&ffs->eps_lock, flags);
++			epfiles = ffs->epfiles;
++			ffs->epfiles = NULL;
++			spin_unlock_irqrestore(&ffs->eps_lock,
++							flags);
++
++			if (epfiles)
++				ffs_epfiles_destroy(epfiles,
++						 ffs->eps_count);
++
+ 			if (ffs->setup_state == FFS_SETUP_PENDING)
+ 				__ffs_ep0_stall(ffs);
+ 		} else {
+@@ -1766,14 +1774,27 @@ static struct ffs_data *ffs_data_new(const char *dev_name)
+ 
+ static void ffs_data_clear(struct ffs_data *ffs)
+ {
++	struct ffs_epfile *epfiles;
++	unsigned long flags;
++
+ 	ENTER();
+ 
+ 	ffs_closed(ffs);
+ 
+ 	BUG_ON(ffs->gadget);
+ 
+-	if (ffs->epfiles) {
+-		ffs_epfiles_destroy(ffs->epfiles, ffs->eps_count);
++	spin_lock_irqsave(&ffs->eps_lock, flags);
++	epfiles = ffs->epfiles;
++	ffs->epfiles = NULL;
++	spin_unlock_irqrestore(&ffs->eps_lock, flags);
++
++	/*
++	 * potential race possible between ffs_func_eps_disable
++	 * & ffs_epfile_release therefore maintaining a local
++	 * copy of epfile will save us from use-after-free.
++	 */
++	if (epfiles) {
++		ffs_epfiles_destroy(epfiles, ffs->eps_count);
+ 		ffs->epfiles = NULL;
+ 	}
+ 
+@@ -1921,12 +1942,15 @@ static void ffs_epfiles_destroy(struct ffs_epfile *epfiles, unsigned count)
+ 
+ static void ffs_func_eps_disable(struct ffs_function *func)
+ {
+-	struct ffs_ep *ep         = func->eps;
+-	struct ffs_epfile *epfile = func->ffs->epfiles;
+-	unsigned count            = func->ffs->eps_count;
++	struct ffs_ep *ep;
++	struct ffs_epfile *epfile;
++	unsigned short count;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&func->ffs->eps_lock, flags);
++	count = func->ffs->eps_count;
++	epfile = func->ffs->epfiles;
++	ep = func->eps;
+ 	while (count--) {
+ 		/* pending requests get nuked */
+ 		if (likely(ep->ep))
+@@ -1944,14 +1968,18 @@ static void ffs_func_eps_disable(struct ffs_function *func)
+ 
+ static int ffs_func_eps_enable(struct ffs_function *func)
+ {
+-	struct ffs_data *ffs      = func->ffs;
+-	struct ffs_ep *ep         = func->eps;
+-	struct ffs_epfile *epfile = ffs->epfiles;
+-	unsigned count            = ffs->eps_count;
++	struct ffs_data *ffs;
++	struct ffs_ep *ep;
++	struct ffs_epfile *epfile;
++	unsigned short count;
+ 	unsigned long flags;
+ 	int ret = 0;
+ 
+ 	spin_lock_irqsave(&func->ffs->eps_lock, flags);
++	ffs = func->ffs;
++	ep = func->eps;
++	epfile = ffs->epfiles;
++	count = ffs->eps_count;
+ 	while(count--) {
+ 		ep->ep->driver_data = ep;
+ 
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index dd960cea642f3..11cc6056b5902 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -176,7 +176,7 @@ static struct uac2_input_terminal_descriptor io_in_it_desc = {
+ 
+ 	.bDescriptorSubtype = UAC_INPUT_TERMINAL,
+ 	/* .bTerminalID = DYNAMIC */
+-	.wTerminalType = cpu_to_le16(UAC_INPUT_TERMINAL_UNDEFINED),
++	.wTerminalType = cpu_to_le16(UAC_INPUT_TERMINAL_MICROPHONE),
+ 	.bAssocTerminal = 0,
+ 	/* .bCSourceID = DYNAMIC */
+ 	.iChannelNames = 0,
+@@ -204,7 +204,7 @@ static struct uac2_output_terminal_descriptor io_out_ot_desc = {
+ 
+ 	.bDescriptorSubtype = UAC_OUTPUT_TERMINAL,
+ 	/* .bTerminalID = DYNAMIC */
+-	.wTerminalType = cpu_to_le16(UAC_OUTPUT_TERMINAL_UNDEFINED),
++	.wTerminalType = cpu_to_le16(UAC_OUTPUT_TERMINAL_SPEAKER),
+ 	.bAssocTerminal = 0,
+ 	/* .bSourceID = DYNAMIC */
+ 	/* .bCSourceID = DYNAMIC */
+diff --git a/drivers/usb/gadget/function/rndis.c b/drivers/usb/gadget/function/rndis.c
+index 64de9f1b874c5..d9ed651f06ac3 100644
+--- a/drivers/usb/gadget/function/rndis.c
++++ b/drivers/usb/gadget/function/rndis.c
+@@ -637,14 +637,17 @@ static int rndis_set_response(struct rndis_params *params,
+ 	rndis_set_cmplt_type *resp;
+ 	rndis_resp_t *r;
+ 
++	BufLength = le32_to_cpu(buf->InformationBufferLength);
++	BufOffset = le32_to_cpu(buf->InformationBufferOffset);
++	if ((BufLength > RNDIS_MAX_TOTAL_SIZE) ||
++	    (BufOffset + 8 >= RNDIS_MAX_TOTAL_SIZE))
++		    return -EINVAL;
++
+ 	r = rndis_add_response(params, sizeof(rndis_set_cmplt_type));
+ 	if (!r)
+ 		return -ENOMEM;
+ 	resp = (rndis_set_cmplt_type *)r->buf;
+ 
+-	BufLength = le32_to_cpu(buf->InformationBufferLength);
+-	BufOffset = le32_to_cpu(buf->InformationBufferOffset);
+-
+ #ifdef	VERBOSE_DEBUG
+ 	pr_debug("%s: Length: %d\n", __func__, BufLength);
+ 	pr_debug("%s: Offset: %d\n", __func__, BufOffset);
+diff --git a/drivers/usb/gadget/legacy/raw_gadget.c b/drivers/usb/gadget/legacy/raw_gadget.c
+index 062dfac303996..33efa6915b91d 100644
+--- a/drivers/usb/gadget/legacy/raw_gadget.c
++++ b/drivers/usb/gadget/legacy/raw_gadget.c
+@@ -1003,7 +1003,7 @@ static int raw_process_ep_io(struct raw_dev *dev, struct usb_raw_ep_io *io,
+ 		ret = -EBUSY;
+ 		goto out_unlock;
+ 	}
+-	if ((in && !ep->ep->caps.dir_in) || (!in && ep->ep->caps.dir_in)) {
++	if (in != usb_endpoint_dir_in(ep->ep->desc)) {
+ 		dev_dbg(&dev->gadget->dev, "fail, wrong direction\n");
+ 		ret = -EINVAL;
+ 		goto out_unlock;
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 57d417a7c3e0a..601829a6b4bad 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2378,6 +2378,8 @@ static void handle_ext_role_switch_states(struct device *dev,
+ 	switch (role) {
+ 	case USB_ROLE_NONE:
+ 		usb3->connection_state = USB_ROLE_NONE;
++		if (cur_role == USB_ROLE_HOST)
++			device_release_driver(host);
+ 		if (usb3->driver)
+ 			usb3_disconnect(usb3);
+ 		usb3_vbus_out(usb3, false);
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index f26861246f653..8716ada0b1387 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -85,6 +85,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x1a86, 0x5523) },
+ 	{ USB_DEVICE(0x1a86, 0x7522) },
+ 	{ USB_DEVICE(0x1a86, 0x7523) },
++	{ USB_DEVICE(0x2184, 0x0057) },
+ 	{ USB_DEVICE(0x4348, 0x5523) },
+ 	{ USB_DEVICE(0x9986, 0x7523) },
+ 	{ },
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index f906c1308f9f9..7ac668023da87 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -55,6 +55,7 @@ static void cp210x_enable_event_mode(struct usb_serial_port *port);
+ static void cp210x_disable_event_mode(struct usb_serial_port *port);
+ 
+ static const struct usb_device_id id_table[] = {
++	{ USB_DEVICE(0x0404, 0x034C) },	/* NCR Retail IO Box */
+ 	{ USB_DEVICE(0x045B, 0x0053) }, /* Renesas RX610 RX-Stick */
+ 	{ USB_DEVICE(0x0471, 0x066A) }, /* AKTAKOM ACE-1001 cable */
+ 	{ USB_DEVICE(0x0489, 0xE000) }, /* Pirelli Broadband S.p.A, DP-L10 SIP/GSM Mobile */
+@@ -72,6 +73,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x0FCF, 0x1004) }, /* Dynastream ANT2USB */
+ 	{ USB_DEVICE(0x0FCF, 0x1006) }, /* Dynastream ANT development board */
+ 	{ USB_DEVICE(0x0FDE, 0xCA05) }, /* OWL Wireless Electricity Monitor CM-160 */
++	{ USB_DEVICE(0x106F, 0x0003) },	/* CPI / Money Controls Bulk Coin Recycler */
+ 	{ USB_DEVICE(0x10A6, 0xAA26) }, /* Knock-off DCU-11 cable */
+ 	{ USB_DEVICE(0x10AB, 0x10C5) }, /* Siemens MC60 Cable */
+ 	{ USB_DEVICE(0x10B5, 0xAC70) }, /* Nokia CA-42 USB */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index dfcf79bdfddce..b74621dc2a658 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -969,6 +969,7 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_VX_023_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_VX_034_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_101_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_159_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_1_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_2_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_3_PID) },
+@@ -977,12 +978,14 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_6_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_7_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_8_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_235_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_257_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_1_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_2_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_3_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_4_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_313_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_320_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_324_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_346_1_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_346_2_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 755858ca20bac..d1a9564697a4b 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -1506,6 +1506,9 @@
+ #define BRAINBOXES_VX_023_PID		0x1003 /* VX-023 ExpressCard 1 Port RS422/485 */
+ #define BRAINBOXES_VX_034_PID		0x1004 /* VX-034 ExpressCard 2 Port RS422/485 */
+ #define BRAINBOXES_US_101_PID		0x1011 /* US-101 1xRS232 */
++#define BRAINBOXES_US_159_PID		0x1021 /* US-159 1xRS232 */
++#define BRAINBOXES_US_235_PID		0x1017 /* US-235 1xRS232 */
++#define BRAINBOXES_US_320_PID		0x1019 /* US-320 1xRS422/485 */
+ #define BRAINBOXES_US_324_PID		0x1013 /* US-324 1xRS422/485 1Mbaud */
+ #define BRAINBOXES_US_606_1_PID		0x2001 /* US-606 6 Port RS232 Serial Port 1 and 2 */
+ #define BRAINBOXES_US_606_2_PID		0x2002 /* US-606 6 Port RS232 Serial Port 3 and 4 */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 21b1488fe4461..c39c505b081b1 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1649,6 +1649,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x1476, 0xff) },	/* GosunCn ZTE WeLink ME3630 (ECM/NCM mode) */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1481, 0xff, 0x00, 0x00) }, /* ZTE MF871A */
++	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1485, 0xff, 0xff, 0xff),  /* ZTE MF286D */
++	  .driver_info = RSVD(5) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) },
+diff --git a/fs/nfs/callback.h b/fs/nfs/callback.h
+index 6a2033131c068..ccd4f245cae24 100644
+--- a/fs/nfs/callback.h
++++ b/fs/nfs/callback.h
+@@ -170,7 +170,7 @@ struct cb_devicenotifyitem {
+ };
+ 
+ struct cb_devicenotifyargs {
+-	int				 ndevs;
++	uint32_t			 ndevs;
+ 	struct cb_devicenotifyitem	 *devs;
+ };
+ 
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index be546ece383f5..b44219ce60b86 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -353,7 +353,7 @@ __be32 nfs4_callback_devicenotify(void *argp, void *resp,
+ 				  struct cb_process_state *cps)
+ {
+ 	struct cb_devicenotifyargs *args = argp;
+-	int i;
++	uint32_t i;
+ 	__be32 res = 0;
+ 	struct nfs_client *clp = cps->clp;
+ 	struct nfs_server *server = NULL;
+diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
+index 79ff172eb1c81..1725079a05276 100644
+--- a/fs/nfs/callback_xdr.c
++++ b/fs/nfs/callback_xdr.c
+@@ -259,11 +259,9 @@ __be32 decode_devicenotify_args(struct svc_rqst *rqstp,
+ 				void *argp)
+ {
+ 	struct cb_devicenotifyargs *args = argp;
++	uint32_t tmp, n, i;
+ 	__be32 *p;
+ 	__be32 status = 0;
+-	u32 tmp;
+-	int n, i;
+-	args->ndevs = 0;
+ 
+ 	/* Num of device notifications */
+ 	p = xdr_inline_decode(xdr, sizeof(uint32_t));
+@@ -272,7 +270,7 @@ __be32 decode_devicenotify_args(struct svc_rqst *rqstp,
+ 		goto out;
+ 	}
+ 	n = ntohl(*p++);
+-	if (n <= 0)
++	if (n == 0)
+ 		goto out;
+ 	if (n > ULONG_MAX / sizeof(*args->devs)) {
+ 		status = htonl(NFS4ERR_BADXDR);
+@@ -331,19 +329,21 @@ __be32 decode_devicenotify_args(struct svc_rqst *rqstp,
+ 			dev->cbd_immediate = 0;
+ 		}
+ 
+-		args->ndevs++;
+-
+ 		dprintk("%s: type %d layout 0x%x immediate %d\n",
+ 			__func__, dev->cbd_notify_type, dev->cbd_layout_type,
+ 			dev->cbd_immediate);
+ 	}
++	args->ndevs = n;
++	dprintk("%s: ndevs %d\n", __func__, args->ndevs);
++	return 0;
++err:
++	kfree(args->devs);
+ out:
++	args->devs = NULL;
++	args->ndevs = 0;
+ 	dprintk("%s: status %d ndevs %d\n",
+ 		__func__, ntohl(status), args->ndevs);
+ 	return status;
+-err:
+-	kfree(args->devs);
+-	goto out;
+ }
+ 
+ static __be32 decode_sessionid(struct xdr_stream *xdr,
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 723d425796cca..818ff8b1b99da 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -177,6 +177,7 @@ struct nfs_client *nfs_alloc_client(const struct nfs_client_initdata *cl_init)
+ 	INIT_LIST_HEAD(&clp->cl_superblocks);
+ 	clp->cl_rpcclient = ERR_PTR(-EINVAL);
+ 
++	clp->cl_flags = cl_init->init_flags;
+ 	clp->cl_proto = cl_init->proto;
+ 	clp->cl_nconnect = cl_init->nconnect;
+ 	clp->cl_net = get_net(cl_init->net);
+@@ -426,7 +427,6 @@ struct nfs_client *nfs_get_client(const struct nfs_client_initdata *cl_init)
+ 			list_add_tail(&new->cl_share_link,
+ 					&nn->nfs_client_list);
+ 			spin_unlock(&nn->nfs_client_lock);
+-			new->cl_flags = cl_init->init_flags;
+ 			return rpc_ops->init_client(new, cl_init);
+ 		}
+ 
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index a23b7a5dec9ee..682c7b45d8b71 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -2489,7 +2489,7 @@ static struct nfs_access_entry *nfs_access_search_rbtree(struct inode *inode, co
+ 	return NULL;
+ }
+ 
+-static int nfs_access_get_cached_locked(struct inode *inode, const struct cred *cred, struct nfs_access_entry *res, bool may_block)
++static int nfs_access_get_cached_locked(struct inode *inode, const struct cred *cred, u32 *mask, bool may_block)
+ {
+ 	struct nfs_inode *nfsi = NFS_I(inode);
+ 	struct nfs_access_entry *cache;
+@@ -2519,8 +2519,7 @@ static int nfs_access_get_cached_locked(struct inode *inode, const struct cred *
+ 		spin_lock(&inode->i_lock);
+ 		retry = false;
+ 	}
+-	res->cred = cache->cred;
+-	res->mask = cache->mask;
++	*mask = cache->mask;
+ 	list_move_tail(&cache->lru, &nfsi->access_cache_entry_lru);
+ 	err = 0;
+ out:
+@@ -2532,7 +2531,7 @@ out_zap:
+ 	return -ENOENT;
+ }
+ 
+-static int nfs_access_get_cached_rcu(struct inode *inode, const struct cred *cred, struct nfs_access_entry *res)
++static int nfs_access_get_cached_rcu(struct inode *inode, const struct cred *cred, u32 *mask)
+ {
+ 	/* Only check the most recently returned cache entry,
+ 	 * but do it without locking.
+@@ -2554,22 +2553,21 @@ static int nfs_access_get_cached_rcu(struct inode *inode, const struct cred *cre
+ 		goto out;
+ 	if (nfs_check_cache_invalid(inode, NFS_INO_INVALID_ACCESS))
+ 		goto out;
+-	res->cred = cache->cred;
+-	res->mask = cache->mask;
++	*mask = cache->mask;
+ 	err = 0;
+ out:
+ 	rcu_read_unlock();
+ 	return err;
+ }
+ 
+-int nfs_access_get_cached(struct inode *inode, const struct cred *cred, struct
+-nfs_access_entry *res, bool may_block)
++int nfs_access_get_cached(struct inode *inode, const struct cred *cred,
++			  u32 *mask, bool may_block)
+ {
+ 	int status;
+ 
+-	status = nfs_access_get_cached_rcu(inode, cred, res);
++	status = nfs_access_get_cached_rcu(inode, cred, mask);
+ 	if (status != 0)
+-		status = nfs_access_get_cached_locked(inode, cred, res,
++		status = nfs_access_get_cached_locked(inode, cred, mask,
+ 		    may_block);
+ 
+ 	return status;
+@@ -2690,7 +2688,7 @@ static int nfs_do_access(struct inode *inode, const struct cred *cred, int mask)
+ 
+ 	trace_nfs_access_enter(inode);
+ 
+-	status = nfs_access_get_cached(inode, cred, &cache, may_block);
++	status = nfs_access_get_cached(inode, cred, &cache.mask, may_block);
+ 	if (status == 0)
+ 		goto out_cached;
+ 
+diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
+index 3e344bec3647b..6d916563356ef 100644
+--- a/fs/nfs/nfs4_fs.h
++++ b/fs/nfs/nfs4_fs.h
+@@ -281,7 +281,8 @@ struct rpc_clnt *nfs4_negotiate_security(struct rpc_clnt *, struct inode *,
+ int nfs4_submount(struct fs_context *, struct nfs_server *);
+ int nfs4_replace_transport(struct nfs_server *server,
+ 				const struct nfs4_fs_locations *locations);
+-
++size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr *sa,
++			     size_t salen, struct net *net, int port);
+ /* nfs4proc.c */
+ extern int nfs4_handle_exception(struct nfs_server *, int, struct nfs4_exception *);
+ extern int nfs4_async_handle_error(struct rpc_task *task,
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index 6d74f2e2de461..0e6437b08a3a5 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -1330,8 +1330,11 @@ int nfs4_update_server(struct nfs_server *server, const char *hostname,
+ 	}
+ 	nfs_put_client(clp);
+ 
+-	if (server->nfs_client->cl_hostname == NULL)
++	if (server->nfs_client->cl_hostname == NULL) {
+ 		server->nfs_client->cl_hostname = kstrdup(hostname, GFP_KERNEL);
++		if (server->nfs_client->cl_hostname == NULL)
++			return -ENOMEM;
++	}
+ 	nfs_server_insert_lists(server);
+ 
+ 	return nfs_probe_destination(server);
+diff --git a/fs/nfs/nfs4namespace.c b/fs/nfs/nfs4namespace.c
+index 873342308dc0d..3680c8da510c9 100644
+--- a/fs/nfs/nfs4namespace.c
++++ b/fs/nfs/nfs4namespace.c
+@@ -164,16 +164,21 @@ static int nfs4_validate_fspath(struct dentry *dentry,
+ 	return 0;
+ }
+ 
+-static size_t nfs_parse_server_name(char *string, size_t len,
+-		struct sockaddr *sa, size_t salen, struct net *net)
++size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr *sa,
++			     size_t salen, struct net *net, int port)
+ {
+ 	ssize_t ret;
+ 
+ 	ret = rpc_pton(net, string, len, sa, salen);
+ 	if (ret == 0) {
+-		ret = nfs_dns_resolve_name(net, string, len, sa, salen);
+-		if (ret < 0)
+-			ret = 0;
++		ret = rpc_uaddr2sockaddr(net, string, len, sa, salen);
++		if (ret == 0) {
++			ret = nfs_dns_resolve_name(net, string, len, sa, salen);
++			if (ret < 0)
++				ret = 0;
++		}
++	} else if (port) {
++		rpc_set_port(sa, port);
+ 	}
+ 	return ret;
+ }
+@@ -328,7 +333,7 @@ static int try_location(struct fs_context *fc,
+ 			nfs_parse_server_name(buf->data, buf->len,
+ 					      &ctx->nfs_server.address,
+ 					      sizeof(ctx->nfs_server._address),
+-					      fc->net_ns);
++					      fc->net_ns, 0);
+ 		if (ctx->nfs_server.addrlen == 0)
+ 			continue;
+ 
+@@ -496,7 +501,7 @@ static int nfs4_try_replacing_one_location(struct nfs_server *server,
+ 			continue;
+ 
+ 		salen = nfs_parse_server_name(buf->data, buf->len,
+-						sap, addr_bufsize, net);
++						sap, addr_bufsize, net, 0);
+ 		if (salen == 0)
+ 			continue;
+ 		rpc_set_port(sap, NFS_PORT);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 3106bd28b1132..d222a980164b7 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -7597,7 +7597,7 @@ static int nfs4_xattr_set_nfs4_user(const struct xattr_handler *handler,
+ 				    const char *key, const void *buf,
+ 				    size_t buflen, int flags)
+ {
+-	struct nfs_access_entry cache;
++	u32 mask;
+ 	int ret;
+ 
+ 	if (!nfs_server_capable(inode, NFS_CAP_XATTR))
+@@ -7612,8 +7612,8 @@ static int nfs4_xattr_set_nfs4_user(const struct xattr_handler *handler,
+ 	 * do a cached access check for the XA* flags to possibly avoid
+ 	 * doing an RPC and getting EACCES back.
+ 	 */
+-	if (!nfs_access_get_cached(inode, current_cred(), &cache, true)) {
+-		if (!(cache.mask & NFS_ACCESS_XAWRITE))
++	if (!nfs_access_get_cached(inode, current_cred(), &mask, true)) {
++		if (!(mask & NFS_ACCESS_XAWRITE))
+ 			return -EACCES;
+ 	}
+ 
+@@ -7634,14 +7634,14 @@ static int nfs4_xattr_get_nfs4_user(const struct xattr_handler *handler,
+ 				    struct dentry *unused, struct inode *inode,
+ 				    const char *key, void *buf, size_t buflen)
+ {
+-	struct nfs_access_entry cache;
++	u32 mask;
+ 	ssize_t ret;
+ 
+ 	if (!nfs_server_capable(inode, NFS_CAP_XATTR))
+ 		return -EOPNOTSUPP;
+ 
+-	if (!nfs_access_get_cached(inode, current_cred(), &cache, true)) {
+-		if (!(cache.mask & NFS_ACCESS_XAREAD))
++	if (!nfs_access_get_cached(inode, current_cred(), &mask, true)) {
++		if (!(mask & NFS_ACCESS_XAREAD))
+ 			return -EACCES;
+ 	}
+ 
+@@ -7666,13 +7666,13 @@ nfs4_listxattr_nfs4_user(struct inode *inode, char *list, size_t list_len)
+ 	ssize_t ret, size;
+ 	char *buf;
+ 	size_t buflen;
+-	struct nfs_access_entry cache;
++	u32 mask;
+ 
+ 	if (!nfs_server_capable(inode, NFS_CAP_XATTR))
+ 		return 0;
+ 
+-	if (!nfs_access_get_cached(inode, current_cred(), &cache, true)) {
+-		if (!(cache.mask & NFS_ACCESS_XALIST))
++	if (!nfs_access_get_cached(inode, current_cred(), &mask, true)) {
++		if (!(mask & NFS_ACCESS_XALIST))
+ 			return 0;
+ 	}
+ 
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 4bf10792cb5b1..cbeec29e9f21a 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -2104,6 +2104,9 @@ static int nfs4_try_migration(struct nfs_server *server, const struct cred *cred
+ 	}
+ 
+ 	result = -NFS4ERR_NXIO;
++	if (!locations->nlocations)
++		goto out;
++
+ 	if (!(locations->fattr.valid & NFS_ATTR_FATTR_V4_LOCATIONS)) {
+ 		dprintk("<-- %s: No fs_locations data, migration skipped\n",
+ 			__func__);
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index c16b93df1bc14..e2f0e3446e22a 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -3680,8 +3680,6 @@ static int decode_attr_fs_locations(struct xdr_stream *xdr, uint32_t *bitmap, st
+ 	if (unlikely(!p))
+ 		goto out_eio;
+ 	n = be32_to_cpup(p);
+-	if (n <= 0)
+-		goto out_eio;
+ 	for (res->nlocations = 0; res->nlocations < n; res->nlocations++) {
+ 		u32 m;
+ 		struct nfs4_fs_location *loc;
+@@ -4184,10 +4182,11 @@ static int decode_attr_security_label(struct xdr_stream *xdr, uint32_t *bitmap,
+ 		} else
+ 			printk(KERN_WARNING "%s: label too long (%u)!\n",
+ 					__func__, len);
++		if (label && label->label)
++			dprintk("%s: label=%.*s, len=%d, PI=%d, LFS=%d\n",
++				__func__, label->len, (char *)label->label,
++				label->len, label->pi, label->lfs);
+ 	}
+-	if (label && label->label)
+-		dprintk("%s: label=%s, len=%d, PI=%d, LFS=%d\n", __func__,
+-			(char *)label->label, label->len, label->pi, label->lfs);
+ 	return status;
+ }
+ 
+diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
+index a633044b0dc1f..981a4e4c9a3cf 100644
+--- a/fs/nfsd/nfs3proc.c
++++ b/fs/nfsd/nfs3proc.c
+@@ -183,6 +183,11 @@ nfsd3_proc_write(struct svc_rqst *rqstp)
+ 				(unsigned long long) argp->offset,
+ 				argp->stable? " stable" : "");
+ 
++	resp->status = nfserr_fbig;
++	if (argp->offset > (u64)OFFSET_MAX ||
++	    argp->offset + argp->len > (u64)OFFSET_MAX)
++		return rpc_success;
++
+ 	fh_copy(&resp->fh, &argp->fh);
+ 	resp->committed = argp->stable;
+ 	nvecs = svc_fill_write_vector(rqstp, rqstp->rq_arg.pages,
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 00440337efc1f..7850d141c7621 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1008,8 +1008,9 @@ nfsd4_write(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	unsigned long cnt;
+ 	int nvecs;
+ 
+-	if (write->wr_offset >= OFFSET_MAX)
+-		return nfserr_inval;
++	if (write->wr_offset > (u64)OFFSET_MAX ||
++	    write->wr_offset + write->wr_buflen > (u64)OFFSET_MAX)
++		return nfserr_fbig;
+ 
+ 	cnt = write->wr_buflen;
+ 	trace_nfsd_write_start(rqstp, &cstate->current_fh,
+diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h
+index c8ca73d69ad04..a952f4a9b2a68 100644
+--- a/fs/nfsd/trace.h
++++ b/fs/nfsd/trace.h
+@@ -175,14 +175,14 @@ TRACE_EVENT(nfsd_export_update,
+ DECLARE_EVENT_CLASS(nfsd_io_class,
+ 	TP_PROTO(struct svc_rqst *rqstp,
+ 		 struct svc_fh	*fhp,
+-		 loff_t		offset,
+-		 unsigned long	len),
++		 u64		offset,
++		 u32		len),
+ 	TP_ARGS(rqstp, fhp, offset, len),
+ 	TP_STRUCT__entry(
+ 		__field(u32, xid)
+ 		__field(u32, fh_hash)
+-		__field(loff_t, offset)
+-		__field(unsigned long, len)
++		__field(u64, offset)
++		__field(u32, len)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->xid = be32_to_cpu(rqstp->rq_xid);
+@@ -190,7 +190,7 @@ DECLARE_EVENT_CLASS(nfsd_io_class,
+ 		__entry->offset = offset;
+ 		__entry->len = len;
+ 	),
+-	TP_printk("xid=0x%08x fh_hash=0x%08x offset=%lld len=%lu",
++	TP_printk("xid=0x%08x fh_hash=0x%08x offset=%llu len=%u",
+ 		  __entry->xid, __entry->fh_hash,
+ 		  __entry->offset, __entry->len)
+ )
+@@ -199,8 +199,8 @@ DECLARE_EVENT_CLASS(nfsd_io_class,
+ DEFINE_EVENT(nfsd_io_class, nfsd_##name,	\
+ 	TP_PROTO(struct svc_rqst *rqstp,	\
+ 		 struct svc_fh	*fhp,		\
+-		 loff_t		offset,		\
+-		 unsigned long	len),		\
++		 u64		offset,		\
++		 u32		len),		\
+ 	TP_ARGS(rqstp, fhp, offset, len))
+ 
+ DEFINE_NFSD_IO_EVENT(read_start);
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index aff5cd382fef5..1e0a3497bdb46 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -501,8 +501,8 @@ extern int nfs_instantiate(struct dentry *dentry, struct nfs_fh *fh,
+ 			struct nfs_fattr *fattr, struct nfs4_label *label);
+ extern int nfs_may_open(struct inode *inode, const struct cred *cred, int openflags);
+ extern void nfs_access_zap_cache(struct inode *inode);
+-extern int nfs_access_get_cached(struct inode *inode, const struct cred *cred, struct nfs_access_entry *res,
+-				 bool may_block);
++extern int nfs_access_get_cached(struct inode *inode, const struct cred *cred,
++				 u32 *mask, bool may_block);
+ 
+ /*
+  * linux/fs/nfs/symlink.c
+diff --git a/include/linux/suspend.h b/include/linux/suspend.h
+index 8af13ba60c7e4..4bcd65679cee0 100644
+--- a/include/linux/suspend.h
++++ b/include/linux/suspend.h
+@@ -430,15 +430,7 @@ struct platform_hibernation_ops {
+ 
+ #ifdef CONFIG_HIBERNATION
+ /* kernel/power/snapshot.c */
+-extern void __register_nosave_region(unsigned long b, unsigned long e, int km);
+-static inline void __init register_nosave_region(unsigned long b, unsigned long e)
+-{
+-	__register_nosave_region(b, e, 0);
+-}
+-static inline void __init register_nosave_region_late(unsigned long b, unsigned long e)
+-{
+-	__register_nosave_region(b, e, 1);
+-}
++extern void register_nosave_region(unsigned long b, unsigned long e);
+ extern int swsusp_page_is_forbidden(struct page *);
+ extern void swsusp_set_page_free(struct page *);
+ extern void swsusp_unset_page_free(struct page *);
+@@ -457,7 +449,6 @@ int pfn_is_nosave(unsigned long pfn);
+ int hibernate_quiet_exec(int (*func)(void *data), void *data);
+ #else /* CONFIG_HIBERNATION */
+ static inline void register_nosave_region(unsigned long b, unsigned long e) {}
+-static inline void register_nosave_region_late(unsigned long b, unsigned long e) {}
+ static inline int swsusp_page_is_forbidden(struct page *p) { return 0; }
+ static inline void swsusp_set_page_free(struct page *p) {}
+ static inline void swsusp_unset_page_free(struct page *p) {}
+@@ -505,14 +496,14 @@ extern void ksys_sync_helper(void);
+ 
+ /* drivers/base/power/wakeup.c */
+ extern bool events_check_enabled;
+-extern unsigned int pm_wakeup_irq;
+ extern suspend_state_t pm_suspend_target_state;
+ 
+ extern bool pm_wakeup_pending(void);
+ extern void pm_system_wakeup(void);
+ extern void pm_system_cancel_wakeup(void);
+-extern void pm_wakeup_clear(bool reset);
++extern void pm_wakeup_clear(unsigned int irq_number);
+ extern void pm_system_irq_wakeup(unsigned int irq_number);
++extern unsigned int pm_wakeup_irq(void);
+ extern bool pm_get_wakeup_count(unsigned int *count, bool block);
+ extern bool pm_save_wakeup_count(unsigned int count);
+ extern void pm_wakep_autosleep_enabled(bool set);
+diff --git a/include/net/dst_metadata.h b/include/net/dst_metadata.h
+index 14efa0ded75dd..adab27ba1ecbf 100644
+--- a/include/net/dst_metadata.h
++++ b/include/net/dst_metadata.h
+@@ -123,8 +123,20 @@ static inline struct metadata_dst *tun_dst_unclone(struct sk_buff *skb)
+ 
+ 	memcpy(&new_md->u.tun_info, &md_dst->u.tun_info,
+ 	       sizeof(struct ip_tunnel_info) + md_size);
++#ifdef CONFIG_DST_CACHE
++	/* Unclone the dst cache if there is one */
++	if (new_md->u.tun_info.dst_cache.cache) {
++		int ret;
++
++		ret = dst_cache_init(&new_md->u.tun_info.dst_cache, GFP_ATOMIC);
++		if (ret) {
++			metadata_dst_free(new_md);
++			return ERR_PTR(ret);
++		}
++	}
++#endif
++
+ 	skb_dst_drop(skb);
+-	dst_hold(&new_md->dst);
+ 	skb_dst_set(skb, &new_md->dst);
+ 	return new_md;
+ }
+diff --git a/include/uapi/linux/netfilter/nf_conntrack_common.h b/include/uapi/linux/netfilter/nf_conntrack_common.h
+index 4b3395082d15c..26071021e986f 100644
+--- a/include/uapi/linux/netfilter/nf_conntrack_common.h
++++ b/include/uapi/linux/netfilter/nf_conntrack_common.h
+@@ -106,7 +106,7 @@ enum ip_conntrack_status {
+ 	IPS_NAT_CLASH = IPS_UNTRACKED,
+ #endif
+ 
+-	/* Conntrack got a helper explicitly attached via CT target. */
++	/* Conntrack got a helper explicitly attached (ruleset, ctnetlink). */
+ 	IPS_HELPER_BIT = 13,
+ 	IPS_HELPER = (1 << IPS_HELPER_BIT),
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index c6493f7e02359..c8b3f94f0dbb3 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -838,7 +838,7 @@ static DEFINE_PER_CPU(struct list_head, cgrp_cpuctx_list);
+  */
+ static void perf_cgroup_switch(struct task_struct *task, int mode)
+ {
+-	struct perf_cpu_context *cpuctx;
++	struct perf_cpu_context *cpuctx, *tmp;
+ 	struct list_head *list;
+ 	unsigned long flags;
+ 
+@@ -849,7 +849,7 @@ static void perf_cgroup_switch(struct task_struct *task, int mode)
+ 	local_irq_save(flags);
+ 
+ 	list = this_cpu_ptr(&cgrp_cpuctx_list);
+-	list_for_each_entry(cpuctx, list, cgrp_cpuctx_entry) {
++	list_for_each_entry_safe(cpuctx, tmp, list, cgrp_cpuctx_entry) {
+ 		WARN_ON_ONCE(cpuctx->ctx.nr_cgroups == 0);
+ 
+ 		perf_ctx_lock(cpuctx, cpuctx->task_ctx);
+@@ -5877,6 +5877,8 @@ static void ring_buffer_attach(struct perf_event *event,
+ 	struct perf_buffer *old_rb = NULL;
+ 	unsigned long flags;
+ 
++	WARN_ON_ONCE(event->parent);
++
+ 	if (event->rb) {
+ 		/*
+ 		 * Should be impossible, we set this when removing
+@@ -5934,6 +5936,9 @@ static void ring_buffer_wakeup(struct perf_event *event)
+ {
+ 	struct perf_buffer *rb;
+ 
++	if (event->parent)
++		event = event->parent;
++
+ 	rcu_read_lock();
+ 	rb = rcu_dereference(event->rb);
+ 	if (rb) {
+@@ -5947,6 +5952,9 @@ struct perf_buffer *ring_buffer_get(struct perf_event *event)
+ {
+ 	struct perf_buffer *rb;
+ 
++	if (event->parent)
++		event = event->parent;
++
+ 	rcu_read_lock();
+ 	rb = rcu_dereference(event->rb);
+ 	if (rb) {
+@@ -6618,7 +6626,7 @@ static unsigned long perf_prepare_sample_aux(struct perf_event *event,
+ 	if (WARN_ON_ONCE(READ_ONCE(sampler->oncpu) != smp_processor_id()))
+ 		goto out;
+ 
+-	rb = ring_buffer_get(sampler->parent ? sampler->parent : sampler);
++	rb = ring_buffer_get(sampler);
+ 	if (!rb)
+ 		goto out;
+ 
+@@ -6684,7 +6692,7 @@ static void perf_aux_sample_output(struct perf_event *event,
+ 	if (WARN_ON_ONCE(!sampler || !data->aux_size))
+ 		return;
+ 
+-	rb = ring_buffer_get(sampler->parent ? sampler->parent : sampler);
++	rb = ring_buffer_get(sampler);
+ 	if (!rb)
+ 		return;
+ 
+diff --git a/kernel/power/main.c b/kernel/power/main.c
+index 0aefd6f57e0ac..d6140ed15d0b1 100644
+--- a/kernel/power/main.c
++++ b/kernel/power/main.c
+@@ -504,7 +504,10 @@ static ssize_t pm_wakeup_irq_show(struct kobject *kobj,
+ 					struct kobj_attribute *attr,
+ 					char *buf)
+ {
+-	return pm_wakeup_irq ? sprintf(buf, "%u\n", pm_wakeup_irq) : -ENODATA;
++	if (!pm_wakeup_irq())
++		return -ENODATA;
++
++	return sprintf(buf, "%u\n", pm_wakeup_irq());
+ }
+ 
+ power_attr_ro(pm_wakeup_irq);
+diff --git a/kernel/power/process.c b/kernel/power/process.c
+index 45b054b7b5ec8..b9faa363c46af 100644
+--- a/kernel/power/process.c
++++ b/kernel/power/process.c
+@@ -134,7 +134,7 @@ int freeze_processes(void)
+ 	if (!pm_freezing)
+ 		atomic_inc(&system_freezing_cnt);
+ 
+-	pm_wakeup_clear(true);
++	pm_wakeup_clear(0);
+ 	pr_info("Freezing user space processes ... ");
+ 	pm_freezing = true;
+ 	error = try_to_freeze_tasks(true);
+diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
+index 46b1804c1ddf7..1da013f50059a 100644
+--- a/kernel/power/snapshot.c
++++ b/kernel/power/snapshot.c
+@@ -944,8 +944,7 @@ static void memory_bm_recycle(struct memory_bitmap *bm)
+  * Register a range of page frames the contents of which should not be saved
+  * during hibernation (to be used in the early initialization code).
+  */
+-void __init __register_nosave_region(unsigned long start_pfn,
+-				     unsigned long end_pfn, int use_kmalloc)
++void __init register_nosave_region(unsigned long start_pfn, unsigned long end_pfn)
+ {
+ 	struct nosave_region *region;
+ 
+@@ -961,18 +960,12 @@ void __init __register_nosave_region(unsigned long start_pfn,
+ 			goto Report;
+ 		}
+ 	}
+-	if (use_kmalloc) {
+-		/* During init, this shouldn't fail */
+-		region = kmalloc(sizeof(struct nosave_region), GFP_KERNEL);
+-		BUG_ON(!region);
+-	} else {
+-		/* This allocation cannot fail */
+-		region = memblock_alloc(sizeof(struct nosave_region),
+-					SMP_CACHE_BYTES);
+-		if (!region)
+-			panic("%s: Failed to allocate %zu bytes\n", __func__,
+-			      sizeof(struct nosave_region));
+-	}
++	/* This allocation cannot fail */
++	region = memblock_alloc(sizeof(struct nosave_region),
++				SMP_CACHE_BYTES);
++	if (!region)
++		panic("%s: Failed to allocate %zu bytes\n", __func__,
++		      sizeof(struct nosave_region));
+ 	region->start_pfn = start_pfn;
+ 	region->end_pfn = end_pfn;
+ 	list_add_tail(&region->list, &nosave_regions);
+diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
+index 32391acc806bf..4aa4d5d3947f1 100644
+--- a/kernel/power/suspend.c
++++ b/kernel/power/suspend.c
+@@ -138,8 +138,6 @@ static void s2idle_loop(void)
+ 			break;
+ 		}
+ 
+-		pm_wakeup_clear(false);
+-
+ 		s2idle_enter();
+ 	}
+ 
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 1557a20b6500e..41a9bd52e1fdc 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -2154,6 +2154,8 @@ static struct hist_field *parse_unary(struct hist_trigger_data *hist_data,
+ 		(HIST_FIELD_FL_TIMESTAMP | HIST_FIELD_FL_TIMESTAMP_USECS);
+ 	expr->fn = hist_field_unary_minus;
+ 	expr->operands[0] = operand1;
++	expr->size = operand1->size;
++	expr->is_signed = operand1->is_signed;
+ 	expr->operator = FIELD_OP_UNARY_MINUS;
+ 	expr->name = expr_str(expr, 0);
+ 	expr->type = kstrdup(operand1->type, GFP_KERNEL);
+@@ -2293,6 +2295,7 @@ static struct hist_field *parse_expr(struct hist_trigger_data *hist_data,
+ 
+ 	/* The operand sizes should be the same, so just pick one */
+ 	expr->size = operand1->size;
++	expr->is_signed = operand1->is_signed;
+ 
+ 	expr->operator = field_op;
+ 	expr->name = expr_str(expr, 0);
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 53ce5b6448a5d..37db4d232313d 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -56,6 +56,7 @@
+ #include <linux/module.h>
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
++#include <linux/spinlock.h>
+ #include <linux/hrtimer.h>
+ #include <linux/wait.h>
+ #include <linux/uio.h>
+@@ -145,6 +146,7 @@ struct isotp_sock {
+ 	struct tpcon rx, tx;
+ 	struct list_head notifier;
+ 	wait_queue_head_t wait;
++	spinlock_t rx_lock; /* protect single thread state machine */
+ };
+ 
+ static LIST_HEAD(isotp_notifier_list);
+@@ -615,11 +617,17 @@ static void isotp_rcv(struct sk_buff *skb, void *data)
+ 
+ 	n_pci_type = cf->data[ae] & 0xF0;
+ 
++	/* Make sure the state changes and data structures stay consistent at
++	 * CAN frame reception time. This locking is not needed in real world
++	 * use cases but the inconsistency can be triggered with syzkaller.
++	 */
++	spin_lock(&so->rx_lock);
++
+ 	if (so->opt.flags & CAN_ISOTP_HALF_DUPLEX) {
+ 		/* check rx/tx path half duplex expectations */
+ 		if ((so->tx.state != ISOTP_IDLE && n_pci_type != N_PCI_FC) ||
+ 		    (so->rx.state != ISOTP_IDLE && n_pci_type == N_PCI_FC))
+-			return;
++			goto out_unlock;
+ 	}
+ 
+ 	switch (n_pci_type) {
+@@ -668,6 +676,9 @@ static void isotp_rcv(struct sk_buff *skb, void *data)
+ 		isotp_rcv_cf(sk, cf, ae, skb);
+ 		break;
+ 	}
++
++out_unlock:
++	spin_unlock(&so->rx_lock);
+ }
+ 
+ static void isotp_fill_dataframe(struct canfd_frame *cf, struct isotp_sock *so,
+@@ -874,24 +885,24 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 
+ 	if (!size || size > MAX_MSG_LENGTH) {
+ 		err = -EINVAL;
+-		goto err_out;
++		goto err_out_drop;
+ 	}
+ 
+ 	err = memcpy_from_msg(so->tx.buf, msg, size);
+ 	if (err < 0)
+-		goto err_out;
++		goto err_out_drop;
+ 
+ 	dev = dev_get_by_index(sock_net(sk), so->ifindex);
+ 	if (!dev) {
+ 		err = -ENXIO;
+-		goto err_out;
++		goto err_out_drop;
+ 	}
+ 
+ 	skb = sock_alloc_send_skb(sk, so->ll.mtu + sizeof(struct can_skb_priv),
+ 				  msg->msg_flags & MSG_DONTWAIT, &err);
+ 	if (!skb) {
+ 		dev_put(dev);
+-		goto err_out;
++		goto err_out_drop;
+ 	}
+ 
+ 	can_skb_reserve(skb);
+@@ -956,7 +967,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	if (err) {
+ 		pr_notice_once("can-isotp: %s: can_send_ret %d\n",
+ 			       __func__, err);
+-		goto err_out;
++		goto err_out_drop;
+ 	}
+ 
+ 	if (wait_tx_done) {
+@@ -969,6 +980,9 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 
+ 	return size;
+ 
++err_out_drop:
++	/* drop this PDU and unlock a potential wait queue */
++	old_state = ISOTP_IDLE;
+ err_out:
+ 	so->tx.state = old_state;
+ 	if (so->tx.state == ISOTP_IDLE)
+@@ -1407,6 +1421,7 @@ static int isotp_init(struct sock *sk)
+ 	so->txtimer.function = isotp_tx_timer_handler;
+ 
+ 	init_waitqueue_head(&so->wait);
++	spin_lock_init(&so->rx_lock);
+ 
+ 	spin_lock(&isotp_notifier_lock);
+ 	list_add_tail(&so->notifier, &isotp_notifier_list);
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 939792a388146..be1976536f1c0 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -261,7 +261,9 @@ static int __net_init ipmr_rules_init(struct net *net)
+ 	return 0;
+ 
+ err2:
++	rtnl_lock();
+ 	ipmr_free_table(mrt);
++	rtnl_unlock();
+ err1:
+ 	fib_rules_unregister(ops);
+ 	return err;
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 06b0d2c329b94..41cb348a7c3c4 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -248,7 +248,9 @@ static int __net_init ip6mr_rules_init(struct net *net)
+ 	return 0;
+ 
+ err2:
++	rtnl_lock();
+ 	ip6mr_free_table(mrt);
++	rtnl_unlock();
+ err1:
+ 	fib_rules_unregister(ops);
+ 	return err;
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index c6bcc28ae3387..eeeaa34b3e7b5 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -2283,7 +2283,8 @@ ctnetlink_create_conntrack(struct net *net,
+ 			if (helper->from_nlattr)
+ 				helper->from_nlattr(helpinfo, ct);
+ 
+-			/* not in hash table yet so not strictly necessary */
++			/* disable helper auto-assignment for this entry */
++			ct->status |= IPS_HELPER;
+ 			RCU_INIT_POINTER(help->helper, helper);
+ 		}
+ 	} else {
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 7b24582a8a164..6758968e79327 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1204,7 +1204,7 @@ static struct Qdisc *qdisc_create(struct net_device *dev,
+ 
+ 	err = -ENOENT;
+ 	if (!ops) {
+-		NL_SET_ERR_MSG(extack, "Specified qdisc not found");
++		NL_SET_ERR_MSG(extack, "Specified qdisc kind is unknown");
+ 		goto err_out;
+ 	}
+ 
+diff --git a/net/tipc/name_distr.c b/net/tipc/name_distr.c
+index fe4edce459ad4..a757fe28bcb5f 100644
+--- a/net/tipc/name_distr.c
++++ b/net/tipc/name_distr.c
+@@ -315,7 +315,7 @@ static bool tipc_update_nametbl(struct net *net, struct distr_item *i,
+ 		pr_warn_ratelimited("Failed to remove binding %u,%u from %x\n",
+ 				    type, lower, node);
+ 	} else {
+-		pr_warn("Unrecognized name table message received\n");
++		pr_warn_ratelimited("Unknown name table message received\n");
+ 	}
+ 	return false;
+ }
+diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
+index 6baee1200615d..23d3967786b9f 100644
+--- a/scripts/Makefile.extrawarn
++++ b/scripts/Makefile.extrawarn
+@@ -51,6 +51,7 @@ KBUILD_CFLAGS += -Wno-sign-compare
+ KBUILD_CFLAGS += -Wno-format-zero-length
+ KBUILD_CFLAGS += $(call cc-disable-warning, pointer-to-enum-cast)
+ KBUILD_CFLAGS += -Wno-tautological-constant-out-of-range-compare
++KBUILD_CFLAGS += $(call cc-disable-warning, unaligned-access)
+ endif
+ 
+ endif
+diff --git a/security/integrity/ima/ima_fs.c b/security/integrity/ima/ima_fs.c
+index ea8ff8a07b36b..98d5a800fe5b0 100644
+--- a/security/integrity/ima/ima_fs.c
++++ b/security/integrity/ima/ima_fs.c
+@@ -496,12 +496,12 @@ int __init ima_fs_init(void)
+ 
+ 	return 0;
+ out:
++	securityfs_remove(ima_policy);
+ 	securityfs_remove(violations);
+ 	securityfs_remove(runtime_measurements_count);
+ 	securityfs_remove(ascii_runtime_measurements);
+ 	securityfs_remove(binary_runtime_measurements);
+ 	securityfs_remove(ima_symlink);
+ 	securityfs_remove(ima_dir);
+-	securityfs_remove(ima_policy);
+ 	return -1;
+ }
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index 9b5adeaa47fc8..e737c216efc49 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -1636,6 +1636,14 @@ int ima_policy_show(struct seq_file *m, void *v)
+ 
+ 	rcu_read_lock();
+ 
++	/* Do not print rules with inactive LSM labels */
++	for (i = 0; i < MAX_LSM_RULES; i++) {
++		if (entry->lsm[i].args_p && !entry->lsm[i].rule) {
++			rcu_read_unlock();
++			return 0;
++		}
++	}
++
+ 	if (entry->action & MEASURE)
+ 		seq_puts(m, pt(Opt_measure));
+ 	if (entry->action & DONT_MEASURE)
+diff --git a/security/integrity/ima/ima_template.c b/security/integrity/ima/ima_template.c
+index f83255a39e653..f64c01d53e96a 100644
+--- a/security/integrity/ima/ima_template.c
++++ b/security/integrity/ima/ima_template.c
+@@ -27,6 +27,7 @@ static struct ima_template_desc builtin_templates[] = {
+ 
+ static LIST_HEAD(defined_templates);
+ static DEFINE_SPINLOCK(template_list);
++static int template_setup_done;
+ 
+ static const struct ima_template_field supported_fields[] = {
+ 	{.field_id = "d", .field_init = ima_eventdigest_init,
+@@ -80,10 +81,11 @@ static int __init ima_template_setup(char *str)
+ 	struct ima_template_desc *template_desc;
+ 	int template_len = strlen(str);
+ 
+-	if (ima_template)
++	if (template_setup_done)
+ 		return 1;
+ 
+-	ima_init_template_list();
++	if (!ima_template)
++		ima_init_template_list();
+ 
+ 	/*
+ 	 * Verify that a template with the supplied name exists.
+@@ -107,6 +109,7 @@ static int __init ima_template_setup(char *str)
+ 	}
+ 
+ 	ima_template = template_desc;
++	template_setup_done = 1;
+ 	return 1;
+ }
+ __setup("ima_template=", ima_template_setup);
+@@ -115,7 +118,7 @@ static int __init ima_template_fmt_setup(char *str)
+ {
+ 	int num_templates = ARRAY_SIZE(builtin_templates);
+ 
+-	if (ima_template)
++	if (template_setup_done)
+ 		return 1;
+ 
+ 	if (template_desc_init_fields(str, NULL, NULL) < 0) {
+@@ -126,6 +129,7 @@ static int __init ima_template_fmt_setup(char *str)
+ 
+ 	builtin_templates[num_templates - 1].fmt = str;
+ 	ima_template = builtin_templates + num_templates - 1;
++	template_setup_done = 1;
+ 
+ 	return 1;
+ }
+diff --git a/security/integrity/integrity_audit.c b/security/integrity/integrity_audit.c
+index 29220056207f4..0ec5e4c22cb2a 100644
+--- a/security/integrity/integrity_audit.c
++++ b/security/integrity/integrity_audit.c
+@@ -45,6 +45,8 @@ void integrity_audit_message(int audit_msgno, struct inode *inode,
+ 		return;
+ 
+ 	ab = audit_log_start(audit_context(), GFP_KERNEL, audit_msgno);
++	if (!ab)
++		return;
+ 	audit_log_format(ab, "pid=%d uid=%u auid=%u ses=%u",
+ 			 task_pid_nr(current),
+ 			 from_kuid(&init_user_ns, current_uid()),
+diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
+index c2323c27a28b5..518cd8dc390e2 100644
+--- a/virt/kvm/eventfd.c
++++ b/virt/kvm/eventfd.c
+@@ -451,8 +451,8 @@ bool kvm_irq_has_notifier(struct kvm *kvm, unsigned irqchip, unsigned pin)
+ 	idx = srcu_read_lock(&kvm->irq_srcu);
+ 	gsi = kvm_irq_map_chip_pin(kvm, irqchip, pin);
+ 	if (gsi != -1)
+-		hlist_for_each_entry_rcu(kian, &kvm->irq_ack_notifier_list,
+-					 link)
++		hlist_for_each_entry_srcu(kian, &kvm->irq_ack_notifier_list,
++					  link, srcu_read_lock_held(&kvm->irq_srcu))
+ 			if (kian->gsi == gsi) {
+ 				srcu_read_unlock(&kvm->irq_srcu, idx);
+ 				return true;
+@@ -468,8 +468,8 @@ void kvm_notify_acked_gsi(struct kvm *kvm, int gsi)
+ {
+ 	struct kvm_irq_ack_notifier *kian;
+ 
+-	hlist_for_each_entry_rcu(kian, &kvm->irq_ack_notifier_list,
+-				 link)
++	hlist_for_each_entry_srcu(kian, &kvm->irq_ack_notifier_list,
++				  link, srcu_read_lock_held(&kvm->irq_srcu))
+ 		if (kian->gsi == gsi)
+ 			kian->irq_acked(kian);
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-02-23 12:37 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-02-23 12:37 UTC (permalink / raw
  To: gentoo-commits

commit:     e865be67acbccf8ac9a66c3fb1e8f50a2268c171
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 23 12:37:21 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Feb 23 12:37:21 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e865be67

Linux patch 5.10.102

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1101_linux-5.10.102.patch | 4328 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4332 insertions(+)

diff --git a/0000_README b/0000_README
index 25df2085..3438f96a 100644
--- a/0000_README
+++ b/0000_README
@@ -447,6 +447,10 @@ Patch:  1100_linux-5.10.101.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.101
 
+Patch:  1101_linux-5.10.102.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.102
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1101_linux-5.10.102.patch b/1101_linux-5.10.102.patch
new file mode 100644
index 00000000..f176b43c
--- /dev/null
+++ b/1101_linux-5.10.102.patch
@@ -0,0 +1,4328 @@
+diff --git a/Makefile b/Makefile
+index 32d9ed44e1c47..f71684d435e5a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 101
++SUBLEVEL = 102
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/mach-omap2/display.c b/arch/arm/mach-omap2/display.c
+index 2000fca6bd4e6..6098666e928d0 100644
+--- a/arch/arm/mach-omap2/display.c
++++ b/arch/arm/mach-omap2/display.c
+@@ -263,9 +263,9 @@ static int __init omapdss_init_of(void)
+ 	}
+ 
+ 	r = of_platform_populate(node, NULL, NULL, &pdev->dev);
++	put_device(&pdev->dev);
+ 	if (r) {
+ 		pr_err("Unable to populate DSS submodule devices\n");
+-		put_device(&pdev->dev);
+ 		return r;
+ 	}
+ 
+diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
+index 9443f129859b2..1fd67abca055b 100644
+--- a/arch/arm/mach-omap2/omap_hwmod.c
++++ b/arch/arm/mach-omap2/omap_hwmod.c
+@@ -749,8 +749,10 @@ static int __init _init_clkctrl_providers(void)
+ 
+ 	for_each_matching_node(np, ti_clkctrl_match_table) {
+ 		ret = _setup_clkctrl_provider(np);
+-		if (ret)
++		if (ret) {
++			of_node_put(np);
+ 			break;
++		}
+ 	}
+ 
+ 	return ret;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index 7342c8a2b322d..075153a4d49fc 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -101,6 +101,12 @@
+ 			no-map;
+ 		};
+ 
++		/* 32 MiB reserved for ARM Trusted Firmware (BL32) */
++		secmon_reserved_bl32: secmon@5300000 {
++			reg = <0x0 0x05300000 0x0 0x2000000>;
++			no-map;
++		};
++
+ 		linux,cma {
+ 			compatible = "shared-dma-pool";
+ 			reusable;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts b/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts
+index 4d5b3e514b514..71f91e31c1818 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts
+@@ -157,14 +157,6 @@
+ 		regulator-always-on;
+ 	};
+ 
+-	reserved-memory {
+-		/* TEE Reserved Memory */
+-		bl32_reserved: bl32@5000000 {
+-			reg = <0x0 0x05300000 0x0 0x2000000>;
+-			no-map;
+-		};
+-	};
+-
+ 	sdio_pwrseq: sdio-pwrseq {
+ 		compatible = "mmc-pwrseq-simple";
+ 		reset-gpios = <&gpio GPIOX_6 GPIO_ACTIVE_LOW>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+index 0edd137151f89..47cbb0a1eb183 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+@@ -43,6 +43,12 @@
+ 			no-map;
+ 		};
+ 
++		/* 32 MiB reserved for ARM Trusted Firmware (BL32) */
++		secmon_reserved_bl32: secmon@5300000 {
++			reg = <0x0 0x05300000 0x0 0x2000000>;
++			no-map;
++		};
++
+ 		linux,cma {
+ 			compatible = "shared-dma-pool";
+ 			reusable;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
+index 5ab139a34c018..c21178e9c6064 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
+@@ -203,14 +203,6 @@
+ 		regulator-always-on;
+ 	};
+ 
+-	reserved-memory {
+-		/* TEE Reserved Memory */
+-		bl32_reserved: bl32@5000000 {
+-			reg = <0x0 0x05300000 0x0 0x2000000>;
+-			no-map;
+-		};
+-	};
+-
+ 	sdio_pwrseq: sdio-pwrseq {
+ 		compatible = "mmc-pwrseq-simple";
+ 		reset-gpios = <&gpio GPIOX_6 GPIO_ACTIVE_LOW>;
+diff --git a/arch/parisc/lib/iomap.c b/arch/parisc/lib/iomap.c
+index f03adb1999e77..e362d6a147311 100644
+--- a/arch/parisc/lib/iomap.c
++++ b/arch/parisc/lib/iomap.c
+@@ -346,6 +346,16 @@ u64 ioread64be(const void __iomem *addr)
+ 	return *((u64 *)addr);
+ }
+ 
++u64 ioread64_lo_hi(const void __iomem *addr)
++{
++	u32 low, high;
++
++	low = ioread32(addr);
++	high = ioread32(addr + sizeof(u32));
++
++	return low + ((u64)high << 32);
++}
++
+ u64 ioread64_hi_lo(const void __iomem *addr)
+ {
+ 	u32 low, high;
+@@ -419,6 +429,12 @@ void iowrite64be(u64 datum, void __iomem *addr)
+ 	}
+ }
+ 
++void iowrite64_lo_hi(u64 val, void __iomem *addr)
++{
++	iowrite32(val, addr);
++	iowrite32(val >> 32, addr + sizeof(u32));
++}
++
+ void iowrite64_hi_lo(u64 val, void __iomem *addr)
+ {
+ 	iowrite32(val >> 32, addr + sizeof(u32));
+@@ -527,6 +543,7 @@ EXPORT_SYMBOL(ioread32);
+ EXPORT_SYMBOL(ioread32be);
+ EXPORT_SYMBOL(ioread64);
+ EXPORT_SYMBOL(ioread64be);
++EXPORT_SYMBOL(ioread64_lo_hi);
+ EXPORT_SYMBOL(ioread64_hi_lo);
+ EXPORT_SYMBOL(iowrite8);
+ EXPORT_SYMBOL(iowrite16);
+@@ -535,6 +552,7 @@ EXPORT_SYMBOL(iowrite32);
+ EXPORT_SYMBOL(iowrite32be);
+ EXPORT_SYMBOL(iowrite64);
+ EXPORT_SYMBOL(iowrite64be);
++EXPORT_SYMBOL(iowrite64_lo_hi);
+ EXPORT_SYMBOL(iowrite64_hi_lo);
+ EXPORT_SYMBOL(ioread8_rep);
+ EXPORT_SYMBOL(ioread16_rep);
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 8f10cc6ee0fce..319afa00cdf7b 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -346,9 +346,9 @@ static void __init setup_bootmem(void)
+ 
+ static bool kernel_set_to_readonly;
+ 
+-static void __init map_pages(unsigned long start_vaddr,
+-			     unsigned long start_paddr, unsigned long size,
+-			     pgprot_t pgprot, int force)
++static void __ref map_pages(unsigned long start_vaddr,
++			    unsigned long start_paddr, unsigned long size,
++			    pgprot_t pgprot, int force)
+ {
+ 	pmd_t *pmd;
+ 	pte_t *pg_table;
+@@ -458,7 +458,7 @@ void __init set_kernel_text_rw(int enable_read_write)
+ 	flush_tlb_all();
+ }
+ 
+-void __ref free_initmem(void)
++void free_initmem(void)
+ {
+ 	unsigned long init_begin = (unsigned long)__init_begin;
+ 	unsigned long init_end = (unsigned long)__init_end;
+@@ -472,7 +472,6 @@ void __ref free_initmem(void)
+ 	/* The init text pages are marked R-X.  We have to
+ 	 * flush the icache and mark them RW-
+ 	 *
+-	 * This is tricky, because map_pages is in the init section.
+ 	 * Do a dummy remap of the data section first (the data
+ 	 * section is already PAGE_KERNEL) to pull in the TLB entries
+ 	 * for map_kernel */
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index a2e067f68dee8..0edebbbffcdca 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -3062,12 +3062,14 @@ void emulate_update_regs(struct pt_regs *regs, struct instruction_op *op)
+ 		case BARRIER_EIEIO:
+ 			eieio();
+ 			break;
++#ifdef CONFIG_PPC64
+ 		case BARRIER_LWSYNC:
+ 			asm volatile("lwsync" : : : "memory");
+ 			break;
+ 		case BARRIER_PTESYNC:
+ 			asm volatile("ptesync" : : : "memory");
+ 			break;
++#endif
+ 		}
+ 		break;
+ 
+diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
+index 67741d2a03085..2f83b5d948b33 100644
+--- a/arch/x86/kvm/pmu.c
++++ b/arch/x86/kvm/pmu.c
+@@ -95,7 +95,7 @@ static void kvm_perf_overflow_intr(struct perf_event *perf_event,
+ }
+ 
+ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
+-				  unsigned config, bool exclude_user,
++				  u64 config, bool exclude_user,
+ 				  bool exclude_kernel, bool intr,
+ 				  bool in_tx, bool in_tx_cp)
+ {
+@@ -170,8 +170,8 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
+ 
+ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ {
+-	unsigned config, type = PERF_TYPE_RAW;
+-	u8 event_select, unit_mask;
++	u64 config;
++	u32 type = PERF_TYPE_RAW;
+ 	struct kvm *kvm = pmc->vcpu->kvm;
+ 	struct kvm_pmu_event_filter *filter;
+ 	int i;
+@@ -203,23 +203,18 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ 	if (!allow_event)
+ 		return;
+ 
+-	event_select = eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
+-	unit_mask = (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
+-
+ 	if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
+ 			  ARCH_PERFMON_EVENTSEL_INV |
+ 			  ARCH_PERFMON_EVENTSEL_CMASK |
+ 			  HSW_IN_TX |
+ 			  HSW_IN_TX_CHECKPOINTED))) {
+-		config = kvm_x86_ops.pmu_ops->find_arch_event(pmc_to_pmu(pmc),
+-						      event_select,
+-						      unit_mask);
++		config = kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc);
+ 		if (config != PERF_COUNT_HW_MAX)
+ 			type = PERF_TYPE_HARDWARE;
+ 	}
+ 
+ 	if (type == PERF_TYPE_RAW)
+-		config = eventsel & X86_RAW_EVENT_MASK;
++		config = eventsel & AMD64_RAW_EVENT_MASK;
+ 
+ 	if (pmc->current_config == eventsel && pmc_resume_counter(pmc))
+ 		return;
+diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
+index 067fef51760c4..1a44e29e73330 100644
+--- a/arch/x86/kvm/pmu.h
++++ b/arch/x86/kvm/pmu.h
+@@ -24,8 +24,7 @@ struct kvm_event_hw_type_mapping {
+ };
+ 
+ struct kvm_pmu_ops {
+-	unsigned (*find_arch_event)(struct kvm_pmu *pmu, u8 event_select,
+-				    u8 unit_mask);
++	unsigned int (*pmc_perf_hw_id)(struct kvm_pmc *pmc);
+ 	unsigned (*find_fixed_event)(int idx);
+ 	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
+ 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
+diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
+index 8c550999ace0c..a8b5533cf601d 100644
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -344,8 +344,6 @@ int avic_incomplete_ipi_interception(struct vcpu_svm *svm)
+ 		break;
+ 	}
+ 	case AVIC_IPI_FAILURE_INVALID_TARGET:
+-		WARN_ONCE(1, "Invalid IPI target: index=%u, vcpu=%d, icr=%#0x:%#0x\n",
+-			  index, svm->vcpu.vcpu_id, icrh, icrl);
+ 		break;
+ 	case AVIC_IPI_FAILURE_INVALID_BACKING_PAGE:
+ 		WARN_ONCE(1, "Invalid backing page\n");
+diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
+index 5a5c165a30ed1..4e7093bcb64b6 100644
+--- a/arch/x86/kvm/svm/pmu.c
++++ b/arch/x86/kvm/svm/pmu.c
+@@ -126,10 +126,10 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
+ 	return &pmu->gp_counters[msr_to_index(msr)];
+ }
+ 
+-static unsigned amd_find_arch_event(struct kvm_pmu *pmu,
+-				    u8 event_select,
+-				    u8 unit_mask)
++static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
+ {
++	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
++	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
+ 	int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(amd_event_mapping); i++)
+@@ -312,7 +312,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
+ }
+ 
+ struct kvm_pmu_ops amd_pmu_ops = {
+-	.find_arch_event = amd_find_arch_event,
++	.pmc_perf_hw_id = amd_pmc_perf_hw_id,
+ 	.find_fixed_event = amd_find_fixed_event,
+ 	.pmc_is_enabled = amd_pmc_is_enabled,
+ 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index d515c8e68314c..7773a765f5489 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -4103,6 +4103,10 @@ static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, void *insn, int i
+ 	bool smep, smap, is_user;
+ 	unsigned long cr4;
+ 
++	/* Emulation is always possible when KVM has access to all guest state. */
++	if (!sev_guest(vcpu->kvm))
++		return true;
++
+ 	/*
+ 	 * Detect and workaround Errata 1096 Fam_17h_00_0Fh.
+ 	 *
+@@ -4151,9 +4155,6 @@ static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, void *insn, int i
+ 	smap = cr4 & X86_CR4_SMAP;
+ 	is_user = svm_get_cpl(vcpu) == 3;
+ 	if (smap && (!smep || is_user)) {
+-		if (!sev_guest(vcpu->kvm))
+-			return true;
+-
+ 		pr_err_ratelimited("KVM: SEV Guest triggered AMD Erratum 1096\n");
+ 
+ 		/*
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index cdf5f34518f43..bd70c1d7f3458 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -68,10 +68,11 @@ static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
+ 		reprogram_counter(pmu, bit);
+ }
+ 
+-static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
+-				      u8 event_select,
+-				      u8 unit_mask)
++static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
+ {
++	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
++	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
++	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
+ 	int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++)
+@@ -432,7 +433,7 @@ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
+ }
+ 
+ struct kvm_pmu_ops intel_pmu_ops = {
+-	.find_arch_event = intel_find_arch_event,
++	.pmc_perf_hw_id = intel_pmc_perf_hw_id,
+ 	.find_fixed_event = intel_find_fixed_event,
+ 	.pmc_is_enabled = intel_pmc_is_enabled,
+ 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 16ff25d6935e7..804c65d2b95f3 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1387,10 +1387,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ 
+ 		xen_acpi_sleep_register();
+ 
+-		/* Avoid searching for BIOS MP tables */
+-		x86_init.mpparse.find_smp_config = x86_init_noop;
+-		x86_init.mpparse.get_smp_config = x86_init_uint_noop;
+-
+ 		xen_boot_params_init_edd();
+ 
+ #ifdef CONFIG_ACPI
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index c2ac319f11a4b..8f9e7e2407c87 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -149,28 +149,12 @@ int xen_smp_intr_init_pv(unsigned int cpu)
+ 	return rc;
+ }
+ 
+-static void __init xen_fill_possible_map(void)
+-{
+-	int i, rc;
+-
+-	if (xen_initial_domain())
+-		return;
+-
+-	for (i = 0; i < nr_cpu_ids; i++) {
+-		rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
+-		if (rc >= 0) {
+-			num_processors++;
+-			set_cpu_possible(i, true);
+-		}
+-	}
+-}
+-
+-static void __init xen_filter_cpu_maps(void)
++static void __init _get_smp_config(unsigned int early)
+ {
+ 	int i, rc;
+ 	unsigned int subtract = 0;
+ 
+-	if (!xen_initial_domain())
++	if (early)
+ 		return;
+ 
+ 	num_processors = 0;
+@@ -211,7 +195,6 @@ static void __init xen_pv_smp_prepare_boot_cpu(void)
+ 		 * sure the old memory can be recycled. */
+ 		make_lowmem_page_readwrite(xen_initial_gdt);
+ 
+-	xen_filter_cpu_maps();
+ 	xen_setup_vcpu_info_placement();
+ 
+ 	/*
+@@ -491,5 +474,8 @@ static const struct smp_ops xen_smp_ops __initconst = {
+ void __init xen_smp_init(void)
+ {
+ 	smp_ops = xen_smp_ops;
+-	xen_fill_possible_map();
++
++	/* Avoid searching for BIOS MP tables */
++	x86_init.mpparse.find_smp_config = x86_init_noop;
++	x86_init.mpparse.get_smp_config = _get_smp_config;
+ }
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index b8c2ddc01aec3..8d95bf7765b19 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -6404,6 +6404,8 @@ static void bfq_exit_queue(struct elevator_queue *e)
+ 	spin_unlock_irq(&bfqd->lock);
+ #endif
+ 
++	wbt_enable_default(bfqd->queue);
++
+ 	kfree(bfqd);
+ }
+ 
+diff --git a/block/elevator.c b/block/elevator.c
+index 2a525863d4e92..2f962662c32a1 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -518,8 +518,6 @@ void elv_unregister_queue(struct request_queue *q)
+ 		kobject_del(&e->kobj);
+ 
+ 		e->registered = 0;
+-		/* Re-enable throttling in case elevator disabled it */
+-		wbt_enable_default(q);
+ 	}
+ }
+ 
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 1f54f82d22d61..d2b544bdc7b5e 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -3989,6 +3989,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 
+ 	/* devices that don't properly handle TRIM commands */
+ 	{ "SuperSSpeed S238*",		NULL,	ATA_HORKAGE_NOTRIM, },
++	{ "M88V29*",			NULL,	ATA_HORKAGE_NOTRIM, },
+ 
+ 	/*
+ 	 * As defined, the DRAT (Deterministic Read After Trim) and RZAT
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 5444206f35e22..5f541c9465598 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -1987,7 +1987,10 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+ 		 */
+ 		if (!capable(CAP_SYS_ADMIN))
+ 			return -EPERM;
+-		input_pool.entropy_count = 0;
++		if (xchg(&input_pool.entropy_count, 0) && random_write_wakeup_bits) {
++			wake_up_interruptible(&random_write_wait);
++			kill_fasync(&fasync, SIGIO, POLL_OUT);
++		}
+ 		return 0;
+ 	case RNDRESEEDCRNG:
+ 		if (!capable(CAP_SYS_ADMIN))
+diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
+index 991a7b5da29f0..7c268d1bd2050 100644
+--- a/drivers/dma/sh/rcar-dmac.c
++++ b/drivers/dma/sh/rcar-dmac.c
+@@ -1844,8 +1844,13 @@ static int rcar_dmac_probe(struct platform_device *pdev)
+ 
+ 	dmac->dev = &pdev->dev;
+ 	platform_set_drvdata(pdev, dmac);
+-	dma_set_max_seg_size(dmac->dev, RCAR_DMATCR_MASK);
+-	dma_set_mask_and_coherent(dmac->dev, DMA_BIT_MASK(40));
++	ret = dma_set_max_seg_size(dmac->dev, RCAR_DMATCR_MASK);
++	if (ret)
++		return ret;
++
++	ret = dma_set_mask_and_coherent(dmac->dev, DMA_BIT_MASK(40));
++	if (ret)
++		return ret;
+ 
+ 	ret = rcar_dmac_parse_of(&pdev->dev, dmac);
+ 	if (ret < 0)
+diff --git a/drivers/dma/stm32-dmamux.c b/drivers/dma/stm32-dmamux.c
+index bddd3b23f33fc..f04bcffd3c24a 100644
+--- a/drivers/dma/stm32-dmamux.c
++++ b/drivers/dma/stm32-dmamux.c
+@@ -292,10 +292,12 @@ static int stm32_dmamux_probe(struct platform_device *pdev)
+ 	ret = of_dma_router_register(node, stm32_dmamux_route_allocate,
+ 				     &stm32_dmamux->dmarouter);
+ 	if (ret)
+-		goto err_clk;
++		goto pm_disable;
+ 
+ 	return 0;
+ 
++pm_disable:
++	pm_runtime_disable(&pdev->dev);
+ err_clk:
+ 	clk_disable_unprepare(stm32_dmamux->clk);
+ 
+diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c
+index 01ff71f7b6456..f4eb071327be0 100644
+--- a/drivers/edac/edac_mc.c
++++ b/drivers/edac/edac_mc.c
+@@ -210,7 +210,7 @@ void *edac_align_ptr(void **p, unsigned int size, int n_elems)
+ 	else
+ 		return (char *)ptr;
+ 
+-	r = (unsigned long)p % align;
++	r = (unsigned long)ptr % align;
+ 
+ 	if (r == 0)
+ 		return (char *)ptr;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index 5207ad654f18e..0b162928a248b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -2120,7 +2120,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
+ 	unsigned i;
+ 	int r;
+ 
+-	if (direct_submit && !ring->sched.ready) {
++	if (!direct_submit && !ring->sched.ready) {
+ 		DRM_ERROR("Trying to move memory with ring turned off.\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
+index 1e1cb245fca77..8eb9bf3a1617e 100644
+--- a/drivers/gpu/drm/i915/Kconfig
++++ b/drivers/gpu/drm/i915/Kconfig
+@@ -100,6 +100,7 @@ config DRM_I915_USERPTR
+ config DRM_I915_GVT
+ 	bool "Enable Intel GVT-g graphics virtualization host support"
+ 	depends on DRM_I915
++	depends on X86
+ 	depends on 64BIT
+ 	default n
+ 	help
+diff --git a/drivers/gpu/drm/i915/display/intel_opregion.c b/drivers/gpu/drm/i915/display/intel_opregion.c
+index de995362f4283..abff2d6cedd12 100644
+--- a/drivers/gpu/drm/i915/display/intel_opregion.c
++++ b/drivers/gpu/drm/i915/display/intel_opregion.c
+@@ -361,6 +361,21 @@ int intel_opregion_notify_encoder(struct intel_encoder *intel_encoder,
+ 		port++;
+ 	}
+ 
++	/*
++	 * The port numbering and mapping here is bizarre. The now-obsolete
++	 * swsci spec supports ports numbered [0..4]. Port E is handled as a
++	 * special case, but port F and beyond are not. The functionality is
++	 * supposed to be obsolete for new platforms. Just bail out if the port
++	 * number is out of bounds after mapping.
++	 */
++	if (port > 4) {
++		drm_dbg_kms(&dev_priv->drm,
++			    "[ENCODER:%d:%s] port %c (index %u) out of bounds for display power state notification\n",
++			    intel_encoder->base.base.id, intel_encoder->base.name,
++			    port_name(intel_encoder->port), port);
++		return -EINVAL;
++	}
++
+ 	if (!enable)
+ 		parm |= 4 << 8;
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/base.c b/drivers/gpu/drm/nouveau/nvkm/falcon/base.c
+index c6a3448180d6f..93d9575181c67 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/falcon/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/falcon/base.c
+@@ -119,8 +119,12 @@ nvkm_falcon_disable(struct nvkm_falcon *falcon)
+ int
+ nvkm_falcon_reset(struct nvkm_falcon *falcon)
+ {
+-	nvkm_falcon_disable(falcon);
+-	return nvkm_falcon_enable(falcon);
++	if (!falcon->func->reset) {
++		nvkm_falcon_disable(falcon);
++		return nvkm_falcon_enable(falcon);
++	}
++
++	return falcon->func->reset(falcon);
+ }
+ 
+ int
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm200.c
+index 383376addb41c..a9d6c36195ed1 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm200.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm200.c
+@@ -23,9 +23,38 @@
+  */
+ #include "priv.h"
+ 
++static int
++gm200_pmu_flcn_reset(struct nvkm_falcon *falcon)
++{
++	struct nvkm_pmu *pmu = container_of(falcon, typeof(*pmu), falcon);
++
++	nvkm_falcon_wr32(falcon, 0x014, 0x0000ffff);
++	pmu->func->reset(pmu);
++	return nvkm_falcon_enable(falcon);
++}
++
++const struct nvkm_falcon_func
++gm200_pmu_flcn = {
++	.debug = 0xc08,
++	.fbif = 0xe00,
++	.load_imem = nvkm_falcon_v1_load_imem,
++	.load_dmem = nvkm_falcon_v1_load_dmem,
++	.read_dmem = nvkm_falcon_v1_read_dmem,
++	.bind_context = nvkm_falcon_v1_bind_context,
++	.wait_for_halt = nvkm_falcon_v1_wait_for_halt,
++	.clear_interrupt = nvkm_falcon_v1_clear_interrupt,
++	.set_start_addr = nvkm_falcon_v1_set_start_addr,
++	.start = nvkm_falcon_v1_start,
++	.enable = nvkm_falcon_v1_enable,
++	.disable = nvkm_falcon_v1_disable,
++	.reset = gm200_pmu_flcn_reset,
++	.cmdq = { 0x4a0, 0x4b0, 4 },
++	.msgq = { 0x4c8, 0x4cc, 0 },
++};
++
+ static const struct nvkm_pmu_func
+ gm200_pmu = {
+-	.flcn = &gt215_pmu_flcn,
++	.flcn = &gm200_pmu_flcn,
+ 	.enabled = gf100_pmu_enabled,
+ 	.reset = gf100_pmu_reset,
+ };
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
+index 8f6ed5373ea16..7938722b4da17 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
+@@ -211,7 +211,7 @@ gm20b_pmu_recv(struct nvkm_pmu *pmu)
+ 
+ static const struct nvkm_pmu_func
+ gm20b_pmu = {
+-	.flcn = &gt215_pmu_flcn,
++	.flcn = &gm200_pmu_flcn,
+ 	.enabled = gf100_pmu_enabled,
+ 	.intr = gt215_pmu_intr,
+ 	.recv = gm20b_pmu_recv,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
+index 3d8ce14dba7bf..3dfb3e8522f6a 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
+@@ -39,7 +39,7 @@ gp102_pmu_enabled(struct nvkm_pmu *pmu)
+ 
+ static const struct nvkm_pmu_func
+ gp102_pmu = {
+-	.flcn = &gt215_pmu_flcn,
++	.flcn = &gm200_pmu_flcn,
+ 	.enabled = gp102_pmu_enabled,
+ 	.reset = gp102_pmu_reset,
+ };
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+index 9c237c426599b..7f5f9d5448360 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+@@ -78,7 +78,7 @@ gp10b_pmu_acr = {
+ 
+ static const struct nvkm_pmu_func
+ gp10b_pmu = {
+-	.flcn = &gt215_pmu_flcn,
++	.flcn = &gm200_pmu_flcn,
+ 	.enabled = gf100_pmu_enabled,
+ 	.intr = gt215_pmu_intr,
+ 	.recv = gm20b_pmu_recv,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
+index 276b6d778e532..b945ec320cd2e 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
+@@ -44,6 +44,8 @@ void gf100_pmu_reset(struct nvkm_pmu *);
+ 
+ void gk110_pmu_pgob(struct nvkm_pmu *, bool);
+ 
++extern const struct nvkm_falcon_func gm200_pmu_flcn;
++
+ void gm20b_pmu_acr_bld_patch(struct nvkm_acr *, u32, s64);
+ void gm20b_pmu_acr_bld_write(struct nvkm_acr *, u32, struct nvkm_acr_lsfw *);
+ int gm20b_pmu_acr_boot(struct nvkm_falcon *);
+diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
+index cc5ee1b3af84f..12aa7877a625a 100644
+--- a/drivers/gpu/drm/radeon/atombios_encoders.c
++++ b/drivers/gpu/drm/radeon/atombios_encoders.c
+@@ -197,7 +197,8 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder,
+ 	 * so don't register a backlight device
+ 	 */
+ 	if ((rdev->pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE) &&
+-	    (rdev->pdev->device == 0x6741))
++	    (rdev->pdev->device == 0x6741) &&
++	    !dmi_match(DMI_PRODUCT_NAME, "iMac12,1"))
+ 		return;
+ 
+ 	if (!radeon_encoder->enc_priv)
+diff --git a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+index 23de359a1dec6..515e6f187dc77 100644
+--- a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+@@ -529,13 +529,6 @@ static int dw_hdmi_rockchip_bind(struct device *dev, struct device *master,
+ 		return ret;
+ 	}
+ 
+-	ret = clk_prepare_enable(hdmi->vpll_clk);
+-	if (ret) {
+-		DRM_DEV_ERROR(hdmi->dev, "Failed to enable HDMI vpll: %d\n",
+-			      ret);
+-		return ret;
+-	}
+-
+ 	hdmi->phy = devm_phy_optional_get(dev, "hdmi");
+ 	if (IS_ERR(hdmi->phy)) {
+ 		ret = PTR_ERR(hdmi->phy);
+@@ -544,6 +537,13 @@ static int dw_hdmi_rockchip_bind(struct device *dev, struct device *master,
+ 		return ret;
+ 	}
+ 
++	ret = clk_prepare_enable(hdmi->vpll_clk);
++	if (ret) {
++		DRM_DEV_ERROR(hdmi->dev, "Failed to enable HDMI vpll: %d\n",
++			      ret);
++		return ret;
++	}
++
+ 	drm_encoder_helper_add(encoder, &dw_hdmi_rockchip_encoder_helper_funcs);
+ 	drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS);
+ 
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 370ec4402ebe3..d2e4f9f5507d5 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -1318,6 +1318,7 @@
+ #define USB_VENDOR_ID_UGTIZER			0x2179
+ #define USB_DEVICE_ID_UGTIZER_TABLET_GP0610	0x0053
+ #define USB_DEVICE_ID_UGTIZER_TABLET_GT5040	0x0077
++#define USB_DEVICE_ID_UGTIZER_TABLET_WP5540	0x0004
+ 
+ #define USB_VENDOR_ID_VIEWSONIC			0x0543
+ #define USB_DEVICE_ID_VIEWSONIC_PD1011		0xe621
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 84a30202e3dbe..2ab71d717bb03 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -187,6 +187,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_TURBOX, USB_DEVICE_ID_TURBOX_KEYBOARD), HID_QUIRK_NOGET },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_KNA5), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_TWA60), HID_QUIRK_MULTI_INPUT },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_UGTIZER, USB_DEVICE_ID_UGTIZER_TABLET_WP5540), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_MEDIA_TABLET_10_6_INCH), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_MEDIA_TABLET_14_1_INCH), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_SIRIUS_BATTERY_FREE_TABLET), HID_QUIRK_MULTI_INPUT },
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index a5a402e776c77..362da2a83b470 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1944,8 +1944,10 @@ int vmbus_add_channel_kobj(struct hv_device *dev, struct vmbus_channel *channel)
+ 	kobj->kset = dev->channels_kset;
+ 	ret = kobject_init_and_add(kobj, &vmbus_chan_ktype, NULL,
+ 				   "%u", relid);
+-	if (ret)
++	if (ret) {
++		kobject_put(kobj);
+ 		return ret;
++	}
+ 
+ 	ret = sysfs_create_group(kobj, &vmbus_chan_group);
+ 
+@@ -1954,6 +1956,7 @@ int vmbus_add_channel_kobj(struct hv_device *dev, struct vmbus_channel *channel)
+ 		 * The calling functions' error handling paths will cleanup the
+ 		 * empty channel directory.
+ 		 */
++		kobject_put(kobj);
+ 		dev_err(device, "Unable to set up channel sysfs files\n");
+ 		return ret;
+ 	}
+diff --git a/drivers/i2c/busses/i2c-brcmstb.c b/drivers/i2c/busses/i2c-brcmstb.c
+index ba766d24219ef..44e2466f3c674 100644
+--- a/drivers/i2c/busses/i2c-brcmstb.c
++++ b/drivers/i2c/busses/i2c-brcmstb.c
+@@ -674,7 +674,7 @@ static int brcmstb_i2c_probe(struct platform_device *pdev)
+ 
+ 	/* set the data in/out register size for compatible SoCs */
+ 	if (of_device_is_compatible(dev->device->of_node,
+-				    "brcmstb,brcmper-i2c"))
++				    "brcm,brcmper-i2c"))
+ 		dev->data_regsz = sizeof(u8);
+ 	else
+ 		dev->data_regsz = sizeof(u32);
+diff --git a/drivers/i2c/busses/i2c-qcom-cci.c b/drivers/i2c/busses/i2c-qcom-cci.c
+index 1c259b5188de8..09e599069a81d 100644
+--- a/drivers/i2c/busses/i2c-qcom-cci.c
++++ b/drivers/i2c/busses/i2c-qcom-cci.c
+@@ -558,7 +558,7 @@ static int cci_probe(struct platform_device *pdev)
+ 		cci->master[idx].adap.quirks = &cci->data->quirks;
+ 		cci->master[idx].adap.algo = &cci_algo;
+ 		cci->master[idx].adap.dev.parent = dev;
+-		cci->master[idx].adap.dev.of_node = child;
++		cci->master[idx].adap.dev.of_node = of_node_get(child);
+ 		cci->master[idx].master = idx;
+ 		cci->master[idx].cci = cci;
+ 
+@@ -643,8 +643,10 @@ static int cci_probe(struct platform_device *pdev)
+ 			continue;
+ 
+ 		ret = i2c_add_adapter(&cci->master[i].adap);
+-		if (ret < 0)
++		if (ret < 0) {
++			of_node_put(cci->master[i].adap.dev.of_node);
+ 			goto error_i2c;
++		}
+ 	}
+ 
+ 	pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
+@@ -655,9 +657,11 @@ static int cci_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ error_i2c:
+-	for (; i >= 0; i--) {
+-		if (cci->master[i].cci)
++	for (--i ; i >= 0; i--) {
++		if (cci->master[i].cci) {
+ 			i2c_del_adapter(&cci->master[i].adap);
++			of_node_put(cci->master[i].adap.dev.of_node);
++		}
+ 	}
+ error:
+ 	disable_irq(cci->irq);
+@@ -673,8 +677,10 @@ static int cci_remove(struct platform_device *pdev)
+ 	int i;
+ 
+ 	for (i = 0; i < cci->data->num_masters; i++) {
+-		if (cci->master[i].cci)
++		if (cci->master[i].cci) {
+ 			i2c_del_adapter(&cci->master[i].adap);
++			of_node_put(cci->master[i].adap.dev.of_node);
++		}
+ 		cci_halt(cci, i);
+ 	}
+ 
+diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
+index 926e55d838cb1..bd99ee0ae433d 100644
+--- a/drivers/irqchip/irq-sifive-plic.c
++++ b/drivers/irqchip/irq-sifive-plic.c
+@@ -400,3 +400,4 @@ out_free_priv:
+ 
+ IRQCHIP_DECLARE(sifive_plic, "sifive,plic-1.0.0", plic_init);
+ IRQCHIP_DECLARE(riscv_plic0, "riscv,plic0", plic_init); /* for legacy systems */
++IRQCHIP_DECLARE(thead_c900_plic, "thead,c900-plic", plic_init); /* for firmware driver */
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 94caee49da99c..99b981a05b6c0 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1642,31 +1642,31 @@ static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req)
+ 	struct mmc_card *card = mq->card;
+ 	struct mmc_host *host = card->host;
+ 	blk_status_t error = BLK_STS_OK;
+-	int retries = 0;
+ 
+ 	do {
+ 		u32 status;
+ 		int err;
++		int retries = 0;
+ 
+-		mmc_blk_rw_rq_prep(mqrq, card, 1, mq);
++		while (retries++ <= MMC_READ_SINGLE_RETRIES) {
++			mmc_blk_rw_rq_prep(mqrq, card, 1, mq);
+ 
+-		mmc_wait_for_req(host, mrq);
++			mmc_wait_for_req(host, mrq);
+ 
+-		err = mmc_send_status(card, &status);
+-		if (err)
+-			goto error_exit;
+-
+-		if (!mmc_host_is_spi(host) &&
+-		    !mmc_ready_for_data(status)) {
+-			err = mmc_blk_fix_state(card, req);
++			err = mmc_send_status(card, &status);
+ 			if (err)
+ 				goto error_exit;
+-		}
+ 
+-		if (mrq->cmd->error && retries++ < MMC_READ_SINGLE_RETRIES)
+-			continue;
++			if (!mmc_host_is_spi(host) &&
++			    !mmc_ready_for_data(status)) {
++				err = mmc_blk_fix_state(card, req);
++				if (err)
++					goto error_exit;
++			}
+ 
+-		retries = 0;
++			if (!mrq->cmd->error)
++				break;
++		}
+ 
+ 		if (mrq->cmd->error ||
+ 		    mrq->data->error ||
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index 909b14cc8e55c..580b91cbd18de 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -2062,7 +2062,7 @@ static int brcmnand_read_by_pio(struct mtd_info *mtd, struct nand_chip *chip,
+ 					mtd->oobsize / trans,
+ 					host->hwcfg.sector_size_1k);
+ 
+-		if (!ret) {
++		if (ret != -EBADMSG) {
+ 			*err_addr = brcmnand_get_uncorrecc_addr(ctrl);
+ 
+ 			if (*err_addr)
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index 226d527b6c6b7..cb7631145700a 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -2291,7 +2291,7 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
+ 		this->hw.must_apply_timings = false;
+ 		ret = gpmi_nfc_apply_timings(this);
+ 		if (ret)
+-			return ret;
++			goto out_pm;
+ 	}
+ 
+ 	dev_dbg(this->dev, "%s: %d instructions\n", __func__, op->ninstrs);
+@@ -2420,6 +2420,7 @@ unmap:
+ 
+ 	this->bch = false;
+ 
++out_pm:
+ 	pm_runtime_mark_last_busy(this->dev);
+ 	pm_runtime_put_autosuspend(this->dev);
+ 
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index b99d2e9d1e2c4..bb181e18c7c52 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -2,7 +2,6 @@
+ /*
+  * Copyright (c) 2016, The Linux Foundation. All rights reserved.
+  */
+-
+ #include <linux/clk.h>
+ #include <linux/slab.h>
+ #include <linux/bitops.h>
+@@ -2968,10 +2967,6 @@ static int qcom_nandc_probe(struct platform_device *pdev)
+ 	if (!nandc->base_dma)
+ 		return -ENXIO;
+ 
+-	ret = qcom_nandc_alloc(nandc);
+-	if (ret)
+-		goto err_nandc_alloc;
+-
+ 	ret = clk_prepare_enable(nandc->core_clk);
+ 	if (ret)
+ 		goto err_core_clk;
+@@ -2980,6 +2975,10 @@ static int qcom_nandc_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_aon_clk;
+ 
++	ret = qcom_nandc_alloc(nandc);
++	if (ret)
++		goto err_nandc_alloc;
++
+ 	ret = qcom_nandc_setup(nandc);
+ 	if (ret)
+ 		goto err_setup;
+@@ -2991,15 +2990,14 @@ static int qcom_nandc_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_setup:
++	qcom_nandc_unalloc(nandc);
++err_nandc_alloc:
+ 	clk_disable_unprepare(nandc->aon_clk);
+ err_aon_clk:
+ 	clk_disable_unprepare(nandc->core_clk);
+ err_core_clk:
+-	qcom_nandc_unalloc(nandc);
+-err_nandc_alloc:
+ 	dma_unmap_resource(dev, res->start, resource_size(res),
+ 			   DMA_BIDIRECTIONAL, 0);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
+index ab8c833411654..c2cef7ba26719 100644
+--- a/drivers/net/bonding/bond_3ad.c
++++ b/drivers/net/bonding/bond_3ad.c
+@@ -223,7 +223,7 @@ static inline int __check_agg_selection_timer(struct port *port)
+ 	if (bond == NULL)
+ 		return 0;
+ 
+-	return BOND_AD_INFO(bond).agg_select_timer ? 1 : 0;
++	return atomic_read(&BOND_AD_INFO(bond).agg_select_timer) ? 1 : 0;
+ }
+ 
+ /**
+@@ -1976,7 +1976,7 @@ static void ad_marker_response_received(struct bond_marker *marker,
+  */
+ void bond_3ad_initiate_agg_selection(struct bonding *bond, int timeout)
+ {
+-	BOND_AD_INFO(bond).agg_select_timer = timeout;
++	atomic_set(&BOND_AD_INFO(bond).agg_select_timer, timeout);
+ }
+ 
+ /**
+@@ -2259,6 +2259,28 @@ void bond_3ad_update_ad_actor_settings(struct bonding *bond)
+ 	spin_unlock_bh(&bond->mode_lock);
+ }
+ 
++/**
++ * bond_agg_timer_advance - advance agg_select_timer
++ * @bond:  bonding structure
++ *
++ * Return true when agg_select_timer reaches 0.
++ */
++static bool bond_agg_timer_advance(struct bonding *bond)
++{
++	int val, nval;
++
++	while (1) {
++		val = atomic_read(&BOND_AD_INFO(bond).agg_select_timer);
++		if (!val)
++			return false;
++		nval = val - 1;
++		if (atomic_cmpxchg(&BOND_AD_INFO(bond).agg_select_timer,
++				   val, nval) == val)
++			break;
++	}
++	return nval == 0;
++}
++
+ /**
+  * bond_3ad_state_machine_handler - handle state machines timeout
+  * @work: work context to fetch bonding struct to work on from
+@@ -2294,9 +2316,7 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
+ 	if (!bond_has_slaves(bond))
+ 		goto re_arm;
+ 
+-	/* check if agg_select_timer timer after initialize is timed out */
+-	if (BOND_AD_INFO(bond).agg_select_timer &&
+-	    !(--BOND_AD_INFO(bond).agg_select_timer)) {
++	if (bond_agg_timer_advance(bond)) {
+ 		slave = bond_first_slave_rcu(bond);
+ 		port = slave ? &(SLAVE_AD_INFO(slave)->port) : NULL;
+ 
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 99770b1671923..cbeb69bca0bba 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2272,10 +2272,9 @@ static int __bond_release_one(struct net_device *bond_dev,
+ 		bond_select_active_slave(bond);
+ 	}
+ 
+-	if (!bond_has_slaves(bond)) {
+-		bond_set_carrier(bond);
++	bond_set_carrier(bond);
++	if (!bond_has_slaves(bond))
+ 		eth_hw_addr_random(bond_dev);
+-	}
+ 
+ 	unblock_netpoll_tx();
+ 	synchronize_rcu();
+diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
+index dcf1fc89451f2..2044d440d7de4 100644
+--- a/drivers/net/dsa/lan9303-core.c
++++ b/drivers/net/dsa/lan9303-core.c
+@@ -1305,7 +1305,7 @@ static int lan9303_probe_reset_gpio(struct lan9303 *chip,
+ 				     struct device_node *np)
+ {
+ 	chip->reset_gpio = devm_gpiod_get_optional(chip->dev, "reset",
+-						   GPIOD_OUT_LOW);
++						   GPIOD_OUT_HIGH);
+ 	if (IS_ERR(chip->reset_gpio))
+ 		return PTR_ERR(chip->reset_gpio);
+ 
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index ed517985ca88e..80ef7ea779545 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -2114,8 +2114,8 @@ static int gswip_remove(struct platform_device *pdev)
+ 
+ 	if (priv->ds->slave_mii_bus) {
+ 		mdiobus_unregister(priv->ds->slave_mii_bus);
+-		mdiobus_free(priv->ds->slave_mii_bus);
+ 		of_node_put(priv->ds->slave_mii_bus->dev.of_node);
++		mdiobus_free(priv->ds->slave_mii_bus);
+ 	}
+ 
+ 	for (i = 0; i < priv->num_gphy_fw; i++)
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 1e8bf6b9834bb..2af464ac250ac 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -4534,7 +4534,7 @@ static int macb_probe(struct platform_device *pdev)
+ 
+ #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+ 	if (GEM_BFEXT(DAW64, gem_readl(bp, DCFG6))) {
+-		dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
++		dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(44));
+ 		bp->hw_dma_cap |= HW_DMA_CAP_64B;
+ 	}
+ #endif
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index f917bc9c87969..d89ddc165ec24 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -4225,7 +4225,7 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
+ 	}
+ 
+ 	INIT_WORK(&priv->tx_onestep_tstamp, dpaa2_eth_tx_onestep_tstamp);
+-
++	mutex_init(&priv->onestep_tstamp_lock);
+ 	skb_queue_head_init(&priv->tx_skbs);
+ 
+ 	/* Obtain a MC portal */
+diff --git a/drivers/net/ieee802154/at86rf230.c b/drivers/net/ieee802154/at86rf230.c
+index 7d67f41387f55..4f5ef8a9a9a87 100644
+--- a/drivers/net/ieee802154/at86rf230.c
++++ b/drivers/net/ieee802154/at86rf230.c
+@@ -100,6 +100,7 @@ struct at86rf230_local {
+ 	unsigned long cal_timeout;
+ 	bool is_tx;
+ 	bool is_tx_from_off;
++	bool was_tx;
+ 	u8 tx_retry;
+ 	struct sk_buff *tx_skb;
+ 	struct at86rf230_state_change tx;
+@@ -343,7 +344,11 @@ at86rf230_async_error_recover_complete(void *context)
+ 	if (ctx->free)
+ 		kfree(ctx);
+ 
+-	ieee802154_wake_queue(lp->hw);
++	if (lp->was_tx) {
++		lp->was_tx = 0;
++		dev_kfree_skb_any(lp->tx_skb);
++		ieee802154_wake_queue(lp->hw);
++	}
+ }
+ 
+ static void
+@@ -352,7 +357,11 @@ at86rf230_async_error_recover(void *context)
+ 	struct at86rf230_state_change *ctx = context;
+ 	struct at86rf230_local *lp = ctx->lp;
+ 
+-	lp->is_tx = 0;
++	if (lp->is_tx) {
++		lp->was_tx = 1;
++		lp->is_tx = 0;
++	}
++
+ 	at86rf230_async_state_change(lp, ctx, STATE_RX_AACK_ON,
+ 				     at86rf230_async_error_recover_complete);
+ }
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index fea8b681f567c..fd9f33c833fa3 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -2977,8 +2977,8 @@ static void ca8210_hw_setup(struct ieee802154_hw *ca8210_hw)
+ 	ca8210_hw->phy->cca.opt = NL802154_CCA_OPT_ENERGY_CARRIER_AND;
+ 	ca8210_hw->phy->cca_ed_level = -9800;
+ 	ca8210_hw->phy->symbol_duration = 16;
+-	ca8210_hw->phy->lifs_period = 40;
+-	ca8210_hw->phy->sifs_period = 12;
++	ca8210_hw->phy->lifs_period = 40 * ca8210_hw->phy->symbol_duration;
++	ca8210_hw->phy->sifs_period = 12 * ca8210_hw->phy->symbol_duration;
+ 	ca8210_hw->flags =
+ 		IEEE802154_HW_AFILT |
+ 		IEEE802154_HW_OMIT_CKSUM |
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 6e033ba717030..597766d14563e 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1333,6 +1333,8 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x413c, 0x81d7, 0)},	/* Dell Wireless 5821e */
+ 	{QMI_FIXED_INTF(0x413c, 0x81d7, 1)},	/* Dell Wireless 5821e preproduction config */
+ 	{QMI_FIXED_INTF(0x413c, 0x81e0, 0)},	/* Dell Wireless 5821e with eSIM support*/
++	{QMI_FIXED_INTF(0x413c, 0x81e4, 0)},	/* Dell Wireless 5829e with eSIM support*/
++	{QMI_FIXED_INTF(0x413c, 0x81e6, 0)},	/* Dell Wireless 5829e */
+ 	{QMI_FIXED_INTF(0x03f0, 0x4e1d, 8)},	/* HP lt4111 LTE/EV-DO/HSPA+ Gobi 4G Module */
+ 	{QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)},	/* HP lt4120 Snapdragon X5 LTE */
+ 	{QMI_FIXED_INTF(0x22de, 0x9061, 3)},	/* WeTelecom WPD-600N */
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index 30c6d7b18599a..ab84ac3f8f03f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -1646,6 +1646,8 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
+  out_unbind:
+ 	complete(&drv->request_firmware_complete);
+ 	device_release_driver(drv->trans->dev);
++	/* drv has just been freed by the release */
++	failure = false;
+  free:
+ 	if (failure)
+ 		iwl_dealloc_ucode(drv);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+index b031e9304983c..b2991582189c2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+@@ -320,8 +320,7 @@ int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
+ 	/* This may fail if AMT took ownership of the device */
+ 	if (iwl_pcie_prepare_card_hw(trans)) {
+ 		IWL_WARN(trans, "Exit HW not ready\n");
+-		ret = -EIO;
+-		goto out;
++		return -EIO;
+ 	}
+ 
+ 	iwl_enable_rfkill_int(trans);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 082768ec8aa80..daec61a60fec5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1313,8 +1313,7 @@ static int iwl_trans_pcie_start_fw(struct iwl_trans *trans,
+ 	/* This may fail if AMT took ownership of the device */
+ 	if (iwl_pcie_prepare_card_hw(trans)) {
+ 		IWL_WARN(trans, "Exit HW not ready\n");
+-		ret = -EIO;
+-		goto out;
++		return -EIO;
+ 	}
+ 
+ 	iwl_enable_rfkill_int(trans);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 99b5152482fe4..71c85c99e86c6 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4259,7 +4259,14 @@ static void nvme_async_event_work(struct work_struct *work)
+ 		container_of(work, struct nvme_ctrl, async_event_work);
+ 
+ 	nvme_aen_uevent(ctrl);
+-	ctrl->ops->submit_async_event(ctrl);
++
++	/*
++	 * The transport drivers must guarantee AER submission here is safe by
++	 * flushing ctrl async_event_work after changing the controller state
++	 * from LIVE and before freeing the admin queue.
++	*/
++	if (ctrl->state == NVME_CTRL_LIVE)
++		ctrl->ops->submit_async_event(ctrl);
+ }
+ 
+ static bool nvme_ctrl_pp_status(struct nvme_ctrl *ctrl)
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 1b90563818434..8eacc9bd58f5a 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1200,6 +1200,7 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
+ 			struct nvme_rdma_ctrl, err_work);
+ 
+ 	nvme_stop_keep_alive(&ctrl->ctrl);
++	flush_work(&ctrl->ctrl.async_event_work);
+ 	nvme_rdma_teardown_io_queues(ctrl, false);
+ 	nvme_start_queues(&ctrl->ctrl);
+ 	nvme_rdma_teardown_admin_queue(ctrl, false);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 662028d7a1c6a..6105894a218a5 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -2077,6 +2077,7 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
+ 	struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
+ 
+ 	nvme_stop_keep_alive(ctrl);
++	flush_work(&ctrl->async_event_work);
+ 	nvme_tcp_teardown_io_queues(ctrl, false);
+ 	/* unquiesce to fail fast pending requests */
+ 	nvme_start_queues(ctrl);
+diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
+index b5f9ee81a46c1..b916fab9b1618 100644
+--- a/drivers/parisc/ccio-dma.c
++++ b/drivers/parisc/ccio-dma.c
+@@ -1003,7 +1003,7 @@ ccio_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
+ 	ioc->usg_calls++;
+ #endif
+ 
+-	while(sg_dma_len(sglist) && nents--) {
++	while (nents && sg_dma_len(sglist)) {
+ 
+ #ifdef CCIO_COLLECT_STATS
+ 		ioc->usg_pages += sg_dma_len(sglist) >> PAGE_SHIFT;
+@@ -1011,6 +1011,7 @@ ccio_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
+ 		ccio_unmap_page(dev, sg_dma_address(sglist),
+ 				  sg_dma_len(sglist), direction, 0);
+ 		++sglist;
++		nents--;
+ 	}
+ 
+ 	DBG_RUN_SG("%s() DONE (nents %d)\n", __func__, nents);
+diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
+index dce4cdf786cdb..228c58060e9b3 100644
+--- a/drivers/parisc/sba_iommu.c
++++ b/drivers/parisc/sba_iommu.c
+@@ -1047,7 +1047,7 @@ sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
+ 	spin_unlock_irqrestore(&ioc->res_lock, flags);
+ #endif
+ 
+-	while (sg_dma_len(sglist) && nents--) {
++	while (nents && sg_dma_len(sglist)) {
+ 
+ 		sba_unmap_page(dev, sg_dma_address(sglist), sg_dma_len(sglist),
+ 				direction, 0);
+@@ -1056,6 +1056,7 @@ sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
+ 		ioc->usingle_calls--;	/* kluge since call is unmap_sg() */
+ #endif
+ 		++sglist;
++		nents--;
+ 	}
+ 
+ 	DBG_RUN_SG("%s() DONE (nents %d)\n", __func__,  nents);
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index ad3e3cde1c20d..a070e69bb49cd 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -1841,8 +1841,17 @@ static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
+ 		if (!hv_dev)
+ 			continue;
+ 
+-		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY)
+-			set_dev_node(&dev->dev, hv_dev->desc.virtual_numa_node);
++		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY &&
++		    hv_dev->desc.virtual_numa_node < num_possible_nodes())
++			/*
++			 * The kernel may boot with some NUMA nodes offline
++			 * (e.g. in a KDUMP kernel) or with NUMA disabled via
++			 * "numa=off". In those cases, adjust the host provided
++			 * NUMA node to a valid NUMA node used by the kernel.
++			 */
++			set_dev_node(&dev->dev,
++				     numa_map_to_online_node(
++					     hv_dev->desc.virtual_numa_node));
+ 
+ 		put_pcichild(hv_dev);
+ 	}
+diff --git a/drivers/phy/broadcom/phy-brcm-usb.c b/drivers/phy/broadcom/phy-brcm-usb.c
+index 99fbc7e4138be..b901a0d4e2a80 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb.c
++++ b/drivers/phy/broadcom/phy-brcm-usb.c
+@@ -17,6 +17,7 @@
+ #include <linux/soc/brcmstb/brcmstb.h>
+ #include <dt-bindings/phy/phy.h>
+ #include <linux/mfd/syscon.h>
++#include <linux/suspend.h>
+ 
+ #include "phy-brcm-usb-init.h"
+ 
+@@ -69,12 +70,35 @@ struct brcm_usb_phy_data {
+ 	int			init_count;
+ 	int			wake_irq;
+ 	struct brcm_usb_phy	phys[BRCM_USB_PHY_ID_MAX];
++	struct notifier_block	pm_notifier;
++	bool			pm_active;
+ };
+ 
+ static s8 *node_reg_names[BRCM_REGS_MAX] = {
+ 	"crtl", "xhci_ec", "xhci_gbl", "usb_phy", "usb_mdio", "bdc_ec"
+ };
+ 
++static int brcm_pm_notifier(struct notifier_block *notifier,
++			    unsigned long pm_event,
++			    void *unused)
++{
++	struct brcm_usb_phy_data *priv =
++		container_of(notifier, struct brcm_usb_phy_data, pm_notifier);
++
++	switch (pm_event) {
++	case PM_HIBERNATION_PREPARE:
++	case PM_SUSPEND_PREPARE:
++		priv->pm_active = true;
++		break;
++	case PM_POST_RESTORE:
++	case PM_POST_HIBERNATION:
++	case PM_POST_SUSPEND:
++		priv->pm_active = false;
++		break;
++	}
++	return NOTIFY_DONE;
++}
++
+ static irqreturn_t brcm_usb_phy_wake_isr(int irq, void *dev_id)
+ {
+ 	struct phy *gphy = dev_id;
+@@ -90,6 +114,9 @@ static int brcm_usb_phy_init(struct phy *gphy)
+ 	struct brcm_usb_phy_data *priv =
+ 		container_of(phy, struct brcm_usb_phy_data, phys[phy->id]);
+ 
++	if (priv->pm_active)
++		return 0;
++
+ 	/*
+ 	 * Use a lock to make sure a second caller waits until
+ 	 * the base phy is inited before using it.
+@@ -119,6 +146,9 @@ static int brcm_usb_phy_exit(struct phy *gphy)
+ 	struct brcm_usb_phy_data *priv =
+ 		container_of(phy, struct brcm_usb_phy_data, phys[phy->id]);
+ 
++	if (priv->pm_active)
++		return 0;
++
+ 	dev_dbg(&gphy->dev, "EXIT\n");
+ 	if (phy->id == BRCM_USB_PHY_2_0)
+ 		brcm_usb_uninit_eohci(&priv->ini);
+@@ -484,6 +514,9 @@ static int brcm_usb_phy_probe(struct platform_device *pdev)
+ 	if (err)
+ 		return err;
+ 
++	priv->pm_notifier.notifier_call = brcm_pm_notifier;
++	register_pm_notifier(&priv->pm_notifier);
++
+ 	mutex_init(&priv->mutex);
+ 
+ 	/* make sure invert settings are correct */
+@@ -524,7 +557,10 @@ static int brcm_usb_phy_probe(struct platform_device *pdev)
+ 
+ static int brcm_usb_phy_remove(struct platform_device *pdev)
+ {
++	struct brcm_usb_phy_data *priv = dev_get_drvdata(&pdev->dev);
++
+ 	sysfs_remove_group(&pdev->dev.kobj, &brcm_usb_phy_group);
++	unregister_pm_notifier(&priv->pm_notifier);
+ 
+ 	return 0;
+ }
+@@ -535,6 +571,7 @@ static int brcm_usb_phy_suspend(struct device *dev)
+ 	struct brcm_usb_phy_data *priv = dev_get_drvdata(dev);
+ 
+ 	if (priv->init_count) {
++		dev_dbg(dev, "SUSPEND\n");
+ 		priv->ini.wake_enabled = device_may_wakeup(dev);
+ 		if (priv->phys[BRCM_USB_PHY_3_0].inited)
+ 			brcm_usb_uninit_xhci(&priv->ini);
+@@ -574,6 +611,7 @@ static int brcm_usb_phy_resume(struct device *dev)
+ 	 * Uninitialize anything that wasn't previously initialized.
+ 	 */
+ 	if (priv->init_count) {
++		dev_dbg(dev, "RESUME\n");
+ 		if (priv->wake_irq >= 0)
+ 			disable_irq_wake(priv->wake_irq);
+ 		brcm_usb_init_common(&priv->ini);
+diff --git a/drivers/platform/x86/intel_speed_select_if/isst_if_common.c b/drivers/platform/x86/intel_speed_select_if/isst_if_common.c
+index 0c2aa22c7a12e..407afafc7e83f 100644
+--- a/drivers/platform/x86/intel_speed_select_if/isst_if_common.c
++++ b/drivers/platform/x86/intel_speed_select_if/isst_if_common.c
+@@ -532,7 +532,10 @@ static long isst_if_def_ioctl(struct file *file, unsigned int cmd,
+ 	return ret;
+ }
+ 
+-static DEFINE_MUTEX(punit_misc_dev_lock);
++/* Lock to prevent module registration when already opened by user space */
++static DEFINE_MUTEX(punit_misc_dev_open_lock);
++/* Lock to allow one share misc device for all ISST interace */
++static DEFINE_MUTEX(punit_misc_dev_reg_lock);
+ static int misc_usage_count;
+ static int misc_device_ret;
+ static int misc_device_open;
+@@ -542,7 +545,7 @@ static int isst_if_open(struct inode *inode, struct file *file)
+ 	int i, ret = 0;
+ 
+ 	/* Fail open, if a module is going away */
+-	mutex_lock(&punit_misc_dev_lock);
++	mutex_lock(&punit_misc_dev_open_lock);
+ 	for (i = 0; i < ISST_IF_DEV_MAX; ++i) {
+ 		struct isst_if_cmd_cb *cb = &punit_callbacks[i];
+ 
+@@ -564,7 +567,7 @@ static int isst_if_open(struct inode *inode, struct file *file)
+ 	} else {
+ 		misc_device_open++;
+ 	}
+-	mutex_unlock(&punit_misc_dev_lock);
++	mutex_unlock(&punit_misc_dev_open_lock);
+ 
+ 	return ret;
+ }
+@@ -573,7 +576,7 @@ static int isst_if_relase(struct inode *inode, struct file *f)
+ {
+ 	int i;
+ 
+-	mutex_lock(&punit_misc_dev_lock);
++	mutex_lock(&punit_misc_dev_open_lock);
+ 	misc_device_open--;
+ 	for (i = 0; i < ISST_IF_DEV_MAX; ++i) {
+ 		struct isst_if_cmd_cb *cb = &punit_callbacks[i];
+@@ -581,7 +584,7 @@ static int isst_if_relase(struct inode *inode, struct file *f)
+ 		if (cb->registered)
+ 			module_put(cb->owner);
+ 	}
+-	mutex_unlock(&punit_misc_dev_lock);
++	mutex_unlock(&punit_misc_dev_open_lock);
+ 
+ 	return 0;
+ }
+@@ -598,6 +601,43 @@ static struct miscdevice isst_if_char_driver = {
+ 	.fops		= &isst_if_char_driver_ops,
+ };
+ 
++static int isst_misc_reg(void)
++{
++	mutex_lock(&punit_misc_dev_reg_lock);
++	if (misc_device_ret)
++		goto unlock_exit;
++
++	if (!misc_usage_count) {
++		misc_device_ret = isst_if_cpu_info_init();
++		if (misc_device_ret)
++			goto unlock_exit;
++
++		misc_device_ret = misc_register(&isst_if_char_driver);
++		if (misc_device_ret) {
++			isst_if_cpu_info_exit();
++			goto unlock_exit;
++		}
++	}
++	misc_usage_count++;
++
++unlock_exit:
++	mutex_unlock(&punit_misc_dev_reg_lock);
++
++	return misc_device_ret;
++}
++
++static void isst_misc_unreg(void)
++{
++	mutex_lock(&punit_misc_dev_reg_lock);
++	if (misc_usage_count)
++		misc_usage_count--;
++	if (!misc_usage_count && !misc_device_ret) {
++		misc_deregister(&isst_if_char_driver);
++		isst_if_cpu_info_exit();
++	}
++	mutex_unlock(&punit_misc_dev_reg_lock);
++}
++
+ /**
+  * isst_if_cdev_register() - Register callback for IOCTL
+  * @device_type: The device type this callback handling.
+@@ -615,38 +655,31 @@ static struct miscdevice isst_if_char_driver = {
+  */
+ int isst_if_cdev_register(int device_type, struct isst_if_cmd_cb *cb)
+ {
+-	if (misc_device_ret)
+-		return misc_device_ret;
++	int ret;
+ 
+ 	if (device_type >= ISST_IF_DEV_MAX)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&punit_misc_dev_lock);
++	mutex_lock(&punit_misc_dev_open_lock);
++	/* Device is already open, we don't want to add new callbacks */
+ 	if (misc_device_open) {
+-		mutex_unlock(&punit_misc_dev_lock);
++		mutex_unlock(&punit_misc_dev_open_lock);
+ 		return -EAGAIN;
+ 	}
+-	if (!misc_usage_count) {
+-		int ret;
+-
+-		misc_device_ret = misc_register(&isst_if_char_driver);
+-		if (misc_device_ret)
+-			goto unlock_exit;
+-
+-		ret = isst_if_cpu_info_init();
+-		if (ret) {
+-			misc_deregister(&isst_if_char_driver);
+-			misc_device_ret = ret;
+-			goto unlock_exit;
+-		}
+-	}
+ 	memcpy(&punit_callbacks[device_type], cb, sizeof(*cb));
+ 	punit_callbacks[device_type].registered = 1;
+-	misc_usage_count++;
+-unlock_exit:
+-	mutex_unlock(&punit_misc_dev_lock);
++	mutex_unlock(&punit_misc_dev_open_lock);
+ 
+-	return misc_device_ret;
++	ret = isst_misc_reg();
++	if (ret) {
++		/*
++		 * No need of mutex as the misc device register failed
++		 * as no one can open device yet. Hence no contention.
++		 */
++		punit_callbacks[device_type].registered = 0;
++		return ret;
++	}
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(isst_if_cdev_register);
+ 
+@@ -661,16 +694,12 @@ EXPORT_SYMBOL_GPL(isst_if_cdev_register);
+  */
+ void isst_if_cdev_unregister(int device_type)
+ {
+-	mutex_lock(&punit_misc_dev_lock);
+-	misc_usage_count--;
++	isst_misc_unreg();
++	mutex_lock(&punit_misc_dev_open_lock);
+ 	punit_callbacks[device_type].registered = 0;
+ 	if (device_type == ISST_IF_DEV_MBOX)
+ 		isst_delete_hash();
+-	if (!misc_usage_count && !misc_device_ret) {
+-		misc_deregister(&isst_if_char_driver);
+-		isst_if_cpu_info_exit();
+-	}
+-	mutex_unlock(&punit_misc_dev_lock);
++	mutex_unlock(&punit_misc_dev_open_lock);
+ }
+ EXPORT_SYMBOL_GPL(isst_if_cdev_unregister);
+ 
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index 59b7e90cd5875..ab6a9369649db 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -756,6 +756,21 @@ static const struct ts_dmi_data predia_basic_data = {
+ 	.properties	= predia_basic_props,
+ };
+ 
++static const struct property_entry rwc_nanote_p8_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 46),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1728),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1140),
++	PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-rwc-nanote-p8.fw"),
++	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
++	{ }
++};
++
++static const struct ts_dmi_data rwc_nanote_p8_data = {
++	.acpi_name = "MSSL1680:00",
++	.properties = rwc_nanote_p8_props,
++};
++
+ static const struct property_entry schneider_sct101ctm_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-size-x", 1715),
+ 	PROPERTY_ENTRY_U32("touchscreen-size-y", 1140),
+@@ -1326,6 +1341,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_EXACT_MATCH(DMI_BOARD_NAME, "0E57"),
+ 		},
+ 	},
++	{
++		/* RWC NANOTE P8 */
++		.driver_data = (void *)&rwc_nanote_p8_data,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Default string"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "AY07J"),
++			DMI_MATCH(DMI_PRODUCT_SKU, "0001")
++		},
++	},
+ 	{
+ 		/* Schneider SCT101CTM */
+ 		.driver_data = (void *)&schneider_sct101ctm_data,
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index d1894539efc30..03bc472f302a2 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -374,6 +374,7 @@ struct lpfc_vport {
+ #define FC_VPORT_LOGO_RCVD      0x200    /* LOGO received on vport */
+ #define FC_RSCN_DISCOVERY       0x400	 /* Auth all devices after RSCN */
+ #define FC_LOGO_RCVD_DID_CHNG   0x800    /* FDISC on phys port detect DID chng*/
++#define FC_PT2PT_NO_NVME        0x1000   /* Don't send NVME PRLI */
+ #define FC_SCSI_SCAN_TMO        0x4000	 /* scsi scan timer running */
+ #define FC_ABORT_DISCOVERY      0x8000	 /* we want to abort discovery */
+ #define FC_NDISC_ACTIVE         0x10000	 /* NPort discovery active */
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index b73d5d9494021..f0d1ced630162 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -1142,6 +1142,9 @@ lpfc_issue_lip(struct Scsi_Host *shost)
+ 	pmboxq->u.mb.mbxCommand = MBX_DOWN_LINK;
+ 	pmboxq->u.mb.mbxOwner = OWN_HOST;
+ 
++	if ((vport->fc_flag & FC_PT2PT) && (vport->fc_flag & FC_PT2PT_NO_NVME))
++		vport->fc_flag &= ~FC_PT2PT_NO_NVME;
++
+ 	mbxstatus = lpfc_sli_issue_mbox_wait(phba, pmboxq, LPFC_MBOX_TMO * 2);
+ 
+ 	if ((mbxstatus == MBX_SUCCESS) &&
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 3d9889b3d5c8a..387b0cd1ea18f 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -1067,7 +1067,8 @@ stop_rr_fcf_flogi:
+ 
+ 		/* FLOGI failed, so there is no fabric */
+ 		spin_lock_irq(shost->host_lock);
+-		vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
++		vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP |
++				    FC_PT2PT_NO_NVME);
+ 		spin_unlock_irq(shost->host_lock);
+ 
+ 		/* If private loop, then allow max outstanding els to be
+@@ -3945,6 +3946,23 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 		/* Added for Vendor specifc support
+ 		 * Just keep retrying for these Rsn / Exp codes
+ 		 */
++		if ((vport->fc_flag & FC_PT2PT) &&
++		    cmd == ELS_CMD_NVMEPRLI) {
++			switch (stat.un.b.lsRjtRsnCode) {
++			case LSRJT_UNABLE_TPC:
++			case LSRJT_INVALID_CMD:
++			case LSRJT_LOGICAL_ERR:
++			case LSRJT_CMD_UNSUPPORTED:
++				lpfc_printf_vlog(vport, KERN_WARNING, LOG_ELS,
++						 "0168 NVME PRLI LS_RJT "
++						 "reason %x port doesn't "
++						 "support NVME, disabling NVME\n",
++						 stat.un.b.lsRjtRsnCode);
++				retry = 0;
++				vport->fc_flag |= FC_PT2PT_NO_NVME;
++				goto out_retry;
++			}
++		}
+ 		switch (stat.un.b.lsRjtRsnCode) {
+ 		case LSRJT_UNABLE_TPC:
+ 			/* The driver has a VALID PLOGI but the rport has
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index 6afcb1426e357..e33f752318c19 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -2010,8 +2010,9 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
+ 			 * is configured try it.
+ 			 */
+ 			ndlp->nlp_fc4_type |= NLP_FC4_FCP;
+-			if ((vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
+-			    (vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
++			if ((!(vport->fc_flag & FC_PT2PT_NO_NVME)) &&
++			    (vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH ||
++			    vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
+ 				ndlp->nlp_fc4_type |= NLP_FC4_NVME;
+ 				/* We need to update the localport also */
+ 				lpfc_nvme_update_localport(vport);
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 4587127b67f7b..a50f870c5f725 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -7372,6 +7372,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
+ 	struct lpfc_vport *vport = phba->pport;
+ 	struct lpfc_dmabuf *mp;
+ 	struct lpfc_rqb *rqbp;
++	u32 flg;
+ 
+ 	/* Perform a PCI function reset to start from clean */
+ 	rc = lpfc_pci_function_reset(phba);
+@@ -7385,7 +7386,17 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
+ 	else {
+ 		spin_lock_irq(&phba->hbalock);
+ 		phba->sli.sli_flag |= LPFC_SLI_ACTIVE;
++		flg = phba->sli.sli_flag;
+ 		spin_unlock_irq(&phba->hbalock);
++		/* Allow a little time after setting SLI_ACTIVE for any polled
++		 * MBX commands to complete via BSG.
++		 */
++		for (i = 0; i < 50 && (flg & LPFC_SLI_MBOX_ACTIVE); i++) {
++			msleep(20);
++			spin_lock_irq(&phba->hbalock);
++			flg = phba->sli.sli_flag;
++			spin_unlock_irq(&phba->hbalock);
++		}
+ 	}
+ 
+ 	lpfc_sli4_dip(phba);
+@@ -8922,7 +8933,7 @@ lpfc_sli_issue_mbox_s4(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
+ 					"(%d):2541 Mailbox command x%x "
+ 					"(x%x/x%x) failure: "
+ 					"mqe_sta: x%x mcqe_sta: x%x/x%x "
+-					"Data: x%x x%x\n,",
++					"Data: x%x x%x\n",
+ 					mboxq->vport ? mboxq->vport->vpi : 0,
+ 					mboxq->u.mb.mbxCommand,
+ 					lpfc_sli_config_mbox_subsys_get(phba,
+@@ -8956,7 +8967,7 @@ lpfc_sli_issue_mbox_s4(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
+ 					"(%d):2597 Sync Mailbox command "
+ 					"x%x (x%x/x%x) failure: "
+ 					"mqe_sta: x%x mcqe_sta: x%x/x%x "
+-					"Data: x%x x%x\n,",
++					"Data: x%x x%x\n",
+ 					mboxq->vport ? mboxq->vport->vpi : 0,
+ 					mboxq->u.mb.mbxCommand,
+ 					lpfc_sli_config_mbox_subsys_get(phba,
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index c3bb58885033b..75ac4d86d9c4b 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -753,8 +753,13 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
+ 		res = -TMF_RESP_FUNC_FAILED;
+ 		/* Even TMF timed out, return direct. */
+ 		if (task->task_state_flags & SAS_TASK_STATE_ABORTED) {
++			struct pm8001_ccb_info *ccb = task->lldd_task;
++
+ 			pm8001_dbg(pm8001_ha, FAIL, "TMF task[%x]timeout.\n",
+ 				   tmf->tmf);
++
++			if (ccb)
++				ccb->task = NULL;
+ 			goto ex_err;
+ 		}
+ 
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index b22a8ab754faa..2a3ce4680734b 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -2133,9 +2133,9 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 		pm8001_dbg(pm8001_ha, FAIL,
+ 			   "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
+ 			   t, status, ts->resp, ts->stat);
++		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 		if (t->slow_task)
+ 			complete(&t->slow_task->completion);
+-		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+@@ -2726,9 +2726,9 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		pm8001_dbg(pm8001_ha, FAIL,
+ 			   "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
+ 			   t, status, ts->resp, ts->stat);
++		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 		if (t->slow_task)
+ 			complete(&t->slow_task->completion);
+-		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+ 		pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
+diff --git a/drivers/soc/aspeed/aspeed-lpc-ctrl.c b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
+index 040c7dc1d4792..71b555c715d2e 100644
+--- a/drivers/soc/aspeed/aspeed-lpc-ctrl.c
++++ b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
+@@ -251,10 +251,9 @@ static int aspeed_lpc_ctrl_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	lpc_ctrl->clk = devm_clk_get(dev, NULL);
+-	if (IS_ERR(lpc_ctrl->clk)) {
+-		dev_err(dev, "couldn't get clock\n");
+-		return PTR_ERR(lpc_ctrl->clk);
+-	}
++	if (IS_ERR(lpc_ctrl->clk))
++		return dev_err_probe(dev, PTR_ERR(lpc_ctrl->clk),
++				     "couldn't get clock\n");
+ 	rc = clk_prepare_enable(lpc_ctrl->clk);
+ 	if (rc) {
+ 		dev_err(dev, "couldn't enable clock\n");
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index 128461bd04bb9..58190135efb7d 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -2024,7 +2024,7 @@ static bool canon_copy_from_read_buf(struct tty_struct *tty,
+ 		return false;
+ 
+ 	canon_head = smp_load_acquire(&ldata->canon_head);
+-	n = min(*nr + 1, canon_head - ldata->read_tail);
++	n = min(*nr, canon_head - ldata->read_tail);
+ 
+ 	tail = ldata->read_tail & (N_TTY_BUF_SIZE - 1);
+ 	size = min_t(size_t, tail + n, N_TTY_BUF_SIZE);
+@@ -2046,10 +2046,8 @@ static bool canon_copy_from_read_buf(struct tty_struct *tty,
+ 		n += N_TTY_BUF_SIZE;
+ 	c = n + found;
+ 
+-	if (!found || read_buf(ldata, eol) != __DISABLED_CHAR) {
+-		c = min(*nr, c);
++	if (!found || read_buf(ldata, eol) != __DISABLED_CHAR)
+ 		n = c;
+-	}
+ 
+ 	n_tty_trace("%s: eol:%zu found:%d n:%zu c:%zu tail:%zu more:%zu\n",
+ 		    __func__, eol, found, n, c, tail, more);
+diff --git a/drivers/tty/serial/8250/8250_gsc.c b/drivers/tty/serial/8250/8250_gsc.c
+index 673cda3d011d0..948d0a1c6ae8e 100644
+--- a/drivers/tty/serial/8250/8250_gsc.c
++++ b/drivers/tty/serial/8250/8250_gsc.c
+@@ -26,7 +26,7 @@ static int __init serial_init_chip(struct parisc_device *dev)
+ 	unsigned long address;
+ 	int err;
+ 
+-#ifdef CONFIG_64BIT
++#if defined(CONFIG_64BIT) && defined(CONFIG_IOSAPIC)
+ 	if (!dev->irq && (dev->id.sversion == 0xad))
+ 		dev->irq = iosapic_serial_irq(dev);
+ #endif
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 10f020ab1186f..6b80dee17f49d 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5006,6 +5006,10 @@ static int put_file_data(struct send_ctx *sctx, u64 offset, u32 len)
+ 			lock_page(page);
+ 			if (!PageUptodate(page)) {
+ 				unlock_page(page);
++				btrfs_err(fs_info,
++			"send: IO error at offset %llu for inode %llu root %llu",
++					page_offset(page), sctx->cur_ino,
++					sctx->send_root->root_key.objectid);
+ 				put_page(page);
+ 				ret = -EIO;
+ 				break;
+diff --git a/fs/file.c b/fs/file.c
+index 9d02352fa18c3..79a76d04c7c33 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -817,28 +817,68 @@ void do_close_on_exec(struct files_struct *files)
+ 	spin_unlock(&files->file_lock);
+ }
+ 
+-static struct file *__fget_files(struct files_struct *files, unsigned int fd,
+-				 fmode_t mask, unsigned int refs)
++static inline struct file *__fget_files_rcu(struct files_struct *files,
++	unsigned int fd, fmode_t mask, unsigned int refs)
+ {
+-	struct file *file;
++	for (;;) {
++		struct file *file;
++		struct fdtable *fdt = rcu_dereference_raw(files->fdt);
++		struct file __rcu **fdentry;
+ 
+-	rcu_read_lock();
+-loop:
+-	file = fcheck_files(files, fd);
+-	if (file) {
+-		/* File object ref couldn't be taken.
+-		 * dup2() atomicity guarantee is the reason
+-		 * we loop to catch the new file (or NULL pointer)
++		if (unlikely(fd >= fdt->max_fds))
++			return NULL;
++
++		fdentry = fdt->fd + array_index_nospec(fd, fdt->max_fds);
++		file = rcu_dereference_raw(*fdentry);
++		if (unlikely(!file))
++			return NULL;
++
++		if (unlikely(file->f_mode & mask))
++			return NULL;
++
++		/*
++		 * Ok, we have a file pointer. However, because we do
++		 * this all locklessly under RCU, we may be racing with
++		 * that file being closed.
++		 *
++		 * Such a race can take two forms:
++		 *
++		 *  (a) the file ref already went down to zero,
++		 *      and get_file_rcu_many() fails. Just try
++		 *      again:
+ 		 */
+-		if (file->f_mode & mask)
+-			file = NULL;
+-		else if (!get_file_rcu_many(file, refs))
+-			goto loop;
+-		else if (__fcheck_files(files, fd) != file) {
++		if (unlikely(!get_file_rcu_many(file, refs)))
++			continue;
++
++		/*
++		 *  (b) the file table entry has changed under us.
++		 *       Note that we don't need to re-check the 'fdt->fd'
++		 *       pointer having changed, because it always goes
++		 *       hand-in-hand with 'fdt'.
++		 *
++		 * If so, we need to put our refs and try again.
++		 */
++		if (unlikely(rcu_dereference_raw(files->fdt) != fdt) ||
++		    unlikely(rcu_dereference_raw(*fdentry) != file)) {
+ 			fput_many(file, refs);
+-			goto loop;
++			continue;
+ 		}
++
++		/*
++		 * Ok, we have a ref to the file, and checked that it
++		 * still exists.
++		 */
++		return file;
+ 	}
++}
++
++static struct file *__fget_files(struct files_struct *files, unsigned int fd,
++				 fmode_t mask, unsigned int refs)
++{
++	struct file *file;
++
++	rcu_read_lock();
++	file = __fget_files_rcu(files, fd, mask, refs);
+ 	rcu_read_unlock();
+ 
+ 	return file;
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 682c7b45d8b71..2ad56ff4752c7 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1780,14 +1780,14 @@ no_open:
+ 	if (!res) {
+ 		inode = d_inode(dentry);
+ 		if ((lookup_flags & LOOKUP_DIRECTORY) && inode &&
+-		    !S_ISDIR(inode->i_mode))
++		    !(S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode)))
+ 			res = ERR_PTR(-ENOTDIR);
+ 		else if (inode && S_ISREG(inode->i_mode))
+ 			res = ERR_PTR(-EOPENSTALE);
+ 	} else if (!IS_ERR(res)) {
+ 		inode = d_inode(res);
+ 		if ((lookup_flags & LOOKUP_DIRECTORY) && inode &&
+-		    !S_ISDIR(inode->i_mode)) {
++		    !(S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))) {
+ 			dput(res);
+ 			res = ERR_PTR(-ENOTDIR);
+ 		} else if (inode && S_ISREG(inode->i_mode)) {
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 21addb78523d2..f27ecc2e490f2 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -195,6 +195,18 @@ bool nfs_check_cache_invalid(struct inode *inode, unsigned long flags)
+ }
+ EXPORT_SYMBOL_GPL(nfs_check_cache_invalid);
+ 
++#ifdef CONFIG_NFS_V4_2
++static bool nfs_has_xattr_cache(const struct nfs_inode *nfsi)
++{
++	return nfsi->xattr_cache != NULL;
++}
++#else
++static bool nfs_has_xattr_cache(const struct nfs_inode *nfsi)
++{
++	return false;
++}
++#endif
++
+ static void nfs_set_cache_invalid(struct inode *inode, unsigned long flags)
+ {
+ 	struct nfs_inode *nfsi = NFS_I(inode);
+@@ -210,6 +222,8 @@ static void nfs_set_cache_invalid(struct inode *inode, unsigned long flags)
+ 	} else if (flags & NFS_INO_REVAL_PAGECACHE)
+ 		flags |= NFS_INO_INVALID_CHANGE | NFS_INO_INVALID_SIZE;
+ 
++	if (!nfs_has_xattr_cache(nfsi))
++		flags &= ~NFS_INO_INVALID_XATTR;
+ 	if (inode->i_mapping->nrpages == 0)
+ 		flags &= ~(NFS_INO_INVALID_DATA|NFS_INO_DATA_INVAL_DEFER);
+ 	nfsi->cache_validity |= flags;
+@@ -807,12 +821,9 @@ int nfs_getattr(const struct path *path, struct kstat *stat,
+ 	}
+ 
+ 	/* Flush out writes to the server in order to update c/mtime.  */
+-	if ((request_mask & (STATX_CTIME|STATX_MTIME)) &&
+-			S_ISREG(inode->i_mode)) {
+-		err = filemap_write_and_wait(inode->i_mapping);
+-		if (err)
+-			goto out;
+-	}
++	if ((request_mask & (STATX_CTIME | STATX_MTIME)) &&
++	    S_ISREG(inode->i_mode))
++		filemap_write_and_wait(inode->i_mapping);
+ 
+ 	/*
+ 	 * We may force a getattr if the user cares about atime.
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 3931f60e421f7..ba98371e9d164 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -430,7 +430,8 @@ static void smaps_page_accumulate(struct mem_size_stats *mss,
+ }
+ 
+ static void smaps_account(struct mem_size_stats *mss, struct page *page,
+-		bool compound, bool young, bool dirty, bool locked)
++		bool compound, bool young, bool dirty, bool locked,
++		bool migration)
+ {
+ 	int i, nr = compound ? compound_nr(page) : 1;
+ 	unsigned long size = nr * PAGE_SIZE;
+@@ -457,8 +458,15 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
+ 	 * page_count(page) == 1 guarantees the page is mapped exactly once.
+ 	 * If any subpage of the compound page mapped with PTE it would elevate
+ 	 * page_count().
++	 *
++	 * The page_mapcount() is called to get a snapshot of the mapcount.
++	 * Without holding the page lock this snapshot can be slightly wrong as
++	 * we cannot always read the mapcount atomically.  It is not safe to
++	 * call page_mapcount() even with PTL held if the page is not mapped,
++	 * especially for migration entries.  Treat regular migration entries
++	 * as mapcount == 1.
+ 	 */
+-	if (page_count(page) == 1) {
++	if ((page_count(page) == 1) || migration) {
+ 		smaps_page_accumulate(mss, page, size, size << PSS_SHIFT, dirty,
+ 			locked, true);
+ 		return;
+@@ -495,6 +503,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
+ 	struct vm_area_struct *vma = walk->vma;
+ 	bool locked = !!(vma->vm_flags & VM_LOCKED);
+ 	struct page *page = NULL;
++	bool migration = false;
+ 
+ 	if (pte_present(*pte)) {
+ 		page = vm_normal_page(vma, addr, *pte);
+@@ -514,9 +523,10 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
+ 			} else {
+ 				mss->swap_pss += (u64)PAGE_SIZE << PSS_SHIFT;
+ 			}
+-		} else if (is_migration_entry(swpent))
++		} else if (is_migration_entry(swpent)) {
++			migration = true;
+ 			page = migration_entry_to_page(swpent);
+-		else if (is_device_private_entry(swpent))
++		} else if (is_device_private_entry(swpent))
+ 			page = device_private_entry_to_page(swpent);
+ 	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap
+ 							&& pte_none(*pte))) {
+@@ -530,7 +540,8 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
+ 	if (!page)
+ 		return;
+ 
+-	smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte), locked);
++	smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte),
++		      locked, migration);
+ }
+ 
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+@@ -541,6 +552,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
+ 	struct vm_area_struct *vma = walk->vma;
+ 	bool locked = !!(vma->vm_flags & VM_LOCKED);
+ 	struct page *page = NULL;
++	bool migration = false;
+ 
+ 	if (pmd_present(*pmd)) {
+ 		/* FOLL_DUMP will return -EFAULT on huge zero page */
+@@ -548,8 +560,10 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
+ 	} else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) {
+ 		swp_entry_t entry = pmd_to_swp_entry(*pmd);
+ 
+-		if (is_migration_entry(entry))
++		if (is_migration_entry(entry)) {
++			migration = true;
+ 			page = migration_entry_to_page(entry);
++		}
+ 	}
+ 	if (IS_ERR_OR_NULL(page))
+ 		return;
+@@ -561,7 +575,9 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
+ 		/* pass */;
+ 	else
+ 		mss->file_thp += HPAGE_PMD_SIZE;
+-	smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd), locked);
++
++	smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd),
++		      locked, migration);
+ }
+ #else
+ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
+@@ -1366,6 +1382,7 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
+ {
+ 	u64 frame = 0, flags = 0;
+ 	struct page *page = NULL;
++	bool migration = false;
+ 
+ 	if (pte_present(pte)) {
+ 		if (pm->show_pfn)
+@@ -1383,8 +1400,10 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
+ 			frame = swp_type(entry) |
+ 				(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
+ 		flags |= PM_SWAP;
+-		if (is_migration_entry(entry))
++		if (is_migration_entry(entry)) {
++			migration = true;
+ 			page = migration_entry_to_page(entry);
++		}
+ 
+ 		if (is_device_private_entry(entry))
+ 			page = device_private_entry_to_page(entry);
+@@ -1392,7 +1411,7 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
+ 
+ 	if (page && !PageAnon(page))
+ 		flags |= PM_FILE;
+-	if (page && page_mapcount(page) == 1)
++	if (page && !migration && page_mapcount(page) == 1)
+ 		flags |= PM_MMAP_EXCLUSIVE;
+ 	if (vma->vm_flags & VM_SOFTDIRTY)
+ 		flags |= PM_SOFT_DIRTY;
+@@ -1408,8 +1427,9 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ 	spinlock_t *ptl;
+ 	pte_t *pte, *orig_pte;
+ 	int err = 0;
+-
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
++	bool migration = false;
++
+ 	ptl = pmd_trans_huge_lock(pmdp, vma);
+ 	if (ptl) {
+ 		u64 flags = 0, frame = 0;
+@@ -1444,11 +1464,12 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ 			if (pmd_swp_soft_dirty(pmd))
+ 				flags |= PM_SOFT_DIRTY;
+ 			VM_BUG_ON(!is_pmd_migration_entry(pmd));
++			migration = is_migration_entry(entry);
+ 			page = migration_entry_to_page(entry);
+ 		}
+ #endif
+ 
+-		if (page && page_mapcount(page) == 1)
++		if (page && !migration && page_mapcount(page) == 1)
+ 			flags |= PM_MMAP_EXCLUSIVE;
+ 
+ 		for (; addr != end; addr += PAGE_SIZE) {
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index 4f13734637660..09fb8459bb5ce 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -692,9 +692,14 @@ int dquot_quota_sync(struct super_block *sb, int type)
+ 	/* This is not very clever (and fast) but currently I don't know about
+ 	 * any other simple way of getting quota data to disk and we must get
+ 	 * them there for userspace to be visible... */
+-	if (sb->s_op->sync_fs)
+-		sb->s_op->sync_fs(sb, 1);
+-	sync_blockdev(sb->s_bdev);
++	if (sb->s_op->sync_fs) {
++		ret = sb->s_op->sync_fs(sb, 1);
++		if (ret)
++			return ret;
++	}
++	ret = sync_blockdev(sb->s_bdev);
++	if (ret)
++		return ret;
+ 
+ 	/*
+ 	 * Now when everything is written we can discard the pagecache so
+diff --git a/fs/super.c b/fs/super.c
+index 20f1707807bbd..bae3fe80f852e 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -1667,11 +1667,9 @@ static void lockdep_sb_freeze_acquire(struct super_block *sb)
+ 		percpu_rwsem_acquire(sb->s_writers.rw_sem + level, 0, _THIS_IP_);
+ }
+ 
+-static void sb_freeze_unlock(struct super_block *sb)
++static void sb_freeze_unlock(struct super_block *sb, int level)
+ {
+-	int level;
+-
+-	for (level = SB_FREEZE_LEVELS - 1; level >= 0; level--)
++	for (level--; level >= 0; level--)
+ 		percpu_up_write(sb->s_writers.rw_sem + level);
+ }
+ 
+@@ -1742,7 +1740,14 @@ int freeze_super(struct super_block *sb)
+ 	sb_wait_write(sb, SB_FREEZE_PAGEFAULT);
+ 
+ 	/* All writers are done so after syncing there won't be dirty data */
+-	sync_filesystem(sb);
++	ret = sync_filesystem(sb);
++	if (ret) {
++		sb->s_writers.frozen = SB_UNFROZEN;
++		sb_freeze_unlock(sb, SB_FREEZE_PAGEFAULT);
++		wake_up(&sb->s_writers.wait_unfrozen);
++		deactivate_locked_super(sb);
++		return ret;
++	}
+ 
+ 	/* Now wait for internal filesystem counter */
+ 	sb->s_writers.frozen = SB_FREEZE_FS;
+@@ -1754,7 +1759,7 @@ int freeze_super(struct super_block *sb)
+ 			printk(KERN_ERR
+ 				"VFS:Filesystem freeze failed\n");
+ 			sb->s_writers.frozen = SB_UNFROZEN;
+-			sb_freeze_unlock(sb);
++			sb_freeze_unlock(sb, SB_FREEZE_FS);
+ 			wake_up(&sb->s_writers.wait_unfrozen);
+ 			deactivate_locked_super(sb);
+ 			return ret;
+@@ -1805,7 +1810,7 @@ static int thaw_super_locked(struct super_block *sb)
+ 	}
+ 
+ 	sb->s_writers.frozen = SB_UNFROZEN;
+-	sb_freeze_unlock(sb);
++	sb_freeze_unlock(sb, SB_FREEZE_FS);
+ out:
+ 	wake_up(&sb->s_writers.wait_unfrozen);
+ 	deactivate_locked_super(sb);
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index 4b975111b5361..1f467fb620fe1 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -197,7 +197,7 @@ struct obj_cgroup {
+ 	struct mem_cgroup *memcg;
+ 	atomic_t nr_charged_bytes;
+ 	union {
+-		struct list_head list;
++		struct list_head list; /* protected by objcg_lock */
+ 		struct rcu_head rcu;
+ 	};
+ };
+@@ -300,7 +300,8 @@ struct mem_cgroup {
+ 	int kmemcg_id;
+ 	enum memcg_kmem_state kmem_state;
+ 	struct obj_cgroup __rcu *objcg;
+-	struct list_head objcg_list; /* list of inherited objcgs */
++	/* list of inherited objcgs, protected by objcg_lock */
++	struct list_head objcg_list;
+ #endif
+ 
+ 	MEMCG_PADDING(_pad2_);
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index fe3155736d635..861f2480c4571 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2061,7 +2061,7 @@ struct net_device {
+ 	struct netdev_queue	*_tx ____cacheline_aligned_in_smp;
+ 	unsigned int		num_tx_queues;
+ 	unsigned int		real_num_tx_queues;
+-	struct Qdisc		*qdisc;
++	struct Qdisc __rcu	*qdisc;
+ 	unsigned int		tx_queue_len;
+ 	spinlock_t		tx_global_lock;
+ 
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index b85b26d9ccefe..f996d1f343bb7 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1544,7 +1544,6 @@ extern struct pid *cad_pid;
+ #define PF_MEMALLOC		0x00000800	/* Allocating memory */
+ #define PF_NPROC_EXCEEDED	0x00001000	/* set_user() noticed that RLIMIT_NPROC was exceeded */
+ #define PF_USED_MATH		0x00002000	/* If unset the fpu must be initialized before use */
+-#define PF_USED_ASYNC		0x00004000	/* Used async_schedule*(), used by module init */
+ #define PF_NOFREEZE		0x00008000	/* This thread should not be frozen */
+ #define PF_FROZEN		0x00010000	/* Frozen for system suspend */
+ #define PF_KSWAPD		0x00020000	/* I am kswapd */
+diff --git a/include/net/bond_3ad.h b/include/net/bond_3ad.h
+index c8696a230b7d9..1a28f299a4c61 100644
+--- a/include/net/bond_3ad.h
++++ b/include/net/bond_3ad.h
+@@ -262,7 +262,7 @@ struct ad_system {
+ struct ad_bond_info {
+ 	struct ad_system system;	/* 802.3ad system structure */
+ 	struct bond_3ad_stats stats;
+-	u32 agg_select_timer;		/* Timer to select aggregator after all adapter's hand shakes */
++	atomic_t agg_select_timer;		/* Timer to select aggregator after all adapter's hand shakes */
+ 	u16 aggregator_identifier;
+ };
+ 
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index bd1f396cc9c72..60601896d4747 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -390,17 +390,20 @@ static inline void txopt_put(struct ipv6_txoptions *opt)
+ 		kfree_rcu(opt, rcu);
+ }
+ 
++#if IS_ENABLED(CONFIG_IPV6)
+ struct ip6_flowlabel *__fl6_sock_lookup(struct sock *sk, __be32 label);
+ 
+ extern struct static_key_false_deferred ipv6_flowlabel_exclusive;
+ static inline struct ip6_flowlabel *fl6_sock_lookup(struct sock *sk,
+ 						    __be32 label)
+ {
+-	if (static_branch_unlikely(&ipv6_flowlabel_exclusive.key))
++	if (static_branch_unlikely(&ipv6_flowlabel_exclusive.key) &&
++	    READ_ONCE(sock_net(sk)->ipv6.flowlabel_has_excl))
+ 		return __fl6_sock_lookup(sk, label) ? : ERR_PTR(-ENOENT);
+ 
+ 	return NULL;
+ }
++#endif
+ 
+ struct ipv6_txoptions *fl6_merge_options(struct ipv6_txoptions *opt_space,
+ 					 struct ip6_flowlabel *fl,
+diff --git a/include/net/netns/ipv6.h b/include/net/netns/ipv6.h
+index 5ec054473d81a..1c0fbe3abf247 100644
+--- a/include/net/netns/ipv6.h
++++ b/include/net/netns/ipv6.h
+@@ -80,9 +80,10 @@ struct netns_ipv6 {
+ 	spinlock_t		fib6_gc_lock;
+ 	unsigned int		 ip6_rt_gc_expire;
+ 	unsigned long		 ip6_rt_last_gc;
++	unsigned char		flowlabel_has_excl;
+ #ifdef CONFIG_IPV6_MULTIPLE_TABLES
+-	unsigned int		fib6_rules_require_fldissect;
+ 	bool			fib6_has_custom_rules;
++	unsigned int		fib6_rules_require_fldissect;
+ #ifdef CONFIG_IPV6_SUBTREES
+ 	unsigned int		fib6_routes_require_src;
+ #endif
+diff --git a/include/uapi/linux/can/isotp.h b/include/uapi/linux/can/isotp.h
+index 7793b26aa154d..c55935b64ccc8 100644
+--- a/include/uapi/linux/can/isotp.h
++++ b/include/uapi/linux/can/isotp.h
+@@ -135,7 +135,7 @@ struct can_isotp_ll_options {
+ #define CAN_ISOTP_FORCE_RXSTMIN	0x100	/* ignore CFs depending on rx stmin */
+ #define CAN_ISOTP_RX_EXT_ADDR	0x200	/* different rx extended addressing */
+ #define CAN_ISOTP_WAIT_TX_DONE	0x400	/* wait for tx completion */
+-
++#define CAN_ISOTP_SF_BROADCAST	0x800	/* 1-to-N functional addressing */
+ 
+ /* default values */
+ 
+diff --git a/kernel/async.c b/kernel/async.c
+index 33258e6e20f83..1746cd65e271b 100644
+--- a/kernel/async.c
++++ b/kernel/async.c
+@@ -205,9 +205,6 @@ async_cookie_t async_schedule_node_domain(async_func_t func, void *data,
+ 	atomic_inc(&entry_count);
+ 	spin_unlock_irqrestore(&async_lock, flags);
+ 
+-	/* mark that this task has queued an async job, used by module init */
+-	current->flags |= PF_USED_ASYNC;
+-
+ 	/* schedule for execution */
+ 	queue_work_node(node, system_unbound_wq, &entry->work);
+ 
+diff --git a/kernel/fork.c b/kernel/fork.c
+index e465903abed9e..a78c0b02edd55 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2258,10 +2258,6 @@ static __latent_entropy struct task_struct *copy_process(
+ 		goto bad_fork_cancel_cgroup;
+ 	}
+ 
+-	/* past the last point of failure */
+-	if (pidfile)
+-		fd_install(pidfd, pidfile);
+-
+ 	init_task_pid_links(p);
+ 	if (likely(p->pid)) {
+ 		ptrace_init_task(p, (clone_flags & CLONE_PTRACE) || trace);
+@@ -2310,6 +2306,9 @@ static __latent_entropy struct task_struct *copy_process(
+ 	syscall_tracepoint_update(p);
+ 	write_unlock_irq(&tasklist_lock);
+ 
++	if (pidfile)
++		fd_install(pidfd, pidfile);
++
+ 	proc_fork_connector(p);
+ 	sched_post_fork(p, args);
+ 	cgroup_post_fork(p, args);
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 1f6a2f1226fa9..af4b35450556f 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -3387,7 +3387,7 @@ struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i)
+ 	u16 chain_hlock = chain_hlocks[chain->base + i];
+ 	unsigned int class_idx = chain_hlock_class_idx(chain_hlock);
+ 
+-	return lock_classes + class_idx - 1;
++	return lock_classes + class_idx;
+ }
+ 
+ /*
+@@ -3455,7 +3455,7 @@ static void print_chain_keys_chain(struct lock_chain *chain)
+ 		hlock_id = chain_hlocks[chain->base + i];
+ 		chain_key = print_chain_key_iteration(hlock_id, chain_key);
+ 
+-		print_lock_name(lock_classes + chain_hlock_class_idx(hlock_id) - 1);
++		print_lock_name(lock_classes + chain_hlock_class_idx(hlock_id));
+ 		printk("\n");
+ 	}
+ }
+diff --git a/kernel/module.c b/kernel/module.c
+index 185b2655bc206..5f4403198f04b 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -3714,12 +3714,6 @@ static noinline int do_init_module(struct module *mod)
+ 	}
+ 	freeinit->module_init = mod->init_layout.base;
+ 
+-	/*
+-	 * We want to find out whether @mod uses async during init.  Clear
+-	 * PF_USED_ASYNC.  async_schedule*() will set it.
+-	 */
+-	current->flags &= ~PF_USED_ASYNC;
+-
+ 	do_mod_ctors(mod);
+ 	/* Start the module */
+ 	if (mod->init != NULL)
+@@ -3745,22 +3739,13 @@ static noinline int do_init_module(struct module *mod)
+ 
+ 	/*
+ 	 * We need to finish all async code before the module init sequence
+-	 * is done.  This has potential to deadlock.  For example, a newly
+-	 * detected block device can trigger request_module() of the
+-	 * default iosched from async probing task.  Once userland helper
+-	 * reaches here, async_synchronize_full() will wait on the async
+-	 * task waiting on request_module() and deadlock.
+-	 *
+-	 * This deadlock is avoided by perfomring async_synchronize_full()
+-	 * iff module init queued any async jobs.  This isn't a full
+-	 * solution as it will deadlock the same if module loading from
+-	 * async jobs nests more than once; however, due to the various
+-	 * constraints, this hack seems to be the best option for now.
+-	 * Please refer to the following thread for details.
++	 * is done. This has potential to deadlock if synchronous module
++	 * loading is requested from async (which is not allowed!).
+ 	 *
+-	 * http://thread.gmane.org/gmane.linux.kernel/1420814
++	 * See commit 0fdff3ec6d87 ("async, kmod: warn on synchronous
++	 * request_module() from async workers") for more details.
+ 	 */
+-	if (!mod->async_probe_requested && (current->flags & PF_USED_ASYNC))
++	if (!mod->async_probe_requested)
+ 		async_synchronize_full();
+ 
+ 	ftrace_free_mem(mod, mod->init_layout.base, mod->init_layout.base +
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index 6ed153f226b39..244f32e98360f 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -628,7 +628,7 @@ static void rcu_read_unlock_special(struct task_struct *t)
+ 			set_tsk_need_resched(current);
+ 			set_preempt_need_resched();
+ 			if (IS_ENABLED(CONFIG_IRQ_WORK) && irqs_were_disabled &&
+-			    !rdp->defer_qs_iw_pending && exp) {
++			    !rdp->defer_qs_iw_pending && exp && cpu_online(rdp->cpu)) {
+ 				// Get scheduler to re-evaluate and call hooks.
+ 				// If !IRQ_WORK, FQS scan will eventually IPI.
+ 				init_irq_work(&rdp->defer_qs_iw,
+diff --git a/kernel/stackleak.c b/kernel/stackleak.c
+index ce161a8e8d975..dd07239ddff9f 100644
+--- a/kernel/stackleak.c
++++ b/kernel/stackleak.c
+@@ -48,7 +48,7 @@ int stack_erasing_sysctl(struct ctl_table *table, int write,
+ #define skip_erasing()	false
+ #endif /* CONFIG_STACKLEAK_RUNTIME_DISABLE */
+ 
+-asmlinkage void notrace stackleak_erase(void)
++asmlinkage void noinstr stackleak_erase(void)
+ {
+ 	/* It would be nice not to have 'kstack_ptr' and 'boundary' on stack */
+ 	unsigned long kstack_ptr = current->lowest_stack;
+@@ -102,9 +102,8 @@ asmlinkage void notrace stackleak_erase(void)
+ 	/* Reset the 'lowest_stack' value for the next syscall */
+ 	current->lowest_stack = current_top_of_stack() - THREAD_SIZE/64;
+ }
+-NOKPROBE_SYMBOL(stackleak_erase);
+ 
+-void __used __no_caller_saved_registers notrace stackleak_track_stack(void)
++void __used __no_caller_saved_registers noinstr stackleak_track_stack(void)
+ {
+ 	unsigned long sp = current_stack_pointer;
+ 
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index a0729213f37be..f9fad789321b0 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -250,6 +250,10 @@ __setup("trace_clock=", set_trace_boot_clock);
+ 
+ static int __init set_tracepoint_printk(char *str)
+ {
++	/* Ignore the "tp_printk_stop_on_boot" param */
++	if (*str == '_')
++		return 0;
++
+ 	if ((strcmp(str, "=0") != 0 && strcmp(str, "=off") != 0))
+ 		tracepoint_printk = 1;
+ 	return 1;
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index b364231b5fc8c..1b0a349fbcd92 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -407,6 +407,7 @@ static size_t copy_page_to_iter_pipe(struct page *page, size_t offset, size_t by
+ 		return 0;
+ 
+ 	buf->ops = &page_cache_pipe_buf_ops;
++	buf->flags = 0;
+ 	get_page(page);
+ 	buf->page = page;
+ 	buf->offset = offset;
+@@ -543,6 +544,7 @@ static size_t push_pipe(struct iov_iter *i, size_t size,
+ 			break;
+ 
+ 		buf->ops = &default_pipe_buf_ops;
++		buf->flags = 0;
+ 		buf->page = page;
+ 		buf->offset = 0;
+ 		buf->len = min_t(ssize_t, left, PAGE_SIZE);
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 4bb2a4c593f73..dbe07fef26828 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -250,7 +250,7 @@ struct cgroup_subsys_state *vmpressure_to_css(struct vmpressure *vmpr)
+ }
+ 
+ #ifdef CONFIG_MEMCG_KMEM
+-extern spinlock_t css_set_lock;
++static DEFINE_SPINLOCK(objcg_lock);
+ 
+ static void obj_cgroup_release(struct percpu_ref *ref)
+ {
+@@ -284,13 +284,13 @@ static void obj_cgroup_release(struct percpu_ref *ref)
+ 	WARN_ON_ONCE(nr_bytes & (PAGE_SIZE - 1));
+ 	nr_pages = nr_bytes >> PAGE_SHIFT;
+ 
+-	spin_lock_irqsave(&css_set_lock, flags);
++	spin_lock_irqsave(&objcg_lock, flags);
+ 	memcg = obj_cgroup_memcg(objcg);
+ 	if (nr_pages)
+ 		__memcg_kmem_uncharge(memcg, nr_pages);
+ 	list_del(&objcg->list);
+ 	mem_cgroup_put(memcg);
+-	spin_unlock_irqrestore(&css_set_lock, flags);
++	spin_unlock_irqrestore(&objcg_lock, flags);
+ 
+ 	percpu_ref_exit(ref);
+ 	kfree_rcu(objcg, rcu);
+@@ -322,7 +322,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
+ 
+ 	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
+ 
+-	spin_lock_irq(&css_set_lock);
++	spin_lock_irq(&objcg_lock);
+ 
+ 	/* Move active objcg to the parent's list */
+ 	xchg(&objcg->memcg, parent);
+@@ -337,7 +337,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
+ 	}
+ 	list_splice(&memcg->objcg_list, &parent->objcg_list);
+ 
+-	spin_unlock_irq(&css_set_lock);
++	spin_unlock_irq(&objcg_lock);
+ 
+ 	percpu_ref_kill(&objcg->refcnt);
+ }
+diff --git a/mm/mprotect.c b/mm/mprotect.c
+index 56c02beb60414..7ea0aee0c08d9 100644
+--- a/mm/mprotect.c
++++ b/mm/mprotect.c
+@@ -94,7 +94,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
+ 
+ 				/* Also skip shared copy-on-write pages */
+ 				if (is_cow_mapping(vma->vm_flags) &&
+-				    page_mapcount(page) != 1)
++				    page_count(page) != 1)
+ 					continue;
+ 
+ 				/*
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 5e84dce5ff7ae..23bd26057a828 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -77,6 +77,7 @@ static void ax25_kill_by_device(struct net_device *dev)
+ {
+ 	ax25_dev *ax25_dev;
+ 	ax25_cb *s;
++	struct sock *sk;
+ 
+ 	if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL)
+ 		return;
+@@ -85,13 +86,15 @@ static void ax25_kill_by_device(struct net_device *dev)
+ again:
+ 	ax25_for_each(s, &ax25_list) {
+ 		if (s->ax25_dev == ax25_dev) {
++			sk = s->sk;
++			sock_hold(sk);
+ 			spin_unlock_bh(&ax25_list_lock);
+-			lock_sock(s->sk);
++			lock_sock(sk);
+ 			s->ax25_dev = NULL;
+-			release_sock(s->sk);
++			release_sock(sk);
+ 			ax25_disconnect(s, ENETUNREACH);
+ 			spin_lock_bh(&ax25_list_lock);
+-
++			sock_put(sk);
+ 			/* The entry could have been deleted from the
+ 			 * list meanwhile and thus the next pointer is
+ 			 * no longer valid.  Play it safe and restart
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 37db4d232313d..d0581dc6a65fd 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -888,6 +888,16 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 		goto err_out_drop;
+ 	}
+ 
++	/* take care of a potential SF_DL ESC offset for TX_DL > 8 */
++	off = (so->tx.ll_dl > CAN_MAX_DLEN) ? 1 : 0;
++
++	/* does the given data fit into a single frame for SF_BROADCAST? */
++	if ((so->opt.flags & CAN_ISOTP_SF_BROADCAST) &&
++	    (size > so->tx.ll_dl - SF_PCI_SZ4 - ae - off)) {
++		err = -EINVAL;
++		goto err_out_drop;
++	}
++
+ 	err = memcpy_from_msg(so->tx.buf, msg, size);
+ 	if (err < 0)
+ 		goto err_out_drop;
+@@ -915,9 +925,6 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	cf = (struct canfd_frame *)skb->data;
+ 	skb_put_zero(skb, so->ll.mtu);
+ 
+-	/* take care of a potential SF_DL ESC offset for TX_DL > 8 */
+-	off = (so->tx.ll_dl > CAN_MAX_DLEN) ? 1 : 0;
+-
+ 	/* check for single frame transmission depending on TX_DL */
+ 	if (size <= so->tx.ll_dl - SF_PCI_SZ4 - ae - off) {
+ 		/* The message size generally fits into a SingleFrame - good.
+@@ -1057,7 +1064,7 @@ static int isotp_release(struct socket *sock)
+ 	lock_sock(sk);
+ 
+ 	/* remove current filters & unregister */
+-	if (so->bound) {
++	if (so->bound && (!(so->opt.flags & CAN_ISOTP_SF_BROADCAST))) {
+ 		if (so->ifindex) {
+ 			struct net_device *dev;
+ 
+@@ -1097,15 +1104,12 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 	struct net_device *dev;
+ 	int err = 0;
+ 	int notify_enetdown = 0;
++	int do_rx_reg = 1;
+ 
+ 	if (len < ISOTP_MIN_NAMELEN)
+ 		return -EINVAL;
+ 
+-	if (addr->can_addr.tp.rx_id == addr->can_addr.tp.tx_id)
+-		return -EADDRNOTAVAIL;
+-
+-	if ((addr->can_addr.tp.rx_id | addr->can_addr.tp.tx_id) &
+-	    (CAN_ERR_FLAG | CAN_RTR_FLAG))
++	if (addr->can_addr.tp.tx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG))
+ 		return -EADDRNOTAVAIL;
+ 
+ 	if (!addr->can_ifindex)
+@@ -1113,6 +1117,23 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 
+ 	lock_sock(sk);
+ 
++	/* do not register frame reception for functional addressing */
++	if (so->opt.flags & CAN_ISOTP_SF_BROADCAST)
++		do_rx_reg = 0;
++
++	/* do not validate rx address for functional addressing */
++	if (do_rx_reg) {
++		if (addr->can_addr.tp.rx_id == addr->can_addr.tp.tx_id) {
++			err = -EADDRNOTAVAIL;
++			goto out;
++		}
++
++		if (addr->can_addr.tp.rx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG)) {
++			err = -EADDRNOTAVAIL;
++			goto out;
++		}
++	}
++
+ 	if (so->bound && addr->can_ifindex == so->ifindex &&
+ 	    addr->can_addr.tp.rx_id == so->rxid &&
+ 	    addr->can_addr.tp.tx_id == so->txid)
+@@ -1138,13 +1159,14 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 
+ 	ifindex = dev->ifindex;
+ 
+-	can_rx_register(net, dev, addr->can_addr.tp.rx_id,
+-			SINGLE_MASK(addr->can_addr.tp.rx_id), isotp_rcv, sk,
+-			"isotp", sk);
++	if (do_rx_reg)
++		can_rx_register(net, dev, addr->can_addr.tp.rx_id,
++				SINGLE_MASK(addr->can_addr.tp.rx_id),
++				isotp_rcv, sk, "isotp", sk);
+ 
+ 	dev_put(dev);
+ 
+-	if (so->bound) {
++	if (so->bound && do_rx_reg) {
+ 		/* unregister old filter */
+ 		if (so->ifindex) {
+ 			dev = dev_get_by_index(net, so->ifindex);
+@@ -1193,16 +1215,13 @@ static int isotp_getname(struct socket *sock, struct sockaddr *uaddr, int peer)
+ 	return ISOTP_MIN_NAMELEN;
+ }
+ 
+-static int isotp_setsockopt(struct socket *sock, int level, int optname,
++static int isotp_setsockopt_locked(struct socket *sock, int level, int optname,
+ 			    sockptr_t optval, unsigned int optlen)
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct isotp_sock *so = isotp_sk(sk);
+ 	int ret = 0;
+ 
+-	if (level != SOL_CAN_ISOTP)
+-		return -EINVAL;
+-
+ 	if (so->bound)
+ 		return -EISCONN;
+ 
+@@ -1277,6 +1296,22 @@ static int isotp_setsockopt(struct socket *sock, int level, int optname,
+ 	return ret;
+ }
+ 
++static int isotp_setsockopt(struct socket *sock, int level, int optname,
++			    sockptr_t optval, unsigned int optlen)
++
++{
++	struct sock *sk = sock->sk;
++	int ret;
++
++	if (level != SOL_CAN_ISOTP)
++		return -EINVAL;
++
++	lock_sock(sk);
++	ret = isotp_setsockopt_locked(sock, level, optname, optval, optlen);
++	release_sock(sk);
++	return ret;
++}
++
+ static int isotp_getsockopt(struct socket *sock, int level, int optname,
+ 			    char __user *optval, int __user *optlen)
+ {
+@@ -1344,7 +1379,7 @@ static void isotp_notify(struct isotp_sock *so, unsigned long msg,
+ 	case NETDEV_UNREGISTER:
+ 		lock_sock(sk);
+ 		/* remove current filters & unregister */
+-		if (so->bound)
++		if (so->bound && (!(so->opt.flags & CAN_ISOTP_SF_BROADCAST)))
+ 			can_rx_unregister(dev_net(dev), dev, so->rxid,
+ 					  SINGLE_MASK(so->rxid),
+ 					  isotp_rcv, sk);
+diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
+index db65ce62b625a..ed9dd17f9348c 100644
+--- a/net/core/drop_monitor.c
++++ b/net/core/drop_monitor.c
+@@ -280,13 +280,17 @@ static void trace_napi_poll_hit(void *ignore, struct napi_struct *napi,
+ 
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(new_stat, &hw_stats_list, list) {
++		struct net_device *dev;
++
+ 		/*
+ 		 * only add a note to our monitor buffer if:
+ 		 * 1) this is the dev we received on
+ 		 * 2) its after the last_rx delta
+ 		 * 3) our rx_dropped count has gone up
+ 		 */
+-		if ((new_stat->dev == napi->dev)  &&
++		/* Paired with WRITE_ONCE() in dropmon_net_event() */
++		dev = READ_ONCE(new_stat->dev);
++		if ((dev == napi->dev)  &&
+ 		    (time_after(jiffies, new_stat->last_rx + dm_hw_check_delta)) &&
+ 		    (napi->dev->stats.rx_dropped != new_stat->last_drop_val)) {
+ 			trace_drop_common(NULL, NULL);
+@@ -1574,7 +1578,10 @@ static int dropmon_net_event(struct notifier_block *ev_block,
+ 		mutex_lock(&net_dm_mutex);
+ 		list_for_each_entry_safe(new_stat, tmp, &hw_stats_list, list) {
+ 			if (new_stat->dev == dev) {
+-				new_stat->dev = NULL;
++
++				/* Paired with READ_ONCE() in trace_napi_poll_hit() */
++				WRITE_ONCE(new_stat->dev, NULL);
++
+ 				if (trace_state == TRACE_OFF) {
+ 					list_del_rcu(&new_stat->list);
+ 					kfree_rcu(new_stat, rcu);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 373564bf57acb..9ff6d4160daba 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1705,6 +1705,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
+ {
+ 	struct ifinfomsg *ifm;
+ 	struct nlmsghdr *nlh;
++	struct Qdisc *qdisc;
+ 
+ 	ASSERT_RTNL();
+ 	nlh = nlmsg_put(skb, pid, seq, type, sizeof(*ifm), flags);
+@@ -1722,6 +1723,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
+ 	if (tgt_netnsid >= 0 && nla_put_s32(skb, IFLA_TARGET_NETNSID, tgt_netnsid))
+ 		goto nla_put_failure;
+ 
++	qdisc = rtnl_dereference(dev->qdisc);
+ 	if (nla_put_string(skb, IFLA_IFNAME, dev->name) ||
+ 	    nla_put_u32(skb, IFLA_TXQLEN, dev->tx_queue_len) ||
+ 	    nla_put_u8(skb, IFLA_OPERSTATE,
+@@ -1740,8 +1742,8 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
+ #endif
+ 	    put_master_ifindex(skb, dev) ||
+ 	    nla_put_u8(skb, IFLA_CARRIER, netif_carrier_ok(dev)) ||
+-	    (dev->qdisc &&
+-	     nla_put_string(skb, IFLA_QDISC, dev->qdisc->ops->id)) ||
++	    (qdisc &&
++	     nla_put_string(skb, IFLA_QDISC, qdisc->ops->id)) ||
+ 	    nla_put_ifalias(skb, dev) ||
+ 	    nla_put_u32(skb, IFLA_CARRIER_CHANGES,
+ 			atomic_read(&dev->carrier_up_count) +
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index a5722905456c2..323cb231cb580 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -172,16 +172,23 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
+ 	struct sock *sk = NULL;
+ 	struct inet_sock *isk;
+ 	struct hlist_nulls_node *hnode;
+-	int dif = skb->dev->ifindex;
++	int dif, sdif;
+ 
+ 	if (skb->protocol == htons(ETH_P_IP)) {
++		dif = inet_iif(skb);
++		sdif = inet_sdif(skb);
+ 		pr_debug("try to find: num = %d, daddr = %pI4, dif = %d\n",
+ 			 (int)ident, &ip_hdr(skb)->daddr, dif);
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	} else if (skb->protocol == htons(ETH_P_IPV6)) {
++		dif = inet6_iif(skb);
++		sdif = inet6_sdif(skb);
+ 		pr_debug("try to find: num = %d, daddr = %pI6c, dif = %d\n",
+ 			 (int)ident, &ipv6_hdr(skb)->daddr, dif);
+ #endif
++	} else {
++		pr_err("ping: protocol(%x) is not supported\n", ntohs(skb->protocol));
++		return NULL;
+ 	}
+ 
+ 	read_lock_bh(&ping_table.lock);
+@@ -221,7 +228,7 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
+ 		}
+ 
+ 		if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif &&
+-		    sk->sk_bound_dev_if != inet_sdif(skb))
++		    sk->sk_bound_dev_if != sdif)
+ 			continue;
+ 
+ 		sock_hold(sk);
+diff --git a/net/ipv6/ip6_flowlabel.c b/net/ipv6/ip6_flowlabel.c
+index aa673a6a7e432..ceb85c67ce395 100644
+--- a/net/ipv6/ip6_flowlabel.c
++++ b/net/ipv6/ip6_flowlabel.c
+@@ -450,8 +450,10 @@ fl_create(struct net *net, struct sock *sk, struct in6_flowlabel_req *freq,
+ 		err = -EINVAL;
+ 		goto done;
+ 	}
+-	if (fl_shared_exclusive(fl) || fl->opt)
++	if (fl_shared_exclusive(fl) || fl->opt) {
++		WRITE_ONCE(sock_net(sk)->ipv6.flowlabel_has_excl, 1);
+ 		static_branch_deferred_inc(&ipv6_flowlabel_exclusive);
++	}
+ 	return fl;
+ 
+ done:
+diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c
+index 810cca24b3990..7626f3e1c70a7 100644
+--- a/net/netfilter/nf_conntrack_proto_sctp.c
++++ b/net/netfilter/nf_conntrack_proto_sctp.c
+@@ -489,6 +489,15 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
+ 			pr_debug("Setting vtag %x for dir %d\n",
+ 				 ih->init_tag, !dir);
+ 			ct->proto.sctp.vtag[!dir] = ih->init_tag;
++
++			/* don't renew timeout on init retransmit so
++			 * port reuse by client or NAT middlebox cannot
++			 * keep entry alive indefinitely (incl. nat info).
++			 */
++			if (new_state == SCTP_CONNTRACK_CLOSED &&
++			    old_state == SCTP_CONNTRACK_CLOSED &&
++			    nf_ct_is_confirmed(ct))
++				ignore = true;
+ 		}
+ 
+ 		ct->proto.sctp.state = new_state;
+diff --git a/net/netfilter/nft_synproxy.c b/net/netfilter/nft_synproxy.c
+index 4fda8b3f17626..59c4dfaf2ea1f 100644
+--- a/net/netfilter/nft_synproxy.c
++++ b/net/netfilter/nft_synproxy.c
+@@ -191,8 +191,10 @@ static int nft_synproxy_do_init(const struct nft_ctx *ctx,
+ 		if (err)
+ 			goto nf_ct_failure;
+ 		err = nf_synproxy_ipv6_init(snet, ctx->net);
+-		if (err)
++		if (err) {
++			nf_synproxy_ipv4_fini(snet, ctx->net);
+ 			goto nf_ct_failure;
++		}
+ 		break;
+ 	}
+ 
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index f613299ca7f0a..7b29aa1a3ce9a 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -680,15 +680,24 @@ int tcf_action_exec(struct sk_buff *skb, struct tc_action **actions,
+ restart_act_graph:
+ 	for (i = 0; i < nr_actions; i++) {
+ 		const struct tc_action *a = actions[i];
++		int repeat_ttl;
+ 
+ 		if (jmp_prgcnt > 0) {
+ 			jmp_prgcnt -= 1;
+ 			continue;
+ 		}
++
++		repeat_ttl = 32;
+ repeat:
+ 		ret = a->ops->act(skb, a, res);
+-		if (ret == TC_ACT_REPEAT)
+-			goto repeat;	/* we need a ttl - JHS */
++
++		if (unlikely(ret == TC_ACT_REPEAT)) {
++			if (--repeat_ttl != 0)
++				goto repeat;
++			/* suspicious opcode, stop pipeline */
++			net_warn_ratelimited("TC_ACT_REPEAT abuse ?\n");
++			return TC_ACT_OK;
++		}
+ 
+ 		if (TC_ACT_EXT_CMP(ret, TC_ACT_JUMP)) {
+ 			jmp_prgcnt = ret & TCA_ACT_MAX_PRIO_MASK;
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 7993a692c7fda..9a789a057a741 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1045,7 +1045,7 @@ static int __tcf_qdisc_find(struct net *net, struct Qdisc **q,
+ 
+ 	/* Find qdisc */
+ 	if (!*parent) {
+-		*q = dev->qdisc;
++		*q = rcu_dereference(dev->qdisc);
+ 		*parent = (*q)->handle;
+ 	} else {
+ 		*q = qdisc_lookup_rcu(dev, TC_H_MAJ(*parent));
+@@ -2591,7 +2591,7 @@ static int tc_dump_tfilter(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 		parent = tcm->tcm_parent;
+ 		if (!parent)
+-			q = dev->qdisc;
++			q = rtnl_dereference(dev->qdisc);
+ 		else
+ 			q = qdisc_lookup(dev, TC_H_MAJ(tcm->tcm_parent));
+ 		if (!q)
+@@ -2977,7 +2977,7 @@ static int tc_dump_chain(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 		parent = tcm->tcm_parent;
+ 		if (!parent) {
+-			q = dev->qdisc;
++			q = rtnl_dereference(dev->qdisc);
+ 			parent = q->handle;
+ 		} else {
+ 			q = qdisc_lookup(dev, TC_H_MAJ(tcm->tcm_parent));
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 6758968e79327..6e18aa4177828 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -301,7 +301,7 @@ struct Qdisc *qdisc_lookup(struct net_device *dev, u32 handle)
+ 
+ 	if (!handle)
+ 		return NULL;
+-	q = qdisc_match_from_root(dev->qdisc, handle);
++	q = qdisc_match_from_root(rtnl_dereference(dev->qdisc), handle);
+ 	if (q)
+ 		goto out;
+ 
+@@ -320,7 +320,7 @@ struct Qdisc *qdisc_lookup_rcu(struct net_device *dev, u32 handle)
+ 
+ 	if (!handle)
+ 		return NULL;
+-	q = qdisc_match_from_root(dev->qdisc, handle);
++	q = qdisc_match_from_root(rcu_dereference(dev->qdisc), handle);
+ 	if (q)
+ 		goto out;
+ 
+@@ -1082,10 +1082,10 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ skip:
+ 		if (!ingress) {
+ 			notify_and_destroy(net, skb, n, classid,
+-					   dev->qdisc, new);
++					   rtnl_dereference(dev->qdisc), new);
+ 			if (new && !new->ops->attach)
+ 				qdisc_refcount_inc(new);
+-			dev->qdisc = new ? : &noop_qdisc;
++			rcu_assign_pointer(dev->qdisc, new ? : &noop_qdisc);
+ 
+ 			if (new && new->ops->attach)
+ 				new->ops->attach(new);
+@@ -1460,7 +1460,7 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 				q = dev_ingress_queue(dev)->qdisc_sleeping;
+ 			}
+ 		} else {
+-			q = dev->qdisc;
++			q = rtnl_dereference(dev->qdisc);
+ 		}
+ 		if (!q) {
+ 			NL_SET_ERR_MSG(extack, "Cannot find specified qdisc on specified device");
+@@ -1549,7 +1549,7 @@ replay:
+ 				q = dev_ingress_queue(dev)->qdisc_sleeping;
+ 			}
+ 		} else {
+-			q = dev->qdisc;
++			q = rtnl_dereference(dev->qdisc);
+ 		}
+ 
+ 		/* It may be default qdisc, ignore it */
+@@ -1771,7 +1771,8 @@ static int tc_dump_qdisc(struct sk_buff *skb, struct netlink_callback *cb)
+ 			s_q_idx = 0;
+ 		q_idx = 0;
+ 
+-		if (tc_dump_qdisc_root(dev->qdisc, skb, cb, &q_idx, s_q_idx,
++		if (tc_dump_qdisc_root(rtnl_dereference(dev->qdisc),
++				       skb, cb, &q_idx, s_q_idx,
+ 				       true, tca[TCA_DUMP_INVISIBLE]) < 0)
+ 			goto done;
+ 
+@@ -2047,7 +2048,7 @@ static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n,
+ 		} else if (qid1) {
+ 			qid = qid1;
+ 		} else if (qid == 0)
+-			qid = dev->qdisc->handle;
++			qid = rtnl_dereference(dev->qdisc)->handle;
+ 
+ 		/* Now qid is genuine qdisc handle consistent
+ 		 * both with parent and child.
+@@ -2058,7 +2059,7 @@ static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n,
+ 			portid = TC_H_MAKE(qid, portid);
+ 	} else {
+ 		if (qid == 0)
+-			qid = dev->qdisc->handle;
++			qid = rtnl_dereference(dev->qdisc)->handle;
+ 	}
+ 
+ 	/* OK. Locate qdisc */
+@@ -2219,7 +2220,8 @@ static int tc_dump_tclass(struct sk_buff *skb, struct netlink_callback *cb)
+ 	s_t = cb->args[0];
+ 	t = 0;
+ 
+-	if (tc_dump_tclass_root(dev->qdisc, skb, tcm, cb, &t, s_t, true) < 0)
++	if (tc_dump_tclass_root(rtnl_dereference(dev->qdisc),
++				skb, tcm, cb, &t, s_t, true) < 0)
+ 		goto done;
+ 
+ 	dev_queue = dev_ingress_queue(dev);
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index b5005abc84ec2..5d5391adb667c 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -1088,30 +1088,33 @@ static void attach_default_qdiscs(struct net_device *dev)
+ 	if (!netif_is_multiqueue(dev) ||
+ 	    dev->priv_flags & IFF_NO_QUEUE) {
+ 		netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL);
+-		dev->qdisc = txq->qdisc_sleeping;
+-		qdisc_refcount_inc(dev->qdisc);
++		qdisc = txq->qdisc_sleeping;
++		rcu_assign_pointer(dev->qdisc, qdisc);
++		qdisc_refcount_inc(qdisc);
+ 	} else {
+ 		qdisc = qdisc_create_dflt(txq, &mq_qdisc_ops, TC_H_ROOT, NULL);
+ 		if (qdisc) {
+-			dev->qdisc = qdisc;
++			rcu_assign_pointer(dev->qdisc, qdisc);
+ 			qdisc->ops->attach(qdisc);
+ 		}
+ 	}
++	qdisc = rtnl_dereference(dev->qdisc);
+ 
+ 	/* Detect default qdisc setup/init failed and fallback to "noqueue" */
+-	if (dev->qdisc == &noop_qdisc) {
++	if (qdisc == &noop_qdisc) {
+ 		netdev_warn(dev, "default qdisc (%s) fail, fallback to %s\n",
+ 			    default_qdisc_ops->id, noqueue_qdisc_ops.id);
+ 		dev->priv_flags |= IFF_NO_QUEUE;
+ 		netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL);
+-		dev->qdisc = txq->qdisc_sleeping;
+-		qdisc_refcount_inc(dev->qdisc);
++		qdisc = txq->qdisc_sleeping;
++		rcu_assign_pointer(dev->qdisc, qdisc);
++		qdisc_refcount_inc(qdisc);
+ 		dev->priv_flags ^= IFF_NO_QUEUE;
+ 	}
+ 
+ #ifdef CONFIG_NET_SCHED
+-	if (dev->qdisc != &noop_qdisc)
+-		qdisc_hash_add(dev->qdisc, false);
++	if (qdisc != &noop_qdisc)
++		qdisc_hash_add(qdisc, false);
+ #endif
+ }
+ 
+@@ -1141,7 +1144,7 @@ void dev_activate(struct net_device *dev)
+ 	 * and noqueue_qdisc for virtual interfaces
+ 	 */
+ 
+-	if (dev->qdisc == &noop_qdisc)
++	if (rtnl_dereference(dev->qdisc) == &noop_qdisc)
+ 		attach_default_qdiscs(dev);
+ 
+ 	if (!netif_carrier_ok(dev))
+@@ -1306,7 +1309,7 @@ static int qdisc_change_tx_queue_len(struct net_device *dev,
+ void dev_qdisc_change_real_num_tx(struct net_device *dev,
+ 				  unsigned int new_real_tx)
+ {
+-	struct Qdisc *qdisc = dev->qdisc;
++	struct Qdisc *qdisc = rtnl_dereference(dev->qdisc);
+ 
+ 	if (qdisc->ops->change_real_num_tx)
+ 		qdisc->ops->change_real_num_tx(qdisc, new_real_tx);
+@@ -1346,7 +1349,7 @@ static void dev_init_scheduler_queue(struct net_device *dev,
+ 
+ void dev_init_scheduler(struct net_device *dev)
+ {
+-	dev->qdisc = &noop_qdisc;
++	rcu_assign_pointer(dev->qdisc, &noop_qdisc);
+ 	netdev_for_each_tx_queue(dev, dev_init_scheduler_queue, &noop_qdisc);
+ 	if (dev_ingress_queue(dev))
+ 		dev_init_scheduler_queue(dev, dev_ingress_queue(dev), &noop_qdisc);
+@@ -1374,8 +1377,8 @@ void dev_shutdown(struct net_device *dev)
+ 	netdev_for_each_tx_queue(dev, shutdown_scheduler_queue, &noop_qdisc);
+ 	if (dev_ingress_queue(dev))
+ 		shutdown_scheduler_queue(dev, dev_ingress_queue(dev), &noop_qdisc);
+-	qdisc_put(dev->qdisc);
+-	dev->qdisc = &noop_qdisc;
++	qdisc_put(rtnl_dereference(dev->qdisc));
++	rcu_assign_pointer(dev->qdisc, &noop_qdisc);
+ 
+ 	WARN_ON(timer_pending(&dev->watchdog_timer));
+ }
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 25554260a5931..dcc1992b14d76 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -449,6 +449,7 @@ static int rpcrdma_ep_create(struct rpcrdma_xprt *r_xprt)
+ 					      IB_POLL_WORKQUEUE);
+ 	if (IS_ERR(ep->re_attr.send_cq)) {
+ 		rc = PTR_ERR(ep->re_attr.send_cq);
++		ep->re_attr.send_cq = NULL;
+ 		goto out_destroy;
+ 	}
+ 
+@@ -457,6 +458,7 @@ static int rpcrdma_ep_create(struct rpcrdma_xprt *r_xprt)
+ 					      IB_POLL_WORKQUEUE);
+ 	if (IS_ERR(ep->re_attr.recv_cq)) {
+ 		rc = PTR_ERR(ep->re_attr.recv_cq);
++		ep->re_attr.recv_cq = NULL;
+ 		goto out_destroy;
+ 	}
+ 	ep->re_receive_count = 0;
+@@ -495,6 +497,7 @@ static int rpcrdma_ep_create(struct rpcrdma_xprt *r_xprt)
+ 	ep->re_pd = ib_alloc_pd(device, 0);
+ 	if (IS_ERR(ep->re_pd)) {
+ 		rc = PTR_ERR(ep->re_pd);
++		ep->re_pd = NULL;
+ 		goto out_destroy;
+ 	}
+ 
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 7fe36dbcbe187..005aa701f4d52 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1357,6 +1357,7 @@ static int vsock_stream_connect(struct socket *sock, struct sockaddr *addr,
+ 			sk->sk_state = sk->sk_state == TCP_ESTABLISHED ? TCP_CLOSING : TCP_CLOSE;
+ 			sock->state = SS_UNCONNECTED;
+ 			vsock_transport_cancel_pkt(vsk);
++			vsock_remove_connected(vsk);
+ 			goto out_wait;
+ 		} else if (timeout == 0) {
+ 			err = -ETIMEDOUT;
+diff --git a/scripts/kconfig/confdata.c b/scripts/kconfig/confdata.c
+index a39d93e3c6ae8..867b06c6d2797 100644
+--- a/scripts/kconfig/confdata.c
++++ b/scripts/kconfig/confdata.c
+@@ -968,14 +968,19 @@ static int conf_write_dep(const char *name)
+ 
+ static int conf_touch_deps(void)
+ {
+-	const char *name;
++	const char *name, *tmp;
+ 	struct symbol *sym;
+ 	int res, i;
+ 
+-	strcpy(depfile_path, "include/config/");
+-	depfile_prefix_len = strlen(depfile_path);
+-
+ 	name = conf_get_autoconfig_name();
++	tmp = strrchr(name, '/');
++	depfile_prefix_len = tmp ? tmp - name + 1 : 0;
++	if (depfile_prefix_len + 1 > sizeof(depfile_path))
++		return -1;
++
++	strncpy(depfile_path, name, depfile_prefix_len);
++	depfile_path[depfile_prefix_len] = 0;
++
+ 	conf_read_simple(name, S_DEF_AUTO);
+ 	sym_calc_value(modules_sym);
+ 
+diff --git a/scripts/kconfig/preprocess.c b/scripts/kconfig/preprocess.c
+index 0590f86df6e40..748da578b418c 100644
+--- a/scripts/kconfig/preprocess.c
++++ b/scripts/kconfig/preprocess.c
+@@ -141,7 +141,7 @@ static char *do_lineno(int argc, char *argv[])
+ static char *do_shell(int argc, char *argv[])
+ {
+ 	FILE *p;
+-	char buf[256];
++	char buf[4096];
+ 	char *cmd;
+ 	size_t nread;
+ 	int i;
+diff --git a/scripts/module.lds.S b/scripts/module.lds.S
+index 69b9b71a6a473..c5f12195817bb 100644
+--- a/scripts/module.lds.S
++++ b/scripts/module.lds.S
+@@ -23,6 +23,32 @@ SECTIONS {
+ 	.init_array		0 : ALIGN(8) { *(SORT(.init_array.*)) *(.init_array) }
+ 
+ 	__jump_table		0 : ALIGN(8) { KEEP(*(__jump_table)) }
++
++	__patchable_function_entries : { *(__patchable_function_entries) }
++
++#ifdef CONFIG_LTO_CLANG
++	/*
++	 * With CONFIG_LTO_CLANG, LLD always enables -fdata-sections and
++	 * -ffunction-sections, which increases the size of the final module.
++	 * Merge the split sections in the final binary.
++	 */
++	.bss : {
++		*(.bss .bss.[0-9a-zA-Z_]*)
++		*(.bss..L*)
++	}
++
++	.data : {
++		*(.data .data.[0-9a-zA-Z_]*)
++		*(.data..L*)
++	}
++
++	.rodata : {
++		*(.rodata .rodata.[0-9a-zA-Z_]*)
++		*(.rodata..L*)
++	}
++
++	.text : { *(.text .text.[0-9a-zA-Z_]*) }
++#endif
+ }
+ 
+ /* bring in arch-specific sections */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 3cc936f2cbf8d..600ea241ead79 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1652,6 +1652,7 @@ static const struct snd_pci_quirk probe_mask_list[] = {
+ 	/* forced codec slots */
+ 	SND_PCI_QUIRK(0x1043, 0x1262, "ASUS W5Fm", 0x103),
+ 	SND_PCI_QUIRK(0x1046, 0x1262, "ASUS W5F", 0x103),
++	SND_PCI_QUIRK(0x1558, 0x0351, "Schenker Dock 15", 0x105),
+ 	/* WinFast VP200 H (Teradici) user reported broken communication */
+ 	SND_PCI_QUIRK(0x3a21, 0x040d, "WinFast VP200 H", 0x101),
+ 	{}
+@@ -1837,8 +1838,6 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci,
+ 
+ 	assign_position_fix(chip, check_position_fix(chip, position_fix[dev]));
+ 
+-	check_probe_mask(chip, dev);
+-
+ 	if (single_cmd < 0) /* allow fallback to single_cmd at errors */
+ 		chip->fallback_to_single_cmd = 1;
+ 	else /* explicitly set to single_cmd or not */
+@@ -1866,6 +1865,8 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci,
+ 		chip->bus.core.needs_damn_long_delay = 1;
+ 	}
+ 
++	check_probe_mask(chip, dev);
++
+ 	err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops);
+ 	if (err < 0) {
+ 		dev_err(card->dev, "Error creating device [card]!\n");
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index aef017ba00708..ed0cfcb05ef0d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -134,6 +134,22 @@ struct alc_spec {
+  * COEF access helper functions
+  */
+ 
++static void coef_mutex_lock(struct hda_codec *codec)
++{
++	struct alc_spec *spec = codec->spec;
++
++	snd_hda_power_up_pm(codec);
++	mutex_lock(&spec->coef_mutex);
++}
++
++static void coef_mutex_unlock(struct hda_codec *codec)
++{
++	struct alc_spec *spec = codec->spec;
++
++	mutex_unlock(&spec->coef_mutex);
++	snd_hda_power_down_pm(codec);
++}
++
+ static int __alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ 				 unsigned int coef_idx)
+ {
+@@ -147,12 +163,11 @@ static int __alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ 			       unsigned int coef_idx)
+ {
+-	struct alc_spec *spec = codec->spec;
+ 	unsigned int val;
+ 
+-	mutex_lock(&spec->coef_mutex);
++	coef_mutex_lock(codec);
+ 	val = __alc_read_coefex_idx(codec, nid, coef_idx);
+-	mutex_unlock(&spec->coef_mutex);
++	coef_mutex_unlock(codec);
+ 	return val;
+ }
+ 
+@@ -169,11 +184,9 @@ static void __alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ static void alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ 				 unsigned int coef_idx, unsigned int coef_val)
+ {
+-	struct alc_spec *spec = codec->spec;
+-
+-	mutex_lock(&spec->coef_mutex);
++	coef_mutex_lock(codec);
+ 	__alc_write_coefex_idx(codec, nid, coef_idx, coef_val);
+-	mutex_unlock(&spec->coef_mutex);
++	coef_mutex_unlock(codec);
+ }
+ 
+ #define alc_write_coef_idx(codec, coef_idx, coef_val) \
+@@ -194,11 +207,9 @@ static void alc_update_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ 				  unsigned int coef_idx, unsigned int mask,
+ 				  unsigned int bits_set)
+ {
+-	struct alc_spec *spec = codec->spec;
+-
+-	mutex_lock(&spec->coef_mutex);
++	coef_mutex_lock(codec);
+ 	__alc_update_coefex_idx(codec, nid, coef_idx, mask, bits_set);
+-	mutex_unlock(&spec->coef_mutex);
++	coef_mutex_unlock(codec);
+ }
+ 
+ #define alc_update_coef_idx(codec, coef_idx, mask, bits_set)	\
+@@ -231,9 +242,7 @@ struct coef_fw {
+ static void alc_process_coef_fw(struct hda_codec *codec,
+ 				const struct coef_fw *fw)
+ {
+-	struct alc_spec *spec = codec->spec;
+-
+-	mutex_lock(&spec->coef_mutex);
++	coef_mutex_lock(codec);
+ 	for (; fw->nid; fw++) {
+ 		if (fw->mask == (unsigned short)-1)
+ 			__alc_write_coefex_idx(codec, fw->nid, fw->idx, fw->val);
+@@ -241,7 +250,7 @@ static void alc_process_coef_fw(struct hda_codec *codec,
+ 			__alc_update_coefex_idx(codec, fw->nid, fw->idx,
+ 						fw->mask, fw->val);
+ 	}
+-	mutex_unlock(&spec->coef_mutex);
++	coef_mutex_unlock(codec);
+ }
+ 
+ /*
+@@ -8948,6 +8957,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ 	SND_PCI_QUIRK(0x17aa, 0x3834, "Lenovo IdeaPad Slim 9i 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x383d, "Legion Y9000X 2019", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP),
+ 	SND_PCI_QUIRK(0x17aa, 0x384a, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3852, "Lenovo Yoga 7 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index 61c3238bc2656..315fd9d971c8c 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -38,10 +38,12 @@ static void tas2770_reset(struct tas2770_priv *tas2770)
+ 		gpiod_set_value_cansleep(tas2770->reset_gpio, 0);
+ 		msleep(20);
+ 		gpiod_set_value_cansleep(tas2770->reset_gpio, 1);
++		usleep_range(1000, 2000);
+ 	}
+ 
+ 	snd_soc_component_write(tas2770->component, TAS2770_SW_RST,
+ 		TAS2770_RST);
++	usleep_range(1000, 2000);
+ }
+ 
+ static int tas2770_set_bias_level(struct snd_soc_component *component,
+@@ -110,6 +112,7 @@ static int tas2770_codec_resume(struct snd_soc_component *component)
+ 
+ 	if (tas2770->sdz_gpio) {
+ 		gpiod_set_value_cansleep(tas2770->sdz_gpio, 1);
++		usleep_range(1000, 2000);
+ 	} else {
+ 		ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+ 						    TAS2770_PWR_CTRL_MASK,
+@@ -510,8 +513,10 @@ static int tas2770_codec_probe(struct snd_soc_component *component)
+ 
+ 	tas2770->component = component;
+ 
+-	if (tas2770->sdz_gpio)
++	if (tas2770->sdz_gpio) {
+ 		gpiod_set_value_cansleep(tas2770->sdz_gpio, 1);
++		usleep_range(1000, 2000);
++	}
+ 
+ 	tas2770_reset(tas2770);
+ 
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index f24f7354f46fe..caa8d45ebb209 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -308,7 +308,7 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+ 	unsigned int sign_bit = mc->sign_bit;
+ 	unsigned int mask = (1 << fls(max)) - 1;
+ 	unsigned int invert = mc->invert;
+-	int err;
++	int err, ret;
+ 	bool type_2r = false;
+ 	unsigned int val2 = 0;
+ 	unsigned int val, val_mask;
+@@ -350,12 +350,18 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+ 	err = snd_soc_component_update_bits(component, reg, val_mask, val);
+ 	if (err < 0)
+ 		return err;
++	ret = err;
+ 
+-	if (type_2r)
++	if (type_2r) {
+ 		err = snd_soc_component_update_bits(component, reg2, val_mask,
+-			val2);
++						    val2);
++		/* Don't discard any error code or drop change flag */
++		if (ret == 0 || err < 0) {
++			ret = err;
++		}
++	}
+ 
+-	return err;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_put_volsw);
+ 
+@@ -504,7 +510,7 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
+ 	unsigned int mask = (1 << fls(max)) - 1;
+ 	unsigned int invert = mc->invert;
+ 	unsigned int val, val_mask;
+-	int ret;
++	int err, ret;
+ 
+ 	if (invert)
+ 		val = (max - ucontrol->value.integer.value[0]) & mask;
+@@ -513,9 +519,10 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
+ 	val_mask = mask << shift;
+ 	val = val << shift;
+ 
+-	ret = snd_soc_component_update_bits(component, reg, val_mask, val);
+-	if (ret < 0)
+-		return ret;
++	err = snd_soc_component_update_bits(component, reg, val_mask, val);
++	if (err < 0)
++		return err;
++	ret = err;
+ 
+ 	if (snd_soc_volsw_is_stereo(mc)) {
+ 		if (invert)
+@@ -525,8 +532,12 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
+ 		val_mask = mask << shift;
+ 		val = val << shift;
+ 
+-		ret = snd_soc_component_update_bits(component, rreg, val_mask,
++		err = snd_soc_component_update_bits(component, rreg, val_mask,
+ 			val);
++		/* Don't discard any error code or drop change flag */
++		if (ret == 0 || err < 0) {
++			ret = err;
++		}
+ 	}
+ 
+ 	return ret;
+diff --git a/tools/lib/subcmd/subcmd-util.h b/tools/lib/subcmd/subcmd-util.h
+index 794a375dad360..b2aec04fce8f6 100644
+--- a/tools/lib/subcmd/subcmd-util.h
++++ b/tools/lib/subcmd/subcmd-util.h
+@@ -50,15 +50,8 @@ static NORETURN inline void die(const char *err, ...)
+ static inline void *xrealloc(void *ptr, size_t size)
+ {
+ 	void *ret = realloc(ptr, size);
+-	if (!ret && !size)
+-		ret = realloc(ptr, 1);
+-	if (!ret) {
+-		ret = realloc(ptr, size);
+-		if (!ret && !size)
+-			ret = realloc(ptr, 1);
+-		if (!ret)
+-			die("Out of memory, realloc failed");
+-	}
++	if (!ret)
++		die("Out of memory, realloc failed");
+ 	return ret;
+ }
+ 
+diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
+index 0374adcb223c7..ac99c0764bee8 100644
+--- a/tools/perf/util/bpf-loader.c
++++ b/tools/perf/util/bpf-loader.c
+@@ -1215,9 +1215,10 @@ bpf__obj_config_map(struct bpf_object *obj,
+ 	pr_debug("ERROR: Invalid map config option '%s'\n", map_opt);
+ 	err = -BPF_LOADER_ERRNO__OBJCONF_MAP_OPT;
+ out:
+-	free(map_name);
+ 	if (!err)
+ 		*key_scan_pos += strlen(map_opt);
++
++	free(map_name);
+ 	return err;
+ }
+ 
+diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
+index 076cf4325f783..cd4582129c7d6 100644
+--- a/tools/testing/selftests/clone3/clone3.c
++++ b/tools/testing/selftests/clone3/clone3.c
+@@ -126,8 +126,6 @@ static void test_clone3(uint64_t flags, size_t size, int expected,
+ 
+ int main(int argc, char *argv[])
+ {
+-	pid_t pid;
+-
+ 	uid_t uid = getuid();
+ 
+ 	ksft_print_header();
+diff --git a/tools/testing/selftests/exec/Makefile b/tools/testing/selftests/exec/Makefile
+index 12c5e27d32c16..2d7fca446c7f7 100644
+--- a/tools/testing/selftests/exec/Makefile
++++ b/tools/testing/selftests/exec/Makefile
+@@ -3,8 +3,8 @@ CFLAGS = -Wall
+ CFLAGS += -Wno-nonnull
+ CFLAGS += -D_GNU_SOURCE
+ 
+-TEST_PROGS := binfmt_script non-regular
+-TEST_GEN_PROGS := execveat load_address_4096 load_address_2097152 load_address_16777216
++TEST_PROGS := binfmt_script
++TEST_GEN_PROGS := execveat load_address_4096 load_address_2097152 load_address_16777216 non-regular
+ TEST_GEN_FILES := execveat.symlink execveat.denatured script subdir
+ # Makefile is a run-time dependency, since it's accessed by the execveat test
+ TEST_FILES := Makefile
+diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h
+index 5ecb9718e1616..3e7b2e521cde4 100644
+--- a/tools/testing/selftests/kselftest_harness.h
++++ b/tools/testing/selftests/kselftest_harness.h
+@@ -871,7 +871,8 @@ static void __timeout_handler(int sig, siginfo_t *info, void *ucontext)
+ 	}
+ 
+ 	t->timed_out = true;
+-	kill(t->pid, SIGKILL);
++	// signal process group
++	kill(-(t->pid), SIGKILL);
+ }
+ 
+ void __wait_for_test(struct __test_metadata *t)
+@@ -981,6 +982,7 @@ void __run_test(struct __fixture_metadata *f,
+ 		ksft_print_msg("ERROR SPAWNING TEST CHILD\n");
+ 		t->passed = 0;
+ 	} else if (t->pid == 0) {
++		setpgrp();
+ 		t->fn(t, variant);
+ 		if (t->skip)
+ 			_exit(255);
+diff --git a/tools/testing/selftests/mincore/mincore_selftest.c b/tools/testing/selftests/mincore/mincore_selftest.c
+index 5a1e85ff5d32a..2cf6f2f277ab8 100644
+--- a/tools/testing/selftests/mincore/mincore_selftest.c
++++ b/tools/testing/selftests/mincore/mincore_selftest.c
+@@ -208,15 +208,21 @@ TEST(check_file_mmap)
+ 
+ 	errno = 0;
+ 	fd = open(".", O_TMPFILE | O_RDWR, 0600);
+-	ASSERT_NE(-1, fd) {
+-		TH_LOG("Can't create temporary file: %s",
+-			strerror(errno));
++	if (fd < 0) {
++		ASSERT_EQ(errno, EOPNOTSUPP) {
++			TH_LOG("Can't create temporary file: %s",
++			       strerror(errno));
++		}
++		SKIP(goto out_free, "O_TMPFILE not supported by filesystem.");
+ 	}
+ 	errno = 0;
+ 	retval = fallocate(fd, 0, 0, FILE_SIZE);
+-	ASSERT_EQ(0, retval) {
+-		TH_LOG("Error allocating space for the temporary file: %s",
+-			strerror(errno));
++	if (retval) {
++		ASSERT_EQ(errno, EOPNOTSUPP) {
++			TH_LOG("Error allocating space for the temporary file: %s",
++			       strerror(errno));
++		}
++		SKIP(goto out_close, "fallocate not supported by filesystem.");
+ 	}
+ 
+ 	/*
+@@ -272,7 +278,9 @@ TEST(check_file_mmap)
+ 	}
+ 
+ 	munmap(addr, FILE_SIZE);
++out_close:
+ 	close(fd);
++out_free:
+ 	free(vec);
+ }
+ 
+diff --git a/tools/testing/selftests/netfilter/nft_concat_range.sh b/tools/testing/selftests/netfilter/nft_concat_range.sh
+index 9313fa32bef13..b5eef5ffb58e5 100755
+--- a/tools/testing/selftests/netfilter/nft_concat_range.sh
++++ b/tools/testing/selftests/netfilter/nft_concat_range.sh
+@@ -1583,4 +1583,4 @@ for name in ${TESTS}; do
+ 	done
+ done
+ 
+-[ ${passed} -eq 0 ] && exit ${KSELFTEST_SKIP}
++[ ${passed} -eq 0 ] && exit ${KSELFTEST_SKIP} || exit 0
+diff --git a/tools/testing/selftests/openat2/Makefile b/tools/testing/selftests/openat2/Makefile
+index 4b93b1417b862..843ba56d8e49e 100644
+--- a/tools/testing/selftests/openat2/Makefile
++++ b/tools/testing/selftests/openat2/Makefile
+@@ -5,4 +5,4 @@ TEST_GEN_PROGS := openat2_test resolve_test rename_attack_test
+ 
+ include ../lib.mk
+ 
+-$(TEST_GEN_PROGS): helpers.c
++$(TEST_GEN_PROGS): helpers.c helpers.h
+diff --git a/tools/testing/selftests/openat2/helpers.h b/tools/testing/selftests/openat2/helpers.h
+index a6ea27344db2d..7056340b9339e 100644
+--- a/tools/testing/selftests/openat2/helpers.h
++++ b/tools/testing/selftests/openat2/helpers.h
+@@ -9,6 +9,7 @@
+ 
+ #define _GNU_SOURCE
+ #include <stdint.h>
++#include <stdbool.h>
+ #include <errno.h>
+ #include <linux/types.h>
+ #include "../kselftest.h"
+@@ -62,11 +63,12 @@ bool needs_openat2(const struct open_how *how);
+ 					(similar to chroot(2)). */
+ #endif /* RESOLVE_IN_ROOT */
+ 
+-#define E_func(func, ...)						\
+-	do {								\
+-		if (func(__VA_ARGS__) < 0)				\
+-			ksft_exit_fail_msg("%s:%d %s failed\n", \
+-					   __FILE__, __LINE__, #func);\
++#define E_func(func, ...)						      \
++	do {								      \
++		errno = 0;						      \
++		if (func(__VA_ARGS__) < 0)				      \
++			ksft_exit_fail_msg("%s:%d %s failed - errno:%d\n",    \
++					   __FILE__, __LINE__, #func, errno); \
+ 	} while (0)
+ 
+ #define E_asprintf(...)		E_func(asprintf,	__VA_ARGS__)
+diff --git a/tools/testing/selftests/openat2/openat2_test.c b/tools/testing/selftests/openat2/openat2_test.c
+index b386367c606b1..453152b58e7f0 100644
+--- a/tools/testing/selftests/openat2/openat2_test.c
++++ b/tools/testing/selftests/openat2/openat2_test.c
+@@ -244,6 +244,16 @@ void test_openat2_flags(void)
+ 		unlink(path);
+ 
+ 		fd = sys_openat2(AT_FDCWD, path, &test->how);
++		if (fd < 0 && fd == -EOPNOTSUPP) {
++			/*
++			 * Skip the testcase if it failed because not supported
++			 * by FS. (e.g. a valid O_TMPFILE combination on NFS)
++			 */
++			ksft_test_result_skip("openat2 with %s fails with %d (%s)\n",
++					      test->name, fd, strerror(-fd));
++			goto next;
++		}
++
+ 		if (test->err >= 0)
+ 			failed = (fd < 0);
+ 		else
+@@ -288,7 +298,7 @@ skip:
+ 		else
+ 			resultfn("openat2 with %s fails with %d (%s)\n",
+ 				 test->name, test->err, strerror(-test->err));
+-
++next:
+ 		free(fdpath);
+ 		fflush(stdout);
+ 	}
+diff --git a/tools/testing/selftests/pidfd/pidfd.h b/tools/testing/selftests/pidfd/pidfd.h
+index 01f8d3c0cf2cb..6922d6417e1cf 100644
+--- a/tools/testing/selftests/pidfd/pidfd.h
++++ b/tools/testing/selftests/pidfd/pidfd.h
+@@ -68,7 +68,7 @@
+ #define PIDFD_SKIP 3
+ #define PIDFD_XFAIL 4
+ 
+-int wait_for_pid(pid_t pid)
++static inline int wait_for_pid(pid_t pid)
+ {
+ 	int status, ret;
+ 
+@@ -78,13 +78,20 @@ again:
+ 		if (errno == EINTR)
+ 			goto again;
+ 
++		ksft_print_msg("waitpid returned -1, errno=%d\n", errno);
+ 		return -1;
+ 	}
+ 
+-	if (!WIFEXITED(status))
++	if (!WIFEXITED(status)) {
++		ksft_print_msg(
++		       "waitpid !WIFEXITED, WIFSIGNALED=%d, WTERMSIG=%d\n",
++		       WIFSIGNALED(status), WTERMSIG(status));
+ 		return -1;
++	}
+ 
+-	return WEXITSTATUS(status);
++	ret = WEXITSTATUS(status);
++	ksft_print_msg("waitpid WEXITSTATUS=%d\n", ret);
++	return ret;
+ }
+ 
+ static inline int sys_pidfd_open(pid_t pid, unsigned int flags)
+diff --git a/tools/testing/selftests/pidfd/pidfd_fdinfo_test.c b/tools/testing/selftests/pidfd/pidfd_fdinfo_test.c
+index 22558524f71c3..3fd8e903118f5 100644
+--- a/tools/testing/selftests/pidfd/pidfd_fdinfo_test.c
++++ b/tools/testing/selftests/pidfd/pidfd_fdinfo_test.c
+@@ -12,6 +12,7 @@
+ #include <string.h>
+ #include <syscall.h>
+ #include <sys/wait.h>
++#include <sys/mman.h>
+ 
+ #include "pidfd.h"
+ #include "../kselftest.h"
+@@ -80,7 +81,10 @@ static inline int error_check(struct error *err, const char *test_name)
+ 	return err->code;
+ }
+ 
++#define CHILD_STACK_SIZE 8192
++
+ struct child {
++	char *stack;
+ 	pid_t pid;
+ 	int   fd;
+ };
+@@ -89,17 +93,22 @@ static struct child clone_newns(int (*fn)(void *), void *args,
+ 				struct error *err)
+ {
+ 	static int flags = CLONE_PIDFD | CLONE_NEWPID | CLONE_NEWNS | SIGCHLD;
+-	size_t stack_size = 1024;
+-	char *stack[1024] = { 0 };
+ 	struct child ret;
+ 
+ 	if (!(flags & CLONE_NEWUSER) && geteuid() != 0)
+ 		flags |= CLONE_NEWUSER;
+ 
++	ret.stack = mmap(NULL, CHILD_STACK_SIZE, PROT_READ | PROT_WRITE,
++			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
++	if (ret.stack == MAP_FAILED) {
++		error_set(err, -1, "mmap of stack failed (errno %d)", errno);
++		return ret;
++	}
++
+ #ifdef __ia64__
+-	ret.pid = __clone2(fn, stack, stack_size, flags, args, &ret.fd);
++	ret.pid = __clone2(fn, ret.stack, CHILD_STACK_SIZE, flags, args, &ret.fd);
+ #else
+-	ret.pid = clone(fn, stack + stack_size, flags, args, &ret.fd);
++	ret.pid = clone(fn, ret.stack + CHILD_STACK_SIZE, flags, args, &ret.fd);
+ #endif
+ 
+ 	if (ret.pid < 0) {
+@@ -129,6 +138,11 @@ static inline int child_join(struct child *child, struct error *err)
+ 	else if (r > 0)
+ 		error_set(err, r, "child %d reported: %d", child->pid, r);
+ 
++	if (munmap(child->stack, CHILD_STACK_SIZE)) {
++		error_set(err, -1, "munmap of child stack failed (errno %d)", errno);
++		r = -1;
++	}
++
+ 	return r;
+ }
+ 
+diff --git a/tools/testing/selftests/pidfd/pidfd_test.c b/tools/testing/selftests/pidfd/pidfd_test.c
+index 529eb700ac26a..9a2d64901d591 100644
+--- a/tools/testing/selftests/pidfd/pidfd_test.c
++++ b/tools/testing/selftests/pidfd/pidfd_test.c
+@@ -441,7 +441,6 @@ static void test_pidfd_poll_exec(int use_waitpid)
+ {
+ 	int pid, pidfd = 0;
+ 	int status, ret;
+-	pthread_t t1;
+ 	time_t prog_start = time(NULL);
+ 	const char *test_name = "pidfd_poll check for premature notification on child thread exec";
+ 
+@@ -500,13 +499,14 @@ static int child_poll_leader_exit_test(void *args)
+ 	 */
+ 	*child_exit_secs = time(NULL);
+ 	syscall(SYS_exit, 0);
++	/* Never reached, but appeases compiler thinking we should return. */
++	exit(0);
+ }
+ 
+ static void test_pidfd_poll_leader_exit(int use_waitpid)
+ {
+ 	int pid, pidfd = 0;
+-	int status, ret;
+-	time_t prog_start = time(NULL);
++	int status, ret = 0;
+ 	const char *test_name = "pidfd_poll check for premature notification on non-empty"
+ 				"group leader exit";
+ 
+diff --git a/tools/testing/selftests/pidfd/pidfd_wait.c b/tools/testing/selftests/pidfd/pidfd_wait.c
+index be2943f072f60..17999e082aa71 100644
+--- a/tools/testing/selftests/pidfd/pidfd_wait.c
++++ b/tools/testing/selftests/pidfd/pidfd_wait.c
+@@ -39,7 +39,7 @@ static int sys_waitid(int which, pid_t pid, siginfo_t *info, int options,
+ 
+ TEST(wait_simple)
+ {
+-	int pidfd = -1, status = 0;
++	int pidfd = -1;
+ 	pid_t parent_tid = -1;
+ 	struct clone_args args = {
+ 		.parent_tid = ptr_to_u64(&parent_tid),
+@@ -47,7 +47,6 @@ TEST(wait_simple)
+ 		.flags = CLONE_PIDFD | CLONE_PARENT_SETTID,
+ 		.exit_signal = SIGCHLD,
+ 	};
+-	int ret;
+ 	pid_t pid;
+ 	siginfo_t info = {
+ 		.si_signo = 0,
+@@ -88,7 +87,7 @@ TEST(wait_simple)
+ 
+ TEST(wait_states)
+ {
+-	int pidfd = -1, status = 0;
++	int pidfd = -1;
+ 	pid_t parent_tid = -1;
+ 	struct clone_args args = {
+ 		.parent_tid = ptr_to_u64(&parent_tid),
+diff --git a/tools/testing/selftests/rtc/settings b/tools/testing/selftests/rtc/settings
+index ba4d85f74cd6b..a953c96aa16e1 100644
+--- a/tools/testing/selftests/rtc/settings
++++ b/tools/testing/selftests/rtc/settings
+@@ -1 +1 @@
+-timeout=90
++timeout=180
+diff --git a/tools/testing/selftests/zram/zram.sh b/tools/testing/selftests/zram/zram.sh
+index 232e958ec4547..b0b91d9b0dc21 100755
+--- a/tools/testing/selftests/zram/zram.sh
++++ b/tools/testing/selftests/zram/zram.sh
+@@ -2,9 +2,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ TCID="zram.sh"
+ 
+-# Kselftest framework requirement - SKIP code is 4.
+-ksft_skip=4
+-
+ . ./zram_lib.sh
+ 
+ run_zram () {
+@@ -18,14 +15,4 @@ echo ""
+ 
+ check_prereqs
+ 
+-# check zram module exists
+-MODULE_PATH=/lib/modules/`uname -r`/kernel/drivers/block/zram/zram.ko
+-if [ -f $MODULE_PATH ]; then
+-	run_zram
+-elif [ -b /dev/zram0 ]; then
+-	run_zram
+-else
+-	echo "$TCID : No zram.ko module or /dev/zram0 device file not found"
+-	echo "$TCID : CONFIG_ZRAM is not set"
+-	exit $ksft_skip
+-fi
++run_zram
+diff --git a/tools/testing/selftests/zram/zram01.sh b/tools/testing/selftests/zram/zram01.sh
+index 114863d9fb876..8f4affe34f3e4 100755
+--- a/tools/testing/selftests/zram/zram01.sh
++++ b/tools/testing/selftests/zram/zram01.sh
+@@ -33,9 +33,7 @@ zram_algs="lzo"
+ 
+ zram_fill_fs()
+ {
+-	local mem_free0=$(free -m | awk 'NR==2 {print $4}')
+-
+-	for i in $(seq 0 $(($dev_num - 1))); do
++	for i in $(seq $dev_start $dev_end); do
+ 		echo "fill zram$i..."
+ 		local b=0
+ 		while [ true ]; do
+@@ -45,29 +43,17 @@ zram_fill_fs()
+ 			b=$(($b + 1))
+ 		done
+ 		echo "zram$i can be filled with '$b' KB"
+-	done
+ 
+-	local mem_free1=$(free -m | awk 'NR==2 {print $4}')
+-	local used_mem=$(($mem_free0 - $mem_free1))
++		local mem_used_total=`awk '{print $3}' "/sys/block/zram$i/mm_stat"`
++		local v=$((100 * 1024 * $b / $mem_used_total))
++		if [ "$v" -lt 100 ]; then
++			 echo "FAIL compression ratio: 0.$v:1"
++			 ERR_CODE=-1
++			 return
++		fi
+ 
+-	local total_size=0
+-	for sm in $zram_sizes; do
+-		local s=$(echo $sm | sed 's/M//')
+-		total_size=$(($total_size + $s))
++		echo "zram compression ratio: $(echo "scale=2; $v / 100 " | bc):1: OK"
+ 	done
+-
+-	echo "zram used ${used_mem}M, zram disk sizes ${total_size}M"
+-
+-	local v=$((100 * $total_size / $used_mem))
+-
+-	if [ "$v" -lt 100 ]; then
+-		echo "FAIL compression ratio: 0.$v:1"
+-		ERR_CODE=-1
+-		zram_cleanup
+-		return
+-	fi
+-
+-	echo "zram compression ratio: $(echo "scale=2; $v / 100 " | bc):1: OK"
+ }
+ 
+ check_prereqs
+@@ -81,7 +67,6 @@ zram_mount
+ 
+ zram_fill_fs
+ zram_cleanup
+-zram_unload
+ 
+ if [ $ERR_CODE -ne 0 ]; then
+ 	echo "$TCID : [FAIL]"
+diff --git a/tools/testing/selftests/zram/zram02.sh b/tools/testing/selftests/zram/zram02.sh
+index e83b404807c09..2418b0c4ed136 100755
+--- a/tools/testing/selftests/zram/zram02.sh
++++ b/tools/testing/selftests/zram/zram02.sh
+@@ -36,7 +36,6 @@ zram_set_memlimit
+ zram_makeswap
+ zram_swapoff
+ zram_cleanup
+-zram_unload
+ 
+ if [ $ERR_CODE -ne 0 ]; then
+ 	echo "$TCID : [FAIL]"
+diff --git a/tools/testing/selftests/zram/zram_lib.sh b/tools/testing/selftests/zram/zram_lib.sh
+index 6f872f266fd11..21ec1966de76c 100755
+--- a/tools/testing/selftests/zram/zram_lib.sh
++++ b/tools/testing/selftests/zram/zram_lib.sh
+@@ -5,12 +5,17 @@
+ # Author: Alexey Kodanev <alexey.kodanev@oracle.com>
+ # Modified: Naresh Kamboju <naresh.kamboju@linaro.org>
+ 
+-MODULE=0
+ dev_makeswap=-1
+ dev_mounted=-1
+-
++dev_start=0
++dev_end=-1
++module_load=-1
++sys_control=-1
+ # Kselftest framework requirement - SKIP code is 4.
+ ksft_skip=4
++kernel_version=`uname -r | cut -d'.' -f1,2`
++kernel_major=${kernel_version%.*}
++kernel_minor=${kernel_version#*.}
+ 
+ trap INT
+ 
+@@ -25,68 +30,104 @@ check_prereqs()
+ 	fi
+ }
+ 
++kernel_gte()
++{
++	major=${1%.*}
++	minor=${1#*.}
++
++	if [ $kernel_major -gt $major ]; then
++		return 0
++	elif [[ $kernel_major -eq $major && $kernel_minor -ge $minor ]]; then
++		return 0
++	fi
++
++	return 1
++}
++
+ zram_cleanup()
+ {
+ 	echo "zram cleanup"
+ 	local i=
+-	for i in $(seq 0 $dev_makeswap); do
++	for i in $(seq $dev_start $dev_makeswap); do
+ 		swapoff /dev/zram$i
+ 	done
+ 
+-	for i in $(seq 0 $dev_mounted); do
++	for i in $(seq $dev_start $dev_mounted); do
+ 		umount /dev/zram$i
+ 	done
+ 
+-	for i in $(seq 0 $(($dev_num - 1))); do
++	for i in $(seq $dev_start $dev_end); do
+ 		echo 1 > /sys/block/zram${i}/reset
+ 		rm -rf zram$i
+ 	done
+ 
+-}
++	if [ $sys_control -eq 1 ]; then
++		for i in $(seq $dev_start $dev_end); do
++			echo $i > /sys/class/zram-control/hot_remove
++		done
++	fi
+ 
+-zram_unload()
+-{
+-	if [ $MODULE -ne 0 ] ; then
+-		echo "zram rmmod zram"
++	if [ $module_load -eq 1 ]; then
+ 		rmmod zram > /dev/null 2>&1
+ 	fi
+ }
+ 
+ zram_load()
+ {
+-	# check zram module exists
+-	MODULE_PATH=/lib/modules/`uname -r`/kernel/drivers/block/zram/zram.ko
+-	if [ -f $MODULE_PATH ]; then
+-		MODULE=1
+-		echo "create '$dev_num' zram device(s)"
+-		modprobe zram num_devices=$dev_num
+-		if [ $? -ne 0 ]; then
+-			echo "failed to insert zram module"
+-			exit 1
+-		fi
+-
+-		dev_num_created=$(ls /dev/zram* | wc -w)
++	echo "create '$dev_num' zram device(s)"
++
++	# zram module loaded, new kernel
++	if [ -d "/sys/class/zram-control" ]; then
++		echo "zram modules already loaded, kernel supports" \
++			"zram-control interface"
++		dev_start=$(ls /dev/zram* | wc -w)
++		dev_end=$(($dev_start + $dev_num - 1))
++		sys_control=1
++
++		for i in $(seq $dev_start $dev_end); do
++			cat /sys/class/zram-control/hot_add > /dev/null
++		done
++
++		echo "all zram devices (/dev/zram$dev_start~$dev_end" \
++			"successfully created"
++		return 0
++	fi
+ 
+-		if [ "$dev_num_created" -ne "$dev_num" ]; then
+-			echo "unexpected num of devices: $dev_num_created"
+-			ERR_CODE=-1
++	# detect old kernel or built-in
++	modprobe zram num_devices=$dev_num
++	if [ ! -d "/sys/class/zram-control" ]; then
++		if grep -q '^zram' /proc/modules; then
++			rmmod zram > /dev/null 2>&1
++			if [ $? -ne 0 ]; then
++				echo "zram module is being used on old kernel" \
++					"without zram-control interface"
++				exit $ksft_skip
++			fi
+ 		else
+-			echo "zram load module successful"
++			echo "test needs CONFIG_ZRAM=m on old kernel without" \
++				"zram-control interface"
++			exit $ksft_skip
+ 		fi
+-	elif [ -b /dev/zram0 ]; then
+-		echo "/dev/zram0 device file found: OK"
+-	else
+-		echo "ERROR: No zram.ko module or no /dev/zram0 device found"
+-		echo "$TCID : CONFIG_ZRAM is not set"
+-		exit 1
++		modprobe zram num_devices=$dev_num
+ 	fi
++
++	module_load=1
++	dev_end=$(($dev_num - 1))
++	echo "all zram devices (/dev/zram0~$dev_end) successfully created"
+ }
+ 
+ zram_max_streams()
+ {
+ 	echo "set max_comp_streams to zram device(s)"
+ 
+-	local i=0
++	kernel_gte 4.7
++	if [ $? -eq 0 ]; then
++		echo "The device attribute max_comp_streams was"\
++		               "deprecated in 4.7"
++		return 0
++	fi
++
++	local i=$dev_start
+ 	for max_s in $zram_max_streams; do
+ 		local sys_path="/sys/block/zram${i}/max_comp_streams"
+ 		echo $max_s > $sys_path || \
+@@ -98,7 +139,7 @@ zram_max_streams()
+ 			echo "FAIL can't set max_streams '$max_s', get $max_stream"
+ 
+ 		i=$(($i + 1))
+-		echo "$sys_path = '$max_streams' ($i/$dev_num)"
++		echo "$sys_path = '$max_streams'"
+ 	done
+ 
+ 	echo "zram max streams: OK"
+@@ -108,15 +149,16 @@ zram_compress_alg()
+ {
+ 	echo "test that we can set compression algorithm"
+ 
+-	local algs=$(cat /sys/block/zram0/comp_algorithm)
++	local i=$dev_start
++	local algs=$(cat /sys/block/zram${i}/comp_algorithm)
+ 	echo "supported algs: $algs"
+-	local i=0
++
+ 	for alg in $zram_algs; do
+ 		local sys_path="/sys/block/zram${i}/comp_algorithm"
+ 		echo "$alg" >	$sys_path || \
+ 			echo "FAIL can't set '$alg' to $sys_path"
+ 		i=$(($i + 1))
+-		echo "$sys_path = '$alg' ($i/$dev_num)"
++		echo "$sys_path = '$alg'"
+ 	done
+ 
+ 	echo "zram set compression algorithm: OK"
+@@ -125,14 +167,14 @@ zram_compress_alg()
+ zram_set_disksizes()
+ {
+ 	echo "set disk size to zram device(s)"
+-	local i=0
++	local i=$dev_start
+ 	for ds in $zram_sizes; do
+ 		local sys_path="/sys/block/zram${i}/disksize"
+ 		echo "$ds" >	$sys_path || \
+ 			echo "FAIL can't set '$ds' to $sys_path"
+ 
+ 		i=$(($i + 1))
+-		echo "$sys_path = '$ds' ($i/$dev_num)"
++		echo "$sys_path = '$ds'"
+ 	done
+ 
+ 	echo "zram set disksizes: OK"
+@@ -142,14 +184,14 @@ zram_set_memlimit()
+ {
+ 	echo "set memory limit to zram device(s)"
+ 
+-	local i=0
++	local i=$dev_start
+ 	for ds in $zram_mem_limits; do
+ 		local sys_path="/sys/block/zram${i}/mem_limit"
+ 		echo "$ds" >	$sys_path || \
+ 			echo "FAIL can't set '$ds' to $sys_path"
+ 
+ 		i=$(($i + 1))
+-		echo "$sys_path = '$ds' ($i/$dev_num)"
++		echo "$sys_path = '$ds'"
+ 	done
+ 
+ 	echo "zram set memory limit: OK"
+@@ -158,8 +200,8 @@ zram_set_memlimit()
+ zram_makeswap()
+ {
+ 	echo "make swap with zram device(s)"
+-	local i=0
+-	for i in $(seq 0 $(($dev_num - 1))); do
++	local i=$dev_start
++	for i in $(seq $dev_start $dev_end); do
+ 		mkswap /dev/zram$i > err.log 2>&1
+ 		if [ $? -ne 0 ]; then
+ 			cat err.log
+@@ -182,7 +224,7 @@ zram_makeswap()
+ zram_swapoff()
+ {
+ 	local i=
+-	for i in $(seq 0 $dev_makeswap); do
++	for i in $(seq $dev_start $dev_end); do
+ 		swapoff /dev/zram$i > err.log 2>&1
+ 		if [ $? -ne 0 ]; then
+ 			cat err.log
+@@ -196,7 +238,7 @@ zram_swapoff()
+ 
+ zram_makefs()
+ {
+-	local i=0
++	local i=$dev_start
+ 	for fs in $zram_filesystems; do
+ 		# if requested fs not supported default it to ext2
+ 		which mkfs.$fs > /dev/null 2>&1 || fs=ext2
+@@ -215,7 +257,7 @@ zram_makefs()
+ zram_mount()
+ {
+ 	local i=0
+-	for i in $(seq 0 $(($dev_num - 1))); do
++	for i in $(seq $dev_start $dev_end); do
+ 		echo "mount /dev/zram$i"
+ 		mkdir zram$i
+ 		mount /dev/zram$i zram$i > /dev/null || \


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-02-26 20:27 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-02-26 20:27 UTC (permalink / raw
  To: gentoo-commits

commit:     b3f78be65acbb318d6af57a24685410c3a76d78e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 26 20:26:53 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb 26 20:26:53 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b3f78be6

Update default security restrictions

Bug: https://bugs.gentoo.org/834085

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 ...able-link-security-restrictions-by-default.patch | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
index f0ed144f..1b3e590d 100644
--- a/1510_fs-enable-link-security-restrictions-by-default.patch
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -1,20 +1,17 @@
-From: Ben Hutchings <ben@decadent.org.uk>
-Subject: fs: Enable link security restrictions by default
-Date: Fri, 02 Nov 2012 05:32:06 +0000
-Bug-Debian: https://bugs.debian.org/609455
-Forwarded: not-needed
-This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
-('VFS: don't do protected {sym,hard}links by default').
---- a/fs/namei.c	2018-09-28 07:56:07.770005006 -0400
-+++ b/fs/namei.c	2018-09-28 07:56:43.370349204 -0400
-@@ -885,8 +885,8 @@ static inline void put_link(struct namei
+--- a/fs/namei.c	2022-01-09 17:55:34.000000000 -0500
++++ b/fs/namei.c	2022-02-26 11:32:31.832844465 -0500
+@@ -1020,10 +1020,10 @@ static inline void put_link(struct namei
  		path_put(&last->link);
  }
  
 -int sysctl_protected_symlinks __read_mostly = 0;
 -int sysctl_protected_hardlinks __read_mostly = 0;
+-int sysctl_protected_fifos __read_mostly;
+-int sysctl_protected_regular __read_mostly;
 +int sysctl_protected_symlinks __read_mostly = 1;
 +int sysctl_protected_hardlinks __read_mostly = 1;
- int sysctl_protected_fifos __read_mostly;
- int sysctl_protected_regular __read_mostly;
++int sysctl_protected_fifos __read_mostly = 1;
++int sysctl_protected_regular __read_mostly = 1;
  
+ /**
+  * may_follow_link - Check symlink following for unsafe situations


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-03-02 13:06 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-03-02 13:06 UTC (permalink / raw
  To: gentoo-commits

commit:     52ea92c5b1054000b9633c25305d864bd15c610c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar  2 13:06:23 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar  2 13:06:23 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=52ea92c5

Linux patch 5.10.103

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1102_linux-5.10.103.patch | 2970 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2974 insertions(+)

diff --git a/0000_README b/0000_README
index 3438f96a..7f478841 100644
--- a/0000_README
+++ b/0000_README
@@ -451,6 +451,10 @@ Patch:  1101_linux-5.10.102.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.102
 
+Patch:  1102_linux-5.10.103.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.103
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1102_linux-5.10.103.patch b/1102_linux-5.10.103.patch
new file mode 100644
index 00000000..713f7c90
--- /dev/null
+++ b/1102_linux-5.10.103.patch
@@ -0,0 +1,2970 @@
+diff --git a/Makefile b/Makefile
+index f71684d435e5a..829a66a36807e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 102
++SUBLEVEL = 103
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/parisc/kernel/unaligned.c b/arch/parisc/kernel/unaligned.c
+index 237d20dd5622d..286cec4d86d7b 100644
+--- a/arch/parisc/kernel/unaligned.c
++++ b/arch/parisc/kernel/unaligned.c
+@@ -340,7 +340,7 @@ static int emulate_stw(struct pt_regs *regs, int frreg, int flop)
+ 	: "r" (val), "r" (regs->ior), "r" (regs->isr)
+ 	: "r19", "r20", "r21", "r22", "r1", FIXUP_BRANCH_CLOBBER );
+ 
+-	return 0;
++	return ret;
+ }
+ static int emulate_std(struct pt_regs *regs, int frreg, int flop)
+ {
+@@ -397,7 +397,7 @@ static int emulate_std(struct pt_regs *regs, int frreg, int flop)
+ 	__asm__ __volatile__ (
+ "	mtsp	%4, %%sr1\n"
+ "	zdep	%2, 29, 2, %%r19\n"
+-"	dep	%%r0, 31, 2, %2\n"
++"	dep	%%r0, 31, 2, %3\n"
+ "	mtsar	%%r19\n"
+ "	zvdepi	-2, 32, %%r19\n"
+ "1:	ldw	0(%%sr1,%3),%%r20\n"
+@@ -409,7 +409,7 @@ static int emulate_std(struct pt_regs *regs, int frreg, int flop)
+ "	andcm	%%r21, %%r19, %%r21\n"
+ "	or	%1, %%r20, %1\n"
+ "	or	%2, %%r21, %2\n"
+-"3:	stw	%1,0(%%sr1,%1)\n"
++"3:	stw	%1,0(%%sr1,%3)\n"
+ "4:	stw	%%r1,4(%%sr1,%3)\n"
+ "5:	stw	%2,8(%%sr1,%3)\n"
+ "	copy	%%r0, %0\n"
+@@ -596,7 +596,6 @@ void handle_unaligned(struct pt_regs *regs)
+ 		ret = ERR_NOTHANDLED;	/* "undefined", but lets kill them. */
+ 		break;
+ 	}
+-#ifdef CONFIG_PA20
+ 	switch (regs->iir & OPCODE2_MASK)
+ 	{
+ 	case OPCODE_FLDD_L:
+@@ -607,22 +606,23 @@ void handle_unaligned(struct pt_regs *regs)
+ 		flop=1;
+ 		ret = emulate_std(regs, R2(regs->iir),1);
+ 		break;
++#ifdef CONFIG_PA20
+ 	case OPCODE_LDD_L:
+ 		ret = emulate_ldd(regs, R2(regs->iir),0);
+ 		break;
+ 	case OPCODE_STD_L:
+ 		ret = emulate_std(regs, R2(regs->iir),0);
+ 		break;
+-	}
+ #endif
++	}
+ 	switch (regs->iir & OPCODE3_MASK)
+ 	{
+ 	case OPCODE_FLDW_L:
+ 		flop=1;
+-		ret = emulate_ldw(regs, R2(regs->iir),0);
++		ret = emulate_ldw(regs, R2(regs->iir), 1);
+ 		break;
+ 	case OPCODE_LDW_M:
+-		ret = emulate_ldw(regs, R2(regs->iir),1);
++		ret = emulate_ldw(regs, R2(regs->iir), 0);
+ 		break;
+ 
+ 	case OPCODE_FSTW_L:
+diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
+index 62de075fc60c0..bc49d5f2302b6 100644
+--- a/arch/riscv/kernel/Makefile
++++ b/arch/riscv/kernel/Makefile
+@@ -44,6 +44,8 @@ obj-$(CONFIG_MODULE_SECTIONS)	+= module-sections.o
+ obj-$(CONFIG_FUNCTION_TRACER)	+= mcount.o ftrace.o
+ obj-$(CONFIG_DYNAMIC_FTRACE)	+= mcount-dyn.o
+ 
++obj-$(CONFIG_TRACE_IRQFLAGS)	+= trace_irq.o
++
+ obj-$(CONFIG_RISCV_BASE_PMU)	+= perf_event.o
+ obj-$(CONFIG_PERF_EVENTS)	+= perf_callchain.o
+ obj-$(CONFIG_HAVE_PERF_REGS)	+= perf_regs.o
+diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
+index 76274a4a1d8e6..5214c578a6023 100644
+--- a/arch/riscv/kernel/entry.S
++++ b/arch/riscv/kernel/entry.S
+@@ -98,7 +98,7 @@ _save_context:
+ .option pop
+ 
+ #ifdef CONFIG_TRACE_IRQFLAGS
+-	call trace_hardirqs_off
++	call __trace_hardirqs_off
+ #endif
+ 
+ #ifdef CONFIG_CONTEXT_TRACKING
+@@ -131,7 +131,7 @@ skip_context_tracking:
+ 	andi t0, s1, SR_PIE
+ 	beqz t0, 1f
+ #ifdef CONFIG_TRACE_IRQFLAGS
+-	call trace_hardirqs_on
++	call __trace_hardirqs_on
+ #endif
+ 	csrs CSR_STATUS, SR_IE
+ 
+@@ -222,7 +222,7 @@ ret_from_exception:
+ 	REG_L s0, PT_STATUS(sp)
+ 	csrc CSR_STATUS, SR_IE
+ #ifdef CONFIG_TRACE_IRQFLAGS
+-	call trace_hardirqs_off
++	call __trace_hardirqs_off
+ #endif
+ #ifdef CONFIG_RISCV_M_MODE
+ 	/* the MPP value is too large to be used as an immediate arg for addi */
+@@ -258,10 +258,10 @@ restore_all:
+ 	REG_L s1, PT_STATUS(sp)
+ 	andi t0, s1, SR_PIE
+ 	beqz t0, 1f
+-	call trace_hardirqs_on
++	call __trace_hardirqs_on
+ 	j 2f
+ 1:
+-	call trace_hardirqs_off
++	call __trace_hardirqs_off
+ 2:
+ #endif
+ 	REG_L a0, PT_STATUS(sp)
+diff --git a/arch/riscv/kernel/trace_irq.c b/arch/riscv/kernel/trace_irq.c
+new file mode 100644
+index 0000000000000..095ac976d7da1
+--- /dev/null
++++ b/arch/riscv/kernel/trace_irq.c
+@@ -0,0 +1,27 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright (C) 2022 Changbin Du <changbin.du@gmail.com>
++ */
++
++#include <linux/irqflags.h>
++#include <linux/kprobes.h>
++#include "trace_irq.h"
++
++/*
++ * trace_hardirqs_on/off require the caller to setup frame pointer properly.
++ * Otherwise, CALLER_ADDR1 might trigger an pagging exception in kernel.
++ * Here we add one extra level so they can be safely called by low
++ * level entry code which $fp is used for other purpose.
++ */
++
++void __trace_hardirqs_on(void)
++{
++	trace_hardirqs_on();
++}
++NOKPROBE_SYMBOL(__trace_hardirqs_on);
++
++void __trace_hardirqs_off(void)
++{
++	trace_hardirqs_off();
++}
++NOKPROBE_SYMBOL(__trace_hardirqs_off);
+diff --git a/arch/riscv/kernel/trace_irq.h b/arch/riscv/kernel/trace_irq.h
+new file mode 100644
+index 0000000000000..99fe67377e5ed
+--- /dev/null
++++ b/arch/riscv/kernel/trace_irq.h
+@@ -0,0 +1,11 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Copyright (C) 2022 Changbin Du <changbin.du@gmail.com>
++ */
++#ifndef __TRACE_IRQ_H
++#define __TRACE_IRQ_H
++
++void __trace_hardirqs_on(void);
++void __trace_hardirqs_off(void);
++
++#endif /* __TRACE_IRQ_H */
+diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
+index 4e5af2b00d89b..70b9bc5403c5e 100644
+--- a/arch/x86/include/asm/fpu/internal.h
++++ b/arch/x86/include/asm/fpu/internal.h
+@@ -531,9 +531,11 @@ static inline void __fpregs_load_activate(void)
+  * The FPU context is only stored/restored for a user task and
+  * PF_KTHREAD is used to distinguish between kernel and user threads.
+  */
+-static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu)
++static inline void switch_fpu_prepare(struct task_struct *prev, int cpu)
+ {
+-	if (static_cpu_has(X86_FEATURE_FPU) && !(current->flags & PF_KTHREAD)) {
++	struct fpu *old_fpu = &prev->thread.fpu;
++
++	if (static_cpu_has(X86_FEATURE_FPU) && !(prev->flags & PF_KTHREAD)) {
+ 		if (!copy_fpregs_to_fpstate(old_fpu))
+ 			old_fpu->last_cpu = -1;
+ 		else
+@@ -552,10 +554,11 @@ static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu)
+  * Load PKRU from the FPU context if available. Delay loading of the
+  * complete FPU state until the return to userland.
+  */
+-static inline void switch_fpu_finish(struct fpu *new_fpu)
++static inline void switch_fpu_finish(struct task_struct *next)
+ {
+ 	u32 pkru_val = init_pkru_value;
+ 	struct pkru_state *pk;
++	struct fpu *next_fpu = &next->thread.fpu;
+ 
+ 	if (!static_cpu_has(X86_FEATURE_FPU))
+ 		return;
+@@ -569,7 +572,7 @@ static inline void switch_fpu_finish(struct fpu *new_fpu)
+ 	 * PKRU state is switched eagerly because it needs to be valid before we
+ 	 * return to userland e.g. for a copy_to_user() operation.
+ 	 */
+-	if (!(current->flags & PF_KTHREAD)) {
++	if (!(next->flags & PF_KTHREAD)) {
+ 		/*
+ 		 * If the PKRU bit in xsave.header.xfeatures is not set,
+ 		 * then the PKRU component was in init state, which means
+@@ -578,7 +581,7 @@ static inline void switch_fpu_finish(struct fpu *new_fpu)
+ 		 * in memory is not valid. This means pkru_val has to be
+ 		 * set to 0 and not to init_pkru_value.
+ 		 */
+-		pk = get_xsave_addr(&new_fpu->state.xsave, XFEATURE_PKRU);
++		pk = get_xsave_addr(&next_fpu->state.xsave, XFEATURE_PKRU);
+ 		pkru_val = pk ? pk->pkru : 0;
+ 	}
+ 	__write_pkru(pkru_val);
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index 4f2f54e1281c3..98bf8fd189025 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -159,14 +159,12 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ {
+ 	struct thread_struct *prev = &prev_p->thread,
+ 			     *next = &next_p->thread;
+-	struct fpu *prev_fpu = &prev->fpu;
+-	struct fpu *next_fpu = &next->fpu;
+ 	int cpu = smp_processor_id();
+ 
+ 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
+ 
+ 	if (!test_thread_flag(TIF_NEED_FPU_LOAD))
+-		switch_fpu_prepare(prev_fpu, cpu);
++		switch_fpu_prepare(prev_p, cpu);
+ 
+ 	/*
+ 	 * Save away %gs. No need to save %fs, as it was saved on the
+@@ -213,7 +211,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 
+ 	this_cpu_write(current_task, next_p);
+ 
+-	switch_fpu_finish(next_fpu);
++	switch_fpu_finish(next_p);
+ 
+ 	/* Load the Intel cache allocation PQR MSR. */
+ 	resctrl_sched_in();
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index df342bedea88a..ad3f82a18de9d 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -535,15 +535,13 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ {
+ 	struct thread_struct *prev = &prev_p->thread;
+ 	struct thread_struct *next = &next_p->thread;
+-	struct fpu *prev_fpu = &prev->fpu;
+-	struct fpu *next_fpu = &next->fpu;
+ 	int cpu = smp_processor_id();
+ 
+ 	WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ENTRY) &&
+ 		     this_cpu_read(irq_count) != -1);
+ 
+ 	if (!test_thread_flag(TIF_NEED_FPU_LOAD))
+-		switch_fpu_prepare(prev_fpu, cpu);
++		switch_fpu_prepare(prev_p, cpu);
+ 
+ 	/* We must save %fs and %gs before load_TLS() because
+ 	 * %fs and %gs may be cleared by load_TLS().
+@@ -595,7 +593,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 	this_cpu_write(current_task, next_p);
+ 	this_cpu_write(cpu_current_top_of_stack, task_top_of_stack(next_p));
+ 
+-	switch_fpu_finish(next_fpu);
++	switch_fpu_finish(next_p);
+ 
+ 	/* Reload sp0. */
+ 	update_task_stack(next_p);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index c2516ddc3cbec..20d29ae8ed702 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -3631,12 +3631,23 @@ static void shadow_page_table_clear_flood(struct kvm_vcpu *vcpu, gva_t addr)
+ 	walk_shadow_page_lockless_end(vcpu);
+ }
+ 
++static u32 alloc_apf_token(struct kvm_vcpu *vcpu)
++{
++	/* make sure the token value is not 0 */
++	u32 id = vcpu->arch.apf.id;
++
++	if (id << 12 == 0)
++		vcpu->arch.apf.id = 1;
++
++	return (vcpu->arch.apf.id++ << 12) | vcpu->vcpu_id;
++}
++
+ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+ 				    gfn_t gfn)
+ {
+ 	struct kvm_arch_async_pf arch;
+ 
+-	arch.token = (vcpu->arch.apf.id++ << 12) | vcpu->vcpu_id;
++	arch.token = alloc_apf_token(vcpu);
+ 	arch.gfn = gfn;
+ 	arch.direct_map = vcpu->arch.mmu->direct_map;
+ 	arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu);
+diff --git a/drivers/ata/pata_hpt37x.c b/drivers/ata/pata_hpt37x.c
+index fad6c6a873130..499a947d56ddb 100644
+--- a/drivers/ata/pata_hpt37x.c
++++ b/drivers/ata/pata_hpt37x.c
+@@ -917,6 +917,20 @@ static int hpt37x_init_one(struct pci_dev *dev, const struct pci_device_id *id)
+ 	irqmask &= ~0x10;
+ 	pci_write_config_byte(dev, 0x5a, irqmask);
+ 
++	/*
++	 * HPT371 chips physically have only one channel, the secondary one,
++	 * but the primary channel registers do exist!  Go figure...
++	 * So,  we manually disable the non-existing channel here
++	 * (if the BIOS hasn't done this already).
++	 */
++	if (dev->device == PCI_DEVICE_ID_TTI_HPT371) {
++		u8 mcr1;
++
++		pci_read_config_byte(dev, 0x50, &mcr1);
++		mcr1 &= ~0x04;
++		pci_write_config_byte(dev, 0x50, mcr1);
++	}
++
+ 	/*
+ 	 * default to pci clock. make sure MA15/16 are set to output
+ 	 * to prevent drives having problems with 40-pin cables. Needed
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 81ad4f867f02d..64ff137408b8c 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -592,6 +592,9 @@ re_probe:
+ 			drv->remove(dev);
+ 
+ 		devres_release_all(dev);
++		arch_teardown_dma_ops(dev);
++		kfree(dev->dma_range_map);
++		dev->dma_range_map = NULL;
+ 		driver_sysfs_remove(dev);
+ 		dev->driver = NULL;
+ 		dev_set_drvdata(dev, NULL);
+@@ -1168,6 +1171,8 @@ static void __device_release_driver(struct device *dev, struct device *parent)
+ 
+ 		devres_release_all(dev);
+ 		arch_teardown_dma_ops(dev);
++		kfree(dev->dma_range_map);
++		dev->dma_range_map = NULL;
+ 		dev->driver = NULL;
+ 		dev_set_drvdata(dev, NULL);
+ 		if (dev->pm_domain && dev->pm_domain->dismiss)
+diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c
+index ad5c2de395d1f..87c5c421e0f46 100644
+--- a/drivers/base/regmap/regmap-irq.c
++++ b/drivers/base/regmap/regmap-irq.c
+@@ -170,11 +170,9 @@ static void regmap_irq_sync_unlock(struct irq_data *data)
+ 				ret = regmap_write(map, reg, d->mask_buf[i]);
+ 			if (d->chip->clear_ack) {
+ 				if (d->chip->ack_invert && !ret)
+-					ret = regmap_write(map, reg,
+-							   d->mask_buf[i]);
++					ret = regmap_write(map, reg, UINT_MAX);
+ 				else if (!ret)
+-					ret = regmap_write(map, reg,
+-							   ~d->mask_buf[i]);
++					ret = regmap_write(map, reg, 0);
+ 			}
+ 			if (ret != 0)
+ 				dev_err(d->map->dev, "Failed to ack 0x%x: %d\n",
+@@ -509,11 +507,9 @@ static irqreturn_t regmap_irq_thread(int irq, void *d)
+ 						data->status_buf[i]);
+ 			if (chip->clear_ack) {
+ 				if (chip->ack_invert && !ret)
+-					ret = regmap_write(map, reg,
+-							data->status_buf[i]);
++					ret = regmap_write(map, reg, UINT_MAX);
+ 				else if (!ret)
+-					ret = regmap_write(map, reg,
+-							~data->status_buf[i]);
++					ret = regmap_write(map, reg, 0);
+ 			}
+ 			if (ret != 0)
+ 				dev_err(map->dev, "Failed to ack 0x%x: %d\n",
+@@ -745,13 +741,9 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ 					d->status_buf[i] & d->mask_buf[i]);
+ 			if (chip->clear_ack) {
+ 				if (chip->ack_invert && !ret)
+-					ret = regmap_write(map, reg,
+-						(d->status_buf[i] &
+-						 d->mask_buf[i]));
++					ret = regmap_write(map, reg, UINT_MAX);
+ 				else if (!ret)
+-					ret = regmap_write(map, reg,
+-						~(d->status_buf[i] &
+-						  d->mask_buf[i]));
++					ret = regmap_write(map, reg, 0);
+ 			}
+ 			if (ret != 0) {
+ 				dev_err(map->dev, "Failed to ack 0x%x: %d\n",
+diff --git a/drivers/clk/ingenic/jz4725b-cgu.c b/drivers/clk/ingenic/jz4725b-cgu.c
+index 8c38e72d14a79..786e361a4a6a4 100644
+--- a/drivers/clk/ingenic/jz4725b-cgu.c
++++ b/drivers/clk/ingenic/jz4725b-cgu.c
+@@ -139,11 +139,10 @@ static const struct ingenic_cgu_clk_info jz4725b_cgu_clocks[] = {
+ 	},
+ 
+ 	[JZ4725B_CLK_I2S] = {
+-		"i2s", CGU_CLK_MUX | CGU_CLK_DIV | CGU_CLK_GATE,
++		"i2s", CGU_CLK_MUX | CGU_CLK_DIV,
+ 		.parents = { JZ4725B_CLK_EXT, JZ4725B_CLK_PLL_HALF, -1, -1 },
+ 		.mux = { CGU_REG_CPCCR, 31, 1 },
+ 		.div = { CGU_REG_I2SCDR, 0, 1, 9, -1, -1, -1 },
+-		.gate = { CGU_REG_CLKGR, 6 },
+ 	},
+ 
+ 	[JZ4725B_CLK_SPI] = {
+diff --git a/drivers/gpio/gpio-tegra186.c b/drivers/gpio/gpio-tegra186.c
+index 9500074b1f1b5..7fbe5f0681b95 100644
+--- a/drivers/gpio/gpio-tegra186.c
++++ b/drivers/gpio/gpio-tegra186.c
+@@ -337,9 +337,12 @@ static int tegra186_gpio_of_xlate(struct gpio_chip *chip,
+ 	return offset + pin;
+ }
+ 
++#define to_tegra_gpio(x) container_of((x), struct tegra_gpio, gpio)
++
+ static void tegra186_irq_ack(struct irq_data *data)
+ {
+-	struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
++	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
++	struct tegra_gpio *gpio = to_tegra_gpio(gc);
+ 	void __iomem *base;
+ 
+ 	base = tegra186_gpio_get_base(gpio, data->hwirq);
+@@ -351,7 +354,8 @@ static void tegra186_irq_ack(struct irq_data *data)
+ 
+ static void tegra186_irq_mask(struct irq_data *data)
+ {
+-	struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
++	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
++	struct tegra_gpio *gpio = to_tegra_gpio(gc);
+ 	void __iomem *base;
+ 	u32 value;
+ 
+@@ -366,7 +370,8 @@ static void tegra186_irq_mask(struct irq_data *data)
+ 
+ static void tegra186_irq_unmask(struct irq_data *data)
+ {
+-	struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
++	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
++	struct tegra_gpio *gpio = to_tegra_gpio(gc);
+ 	void __iomem *base;
+ 	u32 value;
+ 
+@@ -381,7 +386,8 @@ static void tegra186_irq_unmask(struct irq_data *data)
+ 
+ static int tegra186_irq_set_type(struct irq_data *data, unsigned int type)
+ {
+-	struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
++	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
++	struct tegra_gpio *gpio = to_tegra_gpio(gc);
+ 	void __iomem *base;
+ 	u32 value;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index 37226cbbbd11a..7212b9900e0ab 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -1194,8 +1194,11 @@ static int soc15_common_early_init(void *handle)
+ 				AMD_CG_SUPPORT_SDMA_MGCG |
+ 				AMD_CG_SUPPORT_SDMA_LS;
+ 
++			/*
++			 * MMHUB PG needs to be disabled for Picasso for
++			 * stability reasons.
++			 */
+ 			adev->pg_flags = AMD_PG_SUPPORT_SDMA |
+-				AMD_PG_SUPPORT_MMHUB |
+ 				AMD_PG_SUPPORT_VCN;
+ 		} else {
+ 			adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index add317bd8d55c..3d7593ea79f14 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -5132,6 +5132,7 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
+ 	if (!(edid->input & DRM_EDID_INPUT_DIGITAL))
+ 		return quirks;
+ 
++	info->color_formats |= DRM_COLOR_FORMAT_RGB444;
+ 	drm_parse_cea_ext(connector, edid);
+ 
+ 	/*
+@@ -5180,7 +5181,6 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
+ 	DRM_DEBUG("%s: Assigning EDID-1.4 digital sink color depth as %d bpc.\n",
+ 			  connector->name, info->bpc);
+ 
+-	info->color_formats |= DRM_COLOR_FORMAT_RGB444;
+ 	if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB444)
+ 		info->color_formats |= DRM_COLOR_FORMAT_YCRCB444;
+ 	if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB422)
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index e51ca7ca0a2a7..472aaea75ef84 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -3996,6 +3996,17 @@ static int intel_compute_sagv_mask(struct intel_atomic_state *state)
+ 			return ret;
+ 	}
+ 
++	if (intel_can_enable_sagv(dev_priv, new_bw_state) !=
++	    intel_can_enable_sagv(dev_priv, old_bw_state)) {
++		ret = intel_atomic_serialize_global_state(&new_bw_state->base);
++		if (ret)
++			return ret;
++	} else if (new_bw_state->pipe_sagv_reject != old_bw_state->pipe_sagv_reject) {
++		ret = intel_atomic_lock_global_state(&new_bw_state->base);
++		if (ret)
++			return ret;
++	}
++
+ 	for_each_new_intel_crtc_in_state(state, crtc,
+ 					 new_crtc_state, i) {
+ 		struct skl_pipe_wm *pipe_wm = &new_crtc_state->wm.skl.optimal;
+@@ -4010,17 +4021,6 @@ static int intel_compute_sagv_mask(struct intel_atomic_state *state)
+ 				       intel_can_enable_sagv(dev_priv, new_bw_state);
+ 	}
+ 
+-	if (intel_can_enable_sagv(dev_priv, new_bw_state) !=
+-	    intel_can_enable_sagv(dev_priv, old_bw_state)) {
+-		ret = intel_atomic_serialize_global_state(&new_bw_state->base);
+-		if (ret)
+-			return ret;
+-	} else if (new_bw_state->pipe_sagv_reject != old_bw_state->pipe_sagv_reject) {
+-		ret = intel_atomic_lock_global_state(&new_bw_state->base);
+-		if (ret)
+-			return ret;
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwmon/hwmon.c b/drivers/hwmon/hwmon.c
+index e5a83f7492677..d649fea829994 100644
+--- a/drivers/hwmon/hwmon.c
++++ b/drivers/hwmon/hwmon.c
+@@ -178,12 +178,14 @@ static int hwmon_thermal_add_sensor(struct device *dev, int index)
+ 
+ 	tzd = devm_thermal_zone_of_sensor_register(dev, index, tdata,
+ 						   &hwmon_thermal_ops);
+-	/*
+-	 * If CONFIG_THERMAL_OF is disabled, this returns -ENODEV,
+-	 * so ignore that error but forward any other error.
+-	 */
+-	if (IS_ERR(tzd) && (PTR_ERR(tzd) != -ENODEV))
+-		return PTR_ERR(tzd);
++	if (IS_ERR(tzd)) {
++		if (PTR_ERR(tzd) != -ENODEV)
++			return PTR_ERR(tzd);
++		dev_info(dev, "temp%d_input not attached to any thermal zone\n",
++			 index + 1);
++		devm_kfree(dev, tdata);
++		return 0;
++	}
+ 
+ 	err = devm_add_action(dev, hwmon_thermal_remove_sensor, &tdata->node);
+ 	if (err)
+diff --git a/drivers/iio/accel/bmc150-accel-core.c b/drivers/iio/accel/bmc150-accel-core.c
+index 48435865fdaf3..792526462f1c9 100644
+--- a/drivers/iio/accel/bmc150-accel-core.c
++++ b/drivers/iio/accel/bmc150-accel-core.c
+@@ -1648,11 +1648,14 @@ int bmc150_accel_core_probe(struct device *dev, struct regmap *regmap, int irq,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Unable to register iio device\n");
+-		goto err_trigger_unregister;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	return 0;
+ 
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(dev);
++	pm_runtime_disable(dev);
+ err_trigger_unregister:
+ 	bmc150_accel_unregister_triggers(data, BMC150_ACCEL_TRIGGERS - 1);
+ err_buffer_cleanup:
+diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c
+index 2eaf85b6e39f4..89e0a89d95d6b 100644
+--- a/drivers/iio/accel/kxcjk-1013.c
++++ b/drivers/iio/accel/kxcjk-1013.c
+@@ -1429,11 +1429,14 @@ static int kxcjk1013_probe(struct i2c_client *client,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(&client->dev, "unable to register iio device\n");
+-		goto err_buffer_cleanup;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	return 0;
+ 
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(&client->dev);
++	pm_runtime_disable(&client->dev);
+ err_buffer_cleanup:
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ err_trigger_unregister:
+diff --git a/drivers/iio/accel/mma9551.c b/drivers/iio/accel/mma9551.c
+index 08a2303cc9df3..26421e8e82639 100644
+--- a/drivers/iio/accel/mma9551.c
++++ b/drivers/iio/accel/mma9551.c
+@@ -495,11 +495,14 @@ static int mma9551_probe(struct i2c_client *client,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(&client->dev, "unable to register iio device\n");
+-		goto out_poweroff;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	return 0;
+ 
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(&client->dev);
++	pm_runtime_disable(&client->dev);
+ out_poweroff:
+ 	mma9551_set_device_state(client, false);
+ 
+diff --git a/drivers/iio/accel/mma9553.c b/drivers/iio/accel/mma9553.c
+index c15908faa3816..a23a7685d1f93 100644
+--- a/drivers/iio/accel/mma9553.c
++++ b/drivers/iio/accel/mma9553.c
+@@ -1134,12 +1134,15 @@ static int mma9553_probe(struct i2c_client *client,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(&client->dev, "unable to register iio device\n");
+-		goto out_poweroff;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	dev_dbg(&indio_dev->dev, "Registered device %s\n", name);
+ 	return 0;
+ 
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(&client->dev);
++	pm_runtime_disable(&client->dev);
+ out_poweroff:
+ 	mma9551_set_device_state(client, false);
+ 	return ret;
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index 9c2401c5848ec..bd35009950376 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -74,7 +74,7 @@
+ #define AD7124_CONFIG_REF_SEL(x)	FIELD_PREP(AD7124_CONFIG_REF_SEL_MSK, x)
+ #define AD7124_CONFIG_PGA_MSK		GENMASK(2, 0)
+ #define AD7124_CONFIG_PGA(x)		FIELD_PREP(AD7124_CONFIG_PGA_MSK, x)
+-#define AD7124_CONFIG_IN_BUFF_MSK	GENMASK(7, 6)
++#define AD7124_CONFIG_IN_BUFF_MSK	GENMASK(6, 5)
+ #define AD7124_CONFIG_IN_BUFF(x)	FIELD_PREP(AD7124_CONFIG_IN_BUFF_MSK, x)
+ 
+ /* AD7124_FILTER_X */
+diff --git a/drivers/iio/adc/men_z188_adc.c b/drivers/iio/adc/men_z188_adc.c
+index 42ea8bc7e7805..adc5ceaef8c93 100644
+--- a/drivers/iio/adc/men_z188_adc.c
++++ b/drivers/iio/adc/men_z188_adc.c
+@@ -103,6 +103,7 @@ static int men_z188_probe(struct mcb_device *dev,
+ 	struct z188_adc *adc;
+ 	struct iio_dev *indio_dev;
+ 	struct resource *mem;
++	int ret;
+ 
+ 	indio_dev = devm_iio_device_alloc(&dev->dev, sizeof(struct z188_adc));
+ 	if (!indio_dev)
+@@ -128,8 +129,14 @@ static int men_z188_probe(struct mcb_device *dev,
+ 	adc->mem = mem;
+ 	mcb_set_drvdata(dev, indio_dev);
+ 
+-	return iio_device_register(indio_dev);
++	ret = iio_device_register(indio_dev);
++	if (ret)
++		goto err_unmap;
++
++	return 0;
+ 
++err_unmap:
++	iounmap(adc->base);
+ err:
+ 	mcb_release_mem(mem);
+ 	return -ENXIO;
+diff --git a/drivers/iio/gyro/bmg160_core.c b/drivers/iio/gyro/bmg160_core.c
+index 39fe0b1785920..b6b90eebec0b9 100644
+--- a/drivers/iio/gyro/bmg160_core.c
++++ b/drivers/iio/gyro/bmg160_core.c
+@@ -1170,11 +1170,14 @@ int bmg160_core_probe(struct device *dev, struct regmap *regmap, int irq,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "unable to register iio device\n");
+-		goto err_buffer_cleanup;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	return 0;
+ 
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(dev);
++	pm_runtime_disable(dev);
+ err_buffer_cleanup:
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ err_trigger_unregister:
+diff --git a/drivers/iio/imu/kmx61.c b/drivers/iio/imu/kmx61.c
+index 61885e99d3fc1..89133315e6aaf 100644
+--- a/drivers/iio/imu/kmx61.c
++++ b/drivers/iio/imu/kmx61.c
+@@ -1392,7 +1392,7 @@ static int kmx61_probe(struct i2c_client *client,
+ 	ret = iio_device_register(data->acc_indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(&client->dev, "Failed to register acc iio device\n");
+-		goto err_buffer_cleanup_mag;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	ret = iio_device_register(data->mag_indio_dev);
+@@ -1405,6 +1405,9 @@ static int kmx61_probe(struct i2c_client *client,
+ 
+ err_iio_unregister_acc:
+ 	iio_device_unregister(data->acc_indio_dev);
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(&client->dev);
++	pm_runtime_disable(&client->dev);
+ err_buffer_cleanup_mag:
+ 	if (client->irq > 0)
+ 		iio_triggered_buffer_cleanup(data->mag_indio_dev);
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 558ca3843bb95..2c528425b03b4 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -1558,8 +1558,12 @@ static int st_lsm6dsx_read_oneshot(struct st_lsm6dsx_sensor *sensor,
+ 	if (err < 0)
+ 		return err;
+ 
++	/*
++	 * we need to wait for sensor settling time before
++	 * reading data in order to avoid corrupted samples
++	 */
+ 	delay = 1000000000 / sensor->odr;
+-	usleep_range(delay, 2 * delay);
++	usleep_range(3 * delay, 4 * delay);
+ 
+ 	err = st_lsm6dsx_read_locked(hw, addr, &data, sizeof(data));
+ 	if (err < 0)
+diff --git a/drivers/iio/magnetometer/bmc150_magn.c b/drivers/iio/magnetometer/bmc150_magn.c
+index 8eacfaf584cfd..620537d0104d4 100644
+--- a/drivers/iio/magnetometer/bmc150_magn.c
++++ b/drivers/iio/magnetometer/bmc150_magn.c
+@@ -941,13 +941,14 @@ int bmc150_magn_probe(struct device *dev, struct regmap *regmap,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "unable to register iio device\n");
+-		goto err_disable_runtime_pm;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	dev_dbg(dev, "Registered device %s\n", name);
+ 	return 0;
+ 
+-err_disable_runtime_pm:
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(dev);
+ 	pm_runtime_disable(dev);
+ err_buffer_cleanup:
+ 	iio_triggered_buffer_cleanup(indio_dev);
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index ce492134c1e5c..fbb0efbe25f84 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -3321,22 +3321,30 @@ err:
+ static int cma_bind_addr(struct rdma_cm_id *id, struct sockaddr *src_addr,
+ 			 const struct sockaddr *dst_addr)
+ {
+-	if (!src_addr || !src_addr->sa_family) {
+-		src_addr = (struct sockaddr *) &id->route.addr.src_addr;
+-		src_addr->sa_family = dst_addr->sa_family;
+-		if (IS_ENABLED(CONFIG_IPV6) &&
+-		    dst_addr->sa_family == AF_INET6) {
+-			struct sockaddr_in6 *src_addr6 = (struct sockaddr_in6 *) src_addr;
+-			struct sockaddr_in6 *dst_addr6 = (struct sockaddr_in6 *) dst_addr;
+-			src_addr6->sin6_scope_id = dst_addr6->sin6_scope_id;
+-			if (ipv6_addr_type(&dst_addr6->sin6_addr) & IPV6_ADDR_LINKLOCAL)
+-				id->route.addr.dev_addr.bound_dev_if = dst_addr6->sin6_scope_id;
+-		} else if (dst_addr->sa_family == AF_IB) {
+-			((struct sockaddr_ib *) src_addr)->sib_pkey =
+-				((struct sockaddr_ib *) dst_addr)->sib_pkey;
+-		}
+-	}
+-	return rdma_bind_addr(id, src_addr);
++	struct sockaddr_storage zero_sock = {};
++
++	if (src_addr && src_addr->sa_family)
++		return rdma_bind_addr(id, src_addr);
++
++	/*
++	 * When the src_addr is not specified, automatically supply an any addr
++	 */
++	zero_sock.ss_family = dst_addr->sa_family;
++	if (IS_ENABLED(CONFIG_IPV6) && dst_addr->sa_family == AF_INET6) {
++		struct sockaddr_in6 *src_addr6 =
++			(struct sockaddr_in6 *)&zero_sock;
++		struct sockaddr_in6 *dst_addr6 =
++			(struct sockaddr_in6 *)dst_addr;
++
++		src_addr6->sin6_scope_id = dst_addr6->sin6_scope_id;
++		if (ipv6_addr_type(&dst_addr6->sin6_addr) & IPV6_ADDR_LINKLOCAL)
++			id->route.addr.dev_addr.bound_dev_if =
++				dst_addr6->sin6_scope_id;
++	} else if (dst_addr->sa_family == AF_IB) {
++		((struct sockaddr_ib *)&zero_sock)->sib_pkey =
++			((struct sockaddr_ib *)dst_addr)->sib_pkey;
++	}
++	return rdma_bind_addr(id, (struct sockaddr *)&zero_sock);
+ }
+ 
+ /*
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 46fad202a380e..13634eda833de 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -1328,6 +1328,12 @@ out_err:
+ 
+ static void free_permits(struct rtrs_clt *clt)
+ {
++	if (clt->permits_map) {
++		size_t sz = clt->queue_depth;
++
++		wait_event(clt->permits_wait,
++			   find_first_bit(clt->permits_map, sz) >= sz);
++	}
+ 	kfree(clt->permits_map);
+ 	clt->permits_map = NULL;
+ 	kfree(clt->permits);
+@@ -2540,6 +2546,8 @@ static void rtrs_clt_dev_release(struct device *dev)
+ {
+ 	struct rtrs_clt *clt = container_of(dev, struct rtrs_clt, dev);
+ 
++	mutex_destroy(&clt->paths_ev_mutex);
++	mutex_destroy(&clt->paths_mutex);
+ 	kfree(clt);
+ }
+ 
+@@ -2571,6 +2579,8 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
++	clt->dev.class = rtrs_clt_dev_class;
++	clt->dev.release = rtrs_clt_dev_release;
+ 	uuid_gen(&clt->paths_uuid);
+ 	INIT_LIST_HEAD_RCU(&clt->paths_list);
+ 	clt->paths_num = paths_num;
+@@ -2588,64 +2598,51 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
+ 	init_waitqueue_head(&clt->permits_wait);
+ 	mutex_init(&clt->paths_ev_mutex);
+ 	mutex_init(&clt->paths_mutex);
++	device_initialize(&clt->dev);
+ 
+-	clt->dev.class = rtrs_clt_dev_class;
+-	clt->dev.release = rtrs_clt_dev_release;
+ 	err = dev_set_name(&clt->dev, "%s", sessname);
+ 	if (err)
+-		goto err;
++		goto err_put;
++
+ 	/*
+ 	 * Suppress user space notification until
+ 	 * sysfs files are created
+ 	 */
+ 	dev_set_uevent_suppress(&clt->dev, true);
+-	err = device_register(&clt->dev);
+-	if (err) {
+-		put_device(&clt->dev);
+-		goto err;
+-	}
++	err = device_add(&clt->dev);
++	if (err)
++		goto err_put;
+ 
+ 	clt->kobj_paths = kobject_create_and_add("paths", &clt->dev.kobj);
+ 	if (!clt->kobj_paths) {
+ 		err = -ENOMEM;
+-		goto err_dev;
++		goto err_del;
+ 	}
+ 	err = rtrs_clt_create_sysfs_root_files(clt);
+ 	if (err) {
+ 		kobject_del(clt->kobj_paths);
+ 		kobject_put(clt->kobj_paths);
+-		goto err_dev;
++		goto err_del;
+ 	}
+ 	dev_set_uevent_suppress(&clt->dev, false);
+ 	kobject_uevent(&clt->dev.kobj, KOBJ_ADD);
+ 
+ 	return clt;
+-err_dev:
+-	device_unregister(&clt->dev);
+-err:
++err_del:
++	device_del(&clt->dev);
++err_put:
+ 	free_percpu(clt->pcpu_path);
+-	kfree(clt);
++	put_device(&clt->dev);
+ 	return ERR_PTR(err);
+ }
+ 
+-static void wait_for_inflight_permits(struct rtrs_clt *clt)
+-{
+-	if (clt->permits_map) {
+-		size_t sz = clt->queue_depth;
+-
+-		wait_event(clt->permits_wait,
+-			   find_first_bit(clt->permits_map, sz) >= sz);
+-	}
+-}
+-
+ static void free_clt(struct rtrs_clt *clt)
+ {
+-	wait_for_inflight_permits(clt);
+-	free_permits(clt);
+ 	free_percpu(clt->pcpu_path);
+-	mutex_destroy(&clt->paths_ev_mutex);
+-	mutex_destroy(&clt->paths_mutex);
+-	/* release callback will free clt in last put */
++
++	/*
++	 * release callback will free clt and destroy mutexes in last put
++	 */
+ 	device_unregister(&clt->dev);
+ }
+ 
+@@ -2761,6 +2758,7 @@ void rtrs_clt_close(struct rtrs_clt *clt)
+ 		rtrs_clt_destroy_sess_files(sess, NULL);
+ 		kobject_put(&sess->kobj);
+ 	}
++	free_permits(clt);
+ 	free_clt(clt);
+ }
+ EXPORT_SYMBOL(rtrs_clt_close);
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 86d5c4c92b363..b4ccb333a8342 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -4045,9 +4045,11 @@ static void srp_remove_one(struct ib_device *device, void *client_data)
+ 		spin_unlock(&host->target_lock);
+ 
+ 		/*
+-		 * Wait for tl_err and target port removal tasks.
++		 * srp_queue_remove_work() queues a call to
++		 * srp_remove_target(). The latter function cancels
++		 * target->tl_err_work so waiting for the remove works to
++		 * finish is sufficient.
+ 		 */
+-		flush_workqueue(system_long_wq);
+ 		flush_workqueue(srp_remove_wq);
+ 
+ 		kfree(host);
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+index 9a86367a26369..7fa271db41b07 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+@@ -100,6 +100,9 @@ MODULE_LICENSE("GPL");
+ MODULE_FIRMWARE(FW_FILE_NAME_E1);
+ MODULE_FIRMWARE(FW_FILE_NAME_E1H);
+ MODULE_FIRMWARE(FW_FILE_NAME_E2);
++MODULE_FIRMWARE(FW_FILE_NAME_E1_V15);
++MODULE_FIRMWARE(FW_FILE_NAME_E1H_V15);
++MODULE_FIRMWARE(FW_FILE_NAME_E2_V15);
+ 
+ int bnx2x_num_queues;
+ module_param_named(num_queues, bnx2x_num_queues, int, 0444);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 6f9196ff2ac4f..98087b278d1f4 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -1926,6 +1926,9 @@ static int bnxt_get_fecparam(struct net_device *dev,
+ 	case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_RS272_IEEE_ACTIVE:
+ 		fec->active_fec |= ETHTOOL_FEC_LLRS;
+ 		break;
++	case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_NONE_ACTIVE:
++		fec->active_fec |= ETHTOOL_FEC_OFF;
++		break;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index bc7c1962f9e66..6a1b1363ac16a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1746,7 +1746,7 @@ static int mlx5e_get_module_eeprom(struct net_device *netdev,
+ 		if (size_read < 0) {
+ 			netdev_err(priv->netdev, "%s: mlx5_query_eeprom failed:0x%x\n",
+ 				   __func__, size_read);
+-			return 0;
++			return size_read;
+ 		}
+ 
+ 		i += size_read;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index d384403d73f69..b8637547800f9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -985,7 +985,8 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
+ 	}
+ 
+ 	/* True when explicitly set via priv flag, or XDP prog is loaded */
+-	if (test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state))
++	if (test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state) ||
++	    get_cqe_tls_offload(cqe))
+ 		goto csum_unnecessary;
+ 
+ 	/* CQE csum doesn't cover padding octets in short ethernet
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index e06b1ba7d2349..ccc7dd3e738a4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -2037,10 +2037,6 @@ esw_check_vport_match_metadata_supported(const struct mlx5_eswitch *esw)
+ 	if (!MLX5_CAP_ESW_FLOWTABLE(esw->dev, flow_source))
+ 		return false;
+ 
+-	if (mlx5_core_is_ecpf_esw_manager(esw->dev) ||
+-	    mlx5_ecpf_vport_exists(esw->dev))
+-		return false;
+-
+ 	return true;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 0ff034b0866e2..55772f0cbbf8f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -2034,6 +2034,8 @@ void mlx5_del_flow_rules(struct mlx5_flow_handle *handle)
+ 		fte->node.del_hw_func = NULL;
+ 		up_write_ref_node(&fte->node, false);
+ 		tree_put_node(&fte->node, false);
++	} else {
++		up_write_ref_node(&fte->node, false);
+ 	}
+ 	kfree(handle);
+ }
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+index d3d5b663a4a3c..088ceac07b805 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+@@ -922,8 +922,8 @@ nfp_tunnel_add_shared_mac(struct nfp_app *app, struct net_device *netdev,
+ 			  int port, bool mod)
+ {
+ 	struct nfp_flower_priv *priv = app->priv;
+-	int ida_idx = NFP_MAX_MAC_INDEX, err;
+ 	struct nfp_tun_offloaded_mac *entry;
++	int ida_idx = -1, err;
+ 	u16 nfp_mac_idx = 0;
+ 
+ 	entry = nfp_tunnel_lookup_offloaded_macs(app, netdev->dev_addr);
+@@ -997,7 +997,7 @@ err_remove_hash:
+ err_free_entry:
+ 	kfree(entry);
+ err_free_ida:
+-	if (ida_idx != NFP_MAX_MAC_INDEX)
++	if (ida_idx != -1)
+ 		ida_simple_remove(&priv->tun.mac_off_ids, ida_idx);
+ 
+ 	return err;
+diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
+index 650ffb93796f1..130f4b707bdc4 100644
+--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
++++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
+@@ -1421,6 +1421,8 @@ static int temac_probe(struct platform_device *pdev)
+ 		lp->indirect_lock = devm_kmalloc(&pdev->dev,
+ 						 sizeof(*lp->indirect_lock),
+ 						 GFP_KERNEL);
++		if (!lp->indirect_lock)
++			return -ENOMEM;
+ 		spin_lock_init(lp->indirect_lock);
+ 	}
+ 
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index 6aaa0675c28a3..43ddbe61dc58e 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -570,6 +570,11 @@ static const struct usb_device_id	products[] = {
+ 	.bInterfaceSubClass	= USB_CDC_SUBCLASS_ETHERNET, \
+ 	.bInterfaceProtocol	= USB_CDC_PROTO_NONE
+ 
++#define ZAURUS_FAKE_INTERFACE \
++	.bInterfaceClass	= USB_CLASS_COMM, \
++	.bInterfaceSubClass	= USB_CDC_SUBCLASS_MDLM, \
++	.bInterfaceProtocol	= USB_CDC_PROTO_NONE
++
+ /* SA-1100 based Sharp Zaurus ("collie"), or compatible;
+  * wire-incompatible with true CDC Ethernet implementations.
+  * (And, it seems, needlessly so...)
+@@ -623,6 +628,13 @@ static const struct usb_device_id	products[] = {
+ 	.idProduct              = 0x9032,	/* SL-6000 */
+ 	ZAURUS_MASTER_INTERFACE,
+ 	.driver_info		= 0,
++}, {
++	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
++		 | USB_DEVICE_ID_MATCH_DEVICE,
++	.idVendor               = 0x04DD,
++	.idProduct              = 0x9032,	/* SL-6000 */
++	ZAURUS_FAKE_INTERFACE,
++	.driver_info		= 0,
+ }, {
+ 	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
+ 		 | USB_DEVICE_ID_MATCH_DEVICE,
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index eaaa5aee58251..ab91fa5b0194d 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -1702,10 +1702,10 @@ int cdc_ncm_rx_fixup(struct usbnet *dev, struct sk_buff *skb_in)
+ {
+ 	struct sk_buff *skb;
+ 	struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0];
+-	int len;
++	unsigned int len;
+ 	int nframes;
+ 	int x;
+-	int offset;
++	unsigned int offset;
+ 	union {
+ 		struct usb_cdc_ncm_ndp16 *ndp16;
+ 		struct usb_cdc_ncm_ndp32 *ndp32;
+@@ -1777,8 +1777,8 @@ next_ndp:
+ 			break;
+ 		}
+ 
+-		/* sanity checking */
+-		if (((offset + len) > skb_in->len) ||
++		/* sanity checking - watch out for integer wrap*/
++		if ((offset > skb_in->len) || (len > skb_in->len - offset) ||
+ 				(len > ctx->rx_max) || (len < ETH_HLEN)) {
+ 			netif_dbg(dev, rx_err, dev->net,
+ 				  "invalid frame detected (ignored) offset[%u]=%u, length=%u, skb=%p\n",
+diff --git a/drivers/net/usb/sr9700.c b/drivers/net/usb/sr9700.c
+index e04c8054c2cf3..fce6713e970ba 100644
+--- a/drivers/net/usb/sr9700.c
++++ b/drivers/net/usb/sr9700.c
+@@ -410,7 +410,7 @@ static int sr9700_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 		/* ignore the CRC length */
+ 		len = (skb->data[1] | (skb->data[2] << 8)) - 4;
+ 
+-		if (len > ETH_FRAME_LEN)
++		if (len > ETH_FRAME_LEN || len > skb->len)
+ 			return 0;
+ 
+ 		/* the last packet of current skb */
+diff --git a/drivers/net/usb/zaurus.c b/drivers/net/usb/zaurus.c
+index 8e717a0b559b3..7984f2157d222 100644
+--- a/drivers/net/usb/zaurus.c
++++ b/drivers/net/usb/zaurus.c
+@@ -256,6 +256,11 @@ static const struct usb_device_id	products [] = {
+ 	.bInterfaceSubClass	= USB_CDC_SUBCLASS_ETHERNET, \
+ 	.bInterfaceProtocol	= USB_CDC_PROTO_NONE
+ 
++#define ZAURUS_FAKE_INTERFACE \
++	.bInterfaceClass	= USB_CLASS_COMM, \
++	.bInterfaceSubClass	= USB_CDC_SUBCLASS_MDLM, \
++	.bInterfaceProtocol	= USB_CDC_PROTO_NONE
++
+ /* SA-1100 based Sharp Zaurus ("collie"), or compatible. */
+ {
+ 	.match_flags	=   USB_DEVICE_ID_MATCH_INT_INFO
+@@ -313,6 +318,13 @@ static const struct usb_device_id	products [] = {
+ 	.idProduct              = 0x9032,	/* SL-6000 */
+ 	ZAURUS_MASTER_INTERFACE,
+ 	.driver_info = ZAURUS_PXA_INFO,
++}, {
++	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
++			    | USB_DEVICE_ID_MATCH_DEVICE,
++	.idVendor		= 0x04DD,
++	.idProduct		= 0x9032,	/* SL-6000 */
++	ZAURUS_FAKE_INTERFACE,
++	.driver_info = (unsigned long)&bogus_mdlm_info,
+ }, {
+ 	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
+ 		 | USB_DEVICE_ID_MATCH_DEVICE,
+diff --git a/drivers/platform/x86/surface3_power.c b/drivers/platform/x86/surface3_power.c
+index cc4f9cba68563..01aacf1bee074 100644
+--- a/drivers/platform/x86/surface3_power.c
++++ b/drivers/platform/x86/surface3_power.c
+@@ -233,14 +233,21 @@ static int mshw0011_bix(struct mshw0011_data *cdata, struct bix *bix)
+ 	}
+ 	bix->last_full_charg_capacity = ret;
+ 
+-	/* get serial number */
++	/*
++	 * Get serial number, on some devices (with unofficial replacement
++	 * battery?) reading any of the serial number range addresses gets
++	 * nacked in this case just leave the serial number empty.
++	 */
+ 	ret = i2c_smbus_read_i2c_block_data(client, MSHW0011_BAT0_REG_SERIAL_NO,
+ 					    sizeof(buf), buf);
+-	if (ret != sizeof(buf)) {
++	if (ret == -EREMOTEIO) {
++		/* no serial number available */
++	} else if (ret != sizeof(buf)) {
+ 		dev_err(&client->dev, "Error reading serial no: %d\n", ret);
+ 		return ret;
++	} else {
++		snprintf(bix->serial, ARRAY_SIZE(bix->serial), "%3pE%6pE", buf + 7, buf);
+ 	}
+-	snprintf(bix->serial, ARRAY_SIZE(bix->serial), "%3pE%6pE", buf + 7, buf);
+ 
+ 	/* get cycle count */
+ 	ret = i2c_smbus_read_word_data(client, MSHW0011_BAT0_REG_CYCLE_CNT);
+diff --git a/drivers/spi/spi-zynq-qspi.c b/drivers/spi/spi-zynq-qspi.c
+index b635835729d66..13c0b15fe1764 100644
+--- a/drivers/spi/spi-zynq-qspi.c
++++ b/drivers/spi/spi-zynq-qspi.c
+@@ -570,6 +570,9 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 
+ 	if (op->dummy.nbytes) {
+ 		tmpbuf = kzalloc(op->dummy.nbytes, GFP_KERNEL);
++		if (!tmpbuf)
++			return -ENOMEM;
++
+ 		memset(tmpbuf, 0xff, op->dummy.nbytes);
+ 		reinit_completion(&xqspi->data_completion);
+ 		xqspi->txbuf = tmpbuf;
+diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c
+index f255a96ae5a48..6ea80add7378f 100644
+--- a/drivers/tee/optee/core.c
++++ b/drivers/tee/optee/core.c
+@@ -588,6 +588,7 @@ static int optee_remove(struct platform_device *pdev)
+ 	/* Unregister OP-TEE specific client devices on TEE bus */
+ 	optee_unregister_devices();
+ 
++	teedev_close_context(optee->ctx);
+ 	/*
+ 	 * Ask OP-TEE to free all cached shared memory objects to decrease
+ 	 * reference counters and also avoid wild pointers in secure world
+@@ -633,6 +634,7 @@ static int optee_probe(struct platform_device *pdev)
+ 	struct optee *optee = NULL;
+ 	void *memremaped_shm = NULL;
+ 	struct tee_device *teedev;
++	struct tee_context *ctx;
+ 	u32 sec_caps;
+ 	int rc;
+ 
+@@ -719,6 +721,12 @@ static int optee_probe(struct platform_device *pdev)
+ 	optee_supp_init(&optee->supp);
+ 	optee->memremaped_shm = memremaped_shm;
+ 	optee->pool = pool;
++	ctx = teedev_open(optee->teedev);
++	if (IS_ERR(ctx)) {
++		rc = PTR_ERR(ctx);
++		goto err;
++	}
++	optee->ctx = ctx;
+ 
+ 	/*
+ 	 * Ensure that there are no pre-existing shm objects before enabling
+diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h
+index f6bb4a763ba94..ea09533e30cde 100644
+--- a/drivers/tee/optee/optee_private.h
++++ b/drivers/tee/optee/optee_private.h
+@@ -70,6 +70,7 @@ struct optee_supp {
+  * struct optee - main service struct
+  * @supp_teedev:	supplicant device
+  * @teedev:		client device
++ * @ctx:		driver internal TEE context
+  * @invoke_fn:		function to issue smc or hvc
+  * @call_queue:		queue of threads waiting to call @invoke_fn
+  * @wait_queue:		queue of threads from secure world waiting for a
+@@ -87,6 +88,7 @@ struct optee {
+ 	struct tee_device *supp_teedev;
+ 	struct tee_device *teedev;
+ 	optee_invoke_fn *invoke_fn;
++	struct tee_context *ctx;
+ 	struct optee_call_queue call_queue;
+ 	struct optee_wait_queue wait_queue;
+ 	struct optee_supp supp;
+diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c
+index 9dbdd783d6f2d..f1e0332b0f6e8 100644
+--- a/drivers/tee/optee/rpc.c
++++ b/drivers/tee/optee/rpc.c
+@@ -284,6 +284,7 @@ static struct tee_shm *cmd_alloc_suppl(struct tee_context *ctx, size_t sz)
+ }
+ 
+ static void handle_rpc_func_cmd_shm_alloc(struct tee_context *ctx,
++					  struct optee *optee,
+ 					  struct optee_msg_arg *arg,
+ 					  struct optee_call_ctx *call_ctx)
+ {
+@@ -313,7 +314,8 @@ static void handle_rpc_func_cmd_shm_alloc(struct tee_context *ctx,
+ 		shm = cmd_alloc_suppl(ctx, sz);
+ 		break;
+ 	case OPTEE_MSG_RPC_SHM_TYPE_KERNEL:
+-		shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED | TEE_SHM_PRIV);
++		shm = tee_shm_alloc(optee->ctx, sz,
++				    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 		break;
+ 	default:
+ 		arg->ret = TEEC_ERROR_BAD_PARAMETERS;
+@@ -470,7 +472,7 @@ static void handle_rpc_func_cmd(struct tee_context *ctx, struct optee *optee,
+ 		break;
+ 	case OPTEE_MSG_RPC_CMD_SHM_ALLOC:
+ 		free_pages_list(call_ctx);
+-		handle_rpc_func_cmd_shm_alloc(ctx, arg, call_ctx);
++		handle_rpc_func_cmd_shm_alloc(ctx, optee, arg, call_ctx);
+ 		break;
+ 	case OPTEE_MSG_RPC_CMD_SHM_FREE:
+ 		handle_rpc_func_cmd_shm_free(ctx, arg);
+@@ -501,7 +503,7 @@ void optee_handle_rpc(struct tee_context *ctx, struct optee_rpc_param *param,
+ 
+ 	switch (OPTEE_SMC_RETURN_GET_RPC_FUNC(param->a0)) {
+ 	case OPTEE_SMC_RPC_FUNC_ALLOC:
+-		shm = tee_shm_alloc(ctx, param->a1,
++		shm = tee_shm_alloc(optee->ctx, param->a1,
+ 				    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 		if (!IS_ERR(shm) && !tee_shm_get_pa(shm, 0, &pa)) {
+ 			reg_pair_from_64(&param->a1, &param->a2, pa);
+diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
+index dfc239c64ce3c..e07f997cf8dd3 100644
+--- a/drivers/tee/tee_core.c
++++ b/drivers/tee/tee_core.c
+@@ -43,7 +43,7 @@ static DEFINE_SPINLOCK(driver_lock);
+ static struct class *tee_class;
+ static dev_t tee_devt;
+ 
+-static struct tee_context *teedev_open(struct tee_device *teedev)
++struct tee_context *teedev_open(struct tee_device *teedev)
+ {
+ 	int rc;
+ 	struct tee_context *ctx;
+@@ -70,6 +70,7 @@ err:
+ 	return ERR_PTR(rc);
+ 
+ }
++EXPORT_SYMBOL_GPL(teedev_open);
+ 
+ void teedev_ctx_get(struct tee_context *ctx)
+ {
+@@ -96,13 +97,14 @@ void teedev_ctx_put(struct tee_context *ctx)
+ 	kref_put(&ctx->refcount, teedev_ctx_release);
+ }
+ 
+-static void teedev_close_context(struct tee_context *ctx)
++void teedev_close_context(struct tee_context *ctx)
+ {
+ 	struct tee_device *teedev = ctx->teedev;
+ 
+ 	teedev_ctx_put(ctx);
+ 	tee_device_put(teedev);
+ }
++EXPORT_SYMBOL_GPL(teedev_close_context);
+ 
+ static int tee_open(struct inode *inode, struct file *filp)
+ {
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index 0966551cbaaa0..793d7b58fc650 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -402,6 +402,10 @@ static void int3400_notify(acpi_handle handle,
+ 	thermal_prop[3] = kasprintf(GFP_KERNEL, "EVENT=%d", therm_event);
+ 	thermal_prop[4] = NULL;
+ 	kobject_uevent_env(&priv->thermal->device.kobj, KOBJ_CHANGE, thermal_prop);
++	kfree(thermal_prop[0]);
++	kfree(thermal_prop[1]);
++	kfree(thermal_prop[2]);
++	kfree(thermal_prop[3]);
+ }
+ 
+ static int int3400_thermal_get_temp(struct thermal_zone_device *thermal,
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index b8f8621537720..05562b3cca451 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -434,7 +434,7 @@ static u8 gsm_encode_modem(const struct gsm_dlci *dlci)
+ 		modembits |= MDM_RTR;
+ 	if (dlci->modem_tx & TIOCM_RI)
+ 		modembits |= MDM_IC;
+-	if (dlci->modem_tx & TIOCM_CD)
++	if (dlci->modem_tx & TIOCM_CD || dlci->gsm->initiator)
+ 		modembits |= MDM_DV;
+ 	return modembits;
+ }
+@@ -1426,6 +1426,9 @@ static void gsm_dlci_close(struct gsm_dlci *dlci)
+ 	if (dlci->addr != 0) {
+ 		tty_port_tty_hangup(&dlci->port, false);
+ 		kfifo_reset(&dlci->fifo);
++		/* Ensure that gsmtty_open() can return. */
++		tty_port_set_initialized(&dlci->port, 0);
++		wake_up_interruptible(&dlci->port.open_wait);
+ 	} else
+ 		dlci->gsm->dead = true;
+ 	wake_up(&dlci->gsm->event);
+@@ -1485,7 +1488,7 @@ static void gsm_dlci_t1(struct timer_list *t)
+ 			dlci->mode = DLCI_MODE_ADM;
+ 			gsm_dlci_open(dlci);
+ 		} else {
+-			gsm_dlci_close(dlci);
++			gsm_dlci_begin_close(dlci); /* prevent half open link */
+ 		}
+ 
+ 		break;
+@@ -1719,7 +1722,12 @@ static void gsm_dlci_release(struct gsm_dlci *dlci)
+ 		gsm_destroy_network(dlci);
+ 		mutex_unlock(&dlci->mutex);
+ 
+-		tty_hangup(tty);
++		/* We cannot use tty_hangup() because in tty_kref_put() the tty
++		 * driver assumes that the hangup queue is free and reuses it to
++		 * queue release_one_tty() -> NULL pointer panic in
++		 * process_one_work().
++		 */
++		tty_vhangup(tty);
+ 
+ 		tty_port_tty_set(&dlci->port, NULL);
+ 		tty_kref_put(tty);
+@@ -3173,9 +3181,9 @@ static void gsmtty_throttle(struct tty_struct *tty)
+ 	if (dlci->state == DLCI_CLOSED)
+ 		return;
+ 	if (C_CRTSCTS(tty))
+-		dlci->modem_tx &= ~TIOCM_DTR;
++		dlci->modem_tx &= ~TIOCM_RTS;
+ 	dlci->throttled = true;
+-	/* Send an MSC with DTR cleared */
++	/* Send an MSC with RTS cleared */
+ 	gsmtty_modem_update(dlci, 0);
+ }
+ 
+@@ -3185,9 +3193,9 @@ static void gsmtty_unthrottle(struct tty_struct *tty)
+ 	if (dlci->state == DLCI_CLOSED)
+ 		return;
+ 	if (C_CRTSCTS(tty))
+-		dlci->modem_tx |= TIOCM_DTR;
++		dlci->modem_tx |= TIOCM_RTS;
+ 	dlci->throttled = false;
+-	/* Send an MSC with DTR set */
++	/* Send an MSC with RTS set */
+ 	gsmtty_modem_update(dlci, 0);
+ }
+ 
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index 9adb8362578c5..04b4ed5d06341 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -734,12 +734,15 @@ static irqreturn_t sc16is7xx_irq(int irq, void *dev_id)
+ static void sc16is7xx_tx_proc(struct kthread_work *ws)
+ {
+ 	struct uart_port *port = &(to_sc16is7xx_one(ws, tx_work)->port);
++	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+ 
+ 	if ((port->rs485.flags & SER_RS485_ENABLED) &&
+ 	    (port->rs485.delay_rts_before_send > 0))
+ 		msleep(port->rs485.delay_rts_before_send);
+ 
++	mutex_lock(&s->efr_lock);
+ 	sc16is7xx_handle_tx(port);
++	mutex_unlock(&s->efr_lock);
+ }
+ 
+ static void sc16is7xx_reconf_rs485(struct uart_port *port)
+diff --git a/drivers/usb/dwc2/core.h b/drivers/usb/dwc2/core.h
+index 641e4251cb7f1..03d16a08261d8 100644
+--- a/drivers/usb/dwc2/core.h
++++ b/drivers/usb/dwc2/core.h
+@@ -1406,6 +1406,7 @@ void dwc2_hsotg_core_connect(struct dwc2_hsotg *hsotg);
+ void dwc2_hsotg_disconnect(struct dwc2_hsotg *dwc2);
+ int dwc2_hsotg_set_test_mode(struct dwc2_hsotg *hsotg, int testmode);
+ #define dwc2_is_device_connected(hsotg) (hsotg->connected)
++#define dwc2_is_device_enabled(hsotg) (hsotg->enabled)
+ int dwc2_backup_device_registers(struct dwc2_hsotg *hsotg);
+ int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, int remote_wakeup);
+ int dwc2_gadget_enter_hibernation(struct dwc2_hsotg *hsotg);
+@@ -1434,6 +1435,7 @@ static inline int dwc2_hsotg_set_test_mode(struct dwc2_hsotg *hsotg,
+ 					   int testmode)
+ { return 0; }
+ #define dwc2_is_device_connected(hsotg) (0)
++#define dwc2_is_device_enabled(hsotg) (0)
+ static inline int dwc2_backup_device_registers(struct dwc2_hsotg *hsotg)
+ { return 0; }
+ static inline int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg,
+diff --git a/drivers/usb/dwc2/drd.c b/drivers/usb/dwc2/drd.c
+index aa6eb76f64ddc..36f2c38416e5e 100644
+--- a/drivers/usb/dwc2/drd.c
++++ b/drivers/usb/dwc2/drd.c
+@@ -109,8 +109,10 @@ static int dwc2_drd_role_sw_set(struct usb_role_switch *sw, enum usb_role role)
+ 		already = dwc2_ovr_avalid(hsotg, true);
+ 	} else if (role == USB_ROLE_DEVICE) {
+ 		already = dwc2_ovr_bvalid(hsotg, true);
+-		/* This clear DCTL.SFTDISCON bit */
+-		dwc2_hsotg_core_connect(hsotg);
++		if (dwc2_is_device_enabled(hsotg)) {
++			/* This clear DCTL.SFTDISCON bit */
++			dwc2_hsotg_core_connect(hsotg);
++		}
+ 	} else {
+ 		if (dwc2_is_device_mode(hsotg)) {
+ 			if (!dwc2_ovr_bvalid(hsotg, false))
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 17117870f6cea..98df8d52c765c 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -81,8 +81,8 @@ static const struct acpi_gpio_mapping acpi_dwc3_byt_gpios[] = {
+ static struct gpiod_lookup_table platform_bytcr_gpios = {
+ 	.dev_id		= "0000:00:16.0",
+ 	.table		= {
+-		GPIO_LOOKUP("INT33FC:00", 54, "reset", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("INT33FC:02", 14, "cs", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("INT33FC:00", 54, "cs", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("INT33FC:02", 14, "reset", GPIO_ACTIVE_HIGH),
+ 		{}
+ 	},
+ };
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 9095ce52c28c6..b68fe48ac5792 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3775,9 +3775,11 @@ static irqreturn_t dwc3_thread_interrupt(int irq, void *_evt)
+ 	unsigned long flags;
+ 	irqreturn_t ret = IRQ_NONE;
+ 
++	local_bh_disable();
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	ret = dwc3_process_event_buf(evt);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
++	local_bh_enable();
+ 
+ 	return ret;
+ }
+diff --git a/drivers/usb/gadget/function/rndis.c b/drivers/usb/gadget/function/rndis.c
+index d9ed651f06ac3..0f14c5291af07 100644
+--- a/drivers/usb/gadget/function/rndis.c
++++ b/drivers/usb/gadget/function/rndis.c
+@@ -922,6 +922,7 @@ struct rndis_params *rndis_register(void (*resp_avail)(void *v), void *v)
+ 	params->resp_avail = resp_avail;
+ 	params->v = v;
+ 	INIT_LIST_HEAD(&params->resp_queue);
++	spin_lock_init(&params->resp_lock);
+ 	pr_debug("%s: configNr = %d\n", __func__, i);
+ 
+ 	return params;
+@@ -1015,12 +1016,14 @@ void rndis_free_response(struct rndis_params *params, u8 *buf)
+ {
+ 	rndis_resp_t *r, *n;
+ 
++	spin_lock(&params->resp_lock);
+ 	list_for_each_entry_safe(r, n, &params->resp_queue, list) {
+ 		if (r->buf == buf) {
+ 			list_del(&r->list);
+ 			kfree(r);
+ 		}
+ 	}
++	spin_unlock(&params->resp_lock);
+ }
+ EXPORT_SYMBOL_GPL(rndis_free_response);
+ 
+@@ -1030,14 +1033,17 @@ u8 *rndis_get_next_response(struct rndis_params *params, u32 *length)
+ 
+ 	if (!length) return NULL;
+ 
++	spin_lock(&params->resp_lock);
+ 	list_for_each_entry_safe(r, n, &params->resp_queue, list) {
+ 		if (!r->send) {
+ 			r->send = 1;
+ 			*length = r->length;
++			spin_unlock(&params->resp_lock);
+ 			return r->buf;
+ 		}
+ 	}
+ 
++	spin_unlock(&params->resp_lock);
+ 	return NULL;
+ }
+ EXPORT_SYMBOL_GPL(rndis_get_next_response);
+@@ -1054,7 +1060,9 @@ static rndis_resp_t *rndis_add_response(struct rndis_params *params, u32 length)
+ 	r->length = length;
+ 	r->send = 0;
+ 
++	spin_lock(&params->resp_lock);
+ 	list_add_tail(&r->list, &params->resp_queue);
++	spin_unlock(&params->resp_lock);
+ 	return r;
+ }
+ 
+diff --git a/drivers/usb/gadget/function/rndis.h b/drivers/usb/gadget/function/rndis.h
+index f6167f7fea82b..6206b8b7490f6 100644
+--- a/drivers/usb/gadget/function/rndis.h
++++ b/drivers/usb/gadget/function/rndis.h
+@@ -174,6 +174,7 @@ typedef struct rndis_params {
+ 	void			(*resp_avail)(void *v);
+ 	void			*v;
+ 	struct list_head	resp_queue;
++	spinlock_t		resp_lock;
+ } rndis_params;
+ 
+ /* RNDIS Message parser and other useless functions */
+diff --git a/drivers/usb/gadget/udc/udc-xilinx.c b/drivers/usb/gadget/udc/udc-xilinx.c
+index d5e9d20c097d2..096f56a09e6a2 100644
+--- a/drivers/usb/gadget/udc/udc-xilinx.c
++++ b/drivers/usb/gadget/udc/udc-xilinx.c
+@@ -1612,6 +1612,8 @@ static void xudc_getstatus(struct xusb_udc *udc)
+ 		break;
+ 	case USB_RECIP_ENDPOINT:
+ 		epnum = udc->setup.wIndex & USB_ENDPOINT_NUMBER_MASK;
++		if (epnum >= XUSB_MAX_ENDPOINTS)
++			goto stall;
+ 		target_ep = &udc->ep[epnum];
+ 		epcfgreg = udc->read_fn(udc->addr + target_ep->offset);
+ 		halt = epcfgreg & XUSB_EP_CFG_STALL_MASK;
+@@ -1679,6 +1681,10 @@ static void xudc_set_clear_feature(struct xusb_udc *udc)
+ 	case USB_RECIP_ENDPOINT:
+ 		if (!udc->setup.wValue) {
+ 			endpoint = udc->setup.wIndex & USB_ENDPOINT_NUMBER_MASK;
++			if (endpoint >= XUSB_MAX_ENDPOINTS) {
++				xudc_ep0_stall(udc);
++				return;
++			}
+ 			target_ep = &udc->ep[endpoint];
+ 			outinbit = udc->setup.wIndex & USB_ENDPOINT_DIR_MASK;
+ 			outinbit = outinbit >> 7;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 325eb1609f8c5..49f74299d3f57 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1091,6 +1091,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 	int			retval = 0;
+ 	bool			comp_timer_running = false;
+ 	bool			pending_portevent = false;
++	bool			reinit_xhc = false;
+ 
+ 	if (!hcd->state)
+ 		return 0;
+@@ -1107,10 +1108,11 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 	set_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags);
+ 
+ 	spin_lock_irq(&xhci->lock);
+-	if ((xhci->quirks & XHCI_RESET_ON_RESUME) || xhci->broken_suspend)
+-		hibernated = true;
+ 
+-	if (!hibernated) {
++	if (hibernated || xhci->quirks & XHCI_RESET_ON_RESUME || xhci->broken_suspend)
++		reinit_xhc = true;
++
++	if (!reinit_xhc) {
+ 		/*
+ 		 * Some controllers might lose power during suspend, so wait
+ 		 * for controller not ready bit to clear, just as in xHC init.
+@@ -1143,12 +1145,17 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 			spin_unlock_irq(&xhci->lock);
+ 			return -ETIMEDOUT;
+ 		}
+-		temp = readl(&xhci->op_regs->status);
+ 	}
+ 
+-	/* If restore operation fails, re-initialize the HC during resume */
+-	if ((temp & STS_SRE) || hibernated) {
++	temp = readl(&xhci->op_regs->status);
+ 
++	/* re-initialize the HC on Restore Error, or Host Controller Error */
++	if (temp & (STS_SRE | STS_HCE)) {
++		reinit_xhc = true;
++		xhci_warn(xhci, "xHC error in resume, USBSTS 0x%x, Reinit\n", temp);
++	}
++
++	if (reinit_xhc) {
+ 		if ((xhci->quirks & XHCI_COMP_MODE_QUIRK) &&
+ 				!(xhci_all_ports_seen_u0(xhci))) {
+ 			del_timer_sync(&xhci->comp_mode_recovery_timer);
+@@ -1480,9 +1487,12 @@ static int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
+ 	struct urb_priv	*urb_priv;
+ 	int num_tds;
+ 
+-	if (!urb || xhci_check_args(hcd, urb->dev, urb->ep,
+-					true, true, __func__) <= 0)
++	if (!urb)
+ 		return -EINVAL;
++	ret = xhci_check_args(hcd, urb->dev, urb->ep,
++					true, true, __func__);
++	if (ret <= 0)
++		return ret ? ret : -EINVAL;
+ 
+ 	slot_id = urb->dev->slot_id;
+ 	ep_index = xhci_get_endpoint_index(&urb->ep->desc);
+@@ -3282,7 +3292,7 @@ static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
+ 		return -EINVAL;
+ 	ret = xhci_check_args(xhci_to_hcd(xhci), udev, ep, 1, true, __func__);
+ 	if (ret <= 0)
+-		return -EINVAL;
++		return ret ? ret : -EINVAL;
+ 	if (usb_ss_max_streams(&ep->ss_ep_comp) == 0) {
+ 		xhci_warn(xhci, "WARN: SuperSpeed Endpoint Companion"
+ 				" descriptor for ep 0x%x does not support streams\n",
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index 8716ada0b1387..a2a38fc76ca53 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -81,7 +81,6 @@
+ #define CH341_QUIRK_SIMULATE_BREAK	BIT(1)
+ 
+ static const struct usb_device_id id_table[] = {
+-	{ USB_DEVICE(0x1a86, 0x5512) },
+ 	{ USB_DEVICE(0x1a86, 0x5523) },
+ 	{ USB_DEVICE(0x1a86, 0x7522) },
+ 	{ USB_DEVICE(0x1a86, 0x7523) },
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index c39c505b081b1..b878f4c87fee8 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -198,6 +198,8 @@ static void option_instat_callback(struct urb *urb);
+ 
+ #define DELL_PRODUCT_5821E			0x81d7
+ #define DELL_PRODUCT_5821E_ESIM			0x81e0
++#define DELL_PRODUCT_5829E_ESIM			0x81e4
++#define DELL_PRODUCT_5829E			0x81e6
+ 
+ #define KYOCERA_VENDOR_ID			0x0c88
+ #define KYOCERA_PRODUCT_KPC650			0x17da
+@@ -1063,6 +1065,10 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ 	{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5821E_ESIM),
+ 	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
++	{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5829E),
++	  .driver_info = RSVD(0) | RSVD(6) },
++	{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5829E_ESIM),
++	  .driver_info = RSVD(0) | RSVD(6) },
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_E100A) },	/* ADU-E100, ADU-310 */
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_500A) },
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_620UW) },
+@@ -1273,10 +1279,16 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7011, 0xff),	/* Telit LE910-S1 (ECM) */
+ 	  .driver_info = NCTRL(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x701a, 0xff),	/* Telit LE910R1 (RNDIS) */
++	  .driver_info = NCTRL(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x701b, 0xff),	/* Telit LE910R1 (ECM) */
++	  .driver_info = NCTRL(2) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9010),				/* Telit SBL FN980 flashing device */
+ 	  .driver_info = NCTRL(0) | ZLP },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9200),				/* Telit LE910S1 flashing device */
+ 	  .driver_info = NCTRL(0) | ZLP },
++	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9201),				/* Telit LE910R1 flashing device */
++	  .driver_info = NCTRL(0) | ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0002, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) },
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index 5cd1ee66d2326..c282fc0d04bd1 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -573,16 +573,18 @@ err:
+ 	return ret;
+ }
+ 
+-static int vhost_vsock_stop(struct vhost_vsock *vsock)
++static int vhost_vsock_stop(struct vhost_vsock *vsock, bool check_owner)
+ {
+ 	size_t i;
+-	int ret;
++	int ret = 0;
+ 
+ 	mutex_lock(&vsock->dev.mutex);
+ 
+-	ret = vhost_dev_check_owner(&vsock->dev);
+-	if (ret)
+-		goto err;
++	if (check_owner) {
++		ret = vhost_dev_check_owner(&vsock->dev);
++		if (ret)
++			goto err;
++	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) {
+ 		struct vhost_virtqueue *vq = &vsock->vqs[i];
+@@ -697,7 +699,12 @@ static int vhost_vsock_dev_release(struct inode *inode, struct file *file)
+ 	 * inefficient.  Room for improvement here. */
+ 	vsock_for_each_connected_socket(vhost_vsock_reset_orphans);
+ 
+-	vhost_vsock_stop(vsock);
++	/* Don't check the owner, because we are in the release path, so we
++	 * need to stop the vsock device in any case.
++	 * vhost_vsock_stop() can not fail in this case, so we don't need to
++	 * check the return code.
++	 */
++	vhost_vsock_stop(vsock, false);
+ 	vhost_vsock_flush(vsock);
+ 	vhost_dev_stop(&vsock->dev);
+ 
+@@ -801,7 +808,7 @@ static long vhost_vsock_dev_ioctl(struct file *f, unsigned int ioctl,
+ 		if (start)
+ 			return vhost_vsock_start(vsock);
+ 		else
+-			return vhost_vsock_stop(vsock);
++			return vhost_vsock_stop(vsock, true);
+ 	case VHOST_GET_FEATURES:
+ 		features = VHOST_VSOCK_FEATURES;
+ 		if (copy_to_user(argp, &features, sizeof(features)))
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index d4a3a56726aa8..32f1b15b25dcc 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -947,6 +947,7 @@ static int check_dev_item(struct extent_buffer *leaf,
+ 			  struct btrfs_key *key, int slot)
+ {
+ 	struct btrfs_dev_item *ditem;
++	const u32 item_size = btrfs_item_size_nr(leaf, slot);
+ 
+ 	if (key->objectid != BTRFS_DEV_ITEMS_OBJECTID) {
+ 		dev_item_err(leaf, slot,
+@@ -954,6 +955,13 @@ static int check_dev_item(struct extent_buffer *leaf,
+ 			     key->objectid, BTRFS_DEV_ITEMS_OBJECTID);
+ 		return -EUCLEAN;
+ 	}
++
++	if (unlikely(item_size != sizeof(*ditem))) {
++		dev_item_err(leaf, slot, "invalid item size: has %u expect %zu",
++			     item_size, sizeof(*ditem));
++		return -EUCLEAN;
++	}
++
+ 	ditem = btrfs_item_ptr(leaf, slot, struct btrfs_dev_item);
+ 	if (btrfs_device_id(leaf, ditem) != key->offset) {
+ 		dev_item_err(leaf, slot,
+@@ -989,6 +997,7 @@ static int check_inode_item(struct extent_buffer *leaf,
+ 	struct btrfs_inode_item *iitem;
+ 	u64 super_gen = btrfs_super_generation(fs_info->super_copy);
+ 	u32 valid_mask = (S_IFMT | S_ISUID | S_ISGID | S_ISVTX | 0777);
++	const u32 item_size = btrfs_item_size_nr(leaf, slot);
+ 	u32 mode;
+ 	int ret;
+ 
+@@ -996,6 +1005,12 @@ static int check_inode_item(struct extent_buffer *leaf,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	if (unlikely(item_size != sizeof(*iitem))) {
++		generic_err(leaf, slot, "invalid item size: has %u expect %zu",
++			    item_size, sizeof(*iitem));
++		return -EUCLEAN;
++	}
++
+ 	iitem = btrfs_item_ptr(leaf, slot, struct btrfs_inode_item);
+ 
+ 	/* Here we use super block generation + 1 to handle log tree */
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 32ddad3ec5d53..5ad27e484014f 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -36,6 +36,14 @@
+  */
+ DEFINE_SPINLOCK(configfs_dirent_lock);
+ 
++/*
++ * All of link_obj/unlink_obj/link_group/unlink_group require that
++ * subsys->su_mutex is held.
++ * But parent configfs_subsystem is NULL when config_item is root.
++ * Use this mutex when config_item is root.
++ */
++static DEFINE_MUTEX(configfs_subsystem_mutex);
++
+ static void configfs_d_iput(struct dentry * dentry,
+ 			    struct inode * inode)
+ {
+@@ -1884,7 +1892,9 @@ int configfs_register_subsystem(struct configfs_subsystem *subsys)
+ 		group->cg_item.ci_name = group->cg_item.ci_namebuf;
+ 
+ 	sd = root->d_fsdata;
++	mutex_lock(&configfs_subsystem_mutex);
+ 	link_group(to_config_group(sd->s_element), group);
++	mutex_unlock(&configfs_subsystem_mutex);
+ 
+ 	inode_lock_nested(d_inode(root), I_MUTEX_PARENT);
+ 
+@@ -1909,7 +1919,9 @@ int configfs_register_subsystem(struct configfs_subsystem *subsys)
+ 	inode_unlock(d_inode(root));
+ 
+ 	if (err) {
++		mutex_lock(&configfs_subsystem_mutex);
+ 		unlink_group(group);
++		mutex_unlock(&configfs_subsystem_mutex);
+ 		configfs_release_fs();
+ 	}
+ 	put_fragment(frag);
+@@ -1956,7 +1968,9 @@ void configfs_unregister_subsystem(struct configfs_subsystem *subsys)
+ 
+ 	dput(dentry);
+ 
++	mutex_lock(&configfs_subsystem_mutex);
+ 	unlink_group(group);
++	mutex_unlock(&configfs_subsystem_mutex);
+ 	configfs_release_fs();
+ }
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 104dff9c71314..019cbde8c3d67 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4058,6 +4058,7 @@ static int io_add_buffers(struct io_provide_buf *pbuf, struct io_buffer **head)
+ 		} else {
+ 			list_add_tail(&buf->list, &(*head)->list);
+ 		}
++		cond_resched();
+ 	}
+ 
+ 	return i ? i : -ENOMEM;
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index ade05887070dd..8b7315c22f0d1 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -262,7 +262,6 @@ static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts)
+ 			if (!gid_valid(gid))
+ 				return -EINVAL;
+ 			opts->gid = gid;
+-			set_gid(tracefs_mount->mnt_root, gid);
+ 			break;
+ 		case Opt_mode:
+ 			if (match_octal(&args[0], &option))
+@@ -289,7 +288,9 @@ static int tracefs_apply_options(struct super_block *sb)
+ 	inode->i_mode |= opts->mode;
+ 
+ 	inode->i_uid = opts->uid;
+-	inode->i_gid = opts->gid;
++
++	/* Set all the group ids to the mount option */
++	set_gid(sb->s_root, opts->gid);
+ 
+ 	return 0;
+ }
+diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
+index 0c6c1de6f3b77..18a9949bba187 100644
+--- a/include/linux/tee_drv.h
++++ b/include/linux/tee_drv.h
+@@ -582,4 +582,18 @@ struct tee_client_driver {
+ #define to_tee_client_driver(d) \
+ 		container_of(d, struct tee_client_driver, driver)
+ 
++/**
++ * teedev_open() - Open a struct tee_device
++ * @teedev:	Device to open
++ *
++ * @return a pointer to struct tee_context on success or an ERR_PTR on failure.
++ */
++struct tee_context *teedev_open(struct tee_device *teedev);
++
++/**
++ * teedev_close_context() - closes a struct tee_context
++ * @ctx:	The struct tee_context to close
++ */
++void teedev_close_context(struct tee_context *ctx);
++
+ #endif /*__TEE_DRV_H*/
+diff --git a/include/net/checksum.h b/include/net/checksum.h
+index 0d05b9e8690b8..8b7d0c31598f5 100644
+--- a/include/net/checksum.h
++++ b/include/net/checksum.h
+@@ -22,7 +22,7 @@
+ #include <asm/checksum.h>
+ 
+ #ifndef _HAVE_ARCH_COPY_AND_CSUM_FROM_USER
+-static inline
++static __always_inline
+ __wsum csum_and_copy_from_user (const void __user *src, void *dst,
+ 				      int len)
+ {
+@@ -33,7 +33,7 @@ __wsum csum_and_copy_from_user (const void __user *src, void *dst,
+ #endif
+ 
+ #ifndef HAVE_CSUM_COPY_USER
+-static __inline__ __wsum csum_and_copy_to_user
++static __always_inline __wsum csum_and_copy_to_user
+ (const void *src, void __user *dst, int len)
+ {
+ 	__wsum sum = csum_partial(src, len, ~0U);
+@@ -45,7 +45,7 @@ static __inline__ __wsum csum_and_copy_to_user
+ #endif
+ 
+ #ifndef _HAVE_ARCH_CSUM_AND_COPY
+-static inline __wsum
++static __always_inline __wsum
+ csum_partial_copy_nocheck(const void *src, void *dst, int len)
+ {
+ 	memcpy(dst, src, len);
+@@ -54,7 +54,7 @@ csum_partial_copy_nocheck(const void *src, void *dst, int len)
+ #endif
+ 
+ #ifndef HAVE_ARCH_CSUM_ADD
+-static inline __wsum csum_add(__wsum csum, __wsum addend)
++static __always_inline __wsum csum_add(__wsum csum, __wsum addend)
+ {
+ 	u32 res = (__force u32)csum;
+ 	res += (__force u32)addend;
+@@ -62,12 +62,12 @@ static inline __wsum csum_add(__wsum csum, __wsum addend)
+ }
+ #endif
+ 
+-static inline __wsum csum_sub(__wsum csum, __wsum addend)
++static __always_inline __wsum csum_sub(__wsum csum, __wsum addend)
+ {
+ 	return csum_add(csum, ~addend);
+ }
+ 
+-static inline __sum16 csum16_add(__sum16 csum, __be16 addend)
++static __always_inline __sum16 csum16_add(__sum16 csum, __be16 addend)
+ {
+ 	u16 res = (__force u16)csum;
+ 
+@@ -75,12 +75,12 @@ static inline __sum16 csum16_add(__sum16 csum, __be16 addend)
+ 	return (__force __sum16)(res + (res < (__force u16)addend));
+ }
+ 
+-static inline __sum16 csum16_sub(__sum16 csum, __be16 addend)
++static __always_inline __sum16 csum16_sub(__sum16 csum, __be16 addend)
+ {
+ 	return csum16_add(csum, ~addend);
+ }
+ 
+-static inline __wsum
++static __always_inline __wsum
+ csum_block_add(__wsum csum, __wsum csum2, int offset)
+ {
+ 	u32 sum = (__force u32)csum2;
+@@ -92,36 +92,37 @@ csum_block_add(__wsum csum, __wsum csum2, int offset)
+ 	return csum_add(csum, (__force __wsum)sum);
+ }
+ 
+-static inline __wsum
++static __always_inline __wsum
+ csum_block_add_ext(__wsum csum, __wsum csum2, int offset, int len)
+ {
+ 	return csum_block_add(csum, csum2, offset);
+ }
+ 
+-static inline __wsum
++static __always_inline __wsum
+ csum_block_sub(__wsum csum, __wsum csum2, int offset)
+ {
+ 	return csum_block_add(csum, ~csum2, offset);
+ }
+ 
+-static inline __wsum csum_unfold(__sum16 n)
++static __always_inline __wsum csum_unfold(__sum16 n)
+ {
+ 	return (__force __wsum)n;
+ }
+ 
+-static inline __wsum csum_partial_ext(const void *buff, int len, __wsum sum)
++static __always_inline
++__wsum csum_partial_ext(const void *buff, int len, __wsum sum)
+ {
+ 	return csum_partial(buff, len, sum);
+ }
+ 
+ #define CSUM_MANGLED_0 ((__force __sum16)0xffff)
+ 
+-static inline void csum_replace_by_diff(__sum16 *sum, __wsum diff)
++static __always_inline void csum_replace_by_diff(__sum16 *sum, __wsum diff)
+ {
+ 	*sum = csum_fold(csum_add(diff, ~csum_unfold(*sum)));
+ }
+ 
+-static inline void csum_replace4(__sum16 *sum, __be32 from, __be32 to)
++static __always_inline void csum_replace4(__sum16 *sum, __be32 from, __be32 to)
+ {
+ 	__wsum tmp = csum_sub(~csum_unfold(*sum), (__force __wsum)from);
+ 
+@@ -134,11 +135,16 @@ static inline void csum_replace4(__sum16 *sum, __be32 from, __be32 to)
+  *  m : old value of a 16bit field
+  *  m' : new value of a 16bit field
+  */
+-static inline void csum_replace2(__sum16 *sum, __be16 old, __be16 new)
++static __always_inline void csum_replace2(__sum16 *sum, __be16 old, __be16 new)
+ {
+ 	*sum = ~csum16_add(csum16_sub(~(*sum), old), new);
+ }
+ 
++static inline void csum_replace(__wsum *csum, __wsum old, __wsum new)
++{
++	*csum = csum_add(csum_sub(*csum, old), new);
++}
++
+ struct sk_buff;
+ void inet_proto_csum_replace4(__sum16 *sum, struct sk_buff *skb,
+ 			      __be32 from, __be32 to, bool pseudohdr);
+@@ -148,16 +154,16 @@ void inet_proto_csum_replace16(__sum16 *sum, struct sk_buff *skb,
+ void inet_proto_csum_replace_by_diff(__sum16 *sum, struct sk_buff *skb,
+ 				     __wsum diff, bool pseudohdr);
+ 
+-static inline void inet_proto_csum_replace2(__sum16 *sum, struct sk_buff *skb,
+-					    __be16 from, __be16 to,
+-					    bool pseudohdr)
++static __always_inline
++void inet_proto_csum_replace2(__sum16 *sum, struct sk_buff *skb,
++			      __be16 from, __be16 to, bool pseudohdr)
+ {
+ 	inet_proto_csum_replace4(sum, skb, (__force __be32)from,
+ 				 (__force __be32)to, pseudohdr);
+ }
+ 
+-static inline __wsum remcsum_adjust(void *ptr, __wsum csum,
+-				    int start, int offset)
++static __always_inline __wsum remcsum_adjust(void *ptr, __wsum csum,
++					     int start, int offset)
+ {
+ 	__sum16 *psum = (__sum16 *)(ptr + offset);
+ 	__wsum delta;
+@@ -173,7 +179,7 @@ static inline __wsum remcsum_adjust(void *ptr, __wsum csum,
+ 	return delta;
+ }
+ 
+-static inline void remcsum_unadjust(__sum16 *psum, __wsum delta)
++static __always_inline void remcsum_unadjust(__sum16 *psum, __wsum delta)
+ {
+ 	*psum = csum_fold(csum_sub(delta, (__force __wsum)*psum));
+ }
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index ed4a9d098164f..76bfb6cd5815d 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -825,7 +825,7 @@ struct nft_expr_ops {
+ 	int				(*offload)(struct nft_offload_ctx *ctx,
+ 						   struct nft_flow_rule *flow,
+ 						   const struct nft_expr *expr);
+-	u32				offload_flags;
++	bool				(*offload_action)(const struct nft_expr *expr);
+ 	const struct nft_expr_type	*type;
+ 	void				*data;
+ };
+diff --git a/include/net/netfilter/nf_tables_offload.h b/include/net/netfilter/nf_tables_offload.h
+index 434a6158852f3..7a453a35a41dd 100644
+--- a/include/net/netfilter/nf_tables_offload.h
++++ b/include/net/netfilter/nf_tables_offload.h
+@@ -67,8 +67,6 @@ struct nft_flow_rule {
+ 	struct flow_rule	*rule;
+ };
+ 
+-#define NFT_OFFLOAD_F_ACTION	(1 << 0)
+-
+ void nft_flow_rule_set_addr_type(struct nft_flow_rule *flow,
+ 				 enum flow_dissector_key_id addr_type);
+ 
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 209e6567cdab0..419dbc3d060ee 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1308,6 +1308,7 @@ int generic_map_delete_batch(struct bpf_map *map,
+ 		maybe_wait_bpf_programs(map);
+ 		if (err)
+ 			break;
++		cond_resched();
+ 	}
+ 	if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp)))
+ 		err = -EFAULT;
+@@ -1365,6 +1366,7 @@ int generic_map_update_batch(struct bpf_map *map,
+ 
+ 		if (err)
+ 			break;
++		cond_resched();
+ 	}
+ 
+ 	if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp)))
+@@ -1462,6 +1464,7 @@ int generic_map_lookup_batch(struct bpf_map *map,
+ 		swap(prev_key, key);
+ 		retry = MAP_LOOKUP_RETRIES;
+ 		cp++;
++		cond_resched();
+ 	}
+ 
+ 	if (err == -EFAULT)
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index ef6b3a7f31c17..0aa224c31f10a 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -2212,6 +2212,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
+ 	cgroup_taskset_first(tset, &css);
+ 	cs = css_cs(css);
+ 
++	cpus_read_lock();
+ 	percpu_down_write(&cpuset_rwsem);
+ 
+ 	/* prepare for attach */
+@@ -2267,6 +2268,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
+ 		wake_up(&cpuset_attach_wq);
+ 
+ 	percpu_up_write(&cpuset_rwsem);
++	cpus_read_unlock();
+ }
+ 
+ /* The various types of files and directories in a cpuset file system */
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index f725802160c0b..d0309de2f84fe 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -940,6 +940,16 @@ static void
+ traceon_trigger(struct event_trigger_data *data, void *rec,
+ 		struct ring_buffer_event *event)
+ {
++	struct trace_event_file *file = data->private_data;
++
++	if (file) {
++		if (tracer_tracing_is_on(file->tr))
++			return;
++
++		tracer_tracing_on(file->tr);
++		return;
++	}
++
+ 	if (tracing_is_on())
+ 		return;
+ 
+@@ -950,8 +960,15 @@ static void
+ traceon_count_trigger(struct event_trigger_data *data, void *rec,
+ 		      struct ring_buffer_event *event)
+ {
+-	if (tracing_is_on())
+-		return;
++	struct trace_event_file *file = data->private_data;
++
++	if (file) {
++		if (tracer_tracing_is_on(file->tr))
++			return;
++	} else {
++		if (tracing_is_on())
++			return;
++	}
+ 
+ 	if (!data->count)
+ 		return;
+@@ -959,13 +976,26 @@ traceon_count_trigger(struct event_trigger_data *data, void *rec,
+ 	if (data->count != -1)
+ 		(data->count)--;
+ 
+-	tracing_on();
++	if (file)
++		tracer_tracing_on(file->tr);
++	else
++		tracing_on();
+ }
+ 
+ static void
+ traceoff_trigger(struct event_trigger_data *data, void *rec,
+ 		 struct ring_buffer_event *event)
+ {
++	struct trace_event_file *file = data->private_data;
++
++	if (file) {
++		if (!tracer_tracing_is_on(file->tr))
++			return;
++
++		tracer_tracing_off(file->tr);
++		return;
++	}
++
+ 	if (!tracing_is_on())
+ 		return;
+ 
+@@ -976,8 +1006,15 @@ static void
+ traceoff_count_trigger(struct event_trigger_data *data, void *rec,
+ 		       struct ring_buffer_event *event)
+ {
+-	if (!tracing_is_on())
+-		return;
++	struct trace_event_file *file = data->private_data;
++
++	if (file) {
++		if (!tracer_tracing_is_on(file->tr))
++			return;
++	} else {
++		if (!tracing_is_on())
++			return;
++	}
+ 
+ 	if (!data->count)
+ 		return;
+@@ -985,7 +1022,10 @@ traceoff_count_trigger(struct event_trigger_data *data, void *rec,
+ 	if (data->count != -1)
+ 		(data->count)--;
+ 
+-	tracing_off();
++	if (file)
++		tracer_tracing_off(file->tr);
++	else
++		tracing_off();
+ }
+ 
+ static int
+diff --git a/mm/memblock.c b/mm/memblock.c
+index faa4de579b3db..f72d539570339 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -366,14 +366,20 @@ void __init memblock_discard(void)
+ 		addr = __pa(memblock.reserved.regions);
+ 		size = PAGE_ALIGN(sizeof(struct memblock_region) *
+ 				  memblock.reserved.max);
+-		__memblock_free_late(addr, size);
++		if (memblock_reserved_in_slab)
++			kfree(memblock.reserved.regions);
++		else
++			__memblock_free_late(addr, size);
+ 	}
+ 
+ 	if (memblock.memory.regions != memblock_memory_init_regions) {
+ 		addr = __pa(memblock.memory.regions);
+ 		size = PAGE_ALIGN(sizeof(struct memblock_region) *
+ 				  memblock.memory.max);
+-		__memblock_free_late(addr, size);
++		if (memblock_memory_in_slab)
++			kfree(memblock.memory.regions);
++		else
++			__memblock_free_late(addr, size);
+ 	}
+ 
+ 	memblock_memory = NULL;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 7fa4283f2a8c0..659a328024713 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2730,6 +2730,9 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+ 	if (unlikely(flags))
+ 		return -EINVAL;
+ 
++	if (unlikely(len == 0))
++		return 0;
++
+ 	/* First find the starting scatterlist element */
+ 	i = msg->sg.start;
+ 	do {
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 0215ae898e836..fccc42c8ca0c7 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -2139,7 +2139,7 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta)
+ 		/* Free pulled out fragments. */
+ 		while ((list = skb_shinfo(skb)->frag_list) != insp) {
+ 			skb_shinfo(skb)->frag_list = list->next;
+-			kfree_skb(list);
++			consume_skb(list);
+ 		}
+ 		/* And insert new clone at head. */
+ 		if (clone) {
+@@ -6044,7 +6044,7 @@ static int pskb_carve_frag_list(struct sk_buff *skb,
+ 	/* Free pulled out fragments. */
+ 	while ((list = shinfo->frag_list) != insp) {
+ 		shinfo->frag_list = list->next;
+-		kfree_skb(list);
++		consume_skb(list);
+ 	}
+ 	/* And insert new clone at head. */
+ 	if (clone) {
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index e2f85a16fad9b..742218594741a 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1374,8 +1374,11 @@ struct sk_buff *inet_gso_segment(struct sk_buff *skb,
+ 	}
+ 
+ 	ops = rcu_dereference(inet_offloads[proto]);
+-	if (likely(ops && ops->callbacks.gso_segment))
++	if (likely(ops && ops->callbacks.gso_segment)) {
+ 		segs = ops->callbacks.gso_segment(skb, features);
++		if (!segs)
++			skb->network_header = skb_mac_header(skb) + nhoff - skb->head;
++	}
+ 
+ 	if (IS_ERR_OR_NULL(segs))
+ 		goto out;
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 323cb231cb580..e60ca03543a53 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -187,7 +187,6 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
+ 			 (int)ident, &ipv6_hdr(skb)->daddr, dif);
+ #endif
+ 	} else {
+-		pr_err("ping: protocol(%x) is not supported\n", ntohs(skb->protocol));
+ 		return NULL;
+ 	}
+ 
+diff --git a/net/ipv4/udp_tunnel_nic.c b/net/ipv4/udp_tunnel_nic.c
+index b91003538d87a..bc3a043a5d5c7 100644
+--- a/net/ipv4/udp_tunnel_nic.c
++++ b/net/ipv4/udp_tunnel_nic.c
+@@ -846,7 +846,7 @@ udp_tunnel_nic_unregister(struct net_device *dev, struct udp_tunnel_nic *utn)
+ 		list_for_each_entry(node, &info->shared->devices, list)
+ 			if (node->dev == dev)
+ 				break;
+-		if (node->dev != dev)
++		if (list_entry_is_head(node, &info->shared->devices, list))
+ 			return;
+ 
+ 		list_del(&node->list);
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index a80f90bf3ae7d..15c8eef1ef443 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -113,6 +113,8 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
+ 	if (likely(ops && ops->callbacks.gso_segment)) {
+ 		skb_reset_transport_header(skb);
+ 		segs = ops->callbacks.gso_segment(skb, features);
++		if (!segs)
++			skb->network_header = skb_mac_header(skb) + nhoff - skb->head;
+ 	}
+ 
+ 	if (IS_ERR_OR_NULL(segs))
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index b781ba97c474e..fdd1da9ecea9e 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -5924,12 +5924,15 @@ static int nf_tables_updobj(const struct nft_ctx *ctx,
+ {
+ 	struct nft_object *newobj;
+ 	struct nft_trans *trans;
+-	int err;
++	int err = -ENOMEM;
++
++	if (!try_module_get(type->owner))
++		return -ENOENT;
+ 
+ 	trans = nft_trans_alloc(ctx, NFT_MSG_NEWOBJ,
+ 				sizeof(struct nft_trans_obj));
+ 	if (!trans)
+-		return -ENOMEM;
++		goto err_trans;
+ 
+ 	newobj = nft_obj_init(ctx, type, attr);
+ 	if (IS_ERR(newobj)) {
+@@ -5946,6 +5949,8 @@ static int nf_tables_updobj(const struct nft_ctx *ctx,
+ 
+ err_free_trans:
+ 	kfree(trans);
++err_trans:
++	module_put(type->owner);
+ 	return err;
+ }
+ 
+@@ -7555,7 +7560,7 @@ static void nft_obj_commit_update(struct nft_trans *trans)
+ 	if (obj->ops->update)
+ 		obj->ops->update(obj, newobj);
+ 
+-	kfree(newobj);
++	nft_obj_destroy(&trans->ctx, newobj);
+ }
+ 
+ static void nft_commit_release(struct nft_trans *trans)
+@@ -8202,7 +8207,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			break;
+ 		case NFT_MSG_NEWOBJ:
+ 			if (nft_trans_obj_update(trans)) {
+-				kfree(nft_trans_obj_newobj(trans));
++				nft_obj_destroy(&trans->ctx, nft_trans_obj_newobj(trans));
+ 				nft_trans_destroy(trans);
+ 			} else {
+ 				trans->ctx.table->use--;
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index e5fcbb0e4b8e5..839fd09f1bb4a 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -94,7 +94,8 @@ struct nft_flow_rule *nft_flow_rule_create(struct net *net,
+ 
+ 	expr = nft_expr_first(rule);
+ 	while (nft_expr_more(rule, expr)) {
+-		if (expr->ops->offload_flags & NFT_OFFLOAD_F_ACTION)
++		if (expr->ops->offload_action &&
++		    expr->ops->offload_action(expr))
+ 			num_actions++;
+ 
+ 		expr = nft_expr_next(expr);
+diff --git a/net/netfilter/nft_dup_netdev.c b/net/netfilter/nft_dup_netdev.c
+index 40788b3f1071a..70c457476b874 100644
+--- a/net/netfilter/nft_dup_netdev.c
++++ b/net/netfilter/nft_dup_netdev.c
+@@ -67,6 +67,11 @@ static int nft_dup_netdev_offload(struct nft_offload_ctx *ctx,
+ 	return nft_fwd_dup_netdev_offload(ctx, flow, FLOW_ACTION_MIRRED, oif);
+ }
+ 
++static bool nft_dup_netdev_offload_action(const struct nft_expr *expr)
++{
++	return true;
++}
++
+ static struct nft_expr_type nft_dup_netdev_type;
+ static const struct nft_expr_ops nft_dup_netdev_ops = {
+ 	.type		= &nft_dup_netdev_type,
+@@ -75,6 +80,7 @@ static const struct nft_expr_ops nft_dup_netdev_ops = {
+ 	.init		= nft_dup_netdev_init,
+ 	.dump		= nft_dup_netdev_dump,
+ 	.offload	= nft_dup_netdev_offload,
++	.offload_action	= nft_dup_netdev_offload_action,
+ };
+ 
+ static struct nft_expr_type nft_dup_netdev_type __read_mostly = {
+diff --git a/net/netfilter/nft_fwd_netdev.c b/net/netfilter/nft_fwd_netdev.c
+index b77985986b24e..3b0dcd170551b 100644
+--- a/net/netfilter/nft_fwd_netdev.c
++++ b/net/netfilter/nft_fwd_netdev.c
+@@ -77,6 +77,11 @@ static int nft_fwd_netdev_offload(struct nft_offload_ctx *ctx,
+ 	return nft_fwd_dup_netdev_offload(ctx, flow, FLOW_ACTION_REDIRECT, oif);
+ }
+ 
++static bool nft_fwd_netdev_offload_action(const struct nft_expr *expr)
++{
++	return true;
++}
++
+ struct nft_fwd_neigh {
+ 	enum nft_registers	sreg_dev:8;
+ 	enum nft_registers	sreg_addr:8;
+@@ -219,6 +224,7 @@ static const struct nft_expr_ops nft_fwd_netdev_ops = {
+ 	.dump		= nft_fwd_netdev_dump,
+ 	.validate	= nft_fwd_validate,
+ 	.offload	= nft_fwd_netdev_offload,
++	.offload_action	= nft_fwd_netdev_offload_action,
+ };
+ 
+ static const struct nft_expr_ops *
+diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c
+index c63eb3b171784..5c9d88560a474 100644
+--- a/net/netfilter/nft_immediate.c
++++ b/net/netfilter/nft_immediate.c
+@@ -213,6 +213,16 @@ static int nft_immediate_offload(struct nft_offload_ctx *ctx,
+ 	return 0;
+ }
+ 
++static bool nft_immediate_offload_action(const struct nft_expr *expr)
++{
++	const struct nft_immediate_expr *priv = nft_expr_priv(expr);
++
++	if (priv->dreg == NFT_REG_VERDICT)
++		return true;
++
++	return false;
++}
++
+ static const struct nft_expr_ops nft_imm_ops = {
+ 	.type		= &nft_imm_type,
+ 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_immediate_expr)),
+@@ -224,7 +234,7 @@ static const struct nft_expr_ops nft_imm_ops = {
+ 	.dump		= nft_immediate_dump,
+ 	.validate	= nft_immediate_validate,
+ 	.offload	= nft_immediate_offload,
+-	.offload_flags	= NFT_OFFLOAD_F_ACTION,
++	.offload_action	= nft_immediate_offload_action,
+ };
+ 
+ struct nft_expr_type nft_imm_type __read_mostly = {
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index fc487f9812fc5..525c1540f10e6 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -422,12 +422,43 @@ static void set_ipv6_addr(struct sk_buff *skb, u8 l4_proto,
+ 	memcpy(addr, new_addr, sizeof(__be32[4]));
+ }
+ 
+-static void set_ipv6_fl(struct ipv6hdr *nh, u32 fl, u32 mask)
++static void set_ipv6_dsfield(struct sk_buff *skb, struct ipv6hdr *nh, u8 ipv6_tclass, u8 mask)
+ {
++	u8 old_ipv6_tclass = ipv6_get_dsfield(nh);
++
++	ipv6_tclass = OVS_MASKED(old_ipv6_tclass, ipv6_tclass, mask);
++
++	if (skb->ip_summed == CHECKSUM_COMPLETE)
++		csum_replace(&skb->csum, (__force __wsum)(old_ipv6_tclass << 12),
++			     (__force __wsum)(ipv6_tclass << 12));
++
++	ipv6_change_dsfield(nh, ~mask, ipv6_tclass);
++}
++
++static void set_ipv6_fl(struct sk_buff *skb, struct ipv6hdr *nh, u32 fl, u32 mask)
++{
++	u32 ofl;
++
++	ofl = nh->flow_lbl[0] << 16 |  nh->flow_lbl[1] << 8 |  nh->flow_lbl[2];
++	fl = OVS_MASKED(ofl, fl, mask);
++
+ 	/* Bits 21-24 are always unmasked, so this retains their values. */
+-	OVS_SET_MASKED(nh->flow_lbl[0], (u8)(fl >> 16), (u8)(mask >> 16));
+-	OVS_SET_MASKED(nh->flow_lbl[1], (u8)(fl >> 8), (u8)(mask >> 8));
+-	OVS_SET_MASKED(nh->flow_lbl[2], (u8)fl, (u8)mask);
++	nh->flow_lbl[0] = (u8)(fl >> 16);
++	nh->flow_lbl[1] = (u8)(fl >> 8);
++	nh->flow_lbl[2] = (u8)fl;
++
++	if (skb->ip_summed == CHECKSUM_COMPLETE)
++		csum_replace(&skb->csum, (__force __wsum)htonl(ofl), (__force __wsum)htonl(fl));
++}
++
++static void set_ipv6_ttl(struct sk_buff *skb, struct ipv6hdr *nh, u8 new_ttl, u8 mask)
++{
++	new_ttl = OVS_MASKED(nh->hop_limit, new_ttl, mask);
++
++	if (skb->ip_summed == CHECKSUM_COMPLETE)
++		csum_replace(&skb->csum, (__force __wsum)(nh->hop_limit << 8),
++			     (__force __wsum)(new_ttl << 8));
++	nh->hop_limit = new_ttl;
+ }
+ 
+ static void set_ip_ttl(struct sk_buff *skb, struct iphdr *nh, u8 new_ttl,
+@@ -545,18 +576,17 @@ static int set_ipv6(struct sk_buff *skb, struct sw_flow_key *flow_key,
+ 		}
+ 	}
+ 	if (mask->ipv6_tclass) {
+-		ipv6_change_dsfield(nh, ~mask->ipv6_tclass, key->ipv6_tclass);
++		set_ipv6_dsfield(skb, nh, key->ipv6_tclass, mask->ipv6_tclass);
+ 		flow_key->ip.tos = ipv6_get_dsfield(nh);
+ 	}
+ 	if (mask->ipv6_label) {
+-		set_ipv6_fl(nh, ntohl(key->ipv6_label),
++		set_ipv6_fl(skb, nh, ntohl(key->ipv6_label),
+ 			    ntohl(mask->ipv6_label));
+ 		flow_key->ipv6.label =
+ 		    *(__be32 *)nh & htonl(IPV6_FLOWINFO_FLOWLABEL);
+ 	}
+ 	if (mask->ipv6_hlimit) {
+-		OVS_SET_MASKED(nh->hop_limit, key->ipv6_hlimit,
+-			       mask->ipv6_hlimit);
++		set_ipv6_ttl(skb, nh, key->ipv6_hlimit, mask->ipv6_hlimit);
+ 		flow_key->ip.ttl = nh->hop_limit;
+ 	}
+ 	return 0;
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 812c3c70a53a0..825b3e9b55f7e 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -514,11 +514,6 @@ static bool tcf_ct_flow_table_lookup(struct tcf_ct_params *p,
+ 	struct nf_conn *ct;
+ 	u8 dir;
+ 
+-	/* Previously seen or loopback */
+-	ct = nf_ct_get(skb, &ctinfo);
+-	if ((ct && !nf_ct_is_template(ct)) || ctinfo == IP_CT_UNTRACKED)
+-		return false;
+-
+ 	switch (family) {
+ 	case NFPROTO_IPV4:
+ 		if (!tcf_ct_flow_table_fill_tuple_ipv4(skb, &tuple, &tcph))
+diff --git a/net/smc/smc_pnet.c b/net/smc/smc_pnet.c
+index f3c18b991d35c..9007c7e3bae4e 100644
+--- a/net/smc/smc_pnet.c
++++ b/net/smc/smc_pnet.c
+@@ -112,7 +112,7 @@ static int smc_pnet_remove_by_pnetid(struct net *net, char *pnet_name)
+ 	pnettable = &sn->pnettable;
+ 
+ 	/* remove table entry */
+-	write_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist,
+ 				 list) {
+ 		if (!pnet_name ||
+@@ -130,7 +130,7 @@ static int smc_pnet_remove_by_pnetid(struct net *net, char *pnet_name)
+ 			rc = 0;
+ 		}
+ 	}
+-	write_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 
+ 	/* if this is not the initial namespace, stop here */
+ 	if (net != &init_net)
+@@ -191,7 +191,7 @@ static int smc_pnet_add_by_ndev(struct net_device *ndev)
+ 	sn = net_generic(net, smc_net_id);
+ 	pnettable = &sn->pnettable;
+ 
+-	write_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist, list) {
+ 		if (pnetelem->type == SMC_PNET_ETH && !pnetelem->ndev &&
+ 		    !strncmp(pnetelem->eth_name, ndev->name, IFNAMSIZ)) {
+@@ -205,7 +205,7 @@ static int smc_pnet_add_by_ndev(struct net_device *ndev)
+ 			break;
+ 		}
+ 	}
+-	write_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 	return rc;
+ }
+ 
+@@ -223,7 +223,7 @@ static int smc_pnet_remove_by_ndev(struct net_device *ndev)
+ 	sn = net_generic(net, smc_net_id);
+ 	pnettable = &sn->pnettable;
+ 
+-	write_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist, list) {
+ 		if (pnetelem->type == SMC_PNET_ETH && pnetelem->ndev == ndev) {
+ 			dev_put(pnetelem->ndev);
+@@ -236,7 +236,7 @@ static int smc_pnet_remove_by_ndev(struct net_device *ndev)
+ 			break;
+ 		}
+ 	}
+-	write_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 	return rc;
+ }
+ 
+@@ -371,7 +371,7 @@ static int smc_pnet_add_eth(struct smc_pnettable *pnettable, struct net *net,
+ 
+ 	rc = -EEXIST;
+ 	new_netdev = true;
+-	write_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
+ 		if (tmp_pe->type == SMC_PNET_ETH &&
+ 		    !strncmp(tmp_pe->eth_name, eth_name, IFNAMSIZ)) {
+@@ -381,9 +381,9 @@ static int smc_pnet_add_eth(struct smc_pnettable *pnettable, struct net *net,
+ 	}
+ 	if (new_netdev) {
+ 		list_add_tail(&new_pe->list, &pnettable->pnetlist);
+-		write_unlock(&pnettable->lock);
++		mutex_unlock(&pnettable->lock);
+ 	} else {
+-		write_unlock(&pnettable->lock);
++		mutex_unlock(&pnettable->lock);
+ 		kfree(new_pe);
+ 		goto out_put;
+ 	}
+@@ -445,7 +445,7 @@ static int smc_pnet_add_ib(struct smc_pnettable *pnettable, char *ib_name,
+ 	new_pe->ib_port = ib_port;
+ 
+ 	new_ibdev = true;
+-	write_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
+ 		if (tmp_pe->type == SMC_PNET_IB &&
+ 		    !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX)) {
+@@ -455,9 +455,9 @@ static int smc_pnet_add_ib(struct smc_pnettable *pnettable, char *ib_name,
+ 	}
+ 	if (new_ibdev) {
+ 		list_add_tail(&new_pe->list, &pnettable->pnetlist);
+-		write_unlock(&pnettable->lock);
++		mutex_unlock(&pnettable->lock);
+ 	} else {
+-		write_unlock(&pnettable->lock);
++		mutex_unlock(&pnettable->lock);
+ 		kfree(new_pe);
+ 	}
+ 	return (new_ibdev) ? 0 : -EEXIST;
+@@ -602,7 +602,7 @@ static int _smc_pnet_dump(struct net *net, struct sk_buff *skb, u32 portid,
+ 	pnettable = &sn->pnettable;
+ 
+ 	/* dump pnettable entries */
+-	read_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry(pnetelem, &pnettable->pnetlist, list) {
+ 		if (pnetid && !smc_pnet_match(pnetelem->pnet_name, pnetid))
+ 			continue;
+@@ -617,7 +617,7 @@ static int _smc_pnet_dump(struct net *net, struct sk_buff *skb, u32 portid,
+ 			break;
+ 		}
+ 	}
+-	read_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 	return idx;
+ }
+ 
+@@ -859,7 +859,7 @@ int smc_pnet_net_init(struct net *net)
+ 	struct smc_pnetids_ndev *pnetids_ndev = &sn->pnetids_ndev;
+ 
+ 	INIT_LIST_HEAD(&pnettable->pnetlist);
+-	rwlock_init(&pnettable->lock);
++	mutex_init(&pnettable->lock);
+ 	INIT_LIST_HEAD(&pnetids_ndev->list);
+ 	rwlock_init(&pnetids_ndev->lock);
+ 
+@@ -939,7 +939,7 @@ static int smc_pnet_find_ndev_pnetid_by_table(struct net_device *ndev,
+ 	sn = net_generic(net, smc_net_id);
+ 	pnettable = &sn->pnettable;
+ 
+-	read_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry(pnetelem, &pnettable->pnetlist, list) {
+ 		if (pnetelem->type == SMC_PNET_ETH && ndev == pnetelem->ndev) {
+ 			/* get pnetid of netdev device */
+@@ -948,7 +948,7 @@ static int smc_pnet_find_ndev_pnetid_by_table(struct net_device *ndev,
+ 			break;
+ 		}
+ 	}
+-	read_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 	return rc;
+ }
+ 
+@@ -1129,7 +1129,7 @@ int smc_pnetid_by_table_ib(struct smc_ib_device *smcibdev, u8 ib_port)
+ 	sn = net_generic(&init_net, smc_net_id);
+ 	pnettable = &sn->pnettable;
+ 
+-	read_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
+ 		if (tmp_pe->type == SMC_PNET_IB &&
+ 		    !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX) &&
+@@ -1139,7 +1139,7 @@ int smc_pnetid_by_table_ib(struct smc_ib_device *smcibdev, u8 ib_port)
+ 			break;
+ 		}
+ 	}
+-	read_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 
+ 	return rc;
+ }
+@@ -1158,7 +1158,7 @@ int smc_pnetid_by_table_smcd(struct smcd_dev *smcddev)
+ 	sn = net_generic(&init_net, smc_net_id);
+ 	pnettable = &sn->pnettable;
+ 
+-	read_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
+ 		if (tmp_pe->type == SMC_PNET_IB &&
+ 		    !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX)) {
+@@ -1167,7 +1167,7 @@ int smc_pnetid_by_table_smcd(struct smcd_dev *smcddev)
+ 			break;
+ 		}
+ 	}
+-	read_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 
+ 	return rc;
+ }
+diff --git a/net/smc/smc_pnet.h b/net/smc/smc_pnet.h
+index 14039272f7e42..80a88eea49491 100644
+--- a/net/smc/smc_pnet.h
++++ b/net/smc/smc_pnet.h
+@@ -29,7 +29,7 @@ struct smc_link_group;
+  * @pnetlist: List of PNETIDs
+  */
+ struct smc_pnettable {
+-	rwlock_t lock;
++	struct mutex lock;
+ 	struct list_head pnetlist;
+ };
+ 
+diff --git a/net/tipc/name_table.c b/net/tipc/name_table.c
+index f6a6acef42235..54c5328f492d2 100644
+--- a/net/tipc/name_table.c
++++ b/net/tipc/name_table.c
+@@ -931,7 +931,7 @@ static int __tipc_nl_add_nametable_publ(struct tipc_nl_msg *msg,
+ 		list_for_each_entry(p, &sr->all_publ, all_publ)
+ 			if (p->key == *last_key)
+ 				break;
+-		if (p->key != *last_key)
++		if (list_entry_is_head(p, &sr->all_publ, all_publ))
+ 			return -EPIPE;
+ 	} else {
+ 		p = list_first_entry(&sr->all_publ,
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index ce957ee5383c4..8d2c98531af45 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -3743,7 +3743,7 @@ static int __tipc_nl_list_sk_publ(struct sk_buff *skb,
+ 			if (p->key == *last_publ)
+ 				break;
+ 		}
+-		if (p->key != *last_publ) {
++		if (list_entry_is_head(p, &tsk->publications, binding_sock)) {
+ 			/* We never set seq or call nl_dump_check_consistent()
+ 			 * this means that setting prev_seq here will cause the
+ 			 * consistence check to fail in the netlink callback
+diff --git a/tools/perf/util/data.c b/tools/perf/util/data.c
+index bcb494dc816a0..48754083791d8 100644
+--- a/tools/perf/util/data.c
++++ b/tools/perf/util/data.c
+@@ -44,10 +44,6 @@ int perf_data__create_dir(struct perf_data *data, int nr)
+ 	if (!files)
+ 		return -ENOMEM;
+ 
+-	data->dir.version = PERF_DIR_VERSION;
+-	data->dir.files   = files;
+-	data->dir.nr      = nr;
+-
+ 	for (i = 0; i < nr; i++) {
+ 		struct perf_data_file *file = &files[i];
+ 
+@@ -62,6 +58,9 @@ int perf_data__create_dir(struct perf_data *data, int nr)
+ 		file->fd = ret;
+ 	}
+ 
++	data->dir.version = PERF_DIR_VERSION;
++	data->dir.files   = files;
++	data->dir.nr      = nr;
+ 	return 0;
+ 
+ out_err:
+diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_kern.h b/tools/testing/selftests/bpf/progs/test_sockmap_kern.h
+index 1858435de7aaf..5cb90ca292186 100644
+--- a/tools/testing/selftests/bpf/progs/test_sockmap_kern.h
++++ b/tools/testing/selftests/bpf/progs/test_sockmap_kern.h
+@@ -235,7 +235,7 @@ SEC("sk_msg1")
+ int bpf_prog4(struct sk_msg_md *msg)
+ {
+ 	int *bytes, zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5;
+-	int *start, *end, *start_push, *end_push, *start_pop, *pop;
++	int *start, *end, *start_push, *end_push, *start_pop, *pop, err = 0;
+ 
+ 	bytes = bpf_map_lookup_elem(&sock_apply_bytes, &zero);
+ 	if (bytes)
+@@ -249,8 +249,11 @@ int bpf_prog4(struct sk_msg_md *msg)
+ 		bpf_msg_pull_data(msg, *start, *end, 0);
+ 	start_push = bpf_map_lookup_elem(&sock_bytes, &two);
+ 	end_push = bpf_map_lookup_elem(&sock_bytes, &three);
+-	if (start_push && end_push)
+-		bpf_msg_push_data(msg, *start_push, *end_push, 0);
++	if (start_push && end_push) {
++		err = bpf_msg_push_data(msg, *start_push, *end_push, 0);
++		if (err)
++			return SK_DROP;
++	}
+ 	start_pop = bpf_map_lookup_elem(&sock_bytes, &four);
+ 	pop = bpf_map_lookup_elem(&sock_bytes, &five);
+ 	if (start_pop && pop)
+@@ -263,6 +266,7 @@ int bpf_prog6(struct sk_msg_md *msg)
+ {
+ 	int zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5, key = 0;
+ 	int *bytes, *start, *end, *start_push, *end_push, *start_pop, *pop, *f;
++	int err = 0;
+ 	__u64 flags = 0;
+ 
+ 	bytes = bpf_map_lookup_elem(&sock_apply_bytes, &zero);
+@@ -279,8 +283,11 @@ int bpf_prog6(struct sk_msg_md *msg)
+ 
+ 	start_push = bpf_map_lookup_elem(&sock_bytes, &two);
+ 	end_push = bpf_map_lookup_elem(&sock_bytes, &three);
+-	if (start_push && end_push)
+-		bpf_msg_push_data(msg, *start_push, *end_push, 0);
++	if (start_push && end_push) {
++		err = bpf_msg_push_data(msg, *start_push, *end_push, 0);
++		if (err)
++			return SK_DROP;
++	}
+ 
+ 	start_pop = bpf_map_lookup_elem(&sock_bytes, &four);
+ 	pop = bpf_map_lookup_elem(&sock_bytes, &five);
+@@ -338,7 +345,7 @@ SEC("sk_msg5")
+ int bpf_prog10(struct sk_msg_md *msg)
+ {
+ 	int *bytes, *start, *end, *start_push, *end_push, *start_pop, *pop;
+-	int zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5;
++	int zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5, err = 0;
+ 
+ 	bytes = bpf_map_lookup_elem(&sock_apply_bytes, &zero);
+ 	if (bytes)
+@@ -352,8 +359,11 @@ int bpf_prog10(struct sk_msg_md *msg)
+ 		bpf_msg_pull_data(msg, *start, *end, 0);
+ 	start_push = bpf_map_lookup_elem(&sock_bytes, &two);
+ 	end_push = bpf_map_lookup_elem(&sock_bytes, &three);
+-	if (start_push && end_push)
+-		bpf_msg_push_data(msg, *start_push, *end_push, 0);
++	if (start_push && end_push) {
++		err = bpf_msg_push_data(msg, *start_push, *end_push, 0);
++		if (err)
++			return SK_PASS;
++	}
+ 	start_pop = bpf_map_lookup_elem(&sock_bytes, &four);
+ 	pop = bpf_map_lookup_elem(&sock_bytes, &five);
+ 	if (start_pop && pop)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-03-08 18:32 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-03-08 18:32 UTC (permalink / raw
  To: gentoo-commits

commit:     2d68b1c4f9ce729191ae26a84480c18c7a4ea823
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Mar  8 18:32:15 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Mar  8 18:32:15 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2d68b1c4

Linux patch 5.10.104

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1103_linux-5.10.104.patch | 3674 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3678 insertions(+)

diff --git a/0000_README b/0000_README
index 7f478841..f4f4b91a 100644
--- a/0000_README
+++ b/0000_README
@@ -455,6 +455,10 @@ Patch:  1102_linux-5.10.103.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.103
 
+Patch:  1103_linux-5.10.104.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.104
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1103_linux-5.10.104.patch b/1103_linux-5.10.104.patch
new file mode 100644
index 00000000..df50ccb0
--- /dev/null
+++ b/1103_linux-5.10.104.patch
@@ -0,0 +1,3674 @@
+diff --git a/Documentation/trace/events.rst b/Documentation/trace/events.rst
+index 2a5aa48eff6c7..9df29a935757a 100644
+--- a/Documentation/trace/events.rst
++++ b/Documentation/trace/events.rst
+@@ -198,6 +198,15 @@ The glob (~) accepts a wild card character (\*,?) and character classes
+   prev_comm ~ "*sh*"
+   prev_comm ~ "ba*sh"
+ 
++If the field is a pointer that points into user space (for example
++"filename" from sys_enter_openat), then you have to append ".ustring" to the
++field name::
++
++  filename.ustring ~ "password"
++
++As the kernel will have to know how to retrieve the memory that the pointer
++is at from user space.
++
+ 5.2 Setting filters
+ -------------------
+ 
+@@ -230,6 +239,16 @@ Currently the caret ('^') for an error always appears at the beginning of
+ the filter string; the error message should still be useful though
+ even without more accurate position info.
+ 
++5.2.1 Filter limitations
++------------------------
++
++If a filter is placed on a string pointer ``(char *)`` that does not point
++to a string on the ring buffer, but instead points to kernel or user space
++memory, then, for safety reasons, at most 1024 bytes of the content is
++copied onto a temporary buffer to do the compare. If the copy of the memory
++faults (the pointer points to memory that should not be accessed), then the
++string compare will be treated as not matching.
++
+ 5.3 Clearing filters
+ --------------------
+ 
+diff --git a/Makefile b/Makefile
+index 829a66a36807e..6e6efe5516872 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 103
++SUBLEVEL = 104
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/omap3-devkit8000-common.dtsi b/arch/arm/boot/dts/omap3-devkit8000-common.dtsi
+index 2c19d6e255bdc..6883ccb45600b 100644
+--- a/arch/arm/boot/dts/omap3-devkit8000-common.dtsi
++++ b/arch/arm/boot/dts/omap3-devkit8000-common.dtsi
+@@ -158,6 +158,24 @@
+ 	status = "disabled";
+ };
+ 
++/* Unusable as clockevent because if unreliable oscillator, allow to idle */
++&timer1_target {
++	/delete-property/ti,no-reset-on-init;
++	/delete-property/ti,no-idle;
++	timer@0 {
++		/delete-property/ti,timer-alwon;
++	};
++};
++
++/* Preferred timer for clockevent */
++&timer12_target {
++	ti,no-reset-on-init;
++	ti,no-idle;
++	timer@0 {
++		/* Always clocked by secure_32k_fck */
++	};
++};
++
+ &twl_gpio {
+ 	ti,use-leds;
+ 	/*
+diff --git a/arch/arm/boot/dts/omap3-devkit8000.dts b/arch/arm/boot/dts/omap3-devkit8000.dts
+index c2995a280729d..162d0726b0080 100644
+--- a/arch/arm/boot/dts/omap3-devkit8000.dts
++++ b/arch/arm/boot/dts/omap3-devkit8000.dts
+@@ -14,36 +14,3 @@
+ 		display2 = &tv0;
+ 	};
+ };
+-
+-/* Unusable as clocksource because of unreliable oscillator */
+-&counter32k {
+-	status = "disabled";
+-};
+-
+-/* Unusable as clockevent because if unreliable oscillator, allow to idle */
+-&timer1_target {
+-	/delete-property/ti,no-reset-on-init;
+-	/delete-property/ti,no-idle;
+-	timer@0 {
+-		/delete-property/ti,timer-alwon;
+-	};
+-};
+-
+-/* Preferred always-on timer for clocksource */
+-&timer12_target {
+-	ti,no-reset-on-init;
+-	ti,no-idle;
+-	timer@0 {
+-		/* Always clocked by secure_32k_fck */
+-	};
+-};
+-
+-/* Preferred timer for clockevent */
+-&timer2_target {
+-	ti,no-reset-on-init;
+-	ti,no-idle;
+-	timer@0 {
+-		assigned-clocks = <&gpt2_fck>;
+-		assigned-clock-parents = <&sys_ck>;
+-	};
+-};
+diff --git a/arch/arm/boot/dts/tegra124-nyan-big.dts b/arch/arm/boot/dts/tegra124-nyan-big.dts
+index 1d2aac2cb6d03..fdc1d64dfff9d 100644
+--- a/arch/arm/boot/dts/tegra124-nyan-big.dts
++++ b/arch/arm/boot/dts/tegra124-nyan-big.dts
+@@ -13,12 +13,15 @@
+ 		     "google,nyan-big-rev1", "google,nyan-big-rev0",
+ 		     "google,nyan-big", "google,nyan", "nvidia,tegra124";
+ 
+-	panel: panel {
+-		compatible = "auo,b133xtn01";
+-
+-		power-supply = <&vdd_3v3_panel>;
+-		backlight = <&backlight>;
+-		ddc-i2c-bus = <&dpaux>;
++	host1x@50000000 {
++		dpaux@545c0000 {
++			aux-bus {
++				panel: panel {
++					compatible = "auo,b133xtn01";
++					backlight = <&backlight>;
++				};
++			};
++		};
+ 	};
+ 
+ 	mmc@700b0400 { /* SD Card on this bus */
+diff --git a/arch/arm/boot/dts/tegra124-nyan-blaze.dts b/arch/arm/boot/dts/tegra124-nyan-blaze.dts
+index 677babde6460e..abdf4456826f8 100644
+--- a/arch/arm/boot/dts/tegra124-nyan-blaze.dts
++++ b/arch/arm/boot/dts/tegra124-nyan-blaze.dts
+@@ -15,12 +15,15 @@
+ 		     "google,nyan-blaze-rev0", "google,nyan-blaze",
+ 		     "google,nyan", "nvidia,tegra124";
+ 
+-	panel: panel {
+-		compatible = "samsung,ltn140at29-301";
+-
+-		power-supply = <&vdd_3v3_panel>;
+-		backlight = <&backlight>;
+-		ddc-i2c-bus = <&dpaux>;
++	host1x@50000000 {
++		dpaux@545c0000 {
++			aux-bus {
++				panel: panel {
++					compatible = "samsung,ltn140at29-301";
++					backlight = <&backlight>;
++				};
++			};
++		};
+ 	};
+ 
+ 	sound {
+diff --git a/arch/arm/boot/dts/tegra124-venice2.dts b/arch/arm/boot/dts/tegra124-venice2.dts
+index e6b54ac1ebd1a..84e2d24065e9a 100644
+--- a/arch/arm/boot/dts/tegra124-venice2.dts
++++ b/arch/arm/boot/dts/tegra124-venice2.dts
+@@ -48,6 +48,13 @@
+ 		dpaux@545c0000 {
+ 			vdd-supply = <&vdd_3v3_panel>;
+ 			status = "okay";
++
++			aux-bus {
++				panel: panel {
++					compatible = "lg,lp129qe";
++					backlight = <&backlight>;
++				};
++			};
+ 		};
+ 	};
+ 
+@@ -1079,13 +1086,6 @@
+ 		};
+ 	};
+ 
+-	panel: panel {
+-		compatible = "lg,lp129qe";
+-		power-supply = <&vdd_3v3_panel>;
+-		backlight = <&backlight>;
+-		ddc-i2c-bus = <&dpaux>;
+-	};
+-
+ 	vdd_mux: regulator@0 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "+VDD_MUX";
+diff --git a/arch/arm/kernel/kgdb.c b/arch/arm/kernel/kgdb.c
+index 7bd30c0a4280d..22f937e6f3ffb 100644
+--- a/arch/arm/kernel/kgdb.c
++++ b/arch/arm/kernel/kgdb.c
+@@ -154,22 +154,38 @@ static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int instr)
+ 	return 0;
+ }
+ 
+-static struct undef_hook kgdb_brkpt_hook = {
++static struct undef_hook kgdb_brkpt_arm_hook = {
+ 	.instr_mask		= 0xffffffff,
+ 	.instr_val		= KGDB_BREAKINST,
+-	.cpsr_mask		= MODE_MASK,
++	.cpsr_mask		= PSR_T_BIT | MODE_MASK,
+ 	.cpsr_val		= SVC_MODE,
+ 	.fn			= kgdb_brk_fn
+ };
+ 
+-static struct undef_hook kgdb_compiled_brkpt_hook = {
++static struct undef_hook kgdb_brkpt_thumb_hook = {
++	.instr_mask		= 0xffff,
++	.instr_val		= KGDB_BREAKINST & 0xffff,
++	.cpsr_mask		= PSR_T_BIT | MODE_MASK,
++	.cpsr_val		= PSR_T_BIT | SVC_MODE,
++	.fn			= kgdb_brk_fn
++};
++
++static struct undef_hook kgdb_compiled_brkpt_arm_hook = {
+ 	.instr_mask		= 0xffffffff,
+ 	.instr_val		= KGDB_COMPILED_BREAK,
+-	.cpsr_mask		= MODE_MASK,
++	.cpsr_mask		= PSR_T_BIT | MODE_MASK,
+ 	.cpsr_val		= SVC_MODE,
+ 	.fn			= kgdb_compiled_brk_fn
+ };
+ 
++static struct undef_hook kgdb_compiled_brkpt_thumb_hook = {
++	.instr_mask		= 0xffff,
++	.instr_val		= KGDB_COMPILED_BREAK & 0xffff,
++	.cpsr_mask		= PSR_T_BIT | MODE_MASK,
++	.cpsr_val		= PSR_T_BIT | SVC_MODE,
++	.fn			= kgdb_compiled_brk_fn
++};
++
+ static int __kgdb_notify(struct die_args *args, unsigned long cmd)
+ {
+ 	struct pt_regs *regs = args->regs;
+@@ -210,8 +226,10 @@ int kgdb_arch_init(void)
+ 	if (ret != 0)
+ 		return ret;
+ 
+-	register_undef_hook(&kgdb_brkpt_hook);
+-	register_undef_hook(&kgdb_compiled_brkpt_hook);
++	register_undef_hook(&kgdb_brkpt_arm_hook);
++	register_undef_hook(&kgdb_brkpt_thumb_hook);
++	register_undef_hook(&kgdb_compiled_brkpt_arm_hook);
++	register_undef_hook(&kgdb_compiled_brkpt_thumb_hook);
+ 
+ 	return 0;
+ }
+@@ -224,8 +242,10 @@ int kgdb_arch_init(void)
+  */
+ void kgdb_arch_exit(void)
+ {
+-	unregister_undef_hook(&kgdb_brkpt_hook);
+-	unregister_undef_hook(&kgdb_compiled_brkpt_hook);
++	unregister_undef_hook(&kgdb_brkpt_arm_hook);
++	unregister_undef_hook(&kgdb_brkpt_thumb_hook);
++	unregister_undef_hook(&kgdb_compiled_brkpt_arm_hook);
++	unregister_undef_hook(&kgdb_compiled_brkpt_thumb_hook);
+ 	unregister_die_notifier(&kgdb_notifier);
+ }
+ 
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index 4df688f410728..3e3001998460b 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -212,12 +212,14 @@ early_param("ecc", early_ecc);
+ static int __init early_cachepolicy(char *p)
+ {
+ 	pr_warn("cachepolicy kernel parameter not supported without cp15\n");
++	return 0;
+ }
+ early_param("cachepolicy", early_cachepolicy);
+ 
+ static int __init noalign_setup(char *__unused)
+ {
+ 	pr_warn("noalign kernel parameter not supported without cp15\n");
++	return 1;
+ }
+ __setup("noalign", noalign_setup);
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
+index 765b24a2bcbf0..fb0a13cad6c93 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
+@@ -281,7 +281,7 @@
+ 
+ 	sound: sound {
+ 		compatible = "rockchip,rk3399-gru-sound";
+-		rockchip,cpu = <&i2s0 &i2s2>;
++		rockchip,cpu = <&i2s0 &spdif>;
+ 	};
+ };
+ 
+@@ -432,10 +432,6 @@ ap_i2c_audio: &i2c8 {
+ 	status = "okay";
+ };
+ 
+-&i2s2 {
+-	status = "okay";
+-};
+-
+ &io_domains {
+ 	status = "okay";
+ 
+@@ -532,6 +528,17 @@ ap_i2c_audio: &i2c8 {
+ 	vqmmc-supply = <&ppvar_sd_card_io>;
+ };
+ 
++&spdif {
++	status = "okay";
++
++	/*
++	 * SPDIF is routed internally to DP; we either don't use these pins, or
++	 * mux them to something else.
++	 */
++	/delete-property/ pinctrl-0;
++	/delete-property/ pinctrl-names;
++};
++
+ &spi1 {
+ 	status = "okay";
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio.c b/arch/arm64/kvm/vgic/vgic-mmio.c
+index b2d73fc0d1ef4..9e1459534ce54 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio.c
+@@ -248,6 +248,8 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
+ 						    IRQCHIP_STATE_PENDING,
+ 						    &val);
+ 			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
++		} else if (vgic_irq_is_mapped_level(irq)) {
++			val = vgic_get_phys_line_level(irq);
+ 		} else {
+ 			val = irq_is_pending(irq);
+ 		}
+diff --git a/arch/ia64/kernel/acpi.c b/arch/ia64/kernel/acpi.c
+index a5636524af769..e2af6b172200e 100644
+--- a/arch/ia64/kernel/acpi.c
++++ b/arch/ia64/kernel/acpi.c
+@@ -446,7 +446,8 @@ void __init acpi_numa_fixup(void)
+ 	if (srat_num_cpus == 0) {
+ 		node_set_online(0);
+ 		node_cpuid[0].phys_id = hard_smp_processor_id();
+-		return;
++		slit_distance(0, 0) = LOCAL_DISTANCE;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -489,7 +490,7 @@ void __init acpi_numa_fixup(void)
+ 			for (j = 0; j < MAX_NUMNODES; j++)
+ 				slit_distance(i, j) = i == j ?
+ 					LOCAL_DISTANCE : REMOTE_DISTANCE;
+-		return;
++		goto out;
+ 	}
+ 
+ 	memset(numa_slit, -1, sizeof(numa_slit));
+@@ -514,6 +515,8 @@ void __init acpi_numa_fixup(void)
+ 		printk("\n");
+ 	}
+ #endif
++out:
++	node_possible_map = node_online_map;
+ }
+ #endif				/* CONFIG_ACPI_NUMA */
+ 
+diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
+index 7ebaef10ea1b6..ac7a25298a04a 100644
+--- a/arch/riscv/mm/Makefile
++++ b/arch/riscv/mm/Makefile
+@@ -24,6 +24,9 @@ obj-$(CONFIG_KASAN)   += kasan_init.o
+ ifdef CONFIG_KASAN
+ KASAN_SANITIZE_kasan_init.o := n
+ KASAN_SANITIZE_init.o := n
++ifdef CONFIG_DEBUG_VIRTUAL
++KASAN_SANITIZE_physaddr.o := n
++endif
+ endif
+ 
+ obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
+diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
+index 883c3be43ea98..2db442701ee28 100644
+--- a/arch/riscv/mm/kasan_init.c
++++ b/arch/riscv/mm/kasan_init.c
+@@ -21,8 +21,7 @@ asmlinkage void __init kasan_early_init(void)
+ 
+ 	for (i = 0; i < PTRS_PER_PTE; ++i)
+ 		set_pte(kasan_early_shadow_pte + i,
+-			mk_pte(virt_to_page(kasan_early_shadow_page),
+-			       PAGE_KERNEL));
++			pfn_pte(virt_to_pfn(kasan_early_shadow_page), PAGE_KERNEL));
+ 
+ 	for (i = 0; i < PTRS_PER_PMD; ++i)
+ 		set_pmd(kasan_early_shadow_pmd + i,
+diff --git a/arch/s390/include/asm/extable.h b/arch/s390/include/asm/extable.h
+index 3beb294fd5531..ce0db8172aad1 100644
+--- a/arch/s390/include/asm/extable.h
++++ b/arch/s390/include/asm/extable.h
+@@ -69,8 +69,13 @@ static inline void swap_ex_entry_fixup(struct exception_table_entry *a,
+ {
+ 	a->fixup = b->fixup + delta;
+ 	b->fixup = tmp.fixup - delta;
+-	a->handler = b->handler + delta;
+-	b->handler = tmp.handler - delta;
++	a->handler = b->handler;
++	if (a->handler)
++		a->handler += delta;
++	b->handler = tmp.handler;
++	if (b->handler)
++		b->handler -= delta;
+ }
++#define swap_ex_entry_fixup swap_ex_entry_fixup
+ 
+ #endif
+diff --git a/drivers/ata/pata_hpt37x.c b/drivers/ata/pata_hpt37x.c
+index 499a947d56ddb..fef46de2f6b23 100644
+--- a/drivers/ata/pata_hpt37x.c
++++ b/drivers/ata/pata_hpt37x.c
+@@ -962,14 +962,14 @@ static int hpt37x_init_one(struct pci_dev *dev, const struct pci_device_id *id)
+ 
+ 	if ((freq >> 12) != 0xABCDE) {
+ 		int i;
+-		u8 sr;
++		u16 sr;
+ 		u32 total = 0;
+ 
+ 		pr_warn("BIOS has not set timing clocks\n");
+ 
+ 		/* This is the process the HPT371 BIOS is reported to use */
+ 		for (i = 0; i < 128; i++) {
+-			pci_read_config_byte(dev, 0x78, &sr);
++			pci_read_config_word(dev, 0x78, &sr);
+ 			total += sr & 0x1FF;
+ 			udelay(15);
+ 		}
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index 5c40ca1d4740e..1fccb457fcc54 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -241,8 +241,7 @@ static void __init dmtimer_systimer_assign_alwon(void)
+ 	bool quirk_unreliable_oscillator = false;
+ 
+ 	/* Quirk unreliable 32 KiHz oscillator with incomplete dts */
+-	if (of_machine_is_compatible("ti,omap3-beagle-ab4") ||
+-	    of_machine_is_compatible("timll,omap3-devkit8000")) {
++	if (of_machine_is_compatible("ti,omap3-beagle-ab4")) {
+ 		quirk_unreliable_oscillator = true;
+ 		counter_32k = -ENODEV;
+ 	}
+diff --git a/drivers/dma/sh/shdma-base.c b/drivers/dma/sh/shdma-base.c
+index 7f72b3f4cd1ae..19ac95c0098f0 100644
+--- a/drivers/dma/sh/shdma-base.c
++++ b/drivers/dma/sh/shdma-base.c
+@@ -115,8 +115,10 @@ static dma_cookie_t shdma_tx_submit(struct dma_async_tx_descriptor *tx)
+ 		ret = pm_runtime_get(schan->dev);
+ 
+ 		spin_unlock_irq(&schan->chan_lock);
+-		if (ret < 0)
++		if (ret < 0) {
+ 			dev_err(schan->dev, "%s(): GET = %d\n", __func__, ret);
++			pm_runtime_put(schan->dev);
++		}
+ 
+ 		pm_runtime_barrier(schan->dev);
+ 
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index 7632232486645..745b7f9eb3351 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -979,7 +979,7 @@ static void __exit scmi_driver_exit(void)
+ }
+ module_exit(scmi_driver_exit);
+ 
+-MODULE_ALIAS("platform: arm-scmi");
++MODULE_ALIAS("platform:arm-scmi");
+ MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
+ MODULE_DESCRIPTION("ARM SCMI protocol driver");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/firmware/efi/libstub/riscv-stub.c b/drivers/firmware/efi/libstub/riscv-stub.c
+index 380e4e2513994..9c460843442f5 100644
+--- a/drivers/firmware/efi/libstub/riscv-stub.c
++++ b/drivers/firmware/efi/libstub/riscv-stub.c
+@@ -25,7 +25,7 @@ typedef void __noreturn (*jump_kernel_func)(unsigned int, unsigned long);
+ 
+ static u32 hartid;
+ 
+-static u32 get_boot_hartid_from_fdt(void)
++static int get_boot_hartid_from_fdt(void)
+ {
+ 	const void *fdt;
+ 	int chosen_node, len;
+@@ -33,23 +33,26 @@ static u32 get_boot_hartid_from_fdt(void)
+ 
+ 	fdt = get_efi_config_table(DEVICE_TREE_GUID);
+ 	if (!fdt)
+-		return U32_MAX;
++		return -EINVAL;
+ 
+ 	chosen_node = fdt_path_offset(fdt, "/chosen");
+ 	if (chosen_node < 0)
+-		return U32_MAX;
++		return -EINVAL;
+ 
+ 	prop = fdt_getprop((void *)fdt, chosen_node, "boot-hartid", &len);
+ 	if (!prop || len != sizeof(u32))
+-		return U32_MAX;
++		return -EINVAL;
+ 
+-	return fdt32_to_cpu(*prop);
++	hartid = fdt32_to_cpu(*prop);
++	return 0;
+ }
+ 
+ efi_status_t check_platform_features(void)
+ {
+-	hartid = get_boot_hartid_from_fdt();
+-	if (hartid == U32_MAX) {
++	int ret;
++
++	ret = get_boot_hartid_from_fdt();
++	if (ret) {
+ 		efi_err("/chosen/boot-hartid missing or invalid!\n");
+ 		return EFI_UNSUPPORTED;
+ 	}
+diff --git a/drivers/firmware/efi/vars.c b/drivers/firmware/efi/vars.c
+index abdc8a6a39631..cae590bd08f27 100644
+--- a/drivers/firmware/efi/vars.c
++++ b/drivers/firmware/efi/vars.c
+@@ -742,6 +742,7 @@ int efivar_entry_set_safe(efi_char16_t *name, efi_guid_t vendor, u32 attributes,
+ {
+ 	const struct efivar_operations *ops;
+ 	efi_status_t status;
++	unsigned long varsize;
+ 
+ 	if (!__efivars)
+ 		return -EINVAL;
+@@ -764,15 +765,17 @@ int efivar_entry_set_safe(efi_char16_t *name, efi_guid_t vendor, u32 attributes,
+ 		return efivar_entry_set_nonblocking(name, vendor, attributes,
+ 						    size, data);
+ 
++	varsize = size + ucs2_strsize(name, 1024);
+ 	if (!block) {
+ 		if (down_trylock(&efivars_lock))
+ 			return -EBUSY;
++		status = check_var_size_nonblocking(attributes, varsize);
+ 	} else {
+ 		if (down_interruptible(&efivars_lock))
+ 			return -EINTR;
++		status = check_var_size(attributes, varsize);
+ 	}
+ 
+-	status = check_var_size(attributes, size + ucs2_strsize(name, 1024));
+ 	if (status != EFI_SUCCESS) {
+ 		up(&efivars_lock);
+ 		return -ENOSPC;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index b47829ff30af7..635601d8b1310 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -715,11 +715,17 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+  * Check if all VM PDs/PTs are ready for updates
+  *
+  * Returns:
+- * True if eviction list is empty.
++ * True if VM is not evicting.
+  */
+ bool amdgpu_vm_ready(struct amdgpu_vm *vm)
+ {
+-	return list_empty(&vm->evicted);
++	bool ret;
++
++	amdgpu_vm_eviction_lock(vm);
++	ret = !vm->evicting;
++	amdgpu_vm_eviction_unlock(vm);
++
++	return ret && list_empty(&vm->evicted);
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/i915/intel_pch.c b/drivers/gpu/drm/i915/intel_pch.c
+index 6c97192e9ca87..a0d5e95234fd0 100644
+--- a/drivers/gpu/drm/i915/intel_pch.c
++++ b/drivers/gpu/drm/i915/intel_pch.c
+@@ -110,6 +110,7 @@ intel_pch_type(const struct drm_i915_private *dev_priv, unsigned short id)
+ 		/* Comet Lake V PCH is based on KBP, which is SPT compatible */
+ 		return PCH_SPT;
+ 	case INTEL_PCH_ICP_DEVICE_ID_TYPE:
++	case INTEL_PCH_ICP2_DEVICE_ID_TYPE:
+ 		drm_dbg_kms(&dev_priv->drm, "Found Ice Lake PCH\n");
+ 		drm_WARN_ON(&dev_priv->drm, !IS_ICELAKE(dev_priv));
+ 		return PCH_ICP;
+@@ -124,7 +125,6 @@ intel_pch_type(const struct drm_i915_private *dev_priv, unsigned short id)
+ 			    !IS_ROCKETLAKE(dev_priv));
+ 		return PCH_TGP;
+ 	case INTEL_PCH_JSP_DEVICE_ID_TYPE:
+-	case INTEL_PCH_JSP2_DEVICE_ID_TYPE:
+ 		drm_dbg_kms(&dev_priv->drm, "Found Jasper Lake PCH\n");
+ 		drm_WARN_ON(&dev_priv->drm, !IS_ELKHARTLAKE(dev_priv));
+ 		return PCH_JSP;
+diff --git a/drivers/gpu/drm/i915/intel_pch.h b/drivers/gpu/drm/i915/intel_pch.h
+index 06d2cd50af0b9..49325022b3c96 100644
+--- a/drivers/gpu/drm/i915/intel_pch.h
++++ b/drivers/gpu/drm/i915/intel_pch.h
+@@ -48,11 +48,11 @@ enum intel_pch {
+ #define INTEL_PCH_CMP2_DEVICE_ID_TYPE		0x0680
+ #define INTEL_PCH_CMP_V_DEVICE_ID_TYPE		0xA380
+ #define INTEL_PCH_ICP_DEVICE_ID_TYPE		0x3480
++#define INTEL_PCH_ICP2_DEVICE_ID_TYPE		0x3880
+ #define INTEL_PCH_MCC_DEVICE_ID_TYPE		0x4B00
+ #define INTEL_PCH_TGP_DEVICE_ID_TYPE		0xA080
+ #define INTEL_PCH_TGP2_DEVICE_ID_TYPE		0x4380
+ #define INTEL_PCH_JSP_DEVICE_ID_TYPE		0x4D80
+-#define INTEL_PCH_JSP2_DEVICE_ID_TYPE		0x3880
+ #define INTEL_PCH_P2X_DEVICE_ID_TYPE		0x7100
+ #define INTEL_PCH_P3X_DEVICE_ID_TYPE		0x7000
+ #define INTEL_PCH_QEMU_DEVICE_ID_TYPE		0x2900 /* qemu q35 has 2918 */
+diff --git a/drivers/hid/hid-debug.c b/drivers/hid/hid-debug.c
+index 982737827b871..f4e2e69377589 100644
+--- a/drivers/hid/hid-debug.c
++++ b/drivers/hid/hid-debug.c
+@@ -823,7 +823,9 @@ static const char *keys[KEY_MAX + 1] = {
+ 	[KEY_F22] = "F22",			[KEY_F23] = "F23",
+ 	[KEY_F24] = "F24",			[KEY_PLAYCD] = "PlayCD",
+ 	[KEY_PAUSECD] = "PauseCD",		[KEY_PROG3] = "Prog3",
+-	[KEY_PROG4] = "Prog4",			[KEY_SUSPEND] = "Suspend",
++	[KEY_PROG4] = "Prog4",
++	[KEY_ALL_APPLICATIONS] = "AllApplications",
++	[KEY_SUSPEND] = "Suspend",
+ 	[KEY_CLOSE] = "Close",			[KEY_PLAY] = "Play",
+ 	[KEY_FASTFORWARD] = "FastForward",	[KEY_BASSBOOST] = "BassBoost",
+ 	[KEY_PRINT] = "Print",			[KEY_HP] = "HP",
+@@ -930,6 +932,7 @@ static const char *keys[KEY_MAX + 1] = {
+ 	[KEY_SCREENSAVER] = "ScreenSaver",
+ 	[KEY_VOICECOMMAND] = "VoiceCommand",
+ 	[KEY_EMOJI_PICKER] = "EmojiPicker",
++	[KEY_DICTATE] = "Dictate",
+ 	[KEY_BRIGHTNESS_MIN] = "BrightnessMin",
+ 	[KEY_BRIGHTNESS_MAX] = "BrightnessMax",
+ 	[KEY_BRIGHTNESS_AUTO] = "BrightnessAuto",
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index eb53855898c8d..a17d1dda95703 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -956,6 +956,7 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ 		case 0x0cd: map_key_clear(KEY_PLAYPAUSE);	break;
+ 		case 0x0cf: map_key_clear(KEY_VOICECOMMAND);	break;
+ 
++		case 0x0d8: map_key_clear(KEY_DICTATE);		break;
+ 		case 0x0d9: map_key_clear(KEY_EMOJI_PICKER);	break;
+ 
+ 		case 0x0e0: map_abs_clear(ABS_VOLUME);		break;
+@@ -1047,6 +1048,8 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ 
+ 		case 0x29d: map_key_clear(KEY_KBD_LAYOUT_NEXT);	break;
+ 
++		case 0x2a2: map_key_clear(KEY_ALL_APPLICATIONS);	break;
++
+ 		case 0x2c7: map_key_clear(KEY_KBDINPUTASSIST_PREV);		break;
+ 		case 0x2c8: map_key_clear(KEY_KBDINPUTASSIST_NEXT);		break;
+ 		case 0x2c9: map_key_clear(KEY_KBDINPUTASSIST_PREVGROUP);		break;
+diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
+index 7e693dcbdd196..ea474b16e3aac 100644
+--- a/drivers/i2c/busses/Kconfig
++++ b/drivers/i2c/busses/Kconfig
+@@ -488,7 +488,7 @@ config I2C_BRCMSTB
+ 
+ config I2C_CADENCE
+ 	tristate "Cadence I2C Controller"
+-	depends on ARCH_ZYNQ || ARM64 || XTENSA
++	depends on ARCH_ZYNQ || ARM64 || XTENSA || COMPILE_TEST
+ 	help
+ 	  Say yes here to select Cadence I2C Host Controller. This controller is
+ 	  e.g. used by Xilinx Zynq.
+@@ -926,7 +926,7 @@ config I2C_QCOM_GENI
+ 
+ config I2C_QUP
+ 	tristate "Qualcomm QUP based I2C controller"
+-	depends on ARCH_QCOM
++	depends on ARCH_QCOM || COMPILE_TEST
+ 	help
+ 	  If you say yes to this option, support will be included for the
+ 	  built-in I2C interface on the Qualcomm SoCs.
+diff --git a/drivers/i2c/busses/i2c-bcm2835.c b/drivers/i2c/busses/i2c-bcm2835.c
+index 37443edbf7546..ad3b124a2e376 100644
+--- a/drivers/i2c/busses/i2c-bcm2835.c
++++ b/drivers/i2c/busses/i2c-bcm2835.c
+@@ -23,6 +23,11 @@
+ #define BCM2835_I2C_FIFO	0x10
+ #define BCM2835_I2C_DIV		0x14
+ #define BCM2835_I2C_DEL		0x18
++/*
++ * 16-bit field for the number of SCL cycles to wait after rising SCL
++ * before deciding the slave is not responding. 0 disables the
++ * timeout detection.
++ */
+ #define BCM2835_I2C_CLKT	0x1c
+ 
+ #define BCM2835_I2C_C_READ	BIT(0)
+@@ -477,6 +482,12 @@ static int bcm2835_i2c_probe(struct platform_device *pdev)
+ 	adap->dev.of_node = pdev->dev.of_node;
+ 	adap->quirks = of_device_get_match_data(&pdev->dev);
+ 
++	/*
++	 * Disable the hardware clock stretching timeout. SMBUS
++	 * specifies a limit for how long the device can stretch the
++	 * clock, but core I2C doesn't.
++	 */
++	bcm2835_i2c_writel(i2c_dev, BCM2835_I2C_CLKT, 0);
+ 	bcm2835_i2c_writel(i2c_dev, BCM2835_I2C_C, 0);
+ 
+ 	ret = i2c_add_adapter(adap);
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index 3cfd2c18eebd9..ff9dc37eff345 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -2179,6 +2179,12 @@ int input_register_device(struct input_dev *dev)
+ 	/* KEY_RESERVED is not supposed to be transmitted to userspace. */
+ 	__clear_bit(KEY_RESERVED, dev->keybit);
+ 
++	/* Buttonpads should not map BTN_RIGHT and/or BTN_MIDDLE. */
++	if (test_bit(INPUT_PROP_BUTTONPAD, dev->propbit)) {
++		__clear_bit(BTN_RIGHT, dev->keybit);
++		__clear_bit(BTN_MIDDLE, dev->keybit);
++	}
++
+ 	/* Make sure that bitmasks not mentioned in dev->evbit are clean. */
+ 	input_cleanse_bitmasks(dev);
+ 
+diff --git a/drivers/input/keyboard/Kconfig b/drivers/input/keyboard/Kconfig
+index 9f60f1559e499..3f7a5ff17a9a3 100644
+--- a/drivers/input/keyboard/Kconfig
++++ b/drivers/input/keyboard/Kconfig
+@@ -556,7 +556,7 @@ config KEYBOARD_PMIC8XXX
+ 
+ config KEYBOARD_SAMSUNG
+ 	tristate "Samsung keypad support"
+-	depends on HAVE_CLK
++	depends on HAS_IOMEM && HAVE_CLK
+ 	select INPUT_MATRIXKMAP
+ 	help
+ 	  Say Y here if you want to use the keypad on your Samsung mobile
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 11a9ee32c98cc..6f59c8b245f24 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -153,55 +153,21 @@ static int elan_get_fwinfo(u16 ic_type, u8 iap_version, u16 *validpage_count,
+ 	return 0;
+ }
+ 
+-static int elan_enable_power(struct elan_tp_data *data)
++static int elan_set_power(struct elan_tp_data *data, bool on)
+ {
+ 	int repeat = ETP_RETRY_COUNT;
+ 	int error;
+ 
+-	error = regulator_enable(data->vcc);
+-	if (error) {
+-		dev_err(&data->client->dev,
+-			"failed to enable regulator: %d\n", error);
+-		return error;
+-	}
+-
+ 	do {
+-		error = data->ops->power_control(data->client, true);
++		error = data->ops->power_control(data->client, on);
+ 		if (error >= 0)
+ 			return 0;
+ 
+ 		msleep(30);
+ 	} while (--repeat > 0);
+ 
+-	dev_err(&data->client->dev, "failed to enable power: %d\n", error);
+-	return error;
+-}
+-
+-static int elan_disable_power(struct elan_tp_data *data)
+-{
+-	int repeat = ETP_RETRY_COUNT;
+-	int error;
+-
+-	do {
+-		error = data->ops->power_control(data->client, false);
+-		if (!error) {
+-			error = regulator_disable(data->vcc);
+-			if (error) {
+-				dev_err(&data->client->dev,
+-					"failed to disable regulator: %d\n",
+-					error);
+-				/* Attempt to power the chip back up */
+-				data->ops->power_control(data->client, true);
+-				break;
+-			}
+-
+-			return 0;
+-		}
+-
+-		msleep(30);
+-	} while (--repeat > 0);
+-
+-	dev_err(&data->client->dev, "failed to disable power: %d\n", error);
++	dev_err(&data->client->dev, "failed to set power %s: %d\n",
++		on ? "on" : "off", error);
+ 	return error;
+ }
+ 
+@@ -1361,9 +1327,19 @@ static int __maybe_unused elan_suspend(struct device *dev)
+ 		/* Enable wake from IRQ */
+ 		data->irq_wake = (enable_irq_wake(client->irq) == 0);
+ 	} else {
+-		ret = elan_disable_power(data);
++		ret = elan_set_power(data, false);
++		if (ret)
++			goto err;
++
++		ret = regulator_disable(data->vcc);
++		if (ret) {
++			dev_err(dev, "error %d disabling regulator\n", ret);
++			/* Attempt to power the chip back up */
++			elan_set_power(data, true);
++		}
+ 	}
+ 
++err:
+ 	mutex_unlock(&data->sysfs_mutex);
+ 	return ret;
+ }
+@@ -1374,12 +1350,18 @@ static int __maybe_unused elan_resume(struct device *dev)
+ 	struct elan_tp_data *data = i2c_get_clientdata(client);
+ 	int error;
+ 
+-	if (device_may_wakeup(dev) && data->irq_wake) {
++	if (!device_may_wakeup(dev)) {
++		error = regulator_enable(data->vcc);
++		if (error) {
++			dev_err(dev, "error %d enabling regulator\n", error);
++			goto err;
++		}
++	} else if (data->irq_wake) {
+ 		disable_irq_wake(client->irq);
+ 		data->irq_wake = false;
+ 	}
+ 
+-	error = elan_enable_power(data);
++	error = elan_set_power(data, true);
+ 	if (error) {
+ 		dev_err(dev, "power up when resuming failed: %d\n", error);
+ 		goto err;
+diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
+index b4adab6985632..0c40d22409f23 100644
+--- a/drivers/iommu/amd/amd_iommu.h
++++ b/drivers/iommu/amd/amd_iommu.h
+@@ -17,6 +17,7 @@ extern int amd_iommu_init_passthrough(void);
+ extern irqreturn_t amd_iommu_int_thread(int irq, void *data);
+ extern irqreturn_t amd_iommu_int_handler(int irq, void *data);
+ extern void amd_iommu_apply_erratum_63(u16 devid);
++extern void amd_iommu_restart_event_logging(struct amd_iommu *iommu);
+ extern void amd_iommu_reset_cmd_buffer(struct amd_iommu *iommu);
+ extern int amd_iommu_init_devices(void);
+ extern void amd_iommu_uninit_devices(void);
+diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
+index 33446c9d3bac8..690c5976575c6 100644
+--- a/drivers/iommu/amd/amd_iommu_types.h
++++ b/drivers/iommu/amd/amd_iommu_types.h
+@@ -109,6 +109,7 @@
+ #define PASID_MASK		0x0000ffff
+ 
+ /* MMIO status bits */
++#define MMIO_STATUS_EVT_OVERFLOW_INT_MASK	(1 << 0)
+ #define MMIO_STATUS_EVT_INT_MASK	(1 << 1)
+ #define MMIO_STATUS_COM_WAIT_INT_MASK	(1 << 2)
+ #define MMIO_STATUS_PPR_INT_MASK	(1 << 6)
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 502e6532dd549..6eaefc9e7b3d6 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -656,6 +656,16 @@ static int __init alloc_command_buffer(struct amd_iommu *iommu)
+ 	return iommu->cmd_buf ? 0 : -ENOMEM;
+ }
+ 
++/*
++ * This function restarts event logging in case the IOMMU experienced
++ * an event log buffer overflow.
++ */
++void amd_iommu_restart_event_logging(struct amd_iommu *iommu)
++{
++	iommu_feature_disable(iommu, CONTROL_EVT_LOG_EN);
++	iommu_feature_enable(iommu, CONTROL_EVT_LOG_EN);
++}
++
+ /*
+  * This function resets the command buffer if the IOMMU stopped fetching
+  * commands from it.
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 5f1195791cb18..200cf5da5e0ad 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -813,7 +813,8 @@ amd_iommu_set_pci_msi_domain(struct device *dev, struct amd_iommu *iommu) { }
+ #endif /* !CONFIG_IRQ_REMAP */
+ 
+ #define AMD_IOMMU_INT_MASK	\
+-	(MMIO_STATUS_EVT_INT_MASK | \
++	(MMIO_STATUS_EVT_OVERFLOW_INT_MASK | \
++	 MMIO_STATUS_EVT_INT_MASK | \
+ 	 MMIO_STATUS_PPR_INT_MASK | \
+ 	 MMIO_STATUS_GALOG_INT_MASK)
+ 
+@@ -823,7 +824,7 @@ irqreturn_t amd_iommu_int_thread(int irq, void *data)
+ 	u32 status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
+ 
+ 	while (status & AMD_IOMMU_INT_MASK) {
+-		/* Enable EVT and PPR and GA interrupts again */
++		/* Enable interrupt sources again */
+ 		writel(AMD_IOMMU_INT_MASK,
+ 			iommu->mmio_base + MMIO_STATUS_OFFSET);
+ 
+@@ -844,6 +845,11 @@ irqreturn_t amd_iommu_int_thread(int irq, void *data)
+ 		}
+ #endif
+ 
++		if (status & MMIO_STATUS_EVT_OVERFLOW_INT_MASK) {
++			pr_info_ratelimited("IOMMU event log overflow\n");
++			amd_iommu_restart_event_logging(iommu);
++		}
++
+ 		/*
+ 		 * Hardware bug: ERBT1312
+ 		 * When re-enabling interrupt (by writing 1
+diff --git a/drivers/net/arcnet/com20020-pci.c b/drivers/net/arcnet/com20020-pci.c
+index eb7f76753c9c0..9f44e2e458df1 100644
+--- a/drivers/net/arcnet/com20020-pci.c
++++ b/drivers/net/arcnet/com20020-pci.c
+@@ -136,6 +136,9 @@ static int com20020pci_probe(struct pci_dev *pdev,
+ 		return -ENOMEM;
+ 
+ 	ci = (struct com20020_pci_card_info *)id->driver_data;
++	if (!ci)
++		return -EINVAL;
++
+ 	priv->ci = ci;
+ 	mm = &ci->misc_map;
+ 
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index 3f759fae81fe2..e023c401f4f77 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -190,8 +190,8 @@ struct gs_can {
+ struct gs_usb {
+ 	struct gs_can *canch[GS_MAX_INTF];
+ 	struct usb_anchor rx_submitted;
+-	atomic_t active_channels;
+ 	struct usb_device *udev;
++	u8 active_channels;
+ };
+ 
+ /* 'allocate' a tx context.
+@@ -588,7 +588,7 @@ static int gs_can_open(struct net_device *netdev)
+ 	if (rc)
+ 		return rc;
+ 
+-	if (atomic_add_return(1, &parent->active_channels) == 1) {
++	if (!parent->active_channels) {
+ 		for (i = 0; i < GS_MAX_RX_URBS; i++) {
+ 			struct urb *urb;
+ 			u8 *buf;
+@@ -689,6 +689,7 @@ static int gs_can_open(struct net_device *netdev)
+ 
+ 	dev->can.state = CAN_STATE_ERROR_ACTIVE;
+ 
++	parent->active_channels++;
+ 	if (!(dev->can.ctrlmode & CAN_CTRLMODE_LISTENONLY))
+ 		netif_start_queue(netdev);
+ 
+@@ -704,7 +705,8 @@ static int gs_can_close(struct net_device *netdev)
+ 	netif_stop_queue(netdev);
+ 
+ 	/* Stop polling */
+-	if (atomic_dec_and_test(&parent->active_channels))
++	parent->active_channels--;
++	if (!parent->active_channels)
+ 		usb_kill_anchored_urbs(&parent->rx_submitted);
+ 
+ 	/* Stop sending URBs */
+@@ -983,8 +985,6 @@ static int gs_usb_probe(struct usb_interface *intf,
+ 
+ 	init_usb_anchor(&dev->rx_submitted);
+ 
+-	atomic_set(&dev->active_channels, 0);
+-
+ 	usb_set_intfdata(intf, dev);
+ 	dev->udev = interface_to_usbdev(intf);
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c b/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c
+index 7ff31d1026fb2..e0d34e64fc6cb 100644
+--- a/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c
+@@ -3678,6 +3678,8 @@ int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai,
+ 	    MAC_STATS_ACCUM_SECS : (MAC_STATS_ACCUM_SECS * 10);
+ 	adapter->params.pci.vpd_cap_addr =
+ 	    pci_find_capability(adapter->pdev, PCI_CAP_ID_VPD);
++	if (!adapter->params.pci.vpd_cap_addr)
++		return -ENODEV;
+ 	ret = get_vpd_params(adapter, &adapter->params.vpd);
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index c7be7ab131b19..95bee3d915934 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -2354,8 +2354,10 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
+ 	 * flush reset queue and process this reset
+ 	 */
+ 	if (adapter->force_reset_recovery && !list_empty(&adapter->rwi_list)) {
+-		list_for_each_safe(entry, tmp_entry, &adapter->rwi_list)
++		list_for_each_safe(entry, tmp_entry, &adapter->rwi_list) {
+ 			list_del(entry);
++			kfree(list_entry(entry, struct ibmvnic_rwi, list));
++		}
+ 	}
+ 	rwi->reset_reason = reason;
+ 	list_add_tail(&rwi->list, &adapter->rwi_list);
+@@ -4921,6 +4923,13 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
+ 			adapter->fw_done_rc = -EIO;
+ 			complete(&adapter->fw_done);
+ 		}
++
++		/* if we got here during crq-init, retry crq-init */
++		if (!completion_done(&adapter->init_done)) {
++			adapter->init_done_rc = -EAGAIN;
++			complete(&adapter->init_done);
++		}
++
+ 		if (!completion_done(&adapter->stats_done))
+ 			complete(&adapter->stats_done);
+ 		if (test_bit(0, &adapter->resetting))
+@@ -5383,6 +5392,12 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
+ 		goto ibmvnic_dev_file_err;
+ 
+ 	netif_carrier_off(netdev);
++
++	adapter->state = VNIC_PROBED;
++
++	adapter->wait_for_reset = false;
++	adapter->last_reset_time = jiffies;
++
+ 	rc = register_netdev(netdev);
+ 	if (rc) {
+ 		dev_err(&dev->dev, "failed to register netdev rc=%d\n", rc);
+@@ -5390,10 +5405,6 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
+ 	}
+ 	dev_info(&dev->dev, "ibmvnic registered\n");
+ 
+-	adapter->state = VNIC_PROBED;
+-
+-	adapter->wait_for_reset = false;
+-	adapter->last_reset_time = jiffies;
+ 	return 0;
+ 
+ ibmvnic_register_fail:
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index b38b914f9ac6c..15b1503d5b6ca 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -4134,9 +4134,9 @@ static s32 e1000_validate_nvm_checksum_ich8lan(struct e1000_hw *hw)
+ 		return ret_val;
+ 
+ 	if (!(data & valid_csum_mask)) {
+-		e_dbg("NVM Checksum Invalid\n");
++		e_dbg("NVM Checksum valid bit not set\n");
+ 
+-		if (hw->mac.type < e1000_pch_cnp) {
++		if (hw->mac.type < e1000_pch_tgp) {
+ 			data |= valid_csum_mask;
+ 			ret_val = e1000_write_nvm(hw, word, 1, &data);
+ 			if (ret_val)
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index 6766446a33f49..ce1e2fb22e092 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -309,6 +309,7 @@ struct iavf_adapter {
+ 	struct iavf_hw hw; /* defined in iavf_type.h */
+ 
+ 	enum iavf_state_t state;
++	enum iavf_state_t last_state;
+ 	unsigned long crit_section;
+ 
+ 	struct delayed_work watchdog_task;
+@@ -378,6 +379,15 @@ struct iavf_device {
+ extern char iavf_driver_name[];
+ extern struct workqueue_struct *iavf_wq;
+ 
++static inline void iavf_change_state(struct iavf_adapter *adapter,
++				     enum iavf_state_t state)
++{
++	if (adapter->state != state) {
++		adapter->last_state = adapter->state;
++		adapter->state = state;
++	}
++}
++
+ int iavf_up(struct iavf_adapter *adapter);
+ void iavf_down(struct iavf_adapter *adapter);
+ int iavf_process_config(struct iavf_adapter *adapter);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index de7794ebc7e73..bd1fb3774769b 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -963,7 +963,7 @@ static void iavf_configure(struct iavf_adapter *adapter)
+  **/
+ static void iavf_up_complete(struct iavf_adapter *adapter)
+ {
+-	adapter->state = __IAVF_RUNNING;
++	iavf_change_state(adapter, __IAVF_RUNNING);
+ 	clear_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+ 
+ 	iavf_napi_enable_all(adapter);
+@@ -1698,7 +1698,7 @@ static int iavf_startup(struct iavf_adapter *adapter)
+ 		iavf_shutdown_adminq(hw);
+ 		goto err;
+ 	}
+-	adapter->state = __IAVF_INIT_VERSION_CHECK;
++	iavf_change_state(adapter, __IAVF_INIT_VERSION_CHECK);
+ err:
+ 	return err;
+ }
+@@ -1722,7 +1722,7 @@ static int iavf_init_version_check(struct iavf_adapter *adapter)
+ 	if (!iavf_asq_done(hw)) {
+ 		dev_err(&pdev->dev, "Admin queue command never completed\n");
+ 		iavf_shutdown_adminq(hw);
+-		adapter->state = __IAVF_STARTUP;
++		iavf_change_state(adapter, __IAVF_STARTUP);
+ 		goto err;
+ 	}
+ 
+@@ -1745,8 +1745,7 @@ static int iavf_init_version_check(struct iavf_adapter *adapter)
+ 			err);
+ 		goto err;
+ 	}
+-	adapter->state = __IAVF_INIT_GET_RESOURCES;
+-
++	iavf_change_state(adapter, __IAVF_INIT_GET_RESOURCES);
+ err:
+ 	return err;
+ }
+@@ -1862,7 +1861,7 @@ static int iavf_init_get_resources(struct iavf_adapter *adapter)
+ 	if (netdev->features & NETIF_F_GRO)
+ 		dev_info(&pdev->dev, "GRO is enabled\n");
+ 
+-	adapter->state = __IAVF_DOWN;
++	iavf_change_state(adapter, __IAVF_DOWN);
+ 	set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+ 	rtnl_unlock();
+ 
+@@ -1910,7 +1909,7 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		goto restart_watchdog;
+ 
+ 	if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+-		adapter->state = __IAVF_COMM_FAILED;
++		iavf_change_state(adapter, __IAVF_COMM_FAILED);
+ 
+ 	switch (adapter->state) {
+ 	case __IAVF_COMM_FAILED:
+@@ -1921,7 +1920,7 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 			/* A chance for redemption! */
+ 			dev_err(&adapter->pdev->dev,
+ 				"Hardware came out of reset. Attempting reinit.\n");
+-			adapter->state = __IAVF_STARTUP;
++			iavf_change_state(adapter, __IAVF_STARTUP);
+ 			adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
+ 			queue_delayed_work(iavf_wq, &adapter->init_task, 10);
+ 			clear_bit(__IAVF_IN_CRITICAL_TASK,
+@@ -1971,9 +1970,10 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		goto restart_watchdog;
+ 	}
+ 
+-		/* check for hw reset */
++	/* check for hw reset */
+ 	reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK;
+ 	if (!reg_val) {
++		iavf_change_state(adapter, __IAVF_RESETTING);
+ 		adapter->flags |= IAVF_FLAG_RESET_PENDING;
+ 		adapter->aq_required = 0;
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+@@ -2053,7 +2053,7 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
+ 	adapter->netdev->flags &= ~IFF_UP;
+ 	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
+ 	adapter->flags &= ~IAVF_FLAG_RESET_PENDING;
+-	adapter->state = __IAVF_DOWN;
++	iavf_change_state(adapter, __IAVF_DOWN);
+ 	wake_up(&adapter->down_waitqueue);
+ 	dev_info(&adapter->pdev->dev, "Reset task did not complete, VF disabled\n");
+ }
+@@ -2165,7 +2165,7 @@ continue_reset:
+ 	}
+ 	iavf_irq_disable(adapter);
+ 
+-	adapter->state = __IAVF_RESETTING;
++	iavf_change_state(adapter, __IAVF_RESETTING);
+ 	adapter->flags &= ~IAVF_FLAG_RESET_PENDING;
+ 
+ 	/* free the Tx/Rx rings and descriptors, might be better to just
+@@ -2265,11 +2265,14 @@ continue_reset:
+ 
+ 		iavf_configure(adapter);
+ 
++		/* iavf_up_complete() will switch device back
++		 * to __IAVF_RUNNING
++		 */
+ 		iavf_up_complete(adapter);
+ 
+ 		iavf_irq_enable(adapter, true);
+ 	} else {
+-		adapter->state = __IAVF_DOWN;
++		iavf_change_state(adapter, __IAVF_DOWN);
+ 		wake_up(&adapter->down_waitqueue);
+ 	}
+ 	clear_bit(__IAVF_IN_CLIENT_TASK, &adapter->crit_section);
+@@ -3277,7 +3280,7 @@ static int iavf_close(struct net_device *netdev)
+ 		adapter->flags |= IAVF_FLAG_CLIENT_NEEDS_CLOSE;
+ 
+ 	iavf_down(adapter);
+-	adapter->state = __IAVF_DOWN_PENDING;
++	iavf_change_state(adapter, __IAVF_DOWN_PENDING);
+ 	iavf_free_traffic_irqs(adapter);
+ 
+ 	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
+@@ -3317,8 +3320,11 @@ static int iavf_change_mtu(struct net_device *netdev, int new_mtu)
+ 		iavf_notify_client_l2_params(&adapter->vsi);
+ 		adapter->flags |= IAVF_FLAG_SERVICE_CLIENT_REQUESTED;
+ 	}
+-	adapter->flags |= IAVF_FLAG_RESET_NEEDED;
+-	queue_work(iavf_wq, &adapter->reset_task);
++
++	if (netif_running(netdev)) {
++		adapter->flags |= IAVF_FLAG_RESET_NEEDED;
++		queue_work(iavf_wq, &adapter->reset_task);
++	}
+ 
+ 	return 0;
+ }
+@@ -3658,7 +3664,7 @@ init_failed:
+ 			"Failed to communicate with PF; waiting before retry\n");
+ 		adapter->flags |= IAVF_FLAG_PF_COMMS_FAILED;
+ 		iavf_shutdown_adminq(hw);
+-		adapter->state = __IAVF_STARTUP;
++		iavf_change_state(adapter, __IAVF_STARTUP);
+ 		queue_delayed_work(iavf_wq, &adapter->init_task, HZ * 5);
+ 		goto out;
+ 	}
+@@ -3684,7 +3690,7 @@ static void iavf_shutdown(struct pci_dev *pdev)
+ 	if (iavf_lock_timeout(adapter, __IAVF_IN_CRITICAL_TASK, 5000))
+ 		dev_warn(&adapter->pdev->dev, "failed to set __IAVF_IN_CRITICAL_TASK in %s\n", __FUNCTION__);
+ 	/* Prevent the watchdog from running. */
+-	adapter->state = __IAVF_REMOVE;
++	iavf_change_state(adapter, __IAVF_REMOVE);
+ 	adapter->aq_required = 0;
+ 	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
+ 
+@@ -3757,7 +3763,7 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	hw->back = adapter;
+ 
+ 	adapter->msg_enable = BIT(DEFAULT_DEBUG_LEVEL_SHIFT) - 1;
+-	adapter->state = __IAVF_STARTUP;
++	iavf_change_state(adapter, __IAVF_STARTUP);
+ 
+ 	/* Call save state here because it relies on the adapter struct. */
+ 	pci_save_state(pdev);
+@@ -3925,7 +3931,7 @@ static void iavf_remove(struct pci_dev *pdev)
+ 		dev_warn(&adapter->pdev->dev, "failed to set __IAVF_IN_CRITICAL_TASK in %s\n", __FUNCTION__);
+ 
+ 	/* Shut down all the garbage mashers on the detention level */
+-	adapter->state = __IAVF_REMOVE;
++	iavf_change_state(adapter, __IAVF_REMOVE);
+ 	adapter->aq_required = 0;
+ 	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+ 	iavf_free_all_tx_resources(adapter);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index 8be3151f2c62b..ff479bf721443 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -1460,7 +1460,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 		iavf_free_all_tx_resources(adapter);
+ 		iavf_free_all_rx_resources(adapter);
+ 		if (adapter->state == __IAVF_DOWN_PENDING) {
+-			adapter->state = __IAVF_DOWN;
++			iavf_change_state(adapter, __IAVF_DOWN);
+ 			wake_up(&adapter->down_waitqueue);
+ 		}
+ 		break;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index fb4656902634c..6c75df216fa7a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -1602,7 +1602,9 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
+ 				 * reset, so print the event prior to reset.
+ 				 */
+ 				ice_print_vf_rx_mdd_event(vf);
++				mutex_lock(&pf->vf[i].cfg_lock);
+ 				ice_reset_vf(&pf->vf[i], false);
++				mutex_unlock(&pf->vf[i].cfg_lock);
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index 69ce5d60a8570..48511ad0e0c82 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -360,20 +360,26 @@ void ice_free_vfs(struct ice_pf *pf)
+ 	else
+ 		dev_warn(dev, "VFs are assigned - not disabling SR-IOV\n");
+ 
+-	/* Avoid wait time by stopping all VFs at the same time */
+-	ice_for_each_vf(pf, i)
+-		ice_dis_vf_qs(&pf->vf[i]);
+-
+ 	tmp = pf->num_alloc_vfs;
+ 	pf->num_qps_per_vf = 0;
+ 	pf->num_alloc_vfs = 0;
+ 	for (i = 0; i < tmp; i++) {
+-		if (test_bit(ICE_VF_STATE_INIT, pf->vf[i].vf_states)) {
++		struct ice_vf *vf = &pf->vf[i];
++
++		mutex_lock(&vf->cfg_lock);
++
++		ice_dis_vf_qs(vf);
++
++		if (test_bit(ICE_VF_STATE_INIT, vf->vf_states)) {
+ 			/* disable VF qp mappings and set VF disable state */
+-			ice_dis_vf_mappings(&pf->vf[i]);
+-			set_bit(ICE_VF_STATE_DIS, pf->vf[i].vf_states);
+-			ice_free_vf_res(&pf->vf[i]);
++			ice_dis_vf_mappings(vf);
++			set_bit(ICE_VF_STATE_DIS, vf->vf_states);
++			ice_free_vf_res(vf);
+ 		}
++
++		mutex_unlock(&vf->cfg_lock);
++
++		mutex_destroy(&vf->cfg_lock);
+ 	}
+ 
+ 	if (ice_sriov_free_msix_res(pf))
+@@ -1221,9 +1227,13 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
+ 	ice_for_each_vf(pf, v) {
+ 		vf = &pf->vf[v];
+ 
++		mutex_lock(&vf->cfg_lock);
++
+ 		ice_vf_pre_vsi_rebuild(vf);
+ 		ice_vf_rebuild_vsi(vf);
+ 		ice_vf_post_vsi_rebuild(vf);
++
++		mutex_unlock(&vf->cfg_lock);
+ 	}
+ 
+ 	ice_flush(hw);
+@@ -1270,6 +1280,8 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)
+ 	u32 reg;
+ 	int i;
+ 
++	lockdep_assert_held(&vf->cfg_lock);
++
+ 	dev = ice_pf_to_dev(pf);
+ 
+ 	if (test_bit(__ICE_VF_RESETS_DISABLED, pf->state)) {
+@@ -1518,6 +1530,8 @@ static void ice_set_dflt_settings_vfs(struct ice_pf *pf)
+ 		set_bit(ICE_VIRTCHNL_VF_CAP_L2, &vf->vf_caps);
+ 		vf->spoofchk = true;
+ 		vf->num_vf_qs = pf->num_qps_per_vf;
++
++		mutex_init(&vf->cfg_lock);
+ 	}
+ }
+ 
+@@ -1721,9 +1735,12 @@ void ice_process_vflr_event(struct ice_pf *pf)
+ 		bit_idx = (hw->func_caps.vf_base_id + vf_id) % 32;
+ 		/* read GLGEN_VFLRSTAT register to find out the flr VFs */
+ 		reg = rd32(hw, GLGEN_VFLRSTAT(reg_idx));
+-		if (reg & BIT(bit_idx))
++		if (reg & BIT(bit_idx)) {
+ 			/* GLGEN_VFLRSTAT bit will be cleared in ice_reset_vf */
++			mutex_lock(&vf->cfg_lock);
+ 			ice_reset_vf(vf, true);
++			mutex_unlock(&vf->cfg_lock);
++		}
+ 	}
+ }
+ 
+@@ -1800,7 +1817,9 @@ ice_vf_lan_overflow_event(struct ice_pf *pf, struct ice_rq_event_info *event)
+ 	if (!vf)
+ 		return;
+ 
++	mutex_lock(&vf->cfg_lock);
+ 	ice_vc_reset_vf(vf);
++	mutex_unlock(&vf->cfg_lock);
+ }
+ 
+ /**
+@@ -3345,6 +3364,8 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos,
+ 		return 0;
+ 	}
+ 
++	mutex_lock(&vf->cfg_lock);
++
+ 	vf->port_vlan_info = vlanprio;
+ 
+ 	if (vf->port_vlan_info)
+@@ -3354,6 +3375,7 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos,
+ 		dev_info(dev, "Clearing port VLAN on VF %d\n", vf_id);
+ 
+ 	ice_vc_reset_vf(vf);
++	mutex_unlock(&vf->cfg_lock);
+ 
+ 	return 0;
+ }
+@@ -3719,6 +3741,15 @@ error_handler:
+ 		return;
+ 	}
+ 
++	/* VF is being configured in another context that triggers a VFR, so no
++	 * need to process this message
++	 */
++	if (!mutex_trylock(&vf->cfg_lock)) {
++		dev_info(dev, "VF %u is being configured in another context that will trigger a VFR, so there is no need to handle this message\n",
++			 vf->vf_id);
++		return;
++	}
++
+ 	switch (v_opcode) {
+ 	case VIRTCHNL_OP_VERSION:
+ 		err = ice_vc_get_ver_msg(vf, msg);
+@@ -3795,6 +3826,8 @@ error_handler:
+ 		dev_info(dev, "PF failed to honor VF %d, opcode %d, error %d\n",
+ 			 vf_id, v_opcode, err);
+ 	}
++
++	mutex_unlock(&vf->cfg_lock);
+ }
+ 
+ /**
+@@ -3909,6 +3942,8 @@ int ice_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
+ 		return -EINVAL;
+ 	}
+ 
++	mutex_lock(&vf->cfg_lock);
++
+ 	/* VF is notified of its new MAC via the PF's response to the
+ 	 * VIRTCHNL_OP_GET_VF_RESOURCES message after the VF has been reset
+ 	 */
+@@ -3926,6 +3961,7 @@ int ice_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
+ 	}
+ 
+ 	ice_vc_reset_vf(vf);
++	mutex_unlock(&vf->cfg_lock);
+ 	return 0;
+ }
+ 
+@@ -3955,11 +3991,15 @@ int ice_set_vf_trust(struct net_device *netdev, int vf_id, bool trusted)
+ 	if (trusted == vf->trusted)
+ 		return 0;
+ 
++	mutex_lock(&vf->cfg_lock);
++
+ 	vf->trusted = trusted;
+ 	ice_vc_reset_vf(vf);
+ 	dev_info(ice_pf_to_dev(pf), "VF %u is now %strusted\n",
+ 		 vf_id, trusted ? "" : "un");
+ 
++	mutex_unlock(&vf->cfg_lock);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+index 0f519fba3770d..59e5b4f16e965 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+@@ -68,6 +68,11 @@ struct ice_mdd_vf_events {
+ struct ice_vf {
+ 	struct ice_pf *pf;
+ 
++	/* Used during virtchnl message handling and NDO ops against the VF
++	 * that will trigger a VFR
++	 */
++	struct mutex cfg_lock;
++
+ 	u16 vf_id;			/* VF ID in the PF space */
+ 	u16 lan_vsi_idx;		/* index into PF struct */
+ 	/* first vector index of this VF in the PF space */
+diff --git a/drivers/net/ethernet/intel/igc/igc_phy.c b/drivers/net/ethernet/intel/igc/igc_phy.c
+index 8e1799508edc4..e380b7a3ea63b 100644
+--- a/drivers/net/ethernet/intel/igc/igc_phy.c
++++ b/drivers/net/ethernet/intel/igc/igc_phy.c
+@@ -748,8 +748,6 @@ s32 igc_write_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 data)
+ 		if (ret_val)
+ 			return ret_val;
+ 		ret_val = igc_write_phy_reg_mdic(hw, offset, data);
+-		if (ret_val)
+-			return ret_val;
+ 		hw->phy.ops.release(hw);
+ 	} else {
+ 		ret_val = igc_write_xmdio_reg(hw, (u16)offset, dev_addr,
+@@ -781,8 +779,6 @@ s32 igc_read_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 *data)
+ 		if (ret_val)
+ 			return ret_val;
+ 		ret_val = igc_read_phy_reg_mdic(hw, offset, data);
+-		if (ret_val)
+-			return ret_val;
+ 		hw->phy.ops.release(hw);
+ 	} else {
+ 		ret_val = igc_read_xmdio_reg(hw, (u16)offset, dev_addr,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+index d60da7a89092e..ca1a428b278e0 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+@@ -391,12 +391,14 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
+ 	u32 cmd_type;
+ 
+ 	while (budget-- > 0) {
+-		if (unlikely(!ixgbe_desc_unused(xdp_ring)) ||
+-		    !netif_carrier_ok(xdp_ring->netdev)) {
++		if (unlikely(!ixgbe_desc_unused(xdp_ring))) {
+ 			work_done = false;
+ 			break;
+ 		}
+ 
++		if (!netif_carrier_ok(xdp_ring->netdev))
++			break;
++
+ 		if (!xsk_tx_peek_desc(pool, &desc))
+ 			break;
+ 
+diff --git a/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c b/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
+index 971f1e54b6526..b1dd6189638b3 100644
+--- a/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
++++ b/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
+@@ -2282,18 +2282,18 @@ static int __init sxgbe_cmdline_opt(char *str)
+ 	char *opt;
+ 
+ 	if (!str || !*str)
+-		return -EINVAL;
++		return 1;
+ 	while ((opt = strsep(&str, ",")) != NULL) {
+ 		if (!strncmp(opt, "eee_timer:", 10)) {
+ 			if (kstrtoint(opt + 10, 0, &eee_timer))
+ 				goto err;
+ 		}
+ 	}
+-	return 0;
++	return 1;
+ 
+ err:
+ 	pr_err("%s: ERROR broken module parameter conversion\n", __func__);
+-	return -EINVAL;
++	return 1;
+ }
+ 
+ __setup("sxgbeeth=", sxgbe_cmdline_opt);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 6d8a839fab22e..a46c32257de42 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -5428,7 +5428,7 @@ static int __init stmmac_cmdline_opt(char *str)
+ 	char *opt;
+ 
+ 	if (!str || !*str)
+-		return -EINVAL;
++		return 1;
+ 	while ((opt = strsep(&str, ",")) != NULL) {
+ 		if (!strncmp(opt, "debug:", 6)) {
+ 			if (kstrtoint(opt + 6, 0, &debug))
+@@ -5459,11 +5459,11 @@ static int __init stmmac_cmdline_opt(char *str)
+ 				goto err;
+ 		}
+ 	}
+-	return 0;
++	return 1;
+ 
+ err:
+ 	pr_err("%s: ERROR broken module parameter conversion", __func__);
+-	return -EINVAL;
++	return 1;
+ }
+ 
+ __setup("stmmaceth=", stmmac_cmdline_opt);
+diff --git a/drivers/net/hamradio/mkiss.c b/drivers/net/hamradio/mkiss.c
+index 63502a85a9751..049264a7d9611 100644
+--- a/drivers/net/hamradio/mkiss.c
++++ b/drivers/net/hamradio/mkiss.c
+@@ -31,6 +31,8 @@
+ 
+ #define AX_MTU		236
+ 
++/* some arch define END as assembly function ending, just undef it */
++#undef	END
+ /* SLIP/KISS protocol characters. */
+ #define END             0300		/* indicates end of frame	*/
+ #define ESC             0333		/* indicates byte stuffing	*/
+diff --git a/drivers/net/usb/cdc_mbim.c b/drivers/net/usb/cdc_mbim.c
+index 77ac5a721e7b6..414341c9cf5ae 100644
+--- a/drivers/net/usb/cdc_mbim.c
++++ b/drivers/net/usb/cdc_mbim.c
+@@ -658,6 +658,11 @@ static const struct usb_device_id mbim_devs[] = {
+ 	  .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
+ 	},
+ 
++	/* Telit FN990 */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x1071, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
++	  .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
++	},
++
+ 	/* default entry */
+ 	{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+ 	  .driver_info = (unsigned long)&cdc_mbim_info_zlp,
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index b793d61d15d27..cc550ba0c9dfe 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -2264,6 +2264,15 @@ static void hw_scan_work(struct work_struct *work)
+ 			if (req->ie_len)
+ 				skb_put_data(probe, req->ie, req->ie_len);
+ 
++			if (!ieee80211_tx_prepare_skb(hwsim->hw,
++						      hwsim->hw_scan_vif,
++						      probe,
++						      hwsim->tmp_chan->band,
++						      NULL)) {
++				kfree_skb(probe);
++				continue;
++			}
++
+ 			local_bh_disable();
+ 			mac80211_hwsim_tx_frame(hwsim->hw, probe,
+ 						hwsim->tmp_chan);
+@@ -3567,6 +3576,10 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2,
+ 		}
+ 		txi->flags |= IEEE80211_TX_STAT_ACK;
+ 	}
++
++	if (hwsim_flags & HWSIM_TX_CTL_NO_ACK)
++		txi->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
++
+ 	ieee80211_tx_status_irqsafe(data2->hw, skb);
+ 	return 0;
+ out:
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index fce3a90a335cb..7ed8872d08c60 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -844,6 +844,28 @@ static int xennet_close(struct net_device *dev)
+ 	return 0;
+ }
+ 
++static void xennet_destroy_queues(struct netfront_info *info)
++{
++	unsigned int i;
++
++	for (i = 0; i < info->netdev->real_num_tx_queues; i++) {
++		struct netfront_queue *queue = &info->queues[i];
++
++		if (netif_running(info->netdev))
++			napi_disable(&queue->napi);
++		netif_napi_del(&queue->napi);
++	}
++
++	kfree(info->queues);
++	info->queues = NULL;
++}
++
++static void xennet_uninit(struct net_device *dev)
++{
++	struct netfront_info *np = netdev_priv(dev);
++	xennet_destroy_queues(np);
++}
++
+ static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val)
+ {
+ 	unsigned long flags;
+@@ -1613,6 +1635,7 @@ static int xennet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
+ }
+ 
+ static const struct net_device_ops xennet_netdev_ops = {
++	.ndo_uninit          = xennet_uninit,
+ 	.ndo_open            = xennet_open,
+ 	.ndo_stop            = xennet_close,
+ 	.ndo_start_xmit      = xennet_start_xmit,
+@@ -2105,22 +2128,6 @@ error:
+ 	return err;
+ }
+ 
+-static void xennet_destroy_queues(struct netfront_info *info)
+-{
+-	unsigned int i;
+-
+-	for (i = 0; i < info->netdev->real_num_tx_queues; i++) {
+-		struct netfront_queue *queue = &info->queues[i];
+-
+-		if (netif_running(info->netdev))
+-			napi_disable(&queue->napi);
+-		netif_napi_del(&queue->napi);
+-	}
+-
+-	kfree(info->queues);
+-	info->queues = NULL;
+-}
+-
+ 
+ 
+ static int xennet_create_page_pool(struct netfront_queue *queue)
+diff --git a/drivers/ntb/hw/intel/ntb_hw_gen4.c b/drivers/ntb/hw/intel/ntb_hw_gen4.c
+index bc4541cbf8c6e..99a5fc1ab0aaf 100644
+--- a/drivers/ntb/hw/intel/ntb_hw_gen4.c
++++ b/drivers/ntb/hw/intel/ntb_hw_gen4.c
+@@ -168,6 +168,18 @@ static enum ntb_topo gen4_ppd_topo(struct intel_ntb_dev *ndev, u32 ppd)
+ 	return NTB_TOPO_NONE;
+ }
+ 
++static enum ntb_topo spr_ppd_topo(struct intel_ntb_dev *ndev, u32 ppd)
++{
++	switch (ppd & SPR_PPD_TOPO_MASK) {
++	case SPR_PPD_TOPO_B2B_USD:
++		return NTB_TOPO_B2B_USD;
++	case SPR_PPD_TOPO_B2B_DSD:
++		return NTB_TOPO_B2B_DSD;
++	}
++
++	return NTB_TOPO_NONE;
++}
++
+ int gen4_init_dev(struct intel_ntb_dev *ndev)
+ {
+ 	struct pci_dev *pdev = ndev->ntb.pdev;
+@@ -181,7 +193,10 @@ int gen4_init_dev(struct intel_ntb_dev *ndev)
+ 		ndev->hwerr_flags |= NTB_HWERR_BAR_ALIGN;
+ 
+ 	ppd1 = ioread32(ndev->self_mmio + GEN4_PPD1_OFFSET);
+-	ndev->ntb.topo = gen4_ppd_topo(ndev, ppd1);
++	if (pdev_is_ICX(pdev))
++		ndev->ntb.topo = gen4_ppd_topo(ndev, ppd1);
++	else if (pdev_is_SPR(pdev))
++		ndev->ntb.topo = spr_ppd_topo(ndev, ppd1);
+ 	dev_dbg(&pdev->dev, "ppd %#x topo %s\n", ppd1,
+ 		ntb_topo_string(ndev->ntb.topo));
+ 	if (ndev->ntb.topo == NTB_TOPO_NONE)
+diff --git a/drivers/ntb/hw/intel/ntb_hw_gen4.h b/drivers/ntb/hw/intel/ntb_hw_gen4.h
+index a868c788de02f..ec293953d665f 100644
+--- a/drivers/ntb/hw/intel/ntb_hw_gen4.h
++++ b/drivers/ntb/hw/intel/ntb_hw_gen4.h
+@@ -46,10 +46,14 @@
+ #define GEN4_PPD_CLEAR_TRN		0x0001
+ #define GEN4_PPD_LINKTRN		0x0008
+ #define GEN4_PPD_CONN_MASK		0x0300
++#define SPR_PPD_CONN_MASK		0x0700
+ #define GEN4_PPD_CONN_B2B		0x0200
+ #define GEN4_PPD_DEV_MASK		0x1000
+ #define GEN4_PPD_DEV_DSD		0x1000
+ #define GEN4_PPD_DEV_USD		0x0000
++#define SPR_PPD_DEV_MASK		0x4000
++#define SPR_PPD_DEV_DSD 		0x4000
++#define SPR_PPD_DEV_USD 		0x0000
+ #define GEN4_LINK_CTRL_LINK_DISABLE	0x0010
+ 
+ #define GEN4_SLOTSTS			0xb05a
+@@ -59,6 +63,10 @@
+ #define GEN4_PPD_TOPO_B2B_USD	(GEN4_PPD_CONN_B2B | GEN4_PPD_DEV_USD)
+ #define GEN4_PPD_TOPO_B2B_DSD	(GEN4_PPD_CONN_B2B | GEN4_PPD_DEV_DSD)
+ 
++#define SPR_PPD_TOPO_MASK	(SPR_PPD_CONN_MASK | SPR_PPD_DEV_MASK)
++#define SPR_PPD_TOPO_B2B_USD	(GEN4_PPD_CONN_B2B | SPR_PPD_DEV_USD)
++#define SPR_PPD_TOPO_B2B_DSD	(GEN4_PPD_CONN_B2B | SPR_PPD_DEV_DSD)
++
+ #define GEN4_DB_COUNT			32
+ #define GEN4_DB_LINK			32
+ #define GEN4_DB_LINK_BIT		BIT_ULL(GEN4_DB_LINK)
+@@ -97,4 +105,12 @@ static inline int pdev_is_ICX(struct pci_dev *pdev)
+ 	return 0;
+ }
+ 
++static inline int pdev_is_SPR(struct pci_dev *pdev)
++{
++	if (pdev_is_gen4(pdev) &&
++	    pdev->revision > PCI_DEVICE_REVISION_ICX_MAX)
++		return 1;
++	return 0;
++}
++
+ #endif
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index e42a3a0005a72..be7f4f95f455d 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -36,6 +36,13 @@
+ #include "../core.h"
+ #include "pinctrl-sunxi.h"
+ 
++/*
++ * These lock classes tell lockdep that GPIO IRQs are in a different
++ * category than their parents, so it won't report false recursion.
++ */
++static struct lock_class_key sunxi_pinctrl_irq_lock_class;
++static struct lock_class_key sunxi_pinctrl_irq_request_class;
++
+ static struct irq_chip sunxi_pinctrl_edge_irq_chip;
+ static struct irq_chip sunxi_pinctrl_level_irq_chip;
+ 
+@@ -1552,6 +1559,8 @@ int sunxi_pinctrl_init_with_variant(struct platform_device *pdev,
+ 	for (i = 0; i < (pctl->desc->irq_banks * IRQ_PER_BANK); i++) {
+ 		int irqno = irq_create_mapping(pctl->domain, i);
+ 
++		irq_set_lockdep_class(irqno, &sunxi_pinctrl_irq_lock_class,
++				      &sunxi_pinctrl_irq_request_class);
+ 		irq_set_chip_and_handler(irqno, &sunxi_pinctrl_edge_irq_chip,
+ 					 handle_edge_irq);
+ 		irq_set_chip_data(irqno, pctl);
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 043b5f63b94a1..2c48e55c4104e 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -5862,9 +5862,8 @@ core_initcall(regulator_init);
+ static int regulator_late_cleanup(struct device *dev, void *data)
+ {
+ 	struct regulator_dev *rdev = dev_to_rdev(dev);
+-	const struct regulator_ops *ops = rdev->desc->ops;
+ 	struct regulation_constraints *c = rdev->constraints;
+-	int enabled, ret;
++	int ret;
+ 
+ 	if (c && c->always_on)
+ 		return 0;
+@@ -5877,14 +5876,8 @@ static int regulator_late_cleanup(struct device *dev, void *data)
+ 	if (rdev->use_count)
+ 		goto unlock;
+ 
+-	/* If we can't read the status assume it's always on. */
+-	if (ops->is_enabled)
+-		enabled = ops->is_enabled(rdev);
+-	else
+-		enabled = 1;
+-
+-	/* But if reading the status failed, assume that it's off. */
+-	if (enabled <= 0)
++	/* If reading the status failed, assume that it's off. */
++	if (_regulator_is_enabled(rdev) <= 0)
+ 		goto unlock;
+ 
+ 	if (have_full_constraints()) {
+diff --git a/drivers/soc/fsl/guts.c b/drivers/soc/fsl/guts.c
+index 34810f9bb2ee7..091e94c04f309 100644
+--- a/drivers/soc/fsl/guts.c
++++ b/drivers/soc/fsl/guts.c
+@@ -28,7 +28,6 @@ struct fsl_soc_die_attr {
+ static struct guts *guts;
+ static struct soc_device_attribute soc_dev_attr;
+ static struct soc_device *soc_dev;
+-static struct device_node *root;
+ 
+ 
+ /* SoC die attribute definition for QorIQ platform */
+@@ -138,7 +137,7 @@ static u32 fsl_guts_get_svr(void)
+ 
+ static int fsl_guts_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np = pdev->dev.of_node;
++	struct device_node *root, *np = pdev->dev.of_node;
+ 	struct device *dev = &pdev->dev;
+ 	struct resource *res;
+ 	const struct fsl_soc_die_attr *soc_die;
+@@ -161,8 +160,14 @@ static int fsl_guts_probe(struct platform_device *pdev)
+ 	root = of_find_node_by_path("/");
+ 	if (of_property_read_string(root, "model", &machine))
+ 		of_property_read_string_index(root, "compatible", 0, &machine);
+-	if (machine)
+-		soc_dev_attr.machine = machine;
++	if (machine) {
++		soc_dev_attr.machine = devm_kstrdup(dev, machine, GFP_KERNEL);
++		if (!soc_dev_attr.machine) {
++			of_node_put(root);
++			return -ENOMEM;
++		}
++	}
++	of_node_put(root);
+ 
+ 	svr = fsl_guts_get_svr();
+ 	soc_die = fsl_soc_die_match(svr, fsl_soc_die);
+@@ -197,7 +202,6 @@ static int fsl_guts_probe(struct platform_device *pdev)
+ static int fsl_guts_remove(struct platform_device *dev)
+ {
+ 	soc_device_unregister(soc_dev);
+-	of_node_put(root);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/soc/fsl/qe/qe_io.c b/drivers/soc/fsl/qe/qe_io.c
+index 11ea08e97db75..1bb46d955d525 100644
+--- a/drivers/soc/fsl/qe/qe_io.c
++++ b/drivers/soc/fsl/qe/qe_io.c
+@@ -35,6 +35,8 @@ int par_io_init(struct device_node *np)
+ 	if (ret)
+ 		return ret;
+ 	par_io = ioremap(res.start, resource_size(&res));
++	if (!par_io)
++		return -ENOMEM;
+ 
+ 	if (!of_property_read_u32(np, "num-ports", &num_ports))
+ 		num_par_io_ports = num_ports;
+diff --git a/drivers/thermal/thermal_netlink.c b/drivers/thermal/thermal_netlink.c
+index 1234dbe958951..41c8d47805c4e 100644
+--- a/drivers/thermal/thermal_netlink.c
++++ b/drivers/thermal/thermal_netlink.c
+@@ -418,11 +418,12 @@ static int thermal_genl_cmd_tz_get_trip(struct param *p)
+ 	for (i = 0; i < tz->trips; i++) {
+ 
+ 		enum thermal_trip_type type;
+-		int temp, hyst;
++		int temp, hyst = 0;
+ 
+ 		tz->ops->get_trip_type(tz, i, &type);
+ 		tz->ops->get_trip_temp(tz, i, &temp);
+-		tz->ops->get_trip_hyst(tz, i, &hyst);
++		if (tz->ops->get_trip_hyst)
++			tz->ops->get_trip_hyst(tz, i, &hyst);
+ 
+ 		if (nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_TRIP_ID, i) ||
+ 		    nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_TRIP_TYPE, type) ||
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 0eadf0547175c..6afae051ba8d1 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -420,10 +420,22 @@ static void stm32_usart_transmit_chars(struct uart_port *port)
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 	struct circ_buf *xmit = &port->state->xmit;
++	u32 isr;
++	int ret;
+ 
+ 	if (port->x_char) {
+ 		if (stm32_port->tx_dma_busy)
+ 			stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
++
++		/* Check that TDR is empty before filling FIFO */
++		ret =
++		readl_relaxed_poll_timeout_atomic(port->membase + ofs->isr,
++						  isr,
++						  (isr & USART_SR_TXE),
++						  10, 1000);
++		if (ret)
++			dev_warn(port->dev, "1 character may be erased\n");
++
+ 		writel_relaxed(port->x_char, port->membase + ofs->tdr);
+ 		port->x_char = 0;
+ 		port->icount.tx++;
+diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
+index 217d2b66fa514..454860d52ce77 100644
+--- a/drivers/usb/gadget/legacy/inode.c
++++ b/drivers/usb/gadget/legacy/inode.c
+@@ -1828,8 +1828,9 @@ dev_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ 	spin_lock_irq (&dev->lock);
+ 	value = -EINVAL;
+ 	if (dev->buf) {
++		spin_unlock_irq(&dev->lock);
+ 		kfree(kbuf);
+-		goto fail;
++		return value;
+ 	}
+ 	dev->buf = kbuf;
+ 
+@@ -1876,8 +1877,8 @@ dev_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ 
+ 	value = usb_gadget_probe_driver(&gadgetfs_driver);
+ 	if (value != 0) {
+-		kfree (dev->buf);
+-		dev->buf = NULL;
++		spin_lock_irq(&dev->lock);
++		goto fail;
+ 	} else {
+ 		/* at this point "good" hardware has for the first time
+ 		 * let the USB the host see us.  alternatively, if users
+@@ -1894,6 +1895,9 @@ dev_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ 	return value;
+ 
+ fail:
++	dev->config = NULL;
++	dev->hs_config = NULL;
++	dev->dev = NULL;
+ 	spin_unlock_irq (&dev->lock);
+ 	pr_debug ("%s: %s fail %zd, %p\n", shortname, __func__, value, dev);
+ 	kfree (dev->buf);
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index e39a12037b403..a02e38fb696c1 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1197,6 +1197,14 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 	if (!fs_info->quota_root)
+ 		goto out;
+ 
++	/*
++	 * Unlock the qgroup_ioctl_lock mutex before waiting for the rescan worker to
++	 * complete. Otherwise we can deadlock because btrfs_remove_qgroup() needs
++	 * to lock that mutex while holding a transaction handle and the rescan
++	 * worker needs to commit a transaction.
++	 */
++	mutex_unlock(&fs_info->qgroup_ioctl_lock);
++
+ 	/*
+ 	 * Request qgroup rescan worker to complete and wait for it. This wait
+ 	 * must be done before transaction start for quota disable since it may
+@@ -1204,7 +1212,6 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 	 */
+ 	clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
+ 	btrfs_qgroup_wait_for_completion(fs_info, false);
+-	mutex_unlock(&fs_info->qgroup_ioctl_lock);
+ 
+ 	/*
+ 	 * 1 For the root item
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 09ef6419e890a..62784b99a8074 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1286,6 +1286,15 @@ again:
+ 						 inode, name, namelen);
+ 			kfree(name);
+ 			iput(dir);
++			/*
++			 * Whenever we need to check if a name exists or not, we
++			 * check the subvolume tree. So after an unlink we must
++			 * run delayed items, so that future checks for a name
++			 * during log replay see that the name does not exists
++			 * anymore.
++			 */
++			if (!ret)
++				ret = btrfs_run_delayed_items(trans);
+ 			if (ret)
+ 				goto out;
+ 			goto again;
+@@ -1537,6 +1546,15 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 				 */
+ 				if (!ret && inode->i_nlink == 0)
+ 					inc_nlink(inode);
++				/*
++				 * Whenever we need to check if a name exists or
++				 * not, we check the subvolume tree. So after an
++				 * unlink we must run delayed items, so that future
++				 * checks for a name during log replay see that the
++				 * name does not exists anymore.
++				 */
++				if (!ret)
++					ret = btrfs_run_delayed_items(trans);
+ 			}
+ 			if (ret < 0)
+ 				goto out;
+@@ -4297,7 +4315,7 @@ static int log_one_extent(struct btrfs_trans_handle *trans,
+ 
+ /*
+  * Log all prealloc extents beyond the inode's i_size to make sure we do not
+- * lose them after doing a fast fsync and replaying the log. We scan the
++ * lose them after doing a full/fast fsync and replaying the log. We scan the
+  * subvolume's root instead of iterating the inode's extent map tree because
+  * otherwise we can log incorrect extent items based on extent map conversion.
+  * That can happen due to the fact that extent maps are merged when they
+@@ -5084,6 +5102,7 @@ static int copy_inode_items_to_log(struct btrfs_trans_handle *trans,
+ 				   struct btrfs_log_ctx *ctx,
+ 				   bool *need_log_inode_item)
+ {
++	const u64 i_size = i_size_read(&inode->vfs_inode);
+ 	struct btrfs_root *root = inode->root;
+ 	int ins_start_slot = 0;
+ 	int ins_nr = 0;
+@@ -5104,13 +5123,21 @@ again:
+ 		if (min_key->type > max_key->type)
+ 			break;
+ 
+-		if (min_key->type == BTRFS_INODE_ITEM_KEY)
++		if (min_key->type == BTRFS_INODE_ITEM_KEY) {
+ 			*need_log_inode_item = false;
+-
+-		if ((min_key->type == BTRFS_INODE_REF_KEY ||
+-		     min_key->type == BTRFS_INODE_EXTREF_KEY) &&
+-		    inode->generation == trans->transid &&
+-		    !recursive_logging) {
++		} else if (min_key->type == BTRFS_EXTENT_DATA_KEY &&
++			   min_key->offset >= i_size) {
++			/*
++			 * Extents at and beyond eof are logged with
++			 * btrfs_log_prealloc_extents().
++			 * Only regular files have BTRFS_EXTENT_DATA_KEY keys,
++			 * and no keys greater than that, so bail out.
++			 */
++			break;
++		} else if ((min_key->type == BTRFS_INODE_REF_KEY ||
++			    min_key->type == BTRFS_INODE_EXTREF_KEY) &&
++			   inode->generation == trans->transid &&
++			   !recursive_logging) {
+ 			u64 other_ino = 0;
+ 			u64 other_parent = 0;
+ 
+@@ -5141,10 +5168,8 @@ again:
+ 				btrfs_release_path(path);
+ 				goto next_key;
+ 			}
+-		}
+-
+-		/* Skip xattrs, we log them later with btrfs_log_all_xattrs() */
+-		if (min_key->type == BTRFS_XATTR_ITEM_KEY) {
++		} else if (min_key->type == BTRFS_XATTR_ITEM_KEY) {
++			/* Skip xattrs, logged later with btrfs_log_all_xattrs() */
+ 			if (ins_nr == 0)
+ 				goto next_slot;
+ 			ret = copy_items(trans, inode, dst_path, path,
+@@ -5197,9 +5222,21 @@ next_key:
+ 			break;
+ 		}
+ 	}
+-	if (ins_nr)
++	if (ins_nr) {
+ 		ret = copy_items(trans, inode, dst_path, path, ins_start_slot,
+ 				 ins_nr, inode_only, logged_isize);
++		if (ret)
++			return ret;
++	}
++
++	if (inode_only == LOG_INODE_ALL && S_ISREG(inode->vfs_inode.i_mode)) {
++		/*
++		 * Release the path because otherwise we might attempt to double
++		 * lock the same leaf with btrfs_log_prealloc_extents() below.
++		 */
++		btrfs_release_path(path);
++		ret = btrfs_log_prealloc_extents(trans, inode, dst_path);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index f0ed29a9a6f11..aa5a4d759ca23 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -864,6 +864,7 @@ cifs_smb3_do_mount(struct file_system_type *fs_type,
+ 
+ out_super:
+ 	deactivate_locked_super(sb);
++	return root;
+ out:
+ 	cifs_cleanup_volume_info(volume_info);
+ 	return root;
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index a92478eabfa4e..c819e8427ea57 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -109,8 +109,7 @@ int __exfat_truncate(struct inode *inode, loff_t new_size)
+ 	exfat_set_volume_dirty(sb);
+ 
+ 	num_clusters_new = EXFAT_B_TO_CLU_ROUND_UP(i_size_read(inode), sbi);
+-	num_clusters_phys =
+-		EXFAT_B_TO_CLU_ROUND_UP(EXFAT_I(inode)->i_size_ondisk, sbi);
++	num_clusters_phys = EXFAT_B_TO_CLU_ROUND_UP(ei->i_size_ondisk, sbi);
+ 
+ 	exfat_chain_set(&clu, ei->start_clu, num_clusters_phys, ei->flags);
+ 
+@@ -227,12 +226,13 @@ void exfat_truncate(struct inode *inode, loff_t size)
+ {
+ 	struct super_block *sb = inode->i_sb;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
++	struct exfat_inode_info *ei = EXFAT_I(inode);
+ 	unsigned int blocksize = i_blocksize(inode);
+ 	loff_t aligned_size;
+ 	int err;
+ 
+ 	mutex_lock(&sbi->s_lock);
+-	if (EXFAT_I(inode)->start_clu == 0) {
++	if (ei->start_clu == 0) {
+ 		/*
+ 		 * Empty start_clu != ~0 (not allocated)
+ 		 */
+@@ -250,8 +250,8 @@ void exfat_truncate(struct inode *inode, loff_t size)
+ 	else
+ 		mark_inode_dirty(inode);
+ 
+-	inode->i_blocks = ((i_size_read(inode) + (sbi->cluster_size - 1)) &
+-			~(sbi->cluster_size - 1)) >> inode->i_blkbits;
++	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
++				inode->i_blkbits;
+ write_size:
+ 	aligned_size = i_size_read(inode);
+ 	if (aligned_size & (blocksize - 1)) {
+@@ -259,11 +259,11 @@ write_size:
+ 		aligned_size++;
+ 	}
+ 
+-	if (EXFAT_I(inode)->i_size_ondisk > i_size_read(inode))
+-		EXFAT_I(inode)->i_size_ondisk = aligned_size;
++	if (ei->i_size_ondisk > i_size_read(inode))
++		ei->i_size_ondisk = aligned_size;
+ 
+-	if (EXFAT_I(inode)->i_size_aligned > i_size_read(inode))
+-		EXFAT_I(inode)->i_size_aligned = aligned_size;
++	if (ei->i_size_aligned > i_size_read(inode))
++		ei->i_size_aligned = aligned_size;
+ 	mutex_unlock(&sbi->s_lock);
+ }
+ 
+diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c
+index 8b0288f70e93d..2a9f6a80584ee 100644
+--- a/fs/exfat/inode.c
++++ b/fs/exfat/inode.c
+@@ -114,10 +114,9 @@ static int exfat_map_cluster(struct inode *inode, unsigned int clu_offset,
+ 	unsigned int local_clu_offset = clu_offset;
+ 	unsigned int num_to_be_allocated = 0, num_clusters = 0;
+ 
+-	if (EXFAT_I(inode)->i_size_ondisk > 0)
++	if (ei->i_size_ondisk > 0)
+ 		num_clusters =
+-			EXFAT_B_TO_CLU_ROUND_UP(EXFAT_I(inode)->i_size_ondisk,
+-			sbi);
++			EXFAT_B_TO_CLU_ROUND_UP(ei->i_size_ondisk, sbi);
+ 
+ 	if (clu_offset >= num_clusters)
+ 		num_to_be_allocated = clu_offset - num_clusters + 1;
+@@ -415,10 +414,10 @@ static int exfat_write_end(struct file *file, struct address_space *mapping,
+ 
+ 	err = generic_write_end(file, mapping, pos, len, copied, pagep, fsdata);
+ 
+-	if (EXFAT_I(inode)->i_size_aligned < i_size_read(inode)) {
++	if (ei->i_size_aligned < i_size_read(inode)) {
+ 		exfat_fs_error(inode->i_sb,
+ 			"invalid size(size(%llu) > aligned(%llu)\n",
+-			i_size_read(inode), EXFAT_I(inode)->i_size_aligned);
++			i_size_read(inode), ei->i_size_aligned);
+ 		return -EIO;
+ 	}
+ 
+@@ -601,8 +600,8 @@ static int exfat_fill_inode(struct inode *inode, struct exfat_dir_entry *info)
+ 
+ 	exfat_save_attr(inode, info->attr);
+ 
+-	inode->i_blocks = ((i_size_read(inode) + (sbi->cluster_size - 1)) &
+-		~((loff_t)sbi->cluster_size - 1)) >> inode->i_blkbits;
++	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
++				inode->i_blkbits;
+ 	inode->i_mtime = info->mtime;
+ 	inode->i_ctime = info->mtime;
+ 	ei->i_crtime = info->crtime;
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index 2932b23a3b6c3..935f600509009 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -395,9 +395,9 @@ static int exfat_find_empty_entry(struct inode *inode,
+ 
+ 		/* directory inode should be updated in here */
+ 		i_size_write(inode, size);
+-		EXFAT_I(inode)->i_size_ondisk += sbi->cluster_size;
+-		EXFAT_I(inode)->i_size_aligned += sbi->cluster_size;
+-		EXFAT_I(inode)->flags = p_dir->flags;
++		ei->i_size_ondisk += sbi->cluster_size;
++		ei->i_size_aligned += sbi->cluster_size;
++		ei->flags = p_dir->flags;
+ 		inode->i_blocks += 1 << sbi->sect_per_clus_bits;
+ 	}
+ 
+diff --git a/fs/exfat/super.c b/fs/exfat/super.c
+index c6d8d2e534865..cd04c912f02e0 100644
+--- a/fs/exfat/super.c
++++ b/fs/exfat/super.c
+@@ -364,11 +364,11 @@ static int exfat_read_root(struct inode *inode)
+ 	inode->i_op = &exfat_dir_inode_operations;
+ 	inode->i_fop = &exfat_dir_operations;
+ 
+-	inode->i_blocks = ((i_size_read(inode) + (sbi->cluster_size - 1))
+-			& ~(sbi->cluster_size - 1)) >> inode->i_blkbits;
+-	EXFAT_I(inode)->i_pos = ((loff_t)sbi->root_dir << 32) | 0xffffffff;
+-	EXFAT_I(inode)->i_size_aligned = i_size_read(inode);
+-	EXFAT_I(inode)->i_size_ondisk = i_size_read(inode);
++	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
++				inode->i_blkbits;
++	ei->i_pos = ((loff_t)sbi->root_dir << 32) | 0xffffffff;
++	ei->i_size_aligned = i_size_read(inode);
++	ei->i_size_ondisk = i_size_read(inode);
+ 
+ 	exfat_save_attr(inode, ATTR_SUBDIR);
+ 	inode->i_mtime = inode->i_atime = inode->i_ctime = ei->i_crtime =
+diff --git a/include/linux/topology.h b/include/linux/topology.h
+index ad03df1cc2667..7634cd737061c 100644
+--- a/include/linux/topology.h
++++ b/include/linux/topology.h
+@@ -48,6 +48,7 @@ int arch_update_cpu_topology(void);
+ /* Conform to ACPI 2.0 SLIT distance definitions */
+ #define LOCAL_DISTANCE		10
+ #define REMOTE_DISTANCE		20
++#define DISTANCE_BITS           8
+ #ifndef node_distance
+ #define node_distance(from,to)	((from) == (to) ? LOCAL_DISTANCE : REMOTE_DISTANCE)
+ #endif
+diff --git a/include/net/netfilter/nf_queue.h b/include/net/netfilter/nf_queue.h
+index e770bba000664..b1d43894296a6 100644
+--- a/include/net/netfilter/nf_queue.h
++++ b/include/net/netfilter/nf_queue.h
+@@ -37,7 +37,7 @@ void nf_register_queue_handler(struct net *net, const struct nf_queue_handler *q
+ void nf_unregister_queue_handler(struct net *net);
+ void nf_reinject(struct nf_queue_entry *entry, unsigned int verdict);
+ 
+-void nf_queue_entry_get_refs(struct nf_queue_entry *entry);
++bool nf_queue_entry_get_refs(struct nf_queue_entry *entry);
+ void nf_queue_entry_free(struct nf_queue_entry *entry);
+ 
+ static inline void init_hashrandom(u32 *jhash_initval)
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 337d29875e518..4a2843441caf1 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1551,7 +1551,6 @@ void xfrm_sad_getinfo(struct net *net, struct xfrmk_sadinfo *si);
+ void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si);
+ u32 xfrm_replay_seqhi(struct xfrm_state *x, __be32 net_seq);
+ int xfrm_init_replay(struct xfrm_state *x);
+-u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu);
+ u32 xfrm_state_mtu(struct xfrm_state *x, int mtu);
+ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload);
+ int xfrm_init_state(struct xfrm_state *x);
+diff --git a/include/uapi/linux/input-event-codes.h b/include/uapi/linux/input-event-codes.h
+index 225ec87d4f228..7989d9483ea75 100644
+--- a/include/uapi/linux/input-event-codes.h
++++ b/include/uapi/linux/input-event-codes.h
+@@ -278,7 +278,8 @@
+ #define KEY_PAUSECD		201
+ #define KEY_PROG3		202
+ #define KEY_PROG4		203
+-#define KEY_DASHBOARD		204	/* AL Dashboard */
++#define KEY_ALL_APPLICATIONS	204	/* AC Desktop Show All Applications */
++#define KEY_DASHBOARD		KEY_ALL_APPLICATIONS
+ #define KEY_SUSPEND		205
+ #define KEY_CLOSE		206	/* AC Close */
+ #define KEY_PLAY		207
+@@ -612,6 +613,7 @@
+ #define KEY_ASSISTANT		0x247	/* AL Context-aware desktop assistant */
+ #define KEY_KBD_LAYOUT_NEXT	0x248	/* AC Next Keyboard Layout Select */
+ #define KEY_EMOJI_PICKER	0x249	/* Show/hide emoji picker (HUTRR101) */
++#define KEY_DICTATE		0x24a	/* Start or Stop Voice Dictation Session (HUTRR99) */
+ 
+ #define KEY_BRIGHTNESS_MIN		0x250	/* Set Brightness to Minimum */
+ #define KEY_BRIGHTNESS_MAX		0x251	/* Set Brightness to Maximum */
+diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
+index 2290c98b47cf8..90ddb49fce84e 100644
+--- a/include/uapi/linux/xfrm.h
++++ b/include/uapi/linux/xfrm.h
+@@ -506,6 +506,12 @@ struct xfrm_user_offload {
+ 	int				ifindex;
+ 	__u8				flags;
+ };
++/* This flag was exposed without any kernel code that supporting it.
++ * Unfortunately, strongswan has the code that uses sets this flag,
++ * which makes impossible to reuse this bit.
++ *
++ * So leave it here to make sure that it won't be reused by mistake.
++ */
+ #define XFRM_OFFLOAD_IPV6	1
+ #define XFRM_OFFLOAD_INBOUND	2
+ 
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index 244f32e98360f..658427c33b937 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -1646,7 +1646,11 @@ static void wake_nocb_gp(struct rcu_data *rdp, bool force,
+ 		rcu_nocb_unlock_irqrestore(rdp, flags);
+ 		return;
+ 	}
+-	del_timer(&rdp->nocb_timer);
++
++	if (READ_ONCE(rdp->nocb_defer_wakeup) > RCU_NOCB_WAKE_NOT) {
++		WRITE_ONCE(rdp->nocb_defer_wakeup, RCU_NOCB_WAKE_NOT);
++		del_timer(&rdp->nocb_timer);
++	}
+ 	rcu_nocb_unlock_irqrestore(rdp, flags);
+ 	raw_spin_lock_irqsave(&rdp_gp->nocb_gp_lock, flags);
+ 	if (force || READ_ONCE(rdp_gp->nocb_gp_sleep)) {
+@@ -2164,7 +2168,6 @@ static void do_nocb_deferred_wakeup_common(struct rcu_data *rdp)
+ 		return;
+ 	}
+ 	ndw = READ_ONCE(rdp->nocb_defer_wakeup);
+-	WRITE_ONCE(rdp->nocb_defer_wakeup, RCU_NOCB_WAKE_NOT);
+ 	wake_nocb_gp(rdp, ndw == RCU_NOCB_WAKE_FORCE, flags);
+ 	trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("DeferredWake"));
+ }
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index dd77702260869..ff2c6d3ba6c79 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -1549,66 +1549,58 @@ static void init_numa_topology_type(void)
+ 	}
+ }
+ 
++
++#define NR_DISTANCE_VALUES (1 << DISTANCE_BITS)
++
+ void sched_init_numa(void)
+ {
+-	int next_distance, curr_distance = node_distance(0, 0);
+ 	struct sched_domain_topology_level *tl;
+-	int level = 0;
+-	int i, j, k;
+-
+-	sched_domains_numa_distance = kzalloc(sizeof(int) * (nr_node_ids + 1), GFP_KERNEL);
+-	if (!sched_domains_numa_distance)
+-		return;
+-
+-	/* Includes NUMA identity node at level 0. */
+-	sched_domains_numa_distance[level++] = curr_distance;
+-	sched_domains_numa_levels = level;
++	unsigned long *distance_map;
++	int nr_levels = 0;
++	int i, j;
+ 
+ 	/*
+ 	 * O(nr_nodes^2) deduplicating selection sort -- in order to find the
+ 	 * unique distances in the node_distance() table.
+-	 *
+-	 * Assumes node_distance(0,j) includes all distances in
+-	 * node_distance(i,j) in order to avoid cubic time.
+ 	 */
+-	next_distance = curr_distance;
++	distance_map = bitmap_alloc(NR_DISTANCE_VALUES, GFP_KERNEL);
++	if (!distance_map)
++		return;
++
++	bitmap_zero(distance_map, NR_DISTANCE_VALUES);
+ 	for (i = 0; i < nr_node_ids; i++) {
+ 		for (j = 0; j < nr_node_ids; j++) {
+-			for (k = 0; k < nr_node_ids; k++) {
+-				int distance = node_distance(i, k);
+-
+-				if (distance > curr_distance &&
+-				    (distance < next_distance ||
+-				     next_distance == curr_distance))
+-					next_distance = distance;
+-
+-				/*
+-				 * While not a strong assumption it would be nice to know
+-				 * about cases where if node A is connected to B, B is not
+-				 * equally connected to A.
+-				 */
+-				if (sched_debug() && node_distance(k, i) != distance)
+-					sched_numa_warn("Node-distance not symmetric");
++			int distance = node_distance(i, j);
+ 
+-				if (sched_debug() && i && !find_numa_distance(distance))
+-					sched_numa_warn("Node-0 not representative");
++			if (distance < LOCAL_DISTANCE || distance >= NR_DISTANCE_VALUES) {
++				sched_numa_warn("Invalid distance value range");
++				return;
+ 			}
+-			if (next_distance != curr_distance) {
+-				sched_domains_numa_distance[level++] = next_distance;
+-				sched_domains_numa_levels = level;
+-				curr_distance = next_distance;
+-			} else break;
++
++			bitmap_set(distance_map, distance, 1);
+ 		}
++	}
++	/*
++	 * We can now figure out how many unique distance values there are and
++	 * allocate memory accordingly.
++	 */
++	nr_levels = bitmap_weight(distance_map, NR_DISTANCE_VALUES);
+ 
+-		/*
+-		 * In case of sched_debug() we verify the above assumption.
+-		 */
+-		if (!sched_debug())
+-			break;
++	sched_domains_numa_distance = kcalloc(nr_levels, sizeof(int), GFP_KERNEL);
++	if (!sched_domains_numa_distance) {
++		bitmap_free(distance_map);
++		return;
++	}
++
++	for (i = 0, j = 0; i < nr_levels; i++, j++) {
++		j = find_next_bit(distance_map, NR_DISTANCE_VALUES, j);
++		sched_domains_numa_distance[i] = j;
+ 	}
+ 
++	bitmap_free(distance_map);
++
+ 	/*
+-	 * 'level' contains the number of unique distances
++	 * 'nr_levels' contains the number of unique distances
+ 	 *
+ 	 * The sched_domains_numa_distance[] array includes the actual distance
+ 	 * numbers.
+@@ -1617,15 +1609,15 @@ void sched_init_numa(void)
+ 	/*
+ 	 * Here, we should temporarily reset sched_domains_numa_levels to 0.
+ 	 * If it fails to allocate memory for array sched_domains_numa_masks[][],
+-	 * the array will contain less then 'level' members. This could be
++	 * the array will contain less then 'nr_levels' members. This could be
+ 	 * dangerous when we use it to iterate array sched_domains_numa_masks[][]
+ 	 * in other functions.
+ 	 *
+-	 * We reset it to 'level' at the end of this function.
++	 * We reset it to 'nr_levels' at the end of this function.
+ 	 */
+ 	sched_domains_numa_levels = 0;
+ 
+-	sched_domains_numa_masks = kzalloc(sizeof(void *) * level, GFP_KERNEL);
++	sched_domains_numa_masks = kzalloc(sizeof(void *) * nr_levels, GFP_KERNEL);
+ 	if (!sched_domains_numa_masks)
+ 		return;
+ 
+@@ -1633,7 +1625,7 @@ void sched_init_numa(void)
+ 	 * Now for each level, construct a mask per node which contains all
+ 	 * CPUs of nodes that are that many hops away from us.
+ 	 */
+-	for (i = 0; i < level; i++) {
++	for (i = 0; i < nr_levels; i++) {
+ 		sched_domains_numa_masks[i] =
+ 			kzalloc(nr_node_ids * sizeof(void *), GFP_KERNEL);
+ 		if (!sched_domains_numa_masks[i])
+@@ -1641,12 +1633,17 @@ void sched_init_numa(void)
+ 
+ 		for (j = 0; j < nr_node_ids; j++) {
+ 			struct cpumask *mask = kzalloc(cpumask_size(), GFP_KERNEL);
++			int k;
++
+ 			if (!mask)
+ 				return;
+ 
+ 			sched_domains_numa_masks[i][j] = mask;
+ 
+ 			for_each_node(k) {
++				if (sched_debug() && (node_distance(j, k) != node_distance(k, j)))
++					sched_numa_warn("Node-distance not symmetric");
++
+ 				if (node_distance(j, k) > sched_domains_numa_distance[i])
+ 					continue;
+ 
+@@ -1658,7 +1655,7 @@ void sched_init_numa(void)
+ 	/* Compute default topology size */
+ 	for (i = 0; sched_domain_topology[i].mask; i++);
+ 
+-	tl = kzalloc((i + level + 1) *
++	tl = kzalloc((i + nr_levels + 1) *
+ 			sizeof(struct sched_domain_topology_level), GFP_KERNEL);
+ 	if (!tl)
+ 		return;
+@@ -1681,7 +1678,7 @@ void sched_init_numa(void)
+ 	/*
+ 	 * .. and append 'j' levels of NUMA goodness.
+ 	 */
+-	for (j = 1; j < level; i++, j++) {
++	for (j = 1; j < nr_levels; i++, j++) {
+ 		tl[i] = (struct sched_domain_topology_level){
+ 			.mask = sd_numa_mask,
+ 			.sd_flags = cpu_numa_flags,
+@@ -1693,8 +1690,8 @@ void sched_init_numa(void)
+ 
+ 	sched_domain_topology = tl;
+ 
+-	sched_domains_numa_levels = level;
+-	sched_max_numa_distance = sched_domains_numa_distance[level - 1];
++	sched_domains_numa_levels = nr_levels;
++	sched_max_numa_distance = sched_domains_numa_distance[nr_levels - 1];
+ 
+ 	init_numa_topology_type();
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index f9fad789321b0..71ed0616d83bd 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -233,7 +233,7 @@ static char trace_boot_options_buf[MAX_TRACER_SIZE] __initdata;
+ static int __init set_trace_boot_options(char *str)
+ {
+ 	strlcpy(trace_boot_options_buf, str, MAX_TRACER_SIZE);
+-	return 0;
++	return 1;
+ }
+ __setup("trace_options=", set_trace_boot_options);
+ 
+@@ -244,7 +244,7 @@ static int __init set_trace_boot_clock(char *str)
+ {
+ 	strlcpy(trace_boot_clock_buf, str, MAX_TRACER_SIZE);
+ 	trace_boot_clock = trace_boot_clock_buf;
+-	return 0;
++	return 1;
+ }
+ __setup("trace_clock=", set_trace_boot_clock);
+ 
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index 78a678eeb1409..a255ffbe342f3 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -5,6 +5,7 @@
+  * Copyright (C) 2009 Tom Zanussi <tzanussi@gmail.com>
+  */
+ 
++#include <linux/uaccess.h>
+ #include <linux/module.h>
+ #include <linux/ctype.h>
+ #include <linux/mutex.h>
+@@ -654,6 +655,52 @@ DEFINE_EQUALITY_PRED(32);
+ DEFINE_EQUALITY_PRED(16);
+ DEFINE_EQUALITY_PRED(8);
+ 
++/* user space strings temp buffer */
++#define USTRING_BUF_SIZE	1024
++
++struct ustring_buffer {
++	char		buffer[USTRING_BUF_SIZE];
++};
++
++static __percpu struct ustring_buffer *ustring_per_cpu;
++
++static __always_inline char *test_string(char *str)
++{
++	struct ustring_buffer *ubuf;
++	char *kstr;
++
++	if (!ustring_per_cpu)
++		return NULL;
++
++	ubuf = this_cpu_ptr(ustring_per_cpu);
++	kstr = ubuf->buffer;
++
++	/* For safety, do not trust the string pointer */
++	if (!strncpy_from_kernel_nofault(kstr, str, USTRING_BUF_SIZE))
++		return NULL;
++	return kstr;
++}
++
++static __always_inline char *test_ustring(char *str)
++{
++	struct ustring_buffer *ubuf;
++	char __user *ustr;
++	char *kstr;
++
++	if (!ustring_per_cpu)
++		return NULL;
++
++	ubuf = this_cpu_ptr(ustring_per_cpu);
++	kstr = ubuf->buffer;
++
++	/* user space address? */
++	ustr = (char __user *)str;
++	if (!strncpy_from_user_nofault(kstr, ustr, USTRING_BUF_SIZE))
++		return NULL;
++
++	return kstr;
++}
++
+ /* Filter predicate for fixed sized arrays of characters */
+ static int filter_pred_string(struct filter_pred *pred, void *event)
+ {
+@@ -667,19 +714,43 @@ static int filter_pred_string(struct filter_pred *pred, void *event)
+ 	return match;
+ }
+ 
+-/* Filter predicate for char * pointers */
+-static int filter_pred_pchar(struct filter_pred *pred, void *event)
++static __always_inline int filter_pchar(struct filter_pred *pred, char *str)
+ {
+-	char **addr = (char **)(event + pred->offset);
+ 	int cmp, match;
+-	int len = strlen(*addr) + 1;	/* including tailing '\0' */
++	int len;
+ 
+-	cmp = pred->regex.match(*addr, &pred->regex, len);
++	len = strlen(str) + 1;	/* including tailing '\0' */
++	cmp = pred->regex.match(str, &pred->regex, len);
+ 
+ 	match = cmp ^ pred->not;
+ 
+ 	return match;
+ }
++/* Filter predicate for char * pointers */
++static int filter_pred_pchar(struct filter_pred *pred, void *event)
++{
++	char **addr = (char **)(event + pred->offset);
++	char *str;
++
++	str = test_string(*addr);
++	if (!str)
++		return 0;
++
++	return filter_pchar(pred, str);
++}
++
++/* Filter predicate for char * pointers in user space*/
++static int filter_pred_pchar_user(struct filter_pred *pred, void *event)
++{
++	char **addr = (char **)(event + pred->offset);
++	char *str;
++
++	str = test_ustring(*addr);
++	if (!str)
++		return 0;
++
++	return filter_pchar(pred, str);
++}
+ 
+ /*
+  * Filter predicate for dynamic sized arrays of characters.
+@@ -1158,6 +1229,7 @@ static int parse_pred(const char *str, void *data,
+ 	struct filter_pred *pred = NULL;
+ 	char num_buf[24];	/* Big enough to hold an address */
+ 	char *field_name;
++	bool ustring = false;
+ 	char q;
+ 	u64 val;
+ 	int len;
+@@ -1192,6 +1264,12 @@ static int parse_pred(const char *str, void *data,
+ 		return -EINVAL;
+ 	}
+ 
++	/* See if the field is a user space string */
++	if ((len = str_has_prefix(str + i, ".ustring"))) {
++		ustring = true;
++		i += len;
++	}
++
+ 	while (isspace(str[i]))
+ 		i++;
+ 
+@@ -1320,8 +1398,20 @@ static int parse_pred(const char *str, void *data,
+ 
+ 		} else if (field->filter_type == FILTER_DYN_STRING)
+ 			pred->fn = filter_pred_strloc;
+-		else
+-			pred->fn = filter_pred_pchar;
++		else {
++
++			if (!ustring_per_cpu) {
++				/* Once allocated, keep it around for good */
++				ustring_per_cpu = alloc_percpu(struct ustring_buffer);
++				if (!ustring_per_cpu)
++					goto err_mem;
++			}
++
++			if (ustring)
++				pred->fn = filter_pred_pchar_user;
++			else
++				pred->fn = filter_pred_pchar;
++		}
+ 		/* go past the last quote */
+ 		i++;
+ 
+@@ -1387,6 +1477,9 @@ static int parse_pred(const char *str, void *data,
+ err_free:
+ 	kfree(pred);
+ 	return -EINVAL;
++err_mem:
++	kfree(pred);
++	return -ENOMEM;
+ }
+ 
+ enum {
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 41a9bd52e1fdc..eb7200699cf66 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1985,9 +1985,9 @@ parse_field(struct hist_trigger_data *hist_data, struct trace_event_file *file,
+ 			/*
+ 			 * For backward compatibility, if field_name
+ 			 * was "cpu", then we treat this the same as
+-			 * common_cpu.
++			 * common_cpu. This also works for "CPU".
+ 			 */
+-			if (strcmp(field_name, "cpu") == 0) {
++			if (field && field->filter_type == FILTER_CPU) {
+ 				*flags |= HIST_FIELD_FL_CPU;
+ 			} else {
+ 				hist_err(tr, HIST_ERR_FIELD_NOT_FOUND,
+@@ -4365,7 +4365,7 @@ static int create_tracing_map_fields(struct hist_trigger_data *hist_data)
+ 
+ 			if (hist_field->flags & HIST_FIELD_FL_STACKTRACE)
+ 				cmp_fn = tracing_map_cmp_none;
+-			else if (!field)
++			else if (!field || hist_field->flags & HIST_FIELD_FL_CPU)
+ 				cmp_fn = tracing_map_cmp_num(hist_field->size,
+ 							     hist_field->is_signed);
+ 			else if (is_string_field(field))
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index d8a9fc7941266..41dd17390c732 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -31,7 +31,7 @@ static int __init set_kprobe_boot_events(char *str)
+ 	strlcpy(kprobe_boot_events_buf, str, COMMAND_LINE_SIZE);
+ 	disable_tracing_selftest("running kprobe events");
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("kprobe_event=", set_kprobe_boot_events);
+ 
+diff --git a/mm/memfd.c b/mm/memfd.c
+index 2647c898990c8..fae4142f7d254 100644
+--- a/mm/memfd.c
++++ b/mm/memfd.c
+@@ -31,20 +31,28 @@
+ static void memfd_tag_pins(struct xa_state *xas)
+ {
+ 	struct page *page;
+-	unsigned int tagged = 0;
++	int latency = 0;
++	int cache_count;
+ 
+ 	lru_add_drain();
+ 
+ 	xas_lock_irq(xas);
+ 	xas_for_each(xas, page, ULONG_MAX) {
+-		if (xa_is_value(page))
+-			continue;
+-		page = find_subpage(page, xas->xa_index);
+-		if (page_count(page) - page_mapcount(page) > 1)
++		cache_count = 1;
++		if (!xa_is_value(page) &&
++		    PageTransHuge(page) && !PageHuge(page))
++			cache_count = HPAGE_PMD_NR;
++
++		if (!xa_is_value(page) &&
++		    page_count(page) - total_mapcount(page) != cache_count)
+ 			xas_set_mark(xas, MEMFD_TAG_PINNED);
++		if (cache_count != 1)
++			xas_set(xas, page->index + cache_count);
+ 
+-		if (++tagged % XA_CHECK_SCHED)
++		latency += cache_count;
++		if (latency < XA_CHECK_SCHED)
+ 			continue;
++		latency = 0;
+ 
+ 		xas_pause(xas);
+ 		xas_unlock_irq(xas);
+@@ -73,7 +81,8 @@ static int memfd_wait_for_pins(struct address_space *mapping)
+ 
+ 	error = 0;
+ 	for (scan = 0; scan <= LAST_SCAN; scan++) {
+-		unsigned int tagged = 0;
++		int latency = 0;
++		int cache_count;
+ 
+ 		if (!xas_marked(&xas, MEMFD_TAG_PINNED))
+ 			break;
+@@ -87,10 +96,14 @@ static int memfd_wait_for_pins(struct address_space *mapping)
+ 		xas_lock_irq(&xas);
+ 		xas_for_each_marked(&xas, page, ULONG_MAX, MEMFD_TAG_PINNED) {
+ 			bool clear = true;
+-			if (xa_is_value(page))
+-				continue;
+-			page = find_subpage(page, xas.xa_index);
+-			if (page_count(page) - page_mapcount(page) != 1) {
++
++			cache_count = 1;
++			if (!xa_is_value(page) &&
++			    PageTransHuge(page) && !PageHuge(page))
++				cache_count = HPAGE_PMD_NR;
++
++			if (!xa_is_value(page) && cache_count !=
++			    page_count(page) - total_mapcount(page)) {
+ 				/*
+ 				 * On the last scan, we clean up all those tags
+ 				 * we inserted; but make a note that we still
+@@ -103,8 +116,11 @@ static int memfd_wait_for_pins(struct address_space *mapping)
+ 			}
+ 			if (clear)
+ 				xas_clear_mark(&xas, MEMFD_TAG_PINNED);
+-			if (++tagged % XA_CHECK_SCHED)
++
++			latency += cache_count;
++			if (latency < XA_CHECK_SCHED)
+ 				continue;
++			latency = 0;
+ 
+ 			xas_pause(&xas);
+ 			xas_unlock_irq(&xas);
+diff --git a/mm/util.c b/mm/util.c
+index 90792e4eaa252..8904727607907 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -582,8 +582,10 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
+ 		return ret;
+ 
+ 	/* Don't even allow crazy sizes */
+-	if (WARN_ON_ONCE(size > INT_MAX))
++	if (unlikely(size > INT_MAX)) {
++		WARN_ON_ONCE(!(flags & __GFP_NOWARN));
+ 		return NULL;
++	}
+ 
+ 	return __vmalloc_node(size, 1, flags, node,
+ 			__builtin_return_address(0));
+diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c
+index 33904595fc56a..fe0898a9b4e82 100644
+--- a/net/batman-adv/hard-interface.c
++++ b/net/batman-adv/hard-interface.c
+@@ -151,22 +151,25 @@ static bool batadv_is_on_batman_iface(const struct net_device *net_dev)
+ 	struct net *net = dev_net(net_dev);
+ 	struct net_device *parent_dev;
+ 	struct net *parent_net;
++	int iflink;
+ 	bool ret;
+ 
+ 	/* check if this is a batman-adv mesh interface */
+ 	if (batadv_softif_is_valid(net_dev))
+ 		return true;
+ 
+-	/* no more parents..stop recursion */
+-	if (dev_get_iflink(net_dev) == 0 ||
+-	    dev_get_iflink(net_dev) == net_dev->ifindex)
++	iflink = dev_get_iflink(net_dev);
++	if (iflink == 0)
+ 		return false;
+ 
+ 	parent_net = batadv_getlink_net(net_dev, net);
+ 
++	/* iflink to itself, most likely physical device */
++	if (net == parent_net && iflink == net_dev->ifindex)
++		return false;
++
+ 	/* recurse over the parent device */
+-	parent_dev = __dev_get_by_index((struct net *)parent_net,
+-					dev_get_iflink(net_dev));
++	parent_dev = __dev_get_by_index((struct net *)parent_net, iflink);
+ 	/* if we got a NULL parent_dev there is something broken.. */
+ 	if (!parent_dev) {
+ 		pr_err("Cannot find parent device\n");
+@@ -216,14 +219,15 @@ static struct net_device *batadv_get_real_netdevice(struct net_device *netdev)
+ 	struct net_device *real_netdev = NULL;
+ 	struct net *real_net;
+ 	struct net *net;
+-	int ifindex;
++	int iflink;
+ 
+ 	ASSERT_RTNL();
+ 
+ 	if (!netdev)
+ 		return NULL;
+ 
+-	if (netdev->ifindex == dev_get_iflink(netdev)) {
++	iflink = dev_get_iflink(netdev);
++	if (iflink == 0) {
+ 		dev_hold(netdev);
+ 		return netdev;
+ 	}
+@@ -233,9 +237,16 @@ static struct net_device *batadv_get_real_netdevice(struct net_device *netdev)
+ 		goto out;
+ 
+ 	net = dev_net(hard_iface->soft_iface);
+-	ifindex = dev_get_iflink(netdev);
+ 	real_net = batadv_getlink_net(netdev, net);
+-	real_netdev = dev_get_by_index(real_net, ifindex);
++
++	/* iflink to itself, most likely physical device */
++	if (net == real_net && netdev->ifindex == iflink) {
++		real_netdev = netdev;
++		dev_hold(real_netdev);
++		goto out;
++	}
++
++	real_netdev = dev_get_by_index(real_net, iflink);
+ 
+ out:
+ 	if (hard_iface)
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index fccc42c8ca0c7..48b6438f2a3d9 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3690,6 +3690,7 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
+ 		list_skb = list_skb->next;
+ 
+ 		err = 0;
++		delta_truesize += nskb->truesize;
+ 		if (skb_shared(nskb)) {
+ 			tmp = skb_clone(nskb, GFP_ATOMIC);
+ 			if (tmp) {
+@@ -3714,7 +3715,6 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
+ 		tail = nskb;
+ 
+ 		delta_len += nskb->len;
+-		delta_truesize += nskb->truesize;
+ 
+ 		skb_push(nskb, -skb_network_offset(nskb) + offset);
+ 
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 5dd5569f89bf5..e4bb89599b44b 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -943,7 +943,7 @@ static int sk_psock_verdict_recv(read_descriptor_t *desc, struct sk_buff *skb,
+ 	struct sk_psock *psock;
+ 	struct bpf_prog *prog;
+ 	int ret = __SK_DROP;
+-	int len = skb->len;
++	int len = orig_len;
+ 
+ 	/* clone here so sk_eat_skb() in tcp_read_sock does not drop our data */
+ 	skb = skb_clone(skb, GFP_ATOMIC);
+diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c
+index a352ce4f878a3..2535d3dfb92c8 100644
+--- a/net/dcb/dcbnl.c
++++ b/net/dcb/dcbnl.c
+@@ -2063,10 +2063,54 @@ u8 dcb_ieee_getapp_default_prio_mask(const struct net_device *dev)
+ }
+ EXPORT_SYMBOL(dcb_ieee_getapp_default_prio_mask);
+ 
++static void dcbnl_flush_dev(struct net_device *dev)
++{
++	struct dcb_app_type *itr, *tmp;
++
++	spin_lock_bh(&dcb_lock);
++
++	list_for_each_entry_safe(itr, tmp, &dcb_app_list, list) {
++		if (itr->ifindex == dev->ifindex) {
++			list_del(&itr->list);
++			kfree(itr);
++		}
++	}
++
++	spin_unlock_bh(&dcb_lock);
++}
++
++static int dcbnl_netdevice_event(struct notifier_block *nb,
++				 unsigned long event, void *ptr)
++{
++	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
++
++	switch (event) {
++	case NETDEV_UNREGISTER:
++		if (!dev->dcbnl_ops)
++			return NOTIFY_DONE;
++
++		dcbnl_flush_dev(dev);
++
++		return NOTIFY_OK;
++	default:
++		return NOTIFY_DONE;
++	}
++}
++
++static struct notifier_block dcbnl_nb __read_mostly = {
++	.notifier_call  = dcbnl_netdevice_event,
++};
++
+ static int __init dcbnl_init(void)
+ {
++	int err;
++
+ 	INIT_LIST_HEAD(&dcb_app_list);
+ 
++	err = register_netdevice_notifier(&dcbnl_nb);
++	if (err)
++		return err;
++
+ 	rtnl_register(PF_UNSPEC, RTM_GETDCB, dcb_doit, NULL, 0);
+ 	rtnl_register(PF_UNSPEC, RTM_SETDCB, dcb_doit, NULL, 0);
+ 
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index ed9857b2875dc..4b834bbf95e07 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -673,7 +673,7 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
+ 		struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
+ 		u32 padto;
+ 
+-		padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
++		padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached));
+ 		if (skb->len < padto)
+ 			esp.tfclen = padto - skb->len;
+ 	}
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 4dde49e628fab..072c348237536 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3712,6 +3712,7 @@ static int addrconf_ifdown(struct net_device *dev, bool unregister)
+ 	struct inet6_dev *idev;
+ 	struct inet6_ifaddr *ifa, *tmp;
+ 	bool keep_addr = false;
++	bool was_ready;
+ 	int state, i;
+ 
+ 	ASSERT_RTNL();
+@@ -3777,7 +3778,10 @@ restart:
+ 
+ 	addrconf_del_rs_timer(idev);
+ 
+-	/* Step 2: clear flags for stateless addrconf */
++	/* Step 2: clear flags for stateless addrconf, repeated down
++	 *         detection
++	 */
++	was_ready = idev->if_flags & IF_READY;
+ 	if (!unregister)
+ 		idev->if_flags &= ~(IF_RS_SENT|IF_RA_RCVD|IF_READY);
+ 
+@@ -3851,7 +3855,7 @@ restart:
+ 	if (unregister) {
+ 		ipv6_ac_destroy_dev(idev);
+ 		ipv6_mc_destroy_dev(idev);
+-	} else {
++	} else if (was_ready) {
+ 		ipv6_mc_down(idev);
+ 	}
+ 
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 7f2ffc7b1f75a..fc8acb15dcfbb 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -708,7 +708,7 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
+ 		struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
+ 		u32 padto;
+ 
+-		padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
++		padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached));
+ 		if (skb->len < padto)
+ 			esp.tfclen = padto - skb->len;
+ 	}
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 54cabf1c2ae15..d6f2126f46184 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1432,8 +1432,6 @@ static int ip6_setup_cork(struct sock *sk, struct inet_cork_full *cork,
+ 		if (np->frag_size)
+ 			mtu = np->frag_size;
+ 	}
+-	if (mtu < IPV6_MIN_MTU)
+-		return -EINVAL;
+ 	cork->base.fragsize = mtu;
+ 	cork->base.gso_size = ipc6->gso_size;
+ 	cork->base.tx_flags = 0;
+@@ -1495,8 +1493,6 @@ static int __ip6_append_data(struct sock *sk,
+ 
+ 	fragheaderlen = sizeof(struct ipv6hdr) + rt->rt6i_nfheader_len +
+ 			(opt ? opt->opt_nflen : 0);
+-	maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen -
+-		     sizeof(struct frag_hdr);
+ 
+ 	headersize = sizeof(struct ipv6hdr) +
+ 		     (opt ? opt->opt_flen + opt->opt_nflen : 0) +
+@@ -1504,6 +1500,13 @@ static int __ip6_append_data(struct sock *sk,
+ 		      sizeof(struct frag_hdr) : 0) +
+ 		     rt->rt6i_nfheader_len;
+ 
++	if (mtu < fragheaderlen ||
++	    ((mtu - fragheaderlen) & ~7) + fragheaderlen < sizeof(struct frag_hdr))
++		goto emsgsize;
++
++	maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen -
++		     sizeof(struct frag_hdr);
++
+ 	/* as per RFC 7112 section 5, the entire IPv6 Header Chain must fit
+ 	 * the first fragment
+ 	 */
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 7f2be08b72a56..fe8f586886b41 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -374,7 +374,7 @@ struct ieee80211_mgd_auth_data {
+ 
+ 	u8 key[WLAN_KEY_LEN_WEP104];
+ 	u8 key_len, key_idx;
+-	bool done;
++	bool done, waiting;
+ 	bool peer_confirmed;
+ 	bool timeout_started;
+ 
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 778bf262418b5..0dba353d3f8fe 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -37,6 +37,7 @@
+ #define IEEE80211_AUTH_TIMEOUT_SAE	(HZ * 2)
+ #define IEEE80211_AUTH_MAX_TRIES	3
+ #define IEEE80211_AUTH_WAIT_ASSOC	(HZ * 5)
++#define IEEE80211_AUTH_WAIT_SAE_RETRY	(HZ * 2)
+ #define IEEE80211_ASSOC_TIMEOUT		(HZ / 5)
+ #define IEEE80211_ASSOC_TIMEOUT_LONG	(HZ / 2)
+ #define IEEE80211_ASSOC_TIMEOUT_SHORT	(HZ / 10)
+@@ -2999,8 +3000,15 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+ 		    (status_code == WLAN_STATUS_ANTI_CLOG_REQUIRED ||
+ 		     (auth_transaction == 1 &&
+ 		      (status_code == WLAN_STATUS_SAE_HASH_TO_ELEMENT ||
+-		       status_code == WLAN_STATUS_SAE_PK))))
++		       status_code == WLAN_STATUS_SAE_PK)))) {
++			/* waiting for userspace now */
++			ifmgd->auth_data->waiting = true;
++			ifmgd->auth_data->timeout =
++				jiffies + IEEE80211_AUTH_WAIT_SAE_RETRY;
++			ifmgd->auth_data->timeout_started = true;
++			run_again(sdata, ifmgd->auth_data->timeout);
+ 			return;
++		}
+ 
+ 		sdata_info(sdata, "%pM denied authentication (status %d)\n",
+ 			   mgmt->sa, status_code);
+@@ -4526,10 +4534,10 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
+ 
+ 	if (ifmgd->auth_data && ifmgd->auth_data->timeout_started &&
+ 	    time_after(jiffies, ifmgd->auth_data->timeout)) {
+-		if (ifmgd->auth_data->done) {
++		if (ifmgd->auth_data->done || ifmgd->auth_data->waiting) {
+ 			/*
+-			 * ok ... we waited for assoc but userspace didn't,
+-			 * so let's just kill the auth data
++			 * ok ... we waited for assoc or continuation but
++			 * userspace didn't do it, so kill the auth data
+ 			 */
+ 			ieee80211_destroy_auth_data(sdata, false);
+ 		} else if (ieee80211_auth(sdata)) {
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index d27c444a19ed1..1e7614abd947d 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -2910,13 +2910,13 @@ ieee80211_rx_h_mesh_fwding(struct ieee80211_rx_data *rx)
+ 	    ether_addr_equal(sdata->vif.addr, hdr->addr3))
+ 		return RX_CONTINUE;
+ 
+-	ac = ieee80211_select_queue_80211(sdata, skb, hdr);
++	ac = ieee802_1d_to_ac[skb->priority];
+ 	q = sdata->vif.hw_queue[ac];
+ 	if (ieee80211_queue_stopped(&local->hw, q)) {
+ 		IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, dropped_frames_congestion);
+ 		return RX_DROP_MONITOR;
+ 	}
+-	skb_set_queue_mapping(skb, q);
++	skb_set_queue_mapping(skb, ac);
+ 
+ 	if (!--mesh_hdr->ttl) {
+ 		if (!is_multicast_ether_addr(hdr->addr1))
+diff --git a/net/netfilter/core.c b/net/netfilter/core.c
+index 63d032191e626..60332fdb6dd44 100644
+--- a/net/netfilter/core.c
++++ b/net/netfilter/core.c
+@@ -406,14 +406,15 @@ static int __nf_register_net_hook(struct net *net, int pf,
+ 	p = nf_entry_dereference(*pp);
+ 	new_hooks = nf_hook_entries_grow(p, reg);
+ 
+-	if (!IS_ERR(new_hooks))
++	if (!IS_ERR(new_hooks)) {
++		hooks_validate(new_hooks);
+ 		rcu_assign_pointer(*pp, new_hooks);
++	}
+ 
+ 	mutex_unlock(&nf_hook_mutex);
+ 	if (IS_ERR(new_hooks))
+ 		return PTR_ERR(new_hooks);
+ 
+-	hooks_validate(new_hooks);
+ #ifdef CONFIG_NETFILTER_INGRESS
+ 	if (nf_ingress_hook(reg, pf))
+ 		net_inc_ingress_queue();
+diff --git a/net/netfilter/nf_queue.c b/net/netfilter/nf_queue.c
+index bbd1209694b89..bb8607ff94bc7 100644
+--- a/net/netfilter/nf_queue.c
++++ b/net/netfilter/nf_queue.c
+@@ -46,6 +46,15 @@ void nf_unregister_queue_handler(struct net *net)
+ }
+ EXPORT_SYMBOL(nf_unregister_queue_handler);
+ 
++static void nf_queue_sock_put(struct sock *sk)
++{
++#ifdef CONFIG_INET
++	sock_gen_put(sk);
++#else
++	sock_put(sk);
++#endif
++}
++
+ static void nf_queue_entry_release_refs(struct nf_queue_entry *entry)
+ {
+ 	struct nf_hook_state *state = &entry->state;
+@@ -56,7 +65,7 @@ static void nf_queue_entry_release_refs(struct nf_queue_entry *entry)
+ 	if (state->out)
+ 		dev_put(state->out);
+ 	if (state->sk)
+-		sock_put(state->sk);
++		nf_queue_sock_put(state->sk);
+ 
+ #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+ 	if (entry->physin)
+@@ -91,16 +100,17 @@ static void __nf_queue_entry_init_physdevs(struct nf_queue_entry *entry)
+ }
+ 
+ /* Bump dev refs so they don't vanish while packet is out */
+-void nf_queue_entry_get_refs(struct nf_queue_entry *entry)
++bool nf_queue_entry_get_refs(struct nf_queue_entry *entry)
+ {
+ 	struct nf_hook_state *state = &entry->state;
+ 
++	if (state->sk && !refcount_inc_not_zero(&state->sk->sk_refcnt))
++		return false;
++
+ 	if (state->in)
+ 		dev_hold(state->in);
+ 	if (state->out)
+ 		dev_hold(state->out);
+-	if (state->sk)
+-		sock_hold(state->sk);
+ 
+ #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+ 	if (entry->physin)
+@@ -108,6 +118,7 @@ void nf_queue_entry_get_refs(struct nf_queue_entry *entry)
+ 	if (entry->physout)
+ 		dev_hold(entry->physout);
+ #endif
++	return true;
+ }
+ EXPORT_SYMBOL_GPL(nf_queue_entry_get_refs);
+ 
+@@ -178,6 +189,18 @@ static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state,
+ 		break;
+ 	}
+ 
++	if (skb_sk_is_prefetched(skb)) {
++		struct sock *sk = skb->sk;
++
++		if (!sk_is_refcounted(sk)) {
++			if (!refcount_inc_not_zero(&sk->sk_refcnt))
++				return -ENOTCONN;
++
++			/* drop refcount on skb_orphan */
++			skb->destructor = sock_edemux;
++		}
++	}
++
+ 	entry = kmalloc(sizeof(*entry) + route_key_size, GFP_ATOMIC);
+ 	if (!entry)
+ 		return -ENOMEM;
+@@ -196,7 +219,10 @@ static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state,
+ 
+ 	__nf_queue_entry_init_physdevs(entry);
+ 
+-	nf_queue_entry_get_refs(entry);
++	if (!nf_queue_entry_get_refs(entry)) {
++		kfree(entry);
++		return -ENOTCONN;
++	}
+ 
+ 	switch (entry->state.pf) {
+ 	case AF_INET:
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index b0358f30947ea..1640da5c50776 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -712,9 +712,15 @@ static struct nf_queue_entry *
+ nf_queue_entry_dup(struct nf_queue_entry *e)
+ {
+ 	struct nf_queue_entry *entry = kmemdup(e, e->size, GFP_ATOMIC);
+-	if (entry)
+-		nf_queue_entry_get_refs(entry);
+-	return entry;
++
++	if (!entry)
++		return NULL;
++
++	if (nf_queue_entry_get_refs(entry))
++		return entry;
++
++	kfree(entry);
++	return NULL;
+ }
+ 
+ #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 99b902e410c49..4f16d406ad8ea 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -180,7 +180,7 @@ static int smc_release(struct socket *sock)
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct smc_sock *smc;
+-	int rc = 0;
++	int old_state, rc = 0;
+ 
+ 	if (!sk)
+ 		goto out;
+@@ -188,8 +188,10 @@ static int smc_release(struct socket *sock)
+ 	sock_hold(sk); /* sock_put below */
+ 	smc = smc_sk(sk);
+ 
++	old_state = sk->sk_state;
++
+ 	/* cleanup for a dangling non-blocking connect */
+-	if (smc->connect_nonblock && sk->sk_state == SMC_INIT)
++	if (smc->connect_nonblock && old_state == SMC_INIT)
+ 		tcp_abort(smc->clcsock->sk, ECONNABORTED);
+ 
+ 	if (cancel_work_sync(&smc->connect_work))
+@@ -203,6 +205,10 @@ static int smc_release(struct socket *sock)
+ 	else
+ 		lock_sock(sk);
+ 
++	if (old_state == SMC_INIT && sk->sk_state == SMC_ACTIVE &&
++	    !smc->use_fallback)
++		smc_close_active_abort(smc);
++
+ 	rc = __smc_release(smc);
+ 
+ 	/* detach socket */
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index 4eb9ef9c28003..d69aac6c1fcea 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -662,8 +662,8 @@ void smc_conn_free(struct smc_connection *conn)
+ 			cancel_work_sync(&conn->abort_work);
+ 	}
+ 	if (!list_empty(&lgr->list)) {
+-		smc_lgr_unregister_conn(conn);
+ 		smc_buf_unuse(conn, lgr); /* allow buffer reuse */
++		smc_lgr_unregister_conn(conn);
+ 	}
+ 
+ 	if (!lgr->conns_num)
+@@ -1316,7 +1316,8 @@ int smc_conn_create(struct smc_sock *smc, struct smc_init_info *ini)
+ 		    (ini->smcd_version == SMC_V2 ||
+ 		     lgr->vlan_id == ini->vlan_id) &&
+ 		    (role == SMC_CLNT || ini->is_smcd ||
+-		     lgr->conns_num < SMC_RMBS_PER_LGR_MAX)) {
++		    (lgr->conns_num < SMC_RMBS_PER_LGR_MAX &&
++		      !bitmap_full(lgr->rtokens_used_mask, SMC_RMBS_PER_LGR_MAX)))) {
+ 			/* link group found */
+ 			ini->first_contact_local = 0;
+ 			conn->lgr = lgr;
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index d8a2f424786fc..6f91b9a306dc3 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -2280,7 +2280,7 @@ static bool tipc_crypto_key_rcv(struct tipc_crypto *rx, struct tipc_msg *hdr)
+ 	struct tipc_crypto *tx = tipc_net(rx->net)->crypto_tx;
+ 	struct tipc_aead_key *skey = NULL;
+ 	u16 key_gen = msg_key_gen(hdr);
+-	u16 size = msg_data_sz(hdr);
++	u32 size = msg_data_sz(hdr);
+ 	u8 *data = msg_data(hdr);
+ 	unsigned int keylen;
+ 
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 8fb0478888fb2..07bd7b00b56d4 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -12930,6 +12930,9 @@ static int handle_nan_filter(struct nlattr *attr_filter,
+ 	i = 0;
+ 	nla_for_each_nested(attr, attr_filter, rem) {
+ 		filter[i].filter = nla_memdup(attr, GFP_KERNEL);
++		if (!filter[i].filter)
++			goto err;
++
+ 		filter[i].len = nla_len(attr);
+ 		i++;
+ 	}
+@@ -12942,6 +12945,15 @@ static int handle_nan_filter(struct nlattr *attr_filter,
+ 	}
+ 
+ 	return 0;
++
++err:
++	i = 0;
++	nla_for_each_nested(attr, attr_filter, rem) {
++		kfree(filter[i].filter);
++		i++;
++	}
++	kfree(filter);
++	return -ENOMEM;
+ }
+ 
+ static int nl80211_nan_add_func(struct sk_buff *skb,
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index e843b0d9e2a61..c255aac6b816b 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -223,6 +223,9 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ 	if (x->encap || x->tfcpad)
+ 		return -EINVAL;
+ 
++	if (xuo->flags & ~(XFRM_OFFLOAD_IPV6 | XFRM_OFFLOAD_INBOUND))
++		return -EINVAL;
++
+ 	dev = dev_get_by_index(net, xuo->ifindex);
+ 	if (!dev) {
+ 		if (!(xuo->flags & XFRM_OFFLOAD_INBOUND)) {
+@@ -261,7 +264,8 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ 	xso->dev = dev;
+ 	xso->real_dev = dev;
+ 	xso->num_exthdrs = 1;
+-	xso->flags = xuo->flags;
++	/* Don't forward bit that is not implemented */
++	xso->flags = xuo->flags & ~XFRM_OFFLOAD_IPV6;
+ 
+ 	err = dev->xfrmdev_ops->xdo_dev_state_add(x);
+ 	if (err) {
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+index e1fae61a5bb90..4420c8fd318a6 100644
+--- a/net/xfrm/xfrm_interface.c
++++ b/net/xfrm/xfrm_interface.c
+@@ -679,12 +679,12 @@ static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
+ 	struct net *net = xi->net;
+ 	struct xfrm_if_parms p = {};
+ 
++	xfrmi_netlink_parms(data, &p);
+ 	if (!p.if_id) {
+ 		NL_SET_ERR_MSG(extack, "if_id must be non zero");
+ 		return -EINVAL;
+ 	}
+ 
+-	xfrmi_netlink_parms(data, &p);
+ 	xi = xfrmi_locate(net, &p);
+ 	if (!xi) {
+ 		xi = netdev_priv(dev);
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 65e2805fa113a..f5b846a2edcd7 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -2537,7 +2537,7 @@ void xfrm_state_delete_tunnel(struct xfrm_state *x)
+ }
+ EXPORT_SYMBOL(xfrm_state_delete_tunnel);
+ 
+-u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu)
++u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
+ {
+ 	const struct xfrm_type *type = READ_ONCE(x->type);
+ 	struct crypto_aead *aead;
+@@ -2568,17 +2568,7 @@ u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu)
+ 	return ((mtu - x->props.header_len - crypto_aead_authsize(aead) -
+ 		 net_adj) & ~(blksize - 1)) + net_adj - 2;
+ }
+-EXPORT_SYMBOL_GPL(__xfrm_state_mtu);
+-
+-u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
+-{
+-	mtu = __xfrm_state_mtu(x, mtu);
+-
+-	if (x->props.family == AF_INET6 && mtu < IPV6_MIN_MTU)
+-		return IPV6_MIN_MTU;
+-
+-	return mtu;
+-}
++EXPORT_SYMBOL_GPL(xfrm_state_mtu);
+ 
+ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload)
+ {
+diff --git a/sound/soc/codecs/cs4265.c b/sound/soc/codecs/cs4265.c
+index d76be44f46b40..36b9e4fab099b 100644
+--- a/sound/soc/codecs/cs4265.c
++++ b/sound/soc/codecs/cs4265.c
+@@ -150,7 +150,6 @@ static const struct snd_kcontrol_new cs4265_snd_controls[] = {
+ 	SOC_SINGLE("E to F Buffer Disable Switch", CS4265_SPDIF_CTL1,
+ 				6, 1, 0),
+ 	SOC_ENUM("C Data Access", cam_mode_enum),
+-	SOC_SINGLE("SPDIF Switch", CS4265_SPDIF_CTL2, 5, 1, 1),
+ 	SOC_SINGLE("Validity Bit Control Switch", CS4265_SPDIF_CTL2,
+ 				3, 1, 0),
+ 	SOC_ENUM("SPDIF Mono/Stereo", spdif_mono_stereo_enum),
+@@ -186,7 +185,7 @@ static const struct snd_soc_dapm_widget cs4265_dapm_widgets[] = {
+ 
+ 	SND_SOC_DAPM_SWITCH("Loopback", SND_SOC_NOPM, 0, 0,
+ 			&loopback_ctl),
+-	SND_SOC_DAPM_SWITCH("SPDIF", SND_SOC_NOPM, 0, 0,
++	SND_SOC_DAPM_SWITCH("SPDIF", CS4265_SPDIF_CTL2, 5, 1,
+ 			&spdif_switch),
+ 	SND_SOC_DAPM_SWITCH("DAC", CS4265_PWRCTL, 1, 1,
+ 			&dac_switch),
+diff --git a/sound/soc/codecs/rt5668.c b/sound/soc/codecs/rt5668.c
+index bc69adc9c8b70..e625df57c69e5 100644
+--- a/sound/soc/codecs/rt5668.c
++++ b/sound/soc/codecs/rt5668.c
+@@ -1022,11 +1022,13 @@ static void rt5668_jack_detect_handler(struct work_struct *work)
+ 		container_of(work, struct rt5668_priv, jack_detect_work.work);
+ 	int val, btn_type;
+ 
+-	while (!rt5668->component)
+-		usleep_range(10000, 15000);
+-
+-	while (!rt5668->component->card->instantiated)
+-		usleep_range(10000, 15000);
++	if (!rt5668->component || !rt5668->component->card ||
++	    !rt5668->component->card->instantiated) {
++		/* card not yet ready, try later */
++		mod_delayed_work(system_power_efficient_wq,
++				 &rt5668->jack_detect_work, msecs_to_jiffies(15));
++		return;
++	}
+ 
+ 	mutex_lock(&rt5668->calibrate_mutex);
+ 
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index aaef76cc151fa..113ed00ddf1e5 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -1081,11 +1081,13 @@ void rt5682_jack_detect_handler(struct work_struct *work)
+ 		container_of(work, struct rt5682_priv, jack_detect_work.work);
+ 	int val, btn_type;
+ 
+-	while (!rt5682->component)
+-		usleep_range(10000, 15000);
+-
+-	while (!rt5682->component->card->instantiated)
+-		usleep_range(10000, 15000);
++	if (!rt5682->component || !rt5682->component->card ||
++	    !rt5682->component->card->instantiated) {
++		/* card not yet ready, try later */
++		mod_delayed_work(system_power_efficient_wq,
++				 &rt5682->jack_detect_work, msecs_to_jiffies(15));
++		return;
++	}
+ 
+ 	mutex_lock(&rt5682->calibrate_mutex);
+ 
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index caa8d45ebb209..2bc9fa6a34b8f 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -317,7 +317,7 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+ 		mask = BIT(sign_bit + 1) - 1;
+ 
+ 	val = ucontrol->value.integer.value[0];
+-	if (mc->platform_max && val > mc->platform_max)
++	if (mc->platform_max && ((int)val + min) > mc->platform_max)
+ 		return -EINVAL;
+ 	if (val > max - min)
+ 		return -EINVAL;
+@@ -330,7 +330,7 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+ 	val = val << shift;
+ 	if (snd_soc_volsw_is_stereo(mc)) {
+ 		val2 = ucontrol->value.integer.value[1];
+-		if (mc->platform_max && val2 > mc->platform_max)
++		if (mc->platform_max && ((int)val2 + min) > mc->platform_max)
+ 			return -EINVAL;
+ 		if (val2 > max - min)
+ 			return -EINVAL;
+diff --git a/sound/x86/intel_hdmi_audio.c b/sound/x86/intel_hdmi_audio.c
+index 9f9fcd2749f22..dbaa43ffbbd2d 100644
+--- a/sound/x86/intel_hdmi_audio.c
++++ b/sound/x86/intel_hdmi_audio.c
+@@ -1276,7 +1276,7 @@ static int had_pcm_mmap(struct snd_pcm_substream *substream,
+ {
+ 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ 	return remap_pfn_range(vma, vma->vm_start,
+-			substream->dma_buffer.addr >> PAGE_SHIFT,
++			substream->runtime->dma_addr >> PAGE_SHIFT,
+ 			vma->vm_end - vma->vm_start, vma->vm_page_prot);
+ }
+ 
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/tc_police_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/tc_police_scale.sh
+index 3e3e06ea5703c..86e787895f78b 100644
+--- a/tools/testing/selftests/drivers/net/mlxsw/tc_police_scale.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/tc_police_scale.sh
+@@ -60,7 +60,8 @@ __tc_police_test()
+ 
+ 	tc_police_rules_create $count $should_fail
+ 
+-	offload_count=$(tc filter show dev $swp1 ingress | grep in_hw | wc -l)
++	offload_count=$(tc -j filter show dev $swp1 ingress |
++			jq "[.[] | select(.options.in_hw == true)] | length")
+ 	((offload_count == count))
+ 	check_err_fail $should_fail $? "tc police offload count"
+ }
+diff --git a/tools/testing/selftests/seccomp/Makefile b/tools/testing/selftests/seccomp/Makefile
+index 0ebfe8b0e147f..585f7a0c10cbe 100644
+--- a/tools/testing/selftests/seccomp/Makefile
++++ b/tools/testing/selftests/seccomp/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-CFLAGS += -Wl,-no-as-needed -Wall
++CFLAGS += -Wl,-no-as-needed -Wall -isystem ../../../../usr/include/
+ LDFLAGS += -lpthread
+ 
+ TEST_GEN_PROGS := seccomp_bpf seccomp_benchmark


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-03-11 11:31 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-03-11 11:31 UTC (permalink / raw
  To: gentoo-commits

commit:     549c51ef7f24622f906ad3941b9dcea359ada5e1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 11 11:30:48 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar 11 11:30:48 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=549c51ef

Linux patch 5.10.105

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1104_linux-5.10.105.patch | 3857 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3861 insertions(+)

diff --git a/0000_README b/0000_README
index f4f4b91a..6fbdd908 100644
--- a/0000_README
+++ b/0000_README
@@ -459,6 +459,10 @@ Patch:  1103_linux-5.10.104.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.104
 
+Patch:  1104_linux-5.10.105.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.105
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1104_linux-5.10.105.patch b/1104_linux-5.10.105.patch
new file mode 100644
index 00000000..aa2a1fc7
--- /dev/null
+++ b/1104_linux-5.10.105.patch
@@ -0,0 +1,3857 @@
+diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
+index 985181dba0bac..6bd97cd50d625 100644
+--- a/Documentation/admin-guide/hw-vuln/spectre.rst
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -60,8 +60,8 @@ privileged data touched during the speculative execution.
+ Spectre variant 1 attacks take advantage of speculative execution of
+ conditional branches, while Spectre variant 2 attacks use speculative
+ execution of indirect branches to leak privileged memory.
+-See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
+-:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
++See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[6] <spec_ref6>`
++:ref:`[7] <spec_ref7>` :ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
+ 
+ Spectre variant 1 (Bounds Check Bypass)
+ ---------------------------------------
+@@ -131,6 +131,19 @@ steer its indirect branch speculations to gadget code, and measure the
+ speculative execution's side effects left in level 1 cache to infer the
+ victim's data.
+ 
++Yet another variant 2 attack vector is for the attacker to poison the
++Branch History Buffer (BHB) to speculatively steer an indirect branch
++to a specific Branch Target Buffer (BTB) entry, even if the entry isn't
++associated with the source address of the indirect branch. Specifically,
++the BHB might be shared across privilege levels even in the presence of
++Enhanced IBRS.
++
++Currently the only known real-world BHB attack vector is via
++unprivileged eBPF. Therefore, it's highly recommended to not enable
++unprivileged eBPF, especially when eIBRS is used (without retpolines).
++For a full mitigation against BHB attacks, it's recommended to use
++retpolines (or eIBRS combined with retpolines).
++
+ Attack scenarios
+ ----------------
+ 
+@@ -364,13 +377,15 @@ The possible values in this file are:
+ 
+   - Kernel status:
+ 
+-  ====================================  =================================
+-  'Not affected'                        The processor is not vulnerable
+-  'Vulnerable'                          Vulnerable, no mitigation
+-  'Mitigation: Full generic retpoline'  Software-focused mitigation
+-  'Mitigation: Full AMD retpoline'      AMD-specific software mitigation
+-  'Mitigation: Enhanced IBRS'           Hardware-focused mitigation
+-  ====================================  =================================
++  ========================================  =================================
++  'Not affected'                            The processor is not vulnerable
++  'Mitigation: None'                        Vulnerable, no mitigation
++  'Mitigation: Retpolines'                  Use Retpoline thunks
++  'Mitigation: LFENCE'                      Use LFENCE instructions
++  'Mitigation: Enhanced IBRS'               Hardware-focused mitigation
++  'Mitigation: Enhanced IBRS + Retpolines'  Hardware-focused + Retpolines
++  'Mitigation: Enhanced IBRS + LFENCE'      Hardware-focused + LFENCE
++  ========================================  =================================
+ 
+   - Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is
+     used to protect against Spectre variant 2 attacks when calling firmware (x86 only).
+@@ -584,12 +599,13 @@ kernel command line.
+ 
+ 		Specific mitigations can also be selected manually:
+ 
+-		retpoline
+-					replace indirect branches
+-		retpoline,generic
+-					google's original retpoline
+-		retpoline,amd
+-					AMD-specific minimal thunk
++                retpoline               auto pick between generic,lfence
++                retpoline,generic       Retpolines
++                retpoline,lfence        LFENCE; indirect branch
++                retpoline,amd           alias for retpoline,lfence
++                eibrs                   enhanced IBRS
++                eibrs,retpoline         enhanced IBRS + Retpolines
++                eibrs,lfence            enhanced IBRS + LFENCE
+ 
+ 		Not specifying this option is equivalent to
+ 		spectre_v2=auto.
+@@ -730,7 +746,7 @@ AMD white papers:
+ 
+ .. _spec_ref6:
+ 
+-[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf>`_.
++[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/Managing-Speculation-on-AMD-Processors.pdf>`_.
+ 
+ ARM white papers:
+ 
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index d00618967854d..611172f68bb57 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4957,8 +4957,12 @@
+ 			Specific mitigations can also be selected manually:
+ 
+ 			retpoline	  - replace indirect branches
+-			retpoline,generic - google's original retpoline
+-			retpoline,amd     - AMD-specific minimal thunk
++			retpoline,generic - Retpolines
++			retpoline,lfence  - LFENCE; indirect branch
++			retpoline,amd     - alias for retpoline,lfence
++			eibrs		  - enhanced IBRS
++			eibrs,retpoline   - enhanced IBRS + Retpolines
++			eibrs,lfence      - enhanced IBRS + LFENCE
+ 
+ 			Not specifying this option is equivalent to
+ 			spectre_v2=auto.
+diff --git a/Documentation/arm64/cpu-feature-registers.rst b/Documentation/arm64/cpu-feature-registers.rst
+index 328e0c454fbd4..749ae970c3195 100644
+--- a/Documentation/arm64/cpu-feature-registers.rst
++++ b/Documentation/arm64/cpu-feature-registers.rst
+@@ -235,7 +235,15 @@ infrastructure:
+      | DPB                          | [3-0]   |    y    |
+      +------------------------------+---------+---------+
+ 
+-  6) ID_AA64MMFR2_EL1 - Memory model feature register 2
++  6) ID_AA64MMFR0_EL1 - Memory model feature register 0
++
++     +------------------------------+---------+---------+
++     | Name                         |  bits   | visible |
++     +------------------------------+---------+---------+
++     | ECV                          | [63-60] |    y    |
++     +------------------------------+---------+---------+
++
++  7) ID_AA64MMFR2_EL1 - Memory model feature register 2
+ 
+      +------------------------------+---------+---------+
+      | Name                         |  bits   | visible |
+@@ -243,7 +251,7 @@ infrastructure:
+      | AT                           | [35-32] |    y    |
+      +------------------------------+---------+---------+
+ 
+-  7) ID_AA64ZFR0_EL1 - SVE feature ID register 0
++  8) ID_AA64ZFR0_EL1 - SVE feature ID register 0
+ 
+      +------------------------------+---------+---------+
+      | Name                         |  bits   | visible |
+@@ -267,6 +275,23 @@ infrastructure:
+      | SVEVer                       | [3-0]   |    y    |
+      +------------------------------+---------+---------+
+ 
++  8) ID_AA64MMFR1_EL1 - Memory model feature register 1
++
++     +------------------------------+---------+---------+
++     | Name                         |  bits   | visible |
++     +------------------------------+---------+---------+
++     | AFP                          | [47-44] |    y    |
++     +------------------------------+---------+---------+
++
++  9) ID_AA64ISAR2_EL1 - Instruction set attribute register 2
++
++     +------------------------------+---------+---------+
++     | Name                         |  bits   | visible |
++     +------------------------------+---------+---------+
++     | RPRES                        | [7-4]   |    y    |
++     +------------------------------+---------+---------+
++
++
+ Appendix I: Example
+ -------------------
+ 
+diff --git a/Documentation/arm64/elf_hwcaps.rst b/Documentation/arm64/elf_hwcaps.rst
+index bbd9cf54db6c7..e88d245d426da 100644
+--- a/Documentation/arm64/elf_hwcaps.rst
++++ b/Documentation/arm64/elf_hwcaps.rst
+@@ -245,6 +245,18 @@ HWCAP2_MTE
+     Functionality implied by ID_AA64PFR1_EL1.MTE == 0b0010, as described
+     by Documentation/arm64/memory-tagging-extension.rst.
+ 
++HWCAP2_ECV
++
++    Functionality implied by ID_AA64MMFR0_EL1.ECV == 0b0001.
++
++HWCAP2_AFP
++
++    Functionality implied by ID_AA64MFR1_EL1.AFP == 0b0001.
++
++HWCAP2_RPRES
++
++    Functionality implied by ID_AA64ISAR2_EL1.RPRES == 0b0001.
++
+ 4. Unused AT_HWCAP bits
+ -----------------------
+ 
+diff --git a/Makefile b/Makefile
+index 6e6efe5516872..ea665736db040 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 104
++SUBLEVEL = 105
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
+index 72627c5fb3b2c..24a1f498b3b5f 100644
+--- a/arch/arm/include/asm/assembler.h
++++ b/arch/arm/include/asm/assembler.h
+@@ -107,6 +107,16 @@
+ 	.endm
+ #endif
+ 
++#if __LINUX_ARM_ARCH__ < 7
++	.macro	dsb, args
++	mcr	p15, 0, r0, c7, c10, 4
++	.endm
++
++	.macro	isb, args
++	mcr	p15, 0, r0, c7, c5, 4
++	.endm
++#endif
++
+ 	.macro asm_trace_hardirqs_off, save=1
+ #if defined(CONFIG_TRACE_IRQFLAGS)
+ 	.if \save
+diff --git a/arch/arm/include/asm/spectre.h b/arch/arm/include/asm/spectre.h
+new file mode 100644
+index 0000000000000..d1fa5607d3aa3
+--- /dev/null
++++ b/arch/arm/include/asm/spectre.h
+@@ -0,0 +1,32 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++#ifndef __ASM_SPECTRE_H
++#define __ASM_SPECTRE_H
++
++enum {
++	SPECTRE_UNAFFECTED,
++	SPECTRE_MITIGATED,
++	SPECTRE_VULNERABLE,
++};
++
++enum {
++	__SPECTRE_V2_METHOD_BPIALL,
++	__SPECTRE_V2_METHOD_ICIALLU,
++	__SPECTRE_V2_METHOD_SMC,
++	__SPECTRE_V2_METHOD_HVC,
++	__SPECTRE_V2_METHOD_LOOP8,
++};
++
++enum {
++	SPECTRE_V2_METHOD_BPIALL = BIT(__SPECTRE_V2_METHOD_BPIALL),
++	SPECTRE_V2_METHOD_ICIALLU = BIT(__SPECTRE_V2_METHOD_ICIALLU),
++	SPECTRE_V2_METHOD_SMC = BIT(__SPECTRE_V2_METHOD_SMC),
++	SPECTRE_V2_METHOD_HVC = BIT(__SPECTRE_V2_METHOD_HVC),
++	SPECTRE_V2_METHOD_LOOP8 = BIT(__SPECTRE_V2_METHOD_LOOP8),
++};
++
++void spectre_v2_update_state(unsigned int state, unsigned int methods);
++
++int spectre_bhb_update_vectors(unsigned int method);
++
++#endif
+diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
+index 4a91428c324db..fad45c884e988 100644
+--- a/arch/arm/include/asm/vmlinux.lds.h
++++ b/arch/arm/include/asm/vmlinux.lds.h
+@@ -26,6 +26,19 @@
+ #define ARM_MMU_DISCARD(x)	x
+ #endif
+ 
++/*
++ * ld.lld does not support NOCROSSREFS:
++ * https://github.com/ClangBuiltLinux/linux/issues/1609
++ */
++#ifdef CONFIG_LD_IS_LLD
++#define NOCROSSREFS
++#endif
++
++/* Set start/end symbol names to the LMA for the section */
++#define ARM_LMA(sym, section)						\
++	sym##_start = LOADADDR(section);				\
++	sym##_end = LOADADDR(section) + SIZEOF(section)
++
+ #define PROC_INFO							\
+ 		. = ALIGN(4);						\
+ 		__proc_info_begin = .;					\
+@@ -110,19 +123,31 @@
+  * only thing that matters is their relative offsets
+  */
+ #define ARM_VECTORS							\
+-	__vectors_start = .;						\
+-	.vectors 0xffff0000 : AT(__vectors_start) {			\
+-		*(.vectors)						\
++	__vectors_lma = .;						\
++	OVERLAY 0xffff0000 : NOCROSSREFS AT(__vectors_lma) {		\
++		.vectors {						\
++			*(.vectors)					\
++		}							\
++		.vectors.bhb.loop8 {					\
++			*(.vectors.bhb.loop8)				\
++		}							\
++		.vectors.bhb.bpiall {					\
++			*(.vectors.bhb.bpiall)				\
++		}							\
+ 	}								\
+-	. = __vectors_start + SIZEOF(.vectors);				\
+-	__vectors_end = .;						\
++	ARM_LMA(__vectors, .vectors);					\
++	ARM_LMA(__vectors_bhb_loop8, .vectors.bhb.loop8);		\
++	ARM_LMA(__vectors_bhb_bpiall, .vectors.bhb.bpiall);		\
++	. = __vectors_lma + SIZEOF(.vectors) +				\
++		SIZEOF(.vectors.bhb.loop8) +				\
++		SIZEOF(.vectors.bhb.bpiall);				\
+ 									\
+-	__stubs_start = .;						\
+-	.stubs ADDR(.vectors) + 0x1000 : AT(__stubs_start) {		\
++	__stubs_lma = .;						\
++	.stubs ADDR(.vectors) + 0x1000 : AT(__stubs_lma) {		\
+ 		*(.stubs)						\
+ 	}								\
+-	. = __stubs_start + SIZEOF(.stubs);				\
+-	__stubs_end = .;						\
++	ARM_LMA(__stubs, .stubs);					\
++	. = __stubs_lma + SIZEOF(.stubs);				\
+ 									\
+ 	PROVIDE(vector_fiq_offset = vector_fiq - ADDR(.vectors));
+ 
+diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
+index 89e5d864e9234..79588b5623532 100644
+--- a/arch/arm/kernel/Makefile
++++ b/arch/arm/kernel/Makefile
+@@ -106,4 +106,6 @@ endif
+ 
+ obj-$(CONFIG_HAVE_ARM_SMCCC)	+= smccc-call.o
+ 
++obj-$(CONFIG_GENERIC_CPU_VULNERABILITIES) += spectre.o
++
+ extra-y := $(head-y) vmlinux.lds
+diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
+index 63fbcdc97ded9..3cbd35c82a66c 100644
+--- a/arch/arm/kernel/entry-armv.S
++++ b/arch/arm/kernel/entry-armv.S
+@@ -1005,12 +1005,11 @@ vector_\name:
+ 	sub	lr, lr, #\correction
+ 	.endif
+ 
+-	@
+-	@ Save r0, lr_<exception> (parent PC) and spsr_<exception>
+-	@ (parent CPSR)
+-	@
++	@ Save r0, lr_<exception> (parent PC)
+ 	stmia	sp, {r0, lr}		@ save r0, lr
+-	mrs	lr, spsr
++
++	@ Save spsr_<exception> (parent CPSR)
++2:	mrs	lr, spsr
+ 	str	lr, [sp, #8]		@ save spsr
+ 
+ 	@
+@@ -1031,6 +1030,44 @@ vector_\name:
+ 	movs	pc, lr			@ branch to handler in SVC mode
+ ENDPROC(vector_\name)
+ 
++#ifdef CONFIG_HARDEN_BRANCH_HISTORY
++	.subsection 1
++	.align 5
++vector_bhb_loop8_\name:
++	.if \correction
++	sub	lr, lr, #\correction
++	.endif
++
++	@ Save r0, lr_<exception> (parent PC)
++	stmia	sp, {r0, lr}
++
++	@ bhb workaround
++	mov	r0, #8
++1:	b	. + 4
++	subs	r0, r0, #1
++	bne	1b
++	dsb
++	isb
++	b	2b
++ENDPROC(vector_bhb_loop8_\name)
++
++vector_bhb_bpiall_\name:
++	.if \correction
++	sub	lr, lr, #\correction
++	.endif
++
++	@ Save r0, lr_<exception> (parent PC)
++	stmia	sp, {r0, lr}
++
++	@ bhb workaround
++	mcr	p15, 0, r0, c7, c5, 6	@ BPIALL
++	@ isb not needed due to "movs pc, lr" in the vector stub
++	@ which gives a "context synchronisation".
++	b	2b
++ENDPROC(vector_bhb_bpiall_\name)
++	.previous
++#endif
++
+ 	.align	2
+ 	@ handler addresses follow this label
+ 1:
+@@ -1039,6 +1076,10 @@ ENDPROC(vector_\name)
+ 	.section .stubs, "ax", %progbits
+ 	@ This must be the first word
+ 	.word	vector_swi
++#ifdef CONFIG_HARDEN_BRANCH_HISTORY
++	.word	vector_bhb_loop8_swi
++	.word	vector_bhb_bpiall_swi
++#endif
+ 
+ vector_rst:
+  ARM(	swi	SYS_ERROR0	)
+@@ -1153,8 +1194,10 @@ vector_addrexcptn:
+  * FIQ "NMI" handler
+  *-----------------------------------------------------------------------------
+  * Handle a FIQ using the SVC stack allowing FIQ act like NMI on x86
+- * systems.
++ * systems. This must be the last vector stub, so lets place it in its own
++ * subsection.
+  */
++	.subsection 2
+ 	vector_stub	fiq, FIQ_MODE, 4
+ 
+ 	.long	__fiq_usr			@  0  (USR_26 / USR_32)
+@@ -1187,6 +1230,30 @@ vector_addrexcptn:
+ 	W(b)	vector_irq
+ 	W(b)	vector_fiq
+ 
++#ifdef CONFIG_HARDEN_BRANCH_HISTORY
++	.section .vectors.bhb.loop8, "ax", %progbits
++.L__vectors_bhb_loop8_start:
++	W(b)	vector_rst
++	W(b)	vector_bhb_loop8_und
++	W(ldr)	pc, .L__vectors_bhb_loop8_start + 0x1004
++	W(b)	vector_bhb_loop8_pabt
++	W(b)	vector_bhb_loop8_dabt
++	W(b)	vector_addrexcptn
++	W(b)	vector_bhb_loop8_irq
++	W(b)	vector_bhb_loop8_fiq
++
++	.section .vectors.bhb.bpiall, "ax", %progbits
++.L__vectors_bhb_bpiall_start:
++	W(b)	vector_rst
++	W(b)	vector_bhb_bpiall_und
++	W(ldr)	pc, .L__vectors_bhb_bpiall_start + 0x1008
++	W(b)	vector_bhb_bpiall_pabt
++	W(b)	vector_bhb_bpiall_dabt
++	W(b)	vector_addrexcptn
++	W(b)	vector_bhb_bpiall_irq
++	W(b)	vector_bhb_bpiall_fiq
++#endif
++
+ 	.data
+ 	.align	2
+ 
+diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
+index 271cb8a1eba1e..bd619da73c84e 100644
+--- a/arch/arm/kernel/entry-common.S
++++ b/arch/arm/kernel/entry-common.S
+@@ -162,6 +162,29 @@ ENDPROC(ret_from_fork)
+  *-----------------------------------------------------------------------------
+  */
+ 
++	.align	5
++#ifdef CONFIG_HARDEN_BRANCH_HISTORY
++ENTRY(vector_bhb_loop8_swi)
++	sub	sp, sp, #PT_REGS_SIZE
++	stmia	sp, {r0 - r12}
++	mov	r8, #8
++1:	b	2f
++2:	subs	r8, r8, #1
++	bne	1b
++	dsb
++	isb
++	b	3f
++ENDPROC(vector_bhb_loop8_swi)
++
++	.align	5
++ENTRY(vector_bhb_bpiall_swi)
++	sub	sp, sp, #PT_REGS_SIZE
++	stmia	sp, {r0 - r12}
++	mcr	p15, 0, r8, c7, c5, 6	@ BPIALL
++	isb
++	b	3f
++ENDPROC(vector_bhb_bpiall_swi)
++#endif
+ 	.align	5
+ ENTRY(vector_swi)
+ #ifdef CONFIG_CPU_V7M
+@@ -169,6 +192,7 @@ ENTRY(vector_swi)
+ #else
+ 	sub	sp, sp, #PT_REGS_SIZE
+ 	stmia	sp, {r0 - r12}			@ Calling r0 - r12
++3:
+  ARM(	add	r8, sp, #S_PC		)
+  ARM(	stmdb	r8, {sp, lr}^		)	@ Calling sp, lr
+  THUMB(	mov	r8, sp			)
+diff --git a/arch/arm/kernel/spectre.c b/arch/arm/kernel/spectre.c
+new file mode 100644
+index 0000000000000..0dcefc36fb7a0
+--- /dev/null
++++ b/arch/arm/kernel/spectre.c
+@@ -0,0 +1,71 @@
++// SPDX-License-Identifier: GPL-2.0-only
++#include <linux/bpf.h>
++#include <linux/cpu.h>
++#include <linux/device.h>
++
++#include <asm/spectre.h>
++
++static bool _unprivileged_ebpf_enabled(void)
++{
++#ifdef CONFIG_BPF_SYSCALL
++	return !sysctl_unprivileged_bpf_disabled;
++#else
++	return false;
++#endif
++}
++
++ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
++			    char *buf)
++{
++	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
++}
++
++static unsigned int spectre_v2_state;
++static unsigned int spectre_v2_methods;
++
++void spectre_v2_update_state(unsigned int state, unsigned int method)
++{
++	if (state > spectre_v2_state)
++		spectre_v2_state = state;
++	spectre_v2_methods |= method;
++}
++
++ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
++			    char *buf)
++{
++	const char *method;
++
++	if (spectre_v2_state == SPECTRE_UNAFFECTED)
++		return sprintf(buf, "%s\n", "Not affected");
++
++	if (spectre_v2_state != SPECTRE_MITIGATED)
++		return sprintf(buf, "%s\n", "Vulnerable");
++
++	if (_unprivileged_ebpf_enabled())
++		return sprintf(buf, "Vulnerable: Unprivileged eBPF enabled\n");
++
++	switch (spectre_v2_methods) {
++	case SPECTRE_V2_METHOD_BPIALL:
++		method = "Branch predictor hardening";
++		break;
++
++	case SPECTRE_V2_METHOD_ICIALLU:
++		method = "I-cache invalidation";
++		break;
++
++	case SPECTRE_V2_METHOD_SMC:
++	case SPECTRE_V2_METHOD_HVC:
++		method = "Firmware call";
++		break;
++
++	case SPECTRE_V2_METHOD_LOOP8:
++		method = "History overwrite";
++		break;
++
++	default:
++		method = "Multiple mitigations";
++		break;
++	}
++
++	return sprintf(buf, "Mitigation: %s\n", method);
++}
+diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
+index 17d5a785df28b..2d9e72ad1b0f9 100644
+--- a/arch/arm/kernel/traps.c
++++ b/arch/arm/kernel/traps.c
+@@ -30,6 +30,7 @@
+ #include <linux/atomic.h>
+ #include <asm/cacheflush.h>
+ #include <asm/exception.h>
++#include <asm/spectre.h>
+ #include <asm/unistd.h>
+ #include <asm/traps.h>
+ #include <asm/ptrace.h>
+@@ -806,10 +807,59 @@ static inline void __init kuser_init(void *vectors)
+ }
+ #endif
+ 
++#ifndef CONFIG_CPU_V7M
++static void copy_from_lma(void *vma, void *lma_start, void *lma_end)
++{
++	memcpy(vma, lma_start, lma_end - lma_start);
++}
++
++static void flush_vectors(void *vma, size_t offset, size_t size)
++{
++	unsigned long start = (unsigned long)vma + offset;
++	unsigned long end = start + size;
++
++	flush_icache_range(start, end);
++}
++
++#ifdef CONFIG_HARDEN_BRANCH_HISTORY
++int spectre_bhb_update_vectors(unsigned int method)
++{
++	extern char __vectors_bhb_bpiall_start[], __vectors_bhb_bpiall_end[];
++	extern char __vectors_bhb_loop8_start[], __vectors_bhb_loop8_end[];
++	void *vec_start, *vec_end;
++
++	if (system_state > SYSTEM_SCHEDULING) {
++		pr_err("CPU%u: Spectre BHB workaround too late - system vulnerable\n",
++		       smp_processor_id());
++		return SPECTRE_VULNERABLE;
++	}
++
++	switch (method) {
++	case SPECTRE_V2_METHOD_LOOP8:
++		vec_start = __vectors_bhb_loop8_start;
++		vec_end = __vectors_bhb_loop8_end;
++		break;
++
++	case SPECTRE_V2_METHOD_BPIALL:
++		vec_start = __vectors_bhb_bpiall_start;
++		vec_end = __vectors_bhb_bpiall_end;
++		break;
++
++	default:
++		pr_err("CPU%u: unknown Spectre BHB state %d\n",
++		       smp_processor_id(), method);
++		return SPECTRE_VULNERABLE;
++	}
++
++	copy_from_lma(vectors_page, vec_start, vec_end);
++	flush_vectors(vectors_page, 0, vec_end - vec_start);
++
++	return SPECTRE_MITIGATED;
++}
++#endif
++
+ void __init early_trap_init(void *vectors_base)
+ {
+-#ifndef CONFIG_CPU_V7M
+-	unsigned long vectors = (unsigned long)vectors_base;
+ 	extern char __stubs_start[], __stubs_end[];
+ 	extern char __vectors_start[], __vectors_end[];
+ 	unsigned i;
+@@ -830,17 +880,20 @@ void __init early_trap_init(void *vectors_base)
+ 	 * into the vector page, mapped at 0xffff0000, and ensure these
+ 	 * are visible to the instruction stream.
+ 	 */
+-	memcpy((void *)vectors, __vectors_start, __vectors_end - __vectors_start);
+-	memcpy((void *)vectors + 0x1000, __stubs_start, __stubs_end - __stubs_start);
++	copy_from_lma(vectors_base, __vectors_start, __vectors_end);
++	copy_from_lma(vectors_base + 0x1000, __stubs_start, __stubs_end);
+ 
+ 	kuser_init(vectors_base);
+ 
+-	flush_icache_range(vectors, vectors + PAGE_SIZE * 2);
++	flush_vectors(vectors_base, 0, PAGE_SIZE * 2);
++}
+ #else /* ifndef CONFIG_CPU_V7M */
++void __init early_trap_init(void *vectors_base)
++{
+ 	/*
+ 	 * on V7-M there is no need to copy the vector table to a dedicated
+ 	 * memory area. The address is configurable and so a table in the kernel
+ 	 * image can be used.
+ 	 */
+-#endif
+ }
++#endif
+diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
+index 423a97dd2f57c..c6bf34a33849c 100644
+--- a/arch/arm/mm/Kconfig
++++ b/arch/arm/mm/Kconfig
+@@ -833,6 +833,7 @@ config CPU_BPREDICT_DISABLE
+ 
+ config CPU_SPECTRE
+ 	bool
++	select GENERIC_CPU_VULNERABILITIES
+ 
+ config HARDEN_BRANCH_PREDICTOR
+ 	bool "Harden the branch predictor against aliasing attacks" if EXPERT
+@@ -853,6 +854,16 @@ config HARDEN_BRANCH_PREDICTOR
+ 
+ 	   If unsure, say Y.
+ 
++config HARDEN_BRANCH_HISTORY
++	bool "Harden Spectre style attacks against branch history" if EXPERT
++	depends on CPU_SPECTRE
++	default y
++	help
++	  Speculation attacks against some high-performance processors can
++	  make use of branch history to influence future speculation. When
++	  taking an exception, a sequence of branches overwrites the branch
++	  history, or branch history is invalidated.
++
+ config TLS_REG_EMUL
+ 	bool
+ 	select NEED_KUSER_HELPERS
+diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
+index 114c05ab4dd91..06dbfb968182d 100644
+--- a/arch/arm/mm/proc-v7-bugs.c
++++ b/arch/arm/mm/proc-v7-bugs.c
+@@ -6,8 +6,35 @@
+ #include <asm/cp15.h>
+ #include <asm/cputype.h>
+ #include <asm/proc-fns.h>
++#include <asm/spectre.h>
+ #include <asm/system_misc.h>
+ 
++#ifdef CONFIG_ARM_PSCI
++static int __maybe_unused spectre_v2_get_cpu_fw_mitigation_state(void)
++{
++	struct arm_smccc_res res;
++
++	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
++			     ARM_SMCCC_ARCH_WORKAROUND_1, &res);
++
++	switch ((int)res.a0) {
++	case SMCCC_RET_SUCCESS:
++		return SPECTRE_MITIGATED;
++
++	case SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED:
++		return SPECTRE_UNAFFECTED;
++
++	default:
++		return SPECTRE_VULNERABLE;
++	}
++}
++#else
++static int __maybe_unused spectre_v2_get_cpu_fw_mitigation_state(void)
++{
++	return SPECTRE_VULNERABLE;
++}
++#endif
++
+ #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+ DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
+ 
+@@ -36,13 +63,61 @@ static void __maybe_unused call_hvc_arch_workaround_1(void)
+ 	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
+ }
+ 
+-static void cpu_v7_spectre_init(void)
++static unsigned int spectre_v2_install_workaround(unsigned int method)
+ {
+ 	const char *spectre_v2_method = NULL;
+ 	int cpu = smp_processor_id();
+ 
+ 	if (per_cpu(harden_branch_predictor_fn, cpu))
+-		return;
++		return SPECTRE_MITIGATED;
++
++	switch (method) {
++	case SPECTRE_V2_METHOD_BPIALL:
++		per_cpu(harden_branch_predictor_fn, cpu) =
++			harden_branch_predictor_bpiall;
++		spectre_v2_method = "BPIALL";
++		break;
++
++	case SPECTRE_V2_METHOD_ICIALLU:
++		per_cpu(harden_branch_predictor_fn, cpu) =
++			harden_branch_predictor_iciallu;
++		spectre_v2_method = "ICIALLU";
++		break;
++
++	case SPECTRE_V2_METHOD_HVC:
++		per_cpu(harden_branch_predictor_fn, cpu) =
++			call_hvc_arch_workaround_1;
++		cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
++		spectre_v2_method = "hypervisor";
++		break;
++
++	case SPECTRE_V2_METHOD_SMC:
++		per_cpu(harden_branch_predictor_fn, cpu) =
++			call_smc_arch_workaround_1;
++		cpu_do_switch_mm = cpu_v7_smc_switch_mm;
++		spectre_v2_method = "firmware";
++		break;
++	}
++
++	if (spectre_v2_method)
++		pr_info("CPU%u: Spectre v2: using %s workaround\n",
++			smp_processor_id(), spectre_v2_method);
++
++	return SPECTRE_MITIGATED;
++}
++#else
++static unsigned int spectre_v2_install_workaround(unsigned int method)
++{
++	pr_info("CPU%u: Spectre V2: workarounds disabled by configuration\n",
++		smp_processor_id());
++
++	return SPECTRE_VULNERABLE;
++}
++#endif
++
++static void cpu_v7_spectre_v2_init(void)
++{
++	unsigned int state, method = 0;
+ 
+ 	switch (read_cpuid_part()) {
+ 	case ARM_CPU_PART_CORTEX_A8:
+@@ -51,69 +126,133 @@ static void cpu_v7_spectre_init(void)
+ 	case ARM_CPU_PART_CORTEX_A17:
+ 	case ARM_CPU_PART_CORTEX_A73:
+ 	case ARM_CPU_PART_CORTEX_A75:
+-		per_cpu(harden_branch_predictor_fn, cpu) =
+-			harden_branch_predictor_bpiall;
+-		spectre_v2_method = "BPIALL";
++		state = SPECTRE_MITIGATED;
++		method = SPECTRE_V2_METHOD_BPIALL;
+ 		break;
+ 
+ 	case ARM_CPU_PART_CORTEX_A15:
+ 	case ARM_CPU_PART_BRAHMA_B15:
+-		per_cpu(harden_branch_predictor_fn, cpu) =
+-			harden_branch_predictor_iciallu;
+-		spectre_v2_method = "ICIALLU";
++		state = SPECTRE_MITIGATED;
++		method = SPECTRE_V2_METHOD_ICIALLU;
+ 		break;
+ 
+-#ifdef CONFIG_ARM_PSCI
+ 	case ARM_CPU_PART_BRAHMA_B53:
+ 		/* Requires no workaround */
++		state = SPECTRE_UNAFFECTED;
+ 		break;
++
+ 	default:
+ 		/* Other ARM CPUs require no workaround */
+-		if (read_cpuid_implementor() == ARM_CPU_IMP_ARM)
++		if (read_cpuid_implementor() == ARM_CPU_IMP_ARM) {
++			state = SPECTRE_UNAFFECTED;
+ 			break;
++		}
++
+ 		fallthrough;
+-		/* Cortex A57/A72 require firmware workaround */
+-	case ARM_CPU_PART_CORTEX_A57:
+-	case ARM_CPU_PART_CORTEX_A72: {
+-		struct arm_smccc_res res;
+ 
+-		arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+-				     ARM_SMCCC_ARCH_WORKAROUND_1, &res);
+-		if ((int)res.a0 != 0)
+-			return;
++	/* Cortex A57/A72 require firmware workaround */
++	case ARM_CPU_PART_CORTEX_A57:
++	case ARM_CPU_PART_CORTEX_A72:
++		state = spectre_v2_get_cpu_fw_mitigation_state();
++		if (state != SPECTRE_MITIGATED)
++			break;
+ 
+ 		switch (arm_smccc_1_1_get_conduit()) {
+ 		case SMCCC_CONDUIT_HVC:
+-			per_cpu(harden_branch_predictor_fn, cpu) =
+-				call_hvc_arch_workaround_1;
+-			cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
+-			spectre_v2_method = "hypervisor";
++			method = SPECTRE_V2_METHOD_HVC;
+ 			break;
+ 
+ 		case SMCCC_CONDUIT_SMC:
+-			per_cpu(harden_branch_predictor_fn, cpu) =
+-				call_smc_arch_workaround_1;
+-			cpu_do_switch_mm = cpu_v7_smc_switch_mm;
+-			spectre_v2_method = "firmware";
++			method = SPECTRE_V2_METHOD_SMC;
+ 			break;
+ 
+ 		default:
++			state = SPECTRE_VULNERABLE;
+ 			break;
+ 		}
+ 	}
+-#endif
++
++	if (state == SPECTRE_MITIGATED)
++		state = spectre_v2_install_workaround(method);
++
++	spectre_v2_update_state(state, method);
++}
++
++#ifdef CONFIG_HARDEN_BRANCH_HISTORY
++static int spectre_bhb_method;
++
++static const char *spectre_bhb_method_name(int method)
++{
++	switch (method) {
++	case SPECTRE_V2_METHOD_LOOP8:
++		return "loop";
++
++	case SPECTRE_V2_METHOD_BPIALL:
++		return "BPIALL";
++
++	default:
++		return "unknown";
+ 	}
++}
+ 
+-	if (spectre_v2_method)
+-		pr_info("CPU%u: Spectre v2: using %s workaround\n",
+-			smp_processor_id(), spectre_v2_method);
++static int spectre_bhb_install_workaround(int method)
++{
++	if (spectre_bhb_method != method) {
++		if (spectre_bhb_method) {
++			pr_err("CPU%u: Spectre BHB: method disagreement, system vulnerable\n",
++			       smp_processor_id());
++
++			return SPECTRE_VULNERABLE;
++		}
++
++		if (spectre_bhb_update_vectors(method) == SPECTRE_VULNERABLE)
++			return SPECTRE_VULNERABLE;
++
++		spectre_bhb_method = method;
++	}
++
++	pr_info("CPU%u: Spectre BHB: using %s workaround\n",
++		smp_processor_id(), spectre_bhb_method_name(method));
++
++	return SPECTRE_MITIGATED;
+ }
+ #else
+-static void cpu_v7_spectre_init(void)
++static int spectre_bhb_install_workaround(int method)
+ {
++	return SPECTRE_VULNERABLE;
+ }
+ #endif
+ 
++static void cpu_v7_spectre_bhb_init(void)
++{
++	unsigned int state, method = 0;
++
++	switch (read_cpuid_part()) {
++	case ARM_CPU_PART_CORTEX_A15:
++	case ARM_CPU_PART_BRAHMA_B15:
++	case ARM_CPU_PART_CORTEX_A57:
++	case ARM_CPU_PART_CORTEX_A72:
++		state = SPECTRE_MITIGATED;
++		method = SPECTRE_V2_METHOD_LOOP8;
++		break;
++
++	case ARM_CPU_PART_CORTEX_A73:
++	case ARM_CPU_PART_CORTEX_A75:
++		state = SPECTRE_MITIGATED;
++		method = SPECTRE_V2_METHOD_BPIALL;
++		break;
++
++	default:
++		state = SPECTRE_UNAFFECTED;
++		break;
++	}
++
++	if (state == SPECTRE_MITIGATED)
++		state = spectre_bhb_install_workaround(method);
++
++	spectre_v2_update_state(state, method);
++}
++
+ static __maybe_unused bool cpu_v7_check_auxcr_set(bool *warned,
+ 						  u32 mask, const char *msg)
+ {
+@@ -142,16 +281,17 @@ static bool check_spectre_auxcr(bool *warned, u32 bit)
+ void cpu_v7_ca8_ibe(void)
+ {
+ 	if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6)))
+-		cpu_v7_spectre_init();
++		cpu_v7_spectre_v2_init();
+ }
+ 
+ void cpu_v7_ca15_ibe(void)
+ {
+ 	if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0)))
+-		cpu_v7_spectre_init();
++		cpu_v7_spectre_v2_init();
+ }
+ 
+ void cpu_v7_bugs_init(void)
+ {
+-	cpu_v7_spectre_init();
++	cpu_v7_spectre_v2_init();
++	cpu_v7_spectre_bhb_init();
+ }
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 3da71fe56b922..7c7906e9dafda 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1184,6 +1184,15 @@ config UNMAP_KERNEL_AT_EL0
+ 
+ 	  If unsure, say Y.
+ 
++config MITIGATE_SPECTRE_BRANCH_HISTORY
++	bool "Mitigate Spectre style attacks against branch history" if EXPERT
++	default y
++	help
++	  Speculation attacks against some high-performance processors can
++	  make use of branch history to influence future speculation.
++	  When taking an exception from user-space, a sequence of branches
++	  or a firmware call overwrites the branch history.
++
+ config RODATA_FULL_DEFAULT_ENABLED
+ 	bool "Apply r/o permissions of VM areas also to their linear aliases"
+ 	default y
+diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
+index ddbe6bf00e336..011e681a23366 100644
+--- a/arch/arm64/include/asm/assembler.h
++++ b/arch/arm64/include/asm/assembler.h
+@@ -97,6 +97,13 @@
+ 	hint	#20
+ 	.endm
+ 
++/*
++ * Clear Branch History instruction
++ */
++	.macro clearbhb
++	hint	#22
++	.endm
++
+ /*
+  * Speculation barrier
+  */
+@@ -795,4 +802,30 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
+ 
+ #endif /* GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT */
+ 
++	.macro __mitigate_spectre_bhb_loop      tmp
++#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++alternative_cb  spectre_bhb_patch_loop_iter
++	mov	\tmp, #32		// Patched to correct the immediate
++alternative_cb_end
++.Lspectre_bhb_loop\@:
++	b	. + 4
++	subs	\tmp, \tmp, #1
++	b.ne	.Lspectre_bhb_loop\@
++	sb
++#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++	.endm
++
++	/* Save/restores x0-x3 to the stack */
++	.macro __mitigate_spectre_bhb_fw
++#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++	stp	x0, x1, [sp, #-16]!
++	stp	x2, x3, [sp, #-16]!
++	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_3
++alternative_cb	smccc_patch_fw_mitigation_conduit
++	nop					// Patched to SMC/HVC #0
++alternative_cb_end
++	ldp	x2, x3, [sp], #16
++	ldp	x0, x1, [sp], #16
++#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++	.endm
+ #endif	/* __ASM_ASSEMBLER_H */
+diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
+index 7faae6ff3ab4d..24ed6643da266 100644
+--- a/arch/arm64/include/asm/cpu.h
++++ b/arch/arm64/include/asm/cpu.h
+@@ -25,6 +25,7 @@ struct cpuinfo_arm64 {
+ 	u64		reg_id_aa64dfr1;
+ 	u64		reg_id_aa64isar0;
+ 	u64		reg_id_aa64isar1;
++	u64		reg_id_aa64isar2;
+ 	u64		reg_id_aa64mmfr0;
+ 	u64		reg_id_aa64mmfr1;
+ 	u64		reg_id_aa64mmfr2;
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index e7d98997c09c3..f42fd0a2e81c8 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -66,7 +66,8 @@
+ #define ARM64_HAS_TLB_RANGE			56
+ #define ARM64_MTE				57
+ #define ARM64_WORKAROUND_1508412		58
++#define ARM64_SPECTRE_BHB			59
+ 
+-#define ARM64_NCAPS				59
++#define ARM64_NCAPS				60
+ 
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index da250e4741bd7..423f9b40e4d95 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -606,6 +606,34 @@ static inline bool cpu_supports_mixed_endian_el0(void)
+ 	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
+ }
+ 
++static inline bool supports_csv2p3(int scope)
++{
++	u64 pfr0;
++	u8 csv2_val;
++
++	if (scope == SCOPE_LOCAL_CPU)
++		pfr0 = read_sysreg_s(SYS_ID_AA64PFR0_EL1);
++	else
++		pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
++
++	csv2_val = cpuid_feature_extract_unsigned_field(pfr0,
++							ID_AA64PFR0_CSV2_SHIFT);
++	return csv2_val == 3;
++}
++
++static inline bool supports_clearbhb(int scope)
++{
++	u64 isar2;
++
++	if (scope == SCOPE_LOCAL_CPU)
++		isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1);
++	else
++		isar2 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR2_EL1);
++
++	return cpuid_feature_extract_unsigned_field(isar2,
++						    ID_AA64ISAR2_CLEARBHB_SHIFT);
++}
++
+ static inline bool system_supports_32bit_el0(void)
+ {
+ 	return cpus_have_const_cap(ARM64_HAS_32BIT_EL0);
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index ef5b040dee44d..bfbf0c4c7c5e5 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -59,6 +59,7 @@
+ #define ARM_CPU_IMP_NVIDIA		0x4E
+ #define ARM_CPU_IMP_FUJITSU		0x46
+ #define ARM_CPU_IMP_HISI		0x48
++#define ARM_CPU_IMP_APPLE		0x61
+ 
+ #define ARM_CPU_PART_AEM_V8		0xD0F
+ #define ARM_CPU_PART_FOUNDATION		0xD00
+@@ -72,6 +73,14 @@
+ #define ARM_CPU_PART_CORTEX_A76		0xD0B
+ #define ARM_CPU_PART_NEOVERSE_N1	0xD0C
+ #define ARM_CPU_PART_CORTEX_A77		0xD0D
++#define ARM_CPU_PART_NEOVERSE_V1	0xD40
++#define ARM_CPU_PART_CORTEX_A78		0xD41
++#define ARM_CPU_PART_CORTEX_X1		0xD44
++#define ARM_CPU_PART_CORTEX_A510	0xD46
++#define ARM_CPU_PART_CORTEX_A710	0xD47
++#define ARM_CPU_PART_CORTEX_X2		0xD48
++#define ARM_CPU_PART_NEOVERSE_N2	0xD49
++#define ARM_CPU_PART_CORTEX_A78C	0xD4B
+ 
+ #define APM_CPU_PART_POTENZA		0x000
+ 
+@@ -99,6 +108,9 @@
+ 
+ #define HISI_CPU_PART_TSV110		0xD01
+ 
++#define APPLE_CPU_PART_M1_ICESTORM	0x022
++#define APPLE_CPU_PART_M1_FIRESTORM	0x023
++
+ #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
+ #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
+ #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
+@@ -109,6 +121,14 @@
+ #define MIDR_CORTEX_A76	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76)
+ #define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
+ #define MIDR_CORTEX_A77	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
++#define MIDR_NEOVERSE_V1	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1)
++#define MIDR_CORTEX_A78	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78)
++#define MIDR_CORTEX_X1	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1)
++#define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510)
++#define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
++#define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
++#define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
++#define MIDR_CORTEX_A78C	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C)
+ #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
+ #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+ #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+@@ -127,6 +147,8 @@
+ #define MIDR_NVIDIA_CARMEL MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL)
+ #define MIDR_FUJITSU_A64FX MIDR_CPU_MODEL(ARM_CPU_IMP_FUJITSU, FUJITSU_CPU_PART_A64FX)
+ #define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110)
++#define MIDR_APPLE_M1_ICESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM)
++#define MIDR_APPLE_M1_FIRESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM)
+ 
+ /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
+ #define MIDR_FUJITSU_ERRATUM_010001		MIDR_FUJITSU_A64FX
+diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
+index 4335800201c97..daff882883f92 100644
+--- a/arch/arm64/include/asm/fixmap.h
++++ b/arch/arm64/include/asm/fixmap.h
+@@ -62,9 +62,11 @@ enum fixed_addresses {
+ #endif /* CONFIG_ACPI_APEI_GHES */
+ 
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++	FIX_ENTRY_TRAMP_TEXT3,
++	FIX_ENTRY_TRAMP_TEXT2,
++	FIX_ENTRY_TRAMP_TEXT1,
+ 	FIX_ENTRY_TRAMP_DATA,
+-	FIX_ENTRY_TRAMP_TEXT,
+-#define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
++#define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT1))
+ #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+ 	__end_of_permanent_fixed_addresses,
+ 
+diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
+index 9a5498c2c8eea..6422147ea612f 100644
+--- a/arch/arm64/include/asm/hwcap.h
++++ b/arch/arm64/include/asm/hwcap.h
+@@ -105,6 +105,9 @@
+ #define KERNEL_HWCAP_RNG		__khwcap2_feature(RNG)
+ #define KERNEL_HWCAP_BTI		__khwcap2_feature(BTI)
+ #define KERNEL_HWCAP_MTE		__khwcap2_feature(MTE)
++#define KERNEL_HWCAP_ECV		__khwcap2_feature(ECV)
++#define KERNEL_HWCAP_AFP		__khwcap2_feature(AFP)
++#define KERNEL_HWCAP_RPRES		__khwcap2_feature(RPRES)
+ 
+ /*
+  * This yields a mask that user programs can use to figure out what
+diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
+index 4b39293d0f72d..d45b42295254d 100644
+--- a/arch/arm64/include/asm/insn.h
++++ b/arch/arm64/include/asm/insn.h
+@@ -65,6 +65,7 @@ enum aarch64_insn_hint_cr_op {
+ 	AARCH64_INSN_HINT_PSB  = 0x11 << 5,
+ 	AARCH64_INSN_HINT_TSB  = 0x12 << 5,
+ 	AARCH64_INSN_HINT_CSDB = 0x14 << 5,
++	AARCH64_INSN_HINT_CLEARBHB = 0x16 << 5,
+ 
+ 	AARCH64_INSN_HINT_BTI   = 0x20 << 5,
+ 	AARCH64_INSN_HINT_BTIC  = 0x22 << 5,
+diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
+index 044bb9e2cd74f..ada24a20a5671 100644
+--- a/arch/arm64/include/asm/kvm_asm.h
++++ b/arch/arm64/include/asm/kvm_asm.h
+@@ -35,6 +35,9 @@
+ #define KVM_VECTOR_PREAMBLE	(2 * AARCH64_INSN_SIZE)
+ 
+ #define __SMCCC_WORKAROUND_1_SMC_SZ 36
++#define __SMCCC_WORKAROUND_3_SMC_SZ 36
++#define __SPECTRE_BHB_LOOP_SZ       44
++#define __SPECTRE_BHB_CLEARBHB_SZ   12
+ 
+ #define KVM_HOST_SMCCC_ID(id)						\
+ 	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+@@ -199,6 +202,11 @@ extern void __vgic_v3_init_lrs(void);
+ extern u32 __kvm_get_mdcr_el2(void);
+ 
+ extern char __smccc_workaround_1_smc[__SMCCC_WORKAROUND_1_SMC_SZ];
++extern char __smccc_workaround_3_smc[__SMCCC_WORKAROUND_3_SMC_SZ];
++extern char __spectre_bhb_loop_k8[__SPECTRE_BHB_LOOP_SZ];
++extern char __spectre_bhb_loop_k24[__SPECTRE_BHB_LOOP_SZ];
++extern char __spectre_bhb_loop_k32[__SPECTRE_BHB_LOOP_SZ];
++extern char __spectre_bhb_clearbhb[__SPECTRE_BHB_LOOP_SZ];
+ 
+ /*
+  * Obtain the PC-relative address of a kernel symbol
+diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
+index 331394306ccee..47dafd6ab3a30 100644
+--- a/arch/arm64/include/asm/kvm_mmu.h
++++ b/arch/arm64/include/asm/kvm_mmu.h
+@@ -237,7 +237,8 @@ static inline void *kvm_get_hyp_vector(void)
+ 	void *vect = kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
+ 	int slot = -1;
+ 
+-	if (cpus_have_const_cap(ARM64_SPECTRE_V2) && data->fn) {
++	if ((cpus_have_const_cap(ARM64_SPECTRE_V2) ||
++	     cpus_have_const_cap(ARM64_SPECTRE_BHB)) && data->template_start) {
+ 		vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs));
+ 		slot = data->hyp_vectors_slot;
+ 	}
+diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
+index c7315862e2435..bc151b7dc042c 100644
+--- a/arch/arm64/include/asm/mmu.h
++++ b/arch/arm64/include/asm/mmu.h
+@@ -67,6 +67,12 @@ typedef void (*bp_hardening_cb_t)(void);
+ struct bp_hardening_data {
+ 	int			hyp_vectors_slot;
+ 	bp_hardening_cb_t	fn;
++
++	/*
++	 * template_start is only used by the BHB mitigation to identify the
++	 * hyp_vectors_slot sequence.
++	 */
++	const char *template_start;
+ };
+ 
+ DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
+diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
+index 3994169985efc..6a45c26da46e3 100644
+--- a/arch/arm64/include/asm/sections.h
++++ b/arch/arm64/include/asm/sections.h
+@@ -19,4 +19,9 @@ extern char __irqentry_text_start[], __irqentry_text_end[];
+ extern char __mmuoff_data_start[], __mmuoff_data_end[];
+ extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
+ 
++static inline size_t entry_tramp_text_size(void)
++{
++	return __entry_tramp_text_end - __entry_tramp_text_start;
++}
++
+ #endif /* __ASM_SECTIONS_H */
+diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
+index fcdfbce302bdf..4b3a5f050f71f 100644
+--- a/arch/arm64/include/asm/spectre.h
++++ b/arch/arm64/include/asm/spectre.h
+@@ -29,4 +29,8 @@ bool has_spectre_v4(const struct arm64_cpu_capabilities *cap, int scope);
+ void spectre_v4_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
+ void spectre_v4_enable_task_mitigation(struct task_struct *tsk);
+ 
++enum mitigation_state arm64_get_spectre_bhb_state(void);
++bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
++u8 spectre_bhb_loop_affected(int scope);
++void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
+ #endif	/* __ASM_SPECTRE_H */
+diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
+index 801861d054268..1f2209ad2cca1 100644
+--- a/arch/arm64/include/asm/sysreg.h
++++ b/arch/arm64/include/asm/sysreg.h
+@@ -175,6 +175,7 @@
+ 
+ #define SYS_ID_AA64ISAR0_EL1		sys_reg(3, 0, 0, 6, 0)
+ #define SYS_ID_AA64ISAR1_EL1		sys_reg(3, 0, 0, 6, 1)
++#define SYS_ID_AA64ISAR2_EL1		sys_reg(3, 0, 0, 6, 2)
+ 
+ #define SYS_ID_AA64MMFR0_EL1		sys_reg(3, 0, 0, 7, 0)
+ #define SYS_ID_AA64MMFR1_EL1		sys_reg(3, 0, 0, 7, 1)
+@@ -687,6 +688,21 @@
+ #define ID_AA64ISAR1_GPI_NI			0x0
+ #define ID_AA64ISAR1_GPI_IMP_DEF		0x1
+ 
++/* id_aa64isar2 */
++#define ID_AA64ISAR2_CLEARBHB_SHIFT	28
++#define ID_AA64ISAR2_RPRES_SHIFT	4
++#define ID_AA64ISAR2_WFXT_SHIFT		0
++
++#define ID_AA64ISAR2_RPRES_8BIT		0x0
++#define ID_AA64ISAR2_RPRES_12BIT	0x1
++/*
++ * Value 0x1 has been removed from the architecture, and is
++ * reserved, but has not yet been removed from the ARM ARM
++ * as of ARM DDI 0487G.b.
++ */
++#define ID_AA64ISAR2_WFXT_NI		0x0
++#define ID_AA64ISAR2_WFXT_SUPPORTED	0x2
++
+ /* id_aa64pfr0 */
+ #define ID_AA64PFR0_CSV3_SHIFT		60
+ #define ID_AA64PFR0_CSV2_SHIFT		56
+@@ -786,6 +802,8 @@
+ #endif
+ 
+ /* id_aa64mmfr1 */
++#define ID_AA64MMFR1_ECBHB_SHIFT	60
++#define ID_AA64MMFR1_AFP_SHIFT		44
+ #define ID_AA64MMFR1_ETS_SHIFT		36
+ #define ID_AA64MMFR1_TWED_SHIFT		32
+ #define ID_AA64MMFR1_XNX_SHIFT		28
+diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h
+new file mode 100644
+index 0000000000000..f64613a96d530
+--- /dev/null
++++ b/arch/arm64/include/asm/vectors.h
+@@ -0,0 +1,73 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (C) 2022 ARM Ltd.
++ */
++#ifndef __ASM_VECTORS_H
++#define __ASM_VECTORS_H
++
++#include <linux/bug.h>
++#include <linux/percpu.h>
++
++#include <asm/fixmap.h>
++
++extern char vectors[];
++extern char tramp_vectors[];
++extern char __bp_harden_el1_vectors[];
++
++/*
++ * Note: the order of this enum corresponds to two arrays in entry.S:
++ * tramp_vecs and __bp_harden_el1_vectors. By default the canonical
++ * 'full fat' vectors are used directly.
++ */
++enum arm64_bp_harden_el1_vectors {
++#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++	/*
++	 * Perform the BHB loop mitigation, before branching to the canonical
++	 * vectors.
++	 */
++	EL1_VECTOR_BHB_LOOP,
++
++	/*
++	 * Make the SMC call for firmware mitigation, before branching to the
++	 * canonical vectors.
++	 */
++	EL1_VECTOR_BHB_FW,
++
++	/*
++	 * Use the ClearBHB instruction, before branching to the canonical
++	 * vectors.
++	 */
++	EL1_VECTOR_BHB_CLEAR_INSN,
++#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++
++	/*
++	 * Remap the kernel before branching to the canonical vectors.
++	 */
++	EL1_VECTOR_KPTI,
++};
++
++#ifndef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++#define EL1_VECTOR_BHB_LOOP		-1
++#define EL1_VECTOR_BHB_FW		-1
++#define EL1_VECTOR_BHB_CLEAR_INSN	-1
++#endif /* !CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++
++/* The vectors to use on return from EL0. e.g. to remap the kernel */
++DECLARE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector);
++
++#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
++#define TRAMP_VALIAS	0
++#endif
++
++static inline const char *
++arm64_get_bp_hardening_vector(enum arm64_bp_harden_el1_vectors slot)
++{
++	if (arm64_kernel_unmapped_at_el0())
++		return (char *)TRAMP_VALIAS + SZ_2K * slot;
++
++	WARN_ON_ONCE(slot == EL1_VECTOR_KPTI);
++
++	return __bp_harden_el1_vectors + SZ_2K * slot;
++}
++
++#endif /* __ASM_VECTORS_H */
+diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
+index b8f41aa234ee1..f03731847d9df 100644
+--- a/arch/arm64/include/uapi/asm/hwcap.h
++++ b/arch/arm64/include/uapi/asm/hwcap.h
+@@ -75,5 +75,8 @@
+ #define HWCAP2_RNG		(1 << 16)
+ #define HWCAP2_BTI		(1 << 17)
+ #define HWCAP2_MTE		(1 << 18)
++#define HWCAP2_ECV		(1 << 19)
++#define HWCAP2_AFP		(1 << 20)
++#define HWCAP2_RPRES		(1 << 21)
+ 
+ #endif /* _UAPI__ASM_HWCAP_H */
+diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
+index 1c17c3a24411d..531ff62e82e95 100644
+--- a/arch/arm64/include/uapi/asm/kvm.h
++++ b/arch/arm64/include/uapi/asm/kvm.h
+@@ -273,6 +273,11 @@ struct kvm_vcpu_events {
+ #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED	3
+ #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED     	(1U << 4)
+ 
++#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3	KVM_REG_ARM_FW_REG(3)
++#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_AVAIL		0
++#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_AVAIL		1
++#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_REQUIRED	2
++
+ /* SVE registers */
+ #define KVM_REG_ARM64_SVE		(0x15 << KVM_REG_ARM_COPROC_SHIFT)
+ 
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index cafaf0da05b7c..533559c7d2b31 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -473,6 +473,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 		.matches = has_spectre_v4,
+ 		.cpu_enable = spectre_v4_enable_mitigation,
+ 	},
++	{
++		.desc = "Spectre-BHB",
++		.capability = ARM64_SPECTRE_BHB,
++		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++		.matches = is_spectre_bhb_affected,
++		.cpu_enable = spectre_bhb_enable_mitigation,
++	},
+ #ifdef CONFIG_ARM64_ERRATUM_1418040
+ 	{
+ 		.desc = "ARM erratum 1418040",
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 5001c43ea6c33..c9108ed406458 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -65,11 +65,13 @@
+ #include <linux/bsearch.h>
+ #include <linux/cpumask.h>
+ #include <linux/crash_dump.h>
++#include <linux/percpu.h>
+ #include <linux/sort.h>
+ #include <linux/stop_machine.h>
+ #include <linux/types.h>
+ #include <linux/mm.h>
+ #include <linux/cpu.h>
++
+ #include <asm/cpu.h>
+ #include <asm/cpufeature.h>
+ #include <asm/cpu_ops.h>
+@@ -79,6 +81,7 @@
+ #include <asm/processor.h>
+ #include <asm/sysreg.h>
+ #include <asm/traps.h>
++#include <asm/vectors.h>
+ #include <asm/virt.h>
+ 
+ /* Kernel representation of AT_HWCAP and AT_HWCAP2 */
+@@ -104,6 +107,8 @@ DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
+ bool arm64_use_ng_mappings = false;
+ EXPORT_SYMBOL(arm64_use_ng_mappings);
+ 
++DEFINE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector) = vectors;
++
+ /*
+  * Flag to indicate if we have computed the system wide
+  * capabilities based on the boot time active CPUs. This
+@@ -205,6 +210,12 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+ 	ARM64_FTR_END,
+ };
+ 
++static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64ISAR2_CLEARBHB_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_RPRES_SHIFT, 4, 0),
++	ARM64_FTR_END,
++};
++
+ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
+@@ -259,7 +270,7 @@ static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
+ };
+ 
+ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
+-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ECV_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ECV_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_FGT_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EXS_SHIFT, 4, 0),
+ 	/*
+@@ -305,6 +316,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
+ };
+ 
+ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_AFP_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_ETS_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TWED_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_XNX_SHIFT, 4, 0),
+@@ -596,6 +608,7 @@ static const struct __ftr_reg_entry {
+ 	/* Op1 = 0, CRn = 0, CRm = 6 */
+ 	ARM64_FTR_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0),
+ 	ARM64_FTR_REG(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1),
++	ARM64_FTR_REG(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2),
+ 
+ 	/* Op1 = 0, CRn = 0, CRm = 7 */
+ 	ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
+@@ -830,6 +843,7 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
+ 	init_cpu_ftr_reg(SYS_ID_AA64DFR1_EL1, info->reg_id_aa64dfr1);
+ 	init_cpu_ftr_reg(SYS_ID_AA64ISAR0_EL1, info->reg_id_aa64isar0);
+ 	init_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1);
++	init_cpu_ftr_reg(SYS_ID_AA64ISAR2_EL1, info->reg_id_aa64isar2);
+ 	init_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0);
+ 	init_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1);
+ 	init_cpu_ftr_reg(SYS_ID_AA64MMFR2_EL1, info->reg_id_aa64mmfr2);
+@@ -1058,6 +1072,8 @@ void update_cpu_features(int cpu,
+ 				      info->reg_id_aa64isar0, boot->reg_id_aa64isar0);
+ 	taint |= check_update_ftr_reg(SYS_ID_AA64ISAR1_EL1, cpu,
+ 				      info->reg_id_aa64isar1, boot->reg_id_aa64isar1);
++	taint |= check_update_ftr_reg(SYS_ID_AA64ISAR2_EL1, cpu,
++				      info->reg_id_aa64isar2, boot->reg_id_aa64isar2);
+ 
+ 	/*
+ 	 * Differing PARange support is fine as long as all peripherals and
+@@ -1157,6 +1173,7 @@ static u64 __read_sysreg_by_encoding(u32 sys_id)
+ 	read_sysreg_case(SYS_ID_AA64MMFR2_EL1);
+ 	read_sysreg_case(SYS_ID_AA64ISAR0_EL1);
+ 	read_sysreg_case(SYS_ID_AA64ISAR1_EL1);
++	read_sysreg_case(SYS_ID_AA64ISAR2_EL1);
+ 
+ 	read_sysreg_case(SYS_CNTFRQ_EL0);
+ 	read_sysreg_case(SYS_CTR_EL0);
+@@ -1402,6 +1419,12 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
+ 
+ 	int cpu = smp_processor_id();
+ 
++	if (__this_cpu_read(this_cpu_vector) == vectors) {
++		const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI);
++
++		__this_cpu_write(this_cpu_vector, v);
++	}
++
+ 	/*
+ 	 * We don't need to rewrite the page-tables if either we've done
+ 	 * it already or we have KASLR enabled and therefore have not
+@@ -2252,6 +2275,9 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+ #ifdef CONFIG_ARM64_MTE
+ 	HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_MTE, CAP_HWCAP, KERNEL_HWCAP_MTE),
+ #endif /* CONFIG_ARM64_MTE */
++	HWCAP_CAP(SYS_ID_AA64MMFR0_EL1, ID_AA64MMFR0_ECV_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ECV),
++	HWCAP_CAP(SYS_ID_AA64MMFR1_EL1, ID_AA64MMFR1_AFP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AFP),
++	HWCAP_CAP(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_RPRES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_RPRES),
+ 	{},
+ };
+ 
+diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
+index 77605aec25fec..4c0e72781f31b 100644
+--- a/arch/arm64/kernel/cpuinfo.c
++++ b/arch/arm64/kernel/cpuinfo.c
+@@ -94,6 +94,9 @@ static const char *const hwcap_str[] = {
+ 	[KERNEL_HWCAP_RNG]		= "rng",
+ 	[KERNEL_HWCAP_BTI]		= "bti",
+ 	[KERNEL_HWCAP_MTE]		= "mte",
++	[KERNEL_HWCAP_ECV]		= "ecv",
++	[KERNEL_HWCAP_AFP]		= "afp",
++	[KERNEL_HWCAP_RPRES]		= "rpres",
+ };
+ 
+ #ifdef CONFIG_COMPAT
+@@ -364,6 +367,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
+ 	info->reg_id_aa64dfr1 = read_cpuid(ID_AA64DFR1_EL1);
+ 	info->reg_id_aa64isar0 = read_cpuid(ID_AA64ISAR0_EL1);
+ 	info->reg_id_aa64isar1 = read_cpuid(ID_AA64ISAR1_EL1);
++	info->reg_id_aa64isar2 = read_cpuid(ID_AA64ISAR2_EL1);
+ 	info->reg_id_aa64mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+ 	info->reg_id_aa64mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
+ 	info->reg_id_aa64mmfr2 = read_cpuid(ID_AA64MMFR2_EL1);
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index fe83d6d67ec3d..d5bc1dbdd2fda 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -62,18 +62,21 @@
+ 
+ 	.macro kernel_ventry, el, label, regsize = 64
+ 	.align 7
+-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++.Lventry_start\@:
+ 	.if	\el == 0
+-alternative_if ARM64_UNMAP_KERNEL_AT_EL0
++	/*
++	 * This must be the first instruction of the EL0 vector entries. It is
++	 * skipped by the trampoline vectors, to trigger the cleanup.
++	 */
++	b	.Lskip_tramp_vectors_cleanup\@
+ 	.if	\regsize == 64
+ 	mrs	x30, tpidrro_el0
+ 	msr	tpidrro_el0, xzr
+ 	.else
+ 	mov	x30, xzr
+ 	.endif
+-alternative_else_nop_endif
++.Lskip_tramp_vectors_cleanup\@:
+ 	.endif
+-#endif
+ 
+ 	sub	sp, sp, #S_FRAME_SIZE
+ #ifdef CONFIG_VMAP_STACK
+@@ -120,11 +123,15 @@ alternative_else_nop_endif
+ 	mrs	x0, tpidrro_el0
+ #endif
+ 	b	el\()\el\()_\label
++.org .Lventry_start\@ + 128	// Did we overflow the ventry slot?
+ 	.endm
+ 
+-	.macro tramp_alias, dst, sym
++	.macro tramp_alias, dst, sym, tmp
+ 	mov_q	\dst, TRAMP_VALIAS
+-	add	\dst, \dst, #(\sym - .entry.tramp.text)
++	adr_l	\tmp, \sym
++	add	\dst, \dst, \tmp
++	adr_l	\tmp, .entry.tramp.text
++	sub	\dst, \dst, \tmp
+ 	.endm
+ 
+ 	/*
+@@ -141,7 +148,7 @@ alternative_cb_end
+ 	tbnz	\tmp2, #TIF_SSBD, .L__asm_ssbd_skip\@
+ 	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
+ 	mov	w1, #\state
+-alternative_cb	spectre_v4_patch_fw_mitigation_conduit
++alternative_cb	smccc_patch_fw_mitigation_conduit
+ 	nop					// Patched to SMC/HVC #0
+ alternative_cb_end
+ .L__asm_ssbd_skip\@:
+@@ -351,21 +358,26 @@ alternative_else_nop_endif
+ 	ldp	x24, x25, [sp, #16 * 12]
+ 	ldp	x26, x27, [sp, #16 * 13]
+ 	ldp	x28, x29, [sp, #16 * 14]
+-	ldr	lr, [sp, #S_LR]
+-	add	sp, sp, #S_FRAME_SIZE		// restore sp
+ 
+ 	.if	\el == 0
+-alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
++alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
++	ldr	lr, [sp, #S_LR]
++	add	sp, sp, #S_FRAME_SIZE		// restore sp
++	eret
++alternative_else_nop_endif
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ 	bne	4f
+-	msr	far_el1, x30
+-	tramp_alias	x30, tramp_exit_native
++	msr	far_el1, x29
++	tramp_alias	x30, tramp_exit_native, x29
+ 	br	x30
+ 4:
+-	tramp_alias	x30, tramp_exit_compat
++	tramp_alias	x30, tramp_exit_compat, x29
+ 	br	x30
+ #endif
+ 	.else
++	ldr	lr, [sp, #S_LR]
++	add	sp, sp, #S_FRAME_SIZE		// restore sp
++
+ 	/* Ensure any device/NC reads complete */
+ 	alternative_insn nop, "dmb sy", ARM64_WORKAROUND_1508412
+ 
+@@ -764,12 +776,6 @@ SYM_CODE_END(ret_to_user)
+ 
+ 	.popsection				// .entry.text
+ 
+-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+-/*
+- * Exception vectors trampoline.
+- */
+-	.pushsection ".entry.tramp.text", "ax"
+-
+ 	// Move from tramp_pg_dir to swapper_pg_dir
+ 	.macro tramp_map_kernel, tmp
+ 	mrs	\tmp, ttbr1_el1
+@@ -803,12 +809,47 @@ alternative_else_nop_endif
+ 	 */
+ 	.endm
+ 
+-	.macro tramp_ventry, regsize = 64
++	.macro tramp_data_page	dst
++	adr_l	\dst, .entry.tramp.text
++	sub	\dst, \dst, PAGE_SIZE
++	.endm
++
++	.macro tramp_data_read_var	dst, var
++#ifdef CONFIG_RANDOMIZE_BASE
++	tramp_data_page		\dst
++	add	\dst, \dst, #:lo12:__entry_tramp_data_\var
++	ldr	\dst, [\dst]
++#else
++	ldr	\dst, =\var
++#endif
++	.endm
++
++#define BHB_MITIGATION_NONE	0
++#define BHB_MITIGATION_LOOP	1
++#define BHB_MITIGATION_FW	2
++#define BHB_MITIGATION_INSN	3
++
++	.macro tramp_ventry, vector_start, regsize, kpti, bhb
+ 	.align	7
+ 1:
+ 	.if	\regsize == 64
+ 	msr	tpidrro_el0, x30	// Restored in kernel_ventry
+ 	.endif
++
++	.if	\bhb == BHB_MITIGATION_LOOP
++	/*
++	 * This sequence must appear before the first indirect branch. i.e. the
++	 * ret out of tramp_ventry. It appears here because x30 is free.
++	 */
++	__mitigate_spectre_bhb_loop	x30
++	.endif // \bhb == BHB_MITIGATION_LOOP
++
++	.if	\bhb == BHB_MITIGATION_INSN
++	clearbhb
++	isb
++	.endif // \bhb == BHB_MITIGATION_INSN
++
++	.if	\kpti == 1
+ 	/*
+ 	 * Defend against branch aliasing attacks by pushing a dummy
+ 	 * entry onto the return stack and using a RET instruction to
+@@ -818,46 +859,75 @@ alternative_else_nop_endif
+ 	b	.
+ 2:
+ 	tramp_map_kernel	x30
+-#ifdef CONFIG_RANDOMIZE_BASE
+-	adr	x30, tramp_vectors + PAGE_SIZE
+ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
+-	ldr	x30, [x30]
+-#else
+-	ldr	x30, =vectors
+-#endif
++	tramp_data_read_var	x30, vectors
+ alternative_if_not ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM
+-	prfm	plil1strm, [x30, #(1b - tramp_vectors)]
++	prfm	plil1strm, [x30, #(1b - \vector_start)]
+ alternative_else_nop_endif
++
+ 	msr	vbar_el1, x30
+-	add	x30, x30, #(1b - tramp_vectors)
+ 	isb
++	.else
++	ldr	x30, =vectors
++	.endif // \kpti == 1
++
++	.if	\bhb == BHB_MITIGATION_FW
++	/*
++	 * The firmware sequence must appear before the first indirect branch.
++	 * i.e. the ret out of tramp_ventry. But it also needs the stack to be
++	 * mapped to save/restore the registers the SMC clobbers.
++	 */
++	__mitigate_spectre_bhb_fw
++	.endif // \bhb == BHB_MITIGATION_FW
++
++	add	x30, x30, #(1b - \vector_start + 4)
+ 	ret
++.org 1b + 128	// Did we overflow the ventry slot?
+ 	.endm
+ 
+ 	.macro tramp_exit, regsize = 64
+-	adr	x30, tramp_vectors
++	tramp_data_read_var	x30, this_cpu_vector
++	this_cpu_offset x29
++	ldr	x30, [x30, x29]
++
+ 	msr	vbar_el1, x30
+-	tramp_unmap_kernel	x30
++	ldr	lr, [sp, #S_LR]
++	tramp_unmap_kernel	x29
+ 	.if	\regsize == 64
+-	mrs	x30, far_el1
++	mrs	x29, far_el1
+ 	.endif
++	add	sp, sp, #S_FRAME_SIZE		// restore sp
+ 	eret
+ 	sb
+ 	.endm
+ 
+-	.align	11
+-SYM_CODE_START_NOALIGN(tramp_vectors)
++	.macro	generate_tramp_vector,	kpti, bhb
++.Lvector_start\@:
+ 	.space	0x400
+ 
+-	tramp_ventry
+-	tramp_ventry
+-	tramp_ventry
+-	tramp_ventry
++	.rept	4
++	tramp_ventry	.Lvector_start\@, 64, \kpti, \bhb
++	.endr
++	.rept	4
++	tramp_ventry	.Lvector_start\@, 32, \kpti, \bhb
++	.endr
++	.endm
+ 
+-	tramp_ventry	32
+-	tramp_ventry	32
+-	tramp_ventry	32
+-	tramp_ventry	32
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++/*
++ * Exception vectors trampoline.
++ * The order must match __bp_harden_el1_vectors and the
++ * arm64_bp_harden_el1_vectors enum.
++ */
++	.pushsection ".entry.tramp.text", "ax"
++	.align	11
++SYM_CODE_START_NOALIGN(tramp_vectors)
++#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++	generate_tramp_vector	kpti=1, bhb=BHB_MITIGATION_LOOP
++	generate_tramp_vector	kpti=1, bhb=BHB_MITIGATION_FW
++	generate_tramp_vector	kpti=1, bhb=BHB_MITIGATION_INSN
++#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++	generate_tramp_vector	kpti=1, bhb=BHB_MITIGATION_NONE
+ SYM_CODE_END(tramp_vectors)
+ 
+ SYM_CODE_START(tramp_exit_native)
+@@ -874,12 +944,56 @@ SYM_CODE_END(tramp_exit_compat)
+ 	.pushsection ".rodata", "a"
+ 	.align PAGE_SHIFT
+ SYM_DATA_START(__entry_tramp_data_start)
++__entry_tramp_data_vectors:
+ 	.quad	vectors
++#ifdef CONFIG_ARM_SDE_INTERFACE
++__entry_tramp_data___sdei_asm_handler:
++	.quad	__sdei_asm_handler
++#endif /* CONFIG_ARM_SDE_INTERFACE */
++__entry_tramp_data_this_cpu_vector:
++	.quad	this_cpu_vector
+ SYM_DATA_END(__entry_tramp_data_start)
+ 	.popsection				// .rodata
+ #endif /* CONFIG_RANDOMIZE_BASE */
+ #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+ 
++/*
++ * Exception vectors for spectre mitigations on entry from EL1 when
++ * kpti is not in use.
++ */
++	.macro generate_el1_vector, bhb
++.Lvector_start\@:
++	kernel_ventry	1, sync_invalid			// Synchronous EL1t
++	kernel_ventry	1, irq_invalid			// IRQ EL1t
++	kernel_ventry	1, fiq_invalid			// FIQ EL1t
++	kernel_ventry	1, error_invalid		// Error EL1t
++
++	kernel_ventry	1, sync				// Synchronous EL1h
++	kernel_ventry	1, irq				// IRQ EL1h
++	kernel_ventry	1, fiq_invalid			// FIQ EL1h
++	kernel_ventry	1, error			// Error EL1h
++
++	.rept	4
++	tramp_ventry	.Lvector_start\@, 64, 0, \bhb
++	.endr
++	.rept 4
++	tramp_ventry	.Lvector_start\@, 32, 0, \bhb
++	.endr
++	.endm
++
++/* The order must match tramp_vecs and the arm64_bp_harden_el1_vectors enum. */
++	.pushsection ".entry.text", "ax"
++	.align	11
++SYM_CODE_START(__bp_harden_el1_vectors)
++#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++	generate_el1_vector	bhb=BHB_MITIGATION_LOOP
++	generate_el1_vector	bhb=BHB_MITIGATION_FW
++	generate_el1_vector	bhb=BHB_MITIGATION_INSN
++#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++SYM_CODE_END(__bp_harden_el1_vectors)
++	.popsection
++
++
+ /*
+  * Register switch for AArch64. The callee-saved registers need to be saved
+  * and restored. On entry:
+@@ -969,13 +1083,7 @@ SYM_CODE_START(__sdei_asm_entry_trampoline)
+ 	 */
+ 1:	str	x4, [x1, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]
+ 
+-#ifdef CONFIG_RANDOMIZE_BASE
+-	adr	x4, tramp_vectors + PAGE_SIZE
+-	add	x4, x4, #:lo12:__sdei_asm_trampoline_next_handler
+-	ldr	x4, [x4]
+-#else
+-	ldr	x4, =__sdei_asm_handler
+-#endif
++	tramp_data_read_var     x4, __sdei_asm_handler
+ 	br	x4
+ SYM_CODE_END(__sdei_asm_entry_trampoline)
+ NOKPROBE(__sdei_asm_entry_trampoline)
+@@ -998,13 +1106,6 @@ SYM_CODE_END(__sdei_asm_exit_trampoline)
+ NOKPROBE(__sdei_asm_exit_trampoline)
+ 	.ltorg
+ .popsection		// .entry.tramp.text
+-#ifdef CONFIG_RANDOMIZE_BASE
+-.pushsection ".rodata", "a"
+-SYM_DATA_START(__sdei_asm_trampoline_next_handler)
+-	.quad	__sdei_asm_handler
+-SYM_DATA_END(__sdei_asm_trampoline_next_handler)
+-.popsection		// .rodata
+-#endif /* CONFIG_RANDOMIZE_BASE */
+ #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+ 
+ /*
+@@ -1112,7 +1213,7 @@ alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
+ alternative_else_nop_endif
+ 
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+-	tramp_alias	dst=x5, sym=__sdei_asm_exit_trampoline
++	tramp_alias	dst=x5, sym=__sdei_asm_exit_trampoline, tmp=x3
+ 	br	x5
+ #endif
+ SYM_CODE_END(__sdei_asm_handler)
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index f6e4e3737405d..3dd489b62b29f 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -18,14 +18,18 @@
+  */
+ 
+ #include <linux/arm-smccc.h>
++#include <linux/bpf.h>
+ #include <linux/cpu.h>
+ #include <linux/device.h>
+ #include <linux/nospec.h>
+ #include <linux/prctl.h>
+ #include <linux/sched/task_stack.h>
+ 
++#include <asm/insn.h>
+ #include <asm/spectre.h>
+ #include <asm/traps.h>
++#include <asm/vectors.h>
++#include <asm/virt.h>
+ 
+ /*
+  * We try to ensure that the mitigation state can never change as the result of
+@@ -94,14 +98,51 @@ static bool spectre_v2_mitigations_off(void)
+ 	return ret;
+ }
+ 
++static const char *get_bhb_affected_string(enum mitigation_state bhb_state)
++{
++	switch (bhb_state) {
++	case SPECTRE_UNAFFECTED:
++		return "";
++	default:
++	case SPECTRE_VULNERABLE:
++		return ", but not BHB";
++	case SPECTRE_MITIGATED:
++		return ", BHB";
++	}
++}
++
++static bool _unprivileged_ebpf_enabled(void)
++{
++#ifdef CONFIG_BPF_SYSCALL
++	return !sysctl_unprivileged_bpf_disabled;
++#else
++	return false;
++#endif
++}
++
+ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+ 			    char *buf)
+ {
++	enum mitigation_state bhb_state = arm64_get_spectre_bhb_state();
++	const char *bhb_str = get_bhb_affected_string(bhb_state);
++	const char *v2_str = "Branch predictor hardening";
++
+ 	switch (spectre_v2_state) {
+ 	case SPECTRE_UNAFFECTED:
+-		return sprintf(buf, "Not affected\n");
++		if (bhb_state == SPECTRE_UNAFFECTED)
++			return sprintf(buf, "Not affected\n");
++
++		/*
++		 * Platforms affected by Spectre-BHB can't report
++		 * "Not affected" for Spectre-v2.
++		 */
++		v2_str = "CSV2";
++		fallthrough;
+ 	case SPECTRE_MITIGATED:
+-		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
++		if (bhb_state == SPECTRE_MITIGATED && _unprivileged_ebpf_enabled())
++			return sprintf(buf, "Vulnerable: Unprivileged eBPF enabled\n");
++
++		return sprintf(buf, "Mitigation: %s%s\n", v2_str, bhb_str);
+ 	case SPECTRE_VULNERABLE:
+ 		fallthrough;
+ 	default:
+@@ -195,9 +236,9 @@ static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
+ 	__flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
+ }
+ 
++static DEFINE_RAW_SPINLOCK(bp_lock);
+ static void install_bp_hardening_cb(bp_hardening_cb_t fn)
+ {
+-	static DEFINE_RAW_SPINLOCK(bp_lock);
+ 	int cpu, slot = -1;
+ 	const char *hyp_vecs_start = __smccc_workaround_1_smc;
+ 	const char *hyp_vecs_end = __smccc_workaround_1_smc +
+@@ -228,6 +269,7 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn)
+ 
+ 	__this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot);
+ 	__this_cpu_write(bp_hardening_data.fn, fn);
++	__this_cpu_write(bp_hardening_data.template_start, hyp_vecs_start);
+ 	raw_spin_unlock(&bp_lock);
+ }
+ #else
+@@ -571,9 +613,9 @@ void __init spectre_v4_patch_fw_mitigation_enable(struct alt_instr *alt,
+  * Patch a NOP in the Spectre-v4 mitigation code with an SMC/HVC instruction
+  * to call into firmware to adjust the mitigation state.
+  */
+-void __init spectre_v4_patch_fw_mitigation_conduit(struct alt_instr *alt,
+-						   __le32 *origptr,
+-						   __le32 *updptr, int nr_inst)
++void __init smccc_patch_fw_mitigation_conduit(struct alt_instr *alt,
++					       __le32 *origptr,
++					       __le32 *updptr, int nr_inst)
+ {
+ 	u32 insn;
+ 
+@@ -787,3 +829,308 @@ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
+ 		return -ENODEV;
+ 	}
+ }
++
++/*
++ * Spectre BHB.
++ *
++ * A CPU is either:
++ * - Mitigated by a branchy loop a CPU specific number of times, and listed
++ *   in our "loop mitigated list".
++ * - Mitigated in software by the firmware Spectre v2 call.
++ * - Has the ClearBHB instruction to perform the mitigation.
++ * - Has the 'Exception Clears Branch History Buffer' (ECBHB) feature, so no
++ *   software mitigation in the vectors is needed.
++ * - Has CSV2.3, so is unaffected.
++ */
++static enum mitigation_state spectre_bhb_state;
++
++enum mitigation_state arm64_get_spectre_bhb_state(void)
++{
++	return spectre_bhb_state;
++}
++
++/*
++ * This must be called with SCOPE_LOCAL_CPU for each type of CPU, before any
++ * SCOPE_SYSTEM call will give the right answer.
++ */
++u8 spectre_bhb_loop_affected(int scope)
++{
++	u8 k = 0;
++	static u8 max_bhb_k;
++
++	if (scope == SCOPE_LOCAL_CPU) {
++		static const struct midr_range spectre_bhb_k32_list[] = {
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
++			MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
++			MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
++			{},
++		};
++		static const struct midr_range spectre_bhb_k24_list[] = {
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A77),
++			MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
++			{},
++		};
++		static const struct midr_range spectre_bhb_k8_list[] = {
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
++			{},
++		};
++
++		if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k32_list))
++			k = 32;
++		else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
++			k = 24;
++		else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
++			k =  8;
++
++		max_bhb_k = max(max_bhb_k, k);
++	} else {
++		k = max_bhb_k;
++	}
++
++	return k;
++}
++
++static enum mitigation_state spectre_bhb_get_cpu_fw_mitigation_state(void)
++{
++	int ret;
++	struct arm_smccc_res res;
++
++	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
++			     ARM_SMCCC_ARCH_WORKAROUND_3, &res);
++
++	ret = res.a0;
++	switch (ret) {
++	case SMCCC_RET_SUCCESS:
++		return SPECTRE_MITIGATED;
++	case SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED:
++		return SPECTRE_UNAFFECTED;
++	default:
++		fallthrough;
++	case SMCCC_RET_NOT_SUPPORTED:
++		return SPECTRE_VULNERABLE;
++	}
++}
++
++static bool is_spectre_bhb_fw_affected(int scope)
++{
++	static bool system_affected;
++	enum mitigation_state fw_state;
++	bool has_smccc = arm_smccc_1_1_get_conduit() != SMCCC_CONDUIT_NONE;
++	static const struct midr_range spectre_bhb_firmware_mitigated_list[] = {
++		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
++		MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
++		{},
++	};
++	bool cpu_in_list = is_midr_in_range_list(read_cpuid_id(),
++					 spectre_bhb_firmware_mitigated_list);
++
++	if (scope != SCOPE_LOCAL_CPU)
++		return system_affected;
++
++	fw_state = spectre_bhb_get_cpu_fw_mitigation_state();
++	if (cpu_in_list || (has_smccc && fw_state == SPECTRE_MITIGATED)) {
++		system_affected = true;
++		return true;
++	}
++
++	return false;
++}
++
++static bool supports_ecbhb(int scope)
++{
++	u64 mmfr1;
++
++	if (scope == SCOPE_LOCAL_CPU)
++		mmfr1 = read_sysreg_s(SYS_ID_AA64MMFR1_EL1);
++	else
++		mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
++
++	return cpuid_feature_extract_unsigned_field(mmfr1,
++						    ID_AA64MMFR1_ECBHB_SHIFT);
++}
++
++bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry,
++			     int scope)
++{
++	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
++
++	if (supports_csv2p3(scope))
++		return false;
++
++	if (supports_clearbhb(scope))
++		return true;
++
++	if (spectre_bhb_loop_affected(scope))
++		return true;
++
++	if (is_spectre_bhb_fw_affected(scope))
++		return true;
++
++	return false;
++}
++
++static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot)
++{
++	const char *v = arm64_get_bp_hardening_vector(slot);
++
++	if (slot < 0)
++		return;
++
++	__this_cpu_write(this_cpu_vector, v);
++
++	/*
++	 * When KPTI is in use, the vectors are switched when exiting to
++	 * user-space.
++	 */
++	if (arm64_kernel_unmapped_at_el0())
++		return;
++
++	write_sysreg(v, vbar_el1);
++	isb();
++}
++
++#ifdef CONFIG_KVM
++static int kvm_bhb_get_vecs_size(const char *start)
++{
++	if (start == __smccc_workaround_3_smc)
++		return __SMCCC_WORKAROUND_3_SMC_SZ;
++	else if (start == __spectre_bhb_loop_k8 ||
++		 start == __spectre_bhb_loop_k24 ||
++		 start == __spectre_bhb_loop_k32)
++		return __SPECTRE_BHB_LOOP_SZ;
++	else if (start == __spectre_bhb_clearbhb)
++		return __SPECTRE_BHB_CLEARBHB_SZ;
++
++	return 0;
++}
++
++static void kvm_setup_bhb_slot(const char *hyp_vecs_start)
++{
++	int cpu, slot = -1, size;
++	const char *hyp_vecs_end;
++
++	if (!IS_ENABLED(CONFIG_KVM) || !is_hyp_mode_available())
++		return;
++
++	size = kvm_bhb_get_vecs_size(hyp_vecs_start);
++	if (WARN_ON_ONCE(!hyp_vecs_start || !size))
++		return;
++	hyp_vecs_end = hyp_vecs_start + size;
++
++	raw_spin_lock(&bp_lock);
++	for_each_possible_cpu(cpu) {
++		if (per_cpu(bp_hardening_data.template_start, cpu) == hyp_vecs_start) {
++			slot = per_cpu(bp_hardening_data.hyp_vectors_slot, cpu);
++			break;
++		}
++	}
++
++	if (slot == -1) {
++		slot = atomic_inc_return(&arm64_el2_vector_last_slot);
++		BUG_ON(slot >= BP_HARDEN_EL2_SLOTS);
++		__copy_hyp_vect_bpi(slot, hyp_vecs_start, hyp_vecs_end);
++	}
++
++	__this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot);
++	__this_cpu_write(bp_hardening_data.template_start, hyp_vecs_start);
++	raw_spin_unlock(&bp_lock);
++}
++#else
++#define __smccc_workaround_3_smc NULL
++#define __spectre_bhb_loop_k8 NULL
++#define __spectre_bhb_loop_k24 NULL
++#define __spectre_bhb_loop_k32 NULL
++#define __spectre_bhb_clearbhb NULL
++
++static void kvm_setup_bhb_slot(const char *hyp_vecs_start) { }
++#endif /* CONFIG_KVM */
++
++void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
++{
++	enum mitigation_state fw_state, state = SPECTRE_VULNERABLE;
++
++	if (!is_spectre_bhb_affected(entry, SCOPE_LOCAL_CPU))
++		return;
++
++	if (arm64_get_spectre_v2_state() == SPECTRE_VULNERABLE) {
++		/* No point mitigating Spectre-BHB alone. */
++	} else if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY)) {
++		pr_info_once("spectre-bhb mitigation disabled by compile time option\n");
++	} else if (cpu_mitigations_off()) {
++		pr_info_once("spectre-bhb mitigation disabled by command line option\n");
++	} else if (supports_ecbhb(SCOPE_LOCAL_CPU)) {
++		state = SPECTRE_MITIGATED;
++	} else if (supports_clearbhb(SCOPE_LOCAL_CPU)) {
++		kvm_setup_bhb_slot(__spectre_bhb_clearbhb);
++		this_cpu_set_vectors(EL1_VECTOR_BHB_CLEAR_INSN);
++
++		state = SPECTRE_MITIGATED;
++	} else if (spectre_bhb_loop_affected(SCOPE_LOCAL_CPU)) {
++		switch (spectre_bhb_loop_affected(SCOPE_SYSTEM)) {
++		case 8:
++			kvm_setup_bhb_slot(__spectre_bhb_loop_k8);
++			break;
++		case 24:
++			kvm_setup_bhb_slot(__spectre_bhb_loop_k24);
++			break;
++		case 32:
++			kvm_setup_bhb_slot(__spectre_bhb_loop_k32);
++			break;
++		default:
++			WARN_ON_ONCE(1);
++		}
++		this_cpu_set_vectors(EL1_VECTOR_BHB_LOOP);
++
++		state = SPECTRE_MITIGATED;
++	} else if (is_spectre_bhb_fw_affected(SCOPE_LOCAL_CPU)) {
++		fw_state = spectre_bhb_get_cpu_fw_mitigation_state();
++		if (fw_state == SPECTRE_MITIGATED) {
++			kvm_setup_bhb_slot(__smccc_workaround_3_smc);
++			this_cpu_set_vectors(EL1_VECTOR_BHB_FW);
++
++			state = SPECTRE_MITIGATED;
++		}
++	}
++
++	update_mitigation_state(&spectre_bhb_state, state);
++}
++
++/* Patched to correct the immediate */
++void noinstr spectre_bhb_patch_loop_iter(struct alt_instr *alt,
++				   __le32 *origptr, __le32 *updptr, int nr_inst)
++{
++	u8 rd;
++	u32 insn;
++	u16 loop_count = spectre_bhb_loop_affected(SCOPE_SYSTEM);
++
++	BUG_ON(nr_inst != 1); /* MOV -> MOV */
++
++	if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY))
++		return;
++
++	insn = le32_to_cpu(*origptr);
++	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, insn);
++	insn = aarch64_insn_gen_movewide(rd, loop_count, 0,
++					 AARCH64_INSN_VARIANT_64BIT,
++					 AARCH64_INSN_MOVEWIDE_ZERO);
++	*updptr++ = cpu_to_le32(insn);
++}
++
++#ifdef CONFIG_BPF_SYSCALL
++#define EBPF_WARN "Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!\n"
++void unpriv_ebpf_notify(int new_state)
++{
++	if (spectre_v2_state == SPECTRE_VULNERABLE ||
++	    spectre_bhb_state != SPECTRE_MITIGATED)
++		return;
++
++	if (!new_state)
++		pr_err("WARNING: %s", EBPF_WARN);
++}
++#endif
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 30c1029789427..71f4b5f24d15f 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -299,7 +299,7 @@ ASSERT(__hibernate_exit_text_end - (__hibernate_exit_text_start & ~(SZ_4K - 1))
+ 	<= SZ_4K, "Hibernate exit text too big or misaligned")
+ #endif
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+-ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
++ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) <= 3*PAGE_SIZE,
+ 	"Entry trampoline text too big")
+ #endif
+ /*
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 5bc978be80434..4d63fcd7574b2 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -1337,7 +1337,8 @@ static int kvm_map_vectors(void)
+ 	 * !SV2 +  HEL2 -> allocate one vector slot and use exec mapping
+ 	 *  SV2 +  HEL2 -> use hardened vectors and use exec mapping
+ 	 */
+-	if (cpus_have_const_cap(ARM64_SPECTRE_V2)) {
++	if (cpus_have_const_cap(ARM64_SPECTRE_V2) ||
++	    cpus_have_const_cap(ARM64_SPECTRE_BHB)) {
+ 		__kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs);
+ 		__kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base);
+ 	}
+diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
+index bcbead3746c66..bc06243cf4225 100644
+--- a/arch/arm64/kvm/hyp/hyp-entry.S
++++ b/arch/arm64/kvm/hyp/hyp-entry.S
+@@ -61,6 +61,10 @@ el1_sync:				// Guest trapped into EL2
+ 	/* ARM_SMCCC_ARCH_WORKAROUND_2 handling */
+ 	eor	w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_1 ^ \
+ 			  ARM_SMCCC_ARCH_WORKAROUND_2)
++	cbz	w1, wa_epilogue
++
++	eor	w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_2 ^ \
++			  ARM_SMCCC_ARCH_WORKAROUND_3)
+ 	cbnz	w1, el1_trap
+ 
+ wa_epilogue:
+diff --git a/arch/arm64/kvm/hyp/smccc_wa.S b/arch/arm64/kvm/hyp/smccc_wa.S
+index b0441dbdf68bd..24b281912463d 100644
+--- a/arch/arm64/kvm/hyp/smccc_wa.S
++++ b/arch/arm64/kvm/hyp/smccc_wa.S
+@@ -30,3 +30,78 @@ SYM_DATA_START(__smccc_workaround_1_smc)
+ 1:	.org __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ
+ 	.org 1b
+ SYM_DATA_END(__smccc_workaround_1_smc)
++
++	.global		__smccc_workaround_3_smc
++SYM_DATA_START(__smccc_workaround_3_smc)
++	esb
++	sub	sp, sp, #(8 * 4)
++	stp	x2, x3, [sp, #(8 * 0)]
++	stp	x0, x1, [sp, #(8 * 2)]
++	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_3
++	smc	#0
++	ldp	x2, x3, [sp, #(8 * 0)]
++	ldp	x0, x1, [sp, #(8 * 2)]
++	add	sp, sp, #(8 * 4)
++1:	.org __smccc_workaround_3_smc + __SMCCC_WORKAROUND_3_SMC_SZ
++	.org 1b
++SYM_DATA_END(__smccc_workaround_3_smc)
++
++	.global	__spectre_bhb_loop_k8
++SYM_DATA_START(__spectre_bhb_loop_k8)
++	esb
++	sub	sp, sp, #(8 * 2)
++	stp	x0, x1, [sp, #(8 * 0)]
++	mov	x0, #8
++2:	b	. + 4
++	subs	x0, x0, #1
++	b.ne	2b
++	dsb	nsh
++	isb
++	ldp	x0, x1, [sp, #(8 * 0)]
++	add	sp, sp, #(8 * 2)
++1:	.org __spectre_bhb_loop_k8 + __SPECTRE_BHB_LOOP_SZ
++	.org 1b
++SYM_DATA_END(__spectre_bhb_loop_k8)
++
++	.global	__spectre_bhb_loop_k24
++SYM_DATA_START(__spectre_bhb_loop_k24)
++	esb
++	sub	sp, sp, #(8 * 2)
++	stp	x0, x1, [sp, #(8 * 0)]
++	mov	x0, #8
++2:	b	. + 4
++	subs	x0, x0, #1
++	b.ne	2b
++	dsb	nsh
++	isb
++	ldp	x0, x1, [sp, #(8 * 0)]
++	add	sp, sp, #(8 * 2)
++1:	.org __spectre_bhb_loop_k24 + __SPECTRE_BHB_LOOP_SZ
++	.org 1b
++SYM_DATA_END(__spectre_bhb_loop_k24)
++
++	.global	__spectre_bhb_loop_k32
++SYM_DATA_START(__spectre_bhb_loop_k32)
++	esb
++	sub	sp, sp, #(8 * 2)
++	stp	x0, x1, [sp, #(8 * 0)]
++	mov	x0, #8
++2:	b	. + 4
++	subs	x0, x0, #1
++	b.ne	2b
++	dsb	nsh
++	isb
++	ldp	x0, x1, [sp, #(8 * 0)]
++	add	sp, sp, #(8 * 2)
++1:	.org __spectre_bhb_loop_k32 + __SPECTRE_BHB_LOOP_SZ
++	.org 1b
++SYM_DATA_END(__spectre_bhb_loop_k32)
++
++	.global	__spectre_bhb_clearbhb
++SYM_DATA_START(__spectre_bhb_clearbhb)
++	esb
++	clearbhb
++	isb
++1:	.org __spectre_bhb_clearbhb + __SPECTRE_BHB_CLEARBHB_SZ
++	.org 1b
++SYM_DATA_END(__spectre_bhb_clearbhb)
+diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
+index 62546e20b2511..532e687f69366 100644
+--- a/arch/arm64/kvm/hyp/vhe/switch.c
++++ b/arch/arm64/kvm/hyp/vhe/switch.c
+@@ -10,6 +10,7 @@
+ #include <linux/kvm_host.h>
+ #include <linux/types.h>
+ #include <linux/jump_label.h>
++#include <linux/percpu.h>
+ #include <uapi/linux/psci.h>
+ 
+ #include <kvm/arm_psci.h>
+@@ -25,6 +26,7 @@
+ #include <asm/debug-monitors.h>
+ #include <asm/processor.h>
+ #include <asm/thread_info.h>
++#include <asm/vectors.h>
+ 
+ const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n";
+ 
+@@ -70,7 +72,7 @@ NOKPROBE_SYMBOL(__activate_traps);
+ 
+ static void __deactivate_traps(struct kvm_vcpu *vcpu)
+ {
+-	extern char vectors[];	/* kernel exception vectors */
++	const char *host_vectors = vectors;
+ 
+ 	___deactivate_traps(vcpu);
+ 
+@@ -84,7 +86,10 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
+ 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
+ 
+ 	write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
+-	write_sysreg(vectors, vbar_el1);
++
++	if (!arm64_kernel_unmapped_at_el0())
++		host_vectors = __this_cpu_read(this_cpu_vector);
++	write_sysreg(host_vectors, vbar_el1);
+ }
+ NOKPROBE_SYMBOL(__deactivate_traps);
+ 
+diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
+index 25ea4ecb6449f..bc111a1aff032 100644
+--- a/arch/arm64/kvm/hypercalls.c
++++ b/arch/arm64/kvm/hypercalls.c
+@@ -58,6 +58,18 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
+ 				break;
+ 			}
+ 			break;
++		case ARM_SMCCC_ARCH_WORKAROUND_3:
++			switch (arm64_get_spectre_bhb_state()) {
++			case SPECTRE_VULNERABLE:
++				break;
++			case SPECTRE_MITIGATED:
++				val = SMCCC_RET_SUCCESS;
++				break;
++			case SPECTRE_UNAFFECTED:
++				val = SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED;
++				break;
++			}
++			break;
+ 		case ARM_SMCCC_HV_PV_TIME_FEATURES:
+ 			val = SMCCC_RET_SUCCESS;
+ 			break;
+diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
+index db4056ecccfda..20ba5136ac3dd 100644
+--- a/arch/arm64/kvm/psci.c
++++ b/arch/arm64/kvm/psci.c
+@@ -397,7 +397,7 @@ int kvm_psci_call(struct kvm_vcpu *vcpu)
+ 
+ int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu)
+ {
+-	return 3;		/* PSCI version and two workaround registers */
++	return 4;		/* PSCI version and three workaround registers */
+ }
+ 
+ int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+@@ -411,6 +411,9 @@ int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+ 	if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2, uindices++))
+ 		return -EFAULT;
+ 
++	if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3, uindices++))
++		return -EFAULT;
++
+ 	return 0;
+ }
+ 
+@@ -450,6 +453,17 @@ static int get_kernel_wa_level(u64 regid)
+ 		case SPECTRE_VULNERABLE:
+ 			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL;
+ 		}
++		break;
++	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3:
++		switch (arm64_get_spectre_bhb_state()) {
++		case SPECTRE_VULNERABLE:
++			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_AVAIL;
++		case SPECTRE_MITIGATED:
++			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_AVAIL;
++		case SPECTRE_UNAFFECTED:
++			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_REQUIRED;
++		}
++		return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_AVAIL;
+ 	}
+ 
+ 	return -EINVAL;
+@@ -466,6 +480,7 @@ int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 		break;
+ 	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
+ 	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
++	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3:
+ 		val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
+ 		break;
+ 	default:
+@@ -511,6 +526,7 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	}
+ 
+ 	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
++	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3:
+ 		if (val & ~KVM_REG_FEATURE_LEVEL_MASK)
+ 			return -EINVAL;
+ 
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index 568f11e23830c..835fa036b2d54 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1517,7 +1517,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ 	/* CRm=6 */
+ 	ID_SANITISED(ID_AA64ISAR0_EL1),
+ 	ID_SANITISED(ID_AA64ISAR1_EL1),
+-	ID_UNALLOCATED(6,2),
++	ID_SANITISED(ID_AA64ISAR2_EL1),
+ 	ID_UNALLOCATED(6,3),
+ 	ID_UNALLOCATED(6,4),
+ 	ID_UNALLOCATED(6,5),
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 2601a514d8c4a..991e599f70577 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -592,6 +592,8 @@ early_param("rodata", parse_rodata);
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ static int __init map_entry_trampoline(void)
+ {
++	int i;
++
+ 	pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
+ 	phys_addr_t pa_start = __pa_symbol(__entry_tramp_text_start);
+ 
+@@ -600,11 +602,15 @@ static int __init map_entry_trampoline(void)
+ 
+ 	/* Map only the text into the trampoline page table */
+ 	memset(tramp_pg_dir, 0, PGD_SIZE);
+-	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
+-			     prot, __pgd_pgtable_alloc, 0);
++	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS,
++			     entry_tramp_text_size(), prot,
++			     __pgd_pgtable_alloc, NO_BLOCK_MAPPINGS);
+ 
+ 	/* Map both the text and data into the kernel page table */
+-	__set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
++	for (i = 0; i < DIV_ROUND_UP(entry_tramp_text_size(), PAGE_SIZE); i++)
++		__set_fixmap(FIX_ENTRY_TRAMP_TEXT1 - i,
++			     pa_start + i * PAGE_SIZE, prot);
++
+ 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+ 		extern char __entry_tramp_data_start[];
+ 
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index dad350d42ecfb..3b407f46f1a0d 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -204,7 +204,7 @@
+ #define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
+ #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
+ #define X86_FEATURE_RETPOLINE		( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
+-#define X86_FEATURE_RETPOLINE_AMD	( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
++#define X86_FEATURE_RETPOLINE_LFENCE	( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */
+ #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
+ #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
+ #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index cb9ad6b739737..4d0f5386e637b 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -82,7 +82,7 @@
+ #ifdef CONFIG_RETPOLINE
+ 	ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \
+ 		      __stringify(jmp __x86_retpoline_\reg), X86_FEATURE_RETPOLINE, \
+-		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_AMD
++		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_LFENCE
+ #else
+ 	jmp	*%\reg
+ #endif
+@@ -92,7 +92,7 @@
+ #ifdef CONFIG_RETPOLINE
+ 	ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; call *%\reg), \
+ 		      __stringify(call __x86_retpoline_\reg), X86_FEATURE_RETPOLINE, \
+-		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *%\reg), X86_FEATURE_RETPOLINE_AMD
++		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *%\reg), X86_FEATURE_RETPOLINE_LFENCE
+ #else
+ 	call	*%\reg
+ #endif
+@@ -134,7 +134,7 @@
+ 	"lfence;\n"						\
+ 	ANNOTATE_RETPOLINE_SAFE					\
+ 	"call *%[thunk_target]\n",				\
+-	X86_FEATURE_RETPOLINE_AMD)
++	X86_FEATURE_RETPOLINE_LFENCE)
+ 
+ # define THUNK_TARGET(addr) [thunk_target] "r" (addr)
+ 
+@@ -164,7 +164,7 @@
+ 	"lfence;\n"						\
+ 	ANNOTATE_RETPOLINE_SAFE					\
+ 	"call *%[thunk_target]\n",				\
+-	X86_FEATURE_RETPOLINE_AMD)
++	X86_FEATURE_RETPOLINE_LFENCE)
+ 
+ # define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
+ #endif
+@@ -176,9 +176,11 @@
+ /* The Spectre V2 mitigation variants */
+ enum spectre_v2_mitigation {
+ 	SPECTRE_V2_NONE,
+-	SPECTRE_V2_RETPOLINE_GENERIC,
+-	SPECTRE_V2_RETPOLINE_AMD,
+-	SPECTRE_V2_IBRS_ENHANCED,
++	SPECTRE_V2_RETPOLINE,
++	SPECTRE_V2_LFENCE,
++	SPECTRE_V2_EIBRS,
++	SPECTRE_V2_EIBRS_RETPOLINE,
++	SPECTRE_V2_EIBRS_LFENCE,
+ };
+ 
+ /* The indirect branch speculation control variants */
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index d41b70fe4918e..78b9514a38440 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -16,6 +16,7 @@
+ #include <linux/prctl.h>
+ #include <linux/sched/smt.h>
+ #include <linux/pgtable.h>
++#include <linux/bpf.h>
+ 
+ #include <asm/spec-ctrl.h>
+ #include <asm/cmdline.h>
+@@ -613,6 +614,32 @@ static inline const char *spectre_v2_module_string(void)
+ static inline const char *spectre_v2_module_string(void) { return ""; }
+ #endif
+ 
++#define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n"
++#define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n"
++#define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n"
++
++#ifdef CONFIG_BPF_SYSCALL
++void unpriv_ebpf_notify(int new_state)
++{
++	if (new_state)
++		return;
++
++	/* Unprivileged eBPF is enabled */
++
++	switch (spectre_v2_enabled) {
++	case SPECTRE_V2_EIBRS:
++		pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
++		break;
++	case SPECTRE_V2_EIBRS_LFENCE:
++		if (sched_smt_active())
++			pr_err(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
++		break;
++	default:
++		break;
++	}
++}
++#endif
++
+ static inline bool match_option(const char *arg, int arglen, const char *opt)
+ {
+ 	int len = strlen(opt);
+@@ -627,7 +654,10 @@ enum spectre_v2_mitigation_cmd {
+ 	SPECTRE_V2_CMD_FORCE,
+ 	SPECTRE_V2_CMD_RETPOLINE,
+ 	SPECTRE_V2_CMD_RETPOLINE_GENERIC,
+-	SPECTRE_V2_CMD_RETPOLINE_AMD,
++	SPECTRE_V2_CMD_RETPOLINE_LFENCE,
++	SPECTRE_V2_CMD_EIBRS,
++	SPECTRE_V2_CMD_EIBRS_RETPOLINE,
++	SPECTRE_V2_CMD_EIBRS_LFENCE,
+ };
+ 
+ enum spectre_v2_user_cmd {
+@@ -700,6 +730,13 @@ spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
+ 	return SPECTRE_V2_USER_CMD_AUTO;
+ }
+ 
++static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
++{
++	return (mode == SPECTRE_V2_EIBRS ||
++		mode == SPECTRE_V2_EIBRS_RETPOLINE ||
++		mode == SPECTRE_V2_EIBRS_LFENCE);
++}
++
+ static void __init
+ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+ {
+@@ -767,7 +804,7 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+ 	 */
+ 	if (!boot_cpu_has(X86_FEATURE_STIBP) ||
+ 	    !smt_possible ||
+-	    spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
++	    spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+ 		return;
+ 
+ 	/*
+@@ -787,9 +824,11 @@ set_mode:
+ 
+ static const char * const spectre_v2_strings[] = {
+ 	[SPECTRE_V2_NONE]			= "Vulnerable",
+-	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
+-	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
+-	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
++	[SPECTRE_V2_RETPOLINE]			= "Mitigation: Retpolines",
++	[SPECTRE_V2_LFENCE]			= "Mitigation: LFENCE",
++	[SPECTRE_V2_EIBRS]			= "Mitigation: Enhanced IBRS",
++	[SPECTRE_V2_EIBRS_LFENCE]		= "Mitigation: Enhanced IBRS + LFENCE",
++	[SPECTRE_V2_EIBRS_RETPOLINE]		= "Mitigation: Enhanced IBRS + Retpolines",
+ };
+ 
+ static const struct {
+@@ -800,8 +839,12 @@ static const struct {
+ 	{ "off",		SPECTRE_V2_CMD_NONE,		  false },
+ 	{ "on",			SPECTRE_V2_CMD_FORCE,		  true  },
+ 	{ "retpoline",		SPECTRE_V2_CMD_RETPOLINE,	  false },
+-	{ "retpoline,amd",	SPECTRE_V2_CMD_RETPOLINE_AMD,	  false },
++	{ "retpoline,amd",	SPECTRE_V2_CMD_RETPOLINE_LFENCE,  false },
++	{ "retpoline,lfence",	SPECTRE_V2_CMD_RETPOLINE_LFENCE,  false },
+ 	{ "retpoline,generic",	SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
++	{ "eibrs",		SPECTRE_V2_CMD_EIBRS,		  false },
++	{ "eibrs,lfence",	SPECTRE_V2_CMD_EIBRS_LFENCE,	  false },
++	{ "eibrs,retpoline",	SPECTRE_V2_CMD_EIBRS_RETPOLINE,	  false },
+ 	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
+ };
+ 
+@@ -838,17 +881,30 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	}
+ 
+ 	if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
+-	     cmd == SPECTRE_V2_CMD_RETPOLINE_AMD ||
+-	     cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) &&
++	     cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
++	     cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC ||
++	     cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
++	     cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
+ 	    !IS_ENABLED(CONFIG_RETPOLINE)) {
+-		pr_err("%s selected but not compiled in. Switching to AUTO select\n", mitigation_options[i].option);
++		pr_err("%s selected but not compiled in. Switching to AUTO select\n",
++		       mitigation_options[i].option);
+ 		return SPECTRE_V2_CMD_AUTO;
+ 	}
+ 
+-	if (cmd == SPECTRE_V2_CMD_RETPOLINE_AMD &&
+-	    boot_cpu_data.x86_vendor != X86_VENDOR_HYGON &&
+-	    boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
+-		pr_err("retpoline,amd selected but CPU is not AMD. Switching to AUTO select\n");
++	if ((cmd == SPECTRE_V2_CMD_EIBRS ||
++	     cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
++	     cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
++	    !boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
++		pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n",
++		       mitigation_options[i].option);
++		return SPECTRE_V2_CMD_AUTO;
++	}
++
++	if ((cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
++	     cmd == SPECTRE_V2_CMD_EIBRS_LFENCE) &&
++	    !boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
++		pr_err("%s selected, but CPU doesn't have a serializing LFENCE. Switching to AUTO select\n",
++		       mitigation_options[i].option);
+ 		return SPECTRE_V2_CMD_AUTO;
+ 	}
+ 
+@@ -857,6 +913,16 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	return cmd;
+ }
+ 
++static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void)
++{
++	if (!IS_ENABLED(CONFIG_RETPOLINE)) {
++		pr_err("Kernel not compiled with retpoline; no mitigation available!");
++		return SPECTRE_V2_NONE;
++	}
++
++	return SPECTRE_V2_RETPOLINE;
++}
++
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -877,49 +943,64 @@ static void __init spectre_v2_select_mitigation(void)
+ 	case SPECTRE_V2_CMD_FORCE:
+ 	case SPECTRE_V2_CMD_AUTO:
+ 		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
+-			mode = SPECTRE_V2_IBRS_ENHANCED;
+-			/* Force it so VMEXIT will restore correctly */
+-			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+-			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+-			goto specv2_set_mode;
++			mode = SPECTRE_V2_EIBRS;
++			break;
+ 		}
+-		if (IS_ENABLED(CONFIG_RETPOLINE))
+-			goto retpoline_auto;
++
++		mode = spectre_v2_select_retpoline();
+ 		break;
+-	case SPECTRE_V2_CMD_RETPOLINE_AMD:
+-		if (IS_ENABLED(CONFIG_RETPOLINE))
+-			goto retpoline_amd;
++
++	case SPECTRE_V2_CMD_RETPOLINE_LFENCE:
++		pr_err(SPECTRE_V2_LFENCE_MSG);
++		mode = SPECTRE_V2_LFENCE;
+ 		break;
++
+ 	case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
+-		if (IS_ENABLED(CONFIG_RETPOLINE))
+-			goto retpoline_generic;
++		mode = SPECTRE_V2_RETPOLINE;
+ 		break;
++
+ 	case SPECTRE_V2_CMD_RETPOLINE:
+-		if (IS_ENABLED(CONFIG_RETPOLINE))
+-			goto retpoline_auto;
++		mode = spectre_v2_select_retpoline();
++		break;
++
++	case SPECTRE_V2_CMD_EIBRS:
++		mode = SPECTRE_V2_EIBRS;
++		break;
++
++	case SPECTRE_V2_CMD_EIBRS_LFENCE:
++		mode = SPECTRE_V2_EIBRS_LFENCE;
++		break;
++
++	case SPECTRE_V2_CMD_EIBRS_RETPOLINE:
++		mode = SPECTRE_V2_EIBRS_RETPOLINE;
+ 		break;
+ 	}
+-	pr_err("Spectre mitigation: kernel not compiled with retpoline; no mitigation available!");
+-	return;
+ 
+-retpoline_auto:
+-	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+-	    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
+-	retpoline_amd:
+-		if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
+-			pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n");
+-			goto retpoline_generic;
+-		}
+-		mode = SPECTRE_V2_RETPOLINE_AMD;
+-		setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD);
+-		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
+-	} else {
+-	retpoline_generic:
+-		mode = SPECTRE_V2_RETPOLINE_GENERIC;
++	if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
++		pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
++
++	if (spectre_v2_in_eibrs_mode(mode)) {
++		/* Force it so VMEXIT will restore correctly */
++		x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
++		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++	}
++
++	switch (mode) {
++	case SPECTRE_V2_NONE:
++	case SPECTRE_V2_EIBRS:
++		break;
++
++	case SPECTRE_V2_LFENCE:
++	case SPECTRE_V2_EIBRS_LFENCE:
++		setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE);
++		fallthrough;
++
++	case SPECTRE_V2_RETPOLINE:
++	case SPECTRE_V2_EIBRS_RETPOLINE:
+ 		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
++		break;
+ 	}
+ 
+-specv2_set_mode:
+ 	spectre_v2_enabled = mode;
+ 	pr_info("%s\n", spectre_v2_strings[mode]);
+ 
+@@ -945,7 +1026,7 @@ specv2_set_mode:
+ 	 * the CPU supports Enhanced IBRS, kernel might un-intentionally not
+ 	 * enable IBRS around firmware calls.
+ 	 */
+-	if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) {
++	if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_eibrs_mode(mode)) {
+ 		setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
+ 		pr_info("Enabling Restricted Speculation for firmware calls\n");
+ 	}
+@@ -1015,6 +1096,10 @@ void cpu_bugs_smt_update(void)
+ {
+ 	mutex_lock(&spec_ctrl_mutex);
+ 
++	if (sched_smt_active() && unprivileged_ebpf_enabled() &&
++	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
++		pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
++
+ 	switch (spectre_v2_user_stibp) {
+ 	case SPECTRE_V2_USER_NONE:
+ 		break;
+@@ -1621,7 +1706,7 @@ static ssize_t tsx_async_abort_show_state(char *buf)
+ 
+ static char *stibp_state(void)
+ {
+-	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
++	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+ 		return "";
+ 
+ 	switch (spectre_v2_user_stibp) {
+@@ -1651,6 +1736,27 @@ static char *ibpb_state(void)
+ 	return "";
+ }
+ 
++static ssize_t spectre_v2_show_state(char *buf)
++{
++	if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
++		return sprintf(buf, "Vulnerable: LFENCE\n");
++
++	if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
++		return sprintf(buf, "Vulnerable: eIBRS with unprivileged eBPF\n");
++
++	if (sched_smt_active() && unprivileged_ebpf_enabled() &&
++	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
++		return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
++
++	return sprintf(buf, "%s%s%s%s%s%s\n",
++		       spectre_v2_strings[spectre_v2_enabled],
++		       ibpb_state(),
++		       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
++		       stibp_state(),
++		       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
++		       spectre_v2_module_string());
++}
++
+ static ssize_t srbds_show_state(char *buf)
+ {
+ 	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
+@@ -1676,12 +1782,7 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 		return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]);
+ 
+ 	case X86_BUG_SPECTRE_V2:
+-		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+-			       ibpb_state(),
+-			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
+-			       stibp_state(),
+-			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
+-			       spectre_v2_module_string());
++		return spectre_v2_show_state(buf);
+ 
+ 	case X86_BUG_SPEC_STORE_BYPASS:
+ 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 8347eaee679c8..3f2e5ea9ab6b7 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -2064,16 +2064,6 @@ bool acpi_ec_dispatch_gpe(void)
+ 	if (acpi_any_gpe_status_set(first_ec->gpe))
+ 		return true;
+ 
+-	/*
+-	 * Cancel the SCI wakeup and process all pending events in case there
+-	 * are any wakeup ones in there.
+-	 *
+-	 * Note that if any non-EC GPEs are active at this point, the SCI will
+-	 * retrigger after the rearming in acpi_s2idle_wake(), so no events
+-	 * should be missed by canceling the wakeup here.
+-	 */
+-	pm_system_cancel_wakeup();
+-
+ 	/*
+ 	 * Dispatch the EC GPE in-band, but do not report wakeup in any case
+ 	 * to allow the caller to process events properly after that.
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index e2614ea820bb8..503935b1deeb1 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -1012,15 +1012,21 @@ static bool acpi_s2idle_wake(void)
+ 			return true;
+ 		}
+ 
+-		/*
+-		 * Check non-EC GPE wakeups and if there are none, cancel the
+-		 * SCI-related wakeup and dispatch the EC GPE.
+-		 */
++		/* Check non-EC GPE wakeups and dispatch the EC GPE. */
+ 		if (acpi_ec_dispatch_gpe()) {
+ 			pm_pr_dbg("ACPI non-EC GPE wakeup\n");
+ 			return true;
+ 		}
+ 
++		/*
++		 * Cancel the SCI wakeup and process all pending events in case
++		 * there are any wakeup ones in there.
++		 *
++		 * Note that if any non-EC GPEs are active at this point, the
++		 * SCI will retrigger after the rearming below, so no events
++		 * should be missed by canceling the wakeup here.
++		 */
++		pm_system_cancel_wakeup();
+ 		acpi_os_wait_events_complete();
+ 
+ 		/*
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 22842d2938c28..47d4bb23d6f31 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1352,7 +1352,8 @@ free_shadow:
+ 			rinfo->ring_ref[i] = GRANT_INVALID_REF;
+ 		}
+ 	}
+-	free_pages((unsigned long)rinfo->ring.sring, get_order(info->nr_ring_pages * XEN_PAGE_SIZE));
++	free_pages_exact(rinfo->ring.sring,
++			 info->nr_ring_pages * XEN_PAGE_SIZE);
+ 	rinfo->ring.sring = NULL;
+ 
+ 	if (rinfo->irq)
+@@ -1436,9 +1437,15 @@ static int blkif_get_final_status(enum blk_req_status s1,
+ 	return BLKIF_RSP_OKAY;
+ }
+ 
+-static bool blkif_completion(unsigned long *id,
+-			     struct blkfront_ring_info *rinfo,
+-			     struct blkif_response *bret)
++/*
++ * Return values:
++ *  1 response processed.
++ *  0 missing further responses.
++ * -1 error while processing.
++ */
++static int blkif_completion(unsigned long *id,
++			    struct blkfront_ring_info *rinfo,
++			    struct blkif_response *bret)
+ {
+ 	int i = 0;
+ 	struct scatterlist *sg;
+@@ -1461,7 +1468,7 @@ static bool blkif_completion(unsigned long *id,
+ 
+ 		/* Wait the second response if not yet here. */
+ 		if (s2->status < REQ_DONE)
+-			return false;
++			return 0;
+ 
+ 		bret->status = blkif_get_final_status(s->status,
+ 						      s2->status);
+@@ -1512,42 +1519,43 @@ static bool blkif_completion(unsigned long *id,
+ 	}
+ 	/* Add the persistent grant into the list of free grants */
+ 	for (i = 0; i < num_grant; i++) {
+-		if (gnttab_query_foreign_access(s->grants_used[i]->gref)) {
++		if (!gnttab_try_end_foreign_access(s->grants_used[i]->gref)) {
+ 			/*
+ 			 * If the grant is still mapped by the backend (the
+ 			 * backend has chosen to make this grant persistent)
+ 			 * we add it at the head of the list, so it will be
+ 			 * reused first.
+ 			 */
+-			if (!info->feature_persistent)
+-				pr_alert_ratelimited("backed has not unmapped grant: %u\n",
+-						     s->grants_used[i]->gref);
++			if (!info->feature_persistent) {
++				pr_alert("backed has not unmapped grant: %u\n",
++					 s->grants_used[i]->gref);
++				return -1;
++			}
+ 			list_add(&s->grants_used[i]->node, &rinfo->grants);
+ 			rinfo->persistent_gnts_c++;
+ 		} else {
+ 			/*
+-			 * If the grant is not mapped by the backend we end the
+-			 * foreign access and add it to the tail of the list,
+-			 * so it will not be picked again unless we run out of
+-			 * persistent grants.
++			 * If the grant is not mapped by the backend we add it
++			 * to the tail of the list, so it will not be picked
++			 * again unless we run out of persistent grants.
+ 			 */
+-			gnttab_end_foreign_access(s->grants_used[i]->gref, 0, 0UL);
+ 			s->grants_used[i]->gref = GRANT_INVALID_REF;
+ 			list_add_tail(&s->grants_used[i]->node, &rinfo->grants);
+ 		}
+ 	}
+ 	if (s->req.operation == BLKIF_OP_INDIRECT) {
+ 		for (i = 0; i < INDIRECT_GREFS(num_grant); i++) {
+-			if (gnttab_query_foreign_access(s->indirect_grants[i]->gref)) {
+-				if (!info->feature_persistent)
+-					pr_alert_ratelimited("backed has not unmapped grant: %u\n",
+-							     s->indirect_grants[i]->gref);
++			if (!gnttab_try_end_foreign_access(s->indirect_grants[i]->gref)) {
++				if (!info->feature_persistent) {
++					pr_alert("backed has not unmapped grant: %u\n",
++						 s->indirect_grants[i]->gref);
++					return -1;
++				}
+ 				list_add(&s->indirect_grants[i]->node, &rinfo->grants);
+ 				rinfo->persistent_gnts_c++;
+ 			} else {
+ 				struct page *indirect_page;
+ 
+-				gnttab_end_foreign_access(s->indirect_grants[i]->gref, 0, 0UL);
+ 				/*
+ 				 * Add the used indirect page back to the list of
+ 				 * available pages for indirect grefs.
+@@ -1562,7 +1570,7 @@ static bool blkif_completion(unsigned long *id,
+ 		}
+ 	}
+ 
+-	return true;
++	return 1;
+ }
+ 
+ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+@@ -1628,12 +1636,17 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 		}
+ 
+ 		if (bret.operation != BLKIF_OP_DISCARD) {
++			int ret;
++
+ 			/*
+ 			 * We may need to wait for an extra response if the
+ 			 * I/O request is split in 2
+ 			 */
+-			if (!blkif_completion(&id, rinfo, &bret))
++			ret = blkif_completion(&id, rinfo, &bret);
++			if (!ret)
+ 				continue;
++			if (unlikely(ret < 0))
++				goto err;
+ 		}
+ 
+ 		if (add_id_to_freelist(rinfo, id)) {
+@@ -1740,8 +1753,7 @@ static int setup_blkring(struct xenbus_device *dev,
+ 	for (i = 0; i < info->nr_ring_pages; i++)
+ 		rinfo->ring_ref[i] = GRANT_INVALID_REF;
+ 
+-	sring = (struct blkif_sring *)__get_free_pages(GFP_NOIO | __GFP_HIGH,
+-						       get_order(ring_size));
++	sring = alloc_pages_exact(ring_size, GFP_NOIO);
+ 	if (!sring) {
+ 		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
+ 		return -ENOMEM;
+@@ -1751,7 +1763,7 @@ static int setup_blkring(struct xenbus_device *dev,
+ 
+ 	err = xenbus_grant_ring(dev, rinfo->ring.sring, info->nr_ring_pages, gref);
+ 	if (err < 0) {
+-		free_pages((unsigned long)sring, get_order(ring_size));
++		free_pages_exact(sring, ring_size);
+ 		rinfo->ring.sring = NULL;
+ 		goto fail;
+ 	}
+@@ -2729,11 +2741,10 @@ static void purge_persistent_grants(struct blkfront_info *info)
+ 		list_for_each_entry_safe(gnt_list_entry, tmp, &rinfo->grants,
+ 					 node) {
+ 			if (gnt_list_entry->gref == GRANT_INVALID_REF ||
+-			    gnttab_query_foreign_access(gnt_list_entry->gref))
++			    !gnttab_try_end_foreign_access(gnt_list_entry->gref))
+ 				continue;
+ 
+ 			list_del(&gnt_list_entry->node);
+-			gnttab_end_foreign_access(gnt_list_entry->gref, 0, 0UL);
+ 			rinfo->persistent_gnts_c--;
+ 			gnt_list_entry->gref = GRANT_INVALID_REF;
+ 			list_add_tail(&gnt_list_entry->node, &rinfo->grants);
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 7ed8872d08c60..1a69b5246133b 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -424,14 +424,12 @@ static bool xennet_tx_buf_gc(struct netfront_queue *queue)
+ 			queue->tx_link[id] = TX_LINK_NONE;
+ 			skb = queue->tx_skbs[id];
+ 			queue->tx_skbs[id] = NULL;
+-			if (unlikely(gnttab_query_foreign_access(
+-				queue->grant_tx_ref[id]) != 0)) {
++			if (unlikely(!gnttab_end_foreign_access_ref(
++				queue->grant_tx_ref[id], GNTMAP_readonly))) {
+ 				dev_alert(dev,
+ 					  "Grant still in use by backend domain\n");
+ 				goto err;
+ 			}
+-			gnttab_end_foreign_access_ref(
+-				queue->grant_tx_ref[id], GNTMAP_readonly);
+ 			gnttab_release_grant_reference(
+ 				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+ 			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+@@ -992,7 +990,6 @@ static int xennet_get_responses(struct netfront_queue *queue,
+ 	struct device *dev = &queue->info->netdev->dev;
+ 	struct bpf_prog *xdp_prog;
+ 	struct xdp_buff xdp;
+-	unsigned long ret;
+ 	int slots = 1;
+ 	int err = 0;
+ 	u32 verdict;
+@@ -1034,8 +1031,13 @@ static int xennet_get_responses(struct netfront_queue *queue,
+ 			goto next;
+ 		}
+ 
+-		ret = gnttab_end_foreign_access_ref(ref, 0);
+-		BUG_ON(!ret);
++		if (!gnttab_end_foreign_access_ref(ref, 0)) {
++			dev_alert(dev,
++				  "Grant still in use by backend domain\n");
++			queue->info->broken = true;
++			dev_alert(dev, "Disabled for further use\n");
++			return -EINVAL;
++		}
+ 
+ 		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
+ 
+@@ -1256,6 +1258,10 @@ static int xennet_poll(struct napi_struct *napi, int budget)
+ 					   &need_xdp_flush);
+ 
+ 		if (unlikely(err)) {
++			if (queue->info->broken) {
++				spin_unlock(&queue->rx_lock);
++				return 0;
++			}
+ err:
+ 			while ((skb = __skb_dequeue(&tmpq)))
+ 				__skb_queue_tail(&errq, skb);
+@@ -1920,7 +1926,7 @@ static int setup_netfront(struct xenbus_device *dev,
+ 			struct netfront_queue *queue, unsigned int feature_split_evtchn)
+ {
+ 	struct xen_netif_tx_sring *txs;
+-	struct xen_netif_rx_sring *rxs;
++	struct xen_netif_rx_sring *rxs = NULL;
+ 	grant_ref_t gref;
+ 	int err;
+ 
+@@ -1940,21 +1946,21 @@ static int setup_netfront(struct xenbus_device *dev,
+ 
+ 	err = xenbus_grant_ring(dev, txs, 1, &gref);
+ 	if (err < 0)
+-		goto grant_tx_ring_fail;
++		goto fail;
+ 	queue->tx_ring_ref = gref;
+ 
+ 	rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
+ 	if (!rxs) {
+ 		err = -ENOMEM;
+ 		xenbus_dev_fatal(dev, err, "allocating rx ring page");
+-		goto alloc_rx_ring_fail;
++		goto fail;
+ 	}
+ 	SHARED_RING_INIT(rxs);
+ 	FRONT_RING_INIT(&queue->rx, rxs, XEN_PAGE_SIZE);
+ 
+ 	err = xenbus_grant_ring(dev, rxs, 1, &gref);
+ 	if (err < 0)
+-		goto grant_rx_ring_fail;
++		goto fail;
+ 	queue->rx_ring_ref = gref;
+ 
+ 	if (feature_split_evtchn)
+@@ -1967,22 +1973,28 @@ static int setup_netfront(struct xenbus_device *dev,
+ 		err = setup_netfront_single(queue);
+ 
+ 	if (err)
+-		goto alloc_evtchn_fail;
++		goto fail;
+ 
+ 	return 0;
+ 
+ 	/* If we fail to setup netfront, it is safe to just revoke access to
+ 	 * granted pages because backend is not accessing it at this point.
+ 	 */
+-alloc_evtchn_fail:
+-	gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
+-grant_rx_ring_fail:
+-	free_page((unsigned long)rxs);
+-alloc_rx_ring_fail:
+-	gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
+-grant_tx_ring_fail:
+-	free_page((unsigned long)txs);
+-fail:
++ fail:
++	if (queue->rx_ring_ref != GRANT_INVALID_REF) {
++		gnttab_end_foreign_access(queue->rx_ring_ref, 0,
++					  (unsigned long)rxs);
++		queue->rx_ring_ref = GRANT_INVALID_REF;
++	} else {
++		free_page((unsigned long)rxs);
++	}
++	if (queue->tx_ring_ref != GRANT_INVALID_REF) {
++		gnttab_end_foreign_access(queue->tx_ring_ref, 0,
++					  (unsigned long)txs);
++		queue->tx_ring_ref = GRANT_INVALID_REF;
++	} else {
++		free_page((unsigned long)txs);
++	}
+ 	return err;
+ }
+ 
+diff --git a/drivers/scsi/xen-scsifront.c b/drivers/scsi/xen-scsifront.c
+index 259fc248d06cf..a25c9386fdf78 100644
+--- a/drivers/scsi/xen-scsifront.c
++++ b/drivers/scsi/xen-scsifront.c
+@@ -233,12 +233,11 @@ static void scsifront_gnttab_done(struct vscsifrnt_info *info,
+ 		return;
+ 
+ 	for (i = 0; i < shadow->nr_grants; i++) {
+-		if (unlikely(gnttab_query_foreign_access(shadow->gref[i]))) {
++		if (unlikely(!gnttab_try_end_foreign_access(shadow->gref[i]))) {
+ 			shost_printk(KERN_ALERT, info->host, KBUILD_MODNAME
+ 				     "grant still in use by backend\n");
+ 			BUG();
+ 		}
+-		gnttab_end_foreign_access(shadow->gref[i], 0, 0UL);
+ 	}
+ 
+ 	kfree(shadow->sg);
+diff --git a/drivers/xen/gntalloc.c b/drivers/xen/gntalloc.c
+index 3fa40c723e8e9..edb0acd0b8323 100644
+--- a/drivers/xen/gntalloc.c
++++ b/drivers/xen/gntalloc.c
+@@ -169,20 +169,14 @@ undo:
+ 		__del_gref(gref);
+ 	}
+ 
+-	/* It's possible for the target domain to map the just-allocated grant
+-	 * references by blindly guessing their IDs; if this is done, then
+-	 * __del_gref will leave them in the queue_gref list. They need to be
+-	 * added to the global list so that we can free them when they are no
+-	 * longer referenced.
+-	 */
+-	if (unlikely(!list_empty(&queue_gref)))
+-		list_splice_tail(&queue_gref, &gref_list);
+ 	mutex_unlock(&gref_mutex);
+ 	return rc;
+ }
+ 
+ static void __del_gref(struct gntalloc_gref *gref)
+ {
++	unsigned long addr;
++
+ 	if (gref->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
+ 		uint8_t *tmp = kmap(gref->page);
+ 		tmp[gref->notify.pgoff] = 0;
+@@ -196,21 +190,16 @@ static void __del_gref(struct gntalloc_gref *gref)
+ 	gref->notify.flags = 0;
+ 
+ 	if (gref->gref_id) {
+-		if (gnttab_query_foreign_access(gref->gref_id))
+-			return;
+-
+-		if (!gnttab_end_foreign_access_ref(gref->gref_id, 0))
+-			return;
+-
+-		gnttab_free_grant_reference(gref->gref_id);
++		if (gref->page) {
++			addr = (unsigned long)page_to_virt(gref->page);
++			gnttab_end_foreign_access(gref->gref_id, 0, addr);
++		} else
++			gnttab_free_grant_reference(gref->gref_id);
+ 	}
+ 
+ 	gref_size--;
+ 	list_del(&gref->next_gref);
+ 
+-	if (gref->page)
+-		__free_page(gref->page);
+-
+ 	kfree(gref);
+ }
+ 
+diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
+index 3729bea0c9895..5c83d41766c85 100644
+--- a/drivers/xen/grant-table.c
++++ b/drivers/xen/grant-table.c
+@@ -134,12 +134,9 @@ struct gnttab_ops {
+ 	 */
+ 	unsigned long (*end_foreign_transfer_ref)(grant_ref_t ref);
+ 	/*
+-	 * Query the status of a grant entry. Ref parameter is reference of
+-	 * queried grant entry, return value is the status of queried entry.
+-	 * Detailed status(writing/reading) can be gotten from the return value
+-	 * by bit operations.
++	 * Read the frame number related to a given grant reference.
+ 	 */
+-	int (*query_foreign_access)(grant_ref_t ref);
++	unsigned long (*read_frame)(grant_ref_t ref);
+ };
+ 
+ struct unmap_refs_callback_data {
+@@ -284,22 +281,6 @@ int gnttab_grant_foreign_access(domid_t domid, unsigned long frame,
+ }
+ EXPORT_SYMBOL_GPL(gnttab_grant_foreign_access);
+ 
+-static int gnttab_query_foreign_access_v1(grant_ref_t ref)
+-{
+-	return gnttab_shared.v1[ref].flags & (GTF_reading|GTF_writing);
+-}
+-
+-static int gnttab_query_foreign_access_v2(grant_ref_t ref)
+-{
+-	return grstatus[ref] & (GTF_reading|GTF_writing);
+-}
+-
+-int gnttab_query_foreign_access(grant_ref_t ref)
+-{
+-	return gnttab_interface->query_foreign_access(ref);
+-}
+-EXPORT_SYMBOL_GPL(gnttab_query_foreign_access);
+-
+ static int gnttab_end_foreign_access_ref_v1(grant_ref_t ref, int readonly)
+ {
+ 	u16 flags, nflags;
+@@ -353,6 +334,16 @@ int gnttab_end_foreign_access_ref(grant_ref_t ref, int readonly)
+ }
+ EXPORT_SYMBOL_GPL(gnttab_end_foreign_access_ref);
+ 
++static unsigned long gnttab_read_frame_v1(grant_ref_t ref)
++{
++	return gnttab_shared.v1[ref].frame;
++}
++
++static unsigned long gnttab_read_frame_v2(grant_ref_t ref)
++{
++	return gnttab_shared.v2[ref].full_page.frame;
++}
++
+ struct deferred_entry {
+ 	struct list_head list;
+ 	grant_ref_t ref;
+@@ -382,12 +373,9 @@ static void gnttab_handle_deferred(struct timer_list *unused)
+ 		spin_unlock_irqrestore(&gnttab_list_lock, flags);
+ 		if (_gnttab_end_foreign_access_ref(entry->ref, entry->ro)) {
+ 			put_free_entry(entry->ref);
+-			if (entry->page) {
+-				pr_debug("freeing g.e. %#x (pfn %#lx)\n",
+-					 entry->ref, page_to_pfn(entry->page));
+-				put_page(entry->page);
+-			} else
+-				pr_info("freeing g.e. %#x\n", entry->ref);
++			pr_debug("freeing g.e. %#x (pfn %#lx)\n",
++				 entry->ref, page_to_pfn(entry->page));
++			put_page(entry->page);
+ 			kfree(entry);
+ 			entry = NULL;
+ 		} else {
+@@ -412,9 +400,18 @@ static void gnttab_handle_deferred(struct timer_list *unused)
+ static void gnttab_add_deferred(grant_ref_t ref, bool readonly,
+ 				struct page *page)
+ {
+-	struct deferred_entry *entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
++	struct deferred_entry *entry;
++	gfp_t gfp = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL;
+ 	const char *what = KERN_WARNING "leaking";
+ 
++	entry = kmalloc(sizeof(*entry), gfp);
++	if (!page) {
++		unsigned long gfn = gnttab_interface->read_frame(ref);
++
++		page = pfn_to_page(gfn_to_pfn(gfn));
++		get_page(page);
++	}
++
+ 	if (entry) {
+ 		unsigned long flags;
+ 
+@@ -435,11 +432,21 @@ static void gnttab_add_deferred(grant_ref_t ref, bool readonly,
+ 	       what, ref, page ? page_to_pfn(page) : -1);
+ }
+ 
++int gnttab_try_end_foreign_access(grant_ref_t ref)
++{
++	int ret = _gnttab_end_foreign_access_ref(ref, 0);
++
++	if (ret)
++		put_free_entry(ref);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(gnttab_try_end_foreign_access);
++
+ void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
+ 			       unsigned long page)
+ {
+-	if (gnttab_end_foreign_access_ref(ref, readonly)) {
+-		put_free_entry(ref);
++	if (gnttab_try_end_foreign_access(ref)) {
+ 		if (page != 0)
+ 			put_page(virt_to_page(page));
+ 	} else
+@@ -1417,7 +1424,7 @@ static const struct gnttab_ops gnttab_v1_ops = {
+ 	.update_entry			= gnttab_update_entry_v1,
+ 	.end_foreign_access_ref		= gnttab_end_foreign_access_ref_v1,
+ 	.end_foreign_transfer_ref	= gnttab_end_foreign_transfer_ref_v1,
+-	.query_foreign_access		= gnttab_query_foreign_access_v1,
++	.read_frame			= gnttab_read_frame_v1,
+ };
+ 
+ static const struct gnttab_ops gnttab_v2_ops = {
+@@ -1429,7 +1436,7 @@ static const struct gnttab_ops gnttab_v2_ops = {
+ 	.update_entry			= gnttab_update_entry_v2,
+ 	.end_foreign_access_ref		= gnttab_end_foreign_access_ref_v2,
+ 	.end_foreign_transfer_ref	= gnttab_end_foreign_transfer_ref_v2,
+-	.query_foreign_access		= gnttab_query_foreign_access_v2,
++	.read_frame			= gnttab_read_frame_v2,
+ };
+ 
+ static bool gnttab_need_v2(void)
+diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
+index 7984645b59563..bbe337dc296e3 100644
+--- a/drivers/xen/pvcalls-front.c
++++ b/drivers/xen/pvcalls-front.c
+@@ -337,8 +337,8 @@ static void free_active_ring(struct sock_mapping *map)
+ 	if (!map->active.ring)
+ 		return;
+ 
+-	free_pages((unsigned long)map->active.data.in,
+-			map->active.ring->ring_order);
++	free_pages_exact(map->active.data.in,
++			 PAGE_SIZE << map->active.ring->ring_order);
+ 	free_page((unsigned long)map->active.ring);
+ }
+ 
+@@ -352,8 +352,8 @@ static int alloc_active_ring(struct sock_mapping *map)
+ 		goto out;
+ 
+ 	map->active.ring->ring_order = PVCALLS_RING_ORDER;
+-	bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+-					PVCALLS_RING_ORDER);
++	bytes = alloc_pages_exact(PAGE_SIZE << PVCALLS_RING_ORDER,
++				  GFP_KERNEL | __GFP_ZERO);
+ 	if (!bytes)
+ 		goto out;
+ 
+diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
+index 0cd728961fce9..16cfef0993295 100644
+--- a/drivers/xen/xenbus/xenbus_client.c
++++ b/drivers/xen/xenbus/xenbus_client.c
+@@ -379,7 +379,14 @@ int xenbus_grant_ring(struct xenbus_device *dev, void *vaddr,
+ 		      unsigned int nr_pages, grant_ref_t *grefs)
+ {
+ 	int err;
+-	int i, j;
++	unsigned int i;
++	grant_ref_t gref_head;
++
++	err = gnttab_alloc_grant_references(nr_pages, &gref_head);
++	if (err) {
++		xenbus_dev_fatal(dev, err, "granting access to ring page");
++		return err;
++	}
+ 
+ 	for (i = 0; i < nr_pages; i++) {
+ 		unsigned long gfn;
+@@ -389,23 +396,14 @@ int xenbus_grant_ring(struct xenbus_device *dev, void *vaddr,
+ 		else
+ 			gfn = virt_to_gfn(vaddr);
+ 
+-		err = gnttab_grant_foreign_access(dev->otherend_id, gfn, 0);
+-		if (err < 0) {
+-			xenbus_dev_fatal(dev, err,
+-					 "granting access to ring page");
+-			goto fail;
+-		}
+-		grefs[i] = err;
++		grefs[i] = gnttab_claim_grant_reference(&gref_head);
++		gnttab_grant_foreign_access_ref(grefs[i], dev->otherend_id,
++						gfn, 0);
+ 
+ 		vaddr = vaddr + XEN_PAGE_SIZE;
+ 	}
+ 
+ 	return 0;
+-
+-fail:
+-	for (j = 0; j < i; j++)
+-		gnttab_end_foreign_access_ref(grefs[j], 0);
+-	return err;
+ }
+ EXPORT_SYMBOL_GPL(xenbus_grant_ring);
+ 
+diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
+index f860645f65128..ff38737475ecb 100644
+--- a/include/linux/arm-smccc.h
++++ b/include/linux/arm-smccc.h
+@@ -87,6 +87,11 @@
+ 			   ARM_SMCCC_SMC_32,				\
+ 			   0, 0x7fff)
+ 
++#define ARM_SMCCC_ARCH_WORKAROUND_3					\
++	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
++			   ARM_SMCCC_SMC_32,				\
++			   0, 0x3fff)
++
+ #define SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED	1
+ 
+ /* Paravirtualised time calls (defined by ARM DEN0057A) */
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index e6ddf5a3beaf8..ea3ff499e94a3 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1485,6 +1485,12 @@ struct bpf_prog *bpf_prog_by_id(u32 id);
+ struct bpf_link *bpf_link_by_id(u32 id);
+ 
+ const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
++
++static inline bool unprivileged_ebpf_enabled(void)
++{
++	return !sysctl_unprivileged_bpf_disabled;
++}
++
+ #else /* !CONFIG_BPF_SYSCALL */
+ static inline struct bpf_prog *bpf_prog_get(u32 ufd)
+ {
+@@ -1679,6 +1685,12 @@ bpf_base_func_proto(enum bpf_func_id func_id)
+ {
+ 	return NULL;
+ }
++
++static inline bool unprivileged_ebpf_enabled(void)
++{
++	return false;
++}
++
+ #endif /* CONFIG_BPF_SYSCALL */
+ 
+ static inline struct bpf_prog *bpf_prog_get_type(u32 ufd,
+diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
+index 0b1182a3cf412..57b4ae6a4a186 100644
+--- a/include/xen/grant_table.h
++++ b/include/xen/grant_table.h
+@@ -97,17 +97,32 @@ int gnttab_end_foreign_access_ref(grant_ref_t ref, int readonly);
+  * access has been ended, free the given page too.  Access will be ended
+  * immediately iff the grant entry is not in use, otherwise it will happen
+  * some time later.  page may be 0, in which case no freeing will occur.
++ * Note that the granted page might still be accessed (read or write) by the
++ * other side after gnttab_end_foreign_access() returns, so even if page was
++ * specified as 0 it is not allowed to just reuse the page for other
++ * purposes immediately. gnttab_end_foreign_access() will take an additional
++ * reference to the granted page in this case, which is dropped only after
++ * the grant is no longer in use.
++ * This requires that multi page allocations for areas subject to
++ * gnttab_end_foreign_access() are done via alloc_pages_exact() (and freeing
++ * via free_pages_exact()) in order to avoid high order pages.
+  */
+ void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
+ 			       unsigned long page);
+ 
++/*
++ * End access through the given grant reference, iff the grant entry is
++ * no longer in use.  In case of success ending foreign access, the
++ * grant reference is deallocated.
++ * Return 1 if the grant entry was freed, 0 if it is still in use.
++ */
++int gnttab_try_end_foreign_access(grant_ref_t ref);
++
+ int gnttab_grant_foreign_transfer(domid_t domid, unsigned long pfn);
+ 
+ unsigned long gnttab_end_foreign_transfer_ref(grant_ref_t ref);
+ unsigned long gnttab_end_foreign_transfer(grant_ref_t ref);
+ 
+-int gnttab_query_foreign_access(grant_ref_t ref);
+-
+ /*
+  * operations on reserved batches of grant references
+  */
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 72ceb19574d0c..8832440a4938e 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -234,6 +234,10 @@ static int bpf_stats_handler(struct ctl_table *table, int write,
+ 	return ret;
+ }
+ 
++void __weak unpriv_ebpf_notify(int new_state)
++{
++}
++
+ static int bpf_unpriv_handler(struct ctl_table *table, int write,
+ 			      void *buffer, size_t *lenp, loff_t *ppos)
+ {
+@@ -251,6 +255,9 @@ static int bpf_unpriv_handler(struct ctl_table *table, int write,
+ 			return -EPERM;
+ 		*(int *)table->data = unpriv_enable;
+ 	}
++
++	unpriv_ebpf_notify(unpriv_enable);
++
+ 	return ret;
+ }
+ #endif /* CONFIG_BPF_SYSCALL && CONFIG_SYSCTL */
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index 3ec1a51a6944e..432ac5a16f2e0 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -304,9 +304,9 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
+ 				ref = priv->rings[i].intf->ref[j];
+ 				gnttab_end_foreign_access(ref, 0, 0);
+ 			}
+-			free_pages((unsigned long)priv->rings[i].data.in,
+-				   priv->rings[i].intf->ring_order -
+-				   (PAGE_SHIFT - XEN_PAGE_SHIFT));
++			free_pages_exact(priv->rings[i].data.in,
++				   1UL << (priv->rings[i].intf->ring_order +
++					   XEN_PAGE_SHIFT));
+ 		}
+ 		gnttab_end_foreign_access(priv->rings[i].ref, 0, 0);
+ 		free_page((unsigned long)priv->rings[i].intf);
+@@ -345,8 +345,8 @@ static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
+ 	if (ret < 0)
+ 		goto out;
+ 	ring->ref = ret;
+-	bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+-			order - (PAGE_SHIFT - XEN_PAGE_SHIFT));
++	bytes = alloc_pages_exact(1UL << (order + XEN_PAGE_SHIFT),
++				  GFP_KERNEL | __GFP_ZERO);
+ 	if (!bytes) {
+ 		ret = -ENOMEM;
+ 		goto out;
+@@ -377,9 +377,7 @@ out:
+ 	if (bytes) {
+ 		for (i--; i >= 0; i--)
+ 			gnttab_end_foreign_access(ring->intf->ref[i], 0, 0);
+-		free_pages((unsigned long)bytes,
+-			   ring->intf->ring_order -
+-			   (PAGE_SHIFT - XEN_PAGE_SHIFT));
++		free_pages_exact(bytes, 1UL << (order + XEN_PAGE_SHIFT));
+ 	}
+ 	gnttab_end_foreign_access(ring->ref, 0, 0);
+ 	free_page((unsigned long)ring->intf);
+diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
+index dad350d42ecfb..b58730cc12e83 100644
+--- a/tools/arch/x86/include/asm/cpufeatures.h
++++ b/tools/arch/x86/include/asm/cpufeatures.h
+@@ -204,7 +204,7 @@
+ #define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
+ #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
+ #define X86_FEATURE_RETPOLINE		( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
+-#define X86_FEATURE_RETPOLINE_AMD	( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
++#define X86_FEATURE_RETPOLINE_LFENCE	( 7*32+13) /* "" Use LFENCEs for Spectre variant 2 */
+ #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
+ #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
+ #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-03-16 13:33 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-03-16 13:33 UTC (permalink / raw
  To: gentoo-commits

commit:     040d55ca4b181afe5b6151820d220af1af18b017
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 16 13:33:04 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 16 13:33:04 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=040d55ca

Linux patch 5.10.106

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1105_linux-5.10.106.patch | 3053 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3057 insertions(+)

diff --git a/0000_README b/0000_README
index 6fbdd908..acb932f5 100644
--- a/0000_README
+++ b/0000_README
@@ -463,6 +463,10 @@ Patch:  1104_linux-5.10.105.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.105
 
+Patch:  1105_linux-5.10.106.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.106
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1105_linux-5.10.106.patch b/1105_linux-5.10.106.patch
new file mode 100644
index 00000000..e5a68d3f
--- /dev/null
+++ b/1105_linux-5.10.106.patch
@@ -0,0 +1,3053 @@
+diff --git a/Makefile b/Makefile
+index ea665736db040..7b0dffadf6a89 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 105
++SUBLEVEL = 106
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+index 910eacc8ad3bd..a362714ae9fc0 100644
+--- a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
++++ b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+@@ -118,7 +118,7 @@
+ 	};
+ 
+ 	pinctrl_fwqspid_default: fwqspid_default {
+-		function = "FWQSPID";
++		function = "FWSPID";
+ 		groups = "FWQSPID";
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/bcm2711.dtsi b/arch/arm/boot/dts/bcm2711.dtsi
+index 55ec83bde5a61..e46a3f4ad350a 100644
+--- a/arch/arm/boot/dts/bcm2711.dtsi
++++ b/arch/arm/boot/dts/bcm2711.dtsi
+@@ -290,6 +290,7 @@
+ 
+ 		hvs: hvs@7e400000 {
+ 			compatible = "brcm,bcm2711-hvs";
++			reg = <0x7e400000 0x8000>;
+ 			interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
+ 		};
+ 
+diff --git a/arch/arm/include/asm/spectre.h b/arch/arm/include/asm/spectre.h
+index d1fa5607d3aa3..85f9e538fb325 100644
+--- a/arch/arm/include/asm/spectre.h
++++ b/arch/arm/include/asm/spectre.h
+@@ -25,7 +25,13 @@ enum {
+ 	SPECTRE_V2_METHOD_LOOP8 = BIT(__SPECTRE_V2_METHOD_LOOP8),
+ };
+ 
++#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
+ void spectre_v2_update_state(unsigned int state, unsigned int methods);
++#else
++static inline void spectre_v2_update_state(unsigned int state,
++					   unsigned int methods)
++{}
++#endif
+ 
+ int spectre_bhb_update_vectors(unsigned int method);
+ 
+diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
+index 3cbd35c82a66c..c3ebe3584103b 100644
+--- a/arch/arm/kernel/entry-armv.S
++++ b/arch/arm/kernel/entry-armv.S
+@@ -1043,9 +1043,9 @@ vector_bhb_loop8_\name:
+ 
+ 	@ bhb workaround
+ 	mov	r0, #8
+-1:	b	. + 4
++3:	b	. + 4
+ 	subs	r0, r0, #1
+-	bne	1b
++	bne	3b
+ 	dsb
+ 	isb
+ 	b	2b
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index 2e437f20da39b..00e5dbf4b8236 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -18,6 +18,7 @@
+ 
+ 	aliases {
+ 		spi0 = &spi0;
++		ethernet0 = &eth0;
+ 		ethernet1 = &eth1;
+ 		mmc0 = &sdhci0;
+ 		mmc1 = &sdhci1;
+@@ -137,7 +138,9 @@
+ 	/*
+ 	 * U-Boot port for Turris Mox has a bug which always expects that "ranges" DT property
+ 	 * contains exactly 2 ranges with 3 (child) address cells, 2 (parent) address cells and
+-	 * 2 size cells and also expects that the second range starts at 16 MB offset. If these
++	 * 2 size cells and also expects that the second range starts at 16 MB offset. Also it
++	 * expects that first range uses same address for PCI (child) and CPU (parent) cells (so
++	 * no remapping) and that this address is the lowest from all specified ranges. If these
+ 	 * conditions are not met then U-Boot crashes during loading kernel DTB file. PCIe address
+ 	 * space is 128 MB long, so the best split between MEM and IO is to use fixed 16 MB window
+ 	 * for IO and the rest 112 MB (64+32+16) for MEM, despite that maximal IO size is just 64 kB.
+@@ -146,6 +149,9 @@
+ 	 * https://source.denx.de/u-boot/u-boot/-/commit/cb2ddb291ee6fcbddd6d8f4ff49089dfe580f5d7
+ 	 * https://source.denx.de/u-boot/u-boot/-/commit/c64ac3b3185aeb3846297ad7391fc6df8ecd73bf
+ 	 * https://source.denx.de/u-boot/u-boot/-/commit/4a82fca8e330157081fc132a591ebd99ba02ee33
++	 * Bug related to requirement of same child and parent addresses for first range is fixed
++	 * in U-Boot version 2022.04 by following commit:
++	 * https://source.denx.de/u-boot/u-boot/-/commit/1fd54253bca7d43d046bba4853fe5fafd034bc17
+ 	 */
+ 	#address-cells = <3>;
+ 	#size-cells = <2>;
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index 2a2015a153627..0f4bcd15d8580 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -495,7 +495,7 @@
+ 			 * (totaling 127 MiB) for MEM.
+ 			 */
+ 			ranges = <0x82000000 0 0xe8000000   0 0xe8000000   0 0x07f00000   /* Port 0 MEM */
+-				  0x81000000 0 0xefff0000   0 0xefff0000   0 0x00010000>; /* Port 0 IO */
++				  0x81000000 0 0x00000000   0 0xefff0000   0 0x00010000>; /* Port 0 IO */
+ 			interrupt-map-mask = <0 0 0 7>;
+ 			interrupt-map = <0 0 0 1 &pcie_intc 0>,
+ 					<0 0 0 2 &pcie_intc 1>,
+diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
+index 104fba889cf76..c3310a68ac463 100644
+--- a/arch/riscv/kernel/module.c
++++ b/arch/riscv/kernel/module.c
+@@ -13,6 +13,19 @@
+ #include <linux/pgtable.h>
+ #include <asm/sections.h>
+ 
++/*
++ * The auipc+jalr instruction pair can reach any PC-relative offset
++ * in the range [-2^31 - 2^11, 2^31 - 2^11)
++ */
++static bool riscv_insn_valid_32bit_offset(ptrdiff_t val)
++{
++#ifdef CONFIG_32BIT
++	return true;
++#else
++	return (-(1L << 31) - (1L << 11)) <= val && val < ((1L << 31) - (1L << 11));
++#endif
++}
++
+ static int apply_r_riscv_32_rela(struct module *me, u32 *location, Elf_Addr v)
+ {
+ 	if (v != (u32)v) {
+@@ -95,7 +108,7 @@ static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
+ 	ptrdiff_t offset = (void *)v - (void *)location;
+ 	s32 hi20;
+ 
+-	if (offset != (s32)offset) {
++	if (!riscv_insn_valid_32bit_offset(offset)) {
+ 		pr_err(
+ 		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
+ 		  me->name, (long long)v, location);
+@@ -197,10 +210,9 @@ static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
+ 				       Elf_Addr v)
+ {
+ 	ptrdiff_t offset = (void *)v - (void *)location;
+-	s32 fill_v = offset;
+ 	u32 hi20, lo12;
+ 
+-	if (offset != fill_v) {
++	if (!riscv_insn_valid_32bit_offset(offset)) {
+ 		/* Only emit the plt entry if offset over 32-bit range */
+ 		if (IS_ENABLED(CONFIG_MODULE_SECTIONS)) {
+ 			offset = module_emit_plt_entry(me, v);
+@@ -224,10 +236,9 @@ static int apply_r_riscv_call_rela(struct module *me, u32 *location,
+ 				   Elf_Addr v)
+ {
+ 	ptrdiff_t offset = (void *)v - (void *)location;
+-	s32 fill_v = offset;
+ 	u32 hi20, lo12;
+ 
+-	if (offset != fill_v) {
++	if (!riscv_insn_valid_32bit_offset(offset)) {
+ 		pr_err(
+ 		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
+ 		  me->name, (long long)v, location);
+diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
+index 629c4994f1654..7f57110f958e1 100644
+--- a/arch/x86/kernel/e820.c
++++ b/arch/x86/kernel/e820.c
+@@ -995,8 +995,10 @@ early_param("memmap", parse_memmap_opt);
+  */
+ void __init e820__reserve_setup_data(void)
+ {
++	struct setup_indirect *indirect;
+ 	struct setup_data *data;
+-	u64 pa_data;
++	u64 pa_data, pa_next;
++	u32 len;
+ 
+ 	pa_data = boot_params.hdr.setup_data;
+ 	if (!pa_data)
+@@ -1004,6 +1006,14 @@ void __init e820__reserve_setup_data(void)
+ 
+ 	while (pa_data) {
+ 		data = early_memremap(pa_data, sizeof(*data));
++		if (!data) {
++			pr_warn("e820: failed to memremap setup_data entry\n");
++			return;
++		}
++
++		len = sizeof(*data);
++		pa_next = data->next;
++
+ 		e820__range_update(pa_data, sizeof(*data)+data->len, E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+ 
+ 		/*
+@@ -1015,18 +1025,27 @@ void __init e820__reserve_setup_data(void)
+ 						 sizeof(*data) + data->len,
+ 						 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+ 
+-		if (data->type == SETUP_INDIRECT &&
+-		    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
+-			e820__range_update(((struct setup_indirect *)data->data)->addr,
+-					   ((struct setup_indirect *)data->data)->len,
+-					   E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+-			e820__range_update_kexec(((struct setup_indirect *)data->data)->addr,
+-						 ((struct setup_indirect *)data->data)->len,
+-						 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
++		if (data->type == SETUP_INDIRECT) {
++			len += data->len;
++			early_memunmap(data, sizeof(*data));
++			data = early_memremap(pa_data, len);
++			if (!data) {
++				pr_warn("e820: failed to memremap indirect setup_data\n");
++				return;
++			}
++
++			indirect = (struct setup_indirect *)data->data;
++
++			if (indirect->type != SETUP_INDIRECT) {
++				e820__range_update(indirect->addr, indirect->len,
++						   E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
++				e820__range_update_kexec(indirect->addr, indirect->len,
++							 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
++			}
+ 		}
+ 
+-		pa_data = data->next;
+-		early_memunmap(data, sizeof(*data));
++		pa_data = pa_next;
++		early_memunmap(data, len);
+ 	}
+ 
+ 	e820__update_table(e820_table);
+diff --git a/arch/x86/kernel/kdebugfs.c b/arch/x86/kernel/kdebugfs.c
+index 64b6da95af984..e2e89bebcbc32 100644
+--- a/arch/x86/kernel/kdebugfs.c
++++ b/arch/x86/kernel/kdebugfs.c
+@@ -88,11 +88,13 @@ create_setup_data_node(struct dentry *parent, int no,
+ 
+ static int __init create_setup_data_nodes(struct dentry *parent)
+ {
++	struct setup_indirect *indirect;
+ 	struct setup_data_node *node;
+ 	struct setup_data *data;
+-	int error;
++	u64 pa_data, pa_next;
+ 	struct dentry *d;
+-	u64 pa_data;
++	int error;
++	u32 len;
+ 	int no = 0;
+ 
+ 	d = debugfs_create_dir("setup_data", parent);
+@@ -112,12 +114,29 @@ static int __init create_setup_data_nodes(struct dentry *parent)
+ 			error = -ENOMEM;
+ 			goto err_dir;
+ 		}
+-
+-		if (data->type == SETUP_INDIRECT &&
+-		    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
+-			node->paddr = ((struct setup_indirect *)data->data)->addr;
+-			node->type  = ((struct setup_indirect *)data->data)->type;
+-			node->len   = ((struct setup_indirect *)data->data)->len;
++		pa_next = data->next;
++
++		if (data->type == SETUP_INDIRECT) {
++			len = sizeof(*data) + data->len;
++			memunmap(data);
++			data = memremap(pa_data, len, MEMREMAP_WB);
++			if (!data) {
++				kfree(node);
++				error = -ENOMEM;
++				goto err_dir;
++			}
++
++			indirect = (struct setup_indirect *)data->data;
++
++			if (indirect->type != SETUP_INDIRECT) {
++				node->paddr = indirect->addr;
++				node->type  = indirect->type;
++				node->len   = indirect->len;
++			} else {
++				node->paddr = pa_data;
++				node->type  = data->type;
++				node->len   = data->len;
++			}
+ 		} else {
+ 			node->paddr = pa_data;
+ 			node->type  = data->type;
+@@ -125,7 +144,7 @@ static int __init create_setup_data_nodes(struct dentry *parent)
+ 		}
+ 
+ 		create_setup_data_node(d, no, node);
+-		pa_data = data->next;
++		pa_data = pa_next;
+ 
+ 		memunmap(data);
+ 		no++;
+diff --git a/arch/x86/kernel/ksysfs.c b/arch/x86/kernel/ksysfs.c
+index d0a19121c6a4f..257892fcefa79 100644
+--- a/arch/x86/kernel/ksysfs.c
++++ b/arch/x86/kernel/ksysfs.c
+@@ -91,26 +91,41 @@ static int get_setup_data_paddr(int nr, u64 *paddr)
+ 
+ static int __init get_setup_data_size(int nr, size_t *size)
+ {
+-	int i = 0;
++	u64 pa_data = boot_params.hdr.setup_data, pa_next;
++	struct setup_indirect *indirect;
+ 	struct setup_data *data;
+-	u64 pa_data = boot_params.hdr.setup_data;
++	int i = 0;
++	u32 len;
+ 
+ 	while (pa_data) {
+ 		data = memremap(pa_data, sizeof(*data), MEMREMAP_WB);
+ 		if (!data)
+ 			return -ENOMEM;
++		pa_next = data->next;
++
+ 		if (nr == i) {
+-			if (data->type == SETUP_INDIRECT &&
+-			    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT)
+-				*size = ((struct setup_indirect *)data->data)->len;
+-			else
++			if (data->type == SETUP_INDIRECT) {
++				len = sizeof(*data) + data->len;
++				memunmap(data);
++				data = memremap(pa_data, len, MEMREMAP_WB);
++				if (!data)
++					return -ENOMEM;
++
++				indirect = (struct setup_indirect *)data->data;
++
++				if (indirect->type != SETUP_INDIRECT)
++					*size = indirect->len;
++				else
++					*size = data->len;
++			} else {
+ 				*size = data->len;
++			}
+ 
+ 			memunmap(data);
+ 			return 0;
+ 		}
+ 
+-		pa_data = data->next;
++		pa_data = pa_next;
+ 		memunmap(data);
+ 		i++;
+ 	}
+@@ -120,9 +135,11 @@ static int __init get_setup_data_size(int nr, size_t *size)
+ static ssize_t type_show(struct kobject *kobj,
+ 			 struct kobj_attribute *attr, char *buf)
+ {
++	struct setup_indirect *indirect;
++	struct setup_data *data;
+ 	int nr, ret;
+ 	u64 paddr;
+-	struct setup_data *data;
++	u32 len;
+ 
+ 	ret = kobj_to_setup_data_nr(kobj, &nr);
+ 	if (ret)
+@@ -135,10 +152,20 @@ static ssize_t type_show(struct kobject *kobj,
+ 	if (!data)
+ 		return -ENOMEM;
+ 
+-	if (data->type == SETUP_INDIRECT)
+-		ret = sprintf(buf, "0x%x\n", ((struct setup_indirect *)data->data)->type);
+-	else
++	if (data->type == SETUP_INDIRECT) {
++		len = sizeof(*data) + data->len;
++		memunmap(data);
++		data = memremap(paddr, len, MEMREMAP_WB);
++		if (!data)
++			return -ENOMEM;
++
++		indirect = (struct setup_indirect *)data->data;
++
++		ret = sprintf(buf, "0x%x\n", indirect->type);
++	} else {
+ 		ret = sprintf(buf, "0x%x\n", data->type);
++	}
++
+ 	memunmap(data);
+ 	return ret;
+ }
+@@ -149,9 +176,10 @@ static ssize_t setup_data_data_read(struct file *fp,
+ 				    char *buf,
+ 				    loff_t off, size_t count)
+ {
++	struct setup_indirect *indirect;
++	struct setup_data *data;
+ 	int nr, ret = 0;
+ 	u64 paddr, len;
+-	struct setup_data *data;
+ 	void *p;
+ 
+ 	ret = kobj_to_setup_data_nr(kobj, &nr);
+@@ -165,10 +193,27 @@ static ssize_t setup_data_data_read(struct file *fp,
+ 	if (!data)
+ 		return -ENOMEM;
+ 
+-	if (data->type == SETUP_INDIRECT &&
+-	    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
+-		paddr = ((struct setup_indirect *)data->data)->addr;
+-		len = ((struct setup_indirect *)data->data)->len;
++	if (data->type == SETUP_INDIRECT) {
++		len = sizeof(*data) + data->len;
++		memunmap(data);
++		data = memremap(paddr, len, MEMREMAP_WB);
++		if (!data)
++			return -ENOMEM;
++
++		indirect = (struct setup_indirect *)data->data;
++
++		if (indirect->type != SETUP_INDIRECT) {
++			paddr = indirect->addr;
++			len = indirect->len;
++		} else {
++			/*
++			 * Even though this is technically undefined, return
++			 * the data as though it is a normal setup_data struct.
++			 * This will at least allow it to be inspected.
++			 */
++			paddr += sizeof(*data);
++			len = data->len;
++		}
+ 	} else {
+ 		paddr += sizeof(*data);
+ 		len = data->len;
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 28c89fce0dab8..065152d9265e4 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -371,21 +371,41 @@ static void __init parse_setup_data(void)
+ 
+ static void __init memblock_x86_reserve_range_setup_data(void)
+ {
++	struct setup_indirect *indirect;
+ 	struct setup_data *data;
+-	u64 pa_data;
++	u64 pa_data, pa_next;
++	u32 len;
+ 
+ 	pa_data = boot_params.hdr.setup_data;
+ 	while (pa_data) {
+ 		data = early_memremap(pa_data, sizeof(*data));
++		if (!data) {
++			pr_warn("setup: failed to memremap setup_data entry\n");
++			return;
++		}
++
++		len = sizeof(*data);
++		pa_next = data->next;
++
+ 		memblock_reserve(pa_data, sizeof(*data) + data->len);
+ 
+-		if (data->type == SETUP_INDIRECT &&
+-		    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT)
+-			memblock_reserve(((struct setup_indirect *)data->data)->addr,
+-					 ((struct setup_indirect *)data->data)->len);
++		if (data->type == SETUP_INDIRECT) {
++			len += data->len;
++			early_memunmap(data, sizeof(*data));
++			data = early_memremap(pa_data, len);
++			if (!data) {
++				pr_warn("setup: failed to memremap indirect setup_data\n");
++				return;
++			}
+ 
+-		pa_data = data->next;
+-		early_memunmap(data, sizeof(*data));
++			indirect = (struct setup_indirect *)data->data;
++
++			if (indirect->type != SETUP_INDIRECT)
++				memblock_reserve(indirect->addr, indirect->len);
++		}
++
++		pa_data = pa_next;
++		early_memunmap(data, len);
+ 	}
+ }
+ 
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 2d4ecd50e69b8..2a39a2df6f43e 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -651,6 +651,7 @@ static bool do_int3(struct pt_regs *regs)
+ 
+ 	return res == NOTIFY_STOP;
+ }
++NOKPROBE_SYMBOL(do_int3);
+ 
+ static void do_int3_user(struct pt_regs *regs)
+ {
+diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
+index 356b746dfbe7a..91e61dbba3e0c 100644
+--- a/arch/x86/mm/ioremap.c
++++ b/arch/x86/mm/ioremap.c
+@@ -633,6 +633,7 @@ static bool memremap_is_efi_data(resource_size_t phys_addr,
+ static bool memremap_is_setup_data(resource_size_t phys_addr,
+ 				   unsigned long size)
+ {
++	struct setup_indirect *indirect;
+ 	struct setup_data *data;
+ 	u64 paddr, paddr_next;
+ 
+@@ -645,6 +646,10 @@ static bool memremap_is_setup_data(resource_size_t phys_addr,
+ 
+ 		data = memremap(paddr, sizeof(*data),
+ 				MEMREMAP_WB | MEMREMAP_DEC);
++		if (!data) {
++			pr_warn("failed to memremap setup_data entry\n");
++			return false;
++		}
+ 
+ 		paddr_next = data->next;
+ 		len = data->len;
+@@ -654,10 +659,21 @@ static bool memremap_is_setup_data(resource_size_t phys_addr,
+ 			return true;
+ 		}
+ 
+-		if (data->type == SETUP_INDIRECT &&
+-		    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
+-			paddr = ((struct setup_indirect *)data->data)->addr;
+-			len = ((struct setup_indirect *)data->data)->len;
++		if (data->type == SETUP_INDIRECT) {
++			memunmap(data);
++			data = memremap(paddr, sizeof(*data) + len,
++					MEMREMAP_WB | MEMREMAP_DEC);
++			if (!data) {
++				pr_warn("failed to memremap indirect setup_data\n");
++				return false;
++			}
++
++			indirect = (struct setup_indirect *)data->data;
++
++			if (indirect->type != SETUP_INDIRECT) {
++				paddr = indirect->addr;
++				len = indirect->len;
++			}
+ 		}
+ 
+ 		memunmap(data);
+@@ -678,22 +694,51 @@ static bool memremap_is_setup_data(resource_size_t phys_addr,
+ static bool __init early_memremap_is_setup_data(resource_size_t phys_addr,
+ 						unsigned long size)
+ {
++	struct setup_indirect *indirect;
+ 	struct setup_data *data;
+ 	u64 paddr, paddr_next;
+ 
+ 	paddr = boot_params.hdr.setup_data;
+ 	while (paddr) {
+-		unsigned int len;
++		unsigned int len, size;
+ 
+ 		if (phys_addr == paddr)
+ 			return true;
+ 
+ 		data = early_memremap_decrypted(paddr, sizeof(*data));
++		if (!data) {
++			pr_warn("failed to early memremap setup_data entry\n");
++			return false;
++		}
++
++		size = sizeof(*data);
+ 
+ 		paddr_next = data->next;
+ 		len = data->len;
+ 
+-		early_memunmap(data, sizeof(*data));
++		if ((phys_addr > paddr) && (phys_addr < (paddr + len))) {
++			early_memunmap(data, sizeof(*data));
++			return true;
++		}
++
++		if (data->type == SETUP_INDIRECT) {
++			size += len;
++			early_memunmap(data, sizeof(*data));
++			data = early_memremap_decrypted(paddr, size);
++			if (!data) {
++				pr_warn("failed to early memremap indirect setup_data\n");
++				return false;
++			}
++
++			indirect = (struct setup_indirect *)data->data;
++
++			if (indirect->type != SETUP_INDIRECT) {
++				paddr = indirect->addr;
++				len = indirect->len;
++			}
++		}
++
++		early_memunmap(data, size);
+ 
+ 		if ((phys_addr > paddr) && (phys_addr < (paddr + len)))
+ 			return true;
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 42acf9587ef38..a03390127741f 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -869,9 +869,15 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 
+ 		virtio_cread(vdev, struct virtio_blk_config, max_discard_seg,
+ 			     &v);
++
++		/*
++		 * max_discard_seg == 0 is out of spec but we always
++		 * handled it.
++		 */
++		if (!v)
++			v = sg_elems - 2;
+ 		blk_queue_max_discard_segments(q,
+-					       min_not_zero(v,
+-							    MAX_DISCARD_SEGMENTS));
++					       min(v, MAX_DISCARD_SEGMENTS));
+ 
+ 		blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
+ 	}
+diff --git a/drivers/clk/qcom/gdsc.c b/drivers/clk/qcom/gdsc.c
+index 4ece326ea233e..cf23cfd7e4674 100644
+--- a/drivers/clk/qcom/gdsc.c
++++ b/drivers/clk/qcom/gdsc.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2015, 2017-2018, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2015, 2017-2018, 2022, The Linux Foundation. All rights reserved.
+  */
+ 
+ #include <linux/bitops.h>
+@@ -34,9 +34,14 @@
+ #define CFG_GDSCR_OFFSET		0x4
+ 
+ /* Wait 2^n CXO cycles between all states. Here, n=2 (4 cycles). */
+-#define EN_REST_WAIT_VAL	(0x2 << 20)
+-#define EN_FEW_WAIT_VAL		(0x8 << 16)
+-#define CLK_DIS_WAIT_VAL	(0x2 << 12)
++#define EN_REST_WAIT_VAL	0x2
++#define EN_FEW_WAIT_VAL		0x8
++#define CLK_DIS_WAIT_VAL	0x2
++
++/* Transition delay shifts */
++#define EN_REST_WAIT_SHIFT	20
++#define EN_FEW_WAIT_SHIFT	16
++#define CLK_DIS_WAIT_SHIFT	12
+ 
+ #define RETAIN_MEM		BIT(14)
+ #define RETAIN_PERIPH		BIT(13)
+@@ -341,7 +346,18 @@ static int gdsc_init(struct gdsc *sc)
+ 	 */
+ 	mask = HW_CONTROL_MASK | SW_OVERRIDE_MASK |
+ 	       EN_REST_WAIT_MASK | EN_FEW_WAIT_MASK | CLK_DIS_WAIT_MASK;
+-	val = EN_REST_WAIT_VAL | EN_FEW_WAIT_VAL | CLK_DIS_WAIT_VAL;
++
++	if (!sc->en_rest_wait_val)
++		sc->en_rest_wait_val = EN_REST_WAIT_VAL;
++	if (!sc->en_few_wait_val)
++		sc->en_few_wait_val = EN_FEW_WAIT_VAL;
++	if (!sc->clk_dis_wait_val)
++		sc->clk_dis_wait_val = CLK_DIS_WAIT_VAL;
++
++	val = sc->en_rest_wait_val << EN_REST_WAIT_SHIFT |
++		sc->en_few_wait_val << EN_FEW_WAIT_SHIFT |
++		sc->clk_dis_wait_val << CLK_DIS_WAIT_SHIFT;
++
+ 	ret = regmap_update_bits(sc->regmap, sc->gdscr, mask, val);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/clk/qcom/gdsc.h b/drivers/clk/qcom/gdsc.h
+index 5bb396b344d16..762f1b5e1ec51 100644
+--- a/drivers/clk/qcom/gdsc.h
++++ b/drivers/clk/qcom/gdsc.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0-only */
+ /*
+- * Copyright (c) 2015, 2017-2018, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2015, 2017-2018, 2022, The Linux Foundation. All rights reserved.
+  */
+ 
+ #ifndef __QCOM_GDSC_H__
+@@ -22,6 +22,9 @@ struct reset_controller_dev;
+  * @cxcs: offsets of branch registers to toggle mem/periph bits in
+  * @cxc_count: number of @cxcs
+  * @pwrsts: Possible powerdomain power states
++ * @en_rest_wait_val: transition delay value for receiving enr ack signal
++ * @en_few_wait_val: transition delay value for receiving enf ack signal
++ * @clk_dis_wait_val: transition delay value for halting clock
+  * @resets: ids of resets associated with this gdsc
+  * @reset_count: number of @resets
+  * @rcdev: reset controller
+@@ -35,6 +38,9 @@ struct gdsc {
+ 	unsigned int			clamp_io_ctrl;
+ 	unsigned int			*cxcs;
+ 	unsigned int			cxc_count;
++	unsigned int			en_rest_wait_val;
++	unsigned int			en_few_wait_val;
++	unsigned int			clk_dis_wait_val;
+ 	const u8			pwrsts;
+ /* Powerdomain allowable state bitfields */
+ #define PWRSTS_OFF		BIT(0)
+diff --git a/drivers/gpio/gpio-ts4900.c b/drivers/gpio/gpio-ts4900.c
+index d885032cf814d..d918d2df4de2c 100644
+--- a/drivers/gpio/gpio-ts4900.c
++++ b/drivers/gpio/gpio-ts4900.c
+@@ -1,7 +1,7 @@
+ /*
+  * Digital I/O driver for Technologic Systems I2C FPGA Core
+  *
+- * Copyright (C) 2015 Technologic Systems
++ * Copyright (C) 2015, 2018 Technologic Systems
+  * Copyright (C) 2016 Savoir-Faire Linux
+  *
+  * This program is free software; you can redistribute it and/or
+@@ -55,19 +55,33 @@ static int ts4900_gpio_direction_input(struct gpio_chip *chip,
+ {
+ 	struct ts4900_gpio_priv *priv = gpiochip_get_data(chip);
+ 
+-	/*
+-	 * This will clear the output enable bit, the other bits are
+-	 * dontcare when this is cleared
++	/* Only clear the OE bit here, requires a RMW. Prevents potential issue
++	 * with OE and data getting to the physical pin at different times.
+ 	 */
+-	return regmap_write(priv->regmap, offset, 0);
++	return regmap_update_bits(priv->regmap, offset, TS4900_GPIO_OE, 0);
+ }
+ 
+ static int ts4900_gpio_direction_output(struct gpio_chip *chip,
+ 					unsigned int offset, int value)
+ {
+ 	struct ts4900_gpio_priv *priv = gpiochip_get_data(chip);
++	unsigned int reg;
+ 	int ret;
+ 
++	/* If changing from an input to an output, we need to first set the
++	 * proper data bit to what is requested and then set OE bit. This
++	 * prevents a glitch that can occur on the IO line
++	 */
++	regmap_read(priv->regmap, offset, &reg);
++	if (!(reg & TS4900_GPIO_OE)) {
++		if (value)
++			reg = TS4900_GPIO_OUT;
++		else
++			reg &= ~TS4900_GPIO_OUT;
++
++		regmap_write(priv->regmap, offset, reg);
++	}
++
+ 	if (value)
+ 		ret = regmap_write(priv->regmap, offset, TS4900_GPIO_OE |
+ 							 TS4900_GPIO_OUT);
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index af5bb8fedfea7..00526fdd7691f 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -3215,6 +3215,16 @@ int gpiod_to_irq(const struct gpio_desc *desc)
+ 
+ 		return retirq;
+ 	}
++#ifdef CONFIG_GPIOLIB_IRQCHIP
++	if (gc->irq.chip) {
++		/*
++		 * Avoid race condition with other code, which tries to lookup
++		 * an IRQ before the irqchip has been properly registered,
++		 * i.e. while gpiochip is still being brought up.
++		 */
++		return -EPROBE_DEFER;
++	}
++#endif
+ 	return -ENXIO;
+ }
+ EXPORT_SYMBOL_GPL(gpiod_to_irq);
+diff --git a/drivers/gpu/drm/sun4i/sun8i_mixer.h b/drivers/gpu/drm/sun4i/sun8i_mixer.h
+index 7576b523fdbb1..b0178c045267c 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_mixer.h
++++ b/drivers/gpu/drm/sun4i/sun8i_mixer.h
+@@ -113,10 +113,10 @@
+ /* format 13 is semi-planar YUV411 VUVU */
+ #define SUN8I_MIXER_FBFMT_YUV411	14
+ /* format 15 doesn't exist */
+-/* format 16 is P010 YVU */
+-#define SUN8I_MIXER_FBFMT_P010_YUV	17
+-/* format 18 is P210 YVU */
+-#define SUN8I_MIXER_FBFMT_P210_YUV	19
++#define SUN8I_MIXER_FBFMT_P010_YUV	16
++/* format 17 is P010 YVU */
++#define SUN8I_MIXER_FBFMT_P210_YUV	18
++/* format 19 is P210 YVU */
+ /* format 20 is packed YVU444 10-bit */
+ /* format 21 is packed YUV444 10-bit */
+ 
+diff --git a/drivers/hid/hid-vivaldi.c b/drivers/hid/hid-vivaldi.c
+index 576518e704ee6..d57ec17670379 100644
+--- a/drivers/hid/hid-vivaldi.c
++++ b/drivers/hid/hid-vivaldi.c
+@@ -143,7 +143,7 @@ out:
+ static int vivaldi_input_configured(struct hid_device *hdev,
+ 				    struct hid_input *hidinput)
+ {
+-	return sysfs_create_group(&hdev->dev.kobj, &input_attribute_group);
++	return devm_device_add_group(&hdev->dev, &input_attribute_group);
+ }
+ 
+ static const struct hid_device_id vivaldi_table[] = {
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index b0e2820a2d578..71798fde2ef0c 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -898,6 +898,11 @@ static int pmbus_get_boolean(struct i2c_client *client, struct pmbus_boolean *b,
+ 		pmbus_update_sensor_data(client, s2);
+ 
+ 	regval = status & mask;
++	if (regval) {
++		ret = pmbus_write_byte_data(client, page, reg, regval);
++		if (ret)
++			goto unlock;
++	}
+ 	if (s1 && s2) {
+ 		s64 v1, v2;
+ 
+diff --git a/drivers/isdn/hardware/mISDN/hfcpci.c b/drivers/isdn/hardware/mISDN/hfcpci.c
+index bd087cca1c1d2..af17459c1a5c0 100644
+--- a/drivers/isdn/hardware/mISDN/hfcpci.c
++++ b/drivers/isdn/hardware/mISDN/hfcpci.c
+@@ -2005,7 +2005,11 @@ setup_hw(struct hfc_pci *hc)
+ 	}
+ 	/* Allocate memory for FIFOS */
+ 	/* the memory needs to be on a 32k boundary within the first 4G */
+-	dma_set_mask(&hc->pdev->dev, 0xFFFF8000);
++	if (dma_set_mask(&hc->pdev->dev, 0xFFFF8000)) {
++		printk(KERN_WARNING
++		       "HFC-PCI: No usable DMA configuration!\n");
++		return -EIO;
++	}
+ 	buffer = dma_alloc_coherent(&hc->pdev->dev, 0x8000, &hc->hw.dmahandle,
+ 				    GFP_KERNEL);
+ 	/* We silently assume the address is okay if nonzero */
+diff --git a/drivers/isdn/mISDN/dsp_pipeline.c b/drivers/isdn/mISDN/dsp_pipeline.c
+index 40588692cec74..c3b2c99b5cd5c 100644
+--- a/drivers/isdn/mISDN/dsp_pipeline.c
++++ b/drivers/isdn/mISDN/dsp_pipeline.c
+@@ -17,9 +17,6 @@
+ #include "dsp.h"
+ #include "dsp_hwec.h"
+ 
+-/* uncomment for debugging */
+-/*#define PIPELINE_DEBUG*/
+-
+ struct dsp_pipeline_entry {
+ 	struct mISDN_dsp_element *elem;
+ 	void                *p;
+@@ -104,10 +101,6 @@ int mISDN_dsp_element_register(struct mISDN_dsp_element *elem)
+ 		}
+ 	}
+ 
+-#ifdef PIPELINE_DEBUG
+-	printk(KERN_DEBUG "%s: %s registered\n", __func__, elem->name);
+-#endif
+-
+ 	return 0;
+ 
+ err2:
+@@ -129,10 +122,6 @@ void mISDN_dsp_element_unregister(struct mISDN_dsp_element *elem)
+ 	list_for_each_entry_safe(entry, n, &dsp_elements, list)
+ 		if (entry->elem == elem) {
+ 			device_unregister(&entry->dev);
+-#ifdef PIPELINE_DEBUG
+-			printk(KERN_DEBUG "%s: %s unregistered\n",
+-			       __func__, elem->name);
+-#endif
+ 			return;
+ 		}
+ 	printk(KERN_ERR "%s: element %s not in list.\n", __func__, elem->name);
+@@ -145,10 +134,6 @@ int dsp_pipeline_module_init(void)
+ 	if (IS_ERR(elements_class))
+ 		return PTR_ERR(elements_class);
+ 
+-#ifdef PIPELINE_DEBUG
+-	printk(KERN_DEBUG "%s: dsp pipeline module initialized\n", __func__);
+-#endif
+-
+ 	dsp_hwec_init();
+ 
+ 	return 0;
+@@ -168,10 +153,6 @@ void dsp_pipeline_module_exit(void)
+ 		       __func__, entry->elem->name);
+ 		kfree(entry);
+ 	}
+-
+-#ifdef PIPELINE_DEBUG
+-	printk(KERN_DEBUG "%s: dsp pipeline module exited\n", __func__);
+-#endif
+ }
+ 
+ int dsp_pipeline_init(struct dsp_pipeline *pipeline)
+@@ -181,10 +162,6 @@ int dsp_pipeline_init(struct dsp_pipeline *pipeline)
+ 
+ 	INIT_LIST_HEAD(&pipeline->list);
+ 
+-#ifdef PIPELINE_DEBUG
+-	printk(KERN_DEBUG "%s: dsp pipeline ready\n", __func__);
+-#endif
+-
+ 	return 0;
+ }
+ 
+@@ -210,16 +187,12 @@ void dsp_pipeline_destroy(struct dsp_pipeline *pipeline)
+ 		return;
+ 
+ 	_dsp_pipeline_destroy(pipeline);
+-
+-#ifdef PIPELINE_DEBUG
+-	printk(KERN_DEBUG "%s: dsp pipeline destroyed\n", __func__);
+-#endif
+ }
+ 
+ int dsp_pipeline_build(struct dsp_pipeline *pipeline, const char *cfg)
+ {
+-	int incomplete = 0, found = 0;
+-	char *dup, *tok, *name, *args;
++	int found = 0;
++	char *dup, *next, *tok, *name, *args;
+ 	struct dsp_element_entry *entry, *n;
+ 	struct dsp_pipeline_entry *pipeline_entry;
+ 	struct mISDN_dsp_element *elem;
+@@ -230,10 +203,10 @@ int dsp_pipeline_build(struct dsp_pipeline *pipeline, const char *cfg)
+ 	if (!list_empty(&pipeline->list))
+ 		_dsp_pipeline_destroy(pipeline);
+ 
+-	dup = kstrdup(cfg, GFP_ATOMIC);
++	dup = next = kstrdup(cfg, GFP_ATOMIC);
+ 	if (!dup)
+ 		return 0;
+-	while ((tok = strsep(&dup, "|"))) {
++	while ((tok = strsep(&next, "|"))) {
+ 		if (!strlen(tok))
+ 			continue;
+ 		name = strsep(&tok, "(");
+@@ -251,7 +224,6 @@ int dsp_pipeline_build(struct dsp_pipeline *pipeline, const char *cfg)
+ 					printk(KERN_ERR "%s: failed to add "
+ 					       "entry to pipeline: %s (out of "
+ 					       "memory)\n", __func__, elem->name);
+-					incomplete = 1;
+ 					goto _out;
+ 				}
+ 				pipeline_entry->elem = elem;
+@@ -268,20 +240,12 @@ int dsp_pipeline_build(struct dsp_pipeline *pipeline, const char *cfg)
+ 					if (pipeline_entry->p) {
+ 						list_add_tail(&pipeline_entry->
+ 							      list, &pipeline->list);
+-#ifdef PIPELINE_DEBUG
+-						printk(KERN_DEBUG "%s: created "
+-						       "instance of %s%s%s\n",
+-						       __func__, name, args ?
+-						       " with args " : "", args ?
+-						       args : "");
+-#endif
+ 					} else {
+ 						printk(KERN_ERR "%s: failed "
+ 						       "to add entry to pipeline: "
+ 						       "%s (new() returned NULL)\n",
+ 						       __func__, elem->name);
+ 						kfree(pipeline_entry);
+-						incomplete = 1;
+ 					}
+ 				}
+ 				found = 1;
+@@ -290,11 +254,9 @@ int dsp_pipeline_build(struct dsp_pipeline *pipeline, const char *cfg)
+ 
+ 		if (found)
+ 			found = 0;
+-		else {
++		else
+ 			printk(KERN_ERR "%s: element not found, skipping: "
+ 			       "%s\n", __func__, name);
+-			incomplete = 1;
+-		}
+ 	}
+ 
+ _out:
+@@ -303,10 +265,6 @@ _out:
+ 	else
+ 		pipeline->inuse = 0;
+ 
+-#ifdef PIPELINE_DEBUG
+-	printk(KERN_DEBUG "%s: dsp pipeline built%s: %s\n",
+-	       __func__, incomplete ? " incomplete" : "", cfg);
+-#endif
+ 	kfree(dup);
+ 	return 0;
+ }
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index b274083a6e635..091e0e051d109 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -173,6 +173,8 @@ struct meson_host {
+ 	int irq;
+ 
+ 	bool vqmmc_enabled;
++	bool needs_pre_post_req;
++
+ };
+ 
+ #define CMD_CFG_LENGTH_MASK GENMASK(8, 0)
+@@ -652,6 +654,8 @@ static void meson_mmc_request_done(struct mmc_host *mmc,
+ 	struct meson_host *host = mmc_priv(mmc);
+ 
+ 	host->cmd = NULL;
++	if (host->needs_pre_post_req)
++		meson_mmc_post_req(mmc, mrq, 0);
+ 	mmc_request_done(host->mmc, mrq);
+ }
+ 
+@@ -869,7 +873,7 @@ static int meson_mmc_validate_dram_access(struct mmc_host *mmc, struct mmc_data
+ static void meson_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ {
+ 	struct meson_host *host = mmc_priv(mmc);
+-	bool needs_pre_post_req = mrq->data &&
++	host->needs_pre_post_req = mrq->data &&
+ 			!(mrq->data->host_cookie & SD_EMMC_PRE_REQ_DONE);
+ 
+ 	/*
+@@ -885,22 +889,19 @@ static void meson_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ 		}
+ 	}
+ 
+-	if (needs_pre_post_req) {
++	if (host->needs_pre_post_req) {
+ 		meson_mmc_get_transfer_mode(mmc, mrq);
+ 		if (!meson_mmc_desc_chain_mode(mrq->data))
+-			needs_pre_post_req = false;
++			host->needs_pre_post_req = false;
+ 	}
+ 
+-	if (needs_pre_post_req)
++	if (host->needs_pre_post_req)
+ 		meson_mmc_pre_req(mmc, mrq);
+ 
+ 	/* Stop execution */
+ 	writel(0, host->regs + SD_EMMC_START);
+ 
+ 	meson_mmc_start_cmd(mmc, mrq->sbc ?: mrq->cmd);
+-
+-	if (needs_pre_post_req)
+-		meson_mmc_post_req(mmc, mrq, 0);
+ }
+ 
+ static void meson_mmc_read_resp(struct mmc_host *mmc, struct mmc_command *cmd)
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 1f642fdbf214c..5ee8809bc2711 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -2342,7 +2342,7 @@ mt753x_phylink_validate(struct dsa_switch *ds, int port,
+ 
+ 	phylink_set_port_modes(mask);
+ 
+-	if (state->interface != PHY_INTERFACE_MODE_TRGMII ||
++	if (state->interface != PHY_INTERFACE_MODE_TRGMII &&
+ 	    !phy_interface_mode_is_8023z(state->interface)) {
+ 		phylink_set(mask, 10baseT_Half);
+ 		phylink_set(mask, 10baseT_Full);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+index e84ad587fb214..2c2a56d5a0a1a 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+@@ -41,6 +41,13 @@
+ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
++	struct device *kdev = &priv->pdev->dev;
++
++	if (!device_can_wakeup(kdev)) {
++		wol->supported = 0;
++		wol->wolopts = 0;
++		return;
++	}
+ 
+ 	wol->supported = WAKE_MAGIC | WAKE_MAGICSECURE | WAKE_FILTER;
+ 	wol->wolopts = priv->wolopts;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 2af464ac250ac..f29ec765d684a 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1448,7 +1448,14 @@ static int macb_poll(struct napi_struct *napi, int budget)
+ 	if (work_done < budget) {
+ 		napi_complete_done(napi, work_done);
+ 
+-		/* Packets received while interrupts were disabled */
++		/* RSR bits only seem to propagate to raise interrupts when
++		 * interrupts are enabled at the time, so if bits are already
++		 * set due to packets received while interrupts were disabled,
++		 * they will not cause another interrupt to be generated when
++		 * interrupts are re-enabled.
++		 * Check for this case here. This has been seen to happen
++		 * around 30% of the time under heavy network load.
++		 */
+ 		status = macb_readl(bp, RSR);
+ 		if (status) {
+ 			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+@@ -1456,6 +1463,22 @@ static int macb_poll(struct napi_struct *napi, int budget)
+ 			napi_reschedule(napi);
+ 		} else {
+ 			queue_writel(queue, IER, bp->rx_intr_mask);
++
++			/* In rare cases, packets could have been received in
++			 * the window between the check above and re-enabling
++			 * interrupts. Therefore, a double-check is required
++			 * to avoid losing a wakeup. This can potentially race
++			 * with the interrupt handler doing the same actions
++			 * if an interrupt is raised just after enabling them,
++			 * but this should be harmless.
++			 */
++			status = macb_readl(bp, RSR);
++			if (unlikely(status)) {
++				queue_writel(queue, IDR, bp->rx_intr_mask);
++				if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
++					queue_writel(queue, ISR, MACB_BIT(RCOMP));
++				napi_schedule(napi);
++			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/ethernet/freescale/gianfar_ethtool.c b/drivers/net/ethernet/freescale/gianfar_ethtool.c
+index cc7d4f93da540..799a1486f586d 100644
+--- a/drivers/net/ethernet/freescale/gianfar_ethtool.c
++++ b/drivers/net/ethernet/freescale/gianfar_ethtool.c
+@@ -1456,6 +1456,7 @@ static int gfar_get_ts_info(struct net_device *dev,
+ 	ptp_node = of_find_compatible_node(NULL, NULL, "fsl,etsec-ptp");
+ 	if (ptp_node) {
+ 		ptp_dev = of_find_device_by_node(ptp_node);
++		of_node_put(ptp_node);
+ 		if (ptp_dev)
+ 			ptp = platform_get_drvdata(ptp_dev);
+ 	}
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+index 1114a15a9ce3c..989d5c7263d7c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+@@ -742,10 +742,8 @@ static void i40e_dbg_dump_vf(struct i40e_pf *pf, int vf_id)
+ 		vsi = pf->vsi[vf->lan_vsi_idx];
+ 		dev_info(&pf->pdev->dev, "vf %2d: VSI id=%d, seid=%d, qps=%d\n",
+ 			 vf_id, vf->lan_vsi_id, vsi->seid, vf->num_queue_pairs);
+-		dev_info(&pf->pdev->dev, "       num MDD=%lld, invalid msg=%lld, valid msg=%lld\n",
+-			 vf->num_mdd_events,
+-			 vf->num_invalid_msgs,
+-			 vf->num_valid_msgs);
++		dev_info(&pf->pdev->dev, "       num MDD=%lld\n",
++			 vf->num_mdd_events);
+ 	} else {
+ 		dev_info(&pf->pdev->dev, "invalid VF id %d\n", vf_id);
+ 	}
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index f71b7334e2955..9181e007e0392 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1864,19 +1864,17 @@ sriov_configure_out:
+ /***********************virtual channel routines******************/
+ 
+ /**
+- * i40e_vc_send_msg_to_vf_ex
++ * i40e_vc_send_msg_to_vf
+  * @vf: pointer to the VF info
+  * @v_opcode: virtual channel opcode
+  * @v_retval: virtual channel return value
+  * @msg: pointer to the msg buffer
+  * @msglen: msg length
+- * @is_quiet: true for not printing unsuccessful return values, false otherwise
+  *
+  * send msg to VF
+  **/
+-static int i40e_vc_send_msg_to_vf_ex(struct i40e_vf *vf, u32 v_opcode,
+-				     u32 v_retval, u8 *msg, u16 msglen,
+-				     bool is_quiet)
++static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
++				  u32 v_retval, u8 *msg, u16 msglen)
+ {
+ 	struct i40e_pf *pf;
+ 	struct i40e_hw *hw;
+@@ -1891,25 +1889,6 @@ static int i40e_vc_send_msg_to_vf_ex(struct i40e_vf *vf, u32 v_opcode,
+ 	hw = &pf->hw;
+ 	abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+ 
+-	/* single place to detect unsuccessful return values */
+-	if (v_retval && !is_quiet) {
+-		vf->num_invalid_msgs++;
+-		dev_info(&pf->pdev->dev, "VF %d failed opcode %d, retval: %d\n",
+-			 vf->vf_id, v_opcode, v_retval);
+-		if (vf->num_invalid_msgs >
+-		    I40E_DEFAULT_NUM_INVALID_MSGS_ALLOWED) {
+-			dev_err(&pf->pdev->dev,
+-				"Number of invalid messages exceeded for VF %d\n",
+-				vf->vf_id);
+-			dev_err(&pf->pdev->dev, "Use PF Control I/F to enable the VF\n");
+-			set_bit(I40E_VF_STATE_DISABLED, &vf->vf_states);
+-		}
+-	} else {
+-		vf->num_valid_msgs++;
+-		/* reset the invalid counter, if a valid message is received. */
+-		vf->num_invalid_msgs = 0;
+-	}
+-
+ 	aq_ret = i40e_aq_send_msg_to_vf(hw, abs_vf_id,	v_opcode, v_retval,
+ 					msg, msglen, NULL);
+ 	if (aq_ret) {
+@@ -1922,23 +1901,6 @@ static int i40e_vc_send_msg_to_vf_ex(struct i40e_vf *vf, u32 v_opcode,
+ 	return 0;
+ }
+ 
+-/**
+- * i40e_vc_send_msg_to_vf
+- * @vf: pointer to the VF info
+- * @v_opcode: virtual channel opcode
+- * @v_retval: virtual channel return value
+- * @msg: pointer to the msg buffer
+- * @msglen: msg length
+- *
+- * send msg to VF
+- **/
+-static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
+-				  u32 v_retval, u8 *msg, u16 msglen)
+-{
+-	return i40e_vc_send_msg_to_vf_ex(vf, v_opcode, v_retval,
+-					 msg, msglen, false);
+-}
+-
+ /**
+  * i40e_vc_send_resp_to_vf
+  * @vf: pointer to the VF info
+@@ -2759,7 +2721,6 @@ error_param:
+  * i40e_check_vf_permission
+  * @vf: pointer to the VF info
+  * @al: MAC address list from virtchnl
+- * @is_quiet: set true for printing msg without opcode info, false otherwise
+  *
+  * Check that the given list of MAC addresses is allowed. Will return -EPERM
+  * if any address in the list is not valid. Checks the following conditions:
+@@ -2774,15 +2735,13 @@ error_param:
+  * addresses might not be accurate.
+  **/
+ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+-					   struct virtchnl_ether_addr_list *al,
+-					   bool *is_quiet)
++					   struct virtchnl_ether_addr_list *al)
+ {
+ 	struct i40e_pf *pf = vf->pf;
+ 	struct i40e_vsi *vsi = pf->vsi[vf->lan_vsi_idx];
+ 	int mac2add_cnt = 0;
+ 	int i;
+ 
+-	*is_quiet = false;
+ 	for (i = 0; i < al->num_elements; i++) {
+ 		struct i40e_mac_filter *f;
+ 		u8 *addr = al->list[i].addr;
+@@ -2806,7 +2765,6 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 		    !ether_addr_equal(addr, vf->default_lan_addr.addr)) {
+ 			dev_err(&pf->pdev->dev,
+ 				"VF attempting to override administratively set MAC address, bring down and up the VF interface to resume normal operation\n");
+-			*is_quiet = true;
+ 			return -EPERM;
+ 		}
+ 
+@@ -2843,7 +2801,6 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 	    (struct virtchnl_ether_addr_list *)msg;
+ 	struct i40e_pf *pf = vf->pf;
+ 	struct i40e_vsi *vsi = NULL;
+-	bool is_quiet = false;
+ 	i40e_status ret = 0;
+ 	int i;
+ 
+@@ -2860,7 +2817,7 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 	 */
+ 	spin_lock_bh(&vsi->mac_filter_hash_lock);
+ 
+-	ret = i40e_check_vf_permission(vf, al, &is_quiet);
++	ret = i40e_check_vf_permission(vf, al);
+ 	if (ret) {
+ 		spin_unlock_bh(&vsi->mac_filter_hash_lock);
+ 		goto error_param;
+@@ -2898,8 +2855,8 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 
+ error_param:
+ 	/* send the response to the VF */
+-	return i40e_vc_send_msg_to_vf_ex(vf, VIRTCHNL_OP_ADD_ETH_ADDR,
+-				       ret, NULL, 0, is_quiet);
++	return i40e_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ADD_ETH_ADDR,
++				      ret, NULL, 0);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index 03c42fd0fea19..a554d0a0b09bd 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -10,8 +10,6 @@
+ 
+ #define I40E_VIRTCHNL_SUPPORTED_QTYPES 2
+ 
+-#define I40E_DEFAULT_NUM_INVALID_MSGS_ALLOWED	10
+-
+ #define I40E_VLAN_PRIORITY_SHIFT	13
+ #define I40E_VLAN_MASK			0xFFF
+ #define I40E_PRIORITY_MASK		0xE000
+@@ -92,9 +90,6 @@ struct i40e_vf {
+ 	u8 num_queue_pairs;	/* num of qps assigned to VF vsis */
+ 	u8 num_req_queues;	/* num of requested qps */
+ 	u64 num_mdd_events;	/* num of mdd events detected */
+-	/* num of continuous malformed or invalid msgs detected */
+-	u64 num_invalid_msgs;
+-	u64 num_valid_msgs;	/* num of valid msgs detected */
+ 
+ 	unsigned long vf_caps;	/* vf's adv. capabilities */
+ 	unsigned long vf_states;	/* vf's runtime states */
+diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+index b06fbe99d8e93..b6dd8f81d6997 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+@@ -870,11 +870,11 @@ struct ice_aqc_get_phy_caps {
+ 	 * 01b - Report topology capabilities
+ 	 * 10b - Report SW configured
+ 	 */
+-#define ICE_AQC_REPORT_MODE_S		1
+-#define ICE_AQC_REPORT_MODE_M		(3 << ICE_AQC_REPORT_MODE_S)
+-#define ICE_AQC_REPORT_NVM_CAP		0
+-#define ICE_AQC_REPORT_TOPO_CAP		BIT(1)
+-#define ICE_AQC_REPORT_SW_CFG		BIT(2)
++#define ICE_AQC_REPORT_MODE_S			1
++#define ICE_AQC_REPORT_MODE_M			(3 << ICE_AQC_REPORT_MODE_S)
++#define ICE_AQC_REPORT_TOPO_CAP_NO_MEDIA	0
++#define ICE_AQC_REPORT_TOPO_CAP_MEDIA		BIT(1)
++#define ICE_AQC_REPORT_ACTIVE_CFG		BIT(2)
+ 	__le32 reserved1;
+ 	__le32 addr_high;
+ 	__le32 addr_low;
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 2b0d0373ab2c6..ecdc467c4f6f5 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -193,7 +193,7 @@ ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+ 	ice_debug(hw, ICE_DBG_LINK, "   module_type[2] = 0x%x\n",
+ 		  pcaps->module_type[2]);
+ 
+-	if (!status && report_mode == ICE_AQC_REPORT_TOPO_CAP) {
++	if (!status && report_mode == ICE_AQC_REPORT_TOPO_CAP_MEDIA) {
+ 		pi->phy.phy_type_low = le64_to_cpu(pcaps->phy_type_low);
+ 		pi->phy.phy_type_high = le64_to_cpu(pcaps->phy_type_high);
+ 		memcpy(pi->phy.link_info.module_type, &pcaps->module_type,
+@@ -924,7 +924,8 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
+ 
+ 	/* Initialize port_info struct with PHY capabilities */
+ 	status = ice_aq_get_phy_caps(hw->port_info, false,
+-				     ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
++				     ICE_AQC_REPORT_TOPO_CAP_MEDIA, pcaps,
++				     NULL);
+ 	devm_kfree(ice_hw_to_dev(hw), pcaps);
+ 	if (status)
+ 		goto err_unroll_sched;
+@@ -2682,7 +2683,7 @@ enum ice_status ice_update_link_info(struct ice_port_info *pi)
+ 		if (!pcaps)
+ 			return ICE_ERR_NO_MEMORY;
+ 
+-		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP,
++		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA,
+ 					     pcaps, NULL);
+ 
+ 		devm_kfree(ice_hw_to_dev(hw), pcaps);
+@@ -2842,8 +2843,8 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
+ 		return ICE_ERR_NO_MEMORY;
+ 
+ 	/* Get the current PHY config */
+-	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
+-				     NULL);
++	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_ACTIVE_CFG,
++				     pcaps, NULL);
+ 	if (status) {
+ 		*aq_failures = ICE_SET_FC_AQ_FAIL_GET;
+ 		goto out;
+@@ -2989,7 +2990,7 @@ ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
+ 	if (!pcaps)
+ 		return ICE_ERR_NO_MEMORY;
+ 
+-	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP, pcaps,
++	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA, pcaps,
+ 				     NULL);
+ 	if (status)
+ 		goto out;
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 14eba9bc174d8..421fc707f80af 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -1081,7 +1081,7 @@ ice_get_fecparam(struct net_device *netdev, struct ethtool_fecparam *fecparam)
+ 	if (!caps)
+ 		return -ENOMEM;
+ 
+-	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP,
++	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA,
+ 				     caps, NULL);
+ 	if (status) {
+ 		err = -EAGAIN;
+@@ -1976,7 +1976,7 @@ ice_get_link_ksettings(struct net_device *netdev,
+ 		return -ENOMEM;
+ 
+ 	status = ice_aq_get_phy_caps(vsi->port_info, false,
+-				     ICE_AQC_REPORT_SW_CFG, caps, NULL);
++				     ICE_AQC_REPORT_ACTIVE_CFG, caps, NULL);
+ 	if (status) {
+ 		err = -EIO;
+ 		goto done;
+@@ -2013,7 +2013,7 @@ ice_get_link_ksettings(struct net_device *netdev,
+ 		ethtool_link_ksettings_add_link_mode(ks, advertising, FEC_RS);
+ 
+ 	status = ice_aq_get_phy_caps(vsi->port_info, false,
+-				     ICE_AQC_REPORT_TOPO_CAP, caps, NULL);
++				     ICE_AQC_REPORT_TOPO_CAP_MEDIA, caps, NULL);
+ 	if (status) {
+ 		err = -EIO;
+ 		goto done;
+@@ -2187,12 +2187,12 @@ ice_set_link_ksettings(struct net_device *netdev,
+ {
+ 	struct ice_netdev_priv *np = netdev_priv(netdev);
+ 	struct ethtool_link_ksettings safe_ks, copy_ks;
+-	struct ice_aqc_get_phy_caps_data *abilities;
+ 	u8 autoneg, timeout = TEST_SET_BITS_TIMEOUT;
+-	u16 adv_link_speed, curr_link_speed, idx;
++	struct ice_aqc_get_phy_caps_data *phy_caps;
+ 	struct ice_aqc_set_phy_cfg_data config;
++	u16 adv_link_speed, curr_link_speed;
+ 	struct ice_pf *pf = np->vsi->back;
+-	struct ice_port_info *p;
++	struct ice_port_info *pi;
+ 	u8 autoneg_changed = 0;
+ 	enum ice_status status;
+ 	u64 phy_type_high = 0;
+@@ -2200,33 +2200,25 @@ ice_set_link_ksettings(struct net_device *netdev,
+ 	int err = 0;
+ 	bool linkup;
+ 
+-	p = np->vsi->port_info;
++	pi = np->vsi->port_info;
+ 
+-	if (!p)
++	if (!pi)
+ 		return -EOPNOTSUPP;
+ 
+-	/* Check if this is LAN VSI */
+-	ice_for_each_vsi(pf, idx)
+-		if (pf->vsi[idx]->type == ICE_VSI_PF) {
+-			if (np->vsi != pf->vsi[idx])
+-				return -EOPNOTSUPP;
+-			break;
+-		}
+-
+-	if (p->phy.media_type != ICE_MEDIA_BASET &&
+-	    p->phy.media_type != ICE_MEDIA_FIBER &&
+-	    p->phy.media_type != ICE_MEDIA_BACKPLANE &&
+-	    p->phy.media_type != ICE_MEDIA_DA &&
+-	    p->phy.link_info.link_info & ICE_AQ_LINK_UP)
++	if (pi->phy.media_type != ICE_MEDIA_BASET &&
++	    pi->phy.media_type != ICE_MEDIA_FIBER &&
++	    pi->phy.media_type != ICE_MEDIA_BACKPLANE &&
++	    pi->phy.media_type != ICE_MEDIA_DA &&
++	    pi->phy.link_info.link_info & ICE_AQ_LINK_UP)
+ 		return -EOPNOTSUPP;
+ 
+-	abilities = kzalloc(sizeof(*abilities), GFP_KERNEL);
+-	if (!abilities)
++	phy_caps = kzalloc(sizeof(*phy_caps), GFP_KERNEL);
++	if (!phy_caps)
+ 		return -ENOMEM;
+ 
+ 	/* Get the PHY capabilities based on media */
+-	status = ice_aq_get_phy_caps(p, false, ICE_AQC_REPORT_TOPO_CAP,
+-				     abilities, NULL);
++	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA,
++				     phy_caps, NULL);
+ 	if (status) {
+ 		err = -EAGAIN;
+ 		goto done;
+@@ -2288,26 +2280,26 @@ ice_set_link_ksettings(struct net_device *netdev,
+ 	 * configuration is initialized during probe from PHY capabilities
+ 	 * software mode, and updated on set PHY configuration.
+ 	 */
+-	memcpy(&config, &p->phy.curr_user_phy_cfg, sizeof(config));
++	memcpy(&config, &pi->phy.curr_user_phy_cfg, sizeof(config));
+ 
+ 	config.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
+ 
+ 	/* Check autoneg */
+-	err = ice_setup_autoneg(p, &safe_ks, &config, autoneg, &autoneg_changed,
++	err = ice_setup_autoneg(pi, &safe_ks, &config, autoneg, &autoneg_changed,
+ 				netdev);
+ 
+ 	if (err)
+ 		goto done;
+ 
+ 	/* Call to get the current link speed */
+-	p->phy.get_link_info = true;
+-	status = ice_get_link_status(p, &linkup);
++	pi->phy.get_link_info = true;
++	status = ice_get_link_status(pi, &linkup);
+ 	if (status) {
+ 		err = -EAGAIN;
+ 		goto done;
+ 	}
+ 
+-	curr_link_speed = p->phy.link_info.link_speed;
++	curr_link_speed = pi->phy.curr_user_speed_req;
+ 	adv_link_speed = ice_ksettings_find_adv_link_speed(ks);
+ 
+ 	/* If speed didn't get set, set it to what it currently is.
+@@ -2326,7 +2318,7 @@ ice_set_link_ksettings(struct net_device *netdev,
+ 	}
+ 
+ 	/* save the requested speeds */
+-	p->phy.link_info.req_speeds = adv_link_speed;
++	pi->phy.link_info.req_speeds = adv_link_speed;
+ 
+ 	/* set link and auto negotiation so changes take effect */
+ 	config.caps |= ICE_AQ_PHY_ENA_LINK;
+@@ -2342,9 +2334,9 @@ ice_set_link_ksettings(struct net_device *netdev,
+ 	 * for set PHY configuration
+ 	 */
+ 	config.phy_type_high = cpu_to_le64(phy_type_high) &
+-			abilities->phy_type_high;
++			phy_caps->phy_type_high;
+ 	config.phy_type_low = cpu_to_le64(phy_type_low) &
+-			abilities->phy_type_low;
++			phy_caps->phy_type_low;
+ 
+ 	if (!(config.phy_type_high || config.phy_type_low)) {
+ 		/* If there is no intersection and lenient mode is enabled, then
+@@ -2364,7 +2356,7 @@ ice_set_link_ksettings(struct net_device *netdev,
+ 	}
+ 
+ 	/* If link is up put link down */
+-	if (p->phy.link_info.link_info & ICE_AQ_LINK_UP) {
++	if (pi->phy.link_info.link_info & ICE_AQ_LINK_UP) {
+ 		/* Tell the OS link is going down, the link will go
+ 		 * back up when fw says it is ready asynchronously
+ 		 */
+@@ -2374,7 +2366,7 @@ ice_set_link_ksettings(struct net_device *netdev,
+ 	}
+ 
+ 	/* make the aq call */
+-	status = ice_aq_set_phy_cfg(&pf->hw, p, &config, NULL);
++	status = ice_aq_set_phy_cfg(&pf->hw, pi, &config, NULL);
+ 	if (status) {
+ 		netdev_info(netdev, "Set phy config failed,\n");
+ 		err = -EAGAIN;
+@@ -2382,9 +2374,9 @@ ice_set_link_ksettings(struct net_device *netdev,
+ 	}
+ 
+ 	/* Save speed request */
+-	p->phy.curr_user_speed_req = adv_link_speed;
++	pi->phy.curr_user_speed_req = adv_link_speed;
+ done:
+-	kfree(abilities);
++	kfree(phy_caps);
+ 	clear_bit(__ICE_CFG_BUSY, pf->state);
+ 
+ 	return err;
+@@ -2954,7 +2946,7 @@ ice_get_pauseparam(struct net_device *netdev, struct ethtool_pauseparam *pause)
+ 		return;
+ 
+ 	/* Get current PHY config */
+-	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
++	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_ACTIVE_CFG, pcaps,
+ 				     NULL);
+ 	if (status)
+ 		goto out;
+@@ -3021,7 +3013,7 @@ ice_set_pauseparam(struct net_device *netdev, struct ethtool_pauseparam *pause)
+ 		return -ENOMEM;
+ 
+ 	/* Get current PHY config */
+-	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
++	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_ACTIVE_CFG, pcaps,
+ 				     NULL);
+ 	if (status) {
+ 		kfree(pcaps);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 6c75df216fa7a..20c9d55f3adce 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -726,7 +726,7 @@ void ice_print_link_msg(struct ice_vsi *vsi, bool isup)
+ 	}
+ 
+ 	status = ice_aq_get_phy_caps(vsi->port_info, false,
+-				     ICE_AQC_REPORT_SW_CFG, caps, NULL);
++				     ICE_AQC_REPORT_ACTIVE_CFG, caps, NULL);
+ 	if (status)
+ 		netdev_info(vsi->netdev, "Get phy capability failed.\n");
+ 
+@@ -1645,7 +1645,7 @@ static int ice_force_phys_link_state(struct ice_vsi *vsi, bool link_up)
+ 	if (!pcaps)
+ 		return -ENOMEM;
+ 
+-	retcode = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
++	retcode = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_ACTIVE_CFG, pcaps,
+ 				      NULL);
+ 	if (retcode) {
+ 		dev_err(dev, "Failed to get phy capabilities, VSI %d error %d\n",
+@@ -1705,7 +1705,7 @@ static int ice_init_nvm_phy_type(struct ice_port_info *pi)
+ 	if (!pcaps)
+ 		return -ENOMEM;
+ 
+-	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_NVM_CAP, pcaps,
++	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP_NO_MEDIA, pcaps,
+ 				     NULL);
+ 
+ 	if (status) {
+@@ -1821,7 +1821,7 @@ static int ice_init_phy_user_cfg(struct ice_port_info *pi)
+ 	if (!pcaps)
+ 		return -ENOMEM;
+ 
+-	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP, pcaps,
++	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA, pcaps,
+ 				     NULL);
+ 	if (status) {
+ 		dev_err(ice_pf_to_dev(pf), "Get PHY capability failed.\n");
+@@ -1900,7 +1900,7 @@ static int ice_configure_phy(struct ice_vsi *vsi)
+ 		return -ENOMEM;
+ 
+ 	/* Get current PHY config */
+-	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
++	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_ACTIVE_CFG, pcaps,
+ 				     NULL);
+ 	if (status) {
+ 		dev_err(dev, "Failed to get PHY configuration, VSI %d error %s\n",
+@@ -1918,7 +1918,7 @@ static int ice_configure_phy(struct ice_vsi *vsi)
+ 
+ 	/* Use PHY topology as baseline for configuration */
+ 	memset(pcaps, 0, sizeof(*pcaps));
+-	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP, pcaps,
++	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA, pcaps,
+ 				     NULL);
+ 	if (status) {
+ 		dev_err(dev, "Failed to get PHY topology, VSI %d error %s\n",
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index 48511ad0e0c82..5134342ff70fc 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -1849,24 +1849,6 @@ ice_vc_send_msg_to_vf(struct ice_vf *vf, u32 v_opcode,
+ 
+ 	dev = ice_pf_to_dev(pf);
+ 
+-	/* single place to detect unsuccessful return values */
+-	if (v_retval) {
+-		vf->num_inval_msgs++;
+-		dev_info(dev, "VF %d failed opcode %d, retval: %d\n", vf->vf_id,
+-			 v_opcode, v_retval);
+-		if (vf->num_inval_msgs > ICE_DFLT_NUM_INVAL_MSGS_ALLOWED) {
+-			dev_err(dev, "Number of invalid messages exceeded for VF %d\n",
+-				vf->vf_id);
+-			dev_err(dev, "Use PF Control I/F to enable the VF\n");
+-			set_bit(ICE_VF_STATE_DIS, vf->vf_states);
+-			return -EIO;
+-		}
+-	} else {
+-		vf->num_valid_msgs++;
+-		/* reset the invalid counter, if a valid message is received. */
+-		vf->num_inval_msgs = 0;
+-	}
+-
+ 	aq_ret = ice_aq_send_msg_to_vf(&pf->hw, vf->vf_id, v_opcode, v_retval,
+ 				       msg, msglen, NULL);
+ 	if (aq_ret && pf->hw.mailboxq.sq_last_status != ICE_AQ_RC_ENOSYS) {
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+index 59e5b4f16e965..d2e935c678a14 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+@@ -13,7 +13,6 @@
+ #define ICE_MAX_MACADDR_PER_VF		18
+ 
+ /* Malicious Driver Detection */
+-#define ICE_DFLT_NUM_INVAL_MSGS_ALLOWED		10
+ #define ICE_MDD_EVENTS_THRESHOLD		30
+ 
+ /* Static VF transaction/status register def */
+@@ -97,8 +96,6 @@ struct ice_vf {
+ 	unsigned int tx_rate;		/* Tx bandwidth limit in Mbps */
+ 	DECLARE_BITMAP(vf_states, ICE_VF_STATES_NBITS);	/* VF runtime states */
+ 
+-	u64 num_inval_msgs;		/* number of continuous invalid msgs */
+-	u64 num_valid_msgs;		/* number of valid msgs detected */
+ 	unsigned long vf_caps;		/* VF's adv. capabilities */
+ 	u8 num_req_qs;			/* num of queue pairs requested by VF */
+ 	u16 num_mac;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 6af0dd8471691..94426d29025eb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -130,11 +130,8 @@ static int cmd_alloc_index(struct mlx5_cmd *cmd)
+ 
+ static void cmd_free_index(struct mlx5_cmd *cmd, int idx)
+ {
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&cmd->alloc_lock, flags);
++	lockdep_assert_held(&cmd->alloc_lock);
+ 	set_bit(idx, &cmd->bitmask);
+-	spin_unlock_irqrestore(&cmd->alloc_lock, flags);
+ }
+ 
+ static void cmd_ent_get(struct mlx5_cmd_work_ent *ent)
+@@ -144,17 +141,21 @@ static void cmd_ent_get(struct mlx5_cmd_work_ent *ent)
+ 
+ static void cmd_ent_put(struct mlx5_cmd_work_ent *ent)
+ {
++	struct mlx5_cmd *cmd = ent->cmd;
++	unsigned long flags;
++
++	spin_lock_irqsave(&cmd->alloc_lock, flags);
+ 	if (!refcount_dec_and_test(&ent->refcnt))
+-		return;
++		goto out;
+ 
+ 	if (ent->idx >= 0) {
+-		struct mlx5_cmd *cmd = ent->cmd;
+-
+ 		cmd_free_index(cmd, ent->idx);
+ 		up(ent->page_queue ? &cmd->pages_sem : &cmd->sem);
+ 	}
+ 
+ 	cmd_free_ent(ent);
++out:
++	spin_unlock_irqrestore(&cmd->alloc_lock, flags);
+ }
+ 
+ static struct mlx5_cmd_layout *get_inst(struct mlx5_cmd *cmd, int idx)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+index 0f0d250bbc150..c04413f449c50 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+@@ -123,6 +123,10 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
+ 		return;
+ 	}
+ 
++	/* Handle multipath entry with lower priority value */
++	if (mp->mfi && mp->mfi != fi && fi->fib_priority >= mp->mfi->fib_priority)
++		return;
++
+ 	/* Handle add/replace event */
+ 	nhs = fib_info_num_path(fi);
+ 	if (nhs == 1) {
+@@ -132,12 +136,13 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
+ 			int i = mlx5_lag_dev_get_netdev_idx(ldev, nh_dev);
+ 
+ 			if (i < 0)
+-				i = MLX5_LAG_NORMAL_AFFINITY;
+-			else
+-				++i;
++				return;
+ 
++			i++;
+ 			mlx5_lag_set_port_affinity(ldev, i);
+ 		}
++
++		mp->mfi = fi;
+ 		return;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c
+index 9e098e40fb1c6..a9a9bf2e065a5 100644
+--- a/drivers/net/ethernet/nxp/lpc_eth.c
++++ b/drivers/net/ethernet/nxp/lpc_eth.c
+@@ -1468,6 +1468,7 @@ static int lpc_eth_drv_resume(struct platform_device *pdev)
+ {
+ 	struct net_device *ndev = platform_get_drvdata(pdev);
+ 	struct netdata_local *pldat;
++	int ret;
+ 
+ 	if (device_may_wakeup(&pdev->dev))
+ 		disable_irq_wake(ndev->irq);
+@@ -1477,7 +1478,9 @@ static int lpc_eth_drv_resume(struct platform_device *pdev)
+ 			pldat = netdev_priv(ndev);
+ 
+ 			/* Enable interface clock */
+-			clk_enable(pldat->clk);
++			ret = clk_enable(pldat->clk);
++			if (ret)
++				return ret;
+ 
+ 			/* Reset and initialize */
+ 			__lpc_eth_reset(pldat);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+index b8dc5c4591ef5..ef0ad4cf82e60 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+@@ -3778,11 +3778,11 @@ bool qed_iov_mark_vf_flr(struct qed_hwfn *p_hwfn, u32 *p_disabled_vfs)
+ 	return found;
+ }
+ 
+-static void qed_iov_get_link(struct qed_hwfn *p_hwfn,
+-			     u16 vfid,
+-			     struct qed_mcp_link_params *p_params,
+-			     struct qed_mcp_link_state *p_link,
+-			     struct qed_mcp_link_capabilities *p_caps)
++static int qed_iov_get_link(struct qed_hwfn *p_hwfn,
++			    u16 vfid,
++			    struct qed_mcp_link_params *p_params,
++			    struct qed_mcp_link_state *p_link,
++			    struct qed_mcp_link_capabilities *p_caps)
+ {
+ 	struct qed_vf_info *p_vf = qed_iov_get_vf_info(p_hwfn,
+ 						       vfid,
+@@ -3790,7 +3790,7 @@ static void qed_iov_get_link(struct qed_hwfn *p_hwfn,
+ 	struct qed_bulletin_content *p_bulletin;
+ 
+ 	if (!p_vf)
+-		return;
++		return -EINVAL;
+ 
+ 	p_bulletin = p_vf->bulletin.p_virt;
+ 
+@@ -3800,6 +3800,7 @@ static void qed_iov_get_link(struct qed_hwfn *p_hwfn,
+ 		__qed_vf_get_link_state(p_hwfn, p_link, p_bulletin);
+ 	if (p_caps)
+ 		__qed_vf_get_link_caps(p_hwfn, p_caps, p_bulletin);
++	return 0;
+ }
+ 
+ static int
+@@ -4658,6 +4659,7 @@ static int qed_get_vf_config(struct qed_dev *cdev,
+ 	struct qed_public_vf_info *vf_info;
+ 	struct qed_mcp_link_state link;
+ 	u32 tx_rate;
++	int ret;
+ 
+ 	/* Sanitize request */
+ 	if (IS_VF(cdev))
+@@ -4671,7 +4673,9 @@ static int qed_get_vf_config(struct qed_dev *cdev,
+ 
+ 	vf_info = qed_iov_get_public_vf_info(hwfn, vf_id, true);
+ 
+-	qed_iov_get_link(hwfn, vf_id, NULL, &link, NULL);
++	ret = qed_iov_get_link(hwfn, vf_id, NULL, &link, NULL);
++	if (ret)
++		return ret;
+ 
+ 	/* Fill information about VF */
+ 	ivi->vf = vf_id;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_vf.c b/drivers/net/ethernet/qlogic/qed/qed_vf.c
+index 72a38d53d33f6..e2a5a6a373cbe 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_vf.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_vf.c
+@@ -513,6 +513,9 @@ int qed_vf_hw_prepare(struct qed_hwfn *p_hwfn)
+ 						    p_iov->bulletin.size,
+ 						    &p_iov->bulletin.phys,
+ 						    GFP_KERNEL);
++	if (!p_iov->bulletin.p_virt)
++		goto free_pf2vf_reply;
++
+ 	DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ 		   "VF's bulletin Board [%p virt 0x%llx phys 0x%08x bytes]\n",
+ 		   p_iov->bulletin.p_virt,
+@@ -552,6 +555,10 @@ int qed_vf_hw_prepare(struct qed_hwfn *p_hwfn)
+ 
+ 	return rc;
+ 
++free_pf2vf_reply:
++	dma_free_coherent(&p_hwfn->cdev->pdev->dev,
++			  sizeof(union pfvf_tlvs),
++			  p_iov->pf2vf_reply, p_iov->pf2vf_reply_phys);
+ free_vf2pf_request:
+ 	dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+ 			  sizeof(union vfpf_tlvs),
+diff --git a/drivers/net/ethernet/ti/cpts.c b/drivers/net/ethernet/ti/cpts.c
+index 43222a34cba06..f9514518700eb 100644
+--- a/drivers/net/ethernet/ti/cpts.c
++++ b/drivers/net/ethernet/ti/cpts.c
+@@ -568,7 +568,9 @@ int cpts_register(struct cpts *cpts)
+ 	for (i = 0; i < CPTS_MAX_EVENTS; i++)
+ 		list_add(&cpts->pool_data[i].list, &cpts->pool);
+ 
+-	clk_enable(cpts->refclk);
++	err = clk_enable(cpts->refclk);
++	if (err)
++		return err;
+ 
+ 	cpts_write32(cpts, CPTS_EN, control);
+ 	cpts_write32(cpts, TS_PEND_EN, int_enable);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+index 962831cdde4db..4bd44fbc6ecfa 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+@@ -1187,7 +1187,7 @@ static int xemaclite_of_probe(struct platform_device *ofdev)
+ 	if (rc) {
+ 		dev_err(dev,
+ 			"Cannot register network device, aborting\n");
+-		goto error;
++		goto put_node;
+ 	}
+ 
+ 	dev_info(dev,
+@@ -1195,6 +1195,8 @@ static int xemaclite_of_probe(struct platform_device *ofdev)
+ 		 (unsigned int __force)ndev->mem_start, lp->base_addr, ndev->irq);
+ 	return 0;
+ 
++put_node:
++	of_node_put(lp->phy_node);
+ error:
+ 	free_netdev(ndev);
+ 	return rc;
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index 7bf43031cea8c..3d75b98f3051d 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -289,7 +289,7 @@ static int dp83822_config_intr(struct phy_device *phydev)
+ 		if (err < 0)
+ 			return err;
+ 
+-		err = phy_write(phydev, MII_DP83822_MISR1, 0);
++		err = phy_write(phydev, MII_DP83822_MISR2, 0);
+ 		if (err < 0)
+ 			return err;
+ 
+diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
+index 94d19158efc18..ca261e0fc9c9b 100644
+--- a/drivers/net/xen-netback/xenbus.c
++++ b/drivers/net/xen-netback/xenbus.c
+@@ -256,6 +256,7 @@ static void backend_disconnect(struct backend_info *be)
+ 		unsigned int queue_index;
+ 
+ 		xen_unregister_watchers(vif);
++		xenbus_rm(XBT_NIL, be->dev->nodename, "hotplug-status");
+ #ifdef CONFIG_DEBUG_FS
+ 		xenvif_debugfs_delif(vif);
+ #endif /* CONFIG_DEBUG_FS */
+@@ -675,7 +676,6 @@ static void hotplug_status_changed(struct xenbus_watch *watch,
+ 
+ 		/* Not interested in this watch anymore. */
+ 		unregister_hotplug_status_watch(be);
+-		xenbus_rm(XBT_NIL, be->dev->nodename, "hotplug-status");
+ 	}
+ 	kfree(str);
+ }
+@@ -824,15 +824,11 @@ static void connect(struct backend_info *be)
+ 	xenvif_carrier_on(be->vif);
+ 
+ 	unregister_hotplug_status_watch(be);
+-	if (xenbus_exists(XBT_NIL, dev->nodename, "hotplug-status")) {
+-		err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
+-					   NULL, hotplug_status_changed,
+-					   "%s/%s", dev->nodename,
+-					   "hotplug-status");
+-		if (err)
+-			goto err;
++	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, NULL,
++				   hotplug_status_changed,
++				   "%s/%s", dev->nodename, "hotplug-status");
++	if (!err)
+ 		be->have_hotplug_status_watch = 1;
+-	}
+ 
+ 	netif_tx_wake_all_queues(be->vif->dev);
+ 
+diff --git a/drivers/nfc/port100.c b/drivers/nfc/port100.c
+index 1caebefb25ff1..2ae1474faede9 100644
+--- a/drivers/nfc/port100.c
++++ b/drivers/nfc/port100.c
+@@ -1609,7 +1609,9 @@ free_nfc_dev:
+ 	nfc_digital_free_device(dev->nfc_digital_dev);
+ 
+ error:
++	usb_kill_urb(dev->in_urb);
+ 	usb_free_urb(dev->in_urb);
++	usb_kill_urb(dev->out_urb);
+ 	usb_free_urb(dev->out_urb);
+ 	usb_put_dev(dev->udev);
+ 
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index 624273d0e727f..a9f97023d5a00 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -567,6 +567,12 @@ static int rockchip_spi_slave_abort(struct spi_controller *ctlr)
+ {
+ 	struct rockchip_spi *rs = spi_controller_get_devdata(ctlr);
+ 
++	if (atomic_read(&rs->state) & RXDMA)
++		dmaengine_terminate_sync(ctlr->dma_rx);
++	if (atomic_read(&rs->state) & TXDMA)
++		dmaengine_terminate_sync(ctlr->dma_tx);
++	atomic_set(&rs->state, 0);
++	spi_enable_chip(rs, false);
+ 	rs->slave_abort = true;
+ 	complete(&ctlr->xfer_completion);
+ 
+@@ -636,7 +642,7 @@ static int rockchip_spi_probe(struct platform_device *pdev)
+ 	struct spi_controller *ctlr;
+ 	struct resource *mem;
+ 	struct device_node *np = pdev->dev.of_node;
+-	u32 rsd_nsecs;
++	u32 rsd_nsecs, num_cs;
+ 	bool slave_mode;
+ 
+ 	slave_mode = of_property_read_bool(np, "spi-slave");
+@@ -744,8 +750,9 @@ static int rockchip_spi_probe(struct platform_device *pdev)
+ 		 * rk spi0 has two native cs, spi1..5 one cs only
+ 		 * if num-cs is missing in the dts, default to 1
+ 		 */
+-		if (of_property_read_u16(np, "num-cs", &ctlr->num_chipselect))
+-			ctlr->num_chipselect = 1;
++		if (of_property_read_u32(np, "num-cs", &num_cs))
++			num_cs = 1;
++		ctlr->num_chipselect = num_cs;
+ 		ctlr->use_gpio_descriptors = true;
+ 	}
+ 	ctlr->dev.of_node = pdev->dev.of_node;
+diff --git a/drivers/staging/gdm724x/gdm_lte.c b/drivers/staging/gdm724x/gdm_lte.c
+index bd5f874334043..de30262c3fae0 100644
+--- a/drivers/staging/gdm724x/gdm_lte.c
++++ b/drivers/staging/gdm724x/gdm_lte.c
+@@ -76,14 +76,15 @@ static void tx_complete(void *arg)
+ 
+ static int gdm_lte_rx(struct sk_buff *skb, struct nic *nic, int nic_type)
+ {
+-	int ret;
++	int ret, len;
+ 
++	len = skb->len + ETH_HLEN;
+ 	ret = netif_rx_ni(skb);
+ 	if (ret == NET_RX_DROP) {
+ 		nic->stats.rx_dropped++;
+ 	} else {
+ 		nic->stats.rx_packets++;
+-		nic->stats.rx_bytes += skb->len + ETH_HLEN;
++		nic->stats.rx_bytes += len;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
+index 4df6d04315e39..b912ad2f4b720 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
++++ b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
+@@ -6679,6 +6679,7 @@ u8 chk_bmc_sleepq_hdl(struct adapter *padapter, unsigned char *pbuf)
+ 	struct sta_info *psta_bmc;
+ 	struct list_head	*xmitframe_plist, *xmitframe_phead;
+ 	struct xmit_frame *pxmitframe = NULL;
++	struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ 	struct sta_priv  *pstapriv = &padapter->stapriv;
+ 
+ 	/* for BC/MC Frames */
+@@ -6689,7 +6690,8 @@ u8 chk_bmc_sleepq_hdl(struct adapter *padapter, unsigned char *pbuf)
+ 	if ((pstapriv->tim_bitmap&BIT(0)) && (psta_bmc->sleepq_len > 0)) {
+ 		msleep(10);/*  10ms, ATIM(HIQ) Windows */
+ 
+-		spin_lock_bh(&psta_bmc->sleep_q.lock);
++		/* spin_lock_bh(&psta_bmc->sleep_q.lock); */
++		spin_lock_bh(&pxmitpriv->lock);
+ 
+ 		xmitframe_phead = get_list_head(&psta_bmc->sleep_q);
+ 		xmitframe_plist = get_next(xmitframe_phead);
+@@ -6715,7 +6717,8 @@ u8 chk_bmc_sleepq_hdl(struct adapter *padapter, unsigned char *pbuf)
+ 			rtw_hal_xmitframe_enqueue(padapter, pxmitframe);
+ 		}
+ 
+-		spin_unlock_bh(&psta_bmc->sleep_q.lock);
++		/* spin_unlock_bh(&psta_bmc->sleep_q.lock); */
++		spin_unlock_bh(&pxmitpriv->lock);
+ 
+ 		/* check hi queue and bmc_sleepq */
+ 		rtw_chk_hi_queue_cmd(padapter);
+diff --git a/drivers/staging/rtl8723bs/core/rtw_recv.c b/drivers/staging/rtl8723bs/core/rtw_recv.c
+index 0d47e6e121777..6979f8dbccb84 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_recv.c
++++ b/drivers/staging/rtl8723bs/core/rtw_recv.c
+@@ -1144,8 +1144,10 @@ sint validate_recv_ctrl_frame(struct adapter *padapter, union recv_frame *precv_
+ 		if ((psta->state&WIFI_SLEEP_STATE) && (pstapriv->sta_dz_bitmap&BIT(psta->aid))) {
+ 			struct list_head	*xmitframe_plist, *xmitframe_phead;
+ 			struct xmit_frame *pxmitframe = NULL;
++			struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ 
+-			spin_lock_bh(&psta->sleep_q.lock);
++			/* spin_lock_bh(&psta->sleep_q.lock); */
++			spin_lock_bh(&pxmitpriv->lock);
+ 
+ 			xmitframe_phead = get_list_head(&psta->sleep_q);
+ 			xmitframe_plist = get_next(xmitframe_phead);
+@@ -1180,10 +1182,12 @@ sint validate_recv_ctrl_frame(struct adapter *padapter, union recv_frame *precv_
+ 					update_beacon(padapter, _TIM_IE_, NULL, true);
+ 				}
+ 
+-				spin_unlock_bh(&psta->sleep_q.lock);
++				/* spin_unlock_bh(&psta->sleep_q.lock); */
++				spin_unlock_bh(&pxmitpriv->lock);
+ 
+ 			} else {
+-				spin_unlock_bh(&psta->sleep_q.lock);
++				/* spin_unlock_bh(&psta->sleep_q.lock); */
++				spin_unlock_bh(&pxmitpriv->lock);
+ 
+ 				/* DBG_871X("no buffered packets to xmit\n"); */
+ 				if (pstapriv->tim_bitmap&BIT(psta->aid)) {
+diff --git a/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c b/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c
+index b1784b4e466f3..e3f56c6cc882e 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c
++++ b/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c
+@@ -330,48 +330,46 @@ u32 rtw_free_stainfo(struct adapter *padapter, struct sta_info *psta)
+ 
+ 	/* list_del_init(&psta->wakeup_list); */
+ 
+-	spin_lock_bh(&psta->sleep_q.lock);
++	spin_lock_bh(&pxmitpriv->lock);
++
+ 	rtw_free_xmitframe_queue(pxmitpriv, &psta->sleep_q);
+ 	psta->sleepq_len = 0;
+-	spin_unlock_bh(&psta->sleep_q.lock);
+-
+-	spin_lock_bh(&pxmitpriv->lock);
+ 
+ 	/* vo */
+-	spin_lock_bh(&pstaxmitpriv->vo_q.sta_pending.lock);
++	/* spin_lock_bh(&(pxmitpriv->vo_pending.lock)); */
+ 	rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->vo_q.sta_pending);
+ 	list_del_init(&(pstaxmitpriv->vo_q.tx_pending));
+ 	phwxmit = pxmitpriv->hwxmits;
+ 	phwxmit->accnt -= pstaxmitpriv->vo_q.qcnt;
+ 	pstaxmitpriv->vo_q.qcnt = 0;
+-	spin_unlock_bh(&pstaxmitpriv->vo_q.sta_pending.lock);
++	/* spin_unlock_bh(&(pxmitpriv->vo_pending.lock)); */
+ 
+ 	/* vi */
+-	spin_lock_bh(&pstaxmitpriv->vi_q.sta_pending.lock);
++	/* spin_lock_bh(&(pxmitpriv->vi_pending.lock)); */
+ 	rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->vi_q.sta_pending);
+ 	list_del_init(&(pstaxmitpriv->vi_q.tx_pending));
+ 	phwxmit = pxmitpriv->hwxmits+1;
+ 	phwxmit->accnt -= pstaxmitpriv->vi_q.qcnt;
+ 	pstaxmitpriv->vi_q.qcnt = 0;
+-	spin_unlock_bh(&pstaxmitpriv->vi_q.sta_pending.lock);
++	/* spin_unlock_bh(&(pxmitpriv->vi_pending.lock)); */
+ 
+ 	/* be */
+-	spin_lock_bh(&pstaxmitpriv->be_q.sta_pending.lock);
++	/* spin_lock_bh(&(pxmitpriv->be_pending.lock)); */
+ 	rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->be_q.sta_pending);
+ 	list_del_init(&(pstaxmitpriv->be_q.tx_pending));
+ 	phwxmit = pxmitpriv->hwxmits+2;
+ 	phwxmit->accnt -= pstaxmitpriv->be_q.qcnt;
+ 	pstaxmitpriv->be_q.qcnt = 0;
+-	spin_unlock_bh(&pstaxmitpriv->be_q.sta_pending.lock);
++	/* spin_unlock_bh(&(pxmitpriv->be_pending.lock)); */
+ 
+ 	/* bk */
+-	spin_lock_bh(&pstaxmitpriv->bk_q.sta_pending.lock);
++	/* spin_lock_bh(&(pxmitpriv->bk_pending.lock)); */
+ 	rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->bk_q.sta_pending);
+ 	list_del_init(&(pstaxmitpriv->bk_q.tx_pending));
+ 	phwxmit = pxmitpriv->hwxmits+3;
+ 	phwxmit->accnt -= pstaxmitpriv->bk_q.qcnt;
+ 	pstaxmitpriv->bk_q.qcnt = 0;
+-	spin_unlock_bh(&pstaxmitpriv->bk_q.sta_pending.lock);
++	/* spin_unlock_bh(&(pxmitpriv->bk_pending.lock)); */
+ 
+ 	spin_unlock_bh(&pxmitpriv->lock);
+ 
+diff --git a/drivers/staging/rtl8723bs/core/rtw_xmit.c b/drivers/staging/rtl8723bs/core/rtw_xmit.c
+index d78cff7ed6a01..6ecaff9728fd4 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_xmit.c
++++ b/drivers/staging/rtl8723bs/core/rtw_xmit.c
+@@ -1871,6 +1871,8 @@ void rtw_free_xmitframe_queue(struct xmit_priv *pxmitpriv, struct __queue *pfram
+ 	struct list_head	*plist, *phead;
+ 	struct	xmit_frame	*pxmitframe;
+ 
++	spin_lock_bh(&pframequeue->lock);
++
+ 	phead = get_list_head(pframequeue);
+ 	plist = get_next(phead);
+ 
+@@ -1881,6 +1883,7 @@ void rtw_free_xmitframe_queue(struct xmit_priv *pxmitpriv, struct __queue *pfram
+ 
+ 		rtw_free_xmitframe(pxmitpriv, pxmitframe);
+ 	}
++	spin_unlock_bh(&pframequeue->lock);
+ }
+ 
+ s32 rtw_xmitframe_enqueue(struct adapter *padapter, struct xmit_frame *pxmitframe)
+@@ -1943,7 +1946,6 @@ s32 rtw_xmit_classifier(struct adapter *padapter, struct xmit_frame *pxmitframe)
+ 	struct sta_info *psta;
+ 	struct tx_servq	*ptxservq;
+ 	struct pkt_attrib	*pattrib = &pxmitframe->attrib;
+-	struct xmit_priv *xmit_priv = &padapter->xmitpriv;
+ 	struct hw_xmit	*phwxmits =  padapter->xmitpriv.hwxmits;
+ 	sint res = _SUCCESS;
+ 
+@@ -1972,14 +1974,12 @@ s32 rtw_xmit_classifier(struct adapter *padapter, struct xmit_frame *pxmitframe)
+ 
+ 	ptxservq = rtw_get_sta_pending(padapter, psta, pattrib->priority, (u8 *)(&ac_index));
+ 
+-	spin_lock_bh(&xmit_priv->lock);
+ 	if (list_empty(&ptxservq->tx_pending))
+ 		list_add_tail(&ptxservq->tx_pending, get_list_head(phwxmits[ac_index].sta_queue));
+ 
+ 	list_add_tail(&pxmitframe->list, get_list_head(&ptxservq->sta_pending));
+ 	ptxservq->qcnt++;
+ 	phwxmits[ac_index].accnt++;
+-	spin_unlock_bh(&xmit_priv->lock);
+ 
+ exit:
+ 
+@@ -2397,10 +2397,11 @@ void wakeup_sta_to_xmit(struct adapter *padapter, struct sta_info *psta)
+ 	struct list_head	*xmitframe_plist, *xmitframe_phead;
+ 	struct xmit_frame *pxmitframe = NULL;
+ 	struct sta_priv *pstapriv = &padapter->stapriv;
++	struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ 
+ 	psta_bmc = rtw_get_bcmc_stainfo(padapter);
+ 
+-	spin_lock_bh(&psta->sleep_q.lock);
++	spin_lock_bh(&pxmitpriv->lock);
+ 
+ 	xmitframe_phead = get_list_head(&psta->sleep_q);
+ 	xmitframe_plist = get_next(xmitframe_phead);
+@@ -2508,7 +2509,7 @@ void wakeup_sta_to_xmit(struct adapter *padapter, struct sta_info *psta)
+ 
+ _exit:
+ 
+-	spin_unlock_bh(&psta->sleep_q.lock);
++	spin_unlock_bh(&pxmitpriv->lock);
+ 
+ 	if (update_mask)
+ 		update_beacon(padapter, _TIM_IE_, NULL, true);
+@@ -2520,8 +2521,9 @@ void xmit_delivery_enabled_frames(struct adapter *padapter, struct sta_info *pst
+ 	struct list_head	*xmitframe_plist, *xmitframe_phead;
+ 	struct xmit_frame *pxmitframe = NULL;
+ 	struct sta_priv *pstapriv = &padapter->stapriv;
++	struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ 
+-	spin_lock_bh(&psta->sleep_q.lock);
++	spin_lock_bh(&pxmitpriv->lock);
+ 
+ 	xmitframe_phead = get_list_head(&psta->sleep_q);
+ 	xmitframe_plist = get_next(xmitframe_phead);
+@@ -2577,7 +2579,7 @@ void xmit_delivery_enabled_frames(struct adapter *padapter, struct sta_info *pst
+ 		}
+ 	}
+ 
+-	spin_unlock_bh(&psta->sleep_q.lock);
++	spin_unlock_bh(&pxmitpriv->lock);
+ }
+ 
+ void enqueue_pending_xmitbuf(
+diff --git a/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c b/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c
+index ce5bf2861d0c1..44799c4a9f35b 100644
+--- a/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c
++++ b/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c
+@@ -572,7 +572,9 @@ s32 rtl8723bs_hal_xmit(
+ 			rtw_issue_addbareq_cmd(padapter, pxmitframe);
+ 	}
+ 
++	spin_lock_bh(&pxmitpriv->lock);
+ 	err = rtw_xmitframe_enqueue(padapter, pxmitframe);
++	spin_unlock_bh(&pxmitpriv->lock);
+ 	if (err != _SUCCESS) {
+ 		RT_TRACE(_module_hal_xmit_c_, _drv_err_, ("rtl8723bs_hal_xmit: enqueue xmitframe fail\n"));
+ 		rtw_free_xmitframe(pxmitpriv, pxmitframe);
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index 5c53098755a35..441bc057896f5 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -167,14 +167,13 @@ void virtio_add_status(struct virtio_device *dev, unsigned int status)
+ }
+ EXPORT_SYMBOL_GPL(virtio_add_status);
+ 
+-int virtio_finalize_features(struct virtio_device *dev)
++/* Do some validation, then set FEATURES_OK */
++static int virtio_features_ok(struct virtio_device *dev)
+ {
+-	int ret = dev->config->finalize_features(dev);
+ 	unsigned status;
++	int ret;
+ 
+ 	might_sleep();
+-	if (ret)
+-		return ret;
+ 
+ 	ret = arch_has_restricted_virtio_memory_access();
+ 	if (ret) {
+@@ -203,7 +202,6 @@ int virtio_finalize_features(struct virtio_device *dev)
+ 	}
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(virtio_finalize_features);
+ 
+ static int virtio_dev_probe(struct device *_d)
+ {
+@@ -240,17 +238,6 @@ static int virtio_dev_probe(struct device *_d)
+ 		driver_features_legacy = driver_features;
+ 	}
+ 
+-	/*
+-	 * Some devices detect legacy solely via F_VERSION_1. Write
+-	 * F_VERSION_1 to force LE config space accesses before FEATURES_OK for
+-	 * these when needed.
+-	 */
+-	if (drv->validate && !virtio_legacy_is_little_endian()
+-			  && device_features & BIT_ULL(VIRTIO_F_VERSION_1)) {
+-		dev->features = BIT_ULL(VIRTIO_F_VERSION_1);
+-		dev->config->finalize_features(dev);
+-	}
+-
+ 	if (device_features & (1ULL << VIRTIO_F_VERSION_1))
+ 		dev->features = driver_features & device_features;
+ 	else
+@@ -261,13 +248,26 @@ static int virtio_dev_probe(struct device *_d)
+ 		if (device_features & (1ULL << i))
+ 			__virtio_set_bit(dev, i);
+ 
++	err = dev->config->finalize_features(dev);
++	if (err)
++		goto err;
++
+ 	if (drv->validate) {
++		u64 features = dev->features;
++
+ 		err = drv->validate(dev);
+ 		if (err)
+ 			goto err;
++
++		/* Did validation change any features? Then write them again. */
++		if (features != dev->features) {
++			err = dev->config->finalize_features(dev);
++			if (err)
++				goto err;
++		}
+ 	}
+ 
+-	err = virtio_finalize_features(dev);
++	err = virtio_features_ok(dev);
+ 	if (err)
+ 		goto err;
+ 
+@@ -438,7 +438,11 @@ int virtio_device_restore(struct virtio_device *dev)
+ 	/* We have a driver! */
+ 	virtio_add_status(dev, VIRTIO_CONFIG_S_DRIVER);
+ 
+-	ret = virtio_finalize_features(dev);
++	ret = dev->config->finalize_features(dev);
++	if (ret)
++		goto err;
++
++	ret = virtio_features_ok(dev);
+ 	if (ret)
+ 		goto err;
+ 
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 928700d57eb67..6513079c728be 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -74,6 +74,11 @@ int ext4_resize_begin(struct super_block *sb)
+ 		return -EPERM;
+ 	}
+ 
++	if (ext4_has_feature_sparse_super2(sb)) {
++		ext4_msg(sb, KERN_ERR, "Online resizing not supported with sparse_super2");
++		return -EOPNOTSUPP;
++	}
++
+ 	if (test_and_set_bit_lock(EXT4_FLAGS_RESIZING,
+ 				  &EXT4_SB(sb)->s_ext4_flags))
+ 		ret = -EBUSY;
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index d100b5dfedbd2..8ac91ba05d6de 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -945,7 +945,17 @@ static int fuse_copy_page(struct fuse_copy_state *cs, struct page **pagep,
+ 
+ 	while (count) {
+ 		if (cs->write && cs->pipebufs && page) {
+-			return fuse_ref_page(cs, page, offset, count);
++			/*
++			 * Can't control lifetime of pipe buffers, so always
++			 * copy user pages.
++			 */
++			if (cs->req->args->user_pages) {
++				err = fuse_copy_fill(cs);
++				if (err)
++					return err;
++			} else {
++				return fuse_ref_page(cs, page, offset, count);
++			}
+ 		} else if (!cs->len) {
+ 			if (cs->move_pages && page &&
+ 			    offset == 0 && count == PAGE_SIZE) {
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index e81d1c3eb7e11..d1bc96ee6eb3d 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1418,6 +1418,7 @@ static int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii,
+ 			(PAGE_SIZE - ret) & (PAGE_SIZE - 1);
+ 	}
+ 
++	ap->args.user_pages = true;
+ 	if (write)
+ 		ap->args.in_pages = true;
+ 	else
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index b159d8b5e8937..b10cddd723559 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -263,6 +263,7 @@ struct fuse_args {
+ 	bool nocreds:1;
+ 	bool in_pages:1;
+ 	bool out_pages:1;
++	bool user_pages:1;
+ 	bool out_argvar:1;
+ 	bool page_zeroing:1;
+ 	bool page_replace:1;
+diff --git a/fs/pipe.c b/fs/pipe.c
+index d6d4019ba32f5..9f2ca1b1c17ac 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -252,7 +252,8 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
+ 	 */
+ 	was_full = pipe_full(pipe->head, pipe->tail, pipe->max_usage);
+ 	for (;;) {
+-		unsigned int head = pipe->head;
++		/* Read ->head with a barrier vs post_one_notification() */
++		unsigned int head = smp_load_acquire(&pipe->head);
+ 		unsigned int tail = pipe->tail;
+ 		unsigned int mask = pipe->ring_size - 1;
+ 
+@@ -830,10 +831,8 @@ void free_pipe_info(struct pipe_inode_info *pipe)
+ 	int i;
+ 
+ #ifdef CONFIG_WATCH_QUEUE
+-	if (pipe->watch_queue) {
++	if (pipe->watch_queue)
+ 		watch_queue_clear(pipe->watch_queue);
+-		put_watch_queue(pipe->watch_queue);
+-	}
+ #endif
+ 
+ 	(void) account_pipe_buffers(pipe->user, pipe->nr_accounted, 0);
+@@ -843,6 +842,10 @@ void free_pipe_info(struct pipe_inode_info *pipe)
+ 		if (buf->ops)
+ 			pipe_buf_release(pipe, buf);
+ 	}
++#ifdef CONFIG_WATCH_QUEUE
++	if (pipe->watch_queue)
++		put_watch_queue(pipe->watch_queue);
++#endif
+ 	if (pipe->tmp_page)
+ 		__free_page(pipe->tmp_page);
+ 	kfree(pipe->bufs);
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index f5e829e12a76d..eba1f1cbc9fbd 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -9307,8 +9307,8 @@ struct mlx5_ifc_bufferx_reg_bits {
+ 	u8         reserved_at_0[0x6];
+ 	u8         lossy[0x1];
+ 	u8         epsb[0x1];
+-	u8         reserved_at_8[0xc];
+-	u8         size[0xc];
++	u8         reserved_at_8[0x8];
++	u8         size[0x10];
+ 
+ 	u8         xoff_threshold[0x10];
+ 	u8         xon_threshold[0x10];
+diff --git a/include/linux/virtio.h b/include/linux/virtio.h
+index 8ecc2e208d613..90c5ad5568097 100644
+--- a/include/linux/virtio.h
++++ b/include/linux/virtio.h
+@@ -135,7 +135,6 @@ void virtio_break_device(struct virtio_device *dev);
+ void virtio_config_changed(struct virtio_device *dev);
+ void virtio_config_disable(struct virtio_device *dev);
+ void virtio_config_enable(struct virtio_device *dev);
+-int virtio_finalize_features(struct virtio_device *dev);
+ #ifdef CONFIG_PM_SLEEP
+ int virtio_device_freeze(struct virtio_device *dev);
+ int virtio_device_restore(struct virtio_device *dev);
+diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
+index 8519b3ae5d52e..b341dd62aa4da 100644
+--- a/include/linux/virtio_config.h
++++ b/include/linux/virtio_config.h
+@@ -62,8 +62,9 @@ struct virtio_shm_region {
+  *	Returns the first 64 feature bits (all we currently need).
+  * @finalize_features: confirm what device features we'll be using.
+  *	vdev: the virtio_device
+- *	This gives the final feature bits for the device: it can change
++ *	This sends the driver feature bits to the device: it can change
+  *	the dev->feature bits if it wants.
++ * Note: despite the name this can be called any number of times.
+  *	Returns 0 on success or error status
+  * @bus_name: return the bus name associated with the device (optional)
+  *	vdev: the virtio_device
+diff --git a/include/linux/watch_queue.h b/include/linux/watch_queue.h
+index c994d1b2cdbaa..3b9a40ae8bdba 100644
+--- a/include/linux/watch_queue.h
++++ b/include/linux/watch_queue.h
+@@ -28,7 +28,8 @@ struct watch_type_filter {
+ struct watch_filter {
+ 	union {
+ 		struct rcu_head	rcu;
+-		unsigned long	type_filter[2];	/* Bitmask of accepted types */
++		/* Bitmask of accepted types */
++		DECLARE_BITMAP(type_filter, WATCH_TYPE__NR);
+ 	};
+ 	u32			nr_filters;	/* Number of filters */
+ 	struct watch_type_filter filters[];
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 71ed0616d83bd..953dd9568dd74 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1490,10 +1490,12 @@ static int __init set_buf_size(char *str)
+ 	if (!str)
+ 		return 0;
+ 	buf_size = memparse(str, &str);
+-	/* nr_entries can not be zero */
+-	if (buf_size == 0)
+-		return 0;
+-	trace_buf_size = buf_size;
++	/*
++	 * nr_entries can not be zero and the startup
++	 * tests require some buffer space. Therefore
++	 * ensure we have at least 4096 bytes of buffer.
++	 */
++	trace_buf_size = max(4096UL, buf_size);
+ 	return 1;
+ }
+ __setup("trace_buf_size=", set_buf_size);
+diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
+index 0ef8f65bd2d71..e3f144d960261 100644
+--- a/kernel/watch_queue.c
++++ b/kernel/watch_queue.c
+@@ -54,6 +54,7 @@ static void watch_queue_pipe_buf_release(struct pipe_inode_info *pipe,
+ 	bit += page->index;
+ 
+ 	set_bit(bit, wqueue->notes_bitmap);
++	generic_pipe_buf_release(pipe, buf);
+ }
+ 
+ // No try_steal function => no stealing
+@@ -112,7 +113,7 @@ static bool post_one_notification(struct watch_queue *wqueue,
+ 	buf->offset = offset;
+ 	buf->len = len;
+ 	buf->flags = PIPE_BUF_FLAG_WHOLE;
+-	pipe->head = head + 1;
++	smp_store_release(&pipe->head, head + 1); /* vs pipe_read() */
+ 
+ 	if (!test_and_clear_bit(note, wqueue->notes_bitmap)) {
+ 		spin_unlock_irq(&pipe->rd_wait.lock);
+@@ -243,7 +244,8 @@ long watch_queue_set_size(struct pipe_inode_info *pipe, unsigned int nr_notes)
+ 		goto error;
+ 	}
+ 
+-	ret = pipe_resize_ring(pipe, nr_notes);
++	nr_notes = nr_pages * WATCH_QUEUE_NOTES_PER_PAGE;
++	ret = pipe_resize_ring(pipe, roundup_pow_of_two(nr_notes));
+ 	if (ret < 0)
+ 		goto error;
+ 
+@@ -268,7 +270,7 @@ long watch_queue_set_size(struct pipe_inode_info *pipe, unsigned int nr_notes)
+ 	wqueue->notes = pages;
+ 	wqueue->notes_bitmap = bitmap;
+ 	wqueue->nr_pages = nr_pages;
+-	wqueue->nr_notes = nr_pages * WATCH_QUEUE_NOTES_PER_PAGE;
++	wqueue->nr_notes = nr_notes;
+ 	return 0;
+ 
+ error_p:
+@@ -320,7 +322,7 @@ long watch_queue_set_filter(struct pipe_inode_info *pipe,
+ 		    tf[i].info_mask & WATCH_INFO_LENGTH)
+ 			goto err_filter;
+ 		/* Ignore any unknown types */
+-		if (tf[i].type >= sizeof(wfilter->type_filter) * 8)
++		if (tf[i].type >= WATCH_TYPE__NR)
+ 			continue;
+ 		nr_filter++;
+ 	}
+@@ -336,7 +338,7 @@ long watch_queue_set_filter(struct pipe_inode_info *pipe,
+ 
+ 	q = wfilter->filters;
+ 	for (i = 0; i < filter.nr_filters; i++) {
+-		if (tf[i].type >= sizeof(wfilter->type_filter) * BITS_PER_LONG)
++		if (tf[i].type >= WATCH_TYPE__NR)
+ 			continue;
+ 
+ 		q->type			= tf[i].type;
+@@ -371,6 +373,7 @@ static void __put_watch_queue(struct kref *kref)
+ 
+ 	for (i = 0; i < wqueue->nr_pages; i++)
+ 		__free_page(wqueue->notes[i]);
++	bitmap_free(wqueue->notes_bitmap);
+ 
+ 	wfilter = rcu_access_pointer(wqueue->filter);
+ 	if (wfilter)
+@@ -566,7 +569,7 @@ void watch_queue_clear(struct watch_queue *wqueue)
+ 	rcu_read_lock();
+ 	spin_lock_bh(&wqueue->lock);
+ 
+-	/* Prevent new additions and prevent notifications from happening */
++	/* Prevent new notifications from being stored. */
+ 	wqueue->defunct = true;
+ 
+ 	while (!hlist_empty(&wqueue->watches)) {
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 23bd26057a828..9e0eef7fe9add 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -87,6 +87,13 @@ again:
+ 	ax25_for_each(s, &ax25_list) {
+ 		if (s->ax25_dev == ax25_dev) {
+ 			sk = s->sk;
++			if (!sk) {
++				spin_unlock_bh(&ax25_list_lock);
++				s->ax25_dev = NULL;
++				ax25_disconnect(s, ENETUNREACH);
++				spin_lock_bh(&ax25_list_lock);
++				goto again;
++			}
+ 			sock_hold(sk);
+ 			spin_unlock_bh(&ax25_list_lock);
+ 			lock_sock(sk);
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index 99303897b7bb7..989b3f7ee85f4 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -213,7 +213,7 @@ static ssize_t speed_show(struct device *dev,
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+-	if (netif_running(netdev)) {
++	if (netif_running(netdev) && netif_device_present(netdev)) {
+ 		struct ethtool_link_ksettings cmd;
+ 
+ 		if (!__ethtool_get_link_ksettings(netdev, &cmd))
+diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
+index 5aa7344dbec7f..3450c9ba2728c 100644
+--- a/net/ipv4/esp4_offload.c
++++ b/net/ipv4/esp4_offload.c
+@@ -160,6 +160,9 @@ static struct sk_buff *xfrm4_beet_gso_segment(struct xfrm_state *x,
+ 			skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV4;
+ 	}
+ 
++	if (proto == IPPROTO_IPV6)
++		skb_shinfo(skb)->gso_type |= SKB_GSO_IPXIP4;
++
+ 	__skb_pull(skb, skb_transport_offset(skb));
+ 	ops = rcu_dereference(inet_offloads[proto]);
+ 	if (likely(ops && ops->callbacks.gso_segment))
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 072c348237536..7c5bf39dca5d1 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -4979,6 +4979,7 @@ static int inet6_fill_ifaddr(struct sk_buff *skb, struct inet6_ifaddr *ifa,
+ 	    nla_put_s32(skb, IFA_TARGET_NETNSID, args->netnsid))
+ 		goto error;
+ 
++	spin_lock_bh(&ifa->lock);
+ 	if (!((ifa->flags&IFA_F_PERMANENT) &&
+ 	      (ifa->prefered_lft == INFINITY_LIFE_TIME))) {
+ 		preferred = ifa->prefered_lft;
+@@ -5000,6 +5001,7 @@ static int inet6_fill_ifaddr(struct sk_buff *skb, struct inet6_ifaddr *ifa,
+ 		preferred = INFINITY_LIFE_TIME;
+ 		valid = INFINITY_LIFE_TIME;
+ 	}
++	spin_unlock_bh(&ifa->lock);
+ 
+ 	if (!ipv6_addr_any(&ifa->peer_addr)) {
+ 		if (nla_put_in6_addr(skb, IFA_LOCAL, &ifa->addr) < 0 ||
+diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
+index 4af56affaafd4..1c3f02d05d2bf 100644
+--- a/net/ipv6/esp6_offload.c
++++ b/net/ipv6/esp6_offload.c
+@@ -198,6 +198,9 @@ static struct sk_buff *xfrm6_beet_gso_segment(struct xfrm_state *x,
+ 			ipv6_skip_exthdr(skb, 0, &proto, &frag);
+ 	}
+ 
++	if (proto == IPPROTO_IPIP)
++		skb_shinfo(skb)->gso_type |= SKB_GSO_IPXIP6;
++
+ 	__skb_pull(skb, skb_transport_offset(skb));
+ 	ops = rcu_dereference(inet6_offloads[proto]);
+ 	if (likely(ops && ops->callbacks.gso_segment))
+diff --git a/net/sctp/diag.c b/net/sctp/diag.c
+index babadd6720a2b..68ff82ff49a3d 100644
+--- a/net/sctp/diag.c
++++ b/net/sctp/diag.c
+@@ -61,10 +61,6 @@ static void inet_diag_msg_sctpasoc_fill(struct inet_diag_msg *r,
+ 		r->idiag_timer = SCTP_EVENT_TIMEOUT_T3_RTX;
+ 		r->idiag_retrans = asoc->rtx_data_chunks;
+ 		r->idiag_expires = jiffies_to_msecs(t3_rtx->expires - jiffies);
+-	} else {
+-		r->idiag_timer = 0;
+-		r->idiag_retrans = 0;
+-		r->idiag_expires = 0;
+ 	}
+ }
+ 
+@@ -144,13 +140,14 @@ static int inet_sctp_diag_fill(struct sock *sk, struct sctp_association *asoc,
+ 	r = nlmsg_data(nlh);
+ 	BUG_ON(!sk_fullsock(sk));
+ 
++	r->idiag_timer = 0;
++	r->idiag_retrans = 0;
++	r->idiag_expires = 0;
+ 	if (asoc) {
+ 		inet_diag_msg_sctpasoc_fill(r, sk, asoc);
+ 	} else {
+ 		inet_diag_msg_common_fill(r, sk);
+ 		r->idiag_state = sk->sk_state;
+-		r->idiag_timer = 0;
+-		r->idiag_retrans = 0;
+ 	}
+ 
+ 	if (inet_diag_msg_attrs_fill(sk, skb, r, ext, user_ns, net_admin))
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index 12e535b43d887..6911f1cab2063 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -342,16 +342,18 @@ static int tipc_enable_bearer(struct net *net, const char *name,
+ 		goto rejected;
+ 	}
+ 
+-	test_and_set_bit_lock(0, &b->up);
+-	rcu_assign_pointer(tn->bearer_list[bearer_id], b);
+-	if (skb)
+-		tipc_bearer_xmit_skb(net, bearer_id, skb, &b->bcast_addr);
+-
++	/* Create monitoring data before accepting activate messages */
+ 	if (tipc_mon_create(net, bearer_id)) {
+ 		bearer_disable(net, b);
++		kfree_skb(skb);
+ 		return -ENOMEM;
+ 	}
+ 
++	test_and_set_bit_lock(0, &b->up);
++	rcu_assign_pointer(tn->bearer_list[bearer_id], b);
++	if (skb)
++		tipc_bearer_xmit_skb(net, bearer_id, skb, &b->bcast_addr);
++
+ 	pr_info("Enabled bearer <%s>, priority %u\n", name, prio);
+ 
+ 	return res;
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index fb835a3822f49..7a353ff628448 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -2245,6 +2245,11 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 		break;
+ 
+ 	case STATE_MSG:
++		/* Validate Gap ACK blocks, drop if invalid */
++		glen = tipc_get_gap_ack_blks(&ga, l, hdr, true);
++		if (glen > dlen)
++			break;
++
+ 		l->rcv_nxt_state = msg_seqno(hdr) + 1;
+ 
+ 		/* Update own tolerance if peer indicates a non-zero value */
+@@ -2270,10 +2275,6 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 			break;
+ 		}
+ 
+-		/* Receive Gap ACK blocks from peer if any */
+-		glen = tipc_get_gap_ack_blks(&ga, l, hdr, true);
+-		if(glen > dlen)
+-			break;
+ 		tipc_mon_rcv(l->net, data + glen, dlen - glen, l->addr,
+ 			     &l->mon_state, l->bearer_id);
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_crash.c b/tools/testing/selftests/bpf/prog_tests/timer_crash.c
+new file mode 100644
+index 0000000000000..f74b82305da8c
+--- /dev/null
++++ b/tools/testing/selftests/bpf/prog_tests/timer_crash.c
+@@ -0,0 +1,32 @@
++// SPDX-License-Identifier: GPL-2.0
++#include <test_progs.h>
++#include "timer_crash.skel.h"
++
++enum {
++	MODE_ARRAY,
++	MODE_HASH,
++};
++
++static void test_timer_crash_mode(int mode)
++{
++	struct timer_crash *skel;
++
++	skel = timer_crash__open_and_load();
++	if (!ASSERT_OK_PTR(skel, "timer_crash__open_and_load"))
++		return;
++	skel->bss->pid = getpid();
++	skel->bss->crash_map = mode;
++	if (!ASSERT_OK(timer_crash__attach(skel), "timer_crash__attach"))
++		goto end;
++	usleep(1);
++end:
++	timer_crash__destroy(skel);
++}
++
++void test_timer_crash(void)
++{
++	if (test__start_subtest("array"))
++		test_timer_crash_mode(MODE_ARRAY);
++	if (test__start_subtest("hash"))
++		test_timer_crash_mode(MODE_HASH);
++}
+diff --git a/tools/testing/selftests/bpf/progs/timer_crash.c b/tools/testing/selftests/bpf/progs/timer_crash.c
+new file mode 100644
+index 0000000000000..f8f7944e70dae
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/timer_crash.c
+@@ -0,0 +1,54 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <vmlinux.h>
++#include <bpf/bpf_tracing.h>
++#include <bpf/bpf_helpers.h>
++
++struct map_elem {
++	struct bpf_timer timer;
++	struct bpf_spin_lock lock;
++};
++
++struct {
++	__uint(type, BPF_MAP_TYPE_ARRAY);
++	__uint(max_entries, 1);
++	__type(key, int);
++	__type(value, struct map_elem);
++} amap SEC(".maps");
++
++struct {
++	__uint(type, BPF_MAP_TYPE_HASH);
++	__uint(max_entries, 1);
++	__type(key, int);
++	__type(value, struct map_elem);
++} hmap SEC(".maps");
++
++int pid = 0;
++int crash_map = 0; /* 0 for amap, 1 for hmap */
++
++SEC("fentry/do_nanosleep")
++int sys_enter(void *ctx)
++{
++	struct map_elem *e, value = {};
++	void *map = crash_map ? (void *)&hmap : (void *)&amap;
++
++	if (bpf_get_current_task_btf()->tgid != pid)
++		return 0;
++
++	*(void **)&value = (void *)0xdeadcaf3;
++
++	bpf_map_update_elem(map, &(int){0}, &value, 0);
++	/* For array map, doing bpf_map_update_elem will do a
++	 * check_and_free_timer_in_array, which will trigger the crash if timer
++	 * pointer was overwritten, for hmap we need to use bpf_timer_cancel.
++	 */
++	if (crash_map == 1) {
++		e = bpf_map_lookup_elem(map, &(int){0});
++		if (!e)
++			return 0;
++		bpf_timer_cancel(&e->timer);
++	}
++	return 0;
++}
++
++char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/memfd/memfd_test.c b/tools/testing/selftests/memfd/memfd_test.c
+index 334a7eea20042..fba322d1c67a1 100644
+--- a/tools/testing/selftests/memfd/memfd_test.c
++++ b/tools/testing/selftests/memfd/memfd_test.c
+@@ -455,6 +455,7 @@ static void mfd_fail_write(int fd)
+ 			printf("mmap()+mprotect() didn't fail as expected\n");
+ 			abort();
+ 		}
++		munmap(p, mfd_def_size);
+ 	}
+ 
+ 	/* verify PUNCH_HOLE fails */
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 3367fb5f2feff..3253fdc780d62 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -799,7 +799,6 @@ setup_ovs_bridge() {
+ setup() {
+ 	[ "$(id -u)" -ne 0 ] && echo "  need to run as root" && return $ksft_skip
+ 
+-	cleanup
+ 	for arg do
+ 		eval setup_${arg} || { echo "  ${arg} not supported"; return 1; }
+ 	done
+@@ -810,7 +809,7 @@ trace() {
+ 
+ 	for arg do
+ 		[ "${ns_cmd}" = "" ] && ns_cmd="${arg}" && continue
+-		${ns_cmd} tcpdump -s 0 -i "${arg}" -w "${name}_${arg}.pcap" 2> /dev/null &
++		${ns_cmd} tcpdump --immediate-mode -s 0 -i "${arg}" -w "${name}_${arg}.pcap" 2> /dev/null &
+ 		tcpdump_pids="${tcpdump_pids} $!"
+ 		ns_cmd=
+ 	done
+@@ -1636,6 +1635,10 @@ run_test() {
+ 
+ 	unset IFS
+ 
++	# Since cleanup() relies on variables modified by this subshell, it
++	# has to run in this context.
++	trap cleanup EXIT
++
+ 	if [ "$VERBOSE" = "1" ]; then
+ 		printf "\n##########################################################################\n\n"
+ 	fi
+diff --git a/tools/testing/selftests/vm/map_fixed_noreplace.c b/tools/testing/selftests/vm/map_fixed_noreplace.c
+index d91bde5112686..eed44322d1a63 100644
+--- a/tools/testing/selftests/vm/map_fixed_noreplace.c
++++ b/tools/testing/selftests/vm/map_fixed_noreplace.c
+@@ -17,9 +17,6 @@
+ #define MAP_FIXED_NOREPLACE 0x100000
+ #endif
+ 
+-#define BASE_ADDRESS	(256ul * 1024 * 1024)
+-
+-
+ static void dump_maps(void)
+ {
+ 	char cmd[32];
+@@ -28,18 +25,46 @@ static void dump_maps(void)
+ 	system(cmd);
+ }
+ 
++static unsigned long find_base_addr(unsigned long size)
++{
++	void *addr;
++	unsigned long flags;
++
++	flags = MAP_PRIVATE | MAP_ANONYMOUS;
++	addr = mmap(NULL, size, PROT_NONE, flags, -1, 0);
++	if (addr == MAP_FAILED) {
++		printf("Error: couldn't map the space we need for the test\n");
++		return 0;
++	}
++
++	if (munmap(addr, size) != 0) {
++		printf("Error: couldn't map the space we need for the test\n");
++		return 0;
++	}
++	return (unsigned long)addr;
++}
++
+ int main(void)
+ {
++	unsigned long base_addr;
+ 	unsigned long flags, addr, size, page_size;
+ 	char *p;
+ 
+ 	page_size = sysconf(_SC_PAGE_SIZE);
+ 
++	//let's find a base addr that is free before we start the tests
++	size = 5 * page_size;
++	base_addr = find_base_addr(size);
++	if (!base_addr) {
++		printf("Error: couldn't map the space we need for the test\n");
++		return 1;
++	}
++
+ 	flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE;
+ 
+ 	// Check we can map all the areas we need below
+ 	errno = 0;
+-	addr = BASE_ADDRESS;
++	addr = base_addr;
+ 	size = 5 * page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 
+@@ -60,7 +85,7 @@ int main(void)
+ 	printf("unmap() successful\n");
+ 
+ 	errno = 0;
+-	addr = BASE_ADDRESS + page_size;
++	addr = base_addr + page_size;
+ 	size = 3 * page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -80,7 +105,7 @@ int main(void)
+ 	 *     +4 |  free  | new
+ 	 */
+ 	errno = 0;
+-	addr = BASE_ADDRESS;
++	addr = base_addr;
+ 	size = 5 * page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -101,7 +126,7 @@ int main(void)
+ 	 *     +4 |  free  |
+ 	 */
+ 	errno = 0;
+-	addr = BASE_ADDRESS + (2 * page_size);
++	addr = base_addr + (2 * page_size);
+ 	size = page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -121,7 +146,7 @@ int main(void)
+ 	 *     +4 |  free  | new
+ 	 */
+ 	errno = 0;
+-	addr = BASE_ADDRESS + (3 * page_size);
++	addr = base_addr + (3 * page_size);
+ 	size = 2 * page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -141,7 +166,7 @@ int main(void)
+ 	 *     +4 |  free  |
+ 	 */
+ 	errno = 0;
+-	addr = BASE_ADDRESS;
++	addr = base_addr;
+ 	size = 2 * page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -161,7 +186,7 @@ int main(void)
+ 	 *     +4 |  free  |
+ 	 */
+ 	errno = 0;
+-	addr = BASE_ADDRESS;
++	addr = base_addr;
+ 	size = page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -181,7 +206,7 @@ int main(void)
+ 	 *     +4 |  free  |  new
+ 	 */
+ 	errno = 0;
+-	addr = BASE_ADDRESS + (4 * page_size);
++	addr = base_addr + (4 * page_size);
+ 	size = page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -192,7 +217,7 @@ int main(void)
+ 		return 1;
+ 	}
+ 
+-	addr = BASE_ADDRESS;
++	addr = base_addr;
+ 	size = 5 * page_size;
+ 	if (munmap((void *)addr, size) != 0) {
+ 		dump_maps();


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-03-19 13:20 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-03-19 13:20 UTC (permalink / raw
  To: gentoo-commits

commit:     89510c1a892c8cf38f2121a6deb8d039a988d028
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 19 13:19:47 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Mar 19 13:19:47 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=89510c1a

Linux patch 5.10.107

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 +
 1106_linux-5.10.107.patch | 716 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 720 insertions(+)

diff --git a/0000_README b/0000_README
index acb932f5..41173da0 100644
--- a/0000_README
+++ b/0000_README
@@ -467,6 +467,10 @@ Patch:  1105_linux-5.10.106.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.106
 
+Patch:  1106_linux-5.10.107.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.107
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1106_linux-5.10.107.patch b/1106_linux-5.10.107.patch
new file mode 100644
index 00000000..e645a75f
--- /dev/null
+++ b/1106_linux-5.10.107.patch
@@ -0,0 +1,716 @@
+diff --git a/Makefile b/Makefile
+index 7b0dffadf6a89..c0be463910578 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 106
++SUBLEVEL = 107
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/rk322x.dtsi b/arch/arm/boot/dts/rk322x.dtsi
+index 7de8b006ca13a..2f17bf35d7a65 100644
+--- a/arch/arm/boot/dts/rk322x.dtsi
++++ b/arch/arm/boot/dts/rk322x.dtsi
+@@ -640,8 +640,8 @@
+ 		interrupts = <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>;
+ 		assigned-clocks = <&cru SCLK_HDMI_PHY>;
+ 		assigned-clock-parents = <&hdmi_phy>;
+-		clocks = <&cru SCLK_HDMI_HDCP>, <&cru PCLK_HDMI_CTRL>, <&cru SCLK_HDMI_CEC>;
+-		clock-names = "isfr", "iahb", "cec";
++		clocks = <&cru PCLK_HDMI_CTRL>, <&cru SCLK_HDMI_HDCP>, <&cru SCLK_HDMI_CEC>;
++		clock-names = "iahb", "isfr", "cec";
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&hdmii2c_xfer &hdmi_hpd &hdmi_cec>;
+ 		resets = <&cru SRST_HDMI_P>;
+diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
+index 0d89ad274268b..9051fb4a267d4 100644
+--- a/arch/arm/boot/dts/rk3288.dtsi
++++ b/arch/arm/boot/dts/rk3288.dtsi
+@@ -990,7 +990,7 @@
+ 		status = "disabled";
+ 	};
+ 
+-	crypto: cypto-controller@ff8a0000 {
++	crypto: crypto@ff8a0000 {
+ 		compatible = "rockchip,rk3288-crypto";
+ 		reg = <0x0 0xff8a0000 0x0 0x4000>;
+ 		interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi b/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi
+index 07c099b4ed5b5..1e0c9415bfcd0 100644
+--- a/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi
++++ b/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi
+@@ -476,7 +476,7 @@
+ 		};
+ 
+ 		usb0: usb@ffb00000 {
+-			compatible = "snps,dwc2";
++			compatible = "intel,socfpga-agilex-hsotg", "snps,dwc2";
+ 			reg = <0xffb00000 0x40000>;
+ 			interrupts = <0 93 4>;
+ 			phys = <&usbphy0>;
+@@ -489,7 +489,7 @@
+ 		};
+ 
+ 		usb1: usb@ffb40000 {
+-			compatible = "snps,dwc2";
++			compatible = "intel,socfpga-agilex-hsotg", "snps,dwc2";
+ 			reg = <0xffb40000 0x40000>;
+ 			interrupts = <0 94 4>;
+ 			phys = <&usbphy0>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index 4660416c8f382..544110aaffc56 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -472,6 +472,12 @@
+ };
+ 
+ &sdhci {
++	/*
++	 * Signal integrity isn't great at 200MHz but 100MHz has proven stable
++	 * enough.
++	 */
++	max-frequency = <100000000>;
++
+ 	bus-width = <8>;
+ 	mmc-hs400-1_8v;
+ 	mmc-hs400-enhanced-strobe;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index 4b6065dbba55e..52ba4d07e7712 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -1770,10 +1770,10 @@
+ 		interrupts = <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH 0>;
+ 		clocks = <&cru PCLK_HDMI_CTRL>,
+ 			 <&cru SCLK_HDMI_SFR>,
+-			 <&cru PLL_VPLL>,
++			 <&cru SCLK_HDMI_CEC>,
+ 			 <&cru PCLK_VIO_GRF>,
+-			 <&cru SCLK_HDMI_CEC>;
+-		clock-names = "iahb", "isfr", "vpll", "grf", "cec";
++			 <&cru PLL_VPLL>;
++		clock-names = "iahb", "isfr", "cec", "grf", "vpll";
+ 		power-domains = <&power RK3399_PD_HDCP>;
+ 		reg-io-width = <4>;
+ 		rockchip,grf = <&grf>;
+diff --git a/arch/arm64/kvm/hyp/smccc_wa.S b/arch/arm64/kvm/hyp/smccc_wa.S
+index 24b281912463d..533b0aa73256a 100644
+--- a/arch/arm64/kvm/hyp/smccc_wa.S
++++ b/arch/arm64/kvm/hyp/smccc_wa.S
+@@ -68,7 +68,7 @@ SYM_DATA_START(__spectre_bhb_loop_k24)
+ 	esb
+ 	sub	sp, sp, #(8 * 2)
+ 	stp	x0, x1, [sp, #(8 * 0)]
+-	mov	x0, #8
++	mov	x0, #24
+ 2:	b	. + 4
+ 	subs	x0, x0, #1
+ 	b.ne	2b
+@@ -85,7 +85,7 @@ SYM_DATA_START(__spectre_bhb_loop_k32)
+ 	esb
+ 	sub	sp, sp, #(8 * 2)
+ 	stp	x0, x1, [sp, #(8 * 0)]
+-	mov	x0, #8
++	mov	x0, #32
+ 2:	b	. + 4
+ 	subs	x0, x0, #1
+ 	b.ne	2b
+diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
+index ff25926c5458c..14db66dbcdad9 100644
+--- a/arch/mips/kernel/smp.c
++++ b/arch/mips/kernel/smp.c
+@@ -351,6 +351,9 @@ asmlinkage void start_secondary(void)
+ 	cpu = smp_processor_id();
+ 	cpu_data[cpu].udelay_val = loops_per_jiffy;
+ 
++	set_cpu_sibling_map(cpu);
++	set_cpu_core_map(cpu);
++
+ 	cpumask_set_cpu(cpu, &cpu_coherent_mask);
+ 	notify_cpu_starting(cpu);
+ 
+@@ -362,9 +365,6 @@ asmlinkage void start_secondary(void)
+ 	/* The CPU is running and counters synchronised, now mark it online */
+ 	set_cpu_online(cpu, true);
+ 
+-	set_cpu_sibling_map(cpu);
+-	set_cpu_core_map(cpu);
+-
+ 	calculate_cpu_foreign_map();
+ 
+ 	/*
+diff --git a/drivers/atm/firestream.c b/drivers/atm/firestream.c
+index 0ddd611b42776..43a34aee33b82 100644
+--- a/drivers/atm/firestream.c
++++ b/drivers/atm/firestream.c
+@@ -1675,6 +1675,8 @@ static int fs_init(struct fs_dev *dev)
+ 	dev->hw_base = pci_resource_start(pci_dev, 0);
+ 
+ 	dev->base = ioremap(dev->hw_base, 0x1000);
++	if (!dev->base)
++		return 1;
+ 
+ 	reset_chip (dev);
+   
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index 717c4e7271b04..5163433ac561b 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -2155,6 +2155,9 @@ EXPORT_SYMBOL(drm_connector_attach_max_bpc_property);
+ void drm_connector_set_vrr_capable_property(
+ 		struct drm_connector *connector, bool capable)
+ {
++	if (!connector->vrr_capable_property)
++		return;
++
+ 	drm_object_property_set_value(&connector->base,
+ 				      connector->vrr_capable_property,
+ 				      capable);
+diff --git a/drivers/net/can/rcar/rcar_canfd.c b/drivers/net/can/rcar/rcar_canfd.c
+index de59dd6aad299..67f0f14e2bf4e 100644
+--- a/drivers/net/can/rcar/rcar_canfd.c
++++ b/drivers/net/can/rcar/rcar_canfd.c
+@@ -1598,15 +1598,15 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
+ 
+ 	netif_napi_add(ndev, &priv->napi, rcar_canfd_rx_poll,
+ 		       RCANFD_NAPI_WEIGHT);
++	spin_lock_init(&priv->tx_lock);
++	devm_can_led_init(ndev);
++	gpriv->ch[priv->channel] = priv;
+ 	err = register_candev(ndev);
+ 	if (err) {
+ 		dev_err(&pdev->dev,
+ 			"register_candev() failed, error %d\n", err);
+ 		goto fail_candev;
+ 	}
+-	spin_lock_init(&priv->tx_lock);
+-	devm_can_led_init(ndev);
+-	gpriv->ch[priv->channel] = priv;
+ 	dev_info(&pdev->dev, "device registered (channel %u)\n", priv->channel);
+ 	return 0;
+ 
+diff --git a/drivers/net/ethernet/sfc/mcdi.c b/drivers/net/ethernet/sfc/mcdi.c
+index be6bfd6b7ec75..50baf62b2cbc6 100644
+--- a/drivers/net/ethernet/sfc/mcdi.c
++++ b/drivers/net/ethernet/sfc/mcdi.c
+@@ -163,9 +163,9 @@ static void efx_mcdi_send_request(struct efx_nic *efx, unsigned cmd,
+ 	/* Serialise with efx_mcdi_ev_cpl() and efx_mcdi_ev_death() */
+ 	spin_lock_bh(&mcdi->iface_lock);
+ 	++mcdi->seqno;
++	seqno = mcdi->seqno & SEQ_MASK;
+ 	spin_unlock_bh(&mcdi->iface_lock);
+ 
+-	seqno = mcdi->seqno & SEQ_MASK;
+ 	xflags = 0;
+ 	if (mcdi->mode == MCDI_MODE_EVENTS)
+ 		xflags |= MCDI_HEADER_XFLAGS_EVREQ;
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+index cbde21e772b17..b862cfbcd6e79 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -587,8 +587,7 @@ static struct ieee80211_sband_iftype_data iwl_he_capa[] = {
+ 			.has_he = true,
+ 			.he_cap_elem = {
+ 				.mac_cap_info[0] =
+-					IEEE80211_HE_MAC_CAP0_HTC_HE |
+-					IEEE80211_HE_MAC_CAP0_TWT_REQ,
++					IEEE80211_HE_MAC_CAP0_HTC_HE,
+ 				.mac_cap_info[1] =
+ 					IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US |
+ 					IEEE80211_HE_MAC_CAP1_MULTI_TID_AGG_RX_QOS_8,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 922a7ea0cd24e..d2c6fdb702732 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -350,7 +350,6 @@ static const u8 he_if_types_ext_capa_sta[] = {
+ 	 [0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
+ 	 [2] = WLAN_EXT_CAPA3_MULTI_BSSID_SUPPORT,
+ 	 [7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
+-	 [9] = WLAN_EXT_CAPA10_TWT_REQUESTER_SUPPORT,
+ };
+ 
+ static const struct wiphy_iftype_ext_capab he_iftypes_ext_capa[] = {
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 019cbde8c3d67..fd188b9721511 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1009,6 +1009,18 @@ static inline bool __io_match_files(struct io_kiocb *req,
+ 		req->work.identity->files == files;
+ }
+ 
++static void io_refs_resurrect(struct percpu_ref *ref, struct completion *compl)
++{
++	bool got = percpu_ref_tryget(ref);
++
++	/* already at zero, wait for ->release() */
++	if (!got)
++		wait_for_completion(compl);
++	percpu_ref_resurrect(ref);
++	if (got)
++		percpu_ref_put(ref);
++}
++
+ static bool io_match_task(struct io_kiocb *head,
+ 			  struct task_struct *task,
+ 			  struct files_struct *files)
+@@ -9757,12 +9769,11 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
+ 			if (ret < 0)
+ 				break;
+ 		} while (1);
+-
+ 		mutex_lock(&ctx->uring_lock);
+ 
+ 		if (ret) {
+-			percpu_ref_resurrect(&ctx->refs);
+-			goto out_quiesce;
++			io_refs_resurrect(&ctx->refs, &ctx->ref_comp);
++			return ret;
+ 		}
+ 	}
+ 
+@@ -9855,7 +9866,6 @@ out:
+ 	if (io_register_op_must_quiesce(opcode)) {
+ 		/* bring the ctx back to life */
+ 		percpu_ref_reinit(&ctx->refs);
+-out_quiesce:
+ 		reinit_completion(&ctx->ref_comp);
+ 	}
+ 	return ret;
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 4a2843441caf1..0049a74596490 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1668,14 +1668,15 @@ int km_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
+ 	       const struct xfrm_migrate *m, int num_bundles,
+ 	       const struct xfrm_kmaddress *k,
+ 	       const struct xfrm_encap_tmpl *encap);
+-struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net);
++struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net,
++						u32 if_id);
+ struct xfrm_state *xfrm_state_migrate(struct xfrm_state *x,
+ 				      struct xfrm_migrate *m,
+ 				      struct xfrm_encap_tmpl *encap);
+ int xfrm_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
+ 		 struct xfrm_migrate *m, int num_bundles,
+ 		 struct xfrm_kmaddress *k, struct net *net,
+-		 struct xfrm_encap_tmpl *encap);
++		 struct xfrm_encap_tmpl *encap, u32 if_id);
+ #endif
+ 
+ int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport);
+diff --git a/lib/Kconfig b/lib/Kconfig
+index b46a9fd122c81..9216e24e51646 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -45,7 +45,6 @@ config BITREVERSE
+ config HAVE_ARCH_BITREVERSE
+ 	bool
+ 	default n
+-	depends on BITREVERSE
+ 	help
+ 	  This option enables the use of hardware bit-reversal instructions on
+ 	  architectures which support such operations.
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 63c81af41b43e..a3ec2a08027b8 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1652,11 +1652,13 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
+ 				if (!copied)
+ 					copied = used;
+ 				break;
+-			} else if (used <= len) {
+-				seq += used;
+-				copied += used;
+-				offset += used;
+ 			}
++			if (WARN_ON_ONCE(used > len))
++				used = len;
++			seq += used;
++			copied += used;
++			offset += used;
++
+ 			/* If recv_actor drops the lock (e.g. TCP splice
+ 			 * receive) the skb pointer might be invalid when
+ 			 * getting here: tcp_collapse might have deleted it
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index ef9b4ac03e7b7..d1364b858fdf0 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2627,7 +2627,7 @@ static int pfkey_migrate(struct sock *sk, struct sk_buff *skb,
+ 	}
+ 
+ 	return xfrm_migrate(&sel, dir, XFRM_POLICY_TYPE_MAIN, m, i,
+-			    kma ? &k : NULL, net, NULL);
++			    kma ? &k : NULL, net, NULL, 0);
+ 
+  out:
+ 	return err;
+diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
+index 190f300d8923c..4b4ab1961068f 100644
+--- a/net/mac80211/agg-tx.c
++++ b/net/mac80211/agg-tx.c
+@@ -9,7 +9,7 @@
+  * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+  * Copyright 2007-2010, Intel Corporation
+  * Copyright(c) 2015-2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2021 Intel Corporation
++ * Copyright (C) 2018 - 2022 Intel Corporation
+  */
+ 
+ #include <linux/ieee80211.h>
+@@ -626,6 +626,14 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid,
+ 		return -EINVAL;
+ 	}
+ 
++	if (test_sta_flag(sta, WLAN_STA_MFP) &&
++	    !test_sta_flag(sta, WLAN_STA_AUTHORIZED)) {
++		ht_dbg(sdata,
++		       "MFP STA not authorized - deny BA session request %pM tid %d\n",
++		       sta->sta.addr, tid);
++		return -EINVAL;
++	}
++
+ 	/*
+ 	 * 802.11n-2009 11.5.1.1: If the initiating STA is an HT STA, is a
+ 	 * member of an IBSS, and has no other existing Block Ack agreement
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index 096e6be1d8fc8..ee0b2b03657ca 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -149,6 +149,12 @@ static enum sctp_disposition __sctp_sf_do_9_1_abort(
+ 					void *arg,
+ 					struct sctp_cmd_seq *commands);
+ 
++static enum sctp_disposition
++__sctp_sf_do_9_2_reshutack(struct net *net, const struct sctp_endpoint *ep,
++			   const struct sctp_association *asoc,
++			   const union sctp_subtype type, void *arg,
++			   struct sctp_cmd_seq *commands);
++
+ /* Small helper function that checks if the chunk length
+  * is of the appropriate length.  The 'required_length' argument
+  * is set to be the size of a specific chunk we are testing.
+@@ -330,6 +336,14 @@ enum sctp_disposition sctp_sf_do_5_1B_init(struct net *net,
+ 	if (!chunk->singleton)
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
++	/* Make sure that the INIT chunk has a valid length.
++	 * Normally, this would cause an ABORT with a Protocol Violation
++	 * error, but since we don't have an association, we'll
++	 * just discard the packet.
++	 */
++	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
+ 	/* If the packet is an OOTB packet which is temporarily on the
+ 	 * control endpoint, respond with an ABORT.
+ 	 */
+@@ -344,14 +358,6 @@ enum sctp_disposition sctp_sf_do_5_1B_init(struct net *net,
+ 	if (chunk->sctp_hdr->vtag != 0)
+ 		return sctp_sf_tabort_8_4_8(net, ep, asoc, type, arg, commands);
+ 
+-	/* Make sure that the INIT chunk has a valid length.
+-	 * Normally, this would cause an ABORT with a Protocol Violation
+-	 * error, but since we don't have an association, we'll
+-	 * just discard the packet.
+-	 */
+-	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
+-		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+-
+ 	/* If the INIT is coming toward a closing socket, we'll send back
+ 	 * and ABORT.  Essentially, this catches the race of INIT being
+ 	 * backloged to the socket at the same time as the user isses close().
+@@ -1484,19 +1490,16 @@ static enum sctp_disposition sctp_sf_do_unexpected_init(
+ 	if (!chunk->singleton)
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
++	/* Make sure that the INIT chunk has a valid length. */
++	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
+ 	/* 3.1 A packet containing an INIT chunk MUST have a zero Verification
+ 	 * Tag.
+ 	 */
+ 	if (chunk->sctp_hdr->vtag != 0)
+ 		return sctp_sf_tabort_8_4_8(net, ep, asoc, type, arg, commands);
+ 
+-	/* Make sure that the INIT chunk has a valid length.
+-	 * In this case, we generate a protocol violation since we have
+-	 * an association established.
+-	 */
+-	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
+-		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+-						  commands);
+ 	/* Grab the INIT header.  */
+ 	chunk->subh.init_hdr = (struct sctp_inithdr *)chunk->skb->data;
+ 
+@@ -1814,9 +1817,9 @@ static enum sctp_disposition sctp_sf_do_dupcook_a(
+ 	 * its peer.
+ 	*/
+ 	if (sctp_state(asoc, SHUTDOWN_ACK_SENT)) {
+-		disposition = sctp_sf_do_9_2_reshutack(net, ep, asoc,
+-				SCTP_ST_CHUNK(chunk->chunk_hdr->type),
+-				chunk, commands);
++		disposition = __sctp_sf_do_9_2_reshutack(net, ep, asoc,
++							 SCTP_ST_CHUNK(chunk->chunk_hdr->type),
++							 chunk, commands);
+ 		if (SCTP_DISPOSITION_NOMEM == disposition)
+ 			goto nomem;
+ 
+@@ -2915,13 +2918,11 @@ enum sctp_disposition sctp_sf_do_9_2_shut_ctsn(
+  * that belong to this association, it should discard the INIT chunk and
+  * retransmit the SHUTDOWN ACK chunk.
+  */
+-enum sctp_disposition sctp_sf_do_9_2_reshutack(
+-					struct net *net,
+-					const struct sctp_endpoint *ep,
+-					const struct sctp_association *asoc,
+-					const union sctp_subtype type,
+-					void *arg,
+-					struct sctp_cmd_seq *commands)
++static enum sctp_disposition
++__sctp_sf_do_9_2_reshutack(struct net *net, const struct sctp_endpoint *ep,
++			   const struct sctp_association *asoc,
++			   const union sctp_subtype type, void *arg,
++			   struct sctp_cmd_seq *commands)
+ {
+ 	struct sctp_chunk *chunk = arg;
+ 	struct sctp_chunk *reply;
+@@ -2955,6 +2956,26 @@ nomem:
+ 	return SCTP_DISPOSITION_NOMEM;
+ }
+ 
++enum sctp_disposition
++sctp_sf_do_9_2_reshutack(struct net *net, const struct sctp_endpoint *ep,
++			 const struct sctp_association *asoc,
++			 const union sctp_subtype type, void *arg,
++			 struct sctp_cmd_seq *commands)
++{
++	struct sctp_chunk *chunk = arg;
++
++	if (!chunk->singleton)
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
++	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
++	if (chunk->sctp_hdr->vtag != 0)
++		return sctp_sf_tabort_8_4_8(net, ep, asoc, type, arg, commands);
++
++	return __sctp_sf_do_9_2_reshutack(net, ep, asoc, type, arg, commands);
++}
++
+ /*
+  * sctp_sf_do_ecn_cwr
+  *
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 07bd7b00b56d4..0df8b9a19952c 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -17127,7 +17127,8 @@ void cfg80211_ch_switch_notify(struct net_device *dev,
+ 	wdev->chandef = *chandef;
+ 	wdev->preset_chandef = *chandef;
+ 
+-	if (wdev->iftype == NL80211_IFTYPE_STATION &&
++	if ((wdev->iftype == NL80211_IFTYPE_STATION ||
++	     wdev->iftype == NL80211_IFTYPE_P2P_CLIENT) &&
+ 	    !WARN_ON(!wdev->current_bss))
+ 		cfg80211_update_assoc_bss_entry(wdev, chandef->chan);
+ 
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index c4a195cb36817..3d0ffd9270041 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -4287,7 +4287,7 @@ static bool xfrm_migrate_selector_match(const struct xfrm_selector *sel_cmp,
+ }
+ 
+ static struct xfrm_policy *xfrm_migrate_policy_find(const struct xfrm_selector *sel,
+-						    u8 dir, u8 type, struct net *net)
++						    u8 dir, u8 type, struct net *net, u32 if_id)
+ {
+ 	struct xfrm_policy *pol, *ret = NULL;
+ 	struct hlist_head *chain;
+@@ -4296,7 +4296,8 @@ static struct xfrm_policy *xfrm_migrate_policy_find(const struct xfrm_selector *
+ 	spin_lock_bh(&net->xfrm.xfrm_policy_lock);
+ 	chain = policy_hash_direct(net, &sel->daddr, &sel->saddr, sel->family, dir);
+ 	hlist_for_each_entry(pol, chain, bydst) {
+-		if (xfrm_migrate_selector_match(sel, &pol->selector) &&
++		if ((if_id == 0 || pol->if_id == if_id) &&
++		    xfrm_migrate_selector_match(sel, &pol->selector) &&
+ 		    pol->type == type) {
+ 			ret = pol;
+ 			priority = ret->priority;
+@@ -4308,7 +4309,8 @@ static struct xfrm_policy *xfrm_migrate_policy_find(const struct xfrm_selector *
+ 		if ((pol->priority >= priority) && ret)
+ 			break;
+ 
+-		if (xfrm_migrate_selector_match(sel, &pol->selector) &&
++		if ((if_id == 0 || pol->if_id == if_id) &&
++		    xfrm_migrate_selector_match(sel, &pol->selector) &&
+ 		    pol->type == type) {
+ 			ret = pol;
+ 			break;
+@@ -4424,7 +4426,7 @@ static int xfrm_migrate_check(const struct xfrm_migrate *m, int num_migrate)
+ int xfrm_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
+ 		 struct xfrm_migrate *m, int num_migrate,
+ 		 struct xfrm_kmaddress *k, struct net *net,
+-		 struct xfrm_encap_tmpl *encap)
++		 struct xfrm_encap_tmpl *encap, u32 if_id)
+ {
+ 	int i, err, nx_cur = 0, nx_new = 0;
+ 	struct xfrm_policy *pol = NULL;
+@@ -4443,14 +4445,14 @@ int xfrm_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
+ 	}
+ 
+ 	/* Stage 1 - find policy */
+-	if ((pol = xfrm_migrate_policy_find(sel, dir, type, net)) == NULL) {
++	if ((pol = xfrm_migrate_policy_find(sel, dir, type, net, if_id)) == NULL) {
+ 		err = -ENOENT;
+ 		goto out;
+ 	}
+ 
+ 	/* Stage 2 - find and update state(s) */
+ 	for (i = 0, mp = m; i < num_migrate; i++, mp++) {
+-		if ((x = xfrm_migrate_state_find(mp, net))) {
++		if ((x = xfrm_migrate_state_find(mp, net, if_id))) {
+ 			x_cur[nx_cur] = x;
+ 			nx_cur++;
+ 			xc = xfrm_state_migrate(x, mp, encap);
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index f5b846a2edcd7..1befc6db723b0 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -1542,9 +1542,6 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
+ 	memcpy(&x->mark, &orig->mark, sizeof(x->mark));
+ 	memcpy(&x->props.smark, &orig->props.smark, sizeof(x->props.smark));
+ 
+-	if (xfrm_init_state(x) < 0)
+-		goto error;
+-
+ 	x->props.flags = orig->props.flags;
+ 	x->props.extra_flags = orig->props.extra_flags;
+ 
+@@ -1569,7 +1566,8 @@ out:
+ 	return NULL;
+ }
+ 
+-struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net)
++struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net,
++						u32 if_id)
+ {
+ 	unsigned int h;
+ 	struct xfrm_state *x = NULL;
+@@ -1585,6 +1583,8 @@ struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *n
+ 				continue;
+ 			if (m->reqid && x->props.reqid != m->reqid)
+ 				continue;
++			if (if_id != 0 && x->if_id != if_id)
++				continue;
+ 			if (!xfrm_addr_equal(&x->id.daddr, &m->old_daddr,
+ 					     m->old_family) ||
+ 			    !xfrm_addr_equal(&x->props.saddr, &m->old_saddr,
+@@ -1600,6 +1600,8 @@ struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *n
+ 			if (x->props.mode != m->mode ||
+ 			    x->id.proto != m->proto)
+ 				continue;
++			if (if_id != 0 && x->if_id != if_id)
++				continue;
+ 			if (!xfrm_addr_equal(&x->id.daddr, &m->old_daddr,
+ 					     m->old_family) ||
+ 			    !xfrm_addr_equal(&x->props.saddr, &m->old_saddr,
+@@ -1626,6 +1628,11 @@ struct xfrm_state *xfrm_state_migrate(struct xfrm_state *x,
+ 	if (!xc)
+ 		return NULL;
+ 
++	xc->props.family = m->new_family;
++
++	if (xfrm_init_state(xc) < 0)
++		goto error;
++
+ 	memcpy(&xc->id.daddr, &m->new_daddr, sizeof(xc->id.daddr));
+ 	memcpy(&xc->props.saddr, &m->new_saddr, sizeof(xc->props.saddr));
+ 
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index d0fdfbf4c5f72..1ece01cd67a42 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -629,13 +629,8 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
+ 
+ 	xfrm_smark_init(attrs, &x->props.smark);
+ 
+-	if (attrs[XFRMA_IF_ID]) {
++	if (attrs[XFRMA_IF_ID])
+ 		x->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
+-		if (!x->if_id) {
+-			err = -EINVAL;
+-			goto error;
+-		}
+-	}
+ 
+ 	err = __xfrm_init_state(x, false, attrs[XFRMA_OFFLOAD_DEV]);
+ 	if (err)
+@@ -1371,13 +1366,8 @@ static int xfrm_alloc_userspi(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 	mark = xfrm_mark_get(attrs, &m);
+ 
+-	if (attrs[XFRMA_IF_ID]) {
++	if (attrs[XFRMA_IF_ID])
+ 		if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
+-		if (!if_id) {
+-			err = -EINVAL;
+-			goto out_noput;
+-		}
+-	}
+ 
+ 	if (p->info.seq) {
+ 		x = xfrm_find_acq_byseq(net, mark, p->info.seq);
+@@ -1690,13 +1680,8 @@ static struct xfrm_policy *xfrm_policy_construct(struct net *net, struct xfrm_us
+ 
+ 	xfrm_mark_get(attrs, &xp->mark);
+ 
+-	if (attrs[XFRMA_IF_ID]) {
++	if (attrs[XFRMA_IF_ID])
+ 		xp->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
+-		if (!xp->if_id) {
+-			err = -EINVAL;
+-			goto error;
+-		}
+-	}
+ 
+ 	return xp;
+  error:
+@@ -2451,6 +2436,7 @@ static int xfrm_do_migrate(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	int n = 0;
+ 	struct net *net = sock_net(skb->sk);
+ 	struct xfrm_encap_tmpl  *encap = NULL;
++	u32 if_id = 0;
+ 
+ 	if (attrs[XFRMA_MIGRATE] == NULL)
+ 		return -EINVAL;
+@@ -2475,7 +2461,10 @@ static int xfrm_do_migrate(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 			return 0;
+ 	}
+ 
+-	err = xfrm_migrate(&pi->sel, pi->dir, type, m, n, kmp, net, encap);
++	if (attrs[XFRMA_IF_ID])
++		if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
++
++	err = xfrm_migrate(&pi->sel, pi->dir, type, m, n, kmp, net, encap, if_id);
+ 
+ 	kfree(encap);
+ 
+diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
+index d418ca5f90399..034245ea397f6 100644
+--- a/tools/testing/selftests/vm/userfaultfd.c
++++ b/tools/testing/selftests/vm/userfaultfd.c
+@@ -46,6 +46,7 @@
+ #include <signal.h>
+ #include <poll.h>
+ #include <string.h>
++#include <linux/mman.h>
+ #include <sys/mman.h>
+ #include <sys/syscall.h>
+ #include <sys/ioctl.h>


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-03-23 11:55 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-03-23 11:55 UTC (permalink / raw
  To: gentoo-commits

commit:     bf597de43119fd724a6de288d471b24546ab1dd1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 23 11:55:45 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 23 11:55:45 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bf597de4

Linux patch 5.10.108

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1107_linux-5.10.108.patch | 1112 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1116 insertions(+)

diff --git a/0000_README b/0000_README
index 41173da0..bf9e37df 100644
--- a/0000_README
+++ b/0000_README
@@ -471,6 +471,10 @@ Patch:  1106_linux-5.10.107.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.107
 
+Patch:  1107_linux-5.10.108.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.108
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1107_linux-5.10.108.patch b/1107_linux-5.10.108.patch
new file mode 100644
index 00000000..5645f1e5
--- /dev/null
+++ b/1107_linux-5.10.108.patch
@@ -0,0 +1,1112 @@
+diff --git a/Makefile b/Makefile
+index c0be463910578..08b3066fe6e53 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 107
++SUBLEVEL = 108
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h
+index f64613a96d530..bc9a2145f4194 100644
+--- a/arch/arm64/include/asm/vectors.h
++++ b/arch/arm64/include/asm/vectors.h
+@@ -56,14 +56,14 @@ enum arm64_bp_harden_el1_vectors {
+ DECLARE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector);
+ 
+ #ifndef CONFIG_UNMAP_KERNEL_AT_EL0
+-#define TRAMP_VALIAS	0
++#define TRAMP_VALIAS	0ul
+ #endif
+ 
+ static inline const char *
+ arm64_get_bp_hardening_vector(enum arm64_bp_harden_el1_vectors slot)
+ {
+ 	if (arm64_kernel_unmapped_at_el0())
+-		return (char *)TRAMP_VALIAS + SZ_2K * slot;
++		return (char *)(TRAMP_VALIAS + SZ_2K * slot);
+ 
+ 	WARN_ON_ONCE(slot == EL1_VECTOR_KPTI);
+ 
+diff --git a/drivers/atm/eni.c b/drivers/atm/eni.c
+index b574cce98dc36..9fcc49be499f1 100644
+--- a/drivers/atm/eni.c
++++ b/drivers/atm/eni.c
+@@ -1112,6 +1112,8 @@ DPRINTK("iovcnt = %d\n",skb_shinfo(skb)->nr_frags);
+ 	skb_data3 = skb->data[3];
+ 	paddr = dma_map_single(&eni_dev->pci_dev->dev,skb->data,skb->len,
+ 			       DMA_TO_DEVICE);
++	if (dma_mapping_error(&eni_dev->pci_dev->dev, paddr))
++		return enq_next;
+ 	ENI_PRV_PADDR(skb) = paddr;
+ 	/* prepare DMA queue entries */
+ 	j = 0;
+diff --git a/drivers/crypto/qcom-rng.c b/drivers/crypto/qcom-rng.c
+index 99ba8d51d1020..11f30fd48c141 100644
+--- a/drivers/crypto/qcom-rng.c
++++ b/drivers/crypto/qcom-rng.c
+@@ -8,6 +8,7 @@
+ #include <linux/clk.h>
+ #include <linux/crypto.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+@@ -43,16 +44,19 @@ static int qcom_rng_read(struct qcom_rng *rng, u8 *data, unsigned int max)
+ {
+ 	unsigned int currsize = 0;
+ 	u32 val;
++	int ret;
+ 
+ 	/* read random data from hardware */
+ 	do {
+-		val = readl_relaxed(rng->base + PRNG_STATUS);
+-		if (!(val & PRNG_STATUS_DATA_AVAIL))
+-			break;
++		ret = readl_poll_timeout(rng->base + PRNG_STATUS, val,
++					 val & PRNG_STATUS_DATA_AVAIL,
++					 200, 10000);
++		if (ret)
++			return ret;
+ 
+ 		val = readl_relaxed(rng->base + PRNG_DATA_OUT);
+ 		if (!val)
+-			break;
++			return -EINVAL;
+ 
+ 		if ((max - currsize) >= WORD_SZ) {
+ 			memcpy(data, &val, WORD_SZ);
+@@ -61,11 +65,10 @@ static int qcom_rng_read(struct qcom_rng *rng, u8 *data, unsigned int max)
+ 		} else {
+ 			/* copy only remaining bytes */
+ 			memcpy(data, &val, max - currsize);
+-			break;
+ 		}
+ 	} while (currsize < max);
+ 
+-	return currsize;
++	return 0;
+ }
+ 
+ static int qcom_rng_generate(struct crypto_rng *tfm,
+@@ -87,7 +90,7 @@ static int qcom_rng_generate(struct crypto_rng *tfm,
+ 	mutex_unlock(&rng->lock);
+ 	clk_disable_unprepare(rng->clk);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int qcom_rng_seed(struct crypto_rng *tfm, const u8 *seed,
+diff --git a/drivers/firmware/efi/apple-properties.c b/drivers/firmware/efi/apple-properties.c
+index e1926483ae2fd..e51838d749e2e 100644
+--- a/drivers/firmware/efi/apple-properties.c
++++ b/drivers/firmware/efi/apple-properties.c
+@@ -24,7 +24,7 @@ static bool dump_properties __initdata;
+ static int __init dump_properties_enable(char *arg)
+ {
+ 	dump_properties = true;
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("dump_apple_properties", dump_properties_enable);
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 9fa86288b78a9..e3df82d5d37a8 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -209,7 +209,7 @@ static int __init efivar_ssdt_setup(char *str)
+ 		memcpy(efivar_ssdt, str, strlen(str));
+ 	else
+ 		pr_warn("efivar_ssdt: name too long: %s\n", str);
+-	return 0;
++	return 1;
+ }
+ __setup("efivar_ssdt=", efivar_ssdt_setup);
+ 
+diff --git a/drivers/gpu/drm/imx/parallel-display.c b/drivers/gpu/drm/imx/parallel-display.c
+index 2eb8df4697dfa..605ac8825a591 100644
+--- a/drivers/gpu/drm/imx/parallel-display.c
++++ b/drivers/gpu/drm/imx/parallel-display.c
+@@ -212,14 +212,6 @@ static int imx_pd_bridge_atomic_check(struct drm_bridge *bridge,
+ 	if (!imx_pd_format_supported(bus_fmt))
+ 		return -EINVAL;
+ 
+-	if (bus_flags &
+-	    ~(DRM_BUS_FLAG_DE_LOW | DRM_BUS_FLAG_DE_HIGH |
+-	      DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE |
+-	      DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)) {
+-		dev_warn(imxpd->dev, "invalid bus_flags (%x)\n", bus_flags);
+-		return -EINVAL;
+-	}
+-
+ 	bridge_state->output_bus_cfg.flags = bus_flags;
+ 	bridge_state->input_bus_cfg.flags = bus_flags;
+ 	imx_crtc_state->bus_flags = bus_flags;
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 7ffd2a04ab23a..959dcbd8a29c1 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2132,7 +2132,7 @@ static const struct display_timing innolux_g070y2_l01_timing = {
+ static const struct panel_desc innolux_g070y2_l01 = {
+ 	.timings = &innolux_g070y2_l01_timing,
+ 	.num_timings = 1,
+-	.bpc = 6,
++	.bpc = 8,
+ 	.size = {
+ 		.width = 152,
+ 		.height = 91,
+diff --git a/drivers/input/tablet/aiptek.c b/drivers/input/tablet/aiptek.c
+index e08b0ef078e81..8afeefcea67bb 100644
+--- a/drivers/input/tablet/aiptek.c
++++ b/drivers/input/tablet/aiptek.c
+@@ -1801,15 +1801,13 @@ aiptek_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 	input_set_abs_params(inputdev, ABS_TILT_Y, AIPTEK_TILT_MIN, AIPTEK_TILT_MAX, 0, 0);
+ 	input_set_abs_params(inputdev, ABS_WHEEL, AIPTEK_WHEEL_MIN, AIPTEK_WHEEL_MAX - 1, 0, 0);
+ 
+-	/* Verify that a device really has an endpoint */
+-	if (intf->cur_altsetting->desc.bNumEndpoints < 1) {
++	err = usb_find_common_endpoints(intf->cur_altsetting,
++					NULL, NULL, &endpoint, NULL);
++	if (err) {
+ 		dev_err(&intf->dev,
+-			"interface has %d endpoints, but must have minimum 1\n",
+-			intf->cur_altsetting->desc.bNumEndpoints);
+-		err = -EINVAL;
++			"interface has no int in endpoints, but must have minimum 1\n");
+ 		goto fail3;
+ 	}
+-	endpoint = &intf->cur_altsetting->endpoint[0].desc;
+ 
+ 	/* Go set up our URB, which is called when the tablet receives
+ 	 * input.
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+index bb3ba614fb174..2a61229d3f976 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+@@ -2534,6 +2534,4 @@ void bnx2x_register_phc(struct bnx2x *bp);
+  * Meant for implicit re-load flows.
+  */
+ int bnx2x_vlan_reconfigure_vid(struct bnx2x *bp);
+-int bnx2x_init_firmware(struct bnx2x *bp);
+-void bnx2x_release_firmware(struct bnx2x *bp);
+ #endif /* bnx2x.h */
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index 41ebbb2c7d3ac..198e041d84109 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -2363,24 +2363,30 @@ int bnx2x_compare_fw_ver(struct bnx2x *bp, u32 load_code, bool print_err)
+ 	/* is another pf loaded on this engine? */
+ 	if (load_code != FW_MSG_CODE_DRV_LOAD_COMMON_CHIP &&
+ 	    load_code != FW_MSG_CODE_DRV_LOAD_COMMON) {
+-		/* build my FW version dword */
+-		u32 my_fw = (bp->fw_major) + (bp->fw_minor << 8) +
+-				(bp->fw_rev << 16) + (bp->fw_eng << 24);
++		u8 loaded_fw_major, loaded_fw_minor, loaded_fw_rev, loaded_fw_eng;
++		u32 loaded_fw;
+ 
+ 		/* read loaded FW from chip */
+-		u32 loaded_fw = REG_RD(bp, XSEM_REG_PRAM);
++		loaded_fw = REG_RD(bp, XSEM_REG_PRAM);
+ 
+-		DP(BNX2X_MSG_SP, "loaded fw %x, my fw %x\n",
+-		   loaded_fw, my_fw);
++		loaded_fw_major = loaded_fw & 0xff;
++		loaded_fw_minor = (loaded_fw >> 8) & 0xff;
++		loaded_fw_rev = (loaded_fw >> 16) & 0xff;
++		loaded_fw_eng = (loaded_fw >> 24) & 0xff;
++
++		DP(BNX2X_MSG_SP, "loaded fw 0x%x major 0x%x minor 0x%x rev 0x%x eng 0x%x\n",
++		   loaded_fw, loaded_fw_major, loaded_fw_minor, loaded_fw_rev, loaded_fw_eng);
+ 
+ 		/* abort nic load if version mismatch */
+-		if (my_fw != loaded_fw) {
++		if (loaded_fw_major != BCM_5710_FW_MAJOR_VERSION ||
++		    loaded_fw_minor != BCM_5710_FW_MINOR_VERSION ||
++		    loaded_fw_eng != BCM_5710_FW_ENGINEERING_VERSION ||
++		    loaded_fw_rev < BCM_5710_FW_REVISION_VERSION_V15) {
+ 			if (print_err)
+-				BNX2X_ERR("bnx2x with FW %x was already loaded which mismatches my %x FW. Aborting\n",
+-					  loaded_fw, my_fw);
++				BNX2X_ERR("loaded FW incompatible. Aborting\n");
+ 			else
+-				BNX2X_DEV_INFO("bnx2x with FW %x was already loaded which mismatches my %x FW, possibly due to MF UNDI\n",
+-					       loaded_fw, my_fw);
++				BNX2X_DEV_INFO("loaded FW incompatible, possibly due to MF UNDI\n");
++
+ 			return -EBUSY;
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+index 7fa271db41b07..6333471916be1 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+@@ -12366,15 +12366,6 @@ static int bnx2x_init_bp(struct bnx2x *bp)
+ 
+ 	bnx2x_read_fwinfo(bp);
+ 
+-	if (IS_PF(bp)) {
+-		rc = bnx2x_init_firmware(bp);
+-
+-		if (rc) {
+-			bnx2x_free_mem_bp(bp);
+-			return rc;
+-		}
+-	}
+-
+ 	func = BP_FUNC(bp);
+ 
+ 	/* need to reset chip if undi was active */
+@@ -12387,7 +12378,6 @@ static int bnx2x_init_bp(struct bnx2x *bp)
+ 
+ 		rc = bnx2x_prev_unload(bp);
+ 		if (rc) {
+-			bnx2x_release_firmware(bp);
+ 			bnx2x_free_mem_bp(bp);
+ 			return rc;
+ 		}
+@@ -13469,7 +13459,7 @@ do {									\
+ 	     (u8 *)bp->arr, len);					\
+ } while (0)
+ 
+-int bnx2x_init_firmware(struct bnx2x *bp)
++static int bnx2x_init_firmware(struct bnx2x *bp)
+ {
+ 	const char *fw_file_name, *fw_file_name_v15;
+ 	struct bnx2x_fw_file_hdr *fw_hdr;
+@@ -13569,7 +13559,7 @@ request_firmware_exit:
+ 	return rc;
+ }
+ 
+-void bnx2x_release_firmware(struct bnx2x *bp)
++static void bnx2x_release_firmware(struct bnx2x *bp)
+ {
+ 	kfree(bp->init_ops_offsets);
+ 	kfree(bp->init_ops);
+@@ -14086,7 +14076,6 @@ static int bnx2x_init_one(struct pci_dev *pdev,
+ 	return 0;
+ 
+ init_one_freemem:
+-	bnx2x_release_firmware(bp);
+ 	bnx2x_free_mem_bp(bp);
+ 
+ init_one_exit:
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index e19cf020e5ae1..a2062144d7ca1 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -2239,8 +2239,10 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
+ 		dma_length_status = status->length_status;
+ 		if (dev->features & NETIF_F_RXCSUM) {
+ 			rx_csum = (__force __be16)(status->rx_csum & 0xffff);
+-			skb->csum = (__force __wsum)ntohs(rx_csum);
+-			skb->ip_summed = CHECKSUM_COMPLETE;
++			if (rx_csum) {
++				skb->csum = (__force __wsum)ntohs(rx_csum);
++				skb->ip_summed = CHECKSUM_COMPLETE;
++			}
+ 		}
+ 
+ 		/* DMA flags and length are still valid no matter how
+diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
+index 217e8333de6c6..c4c4649b2088e 100644
+--- a/drivers/net/ethernet/mscc/ocelot_flower.c
++++ b/drivers/net/ethernet/mscc/ocelot_flower.c
+@@ -54,6 +54,12 @@ static int ocelot_chain_to_block(int chain, bool ingress)
+  */
+ static int ocelot_chain_to_lookup(int chain)
+ {
++	/* Backwards compatibility with older, single-chain tc-flower
++	 * offload support in Ocelot
++	 */
++	if (chain == 0)
++		return 0;
++
+ 	return (chain / VCAP_LOOKUP) % 10;
+ }
+ 
+@@ -62,7 +68,15 @@ static int ocelot_chain_to_lookup(int chain)
+  */
+ static int ocelot_chain_to_pag(int chain)
+ {
+-	int lookup = ocelot_chain_to_lookup(chain);
++	int lookup;
++
++	/* Backwards compatibility with older, single-chain tc-flower
++	 * offload support in Ocelot
++	 */
++	if (chain == 0)
++		return 0;
++
++	lookup = ocelot_chain_to_lookup(chain);
+ 
+ 	/* calculate PAG value as chain index relative to the first PAG */
+ 	return chain - VCAP_IS2_CHAIN(lookup, 0);
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 261e6e55a907b..e3676386d0eeb 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -1562,6 +1562,9 @@ static void netvsc_get_ethtool_stats(struct net_device *dev,
+ 	pcpu_sum = kvmalloc_array(num_possible_cpus(),
+ 				  sizeof(struct netvsc_ethtool_pcpu_stats),
+ 				  GFP_KERNEL);
++	if (!pcpu_sum)
++		return;
++
+ 	netvsc_get_pcpu_stats(dev, pcpu_sum);
+ 	for_each_present_cpu(cpu) {
+ 		struct netvsc_ethtool_pcpu_stats *this_sum = &pcpu_sum[cpu];
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index cb9d1852a75c8..54786712a9913 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -1536,8 +1536,8 @@ static int marvell_suspend(struct phy_device *phydev)
+ 	int err;
+ 
+ 	/* Suspend the fiber mode first */
+-	if (!linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+-			       phydev->supported)) {
++	if (linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
++			      phydev->supported)) {
+ 		err = marvell_set_page(phydev, MII_MARVELL_FIBER_PAGE);
+ 		if (err < 0)
+ 			goto error;
+@@ -1571,8 +1571,8 @@ static int marvell_resume(struct phy_device *phydev)
+ 	int err;
+ 
+ 	/* Resume the fiber mode first */
+-	if (!linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+-			       phydev->supported)) {
++	if (linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
++			      phydev->supported)) {
+ 		err = marvell_set_page(phydev, MII_MARVELL_FIBER_PAGE);
+ 		if (err < 0)
+ 			goto error;
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index 41a410124437d..e14fa72791b0e 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -2584,3 +2584,6 @@ MODULE_DEVICE_TABLE(mdio, vsc85xx_tbl);
+ MODULE_DESCRIPTION("Microsemi VSC85xx PHY driver");
+ MODULE_AUTHOR("Nagaraju Lakkaraju");
+ MODULE_LICENSE("Dual MIT/GPL");
++
++MODULE_FIRMWARE(MSCC_VSC8584_REVB_INT8051_FW);
++MODULE_FIRMWARE(MSCC_VSC8574_REVB_INT8051_FW);
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index 465e11dcdf129..e5b7448511467 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -84,9 +84,10 @@ static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
+ 	ret = fn(dev, USB_VENDOR_REQUEST_READ_REGISTER, USB_DIR_IN
+ 		 | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 		 0, index, &buf, 4);
+-	if (unlikely(ret < 0)) {
+-		netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
+-			    index, ret);
++	if (ret < 0) {
++		if (ret != -ENODEV)
++			netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
++				    index, ret);
+ 		return ret;
+ 	}
+ 
+@@ -116,7 +117,7 @@ static int __must_check __smsc95xx_write_reg(struct usbnet *dev, u32 index,
+ 	ret = fn(dev, USB_VENDOR_REQUEST_WRITE_REGISTER, USB_DIR_OUT
+ 		 | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 		 0, index, &buf, 4);
+-	if (unlikely(ret < 0))
++	if (ret < 0 && ret != -ENODEV)
+ 		netdev_warn(dev->net, "Failed to write reg index 0x%08x: %d\n",
+ 			    index, ret);
+ 
+@@ -159,6 +160,9 @@ static int __must_check __smsc95xx_phy_wait_not_busy(struct usbnet *dev,
+ 	do {
+ 		ret = __smsc95xx_read_reg(dev, MII_ADDR, &val, in_pm);
+ 		if (ret < 0) {
++			/* Ignore -ENODEV error during disconnect() */
++			if (ret == -ENODEV)
++				return 0;
+ 			netdev_warn(dev->net, "Error reading MII_ACCESS\n");
+ 			return ret;
+ 		}
+@@ -194,7 +198,8 @@ static int __smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx,
+ 	addr = mii_address_cmd(phy_id, idx, MII_READ_ | MII_BUSY_);
+ 	ret = __smsc95xx_write_reg(dev, MII_ADDR, addr, in_pm);
+ 	if (ret < 0) {
+-		netdev_warn(dev->net, "Error writing MII_ADDR\n");
++		if (ret != -ENODEV)
++			netdev_warn(dev->net, "Error writing MII_ADDR\n");
+ 		goto done;
+ 	}
+ 
+@@ -206,7 +211,8 @@ static int __smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx,
+ 
+ 	ret = __smsc95xx_read_reg(dev, MII_DATA, &val, in_pm);
+ 	if (ret < 0) {
+-		netdev_warn(dev->net, "Error reading MII_DATA\n");
++		if (ret != -ENODEV)
++			netdev_warn(dev->net, "Error reading MII_DATA\n");
+ 		goto done;
+ 	}
+ 
+@@ -214,6 +220,10 @@ static int __smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx,
+ 
+ done:
+ 	mutex_unlock(&dev->phy_mutex);
++
++	/* Ignore -ENODEV error during disconnect() */
++	if (ret == -ENODEV)
++		return 0;
+ 	return ret;
+ }
+ 
+@@ -235,7 +245,8 @@ static void __smsc95xx_mdio_write(struct usbnet *dev, int phy_id,
+ 	val = regval;
+ 	ret = __smsc95xx_write_reg(dev, MII_DATA, val, in_pm);
+ 	if (ret < 0) {
+-		netdev_warn(dev->net, "Error writing MII_DATA\n");
++		if (ret != -ENODEV)
++			netdev_warn(dev->net, "Error writing MII_DATA\n");
+ 		goto done;
+ 	}
+ 
+@@ -243,7 +254,8 @@ static void __smsc95xx_mdio_write(struct usbnet *dev, int phy_id,
+ 	addr = mii_address_cmd(phy_id, idx, MII_WRITE_ | MII_BUSY_);
+ 	ret = __smsc95xx_write_reg(dev, MII_ADDR, addr, in_pm);
+ 	if (ret < 0) {
+-		netdev_warn(dev->net, "Error writing MII_ADDR\n");
++		if (ret != -ENODEV)
++			netdev_warn(dev->net, "Error writing MII_ADDR\n");
+ 		goto done;
+ 	}
+ 
+@@ -1049,6 +1061,14 @@ static const struct net_device_ops smsc95xx_netdev_ops = {
+ 	.ndo_set_features	= smsc95xx_set_features,
+ };
+ 
++static void smsc95xx_handle_link_change(struct net_device *net)
++{
++	struct usbnet *dev = netdev_priv(net);
++
++	phy_print_status(net->phydev);
++	usbnet_defer_kevent(dev, EVENT_LINK_CHANGE);
++}
++
+ static int smsc95xx_bind(struct usbnet *dev, struct usb_interface *intf)
+ {
+ 	struct smsc95xx_priv *pdata;
+@@ -1153,6 +1173,17 @@ static int smsc95xx_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	dev->net->min_mtu = ETH_MIN_MTU;
+ 	dev->net->max_mtu = ETH_DATA_LEN;
+ 	dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len;
++
++	ret = phy_connect_direct(dev->net, pdata->phydev,
++				 &smsc95xx_handle_link_change,
++				 PHY_INTERFACE_MODE_MII);
++	if (ret) {
++		netdev_err(dev->net, "can't attach PHY to %s\n", pdata->mdiobus->id);
++		goto unregister_mdio;
++	}
++
++	phy_attached_info(dev->net->phydev);
++
+ 	return 0;
+ 
+ unregister_mdio:
+@@ -1170,47 +1201,25 @@ static void smsc95xx_unbind(struct usbnet *dev, struct usb_interface *intf)
+ {
+ 	struct smsc95xx_priv *pdata = dev->driver_priv;
+ 
++	phy_disconnect(dev->net->phydev);
+ 	mdiobus_unregister(pdata->mdiobus);
+ 	mdiobus_free(pdata->mdiobus);
+ 	netif_dbg(dev, ifdown, dev->net, "free pdata\n");
+ 	kfree(pdata);
+ }
+ 
+-static void smsc95xx_handle_link_change(struct net_device *net)
+-{
+-	struct usbnet *dev = netdev_priv(net);
+-
+-	phy_print_status(net->phydev);
+-	usbnet_defer_kevent(dev, EVENT_LINK_CHANGE);
+-}
+-
+ static int smsc95xx_start_phy(struct usbnet *dev)
+ {
+-	struct smsc95xx_priv *pdata = dev->driver_priv;
+-	struct net_device *net = dev->net;
+-	int ret;
+-
+-	ret = smsc95xx_reset(dev);
+-	if (ret < 0)
+-		return ret;
++	phy_start(dev->net->phydev);
+ 
+-	ret = phy_connect_direct(net, pdata->phydev,
+-				 &smsc95xx_handle_link_change,
+-				 PHY_INTERFACE_MODE_MII);
+-	if (ret) {
+-		netdev_err(net, "can't attach PHY to %s\n", pdata->mdiobus->id);
+-		return ret;
+-	}
+-
+-	phy_attached_info(net->phydev);
+-	phy_start(net->phydev);
+ 	return 0;
+ }
+ 
+-static int smsc95xx_disconnect_phy(struct usbnet *dev)
++static int smsc95xx_stop(struct usbnet *dev)
+ {
+-	phy_stop(dev->net->phydev);
+-	phy_disconnect(dev->net->phydev);
++	if (dev->net->phydev)
++		phy_stop(dev->net->phydev);
++
+ 	return 0;
+ }
+ 
+@@ -1964,8 +1973,9 @@ static const struct driver_info smsc95xx_info = {
+ 	.bind		= smsc95xx_bind,
+ 	.unbind		= smsc95xx_unbind,
+ 	.link_reset	= smsc95xx_link_reset,
+-	.reset		= smsc95xx_start_phy,
+-	.stop		= smsc95xx_disconnect_phy,
++	.reset		= smsc95xx_reset,
++	.check_connect	= smsc95xx_start_phy,
++	.stop		= smsc95xx_stop,
+ 	.rx_fixup	= smsc95xx_rx_fixup,
+ 	.tx_fixup	= smsc95xx_tx_fixup,
+ 	.status		= smsc95xx_status,
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 3fbbdf084d67a..3153f164554aa 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -1832,9 +1832,10 @@ mpt3sas_base_sync_reply_irqs(struct MPT3SAS_ADAPTER *ioc, u8 poll)
+ 				enable_irq(reply_q->os_irq);
+ 			}
+ 		}
++
++		if (poll)
++			_base_process_reply_queue(reply_q);
+ 	}
+-	if (poll)
+-		_base_process_reply_queue(reply_q);
+ }
+ 
+ /**
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 58274c5073531..49f59d53b4b26 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -1889,6 +1889,7 @@ static int usbtmc_ioctl_request(struct usbtmc_device_data *data,
+ 	struct usbtmc_ctrlrequest request;
+ 	u8 *buffer = NULL;
+ 	int rv;
++	unsigned int is_in, pipe;
+ 	unsigned long res;
+ 
+ 	res = copy_from_user(&request, arg, sizeof(struct usbtmc_ctrlrequest));
+@@ -1898,12 +1899,14 @@ static int usbtmc_ioctl_request(struct usbtmc_device_data *data,
+ 	if (request.req.wLength > USBTMC_BUFSIZE)
+ 		return -EMSGSIZE;
+ 
++	is_in = request.req.bRequestType & USB_DIR_IN;
++
+ 	if (request.req.wLength) {
+ 		buffer = kmalloc(request.req.wLength, GFP_KERNEL);
+ 		if (!buffer)
+ 			return -ENOMEM;
+ 
+-		if ((request.req.bRequestType & USB_DIR_IN) == 0) {
++		if (!is_in) {
+ 			/* Send control data to device */
+ 			res = copy_from_user(buffer, request.data,
+ 					     request.req.wLength);
+@@ -1914,8 +1917,12 @@ static int usbtmc_ioctl_request(struct usbtmc_device_data *data,
+ 		}
+ 	}
+ 
++	if (is_in)
++		pipe = usb_rcvctrlpipe(data->usb_dev, 0);
++	else
++		pipe = usb_sndctrlpipe(data->usb_dev, 0);
+ 	rv = usb_control_msg(data->usb_dev,
+-			usb_rcvctrlpipe(data->usb_dev, 0),
++			pipe,
+ 			request.req.bRequest,
+ 			request.req.bRequestType,
+ 			request.req.wValue,
+@@ -1927,7 +1934,7 @@ static int usbtmc_ioctl_request(struct usbtmc_device_data *data,
+ 		goto exit;
+ 	}
+ 
+-	if (rv && (request.req.bRequestType & USB_DIR_IN)) {
++	if (rv && is_in) {
+ 		/* Read control data from device */
+ 		res = copy_to_user(request.data, buffer, rv);
+ 		if (res)
+diff --git a/drivers/usb/gadget/function/rndis.c b/drivers/usb/gadget/function/rndis.c
+index 0f14c5291af07..4150de96b937a 100644
+--- a/drivers/usb/gadget/function/rndis.c
++++ b/drivers/usb/gadget/function/rndis.c
+@@ -640,6 +640,7 @@ static int rndis_set_response(struct rndis_params *params,
+ 	BufLength = le32_to_cpu(buf->InformationBufferLength);
+ 	BufOffset = le32_to_cpu(buf->InformationBufferOffset);
+ 	if ((BufLength > RNDIS_MAX_TOTAL_SIZE) ||
++	    (BufOffset > RNDIS_MAX_TOTAL_SIZE) ||
+ 	    (BufOffset + 8 >= RNDIS_MAX_TOTAL_SIZE))
+ 		    return -EINVAL;
+ 
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index da691a69fec10..3a3b5a03dda75 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1343,7 +1343,6 @@ static void usb_gadget_remove_driver(struct usb_udc *udc)
+ 	usb_gadget_udc_stop(udc);
+ 
+ 	udc->driver = NULL;
+-	udc->dev.driver = NULL;
+ 	udc->gadget->dev.driver = NULL;
+ }
+ 
+@@ -1405,7 +1404,6 @@ static int udc_bind_to_driver(struct usb_udc *udc, struct usb_gadget_driver *dri
+ 			driver->function);
+ 
+ 	udc->driver = driver;
+-	udc->dev.driver = &driver->driver;
+ 	udc->gadget->dev.driver = &driver->driver;
+ 
+ 	usb_gadget_udc_set_speed(udc, driver->max_speed);
+@@ -1427,7 +1425,6 @@ err1:
+ 		dev_err(&udc->dev, "failed to start %s: %d\n",
+ 			udc->driver->function, ret);
+ 	udc->driver = NULL;
+-	udc->dev.driver = NULL;
+ 	udc->gadget->dev.driver = NULL;
+ 	return ret;
+ }
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index c282fc0d04bd1..5d2d6ce7ff413 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -697,7 +697,8 @@ static int vhost_vsock_dev_release(struct inode *inode, struct file *file)
+ 
+ 	/* Iterating over all connections for all CIDs to find orphans is
+ 	 * inefficient.  Room for improvement here. */
+-	vsock_for_each_connected_socket(vhost_vsock_reset_orphans);
++	vsock_for_each_connected_socket(&vhost_transport.transport,
++					vhost_vsock_reset_orphans);
+ 
+ 	/* Don't check the owner, because we are in the release path, so we
+ 	 * need to stop the vsock device in any case.
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 435f82892432c..477ad05a34ea2 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -1110,17 +1110,6 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 		goto read_super_error;
+ 	}
+ 
+-	root = d_make_root(inode);
+-	if (!root) {
+-		status = -ENOMEM;
+-		mlog_errno(status);
+-		goto read_super_error;
+-	}
+-
+-	sb->s_root = root;
+-
+-	ocfs2_complete_mount_recovery(osb);
+-
+ 	osb->osb_dev_kset = kset_create_and_add(sb->s_id, NULL,
+ 						&ocfs2_kset->kobj);
+ 	if (!osb->osb_dev_kset) {
+@@ -1138,6 +1127,17 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 		goto read_super_error;
+ 	}
+ 
++	root = d_make_root(inode);
++	if (!root) {
++		status = -ENOMEM;
++		mlog_errno(status);
++		goto read_super_error;
++	}
++
++	sb->s_root = root;
++
++	ocfs2_complete_mount_recovery(osb);
++
+ 	if (ocfs2_mount_local(osb))
+ 		snprintf(nodestr, sizeof(nodestr), "local");
+ 	else
+diff --git a/include/linux/if_arp.h b/include/linux/if_arp.h
+index bf5c5f32c65e4..e147ea6794670 100644
+--- a/include/linux/if_arp.h
++++ b/include/linux/if_arp.h
+@@ -51,6 +51,7 @@ static inline bool dev_is_mac_header_xmit(const struct net_device *dev)
+ 	case ARPHRD_VOID:
+ 	case ARPHRD_NONE:
+ 	case ARPHRD_RAWIP:
++	case ARPHRD_PIMREG:
+ 		return false;
+ 	default:
+ 		return true;
+diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
+index b1c7172869939..4d8589244dc75 100644
+--- a/include/net/af_vsock.h
++++ b/include/net/af_vsock.h
+@@ -197,7 +197,8 @@ struct sock *vsock_find_bound_socket(struct sockaddr_vm *addr);
+ struct sock *vsock_find_connected_socket(struct sockaddr_vm *src,
+ 					 struct sockaddr_vm *dst);
+ void vsock_remove_sock(struct vsock_sock *vsk);
+-void vsock_for_each_connected_socket(void (*fn)(struct sock *sk));
++void vsock_for_each_connected_socket(struct vsock_transport *transport,
++				     void (*fn)(struct sock *sk));
+ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk);
+ bool vsock_find_cid(unsigned int cid);
+ 
+diff --git a/include/net/esp.h b/include/net/esp.h
+index 9c5637d41d951..90cd02ff77ef6 100644
+--- a/include/net/esp.h
++++ b/include/net/esp.h
+@@ -4,6 +4,8 @@
+ 
+ #include <linux/skbuff.h>
+ 
++#define ESP_SKB_FRAG_MAXSIZE (PAGE_SIZE << SKB_FRAG_PAGE_ORDER)
++
+ struct ip_esp_hdr;
+ 
+ static inline struct ip_esp_hdr *ip_esp_hdr(const struct sk_buff *skb)
+diff --git a/include/net/sock.h b/include/net/sock.h
+index bb40d4de545ca..2c11eb4abdd24 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2670,6 +2670,7 @@ extern int sysctl_optmem_max;
+ extern __u32 sysctl_wmem_default;
+ extern __u32 sysctl_rmem_default;
+ 
++#define SKB_FRAG_PAGE_ORDER	get_order(32768)
+ DECLARE_STATIC_KEY_FALSE(net_high_order_alloc_disable_key);
+ 
+ static inline int sk_get_wmem0(const struct sock *sk, const struct proto *proto)
+diff --git a/mm/swap_state.c b/mm/swap_state.c
+index ee465827420e6..5c5cb2d67b31f 100644
+--- a/mm/swap_state.c
++++ b/mm/swap_state.c
+@@ -512,7 +512,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
+ 		 * __read_swap_cache_async(), which has set SWAP_HAS_CACHE
+ 		 * in swap_map, but not yet added its page to swap cache.
+ 		 */
+-		cond_resched();
++		schedule_timeout_uninterruptible(1);
+ 	}
+ 
+ 	/*
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index 71c8ef7d40870..f543fca6dfcbf 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -766,6 +766,7 @@ static int dsa_port_parse_of(struct dsa_port *dp, struct device_node *dn)
+ 		struct net_device *master;
+ 
+ 		master = of_find_net_device_by_node(ethernet);
++		of_node_put(ethernet);
+ 		if (!master)
+ 			return -EPROBE_DEFER;
+ 
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 4b834bbf95e07..9aae82145bc16 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -448,6 +448,7 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
+ 	struct page *page;
+ 	struct sk_buff *trailer;
+ 	int tailen = esp->tailen;
++	unsigned int allocsz;
+ 
+ 	/* this is non-NULL only with TCP/UDP Encapsulation */
+ 	if (x->encap) {
+@@ -457,6 +458,10 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
+ 			return err;
+ 	}
+ 
++	allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES);
++	if (allocsz > ESP_SKB_FRAG_MAXSIZE)
++		goto cow;
++
+ 	if (!skb_cloned(skb)) {
+ 		if (tailen <= skb_tailroom(skb)) {
+ 			nfrags = 1;
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index fc8acb15dcfbb..20c7bef6829e1 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -483,6 +483,7 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
+ 	struct page *page;
+ 	struct sk_buff *trailer;
+ 	int tailen = esp->tailen;
++	unsigned int allocsz;
+ 
+ 	if (x->encap) {
+ 		int err = esp6_output_encap(x, skb, esp);
+@@ -491,6 +492,10 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
+ 			return err;
+ 	}
+ 
++	allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES);
++	if (allocsz > ESP_SKB_FRAG_MAXSIZE)
++		goto cow;
++
+ 	if (!skb_cloned(skb)) {
+ 		if (tailen <= skb_tailroom(skb)) {
+ 			nfrags = 1;
+@@ -808,8 +813,7 @@ int esp6_input_done2(struct sk_buff *skb, int err)
+ 		struct tcphdr *th;
+ 
+ 		offset = ipv6_skip_exthdr(skb, offset, &nexthdr, &frag_off);
+-
+-		if (offset < 0) {
++		if (offset == -1) {
+ 			err = -EINVAL;
+ 			goto out;
+ 		}
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index a31334b92be7e..d0c95d7dd292d 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2278,8 +2278,11 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ 					copy_skb = skb_get(skb);
+ 					skb_head = skb->data;
+ 				}
+-				if (copy_skb)
++				if (copy_skb) {
++					memset(&PACKET_SKB_CB(copy_skb)->sa.ll, 0,
++					       sizeof(PACKET_SKB_CB(copy_skb)->sa.ll));
+ 					skb_set_owner_r(copy_skb, sk);
++				}
+ 			}
+ 			snaplen = po->rx_ring.frame_size - macoff;
+ 			if ((int)snaplen < 0) {
+@@ -3434,6 +3437,8 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 	sock_recv_ts_and_drops(msg, sk, skb);
+ 
+ 	if (msg->msg_name) {
++		const size_t max_len = min(sizeof(skb->cb),
++					   sizeof(struct sockaddr_storage));
+ 		int copy_len;
+ 
+ 		/* If the address length field is there to be filled
+@@ -3456,6 +3461,10 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 				msg->msg_namelen = sizeof(struct sockaddr_ll);
+ 			}
+ 		}
++		if (WARN_ON_ONCE(copy_len > max_len)) {
++			copy_len = max_len;
++			msg->msg_namelen = copy_len;
++		}
+ 		memcpy(msg->msg_name, &PACKET_SKB_CB(skb)->sa, copy_len);
+ 	}
+ 
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 005aa701f4d52..c59806253a65a 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -333,7 +333,8 @@ void vsock_remove_sock(struct vsock_sock *vsk)
+ }
+ EXPORT_SYMBOL_GPL(vsock_remove_sock);
+ 
+-void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
++void vsock_for_each_connected_socket(struct vsock_transport *transport,
++				     void (*fn)(struct sock *sk))
+ {
+ 	int i;
+ 
+@@ -342,8 +343,12 @@ void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
+ 	for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++) {
+ 		struct vsock_sock *vsk;
+ 		list_for_each_entry(vsk, &vsock_connected_table[i],
+-				    connected_table)
++				    connected_table) {
++			if (vsk->transport != transport)
++				continue;
++
+ 			fn(sk_vsock(vsk));
++		}
+ 	}
+ 
+ 	spin_unlock_bh(&vsock_table_lock);
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index 3a056f8affd1d..e131121533ad9 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -24,6 +24,7 @@
+ static struct workqueue_struct *virtio_vsock_workqueue;
+ static struct virtio_vsock __rcu *the_virtio_vsock;
+ static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */
++static struct virtio_transport virtio_transport; /* forward declaration */
+ 
+ struct virtio_vsock {
+ 	struct virtio_device *vdev;
+@@ -383,7 +384,8 @@ static void virtio_vsock_event_handle(struct virtio_vsock *vsock,
+ 	switch (le32_to_cpu(event->id)) {
+ 	case VIRTIO_VSOCK_EVENT_TRANSPORT_RESET:
+ 		virtio_vsock_update_guest_cid(vsock);
+-		vsock_for_each_connected_socket(virtio_vsock_reset_sock);
++		vsock_for_each_connected_socket(&virtio_transport.transport,
++						virtio_vsock_reset_sock);
+ 		break;
+ 	}
+ }
+@@ -635,7 +637,8 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
+ 	synchronize_rcu();
+ 
+ 	/* Reset all connected sockets when the device disappear */
+-	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
++	vsock_for_each_connected_socket(&virtio_transport.transport,
++					virtio_vsock_reset_sock);
+ 
+ 	/* Stop all work handlers to make sure no one is accessing the device,
+ 	 * so we can safely call vdev->config->reset().
+diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
+index 1c9ecb18b8e64..a9ca95a0fcdda 100644
+--- a/net/vmw_vsock/vmci_transport.c
++++ b/net/vmw_vsock/vmci_transport.c
+@@ -75,6 +75,8 @@ static u32 vmci_transport_qp_resumed_sub_id = VMCI_INVALID_ID;
+ 
+ static int PROTOCOL_OVERRIDE = -1;
+ 
++static struct vsock_transport vmci_transport; /* forward declaration */
++
+ /* Helper function to convert from a VMCI error code to a VSock error code. */
+ 
+ static s32 vmci_transport_error_to_vsock_error(s32 vmci_error)
+@@ -882,7 +884,8 @@ static void vmci_transport_qp_resumed_cb(u32 sub_id,
+ 					 const struct vmci_event_data *e_data,
+ 					 void *client_data)
+ {
+-	vsock_for_each_connected_socket(vmci_transport_handle_detach);
++	vsock_for_each_connected_socket(&vmci_transport,
++					vmci_transport_handle_detach);
+ }
+ 
+ static void vmci_transport_recv_pkt_work(struct work_struct *work)
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 4d569ad7db02d..3609da7cce0ab 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -231,7 +231,7 @@ void symbols__fixup_end(struct rb_root_cached *symbols)
+ 		prev = curr;
+ 		curr = rb_entry(nd, struct symbol, rb_node);
+ 
+-		if (prev->end == prev->start && prev->end != curr->start)
++		if (prev->end == prev->start || prev->end != curr->start)
+ 			arch__symbols__fixup_end(prev, curr);
+ 	}
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_crash.c b/tools/testing/selftests/bpf/prog_tests/timer_crash.c
+deleted file mode 100644
+index f74b82305da8c..0000000000000
+--- a/tools/testing/selftests/bpf/prog_tests/timer_crash.c
++++ /dev/null
+@@ -1,32 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-#include <test_progs.h>
+-#include "timer_crash.skel.h"
+-
+-enum {
+-	MODE_ARRAY,
+-	MODE_HASH,
+-};
+-
+-static void test_timer_crash_mode(int mode)
+-{
+-	struct timer_crash *skel;
+-
+-	skel = timer_crash__open_and_load();
+-	if (!ASSERT_OK_PTR(skel, "timer_crash__open_and_load"))
+-		return;
+-	skel->bss->pid = getpid();
+-	skel->bss->crash_map = mode;
+-	if (!ASSERT_OK(timer_crash__attach(skel), "timer_crash__attach"))
+-		goto end;
+-	usleep(1);
+-end:
+-	timer_crash__destroy(skel);
+-}
+-
+-void test_timer_crash(void)
+-{
+-	if (test__start_subtest("array"))
+-		test_timer_crash_mode(MODE_ARRAY);
+-	if (test__start_subtest("hash"))
+-		test_timer_crash_mode(MODE_HASH);
+-}
+diff --git a/tools/testing/selftests/bpf/progs/timer_crash.c b/tools/testing/selftests/bpf/progs/timer_crash.c
+deleted file mode 100644
+index f8f7944e70dae..0000000000000
+--- a/tools/testing/selftests/bpf/progs/timer_crash.c
++++ /dev/null
+@@ -1,54 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-
+-#include <vmlinux.h>
+-#include <bpf/bpf_tracing.h>
+-#include <bpf/bpf_helpers.h>
+-
+-struct map_elem {
+-	struct bpf_timer timer;
+-	struct bpf_spin_lock lock;
+-};
+-
+-struct {
+-	__uint(type, BPF_MAP_TYPE_ARRAY);
+-	__uint(max_entries, 1);
+-	__type(key, int);
+-	__type(value, struct map_elem);
+-} amap SEC(".maps");
+-
+-struct {
+-	__uint(type, BPF_MAP_TYPE_HASH);
+-	__uint(max_entries, 1);
+-	__type(key, int);
+-	__type(value, struct map_elem);
+-} hmap SEC(".maps");
+-
+-int pid = 0;
+-int crash_map = 0; /* 0 for amap, 1 for hmap */
+-
+-SEC("fentry/do_nanosleep")
+-int sys_enter(void *ctx)
+-{
+-	struct map_elem *e, value = {};
+-	void *map = crash_map ? (void *)&hmap : (void *)&amap;
+-
+-	if (bpf_get_current_task_btf()->tgid != pid)
+-		return 0;
+-
+-	*(void **)&value = (void *)0xdeadcaf3;
+-
+-	bpf_map_update_elem(map, &(int){0}, &value, 0);
+-	/* For array map, doing bpf_map_update_elem will do a
+-	 * check_and_free_timer_in_array, which will trigger the crash if timer
+-	 * pointer was overwritten, for hmap we need to use bpf_timer_cancel.
+-	 */
+-	if (crash_map == 1) {
+-		e = bpf_map_lookup_elem(map, &(int){0});
+-		if (!e)
+-			return 0;
+-		bpf_timer_cancel(&e->timer);
+-	}
+-	return 0;
+-}
+-
+-char _license[] SEC("license") = "GPL";


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-03-28 10:58 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-03-28 10:58 UTC (permalink / raw
  To: gentoo-commits

commit:     5f8b01a12b5f1581cc8d5cfbc354ebd06cc2dc02
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 28 10:58:00 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar 28 10:58:00 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5f8b01a1

Linux patch 5.10.109

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1108_linux-5.10.109.patch | 1481 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1485 insertions(+)

diff --git a/0000_README b/0000_README
index bf9e37df..e51b86fc 100644
--- a/0000_README
+++ b/0000_README
@@ -475,6 +475,10 @@ Patch:  1107_linux-5.10.108.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.108
 
+Patch:  1108_linux-5.10.109.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.109
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1108_linux-5.10.109.patch b/1108_linux-5.10.109.patch
new file mode 100644
index 00000000..4f325e08
--- /dev/null
+++ b/1108_linux-5.10.109.patch
@@ -0,0 +1,1481 @@
+diff --git a/Makefile b/Makefile
+index 08b3066fe6e53..3b462df1134b6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 108
++SUBLEVEL = 109
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/nds32/include/asm/uaccess.h b/arch/nds32/include/asm/uaccess.h
+index 010ba5f1d7dd6..54500e81efe59 100644
+--- a/arch/nds32/include/asm/uaccess.h
++++ b/arch/nds32/include/asm/uaccess.h
+@@ -70,9 +70,7 @@ static inline void set_fs(mm_segment_t fs)
+  * versions are void (ie, don't return a value as such).
+  */
+ 
+-#define get_user	__get_user					\
+-
+-#define __get_user(x, ptr)						\
++#define get_user(x, ptr)						\
+ ({									\
+ 	long __gu_err = 0;						\
+ 	__get_user_check((x), (ptr), __gu_err);				\
+@@ -85,6 +83,14 @@ static inline void set_fs(mm_segment_t fs)
+ 	(void)0;							\
+ })
+ 
++#define __get_user(x, ptr)						\
++({									\
++	long __gu_err = 0;						\
++	const __typeof__(*(ptr)) __user *__p = (ptr);			\
++	__get_user_err((x), __p, (__gu_err));				\
++	__gu_err;							\
++})
++
+ #define __get_user_check(x, ptr, err)					\
+ ({									\
+ 	const __typeof__(*(ptr)) __user *__p = (ptr);			\
+@@ -165,12 +171,18 @@ do {									\
+ 		: "r"(addr), "i"(-EFAULT)				\
+ 		: "cc")
+ 
+-#define put_user	__put_user					\
++#define put_user(x, ptr)						\
++({									\
++	long __pu_err = 0;						\
++	__put_user_check((x), (ptr), __pu_err);				\
++	__pu_err;							\
++})
+ 
+ #define __put_user(x, ptr)						\
+ ({									\
+ 	long __pu_err = 0;						\
+-	__put_user_err((x), (ptr), __pu_err);				\
++	__typeof__(*(ptr)) __user *__p = (ptr);				\
++	__put_user_err((x), __p, __pu_err);				\
+ 	__pu_err;							\
+ })
+ 
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 14cd3186dc77d..55562a9b7f92e 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -1340,6 +1340,17 @@ static int __init disable_acpi_pci(const struct dmi_system_id *d)
+ 	return 0;
+ }
+ 
++static int __init disable_acpi_xsdt(const struct dmi_system_id *d)
++{
++	if (!acpi_force) {
++		pr_notice("%s detected: force use of acpi=rsdt\n", d->ident);
++		acpi_gbl_do_not_use_xsdt = TRUE;
++	} else {
++		pr_notice("Warning: DMI blacklist says broken, but acpi XSDT forced\n");
++	}
++	return 0;
++}
++
+ static int __init dmi_disable_acpi(const struct dmi_system_id *d)
+ {
+ 	if (!acpi_force) {
+@@ -1464,6 +1475,19 @@ static const struct dmi_system_id acpi_dmi_table[] __initconst = {
+ 		     DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 360"),
+ 		     },
+ 	 },
++	/*
++	 * Boxes that need ACPI XSDT use disabled due to corrupted tables
++	 */
++	{
++	 .callback = disable_acpi_xsdt,
++	 .ident = "Advantech DAC-BJ01",
++	 .matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "NEC"),
++		     DMI_MATCH(DMI_PRODUCT_NAME, "Bearlake CRB Board"),
++		     DMI_MATCH(DMI_BIOS_VERSION, "V1.12"),
++		     DMI_MATCH(DMI_BIOS_DATE, "02/01/2011"),
++		     },
++	 },
+ 	{}
+ };
+ 
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index 2376f57b3617a..be743d177bcbf 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -66,6 +66,10 @@ MODULE_PARM_DESC(cache_time, "cache time in milliseconds");
+ 
+ static const struct acpi_device_id battery_device_ids[] = {
+ 	{"PNP0C0A", 0},
++
++	/* Microsoft Surface Go 3 */
++	{"MSHW0146", 0},
++
+ 	{"", 0},
+ };
+ 
+@@ -1171,6 +1175,14 @@ static const struct dmi_system_id bat_dmi_table[] __initconst = {
+ 			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad"),
+ 		},
+ 	},
++	{
++		/* Microsoft Surface Go 3 */
++		.callback = battery_notification_delay_quirk,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 3"),
++		},
++	},
+ 	{},
+ };
+ 
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 33474fd969913..7b9793cb55c50 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -409,6 +409,81 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "GA503"),
+ 		},
+ 	},
++	/*
++	 * Clevo NL5xRU and NL5xNU/TUXEDO Aura 15 Gen1 and Gen2 have both a
++	 * working native and video interface. However the default detection
++	 * mechanism first registers the video interface before unregistering
++	 * it again and switching to the native interface during boot. This
++	 * results in a dangling SBIOS request for backlight change for some
++	 * reason, causing the backlight to switch to ~2% once per boot on the
++	 * first power cord connect or disconnect event. Setting the native
++	 * interface explicitly circumvents this buggy behaviour, by avoiding
++	 * the unregistering process.
++	 */
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xRU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xRU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xRU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xRU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "AURA1501"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xRU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xNU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xNU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xNU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++		},
++	},
+ 
+ 	/*
+ 	 * Desktops which falsely report a backlight and which our heuristics
+diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
+index 1784530b8387b..b99e1941c52c9 100644
+--- a/drivers/char/tpm/tpm-dev-common.c
++++ b/drivers/char/tpm/tpm-dev-common.c
+@@ -70,7 +70,13 @@ static void tpm_dev_async_work(struct work_struct *work)
+ 	ret = tpm_dev_transmit(priv->chip, priv->space, priv->data_buffer,
+ 			       sizeof(priv->data_buffer));
+ 	tpm_put_ops(priv->chip);
+-	if (ret > 0) {
++
++	/*
++	 * If ret is > 0 then tpm_dev_transmit returned the size of the
++	 * response. If ret is < 0 then tpm_dev_transmit failed and
++	 * returned an error code.
++	 */
++	if (ret != 0) {
+ 		priv->response_length = ret;
+ 		mod_timer(&priv->user_read_timer, jiffies + (120 * HZ));
+ 	}
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 97e916856cf3e..d2225020e4d2c 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -58,12 +58,12 @@ int tpm2_init_space(struct tpm_space *space, unsigned int buf_size)
+ 
+ void tpm2_del_space(struct tpm_chip *chip, struct tpm_space *space)
+ {
+-	mutex_lock(&chip->tpm_mutex);
+-	if (!tpm_chip_start(chip)) {
++
++	if (tpm_try_get_ops(chip) == 0) {
+ 		tpm2_flush_sessions(chip, space);
+-		tpm_chip_stop(chip);
++		tpm_put_ops(chip);
+ 	}
+-	mutex_unlock(&chip->tpm_mutex);
++
+ 	kfree(space->context_buf);
+ 	kfree(space->session_buf);
+ }
+diff --git a/drivers/crypto/qat/qat_common/qat_crypto.c b/drivers/crypto/qat/qat_common/qat_crypto.c
+index ab621b7dbd203..9210af8a1f58c 100644
+--- a/drivers/crypto/qat/qat_common/qat_crypto.c
++++ b/drivers/crypto/qat/qat_common/qat_crypto.c
+@@ -126,6 +126,14 @@ int qat_crypto_dev_config(struct adf_accel_dev *accel_dev)
+ 		goto err;
+ 	if (adf_cfg_section_add(accel_dev, "Accelerator0"))
+ 		goto err;
++
++	/* Temporarily set the number of crypto instances to zero to avoid
++	 * registering the crypto algorithms.
++	 * This will be removed when the algorithms will support the
++	 * CRYPTO_TFM_REQ_MAY_BACKLOG flag
++	 */
++	instances = 0;
++
+ 	for (i = 0; i < instances; i++) {
+ 		val = i;
+ 		snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_BANK_NUM, i);
+diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+index 5f1fc6582d74a..78c7cbc372b05 100644
+--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
++++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+@@ -696,6 +696,12 @@ static int xgene_enet_rx_frame(struct xgene_enet_desc_ring *rx_ring,
+ 	buf_pool->rx_skb[skb_index] = NULL;
+ 
+ 	datalen = xgene_enet_get_data_len(le64_to_cpu(raw_desc->m1));
++
++	/* strip off CRC as HW isn't doing this */
++	nv = GET_VAL(NV, le64_to_cpu(raw_desc->m0));
++	if (!nv)
++		datalen -= 4;
++
+ 	skb_put(skb, datalen);
+ 	prefetch(skb->data - NET_IP_ALIGN);
+ 	skb->protocol = eth_type_trans(skb, ndev);
+@@ -717,12 +723,8 @@ static int xgene_enet_rx_frame(struct xgene_enet_desc_ring *rx_ring,
+ 		}
+ 	}
+ 
+-	nv = GET_VAL(NV, le64_to_cpu(raw_desc->m0));
+-	if (!nv) {
+-		/* strip off CRC as HW isn't doing this */
+-		datalen -= 4;
++	if (!nv)
+ 		goto skip_jumbo;
+-	}
+ 
+ 	slots = page_pool->slots - 1;
+ 	head = page_pool->head;
+diff --git a/drivers/net/wireless/ath/regd.c b/drivers/net/wireless/ath/regd.c
+index bee9110b91f38..20f4f8ea9f894 100644
+--- a/drivers/net/wireless/ath/regd.c
++++ b/drivers/net/wireless/ath/regd.c
+@@ -666,14 +666,14 @@ ath_regd_init_wiphy(struct ath_regulatory *reg,
+ 
+ /*
+  * Some users have reported their EEPROM programmed with
+- * 0x8000 or 0x0 set, this is not a supported regulatory
+- * domain but since we have more than one user with it we
+- * need a solution for them. We default to 0x64, which is
+- * the default Atheros world regulatory domain.
++ * 0x8000 set, this is not a supported regulatory domain
++ * but since we have more than one user with it we need
++ * a solution for them. We default to 0x64, which is the
++ * default Atheros world regulatory domain.
+  */
+ static void ath_regd_sanitize(struct ath_regulatory *reg)
+ {
+-	if (reg->current_rd != COUNTRY_ERD_FLAG && reg->current_rd != 0)
++	if (reg->current_rd != COUNTRY_ERD_FLAG)
+ 		return;
+ 	printk(KERN_DEBUG "ath: EEPROM regdomain sanitized\n");
+ 	reg->current_rd = 0x64;
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index 9aaf6f7473333..37e6e49de3366 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -1362,6 +1362,9 @@ static int wcn36xx_platform_get_resources(struct wcn36xx *wcn,
+ 	if (iris_node) {
+ 		if (of_device_is_compatible(iris_node, "qcom,wcn3620"))
+ 			wcn->rf_id = RF_IRIS_WCN3620;
++		if (of_device_is_compatible(iris_node, "qcom,wcn3660") ||
++		    of_device_is_compatible(iris_node, "qcom,wcn3660b"))
++			wcn->rf_id = RF_IRIS_WCN3660;
+ 		if (of_device_is_compatible(iris_node, "qcom,wcn3680"))
+ 			wcn->rf_id = RF_IRIS_WCN3680;
+ 		of_node_put(iris_node);
+diff --git a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+index 5c40d0bdee245..82be08265c06c 100644
+--- a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
++++ b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+@@ -96,6 +96,7 @@ enum wcn36xx_ampdu_state {
+ 
+ #define RF_UNKNOWN	0x0000
+ #define RF_IRIS_WCN3620	0x3620
++#define RF_IRIS_WCN3660	0x3660
+ #define RF_IRIS_WCN3680	0x3680
+ 
+ static inline void buff_to_be(u32 *buf, size_t len)
+diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c
+index c8bdf078d1115..0841e0e370a03 100644
+--- a/drivers/nfc/st21nfca/se.c
++++ b/drivers/nfc/st21nfca/se.c
+@@ -320,6 +320,11 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
+ 			return -ENOMEM;
+ 
+ 		transaction->aid_len = skb->data[1];
++
++		/* Checking if the length of the AID is valid */
++		if (transaction->aid_len > sizeof(transaction->aid))
++			return -EINVAL;
++
+ 		memcpy(transaction->aid, &skb->data[2],
+ 		       transaction->aid_len);
+ 
+@@ -329,6 +334,11 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
+ 			return -EPROTO;
+ 
+ 		transaction->params_len = skb->data[transaction->aid_len + 3];
++
++		/* Total size is allocated (skb->len - 2) minus fixed array members */
++		if (transaction->params_len > ((skb->len - 2) - sizeof(struct nfc_evt_transaction)))
++			return -EINVAL;
++
+ 		memcpy(transaction->params, skb->data +
+ 		       transaction->aid_len + 4, transaction->params_len);
+ 
+diff --git a/drivers/staging/fbtft/fb_st7789v.c b/drivers/staging/fbtft/fb_st7789v.c
+index 3a280cc1892ca..0a2dbed9ffc74 100644
+--- a/drivers/staging/fbtft/fb_st7789v.c
++++ b/drivers/staging/fbtft/fb_st7789v.c
+@@ -82,6 +82,8 @@ enum st7789v_command {
+  */
+ static int init_display(struct fbtft_par *par)
+ {
++	par->fbtftops.reset(par);
++
+ 	/* turn off sleep mode */
+ 	write_reg(par, MIPI_DCS_EXIT_SLEEP_MODE);
+ 	mdelay(120);
+diff --git a/fs/exfat/super.c b/fs/exfat/super.c
+index cd04c912f02e0..ba70ed1c98049 100644
+--- a/fs/exfat/super.c
++++ b/fs/exfat/super.c
+@@ -690,7 +690,7 @@ static int exfat_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	if (!sb->s_root) {
+ 		exfat_err(sb, "failed to get the root dentry");
+ 		err = -ENOMEM;
+-		goto put_inode;
++		goto free_table;
+ 	}
+ 
+ 	return 0;
+diff --git a/include/sound/pcm.h b/include/sound/pcm.h
+index 2336bf9243e18..ab966563e852e 100644
+--- a/include/sound/pcm.h
++++ b/include/sound/pcm.h
+@@ -398,6 +398,7 @@ struct snd_pcm_runtime {
+ 	wait_queue_head_t tsleep;	/* transfer sleep */
+ 	struct fasync_struct *fasync;
+ 	bool stop_operating;		/* sync_stop will be called */
++	struct mutex buffer_mutex;	/* protect for buffer changes */
+ 
+ 	/* -- private section -- */
+ 	void *private_data;
+diff --git a/kernel/cgroup/cgroup-internal.h b/kernel/cgroup/cgroup-internal.h
+index bfbeabc17a9df..6e36e854b5124 100644
+--- a/kernel/cgroup/cgroup-internal.h
++++ b/kernel/cgroup/cgroup-internal.h
+@@ -65,6 +65,25 @@ static inline struct cgroup_fs_context *cgroup_fc2context(struct fs_context *fc)
+ 	return container_of(kfc, struct cgroup_fs_context, kfc);
+ }
+ 
++struct cgroup_pidlist;
++
++struct cgroup_file_ctx {
++	struct cgroup_namespace	*ns;
++
++	struct {
++		void			*trigger;
++	} psi;
++
++	struct {
++		bool			started;
++		struct css_task_iter	iter;
++	} procs;
++
++	struct {
++		struct cgroup_pidlist	*pidlist;
++	} procs1;
++};
++
+ /*
+  * A cgroup can be associated with multiple css_sets as different tasks may
+  * belong to different cgroups on different hierarchies.  In the other
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index 69fba563c810e..8f0ea12d7cee2 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -393,6 +393,7 @@ static void *cgroup_pidlist_start(struct seq_file *s, loff_t *pos)
+ 	 * next pid to display, if any
+ 	 */
+ 	struct kernfs_open_file *of = s->private;
++	struct cgroup_file_ctx *ctx = of->priv;
+ 	struct cgroup *cgrp = seq_css(s)->cgroup;
+ 	struct cgroup_pidlist *l;
+ 	enum cgroup_filetype type = seq_cft(s)->private;
+@@ -402,25 +403,24 @@ static void *cgroup_pidlist_start(struct seq_file *s, loff_t *pos)
+ 	mutex_lock(&cgrp->pidlist_mutex);
+ 
+ 	/*
+-	 * !NULL @of->priv indicates that this isn't the first start()
+-	 * after open.  If the matching pidlist is around, we can use that.
+-	 * Look for it.  Note that @of->priv can't be used directly.  It
+-	 * could already have been destroyed.
++	 * !NULL @ctx->procs1.pidlist indicates that this isn't the first
++	 * start() after open. If the matching pidlist is around, we can use
++	 * that. Look for it. Note that @ctx->procs1.pidlist can't be used
++	 * directly. It could already have been destroyed.
+ 	 */
+-	if (of->priv)
+-		of->priv = cgroup_pidlist_find(cgrp, type);
++	if (ctx->procs1.pidlist)
++		ctx->procs1.pidlist = cgroup_pidlist_find(cgrp, type);
+ 
+ 	/*
+ 	 * Either this is the first start() after open or the matching
+ 	 * pidlist has been destroyed inbetween.  Create a new one.
+ 	 */
+-	if (!of->priv) {
+-		ret = pidlist_array_load(cgrp, type,
+-					 (struct cgroup_pidlist **)&of->priv);
++	if (!ctx->procs1.pidlist) {
++		ret = pidlist_array_load(cgrp, type, &ctx->procs1.pidlist);
+ 		if (ret)
+ 			return ERR_PTR(ret);
+ 	}
+-	l = of->priv;
++	l = ctx->procs1.pidlist;
+ 
+ 	if (pid) {
+ 		int end = l->length;
+@@ -448,7 +448,8 @@ static void *cgroup_pidlist_start(struct seq_file *s, loff_t *pos)
+ static void cgroup_pidlist_stop(struct seq_file *s, void *v)
+ {
+ 	struct kernfs_open_file *of = s->private;
+-	struct cgroup_pidlist *l = of->priv;
++	struct cgroup_file_ctx *ctx = of->priv;
++	struct cgroup_pidlist *l = ctx->procs1.pidlist;
+ 
+ 	if (l)
+ 		mod_delayed_work(cgroup_pidlist_destroy_wq, &l->destroy_dwork,
+@@ -459,7 +460,8 @@ static void cgroup_pidlist_stop(struct seq_file *s, void *v)
+ static void *cgroup_pidlist_next(struct seq_file *s, void *v, loff_t *pos)
+ {
+ 	struct kernfs_open_file *of = s->private;
+-	struct cgroup_pidlist *l = of->priv;
++	struct cgroup_file_ctx *ctx = of->priv;
++	struct cgroup_pidlist *l = ctx->procs1.pidlist;
+ 	pid_t *p = v;
+ 	pid_t *end = l->list + l->length;
+ 	/*
+@@ -542,6 +544,7 @@ static ssize_t cgroup_release_agent_write(struct kernfs_open_file *of,
+ 					  char *buf, size_t nbytes, loff_t off)
+ {
+ 	struct cgroup *cgrp;
++	struct cgroup_file_ctx *ctx;
+ 
+ 	BUILD_BUG_ON(sizeof(cgrp->root->release_agent_path) < PATH_MAX);
+ 
+@@ -549,8 +552,9 @@ static ssize_t cgroup_release_agent_write(struct kernfs_open_file *of,
+ 	 * Release agent gets called with all capabilities,
+ 	 * require capabilities to set release agent.
+ 	 */
+-	if ((of->file->f_cred->user_ns != &init_user_ns) ||
+-	    !capable(CAP_SYS_ADMIN))
++	ctx = of->priv;
++	if ((ctx->ns->user_ns != &init_user_ns) ||
++	    !file_ns_capable(of->file, &init_user_ns, CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
+ 	cgrp = cgroup_kn_lock_live(of->kn, false);
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 4927289a91a97..3f8447a5393e9 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -3590,6 +3590,7 @@ static int cgroup_cpu_pressure_show(struct seq_file *seq, void *v)
+ static ssize_t cgroup_pressure_write(struct kernfs_open_file *of, char *buf,
+ 					  size_t nbytes, enum psi_res res)
+ {
++	struct cgroup_file_ctx *ctx = of->priv;
+ 	struct psi_trigger *new;
+ 	struct cgroup *cgrp;
+ 	struct psi_group *psi;
+@@ -3602,7 +3603,7 @@ static ssize_t cgroup_pressure_write(struct kernfs_open_file *of, char *buf,
+ 	cgroup_kn_unlock(of->kn);
+ 
+ 	/* Allow only one trigger per file descriptor */
+-	if (of->priv) {
++	if (ctx->psi.trigger) {
+ 		cgroup_put(cgrp);
+ 		return -EBUSY;
+ 	}
+@@ -3614,7 +3615,7 @@ static ssize_t cgroup_pressure_write(struct kernfs_open_file *of, char *buf,
+ 		return PTR_ERR(new);
+ 	}
+ 
+-	smp_store_release(&of->priv, new);
++	smp_store_release(&ctx->psi.trigger, new);
+ 	cgroup_put(cgrp);
+ 
+ 	return nbytes;
+@@ -3644,12 +3645,15 @@ static ssize_t cgroup_cpu_pressure_write(struct kernfs_open_file *of,
+ static __poll_t cgroup_pressure_poll(struct kernfs_open_file *of,
+ 					  poll_table *pt)
+ {
+-	return psi_trigger_poll(&of->priv, of->file, pt);
++	struct cgroup_file_ctx *ctx = of->priv;
++	return psi_trigger_poll(&ctx->psi.trigger, of->file, pt);
+ }
+ 
+ static void cgroup_pressure_release(struct kernfs_open_file *of)
+ {
+-	psi_trigger_destroy(of->priv);
++	struct cgroup_file_ctx *ctx = of->priv;
++
++	psi_trigger_destroy(ctx->psi.trigger);
+ }
+ #endif /* CONFIG_PSI */
+ 
+@@ -3690,24 +3694,43 @@ static ssize_t cgroup_freeze_write(struct kernfs_open_file *of,
+ static int cgroup_file_open(struct kernfs_open_file *of)
+ {
+ 	struct cftype *cft = of->kn->priv;
++	struct cgroup_file_ctx *ctx;
++	int ret;
+ 
+-	if (cft->open)
+-		return cft->open(of);
+-	return 0;
++	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
++	if (!ctx)
++		return -ENOMEM;
++
++	ctx->ns = current->nsproxy->cgroup_ns;
++	get_cgroup_ns(ctx->ns);
++	of->priv = ctx;
++
++	if (!cft->open)
++		return 0;
++
++	ret = cft->open(of);
++	if (ret) {
++		put_cgroup_ns(ctx->ns);
++		kfree(ctx);
++	}
++	return ret;
+ }
+ 
+ static void cgroup_file_release(struct kernfs_open_file *of)
+ {
+ 	struct cftype *cft = of->kn->priv;
++	struct cgroup_file_ctx *ctx = of->priv;
+ 
+ 	if (cft->release)
+ 		cft->release(of);
++	put_cgroup_ns(ctx->ns);
++	kfree(ctx);
+ }
+ 
+ static ssize_t cgroup_file_write(struct kernfs_open_file *of, char *buf,
+ 				 size_t nbytes, loff_t off)
+ {
+-	struct cgroup_namespace *ns = current->nsproxy->cgroup_ns;
++	struct cgroup_file_ctx *ctx = of->priv;
+ 	struct cgroup *cgrp = of->kn->parent->priv;
+ 	struct cftype *cft = of->kn->priv;
+ 	struct cgroup_subsys_state *css;
+@@ -3724,7 +3747,7 @@ static ssize_t cgroup_file_write(struct kernfs_open_file *of, char *buf,
+ 	 */
+ 	if ((cgrp->root->flags & CGRP_ROOT_NS_DELEGATE) &&
+ 	    !(cft->flags & CFTYPE_NS_DELEGATABLE) &&
+-	    ns != &init_cgroup_ns && ns->root_cset->dfl_cgrp == cgrp)
++	    ctx->ns != &init_cgroup_ns && ctx->ns->root_cset->dfl_cgrp == cgrp)
+ 		return -EPERM;
+ 
+ 	if (cft->write)
+@@ -4625,21 +4648,21 @@ void css_task_iter_end(struct css_task_iter *it)
+ 
+ static void cgroup_procs_release(struct kernfs_open_file *of)
+ {
+-	if (of->priv) {
+-		css_task_iter_end(of->priv);
+-		kfree(of->priv);
+-	}
++	struct cgroup_file_ctx *ctx = of->priv;
++
++	if (ctx->procs.started)
++		css_task_iter_end(&ctx->procs.iter);
+ }
+ 
+ static void *cgroup_procs_next(struct seq_file *s, void *v, loff_t *pos)
+ {
+ 	struct kernfs_open_file *of = s->private;
+-	struct css_task_iter *it = of->priv;
++	struct cgroup_file_ctx *ctx = of->priv;
+ 
+ 	if (pos)
+ 		(*pos)++;
+ 
+-	return css_task_iter_next(it);
++	return css_task_iter_next(&ctx->procs.iter);
+ }
+ 
+ static void *__cgroup_procs_start(struct seq_file *s, loff_t *pos,
+@@ -4647,21 +4670,18 @@ static void *__cgroup_procs_start(struct seq_file *s, loff_t *pos,
+ {
+ 	struct kernfs_open_file *of = s->private;
+ 	struct cgroup *cgrp = seq_css(s)->cgroup;
+-	struct css_task_iter *it = of->priv;
++	struct cgroup_file_ctx *ctx = of->priv;
++	struct css_task_iter *it = &ctx->procs.iter;
+ 
+ 	/*
+ 	 * When a seq_file is seeked, it's always traversed sequentially
+ 	 * from position 0, so we can simply keep iterating on !0 *pos.
+ 	 */
+-	if (!it) {
++	if (!ctx->procs.started) {
+ 		if (WARN_ON_ONCE((*pos)))
+ 			return ERR_PTR(-EINVAL);
+-
+-		it = kzalloc(sizeof(*it), GFP_KERNEL);
+-		if (!it)
+-			return ERR_PTR(-ENOMEM);
+-		of->priv = it;
+ 		css_task_iter_start(&cgrp->self, iter_flags, it);
++		ctx->procs.started = true;
+ 	} else if (!(*pos)) {
+ 		css_task_iter_end(it);
+ 		css_task_iter_start(&cgrp->self, iter_flags, it);
+@@ -4712,9 +4732,9 @@ static int cgroup_may_write(const struct cgroup *cgrp, struct super_block *sb)
+ 
+ static int cgroup_procs_write_permission(struct cgroup *src_cgrp,
+ 					 struct cgroup *dst_cgrp,
+-					 struct super_block *sb)
++					 struct super_block *sb,
++					 struct cgroup_namespace *ns)
+ {
+-	struct cgroup_namespace *ns = current->nsproxy->cgroup_ns;
+ 	struct cgroup *com_cgrp = src_cgrp;
+ 	int ret;
+ 
+@@ -4743,11 +4763,12 @@ static int cgroup_procs_write_permission(struct cgroup *src_cgrp,
+ 
+ static int cgroup_attach_permissions(struct cgroup *src_cgrp,
+ 				     struct cgroup *dst_cgrp,
+-				     struct super_block *sb, bool threadgroup)
++				     struct super_block *sb, bool threadgroup,
++				     struct cgroup_namespace *ns)
+ {
+ 	int ret = 0;
+ 
+-	ret = cgroup_procs_write_permission(src_cgrp, dst_cgrp, sb);
++	ret = cgroup_procs_write_permission(src_cgrp, dst_cgrp, sb, ns);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -4764,6 +4785,7 @@ static int cgroup_attach_permissions(struct cgroup *src_cgrp,
+ static ssize_t cgroup_procs_write(struct kernfs_open_file *of,
+ 				  char *buf, size_t nbytes, loff_t off)
+ {
++	struct cgroup_file_ctx *ctx = of->priv;
+ 	struct cgroup *src_cgrp, *dst_cgrp;
+ 	struct task_struct *task;
+ 	ssize_t ret;
+@@ -4784,7 +4806,8 @@ static ssize_t cgroup_procs_write(struct kernfs_open_file *of,
+ 	spin_unlock_irq(&css_set_lock);
+ 
+ 	ret = cgroup_attach_permissions(src_cgrp, dst_cgrp,
+-					of->file->f_path.dentry->d_sb, true);
++					of->file->f_path.dentry->d_sb, true,
++					ctx->ns);
+ 	if (ret)
+ 		goto out_finish;
+ 
+@@ -4806,6 +4829,7 @@ static void *cgroup_threads_start(struct seq_file *s, loff_t *pos)
+ static ssize_t cgroup_threads_write(struct kernfs_open_file *of,
+ 				    char *buf, size_t nbytes, loff_t off)
+ {
++	struct cgroup_file_ctx *ctx = of->priv;
+ 	struct cgroup *src_cgrp, *dst_cgrp;
+ 	struct task_struct *task;
+ 	ssize_t ret;
+@@ -4829,7 +4853,8 @@ static ssize_t cgroup_threads_write(struct kernfs_open_file *of,
+ 
+ 	/* thread migrations follow the cgroup.procs delegation rule */
+ 	ret = cgroup_attach_permissions(src_cgrp, dst_cgrp,
+-					of->file->f_path.dentry->d_sb, false);
++					of->file->f_path.dentry->d_sb, false,
++					ctx->ns);
+ 	if (ret)
+ 		goto out_finish;
+ 
+@@ -6009,7 +6034,8 @@ static int cgroup_css_set_fork(struct kernel_clone_args *kargs)
+ 		goto err;
+ 
+ 	ret = cgroup_attach_permissions(cset->dfl_cgrp, dst_cgrp, sb,
+-					!(kargs->flags & CLONE_THREAD));
++					!(kargs->flags & CLONE_THREAD),
++					current->nsproxy->cgroup_ns);
+ 	if (ret)
+ 		goto err;
+ 
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index 658427c33b937..f5ba0740f9b50 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -531,16 +531,17 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
+ 			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ 		}
+ 
+-		/* Unboost if we were boosted. */
+-		if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
+-			rt_mutex_futex_unlock(&rnp->boost_mtx);
+-
+ 		/*
+ 		 * If this was the last task on the expedited lists,
+ 		 * then we need to report up the rcu_node hierarchy.
+ 		 */
+ 		if (!empty_exp && empty_exp_now)
+ 			rcu_report_exp_rnp(rnp, true);
++
++		/* Unboost if we were boosted. */
++		if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
++			rt_mutex_futex_unlock(&rnp->boost_mtx);
++
+ 	} else {
+ 		local_irq_restore(flags);
+ 	}
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index d6f2126f46184..2aa39ce7093df 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1500,8 +1500,8 @@ static int __ip6_append_data(struct sock *sk,
+ 		      sizeof(struct frag_hdr) : 0) +
+ 		     rt->rt6i_nfheader_len;
+ 
+-	if (mtu < fragheaderlen ||
+-	    ((mtu - fragheaderlen) & ~7) + fragheaderlen < sizeof(struct frag_hdr))
++	if (mtu <= fragheaderlen ||
++	    ((mtu - fragheaderlen) & ~7) + fragheaderlen <= sizeof(struct frag_hdr))
+ 		goto emsgsize;
+ 
+ 	maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen -
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index ac5cadd02cfa8..99a37c411323e 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -276,6 +276,7 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct llc_sock *llc = llc_sk(sk);
++	struct net_device *dev = NULL;
+ 	struct llc_sap *sap;
+ 	int rc = -EINVAL;
+ 
+@@ -287,14 +288,14 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+ 		goto out;
+ 	rc = -ENODEV;
+ 	if (sk->sk_bound_dev_if) {
+-		llc->dev = dev_get_by_index(&init_net, sk->sk_bound_dev_if);
+-		if (llc->dev && addr->sllc_arphrd != llc->dev->type) {
+-			dev_put(llc->dev);
+-			llc->dev = NULL;
++		dev = dev_get_by_index(&init_net, sk->sk_bound_dev_if);
++		if (dev && addr->sllc_arphrd != dev->type) {
++			dev_put(dev);
++			dev = NULL;
+ 		}
+ 	} else
+-		llc->dev = dev_getfirstbyhwtype(&init_net, addr->sllc_arphrd);
+-	if (!llc->dev)
++		dev = dev_getfirstbyhwtype(&init_net, addr->sllc_arphrd);
++	if (!dev)
+ 		goto out;
+ 	rc = -EUSERS;
+ 	llc->laddr.lsap = llc_ui_autoport();
+@@ -304,6 +305,11 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+ 	sap = llc_sap_open(llc->laddr.lsap, NULL);
+ 	if (!sap)
+ 		goto out;
++
++	/* Note: We do not expect errors from this point. */
++	llc->dev = dev;
++	dev = NULL;
++
+ 	memcpy(llc->laddr.mac, llc->dev->dev_addr, IFHWADDRLEN);
+ 	memcpy(&llc->addr, addr, sizeof(llc->addr));
+ 	/* assign new connection to its SAP */
+@@ -311,6 +317,7 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+ 	sock_reset_flag(sk, SOCK_ZAPPED);
+ 	rc = 0;
+ out:
++	dev_put(dev);
+ 	return rc;
+ }
+ 
+@@ -333,6 +340,7 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ 	struct sockaddr_llc *addr = (struct sockaddr_llc *)uaddr;
+ 	struct sock *sk = sock->sk;
+ 	struct llc_sock *llc = llc_sk(sk);
++	struct net_device *dev = NULL;
+ 	struct llc_sap *sap;
+ 	int rc = -EINVAL;
+ 
+@@ -348,25 +356,26 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ 	rc = -ENODEV;
+ 	rcu_read_lock();
+ 	if (sk->sk_bound_dev_if) {
+-		llc->dev = dev_get_by_index_rcu(&init_net, sk->sk_bound_dev_if);
+-		if (llc->dev) {
++		dev = dev_get_by_index_rcu(&init_net, sk->sk_bound_dev_if);
++		if (dev) {
+ 			if (is_zero_ether_addr(addr->sllc_mac))
+-				memcpy(addr->sllc_mac, llc->dev->dev_addr,
++				memcpy(addr->sllc_mac, dev->dev_addr,
+ 				       IFHWADDRLEN);
+-			if (addr->sllc_arphrd != llc->dev->type ||
++			if (addr->sllc_arphrd != dev->type ||
+ 			    !ether_addr_equal(addr->sllc_mac,
+-					      llc->dev->dev_addr)) {
++					      dev->dev_addr)) {
+ 				rc = -EINVAL;
+-				llc->dev = NULL;
++				dev = NULL;
+ 			}
+ 		}
+-	} else
+-		llc->dev = dev_getbyhwaddr_rcu(&init_net, addr->sllc_arphrd,
++	} else {
++		dev = dev_getbyhwaddr_rcu(&init_net, addr->sllc_arphrd,
+ 					   addr->sllc_mac);
+-	if (llc->dev)
+-		dev_hold(llc->dev);
++	}
++	if (dev)
++		dev_hold(dev);
+ 	rcu_read_unlock();
+-	if (!llc->dev)
++	if (!dev)
+ 		goto out;
+ 	if (!addr->sllc_sap) {
+ 		rc = -EUSERS;
+@@ -399,6 +408,11 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ 			goto out_put;
+ 		}
+ 	}
++
++	/* Note: We do not expect errors from this point. */
++	llc->dev = dev;
++	dev = NULL;
++
+ 	llc->laddr.lsap = addr->sllc_sap;
+ 	memcpy(llc->laddr.mac, addr->sllc_mac, IFHWADDRLEN);
+ 	memcpy(&llc->addr, addr, sizeof(llc->addr));
+@@ -409,6 +423,7 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ out_put:
+ 	llc_sap_put(sap);
+ out:
++	dev_put(dev);
+ 	release_sock(sk);
+ 	return rc;
+ }
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index d46ed4cbe7717..8010967a68741 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -2076,14 +2076,12 @@ static int copy_mesh_setup(struct ieee80211_if_mesh *ifmsh,
+ 		const struct mesh_setup *setup)
+ {
+ 	u8 *new_ie;
+-	const u8 *old_ie;
+ 	struct ieee80211_sub_if_data *sdata = container_of(ifmsh,
+ 					struct ieee80211_sub_if_data, u.mesh);
+ 	int i;
+ 
+ 	/* allocate information elements */
+ 	new_ie = NULL;
+-	old_ie = ifmsh->ie;
+ 
+ 	if (setup->ie_len) {
+ 		new_ie = kmemdup(setup->ie, setup->ie_len,
+@@ -2093,7 +2091,6 @@ static int copy_mesh_setup(struct ieee80211_if_mesh *ifmsh,
+ 	}
+ 	ifmsh->ie_len = setup->ie_len;
+ 	ifmsh->ie = new_ie;
+-	kfree(old_ie);
+ 
+ 	/* now copy the rest of the setup parameters */
+ 	ifmsh->mesh_id_len = setup->mesh_id_len;
+diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
+index dbc2e945c98eb..a61b5bf5aa0fb 100644
+--- a/net/netfilter/nf_tables_core.c
++++ b/net/netfilter/nf_tables_core.c
+@@ -162,7 +162,7 @@ nft_do_chain(struct nft_pktinfo *pkt, void *priv)
+ 	struct nft_rule *const *rules;
+ 	const struct nft_rule *rule;
+ 	const struct nft_expr *expr, *last;
+-	struct nft_regs regs;
++	struct nft_regs regs = {};
+ 	unsigned int stackptr = 0;
+ 	struct nft_jumpstack jumpstack[NFT_JUMP_STACK_SIZE];
+ 	bool genbit = READ_ONCE(net->nft.gencursor);
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index d79febeebf0c5..f88de74da1eb3 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -774,6 +774,11 @@ static int snd_pcm_oss_period_size(struct snd_pcm_substream *substream,
+ 
+ 	if (oss_period_size < 16)
+ 		return -EINVAL;
++
++	/* don't allocate too large period; 1MB period must be enough */
++	if (oss_period_size > 1024 * 1024)
++		return -ENOMEM;
++
+ 	runtime->oss.period_bytes = oss_period_size;
+ 	runtime->oss.period_frames = 1;
+ 	runtime->oss.periods = oss_periods;
+@@ -1042,10 +1047,9 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
+ 			goto failure;
+ 	}
+ #endif
+-	oss_period_size *= oss_frame_size;
+-
+-	oss_buffer_size = oss_period_size * runtime->oss.periods;
+-	if (oss_buffer_size < 0) {
++	oss_period_size = array_size(oss_period_size, oss_frame_size);
++	oss_buffer_size = array_size(oss_period_size, runtime->oss.periods);
++	if (oss_buffer_size <= 0) {
+ 		err = -EINVAL;
+ 		goto failure;
+ 	}
+diff --git a/sound/core/oss/pcm_plugin.c b/sound/core/oss/pcm_plugin.c
+index d5ca161d588c5..1e2d1b35c1946 100644
+--- a/sound/core/oss/pcm_plugin.c
++++ b/sound/core/oss/pcm_plugin.c
+@@ -61,7 +61,10 @@ static int snd_pcm_plugin_alloc(struct snd_pcm_plugin *plugin, snd_pcm_uframes_t
+ 	}
+ 	if ((width = snd_pcm_format_physical_width(format->format)) < 0)
+ 		return width;
+-	size = frames * format->channels * width;
++	size = array3_size(frames, format->channels, width);
++	/* check for too large period size once again */
++	if (size > 1024 * 1024)
++		return -ENOMEM;
+ 	if (snd_BUG_ON(size % 8))
+ 		return -ENXIO;
+ 	size /= 8;
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index a8ae5928decda..8e5c6b227e52d 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -969,6 +969,7 @@ int snd_pcm_attach_substream(struct snd_pcm *pcm, int stream,
+ 	init_waitqueue_head(&runtime->tsleep);
+ 
+ 	runtime->status->state = SNDRV_PCM_STATE_OPEN;
++	mutex_init(&runtime->buffer_mutex);
+ 
+ 	substream->runtime = runtime;
+ 	substream->private_data = pcm->private_data;
+@@ -1002,6 +1003,7 @@ void snd_pcm_detach_substream(struct snd_pcm_substream *substream)
+ 	} else {
+ 		substream->runtime = NULL;
+ 	}
++	mutex_destroy(&runtime->buffer_mutex);
+ 	kfree(runtime);
+ 	put_pid(substream->pid);
+ 	substream->pid = NULL;
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index 5e04c4b9e0239..45afef73275f0 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1871,9 +1871,11 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
+ 		if (avail >= runtime->twake)
+ 			break;
+ 		snd_pcm_stream_unlock_irq(substream);
++		mutex_unlock(&runtime->buffer_mutex);
+ 
+ 		tout = schedule_timeout(wait_time);
+ 
++		mutex_lock(&runtime->buffer_mutex);
+ 		snd_pcm_stream_lock_irq(substream);
+ 		set_current_state(TASK_INTERRUPTIBLE);
+ 		switch (runtime->status->state) {
+@@ -2167,6 +2169,7 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ 
+ 	nonblock = !!(substream->f_flags & O_NONBLOCK);
+ 
++	mutex_lock(&runtime->buffer_mutex);
+ 	snd_pcm_stream_lock_irq(substream);
+ 	err = pcm_accessible_state(runtime);
+ 	if (err < 0)
+@@ -2254,6 +2257,7 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ 	if (xfer > 0 && err >= 0)
+ 		snd_pcm_update_state(substream, runtime);
+ 	snd_pcm_stream_unlock_irq(substream);
++	mutex_unlock(&runtime->buffer_mutex);
+ 	return xfer > 0 ? (snd_pcm_sframes_t)xfer : err;
+ }
+ EXPORT_SYMBOL(__snd_pcm_lib_xfer);
+diff --git a/sound/core/pcm_memory.c b/sound/core/pcm_memory.c
+index 4f03ba8ed0ae5..a9a0d74f31656 100644
+--- a/sound/core/pcm_memory.c
++++ b/sound/core/pcm_memory.c
+@@ -164,19 +164,20 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
+ 	size_t size;
+ 	struct snd_dma_buffer new_dmab;
+ 
++	mutex_lock(&substream->pcm->open_mutex);
+ 	if (substream->runtime) {
+ 		buffer->error = -EBUSY;
+-		return;
++		goto unlock;
+ 	}
+ 	if (!snd_info_get_line(buffer, line, sizeof(line))) {
+ 		snd_info_get_str(str, line, sizeof(str));
+ 		size = simple_strtoul(str, NULL, 10) * 1024;
+ 		if ((size != 0 && size < 8192) || size > substream->dma_max) {
+ 			buffer->error = -EINVAL;
+-			return;
++			goto unlock;
+ 		}
+ 		if (substream->dma_buffer.bytes == size)
+-			return;
++			goto unlock;
+ 		memset(&new_dmab, 0, sizeof(new_dmab));
+ 		new_dmab.dev = substream->dma_buffer.dev;
+ 		if (size > 0) {
+@@ -185,7 +186,7 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
+ 					   substream->dma_buffer.dev.dev,
+ 					   size, &new_dmab) < 0) {
+ 				buffer->error = -ENOMEM;
+-				return;
++				goto unlock;
+ 			}
+ 			substream->buffer_bytes_max = size;
+ 		} else {
+@@ -197,6 +198,8 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
+ 	} else {
+ 		buffer->error = -EINVAL;
+ 	}
++ unlock:
++	mutex_unlock(&substream->pcm->open_mutex);
+ }
+ 
+ static inline void preallocate_info_init(struct snd_pcm_substream *substream)
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index c5ef5182fcf19..6579802c55116 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -667,33 +667,40 @@ static int snd_pcm_hw_params_choose(struct snd_pcm_substream *pcm,
+ 	return 0;
+ }
+ 
++#if IS_ENABLED(CONFIG_SND_PCM_OSS)
++#define is_oss_stream(substream)	((substream)->oss.oss)
++#else
++#define is_oss_stream(substream)	false
++#endif
++
+ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ 			     struct snd_pcm_hw_params *params)
+ {
+ 	struct snd_pcm_runtime *runtime;
+-	int err, usecs;
++	int err = 0, usecs;
+ 	unsigned int bits;
+ 	snd_pcm_uframes_t frames;
+ 
+ 	if (PCM_RUNTIME_CHECK(substream))
+ 		return -ENXIO;
+ 	runtime = substream->runtime;
++	mutex_lock(&runtime->buffer_mutex);
+ 	snd_pcm_stream_lock_irq(substream);
+ 	switch (runtime->status->state) {
+ 	case SNDRV_PCM_STATE_OPEN:
+ 	case SNDRV_PCM_STATE_SETUP:
+ 	case SNDRV_PCM_STATE_PREPARED:
++		if (!is_oss_stream(substream) &&
++		    atomic_read(&substream->mmap_count))
++			err = -EBADFD;
+ 		break;
+ 	default:
+-		snd_pcm_stream_unlock_irq(substream);
+-		return -EBADFD;
++		err = -EBADFD;
++		break;
+ 	}
+ 	snd_pcm_stream_unlock_irq(substream);
+-#if IS_ENABLED(CONFIG_SND_PCM_OSS)
+-	if (!substream->oss.oss)
+-#endif
+-		if (atomic_read(&substream->mmap_count))
+-			return -EBADFD;
++	if (err)
++		goto unlock;
+ 
+ 	snd_pcm_sync_stop(substream, true);
+ 
+@@ -780,16 +787,21 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ 	if ((usecs = period_to_usecs(runtime)) >= 0)
+ 		cpu_latency_qos_add_request(&substream->latency_pm_qos_req,
+ 					    usecs);
+-	return 0;
++	err = 0;
+  _error:
+-	/* hardware might be unusable from this time,
+-	   so we force application to retry to set
+-	   the correct hardware parameter settings */
+-	snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
+-	if (substream->ops->hw_free != NULL)
+-		substream->ops->hw_free(substream);
+-	if (substream->managed_buffer_alloc)
+-		snd_pcm_lib_free_pages(substream);
++	if (err) {
++		/* hardware might be unusable from this time,
++		 * so we force application to retry to set
++		 * the correct hardware parameter settings
++		 */
++		snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
++		if (substream->ops->hw_free != NULL)
++			substream->ops->hw_free(substream);
++		if (substream->managed_buffer_alloc)
++			snd_pcm_lib_free_pages(substream);
++	}
++ unlock:
++	mutex_unlock(&runtime->buffer_mutex);
+ 	return err;
+ }
+ 
+@@ -829,26 +841,31 @@ static int do_hw_free(struct snd_pcm_substream *substream)
+ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime;
+-	int result;
++	int result = 0;
+ 
+ 	if (PCM_RUNTIME_CHECK(substream))
+ 		return -ENXIO;
+ 	runtime = substream->runtime;
++	mutex_lock(&runtime->buffer_mutex);
+ 	snd_pcm_stream_lock_irq(substream);
+ 	switch (runtime->status->state) {
+ 	case SNDRV_PCM_STATE_SETUP:
+ 	case SNDRV_PCM_STATE_PREPARED:
++		if (atomic_read(&substream->mmap_count))
++			result = -EBADFD;
+ 		break;
+ 	default:
+-		snd_pcm_stream_unlock_irq(substream);
+-		return -EBADFD;
++		result = -EBADFD;
++		break;
+ 	}
+ 	snd_pcm_stream_unlock_irq(substream);
+-	if (atomic_read(&substream->mmap_count))
+-		return -EBADFD;
++	if (result)
++		goto unlock;
+ 	result = do_hw_free(substream);
+ 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
+ 	cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
++ unlock:
++	mutex_unlock(&runtime->buffer_mutex);
+ 	return result;
+ }
+ 
+@@ -1154,15 +1171,17 @@ struct action_ops {
+ static int snd_pcm_action_group(const struct action_ops *ops,
+ 				struct snd_pcm_substream *substream,
+ 				snd_pcm_state_t state,
+-				bool do_lock)
++				bool stream_lock)
+ {
+ 	struct snd_pcm_substream *s = NULL;
+ 	struct snd_pcm_substream *s1;
+ 	int res = 0, depth = 1;
+ 
+ 	snd_pcm_group_for_each_entry(s, substream) {
+-		if (do_lock && s != substream) {
+-			if (s->pcm->nonatomic)
++		if (s != substream) {
++			if (!stream_lock)
++				mutex_lock_nested(&s->runtime->buffer_mutex, depth);
++			else if (s->pcm->nonatomic)
+ 				mutex_lock_nested(&s->self_group.mutex, depth);
+ 			else
+ 				spin_lock_nested(&s->self_group.lock, depth);
+@@ -1190,18 +1209,18 @@ static int snd_pcm_action_group(const struct action_ops *ops,
+ 		ops->post_action(s, state);
+ 	}
+  _unlock:
+-	if (do_lock) {
+-		/* unlock streams */
+-		snd_pcm_group_for_each_entry(s1, substream) {
+-			if (s1 != substream) {
+-				if (s1->pcm->nonatomic)
+-					mutex_unlock(&s1->self_group.mutex);
+-				else
+-					spin_unlock(&s1->self_group.lock);
+-			}
+-			if (s1 == s)	/* end */
+-				break;
++	/* unlock streams */
++	snd_pcm_group_for_each_entry(s1, substream) {
++		if (s1 != substream) {
++			if (!stream_lock)
++				mutex_unlock(&s1->runtime->buffer_mutex);
++			else if (s1->pcm->nonatomic)
++				mutex_unlock(&s1->self_group.mutex);
++			else
++				spin_unlock(&s1->self_group.lock);
+ 		}
++		if (s1 == s)	/* end */
++			break;
+ 	}
+ 	return res;
+ }
+@@ -1331,10 +1350,12 @@ static int snd_pcm_action_nonatomic(const struct action_ops *ops,
+ 
+ 	/* Guarantee the group members won't change during non-atomic action */
+ 	down_read(&snd_pcm_link_rwsem);
++	mutex_lock(&substream->runtime->buffer_mutex);
+ 	if (snd_pcm_stream_linked(substream))
+ 		res = snd_pcm_action_group(ops, substream, state, false);
+ 	else
+ 		res = snd_pcm_action_single(ops, substream, state);
++	mutex_unlock(&substream->runtime->buffer_mutex);
+ 	up_read(&snd_pcm_link_rwsem);
+ 	return res;
+ }
+@@ -1829,11 +1850,13 @@ static int snd_pcm_do_reset(struct snd_pcm_substream *substream,
+ 	int err = snd_pcm_ops_ioctl(substream, SNDRV_PCM_IOCTL1_RESET, NULL);
+ 	if (err < 0)
+ 		return err;
++	snd_pcm_stream_lock_irq(substream);
+ 	runtime->hw_ptr_base = 0;
+ 	runtime->hw_ptr_interrupt = runtime->status->hw_ptr -
+ 		runtime->status->hw_ptr % runtime->period_size;
+ 	runtime->silence_start = runtime->status->hw_ptr;
+ 	runtime->silence_filled = 0;
++	snd_pcm_stream_unlock_irq(substream);
+ 	return 0;
+ }
+ 
+@@ -1841,10 +1864,12 @@ static void snd_pcm_post_reset(struct snd_pcm_substream *substream,
+ 			       snd_pcm_state_t state)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
++	snd_pcm_stream_lock_irq(substream);
+ 	runtime->control->appl_ptr = runtime->status->hw_ptr;
+ 	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK &&
+ 	    runtime->silence_size > 0)
+ 		snd_pcm_playback_silence(substream, ULONG_MAX);
++	snd_pcm_stream_unlock_irq(substream);
+ }
+ 
+ static const struct action_ops snd_pcm_action_reset = {
+diff --git a/sound/pci/ac97/ac97_codec.c b/sound/pci/ac97/ac97_codec.c
+index 012a7ee849e8a..963731cf0d8c8 100644
+--- a/sound/pci/ac97/ac97_codec.c
++++ b/sound/pci/ac97/ac97_codec.c
+@@ -938,8 +938,8 @@ static int snd_ac97_ad18xx_pcm_get_volume(struct snd_kcontrol *kcontrol, struct
+ 	int codec = kcontrol->private_value & 3;
+ 	
+ 	mutex_lock(&ac97->page_mutex);
+-	ucontrol->value.integer.value[0] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 0) & 31);
+-	ucontrol->value.integer.value[1] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 8) & 31);
++	ucontrol->value.integer.value[0] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 8) & 31);
++	ucontrol->value.integer.value[1] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 0) & 31);
+ 	mutex_unlock(&ac97->page_mutex);
+ 	return 0;
+ }
+diff --git a/sound/pci/cmipci.c b/sound/pci/cmipci.c
+index 7363d61eaec23..120dd8b33ac81 100644
+--- a/sound/pci/cmipci.c
++++ b/sound/pci/cmipci.c
+@@ -302,7 +302,6 @@ MODULE_PARM_DESC(joystick_port, "Joystick port address.");
+ #define CM_MICGAINZ		0x01	/* mic boost */
+ #define CM_MICGAINZ_SHIFT	0
+ 
+-#define CM_REG_MIXER3		0x24
+ #define CM_REG_AUX_VOL		0x26
+ #define CM_VAUXL_MASK		0xf0
+ #define CM_VAUXR_MASK		0x0f
+@@ -3291,7 +3290,7 @@ static void snd_cmipci_remove(struct pci_dev *pci)
+  */
+ static const unsigned char saved_regs[] = {
+ 	CM_REG_FUNCTRL1, CM_REG_CHFORMAT, CM_REG_LEGACY_CTRL, CM_REG_MISC_CTRL,
+-	CM_REG_MIXER0, CM_REG_MIXER1, CM_REG_MIXER2, CM_REG_MIXER3, CM_REG_PLL,
++	CM_REG_MIXER0, CM_REG_MIXER1, CM_REG_MIXER2, CM_REG_AUX_VOL, CM_REG_PLL,
+ 	CM_REG_CH0_FRAME1, CM_REG_CH0_FRAME2,
+ 	CM_REG_CH1_FRAME1, CM_REG_CH1_FRAME2, CM_REG_EXT_MISC,
+ 	CM_REG_INT_STATUS, CM_REG_INT_HLDCLR, CM_REG_FUNCTRL0,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index ed0cfcb05ef0d..3bd37c02ce0ed 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8801,6 +8801,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
++	SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ 	SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+@@ -8884,6 +8885,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x8561, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1558, 0x8562, "Clevo NH[5|7][0-9]RZ[Q]", ALC269_FIXUP_DMIC),
+ 	SND_PCI_QUIRK(0x1558, 0x8668, "Clevo NP50B[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x866d, "Clevo NP5[05]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x867d, "Clevo NP7[01]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8680, "Clevo NJ50LU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME),
+ 	SND_PCI_QUIRK(0x1558, 0x8a20, "Clevo NH55DCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -10839,6 +10842,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
+ 	SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
++	SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
+ 	SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
+diff --git a/sound/soc/sti/uniperif_player.c b/sound/soc/sti/uniperif_player.c
+index 2ed92c990b97c..dd9013c476649 100644
+--- a/sound/soc/sti/uniperif_player.c
++++ b/sound/soc/sti/uniperif_player.c
+@@ -91,7 +91,7 @@ static irqreturn_t uni_player_irq_handler(int irq, void *dev_id)
+ 			SET_UNIPERIF_ITM_BCLR_FIFO_ERROR(player);
+ 
+ 			/* Stop the player */
+-			snd_pcm_stop_xrun(player->substream);
++			snd_pcm_stop(player->substream, SNDRV_PCM_STATE_XRUN);
+ 		}
+ 
+ 		ret = IRQ_HANDLED;
+@@ -105,7 +105,7 @@ static irqreturn_t uni_player_irq_handler(int irq, void *dev_id)
+ 		SET_UNIPERIF_ITM_BCLR_DMA_ERROR(player);
+ 
+ 		/* Stop the player */
+-		snd_pcm_stop_xrun(player->substream);
++		snd_pcm_stop(player->substream, SNDRV_PCM_STATE_XRUN);
+ 
+ 		ret = IRQ_HANDLED;
+ 	}
+@@ -138,7 +138,7 @@ static irqreturn_t uni_player_irq_handler(int irq, void *dev_id)
+ 		dev_err(player->dev, "Underflow recovery failed\n");
+ 
+ 		/* Stop the player */
+-		snd_pcm_stop_xrun(player->substream);
++		snd_pcm_stop(player->substream, SNDRV_PCM_STATE_XRUN);
+ 
+ 		ret = IRQ_HANDLED;
+ 	}
+diff --git a/sound/soc/sti/uniperif_reader.c b/sound/soc/sti/uniperif_reader.c
+index 136059331211d..065c5f0d1f5f0 100644
+--- a/sound/soc/sti/uniperif_reader.c
++++ b/sound/soc/sti/uniperif_reader.c
+@@ -65,7 +65,7 @@ static irqreturn_t uni_reader_irq_handler(int irq, void *dev_id)
+ 	if (unlikely(status & UNIPERIF_ITS_FIFO_ERROR_MASK(reader))) {
+ 		dev_err(reader->dev, "FIFO error detected\n");
+ 
+-		snd_pcm_stop_xrun(reader->substream);
++		snd_pcm_stop(reader->substream, SNDRV_PCM_STATE_XRUN);
+ 
+ 		ret = IRQ_HANDLED;
+ 	}
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 8f6823df944ff..81ace832d7e42 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -542,6 +542,16 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.id = USB_ID(0x25c4, 0x0003),
+ 		.map = scms_usb3318_map,
+ 	},
++	{
++		/* Corsair Virtuoso SE Latest (wired mode) */
++		.id = USB_ID(0x1b1c, 0x0a3f),
++		.map = corsair_virtuoso_map,
++	},
++	{
++		/* Corsair Virtuoso SE Latest (wireless mode) */
++		.id = USB_ID(0x1b1c, 0x0a40),
++		.map = corsair_virtuoso_map,
++	},
+ 	{
+ 		.id = USB_ID(0x30be, 0x0101), /*  Schiit Hel */
+ 		.ignore_ctl_error = 1,
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 86fdd669f3fd7..99f2203bf51f1 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -3135,9 +3135,10 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ 		if (unitid == 7 && cval->control == UAC_FU_VOLUME)
+ 			snd_dragonfly_quirk_db_scale(mixer, cval, kctl);
+ 		break;
+-	/* lowest playback value is muted on C-Media devices */
+-	case USB_ID(0x0d8c, 0x000c):
+-	case USB_ID(0x0d8c, 0x0014):
++	/* lowest playback value is muted on some devices */
++	case USB_ID(0x0d8c, 0x000c): /* C-Media */
++	case USB_ID(0x0d8c, 0x0014): /* C-Media */
++	case USB_ID(0x19f7, 0x0003): /* RODE NT-USB */
+ 		if (strstr(kctl->id.name, "Playback"))
+ 			cval->min_mute = 1;
+ 		break;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-04-08 13:16 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-04-08 13:16 UTC (permalink / raw
  To: gentoo-commits

commit:     28c96975c24fae2b42b0573ff32c1abb76602038
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr  8 13:15:49 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr  8 13:15:49 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=28c96975

Linux patch 5.10.110

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1109_linux-5.10.110.patch | 21603 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 21607 insertions(+)

diff --git a/0000_README b/0000_README
index e51b86fc..958564ab 100644
--- a/0000_README
+++ b/0000_README
@@ -479,6 +479,10 @@ Patch:  1108_linux-5.10.109.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.109
 
+Patch:  1109_linux-5.10.110.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.110
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1109_linux-5.10.110.patch b/1109_linux-5.10.110.patch
new file mode 100644
index 00000000..6efeb944
--- /dev/null
+++ b/1109_linux-5.10.110.patch
@@ -0,0 +1,21603 @@
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index 7d5e8a67c775f..e338306f45873 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -787,6 +787,7 @@ bit 1  print system memory info
+ bit 2  print timer info
+ bit 3  print locks info if ``CONFIG_LOCKDEP`` is on
+ bit 4  print ftrace buffer
++bit 5  print all printk messages in buffer
+ =====  ============================================
+ 
+ So for example to print tasks and memory info on panic, user can::
+diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst
+index 1887d92e8e926..17706dc91ec9f 100644
+--- a/Documentation/core-api/dma-attributes.rst
++++ b/Documentation/core-api/dma-attributes.rst
+@@ -130,3 +130,11 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
+ subsystem that the buffer is fully accessible at the elevated privilege
+ level (and ideally inaccessible or at least read-only at the
+ lesser-privileged levels).
++
++DMA_ATTR_OVERWRITE
++------------------
++
++This is a hint to the DMA-mapping subsystem that the device is expected to
++overwrite the entire mapped size, thus the caller does not require any of the
++previous buffer contents to be preserved. This allows bounce-buffering
++implementations to optimise DMA_FROM_DEVICE transfers.
+diff --git a/Documentation/devicetree/bindings/mtd/nand-controller.yaml b/Documentation/devicetree/bindings/mtd/nand-controller.yaml
+index b29050fd7470a..6fe2a3d8ee6b8 100644
+--- a/Documentation/devicetree/bindings/mtd/nand-controller.yaml
++++ b/Documentation/devicetree/bindings/mtd/nand-controller.yaml
+@@ -44,7 +44,7 @@ patternProperties:
+     properties:
+       reg:
+         description:
+-          Contains the native Ready/Busy IDs.
++          Contains the chip-select IDs.
+ 
+       nand-ecc-mode:
+         description:
+@@ -174,6 +174,6 @@ examples:
+         nand-ecc-mode = "soft";
+         nand-ecc-algo = "bch";
+ 
+-        /* controller specific properties */
++        /* NAND chip specific properties */
+       };
+     };
+diff --git a/Documentation/devicetree/bindings/spi/spi-mxic.txt b/Documentation/devicetree/bindings/spi/spi-mxic.txt
+index 529f2dab2648a..7bcbb229b78bb 100644
+--- a/Documentation/devicetree/bindings/spi/spi-mxic.txt
++++ b/Documentation/devicetree/bindings/spi/spi-mxic.txt
+@@ -8,11 +8,13 @@ Required properties:
+ - reg: should contain 2 entries, one for the registers and one for the direct
+        mapping area
+ - reg-names: should contain "regs" and "dirmap"
+-- interrupts: interrupt line connected to the SPI controller
+ - clock-names: should contain "ps_clk", "send_clk" and "send_dly_clk"
+ - clocks: should contain 3 entries for the "ps_clk", "send_clk" and
+ 	  "send_dly_clk" clocks
+ 
++Optional properties:
++- interrupts: interrupt line connected to the SPI controller
++
+ Example:
+ 
+ 	spi@43c30000 {
+diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst
+index 003c865e9c212..fbcb48bc2a903 100644
+--- a/Documentation/process/stable-kernel-rules.rst
++++ b/Documentation/process/stable-kernel-rules.rst
+@@ -168,7 +168,16 @@ Trees
+  - The finalized and tagged releases of all stable kernels can be found
+    in separate branches per version at:
+ 
+-	https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++	https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
++
++ - The release candidate of all stable kernel versions can be found at:
++
++        https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git/
++
++   .. warning::
++      The -stable-rc tree is a snapshot in time of the stable-queue tree and
++      will change frequently, hence will be rebased often. It should only be
++      used for testing purposes (e.g. to be consumed by CI systems).
+ 
+ 
+ Review committee
+diff --git a/Documentation/sound/hd-audio/models.rst b/Documentation/sound/hd-audio/models.rst
+index d25335993e553..9b52f50a68542 100644
+--- a/Documentation/sound/hd-audio/models.rst
++++ b/Documentation/sound/hd-audio/models.rst
+@@ -261,6 +261,10 @@ alc-sense-combo
+ huawei-mbx-stereo
+     Enable initialization verbs for Huawei MBX stereo speakers;
+     might be risky, try this at your own risk
++alc298-samsung-headphone
++    Samsung laptops with ALC298
++alc256-samsung-headphone
++    Samsung laptops with ALC256
+ 
+ ALC66x/67x/892
+ ==============
+diff --git a/Makefile b/Makefile
+index 3b462df1134b6..c4674e8bb3e81 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 109
++SUBLEVEL = 110
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c
+index 37f724ad5e399..a85e9c625ab50 100644
+--- a/arch/arc/kernel/process.c
++++ b/arch/arc/kernel/process.c
+@@ -43,7 +43,7 @@ SYSCALL_DEFINE0(arc_gettls)
+ 	return task_thread_info(current)->thr_ptr;
+ }
+ 
+-SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new)
++SYSCALL_DEFINE3(arc_usr_cmpxchg, int __user *, uaddr, int, expected, int, new)
+ {
+ 	struct pt_regs *regs = current_pt_regs();
+ 	u32 uval;
+diff --git a/arch/arm/boot/dts/bcm2711.dtsi b/arch/arm/boot/dts/bcm2711.dtsi
+index e46a3f4ad350a..b50229c3102fa 100644
+--- a/arch/arm/boot/dts/bcm2711.dtsi
++++ b/arch/arm/boot/dts/bcm2711.dtsi
+@@ -433,12 +433,26 @@
+ 		#size-cells = <0>;
+ 		enable-method = "brcm,bcm2836-smp"; // for ARM 32-bit
+ 
++		/* Source for d/i-cache-line-size and d/i-cache-sets
++		 * https://developer.arm.com/documentation/100095/0003
++		 * /Level-1-Memory-System/About-the-L1-memory-system?lang=en
++		 * Source for d/i-cache-size
++		 * https://www.raspberrypi.com/documentation/computers
++		 * /processors.html#bcm2711
++		 */
+ 		cpu0: cpu@0 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a72";
+ 			reg = <0>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000d8>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			i-cache-size = <0xc000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
++			next-level-cache = <&l2>;
+ 		};
+ 
+ 		cpu1: cpu@1 {
+@@ -447,6 +461,13 @@
+ 			reg = <1>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000e0>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			i-cache-size = <0xc000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
++			next-level-cache = <&l2>;
+ 		};
+ 
+ 		cpu2: cpu@2 {
+@@ -455,6 +476,13 @@
+ 			reg = <2>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000e8>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			i-cache-size = <0xc000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
++			next-level-cache = <&l2>;
+ 		};
+ 
+ 		cpu3: cpu@3 {
+@@ -463,6 +491,28 @@
+ 			reg = <3>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000f0>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			i-cache-size = <0xc000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
++			next-level-cache = <&l2>;
++		};
++
++		/* Source for d/i-cache-line-size and d/i-cache-sets
++		 *  https://developer.arm.com/documentation/100095/0003
++		 *  /Level-2-Memory-System/About-the-L2-memory-system?lang=en
++		 *  Source for d/i-cache-size
++		 *  https://www.raspberrypi.com/documentation/computers
++		 *  /processors.html#bcm2711
++		 */
++		l2: l2-cache0 {
++			compatible = "cache";
++			cache-size = <0x100000>;
++			cache-line-size = <64>;
++			cache-sets = <1024>; // 1MiB(size)/64(line-size)=16384ways/16-way set
++			cache-level = <2>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/bcm2837.dtsi b/arch/arm/boot/dts/bcm2837.dtsi
+index 0199ec98cd616..5dbdebc462594 100644
+--- a/arch/arm/boot/dts/bcm2837.dtsi
++++ b/arch/arm/boot/dts/bcm2837.dtsi
+@@ -40,12 +40,26 @@
+ 		#size-cells = <0>;
+ 		enable-method = "brcm,bcm2836-smp"; // for ARM 32-bit
+ 
++		/* Source for d/i-cache-line-size and d/i-cache-sets
++		 * https://developer.arm.com/documentation/ddi0500/e/level-1-memory-system
++		 * /about-the-l1-memory-system?lang=en
++		 *
++		 * Source for d/i-cache-size
++		 * https://magpi.raspberrypi.com/articles/raspberry-pi-3-specs-benchmarks
++		 */
+ 		cpu0: cpu@0 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a53";
+ 			reg = <0>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000d8>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
++			i-cache-size = <0x8000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			next-level-cache = <&l2>;
+ 		};
+ 
+ 		cpu1: cpu@1 {
+@@ -54,6 +68,13 @@
+ 			reg = <1>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000e0>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
++			i-cache-size = <0x8000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			next-level-cache = <&l2>;
+ 		};
+ 
+ 		cpu2: cpu@2 {
+@@ -62,6 +83,13 @@
+ 			reg = <2>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000e8>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
++			i-cache-size = <0x8000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			next-level-cache = <&l2>;
+ 		};
+ 
+ 		cpu3: cpu@3 {
+@@ -70,6 +98,27 @@
+ 			reg = <3>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000f0>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
++			i-cache-size = <0x8000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			next-level-cache = <&l2>;
++		};
++
++		/* Source for cache-line-size + cache-sets
++		 * https://developer.arm.com/documentation/ddi0500
++		 * /e/level-2-memory-system/about-the-l2-memory-system?lang=en
++		 * Source for cache-size
++		 * https://datasheets.raspberrypi.com/cm/cm1-and-cm3-datasheet.pdf
++		 */
++		l2: l2-cache0 {
++			compatible = "cache";
++			cache-size = <0x80000>;
++			cache-line-size = <64>;
++			cache-sets = <512>; // 512KiB(size)/64(line-size)=8192ways/16-way set
++			cache-level = <2>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
+index 30b72f4318501..f8c0eee7a62b9 100644
+--- a/arch/arm/boot/dts/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/dra7-l4.dtsi
+@@ -3448,8 +3448,7 @@
+ 				ti,timer-pwm;
+ 			};
+ 		};
+-
+-		target-module@2c000 {			/* 0x4882c000, ap 17 02.0 */
++		timer15_target: target-module@2c000 {	/* 0x4882c000, ap 17 02.0 */
+ 			compatible = "ti,sysc-omap4-timer", "ti,sysc";
+ 			reg = <0x2c000 0x4>,
+ 			      <0x2c010 0x4>;
+@@ -3477,7 +3476,7 @@
+ 			};
+ 		};
+ 
+-		target-module@2e000 {			/* 0x4882e000, ap 19 14.0 */
++		timer16_target: target-module@2e000 {	/* 0x4882e000, ap 19 14.0 */
+ 			compatible = "ti,sysc-omap4-timer", "ti,sysc";
+ 			reg = <0x2e000 0x4>,
+ 			      <0x2e010 0x4>;
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index 7ecf8f86ac747..9989321366560 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -1093,20 +1093,20 @@
+ };
+ 
+ /* Local timers, see ARM architected timer wrap erratum i940 */
+-&timer3_target {
++&timer15_target {
+ 	ti,no-reset-on-init;
+ 	ti,no-idle;
+ 	timer@0 {
+-		assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER3_CLKCTRL 24>;
++		assigned-clocks = <&l4per3_clkctrl DRA7_L4PER3_TIMER15_CLKCTRL 24>;
+ 		assigned-clock-parents = <&timer_sys_clk_div>;
+ 	};
+ };
+ 
+-&timer4_target {
++&timer16_target {
+ 	ti,no-reset-on-init;
+ 	ti,no-idle;
+ 	timer@0 {
+-		assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER4_CLKCTRL 24>;
++		assigned-clocks = <&l4per3_clkctrl DRA7_L4PER3_TIMER16_CLKCTRL 24>;
+ 		assigned-clock-parents = <&timer_sys_clk_div>;
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/exynos5250-pinctrl.dtsi b/arch/arm/boot/dts/exynos5250-pinctrl.dtsi
+index d31a68672bfac..d7d756614edd1 100644
+--- a/arch/arm/boot/dts/exynos5250-pinctrl.dtsi
++++ b/arch/arm/boot/dts/exynos5250-pinctrl.dtsi
+@@ -260,7 +260,7 @@
+ 	};
+ 
+ 	uart3_data: uart3-data {
+-		samsung,pins = "gpa1-4", "gpa1-4";
++		samsung,pins = "gpa1-4", "gpa1-5";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
+ 		samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
+diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+index d0e48c10aec2b..572198b6834e6 100644
+--- a/arch/arm/boot/dts/exynos5250-smdk5250.dts
++++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+@@ -118,6 +118,9 @@
+ 	status = "okay";
+ 	ddc = <&i2c_2>;
+ 	hpd-gpios = <&gpx3 7 GPIO_ACTIVE_HIGH>;
++	vdd-supply = <&ldo8_reg>;
++	vdd_osc-supply = <&ldo10_reg>;
++	vdd_pll-supply = <&ldo8_reg>;
+ };
+ 
+ &i2c_0 {
+diff --git a/arch/arm/boot/dts/exynos5420-smdk5420.dts b/arch/arm/boot/dts/exynos5420-smdk5420.dts
+index 4e49d8095b292..741294bd564e7 100644
+--- a/arch/arm/boot/dts/exynos5420-smdk5420.dts
++++ b/arch/arm/boot/dts/exynos5420-smdk5420.dts
+@@ -124,6 +124,9 @@
+ 	hpd-gpios = <&gpx3 7 GPIO_ACTIVE_HIGH>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&hdmi_hpd_irq>;
++	vdd-supply = <&ldo6_reg>;
++	vdd_osc-supply = <&ldo7_reg>;
++	vdd_pll-supply = <&ldo6_reg>;
+ };
+ 
+ &hsi2c_4 {
+diff --git a/arch/arm/boot/dts/imx53-m53menlo.dts b/arch/arm/boot/dts/imx53-m53menlo.dts
+index 4f88e96d81ddb..d5c68d1ea707c 100644
+--- a/arch/arm/boot/dts/imx53-m53menlo.dts
++++ b/arch/arm/boot/dts/imx53-m53menlo.dts
+@@ -53,6 +53,31 @@
+ 		};
+ 	};
+ 
++	lvds-decoder {
++		compatible = "ti,ds90cf364a", "lvds-decoder";
++
++		ports {
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			port@0 {
++				reg = <0>;
++
++				lvds_decoder_in: endpoint {
++					remote-endpoint = <&lvds0_out>;
++				};
++			};
++
++			port@1 {
++				reg = <1>;
++
++				lvds_decoder_out: endpoint {
++					remote-endpoint = <&panel_in>;
++				};
++			};
++		};
++	};
++
+ 	panel {
+ 		compatible = "edt,etm0700g0dh6";
+ 		pinctrl-0 = <&pinctrl_display_gpio>;
+@@ -61,7 +86,7 @@
+ 
+ 		port {
+ 			panel_in: endpoint {
+-				remote-endpoint = <&lvds0_out>;
++				remote-endpoint = <&lvds_decoder_out>;
+ 			};
+ 		};
+ 	};
+@@ -450,7 +475,7 @@
+ 			reg = <2>;
+ 
+ 			lvds0_out: endpoint {
+-				remote-endpoint = <&panel_in>;
++				remote-endpoint = <&lvds_decoder_in>;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/imx7-colibri.dtsi b/arch/arm/boot/dts/imx7-colibri.dtsi
+index 62b771c1d5a9a..f1c60b0cb143e 100644
+--- a/arch/arm/boot/dts/imx7-colibri.dtsi
++++ b/arch/arm/boot/dts/imx7-colibri.dtsi
+@@ -40,7 +40,7 @@
+ 
+ 		dailink_master: simple-audio-card,codec {
+ 			sound-dai = <&codec>;
+-			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		};
+ 	};
+ };
+@@ -293,7 +293,7 @@
+ 		compatible = "fsl,sgtl5000";
+ 		#sound-dai-cells = <0>;
+ 		reg = <0x0a>;
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_sai1_mclk>;
+ 		VDDA-supply = <&reg_module_3v3_avdd>;
+diff --git a/arch/arm/boot/dts/imx7-mba7.dtsi b/arch/arm/boot/dts/imx7-mba7.dtsi
+index 50abf18ad30b2..887497e3bb4b8 100644
+--- a/arch/arm/boot/dts/imx7-mba7.dtsi
++++ b/arch/arm/boot/dts/imx7-mba7.dtsi
+@@ -250,7 +250,7 @@
+ 	tlv320aic32x4: audio-codec@18 {
+ 		compatible = "ti,tlv320aic32x4";
+ 		reg = <0x18>;
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		clock-names = "mclk";
+ 		ldoin-supply = <&reg_audio_3v3>;
+ 		iov-supply = <&reg_audio_3v3>;
+diff --git a/arch/arm/boot/dts/imx7d-nitrogen7.dts b/arch/arm/boot/dts/imx7d-nitrogen7.dts
+index e0751e6ba3c0f..a31de900139d6 100644
+--- a/arch/arm/boot/dts/imx7d-nitrogen7.dts
++++ b/arch/arm/boot/dts/imx7d-nitrogen7.dts
+@@ -288,7 +288,7 @@
+ 	codec: wm8960@1a {
+ 		compatible = "wlf,wm8960";
+ 		reg = <0x1a>;
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		clock-names = "mclk";
+ 		wlf,shared-lrclk;
+ 	};
+diff --git a/arch/arm/boot/dts/imx7d-pico-hobbit.dts b/arch/arm/boot/dts/imx7d-pico-hobbit.dts
+index 7b2198a9372c6..d917dc4f2f227 100644
+--- a/arch/arm/boot/dts/imx7d-pico-hobbit.dts
++++ b/arch/arm/boot/dts/imx7d-pico-hobbit.dts
+@@ -31,7 +31,7 @@
+ 
+ 		dailink_master: simple-audio-card,codec {
+ 			sound-dai = <&sgtl5000>;
+-			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		};
+ 	};
+ };
+@@ -41,7 +41,7 @@
+ 		#sound-dai-cells = <0>;
+ 		reg = <0x0a>;
+ 		compatible = "fsl,sgtl5000";
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		VDDA-supply = <&reg_2p5v>;
+ 		VDDIO-supply = <&reg_vref_1v8>;
+ 	};
+diff --git a/arch/arm/boot/dts/imx7d-pico-pi.dts b/arch/arm/boot/dts/imx7d-pico-pi.dts
+index 70bea95c06d83..f263e391e24cb 100644
+--- a/arch/arm/boot/dts/imx7d-pico-pi.dts
++++ b/arch/arm/boot/dts/imx7d-pico-pi.dts
+@@ -31,7 +31,7 @@
+ 
+ 		dailink_master: simple-audio-card,codec {
+ 			sound-dai = <&sgtl5000>;
+-			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		};
+ 	};
+ };
+@@ -41,7 +41,7 @@
+ 		#sound-dai-cells = <0>;
+ 		reg = <0x0a>;
+ 		compatible = "fsl,sgtl5000";
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		VDDA-supply = <&reg_2p5v>;
+ 		VDDIO-supply = <&reg_vref_1v8>;
+ 	};
+diff --git a/arch/arm/boot/dts/imx7d-sdb.dts b/arch/arm/boot/dts/imx7d-sdb.dts
+index ac0751bc1177e..6823b9f1a2a32 100644
+--- a/arch/arm/boot/dts/imx7d-sdb.dts
++++ b/arch/arm/boot/dts/imx7d-sdb.dts
+@@ -378,14 +378,14 @@
+ 	codec: wm8960@1a {
+ 		compatible = "wlf,wm8960";
+ 		reg = <0x1a>;
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		clock-names = "mclk";
+ 		wlf,shared-lrclk;
+ 		wlf,hp-cfg = <2 2 3>;
+ 		wlf,gpio-cfg = <1 3>;
+ 		assigned-clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_SRC>,
+ 				  <&clks IMX7D_PLL_AUDIO_POST_DIV>,
+-				  <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++				  <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		assigned-clock-parents = <&clks IMX7D_PLL_AUDIO_POST_DIV>;
+ 		assigned-clock-rates = <0>, <884736000>, <12288000>;
+ 	};
+diff --git a/arch/arm/boot/dts/imx7s-warp.dts b/arch/arm/boot/dts/imx7s-warp.dts
+index d6b4888fa686b..e035dd5bf4f62 100644
+--- a/arch/arm/boot/dts/imx7s-warp.dts
++++ b/arch/arm/boot/dts/imx7s-warp.dts
+@@ -75,7 +75,7 @@
+ 
+ 		dailink_master: simple-audio-card,codec {
+ 			sound-dai = <&codec>;
+-			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		};
+ 	};
+ };
+@@ -232,7 +232,7 @@
+ 		#sound-dai-cells = <0>;
+ 		reg = <0x0a>;
+ 		compatible = "fsl,sgtl5000";
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_sai1_mclk>;
+ 		VDDA-supply = <&vgen4_reg>;
+diff --git a/arch/arm/boot/dts/qcom-ipq4019.dtsi b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+index 74d8e2c8e4b34..3defd47fd8fab 100644
+--- a/arch/arm/boot/dts/qcom-ipq4019.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+@@ -142,7 +142,8 @@
+ 	clocks {
+ 		sleep_clk: sleep_clk {
+ 			compatible = "fixed-clock";
+-			clock-frequency = <32768>;
++			clock-frequency = <32000>;
++			clock-output-names = "gcc_sleep_clk_src";
+ 			#clock-cells = <0>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/qcom-msm8960.dtsi b/arch/arm/boot/dts/qcom-msm8960.dtsi
+index 172ea3c70eac2..c197927e7435f 100644
+--- a/arch/arm/boot/dts/qcom-msm8960.dtsi
++++ b/arch/arm/boot/dts/qcom-msm8960.dtsi
+@@ -146,7 +146,9 @@
+ 			reg		= <0x108000 0x1000>;
+ 			qcom,ipc	= <&l2cc 0x8 2>;
+ 
+-			interrupts	= <0 19 0>, <0 21 0>, <0 22 0>;
++			interrupts	= <GIC_SPI 19 IRQ_TYPE_EDGE_RISING>,
++					  <GIC_SPI 21 IRQ_TYPE_EDGE_RISING>,
++					  <GIC_SPI 22 IRQ_TYPE_EDGE_RISING>;
+ 			interrupt-names	= "ack", "err", "wakeup";
+ 
+ 			regulators {
+@@ -192,7 +194,7 @@
+ 				compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm";
+ 				reg = <0x16440000 0x1000>,
+ 				      <0x16400000 0x1000>;
+-				interrupts = <0 154 0x0>;
++				interrupts = <GIC_SPI 154 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&gcc GSBI5_UART_CLK>, <&gcc GSBI5_H_CLK>;
+ 				clock-names = "core", "iface";
+ 				status = "disabled";
+@@ -318,7 +320,7 @@
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+ 				reg = <0x16080000 0x1000>;
+-				interrupts = <0 147 0>;
++				interrupts = <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>;
+ 				spi-max-frequency = <24000000>;
+ 				cs-gpios = <&msmgpio 8 0>;
+ 
+diff --git a/arch/arm/boot/dts/sama5d2.dtsi b/arch/arm/boot/dts/sama5d2.dtsi
+index 2c4952427296e..12f57278ba4a5 100644
+--- a/arch/arm/boot/dts/sama5d2.dtsi
++++ b/arch/arm/boot/dts/sama5d2.dtsi
+@@ -413,7 +413,7 @@
+ 				pmecc: ecc-engine@f8014070 {
+ 					compatible = "atmel,sama5d2-pmecc";
+ 					reg = <0xf8014070 0x490>,
+-					      <0xf8014500 0x100>;
++					      <0xf8014500 0x200>;
+ 				};
+ 			};
+ 
+diff --git a/arch/arm/boot/dts/spear1340.dtsi b/arch/arm/boot/dts/spear1340.dtsi
+index 1a8f5e8b10e3a..66cd473ecb617 100644
+--- a/arch/arm/boot/dts/spear1340.dtsi
++++ b/arch/arm/boot/dts/spear1340.dtsi
+@@ -136,9 +136,9 @@
+ 				reg = <0xb4100000 0x1000>;
+ 				interrupts = <0 105 0x4>;
+ 				status = "disabled";
+-				dmas = <&dwdma0 12 0 1>,
+-					<&dwdma0 13 1 0>;
+-				dma-names = "tx", "rx";
++				dmas = <&dwdma0 13 0 1>,
++					<&dwdma0 12 1 0>;
++				dma-names = "rx", "tx";
+ 			};
+ 
+ 			thermal@e07008c4 {
+diff --git a/arch/arm/boot/dts/spear13xx.dtsi b/arch/arm/boot/dts/spear13xx.dtsi
+index c87b881b2c8bb..9135533676879 100644
+--- a/arch/arm/boot/dts/spear13xx.dtsi
++++ b/arch/arm/boot/dts/spear13xx.dtsi
+@@ -284,9 +284,9 @@
+ 				#size-cells = <0>;
+ 				interrupts = <0 31 0x4>;
+ 				status = "disabled";
+-				dmas = <&dwdma0 4 0 0>,
+-					<&dwdma0 5 0 0>;
+-				dma-names = "tx", "rx";
++				dmas = <&dwdma0 5 0 0>,
++					<&dwdma0 4 0 0>;
++				dma-names = "rx", "tx";
+ 			};
+ 
+ 			rtc@e0580000 {
+diff --git a/arch/arm/boot/dts/sun8i-v3s.dtsi b/arch/arm/boot/dts/sun8i-v3s.dtsi
+index 89abd4cc7e23a..b21ecb820b133 100644
+--- a/arch/arm/boot/dts/sun8i-v3s.dtsi
++++ b/arch/arm/boot/dts/sun8i-v3s.dtsi
+@@ -524,6 +524,17 @@
+ 			#size-cells = <0>;
+ 		};
+ 
++		gic: interrupt-controller@1c81000 {
++			compatible = "arm,gic-400";
++			reg = <0x01c81000 0x1000>,
++			      <0x01c82000 0x2000>,
++			      <0x01c84000 0x2000>,
++			      <0x01c86000 0x2000>;
++			interrupt-controller;
++			#interrupt-cells = <3>;
++			interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
++		};
++
+ 		csi1: camera@1cb4000 {
+ 			compatible = "allwinner,sun8i-v3s-csi";
+ 			reg = <0x01cb4000 0x3000>;
+@@ -535,16 +546,5 @@
+ 			resets = <&ccu RST_BUS_CSI>;
+ 			status = "disabled";
+ 		};
+-
+-		gic: interrupt-controller@1c81000 {
+-			compatible = "arm,gic-400";
+-			reg = <0x01c81000 0x1000>,
+-			      <0x01c82000 0x2000>,
+-			      <0x01c84000 0x2000>,
+-			      <0x01c86000 0x2000>;
+-			interrupt-controller;
+-			#interrupt-cells = <3>;
+-			interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
+-		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/tegra20-tamonten.dtsi b/arch/arm/boot/dts/tegra20-tamonten.dtsi
+index dd4d506683de7..7f14f0d005c3e 100644
+--- a/arch/arm/boot/dts/tegra20-tamonten.dtsi
++++ b/arch/arm/boot/dts/tegra20-tamonten.dtsi
+@@ -183,8 +183,8 @@
+ 			};
+ 			conf_ata {
+ 				nvidia,pins = "ata", "atb", "atc", "atd", "ate",
+-					"cdev1", "cdev2", "dap1", "dtb", "gma",
+-					"gmb", "gmc", "gmd", "gme", "gpu7",
++					"cdev1", "cdev2", "dap1", "dtb", "dtf",
++					"gma", "gmb", "gmc", "gmd", "gme", "gpu7",
+ 					"gpv", "i2cp", "irrx", "irtx", "pta",
+ 					"rm", "slxa", "slxk", "spia", "spib",
+ 					"uac";
+@@ -203,7 +203,7 @@
+ 			};
+ 			conf_crtp {
+ 				nvidia,pins = "crtp", "dap2", "dap3", "dap4",
+-					"dtc", "dte", "dtf", "gpu", "sdio1",
++					"dtc", "dte", "gpu", "sdio1",
+ 					"slxc", "slxd", "spdi", "spdo", "spig",
+ 					"uda";
+ 				nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+diff --git a/arch/arm/configs/multi_v5_defconfig b/arch/arm/configs/multi_v5_defconfig
+index e00be9faa23bf..4393e689f2354 100644
+--- a/arch/arm/configs/multi_v5_defconfig
++++ b/arch/arm/configs/multi_v5_defconfig
+@@ -187,6 +187,7 @@ CONFIG_REGULATOR=y
+ CONFIG_REGULATOR_FIXED_VOLTAGE=y
+ CONFIG_MEDIA_SUPPORT=y
+ CONFIG_MEDIA_CAMERA_SUPPORT=y
++CONFIG_MEDIA_PLATFORM_SUPPORT=y
+ CONFIG_V4L_PLATFORM_DRIVERS=y
+ CONFIG_VIDEO_ASPEED=m
+ CONFIG_VIDEO_ATMEL_ISI=m
+diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
+index c9bf2df85cb90..c46c05548080a 100644
+--- a/arch/arm/crypto/Kconfig
++++ b/arch/arm/crypto/Kconfig
+@@ -83,6 +83,8 @@ config CRYPTO_AES_ARM_BS
+ 	depends on KERNEL_MODE_NEON
+ 	select CRYPTO_SKCIPHER
+ 	select CRYPTO_LIB_AES
++	select CRYPTO_AES
++	select CRYPTO_CBC
+ 	select CRYPTO_SIMD
+ 	help
+ 	  Use a faster and more secure NEON based implementation of AES in CBC,
+diff --git a/arch/arm/kernel/entry-ftrace.S b/arch/arm/kernel/entry-ftrace.S
+index a74289ebc8036..5f1b1ce10473a 100644
+--- a/arch/arm/kernel/entry-ftrace.S
++++ b/arch/arm/kernel/entry-ftrace.S
+@@ -22,10 +22,7 @@
+  * mcount can be thought of as a function called in the middle of a subroutine
+  * call.  As such, it needs to be transparent for both the caller and the
+  * callee: the original lr needs to be restored when leaving mcount, and no
+- * registers should be clobbered.  (In the __gnu_mcount_nc implementation, we
+- * clobber the ip register.  This is OK because the ARM calling convention
+- * allows it to be clobbered in subroutines and doesn't use it to hold
+- * parameters.)
++ * registers should be clobbered.
+  *
+  * When using dynamic ftrace, we patch out the mcount call by a "pop {lr}"
+  * instead of the __gnu_mcount_nc call (see arch/arm/kernel/ftrace.c).
+@@ -70,26 +67,25 @@
+ 
+ .macro __ftrace_regs_caller
+ 
+-	sub	sp, sp, #8	@ space for PC and CPSR OLD_R0,
++	str	lr, [sp, #-8]!	@ store LR as PC and make space for CPSR/OLD_R0,
+ 				@ OLD_R0 will overwrite previous LR
+ 
+-	add 	ip, sp, #12	@ move in IP the value of SP as it was
+-				@ before the push {lr} of the mcount mechanism
++	ldr	lr, [sp, #8]    @ get previous LR
+ 
+-	str     lr, [sp, #0]    @ store LR instead of PC
++	str	r0, [sp, #8]	@ write r0 as OLD_R0 over previous LR
+ 
+-	ldr     lr, [sp, #8]    @ get previous LR
++	str	lr, [sp, #-4]!	@ store previous LR as LR
+ 
+-	str	r0, [sp, #8]	@ write r0 as OLD_R0 over previous LR
++	add 	lr, sp, #16	@ move in LR the value of SP as it was
++				@ before the push {lr} of the mcount mechanism
+ 
+-	stmdb   sp!, {ip, lr}
+-	stmdb   sp!, {r0-r11, lr}
++	push	{r0-r11, ip, lr}
+ 
+ 	@ stack content at this point:
+ 	@ 0  4          48   52       56            60   64    68       72
+-	@ R0 | R1 | ... | LR | SP + 4 | previous LR | LR | PSR | OLD_R0 |
++	@ R0 | R1 | ... | IP | SP + 4 | previous LR | LR | PSR | OLD_R0 |
+ 
+-	mov r3, sp				@ struct pt_regs*
++	mov	r3, sp				@ struct pt_regs*
+ 
+ 	ldr r2, =function_trace_op
+ 	ldr r2, [r2]				@ pointer to the current
+@@ -112,11 +108,9 @@ ftrace_graph_regs_call:
+ #endif
+ 
+ 	@ pop saved regs
+-	ldmia   sp!, {r0-r12}			@ restore r0 through r12
+-	ldr	ip, [sp, #8]			@ restore PC
+-	ldr	lr, [sp, #4]			@ restore LR
+-	ldr	sp, [sp, #0]			@ restore SP
+-	mov	pc, ip				@ return
++	pop	{r0-r11, ip, lr}		@ restore r0 through r12
++	ldr	lr, [sp], #4			@ restore LR
++	ldr	pc, [sp], #12
+ .endm
+ 
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+@@ -132,11 +126,9 @@ ftrace_graph_regs_call:
+ 	bl	prepare_ftrace_return
+ 
+ 	@ pop registers saved in ftrace_regs_caller
+-	ldmia   sp!, {r0-r12}			@ restore r0 through r12
+-	ldr	ip, [sp, #8]			@ restore PC
+-	ldr	lr, [sp, #4]			@ restore LR
+-	ldr	sp, [sp, #0]			@ restore SP
+-	mov	pc, ip				@ return
++	pop	{r0-r11, ip, lr}		@ restore r0 through r12
++	ldr	lr, [sp], #4			@ restore LR
++	ldr	pc, [sp], #12
+ 
+ .endm
+ #endif
+@@ -202,16 +194,17 @@ ftrace_graph_call\suffix:
+ .endm
+ 
+ .macro mcount_exit
+-	ldmia	sp!, {r0-r3, ip, lr}
+-	ret	ip
++	ldmia	sp!, {r0-r3}
++	ldr	lr, [sp, #4]
++	ldr	pc, [sp], #8
+ .endm
+ 
+ ENTRY(__gnu_mcount_nc)
+ UNWIND(.fnstart)
+ #ifdef CONFIG_DYNAMIC_FTRACE
+-	mov	ip, lr
+-	ldmia	sp!, {lr}
+-	ret	ip
++	push	{lr}
++	ldr	lr, [sp, #4]
++	ldr	pc, [sp], #8
+ #else
+ 	__mcount
+ #endif
+diff --git a/arch/arm/kernel/swp_emulate.c b/arch/arm/kernel/swp_emulate.c
+index 6166ba38bf994..b74bfcf94fb1a 100644
+--- a/arch/arm/kernel/swp_emulate.c
++++ b/arch/arm/kernel/swp_emulate.c
+@@ -195,7 +195,7 @@ static int swp_handler(struct pt_regs *regs, unsigned int instr)
+ 		 destreg, EXTRACT_REG_NUM(instr, RT2_OFFSET), data);
+ 
+ 	/* Check access in reasonable access range for both SWP and SWPB */
+-	if (!access_ok((address & ~3), 4)) {
++	if (!access_ok((void __user *)(address & ~3), 4)) {
+ 		pr_debug("SWP{B} emulation: access to %p not allowed!\n",
+ 			 (void *)address);
+ 		res = -EFAULT;
+diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
+index 2d9e72ad1b0f9..a531afad87fdb 100644
+--- a/arch/arm/kernel/traps.c
++++ b/arch/arm/kernel/traps.c
+@@ -589,7 +589,7 @@ do_cache_op(unsigned long start, unsigned long end, int flags)
+ 	if (end < start || flags)
+ 		return -EINVAL;
+ 
+-	if (!access_ok(start, end - start))
++	if (!access_ok((void __user *)start, end - start))
+ 		return -EFAULT;
+ 
+ 	return __do_cache_op(start, end);
+diff --git a/arch/arm/mach-iop32x/include/mach/entry-macro.S b/arch/arm/mach-iop32x/include/mach/entry-macro.S
+index 8e6766d4621eb..341e5d9a6616d 100644
+--- a/arch/arm/mach-iop32x/include/mach/entry-macro.S
++++ b/arch/arm/mach-iop32x/include/mach/entry-macro.S
+@@ -20,7 +20,7 @@
+ 	mrc     p6, 0, \irqstat, c8, c0, 0	@ Read IINTSRC
+ 	cmp     \irqstat, #0
+ 	clzne   \irqnr, \irqstat
+-	rsbne   \irqnr, \irqnr, #31
++	rsbne   \irqnr, \irqnr, #32
+ 	.endm
+ 
+ 	.macro arch_ret_to_user, tmp1, tmp2
+diff --git a/arch/arm/mach-iop32x/include/mach/irqs.h b/arch/arm/mach-iop32x/include/mach/irqs.h
+index c4e78df428e86..e09ae5f48aec5 100644
+--- a/arch/arm/mach-iop32x/include/mach/irqs.h
++++ b/arch/arm/mach-iop32x/include/mach/irqs.h
+@@ -9,6 +9,6 @@
+ #ifndef __IRQS_H
+ #define __IRQS_H
+ 
+-#define NR_IRQS			32
++#define NR_IRQS			33
+ 
+ #endif
+diff --git a/arch/arm/mach-iop32x/irq.c b/arch/arm/mach-iop32x/irq.c
+index 2d48bf1398c10..d1e8824cbd824 100644
+--- a/arch/arm/mach-iop32x/irq.c
++++ b/arch/arm/mach-iop32x/irq.c
+@@ -32,14 +32,14 @@ static void intstr_write(u32 val)
+ static void
+ iop32x_irq_mask(struct irq_data *d)
+ {
+-	iop32x_mask &= ~(1 << d->irq);
++	iop32x_mask &= ~(1 << (d->irq - 1));
+ 	intctl_write(iop32x_mask);
+ }
+ 
+ static void
+ iop32x_irq_unmask(struct irq_data *d)
+ {
+-	iop32x_mask |= 1 << d->irq;
++	iop32x_mask |= 1 << (d->irq - 1);
+ 	intctl_write(iop32x_mask);
+ }
+ 
+@@ -65,7 +65,7 @@ void __init iop32x_init_irq(void)
+ 	    machine_is_em7210())
+ 		*IOP3XX_PCIIRSR = 0x0f;
+ 
+-	for (i = 0; i < NR_IRQS; i++) {
++	for (i = 1; i < NR_IRQS; i++) {
+ 		irq_set_chip_and_handler(i, &ext_chip, handle_level_irq);
+ 		irq_clear_status_flags(i, IRQ_NOREQUEST | IRQ_NOPROBE);
+ 	}
+diff --git a/arch/arm/mach-iop32x/irqs.h b/arch/arm/mach-iop32x/irqs.h
+index 69858e4e905d1..e1dfc8b4e7d7e 100644
+--- a/arch/arm/mach-iop32x/irqs.h
++++ b/arch/arm/mach-iop32x/irqs.h
+@@ -7,36 +7,40 @@
+ #ifndef __IOP32X_IRQS_H
+ #define __IOP32X_IRQS_H
+ 
++/* Interrupts in Linux start at 1, hardware starts at 0 */
++
++#define IOP_IRQ(x) ((x) + 1)
++
+ /*
+  * IOP80321 chipset interrupts
+  */
+-#define IRQ_IOP32X_DMA0_EOT	0
+-#define IRQ_IOP32X_DMA0_EOC	1
+-#define IRQ_IOP32X_DMA1_EOT	2
+-#define IRQ_IOP32X_DMA1_EOC	3
+-#define IRQ_IOP32X_AA_EOT	6
+-#define IRQ_IOP32X_AA_EOC	7
+-#define IRQ_IOP32X_CORE_PMON	8
+-#define IRQ_IOP32X_TIMER0	9
+-#define IRQ_IOP32X_TIMER1	10
+-#define IRQ_IOP32X_I2C_0	11
+-#define IRQ_IOP32X_I2C_1	12
+-#define IRQ_IOP32X_MESSAGING	13
+-#define IRQ_IOP32X_ATU_BIST	14
+-#define IRQ_IOP32X_PERFMON	15
+-#define IRQ_IOP32X_CORE_PMU	16
+-#define IRQ_IOP32X_BIU_ERR	17
+-#define IRQ_IOP32X_ATU_ERR	18
+-#define IRQ_IOP32X_MCU_ERR	19
+-#define IRQ_IOP32X_DMA0_ERR	20
+-#define IRQ_IOP32X_DMA1_ERR	21
+-#define IRQ_IOP32X_AA_ERR	23
+-#define IRQ_IOP32X_MSG_ERR	24
+-#define IRQ_IOP32X_SSP		25
+-#define IRQ_IOP32X_XINT0	27
+-#define IRQ_IOP32X_XINT1	28
+-#define IRQ_IOP32X_XINT2	29
+-#define IRQ_IOP32X_XINT3	30
+-#define IRQ_IOP32X_HPI		31
++#define IRQ_IOP32X_DMA0_EOT	IOP_IRQ(0)
++#define IRQ_IOP32X_DMA0_EOC	IOP_IRQ(1)
++#define IRQ_IOP32X_DMA1_EOT	IOP_IRQ(2)
++#define IRQ_IOP32X_DMA1_EOC	IOP_IRQ(3)
++#define IRQ_IOP32X_AA_EOT	IOP_IRQ(6)
++#define IRQ_IOP32X_AA_EOC	IOP_IRQ(7)
++#define IRQ_IOP32X_CORE_PMON	IOP_IRQ(8)
++#define IRQ_IOP32X_TIMER0	IOP_IRQ(9)
++#define IRQ_IOP32X_TIMER1	IOP_IRQ(10)
++#define IRQ_IOP32X_I2C_0	IOP_IRQ(11)
++#define IRQ_IOP32X_I2C_1	IOP_IRQ(12)
++#define IRQ_IOP32X_MESSAGING	IOP_IRQ(13)
++#define IRQ_IOP32X_ATU_BIST	IOP_IRQ(14)
++#define IRQ_IOP32X_PERFMON	IOP_IRQ(15)
++#define IRQ_IOP32X_CORE_PMU	IOP_IRQ(16)
++#define IRQ_IOP32X_BIU_ERR	IOP_IRQ(17)
++#define IRQ_IOP32X_ATU_ERR	IOP_IRQ(18)
++#define IRQ_IOP32X_MCU_ERR	IOP_IRQ(19)
++#define IRQ_IOP32X_DMA0_ERR	IOP_IRQ(20)
++#define IRQ_IOP32X_DMA1_ERR	IOP_IRQ(21)
++#define IRQ_IOP32X_AA_ERR	IOP_IRQ(23)
++#define IRQ_IOP32X_MSG_ERR	IOP_IRQ(24)
++#define IRQ_IOP32X_SSP		IOP_IRQ(25)
++#define IRQ_IOP32X_XINT0	IOP_IRQ(27)
++#define IRQ_IOP32X_XINT1	IOP_IRQ(28)
++#define IRQ_IOP32X_XINT2	IOP_IRQ(29)
++#define IRQ_IOP32X_XINT3	IOP_IRQ(30)
++#define IRQ_IOP32X_HPI		IOP_IRQ(31)
+ 
+ #endif
+diff --git a/arch/arm/mach-mmp/sram.c b/arch/arm/mach-mmp/sram.c
+index 6794e2db1ad5f..ecc46c31004f6 100644
+--- a/arch/arm/mach-mmp/sram.c
++++ b/arch/arm/mach-mmp/sram.c
+@@ -72,6 +72,8 @@ static int sram_probe(struct platform_device *pdev)
+ 	if (!info)
+ 		return -ENOMEM;
+ 
++	platform_set_drvdata(pdev, info);
++
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (res == NULL) {
+ 		dev_err(&pdev->dev, "no memory resource defined\n");
+@@ -107,8 +109,6 @@ static int sram_probe(struct platform_device *pdev)
+ 	list_add(&info->node, &sram_bank_list);
+ 	mutex_unlock(&sram_lock);
+ 
+-	platform_set_drvdata(pdev, info);
+-
+ 	dev_info(&pdev->dev, "initialized\n");
+ 	return 0;
+ 
+@@ -127,17 +127,19 @@ static int sram_remove(struct platform_device *pdev)
+ 	struct sram_bank_info *info;
+ 
+ 	info = platform_get_drvdata(pdev);
+-	if (info == NULL)
+-		return -ENODEV;
+ 
+-	mutex_lock(&sram_lock);
+-	list_del(&info->node);
+-	mutex_unlock(&sram_lock);
++	if (info->sram_size) {
++		mutex_lock(&sram_lock);
++		list_del(&info->node);
++		mutex_unlock(&sram_lock);
++
++		gen_pool_destroy(info->gpool);
++		iounmap(info->sram_virt);
++		kfree(info->pool_name);
++	}
+ 
+-	gen_pool_destroy(info->gpool);
+-	iounmap(info->sram_virt);
+-	kfree(info->pool_name);
+ 	kfree(info);
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/arm/mach-mstar/Kconfig b/arch/arm/mach-mstar/Kconfig
+index 576d1ab293c87..30560fdf87ed2 100644
+--- a/arch/arm/mach-mstar/Kconfig
++++ b/arch/arm/mach-mstar/Kconfig
+@@ -3,6 +3,7 @@ menuconfig ARCH_MSTARV7
+ 	depends on ARCH_MULTI_V7
+ 	select ARM_GIC
+ 	select ARM_HEAVY_MB
++	select HAVE_ARM_ARCH_TIMER
+ 	select MST_IRQ
+ 	help
+ 	  Support for newer MStar/Sigmastar SoC families that are
+diff --git a/arch/arm/mach-s3c/mach-jive.c b/arch/arm/mach-s3c/mach-jive.c
+index 2a29c3eca559e..ae6a1c9ebf78c 100644
+--- a/arch/arm/mach-s3c/mach-jive.c
++++ b/arch/arm/mach-s3c/mach-jive.c
+@@ -236,11 +236,11 @@ static int __init jive_mtdset(char *options)
+ 	unsigned long set;
+ 
+ 	if (options == NULL || options[0] == '\0')
+-		return 0;
++		return 1;
+ 
+ 	if (kstrtoul(options, 10, &set)) {
+ 		printk(KERN_ERR "failed to parse mtdset=%s\n", options);
+-		return 0;
++		return 1;
+ 	}
+ 
+ 	switch (set) {
+@@ -255,7 +255,7 @@ static int __init jive_mtdset(char *options)
+ 		       "using default.", set);
+ 	}
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ /* parse the mtdset= option given to the kernel command line */
+diff --git a/arch/arm64/boot/dts/broadcom/northstar2/ns2-svk.dts b/arch/arm64/boot/dts/broadcom/northstar2/ns2-svk.dts
+index ec19fbf928a14..12a4b1c03390c 100644
+--- a/arch/arm64/boot/dts/broadcom/northstar2/ns2-svk.dts
++++ b/arch/arm64/boot/dts/broadcom/northstar2/ns2-svk.dts
+@@ -111,8 +111,8 @@
+ 		compatible = "silabs,si3226x";
+ 		reg = <0>;
+ 		spi-max-frequency = <5000000>;
+-		spi-cpha = <1>;
+-		spi-cpol = <1>;
++		spi-cpha;
++		spi-cpol;
+ 		pl022,hierarchy = <0>;
+ 		pl022,interface = <0>;
+ 		pl022,slave-tx-disable = <0>;
+@@ -135,8 +135,8 @@
+ 		at25,byte-len = <0x8000>;
+ 		at25,addr-mode = <2>;
+ 		at25,page-size = <64>;
+-		spi-cpha = <1>;
+-		spi-cpol = <1>;
++		spi-cpha;
++		spi-cpol;
+ 		pl022,hierarchy = <0>;
+ 		pl022,interface = <0>;
+ 		pl022,slave-tx-disable = <0>;
+diff --git a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
+index 2cfeaf3b0a876..8c218689fef70 100644
+--- a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
++++ b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
+@@ -687,7 +687,7 @@
+ 			};
+ 		};
+ 
+-		sata: ahci@663f2000 {
++		sata: sata@663f2000 {
+ 			compatible = "brcm,iproc-ahci", "generic-ahci";
+ 			reg = <0x663f2000 0x1000>;
+ 			dma-coherent;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index ea6e3a11e641b..9beb3c34fcdb5 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -3406,10 +3406,10 @@
+ 					#clock-cells = <0>;
+ 					clock-frequency = <9600000>;
+ 					clock-output-names = "mclk";
+-					qcom,micbias1-millivolt = <1800>;
+-					qcom,micbias2-millivolt = <1800>;
+-					qcom,micbias3-millivolt = <1800>;
+-					qcom,micbias4-millivolt = <1800>;
++					qcom,micbias1-microvolt = <1800000>;
++					qcom,micbias2-microvolt = <1800000>;
++					qcom,micbias3-microvolt = <1800000>;
++					qcom,micbias4-microvolt = <1800000>;
+ 
+ 					#address-cells = <1>;
+ 					#size-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index 1aec54590a11a..a8a47378ba689 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -1114,9 +1114,9 @@
+ 			qcom,tcs-offset = <0xd00>;
+ 			qcom,drv-id = <2>;
+ 			qcom,tcs-config = <ACTIVE_TCS  2>,
+-					  <SLEEP_TCS   1>,
+-					  <WAKE_TCS    1>,
+-					  <CONTROL_TCS 0>;
++					  <SLEEP_TCS   3>,
++					  <WAKE_TCS    3>,
++					  <CONTROL_TCS 1>;
+ 
+ 			rpmhcc: clock-controller {
+ 				compatible = "qcom,sm8150-rpmh-clk";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-firefly.dts b/arch/arm64/boot/dts/rockchip/rk3399-firefly.dts
+index 6db18808b9c54..dc45ec372ada4 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-firefly.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-firefly.dts
+@@ -665,8 +665,8 @@
+ 	sd-uhs-sdr104;
+ 
+ 	/* Power supply */
+-	vqmmc-supply = &vcc1v8_s3;	/* IO line */
+-	vmmc-supply = &vcc_sdio;	/* card's power */
++	vqmmc-supply = <&vcc1v8_s3>;	/* IO line */
++	vmmc-supply = <&vcc_sdio>;	/* card's power */
+ 
+ 	#address-cells = <1>;
+ 	#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index b9662205be9bf..d04189771c773 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -35,7 +35,10 @@
+ 		#interrupt-cells = <3>;
+ 		interrupt-controller;
+ 		reg = <0x00 0x01800000 0x00 0x10000>,	/* GICD */
+-		      <0x00 0x01880000 0x00 0x90000>;	/* GICR */
++		      <0x00 0x01880000 0x00 0x90000>,	/* GICR */
++		      <0x00 0x6f000000 0x00 0x2000>,	/* GICC */
++		      <0x00 0x6f010000 0x00 0x1000>,	/* GICH */
++		      <0x00 0x6f020000 0x00 0x2000>;	/* GICV */
+ 		/*
+ 		 * vcpumntirq:
+ 		 * virtual CPU interface maintenance interrupt
+diff --git a/arch/arm64/boot/dts/ti/k3-am65.dtsi b/arch/arm64/boot/dts/ti/k3-am65.dtsi
+index d84c0bc050233..c6a3fecc7518e 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65.dtsi
+@@ -84,6 +84,7 @@
+ 			 <0x00 0x46000000 0x00 0x46000000 0x00 0x00200000>,
+ 			 <0x00 0x47000000 0x00 0x47000000 0x00 0x00068400>,
+ 			 <0x00 0x50000000 0x00 0x50000000 0x00 0x8000000>,
++			 <0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A53 PERIPHBASE */
+ 			 <0x00 0x70000000 0x00 0x70000000 0x00 0x200000>,
+ 			 <0x05 0x00000000 0x05 0x00000000 0x01 0x0000000>,
+ 			 <0x07 0x00000000 0x07 0x00000000 0x01 0x0000000>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index 1ab9f9604af6c..bef47f96376d9 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -47,7 +47,10 @@
+ 		#interrupt-cells = <3>;
+ 		interrupt-controller;
+ 		reg = <0x00 0x01800000 0x00 0x10000>,	/* GICD */
+-		      <0x00 0x01900000 0x00 0x100000>;	/* GICR */
++		      <0x00 0x01900000 0x00 0x100000>,	/* GICR */
++		      <0x00 0x6f000000 0x00 0x2000>,	/* GICC */
++		      <0x00 0x6f010000 0x00 0x1000>,	/* GICH */
++		      <0x00 0x6f020000 0x00 0x2000>;	/* GICV */
+ 
+ 		/* vcpumntirq: virtual CPU interface maintenance interrupt */
+ 		interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200.dtsi b/arch/arm64/boot/dts/ti/k3-j7200.dtsi
+index 03a9623f0f956..59f5113e657dd 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200.dtsi
+@@ -127,6 +127,7 @@
+ 			 <0x00 0x00a40000 0x00 0x00a40000 0x00 0x00000800>, /* timesync router */
+ 			 <0x00 0x01000000 0x00 0x01000000 0x00 0x0d000000>, /* Most peripherals */
+ 			 <0x00 0x30000000 0x00 0x30000000 0x00 0x0c400000>, /* MAIN NAVSS */
++			 <0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A72 PERIPHBASE */
+ 			 <0x00 0x70000000 0x00 0x70000000 0x00 0x00800000>, /* MSMC RAM */
+ 			 <0x00 0x18000000 0x00 0x18000000 0x00 0x08000000>, /* PCIe1 DAT0 */
+ 			 <0x41 0x00000000 0x41 0x00000000 0x01 0x00000000>, /* PCIe1 DAT1 */
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+index 85526f72b4616..0350ddfe2c723 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+@@ -108,7 +108,10 @@
+ 		#interrupt-cells = <3>;
+ 		interrupt-controller;
+ 		reg = <0x00 0x01800000 0x00 0x10000>,	/* GICD */
+-		      <0x00 0x01900000 0x00 0x100000>;	/* GICR */
++		      <0x00 0x01900000 0x00 0x100000>,	/* GICR */
++		      <0x00 0x6f000000 0x00 0x2000>,	/* GICC */
++		      <0x00 0x6f010000 0x00 0x1000>,	/* GICH */
++		      <0x00 0x6f020000 0x00 0x2000>;	/* GICV */
+ 
+ 		/* vcpumntirq: virtual CPU interface maintenance interrupt */
+ 		interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e.dtsi b/arch/arm64/boot/dts/ti/k3-j721e.dtsi
+index a199227327ed2..ba4fe3f983158 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e.dtsi
+@@ -136,6 +136,7 @@
+ 			 <0x00 0x0e000000 0x00 0x0e000000 0x00 0x01800000>, /* PCIe Core*/
+ 			 <0x00 0x10000000 0x00 0x10000000 0x00 0x10000000>, /* PCIe DAT */
+ 			 <0x00 0x64800000 0x00 0x64800000 0x00 0x00800000>, /* C71 */
++			 <0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A72 PERIPHBASE */
+ 			 <0x44 0x00000000 0x44 0x00000000 0x00 0x08000000>, /* PCIe2 DAT */
+ 			 <0x44 0x10000000 0x44 0x10000000 0x00 0x08000000>, /* PCIe3 DAT */
+ 			 <0x4d 0x80800000 0x4d 0x80800000 0x00 0x00800000>, /* C66_0 */
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index 5cfe3cf6f2acb..2bdf38d05fa5b 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -837,7 +837,7 @@ CONFIG_DMADEVICES=y
+ CONFIG_DMA_BCM2835=y
+ CONFIG_DMA_SUN6I=m
+ CONFIG_FSL_EDMA=y
+-CONFIG_IMX_SDMA=y
++CONFIG_IMX_SDMA=m
+ CONFIG_K3_DMA=y
+ CONFIG_MV_XOR=y
+ CONFIG_MV_XOR_V2=y
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index e62005317ce29..0dab5679a97d5 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -572,10 +572,12 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user,
+ {
+ 	int err;
+ 
+-	err = sigframe_alloc(user, &user->fpsimd_offset,
+-			     sizeof(struct fpsimd_context));
+-	if (err)
+-		return err;
++	if (system_supports_fpsimd()) {
++		err = sigframe_alloc(user, &user->fpsimd_offset,
++				     sizeof(struct fpsimd_context));
++		if (err)
++			return err;
++	}
+ 
+ 	/* fault information, if valid */
+ 	if (add_all || current->thread.fault_code) {
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index c0a7f0d90b39d..80cc79760e8e9 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -58,8 +58,34 @@ EXPORT_SYMBOL(memstart_addr);
+  * unless restricted on specific platforms (e.g. 30-bit on Raspberry Pi 4).
+  * In such case, ZONE_DMA32 covers the rest of the 32-bit addressable memory,
+  * otherwise it is empty.
++ *
++ * Memory reservation for crash kernel either done early or deferred
++ * depending on DMA memory zones configs (ZONE_DMA) --
++ *
++ * In absence of ZONE_DMA configs arm64_dma_phys_limit initialized
++ * here instead of max_zone_phys().  This lets early reservation of
++ * crash kernel memory which has a dependency on arm64_dma_phys_limit.
++ * Reserving memory early for crash kernel allows linear creation of block
++ * mappings (greater than page-granularity) for all the memory bank rangs.
++ * In this scheme a comparatively quicker boot is observed.
++ *
++ * If ZONE_DMA configs are defined, crash kernel memory reservation
++ * is delayed until DMA zone memory range size initilazation performed in
++ * zone_sizes_init().  The defer is necessary to steer clear of DMA zone
++ * memory range to avoid overlap allocation.  So crash kernel memory boundaries
++ * are not known when mapping all bank memory ranges, which otherwise means
++ * not possible to exclude crash kernel range from creating block mappings
++ * so page-granularity mappings are created for the entire memory range.
++ * Hence a slightly slower boot is observed.
++ *
++ * Note: Page-granularity mapppings are necessary for crash kernel memory
++ * range for shrinking its size via /sys/kernel/kexec_crash_size interface.
+  */
+-phys_addr_t arm64_dma_phys_limit __ro_after_init;
++#if IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32)
++phys_addr_t __ro_after_init arm64_dma_phys_limit;
++#else
++phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
++#endif
+ 
+ #ifdef CONFIG_KEXEC_CORE
+ /*
+@@ -210,8 +236,6 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
+ 	if (!arm64_dma_phys_limit)
+ 		arm64_dma_phys_limit = dma32_phys_limit;
+ #endif
+-	if (!arm64_dma_phys_limit)
+-		arm64_dma_phys_limit = PHYS_MASK + 1;
+ 	max_zone_pfns[ZONE_NORMAL] = max;
+ 
+ 	free_area_init(max_zone_pfns);
+@@ -407,6 +431,9 @@ void __init arm64_memblock_init(void)
+ 
+ 	reserve_elfcorehdr();
+ 
++	if (!IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32))
++		reserve_crashkernel();
++
+ 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
+ }
+ 
+@@ -451,7 +478,8 @@ void __init bootmem_init(void)
+ 	 * request_standard_resources() depends on crashkernel's memory being
+ 	 * reserved, so do it here.
+ 	 */
+-	reserve_crashkernel();
++	if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
++		reserve_crashkernel();
+ 
+ 	memblock_dump_all();
+ }
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 991e599f70577..3284709ef5676 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -61,6 +61,7 @@ static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
+ static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
+ 
+ static DEFINE_SPINLOCK(swapper_pgdir_lock);
++static DEFINE_MUTEX(fixmap_lock);
+ 
+ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
+ {
+@@ -314,6 +315,12 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
+ 	}
+ 	BUG_ON(p4d_bad(p4d));
+ 
++	/*
++	 * No need for locking during early boot. And it doesn't work as
++	 * expected with KASLR enabled.
++	 */
++	if (system_state != SYSTEM_BOOTING)
++		mutex_lock(&fixmap_lock);
+ 	pudp = pud_set_fixmap_offset(p4dp, addr);
+ 	do {
+ 		pud_t old_pud = READ_ONCE(*pudp);
+@@ -344,6 +351,8 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
+ 	} while (pudp++, addr = next, addr != end);
+ 
+ 	pud_clear_fixmap();
++	if (system_state != SYSTEM_BOOTING)
++		mutex_unlock(&fixmap_lock);
+ }
+ 
+ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+@@ -492,7 +501,7 @@ static void __init map_mem(pgd_t *pgdp)
+ 	int flags = 0;
+ 	u64 i;
+ 
+-	if (rodata_full || crash_mem_map || debug_pagealloc_enabled())
++	if (rodata_full || debug_pagealloc_enabled())
+ 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+ 
+ 	/*
+@@ -503,6 +512,17 @@ static void __init map_mem(pgd_t *pgdp)
+ 	 */
+ 	memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
+ 
++#ifdef CONFIG_KEXEC_CORE
++	if (crash_mem_map) {
++		if (IS_ENABLED(CONFIG_ZONE_DMA) ||
++		    IS_ENABLED(CONFIG_ZONE_DMA32))
++			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
++		else if (crashk_res.end)
++			memblock_mark_nomap(crashk_res.start,
++					    resource_size(&crashk_res));
++	}
++#endif
++
+ 	/* map all the memory banks */
+ 	for_each_mem_range(i, &start, &end) {
+ 		if (start >= end)
+@@ -529,6 +549,25 @@ static void __init map_mem(pgd_t *pgdp)
+ 	__map_memblock(pgdp, kernel_start, kernel_end,
+ 		       PAGE_KERNEL, NO_CONT_MAPPINGS);
+ 	memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
++
++	/*
++	 * Use page-level mappings here so that we can shrink the region
++	 * in page granularity and put back unused memory to buddy system
++	 * through /sys/kernel/kexec_crash_size interface.
++	 */
++#ifdef CONFIG_KEXEC_CORE
++	if (crash_mem_map &&
++	    !IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32)) {
++		if (crashk_res.end) {
++			__map_memblock(pgdp, crashk_res.start,
++				       crashk_res.end + 1,
++				       PAGE_KERNEL,
++				       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
++			memblock_clear_nomap(crashk_res.start,
++					     resource_size(&crashk_res));
++		}
++	}
++#endif
+ }
+ 
+ void mark_rodata_ro(void)
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 064577ff9ff59..9c6cab71ba98b 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -1040,15 +1040,18 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 		goto out_off;
+ 	}
+ 
+-	/* 1. Initial fake pass to compute ctx->idx. */
+-
+-	/* Fake pass to fill in ctx->offset. */
+-	if (build_body(&ctx, extra_pass)) {
++	/*
++	 * 1. Initial fake pass to compute ctx->idx and ctx->offset.
++	 *
++	 * BPF line info needs ctx->offset[i] to be the offset of
++	 * instruction[i] in jited image, so build prologue first.
++	 */
++	if (build_prologue(&ctx, was_classic)) {
+ 		prog = orig_prog;
+ 		goto out_off;
+ 	}
+ 
+-	if (build_prologue(&ctx, was_classic)) {
++	if (build_body(&ctx, extra_pass)) {
+ 		prog = orig_prog;
+ 		goto out_off;
+ 	}
+@@ -1121,6 +1124,11 @@ skip_init_ctx:
+ 	prog->jited_len = prog_size;
+ 
+ 	if (!prog->is_func || extra_pass) {
++		int i;
++
++		/* offset[prog->len] is the size of program */
++		for (i = 0; i <= prog->len; i++)
++			ctx.offset[i] *= AARCH64_INSN_SIZE;
+ 		bpf_prog_fill_jited_linfo(prog, ctx.offset + 1);
+ out_off:
+ 		kfree(ctx.offset);
+diff --git a/arch/csky/kernel/perf_callchain.c b/arch/csky/kernel/perf_callchain.c
+index 35318a635a5fa..75e1f9df5f604 100644
+--- a/arch/csky/kernel/perf_callchain.c
++++ b/arch/csky/kernel/perf_callchain.c
+@@ -49,7 +49,7 @@ static unsigned long user_backtrace(struct perf_callchain_entry_ctx *entry,
+ {
+ 	struct stackframe buftail;
+ 	unsigned long lr = 0;
+-	unsigned long *user_frame_tail = (unsigned long *)fp;
++	unsigned long __user *user_frame_tail = (unsigned long __user *)fp;
+ 
+ 	/* Check accessibility of one struct frame_tail beyond */
+ 	if (!access_ok(user_frame_tail, sizeof(buftail)))
+diff --git a/arch/csky/kernel/signal.c b/arch/csky/kernel/signal.c
+index 0ca49b5e3dd37..243228b0aa075 100644
+--- a/arch/csky/kernel/signal.c
++++ b/arch/csky/kernel/signal.c
+@@ -136,7 +136,7 @@ static inline void __user *get_sigframe(struct ksignal *ksig,
+ static int
+ setup_rt_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs)
+ {
+-	struct rt_sigframe *frame;
++	struct rt_sigframe __user *frame;
+ 	int err = 0;
+ 	struct csky_vdso *vdso = current->mm->context.vdso;
+ 
+diff --git a/arch/m68k/coldfire/device.c b/arch/m68k/coldfire/device.c
+index 59f7dfe50a4d0..a055616942a1e 100644
+--- a/arch/m68k/coldfire/device.c
++++ b/arch/m68k/coldfire/device.c
+@@ -480,7 +480,7 @@ static struct platform_device mcf_i2c5 = {
+ #endif /* MCFI2C_BASE5 */
+ #endif /* IS_ENABLED(CONFIG_I2C_IMX) */
+ 
+-#if IS_ENABLED(CONFIG_MCF_EDMA)
++#ifdef MCFEDMA_BASE
+ 
+ static const struct dma_slave_map mcf_edma_map[] = {
+ 	{ "dreq0", "rx-tx", MCF_EDMA_FILTER_PARAM(0) },
+@@ -552,7 +552,7 @@ static struct platform_device mcf_edma = {
+ 		.platform_data = &mcf_edma_data,
+ 	}
+ };
+-#endif /* IS_ENABLED(CONFIG_MCF_EDMA) */
++#endif /* MCFEDMA_BASE */
+ 
+ #ifdef MCFSDHC_BASE
+ static struct mcf_esdhc_platform_data mcf_esdhc_data = {
+@@ -610,7 +610,7 @@ static struct platform_device *mcf_devices[] __initdata = {
+ 	&mcf_i2c5,
+ #endif
+ #endif
+-#if IS_ENABLED(CONFIG_MCF_EDMA)
++#ifdef MCFEDMA_BASE
+ 	&mcf_edma,
+ #endif
+ #ifdef MCFSDHC_BASE
+diff --git a/arch/microblaze/include/asm/uaccess.h b/arch/microblaze/include/asm/uaccess.h
+index 304b04ffea2fa..7c5d92e2915ca 100644
+--- a/arch/microblaze/include/asm/uaccess.h
++++ b/arch/microblaze/include/asm/uaccess.h
+@@ -167,27 +167,27 @@ extern long __user_bad(void);
+ 
+ #define __get_user(x, ptr)						\
+ ({									\
+-	unsigned long __gu_val = 0;					\
+ 	long __gu_err;							\
+ 	switch (sizeof(*(ptr))) {					\
+ 	case 1:								\
+-		__get_user_asm("lbu", (ptr), __gu_val, __gu_err);	\
++		__get_user_asm("lbu", (ptr), x, __gu_err);		\
+ 		break;							\
+ 	case 2:								\
+-		__get_user_asm("lhu", (ptr), __gu_val, __gu_err);	\
++		__get_user_asm("lhu", (ptr), x, __gu_err);		\
+ 		break;							\
+ 	case 4:								\
+-		__get_user_asm("lw", (ptr), __gu_val, __gu_err);	\
++		__get_user_asm("lw", (ptr), x, __gu_err);		\
+ 		break;							\
+-	case 8:								\
+-		__gu_err = __copy_from_user(&__gu_val, ptr, 8);		\
+-		if (__gu_err)						\
+-			__gu_err = -EFAULT;				\
++	case 8: {							\
++		__u64 __x = 0;						\
++		__gu_err = raw_copy_from_user(&__x, ptr, 8) ?		\
++							-EFAULT : 0;	\
++		(x) = (typeof(x))(typeof((x) - (x)))__x;		\
+ 		break;							\
++	}								\
+ 	default:							\
+ 		/* __gu_val = 0; __gu_err = -EINVAL;*/ __gu_err = __user_bad();\
+ 	}								\
+-	x = (__force __typeof__(*(ptr))) __gu_val;			\
+ 	__gu_err;							\
+ })
+ 
+diff --git a/arch/mips/dec/int-handler.S b/arch/mips/dec/int-handler.S
+index ea5b5a83f1e11..011d1d678840a 100644
+--- a/arch/mips/dec/int-handler.S
++++ b/arch/mips/dec/int-handler.S
+@@ -131,7 +131,7 @@
+ 		 */
+ 		mfc0	t0,CP0_CAUSE		# get pending interrupts
+ 		mfc0	t1,CP0_STATUS
+-#ifdef CONFIG_32BIT
++#if defined(CONFIG_32BIT) && defined(CONFIG_MIPS_FP_SUPPORT)
+ 		lw	t2,cpu_fpu_mask
+ #endif
+ 		andi	t0,ST0_IM		# CAUSE.CE may be non-zero!
+@@ -139,7 +139,7 @@
+ 
+ 		beqz	t0,spurious
+ 
+-#ifdef CONFIG_32BIT
++#if defined(CONFIG_32BIT) && defined(CONFIG_MIPS_FP_SUPPORT)
+ 		 and	t2,t0
+ 		bnez	t2,fpu			# handle FPU immediately
+ #endif
+@@ -280,7 +280,7 @@ handle_it:
+ 		j	dec_irq_dispatch
+ 		 nop
+ 
+-#ifdef CONFIG_32BIT
++#if defined(CONFIG_32BIT) && defined(CONFIG_MIPS_FP_SUPPORT)
+ fpu:
+ 		lw	t0,fpu_kstat_irq
+ 		nop
+diff --git a/arch/mips/dec/prom/Makefile b/arch/mips/dec/prom/Makefile
+index d95016016b42b..2bad87551203b 100644
+--- a/arch/mips/dec/prom/Makefile
++++ b/arch/mips/dec/prom/Makefile
+@@ -6,4 +6,4 @@
+ 
+ lib-y			+= init.o memory.o cmdline.o identify.o console.o
+ 
+-lib-$(CONFIG_32BIT)	+= locore.o
++lib-$(CONFIG_CPU_R3000)	+= locore.o
+diff --git a/arch/mips/dec/setup.c b/arch/mips/dec/setup.c
+index eaad0ed4b523b..99b9b29750db3 100644
+--- a/arch/mips/dec/setup.c
++++ b/arch/mips/dec/setup.c
+@@ -746,7 +746,8 @@ void __init arch_init_irq(void)
+ 		dec_interrupt[DEC_IRQ_HALT] = -1;
+ 
+ 	/* Register board interrupts: FPU and cascade. */
+-	if (dec_interrupt[DEC_IRQ_FPU] >= 0 && cpu_has_fpu) {
++	if (IS_ENABLED(CONFIG_MIPS_FP_SUPPORT) &&
++	    dec_interrupt[DEC_IRQ_FPU] >= 0 && cpu_has_fpu) {
+ 		struct irq_desc *desc_fpu;
+ 		int irq_fpu;
+ 
+diff --git a/arch/mips/include/asm/dec/prom.h b/arch/mips/include/asm/dec/prom.h
+index 62c7dfb90e06c..1e1247add1cf8 100644
+--- a/arch/mips/include/asm/dec/prom.h
++++ b/arch/mips/include/asm/dec/prom.h
+@@ -43,16 +43,11 @@
+  */
+ #define REX_PROM_MAGIC		0x30464354
+ 
+-#ifdef CONFIG_64BIT
+-
+-#define prom_is_rex(magic)	1	/* KN04 and KN05 are REX PROMs.  */
+-
+-#else /* !CONFIG_64BIT */
+-
+-#define prom_is_rex(magic)	((magic) == REX_PROM_MAGIC)
+-
+-#endif /* !CONFIG_64BIT */
+-
++/* KN04 and KN05 are REX PROMs, so only do the check for R3k systems.  */
++static inline bool prom_is_rex(u32 magic)
++{
++	return !IS_ENABLED(CONFIG_CPU_R3000) || magic == REX_PROM_MAGIC;
++}
+ 
+ /*
+  * 3MIN/MAXINE PROM entry points for DS5000/1xx's, DS5000/xx's and
+diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
+index 139b4050259fa..71153c369f294 100644
+--- a/arch/mips/include/asm/pgalloc.h
++++ b/arch/mips/include/asm/pgalloc.h
+@@ -15,6 +15,7 @@
+ 
+ #define __HAVE_ARCH_PMD_ALLOC_ONE
+ #define __HAVE_ARCH_PUD_ALLOC_ONE
++#define __HAVE_ARCH_PGD_FREE
+ #include <asm-generic/pgalloc.h>
+ 
+ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
+@@ -49,6 +50,11 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+ extern void pgd_init(unsigned long page);
+ extern pgd_t *pgd_alloc(struct mm_struct *mm);
+ 
++static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
++{
++	free_pages((unsigned long)pgd, PGD_ORDER);
++}
++
+ #define __pte_free_tlb(tlb,pte,address)			\
+ do {							\
+ 	pgtable_pte_page_dtor(pte);			\
+diff --git a/arch/mips/rb532/devices.c b/arch/mips/rb532/devices.c
+index dd34f1b32b797..0e3c8d761a451 100644
+--- a/arch/mips/rb532/devices.c
++++ b/arch/mips/rb532/devices.c
+@@ -310,11 +310,9 @@ static int __init plat_setup_devices(void)
+ static int __init setup_kmac(char *s)
+ {
+ 	printk(KERN_INFO "korina mac = %s\n", s);
+-	if (!mac_pton(s, korina_dev0_data.mac)) {
++	if (!mac_pton(s, korina_dev0_data.mac))
+ 		printk(KERN_ERR "Invalid mac\n");
+-		return -EINVAL;
+-	}
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("kmac=", setup_kmac);
+diff --git a/arch/nios2/include/asm/uaccess.h b/arch/nios2/include/asm/uaccess.h
+index a741abbed6fbf..8a386e6c07df1 100644
+--- a/arch/nios2/include/asm/uaccess.h
++++ b/arch/nios2/include/asm/uaccess.h
+@@ -89,6 +89,7 @@ extern __must_check long strnlen_user(const char __user *s, long n);
+ /* Optimized macros */
+ #define __get_user_asm(val, insn, addr, err)				\
+ {									\
++	unsigned long __gu_val;						\
+ 	__asm__ __volatile__(						\
+ 	"       movi    %0, %3\n"					\
+ 	"1:   " insn " %1, 0(%2)\n"					\
+@@ -97,14 +98,20 @@ extern __must_check long strnlen_user(const char __user *s, long n);
+ 	"       .section __ex_table,\"a\"\n"				\
+ 	"       .word 1b, 2b\n"						\
+ 	"       .previous"						\
+-	: "=&r" (err), "=r" (val)					\
++	: "=&r" (err), "=r" (__gu_val)					\
+ 	: "r" (addr), "i" (-EFAULT));					\
++	val = (__force __typeof__(*(addr)))__gu_val;			\
+ }
+ 
+-#define __get_user_unknown(val, size, ptr, err) do {			\
++extern void __get_user_unknown(void);
++
++#define __get_user_8(val, ptr, err) do {				\
++	u64 __val = 0;							\
+ 	err = 0;							\
+-	if (__copy_from_user(&(val), ptr, size)) {			\
++	if (raw_copy_from_user(&(__val), ptr, sizeof(val))) {		\
+ 		err = -EFAULT;						\
++	} else {							\
++		val = (typeof(val))(typeof((val) - (val)))__val;	\
+ 	}								\
+ 	} while (0)
+ 
+@@ -120,8 +127,11 @@ do {									\
+ 	case 4:								\
+ 		__get_user_asm(val, "ldw", ptr, err);			\
+ 		break;							\
++	case 8:								\
++		__get_user_8(val, ptr, err);				\
++		break;							\
+ 	default:							\
+-		__get_user_unknown(val, size, ptr, err);		\
++		__get_user_unknown();					\
+ 		break;							\
+ 	}								\
+ } while (0)
+@@ -130,9 +140,7 @@ do {									\
+ 	({								\
+ 	long __gu_err = -EFAULT;					\
+ 	const __typeof__(*(ptr)) __user *__gu_ptr = (ptr);		\
+-	unsigned long __gu_val = 0;					\
+-	__get_user_common(__gu_val, sizeof(*(ptr)), __gu_ptr, __gu_err);\
+-	(x) = (__force __typeof__(x))__gu_val;				\
++	__get_user_common(x, sizeof(*(ptr)), __gu_ptr, __gu_err);	\
+ 	__gu_err;							\
+ 	})
+ 
+@@ -140,11 +148,9 @@ do {									\
+ ({									\
+ 	long __gu_err = -EFAULT;					\
+ 	const __typeof__(*(ptr)) __user *__gu_ptr = (ptr);		\
+-	unsigned long __gu_val = 0;					\
+ 	if (access_ok( __gu_ptr, sizeof(*__gu_ptr)))	\
+-		__get_user_common(__gu_val, sizeof(*__gu_ptr),		\
++		__get_user_common(x, sizeof(*__gu_ptr),			\
+ 			__gu_ptr, __gu_err);				\
+-	(x) = (__force __typeof__(x))__gu_val;				\
+ 	__gu_err;							\
+ })
+ 
+diff --git a/arch/nios2/kernel/signal.c b/arch/nios2/kernel/signal.c
+index cf2dca2ac7c37..e45491d1d3e44 100644
+--- a/arch/nios2/kernel/signal.c
++++ b/arch/nios2/kernel/signal.c
+@@ -36,10 +36,10 @@ struct rt_sigframe {
+ 
+ static inline int rt_restore_ucontext(struct pt_regs *regs,
+ 					struct switch_stack *sw,
+-					struct ucontext *uc, int *pr2)
++					struct ucontext __user *uc, int *pr2)
+ {
+ 	int temp;
+-	unsigned long *gregs = uc->uc_mcontext.gregs;
++	unsigned long __user *gregs = uc->uc_mcontext.gregs;
+ 	int err;
+ 
+ 	/* Always make any pending restarted system calls return -EINTR */
+@@ -102,10 +102,11 @@ asmlinkage int do_rt_sigreturn(struct switch_stack *sw)
+ {
+ 	struct pt_regs *regs = (struct pt_regs *)(sw + 1);
+ 	/* Verify, can we follow the stack back */
+-	struct rt_sigframe *frame = (struct rt_sigframe *) regs->sp;
++	struct rt_sigframe __user *frame;
+ 	sigset_t set;
+ 	int rval;
+ 
++	frame = (struct rt_sigframe __user *) regs->sp;
+ 	if (!access_ok(frame, sizeof(*frame)))
+ 		goto badframe;
+ 
+@@ -124,10 +125,10 @@ badframe:
+ 	return 0;
+ }
+ 
+-static inline int rt_setup_ucontext(struct ucontext *uc, struct pt_regs *regs)
++static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *regs)
+ {
+ 	struct switch_stack *sw = (struct switch_stack *)regs - 1;
+-	unsigned long *gregs = uc->uc_mcontext.gregs;
++	unsigned long __user *gregs = uc->uc_mcontext.gregs;
+ 	int err = 0;
+ 
+ 	err |= __put_user(MCONTEXT_VERSION, &uc->uc_mcontext.version);
+@@ -162,8 +163,9 @@ static inline int rt_setup_ucontext(struct ucontext *uc, struct pt_regs *regs)
+ 	return err;
+ }
+ 
+-static inline void *get_sigframe(struct ksignal *ksig, struct pt_regs *regs,
+-				 size_t frame_size)
++static inline void __user *get_sigframe(struct ksignal *ksig,
++					struct pt_regs *regs,
++					size_t frame_size)
+ {
+ 	unsigned long usp;
+ 
+@@ -174,13 +176,13 @@ static inline void *get_sigframe(struct ksignal *ksig, struct pt_regs *regs,
+ 	usp = sigsp(usp, ksig);
+ 
+ 	/* Verify, is it 32 or 64 bit aligned */
+-	return (void *)((usp - frame_size) & -8UL);
++	return (void __user *)((usp - frame_size) & -8UL);
+ }
+ 
+ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+ 			  struct pt_regs *regs)
+ {
+-	struct rt_sigframe *frame;
++	struct rt_sigframe __user *frame;
+ 	int err = 0;
+ 
+ 	frame = get_sigframe(ksig, regs, sizeof(*frame));
+diff --git a/arch/parisc/include/asm/traps.h b/arch/parisc/include/asm/traps.h
+index 8ecc1f0c0483d..d0e090a2c000d 100644
+--- a/arch/parisc/include/asm/traps.h
++++ b/arch/parisc/include/asm/traps.h
+@@ -17,6 +17,7 @@ void die_if_kernel(char *str, struct pt_regs *regs, long err);
+ const char *trap_name(unsigned long code);
+ void do_page_fault(struct pt_regs *regs, unsigned long code,
+ 		unsigned long address);
++int handle_nadtlb_fault(struct pt_regs *regs);
+ #endif
+ 
+ #endif
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index 269b737d26299..bce47e0fb692c 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -661,6 +661,8 @@ void notrace handle_interruption(int code, struct pt_regs *regs)
+ 			 by hand. Technically we need to emulate:
+ 			 fdc,fdce,pdc,"fic,4f",prober,probeir,probew, probeiw
+ 		*/
++		if (code == 17 && handle_nadtlb_fault(regs))
++			return;
+ 		fault_address = regs->ior;
+ 		fault_space = regs->isr;
+ 		break;
+diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
+index 716960f5d92ea..5faa3cff47387 100644
+--- a/arch/parisc/mm/fault.c
++++ b/arch/parisc/mm/fault.c
+@@ -424,3 +424,92 @@ no_context:
+ 		goto no_context;
+ 	pagefault_out_of_memory();
+ }
++
++/* Handle non-access data TLB miss faults.
++ *
++ * For probe instructions, accesses to userspace are considered allowed
++ * if they lie in a valid VMA and the access type matches. We are not
++ * allowed to handle MM faults here so there may be situations where an
++ * actual access would fail even though a probe was successful.
++ */
++int
++handle_nadtlb_fault(struct pt_regs *regs)
++{
++	unsigned long insn = regs->iir;
++	int breg, treg, xreg, val = 0;
++	struct vm_area_struct *vma, *prev_vma;
++	struct task_struct *tsk;
++	struct mm_struct *mm;
++	unsigned long address;
++	unsigned long acc_type;
++
++	switch (insn & 0x380) {
++	case 0x280:
++		/* FDC instruction */
++		fallthrough;
++	case 0x380:
++		/* PDC and FIC instructions */
++		if (printk_ratelimit()) {
++			pr_warn("BUG: nullifying cache flush/purge instruction\n");
++			show_regs(regs);
++		}
++		if (insn & 0x20) {
++			/* Base modification */
++			breg = (insn >> 21) & 0x1f;
++			xreg = (insn >> 16) & 0x1f;
++			if (breg && xreg)
++				regs->gr[breg] += regs->gr[xreg];
++		}
++		regs->gr[0] |= PSW_N;
++		return 1;
++
++	case 0x180:
++		/* PROBE instruction */
++		treg = insn & 0x1f;
++		if (regs->isr) {
++			tsk = current;
++			mm = tsk->mm;
++			if (mm) {
++				/* Search for VMA */
++				address = regs->ior;
++				mmap_read_lock(mm);
++				vma = find_vma_prev(mm, address, &prev_vma);
++				mmap_read_unlock(mm);
++
++				/*
++				 * Check if access to the VMA is okay.
++				 * We don't allow for stack expansion.
++				 */
++				acc_type = (insn & 0x40) ? VM_WRITE : VM_READ;
++				if (vma
++				    && address >= vma->vm_start
++				    && (vma->vm_flags & acc_type) == acc_type)
++					val = 1;
++			}
++		}
++		if (treg)
++			regs->gr[treg] = val;
++		regs->gr[0] |= PSW_N;
++		return 1;
++
++	case 0x300:
++		/* LPA instruction */
++		if (insn & 0x20) {
++			/* Base modification */
++			breg = (insn >> 21) & 0x1f;
++			xreg = (insn >> 16) & 0x1f;
++			if (breg && xreg)
++				regs->gr[breg] += regs->gr[xreg];
++		}
++		treg = insn & 0x1f;
++		if (treg)
++			regs->gr[treg] = 0;
++		regs->gr[0] |= PSW_N;
++		return 1;
++
++	default:
++		break;
++	}
++
++	return 0;
++}
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index 5c8c06215dd4b..7a96cdefbd4e4 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -172,7 +172,7 @@ else
+ CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power7,$(call cc-option,-mtune=power5))
+ CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mcpu=power5,-mcpu=power4)
+ endif
+-else
++else ifdef CONFIG_PPC_BOOK3E_64
+ CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
+ endif
+ 
+diff --git a/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts b/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
+new file mode 100644
+index 0000000000000..73f8c998c64df
+--- /dev/null
++++ b/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
+@@ -0,0 +1,30 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ * T1040RDB-REV-A Device Tree Source
++ *
++ * Copyright 2014 - 2015 Freescale Semiconductor Inc.
++ *
++ */
++
++#include "t1040rdb.dts"
++
++/ {
++	model = "fsl,T1040RDB-REV-A";
++	compatible = "fsl,T1040RDB-REV-A";
++};
++
++&seville_port0 {
++	label = "ETH5";
++};
++
++&seville_port2 {
++	label = "ETH7";
++};
++
++&seville_port4 {
++	label = "ETH9";
++};
++
++&seville_port6 {
++	label = "ETH11";
++};
+diff --git a/arch/powerpc/boot/dts/fsl/t1040rdb.dts b/arch/powerpc/boot/dts/fsl/t1040rdb.dts
+index af0c8a6f56138..b6733e7e65805 100644
+--- a/arch/powerpc/boot/dts/fsl/t1040rdb.dts
++++ b/arch/powerpc/boot/dts/fsl/t1040rdb.dts
+@@ -119,7 +119,7 @@
+ 	managed = "in-band-status";
+ 	phy-handle = <&phy_qsgmii_0>;
+ 	phy-mode = "qsgmii";
+-	label = "ETH5";
++	label = "ETH3";
+ 	status = "okay";
+ };
+ 
+@@ -135,7 +135,7 @@
+ 	managed = "in-band-status";
+ 	phy-handle = <&phy_qsgmii_2>;
+ 	phy-mode = "qsgmii";
+-	label = "ETH7";
++	label = "ETH5";
+ 	status = "okay";
+ };
+ 
+@@ -151,7 +151,7 @@
+ 	managed = "in-band-status";
+ 	phy-handle = <&phy_qsgmii_4>;
+ 	phy-mode = "qsgmii";
+-	label = "ETH9";
++	label = "ETH7";
+ 	status = "okay";
+ };
+ 
+@@ -167,7 +167,7 @@
+ 	managed = "in-band-status";
+ 	phy-handle = <&phy_qsgmii_6>;
+ 	phy-mode = "qsgmii";
+-	label = "ETH11";
++	label = "ETH9";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
+index 58635960403c0..0182b291248ac 100644
+--- a/arch/powerpc/include/asm/io.h
++++ b/arch/powerpc/include/asm/io.h
+@@ -344,25 +344,37 @@ static inline void __raw_writeq_be(unsigned long v, volatile void __iomem *addr)
+  */
+ static inline void __raw_rm_writeb(u8 val, volatile void __iomem *paddr)
+ {
+-	__asm__ __volatile__("stbcix %0,0,%1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      stbcix %0,0,%1;  \
++			      .machine pop;"
+ 		: : "r" (val), "r" (paddr) : "memory");
+ }
+ 
+ static inline void __raw_rm_writew(u16 val, volatile void __iomem *paddr)
+ {
+-	__asm__ __volatile__("sthcix %0,0,%1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      sthcix %0,0,%1;  \
++			      .machine pop;"
+ 		: : "r" (val), "r" (paddr) : "memory");
+ }
+ 
+ static inline void __raw_rm_writel(u32 val, volatile void __iomem *paddr)
+ {
+-	__asm__ __volatile__("stwcix %0,0,%1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      stwcix %0,0,%1;  \
++			      .machine pop;"
+ 		: : "r" (val), "r" (paddr) : "memory");
+ }
+ 
+ static inline void __raw_rm_writeq(u64 val, volatile void __iomem *paddr)
+ {
+-	__asm__ __volatile__("stdcix %0,0,%1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      stdcix %0,0,%1;  \
++			      .machine pop;"
+ 		: : "r" (val), "r" (paddr) : "memory");
+ }
+ 
+@@ -374,7 +386,10 @@ static inline void __raw_rm_writeq_be(u64 val, volatile void __iomem *paddr)
+ static inline u8 __raw_rm_readb(volatile void __iomem *paddr)
+ {
+ 	u8 ret;
+-	__asm__ __volatile__("lbzcix %0,0, %1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      lbzcix %0,0, %1; \
++			      .machine pop;"
+ 			     : "=r" (ret) : "r" (paddr) : "memory");
+ 	return ret;
+ }
+@@ -382,7 +397,10 @@ static inline u8 __raw_rm_readb(volatile void __iomem *paddr)
+ static inline u16 __raw_rm_readw(volatile void __iomem *paddr)
+ {
+ 	u16 ret;
+-	__asm__ __volatile__("lhzcix %0,0, %1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      lhzcix %0,0, %1; \
++			      .machine pop;"
+ 			     : "=r" (ret) : "r" (paddr) : "memory");
+ 	return ret;
+ }
+@@ -390,7 +408,10 @@ static inline u16 __raw_rm_readw(volatile void __iomem *paddr)
+ static inline u32 __raw_rm_readl(volatile void __iomem *paddr)
+ {
+ 	u32 ret;
+-	__asm__ __volatile__("lwzcix %0,0, %1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      lwzcix %0,0, %1; \
++			      .machine pop;"
+ 			     : "=r" (ret) : "r" (paddr) : "memory");
+ 	return ret;
+ }
+@@ -398,7 +419,10 @@ static inline u32 __raw_rm_readl(volatile void __iomem *paddr)
+ static inline u64 __raw_rm_readq(volatile void __iomem *paddr)
+ {
+ 	u64 ret;
+-	__asm__ __volatile__("ldcix %0,0, %1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      ldcix %0,0, %1;  \
++			      .machine pop;"
+ 			     : "=r" (ret) : "r" (paddr) : "memory");
+ 	return ret;
+ }
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index f53bfefb4a577..6b808bcdecd52 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -229,8 +229,11 @@ extern long __get_user_bad(void);
+  */
+ #define __get_user_atomic_128_aligned(kaddr, uaddr, err)		\
+ 	__asm__ __volatile__(				\
++		".machine push\n"			\
++		".machine altivec\n"			\
+ 		"1:	lvx  0,0,%1	# get user\n"	\
+ 		" 	stvx 0,0,%2	# put kernel\n"	\
++		".machine pop\n"			\
+ 		"2:\n"					\
+ 		".section .fixup,\"ax\"\n"		\
+ 		"3:	li %0,%3\n"			\
+diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
+index 617eba82531cb..d89cf802d9aa7 100644
+--- a/arch/powerpc/kernel/kvm.c
++++ b/arch/powerpc/kernel/kvm.c
+@@ -669,7 +669,7 @@ static void __init kvm_use_magic_page(void)
+ 	on_each_cpu(kvm_map_magic_page, &features, 1);
+ 
+ 	/* Quick self-test to see if the mapping works */
+-	if (!fault_in_pages_readable((const char *)KVM_MAGIC_PAGE, sizeof(u32))) {
++	if (fault_in_pages_readable((const char *)KVM_MAGIC_PAGE, sizeof(u32))) {
+ 		kvm_patching_worked = false;
+ 		return;
+ 	}
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 527c205d5a5f5..38b7a3491aac0 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -5752,8 +5752,11 @@ static int kvmppc_book3s_init_hv(void)
+ 	if (r)
+ 		return r;
+ 
+-	if (kvmppc_radix_possible())
++	if (kvmppc_radix_possible()) {
+ 		r = kvmppc_radix_init();
++		if (r)
++			return r;
++	}
+ 
+ 	/*
+ 	 * POWER9 chips before version 2.02 can't have some threads in
+diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
+index 543db9157f3b1..ef8077a739b88 100644
+--- a/arch/powerpc/kvm/powerpc.c
++++ b/arch/powerpc/kvm/powerpc.c
+@@ -1500,7 +1500,7 @@ int kvmppc_handle_vmx_load(struct kvm_vcpu *vcpu,
+ {
+ 	enum emulation_result emulated = EMULATE_DONE;
+ 
+-	if (vcpu->arch.mmio_vsx_copy_nums > 2)
++	if (vcpu->arch.mmio_vmx_copy_nums > 2)
+ 		return EMULATE_FAIL;
+ 
+ 	while (vcpu->arch.mmio_vmx_copy_nums) {
+@@ -1597,7 +1597,7 @@ int kvmppc_handle_vmx_store(struct kvm_vcpu *vcpu,
+ 	unsigned int index = rs & KVM_MMIO_REG_MASK;
+ 	enum emulation_result emulated = EMULATE_DONE;
+ 
+-	if (vcpu->arch.mmio_vsx_copy_nums > 2)
++	if (vcpu->arch.mmio_vmx_copy_nums > 2)
+ 		return EMULATE_FAIL;
+ 
+ 	vcpu->arch.io_gpr = rs;
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index 0edebbbffcdca..2d19655328f12 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -108,9 +108,9 @@ static nokprobe_inline long address_ok(struct pt_regs *regs,
+ {
+ 	if (!user_mode(regs))
+ 		return 1;
+-	if (__access_ok(ea, nb))
++	if (access_ok((void __user *)ea, nb))
+ 		return 1;
+-	if (__access_ok(ea, 1))
++	if (access_ok((void __user *)ea, 1))
+ 		/* Access overlaps the end of the user region */
+ 		regs->dar = TASK_SIZE_MAX - 1;
+ 	else
+@@ -949,7 +949,10 @@ NOKPROBE_SYMBOL(emulate_dcbz);
+ 
+ #define __put_user_asmx(x, addr, err, op, cr)		\
+ 	__asm__ __volatile__(				\
++		".machine push\n"			\
++		".machine power8\n"			\
+ 		"1:	" op " %2,0,%3\n"		\
++		".machine pop\n"			\
+ 		"	mfcr	%1\n"			\
+ 		"2:\n"					\
+ 		".section .fixup,\"ax\"\n"		\
+@@ -962,7 +965,10 @@ NOKPROBE_SYMBOL(emulate_dcbz);
+ 
+ #define __get_user_asmx(x, addr, err, op)		\
+ 	__asm__ __volatile__(				\
++		".machine push\n"			\
++		".machine power8\n"			\
+ 		"1:	"op" %1,0,%2\n"			\
++		".machine pop\n"			\
+ 		"2:\n"					\
+ 		".section .fixup,\"ax\"\n"		\
+ 		"3:	li	%0,%3\n"		\
+@@ -3187,7 +3193,7 @@ int emulate_loadstore(struct pt_regs *regs, struct instruction_op *op)
+ 			__put_user_asmx(op->val, ea, err, "stbcx.", cr);
+ 			break;
+ 		case 2:
+-			__put_user_asmx(op->val, ea, err, "stbcx.", cr);
++			__put_user_asmx(op->val, ea, err, "sthcx.", cr);
+ 			break;
+ #endif
+ 		case 4:
+diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
+index cf8770b1a692e..f3e4d069e0ba7 100644
+--- a/arch/powerpc/mm/kasan/kasan_init_32.c
++++ b/arch/powerpc/mm/kasan/kasan_init_32.c
+@@ -83,13 +83,12 @@ void __init
+ kasan_update_early_region(unsigned long k_start, unsigned long k_end, pte_t pte)
+ {
+ 	unsigned long k_cur;
+-	phys_addr_t pa = __pa(kasan_early_shadow_page);
+ 
+ 	for (k_cur = k_start; k_cur != k_end; k_cur += PAGE_SIZE) {
+ 		pmd_t *pmd = pmd_off_k(k_cur);
+ 		pte_t *ptep = pte_offset_kernel(pmd, k_cur);
+ 
+-		if ((pte_val(*ptep) & PTE_RPN_MASK) != pa)
++		if (pte_page(*ptep) != virt_to_page(lm_alias(kasan_early_shadow_page)))
+ 			continue;
+ 
+ 		__set_pte_at(&init_mm, k_cur, ptep, pte, 0);
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 094a1076fd1fe..275c60f92a7ce 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -742,7 +742,9 @@ static int __init parse_numa_properties(void)
+ 			of_node_put(cpu);
+ 		}
+ 
+-		node_set_online(nid);
++		/* node_set_online() is an UB if 'nid' is negative */
++		if (likely(nid >= 0))
++			node_set_online(nid);
+ 	}
+ 
+ 	get_n_mem_cells(&n_mem_addr_cells, &n_mem_size_cells);
+diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
+index 7b25548ec42b0..e8074d7f2401b 100644
+--- a/arch/powerpc/perf/imc-pmu.c
++++ b/arch/powerpc/perf/imc-pmu.c
+@@ -1457,7 +1457,11 @@ static int trace_imc_event_init(struct perf_event *event)
+ 
+ 	event->hw.idx = -1;
+ 
+-	event->pmu->task_ctx_nr = perf_hw_context;
++	/*
++	 * There can only be a single PMU for perf_hw_context events which is assigned to
++	 * core PMU. Hence use "perf_sw_context" for trace_imc.
++	 */
++	event->pmu->task_ctx_nr = perf_sw_context;
+ 	event->destroy = reset_global_refc;
+ 	return 0;
+ }
+diff --git a/arch/powerpc/platforms/8xx/pic.c b/arch/powerpc/platforms/8xx/pic.c
+index f2ba837249d69..04a6abf14c295 100644
+--- a/arch/powerpc/platforms/8xx/pic.c
++++ b/arch/powerpc/platforms/8xx/pic.c
+@@ -153,6 +153,7 @@ int __init mpc8xx_pic_init(void)
+ 	if (mpc8xx_pic_host == NULL) {
+ 		printk(KERN_ERR "MPC8xx PIC: failed to allocate irq host!\n");
+ 		ret = -ENOMEM;
++		goto out;
+ 	}
+ 
+ 	ret = 0;
+diff --git a/arch/powerpc/platforms/powernv/rng.c b/arch/powerpc/platforms/powernv/rng.c
+index 72c25295c1c2b..69c344c8884f3 100644
+--- a/arch/powerpc/platforms/powernv/rng.c
++++ b/arch/powerpc/platforms/powernv/rng.c
+@@ -43,7 +43,11 @@ static unsigned long rng_whiten(struct powernv_rng *rng, unsigned long val)
+ 	unsigned long parity;
+ 
+ 	/* Calculate the parity of the value */
+-	asm ("popcntd %0,%1" : "=r" (parity) : "r" (val));
++	asm (".machine push;   \
++	      .machine power7; \
++	      popcntd %0,%1;   \
++	      .machine pop;"
++	     : "=r" (parity) : "r" (val));
+ 
+ 	/* xor our value with the previous mask */
+ 	val ^= rng->mask;
+diff --git a/arch/powerpc/sysdev/fsl_gtm.c b/arch/powerpc/sysdev/fsl_gtm.c
+index 8963eaffb1b7b..39186ad6b3c3a 100644
+--- a/arch/powerpc/sysdev/fsl_gtm.c
++++ b/arch/powerpc/sysdev/fsl_gtm.c
+@@ -86,7 +86,7 @@ static LIST_HEAD(gtms);
+  */
+ struct gtm_timer *gtm_get_timer16(void)
+ {
+-	struct gtm *gtm = NULL;
++	struct gtm *gtm;
+ 	int i;
+ 
+ 	list_for_each_entry(gtm, &gtms, list_node) {
+@@ -103,7 +103,7 @@ struct gtm_timer *gtm_get_timer16(void)
+ 		spin_unlock_irq(&gtm->lock);
+ 	}
+ 
+-	if (gtm)
++	if (!list_empty(&gtms))
+ 		return ERR_PTR(-EBUSY);
+ 	return ERR_PTR(-ENODEV);
+ }
+diff --git a/arch/riscv/include/asm/module.lds.h b/arch/riscv/include/asm/module.lds.h
+index 4254ff2ff0494..1075beae1ac64 100644
+--- a/arch/riscv/include/asm/module.lds.h
++++ b/arch/riscv/include/asm/module.lds.h
+@@ -2,8 +2,8 @@
+ /* Copyright (C) 2017 Andes Technology Corporation */
+ #ifdef CONFIG_MODULE_SECTIONS
+ SECTIONS {
+-	.plt (NOLOAD) : { BYTE(0) }
+-	.got (NOLOAD) : { BYTE(0) }
+-	.got.plt (NOLOAD) : { BYTE(0) }
++	.plt : { BYTE(0) }
++	.got : { BYTE(0) }
++	.got.plt : { BYTE(0) }
+ }
+ #endif
+diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
+index a390711129de6..d79ae9d98999f 100644
+--- a/arch/riscv/include/asm/thread_info.h
++++ b/arch/riscv/include/asm/thread_info.h
+@@ -11,11 +11,17 @@
+ #include <asm/page.h>
+ #include <linux/const.h>
+ 
++#ifdef CONFIG_KASAN
++#define KASAN_STACK_ORDER 1
++#else
++#define KASAN_STACK_ORDER 0
++#endif
++
+ /* thread information allocation */
+ #ifdef CONFIG_64BIT
+-#define THREAD_SIZE_ORDER	(2)
++#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
+ #else
+-#define THREAD_SIZE_ORDER	(1)
++#define THREAD_SIZE_ORDER	(1 + KASAN_STACK_ORDER)
+ #endif
+ #define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
+ 
+diff --git a/arch/riscv/kernel/perf_callchain.c b/arch/riscv/kernel/perf_callchain.c
+index ad3001cbdf618..fb02811df7143 100644
+--- a/arch/riscv/kernel/perf_callchain.c
++++ b/arch/riscv/kernel/perf_callchain.c
+@@ -19,8 +19,8 @@ static unsigned long user_backtrace(struct perf_callchain_entry_ctx *entry,
+ {
+ 	struct stackframe buftail;
+ 	unsigned long ra = 0;
+-	unsigned long *user_frame_tail =
+-			(unsigned long *)(fp - sizeof(struct stackframe));
++	unsigned long __user *user_frame_tail =
++		(unsigned long __user *)(fp - sizeof(struct stackframe));
+ 
+ 	/* Check accessibility of one struct frame_tail beyond */
+ 	if (!access_ok(user_frame_tail, sizeof(buftail)))
+@@ -77,7 +77,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 
+ bool fill_callchain(unsigned long pc, void *entry)
+ {
+-	return perf_callchain_store(entry, pc);
++	return perf_callchain_store(entry, pc) == 0;
+ }
+ 
+ void notrace walk_stackframe(struct task_struct *task,
+diff --git a/arch/sparc/kernel/signal_32.c b/arch/sparc/kernel/signal_32.c
+index 741d0701003af..1da36dd34990b 100644
+--- a/arch/sparc/kernel/signal_32.c
++++ b/arch/sparc/kernel/signal_32.c
+@@ -65,7 +65,7 @@ struct rt_signal_frame {
+  */
+ static inline bool invalid_frame_pointer(void __user *fp, int fplen)
+ {
+-	if ((((unsigned long) fp) & 15) || !__access_ok((unsigned long)fp, fplen))
++	if ((((unsigned long) fp) & 15) || !access_ok(fp, fplen))
+ 		return true;
+ 
+ 	return false;
+diff --git a/arch/um/drivers/mconsole_kern.c b/arch/um/drivers/mconsole_kern.c
+index a2e680f7d39f2..2d20773b7d8ee 100644
+--- a/arch/um/drivers/mconsole_kern.c
++++ b/arch/um/drivers/mconsole_kern.c
+@@ -223,7 +223,7 @@ void mconsole_go(struct mc_request *req)
+ 
+ void mconsole_stop(struct mc_request *req)
+ {
+-	deactivate_fd(req->originating_fd, MCONSOLE_IRQ);
++	block_signals();
+ 	os_set_fd_block(req->originating_fd, 1);
+ 	mconsole_reply(req, "stopped", 0, 0);
+ 	for (;;) {
+@@ -246,6 +246,7 @@ void mconsole_stop(struct mc_request *req)
+ 	}
+ 	os_set_fd_block(req->originating_fd, 0);
+ 	mconsole_reply(req, "", 0, 0);
++	unblock_signals();
+ }
+ 
+ static DEFINE_SPINLOCK(mc_devices_lock);
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index c084899e95825..cc3b79c066853 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -472,7 +472,7 @@ static u64 pt_config_filters(struct perf_event *event)
+ 			pt->filters.filter[range].msr_b = filter->msr_b;
+ 		}
+ 
+-		rtit_ctl |= filter->config << pt_address_ranges[range].reg_off;
++		rtit_ctl |= (u64)filter->config << pt_address_ranges[range].reg_off;
+ 	}
+ 
+ 	return rtit_ctl;
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index 7462b79c39de6..18e952fed021b 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -532,7 +532,7 @@ static void __send_ipi_mask(const struct cpumask *mask, int vector)
+ 		} else if (apic_id < min && max - apic_id < KVM_IPI_CLUSTER_SIZE) {
+ 			ipi_bitmap <<= min - apic_id;
+ 			min = apic_id;
+-		} else if (apic_id < min + KVM_IPI_CLUSTER_SIZE) {
++		} else if (apic_id > min && apic_id < min + KVM_IPI_CLUSTER_SIZE) {
+ 			max = apic_id < max ? max : apic_id;
+ 		} else {
+ 			ret = kvm_hypercall4(KVM_HC_SEND_IPI, (unsigned long)ipi_bitmap,
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index e82151ba95c09..a63df19ef4dad 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -1718,11 +1718,6 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ 		goto exception;
+ 	}
+ 
+-	if (!seg_desc.p) {
+-		err_vec = (seg == VCPU_SREG_SS) ? SS_VECTOR : NP_VECTOR;
+-		goto exception;
+-	}
+-
+ 	dpl = seg_desc.dpl;
+ 
+ 	switch (seg) {
+@@ -1762,6 +1757,10 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ 	case VCPU_SREG_TR:
+ 		if (seg_desc.s || (seg_desc.type != 1 && seg_desc.type != 9))
+ 			goto exception;
++		if (!seg_desc.p) {
++			err_vec = NP_VECTOR;
++			goto exception;
++		}
+ 		old_desc = seg_desc;
+ 		seg_desc.type |= 2; /* busy */
+ 		ret = ctxt->ops->cmpxchg_emulated(ctxt, desc_addr, &old_desc, &seg_desc,
+@@ -1786,6 +1785,11 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ 		break;
+ 	}
+ 
++	if (!seg_desc.p) {
++		err_vec = (seg == VCPU_SREG_SS) ? SS_VECTOR : NP_VECTOR;
++		goto exception;
++	}
++
+ 	if (seg_desc.s) {
+ 		/* mark segment as accessed */
+ 		if (!(seg_desc.type & 1)) {
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 328f37e4fd3a7..d806139377bc6 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -207,7 +207,7 @@ static int synic_set_msr(struct kvm_vcpu_hv_synic *synic,
+ 	struct kvm_vcpu *vcpu = synic_to_vcpu(synic);
+ 	int ret;
+ 
+-	if (!synic->active && !host)
++	if (!synic->active && (!host || data))
+ 		return 1;
+ 
+ 	trace_kvm_hv_synic_set_msr(vcpu->vcpu_id, msr, data, host);
+@@ -253,6 +253,9 @@ static int synic_set_msr(struct kvm_vcpu_hv_synic *synic,
+ 	case HV_X64_MSR_EOM: {
+ 		int i;
+ 
++		if (!synic->active)
++			break;
++
+ 		for (i = 0; i < ARRAY_SIZE(synic->sint); i++)
+ 			kvm_hv_notify_acked_sint(vcpu, i);
+ 		break;
+@@ -636,7 +639,7 @@ static int stimer_set_config(struct kvm_vcpu_hv_stimer *stimer, u64 config,
+ 	struct kvm_vcpu *vcpu = stimer_to_vcpu(stimer);
+ 	struct kvm_vcpu_hv_synic *synic = vcpu_to_synic(vcpu);
+ 
+-	if (!synic->active && !host)
++	if (!synic->active && (!host || config))
+ 		return 1;
+ 
+ 	trace_kvm_hv_stimer_set_config(stimer_to_vcpu(stimer)->vcpu_id,
+@@ -660,7 +663,7 @@ static int stimer_set_count(struct kvm_vcpu_hv_stimer *stimer, u64 count,
+ 	struct kvm_vcpu *vcpu = stimer_to_vcpu(stimer);
+ 	struct kvm_vcpu_hv_synic *synic = vcpu_to_synic(vcpu);
+ 
+-	if (!synic->active && !host)
++	if (!synic->active && (!host || count))
+ 		return 1;
+ 
+ 	trace_kvm_hv_stimer_set_count(stimer_to_vcpu(stimer)->vcpu_id,
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 677d21082454f..de11149e28e09 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -2227,10 +2227,7 @@ void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data)
+ 
+ void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8)
+ {
+-	struct kvm_lapic *apic = vcpu->arch.apic;
+-
+-	apic_set_tpr(apic, ((cr8 & 0x0f) << 4)
+-		     | (kvm_lapic_get_reg(apic, APIC_TASKPRI) & 4));
++	apic_set_tpr(vcpu->arch.apic, (cr8 & 0x0f) << 4);
+ }
+ 
+ u64 kvm_lapic_get_cr8(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index f8829134bf341..c6daeeff1d9c9 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -34,9 +34,8 @@
+ 	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
+ 	#ifdef CONFIG_X86_64
+ 	#define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
+-	#define CMPXCHG cmpxchg
++	#define CMPXCHG "cmpxchgq"
+ 	#else
+-	#define CMPXCHG cmpxchg64
+ 	#define PT_MAX_FULL_LEVELS 2
+ 	#endif
+ #elif PTTYPE == 32
+@@ -52,7 +51,7 @@
+ 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
+ 	#define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
+ 	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
+-	#define CMPXCHG cmpxchg
++	#define CMPXCHG "cmpxchgl"
+ #elif PTTYPE == PTTYPE_EPT
+ 	#define pt_element_t u64
+ 	#define guest_walker guest_walkerEPT
+@@ -65,7 +64,9 @@
+ 	#define PT_GUEST_DIRTY_SHIFT 9
+ 	#define PT_GUEST_ACCESSED_SHIFT 8
+ 	#define PT_HAVE_ACCESSED_DIRTY(mmu) ((mmu)->ept_ad)
+-	#define CMPXCHG cmpxchg64
++	#ifdef CONFIG_X86_64
++	#define CMPXCHG "cmpxchgq"
++	#endif
+ 	#define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
+ #else
+ 	#error Invalid PTTYPE value
+@@ -147,43 +148,39 @@ static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+ 			       pt_element_t __user *ptep_user, unsigned index,
+ 			       pt_element_t orig_pte, pt_element_t new_pte)
+ {
+-	int npages;
+-	pt_element_t ret;
+-	pt_element_t *table;
+-	struct page *page;
+-
+-	npages = get_user_pages_fast((unsigned long)ptep_user, 1, FOLL_WRITE, &page);
+-	if (likely(npages == 1)) {
+-		table = kmap_atomic(page);
+-		ret = CMPXCHG(&table[index], orig_pte, new_pte);
+-		kunmap_atomic(table);
+-
+-		kvm_release_page_dirty(page);
+-	} else {
+-		struct vm_area_struct *vma;
+-		unsigned long vaddr = (unsigned long)ptep_user & PAGE_MASK;
+-		unsigned long pfn;
+-		unsigned long paddr;
+-
+-		mmap_read_lock(current->mm);
+-		vma = find_vma_intersection(current->mm, vaddr, vaddr + PAGE_SIZE);
+-		if (!vma || !(vma->vm_flags & VM_PFNMAP)) {
+-			mmap_read_unlock(current->mm);
+-			return -EFAULT;
+-		}
+-		pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+-		paddr = pfn << PAGE_SHIFT;
+-		table = memremap(paddr, PAGE_SIZE, MEMREMAP_WB);
+-		if (!table) {
+-			mmap_read_unlock(current->mm);
+-			return -EFAULT;
+-		}
+-		ret = CMPXCHG(&table[index], orig_pte, new_pte);
+-		memunmap(table);
+-		mmap_read_unlock(current->mm);
+-	}
++	int r = -EFAULT;
++
++	if (!user_access_begin(ptep_user, sizeof(pt_element_t)))
++		return -EFAULT;
++
++#ifdef CMPXCHG
++	asm volatile("1:" LOCK_PREFIX CMPXCHG " %[new], %[ptr]\n"
++		     "mov $0, %[r]\n"
++		     "setnz %b[r]\n"
++		     "2:"
++		     _ASM_EXTABLE_UA(1b, 2b)
++		     : [ptr] "+m" (*ptep_user),
++		       [old] "+a" (orig_pte),
++		       [r] "+q" (r)
++		     : [new] "r" (new_pte)
++		     : "memory");
++#else
++	asm volatile("1:" LOCK_PREFIX "cmpxchg8b %[ptr]\n"
++		     "movl $0, %[r]\n"
++		     "jz 2f\n"
++		     "incl %[r]\n"
++		     "2:"
++		     _ASM_EXTABLE_UA(1b, 2b)
++		     : [ptr] "+m" (*ptep_user),
++		       [old] "+A" (orig_pte),
++		       [r] "+rm" (r)
++		     : [new_lo] "b" ((u32)new_pte),
++		       [new_hi] "c" ((u32)(new_pte >> 32))
++		     : "memory");
++#endif
+ 
+-	return (ret != orig_pte);
++	user_access_end();
++	return r;
+ }
+ 
+ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index 7e08efb068393..073514bbb5f71 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -902,6 +902,9 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+ 		if (tdp_mmu_iter_cond_resched(kvm, &iter, false))
+ 			continue;
+ 
++		if (!is_shadow_present_pte(iter.old_spte))
++			continue;
++
+ 		if (spte_ad_need_write_protect(iter.old_spte)) {
+ 			if (is_writable_pte(iter.old_spte))
+ 				new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
+diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
+index a8b5533cf601d..3e5cb74c0b538 100644
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -806,7 +806,7 @@ int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+ {
+ 	struct kvm_kernel_irq_routing_entry *e;
+ 	struct kvm_irq_routing_table *irq_rt;
+-	int idx, ret = -EINVAL;
++	int idx, ret = 0;
+ 
+ 	if (!kvm_arch_has_assigned_device(kvm) ||
+ 	    !irq_remapping_cap(IRQ_POSTING_CAP))
+@@ -817,7 +817,13 @@ int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+ 
+ 	idx = srcu_read_lock(&kvm->irq_srcu);
+ 	irq_rt = srcu_dereference(kvm->irq_routing, &kvm->irq_srcu);
+-	WARN_ON(guest_irq >= irq_rt->nr_rt_entries);
++
++	if (guest_irq >= irq_rt->nr_rt_entries ||
++		hlist_empty(&irq_rt->map[guest_irq])) {
++		pr_warn_once("no route for guest_irq %u/%u (broken user space?)\n",
++			     guest_irq, irq_rt->nr_rt_entries);
++		goto out;
++	}
+ 
+ 	hlist_for_each_entry(e, &irq_rt->map[guest_irq], link) {
+ 		struct vcpu_data vcpu_info;
+diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
+index e13b0b49fcdfc..d7249f4c90f1b 100644
+--- a/arch/x86/xen/pmu.c
++++ b/arch/x86/xen/pmu.c
+@@ -512,10 +512,7 @@ irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id)
+ 	return ret;
+ }
+ 
+-bool is_xen_pmu(int cpu)
+-{
+-	return (get_xenpmu_data() != NULL);
+-}
++bool is_xen_pmu;
+ 
+ void xen_pmu_init(int cpu)
+ {
+@@ -526,7 +523,7 @@ void xen_pmu_init(int cpu)
+ 
+ 	BUILD_BUG_ON(sizeof(struct xen_pmu_data) > PAGE_SIZE);
+ 
+-	if (xen_hvm_domain())
++	if (xen_hvm_domain() || (cpu != 0 && !is_xen_pmu))
+ 		return;
+ 
+ 	xenpmu_data = (struct xen_pmu_data *)get_zeroed_page(GFP_KERNEL);
+@@ -547,7 +544,8 @@ void xen_pmu_init(int cpu)
+ 	per_cpu(xenpmu_shared, cpu).xenpmu_data = xenpmu_data;
+ 	per_cpu(xenpmu_shared, cpu).flags = 0;
+ 
+-	if (cpu == 0) {
++	if (!is_xen_pmu) {
++		is_xen_pmu = true;
+ 		perf_register_guest_info_callbacks(&xen_guest_cbs);
+ 		xen_pmu_arch_init();
+ 	}
+diff --git a/arch/x86/xen/pmu.h b/arch/x86/xen/pmu.h
+index 0e83a160589bc..65c58894fc79f 100644
+--- a/arch/x86/xen/pmu.h
++++ b/arch/x86/xen/pmu.h
+@@ -4,6 +4,8 @@
+ 
+ #include <xen/interface/xenpmu.h>
+ 
++extern bool is_xen_pmu;
++
+ irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id);
+ #ifdef CONFIG_XEN_HAVE_VPMU
+ void xen_pmu_init(int cpu);
+@@ -12,7 +14,6 @@ void xen_pmu_finish(int cpu);
+ static inline void xen_pmu_init(int cpu) {}
+ static inline void xen_pmu_finish(int cpu) {}
+ #endif
+-bool is_xen_pmu(int cpu);
+ bool pmu_msr_read(unsigned int msr, uint64_t *val, int *err);
+ bool pmu_msr_write(unsigned int msr, uint32_t low, uint32_t high, int *err);
+ int pmu_apic_update(uint32_t reg);
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index 8f9e7e2407c87..35b6d15d874d0 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -130,7 +130,7 @@ int xen_smp_intr_init_pv(unsigned int cpu)
+ 	per_cpu(xen_irq_work, cpu).irq = rc;
+ 	per_cpu(xen_irq_work, cpu).name = callfunc_name;
+ 
+-	if (is_xen_pmu(cpu)) {
++	if (is_xen_pmu) {
+ 		pmu_name = kasprintf(GFP_KERNEL, "pmu%d", cpu);
+ 		rc = bind_virq_to_irqhandler(VIRQ_XENPMU, cpu,
+ 					     xen_pmu_irq_handler,
+diff --git a/arch/xtensa/include/asm/processor.h b/arch/xtensa/include/asm/processor.h
+index 7f63aca6a0d34..9dd4efe1bf0bd 100644
+--- a/arch/xtensa/include/asm/processor.h
++++ b/arch/xtensa/include/asm/processor.h
+@@ -226,8 +226,8 @@ extern unsigned long get_wchan(struct task_struct *p);
+ 
+ #define xtensa_set_sr(x, sr) \
+ 	({ \
+-	 unsigned int v = (unsigned int)(x); \
+-	 __asm__ __volatile__ ("wsr %0, "__stringify(sr) :: "a"(v)); \
++	 __asm__ __volatile__ ("wsr %0, "__stringify(sr) :: \
++			       "a"((unsigned int)(x))); \
+ 	 })
+ 
+ #define xtensa_get_sr(sr) \
+diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
+index 61cf6497a646b..0dde21e0d3de4 100644
+--- a/arch/xtensa/kernel/jump_label.c
++++ b/arch/xtensa/kernel/jump_label.c
+@@ -61,7 +61,7 @@ static void patch_text(unsigned long addr, const void *data, size_t sz)
+ 			.data = data,
+ 		};
+ 		stop_machine_cpuslocked(patch_text_stop_machine,
+-					&patch, NULL);
++					&patch, cpu_online_mask);
+ 	} else {
+ 		unsigned long flags;
+ 
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index b791e2041e49b..c2fdd6fcdaee6 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -642,6 +642,12 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ 	struct bfq_entity *entity = &bfqq->entity;
+ 
++	/*
++	 * oom_bfqq is not allowed to move, oom_bfqq will hold ref to root_group
++	 * until elevator exit.
++	 */
++	if (bfqq == &bfqd->oom_bfqq)
++		return;
+ 	/*
+ 	 * Get extra reference to prevent bfqq from being freed in
+ 	 * next possible expire or deactivate.
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 8d95bf7765b19..de2cd4bd602fd 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2526,6 +2526,15 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+ 	 * are likely to increase the throughput.
+ 	 */
+ 	bfqq->new_bfqq = new_bfqq;
++	/*
++	 * The above assignment schedules the following redirections:
++	 * each time some I/O for bfqq arrives, the process that
++	 * generated that I/O is disassociated from bfqq and
++	 * associated with new_bfqq. Here we increases new_bfqq->ref
++	 * in advance, adding the number of processes that are
++	 * expected to be associated with new_bfqq as they happen to
++	 * issue I/O.
++	 */
+ 	new_bfqq->ref += process_refs;
+ 	return new_bfqq;
+ }
+@@ -2585,6 +2594,10 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ 	struct bfq_queue *in_service_bfqq, *new_bfqq;
+ 
++	/* if a merge has already been setup, then proceed with that first */
++	if (bfqq->new_bfqq)
++		return bfqq->new_bfqq;
++
+ 	/*
+ 	 * Do not perform queue merging if the device is non
+ 	 * rotational and performs internal queueing. In fact, such a
+@@ -2639,9 +2652,6 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	if (bfq_too_late_for_merging(bfqq))
+ 		return NULL;
+ 
+-	if (bfqq->new_bfqq)
+-		return bfqq->new_bfqq;
+-
+ 	if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
+ 		return NULL;
+ 
+@@ -4799,7 +4809,7 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
+ 	struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
+ 	struct request *rq;
+ 	struct bfq_queue *in_serv_queue;
+-	bool waiting_rq, idle_timer_disabled;
++	bool waiting_rq, idle_timer_disabled = false;
+ 
+ 	spin_lock_irq(&bfqd->lock);
+ 
+@@ -4807,14 +4817,15 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
+ 	waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue);
+ 
+ 	rq = __bfq_dispatch_request(hctx);
+-
+-	idle_timer_disabled =
+-		waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
++	if (in_serv_queue == bfqd->in_service_queue) {
++		idle_timer_disabled =
++			waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
++	}
+ 
+ 	spin_unlock_irq(&bfqd->lock);
+-
+-	bfq_update_dispatch_stats(hctx->queue, rq, in_serv_queue,
+-				  idle_timer_disabled);
++	bfq_update_dispatch_stats(hctx->queue, rq,
++			idle_timer_disabled ? in_serv_queue : NULL,
++				idle_timer_disabled);
+ 
+ 	return rq;
+ }
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index 26f4bcc10de9d..006b1f0a59bc5 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -7,6 +7,7 @@
+ #include <linux/bio.h>
+ #include <linux/blkdev.h>
+ #include <linux/scatterlist.h>
++#include <linux/blk-cgroup.h>
+ 
+ #include <trace/events/block.h>
+ 
+@@ -554,6 +555,9 @@ static inline unsigned int blk_rq_get_max_segments(struct request *rq)
+ static inline int ll_new_hw_segment(struct request *req, struct bio *bio,
+ 		unsigned int nr_phys_segs)
+ {
++	if (!blk_cgroup_mergeable(req, bio))
++		goto no_merge;
++
+ 	if (blk_integrity_merge_bio(req->q, req, bio) == false)
+ 		goto no_merge;
+ 
+@@ -650,6 +654,9 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
+ 	if (total_phys_segments > blk_rq_get_max_segments(req))
+ 		return 0;
+ 
++	if (!blk_cgroup_mergeable(req, next->bio))
++		return 0;
++
+ 	if (blk_integrity_merge_rq(q, req, next) == false)
+ 		return 0;
+ 
+@@ -861,6 +868,10 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
+ 	if (rq->rq_disk != bio->bi_disk)
+ 		return false;
+ 
++	/* don't merge across cgroup boundaries */
++	if (!blk_cgroup_mergeable(rq, bio))
++		return false;
++
+ 	/* only merge integrity protected bio into ditto rq */
+ 	if (blk_integrity_merge_bio(rq->q, rq, bio) == false)
+ 		return false;
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 24c08963890e9..e0117f5f969de 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -194,11 +194,18 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
+ 
+ static int blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
+ {
++	unsigned long end = jiffies + HZ;
+ 	int ret;
+ 
+ 	do {
+ 		ret = __blk_mq_do_dispatch_sched(hctx);
+-	} while (ret == 1);
++		if (ret != 1)
++			break;
++		if (need_resched() || time_is_before_jiffies(end)) {
++			blk_mq_delay_run_hw_queue(hctx, 0);
++			break;
++		}
++	} while (1);
+ 
+ 	return ret;
+ }
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index b513f1683af06..8c5816364dd17 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -958,15 +958,17 @@ void blk_unregister_queue(struct gendisk *disk)
+ 	 */
+ 	if (queue_is_mq(q))
+ 		blk_mq_unregister_dev(disk_to_dev(disk), q);
+-
+-	kobject_uevent(&q->kobj, KOBJ_REMOVE);
+-	kobject_del(&q->kobj);
+ 	blk_trace_remove_sysfs(disk_to_dev(disk));
+ 
+ 	mutex_lock(&q->sysfs_lock);
+ 	if (q->elevator)
+ 		elv_unregister_queue(q);
+ 	mutex_unlock(&q->sysfs_lock);
++
++	/* Now that we've deleted all child objects, we can delete the queue. */
++	kobject_uevent(&q->kobj, KOBJ_REMOVE);
++	kobject_del(&q->kobj);
++
+ 	mutex_unlock(&q->sysfs_dir_lock);
+ 
+ 	kobject_put(&disk_to_dev(disk)->kobj);
+diff --git a/crypto/authenc.c b/crypto/authenc.c
+index 670bf1a01d00e..17f674a7cdff5 100644
+--- a/crypto/authenc.c
++++ b/crypto/authenc.c
+@@ -253,7 +253,7 @@ static int crypto_authenc_decrypt_tail(struct aead_request *req,
+ 		dst = scatterwalk_ffwd(areq_ctx->dst, req->dst, req->assoclen);
+ 
+ 	skcipher_request_set_tfm(skreq, ctx->enc);
+-	skcipher_request_set_callback(skreq, aead_request_flags(req),
++	skcipher_request_set_callback(skreq, flags,
+ 				      req->base.complete, req->base.data);
+ 	skcipher_request_set_crypt(skreq, src, dst,
+ 				   req->cryptlen - authsize, req->iv);
+diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
+index 8ac3e73e8ea65..9d804831c8b3f 100644
+--- a/crypto/rsa-pkcs1pad.c
++++ b/crypto/rsa-pkcs1pad.c
+@@ -476,6 +476,8 @@ static int pkcs1pad_verify_complete(struct akcipher_request *req, int err)
+ 	pos++;
+ 
+ 	if (digest_info) {
++		if (digest_info->size > dst_len - pos)
++			goto done;
+ 		if (crypto_memneq(out_buf + pos, digest_info->data,
+ 				  digest_info->size))
+ 			goto done;
+@@ -495,7 +497,7 @@ static int pkcs1pad_verify_complete(struct akcipher_request *req, int err)
+ 			   sg_nents_for_len(req->src,
+ 					    req->src_len + req->dst_len),
+ 			   req_ctx->out_buf + ctx->key_size,
+-			   req->dst_len, ctx->key_size);
++			   req->dst_len, req->src_len);
+ 	/* Do the actual verification step. */
+ 	if (memcmp(req_ctx->out_buf + ctx->key_size, out_buf + pos,
+ 		   req->dst_len) != 0)
+@@ -538,7 +540,7 @@ static int pkcs1pad_verify(struct akcipher_request *req)
+ 
+ 	if (WARN_ON(req->dst) ||
+ 	    WARN_ON(!req->dst_len) ||
+-	    !ctx->key_size || req->src_len < ctx->key_size)
++	    !ctx->key_size || req->src_len != ctx->key_size)
+ 		return -EINVAL;
+ 
+ 	req_ctx->out_buf = kmalloc(ctx->key_size + req->dst_len, GFP_KERNEL);
+@@ -621,6 +623,11 @@ static int pkcs1pad_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 
+ 	rsa_alg = crypto_spawn_akcipher_alg(&ctx->spawn);
+ 
++	if (strcmp(rsa_alg->base.cra_name, "rsa") != 0) {
++		err = -EINVAL;
++		goto err_free_inst;
++	}
++
+ 	err = -ENAMETOOLONG;
+ 	hash_name = crypto_attr_alg_name(tb[2]);
+ 	if (IS_ERR(hash_name)) {
+diff --git a/drivers/acpi/acpica/nswalk.c b/drivers/acpi/acpica/nswalk.c
+index b7f3e8603ad84..901fa5ca284d2 100644
+--- a/drivers/acpi/acpica/nswalk.c
++++ b/drivers/acpi/acpica/nswalk.c
+@@ -169,6 +169,9 @@ acpi_ns_walk_namespace(acpi_object_type type,
+ 
+ 	if (start_node == ACPI_ROOT_OBJECT) {
+ 		start_node = acpi_gbl_root_node;
++		if (!start_node) {
++			return_ACPI_STATUS(AE_NO_NAMESPACE);
++		}
+ 	}
+ 
+ 	/* Null child means "get first node" */
+diff --git a/drivers/acpi/apei/bert.c b/drivers/acpi/apei/bert.c
+index 19e50fcbf4d6f..598fd19b65fa4 100644
+--- a/drivers/acpi/apei/bert.c
++++ b/drivers/acpi/apei/bert.c
+@@ -29,6 +29,7 @@
+ 
+ #undef pr_fmt
+ #define pr_fmt(fmt) "BERT: " fmt
++#define ACPI_BERT_PRINT_MAX_LEN 1024
+ 
+ static int bert_disable;
+ 
+@@ -58,8 +59,11 @@ static void __init bert_print_all(struct acpi_bert_region *region,
+ 		}
+ 
+ 		pr_info_once("Error records from previous boot:\n");
+-
+-		cper_estatus_print(KERN_INFO HW_ERR, estatus);
++		if (region_len < ACPI_BERT_PRINT_MAX_LEN)
++			cper_estatus_print(KERN_INFO HW_ERR, estatus);
++		else
++			pr_info_once("Max print length exceeded, table data is available at:\n"
++				     "/sys/firmware/acpi/tables/data/BERT");
+ 
+ 		/*
+ 		 * Because the boot error source is "one-time polled" type,
+@@ -77,7 +81,7 @@ static int __init setup_bert_disable(char *str)
+ {
+ 	bert_disable = 1;
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("bert_disable", setup_bert_disable);
+ 
+diff --git a/drivers/acpi/apei/erst.c b/drivers/acpi/apei/erst.c
+index 2e0b0fcad9607..83efb52a3f31d 100644
+--- a/drivers/acpi/apei/erst.c
++++ b/drivers/acpi/apei/erst.c
+@@ -891,7 +891,7 @@ EXPORT_SYMBOL_GPL(erst_clear);
+ static int __init setup_erst_disable(char *str)
+ {
+ 	erst_disable = 1;
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("erst_disable", setup_erst_disable);
+diff --git a/drivers/acpi/apei/hest.c b/drivers/acpi/apei/hest.c
+index 6e980fe16772c..7bf48c2776fbf 100644
+--- a/drivers/acpi/apei/hest.c
++++ b/drivers/acpi/apei/hest.c
+@@ -219,7 +219,7 @@ err:
+ static int __init setup_hest_disable(char *str)
+ {
+ 	hest_disable = HEST_DISABLED;
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("hest_disable", setup_hest_disable);
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 0a2da06e9d8bf..2ac0773326e9a 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -719,6 +719,11 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
+ 	cpc_obj = &out_obj->package.elements[0];
+ 	if (cpc_obj->type == ACPI_TYPE_INTEGER)	{
+ 		num_ent = cpc_obj->integer.value;
++		if (num_ent <= 1) {
++			pr_debug("Unexpected _CPC NumEntries value (%d) for CPU:%d\n",
++				 num_ent, pr->id);
++			goto out_free;
++		}
+ 	} else {
+ 		pr_debug("Unexpected entry type(%d) for NumEntries\n",
+ 				cpc_obj->type);
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index 18bd428f11ac0..bd16340088389 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -685,7 +685,7 @@ int __acpi_node_get_property_reference(const struct fwnode_handle *fwnode,
+ 	 */
+ 	if (obj->type == ACPI_TYPE_LOCAL_REFERENCE) {
+ 		if (index)
+-			return -EINVAL;
++			return -ENOENT;
+ 
+ 		ret = acpi_bus_get_device(obj->reference.handle, &device);
+ 		if (ret)
+diff --git a/drivers/amba/bus.c b/drivers/amba/bus.c
+index 8f4ae6e967e39..47c72447ccd59 100644
+--- a/drivers/amba/bus.c
++++ b/drivers/amba/bus.c
+@@ -299,11 +299,10 @@ static int amba_remove(struct device *dev)
+ {
+ 	struct amba_device *pcdev = to_amba_device(dev);
+ 	struct amba_driver *drv = to_amba_driver(dev->driver);
+-	int ret = 0;
+ 
+ 	pm_runtime_get_sync(dev);
+ 	if (drv->remove)
+-		ret = drv->remove(pcdev);
++		drv->remove(pcdev);
+ 	pm_runtime_put_noidle(dev);
+ 
+ 	/* Undo the runtime PM settings in amba_probe() */
+@@ -314,7 +313,7 @@ static int amba_remove(struct device *dev)
+ 	amba_put_disable_pclk(pcdev);
+ 	dev_pm_domain_detach(dev, true);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static void amba_shutdown(struct device *dev)
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 64ff137408b8c..2728223c1fbc8 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -771,7 +771,7 @@ static int __init save_async_options(char *buf)
+ 		pr_warn("Too long list of driver names for 'driver_async_probe'!\n");
+ 
+ 	strlcpy(async_probe_drv_names, buf, ASYNC_DRV_NAMES_MAX_LEN);
+-	return 0;
++	return 1;
+ }
+ __setup("driver_async_probe=", save_async_options);
+ 
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 4167e2aef3975..1dbaaddf540e1 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -1994,7 +1994,9 @@ static bool pm_ops_is_empty(const struct dev_pm_ops *ops)
+ 
+ void device_pm_check_callbacks(struct device *dev)
+ {
+-	spin_lock_irq(&dev->power.lock);
++	unsigned long flags;
++
++	spin_lock_irqsave(&dev->power.lock, flags);
+ 	dev->power.no_pm_callbacks =
+ 		(!dev->bus || (pm_ops_is_empty(dev->bus->pm) &&
+ 		 !dev->bus->suspend && !dev->bus->resume)) &&
+@@ -2003,7 +2005,7 @@ void device_pm_check_callbacks(struct device *dev)
+ 		(!dev->pm_domain || pm_ops_is_empty(&dev->pm_domain->ops)) &&
+ 		(!dev->driver || (pm_ops_is_empty(dev->driver->pm) &&
+ 		 !dev->driver->suspend && !dev->driver->resume));
+-	spin_unlock_irq(&dev->power.lock);
++	spin_unlock_irqrestore(&dev->power.lock, flags);
+ }
+ 
+ bool dev_pm_skip_suspend(struct device *dev)
+diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
+index 330f851cb8f0b..69638146f949c 100644
+--- a/drivers/block/drbd/drbd_req.c
++++ b/drivers/block/drbd/drbd_req.c
+@@ -177,7 +177,8 @@ void start_new_tl_epoch(struct drbd_connection *connection)
+ void complete_master_bio(struct drbd_device *device,
+ 		struct bio_and_error *m)
+ {
+-	m->bio->bi_status = errno_to_blk_status(m->error);
++	if (unlikely(m->error))
++		m->bio->bi_status = errno_to_blk_status(m->error);
+ 	bio_endio(m->bio);
+ 	dec_ap_bio(device);
+ }
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index ee537a9f1d1a4..e4517d483bdc3 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -797,33 +797,33 @@ static ssize_t loop_attr_backing_file_show(struct loop_device *lo, char *buf)
+ 
+ static ssize_t loop_attr_offset_show(struct loop_device *lo, char *buf)
+ {
+-	return sprintf(buf, "%llu\n", (unsigned long long)lo->lo_offset);
++	return sysfs_emit(buf, "%llu\n", (unsigned long long)lo->lo_offset);
+ }
+ 
+ static ssize_t loop_attr_sizelimit_show(struct loop_device *lo, char *buf)
+ {
+-	return sprintf(buf, "%llu\n", (unsigned long long)lo->lo_sizelimit);
++	return sysfs_emit(buf, "%llu\n", (unsigned long long)lo->lo_sizelimit);
+ }
+ 
+ static ssize_t loop_attr_autoclear_show(struct loop_device *lo, char *buf)
+ {
+ 	int autoclear = (lo->lo_flags & LO_FLAGS_AUTOCLEAR);
+ 
+-	return sprintf(buf, "%s\n", autoclear ? "1" : "0");
++	return sysfs_emit(buf, "%s\n", autoclear ? "1" : "0");
+ }
+ 
+ static ssize_t loop_attr_partscan_show(struct loop_device *lo, char *buf)
+ {
+ 	int partscan = (lo->lo_flags & LO_FLAGS_PARTSCAN);
+ 
+-	return sprintf(buf, "%s\n", partscan ? "1" : "0");
++	return sysfs_emit(buf, "%s\n", partscan ? "1" : "0");
+ }
+ 
+ static ssize_t loop_attr_dio_show(struct loop_device *lo, char *buf)
+ {
+ 	int dio = (lo->lo_flags & LO_FLAGS_DIRECT_IO);
+ 
+-	return sprintf(buf, "%s\n", dio ? "1" : "0");
++	return sysfs_emit(buf, "%s\n", dio ? "1" : "0");
+ }
+ 
+ LOOP_ATTR_RO(backing_file);
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index a03390127741f..02e2056780ad2 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -825,9 +825,17 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_BLK_SIZE,
+ 				   struct virtio_blk_config, blk_size,
+ 				   &blk_size);
+-	if (!err)
++	if (!err) {
++		err = blk_validate_block_size(blk_size);
++		if (err) {
++			dev_err(&vdev->dev,
++				"virtio_blk: invalid block size: 0x%x\n",
++				blk_size);
++			goto out_free_tags;
++		}
++
+ 		blk_queue_logical_block_size(q, blk_size);
+-	else
++	} else
+ 		blk_size = queue_logical_block_size(q);
+ 
+ 	/* Use topology information if available */
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index 74856a5862162..c41560be39fb6 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -981,6 +981,8 @@ static int btmtksdio_probe(struct sdio_func *func,
+ 	hdev->manufacturer = 70;
+ 	set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+ 
++	sdio_set_drvdata(func, bdev);
++
+ 	err = hci_register_dev(hdev);
+ 	if (err < 0) {
+ 		dev_err(&func->dev, "Can't register HCI device\n");
+@@ -988,8 +990,6 @@ static int btmtksdio_probe(struct sdio_func *func,
+ 		return err;
+ 	}
+ 
+-	sdio_set_drvdata(func, bdev);
+-
+ 	/* pm_runtime_enable would be done after the firmware is being
+ 	 * downloaded because the core layer probably already enables
+ 	 * runtime PM for this func such as the case host->caps &
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index 9e03402ef1b37..e9a44ab3812df 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -305,6 +305,8 @@ int hci_uart_register_device(struct hci_uart *hu,
+ 	if (err)
+ 		return err;
+ 
++	percpu_init_rwsem(&hu->proto_lock);
++
+ 	err = p->open(hu);
+ 	if (err)
+ 		goto err_open;
+@@ -327,7 +329,6 @@ int hci_uart_register_device(struct hci_uart *hu,
+ 
+ 	INIT_WORK(&hu->init_ready, hci_uart_init_work);
+ 	INIT_WORK(&hu->write_work, hci_uart_write_work);
+-	percpu_init_rwsem(&hu->proto_lock);
+ 
+ 	/* Only when vendor specific setup callback is provided, consider
+ 	 * the manufacturer information valid. This avoids filling in the
+diff --git a/drivers/bus/mips_cdmm.c b/drivers/bus/mips_cdmm.c
+index 626dedd110cbc..fca0d0669aa97 100644
+--- a/drivers/bus/mips_cdmm.c
++++ b/drivers/bus/mips_cdmm.c
+@@ -351,6 +351,7 @@ phys_addr_t __weak mips_cdmm_phys_base(void)
+ 	np = of_find_compatible_node(NULL, NULL, "mti,mips-cdmm");
+ 	if (np) {
+ 		err = of_address_to_resource(np, 0, &res);
++		of_node_put(np);
+ 		if (!err)
+ 			return res.start;
+ 	}
+diff --git a/drivers/char/hw_random/Kconfig b/drivers/char/hw_random/Kconfig
+index 5952210526aaa..a7d9e4600d40e 100644
+--- a/drivers/char/hw_random/Kconfig
++++ b/drivers/char/hw_random/Kconfig
+@@ -427,7 +427,7 @@ config HW_RANDOM_MESON
+ 
+ config HW_RANDOM_CAVIUM
+ 	tristate "Cavium ThunderX Random Number Generator support"
+-	depends on HW_RANDOM && PCI && (ARM64 || (COMPILE_TEST && 64BIT))
++	depends on HW_RANDOM && PCI && ARCH_THUNDER
+ 	default HW_RANDOM
+ 	help
+ 	  This driver provides kernel-side support for the Random Number
+diff --git a/drivers/char/hw_random/atmel-rng.c b/drivers/char/hw_random/atmel-rng.c
+index ecb71c4317a50..8cf0ef501341e 100644
+--- a/drivers/char/hw_random/atmel-rng.c
++++ b/drivers/char/hw_random/atmel-rng.c
+@@ -114,6 +114,7 @@ static int atmel_trng_probe(struct platform_device *pdev)
+ 
+ err_register:
+ 	clk_disable_unprepare(trng->clk);
++	atmel_trng_disable(trng);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/char/hw_random/cavium-rng-vf.c b/drivers/char/hw_random/cavium-rng-vf.c
+index 3de4a6a443ef9..6f66919652bf5 100644
+--- a/drivers/char/hw_random/cavium-rng-vf.c
++++ b/drivers/char/hw_random/cavium-rng-vf.c
+@@ -1,10 +1,7 @@
++// SPDX-License-Identifier: GPL-2.0
+ /*
+- * Hardware Random Number Generator support for Cavium, Inc.
+- * Thunder processor family.
+- *
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License.  See the file "COPYING" in the main directory of this archive
+- * for more details.
++ * Hardware Random Number Generator support.
++ * Cavium Thunder, Marvell OcteonTx/Tx2 processor families.
+  *
+  * Copyright (C) 2016 Cavium, Inc.
+  */
+@@ -15,16 +12,146 @@
+ #include <linux/pci.h>
+ #include <linux/pci_ids.h>
+ 
++#include <asm/arch_timer.h>
++
++/* PCI device IDs */
++#define	PCI_DEVID_CAVIUM_RNG_PF		0xA018
++#define	PCI_DEVID_CAVIUM_RNG_VF		0xA033
++
++#define HEALTH_STATUS_REG		0x38
++
++/* RST device info */
++#define PCI_DEVICE_ID_RST_OTX2		0xA085
++#define RST_BOOT_REG			0x1600ULL
++#define CLOCK_BASE_RATE			50000000ULL
++#define MSEC_TO_NSEC(x)			(x * 1000000)
++
+ struct cavium_rng {
+ 	struct hwrng ops;
+ 	void __iomem *result;
++	void __iomem *pf_regbase;
++	struct pci_dev *pdev;
++	u64  clock_rate;
++	u64  prev_error;
++	u64  prev_time;
+ };
+ 
++static inline bool is_octeontx(struct pci_dev *pdev)
++{
++	if (midr_is_cpu_model_range(read_cpuid_id(), MIDR_THUNDERX_83XX,
++				    MIDR_CPU_VAR_REV(0, 0),
++				    MIDR_CPU_VAR_REV(3, 0)) ||
++	    midr_is_cpu_model_range(read_cpuid_id(), MIDR_THUNDERX_81XX,
++				    MIDR_CPU_VAR_REV(0, 0),
++				    MIDR_CPU_VAR_REV(3, 0)) ||
++	    midr_is_cpu_model_range(read_cpuid_id(), MIDR_THUNDERX,
++				    MIDR_CPU_VAR_REV(0, 0),
++				    MIDR_CPU_VAR_REV(3, 0)))
++		return true;
++
++	return false;
++}
++
++static u64 rng_get_coprocessor_clkrate(void)
++{
++	u64 ret = CLOCK_BASE_RATE * 16; /* Assume 800Mhz as default */
++	struct pci_dev *pdev;
++	void __iomem *base;
++
++	pdev = pci_get_device(PCI_VENDOR_ID_CAVIUM,
++			      PCI_DEVICE_ID_RST_OTX2, NULL);
++	if (!pdev)
++		goto error;
++
++	base = pci_ioremap_bar(pdev, 0);
++	if (!base)
++		goto error_put_pdev;
++
++	/* RST: PNR_MUL * 50Mhz gives clockrate */
++	ret = CLOCK_BASE_RATE * ((readq(base + RST_BOOT_REG) >> 33) & 0x3F);
++
++	iounmap(base);
++
++error_put_pdev:
++	pci_dev_put(pdev);
++
++error:
++	return ret;
++}
++
++static int check_rng_health(struct cavium_rng *rng)
++{
++	u64 cur_err, cur_time;
++	u64 status, cycles;
++	u64 time_elapsed;
++
++
++	/* Skip checking health for OcteonTx */
++	if (!rng->pf_regbase)
++		return 0;
++
++	status = readq(rng->pf_regbase + HEALTH_STATUS_REG);
++	if (status & BIT_ULL(0)) {
++		dev_err(&rng->pdev->dev, "HWRNG: Startup health test failed\n");
++		return -EIO;
++	}
++
++	cycles = status >> 1;
++	if (!cycles)
++		return 0;
++
++	cur_time = arch_timer_read_counter();
++
++	/* RNM_HEALTH_STATUS[CYCLES_SINCE_HEALTH_FAILURE]
++	 * Number of coprocessor cycles times 2 since the last failure.
++	 * This field doesn't get cleared/updated until another failure.
++	 */
++	cycles = cycles / 2;
++	cur_err = (cycles * 1000000000) / rng->clock_rate; /* In nanosec */
++
++	/* Ignore errors that happenned a long time ago, these
++	 * are most likely false positive errors.
++	 */
++	if (cur_err > MSEC_TO_NSEC(10)) {
++		rng->prev_error = 0;
++		rng->prev_time = 0;
++		return 0;
++	}
++
++	if (rng->prev_error) {
++		/* Calculate time elapsed since last error
++		 * '1' tick of CNTVCT is 10ns, since it runs at 100Mhz.
++		 */
++		time_elapsed = (cur_time - rng->prev_time) * 10;
++		time_elapsed += rng->prev_error;
++
++		/* Check if current error is a new one or the old one itself.
++		 * If error is a new one then consider there is a persistent
++		 * issue with entropy, declare hardware failure.
++		 */
++		if (cur_err < time_elapsed) {
++			dev_err(&rng->pdev->dev, "HWRNG failure detected\n");
++			rng->prev_error = cur_err;
++			rng->prev_time = cur_time;
++			return -EIO;
++		}
++	}
++
++	rng->prev_error = cur_err;
++	rng->prev_time = cur_time;
++	return 0;
++}
++
+ /* Read data from the RNG unit */
+ static int cavium_rng_read(struct hwrng *rng, void *dat, size_t max, bool wait)
+ {
+ 	struct cavium_rng *p = container_of(rng, struct cavium_rng, ops);
+ 	unsigned int size = max;
++	int err = 0;
++
++	err = check_rng_health(p);
++	if (err)
++		return err;
+ 
+ 	while (size >= 8) {
+ 		*((u64 *)dat) = readq(p->result);
+@@ -39,6 +166,39 @@ static int cavium_rng_read(struct hwrng *rng, void *dat, size_t max, bool wait)
+ 	return max;
+ }
+ 
++static int cavium_map_pf_regs(struct cavium_rng *rng)
++{
++	struct pci_dev *pdev;
++
++	/* Health status is not supported on 83xx, skip mapping PF CSRs */
++	if (is_octeontx(rng->pdev)) {
++		rng->pf_regbase = NULL;
++		return 0;
++	}
++
++	pdev = pci_get_device(PCI_VENDOR_ID_CAVIUM,
++			      PCI_DEVID_CAVIUM_RNG_PF, NULL);
++	if (!pdev) {
++		dev_err(&pdev->dev, "Cannot find RNG PF device\n");
++		return -EIO;
++	}
++
++	rng->pf_regbase = ioremap(pci_resource_start(pdev, 0),
++				  pci_resource_len(pdev, 0));
++	if (!rng->pf_regbase) {
++		dev_err(&pdev->dev, "Failed to map PF CSR region\n");
++		pci_dev_put(pdev);
++		return -ENOMEM;
++	}
++
++	pci_dev_put(pdev);
++
++	/* Get co-processor clock rate */
++	rng->clock_rate = rng_get_coprocessor_clkrate();
++
++	return 0;
++}
++
+ /* Map Cavium RNG to an HWRNG object */
+ static int cavium_rng_probe_vf(struct	pci_dev		*pdev,
+ 			 const struct	pci_device_id	*id)
+@@ -50,6 +210,8 @@ static int cavium_rng_probe_vf(struct	pci_dev		*pdev,
+ 	if (!rng)
+ 		return -ENOMEM;
+ 
++	rng->pdev = pdev;
++
+ 	/* Map the RNG result */
+ 	rng->result = pcim_iomap(pdev, 0, 0);
+ 	if (!rng->result) {
+@@ -67,6 +229,11 @@ static int cavium_rng_probe_vf(struct	pci_dev		*pdev,
+ 
+ 	pci_set_drvdata(pdev, rng);
+ 
++	/* Health status is available only at PF, hence map PF registers. */
++	ret = cavium_map_pf_regs(rng);
++	if (ret)
++		return ret;
++
+ 	ret = devm_hwrng_register(&pdev->dev, &rng->ops);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Error registering device as HWRNG.\n");
+@@ -76,10 +243,18 @@ static int cavium_rng_probe_vf(struct	pci_dev		*pdev,
+ 	return 0;
+ }
+ 
++/* Remove the VF */
++static void cavium_rng_remove_vf(struct pci_dev *pdev)
++{
++	struct cavium_rng *rng;
++
++	rng = pci_get_drvdata(pdev);
++	iounmap(rng->pf_regbase);
++}
+ 
+ static const struct pci_device_id cavium_rng_vf_id_table[] = {
+-	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, 0xa033), 0, 0, 0},
+-	{0,},
++	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CAVIUM_RNG_VF) },
++	{ 0, }
+ };
+ MODULE_DEVICE_TABLE(pci, cavium_rng_vf_id_table);
+ 
+@@ -87,8 +262,9 @@ static struct pci_driver cavium_rng_vf_driver = {
+ 	.name		= "cavium_rng_vf",
+ 	.id_table	= cavium_rng_vf_id_table,
+ 	.probe		= cavium_rng_probe_vf,
++	.remove		= cavium_rng_remove_vf,
+ };
+ module_pci_driver(cavium_rng_vf_driver);
+ 
+ MODULE_AUTHOR("Omer Khaliq <okhaliq@caviumnetworks.com>");
+-MODULE_LICENSE("GPL");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/char/hw_random/cavium-rng.c b/drivers/char/hw_random/cavium-rng.c
+index 63d6e68c24d2f..b96579222408b 100644
+--- a/drivers/char/hw_random/cavium-rng.c
++++ b/drivers/char/hw_random/cavium-rng.c
+@@ -1,10 +1,7 @@
++// SPDX-License-Identifier: GPL-2.0
+ /*
+- * Hardware Random Number Generator support for Cavium Inc.
+- * Thunder processor family.
+- *
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License.  See the file "COPYING" in the main directory of this archive
+- * for more details.
++ * Hardware Random Number Generator support.
++ * Cavium Thunder, Marvell OcteonTx/Tx2 processor families.
+  *
+  * Copyright (C) 2016 Cavium, Inc.
+  */
+@@ -91,4 +88,4 @@ static struct pci_driver cavium_rng_pf_driver = {
+ 
+ module_pci_driver(cavium_rng_pf_driver);
+ MODULE_AUTHOR("Omer Khaliq <okhaliq@caviumnetworks.com>");
+-MODULE_LICENSE("GPL");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/char/hw_random/nomadik-rng.c b/drivers/char/hw_random/nomadik-rng.c
+index b0ded41eb865f..e8f9621e79541 100644
+--- a/drivers/char/hw_random/nomadik-rng.c
++++ b/drivers/char/hw_random/nomadik-rng.c
+@@ -65,15 +65,14 @@ static int nmk_rng_probe(struct amba_device *dev, const struct amba_id *id)
+ out_release:
+ 	amba_release_regions(dev);
+ out_clk:
+-	clk_disable(rng_clk);
++	clk_disable_unprepare(rng_clk);
+ 	return ret;
+ }
+ 
+-static int nmk_rng_remove(struct amba_device *dev)
++static void nmk_rng_remove(struct amba_device *dev)
+ {
+ 	amba_release_regions(dev);
+-	clk_disable(rng_clk);
+-	return 0;
++	clk_disable_unprepare(rng_clk);
+ }
+ 
+ static const struct amba_id nmk_rng_ids[] = {
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index ddaeceb7e1091..ed600473ad7e3 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -274,14 +274,6 @@ static void tpm_dev_release(struct device *dev)
+ 	kfree(chip);
+ }
+ 
+-static void tpm_devs_release(struct device *dev)
+-{
+-	struct tpm_chip *chip = container_of(dev, struct tpm_chip, devs);
+-
+-	/* release the master device reference */
+-	put_device(&chip->dev);
+-}
+-
+ /**
+  * tpm_class_shutdown() - prepare the TPM device for loss of power.
+  * @dev: device to which the chip is associated.
+@@ -344,7 +336,6 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ 	chip->dev_num = rc;
+ 
+ 	device_initialize(&chip->dev);
+-	device_initialize(&chip->devs);
+ 
+ 	chip->dev.class = tpm_class;
+ 	chip->dev.class->shutdown_pre = tpm_class_shutdown;
+@@ -352,29 +343,12 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ 	chip->dev.parent = pdev;
+ 	chip->dev.groups = chip->groups;
+ 
+-	chip->devs.parent = pdev;
+-	chip->devs.class = tpmrm_class;
+-	chip->devs.release = tpm_devs_release;
+-	/* get extra reference on main device to hold on
+-	 * behalf of devs.  This holds the chip structure
+-	 * while cdevs is in use.  The corresponding put
+-	 * is in the tpm_devs_release (TPM2 only)
+-	 */
+-	if (chip->flags & TPM_CHIP_FLAG_TPM2)
+-		get_device(&chip->dev);
+-
+ 	if (chip->dev_num == 0)
+ 		chip->dev.devt = MKDEV(MISC_MAJOR, TPM_MINOR);
+ 	else
+ 		chip->dev.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num);
+ 
+-	chip->devs.devt =
+-		MKDEV(MAJOR(tpm_devt), chip->dev_num + TPM_NUM_DEVICES);
+-
+ 	rc = dev_set_name(&chip->dev, "tpm%d", chip->dev_num);
+-	if (rc)
+-		goto out;
+-	rc = dev_set_name(&chip->devs, "tpmrm%d", chip->dev_num);
+ 	if (rc)
+ 		goto out;
+ 
+@@ -382,9 +356,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ 		chip->flags |= TPM_CHIP_FLAG_VIRTUAL;
+ 
+ 	cdev_init(&chip->cdev, &tpm_fops);
+-	cdev_init(&chip->cdevs, &tpmrm_fops);
+ 	chip->cdev.owner = THIS_MODULE;
+-	chip->cdevs.owner = THIS_MODULE;
+ 
+ 	rc = tpm2_init_space(&chip->work_space, TPM2_SPACE_BUFFER_SIZE);
+ 	if (rc) {
+@@ -396,7 +368,6 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ 	return chip;
+ 
+ out:
+-	put_device(&chip->devs);
+ 	put_device(&chip->dev);
+ 	return ERR_PTR(rc);
+ }
+@@ -445,14 +416,9 @@ static int tpm_add_char_device(struct tpm_chip *chip)
+ 	}
+ 
+ 	if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+-		rc = cdev_device_add(&chip->cdevs, &chip->devs);
+-		if (rc) {
+-			dev_err(&chip->devs,
+-				"unable to cdev_device_add() %s, major %d, minor %d, err=%d\n",
+-				dev_name(&chip->devs), MAJOR(chip->devs.devt),
+-				MINOR(chip->devs.devt), rc);
+-			return rc;
+-		}
++		rc = tpm_devs_add(chip);
++		if (rc)
++			goto err_del_cdev;
+ 	}
+ 
+ 	/* Make the chip available. */
+@@ -460,6 +426,10 @@ static int tpm_add_char_device(struct tpm_chip *chip)
+ 	idr_replace(&dev_nums_idr, chip, chip->dev_num);
+ 	mutex_unlock(&idr_lock);
+ 
++	return 0;
++
++err_del_cdev:
++	cdev_device_del(&chip->cdev, &chip->dev);
+ 	return rc;
+ }
+ 
+@@ -641,7 +611,7 @@ void tpm_chip_unregister(struct tpm_chip *chip)
+ 		hwrng_unregister(&chip->hwrng);
+ 	tpm_bios_log_teardown(chip);
+ 	if (chip->flags & TPM_CHIP_FLAG_TPM2)
+-		cdev_device_del(&chip->cdevs, &chip->devs);
++		tpm_devs_remove(chip);
+ 	tpm_del_char_device(chip);
+ }
+ EXPORT_SYMBOL_GPL(tpm_chip_unregister);
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 283f78211c3a7..2163c6ee0d364 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -234,6 +234,8 @@ int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u8 *cmd,
+ 		       size_t cmdsiz);
+ int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space, void *buf,
+ 		      size_t *bufsiz);
++int tpm_devs_add(struct tpm_chip *chip);
++void tpm_devs_remove(struct tpm_chip *chip);
+ 
+ void tpm_bios_log_setup(struct tpm_chip *chip);
+ void tpm_bios_log_teardown(struct tpm_chip *chip);
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index d2225020e4d2c..ffb35f0154c16 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -574,3 +574,68 @@ out:
+ 	dev_err(&chip->dev, "%s: error %d\n", __func__, rc);
+ 	return rc;
+ }
++
++/*
++ * Put the reference to the main device.
++ */
++static void tpm_devs_release(struct device *dev)
++{
++	struct tpm_chip *chip = container_of(dev, struct tpm_chip, devs);
++
++	/* release the master device reference */
++	put_device(&chip->dev);
++}
++
++/*
++ * Remove the device file for exposed TPM spaces and release the device
++ * reference. This may also release the reference to the master device.
++ */
++void tpm_devs_remove(struct tpm_chip *chip)
++{
++	cdev_device_del(&chip->cdevs, &chip->devs);
++	put_device(&chip->devs);
++}
++
++/*
++ * Add a device file to expose TPM spaces. Also take a reference to the
++ * main device.
++ */
++int tpm_devs_add(struct tpm_chip *chip)
++{
++	int rc;
++
++	device_initialize(&chip->devs);
++	chip->devs.parent = chip->dev.parent;
++	chip->devs.class = tpmrm_class;
++
++	/*
++	 * Get extra reference on main device to hold on behalf of devs.
++	 * This holds the chip structure while cdevs is in use. The
++	 * corresponding put is in the tpm_devs_release.
++	 */
++	get_device(&chip->dev);
++	chip->devs.release = tpm_devs_release;
++	chip->devs.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num + TPM_NUM_DEVICES);
++	cdev_init(&chip->cdevs, &tpmrm_fops);
++	chip->cdevs.owner = THIS_MODULE;
++
++	rc = dev_set_name(&chip->devs, "tpmrm%d", chip->dev_num);
++	if (rc)
++		goto err_put_devs;
++
++	rc = cdev_device_add(&chip->cdevs, &chip->devs);
++	if (rc) {
++		dev_err(&chip->devs,
++			"unable to cdev_device_add() %s, major %d, minor %d, err=%d\n",
++			dev_name(&chip->devs), MAJOR(chip->devs.devt),
++			MINOR(chip->devs.devt), rc);
++		goto err_put_devs;
++	}
++
++	return 0;
++
++err_put_devs:
++	put_device(&chip->devs);
++
++	return rc;
++}
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index 673522874cec4..3dd4deb60adbf 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -1959,6 +1959,13 @@ static void virtcons_remove(struct virtio_device *vdev)
+ 	list_del(&portdev->list);
+ 	spin_unlock_irq(&pdrvdata_lock);
+ 
++	/* Device is going away, exit any polling for buffers */
++	virtio_break_device(vdev);
++	if (use_multiport(portdev))
++		flush_work(&portdev->control_work);
++	else
++		flush_work(&portdev->config_work);
++
+ 	/* Disable interrupts for vqs */
+ 	vdev->config->reset(vdev);
+ 	/* Finish up work that's lined up */
+diff --git a/drivers/clk/actions/owl-s700.c b/drivers/clk/actions/owl-s700.c
+index a2f34d13fb543..6ea7da1d6d755 100644
+--- a/drivers/clk/actions/owl-s700.c
++++ b/drivers/clk/actions/owl-s700.c
+@@ -162,6 +162,7 @@ static struct clk_div_table hdmia_div_table[] = {
+ 
+ static struct clk_div_table rmii_div_table[] = {
+ 	{0, 4},   {1, 10},
++	{0, 0}
+ };
+ 
+ /* divider clocks */
+diff --git a/drivers/clk/actions/owl-s900.c b/drivers/clk/actions/owl-s900.c
+index 790890978424a..5144ada2c7e1a 100644
+--- a/drivers/clk/actions/owl-s900.c
++++ b/drivers/clk/actions/owl-s900.c
+@@ -140,7 +140,7 @@ static struct clk_div_table rmii_ref_div_table[] = {
+ 
+ static struct clk_div_table usb3_mac_div_table[] = {
+ 	{ 1, 2 }, { 2, 3 }, { 3, 4 },
+-	{ 0, 8 },
++	{ 0, 0 }
+ };
+ 
+ static struct clk_div_table i2s_div_table[] = {
+diff --git a/drivers/clk/at91/sama7g5.c b/drivers/clk/at91/sama7g5.c
+index a092a940baa40..9d25b23fb99d7 100644
+--- a/drivers/clk/at91/sama7g5.c
++++ b/drivers/clk/at91/sama7g5.c
+@@ -606,16 +606,16 @@ static const struct {
+ 	{ .n  = "pdmc0_gclk",
+ 	  .id = 68,
+ 	  .r = { .max = 50000000  },
+-	  .pp = { "syspll_divpmcck", "baudpll_divpmcck", },
+-	  .pp_mux_table = { 5, 8, },
++	  .pp = { "syspll_divpmcck", "audiopll_divpmcck", },
++	  .pp_mux_table = { 5, 9, },
+ 	  .pp_count = 2,
+ 	  .pp_chg_id = INT_MIN, },
+ 
+ 	{ .n  = "pdmc1_gclk",
+ 	  .id = 69,
+ 	  .r = { .max = 50000000, },
+-	  .pp = { "syspll_divpmcck", "baudpll_divpmcck", },
+-	  .pp_mux_table = { 5, 8, },
++	  .pp = { "syspll_divpmcck", "audiopll_divpmcck", },
++	  .pp_mux_table = { 5, 9, },
+ 	  .pp_count = 2,
+ 	  .pp_chg_id = INT_MIN, },
+ 
+diff --git a/drivers/clk/clk-clps711x.c b/drivers/clk/clk-clps711x.c
+index a2c6486ef1708..f8417ee2961aa 100644
+--- a/drivers/clk/clk-clps711x.c
++++ b/drivers/clk/clk-clps711x.c
+@@ -28,11 +28,13 @@ static const struct clk_div_table spi_div_table[] = {
+ 	{ .val = 1, .div = 8, },
+ 	{ .val = 2, .div = 2, },
+ 	{ .val = 3, .div = 1, },
++	{ /* sentinel */ }
+ };
+ 
+ static const struct clk_div_table timer_div_table[] = {
+ 	{ .val = 0, .div = 256, },
+ 	{ .val = 1, .div = 1, },
++	{ /* sentinel */ }
+ };
+ 
+ struct clps711x_clk {
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index b8a0e3d23698c..92fc084203b75 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3384,6 +3384,19 @@ static void clk_core_reparent_orphans_nolock(void)
+ 			__clk_set_parent_after(orphan, parent, NULL);
+ 			__clk_recalc_accuracies(orphan);
+ 			__clk_recalc_rates(orphan, 0);
++
++			/*
++			 * __clk_init_parent() will set the initial req_rate to
++			 * 0 if the clock doesn't have clk_ops::recalc_rate and
++			 * is an orphan when it's registered.
++			 *
++			 * 'req_rate' is used by clk_set_rate_range() and
++			 * clk_put() to trigger a clk_set_rate() call whenever
++			 * the boundaries are modified. Let's make sure
++			 * 'req_rate' is set to something non-zero so that
++			 * clk_set_rate_range() doesn't drop the frequency.
++			 */
++			orphan->req_rate = orphan->rate;
+ 		}
+ 	}
+ }
+diff --git a/drivers/clk/imx/clk-imx7d.c b/drivers/clk/imx/clk-imx7d.c
+index c4e0f1c07192f..3f6fd7ef2a68f 100644
+--- a/drivers/clk/imx/clk-imx7d.c
++++ b/drivers/clk/imx/clk-imx7d.c
+@@ -849,7 +849,6 @@ static void __init imx7d_clocks_init(struct device_node *ccm_node)
+ 	hws[IMX7D_WDOG4_ROOT_CLK] = imx_clk_hw_gate4("wdog4_root_clk", "wdog_post_div", base + 0x49f0, 0);
+ 	hws[IMX7D_KPP_ROOT_CLK] = imx_clk_hw_gate4("kpp_root_clk", "ipg_root_clk", base + 0x4aa0, 0);
+ 	hws[IMX7D_CSI_MCLK_ROOT_CLK] = imx_clk_hw_gate4("csi_mclk_root_clk", "csi_mclk_post_div", base + 0x4490, 0);
+-	hws[IMX7D_AUDIO_MCLK_ROOT_CLK] = imx_clk_hw_gate4("audio_mclk_root_clk", "audio_mclk_post_div", base + 0x4790, 0);
+ 	hws[IMX7D_WRCLK_ROOT_CLK] = imx_clk_hw_gate4("wrclk_root_clk", "wrclk_post_div", base + 0x47a0, 0);
+ 	hws[IMX7D_USB_CTRL_CLK] = imx_clk_hw_gate4("usb_ctrl_clk", "ahb_root_clk", base + 0x4680, 0);
+ 	hws[IMX7D_USB_PHY1_CLK] = imx_clk_hw_gate4("usb_phy1_clk", "pll_usb1_main_clk", base + 0x46a0, 0);
+diff --git a/drivers/clk/loongson1/clk-loongson1c.c b/drivers/clk/loongson1/clk-loongson1c.c
+index 703f87622cf5f..1ebf740380efb 100644
+--- a/drivers/clk/loongson1/clk-loongson1c.c
++++ b/drivers/clk/loongson1/clk-loongson1c.c
+@@ -37,6 +37,7 @@ static const struct clk_div_table ahb_div_table[] = {
+ 	[1] = { .val = 1, .div = 4 },
+ 	[2] = { .val = 2, .div = 3 },
+ 	[3] = { .val = 3, .div = 3 },
++	[4] = { /* sentinel */ }
+ };
+ 
+ void __init ls1x_clk_init(void)
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index 59a5a0f261f33..71a0d30cf44df 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -264,7 +264,7 @@ static int clk_rcg2_determine_floor_rate(struct clk_hw *hw,
+ 
+ static int __clk_rcg2_configure(struct clk_rcg2 *rcg, const struct freq_tbl *f)
+ {
+-	u32 cfg, mask;
++	u32 cfg, mask, d_val, not2d_val, n_minus_m;
+ 	struct clk_hw *hw = &rcg->clkr.hw;
+ 	int ret, index = qcom_find_src_index(hw, rcg->parent_map, f->src);
+ 
+@@ -283,8 +283,17 @@ static int __clk_rcg2_configure(struct clk_rcg2 *rcg, const struct freq_tbl *f)
+ 		if (ret)
+ 			return ret;
+ 
++		/* Calculate 2d value */
++		d_val = f->n;
++
++		n_minus_m = f->n - f->m;
++		n_minus_m *= 2;
++
++		d_val = clamp_t(u32, d_val, f->m, n_minus_m);
++		not2d_val = ~d_val & mask;
++
+ 		ret = regmap_update_bits(rcg->clkr.regmap,
+-				RCG_D_OFFSET(rcg), mask, ~f->n);
++				RCG_D_OFFSET(rcg), mask, not2d_val);
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -639,6 +648,7 @@ static const struct frac_entry frac_table_pixel[] = {
+ 	{ 2, 9 },
+ 	{ 4, 9 },
+ 	{ 1, 1 },
++	{ 2, 3 },
+ 	{ }
+ };
+ 
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index 108fe27bee10f..541016db3c4bb 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -60,11 +60,6 @@ static const struct parent_map gcc_xo_gpll0_gpll0_out_main_div2_map[] = {
+ 	{ P_GPLL0_DIV2, 4 },
+ };
+ 
+-static const char * const gcc_xo_gpll0[] = {
+-	"xo",
+-	"gpll0",
+-};
+-
+ static const struct parent_map gcc_xo_gpll0_map[] = {
+ 	{ P_XO, 0 },
+ 	{ P_GPLL0, 1 },
+@@ -956,6 +951,11 @@ static struct clk_rcg2 blsp1_uart6_apps_clk_src = {
+ 	},
+ };
+ 
++static const struct clk_parent_data gcc_xo_gpll0[] = {
++	{ .fw_name = "xo" },
++	{ .hw = &gpll0.clkr.hw },
++};
++
+ static const struct freq_tbl ftbl_pcie_axi_clk_src[] = {
+ 	F(19200000, P_XO, 1, 0, 0),
+ 	F(200000000, P_GPLL0, 4, 0, 0),
+@@ -969,7 +969,7 @@ static struct clk_rcg2 pcie0_axi_clk_src = {
+ 	.parent_map = gcc_xo_gpll0_map,
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "pcie0_axi_clk_src",
+-		.parent_names = gcc_xo_gpll0,
++		.parent_data = gcc_xo_gpll0,
+ 		.num_parents = 2,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -1016,7 +1016,7 @@ static struct clk_rcg2 pcie1_axi_clk_src = {
+ 	.parent_map = gcc_xo_gpll0_map,
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "pcie1_axi_clk_src",
+-		.parent_names = gcc_xo_gpll0,
++		.parent_data = gcc_xo_gpll0,
+ 		.num_parents = 2,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -1074,7 +1074,7 @@ static struct clk_rcg2 sdcc1_apps_clk_src = {
+ 		.name = "sdcc1_apps_clk_src",
+ 		.parent_names = gcc_xo_gpll0_gpll2_gpll0_out_main_div2,
+ 		.num_parents = 4,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+ 
+@@ -1330,7 +1330,7 @@ static struct clk_rcg2 nss_ce_clk_src = {
+ 	.parent_map = gcc_xo_gpll0_map,
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "nss_ce_clk_src",
+-		.parent_names = gcc_xo_gpll0,
++		.parent_data = gcc_xo_gpll0,
+ 		.num_parents = 2,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -4329,8 +4329,7 @@ static struct clk_rcg2 pcie0_rchng_clk_src = {
+ 	.parent_map = gcc_xo_gpll0_map,
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "pcie0_rchng_clk_src",
+-		.parent_hws = (const struct clk_hw *[]) {
+-				&gpll0.clkr.hw },
++		.parent_data = gcc_xo_gpll0,
+ 		.num_parents = 2,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+diff --git a/drivers/clk/qcom/gcc-msm8994.c b/drivers/clk/qcom/gcc-msm8994.c
+index 144d2ba7a9bef..463a444c8a7e4 100644
+--- a/drivers/clk/qcom/gcc-msm8994.c
++++ b/drivers/clk/qcom/gcc-msm8994.c
+@@ -108,6 +108,7 @@ static struct clk_alpha_pll gpll4_early = {
+ 
+ static struct clk_alpha_pll_postdiv gpll4 = {
+ 	.offset = 0x1dc0,
++	.width = 4,
+ 	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ 	.clkr.hw.init = &(struct clk_init_data)
+ 	{
+diff --git a/drivers/clk/tegra/clk-tegra124-emc.c b/drivers/clk/tegra/clk-tegra124-emc.c
+index 745f9faa98d8e..733a962ff521a 100644
+--- a/drivers/clk/tegra/clk-tegra124-emc.c
++++ b/drivers/clk/tegra/clk-tegra124-emc.c
+@@ -191,6 +191,7 @@ static struct tegra_emc *emc_ensure_emc_driver(struct tegra_clk_emc *tegra)
+ 
+ 	tegra->emc = platform_get_drvdata(pdev);
+ 	if (!tegra->emc) {
++		put_device(&pdev->dev);
+ 		pr_err("%s: cannot find EMC driver\n", __func__);
+ 		return NULL;
+ 	}
+diff --git a/drivers/clk/uniphier/clk-uniphier-fixed-rate.c b/drivers/clk/uniphier/clk-uniphier-fixed-rate.c
+index 5319cd3804801..3bc55ab75314b 100644
+--- a/drivers/clk/uniphier/clk-uniphier-fixed-rate.c
++++ b/drivers/clk/uniphier/clk-uniphier-fixed-rate.c
+@@ -24,6 +24,7 @@ struct clk_hw *uniphier_clk_register_fixed_rate(struct device *dev,
+ 
+ 	init.name = name;
+ 	init.ops = &clk_fixed_rate_ops;
++	init.flags = 0;
+ 	init.parent_names = NULL;
+ 	init.num_parents = 0;
+ 
+diff --git a/drivers/clocksource/acpi_pm.c b/drivers/clocksource/acpi_pm.c
+index eb596ff9e7bb3..279ddff81ab49 100644
+--- a/drivers/clocksource/acpi_pm.c
++++ b/drivers/clocksource/acpi_pm.c
+@@ -229,8 +229,10 @@ static int __init parse_pmtmr(char *arg)
+ 	int ret;
+ 
+ 	ret = kstrtouint(arg, 16, &base);
+-	if (ret)
+-		return ret;
++	if (ret) {
++		pr_warn("PMTMR: invalid 'pmtmr=' value: '%s'\n", arg);
++		return 1;
++	}
+ 
+ 	pr_info("PMTMR IOPort override: 0x%04x -> 0x%04x\n", pmtmr_ioport,
+ 		base);
+diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c
+index fabad79baafce..df194b05e944c 100644
+--- a/drivers/clocksource/exynos_mct.c
++++ b/drivers/clocksource/exynos_mct.c
+@@ -494,11 +494,14 @@ static int exynos4_mct_dying_cpu(unsigned int cpu)
+ 	return 0;
+ }
+ 
+-static int __init exynos4_timer_resources(struct device_node *np, void __iomem *base)
++static int __init exynos4_timer_resources(struct device_node *np)
+ {
+-	int err, cpu;
+ 	struct clk *mct_clk, *tick_clk;
+ 
++	reg_base = of_iomap(np, 0);
++	if (!reg_base)
++		panic("%s: unable to ioremap mct address space\n", __func__);
++
+ 	tick_clk = of_clk_get_by_name(np, "fin_pll");
+ 	if (IS_ERR(tick_clk))
+ 		panic("%s: unable to determine tick clock rate\n", __func__);
+@@ -509,9 +512,32 @@ static int __init exynos4_timer_resources(struct device_node *np, void __iomem *
+ 		panic("%s: unable to retrieve mct clock instance\n", __func__);
+ 	clk_prepare_enable(mct_clk);
+ 
+-	reg_base = base;
+-	if (!reg_base)
+-		panic("%s: unable to ioremap mct address space\n", __func__);
++	return 0;
++}
++
++static int __init exynos4_timer_interrupts(struct device_node *np,
++					   unsigned int int_type)
++{
++	int nr_irqs, i, err, cpu;
++
++	mct_int_type = int_type;
++
++	/* This driver uses only one global timer interrupt */
++	mct_irqs[MCT_G0_IRQ] = irq_of_parse_and_map(np, MCT_G0_IRQ);
++
++	/*
++	 * Find out the number of local irqs specified. The local
++	 * timer irqs are specified after the four global timer
++	 * irqs are specified.
++	 */
++	nr_irqs = of_irq_count(np);
++	if (nr_irqs > ARRAY_SIZE(mct_irqs)) {
++		pr_err("exynos-mct: too many (%d) interrupts configured in DT\n",
++			nr_irqs);
++		nr_irqs = ARRAY_SIZE(mct_irqs);
++	}
++	for (i = MCT_L0_IRQ; i < nr_irqs; i++)
++		mct_irqs[i] = irq_of_parse_and_map(np, i);
+ 
+ 	if (mct_int_type == MCT_INT_PPI) {
+ 
+@@ -522,11 +548,14 @@ static int __init exynos4_timer_resources(struct device_node *np, void __iomem *
+ 		     mct_irqs[MCT_L0_IRQ], err);
+ 	} else {
+ 		for_each_possible_cpu(cpu) {
+-			int mct_irq = mct_irqs[MCT_L0_IRQ + cpu];
++			int mct_irq;
+ 			struct mct_clock_event_device *pcpu_mevt =
+ 				per_cpu_ptr(&percpu_mct_tick, cpu);
+ 
+ 			pcpu_mevt->evt.irq = -1;
++			if (MCT_L0_IRQ + cpu >= ARRAY_SIZE(mct_irqs))
++				break;
++			mct_irq = mct_irqs[MCT_L0_IRQ + cpu];
+ 
+ 			irq_set_status_flags(mct_irq, IRQ_NOAUTOEN);
+ 			if (request_irq(mct_irq,
+@@ -571,24 +600,13 @@ out_irq:
+ 
+ static int __init mct_init_dt(struct device_node *np, unsigned int int_type)
+ {
+-	u32 nr_irqs, i;
+ 	int ret;
+ 
+-	mct_int_type = int_type;
+-
+-	/* This driver uses only one global timer interrupt */
+-	mct_irqs[MCT_G0_IRQ] = irq_of_parse_and_map(np, MCT_G0_IRQ);
+-
+-	/*
+-	 * Find out the number of local irqs specified. The local
+-	 * timer irqs are specified after the four global timer
+-	 * irqs are specified.
+-	 */
+-	nr_irqs = of_irq_count(np);
+-	for (i = MCT_L0_IRQ; i < nr_irqs; i++)
+-		mct_irqs[i] = irq_of_parse_and_map(np, i);
++	ret = exynos4_timer_resources(np);
++	if (ret)
++		return ret;
+ 
+-	ret = exynos4_timer_resources(np, of_iomap(np, 0));
++	ret = exynos4_timer_interrupts(np, int_type);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/clocksource/timer-microchip-pit64b.c b/drivers/clocksource/timer-microchip-pit64b.c
+index 59e11ca8ee73e..5c9485cb4e059 100644
+--- a/drivers/clocksource/timer-microchip-pit64b.c
++++ b/drivers/clocksource/timer-microchip-pit64b.c
+@@ -121,7 +121,7 @@ static u64 mchp_pit64b_clksrc_read(struct clocksource *cs)
+ 	return mchp_pit64b_cnt_read(mchp_pit64b_cs_base);
+ }
+ 
+-static u64 mchp_pit64b_sched_read_clk(void)
++static u64 notrace mchp_pit64b_sched_read_clk(void)
+ {
+ 	return mchp_pit64b_cnt_read(mchp_pit64b_cs_base);
+ }
+diff --git a/drivers/clocksource/timer-of.c b/drivers/clocksource/timer-of.c
+index 572da477c6d35..b965f20174e3a 100644
+--- a/drivers/clocksource/timer-of.c
++++ b/drivers/clocksource/timer-of.c
+@@ -157,9 +157,9 @@ static __init int timer_of_base_init(struct device_node *np,
+ 	of_base->base = of_base->name ?
+ 		of_io_request_and_map(np, of_base->index, of_base->name) :
+ 		of_iomap(np, of_base->index);
+-	if (IS_ERR(of_base->base)) {
+-		pr_err("Failed to iomap (%s)\n", of_base->name);
+-		return PTR_ERR(of_base->base);
++	if (IS_ERR_OR_NULL(of_base->base)) {
++		pr_err("Failed to iomap (%s:%s)\n", np->name, of_base->name);
++		return of_base->base ? PTR_ERR(of_base->base) : -ENOMEM;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index 1fccb457fcc54..2737407ff0698 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -694,9 +694,9 @@ static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa)
+ 		return 0;
+ 	}
+ 
+-	if (pa == 0x48034000)		/* dra7 dmtimer3 */
++	if (pa == 0x4882c000)           /* dra7 dmtimer15 */
+ 		return dmtimer_percpu_timer_init(np, 0);
+-	else if (pa == 0x48036000)	/* dra7 dmtimer4 */
++	else if (pa == 0x4882e000)      /* dra7 dmtimer16 */
+ 		return dmtimer_percpu_timer_init(np, 1);
+ 
+ 	return 0;
+diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+index fba9937a406b3..7fdd30e92e429 100644
+--- a/drivers/cpufreq/qcom-cpufreq-nvmem.c
++++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+@@ -130,7 +130,7 @@ static void get_krait_bin_format_b(struct device *cpu_dev,
+ 	}
+ 
+ 	/* Check PVS_BLOW_STATUS */
+-	pte_efuse = *(((u32 *)buf) + 4);
++	pte_efuse = *(((u32 *)buf) + 1);
+ 	pte_efuse &= BIT(21);
+ 	if (pte_efuse) {
+ 		dev_dbg(cpu_dev, "PVS bin: %d\n", *pvs);
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+index 33707a2e55ff0..64133d4da3d56 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+@@ -11,6 +11,7 @@
+  * You could find a link for the datasheet in Documentation/arm/sunxi.rst
+  */
+ 
++#include <linux/bottom_half.h>
+ #include <linux/crypto.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/io.h>
+@@ -280,7 +281,9 @@ static int sun8i_ce_cipher_run(struct crypto_engine *engine, void *areq)
+ 
+ 	flow = rctx->flow;
+ 	err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(breq->base.tfm));
++	local_bh_disable();
+ 	crypto_finalize_skcipher_request(engine, breq, err);
++	local_bh_enable();
+ 	return 0;
+ }
+ 
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
+index 4c5a2c11d7141..62c07a724d40e 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
+@@ -9,6 +9,7 @@
+  *
+  * You could find the datasheet in Documentation/arm/sunxi.rst
+  */
++#include <linux/bottom_half.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/scatterlist.h>
+@@ -412,6 +413,8 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ theend:
+ 	kfree(buf);
+ 	kfree(result);
++	local_bh_disable();
+ 	crypto_finalize_hash_request(engine, breq, err);
++	local_bh_enable();
+ 	return 0;
+ }
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index 7c355bc2fb066..f783748462f94 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -11,6 +11,7 @@
+  * You could find a link for the datasheet in Documentation/arm/sunxi.rst
+  */
+ 
++#include <linux/bottom_half.h>
+ #include <linux/crypto.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/io.h>
+@@ -271,7 +272,9 @@ static int sun8i_ss_handle_cipher_request(struct crypto_engine *engine, void *ar
+ 	struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
+ 
+ 	err = sun8i_ss_cipher(breq);
++	local_bh_disable();
+ 	crypto_finalize_skcipher_request(engine, breq, err);
++	local_bh_enable();
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+index 80e89066dbd1a..319fe3279a716 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+@@ -30,6 +30,8 @@
+ static const struct ss_variant ss_a80_variant = {
+ 	.alg_cipher = { SS_ALG_AES, SS_ALG_DES, SS_ALG_3DES,
+ 	},
++	.alg_hash = { SS_ID_NOTSUPP, SS_ID_NOTSUPP, SS_ID_NOTSUPP, SS_ID_NOTSUPP,
++	},
+ 	.op_mode = { SS_OP_ECB, SS_OP_CBC,
+ 	},
+ 	.ss_clks = {
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+index 756d5a7835482..c9edecd43ef96 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+@@ -9,6 +9,7 @@
+  *
+  * You could find the datasheet in Documentation/arm/sunxi.rst
+  */
++#include <linux/bottom_half.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/scatterlist.h>
+@@ -440,6 +441,8 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
+ theend:
+ 	kfree(pad);
+ 	kfree(result);
++	local_bh_disable();
+ 	crypto_finalize_hash_request(engine, breq, err);
++	local_bh_enable();
+ 	return 0;
+ }
+diff --git a/drivers/crypto/amlogic/amlogic-gxl-cipher.c b/drivers/crypto/amlogic/amlogic-gxl-cipher.c
+index 8b5e07316352c..652e72d030bb0 100644
+--- a/drivers/crypto/amlogic/amlogic-gxl-cipher.c
++++ b/drivers/crypto/amlogic/amlogic-gxl-cipher.c
+@@ -265,7 +265,9 @@ static int meson_handle_cipher_request(struct crypto_engine *engine,
+ 	struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
+ 
+ 	err = meson_cipher(breq);
++	local_bh_disable();
+ 	crypto_finalize_skcipher_request(engine, breq, err);
++	local_bh_enable();
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
+index 0770a83bf1a57..b3eea329f840f 100644
+--- a/drivers/crypto/ccp/ccp-dmaengine.c
++++ b/drivers/crypto/ccp/ccp-dmaengine.c
+@@ -633,6 +633,20 @@ static int ccp_terminate_all(struct dma_chan *dma_chan)
+ 	return 0;
+ }
+ 
++static void ccp_dma_release(struct ccp_device *ccp)
++{
++	struct ccp_dma_chan *chan;
++	struct dma_chan *dma_chan;
++	unsigned int i;
++
++	for (i = 0; i < ccp->cmd_q_count; i++) {
++		chan = ccp->ccp_dma_chan + i;
++		dma_chan = &chan->dma_chan;
++		tasklet_kill(&chan->cleanup_tasklet);
++		list_del_rcu(&dma_chan->device_node);
++	}
++}
++
+ int ccp_dmaengine_register(struct ccp_device *ccp)
+ {
+ 	struct ccp_dma_chan *chan;
+@@ -737,6 +751,7 @@ int ccp_dmaengine_register(struct ccp_device *ccp)
+ 	return 0;
+ 
+ err_reg:
++	ccp_dma_release(ccp);
+ 	kmem_cache_destroy(ccp->dma_desc_cache);
+ 
+ err_cache:
+@@ -753,6 +768,7 @@ void ccp_dmaengine_unregister(struct ccp_device *ccp)
+ 		return;
+ 
+ 	dma_async_device_unregister(dma_dev);
++	ccp_dma_release(ccp);
+ 
+ 	kmem_cache_destroy(ccp->dma_desc_cache);
+ 	kmem_cache_destroy(ccp->dma_cmd_cache);
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
+index a5e041d9d2cf1..11e0278c8631d 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.c
++++ b/drivers/crypto/ccree/cc_buffer_mgr.c
+@@ -258,6 +258,13 @@ static int cc_map_sg(struct device *dev, struct scatterlist *sg,
+ {
+ 	int ret = 0;
+ 
++	if (!nbytes) {
++		*mapped_nents = 0;
++		*lbytes = 0;
++		*nents = 0;
++		return 0;
++	}
++
+ 	*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes);
+ 	if (*nents > max_sg_nents) {
+ 		*nents = 0;
+diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
+index dafa6577a8451..c289e4d5cbdc0 100644
+--- a/drivers/crypto/ccree/cc_cipher.c
++++ b/drivers/crypto/ccree/cc_cipher.c
+@@ -254,8 +254,8 @@ static void cc_cipher_exit(struct crypto_tfm *tfm)
+ 		&ctx_p->user.key_dma_addr);
+ 
+ 	/* Free key buffer in context */
+-	kfree_sensitive(ctx_p->user.key);
+ 	dev_dbg(dev, "Free key buffer in context. key=@%p\n", ctx_p->user.key);
++	kfree_sensitive(ctx_p->user.key);
+ }
+ 
+ struct tdes_keys {
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index 5edc91cdb4e65..a9d3e675f7ff4 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -330,7 +330,7 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
+ 		memset(key + AES_KEYSIZE_128, 0, AES_KEYSIZE_128);
+ 	}
+ 
+-	for_each_sg(req->src, src, sg_nents(src), i) {
++	for_each_sg(req->src, src, sg_nents(req->src), i) {
+ 		src_buf = sg_virt(src);
+ 		len = sg_dma_len(src);
+ 		tlen += len;
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
+index 1cece1a7d3f00..5bbf0d2722e11 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
+@@ -506,7 +506,6 @@ struct rk_crypto_tmp rk_ecb_des3_ede_alg = {
+ 		.exit			= rk_ablk_exit_tfm,
+ 		.min_keysize		= DES3_EDE_KEY_SIZE,
+ 		.max_keysize		= DES3_EDE_KEY_SIZE,
+-		.ivsize			= DES_BLOCK_SIZE,
+ 		.setkey			= rk_tdes_setkey,
+ 		.encrypt		= rk_des3_ede_ecb_encrypt,
+ 		.decrypt		= rk_des3_ede_ecb_decrypt,
+diff --git a/drivers/crypto/vmx/Kconfig b/drivers/crypto/vmx/Kconfig
+index c85fab7ef0bdd..b2c28b87f14b3 100644
+--- a/drivers/crypto/vmx/Kconfig
++++ b/drivers/crypto/vmx/Kconfig
+@@ -2,7 +2,11 @@
+ config CRYPTO_DEV_VMX_ENCRYPT
+ 	tristate "Encryption acceleration support on P8 CPU"
+ 	depends on CRYPTO_DEV_VMX
++	select CRYPTO_AES
++	select CRYPTO_CBC
++	select CRYPTO_CTR
+ 	select CRYPTO_GHASH
++	select CRYPTO_XTS
+ 	default m
+ 	help
+ 	  Support for VMX cryptographic acceleration instructions on Power8 CPU.
+diff --git a/drivers/dax/super.c b/drivers/dax/super.c
+index cadbd0a1a1ef0..260a247c60d2d 100644
+--- a/drivers/dax/super.c
++++ b/drivers/dax/super.c
+@@ -723,6 +723,7 @@ static int dax_fs_init(void)
+ static void dax_fs_exit(void)
+ {
+ 	kern_unmount(dax_mnt);
++	rcu_barrier();
+ 	kmem_cache_destroy(dax_cache);
+ }
+ 
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index db732f71e59ad..cfbf10128aaed 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -181,6 +181,10 @@ static long udmabuf_create(struct miscdevice *device,
+ 		if (ubuf->pagecount > pglimit)
+ 			goto err;
+ 	}
++
++	if (!ubuf->pagecount)
++		goto err;
++
+ 	ubuf->pages = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->pages),
+ 				    GFP_KERNEL);
+ 	if (!ubuf->pages) {
+diff --git a/drivers/dma/hisi_dma.c b/drivers/dma/hisi_dma.c
+index e1a958ae79254..3e83769615d1c 100644
+--- a/drivers/dma/hisi_dma.c
++++ b/drivers/dma/hisi_dma.c
+@@ -30,7 +30,7 @@
+ #define HISI_DMA_MODE			0x217c
+ #define HISI_DMA_OFFSET			0x100
+ 
+-#define HISI_DMA_MSI_NUM		30
++#define HISI_DMA_MSI_NUM		32
+ #define HISI_DMA_CHAN_NUM		30
+ #define HISI_DMA_Q_DEPTH_VAL		1024
+ 
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index dfbf514188f37..6dca548f4dab1 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -3199,7 +3199,7 @@ probe_err2:
+ 	return ret;
+ }
+ 
+-static int pl330_remove(struct amba_device *adev)
++static void pl330_remove(struct amba_device *adev)
+ {
+ 	struct pl330_dmac *pl330 = amba_get_drvdata(adev);
+ 	struct dma_pl330_chan *pch, *_p;
+@@ -3239,7 +3239,6 @@ static int pl330_remove(struct amba_device *adev)
+ 
+ 	if (pl330->rstc)
+ 		reset_control_assert(pl330->rstc);
+-	return 0;
+ }
+ 
+ static const struct amba_id pl330_ids[] = {
+diff --git a/drivers/firmware/efi/efi-pstore.c b/drivers/firmware/efi/efi-pstore.c
+index 0ef086e43090b..7e771c56c13c6 100644
+--- a/drivers/firmware/efi/efi-pstore.c
++++ b/drivers/firmware/efi/efi-pstore.c
+@@ -266,7 +266,7 @@ static int efi_pstore_write(struct pstore_record *record)
+ 		efi_name[i] = name[i];
+ 
+ 	ret = efivar_entry_set_safe(efi_name, vendor, PSTORE_EFI_ATTRIBUTES,
+-			      preemptible(), record->size, record->psi->buf);
++			      false, record->size, record->psi->buf);
+ 
+ 	if (record->reason == KMSG_DUMP_OOPS && try_module_get(THIS_MODULE))
+ 		if (!schedule_work(&efivar_work))
+diff --git a/drivers/firmware/google/Kconfig b/drivers/firmware/google/Kconfig
+index 931544c9f63d4..983e07dc022ed 100644
+--- a/drivers/firmware/google/Kconfig
++++ b/drivers/firmware/google/Kconfig
+@@ -21,7 +21,7 @@ config GOOGLE_SMI
+ 
+ config GOOGLE_COREBOOT_TABLE
+ 	tristate "Coreboot Table Access"
+-	depends on ACPI || OF
++	depends on HAS_IOMEM && (ACPI || OF)
+ 	help
+ 	  This option enables the coreboot_table module, which provides other
+ 	  firmware modules access to the coreboot table. The coreboot table
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index e10a99860ca4b..d417199f8fe94 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -749,12 +749,6 @@ int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare)
+ 	};
+ 	int ret;
+ 
+-	desc.args[0] = addr;
+-	desc.args[1] = size;
+-	desc.args[2] = spare;
+-	desc.arginfo = QCOM_SCM_ARGS(3, QCOM_SCM_RW, QCOM_SCM_VAL,
+-				     QCOM_SCM_VAL);
+-
+ 	ret = qcom_scm_call(__scm->dev, &desc, NULL);
+ 
+ 	/* the pg table has been initialized already, ignore the error */
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index 2a7687911c097..53c7e3f8cfde2 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -477,7 +477,7 @@ static int svc_normal_to_secure_thread(void *data)
+ 		case INTEL_SIP_SMC_RSU_ERROR:
+ 			pr_err("%s: STATUS_ERROR\n", __func__);
+ 			cbdata->status = BIT(SVC_STATUS_ERROR);
+-			cbdata->kaddr1 = NULL;
++			cbdata->kaddr1 = &res.a1;
+ 			cbdata->kaddr2 = NULL;
+ 			cbdata->kaddr3 = NULL;
+ 			pdata->chan->scl->receive_cb(pdata->chan->scl, cbdata);
+diff --git a/drivers/fsi/fsi-master-aspeed.c b/drivers/fsi/fsi-master-aspeed.c
+index dbad73162c833..87edc77260d20 100644
+--- a/drivers/fsi/fsi-master-aspeed.c
++++ b/drivers/fsi/fsi-master-aspeed.c
+@@ -525,7 +525,6 @@ static int tacoma_cabled_fsi_fixup(struct device *dev)
+ static int fsi_master_aspeed_probe(struct platform_device *pdev)
+ {
+ 	struct fsi_master_aspeed *aspeed;
+-	struct resource *res;
+ 	int rc, links, reg;
+ 	__be32 raw;
+ 
+@@ -535,26 +534,28 @@ static int fsi_master_aspeed_probe(struct platform_device *pdev)
+ 		return rc;
+ 	}
+ 
+-	aspeed = devm_kzalloc(&pdev->dev, sizeof(*aspeed), GFP_KERNEL);
++	aspeed = kzalloc(sizeof(*aspeed), GFP_KERNEL);
+ 	if (!aspeed)
+ 		return -ENOMEM;
+ 
+ 	aspeed->dev = &pdev->dev;
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	aspeed->base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(aspeed->base))
+-		return PTR_ERR(aspeed->base);
++	aspeed->base = devm_platform_ioremap_resource(pdev, 0);
++	if (IS_ERR(aspeed->base)) {
++		rc = PTR_ERR(aspeed->base);
++		goto err_free_aspeed;
++	}
+ 
+ 	aspeed->clk = devm_clk_get(aspeed->dev, NULL);
+ 	if (IS_ERR(aspeed->clk)) {
+ 		dev_err(aspeed->dev, "couldn't get clock\n");
+-		return PTR_ERR(aspeed->clk);
++		rc = PTR_ERR(aspeed->clk);
++		goto err_free_aspeed;
+ 	}
+ 	rc = clk_prepare_enable(aspeed->clk);
+ 	if (rc) {
+ 		dev_err(aspeed->dev, "couldn't enable clock\n");
+-		return rc;
++		goto err_free_aspeed;
+ 	}
+ 
+ 	rc = setup_cfam_reset(aspeed);
+@@ -589,7 +590,7 @@ static int fsi_master_aspeed_probe(struct platform_device *pdev)
+ 	rc = opb_readl(aspeed, ctrl_base + FSI_MVER, &raw);
+ 	if (rc) {
+ 		dev_err(&pdev->dev, "failed to read hub version\n");
+-		return rc;
++		goto err_release;
+ 	}
+ 
+ 	reg = be32_to_cpu(raw);
+@@ -628,6 +629,8 @@ static int fsi_master_aspeed_probe(struct platform_device *pdev)
+ 
+ err_release:
+ 	clk_disable_unprepare(aspeed->clk);
++err_free_aspeed:
++	kfree(aspeed);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 6c8f141103da4..e828f9414ba2c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -6396,6 +6396,9 @@ static void amdgpu_dm_connector_add_common_modes(struct drm_encoder *encoder,
+ 		mode = amdgpu_dm_create_common_mode(encoder,
+ 				common_modes[i].name, common_modes[i].w,
+ 				common_modes[i].h);
++		if (!mode)
++			continue;
++
+ 		drm_mode_probed_add(connector, mode);
+ 		amdgpu_dm_connector->num_modes++;
+ 	}
+@@ -8612,10 +8615,13 @@ static int dm_update_plane_state(struct dc *dc,
+ static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm_crtc *crtc)
+ {
+ 	struct drm_connector *connector;
+-	struct drm_connector_state *conn_state;
++	struct drm_connector_state *conn_state, *old_conn_state;
+ 	struct amdgpu_dm_connector *aconnector = NULL;
+ 	int i;
+-	for_each_new_connector_in_state(state, connector, conn_state, i) {
++	for_each_oldnew_connector_in_state(state, connector, old_conn_state, conn_state, i) {
++		if (!conn_state->crtc)
++			conn_state = old_conn_state;
++
+ 		if (conn_state->crtc != crtc)
+ 			continue;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
+index 0e0f494fbb5e1..b037fd57fd366 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
++++ b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
+@@ -227,14 +227,6 @@ static const struct irq_source_info_funcs vupdate_no_lock_irq_info_funcs = {
+ 		.funcs = &pflip_irq_info_funcs\
+ 	}
+ 
+-#define vupdate_int_entry(reg_num)\
+-	[DC_IRQ_SOURCE_VUPDATE1 + reg_num] = {\
+-		IRQ_REG_ENTRY(OTG, reg_num,\
+-			OTG_GLOBAL_SYNC_STATUS, VUPDATE_INT_EN,\
+-			OTG_GLOBAL_SYNC_STATUS, VUPDATE_EVENT_CLEAR),\
+-		.funcs = &vblank_irq_info_funcs\
+-	}
+-
+ /* vupdate_no_lock_int_entry maps to DC_IRQ_SOURCE_VUPDATEx, to match semantic
+  * of DCE's DC_IRQ_SOURCE_VUPDATEx.
+  */
+@@ -348,12 +340,6 @@ irq_source_info_dcn21[DAL_IRQ_SOURCES_NUMBER] = {
+ 	dc_underflow_int_entry(6),
+ 	[DC_IRQ_SOURCE_DMCU_SCP] = dummy_irq_entry(),
+ 	[DC_IRQ_SOURCE_VBIOS_SW] = dummy_irq_entry(),
+-	vupdate_int_entry(0),
+-	vupdate_int_entry(1),
+-	vupdate_int_entry(2),
+-	vupdate_int_entry(3),
+-	vupdate_int_entry(4),
+-	vupdate_int_entry(5),
+ 	vupdate_no_lock_int_entry(0),
+ 	vupdate_no_lock_int_entry(1),
+ 	vupdate_no_lock_int_entry(2),
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index 49109614510b8..5abb68017f6ed 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -2098,8 +2098,8 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
+ 		}
+ 	}
+ 
+-	/* setting should not be allowed from VF */
+-	if (amdgpu_sriov_vf(adev)) {
++	/* setting should not be allowed from VF if not in one VF mode */
++	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev)) {
+ 		dev_attr->attr.mode &= ~S_IWUGO;
+ 		dev_attr->store = NULL;
+ 	}
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index e5893218fa4bb..ee27970cfff95 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -115,7 +115,7 @@ int smu_get_dpm_freq_range(struct smu_context *smu,
+ 			   uint32_t *min,
+ 			   uint32_t *max)
+ {
+-	int ret = 0;
++	int ret = -ENOTSUPP;
+ 
+ 	if (!min && !max)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511.h b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+index a9bb734366ae6..a0f6ee15c2485 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511.h
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+@@ -169,6 +169,7 @@
+ #define ADV7511_PACKET_ENABLE_SPARE2		BIT(1)
+ #define ADV7511_PACKET_ENABLE_SPARE1		BIT(0)
+ 
++#define ADV7535_REG_POWER2_HPD_OVERRIDE		BIT(6)
+ #define ADV7511_REG_POWER2_HPD_SRC_MASK		0xc0
+ #define ADV7511_REG_POWER2_HPD_SRC_BOTH		0x00
+ #define ADV7511_REG_POWER2_HPD_SRC_HPD		0x40
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index a0d392c338da5..c6f059be4b897 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -351,11 +351,17 @@ static void __adv7511_power_on(struct adv7511 *adv7511)
+ 	 * from standby or are enabled. When the HPD goes low the adv7511 is
+ 	 * reset and the outputs are disabled which might cause the monitor to
+ 	 * go to standby again. To avoid this we ignore the HPD pin for the
+-	 * first few seconds after enabling the output.
++	 * first few seconds after enabling the output. On the other hand
++	 * adv7535 require to enable HPD Override bit for proper HPD.
+ 	 */
+-	regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
+-			   ADV7511_REG_POWER2_HPD_SRC_MASK,
+-			   ADV7511_REG_POWER2_HPD_SRC_NONE);
++	if (adv7511->type == ADV7535)
++		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++				   ADV7535_REG_POWER2_HPD_OVERRIDE,
++				   ADV7535_REG_POWER2_HPD_OVERRIDE);
++	else
++		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++				   ADV7511_REG_POWER2_HPD_SRC_MASK,
++				   ADV7511_REG_POWER2_HPD_SRC_NONE);
+ }
+ 
+ static void adv7511_power_on(struct adv7511 *adv7511)
+@@ -375,6 +381,10 @@ static void adv7511_power_on(struct adv7511 *adv7511)
+ static void __adv7511_power_off(struct adv7511 *adv7511)
+ {
+ 	/* TODO: setup additional power down modes */
++	if (adv7511->type == ADV7535)
++		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++				   ADV7535_REG_POWER2_HPD_OVERRIDE, 0);
++
+ 	regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ 			   ADV7511_POWER_POWER_DOWN,
+ 			   ADV7511_POWER_POWER_DOWN);
+@@ -672,9 +682,14 @@ adv7511_detect(struct adv7511 *adv7511, struct drm_connector *connector)
+ 			status = connector_status_disconnected;
+ 	} else {
+ 		/* Renable HPD sensing */
+-		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
+-				   ADV7511_REG_POWER2_HPD_SRC_MASK,
+-				   ADV7511_REG_POWER2_HPD_SRC_BOTH);
++		if (adv7511->type == ADV7535)
++			regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++					   ADV7535_REG_POWER2_HPD_OVERRIDE,
++					   ADV7535_REG_POWER2_HPD_OVERRIDE);
++		else
++			regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++					   ADV7511_REG_POWER2_HPD_SRC_MASK,
++					   ADV7511_REG_POWER2_HPD_SRC_BOTH);
+ 	}
+ 
+ 	adv7511->status = status;
+diff --git a/drivers/gpu/drm/bridge/cdns-dsi.c b/drivers/gpu/drm/bridge/cdns-dsi.c
+index b31281f76117c..0ced08d81d7a2 100644
+--- a/drivers/gpu/drm/bridge/cdns-dsi.c
++++ b/drivers/gpu/drm/bridge/cdns-dsi.c
+@@ -1286,6 +1286,7 @@ static const struct of_device_id cdns_dsi_of_match[] = {
+ 	{ .compatible = "cdns,dsi" },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, cdns_dsi_of_match);
+ 
+ static struct platform_driver cdns_dsi_platform_driver = {
+ 	.probe  = cdns_dsi_drm_probe,
+diff --git a/drivers/gpu/drm/bridge/nwl-dsi.c b/drivers/gpu/drm/bridge/nwl-dsi.c
+index 6cac2e58cd15f..b68d335981588 100644
+--- a/drivers/gpu/drm/bridge/nwl-dsi.c
++++ b/drivers/gpu/drm/bridge/nwl-dsi.c
+@@ -1188,6 +1188,7 @@ static int nwl_dsi_probe(struct platform_device *pdev)
+ 
+ 	ret = nwl_dsi_select_input(dsi);
+ 	if (ret < 0) {
++		pm_runtime_disable(dev);
+ 		mipi_dsi_host_unregister(&dsi->dsi_host);
+ 		return ret;
+ 	}
+diff --git a/drivers/gpu/drm/bridge/sil-sii8620.c b/drivers/gpu/drm/bridge/sil-sii8620.c
+index 843265d7f1b12..ec7745c31da07 100644
+--- a/drivers/gpu/drm/bridge/sil-sii8620.c
++++ b/drivers/gpu/drm/bridge/sil-sii8620.c
+@@ -2120,7 +2120,7 @@ static void sii8620_init_rcp_input_dev(struct sii8620 *ctx)
+ 	if (ret) {
+ 		dev_err(ctx->dev, "Failed to register RC device\n");
+ 		ctx->error = ret;
+-		rc_free_device(ctx->rc_dev);
++		rc_free_device(rc_dev);
+ 		return;
+ 	}
+ 	ctx->rc_dev = rc_dev;
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+index 29c0eb4bd7546..b10228b9e3a93 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+@@ -2566,8 +2566,9 @@ static u32 *dw_hdmi_bridge_atomic_get_output_bus_fmts(struct drm_bridge *bridge,
+ 	if (!output_fmts)
+ 		return NULL;
+ 
+-	/* If dw-hdmi is the only bridge, avoid negociating with ourselves */
+-	if (list_is_singular(&bridge->encoder->bridge_chain)) {
++	/* If dw-hdmi is the first or only bridge, avoid negociating with ourselves */
++	if (list_is_singular(&bridge->encoder->bridge_chain) ||
++	    list_is_first(&bridge->chain_node, &bridge->encoder->bridge_chain)) {
+ 		*num_output_fmts = 1;
+ 		output_fmts[0] = MEDIA_BUS_FMT_FIXED;
+ 
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
+index 6b268f9445b36..376fa6eb46f69 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
+@@ -1172,6 +1172,7 @@ __dw_mipi_dsi_probe(struct platform_device *pdev,
+ 	ret = mipi_dsi_host_register(&dsi->dsi_host);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register MIPI host: %d\n", ret);
++		pm_runtime_disable(dev);
+ 		dw_mipi_dsi_debugfs_remove(dsi);
+ 		return ERR_PTR(ret);
+ 	}
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 3d7593ea79f14..862e173d34315 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -4806,7 +4806,8 @@ bool drm_detect_monitor_audio(struct edid *edid)
+ 	if (!edid_ext)
+ 		goto end;
+ 
+-	has_audio = ((edid_ext[3] & EDID_BASIC_AUDIO) != 0);
++	has_audio = (edid_ext[0] == CEA_EXT &&
++		    (edid_ext[3] & EDID_BASIC_AUDIO) != 0);
+ 
+ 	if (has_audio) {
+ 		DRM_DEBUG_KMS("Monitor has basic audio support\n");
+@@ -4959,16 +4960,8 @@ static void drm_parse_hdmi_deep_color_info(struct drm_connector *connector,
+ 		  connector->name, dc_bpc);
+ 	info->bpc = dc_bpc;
+ 
+-	/*
+-	 * Deep color support mandates RGB444 support for all video
+-	 * modes and forbids YCRCB422 support for all video modes per
+-	 * HDMI 1.3 spec.
+-	 */
+-	info->color_formats = DRM_COLOR_FORMAT_RGB444;
+-
+ 	/* YCRCB444 is optional according to spec. */
+ 	if (hdmi[6] & DRM_EDID_HDMI_DC_Y444) {
+-		info->color_formats |= DRM_COLOR_FORMAT_YCRCB444;
+ 		DRM_DEBUG("%s: HDMI sink does YCRCB444 in deep color.\n",
+ 			  connector->name);
+ 	}
+diff --git a/drivers/gpu/drm/i915/display/intel_opregion.c b/drivers/gpu/drm/i915/display/intel_opregion.c
+index abff2d6cedd12..6d083b98f6ae6 100644
+--- a/drivers/gpu/drm/i915/display/intel_opregion.c
++++ b/drivers/gpu/drm/i915/display/intel_opregion.c
+@@ -376,6 +376,21 @@ int intel_opregion_notify_encoder(struct intel_encoder *intel_encoder,
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * The port numbering and mapping here is bizarre. The now-obsolete
++	 * swsci spec supports ports numbered [0..4]. Port E is handled as a
++	 * special case, but port F and beyond are not. The functionality is
++	 * supposed to be obsolete for new platforms. Just bail out if the port
++	 * number is out of bounds after mapping.
++	 */
++	if (port > 4) {
++		drm_dbg_kms(&dev_priv->drm,
++			    "[ENCODER:%d:%s] port %c (index %u) out of bounds for display power state notification\n",
++			    intel_encoder->base.base.id, intel_encoder->base.name,
++			    port_name(intel_encoder->port), port);
++		return -EINVAL;
++	}
++
+ 	if (!enable)
+ 		parm |= 4 << 8;
+ 
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+index 5754bccff4d15..92dd65befbcb8 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+@@ -423,7 +423,7 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
+ 		return -EACCES;
+ 
+ 	addr -= area->vm_start;
+-	if (addr >= obj->base.size)
++	if (range_overflows_t(u64, addr, len, obj->base.size))
+ 		return -EINVAL;
+ 
+ 	/* As this is primarily for debugging, let's focus on simplicity */
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index 2753067c08e68..728fea5094124 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -396,10 +396,8 @@ static void meson_drv_unbind(struct device *dev)
+ 	drm_irq_uninstall(drm);
+ 	drm_dev_put(drm);
+ 
+-	if (priv->afbcd.ops) {
+-		priv->afbcd.ops->reset(priv);
+-		meson_rdma_free(priv);
+-	}
++	if (priv->afbcd.ops)
++		priv->afbcd.ops->exit(priv);
+ }
+ 
+ static const struct component_master_ops meson_drv_master_ops = {
+diff --git a/drivers/gpu/drm/meson/meson_osd_afbcd.c b/drivers/gpu/drm/meson/meson_osd_afbcd.c
+index ffc6b584dbf85..0cdbe899402f8 100644
+--- a/drivers/gpu/drm/meson/meson_osd_afbcd.c
++++ b/drivers/gpu/drm/meson/meson_osd_afbcd.c
+@@ -79,11 +79,6 @@ static bool meson_gxm_afbcd_supported_fmt(u64 modifier, uint32_t format)
+ 	return meson_gxm_afbcd_pixel_fmt(modifier, format) >= 0;
+ }
+ 
+-static int meson_gxm_afbcd_init(struct meson_drm *priv)
+-{
+-	return 0;
+-}
+-
+ static int meson_gxm_afbcd_reset(struct meson_drm *priv)
+ {
+ 	writel_relaxed(VIU_SW_RESET_OSD1_AFBCD,
+@@ -93,6 +88,16 @@ static int meson_gxm_afbcd_reset(struct meson_drm *priv)
+ 	return 0;
+ }
+ 
++static int meson_gxm_afbcd_init(struct meson_drm *priv)
++{
++	return 0;
++}
++
++static void meson_gxm_afbcd_exit(struct meson_drm *priv)
++{
++	meson_gxm_afbcd_reset(priv);
++}
++
+ static int meson_gxm_afbcd_enable(struct meson_drm *priv)
+ {
+ 	writel_relaxed(FIELD_PREP(OSD1_AFBCD_ID_FIFO_THRD, 0x40) |
+@@ -172,6 +177,7 @@ static int meson_gxm_afbcd_setup(struct meson_drm *priv)
+ 
+ struct meson_afbcd_ops meson_afbcd_gxm_ops = {
+ 	.init = meson_gxm_afbcd_init,
++	.exit = meson_gxm_afbcd_exit,
+ 	.reset = meson_gxm_afbcd_reset,
+ 	.enable = meson_gxm_afbcd_enable,
+ 	.disable = meson_gxm_afbcd_disable,
+@@ -269,6 +275,18 @@ static bool meson_g12a_afbcd_supported_fmt(u64 modifier, uint32_t format)
+ 	return meson_g12a_afbcd_pixel_fmt(modifier, format) >= 0;
+ }
+ 
++static int meson_g12a_afbcd_reset(struct meson_drm *priv)
++{
++	meson_rdma_reset(priv);
++
++	meson_rdma_writel_sync(priv, VIU_SW_RESET_G12A_AFBC_ARB |
++			       VIU_SW_RESET_G12A_OSD1_AFBCD,
++			       VIU_SW_RESET);
++	meson_rdma_writel_sync(priv, 0, VIU_SW_RESET);
++
++	return 0;
++}
++
+ static int meson_g12a_afbcd_init(struct meson_drm *priv)
+ {
+ 	int ret;
+@@ -286,16 +304,10 @@ static int meson_g12a_afbcd_init(struct meson_drm *priv)
+ 	return 0;
+ }
+ 
+-static int meson_g12a_afbcd_reset(struct meson_drm *priv)
++static void meson_g12a_afbcd_exit(struct meson_drm *priv)
+ {
+-	meson_rdma_reset(priv);
+-
+-	meson_rdma_writel_sync(priv, VIU_SW_RESET_G12A_AFBC_ARB |
+-			       VIU_SW_RESET_G12A_OSD1_AFBCD,
+-			       VIU_SW_RESET);
+-	meson_rdma_writel_sync(priv, 0, VIU_SW_RESET);
+-
+-	return 0;
++	meson_g12a_afbcd_reset(priv);
++	meson_rdma_free(priv);
+ }
+ 
+ static int meson_g12a_afbcd_enable(struct meson_drm *priv)
+@@ -380,6 +392,7 @@ static int meson_g12a_afbcd_setup(struct meson_drm *priv)
+ 
+ struct meson_afbcd_ops meson_afbcd_g12a_ops = {
+ 	.init = meson_g12a_afbcd_init,
++	.exit = meson_g12a_afbcd_exit,
+ 	.reset = meson_g12a_afbcd_reset,
+ 	.enable = meson_g12a_afbcd_enable,
+ 	.disable = meson_g12a_afbcd_disable,
+diff --git a/drivers/gpu/drm/meson/meson_osd_afbcd.h b/drivers/gpu/drm/meson/meson_osd_afbcd.h
+index 5e5523304f42f..e77ddeb6416f3 100644
+--- a/drivers/gpu/drm/meson/meson_osd_afbcd.h
++++ b/drivers/gpu/drm/meson/meson_osd_afbcd.h
+@@ -14,6 +14,7 @@
+ 
+ struct meson_afbcd_ops {
+ 	int (*init)(struct meson_drm *priv);
++	void (*exit)(struct meson_drm *priv);
+ 	int (*reset)(struct meson_drm *priv);
+ 	int (*enable)(struct meson_drm *priv);
+ 	int (*disable)(struct meson_drm *priv);
+diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
+index 509968c0d16bc..2a13e297e16df 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
+@@ -1243,7 +1243,10 @@ static void mgag200_set_format_regs(struct mga_device *mdev,
+ 	WREG_GFX(3, 0x00);
+ 	WREG_GFX(4, 0x00);
+ 	WREG_GFX(5, 0x40);
+-	WREG_GFX(6, 0x05);
++	/* GCTL6 should be 0x05, but we configure memmapsl to 0xb8000 (text mode),
++	 * so that it doesn't hang when running kexec/kdump on G200_SE rev42.
++	 */
++	WREG_GFX(6, 0x0d);
+ 	WREG_GFX(7, 0x0f);
+ 	WREG_GFX(8, 0x0f);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index f7f5c258b5537..a0274fcfe9c9d 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -1113,7 +1113,7 @@ static void _dpu_encoder_virt_enable_helper(struct drm_encoder *drm_enc)
+ 	}
+ 
+ 
+-	if (dpu_enc->disp_info.intf_type == DRM_MODE_CONNECTOR_DisplayPort &&
++	if (dpu_enc->disp_info.intf_type == DRM_MODE_ENCODER_TMDS &&
+ 		dpu_enc->cur_master->hw_mdptop &&
+ 		dpu_enc->cur_master->hw_mdptop->ops.intf_audio_select)
+ 		dpu_enc->cur_master->hw_mdptop->ops.intf_audio_select(
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+index 9b2b5044e8e05..74a13ccad34c0 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+@@ -34,6 +34,14 @@ int dpu_rm_destroy(struct dpu_rm *rm)
+ {
+ 	int i;
+ 
++	for (i = 0; i < ARRAY_SIZE(rm->dspp_blks); i++) {
++		struct dpu_hw_dspp *hw;
++
++		if (rm->dspp_blks[i]) {
++			hw = to_dpu_hw_dspp(rm->dspp_blks[i]);
++			dpu_hw_dspp_destroy(hw);
++		}
++	}
+ 	for (i = 0; i < ARRAY_SIZE(rm->pingpong_blks); i++) {
+ 		struct dpu_hw_pingpong *hw;
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 66f2ea3d42fc2..6cd6934c8c9f1 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -1336,6 +1336,7 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev,
+ 			struct drm_encoder *encoder)
+ {
+ 	struct msm_drm_private *priv;
++	struct dp_display_private *dp_priv;
+ 	int ret;
+ 
+ 	if (WARN_ON(!encoder) || WARN_ON(!dp_display) || WARN_ON(!dev))
+@@ -1344,6 +1345,8 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev,
+ 	priv = dev->dev_private;
+ 	dp_display->drm_dev = dev;
+ 
++	dp_priv = container_of(dp_display, struct dp_display_private, dp_display);
++
+ 	ret = dp_display_request_irq(dp_display);
+ 	if (ret) {
+ 		DRM_ERROR("request_irq failed, ret=%d\n", ret);
+@@ -1361,6 +1364,8 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev,
+ 		return ret;
+ 	}
+ 
++	dp_priv->panel->connector = dp_display->connector;
++
+ 	priv->connectors[priv->num_connectors++] = dp_display->connector;
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c
+index 667fa016496ee..a6ea89a5d51ab 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c
+@@ -142,11 +142,12 @@ nvkm_acr_hsfw_load_bl(struct nvkm_acr *acr, const char *name, int ver,
+ 
+ 	hsfw->imem_size = desc->code_size;
+ 	hsfw->imem_tag = desc->start_tag;
+-	hsfw->imem = kmalloc(desc->code_size, GFP_KERNEL);
+-	memcpy(hsfw->imem, data + desc->code_off, desc->code_size);
+-
++	hsfw->imem = kmemdup(data + desc->code_off, desc->code_size, GFP_KERNEL);
+ 	nvkm_firmware_put(fw);
+-	return 0;
++	if (!hsfw->imem)
++		return -ENOMEM;
++	else
++		return 0;
+ }
+ 
+ int
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+index 2aae636f1cf5c..107ad2d764ec0 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gpu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+@@ -359,8 +359,11 @@ int panfrost_gpu_init(struct panfrost_device *pfdev)
+ 
+ 	panfrost_gpu_init_features(pfdev);
+ 
+-	dma_set_mask_and_coherent(pfdev->dev,
++	err = dma_set_mask_and_coherent(pfdev->dev,
+ 		DMA_BIT_MASK(FIELD_GET(0xff00, pfdev->features.mmu_features)));
++	if (err)
++		return err;
++
+ 	dma_set_max_seg_size(pfdev->dev, UINT_MAX);
+ 
+ 	irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "gpu");
+diff --git a/drivers/gpu/drm/pl111/pl111_drv.c b/drivers/gpu/drm/pl111/pl111_drv.c
+index 46b0d1c4a16c6..d5e8e3a8bff3e 100644
+--- a/drivers/gpu/drm/pl111/pl111_drv.c
++++ b/drivers/gpu/drm/pl111/pl111_drv.c
+@@ -324,7 +324,7 @@ dev_put:
+ 	return ret;
+ }
+ 
+-static int pl111_amba_remove(struct amba_device *amba_dev)
++static void pl111_amba_remove(struct amba_device *amba_dev)
+ {
+ 	struct device *dev = &amba_dev->dev;
+ 	struct drm_device *drm = amba_get_drvdata(amba_dev);
+@@ -335,8 +335,6 @@ static int pl111_amba_remove(struct amba_device *amba_dev)
+ 		drm_panel_bridge_remove(priv->bridge);
+ 	drm_dev_put(drm);
+ 	of_reserved_mem_device_release(dev);
+-
+-	return 0;
+ }
+ 
+ /*
+diff --git a/drivers/gpu/drm/tegra/dsi.c b/drivers/gpu/drm/tegra/dsi.c
+index f46d377f0c304..de1333dc0d867 100644
+--- a/drivers/gpu/drm/tegra/dsi.c
++++ b/drivers/gpu/drm/tegra/dsi.c
+@@ -1538,8 +1538,10 @@ static int tegra_dsi_ganged_probe(struct tegra_dsi *dsi)
+ 		dsi->slave = platform_get_drvdata(gangster);
+ 		of_node_put(np);
+ 
+-		if (!dsi->slave)
++		if (!dsi->slave) {
++			put_device(&gangster->dev);
+ 			return -EPROBE_DEFER;
++		}
+ 
+ 		dsi->slave->master = dsi;
+ 	}
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index a2c09dca4eef9..8659558b518d6 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -520,6 +520,7 @@ static int host1x_remove(struct platform_device *pdev)
+ 	host1x_syncpt_deinit(host);
+ 	reset_control_assert(host->rst);
+ 	clk_disable_unprepare(host->clk);
++	host1x_channel_list_free(&host->channel_list);
+ 	host1x_iommu_exit(host);
+ 
+ 	return 0;
+diff --git a/drivers/greybus/svc.c b/drivers/greybus/svc.c
+index ce7740ef449ba..51d0875a34800 100644
+--- a/drivers/greybus/svc.c
++++ b/drivers/greybus/svc.c
+@@ -866,8 +866,14 @@ static int gb_svc_hello(struct gb_operation *op)
+ 
+ 	gb_svc_debugfs_init(svc);
+ 
+-	return gb_svc_queue_deferred_request(op);
++	ret = gb_svc_queue_deferred_request(op);
++	if (ret)
++		goto err_remove_debugfs;
++
++	return 0;
+ 
++err_remove_debugfs:
++	gb_svc_debugfs_exit(svc);
+ err_unregister_device:
+ 	gb_svc_watchdog_destroy(svc);
+ 	device_del(&svc->dev);
+diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
+index a311b0a33eba7..587259b3db97c 100644
+--- a/drivers/hid/hid-logitech-dj.c
++++ b/drivers/hid/hid-logitech-dj.c
+@@ -1000,6 +1000,7 @@ static void logi_hidpp_recv_queue_notif(struct hid_device *hdev,
+ 		workitem.reports_supported |= STD_KEYBOARD;
+ 		break;
+ 	case 0x0f:
++	case 0x11:
+ 		device_type = "eQUAD Lightspeed 1.2";
+ 		logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem);
+ 		workitem.reports_supported |= STD_KEYBOARD;
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 998aad8a9e608..14811d42a5a91 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -620,6 +620,17 @@ static int i2c_hid_get_raw_report(struct hid_device *hid,
+ 	if (report_type == HID_OUTPUT_REPORT)
+ 		return -EINVAL;
+ 
++	/*
++	 * In case of unnumbered reports the response from the device will
++	 * not have the report ID that the upper layers expect, so we need
++	 * to stash it the buffer ourselves and adjust the data size.
++	 */
++	if (!report_number) {
++		buf[0] = 0;
++		buf++;
++		count--;
++	}
++
+ 	/* +2 bytes to include the size of the reply in the query buffer */
+ 	ask_count = min(count + 2, (size_t)ihid->bufsize);
+ 
+@@ -641,6 +652,9 @@ static int i2c_hid_get_raw_report(struct hid_device *hid,
+ 	count = min(count, ret_count - 2);
+ 	memcpy(buf, ihid->rawbuf + 2, count);
+ 
++	if (!report_number)
++		count++;
++
+ 	return count;
+ }
+ 
+@@ -657,17 +671,19 @@ static int i2c_hid_output_raw_report(struct hid_device *hid, __u8 *buf,
+ 
+ 	mutex_lock(&ihid->reset_lock);
+ 
+-	if (report_id) {
+-		buf++;
+-		count--;
+-	}
+-
++	/*
++	 * Note that both numbered and unnumbered reports passed here
++	 * are supposed to have report ID stored in the 1st byte of the
++	 * buffer, so we strip it off unconditionally before passing payload
++	 * to i2c_hid_set_or_send_report which takes care of encoding
++	 * everything properly.
++	 */
+ 	ret = i2c_hid_set_or_send_report(client,
+ 				report_type == HID_FEATURE_REPORT ? 0x03 : 0x02,
+-				report_id, buf, count, use_data);
++				report_id, buf + 1, count - 1, use_data);
+ 
+-	if (report_id && ret >= 0)
+-		ret++; /* add report_id to the number of transfered bytes */
++	if (ret >= 0)
++		ret++; /* add report_id to the number of transferred bytes */
+ 
+ 	mutex_unlock(&ihid->reset_lock);
+ 
+diff --git a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
+index 6cf59fd26ad78..b6d6d119035ca 100644
+--- a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
++++ b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
+@@ -656,21 +656,12 @@ static int ish_fw_xfer_direct_dma(struct ishtp_cl_data *client_data,
+ 	 */
+ 	payload_max_size &= ~(L1_CACHE_BYTES - 1);
+ 
+-	dma_buf = kmalloc(payload_max_size, GFP_KERNEL | GFP_DMA32);
++	dma_buf = dma_alloc_coherent(devc, payload_max_size, &dma_buf_phy, GFP_KERNEL);
+ 	if (!dma_buf) {
+ 		client_data->flag_retry = true;
+ 		return -ENOMEM;
+ 	}
+ 
+-	dma_buf_phy = dma_map_single(devc, dma_buf, payload_max_size,
+-				     DMA_TO_DEVICE);
+-	if (dma_mapping_error(devc, dma_buf_phy)) {
+-		dev_err(cl_data_to_dev(client_data), "DMA map failed\n");
+-		client_data->flag_retry = true;
+-		rv = -ENOMEM;
+-		goto end_err_dma_buf_release;
+-	}
+-
+ 	ldr_xfer_dma_frag.fragment.hdr.command = LOADER_CMD_XFER_FRAGMENT;
+ 	ldr_xfer_dma_frag.fragment.xfer_mode = LOADER_XFER_MODE_DIRECT_DMA;
+ 	ldr_xfer_dma_frag.ddr_phys_addr = (u64)dma_buf_phy;
+@@ -690,14 +681,7 @@ static int ish_fw_xfer_direct_dma(struct ishtp_cl_data *client_data,
+ 		ldr_xfer_dma_frag.fragment.size = fragment_size;
+ 		memcpy(dma_buf, &fw->data[fragment_offset], fragment_size);
+ 
+-		dma_sync_single_for_device(devc, dma_buf_phy,
+-					   payload_max_size,
+-					   DMA_TO_DEVICE);
+-
+-		/*
+-		 * Flush cache here because the dma_sync_single_for_device()
+-		 * does not do for x86.
+-		 */
++		/* Flush cache to be sure the data is in main memory. */
+ 		clflush_cache_range(dma_buf, payload_max_size);
+ 
+ 		dev_dbg(cl_data_to_dev(client_data),
+@@ -720,15 +704,8 @@ static int ish_fw_xfer_direct_dma(struct ishtp_cl_data *client_data,
+ 		fragment_offset += fragment_size;
+ 	}
+ 
+-	dma_unmap_single(devc, dma_buf_phy, payload_max_size, DMA_TO_DEVICE);
+-	kfree(dma_buf);
+-	return 0;
+-
+ end_err_resp_buf_release:
+-	/* Free ISH buffer if not done already, in error case */
+-	dma_unmap_single(devc, dma_buf_phy, payload_max_size, DMA_TO_DEVICE);
+-end_err_dma_buf_release:
+-	kfree(dma_buf);
++	dma_free_coherent(devc, payload_max_size, dma_buf, dma_buf_phy);
+ 	return rv;
+ }
+ 
+diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
+index 79e5356a737a2..210e532ac277f 100644
+--- a/drivers/hv/Kconfig
++++ b/drivers/hv/Kconfig
+@@ -17,6 +17,7 @@ config HYPERV_TIMER
+ config HYPERV_UTILS
+ 	tristate "Microsoft Hyper-V Utilities driver"
+ 	depends on HYPERV && CONNECTOR && NLS
++	depends on PTP_1588_CLOCK_OPTIONAL
+ 	help
+ 	  Select this option to enable the Hyper-V Utilities.
+ 
+diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
+index eb56e09ae15f3..6a716996a6250 100644
+--- a/drivers/hv/hv_balloon.c
++++ b/drivers/hv/hv_balloon.c
+@@ -1558,7 +1558,7 @@ static void balloon_onchannelcallback(void *context)
+ 			break;
+ 
+ 		default:
+-			pr_warn("Unhandled message: type: %d\n", dm_hdr->type);
++			pr_warn_ratelimited("Unhandled message: type: %d\n", dm_hdr->type);
+ 
+ 		}
+ 	}
+diff --git a/drivers/hwmon/pmbus/pmbus.h b/drivers/hwmon/pmbus/pmbus.h
+index 88a5df2633fb2..de27837e85271 100644
+--- a/drivers/hwmon/pmbus/pmbus.h
++++ b/drivers/hwmon/pmbus/pmbus.h
+@@ -319,6 +319,7 @@ enum pmbus_fan_mode { percent = 0, rpm };
+ /*
+  * STATUS_VOUT, STATUS_INPUT
+  */
++#define PB_VOLTAGE_VIN_OFF		BIT(3)
+ #define PB_VOLTAGE_UV_FAULT		BIT(4)
+ #define PB_VOLTAGE_UV_WARNING		BIT(5)
+ #define PB_VOLTAGE_OV_WARNING		BIT(6)
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index 71798fde2ef0c..117e3ce9c76ad 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -1360,7 +1360,7 @@ static const struct pmbus_limit_attr vin_limit_attrs[] = {
+ 		.reg = PMBUS_VIN_UV_FAULT_LIMIT,
+ 		.attr = "lcrit",
+ 		.alarm = "lcrit_alarm",
+-		.sbit = PB_VOLTAGE_UV_FAULT,
++		.sbit = PB_VOLTAGE_UV_FAULT | PB_VOLTAGE_VIN_OFF,
+ 	}, {
+ 		.reg = PMBUS_VIN_OV_WARN_LIMIT,
+ 		.attr = "max",
+@@ -2255,10 +2255,14 @@ static int pmbus_regulator_is_enabled(struct regulator_dev *rdev)
+ {
+ 	struct device *dev = rdev_get_dev(rdev);
+ 	struct i2c_client *client = to_i2c_client(dev->parent);
++	struct pmbus_data *data = i2c_get_clientdata(client);
+ 	u8 page = rdev_get_id(rdev);
+ 	int ret;
+ 
++	mutex_lock(&data->update_lock);
+ 	ret = pmbus_read_byte_data(client, page, PMBUS_OPERATION);
++	mutex_unlock(&data->update_lock);
++
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -2269,11 +2273,17 @@ static int _pmbus_regulator_on_off(struct regulator_dev *rdev, bool enable)
+ {
+ 	struct device *dev = rdev_get_dev(rdev);
+ 	struct i2c_client *client = to_i2c_client(dev->parent);
++	struct pmbus_data *data = i2c_get_clientdata(client);
+ 	u8 page = rdev_get_id(rdev);
++	int ret;
+ 
+-	return pmbus_update_byte_data(client, page, PMBUS_OPERATION,
+-				      PB_OPERATION_CONTROL_ON,
+-				      enable ? PB_OPERATION_CONTROL_ON : 0);
++	mutex_lock(&data->update_lock);
++	ret = pmbus_update_byte_data(client, page, PMBUS_OPERATION,
++				     PB_OPERATION_CONTROL_ON,
++				     enable ? PB_OPERATION_CONTROL_ON : 0);
++	mutex_unlock(&data->update_lock);
++
++	return ret;
+ }
+ 
+ static int pmbus_regulator_enable(struct regulator_dev *rdev)
+diff --git a/drivers/hwmon/sch56xx-common.c b/drivers/hwmon/sch56xx-common.c
+index 6c84780e358e8..066b12990fbfb 100644
+--- a/drivers/hwmon/sch56xx-common.c
++++ b/drivers/hwmon/sch56xx-common.c
+@@ -424,7 +424,7 @@ struct sch56xx_watchdog_data *sch56xx_watchdog_register(struct device *parent,
+ 	if (nowayout)
+ 		set_bit(WDOG_NO_WAY_OUT, &data->wddev.status);
+ 	if (output_enable & SCH56XX_WDOG_OUTPUT_ENABLE)
+-		set_bit(WDOG_ACTIVE, &data->wddev.status);
++		set_bit(WDOG_HW_RUNNING, &data->wddev.status);
+ 
+ 	/* Since the watchdog uses a downcounter there is no register to read
+ 	   the BIOS set timeout from (if any was set at all) ->
+diff --git a/drivers/hwtracing/coresight/coresight-catu.c b/drivers/hwtracing/coresight/coresight-catu.c
+index a61313f320bda..8e19e8cdcce5e 100644
+--- a/drivers/hwtracing/coresight/coresight-catu.c
++++ b/drivers/hwtracing/coresight/coresight-catu.c
+@@ -567,12 +567,11 @@ out:
+ 	return ret;
+ }
+ 
+-static int catu_remove(struct amba_device *adev)
++static void catu_remove(struct amba_device *adev)
+ {
+ 	struct catu_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+ 	coresight_unregister(drvdata->csdev);
+-	return 0;
+ }
+ 
+ static struct amba_id catu_ids[] = {
+diff --git a/drivers/hwtracing/coresight/coresight-cpu-debug.c b/drivers/hwtracing/coresight/coresight-cpu-debug.c
+index e1d232411d8d7..2dcf13de751fc 100644
+--- a/drivers/hwtracing/coresight/coresight-cpu-debug.c
++++ b/drivers/hwtracing/coresight/coresight-cpu-debug.c
+@@ -627,7 +627,7 @@ err:
+ 	return ret;
+ }
+ 
+-static int debug_remove(struct amba_device *adev)
++static void debug_remove(struct amba_device *adev)
+ {
+ 	struct device *dev = &adev->dev;
+ 	struct debug_drvdata *drvdata = amba_get_drvdata(adev);
+@@ -642,8 +642,6 @@ static int debug_remove(struct amba_device *adev)
+ 
+ 	if (!--debug_count)
+ 		debug_func_exit();
+-
+-	return 0;
+ }
+ 
+ static const struct amba_cs_uci_id uci_id_debug[] = {
+diff --git a/drivers/hwtracing/coresight/coresight-cti-core.c b/drivers/hwtracing/coresight/coresight-cti-core.c
+index 7ea93598f0eea..0276700c246d5 100644
+--- a/drivers/hwtracing/coresight/coresight-cti-core.c
++++ b/drivers/hwtracing/coresight/coresight-cti-core.c
+@@ -836,7 +836,7 @@ static void cti_device_release(struct device *dev)
+ 	if (drvdata->csdev_release)
+ 		drvdata->csdev_release(dev);
+ }
+-static int cti_remove(struct amba_device *adev)
++static void cti_remove(struct amba_device *adev)
+ {
+ 	struct cti_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+@@ -845,8 +845,6 @@ static int cti_remove(struct amba_device *adev)
+ 	mutex_unlock(&ect_mutex);
+ 
+ 	coresight_unregister(drvdata->csdev);
+-
+-	return 0;
+ }
+ 
+ static int cti_probe(struct amba_device *adev, const struct amba_id *id)
+diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
+index 0cf6f0b947b6f..51c801c05e5c3 100644
+--- a/drivers/hwtracing/coresight/coresight-etb10.c
++++ b/drivers/hwtracing/coresight/coresight-etb10.c
+@@ -803,7 +803,7 @@ err_misc_register:
+ 	return ret;
+ }
+ 
+-static int etb_remove(struct amba_device *adev)
++static void etb_remove(struct amba_device *adev)
+ {
+ 	struct etb_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+@@ -814,8 +814,6 @@ static int etb_remove(struct amba_device *adev)
+ 	 */
+ 	misc_deregister(&drvdata->miscdev);
+ 	coresight_unregister(drvdata->csdev);
+-
+-	return 0;
+ }
+ 
+ #ifdef CONFIG_PM
+diff --git a/drivers/hwtracing/coresight/coresight-etm3x-core.c b/drivers/hwtracing/coresight/coresight-etm3x-core.c
+index 5bf5a5a4ce6d1..683a69e88efda 100644
+--- a/drivers/hwtracing/coresight/coresight-etm3x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm3x-core.c
+@@ -909,7 +909,7 @@ static void clear_etmdrvdata(void *info)
+ 	etmdrvdata[cpu] = NULL;
+ }
+ 
+-static int etm_remove(struct amba_device *adev)
++static void etm_remove(struct amba_device *adev)
+ {
+ 	struct etm_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+@@ -932,8 +932,6 @@ static int etm_remove(struct amba_device *adev)
+ 	cpus_read_unlock();
+ 
+ 	coresight_unregister(drvdata->csdev);
+-
+-	return 0;
+ }
+ 
+ #ifdef CONFIG_PM
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+index 74d3e2fe43d46..99df453575f50 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+@@ -1582,7 +1582,7 @@ static void clear_etmdrvdata(void *info)
+ 	etmdrvdata[cpu] = NULL;
+ }
+ 
+-static int etm4_remove(struct amba_device *adev)
++static void etm4_remove(struct amba_device *adev)
+ {
+ 	struct etmv4_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+@@ -1605,8 +1605,6 @@ static int etm4_remove(struct amba_device *adev)
+ 	cpus_read_unlock();
+ 
+ 	coresight_unregister(drvdata->csdev);
+-
+-	return 0;
+ }
+ 
+ static const struct amba_id etm4_ids[] = {
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+index 4682f26139961..42cc38c89f3ba 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+@@ -364,8 +364,12 @@ static ssize_t mode_store(struct device *dev,
+ 	mode = ETM_MODE_QELEM(config->mode);
+ 	/* start by clearing QE bits */
+ 	config->cfg &= ~(BIT(13) | BIT(14));
+-	/* if supported, Q elements with instruction counts are enabled */
+-	if ((mode & BIT(0)) && (drvdata->q_support & BIT(0)))
++	/*
++	 * if supported, Q elements with instruction counts are enabled.
++	 * Always set the low bit for any requested mode. Valid combos are
++	 * 0b00, 0b01 and 0b11.
++	 */
++	if (mode && drvdata->q_support)
+ 		config->cfg |= BIT(13);
+ 	/*
+ 	 * if supported, Q elements with and without instruction
+diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c
+index 3fc6c678b51d8..b2fb853776d79 100644
+--- a/drivers/hwtracing/coresight/coresight-funnel.c
++++ b/drivers/hwtracing/coresight/coresight-funnel.c
+@@ -370,9 +370,9 @@ static int dynamic_funnel_probe(struct amba_device *adev,
+ 	return funnel_probe(&adev->dev, &adev->res);
+ }
+ 
+-static int dynamic_funnel_remove(struct amba_device *adev)
++static void dynamic_funnel_remove(struct amba_device *adev)
+ {
+-	return funnel_remove(&adev->dev);
++	funnel_remove(&adev->dev);
+ }
+ 
+ static const struct amba_id dynamic_funnel_ids[] = {
+diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c
+index 38008aca2c0f4..da2bfeeabc1b4 100644
+--- a/drivers/hwtracing/coresight/coresight-replicator.c
++++ b/drivers/hwtracing/coresight/coresight-replicator.c
+@@ -388,9 +388,9 @@ static int dynamic_replicator_probe(struct amba_device *adev,
+ 	return replicator_probe(&adev->dev, &adev->res);
+ }
+ 
+-static int dynamic_replicator_remove(struct amba_device *adev)
++static void dynamic_replicator_remove(struct amba_device *adev)
+ {
+-	return replicator_remove(&adev->dev);
++	replicator_remove(&adev->dev);
+ }
+ 
+ static const struct amba_id dynamic_replicator_ids[] = {
+diff --git a/drivers/hwtracing/coresight/coresight-stm.c b/drivers/hwtracing/coresight/coresight-stm.c
+index 587c1d7f25208..0ecca9f93f3a1 100644
+--- a/drivers/hwtracing/coresight/coresight-stm.c
++++ b/drivers/hwtracing/coresight/coresight-stm.c
+@@ -951,15 +951,13 @@ stm_unregister:
+ 	return ret;
+ }
+ 
+-static int stm_remove(struct amba_device *adev)
++static void stm_remove(struct amba_device *adev)
+ {
+ 	struct stm_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+ 	coresight_unregister(drvdata->csdev);
+ 
+ 	stm_unregister_device(&drvdata->stm);
+-
+-	return 0;
+ }
+ 
+ #ifdef CONFIG_PM
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-core.c b/drivers/hwtracing/coresight/coresight-tmc-core.c
+index 8169dff5a9f6a..e29b3914fc0ff 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-core.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-core.c
+@@ -559,7 +559,7 @@ out:
+ 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ }
+ 
+-static int tmc_remove(struct amba_device *adev)
++static void tmc_remove(struct amba_device *adev)
+ {
+ 	struct tmc_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+@@ -570,8 +570,6 @@ static int tmc_remove(struct amba_device *adev)
+ 	 */
+ 	misc_deregister(&drvdata->miscdev);
+ 	coresight_unregister(drvdata->csdev);
+-
+-	return 0;
+ }
+ 
+ static const struct amba_id tmc_ids[] = {
+diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c
+index 5b35029461a0c..0ca39d905d0b3 100644
+--- a/drivers/hwtracing/coresight/coresight-tpiu.c
++++ b/drivers/hwtracing/coresight/coresight-tpiu.c
+@@ -173,13 +173,11 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id)
+ 	return PTR_ERR(drvdata->csdev);
+ }
+ 
+-static int tpiu_remove(struct amba_device *adev)
++static void tpiu_remove(struct amba_device *adev)
+ {
+ 	struct tpiu_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+ 
+ 	coresight_unregister(drvdata->csdev);
+-
+-	return 0;
+ }
+ 
+ #ifdef CONFIG_PM
+diff --git a/drivers/i2c/busses/i2c-meson.c b/drivers/i2c/busses/i2c-meson.c
+index ef73a42577cc7..07eb819072c4f 100644
+--- a/drivers/i2c/busses/i2c-meson.c
++++ b/drivers/i2c/busses/i2c-meson.c
+@@ -465,18 +465,18 @@ static int meson_i2c_probe(struct platform_device *pdev)
+ 	 */
+ 	meson_i2c_set_mask(i2c, REG_CTRL, REG_CTRL_START, 0);
+ 
+-	ret = i2c_add_adapter(&i2c->adap);
+-	if (ret < 0) {
+-		clk_disable_unprepare(i2c->clk);
+-		return ret;
+-	}
+-
+ 	/* Disable filtering */
+ 	meson_i2c_set_mask(i2c, REG_SLAVE_ADDR,
+ 			   REG_SLV_SDA_FILTER | REG_SLV_SCL_FILTER, 0);
+ 
+ 	meson_i2c_set_clk_div(i2c, timings.bus_freq_hz);
+ 
++	ret = i2c_add_adapter(&i2c->adap);
++	if (ret < 0) {
++		clk_disable_unprepare(i2c->clk);
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/i2c/busses/i2c-nomadik.c b/drivers/i2c/busses/i2c-nomadik.c
+index d4b1b0865f676..a3363b20f168a 100644
+--- a/drivers/i2c/busses/i2c-nomadik.c
++++ b/drivers/i2c/busses/i2c-nomadik.c
+@@ -1055,7 +1055,7 @@ static int nmk_i2c_probe(struct amba_device *adev, const struct amba_id *id)
+ 	return ret;
+ }
+ 
+-static int nmk_i2c_remove(struct amba_device *adev)
++static void nmk_i2c_remove(struct amba_device *adev)
+ {
+ 	struct resource *res = &adev->res;
+ 	struct nmk_i2c_dev *dev = amba_get_drvdata(adev);
+@@ -1068,8 +1068,6 @@ static int nmk_i2c_remove(struct amba_device *adev)
+ 	i2c_clr_bit(dev->virtbase + I2C_CR, I2C_CR_PE);
+ 	clk_disable_unprepare(dev->clk);
+ 	release_mem_region(res->start, resource_size(res));
+-
+-	return 0;
+ }
+ 
+ static struct i2c_vendor_data vendor_stn8815 = {
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index 2a8568b97c14d..8dabb6ffb1a4f 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -756,7 +756,6 @@ static const struct i2c_adapter_quirks xiic_quirks = {
+ 
+ static const struct i2c_adapter xiic_adapter = {
+ 	.owner = THIS_MODULE,
+-	.name = DRIVER_NAME,
+ 	.class = I2C_CLASS_DEPRECATED,
+ 	.algo = &xiic_algorithm,
+ 	.quirks = &xiic_quirks,
+@@ -793,6 +792,8 @@ static int xiic_i2c_probe(struct platform_device *pdev)
+ 	i2c_set_adapdata(&i2c->adap, i2c);
+ 	i2c->adap.dev.parent = &pdev->dev;
+ 	i2c->adap.dev.of_node = pdev->dev.of_node;
++	snprintf(i2c->adap.name, sizeof(i2c->adap.name),
++		 DRIVER_NAME " %s", pdev->name);
+ 
+ 	mutex_init(&i2c->lock);
+ 	init_waitqueue_head(&i2c->wait);
+diff --git a/drivers/i2c/muxes/i2c-demux-pinctrl.c b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+index 5365199a31f41..f7a7405d4350a 100644
+--- a/drivers/i2c/muxes/i2c-demux-pinctrl.c
++++ b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+@@ -261,7 +261,7 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
+ 
+ 	err = device_create_file(&pdev->dev, &dev_attr_available_masters);
+ 	if (err)
+-		goto err_rollback;
++		goto err_rollback_activation;
+ 
+ 	err = device_create_file(&pdev->dev, &dev_attr_current_master);
+ 	if (err)
+@@ -271,8 +271,9 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
+ 
+ err_rollback_available:
+ 	device_remove_file(&pdev->dev, &dev_attr_available_masters);
+-err_rollback:
++err_rollback_activation:
+ 	i2c_demux_deactivate_master(priv);
++err_rollback:
+ 	for (j = 0; j < i; j++) {
+ 		of_node_put(priv->chan[j].parent_np);
+ 		of_changeset_destroy(&priv->chan[j].chgset);
+diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c
+index a7208704d31c9..e7e2802827740 100644
+--- a/drivers/iio/accel/mma8452.c
++++ b/drivers/iio/accel/mma8452.c
+@@ -176,6 +176,7 @@ static const struct mma8452_event_regs trans_ev_regs = {
+  * @enabled_events:		event flags enabled and handled by this driver
+  */
+ struct mma_chip_info {
++	const char *name;
+ 	u8 chip_id;
+ 	const struct iio_chan_spec *channels;
+ 	int num_channels;
+@@ -1303,6 +1304,7 @@ enum {
+ 
+ static const struct mma_chip_info mma_chip_info_table[] = {
+ 	[mma8451] = {
++		.name = "mma8451",
+ 		.chip_id = MMA8451_DEVICE_ID,
+ 		.channels = mma8451_channels,
+ 		.num_channels = ARRAY_SIZE(mma8451_channels),
+@@ -1327,6 +1329,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ 					MMA8452_INT_FF_MT,
+ 	},
+ 	[mma8452] = {
++		.name = "mma8452",
+ 		.chip_id = MMA8452_DEVICE_ID,
+ 		.channels = mma8452_channels,
+ 		.num_channels = ARRAY_SIZE(mma8452_channels),
+@@ -1343,6 +1346,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ 					MMA8452_INT_FF_MT,
+ 	},
+ 	[mma8453] = {
++		.name = "mma8453",
+ 		.chip_id = MMA8453_DEVICE_ID,
+ 		.channels = mma8453_channels,
+ 		.num_channels = ARRAY_SIZE(mma8453_channels),
+@@ -1359,6 +1363,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ 					MMA8452_INT_FF_MT,
+ 	},
+ 	[mma8652] = {
++		.name = "mma8652",
+ 		.chip_id = MMA8652_DEVICE_ID,
+ 		.channels = mma8652_channels,
+ 		.num_channels = ARRAY_SIZE(mma8652_channels),
+@@ -1368,6 +1373,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ 		.enabled_events = MMA8452_INT_FF_MT,
+ 	},
+ 	[mma8653] = {
++		.name = "mma8653",
+ 		.chip_id = MMA8653_DEVICE_ID,
+ 		.channels = mma8653_channels,
+ 		.num_channels = ARRAY_SIZE(mma8653_channels),
+@@ -1382,6 +1388,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ 		.enabled_events = MMA8452_INT_FF_MT,
+ 	},
+ 	[fxls8471] = {
++		.name = "fxls8471",
+ 		.chip_id = FXLS8471_DEVICE_ID,
+ 		.channels = mma8451_channels,
+ 		.num_channels = ARRAY_SIZE(mma8451_channels),
+@@ -1525,13 +1532,6 @@ static int mma8452_probe(struct i2c_client *client,
+ 	struct mma8452_data *data;
+ 	struct iio_dev *indio_dev;
+ 	int ret;
+-	const struct of_device_id *match;
+-
+-	match = of_match_device(mma8452_dt_ids, &client->dev);
+-	if (!match) {
+-		dev_err(&client->dev, "unknown device model\n");
+-		return -ENODEV;
+-	}
+ 
+ 	indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data));
+ 	if (!indio_dev)
+@@ -1540,7 +1540,14 @@ static int mma8452_probe(struct i2c_client *client,
+ 	data = iio_priv(indio_dev);
+ 	data->client = client;
+ 	mutex_init(&data->lock);
+-	data->chip_info = match->data;
++
++	data->chip_info = device_get_match_data(&client->dev);
++	if (!data->chip_info && id) {
++		data->chip_info = &mma_chip_info_table[id->driver_data];
++	} else {
++		dev_err(&client->dev, "unknown device model\n");
++		return -ENODEV;
++	}
+ 
+ 	data->vdd_reg = devm_regulator_get(&client->dev, "vdd");
+ 	if (IS_ERR(data->vdd_reg))
+@@ -1584,11 +1591,11 @@ static int mma8452_probe(struct i2c_client *client,
+ 	}
+ 
+ 	dev_info(&client->dev, "registering %s accelerometer; ID 0x%x\n",
+-		 match->compatible, data->chip_info->chip_id);
++		 data->chip_info->name, data->chip_info->chip_id);
+ 
+ 	i2c_set_clientdata(client, indio_dev);
+ 	indio_dev->info = &mma8452_info;
+-	indio_dev->name = id->name;
++	indio_dev->name = data->chip_info->name;
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->channels = data->chip_info->channels;
+ 	indio_dev->num_channels = data->chip_info->num_channels;
+@@ -1814,7 +1821,7 @@ MODULE_DEVICE_TABLE(i2c, mma8452_id);
+ static struct i2c_driver mma8452_driver = {
+ 	.driver = {
+ 		.name	= "mma8452",
+-		.of_match_table = of_match_ptr(mma8452_dt_ids),
++		.of_match_table = mma8452_dt_ids,
+ 		.pm	= &mma8452_pm_ops,
+ 	},
+ 	.probe = mma8452_probe,
+diff --git a/drivers/iio/adc/twl6030-gpadc.c b/drivers/iio/adc/twl6030-gpadc.c
+index c6416ad795ca4..256177b15c511 100644
+--- a/drivers/iio/adc/twl6030-gpadc.c
++++ b/drivers/iio/adc/twl6030-gpadc.c
+@@ -911,6 +911,8 @@ static int twl6030_gpadc_probe(struct platform_device *pdev)
+ 	ret = devm_request_threaded_irq(dev, irq, NULL,
+ 				twl6030_gpadc_irq_handler,
+ 				IRQF_ONESHOT, "twl6030_gpadc", indio_dev);
++	if (ret)
++		return ret;
+ 
+ 	ret = twl6030_gpadc_enable_irq(TWL6030_GPADC_RT_SW1_EOC_MASK);
+ 	if (ret < 0) {
+diff --git a/drivers/iio/afe/iio-rescale.c b/drivers/iio/afe/iio-rescale.c
+index e42ea2b1707db..3809f98894a51 100644
+--- a/drivers/iio/afe/iio-rescale.c
++++ b/drivers/iio/afe/iio-rescale.c
+@@ -38,7 +38,7 @@ static int rescale_read_raw(struct iio_dev *indio_dev,
+ 			    int *val, int *val2, long mask)
+ {
+ 	struct rescale *rescale = iio_priv(indio_dev);
+-	unsigned long long tmp;
++	s64 tmp;
+ 	int ret;
+ 
+ 	switch (mask) {
+@@ -59,10 +59,10 @@ static int rescale_read_raw(struct iio_dev *indio_dev,
+ 			*val2 = rescale->denominator;
+ 			return IIO_VAL_FRACTIONAL;
+ 		case IIO_VAL_FRACTIONAL_LOG2:
+-			tmp = *val * 1000000000LL;
+-			do_div(tmp, rescale->denominator);
++			tmp = (s64)*val * 1000000000LL;
++			tmp = div_s64(tmp, rescale->denominator);
+ 			tmp *= rescale->numerator;
+-			do_div(tmp, 1000000000LL);
++			tmp = div_s64(tmp, 1000000000LL);
+ 			*val = tmp;
+ 			return ret;
+ 		default:
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index ede99e0d53714..8c3faa7972842 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -561,28 +561,50 @@ EXPORT_SYMBOL_GPL(iio_read_channel_average_raw);
+ static int iio_convert_raw_to_processed_unlocked(struct iio_channel *chan,
+ 	int raw, int *processed, unsigned int scale)
+ {
+-	int scale_type, scale_val, scale_val2, offset;
++	int scale_type, scale_val, scale_val2;
++	int offset_type, offset_val, offset_val2;
+ 	s64 raw64 = raw;
+-	int ret;
+ 
+-	ret = iio_channel_read(chan, &offset, NULL, IIO_CHAN_INFO_OFFSET);
+-	if (ret >= 0)
+-		raw64 += offset;
++	offset_type = iio_channel_read(chan, &offset_val, &offset_val2,
++				       IIO_CHAN_INFO_OFFSET);
++	if (offset_type >= 0) {
++		switch (offset_type) {
++		case IIO_VAL_INT:
++			break;
++		case IIO_VAL_INT_PLUS_MICRO:
++		case IIO_VAL_INT_PLUS_NANO:
++			/*
++			 * Both IIO_VAL_INT_PLUS_MICRO and IIO_VAL_INT_PLUS_NANO
++			 * implicitely truncate the offset to it's integer form.
++			 */
++			break;
++		case IIO_VAL_FRACTIONAL:
++			offset_val /= offset_val2;
++			break;
++		case IIO_VAL_FRACTIONAL_LOG2:
++			offset_val >>= offset_val2;
++			break;
++		default:
++			return -EINVAL;
++		}
++
++		raw64 += offset_val;
++	}
+ 
+ 	scale_type = iio_channel_read(chan, &scale_val, &scale_val2,
+ 					IIO_CHAN_INFO_SCALE);
+ 	if (scale_type < 0) {
+ 		/*
+-		 * Just pass raw values as processed if no scaling is
+-		 * available.
++		 * If no channel scaling is available apply consumer scale to
++		 * raw value and return.
+ 		 */
+-		*processed = raw;
++		*processed = raw * scale;
+ 		return 0;
+ 	}
+ 
+ 	switch (scale_type) {
+ 	case IIO_VAL_INT:
+-		*processed = raw64 * scale_val;
++		*processed = raw64 * scale_val * scale;
+ 		break;
+ 	case IIO_VAL_INT_PLUS_MICRO:
+ 		if (scale_val2 < 0)
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index fbb0efbe25f84..3c40aa50cd60c 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -2635,7 +2635,7 @@ int rdma_set_ack_timeout(struct rdma_cm_id *id, u8 timeout)
+ {
+ 	struct rdma_id_private *id_priv;
+ 
+-	if (id->qp_type != IB_QPT_RC)
++	if (id->qp_type != IB_QPT_RC && id->qp_type != IB_QPT_XRC_INI)
+ 		return -EINVAL;
+ 
+ 	id_priv = container_of(id, struct rdma_id_private, id);
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index 3d895cc41c3ad..597e889ba8312 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -2078,6 +2078,7 @@ struct ib_mr *ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ 		return mr;
+ 
+ 	mr->device = pd->device;
++	mr->type = IB_MR_TYPE_USER;
+ 	mr->pd = pd;
+ 	mr->dm = NULL;
+ 	atomic_inc(&pd->usecnt);
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index 3591923abebb9..5f3edd255ca3c 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -1439,8 +1439,7 @@ static int query_port(struct rvt_dev_info *rdi, u8 port_num,
+ 				      4096 : hfi1_max_mtu), IB_MTU_4096);
+ 	props->active_mtu = !valid_ib_mtu(ppd->ibmtu) ? props->max_mtu :
+ 		mtu_to_enum(ppd->ibmtu, IB_MTU_4096);
+-	props->phys_mtu = HFI1_CAP_IS_KSET(AIP) ? hfi1_max_mtu :
+-				ib_mtu_enum_to_int(props->max_mtu);
++	props->phys_mtu = hfi1_max_mtu;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 343e6709d9fc3..2f053f48f1beb 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -1792,8 +1792,10 @@ subscribe_event_xa_alloc(struct mlx5_devx_event_table *devx_event_table,
+ 				key_level2,
+ 				obj_event,
+ 				GFP_KERNEL);
+-		if (err)
++		if (err) {
++			kfree(obj_event);
+ 			return err;
++		}
+ 		INIT_LIST_HEAD(&obj_event->obj_sub_list);
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 19346693c1da4..6cd0cbd4fc9f6 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -575,6 +575,8 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev,
+ 	ent = &cache->ent[entry];
+ 	spin_lock_irq(&ent->lock);
+ 	if (list_empty(&ent->head)) {
++		queue_adjust_cache_locked(ent);
++		ent->miss++;
+ 		spin_unlock_irq(&ent->lock);
+ 		mr = create_cache_mr(ent);
+ 		if (IS_ERR(mr))
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index ff9dc37eff345..3cfd2c18eebd9 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -2179,12 +2179,6 @@ int input_register_device(struct input_dev *dev)
+ 	/* KEY_RESERVED is not supposed to be transmitted to userspace. */
+ 	__clear_bit(KEY_RESERVED, dev->keybit);
+ 
+-	/* Buttonpads should not map BTN_RIGHT and/or BTN_MIDDLE. */
+-	if (test_bit(INPUT_PROP_BUTTONPAD, dev->propbit)) {
+-		__clear_bit(BTN_RIGHT, dev->keybit);
+-		__clear_bit(BTN_MIDDLE, dev->keybit);
+-	}
+-
+ 	/* Make sure that bitmasks not mentioned in dev->evbit are clean. */
+ 	input_cleanse_bitmasks(dev);
+ 
+diff --git a/drivers/input/serio/ambakmi.c b/drivers/input/serio/ambakmi.c
+index ecdeca147ed71..4408245b61d2c 100644
+--- a/drivers/input/serio/ambakmi.c
++++ b/drivers/input/serio/ambakmi.c
+@@ -159,7 +159,7 @@ static int amba_kmi_probe(struct amba_device *dev,
+ 	return ret;
+ }
+ 
+-static int amba_kmi_remove(struct amba_device *dev)
++static void amba_kmi_remove(struct amba_device *dev)
+ {
+ 	struct amba_kmi_port *kmi = amba_get_drvdata(dev);
+ 
+@@ -168,7 +168,6 @@ static int amba_kmi_remove(struct amba_device *dev)
+ 	iounmap(kmi->base);
+ 	kfree(kmi);
+ 	amba_release_regions(dev);
+-	return 0;
+ }
+ 
+ static int __maybe_unused amba_kmi_resume(struct device *dev)
+diff --git a/drivers/input/touchscreen/zinitix.c b/drivers/input/touchscreen/zinitix.c
+index 6df6f07f1ac66..17b10b81c7131 100644
+--- a/drivers/input/touchscreen/zinitix.c
++++ b/drivers/input/touchscreen/zinitix.c
+@@ -135,7 +135,7 @@ struct point_coord {
+ 
+ struct touch_event {
+ 	__le16	status;
+-	u8	finger_cnt;
++	u8	finger_mask;
+ 	u8	time_stamp;
+ 	struct point_coord point_coord[MAX_SUPPORTED_FINGER_NUM];
+ };
+@@ -311,11 +311,32 @@ static int zinitix_send_power_on_sequence(struct bt541_ts_data *bt541)
+ static void zinitix_report_finger(struct bt541_ts_data *bt541, int slot,
+ 				  const struct point_coord *p)
+ {
++	u16 x, y;
++
++	if (unlikely(!(p->sub_status &
++		       (SUB_BIT_UP | SUB_BIT_DOWN | SUB_BIT_MOVE)))) {
++		dev_dbg(&bt541->client->dev, "unknown finger event %#02x\n",
++			p->sub_status);
++		return;
++	}
++
++	x = le16_to_cpu(p->x);
++	y = le16_to_cpu(p->y);
++
+ 	input_mt_slot(bt541->input_dev, slot);
+-	input_mt_report_slot_state(bt541->input_dev, MT_TOOL_FINGER, true);
+-	touchscreen_report_pos(bt541->input_dev, &bt541->prop,
+-			       le16_to_cpu(p->x), le16_to_cpu(p->y), true);
+-	input_report_abs(bt541->input_dev, ABS_MT_TOUCH_MAJOR, p->width);
++	if (input_mt_report_slot_state(bt541->input_dev, MT_TOOL_FINGER,
++				       !(p->sub_status & SUB_BIT_UP))) {
++		touchscreen_report_pos(bt541->input_dev,
++				       &bt541->prop, x, y, true);
++		input_report_abs(bt541->input_dev,
++				 ABS_MT_TOUCH_MAJOR, p->width);
++		dev_dbg(&bt541->client->dev, "finger %d %s (%u, %u)\n",
++			slot, p->sub_status & SUB_BIT_DOWN ? "down" : "move",
++			x, y);
++	} else {
++		dev_dbg(&bt541->client->dev, "finger %d up (%u, %u)\n",
++			slot, x, y);
++	}
+ }
+ 
+ static irqreturn_t zinitix_ts_irq_handler(int irq, void *bt541_handler)
+@@ -323,6 +344,7 @@ static irqreturn_t zinitix_ts_irq_handler(int irq, void *bt541_handler)
+ 	struct bt541_ts_data *bt541 = bt541_handler;
+ 	struct i2c_client *client = bt541->client;
+ 	struct touch_event touch_event;
++	unsigned long finger_mask;
+ 	int error;
+ 	int i;
+ 
+@@ -335,10 +357,14 @@ static irqreturn_t zinitix_ts_irq_handler(int irq, void *bt541_handler)
+ 		goto out;
+ 	}
+ 
+-	for (i = 0; i < MAX_SUPPORTED_FINGER_NUM; i++)
+-		if (touch_event.point_coord[i].sub_status & SUB_BIT_EXIST)
+-			zinitix_report_finger(bt541, i,
+-					      &touch_event.point_coord[i]);
++	finger_mask = touch_event.finger_mask;
++	for_each_set_bit(i, &finger_mask, MAX_SUPPORTED_FINGER_NUM) {
++		const struct point_coord *p = &touch_event.point_coord[i];
++
++		/* Only process contacts that are actually reported */
++		if (p->sub_status & SUB_BIT_EXIST)
++			zinitix_report_finger(bt541, i, p);
++	}
+ 
+ 	input_mt_sync_frame(bt541->input_dev);
+ 	input_sync(bt541->input_dev);
+diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
+index 1164d1a42cbc5..4600e97acb264 100644
+--- a/drivers/iommu/iova.c
++++ b/drivers/iommu/iova.c
+@@ -138,10 +138,11 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
+ 	cached_iova = rb_entry(iovad->cached32_node, struct iova, node);
+ 	if (free == cached_iova ||
+ 	    (free->pfn_hi < iovad->dma_32bit_pfn &&
+-	     free->pfn_lo >= cached_iova->pfn_lo)) {
++	     free->pfn_lo >= cached_iova->pfn_lo))
+ 		iovad->cached32_node = rb_next(&free->node);
++
++	if (free->pfn_lo < iovad->dma_32bit_pfn)
+ 		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
+-	}
+ 
+ 	cached_iova = rb_entry(iovad->cached_node, struct iova, node);
+ 	if (free->pfn_lo >= cached_iova->pfn_lo)
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index 0f18abda0e208..bae6c7078ec97 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -1013,7 +1013,9 @@ static int ipmmu_probe(struct platform_device *pdev)
+ 	bitmap_zero(mmu->ctx, IPMMU_CTX_MAX);
+ 	mmu->features = of_device_get_match_data(&pdev->dev);
+ 	memset(mmu->utlb_ctx, IPMMU_CTX_INVALID, mmu->features->num_utlbs);
+-	dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(40));
++	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(40));
++	if (ret)
++		return ret;
+ 
+ 	/* Map I/O memory and request IRQ. */
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+diff --git a/drivers/irqchip/irq-nvic.c b/drivers/irqchip/irq-nvic.c
+index 21cb31ff2bbf2..e903c44edb64a 100644
+--- a/drivers/irqchip/irq-nvic.c
++++ b/drivers/irqchip/irq-nvic.c
+@@ -94,6 +94,7 @@ static int __init nvic_of_init(struct device_node *node,
+ 
+ 	if (!nvic_irq_domain) {
+ 		pr_warn("Failed to allocate irq domain\n");
++		iounmap(nvic_base);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -103,6 +104,7 @@ static int __init nvic_of_init(struct device_node *node,
+ 	if (ret) {
+ 		pr_warn("Failed to allocate irq chips\n");
+ 		irq_domain_remove(nvic_irq_domain);
++		iounmap(nvic_base);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/irqchip/qcom-pdc.c b/drivers/irqchip/qcom-pdc.c
+index 5dc63c20b67ea..fc747b7f49830 100644
+--- a/drivers/irqchip/qcom-pdc.c
++++ b/drivers/irqchip/qcom-pdc.c
+@@ -74,17 +74,18 @@ static int qcom_pdc_gic_set_irqchip_state(struct irq_data *d,
+ static void pdc_enable_intr(struct irq_data *d, bool on)
+ {
+ 	int pin_out = d->hwirq;
++	unsigned long flags;
+ 	u32 index, mask;
+ 	u32 enable;
+ 
+ 	index = pin_out / 32;
+ 	mask = pin_out % 32;
+ 
+-	raw_spin_lock(&pdc_lock);
++	raw_spin_lock_irqsave(&pdc_lock, flags);
+ 	enable = pdc_reg_read(IRQ_ENABLE_BANK, index);
+ 	enable = on ? ENABLE_INTR(enable, mask) : CLEAR_INTR(enable, mask);
+ 	pdc_reg_write(IRQ_ENABLE_BANK, index, enable);
+-	raw_spin_unlock(&pdc_lock);
++	raw_spin_unlock_irqrestore(&pdc_lock, flags);
+ }
+ 
+ static void qcom_pdc_gic_disable(struct irq_data *d)
+diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
+index 2543c7b6948b6..c5663398c6b7d 100644
+--- a/drivers/mailbox/imx-mailbox.c
++++ b/drivers/mailbox/imx-mailbox.c
+@@ -13,6 +13,7 @@
+ #include <linux/module.h>
+ #include <linux/of_device.h>
+ #include <linux/pm_runtime.h>
++#include <linux/suspend.h>
+ #include <linux/slab.h>
+ 
+ #define IMX_MU_xSR_GIPn(x)	BIT(28 + (3 - (x)))
+@@ -66,6 +67,7 @@ struct imx_mu_priv {
+ 	const struct imx_mu_dcfg	*dcfg;
+ 	struct clk		*clk;
+ 	int			irq;
++	bool			suspend;
+ 
+ 	u32 xcr;
+ 
+@@ -277,6 +279,9 @@ static irqreturn_t imx_mu_isr(int irq, void *p)
+ 		return IRQ_NONE;
+ 	}
+ 
++	if (priv->suspend)
++		pm_system_wakeup();
++
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -326,6 +331,8 @@ static int imx_mu_startup(struct mbox_chan *chan)
+ 		break;
+ 	}
+ 
++	priv->suspend = true;
++
+ 	return 0;
+ }
+ 
+@@ -543,6 +550,8 @@ static int imx_mu_probe(struct platform_device *pdev)
+ 
+ 	clk_disable_unprepare(priv->clk);
+ 
++	priv->suspend = false;
++
+ 	return 0;
+ 
+ disable_runtime_pm:
+diff --git a/drivers/mailbox/tegra-hsp.c b/drivers/mailbox/tegra-hsp.c
+index e07091d71986a..4895d80740022 100644
+--- a/drivers/mailbox/tegra-hsp.c
++++ b/drivers/mailbox/tegra-hsp.c
+@@ -410,6 +410,11 @@ static int tegra_hsp_mailbox_flush(struct mbox_chan *chan,
+ 		value = tegra_hsp_channel_readl(ch, HSP_SM_SHRD_MBOX);
+ 		if ((value & HSP_SM_SHRD_MBOX_FULL) == 0) {
+ 			mbox_chan_txdone(chan, 0);
++
++			/* Wait until channel is empty */
++			if (chan->active_req != NULL)
++				continue;
++
+ 			return 0;
+ 		}
+ 
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index fe6dce125aba2..418914373a513 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -2060,9 +2060,11 @@ int bch_btree_check(struct cache_set *c)
+ 		}
+ 	}
+ 
++	/*
++	 * Must wait for all threads to stop.
++	 */
+ 	wait_event_interruptible(check_state->wait,
+-				 atomic_read(&check_state->started) == 0 ||
+-				  test_bit(CACHE_SET_IO_DISABLE, &c->flags));
++				 atomic_read(&check_state->started) == 0);
+ 
+ 	for (i = 0; i < check_state->total_threads; i++) {
+ 		if (check_state->infos[i].result) {
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 3c74996978dad..952253f24175a 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -952,9 +952,11 @@ void bch_sectors_dirty_init(struct bcache_device *d)
+ 		}
+ 	}
+ 
++	/*
++	 * Must wait for all threads to stop.
++	 */
+ 	wait_event_interruptible(state->wait,
+-		 atomic_read(&state->started) == 0 ||
+-		 test_bit(CACHE_SET_IO_DISABLE, &c->flags));
++		 atomic_read(&state->started) == 0);
+ 
+ out:
+ 	kfree(state);
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 2aa4acd33af39..b9677f701b6a1 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -2561,7 +2561,7 @@ static int crypt_set_keyring_key(struct crypt_config *cc, const char *key_string
+ 
+ static int get_key_size(char **key_string)
+ {
+-	return (*key_string[0] == ':') ? -EINVAL : strlen(*key_string) >> 1;
++	return (*key_string[0] == ':') ? -EINVAL : (int)(strlen(*key_string) >> 1);
+ }
+ 
+ #endif /* CONFIG_KEYS */
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 4c7da1c4e6cb9..f7471a2642dd4 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -2354,9 +2354,11 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned write_start,
+ 					dm_integrity_io_error(ic, "invalid sector in journal", -EIO);
+ 					sec &= ~(sector_t)(ic->sectors_per_block - 1);
+ 				}
++				if (unlikely(sec >= ic->provided_data_sectors)) {
++					journal_entry_set_unused(je);
++					continue;
++				}
+ 			}
+-			if (unlikely(sec >= ic->provided_data_sectors))
+-				continue;
+ 			get_area_and_offset(ic, sec, &area, &offset);
+ 			restore_last_bytes(ic, access_journal_data(ic, i, j), je);
+ 			for (k = j + 1; k < ic->journal_section_entries; k++) {
+diff --git a/drivers/media/i2c/adv7511-v4l2.c b/drivers/media/i2c/adv7511-v4l2.c
+index ab7883cff8b22..9f5713b76794d 100644
+--- a/drivers/media/i2c/adv7511-v4l2.c
++++ b/drivers/media/i2c/adv7511-v4l2.c
+@@ -555,7 +555,7 @@ static void log_infoframe(struct v4l2_subdev *sd, const struct adv7511_cfg_read_
+ 	buffer[3] = 0;
+ 	buffer[3] = hdmi_infoframe_checksum(buffer, len + 4);
+ 
+-	if (hdmi_infoframe_unpack(&frame, buffer, sizeof(buffer)) < 0) {
++	if (hdmi_infoframe_unpack(&frame, buffer, len + 4) < 0) {
+ 		v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__, cri->desc);
+ 		return;
+ 	}
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index d1f58795794fd..8cf1704308bf5 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -2454,7 +2454,7 @@ static int adv76xx_read_infoframe(struct v4l2_subdev *sd, int index,
+ 		buffer[i + 3] = infoframe_read(sd,
+ 				       adv76xx_cri[index].payload_addr + i);
+ 
+-	if (hdmi_infoframe_unpack(frame, buffer, sizeof(buffer)) < 0) {
++	if (hdmi_infoframe_unpack(frame, buffer, len + 3) < 0) {
+ 		v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__,
+ 			 adv76xx_cri[index].desc);
+ 		return -ENOENT;
+diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
+index f7d2b6cd3008b..a870117feb44c 100644
+--- a/drivers/media/i2c/adv7842.c
++++ b/drivers/media/i2c/adv7842.c
+@@ -2574,7 +2574,7 @@ static void log_infoframe(struct v4l2_subdev *sd, const struct adv7842_cfg_read_
+ 	for (i = 0; i < len; i++)
+ 		buffer[i + 3] = infoframe_read(sd, cri->payload_addr + i);
+ 
+-	if (hdmi_infoframe_unpack(&frame, buffer, sizeof(buffer)) < 0) {
++	if (hdmi_infoframe_unpack(&frame, buffer, len + 3) < 0) {
+ 		v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__, cri->desc);
+ 		return;
+ 	}
+diff --git a/drivers/media/pci/bt8xx/bttv-driver.c b/drivers/media/pci/bt8xx/bttv-driver.c
+index 35a51e9b539da..1f0e4b913a053 100644
+--- a/drivers/media/pci/bt8xx/bttv-driver.c
++++ b/drivers/media/pci/bt8xx/bttv-driver.c
+@@ -3898,7 +3898,7 @@ static int bttv_register_video(struct bttv *btv)
+ 
+ 	/* video */
+ 	vdev_init(btv, &btv->video_dev, &bttv_video_template, "video");
+-	btv->video_dev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_TUNER |
++	btv->video_dev.device_caps = V4L2_CAP_VIDEO_CAPTURE |
+ 				     V4L2_CAP_READWRITE | V4L2_CAP_STREAMING;
+ 	if (btv->tuner_type != TUNER_ABSENT)
+ 		btv->video_dev.device_caps |= V4L2_CAP_TUNER;
+@@ -3919,7 +3919,7 @@ static int bttv_register_video(struct bttv *btv)
+ 	/* vbi */
+ 	vdev_init(btv, &btv->vbi_dev, &bttv_video_template, "vbi");
+ 	btv->vbi_dev.device_caps = V4L2_CAP_VBI_CAPTURE | V4L2_CAP_READWRITE |
+-				   V4L2_CAP_STREAMING | V4L2_CAP_TUNER;
++				   V4L2_CAP_STREAMING;
+ 	if (btv->tuner_type != TUNER_ABSENT)
+ 		btv->vbi_dev.device_caps |= V4L2_CAP_TUNER;
+ 
+diff --git a/drivers/media/pci/cx88/cx88-mpeg.c b/drivers/media/pci/cx88/cx88-mpeg.c
+index a57c991b165b1..10d2971ef0624 100644
+--- a/drivers/media/pci/cx88/cx88-mpeg.c
++++ b/drivers/media/pci/cx88/cx88-mpeg.c
+@@ -162,6 +162,9 @@ int cx8802_start_dma(struct cx8802_dev    *dev,
+ 	cx_write(MO_TS_GPCNTRL, GP_COUNT_CONTROL_RESET);
+ 	q->count = 0;
+ 
++	/* clear interrupt status register */
++	cx_write(MO_TS_INTSTAT,  0x1f1111);
++
+ 	/* enable irqs */
+ 	dprintk(1, "setting the interrupt mask\n");
+ 	cx_set(MO_PCI_INTMSK, core->pci_irqmask | PCI_INT_TSINT);
+diff --git a/drivers/media/pci/ivtv/ivtv-driver.h b/drivers/media/pci/ivtv/ivtv-driver.h
+index e5efe525ad7bf..00caf60ff9890 100644
+--- a/drivers/media/pci/ivtv/ivtv-driver.h
++++ b/drivers/media/pci/ivtv/ivtv-driver.h
+@@ -332,7 +332,6 @@ struct ivtv_stream {
+ 	struct ivtv *itv;		/* for ease of use */
+ 	const char *name;		/* name of the stream */
+ 	int type;			/* stream type */
+-	u32 caps;			/* V4L2 capabilities */
+ 
+ 	struct v4l2_fh *fh;		/* pointer to the streaming filehandle */
+ 	spinlock_t qlock;		/* locks access to the queues */
+diff --git a/drivers/media/pci/ivtv/ivtv-ioctl.c b/drivers/media/pci/ivtv/ivtv-ioctl.c
+index 35dccb31174c1..a9d69b253516b 100644
+--- a/drivers/media/pci/ivtv/ivtv-ioctl.c
++++ b/drivers/media/pci/ivtv/ivtv-ioctl.c
+@@ -443,7 +443,7 @@ static int ivtv_g_fmt_vid_out_overlay(struct file *file, void *fh, struct v4l2_f
+ 	struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
+ 	struct v4l2_window *winfmt = &fmt->fmt.win;
+ 
+-	if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++	if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ 		return -EINVAL;
+ 	if (!itv->osd_video_pbase)
+ 		return -EINVAL;
+@@ -554,7 +554,7 @@ static int ivtv_try_fmt_vid_out_overlay(struct file *file, void *fh, struct v4l2
+ 	u32 chromakey = fmt->fmt.win.chromakey;
+ 	u8 global_alpha = fmt->fmt.win.global_alpha;
+ 
+-	if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++	if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ 		return -EINVAL;
+ 	if (!itv->osd_video_pbase)
+ 		return -EINVAL;
+@@ -1388,7 +1388,7 @@ static int ivtv_g_fbuf(struct file *file, void *fh, struct v4l2_framebuffer *fb)
+ 		0,
+ 	};
+ 
+-	if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++	if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ 		return -ENOTTY;
+ 	if (!itv->osd_video_pbase)
+ 		return -ENOTTY;
+@@ -1455,7 +1455,7 @@ static int ivtv_s_fbuf(struct file *file, void *fh, const struct v4l2_framebuffe
+ 	struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
+ 	struct yuv_playback_info *yi = &itv->yuv_info;
+ 
+-	if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++	if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ 		return -ENOTTY;
+ 	if (!itv->osd_video_pbase)
+ 		return -ENOTTY;
+@@ -1475,7 +1475,7 @@ static int ivtv_overlay(struct file *file, void *fh, unsigned int on)
+ 	struct ivtv *itv = id->itv;
+ 	struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
+ 
+-	if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++	if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ 		return -ENOTTY;
+ 	if (!itv->osd_video_pbase)
+ 		return -ENOTTY;
+diff --git a/drivers/media/pci/ivtv/ivtv-streams.c b/drivers/media/pci/ivtv/ivtv-streams.c
+index f04ee84bab5fd..f9de5d1605fe3 100644
+--- a/drivers/media/pci/ivtv/ivtv-streams.c
++++ b/drivers/media/pci/ivtv/ivtv-streams.c
+@@ -176,7 +176,7 @@ static void ivtv_stream_init(struct ivtv *itv, int type)
+ 	s->itv = itv;
+ 	s->type = type;
+ 	s->name = ivtv_stream_info[type].name;
+-	s->caps = ivtv_stream_info[type].v4l2_caps;
++	s->vdev.device_caps = ivtv_stream_info[type].v4l2_caps;
+ 
+ 	if (ivtv_stream_info[type].pio)
+ 		s->dma = PCI_DMA_NONE;
+@@ -299,12 +299,9 @@ static int ivtv_reg_dev(struct ivtv *itv, int type)
+ 		if (s_mpg->vdev.v4l2_dev)
+ 			num = s_mpg->vdev.num + ivtv_stream_info[type].num_offset;
+ 	}
+-	s->vdev.device_caps = s->caps;
+-	if (itv->osd_video_pbase) {
+-		itv->streams[IVTV_DEC_STREAM_TYPE_YUV].vdev.device_caps |=
+-			V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
+-		itv->streams[IVTV_DEC_STREAM_TYPE_MPG].vdev.device_caps |=
+-			V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
++	if (itv->osd_video_pbase && (type == IVTV_DEC_STREAM_TYPE_YUV ||
++				     type == IVTV_DEC_STREAM_TYPE_MPG)) {
++		s->vdev.device_caps |= V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
+ 		itv->v4l2_cap |= V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
+ 	}
+ 	video_set_drvdata(&s->vdev, s);
+diff --git a/drivers/media/pci/saa7134/saa7134-alsa.c b/drivers/media/pci/saa7134/saa7134-alsa.c
+index 7a1fb067b0e09..d3cde05a6ebab 100644
+--- a/drivers/media/pci/saa7134/saa7134-alsa.c
++++ b/drivers/media/pci/saa7134/saa7134-alsa.c
+@@ -1214,16 +1214,14 @@ static int alsa_device_exit(struct saa7134_dev *dev)
+ 
+ static int saa7134_alsa_init(void)
+ {
+-	struct saa7134_dev *dev = NULL;
+-	struct list_head *list;
++	struct saa7134_dev *dev;
+ 
+ 	saa7134_dmasound_init = alsa_device_init;
+ 	saa7134_dmasound_exit = alsa_device_exit;
+ 
+ 	pr_info("saa7134 ALSA driver for DMA sound loaded\n");
+ 
+-	list_for_each(list,&saa7134_devlist) {
+-		dev = list_entry(list, struct saa7134_dev, devlist);
++	list_for_each_entry(dev, &saa7134_devlist, devlist) {
+ 		if (dev->pci->device == PCI_DEVICE_ID_PHILIPS_SAA7130)
+ 			pr_info("%s/alsa: %s doesn't support digital audio\n",
+ 				dev->name, saa7134_boards[dev->board].name);
+@@ -1231,7 +1229,7 @@ static int saa7134_alsa_init(void)
+ 			alsa_device_init(dev);
+ 	}
+ 
+-	if (dev == NULL)
++	if (list_empty(&saa7134_devlist))
+ 		pr_info("saa7134 ALSA: no saa7134 cards found\n");
+ 
+ 	return 0;
+diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
+index debc7509c173c..757a58829a512 100644
+--- a/drivers/media/platform/aspeed-video.c
++++ b/drivers/media/platform/aspeed-video.c
+@@ -151,7 +151,7 @@
+ #define  VE_SRC_TB_EDGE_DET_BOT		GENMASK(28, VE_SRC_TB_EDGE_DET_BOT_SHF)
+ 
+ #define VE_MODE_DETECT_STATUS		0x098
+-#define  VE_MODE_DETECT_H_PIXELS	GENMASK(11, 0)
++#define  VE_MODE_DETECT_H_PERIOD	GENMASK(11, 0)
+ #define  VE_MODE_DETECT_V_LINES_SHF	16
+ #define  VE_MODE_DETECT_V_LINES		GENMASK(27, VE_MODE_DETECT_V_LINES_SHF)
+ #define  VE_MODE_DETECT_STATUS_VSYNC	BIT(28)
+@@ -162,6 +162,8 @@
+ #define  VE_SYNC_STATUS_VSYNC_SHF	16
+ #define  VE_SYNC_STATUS_VSYNC		GENMASK(27, VE_SYNC_STATUS_VSYNC_SHF)
+ 
++#define VE_H_TOTAL_PIXELS		0x0A0
++
+ #define VE_INTERRUPT_CTRL		0x304
+ #define VE_INTERRUPT_STATUS		0x308
+ #define  VE_INTERRUPT_MODE_DETECT_WD	BIT(0)
+@@ -765,6 +767,7 @@ static void aspeed_video_get_resolution(struct aspeed_video *video)
+ 	u32 src_lr_edge;
+ 	u32 src_tb_edge;
+ 	u32 sync;
++	u32 htotal;
+ 	struct v4l2_bt_timings *det = &video->detected_timings;
+ 
+ 	det->width = MIN_WIDTH;
+@@ -809,6 +812,7 @@ static void aspeed_video_get_resolution(struct aspeed_video *video)
+ 		src_tb_edge = aspeed_video_read(video, VE_SRC_TB_EDGE_DET);
+ 		mds = aspeed_video_read(video, VE_MODE_DETECT_STATUS);
+ 		sync = aspeed_video_read(video, VE_SYNC_STATUS);
++		htotal = aspeed_video_read(video, VE_H_TOTAL_PIXELS);
+ 
+ 		video->frame_bottom = (src_tb_edge & VE_SRC_TB_EDGE_DET_BOT) >>
+ 			VE_SRC_TB_EDGE_DET_BOT_SHF;
+@@ -825,8 +829,7 @@ static void aspeed_video_get_resolution(struct aspeed_video *video)
+ 			VE_SRC_LR_EDGE_DET_RT_SHF;
+ 		video->frame_left = src_lr_edge & VE_SRC_LR_EDGE_DET_LEFT;
+ 		det->hfrontporch = video->frame_left;
+-		det->hbackporch = (mds & VE_MODE_DETECT_H_PIXELS) -
+-			video->frame_right;
++		det->hbackporch = htotal - video->frame_right;
+ 		det->hsync = sync & VE_SYNC_STATUS_HSYNC;
+ 		if (video->frame_left > video->frame_right)
+ 			continue;
+diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c
+index 1eed69d29149f..2333079a83c71 100644
+--- a/drivers/media/platform/coda/coda-common.c
++++ b/drivers/media/platform/coda/coda-common.c
+@@ -408,6 +408,7 @@ static struct vdoa_data *coda_get_vdoa_data(void)
+ 	if (!vdoa_data)
+ 		vdoa_data = ERR_PTR(-EPROBE_DEFER);
+ 
++	put_device(&vdoa_pdev->dev);
+ out:
+ 	of_node_put(vdoa_node);
+ 
+diff --git a/drivers/media/platform/davinci/vpif.c b/drivers/media/platform/davinci/vpif.c
+index 5e67994e62cca..ee610daf90a3c 100644
+--- a/drivers/media/platform/davinci/vpif.c
++++ b/drivers/media/platform/davinci/vpif.c
+@@ -428,6 +428,7 @@ static int vpif_probe(struct platform_device *pdev)
+ 	static struct resource	*res, *res_irq;
+ 	struct platform_device *pdev_capture, *pdev_display;
+ 	struct device_node *endpoint = NULL;
++	int ret;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	vpif_base = devm_ioremap_resource(&pdev->dev, res);
+@@ -458,8 +459,8 @@ static int vpif_probe(struct platform_device *pdev)
+ 	res_irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ 	if (!res_irq) {
+ 		dev_warn(&pdev->dev, "Missing IRQ resource.\n");
+-		pm_runtime_put(&pdev->dev);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_put_rpm;
+ 	}
+ 
+ 	pdev_capture = devm_kzalloc(&pdev->dev, sizeof(*pdev_capture),
+@@ -493,10 +494,17 @@ static int vpif_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	return 0;
++
++err_put_rpm:
++	pm_runtime_put(&pdev->dev);
++	pm_runtime_disable(&pdev->dev);
++
++	return ret;
+ }
+ 
+ static int vpif_remove(struct platform_device *pdev)
+ {
++	pm_runtime_put(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
+index cd27f637dbe7c..cfc7ebed8fb7a 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
+@@ -102,6 +102,8 @@ struct mtk_vcodec_fw *mtk_vcodec_fw_vpu_init(struct mtk_vcodec_dev *dev,
+ 	vpu_wdt_reg_handler(fw_pdev, mtk_vcodec_vpu_reset_handler, dev, rst_id);
+ 
+ 	fw = devm_kzalloc(&dev->plat_dev->dev, sizeof(*fw), GFP_KERNEL);
++	if (!fw)
++		return ERR_PTR(-ENOMEM);
+ 	fw->type = VPU;
+ 	fw->ops = &mtk_vcodec_vpu_msg;
+ 	fw->pdev = fw_pdev;
+diff --git a/drivers/media/rc/gpio-ir-tx.c b/drivers/media/rc/gpio-ir-tx.c
+index c6cd2e6d8e654..a50701cfbbd7b 100644
+--- a/drivers/media/rc/gpio-ir-tx.c
++++ b/drivers/media/rc/gpio-ir-tx.c
+@@ -48,11 +48,29 @@ static int gpio_ir_tx_set_carrier(struct rc_dev *dev, u32 carrier)
+ 	return 0;
+ }
+ 
++static void delay_until(ktime_t until)
++{
++	/*
++	 * delta should never exceed 0.5 seconds (IR_MAX_DURATION) and on
++	 * m68k ndelay(s64) does not compile; so use s32 rather than s64.
++	 */
++	s32 delta;
++
++	while (true) {
++		delta = ktime_us_delta(until, ktime_get());
++		if (delta <= 0)
++			return;
++
++		/* udelay more than 1ms may not work */
++		delta = min(delta, 1000);
++		udelay(delta);
++	}
++}
++
+ static void gpio_ir_tx_unmodulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ 				   uint count)
+ {
+ 	ktime_t edge;
+-	s32 delta;
+ 	int i;
+ 
+ 	local_irq_disable();
+@@ -63,9 +81,7 @@ static void gpio_ir_tx_unmodulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ 		gpiod_set_value(gpio_ir->gpio, !(i % 2));
+ 
+ 		edge = ktime_add_us(edge, txbuf[i]);
+-		delta = ktime_us_delta(edge, ktime_get());
+-		if (delta > 0)
+-			udelay(delta);
++		delay_until(edge);
+ 	}
+ 
+ 	gpiod_set_value(gpio_ir->gpio, 0);
+@@ -97,9 +113,7 @@ static void gpio_ir_tx_modulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ 		if (i % 2) {
+ 			// space
+ 			edge = ktime_add_us(edge, txbuf[i]);
+-			delta = ktime_us_delta(edge, ktime_get());
+-			if (delta > 0)
+-				udelay(delta);
++			delay_until(edge);
+ 		} else {
+ 			// pulse
+ 			ktime_t last = ktime_add_us(edge, txbuf[i]);
+diff --git a/drivers/media/rc/ir_toy.c b/drivers/media/rc/ir_toy.c
+index 1aa7989e756cc..7f394277478b3 100644
+--- a/drivers/media/rc/ir_toy.c
++++ b/drivers/media/rc/ir_toy.c
+@@ -429,7 +429,7 @@ static int irtoy_probe(struct usb_interface *intf,
+ 	err = usb_submit_urb(irtoy->urb_in, GFP_KERNEL);
+ 	if (err != 0) {
+ 		dev_err(irtoy->dev, "fail to submit in urb: %d\n", err);
+-		return err;
++		goto free_rcdev;
+ 	}
+ 
+ 	err = irtoy_setup(irtoy);
+diff --git a/drivers/media/test-drivers/vidtv/vidtv_s302m.c b/drivers/media/test-drivers/vidtv/vidtv_s302m.c
+index d79b65854627c..4676083cee3b8 100644
+--- a/drivers/media/test-drivers/vidtv/vidtv_s302m.c
++++ b/drivers/media/test-drivers/vidtv/vidtv_s302m.c
+@@ -455,6 +455,9 @@ struct vidtv_encoder
+ 		e->name = kstrdup(args.name, GFP_KERNEL);
+ 
+ 	e->encoder_buf = vzalloc(VIDTV_S302M_BUF_SZ);
++	if (!e->encoder_buf)
++		goto out_kfree_e;
++
+ 	e->encoder_buf_sz = VIDTV_S302M_BUF_SZ;
+ 	e->encoder_buf_offset = 0;
+ 
+@@ -467,10 +470,8 @@ struct vidtv_encoder
+ 	e->is_video_encoder = false;
+ 
+ 	ctx = kzalloc(priv_sz, GFP_KERNEL);
+-	if (!ctx) {
+-		kfree(e);
+-		return NULL;
+-	}
++	if (!ctx)
++		goto out_kfree_buf;
+ 
+ 	e->ctx = ctx;
+ 	ctx->last_duration = 0;
+@@ -498,6 +499,14 @@ struct vidtv_encoder
+ 	e->next = NULL;
+ 
+ 	return e;
++
++out_kfree_buf:
++	kfree(e->encoder_buf);
++
++out_kfree_e:
++	kfree(e->name);
++	kfree(e);
++	return NULL;
+ }
+ 
+ void vidtv_s302m_encoder_destroy(struct vidtv_encoder *e)
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index 87e375562dbb2..26408a972b443 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -3881,6 +3881,8 @@ static int em28xx_usb_probe(struct usb_interface *intf,
+ 		goto err_free;
+ 	}
+ 
++	kref_init(&dev->ref);
++
+ 	dev->devno = nr;
+ 	dev->model = id->driver_info;
+ 	dev->alt   = -1;
+@@ -3981,6 +3983,8 @@ static int em28xx_usb_probe(struct usb_interface *intf,
+ 	}
+ 
+ 	if (dev->board.has_dual_ts && em28xx_duplicate_dev(dev) == 0) {
++		kref_init(&dev->dev_next->ref);
++
+ 		dev->dev_next->ts = SECONDARY_TS;
+ 		dev->dev_next->alt   = -1;
+ 		dev->dev_next->is_audio_only = has_vendor_audio &&
+@@ -4035,12 +4039,8 @@ static int em28xx_usb_probe(struct usb_interface *intf,
+ 			em28xx_write_reg(dev, 0x0b, 0x82);
+ 			mdelay(100);
+ 		}
+-
+-		kref_init(&dev->dev_next->ref);
+ 	}
+ 
+-	kref_init(&dev->ref);
+-
+ 	request_modules(dev);
+ 
+ 	/*
+@@ -4095,11 +4095,8 @@ static void em28xx_usb_disconnect(struct usb_interface *intf)
+ 
+ 	em28xx_close_extension(dev);
+ 
+-	if (dev->dev_next) {
+-		em28xx_close_extension(dev->dev_next);
++	if (dev->dev_next)
+ 		em28xx_release_resources(dev->dev_next);
+-	}
+-
+ 	em28xx_release_resources(dev);
+ 
+ 	if (dev->dev_next) {
+diff --git a/drivers/media/usb/go7007/s2250-board.c b/drivers/media/usb/go7007/s2250-board.c
+index b9e45124673b6..2e5913bccb38f 100644
+--- a/drivers/media/usb/go7007/s2250-board.c
++++ b/drivers/media/usb/go7007/s2250-board.c
+@@ -504,6 +504,7 @@ static int s2250_probe(struct i2c_client *client,
+ 	u8 *data;
+ 	struct go7007 *go = i2c_get_adapdata(adapter);
+ 	struct go7007_usb *usb = go->hpi_context;
++	int err = -EIO;
+ 
+ 	audio = i2c_new_dummy_device(adapter, TLV320_ADDRESS >> 1);
+ 	if (IS_ERR(audio))
+@@ -532,11 +533,8 @@ static int s2250_probe(struct i2c_client *client,
+ 		V4L2_CID_HUE, -512, 511, 1, 0);
+ 	sd->ctrl_handler = &state->hdl;
+ 	if (state->hdl.error) {
+-		int err = state->hdl.error;
+-
+-		v4l2_ctrl_handler_free(&state->hdl);
+-		kfree(state);
+-		return err;
++		err = state->hdl.error;
++		goto fail;
+ 	}
+ 
+ 	state->std = V4L2_STD_NTSC;
+@@ -600,7 +598,7 @@ fail:
+ 	i2c_unregister_device(audio);
+ 	v4l2_ctrl_handler_free(&state->hdl);
+ 	kfree(state);
+-	return -EIO;
++	return err;
+ }
+ 
+ static int s2250_remove(struct i2c_client *client)
+diff --git a/drivers/media/usb/hdpvr/hdpvr-video.c b/drivers/media/usb/hdpvr/hdpvr-video.c
+index 563128d117317..60e57e0f19272 100644
+--- a/drivers/media/usb/hdpvr/hdpvr-video.c
++++ b/drivers/media/usb/hdpvr/hdpvr-video.c
+@@ -308,7 +308,6 @@ static int hdpvr_start_streaming(struct hdpvr_device *dev)
+ 
+ 	dev->status = STATUS_STREAMING;
+ 
+-	INIT_WORK(&dev->worker, hdpvr_transmit_buffers);
+ 	schedule_work(&dev->worker);
+ 
+ 	v4l2_dbg(MSG_BUFFER, hdpvr_debug, &dev->v4l2_dev,
+@@ -1165,6 +1164,9 @@ int hdpvr_register_videodev(struct hdpvr_device *dev, struct device *parent,
+ 	bool ac3 = dev->flags & HDPVR_FLAG_AC3_CAP;
+ 	int res;
+ 
++	// initialize dev->worker
++	INIT_WORK(&dev->worker, hdpvr_transmit_buffers);
++
+ 	dev->cur_std = V4L2_STD_525_60;
+ 	dev->width = 720;
+ 	dev->height = 480;
+diff --git a/drivers/media/usb/stk1160/stk1160-core.c b/drivers/media/usb/stk1160/stk1160-core.c
+index 4e1698f788187..ce717502ea4c3 100644
+--- a/drivers/media/usb/stk1160/stk1160-core.c
++++ b/drivers/media/usb/stk1160/stk1160-core.c
+@@ -403,7 +403,7 @@ static void stk1160_disconnect(struct usb_interface *interface)
+ 	/* Here is the only place where isoc get released */
+ 	stk1160_uninit_isoc(dev);
+ 
+-	stk1160_clear_queue(dev);
++	stk1160_clear_queue(dev, VB2_BUF_STATE_ERROR);
+ 
+ 	video_unregister_device(&dev->vdev);
+ 	v4l2_device_disconnect(&dev->v4l2_dev);
+diff --git a/drivers/media/usb/stk1160/stk1160-v4l.c b/drivers/media/usb/stk1160/stk1160-v4l.c
+index 6a4eb616d5160..1aa953469402f 100644
+--- a/drivers/media/usb/stk1160/stk1160-v4l.c
++++ b/drivers/media/usb/stk1160/stk1160-v4l.c
+@@ -258,7 +258,7 @@ out_uninit:
+ 	stk1160_uninit_isoc(dev);
+ out_stop_hw:
+ 	usb_set_interface(dev->udev, 0, 0);
+-	stk1160_clear_queue(dev);
++	stk1160_clear_queue(dev, VB2_BUF_STATE_QUEUED);
+ 
+ 	mutex_unlock(&dev->v4l_lock);
+ 
+@@ -306,7 +306,7 @@ static int stk1160_stop_streaming(struct stk1160 *dev)
+ 
+ 	stk1160_stop_hw(dev);
+ 
+-	stk1160_clear_queue(dev);
++	stk1160_clear_queue(dev, VB2_BUF_STATE_ERROR);
+ 
+ 	stk1160_dbg("streaming stopped\n");
+ 
+@@ -745,7 +745,7 @@ static const struct video_device v4l_template = {
+ /********************************************************************/
+ 
+ /* Must be called with both v4l_lock and vb_queue_lock hold */
+-void stk1160_clear_queue(struct stk1160 *dev)
++void stk1160_clear_queue(struct stk1160 *dev, enum vb2_buffer_state vb2_state)
+ {
+ 	struct stk1160_buffer *buf;
+ 	unsigned long flags;
+@@ -756,7 +756,7 @@ void stk1160_clear_queue(struct stk1160 *dev)
+ 		buf = list_first_entry(&dev->avail_bufs,
+ 			struct stk1160_buffer, list);
+ 		list_del(&buf->list);
+-		vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
++		vb2_buffer_done(&buf->vb.vb2_buf, vb2_state);
+ 		stk1160_dbg("buffer [%p/%d] aborted\n",
+ 			    buf, buf->vb.vb2_buf.index);
+ 	}
+@@ -766,7 +766,7 @@ void stk1160_clear_queue(struct stk1160 *dev)
+ 		buf = dev->isoc_ctl.buf;
+ 		dev->isoc_ctl.buf = NULL;
+ 
+-		vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
++		vb2_buffer_done(&buf->vb.vb2_buf, vb2_state);
+ 		stk1160_dbg("buffer [%p/%d] aborted\n",
+ 			    buf, buf->vb.vb2_buf.index);
+ 	}
+diff --git a/drivers/media/usb/stk1160/stk1160.h b/drivers/media/usb/stk1160/stk1160.h
+index a31ea1c80f255..a70963ce87533 100644
+--- a/drivers/media/usb/stk1160/stk1160.h
++++ b/drivers/media/usb/stk1160/stk1160.h
+@@ -166,7 +166,7 @@ struct regval {
+ int stk1160_vb2_setup(struct stk1160 *dev);
+ int stk1160_video_register(struct stk1160 *dev);
+ void stk1160_video_unregister(struct stk1160 *dev);
+-void stk1160_clear_queue(struct stk1160 *dev);
++void stk1160_clear_queue(struct stk1160 *dev, enum vb2_buffer_state vb2_state);
+ 
+ /* Provided by stk1160-video.c */
+ int stk1160_alloc_isoc(struct stk1160 *dev);
+diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
+index b221b4e438a1a..73190652c267b 100644
+--- a/drivers/media/v4l2-core/v4l2-mem2mem.c
++++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
+@@ -585,19 +585,14 @@ int v4l2_m2m_reqbufs(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ }
+ EXPORT_SYMBOL_GPL(v4l2_m2m_reqbufs);
+ 
+-int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+-		      struct v4l2_buffer *buf)
++static void v4l2_m2m_adjust_mem_offset(struct vb2_queue *vq,
++				       struct v4l2_buffer *buf)
+ {
+-	struct vb2_queue *vq;
+-	int ret = 0;
+-	unsigned int i;
+-
+-	vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
+-	ret = vb2_querybuf(vq, buf);
+-
+ 	/* Adjust MMAP memory offsets for the CAPTURE queue */
+ 	if (buf->memory == V4L2_MEMORY_MMAP && V4L2_TYPE_IS_CAPTURE(vq->type)) {
+ 		if (V4L2_TYPE_IS_MULTIPLANAR(vq->type)) {
++			unsigned int i;
++
+ 			for (i = 0; i < buf->length; ++i)
+ 				buf->m.planes[i].m.mem_offset
+ 					+= DST_QUEUE_OFF_BASE;
+@@ -605,8 +600,23 @@ int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ 			buf->m.offset += DST_QUEUE_OFF_BASE;
+ 		}
+ 	}
++}
+ 
+-	return ret;
++int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
++		      struct v4l2_buffer *buf)
++{
++	struct vb2_queue *vq;
++	int ret;
++
++	vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
++	ret = vb2_querybuf(vq, buf);
++	if (ret)
++		return ret;
++
++	/* Adjust MMAP memory offsets for the CAPTURE queue */
++	v4l2_m2m_adjust_mem_offset(vq, buf);
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_m2m_querybuf);
+ 
+@@ -763,6 +773,9 @@ int v4l2_m2m_qbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ 	if (ret)
+ 		return ret;
+ 
++	/* Adjust MMAP memory offsets for the CAPTURE queue */
++	v4l2_m2m_adjust_mem_offset(vq, buf);
++
+ 	/*
+ 	 * If the capture queue is streaming, but streaming hasn't started
+ 	 * on the device, but was asked to stop, mark the previously queued
+@@ -784,9 +797,17 @@ int v4l2_m2m_dqbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ 		   struct v4l2_buffer *buf)
+ {
+ 	struct vb2_queue *vq;
++	int ret;
+ 
+ 	vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
+-	return vb2_dqbuf(vq, buf, file->f_flags & O_NONBLOCK);
++	ret = vb2_dqbuf(vq, buf, file->f_flags & O_NONBLOCK);
++	if (ret)
++		return ret;
++
++	/* Adjust MMAP memory offsets for the CAPTURE queue */
++	v4l2_m2m_adjust_mem_offset(vq, buf);
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_m2m_dqbuf);
+ 
+@@ -795,9 +816,17 @@ int v4l2_m2m_prepare_buf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ {
+ 	struct video_device *vdev = video_devdata(file);
+ 	struct vb2_queue *vq;
++	int ret;
+ 
+ 	vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
+-	return vb2_prepare_buf(vq, vdev->v4l2_dev->mdev, buf);
++	ret = vb2_prepare_buf(vq, vdev->v4l2_dev->mdev, buf);
++	if (ret)
++		return ret;
++
++	/* Adjust MMAP memory offsets for the CAPTURE queue */
++	v4l2_m2m_adjust_mem_offset(vq, buf);
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_m2m_prepare_buf);
+ 
+diff --git a/drivers/memory/emif.c b/drivers/memory/emif.c
+index ddb1879f07d3f..5a059be3516c9 100644
+--- a/drivers/memory/emif.c
++++ b/drivers/memory/emif.c
+@@ -1403,7 +1403,7 @@ static struct emif_data *__init_or_module get_device_details(
+ 	temp	= devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL);
+ 	dev_info = devm_kzalloc(dev, sizeof(*dev_info), GFP_KERNEL);
+ 
+-	if (!emif || !pd || !dev_info) {
++	if (!emif || !temp || !dev_info) {
+ 		dev_err(dev, "%s:%d: allocation error\n", __func__, __LINE__);
+ 		goto error;
+ 	}
+@@ -1495,7 +1495,7 @@ static int __init_or_module emif_probe(struct platform_device *pdev)
+ {
+ 	struct emif_data	*emif;
+ 	struct resource		*res;
+-	int			irq;
++	int			irq, ret;
+ 
+ 	if (pdev->dev.of_node)
+ 		emif = of_get_memory_device_details(pdev->dev.of_node, &pdev->dev);
+@@ -1526,7 +1526,9 @@ static int __init_or_module emif_probe(struct platform_device *pdev)
+ 	emif_onetime_settings(emif);
+ 	emif_debugfs_init(emif);
+ 	disable_and_clear_all_interrupts(emif);
+-	setup_interrupts(emif, irq);
++	ret = setup_interrupts(emif, irq);
++	if (ret)
++		goto error;
+ 
+ 	/* One-time actions taken on probing the first device */
+ 	if (!emif1) {
+diff --git a/drivers/memory/pl172.c b/drivers/memory/pl172.c
+index 575fadbffa306..9eb8cc7de494a 100644
+--- a/drivers/memory/pl172.c
++++ b/drivers/memory/pl172.c
+@@ -273,14 +273,12 @@ err_clk_enable:
+ 	return ret;
+ }
+ 
+-static int pl172_remove(struct amba_device *adev)
++static void pl172_remove(struct amba_device *adev)
+ {
+ 	struct pl172_data *pl172 = amba_get_drvdata(adev);
+ 
+ 	clk_disable_unprepare(pl172->clk);
+ 	amba_release_regions(adev);
+-
+-	return 0;
+ }
+ 
+ static const struct amba_id pl172_ids[] = {
+diff --git a/drivers/memory/pl353-smc.c b/drivers/memory/pl353-smc.c
+index cc01979780d87..b0b251bb207f3 100644
+--- a/drivers/memory/pl353-smc.c
++++ b/drivers/memory/pl353-smc.c
+@@ -427,14 +427,12 @@ out_clk_dis_aper:
+ 	return err;
+ }
+ 
+-static int pl353_smc_remove(struct amba_device *adev)
++static void pl353_smc_remove(struct amba_device *adev)
+ {
+ 	struct pl353_smc_data *pl353_smc = amba_get_drvdata(adev);
+ 
+ 	clk_disable_unprepare(pl353_smc->memclk);
+ 	clk_disable_unprepare(pl353_smc->aclk);
+-
+-	return 0;
+ }
+ 
+ static const struct amba_id pl353_ids[] = {
+diff --git a/drivers/mfd/asic3.c b/drivers/mfd/asic3.c
+index a6bd2134cea2a..14e4bbe6a9da3 100644
+--- a/drivers/mfd/asic3.c
++++ b/drivers/mfd/asic3.c
+@@ -914,14 +914,14 @@ static int __init asic3_mfd_probe(struct platform_device *pdev,
+ 		ret = mfd_add_devices(&pdev->dev, pdev->id,
+ 			&asic3_cell_ds1wm, 1, mem, asic->irq_base, NULL);
+ 		if (ret < 0)
+-			goto out;
++			goto out_unmap;
+ 	}
+ 
+ 	if (mem_sdio && (irq >= 0)) {
+ 		ret = mfd_add_devices(&pdev->dev, pdev->id,
+ 			&asic3_cell_mmc, 1, mem_sdio, irq, NULL);
+ 		if (ret < 0)
+-			goto out;
++			goto out_unmap;
+ 	}
+ 
+ 	ret = 0;
+@@ -935,8 +935,12 @@ static int __init asic3_mfd_probe(struct platform_device *pdev,
+ 		ret = mfd_add_devices(&pdev->dev, 0,
+ 			asic3_cell_leds, ASIC3_NUM_LEDS, NULL, 0, NULL);
+ 	}
++	return ret;
+ 
+- out:
++out_unmap:
++	if (asic->tmio_cnf)
++		iounmap(asic->tmio_cnf);
++out:
+ 	return ret;
+ }
+ 
+diff --git a/drivers/mfd/mc13xxx-core.c b/drivers/mfd/mc13xxx-core.c
+index 1abe7432aad82..e281a9202f110 100644
+--- a/drivers/mfd/mc13xxx-core.c
++++ b/drivers/mfd/mc13xxx-core.c
+@@ -323,8 +323,10 @@ int mc13xxx_adc_do_conversion(struct mc13xxx *mc13xxx, unsigned int mode,
+ 		adc1 |= MC13783_ADC1_ATOX;
+ 
+ 	dev_dbg(mc13xxx->dev, "%s: request irq\n", __func__);
+-	mc13xxx_irq_request(mc13xxx, MC13XXX_IRQ_ADCDONE,
++	ret = mc13xxx_irq_request(mc13xxx, MC13XXX_IRQ_ADCDONE,
+ 			mc13xxx_handler_adcdone, __func__, &adcdone_data);
++	if (ret)
++		goto out;
+ 
+ 	mc13xxx_reg_write(mc13xxx, MC13XXX_ADC0, adc0);
+ 	mc13xxx_reg_write(mc13xxx, MC13XXX_ADC1, adc1);
+diff --git a/drivers/misc/cardreader/alcor_pci.c b/drivers/misc/cardreader/alcor_pci.c
+index de6d44a158bba..3f514d77a843f 100644
+--- a/drivers/misc/cardreader/alcor_pci.c
++++ b/drivers/misc/cardreader/alcor_pci.c
+@@ -266,7 +266,7 @@ static int alcor_pci_probe(struct pci_dev *pdev,
+ 	if (!priv)
+ 		return -ENOMEM;
+ 
+-	ret = ida_simple_get(&alcor_pci_idr, 0, 0, GFP_KERNEL);
++	ret = ida_alloc(&alcor_pci_idr, GFP_KERNEL);
+ 	if (ret < 0)
+ 		return ret;
+ 	priv->id = ret;
+@@ -280,7 +280,8 @@ static int alcor_pci_probe(struct pci_dev *pdev,
+ 	ret = pci_request_regions(pdev, DRV_NAME_ALCOR_PCI);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Cannot request region\n");
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto error_free_ida;
+ 	}
+ 
+ 	if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM)) {
+@@ -324,6 +325,8 @@ static int alcor_pci_probe(struct pci_dev *pdev,
+ 
+ error_release_regions:
+ 	pci_release_regions(pdev);
++error_free_ida:
++	ida_free(&alcor_pci_idr, priv->id);
+ 	return ret;
+ }
+ 
+@@ -337,7 +340,7 @@ static void alcor_pci_remove(struct pci_dev *pdev)
+ 
+ 	mfd_remove_devices(&pdev->dev);
+ 
+-	ida_simple_remove(&alcor_pci_idr, priv->id);
++	ida_free(&alcor_pci_idr, priv->id);
+ 
+ 	pci_release_regions(pdev);
+ 	pci_set_drvdata(pdev, NULL);
+diff --git a/drivers/misc/habanalabs/common/debugfs.c b/drivers/misc/habanalabs/common/debugfs.c
+index 912ddfa360b13..9716b0728b306 100644
+--- a/drivers/misc/habanalabs/common/debugfs.c
++++ b/drivers/misc/habanalabs/common/debugfs.c
+@@ -859,6 +859,8 @@ static ssize_t hl_set_power_state(struct file *f, const char __user *buf,
+ 		pci_set_power_state(hdev->pdev, PCI_D0);
+ 		pci_restore_state(hdev->pdev);
+ 		rc = pci_enable_device(hdev->pdev);
++		if (rc < 0)
++			return rc;
+ 	} else if (value == 2) {
+ 		pci_save_state(hdev->pdev);
+ 		pci_disable_device(hdev->pdev);
+diff --git a/drivers/misc/kgdbts.c b/drivers/misc/kgdbts.c
+index 49489153cd162..3e4d894719380 100644
+--- a/drivers/misc/kgdbts.c
++++ b/drivers/misc/kgdbts.c
+@@ -1060,10 +1060,10 @@ static int kgdbts_option_setup(char *opt)
+ {
+ 	if (strlen(opt) >= MAX_CONFIG_LEN) {
+ 		printk(KERN_ERR "kgdbts: config string too long\n");
+-		return -ENOSPC;
++		return 1;
+ 	}
+ 	strcpy(config, opt);
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("kgdbts=", kgdbts_option_setup);
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 67bb6a25fd0a0..d81d75a20b8f2 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -107,6 +107,7 @@
+ #define MEI_DEV_ID_ADP_S      0x7AE8  /* Alder Lake Point S */
+ #define MEI_DEV_ID_ADP_LP     0x7A60  /* Alder Lake Point LP */
+ #define MEI_DEV_ID_ADP_P      0x51E0  /* Alder Lake Point P */
++#define MEI_DEV_ID_ADP_N      0x54E0  /* Alder Lake Point N */
+ 
+ /*
+  * MEI HW Section
+diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c
+index fee603039e872..ca3067fa6f0e0 100644
+--- a/drivers/misc/mei/interrupt.c
++++ b/drivers/misc/mei/interrupt.c
+@@ -427,31 +427,26 @@ int mei_irq_read_handler(struct mei_device *dev,
+ 	list_for_each_entry(cl, &dev->file_list, link) {
+ 		if (mei_cl_hbm_equal(cl, mei_hdr)) {
+ 			cl_dbg(dev, cl, "got a message\n");
+-			break;
++			ret = mei_cl_irq_read_msg(cl, mei_hdr, meta_hdr, cmpl_list);
++			goto reset_slots;
+ 		}
+ 	}
+ 
+ 	/* if no recipient cl was found we assume corrupted header */
+-	if (&cl->link == &dev->file_list) {
+-		/* A message for not connected fixed address clients
+-		 * should be silently discarded
+-		 * On power down client may be force cleaned,
+-		 * silently discard such messages
+-		 */
+-		if (hdr_is_fixed(mei_hdr) ||
+-		    dev->dev_state == MEI_DEV_POWER_DOWN) {
+-			mei_irq_discard_msg(dev, mei_hdr, mei_hdr->length);
+-			ret = 0;
+-			goto reset_slots;
+-		}
+-		dev_err(dev->dev, "no destination client found 0x%08X\n",
+-				dev->rd_msg_hdr[0]);
+-		ret = -EBADMSG;
+-		goto end;
++	/* A message for not connected fixed address clients
++	 * should be silently discarded
++	 * On power down client may be force cleaned,
++	 * silently discard such messages
++	 */
++	if (hdr_is_fixed(mei_hdr) ||
++	    dev->dev_state == MEI_DEV_POWER_DOWN) {
++		mei_irq_discard_msg(dev, mei_hdr, mei_hdr->length);
++		ret = 0;
++		goto reset_slots;
+ 	}
+-
+-	ret = mei_cl_irq_read_msg(cl, mei_hdr, meta_hdr, cmpl_list);
+-
++	dev_err(dev->dev, "no destination client found 0x%08X\n", dev->rd_msg_hdr[0]);
++	ret = -EBADMSG;
++	goto end;
+ 
+ reset_slots:
+ 	/* reset the number of slots and header */
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 3a45aaf002ac8..a738253dbd056 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -113,6 +113,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_S, MEI_ME_PCH15_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_LP, MEI_ME_PCH15_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_P, MEI_ME_PCH15_CFG)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_N, MEI_ME_PCH15_CFG)},
+ 
+ 	/* required last entry */
+ 	{0, }
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index 864c8c205ff78..03e2f965a96a8 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -513,6 +513,16 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
+ 
+ EXPORT_SYMBOL(mmc_alloc_host);
+ 
++static int mmc_validate_host_caps(struct mmc_host *host)
++{
++	if (host->caps & MMC_CAP_SDIO_IRQ && !host->ops->enable_sdio_irq) {
++		dev_warn(host->parent, "missing ->enable_sdio_irq() ops\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ /**
+  *	mmc_add_host - initialise host hardware
+  *	@host: mmc host
+@@ -525,8 +535,9 @@ int mmc_add_host(struct mmc_host *host)
+ {
+ 	int err;
+ 
+-	WARN_ON((host->caps & MMC_CAP_SDIO_IRQ) &&
+-		!host->ops->enable_sdio_irq);
++	err = mmc_validate_host_caps(host);
++	if (err)
++		return err;
+ 
+ 	err = device_add(&host->class_dev);
+ 	if (err)
+diff --git a/drivers/mmc/host/davinci_mmc.c b/drivers/mmc/host/davinci_mmc.c
+index 90cd179625fc2..647928ab00a30 100644
+--- a/drivers/mmc/host/davinci_mmc.c
++++ b/drivers/mmc/host/davinci_mmc.c
+@@ -1375,8 +1375,12 @@ static int davinci_mmcsd_suspend(struct device *dev)
+ static int davinci_mmcsd_resume(struct device *dev)
+ {
+ 	struct mmc_davinci_host *host = dev_get_drvdata(dev);
++	int ret;
++
++	ret = clk_enable(host->clk);
++	if (ret)
++		return ret;
+ 
+-	clk_enable(host->clk);
+ 	mmc_davinci_reset_ctrl(host, 0);
+ 
+ 	return 0;
+diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
+index 9bde0def114b5..b5684e5d79e60 100644
+--- a/drivers/mmc/host/mmci.c
++++ b/drivers/mmc/host/mmci.c
+@@ -2203,7 +2203,7 @@ static int mmci_probe(struct amba_device *dev,
+ 	return ret;
+ }
+ 
+-static int mmci_remove(struct amba_device *dev)
++static void mmci_remove(struct amba_device *dev)
+ {
+ 	struct mmc_host *mmc = amba_get_drvdata(dev);
+ 
+@@ -2231,8 +2231,6 @@ static int mmci_remove(struct amba_device *dev)
+ 		clk_disable_unprepare(host->clk);
+ 		mmc_free_host(mmc);
+ 	}
+-
+-	return 0;
+ }
+ 
+ #ifdef CONFIG_PM
+diff --git a/drivers/mtd/nand/onenand/generic.c b/drivers/mtd/nand/onenand/generic.c
+index 8b6f4da5d7201..a4b8b65fe15f5 100644
+--- a/drivers/mtd/nand/onenand/generic.c
++++ b/drivers/mtd/nand/onenand/generic.c
+@@ -53,7 +53,12 @@ static int generic_onenand_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	info->onenand.mmcontrol = pdata ? pdata->mmcontrol : NULL;
+-	info->onenand.irq = platform_get_irq(pdev, 0);
++
++	err = platform_get_irq(pdev, 0);
++	if (err < 0)
++		goto out_iounmap;
++
++	info->onenand.irq = err;
+ 
+ 	info->mtd.dev.parent = &pdev->dev;
+ 	info->mtd.priv = &info->onenand;
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index 8aab1017b4600..c048e826746a9 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -2057,13 +2057,15 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
+ 	nc->mck = of_clk_get(dev->parent->of_node, 0);
+ 	if (IS_ERR(nc->mck)) {
+ 		dev_err(dev, "Failed to retrieve MCK clk\n");
+-		return PTR_ERR(nc->mck);
++		ret = PTR_ERR(nc->mck);
++		goto out_release_dma;
+ 	}
+ 
+ 	np = of_parse_phandle(dev->parent->of_node, "atmel,smc", 0);
+ 	if (!np) {
+ 		dev_err(dev, "Missing or invalid atmel,smc property\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out_release_dma;
+ 	}
+ 
+ 	nc->smc = syscon_node_to_regmap(np);
+@@ -2071,10 +2073,16 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
+ 	if (IS_ERR(nc->smc)) {
+ 		ret = PTR_ERR(nc->smc);
+ 		dev_err(dev, "Could not get SMC regmap (err = %d)\n", ret);
+-		return ret;
++		goto out_release_dma;
+ 	}
+ 
+ 	return 0;
++
++out_release_dma:
++	if (nc->dmac)
++		dma_release_channel(nc->dmac);
++
++	return ret;
+ }
+ 
+ static int
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index cb7631145700a..92e8ca56f5665 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -646,6 +646,7 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
+ 				     const struct nand_sdr_timings *sdr)
+ {
+ 	struct gpmi_nfc_hardware_timing *hw = &this->hw;
++	struct resources *r = &this->resources;
+ 	unsigned int dll_threshold_ps = this->devdata->max_chain_delay;
+ 	unsigned int period_ps, reference_period_ps;
+ 	unsigned int data_setup_cycles, data_hold_cycles, addr_setup_cycles;
+@@ -669,6 +670,8 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
+ 		wrn_dly_sel = BV_GPMI_CTRL1_WRN_DLY_SEL_NO_DELAY;
+ 	}
+ 
++	hw->clk_rate = clk_round_rate(r->clock[0], hw->clk_rate);
++
+ 	/* SDR core timings are given in picoseconds */
+ 	period_ps = div_u64((u64)NSEC_PER_SEC * 1000, hw->clk_rate);
+ 
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index 1f0d542d59230..c41c0ff611b1b 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -297,16 +297,19 @@ static int nand_isbad_bbm(struct nand_chip *chip, loff_t ofs)
+  *
+  * Return: -EBUSY if the chip has been suspended, 0 otherwise
+  */
+-static int nand_get_device(struct nand_chip *chip)
++static void nand_get_device(struct nand_chip *chip)
+ {
+-	mutex_lock(&chip->lock);
+-	if (chip->suspended) {
++	/* Wait until the device is resumed. */
++	while (1) {
++		mutex_lock(&chip->lock);
++		if (!chip->suspended) {
++			mutex_lock(&chip->controller->lock);
++			return;
++		}
+ 		mutex_unlock(&chip->lock);
+-		return -EBUSY;
+-	}
+-	mutex_lock(&chip->controller->lock);
+ 
+-	return 0;
++		wait_event(chip->resume_wq, !chip->suspended);
++	}
+ }
+ 
+ /**
+@@ -531,9 +534,7 @@ static int nand_block_markbad_lowlevel(struct nand_chip *chip, loff_t ofs)
+ 		nand_erase_nand(chip, &einfo, 0);
+ 
+ 		/* Write bad block marker to OOB */
+-		ret = nand_get_device(chip);
+-		if (ret)
+-			return ret;
++		nand_get_device(chip);
+ 
+ 		ret = nand_markbad_bbm(chip, ofs);
+ 		nand_release_device(chip);
+@@ -3534,9 +3535,7 @@ static int nand_read_oob(struct mtd_info *mtd, loff_t from,
+ 	    ops->mode != MTD_OPS_RAW)
+ 		return -ENOTSUPP;
+ 
+-	ret = nand_get_device(chip);
+-	if (ret)
+-		return ret;
++	nand_get_device(chip);
+ 
+ 	if (!ops->datbuf)
+ 		ret = nand_do_read_oob(chip, from, ops);
+@@ -4119,13 +4118,11 @@ static int nand_write_oob(struct mtd_info *mtd, loff_t to,
+ 			  struct mtd_oob_ops *ops)
+ {
+ 	struct nand_chip *chip = mtd_to_nand(mtd);
+-	int ret;
++	int ret = 0;
+ 
+ 	ops->retlen = 0;
+ 
+-	ret = nand_get_device(chip);
+-	if (ret)
+-		return ret;
++	nand_get_device(chip);
+ 
+ 	switch (ops->mode) {
+ 	case MTD_OPS_PLACE_OOB:
+@@ -4181,9 +4178,7 @@ int nand_erase_nand(struct nand_chip *chip, struct erase_info *instr,
+ 		return -EINVAL;
+ 
+ 	/* Grab the lock and see if the device is available */
+-	ret = nand_get_device(chip);
+-	if (ret)
+-		return ret;
++	nand_get_device(chip);
+ 
+ 	/* Shift to get first page */
+ 	page = (int)(instr->addr >> chip->page_shift);
+@@ -4270,7 +4265,7 @@ static void nand_sync(struct mtd_info *mtd)
+ 	pr_debug("%s: called\n", __func__);
+ 
+ 	/* Grab the lock and see if the device is available */
+-	WARN_ON(nand_get_device(chip));
++	nand_get_device(chip);
+ 	/* Release it and go back */
+ 	nand_release_device(chip);
+ }
+@@ -4287,9 +4282,7 @@ static int nand_block_isbad(struct mtd_info *mtd, loff_t offs)
+ 	int ret;
+ 
+ 	/* Select the NAND device */
+-	ret = nand_get_device(chip);
+-	if (ret)
+-		return ret;
++	nand_get_device(chip);
+ 
+ 	nand_select_target(chip, chipnr);
+ 
+@@ -4360,6 +4353,8 @@ static void nand_resume(struct mtd_info *mtd)
+ 			__func__);
+ 	}
+ 	mutex_unlock(&chip->lock);
++
++	wake_up_all(&chip->resume_wq);
+ }
+ 
+ /**
+@@ -5068,6 +5063,7 @@ static int nand_scan_ident(struct nand_chip *chip, unsigned int maxchips,
+ 	chip->cur_cs = -1;
+ 
+ 	mutex_init(&chip->lock);
++	init_waitqueue_head(&chip->resume_wq);
+ 
+ 	/* Enforce the right timings for reset/detection */
+ 	chip->current_interface_config = nand_get_reset_interface_config();
+diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
+index e85b04e9716b2..4153e0d15c5f9 100644
+--- a/drivers/mtd/ubi/build.c
++++ b/drivers/mtd/ubi/build.c
+@@ -350,9 +350,6 @@ static ssize_t dev_attribute_show(struct device *dev,
+ 	 * we still can use 'ubi->ubi_num'.
+ 	 */
+ 	ubi = container_of(dev, struct ubi_device, dev);
+-	ubi = ubi_get_device(ubi->ubi_num);
+-	if (!ubi)
+-		return -ENODEV;
+ 
+ 	if (attr == &dev_eraseblock_size)
+ 		ret = sprintf(buf, "%d\n", ubi->leb_size);
+@@ -381,7 +378,6 @@ static ssize_t dev_attribute_show(struct device *dev,
+ 	else
+ 		ret = -EINVAL;
+ 
+-	ubi_put_device(ubi);
+ 	return ret;
+ }
+ 
+@@ -980,9 +976,6 @@ int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
+ 			goto out_detach;
+ 	}
+ 
+-	/* Make device "available" before it becomes accessible via sysfs */
+-	ubi_devices[ubi_num] = ubi;
+-
+ 	err = uif_init(ubi);
+ 	if (err)
+ 		goto out_detach;
+@@ -1027,6 +1020,7 @@ int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
+ 	wake_up_process(ubi->bgt_thread);
+ 	spin_unlock(&ubi->wl_lock);
+ 
++	ubi_devices[ubi_num] = ubi;
+ 	ubi_notify_all(ubi, UBI_VOLUME_ADDED, NULL);
+ 	return ubi_num;
+ 
+@@ -1035,7 +1029,6 @@ out_debugfs:
+ out_uif:
+ 	uif_close(ubi);
+ out_detach:
+-	ubi_devices[ubi_num] = NULL;
+ 	ubi_wl_close(ubi);
+ 	ubi_free_all_volumes(ubi);
+ 	vfree(ubi->vtbl);
+diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c
+index 022af59906aa9..6b5f1ffd961b9 100644
+--- a/drivers/mtd/ubi/fastmap.c
++++ b/drivers/mtd/ubi/fastmap.c
+@@ -468,7 +468,9 @@ static int scan_pool(struct ubi_device *ubi, struct ubi_attach_info *ai,
+ 			if (err == UBI_IO_FF_BITFLIPS)
+ 				scrub = 1;
+ 
+-			add_aeb(ai, free, pnum, ec, scrub);
++			ret = add_aeb(ai, free, pnum, ec, scrub);
++			if (ret)
++				goto out;
+ 			continue;
+ 		} else if (err == 0 || err == UBI_IO_BITFLIPS) {
+ 			dbg_bld("Found non empty PEB:%i in pool", pnum);
+@@ -638,8 +640,10 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ 		if (fm_pos >= fm_size)
+ 			goto fail_bad;
+ 
+-		add_aeb(ai, &ai->free, be32_to_cpu(fmec->pnum),
+-			be32_to_cpu(fmec->ec), 0);
++		ret = add_aeb(ai, &ai->free, be32_to_cpu(fmec->pnum),
++			      be32_to_cpu(fmec->ec), 0);
++		if (ret)
++			goto fail;
+ 	}
+ 
+ 	/* read EC values from used list */
+@@ -649,8 +653,10 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ 		if (fm_pos >= fm_size)
+ 			goto fail_bad;
+ 
+-		add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
+-			be32_to_cpu(fmec->ec), 0);
++		ret = add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
++			      be32_to_cpu(fmec->ec), 0);
++		if (ret)
++			goto fail;
+ 	}
+ 
+ 	/* read EC values from scrub list */
+@@ -660,8 +666,10 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ 		if (fm_pos >= fm_size)
+ 			goto fail_bad;
+ 
+-		add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
+-			be32_to_cpu(fmec->ec), 1);
++		ret = add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
++			      be32_to_cpu(fmec->ec), 1);
++		if (ret)
++			goto fail;
+ 	}
+ 
+ 	/* read EC values from erase list */
+@@ -671,8 +679,10 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ 		if (fm_pos >= fm_size)
+ 			goto fail_bad;
+ 
+-		add_aeb(ai, &ai->erase, be32_to_cpu(fmec->pnum),
+-			be32_to_cpu(fmec->ec), 1);
++		ret = add_aeb(ai, &ai->erase, be32_to_cpu(fmec->pnum),
++			      be32_to_cpu(fmec->ec), 1);
++		if (ret)
++			goto fail;
+ 	}
+ 
+ 	ai->mean_ec = div_u64(ai->ec_sum, ai->ec_count);
+diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
+index 139ee132bfbcf..1bc7b3a056046 100644
+--- a/drivers/mtd/ubi/vmt.c
++++ b/drivers/mtd/ubi/vmt.c
+@@ -56,16 +56,11 @@ static ssize_t vol_attribute_show(struct device *dev,
+ {
+ 	int ret;
+ 	struct ubi_volume *vol = container_of(dev, struct ubi_volume, dev);
+-	struct ubi_device *ubi;
+-
+-	ubi = ubi_get_device(vol->ubi->ubi_num);
+-	if (!ubi)
+-		return -ENODEV;
++	struct ubi_device *ubi = vol->ubi;
+ 
+ 	spin_lock(&ubi->volumes_lock);
+ 	if (!ubi->volumes[vol->vol_id]) {
+ 		spin_unlock(&ubi->volumes_lock);
+-		ubi_put_device(ubi);
+ 		return -ENODEV;
+ 	}
+ 	/* Take a reference to prevent volume removal */
+@@ -103,7 +98,6 @@ static ssize_t vol_attribute_show(struct device *dev,
+ 	vol->ref_count -= 1;
+ 	ubi_assert(vol->ref_count >= 0);
+ 	spin_unlock(&ubi->volumes_lock);
+-	ubi_put_device(ubi);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index 39b128205f255..53ef48588e59a 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -140,14 +140,14 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 	oiph = skb_network_header(skb);
+ 	skb_reset_network_header(skb);
+ 
+-	if (!IS_ENABLED(CONFIG_IPV6) || family == AF_INET)
++	if (!ipv6_mod_enabled() || family == AF_INET)
+ 		err = IP_ECN_decapsulate(oiph, skb);
+ 	else
+ 		err = IP6_ECN_decapsulate(oiph, skb);
+ 
+ 	if (unlikely(err)) {
+ 		if (log_ecn_error) {
+-			if  (!IS_ENABLED(CONFIG_IPV6) || family == AF_INET)
++			if  (!ipv6_mod_enabled() || family == AF_INET)
+ 				net_info_ratelimited("non-ECT from %pI4 "
+ 						     "with TOS=%#x\n",
+ 						     &((struct iphdr *)oiph)->saddr,
+@@ -213,11 +213,12 @@ static struct socket *bareudp_create_sock(struct net *net, __be16 port)
+ 	int err;
+ 
+ 	memset(&udp_conf, 0, sizeof(udp_conf));
+-#if IS_ENABLED(CONFIG_IPV6)
+-	udp_conf.family = AF_INET6;
+-#else
+-	udp_conf.family = AF_INET;
+-#endif
++
++	if (ipv6_mod_enabled())
++		udp_conf.family = AF_INET6;
++	else
++		udp_conf.family = AF_INET;
++
+ 	udp_conf.local_udp_port = port;
+ 	/* Open UDP socket */
+ 	err = udp_sock_create(net, &udp_conf, &sock);
+@@ -246,12 +247,6 @@ static int bareudp_socket_create(struct bareudp_dev *bareudp, __be16 port)
+ 	tunnel_cfg.encap_destroy = NULL;
+ 	setup_udp_tunnel_sock(bareudp->net, sock, &tunnel_cfg);
+ 
+-	/* As the setup_udp_tunnel_sock does not call udp_encap_enable if the
+-	 * socket type is v6 an explicit call to udp_encap_enable is needed.
+-	 */
+-	if (sock->sk->sk_family == AF_INET6)
+-		udp_encap_enable();
+-
+ 	rcu_assign_pointer(bareudp->sock, sock);
+ 	return 0;
+ }
+@@ -445,7 +440,7 @@ static netdev_tx_t bareudp_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	}
+ 
+ 	rcu_read_lock();
+-	if (IS_ENABLED(CONFIG_IPV6) && info->mode & IP_TUNNEL_INFO_IPV6)
++	if (ipv6_mod_enabled() && info->mode & IP_TUNNEL_INFO_IPV6)
+ 		err = bareudp6_xmit_skb(skb, dev, bareudp, info);
+ 	else
+ 		err = bareudp_xmit_skb(skb, dev, bareudp, info);
+@@ -475,7 +470,7 @@ static int bareudp_fill_metadata_dst(struct net_device *dev,
+ 
+ 	use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ 
+-	if (!IS_ENABLED(CONFIG_IPV6) || ip_tunnel_info_af(info) == AF_INET) {
++	if (!ipv6_mod_enabled() || ip_tunnel_info_af(info) == AF_INET) {
+ 		struct rtable *rt;
+ 		__be32 saddr;
+ 
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 19a7e4adb9338..19a19a7b7deb8 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -1491,8 +1491,6 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
+ 					 M_CAN_FIFO_DATA(i / 4),
+ 					 *(u32 *)(cf->data + i));
+ 
+-		can_put_echo_skb(skb, dev, 0);
+-
+ 		if (cdev->can.ctrlmode & CAN_CTRLMODE_FD) {
+ 			cccr = m_can_read(cdev, M_CAN_CCCR);
+ 			cccr &= ~(CCCR_CMR_MASK << CCCR_CMR_SHIFT);
+@@ -1509,6 +1507,9 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
+ 			m_can_write(cdev, M_CAN_CCCR, cccr);
+ 		}
+ 		m_can_write(cdev, M_CAN_TXBTIE, 0x1);
++
++		can_put_echo_skb(skb, dev, 0);
++
+ 		m_can_write(cdev, M_CAN_TXBAR, 0x1);
+ 		/* End of xmit function for version 3.0.x */
+ 	} else {
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index abe00a085f6fc..189d226588133 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -2578,7 +2578,7 @@ mcp251xfd_register_get_dev_id(const struct mcp251xfd_priv *priv,
+  out_kfree_buf_rx:
+ 	kfree(buf_rx);
+ 
+-	return 0;
++	return err;
+ }
+ 
+ #define MCP251XFD_QUIRK_ACTIVE(quirk) \
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index 249d2fba28c7f..6458da9c13b95 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -823,7 +823,6 @@ static netdev_tx_t ems_usb_start_xmit(struct sk_buff *skb, struct net_device *ne
+ 
+ 		usb_unanchor_urb(urb);
+ 		usb_free_coherent(dev->udev, size, buf, urb->transfer_dma);
+-		dev_kfree_skb(skb);
+ 
+ 		atomic_dec(&dev->active_tx_urbs);
+ 
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index 912160fd2ca02..21063335ab599 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -33,10 +33,6 @@
+ #define MCBA_USB_RX_BUFF_SIZE 64
+ #define MCBA_USB_TX_BUFF_SIZE (sizeof(struct mcba_usb_msg))
+ 
+-/* MCBA endpoint numbers */
+-#define MCBA_USB_EP_IN 1
+-#define MCBA_USB_EP_OUT 1
+-
+ /* Microchip command id */
+ #define MBCA_CMD_RECEIVE_MESSAGE 0xE3
+ #define MBCA_CMD_I_AM_ALIVE_FROM_CAN 0xF5
+@@ -84,6 +80,8 @@ struct mcba_priv {
+ 	atomic_t free_ctx_cnt;
+ 	void *rxbuf[MCBA_MAX_RX_URBS];
+ 	dma_addr_t rxbuf_dma[MCBA_MAX_RX_URBS];
++	int rx_pipe;
++	int tx_pipe;
+ };
+ 
+ /* CAN frame */
+@@ -272,10 +270,8 @@ static netdev_tx_t mcba_usb_xmit(struct mcba_priv *priv,
+ 
+ 	memcpy(buf, usb_msg, MCBA_USB_TX_BUFF_SIZE);
+ 
+-	usb_fill_bulk_urb(urb, priv->udev,
+-			  usb_sndbulkpipe(priv->udev, MCBA_USB_EP_OUT), buf,
+-			  MCBA_USB_TX_BUFF_SIZE, mcba_usb_write_bulk_callback,
+-			  ctx);
++	usb_fill_bulk_urb(urb, priv->udev, priv->tx_pipe, buf, MCBA_USB_TX_BUFF_SIZE,
++			  mcba_usb_write_bulk_callback, ctx);
+ 
+ 	urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+ 	usb_anchor_urb(urb, &priv->tx_submitted);
+@@ -368,7 +364,6 @@ static netdev_tx_t mcba_usb_start_xmit(struct sk_buff *skb,
+ xmit_failed:
+ 	can_free_echo_skb(priv->netdev, ctx->ndx);
+ 	mcba_usb_free_ctx(ctx);
+-	dev_kfree_skb(skb);
+ 	stats->tx_dropped++;
+ 
+ 	return NETDEV_TX_OK;
+@@ -611,7 +606,7 @@ static void mcba_usb_read_bulk_callback(struct urb *urb)
+ resubmit_urb:
+ 
+ 	usb_fill_bulk_urb(urb, priv->udev,
+-			  usb_rcvbulkpipe(priv->udev, MCBA_USB_EP_OUT),
++			  priv->rx_pipe,
+ 			  urb->transfer_buffer, MCBA_USB_RX_BUFF_SIZE,
+ 			  mcba_usb_read_bulk_callback, priv);
+ 
+@@ -656,7 +651,7 @@ static int mcba_usb_start(struct mcba_priv *priv)
+ 		urb->transfer_dma = buf_dma;
+ 
+ 		usb_fill_bulk_urb(urb, priv->udev,
+-				  usb_rcvbulkpipe(priv->udev, MCBA_USB_EP_IN),
++				  priv->rx_pipe,
+ 				  buf, MCBA_USB_RX_BUFF_SIZE,
+ 				  mcba_usb_read_bulk_callback, priv);
+ 		urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+@@ -810,6 +805,13 @@ static int mcba_usb_probe(struct usb_interface *intf,
+ 	struct mcba_priv *priv;
+ 	int err;
+ 	struct usb_device *usbdev = interface_to_usbdev(intf);
++	struct usb_endpoint_descriptor *in, *out;
++
++	err = usb_find_common_endpoints(intf->cur_altsetting, &in, &out, NULL, NULL);
++	if (err) {
++		dev_err(&intf->dev, "Can't find endpoints\n");
++		return err;
++	}
+ 
+ 	netdev = alloc_candev(sizeof(struct mcba_priv), MCBA_MAX_TX_URBS);
+ 	if (!netdev) {
+@@ -855,6 +857,9 @@ static int mcba_usb_probe(struct usb_interface *intf,
+ 		goto cleanup_free_candev;
+ 	}
+ 
++	priv->rx_pipe = usb_rcvbulkpipe(priv->udev, in->bEndpointAddress);
++	priv->tx_pipe = usb_sndbulkpipe(priv->udev, out->bEndpointAddress);
++
+ 	devm_can_led_init(netdev);
+ 
+ 	/* Start USB dev only if we have successfully registered CAN device */
+diff --git a/drivers/net/can/usb/usb_8dev.c b/drivers/net/can/usb/usb_8dev.c
+index ca7c55d6a41db..985e00aee4ee1 100644
+--- a/drivers/net/can/usb/usb_8dev.c
++++ b/drivers/net/can/usb/usb_8dev.c
+@@ -670,9 +670,20 @@ static netdev_tx_t usb_8dev_start_xmit(struct sk_buff *skb,
+ 	atomic_inc(&priv->active_tx_urbs);
+ 
+ 	err = usb_submit_urb(urb, GFP_ATOMIC);
+-	if (unlikely(err))
+-		goto failed;
+-	else if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS)
++	if (unlikely(err)) {
++		can_free_echo_skb(netdev, context->echo_index);
++
++		usb_unanchor_urb(urb);
++		usb_free_coherent(priv->udev, size, buf, urb->transfer_dma);
++
++		atomic_dec(&priv->active_tx_urbs);
++
++		if (err == -ENODEV)
++			netif_device_detach(netdev);
++		else
++			netdev_warn(netdev, "failed tx_urb %d\n", err);
++		stats->tx_dropped++;
++	} else if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS)
+ 		/* Slow down tx path */
+ 		netif_stop_queue(netdev);
+ 
+@@ -691,19 +702,6 @@ nofreecontext:
+ 
+ 	return NETDEV_TX_BUSY;
+ 
+-failed:
+-	can_free_echo_skb(netdev, context->echo_index);
+-
+-	usb_unanchor_urb(urb);
+-	usb_free_coherent(priv->udev, size, buf, urb->transfer_dma);
+-
+-	atomic_dec(&priv->active_tx_urbs);
+-
+-	if (err == -ENODEV)
+-		netif_device_detach(netdev);
+-	else
+-		netdev_warn(netdev, "failed tx_urb %d\n", err);
+-
+ nomembuf:
+ 	usb_free_urb(urb);
+ 
+diff --git a/drivers/net/can/vxcan.c b/drivers/net/can/vxcan.c
+index 7000c6cd1e48b..282c53ef76d23 100644
+--- a/drivers/net/can/vxcan.c
++++ b/drivers/net/can/vxcan.c
+@@ -148,7 +148,7 @@ static void vxcan_setup(struct net_device *dev)
+ 	dev->hard_header_len	= 0;
+ 	dev->addr_len		= 0;
+ 	dev->tx_queue_len	= 0;
+-	dev->flags		= (IFF_NOARP|IFF_ECHO);
++	dev->flags		= IFF_NOARP;
+ 	dev->netdev_ops		= &vxcan_netdev_ops;
+ 	dev->needs_free_netdev	= true;
+ 
+diff --git a/drivers/net/dsa/bcm_sf2_cfp.c b/drivers/net/dsa/bcm_sf2_cfp.c
+index d82cee5d92022..cbf44fc7d03aa 100644
+--- a/drivers/net/dsa/bcm_sf2_cfp.c
++++ b/drivers/net/dsa/bcm_sf2_cfp.c
+@@ -567,14 +567,14 @@ static void bcm_sf2_cfp_slice_ipv6(struct bcm_sf2_priv *priv,
+ static struct cfp_rule *bcm_sf2_cfp_rule_find(struct bcm_sf2_priv *priv,
+ 					      int port, u32 location)
+ {
+-	struct cfp_rule *rule = NULL;
++	struct cfp_rule *rule;
+ 
+ 	list_for_each_entry(rule, &priv->cfp.rules_list, next) {
+ 		if (rule->port == port && rule->fs.location == location)
+-			break;
++			return rule;
+ 	}
+ 
+-	return rule;
++	return NULL;
+ }
+ 
+ static int bcm_sf2_cfp_rule_cmp(struct bcm_sf2_priv *priv, int port,
+diff --git a/drivers/net/dsa/microchip/ksz8795_spi.c b/drivers/net/dsa/microchip/ksz8795_spi.c
+index 8b00f8e6c02f4..5639c5c59e255 100644
+--- a/drivers/net/dsa/microchip/ksz8795_spi.c
++++ b/drivers/net/dsa/microchip/ksz8795_spi.c
+@@ -86,12 +86,23 @@ static const struct of_device_id ksz8795_dt_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(of, ksz8795_dt_ids);
+ 
++static const struct spi_device_id ksz8795_spi_ids[] = {
++	{ "ksz8765" },
++	{ "ksz8794" },
++	{ "ksz8795" },
++	{ "ksz8863" },
++	{ "ksz8873" },
++	{ },
++};
++MODULE_DEVICE_TABLE(spi, ksz8795_spi_ids);
++
+ static struct spi_driver ksz8795_spi_driver = {
+ 	.driver = {
+ 		.name	= "ksz8795-switch",
+ 		.owner	= THIS_MODULE,
+ 		.of_match_table = of_match_ptr(ksz8795_dt_ids),
+ 	},
++	.id_table = ksz8795_spi_ids,
+ 	.probe	= ksz8795_spi_probe,
+ 	.remove	= ksz8795_spi_remove,
+ 	.shutdown = ksz8795_spi_shutdown,
+diff --git a/drivers/net/dsa/microchip/ksz9477_spi.c b/drivers/net/dsa/microchip/ksz9477_spi.c
+index 1142768969c20..9bda83d063e8e 100644
+--- a/drivers/net/dsa/microchip/ksz9477_spi.c
++++ b/drivers/net/dsa/microchip/ksz9477_spi.c
+@@ -88,12 +88,24 @@ static const struct of_device_id ksz9477_dt_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(of, ksz9477_dt_ids);
+ 
++static const struct spi_device_id ksz9477_spi_ids[] = {
++	{ "ksz9477" },
++	{ "ksz9897" },
++	{ "ksz9893" },
++	{ "ksz9563" },
++	{ "ksz8563" },
++	{ "ksz9567" },
++	{ },
++};
++MODULE_DEVICE_TABLE(spi, ksz9477_spi_ids);
++
+ static struct spi_driver ksz9477_spi_driver = {
+ 	.driver = {
+ 		.name	= "ksz9477-switch",
+ 		.owner	= THIS_MODULE,
+ 		.of_match_table = of_match_ptr(ksz9477_dt_ids),
+ 	},
++	.id_table = ksz9477_spi_ids,
+ 	.probe	= ksz9477_spi_probe,
+ 	.remove	= ksz9477_spi_remove,
+ 	.shutdown = ksz9477_spi_shutdown,
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 1992be77522ac..e79a808375fc8 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3297,6 +3297,7 @@ static const struct mv88e6xxx_ops mv88e6097_ops = {
+ 	.port_set_link = mv88e6xxx_port_set_link,
+ 	.port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
+ 	.port_tag_remap = mv88e6095_port_tag_remap,
++	.port_set_policy = mv88e6352_port_set_policy,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+ 	.port_set_ether_type = mv88e6351_port_set_ether_type,
+diff --git a/drivers/net/ethernet/8390/mcf8390.c b/drivers/net/ethernet/8390/mcf8390.c
+index 4ad8031ab6695..065fdbe66c425 100644
+--- a/drivers/net/ethernet/8390/mcf8390.c
++++ b/drivers/net/ethernet/8390/mcf8390.c
+@@ -406,12 +406,12 @@ static int mcf8390_init(struct net_device *dev)
+ static int mcf8390_probe(struct platform_device *pdev)
+ {
+ 	struct net_device *dev;
+-	struct resource *mem, *irq;
++	struct resource *mem;
+ 	resource_size_t msize;
+-	int ret;
++	int ret, irq;
+ 
+-	irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+-	if (irq == NULL) {
++	irq = platform_get_irq(pdev, 0);
++	if (irq < 0) {
+ 		dev_err(&pdev->dev, "no IRQ specified?\n");
+ 		return -ENXIO;
+ 	}
+@@ -434,7 +434,7 @@ static int mcf8390_probe(struct platform_device *pdev)
+ 	SET_NETDEV_DEV(dev, &pdev->dev);
+ 	platform_set_drvdata(pdev, dev);
+ 
+-	dev->irq = irq->start;
++	dev->irq = irq;
+ 	dev->base_addr = mem->start;
+ 
+ 	ret = mcf8390_init(dev);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index a2062144d7ca1..7dcd5613ee56f 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -76,7 +76,7 @@ static inline void bcmgenet_writel(u32 value, void __iomem *offset)
+ 	if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ 		__raw_writel(value, offset);
+ 	else
+-		writel_relaxed(value, offset);
++		writel(value, offset);
+ }
+ 
+ static inline u32 bcmgenet_readl(void __iomem *offset)
+@@ -84,7 +84,7 @@ static inline u32 bcmgenet_readl(void __iomem *offset)
+ 	if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ 		return __raw_readl(offset);
+ 	else
+-		return readl_relaxed(offset);
++		return readl(offset);
+ }
+ 
+ static inline void dmadesc_set_length_status(struct bcmgenet_priv *priv,
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+index 9c1690f64a027..cf98a00296edf 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+@@ -651,7 +651,10 @@ static int enetc_get_ts_info(struct net_device *ndev,
+ #ifdef CONFIG_FSL_ENETC_PTP_CLOCK
+ 	info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
+ 				SOF_TIMESTAMPING_RX_HARDWARE |
+-				SOF_TIMESTAMPING_RAW_HARDWARE;
++				SOF_TIMESTAMPING_RAW_HARDWARE |
++				SOF_TIMESTAMPING_TX_SOFTWARE |
++				SOF_TIMESTAMPING_RX_SOFTWARE |
++				SOF_TIMESTAMPING_SOFTWARE;
+ 
+ 	info->tx_types = (1 << HWTSTAMP_TX_OFF) |
+ 			 (1 << HWTSTAMP_TX_ON);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 7b94764b4f5d9..2070e26a3a358 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -7616,12 +7616,11 @@ int hclge_rm_uc_addr_common(struct hclge_vport *vport,
+ 	hnae3_set_bit(req.entry_type, HCLGE_MAC_VLAN_BIT0_EN_B, 0);
+ 	hclge_prepare_mac_addr(&req, addr, false);
+ 	ret = hclge_remove_mac_vlan_tbl(vport, &req);
+-	if (!ret) {
++	if (!ret || ret == -ENOENT) {
+ 		mutex_lock(&hdev->vport_lock);
+ 		hclge_update_umv_space(vport, true);
+ 		mutex_unlock(&hdev->vport_lock);
+-	} else if (ret == -ENOENT) {
+-		ret = 0;
++		return 0;
+ 	}
+ 
+ 	return ret;
+@@ -9198,11 +9197,11 @@ int hclge_set_vlan_filter(struct hnae3_handle *handle, __be16 proto,
+ 	}
+ 
+ 	if (!ret) {
+-		if (is_kill)
+-			hclge_rm_vport_vlan_table(vport, vlan_id, false);
+-		else
++		if (!is_kill)
+ 			hclge_add_vport_vlan_table(vport, vlan_id,
+ 						   writen_to_tbl);
++		else if (is_kill && vlan_id != 0)
++			hclge_rm_vport_vlan_table(vport, vlan_id, false);
+ 	} else if (is_kill) {
+ 		/* when remove hw vlan filter failed, record the vlan id,
+ 		 * and try to remove it from hw later, to be consistence
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+index 86c79f71c685a..75e4a698c3db2 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+@@ -247,21 +247,25 @@ no_buffers:
+ static struct sk_buff *i40e_construct_skb_zc(struct i40e_ring *rx_ring,
+ 					     struct xdp_buff *xdp)
+ {
++	unsigned int totalsize = xdp->data_end - xdp->data_meta;
+ 	unsigned int metasize = xdp->data - xdp->data_meta;
+-	unsigned int datasize = xdp->data_end - xdp->data;
+ 	struct sk_buff *skb;
+ 
++	net_prefetch(xdp->data_meta);
++
+ 	/* allocate a skb to store the frags */
+-	skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
+-			       xdp->data_end - xdp->data_hard_start,
++	skb = __napi_alloc_skb(&rx_ring->q_vector->napi, totalsize,
+ 			       GFP_ATOMIC | __GFP_NOWARN);
+ 	if (unlikely(!skb))
+ 		return NULL;
+ 
+-	skb_reserve(skb, xdp->data - xdp->data_hard_start);
+-	memcpy(__skb_put(skb, datasize), xdp->data, datasize);
+-	if (metasize)
++	memcpy(__skb_put(skb, totalsize), xdp->data_meta,
++	       ALIGN(totalsize, sizeof(long)));
++
++	if (metasize) {
+ 		skb_metadata_set(skb, metasize);
++		__skb_pull(skb, metasize);
++	}
+ 
+ 	xsk_buff_free(xdp);
+ 	return skb;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+index d355676f6c160..e14869a2e24a5 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+@@ -311,10 +311,10 @@ int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
+ 
+ static void ionic_dev_cmd_clean(struct ionic *ionic)
+ {
+-	union __iomem ionic_dev_cmd_regs *regs = ionic->idev.dev_cmd_regs;
++	struct ionic_dev *idev = &ionic->idev;
+ 
+-	iowrite32(0, &regs->doorbell);
+-	memset_io(&regs->cmd, 0, sizeof(regs->cmd));
++	iowrite32(0, &idev->dev_cmd_regs->doorbell);
++	memset_io(&idev->dev_cmd_regs->cmd, 0, sizeof(idev->dev_cmd_regs->cmd));
+ }
+ 
+ int ionic_dev_cmd_wait(struct ionic *ionic, unsigned long max_seconds)
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+index ef0ad4cf82e60..3541bc95493f0 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+@@ -2982,12 +2982,16 @@ static int qed_iov_pre_update_vport(struct qed_hwfn *hwfn,
+ 	u8 mask = QED_ACCEPT_UCAST_UNMATCHED | QED_ACCEPT_MCAST_UNMATCHED;
+ 	struct qed_filter_accept_flags *flags = &params->accept_flags;
+ 	struct qed_public_vf_info *vf_info;
++	u16 tlv_mask;
++
++	tlv_mask = BIT(QED_IOV_VP_UPDATE_ACCEPT_PARAM) |
++		   BIT(QED_IOV_VP_UPDATE_ACCEPT_ANY_VLAN);
+ 
+ 	/* Untrusted VFs can't even be trusted to know that fact.
+ 	 * Simply indicate everything is configured fine, and trace
+ 	 * configuration 'behind their back'.
+ 	 */
+-	if (!(*tlvs & BIT(QED_IOV_VP_UPDATE_ACCEPT_PARAM)))
++	if (!(*tlvs & tlv_mask))
+ 		return 0;
+ 
+ 	vf_info = qed_iov_get_public_vf_info(hwfn, vfid, true);
+@@ -3004,6 +3008,13 @@ static int qed_iov_pre_update_vport(struct qed_hwfn *hwfn,
+ 			flags->tx_accept_filter &= ~mask;
+ 	}
+ 
++	if (params->update_accept_any_vlan_flg) {
++		vf_info->accept_any_vlan = params->accept_any_vlan;
++
++		if (vf_info->forced_vlan && !vf_info->is_trusted_configured)
++			params->accept_any_vlan = false;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -4691,6 +4702,7 @@ static int qed_get_vf_config(struct qed_dev *cdev,
+ 	tx_rate = vf_info->tx_rate;
+ 	ivi->max_tx_rate = tx_rate ? tx_rate : link.speed;
+ 	ivi->min_tx_rate = qed_iov_get_vf_min_rate(hwfn, vf_id);
++	ivi->trusted = vf_info->is_trusted_request;
+ 
+ 	return 0;
+ }
+@@ -5120,6 +5132,12 @@ static void qed_iov_handle_trust_change(struct qed_hwfn *hwfn)
+ 
+ 		params.update_ctl_frame_check = 1;
+ 		params.mac_chk_en = !vf_info->is_trusted_configured;
++		params.update_accept_any_vlan_flg = 0;
++
++		if (vf_info->accept_any_vlan && vf_info->forced_vlan) {
++			params.update_accept_any_vlan_flg = 1;
++			params.accept_any_vlan = vf_info->accept_any_vlan;
++		}
+ 
+ 		if (vf_info->rx_accept_mode & mask) {
+ 			flags->update_rx_mode_config = 1;
+@@ -5135,13 +5153,20 @@ static void qed_iov_handle_trust_change(struct qed_hwfn *hwfn)
+ 		if (!vf_info->is_trusted_configured) {
+ 			flags->rx_accept_filter &= ~mask;
+ 			flags->tx_accept_filter &= ~mask;
++			params.accept_any_vlan = false;
+ 		}
+ 
+ 		if (flags->update_rx_mode_config ||
+ 		    flags->update_tx_mode_config ||
+-		    params.update_ctl_frame_check)
++		    params.update_ctl_frame_check ||
++		    params.update_accept_any_vlan_flg) {
++			DP_VERBOSE(hwfn, QED_MSG_IOV,
++				   "vport update config for %s VF[abs 0x%x rel 0x%x]\n",
++				   vf_info->is_trusted_configured ? "trusted" : "untrusted",
++				   vf->abs_vf_id, vf->relative_vf_id);
+ 			qed_sp_vport_update(hwfn, &params,
+ 					    QED_SPQ_MODE_EBLOCK, NULL);
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.h b/drivers/net/ethernet/qlogic/qed/qed_sriov.h
+index eacd6457f195c..7ff23ef8ccc17 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.h
+@@ -62,6 +62,7 @@ struct qed_public_vf_info {
+ 	bool is_trusted_request;
+ 	u8 rx_accept_mode;
+ 	u8 tx_accept_mode;
++	bool accept_any_vlan;
+ };
+ 
+ struct qed_iov_vf_init_params {
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
+index 5d79ee4370bcd..7519773eaca6e 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
+@@ -51,7 +51,7 @@ static inline int qlcnic_dcb_get_hw_capability(struct qlcnic_dcb *dcb)
+ 	if (dcb && dcb->ops->get_hw_capability)
+ 		return dcb->ops->get_hw_capability(dcb);
+ 
+-	return 0;
++	return -EOPNOTSUPP;
+ }
+ 
+ static inline void qlcnic_dcb_free(struct qlcnic_dcb *dcb)
+@@ -65,7 +65,7 @@ static inline int qlcnic_dcb_attach(struct qlcnic_dcb *dcb)
+ 	if (dcb && dcb->ops->attach)
+ 		return dcb->ops->attach(dcb);
+ 
+-	return 0;
++	return -EOPNOTSUPP;
+ }
+ 
+ static inline int
+@@ -74,7 +74,7 @@ qlcnic_dcb_query_hw_capability(struct qlcnic_dcb *dcb, char *buf)
+ 	if (dcb && dcb->ops->query_hw_capability)
+ 		return dcb->ops->query_hw_capability(dcb, buf);
+ 
+-	return 0;
++	return -EOPNOTSUPP;
+ }
+ 
+ static inline void qlcnic_dcb_get_info(struct qlcnic_dcb *dcb)
+@@ -89,7 +89,7 @@ qlcnic_dcb_query_cee_param(struct qlcnic_dcb *dcb, char *buf, u8 type)
+ 	if (dcb && dcb->ops->query_cee_param)
+ 		return dcb->ops->query_cee_param(dcb, buf, type);
+ 
+-	return 0;
++	return -EOPNOTSUPP;
+ }
+ 
+ static inline int qlcnic_dcb_get_cee_cfg(struct qlcnic_dcb *dcb)
+@@ -97,7 +97,7 @@ static inline int qlcnic_dcb_get_cee_cfg(struct qlcnic_dcb *dcb)
+ 	if (dcb && dcb->ops->get_cee_cfg)
+ 		return dcb->ops->get_cee_cfg(dcb);
+ 
+-	return 0;
++	return -EOPNOTSUPP;
+ }
+ 
+ static inline void qlcnic_dcb_aen_handler(struct qlcnic_dcb *dcb, void *msg)
+diff --git a/drivers/net/ethernet/sun/sunhme.c b/drivers/net/ethernet/sun/sunhme.c
+index 54b53dbdb33cd..69fc47089e625 100644
+--- a/drivers/net/ethernet/sun/sunhme.c
++++ b/drivers/net/ethernet/sun/sunhme.c
+@@ -3163,7 +3163,7 @@ static int happy_meal_pci_probe(struct pci_dev *pdev,
+ 	if (err) {
+ 		printk(KERN_ERR "happymeal(PCI): Cannot register net device, "
+ 		       "aborting.\n");
+-		goto err_out_iounmap;
++		goto err_out_free_coherent;
+ 	}
+ 
+ 	pci_set_drvdata(pdev, hp);
+@@ -3196,6 +3196,10 @@ static int happy_meal_pci_probe(struct pci_dev *pdev,
+ 
+ 	return 0;
+ 
++err_out_free_coherent:
++	dma_free_coherent(hp->dma_dev, PAGE_SIZE,
++			  hp->happy_block, hp->hblock_dvma);
++
+ err_out_iounmap:
+ 	iounmap(hp->gregs);
+ 
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 0baf85122f5ac..bbdcba88c021e 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -857,46 +857,53 @@ static void axienet_recv(struct net_device *ndev)
+ 	while ((cur_p->status & XAXIDMA_BD_STS_COMPLETE_MASK)) {
+ 		dma_addr_t phys;
+ 
+-		tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
+-
+ 		/* Ensure we see complete descriptor update */
+ 		dma_rmb();
+-		phys = desc_get_phys_addr(lp, cur_p);
+-		dma_unmap_single(ndev->dev.parent, phys, lp->max_frm_size,
+-				 DMA_FROM_DEVICE);
+ 
+ 		skb = cur_p->skb;
+ 		cur_p->skb = NULL;
+-		length = cur_p->app4 & 0x0000FFFF;
+-
+-		skb_put(skb, length);
+-		skb->protocol = eth_type_trans(skb, ndev);
+-		/*skb_checksum_none_assert(skb);*/
+-		skb->ip_summed = CHECKSUM_NONE;
+-
+-		/* if we're doing Rx csum offload, set it up */
+-		if (lp->features & XAE_FEATURE_FULL_RX_CSUM) {
+-			csumstatus = (cur_p->app2 &
+-				      XAE_FULL_CSUM_STATUS_MASK) >> 3;
+-			if ((csumstatus == XAE_IP_TCP_CSUM_VALIDATED) ||
+-			    (csumstatus == XAE_IP_UDP_CSUM_VALIDATED)) {
+-				skb->ip_summed = CHECKSUM_UNNECESSARY;
++
++		/* skb could be NULL if a previous pass already received the
++		 * packet for this slot in the ring, but failed to refill it
++		 * with a newly allocated buffer. In this case, don't try to
++		 * receive it again.
++		 */
++		if (likely(skb)) {
++			length = cur_p->app4 & 0x0000FFFF;
++
++			phys = desc_get_phys_addr(lp, cur_p);
++			dma_unmap_single(ndev->dev.parent, phys, lp->max_frm_size,
++					 DMA_FROM_DEVICE);
++
++			skb_put(skb, length);
++			skb->protocol = eth_type_trans(skb, ndev);
++			/*skb_checksum_none_assert(skb);*/
++			skb->ip_summed = CHECKSUM_NONE;
++
++			/* if we're doing Rx csum offload, set it up */
++			if (lp->features & XAE_FEATURE_FULL_RX_CSUM) {
++				csumstatus = (cur_p->app2 &
++					      XAE_FULL_CSUM_STATUS_MASK) >> 3;
++				if (csumstatus == XAE_IP_TCP_CSUM_VALIDATED ||
++				    csumstatus == XAE_IP_UDP_CSUM_VALIDATED) {
++					skb->ip_summed = CHECKSUM_UNNECESSARY;
++				}
++			} else if ((lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) != 0 &&
++				   skb->protocol == htons(ETH_P_IP) &&
++				   skb->len > 64) {
++				skb->csum = be32_to_cpu(cur_p->app3 & 0xFFFF);
++				skb->ip_summed = CHECKSUM_COMPLETE;
+ 			}
+-		} else if ((lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) != 0 &&
+-			   skb->protocol == htons(ETH_P_IP) &&
+-			   skb->len > 64) {
+-			skb->csum = be32_to_cpu(cur_p->app3 & 0xFFFF);
+-			skb->ip_summed = CHECKSUM_COMPLETE;
+-		}
+ 
+-		netif_rx(skb);
++			netif_rx(skb);
+ 
+-		size += length;
+-		packets++;
++			size += length;
++			packets++;
++		}
+ 
+ 		new_skb = netdev_alloc_skb_ip_align(ndev, lp->max_frm_size);
+ 		if (!new_skb)
+-			return;
++			break;
+ 
+ 		phys = dma_map_single(ndev->dev.parent, new_skb->data,
+ 				      lp->max_frm_size,
+@@ -905,7 +912,7 @@ static void axienet_recv(struct net_device *ndev)
+ 			if (net_ratelimit())
+ 				netdev_err(ndev, "RX DMA mapping error\n");
+ 			dev_kfree_skb(new_skb);
+-			return;
++			break;
+ 		}
+ 		desc_set_phys_addr(lp, phys, cur_p);
+ 
+@@ -913,6 +920,11 @@ static void axienet_recv(struct net_device *ndev)
+ 		cur_p->status = 0;
+ 		cur_p->skb = new_skb;
+ 
++		/* Only update tail_p to mark this slot as usable after it has
++		 * been successfully refilled.
++		 */
++		tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
++
+ 		if (++lp->rx_bd_ci >= lp->rx_bd_num)
+ 			lp->rx_bd_ci = 0;
+ 		cur_p = &lp->rx_bd_v[lp->rx_bd_ci];
+diff --git a/drivers/net/hamradio/6pack.c b/drivers/net/hamradio/6pack.c
+index bd0beb16d68a9..02d6f3ad9aca8 100644
+--- a/drivers/net/hamradio/6pack.c
++++ b/drivers/net/hamradio/6pack.c
+@@ -674,14 +674,14 @@ static void sixpack_close(struct tty_struct *tty)
+ 	 */
+ 	netif_stop_queue(sp->dev);
+ 
++	unregister_netdev(sp->dev);
++
+ 	del_timer_sync(&sp->tx_t);
+ 	del_timer_sync(&sp->resync_t);
+ 
+ 	/* Free all 6pack frame buffers. */
+ 	kfree(sp->rbuff);
+ 	kfree(sp->xbuff);
+-
+-	unregister_netdev(sp->dev);
+ }
+ 
+ /* Perform I/O control on an active 6pack channel. */
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index 644861366d544..0cde17bd743f3 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -11,6 +11,7 @@
+  */
+ 
+ #include "bcm-phy-lib.h"
++#include <linux/delay.h>
+ #include <linux/module.h>
+ #include <linux/phy.h>
+ #include <linux/brcmphy.h>
+@@ -622,6 +623,26 @@ static int brcm_fet_config_init(struct phy_device *phydev)
+ 	if (err < 0)
+ 		return err;
+ 
++	/* The datasheet indicates the PHY needs up to 1us to complete a reset,
++	 * build some slack here.
++	 */
++	usleep_range(1000, 2000);
++
++	/* The PHY requires 65 MDC clock cycles to complete a write operation
++	 * and turnaround the line properly.
++	 *
++	 * We ignore -EIO here as the MDIO controller (e.g.: mdio-bcm-unimac)
++	 * may flag the lack of turn-around as a read failure. This is
++	 * particularly true with this combination since the MDIO controller
++	 * only used 64 MDC cycles. This is not a critical failure in this
++	 * specific case and it has no functional impact otherwise, so we let
++	 * that one go through. If there is a genuine bus error, the next read
++	 * of MII_BRCM_FET_INTREG will error out.
++	 */
++	err = phy_read(phydev, MII_BMCR);
++	if (err < 0 && err != -EIO)
++		return err;
++
+ 	reg = phy_read(phydev, MII_BRCM_FET_INTREG);
+ 	if (reg < 0)
+ 		return reg;
+diff --git a/drivers/net/wireguard/queueing.c b/drivers/net/wireguard/queueing.c
+index 1de413b19e342..8084e7408c0ae 100644
+--- a/drivers/net/wireguard/queueing.c
++++ b/drivers/net/wireguard/queueing.c
+@@ -4,6 +4,7 @@
+  */
+ 
+ #include "queueing.h"
++#include <linux/skb_array.h>
+ 
+ struct multicore_worker __percpu *
+ wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
+@@ -42,7 +43,7 @@ void wg_packet_queue_free(struct crypt_queue *queue, bool purge)
+ {
+ 	free_percpu(queue->worker);
+ 	WARN_ON(!purge && !__ptr_ring_empty(&queue->ring));
+-	ptr_ring_cleanup(&queue->ring, purge ? (void(*)(void*))kfree_skb : NULL);
++	ptr_ring_cleanup(&queue->ring, purge ? __skb_array_destroy_skb : NULL);
+ }
+ 
+ #define NEXT(skb) ((skb)->prev)
+diff --git a/drivers/net/wireguard/socket.c b/drivers/net/wireguard/socket.c
+index 52b9bc83abcbc..473221aa22368 100644
+--- a/drivers/net/wireguard/socket.c
++++ b/drivers/net/wireguard/socket.c
+@@ -160,6 +160,7 @@ out:
+ 	rcu_read_unlock_bh();
+ 	return ret;
+ #else
++	kfree_skb(skb);
+ 	return -EAFNOSUPPORT;
+ #endif
+ }
+@@ -241,7 +242,7 @@ int wg_socket_endpoint_from_skb(struct endpoint *endpoint,
+ 		endpoint->addr4.sin_addr.s_addr = ip_hdr(skb)->saddr;
+ 		endpoint->src4.s_addr = ip_hdr(skb)->daddr;
+ 		endpoint->src_if4 = skb->skb_iif;
+-	} else if (skb->protocol == htons(ETH_P_IPV6)) {
++	} else if (IS_ENABLED(CONFIG_IPV6) && skb->protocol == htons(ETH_P_IPV6)) {
+ 		endpoint->addr6.sin6_family = AF_INET6;
+ 		endpoint->addr6.sin6_port = udp_hdr(skb)->source;
+ 		endpoint->addr6.sin6_addr = ipv6_hdr(skb)->saddr;
+@@ -284,7 +285,7 @@ void wg_socket_set_peer_endpoint(struct wg_peer *peer,
+ 		peer->endpoint.addr4 = endpoint->addr4;
+ 		peer->endpoint.src4 = endpoint->src4;
+ 		peer->endpoint.src_if4 = endpoint->src_if4;
+-	} else if (endpoint->addr.sa_family == AF_INET6) {
++	} else if (IS_ENABLED(CONFIG_IPV6) && endpoint->addr.sa_family == AF_INET6) {
+ 		peer->endpoint.addr6 = endpoint->addr6;
+ 		peer->endpoint.src6 = endpoint->src6;
+ 	} else {
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index daae470ecf5aa..e5a296039f714 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -1477,11 +1477,11 @@ static int ath10k_setup_msa_resources(struct ath10k *ar, u32 msa_size)
+ 	node = of_parse_phandle(dev->of_node, "memory-region", 0);
+ 	if (node) {
+ 		ret = of_address_to_resource(node, 0, &r);
++		of_node_put(node);
+ 		if (ret) {
+ 			dev_err(dev, "failed to resolve msa fixed region\n");
+ 			return ret;
+ 		}
+-		of_node_put(node);
+ 
+ 		ar->msa.paddr = r.start;
+ 		ar->msa.mem_size = resource_size(&r);
+diff --git a/drivers/net/wireless/ath/ath10k/wow.c b/drivers/net/wireless/ath/ath10k/wow.c
+index 7d65c115669fe..20b9aa8ddf7d5 100644
+--- a/drivers/net/wireless/ath/ath10k/wow.c
++++ b/drivers/net/wireless/ath/ath10k/wow.c
+@@ -337,14 +337,15 @@ static int ath10k_vif_wow_set_wakeups(struct ath10k_vif *arvif,
+ 			if (patterns[i].mask[j / 8] & BIT(j % 8))
+ 				bitmask[j] = 0xff;
+ 		old_pattern.mask = bitmask;
+-		new_pattern = old_pattern;
+ 
+ 		if (ar->wmi.rx_decap_mode == ATH10K_HW_TXRX_NATIVE_WIFI) {
+-			if (patterns[i].pkt_offset < ETH_HLEN)
++			if (patterns[i].pkt_offset < ETH_HLEN) {
+ 				ath10k_wow_convert_8023_to_80211(&new_pattern,
+ 								 &old_pattern);
+-			else
++			} else {
++				new_pattern = old_pattern;
+ 				new_pattern.pkt_offset += WOW_HDR_LEN - ETH_HLEN;
++			}
+ 		}
+ 
+ 		if (WARN_ON(new_pattern.pattern_len > WOW_MAX_PATTERN_SIZE))
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index 510e61e97dbcb..994ec48b2f669 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -30,6 +30,7 @@ static int htc_issue_send(struct htc_target *target, struct sk_buff* skb,
+ 	hdr->endpoint_id = epid;
+ 	hdr->flags = flags;
+ 	hdr->payload_len = cpu_to_be16(len);
++	memset(hdr->control, 0, sizeof(hdr->control));
+ 
+ 	status = target->hif->send(target->hif_dev, endpoint->ul_pipeid, skb);
+ 
+@@ -272,6 +273,10 @@ int htc_connect_service(struct htc_target *target,
+ 	conn_msg->dl_pipeid = endpoint->dl_pipeid;
+ 	conn_msg->ul_pipeid = endpoint->ul_pipeid;
+ 
++	/* To prevent infoleak */
++	conn_msg->svc_meta_len = 0;
++	conn_msg->pad = 0;
++
+ 	ret = htc_issue_send(target, skb, skb->len, 0, ENDPOINT0);
+ 	if (ret)
+ 		goto err;
+diff --git a/drivers/net/wireless/ath/carl9170/main.c b/drivers/net/wireless/ath/carl9170/main.c
+index dbef9d8fc893b..b903b856bcf7b 100644
+--- a/drivers/net/wireless/ath/carl9170/main.c
++++ b/drivers/net/wireless/ath/carl9170/main.c
+@@ -1916,7 +1916,7 @@ static int carl9170_parse_eeprom(struct ar9170 *ar)
+ 		WARN_ON(!(tx_streams >= 1 && tx_streams <=
+ 			IEEE80211_HT_MCS_TX_MAX_STREAMS));
+ 
+-		tx_params = (tx_streams - 1) <<
++		tx_params |= (tx_streams - 1) <<
+ 			    IEEE80211_HT_MCS_TX_MAX_STREAMS_SHIFT;
+ 
+ 		carl9170_band_2GHz.ht_cap.mcs.tx_params |= tx_params;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+index d821a4758f8cf..a2b8d9171af2a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+@@ -207,6 +207,8 @@ static int brcmf_init_nvram_parser(struct nvram_parser *nvp,
+ 		size = BRCMF_FW_MAX_NVRAM_SIZE;
+ 	else
+ 		size = data_len;
++	/* Add space for properties we may add */
++	size += strlen(BRCMF_FW_DEFAULT_BOARDREV) + 1;
+ 	/* Alloc for extra 0 byte + roundup by 4 + length field */
+ 	size += 1 + 3 + sizeof(u32);
+ 	nvp->nvram = kzalloc(size, GFP_KERNEL);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index 1f12dfb33938a..61febc9bfa14a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -12,6 +12,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/bcma/bcma.h>
+ #include <linux/sched.h>
++#include <linux/io.h>
+ #include <asm/unaligned.h>
+ 
+ #include <soc.h>
+@@ -446,47 +447,6 @@ brcmf_pcie_write_ram32(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
+ }
+ 
+ 
+-static void
+-brcmf_pcie_copy_mem_todev(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
+-			  void *srcaddr, u32 len)
+-{
+-	void __iomem *address = devinfo->tcm + mem_offset;
+-	__le32 *src32;
+-	__le16 *src16;
+-	u8 *src8;
+-
+-	if (((ulong)address & 4) || ((ulong)srcaddr & 4) || (len & 4)) {
+-		if (((ulong)address & 2) || ((ulong)srcaddr & 2) || (len & 2)) {
+-			src8 = (u8 *)srcaddr;
+-			while (len) {
+-				iowrite8(*src8, address);
+-				address++;
+-				src8++;
+-				len--;
+-			}
+-		} else {
+-			len = len / 2;
+-			src16 = (__le16 *)srcaddr;
+-			while (len) {
+-				iowrite16(le16_to_cpu(*src16), address);
+-				address += 2;
+-				src16++;
+-				len--;
+-			}
+-		}
+-	} else {
+-		len = len / 4;
+-		src32 = (__le32 *)srcaddr;
+-		while (len) {
+-			iowrite32(le32_to_cpu(*src32), address);
+-			address += 4;
+-			src32++;
+-			len--;
+-		}
+-	}
+-}
+-
+-
+ static void
+ brcmf_pcie_copy_dev_tomem(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
+ 			  void *dstaddr, u32 len)
+@@ -1346,6 +1306,18 @@ static void brcmf_pcie_down(struct device *dev)
+ {
+ }
+ 
++static int brcmf_pcie_preinit(struct device *dev)
++{
++	struct brcmf_bus *bus_if = dev_get_drvdata(dev);
++	struct brcmf_pciedev *buspub = bus_if->bus_priv.pcie;
++
++	brcmf_dbg(PCIE, "Enter\n");
++
++	brcmf_pcie_intr_enable(buspub->devinfo);
++	brcmf_pcie_hostready(buspub->devinfo);
++
++	return 0;
++}
+ 
+ static int brcmf_pcie_tx(struct device *dev, struct sk_buff *skb)
+ {
+@@ -1454,6 +1426,7 @@ static int brcmf_pcie_reset(struct device *dev)
+ }
+ 
+ static const struct brcmf_bus_ops brcmf_pcie_bus_ops = {
++	.preinit = brcmf_pcie_preinit,
+ 	.txdata = brcmf_pcie_tx,
+ 	.stop = brcmf_pcie_down,
+ 	.txctl = brcmf_pcie_tx_ctlpkt,
+@@ -1561,8 +1534,8 @@ static int brcmf_pcie_download_fw_nvram(struct brcmf_pciedev_info *devinfo,
+ 		return err;
+ 
+ 	brcmf_dbg(PCIE, "Download FW %s\n", devinfo->fw_name);
+-	brcmf_pcie_copy_mem_todev(devinfo, devinfo->ci->rambase,
+-				  (void *)fw->data, fw->size);
++	memcpy_toio(devinfo->tcm + devinfo->ci->rambase,
++		    (void *)fw->data, fw->size);
+ 
+ 	resetintr = get_unaligned_le32(fw->data);
+ 	release_firmware(fw);
+@@ -1576,7 +1549,7 @@ static int brcmf_pcie_download_fw_nvram(struct brcmf_pciedev_info *devinfo,
+ 		brcmf_dbg(PCIE, "Download NVRAM %s\n", devinfo->nvram_name);
+ 		address = devinfo->ci->rambase + devinfo->ci->ramsize -
+ 			  nvram_len;
+-		brcmf_pcie_copy_mem_todev(devinfo, address, nvram, nvram_len);
++		memcpy_toio(devinfo->tcm + address, nvram, nvram_len);
+ 		brcmf_fw_nvram_free(nvram);
+ 	} else {
+ 		brcmf_dbg(PCIE, "No matching NVRAM file found %s\n",
+@@ -1775,6 +1748,8 @@ static void brcmf_pcie_setup(struct device *dev, int ret,
+ 	ret = brcmf_chip_get_raminfo(devinfo->ci);
+ 	if (ret) {
+ 		brcmf_err(bus, "Failed to get RAM info\n");
++		release_firmware(fw);
++		brcmf_fw_nvram_free(nvram);
+ 		goto fail;
+ 	}
+ 
+@@ -1824,9 +1799,6 @@ static void brcmf_pcie_setup(struct device *dev, int ret,
+ 
+ 	init_waitqueue_head(&devinfo->mbdata_resp_wait);
+ 
+-	brcmf_pcie_intr_enable(devinfo);
+-	brcmf_pcie_hostready(devinfo);
+-
+ 	ret = brcmf_attach(&devinfo->pdev->dev);
+ 	if (ret)
+ 		goto fail;
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c
+index 423d3c396b2d3..1e21cdbb7313b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c
+@@ -304,7 +304,7 @@ static int iwlagn_mac_start(struct ieee80211_hw *hw)
+ 
+ 	priv->is_open = 1;
+ 	IWL_DEBUG_MAC80211(priv, "leave\n");
+-	return 0;
++	return ret;
+ }
+ 
+ static void iwlagn_mac_stop(struct ieee80211_hw *hw)
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 6348dfa61724a..54b28f0932e25 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -1495,8 +1495,10 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
+ 	while (!sband && i < NUM_NL80211_BANDS)
+ 		sband = mvm->hw->wiphy->bands[i++];
+ 
+-	if (WARN_ON_ONCE(!sband))
++	if (WARN_ON_ONCE(!sband)) {
++		ret = -ENODEV;
+ 		goto error;
++	}
+ 
+ 	chan = &sband->channels[0];
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/main.c b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+index c9226dceb510c..bdff89cc3105e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+@@ -618,6 +618,9 @@ mt7603_sta_rate_tbl_update(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 	struct ieee80211_sta_rates *sta_rates = rcu_dereference(sta->rates);
+ 	int i;
+ 
++	if (!sta_rates)
++		return;
++
+ 	spin_lock_bh(&dev->mt76.lock);
+ 	for (i = 0; i < ARRAY_SIZE(msta->rates); i++) {
+ 		msta->rates[i].idx = sta_rates->rate[i].idx;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index 88cdc2badeae7..defa207f53d6f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -673,6 +673,9 @@ static void mt7615_sta_rate_tbl_update(struct ieee80211_hw *hw,
+ 	struct ieee80211_sta_rates *sta_rates = rcu_dereference(sta->rates);
+ 	int i;
+ 
++	if (!sta_rates)
++		return;
++
+ 	spin_lock_bh(&dev->mt76.lock);
+ 	for (i = 0; i < ARRAY_SIZE(msta->rates); i++) {
+ 		msta->rates[i].idx = sta_rates->rate[i].idx;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 9a7f317a098fc..41054ee43dbfa 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -1259,8 +1259,11 @@ mt7915_mcu_wtbl_generic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ 	generic = (struct wtbl_generic *)tlv;
+ 
+ 	if (sta) {
++		if (vif->type == NL80211_IFTYPE_STATION)
++			generic->partial_aid = cpu_to_le16(vif->bss_conf.aid);
++		else
++			generic->partial_aid = cpu_to_le16(sta->aid);
+ 		memcpy(generic->peer_addr, sta->addr, ETH_ALEN);
+-		generic->partial_aid = cpu_to_le16(sta->aid);
+ 		generic->muar_idx = mvif->omac_idx;
+ 		generic->qos = sta->wme;
+ 	} else {
+@@ -1314,12 +1317,15 @@ mt7915_mcu_sta_basic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ 	case NL80211_IFTYPE_MESH_POINT:
+ 	case NL80211_IFTYPE_AP:
+ 		basic->conn_type = cpu_to_le32(CONNECTION_INFRA_STA);
++		basic->aid = cpu_to_le16(sta->aid);
+ 		break;
+ 	case NL80211_IFTYPE_STATION:
+ 		basic->conn_type = cpu_to_le32(CONNECTION_INFRA_AP);
++		basic->aid = cpu_to_le16(vif->bss_conf.aid);
+ 		break;
+ 	case NL80211_IFTYPE_ADHOC:
+ 		basic->conn_type = cpu_to_le32(CONNECTION_IBSS_ADHOC);
++		basic->aid = cpu_to_le16(sta->aid);
+ 		break;
+ 	default:
+ 		WARN_ON(1);
+@@ -1327,7 +1333,6 @@ mt7915_mcu_sta_basic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ 	}
+ 
+ 	memcpy(basic->peer_addr, sta->addr, ETH_ALEN);
+-	basic->aid = cpu_to_le16(sta->aid);
+ 	basic->qos = sta->wme;
+ }
+ 
+diff --git a/drivers/net/wireless/ray_cs.c b/drivers/net/wireless/ray_cs.c
+index bf3fbd14eda3c..091eea0d958d1 100644
+--- a/drivers/net/wireless/ray_cs.c
++++ b/drivers/net/wireless/ray_cs.c
+@@ -382,6 +382,8 @@ static int ray_config(struct pcmcia_device *link)
+ 		goto failed;
+ 	local->sram = ioremap(link->resource[2]->start,
+ 			resource_size(link->resource[2]));
++	if (!local->sram)
++		goto failed;
+ 
+ /*** Set up 16k window for shared memory (receive buffer) ***************/
+ 	link->resource[3]->flags |=
+@@ -396,6 +398,8 @@ static int ray_config(struct pcmcia_device *link)
+ 		goto failed;
+ 	local->rmem = ioremap(link->resource[3]->start,
+ 			resource_size(link->resource[3]));
++	if (!local->rmem)
++		goto failed;
+ 
+ /*** Set up window for attribute memory ***********************************/
+ 	link->resource[4]->flags |=
+@@ -410,6 +414,8 @@ static int ray_config(struct pcmcia_device *link)
+ 		goto failed;
+ 	local->amem = ioremap(link->resource[4]->start,
+ 			resource_size(link->resource[4]));
++	if (!local->amem)
++		goto failed;
+ 
+ 	dev_dbg(&link->dev, "ray_config sram=%p\n", local->sram);
+ 	dev_dbg(&link->dev, "ray_config rmem=%p\n", local->rmem);
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index e05cc9f8a9fd1..1d72653b5c8d1 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -1018,6 +1018,9 @@ static unsigned long default_align(struct nd_region *nd_region)
+ 		}
+ 	}
+ 
++	if (nd_region->ndr_size < MEMREMAP_COMPAT_ALIGN_MAX)
++		align = PAGE_SIZE;
++
+ 	mappings = max_t(u16, 1, nd_region->ndr_mappings);
+ 	div_u64_rem(align, mappings, &remainder);
+ 	if (remainder)
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 71c85c99e86c6..853b9a24f744e 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3681,16 +3681,15 @@ static struct nvme_ns_head *nvme_find_ns_head(struct nvme_subsystem *subsys,
+ 	return NULL;
+ }
+ 
+-static int __nvme_check_ids(struct nvme_subsystem *subsys,
+-		struct nvme_ns_head *new)
++static int nvme_subsys_check_duplicate_ids(struct nvme_subsystem *subsys,
++		struct nvme_ns_ids *ids)
+ {
+ 	struct nvme_ns_head *h;
+ 
+ 	lockdep_assert_held(&subsys->lock);
+ 
+ 	list_for_each_entry(h, &subsys->nsheads, entry) {
+-		if (nvme_ns_ids_valid(&new->ids) &&
+-		    nvme_ns_ids_equal(&new->ids, &h->ids))
++		if (nvme_ns_ids_valid(ids) && nvme_ns_ids_equal(ids, &h->ids))
+ 			return -EINVAL;
+ 	}
+ 
+@@ -3724,7 +3723,7 @@ static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
+ 	head->ids = *ids;
+ 	kref_init(&head->ref);
+ 
+-	ret = __nvme_check_ids(ctrl->subsys, head);
++	ret = nvme_subsys_check_duplicate_ids(ctrl->subsys, &head->ids);
+ 	if (ret) {
+ 		dev_err(ctrl->device,
+ 			"duplicate IDs for nsid %d\n", nsid);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 6105894a218a5..7e39320337072 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -30,6 +30,44 @@ static int so_priority;
+ module_param(so_priority, int, 0644);
+ MODULE_PARM_DESC(so_priority, "nvme tcp socket optimize priority");
+ 
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++/* lockdep can detect a circular dependency of the form
++ *   sk_lock -> mmap_lock (page fault) -> fs locks -> sk_lock
++ * because dependencies are tracked for both nvme-tcp and user contexts. Using
++ * a separate class prevents lockdep from conflating nvme-tcp socket use with
++ * user-space socket API use.
++ */
++static struct lock_class_key nvme_tcp_sk_key[2];
++static struct lock_class_key nvme_tcp_slock_key[2];
++
++static void nvme_tcp_reclassify_socket(struct socket *sock)
++{
++	struct sock *sk = sock->sk;
++
++	if (WARN_ON_ONCE(!sock_allow_reclassification(sk)))
++		return;
++
++	switch (sk->sk_family) {
++	case AF_INET:
++		sock_lock_init_class_and_name(sk, "slock-AF_INET-NVME",
++					      &nvme_tcp_slock_key[0],
++					      "sk_lock-AF_INET-NVME",
++					      &nvme_tcp_sk_key[0]);
++		break;
++	case AF_INET6:
++		sock_lock_init_class_and_name(sk, "slock-AF_INET6-NVME",
++					      &nvme_tcp_slock_key[1],
++					      "sk_lock-AF_INET6-NVME",
++					      &nvme_tcp_sk_key[1]);
++		break;
++	default:
++		WARN_ON_ONCE(1);
++	}
++}
++#else
++static void nvme_tcp_reclassify_socket(struct socket *sock) { }
++#endif
++
+ enum nvme_tcp_send_state {
+ 	NVME_TCP_SEND_CMD_PDU = 0,
+ 	NVME_TCP_SEND_H2C_PDU,
+@@ -1422,6 +1460,8 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl,
+ 		goto err_destroy_mutex;
+ 	}
+ 
++	nvme_tcp_reclassify_socket(queue->sock);
++
+ 	/* Single syn retry */
+ 	tcp_sock_set_syncnt(queue->sock->sk, 1);
+ 
+diff --git a/drivers/pci/access.c b/drivers/pci/access.c
+index 46935695cfb90..8d0d1f61c650d 100644
+--- a/drivers/pci/access.c
++++ b/drivers/pci/access.c
+@@ -160,9 +160,12 @@ int pci_generic_config_write32(struct pci_bus *bus, unsigned int devfn,
+ 	 * write happen to have any RW1C (write-one-to-clear) bits set, we
+ 	 * just inadvertently cleared something we shouldn't have.
+ 	 */
+-	dev_warn_ratelimited(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
+-			     size, pci_domain_nr(bus), bus->number,
+-			     PCI_SLOT(devfn), PCI_FUNC(devfn), where);
++	if (!bus->unsafe_warn) {
++		dev_warn(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
++			 size, pci_domain_nr(bus), bus->number,
++			 PCI_SLOT(devfn), PCI_FUNC(devfn), where);
++		bus->unsafe_warn = 1;
++	}
+ 
+ 	mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));
+ 	tmp = readl(addr) & mask;
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index f30144c8c0bd2..49ff8bf10c740 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -851,7 +851,9 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 	case PCI_EXP_RTSTA: {
+ 		u32 isr0 = advk_readl(pcie, PCIE_ISR0_REG);
+ 		u32 msglog = advk_readl(pcie, PCIE_MSG_LOG_REG);
+-		*value = (isr0 & PCIE_MSG_PM_PME_MASK) << 16 | (msglog >> 16);
++		*value = msglog >> 16;
++		if (isr0 & PCIE_MSG_PM_PME_MASK)
++			*value |= PCI_EXP_RTSTA_PME;
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	}
+ 
+diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c
+index b651b6f444691..e1c2daa50b498 100644
+--- a/drivers/pci/controller/pci-xgene.c
++++ b/drivers/pci/controller/pci-xgene.c
+@@ -467,7 +467,7 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
+ 		return 1;
+ 	}
+ 
+-	if ((size > SZ_1K) && (size < SZ_4G) && !(*ib_reg_mask & (1 << 0))) {
++	if ((size > SZ_1K) && (size < SZ_1T) && !(*ib_reg_mask & (1 << 0))) {
+ 		*ib_reg_mask |= (1 << 0);
+ 		return 0;
+ 	}
+@@ -481,28 +481,27 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
+ }
+ 
+ static void xgene_pcie_setup_ib_reg(struct xgene_pcie_port *port,
+-				    struct resource_entry *entry,
+-				    u8 *ib_reg_mask)
++				    struct of_pci_range *range, u8 *ib_reg_mask)
+ {
+ 	void __iomem *cfg_base = port->cfg_base;
+ 	struct device *dev = port->dev;
+ 	void *bar_addr;
+ 	u32 pim_reg;
+-	u64 cpu_addr = entry->res->start;
+-	u64 pci_addr = cpu_addr - entry->offset;
+-	u64 size = resource_size(entry->res);
++	u64 cpu_addr = range->cpu_addr;
++	u64 pci_addr = range->pci_addr;
++	u64 size = range->size;
+ 	u64 mask = ~(size - 1) | EN_REG;
+ 	u32 flags = PCI_BASE_ADDRESS_MEM_TYPE_64;
+ 	u32 bar_low;
+ 	int region;
+ 
+-	region = xgene_pcie_select_ib_reg(ib_reg_mask, size);
++	region = xgene_pcie_select_ib_reg(ib_reg_mask, range->size);
+ 	if (region < 0) {
+ 		dev_warn(dev, "invalid pcie dma-range config\n");
+ 		return;
+ 	}
+ 
+-	if (entry->res->flags & IORESOURCE_PREFETCH)
++	if (range->flags & IORESOURCE_PREFETCH)
+ 		flags |= PCI_BASE_ADDRESS_MEM_PREFETCH;
+ 
+ 	bar_low = pcie_bar_low_val((u32)cpu_addr, flags);
+@@ -533,13 +532,25 @@ static void xgene_pcie_setup_ib_reg(struct xgene_pcie_port *port,
+ 
+ static int xgene_pcie_parse_map_dma_ranges(struct xgene_pcie_port *port)
+ {
+-	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(port);
+-	struct resource_entry *entry;
++	struct device_node *np = port->node;
++	struct of_pci_range range;
++	struct of_pci_range_parser parser;
++	struct device *dev = port->dev;
+ 	u8 ib_reg_mask = 0;
+ 
+-	resource_list_for_each_entry(entry, &bridge->dma_ranges)
+-		xgene_pcie_setup_ib_reg(port, entry, &ib_reg_mask);
++	if (of_pci_dma_range_parser_init(&parser, np)) {
++		dev_err(dev, "missing dma-ranges property\n");
++		return -EINVAL;
++	}
++
++	/* Get the dma-ranges from DT */
++	for_each_of_pci_range(&parser, &range) {
++		u64 end = range.cpu_addr + range.size - 1;
+ 
++		dev_dbg(dev, "0x%08x 0x%016llx..0x%016llx -> 0x%016llx\n",
++			range.flags, range.cpu_addr, end, range.pci_addr);
++		xgene_pcie_setup_ib_reg(port, &range, &ib_reg_mask);
++	}
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 30708af975adc..af4c4cc837fcd 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -98,6 +98,8 @@ static int pcie_poll_cmd(struct controller *ctrl, int timeout)
+ 		if (slot_status & PCI_EXP_SLTSTA_CC) {
+ 			pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
+ 						   PCI_EXP_SLTSTA_CC);
++			ctrl->cmd_busy = 0;
++			smp_mb();
+ 			return 1;
+ 		}
+ 		msleep(10);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 95fcc735c88e7..1be2894ada70c 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -1816,6 +1816,18 @@ static void quirk_alder_ioapic(struct pci_dev *pdev)
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL,	PCI_DEVICE_ID_INTEL_EESSC,	quirk_alder_ioapic);
+ #endif
+ 
++static void quirk_no_msi(struct pci_dev *dev)
++{
++	pci_info(dev, "avoiding MSI to work around a hardware defect\n");
++	dev->no_msi = 1;
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4386, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4387, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4388, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4389, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438a, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438b, quirk_no_msi);
++
+ static void quirk_pcie_mch(struct pci_dev *pdev)
+ {
+ 	pdev->no_msi = 1;
+diff --git a/drivers/phy/phy-core-mipi-dphy.c b/drivers/phy/phy-core-mipi-dphy.c
+index 14e0551cd3190..0aa740b73d0db 100644
+--- a/drivers/phy/phy-core-mipi-dphy.c
++++ b/drivers/phy/phy-core-mipi-dphy.c
+@@ -66,10 +66,10 @@ int phy_mipi_dphy_get_default_config(unsigned long pixel_clock,
+ 	cfg->hs_trail = max(4 * 8 * ui, 60000 + 4 * 4 * ui);
+ 
+ 	cfg->init = 100;
+-	cfg->lpx = 60000;
++	cfg->lpx = 50000;
+ 	cfg->ta_get = 5 * cfg->lpx;
+ 	cfg->ta_go = 4 * cfg->lpx;
+-	cfg->ta_sure = 2 * cfg->lpx;
++	cfg->ta_sure = cfg->lpx;
+ 	cfg->wakeup = 1000;
+ 
+ 	cfg->hs_clk_rate = hs_clk_rate;
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
+index a02ad10ec6fad..730581d130649 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
+@@ -1039,6 +1039,7 @@ int mtk_pctrl_init(struct platform_device *pdev,
+ 	node = of_parse_phandle(np, "mediatek,pctl-regmap", 0);
+ 	if (node) {
+ 		pctl->regmap1 = syscon_node_to_regmap(node);
++		of_node_put(node);
+ 		if (IS_ERR(pctl->regmap1))
+ 			return PTR_ERR(pctl->regmap1);
+ 	} else if (regmap) {
+@@ -1052,6 +1053,7 @@ int mtk_pctrl_init(struct platform_device *pdev,
+ 	node = of_parse_phandle(np, "mediatek,pctl-regmap", 1);
+ 	if (node) {
+ 		pctl->regmap2 = syscon_node_to_regmap(node);
++		of_node_put(node);
+ 		if (IS_ERR(pctl->regmap2))
+ 			return PTR_ERR(pctl->regmap2);
+ 	}
+diff --git a/drivers/pinctrl/mediatek/pinctrl-paris.c b/drivers/pinctrl/mediatek/pinctrl-paris.c
+index 623af4410b07c..d0a4ebbe1e7e6 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-paris.c
++++ b/drivers/pinctrl/mediatek/pinctrl-paris.c
+@@ -96,20 +96,16 @@ static int mtk_pinconf_get(struct pinctrl_dev *pctldev,
+ 			err = hw->soc->bias_get_combo(hw, desc, &pullup, &ret);
+ 			if (err)
+ 				goto out;
++			if (ret == MTK_PUPD_SET_R1R0_00)
++				ret = MTK_DISABLE;
+ 			if (param == PIN_CONFIG_BIAS_DISABLE) {
+-				if (ret == MTK_PUPD_SET_R1R0_00)
+-					ret = MTK_DISABLE;
++				if (ret != MTK_DISABLE)
++					err = -EINVAL;
+ 			} else if (param == PIN_CONFIG_BIAS_PULL_UP) {
+-				/* When desire to get pull-up value, return
+-				 *  error if current setting is pull-down
+-				 */
+-				if (!pullup)
++				if (!pullup || ret == MTK_DISABLE)
+ 					err = -EINVAL;
+ 			} else if (param == PIN_CONFIG_BIAS_PULL_DOWN) {
+-				/* When desire to get pull-down value, return
+-				 *  error if current setting is pull-up
+-				 */
+-				if (pullup)
++				if (pullup || ret == MTK_DISABLE)
+ 					err = -EINVAL;
+ 			}
+ 		} else {
+@@ -188,8 +184,7 @@ out:
+ }
+ 
+ static int mtk_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+-			   enum pin_config_param param,
+-			   enum pin_config_param arg)
++			   enum pin_config_param param, u32 arg)
+ {
+ 	struct mtk_pinctrl *hw = pinctrl_dev_get_drvdata(pctldev);
+ 	const struct mtk_pin_desc *desc;
+@@ -585,6 +580,9 @@ ssize_t mtk_pctrl_show_one_pin(struct mtk_pinctrl *hw,
+ 	if (gpio >= hw->soc->npins)
+ 		return -EINVAL;
+ 
++	if (mtk_is_virt_gpio(hw, gpio))
++		return -EINVAL;
++
+ 	desc = (const struct mtk_pin_desc *)&hw->soc->pins[gpio];
+ 	pinmux = mtk_pctrl_get_pinmux(hw, gpio);
+ 	if (pinmux >= hw->soc->nfuncs)
+@@ -719,10 +717,10 @@ static int mtk_pconf_group_get(struct pinctrl_dev *pctldev, unsigned group,
+ 			       unsigned long *config)
+ {
+ 	struct mtk_pinctrl *hw = pinctrl_dev_get_drvdata(pctldev);
++	struct mtk_pinctrl_group *grp = &hw->groups[group];
+ 
+-	*config = hw->groups[group].config;
+-
+-	return 0;
++	 /* One pin per group only */
++	return mtk_pinconf_get(pctldev, grp->pin, config);
+ }
+ 
+ static int mtk_pconf_group_set(struct pinctrl_dev *pctldev, unsigned group,
+@@ -738,8 +736,6 @@ static int mtk_pconf_group_set(struct pinctrl_dev *pctldev, unsigned group,
+ 				      pinconf_to_config_argument(configs[i]));
+ 		if (ret < 0)
+ 			return ret;
+-
+-		grp->config = configs[i];
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/pinctrl/nomadik/pinctrl-nomadik.c b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+index 657e35a75d84a..6d77feda9090a 100644
+--- a/drivers/pinctrl/nomadik/pinctrl-nomadik.c
++++ b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+@@ -1883,8 +1883,10 @@ static int nmk_pinctrl_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	prcm_np = of_parse_phandle(np, "prcm", 0);
+-	if (prcm_np)
++	if (prcm_np) {
+ 		npct->prcm_base = of_iomap(prcm_np, 0);
++		of_node_put(prcm_np);
++	}
+ 	if (!npct->prcm_base) {
+ 		if (version == PINCTRL_NMK_STN8815) {
+ 			dev_info(&pdev->dev,
+diff --git a/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c b/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
+index 6de31b5ee358c..ce36b6ff7b95e 100644
+--- a/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
++++ b/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
+@@ -78,7 +78,6 @@ struct npcm7xx_gpio {
+ 	struct gpio_chip	gc;
+ 	int			irqbase;
+ 	int			irq;
+-	void			*priv;
+ 	struct irq_chip		irq_chip;
+ 	u32			pinctrl_id;
+ 	int (*direction_input)(struct gpio_chip *chip, unsigned offset);
+@@ -226,7 +225,7 @@ static void npcmgpio_irq_handler(struct irq_desc *desc)
+ 	chained_irq_enter(chip, desc);
+ 	sts = ioread32(bank->base + NPCM7XX_GP_N_EVST);
+ 	en  = ioread32(bank->base + NPCM7XX_GP_N_EVEN);
+-	dev_dbg(chip->parent_device, "==> got irq sts %.8x %.8x\n", sts,
++	dev_dbg(bank->gc.parent, "==> got irq sts %.8x %.8x\n", sts,
+ 		en);
+ 
+ 	sts &= en;
+@@ -241,33 +240,33 @@ static int npcmgpio_set_irq_type(struct irq_data *d, unsigned int type)
+ 		gpiochip_get_data(irq_data_get_irq_chip_data(d));
+ 	unsigned int gpio = BIT(d->hwirq);
+ 
+-	dev_dbg(d->chip->parent_device, "setirqtype: %u.%u = %u\n", gpio,
++	dev_dbg(bank->gc.parent, "setirqtype: %u.%u = %u\n", gpio,
+ 		d->irq, type);
+ 	switch (type) {
+ 	case IRQ_TYPE_EDGE_RISING:
+-		dev_dbg(d->chip->parent_device, "edge.rising\n");
++		dev_dbg(bank->gc.parent, "edge.rising\n");
+ 		npcm_gpio_clr(&bank->gc, bank->base + NPCM7XX_GP_N_EVBE, gpio);
+ 		npcm_gpio_clr(&bank->gc, bank->base + NPCM7XX_GP_N_POL, gpio);
+ 		break;
+ 	case IRQ_TYPE_EDGE_FALLING:
+-		dev_dbg(d->chip->parent_device, "edge.falling\n");
++		dev_dbg(bank->gc.parent, "edge.falling\n");
+ 		npcm_gpio_clr(&bank->gc, bank->base + NPCM7XX_GP_N_EVBE, gpio);
+ 		npcm_gpio_set(&bank->gc, bank->base + NPCM7XX_GP_N_POL, gpio);
+ 		break;
+ 	case IRQ_TYPE_EDGE_BOTH:
+-		dev_dbg(d->chip->parent_device, "edge.both\n");
++		dev_dbg(bank->gc.parent, "edge.both\n");
+ 		npcm_gpio_set(&bank->gc, bank->base + NPCM7XX_GP_N_EVBE, gpio);
+ 		break;
+ 	case IRQ_TYPE_LEVEL_LOW:
+-		dev_dbg(d->chip->parent_device, "level.low\n");
++		dev_dbg(bank->gc.parent, "level.low\n");
+ 		npcm_gpio_set(&bank->gc, bank->base + NPCM7XX_GP_N_POL, gpio);
+ 		break;
+ 	case IRQ_TYPE_LEVEL_HIGH:
+-		dev_dbg(d->chip->parent_device, "level.high\n");
++		dev_dbg(bank->gc.parent, "level.high\n");
+ 		npcm_gpio_clr(&bank->gc, bank->base + NPCM7XX_GP_N_POL, gpio);
+ 		break;
+ 	default:
+-		dev_dbg(d->chip->parent_device, "invalid irq type\n");
++		dev_dbg(bank->gc.parent, "invalid irq type\n");
+ 		return -EINVAL;
+ 	}
+ 
+@@ -289,7 +288,7 @@ static void npcmgpio_irq_ack(struct irq_data *d)
+ 		gpiochip_get_data(irq_data_get_irq_chip_data(d));
+ 	unsigned int gpio = d->hwirq;
+ 
+-	dev_dbg(d->chip->parent_device, "irq_ack: %u.%u\n", gpio, d->irq);
++	dev_dbg(bank->gc.parent, "irq_ack: %u.%u\n", gpio, d->irq);
+ 	iowrite32(BIT(gpio), bank->base + NPCM7XX_GP_N_EVST);
+ }
+ 
+@@ -301,7 +300,7 @@ static void npcmgpio_irq_mask(struct irq_data *d)
+ 	unsigned int gpio = d->hwirq;
+ 
+ 	/* Clear events */
+-	dev_dbg(d->chip->parent_device, "irq_mask: %u.%u\n", gpio, d->irq);
++	dev_dbg(bank->gc.parent, "irq_mask: %u.%u\n", gpio, d->irq);
+ 	iowrite32(BIT(gpio), bank->base + NPCM7XX_GP_N_EVENC);
+ }
+ 
+@@ -313,7 +312,7 @@ static void npcmgpio_irq_unmask(struct irq_data *d)
+ 	unsigned int gpio = d->hwirq;
+ 
+ 	/* Enable events */
+-	dev_dbg(d->chip->parent_device, "irq_unmask: %u.%u\n", gpio, d->irq);
++	dev_dbg(bank->gc.parent, "irq_unmask: %u.%u\n", gpio, d->irq);
+ 	iowrite32(BIT(gpio), bank->base + NPCM7XX_GP_N_EVENS);
+ }
+ 
+@@ -323,7 +322,7 @@ static unsigned int npcmgpio_irq_startup(struct irq_data *d)
+ 	unsigned int gpio = d->hwirq;
+ 
+ 	/* active-high, input, clear interrupt, enable interrupt */
+-	dev_dbg(d->chip->parent_device, "startup: %u.%u\n", gpio, d->irq);
++	dev_dbg(gc->parent, "startup: %u.%u\n", gpio, d->irq);
+ 	npcmgpio_direction_input(gc, gpio);
+ 	npcmgpio_irq_ack(d);
+ 	npcmgpio_irq_unmask(d);
+@@ -905,7 +904,7 @@ static struct npcm7xx_func npcm7xx_funcs[] = {
+ #define DRIVE_STRENGTH_HI_SHIFT		12
+ #define DRIVE_STRENGTH_MASK		0x0000FF00
+ 
+-#define DS(lo, hi)	(((lo) << DRIVE_STRENGTH_LO_SHIFT) | \
++#define DSTR(lo, hi)	(((lo) << DRIVE_STRENGTH_LO_SHIFT) | \
+ 			 ((hi) << DRIVE_STRENGTH_HI_SHIFT))
+ #define DSLO(x)		(((x) >> DRIVE_STRENGTH_LO_SHIFT) & 0xF)
+ #define DSHI(x)		(((x) >> DRIVE_STRENGTH_HI_SHIFT) & 0xF)
+@@ -925,31 +924,31 @@ struct npcm7xx_pincfg {
+ static const struct npcm7xx_pincfg pincfg[] = {
+ 	/*	PIN	  FUNCTION 1		   FUNCTION 2		  FUNCTION 3	    FLAGS */
+ 	NPCM7XX_PINCFG(0,	 iox1, MFSEL1, 30,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(1,	 iox1, MFSEL1, 30,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(2,	 iox1, MFSEL1, 30,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(1,	 iox1, MFSEL1, 30,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(2,	 iox1, MFSEL1, 30,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(3,	 iox1, MFSEL1, 30,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(4,	 iox2, MFSEL3, 14,	 smb1d, I2CSEGSEL, 7,	none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(5,	 iox2, MFSEL3, 14,	 smb1d, I2CSEGSEL, 7,	none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(6,	 iox2, MFSEL3, 14,	 smb2d, I2CSEGSEL, 10,  none, NONE, 0,       SLEW),
+ 	NPCM7XX_PINCFG(7,	 iox2, MFSEL3, 14,	 smb2d, I2CSEGSEL, 10,  none, NONE, 0,       SLEW),
+-	NPCM7XX_PINCFG(8,      lkgpo1, FLOCKR1, 4,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(9,      lkgpo2, FLOCKR1, 8,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(10,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(11,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(8,      lkgpo1, FLOCKR1, 4,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(9,      lkgpo2, FLOCKR1, 8,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(10,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(11,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(12,	 gspi, MFSEL1, 24,	 smb5b, I2CSEGSEL, 19,  none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(13,	 gspi, MFSEL1, 24,	 smb5b, I2CSEGSEL, 19,  none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(14,	 gspi, MFSEL1, 24,	 smb5c, I2CSEGSEL, 20,	none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(15,	 gspi, MFSEL1, 24,	 smb5c, I2CSEGSEL, 20,	none, NONE, 0,	     SLEW),
+-	NPCM7XX_PINCFG(16,     lkgpo0, FLOCKR1, 0,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(17,      pspi2, MFSEL3, 13,     smb4den, I2CSEGSEL, 23,  none, NONE, 0,       DS(8, 12)),
+-	NPCM7XX_PINCFG(18,      pspi2, MFSEL3, 13,	 smb4b, I2CSEGSEL, 14,  none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(19,      pspi2, MFSEL3, 13,	 smb4b, I2CSEGSEL, 14,  none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(16,     lkgpo0, FLOCKR1, 0,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(17,      pspi2, MFSEL3, 13,     smb4den, I2CSEGSEL, 23,  none, NONE, 0,       DSTR(8, 12)),
++	NPCM7XX_PINCFG(18,      pspi2, MFSEL3, 13,	 smb4b, I2CSEGSEL, 14,  none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(19,      pspi2, MFSEL3, 13,	 smb4b, I2CSEGSEL, 14,  none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(20,	smb4c, I2CSEGSEL, 15,    smb15, MFSEL3, 8,      none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(21,	smb4c, I2CSEGSEL, 15,    smb15, MFSEL3, 8,      none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(22,      smb4d, I2CSEGSEL, 16,	 smb14, MFSEL3, 7,      none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(23,      smb4d, I2CSEGSEL, 16,	 smb14, MFSEL3, 7,      none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(24,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(25,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(24,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(25,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(26,	 smb5, MFSEL1, 2,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(27,	 smb5, MFSEL1, 2,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(28,	 smb4, MFSEL1, 1,	  none, NONE, 0,	none, NONE, 0,	     0),
+@@ -965,12 +964,12 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(39,	smb3b, I2CSEGSEL, 11,	  none, NONE, 0,	none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(40,	smb3b, I2CSEGSEL, 11,	  none, NONE, 0,	none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(41,  bmcuart0a, MFSEL1, 9,         none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(42,  bmcuart0a, MFSEL1, 9,         none, NONE, 0,	none, NONE, 0,	     DS(2, 4) | GPO),
++	NPCM7XX_PINCFG(42,  bmcuart0a, MFSEL1, 9,         none, NONE, 0,	none, NONE, 0,	     DSTR(2, 4) | GPO),
+ 	NPCM7XX_PINCFG(43,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,  bmcuart1, MFSEL3, 24,    0),
+ 	NPCM7XX_PINCFG(44,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,  bmcuart1, MFSEL3, 24,    0),
+ 	NPCM7XX_PINCFG(45,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(46,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,	none, NONE, 0,	     DS(2, 8)),
+-	NPCM7XX_PINCFG(47,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,	none, NONE, 0,	     DS(2, 8)),
++	NPCM7XX_PINCFG(46,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,	none, NONE, 0,	     DSTR(2, 8)),
++	NPCM7XX_PINCFG(47,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,	none, NONE, 0,	     DSTR(2, 8)),
+ 	NPCM7XX_PINCFG(48,	uart2, MFSEL1, 11,   bmcuart0b, MFSEL4, 1,      none, NONE, 0,	     GPO),
+ 	NPCM7XX_PINCFG(49,	uart2, MFSEL1, 11,   bmcuart0b, MFSEL4, 1,      none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(50,	uart2, MFSEL1, 11,	  none, NONE, 0,        none, NONE, 0,	     0),
+@@ -980,8 +979,8 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(54,	uart2, MFSEL1, 11,	  none, NONE, 0,        none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(55,	uart2, MFSEL1, 11,	  none, NONE, 0,        none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(56,	r1err, MFSEL1, 12,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(57,       r1md, MFSEL1, 13,        none, NONE, 0,        none, NONE, 0,       DS(2, 4)),
+-	NPCM7XX_PINCFG(58,       r1md, MFSEL1, 13,        none, NONE, 0,	none, NONE, 0,	     DS(2, 4)),
++	NPCM7XX_PINCFG(57,       r1md, MFSEL1, 13,        none, NONE, 0,        none, NONE, 0,       DSTR(2, 4)),
++	NPCM7XX_PINCFG(58,       r1md, MFSEL1, 13,        none, NONE, 0,	none, NONE, 0,	     DSTR(2, 4)),
+ 	NPCM7XX_PINCFG(59,	smb3d, I2CSEGSEL, 13,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(60,	smb3d, I2CSEGSEL, 13,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(61,      uart1, MFSEL1, 10,	  none, NONE, 0,	none, NONE, 0,     GPO),
+@@ -1004,19 +1003,19 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(77,    fanin13, MFSEL2, 13,        none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(78,    fanin14, MFSEL2, 14,        none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(79,    fanin15, MFSEL2, 15,        none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(80,	 pwm0, MFSEL2, 16,        none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(81,	 pwm1, MFSEL2, 17,        none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(82,	 pwm2, MFSEL2, 18,        none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(83,	 pwm3, MFSEL2, 19,        none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(84,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(85,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(86,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     DS(8, 12) | SLEW),
++	NPCM7XX_PINCFG(80,	 pwm0, MFSEL2, 16,        none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(81,	 pwm1, MFSEL2, 17,        none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(82,	 pwm2, MFSEL2, 18,        none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(83,	 pwm3, MFSEL2, 19,        none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(84,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(85,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(86,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     DSTR(8, 12) | SLEW),
+ 	NPCM7XX_PINCFG(87,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(88,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(89,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(90,      r2err, MFSEL1, 15,        none, NONE, 0,        none, NONE, 0,       0),
+-	NPCM7XX_PINCFG(91,       r2md, MFSEL1, 16,	  none, NONE, 0,        none, NONE, 0,	     DS(2, 4)),
+-	NPCM7XX_PINCFG(92,       r2md, MFSEL1, 16,	  none, NONE, 0,        none, NONE, 0,	     DS(2, 4)),
++	NPCM7XX_PINCFG(91,       r2md, MFSEL1, 16,	  none, NONE, 0,        none, NONE, 0,	     DSTR(2, 4)),
++	NPCM7XX_PINCFG(92,       r2md, MFSEL1, 16,	  none, NONE, 0,        none, NONE, 0,	     DSTR(2, 4)),
+ 	NPCM7XX_PINCFG(93,    ga20kbc, MFSEL1, 17,	 smb5d, I2CSEGSEL, 21,  none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(94,    ga20kbc, MFSEL1, 17,	 smb5d, I2CSEGSEL, 21,  none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(95,	  lpc, NONE, 0,		  espi, MFSEL4, 8,      gpio, MFSEL1, 26,    0),
+@@ -1062,34 +1061,34 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(133,	smb10, MFSEL4, 13,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(134,	smb11, MFSEL4, 14,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(135,	smb11, MFSEL4, 14,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(136,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(137,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(138,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(139,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(140,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
++	NPCM7XX_PINCFG(136,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(137,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(138,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(139,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(140,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
+ 	NPCM7XX_PINCFG(141,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(142,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
++	NPCM7XX_PINCFG(142,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
+ 	NPCM7XX_PINCFG(143,       sd1, MFSEL3, 12,      sd1pwr, MFSEL4, 5,      none, NONE, 0,       0),
+-	NPCM7XX_PINCFG(144,	 pwm4, MFSEL2, 20,	  none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(145,	 pwm5, MFSEL2, 21,	  none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(146,	 pwm6, MFSEL2, 22,	  none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(147,	 pwm7, MFSEL2, 23,	  none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(148,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(149,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(150,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(151,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(152,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
++	NPCM7XX_PINCFG(144,	 pwm4, MFSEL2, 20,	  none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(145,	 pwm5, MFSEL2, 21,	  none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(146,	 pwm6, MFSEL2, 22,	  none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(147,	 pwm7, MFSEL2, 23,	  none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(148,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(149,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(150,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(151,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(152,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
+ 	NPCM7XX_PINCFG(153,     mmcwp, FLOCKR1, 24,       none, NONE, 0,	none, NONE, 0,	     0),  /* Z1/A1 */
+-	NPCM7XX_PINCFG(154,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
++	NPCM7XX_PINCFG(154,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
+ 	NPCM7XX_PINCFG(155,     mmccd, MFSEL3, 25,      mmcrst, MFSEL4, 6,      none, NONE, 0,       0),  /* Z1/A1 */
+-	NPCM7XX_PINCFG(156,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(157,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(158,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(159,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-
+-	NPCM7XX_PINCFG(160,    clkout, MFSEL1, 21,        none, NONE, 0,        none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(161,	  lpc, NONE, 0,		  espi, MFSEL4, 8,      gpio, MFSEL1, 26,    DS(8, 12)),
+-	NPCM7XX_PINCFG(162,    serirq, NONE, 0,           gpio, MFSEL1, 31,	none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(156,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(157,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(158,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(159,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++
++	NPCM7XX_PINCFG(160,    clkout, MFSEL1, 21,        none, NONE, 0,        none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(161,	  lpc, NONE, 0,		  espi, MFSEL4, 8,      gpio, MFSEL1, 26,    DSTR(8, 12)),
++	NPCM7XX_PINCFG(162,    serirq, NONE, 0,           gpio, MFSEL1, 31,	none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(163,	  lpc, NONE, 0,		  espi, MFSEL4, 8,      gpio, MFSEL1, 26,    0),
+ 	NPCM7XX_PINCFG(164,	  lpc, NONE, 0,		  espi, MFSEL4, 8,      gpio, MFSEL1, 26,    SLEWLPC),
+ 	NPCM7XX_PINCFG(165,	  lpc, NONE, 0,		  espi, MFSEL4, 8,      gpio, MFSEL1, 26,    SLEWLPC),
+@@ -1102,25 +1101,25 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(172,	 smb6, MFSEL3, 1,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(173,	 smb7, MFSEL3, 2,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(174,	 smb7, MFSEL3, 2,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(175,	pspi1, MFSEL3, 4,       faninx, MFSEL3, 3,      none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(176,     pspi1, MFSEL3, 4,       faninx, MFSEL3, 3,      none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(177,     pspi1, MFSEL3, 4,       faninx, MFSEL3, 3,      none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(178,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(179,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(180,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
++	NPCM7XX_PINCFG(175,	pspi1, MFSEL3, 4,       faninx, MFSEL3, 3,      none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(176,     pspi1, MFSEL3, 4,       faninx, MFSEL3, 3,      none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(177,     pspi1, MFSEL3, 4,       faninx, MFSEL3, 3,      none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(178,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(179,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(180,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
+ 	NPCM7XX_PINCFG(181,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(182,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(183,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(184,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW | GPO),
+-	NPCM7XX_PINCFG(185,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW | GPO),
+-	NPCM7XX_PINCFG(186,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(187,   spi3cs1, MFSEL4, 17,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(188,  spi3quad, MFSEL4, 20,     spi3cs2, MFSEL4, 18,     none, NONE, 0,    DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(189,  spi3quad, MFSEL4, 20,     spi3cs3, MFSEL4, 19,     none, NONE, 0,    DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(190,      gpio, FLOCKR1, 20,   nprd_smi, NONE, 0,	none, NONE, 0,	     DS(2, 4)),
+-	NPCM7XX_PINCFG(191,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),  /* XX */
+-
+-	NPCM7XX_PINCFG(192,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),  /* XX */
++	NPCM7XX_PINCFG(183,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(184,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW | GPO),
++	NPCM7XX_PINCFG(185,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW | GPO),
++	NPCM7XX_PINCFG(186,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(187,   spi3cs1, MFSEL4, 17,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(188,  spi3quad, MFSEL4, 20,     spi3cs2, MFSEL4, 18,     none, NONE, 0,    DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(189,  spi3quad, MFSEL4, 20,     spi3cs3, MFSEL4, 19,     none, NONE, 0,    DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(190,      gpio, FLOCKR1, 20,   nprd_smi, NONE, 0,	none, NONE, 0,	     DSTR(2, 4)),
++	NPCM7XX_PINCFG(191,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),  /* XX */
++
++	NPCM7XX_PINCFG(192,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),  /* XX */
+ 	NPCM7XX_PINCFG(193,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(194,	smb0b, I2CSEGSEL, 0,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(195,	smb0b, I2CSEGSEL, 0,	  none, NONE, 0,	none, NONE, 0,	     0),
+@@ -1131,11 +1130,11 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(200,        r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,       0),
+ 	NPCM7XX_PINCFG(201,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(202,	smb0c, I2CSEGSEL, 1,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(203,    faninx, MFSEL3, 3,         none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(203,    faninx, MFSEL3, 3,         none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(204,	  ddc, NONE, 0,           gpio, MFSEL3, 22,	none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(205,	  ddc, NONE, 0,           gpio, MFSEL3, 22,	none, NONE, 0,	     SLEW),
+-	NPCM7XX_PINCFG(206,	  ddc, NONE, 0,           gpio, MFSEL3, 22,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(207,	  ddc, NONE, 0,           gpio, MFSEL3, 22,	none, NONE, 0,	     DS(4, 8)),
++	NPCM7XX_PINCFG(206,	  ddc, NONE, 0,           gpio, MFSEL3, 22,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(207,	  ddc, NONE, 0,           gpio, MFSEL3, 22,	none, NONE, 0,	     DSTR(4, 8)),
+ 	NPCM7XX_PINCFG(208,       rg2, MFSEL4, 24,         ddr, MFSEL3, 26,     none, NONE, 0,       0),
+ 	NPCM7XX_PINCFG(209,       rg2, MFSEL4, 24,         ddr, MFSEL3, 26,     none, NONE, 0,       0),
+ 	NPCM7XX_PINCFG(210,       rg2, MFSEL4, 24,         ddr, MFSEL3, 26,     none, NONE, 0,       0),
+@@ -1147,20 +1146,20 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(216,   rg2mdio, MFSEL4, 23,         ddr, MFSEL3, 26,     none, NONE, 0,       0),
+ 	NPCM7XX_PINCFG(217,   rg2mdio, MFSEL4, 23,         ddr, MFSEL3, 26,     none, NONE, 0,       0),
+ 	NPCM7XX_PINCFG(218,     wdog1, MFSEL3, 19,        none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(219,     wdog2, MFSEL3, 20,        none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
++	NPCM7XX_PINCFG(219,     wdog2, MFSEL3, 20,        none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
+ 	NPCM7XX_PINCFG(220,	smb12, MFSEL3, 5,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(221,	smb12, MFSEL3, 5,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(222,     smb13, MFSEL3, 6,         none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(223,     smb13, MFSEL3, 6,         none, NONE, 0,	none, NONE, 0,	     0),
+ 
+ 	NPCM7XX_PINCFG(224,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     SLEW),
+-	NPCM7XX_PINCFG(225,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW | GPO),
+-	NPCM7XX_PINCFG(226,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW | GPO),
+-	NPCM7XX_PINCFG(227,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(228,   spixcs1, MFSEL4, 28,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(229,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(230,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(231,    clkreq, MFSEL4, 9,         none, NONE, 0,        none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(225,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW | GPO),
++	NPCM7XX_PINCFG(226,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW | GPO),
++	NPCM7XX_PINCFG(227,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(228,   spixcs1, MFSEL4, 28,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(229,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(230,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(231,    clkreq, MFSEL4, 9,         none, NONE, 0,        none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(253,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     GPI), /* SDHC1 power */
+ 	NPCM7XX_PINCFG(254,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     GPI), /* SDHC2 power */
+ 	NPCM7XX_PINCFG(255,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     GPI), /* DACOSEL */
+@@ -1561,7 +1560,7 @@ static int npcm7xx_get_groups_count(struct pinctrl_dev *pctldev)
+ {
+ 	struct npcm7xx_pinctrl *npcm = pinctrl_dev_get_drvdata(pctldev);
+ 
+-	dev_dbg(npcm->dev, "group size: %d\n", ARRAY_SIZE(npcm7xx_groups));
++	dev_dbg(npcm->dev, "group size: %zu\n", ARRAY_SIZE(npcm7xx_groups));
+ 	return ARRAY_SIZE(npcm7xx_groups);
+ }
+ 
+diff --git a/drivers/pinctrl/pinconf-generic.c b/drivers/pinctrl/pinconf-generic.c
+index 1e225d5139888..42e27dba62e26 100644
+--- a/drivers/pinctrl/pinconf-generic.c
++++ b/drivers/pinctrl/pinconf-generic.c
+@@ -30,10 +30,10 @@ static const struct pin_config_item conf_items[] = {
+ 	PCONFDUMP(PIN_CONFIG_BIAS_BUS_HOLD, "input bias bus hold", NULL, false),
+ 	PCONFDUMP(PIN_CONFIG_BIAS_DISABLE, "input bias disabled", NULL, false),
+ 	PCONFDUMP(PIN_CONFIG_BIAS_HIGH_IMPEDANCE, "input bias high impedance", NULL, false),
+-	PCONFDUMP(PIN_CONFIG_BIAS_PULL_DOWN, "input bias pull down", NULL, false),
++	PCONFDUMP(PIN_CONFIG_BIAS_PULL_DOWN, "input bias pull down", "ohms", true),
+ 	PCONFDUMP(PIN_CONFIG_BIAS_PULL_PIN_DEFAULT,
+-				"input bias pull to pin specific state", NULL, false),
+-	PCONFDUMP(PIN_CONFIG_BIAS_PULL_UP, "input bias pull up", NULL, false),
++				"input bias pull to pin specific state", "ohms", true),
++	PCONFDUMP(PIN_CONFIG_BIAS_PULL_UP, "input bias pull up", "ohms", true),
+ 	PCONFDUMP(PIN_CONFIG_DRIVE_OPEN_DRAIN, "output drive open drain", NULL, false),
+ 	PCONFDUMP(PIN_CONFIG_DRIVE_OPEN_SOURCE, "output drive open source", NULL, false),
+ 	PCONFDUMP(PIN_CONFIG_DRIVE_PUSH_PULL, "output drive push pull", NULL, false),
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 53a0badc6b035..9df48e0cf4cb4 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -3774,6 +3774,7 @@ static int rockchip_pinctrl_probe(struct platform_device *pdev)
+ 	node = of_parse_phandle(np, "rockchip,grf", 0);
+ 	if (node) {
+ 		info->regmap_base = syscon_node_to_regmap(node);
++		of_node_put(node);
+ 		if (IS_ERR(info->regmap_base))
+ 			return PTR_ERR(info->regmap_base);
+ 	} else {
+@@ -3810,6 +3811,7 @@ static int rockchip_pinctrl_probe(struct platform_device *pdev)
+ 	node = of_parse_phandle(np, "rockchip,pmu", 0);
+ 	if (node) {
+ 		info->regmap_pmu = syscon_node_to_regmap(node);
++		of_node_put(node);
+ 		if (IS_ERR(info->regmap_pmu))
+ 			return PTR_ERR(info->regmap_pmu);
+ 	}
+diff --git a/drivers/pinctrl/renesas/core.c b/drivers/pinctrl/renesas/core.c
+index 9d168b90cd281..258972672eda1 100644
+--- a/drivers/pinctrl/renesas/core.c
++++ b/drivers/pinctrl/renesas/core.c
+@@ -739,7 +739,7 @@ static int sh_pfc_suspend_init(struct sh_pfc *pfc) { return 0; }
+ 
+ #ifdef DEBUG
+ #define SH_PFC_MAX_REGS		300
+-#define SH_PFC_MAX_ENUMS	3000
++#define SH_PFC_MAX_ENUMS	5000
+ 
+ static unsigned int sh_pfc_errors __initdata = 0;
+ static unsigned int sh_pfc_warnings __initdata = 0;
+@@ -851,7 +851,8 @@ static void __init sh_pfc_check_cfg_reg(const char *drvname,
+ 	sh_pfc_check_reg(drvname, cfg_reg->reg);
+ 
+ 	if (cfg_reg->field_width) {
+-		n = cfg_reg->reg_width / cfg_reg->field_width;
++		fw = cfg_reg->field_width;
++		n = (cfg_reg->reg_width / fw) << fw;
+ 		/* Skip field checks (done at build time) */
+ 		goto check_enum_ids;
+ 	}
+diff --git a/drivers/pinctrl/renesas/pfc-r8a77470.c b/drivers/pinctrl/renesas/pfc-r8a77470.c
+index b3b116da1bb0d..14005725a726b 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a77470.c
++++ b/drivers/pinctrl/renesas/pfc-r8a77470.c
+@@ -2121,7 +2121,7 @@ static const unsigned int vin0_clk_mux[] = {
+ 	VI0_CLK_MARK,
+ };
+ /* - VIN1 ------------------------------------------------------------------- */
+-static const union vin_data vin1_data_pins = {
++static const union vin_data12 vin1_data_pins = {
+ 	.data12 = {
+ 		RCAR_GP_PIN(3,  1), RCAR_GP_PIN(3, 2),
+ 		RCAR_GP_PIN(3,  3), RCAR_GP_PIN(3, 4),
+@@ -2131,7 +2131,7 @@ static const union vin_data vin1_data_pins = {
+ 		RCAR_GP_PIN(3, 15), RCAR_GP_PIN(3, 16),
+ 	},
+ };
+-static const union vin_data vin1_data_mux = {
++static const union vin_data12 vin1_data_mux = {
+ 	.data12 = {
+ 		VI1_DATA0_MARK, VI1_DATA1_MARK,
+ 		VI1_DATA2_MARK, VI1_DATA3_MARK,
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index 7f809a57bee50..56fff83a143bd 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -1002,6 +1002,16 @@ samsung_pinctrl_get_soc_data_for_of_alias(struct platform_device *pdev)
+ 	return &(of_data->ctrl[id]);
+ }
+ 
++static void samsung_banks_of_node_put(struct samsung_pinctrl_drv_data *d)
++{
++	struct samsung_pin_bank *bank;
++	unsigned int i;
++
++	bank = d->pin_banks;
++	for (i = 0; i < d->nr_banks; ++i, ++bank)
++		of_node_put(bank->of_node);
++}
++
+ /* retrieve the soc specific data */
+ static const struct samsung_pin_ctrl *
+ samsung_pinctrl_get_soc_data(struct samsung_pinctrl_drv_data *d,
+@@ -1116,19 +1126,19 @@ static int samsung_pinctrl_probe(struct platform_device *pdev)
+ 	if (ctrl->retention_data) {
+ 		drvdata->retention_ctrl = ctrl->retention_data->init(drvdata,
+ 							  ctrl->retention_data);
+-		if (IS_ERR(drvdata->retention_ctrl))
+-			return PTR_ERR(drvdata->retention_ctrl);
++		if (IS_ERR(drvdata->retention_ctrl)) {
++			ret = PTR_ERR(drvdata->retention_ctrl);
++			goto err_put_banks;
++		}
+ 	}
+ 
+ 	ret = samsung_pinctrl_register(pdev, drvdata);
+ 	if (ret)
+-		return ret;
++		goto err_put_banks;
+ 
+ 	ret = samsung_gpiolib_register(pdev, drvdata);
+-	if (ret) {
+-		samsung_pinctrl_unregister(pdev, drvdata);
+-		return ret;
+-	}
++	if (ret)
++		goto err_unregister;
+ 
+ 	if (ctrl->eint_gpio_init)
+ 		ctrl->eint_gpio_init(drvdata);
+@@ -1138,6 +1148,12 @@ static int samsung_pinctrl_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, drvdata);
+ 
+ 	return 0;
++
++err_unregister:
++	samsung_pinctrl_unregister(pdev, drvdata);
++err_put_banks:
++	samsung_banks_of_node_put(drvdata);
++	return ret;
+ }
+ 
+ /*
+diff --git a/drivers/platform/chrome/Makefile b/drivers/platform/chrome/Makefile
+index f901d2e43166c..88cbc434c06b2 100644
+--- a/drivers/platform/chrome/Makefile
++++ b/drivers/platform/chrome/Makefile
+@@ -2,6 +2,7 @@
+ 
+ # tell define_trace.h where to find the cros ec trace header
+ CFLAGS_cros_ec_trace.o:=		-I$(src)
++CFLAGS_cros_ec_sensorhub_ring.o:=	-I$(src)
+ 
+ obj-$(CONFIG_CHROMEOS_LAPTOP)		+= chromeos_laptop.o
+ obj-$(CONFIG_CHROMEOS_PSTORE)		+= chromeos_pstore.o
+@@ -20,7 +21,7 @@ obj-$(CONFIG_CROS_EC_CHARDEV)		+= cros_ec_chardev.o
+ obj-$(CONFIG_CROS_EC_LIGHTBAR)		+= cros_ec_lightbar.o
+ obj-$(CONFIG_CROS_EC_VBC)		+= cros_ec_vbc.o
+ obj-$(CONFIG_CROS_EC_DEBUGFS)		+= cros_ec_debugfs.o
+-cros-ec-sensorhub-objs			:= cros_ec_sensorhub.o cros_ec_sensorhub_ring.o cros_ec_trace.o
++cros-ec-sensorhub-objs			:= cros_ec_sensorhub.o cros_ec_sensorhub_ring.o
+ obj-$(CONFIG_CROS_EC_SENSORHUB)		+= cros-ec-sensorhub.o
+ obj-$(CONFIG_CROS_EC_SYSFS)		+= cros_ec_sysfs.o
+ obj-$(CONFIG_CROS_USBPD_LOGGER)		+= cros_usbpd_logger.o
+diff --git a/drivers/platform/chrome/cros_ec_sensorhub_ring.c b/drivers/platform/chrome/cros_ec_sensorhub_ring.c
+index 98e37080f7609..71948dade0e2a 100644
+--- a/drivers/platform/chrome/cros_ec_sensorhub_ring.c
++++ b/drivers/platform/chrome/cros_ec_sensorhub_ring.c
+@@ -17,7 +17,8 @@
+ #include <linux/sort.h>
+ #include <linux/slab.h>
+ 
+-#include "cros_ec_trace.h"
++#define CREATE_TRACE_POINTS
++#include "cros_ec_sensorhub_trace.h"
+ 
+ /* Precision of fixed point for the m values from the filter */
+ #define M_PRECISION BIT(23)
+diff --git a/drivers/platform/chrome/cros_ec_sensorhub_trace.h b/drivers/platform/chrome/cros_ec_sensorhub_trace.h
+new file mode 100644
+index 0000000000000..57d9b47859692
+--- /dev/null
++++ b/drivers/platform/chrome/cros_ec_sensorhub_trace.h
+@@ -0,0 +1,123 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Trace events for the ChromeOS Sensorhub kernel module
++ *
++ * Copyright 2021 Google LLC.
++ */
++
++#undef TRACE_SYSTEM
++#define TRACE_SYSTEM cros_ec
++
++#if !defined(_CROS_EC_SENSORHUB_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
++#define _CROS_EC_SENSORHUB_TRACE_H_
++
++#include <linux/types.h>
++#include <linux/platform_data/cros_ec_sensorhub.h>
++
++#include <linux/tracepoint.h>
++
++TRACE_EVENT(cros_ec_sensorhub_timestamp,
++	    TP_PROTO(u32 ec_sample_timestamp, u32 ec_fifo_timestamp, s64 fifo_timestamp,
++		     s64 current_timestamp, s64 current_time),
++	TP_ARGS(ec_sample_timestamp, ec_fifo_timestamp, fifo_timestamp, current_timestamp,
++		current_time),
++	TP_STRUCT__entry(
++		__field(u32, ec_sample_timestamp)
++		__field(u32, ec_fifo_timestamp)
++		__field(s64, fifo_timestamp)
++		__field(s64, current_timestamp)
++		__field(s64, current_time)
++		__field(s64, delta)
++	),
++	TP_fast_assign(
++		__entry->ec_sample_timestamp = ec_sample_timestamp;
++		__entry->ec_fifo_timestamp = ec_fifo_timestamp;
++		__entry->fifo_timestamp = fifo_timestamp;
++		__entry->current_timestamp = current_timestamp;
++		__entry->current_time = current_time;
++		__entry->delta = current_timestamp - current_time;
++	),
++	TP_printk("ec_ts: %9u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
++		  __entry->ec_sample_timestamp,
++		__entry->ec_fifo_timestamp,
++		__entry->fifo_timestamp,
++		__entry->current_timestamp,
++		__entry->current_time,
++		__entry->delta
++	)
++);
++
++TRACE_EVENT(cros_ec_sensorhub_data,
++	    TP_PROTO(u32 ec_sensor_num, u32 ec_fifo_timestamp, s64 fifo_timestamp,
++		     s64 current_timestamp, s64 current_time),
++	TP_ARGS(ec_sensor_num, ec_fifo_timestamp, fifo_timestamp, current_timestamp, current_time),
++	TP_STRUCT__entry(
++		__field(u32, ec_sensor_num)
++		__field(u32, ec_fifo_timestamp)
++		__field(s64, fifo_timestamp)
++		__field(s64, current_timestamp)
++		__field(s64, current_time)
++		__field(s64, delta)
++	),
++	TP_fast_assign(
++		__entry->ec_sensor_num = ec_sensor_num;
++		__entry->ec_fifo_timestamp = ec_fifo_timestamp;
++		__entry->fifo_timestamp = fifo_timestamp;
++		__entry->current_timestamp = current_timestamp;
++		__entry->current_time = current_time;
++		__entry->delta = current_timestamp - current_time;
++	),
++	TP_printk("ec_num: %4u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
++		  __entry->ec_sensor_num,
++		__entry->ec_fifo_timestamp,
++		__entry->fifo_timestamp,
++		__entry->current_timestamp,
++		__entry->current_time,
++		__entry->delta
++	)
++);
++
++TRACE_EVENT(cros_ec_sensorhub_filter,
++	    TP_PROTO(struct cros_ec_sensors_ts_filter_state *state, s64 dx, s64 dy),
++	TP_ARGS(state, dx, dy),
++	TP_STRUCT__entry(
++		__field(s64, dx)
++		__field(s64, dy)
++		__field(s64, median_m)
++		__field(s64, median_error)
++		__field(s64, history_len)
++		__field(s64, x)
++		__field(s64, y)
++	),
++	TP_fast_assign(
++		__entry->dx = dx;
++		__entry->dy = dy;
++		__entry->median_m = state->median_m;
++		__entry->median_error = state->median_error;
++		__entry->history_len = state->history_len;
++		__entry->x = state->x_offset;
++		__entry->y = state->y_offset;
++	),
++	TP_printk("dx: %12lld. dy: %12lld median_m: %12lld median_error: %12lld len: %lld x: %12lld y: %12lld",
++		  __entry->dx,
++		__entry->dy,
++		__entry->median_m,
++		__entry->median_error,
++		__entry->history_len,
++		__entry->x,
++		__entry->y
++	)
++);
++
++
++#endif /* _CROS_EC_SENSORHUB_TRACE_H_ */
++
++/* this part must be outside header guard */
++
++#undef TRACE_INCLUDE_PATH
++#define TRACE_INCLUDE_PATH .
++
++#undef TRACE_INCLUDE_FILE
++#define TRACE_INCLUDE_FILE cros_ec_sensorhub_trace
++
++#include <trace/define_trace.h>
+diff --git a/drivers/platform/chrome/cros_ec_trace.h b/drivers/platform/chrome/cros_ec_trace.h
+index 7e7cfc98657a4..9bb5cd2c98b8b 100644
+--- a/drivers/platform/chrome/cros_ec_trace.h
++++ b/drivers/platform/chrome/cros_ec_trace.h
+@@ -15,7 +15,6 @@
+ #include <linux/types.h>
+ #include <linux/platform_data/cros_ec_commands.h>
+ #include <linux/platform_data/cros_ec_proto.h>
+-#include <linux/platform_data/cros_ec_sensorhub.h>
+ 
+ #include <linux/tracepoint.h>
+ 
+@@ -71,100 +70,6 @@ TRACE_EVENT(cros_ec_request_done,
+ 		  __entry->retval)
+ );
+ 
+-TRACE_EVENT(cros_ec_sensorhub_timestamp,
+-	    TP_PROTO(u32 ec_sample_timestamp, u32 ec_fifo_timestamp, s64 fifo_timestamp,
+-		     s64 current_timestamp, s64 current_time),
+-	TP_ARGS(ec_sample_timestamp, ec_fifo_timestamp, fifo_timestamp, current_timestamp,
+-		current_time),
+-	TP_STRUCT__entry(
+-		__field(u32, ec_sample_timestamp)
+-		__field(u32, ec_fifo_timestamp)
+-		__field(s64, fifo_timestamp)
+-		__field(s64, current_timestamp)
+-		__field(s64, current_time)
+-		__field(s64, delta)
+-	),
+-	TP_fast_assign(
+-		__entry->ec_sample_timestamp = ec_sample_timestamp;
+-		__entry->ec_fifo_timestamp = ec_fifo_timestamp;
+-		__entry->fifo_timestamp = fifo_timestamp;
+-		__entry->current_timestamp = current_timestamp;
+-		__entry->current_time = current_time;
+-		__entry->delta = current_timestamp - current_time;
+-	),
+-	TP_printk("ec_ts: %9u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
+-		  __entry->ec_sample_timestamp,
+-		__entry->ec_fifo_timestamp,
+-		__entry->fifo_timestamp,
+-		__entry->current_timestamp,
+-		__entry->current_time,
+-		__entry->delta
+-	)
+-);
+-
+-TRACE_EVENT(cros_ec_sensorhub_data,
+-	    TP_PROTO(u32 ec_sensor_num, u32 ec_fifo_timestamp, s64 fifo_timestamp,
+-		     s64 current_timestamp, s64 current_time),
+-	TP_ARGS(ec_sensor_num, ec_fifo_timestamp, fifo_timestamp, current_timestamp, current_time),
+-	TP_STRUCT__entry(
+-		__field(u32, ec_sensor_num)
+-		__field(u32, ec_fifo_timestamp)
+-		__field(s64, fifo_timestamp)
+-		__field(s64, current_timestamp)
+-		__field(s64, current_time)
+-		__field(s64, delta)
+-	),
+-	TP_fast_assign(
+-		__entry->ec_sensor_num = ec_sensor_num;
+-		__entry->ec_fifo_timestamp = ec_fifo_timestamp;
+-		__entry->fifo_timestamp = fifo_timestamp;
+-		__entry->current_timestamp = current_timestamp;
+-		__entry->current_time = current_time;
+-		__entry->delta = current_timestamp - current_time;
+-	),
+-	TP_printk("ec_num: %4u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
+-		  __entry->ec_sensor_num,
+-		__entry->ec_fifo_timestamp,
+-		__entry->fifo_timestamp,
+-		__entry->current_timestamp,
+-		__entry->current_time,
+-		__entry->delta
+-	)
+-);
+-
+-TRACE_EVENT(cros_ec_sensorhub_filter,
+-	    TP_PROTO(struct cros_ec_sensors_ts_filter_state *state, s64 dx, s64 dy),
+-	TP_ARGS(state, dx, dy),
+-	TP_STRUCT__entry(
+-		__field(s64, dx)
+-		__field(s64, dy)
+-		__field(s64, median_m)
+-		__field(s64, median_error)
+-		__field(s64, history_len)
+-		__field(s64, x)
+-		__field(s64, y)
+-	),
+-	TP_fast_assign(
+-		__entry->dx = dx;
+-		__entry->dy = dy;
+-		__entry->median_m = state->median_m;
+-		__entry->median_error = state->median_error;
+-		__entry->history_len = state->history_len;
+-		__entry->x = state->x_offset;
+-		__entry->y = state->y_offset;
+-	),
+-	TP_printk("dx: %12lld. dy: %12lld median_m: %12lld median_error: %12lld len: %lld x: %12lld y: %12lld",
+-		  __entry->dx,
+-		__entry->dy,
+-		__entry->median_m,
+-		__entry->median_error,
+-		__entry->history_len,
+-		__entry->x,
+-		__entry->y
+-	)
+-);
+-
+-
+ #endif /* _CROS_EC_TRACE_H_ */
+ 
+ /* this part must be outside header guard */
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index 036d54dc52e24..cc336457ca808 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -712,7 +712,13 @@ static int cros_typec_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	typec->dev = dev;
++
+ 	typec->ec = dev_get_drvdata(pdev->dev.parent);
++	if (!typec->ec) {
++		dev_err(dev, "couldn't find parent EC device\n");
++		return -ENODEV;
++	}
++
+ 	platform_set_drvdata(pdev, typec);
+ 
+ 	ret = cros_typec_get_cmd_version(typec);
+diff --git a/drivers/platform/x86/huawei-wmi.c b/drivers/platform/x86/huawei-wmi.c
+index a2d846c4a7eef..eac3e6b4ea113 100644
+--- a/drivers/platform/x86/huawei-wmi.c
++++ b/drivers/platform/x86/huawei-wmi.c
+@@ -470,10 +470,17 @@ static DEVICE_ATTR_RW(charge_control_thresholds);
+ 
+ static int huawei_wmi_battery_add(struct power_supply *battery)
+ {
+-	device_create_file(&battery->dev, &dev_attr_charge_control_start_threshold);
+-	device_create_file(&battery->dev, &dev_attr_charge_control_end_threshold);
++	int err = 0;
+ 
+-	return 0;
++	err = device_create_file(&battery->dev, &dev_attr_charge_control_start_threshold);
++	if (err)
++		return err;
++
++	err = device_create_file(&battery->dev, &dev_attr_charge_control_end_threshold);
++	if (err)
++		device_remove_file(&battery->dev, &dev_attr_charge_control_start_threshold);
++
++	return err;
+ }
+ 
+ static int huawei_wmi_battery_remove(struct power_supply *battery)
+diff --git a/drivers/power/reset/gemini-poweroff.c b/drivers/power/reset/gemini-poweroff.c
+index 90e35c07240ae..b7f7a8225f22e 100644
+--- a/drivers/power/reset/gemini-poweroff.c
++++ b/drivers/power/reset/gemini-poweroff.c
+@@ -107,8 +107,8 @@ static int gemini_poweroff_probe(struct platform_device *pdev)
+ 		return PTR_ERR(gpw->base);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (!irq)
+-		return -EINVAL;
++	if (irq < 0)
++		return irq;
+ 
+ 	gpw->dev = dev;
+ 
+diff --git a/drivers/power/supply/ab8500_fg.c b/drivers/power/supply/ab8500_fg.c
+index f1da757c939f8..a6b4a94c27662 100644
+--- a/drivers/power/supply/ab8500_fg.c
++++ b/drivers/power/supply/ab8500_fg.c
+@@ -2541,8 +2541,10 @@ static int ab8500_fg_sysfs_init(struct ab8500_fg *di)
+ 	ret = kobject_init_and_add(&di->fg_kobject,
+ 		&ab8500_fg_ktype,
+ 		NULL, "battery");
+-	if (ret < 0)
++	if (ret < 0) {
++		kobject_put(&di->fg_kobject);
+ 		dev_err(di->dev, "failed to create sysfs entry\n");
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c
+index 845af0f44c022..8c3c378dce0d5 100644
+--- a/drivers/power/supply/bq24190_charger.c
++++ b/drivers/power/supply/bq24190_charger.c
+@@ -41,6 +41,7 @@
+ #define BQ24190_REG_POC_CHG_CONFIG_DISABLE		0x0
+ #define BQ24190_REG_POC_CHG_CONFIG_CHARGE		0x1
+ #define BQ24190_REG_POC_CHG_CONFIG_OTG			0x2
++#define BQ24190_REG_POC_CHG_CONFIG_OTG_ALT		0x3
+ #define BQ24190_REG_POC_SYS_MIN_MASK		(BIT(3) | BIT(2) | BIT(1))
+ #define BQ24190_REG_POC_SYS_MIN_SHIFT		1
+ #define BQ24190_REG_POC_SYS_MIN_MIN			3000
+@@ -552,7 +553,11 @@ static int bq24190_vbus_is_enabled(struct regulator_dev *dev)
+ 	pm_runtime_mark_last_busy(bdi->dev);
+ 	pm_runtime_put_autosuspend(bdi->dev);
+ 
+-	return ret ? ret : val == BQ24190_REG_POC_CHG_CONFIG_OTG;
++	if (ret)
++		return ret;
++
++	return (val == BQ24190_REG_POC_CHG_CONFIG_OTG ||
++		val == BQ24190_REG_POC_CHG_CONFIG_OTG_ALT);
+ }
+ 
+ static const struct regulator_ops bq24190_vbus_ops = {
+diff --git a/drivers/power/supply/wm8350_power.c b/drivers/power/supply/wm8350_power.c
+index e05cee457471b..908cfd45d2624 100644
+--- a/drivers/power/supply/wm8350_power.c
++++ b/drivers/power/supply/wm8350_power.c
+@@ -408,44 +408,112 @@ static const struct power_supply_desc wm8350_usb_desc = {
+  *		Initialisation
+  *********************************************************************/
+ 
+-static void wm8350_init_charger(struct wm8350 *wm8350)
++static int wm8350_init_charger(struct wm8350 *wm8350)
+ {
++	int ret;
++
+ 	/* register our interest in charger events */
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_HOT,
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_HOT,
+ 			    wm8350_charger_handler, 0, "Battery hot", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_COLD,
++	if (ret)
++		goto err;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_COLD,
+ 			    wm8350_charger_handler, 0, "Battery cold", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_FAIL,
++	if (ret)
++		goto free_chg_bat_hot;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_FAIL,
+ 			    wm8350_charger_handler, 0, "Battery fail", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_TO,
++	if (ret)
++		goto free_chg_bat_cold;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_TO,
+ 			    wm8350_charger_handler, 0,
+ 			    "Charger timeout", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_END,
++	if (ret)
++		goto free_chg_bat_fail;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_END,
+ 			    wm8350_charger_handler, 0,
+ 			    "Charge end", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_START,
++	if (ret)
++		goto free_chg_to;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_START,
+ 			    wm8350_charger_handler, 0,
+ 			    "Charge start", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_FAST_RDY,
++	if (ret)
++		goto free_chg_end;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_FAST_RDY,
+ 			    wm8350_charger_handler, 0,
+ 			    "Fast charge ready", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P9,
++	if (ret)
++		goto free_chg_start;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P9,
+ 			    wm8350_charger_handler, 0,
+ 			    "Battery <3.9V", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P1,
++	if (ret)
++		goto free_chg_fast_rdy;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P1,
+ 			    wm8350_charger_handler, 0,
+ 			    "Battery <3.1V", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_2P85,
++	if (ret)
++		goto free_chg_vbatt_lt_3p9;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_2P85,
+ 			    wm8350_charger_handler, 0,
+ 			    "Battery <2.85V", wm8350);
++	if (ret)
++		goto free_chg_vbatt_lt_3p1;
+ 
+ 	/* and supply change events */
+-	wm8350_register_irq(wm8350, WM8350_IRQ_EXT_USB_FB,
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_EXT_USB_FB,
+ 			    wm8350_charger_handler, 0, "USB", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_EXT_WALL_FB,
++	if (ret)
++		goto free_chg_vbatt_lt_2p85;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_EXT_WALL_FB,
+ 			    wm8350_charger_handler, 0, "Wall", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_EXT_BAT_FB,
++	if (ret)
++		goto free_ext_usb_fb;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_EXT_BAT_FB,
+ 			    wm8350_charger_handler, 0, "Battery", wm8350);
++	if (ret)
++		goto free_ext_wall_fb;
++
++	return 0;
++
++free_ext_wall_fb:
++	wm8350_free_irq(wm8350, WM8350_IRQ_EXT_WALL_FB, wm8350);
++free_ext_usb_fb:
++	wm8350_free_irq(wm8350, WM8350_IRQ_EXT_USB_FB, wm8350);
++free_chg_vbatt_lt_2p85:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_2P85, wm8350);
++free_chg_vbatt_lt_3p1:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P1, wm8350);
++free_chg_vbatt_lt_3p9:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P9, wm8350);
++free_chg_fast_rdy:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_FAST_RDY, wm8350);
++free_chg_start:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_START, wm8350);
++free_chg_end:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_END, wm8350);
++free_chg_to:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_TO, wm8350);
++free_chg_bat_fail:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_BAT_FAIL, wm8350);
++free_chg_bat_cold:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_BAT_COLD, wm8350);
++free_chg_bat_hot:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_BAT_HOT, wm8350);
++err:
++	return ret;
+ }
+ 
+ static void free_charger_irq(struct wm8350 *wm8350)
+@@ -456,6 +524,7 @@ static void free_charger_irq(struct wm8350 *wm8350)
+ 	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_TO, wm8350);
+ 	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_END, wm8350);
+ 	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_START, wm8350);
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_FAST_RDY, wm8350);
+ 	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P9, wm8350);
+ 	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P1, wm8350);
+ 	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_2P85, wm8350);
+diff --git a/drivers/pwm/pwm-lpc18xx-sct.c b/drivers/pwm/pwm-lpc18xx-sct.c
+index 5ff11145c1a30..9b15b6a79082a 100644
+--- a/drivers/pwm/pwm-lpc18xx-sct.c
++++ b/drivers/pwm/pwm-lpc18xx-sct.c
+@@ -400,12 +400,6 @@ static int lpc18xx_pwm_probe(struct platform_device *pdev)
+ 	lpc18xx_pwm_writel(lpc18xx_pwm, LPC18XX_PWM_LIMIT,
+ 			   BIT(lpc18xx_pwm->period_event));
+ 
+-	ret = pwmchip_add(&lpc18xx_pwm->chip);
+-	if (ret < 0) {
+-		dev_err(&pdev->dev, "pwmchip_add failed: %d\n", ret);
+-		goto disable_pwmclk;
+-	}
+-
+ 	for (i = 0; i < lpc18xx_pwm->chip.npwm; i++) {
+ 		struct lpc18xx_pwm_data *data;
+ 
+@@ -415,14 +409,12 @@ static int lpc18xx_pwm_probe(struct platform_device *pdev)
+ 				    GFP_KERNEL);
+ 		if (!data) {
+ 			ret = -ENOMEM;
+-			goto remove_pwmchip;
++			goto disable_pwmclk;
+ 		}
+ 
+ 		pwm_set_chip_data(pwm, data);
+ 	}
+ 
+-	platform_set_drvdata(pdev, lpc18xx_pwm);
+-
+ 	val = lpc18xx_pwm_readl(lpc18xx_pwm, LPC18XX_PWM_CTRL);
+ 	val &= ~LPC18XX_PWM_BIDIR;
+ 	val &= ~LPC18XX_PWM_CTRL_HALT;
+@@ -430,10 +422,16 @@ static int lpc18xx_pwm_probe(struct platform_device *pdev)
+ 	val |= LPC18XX_PWM_PRE(0);
+ 	lpc18xx_pwm_writel(lpc18xx_pwm, LPC18XX_PWM_CTRL, val);
+ 
++	ret = pwmchip_add(&lpc18xx_pwm->chip);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "pwmchip_add failed: %d\n", ret);
++		goto disable_pwmclk;
++	}
++
++	platform_set_drvdata(pdev, lpc18xx_pwm);
++
+ 	return 0;
+ 
+-remove_pwmchip:
+-	pwmchip_remove(&lpc18xx_pwm->chip);
+ disable_pwmclk:
+ 	clk_disable_unprepare(lpc18xx_pwm->pwm_clk);
+ 	return ret;
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index 03e146e98abd5..8d784a2a09d86 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -1185,8 +1185,10 @@ static int rpm_reg_probe(struct platform_device *pdev)
+ 
+ 	for_each_available_child_of_node(dev->of_node, node) {
+ 		vreg = devm_kzalloc(&pdev->dev, sizeof(*vreg), GFP_KERNEL);
+-		if (!vreg)
++		if (!vreg) {
++			of_node_put(node);
+ 			return -ENOMEM;
++		}
+ 
+ 		ret = rpm_regulator_init_vreg(vreg, dev, node, rpm, vreg_data);
+ 
+diff --git a/drivers/regulator/rpi-panel-attiny-regulator.c b/drivers/regulator/rpi-panel-attiny-regulator.c
+index ee46bfbf5eee7..991b4730d7687 100644
+--- a/drivers/regulator/rpi-panel-attiny-regulator.c
++++ b/drivers/regulator/rpi-panel-attiny-regulator.c
+@@ -37,11 +37,24 @@ static const struct regmap_config attiny_regmap_config = {
+ static int attiny_lcd_power_enable(struct regulator_dev *rdev)
+ {
+ 	unsigned int data;
++	int ret, i;
+ 
+ 	regmap_write(rdev->regmap, REG_POWERON, 1);
++	msleep(80);
++
+ 	/* Wait for nPWRDWN to go low to indicate poweron is done. */
+-	regmap_read_poll_timeout(rdev->regmap, REG_PORTB, data,
+-					data & BIT(0), 10, 1000000);
++	for (i = 0; i < 20; i++) {
++		ret = regmap_read(rdev->regmap, REG_PORTB, &data);
++		if (!ret) {
++			if (data & BIT(0))
++				break;
++		}
++		usleep_range(10000, 12000);
++	}
++	usleep_range(10000, 12000);
++
++	if (ret)
++		pr_err("%s: regmap_read_poll_timeout failed %d\n", __func__, ret);
+ 
+ 	/* Default to the same orientation as the closed source
+ 	 * firmware used for the panel.  Runtime rotation
+@@ -57,23 +70,34 @@ static int attiny_lcd_power_disable(struct regulator_dev *rdev)
+ {
+ 	regmap_write(rdev->regmap, REG_PWM, 0);
+ 	regmap_write(rdev->regmap, REG_POWERON, 0);
+-	udelay(1);
++	msleep(30);
+ 	return 0;
+ }
+ 
+ static int attiny_lcd_power_is_enabled(struct regulator_dev *rdev)
+ {
+ 	unsigned int data;
+-	int ret;
++	int ret, i;
+ 
+-	ret = regmap_read(rdev->regmap, REG_POWERON, &data);
++	for (i = 0; i < 10; i++) {
++		ret = regmap_read(rdev->regmap, REG_POWERON, &data);
++		if (!ret)
++			break;
++		usleep_range(10000, 12000);
++	}
+ 	if (ret < 0)
+ 		return ret;
+ 
+ 	if (!(data & BIT(0)))
+ 		return 0;
+ 
+-	ret = regmap_read(rdev->regmap, REG_PORTB, &data);
++	for (i = 0; i < 10; i++) {
++		ret = regmap_read(rdev->regmap, REG_PORTB, &data);
++		if (!ret)
++			break;
++		usleep_range(10000, 12000);
++	}
++
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -103,20 +127,32 @@ static int attiny_update_status(struct backlight_device *bl)
+ {
+ 	struct regmap *regmap = bl_get_data(bl);
+ 	int brightness = bl->props.brightness;
++	int ret, i;
+ 
+ 	if (bl->props.power != FB_BLANK_UNBLANK ||
+ 	    bl->props.fb_blank != FB_BLANK_UNBLANK)
+ 		brightness = 0;
+ 
+-	return regmap_write(regmap, REG_PWM, brightness);
++	for (i = 0; i < 10; i++) {
++		ret = regmap_write(regmap, REG_PWM, brightness);
++		if (!ret)
++			break;
++	}
++
++	return ret;
+ }
+ 
+ static int attiny_get_brightness(struct backlight_device *bl)
+ {
+ 	struct regmap *regmap = bl_get_data(bl);
+-	int ret, brightness;
++	int ret, brightness, i;
++
++	for (i = 0; i < 10; i++) {
++		ret = regmap_read(regmap, REG_PWM, &brightness);
++		if (!ret)
++			break;
++	}
+ 
+-	ret = regmap_read(regmap, REG_PWM, &brightness);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -166,7 +202,7 @@ static int attiny_i2c_probe(struct i2c_client *i2c,
+ 	}
+ 
+ 	regmap_write(regmap, REG_POWERON, 0);
+-	mdelay(1);
++	msleep(30);
+ 
+ 	config.dev = &i2c->dev;
+ 	config.regmap = regmap;
+diff --git a/drivers/remoteproc/qcom_q6v5_adsp.c b/drivers/remoteproc/qcom_q6v5_adsp.c
+index 9eb599701f9b0..c39138d39cf07 100644
+--- a/drivers/remoteproc/qcom_q6v5_adsp.c
++++ b/drivers/remoteproc/qcom_q6v5_adsp.c
+@@ -406,6 +406,7 @@ static int adsp_alloc_memory_region(struct qcom_adsp *adsp)
+ 	}
+ 
+ 	ret = of_address_to_resource(node, 0, &r);
++	of_node_put(node);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index ebc3e755bcbcd..1b3aa84e36e7a 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -1594,18 +1594,20 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+ 	 * reserved memory regions from device's memory-region property.
+ 	 */
+ 	child = of_get_child_by_name(qproc->dev->of_node, "mba");
+-	if (!child)
++	if (!child) {
+ 		node = of_parse_phandle(qproc->dev->of_node,
+ 					"memory-region", 0);
+-	else
++	} else {
+ 		node = of_parse_phandle(child, "memory-region", 0);
++		of_node_put(child);
++	}
+ 
+ 	ret = of_address_to_resource(node, 0, &r);
++	of_node_put(node);
+ 	if (ret) {
+ 		dev_err(qproc->dev, "unable to resolve mba region\n");
+ 		return ret;
+ 	}
+-	of_node_put(node);
+ 
+ 	qproc->mba_phys = r.start;
+ 	qproc->mba_size = resource_size(&r);
+@@ -1622,14 +1624,15 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+ 	} else {
+ 		child = of_get_child_by_name(qproc->dev->of_node, "mpss");
+ 		node = of_parse_phandle(child, "memory-region", 0);
++		of_node_put(child);
+ 	}
+ 
+ 	ret = of_address_to_resource(node, 0, &r);
++	of_node_put(node);
+ 	if (ret) {
+ 		dev_err(qproc->dev, "unable to resolve mpss region\n");
+ 		return ret;
+ 	}
+-	of_node_put(node);
+ 
+ 	qproc->mpss_phys = qproc->mpss_reloc = r.start;
+ 	qproc->mpss_size = resource_size(&r);
+diff --git a/drivers/remoteproc/qcom_wcnss.c b/drivers/remoteproc/qcom_wcnss.c
+index e2573f79a137d..67286a4505cd1 100644
+--- a/drivers/remoteproc/qcom_wcnss.c
++++ b/drivers/remoteproc/qcom_wcnss.c
+@@ -448,6 +448,7 @@ static int wcnss_alloc_memory_region(struct qcom_wcnss *wcnss)
+ 	}
+ 
+ 	ret = of_address_to_resource(node, 0, &r);
++	of_node_put(node);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/remoteproc/remoteproc_debugfs.c b/drivers/remoteproc/remoteproc_debugfs.c
+index 7e5845376e9fa..e8bb0ee6b35ac 100644
+--- a/drivers/remoteproc/remoteproc_debugfs.c
++++ b/drivers/remoteproc/remoteproc_debugfs.c
+@@ -76,7 +76,7 @@ static ssize_t rproc_coredump_write(struct file *filp,
+ 	int ret, err = 0;
+ 	char buf[20];
+ 
+-	if (count > sizeof(buf))
++	if (count < 1 || count > sizeof(buf))
+ 		return -EINVAL;
+ 
+ 	ret = copy_from_user(buf, user_buf, count);
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index 794a4f036b998..146056858135e 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -807,9 +807,13 @@ static int rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer)
+ 	struct timerqueue_node *next = timerqueue_getnext(&rtc->timerqueue);
+ 	struct rtc_time tm;
+ 	ktime_t now;
++	int err;
++
++	err = __rtc_read_time(rtc, &tm);
++	if (err)
++		return err;
+ 
+ 	timer->enabled = 1;
+-	__rtc_read_time(rtc, &tm);
+ 	now = rtc_tm_to_ktime(tm);
+ 
+ 	/* Skip over expired timers */
+@@ -823,7 +827,6 @@ static int rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer)
+ 	trace_rtc_timer_enqueue(timer);
+ 	if (!next || ktime_before(timer->node.expires, next->expires)) {
+ 		struct rtc_wkalrm alarm;
+-		int err;
+ 
+ 		alarm.time = rtc_ktime_to_tm(timer->node.expires);
+ 		alarm.enabled = 1;
+diff --git a/drivers/rtc/rtc-pl030.c b/drivers/rtc/rtc-pl030.c
+index ebe03eba8f5ff..87c93843d62ad 100644
+--- a/drivers/rtc/rtc-pl030.c
++++ b/drivers/rtc/rtc-pl030.c
+@@ -137,7 +137,7 @@ static int pl030_probe(struct amba_device *dev, const struct amba_id *id)
+ 	return ret;
+ }
+ 
+-static int pl030_remove(struct amba_device *dev)
++static void pl030_remove(struct amba_device *dev)
+ {
+ 	struct pl030_rtc *rtc = amba_get_drvdata(dev);
+ 
+@@ -146,8 +146,6 @@ static int pl030_remove(struct amba_device *dev)
+ 	free_irq(dev->irq[0], rtc);
+ 	iounmap(rtc->base);
+ 	amba_release_regions(dev);
+-
+-	return 0;
+ }
+ 
+ static struct amba_id pl030_ids[] = {
+diff --git a/drivers/rtc/rtc-pl031.c b/drivers/rtc/rtc-pl031.c
+index d4b2ab7861266..2f5581ea26fe1 100644
+--- a/drivers/rtc/rtc-pl031.c
++++ b/drivers/rtc/rtc-pl031.c
+@@ -280,7 +280,7 @@ static int pl031_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
+ 	return 0;
+ }
+ 
+-static int pl031_remove(struct amba_device *adev)
++static void pl031_remove(struct amba_device *adev)
+ {
+ 	struct pl031_local *ldata = dev_get_drvdata(&adev->dev);
+ 
+@@ -289,8 +289,6 @@ static int pl031_remove(struct amba_device *adev)
+ 	if (adev->irq[0])
+ 		free_irq(adev->irq[0], ldata);
+ 	amba_release_regions(adev);
+-
+-	return 0;
+ }
+ 
+ static int pl031_probe(struct amba_device *adev, const struct amba_id *id)
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 50a1c3478a6e0..a8998b016b862 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -514,7 +514,7 @@ MODULE_PARM_DESC(intr_conv, "interrupt converge enable (0-1)");
+ 
+ /* permit overriding the host protection capabilities mask (EEDP/T10 PI) */
+ static int prot_mask;
+-module_param(prot_mask, int, 0);
++module_param(prot_mask, int, 0444);
+ MODULE_PARM_DESC(prot_mask, " host protection capabilities mask, def=0x0 ");
+ 
+ static bool auto_affine_msi_experimental;
+diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
+index 8b9a39077dbab..a1a06a832d866 100644
+--- a/drivers/scsi/libsas/sas_ata.c
++++ b/drivers/scsi/libsas/sas_ata.c
+@@ -202,7 +202,7 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
+ 		task->total_xfer_len = qc->nbytes;
+ 		task->num_scatter = qc->n_elem;
+ 		task->data_dir = qc->dma_dir;
+-	} else if (qc->tf.protocol == ATA_PROT_NODATA) {
++	} else if (!ata_is_data(qc->tf.protocol)) {
+ 		task->data_dir = DMA_NONE;
+ 	} else {
+ 		for_each_sg(qc->sg, sg, qc->n_elem, si)
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 9b318958d78cc..cd0e1d31db701 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -1727,6 +1727,7 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 	ccb->device = pm8001_ha_dev;
+ 	ccb->ccb_tag = ccb_tag;
+ 	ccb->task = task;
++	ccb->n_elem = 0;
+ 
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ 
+@@ -1788,6 +1789,7 @@ static void pm8001_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 	ccb->device = pm8001_ha_dev;
+ 	ccb->ccb_tag = ccb_tag;
+ 	ccb->task = task;
++	ccb->n_elem = 0;
+ 	pm8001_ha_dev->id |= NCQ_READ_LOG_FLAG;
+ 	pm8001_ha_dev->id |= NCQ_2ND_RLE_FLAG;
+ 
+@@ -1804,7 +1806,7 @@ static void pm8001_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 
+ 	sata_cmd.tag = cpu_to_le32(ccb_tag);
+ 	sata_cmd.device_id = cpu_to_le32(pm8001_ha_dev->device_id);
+-	sata_cmd.ncqtag_atap_dir_m |= ((0x1 << 7) | (0x5 << 9));
++	sata_cmd.ncqtag_atap_dir_m = cpu_to_le32((0x1 << 7) | (0x5 << 9));
+ 	memcpy(&sata_cmd.sata_fis, &fis, sizeof(struct host_to_dev_fis));
+ 
+ 	res = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd,
+@@ -2365,7 +2367,8 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 				len = sizeof(struct pio_setup_fis);
+ 				pm8001_dbg(pm8001_ha, IO,
+ 					   "PIO read len = %d\n", len);
+-			} else if (t->ata_task.use_ncq) {
++			} else if (t->ata_task.use_ncq &&
++				   t->data_dir != DMA_NONE) {
+ 				len = sizeof(struct set_dev_bits_fis);
+ 				pm8001_dbg(pm8001_ha, IO, "FPDMA len = %d\n",
+ 					   len);
+@@ -4220,22 +4223,22 @@ static int pm8001_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 	u32  opc = OPC_INB_SATA_HOST_OPSTART;
+ 	memset(&sata_cmd, 0, sizeof(sata_cmd));
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+-	if (task->data_dir == DMA_NONE) {
++
++	if (task->data_dir == DMA_NONE && !task->ata_task.use_ncq) {
+ 		ATAP = 0x04;  /* no data*/
+ 		pm8001_dbg(pm8001_ha, IO, "no data\n");
+ 	} else if (likely(!task->ata_task.device_control_reg_update)) {
+-		if (task->ata_task.dma_xfer) {
++		if (task->ata_task.use_ncq &&
++		    dev->sata_dev.class != ATA_DEV_ATAPI) {
++			ATAP = 0x07; /* FPDMA */
++			pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
++		} else if (task->ata_task.dma_xfer) {
+ 			ATAP = 0x06; /* DMA */
+ 			pm8001_dbg(pm8001_ha, IO, "DMA\n");
+ 		} else {
+ 			ATAP = 0x05; /* PIO*/
+ 			pm8001_dbg(pm8001_ha, IO, "PIO\n");
+ 		}
+-		if (task->ata_task.use_ncq &&
+-			dev->sata_dev.class != ATA_DEV_ATAPI) {
+-			ATAP = 0x07; /* FPDMA */
+-			pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
+-		}
+ 	}
+ 	if (task->ata_task.use_ncq && pm8001_get_ncq_tag(task, &hdr_tag)) {
+ 		task->ata_task.fis.sector_count |= (u8) (hdr_tag << 3);
+@@ -4575,7 +4578,7 @@ int pm8001_chip_ssp_tm_req(struct pm8001_hba_info *pm8001_ha,
+ 	memcpy(sspTMCmd.lun, task->ssp_task.LUN, 8);
+ 	sspTMCmd.tag = cpu_to_le32(ccb->ccb_tag);
+ 	if (pm8001_ha->chip_id != chip_8001)
+-		sspTMCmd.ds_ads_m = 0x08;
++		sspTMCmd.ds_ads_m = cpu_to_le32(0x08);
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ 	ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sspTMCmd,
+ 			sizeof(sspTMCmd), 0);
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 2a3ce4680734b..b5e60553acdc5 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -1199,9 +1199,11 @@ pm80xx_set_thermal_config(struct pm8001_hba_info *pm8001_ha)
+ 	else
+ 		page_code = THERMAL_PAGE_CODE_8H;
+ 
+-	payload.cfg_pg[0] = (THERMAL_LOG_ENABLE << 9) |
+-				(THERMAL_ENABLE << 8) | page_code;
+-	payload.cfg_pg[1] = (LTEMPHIL << 24) | (RTEMPHIL << 8);
++	payload.cfg_pg[0] =
++		cpu_to_le32((THERMAL_LOG_ENABLE << 9) |
++			    (THERMAL_ENABLE << 8) | page_code);
++	payload.cfg_pg[1] =
++		cpu_to_le32((LTEMPHIL << 24) | (RTEMPHIL << 8));
+ 
+ 	pm8001_dbg(pm8001_ha, DEV,
+ 		   "Setting up thermal config. cfg_pg 0 0x%x cfg_pg 1 0x%x\n",
+@@ -1241,43 +1243,41 @@ pm80xx_set_sas_protocol_timer_config(struct pm8001_hba_info *pm8001_ha)
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ 	payload.tag = cpu_to_le32(tag);
+ 
+-	SASConfigPage.pageCode        =  SAS_PROTOCOL_TIMER_CONFIG_PAGE;
+-	SASConfigPage.MST_MSI         =  3 << 15;
+-	SASConfigPage.STP_SSP_MCT_TMO =  (STP_MCT_TMO << 16) | SSP_MCT_TMO;
+-	SASConfigPage.STP_FRM_TMO     = (SAS_MAX_OPEN_TIME << 24) |
+-				(SMP_MAX_CONN_TIMER << 16) | STP_FRM_TIMER;
+-	SASConfigPage.STP_IDLE_TMO    =  STP_IDLE_TIME;
+-
+-	if (SASConfigPage.STP_IDLE_TMO > 0x3FFFFFF)
+-		SASConfigPage.STP_IDLE_TMO = 0x3FFFFFF;
+-
+-
+-	SASConfigPage.OPNRJT_RTRY_INTVL =         (SAS_MFD << 16) |
+-						SAS_OPNRJT_RTRY_INTVL;
+-	SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO =  (SAS_DOPNRJT_RTRY_TMO << 16)
+-						| SAS_COPNRJT_RTRY_TMO;
+-	SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR =  (SAS_DOPNRJT_RTRY_THR << 16)
+-						| SAS_COPNRJT_RTRY_THR;
+-	SASConfigPage.MAX_AIP =  SAS_MAX_AIP;
++	SASConfigPage.pageCode = cpu_to_le32(SAS_PROTOCOL_TIMER_CONFIG_PAGE);
++	SASConfigPage.MST_MSI = cpu_to_le32(3 << 15);
++	SASConfigPage.STP_SSP_MCT_TMO =
++		cpu_to_le32((STP_MCT_TMO << 16) | SSP_MCT_TMO);
++	SASConfigPage.STP_FRM_TMO =
++		cpu_to_le32((SAS_MAX_OPEN_TIME << 24) |
++			    (SMP_MAX_CONN_TIMER << 16) | STP_FRM_TIMER);
++	SASConfigPage.STP_IDLE_TMO = cpu_to_le32(STP_IDLE_TIME);
++
++	SASConfigPage.OPNRJT_RTRY_INTVL =
++		cpu_to_le32((SAS_MFD << 16) | SAS_OPNRJT_RTRY_INTVL);
++	SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO =
++		cpu_to_le32((SAS_DOPNRJT_RTRY_TMO << 16) | SAS_COPNRJT_RTRY_TMO);
++	SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR =
++		cpu_to_le32((SAS_DOPNRJT_RTRY_THR << 16) | SAS_COPNRJT_RTRY_THR);
++	SASConfigPage.MAX_AIP = cpu_to_le32(SAS_MAX_AIP);
+ 
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.pageCode 0x%08x\n",
+-		   SASConfigPage.pageCode);
++		   le32_to_cpu(SASConfigPage.pageCode));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.MST_MSI  0x%08x\n",
+-		   SASConfigPage.MST_MSI);
++		   le32_to_cpu(SASConfigPage.MST_MSI));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.STP_SSP_MCT_TMO  0x%08x\n",
+-		   SASConfigPage.STP_SSP_MCT_TMO);
++		   le32_to_cpu(SASConfigPage.STP_SSP_MCT_TMO));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.STP_FRM_TMO  0x%08x\n",
+-		   SASConfigPage.STP_FRM_TMO);
++		   le32_to_cpu(SASConfigPage.STP_FRM_TMO));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.STP_IDLE_TMO  0x%08x\n",
+-		   SASConfigPage.STP_IDLE_TMO);
++		   le32_to_cpu(SASConfigPage.STP_IDLE_TMO));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.OPNRJT_RTRY_INTVL  0x%08x\n",
+-		   SASConfigPage.OPNRJT_RTRY_INTVL);
++		   le32_to_cpu(SASConfigPage.OPNRJT_RTRY_INTVL));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO  0x%08x\n",
+-		   SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO);
++		   le32_to_cpu(SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR  0x%08x\n",
+-		   SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR);
++		   le32_to_cpu(SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.MAX_AIP  0x%08x\n",
+-		   SASConfigPage.MAX_AIP);
++		   le32_to_cpu(SASConfigPage.MAX_AIP));
+ 
+ 	memcpy(&payload.cfg_pg, &SASConfigPage,
+ 			 sizeof(SASProtocolTimerConfig_t));
+@@ -1403,12 +1403,13 @@ static int pm80xx_encrypt_update(struct pm8001_hba_info *pm8001_ha)
+ 	/* Currently only one key is used. New KEK index is 1.
+ 	 * Current KEK index is 1. Store KEK to NVRAM is 1.
+ 	 */
+-	payload.new_curidx_ksop = ((1 << 24) | (1 << 16) | (1 << 8) |
+-					KEK_MGMT_SUBOP_KEYCARDUPDATE);
++	payload.new_curidx_ksop =
++		cpu_to_le32(((1 << 24) | (1 << 16) | (1 << 8) |
++			     KEK_MGMT_SUBOP_KEYCARDUPDATE));
+ 
+ 	pm8001_dbg(pm8001_ha, DEV,
+ 		   "Saving Encryption info to flash. payload 0x%x\n",
+-		   payload.new_curidx_ksop);
++		   le32_to_cpu(payload.new_curidx_ksop));
+ 
+ 	rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+ 			sizeof(payload), 0);
+@@ -1749,6 +1750,7 @@ static void pm80xx_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 	ccb->device = pm8001_ha_dev;
+ 	ccb->ccb_tag = ccb_tag;
+ 	ccb->task = task;
++	ccb->n_elem = 0;
+ 
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ 
+@@ -1830,7 +1832,7 @@ static void pm80xx_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 
+ 	sata_cmd.tag = cpu_to_le32(ccb_tag);
+ 	sata_cmd.device_id = cpu_to_le32(pm8001_ha_dev->device_id);
+-	sata_cmd.ncqtag_atap_dir_m_dad |= ((0x1 << 7) | (0x5 << 9));
++	sata_cmd.ncqtag_atap_dir_m_dad = cpu_to_le32(((0x1 << 7) | (0x5 << 9)));
+ 	memcpy(&sata_cmd.sata_fis, &fis, sizeof(struct host_to_dev_fis));
+ 
+ 	res = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd,
+@@ -2464,7 +2466,8 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 				len = sizeof(struct pio_setup_fis);
+ 				pm8001_dbg(pm8001_ha, IO,
+ 					   "PIO read len = %d\n", len);
+-			} else if (t->ata_task.use_ncq) {
++			} else if (t->ata_task.use_ncq &&
++				   t->data_dir != DMA_NONE) {
+ 				len = sizeof(struct set_dev_bits_fis);
+ 				pm8001_dbg(pm8001_ha, IO, "FPDMA len = %d\n",
+ 					   len);
+@@ -4307,13 +4310,15 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 	struct ssp_ini_io_start_req ssp_cmd;
+ 	u32 tag = ccb->ccb_tag;
+ 	int ret;
+-	u64 phys_addr, start_addr, end_addr;
++	u64 phys_addr, end_addr;
+ 	u32 end_addr_high, end_addr_low;
+ 	struct inbound_queue_table *circularQ;
+ 	u32 q_index, cpu_id;
+ 	u32 opc = OPC_INB_SSPINIIOSTART;
++
+ 	memset(&ssp_cmd, 0, sizeof(ssp_cmd));
+ 	memcpy(ssp_cmd.ssp_iu.lun, task->ssp_task.LUN, 8);
++
+ 	/* data address domain added for spcv; set to 0 by host,
+ 	 * used internally by controller
+ 	 * 0 for SAS 1.1 and SAS 2.0 compatible TLR
+@@ -4324,7 +4329,7 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 	ssp_cmd.device_id = cpu_to_le32(pm8001_dev->device_id);
+ 	ssp_cmd.tag = cpu_to_le32(tag);
+ 	if (task->ssp_task.enable_first_burst)
+-		ssp_cmd.ssp_iu.efb_prio_attr |= 0x80;
++		ssp_cmd.ssp_iu.efb_prio_attr = 0x80;
+ 	ssp_cmd.ssp_iu.efb_prio_attr |= (task->ssp_task.task_prio << 3);
+ 	ssp_cmd.ssp_iu.efb_prio_attr |= (task->ssp_task.task_attr & 7);
+ 	memcpy(ssp_cmd.ssp_iu.cdb, task->ssp_task.cmd->cmnd,
+@@ -4356,21 +4361,24 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 			ssp_cmd.enc_esgl = cpu_to_le32(1<<31);
+ 		} else if (task->num_scatter == 1) {
+ 			u64 dma_addr = sg_dma_address(task->scatter);
++
+ 			ssp_cmd.enc_addr_low =
+ 				cpu_to_le32(lower_32_bits(dma_addr));
+ 			ssp_cmd.enc_addr_high =
+ 				cpu_to_le32(upper_32_bits(dma_addr));
+ 			ssp_cmd.enc_len = cpu_to_le32(task->total_xfer_len);
+ 			ssp_cmd.enc_esgl = 0;
++
+ 			/* Check 4G Boundary */
+-			start_addr = cpu_to_le64(dma_addr);
+-			end_addr = (start_addr + ssp_cmd.enc_len) - 1;
+-			end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+-			end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+-			if (end_addr_high != ssp_cmd.enc_addr_high) {
++			end_addr = dma_addr + le32_to_cpu(ssp_cmd.enc_len) - 1;
++			end_addr_low = lower_32_bits(end_addr);
++			end_addr_high = upper_32_bits(end_addr);
++
++			if (end_addr_high != le32_to_cpu(ssp_cmd.enc_addr_high)) {
+ 				pm8001_dbg(pm8001_ha, FAIL,
+ 					   "The sg list address start_addr=0x%016llx data_len=0x%x end_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
+-					   start_addr, ssp_cmd.enc_len,
++					   dma_addr,
++					   le32_to_cpu(ssp_cmd.enc_len),
+ 					   end_addr_high, end_addr_low);
+ 				pm8001_chip_make_sg(task->scatter, 1,
+ 					ccb->buf_prd);
+@@ -4379,7 +4387,7 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 					cpu_to_le32(lower_32_bits(phys_addr));
+ 				ssp_cmd.enc_addr_high =
+ 					cpu_to_le32(upper_32_bits(phys_addr));
+-				ssp_cmd.enc_esgl = cpu_to_le32(1<<31);
++				ssp_cmd.enc_esgl = cpu_to_le32(1U<<31);
+ 			}
+ 		} else if (task->num_scatter == 0) {
+ 			ssp_cmd.enc_addr_low = 0;
+@@ -4387,8 +4395,10 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 			ssp_cmd.enc_len = cpu_to_le32(task->total_xfer_len);
+ 			ssp_cmd.enc_esgl = 0;
+ 		}
++
+ 		/* XTS mode. All other fields are 0 */
+-		ssp_cmd.key_cmode = 0x6 << 4;
++		ssp_cmd.key_cmode = cpu_to_le32(0x6 << 4);
++
+ 		/* set tweak values. Should be the start lba */
+ 		ssp_cmd.twk_val0 = cpu_to_le32((task->ssp_task.cmd->cmnd[2] << 24) |
+ 						(task->ssp_task.cmd->cmnd[3] << 16) |
+@@ -4410,20 +4420,22 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 			ssp_cmd.esgl = cpu_to_le32(1<<31);
+ 		} else if (task->num_scatter == 1) {
+ 			u64 dma_addr = sg_dma_address(task->scatter);
++
+ 			ssp_cmd.addr_low = cpu_to_le32(lower_32_bits(dma_addr));
+ 			ssp_cmd.addr_high =
+ 				cpu_to_le32(upper_32_bits(dma_addr));
+ 			ssp_cmd.len = cpu_to_le32(task->total_xfer_len);
+ 			ssp_cmd.esgl = 0;
++
+ 			/* Check 4G Boundary */
+-			start_addr = cpu_to_le64(dma_addr);
+-			end_addr = (start_addr + ssp_cmd.len) - 1;
+-			end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+-			end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+-			if (end_addr_high != ssp_cmd.addr_high) {
++			end_addr = dma_addr + le32_to_cpu(ssp_cmd.len) - 1;
++			end_addr_low = lower_32_bits(end_addr);
++			end_addr_high = upper_32_bits(end_addr);
++			if (end_addr_high != le32_to_cpu(ssp_cmd.addr_high)) {
+ 				pm8001_dbg(pm8001_ha, FAIL,
+ 					   "The sg list address start_addr=0x%016llx data_len=0x%x end_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
+-					   start_addr, ssp_cmd.len,
++					   dma_addr,
++					   le32_to_cpu(ssp_cmd.len),
+ 					   end_addr_high, end_addr_low);
+ 				pm8001_chip_make_sg(task->scatter, 1,
+ 					ccb->buf_prd);
+@@ -4457,7 +4469,7 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 	u32 q_index, cpu_id;
+ 	struct sata_start_req sata_cmd;
+ 	u32 hdr_tag, ncg_tag = 0;
+-	u64 phys_addr, start_addr, end_addr;
++	u64 phys_addr, end_addr;
+ 	u32 end_addr_high, end_addr_low;
+ 	u32 ATAP = 0x0;
+ 	u32 dir;
+@@ -4469,22 +4481,21 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 	q_index = (u32) (cpu_id) % (pm8001_ha->max_q_num);
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[q_index];
+ 
+-	if (task->data_dir == DMA_NONE) {
++	if (task->data_dir == DMA_NONE && !task->ata_task.use_ncq) {
+ 		ATAP = 0x04; /* no data*/
+ 		pm8001_dbg(pm8001_ha, IO, "no data\n");
+ 	} else if (likely(!task->ata_task.device_control_reg_update)) {
+-		if (task->ata_task.dma_xfer) {
++		if (task->ata_task.use_ncq &&
++		    dev->sata_dev.class != ATA_DEV_ATAPI) {
++			ATAP = 0x07; /* FPDMA */
++			pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
++		} else if (task->ata_task.dma_xfer) {
+ 			ATAP = 0x06; /* DMA */
+ 			pm8001_dbg(pm8001_ha, IO, "DMA\n");
+ 		} else {
+ 			ATAP = 0x05; /* PIO*/
+ 			pm8001_dbg(pm8001_ha, IO, "PIO\n");
+ 		}
+-		if (task->ata_task.use_ncq &&
+-		    dev->sata_dev.class != ATA_DEV_ATAPI) {
+-			ATAP = 0x07; /* FPDMA */
+-			pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
+-		}
+ 	}
+ 	if (task->ata_task.use_ncq && pm8001_get_ncq_tag(task, &hdr_tag)) {
+ 		task->ata_task.fis.sector_count |= (u8) (hdr_tag << 3);
+@@ -4518,32 +4529,38 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 			pm8001_chip_make_sg(task->scatter,
+ 						ccb->n_elem, ccb->buf_prd);
+ 			phys_addr = ccb->ccb_dma_handle;
+-			sata_cmd.enc_addr_low = lower_32_bits(phys_addr);
+-			sata_cmd.enc_addr_high = upper_32_bits(phys_addr);
++			sata_cmd.enc_addr_low =
++				cpu_to_le32(lower_32_bits(phys_addr));
++			sata_cmd.enc_addr_high =
++				cpu_to_le32(upper_32_bits(phys_addr));
+ 			sata_cmd.enc_esgl = cpu_to_le32(1 << 31);
+ 		} else if (task->num_scatter == 1) {
+ 			u64 dma_addr = sg_dma_address(task->scatter);
+-			sata_cmd.enc_addr_low = lower_32_bits(dma_addr);
+-			sata_cmd.enc_addr_high = upper_32_bits(dma_addr);
++
++			sata_cmd.enc_addr_low =
++				cpu_to_le32(lower_32_bits(dma_addr));
++			sata_cmd.enc_addr_high =
++				cpu_to_le32(upper_32_bits(dma_addr));
+ 			sata_cmd.enc_len = cpu_to_le32(task->total_xfer_len);
+ 			sata_cmd.enc_esgl = 0;
++
+ 			/* Check 4G Boundary */
+-			start_addr = cpu_to_le64(dma_addr);
+-			end_addr = (start_addr + sata_cmd.enc_len) - 1;
+-			end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+-			end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+-			if (end_addr_high != sata_cmd.enc_addr_high) {
++			end_addr = dma_addr + le32_to_cpu(sata_cmd.enc_len) - 1;
++			end_addr_low = lower_32_bits(end_addr);
++			end_addr_high = upper_32_bits(end_addr);
++			if (end_addr_high != le32_to_cpu(sata_cmd.enc_addr_high)) {
+ 				pm8001_dbg(pm8001_ha, FAIL,
+ 					   "The sg list address start_addr=0x%016llx data_len=0x%x end_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
+-					   start_addr, sata_cmd.enc_len,
++					   dma_addr,
++					   le32_to_cpu(sata_cmd.enc_len),
+ 					   end_addr_high, end_addr_low);
+ 				pm8001_chip_make_sg(task->scatter, 1,
+ 					ccb->buf_prd);
+ 				phys_addr = ccb->ccb_dma_handle;
+ 				sata_cmd.enc_addr_low =
+-					lower_32_bits(phys_addr);
++					cpu_to_le32(lower_32_bits(phys_addr));
+ 				sata_cmd.enc_addr_high =
+-					upper_32_bits(phys_addr);
++					cpu_to_le32(upper_32_bits(phys_addr));
+ 				sata_cmd.enc_esgl =
+ 					cpu_to_le32(1 << 31);
+ 			}
+@@ -4554,7 +4571,8 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 			sata_cmd.enc_esgl = 0;
+ 		}
+ 		/* XTS mode. All other fields are 0 */
+-		sata_cmd.key_index_mode = 0x6 << 4;
++		sata_cmd.key_index_mode = cpu_to_le32(0x6 << 4);
++
+ 		/* set tweak values. Should be the start lba */
+ 		sata_cmd.twk_val0 =
+ 			cpu_to_le32((sata_cmd.sata_fis.lbal_exp << 24) |
+@@ -4580,31 +4598,31 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 			phys_addr = ccb->ccb_dma_handle;
+ 			sata_cmd.addr_low = lower_32_bits(phys_addr);
+ 			sata_cmd.addr_high = upper_32_bits(phys_addr);
+-			sata_cmd.esgl = cpu_to_le32(1 << 31);
++			sata_cmd.esgl = cpu_to_le32(1U << 31);
+ 		} else if (task->num_scatter == 1) {
+ 			u64 dma_addr = sg_dma_address(task->scatter);
++
+ 			sata_cmd.addr_low = lower_32_bits(dma_addr);
+ 			sata_cmd.addr_high = upper_32_bits(dma_addr);
+ 			sata_cmd.len = cpu_to_le32(task->total_xfer_len);
+ 			sata_cmd.esgl = 0;
++
+ 			/* Check 4G Boundary */
+-			start_addr = cpu_to_le64(dma_addr);
+-			end_addr = (start_addr + sata_cmd.len) - 1;
+-			end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+-			end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
++			end_addr = dma_addr + le32_to_cpu(sata_cmd.len) - 1;
++			end_addr_low = lower_32_bits(end_addr);
++			end_addr_high = upper_32_bits(end_addr);
+ 			if (end_addr_high != sata_cmd.addr_high) {
+ 				pm8001_dbg(pm8001_ha, FAIL,
+ 					   "The sg list address start_addr=0x%016llx data_len=0x%xend_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
+-					   start_addr, sata_cmd.len,
++					   dma_addr,
++					   le32_to_cpu(sata_cmd.len),
+ 					   end_addr_high, end_addr_low);
+ 				pm8001_chip_make_sg(task->scatter, 1,
+ 					ccb->buf_prd);
+ 				phys_addr = ccb->ccb_dma_handle;
+-				sata_cmd.addr_low =
+-					lower_32_bits(phys_addr);
+-				sata_cmd.addr_high =
+-					upper_32_bits(phys_addr);
+-				sata_cmd.esgl = cpu_to_le32(1 << 31);
++				sata_cmd.addr_low = lower_32_bits(phys_addr);
++				sata_cmd.addr_high = upper_32_bits(phys_addr);
++				sata_cmd.esgl = cpu_to_le32(1U << 31);
+ 			}
+ 		} else if (task->num_scatter == 0) {
+ 			sata_cmd.addr_low = 0;
+@@ -4612,27 +4630,28 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 			sata_cmd.len = cpu_to_le32(task->total_xfer_len);
+ 			sata_cmd.esgl = 0;
+ 		}
++
+ 		/* scsi cdb */
+ 		sata_cmd.atapi_scsi_cdb[0] =
+ 			cpu_to_le32(((task->ata_task.atapi_packet[0]) |
+-			(task->ata_task.atapi_packet[1] << 8) |
+-			(task->ata_task.atapi_packet[2] << 16) |
+-			(task->ata_task.atapi_packet[3] << 24)));
++				     (task->ata_task.atapi_packet[1] << 8) |
++				     (task->ata_task.atapi_packet[2] << 16) |
++				     (task->ata_task.atapi_packet[3] << 24)));
+ 		sata_cmd.atapi_scsi_cdb[1] =
+ 			cpu_to_le32(((task->ata_task.atapi_packet[4]) |
+-			(task->ata_task.atapi_packet[5] << 8) |
+-			(task->ata_task.atapi_packet[6] << 16) |
+-			(task->ata_task.atapi_packet[7] << 24)));
++				     (task->ata_task.atapi_packet[5] << 8) |
++				     (task->ata_task.atapi_packet[6] << 16) |
++				     (task->ata_task.atapi_packet[7] << 24)));
+ 		sata_cmd.atapi_scsi_cdb[2] =
+ 			cpu_to_le32(((task->ata_task.atapi_packet[8]) |
+-			(task->ata_task.atapi_packet[9] << 8) |
+-			(task->ata_task.atapi_packet[10] << 16) |
+-			(task->ata_task.atapi_packet[11] << 24)));
++				     (task->ata_task.atapi_packet[9] << 8) |
++				     (task->ata_task.atapi_packet[10] << 16) |
++				     (task->ata_task.atapi_packet[11] << 24)));
+ 		sata_cmd.atapi_scsi_cdb[3] =
+ 			cpu_to_le32(((task->ata_task.atapi_packet[12]) |
+-			(task->ata_task.atapi_packet[13] << 8) |
+-			(task->ata_task.atapi_packet[14] << 16) |
+-			(task->ata_task.atapi_packet[15] << 24)));
++				     (task->ata_task.atapi_packet[13] << 8) |
++				     (task->ata_task.atapi_packet[14] << 16) |
++				     (task->ata_task.atapi_packet[15] << 24)));
+ 	}
+ 
+ 	/* Check for read log for failed drive and return */
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index e40a37236aa10..d0407f44de78d 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -555,7 +555,7 @@ qla2x00_sysfs_read_vpd(struct file *filp, struct kobject *kobj,
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EINVAL;
+ 
+-	if (IS_NOCACHE_VPD_TYPE(ha))
++	if (!IS_NOCACHE_VPD_TYPE(ha))
+ 		goto skip;
+ 
+ 	faddr = ha->flt_region_vpd << 2;
+@@ -739,7 +739,7 @@ qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj,
+ 		ql_log(ql_log_info, vha, 0x706f,
+ 		    "Issuing MPI reset.\n");
+ 
+-		if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
++		if (IS_QLA83XX(ha)) {
+ 			uint32_t idc_control;
+ 
+ 			qla83xx_idc_lock(vha, 0);
+@@ -1050,9 +1050,6 @@ qla2x00_free_sysfs_attr(scsi_qla_host_t *vha, bool stop_beacon)
+ 			continue;
+ 		if (iter->type == 3 && !(IS_CNA_CAPABLE(ha)))
+ 			continue;
+-		if (iter->type == 0x27 &&
+-		    (!IS_QLA27XX(ha) || !IS_QLA28XX(ha)))
+-			continue;
+ 
+ 		sysfs_remove_bin_file(&host->shost_gendev.kobj,
+ 		    iter->attr);
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index e1fd91a581202..8a8e0920d2b41 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -2796,7 +2796,11 @@ struct ct_fdmi2_hba_attributes {
+ #define FDMI_PORT_SPEED_8GB		0x10
+ #define FDMI_PORT_SPEED_16GB		0x20
+ #define FDMI_PORT_SPEED_32GB		0x40
+-#define FDMI_PORT_SPEED_64GB		0x80
++#define FDMI_PORT_SPEED_20GB		0x80
++#define FDMI_PORT_SPEED_40GB		0x100
++#define FDMI_PORT_SPEED_128GB		0x200
++#define FDMI_PORT_SPEED_64GB		0x400
++#define FDMI_PORT_SPEED_256GB		0x800
+ #define FDMI_PORT_SPEED_UNKNOWN		0x8000
+ 
+ #define FC_CLASS_2	0x04
+@@ -5171,4 +5175,8 @@ struct sff_8247_a0 {
+ #include "qla_gbl.h"
+ #include "qla_dbg.h"
+ #include "qla_inline.h"
++
++#define IS_SESSION_DELETED(_fcport) (_fcport->disc_state == DSC_DELETE_PEND || \
++				      _fcport->disc_state == DSC_DELETED)
++
+ #endif
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index e28c4b7ec55ff..73015c69b5e89 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -676,8 +676,7 @@ qla2x00_rff_id(scsi_qla_host_t *vha, u8 type)
+ 		return (QLA_SUCCESS);
+ 	}
+ 
+-	return qla_async_rffid(vha, &vha->d_id, qlt_rff_id(vha),
+-	    FC4_TYPE_FCP_SCSI);
++	return qla_async_rffid(vha, &vha->d_id, qlt_rff_id(vha), type);
+ }
+ 
+ static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
+@@ -727,7 +726,7 @@ static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
+ 	/* Prepare CT arguments -- port_id, FC-4 feature, FC-4 type */
+ 	ct_req->req.rff_id.port_id = port_id_to_be_id(*d_id);
+ 	ct_req->req.rff_id.fc4_feature = fc4feature;
+-	ct_req->req.rff_id.fc4_type = fc4type;		/* SCSI - FCP */
++	ct_req->req.rff_id.fc4_type = fc4type;		/* SCSI-FCP or FC-NVMe */
+ 
+ 	sp->u.iocb_cmd.u.ctarg.req_size = RFF_ID_REQ_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.rsp_size = RFF_ID_RSP_SIZE;
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index fdae25ec554d9..9452848ede3f8 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -570,6 +570,14 @@ qla2x00_async_adisc(struct scsi_qla_host *vha, fc_port_t *fcport,
+ 	struct srb_iocb *lio;
+ 	int rval = QLA_FUNCTION_FAILED;
+ 
++	if (IS_SESSION_DELETED(fcport)) {
++		ql_log(ql_log_warn, vha, 0xffff,
++		       "%s: %8phC is being delete - not sending command.\n",
++		       __func__, fcport->port_name);
++		fcport->flags &= ~FCF_ASYNC_ACTIVE;
++		return rval;
++	}
++
+ 	if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
+ 		return rval;
+ 
+@@ -953,6 +961,9 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
+ 				set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+ 			}
+ 			break;
++		case ISP_CFG_NL:
++			qla24xx_fcport_handle_login(vha, fcport);
++			break;
+ 		default:
+ 			break;
+ 		}
+@@ -1313,14 +1324,21 @@ int qla24xx_async_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt)
+ 	struct port_database_24xx *pd;
+ 	struct qla_hw_data *ha = vha->hw;
+ 
+-	if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT) ||
+-	    fcport->loop_id == FC_NO_LOOP_ID) {
++	if (IS_SESSION_DELETED(fcport)) {
+ 		ql_log(ql_log_warn, vha, 0xffff,
+-		    "%s: %8phC - not sending command.\n",
+-		    __func__, fcport->port_name);
++		       "%s: %8phC is being delete - not sending command.\n",
++		       __func__, fcport->port_name);
++		fcport->flags &= ~FCF_ASYNC_ACTIVE;
+ 		return rval;
+ 	}
+ 
++	if (!vha->flags.online || fcport->flags & FCF_ASYNC_SENT) {
++		ql_log(ql_log_warn, vha, 0xffff,
++		    "%s: %8phC online %d flags %x - not sending command.\n",
++		    __func__, fcport->port_name, vha->flags.online, fcport->flags);
++		goto done;
++	}
++
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+@@ -1480,6 +1498,11 @@ static void qla_chk_n2n_b4_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	u8 login = 0;
+ 	int rc;
+ 
++	ql_dbg(ql_dbg_disc, vha, 0x307b,
++	    "%s %8phC DS %d LS %d lid %d retries=%d\n",
++	    __func__, fcport->port_name, fcport->disc_state,
++	    fcport->fw_login_state, fcport->loop_id, fcport->login_retry);
++
+ 	if (qla_tgt_mode_enabled(vha))
+ 		return;
+ 
+@@ -1537,7 +1560,8 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	    fcport->conflict, fcport->last_rscn_gen, fcport->rscn_gen,
+ 	    fcport->login_gen, fcport->loop_id, fcport->scan_state);
+ 
+-	if (fcport->scan_state != QLA_FCPORT_FOUND)
++	if (fcport->scan_state != QLA_FCPORT_FOUND ||
++	    fcport->disc_state == DSC_DELETE_PEND)
+ 		return 0;
+ 
+ 	if ((fcport->loop_id != FC_NO_LOOP_ID) &&
+@@ -1558,7 +1582,7 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	if (vha->host->active_mode == MODE_TARGET && !N2N_TOPO(vha->hw))
+ 		return 0;
+ 
+-	if (fcport->flags & FCF_ASYNC_SENT) {
++	if (fcport->flags & (FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE)) {
+ 		set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+ 		return 0;
+ 	}
+@@ -2114,12 +2138,7 @@ qla24xx_handle_plogi_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ 		ql_dbg(ql_dbg_disc, vha, 0x20eb, "%s %d %8phC cmd error %x\n",
+ 		    __func__, __LINE__, ea->fcport->port_name, ea->data[1]);
+ 
+-		ea->fcport->flags &= ~FCF_ASYNC_SENT;
+-		qla2x00_set_fcport_disc_state(ea->fcport, DSC_LOGIN_FAILED);
+-		if (ea->data[1] & QLA_LOGIO_LOGIN_RETRIED)
+-			set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+-		else
+-			qla2x00_mark_device_lost(vha, ea->fcport, 1);
++		qlt_schedule_sess_for_deletion(ea->fcport);
+ 		break;
+ 	case MBS_LOOP_ID_USED:
+ 		/* data[1] = IO PARAM 1 = nport ID  */
+@@ -3309,6 +3328,14 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ 	struct rsp_que *rsp = ha->rsp_q_map[0];
+ 	struct qla2xxx_fw_dump *fw_dump;
+ 
++	if (ha->fw_dump) {
++		ql_dbg(ql_dbg_init, vha, 0x00bd,
++		    "Firmware dump already allocated.\n");
++		return;
++	}
++
++	ha->fw_dumped = 0;
++	ha->fw_dump_cap_flags = 0;
+ 	dump_size = fixed_size = mem_size = eft_size = fce_size = mq_size = 0;
+ 	req_q_size = rsp_q_size = 0;
+ 
+@@ -3319,7 +3346,7 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ 		mem_size = (ha->fw_memory_size - 0x11000 + 1) *
+ 		    sizeof(uint16_t);
+ 	} else if (IS_FWI2_CAPABLE(ha)) {
+-		if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
++		if (IS_QLA83XX(ha))
+ 			fixed_size = offsetof(struct qla83xx_fw_dump, ext_mem);
+ 		else if (IS_QLA81XX(ha))
+ 			fixed_size = offsetof(struct qla81xx_fw_dump, ext_mem);
+@@ -3331,8 +3358,7 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ 		mem_size = (ha->fw_memory_size - 0x100000 + 1) *
+ 		    sizeof(uint32_t);
+ 		if (ha->mqenable) {
+-			if (!IS_QLA83XX(ha) && !IS_QLA27XX(ha) &&
+-			    !IS_QLA28XX(ha))
++			if (!IS_QLA83XX(ha))
+ 				mq_size = sizeof(struct qla2xxx_mq_chain);
+ 			/*
+ 			 * Allocate maximum buffer size for all queues - Q0.
+@@ -3893,8 +3919,7 @@ enable_82xx_npiv:
+ 			    ha->fw_major_version, ha->fw_minor_version,
+ 			    ha->fw_subminor_version);
+ 
+-			if (IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+-			    IS_QLA28XX(ha)) {
++			if (IS_QLA83XX(ha)) {
+ 				ha->flags.fac_supported = 0;
+ 				rval = QLA_SUCCESS;
+ 			}
+@@ -5382,6 +5407,13 @@ qla2x00_configure_local_loop(scsi_qla_host_t *vha)
+ 			memcpy(fcport->node_name, new_fcport->node_name,
+ 			    WWN_SIZE);
+ 			fcport->scan_state = QLA_FCPORT_FOUND;
++			if (fcport->login_retry == 0) {
++				fcport->login_retry = vha->hw->login_retry_count;
++				ql_dbg(ql_dbg_disc, vha, 0x2135,
++				    "Port login retry %8phN, lid 0x%04x retry cnt=%d.\n",
++				    fcport->port_name, fcport->loop_id,
++				    fcport->login_retry);
++			}
+ 			found++;
+ 			break;
+ 		}
+@@ -5515,6 +5547,8 @@ qla2x00_reg_remote_port(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	if (atomic_read(&fcport->state) == FCS_ONLINE)
+ 		return;
+ 
++	qla2x00_set_fcport_state(fcport, FCS_ONLINE);
++
+ 	rport_ids.node_name = wwn_to_u64(fcport->node_name);
+ 	rport_ids.port_name = wwn_to_u64(fcport->port_name);
+ 	rport_ids.port_id = fcport->d_id.b.domain << 16 |
+@@ -5615,6 +5649,7 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 		qla2x00_reg_remote_port(vha, fcport);
+ 		break;
+ 	case MODE_TARGET:
++		qla2x00_set_fcport_state(fcport, FCS_ONLINE);
+ 		if (!vha->vha_tgt.qla_tgt->tgt_stop &&
+ 			!vha->vha_tgt.qla_tgt->tgt_stopped)
+ 			qlt_fc_port_added(vha, fcport);
+@@ -5629,8 +5664,6 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 		break;
+ 	}
+ 
+-	qla2x00_set_fcport_state(fcport, FCS_ONLINE);
+-
+ 	if (IS_IIDMA_CAPABLE(vha->hw) && vha->hw->flags.gpsc_supported) {
+ 		if (fcport->id_changed) {
+ 			fcport->id_changed = 0;
+@@ -9127,7 +9160,7 @@ struct qla_qpair *qla2xxx_create_qpair(struct scsi_qla_host *vha, int qos,
+ 		qpair->rsp->req = qpair->req;
+ 		qpair->rsp->qpair = qpair;
+ 		/* init qpair to this cpu. Will adjust at run time. */
+-		qla_cpu_update(qpair, smp_processor_id());
++		qla_cpu_update(qpair, raw_smp_processor_id());
+ 
+ 		if (IS_T10_PI_CAPABLE(ha) && ql2xenabledif) {
+ 			if (ha->fw_attributes & BIT_4)
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index c532c74ca1ab9..e54cc2a761dd4 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -2910,6 +2910,7 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ 					set_bit(ISP_ABORT_NEEDED,
+ 					    &vha->dpc_flags);
+ 					qla2xxx_wake_dpc(vha);
++					break;
+ 				}
+ 				fallthrough;
+ 			default:
+@@ -2919,9 +2920,7 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ 				    fw_status[0], fw_status[1], fw_status[2]);
+ 
+ 				fcport->flags &= ~FCF_ASYNC_SENT;
+-				qla2x00_set_fcport_disc_state(fcport,
+-				    DSC_LOGIN_FAILED);
+-				set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++				qlt_schedule_sess_for_deletion(fcport);
+ 				break;
+ 			}
+ 			break;
+@@ -2933,8 +2932,7 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ 			    fw_status[0], fw_status[1], fw_status[2]);
+ 
+ 			sp->fcport->flags &= ~FCF_ASYNC_SENT;
+-			qla2x00_set_fcport_disc_state(fcport, DSC_LOGIN_FAILED);
+-			set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++			qlt_schedule_sess_for_deletion(fcport);
+ 			break;
+ 		}
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 5e040b6debc84..c5c7d60ab2524 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -2248,6 +2248,7 @@ qla24xx_tm_iocb_entry(scsi_qla_host_t *vha, struct req_que *req, void *tsk)
+ 		iocb->u.tmf.data = QLA_FUNCTION_FAILED;
+ 	} else if ((le16_to_cpu(sts->scsi_status) &
+ 	    SS_RESPONSE_INFO_LEN_VALID)) {
++		host_to_fcp_swap(sts->data, sizeof(sts->data));
+ 		if (le32_to_cpu(sts->rsp_data_len) < 4) {
+ 			ql_log(ql_log_warn, fcport->vha, 0x503b,
+ 			    "Async-%s error - hdl=%x not enough response(%d).\n",
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 734745f450211..bbb57edc1f662 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -9,6 +9,12 @@
+ #include <linux/delay.h>
+ #include <linux/gfp.h>
+ 
++#ifdef CONFIG_PPC
++#define IS_PPCARCH      true
++#else
++#define IS_PPCARCH      false
++#endif
++
+ static struct mb_cmd_name {
+ 	uint16_t cmd;
+ 	const char *str;
+@@ -698,6 +704,9 @@ again:
+ 				vha->min_supported_speed =
+ 				    nv->min_supported_speed;
+ 			}
++
++			if (IS_PPCARCH)
++				mcp->mb[11] |= BIT_4;
+ 		}
+ 
+ 		if (ha->flags.exlogins_enabled)
+@@ -2984,8 +2993,7 @@ qla2x00_get_resource_cnts(scsi_qla_host_t *vha)
+ 		ha->orig_fw_iocb_count = mcp->mb[10];
+ 		if (ha->flags.npiv_supported)
+ 			ha->max_npiv_vports = mcp->mb[11];
+-		if (IS_QLA81XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+-		    IS_QLA28XX(ha))
++		if (IS_QLA81XX(ha) || IS_QLA83XX(ha))
+ 			ha->fw_max_fcf_count = mcp->mb[12];
+ 	}
+ 
+@@ -5546,7 +5554,7 @@ qla2x00_get_data_rate(scsi_qla_host_t *vha)
+ 	mcp->out_mb = MBX_1|MBX_0;
+ 	mcp->in_mb = MBX_2|MBX_1|MBX_0;
+ 	if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
+-		mcp->in_mb |= MBX_3;
++		mcp->in_mb |= MBX_4|MBX_3;
+ 	mcp->tov = MBX_TOV_SECONDS;
+ 	mcp->flags = 0;
+ 	rval = qla2x00_mailbox_command(vha, mcp);
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 5acee3c798d42..ba1b1c7549d35 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -35,6 +35,11 @@ int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
+ 		(fcport->nvme_flag & NVME_FLAG_REGISTERED))
+ 		return 0;
+ 
++	if (atomic_read(&fcport->state) == FCS_ONLINE)
++		return 0;
++
++	qla2x00_set_fcport_state(fcport, FCS_ONLINE);
++
+ 	fcport->nvme_flag &= ~NVME_FLAG_RESETTING;
+ 
+ 	memset(&req, 0, sizeof(struct nvme_fc_port_info));
+@@ -165,6 +170,18 @@ out:
+ 	qla2xxx_rel_qpair_sp(sp->qpair, sp);
+ }
+ 
++static void qla_nvme_ls_unmap(struct srb *sp, struct nvmefc_ls_req *fd)
++{
++	if (sp->flags & SRB_DMA_VALID) {
++		struct srb_iocb *nvme = &sp->u.iocb_cmd;
++		struct qla_hw_data *ha = sp->fcport->vha->hw;
++
++		dma_unmap_single(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
++				 fd->rqstlen, DMA_TO_DEVICE);
++		sp->flags &= ~SRB_DMA_VALID;
++	}
++}
++
+ static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
+ {
+ 	struct srb *sp = container_of(kref, struct srb, cmd_kref);
+@@ -181,6 +198,8 @@ static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
+ 	spin_unlock_irqrestore(&priv->cmd_lock, flags);
+ 
+ 	fd = priv->fd;
++
++	qla_nvme_ls_unmap(sp, fd);
+ 	fd->done(fd, priv->comp_status);
+ out:
+ 	qla2x00_rel_sp(sp);
+@@ -327,6 +346,8 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
+ 	dma_sync_single_for_device(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
+ 	    fd->rqstlen, DMA_TO_DEVICE);
+ 
++	sp->flags |= SRB_DMA_VALID;
++
+ 	rval = qla2x00_start_sp(sp);
+ 	if (rval != QLA_SUCCESS) {
+ 		ql_log(ql_log_warn, vha, 0x700e,
+@@ -334,6 +355,7 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
+ 		wake_up(&sp->nvme_ls_waitq);
+ 		sp->priv = NULL;
+ 		priv->sp = NULL;
++		qla_nvme_ls_unmap(sp, fd);
+ 		qla2x00_rel_sp(sp);
+ 		return rval;
+ 	}
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index e7f73a167fbd6..419156121cb59 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -3673,8 +3673,7 @@ qla2x00_unmap_iobases(struct qla_hw_data *ha)
+ 		if (ha->mqiobase)
+ 			iounmap(ha->mqiobase);
+ 
+-		if ((IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) &&
+-		    ha->msixbase)
++		if (ha->msixbase)
+ 			iounmap(ha->msixbase);
+ 	}
+ }
+@@ -5390,6 +5389,11 @@ void qla2x00_relogin(struct scsi_qla_host *vha)
+ 					memset(&ea, 0, sizeof(ea));
+ 					ea.fcport = fcport;
+ 					qla24xx_handle_relogin_event(vha, &ea);
++				} else if (vha->hw->current_topology ==
++					 ISP_CFG_NL &&
++					IS_QLA2XXX_MIDTYPE(vha->hw)) {
++					(void)qla24xx_fcport_handle_login(vha,
++									fcport);
+ 				} else if (vha->hw->current_topology ==
+ 				    ISP_CFG_NL) {
+ 					fcport->login_retry--;
+diff --git a/drivers/scsi/qla2xxx/qla_sup.c b/drivers/scsi/qla2xxx/qla_sup.c
+index 0f92e9a044dcd..0fa9c529fca11 100644
+--- a/drivers/scsi/qla2xxx/qla_sup.c
++++ b/drivers/scsi/qla2xxx/qla_sup.c
+@@ -844,7 +844,7 @@ qla2xxx_get_flt_info(scsi_qla_host_t *vha, uint32_t flt_addr)
+ 				ha->flt_region_nvram = start;
+ 			break;
+ 		case FLT_REG_IMG_PRI_27XX:
+-			if (IS_QLA27XX(ha) && !IS_QLA28XX(ha))
++			if (IS_QLA27XX(ha) || IS_QLA28XX(ha))
+ 				ha->flt_region_img_status_pri = start;
+ 			break;
+ 		case FLT_REG_IMG_SEC_27XX:
+@@ -1356,7 +1356,7 @@ next:
+ 		    flash_data_addr(ha, faddr), le32_to_cpu(*dwptr));
+ 		if (ret) {
+ 			ql_dbg(ql_dbg_user, vha, 0x7006,
+-			    "Failed slopw write %x (%x)\n", faddr, *dwptr);
++			    "Failed slow write %x (%x)\n", faddr, *dwptr);
+ 			break;
+ 		}
+ 	}
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index ebed14bed7835..cf9ae0ab489a0 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -3256,6 +3256,7 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+ 			"RESET-RSP online/active/old-count/new-count = %d/%d/%d/%d.\n",
+ 			vha->flags.online, qla2x00_reset_active(vha),
+ 			cmd->reset_count, qpair->chip_reset);
++		res = 0;
+ 		goto out_unmap_unlock;
+ 	}
+ 
+@@ -7076,8 +7077,7 @@ qlt_probe_one_stage1(struct scsi_qla_host *base_vha, struct qla_hw_data *ha)
+ 	if (!QLA_TGT_MODE_ENABLED())
+ 		return;
+ 
+-	if  ((ql2xenablemsix == 0) || IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+-	    IS_QLA28XX(ha)) {
++	if  (ha->mqenable || IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+ 		ISP_ATIO_Q_IN(base_vha) = &ha->mqiobase->isp25mq.atio_q_in;
+ 		ISP_ATIO_Q_OUT(base_vha) = &ha->mqiobase->isp25mq.atio_q_out;
+ 	} else {
+diff --git a/drivers/soc/qcom/ocmem.c b/drivers/soc/qcom/ocmem.c
+index f1875dc31ae2c..85f82e195ef8b 100644
+--- a/drivers/soc/qcom/ocmem.c
++++ b/drivers/soc/qcom/ocmem.c
+@@ -206,6 +206,7 @@ struct ocmem *of_get_ocmem(struct device *dev)
+ 	ocmem = platform_get_drvdata(pdev);
+ 	if (!ocmem) {
+ 		dev_err(dev, "Cannot get ocmem\n");
++		put_device(&pdev->dev);
+ 		return ERR_PTR(-ENODEV);
+ 	}
+ 	return ocmem;
+diff --git a/drivers/soc/qcom/qcom_aoss.c b/drivers/soc/qcom/qcom_aoss.c
+index 4fe88d4690e2b..941499b117580 100644
+--- a/drivers/soc/qcom/qcom_aoss.c
++++ b/drivers/soc/qcom/qcom_aoss.c
+@@ -548,7 +548,7 @@ static int qmp_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	ret = devm_request_irq(&pdev->dev, irq, qmp_intr, IRQF_ONESHOT,
++	ret = devm_request_irq(&pdev->dev, irq, qmp_intr, 0,
+ 			       "aoss-qmp", qmp);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "failed to request interrupt\n");
+diff --git a/drivers/soc/qcom/rpmpd.c b/drivers/soc/qcom/rpmpd.c
+index f2168e4259b23..c6084c0d35302 100644
+--- a/drivers/soc/qcom/rpmpd.c
++++ b/drivers/soc/qcom/rpmpd.c
+@@ -387,6 +387,9 @@ static int rpmpd_probe(struct platform_device *pdev)
+ 
+ 	data->domains = devm_kcalloc(&pdev->dev, num, sizeof(*data->domains),
+ 				     GFP_KERNEL);
++	if (!data->domains)
++		return -ENOMEM;
++
+ 	data->num_domains = num;
+ 
+ 	for (i = 0; i < num; i++) {
+diff --git a/drivers/soc/ti/wkup_m3_ipc.c b/drivers/soc/ti/wkup_m3_ipc.c
+index e9ece45d7a333..ef3f95fefab58 100644
+--- a/drivers/soc/ti/wkup_m3_ipc.c
++++ b/drivers/soc/ti/wkup_m3_ipc.c
+@@ -447,9 +447,9 @@ static int wkup_m3_ipc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (!irq) {
++	if (irq < 0) {
+ 		dev_err(&pdev->dev, "no irq resource\n");
+-		return -ENXIO;
++		return irq;
+ 	}
+ 
+ 	ret = devm_request_irq(dev, irq, wkup_m3_txev_handler,
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index dad4326a2a714..824d9f900aca7 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -521,8 +521,8 @@ static void intel_shim_wake(struct sdw_intel *sdw, bool wake_enable)
+ 
+ 		/* Clear wake status */
+ 		wake_sts = intel_readw(shim, SDW_SHIM_WAKESTS);
+-		wake_sts |= (SDW_SHIM_WAKEEN_ENABLE << link_id);
+-		intel_writew(shim, SDW_SHIM_WAKESTS_STATUS, wake_sts);
++		wake_sts |= (SDW_SHIM_WAKESTS_STATUS << link_id);
++		intel_writew(shim, SDW_SHIM_WAKESTS, wake_sts);
+ 	}
+ 	mutex_unlock(sdw->link_res->shim_lock);
+ }
+diff --git a/drivers/spi/spi-mxic.c b/drivers/spi/spi-mxic.c
+index 96b418293bf2a..4fb19e6f94b05 100644
+--- a/drivers/spi/spi-mxic.c
++++ b/drivers/spi/spi-mxic.c
+@@ -304,25 +304,21 @@ static int mxic_spi_data_xfer(struct mxic_spi *mxic, const void *txbuf,
+ 
+ 		writel(data, mxic->regs + TXD(nbytes % 4));
+ 
++		ret = readl_poll_timeout(mxic->regs + INT_STS, sts,
++					 sts & INT_TX_EMPTY, 0, USEC_PER_SEC);
++		if (ret)
++			return ret;
++
++		ret = readl_poll_timeout(mxic->regs + INT_STS, sts,
++					 sts & INT_RX_NOT_EMPTY, 0,
++					 USEC_PER_SEC);
++		if (ret)
++			return ret;
++
++		data = readl(mxic->regs + RXD);
+ 		if (rxbuf) {
+-			ret = readl_poll_timeout(mxic->regs + INT_STS, sts,
+-						 sts & INT_TX_EMPTY, 0,
+-						 USEC_PER_SEC);
+-			if (ret)
+-				return ret;
+-
+-			ret = readl_poll_timeout(mxic->regs + INT_STS, sts,
+-						 sts & INT_RX_NOT_EMPTY, 0,
+-						 USEC_PER_SEC);
+-			if (ret)
+-				return ret;
+-
+-			data = readl(mxic->regs + RXD);
+ 			data >>= (8 * (4 - nbytes));
+ 			memcpy(rxbuf + pos, &data, nbytes);
+-			WARN_ON(readl(mxic->regs + INT_STS) & INT_RX_NOT_EMPTY);
+-		} else {
+-			readl(mxic->regs + RXD);
+ 		}
+ 		WARN_ON(readl(mxic->regs + INT_STS) & INT_RX_NOT_EMPTY);
+ 
+diff --git a/drivers/spi/spi-pl022.c b/drivers/spi/spi-pl022.c
+index e4ee8b0847993..f7603c209e9d5 100644
+--- a/drivers/spi/spi-pl022.c
++++ b/drivers/spi/spi-pl022.c
+@@ -2315,13 +2315,13 @@ static int pl022_probe(struct amba_device *adev, const struct amba_id *id)
+ 	return status;
+ }
+ 
+-static int
++static void
+ pl022_remove(struct amba_device *adev)
+ {
+ 	struct pl022 *pl022 = amba_get_drvdata(adev);
+ 
+ 	if (!pl022)
+-		return 0;
++		return;
+ 
+ 	/*
+ 	 * undo pm_runtime_put() in probe.  I assume that we're not
+@@ -2336,7 +2336,6 @@ pl022_remove(struct amba_device *adev)
+ 	clk_disable_unprepare(pl022->clk);
+ 	amba_release_regions(adev);
+ 	tasklet_disable(&pl022->pump_transfers);
+-	return 0;
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/spi/spi-pxa2xx-pci.c b/drivers/spi/spi-pxa2xx-pci.c
+index aafac128bb5f1..4eb979a096c78 100644
+--- a/drivers/spi/spi-pxa2xx-pci.c
++++ b/drivers/spi/spi-pxa2xx-pci.c
+@@ -74,14 +74,23 @@ static bool lpss_dma_filter(struct dma_chan *chan, void *param)
+ 	return true;
+ }
+ 
++static void lpss_dma_put_device(void *dma_dev)
++{
++	pci_dev_put(dma_dev);
++}
++
+ static int lpss_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+ {
+ 	struct pci_dev *dma_dev;
++	int ret;
+ 
+ 	c->num_chipselect = 1;
+ 	c->max_clk_rate = 50000000;
+ 
+ 	dma_dev = pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
++	ret = devm_add_action_or_reset(&dev->dev, lpss_dma_put_device, dma_dev);
++	if (ret)
++		return ret;
+ 
+ 	if (c->tx_param) {
+ 		struct dw_dma_slave *slave = c->tx_param;
+@@ -105,8 +114,9 @@ static int lpss_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+ 
+ static int mrfld_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+ {
+-	struct pci_dev *dma_dev = pci_get_slot(dev->bus, PCI_DEVFN(21, 0));
+ 	struct dw_dma_slave *tx, *rx;
++	struct pci_dev *dma_dev;
++	int ret;
+ 
+ 	switch (PCI_FUNC(dev->devfn)) {
+ 	case 0:
+@@ -131,6 +141,11 @@ static int mrfld_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+ 		return -ENODEV;
+ 	}
+ 
++	dma_dev = pci_get_slot(dev->bus, PCI_DEVFN(21, 0));
++	ret = devm_add_action_or_reset(&dev->dev, lpss_dma_put_device, dma_dev);
++	if (ret)
++		return ret;
++
+ 	tx = c->tx_param;
+ 	tx->dma_dev = &dma_dev->dev;
+ 
+diff --git a/drivers/spi/spi-tegra114.c b/drivers/spi/spi-tegra114.c
+index a2e5907276e7f..ed42665b12241 100644
+--- a/drivers/spi/spi-tegra114.c
++++ b/drivers/spi/spi-tegra114.c
+@@ -1353,6 +1353,10 @@ static int tegra_spi_probe(struct platform_device *pdev)
+ 	tspi->phys = r->start;
+ 
+ 	spi_irq = platform_get_irq(pdev, 0);
++	if (spi_irq < 0) {
++		ret = spi_irq;
++		goto exit_free_master;
++	}
+ 	tspi->irq = spi_irq;
+ 
+ 	tspi->clk = devm_clk_get(&pdev->dev, "spi");
+diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
+index 669fc4286231f..9e2b812b9025f 100644
+--- a/drivers/spi/spi-tegra20-slink.c
++++ b/drivers/spi/spi-tegra20-slink.c
+@@ -1006,14 +1006,8 @@ static int tegra_slink_probe(struct platform_device *pdev)
+ 	struct resource		*r;
+ 	int ret, spi_irq;
+ 	const struct tegra_slink_chip_data *cdata = NULL;
+-	const struct of_device_id *match;
+ 
+-	match = of_match_device(tegra_slink_of_match, &pdev->dev);
+-	if (!match) {
+-		dev_err(&pdev->dev, "Error: No device match found\n");
+-		return -ENODEV;
+-	}
+-	cdata = match->data;
++	cdata = of_device_get_match_data(&pdev->dev);
+ 
+ 	master = spi_alloc_master(&pdev->dev, sizeof(*tspi));
+ 	if (!master) {
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index 1dd2af9cc2374..3d3ac48243ebd 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -1165,7 +1165,10 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 		goto clk_dis_all;
+ 	}
+ 
+-	dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
++	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
++	if (ret)
++		goto clk_dis_all;
++
+ 	ctlr->bits_per_word_mask = SPI_BPW_MASK(8);
+ 	ctlr->num_chipselect = GQSPI_DEFAULT_NUM_CS;
+ 	ctlr->mem_ops = &zynqmp_qspi_mem_ops;
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 8c261eac2cee5..6ea7b286c80c2 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -881,10 +881,10 @@ int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
+ 	int i, ret;
+ 
+ 	if (vmalloced_buf || kmap_buf) {
+-		desc_len = min_t(int, max_seg_size, PAGE_SIZE);
++		desc_len = min_t(unsigned long, max_seg_size, PAGE_SIZE);
+ 		sgs = DIV_ROUND_UP(len + offset_in_page(buf), desc_len);
+ 	} else if (virt_addr_valid(buf)) {
+-		desc_len = min_t(int, max_seg_size, ctlr->max_dma_len);
++		desc_len = min_t(size_t, max_seg_size, ctlr->max_dma_len);
+ 		sgs = DIV_ROUND_UP(len, desc_len);
+ 	} else {
+ 		return -EINVAL;
+diff --git a/drivers/staging/iio/adc/ad7280a.c b/drivers/staging/iio/adc/ad7280a.c
+index fef0055b89909..20183b2ea1279 100644
+--- a/drivers/staging/iio/adc/ad7280a.c
++++ b/drivers/staging/iio/adc/ad7280a.c
+@@ -107,9 +107,9 @@
+ static unsigned int ad7280a_devaddr(unsigned int addr)
+ {
+ 	return ((addr & 0x1) << 4) |
+-	       ((addr & 0x2) << 3) |
++	       ((addr & 0x2) << 2) |
+ 	       (addr & 0x4) |
+-	       ((addr & 0x8) >> 3) |
++	       ((addr & 0x8) >> 2) |
+ 	       ((addr & 0x10) >> 4);
+ }
+ 
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_acc.c b/drivers/staging/media/atomisp/pci/atomisp_acc.c
+index f638d0bd09fe6..b1614cce2dfb0 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_acc.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_acc.c
+@@ -439,6 +439,18 @@ int atomisp_acc_s_mapped_arg(struct atomisp_sub_device *asd,
+ 	return 0;
+ }
+ 
++static void atomisp_acc_unload_some_extensions(struct atomisp_sub_device *asd,
++					      int i,
++					      struct atomisp_acc_fw *acc_fw)
++{
++	while (--i >= 0) {
++		if (acc_fw->flags & acc_flag_to_pipe[i].flag) {
++			atomisp_css_unload_acc_extension(asd, acc_fw->fw,
++							 acc_flag_to_pipe[i].pipe_id);
++		}
++	}
++}
++
+ /*
+  * Appends the loaded acceleration binary extensions to the
+  * current ISP mode. Must be called just before sh_css_start().
+@@ -477,16 +489,20 @@ int atomisp_acc_load_extensions(struct atomisp_sub_device *asd)
+ 								     acc_fw->fw,
+ 								     acc_flag_to_pipe[i].pipe_id,
+ 								     acc_fw->type);
+-				if (ret)
++				if (ret) {
++					atomisp_acc_unload_some_extensions(asd, i, acc_fw);
+ 					goto error;
++				}
+ 
+ 				ext_loaded = true;
+ 			}
+ 		}
+ 
+ 		ret = atomisp_css_set_acc_parameters(acc_fw);
+-		if (ret < 0)
++		if (ret < 0) {
++			atomisp_acc_unload_some_extensions(asd, i, acc_fw);
+ 			goto error;
++		}
+ 	}
+ 
+ 	if (!ext_loaded)
+@@ -495,6 +511,7 @@ int atomisp_acc_load_extensions(struct atomisp_sub_device *asd)
+ 	ret = atomisp_css_update_stream(asd);
+ 	if (ret) {
+ 		dev_err(isp->dev, "%s: update stream failed.\n", __func__);
++		atomisp_acc_unload_extensions(asd);
+ 		goto error;
+ 	}
+ 
+@@ -502,13 +519,6 @@ int atomisp_acc_load_extensions(struct atomisp_sub_device *asd)
+ 	return 0;
+ 
+ error:
+-	while (--i >= 0) {
+-		if (acc_fw->flags & acc_flag_to_pipe[i].flag) {
+-			atomisp_css_unload_acc_extension(asd, acc_fw->fw,
+-							 acc_flag_to_pipe[i].pipe_id);
+-		}
+-	}
+-
+ 	list_for_each_entry_continue_reverse(acc_fw, &asd->acc.fw, list) {
+ 		if (acc_fw->type != ATOMISP_ACC_FW_LOAD_TYPE_OUTPUT &&
+ 		    acc_fw->type != ATOMISP_ACC_FW_LOAD_TYPE_VIEWFINDER)
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+index 34480ca164746..c9ee85037644f 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+@@ -729,6 +729,21 @@ static int axp_regulator_set(struct device *dev, struct gmin_subdev *gs,
+ 	return 0;
+ }
+ 
++/*
++ * Some boards contain a hw-bug where turning eldo2 back on after having turned
++ * it off causes the CPLM3218 ambient-light-sensor on the image-sensor's I2C bus
++ * to crash, hanging the bus. Do not turn eldo2 off on these systems.
++ */
++static const struct dmi_system_id axp_leave_eldo2_on_ids[] = {
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "TrekStor"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "SurfTab duo W1 10.1 (VT4)"),
++		},
++	},
++	{ }
++};
++
+ static int axp_v1p8_on(struct device *dev, struct gmin_subdev *gs)
+ {
+ 	int ret;
+@@ -763,6 +778,9 @@ static int axp_v1p8_off(struct device *dev, struct gmin_subdev *gs)
+ 	if (ret)
+ 		return ret;
+ 
++	if (dmi_check_system(axp_leave_eldo2_on_ids))
++		return 0;
++
+ 	ret = axp_regulator_set(dev, gs, gs->eldo2_sel_reg, gs->eldo2_1p8v,
+ 				ELDO_CTRL_REG, gs->eldo2_ctrl_shift, false);
+ 	return ret;
+diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm.c b/drivers/staging/media/atomisp/pci/hmm/hmm.c
+index 6a5ee46070898..c1cda16f2dc01 100644
+--- a/drivers/staging/media/atomisp/pci/hmm/hmm.c
++++ b/drivers/staging/media/atomisp/pci/hmm/hmm.c
+@@ -39,7 +39,7 @@
+ struct hmm_bo_device bo_device;
+ struct hmm_pool	dynamic_pool;
+ struct hmm_pool	reserved_pool;
+-static ia_css_ptr dummy_ptr;
++static ia_css_ptr dummy_ptr = mmgr_EXCEPTION;
+ static bool hmm_initialized;
+ struct _hmm_mem_stat hmm_mem_stat;
+ 
+@@ -209,7 +209,7 @@ int hmm_init(void)
+ 
+ void hmm_cleanup(void)
+ {
+-	if (!dummy_ptr)
++	if (dummy_ptr == mmgr_EXCEPTION)
+ 		return;
+ 	sysfs_remove_group(&atomisp_dev->kobj, atomisp_attribute_group);
+ 
+@@ -288,7 +288,8 @@ void hmm_free(ia_css_ptr virt)
+ 
+ 	dev_dbg(atomisp_dev, "%s: free 0x%08x\n", __func__, virt);
+ 
+-	WARN_ON(!virt);
++	if (WARN_ON(virt == mmgr_EXCEPTION))
++		return;
+ 
+ 	bo = hmm_bo_device_search_start(&bo_device, (unsigned int)virt);
+ 
+diff --git a/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c b/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
+index b88dc4ed06db7..ed244aee196c3 100644
+--- a/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
++++ b/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
+@@ -23,7 +23,7 @@ static void hantro_h1_set_src_img_ctrl(struct hantro_dev *vpu,
+ 
+ 	reg = H1_REG_IN_IMG_CTRL_ROW_LEN(pix_fmt->width)
+ 		| H1_REG_IN_IMG_CTRL_OVRFLR_D4(0)
+-		| H1_REG_IN_IMG_CTRL_OVRFLB_D4(0)
++		| H1_REG_IN_IMG_CTRL_OVRFLB(0)
+ 		| H1_REG_IN_IMG_CTRL_FMT(ctx->vpu_src_fmt->enc_fmt);
+ 	vepu_write_relaxed(vpu, reg, H1_REG_IN_IMG_CTRL);
+ }
+diff --git a/drivers/staging/media/hantro/hantro_h1_regs.h b/drivers/staging/media/hantro/hantro_h1_regs.h
+index d6e9825bb5c7b..30e7e7b920b55 100644
+--- a/drivers/staging/media/hantro/hantro_h1_regs.h
++++ b/drivers/staging/media/hantro/hantro_h1_regs.h
+@@ -47,7 +47,7 @@
+ #define H1_REG_IN_IMG_CTRL				0x03c
+ #define     H1_REG_IN_IMG_CTRL_ROW_LEN(x)		((x) << 12)
+ #define     H1_REG_IN_IMG_CTRL_OVRFLR_D4(x)		((x) << 10)
+-#define     H1_REG_IN_IMG_CTRL_OVRFLB_D4(x)		((x) << 6)
++#define     H1_REG_IN_IMG_CTRL_OVRFLB(x)		((x) << 6)
+ #define     H1_REG_IN_IMG_CTRL_FMT(x)			((x) << 2)
+ #define H1_REG_ENC_CTRL0				0x040
+ #define    H1_REG_ENC_CTRL0_INIT_QP(x)			((x) << 26)
+diff --git a/drivers/staging/media/meson/vdec/esparser.c b/drivers/staging/media/meson/vdec/esparser.c
+index db7022707ff8d..86ccc8937afca 100644
+--- a/drivers/staging/media/meson/vdec/esparser.c
++++ b/drivers/staging/media/meson/vdec/esparser.c
+@@ -328,7 +328,12 @@ esparser_queue(struct amvdec_session *sess, struct vb2_v4l2_buffer *vbuf)
+ 
+ 	offset = esparser_get_offset(sess);
+ 
+-	amvdec_add_ts(sess, vb->timestamp, vbuf->timecode, offset, vbuf->flags);
++	ret = amvdec_add_ts(sess, vb->timestamp, vbuf->timecode, offset, vbuf->flags);
++	if (ret) {
++		v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR);
++		return ret;
++	}
++
+ 	dev_dbg(core->dev, "esparser: ts = %llu pld_size = %u offset = %08X flags = %08X\n",
+ 		vb->timestamp, payload_size, offset, vbuf->flags);
+ 
+diff --git a/drivers/staging/media/meson/vdec/vdec_helpers.c b/drivers/staging/media/meson/vdec/vdec_helpers.c
+index 7f07a9175815f..db4a854e59a38 100644
+--- a/drivers/staging/media/meson/vdec/vdec_helpers.c
++++ b/drivers/staging/media/meson/vdec/vdec_helpers.c
+@@ -227,13 +227,16 @@ int amvdec_set_canvases(struct amvdec_session *sess,
+ }
+ EXPORT_SYMBOL_GPL(amvdec_set_canvases);
+ 
+-void amvdec_add_ts(struct amvdec_session *sess, u64 ts,
+-		   struct v4l2_timecode tc, u32 offset, u32 vbuf_flags)
++int amvdec_add_ts(struct amvdec_session *sess, u64 ts,
++		  struct v4l2_timecode tc, u32 offset, u32 vbuf_flags)
+ {
+ 	struct amvdec_timestamp *new_ts;
+ 	unsigned long flags;
+ 
+ 	new_ts = kzalloc(sizeof(*new_ts), GFP_KERNEL);
++	if (!new_ts)
++		return -ENOMEM;
++
+ 	new_ts->ts = ts;
+ 	new_ts->tc = tc;
+ 	new_ts->offset = offset;
+@@ -242,6 +245,7 @@ void amvdec_add_ts(struct amvdec_session *sess, u64 ts,
+ 	spin_lock_irqsave(&sess->ts_spinlock, flags);
+ 	list_add_tail(&new_ts->list, &sess->timestamps);
+ 	spin_unlock_irqrestore(&sess->ts_spinlock, flags);
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(amvdec_add_ts);
+ 
+diff --git a/drivers/staging/media/meson/vdec/vdec_helpers.h b/drivers/staging/media/meson/vdec/vdec_helpers.h
+index cfaed52ab5265..798e5a8a9b3f1 100644
+--- a/drivers/staging/media/meson/vdec/vdec_helpers.h
++++ b/drivers/staging/media/meson/vdec/vdec_helpers.h
+@@ -55,8 +55,8 @@ void amvdec_dst_buf_done_offset(struct amvdec_session *sess,
+  * @offset: offset in the VIFIFO where the associated packet was written
+  * @flags the vb2_v4l2_buffer flags
+  */
+-void amvdec_add_ts(struct amvdec_session *sess, u64 ts,
+-		   struct v4l2_timecode tc, u32 offset, u32 flags);
++int amvdec_add_ts(struct amvdec_session *sess, u64 ts,
++		  struct v4l2_timecode tc, u32 offset, u32 flags);
+ void amvdec_remove_ts(struct amvdec_session *sess, u64 ts);
+ 
+ /**
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h264.c b/drivers/staging/media/sunxi/cedrus/cedrus_h264.c
+index de7442d4834dc..d3e26bfe6c90b 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h264.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h264.c
+@@ -38,7 +38,7 @@ struct cedrus_h264_sram_ref_pic {
+ 
+ #define CEDRUS_H264_FRAME_NUM		18
+ 
+-#define CEDRUS_NEIGHBOR_INFO_BUF_SIZE	(16 * SZ_1K)
++#define CEDRUS_NEIGHBOR_INFO_BUF_SIZE	(32 * SZ_1K)
+ #define CEDRUS_MIN_PIC_INFO_BUF_SIZE       (130 * SZ_1K)
+ 
+ static void cedrus_h264_write_sram(struct cedrus_dev *dev,
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+index 10744fab7ceaa..368439cf5e174 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+@@ -23,7 +23,7 @@
+  * Subsequent BSP implementations seem to double the neighbor info buffer size
+  * for the H6 SoC, which may be related to 10 bit H265 support.
+  */
+-#define CEDRUS_H265_NEIGHBOR_INFO_BUF_SIZE	(397 * SZ_1K)
++#define CEDRUS_H265_NEIGHBOR_INFO_BUF_SIZE	(794 * SZ_1K)
+ #define CEDRUS_H265_ENTRY_POINTS_BUF_SIZE	(4 * SZ_1K)
+ #define CEDRUS_H265_MV_COL_BUF_UNIT_CTB_SIZE	160
+ 
+diff --git a/drivers/staging/media/zoran/zoran.h b/drivers/staging/media/zoran/zoran.h
+index e7fe8da7732c7..3f223e5b1872b 100644
+--- a/drivers/staging/media/zoran/zoran.h
++++ b/drivers/staging/media/zoran/zoran.h
+@@ -314,6 +314,6 @@ static inline struct zoran *to_zoran(struct v4l2_device *v4l2_dev)
+ 
+ #endif
+ 
+-int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq);
++int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq, int dir);
+ void zoran_queue_exit(struct zoran *zr);
+ int zr_set_buf(struct zoran *zr);
+diff --git a/drivers/staging/media/zoran/zoran_card.c b/drivers/staging/media/zoran/zoran_card.c
+index dfc60e2e9dd7a..fe0cca12119c7 100644
+--- a/drivers/staging/media/zoran/zoran_card.c
++++ b/drivers/staging/media/zoran/zoran_card.c
+@@ -802,6 +802,52 @@ int zoran_check_jpg_settings(struct zoran *zr,
+ 	return 0;
+ }
+ 
++static int zoran_init_video_device(struct zoran *zr, struct video_device *video_dev, int dir)
++{
++	int err;
++
++	/* Now add the template and register the device unit. */
++	*video_dev = zoran_template;
++	video_dev->v4l2_dev = &zr->v4l2_dev;
++	video_dev->lock = &zr->lock;
++	video_dev->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_READWRITE | dir;
++
++	strscpy(video_dev->name, ZR_DEVNAME(zr), sizeof(video_dev->name));
++	/*
++	 * It's not a mem2mem device, but you can both capture and output from one and the same
++	 * device. This should really be split up into two device nodes, but that's a job for
++	 * another day.
++	 */
++	video_dev->vfl_dir = VFL_DIR_M2M;
++	zoran_queue_init(zr, &zr->vq, V4L2_BUF_TYPE_VIDEO_CAPTURE);
++
++	err = video_register_device(video_dev, VFL_TYPE_VIDEO, video_nr[zr->id]);
++	if (err < 0)
++		return err;
++	video_set_drvdata(video_dev, zr);
++	return 0;
++}
++
++static void zoran_exit_video_devices(struct zoran *zr)
++{
++	video_unregister_device(zr->video_dev);
++	kfree(zr->video_dev);
++}
++
++static int zoran_init_video_devices(struct zoran *zr)
++{
++	int err;
++
++	zr->video_dev = video_device_alloc();
++	if (!zr->video_dev)
++		return -ENOMEM;
++
++	err = zoran_init_video_device(zr, zr->video_dev, V4L2_CAP_VIDEO_CAPTURE);
++	if (err)
++		kfree(zr->video_dev);
++	return err;
++}
++
+ void zoran_open_init_params(struct zoran *zr)
+ {
+ 	int i;
+@@ -873,17 +919,11 @@ static int zr36057_init(struct zoran *zr)
+ 	zoran_open_init_params(zr);
+ 
+ 	/* allocate memory *before* doing anything to the hardware in case allocation fails */
+-	zr->video_dev = video_device_alloc();
+-	if (!zr->video_dev) {
+-		err = -ENOMEM;
+-		goto exit;
+-	}
+ 	zr->stat_com = dma_alloc_coherent(&zr->pci_dev->dev,
+ 					  BUZ_NUM_STAT_COM * sizeof(u32),
+ 					  &zr->p_sc, GFP_KERNEL);
+ 	if (!zr->stat_com) {
+-		err = -ENOMEM;
+-		goto exit_video;
++		return -ENOMEM;
+ 	}
+ 	for (j = 0; j < BUZ_NUM_STAT_COM; j++)
+ 		zr->stat_com[j] = cpu_to_le32(1); /* mark as unavailable to zr36057 */
+@@ -896,26 +936,9 @@ static int zr36057_init(struct zoran *zr)
+ 		goto exit_statcom;
+ 	}
+ 
+-	/* Now add the template and register the device unit. */
+-	*zr->video_dev = zoran_template;
+-	zr->video_dev->v4l2_dev = &zr->v4l2_dev;
+-	zr->video_dev->lock = &zr->lock;
+-	zr->video_dev->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_VIDEO_CAPTURE;
+-
+-	strscpy(zr->video_dev->name, ZR_DEVNAME(zr), sizeof(zr->video_dev->name));
+-	/*
+-	 * It's not a mem2mem device, but you can both capture and output from one and the same
+-	 * device. This should really be split up into two device nodes, but that's a job for
+-	 * another day.
+-	 */
+-	zr->video_dev->vfl_dir = VFL_DIR_M2M;
+-
+-	zoran_queue_init(zr, &zr->vq);
+-
+-	err = video_register_device(zr->video_dev, VFL_TYPE_VIDEO, video_nr[zr->id]);
+-	if (err < 0)
++	err = zoran_init_video_devices(zr);
++	if (err)
+ 		goto exit_statcomb;
+-	video_set_drvdata(zr->video_dev, zr);
+ 
+ 	zoran_init_hardware(zr);
+ 	if (!pass_through) {
+@@ -930,9 +953,6 @@ exit_statcomb:
+ 	dma_free_coherent(&zr->pci_dev->dev, BUZ_NUM_STAT_COM * sizeof(u32) * 2, zr->stat_comb, zr->p_scb);
+ exit_statcom:
+ 	dma_free_coherent(&zr->pci_dev->dev, BUZ_NUM_STAT_COM * sizeof(u32), zr->stat_com, zr->p_sc);
+-exit_video:
+-	kfree(zr->video_dev);
+-exit:
+ 	return err;
+ }
+ 
+@@ -964,7 +984,7 @@ static void zoran_remove(struct pci_dev *pdev)
+ 	dma_free_coherent(&zr->pci_dev->dev, BUZ_NUM_STAT_COM * sizeof(u32) * 2, zr->stat_comb, zr->p_scb);
+ 	pci_release_regions(pdev);
+ 	pci_disable_device(zr->pci_dev);
+-	video_unregister_device(zr->video_dev);
++	zoran_exit_video_devices(zr);
+ exit_free:
+ 	v4l2_ctrl_handler_free(&zr->hdl);
+ 	v4l2_device_unregister(&zr->v4l2_dev);
+@@ -1068,8 +1088,10 @@ static int zoran_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+ 	if (err)
+-		return -ENODEV;
+-	vb2_dma_contig_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32));
++		return err;
++	err = vb2_dma_contig_set_max_seg_size(&pdev->dev, U32_MAX);
++	if (err)
++		return err;
+ 
+ 	nr = zoran_num++;
+ 	if (nr >= BUZ_MAX) {
+diff --git a/drivers/staging/media/zoran/zoran_device.c b/drivers/staging/media/zoran/zoran_device.c
+index e569a1341d010..913f5a3c5bfce 100644
+--- a/drivers/staging/media/zoran/zoran_device.c
++++ b/drivers/staging/media/zoran/zoran_device.c
+@@ -879,7 +879,7 @@ static void zoran_reap_stat_com(struct zoran *zr)
+ 		if (zr->jpg_settings.tmp_dcm == 1)
+ 			i = (zr->jpg_dma_tail - zr->jpg_err_shift) & BUZ_MASK_STAT_COM;
+ 		else
+-			i = ((zr->jpg_dma_tail - zr->jpg_err_shift) & 1) * 2 + 1;
++			i = ((zr->jpg_dma_tail - zr->jpg_err_shift) & 1) * 2;
+ 
+ 		stat_com = le32_to_cpu(zr->stat_com[i]);
+ 		if ((stat_com & 1) == 0) {
+@@ -891,6 +891,11 @@ static void zoran_reap_stat_com(struct zoran *zr)
+ 		size = (stat_com & GENMASK(22, 1)) >> 1;
+ 
+ 		buf = zr->inuse[i];
++		if (!buf) {
++			spin_unlock_irqrestore(&zr->queued_bufs_lock, flags);
++			pci_err(zr->pci_dev, "No buffer at slot %d\n", i);
++			return;
++		}
+ 		buf->vbuf.vb2_buf.timestamp = ktime_get_ns();
+ 
+ 		if (zr->codec_mode == BUZ_MODE_MOTION_COMPRESS) {
+diff --git a/drivers/staging/media/zoran/zoran_driver.c b/drivers/staging/media/zoran/zoran_driver.c
+index 808196ea5b81b..ea04f6c732b21 100644
+--- a/drivers/staging/media/zoran/zoran_driver.c
++++ b/drivers/staging/media/zoran/zoran_driver.c
+@@ -255,8 +255,6 @@ static int zoran_querycap(struct file *file, void *__fh, struct v4l2_capability
+ 	strscpy(cap->card, ZR_DEVNAME(zr), sizeof(cap->card));
+ 	strscpy(cap->driver, "zoran", sizeof(cap->driver));
+ 	snprintf(cap->bus_info, sizeof(cap->bus_info), "PCI:%s", pci_name(zr->pci_dev));
+-	cap->device_caps = zr->video_dev->device_caps;
+-	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
+ 	return 0;
+ }
+ 
+@@ -582,6 +580,9 @@ static int zoran_s_std(struct file *file, void *__fh, v4l2_std_id std)
+ 	struct zoran *zr = video_drvdata(file);
+ 	int res = 0;
+ 
++	if (zr->norm == std)
++		return 0;
++
+ 	if (zr->running != ZORAN_MAP_MODE_NONE)
+ 		return -EBUSY;
+ 
+@@ -737,6 +738,7 @@ static int zoran_g_parm(struct file *file, void *priv, struct v4l2_streamparm *p
+ 	if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ 		return -EINVAL;
+ 
++	parm->parm.capture.readbuffers = 9;
+ 	return 0;
+ }
+ 
+@@ -867,6 +869,10 @@ int zr_set_buf(struct zoran *zr)
+ 		vbuf = &buf->vbuf;
+ 
+ 		buf->vbuf.field = V4L2_FIELD_INTERLACED;
++		if (BUZ_MAX_HEIGHT < (zr->v4l_settings.height * 2))
++			buf->vbuf.field = V4L2_FIELD_INTERLACED;
++		else
++			buf->vbuf.field = V4L2_FIELD_TOP;
+ 		vb2_set_plane_payload(&buf->vbuf.vb2_buf, 0, zr->buffer_size);
+ 		vb2_buffer_done(&buf->vbuf.vb2_buf, VB2_BUF_STATE_DONE);
+ 		zr->inuse[0] = NULL;
+@@ -926,6 +932,7 @@ static int zr_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 		zr->stat_com[j] = cpu_to_le32(1);
+ 		zr->inuse[j] = NULL;
+ 	}
++	zr->vbseq = 0;
+ 
+ 	if (zr->map_mode != ZORAN_MAP_MODE_RAW) {
+ 		pci_info(zr->pci_dev, "START JPG\n");
+@@ -1006,7 +1013,7 @@ static const struct vb2_ops zr_video_qops = {
+ 	.wait_finish            = vb2_ops_wait_finish,
+ };
+ 
+-int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq)
++int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq, int dir)
+ {
+ 	int err;
+ 
+@@ -1014,8 +1021,9 @@ int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq)
+ 	INIT_LIST_HEAD(&zr->queued_bufs);
+ 
+ 	vq->dev = &zr->pci_dev->dev;
+-	vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+-	vq->io_modes = VB2_USERPTR | VB2_DMABUF | VB2_MMAP | VB2_READ | VB2_WRITE;
++	vq->type = dir;
++
++	vq->io_modes = VB2_DMABUF | VB2_MMAP | VB2_READ | VB2_WRITE;
+ 	vq->drv_priv = zr;
+ 	vq->buf_struct_size = sizeof(struct zr_buffer);
+ 	vq->ops = &zr_video_qops;
+diff --git a/drivers/staging/mt7621-dts/gbpc1.dts b/drivers/staging/mt7621-dts/gbpc1.dts
+index a7c0d3115d726..d48ca5a25c2c4 100644
+--- a/drivers/staging/mt7621-dts/gbpc1.dts
++++ b/drivers/staging/mt7621-dts/gbpc1.dts
+@@ -11,7 +11,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x0 0x1c000000>, <0x20000000 0x4000000>;
++		reg = <0x00000000 0x1c000000>,
++		      <0x20000000 0x04000000>;
+ 	};
+ 
+ 	chosen {
+@@ -37,24 +38,16 @@
+ 	gpio-leds {
+ 		compatible = "gpio-leds";
+ 
+-		system {
+-			label = "gb-pc1:green:system";
++		power {
++			label = "green:power";
+ 			gpios = <&gpio 6 GPIO_ACTIVE_LOW>;
++			linux,default-trigger = "default-on";
+ 		};
+ 
+-		status {
+-			label = "gb-pc1:green:status";
++		system {
++			label = "green:system";
+ 			gpios = <&gpio 8 GPIO_ACTIVE_LOW>;
+-		};
+-
+-		lan1 {
+-			label = "gb-pc1:green:lan1";
+-			gpios = <&gpio 24 GPIO_ACTIVE_LOW>;
+-		};
+-
+-		lan2 {
+-			label = "gb-pc1:green:lan2";
+-			gpios = <&gpio 25 GPIO_ACTIVE_LOW>;
++			linux,default-trigger = "disk-activity";
+ 		};
+ 	};
+ };
+@@ -94,9 +87,8 @@
+ 
+ 		partition@50000 {
+ 			label = "firmware";
+-			reg = <0x50000 0x1FB0000>;
++			reg = <0x50000 0x1fb0000>;
+ 		};
+-
+ 	};
+ };
+ 
+@@ -122,9 +114,12 @@
+ };
+ 
+ &pinctrl {
+-	state_default: pinctrl0 {
+-		default_gpio: gpio {
+-			groups = "wdt", "rgmii2", "uart3";
++	pinctrl-names = "default";
++	pinctrl-0 = <&state_default>;
++
++	state_default: state-default {
++		gpio-pinmux {
++			groups = "rgmii2", "uart3", "wdt";
+ 			function = "gpio";
+ 		};
+ 	};
+@@ -133,12 +128,13 @@
+ &switch0 {
+ 	ports {
+ 		port@0 {
++			status = "okay";
+ 			label = "ethblack";
+-			status = "ok";
+ 		};
++
+ 		port@4 {
++			status = "okay";
+ 			label = "ethblue";
+-			status = "ok";
+ 		};
+ 	};
+ };
+diff --git a/drivers/staging/mt7621-dts/gbpc2.dts b/drivers/staging/mt7621-dts/gbpc2.dts
+index 52760e7351f6c..6f6fed071dda0 100644
+--- a/drivers/staging/mt7621-dts/gbpc2.dts
++++ b/drivers/staging/mt7621-dts/gbpc2.dts
+@@ -1,21 +1,121 @@
+ /dts-v1/;
+ 
+-#include "gbpc1.dts"
++#include "mt7621.dtsi"
++
++#include <dt-bindings/gpio/gpio.h>
++#include <dt-bindings/input/input.h>
+ 
+ / {
+ 	compatible = "gnubee,gb-pc2", "mediatek,mt7621-soc";
+ 	model = "GB-PC2";
++
++	memory@0 {
++		device_type = "memory";
++		reg = <0x00000000 0x1c000000>,
++		      <0x20000000 0x04000000>;
++	};
++
++	chosen {
++		bootargs = "console=ttyS0,57600";
++	};
++
++	palmbus: palmbus@1e000000 {
++		i2c@900 {
++			status = "okay";
++		};
++	};
++
++	gpio-keys {
++		compatible = "gpio-keys";
++
++		reset {
++			label = "reset";
++			gpios = <&gpio 18 GPIO_ACTIVE_HIGH>;
++			linux,code = <KEY_RESTART>;
++		};
++	};
++};
++
++&sdhci {
++	status = "okay";
++};
++
++&spi0 {
++	status = "okay";
++
++	m25p80@0 {
++		#address-cells = <1>;
++		#size-cells = <1>;
++		compatible = "jedec,spi-nor";
++		reg = <0>;
++		spi-max-frequency = <50000000>;
++		broken-flash-reset;
++
++		partition@0 {
++			label = "u-boot";
++			reg = <0x0 0x30000>;
++			read-only;
++		};
++
++		partition@30000 {
++			label = "u-boot-env";
++			reg = <0x30000 0x10000>;
++			read-only;
++		};
++
++		factory: partition@40000 {
++			label = "factory";
++			reg = <0x40000 0x10000>;
++			read-only;
++		};
++
++		partition@50000 {
++			label = "firmware";
++			reg = <0x50000 0x1fb0000>;
++		};
++	};
+ };
+ 
+-&default_gpio {
+-	groups = "wdt", "uart3";
+-	function = "gpio";
++&pcie {
++	status = "okay";
+ };
+ 
+-&gmac1 {
+-	status = "ok";
++&pinctrl {
++	pinctrl-names = "default";
++	pinctrl-0 = <&state_default>;
++
++	state_default: state-default {
++		gpio-pinmux {
++			groups = "wdt";
++			function = "gpio";
++		};
++	};
+ };
+ 
+-&phy_external {
+-	status = "ok";
++&ethernet {
++	gmac1: mac@1 {
++		status = "okay";
++		phy-handle = <&ethphy7>;
++	};
++
++	mdio-bus {
++		ethphy7: ethernet-phy@7 {
++			reg = <7>;
++			phy-mode = "rgmii-rxid";
++		};
++	};
++};
++
++&switch0 {
++	ports {
++		port@0 {
++			status = "okay";
++			label = "ethblack";
++		};
++
++		port@4 {
++			status = "okay";
++			label = "ethblue";
++		};
++	};
+ };
+diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
+index 27222f7b246fd..91a7fa7482964 100644
+--- a/drivers/staging/mt7621-dts/mt7621.dtsi
++++ b/drivers/staging/mt7621-dts/mt7621.dtsi
+@@ -56,9 +56,9 @@
+ 		regulator-max-microvolt = <3300000>;
+ 		enable-active-high;
+ 		regulator-always-on;
+-	  };
++	};
+ 
+-	  mmc_fixed_1v8_io: fixedregulator@1 {
++	mmc_fixed_1v8_io: fixedregulator@1 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "mmc_io";
+ 		regulator-min-microvolt = <1800000>;
+@@ -412,37 +412,32 @@
+ 
+ 		mediatek,ethsys = <&ethsys>;
+ 
++		pinctrl-names = "default";
++		pinctrl-0 = <&mdio_pins>, <&rgmii1_pins>, <&rgmii2_pins>;
+ 
+ 		gmac0: mac@0 {
+ 			compatible = "mediatek,eth-mac";
+ 			reg = <0>;
+ 			phy-mode = "rgmii";
++
+ 			fixed-link {
+ 				speed = <1000>;
+ 				full-duplex;
+ 				pause;
+ 			};
+ 		};
++
+ 		gmac1: mac@1 {
+ 			compatible = "mediatek,eth-mac";
+ 			reg = <1>;
+ 			status = "off";
+ 			phy-mode = "rgmii-rxid";
+-			phy-handle = <&phy_external>;
+ 		};
++
+ 		mdio-bus {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			phy_external: ethernet-phy@5 {
+-				status = "off";
+-				reg = <5>;
+-				phy-mode = "rgmii-rxid";
+-
+-				pinctrl-names = "default";
+-				pinctrl-0 = <&rgmii2_pins>;
+-			};
+-
+ 			switch0: switch0@0 {
+ 				compatible = "mediatek,mt7621";
+ 				#address-cells = <1>;
+@@ -456,36 +451,43 @@
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+ 					reg = <0>;
++
+ 					port@0 {
+ 						status = "off";
+ 						reg = <0>;
+ 						label = "lan0";
+ 					};
++
+ 					port@1 {
+ 						status = "off";
+ 						reg = <1>;
+ 						label = "lan1";
+ 					};
++
+ 					port@2 {
+ 						status = "off";
+ 						reg = <2>;
+ 						label = "lan2";
+ 					};
++
+ 					port@3 {
+ 						status = "off";
+ 						reg = <3>;
+ 						label = "lan3";
+ 					};
++
+ 					port@4 {
+ 						status = "off";
+ 						reg = <4>;
+ 						label = "lan4";
+ 					};
++
+ 					port@6 {
+ 						reg = <6>;
+ 						label = "cpu";
+ 						ethernet = <&gmac0>;
+ 						phy-mode = "trgmii";
++
+ 						fixed-link {
+ 							speed = <1000>;
+ 							full-duplex;
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index 793d7b58fc650..72a26867c2092 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -53,7 +53,7 @@ struct int3400_thermal_priv {
+ 	struct art *arts;
+ 	int trt_count;
+ 	struct trt *trts;
+-	u8 uuid_bitmap;
++	u32 uuid_bitmap;
+ 	int rel_misc_dev_res;
+ 	int current_uuid_index;
+ 	char *data_vault;
+@@ -466,6 +466,11 @@ static void int3400_setup_gddv(struct int3400_thermal_priv *priv)
+ 	priv->data_vault = kmemdup(obj->package.elements[0].buffer.pointer,
+ 				   obj->package.elements[0].buffer.length,
+ 				   GFP_KERNEL);
++	if (!priv->data_vault) {
++		kfree(buffer.pointer);
++		return;
++	}
++
+ 	bin_attr_data_vault.private = priv->data_vault;
+ 	bin_attr_data_vault.size = obj->package.elements[0].buffer.length;
+ 	kfree(buffer.pointer);
+diff --git a/drivers/tty/hvc/hvc_iucv.c b/drivers/tty/hvc/hvc_iucv.c
+index 2af1e5751bd63..796fbff623f6e 100644
+--- a/drivers/tty/hvc/hvc_iucv.c
++++ b/drivers/tty/hvc/hvc_iucv.c
+@@ -1470,7 +1470,9 @@ out_error:
+  */
+ static	int __init hvc_iucv_config(char *val)
+ {
+-	 return kstrtoul(val, 10, &hvc_iucv_devices);
++	if (kstrtoul(val, 10, &hvc_iucv_devices))
++		pr_warn("hvc_iucv= invalid parameter value '%s'\n", val);
++	return 1;
+ }
+ 
+ 
+diff --git a/drivers/tty/mxser.c b/drivers/tty/mxser.c
+index 3703987c46661..8344265a1948b 100644
+--- a/drivers/tty/mxser.c
++++ b/drivers/tty/mxser.c
+@@ -858,6 +858,7 @@ static int mxser_activate(struct tty_port *port, struct tty_struct *tty)
+ 	struct mxser_port *info = container_of(port, struct mxser_port, port);
+ 	unsigned long page;
+ 	unsigned long flags;
++	int ret;
+ 
+ 	page = __get_free_page(GFP_KERNEL);
+ 	if (!page)
+@@ -867,9 +868,9 @@ static int mxser_activate(struct tty_port *port, struct tty_struct *tty)
+ 
+ 	if (!info->ioaddr || !info->type) {
+ 		set_bit(TTY_IO_ERROR, &tty->flags);
+-		free_page(page);
+ 		spin_unlock_irqrestore(&info->slock, flags);
+-		return 0;
++		ret = 0;
++		goto err_free_xmit;
+ 	}
+ 	info->port.xmit_buf = (unsigned char *) page;
+ 
+@@ -895,8 +896,10 @@ static int mxser_activate(struct tty_port *port, struct tty_struct *tty)
+ 		if (capable(CAP_SYS_ADMIN)) {
+ 			set_bit(TTY_IO_ERROR, &tty->flags);
+ 			return 0;
+-		} else
+-			return -ENODEV;
++		}
++
++		ret = -ENODEV;
++		goto err_free_xmit;
+ 	}
+ 
+ 	/*
+@@ -941,6 +944,10 @@ static int mxser_activate(struct tty_port *port, struct tty_struct *tty)
+ 	spin_unlock_irqrestore(&info->slock, flags);
+ 
+ 	return 0;
++err_free_xmit:
++	free_page(page);
++	info->port.xmit_buf = NULL;
++	return ret;
+ }
+ 
+ /*
+diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c
+index 890fa7ddaa7f3..b3c3f7e5851ab 100644
+--- a/drivers/tty/serial/8250/8250_dma.c
++++ b/drivers/tty/serial/8250/8250_dma.c
+@@ -64,10 +64,19 @@ int serial8250_tx_dma(struct uart_8250_port *p)
+ 	struct uart_8250_dma		*dma = p->dma;
+ 	struct circ_buf			*xmit = &p->port.state->xmit;
+ 	struct dma_async_tx_descriptor	*desc;
++	struct uart_port		*up = &p->port;
+ 	int ret;
+ 
+-	if (dma->tx_running)
++	if (dma->tx_running) {
++		if (up->x_char) {
++			dmaengine_pause(dma->txchan);
++			uart_xchar_out(up, UART_TX);
++			dmaengine_resume(dma->txchan);
++		}
+ 		return 0;
++	} else if (up->x_char) {
++		uart_xchar_out(up, UART_TX);
++	}
+ 
+ 	if (uart_tx_stopped(&p->port) || uart_circ_empty(xmit)) {
+ 		/* We have been called from __dma_tx_complete() */
+diff --git a/drivers/tty/serial/8250/8250_lpss.c b/drivers/tty/serial/8250/8250_lpss.c
+index 4dee8a9e0c951..dfb730b7ea2ae 100644
+--- a/drivers/tty/serial/8250/8250_lpss.c
++++ b/drivers/tty/serial/8250/8250_lpss.c
+@@ -121,8 +121,7 @@ static int byt_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ {
+ 	struct dw_dma_slave *param = &lpss->dma_param;
+ 	struct pci_dev *pdev = to_pci_dev(port->dev);
+-	unsigned int dma_devfn = PCI_DEVFN(PCI_SLOT(pdev->devfn), 0);
+-	struct pci_dev *dma_dev = pci_get_slot(pdev->bus, dma_devfn);
++	struct pci_dev *dma_dev;
+ 
+ 	switch (pdev->device) {
+ 	case PCI_DEVICE_ID_INTEL_BYT_UART1:
+@@ -141,6 +140,8 @@ static int byt_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ 		return -EINVAL;
+ 	}
+ 
++	dma_dev = pci_get_slot(pdev->bus, PCI_DEVFN(PCI_SLOT(pdev->devfn), 0));
++
+ 	param->dma_dev = &dma_dev->dev;
+ 	param->m_master = 0;
+ 	param->p_master = 1;
+@@ -156,11 +157,26 @@ static int byt_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ 	return 0;
+ }
+ 
++static void byt_serial_exit(struct lpss8250 *lpss)
++{
++	struct dw_dma_slave *param = &lpss->dma_param;
++
++	/* Paired with pci_get_slot() in the byt_serial_setup() above */
++	put_device(param->dma_dev);
++}
++
+ static int ehl_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ {
+ 	return 0;
+ }
+ 
++static void ehl_serial_exit(struct lpss8250 *lpss)
++{
++	struct uart_8250_port *up = serial8250_get_port(lpss->data.line);
++
++	up->dma = NULL;
++}
++
+ #ifdef CONFIG_SERIAL_8250_DMA
+ static const struct dw_dma_platform_data qrk_serial_dma_pdata = {
+ 	.nr_channels = 2,
+@@ -335,8 +351,7 @@ static int lpss8250_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	return 0;
+ 
+ err_exit:
+-	if (lpss->board->exit)
+-		lpss->board->exit(lpss);
++	lpss->board->exit(lpss);
+ 	pci_free_irq_vectors(pdev);
+ 	return ret;
+ }
+@@ -347,8 +362,7 @@ static void lpss8250_remove(struct pci_dev *pdev)
+ 
+ 	serial8250_unregister_port(lpss->data.line);
+ 
+-	if (lpss->board->exit)
+-		lpss->board->exit(lpss);
++	lpss->board->exit(lpss);
+ 	pci_free_irq_vectors(pdev);
+ }
+ 
+@@ -356,12 +370,14 @@ static const struct lpss8250_board byt_board = {
+ 	.freq = 100000000,
+ 	.base_baud = 2764800,
+ 	.setup = byt_serial_setup,
++	.exit = byt_serial_exit,
+ };
+ 
+ static const struct lpss8250_board ehl_board = {
+ 	.freq = 200000000,
+ 	.base_baud = 12500000,
+ 	.setup = ehl_serial_setup,
++	.exit = ehl_serial_exit,
+ };
+ 
+ static const struct lpss8250_board qrk_board = {
+diff --git a/drivers/tty/serial/8250/8250_mid.c b/drivers/tty/serial/8250/8250_mid.c
+index efa0515139f8e..e6c1791609ddf 100644
+--- a/drivers/tty/serial/8250/8250_mid.c
++++ b/drivers/tty/serial/8250/8250_mid.c
+@@ -73,6 +73,11 @@ static int pnw_setup(struct mid8250 *mid, struct uart_port *p)
+ 	return 0;
+ }
+ 
++static void pnw_exit(struct mid8250 *mid)
++{
++	pci_dev_put(mid->dma_dev);
++}
++
+ static int tng_handle_irq(struct uart_port *p)
+ {
+ 	struct mid8250 *mid = p->private_data;
+@@ -124,6 +129,11 @@ static int tng_setup(struct mid8250 *mid, struct uart_port *p)
+ 	return 0;
+ }
+ 
++static void tng_exit(struct mid8250 *mid)
++{
++	pci_dev_put(mid->dma_dev);
++}
++
+ static int dnv_handle_irq(struct uart_port *p)
+ {
+ 	struct mid8250 *mid = p->private_data;
+@@ -330,9 +340,9 @@ static int mid8250_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	pci_set_drvdata(pdev, mid);
+ 	return 0;
++
+ err:
+-	if (mid->board->exit)
+-		mid->board->exit(mid);
++	mid->board->exit(mid);
+ 	return ret;
+ }
+ 
+@@ -342,8 +352,7 @@ static void mid8250_remove(struct pci_dev *pdev)
+ 
+ 	serial8250_unregister_port(mid->line);
+ 
+-	if (mid->board->exit)
+-		mid->board->exit(mid);
++	mid->board->exit(mid);
+ }
+ 
+ static const struct mid8250_board pnw_board = {
+@@ -351,6 +360,7 @@ static const struct mid8250_board pnw_board = {
+ 	.freq = 50000000,
+ 	.base_baud = 115200,
+ 	.setup = pnw_setup,
++	.exit = pnw_exit,
+ };
+ 
+ static const struct mid8250_board tng_board = {
+@@ -358,6 +368,7 @@ static const struct mid8250_board tng_board = {
+ 	.freq = 38400000,
+ 	.base_baud = 1843200,
+ 	.setup = tng_setup,
++	.exit = tng_exit,
+ };
+ 
+ static const struct mid8250_board dnv_board = {
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 7c07ebb37b1b9..3055353514e1d 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1620,6 +1620,18 @@ static inline void start_tx_rs485(struct uart_port *port)
+ 	struct uart_8250_port *up = up_to_u8250p(port);
+ 	struct uart_8250_em485 *em485 = up->em485;
+ 
++	/*
++	 * While serial8250_em485_handle_stop_tx() is a noop if
++	 * em485->active_timer != &em485->stop_tx_timer, it might happen that
++	 * the timer is still armed and triggers only after the current bunch of
++	 * chars is send and em485->active_timer == &em485->stop_tx_timer again.
++	 * So cancel the timer. There is still a theoretical race condition if
++	 * the timer is already running and only comes around to check for
++	 * em485->active_timer when &em485->stop_tx_timer is armed again.
++	 */
++	if (em485->active_timer == &em485->stop_tx_timer)
++		hrtimer_try_to_cancel(&em485->stop_tx_timer);
++
+ 	em485->active_timer = NULL;
+ 
+ 	if (em485->tx_stopped) {
+@@ -1805,9 +1817,7 @@ void serial8250_tx_chars(struct uart_8250_port *up)
+ 	int count;
+ 
+ 	if (port->x_char) {
+-		serial_out(up, UART_TX, port->x_char);
+-		port->icount.tx++;
+-		port->x_char = 0;
++		uart_xchar_out(port, UART_TX);
+ 		return;
+ 	}
+ 	if (uart_tx_stopped(port)) {
+diff --git a/drivers/tty/serial/amba-pl010.c b/drivers/tty/serial/amba-pl010.c
+index 75d61e038a775..e538d6d75155e 100644
+--- a/drivers/tty/serial/amba-pl010.c
++++ b/drivers/tty/serial/amba-pl010.c
+@@ -751,7 +751,7 @@ static int pl010_probe(struct amba_device *dev, const struct amba_id *id)
+ 	return ret;
+ }
+ 
+-static int pl010_remove(struct amba_device *dev)
++static void pl010_remove(struct amba_device *dev)
+ {
+ 	struct uart_amba_port *uap = amba_get_drvdata(dev);
+ 	int i;
+@@ -767,8 +767,6 @@ static int pl010_remove(struct amba_device *dev)
+ 
+ 	if (!busy)
+ 		uart_unregister_driver(&amba_reg);
+-
+-	return 0;
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 61183e7ff0097..07b19e97f850d 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -2658,13 +2658,12 @@ static int pl011_probe(struct amba_device *dev, const struct amba_id *id)
+ 	return pl011_register_port(uap);
+ }
+ 
+-static int pl011_remove(struct amba_device *dev)
++static void pl011_remove(struct amba_device *dev)
+ {
+ 	struct uart_amba_port *uap = amba_get_drvdata(dev);
+ 
+ 	uart_remove_one_port(&amba_reg, &uap->port);
+ 	pl011_unregister_port(uap);
+-	return 0;
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
+index 49d0c7f2b29b8..79b7db8580e05 100644
+--- a/drivers/tty/serial/kgdboc.c
++++ b/drivers/tty/serial/kgdboc.c
+@@ -403,16 +403,16 @@ static int kgdboc_option_setup(char *opt)
+ {
+ 	if (!opt) {
+ 		pr_err("config string not provided\n");
+-		return -EINVAL;
++		return 1;
+ 	}
+ 
+ 	if (strlen(opt) >= MAX_CONFIG_LEN) {
+ 		pr_err("config string too long\n");
+-		return -ENOSPC;
++		return 1;
+ 	}
+ 	strcpy(config, opt);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("kgdboc=", kgdboc_option_setup);
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index be0d9922e320e..19f0c5db11e33 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -676,6 +676,20 @@ static void uart_flush_buffer(struct tty_struct *tty)
+ 	tty_port_tty_wakeup(&state->port);
+ }
+ 
++/*
++ * This function performs low-level write of high-priority XON/XOFF
++ * character and accounting for it.
++ *
++ * Requires uart_port to implement .serial_out().
++ */
++void uart_xchar_out(struct uart_port *uport, int offset)
++{
++	serial_port_out(uport, offset, uport->x_char);
++	uport->icount.tx++;
++	uport->x_char = 0;
++}
++EXPORT_SYMBOL_GPL(uart_xchar_out);
++
+ /*
+  * This function is used to send a high-priority XON/XOFF character to
+  * the device
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 71b018e9a5735..460a8a86e3111 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -676,7 +676,7 @@ static int xhci_exit_test_mode(struct xhci_hcd *xhci)
+ 	}
+ 	pm_runtime_allow(xhci_to_hcd(xhci)->self.controller);
+ 	xhci->test_mode = 0;
+-	return xhci_reset(xhci);
++	return xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+ }
+ 
+ void xhci_set_link_state(struct xhci_hcd *xhci, struct xhci_port *port,
+@@ -1002,6 +1002,9 @@ static void xhci_get_usb2_port_status(struct xhci_port *port, u32 *status,
+ 		if (link_state == XDEV_U2)
+ 			*status |= USB_PORT_STAT_L1;
+ 		if (link_state == XDEV_U0) {
++			if (bus_state->resume_done[portnum])
++				usb_hcd_end_port_resume(&port->rhub->hcd->self,
++							portnum);
+ 			bus_state->resume_done[portnum] = 0;
+ 			clear_bit(portnum, &bus_state->resuming_ports);
+ 			if (bus_state->suspended_ports & (1 << portnum)) {
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index ed380ee58ab5d..024e8911df344 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -2595,7 +2595,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ 
+ fail:
+ 	xhci_halt(xhci);
+-	xhci_reset(xhci);
++	xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+ 	xhci_mem_cleanup(xhci);
+ 	return -ENOMEM;
+ }
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 49f74299d3f57..95effd28179b4 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -66,7 +66,7 @@ static bool td_on_ring(struct xhci_td *td, struct xhci_ring *ring)
+  * handshake done).  There are two failure modes:  "usec" have passed (major
+  * hardware flakeout), or the register reads as all-ones (hardware removed).
+  */
+-int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, int usec)
++int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, u64 timeout_us)
+ {
+ 	u32	result;
+ 	int	ret;
+@@ -74,7 +74,7 @@ int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, int usec)
+ 	ret = readl_poll_timeout_atomic(ptr, result,
+ 					(result & mask) == done ||
+ 					result == U32_MAX,
+-					1, usec);
++					1, timeout_us);
+ 	if (result == U32_MAX)		/* card removed */
+ 		return -ENODEV;
+ 
+@@ -163,7 +163,7 @@ int xhci_start(struct xhci_hcd *xhci)
+  * Transactions will be terminated immediately, and operational registers
+  * will be set to their defaults.
+  */
+-int xhci_reset(struct xhci_hcd *xhci)
++int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us)
+ {
+ 	u32 command;
+ 	u32 state;
+@@ -196,8 +196,7 @@ int xhci_reset(struct xhci_hcd *xhci)
+ 	if (xhci->quirks & XHCI_INTEL_HOST)
+ 		udelay(1000);
+ 
+-	ret = xhci_handshake(&xhci->op_regs->command,
+-			CMD_RESET, 0, 10 * 1000 * 1000);
++	ret = xhci_handshake(&xhci->op_regs->command, CMD_RESET, 0, timeout_us);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -210,8 +209,7 @@ int xhci_reset(struct xhci_hcd *xhci)
+ 	 * xHCI cannot write to any doorbells or operational registers other
+ 	 * than status until the "Controller Not Ready" flag is cleared.
+ 	 */
+-	ret = xhci_handshake(&xhci->op_regs->status,
+-			STS_CNR, 0, 10 * 1000 * 1000);
++	ret = xhci_handshake(&xhci->op_regs->status, STS_CNR, 0, timeout_us);
+ 
+ 	xhci->usb2_rhub.bus_state.port_c_suspend = 0;
+ 	xhci->usb2_rhub.bus_state.suspended_ports = 0;
+@@ -732,7 +730,7 @@ static void xhci_stop(struct usb_hcd *hcd)
+ 	xhci->xhc_state |= XHCI_STATE_HALTED;
+ 	xhci->cmd_ring_state = CMD_RING_STATE_STOPPED;
+ 	xhci_halt(xhci);
+-	xhci_reset(xhci);
++	xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+ 	spin_unlock_irq(&xhci->lock);
+ 
+ 	xhci_cleanup_msix(xhci);
+@@ -785,7 +783,7 @@ void xhci_shutdown(struct usb_hcd *hcd)
+ 	xhci_halt(xhci);
+ 	/* Workaround for spurious wakeups at shutdown with HSW */
+ 	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+-		xhci_reset(xhci);
++		xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+ 	spin_unlock_irq(&xhci->lock);
+ 
+ 	xhci_cleanup_msix(xhci);
+@@ -1170,7 +1168,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 		xhci_dbg(xhci, "Stop HCD\n");
+ 		xhci_halt(xhci);
+ 		xhci_zero_64b_regs(xhci);
+-		retval = xhci_reset(xhci);
++		retval = xhci_reset(xhci, XHCI_RESET_LONG_USEC);
+ 		spin_unlock_irq(&xhci->lock);
+ 		if (retval)
+ 			return retval;
+@@ -5276,7 +5274,7 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
+ 
+ 	xhci_dbg(xhci, "Resetting HCD\n");
+ 	/* Reset the internal HC memory state and registers. */
+-	retval = xhci_reset(xhci);
++	retval = xhci_reset(xhci, XHCI_RESET_LONG_USEC);
+ 	if (retval)
+ 		return retval;
+ 	xhci_dbg(xhci, "Reset complete\n");
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 45584a2783366..a46bbf5beffa9 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -229,6 +229,9 @@ struct xhci_op_regs {
+ #define CMD_ETE		(1 << 14)
+ /* bits 15:31 are reserved (and should be preserved on writes). */
+ 
++#define XHCI_RESET_LONG_USEC		(10 * 1000 * 1000)
++#define XHCI_RESET_SHORT_USEC		(250 * 1000)
++
+ /* IMAN - Interrupt Management Register */
+ #define IMAN_IE		(1 << 1)
+ #define IMAN_IP		(1 << 0)
+@@ -2068,11 +2071,11 @@ void xhci_free_container_ctx(struct xhci_hcd *xhci,
+ 
+ /* xHCI host controller glue */
+ typedef void (*xhci_get_quirks_t)(struct device *, struct xhci_hcd *);
+-int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, int usec);
++int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, u64 timeout_us);
+ void xhci_quiesce(struct xhci_hcd *xhci);
+ int xhci_halt(struct xhci_hcd *xhci);
+ int xhci_start(struct xhci_hcd *xhci);
+-int xhci_reset(struct xhci_hcd *xhci);
++int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us);
+ int xhci_run(struct usb_hcd *hcd);
+ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks);
+ void xhci_shutdown(struct usb_hcd *hcd);
+@@ -2455,6 +2458,8 @@ static inline const char *xhci_decode_ctrl_ctx(char *str,
+ 	unsigned int	bit;
+ 	int		ret = 0;
+ 
++	str[0] = '\0';
++
+ 	if (drop) {
+ 		ret = sprintf(str, "Drop:");
+ 		for_each_set_bit(bit, &drop, 32)
+@@ -2612,8 +2617,11 @@ static inline const char *xhci_decode_usbsts(char *str, u32 usbsts)
+ {
+ 	int ret = 0;
+ 
++	ret = sprintf(str, " 0x%08x", usbsts);
++
+ 	if (usbsts == ~(u32)0)
+-		return " 0xffffffff";
++		return str;
++
+ 	if (usbsts & STS_HALT)
+ 		ret += sprintf(str + ret, " HCHalted");
+ 	if (usbsts & STS_FATAL)
+diff --git a/drivers/usb/serial/Kconfig b/drivers/usb/serial/Kconfig
+index 4007fa25a8ffa..169251ec8353e 100644
+--- a/drivers/usb/serial/Kconfig
++++ b/drivers/usb/serial/Kconfig
+@@ -66,6 +66,7 @@ config USB_SERIAL_SIMPLE
+ 		- Libtransistor USB console
+ 		- a number of Motorola phones
+ 		- Motorola Tetra devices
++		- Nokia mobile phones
+ 		- Novatel Wireless GPS receivers
+ 		- Siemens USB/MPI adapter.
+ 		- ViVOtech ViVOpay USB device.
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 1bbe18f3f9f11..d736822e95e18 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -116,6 +116,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530GC_PRODUCT_ID) },
+ 	{ USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) },
+ 	{ USB_DEVICE(AT_VENDOR_ID, AT_VTKIT3_PRODUCT_ID) },
++	{ USB_DEVICE(IBM_VENDOR_ID, IBM_PRODUCT_ID) },
+ 	{ }					/* Terminating entry */
+ };
+ 
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index 6097ee8fccb25..c5406452b774e 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -35,6 +35,9 @@
+ #define ATEN_PRODUCT_UC232B	0x2022
+ #define ATEN_PRODUCT_ID2	0x2118
+ 
++#define IBM_VENDOR_ID		0x04b3
++#define IBM_PRODUCT_ID		0x4016
++
+ #define IODATA_VENDOR_ID	0x04bb
+ #define IODATA_PRODUCT_ID	0x0a03
+ #define IODATA_PRODUCT_ID_RSAQ5	0x0a0e
+diff --git a/drivers/usb/serial/usb-serial-simple.c b/drivers/usb/serial/usb-serial-simple.c
+index bd23a7cb1be2b..4c6747889a194 100644
+--- a/drivers/usb/serial/usb-serial-simple.c
++++ b/drivers/usb/serial/usb-serial-simple.c
+@@ -91,6 +91,11 @@ DEVICE(moto_modem, MOTO_IDS);
+ 	{ USB_DEVICE(0x0cad, 0x9016) }	/* TPG2200 */
+ DEVICE(motorola_tetra, MOTOROLA_TETRA_IDS);
+ 
++/* Nokia mobile phone driver */
++#define NOKIA_IDS()			\
++	{ USB_DEVICE(0x0421, 0x069a) }	/* Nokia 130 (RM-1035) */
++DEVICE(nokia, NOKIA_IDS);
++
+ /* Novatel Wireless GPS driver */
+ #define NOVATEL_IDS()			\
+ 	{ USB_DEVICE(0x09d7, 0x0100) }	/* NovAtel FlexPack GPS */
+@@ -123,6 +128,7 @@ static struct usb_serial_driver * const serial_drivers[] = {
+ 	&vivopay_device,
+ 	&moto_modem_device,
+ 	&motorola_tetra_device,
++	&nokia_device,
+ 	&novatel_gps_device,
+ 	&hp4x_device,
+ 	&suunto_device,
+@@ -140,6 +146,7 @@ static const struct usb_device_id id_table[] = {
+ 	VIVOPAY_IDS(),
+ 	MOTO_IDS(),
+ 	MOTOROLA_TETRA_IDS(),
++	NOKIA_IDS(),
+ 	NOVATEL_IDS(),
+ 	HP4X_IDS(),
+ 	SUUNTO_IDS(),
+diff --git a/drivers/usb/storage/ene_ub6250.c b/drivers/usb/storage/ene_ub6250.c
+index 98c1aa594e6c4..c9ce1c25c80cc 100644
+--- a/drivers/usb/storage/ene_ub6250.c
++++ b/drivers/usb/storage/ene_ub6250.c
+@@ -237,36 +237,33 @@ static struct us_unusual_dev ene_ub6250_unusual_dev_list[] = {
+ #define memstick_logaddr(logadr1, logadr0) ((((u16)(logadr1)) << 8) | (logadr0))
+ 
+ 
+-struct SD_STATUS {
+-	u8    Insert:1;
+-	u8    Ready:1;
+-	u8    MediaChange:1;
+-	u8    IsMMC:1;
+-	u8    HiCapacity:1;
+-	u8    HiSpeed:1;
+-	u8    WtP:1;
+-	u8    Reserved:1;
+-};
+-
+-struct MS_STATUS {
+-	u8    Insert:1;
+-	u8    Ready:1;
+-	u8    MediaChange:1;
+-	u8    IsMSPro:1;
+-	u8    IsMSPHG:1;
+-	u8    Reserved1:1;
+-	u8    WtP:1;
+-	u8    Reserved2:1;
+-};
+-
+-struct SM_STATUS {
+-	u8    Insert:1;
+-	u8    Ready:1;
+-	u8    MediaChange:1;
+-	u8    Reserved:3;
+-	u8    WtP:1;
+-	u8    IsMS:1;
+-};
++/* SD_STATUS bits */
++#define SD_Insert	BIT(0)
++#define SD_Ready	BIT(1)
++#define SD_MediaChange	BIT(2)
++#define SD_IsMMC	BIT(3)
++#define SD_HiCapacity	BIT(4)
++#define SD_HiSpeed	BIT(5)
++#define SD_WtP		BIT(6)
++			/* Bit 7 reserved */
++
++/* MS_STATUS bits */
++#define MS_Insert	BIT(0)
++#define MS_Ready	BIT(1)
++#define MS_MediaChange	BIT(2)
++#define MS_IsMSPro	BIT(3)
++#define MS_IsMSPHG	BIT(4)
++			/* Bit 5 reserved */
++#define MS_WtP		BIT(6)
++			/* Bit 7 reserved */
++
++/* SM_STATUS bits */
++#define SM_Insert	BIT(0)
++#define SM_Ready	BIT(1)
++#define SM_MediaChange	BIT(2)
++			/* Bits 3-5 reserved */
++#define SM_WtP		BIT(6)
++#define SM_IsMS		BIT(7)
+ 
+ struct ms_bootblock_cis {
+ 	u8 bCistplDEVICE[6];    /* 0 */
+@@ -437,9 +434,9 @@ struct ene_ub6250_info {
+ 	u8		*bbuf;
+ 
+ 	/* for 6250 code */
+-	struct SD_STATUS	SD_Status;
+-	struct MS_STATUS	MS_Status;
+-	struct SM_STATUS	SM_Status;
++	u8		SD_Status;
++	u8		MS_Status;
++	u8		SM_Status;
+ 
+ 	/* ----- SD Control Data ---------------- */
+ 	/*SD_REGISTER SD_Regs; */
+@@ -602,7 +599,7 @@ static int sd_scsi_test_unit_ready(struct us_data *us, struct scsi_cmnd *srb)
+ {
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra;
+ 
+-	if (info->SD_Status.Insert && info->SD_Status.Ready)
++	if ((info->SD_Status & SD_Insert) && (info->SD_Status & SD_Ready))
+ 		return USB_STOR_TRANSPORT_GOOD;
+ 	else {
+ 		ene_sd_init(us);
+@@ -622,7 +619,7 @@ static int sd_scsi_mode_sense(struct us_data *us, struct scsi_cmnd *srb)
+ 		0x0b, 0x00, 0x80, 0x08, 0x00, 0x00,
+ 		0x71, 0xc0, 0x00, 0x00, 0x02, 0x00 };
+ 
+-	if (info->SD_Status.WtP)
++	if (info->SD_Status & SD_WtP)
+ 		usb_stor_set_xfer_buf(mediaWP, 12, srb);
+ 	else
+ 		usb_stor_set_xfer_buf(mediaNoWP, 12, srb);
+@@ -641,9 +638,9 @@ static int sd_scsi_read_capacity(struct us_data *us, struct scsi_cmnd *srb)
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra;
+ 
+ 	usb_stor_dbg(us, "sd_scsi_read_capacity\n");
+-	if (info->SD_Status.HiCapacity) {
++	if (info->SD_Status & SD_HiCapacity) {
+ 		bl_len = 0x200;
+-		if (info->SD_Status.IsMMC)
++		if (info->SD_Status & SD_IsMMC)
+ 			bl_num = info->HC_C_SIZE-1;
+ 		else
+ 			bl_num = (info->HC_C_SIZE + 1) * 1024 - 1;
+@@ -693,7 +690,7 @@ static int sd_scsi_read(struct us_data *us, struct scsi_cmnd *srb)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 	}
+ 
+-	if (info->SD_Status.HiCapacity)
++	if (info->SD_Status & SD_HiCapacity)
+ 		bnByte = bn;
+ 
+ 	/* set up the command wrapper */
+@@ -733,7 +730,7 @@ static int sd_scsi_write(struct us_data *us, struct scsi_cmnd *srb)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 	}
+ 
+-	if (info->SD_Status.HiCapacity)
++	if (info->SD_Status & SD_HiCapacity)
+ 		bnByte = bn;
+ 
+ 	/* set up the command wrapper */
+@@ -1455,7 +1452,7 @@ static int ms_scsi_test_unit_ready(struct us_data *us, struct scsi_cmnd *srb)
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *)(us->extra);
+ 
+ 	/* pr_info("MS_SCSI_Test_Unit_Ready\n"); */
+-	if (info->MS_Status.Insert && info->MS_Status.Ready) {
++	if ((info->MS_Status & MS_Insert) && (info->MS_Status & MS_Ready)) {
+ 		return USB_STOR_TRANSPORT_GOOD;
+ 	} else {
+ 		ene_ms_init(us);
+@@ -1475,7 +1472,7 @@ static int ms_scsi_mode_sense(struct us_data *us, struct scsi_cmnd *srb)
+ 		0x0b, 0x00, 0x80, 0x08, 0x00, 0x00,
+ 		0x71, 0xc0, 0x00, 0x00, 0x02, 0x00 };
+ 
+-	if (info->MS_Status.WtP)
++	if (info->MS_Status & MS_WtP)
+ 		usb_stor_set_xfer_buf(mediaWP, 12, srb);
+ 	else
+ 		usb_stor_set_xfer_buf(mediaNoWP, 12, srb);
+@@ -1494,7 +1491,7 @@ static int ms_scsi_read_capacity(struct us_data *us, struct scsi_cmnd *srb)
+ 
+ 	usb_stor_dbg(us, "ms_scsi_read_capacity\n");
+ 	bl_len = 0x200;
+-	if (info->MS_Status.IsMSPro)
++	if (info->MS_Status & MS_IsMSPro)
+ 		bl_num = info->MSP_TotalBlock - 1;
+ 	else
+ 		bl_num = info->MS_Lib.NumberOfLogBlock * info->MS_Lib.blockSize * 2 - 1;
+@@ -1649,7 +1646,7 @@ static int ms_scsi_read(struct us_data *us, struct scsi_cmnd *srb)
+ 	if (bn > info->bl_num)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 
+-	if (info->MS_Status.IsMSPro) {
++	if (info->MS_Status & MS_IsMSPro) {
+ 		result = ene_load_bincode(us, MSP_RW_PATTERN);
+ 		if (result != USB_STOR_XFER_GOOD) {
+ 			usb_stor_dbg(us, "Load MPS RW pattern Fail !!\n");
+@@ -1750,7 +1747,7 @@ static int ms_scsi_write(struct us_data *us, struct scsi_cmnd *srb)
+ 	if (bn > info->bl_num)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 
+-	if (info->MS_Status.IsMSPro) {
++	if (info->MS_Status & MS_IsMSPro) {
+ 		result = ene_load_bincode(us, MSP_RW_PATTERN);
+ 		if (result != USB_STOR_XFER_GOOD) {
+ 			pr_info("Load MSP RW pattern Fail !!\n");
+@@ -1858,12 +1855,12 @@ static int ene_get_card_status(struct us_data *us, u8 *buf)
+ 
+ 	tmpreg = (u16) reg4b;
+ 	reg4b = *(u32 *)(&buf[0x14]);
+-	if (info->SD_Status.HiCapacity && !info->SD_Status.IsMMC)
++	if ((info->SD_Status & SD_HiCapacity) && !(info->SD_Status & SD_IsMMC))
+ 		info->HC_C_SIZE = (reg4b >> 8) & 0x3fffff;
+ 
+ 	info->SD_C_SIZE = ((tmpreg & 0x03) << 10) | (u16)(reg4b >> 22);
+ 	info->SD_C_SIZE_MULT = (u8)(reg4b >> 7)  & 0x07;
+-	if (info->SD_Status.HiCapacity && info->SD_Status.IsMMC)
++	if ((info->SD_Status & SD_HiCapacity) && (info->SD_Status & SD_IsMMC))
+ 		info->HC_C_SIZE = *(u32 *)(&buf[0x100]);
+ 
+ 	if (info->SD_READ_BL_LEN > SD_BLOCK_LEN) {
+@@ -2075,6 +2072,7 @@ static int ene_ms_init(struct us_data *us)
+ 	u16 MSP_BlockSize, MSP_UserAreaBlocks;
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra;
+ 	u8 *bbuf = info->bbuf;
++	unsigned int s;
+ 
+ 	printk(KERN_INFO "transport --- ENE_MSInit\n");
+ 
+@@ -2099,15 +2097,16 @@ static int ene_ms_init(struct us_data *us)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 	}
+ 	/* the same part to test ENE */
+-	info->MS_Status = *(struct MS_STATUS *) bbuf;
+-
+-	if (info->MS_Status.Insert && info->MS_Status.Ready) {
+-		printk(KERN_INFO "Insert     = %x\n", info->MS_Status.Insert);
+-		printk(KERN_INFO "Ready      = %x\n", info->MS_Status.Ready);
+-		printk(KERN_INFO "IsMSPro    = %x\n", info->MS_Status.IsMSPro);
+-		printk(KERN_INFO "IsMSPHG    = %x\n", info->MS_Status.IsMSPHG);
+-		printk(KERN_INFO "WtP= %x\n", info->MS_Status.WtP);
+-		if (info->MS_Status.IsMSPro) {
++	info->MS_Status = bbuf[0];
++
++	s = info->MS_Status;
++	if ((s & MS_Insert) && (s & MS_Ready)) {
++		printk(KERN_INFO "Insert     = %x\n", !!(s & MS_Insert));
++		printk(KERN_INFO "Ready      = %x\n", !!(s & MS_Ready));
++		printk(KERN_INFO "IsMSPro    = %x\n", !!(s & MS_IsMSPro));
++		printk(KERN_INFO "IsMSPHG    = %x\n", !!(s & MS_IsMSPHG));
++		printk(KERN_INFO "WtP= %x\n", !!(s & MS_WtP));
++		if (s & MS_IsMSPro) {
+ 			MSP_BlockSize      = (bbuf[6] << 8) | bbuf[7];
+ 			MSP_UserAreaBlocks = (bbuf[10] << 8) | bbuf[11];
+ 			info->MSP_TotalBlock = MSP_BlockSize * MSP_UserAreaBlocks;
+@@ -2168,17 +2167,17 @@ static int ene_sd_init(struct us_data *us)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 	}
+ 
+-	info->SD_Status =  *(struct SD_STATUS *) bbuf;
+-	if (info->SD_Status.Insert && info->SD_Status.Ready) {
+-		struct SD_STATUS *s = &info->SD_Status;
++	info->SD_Status = bbuf[0];
++	if ((info->SD_Status & SD_Insert) && (info->SD_Status & SD_Ready)) {
++		unsigned int s = info->SD_Status;
+ 
+ 		ene_get_card_status(us, bbuf);
+-		usb_stor_dbg(us, "Insert     = %x\n", s->Insert);
+-		usb_stor_dbg(us, "Ready      = %x\n", s->Ready);
+-		usb_stor_dbg(us, "IsMMC      = %x\n", s->IsMMC);
+-		usb_stor_dbg(us, "HiCapacity = %x\n", s->HiCapacity);
+-		usb_stor_dbg(us, "HiSpeed    = %x\n", s->HiSpeed);
+-		usb_stor_dbg(us, "WtP        = %x\n", s->WtP);
++		usb_stor_dbg(us, "Insert     = %x\n", !!(s & SD_Insert));
++		usb_stor_dbg(us, "Ready      = %x\n", !!(s & SD_Ready));
++		usb_stor_dbg(us, "IsMMC      = %x\n", !!(s & SD_IsMMC));
++		usb_stor_dbg(us, "HiCapacity = %x\n", !!(s & SD_HiCapacity));
++		usb_stor_dbg(us, "HiSpeed    = %x\n", !!(s & SD_HiSpeed));
++		usb_stor_dbg(us, "WtP        = %x\n", !!(s & SD_WtP));
+ 	} else {
+ 		usb_stor_dbg(us, "SD Card Not Ready --- %x\n", bbuf[0]);
+ 		return USB_STOR_TRANSPORT_ERROR;
+@@ -2200,14 +2199,14 @@ static int ene_init(struct us_data *us)
+ 
+ 	misc_reg03 = bbuf[0];
+ 	if (misc_reg03 & 0x01) {
+-		if (!info->SD_Status.Ready) {
++		if (!(info->SD_Status & SD_Ready)) {
+ 			result = ene_sd_init(us);
+ 			if (result != USB_STOR_XFER_GOOD)
+ 				return USB_STOR_TRANSPORT_ERROR;
+ 		}
+ 	}
+ 	if (misc_reg03 & 0x02) {
+-		if (!info->MS_Status.Ready) {
++		if (!(info->MS_Status & MS_Ready)) {
+ 			result = ene_ms_init(us);
+ 			if (result != USB_STOR_XFER_GOOD)
+ 				return USB_STOR_TRANSPORT_ERROR;
+@@ -2306,14 +2305,14 @@ static int ene_transport(struct scsi_cmnd *srb, struct us_data *us)
+ 
+ 	/*US_DEBUG(usb_stor_show_command(us, srb)); */
+ 	scsi_set_resid(srb, 0);
+-	if (unlikely(!(info->SD_Status.Ready || info->MS_Status.Ready)))
++	if (unlikely(!(info->SD_Status & SD_Ready) || (info->MS_Status & MS_Ready)))
+ 		result = ene_init(us);
+ 	if (result == USB_STOR_XFER_GOOD) {
+ 		result = USB_STOR_TRANSPORT_ERROR;
+-		if (info->SD_Status.Ready)
++		if (info->SD_Status & SD_Ready)
+ 			result = sd_scsi_irp(us, srb);
+ 
+-		if (info->MS_Status.Ready)
++		if (info->MS_Status & MS_Ready)
+ 			result = ms_scsi_irp(us, srb);
+ 	}
+ 	return result;
+@@ -2377,7 +2376,6 @@ static int ene_ub6250_probe(struct usb_interface *intf,
+ 
+ static int ene_ub6250_resume(struct usb_interface *iface)
+ {
+-	u8 tmp = 0;
+ 	struct us_data *us = usb_get_intfdata(iface);
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *)(us->extra);
+ 
+@@ -2389,17 +2387,16 @@ static int ene_ub6250_resume(struct usb_interface *iface)
+ 	mutex_unlock(&us->dev_mutex);
+ 
+ 	info->Power_IsResum = true;
+-	/*info->SD_Status.Ready = 0; */
+-	info->SD_Status = *(struct SD_STATUS *)&tmp;
+-	info->MS_Status = *(struct MS_STATUS *)&tmp;
+-	info->SM_Status = *(struct SM_STATUS *)&tmp;
++	/* info->SD_Status &= ~SD_Ready; */
++	info->SD_Status = 0;
++	info->MS_Status = 0;
++	info->SM_Status = 0;
+ 
+ 	return 0;
+ }
+ 
+ static int ene_ub6250_reset_resume(struct usb_interface *iface)
+ {
+-	u8 tmp = 0;
+ 	struct us_data *us = usb_get_intfdata(iface);
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *)(us->extra);
+ 
+@@ -2411,10 +2408,10 @@ static int ene_ub6250_reset_resume(struct usb_interface *iface)
+ 	 * the device
+ 	 */
+ 	info->Power_IsResum = true;
+-	/*info->SD_Status.Ready = 0; */
+-	info->SD_Status = *(struct SD_STATUS *)&tmp;
+-	info->MS_Status = *(struct MS_STATUS *)&tmp;
+-	info->SM_Status = *(struct SM_STATUS *)&tmp;
++	/* info->SD_Status &= ~SD_Ready; */
++	info->SD_Status = 0;
++	info->MS_Status = 0;
++	info->SM_Status = 0;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/storage/realtek_cr.c b/drivers/usb/storage/realtek_cr.c
+index 3789698d9d3c6..0c423916d7bfa 100644
+--- a/drivers/usb/storage/realtek_cr.c
++++ b/drivers/usb/storage/realtek_cr.c
+@@ -365,7 +365,7 @@ static int rts51x_read_mem(struct us_data *us, u16 addr, u8 *data, u16 len)
+ 
+ 	buf = kmalloc(len, GFP_NOIO);
+ 	if (buf == NULL)
+-		return USB_STOR_TRANSPORT_ERROR;
++		return -ENOMEM;
+ 
+ 	usb_stor_dbg(us, "addr = 0x%x, len = %d\n", addr, len);
+ 
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 65d6f8fd81e70..577ff786f11b1 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -1482,11 +1482,25 @@ static u64 mlx5_vdpa_get_features(struct vdpa_device *vdev)
+ 	return ndev->mvdev.mlx_features;
+ }
+ 
+-static int verify_min_features(struct mlx5_vdpa_dev *mvdev, u64 features)
++static int verify_driver_features(struct mlx5_vdpa_dev *mvdev, u64 features)
+ {
++	/* Minimum features to expect */
+ 	if (!(features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM)))
+ 		return -EOPNOTSUPP;
+ 
++	/* Double check features combination sent down by the driver.
++	 * Fail invalid features due to absence of the depended feature.
++	 *
++	 * Per VIRTIO v1.1 specification, section 5.1.3.1 Feature bit
++	 * requirements: "VIRTIO_NET_F_MQ Requires VIRTIO_NET_F_CTRL_VQ".
++	 * By failing the invalid features sent down by untrusted drivers,
++	 * we're assured the assumption made upon is_index_valid() and
++	 * is_ctrl_vq_idx() will not be compromised.
++	 */
++	if ((features & (BIT_ULL(VIRTIO_NET_F_MQ) | BIT_ULL(VIRTIO_NET_F_CTRL_VQ))) ==
++            BIT_ULL(VIRTIO_NET_F_MQ))
++		return -EINVAL;
++
+ 	return 0;
+ }
+ 
+@@ -1544,7 +1558,7 @@ static int mlx5_vdpa_set_features(struct vdpa_device *vdev, u64 features)
+ 
+ 	print_features(mvdev, features, true);
+ 
+-	err = verify_min_features(mvdev, features);
++	err = verify_driver_features(mvdev, features);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/vfio/platform/vfio_amba.c b/drivers/vfio/platform/vfio_amba.c
+index 9636a2afaecd1..3626c21501017 100644
+--- a/drivers/vfio/platform/vfio_amba.c
++++ b/drivers/vfio/platform/vfio_amba.c
+@@ -71,18 +71,13 @@ static int vfio_amba_probe(struct amba_device *adev, const struct amba_id *id)
+ 	return ret;
+ }
+ 
+-static int vfio_amba_remove(struct amba_device *adev)
++static void vfio_amba_remove(struct amba_device *adev)
+ {
+-	struct vfio_platform_device *vdev;
+-
+-	vdev = vfio_platform_remove_common(&adev->dev);
+-	if (vdev) {
+-		kfree(vdev->name);
+-		kfree(vdev);
+-		return 0;
+-	}
++	struct vfio_platform_device *vdev =
++		vfio_platform_remove_common(&adev->dev);
+ 
+-	return -EINVAL;
++	kfree(vdev->name);
++	kfree(vdev);
+ }
+ 
+ static const struct amba_id pl330_ids[] = {
+diff --git a/drivers/video/fbdev/amba-clcd.c b/drivers/video/fbdev/amba-clcd.c
+index b7682de412d83..33595cc4778e9 100644
+--- a/drivers/video/fbdev/amba-clcd.c
++++ b/drivers/video/fbdev/amba-clcd.c
+@@ -925,7 +925,7 @@ static int clcdfb_probe(struct amba_device *dev, const struct amba_id *id)
+ 	return ret;
+ }
+ 
+-static int clcdfb_remove(struct amba_device *dev)
++static void clcdfb_remove(struct amba_device *dev)
+ {
+ 	struct clcd_fb *fb = amba_get_drvdata(dev);
+ 
+@@ -942,8 +942,6 @@ static int clcdfb_remove(struct amba_device *dev)
+ 	kfree(fb);
+ 
+ 	amba_release_regions(dev);
+-
+-	return 0;
+ }
+ 
+ static const struct amba_id clcdfb_id_table[] = {
+diff --git a/drivers/video/fbdev/atafb.c b/drivers/video/fbdev/atafb.c
+index f253daa05d9d3..a7a1739cff1bd 100644
+--- a/drivers/video/fbdev/atafb.c
++++ b/drivers/video/fbdev/atafb.c
+@@ -1691,9 +1691,9 @@ static int falcon_setcolreg(unsigned int regno, unsigned int red,
+ 			   ((blue & 0xfc00) >> 8));
+ 	if (regno < 16) {
+ 		shifter_tt.color_reg[regno] =
+-			(((red & 0xe000) >> 13) | ((red & 0x1000) >> 12) << 8) |
+-			(((green & 0xe000) >> 13) | ((green & 0x1000) >> 12) << 4) |
+-			((blue & 0xe000) >> 13) | ((blue & 0x1000) >> 12);
++			((((red & 0xe000) >> 13)   | ((red & 0x1000) >> 12)) << 8)   |
++			((((green & 0xe000) >> 13) | ((green & 0x1000) >> 12)) << 4) |
++			   ((blue & 0xe000) >> 13) | ((blue & 0x1000) >> 12);
+ 		((u32 *)info->pseudo_palette)[regno] = ((red & 0xf800) |
+ 						       ((green & 0xfc00) >> 5) |
+ 						       ((blue & 0xf800) >> 11));
+@@ -1979,9 +1979,9 @@ static int stste_setcolreg(unsigned int regno, unsigned int red,
+ 	green >>= 12;
+ 	if (ATARIHW_PRESENT(EXTD_SHIFTER))
+ 		shifter_tt.color_reg[regno] =
+-			(((red & 0xe) >> 1) | ((red & 1) << 3) << 8) |
+-			(((green & 0xe) >> 1) | ((green & 1) << 3) << 4) |
+-			((blue & 0xe) >> 1) | ((blue & 1) << 3);
++			((((red & 0xe)   >> 1) | ((red & 1)   << 3)) << 8) |
++			((((green & 0xe) >> 1) | ((green & 1) << 3)) << 4) |
++			  ((blue & 0xe)  >> 1) | ((blue & 1)  << 3);
+ 	else
+ 		shifter_tt.color_reg[regno] =
+ 			((red & 0xe) << 7) |
+diff --git a/drivers/video/fbdev/atmel_lcdfb.c b/drivers/video/fbdev/atmel_lcdfb.c
+index 355b6120dc4f0..1fc8de4ecbebf 100644
+--- a/drivers/video/fbdev/atmel_lcdfb.c
++++ b/drivers/video/fbdev/atmel_lcdfb.c
+@@ -1062,15 +1062,16 @@ static int __init atmel_lcdfb_probe(struct platform_device *pdev)
+ 
+ 	INIT_LIST_HEAD(&info->modelist);
+ 
+-	if (pdev->dev.of_node) {
+-		ret = atmel_lcdfb_of_init(sinfo);
+-		if (ret)
+-			goto free_info;
+-	} else {
++	if (!pdev->dev.of_node) {
+ 		dev_err(dev, "cannot get default configuration\n");
+ 		goto free_info;
+ 	}
+ 
++	ret = atmel_lcdfb_of_init(sinfo);
++	if (ret)
++		goto free_info;
++
++	ret = -ENODEV;
+ 	if (!sinfo->config)
+ 		goto free_info;
+ 
+diff --git a/drivers/video/fbdev/cirrusfb.c b/drivers/video/fbdev/cirrusfb.c
+index 15a9ee7cd734d..b4980bc2985e3 100644
+--- a/drivers/video/fbdev/cirrusfb.c
++++ b/drivers/video/fbdev/cirrusfb.c
+@@ -469,7 +469,7 @@ static int cirrusfb_check_mclk(struct fb_info *info, long freq)
+ 	return 0;
+ }
+ 
+-static int cirrusfb_check_pixclock(const struct fb_var_screeninfo *var,
++static int cirrusfb_check_pixclock(struct fb_var_screeninfo *var,
+ 				   struct fb_info *info)
+ {
+ 	long freq;
+@@ -478,9 +478,7 @@ static int cirrusfb_check_pixclock(const struct fb_var_screeninfo *var,
+ 	unsigned maxclockidx = var->bits_per_pixel >> 3;
+ 
+ 	/* convert from ps to kHz */
+-	freq = PICOS2KHZ(var->pixclock);
+-
+-	dev_dbg(info->device, "desired pixclock: %ld kHz\n", freq);
++	freq = PICOS2KHZ(var->pixclock ? : 1);
+ 
+ 	maxclock = cirrusfb_board_info[cinfo->btype].maxclock[maxclockidx];
+ 	cinfo->multiplexing = 0;
+@@ -488,11 +486,13 @@ static int cirrusfb_check_pixclock(const struct fb_var_screeninfo *var,
+ 	/* If the frequency is greater than we can support, we might be able
+ 	 * to use multiplexing for the video mode */
+ 	if (freq > maxclock) {
+-		dev_err(info->device,
+-			"Frequency greater than maxclock (%ld kHz)\n",
+-			maxclock);
+-		return -EINVAL;
++		var->pixclock = KHZ2PICOS(maxclock);
++
++		while ((freq = PICOS2KHZ(var->pixclock)) > maxclock)
++			var->pixclock++;
+ 	}
++	dev_dbg(info->device, "desired pixclock: %ld kHz\n", freq);
++
+ 	/*
+ 	 * Additional constraint: 8bpp uses DAC clock doubling to allow maximum
+ 	 * pixel clock
+diff --git a/drivers/video/fbdev/controlfb.c b/drivers/video/fbdev/controlfb.c
+index 2df56bd303d25..bd59e7b11ed53 100644
+--- a/drivers/video/fbdev/controlfb.c
++++ b/drivers/video/fbdev/controlfb.c
+@@ -64,10 +64,12 @@
+ #undef in_le32
+ #undef out_le32
+ #define in_8(addr)		0
+-#define out_8(addr, val)
++#define out_8(addr, val)	(void)(val)
+ #define in_le32(addr)		0
+-#define out_le32(addr, val)
++#define out_le32(addr, val)	(void)(val)
++#ifndef pgprot_cached_wthru
+ #define pgprot_cached_wthru(prot) (prot)
++#endif
+ #else
+ static void invalid_vram_cache(void __force *addr)
+ {
+diff --git a/drivers/video/fbdev/core/fbcvt.c b/drivers/video/fbdev/core/fbcvt.c
+index 55d2bd0ce5c02..64843464c6613 100644
+--- a/drivers/video/fbdev/core/fbcvt.c
++++ b/drivers/video/fbdev/core/fbcvt.c
+@@ -214,9 +214,11 @@ static u32 fb_cvt_aspect_ratio(struct fb_cvt_data *cvt)
+ static void fb_cvt_print_name(struct fb_cvt_data *cvt)
+ {
+ 	u32 pixcount, pixcount_mod;
+-	int cnt = 255, offset = 0, read = 0;
+-	u8 *buf = kzalloc(256, GFP_KERNEL);
++	int size = 256;
++	int off = 0;
++	u8 *buf;
+ 
++	buf = kzalloc(size, GFP_KERNEL);
+ 	if (!buf)
+ 		return;
+ 
+@@ -224,43 +226,30 @@ static void fb_cvt_print_name(struct fb_cvt_data *cvt)
+ 	pixcount_mod = (cvt->xres * (cvt->yres/cvt->interlace)) % 1000000;
+ 	pixcount_mod /= 1000;
+ 
+-	read = snprintf(buf+offset, cnt, "fbcvt: %dx%d@%d: CVT Name - ",
+-			cvt->xres, cvt->yres, cvt->refresh);
+-	offset += read;
+-	cnt -= read;
++	off += scnprintf(buf + off, size - off, "fbcvt: %dx%d@%d: CVT Name - ",
++			    cvt->xres, cvt->yres, cvt->refresh);
+ 
+-	if (cvt->status)
+-		snprintf(buf+offset, cnt, "Not a CVT standard - %d.%03d Mega "
+-			 "Pixel Image\n", pixcount, pixcount_mod);
+-	else {
+-		if (pixcount) {
+-			read = snprintf(buf+offset, cnt, "%d", pixcount);
+-			cnt -= read;
+-			offset += read;
+-		}
++	if (cvt->status) {
++		off += scnprintf(buf + off, size - off,
++				 "Not a CVT standard - %d.%03d Mega Pixel Image\n",
++				 pixcount, pixcount_mod);
++	} else {
++		if (pixcount)
++			off += scnprintf(buf + off, size - off, "%d", pixcount);
+ 
+-		read = snprintf(buf+offset, cnt, ".%03dM", pixcount_mod);
+-		cnt -= read;
+-		offset += read;
++		off += scnprintf(buf + off, size - off, ".%03dM", pixcount_mod);
+ 
+ 		if (cvt->aspect_ratio == 0)
+-			read = snprintf(buf+offset, cnt, "3");
++			off += scnprintf(buf + off, size - off, "3");
+ 		else if (cvt->aspect_ratio == 3)
+-			read = snprintf(buf+offset, cnt, "4");
++			off += scnprintf(buf + off, size - off, "4");
+ 		else if (cvt->aspect_ratio == 1 || cvt->aspect_ratio == 4)
+-			read = snprintf(buf+offset, cnt, "9");
++			off += scnprintf(buf + off, size - off, "9");
+ 		else if (cvt->aspect_ratio == 2)
+-			read = snprintf(buf+offset, cnt, "A");
+-		else
+-			read = 0;
+-		cnt -= read;
+-		offset += read;
+-
+-		if (cvt->flags & FB_CVT_FLAG_REDUCED_BLANK) {
+-			read = snprintf(buf+offset, cnt, "-R");
+-			cnt -= read;
+-			offset += read;
+-		}
++			off += scnprintf(buf + off, size - off, "A");
++
++		if (cvt->flags & FB_CVT_FLAG_REDUCED_BLANK)
++			off += scnprintf(buf + off, size - off, "-R");
+ 	}
+ 
+ 	printk(KERN_INFO "%s\n", buf);
+diff --git a/drivers/video/fbdev/matrox/matroxfb_base.c b/drivers/video/fbdev/matrox/matroxfb_base.c
+index 570439b326552..daaa99818d3b7 100644
+--- a/drivers/video/fbdev/matrox/matroxfb_base.c
++++ b/drivers/video/fbdev/matrox/matroxfb_base.c
+@@ -1377,7 +1377,7 @@ static struct video_board vbG200 = {
+ 	.lowlevel = &matrox_G100
+ };
+ static struct video_board vbG200eW = {
+-	.maxvram = 0x800000,
++	.maxvram = 0x100000,
+ 	.maxdisplayable = 0x800000,
+ 	.accelID = FB_ACCEL_MATROX_MGAG200,
+ 	.lowlevel = &matrox_G100
+diff --git a/drivers/video/fbdev/nvidia/nv_i2c.c b/drivers/video/fbdev/nvidia/nv_i2c.c
+index d7994a1732459..0b48965a6420c 100644
+--- a/drivers/video/fbdev/nvidia/nv_i2c.c
++++ b/drivers/video/fbdev/nvidia/nv_i2c.c
+@@ -86,7 +86,7 @@ static int nvidia_setup_i2c_bus(struct nvidia_i2c_chan *chan, const char *name,
+ {
+ 	int rc;
+ 
+-	strcpy(chan->adapter.name, name);
++	strscpy(chan->adapter.name, name, sizeof(chan->adapter.name));
+ 	chan->adapter.owner = THIS_MODULE;
+ 	chan->adapter.class = i2c_class;
+ 	chan->adapter.algo_data = &chan->algo;
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/connector-dvi.c b/drivers/video/fbdev/omap2/omapfb/displays/connector-dvi.c
+index b4a1aefff7661..777f6d66c28c3 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/connector-dvi.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/connector-dvi.c
+@@ -251,6 +251,7 @@ static int dvic_probe_of(struct platform_device *pdev)
+ 	adapter_node = of_parse_phandle(node, "ddc-i2c-bus", 0);
+ 	if (adapter_node) {
+ 		adapter = of_get_i2c_adapter_by_node(adapter_node);
++		of_node_put(adapter_node);
+ 		if (adapter == NULL) {
+ 			dev_err(&pdev->dev, "failed to parse ddc-i2c-bus\n");
+ 			omap_dss_put_device(ddata->in);
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c b/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c
+index 4b0793abdd84b..a2c7c5cb15234 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c
+@@ -409,7 +409,7 @@ static ssize_t dsicm_num_errors_show(struct device *dev,
+ 	if (r)
+ 		return r;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", errors);
++	return sysfs_emit(buf, "%d\n", errors);
+ }
+ 
+ static ssize_t dsicm_hw_revision_show(struct device *dev,
+@@ -439,7 +439,7 @@ static ssize_t dsicm_hw_revision_show(struct device *dev,
+ 	if (r)
+ 		return r;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%02x.%02x.%02x\n", id1, id2, id3);
++	return sysfs_emit(buf, "%02x.%02x.%02x\n", id1, id2, id3);
+ }
+ 
+ static ssize_t dsicm_store_ulps(struct device *dev,
+@@ -487,7 +487,7 @@ static ssize_t dsicm_show_ulps(struct device *dev,
+ 	t = ddata->ulps_enabled;
+ 	mutex_unlock(&ddata->lock);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%u\n", t);
++	return sysfs_emit(buf, "%u\n", t);
+ }
+ 
+ static ssize_t dsicm_store_ulps_timeout(struct device *dev,
+@@ -532,7 +532,7 @@ static ssize_t dsicm_show_ulps_timeout(struct device *dev,
+ 	t = ddata->ulps_timeout;
+ 	mutex_unlock(&ddata->lock);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%u\n", t);
++	return sysfs_emit(buf, "%u\n", t);
+ }
+ 
+ static DEVICE_ATTR(num_dsi_errors, S_IRUGO, dsicm_num_errors_show, NULL);
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/panel-sony-acx565akm.c b/drivers/video/fbdev/omap2/omapfb/displays/panel-sony-acx565akm.c
+index 1293515e4b169..0cbc5b9183f89 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/panel-sony-acx565akm.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/panel-sony-acx565akm.c
+@@ -476,7 +476,7 @@ static ssize_t show_cabc_available_modes(struct device *dev,
+ 	int i;
+ 
+ 	if (!ddata->has_cabc)
+-		return snprintf(buf, PAGE_SIZE, "%s\n", cabc_modes[0]);
++		return sysfs_emit(buf, "%s\n", cabc_modes[0]);
+ 
+ 	for (i = 0, len = 0;
+ 	     len < PAGE_SIZE && i < ARRAY_SIZE(cabc_modes); i++)
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td043mtea1.c b/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td043mtea1.c
+index bb85b21f07248..9f6ef9e04d9ce 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td043mtea1.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td043mtea1.c
+@@ -169,7 +169,7 @@ static ssize_t tpo_td043_vmirror_show(struct device *dev,
+ {
+ 	struct panel_drv_data *ddata = dev_get_drvdata(dev);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", ddata->vmirror);
++	return sysfs_emit(buf, "%d\n", ddata->vmirror);
+ }
+ 
+ static ssize_t tpo_td043_vmirror_store(struct device *dev,
+@@ -199,7 +199,7 @@ static ssize_t tpo_td043_mode_show(struct device *dev,
+ {
+ 	struct panel_drv_data *ddata = dev_get_drvdata(dev);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", ddata->mode);
++	return sysfs_emit(buf, "%d\n", ddata->mode);
+ }
+ 
+ static ssize_t tpo_td043_mode_store(struct device *dev,
+diff --git a/drivers/video/fbdev/sm712fb.c b/drivers/video/fbdev/sm712fb.c
+index 0dbc6bf8268ac..092a1caa1208e 100644
+--- a/drivers/video/fbdev/sm712fb.c
++++ b/drivers/video/fbdev/sm712fb.c
+@@ -1047,7 +1047,7 @@ static ssize_t smtcfb_read(struct fb_info *info, char __user *buf,
+ 	if (count + p > total_size)
+ 		count = total_size - p;
+ 
+-	buffer = kmalloc((count > PAGE_SIZE) ? PAGE_SIZE : count, GFP_KERNEL);
++	buffer = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ 	if (!buffer)
+ 		return -ENOMEM;
+ 
+@@ -1059,25 +1059,14 @@ static ssize_t smtcfb_read(struct fb_info *info, char __user *buf,
+ 	while (count) {
+ 		c = (count > PAGE_SIZE) ? PAGE_SIZE : count;
+ 		dst = buffer;
+-		for (i = c >> 2; i--;) {
+-			*dst = fb_readl(src++);
+-			*dst = big_swap(*dst);
++		for (i = (c + 3) >> 2; i--;) {
++			u32 val;
++
++			val = fb_readl(src);
++			*dst = big_swap(val);
++			src++;
+ 			dst++;
+ 		}
+-		if (c & 3) {
+-			u8 *dst8 = (u8 *)dst;
+-			u8 __iomem *src8 = (u8 __iomem *)src;
+-
+-			for (i = c & 3; i--;) {
+-				if (i & 1) {
+-					*dst8++ = fb_readb(++src8);
+-				} else {
+-					*dst8++ = fb_readb(--src8);
+-					src8 += 2;
+-				}
+-			}
+-			src = (u32 __iomem *)src8;
+-		}
+ 
+ 		if (copy_to_user(buf, buffer, c)) {
+ 			err = -EFAULT;
+@@ -1130,7 +1119,7 @@ static ssize_t smtcfb_write(struct fb_info *info, const char __user *buf,
+ 		count = total_size - p;
+ 	}
+ 
+-	buffer = kmalloc((count > PAGE_SIZE) ? PAGE_SIZE : count, GFP_KERNEL);
++	buffer = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ 	if (!buffer)
+ 		return -ENOMEM;
+ 
+@@ -1148,24 +1137,11 @@ static ssize_t smtcfb_write(struct fb_info *info, const char __user *buf,
+ 			break;
+ 		}
+ 
+-		for (i = c >> 2; i--;) {
+-			fb_writel(big_swap(*src), dst++);
++		for (i = (c + 3) >> 2; i--;) {
++			fb_writel(big_swap(*src), dst);
++			dst++;
+ 			src++;
+ 		}
+-		if (c & 3) {
+-			u8 *src8 = (u8 *)src;
+-			u8 __iomem *dst8 = (u8 __iomem *)dst;
+-
+-			for (i = c & 3; i--;) {
+-				if (i & 1) {
+-					fb_writeb(*src8++, ++dst8);
+-				} else {
+-					fb_writeb(*src8++, --dst8);
+-					dst8 += 2;
+-				}
+-			}
+-			dst = (u32 __iomem *)dst8;
+-		}
+ 
+ 		*ppos += c;
+ 		buf += c;
+diff --git a/drivers/video/fbdev/smscufx.c b/drivers/video/fbdev/smscufx.c
+index bfac3ee4a6422..28768c272b73d 100644
+--- a/drivers/video/fbdev/smscufx.c
++++ b/drivers/video/fbdev/smscufx.c
+@@ -1656,6 +1656,7 @@ static int ufx_usb_probe(struct usb_interface *interface,
+ 	info->par = dev;
+ 	info->pseudo_palette = dev->pseudo_palette;
+ 	info->fbops = &ufx_ops;
++	INIT_LIST_HEAD(&info->modelist);
+ 
+ 	retval = fb_alloc_cmap(&info->cmap, 256, 0);
+ 	if (retval < 0) {
+@@ -1666,8 +1667,6 @@ static int ufx_usb_probe(struct usb_interface *interface,
+ 	INIT_DELAYED_WORK(&dev->free_framebuffer_work,
+ 			  ufx_free_framebuffer_work);
+ 
+-	INIT_LIST_HEAD(&info->modelist);
+-
+ 	retval = ufx_reg_read(dev, 0x3000, &id_rev);
+ 	check_warn_goto_error(retval, "error %d reading 0x3000 register from device", retval);
+ 	dev_dbg(dev->gdev, "ID_REV register value 0x%08x", id_rev);
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index b9cdd02c10009..90f48b71fd8f7 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -1426,7 +1426,7 @@ static ssize_t metrics_bytes_rendered_show(struct device *fbdev,
+ 				   struct device_attribute *a, char *buf) {
+ 	struct fb_info *fb_info = dev_get_drvdata(fbdev);
+ 	struct dlfb_data *dlfb = fb_info->par;
+-	return snprintf(buf, PAGE_SIZE, "%u\n",
++	return sysfs_emit(buf, "%u\n",
+ 			atomic_read(&dlfb->bytes_rendered));
+ }
+ 
+@@ -1434,7 +1434,7 @@ static ssize_t metrics_bytes_identical_show(struct device *fbdev,
+ 				   struct device_attribute *a, char *buf) {
+ 	struct fb_info *fb_info = dev_get_drvdata(fbdev);
+ 	struct dlfb_data *dlfb = fb_info->par;
+-	return snprintf(buf, PAGE_SIZE, "%u\n",
++	return sysfs_emit(buf, "%u\n",
+ 			atomic_read(&dlfb->bytes_identical));
+ }
+ 
+@@ -1442,7 +1442,7 @@ static ssize_t metrics_bytes_sent_show(struct device *fbdev,
+ 				   struct device_attribute *a, char *buf) {
+ 	struct fb_info *fb_info = dev_get_drvdata(fbdev);
+ 	struct dlfb_data *dlfb = fb_info->par;
+-	return snprintf(buf, PAGE_SIZE, "%u\n",
++	return sysfs_emit(buf, "%u\n",
+ 			atomic_read(&dlfb->bytes_sent));
+ }
+ 
+@@ -1450,7 +1450,7 @@ static ssize_t metrics_cpu_kcycles_used_show(struct device *fbdev,
+ 				   struct device_attribute *a, char *buf) {
+ 	struct fb_info *fb_info = dev_get_drvdata(fbdev);
+ 	struct dlfb_data *dlfb = fb_info->par;
+-	return snprintf(buf, PAGE_SIZE, "%u\n",
++	return sysfs_emit(buf, "%u\n",
+ 			atomic_read(&dlfb->cpu_kcycles_used));
+ }
+ 
+diff --git a/drivers/video/fbdev/w100fb.c b/drivers/video/fbdev/w100fb.c
+index d96ab28f8ce4a..4e641a780726e 100644
+--- a/drivers/video/fbdev/w100fb.c
++++ b/drivers/video/fbdev/w100fb.c
+@@ -770,12 +770,18 @@ out:
+ 		fb_dealloc_cmap(&info->cmap);
+ 		kfree(info->pseudo_palette);
+ 	}
+-	if (remapped_fbuf != NULL)
++	if (remapped_fbuf != NULL) {
+ 		iounmap(remapped_fbuf);
+-	if (remapped_regs != NULL)
++		remapped_fbuf = NULL;
++	}
++	if (remapped_regs != NULL) {
+ 		iounmap(remapped_regs);
+-	if (remapped_base != NULL)
++		remapped_regs = NULL;
++	}
++	if (remapped_base != NULL) {
+ 		iounmap(remapped_base);
++		remapped_base = NULL;
++	}
+ 	if (info)
+ 		framebuffer_release(info);
+ 	return err;
+@@ -795,8 +801,11 @@ static int w100fb_remove(struct platform_device *pdev)
+ 	fb_dealloc_cmap(&info->cmap);
+ 
+ 	iounmap(remapped_base);
++	remapped_base = NULL;
+ 	iounmap(remapped_regs);
++	remapped_regs = NULL;
+ 	iounmap(remapped_fbuf);
++	remapped_fbuf = NULL;
+ 
+ 	framebuffer_release(info);
+ 
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index 359302f71f7ef..ae7f9357bb871 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -229,6 +229,7 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ 	ret = pm_runtime_get_sync(dev);
+ 	if (ret) {
+ 		pm_runtime_put_noidle(dev);
++		pm_runtime_disable(&pdev->dev);
+ 		return dev_err_probe(dev, ret, "runtime pm failed\n");
+ 	}
+ 
+diff --git a/drivers/watchdog/sp805_wdt.c b/drivers/watchdog/sp805_wdt.c
+index 190d26e2e75f9..2815f78d22bb3 100644
+--- a/drivers/watchdog/sp805_wdt.c
++++ b/drivers/watchdog/sp805_wdt.c
+@@ -304,14 +304,12 @@ err:
+ 	return ret;
+ }
+ 
+-static int sp805_wdt_remove(struct amba_device *adev)
++static void sp805_wdt_remove(struct amba_device *adev)
+ {
+ 	struct sp805_wdt *wdt = amba_get_drvdata(adev);
+ 
+ 	watchdog_unregister_device(&wdt->wdd);
+ 	watchdog_set_drvdata(&wdt->wdd, NULL);
+-
+-	return 0;
+ }
+ 
+ static int __maybe_unused sp805_wdt_suspend(struct device *dev)
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 04c4aa7a1df2c..213864bc7e8c0 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -170,8 +170,8 @@ static int padzero(unsigned long elf_bss)
+ 
+ static int
+ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
+-		unsigned long load_addr, unsigned long interp_load_addr,
+-		unsigned long e_entry)
++		unsigned long interp_load_addr,
++		unsigned long e_entry, unsigned long phdr_addr)
+ {
+ 	struct mm_struct *mm = current->mm;
+ 	unsigned long p = bprm->p;
+@@ -256,7 +256,7 @@ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
+ 	NEW_AUX_ENT(AT_HWCAP, ELF_HWCAP);
+ 	NEW_AUX_ENT(AT_PAGESZ, ELF_EXEC_PAGESIZE);
+ 	NEW_AUX_ENT(AT_CLKTCK, CLOCKS_PER_SEC);
+-	NEW_AUX_ENT(AT_PHDR, load_addr + exec->e_phoff);
++	NEW_AUX_ENT(AT_PHDR, phdr_addr);
+ 	NEW_AUX_ENT(AT_PHENT, sizeof(struct elf_phdr));
+ 	NEW_AUX_ENT(AT_PHNUM, exec->e_phnum);
+ 	NEW_AUX_ENT(AT_BASE, interp_load_addr);
+@@ -820,7 +820,7 @@ static int parse_elf_properties(struct file *f, const struct elf_phdr *phdr,
+ static int load_elf_binary(struct linux_binprm *bprm)
+ {
+ 	struct file *interpreter = NULL; /* to shut gcc up */
+- 	unsigned long load_addr = 0, load_bias = 0;
++	unsigned long load_addr, load_bias = 0, phdr_addr = 0;
+ 	int load_addr_set = 0;
+ 	unsigned long error;
+ 	struct elf_phdr *elf_ppnt, *elf_phdata, *interp_elf_phdata = NULL;
+@@ -1153,6 +1153,17 @@ out_free_interp:
+ 				reloc_func_desc = load_bias;
+ 			}
+ 		}
++
++		/*
++		 * Figure out which segment in the file contains the Program
++		 * Header table, and map to the associated memory address.
++		 */
++		if (elf_ppnt->p_offset <= elf_ex->e_phoff &&
++		    elf_ex->e_phoff < elf_ppnt->p_offset + elf_ppnt->p_filesz) {
++			phdr_addr = elf_ex->e_phoff - elf_ppnt->p_offset +
++				    elf_ppnt->p_vaddr;
++		}
++
+ 		k = elf_ppnt->p_vaddr;
+ 		if ((elf_ppnt->p_flags & PF_X) && k < start_code)
+ 			start_code = k;
+@@ -1188,6 +1199,7 @@ out_free_interp:
+ 	}
+ 
+ 	e_entry = elf_ex->e_entry + load_bias;
++	phdr_addr += load_bias;
+ 	elf_bss += load_bias;
+ 	elf_brk += load_bias;
+ 	start_code += load_bias;
+@@ -1251,8 +1263,8 @@ out_free_interp:
+ 		goto out;
+ #endif /* ARCH_HAS_SETUP_ADDITIONAL_PAGES */
+ 
+-	retval = create_elf_tables(bprm, elf_ex,
+-			  load_addr, interp_load_addr, e_entry);
++	retval = create_elf_tables(bprm, elf_ex, interp_load_addr,
++				   e_entry, phdr_addr);
+ 	if (retval < 0)
+ 		goto out;
+ 
+@@ -1601,17 +1613,16 @@ static void fill_siginfo_note(struct memelfnote *note, user_siginfo_t *csigdata,
+  *   long file_ofs
+  * followed by COUNT filenames in ASCII: "FILE1" NUL "FILE2" NUL...
+  */
+-static int fill_files_note(struct memelfnote *note)
++static int fill_files_note(struct memelfnote *note, struct coredump_params *cprm)
+ {
+-	struct mm_struct *mm = current->mm;
+-	struct vm_area_struct *vma;
+ 	unsigned count, size, names_ofs, remaining, n;
+ 	user_long_t *data;
+ 	user_long_t *start_end_ofs;
+ 	char *name_base, *name_curpos;
++	int i;
+ 
+ 	/* *Estimated* file count and total data size needed */
+-	count = mm->map_count;
++	count = cprm->vma_count;
+ 	if (count > UINT_MAX / 64)
+ 		return -EINVAL;
+ 	size = count * 64;
+@@ -1633,11 +1644,12 @@ static int fill_files_note(struct memelfnote *note)
+ 	name_base = name_curpos = ((char *)data) + names_ofs;
+ 	remaining = size - names_ofs;
+ 	count = 0;
+-	for (vma = mm->mmap; vma != NULL; vma = vma->vm_next) {
++	for (i = 0; i < cprm->vma_count; i++) {
++		struct core_vma_metadata *m = &cprm->vma_meta[i];
+ 		struct file *file;
+ 		const char *filename;
+ 
+-		file = vma->vm_file;
++		file = m->file;
+ 		if (!file)
+ 			continue;
+ 		filename = file_path(file, name_curpos, remaining);
+@@ -1657,9 +1669,9 @@ static int fill_files_note(struct memelfnote *note)
+ 		memmove(name_curpos, filename, n);
+ 		name_curpos += n;
+ 
+-		*start_end_ofs++ = vma->vm_start;
+-		*start_end_ofs++ = vma->vm_end;
+-		*start_end_ofs++ = vma->vm_pgoff;
++		*start_end_ofs++ = m->start;
++		*start_end_ofs++ = m->end;
++		*start_end_ofs++ = m->pgoff;
+ 		count++;
+ 	}
+ 
+@@ -1670,7 +1682,7 @@ static int fill_files_note(struct memelfnote *note)
+ 	 * Count usually is less than mm->map_count,
+ 	 * we need to move filenames down.
+ 	 */
+-	n = mm->map_count - count;
++	n = cprm->vma_count - count;
+ 	if (n != 0) {
+ 		unsigned shift_bytes = n * 3 * sizeof(data[0]);
+ 		memmove(name_base - shift_bytes, name_base,
+@@ -1785,7 +1797,7 @@ static int fill_thread_core_info(struct elf_thread_core_info *t,
+ 
+ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ 			  struct elf_note_info *info,
+-			  const kernel_siginfo_t *siginfo, struct pt_regs *regs)
++			  struct coredump_params *cprm)
+ {
+ 	struct task_struct *dump_task = current;
+ 	const struct user_regset_view *view = task_user_regset_view(dump_task);
+@@ -1857,7 +1869,7 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ 	 * Now fill in each thread's information.
+ 	 */
+ 	for (t = info->thread; t != NULL; t = t->next)
+-		if (!fill_thread_core_info(t, view, siginfo->si_signo, &info->size))
++		if (!fill_thread_core_info(t, view, cprm->siginfo->si_signo, &info->size))
+ 			return 0;
+ 
+ 	/*
+@@ -1866,13 +1878,13 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ 	fill_psinfo(psinfo, dump_task->group_leader, dump_task->mm);
+ 	info->size += notesize(&info->psinfo);
+ 
+-	fill_siginfo_note(&info->signote, &info->csigdata, siginfo);
++	fill_siginfo_note(&info->signote, &info->csigdata, cprm->siginfo);
+ 	info->size += notesize(&info->signote);
+ 
+ 	fill_auxv_note(&info->auxv, current->mm);
+ 	info->size += notesize(&info->auxv);
+ 
+-	if (fill_files_note(&info->files) == 0)
++	if (fill_files_note(&info->files, cprm) == 0)
+ 		info->size += notesize(&info->files);
+ 
+ 	return 1;
+@@ -2014,7 +2026,7 @@ static int elf_note_info_init(struct elf_note_info *info)
+ 
+ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ 			  struct elf_note_info *info,
+-			  const kernel_siginfo_t *siginfo, struct pt_regs *regs)
++			  struct coredump_params *cprm)
+ {
+ 	struct core_thread *ct;
+ 	struct elf_thread_status *ets;
+@@ -2035,13 +2047,13 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ 	list_for_each_entry(ets, &info->thread_list, list) {
+ 		int sz;
+ 
+-		sz = elf_dump_thread_status(siginfo->si_signo, ets);
++		sz = elf_dump_thread_status(cprm->siginfo->si_signo, ets);
+ 		info->thread_status_size += sz;
+ 	}
+ 	/* now collect the dump for the current */
+ 	memset(info->prstatus, 0, sizeof(*info->prstatus));
+-	fill_prstatus(info->prstatus, current, siginfo->si_signo);
+-	elf_core_copy_regs(&info->prstatus->pr_reg, regs);
++	fill_prstatus(info->prstatus, current, cprm->siginfo->si_signo);
++	elf_core_copy_regs(&info->prstatus->pr_reg, cprm->regs);
+ 
+ 	/* Set up header */
+ 	fill_elf_header(elf, phdrs, ELF_ARCH, ELF_CORE_EFLAGS);
+@@ -2057,18 +2069,18 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ 	fill_note(info->notes + 1, "CORE", NT_PRPSINFO,
+ 		  sizeof(*info->psinfo), info->psinfo);
+ 
+-	fill_siginfo_note(info->notes + 2, &info->csigdata, siginfo);
++	fill_siginfo_note(info->notes + 2, &info->csigdata, cprm->siginfo);
+ 	fill_auxv_note(info->notes + 3, current->mm);
+ 	info->numnote = 4;
+ 
+-	if (fill_files_note(info->notes + info->numnote) == 0) {
++	if (fill_files_note(info->notes + info->numnote, cprm) == 0) {
+ 		info->notes_files = info->notes + info->numnote;
+ 		info->numnote++;
+ 	}
+ 
+ 	/* Try to dump the FPU. */
+-	info->prstatus->pr_fpvalid = elf_core_copy_task_fpregs(current, regs,
+-							       info->fpu);
++	info->prstatus->pr_fpvalid =
++		elf_core_copy_task_fpregs(current, cprm->regs, info->fpu);
+ 	if (info->prstatus->pr_fpvalid)
+ 		fill_note(info->notes + info->numnote++,
+ 			  "CORE", NT_PRFPREG, sizeof(*info->fpu), info->fpu);
+@@ -2154,8 +2166,7 @@ static void fill_extnum_info(struct elfhdr *elf, struct elf_shdr *shdr4extnum,
+ static int elf_core_dump(struct coredump_params *cprm)
+ {
+ 	int has_dumped = 0;
+-	int vma_count, segs, i;
+-	size_t vma_data_size;
++	int segs, i;
+ 	struct elfhdr elf;
+ 	loff_t offset = 0, dataoff;
+ 	struct elf_note_info info = { };
+@@ -2163,16 +2174,12 @@ static int elf_core_dump(struct coredump_params *cprm)
+ 	struct elf_shdr *shdr4extnum = NULL;
+ 	Elf_Half e_phnum;
+ 	elf_addr_t e_shoff;
+-	struct core_vma_metadata *vma_meta;
+-
+-	if (dump_vma_snapshot(cprm, &vma_count, &vma_meta, &vma_data_size))
+-		return 0;
+ 
+ 	/*
+ 	 * The number of segs are recored into ELF header as 16bit value.
+ 	 * Please check DEFAULT_MAX_MAP_COUNT definition when you modify here.
+ 	 */
+-	segs = vma_count + elf_core_extra_phdrs();
++	segs = cprm->vma_count + elf_core_extra_phdrs();
+ 
+ 	/* for notes section */
+ 	segs++;
+@@ -2186,7 +2193,7 @@ static int elf_core_dump(struct coredump_params *cprm)
+ 	 * Collect all the non-memory information about the process for the
+ 	 * notes.  This also sets up the file header.
+ 	 */
+-	if (!fill_note_info(&elf, e_phnum, &info, cprm->siginfo, cprm->regs))
++	if (!fill_note_info(&elf, e_phnum, &info, cprm))
+ 		goto end_coredump;
+ 
+ 	has_dumped = 1;
+@@ -2210,7 +2217,7 @@ static int elf_core_dump(struct coredump_params *cprm)
+ 
+ 	dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
+ 
+-	offset += vma_data_size;
++	offset += cprm->vma_data_size;
+ 	offset += elf_core_extra_data_size();
+ 	e_shoff = offset;
+ 
+@@ -2230,8 +2237,8 @@ static int elf_core_dump(struct coredump_params *cprm)
+ 		goto end_coredump;
+ 
+ 	/* Write program headers for segments dump */
+-	for (i = 0; i < vma_count; i++) {
+-		struct core_vma_metadata *meta = vma_meta + i;
++	for (i = 0; i < cprm->vma_count; i++) {
++		struct core_vma_metadata *meta = cprm->vma_meta + i;
+ 		struct elf_phdr phdr;
+ 
+ 		phdr.p_type = PT_LOAD;
+@@ -2268,8 +2275,8 @@ static int elf_core_dump(struct coredump_params *cprm)
+ 	if (!dump_skip(cprm, dataoff - cprm->pos))
+ 		goto end_coredump;
+ 
+-	for (i = 0; i < vma_count; i++) {
+-		struct core_vma_metadata *meta = vma_meta + i;
++	for (i = 0; i < cprm->vma_count; i++) {
++		struct core_vma_metadata *meta = cprm->vma_meta + i;
+ 
+ 		if (!dump_user_range(cprm, meta->start, meta->dump_size))
+ 			goto end_coredump;
+@@ -2287,7 +2294,6 @@ static int elf_core_dump(struct coredump_params *cprm)
+ end_coredump:
+ 	free_note_info(&info);
+ 	kfree(shdr4extnum);
+-	kvfree(vma_meta);
+ 	kfree(phdr4note);
+ 	return has_dumped;
+ }
+diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
+index be4062b8ba75e..5764295a3f0ff 100644
+--- a/fs/binfmt_elf_fdpic.c
++++ b/fs/binfmt_elf_fdpic.c
+@@ -1479,7 +1479,7 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm,
+ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ {
+ 	int has_dumped = 0;
+-	int vma_count, segs;
++	int segs;
+ 	int i;
+ 	struct elfhdr *elf = NULL;
+ 	loff_t offset = 0, dataoff;
+@@ -1494,8 +1494,6 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ 	elf_addr_t e_shoff;
+ 	struct core_thread *ct;
+ 	struct elf_thread_status *tmp;
+-	struct core_vma_metadata *vma_meta = NULL;
+-	size_t vma_data_size;
+ 
+ 	/* alloc memory for large data structures: too large to be on stack */
+ 	elf = kmalloc(sizeof(*elf), GFP_KERNEL);
+@@ -1505,9 +1503,6 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ 	if (!psinfo)
+ 		goto end_coredump;
+ 
+-	if (dump_vma_snapshot(cprm, &vma_count, &vma_meta, &vma_data_size))
+-		goto end_coredump;
+-
+ 	for (ct = current->mm->core_state->dumper.next;
+ 					ct; ct = ct->next) {
+ 		tmp = elf_dump_thread_status(cprm->siginfo->si_signo,
+@@ -1527,7 +1522,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ 	tmp->next = thread_list;
+ 	thread_list = tmp;
+ 
+-	segs = vma_count + elf_core_extra_phdrs();
++	segs = cprm->vma_count + elf_core_extra_phdrs();
+ 
+ 	/* for notes section */
+ 	segs++;
+@@ -1572,7 +1567,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ 	/* Page-align dumped data */
+ 	dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
+ 
+-	offset += vma_data_size;
++	offset += cprm->vma_data_size;
+ 	offset += elf_core_extra_data_size();
+ 	e_shoff = offset;
+ 
+@@ -1592,8 +1587,8 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ 		goto end_coredump;
+ 
+ 	/* write program headers for segments dump */
+-	for (i = 0; i < vma_count; i++) {
+-		struct core_vma_metadata *meta = vma_meta + i;
++	for (i = 0; i < cprm->vma_count; i++) {
++		struct core_vma_metadata *meta = cprm->vma_meta + i;
+ 		struct elf_phdr phdr;
+ 		size_t sz;
+ 
+@@ -1643,7 +1638,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ 	if (!dump_skip(cprm, dataoff - cprm->pos))
+ 		goto end_coredump;
+ 
+-	if (!elf_fdpic_dump_segments(cprm, vma_meta, vma_count))
++	if (!elf_fdpic_dump_segments(cprm, cprm->vma_meta, cprm->vma_count))
+ 		goto end_coredump;
+ 
+ 	if (!elf_core_write_extra_data(cprm))
+@@ -1667,7 +1662,6 @@ end_coredump:
+ 		thread_list = thread_list->next;
+ 		kfree(tmp);
+ 	}
+-	kvfree(vma_meta);
+ 	kfree(phdr4note);
+ 	kfree(elf);
+ 	kfree(psinfo);
+diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
+index 3a3102bc15a05..4b3ae0faf548e 100644
+--- a/fs/btrfs/reflink.c
++++ b/fs/btrfs/reflink.c
+@@ -503,8 +503,11 @@ process_slot:
+ 			 */
+ 			ASSERT(key.offset == 0);
+ 			ASSERT(datal <= fs_info->sectorsize);
+-			if (key.offset != 0 || datal > fs_info->sectorsize)
+-				return -EUCLEAN;
++			if (WARN_ON(key.offset != 0) ||
++			    WARN_ON(datal > fs_info->sectorsize)) {
++				ret = -EUCLEAN;
++				goto out;
++			}
+ 
+ 			ret = clone_copy_inline_extent(inode, path, &new_key,
+ 						       drop_start, datal, size,
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index fdb1d660bd136..0e8f484031da9 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1526,6 +1526,7 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 	unsigned int size[2];
+ 	void *data[2];
+ 	int create_options = is_dir ? CREATE_NOT_FILE : CREATE_NOT_DIR;
++	void (*free_req1_func)(struct smb_rqst *r);
+ 
+ 	vars = kzalloc(sizeof(*vars), GFP_ATOMIC);
+ 	if (vars == NULL)
+@@ -1535,27 +1536,29 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 
+ 	resp_buftype[0] = resp_buftype[1] = resp_buftype[2] = CIFS_NO_BUFFER;
+ 
+-	if (copy_from_user(&qi, arg, sizeof(struct smb_query_info)))
+-		goto e_fault;
+-
++	if (copy_from_user(&qi, arg, sizeof(struct smb_query_info))) {
++		rc = -EFAULT;
++		goto free_vars;
++	}
+ 	if (qi.output_buffer_length > 1024) {
+-		kfree(vars);
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto free_vars;
+ 	}
+ 
+ 	if (!ses || !server) {
+-		kfree(vars);
+-		return -EIO;
++		rc = -EIO;
++		goto free_vars;
+ 	}
+ 
+ 	if (smb3_encryption_required(tcon))
+ 		flags |= CIFS_TRANSFORM_REQ;
+ 
+-	buffer = memdup_user(arg + sizeof(struct smb_query_info),
+-			     qi.output_buffer_length);
+-	if (IS_ERR(buffer)) {
+-		kfree(vars);
+-		return PTR_ERR(buffer);
++	if (qi.output_buffer_length) {
++		buffer = memdup_user(arg + sizeof(struct smb_query_info), qi.output_buffer_length);
++		if (IS_ERR(buffer)) {
++			rc = PTR_ERR(buffer);
++			goto free_vars;
++		}
+ 	}
+ 
+ 	/* Open */
+@@ -1593,45 +1596,45 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 	rc = SMB2_open_init(tcon, server,
+ 			    &rqst[0], &oplock, &oparms, path);
+ 	if (rc)
+-		goto iqinf_exit;
++		goto free_output_buffer;
+ 	smb2_set_next_command(tcon, &rqst[0]);
+ 
+ 	/* Query */
+ 	if (qi.flags & PASSTHRU_FSCTL) {
+ 		/* Can eventually relax perm check since server enforces too */
+-		if (!capable(CAP_SYS_ADMIN))
++		if (!capable(CAP_SYS_ADMIN)) {
+ 			rc = -EPERM;
+-		else  {
+-			rqst[1].rq_iov = &vars->io_iov[0];
+-			rqst[1].rq_nvec = SMB2_IOCTL_IOV_SIZE;
+-
+-			rc = SMB2_ioctl_init(tcon, server,
+-					     &rqst[1],
+-					     COMPOUND_FID, COMPOUND_FID,
+-					     qi.info_type, true, buffer,
+-					     qi.output_buffer_length,
+-					     CIFSMaxBufSize -
+-					     MAX_SMB2_CREATE_RESPONSE_SIZE -
+-					     MAX_SMB2_CLOSE_RESPONSE_SIZE);
++			goto free_open_req;
+ 		}
++		rqst[1].rq_iov = &vars->io_iov[0];
++		rqst[1].rq_nvec = SMB2_IOCTL_IOV_SIZE;
++
++		rc = SMB2_ioctl_init(tcon, server, &rqst[1], COMPOUND_FID, COMPOUND_FID,
++				     qi.info_type, true, buffer, qi.output_buffer_length,
++				     CIFSMaxBufSize - MAX_SMB2_CREATE_RESPONSE_SIZE -
++				     MAX_SMB2_CLOSE_RESPONSE_SIZE);
++		free_req1_func = SMB2_ioctl_free;
+ 	} else if (qi.flags == PASSTHRU_SET_INFO) {
+ 		/* Can eventually relax perm check since server enforces too */
+-		if (!capable(CAP_SYS_ADMIN))
++		if (!capable(CAP_SYS_ADMIN)) {
+ 			rc = -EPERM;
+-		else  {
+-			rqst[1].rq_iov = &vars->si_iov[0];
+-			rqst[1].rq_nvec = 1;
+-
+-			size[0] = 8;
+-			data[0] = buffer;
+-
+-			rc = SMB2_set_info_init(tcon, server,
+-					&rqst[1],
+-					COMPOUND_FID, COMPOUND_FID,
+-					current->tgid,
+-					FILE_END_OF_FILE_INFORMATION,
+-					SMB2_O_INFO_FILE, 0, data, size);
++			goto free_open_req;
+ 		}
++		if (qi.output_buffer_length < 8) {
++			rc = -EINVAL;
++			goto free_open_req;
++		}
++		rqst[1].rq_iov = &vars->si_iov[0];
++		rqst[1].rq_nvec = 1;
++
++		/* MS-FSCC 2.4.13 FileEndOfFileInformation */
++		size[0] = 8;
++		data[0] = buffer;
++
++		rc = SMB2_set_info_init(tcon, server, &rqst[1], COMPOUND_FID, COMPOUND_FID,
++					current->tgid, FILE_END_OF_FILE_INFORMATION,
++					SMB2_O_INFO_FILE, 0, data, size);
++		free_req1_func = SMB2_set_info_free;
+ 	} else if (qi.flags == PASSTHRU_QUERY_INFO) {
+ 		rqst[1].rq_iov = &vars->qi_iov[0];
+ 		rqst[1].rq_nvec = 1;
+@@ -1642,6 +1645,7 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 				  qi.info_type, qi.additional_information,
+ 				  qi.input_buffer_length,
+ 				  qi.output_buffer_length, buffer);
++		free_req1_func = SMB2_query_info_free;
+ 	} else { /* unknown flags */
+ 		cifs_tcon_dbg(VFS, "Invalid passthru query flags: 0x%x\n",
+ 			      qi.flags);
+@@ -1649,7 +1653,7 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 	}
+ 
+ 	if (rc)
+-		goto iqinf_exit;
++		goto free_open_req;
+ 	smb2_set_next_command(tcon, &rqst[1]);
+ 	smb2_set_related(&rqst[1]);
+ 
+@@ -1660,14 +1664,14 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 	rc = SMB2_close_init(tcon, server,
+ 			     &rqst[2], COMPOUND_FID, COMPOUND_FID, false);
+ 	if (rc)
+-		goto iqinf_exit;
++		goto free_req_1;
+ 	smb2_set_related(&rqst[2]);
+ 
+ 	rc = compound_send_recv(xid, ses, server,
+ 				flags, 3, rqst,
+ 				resp_buftype, rsp_iov);
+ 	if (rc)
+-		goto iqinf_exit;
++		goto out;
+ 
+ 	/* No need to bump num_remote_opens since handle immediately closed */
+ 	if (qi.flags & PASSTHRU_FSCTL) {
+@@ -1677,18 +1681,22 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 			qi.input_buffer_length = le32_to_cpu(io_rsp->OutputCount);
+ 		if (qi.input_buffer_length > 0 &&
+ 		    le32_to_cpu(io_rsp->OutputOffset) + qi.input_buffer_length
+-		    > rsp_iov[1].iov_len)
+-			goto e_fault;
++		    > rsp_iov[1].iov_len) {
++			rc = -EFAULT;
++			goto out;
++		}
+ 
+ 		if (copy_to_user(&pqi->input_buffer_length,
+ 				 &qi.input_buffer_length,
+-				 sizeof(qi.input_buffer_length)))
+-			goto e_fault;
++				 sizeof(qi.input_buffer_length))) {
++			rc = -EFAULT;
++			goto out;
++		}
+ 
+ 		if (copy_to_user((void __user *)pqi + sizeof(struct smb_query_info),
+ 				 (const void *)io_rsp + le32_to_cpu(io_rsp->OutputOffset),
+ 				 qi.input_buffer_length))
+-			goto e_fault;
++			rc = -EFAULT;
+ 	} else {
+ 		pqi = (struct smb_query_info __user *)arg;
+ 		qi_rsp = (struct smb2_query_info_rsp *)rsp_iov[1].iov_base;
+@@ -1696,28 +1704,30 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 			qi.input_buffer_length = le32_to_cpu(qi_rsp->OutputBufferLength);
+ 		if (copy_to_user(&pqi->input_buffer_length,
+ 				 &qi.input_buffer_length,
+-				 sizeof(qi.input_buffer_length)))
+-			goto e_fault;
++				 sizeof(qi.input_buffer_length))) {
++			rc = -EFAULT;
++			goto out;
++		}
+ 
+ 		if (copy_to_user(pqi + 1, qi_rsp->Buffer,
+ 				 qi.input_buffer_length))
+-			goto e_fault;
++			rc = -EFAULT;
+ 	}
+ 
+- iqinf_exit:
+-	cifs_small_buf_release(rqst[0].rq_iov[0].iov_base);
+-	cifs_small_buf_release(rqst[1].rq_iov[0].iov_base);
+-	cifs_small_buf_release(rqst[2].rq_iov[0].iov_base);
++out:
+ 	free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
+ 	free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
+ 	free_rsp_buf(resp_buftype[2], rsp_iov[2].iov_base);
+-	kfree(vars);
++	SMB2_close_free(&rqst[2]);
++free_req_1:
++	free_req1_func(&rqst[1]);
++free_open_req:
++	SMB2_open_free(&rqst[0]);
++free_output_buffer:
+ 	kfree(buffer);
++free_vars:
++	kfree(vars);
+ 	return rc;
+-
+-e_fault:
+-	rc = -EFAULT;
+-	goto iqinf_exit;
+ }
+ 
+ static ssize_t
+diff --git a/fs/coredump.c b/fs/coredump.c
+index c56a3bdce7cd4..edbaf61125c9c 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -41,6 +41,7 @@
+ #include <linux/fs.h>
+ #include <linux/path.h>
+ #include <linux/timekeeping.h>
++#include <linux/elf.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/mmu_context.h>
+@@ -52,6 +53,9 @@
+ 
+ #include <trace/events/sched.h>
+ 
++static bool dump_vma_snapshot(struct coredump_params *cprm);
++static void free_vma_snapshot(struct coredump_params *cprm);
++
+ int core_uses_pid;
+ unsigned int core_pipe_limit;
+ char core_pattern[CORENAME_MAX_SIZE] = "core";
+@@ -601,6 +605,7 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ 		 * by any locks.
+ 		 */
+ 		.mm_flags = mm->flags,
++		.vma_meta = NULL,
+ 	};
+ 
+ 	audit_core_dumps(siginfo->si_signo);
+@@ -806,9 +811,13 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ 			pr_info("Core dump to |%s disabled\n", cn.corename);
+ 			goto close_fail;
+ 		}
++		if (!dump_vma_snapshot(&cprm))
++			goto close_fail;
++
+ 		file_start_write(cprm.file);
+ 		core_dumped = binfmt->core_dump(&cprm);
+ 		file_end_write(cprm.file);
++		free_vma_snapshot(&cprm);
+ 	}
+ 	if (ispipe && core_pipe_limit)
+ 		wait_for_dump_helpers(cprm.file);
+@@ -969,6 +978,8 @@ static bool always_dump_vma(struct vm_area_struct *vma)
+ 	return false;
+ }
+ 
++#define DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER 1
++
+ /*
+  * Decide how much of @vma's contents should be included in a core dump.
+  */
+@@ -1028,9 +1039,20 @@ static unsigned long vma_dump_size(struct vm_area_struct *vma,
+ 	 * dump the first page to aid in determining what was mapped here.
+ 	 */
+ 	if (FILTER(ELF_HEADERS) &&
+-	    vma->vm_pgoff == 0 && (vma->vm_flags & VM_READ) &&
+-	    (READ_ONCE(file_inode(vma->vm_file)->i_mode) & 0111) != 0)
+-		return PAGE_SIZE;
++	    vma->vm_pgoff == 0 && (vma->vm_flags & VM_READ)) {
++		if ((READ_ONCE(file_inode(vma->vm_file)->i_mode) & 0111) != 0)
++			return PAGE_SIZE;
++
++		/*
++		 * ELF libraries aren't always executable.
++		 * We'll want to check whether the mapping starts with the ELF
++		 * magic, but not now - we're holding the mmap lock,
++		 * so copy_from_user() doesn't work here.
++		 * Use a placeholder instead, and fix it up later in
++		 * dump_vma_snapshot().
++		 */
++		return DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER;
++	}
+ 
+ #undef	FILTER
+ 
+@@ -1067,18 +1089,29 @@ static struct vm_area_struct *next_vma(struct vm_area_struct *this_vma,
+ 	return gate_vma;
+ }
+ 
++static void free_vma_snapshot(struct coredump_params *cprm)
++{
++	if (cprm->vma_meta) {
++		int i;
++		for (i = 0; i < cprm->vma_count; i++) {
++			struct file *file = cprm->vma_meta[i].file;
++			if (file)
++				fput(file);
++		}
++		kvfree(cprm->vma_meta);
++		cprm->vma_meta = NULL;
++	}
++}
++
+ /*
+  * Under the mmap_lock, take a snapshot of relevant information about the task's
+  * VMAs.
+  */
+-int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+-		      struct core_vma_metadata **vma_meta,
+-		      size_t *vma_data_size_ptr)
++static bool dump_vma_snapshot(struct coredump_params *cprm)
+ {
+ 	struct vm_area_struct *vma, *gate_vma;
+ 	struct mm_struct *mm = current->mm;
+ 	int i;
+-	size_t vma_data_size = 0;
+ 
+ 	/*
+ 	 * Once the stack expansion code is fixed to not change VMA bounds
+@@ -1086,36 +1119,51 @@ int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+ 	 * mmap_lock in read mode.
+ 	 */
+ 	if (mmap_write_lock_killable(mm))
+-		return -EINTR;
++		return false;
+ 
++	cprm->vma_data_size = 0;
+ 	gate_vma = get_gate_vma(mm);
+-	*vma_count = mm->map_count + (gate_vma ? 1 : 0);
++	cprm->vma_count = mm->map_count + (gate_vma ? 1 : 0);
+ 
+-	*vma_meta = kvmalloc_array(*vma_count, sizeof(**vma_meta), GFP_KERNEL);
+-	if (!*vma_meta) {
++	cprm->vma_meta = kvmalloc_array(cprm->vma_count, sizeof(*cprm->vma_meta), GFP_KERNEL);
++	if (!cprm->vma_meta) {
+ 		mmap_write_unlock(mm);
+-		return -ENOMEM;
++		return false;
+ 	}
+ 
+ 	for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
+ 			vma = next_vma(vma, gate_vma), i++) {
+-		struct core_vma_metadata *m = (*vma_meta) + i;
++		struct core_vma_metadata *m = cprm->vma_meta + i;
+ 
+ 		m->start = vma->vm_start;
+ 		m->end = vma->vm_end;
+ 		m->flags = vma->vm_flags;
+ 		m->dump_size = vma_dump_size(vma, cprm->mm_flags);
++		m->pgoff = vma->vm_pgoff;
+ 
+-		vma_data_size += m->dump_size;
++		m->file = vma->vm_file;
++		if (m->file)
++			get_file(m->file);
+ 	}
+ 
+ 	mmap_write_unlock(mm);
+ 
+-	if (WARN_ON(i != *vma_count)) {
+-		kvfree(*vma_meta);
+-		return -EFAULT;
++	for (i = 0; i < cprm->vma_count; i++) {
++		struct core_vma_metadata *m = cprm->vma_meta + i;
++
++		if (m->dump_size == DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER) {
++			char elfmag[SELFMAG];
++
++			if (copy_from_user(elfmag, (void __user *)m->start, SELFMAG) ||
++					memcmp(elfmag, ELFMAG, SELFMAG) != 0) {
++				m->dump_size = 0;
++			} else {
++				m->dump_size = PAGE_SIZE;
++			}
++		}
++
++		cprm->vma_data_size += m->dump_size;
+ 	}
+ 
+-	*vma_data_size_ptr = vma_data_size;
+-	return 0;
++	return true;
+ }
+diff --git a/fs/exec.c b/fs/exec.c
+index ca89e0e3ef10f..bcd86f2d176c3 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -494,8 +494,14 @@ static int bprm_stack_limits(struct linux_binprm *bprm)
+ 	 * the stack. They aren't stored until much later when we can't
+ 	 * signal to the parent that the child has run out of stack space.
+ 	 * Instead, calculate it here so it's possible to fail gracefully.
++	 *
++	 * In the case of argc = 0, make sure there is space for adding a
++	 * empty string (which will bump argc to 1), to ensure confused
++	 * userspace programs don't start processing from argv[1], thinking
++	 * argc can never be 0, to keep them from walking envp by accident.
++	 * See do_execveat_common().
+ 	 */
+-	ptr_size = (bprm->argc + bprm->envc) * sizeof(void *);
++	ptr_size = (max(bprm->argc, 1) + bprm->envc) * sizeof(void *);
+ 	if (limit <= ptr_size)
+ 		return -E2BIG;
+ 	limit -= ptr_size;
+@@ -1886,6 +1892,9 @@ static int do_execveat_common(int fd, struct filename *filename,
+ 	}
+ 
+ 	retval = count(argv, MAX_ARG_STRINGS);
++	if (retval == 0)
++		pr_warn_once("process '%s' launched '%s' with NULL argv: empty string added\n",
++			     current->comm, bprm->filename);
+ 	if (retval < 0)
+ 		goto out_free;
+ 	bprm->argc = retval;
+@@ -1912,6 +1921,19 @@ static int do_execveat_common(int fd, struct filename *filename,
+ 	if (retval < 0)
+ 		goto out_free;
+ 
++	/*
++	 * When argv is empty, add an empty string ("") as argv[0] to
++	 * ensure confused userspace programs that start processing
++	 * from argv[1] won't end up walking envp. See also
++	 * bprm_stack_limits().
++	 */
++	if (bprm->argc == 0) {
++		retval = copy_string_kernel("", bprm);
++		if (retval < 0)
++			goto out_free;
++		bprm->argc = 1;
++	}
++
+ 	retval = bprm_execve(bprm, fd, filename, flags);
+ out_free:
+ 	free_bprm(bprm);
+@@ -1940,6 +1962,8 @@ int kernel_execve(const char *kernel_filename,
+ 	}
+ 
+ 	retval = count_strings_kernel(argv);
++	if (WARN_ON_ONCE(retval == 0))
++		retval = -EINVAL;
+ 	if (retval < 0)
+ 		goto out_free;
+ 	bprm->argc = retval;
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index 09f1fe6769727..b6314d3c6a87d 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -756,8 +756,12 @@ static loff_t ext2_max_size(int bits)
+ 	res += 1LL << (bits-2);
+ 	res += 1LL << (2*(bits-2));
+ 	res += 1LL << (3*(bits-2));
++	/* Compute how many metadata blocks are needed */
++	meta_blocks = 1;
++	meta_blocks += 1 + ppb;
++	meta_blocks += 1 + ppb + ppb * ppb;
+ 	/* Does block tree limit file size? */
+-	if (res < upper_limit)
++	if (res + meta_blocks <= upper_limit)
+ 		goto check_lfs;
+ 
+ 	res = upper_limit;
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index ae1f0c57f54d2..c9a8c7d24f89c 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1768,19 +1768,20 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ 	void *inline_pos;
+ 	unsigned int offset;
+ 	struct ext4_dir_entry_2 *de;
+-	bool ret = true;
++	bool ret = false;
+ 
+ 	err = ext4_get_inode_loc(dir, &iloc);
+ 	if (err) {
+ 		EXT4_ERROR_INODE_ERR(dir, -err,
+ 				     "error %d getting inode %lu block",
+ 				     err, dir->i_ino);
+-		return true;
++		return false;
+ 	}
+ 
+ 	down_read(&EXT4_I(dir)->xattr_sem);
+ 	if (!ext4_has_inline_data(dir)) {
+ 		*has_inline_data = 0;
++		ret = true;
+ 		goto out;
+ 	}
+ 
+@@ -1789,7 +1790,6 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ 		ext4_warning(dir->i_sb,
+ 			     "bad inline directory (dir #%lu) - no `..'",
+ 			     dir->i_ino);
+-		ret = true;
+ 		goto out;
+ 	}
+ 
+@@ -1808,16 +1808,15 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ 				     dir->i_ino, le32_to_cpu(de->inode),
+ 				     le16_to_cpu(de->rec_len), de->name_len,
+ 				     inline_size);
+-			ret = true;
+ 			goto out;
+ 		}
+ 		if (le32_to_cpu(de->inode)) {
+-			ret = false;
+ 			goto out;
+ 		}
+ 		offset += ext4_rec_len_from_disk(de->rec_len, inline_size);
+ 	}
+ 
++	ret = true;
+ out:
+ 	up_read(&EXT4_I(dir)->xattr_sem);
+ 	brelse(iloc.bh);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index d59474a541897..96546df39bcf9 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -2023,6 +2023,15 @@ static int ext4_writepage(struct page *page,
+ 	else
+ 		len = PAGE_SIZE;
+ 
++	/* Should never happen but for bugs in other kernel subsystems */
++	if (!page_has_buffers(page)) {
++		ext4_warning_inode(inode,
++		   "page %lu does not have buffers attached", page->index);
++		ClearPageDirty(page);
++		unlock_page(page);
++		return 0;
++	}
++
+ 	page_bufs = page_buffers(page);
+ 	/*
+ 	 * We cannot do block allocation or other extent handling in this
+@@ -2626,6 +2635,22 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
+ 			wait_on_page_writeback(page);
+ 			BUG_ON(PageWriteback(page));
+ 
++			/*
++			 * Should never happen but for buggy code in
++			 * other subsystems that call
++			 * set_page_dirty() without properly warning
++			 * the file system first.  See [1] for more
++			 * information.
++			 *
++			 * [1] https://lore.kernel.org/linux-mm/20180103100430.GE4911@quack2.suse.cz
++			 */
++			if (!page_has_buffers(page)) {
++				ext4_warning_inode(mpd->inode, "page %lu does not have buffers attached", page->index);
++				ClearPageDirty(page);
++				unlock_page(page);
++				continue;
++			}
++
+ 			if (mpd->map.m_len == 0)
+ 				mpd->first_page = page->index;
+ 			mpd->next_page = page->index + 1;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 110c25824a67f..15223b5a3af97 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -3320,69 +3320,95 @@ void ext4_mb_mark_bb(struct super_block *sb, ext4_fsblk_t block,
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	ext4_group_t group;
+ 	ext4_grpblk_t blkoff;
+-	int i, clen, err;
++	int i, err;
+ 	int already;
++	unsigned int clen, clen_changed, thisgrp_len;
+ 
+-	clen = EXT4_B2C(sbi, len);
++	while (len > 0) {
++		ext4_get_group_no_and_offset(sb, block, &group, &blkoff);
+ 
+-	ext4_get_group_no_and_offset(sb, block, &group, &blkoff);
+-	bitmap_bh = ext4_read_block_bitmap(sb, group);
+-	if (IS_ERR(bitmap_bh)) {
+-		err = PTR_ERR(bitmap_bh);
+-		bitmap_bh = NULL;
+-		goto out_err;
+-	}
++		/*
++		 * Check to see if we are freeing blocks across a group
++		 * boundary.
++		 * In case of flex_bg, this can happen that (block, len) may
++		 * span across more than one group. In that case we need to
++		 * get the corresponding group metadata to work with.
++		 * For this we have goto again loop.
++		 */
++		thisgrp_len = min_t(unsigned int, (unsigned int)len,
++			EXT4_BLOCKS_PER_GROUP(sb) - EXT4_C2B(sbi, blkoff));
++		clen = EXT4_NUM_B2C(sbi, thisgrp_len);
+ 
+-	err = -EIO;
+-	gdp = ext4_get_group_desc(sb, group, &gdp_bh);
+-	if (!gdp)
+-		goto out_err;
++		bitmap_bh = ext4_read_block_bitmap(sb, group);
++		if (IS_ERR(bitmap_bh)) {
++			err = PTR_ERR(bitmap_bh);
++			bitmap_bh = NULL;
++			break;
++		}
+ 
+-	ext4_lock_group(sb, group);
+-	already = 0;
+-	for (i = 0; i < clen; i++)
+-		if (!mb_test_bit(blkoff + i, bitmap_bh->b_data) == !state)
+-			already++;
++		err = -EIO;
++		gdp = ext4_get_group_desc(sb, group, &gdp_bh);
++		if (!gdp)
++			break;
+ 
+-	if (state)
+-		ext4_set_bits(bitmap_bh->b_data, blkoff, clen);
+-	else
+-		mb_test_and_clear_bits(bitmap_bh->b_data, blkoff, clen);
+-	if (ext4_has_group_desc_csum(sb) &&
+-	    (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) {
+-		gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);
+-		ext4_free_group_clusters_set(sb, gdp,
+-					     ext4_free_clusters_after_init(sb,
+-						group, gdp));
+-	}
+-	if (state)
+-		clen = ext4_free_group_clusters(sb, gdp) - clen + already;
+-	else
+-		clen = ext4_free_group_clusters(sb, gdp) + clen - already;
++		ext4_lock_group(sb, group);
++		already = 0;
++		for (i = 0; i < clen; i++)
++			if (!mb_test_bit(blkoff + i, bitmap_bh->b_data) ==
++					 !state)
++				already++;
++
++		clen_changed = clen - already;
++		if (state)
++			ext4_set_bits(bitmap_bh->b_data, blkoff, clen);
++		else
++			mb_test_and_clear_bits(bitmap_bh->b_data, blkoff, clen);
++		if (ext4_has_group_desc_csum(sb) &&
++		    (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) {
++			gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);
++			ext4_free_group_clusters_set(sb, gdp,
++			     ext4_free_clusters_after_init(sb, group, gdp));
++		}
++		if (state)
++			clen = ext4_free_group_clusters(sb, gdp) - clen_changed;
++		else
++			clen = ext4_free_group_clusters(sb, gdp) + clen_changed;
+ 
+-	ext4_free_group_clusters_set(sb, gdp, clen);
+-	ext4_block_bitmap_csum_set(sb, group, gdp, bitmap_bh);
+-	ext4_group_desc_csum_set(sb, group, gdp);
++		ext4_free_group_clusters_set(sb, gdp, clen);
++		ext4_block_bitmap_csum_set(sb, group, gdp, bitmap_bh);
++		ext4_group_desc_csum_set(sb, group, gdp);
+ 
+-	ext4_unlock_group(sb, group);
++		ext4_unlock_group(sb, group);
+ 
+-	if (sbi->s_log_groups_per_flex) {
+-		ext4_group_t flex_group = ext4_flex_group(sbi, group);
++		if (sbi->s_log_groups_per_flex) {
++			ext4_group_t flex_group = ext4_flex_group(sbi, group);
++			struct flex_groups *fg = sbi_array_rcu_deref(sbi,
++						   s_flex_groups, flex_group);
+ 
+-		atomic64_sub(len,
+-			     &sbi_array_rcu_deref(sbi, s_flex_groups,
+-						  flex_group)->free_clusters);
++			if (state)
++				atomic64_sub(clen_changed, &fg->free_clusters);
++			else
++				atomic64_add(clen_changed, &fg->free_clusters);
++
++		}
++
++		err = ext4_handle_dirty_metadata(NULL, NULL, bitmap_bh);
++		if (err)
++			break;
++		sync_dirty_buffer(bitmap_bh);
++		err = ext4_handle_dirty_metadata(NULL, NULL, gdp_bh);
++		sync_dirty_buffer(gdp_bh);
++		if (err)
++			break;
++
++		block += thisgrp_len;
++		len -= thisgrp_len;
++		brelse(bitmap_bh);
++		BUG_ON(len < 0);
+ 	}
+ 
+-	err = ext4_handle_dirty_metadata(NULL, NULL, bitmap_bh);
+ 	if (err)
+-		goto out_err;
+-	sync_dirty_buffer(bitmap_bh);
+-	err = ext4_handle_dirty_metadata(NULL, NULL, gdp_bh);
+-	sync_dirty_buffer(gdp_bh);
+-
+-out_err:
+-	brelse(bitmap_bh);
++		brelse(bitmap_bh);
+ }
+ 
+ /*
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index f71de6c1ecf40..a622e186b7ee1 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2868,14 +2868,14 @@ bool ext4_empty_dir(struct inode *inode)
+ 	sb = inode->i_sb;
+ 	if (inode->i_size < EXT4_DIR_REC_LEN(1) + EXT4_DIR_REC_LEN(2)) {
+ 		EXT4_ERROR_INODE(inode, "invalid size");
+-		return true;
++		return false;
+ 	}
+ 	/* The first directory block must not be a hole,
+ 	 * so treat it as DIRENT_HTREE
+ 	 */
+ 	bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
+ 	if (IS_ERR(bh))
+-		return true;
++		return false;
+ 
+ 	de = (struct ext4_dir_entry_2 *) bh->b_data;
+ 	if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data, bh->b_size,
+@@ -2883,7 +2883,7 @@ bool ext4_empty_dir(struct inode *inode)
+ 	    le32_to_cpu(de->inode) != inode->i_ino || strcmp(".", de->name)) {
+ 		ext4_warning_inode(inode, "directory missing '.'");
+ 		brelse(bh);
+-		return true;
++		return false;
+ 	}
+ 	offset = ext4_rec_len_from_disk(de->rec_len, sb->s_blocksize);
+ 	de = ext4_next_entry(de, sb->s_blocksize);
+@@ -2892,7 +2892,7 @@ bool ext4_empty_dir(struct inode *inode)
+ 	    le32_to_cpu(de->inode) == 0 || strcmp("..", de->name)) {
+ 		ext4_warning_inode(inode, "directory missing '..'");
+ 		brelse(bh);
+-		return true;
++		return false;
+ 	}
+ 	offset += ext4_rec_len_from_disk(de->rec_len, sb->s_blocksize);
+ 	while (offset < inode->i_size) {
+@@ -2906,7 +2906,7 @@ bool ext4_empty_dir(struct inode *inode)
+ 				continue;
+ 			}
+ 			if (IS_ERR(bh))
+-				return true;
++				return false;
+ 		}
+ 		de = (struct ext4_dir_entry_2 *) (bh->b_data +
+ 					(offset & (sb->s_blocksize - 1)));
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 9bcd77db980df..77f30320f8628 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -851,6 +851,7 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+ 	struct page *cp_page_1 = NULL, *cp_page_2 = NULL;
+ 	struct f2fs_checkpoint *cp_block = NULL;
+ 	unsigned long long cur_version = 0, pre_version = 0;
++	unsigned int cp_blocks;
+ 	int err;
+ 
+ 	err = get_checkpoint_version(sbi, cp_addr, &cp_block,
+@@ -858,15 +859,16 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+ 	if (err)
+ 		return NULL;
+ 
+-	if (le32_to_cpu(cp_block->cp_pack_total_block_count) >
+-					sbi->blocks_per_seg) {
++	cp_blocks = le32_to_cpu(cp_block->cp_pack_total_block_count);
++
++	if (cp_blocks > sbi->blocks_per_seg || cp_blocks <= F2FS_CP_PACKS) {
+ 		f2fs_warn(sbi, "invalid cp_pack_total_block_count:%u",
+ 			  le32_to_cpu(cp_block->cp_pack_total_block_count));
+ 		goto invalid_cp;
+ 	}
+ 	pre_version = *version;
+ 
+-	cp_addr += le32_to_cpu(cp_block->cp_pack_total_block_count) - 1;
++	cp_addr += cp_blocks - 1;
+ 	err = get_checkpoint_version(sbi, cp_addr, &cp_block,
+ 					&cp_page_2, version);
+ 	if (err)
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index ec542e8c46cc9..1541da5ace85e 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -286,10 +286,9 @@ static int lz4_decompress_pages(struct decompress_io_ctx *dic)
+ 	}
+ 
+ 	if (ret != PAGE_SIZE << dic->log_cluster_size) {
+-		printk_ratelimited("%sF2FS-fs (%s): lz4 invalid rlen:%zu, "
++		printk_ratelimited("%sF2FS-fs (%s): lz4 invalid ret:%d, "
+ 					"expected:%lu\n", KERN_ERR,
+-					F2FS_I_SB(dic->inode)->sb->s_id,
+-					dic->rlen,
++					F2FS_I_SB(dic->inode)->sb->s_id, ret,
+ 					PAGE_SIZE << dic->log_cluster_size);
+ 		return -EIO;
+ 	}
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 1b11a42847c48..b2016fd3a7ca3 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -3264,8 +3264,12 @@ static int __f2fs_write_data_pages(struct address_space *mapping,
+ 	/* to avoid spliting IOs due to mixed WB_SYNC_ALL and WB_SYNC_NONE */
+ 	if (wbc->sync_mode == WB_SYNC_ALL)
+ 		atomic_inc(&sbi->wb_sync_req[DATA]);
+-	else if (atomic_read(&sbi->wb_sync_req[DATA]))
++	else if (atomic_read(&sbi->wb_sync_req[DATA])) {
++		/* to avoid potential deadlock */
++		if (current->plug)
++			blk_finish_plug(current->plug);
+ 		goto skip_write;
++	}
+ 
+ 	if (__should_serialize_io(inode, wbc)) {
+ 		mutex_lock(&sbi->writepages);
+@@ -3457,6 +3461,9 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
+ 
+ 		*fsdata = NULL;
+ 
++		if (len == PAGE_SIZE && !(f2fs_is_atomic_file(inode)))
++			goto repeat;
++
+ 		ret = f2fs_prepare_compress_overwrite(inode, pagep,
+ 							index, fsdata);
+ 		if (ret < 0) {
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 1fbaab1f7aba8..792f9059d897c 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -2035,7 +2035,10 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+ 
+ 	inode_lock(inode);
+ 
+-	f2fs_disable_compressed_file(inode);
++	if (!f2fs_disable_compressed_file(inode)) {
++		ret = -EINVAL;
++		goto out;
++	}
+ 
+ 	if (f2fs_is_atomic_file(inode)) {
+ 		if (is_inode_flag_set(inode, FI_ATOMIC_REVOKE_REQUEST))
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 6b240b71d2e83..24e93fb254c5f 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -998,8 +998,10 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 	}
+ 
+-	if (f2fs_check_nid_range(sbi, dni->ino))
++	if (f2fs_check_nid_range(sbi, dni->ino)) {
++		f2fs_put_page(node_page, 1);
+ 		return false;
++	}
+ 
+ 	*nofs = ofs_of_node(node_page);
+ 	source_blkaddr = data_blkaddr(NULL, node_page, ofs_in_node);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index a35fcf43ad5a3..98483f50e5e92 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -848,6 +848,7 @@ void f2fs_handle_failed_inode(struct inode *inode)
+ 	err = f2fs_get_node_info(sbi, inode->i_ino, &ni);
+ 	if (err) {
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
++		set_inode_flag(inode, FI_FREE_NID);
+ 		f2fs_warn(sbi, "May loss orphan inode, run fsck to fix.");
+ 		goto out;
+ 	}
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 7e625806bd4a2..5fa10d0b00683 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2055,8 +2055,12 @@ static int f2fs_write_node_pages(struct address_space *mapping,
+ 
+ 	if (wbc->sync_mode == WB_SYNC_ALL)
+ 		atomic_inc(&sbi->wb_sync_req[NODE]);
+-	else if (atomic_read(&sbi->wb_sync_req[NODE]))
++	else if (atomic_read(&sbi->wb_sync_req[NODE])) {
++		/* to avoid potential deadlock */
++		if (current->plug)
++			blk_finish_plug(current->plug);
+ 		goto skip_write;
++	}
+ 
+ 	trace_f2fs_writepages(mapping->host, wbc, NODE);
+ 
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index d04b449978aa8..49f5cb532738d 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -4650,6 +4650,13 @@ static int sanity_check_curseg(struct f2fs_sb_info *sbi)
+ 
+ 		sanity_check_seg_type(sbi, curseg->seg_type);
+ 
++		if (curseg->alloc_type != LFS && curseg->alloc_type != SSR) {
++			f2fs_err(sbi,
++				 "Current segment has invalid alloc_type:%d",
++				 curseg->alloc_type);
++			return -EFSCORRUPTED;
++		}
++
+ 		if (f2fs_test_bit(blkofs, se->cur_valid_map))
+ 			goto out;
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index af98abb17c272..78ee14f6e939e 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2278,7 +2278,7 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ 	struct quota_info *dqopt = sb_dqopt(sb);
+ 	int cnt;
+-	int ret;
++	int ret = 0;
+ 
+ 	/*
+ 	 * Now when everything is written we can discard the pagecache so
+@@ -2289,8 +2289,8 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ 		if (type != -1 && cnt != type)
+ 			continue;
+ 
+-		if (!sb_has_quota_active(sb, type))
+-			return 0;
++		if (!sb_has_quota_active(sb, cnt))
++			continue;
+ 
+ 		inode_lock(dqopt->files[cnt]);
+ 
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 7ffd4bb398b0c..a7e7d68256e00 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -386,7 +386,7 @@ out:
+ 		} else if (t == GC_IDLE_AT) {
+ 			if (!sbi->am.atgc_enabled)
+ 				return -EINVAL;
+-			sbi->gc_mode = GC_AT;
++			sbi->gc_mode = GC_IDLE_AT;
+ 		} else {
+ 			sbi->gc_mode = GC_NORMAL;
+ 		}
+diff --git a/fs/file.c b/fs/file.c
+index 79a76d04c7c33..8431dfde036cc 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -85,6 +85,21 @@ static void copy_fdtable(struct fdtable *nfdt, struct fdtable *ofdt)
+ 	copy_fd_bitmaps(nfdt, ofdt, ofdt->max_fds);
+ }
+ 
++/*
++ * Note how the fdtable bitmap allocations very much have to be a multiple of
++ * BITS_PER_LONG. This is not only because we walk those things in chunks of
++ * 'unsigned long' in some places, but simply because that is how the Linux
++ * kernel bitmaps are defined to work: they are not "bits in an array of bytes",
++ * they are very much "bits in an array of unsigned long".
++ *
++ * The ALIGN(nr, BITS_PER_LONG) here is for clarity: since we just multiplied
++ * by that "1024/sizeof(ptr)" before, we already know there are sufficient
++ * clear low bits. Clang seems to realize that, gcc ends up being confused.
++ *
++ * On a 128-bit machine, the ALIGN() would actually matter. In the meantime,
++ * let's consider it documentation (and maybe a test-case for gcc to improve
++ * its code generation ;)
++ */
+ static struct fdtable * alloc_fdtable(unsigned int nr)
+ {
+ 	struct fdtable *fdt;
+@@ -100,6 +115,7 @@ static struct fdtable * alloc_fdtable(unsigned int nr)
+ 	nr /= (1024 / sizeof(struct file *));
+ 	nr = roundup_pow_of_two(nr + 1);
+ 	nr *= (1024 / sizeof(struct file *));
++	nr = ALIGN(nr, BITS_PER_LONG);
+ 	/*
+ 	 * Note that this can drive nr *below* what we had passed if sysctl_nr_open
+ 	 * had been set lower between the check in expand_files() and here.  Deal
+@@ -267,6 +283,19 @@ static unsigned int count_open_files(struct fdtable *fdt)
+ 	return i;
+ }
+ 
++/*
++ * Note that a sane fdtable size always has to be a multiple of
++ * BITS_PER_LONG, since we have bitmaps that are sized by this.
++ *
++ * 'max_fds' will normally already be properly aligned, but it
++ * turns out that in the close_range() -> __close_range() ->
++ * unshare_fd() -> dup_fd() -> sane_fdtable_size() we can end
++ * up having a 'max_fds' value that isn't already aligned.
++ *
++ * Rather than make close_range() have to worry about this,
++ * just make that BITS_PER_LONG alignment be part of a sane
++ * fdtable size. Becuase that's really what it is.
++ */
+ static unsigned int sane_fdtable_size(struct fdtable *fdt, unsigned int max_fds)
+ {
+ 	unsigned int count;
+@@ -274,7 +303,7 @@ static unsigned int sane_fdtable_size(struct fdtable *fdt, unsigned int max_fds)
+ 	count = count_open_files(fdt);
+ 	if (max_fds < NR_OPEN_DEFAULT)
+ 		max_fds = NR_OPEN_DEFAULT;
+-	return min(count, max_fds);
++	return ALIGN(min(count, max_fds), BITS_PER_LONG);
+ }
+ 
+ /*
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index 5e8eef9990e32..eb775e93de97c 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -1389,7 +1389,8 @@ int gfs2_fitrim(struct file *filp, void __user *argp)
+ 
+ 	start = r.start >> bs_shift;
+ 	end = start + (r.len >> bs_shift);
+-	minlen = max_t(u64, r.minlen,
++	minlen = max_t(u64, r.minlen, sdp->sd_sb.sb_bsize);
++	minlen = max_t(u64, minlen,
+ 		       q->limits.discard_granularity) >> bs_shift;
+ 
+ 	if (end <= start || minlen > sdp->sd_max_rg_data)
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index fd188b9721511..5959b0359524c 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -3220,13 +3220,15 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
+ 				ret = nr;
+ 			break;
+ 		}
++		ret += nr;
+ 		if (!iov_iter_is_bvec(iter)) {
+ 			iov_iter_advance(iter, nr);
+ 		} else {
+-			req->rw.len -= nr;
+ 			req->rw.addr += nr;
++			req->rw.len -= nr;
++			if (!req->rw.len)
++				break;
+ 		}
+-		ret += nr;
+ 		if (nr != iovec.iov_len)
+ 			break;
+ 	}
+@@ -7348,6 +7350,7 @@ static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
+ 			fput(fpl->fp[i]);
+ 	} else {
+ 		kfree_skb(skb);
++		free_uid(fpl->user);
+ 		kfree(fpl);
+ 	}
+ 
+diff --git a/fs/jffs2/build.c b/fs/jffs2/build.c
+index b288c8ae1236b..837cd55fd4c5e 100644
+--- a/fs/jffs2/build.c
++++ b/fs/jffs2/build.c
+@@ -415,13 +415,15 @@ int jffs2_do_mount_fs(struct jffs2_sb_info *c)
+ 		jffs2_free_ino_caches(c);
+ 		jffs2_free_raw_node_refs(c);
+ 		ret = -EIO;
+-		goto out_free;
++		goto out_sum_exit;
+ 	}
+ 
+ 	jffs2_calc_trigger_levels(c);
+ 
+ 	return 0;
+ 
++ out_sum_exit:
++	jffs2_sum_exit(c);
+  out_free:
+ 	kvfree(c->blocks);
+ 
+diff --git a/fs/jffs2/fs.c b/fs/jffs2/fs.c
+index 78858f6e95839..7170de78cd260 100644
+--- a/fs/jffs2/fs.c
++++ b/fs/jffs2/fs.c
+@@ -602,8 +602,8 @@ out_root:
+ 	jffs2_free_ino_caches(c);
+ 	jffs2_free_raw_node_refs(c);
+ 	kvfree(c->blocks);
+- out_inohash:
+ 	jffs2_clear_xattr_subsystem(c);
++ out_inohash:
+ 	kfree(c->inocache_list);
+  out_wbuf:
+ 	jffs2_flash_cleanup(c);
+diff --git a/fs/jffs2/scan.c b/fs/jffs2/scan.c
+index b676056826beb..29671e33a1714 100644
+--- a/fs/jffs2/scan.c
++++ b/fs/jffs2/scan.c
+@@ -136,7 +136,7 @@ int jffs2_scan_medium(struct jffs2_sb_info *c)
+ 		if (!s) {
+ 			JFFS2_WARNING("Can't allocate memory for summary\n");
+ 			ret = -ENOMEM;
+-			goto out;
++			goto out_buf;
+ 		}
+ 	}
+ 
+@@ -275,13 +275,15 @@ int jffs2_scan_medium(struct jffs2_sb_info *c)
+ 	}
+ 	ret = 0;
+  out:
++	jffs2_sum_reset_collected(s);
++	kfree(s);
++ out_buf:
+ 	if (buf_size)
+ 		kfree(flashbuf);
+ #ifndef __ECOS
+ 	else
+ 		mtd_unpoint(c->mtd, 0, c->mtd->size);
+ #endif
+-	kfree(s);
+ 	return ret;
+ }
+ 
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index aedad59f8a458..e58ae29a223d7 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -148,6 +148,7 @@ static const s8 budtab[256] = {
+  *	0	- success
+  *	-ENOMEM	- insufficient memory
+  *	-EIO	- i/o error
++ *	-EINVAL - wrong bmap data
+  */
+ int dbMount(struct inode *ipbmap)
+ {
+@@ -179,6 +180,12 @@ int dbMount(struct inode *ipbmap)
+ 	bmp->db_nfree = le64_to_cpu(dbmp_le->dn_nfree);
+ 	bmp->db_l2nbperpage = le32_to_cpu(dbmp_le->dn_l2nbperpage);
+ 	bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
++	if (!bmp->db_numag) {
++		release_metapage(mp);
++		kfree(bmp);
++		return -EINVAL;
++	}
++
+ 	bmp->db_maxlevel = le32_to_cpu(dbmp_le->dn_maxlevel);
+ 	bmp->db_maxag = le32_to_cpu(dbmp_le->dn_maxag);
+ 	bmp->db_agpref = le32_to_cpu(dbmp_le->dn_agpref);
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index b44219ce60b86..a5209643ac36c 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -353,12 +353,11 @@ __be32 nfs4_callback_devicenotify(void *argp, void *resp,
+ 				  struct cb_process_state *cps)
+ {
+ 	struct cb_devicenotifyargs *args = argp;
++	const struct pnfs_layoutdriver_type *ld = NULL;
+ 	uint32_t i;
+ 	__be32 res = 0;
+-	struct nfs_client *clp = cps->clp;
+-	struct nfs_server *server = NULL;
+ 
+-	if (!clp) {
++	if (!cps->clp) {
+ 		res = cpu_to_be32(NFS4ERR_OP_NOT_IN_SESSION);
+ 		goto out;
+ 	}
+@@ -366,23 +365,15 @@ __be32 nfs4_callback_devicenotify(void *argp, void *resp,
+ 	for (i = 0; i < args->ndevs; i++) {
+ 		struct cb_devicenotifyitem *dev = &args->devs[i];
+ 
+-		if (!server ||
+-		    server->pnfs_curr_ld->id != dev->cbd_layout_type) {
+-			rcu_read_lock();
+-			list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link)
+-				if (server->pnfs_curr_ld &&
+-				    server->pnfs_curr_ld->id == dev->cbd_layout_type) {
+-					rcu_read_unlock();
+-					goto found;
+-				}
+-			rcu_read_unlock();
+-			continue;
++		if (!ld || ld->id != dev->cbd_layout_type) {
++			pnfs_put_layoutdriver(ld);
++			ld = pnfs_find_layoutdriver(dev->cbd_layout_type);
++			if (!ld)
++				continue;
+ 		}
+-
+-	found:
+-		nfs4_delete_deviceid(server->pnfs_curr_ld, clp, &dev->cbd_dev_id);
++		nfs4_delete_deviceid(ld, cps->clp, &dev->cbd_dev_id);
+ 	}
+-
++	pnfs_put_layoutdriver(ld);
+ out:
+ 	kfree(args->devs);
+ 	return res;
+diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
+index 1725079a05276..ca8a4aa351dc9 100644
+--- a/fs/nfs/callback_xdr.c
++++ b/fs/nfs/callback_xdr.c
+@@ -272,10 +272,6 @@ __be32 decode_devicenotify_args(struct svc_rqst *rqstp,
+ 	n = ntohl(*p++);
+ 	if (n == 0)
+ 		goto out;
+-	if (n > ULONG_MAX / sizeof(*args->devs)) {
+-		status = htonl(NFS4ERR_BADXDR);
+-		goto out;
+-	}
+ 
+ 	args->devs = kmalloc_array(n, sizeof(*args->devs), GFP_KERNEL);
+ 	if (!args->devs) {
+diff --git a/fs/nfs/nfs2xdr.c b/fs/nfs/nfs2xdr.c
+index f6676af37d5db..5e6453e9b3079 100644
+--- a/fs/nfs/nfs2xdr.c
++++ b/fs/nfs/nfs2xdr.c
+@@ -948,7 +948,7 @@ int nfs2_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 
+ 	error = decode_filename_inline(xdr, &entry->name, &entry->len);
+ 	if (unlikely(error))
+-		return error;
++		return -EAGAIN;
+ 
+ 	/*
+ 	 * The type (size and byte order) of nfscookie isn't defined in
+diff --git a/fs/nfs/nfs3xdr.c b/fs/nfs/nfs3xdr.c
+index dff6b52d26a85..b5a9379b14504 100644
+--- a/fs/nfs/nfs3xdr.c
++++ b/fs/nfs/nfs3xdr.c
+@@ -1964,7 +1964,6 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 		       bool plus)
+ {
+ 	struct user_namespace *userns = rpc_userns(entry->server->client);
+-	struct nfs_entry old = *entry;
+ 	__be32 *p;
+ 	int error;
+ 	u64 new_cookie;
+@@ -1984,15 +1983,15 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 
+ 	error = decode_fileid3(xdr, &entry->ino);
+ 	if (unlikely(error))
+-		return error;
++		return -EAGAIN;
+ 
+ 	error = decode_inline_filename3(xdr, &entry->name, &entry->len);
+ 	if (unlikely(error))
+-		return error;
++		return -EAGAIN;
+ 
+ 	error = decode_cookie3(xdr, &new_cookie);
+ 	if (unlikely(error))
+-		return error;
++		return -EAGAIN;
+ 
+ 	entry->d_type = DT_UNKNOWN;
+ 
+@@ -2000,7 +1999,7 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 		entry->fattr->valid = 0;
+ 		error = decode_post_op_attr(xdr, entry->fattr, userns);
+ 		if (unlikely(error))
+-			return error;
++			return -EAGAIN;
+ 		if (entry->fattr->valid & NFS_ATTR_FATTR_V3)
+ 			entry->d_type = nfs_umode_to_dtype(entry->fattr->mode);
+ 
+@@ -2015,11 +2014,8 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 			return -EAGAIN;
+ 		if (*p != xdr_zero) {
+ 			error = decode_nfs_fh3(xdr, entry->fh);
+-			if (unlikely(error)) {
+-				if (error == -E2BIG)
+-					goto out_truncated;
+-				return error;
+-			}
++			if (unlikely(error))
++				return -EAGAIN;
+ 		} else
+ 			zero_nfs_fh3(entry->fh);
+ 	}
+@@ -2028,11 +2024,6 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 	entry->cookie = new_cookie;
+ 
+ 	return 0;
+-
+-out_truncated:
+-	dprintk("NFS: directory entry contains invalid file handle\n");
+-	*entry = old;
+-	return -EAGAIN;
+ }
+ 
+ /*
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index d222a980164b7..77199d3560429 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -8205,6 +8205,7 @@ nfs4_bind_one_conn_to_session_done(struct rpc_task *task, void *calldata)
+ 	case -NFS4ERR_DEADSESSION:
+ 		nfs4_schedule_session_recovery(clp->cl_session,
+ 				task->tk_status);
++		return;
+ 	}
+ 	if (args->dir == NFS4_CDFC4_FORE_OR_BOTH &&
+ 			res->dir != NFS4_CDFS4_BOTH) {
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 5370e082aded5..b3b9eff5d5727 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -92,6 +92,17 @@ find_pnfs_driver(u32 id)
+ 	return local;
+ }
+ 
++const struct pnfs_layoutdriver_type *pnfs_find_layoutdriver(u32 id)
++{
++	return find_pnfs_driver(id);
++}
++
++void pnfs_put_layoutdriver(const struct pnfs_layoutdriver_type *ld)
++{
++	if (ld)
++		module_put(ld->owner);
++}
++
+ void
+ unset_pnfs_layoutdriver(struct nfs_server *nfss)
+ {
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index 0212fe32e63aa..11d9ed9addc06 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -236,6 +236,8 @@ struct pnfs_devicelist {
+ 
+ extern int pnfs_register_layoutdriver(struct pnfs_layoutdriver_type *);
+ extern void pnfs_unregister_layoutdriver(struct pnfs_layoutdriver_type *);
++extern const struct pnfs_layoutdriver_type *pnfs_find_layoutdriver(u32 id);
++extern void pnfs_put_layoutdriver(const struct pnfs_layoutdriver_type *ld);
+ 
+ /* nfs4proc.c */
+ extern size_t max_response_pages(struct nfs_server *server);
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index bde4c362841f0..cc926e69ee9ba 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -314,7 +314,10 @@ static void nfs_mapping_set_error(struct page *page, int error)
+ 	struct address_space *mapping = page_file_mapping(page);
+ 
+ 	SetPageError(page);
+-	mapping_set_error(mapping, error);
++	filemap_set_wb_err(mapping, error);
++	if (mapping->host)
++		errseq_set(&mapping->host->i_sb->s_wb_err,
++			   error == -ENOSPC ? -ENOSPC : -EIO);
+ 	nfs_set_pageerror(mapping);
+ }
+ 
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index e5aad1c10ea32..acd0898e3866d 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -641,7 +641,7 @@ nfsd_file_cache_init(void)
+ 	if (!nfsd_filecache_wq)
+ 		goto out;
+ 
+-	nfsd_file_hashtbl = kcalloc(NFSD_FILE_HASH_SIZE,
++	nfsd_file_hashtbl = kvcalloc(NFSD_FILE_HASH_SIZE,
+ 				sizeof(*nfsd_file_hashtbl), GFP_KERNEL);
+ 	if (!nfsd_file_hashtbl) {
+ 		pr_err("nfsd: unable to allocate nfsd_file_hashtbl\n");
+@@ -708,7 +708,7 @@ out_err:
+ 	nfsd_file_slab = NULL;
+ 	kmem_cache_destroy(nfsd_file_mark_slab);
+ 	nfsd_file_mark_slab = NULL;
+-	kfree(nfsd_file_hashtbl);
++	kvfree(nfsd_file_hashtbl);
+ 	nfsd_file_hashtbl = NULL;
+ 	destroy_workqueue(nfsd_filecache_wq);
+ 	nfsd_filecache_wq = NULL;
+@@ -854,7 +854,7 @@ nfsd_file_cache_shutdown(void)
+ 	fsnotify_wait_marks_destroyed();
+ 	kmem_cache_destroy(nfsd_file_mark_slab);
+ 	nfsd_file_mark_slab = NULL;
+-	kfree(nfsd_file_hashtbl);
++	kvfree(nfsd_file_hashtbl);
+ 	nfsd_file_hashtbl = NULL;
+ 	destroy_workqueue(nfsd_filecache_wq);
+ 	nfsd_filecache_wq = NULL;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index d01d7929753ef..84dd68091f422 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4607,6 +4607,14 @@ nfsd_break_deleg_cb(struct file_lock *fl)
+ 	return ret;
+ }
+ 
++/**
++ * nfsd_breaker_owns_lease - Check if lease conflict was resolved
++ * @fl: Lock state to check
++ *
++ * Return values:
++ *   %true: Lease conflict was resolved
++ *   %false: Lease conflict was not resolved.
++ */
+ static bool nfsd_breaker_owns_lease(struct file_lock *fl)
+ {
+ 	struct nfs4_delegation *dl = fl->fl_owner;
+@@ -4614,11 +4622,11 @@ static bool nfsd_breaker_owns_lease(struct file_lock *fl)
+ 	struct nfs4_client *clp;
+ 
+ 	if (!i_am_nfsd())
+-		return NULL;
++		return false;
+ 	rqst = kthread_data(current);
+ 	/* Note rq_prog == NFS_ACL_PROGRAM is also possible: */
+ 	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
+-		return NULL;
++		return false;
+ 	clp = *(rqst->rq_lease_breaker);
+ 	return dl->dl_stid.sc_client == clp;
+ }
+diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
+index 9c9de2b66e641..bbd01e8397f6e 100644
+--- a/fs/nfsd/nfsproc.c
++++ b/fs/nfsd/nfsproc.c
+@@ -223,7 +223,7 @@ nfsd_proc_write(struct svc_rqst *rqstp)
+ 	unsigned long cnt = argp->len;
+ 	unsigned int nvecs;
+ 
+-	dprintk("nfsd: WRITE    %s %d bytes at %d\n",
++	dprintk("nfsd: WRITE    %s %u bytes at %d\n",
+ 		SVCFH_fmt(&argp->fh),
+ 		argp->len, argp->offset);
+ 
+diff --git a/fs/nfsd/xdr.h b/fs/nfsd/xdr.h
+index 0ff336b0b25f9..b8cc6a4b2e0ec 100644
+--- a/fs/nfsd/xdr.h
++++ b/fs/nfsd/xdr.h
+@@ -33,7 +33,7 @@ struct nfsd_readargs {
+ struct nfsd_writeargs {
+ 	svc_fh			fh;
+ 	__u32			offset;
+-	int			len;
++	__u32			len;
+ 	struct kvec		first;
+ };
+ 
+diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c
+index ea18e4a2a691d..cf222c9225d6d 100644
+--- a/fs/ntfs/inode.c
++++ b/fs/ntfs/inode.c
+@@ -1881,6 +1881,10 @@ int ntfs_read_inode_mount(struct inode *vi)
+ 		}
+ 		/* Now allocate memory for the attribute list. */
+ 		ni->attr_list_size = (u32)ntfs_attr_size(a);
++		if (!ni->attr_list_size) {
++			ntfs_error(sb, "Attr_list_size is zero");
++			goto put_err_out;
++		}
+ 		ni->attr_list = ntfs_malloc_nofs(ni->attr_list_size);
+ 		if (!ni->attr_list) {
+ 			ntfs_error(sb, "Not enough memory to allocate buffer "
+diff --git a/fs/proc/bootconfig.c b/fs/proc/bootconfig.c
+index ad31ec4ad6270..d82dae133243b 100644
+--- a/fs/proc/bootconfig.c
++++ b/fs/proc/bootconfig.c
+@@ -32,6 +32,8 @@ static int __init copy_xbc_key_value_list(char *dst, size_t size)
+ 	int ret = 0;
+ 
+ 	key = kzalloc(XBC_KEYLEN_MAX, GFP_KERNEL);
++	if (!key)
++		return -ENOMEM;
+ 
+ 	xbc_for_each_key_value(leaf, val) {
+ 		ret = xbc_node_compose_key(leaf, key, XBC_KEYLEN_MAX);
+diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
+index b1ebf7b61732c..ce03c3dbb5c30 100644
+--- a/fs/pstore/platform.c
++++ b/fs/pstore/platform.c
+@@ -143,21 +143,22 @@ static void pstore_timer_kick(void)
+ 	mod_timer(&pstore_timer, jiffies + msecs_to_jiffies(pstore_update_ms));
+ }
+ 
+-/*
+- * Should pstore_dump() wait for a concurrent pstore_dump()? If
+- * not, the current pstore_dump() will report a failure to dump
+- * and return.
+- */
+-static bool pstore_cannot_wait(enum kmsg_dump_reason reason)
++static bool pstore_cannot_block_path(enum kmsg_dump_reason reason)
+ {
+-	/* In NMI path, pstore shouldn't block regardless of reason. */
++	/*
++	 * In case of NMI path, pstore shouldn't be blocked
++	 * regardless of reason.
++	 */
+ 	if (in_nmi())
+ 		return true;
+ 
+ 	switch (reason) {
+ 	/* In panic case, other cpus are stopped by smp_send_stop(). */
+ 	case KMSG_DUMP_PANIC:
+-	/* Emergency restart shouldn't be blocked. */
++	/*
++	 * Emergency restart shouldn't be blocked by spinning on
++	 * pstore_info::buf_lock.
++	 */
+ 	case KMSG_DUMP_EMERG:
+ 		return true;
+ 	default:
+@@ -388,21 +389,19 @@ static void pstore_dump(struct kmsg_dumper *dumper,
+ 	unsigned long	total = 0;
+ 	const char	*why;
+ 	unsigned int	part = 1;
++	unsigned long	flags = 0;
+ 	int		ret;
+ 
+ 	why = kmsg_dump_reason_str(reason);
+ 
+-	if (down_trylock(&psinfo->buf_lock)) {
+-		/* Failed to acquire lock: give up if we cannot wait. */
+-		if (pstore_cannot_wait(reason)) {
+-			pr_err("dump skipped in %s path: may corrupt error record\n",
+-				in_nmi() ? "NMI" : why);
+-			return;
+-		}
+-		if (down_interruptible(&psinfo->buf_lock)) {
+-			pr_err("could not grab semaphore?!\n");
++	if (pstore_cannot_block_path(reason)) {
++		if (!spin_trylock_irqsave(&psinfo->buf_lock, flags)) {
++			pr_err("dump skipped in %s path because of concurrent dump\n",
++					in_nmi() ? "NMI" : why);
+ 			return;
+ 		}
++	} else {
++		spin_lock_irqsave(&psinfo->buf_lock, flags);
+ 	}
+ 
+ 	oopscount++;
+@@ -464,8 +463,7 @@ static void pstore_dump(struct kmsg_dumper *dumper,
+ 		total += record.size;
+ 		part++;
+ 	}
+-
+-	up(&psinfo->buf_lock);
++	spin_unlock_irqrestore(&psinfo->buf_lock, flags);
+ }
+ 
+ static struct kmsg_dumper pstore_dumper = {
+@@ -591,7 +589,7 @@ int pstore_register(struct pstore_info *psi)
+ 		psi->write_user = pstore_write_user_compat;
+ 	psinfo = psi;
+ 	mutex_init(&psinfo->read_mutex);
+-	sema_init(&psinfo->buf_lock, 1);
++	spin_lock_init(&psinfo->buf_lock);
+ 
+ 	if (psi->flags & PSTORE_FLAGS_DMESG)
+ 		allocate_buf_for_compression();
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index ad90a3a64293e..5daffd46369dd 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -431,6 +431,8 @@ out_inode:
+ 	make_bad_inode(inode);
+ 	if (!instantiated)
+ 		iput(inode);
++	else if (whiteout)
++		iput(*whiteout);
+ out_budg:
+ 	ubifs_release_budget(c, &req);
+ 	if (!instantiated)
+@@ -1322,6 +1324,7 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 
+ 	if (flags & RENAME_WHITEOUT) {
+ 		union ubifs_dev_desc *dev = NULL;
++		struct ubifs_budget_req wht_req;
+ 
+ 		dev = kmalloc(sizeof(union ubifs_dev_desc), GFP_NOFS);
+ 		if (!dev) {
+@@ -1343,6 +1346,23 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		whiteout_ui->data = dev;
+ 		whiteout_ui->data_len = ubifs_encode_dev(dev, MKDEV(0, 0));
+ 		ubifs_assert(c, !whiteout_ui->dirty);
++
++		memset(&wht_req, 0, sizeof(struct ubifs_budget_req));
++		wht_req.dirtied_ino = 1;
++		wht_req.dirtied_ino_d = ALIGN(whiteout_ui->data_len, 8);
++		/*
++		 * To avoid deadlock between space budget (holds ui_mutex and
++		 * waits wb work) and writeback work(waits ui_mutex), do space
++		 * budget before ubifs inodes locked.
++		 */
++		err = ubifs_budget_space(c, &wht_req);
++		if (err) {
++			iput(whiteout);
++			goto out_release;
++		}
++
++		/* Add the old_dentry size to the old_dir size. */
++		old_sz -= CALC_DENT_SIZE(fname_len(&old_nm));
+ 	}
+ 
+ 	lock_4_inodes(old_dir, new_dir, new_inode, whiteout);
+@@ -1417,18 +1437,6 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	}
+ 
+ 	if (whiteout) {
+-		struct ubifs_budget_req wht_req = { .dirtied_ino = 1,
+-				.dirtied_ino_d = \
+-				ALIGN(ubifs_inode(whiteout)->data_len, 8) };
+-
+-		err = ubifs_budget_space(c, &wht_req);
+-		if (err) {
+-			kfree(whiteout_ui->data);
+-			whiteout_ui->data_len = 0;
+-			iput(whiteout);
+-			goto out_release;
+-		}
+-
+ 		inc_nlink(whiteout);
+ 		mark_inode_dirty(whiteout);
+ 
+diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
+index f4826b6da6828..354457e846cda 100644
+--- a/fs/ubifs/file.c
++++ b/fs/ubifs/file.c
+@@ -570,7 +570,7 @@ static int ubifs_write_end(struct file *file, struct address_space *mapping,
+ 	}
+ 
+ 	if (!PagePrivate(page)) {
+-		SetPagePrivate(page);
++		attach_page_private(page, (void *)1);
+ 		atomic_long_inc(&c->dirty_pg_cnt);
+ 		__set_page_dirty_nobuffers(page);
+ 	}
+@@ -947,7 +947,7 @@ static int do_writepage(struct page *page, int len)
+ 		release_existing_page_budget(c);
+ 
+ 	atomic_long_dec(&c->dirty_pg_cnt);
+-	ClearPagePrivate(page);
++	detach_page_private(page);
+ 	ClearPageChecked(page);
+ 
+ 	kunmap(page);
+@@ -1303,7 +1303,7 @@ static void ubifs_invalidatepage(struct page *page, unsigned int offset,
+ 		release_existing_page_budget(c);
+ 
+ 	atomic_long_dec(&c->dirty_pg_cnt);
+-	ClearPagePrivate(page);
++	detach_page_private(page);
+ 	ClearPageChecked(page);
+ }
+ 
+@@ -1470,8 +1470,8 @@ static int ubifs_migrate_page(struct address_space *mapping,
+ 		return rc;
+ 
+ 	if (PagePrivate(page)) {
+-		ClearPagePrivate(page);
+-		SetPagePrivate(newpage);
++		detach_page_private(page);
++		attach_page_private(newpage, (void *)1);
+ 	}
+ 
+ 	if (mode != MIGRATE_SYNC_NO_COPY)
+@@ -1495,7 +1495,7 @@ static int ubifs_releasepage(struct page *page, gfp_t unused_gfp_flags)
+ 		return 0;
+ 	ubifs_assert(c, PagePrivate(page));
+ 	ubifs_assert(c, 0);
+-	ClearPagePrivate(page);
++	detach_page_private(page);
+ 	ClearPageChecked(page);
+ 	return 1;
+ }
+@@ -1566,7 +1566,7 @@ static vm_fault_t ubifs_vm_page_mkwrite(struct vm_fault *vmf)
+ 	else {
+ 		if (!PageChecked(page))
+ 			ubifs_convert_page_budget(c);
+-		SetPagePrivate(page);
++		attach_page_private(page, (void *)1);
+ 		atomic_long_inc(&c->dirty_pg_cnt);
+ 		__set_page_dirty_nobuffers(page);
+ 	}
+diff --git a/fs/ubifs/io.c b/fs/ubifs/io.c
+index eae9cf5a57b05..89b671ad0f9aa 100644
+--- a/fs/ubifs/io.c
++++ b/fs/ubifs/io.c
+@@ -846,16 +846,42 @@ int ubifs_wbuf_write_nolock(struct ubifs_wbuf *wbuf, void *buf, int len)
+ 	 */
+ 	n = aligned_len >> c->max_write_shift;
+ 	if (n) {
+-		n <<= c->max_write_shift;
++		int m = n - 1;
++
+ 		dbg_io("write %d bytes to LEB %d:%d", n, wbuf->lnum,
+ 		       wbuf->offs);
+-		err = ubifs_leb_write(c, wbuf->lnum, buf + written,
+-				      wbuf->offs, n);
++
++		if (m) {
++			/* '(n-1)<<c->max_write_shift < len' is always true. */
++			m <<= c->max_write_shift;
++			err = ubifs_leb_write(c, wbuf->lnum, buf + written,
++					      wbuf->offs, m);
++			if (err)
++				goto out;
++			wbuf->offs += m;
++			aligned_len -= m;
++			len -= m;
++			written += m;
++		}
++
++		/*
++		 * The non-written len of buf may be less than 'n' because
++		 * parameter 'len' is not 8 bytes aligned, so here we read
++		 * min(len, n) bytes from buf.
++		 */
++		n = 1 << c->max_write_shift;
++		memcpy(wbuf->buf, buf + written, min(len, n));
++		if (n > len) {
++			ubifs_assert(c, n - len < 8);
++			ubifs_pad(c, wbuf->buf + len, n - len);
++		}
++
++		err = ubifs_leb_write(c, wbuf->lnum, wbuf->buf, wbuf->offs, n);
+ 		if (err)
+ 			goto out;
+ 		wbuf->offs += n;
+ 		aligned_len -= n;
+-		len -= n;
++		len -= min(len, n);
+ 		written += n;
+ 	}
+ 
+diff --git a/fs/ubifs/ioctl.c b/fs/ubifs/ioctl.c
+index 4363d85a3fd40..8db380a000325 100644
+--- a/fs/ubifs/ioctl.c
++++ b/fs/ubifs/ioctl.c
+@@ -107,7 +107,7 @@ static int setflags(struct inode *inode, int flags)
+ 	struct ubifs_inode *ui = ubifs_inode(inode);
+ 	struct ubifs_info *c = inode->i_sb->s_fs_info;
+ 	struct ubifs_budget_req req = { .dirtied_ino = 1,
+-					.dirtied_ino_d = ui->data_len };
++			.dirtied_ino_d = ALIGN(ui->data_len, 8) };
+ 
+ 	err = ubifs_budget_space(c, &req);
+ 	if (err)
+diff --git a/include/linux/amba/bus.h b/include/linux/amba/bus.h
+index 0bbfd647f5c6d..6cc93ab5b8096 100644
+--- a/include/linux/amba/bus.h
++++ b/include/linux/amba/bus.h
+@@ -76,7 +76,7 @@ struct amba_device {
+ struct amba_driver {
+ 	struct device_driver	drv;
+ 	int			(*probe)(struct amba_device *, const struct amba_id *);
+-	int			(*remove)(struct amba_device *);
++	void			(*remove)(struct amba_device *);
+ 	void			(*shutdown)(struct amba_device *);
+ 	const struct amba_id	*id_table;
+ };
+diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h
+index 0571701ab1c57..5a9786e6b5546 100644
+--- a/include/linux/binfmts.h
++++ b/include/linux/binfmts.h
+@@ -82,6 +82,9 @@ struct coredump_params {
+ 	unsigned long mm_flags;
+ 	loff_t written;
+ 	loff_t pos;
++	int vma_count;
++	size_t vma_data_size;
++	struct core_vma_metadata *vma_meta;
+ };
+ 
+ /*
+diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
+index c8fc9792ac776..0e6e84db06f67 100644
+--- a/include/linux/blk-cgroup.h
++++ b/include/linux/blk-cgroup.h
+@@ -24,6 +24,7 @@
+ #include <linux/atomic.h>
+ #include <linux/kthread.h>
+ #include <linux/fs.h>
++#include <linux/blk-mq.h>
+ 
+ /* percpu_counter batch for blkg_[rw]stats, per-cpu drift doesn't matter */
+ #define BLKG_STAT_CPU_BATCH	(INT_MAX / 2)
+@@ -599,6 +600,21 @@ static inline void blkcg_clear_delay(struct blkcg_gq *blkg)
+ 		atomic_dec(&blkg->blkcg->css.cgroup->congestion_count);
+ }
+ 
++/**
++ * blk_cgroup_mergeable - Determine whether to allow or disallow merges
++ * @rq: request to merge into
++ * @bio: bio to merge
++ *
++ * @bio and @rq should belong to the same cgroup and their issue_as_root should
++ * match. The latter is necessary as we don't want to throttle e.g. a metadata
++ * update because it happens to be next to a regular IO.
++ */
++static inline bool blk_cgroup_mergeable(struct request *rq, struct bio *bio)
++{
++	return rq->bio->bi_blkg == bio->bi_blkg &&
++		bio_issue_as_root_blkg(rq->bio) == bio_issue_as_root_blkg(bio);
++}
++
+ void blk_cgroup_bio_start(struct bio *bio);
+ void blkcg_add_delay(struct blkcg_gq *blkg, u64 now, u64 delta);
+ void blkcg_schedule_throttle(struct request_queue *q, bool use_memdelay);
+@@ -654,6 +670,7 @@ static inline void blkg_put(struct blkcg_gq *blkg) { }
+ static inline bool blkcg_punt_bio_submit(struct bio *bio) { return false; }
+ static inline void blkcg_bio_issue_init(struct bio *bio) { }
+ static inline void blk_cgroup_bio_start(struct bio *bio) { }
++static inline bool blk_cgroup_mergeable(struct request *rq, struct bio *bio) { return true; }
+ 
+ #define blk_queue_for_each_rl(rl, q)	\
+ 	for ((rl) = &(q)->root_rl; (rl); (rl) = NULL)
+diff --git a/include/linux/coredump.h b/include/linux/coredump.h
+index e58e8c2077828..5300d0cb70f7e 100644
+--- a/include/linux/coredump.h
++++ b/include/linux/coredump.h
+@@ -11,6 +11,8 @@ struct core_vma_metadata {
+ 	unsigned long start, end;
+ 	unsigned long flags;
+ 	unsigned long dump_size;
++	unsigned long pgoff;
++	struct file   *file;
+ };
+ 
+ /*
+@@ -24,9 +26,6 @@ extern int dump_align(struct coredump_params *cprm, int align);
+ extern void dump_truncate(struct coredump_params *cprm);
+ int dump_user_range(struct coredump_params *cprm, unsigned long start,
+ 		    unsigned long len);
+-int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+-		      struct core_vma_metadata **vma_meta,
+-		      size_t *vma_data_size_ptr);
+ #ifdef CONFIG_COREDUMP
+ extern void do_coredump(const kernel_siginfo_t *siginfo);
+ #else
+diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
+index a7d70cdee25e3..a9361178c5dbb 100644
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -61,6 +61,14 @@
+  */
+ #define DMA_ATTR_PRIVILEGED		(1UL << 9)
+ 
++/*
++ * This is a hint to the DMA-mapping subsystem that the device is expected
++ * to overwrite the entire mapped size, thus the caller does not require any
++ * of the previous buffer contents to be preserved. This allows
++ * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
++ */
++#define DMA_ATTR_OVERWRITE		(1UL << 10)
++
+ /*
+  * A dma_addr_t can hold any valid DMA or bus address for the platform.  It can
+  * be given to a device to use as a DMA source or target.  It is specific to a
+diff --git a/include/linux/mtd/rawnand.h b/include/linux/mtd/rawnand.h
+index aac07940de09d..db2eaff77f41a 100644
+--- a/include/linux/mtd/rawnand.h
++++ b/include/linux/mtd/rawnand.h
+@@ -1083,6 +1083,7 @@ struct nand_manufacturer {
+  * @lock: Lock protecting the suspended field. Also used to serialize accesses
+  *        to the NAND device
+  * @suspended: Set to 1 when the device is suspended, 0 when it's not
++ * @resume_wq: wait queue to sleep if rawnand is in suspended state.
+  * @cur_cs: Currently selected target. -1 means no target selected, otherwise we
+  *          should always have cur_cs >= 0 && cur_cs < nanddev_ntargets().
+  *          NAND Controller drivers should not modify this value, but they're
+@@ -1135,6 +1136,7 @@ struct nand_chip {
+ 	/* Internals */
+ 	struct mutex lock;
+ 	unsigned int suspended : 1;
++	wait_queue_head_t resume_wq;
+ 	int cur_cs;
+ 	int read_retries;
+ 
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 861f2480c4571..ed2d531400051 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -3980,7 +3980,8 @@ void netdev_run_todo(void);
+  */
+ static inline void dev_put(struct net_device *dev)
+ {
+-	this_cpu_dec(*dev->pcpu_refcnt);
++	if (dev)
++		this_cpu_dec(*dev->pcpu_refcnt);
+ }
+ 
+ /**
+@@ -3991,7 +3992,8 @@ static inline void dev_put(struct net_device *dev)
+  */
+ static inline void dev_hold(struct net_device *dev)
+ {
+-	this_cpu_inc(*dev->pcpu_refcnt);
++	if (dev)
++		this_cpu_inc(*dev->pcpu_refcnt);
+ }
+ 
+ /* Carrier loss detection, dial on demand. The functions netif_carrier_on
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 4519bd12643f6..bc5a1150f0723 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -638,6 +638,7 @@ struct pci_bus {
+ 	struct bin_attribute	*legacy_io;	/* Legacy I/O for this bus */
+ 	struct bin_attribute	*legacy_mem;	/* Legacy mem */
+ 	unsigned int		is_added:1;
++	unsigned int		unsafe_warn:1;	/* warned about RW1C config write */
+ };
+ 
+ #define to_pci_bus(n)	container_of(n, struct pci_bus, dev)
+diff --git a/include/linux/pstore.h b/include/linux/pstore.h
+index eb93a54cff31f..e97a8188f0fd8 100644
+--- a/include/linux/pstore.h
++++ b/include/linux/pstore.h
+@@ -14,7 +14,7 @@
+ #include <linux/errno.h>
+ #include <linux/kmsg_dump.h>
+ #include <linux/mutex.h>
+-#include <linux/semaphore.h>
++#include <linux/spinlock.h>
+ #include <linux/time.h>
+ #include <linux/types.h>
+ 
+@@ -87,7 +87,7 @@ struct pstore_record {
+  * @owner:	module which is responsible for this backend driver
+  * @name:	name of the backend driver
+  *
+- * @buf_lock:	semaphore to serialize access to @buf
++ * @buf_lock:	spinlock to serialize access to @buf
+  * @buf:	preallocated crash dump buffer
+  * @bufsize:	size of @buf available for crash dump bytes (must match
+  *		smallest number of bytes available for writing to a
+@@ -178,7 +178,7 @@ struct pstore_info {
+ 	struct module	*owner;
+ 	const char	*name;
+ 
+-	struct semaphore buf_lock;
++	spinlock_t	buf_lock;
+ 	char		*buf;
+ 	size_t		bufsize;
+ 
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index ff63c2963359d..35b26743dbb28 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -463,6 +463,8 @@ extern void uart_handle_cts_change(struct uart_port *uport,
+ extern void uart_insert_char(struct uart_port *port, unsigned int status,
+ 		 unsigned int overrun, unsigned int ch, unsigned int flag);
+ 
++void uart_xchar_out(struct uart_port *uport, int offset);
++
+ #ifdef CONFIG_MAGIC_SYSRQ_SERIAL
+ #define SYSRQ_TIMEOUT	(HZ * 5)
+ 
+diff --git a/include/linux/soc/ti/ti_sci_protocol.h b/include/linux/soc/ti/ti_sci_protocol.h
+index cf27b080e1482..b1af87330f863 100644
+--- a/include/linux/soc/ti/ti_sci_protocol.h
++++ b/include/linux/soc/ti/ti_sci_protocol.h
+@@ -618,7 +618,7 @@ devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
+ 
+ static inline struct ti_sci_resource *
+ devm_ti_sci_get_resource(const struct ti_sci_handle *handle, struct device *dev,
+-			 u32 dev_id, u32 sub_type);
++			 u32 dev_id, u32 sub_type)
+ {
+ 	return ERR_PTR(-EINVAL);
+ }
+diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
+index b998e4b736912..6d9d1520612b8 100644
+--- a/include/linux/sunrpc/xdr.h
++++ b/include/linux/sunrpc/xdr.h
+@@ -603,6 +603,8 @@ xdr_stream_decode_uint32_array(struct xdr_stream *xdr,
+ 
+ 	if (unlikely(xdr_stream_decode_u32(xdr, &len) < 0))
+ 		return -EBADMSG;
++	if (len > SIZE_MAX / sizeof(*p))
++		return -EBADMSG;
+ 	p = xdr_inline_decode(xdr, len * sizeof(*p));
+ 	if (unlikely(!p))
+ 		return -EBADMSG;
+diff --git a/include/net/udp.h b/include/net/udp.h
+index 435cc009e6eaa..4017f257628f3 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -467,6 +467,7 @@ void udp_init(void);
+ 
+ DECLARE_STATIC_KEY_FALSE(udp_encap_needed_key);
+ void udp_encap_enable(void);
++void udp_encap_disable(void);
+ #if IS_ENABLED(CONFIG_IPV6)
+ DECLARE_STATIC_KEY_FALSE(udpv6_encap_needed_key);
+ void udpv6_encap_enable(void);
+diff --git a/include/net/udp_tunnel.h b/include/net/udp_tunnel.h
+index 2ea453dac8762..24ece06bad9ef 100644
+--- a/include/net/udp_tunnel.h
++++ b/include/net/udp_tunnel.h
+@@ -177,9 +177,8 @@ static inline void udp_tunnel_encap_enable(struct socket *sock)
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	if (sock->sk->sk_family == PF_INET6)
+ 		ipv6_stub->udpv6_encap_enable();
+-	else
+ #endif
+-		udp_encap_enable();
++	udp_encap_enable();
+ }
+ 
+ #define UDP_TUNNEL_NIC_MAX_TABLES	4
+diff --git a/include/sound/pcm.h b/include/sound/pcm.h
+index ab966563e852e..5ffc2efedd9f8 100644
+--- a/include/sound/pcm.h
++++ b/include/sound/pcm.h
+@@ -399,6 +399,7 @@ struct snd_pcm_runtime {
+ 	struct fasync_struct *fasync;
+ 	bool stop_operating;		/* sync_stop will be called */
+ 	struct mutex buffer_mutex;	/* protect for buffer changes */
++	atomic_t buffer_accessing;	/* >0: in r/w operation, <0: blocked */
+ 
+ 	/* -- private section -- */
+ 	void *private_data;
+diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h
+index 70ae5497b73a6..4973265655a7f 100644
+--- a/include/trace/events/ext4.h
++++ b/include/trace/events/ext4.h
+@@ -95,6 +95,17 @@ TRACE_DEFINE_ENUM(ES_REFERENCED_B);
+ 	{ FALLOC_FL_COLLAPSE_RANGE,	"COLLAPSE_RANGE"},	\
+ 	{ FALLOC_FL_ZERO_RANGE,		"ZERO_RANGE"})
+ 
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_XATTR);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_CROSS_RENAME);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_JOURNAL_FLAG_CHANGE);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_NOMEM);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_SWAP_BOOT);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_RESIZE);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_RENAME_DIR);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_FALLOC_RANGE);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_INODE_JOURNAL_DATA);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_MAX);
++
+ #define show_fc_reason(reason)						\
+ 	__print_symbolic(reason,					\
+ 		{ EXT4_FC_REASON_XATTR,		"XATTR"},		\
+@@ -2899,41 +2910,50 @@ TRACE_EVENT(ext4_fc_commit_stop,
+ 
+ #define FC_REASON_NAME_STAT(reason)					\
+ 	show_fc_reason(reason),						\
+-	__entry->sbi->s_fc_stats.fc_ineligible_reason_count[reason]
++	__entry->fc_ineligible_rc[reason]
+ 
+ TRACE_EVENT(ext4_fc_stats,
+-	    TP_PROTO(struct super_block *sb),
+-
+-	    TP_ARGS(sb),
++	TP_PROTO(struct super_block *sb),
+ 
+-	    TP_STRUCT__entry(
+-		    __field(dev_t, dev)
+-		    __field(struct ext4_sb_info *, sbi)
+-		    __field(int, count)
+-		    ),
++	TP_ARGS(sb),
+ 
+-	    TP_fast_assign(
+-		    __entry->dev = sb->s_dev;
+-		    __entry->sbi = EXT4_SB(sb);
+-		    ),
++	TP_STRUCT__entry(
++		__field(dev_t, dev)
++		__array(unsigned int, fc_ineligible_rc, EXT4_FC_REASON_MAX)
++		__field(unsigned long, fc_commits)
++		__field(unsigned long, fc_ineligible_commits)
++		__field(unsigned long, fc_numblks)
++	),
+ 
+-	    TP_printk("dev %d:%d fc ineligible reasons:\n"
+-		      "%s:%d, %s:%d, %s:%d, %s:%d, %s:%d, %s:%d, %s:%d, %s:%d, %s:%d; "
+-		      "num_commits:%ld, ineligible: %ld, numblks: %ld",
+-		      MAJOR(__entry->dev), MINOR(__entry->dev),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_XATTR),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_CROSS_RENAME),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_JOURNAL_FLAG_CHANGE),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_NOMEM),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_SWAP_BOOT),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_RESIZE),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_RENAME_DIR),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_FALLOC_RANGE),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_INODE_JOURNAL_DATA),
+-		      __entry->sbi->s_fc_stats.fc_num_commits,
+-		      __entry->sbi->s_fc_stats.fc_ineligible_commits,
+-		      __entry->sbi->s_fc_stats.fc_numblks)
++	TP_fast_assign(
++		int i;
+ 
++		__entry->dev = sb->s_dev;
++		for (i = 0; i < EXT4_FC_REASON_MAX; i++) {
++			__entry->fc_ineligible_rc[i] =
++				EXT4_SB(sb)->s_fc_stats.fc_ineligible_reason_count[i];
++		}
++		__entry->fc_commits = EXT4_SB(sb)->s_fc_stats.fc_num_commits;
++		__entry->fc_ineligible_commits =
++			EXT4_SB(sb)->s_fc_stats.fc_ineligible_commits;
++		__entry->fc_numblks = EXT4_SB(sb)->s_fc_stats.fc_numblks;
++	),
++
++	TP_printk("dev %d,%d fc ineligible reasons:\n"
++		  "%s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u "
++		  "num_commits:%lu, ineligible: %lu, numblks: %lu",
++		  MAJOR(__entry->dev), MINOR(__entry->dev),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_XATTR),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_CROSS_RENAME),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_JOURNAL_FLAG_CHANGE),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_NOMEM),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_SWAP_BOOT),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_RESIZE),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_RENAME_DIR),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_FALLOC_RANGE),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_INODE_JOURNAL_DATA),
++		  __entry->fc_commits, __entry->fc_ineligible_commits,
++		  __entry->fc_numblks)
+ );
+ 
+ #define DEFINE_TRACE_DENTRY_EVENT(__type)				\
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index e70c90116edae..4a3ab0ed6e062 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -83,12 +83,15 @@ enum rxrpc_call_trace {
+ 	rxrpc_call_error,
+ 	rxrpc_call_got,
+ 	rxrpc_call_got_kernel,
++	rxrpc_call_got_timer,
+ 	rxrpc_call_got_userid,
+ 	rxrpc_call_new_client,
+ 	rxrpc_call_new_service,
+ 	rxrpc_call_put,
+ 	rxrpc_call_put_kernel,
+ 	rxrpc_call_put_noqueue,
++	rxrpc_call_put_notimer,
++	rxrpc_call_put_timer,
+ 	rxrpc_call_put_userid,
+ 	rxrpc_call_queued,
+ 	rxrpc_call_queued_ref,
+@@ -278,12 +281,15 @@ enum rxrpc_tx_point {
+ 	EM(rxrpc_call_error,			"*E*") \
+ 	EM(rxrpc_call_got,			"GOT") \
+ 	EM(rxrpc_call_got_kernel,		"Gke") \
++	EM(rxrpc_call_got_timer,		"GTM") \
+ 	EM(rxrpc_call_got_userid,		"Gus") \
+ 	EM(rxrpc_call_new_client,		"NWc") \
+ 	EM(rxrpc_call_new_service,		"NWs") \
+ 	EM(rxrpc_call_put,			"PUT") \
+ 	EM(rxrpc_call_put_kernel,		"Pke") \
+-	EM(rxrpc_call_put_noqueue,		"PNQ") \
++	EM(rxrpc_call_put_noqueue,		"PnQ") \
++	EM(rxrpc_call_put_notimer,		"PnT") \
++	EM(rxrpc_call_put_timer,		"PTM") \
+ 	EM(rxrpc_call_put_userid,		"Pus") \
+ 	EM(rxrpc_call_queued,			"QUE") \
+ 	EM(rxrpc_call_queued_ref,		"QUR") \
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 762bf87c26a3e..b43a86d054948 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -1490,8 +1490,8 @@ union bpf_attr {
+  * 	Return
+  * 		The return value depends on the result of the test, and can be:
+  *
+- *		* 0, if current task belongs to the cgroup2.
+- *		* 1, if current task does not belong to the cgroup2.
++ *		* 1, if current task belongs to the cgroup2.
++ *		* 0, if current task does not belong to the cgroup2.
+  * 		* A negative error code, if an error occurred.
+  *
+  * long bpf_skb_change_tail(struct sk_buff *skb, u32 len, u64 flags)
+@@ -2163,8 +2163,8 @@ union bpf_attr {
+  *
+  * 			# sysctl kernel.perf_event_max_stack=<new value>
+  * 	Return
+- * 		A non-negative value equal to or less than *size* on success,
+- * 		or a negative error in case of failure.
++ * 		The non-negative copied *buf* length equal to or less than
++ * 		*size* on success, or a negative error in case of failure.
+  *
+  * long bpf_skb_load_bytes_relative(const void *skb, u32 offset, void *to, u32 len, u32 start_header)
+  * 	Description
+@@ -3448,8 +3448,8 @@ union bpf_attr {
+  *
+  *			# sysctl kernel.perf_event_max_stack=<new value>
+  *	Return
+- *		A non-negative value equal to or less than *size* on success,
+- *		or a negative error in case of failure.
++ * 		The non-negative copied *buf* length equal to or less than
++ * 		*size* on success, or a negative error in case of failure.
+  *
+  * long bpf_load_hdr_opt(struct bpf_sock_ops *skops, void *searchby_res, u32 len, u64 flags)
+  *	Description
+diff --git a/include/uapi/linux/rseq.h b/include/uapi/linux/rseq.h
+index 9a402fdb60e97..77ee207623a9b 100644
+--- a/include/uapi/linux/rseq.h
++++ b/include/uapi/linux/rseq.h
+@@ -105,23 +105,11 @@ struct rseq {
+ 	 * Read and set by the kernel. Set by user-space with single-copy
+ 	 * atomicity semantics. This field should only be updated by the
+ 	 * thread which registered this data structure. Aligned on 64-bit.
++	 *
++	 * 32-bit architectures should update the low order bits of the
++	 * rseq_cs field, leaving the high order bits initialized to 0.
+ 	 */
+-	union {
+-		__u64 ptr64;
+-#ifdef __LP64__
+-		__u64 ptr;
+-#else
+-		struct {
+-#if (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || defined(__BIG_ENDIAN)
+-			__u32 padding;		/* Initialized to zero. */
+-			__u32 ptr32;
+-#else /* LITTLE */
+-			__u32 ptr32;
+-			__u32 padding;		/* Initialized to zero. */
+-#endif /* ENDIAN */
+-		} ptr;
+-#endif
+-	} rseq_cs;
++	__u64 rseq_cs;
+ 
+ 	/*
+ 	 * Restartable sequences flags field.
+diff --git a/kernel/audit.h b/kernel/audit.h
+index 3b9c0945225a1..1918019e6aaf7 100644
+--- a/kernel/audit.h
++++ b/kernel/audit.h
+@@ -191,6 +191,10 @@ struct audit_context {
+ 		struct {
+ 			char			*name;
+ 		} module;
++		struct {
++			struct audit_ntp_data	ntp_data;
++			struct timespec64	tk_injoffset;
++		} time;
+ 	};
+ 	int fds[2];
+ 	struct audit_proctitle proctitle;
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 638f424859edc..07e2788bbbf12 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -1214,6 +1214,53 @@ static void audit_log_fcaps(struct audit_buffer *ab, struct audit_names *name)
+ 			 from_kuid(&init_user_ns, name->fcap.rootid));
+ }
+ 
++static void audit_log_time(struct audit_context *context, struct audit_buffer **ab)
++{
++	const struct audit_ntp_data *ntp = &context->time.ntp_data;
++	const struct timespec64 *tk = &context->time.tk_injoffset;
++	static const char * const ntp_name[] = {
++		"offset",
++		"freq",
++		"status",
++		"tai",
++		"tick",
++		"adjust",
++	};
++	int type;
++
++	if (context->type == AUDIT_TIME_ADJNTPVAL) {
++		for (type = 0; type < AUDIT_NTP_NVALS; type++) {
++			if (ntp->vals[type].newval != ntp->vals[type].oldval) {
++				if (!*ab) {
++					*ab = audit_log_start(context,
++							GFP_KERNEL,
++							AUDIT_TIME_ADJNTPVAL);
++					if (!*ab)
++						return;
++				}
++				audit_log_format(*ab, "op=%s old=%lli new=%lli",
++						 ntp_name[type],
++						 ntp->vals[type].oldval,
++						 ntp->vals[type].newval);
++				audit_log_end(*ab);
++				*ab = NULL;
++			}
++		}
++	}
++	if (tk->tv_sec != 0 || tk->tv_nsec != 0) {
++		if (!*ab) {
++			*ab = audit_log_start(context, GFP_KERNEL,
++					      AUDIT_TIME_INJOFFSET);
++			if (!*ab)
++				return;
++		}
++		audit_log_format(*ab, "sec=%lli nsec=%li",
++				 (long long)tk->tv_sec, tk->tv_nsec);
++		audit_log_end(*ab);
++		*ab = NULL;
++	}
++}
++
+ static void show_special(struct audit_context *context, int *call_panic)
+ {
+ 	struct audit_buffer *ab;
+@@ -1319,6 +1366,11 @@ static void show_special(struct audit_context *context, int *call_panic)
+ 			audit_log_format(ab, "(null)");
+ 
+ 		break;
++	case AUDIT_TIME_ADJNTPVAL:
++	case AUDIT_TIME_INJOFFSET:
++		/* this call deviates from the rest, eating the buffer */
++		audit_log_time(context, &ab);
++		break;
+ 	}
+ 	audit_log_end(ab);
+ }
+@@ -2560,31 +2612,26 @@ void __audit_fanotify(unsigned int response)
+ 
+ void __audit_tk_injoffset(struct timespec64 offset)
+ {
+-	audit_log(audit_context(), GFP_KERNEL, AUDIT_TIME_INJOFFSET,
+-		  "sec=%lli nsec=%li",
+-		  (long long)offset.tv_sec, offset.tv_nsec);
+-}
+-
+-static void audit_log_ntp_val(const struct audit_ntp_data *ad,
+-			      const char *op, enum audit_ntp_type type)
+-{
+-	const struct audit_ntp_val *val = &ad->vals[type];
+-
+-	if (val->newval == val->oldval)
+-		return;
++	struct audit_context *context = audit_context();
+ 
+-	audit_log(audit_context(), GFP_KERNEL, AUDIT_TIME_ADJNTPVAL,
+-		  "op=%s old=%lli new=%lli", op, val->oldval, val->newval);
++	/* only set type if not already set by NTP */
++	if (!context->type)
++		context->type = AUDIT_TIME_INJOFFSET;
++	memcpy(&context->time.tk_injoffset, &offset, sizeof(offset));
+ }
+ 
+ void __audit_ntp_log(const struct audit_ntp_data *ad)
+ {
+-	audit_log_ntp_val(ad, "offset",	AUDIT_NTP_OFFSET);
+-	audit_log_ntp_val(ad, "freq",	AUDIT_NTP_FREQ);
+-	audit_log_ntp_val(ad, "status",	AUDIT_NTP_STATUS);
+-	audit_log_ntp_val(ad, "tai",	AUDIT_NTP_TAI);
+-	audit_log_ntp_val(ad, "tick",	AUDIT_NTP_TICK);
+-	audit_log_ntp_val(ad, "adjust",	AUDIT_NTP_ADJUST);
++	struct audit_context *context = audit_context();
++	int type;
++
++	for (type = 0; type < AUDIT_NTP_NVALS; type++)
++		if (ad->vals[type].newval != ad->vals[type].oldval) {
++			/* unconditionally set type, overwriting TK */
++			context->type = AUDIT_TIME_ADJNTPVAL;
++			memcpy(&context->time.ntp_data, ad, sizeof(*ad));
++			break;
++		}
+ }
+ 
+ void __audit_log_nfcfg(const char *name, u8 af, unsigned int nentries,
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index 56cd7e6589ff3..4575d2d60cb10 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -358,7 +358,7 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs,
+ }
+ 
+ static struct perf_callchain_entry *
+-get_callchain_entry_for_task(struct task_struct *task, u32 init_nr)
++get_callchain_entry_for_task(struct task_struct *task, u32 max_depth)
+ {
+ #ifdef CONFIG_STACKTRACE
+ 	struct perf_callchain_entry *entry;
+@@ -369,9 +369,8 @@ get_callchain_entry_for_task(struct task_struct *task, u32 init_nr)
+ 	if (!entry)
+ 		return NULL;
+ 
+-	entry->nr = init_nr +
+-		stack_trace_save_tsk(task, (unsigned long *)(entry->ip + init_nr),
+-				     sysctl_perf_event_max_stack - init_nr, 0);
++	entry->nr = stack_trace_save_tsk(task, (unsigned long *)entry->ip,
++					 max_depth, 0);
+ 
+ 	/* stack_trace_save_tsk() works on unsigned long array, while
+ 	 * perf_callchain_entry uses u64 array. For 32-bit systems, it is
+@@ -383,7 +382,7 @@ get_callchain_entry_for_task(struct task_struct *task, u32 init_nr)
+ 		int i;
+ 
+ 		/* copy data from the end to avoid using extra buffer */
+-		for (i = entry->nr - 1; i >= (int)init_nr; i--)
++		for (i = entry->nr - 1; i >= 0; i--)
+ 			to[i] = (u64)(from[i]);
+ 	}
+ 
+@@ -400,27 +399,19 @@ static long __bpf_get_stackid(struct bpf_map *map,
+ {
+ 	struct bpf_stack_map *smap = container_of(map, struct bpf_stack_map, map);
+ 	struct stack_map_bucket *bucket, *new_bucket, *old_bucket;
+-	u32 max_depth = map->value_size / stack_map_data_size(map);
+-	/* stack_map_alloc() checks that max_depth <= sysctl_perf_event_max_stack */
+-	u32 init_nr = sysctl_perf_event_max_stack - max_depth;
+ 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
+ 	u32 hash, id, trace_nr, trace_len;
+ 	bool user = flags & BPF_F_USER_STACK;
+ 	u64 *ips;
+ 	bool hash_matches;
+ 
+-	/* get_perf_callchain() guarantees that trace->nr >= init_nr
+-	 * and trace-nr <= sysctl_perf_event_max_stack, so trace_nr <= max_depth
+-	 */
+-	trace_nr = trace->nr - init_nr;
+-
+-	if (trace_nr <= skip)
++	if (trace->nr <= skip)
+ 		/* skipping more than usable stack trace */
+ 		return -EFAULT;
+ 
+-	trace_nr -= skip;
++	trace_nr = trace->nr - skip;
+ 	trace_len = trace_nr * sizeof(u64);
+-	ips = trace->ip + skip + init_nr;
++	ips = trace->ip + skip;
+ 	hash = jhash2((u32 *)ips, trace_len / sizeof(u32), 0);
+ 	id = hash & (smap->n_buckets - 1);
+ 	bucket = READ_ONCE(smap->buckets[id]);
+@@ -477,8 +468,7 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
+ 	   u64, flags)
+ {
+ 	u32 max_depth = map->value_size / stack_map_data_size(map);
+-	/* stack_map_alloc() checks that max_depth <= sysctl_perf_event_max_stack */
+-	u32 init_nr = sysctl_perf_event_max_stack - max_depth;
++	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
+ 	bool user = flags & BPF_F_USER_STACK;
+ 	struct perf_callchain_entry *trace;
+ 	bool kernel = !user;
+@@ -487,8 +477,12 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
+ 			       BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
+ 		return -EINVAL;
+ 
+-	trace = get_perf_callchain(regs, init_nr, kernel, user,
+-				   sysctl_perf_event_max_stack, false, false);
++	max_depth += skip;
++	if (max_depth > sysctl_perf_event_max_stack)
++		max_depth = sysctl_perf_event_max_stack;
++
++	trace = get_perf_callchain(regs, 0, kernel, user, max_depth,
++				   false, false);
+ 
+ 	if (unlikely(!trace))
+ 		/* couldn't fetch the stack trace */
+@@ -579,7 +573,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
+ 			    struct perf_callchain_entry *trace_in,
+ 			    void *buf, u32 size, u64 flags)
+ {
+-	u32 init_nr, trace_nr, copy_len, elem_size, num_elem;
++	u32 trace_nr, copy_len, elem_size, num_elem, max_depth;
+ 	bool user_build_id = flags & BPF_F_USER_BUILD_ID;
+ 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
+ 	bool user = flags & BPF_F_USER_STACK;
+@@ -604,30 +598,28 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
+ 		goto err_fault;
+ 
+ 	num_elem = size / elem_size;
+-	if (sysctl_perf_event_max_stack < num_elem)
+-		init_nr = 0;
+-	else
+-		init_nr = sysctl_perf_event_max_stack - num_elem;
++	max_depth = num_elem + skip;
++	if (sysctl_perf_event_max_stack < max_depth)
++		max_depth = sysctl_perf_event_max_stack;
+ 
+ 	if (trace_in)
+ 		trace = trace_in;
+ 	else if (kernel && task)
+-		trace = get_callchain_entry_for_task(task, init_nr);
++		trace = get_callchain_entry_for_task(task, max_depth);
+ 	else
+-		trace = get_perf_callchain(regs, init_nr, kernel, user,
+-					   sysctl_perf_event_max_stack,
++		trace = get_perf_callchain(regs, 0, kernel, user, max_depth,
+ 					   false, false);
+ 	if (unlikely(!trace))
+ 		goto err_fault;
+ 
+-	trace_nr = trace->nr - init_nr;
+-	if (trace_nr < skip)
++	if (trace->nr < skip)
+ 		goto err_fault;
+ 
+-	trace_nr -= skip;
++	trace_nr = trace->nr - skip;
+ 	trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;
+ 	copy_len = trace_nr * elem_size;
+-	ips = trace->ip + skip + init_nr;
++
++	ips = trace->ip + skip;
+ 	if (user && user_build_id)
+ 		stack_map_get_build_id_offset(buf, ips, trace_nr, user);
+ 	else
+diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
+index 6226502ce0499..13417f0045f02 100644
+--- a/kernel/debug/kdb/kdb_support.c
++++ b/kernel/debug/kdb/kdb_support.c
+@@ -350,7 +350,7 @@ int kdb_getarea_size(void *res, unsigned long addr, size_t size)
+  */
+ int kdb_putarea_size(unsigned long addr, void *res, size_t size)
+ {
+-	int ret = copy_from_kernel_nofault((char *)addr, (char *)res, size);
++	int ret = copy_to_kernel_nofault((char *)addr, (char *)res, size);
+ 	if (ret) {
+ 		if (!KDB_STATE(SUPPRESS)) {
+ 			kdb_printf("kdb_putarea: Bad address 0x%lx\n", addr);
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index 10d07ace46c15..f8ae546798651 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -928,7 +928,7 @@ static __init int dma_debug_cmdline(char *str)
+ 		global_disable = true;
+ 	}
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static __init int dma_debug_entries_cmdline(char *str)
+@@ -937,7 +937,7 @@ static __init int dma_debug_entries_cmdline(char *str)
+ 		return -EINVAL;
+ 	if (!get_option(&str, &nr_prealloc_entries))
+ 		nr_prealloc_entries = PREALLOC_DMA_DEBUG_ENTRIES;
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("dma_debug=", dma_debug_cmdline);
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 0ed0e1f215c75..62b1e5fa86736 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -598,7 +598,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
+ 
+ 	tlb_addr = slot_addr(io_tlb_start, index) + offset;
+ 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+-	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
++	    (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
++	    dir == DMA_BIDIRECTIONAL))
+ 		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
+ 	return tlb_addr;
+ }
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index c8b3f94f0dbb3..79d8b27cf2fc6 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10265,8 +10265,11 @@ perf_event_parse_addr_filter(struct perf_event *event, char *fstr,
+ 			}
+ 
+ 			/* ready to consume more filters */
++			kfree(filename);
++			filename = NULL;
+ 			state = IF_STATE_ACTION;
+ 			filter = NULL;
++			kernel = 0;
+ 		}
+ 	}
+ 
+diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
+index f76fdb9255323..e8bdce6fdd647 100644
+--- a/kernel/livepatch/core.c
++++ b/kernel/livepatch/core.c
+@@ -191,7 +191,7 @@ static int klp_find_object_symbol(const char *objname, const char *name,
+ 	return -EINVAL;
+ }
+ 
+-static int klp_resolve_symbols(Elf64_Shdr *sechdrs, const char *strtab,
++static int klp_resolve_symbols(Elf_Shdr *sechdrs, const char *strtab,
+ 			       unsigned int symndx, Elf_Shdr *relasec,
+ 			       const char *sec_objname)
+ {
+@@ -219,7 +219,7 @@ static int klp_resolve_symbols(Elf64_Shdr *sechdrs, const char *strtab,
+ 	relas = (Elf_Rela *) relasec->sh_addr;
+ 	/* For each rela in this klp relocation section */
+ 	for (i = 0; i < relasec->sh_size / sizeof(Elf_Rela); i++) {
+-		sym = (Elf64_Sym *)sechdrs[symndx].sh_addr + ELF_R_SYM(relas[i].r_info);
++		sym = (Elf_Sym *)sechdrs[symndx].sh_addr + ELF_R_SYM(relas[i].r_info);
+ 		if (sym->st_shndx != SHN_LIVEPATCH) {
+ 			pr_err("symbol %s is not marked as a livepatch symbol\n",
+ 			       strtab + sym->st_name);
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index af4b35450556f..b6683cefe19a4 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -182,11 +182,9 @@ static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);
+ static struct hlist_head lock_keys_hash[KEYHASH_SIZE];
+ unsigned long nr_lock_classes;
+ unsigned long nr_zapped_classes;
+-#ifndef CONFIG_DEBUG_LOCKDEP
+-static
+-#endif
++unsigned long max_lock_class_idx;
+ struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
+-static DECLARE_BITMAP(lock_classes_in_use, MAX_LOCKDEP_KEYS);
++DECLARE_BITMAP(lock_classes_in_use, MAX_LOCKDEP_KEYS);
+ 
+ static inline struct lock_class *hlock_class(struct held_lock *hlock)
+ {
+@@ -337,7 +335,7 @@ static inline void lock_release_holdtime(struct held_lock *hlock)
+  * elements. These elements are linked together by the lock_entry member in
+  * struct lock_class.
+  */
+-LIST_HEAD(all_lock_classes);
++static LIST_HEAD(all_lock_classes);
+ static LIST_HEAD(free_lock_classes);
+ 
+ /**
+@@ -1239,6 +1237,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
+ 	struct lockdep_subclass_key *key;
+ 	struct hlist_head *hash_head;
+ 	struct lock_class *class;
++	int idx;
+ 
+ 	DEBUG_LOCKS_WARN_ON(!irqs_disabled());
+ 
+@@ -1304,6 +1303,9 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
+ 	 * of classes.
+ 	 */
+ 	list_move_tail(&class->lock_entry, &all_lock_classes);
++	idx = class - lock_classes;
++	if (idx > max_lock_class_idx)
++		max_lock_class_idx = idx;
+ 
+ 	if (verbose(class)) {
+ 		graph_unlock();
+@@ -5919,6 +5921,8 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
+ 		WRITE_ONCE(class->name, NULL);
+ 		nr_lock_classes--;
+ 		__clear_bit(class - lock_classes, lock_classes_in_use);
++		if (class - lock_classes == max_lock_class_idx)
++			max_lock_class_idx--;
+ 	} else {
+ 		WARN_ONCE(true, "%s() failed for class %s\n", __func__,
+ 			  class->name);
+@@ -6209,7 +6213,13 @@ void lockdep_reset_lock(struct lockdep_map *lock)
+ 		lockdep_reset_lock_reg(lock);
+ }
+ 
+-/* Unregister a dynamically allocated key. */
++/*
++ * Unregister a dynamically allocated key.
++ *
++ * Unlike lockdep_register_key(), a search is always done to find a matching
++ * key irrespective of debug_locks to avoid potential invalid access to freed
++ * memory in lock_class entry.
++ */
+ void lockdep_unregister_key(struct lock_class_key *key)
+ {
+ 	struct hlist_head *hash_head = keyhashentry(key);
+@@ -6224,10 +6234,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
+ 		return;
+ 
+ 	raw_local_irq_save(flags);
+-	if (!graph_lock())
+-		goto out_irq;
++	lockdep_lock();
+ 
+-	pf = get_pending_free();
+ 	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
+ 		if (k == key) {
+ 			hlist_del_rcu(&k->hash_entry);
+@@ -6235,11 +6243,13 @@ void lockdep_unregister_key(struct lock_class_key *key)
+ 			break;
+ 		}
+ 	}
+-	WARN_ON_ONCE(!found);
+-	__lockdep_free_key_range(pf, key, 1);
+-	call_rcu_zapped(pf);
+-	graph_unlock();
+-out_irq:
++	WARN_ON_ONCE(!found && debug_locks);
++	if (found) {
++		pf = get_pending_free();
++		__lockdep_free_key_range(pf, key, 1);
++		call_rcu_zapped(pf);
++	}
++	lockdep_unlock();
+ 	raw_local_irq_restore(flags);
+ 
+ 	/* Wait until is_dynamic_key() has finished accessing k->hash_entry. */
+diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
+index de49f9e1c11ba..a19b016353478 100644
+--- a/kernel/locking/lockdep_internals.h
++++ b/kernel/locking/lockdep_internals.h
+@@ -121,7 +121,6 @@ static const unsigned long LOCKF_USED_IN_IRQ_READ =
+ 
+ #define MAX_LOCKDEP_CHAIN_HLOCKS (MAX_LOCKDEP_CHAINS*5)
+ 
+-extern struct list_head all_lock_classes;
+ extern struct lock_chain lock_chains[];
+ 
+ #define LOCK_USAGE_CHARS (2*XXX_LOCK_USAGE_STATES + 1)
+@@ -151,6 +150,10 @@ extern unsigned int nr_large_chain_blocks;
+ 
+ extern unsigned int max_lockdep_depth;
+ extern unsigned int max_bfs_queue_depth;
++extern unsigned long max_lock_class_idx;
++
++extern struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
++extern unsigned long lock_classes_in_use[];
+ 
+ #ifdef CONFIG_PROVE_LOCKING
+ extern unsigned long lockdep_count_forward_deps(struct lock_class *);
+@@ -205,7 +208,6 @@ struct lockdep_stats {
+ };
+ 
+ DECLARE_PER_CPU(struct lockdep_stats, lockdep_stats);
+-extern struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
+ 
+ #define __debug_atomic_inc(ptr)					\
+ 	this_cpu_inc(lockdep_stats.ptr);
+diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
+index 02ef87f50df29..ccb5292d1e194 100644
+--- a/kernel/locking/lockdep_proc.c
++++ b/kernel/locking/lockdep_proc.c
+@@ -24,14 +24,33 @@
+ 
+ #include "lockdep_internals.h"
+ 
++/*
++ * Since iteration of lock_classes is done without holding the lockdep lock,
++ * it is not safe to iterate all_lock_classes list directly as the iteration
++ * may branch off to free_lock_classes or the zapped list. Iteration is done
++ * directly on the lock_classes array by checking the lock_classes_in_use
++ * bitmap and max_lock_class_idx.
++ */
++#define iterate_lock_classes(idx, class)				\
++	for (idx = 0, class = lock_classes; idx <= max_lock_class_idx;	\
++	     idx++, class++)
++
+ static void *l_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+-	return seq_list_next(v, &all_lock_classes, pos);
++	struct lock_class *class = v;
++
++	++class;
++	*pos = class - lock_classes;
++	return (*pos > max_lock_class_idx) ? NULL : class;
+ }
+ 
+ static void *l_start(struct seq_file *m, loff_t *pos)
+ {
+-	return seq_list_start_head(&all_lock_classes, *pos);
++	unsigned long idx = *pos;
++
++	if (idx > max_lock_class_idx)
++		return NULL;
++	return lock_classes + idx;
+ }
+ 
+ static void l_stop(struct seq_file *m, void *v)
+@@ -57,14 +76,16 @@ static void print_name(struct seq_file *m, struct lock_class *class)
+ 
+ static int l_show(struct seq_file *m, void *v)
+ {
+-	struct lock_class *class = list_entry(v, struct lock_class, lock_entry);
++	struct lock_class *class = v;
+ 	struct lock_list *entry;
+ 	char usage[LOCK_USAGE_CHARS];
++	int idx = class - lock_classes;
+ 
+-	if (v == &all_lock_classes) {
++	if (v == lock_classes)
+ 		seq_printf(m, "all lock classes:\n");
++
++	if (!test_bit(idx, lock_classes_in_use))
+ 		return 0;
+-	}
+ 
+ 	seq_printf(m, "%p", class->key);
+ #ifdef CONFIG_DEBUG_LOCKDEP
+@@ -218,8 +239,11 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
+ 
+ #ifdef CONFIG_PROVE_LOCKING
+ 	struct lock_class *class;
++	unsigned long idx;
+ 
+-	list_for_each_entry(class, &all_lock_classes, lock_entry) {
++	iterate_lock_classes(idx, class) {
++		if (!test_bit(idx, lock_classes_in_use))
++			continue;
+ 
+ 		if (class->usage_mask == 0)
+ 			nr_unused++;
+@@ -252,6 +276,7 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
+ 
+ 		sum_forward_deps += lockdep_count_forward_deps(class);
+ 	}
++
+ #ifdef CONFIG_DEBUG_LOCKDEP
+ 	DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused);
+ #endif
+@@ -343,6 +368,8 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
+ 	seq_printf(m, " max bfs queue depth:           %11u\n",
+ 			max_bfs_queue_depth);
+ #endif
++	seq_printf(m, " max lock class index:          %11lu\n",
++			max_lock_class_idx);
+ 	lockdep_stats_debug_show(m);
+ 	seq_printf(m, " debug_locks:                   %11u\n",
+ 			debug_locks);
+@@ -620,12 +647,16 @@ static int lock_stat_open(struct inode *inode, struct file *file)
+ 	if (!res) {
+ 		struct lock_stat_data *iter = data->stats;
+ 		struct seq_file *m = file->private_data;
++		unsigned long idx;
+ 
+-		list_for_each_entry(class, &all_lock_classes, lock_entry) {
++		iterate_lock_classes(idx, class) {
++			if (!test_bit(idx, lock_classes_in_use))
++				continue;
+ 			iter->class = class;
+ 			iter->stats = lock_stats(class);
+ 			iter++;
+ 		}
++
+ 		data->iter_end = iter;
+ 
+ 		sort(data->stats, data->iter_end - data->stats,
+@@ -643,6 +674,7 @@ static ssize_t lock_stat_write(struct file *file, const char __user *buf,
+ 			       size_t count, loff_t *ppos)
+ {
+ 	struct lock_class *class;
++	unsigned long idx;
+ 	char c;
+ 
+ 	if (count) {
+@@ -652,8 +684,11 @@ static ssize_t lock_stat_write(struct file *file, const char __user *buf,
+ 		if (c != '0')
+ 			return count;
+ 
+-		list_for_each_entry(class, &all_lock_classes, lock_entry)
++		iterate_lock_classes(idx, class) {
++			if (!test_bit(idx, lock_classes_in_use))
++				continue;
+ 			clear_lock_stats(class);
++		}
+ 	}
+ 	return count;
+ }
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index bf640fd6142a0..522cb1387462c 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -1323,7 +1323,7 @@ static int __init resumedelay_setup(char *str)
+ 	int rc = kstrtouint(str, 0, &resume_delay);
+ 
+ 	if (rc)
+-		return rc;
++		pr_warn("resumedelay: bad option string '%s'\n", str);
+ 	return 1;
+ }
+ 
+diff --git a/kernel/power/suspend_test.c b/kernel/power/suspend_test.c
+index e1ed58adb69e4..be480ae5cb2aa 100644
+--- a/kernel/power/suspend_test.c
++++ b/kernel/power/suspend_test.c
+@@ -157,22 +157,22 @@ static int __init setup_test_suspend(char *value)
+ 	value++;
+ 	suspend_type = strsep(&value, ",");
+ 	if (!suspend_type)
+-		return 0;
++		return 1;
+ 
+ 	repeat = strsep(&value, ",");
+ 	if (repeat) {
+ 		if (kstrtou32(repeat, 0, &test_repeat_count_max))
+-			return 0;
++			return 1;
+ 	}
+ 
+ 	for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++)
+ 		if (!strcmp(pm_labels[i], suspend_type)) {
+ 			test_state_label = pm_labels[i];
+-			return 0;
++			return 1;
+ 		}
+ 
+ 	printk(warn_bad_state, suspend_type);
+-	return 0;
++	return 1;
+ }
+ __setup("test_suspend", setup_test_suspend);
+ 
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 85351a12c85dc..17a310dcb6d96 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -146,8 +146,10 @@ static int __control_devkmsg(char *str)
+ 
+ static int __init control_devkmsg(char *str)
+ {
+-	if (__control_devkmsg(str) < 0)
++	if (__control_devkmsg(str) < 0) {
++		pr_warn("printk.devkmsg: bad option string '%s'\n", str);
+ 		return 1;
++	}
+ 
+ 	/*
+ 	 * Set sysctl string accordingly:
+@@ -166,7 +168,7 @@ static int __init control_devkmsg(char *str)
+ 	 */
+ 	devkmsg_log |= DEVKMSG_LOG_MASK_LOCK;
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("printk.devkmsg=", control_devkmsg);
+ 
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index eb4d04cb3aaf5..d99f73f83bf5f 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -370,6 +370,26 @@ bool ptrace_may_access(struct task_struct *task, unsigned int mode)
+ 	return !err;
+ }
+ 
++static int check_ptrace_options(unsigned long data)
++{
++	if (data & ~(unsigned long)PTRACE_O_MASK)
++		return -EINVAL;
++
++	if (unlikely(data & PTRACE_O_SUSPEND_SECCOMP)) {
++		if (!IS_ENABLED(CONFIG_CHECKPOINT_RESTORE) ||
++		    !IS_ENABLED(CONFIG_SECCOMP))
++			return -EINVAL;
++
++		if (!capable(CAP_SYS_ADMIN))
++			return -EPERM;
++
++		if (seccomp_mode(&current->seccomp) != SECCOMP_MODE_DISABLED ||
++		    current->ptrace & PT_SUSPEND_SECCOMP)
++			return -EPERM;
++	}
++	return 0;
++}
++
+ static int ptrace_attach(struct task_struct *task, long request,
+ 			 unsigned long addr,
+ 			 unsigned long flags)
+@@ -381,8 +401,16 @@ static int ptrace_attach(struct task_struct *task, long request,
+ 	if (seize) {
+ 		if (addr != 0)
+ 			goto out;
++		/*
++		 * This duplicates the check in check_ptrace_options() because
++		 * ptrace_attach() and ptrace_setoptions() have historically
++		 * used different error codes for unknown ptrace options.
++		 */
+ 		if (flags & ~(unsigned long)PTRACE_O_MASK)
+ 			goto out;
++		retval = check_ptrace_options(flags);
++		if (retval)
++			return retval;
+ 		flags = PT_PTRACED | PT_SEIZED | (flags << PT_OPT_FLAG_SHIFT);
+ 	} else {
+ 		flags = PT_PTRACED;
+@@ -655,22 +683,11 @@ int ptrace_writedata(struct task_struct *tsk, char __user *src, unsigned long ds
+ static int ptrace_setoptions(struct task_struct *child, unsigned long data)
+ {
+ 	unsigned flags;
++	int ret;
+ 
+-	if (data & ~(unsigned long)PTRACE_O_MASK)
+-		return -EINVAL;
+-
+-	if (unlikely(data & PTRACE_O_SUSPEND_SECCOMP)) {
+-		if (!IS_ENABLED(CONFIG_CHECKPOINT_RESTORE) ||
+-		    !IS_ENABLED(CONFIG_SECCOMP))
+-			return -EINVAL;
+-
+-		if (!capable(CAP_SYS_ADMIN))
+-			return -EPERM;
+-
+-		if (seccomp_mode(&current->seccomp) != SECCOMP_MODE_DISABLED ||
+-		    current->ptrace & PT_SUSPEND_SECCOMP)
+-			return -EPERM;
+-	}
++	ret = check_ptrace_options(data);
++	if (ret)
++		return ret;
+ 
+ 	/* Avoid intermediate state when all opts are cleared */
+ 	flags = child->ptrace;
+diff --git a/kernel/rseq.c b/kernel/rseq.c
+index 0077713bf2400..6ca29dddceabc 100644
+--- a/kernel/rseq.c
++++ b/kernel/rseq.c
+@@ -120,8 +120,13 @@ static int rseq_get_rseq_cs(struct task_struct *t, struct rseq_cs *rseq_cs)
+ 	u32 sig;
+ 	int ret;
+ 
+-	if (copy_from_user(&ptr, &t->rseq->rseq_cs.ptr64, sizeof(ptr)))
++#ifdef CONFIG_64BIT
++	if (get_user(ptr, &t->rseq->rseq_cs))
+ 		return -EFAULT;
++#else
++	if (copy_from_user(&ptr, &t->rseq->rseq_cs, sizeof(ptr)))
++		return -EFAULT;
++#endif
+ 	if (!ptr) {
+ 		memset(rseq_cs, 0, sizeof(*rseq_cs));
+ 		return 0;
+@@ -204,9 +209,13 @@ static int clear_rseq_cs(struct task_struct *t)
+ 	 *
+ 	 * Set rseq_cs to NULL.
+ 	 */
+-	if (clear_user(&t->rseq->rseq_cs.ptr64, sizeof(t->rseq->rseq_cs.ptr64)))
++#ifdef CONFIG_64BIT
++	return put_user(0UL, &t->rseq->rseq_cs);
++#else
++	if (clear_user(&t->rseq->rseq_cs, sizeof(t->rseq->rseq_cs)))
+ 		return -EFAULT;
+ 	return 0;
++#endif
+ }
+ 
+ /*
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 0a5f9fad45e4b..e437d946b27bb 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -36,6 +36,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_rt_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_dl_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_se_tp);
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_thermal_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_cpu_capacity_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_overutilized_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_util_est_cfs_tp);
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 70a5782724363..e7df4f2935871 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -908,25 +908,15 @@ void print_numa_stats(struct seq_file *m, int node, unsigned long tsf,
+ static void sched_show_numa(struct task_struct *p, struct seq_file *m)
+ {
+ #ifdef CONFIG_NUMA_BALANCING
+-	struct mempolicy *pol;
+-
+ 	if (p->mm)
+ 		P(mm->numa_scan_seq);
+ 
+-	task_lock(p);
+-	pol = p->mempolicy;
+-	if (pol && !(pol->flags & MPOL_F_MORON))
+-		pol = NULL;
+-	mpol_get(pol);
+-	task_unlock(p);
+-
+ 	P(numa_pages_migrated);
+ 	P(numa_preferred_nid);
+ 	P(total_numa_faults);
+ 	SEQ_printf(m, "current_node=%d, numa_group_id=%d\n",
+ 			task_node(p), task_numa_group_id(p));
+ 	show_numa_stats(p, m);
+-	mpol_put(pol);
+ #endif
+ }
+ 
+diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
+index e3f144d960261..249ed32591449 100644
+--- a/kernel/watch_queue.c
++++ b/kernel/watch_queue.c
+@@ -274,7 +274,7 @@ long watch_queue_set_size(struct pipe_inode_info *pipe, unsigned int nr_notes)
+ 	return 0;
+ 
+ error_p:
+-	for (i = 0; i < nr_pages; i++)
++	while (--i >= 0)
+ 		__free_page(pages[i]);
+ 	kfree(pages);
+ error:
+@@ -373,6 +373,7 @@ static void __put_watch_queue(struct kref *kref)
+ 
+ 	for (i = 0; i < wqueue->nr_pages; i++)
+ 		__free_page(wqueue->notes[i]);
++	kfree(wqueue->notes);
+ 	bitmap_free(wqueue->notes_bitmap);
+ 
+ 	wfilter = rcu_access_pointer(wqueue->filter);
+@@ -398,6 +399,7 @@ static void free_watch(struct rcu_head *rcu)
+ 	put_watch_queue(rcu_access_pointer(watch->queue));
+ 	atomic_dec(&watch->cred->user->nr_watches);
+ 	put_cred(watch->cred);
++	kfree(watch);
+ }
+ 
+ static void __put_watch(struct kref *kref)
+diff --git a/lib/kunit/try-catch.c b/lib/kunit/try-catch.c
+index 0dd434e40487c..71e5c58530996 100644
+--- a/lib/kunit/try-catch.c
++++ b/lib/kunit/try-catch.c
+@@ -52,7 +52,7 @@ static unsigned long kunit_test_timeout(void)
+ 	 * If tests timeout due to exceeding sysctl_hung_task_timeout_secs,
+ 	 * the task will be killed and an oops generated.
+ 	 */
+-	return 300 * MSEC_PER_SEC; /* 5 min */
++	return 300 * msecs_to_jiffies(MSEC_PER_SEC); /* 5 min */
+ }
+ 
+ void kunit_try_catch_run(struct kunit_try_catch *try_catch, void *context)
+diff --git a/lib/raid6/test/Makefile b/lib/raid6/test/Makefile
+index a4c7cd74cff58..4fb7700a741bd 100644
+--- a/lib/raid6/test/Makefile
++++ b/lib/raid6/test/Makefile
+@@ -4,6 +4,8 @@
+ # from userspace.
+ #
+ 
++pound := \#
++
+ CC	 = gcc
+ OPTFLAGS = -O2			# Adjust as desired
+ CFLAGS	 = -I.. -I ../../../include -g $(OPTFLAGS)
+@@ -42,7 +44,7 @@ else ifeq ($(HAS_NEON),yes)
+         OBJS   += neon.o neon1.o neon2.o neon4.o neon8.o recov_neon.o recov_neon_inner.o
+         CFLAGS += -DCONFIG_KERNEL_MODE_NEON=1
+ else
+-        HAS_ALTIVEC := $(shell printf '\#include <altivec.h>\nvector int a;\n' |\
++        HAS_ALTIVEC := $(shell printf '$(pound)include <altivec.h>\nvector int a;\n' |\
+                          gcc -c -x c - >/dev/null && rm ./-.o && echo yes)
+         ifeq ($(HAS_ALTIVEC),yes)
+                 CFLAGS += -I../../../arch/powerpc/include
+diff --git a/lib/raid6/test/test.c b/lib/raid6/test/test.c
+index a3cf071941ab4..841a55242abaa 100644
+--- a/lib/raid6/test/test.c
++++ b/lib/raid6/test/test.c
+@@ -19,7 +19,6 @@
+ #define NDISKS		16	/* Including P and Q */
+ 
+ const char raid6_empty_zero_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
+-struct raid6_calls raid6_call;
+ 
+ char *dataptrs[NDISKS];
+ char data[NDISKS][PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
+diff --git a/lib/test_kmod.c b/lib/test_kmod.c
+index eab52770070d6..c637f6b5053a9 100644
+--- a/lib/test_kmod.c
++++ b/lib/test_kmod.c
+@@ -1155,6 +1155,7 @@ static struct kmod_test_device *register_test_dev_kmod(void)
+ 	if (ret) {
+ 		pr_err("could not register misc device: %d\n", ret);
+ 		free_test_dev_kmod(test_dev);
++		test_dev = NULL;
+ 		goto out;
+ 	}
+ 
+diff --git a/lib/test_lockup.c b/lib/test_lockup.c
+index f1a020bcc763e..78a630bbd03df 100644
+--- a/lib/test_lockup.c
++++ b/lib/test_lockup.c
+@@ -417,9 +417,14 @@ static bool test_kernel_ptr(unsigned long addr, int size)
+ 		return false;
+ 
+ 	/* should be at least readable kernel address */
+-	if (access_ok(ptr, 1) ||
+-	    access_ok(ptr + size - 1, 1) ||
+-	    get_kernel_nofault(buf, ptr) ||
++	if (!IS_ENABLED(CONFIG_ALTERNATE_USER_ADDRESS_SPACE) &&
++	    (access_ok((void __user *)ptr, 1) ||
++	     access_ok((void __user *)ptr + size - 1, 1))) {
++		pr_err("user space ptr invalid in kernel: %#lx\n", addr);
++		return true;
++	}
++
++	if (get_kernel_nofault(buf, ptr) ||
+ 	    get_kernel_nofault(buf, ptr + size - 1)) {
+ 		pr_err("invalid kernel ptr: %#lx\n", addr);
+ 		return true;
+diff --git a/lib/test_xarray.c b/lib/test_xarray.c
+index 8b1c318189ce8..e77d4856442c3 100644
+--- a/lib/test_xarray.c
++++ b/lib/test_xarray.c
+@@ -1463,6 +1463,25 @@ unlock:
+ 	XA_BUG_ON(xa, !xa_empty(xa));
+ }
+ 
++static noinline void check_create_range_5(struct xarray *xa,
++		unsigned long index, unsigned int order)
++{
++	XA_STATE_ORDER(xas, xa, index, order);
++	unsigned int i;
++
++	xa_store_order(xa, index, order, xa_mk_index(index), GFP_KERNEL);
++
++	for (i = 0; i < order + 10; i++) {
++		do {
++			xas_lock(&xas);
++			xas_create_range(&xas);
++			xas_unlock(&xas);
++		} while (xas_nomem(&xas, GFP_KERNEL));
++	}
++
++	xa_destroy(xa);
++}
++
+ static noinline void check_create_range(struct xarray *xa)
+ {
+ 	unsigned int order;
+@@ -1490,6 +1509,9 @@ static noinline void check_create_range(struct xarray *xa)
+ 		check_create_range_4(xa, (3U << order) + 1, order);
+ 		check_create_range_4(xa, (3U << order) - 1, order);
+ 		check_create_range_4(xa, (1U << 24) + 1, order);
++
++		check_create_range_5(xa, 0, order);
++		check_create_range_5(xa, (1U << order), order);
+ 	}
+ 
+ 	check_create_range_3();
+diff --git a/lib/xarray.c b/lib/xarray.c
+index ed775dee1074c..75da19a7a9334 100644
+--- a/lib/xarray.c
++++ b/lib/xarray.c
+@@ -722,6 +722,8 @@ void xas_create_range(struct xa_state *xas)
+ 
+ 		for (;;) {
+ 			struct xa_node *node = xas->xa_node;
++			if (node->shift >= shift)
++				break;
+ 			xas->xa_node = xa_parent_locked(xas->xa, node);
+ 			xas->xa_offset = node->offset - 1;
+ 			if (node->offset != 0)
+@@ -1078,6 +1080,7 @@ void xas_split(struct xa_state *xas, void *entry, unsigned int order)
+ 					xa_mk_node(child));
+ 			if (xa_is_value(curr))
+ 				values--;
++			xas_update(xas, child);
+ 		} else {
+ 			unsigned int canon = offset - xas->xa_sibs;
+ 
+@@ -1092,6 +1095,7 @@ void xas_split(struct xa_state *xas, void *entry, unsigned int order)
+ 	} while (offset-- > xas->xa_offset);
+ 
+ 	node->nr_values += values;
++	xas_update(xas, node);
+ }
+ EXPORT_SYMBOL_GPL(xas_split);
+ #endif
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 56fcfcb8e6173..4801751cb6b6d 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -787,6 +787,8 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
+ 	unsigned long flags;
+ 	struct kmemleak_object *object;
+ 	struct kmemleak_scan_area *area = NULL;
++	unsigned long untagged_ptr;
++	unsigned long untagged_objp;
+ 
+ 	object = find_and_get_object(ptr, 1);
+ 	if (!object) {
+@@ -795,6 +797,9 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
+ 		return;
+ 	}
+ 
++	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
++	untagged_objp = (unsigned long)kasan_reset_tag((void *)object->pointer);
++
+ 	if (scan_area_cache)
+ 		area = kmem_cache_alloc(scan_area_cache, gfp_kmemleak_mask(gfp));
+ 
+@@ -806,8 +811,8 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
+ 		goto out_unlock;
+ 	}
+ 	if (size == SIZE_MAX) {
+-		size = object->pointer + object->size - ptr;
+-	} else if (ptr + size > object->pointer + object->size) {
++		size = untagged_objp + object->size - untagged_ptr;
++	} else if (untagged_ptr + size > untagged_objp + object->size) {
+ 		kmemleak_warn("Scan area larger than object 0x%08lx\n", ptr);
+ 		dump_object_info(object);
+ 		kmem_cache_free(scan_area_cache, area);
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 24abc79f8914e..77e1dc2d4e186 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -1229,8 +1229,7 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
+ 		iov_iter_advance(&iter, iovec.iov_len);
+ 	}
+ 
+-	if (ret == 0)
+-		ret = total_len - iov_iter_count(&iter);
++	ret = (total_len - iov_iter_count(&iter)) ? : ret;
+ 
+ release_mm:
+ 	mmput(mm);
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index dbe07fef26828..92ab008777183 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -7124,7 +7124,7 @@ static int __init cgroup_memory(char *s)
+ 		if (!strcmp(token, "nokmem"))
+ 			cgroup_memory_nokmem = true;
+ 	}
+-	return 0;
++	return 1;
+ }
+ __setup("cgroup.memory=", cgroup_memory);
+ 
+diff --git a/mm/memory.c b/mm/memory.c
+index 4fe24cd865a79..af27127c235e2 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3676,11 +3676,20 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
+ 		return ret;
+ 
+ 	if (unlikely(PageHWPoison(vmf->page))) {
+-		if (ret & VM_FAULT_LOCKED)
+-			unlock_page(vmf->page);
+-		put_page(vmf->page);
++		struct page *page = vmf->page;
++		vm_fault_t poisonret = VM_FAULT_HWPOISON;
++		if (ret & VM_FAULT_LOCKED) {
++			if (page_mapped(page))
++				unmap_mapping_pages(page_mapping(page),
++						    page->index, 1, false);
++			/* Retry if a clean page was removed from the cache. */
++			if (invalidate_inode_page(page))
++				poisonret = VM_FAULT_NOPAGE;
++			unlock_page(page);
++		}
++		put_page(page);
+ 		vmf->page = NULL;
+-		return VM_FAULT_HWPOISON;
++		return poisonret;
+ 	}
+ 
+ 	if (unlikely(!(ret & VM_FAULT_LOCKED)))
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index c8b1592dff73d..eb97aed2fbe7d 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -802,7 +802,6 @@ static int vma_replace_policy(struct vm_area_struct *vma,
+ static int mbind_range(struct mm_struct *mm, unsigned long start,
+ 		       unsigned long end, struct mempolicy *new_pol)
+ {
+-	struct vm_area_struct *next;
+ 	struct vm_area_struct *prev;
+ 	struct vm_area_struct *vma;
+ 	int err = 0;
+@@ -817,8 +816,7 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
+ 	if (start > vma->vm_start)
+ 		prev = vma;
+ 
+-	for (; vma && vma->vm_start < end; prev = vma, vma = next) {
+-		next = vma->vm_next;
++	for (; vma && vma->vm_start < end; prev = vma, vma = vma->vm_next) {
+ 		vmstart = max(start, vma->vm_start);
+ 		vmend   = min(end, vma->vm_end);
+ 
+@@ -832,10 +830,6 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
+ 				 new_pol, vma->vm_userfaultfd_ctx);
+ 		if (prev) {
+ 			vma = prev;
+-			next = vma->vm_next;
+-			if (mpol_equal(vma_policy(vma), new_pol))
+-				continue;
+-			/* vma_merge() joined vma && vma->next, case 8 */
+ 			goto replace;
+ 		}
+ 		if (vma->vm_start != vmstart) {
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 5c8b4485860de..46c160d4eac14 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -2577,7 +2577,7 @@ static int __init cmdline_parse_stack_guard_gap(char *p)
+ 	if (!*endptr)
+ 		stack_guard_gap = val << PAGE_SHIFT;
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("stack_guard_gap=", cmdline_parse_stack_guard_gap);
+ 
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index c63656c42e288..42f64ed2be478 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -7402,10 +7402,17 @@ restart:
+ 
+ out2:
+ 	/* Align start of ZONE_MOVABLE on all nids to MAX_ORDER_NR_PAGES */
+-	for (nid = 0; nid < MAX_NUMNODES; nid++)
++	for (nid = 0; nid < MAX_NUMNODES; nid++) {
++		unsigned long start_pfn, end_pfn;
++
+ 		zone_movable_pfn[nid] =
+ 			roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
+ 
++		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
++		if (zone_movable_pfn[nid] >= end_pfn)
++			zone_movable_pfn[nid] = 0;
++	}
++
+ out:
+ 	/* restore the node_state */
+ 	node_states[N_MEMORY] = saved_node_state;
+diff --git a/mm/usercopy.c b/mm/usercopy.c
+index b3de3c4eefba7..540968b481e7e 100644
+--- a/mm/usercopy.c
++++ b/mm/usercopy.c
+@@ -294,7 +294,10 @@ static bool enable_checks __initdata = true;
+ 
+ static int __init parse_hardened_usercopy(char *str)
+ {
+-	return strtobool(str, &enable_checks);
++	if (strtobool(str, &enable_checks))
++		pr_warn("Invalid option string for hardened_usercopy: '%s'\n",
++			str);
++	return 1;
+ }
+ 
+ __setup("hardened_usercopy=", parse_hardened_usercopy);
+diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
+index ee9cead765450..986f707e7d973 100644
+--- a/net/batman-adv/bridge_loop_avoidance.c
++++ b/net/batman-adv/bridge_loop_avoidance.c
+@@ -164,6 +164,9 @@ static void batadv_backbone_gw_release(struct kref *ref)
+  */
+ static void batadv_backbone_gw_put(struct batadv_bla_backbone_gw *backbone_gw)
+ {
++	if (!backbone_gw)
++		return;
++
+ 	kref_put(&backbone_gw->refcount, batadv_backbone_gw_release);
+ }
+ 
+@@ -199,6 +202,9 @@ static void batadv_claim_release(struct kref *ref)
+  */
+ static void batadv_claim_put(struct batadv_bla_claim *claim)
+ {
++	if (!claim)
++		return;
++
+ 	kref_put(&claim->refcount, batadv_claim_release);
+ }
+ 
+diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c
+index 0e6e53e9b5f35..338e4e9c33b8a 100644
+--- a/net/batman-adv/distributed-arp-table.c
++++ b/net/batman-adv/distributed-arp-table.c
+@@ -128,6 +128,9 @@ static void batadv_dat_entry_release(struct kref *ref)
+  */
+ static void batadv_dat_entry_put(struct batadv_dat_entry *dat_entry)
+ {
++	if (!dat_entry)
++		return;
++
+ 	kref_put(&dat_entry->refcount, batadv_dat_entry_release);
+ }
+ 
+diff --git a/net/batman-adv/gateway_client.c b/net/batman-adv/gateway_client.c
+index ef3f85b576c4c..62f6f13f89ffd 100644
+--- a/net/batman-adv/gateway_client.c
++++ b/net/batman-adv/gateway_client.c
+@@ -60,7 +60,7 @@
+  *  after rcu grace period
+  * @ref: kref pointer of the gw_node
+  */
+-static void batadv_gw_node_release(struct kref *ref)
++void batadv_gw_node_release(struct kref *ref)
+ {
+ 	struct batadv_gw_node *gw_node;
+ 
+@@ -70,16 +70,6 @@ static void batadv_gw_node_release(struct kref *ref)
+ 	kfree_rcu(gw_node, rcu);
+ }
+ 
+-/**
+- * batadv_gw_node_put() - decrement the gw_node refcounter and possibly release
+- *  it
+- * @gw_node: gateway node to free
+- */
+-void batadv_gw_node_put(struct batadv_gw_node *gw_node)
+-{
+-	kref_put(&gw_node->refcount, batadv_gw_node_release);
+-}
+-
+ /**
+  * batadv_gw_get_selected_gw_node() - Get currently selected gateway
+  * @bat_priv: the bat priv with all the soft interface information
+diff --git a/net/batman-adv/gateway_client.h b/net/batman-adv/gateway_client.h
+index 88b5dba843547..c5b1de586fde0 100644
+--- a/net/batman-adv/gateway_client.h
++++ b/net/batman-adv/gateway_client.h
+@@ -9,6 +9,7 @@
+ 
+ #include "main.h"
+ 
++#include <linux/kref.h>
+ #include <linux/netlink.h>
+ #include <linux/seq_file.h>
+ #include <linux/skbuff.h>
+@@ -28,7 +29,7 @@ void batadv_gw_node_update(struct batadv_priv *bat_priv,
+ void batadv_gw_node_delete(struct batadv_priv *bat_priv,
+ 			   struct batadv_orig_node *orig_node);
+ void batadv_gw_node_free(struct batadv_priv *bat_priv);
+-void batadv_gw_node_put(struct batadv_gw_node *gw_node);
++void batadv_gw_node_release(struct kref *ref);
+ struct batadv_gw_node *
+ batadv_gw_get_selected_gw_node(struct batadv_priv *bat_priv);
+ int batadv_gw_client_seq_print_text(struct seq_file *seq, void *offset);
+@@ -40,4 +41,17 @@ batadv_gw_dhcp_recipient_get(struct sk_buff *skb, unsigned int *header_len,
+ struct batadv_gw_node *batadv_gw_node_get(struct batadv_priv *bat_priv,
+ 					  struct batadv_orig_node *orig_node);
+ 
++/**
++ * batadv_gw_node_put() - decrement the gw_node refcounter and possibly release
++ *  it
++ * @gw_node: gateway node to free
++ */
++static inline void batadv_gw_node_put(struct batadv_gw_node *gw_node)
++{
++	if (!gw_node)
++		return;
++
++	kref_put(&gw_node->refcount, batadv_gw_node_release);
++}
++
+ #endif /* _NET_BATMAN_ADV_GATEWAY_CLIENT_H_ */
+diff --git a/net/batman-adv/hard-interface.h b/net/batman-adv/hard-interface.h
+index b1855d9d0b062..ba5850cfb2774 100644
+--- a/net/batman-adv/hard-interface.h
++++ b/net/batman-adv/hard-interface.h
+@@ -113,6 +113,9 @@ int batadv_hardif_no_broadcast(struct batadv_hard_iface *if_outgoing,
+  */
+ static inline void batadv_hardif_put(struct batadv_hard_iface *hard_iface)
+ {
++	if (!hard_iface)
++		return;
++
+ 	kref_put(&hard_iface->refcount, batadv_hardif_release);
+ }
+ 
+diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
+index 35b3e03c07774..1481b80395689 100644
+--- a/net/batman-adv/network-coding.c
++++ b/net/batman-adv/network-coding.c
+@@ -222,6 +222,9 @@ static void batadv_nc_node_release(struct kref *ref)
+  */
+ static void batadv_nc_node_put(struct batadv_nc_node *nc_node)
+ {
++	if (!nc_node)
++		return;
++
+ 	kref_put(&nc_node->refcount, batadv_nc_node_release);
+ }
+ 
+@@ -246,6 +249,9 @@ static void batadv_nc_path_release(struct kref *ref)
+  */
+ static void batadv_nc_path_put(struct batadv_nc_path *nc_path)
+ {
++	if (!nc_path)
++		return;
++
+ 	kref_put(&nc_path->refcount, batadv_nc_path_release);
+ }
+ 
+diff --git a/net/batman-adv/originator.c b/net/batman-adv/originator.c
+index 805d8969bdfbc..2d38a09459bb5 100644
+--- a/net/batman-adv/originator.c
++++ b/net/batman-adv/originator.c
+@@ -178,7 +178,7 @@ out:
+  *  and queue for free after rcu grace period
+  * @ref: kref pointer of the originator-vlan object
+  */
+-static void batadv_orig_node_vlan_release(struct kref *ref)
++void batadv_orig_node_vlan_release(struct kref *ref)
+ {
+ 	struct batadv_orig_node_vlan *orig_vlan;
+ 
+@@ -187,16 +187,6 @@ static void batadv_orig_node_vlan_release(struct kref *ref)
+ 	kfree_rcu(orig_vlan, rcu);
+ }
+ 
+-/**
+- * batadv_orig_node_vlan_put() - decrement the refcounter and possibly release
+- *  the originator-vlan object
+- * @orig_vlan: the originator-vlan object to release
+- */
+-void batadv_orig_node_vlan_put(struct batadv_orig_node_vlan *orig_vlan)
+-{
+-	kref_put(&orig_vlan->refcount, batadv_orig_node_vlan_release);
+-}
+-
+ /**
+  * batadv_originator_init() - Initialize all originator structures
+  * @bat_priv: the bat priv with all the soft interface information
+@@ -232,7 +222,7 @@ err:
+  *  free after rcu grace period
+  * @ref: kref pointer of the neigh_ifinfo
+  */
+-static void batadv_neigh_ifinfo_release(struct kref *ref)
++void batadv_neigh_ifinfo_release(struct kref *ref)
+ {
+ 	struct batadv_neigh_ifinfo *neigh_ifinfo;
+ 
+@@ -244,22 +234,12 @@ static void batadv_neigh_ifinfo_release(struct kref *ref)
+ 	kfree_rcu(neigh_ifinfo, rcu);
+ }
+ 
+-/**
+- * batadv_neigh_ifinfo_put() - decrement the refcounter and possibly release
+- *  the neigh_ifinfo
+- * @neigh_ifinfo: the neigh_ifinfo object to release
+- */
+-void batadv_neigh_ifinfo_put(struct batadv_neigh_ifinfo *neigh_ifinfo)
+-{
+-	kref_put(&neigh_ifinfo->refcount, batadv_neigh_ifinfo_release);
+-}
+-
+ /**
+  * batadv_hardif_neigh_release() - release hardif neigh node from lists and
+  *  queue for free after rcu grace period
+  * @ref: kref pointer of the neigh_node
+  */
+-static void batadv_hardif_neigh_release(struct kref *ref)
++void batadv_hardif_neigh_release(struct kref *ref)
+ {
+ 	struct batadv_hardif_neigh_node *hardif_neigh;
+ 
+@@ -274,22 +254,12 @@ static void batadv_hardif_neigh_release(struct kref *ref)
+ 	kfree_rcu(hardif_neigh, rcu);
+ }
+ 
+-/**
+- * batadv_hardif_neigh_put() - decrement the hardif neighbors refcounter
+- *  and possibly release it
+- * @hardif_neigh: hardif neigh neighbor to free
+- */
+-void batadv_hardif_neigh_put(struct batadv_hardif_neigh_node *hardif_neigh)
+-{
+-	kref_put(&hardif_neigh->refcount, batadv_hardif_neigh_release);
+-}
+-
+ /**
+  * batadv_neigh_node_release() - release neigh_node from lists and queue for
+  *  free after rcu grace period
+  * @ref: kref pointer of the neigh_node
+  */
+-static void batadv_neigh_node_release(struct kref *ref)
++void batadv_neigh_node_release(struct kref *ref)
+ {
+ 	struct hlist_node *node_tmp;
+ 	struct batadv_neigh_node *neigh_node;
+@@ -309,16 +279,6 @@ static void batadv_neigh_node_release(struct kref *ref)
+ 	kfree_rcu(neigh_node, rcu);
+ }
+ 
+-/**
+- * batadv_neigh_node_put() - decrement the neighbors refcounter and possibly
+- *  release it
+- * @neigh_node: neigh neighbor to free
+- */
+-void batadv_neigh_node_put(struct batadv_neigh_node *neigh_node)
+-{
+-	kref_put(&neigh_node->refcount, batadv_neigh_node_release);
+-}
+-
+ /**
+  * batadv_orig_router_get() - router to the originator depending on iface
+  * @orig_node: the orig node for the router
+@@ -851,7 +811,7 @@ int batadv_hardif_neigh_dump(struct sk_buff *msg, struct netlink_callback *cb)
+  *  free after rcu grace period
+  * @ref: kref pointer of the orig_ifinfo
+  */
+-static void batadv_orig_ifinfo_release(struct kref *ref)
++void batadv_orig_ifinfo_release(struct kref *ref)
+ {
+ 	struct batadv_orig_ifinfo *orig_ifinfo;
+ 	struct batadv_neigh_node *router;
+@@ -869,16 +829,6 @@ static void batadv_orig_ifinfo_release(struct kref *ref)
+ 	kfree_rcu(orig_ifinfo, rcu);
+ }
+ 
+-/**
+- * batadv_orig_ifinfo_put() - decrement the refcounter and possibly release
+- *  the orig_ifinfo
+- * @orig_ifinfo: the orig_ifinfo object to release
+- */
+-void batadv_orig_ifinfo_put(struct batadv_orig_ifinfo *orig_ifinfo)
+-{
+-	kref_put(&orig_ifinfo->refcount, batadv_orig_ifinfo_release);
+-}
+-
+ /**
+  * batadv_orig_node_free_rcu() - free the orig_node
+  * @rcu: rcu pointer of the orig_node
+@@ -902,7 +852,7 @@ static void batadv_orig_node_free_rcu(struct rcu_head *rcu)
+  *  free after rcu grace period
+  * @ref: kref pointer of the orig_node
+  */
+-static void batadv_orig_node_release(struct kref *ref)
++void batadv_orig_node_release(struct kref *ref)
+ {
+ 	struct hlist_node *node_tmp;
+ 	struct batadv_neigh_node *neigh_node;
+@@ -948,16 +898,6 @@ static void batadv_orig_node_release(struct kref *ref)
+ 	call_rcu(&orig_node->rcu, batadv_orig_node_free_rcu);
+ }
+ 
+-/**
+- * batadv_orig_node_put() - decrement the orig node refcounter and possibly
+- *  release it
+- * @orig_node: the orig node to free
+- */
+-void batadv_orig_node_put(struct batadv_orig_node *orig_node)
+-{
+-	kref_put(&orig_node->refcount, batadv_orig_node_release);
+-}
+-
+ /**
+  * batadv_originator_free() - Free all originator structures
+  * @bat_priv: the bat priv with all the soft interface information
+diff --git a/net/batman-adv/originator.h b/net/batman-adv/originator.h
+index 7bc01c138b3ab..3b824a79743a2 100644
+--- a/net/batman-adv/originator.h
++++ b/net/batman-adv/originator.h
+@@ -12,6 +12,7 @@
+ #include <linux/compiler.h>
+ #include <linux/if_ether.h>
+ #include <linux/jhash.h>
++#include <linux/kref.h>
+ #include <linux/netlink.h>
+ #include <linux/seq_file.h>
+ #include <linux/skbuff.h>
+@@ -21,19 +22,18 @@ bool batadv_compare_orig(const struct hlist_node *node, const void *data2);
+ int batadv_originator_init(struct batadv_priv *bat_priv);
+ void batadv_originator_free(struct batadv_priv *bat_priv);
+ void batadv_purge_orig_ref(struct batadv_priv *bat_priv);
+-void batadv_orig_node_put(struct batadv_orig_node *orig_node);
++void batadv_orig_node_release(struct kref *ref);
+ struct batadv_orig_node *batadv_orig_node_new(struct batadv_priv *bat_priv,
+ 					      const u8 *addr);
+ struct batadv_hardif_neigh_node *
+ batadv_hardif_neigh_get(const struct batadv_hard_iface *hard_iface,
+ 			const u8 *neigh_addr);
+-void
+-batadv_hardif_neigh_put(struct batadv_hardif_neigh_node *hardif_neigh);
++void batadv_hardif_neigh_release(struct kref *ref);
+ struct batadv_neigh_node *
+ batadv_neigh_node_get_or_create(struct batadv_orig_node *orig_node,
+ 				struct batadv_hard_iface *hard_iface,
+ 				const u8 *neigh_addr);
+-void batadv_neigh_node_put(struct batadv_neigh_node *neigh_node);
++void batadv_neigh_node_release(struct kref *ref);
+ struct batadv_neigh_node *
+ batadv_orig_router_get(struct batadv_orig_node *orig_node,
+ 		       const struct batadv_hard_iface *if_outgoing);
+@@ -43,7 +43,7 @@ batadv_neigh_ifinfo_new(struct batadv_neigh_node *neigh,
+ struct batadv_neigh_ifinfo *
+ batadv_neigh_ifinfo_get(struct batadv_neigh_node *neigh,
+ 			struct batadv_hard_iface *if_outgoing);
+-void batadv_neigh_ifinfo_put(struct batadv_neigh_ifinfo *neigh_ifinfo);
++void batadv_neigh_ifinfo_release(struct kref *ref);
+ 
+ int batadv_hardif_neigh_dump(struct sk_buff *msg, struct netlink_callback *cb);
+ int batadv_hardif_neigh_seq_print_text(struct seq_file *seq, void *offset);
+@@ -54,7 +54,7 @@ batadv_orig_ifinfo_get(struct batadv_orig_node *orig_node,
+ struct batadv_orig_ifinfo *
+ batadv_orig_ifinfo_new(struct batadv_orig_node *orig_node,
+ 		       struct batadv_hard_iface *if_outgoing);
+-void batadv_orig_ifinfo_put(struct batadv_orig_ifinfo *orig_ifinfo);
++void batadv_orig_ifinfo_release(struct kref *ref);
+ 
+ int batadv_orig_seq_print_text(struct seq_file *seq, void *offset);
+ int batadv_orig_dump(struct sk_buff *msg, struct netlink_callback *cb);
+@@ -65,7 +65,7 @@ batadv_orig_node_vlan_new(struct batadv_orig_node *orig_node,
+ struct batadv_orig_node_vlan *
+ batadv_orig_node_vlan_get(struct batadv_orig_node *orig_node,
+ 			  unsigned short vid);
+-void batadv_orig_node_vlan_put(struct batadv_orig_node_vlan *orig_vlan);
++void batadv_orig_node_vlan_release(struct kref *ref);
+ 
+ /**
+  * batadv_choose_orig() - Return the index of the orig entry in the hash table
+@@ -86,4 +86,86 @@ static inline u32 batadv_choose_orig(const void *data, u32 size)
+ struct batadv_orig_node *
+ batadv_orig_hash_find(struct batadv_priv *bat_priv, const void *data);
+ 
++/**
++ * batadv_orig_node_vlan_put() - decrement the refcounter and possibly release
++ *  the originator-vlan object
++ * @orig_vlan: the originator-vlan object to release
++ */
++static inline void
++batadv_orig_node_vlan_put(struct batadv_orig_node_vlan *orig_vlan)
++{
++	if (!orig_vlan)
++		return;
++
++	kref_put(&orig_vlan->refcount, batadv_orig_node_vlan_release);
++}
++
++/**
++ * batadv_neigh_ifinfo_put() - decrement the refcounter and possibly release
++ *  the neigh_ifinfo
++ * @neigh_ifinfo: the neigh_ifinfo object to release
++ */
++static inline void
++batadv_neigh_ifinfo_put(struct batadv_neigh_ifinfo *neigh_ifinfo)
++{
++	if (!neigh_ifinfo)
++		return;
++
++	kref_put(&neigh_ifinfo->refcount, batadv_neigh_ifinfo_release);
++}
++
++/**
++ * batadv_hardif_neigh_put() - decrement the hardif neighbors refcounter
++ *  and possibly release it
++ * @hardif_neigh: hardif neigh neighbor to free
++ */
++static inline void
++batadv_hardif_neigh_put(struct batadv_hardif_neigh_node *hardif_neigh)
++{
++	if (!hardif_neigh)
++		return;
++
++	kref_put(&hardif_neigh->refcount, batadv_hardif_neigh_release);
++}
++
++/**
++ * batadv_neigh_node_put() - decrement the neighbors refcounter and possibly
++ *  release it
++ * @neigh_node: neigh neighbor to free
++ */
++static inline void batadv_neigh_node_put(struct batadv_neigh_node *neigh_node)
++{
++	if (!neigh_node)
++		return;
++
++	kref_put(&neigh_node->refcount, batadv_neigh_node_release);
++}
++
++/**
++ * batadv_orig_ifinfo_put() - decrement the refcounter and possibly release
++ *  the orig_ifinfo
++ * @orig_ifinfo: the orig_ifinfo object to release
++ */
++static inline void
++batadv_orig_ifinfo_put(struct batadv_orig_ifinfo *orig_ifinfo)
++{
++	if (!orig_ifinfo)
++		return;
++
++	kref_put(&orig_ifinfo->refcount, batadv_orig_ifinfo_release);
++}
++
++/**
++ * batadv_orig_node_put() - decrement the orig node refcounter and possibly
++ *  release it
++ * @orig_node: the orig node to free
++ */
++static inline void batadv_orig_node_put(struct batadv_orig_node *orig_node)
++{
++	if (!orig_node)
++		return;
++
++	kref_put(&orig_node->refcount, batadv_orig_node_release);
++}
++
+ #endif /* _NET_BATMAN_ADV_ORIGINATOR_H_ */
+diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
+index 7496047b318a4..8f7c778255fba 100644
+--- a/net/batman-adv/soft-interface.c
++++ b/net/batman-adv/soft-interface.c
+@@ -512,7 +512,7 @@ out:
+  *  after rcu grace period
+  * @ref: kref pointer of the vlan object
+  */
+-static void batadv_softif_vlan_release(struct kref *ref)
++void batadv_softif_vlan_release(struct kref *ref)
+ {
+ 	struct batadv_softif_vlan *vlan;
+ 
+@@ -525,19 +525,6 @@ static void batadv_softif_vlan_release(struct kref *ref)
+ 	kfree_rcu(vlan, rcu);
+ }
+ 
+-/**
+- * batadv_softif_vlan_put() - decrease the vlan object refcounter and
+- *  possibly release it
+- * @vlan: the vlan object to release
+- */
+-void batadv_softif_vlan_put(struct batadv_softif_vlan *vlan)
+-{
+-	if (!vlan)
+-		return;
+-
+-	kref_put(&vlan->refcount, batadv_softif_vlan_release);
+-}
+-
+ /**
+  * batadv_softif_vlan_get() - get the vlan object for a specific vid
+  * @bat_priv: the bat priv with all the soft interface information
+diff --git a/net/batman-adv/soft-interface.h b/net/batman-adv/soft-interface.h
+index 534e08d6ad919..53aba17b90688 100644
+--- a/net/batman-adv/soft-interface.h
++++ b/net/batman-adv/soft-interface.h
+@@ -9,6 +9,7 @@
+ 
+ #include "main.h"
+ 
++#include <linux/kref.h>
+ #include <linux/netdevice.h>
+ #include <linux/skbuff.h>
+ #include <linux/types.h>
+@@ -24,8 +25,21 @@ void batadv_softif_destroy_sysfs(struct net_device *soft_iface);
+ bool batadv_softif_is_valid(const struct net_device *net_dev);
+ extern struct rtnl_link_ops batadv_link_ops;
+ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid);
+-void batadv_softif_vlan_put(struct batadv_softif_vlan *softif_vlan);
++void batadv_softif_vlan_release(struct kref *ref);
+ struct batadv_softif_vlan *batadv_softif_vlan_get(struct batadv_priv *bat_priv,
+ 						  unsigned short vid);
+ 
++/**
++ * batadv_softif_vlan_put() - decrease the vlan object refcounter and
++ *  possibly release it
++ * @vlan: the vlan object to release
++ */
++static inline void batadv_softif_vlan_put(struct batadv_softif_vlan *vlan)
++{
++	if (!vlan)
++		return;
++
++	kref_put(&vlan->refcount, batadv_softif_vlan_release);
++}
++
+ #endif /* _NET_BATMAN_ADV_SOFT_INTERFACE_H_ */
+diff --git a/net/batman-adv/tp_meter.c b/net/batman-adv/tp_meter.c
+index db7e3774825b5..00d62a6c5e0ef 100644
+--- a/net/batman-adv/tp_meter.c
++++ b/net/batman-adv/tp_meter.c
+@@ -357,6 +357,9 @@ static void batadv_tp_vars_release(struct kref *ref)
+  */
+ static void batadv_tp_vars_put(struct batadv_tp_vars *tp_vars)
+ {
++	if (!tp_vars)
++		return;
++
+ 	kref_put(&tp_vars->refcount, batadv_tp_vars_release);
+ }
+ 
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index de946ea8f13c8..5f990a2061072 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -248,6 +248,9 @@ static void batadv_tt_local_entry_release(struct kref *ref)
+ static void
+ batadv_tt_local_entry_put(struct batadv_tt_local_entry *tt_local_entry)
+ {
++	if (!tt_local_entry)
++		return;
++
+ 	kref_put(&tt_local_entry->common.refcount,
+ 		 batadv_tt_local_entry_release);
+ }
+@@ -271,7 +274,7 @@ static void batadv_tt_global_entry_free_rcu(struct rcu_head *rcu)
+  *  queue for free after rcu grace period
+  * @ref: kref pointer of the nc_node
+  */
+-static void batadv_tt_global_entry_release(struct kref *ref)
++void batadv_tt_global_entry_release(struct kref *ref)
+ {
+ 	struct batadv_tt_global_entry *tt_global_entry;
+ 
+@@ -283,17 +286,6 @@ static void batadv_tt_global_entry_release(struct kref *ref)
+ 	call_rcu(&tt_global_entry->common.rcu, batadv_tt_global_entry_free_rcu);
+ }
+ 
+-/**
+- * batadv_tt_global_entry_put() - decrement the tt_global_entry refcounter and
+- *  possibly release it
+- * @tt_global_entry: tt_global_entry to be free'd
+- */
+-void batadv_tt_global_entry_put(struct batadv_tt_global_entry *tt_global_entry)
+-{
+-	kref_put(&tt_global_entry->common.refcount,
+-		 batadv_tt_global_entry_release);
+-}
+-
+ /**
+  * batadv_tt_global_hash_count() - count the number of orig entries
+  * @bat_priv: the bat priv with all the soft interface information
+@@ -453,6 +445,9 @@ static void batadv_tt_orig_list_entry_release(struct kref *ref)
+ static void
+ batadv_tt_orig_list_entry_put(struct batadv_tt_orig_list_entry *orig_entry)
+ {
++	if (!orig_entry)
++		return;
++
+ 	kref_put(&orig_entry->refcount, batadv_tt_orig_list_entry_release);
+ }
+ 
+@@ -2818,6 +2813,9 @@ static void batadv_tt_req_node_release(struct kref *ref)
+  */
+ static void batadv_tt_req_node_put(struct batadv_tt_req_node *tt_req_node)
+ {
++	if (!tt_req_node)
++		return;
++
+ 	kref_put(&tt_req_node->refcount, batadv_tt_req_node_release);
+ }
+ 
+diff --git a/net/batman-adv/translation-table.h b/net/batman-adv/translation-table.h
+index b24d35b9226a1..63cc8fd3ff66a 100644
+--- a/net/batman-adv/translation-table.h
++++ b/net/batman-adv/translation-table.h
+@@ -9,6 +9,7 @@
+ 
+ #include "main.h"
+ 
++#include <linux/kref.h>
+ #include <linux/netdevice.h>
+ #include <linux/netlink.h>
+ #include <linux/seq_file.h>
+@@ -31,7 +32,7 @@ void batadv_tt_global_del_orig(struct batadv_priv *bat_priv,
+ struct batadv_tt_global_entry *
+ batadv_tt_global_hash_find(struct batadv_priv *bat_priv, const u8 *addr,
+ 			   unsigned short vid);
+-void batadv_tt_global_entry_put(struct batadv_tt_global_entry *tt_global_entry);
++void batadv_tt_global_entry_release(struct kref *ref);
+ int batadv_tt_global_hash_count(struct batadv_priv *bat_priv,
+ 				const u8 *addr, unsigned short vid);
+ struct batadv_orig_node *batadv_transtable_search(struct batadv_priv *bat_priv,
+@@ -58,4 +59,19 @@ bool batadv_tt_global_is_isolated(struct batadv_priv *bat_priv,
+ int batadv_tt_cache_init(void);
+ void batadv_tt_cache_destroy(void);
+ 
++/**
++ * batadv_tt_global_entry_put() - decrement the tt_global_entry refcounter and
++ *  possibly release it
++ * @tt_global_entry: tt_global_entry to be free'd
++ */
++static inline void
++batadv_tt_global_entry_put(struct batadv_tt_global_entry *tt_global_entry)
++{
++	if (!tt_global_entry)
++		return;
++
++	kref_put(&tt_global_entry->common.refcount,
++		 batadv_tt_global_entry_release);
++}
++
+ #endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */
+diff --git a/net/batman-adv/tvlv.c b/net/batman-adv/tvlv.c
+index 6a23a566cde17..99fc48efde543 100644
+--- a/net/batman-adv/tvlv.c
++++ b/net/batman-adv/tvlv.c
+@@ -50,6 +50,9 @@ static void batadv_tvlv_handler_release(struct kref *ref)
+  */
+ static void batadv_tvlv_handler_put(struct batadv_tvlv_handler *tvlv_handler)
+ {
++	if (!tvlv_handler)
++		return;
++
+ 	kref_put(&tvlv_handler->refcount, batadv_tvlv_handler_release);
+ }
+ 
+@@ -106,6 +109,9 @@ static void batadv_tvlv_container_release(struct kref *ref)
+  */
+ static void batadv_tvlv_container_put(struct batadv_tvlv_container *tvlv)
+ {
++	if (!tvlv)
++		return;
++
+ 	kref_put(&tvlv->refcount, batadv_tvlv_container_release);
+ }
+ 
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 1c5a0a60292d2..ecd2ffcf2ba28 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -508,7 +508,9 @@ static void le_conn_timeout(struct work_struct *work)
+ 	if (conn->role == HCI_ROLE_SLAVE) {
+ 		/* Disable LE Advertising */
+ 		le_disable_advertising(hdev);
++		hci_dev_lock(hdev);
+ 		hci_le_conn_failed(conn, HCI_ERROR_ADVERTISING_TIMEOUT);
++		hci_dev_unlock(hdev);
+ 		return;
+ 	}
+ 
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index d0581dc6a65fd..63e6e8923200b 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1003,26 +1003,29 @@ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct sk_buff *skb;
+-	int err = 0;
+-	int noblock;
++	struct isotp_sock *so = isotp_sk(sk);
++	int noblock = flags & MSG_DONTWAIT;
++	int ret = 0;
+ 
+-	noblock = flags & MSG_DONTWAIT;
+-	flags &= ~MSG_DONTWAIT;
++	if (flags & ~(MSG_DONTWAIT | MSG_TRUNC | MSG_PEEK))
++		return -EINVAL;
+ 
+-	skb = skb_recv_datagram(sk, flags, noblock, &err);
++	if (!so->bound)
++		return -EADDRNOTAVAIL;
++
++	flags &= ~MSG_DONTWAIT;
++	skb = skb_recv_datagram(sk, flags, noblock, &ret);
+ 	if (!skb)
+-		return err;
++		return ret;
+ 
+ 	if (size < skb->len)
+ 		msg->msg_flags |= MSG_TRUNC;
+ 	else
+ 		size = skb->len;
+ 
+-	err = memcpy_to_msg(msg, skb->data, size);
+-	if (err < 0) {
+-		skb_free_datagram(sk, skb);
+-		return err;
+-	}
++	ret = memcpy_to_msg(msg, skb->data, size);
++	if (ret < 0)
++		goto out_err;
+ 
+ 	sock_recv_timestamp(msg, sk, skb);
+ 
+@@ -1032,9 +1035,13 @@ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 		memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
+ 	}
+ 
++	/* set length of return value */
++	ret = (flags & MSG_TRUNC) ? skb->len : size;
++
++out_err:
+ 	skb_free_datagram(sk, skb);
+ 
+-	return size;
++	return ret;
+ }
+ 
+ static int isotp_release(struct socket *sock)
+@@ -1102,6 +1109,7 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 	struct net *net = sock_net(sk);
+ 	int ifindex;
+ 	struct net_device *dev;
++	canid_t tx_id, rx_id;
+ 	int err = 0;
+ 	int notify_enetdown = 0;
+ 	int do_rx_reg = 1;
+@@ -1109,8 +1117,18 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 	if (len < ISOTP_MIN_NAMELEN)
+ 		return -EINVAL;
+ 
+-	if (addr->can_addr.tp.tx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG))
+-		return -EADDRNOTAVAIL;
++	/* sanitize tx/rx CAN identifiers */
++	tx_id = addr->can_addr.tp.tx_id;
++	if (tx_id & CAN_EFF_FLAG)
++		tx_id &= (CAN_EFF_FLAG | CAN_EFF_MASK);
++	else
++		tx_id &= CAN_SFF_MASK;
++
++	rx_id = addr->can_addr.tp.rx_id;
++	if (rx_id & CAN_EFF_FLAG)
++		rx_id &= (CAN_EFF_FLAG | CAN_EFF_MASK);
++	else
++		rx_id &= CAN_SFF_MASK;
+ 
+ 	if (!addr->can_ifindex)
+ 		return -ENODEV;
+@@ -1122,21 +1140,13 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 		do_rx_reg = 0;
+ 
+ 	/* do not validate rx address for functional addressing */
+-	if (do_rx_reg) {
+-		if (addr->can_addr.tp.rx_id == addr->can_addr.tp.tx_id) {
+-			err = -EADDRNOTAVAIL;
+-			goto out;
+-		}
+-
+-		if (addr->can_addr.tp.rx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG)) {
+-			err = -EADDRNOTAVAIL;
+-			goto out;
+-		}
++	if (do_rx_reg && rx_id == tx_id) {
++		err = -EADDRNOTAVAIL;
++		goto out;
+ 	}
+ 
+ 	if (so->bound && addr->can_ifindex == so->ifindex &&
+-	    addr->can_addr.tp.rx_id == so->rxid &&
+-	    addr->can_addr.tp.tx_id == so->txid)
++	    rx_id == so->rxid && tx_id == so->txid)
+ 		goto out;
+ 
+ 	dev = dev_get_by_index(net, addr->can_ifindex);
+@@ -1160,8 +1170,7 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 	ifindex = dev->ifindex;
+ 
+ 	if (do_rx_reg)
+-		can_rx_register(net, dev, addr->can_addr.tp.rx_id,
+-				SINGLE_MASK(addr->can_addr.tp.rx_id),
++		can_rx_register(net, dev, rx_id, SINGLE_MASK(rx_id),
+ 				isotp_rcv, sk, "isotp", sk);
+ 
+ 	dev_put(dev);
+@@ -1181,8 +1190,8 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 
+ 	/* switch to new settings */
+ 	so->ifindex = ifindex;
+-	so->rxid = addr->can_addr.tp.rx_id;
+-	so->txid = addr->can_addr.tp.tx_id;
++	so->rxid = rx_id;
++	so->txid = tx_id;
+ 	so->bound = 1;
+ 
+ out:
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index e4bb89599b44b..545181a1ae043 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -27,6 +27,7 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
+ 		 int elem_first_coalesce)
+ {
+ 	struct page_frag *pfrag = sk_page_frag(sk);
++	u32 osize = msg->sg.size;
+ 	int ret = 0;
+ 
+ 	len -= msg->sg.size;
+@@ -35,13 +36,17 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
+ 		u32 orig_offset;
+ 		int use, i;
+ 
+-		if (!sk_page_frag_refill(sk, pfrag))
+-			return -ENOMEM;
++		if (!sk_page_frag_refill(sk, pfrag)) {
++			ret = -ENOMEM;
++			goto msg_trim;
++		}
+ 
+ 		orig_offset = pfrag->offset;
+ 		use = min_t(int, len, pfrag->size - orig_offset);
+-		if (!sk_wmem_schedule(sk, use))
+-			return -ENOMEM;
++		if (!sk_wmem_schedule(sk, use)) {
++			ret = -ENOMEM;
++			goto msg_trim;
++		}
+ 
+ 		i = msg->sg.end;
+ 		sk_msg_iter_var_prev(i);
+@@ -71,6 +76,10 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
+ 	}
+ 
+ 	return ret;
++
++msg_trim:
++	sk_msg_trim(sk, msg, osize);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(sk_msg_alloc);
+ 
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index ce787c3867938..c72d0de8bf714 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -529,6 +529,15 @@ void __ip_select_ident(struct net *net, struct iphdr *iph, int segs)
+ }
+ EXPORT_SYMBOL(__ip_select_ident);
+ 
++static void ip_rt_fix_tos(struct flowi4 *fl4)
++{
++	__u8 tos = RT_FL_TOS(fl4);
++
++	fl4->flowi4_tos = tos & IPTOS_RT_MASK;
++	fl4->flowi4_scope = tos & RTO_ONLINK ?
++			    RT_SCOPE_LINK : RT_SCOPE_UNIVERSE;
++}
++
+ static void __build_flow_key(const struct net *net, struct flowi4 *fl4,
+ 			     const struct sock *sk,
+ 			     const struct iphdr *iph,
+@@ -853,6 +862,7 @@ static void ip_do_redirect(struct dst_entry *dst, struct sock *sk, struct sk_buf
+ 	rt = (struct rtable *) dst;
+ 
+ 	__build_flow_key(net, &fl4, sk, iph, oif, tos, prot, mark, 0);
++	ip_rt_fix_tos(&fl4);
+ 	__ip_do_redirect(rt, skb, &fl4, true);
+ }
+ 
+@@ -1077,6 +1087,7 @@ static void ip_rt_update_pmtu(struct dst_entry *dst, struct sock *sk,
+ 	struct flowi4 fl4;
+ 
+ 	ip_rt_build_flow_key(&fl4, sk, skb);
++	ip_rt_fix_tos(&fl4);
+ 
+ 	/* Don't make lookup fail for bridged encapsulations */
+ 	if (skb && netif_is_any_bridge_port(skb->dev))
+@@ -1151,6 +1162,8 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
+ 			goto out;
+ 
+ 		new = true;
++	} else {
++		ip_rt_fix_tos(&fl4);
+ 	}
+ 
+ 	__ip_rt_update_pmtu((struct rtable *)xfrm_dst_path(&rt->dst), &fl4, mtu);
+@@ -2524,7 +2537,6 @@ add:
+ struct rtable *ip_route_output_key_hash(struct net *net, struct flowi4 *fl4,
+ 					const struct sk_buff *skb)
+ {
+-	__u8 tos = RT_FL_TOS(fl4);
+ 	struct fib_result res = {
+ 		.type		= RTN_UNSPEC,
+ 		.fi		= NULL,
+@@ -2534,9 +2546,7 @@ struct rtable *ip_route_output_key_hash(struct net *net, struct flowi4 *fl4,
+ 	struct rtable *rth;
+ 
+ 	fl4->flowi4_iif = LOOPBACK_IFINDEX;
+-	fl4->flowi4_tos = tos & IPTOS_RT_MASK;
+-	fl4->flowi4_scope = ((tos & RTO_ONLINK) ?
+-			 RT_SCOPE_LINK : RT_SCOPE_UNIVERSE);
++	ip_rt_fix_tos(fl4);
+ 
+ 	rcu_read_lock();
+ 	rth = ip_route_output_key_hash_rcu(net, fl4, &res, skb);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 6b745ce4108c8..eaf2308c355a6 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -218,10 +218,9 @@ int tcp_bpf_sendmsg_redir(struct sock *sk, struct sk_msg *msg,
+ 	struct sk_psock *psock = sk_psock_get(sk);
+ 	int ret;
+ 
+-	if (unlikely(!psock)) {
+-		sk_msg_free(sk, msg);
+-		return 0;
+-	}
++	if (unlikely(!psock))
++		return -EPIPE;
++
+ 	ret = ingress ? bpf_tcp_ingress(sk, psock, msg, bytes, flags) :
+ 			tcp_bpf_push_locked(sk, msg, bytes, flags, false);
+ 	sk_psock_put(sk, psock);
+@@ -371,7 +370,7 @@ more_data:
+ 			cork = true;
+ 			psock->cork = NULL;
+ 		}
+-		sk_msg_return(sk, msg, tosend);
++		sk_msg_return(sk, msg, msg->sg.size);
+ 		release_sock(sk);
+ 
+ 		ret = tcp_bpf_sendmsg_redir(sk_redir, msg, tosend, flags);
+@@ -411,8 +410,11 @@ more_data:
+ 		}
+ 		if (msg &&
+ 		    msg->sg.data[msg->sg.start].page_link &&
+-		    msg->sg.data[msg->sg.start].length)
++		    msg->sg.data[msg->sg.start].length) {
++			if (eval == __SK_REDIRECT)
++				sk_mem_charge(sk, msg->sg.size);
+ 			goto more_data;
++		}
+ 	}
+ 	return ret;
+ }
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 19ef4577b70d6..ce9987e6ff252 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -3733,6 +3733,7 @@ static void tcp_connect_queue_skb(struct sock *sk, struct sk_buff *skb)
+  */
+ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
+ {
++	struct inet_connection_sock *icsk = inet_csk(sk);
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	struct tcp_fastopen_request *fo = tp->fastopen_req;
+ 	int space, err = 0;
+@@ -3747,8 +3748,10 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
+ 	 * private TCP options. The cost is reduced data space in SYN :(
+ 	 */
+ 	tp->rx_opt.mss_clamp = tcp_mss_clamp(tp, tp->rx_opt.mss_clamp);
++	/* Sync mss_cache after updating the mss_clamp */
++	tcp_sync_mss(sk, icsk->icsk_pmtu_cookie);
+ 
+-	space = __tcp_mtu_to_mss(sk, inet_csk(sk)->icsk_pmtu_cookie) -
++	space = __tcp_mtu_to_mss(sk, icsk->icsk_pmtu_cookie) -
+ 		MAX_TCP_OPTION_SPACE;
+ 
+ 	space = min_t(size_t, space, fo->size);
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index ef2068a60d4ad..e97a2dd206e14 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -598,6 +598,12 @@ void udp_encap_enable(void)
+ }
+ EXPORT_SYMBOL(udp_encap_enable);
+ 
++void udp_encap_disable(void)
++{
++	static_branch_dec(&udp_encap_needed_key);
++}
++EXPORT_SYMBOL(udp_encap_disable);
++
+ /* Handler for tunnels with arbitrary destination ports: no socket lookup, go
+  * through error handlers in encapsulations looking for a match.
+  */
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 069551a04369e..10760164a80f4 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1610,8 +1610,10 @@ void udpv6_destroy_sock(struct sock *sk)
+ 			if (encap_destroy)
+ 				encap_destroy(sk);
+ 		}
+-		if (up->encap_enabled)
++		if (up->encap_enabled) {
+ 			static_branch_dec(&udpv6_encap_needed_key);
++			udp_encap_disable();
++		}
+ 	}
+ 
+ 	inet6_destroy_sock(sk);
+diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
+index 6abb45a671994..ee349c2438782 100644
+--- a/net/ipv6/xfrm6_output.c
++++ b/net/ipv6/xfrm6_output.c
+@@ -52,6 +52,19 @@ static int __xfrm6_output_finish(struct net *net, struct sock *sk, struct sk_buf
+ 	return xfrm_output(sk, skb);
+ }
+ 
++static int xfrm6_noneed_fragment(struct sk_buff *skb)
++{
++	struct frag_hdr *fh;
++	u8 prevhdr = ipv6_hdr(skb)->nexthdr;
++
++	if (prevhdr != NEXTHDR_FRAGMENT)
++		return 0;
++	fh = (struct frag_hdr *)(skb->data + sizeof(struct ipv6hdr));
++	if (fh->nexthdr == NEXTHDR_ESP || fh->nexthdr == NEXTHDR_AUTH)
++		return 1;
++	return 0;
++}
++
+ static int __xfrm6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct dst_entry *dst = skb_dst(skb);
+@@ -80,6 +93,9 @@ static int __xfrm6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 		xfrm6_local_rxpmtu(skb, mtu);
+ 		kfree_skb(skb);
+ 		return -EMSGSIZE;
++	} else if (toobig && xfrm6_noneed_fragment(skb)) {
++		skb->ignore_df = 1;
++		goto skip_frag;
+ 	} else if (!skb->ignore_df && toobig && skb->sk) {
+ 		xfrm_local_error(skb, mtu);
+ 		kfree_skb(skb);
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index d1364b858fdf0..bd9b5c573b5a4 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -1703,7 +1703,7 @@ static int pfkey_register(struct sock *sk, struct sk_buff *skb, const struct sad
+ 
+ 	xfrm_probe_algs();
+ 
+-	supp_skb = compose_sadb_supported(hdr, GFP_KERNEL);
++	supp_skb = compose_sadb_supported(hdr, GFP_KERNEL | __GFP_ZERO);
+ 	if (!supp_skb) {
+ 		if (hdr->sadb_msg_satype != SADB_SATYPE_UNSPEC)
+ 			pfk->registered &= ~(1<<hdr->sadb_msg_satype);
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index c8fb2187ad4b2..3f785bdfa942d 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -354,8 +354,8 @@ static void tcp_options(const struct sk_buff *skb,
+ 				 length, buff);
+ 	BUG_ON(ptr == NULL);
+ 
+-	state->td_scale =
+-	state->flags = 0;
++	state->td_scale = 0;
++	state->flags &= IP_CT_TCP_FLAG_BE_LIBERAL;
+ 
+ 	while (length > 0) {
+ 		int opcode=*ptr++;
+@@ -840,6 +840,16 @@ static bool nf_conntrack_tcp_established(const struct nf_conn *ct)
+ 	       test_bit(IPS_ASSURED_BIT, &ct->status);
+ }
+ 
++static void nf_ct_tcp_state_reset(struct ip_ct_tcp_state *state)
++{
++	state->td_end		= 0;
++	state->td_maxend	= 0;
++	state->td_maxwin	= 0;
++	state->td_maxack	= 0;
++	state->td_scale		= 0;
++	state->flags		&= IP_CT_TCP_FLAG_BE_LIBERAL;
++}
++
+ /* Returns verdict for packet, or -1 for invalid. */
+ int nf_conntrack_tcp_packet(struct nf_conn *ct,
+ 			    struct sk_buff *skb,
+@@ -946,8 +956,7 @@ int nf_conntrack_tcp_packet(struct nf_conn *ct,
+ 			ct->proto.tcp.last_flags &= ~IP_CT_EXP_CHALLENGE_ACK;
+ 			ct->proto.tcp.seen[ct->proto.tcp.last_dir].flags =
+ 				ct->proto.tcp.last_flags;
+-			memset(&ct->proto.tcp.seen[dir], 0,
+-			       sizeof(struct ip_ct_tcp_state));
++			nf_ct_tcp_state_reset(&ct->proto.tcp.seen[dir]);
+ 			break;
+ 		}
+ 		ct->proto.tcp.last_index = index;
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index e55af5c078ac0..f37916156ca52 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -149,6 +149,8 @@ static const struct rhashtable_params netlink_rhashtable_params;
+ 
+ static inline u32 netlink_group_mask(u32 group)
+ {
++	if (group > 32)
++		return 0;
+ 	return group ? 1 << (group - 1) : 0;
+ }
+ 
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index a11b558813c10..7ff98d39ec942 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -730,6 +730,57 @@ static bool skb_nfct_cached(struct net *net,
+ }
+ 
+ #if IS_ENABLED(CONFIG_NF_NAT)
++static void ovs_nat_update_key(struct sw_flow_key *key,
++			       const struct sk_buff *skb,
++			       enum nf_nat_manip_type maniptype)
++{
++	if (maniptype == NF_NAT_MANIP_SRC) {
++		__be16 src;
++
++		key->ct_state |= OVS_CS_F_SRC_NAT;
++		if (key->eth.type == htons(ETH_P_IP))
++			key->ipv4.addr.src = ip_hdr(skb)->saddr;
++		else if (key->eth.type == htons(ETH_P_IPV6))
++			memcpy(&key->ipv6.addr.src, &ipv6_hdr(skb)->saddr,
++			       sizeof(key->ipv6.addr.src));
++		else
++			return;
++
++		if (key->ip.proto == IPPROTO_UDP)
++			src = udp_hdr(skb)->source;
++		else if (key->ip.proto == IPPROTO_TCP)
++			src = tcp_hdr(skb)->source;
++		else if (key->ip.proto == IPPROTO_SCTP)
++			src = sctp_hdr(skb)->source;
++		else
++			return;
++
++		key->tp.src = src;
++	} else {
++		__be16 dst;
++
++		key->ct_state |= OVS_CS_F_DST_NAT;
++		if (key->eth.type == htons(ETH_P_IP))
++			key->ipv4.addr.dst = ip_hdr(skb)->daddr;
++		else if (key->eth.type == htons(ETH_P_IPV6))
++			memcpy(&key->ipv6.addr.dst, &ipv6_hdr(skb)->daddr,
++			       sizeof(key->ipv6.addr.dst));
++		else
++			return;
++
++		if (key->ip.proto == IPPROTO_UDP)
++			dst = udp_hdr(skb)->dest;
++		else if (key->ip.proto == IPPROTO_TCP)
++			dst = tcp_hdr(skb)->dest;
++		else if (key->ip.proto == IPPROTO_SCTP)
++			dst = sctp_hdr(skb)->dest;
++		else
++			return;
++
++		key->tp.dst = dst;
++	}
++}
++
+ /* Modelled after nf_nat_ipv[46]_fn().
+  * range is only used for new, uninitialized NAT state.
+  * Returns either NF_ACCEPT or NF_DROP.
+@@ -737,7 +788,7 @@ static bool skb_nfct_cached(struct net *net,
+ static int ovs_ct_nat_execute(struct sk_buff *skb, struct nf_conn *ct,
+ 			      enum ip_conntrack_info ctinfo,
+ 			      const struct nf_nat_range2 *range,
+-			      enum nf_nat_manip_type maniptype)
++			      enum nf_nat_manip_type maniptype, struct sw_flow_key *key)
+ {
+ 	int hooknum, nh_off, err = NF_ACCEPT;
+ 
+@@ -810,58 +861,11 @@ push:
+ 	skb_push(skb, nh_off);
+ 	skb_postpush_rcsum(skb, skb->data, nh_off);
+ 
+-	return err;
+-}
+-
+-static void ovs_nat_update_key(struct sw_flow_key *key,
+-			       const struct sk_buff *skb,
+-			       enum nf_nat_manip_type maniptype)
+-{
+-	if (maniptype == NF_NAT_MANIP_SRC) {
+-		__be16 src;
+-
+-		key->ct_state |= OVS_CS_F_SRC_NAT;
+-		if (key->eth.type == htons(ETH_P_IP))
+-			key->ipv4.addr.src = ip_hdr(skb)->saddr;
+-		else if (key->eth.type == htons(ETH_P_IPV6))
+-			memcpy(&key->ipv6.addr.src, &ipv6_hdr(skb)->saddr,
+-			       sizeof(key->ipv6.addr.src));
+-		else
+-			return;
+-
+-		if (key->ip.proto == IPPROTO_UDP)
+-			src = udp_hdr(skb)->source;
+-		else if (key->ip.proto == IPPROTO_TCP)
+-			src = tcp_hdr(skb)->source;
+-		else if (key->ip.proto == IPPROTO_SCTP)
+-			src = sctp_hdr(skb)->source;
+-		else
+-			return;
+-
+-		key->tp.src = src;
+-	} else {
+-		__be16 dst;
+-
+-		key->ct_state |= OVS_CS_F_DST_NAT;
+-		if (key->eth.type == htons(ETH_P_IP))
+-			key->ipv4.addr.dst = ip_hdr(skb)->daddr;
+-		else if (key->eth.type == htons(ETH_P_IPV6))
+-			memcpy(&key->ipv6.addr.dst, &ipv6_hdr(skb)->daddr,
+-			       sizeof(key->ipv6.addr.dst));
+-		else
+-			return;
+-
+-		if (key->ip.proto == IPPROTO_UDP)
+-			dst = udp_hdr(skb)->dest;
+-		else if (key->ip.proto == IPPROTO_TCP)
+-			dst = tcp_hdr(skb)->dest;
+-		else if (key->ip.proto == IPPROTO_SCTP)
+-			dst = sctp_hdr(skb)->dest;
+-		else
+-			return;
++	/* Update the flow key if NAT successful. */
++	if (err == NF_ACCEPT)
++		ovs_nat_update_key(key, skb, maniptype);
+ 
+-		key->tp.dst = dst;
+-	}
++	return err;
+ }
+ 
+ /* Returns NF_DROP if the packet should be dropped, NF_ACCEPT otherwise. */
+@@ -903,7 +907,7 @@ static int ovs_ct_nat(struct net *net, struct sw_flow_key *key,
+ 	} else {
+ 		return NF_ACCEPT; /* Connection is not NATed. */
+ 	}
+-	err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, maniptype);
++	err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, maniptype, key);
+ 
+ 	if (err == NF_ACCEPT && ct->status & IPS_DST_NAT) {
+ 		if (ct->status & IPS_SRC_NAT) {
+@@ -913,17 +917,13 @@ static int ovs_ct_nat(struct net *net, struct sw_flow_key *key,
+ 				maniptype = NF_NAT_MANIP_SRC;
+ 
+ 			err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range,
+-						 maniptype);
++						 maniptype, key);
+ 		} else if (CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL) {
+ 			err = ovs_ct_nat_execute(skb, ct, ctinfo, NULL,
+-						 NF_NAT_MANIP_SRC);
++						 NF_NAT_MANIP_SRC, key);
+ 		}
+ 	}
+ 
+-	/* Mark NAT done if successful and update the flow key. */
+-	if (err == NF_ACCEPT)
+-		ovs_nat_update_key(key, skb, maniptype);
+-
+ 	return err;
+ }
+ #else /* !CONFIG_NF_NAT */
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 4c5c2331e7648..8c4bdfa627ca9 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2201,8 +2201,8 @@ static int __ovs_nla_put_key(const struct sw_flow_key *swkey,
+ 			icmpv6_key->icmpv6_type = ntohs(output->tp.src);
+ 			icmpv6_key->icmpv6_code = ntohs(output->tp.dst);
+ 
+-			if (icmpv6_key->icmpv6_type == NDISC_NEIGHBOUR_SOLICITATION ||
+-			    icmpv6_key->icmpv6_type == NDISC_NEIGHBOUR_ADVERTISEMENT) {
++			if (swkey->tp.src == htons(NDISC_NEIGHBOUR_SOLICITATION) ||
++			    swkey->tp.src == htons(NDISC_NEIGHBOUR_ADVERTISEMENT)) {
+ 				struct ovs_key_nd *nd_key;
+ 
+ 				nla = nla_reserve(skb, OVS_KEY_ATTR_ND, sizeof(*nd_key));
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index dce48162f6c27..3bad9f5f91023 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -760,14 +760,12 @@ void rxrpc_propose_ACK(struct rxrpc_call *, u8, u32, bool, bool,
+ 		       enum rxrpc_propose_ack_trace);
+ void rxrpc_process_call(struct work_struct *);
+ 
+-static inline void rxrpc_reduce_call_timer(struct rxrpc_call *call,
+-					   unsigned long expire_at,
+-					   unsigned long now,
+-					   enum rxrpc_timer_trace why)
+-{
+-	trace_rxrpc_timer(call, why, now);
+-	timer_reduce(&call->timer, expire_at);
+-}
++void rxrpc_reduce_call_timer(struct rxrpc_call *call,
++			     unsigned long expire_at,
++			     unsigned long now,
++			     enum rxrpc_timer_trace why);
++
++void rxrpc_delete_call_timer(struct rxrpc_call *call);
+ 
+ /*
+  * call_object.c
+@@ -791,6 +789,7 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *);
+ bool __rxrpc_queue_call(struct rxrpc_call *);
+ bool rxrpc_queue_call(struct rxrpc_call *);
+ void rxrpc_see_call(struct rxrpc_call *);
++bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op);
+ void rxrpc_get_call(struct rxrpc_call *, enum rxrpc_call_trace);
+ void rxrpc_put_call(struct rxrpc_call *, enum rxrpc_call_trace);
+ void rxrpc_cleanup_call(struct rxrpc_call *);
+diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
+index df864e6922679..22e05de5d1ca9 100644
+--- a/net/rxrpc/call_event.c
++++ b/net/rxrpc/call_event.c
+@@ -310,7 +310,7 @@ recheck_state:
+ 	}
+ 
+ 	if (call->state == RXRPC_CALL_COMPLETE) {
+-		del_timer_sync(&call->timer);
++		rxrpc_delete_call_timer(call);
+ 		goto out_put;
+ 	}
+ 
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index 4eb91d958a48d..043508fd8d8a5 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -53,10 +53,30 @@ static void rxrpc_call_timer_expired(struct timer_list *t)
+ 
+ 	if (call->state < RXRPC_CALL_COMPLETE) {
+ 		trace_rxrpc_timer(call, rxrpc_timer_expired, jiffies);
+-		rxrpc_queue_call(call);
++		__rxrpc_queue_call(call);
++	} else {
++		rxrpc_put_call(call, rxrpc_call_put);
++	}
++}
++
++void rxrpc_reduce_call_timer(struct rxrpc_call *call,
++			     unsigned long expire_at,
++			     unsigned long now,
++			     enum rxrpc_timer_trace why)
++{
++	if (rxrpc_try_get_call(call, rxrpc_call_got_timer)) {
++		trace_rxrpc_timer(call, why, now);
++		if (timer_reduce(&call->timer, expire_at))
++			rxrpc_put_call(call, rxrpc_call_put_notimer);
+ 	}
+ }
+ 
++void rxrpc_delete_call_timer(struct rxrpc_call *call)
++{
++	if (del_timer_sync(&call->timer))
++		rxrpc_put_call(call, rxrpc_call_put_timer);
++}
++
+ static struct lock_class_key rxrpc_call_user_mutex_lock_class_key;
+ 
+ /*
+@@ -463,6 +483,17 @@ void rxrpc_see_call(struct rxrpc_call *call)
+ 	}
+ }
+ 
++bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
++{
++	const void *here = __builtin_return_address(0);
++	int n = atomic_fetch_add_unless(&call->usage, 1, 0);
++
++	if (n == 0)
++		return false;
++	trace_rxrpc_call(call->debug_id, op, n, here, NULL);
++	return true;
++}
++
+ /*
+  * Note the addition of a ref on a call.
+  */
+@@ -510,8 +541,7 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
+ 	spin_unlock_bh(&call->lock);
+ 
+ 	rxrpc_put_call_slot(call);
+-
+-	del_timer_sync(&call->timer);
++	rxrpc_delete_call_timer(call);
+ 
+ 	/* Make sure we don't get any more notifications */
+ 	write_lock_bh(&rx->recvmsg_lock);
+@@ -618,6 +648,8 @@ static void rxrpc_destroy_call(struct work_struct *work)
+ 	struct rxrpc_call *call = container_of(work, struct rxrpc_call, processor);
+ 	struct rxrpc_net *rxnet = call->rxnet;
+ 
++	rxrpc_delete_call_timer(call);
++
+ 	rxrpc_put_connection(call->conn);
+ 	rxrpc_put_peer(call->peer);
+ 	kfree(call->rxtx_buffer);
+@@ -652,8 +684,6 @@ void rxrpc_cleanup_call(struct rxrpc_call *call)
+ 
+ 	memset(&call->sock_node, 0xcd, sizeof(call->sock_node));
+ 
+-	del_timer_sync(&call->timer);
+-
+ 	ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
+ 	ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags));
+ 
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 04aaca4b8bf93..46304e647c492 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -2037,7 +2037,14 @@ static void xprt_destroy(struct rpc_xprt *xprt)
+ 	 */
+ 	wait_on_bit_lock(&xprt->state, XPRT_LOCKED, TASK_UNINTERRUPTIBLE);
+ 
++	/*
++	 * xprt_schedule_autodisconnect() can run after XPRT_LOCKED
++	 * is cleared.  We use ->transport_lock to ensure the mod_timer()
++	 * can only run *before* del_time_sync(), never after.
++	 */
++	spin_lock(&xprt->transport_lock);
+ 	del_timer_sync(&xprt->timer);
++	spin_unlock(&xprt->transport_lock);
+ 
+ 	/*
+ 	 * Destroy sockets etc from the system workqueue so they can
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 8d2c98531af45..42283dc6c5b7c 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -2846,7 +2846,8 @@ static void tipc_sk_retry_connect(struct sock *sk, struct sk_buff_head *list)
+ 
+ 	/* Try again later if dest link is congested */
+ 	if (tsk->cong_link_cnt) {
+-		sk_reset_timer(sk, &sk->sk_timer, msecs_to_jiffies(100));
++		sk_reset_timer(sk, &sk->sk_timer,
++			       jiffies + msecs_to_jiffies(100));
+ 		return;
+ 	}
+ 	/* Prepare SYN for retransmit */
+diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
+index 03ed170b8125e..d231d4620c38f 100644
+--- a/net/x25/af_x25.c
++++ b/net/x25/af_x25.c
+@@ -1775,10 +1775,15 @@ void x25_kill_by_neigh(struct x25_neigh *nb)
+ 
+ 	write_lock_bh(&x25_list_lock);
+ 
+-	sk_for_each(s, &x25_list)
+-		if (x25_sk(s)->neighbour == nb)
++	sk_for_each(s, &x25_list) {
++		if (x25_sk(s)->neighbour == nb) {
++			write_unlock_bh(&x25_list_lock);
++			lock_sock(s);
+ 			x25_disconnect(s, ENETUNREACH, 0, 0);
+-
++			release_sock(s);
++			write_lock_bh(&x25_list_lock);
++		}
++	}
+ 	write_unlock_bh(&x25_list_lock);
+ 
+ 	/* Remove any related forwards */
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+index 4420c8fd318a6..da518b4ca84c6 100644
+--- a/net/xfrm/xfrm_interface.c
++++ b/net/xfrm/xfrm_interface.c
+@@ -303,7 +303,10 @@ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
+ 			if (mtu < IPV6_MIN_MTU)
+ 				mtu = IPV6_MIN_MTU;
+ 
+-			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++			if (skb->len > 1280)
++				icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++			else
++				goto xmit;
+ 		} else {
+ 			if (!(ip_hdr(skb)->frag_off & htons(IP_DF)))
+ 				goto xmit;
+diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
+index 2e4508a6cb3a7..cf5b0a8952254 100644
+--- a/samples/bpf/xdpsock_user.c
++++ b/samples/bpf/xdpsock_user.c
+@@ -1520,14 +1520,15 @@ int main(int argc, char **argv)
+ 
+ 	setlocale(LC_ALL, "");
+ 
++	prev_time = get_nsecs();
++	start_time = prev_time;
++
+ 	if (!opt_quiet) {
+ 		ret = pthread_create(&pt, NULL, poller, NULL);
+ 		if (ret)
+ 			exit_with_error(ret);
+ 	}
+ 
+-	prev_time = get_nsecs();
+-	start_time = prev_time;
+ 
+ 	if (opt_bench == BENCH_RXDROP)
+ 		rx_drop_all();
+diff --git a/scripts/dtc/Makefile b/scripts/dtc/Makefile
+index 4852bf44e913e..f1d201782346f 100644
+--- a/scripts/dtc/Makefile
++++ b/scripts/dtc/Makefile
+@@ -22,7 +22,7 @@ dtc-objs	+= yamltree.o
+ # To include <yaml.h> installed in a non-default path
+ HOSTCFLAGS_yamltree.o := $(shell pkg-config --cflags yaml-0.1)
+ # To link libyaml installed in a non-default path
+-HOSTLDLIBS_dtc	:= $(shell pkg-config yaml-0.1 --libs)
++HOSTLDLIBS_dtc	:= $(shell pkg-config --libs yaml-0.1)
+ endif
+ 
+ # Generated files need one more search path to include headers in source tree
+diff --git a/scripts/gcc-plugins/stackleak_plugin.c b/scripts/gcc-plugins/stackleak_plugin.c
+index 48e141e079562..dacd697ffd383 100644
+--- a/scripts/gcc-plugins/stackleak_plugin.c
++++ b/scripts/gcc-plugins/stackleak_plugin.c
+@@ -431,6 +431,23 @@ static unsigned int stackleak_cleanup_execute(void)
+ 	return 0;
+ }
+ 
++/*
++ * STRING_CST may or may not be NUL terminated:
++ * https://gcc.gnu.org/onlinedocs/gccint/Constant-expressions.html
++ */
++static inline bool string_equal(tree node, const char *string, int length)
++{
++	if (TREE_STRING_LENGTH(node) < length)
++		return false;
++	if (TREE_STRING_LENGTH(node) > length + 1)
++		return false;
++	if (TREE_STRING_LENGTH(node) == length + 1 &&
++	    TREE_STRING_POINTER(node)[length] != '\0')
++		return false;
++	return !memcmp(TREE_STRING_POINTER(node), string, length);
++}
++#define STRING_EQUAL(node, str)	string_equal(node, str, strlen(str))
++
+ static bool stackleak_gate(void)
+ {
+ 	tree section;
+@@ -440,13 +457,13 @@ static bool stackleak_gate(void)
+ 	if (section && TREE_VALUE(section)) {
+ 		section = TREE_VALUE(TREE_VALUE(section));
+ 
+-		if (!strncmp(TREE_STRING_POINTER(section), ".init.text", 10))
++		if (STRING_EQUAL(section, ".init.text"))
+ 			return false;
+-		if (!strncmp(TREE_STRING_POINTER(section), ".devinit.text", 13))
++		if (STRING_EQUAL(section, ".devinit.text"))
+ 			return false;
+-		if (!strncmp(TREE_STRING_POINTER(section), ".cpuinit.text", 13))
++		if (STRING_EQUAL(section, ".cpuinit.text"))
+ 			return false;
+-		if (!strncmp(TREE_STRING_POINTER(section), ".meminit.text", 13))
++		if (STRING_EQUAL(section, ".meminit.text"))
+ 			return false;
+ 	}
+ 
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index b929c683aba12..0033364ac404f 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -62,7 +62,7 @@ static int __init evm_set_fixmode(char *str)
+ 	else
+ 		pr_err("invalid \"%s\" mode", str);
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("evm=", evm_set_fixmode);
+ 
+diff --git a/security/keys/keyctl_pkey.c b/security/keys/keyctl_pkey.c
+index 931d8dfb4a7f4..63e5c646f7620 100644
+--- a/security/keys/keyctl_pkey.c
++++ b/security/keys/keyctl_pkey.c
+@@ -135,15 +135,23 @@ static int keyctl_pkey_params_get_2(const struct keyctl_pkey_params __user *_par
+ 
+ 	switch (op) {
+ 	case KEYCTL_PKEY_ENCRYPT:
++		if (uparams.in_len  > info.max_dec_size ||
++		    uparams.out_len > info.max_enc_size)
++			return -EINVAL;
++		break;
+ 	case KEYCTL_PKEY_DECRYPT:
+ 		if (uparams.in_len  > info.max_enc_size ||
+ 		    uparams.out_len > info.max_dec_size)
+ 			return -EINVAL;
+ 		break;
+ 	case KEYCTL_PKEY_SIGN:
++		if (uparams.in_len  > info.max_data_size ||
++		    uparams.out_len > info.max_sig_size)
++			return -EINVAL;
++		break;
+ 	case KEYCTL_PKEY_VERIFY:
+-		if (uparams.in_len  > info.max_sig_size ||
+-		    uparams.out_len > info.max_data_size)
++		if (uparams.in_len  > info.max_data_size ||
++		    uparams.in2_len > info.max_sig_size)
+ 			return -EINVAL;
+ 		break;
+ 	default:
+@@ -151,7 +159,7 @@ static int keyctl_pkey_params_get_2(const struct keyctl_pkey_params __user *_par
+ 	}
+ 
+ 	params->in_len  = uparams.in_len;
+-	params->out_len = uparams.out_len;
++	params->out_len = uparams.out_len; /* Note: same as in2_len */
+ 	return 0;
+ }
+ 
+diff --git a/security/security.c b/security/security.c
+index a864ff824dd3b..d9d42d64f89f2 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -860,9 +860,22 @@ int security_fs_context_dup(struct fs_context *fc, struct fs_context *src_fc)
+ 	return call_int_hook(fs_context_dup, 0, fc, src_fc);
+ }
+ 
+-int security_fs_context_parse_param(struct fs_context *fc, struct fs_parameter *param)
++int security_fs_context_parse_param(struct fs_context *fc,
++				    struct fs_parameter *param)
+ {
+-	return call_int_hook(fs_context_parse_param, -ENOPARAM, fc, param);
++	struct security_hook_list *hp;
++	int trc;
++	int rc = -ENOPARAM;
++
++	hlist_for_each_entry(hp, &security_hook_heads.fs_context_parse_param,
++			     list) {
++		trc = hp->hook.fs_context_parse_param(fc, param);
++		if (trc == 0)
++			rc = 0;
++		else if (trc != -ENOPARAM)
++			return trc;
++	}
++	return rc;
+ }
+ 
+ int security_sb_alloc(struct super_block *sb)
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 86159b32921cc..8c901ae05dd84 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -2820,10 +2820,9 @@ static int selinux_fs_context_parse_param(struct fs_context *fc,
+ 		return opt;
+ 
+ 	rc = selinux_add_opt(opt, param->string, &fc->security);
+-	if (!rc) {
++	if (!rc)
+ 		param->string = NULL;
+-		rc = 1;
+-	}
++
+ 	return rc;
+ }
+ 
+@@ -3648,6 +3647,12 @@ static int selinux_file_ioctl(struct file *file, unsigned int cmd,
+ 					    CAP_OPT_NONE, true);
+ 		break;
+ 
++	case FIOCLEX:
++	case FIONCLEX:
++		if (!selinux_policycap_ioctl_skip_cloexec())
++			error = ioctl_has_perm(cred, file, FILE__IOCTL, (u16) cmd);
++		break;
++
+ 	/* default case assumes that the command will go
+ 	 * to the file's ioctl() function.
+ 	 */
+diff --git a/security/selinux/include/policycap.h b/security/selinux/include/policycap.h
+index 2ec038efbb03c..a9e572ca4fd96 100644
+--- a/security/selinux/include/policycap.h
++++ b/security/selinux/include/policycap.h
+@@ -11,6 +11,7 @@ enum {
+ 	POLICYDB_CAPABILITY_CGROUPSECLABEL,
+ 	POLICYDB_CAPABILITY_NNP_NOSUID_TRANSITION,
+ 	POLICYDB_CAPABILITY_GENFS_SECLABEL_SYMLINKS,
++	POLICYDB_CAPABILITY_IOCTL_SKIP_CLOEXEC,
+ 	__POLICYDB_CAPABILITY_MAX
+ };
+ #define POLICYDB_CAPABILITY_MAX (__POLICYDB_CAPABILITY_MAX - 1)
+diff --git a/security/selinux/include/policycap_names.h b/security/selinux/include/policycap_names.h
+index b89289f092c93..ebd64afe1defd 100644
+--- a/security/selinux/include/policycap_names.h
++++ b/security/selinux/include/policycap_names.h
+@@ -12,7 +12,8 @@ const char *selinux_policycap_names[__POLICYDB_CAPABILITY_MAX] = {
+ 	"always_check_network",
+ 	"cgroup_seclabel",
+ 	"nnp_nosuid_transition",
+-	"genfs_seclabel_symlinks"
++	"genfs_seclabel_symlinks",
++	"ioctl_skip_cloexec"
+ };
+ 
+ #endif /* _SELINUX_POLICYCAP_NAMES_H_ */
+diff --git a/security/selinux/include/security.h b/security/selinux/include/security.h
+index 63ca6e79daeb9..1521460a97d4e 100644
+--- a/security/selinux/include/security.h
++++ b/security/selinux/include/security.h
+@@ -219,6 +219,13 @@ static inline bool selinux_policycap_genfs_seclabel_symlinks(void)
+ 	return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_GENFS_SECLABEL_SYMLINKS]);
+ }
+ 
++static inline bool selinux_policycap_ioctl_skip_cloexec(void)
++{
++	struct selinux_state *state = &selinux_state;
++
++	return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_IOCTL_SKIP_CLOEXEC]);
++}
++
+ struct selinux_policy_convert_data;
+ 
+ struct selinux_load_state {
+diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
+index 2b745ae8cb981..d893c2280f595 100644
+--- a/security/selinux/selinuxfs.c
++++ b/security/selinux/selinuxfs.c
+@@ -2124,6 +2124,8 @@ static int sel_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	}
+ 
+ 	ret = sel_make_avc_files(dentry);
++	if (ret)
++		goto err;
+ 
+ 	dentry = sel_make_dir(sb->s_root, "ss", &fsi->last_ino);
+ 	if (IS_ERR(dentry)) {
+diff --git a/security/selinux/xfrm.c b/security/selinux/xfrm.c
+index 7314196185d15..00e95f8bd7c73 100644
+--- a/security/selinux/xfrm.c
++++ b/security/selinux/xfrm.c
+@@ -346,7 +346,7 @@ int selinux_xfrm_state_alloc_acquire(struct xfrm_state *x,
+ 	int rc;
+ 	struct xfrm_sec_ctx *ctx;
+ 	char *ctx_str = NULL;
+-	int str_len;
++	u32 str_len;
+ 
+ 	if (!polsec)
+ 		return 0;
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 5c90b9fa4d405..b36b8668f1f4a 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -2506,7 +2506,7 @@ static int smk_ipv6_check(struct smack_known *subject,
+ #ifdef CONFIG_AUDIT
+ 	smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net);
+ 	ad.a.u.net->family = PF_INET6;
+-	ad.a.u.net->dport = ntohs(address->sin6_port);
++	ad.a.u.net->dport = address->sin6_port;
+ 	if (act == SMK_RECEIVING)
+ 		ad.a.u.net->v6info.saddr = address->sin6_addr;
+ 	else
+diff --git a/security/tomoyo/load_policy.c b/security/tomoyo/load_policy.c
+index 3445ae6fd4794..363b65be87ab7 100644
+--- a/security/tomoyo/load_policy.c
++++ b/security/tomoyo/load_policy.c
+@@ -24,7 +24,7 @@ static const char *tomoyo_loader;
+ static int __init tomoyo_loader_setup(char *str)
+ {
+ 	tomoyo_loader = str;
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("TOMOYO_loader=", tomoyo_loader_setup);
+@@ -64,7 +64,7 @@ static const char *tomoyo_trigger;
+ static int __init tomoyo_trigger_setup(char *str)
+ {
+ 	tomoyo_trigger = str;
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("TOMOYO_trigger=", tomoyo_trigger_setup);
+diff --git a/sound/arm/aaci.c b/sound/arm/aaci.c
+index a0996c47e58fe..b326a5f5f0d53 100644
+--- a/sound/arm/aaci.c
++++ b/sound/arm/aaci.c
+@@ -1055,7 +1055,7 @@ static int aaci_probe(struct amba_device *dev,
+ 	return ret;
+ }
+ 
+-static int aaci_remove(struct amba_device *dev)
++static void aaci_remove(struct amba_device *dev)
+ {
+ 	struct snd_card *card = amba_get_drvdata(dev);
+ 
+@@ -1066,8 +1066,6 @@ static int aaci_remove(struct amba_device *dev)
+ 		snd_card_free(card);
+ 		amba_release_regions(dev);
+ 	}
+-
+-	return 0;
+ }
+ 
+ static struct amba_id aaci_ids[] = {
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index 8e5c6b227e52d..59d222446d777 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -970,6 +970,7 @@ int snd_pcm_attach_substream(struct snd_pcm *pcm, int stream,
+ 
+ 	runtime->status->state = SNDRV_PCM_STATE_OPEN;
+ 	mutex_init(&runtime->buffer_mutex);
++	atomic_set(&runtime->buffer_accessing, 0);
+ 
+ 	substream->runtime = runtime;
+ 	substream->private_data = pcm->private_data;
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index 45afef73275f0..289f52af15b96 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1871,11 +1871,9 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
+ 		if (avail >= runtime->twake)
+ 			break;
+ 		snd_pcm_stream_unlock_irq(substream);
+-		mutex_unlock(&runtime->buffer_mutex);
+ 
+ 		tout = schedule_timeout(wait_time);
+ 
+-		mutex_lock(&runtime->buffer_mutex);
+ 		snd_pcm_stream_lock_irq(substream);
+ 		set_current_state(TASK_INTERRUPTIBLE);
+ 		switch (runtime->status->state) {
+@@ -2169,7 +2167,6 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ 
+ 	nonblock = !!(substream->f_flags & O_NONBLOCK);
+ 
+-	mutex_lock(&runtime->buffer_mutex);
+ 	snd_pcm_stream_lock_irq(substream);
+ 	err = pcm_accessible_state(runtime);
+ 	if (err < 0)
+@@ -2224,10 +2221,15 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ 			err = -EINVAL;
+ 			goto _end_unlock;
+ 		}
++		if (!atomic_inc_unless_negative(&runtime->buffer_accessing)) {
++			err = -EBUSY;
++			goto _end_unlock;
++		}
+ 		snd_pcm_stream_unlock_irq(substream);
+ 		err = writer(substream, appl_ofs, data, offset, frames,
+ 			     transfer);
+ 		snd_pcm_stream_lock_irq(substream);
++		atomic_dec(&runtime->buffer_accessing);
+ 		if (err < 0)
+ 			goto _end_unlock;
+ 		err = pcm_accessible_state(runtime);
+@@ -2257,7 +2259,6 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ 	if (xfer > 0 && err >= 0)
+ 		snd_pcm_update_state(substream, runtime);
+ 	snd_pcm_stream_unlock_irq(substream);
+-	mutex_unlock(&runtime->buffer_mutex);
+ 	return xfer > 0 ? (snd_pcm_sframes_t)xfer : err;
+ }
+ EXPORT_SYMBOL(__snd_pcm_lib_xfer);
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 6579802c55116..6cc7c2a9fe732 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -667,6 +667,24 @@ static int snd_pcm_hw_params_choose(struct snd_pcm_substream *pcm,
+ 	return 0;
+ }
+ 
++/* acquire buffer_mutex; if it's in r/w operation, return -EBUSY, otherwise
++ * block the further r/w operations
++ */
++static int snd_pcm_buffer_access_lock(struct snd_pcm_runtime *runtime)
++{
++	if (!atomic_dec_unless_positive(&runtime->buffer_accessing))
++		return -EBUSY;
++	mutex_lock(&runtime->buffer_mutex);
++	return 0; /* keep buffer_mutex, unlocked by below */
++}
++
++/* release buffer_mutex and clear r/w access flag */
++static void snd_pcm_buffer_access_unlock(struct snd_pcm_runtime *runtime)
++{
++	mutex_unlock(&runtime->buffer_mutex);
++	atomic_inc(&runtime->buffer_accessing);
++}
++
+ #if IS_ENABLED(CONFIG_SND_PCM_OSS)
+ #define is_oss_stream(substream)	((substream)->oss.oss)
+ #else
+@@ -677,14 +695,16 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ 			     struct snd_pcm_hw_params *params)
+ {
+ 	struct snd_pcm_runtime *runtime;
+-	int err = 0, usecs;
++	int err, usecs;
+ 	unsigned int bits;
+ 	snd_pcm_uframes_t frames;
+ 
+ 	if (PCM_RUNTIME_CHECK(substream))
+ 		return -ENXIO;
+ 	runtime = substream->runtime;
+-	mutex_lock(&runtime->buffer_mutex);
++	err = snd_pcm_buffer_access_lock(runtime);
++	if (err < 0)
++		return err;
+ 	snd_pcm_stream_lock_irq(substream);
+ 	switch (runtime->status->state) {
+ 	case SNDRV_PCM_STATE_OPEN:
+@@ -801,7 +821,7 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ 			snd_pcm_lib_free_pages(substream);
+ 	}
+  unlock:
+-	mutex_unlock(&runtime->buffer_mutex);
++	snd_pcm_buffer_access_unlock(runtime);
+ 	return err;
+ }
+ 
+@@ -846,7 +866,9 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
+ 	if (PCM_RUNTIME_CHECK(substream))
+ 		return -ENXIO;
+ 	runtime = substream->runtime;
+-	mutex_lock(&runtime->buffer_mutex);
++	result = snd_pcm_buffer_access_lock(runtime);
++	if (result < 0)
++		return result;
+ 	snd_pcm_stream_lock_irq(substream);
+ 	switch (runtime->status->state) {
+ 	case SNDRV_PCM_STATE_SETUP:
+@@ -865,7 +887,7 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
+ 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
+ 	cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
+  unlock:
+-	mutex_unlock(&runtime->buffer_mutex);
++	snd_pcm_buffer_access_unlock(runtime);
+ 	return result;
+ }
+ 
+@@ -1350,12 +1372,15 @@ static int snd_pcm_action_nonatomic(const struct action_ops *ops,
+ 
+ 	/* Guarantee the group members won't change during non-atomic action */
+ 	down_read(&snd_pcm_link_rwsem);
+-	mutex_lock(&substream->runtime->buffer_mutex);
++	res = snd_pcm_buffer_access_lock(substream->runtime);
++	if (res < 0)
++		goto unlock;
+ 	if (snd_pcm_stream_linked(substream))
+ 		res = snd_pcm_action_group(ops, substream, state, false);
+ 	else
+ 		res = snd_pcm_action_single(ops, substream, state);
+-	mutex_unlock(&substream->runtime->buffer_mutex);
++	snd_pcm_buffer_access_unlock(substream->runtime);
++ unlock:
+ 	up_read(&snd_pcm_link_rwsem);
+ 	return res;
+ }
+diff --git a/sound/firewire/fcp.c b/sound/firewire/fcp.c
+index bbfbebf4affbc..df44dd5dc4b22 100644
+--- a/sound/firewire/fcp.c
++++ b/sound/firewire/fcp.c
+@@ -240,9 +240,7 @@ int fcp_avc_transaction(struct fw_unit *unit,
+ 	t.response_match_bytes = response_match_bytes;
+ 	t.state = STATE_PENDING;
+ 	init_waitqueue_head(&t.wait);
+-
+-	if (*(const u8 *)command == 0x00 || *(const u8 *)command == 0x03)
+-		t.deferrable = true;
++	t.deferrable = (*(const u8 *)command == 0x00 || *(const u8 *)command == 0x03);
+ 
+ 	spin_lock_irq(&transactions_lock);
+ 	list_add_tail(&t.list, &transactions);
+diff --git a/sound/isa/cs423x/cs4236.c b/sound/isa/cs423x/cs4236.c
+index fa3c39cff5f85..9ee3a312c6793 100644
+--- a/sound/isa/cs423x/cs4236.c
++++ b/sound/isa/cs423x/cs4236.c
+@@ -544,7 +544,7 @@ static int snd_cs423x_pnpbios_detect(struct pnp_dev *pdev,
+ 	static int dev;
+ 	int err;
+ 	struct snd_card *card;
+-	struct pnp_dev *cdev;
++	struct pnp_dev *cdev, *iter;
+ 	char cid[PNP_ID_LEN];
+ 
+ 	if (pnp_device_is_isapnp(pdev))
+@@ -560,9 +560,11 @@ static int snd_cs423x_pnpbios_detect(struct pnp_dev *pdev,
+ 	strcpy(cid, pdev->id[0].id);
+ 	cid[5] = '1';
+ 	cdev = NULL;
+-	list_for_each_entry(cdev, &(pdev->protocol->devices), protocol_list) {
+-		if (!strcmp(cdev->id[0].id, cid))
++	list_for_each_entry(iter, &(pdev->protocol->devices), protocol_list) {
++		if (!strcmp(iter->id[0].id, cid)) {
++			cdev = iter;
+ 			break;
++		}
+ 	}
+ 	err = snd_cs423x_card_new(&pdev->dev, dev, &card);
+ 	if (err < 0)
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index fe725f0f09312..71e11481ba41c 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1608,6 +1608,7 @@ static void hdmi_present_sense_via_verbs(struct hdmi_spec_per_pin *per_pin,
+ 	struct hda_codec *codec = per_pin->codec;
+ 	struct hdmi_spec *spec = codec->spec;
+ 	struct hdmi_eld *eld = &spec->temp_eld;
++	struct device *dev = hda_codec_dev(codec);
+ 	hda_nid_t pin_nid = per_pin->pin_nid;
+ 	int dev_id = per_pin->dev_id;
+ 	/*
+@@ -1621,8 +1622,13 @@ static void hdmi_present_sense_via_verbs(struct hdmi_spec_per_pin *per_pin,
+ 	int present;
+ 	int ret;
+ 
++#ifdef	CONFIG_PM
++	if (dev->power.runtime_status == RPM_SUSPENDING)
++		return;
++#endif
++
+ 	ret = snd_hda_power_up_pm(codec);
+-	if (ret < 0 && pm_runtime_suspended(hda_codec_dev(codec)))
++	if (ret < 0 && pm_runtime_suspended(dev))
+ 		goto out;
+ 
+ 	present = snd_hda_jack_pin_sense(codec, pin_nid, dev_id);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 3bd37c02ce0ed..b886326ce9b96 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3615,8 +3615,8 @@ static void alc256_shutup(struct hda_codec *codec)
+ 	/* If disable 3k pulldown control for alc257, the Mic detection will not work correctly
+ 	 * when booting with headset plugged. So skip setting it for the codec alc257
+ 	 */
+-	if (spec->codec_variant != ALC269_TYPE_ALC257 &&
+-	    spec->codec_variant != ALC269_TYPE_ALC256)
++	if (codec->core.vendor_id != 0x10ec0236 &&
++	    codec->core.vendor_id != 0x10ec0257)
+ 		alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+ 
+ 	if (!spec->no_shutup_pins)
+@@ -6762,6 +6762,7 @@ enum {
+ 	ALC236_FIXUP_HP_MUTE_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
+ 	ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
++	ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ 	ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
+ 	ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS,
+ 	ALC269VC_FIXUP_ACER_HEADSET_MIC,
+@@ -8083,6 +8084,14 @@ static const struct hda_fixup alc269_fixups[] = {
+ 			{ }
+ 		},
+ 	},
++	[ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x08},
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x2fcf},
++			{ }
++		},
++	},
+ 	[ALC295_FIXUP_ASUS_MIC_NO_PRESENCE] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -8835,6 +8844,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ 	SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++	SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+@@ -9177,6 +9187,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC298_FIXUP_HUAWEI_MBX_STEREO, .name = "huawei-mbx-stereo"},
+ 	{.id = ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, .name = "alc256-medion-headset"},
+ 	{.id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc298-samsung-headphone"},
++	{.id = ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc256-samsung-headphone"},
+ 	{.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
+ 	{.id = ALC274_FIXUP_HP_MIC, .name = "alc274-hp-mic-detect"},
+ 	{.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"},
+diff --git a/sound/soc/atmel/atmel_ssc_dai.c b/sound/soc/atmel/atmel_ssc_dai.c
+index 6a63e8797a0b6..97533412ce11e 100644
+--- a/sound/soc/atmel/atmel_ssc_dai.c
++++ b/sound/soc/atmel/atmel_ssc_dai.c
+@@ -280,7 +280,10 @@ static int atmel_ssc_startup(struct snd_pcm_substream *substream,
+ 
+ 	/* Enable PMC peripheral clock for this SSC */
+ 	pr_debug("atmel_ssc_dai: Starting clock\n");
+-	clk_enable(ssc_p->ssc->clk);
++	ret = clk_enable(ssc_p->ssc->clk);
++	if (ret)
++		return ret;
++
+ 	ssc_p->mck_rate = clk_get_rate(ssc_p->ssc->clk);
+ 
+ 	/* Reset the SSC unless initialized to keep it in a clean state */
+diff --git a/sound/soc/atmel/sam9g20_wm8731.c b/sound/soc/atmel/sam9g20_wm8731.c
+index ed1f69b570244..8a55d59a6c2aa 100644
+--- a/sound/soc/atmel/sam9g20_wm8731.c
++++ b/sound/soc/atmel/sam9g20_wm8731.c
+@@ -214,6 +214,7 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
+ 	cpu_np = of_parse_phandle(np, "atmel,ssc-controller", 0);
+ 	if (!cpu_np) {
+ 		dev_err(&pdev->dev, "dai and pcm info missing\n");
++		of_node_put(codec_np);
+ 		return -EINVAL;
+ 	}
+ 	at91sam9g20ek_dai.cpus->of_node = cpu_np;
+diff --git a/sound/soc/atmel/sam9x5_wm8731.c b/sound/soc/atmel/sam9x5_wm8731.c
+index 9fbc3c1113cc5..529604a06c532 100644
+--- a/sound/soc/atmel/sam9x5_wm8731.c
++++ b/sound/soc/atmel/sam9x5_wm8731.c
+@@ -142,7 +142,7 @@ static int sam9x5_wm8731_driver_probe(struct platform_device *pdev)
+ 	if (!cpu_np) {
+ 		dev_err(&pdev->dev, "atmel,ssc-controller node missing\n");
+ 		ret = -EINVAL;
+-		goto out;
++		goto out_put_codec_np;
+ 	}
+ 	dai->cpus->of_node = cpu_np;
+ 	dai->platforms->of_node = cpu_np;
+@@ -153,13 +153,10 @@ static int sam9x5_wm8731_driver_probe(struct platform_device *pdev)
+ 	if (ret != 0) {
+ 		dev_err(&pdev->dev, "Failed to set SSC %d for audio: %d\n",
+ 			ret, priv->ssc_id);
+-		goto out;
++		goto out_put_cpu_np;
+ 	}
+ 
+-	of_node_put(codec_np);
+-	of_node_put(cpu_np);
+-
+-	ret = snd_soc_register_card(card);
++	ret = devm_snd_soc_register_card(&pdev->dev, card);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Platform device allocation failed\n");
+ 		goto out_put_audio;
+@@ -167,10 +164,14 @@ static int sam9x5_wm8731_driver_probe(struct platform_device *pdev)
+ 
+ 	dev_dbg(&pdev->dev, "%s ok\n", __func__);
+ 
+-	return ret;
++	goto out_put_cpu_np;
+ 
+ out_put_audio:
+ 	atmel_ssc_put_audio(priv->ssc_id);
++out_put_cpu_np:
++	of_node_put(cpu_np);
++out_put_codec_np:
++	of_node_put(codec_np);
+ out:
+ 	return ret;
+ }
+@@ -180,7 +181,6 @@ static int sam9x5_wm8731_driver_remove(struct platform_device *pdev)
+ 	struct snd_soc_card *card = platform_get_drvdata(pdev);
+ 	struct sam9x5_drvdata *priv = card->drvdata;
+ 
+-	snd_soc_unregister_card(card);
+ 	atmel_ssc_put_audio(priv->ssc_id);
+ 
+ 	return 0;
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index 34c6dd04b85a3..52c89a6f54e9a 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -659,6 +659,7 @@ config SND_SOC_CS4349
+ 
+ config SND_SOC_CS47L15
+ 	tristate
++	depends on MFD_CS47L15
+ 
+ config SND_SOC_CS47L24
+ 	tristate
+@@ -666,15 +667,19 @@ config SND_SOC_CS47L24
+ 
+ config SND_SOC_CS47L35
+ 	tristate
++	depends on MFD_CS47L35
+ 
+ config SND_SOC_CS47L85
+ 	tristate
++	depends on MFD_CS47L85
+ 
+ config SND_SOC_CS47L90
+ 	tristate
++	depends on MFD_CS47L90
+ 
+ config SND_SOC_CS47L92
+ 	tristate
++	depends on MFD_CS47L92
+ 
+ # Cirrus Logic Quad-Channel ADC
+ config SND_SOC_CS53L30
+diff --git a/sound/soc/codecs/msm8916-wcd-analog.c b/sound/soc/codecs/msm8916-wcd-analog.c
+index 3ddd822240e3a..971b8360b5b1b 100644
+--- a/sound/soc/codecs/msm8916-wcd-analog.c
++++ b/sound/soc/codecs/msm8916-wcd-analog.c
+@@ -1221,8 +1221,10 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq_byname(pdev, "mbhc_switch_int");
+-	if (irq < 0)
+-		return irq;
++	if (irq < 0) {
++		ret = irq;
++		goto err_disable_clk;
++	}
+ 
+ 	ret = devm_request_threaded_irq(dev, irq, NULL,
+ 			       pm8916_mbhc_switch_irq_handler,
+@@ -1234,8 +1236,10 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ 
+ 	if (priv->mbhc_btn_enabled) {
+ 		irq = platform_get_irq_byname(pdev, "mbhc_but_press_det");
+-		if (irq < 0)
+-			return irq;
++		if (irq < 0) {
++			ret = irq;
++			goto err_disable_clk;
++		}
+ 
+ 		ret = devm_request_threaded_irq(dev, irq, NULL,
+ 				       mbhc_btn_press_irq_handler,
+@@ -1246,8 +1250,10 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ 			dev_err(dev, "cannot request mbhc button press irq\n");
+ 
+ 		irq = platform_get_irq_byname(pdev, "mbhc_but_rel_det");
+-		if (irq < 0)
+-			return irq;
++		if (irq < 0) {
++			ret = irq;
++			goto err_disable_clk;
++		}
+ 
+ 		ret = devm_request_threaded_irq(dev, irq, NULL,
+ 				       mbhc_btn_release_irq_handler,
+@@ -1264,6 +1270,10 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ 	return devm_snd_soc_register_component(dev, &pm8916_wcd_analog,
+ 				      pm8916_wcd_analog_dai,
+ 				      ARRAY_SIZE(pm8916_wcd_analog_dai));
++
++err_disable_clk:
++	clk_disable_unprepare(priv->mclk);
++	return ret;
+ }
+ 
+ static int pm8916_wcd_analog_spmi_remove(struct platform_device *pdev)
+diff --git a/sound/soc/codecs/msm8916-wcd-digital.c b/sound/soc/codecs/msm8916-wcd-digital.c
+index fcc10c8bc6259..9ad7fc0baf072 100644
+--- a/sound/soc/codecs/msm8916-wcd-digital.c
++++ b/sound/soc/codecs/msm8916-wcd-digital.c
+@@ -1201,7 +1201,7 @@ static int msm8916_wcd_digital_probe(struct platform_device *pdev)
+ 	ret = clk_prepare_enable(priv->mclk);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to enable mclk %d\n", ret);
+-		return ret;
++		goto err_clk;
+ 	}
+ 
+ 	dev_set_drvdata(dev, priv);
+@@ -1209,6 +1209,9 @@ static int msm8916_wcd_digital_probe(struct platform_device *pdev)
+ 	return devm_snd_soc_register_component(dev, &msm8916_wcd_digital,
+ 				      msm8916_wcd_digital_dai,
+ 				      ARRAY_SIZE(msm8916_wcd_digital_dai));
++err_clk:
++	clk_disable_unprepare(priv->ahbclk);
++	return ret;
+ }
+ 
+ static int msm8916_wcd_digital_remove(struct platform_device *pdev)
+diff --git a/sound/soc/codecs/mt6358.c b/sound/soc/codecs/mt6358.c
+index 1f39d5998cf67..456d9b24d0249 100644
+--- a/sound/soc/codecs/mt6358.c
++++ b/sound/soc/codecs/mt6358.c
+@@ -107,6 +107,7 @@ int mt6358_set_mtkaif_protocol(struct snd_soc_component *cmpnt,
+ 	priv->mtkaif_protocol = mtkaif_protocol;
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mt6358_set_mtkaif_protocol);
+ 
+ static void playback_gpio_set(struct mt6358_priv *priv)
+ {
+@@ -273,6 +274,7 @@ int mt6358_mtkaif_calibration_enable(struct snd_soc_component *cmpnt)
+ 			   1 << RG_AUD_PAD_TOP_DAT_MISO_LOOPBACK_SFT);
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mt6358_mtkaif_calibration_enable);
+ 
+ int mt6358_mtkaif_calibration_disable(struct snd_soc_component *cmpnt)
+ {
+@@ -296,6 +298,7 @@ int mt6358_mtkaif_calibration_disable(struct snd_soc_component *cmpnt)
+ 	capture_gpio_reset(priv);
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mt6358_mtkaif_calibration_disable);
+ 
+ int mt6358_set_mtkaif_calibration_phase(struct snd_soc_component *cmpnt,
+ 					int phase_1, int phase_2)
+@@ -310,6 +313,7 @@ int mt6358_set_mtkaif_calibration_phase(struct snd_soc_component *cmpnt,
+ 			   phase_2 << RG_AUD_PAD_TOP_PHASE_MODE2_SFT);
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mt6358_set_mtkaif_calibration_phase);
+ 
+ /* dl pga gain */
+ enum {
+diff --git a/sound/soc/codecs/rt5663.c b/sound/soc/codecs/rt5663.c
+index db8a41aaa3859..4423e61bf1abf 100644
+--- a/sound/soc/codecs/rt5663.c
++++ b/sound/soc/codecs/rt5663.c
+@@ -3478,6 +3478,8 @@ static int rt5663_parse_dp(struct rt5663_priv *rt5663, struct device *dev)
+ 		table_size = sizeof(struct impedance_mapping_table) *
+ 			rt5663->pdata.impedance_sensing_num;
+ 		rt5663->imp_table = devm_kzalloc(dev, table_size, GFP_KERNEL);
++		if (!rt5663->imp_table)
++			return -ENOMEM;
+ 		ret = device_property_read_u32_array(dev,
+ 			"realtek,impedance_sensing_table",
+ 			(u32 *)rt5663->imp_table, table_size);
+diff --git a/sound/soc/codecs/wcd934x.c b/sound/soc/codecs/wcd934x.c
+index 01df3f4e045a9..8540ac230d0ed 100644
+--- a/sound/soc/codecs/wcd934x.c
++++ b/sound/soc/codecs/wcd934x.c
+@@ -2522,13 +2522,16 @@ static int wcd934x_rx_hph_mode_put(struct snd_kcontrol *kc,
+ 
+ 	mode_val = ucontrol->value.enumerated.item[0];
+ 
++	if (mode_val == wcd->hph_mode)
++		return 0;
++
+ 	if (mode_val == 0) {
+ 		dev_err(wcd->dev, "Invalid HPH Mode, default to ClSH HiFi\n");
+ 		mode_val = CLS_H_LOHIFI;
+ 	}
+ 	wcd->hph_mode = mode_val;
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static int slim_rx_mux_get(struct snd_kcontrol *kc,
+@@ -5044,6 +5047,7 @@ static int wcd934x_codec_parse_data(struct wcd934x_codec *wcd)
+ 	}
+ 
+ 	wcd->sidev = of_slim_get_device(wcd->sdev->ctrl, ifc_dev_np);
++	of_node_put(ifc_dev_np);
+ 	if (!wcd->sidev) {
+ 		dev_err(dev, "Unable to get SLIM Interface device\n");
+ 		return -EINVAL;
+diff --git a/sound/soc/codecs/wm8350.c b/sound/soc/codecs/wm8350.c
+index a6aa212fa0c89..ec5d997725b9c 100644
+--- a/sound/soc/codecs/wm8350.c
++++ b/sound/soc/codecs/wm8350.c
+@@ -1536,18 +1536,38 @@ static  int wm8350_component_probe(struct snd_soc_component *component)
+ 	wm8350_clear_bits(wm8350, WM8350_JACK_DETECT,
+ 			  WM8350_JDL_ENA | WM8350_JDR_ENA);
+ 
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_L,
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_L,
+ 			    wm8350_hpl_jack_handler, 0, "Left jack detect",
+ 			    priv);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_R,
++	if (ret != 0)
++		goto err;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_R,
+ 			    wm8350_hpr_jack_handler, 0, "Right jack detect",
+ 			    priv);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_MICSCD,
++	if (ret != 0)
++		goto free_jck_det_l;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_MICSCD,
+ 			    wm8350_mic_handler, 0, "Microphone short", priv);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_MICD,
++	if (ret != 0)
++		goto free_jck_det_r;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_MICD,
+ 			    wm8350_mic_handler, 0, "Microphone detect", priv);
++	if (ret != 0)
++		goto free_micscd;
+ 
+ 	return 0;
++
++free_micscd:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CODEC_MICSCD, priv);
++free_jck_det_r:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_R, priv);
++free_jck_det_l:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_L, priv);
++err:
++	return ret;
+ }
+ 
+ static void wm8350_component_remove(struct snd_soc_component *component)
+diff --git a/sound/soc/dwc/dwc-i2s.c b/sound/soc/dwc/dwc-i2s.c
+index fd4160289faca..36da0f01571a1 100644
+--- a/sound/soc/dwc/dwc-i2s.c
++++ b/sound/soc/dwc/dwc-i2s.c
+@@ -403,9 +403,13 @@ static int dw_i2s_runtime_suspend(struct device *dev)
+ static int dw_i2s_runtime_resume(struct device *dev)
+ {
+ 	struct dw_i2s_dev *dw_dev = dev_get_drvdata(dev);
++	int ret;
+ 
+-	if (dw_dev->capability & DW_I2S_MASTER)
+-		clk_enable(dw_dev->clk);
++	if (dw_dev->capability & DW_I2S_MASTER) {
++		ret = clk_enable(dw_dev->clk);
++		if (ret)
++			return ret;
++	}
+ 	return 0;
+ }
+ 
+@@ -422,10 +426,13 @@ static int dw_i2s_resume(struct snd_soc_component *component)
+ {
+ 	struct dw_i2s_dev *dev = snd_soc_component_get_drvdata(component);
+ 	struct snd_soc_dai *dai;
+-	int stream;
++	int stream, ret;
+ 
+-	if (dev->capability & DW_I2S_MASTER)
+-		clk_enable(dev->clk);
++	if (dev->capability & DW_I2S_MASTER) {
++		ret = clk_enable(dev->clk);
++		if (ret)
++			return ret;
++	}
+ 
+ 	for_each_component_dais(component, dai) {
+ 		for_each_pcm_streams(stream)
+diff --git a/sound/soc/fsl/fsl_spdif.c b/sound/soc/fsl/fsl_spdif.c
+index 15bcb0f38ec9e..d01e8d516df1f 100644
+--- a/sound/soc/fsl/fsl_spdif.c
++++ b/sound/soc/fsl/fsl_spdif.c
+@@ -544,6 +544,8 @@ static void fsl_spdif_shutdown(struct snd_pcm_substream *substream,
+ 		mask = SCR_TXFIFO_AUTOSYNC_MASK | SCR_TXFIFO_CTRL_MASK |
+ 			SCR_TXSEL_MASK | SCR_USRC_SEL_MASK |
+ 			SCR_TXFIFO_FSEL_MASK;
++		/* Disable TX clock */
++		regmap_update_bits(regmap, REG_SPDIF_STC, STC_TXCLK_ALL_EN_MASK, 0);
+ 	} else {
+ 		scr = SCR_RXFIFO_OFF | SCR_RXFIFO_CTL_ZERO;
+ 		mask = SCR_RXFIFO_FSEL_MASK | SCR_RXFIFO_AUTOSYNC_MASK|
+diff --git a/sound/soc/fsl/imx-es8328.c b/sound/soc/fsl/imx-es8328.c
+index fad1eb6253d53..9e602c3456196 100644
+--- a/sound/soc/fsl/imx-es8328.c
++++ b/sound/soc/fsl/imx-es8328.c
+@@ -87,6 +87,7 @@ static int imx_es8328_probe(struct platform_device *pdev)
+ 	if (int_port > MUX_PORT_MAX || int_port == 0) {
+ 		dev_err(dev, "mux-int-port: hardware only has %d mux ports\n",
+ 			MUX_PORT_MAX);
++		ret = -EINVAL;
+ 		goto fail;
+ 	}
+ 
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index 6cada4c1e283b..d0d79f47bfdd5 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -255,7 +255,7 @@ int asoc_simple_hw_params(struct snd_pcm_substream *substream,
+ 	struct simple_dai_props *dai_props =
+ 		simple_priv_to_props(priv, rtd->num);
+ 	unsigned int mclk, mclk_fs = 0;
+-	int ret = 0;
++	int ret;
+ 
+ 	if (dai_props->mclk_fs)
+ 		mclk_fs = dai_props->mclk_fs;
+diff --git a/sound/soc/mxs/mxs-saif.c b/sound/soc/mxs/mxs-saif.c
+index 07f8cf9980e31..f2eda81985e27 100644
+--- a/sound/soc/mxs/mxs-saif.c
++++ b/sound/soc/mxs/mxs-saif.c
+@@ -455,7 +455,10 @@ static int mxs_saif_hw_params(struct snd_pcm_substream *substream,
+ 		* basic clock which should be fast enough for the internal
+ 		* logic.
+ 		*/
+-		clk_enable(saif->clk);
++		ret = clk_enable(saif->clk);
++		if (ret)
++			return ret;
++
+ 		ret = clk_set_rate(saif->clk, 24000000);
+ 		clk_disable(saif->clk);
+ 		if (ret)
+diff --git a/sound/soc/mxs/mxs-sgtl5000.c b/sound/soc/mxs/mxs-sgtl5000.c
+index a6407f4388de7..fb721bc499496 100644
+--- a/sound/soc/mxs/mxs-sgtl5000.c
++++ b/sound/soc/mxs/mxs-sgtl5000.c
+@@ -118,6 +118,9 @@ static int mxs_sgtl5000_probe(struct platform_device *pdev)
+ 	codec_np = of_parse_phandle(np, "audio-codec", 0);
+ 	if (!saif_np[0] || !saif_np[1] || !codec_np) {
+ 		dev_err(&pdev->dev, "phandle missing or invalid\n");
++		of_node_put(codec_np);
++		of_node_put(saif_np[0]);
++		of_node_put(saif_np[1]);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c
+index fa84ec695b525..785baf98f9da2 100644
+--- a/sound/soc/rockchip/rockchip_i2s.c
++++ b/sound/soc/rockchip/rockchip_i2s.c
+@@ -624,20 +624,23 @@ static int rockchip_i2s_probe(struct platform_device *pdev)
+ 	i2s->mclk = devm_clk_get(&pdev->dev, "i2s_clk");
+ 	if (IS_ERR(i2s->mclk)) {
+ 		dev_err(&pdev->dev, "Can't retrieve i2s master clock\n");
+-		return PTR_ERR(i2s->mclk);
++		ret = PTR_ERR(i2s->mclk);
++		goto err_clk;
+ 	}
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	regs = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(regs))
+-		return PTR_ERR(regs);
++	regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
++	if (IS_ERR(regs)) {
++		ret = PTR_ERR(regs);
++		goto err_clk;
++	}
+ 
+ 	i2s->regmap = devm_regmap_init_mmio(&pdev->dev, regs,
+ 					    &rockchip_i2s_regmap_config);
+ 	if (IS_ERR(i2s->regmap)) {
+ 		dev_err(&pdev->dev,
+ 			"Failed to initialise managed register map\n");
+-		return PTR_ERR(i2s->regmap);
++		ret = PTR_ERR(i2s->regmap);
++		goto err_clk;
+ 	}
+ 
+ 	i2s->playback_dma_data.addr = res->start + I2S_TXDR;
+@@ -696,7 +699,8 @@ err_suspend:
+ 		i2s_runtime_suspend(&pdev->dev);
+ err_pm_disable:
+ 	pm_runtime_disable(&pdev->dev);
+-
++err_clk:
++	clk_disable_unprepare(i2s->hclk);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/sh/fsi.c b/sound/soc/sh/fsi.c
+index 3c574792231bc..0fa72907d5bf1 100644
+--- a/sound/soc/sh/fsi.c
++++ b/sound/soc/sh/fsi.c
+@@ -816,14 +816,27 @@ static int fsi_clk_enable(struct device *dev,
+ 			return ret;
+ 		}
+ 
+-		clk_enable(clock->xck);
+-		clk_enable(clock->ick);
+-		clk_enable(clock->div);
++		ret = clk_enable(clock->xck);
++		if (ret)
++			goto err;
++		ret = clk_enable(clock->ick);
++		if (ret)
++			goto disable_xck;
++		ret = clk_enable(clock->div);
++		if (ret)
++			goto disable_ick;
+ 
+ 		clock->count++;
+ 	}
+ 
+ 	return ret;
++
++disable_ick:
++	clk_disable(clock->ick);
++disable_xck:
++	clk_disable(clock->xck);
++err:
++	return ret;
+ }
+ 
+ static int fsi_clk_disable(struct device *dev,
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index 3a6a60215e815..d0f3ff8edd904 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -766,6 +766,11 @@ int snd_soc_new_compress(struct snd_soc_pcm_runtime *rtd, int num)
+ 		return -EINVAL;
+ 	}
+ 
++	if (!codec_dai) {
++		dev_err(rtd->card->dev, "Missing codec\n");
++		return -EINVAL;
++	}
++
+ 	/* check client and interface hw capabilities */
+ 	if (snd_soc_dai_stream_valid(codec_dai, SNDRV_PCM_STREAM_PLAYBACK) &&
+ 	    snd_soc_dai_stream_valid(cpu_dai,   SNDRV_PCM_STREAM_PLAYBACK))
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 1332965968646..a6d6d10cd471b 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -3020,7 +3020,7 @@ int snd_soc_get_dai_name(struct of_phandle_args *args,
+ 	for_each_component(pos) {
+ 		component_of_node = soc_component_to_node(pos);
+ 
+-		if (component_of_node != args->np)
++		if (component_of_node != args->np || !pos->num_dai)
+ 			continue;
+ 
+ 		ret = snd_soc_component_of_xlate_dai_name(pos, args, dai_name);
+diff --git a/sound/soc/soc-generic-dmaengine-pcm.c b/sound/soc/soc-generic-dmaengine-pcm.c
+index 9ef80a48707eb..0d100b4e43f7e 100644
+--- a/sound/soc/soc-generic-dmaengine-pcm.c
++++ b/sound/soc/soc-generic-dmaengine-pcm.c
+@@ -83,10 +83,10 @@ static int dmaengine_pcm_hw_params(struct snd_soc_component *component,
+ 
+ 	memset(&slave_config, 0, sizeof(slave_config));
+ 
+-	if (!pcm->config)
+-		prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config;
+-	else
++	if (pcm->config && pcm->config->prepare_slave_config)
+ 		prepare_slave_config = pcm->config->prepare_slave_config;
++	else
++		prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config;
+ 
+ 	if (prepare_slave_config) {
+ 		ret = prepare_slave_config(substream, params, &slave_config);
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index 4d24ac255d253..23a5f9a52da0f 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -578,7 +578,8 @@ static int soc_tplg_kcontrol_bind_io(struct snd_soc_tplg_ctl_hdr *hdr,
+ 
+ 	if (le32_to_cpu(hdr->ops.info) == SND_SOC_TPLG_CTL_BYTES
+ 		&& k->iface & SNDRV_CTL_ELEM_IFACE_MIXER
+-		&& k->access & SNDRV_CTL_ELEM_ACCESS_TLV_READWRITE
++		&& (k->access & SNDRV_CTL_ELEM_ACCESS_TLV_READ
++		    || k->access & SNDRV_CTL_ELEM_ACCESS_TLV_WRITE)
+ 		&& k->access & SNDRV_CTL_ELEM_ACCESS_TLV_CALLBACK) {
+ 		struct soc_bytes_ext *sbe;
+ 		struct snd_soc_tplg_bytes_control *be;
+diff --git a/sound/soc/sof/imx/imx8m.c b/sound/soc/sof/imx/imx8m.c
+index cb822d9537678..6943c05273ae7 100644
+--- a/sound/soc/sof/imx/imx8m.c
++++ b/sound/soc/sof/imx/imx8m.c
+@@ -191,6 +191,7 @@ static int imx8m_probe(struct snd_sof_dev *sdev)
+ 	}
+ 
+ 	ret = of_address_to_resource(res_node, 0, &res);
++	of_node_put(res_node);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to get reserved region address\n");
+ 		goto exit_pdev_unregister;
+diff --git a/sound/soc/sof/intel/hda-loader.c b/sound/soc/sof/intel/hda-loader.c
+index 2707a16c6a4d3..347636a80b487 100644
+--- a/sound/soc/sof/intel/hda-loader.c
++++ b/sound/soc/sof/intel/hda-loader.c
+@@ -47,7 +47,7 @@ static struct hdac_ext_stream *cl_stream_prepare(struct snd_sof_dev *sdev, unsig
+ 	ret = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV_SG, &pci->dev, size, dmab);
+ 	if (ret < 0) {
+ 		dev_err(sdev->dev, "error: memory alloc failed: %x\n", ret);
+-		goto error;
++		goto out_put;
+ 	}
+ 
+ 	hstream->period_bytes = 0;/* initialize period_bytes */
+@@ -58,22 +58,23 @@ static struct hdac_ext_stream *cl_stream_prepare(struct snd_sof_dev *sdev, unsig
+ 		ret = hda_dsp_iccmax_stream_hw_params(sdev, dsp_stream, dmab, NULL);
+ 		if (ret < 0) {
+ 			dev_err(sdev->dev, "error: iccmax stream prepare failed: %x\n", ret);
+-			goto error;
++			goto out_free;
+ 		}
+ 	} else {
+ 		ret = hda_dsp_stream_hw_params(sdev, dsp_stream, dmab, NULL);
+ 		if (ret < 0) {
+ 			dev_err(sdev->dev, "error: hdac prepare failed: %x\n", ret);
+-			goto error;
++			goto out_free;
+ 		}
+ 		hda_dsp_stream_spib_config(sdev, dsp_stream, HDA_DSP_SPIB_ENABLE, size);
+ 	}
+ 
+ 	return dsp_stream;
+ 
+-error:
+-	hda_dsp_stream_put(sdev, direction, hstream->stream_tag);
++out_free:
+ 	snd_dma_free_pages(dmab);
++out_put:
++	hda_dsp_stream_put(sdev, direction, hstream->stream_tag);
+ 	return ERR_PTR(ret);
+ }
+ 
+diff --git a/sound/soc/ti/davinci-i2s.c b/sound/soc/ti/davinci-i2s.c
+index dd34504c09ba8..4895bcee1f557 100644
+--- a/sound/soc/ti/davinci-i2s.c
++++ b/sound/soc/ti/davinci-i2s.c
+@@ -708,7 +708,9 @@ static int davinci_i2s_probe(struct platform_device *pdev)
+ 	dev->clk = clk_get(&pdev->dev, NULL);
+ 	if (IS_ERR(dev->clk))
+ 		return -ENODEV;
+-	clk_enable(dev->clk);
++	ret = clk_enable(dev->clk);
++	if (ret)
++		goto err_put_clk;
+ 
+ 	dev->dev = &pdev->dev;
+ 	dev_set_drvdata(&pdev->dev, dev);
+@@ -730,6 +732,7 @@ err_unregister_component:
+ 	snd_soc_unregister_component(&pdev->dev);
+ err_release_clk:
+ 	clk_disable(dev->clk);
++err_put_clk:
+ 	clk_put(dev->clk);
+ 	return ret;
+ }
+diff --git a/sound/soc/xilinx/xlnx_formatter_pcm.c b/sound/soc/xilinx/xlnx_formatter_pcm.c
+index ce19a6058b279..5c4158069a5a8 100644
+--- a/sound/soc/xilinx/xlnx_formatter_pcm.c
++++ b/sound/soc/xilinx/xlnx_formatter_pcm.c
+@@ -84,6 +84,7 @@ struct xlnx_pcm_drv_data {
+ 	struct snd_pcm_substream *play_stream;
+ 	struct snd_pcm_substream *capture_stream;
+ 	struct clk *axi_clk;
++	unsigned int sysclk;
+ };
+ 
+ /*
+@@ -314,6 +315,15 @@ static irqreturn_t xlnx_s2mm_irq_handler(int irq, void *arg)
+ 	return IRQ_NONE;
+ }
+ 
++static int xlnx_formatter_set_sysclk(struct snd_soc_component *component,
++				     int clk_id, int source, unsigned int freq, int dir)
++{
++	struct xlnx_pcm_drv_data *adata = dev_get_drvdata(component->dev);
++
++	adata->sysclk = freq;
++	return 0;
++}
++
+ static int xlnx_formatter_pcm_open(struct snd_soc_component *component,
+ 				   struct snd_pcm_substream *substream)
+ {
+@@ -450,11 +460,25 @@ static int xlnx_formatter_pcm_hw_params(struct snd_soc_component *component,
+ 	u64 size;
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+ 	struct xlnx_pcm_stream_param *stream_data = runtime->private_data;
++	struct xlnx_pcm_drv_data *adata = dev_get_drvdata(component->dev);
+ 
+ 	active_ch = params_channels(params);
+ 	if (active_ch > stream_data->ch_limit)
+ 		return -EINVAL;
+ 
++	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK &&
++	    adata->sysclk) {
++		unsigned int mclk_fs = adata->sysclk / params_rate(params);
++
++		if (adata->sysclk % params_rate(params) != 0) {
++			dev_warn(component->dev, "sysclk %u not divisible by rate %u\n",
++				 adata->sysclk, params_rate(params));
++			return -EINVAL;
++		}
++
++		writel(mclk_fs, stream_data->mmio + XLNX_AUD_FS_MULTIPLIER);
++	}
++
+ 	if (substream->stream == SNDRV_PCM_STREAM_CAPTURE &&
+ 	    stream_data->xfer_mode == AES_TO_PCM) {
+ 		val = readl(stream_data->mmio + XLNX_AUD_STS);
+@@ -552,6 +576,7 @@ static int xlnx_formatter_pcm_new(struct snd_soc_component *component,
+ 
+ static const struct snd_soc_component_driver xlnx_asoc_component = {
+ 	.name		= DRV_NAME,
++	.set_sysclk	= xlnx_formatter_set_sysclk,
+ 	.open		= xlnx_formatter_pcm_open,
+ 	.close		= xlnx_formatter_pcm_close,
+ 	.hw_params	= xlnx_formatter_pcm_hw_params,
+diff --git a/sound/spi/at73c213.c b/sound/spi/at73c213.c
+index 76c0e37a838cf..8a2da6b1012eb 100644
+--- a/sound/spi/at73c213.c
++++ b/sound/spi/at73c213.c
+@@ -218,7 +218,9 @@ static int snd_at73c213_pcm_open(struct snd_pcm_substream *substream)
+ 	runtime->hw = snd_at73c213_playback_hw;
+ 	chip->substream = substream;
+ 
+-	clk_enable(chip->ssc->clk);
++	err = clk_enable(chip->ssc->clk);
++	if (err)
++		return err;
+ 
+ 	return 0;
+ }
+@@ -776,7 +778,9 @@ static int snd_at73c213_chip_init(struct snd_at73c213 *chip)
+ 		goto out;
+ 
+ 	/* Enable DAC master clock. */
+-	clk_enable(chip->board->dac_clk);
++	retval = clk_enable(chip->board->dac_clk);
++	if (retval)
++		goto out;
+ 
+ 	/* Initialize at73c213 on SPI bus. */
+ 	retval = snd_at73c213_write_reg(chip, DAC_RST, 0x04);
+@@ -889,7 +893,9 @@ static int snd_at73c213_dev_init(struct snd_card *card,
+ 	chip->card = card;
+ 	chip->irq = -1;
+ 
+-	clk_enable(chip->ssc->clk);
++	retval = clk_enable(chip->ssc->clk);
++	if (retval)
++		return retval;
+ 
+ 	retval = request_irq(irq, snd_at73c213_interrupt, 0, "at73c213", chip);
+ 	if (retval) {
+@@ -1008,7 +1014,9 @@ static int snd_at73c213_remove(struct spi_device *spi)
+ 	int retval;
+ 
+ 	/* Stop playback. */
+-	clk_enable(chip->ssc->clk);
++	retval = clk_enable(chip->ssc->clk);
++	if (retval)
++		goto out;
+ 	ssc_writel(chip->ssc->regs, CR, SSC_BIT(CR_TXDIS));
+ 	clk_disable(chip->ssc->clk);
+ 
+@@ -1088,9 +1096,16 @@ static int snd_at73c213_resume(struct device *dev)
+ {
+ 	struct snd_card *card = dev_get_drvdata(dev);
+ 	struct snd_at73c213 *chip = card->private_data;
++	int retval;
+ 
+-	clk_enable(chip->board->dac_clk);
+-	clk_enable(chip->ssc->clk);
++	retval = clk_enable(chip->board->dac_clk);
++	if (retval)
++		return retval;
++	retval = clk_enable(chip->ssc->clk);
++	if (retval) {
++		clk_disable(chip->board->dac_clk);
++		return retval;
++	}
+ 	ssc_writel(chip->ssc->regs, CR, SSC_BIT(CR_TXEN));
+ 
+ 	return 0;
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index 762bf87c26a3e..e440cd7f32a6f 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -1490,8 +1490,8 @@ union bpf_attr {
+  * 	Return
+  * 		The return value depends on the result of the test, and can be:
+  *
+- *		* 0, if current task belongs to the cgroup2.
+- *		* 1, if current task does not belong to the cgroup2.
++ *		* 1, if current task belongs to the cgroup2.
++ *		* 0, if current task does not belong to the cgroup2.
+  * 		* A negative error code, if an error occurred.
+  *
+  * long bpf_skb_change_tail(struct sk_buff *skb, u32 len, u64 flags)
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 0911aea4cdbe5..bd22853be4a6b 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -1416,6 +1416,11 @@ static const char *btf_dump_resolve_name(struct btf_dump *d, __u32 id,
+ 	if (s->name_resolved)
+ 		return *cached_name ? *cached_name : orig_name;
+ 
++	if (btf_is_fwd(t) || (btf_is_enum(t) && btf_vlen(t) == 0)) {
++		s->name_resolved = 1;
++		return orig_name;
++	}
++
+ 	dup_cnt = btf_dump_name_dups(d, name_map, orig_name);
+ 	if (dup_cnt > 1) {
+ 		const size_t max_len = 256;
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index b337d6f29098b..61df26f048d91 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -10923,6 +10923,9 @@ void bpf_object__detach_skeleton(struct bpf_object_skeleton *s)
+ 
+ void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s)
+ {
++	if (!s)
++		return;
++
+ 	if (s->progs)
+ 		bpf_object__detach_skeleton(s);
+ 	if (s->obj)
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index 3028f932e10c0..c4390ef98b192 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -895,12 +895,23 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
+ 
+ int xsk_umem__delete(struct xsk_umem *umem)
+ {
++	struct xdp_mmap_offsets off;
++	int err;
++
+ 	if (!umem)
+ 		return 0;
+ 
+ 	if (umem->refcount)
+ 		return -EBUSY;
+ 
++	err = xsk_get_mmap_offsets(umem->fd, &off);
++	if (!err && umem->fill_save && umem->comp_save) {
++		munmap(umem->fill_save->ring - off.fr.desc,
++		       off.fr.desc + umem->config.fill_size * sizeof(__u64));
++		munmap(umem->comp_save->ring - off.cr.desc,
++		       off.cr.desc + umem->config.comp_size * sizeof(__u64));
++	}
++
+ 	close(umem->fd);
+ 	free(umem);
+ 
+diff --git a/tools/testing/selftests/bpf/progs/test_sock_fields.c b/tools/testing/selftests/bpf/progs/test_sock_fields.c
+index 81b57b9aaaeae..7967348b11af6 100644
+--- a/tools/testing/selftests/bpf/progs/test_sock_fields.c
++++ b/tools/testing/selftests/bpf/progs/test_sock_fields.c
+@@ -113,7 +113,7 @@ static void tpcpy(struct bpf_tcp_sock *dst,
+ 
+ #define RET_LOG() ({						\
+ 	linum = __LINE__;					\
+-	bpf_map_update_elem(&linum_map, &linum_idx, &linum, BPF_NOEXIST);	\
++	bpf_map_update_elem(&linum_map, &linum_idx, &linum, BPF_ANY);	\
+ 	return CG_OK;						\
+ })
+ 
+diff --git a/tools/testing/selftests/bpf/test_lirc_mode2.sh b/tools/testing/selftests/bpf/test_lirc_mode2.sh
+index ec4e15948e406..5252b91f48a18 100755
+--- a/tools/testing/selftests/bpf/test_lirc_mode2.sh
++++ b/tools/testing/selftests/bpf/test_lirc_mode2.sh
+@@ -3,6 +3,7 @@
+ 
+ # Kselftest framework requirement - SKIP code is 4.
+ ksft_skip=4
++ret=$ksft_skip
+ 
+ msg="skip all tests:"
+ if [ $UID != 0 ]; then
+@@ -25,7 +26,7 @@ do
+ 	fi
+ done
+ 
+-if [ -n $LIRCDEV ];
++if [ -n "$LIRCDEV" ];
+ then
+ 	TYPE=lirc_mode2
+ 	./test_lirc_mode2_user $LIRCDEV $INPUTDEV
+@@ -36,3 +37,5 @@ then
+ 		echo -e ${GREEN}"PASS: $TYPE"${NC}
+ 	fi
+ fi
++
++exit $ret
+diff --git a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
+index b497bb85b667f..6c69c42b1d607 100755
+--- a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
++++ b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
+@@ -120,6 +120,14 @@ setup()
+ 	ip netns exec ${NS2} sysctl -wq net.ipv4.conf.default.rp_filter=0
+ 	ip netns exec ${NS3} sysctl -wq net.ipv4.conf.default.rp_filter=0
+ 
++	# disable IPv6 DAD because it sometimes takes too long and fails tests
++	ip netns exec ${NS1} sysctl -wq net.ipv6.conf.all.accept_dad=0
++	ip netns exec ${NS2} sysctl -wq net.ipv6.conf.all.accept_dad=0
++	ip netns exec ${NS3} sysctl -wq net.ipv6.conf.all.accept_dad=0
++	ip netns exec ${NS1} sysctl -wq net.ipv6.conf.default.accept_dad=0
++	ip netns exec ${NS2} sysctl -wq net.ipv6.conf.default.accept_dad=0
++	ip netns exec ${NS3} sysctl -wq net.ipv6.conf.default.accept_dad=0
++
+ 	ip link add veth1 type veth peer name veth2
+ 	ip link add veth3 type veth peer name veth4
+ 	ip link add veth5 type veth peer name veth6
+@@ -289,7 +297,7 @@ test_ping()
+ 		ip netns exec ${NS1} ping  -c 1 -W 1 -I veth1 ${IPv4_DST} 2>&1 > /dev/null
+ 		RET=$?
+ 	elif [ "${PROTO}" == "IPv6" ] ; then
+-		ip netns exec ${NS1} ping6 -c 1 -W 6 -I veth1 ${IPv6_DST} 2>&1 > /dev/null
++		ip netns exec ${NS1} ping6 -c 1 -W 1 -I veth1 ${IPv6_DST} 2>&1 > /dev/null
+ 		RET=$?
+ 	else
+ 		echo "    test_ping: unknown PROTO: ${PROTO}"
+diff --git a/tools/testing/selftests/net/test_vxlan_under_vrf.sh b/tools/testing/selftests/net/test_vxlan_under_vrf.sh
+index 09f9ed92cbe4c..a44b9aca74272 100755
+--- a/tools/testing/selftests/net/test_vxlan_under_vrf.sh
++++ b/tools/testing/selftests/net/test_vxlan_under_vrf.sh
+@@ -118,11 +118,11 @@ echo "[ OK ]"
+ 
+ # Move the underlay to a non-default VRF
+ ip -netns hv-1 link set veth0 vrf vrf-underlay
+-ip -netns hv-1 link set veth0 down
+-ip -netns hv-1 link set veth0 up
++ip -netns hv-1 link set vxlan0 down
++ip -netns hv-1 link set vxlan0 up
+ ip -netns hv-2 link set veth0 vrf vrf-underlay
+-ip -netns hv-2 link set veth0 down
+-ip -netns hv-2 link set veth0 up
++ip -netns hv-2 link set vxlan0 down
++ip -netns hv-2 link set vxlan0 up
+ 
+ echo -n "Check VM connectivity through VXLAN (underlay in a VRF)            "
+ ip netns exec vm-1 ping -c 1 -W 1 10.0.0.2 &> /dev/null || (echo "[FAIL]"; false)
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index 2cf32e6b376e1..01ec6876e8f58 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -40,9 +40,9 @@ TEST_GEN_FILES += userfaultfd
+ TEST_GEN_FILES += khugepaged
+ 
+ ifeq ($(MACHINE),x86_64)
+-CAN_BUILD_I386 := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_32bit_program.c -m32)
+-CAN_BUILD_X86_64 := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_64bit_program.c)
+-CAN_BUILD_WITH_NOPIE := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_program.c -no-pie)
++CAN_BUILD_I386 := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_32bit_program.c -m32)
++CAN_BUILD_X86_64 := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_64bit_program.c)
++CAN_BUILD_WITH_NOPIE := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_program.c -no-pie)
+ 
+ TARGETS := protection_keys
+ BINARIES_32 := $(TARGETS:%=%_32)
+diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
+index 6703c7906b714..f1b675a4040b7 100644
+--- a/tools/testing/selftests/x86/Makefile
++++ b/tools/testing/selftests/x86/Makefile
+@@ -6,9 +6,9 @@ include ../lib.mk
+ .PHONY: all all_32 all_64 warn_32bit_failure clean
+ 
+ UNAME_M := $(shell uname -m)
+-CAN_BUILD_I386 := $(shell ./check_cc.sh $(CC) trivial_32bit_program.c -m32)
+-CAN_BUILD_X86_64 := $(shell ./check_cc.sh $(CC) trivial_64bit_program.c)
+-CAN_BUILD_WITH_NOPIE := $(shell ./check_cc.sh $(CC) trivial_program.c -no-pie)
++CAN_BUILD_I386 := $(shell ./check_cc.sh "$(CC)" trivial_32bit_program.c -m32)
++CAN_BUILD_X86_64 := $(shell ./check_cc.sh "$(CC)" trivial_64bit_program.c)
++CAN_BUILD_WITH_NOPIE := $(shell ./check_cc.sh "$(CC)" trivial_program.c -no-pie)
+ 
+ TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt test_mremap_vdso \
+ 			check_initial_reg_state sigreturn iopl ioperm \
+diff --git a/tools/testing/selftests/x86/check_cc.sh b/tools/testing/selftests/x86/check_cc.sh
+index 3e2089c8cf549..8c669c0d662ee 100755
+--- a/tools/testing/selftests/x86/check_cc.sh
++++ b/tools/testing/selftests/x86/check_cc.sh
+@@ -7,7 +7,7 @@ CC="$1"
+ TESTPROG="$2"
+ shift 2
+ 
+-if "$CC" -o /dev/null "$TESTPROG" -O0 "$@" 2>/dev/null; then
++if [ -n "$CC" ] && $CC -o /dev/null "$TESTPROG" -O0 "$@" 2>/dev/null; then
+     echo 1
+ else
+     echo 0
+diff --git a/tools/virtio/virtio_test.c b/tools/virtio/virtio_test.c
+index cb3f29c09aff3..23f142af544ad 100644
+--- a/tools/virtio/virtio_test.c
++++ b/tools/virtio/virtio_test.c
+@@ -130,6 +130,7 @@ static void vdev_info_init(struct vdev_info* dev, unsigned long long features)
+ 	memset(dev, 0, sizeof *dev);
+ 	dev->vdev.features = features;
+ 	INIT_LIST_HEAD(&dev->vdev.vqs);
++	spin_lock_init(&dev->vdev.vqs_list_lock);
+ 	dev->buf_size = 1024;
+ 	dev->buf = malloc(dev->buf_size);
+ 	assert(dev->buf);
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index d22de43925076..9cd8ca2d8bc16 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -114,6 +114,8 @@ EXPORT_SYMBOL_GPL(kvm_debugfs_dir);
+ static int kvm_debugfs_num_entries;
+ static const struct file_operations stat_fops_per_vm;
+ 
++static struct file_operations kvm_chardev_ops;
++
+ static long kvm_vcpu_ioctl(struct file *file, unsigned int ioctl,
+ 			   unsigned long arg);
+ #ifdef CONFIG_KVM_COMPAT
+@@ -818,6 +820,16 @@ static struct kvm *kvm_create_vm(unsigned long type)
+ 
+ 	preempt_notifier_inc();
+ 
++	/*
++	 * When the fd passed to this ioctl() is opened it pins the module,
++	 * but try_module_get() also prevents getting a reference if the module
++	 * is in MODULE_STATE_GOING (e.g. if someone ran "rmmod --wait").
++	 */
++	if (!try_module_get(kvm_chardev_ops.owner)) {
++		r = -ENODEV;
++		goto out_err;
++	}
++
+ 	return kvm;
+ 
+ out_err:
+@@ -896,6 +908,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
+ 	preempt_notifier_dec();
+ 	hardware_disable_all();
+ 	mmdrop(mm);
++	module_put(kvm_chardev_ops.owner);
+ }
+ 
+ void kvm_get_kvm(struct kvm *kvm)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-04-12 19:08 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-04-12 19:08 UTC (permalink / raw
  To: gentoo-commits

commit:     6ccec45907691aaa9db146ee81a0728f2fcc558c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 12 19:08:33 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Apr 12 19:08:33 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6ccec459

Select AUTOFS_FS when GENTOO_LINUX_INIT_SYSTEMD selected

Bug: https://bugs.gentoo.org/838082

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 7e387d70..c9e2f30a 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -8,7 +8,7 @@
 +source "distro/Kconfig"
 --- /dev/null	2022-01-30 08:12:05.041788304 -0500
 +++ b/distro/Kconfig	2022-01-30 15:28:10.030352980 -0500
-@@ -0,0 +1,285 @@
+@@ -0,0 +1,286 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -122,6 +122,7 @@
 +	depends on GENTOO_LINUX && GENTOO_LINUX_UDEV
 +
 +	select AUTOFS4_FS
++	select AUTOFS_FS
 +	select BLK_DEV_BSG
 +	select BPF_SYSCALL
 +	select CGROUP_BPF
@@ -353,4 +354,3 @@ index 24c045b24..e13fc740c 100644
  	  This is the portion of low virtual memory which should be protected
 -- 
 2.31.1
-```


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-04-13 19:48 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-04-13 19:48 UTC (permalink / raw
  To: gentoo-commits

commit:     ae54ff91489cdbd4a65c38d7c453b4dfa1523c20
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 13 19:48:39 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 13 19:48:39 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ae54ff91

Linux patch 5.10.111

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1110_linux-5.10.111.patch | 5693 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5697 insertions(+)

diff --git a/0000_README b/0000_README
index 958564ab..9076962f 100644
--- a/0000_README
+++ b/0000_README
@@ -483,6 +483,10 @@ Patch:  1109_linux-5.10.110.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.110
 
+Patch:  1110_linux-5.10.111.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.111
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1110_linux-5.10.111.patch b/1110_linux-5.10.111.patch
new file mode 100644
index 00000000..c67cdbaa
--- /dev/null
+++ b/1110_linux-5.10.111.patch
@@ -0,0 +1,5693 @@
+diff --git a/Makefile b/Makefile
+index c4674e8bb3e81..8695a13fe7cd6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 110
++SUBLEVEL = 111
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index bfbf0c4c7c5e5..39f5c1672f480 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -75,6 +75,7 @@
+ #define ARM_CPU_PART_CORTEX_A77		0xD0D
+ #define ARM_CPU_PART_NEOVERSE_V1	0xD40
+ #define ARM_CPU_PART_CORTEX_A78		0xD41
++#define ARM_CPU_PART_CORTEX_A78AE	0xD42
+ #define ARM_CPU_PART_CORTEX_X1		0xD44
+ #define ARM_CPU_PART_CORTEX_A510	0xD46
+ #define ARM_CPU_PART_CORTEX_A710	0xD47
+@@ -123,6 +124,7 @@
+ #define MIDR_CORTEX_A77	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
+ #define MIDR_NEOVERSE_V1	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1)
+ #define MIDR_CORTEX_A78	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78)
++#define MIDR_CORTEX_A78AE	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE)
+ #define MIDR_CORTEX_X1	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1)
+ #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510)
+ #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
+diff --git a/arch/arm64/include/asm/module.lds.h b/arch/arm64/include/asm/module.lds.h
+index 810045628c66e..0522337d600a7 100644
+--- a/arch/arm64/include/asm/module.lds.h
++++ b/arch/arm64/include/asm/module.lds.h
+@@ -1,7 +1,7 @@
+ #ifdef CONFIG_ARM64_MODULE_PLTS
+ SECTIONS {
+-	.plt 0 (NOLOAD) : { BYTE(0) }
+-	.init.plt 0 (NOLOAD) : { BYTE(0) }
+-	.text.ftrace_trampoline 0 (NOLOAD) : { BYTE(0) }
++	.plt 0 : { BYTE(0) }
++	.init.plt 0 : { BYTE(0) }
++	.text.ftrace_trampoline 0 : { BYTE(0) }
+ }
+ #endif
+diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
+index 6c0de2f60ea96..7d4fdf9745428 100644
+--- a/arch/arm64/kernel/insn.c
++++ b/arch/arm64/kernel/insn.c
+@@ -216,8 +216,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
+ 	int i, ret = 0;
+ 	struct aarch64_insn_patch *pp = arg;
+ 
+-	/* The first CPU becomes master */
+-	if (atomic_inc_return(&pp->cpu_count) == 1) {
++	/* The last CPU becomes master */
++	if (atomic_inc_return(&pp->cpu_count) == num_online_cpus()) {
+ 		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
+ 			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
+ 							     pp->new_insns[i]);
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index 3dd489b62b29f..6ae53d8cd576f 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -861,6 +861,7 @@ u8 spectre_bhb_loop_affected(int scope)
+ 	if (scope == SCOPE_LOCAL_CPU) {
+ 		static const struct midr_range spectre_bhb_k32_list[] = {
+ 			MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A78AE),
+ 			MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
+ 			MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
+ 			MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
+diff --git a/arch/mips/boot/dts/ingenic/jz4780.dtsi b/arch/mips/boot/dts/ingenic/jz4780.dtsi
+index dfb5a7e1bb21d..830e5dd3550e2 100644
+--- a/arch/mips/boot/dts/ingenic/jz4780.dtsi
++++ b/arch/mips/boot/dts/ingenic/jz4780.dtsi
+@@ -429,7 +429,7 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+ 
+-			eth0_addr: eth-mac-addr@0x22 {
++			eth0_addr: eth-mac-addr@22 {
+ 				reg = <0x22 0x6>;
+ 			};
+ 		};
+diff --git a/arch/mips/include/asm/setup.h b/arch/mips/include/asm/setup.h
+index bb36a400203df..8c56b862fd9c2 100644
+--- a/arch/mips/include/asm/setup.h
++++ b/arch/mips/include/asm/setup.h
+@@ -16,7 +16,7 @@ static inline void setup_8250_early_printk_port(unsigned long base,
+ 	unsigned int reg_shift, unsigned int timeout) {}
+ #endif
+ 
+-extern void set_handler(unsigned long offset, void *addr, unsigned long len);
++void set_handler(unsigned long offset, const void *addr, unsigned long len);
+ extern void set_uncached_handler(unsigned long offset, void *addr, unsigned long len);
+ 
+ typedef void (*vi_handler_t)(void);
+diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
+index e0352958e2f72..b1fe4518bd221 100644
+--- a/arch/mips/kernel/traps.c
++++ b/arch/mips/kernel/traps.c
+@@ -2097,19 +2097,19 @@ static void *set_vi_srs_handler(int n, vi_handler_t addr, int srs)
+ 		 * If no shadow set is selected then use the default handler
+ 		 * that does normal register saving and standard interrupt exit
+ 		 */
+-		extern char except_vec_vi, except_vec_vi_lui;
+-		extern char except_vec_vi_ori, except_vec_vi_end;
+-		extern char rollback_except_vec_vi;
+-		char *vec_start = using_rollback_handler() ?
+-			&rollback_except_vec_vi : &except_vec_vi;
++		extern const u8 except_vec_vi[], except_vec_vi_lui[];
++		extern const u8 except_vec_vi_ori[], except_vec_vi_end[];
++		extern const u8 rollback_except_vec_vi[];
++		const u8 *vec_start = using_rollback_handler() ?
++				      rollback_except_vec_vi : except_vec_vi;
+ #if defined(CONFIG_CPU_MICROMIPS) || defined(CONFIG_CPU_BIG_ENDIAN)
+-		const int lui_offset = &except_vec_vi_lui - vec_start + 2;
+-		const int ori_offset = &except_vec_vi_ori - vec_start + 2;
++		const int lui_offset = except_vec_vi_lui - vec_start + 2;
++		const int ori_offset = except_vec_vi_ori - vec_start + 2;
+ #else
+-		const int lui_offset = &except_vec_vi_lui - vec_start;
+-		const int ori_offset = &except_vec_vi_ori - vec_start;
++		const int lui_offset = except_vec_vi_lui - vec_start;
++		const int ori_offset = except_vec_vi_ori - vec_start;
+ #endif
+-		const int handler_len = &except_vec_vi_end - vec_start;
++		const int handler_len = except_vec_vi_end - vec_start;
+ 
+ 		if (handler_len > VECTORSPACING) {
+ 			/*
+@@ -2317,7 +2317,7 @@ void per_cpu_trap_init(bool is_boot_cpu)
+ }
+ 
+ /* Install CPU exception handler */
+-void set_handler(unsigned long offset, void *addr, unsigned long size)
++void set_handler(unsigned long offset, const void *addr, unsigned long size)
+ {
+ #ifdef CONFIG_CPU_MICROMIPS
+ 	memcpy((void *)(ebase + offset), ((unsigned char *)addr - 1), size);
+diff --git a/arch/mips/ralink/ill_acc.c b/arch/mips/ralink/ill_acc.c
+index bdf53807d7c2b..bea857c9da8b7 100644
+--- a/arch/mips/ralink/ill_acc.c
++++ b/arch/mips/ralink/ill_acc.c
+@@ -61,6 +61,7 @@ static int __init ill_acc_of_setup(void)
+ 	pdev = of_find_device_by_node(np);
+ 	if (!pdev) {
+ 		pr_err("%pOFn: failed to lookup pdev\n", np);
++		of_node_put(np);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/arch/parisc/kernel/patch.c b/arch/parisc/kernel/patch.c
+index 80a0ab372802d..e59574f65e641 100644
+--- a/arch/parisc/kernel/patch.c
++++ b/arch/parisc/kernel/patch.c
+@@ -40,10 +40,7 @@ static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags,
+ 
+ 	*need_unmap = 1;
+ 	set_fixmap(fixmap, page_to_phys(page));
+-	if (flags)
+-		raw_spin_lock_irqsave(&patch_lock, *flags);
+-	else
+-		__acquire(&patch_lock);
++	raw_spin_lock_irqsave(&patch_lock, *flags);
+ 
+ 	return (void *) (__fix_to_virt(fixmap) + (uintaddr & ~PAGE_MASK));
+ }
+@@ -52,10 +49,7 @@ static void __kprobes patch_unmap(int fixmap, unsigned long *flags)
+ {
+ 	clear_fixmap(fixmap);
+ 
+-	if (flags)
+-		raw_spin_unlock_irqrestore(&patch_lock, *flags);
+-	else
+-		__release(&patch_lock);
++	raw_spin_unlock_irqrestore(&patch_lock, *flags);
+ }
+ 
+ void __kprobes __patch_text_multiple(void *addr, u32 *insn, unsigned int len)
+@@ -67,8 +61,9 @@ void __kprobes __patch_text_multiple(void *addr, u32 *insn, unsigned int len)
+ 	int mapped;
+ 
+ 	/* Make sure we don't have any aliases in cache */
+-	flush_kernel_vmap_range(addr, len);
+-	flush_icache_range(start, end);
++	flush_kernel_dcache_range_asm(start, end);
++	flush_kernel_icache_range_asm(start, end);
++	flush_tlb_kernel_range(start, end);
+ 
+ 	p = fixmap = patch_map(addr, FIX_TEXT_POKE0, &flags, &mapped);
+ 
+@@ -81,8 +76,10 @@ void __kprobes __patch_text_multiple(void *addr, u32 *insn, unsigned int len)
+ 			 * We're crossing a page boundary, so
+ 			 * need to remap
+ 			 */
+-			flush_kernel_vmap_range((void *)fixmap,
+-						(p-fixmap) * sizeof(*p));
++			flush_kernel_dcache_range_asm((unsigned long)fixmap,
++						      (unsigned long)p);
++			flush_tlb_kernel_range((unsigned long)fixmap,
++					       (unsigned long)p);
+ 			if (mapped)
+ 				patch_unmap(FIX_TEXT_POKE0, &flags);
+ 			p = fixmap = patch_map(addr, FIX_TEXT_POKE0, &flags,
+@@ -90,10 +87,10 @@ void __kprobes __patch_text_multiple(void *addr, u32 *insn, unsigned int len)
+ 		}
+ 	}
+ 
+-	flush_kernel_vmap_range((void *)fixmap, (p-fixmap) * sizeof(*p));
++	flush_kernel_dcache_range_asm((unsigned long)fixmap, (unsigned long)p);
++	flush_tlb_kernel_range((unsigned long)fixmap, (unsigned long)p);
+ 	if (mapped)
+ 		patch_unmap(FIX_TEXT_POKE0, &flags);
+-	flush_icache_range(start, end);
+ }
+ 
+ void __kprobes __patch_text(void *addr, u32 insn)
+diff --git a/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi b/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi
+index 099a598c74c00..bfe1ed5be3374 100644
+--- a/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi
++++ b/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi
+@@ -139,12 +139,12 @@
+ 		fman@400000 {
+ 			ethernet@e6000 {
+ 				phy-handle = <&phy_rgmii_0>;
+-				phy-connection-type = "rgmii";
++				phy-connection-type = "rgmii-id";
+ 			};
+ 
+ 			ethernet@e8000 {
+ 				phy-handle = <&phy_rgmii_1>;
+-				phy-connection-type = "rgmii";
++				phy-connection-type = "rgmii-id";
+ 			};
+ 
+ 			mdio0: mdio@fc000 {
+diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
+index 254687258f42b..f2c5c26869f1a 100644
+--- a/arch/powerpc/include/asm/page.h
++++ b/arch/powerpc/include/asm/page.h
+@@ -132,7 +132,11 @@ static inline bool pfn_valid(unsigned long pfn)
+ #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
+ #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
+ 
+-#define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))
++#define virt_addr_valid(vaddr)	({					\
++	unsigned long _addr = (unsigned long)vaddr;			\
++	_addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory &&	\
++	pfn_valid(virt_to_pfn(_addr));					\
++})
+ 
+ /*
+  * On Book-E parts we need __va to parse the device tree and we can't
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index cccb32cf0e08c..cf421eb7f90d4 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -1296,6 +1296,12 @@ int __init early_init_dt_scan_rtas(unsigned long node,
+ 	entryp = of_get_flat_dt_prop(node, "linux,rtas-entry", NULL);
+ 	sizep  = of_get_flat_dt_prop(node, "rtas-size", NULL);
+ 
++#ifdef CONFIG_PPC64
++	/* need this feature to decide the crashkernel offset */
++	if (of_get_flat_dt_prop(node, "ibm,hypertas-functions", NULL))
++		powerpc_firmware_features |= FW_FEATURE_LPAR;
++#endif
++
+ 	if (basep && entryp && sizep) {
+ 		rtas.base = *basep;
+ 		rtas.entry = *entryp;
+diff --git a/arch/powerpc/kernel/secvar-sysfs.c b/arch/powerpc/kernel/secvar-sysfs.c
+index a0a78aba2083e..1ee4640a26413 100644
+--- a/arch/powerpc/kernel/secvar-sysfs.c
++++ b/arch/powerpc/kernel/secvar-sysfs.c
+@@ -26,15 +26,18 @@ static ssize_t format_show(struct kobject *kobj, struct kobj_attribute *attr,
+ 	const char *format;
+ 
+ 	node = of_find_compatible_node(NULL, NULL, "ibm,secvar-backend");
+-	if (!of_device_is_available(node))
+-		return -ENODEV;
++	if (!of_device_is_available(node)) {
++		rc = -ENODEV;
++		goto out;
++	}
+ 
+ 	rc = of_property_read_string(node, "format", &format);
+ 	if (rc)
+-		return rc;
++		goto out;
+ 
+ 	rc = sprintf(buf, "%s\n", format);
+ 
++out:
+ 	of_node_put(node);
+ 
+ 	return rc;
+diff --git a/arch/powerpc/kexec/core.c b/arch/powerpc/kexec/core.c
+index 56da5eb2b923a..80c79cb5010c5 100644
+--- a/arch/powerpc/kexec/core.c
++++ b/arch/powerpc/kexec/core.c
+@@ -147,11 +147,18 @@ void __init reserve_crashkernel(void)
+ 	if (!crashk_res.start) {
+ #ifdef CONFIG_PPC64
+ 		/*
+-		 * On 64bit we split the RMO in half but cap it at half of
+-		 * a small SLB (128MB) since the crash kernel needs to place
+-		 * itself and some stacks to be in the first segment.
++		 * On the LPAR platform place the crash kernel to mid of
++		 * RMA size (512MB or more) to ensure the crash kernel
++		 * gets enough space to place itself and some stack to be
++		 * in the first segment. At the same time normal kernel
++		 * also get enough space to allocate memory for essential
++		 * system resource in the first segment. Keep the crash
++		 * kernel starts at 128MB offset on other platforms.
+ 		 */
+-		crashk_res.start = min(0x8000000ULL, (ppc64_rma_size / 2));
++		if (firmware_has_feature(FW_FEATURE_LPAR))
++			crashk_res.start = ppc64_rma_size / 2;
++		else
++			crashk_res.start = min(0x8000000ULL, (ppc64_rma_size / 2));
+ #else
+ 		crashk_res.start = KDUMP_KERNELBASE;
+ #endif
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index fb873a7bb65c8..db95ac482e0ef 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2865,6 +2865,11 @@ config IA32_AOUT
+ config X86_X32
+ 	bool "x32 ABI for 64-bit mode"
+ 	depends on X86_64
++	# llvm-objcopy does not convert x86_64 .note.gnu.property or
++	# compressed debug sections to x86_x32 properly:
++	# https://github.com/ClangBuiltLinux/linux/issues/514
++	# https://github.com/ClangBuiltLinux/linux/issues/1141
++	depends on $(success,$(OBJCOPY) --version | head -n1 | grep -qv llvm)
+ 	help
+ 	  Include code to run binaries for the x32 native 32-bit ABI
+ 	  for 64-bit processors.  An x32 process gets access to the
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index a63df19ef4dad..71e1a2d39f218 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -3611,8 +3611,10 @@ static int em_rdpid(struct x86_emulate_ctxt *ctxt)
+ {
+ 	u64 tsc_aux = 0;
+ 
+-	if (ctxt->ops->get_msr(ctxt, MSR_TSC_AUX, &tsc_aux))
++	if (!ctxt->ops->guest_has_rdpid(ctxt))
+ 		return emulate_ud(ctxt);
++
++	ctxt->ops->get_msr(ctxt, MSR_TSC_AUX, &tsc_aux);
+ 	ctxt->dst.val = tsc_aux;
+ 	return X86EMUL_CONTINUE;
+ }
+diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
+index 7d5be04dc6616..aeed6da60e0c7 100644
+--- a/arch/x86/kvm/kvm_emulate.h
++++ b/arch/x86/kvm/kvm_emulate.h
+@@ -225,6 +225,7 @@ struct x86_emulate_ops {
+ 	bool (*guest_has_long_mode)(struct x86_emulate_ctxt *ctxt);
+ 	bool (*guest_has_movbe)(struct x86_emulate_ctxt *ctxt);
+ 	bool (*guest_has_fxsr)(struct x86_emulate_ctxt *ctxt);
++	bool (*guest_has_rdpid)(struct x86_emulate_ctxt *ctxt);
+ 
+ 	void (*set_nmi_mask)(struct x86_emulate_ctxt *ctxt, bool masked);
+ 
+diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
+index 4e7093bcb64b6..0e9c2322d3988 100644
+--- a/arch/x86/kvm/svm/pmu.c
++++ b/arch/x86/kvm/svm/pmu.c
+@@ -253,12 +253,10 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	/* MSR_EVNTSELn */
+ 	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL);
+ 	if (pmc) {
+-		if (data == pmc->eventsel)
+-			return 0;
+-		if (!(data & pmu->reserved_bits)) {
++		data &= ~pmu->reserved_bits;
++		if (data != pmc->eventsel)
+ 			reprogram_gp_counter(pmc, data);
+-			return 0;
+-		}
++		return 0;
+ 	}
+ 
+ 	return 1;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index a5d6d79b023bc..70d23bec09f5c 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -6875,6 +6875,11 @@ static bool emulator_guest_has_fxsr(struct x86_emulate_ctxt *ctxt)
+ 	return guest_cpuid_has(emul_to_vcpu(ctxt), X86_FEATURE_FXSR);
+ }
+ 
++static bool emulator_guest_has_rdpid(struct x86_emulate_ctxt *ctxt)
++{
++	return guest_cpuid_has(emul_to_vcpu(ctxt), X86_FEATURE_RDPID);
++}
++
+ static ulong emulator_read_gpr(struct x86_emulate_ctxt *ctxt, unsigned reg)
+ {
+ 	return kvm_register_read(emul_to_vcpu(ctxt), reg);
+@@ -6958,6 +6963,7 @@ static const struct x86_emulate_ops emulate_ops = {
+ 	.guest_has_long_mode = emulator_guest_has_long_mode,
+ 	.guest_has_movbe     = emulator_guest_has_movbe,
+ 	.guest_has_fxsr      = emulator_guest_has_fxsr,
++	.guest_has_rdpid     = emulator_guest_has_rdpid,
+ 	.set_nmi_mask        = emulator_set_nmi_mask,
+ 	.get_hflags          = emulator_get_hflags,
+ 	.set_hflags          = emulator_set_hflags,
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index db1378c6ff262..decebcd8ee1c7 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -40,7 +40,8 @@ static void msr_save_context(struct saved_context *ctxt)
+ 	struct saved_msr *end = msr + ctxt->saved_msrs.num;
+ 
+ 	while (msr < end) {
+-		msr->valid = !rdmsrl_safe(msr->info.msr_no, &msr->info.reg.q);
++		if (msr->valid)
++			rdmsrl(msr->info.msr_no, msr->info.reg.q);
+ 		msr++;
+ 	}
+ }
+@@ -427,8 +428,10 @@ static int msr_build_context(const u32 *msr_id, const int num)
+ 	}
+ 
+ 	for (i = saved_msrs->num, j = 0; i < total_num; i++, j++) {
++		u64 dummy;
++
+ 		msr_array[i].info.msr_no	= msr_id[j];
+-		msr_array[i].valid		= false;
++		msr_array[i].valid		= !rdmsrl_safe(msr_id[j], &dummy);
+ 		msr_array[i].info.reg.q		= 0;
+ 	}
+ 	saved_msrs->num   = total_num;
+@@ -503,10 +506,24 @@ static int pm_cpu_check(const struct x86_cpu_id *c)
+ 	return ret;
+ }
+ 
++static void pm_save_spec_msr(void)
++{
++	u32 spec_msr_id[] = {
++		MSR_IA32_SPEC_CTRL,
++		MSR_IA32_TSX_CTRL,
++		MSR_TSX_FORCE_ABORT,
++		MSR_IA32_MCU_OPT_CTRL,
++		MSR_AMD64_LS_CFG,
++	};
++
++	msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
++}
++
+ static int pm_check_save_msr(void)
+ {
+ 	dmi_check_system(msr_save_dmi_table);
+ 	pm_cpu_check(msr_save_cpu_table);
++	pm_save_spec_msr();
+ 
+ 	return 0;
+ }
+diff --git a/arch/x86/xen/smp_hvm.c b/arch/x86/xen/smp_hvm.c
+index 6ff3c887e0b99..b70afdff419ca 100644
+--- a/arch/x86/xen/smp_hvm.c
++++ b/arch/x86/xen/smp_hvm.c
+@@ -19,6 +19,12 @@ static void __init xen_hvm_smp_prepare_boot_cpu(void)
+ 	 */
+ 	xen_vcpu_setup(0);
+ 
++	/*
++	 * Called again in case the kernel boots on vcpu >= MAX_VIRT_CPUS.
++	 * Refer to comments in xen_hvm_init_time_ops().
++	 */
++	xen_hvm_init_time_ops();
++
+ 	/*
+ 	 * The alternative logic (which patches the unlock/lock) runs before
+ 	 * the smp bootup up code is activated. Hence we need to set this up
+diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
+index 91f5b330dcc6d..8183d17e1cf17 100644
+--- a/arch/x86/xen/time.c
++++ b/arch/x86/xen/time.c
+@@ -556,6 +556,11 @@ static void xen_hvm_setup_cpu_clockevents(void)
+ 
+ void __init xen_hvm_init_time_ops(void)
+ {
++	static bool hvm_time_initialized;
++
++	if (hvm_time_initialized)
++		return;
++
+ 	/*
+ 	 * vector callback is needed otherwise we cannot receive interrupts
+ 	 * on cpu > 0 and at this point we don't know how many cpus are
+@@ -565,7 +570,22 @@ void __init xen_hvm_init_time_ops(void)
+ 		return;
+ 
+ 	if (!xen_feature(XENFEAT_hvm_safe_pvclock)) {
+-		pr_info("Xen doesn't support pvclock on HVM, disable pv timer");
++		pr_info_once("Xen doesn't support pvclock on HVM, disable pv timer");
++		return;
++	}
++
++	/*
++	 * Only MAX_VIRT_CPUS 'vcpu_info' are embedded inside 'shared_info'.
++	 * The __this_cpu_read(xen_vcpu) is still NULL when Xen HVM guest
++	 * boots on vcpu >= MAX_VIRT_CPUS (e.g., kexec), To access
++	 * __this_cpu_read(xen_vcpu) via xen_clocksource_read() will panic.
++	 *
++	 * The xen_hvm_init_time_ops() should be called again later after
++	 * __this_cpu_read(xen_vcpu) is available.
++	 */
++	if (!__this_cpu_read(xen_vcpu)) {
++		pr_info("Delay xen_init_time_common() as kernel is running on vcpu=%d\n",
++			xen_vcpu_nr(0));
+ 		return;
+ 	}
+ 
+@@ -577,6 +597,8 @@ void __init xen_hvm_init_time_ops(void)
+ 	x86_platform.calibrate_tsc = xen_tsc_khz;
+ 	x86_platform.get_wallclock = xen_get_wallclock;
+ 	x86_platform.set_wallclock = xen_set_wallclock;
++
++	hvm_time_initialized = true;
+ }
+ #endif
+ 
+diff --git a/arch/xtensa/boot/dts/xtfpga-flash-128m.dtsi b/arch/xtensa/boot/dts/xtfpga-flash-128m.dtsi
+index 9bf8bad1dd18a..c33932568aa73 100644
+--- a/arch/xtensa/boot/dts/xtfpga-flash-128m.dtsi
++++ b/arch/xtensa/boot/dts/xtfpga-flash-128m.dtsi
+@@ -8,19 +8,19 @@
+ 			reg = <0x00000000 0x08000000>;
+ 			bank-width = <2>;
+ 			device-width = <2>;
+-			partition@0x0 {
++			partition@0 {
+ 				label = "data";
+ 				reg = <0x00000000 0x06000000>;
+ 			};
+-			partition@0x6000000 {
++			partition@6000000 {
+ 				label = "boot loader area";
+ 				reg = <0x06000000 0x00800000>;
+ 			};
+-			partition@0x6800000 {
++			partition@6800000 {
+ 				label = "kernel image";
+ 				reg = <0x06800000 0x017e0000>;
+ 			};
+-			partition@0x7fe0000 {
++			partition@7fe0000 {
+ 				label = "boot environment";
+ 				reg = <0x07fe0000 0x00020000>;
+ 			};
+diff --git a/arch/xtensa/boot/dts/xtfpga-flash-16m.dtsi b/arch/xtensa/boot/dts/xtfpga-flash-16m.dtsi
+index 40c2f81f7cb66..7bde2ab2d6fb5 100644
+--- a/arch/xtensa/boot/dts/xtfpga-flash-16m.dtsi
++++ b/arch/xtensa/boot/dts/xtfpga-flash-16m.dtsi
+@@ -8,19 +8,19 @@
+ 			reg = <0x08000000 0x01000000>;
+ 			bank-width = <2>;
+ 			device-width = <2>;
+-			partition@0x0 {
++			partition@0 {
+ 				label = "boot loader area";
+ 				reg = <0x00000000 0x00400000>;
+ 			};
+-			partition@0x400000 {
++			partition@400000 {
+ 				label = "kernel image";
+ 				reg = <0x00400000 0x00600000>;
+ 			};
+-			partition@0xa00000 {
++			partition@a00000 {
+ 				label = "data";
+ 				reg = <0x00a00000 0x005e0000>;
+ 			};
+-			partition@0xfe0000 {
++			partition@fe0000 {
+ 				label = "boot environment";
+ 				reg = <0x00fe0000 0x00020000>;
+ 			};
+diff --git a/arch/xtensa/boot/dts/xtfpga-flash-4m.dtsi b/arch/xtensa/boot/dts/xtfpga-flash-4m.dtsi
+index fb8d3a9f33c23..0655b868749a4 100644
+--- a/arch/xtensa/boot/dts/xtfpga-flash-4m.dtsi
++++ b/arch/xtensa/boot/dts/xtfpga-flash-4m.dtsi
+@@ -8,11 +8,11 @@
+ 			reg = <0x08000000 0x00400000>;
+ 			bank-width = <2>;
+ 			device-width = <2>;
+-			partition@0x0 {
++			partition@0 {
+ 				label = "boot loader area";
+ 				reg = <0x00000000 0x003f0000>;
+ 			};
+-			partition@0x3f0000 {
++			partition@3f0000 {
+ 				label = "boot environment";
+ 				reg = <0x003f0000 0x00010000>;
+ 			};
+diff --git a/drivers/ata/sata_dwc_460ex.c b/drivers/ata/sata_dwc_460ex.c
+index 982fe91125322..464260f668708 100644
+--- a/drivers/ata/sata_dwc_460ex.c
++++ b/drivers/ata/sata_dwc_460ex.c
+@@ -145,7 +145,11 @@ struct sata_dwc_device {
+ #endif
+ };
+ 
+-#define SATA_DWC_QCMD_MAX	32
++/*
++ * Allow one extra special slot for commands and DMA management
++ * to account for libata internal commands.
++ */
++#define SATA_DWC_QCMD_MAX	(ATA_MAX_QUEUE + 1)
+ 
+ struct sata_dwc_device_port {
+ 	struct sata_dwc_device	*hsdev;
+diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
+index 8f879e5c2f670..60b9ca53c0a35 100644
+--- a/drivers/block/drbd/drbd_int.h
++++ b/drivers/block/drbd/drbd_int.h
+@@ -1644,22 +1644,22 @@ struct sib_info {
+ };
+ void drbd_bcast_event(struct drbd_device *device, const struct sib_info *sib);
+ 
+-extern void notify_resource_state(struct sk_buff *,
++extern int notify_resource_state(struct sk_buff *,
+ 				  unsigned int,
+ 				  struct drbd_resource *,
+ 				  struct resource_info *,
+ 				  enum drbd_notification_type);
+-extern void notify_device_state(struct sk_buff *,
++extern int notify_device_state(struct sk_buff *,
+ 				unsigned int,
+ 				struct drbd_device *,
+ 				struct device_info *,
+ 				enum drbd_notification_type);
+-extern void notify_connection_state(struct sk_buff *,
++extern int notify_connection_state(struct sk_buff *,
+ 				    unsigned int,
+ 				    struct drbd_connection *,
+ 				    struct connection_info *,
+ 				    enum drbd_notification_type);
+-extern void notify_peer_device_state(struct sk_buff *,
++extern int notify_peer_device_state(struct sk_buff *,
+ 				     unsigned int,
+ 				     struct drbd_peer_device *,
+ 				     struct peer_device_info *,
+diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
+index bf7de4c7b96c1..f8d0146bf7852 100644
+--- a/drivers/block/drbd/drbd_nl.c
++++ b/drivers/block/drbd/drbd_nl.c
+@@ -4614,7 +4614,7 @@ static int nla_put_notification_header(struct sk_buff *msg,
+ 	return drbd_notification_header_to_skb(msg, &nh, true);
+ }
+ 
+-void notify_resource_state(struct sk_buff *skb,
++int notify_resource_state(struct sk_buff *skb,
+ 			   unsigned int seq,
+ 			   struct drbd_resource *resource,
+ 			   struct resource_info *resource_info,
+@@ -4656,16 +4656,17 @@ void notify_resource_state(struct sk_buff *skb,
+ 		if (err && err != -ESRCH)
+ 			goto failed;
+ 	}
+-	return;
++	return 0;
+ 
+ nla_put_failure:
+ 	nlmsg_free(skb);
+ failed:
+ 	drbd_err(resource, "Error %d while broadcasting event. Event seq:%u\n",
+ 			err, seq);
++	return err;
+ }
+ 
+-void notify_device_state(struct sk_buff *skb,
++int notify_device_state(struct sk_buff *skb,
+ 			 unsigned int seq,
+ 			 struct drbd_device *device,
+ 			 struct device_info *device_info,
+@@ -4705,16 +4706,17 @@ void notify_device_state(struct sk_buff *skb,
+ 		if (err && err != -ESRCH)
+ 			goto failed;
+ 	}
+-	return;
++	return 0;
+ 
+ nla_put_failure:
+ 	nlmsg_free(skb);
+ failed:
+ 	drbd_err(device, "Error %d while broadcasting event. Event seq:%u\n",
+ 		 err, seq);
++	return err;
+ }
+ 
+-void notify_connection_state(struct sk_buff *skb,
++int notify_connection_state(struct sk_buff *skb,
+ 			     unsigned int seq,
+ 			     struct drbd_connection *connection,
+ 			     struct connection_info *connection_info,
+@@ -4754,16 +4756,17 @@ void notify_connection_state(struct sk_buff *skb,
+ 		if (err && err != -ESRCH)
+ 			goto failed;
+ 	}
+-	return;
++	return 0;
+ 
+ nla_put_failure:
+ 	nlmsg_free(skb);
+ failed:
+ 	drbd_err(connection, "Error %d while broadcasting event. Event seq:%u\n",
+ 		 err, seq);
++	return err;
+ }
+ 
+-void notify_peer_device_state(struct sk_buff *skb,
++int notify_peer_device_state(struct sk_buff *skb,
+ 			      unsigned int seq,
+ 			      struct drbd_peer_device *peer_device,
+ 			      struct peer_device_info *peer_device_info,
+@@ -4804,13 +4807,14 @@ void notify_peer_device_state(struct sk_buff *skb,
+ 		if (err && err != -ESRCH)
+ 			goto failed;
+ 	}
+-	return;
++	return 0;
+ 
+ nla_put_failure:
+ 	nlmsg_free(skb);
+ failed:
+ 	drbd_err(peer_device, "Error %d while broadcasting event. Event seq:%u\n",
+ 		 err, seq);
++	return err;
+ }
+ 
+ void notify_helper(enum drbd_notification_type type,
+@@ -4861,7 +4865,7 @@ fail:
+ 		 err, seq);
+ }
+ 
+-static void notify_initial_state_done(struct sk_buff *skb, unsigned int seq)
++static int notify_initial_state_done(struct sk_buff *skb, unsigned int seq)
+ {
+ 	struct drbd_genlmsghdr *dh;
+ 	int err;
+@@ -4875,11 +4879,12 @@ static void notify_initial_state_done(struct sk_buff *skb, unsigned int seq)
+ 	if (nla_put_notification_header(skb, NOTIFY_EXISTS))
+ 		goto nla_put_failure;
+ 	genlmsg_end(skb, dh);
+-	return;
++	return 0;
+ 
+ nla_put_failure:
+ 	nlmsg_free(skb);
+ 	pr_err("Error %d sending event. Event seq:%u\n", err, seq);
++	return err;
+ }
+ 
+ static void free_state_changes(struct list_head *list)
+@@ -4906,6 +4911,7 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
+ 	unsigned int seq = cb->args[2];
+ 	unsigned int n;
+ 	enum drbd_notification_type flags = 0;
++	int err = 0;
+ 
+ 	/* There is no need for taking notification_mutex here: it doesn't
+ 	   matter if the initial state events mix with later state chage
+@@ -4914,32 +4920,32 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 	cb->args[5]--;
+ 	if (cb->args[5] == 1) {
+-		notify_initial_state_done(skb, seq);
++		err = notify_initial_state_done(skb, seq);
+ 		goto out;
+ 	}
+ 	n = cb->args[4]++;
+ 	if (cb->args[4] < cb->args[3])
+ 		flags |= NOTIFY_CONTINUES;
+ 	if (n < 1) {
+-		notify_resource_state_change(skb, seq, state_change->resource,
++		err = notify_resource_state_change(skb, seq, state_change->resource,
+ 					     NOTIFY_EXISTS | flags);
+ 		goto next;
+ 	}
+ 	n--;
+ 	if (n < state_change->n_connections) {
+-		notify_connection_state_change(skb, seq, &state_change->connections[n],
++		err = notify_connection_state_change(skb, seq, &state_change->connections[n],
+ 					       NOTIFY_EXISTS | flags);
+ 		goto next;
+ 	}
+ 	n -= state_change->n_connections;
+ 	if (n < state_change->n_devices) {
+-		notify_device_state_change(skb, seq, &state_change->devices[n],
++		err = notify_device_state_change(skb, seq, &state_change->devices[n],
+ 					   NOTIFY_EXISTS | flags);
+ 		goto next;
+ 	}
+ 	n -= state_change->n_devices;
+ 	if (n < state_change->n_devices * state_change->n_connections) {
+-		notify_peer_device_state_change(skb, seq, &state_change->peer_devices[n],
++		err = notify_peer_device_state_change(skb, seq, &state_change->peer_devices[n],
+ 						NOTIFY_EXISTS | flags);
+ 		goto next;
+ 	}
+@@ -4954,7 +4960,10 @@ next:
+ 		cb->args[4] = 0;
+ 	}
+ out:
+-	return skb->len;
++	if (err)
++		return err;
++	else
++		return skb->len;
+ }
+ 
+ int drbd_adm_get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
+diff --git a/drivers/block/drbd/drbd_state.c b/drivers/block/drbd/drbd_state.c
+index 0067d328f0b56..5fbaea6b77b14 100644
+--- a/drivers/block/drbd/drbd_state.c
++++ b/drivers/block/drbd/drbd_state.c
+@@ -1537,7 +1537,7 @@ int drbd_bitmap_io_from_worker(struct drbd_device *device,
+ 	return rv;
+ }
+ 
+-void notify_resource_state_change(struct sk_buff *skb,
++int notify_resource_state_change(struct sk_buff *skb,
+ 				  unsigned int seq,
+ 				  struct drbd_resource_state_change *resource_state_change,
+ 				  enum drbd_notification_type type)
+@@ -1550,10 +1550,10 @@ void notify_resource_state_change(struct sk_buff *skb,
+ 		.res_susp_fen = resource_state_change->susp_fen[NEW],
+ 	};
+ 
+-	notify_resource_state(skb, seq, resource, &resource_info, type);
++	return notify_resource_state(skb, seq, resource, &resource_info, type);
+ }
+ 
+-void notify_connection_state_change(struct sk_buff *skb,
++int notify_connection_state_change(struct sk_buff *skb,
+ 				    unsigned int seq,
+ 				    struct drbd_connection_state_change *connection_state_change,
+ 				    enum drbd_notification_type type)
+@@ -1564,10 +1564,10 @@ void notify_connection_state_change(struct sk_buff *skb,
+ 		.conn_role = connection_state_change->peer_role[NEW],
+ 	};
+ 
+-	notify_connection_state(skb, seq, connection, &connection_info, type);
++	return notify_connection_state(skb, seq, connection, &connection_info, type);
+ }
+ 
+-void notify_device_state_change(struct sk_buff *skb,
++int notify_device_state_change(struct sk_buff *skb,
+ 				unsigned int seq,
+ 				struct drbd_device_state_change *device_state_change,
+ 				enum drbd_notification_type type)
+@@ -1577,10 +1577,10 @@ void notify_device_state_change(struct sk_buff *skb,
+ 		.dev_disk_state = device_state_change->disk_state[NEW],
+ 	};
+ 
+-	notify_device_state(skb, seq, device, &device_info, type);
++	return notify_device_state(skb, seq, device, &device_info, type);
+ }
+ 
+-void notify_peer_device_state_change(struct sk_buff *skb,
++int notify_peer_device_state_change(struct sk_buff *skb,
+ 				     unsigned int seq,
+ 				     struct drbd_peer_device_state_change *p,
+ 				     enum drbd_notification_type type)
+@@ -1594,7 +1594,7 @@ void notify_peer_device_state_change(struct sk_buff *skb,
+ 		.peer_resync_susp_dependency = p->resync_susp_dependency[NEW],
+ 	};
+ 
+-	notify_peer_device_state(skb, seq, peer_device, &peer_device_info, type);
++	return notify_peer_device_state(skb, seq, peer_device, &peer_device_info, type);
+ }
+ 
+ static void broadcast_state_change(struct drbd_state_change *state_change)
+@@ -1602,7 +1602,7 @@ static void broadcast_state_change(struct drbd_state_change *state_change)
+ 	struct drbd_resource_state_change *resource_state_change = &state_change->resource[0];
+ 	bool resource_state_has_changed;
+ 	unsigned int n_device, n_connection, n_peer_device, n_peer_devices;
+-	void (*last_func)(struct sk_buff *, unsigned int, void *,
++	int (*last_func)(struct sk_buff *, unsigned int, void *,
+ 			  enum drbd_notification_type) = NULL;
+ 	void *last_arg = NULL;
+ 
+diff --git a/drivers/block/drbd/drbd_state_change.h b/drivers/block/drbd/drbd_state_change.h
+index ba80f612d6abb..d5b0479bc9a66 100644
+--- a/drivers/block/drbd/drbd_state_change.h
++++ b/drivers/block/drbd/drbd_state_change.h
+@@ -44,19 +44,19 @@ extern struct drbd_state_change *remember_old_state(struct drbd_resource *, gfp_
+ extern void copy_old_to_new_state_change(struct drbd_state_change *);
+ extern void forget_state_change(struct drbd_state_change *);
+ 
+-extern void notify_resource_state_change(struct sk_buff *,
++extern int notify_resource_state_change(struct sk_buff *,
+ 					 unsigned int,
+ 					 struct drbd_resource_state_change *,
+ 					 enum drbd_notification_type type);
+-extern void notify_connection_state_change(struct sk_buff *,
++extern int notify_connection_state_change(struct sk_buff *,
+ 					   unsigned int,
+ 					   struct drbd_connection_state_change *,
+ 					   enum drbd_notification_type type);
+-extern void notify_device_state_change(struct sk_buff *,
++extern int notify_device_state_change(struct sk_buff *,
+ 				       unsigned int,
+ 				       struct drbd_device_state_change *,
+ 				       enum drbd_notification_type type);
+-extern void notify_peer_device_state_change(struct sk_buff *,
++extern int notify_peer_device_state_change(struct sk_buff *,
+ 					    unsigned int,
+ 					    struct drbd_peer_device_state_change *,
+ 					    enum drbd_notification_type type);
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index 3dd4deb60adbf..6d361420ffe82 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -2239,7 +2239,7 @@ static struct virtio_driver virtio_rproc_serial = {
+ 	.remove =	virtcons_remove,
+ };
+ 
+-static int __init init(void)
++static int __init virtio_console_init(void)
+ {
+ 	int err;
+ 
+@@ -2276,7 +2276,7 @@ free:
+ 	return err;
+ }
+ 
+-static void __exit fini(void)
++static void __exit virtio_console_fini(void)
+ {
+ 	reclaim_dma_bufs();
+ 
+@@ -2286,8 +2286,8 @@ static void __exit fini(void)
+ 	class_destroy(pdrvdata.class);
+ 	debugfs_remove_recursive(pdrvdata.debugfs_dir);
+ }
+-module_init(init);
+-module_exit(fini);
++module_init(virtio_console_init);
++module_exit(virtio_console_fini);
+ 
+ MODULE_DESCRIPTION("Virtio console driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
+index 772b48ad0cd78..382a0619a0488 100644
+--- a/drivers/clk/clk-si5341.c
++++ b/drivers/clk/clk-si5341.c
+@@ -789,6 +789,15 @@ static unsigned long si5341_output_clk_recalc_rate(struct clk_hw *hw,
+ 	u32 r_divider;
+ 	u8 r[3];
+ 
++	err = regmap_read(output->data->regmap,
++			SI5341_OUT_CONFIG(output), &val);
++	if (err < 0)
++		return err;
++
++	/* If SI5341_OUT_CFG_RDIV_FORCE2 is set, r_divider is 2 */
++	if (val & SI5341_OUT_CFG_RDIV_FORCE2)
++		return parent_rate / 2;
++
+ 	err = regmap_bulk_read(output->data->regmap,
+ 			SI5341_OUT_R_REG(output), r, 3);
+ 	if (err < 0)
+@@ -805,13 +814,6 @@ static unsigned long si5341_output_clk_recalc_rate(struct clk_hw *hw,
+ 	r_divider += 1;
+ 	r_divider <<= 1;
+ 
+-	err = regmap_read(output->data->regmap,
+-			SI5341_OUT_CONFIG(output), &val);
+-	if (err < 0)
+-		return err;
+-
+-	if (val & SI5341_OUT_CFG_RDIV_FORCE2)
+-		r_divider = 2;
+ 
+ 	return parent_rate / r_divider;
+ }
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 92fc084203b75..2e56cc0a3bce6 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -631,6 +631,24 @@ static void clk_core_get_boundaries(struct clk_core *core,
+ 		*max_rate = min(*max_rate, clk_user->max_rate);
+ }
+ 
++static bool clk_core_check_boundaries(struct clk_core *core,
++				      unsigned long min_rate,
++				      unsigned long max_rate)
++{
++	struct clk *user;
++
++	lockdep_assert_held(&prepare_lock);
++
++	if (min_rate > core->max_rate || max_rate < core->min_rate)
++		return false;
++
++	hlist_for_each_entry(user, &core->clks, clks_node)
++		if (min_rate > user->max_rate || max_rate < user->min_rate)
++			return false;
++
++	return true;
++}
++
+ void clk_hw_set_rate_range(struct clk_hw *hw, unsigned long min_rate,
+ 			   unsigned long max_rate)
+ {
+@@ -2332,6 +2350,11 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
+ 	clk->min_rate = min;
+ 	clk->max_rate = max;
+ 
++	if (!clk_core_check_boundaries(clk->core, min, max)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	rate = clk_core_get_rate_nolock(clk->core);
+ 	if (rate < min || rate > max) {
+ 		/*
+@@ -2360,6 +2383,7 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
+ 		}
+ 	}
+ 
++out:
+ 	if (clk->exclusive_count)
+ 		clk_core_rate_protect(clk->core);
+ 
+diff --git a/drivers/clk/ti/clk.c b/drivers/clk/ti/clk.c
+index 3da33c786d77c..29eafab4353ef 100644
+--- a/drivers/clk/ti/clk.c
++++ b/drivers/clk/ti/clk.c
+@@ -131,7 +131,7 @@ int ti_clk_setup_ll_ops(struct ti_clk_ll_ops *ops)
+ void __init ti_dt_clocks_register(struct ti_dt_clk oclks[])
+ {
+ 	struct ti_dt_clk *c;
+-	struct device_node *node, *parent;
++	struct device_node *node, *parent, *child;
+ 	struct clk *clk;
+ 	struct of_phandle_args clkspec;
+ 	char buf[64];
+@@ -171,10 +171,13 @@ void __init ti_dt_clocks_register(struct ti_dt_clk oclks[])
+ 		node = of_find_node_by_name(NULL, buf);
+ 		if (num_args && compat_mode) {
+ 			parent = node;
+-			node = of_get_child_by_name(parent, "clock");
+-			if (!node)
+-				node = of_get_child_by_name(parent, "clk");
+-			of_node_put(parent);
++			child = of_get_child_by_name(parent, "clock");
++			if (!child)
++				child = of_get_child_by_name(parent, "clk");
++			if (child) {
++				of_node_put(parent);
++				node = child;
++			}
+ 		}
+ 
+ 		clkspec.np = node;
+diff --git a/drivers/dma/sh/shdma-base.c b/drivers/dma/sh/shdma-base.c
+index 19ac95c0098f0..7f72b3f4cd1ae 100644
+--- a/drivers/dma/sh/shdma-base.c
++++ b/drivers/dma/sh/shdma-base.c
+@@ -115,10 +115,8 @@ static dma_cookie_t shdma_tx_submit(struct dma_async_tx_descriptor *tx)
+ 		ret = pm_runtime_get(schan->dev);
+ 
+ 		spin_unlock_irq(&schan->chan_lock);
+-		if (ret < 0) {
++		if (ret < 0)
+ 			dev_err(schan->dev, "%s(): GET = %d\n", __func__, ret);
+-			pm_runtime_put(schan->dev);
+-		}
+ 
+ 		pm_runtime_barrier(schan->dev);
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 00526fdd7691f..d180787482009 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1411,6 +1411,16 @@ static int gpiochip_to_irq(struct gpio_chip *gc, unsigned offset)
+ {
+ 	struct irq_domain *domain = gc->irq.domain;
+ 
++#ifdef CONFIG_GPIOLIB_IRQCHIP
++	/*
++	 * Avoid race condition with other code, which tries to lookup
++	 * an IRQ before the irqchip has been properly registered,
++	 * i.e. while gpiochip is still being brought up.
++	 */
++	if (!gc->irq.initialized)
++		return -EPROBE_DEFER;
++#endif
++
+ 	if (!gpiochip_irqchip_irq_valid(gc, offset))
+ 		return -ENXIO;
+ 
+@@ -1604,6 +1614,15 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
+ 
+ 	acpi_gpiochip_request_interrupts(gc);
+ 
++	/*
++	 * Using barrier() here to prevent compiler from reordering
++	 * gc->irq.initialized before initialization of above
++	 * GPIO chip irq members.
++	 */
++	barrier();
++
++	gc->irq.initialized = true;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 12598a4b5c788..867fcee6b0d3b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -1484,6 +1484,7 @@ int amdgpu_cs_fence_to_handle_ioctl(struct drm_device *dev, void *data,
+ 		return 0;
+ 
+ 	default:
++		dma_fence_put(fence);
+ 		return -EINVAL;
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 9f9f55a2b257c..f84582b70d0ed 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -263,7 +263,7 @@ static int amdgpu_gfx_kiq_acquire(struct amdgpu_device *adev,
+ 		    * adev->gfx.mec.num_pipe_per_mec
+ 		    * adev->gfx.mec.num_queue_per_pipe;
+ 
+-	while (queue_bit-- >= 0) {
++	while (--queue_bit >= 0) {
+ 		if (test_bit(queue_bit, adev->gfx.mec.queue_bitmap))
+ 			continue;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index ad9863b84f1fc..f615ecc06a223 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -1338,7 +1338,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
+ 	    !(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
+ 		return;
+ 
+-	dma_resv_lock(bo->base.resv, NULL);
++	if (WARN_ON_ONCE(!dma_resv_trylock(bo->base.resv)))
++		return;
+ 
+ 	r = amdgpu_fill_buffer(abo, AMDGPU_POISON, bo->base.resv, &fence);
+ 	if (!WARN_ON(r)) {
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+index 31d793ee0836e..86b4dadf772e3 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+@@ -784,7 +784,7 @@ int kfd_create_crat_image_acpi(void **crat_image, size_t *size)
+ 	/* Fetch the CRAT table from ACPI */
+ 	status = acpi_get_table(CRAT_SIGNATURE, 0, &crat_table);
+ 	if (status == AE_NOT_FOUND) {
+-		pr_warn("CRAT table not found\n");
++		pr_info("CRAT table not found\n");
+ 		return -ENODATA;
+ 	} else if (ACPI_FAILURE(status)) {
+ 		const char *err = acpi_format_exception(status);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
+index 17d1736367ea3..bd4caa36ab2e2 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
+@@ -270,15 +270,6 @@ int kfd_smi_event_open(struct kfd_dev *dev, uint32_t *fd)
+ 		return ret;
+ 	}
+ 
+-	ret = anon_inode_getfd(kfd_smi_name, &kfd_smi_ev_fops, (void *)client,
+-			       O_RDWR);
+-	if (ret < 0) {
+-		kfifo_free(&client->fifo);
+-		kfree(client);
+-		return ret;
+-	}
+-	*fd = ret;
+-
+ 	init_waitqueue_head(&client->wait_queue);
+ 	spin_lock_init(&client->lock);
+ 	client->events = 0;
+@@ -288,5 +279,20 @@ int kfd_smi_event_open(struct kfd_dev *dev, uint32_t *fd)
+ 	list_add_rcu(&client->list, &dev->smi_clients);
+ 	spin_unlock(&dev->smi_lock);
+ 
++	ret = anon_inode_getfd(kfd_smi_name, &kfd_smi_ev_fops, (void *)client,
++			       O_RDWR);
++	if (ret < 0) {
++		spin_lock(&dev->smi_lock);
++		list_del_rcu(&client->list);
++		spin_unlock(&dev->smi_lock);
++
++		synchronize_rcu();
++
++		kfifo_free(&client->fifo);
++		kfree(client);
++		return ret;
++	}
++	*fd = ret;
++
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 5f4cdb05c4db9..5c5ccbad96588 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1674,6 +1674,9 @@ static bool are_stream_backends_same(
+ 	if (is_timing_changed(stream_a, stream_b))
+ 		return false;
+ 
++	if (stream_a->signal != stream_b->signal)
++		return false;
++
+ 	if (stream_a->dpms_off != stream_b->dpms_off)
+ 		return false;
+ 
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
+index e6f40ee9f3134..9d97938bd49ef 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
+@@ -709,13 +709,13 @@ static int smu10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
+ 		smum_send_msg_to_smc_with_parameter(hwmgr,
+ 						PPSMC_MSG_SetHardMinFclkByFreq,
+ 						hwmgr->display_config->num_display > 3 ?
+-						data->clock_vol_info.vdd_dep_on_fclk->entries[0].clk :
++						(data->clock_vol_info.vdd_dep_on_fclk->entries[0].clk / 100) :
+ 						min_mclk,
+ 						NULL);
+ 
+ 		smum_send_msg_to_smc_with_parameter(hwmgr,
+ 						PPSMC_MSG_SetHardMinSocclkByFreq,
+-						data->clock_vol_info.vdd_dep_on_socclk->entries[0].clk,
++						data->clock_vol_info.vdd_dep_on_socclk->entries[0].clk / 100,
+ 						NULL);
+ 		smum_send_msg_to_smc_with_parameter(hwmgr,
+ 						PPSMC_MSG_SetHardMinVcn,
+@@ -728,11 +728,11 @@ static int smu10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
+ 						NULL);
+ 		smum_send_msg_to_smc_with_parameter(hwmgr,
+ 						PPSMC_MSG_SetSoftMaxFclkByFreq,
+-						data->clock_vol_info.vdd_dep_on_fclk->entries[index_fclk].clk,
++						data->clock_vol_info.vdd_dep_on_fclk->entries[index_fclk].clk / 100,
+ 						NULL);
+ 		smum_send_msg_to_smc_with_parameter(hwmgr,
+ 						PPSMC_MSG_SetSoftMaxSocclkByFreq,
+-						data->clock_vol_info.vdd_dep_on_socclk->entries[index_socclk].clk,
++						data->clock_vol_info.vdd_dep_on_socclk->entries[index_socclk].clk / 100,
+ 						NULL);
+ 		smum_send_msg_to_smc_with_parameter(hwmgr,
+ 						PPSMC_MSG_SetSoftMaxVcn,
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 448c2f2d803a6..f5ab891731d0b 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -166,6 +166,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "MicroPC"),
+ 		},
+ 		.driver_data = (void *)&lcd720x1280_rightside_up,
++	}, {	/* GPD Win Max */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "GPD"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "G1619-01"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
+ 	}, {	/*
+ 		 * GPD Pocket, note that the the DMI data is less generic then
+ 		 * it seems, devices with a board-vendor of "AMI Corporation"
+diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
+index 75036aaa0c639..efd13e533726b 100644
+--- a/drivers/gpu/drm/imx/imx-ldb.c
++++ b/drivers/gpu/drm/imx/imx-ldb.c
+@@ -553,6 +553,8 @@ static int imx_ldb_panel_ddc(struct device *dev,
+ 		edidp = of_get_property(child, "edid", &edid_len);
+ 		if (edidp) {
+ 			channel->edid = kmemdup(edidp, edid_len, GFP_KERNEL);
++			if (!channel->edid)
++				return -ENOMEM;
+ 		} else if (!channel->panel) {
+ 			/* fallback to display-timings node */
+ 			ret = of_get_drm_display_mode(child,
+diff --git a/drivers/gpu/drm/imx/parallel-display.c b/drivers/gpu/drm/imx/parallel-display.c
+index 605ac8825a591..b61bfa84b6bbd 100644
+--- a/drivers/gpu/drm/imx/parallel-display.c
++++ b/drivers/gpu/drm/imx/parallel-display.c
+@@ -70,8 +70,10 @@ static int imx_pd_connector_get_modes(struct drm_connector *connector)
+ 		ret = of_get_drm_display_mode(np, &imxpd->mode,
+ 					      &imxpd->bus_flags,
+ 					      OF_USE_NATIVE_MODE);
+-		if (ret)
++		if (ret) {
++			drm_mode_destroy(connector->dev, mode);
+ 			return ret;
++		}
+ 
+ 		drm_mode_copy(mode, &imxpd->mode);
+ 		mode->type |= DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
+index 7938722b4da17..d82529becfdc9 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
+@@ -216,6 +216,7 @@ gm20b_pmu = {
+ 	.intr = gt215_pmu_intr,
+ 	.recv = gm20b_pmu_recv,
+ 	.initmsg = gm20b_pmu_initmsg,
++	.reset = gf100_pmu_reset,
+ };
+ 
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
+index 3dfb3e8522f6a..9f32982216b6f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
+@@ -23,7 +23,7 @@
+  */
+ #include "priv.h"
+ 
+-static void
++void
+ gp102_pmu_reset(struct nvkm_pmu *pmu)
+ {
+ 	struct nvkm_device *device = pmu->subdev.device;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+index 7f5f9d5448360..0bd4b32ad863f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+@@ -83,6 +83,7 @@ gp10b_pmu = {
+ 	.intr = gt215_pmu_intr,
+ 	.recv = gm20b_pmu_recv,
+ 	.initmsg = gm20b_pmu_initmsg,
++	.reset = gp102_pmu_reset,
+ };
+ 
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
+index b945ec320cd2e..80c4cb861d40e 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
+@@ -41,6 +41,7 @@ int gt215_pmu_send(struct nvkm_pmu *, u32[2], u32, u32, u32, u32);
+ 
+ bool gf100_pmu_enabled(struct nvkm_pmu *);
+ void gf100_pmu_reset(struct nvkm_pmu *);
++void gp102_pmu_reset(struct nvkm_pmu *pmu);
+ 
+ void gk110_pmu_pgob(struct nvkm_pmu *, bool);
+ 
+diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
+index 210e532ac277f..79e5356a737a2 100644
+--- a/drivers/hv/Kconfig
++++ b/drivers/hv/Kconfig
+@@ -17,7 +17,6 @@ config HYPERV_TIMER
+ config HYPERV_UTILS
+ 	tristate "Microsoft Hyper-V Utilities driver"
+ 	depends on HYPERV && CONNECTOR && NLS
+-	depends on PTP_1588_CLOCK_OPTIONAL
+ 	help
+ 	  Select this option to enable the Hyper-V Utilities.
+ 
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 6476bfe193afd..5dbb949b1afd8 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -350,7 +350,7 @@ void vmbus_channel_map_relid(struct vmbus_channel *channel)
+ 	 * execute:
+ 	 *
+ 	 *  (a) In the "normal (i.e., not resuming from hibernation)" path,
+-	 *      the full barrier in smp_store_mb() guarantees that the store
++	 *      the full barrier in virt_store_mb() guarantees that the store
+ 	 *      is propagated to all CPUs before the add_channel_work work
+ 	 *      is queued.  In turn, add_channel_work is queued before the
+ 	 *      channel's ring buffer is allocated/initialized and the
+@@ -362,14 +362,14 @@ void vmbus_channel_map_relid(struct vmbus_channel *channel)
+ 	 *      recv_int_page before retrieving the channel pointer from the
+ 	 *      array of channels.
+ 	 *
+-	 *  (b) In the "resuming from hibernation" path, the smp_store_mb()
++	 *  (b) In the "resuming from hibernation" path, the virt_store_mb()
+ 	 *      guarantees that the store is propagated to all CPUs before
+ 	 *      the VMBus connection is marked as ready for the resume event
+ 	 *      (cf. check_ready_for_resume_event()).  The interrupt handler
+ 	 *      of the VMBus driver and vmbus_chan_sched() can not run before
+ 	 *      vmbus_bus_resume() has completed execution (cf. resume_noirq).
+ 	 */
+-	smp_store_mb(
++	virt_store_mb(
+ 		vmbus_connection.channels[channel->offermsg.child_relid],
+ 		channel);
+ }
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 362da2a83b470..b9ac357e465db 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -2673,10 +2673,15 @@ static void __exit vmbus_exit(void)
+ 	if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) {
+ 		kmsg_dump_unregister(&hv_kmsg_dumper);
+ 		unregister_die_notifier(&hyperv_die_block);
+-		atomic_notifier_chain_unregister(&panic_notifier_list,
+-						 &hyperv_panic_block);
+ 	}
+ 
++	/*
++	 * The panic notifier is always registered, hence we should
++	 * also unconditionally unregister it here as well.
++	 */
++	atomic_notifier_chain_unregister(&panic_notifier_list,
++					 &hyperv_panic_block);
++
+ 	free_page((unsigned long)hv_panic_page);
+ 	unregister_sysctl_table(hv_ctl_table_hdr);
+ 	hv_ctl_table_hdr = NULL;
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
+index d213f65d4cdd0..ed8a96ae61cef 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
+@@ -121,6 +121,9 @@ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler)
+ 	unsigned long flags;
+ 	struct list_head del_list;
+ 
++	/* Prevent freeing of mm until we are completely finished. */
++	mmgrab(handler->mn.mm);
++
+ 	/* Unregister first so we don't get any more notifications. */
+ 	mmu_notifier_unregister(&handler->mn, handler->mn.mm);
+ 
+@@ -143,6 +146,9 @@ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler)
+ 
+ 	do_remove(handler, &del_list);
+ 
++	/* Now the mm may be freed. */
++	mmdrop(handler->mn.mm);
++
+ 	kfree(handler);
+ }
+ 
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 6cd0cbd4fc9f6..d827a4e44c946 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -531,8 +531,10 @@ static void __cache_work_func(struct mlx5_cache_ent *ent)
+ 		spin_lock_irq(&ent->lock);
+ 		if (ent->disabled)
+ 			goto out;
+-		if (need_delay)
++		if (need_delay) {
+ 			queue_delayed_work(cache->wq, &ent->dwork, 300 * HZ);
++			goto out;
++		}
+ 		remove_cache_mr_locked(ent);
+ 		queue_adjust_cache_locked(ent);
+ 	}
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index 09f0dbf941c06..d8d52a00a1be9 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -3241,7 +3241,11 @@ serr_no_r_lock:
+ 	spin_lock_irqsave(&sqp->s_lock, flags);
+ 	rvt_send_complete(sqp, wqe, send_status);
+ 	if (sqp->ibqp.qp_type == IB_QPT_RC) {
+-		int lastwqe = rvt_error_qp(sqp, IB_WC_WR_FLUSH_ERR);
++		int lastwqe;
++
++		spin_lock(&sqp->r_lock);
++		lastwqe = rvt_error_qp(sqp, IB_WC_WR_FLUSH_ERR);
++		spin_unlock(&sqp->r_lock);
+ 
+ 		sqp->s_flags &= ~RVT_S_BUSY;
+ 		spin_unlock_irqrestore(&sqp->s_lock, flags);
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index 7067b7c116260..483c1362cc4aa 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -1368,6 +1368,7 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
+ 				dev_info(smmu->dev, "\t0x%016llx\n",
+ 					 (unsigned long long)evt[i]);
+ 
++			cond_resched();
+ 		}
+ 
+ 		/*
+diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
+index 71f29c0927fc7..ff2c692c0db47 100644
+--- a/drivers/iommu/omap-iommu.c
++++ b/drivers/iommu/omap-iommu.c
+@@ -1665,7 +1665,7 @@ static struct iommu_device *omap_iommu_probe_device(struct device *dev)
+ 	num_iommus = of_property_count_elems_of_size(dev->of_node, "iommus",
+ 						     sizeof(phandle));
+ 	if (num_iommus < 0)
+-		return 0;
++		return ERR_PTR(-ENODEV);
+ 
+ 	arch_data = kcalloc(num_iommus + 1, sizeof(*arch_data), GFP_KERNEL);
+ 	if (!arch_data)
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 04d1b3963b6ba..e5e3fd6b95543 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -206,11 +206,11 @@ static inline void __iomem *gic_dist_base(struct irq_data *d)
+ 	}
+ }
+ 
+-static void gic_do_wait_for_rwp(void __iomem *base)
++static void gic_do_wait_for_rwp(void __iomem *base, u32 bit)
+ {
+ 	u32 count = 1000000;	/* 1s! */
+ 
+-	while (readl_relaxed(base + GICD_CTLR) & GICD_CTLR_RWP) {
++	while (readl_relaxed(base + GICD_CTLR) & bit) {
+ 		count--;
+ 		if (!count) {
+ 			pr_err_ratelimited("RWP timeout, gone fishing\n");
+@@ -224,13 +224,13 @@ static void gic_do_wait_for_rwp(void __iomem *base)
+ /* Wait for completion of a distributor change */
+ static void gic_dist_wait_for_rwp(void)
+ {
+-	gic_do_wait_for_rwp(gic_data.dist_base);
++	gic_do_wait_for_rwp(gic_data.dist_base, GICD_CTLR_RWP);
+ }
+ 
+ /* Wait for completion of a redistributor change */
+ static void gic_redist_wait_for_rwp(void)
+ {
+-	gic_do_wait_for_rwp(gic_data_rdist_rd_base());
++	gic_do_wait_for_rwp(gic_data_rdist_rd_base(), GICR_CTLR_RWP);
+ }
+ 
+ #ifdef CONFIG_ARM64
+@@ -1467,6 +1467,12 @@ static int gic_irq_domain_translate(struct irq_domain *d,
+ 		if(fwspec->param_count != 2)
+ 			return -EINVAL;
+ 
++		if (fwspec->param[0] < 16) {
++			pr_err(FW_BUG "Illegal GSI%d translation request\n",
++			       fwspec->param[0]);
++			return -EINVAL;
++		}
++
+ 		*hwirq = fwspec->param[0];
+ 		*type = fwspec->param[1];
+ 
+diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
+index 176f5f06432d1..205cbd24ff209 100644
+--- a/drivers/irqchip/irq-gic.c
++++ b/drivers/irqchip/irq-gic.c
+@@ -1094,6 +1094,12 @@ static int gic_irq_domain_translate(struct irq_domain *d,
+ 		if(fwspec->param_count != 2)
+ 			return -EINVAL;
+ 
++		if (fwspec->param[0] < 16) {
++			pr_err(FW_BUG "Illegal GSI%d translation request\n",
++			       fwspec->param[0]);
++			return -EINVAL;
++		}
++
+ 		*hwirq = fwspec->param[0];
+ 		*type = fwspec->param[1];
+ 
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index 1ca65b434f1fa..b839705654d4e 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -17,6 +17,7 @@
+ #include <linux/dm-ioctl.h>
+ #include <linux/hdreg.h>
+ #include <linux/compat.h>
++#include <linux/nospec.h>
+ 
+ #include <linux/uaccess.h>
+ 
+@@ -1696,6 +1697,7 @@ static ioctl_fn lookup_ioctl(unsigned int cmd, int *ioctl_flags)
+ 	if (unlikely(cmd >= ARRAY_SIZE(_ioctls)))
+ 		return NULL;
+ 
++	cmd = array_index_nospec(cmd, ARRAY_SIZE(_ioctls));
+ 	*ioctl_flags = _ioctls[cmd].flags;
+ 	return _ioctls[cmd].fn;
+ }
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index b1e867feb4f6b..4833f4b20b2c7 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -492,8 +492,13 @@ static blk_status_t dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 
+ 	if (unlikely(!ti)) {
+ 		int srcu_idx;
+-		struct dm_table *map = dm_get_live_table(md, &srcu_idx);
++		struct dm_table *map;
+ 
++		map = dm_get_live_table(md, &srcu_idx);
++		if (unlikely(!map)) {
++			dm_put_live_table(md, srcu_idx);
++			return BLK_STS_RESOURCE;
++		}
+ 		ti = dm_table_find_target(map, 0);
+ 		dm_put_live_table(md, srcu_idx);
+ 	}
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 6030cba5b0382..2836d44094aba 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1692,15 +1692,10 @@ static blk_qc_t dm_submit_bio(struct bio *bio)
+ 	struct dm_table *map;
+ 
+ 	map = dm_get_live_table(md, &srcu_idx);
+-	if (unlikely(!map)) {
+-		DMERR_LIMIT("%s: mapping table unavailable, erroring io",
+-			    dm_device_name(md));
+-		bio_io_error(bio);
+-		goto out;
+-	}
+ 
+-	/* If suspended, queue this IO for later */
+-	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags))) {
++	/* If suspended, or map not yet available, queue this IO for later */
++	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) ||
++	    unlikely(!map)) {
+ 		if (bio->bi_opf & REQ_NOWAIT)
+ 			bio_wouldblock_error(bio);
+ 		else if (bio->bi_opf & REQ_RAHEAD)
+diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
+index a75d3dd34d18c..4cceb9bab0361 100644
+--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
++++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
+@@ -62,8 +62,8 @@ static int sdmmc_idma_validate_data(struct mmci_host *host,
+ 	 * excepted the last element which has no constraint on idmasize
+ 	 */
+ 	for_each_sg(data->sg, sg, data->sg_len - 1, i) {
+-		if (!IS_ALIGNED(data->sg->offset, sizeof(u32)) ||
+-		    !IS_ALIGNED(data->sg->length, SDMMC_IDMA_BURST)) {
++		if (!IS_ALIGNED(sg->offset, sizeof(u32)) ||
++		    !IS_ALIGNED(sg->length, SDMMC_IDMA_BURST)) {
+ 			dev_err(mmc_dev(host->mmc),
+ 				"unaligned scatterlist: ofst:%x length:%d\n",
+ 				data->sg->offset, data->sg->length);
+@@ -71,7 +71,7 @@ static int sdmmc_idma_validate_data(struct mmci_host *host,
+ 		}
+ 	}
+ 
+-	if (!IS_ALIGNED(data->sg->offset, sizeof(u32))) {
++	if (!IS_ALIGNED(sg->offset, sizeof(u32))) {
+ 		dev_err(mmc_dev(host->mmc),
+ 			"unaligned last scatterlist: ofst:%x length:%d\n",
+ 			data->sg->offset, data->sg->length);
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 782879d46ff48..ac01fb518386a 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -390,10 +390,10 @@ static void renesas_sdhi_hs400_complete(struct mmc_host *mmc)
+ 			SH_MOBILE_SDHI_SCC_TMPPORT2_HS400OSEL) |
+ 			sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_TMPPORT2));
+ 
+-	/* Set the sampling clock selection range of HS400 mode */
+ 	sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_DTCNTL,
+ 		       SH_MOBILE_SDHI_SCC_DTCNTL_TAPEN |
+-		       0x4 << SH_MOBILE_SDHI_SCC_DTCNTL_TAPNUM_SHIFT);
++		       sd_scc_read32(host, priv,
++				     SH_MOBILE_SDHI_SCC_DTCNTL));
+ 
+ 	/* Avoid bad TAP */
+ 	if (bad_taps & BIT(priv->tap_set)) {
+diff --git a/drivers/mmc/host/sdhci-xenon.c b/drivers/mmc/host/sdhci-xenon.c
+index 0e5234a5ca224..d509198c00c8a 100644
+--- a/drivers/mmc/host/sdhci-xenon.c
++++ b/drivers/mmc/host/sdhci-xenon.c
+@@ -240,16 +240,6 @@ static void xenon_voltage_switch(struct sdhci_host *host)
+ {
+ 	/* Wait for 5ms after set 1.8V signal enable bit */
+ 	usleep_range(5000, 5500);
+-
+-	/*
+-	 * For some reason the controller's Host Control2 register reports
+-	 * the bit representing 1.8V signaling as 0 when read after it was
+-	 * written as 1. Subsequent read reports 1.
+-	 *
+-	 * Since this may cause some issues, do an empty read of the Host
+-	 * Control2 register here to circumvent this.
+-	 */
+-	sdhci_readw(host, SDHCI_HOST_CONTROL2);
+ }
+ 
+ static const struct sdhci_ops sdhci_xenon_ops = {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 92f9f7f5240b6..34affd1de91da 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -569,7 +569,8 @@ struct nqe_cn {
+ #define BNXT_MAX_MTU		9500
+ #define BNXT_MAX_PAGE_MODE_MTU	\
+ 	((unsigned int)PAGE_SIZE - VLAN_ETH_HLEN - NET_IP_ALIGN -	\
+-	 XDP_PACKET_HEADROOM)
++	 XDP_PACKET_HEADROOM - \
++	 SKB_DATA_ALIGN((unsigned int)sizeof(struct skb_shared_info)))
+ 
+ #define BNXT_MIN_PKT_SIZE	52
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 98087b278d1f4..f8f7756195205 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -2041,9 +2041,7 @@ static int bnxt_set_pauseparam(struct net_device *dev,
+ 		}
+ 
+ 		link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
+-		if (bp->hwrm_spec_code >= 0x10201)
+-			link_info->req_flow_ctrl =
+-				PORT_PHY_CFG_REQ_AUTO_PAUSE_AUTONEG_PAUSE;
++		link_info->req_flow_ctrl = 0;
+ 	} else {
+ 		/* when transition from auto pause to force pause,
+ 		 * force a link change
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
+index 32b5faa87bb8d..208a3459f2e29 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
+@@ -168,7 +168,7 @@ static int dpaa2_ptp_probe(struct fsl_mc_device *mc_dev)
+ 	base = of_iomap(node, 0);
+ 	if (!base) {
+ 		err = -ENOMEM;
+-		goto err_close;
++		goto err_put;
+ 	}
+ 
+ 	err = fsl_mc_allocate_irqs(mc_dev);
+@@ -212,6 +212,8 @@ err_free_mc_irq:
+ 	fsl_mc_free_irqs(mc_dev);
+ err_unmap:
+ 	iounmap(base);
++err_put:
++	of_node_put(node);
+ err_close:
+ 	dprtc_close(mc_dev->mc_io, 0, mc_dev->mc_handle);
+ err_free_mcp:
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 6a57b41ddb545..7794703c13593 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -498,7 +498,7 @@ static inline struct ice_pf *ice_netdev_to_pf(struct net_device *netdev)
+ 
+ static inline bool ice_is_xdp_ena_vsi(struct ice_vsi *vsi)
+ {
+-	return !!vsi->xdp_prog;
++	return !!READ_ONCE(vsi->xdp_prog);
+ }
+ 
+ static inline void ice_set_ring_xdp(struct ice_ring *ring)
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 52ac6cc08e83e..ea8d868c8f30a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -1265,6 +1265,7 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi)
+ 		ring->vsi = vsi;
+ 		ring->dev = dev;
+ 		ring->count = vsi->num_tx_desc;
++		ring->txq_teid = ICE_INVAL_TEID;
+ 		WRITE_ONCE(vsi->tx_rings[i], ring);
+ 	}
+ 
+@@ -2667,6 +2668,8 @@ int ice_vsi_release(struct ice_vsi *vsi)
+ 		}
+ 	}
+ 
++	if (ice_is_vsi_dflt_vsi(pf->first_sw, vsi))
++		ice_clear_dflt_vsi(pf->first_sw);
+ 	ice_fltr_remove_all(vsi);
+ 	ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
+ 	ice_vsi_delete(vsi);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 20c9d55f3adce..eb0625b52e453 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2475,8 +2475,10 @@ free_qmap:
+ 
+ 	for (i = 0; i < vsi->num_xdp_txq; i++)
+ 		if (vsi->xdp_rings[i]) {
+-			if (vsi->xdp_rings[i]->desc)
++			if (vsi->xdp_rings[i]->desc) {
++				synchronize_rcu();
+ 				ice_free_tx_ring(vsi->xdp_rings[i]);
++			}
+ 			kfree_rcu(vsi->xdp_rings[i], rcu);
+ 			vsi->xdp_rings[i] = NULL;
+ 		}
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index 5134342ff70fc..a980d337861de 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -2723,9 +2723,9 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)
+ 				goto error_param;
+ 			}
+ 
+-			/* Skip queue if not enabled */
+ 			if (!test_bit(vf_q_id, vf->txq_ena))
+-				continue;
++				dev_dbg(ice_pf_to_dev(vsi->back), "Queue %u on VSI %u is not enabled, but stopping it anyway\n",
++					vf_q_id, vsi->vsi_num);
+ 
+ 			ice_fill_txq_meta(vsi, ring, &txq_meta);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index 9f36f8d7a9854..5733526fa245c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -36,8 +36,10 @@ static void ice_qp_reset_stats(struct ice_vsi *vsi, u16 q_idx)
+ static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
+ {
+ 	ice_clean_tx_ring(vsi->tx_rings[q_idx]);
+-	if (ice_is_xdp_ena_vsi(vsi))
++	if (ice_is_xdp_ena_vsi(vsi)) {
++		synchronize_rcu();
+ 		ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
++	}
+ 	ice_clean_rx_ring(vsi->rx_rings[q_idx]);
+ }
+ 
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c
+index 21c906200e791..d210632676d32 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_fp.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c
+@@ -752,6 +752,9 @@ qede_build_skb(struct qede_rx_queue *rxq,
+ 	buf = page_address(bd->data) + bd->page_offset;
+ 	skb = build_skb(buf, rxq->rx_buf_seg_size);
+ 
++	if (unlikely(!skb))
++		return NULL;
++
+ 	skb_reserve(skb, pad);
+ 	skb_put(skb, len);
+ 
+diff --git a/drivers/net/ethernet/sfc/rx_common.c b/drivers/net/ethernet/sfc/rx_common.c
+index e423b17e2a148..2c09afac5beb4 100644
+--- a/drivers/net/ethernet/sfc/rx_common.c
++++ b/drivers/net/ethernet/sfc/rx_common.c
+@@ -166,6 +166,9 @@ static void efx_fini_rx_recycle_ring(struct efx_rx_queue *rx_queue)
+ 	struct efx_nic *efx = rx_queue->efx;
+ 	int i;
+ 
++	if (unlikely(!rx_queue->page_ring))
++		return;
++
+ 	/* Unmap and release the pages in the recycle ring. Remove the ring. */
+ 	for (i = 0; i <= rx_queue->page_ptr_mask; i++) {
+ 		struct page *page = rx_queue->page_ring[i];
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 3183d8826981e..b40b962055fa5 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -432,8 +432,7 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
+ 	plat->phylink_node = np;
+ 
+ 	/* Get max speed of operation from device tree */
+-	if (of_property_read_u32(np, "max-speed", &plat->max_speed))
+-		plat->max_speed = -1;
++	of_property_read_u32(np, "max-speed", &plat->max_speed);
+ 
+ 	plat->bus_id = of_alias_get_id(np, "ethernet");
+ 	if (plat->bus_id < 0)
+diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c
+index 694e2f5dbbe59..39801c31e5071 100644
+--- a/drivers/net/macvtap.c
++++ b/drivers/net/macvtap.c
+@@ -133,11 +133,17 @@ static void macvtap_setup(struct net_device *dev)
+ 	dev->tx_queue_len = TUN_READQ_SIZE;
+ }
+ 
++static struct net *macvtap_link_net(const struct net_device *dev)
++{
++	return dev_net(macvlan_dev_real_dev(dev));
++}
++
+ static struct rtnl_link_ops macvtap_link_ops __read_mostly = {
+ 	.kind		= "macvtap",
+ 	.setup		= macvtap_setup,
+ 	.newlink	= macvtap_newlink,
+ 	.dellink	= macvtap_dellink,
++	.get_link_net	= macvtap_link_net,
+ 	.priv_size      = sizeof(struct macvtap_dev),
+ };
+ 
+diff --git a/drivers/net/mdio/mdio-mscc-miim.c b/drivers/net/mdio/mdio-mscc-miim.c
+index 11f583fd4611f..1c9232fca1e2f 100644
+--- a/drivers/net/mdio/mdio-mscc-miim.c
++++ b/drivers/net/mdio/mdio-mscc-miim.c
+@@ -76,6 +76,9 @@ static int mscc_miim_read(struct mii_bus *bus, int mii_id, int regnum)
+ 	u32 val;
+ 	int ret;
+ 
++	if (regnum & MII_ADDR_C45)
++		return -EOPNOTSUPP;
++
+ 	ret = mscc_miim_wait_pending(bus);
+ 	if (ret)
+ 		goto out;
+@@ -105,6 +108,9 @@ static int mscc_miim_write(struct mii_bus *bus, int mii_id,
+ 	struct mscc_miim_dev *miim = bus->priv;
+ 	int ret;
+ 
++	if (regnum & MII_ADDR_C45)
++		return -EOPNOTSUPP;
++
+ 	ret = mscc_miim_wait_pending(bus);
+ 	if (ret < 0)
+ 		goto out;
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index a05d8372669c1..850915a37f4c2 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -74,6 +74,12 @@ static const struct sfp_quirk sfp_quirks[] = {
+ 		.vendor = "HUAWEI",
+ 		.part = "MA5671A",
+ 		.modes = sfp_quirk_2500basex,
++	}, {
++		// Lantech 8330-262D-E can operate at 2500base-X, but
++		// incorrectly report 2500MBd NRZ in their EEPROM
++		.vendor = "Lantech",
++		.part = "8330-262D-E",
++		.modes = sfp_quirk_2500basex,
+ 	}, {
+ 		.vendor = "UBNT",
+ 		.part = "UF-INSTANT",
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index f549d3a8e59c0..8f7bb15206e9f 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -1202,7 +1202,8 @@ static int tap_sendmsg(struct socket *sock, struct msghdr *m,
+ 	struct xdp_buff *xdp;
+ 	int i;
+ 
+-	if (ctl && (ctl->type == TUN_MSG_PTR)) {
++	if (m->msg_controllen == sizeof(struct tun_msg_ctl) &&
++	    ctl && ctl->type == TUN_MSG_PTR) {
+ 		for (i = 0; i < ctl->num; i++) {
+ 			xdp = &((struct xdp_buff *)ctl->ptr)[i];
+ 			tap_get_user_xdp(q, xdp);
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index ffbc7eda95eed..55ce141c93c75 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -2499,7 +2499,8 @@ static int tun_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
+ 	if (!tun)
+ 		return -EBADFD;
+ 
+-	if (ctl && (ctl->type == TUN_MSG_PTR)) {
++	if (m->msg_controllen == sizeof(struct tun_msg_ctl) &&
++	    ctl && ctl->type == TUN_MSG_PTR) {
+ 		struct tun_page tpage;
+ 		int n = ctl->num;
+ 		int flush = 0;
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index 9ff6e68533142..190bc5712e965 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -366,6 +366,8 @@ static void ath11k_ahb_free_ext_irq(struct ath11k_base *ab)
+ 
+ 		for (j = 0; j < irq_grp->num_irq; j++)
+ 			free_irq(ab->irq_num[irq_grp->irqs[j]], irq_grp);
++
++		netif_napi_del(&irq_grp->napi);
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/mhi.c b/drivers/net/wireless/ath/ath11k/mhi.c
+index aded9a719d51e..84db9e55c3e72 100644
+--- a/drivers/net/wireless/ath/ath11k/mhi.c
++++ b/drivers/net/wireless/ath/ath11k/mhi.c
+@@ -402,7 +402,7 @@ static int ath11k_mhi_set_state(struct ath11k_pci *ab_pci,
+ 		ret = 0;
+ 		break;
+ 	case ATH11K_MHI_POWER_ON:
+-		ret = mhi_async_power_up(ab_pci->mhi_ctrl);
++		ret = mhi_sync_power_up(ab_pci->mhi_ctrl);
+ 		break;
+ 	case ATH11K_MHI_POWER_OFF:
+ 		mhi_power_down(ab_pci->mhi_ctrl, true);
+diff --git a/drivers/net/wireless/ath/ath5k/eeprom.c b/drivers/net/wireless/ath/ath5k/eeprom.c
+index 1fbc2c19848f2..d444b3d70ba2e 100644
+--- a/drivers/net/wireless/ath/ath5k/eeprom.c
++++ b/drivers/net/wireless/ath/ath5k/eeprom.c
+@@ -746,6 +746,9 @@ ath5k_eeprom_convert_pcal_info_5111(struct ath5k_hw *ah, int mode,
+ 			}
+ 		}
+ 
++		if (idx == AR5K_EEPROM_N_PD_CURVES)
++			goto err_out;
++
+ 		ee->ee_pd_gains[mode] = 1;
+ 
+ 		pd = &chinfo[pier].pd_curves[idx];
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index 46255d2c555b6..17b9925266947 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -1706,7 +1706,10 @@ static u8 iwl_mvm_scan_umac_chan_flags_v2(struct iwl_mvm *mvm,
+ 			IWL_SCAN_CHANNEL_FLAG_CACHE_ADD;
+ 
+ 	/* set fragmented ebs for fragmented scan on HB channels */
+-	if (iwl_mvm_is_scan_fragmented(params->hb_type))
++	if ((!iwl_mvm_is_cdb_supported(mvm) &&
++	     iwl_mvm_is_scan_fragmented(params->type)) ||
++	    (iwl_mvm_is_cdb_supported(mvm) &&
++	     iwl_mvm_is_scan_fragmented(params->hb_type)))
+ 		flags |= IWL_SCAN_CHANNEL_FLAG_EBS_FRAG;
+ 
+ 	return flags;
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 0fdfead45c77c..f01b455783b23 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -455,6 +455,7 @@ mt76_dma_rx_fill(struct mt76_dev *dev, struct mt76_queue *q)
+ 
+ 		qbuf.addr = addr + offset;
+ 		qbuf.len = len - offset;
++		qbuf.skip_unmap = false;
+ 		mt76_dma_add_buf(dev, q, &qbuf, 1, 0, buf, NULL);
+ 		frames++;
+ 	}
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index 424be103093c6..1465a92ea3fc9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -1626,7 +1626,7 @@ mt7615_mac_adjust_sensitivity(struct mt7615_phy *phy,
+ 	struct mt7615_dev *dev = phy->dev;
+ 	int false_cca = ofdm ? phy->false_cca_ofdm : phy->false_cca_cck;
+ 	bool ext_phy = phy != &dev->phy;
+-	u16 def_th = ofdm ? -98 : -110;
++	s16 def_th = ofdm ? -98 : -110;
+ 	bool update = false;
+ 	s8 *sensitivity;
+ 	int signal;
+diff --git a/drivers/parisc/dino.c b/drivers/parisc/dino.c
+index 952a92504df69..e33036281327d 100644
+--- a/drivers/parisc/dino.c
++++ b/drivers/parisc/dino.c
+@@ -142,9 +142,8 @@ struct dino_device
+ {
+ 	struct pci_hba_data	hba;	/* 'C' inheritance - must be first */
+ 	spinlock_t		dinosaur_pen;
+-	unsigned long		txn_addr; /* EIR addr to generate interrupt */ 
+-	u32			txn_data; /* EIR data assign to each dino */ 
+ 	u32 			imr;	  /* IRQ's which are enabled */ 
++	struct gsc_irq		gsc_irq;
+ 	int			global_irq[DINO_LOCAL_IRQS]; /* map IMR bit to global irq */
+ #ifdef DINO_DEBUG
+ 	unsigned int		dino_irr0; /* save most recent IRQ line stat */
+@@ -339,14 +338,43 @@ static void dino_unmask_irq(struct irq_data *d)
+ 	if (tmp & DINO_MASK_IRQ(local_irq)) {
+ 		DBG(KERN_WARNING "%s(): IRQ asserted! (ILR 0x%x)\n",
+ 				__func__, tmp);
+-		gsc_writel(dino_dev->txn_data, dino_dev->txn_addr);
++		gsc_writel(dino_dev->gsc_irq.txn_data, dino_dev->gsc_irq.txn_addr);
+ 	}
+ }
+ 
++#ifdef CONFIG_SMP
++static int dino_set_affinity_irq(struct irq_data *d, const struct cpumask *dest,
++				bool force)
++{
++	struct dino_device *dino_dev = irq_data_get_irq_chip_data(d);
++	struct cpumask tmask;
++	int cpu_irq;
++	u32 eim;
++
++	if (!cpumask_and(&tmask, dest, cpu_online_mask))
++		return -EINVAL;
++
++	cpu_irq = cpu_check_affinity(d, &tmask);
++	if (cpu_irq < 0)
++		return cpu_irq;
++
++	dino_dev->gsc_irq.txn_addr = txn_affinity_addr(d->irq, cpu_irq);
++	eim = ((u32) dino_dev->gsc_irq.txn_addr) | dino_dev->gsc_irq.txn_data;
++	__raw_writel(eim, dino_dev->hba.base_addr+DINO_IAR0);
++
++	irq_data_update_effective_affinity(d, &tmask);
++
++	return IRQ_SET_MASK_OK;
++}
++#endif
++
+ static struct irq_chip dino_interrupt_type = {
+ 	.name		= "GSC-PCI",
+ 	.irq_unmask	= dino_unmask_irq,
+ 	.irq_mask	= dino_mask_irq,
++#ifdef CONFIG_SMP
++	.irq_set_affinity = dino_set_affinity_irq,
++#endif
+ };
+ 
+ 
+@@ -806,7 +834,6 @@ static int __init dino_common_init(struct parisc_device *dev,
+ {
+ 	int status;
+ 	u32 eim;
+-	struct gsc_irq gsc_irq;
+ 	struct resource *res;
+ 
+ 	pcibios_register_hba(&dino_dev->hba);
+@@ -821,10 +848,8 @@ static int __init dino_common_init(struct parisc_device *dev,
+ 	**   still only has 11 IRQ input lines - just map some of them
+ 	**   to a different processor.
+ 	*/
+-	dev->irq = gsc_alloc_irq(&gsc_irq);
+-	dino_dev->txn_addr = gsc_irq.txn_addr;
+-	dino_dev->txn_data = gsc_irq.txn_data;
+-	eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
++	dev->irq = gsc_alloc_irq(&dino_dev->gsc_irq);
++	eim = ((u32) dino_dev->gsc_irq.txn_addr) | dino_dev->gsc_irq.txn_data;
+ 
+ 	/* 
+ 	** Dino needs a PA "IRQ" to get a processor's attention.
+diff --git a/drivers/parisc/gsc.c b/drivers/parisc/gsc.c
+index ed9371acf37eb..ec175ae998733 100644
+--- a/drivers/parisc/gsc.c
++++ b/drivers/parisc/gsc.c
+@@ -135,10 +135,41 @@ static void gsc_asic_unmask_irq(struct irq_data *d)
+ 	 */
+ }
+ 
++#ifdef CONFIG_SMP
++static int gsc_set_affinity_irq(struct irq_data *d, const struct cpumask *dest,
++				bool force)
++{
++	struct gsc_asic *gsc_dev = irq_data_get_irq_chip_data(d);
++	struct cpumask tmask;
++	int cpu_irq;
++
++	if (!cpumask_and(&tmask, dest, cpu_online_mask))
++		return -EINVAL;
++
++	cpu_irq = cpu_check_affinity(d, &tmask);
++	if (cpu_irq < 0)
++		return cpu_irq;
++
++	gsc_dev->gsc_irq.txn_addr = txn_affinity_addr(d->irq, cpu_irq);
++	gsc_dev->eim = ((u32) gsc_dev->gsc_irq.txn_addr) | gsc_dev->gsc_irq.txn_data;
++
++	/* switch IRQ's for devices below LASI/WAX to other CPU */
++	gsc_writel(gsc_dev->eim, gsc_dev->hpa + OFFSET_IAR);
++
++	irq_data_update_effective_affinity(d, &tmask);
++
++	return IRQ_SET_MASK_OK;
++}
++#endif
++
++
+ static struct irq_chip gsc_asic_interrupt_type = {
+ 	.name		=	"GSC-ASIC",
+ 	.irq_unmask	=	gsc_asic_unmask_irq,
+ 	.irq_mask	=	gsc_asic_mask_irq,
++#ifdef CONFIG_SMP
++	.irq_set_affinity =	gsc_set_affinity_irq,
++#endif
+ };
+ 
+ int gsc_assign_irq(struct irq_chip *type, void *data)
+diff --git a/drivers/parisc/gsc.h b/drivers/parisc/gsc.h
+index 86abad3fa2150..73cbd0bb1975a 100644
+--- a/drivers/parisc/gsc.h
++++ b/drivers/parisc/gsc.h
+@@ -31,6 +31,7 @@ struct gsc_asic {
+ 	int version;
+ 	int type;
+ 	int eim;
++	struct gsc_irq gsc_irq;
+ 	int global_irq[32];
+ };
+ 
+diff --git a/drivers/parisc/lasi.c b/drivers/parisc/lasi.c
+index 4e4fd12c2112e..6ef621adb63a8 100644
+--- a/drivers/parisc/lasi.c
++++ b/drivers/parisc/lasi.c
+@@ -163,7 +163,6 @@ static int __init lasi_init_chip(struct parisc_device *dev)
+ {
+ 	extern void (*chassis_power_off)(void);
+ 	struct gsc_asic *lasi;
+-	struct gsc_irq gsc_irq;
+ 	int ret;
+ 
+ 	lasi = kzalloc(sizeof(*lasi), GFP_KERNEL);
+@@ -185,7 +184,7 @@ static int __init lasi_init_chip(struct parisc_device *dev)
+ 	lasi_init_irq(lasi);
+ 
+ 	/* the IRQ lasi should use */
+-	dev->irq = gsc_alloc_irq(&gsc_irq);
++	dev->irq = gsc_alloc_irq(&lasi->gsc_irq);
+ 	if (dev->irq < 0) {
+ 		printk(KERN_ERR "%s(): cannot get GSC irq\n",
+ 				__func__);
+@@ -193,9 +192,9 @@ static int __init lasi_init_chip(struct parisc_device *dev)
+ 		return -EBUSY;
+ 	}
+ 
+-	lasi->eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
++	lasi->eim = ((u32) lasi->gsc_irq.txn_addr) | lasi->gsc_irq.txn_data;
+ 
+-	ret = request_irq(gsc_irq.irq, gsc_asic_intr, 0, "lasi", lasi);
++	ret = request_irq(lasi->gsc_irq.irq, gsc_asic_intr, 0, "lasi", lasi);
+ 	if (ret < 0) {
+ 		kfree(lasi);
+ 		return ret;
+diff --git a/drivers/parisc/wax.c b/drivers/parisc/wax.c
+index 5b6df15162354..73a2b01f8d9ca 100644
+--- a/drivers/parisc/wax.c
++++ b/drivers/parisc/wax.c
+@@ -68,7 +68,6 @@ static int __init wax_init_chip(struct parisc_device *dev)
+ {
+ 	struct gsc_asic *wax;
+ 	struct parisc_device *parent;
+-	struct gsc_irq gsc_irq;
+ 	int ret;
+ 
+ 	wax = kzalloc(sizeof(*wax), GFP_KERNEL);
+@@ -85,7 +84,7 @@ static int __init wax_init_chip(struct parisc_device *dev)
+ 	wax_init_irq(wax);
+ 
+ 	/* the IRQ wax should use */
+-	dev->irq = gsc_claim_irq(&gsc_irq, WAX_GSC_IRQ);
++	dev->irq = gsc_claim_irq(&wax->gsc_irq, WAX_GSC_IRQ);
+ 	if (dev->irq < 0) {
+ 		printk(KERN_ERR "%s(): cannot get GSC irq\n",
+ 				__func__);
+@@ -93,9 +92,9 @@ static int __init wax_init_chip(struct parisc_device *dev)
+ 		return -EBUSY;
+ 	}
+ 
+-	wax->eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
++	wax->eim = ((u32) wax->gsc_irq.txn_addr) | wax->gsc_irq.txn_data;
+ 
+-	ret = request_irq(gsc_irq.irq, gsc_asic_intr, 0, "wax", wax);
++	ret = request_irq(wax->gsc_irq.irq, gsc_asic_intr, 0, "wax", wax);
+ 	if (ret < 0) {
+ 		kfree(wax);
+ 		return ret;
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 49ff8bf10c740..af051fb886998 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -1186,7 +1186,7 @@ static void advk_msi_irq_compose_msi_msg(struct irq_data *data,
+ 
+ 	msg->address_lo = lower_32_bits(msi_msg);
+ 	msg->address_hi = upper_32_bits(msi_msg);
+-	msg->data = data->irq;
++	msg->data = data->hwirq;
+ }
+ 
+ static int advk_msi_set_affinity(struct irq_data *irq_data,
+@@ -1203,15 +1203,11 @@ static int advk_msi_irq_domain_alloc(struct irq_domain *domain,
+ 	int hwirq, i;
+ 
+ 	mutex_lock(&pcie->msi_used_lock);
+-	hwirq = bitmap_find_next_zero_area(pcie->msi_used, MSI_IRQ_NUM,
+-					   0, nr_irqs, 0);
+-	if (hwirq >= MSI_IRQ_NUM) {
+-		mutex_unlock(&pcie->msi_used_lock);
+-		return -ENOSPC;
+-	}
+-
+-	bitmap_set(pcie->msi_used, hwirq, nr_irqs);
++	hwirq = bitmap_find_free_region(pcie->msi_used, MSI_IRQ_NUM,
++					order_base_2(nr_irqs));
+ 	mutex_unlock(&pcie->msi_used_lock);
++	if (hwirq < 0)
++		return -ENOSPC;
+ 
+ 	for (i = 0; i < nr_irqs; i++)
+ 		irq_domain_set_info(domain, virq + i, hwirq + i,
+@@ -1229,7 +1225,7 @@ static void advk_msi_irq_domain_free(struct irq_domain *domain,
+ 	struct advk_pcie *pcie = domain->host_data;
+ 
+ 	mutex_lock(&pcie->msi_used_lock);
+-	bitmap_clear(pcie->msi_used, d->hwirq, nr_irqs);
++	bitmap_release_region(pcie->msi_used, d->hwirq, order_base_2(nr_irqs));
+ 	mutex_unlock(&pcie->msi_used_lock);
+ }
+ 
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index d41570715dc7f..262b2c4c70c9f 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -285,7 +285,17 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
+ 		if (ret)
+ 			dev_err(dev, "Data transfer failed\n");
+ 	} else {
+-		memcpy(dst_addr, src_addr, reg->size);
++		void *buf;
++
++		buf = kzalloc(reg->size, GFP_KERNEL);
++		if (!buf) {
++			ret = -ENOMEM;
++			goto err_map_addr;
++		}
++
++		memcpy_fromio(buf, src_addr, reg->size);
++		memcpy_toio(dst_addr, buf, reg->size);
++		kfree(buf);
+ 	}
+ 	ktime_get_ts64(&end);
+ 	pci_epf_test_print_rate("COPY", reg->size, &start, &end, use_dma);
+@@ -441,7 +451,7 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
+ 		if (!epf_test->dma_supported) {
+ 			dev_err(dev, "Cannot transfer data using DMA\n");
+ 			ret = -EINVAL;
+-			goto err_map_addr;
++			goto err_dma_map;
+ 		}
+ 
+ 		src_phys_addr = dma_map_single(dma_dev, buf, reg->size,
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index af4c4cc837fcd..dda9523577472 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -1060,6 +1060,8 @@ static void quirk_cmd_compl(struct pci_dev *pdev)
+ }
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
+ 			      PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
++DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0110,
++			      PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0400,
+ 			      PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0401,
+diff --git a/drivers/perf/qcom_l2_pmu.c b/drivers/perf/qcom_l2_pmu.c
+index 23a0e008dafa2..d58810f53727c 100644
+--- a/drivers/perf/qcom_l2_pmu.c
++++ b/drivers/perf/qcom_l2_pmu.c
+@@ -739,7 +739,7 @@ static struct cluster_pmu *l2_cache_associate_cpu_with_cluster(
+ {
+ 	u64 mpidr;
+ 	int cpu_cluster_id;
+-	struct cluster_pmu *cluster = NULL;
++	struct cluster_pmu *cluster;
+ 
+ 	/*
+ 	 * This assumes that the cluster_id is in MPIDR[aff1] for
+@@ -761,10 +761,10 @@ static struct cluster_pmu *l2_cache_associate_cpu_with_cluster(
+ 			 cluster->cluster_id);
+ 		cpumask_set_cpu(cpu, &cluster->cluster_cpus);
+ 		*per_cpu_ptr(l2cache_pmu->pmu_cluster, cpu) = cluster;
+-		break;
++		return cluster;
+ 	}
+ 
+-	return cluster;
++	return NULL;
+ }
+ 
+ static int l2cache_pmu_online_cpu(unsigned int cpu, struct hlist_node *node)
+diff --git a/drivers/phy/amlogic/phy-meson8b-usb2.c b/drivers/phy/amlogic/phy-meson8b-usb2.c
+index 03c061dd5f0de..8f40b9342a971 100644
+--- a/drivers/phy/amlogic/phy-meson8b-usb2.c
++++ b/drivers/phy/amlogic/phy-meson8b-usb2.c
+@@ -261,8 +261,9 @@ static int phy_meson8b_usb2_probe(struct platform_device *pdev)
+ 		return PTR_ERR(priv->clk_usb);
+ 
+ 	priv->reset = devm_reset_control_get_optional_shared(&pdev->dev, NULL);
+-	if (PTR_ERR(priv->reset) == -EPROBE_DEFER)
+-		return PTR_ERR(priv->reset);
++	if (IS_ERR(priv->reset))
++		return dev_err_probe(&pdev->dev, PTR_ERR(priv->reset),
++				     "Failed to get the reset line");
+ 
+ 	priv->dr_mode = of_usb_get_dr_mode_by_phy(pdev->dev.of_node, -1);
+ 	if (priv->dr_mode == USB_DR_MODE_UNKNOWN) {
+diff --git a/drivers/power/supply/axp20x_battery.c b/drivers/power/supply/axp20x_battery.c
+index e84b6e4da14a8..9fda98b950bab 100644
+--- a/drivers/power/supply/axp20x_battery.c
++++ b/drivers/power/supply/axp20x_battery.c
+@@ -185,7 +185,6 @@ static int axp20x_battery_get_prop(struct power_supply *psy,
+ 				   union power_supply_propval *val)
+ {
+ 	struct axp20x_batt_ps *axp20x_batt = power_supply_get_drvdata(psy);
+-	struct iio_channel *chan;
+ 	int ret = 0, reg, val1;
+ 
+ 	switch (psp) {
+@@ -265,12 +264,12 @@ static int axp20x_battery_get_prop(struct power_supply *psy,
+ 		if (ret)
+ 			return ret;
+ 
+-		if (reg & AXP20X_PWR_STATUS_BAT_CHARGING)
+-			chan = axp20x_batt->batt_chrg_i;
+-		else
+-			chan = axp20x_batt->batt_dischrg_i;
+-
+-		ret = iio_read_channel_processed(chan, &val->intval);
++		if (reg & AXP20X_PWR_STATUS_BAT_CHARGING) {
++			ret = iio_read_channel_processed(axp20x_batt->batt_chrg_i, &val->intval);
++		} else {
++			ret = iio_read_channel_processed(axp20x_batt->batt_dischrg_i, &val1);
++			val->intval = -val1;
++		}
+ 		if (ret)
+ 			return ret;
+ 
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index a4df1ea923864..f65bf7b295c59 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -41,11 +41,11 @@
+ #define VBUS_ISPOUT_CUR_LIM_1500MA	0x1	/* 1500mA */
+ #define VBUS_ISPOUT_CUR_LIM_2000MA	0x2	/* 2000mA */
+ #define VBUS_ISPOUT_CUR_NO_LIM		0x3	/* 2500mA */
+-#define VBUS_ISPOUT_VHOLD_SET_MASK	0x31
++#define VBUS_ISPOUT_VHOLD_SET_MASK	0x38
+ #define VBUS_ISPOUT_VHOLD_SET_BIT_POS	0x3
+ #define VBUS_ISPOUT_VHOLD_SET_OFFSET	4000	/* 4000mV */
+ #define VBUS_ISPOUT_VHOLD_SET_LSB_RES	100	/* 100mV */
+-#define VBUS_ISPOUT_VHOLD_SET_4300MV	0x3	/* 4300mV */
++#define VBUS_ISPOUT_VHOLD_SET_4400MV	0x4	/* 4400mV */
+ #define VBUS_ISPOUT_VBUS_PATH_DIS	BIT(7)
+ 
+ #define CHRG_CCCV_CC_MASK		0xf		/* 4 bits */
+@@ -744,6 +744,16 @@ static int charger_init_hw_regs(struct axp288_chrg_info *info)
+ 		ret = axp288_charger_vbus_path_select(info, true);
+ 		if (ret < 0)
+ 			return ret;
++	} else {
++		/* Set Vhold to the factory default / recommended 4.4V */
++		val = VBUS_ISPOUT_VHOLD_SET_4400MV << VBUS_ISPOUT_VHOLD_SET_BIT_POS;
++		ret = regmap_update_bits(info->regmap, AXP20X_VBUS_IPSOUT_MGMT,
++					 VBUS_ISPOUT_VHOLD_SET_MASK, val);
++		if (ret < 0) {
++			dev_err(&info->pdev->dev, "register(%x) write error(%d)\n",
++				AXP20X_VBUS_IPSOUT_MGMT, ret);
++			return ret;
++		}
+ 	}
+ 
+ 	/* Read current charge voltage and current limit */
+diff --git a/drivers/ptp/ptp_sysfs.c b/drivers/ptp/ptp_sysfs.c
+index be076a91e20e6..8cd59e8481631 100644
+--- a/drivers/ptp/ptp_sysfs.c
++++ b/drivers/ptp/ptp_sysfs.c
+@@ -13,7 +13,7 @@ static ssize_t clock_name_show(struct device *dev,
+ 			       struct device_attribute *attr, char *page)
+ {
+ 	struct ptp_clock *ptp = dev_get_drvdata(dev);
+-	return snprintf(page, PAGE_SIZE-1, "%s\n", ptp->info->name);
++	return sysfs_emit(page, "%s\n", ptp->info->name);
+ }
+ static DEVICE_ATTR_RO(clock_name);
+ 
+@@ -227,7 +227,7 @@ static ssize_t ptp_pin_show(struct device *dev, struct device_attribute *attr,
+ 
+ 	mutex_unlock(&ptp->pincfg_mux);
+ 
+-	return snprintf(page, PAGE_SIZE, "%u %u\n", func, chan);
++	return sysfs_emit(page, "%u %u\n", func, chan);
+ }
+ 
+ static ssize_t ptp_pin_store(struct device *dev, struct device_attribute *attr,
+diff --git a/drivers/rtc/rtc-wm8350.c b/drivers/rtc/rtc-wm8350.c
+index 2018614f258f6..6eaa9321c0741 100644
+--- a/drivers/rtc/rtc-wm8350.c
++++ b/drivers/rtc/rtc-wm8350.c
+@@ -432,14 +432,21 @@ static int wm8350_rtc_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	wm8350_register_irq(wm8350, WM8350_IRQ_RTC_SEC,
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_RTC_SEC,
+ 			    wm8350_rtc_update_handler, 0,
+ 			    "RTC Seconds", wm8350);
++	if (ret)
++		return ret;
++
+ 	wm8350_mask_irq(wm8350, WM8350_IRQ_RTC_SEC);
+ 
+-	wm8350_register_irq(wm8350, WM8350_IRQ_RTC_ALM,
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_RTC_ALM,
+ 			    wm8350_rtc_alarm_handler, 0,
+ 			    "RTC Alarm", wm8350);
++	if (ret) {
++		wm8350_free_irq(wm8350, WM8350_IRQ_RTC_SEC, wm8350);
++		return ret;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/scsi/aha152x.c b/drivers/scsi/aha152x.c
+index d8e19afa7a140..c6607c4686bb7 100644
+--- a/drivers/scsi/aha152x.c
++++ b/drivers/scsi/aha152x.c
+@@ -3367,13 +3367,11 @@ static int __init aha152x_setup(char *str)
+ 	setup[setup_count].synchronous = ints[0] >= 6 ? ints[6] : 1;
+ 	setup[setup_count].delay       = ints[0] >= 7 ? ints[7] : DELAY_DEFAULT;
+ 	setup[setup_count].ext_trans   = ints[0] >= 8 ? ints[8] : 0;
+-	if (ints[0] > 8) {                                                /*}*/
++	if (ints[0] > 8)
+ 		printk(KERN_NOTICE "aha152x: usage: aha152x=<IOBASE>[,<IRQ>[,<SCSI ID>"
+ 		       "[,<RECONNECT>[,<PARITY>[,<SYNCHRONOUS>[,<DELAY>[,<EXT_TRANS>]]]]]]]\n");
+-	} else {
++	else
+ 		setup_count++;
+-		return 0;
+-	}
+ 
+ 	return 1;
+ }
+diff --git a/drivers/scsi/bfa/bfad_attr.c b/drivers/scsi/bfa/bfad_attr.c
+index 5ae1e3f789101..e049cdb3c286c 100644
+--- a/drivers/scsi/bfa/bfad_attr.c
++++ b/drivers/scsi/bfa/bfad_attr.c
+@@ -711,7 +711,7 @@ bfad_im_serial_num_show(struct device *dev, struct device_attribute *attr,
+ 	char serial_num[BFA_ADAPTER_SERIAL_NUM_LEN];
+ 
+ 	bfa_get_adapter_serial_num(&bfad->bfa, serial_num);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", serial_num);
++	return sysfs_emit(buf, "%s\n", serial_num);
+ }
+ 
+ static ssize_t
+@@ -725,7 +725,7 @@ bfad_im_model_show(struct device *dev, struct device_attribute *attr,
+ 	char model[BFA_ADAPTER_MODEL_NAME_LEN];
+ 
+ 	bfa_get_adapter_model(&bfad->bfa, model);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", model);
++	return sysfs_emit(buf, "%s\n", model);
+ }
+ 
+ static ssize_t
+@@ -805,7 +805,7 @@ bfad_im_model_desc_show(struct device *dev, struct device_attribute *attr,
+ 		snprintf(model_descr, BFA_ADAPTER_MODEL_DESCR_LEN,
+ 			"Invalid Model");
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n", model_descr);
++	return sysfs_emit(buf, "%s\n", model_descr);
+ }
+ 
+ static ssize_t
+@@ -819,7 +819,7 @@ bfad_im_node_name_show(struct device *dev, struct device_attribute *attr,
+ 	u64        nwwn;
+ 
+ 	nwwn = bfa_fcs_lport_get_nwwn(port->fcs_port);
+-	return snprintf(buf, PAGE_SIZE, "0x%llx\n", cpu_to_be64(nwwn));
++	return sysfs_emit(buf, "0x%llx\n", cpu_to_be64(nwwn));
+ }
+ 
+ static ssize_t
+@@ -836,7 +836,7 @@ bfad_im_symbolic_name_show(struct device *dev, struct device_attribute *attr,
+ 	bfa_fcs_lport_get_attr(&bfad->bfa_fcs.fabric.bport, &port_attr);
+ 	strlcpy(symname, port_attr.port_cfg.sym_name.symname,
+ 			BFA_SYMNAME_MAXLEN);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", symname);
++	return sysfs_emit(buf, "%s\n", symname);
+ }
+ 
+ static ssize_t
+@@ -850,14 +850,14 @@ bfad_im_hw_version_show(struct device *dev, struct device_attribute *attr,
+ 	char hw_ver[BFA_VERSION_LEN];
+ 
+ 	bfa_get_pci_chip_rev(&bfad->bfa, hw_ver);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", hw_ver);
++	return sysfs_emit(buf, "%s\n", hw_ver);
+ }
+ 
+ static ssize_t
+ bfad_im_drv_version_show(struct device *dev, struct device_attribute *attr,
+ 				char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, "%s\n", BFAD_DRIVER_VERSION);
++	return sysfs_emit(buf, "%s\n", BFAD_DRIVER_VERSION);
+ }
+ 
+ static ssize_t
+@@ -871,7 +871,7 @@ bfad_im_optionrom_version_show(struct device *dev,
+ 	char optrom_ver[BFA_VERSION_LEN];
+ 
+ 	bfa_get_adapter_optrom_ver(&bfad->bfa, optrom_ver);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", optrom_ver);
++	return sysfs_emit(buf, "%s\n", optrom_ver);
+ }
+ 
+ static ssize_t
+@@ -885,7 +885,7 @@ bfad_im_fw_version_show(struct device *dev, struct device_attribute *attr,
+ 	char fw_ver[BFA_VERSION_LEN];
+ 
+ 	bfa_get_adapter_fw_ver(&bfad->bfa, fw_ver);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", fw_ver);
++	return sysfs_emit(buf, "%s\n", fw_ver);
+ }
+ 
+ static ssize_t
+@@ -897,7 +897,7 @@ bfad_im_num_of_ports_show(struct device *dev, struct device_attribute *attr,
+ 			(struct bfad_im_port_s *) shost->hostdata[0];
+ 	struct bfad_s *bfad = im_port->bfad;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",
++	return sysfs_emit(buf, "%d\n",
+ 			bfa_get_nports(&bfad->bfa));
+ }
+ 
+@@ -905,7 +905,7 @@ static ssize_t
+ bfad_im_drv_name_show(struct device *dev, struct device_attribute *attr,
+ 				char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, "%s\n", BFAD_DRIVER_NAME);
++	return sysfs_emit(buf, "%s\n", BFAD_DRIVER_NAME);
+ }
+ 
+ static ssize_t
+@@ -924,14 +924,14 @@ bfad_im_num_of_discovered_ports_show(struct device *dev,
+ 	rports = kcalloc(nrports, sizeof(struct bfa_rport_qualifier_s),
+ 			 GFP_ATOMIC);
+ 	if (rports == NULL)
+-		return snprintf(buf, PAGE_SIZE, "Failed\n");
++		return sysfs_emit(buf, "Failed\n");
+ 
+ 	spin_lock_irqsave(&bfad->bfad_lock, flags);
+ 	bfa_fcs_lport_get_rport_quals(port->fcs_port, rports, &nrports);
+ 	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+ 	kfree(rports);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", nrports);
++	return sysfs_emit(buf, "%d\n", nrports);
+ }
+ 
+ static          DEVICE_ATTR(serial_number, S_IRUGO,
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index a8998b016b862..cd41dc061d874 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2372,17 +2372,25 @@ static irqreturn_t cq_interrupt_v3_hw(int irq_no, void *p)
+ 	return IRQ_WAKE_THREAD;
+ }
+ 
++static void hisi_sas_v3_free_vectors(void *data)
++{
++	struct pci_dev *pdev = data;
++
++	pci_free_irq_vectors(pdev);
++}
++
+ static int interrupt_preinit_v3_hw(struct hisi_hba *hisi_hba)
+ {
+ 	int vectors;
+ 	int max_msi = HISI_SAS_MSI_COUNT_V3_HW, min_msi;
+ 	struct Scsi_Host *shost = hisi_hba->shost;
++	struct pci_dev *pdev = hisi_hba->pci_dev;
+ 	struct irq_affinity desc = {
+ 		.pre_vectors = BASE_VECTORS_V3_HW,
+ 	};
+ 
+ 	min_msi = MIN_AFFINE_VECTORS_V3_HW;
+-	vectors = pci_alloc_irq_vectors_affinity(hisi_hba->pci_dev,
++	vectors = pci_alloc_irq_vectors_affinity(pdev,
+ 						 min_msi, max_msi,
+ 						 PCI_IRQ_MSI |
+ 						 PCI_IRQ_AFFINITY,
+@@ -2394,6 +2402,7 @@ static int interrupt_preinit_v3_hw(struct hisi_hba *hisi_hba)
+ 	hisi_hba->cq_nvecs = vectors - BASE_VECTORS_V3_HW;
+ 	shost->nr_hw_queues = hisi_hba->cq_nvecs;
+ 
++	devm_add_action(&pdev->dev, hisi_sas_v3_free_vectors, pdev);
+ 	return 0;
+ }
+ 
+@@ -3313,7 +3322,7 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	dev_err(dev, "%d hw queues\n", shost->nr_hw_queues);
+ 	rc = scsi_add_host(shost, dev);
+ 	if (rc)
+-		goto err_out_free_irq_vectors;
++		goto err_out_debugfs;
+ 
+ 	rc = sas_register_ha(sha);
+ 	if (rc)
+@@ -3340,8 +3349,6 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ err_out_register_ha:
+ 	scsi_remove_host(shost);
+-err_out_free_irq_vectors:
+-	pci_free_irq_vectors(pdev);
+ err_out_debugfs:
+ 	hisi_sas_debugfs_exit(hisi_hba);
+ err_out_ha:
+@@ -3369,7 +3376,6 @@ hisi_sas_v3_destroy_irqs(struct pci_dev *pdev, struct hisi_hba *hisi_hba)
+ 
+ 		devm_free_irq(&pdev->dev, pci_irq_vector(pdev, nr), cq);
+ 	}
+-	pci_free_irq_vectors(pdev);
+ }
+ 
+ static void hisi_sas_v3_remove(struct pci_dev *pdev)
+diff --git a/drivers/scsi/libfc/fc_exch.c b/drivers/scsi/libfc/fc_exch.c
+index a50f1eef0e0cd..4261380af97b4 100644
+--- a/drivers/scsi/libfc/fc_exch.c
++++ b/drivers/scsi/libfc/fc_exch.c
+@@ -1702,6 +1702,7 @@ static void fc_exch_abts_resp(struct fc_exch *ep, struct fc_frame *fp)
+ 	if (cancel_delayed_work_sync(&ep->timeout_work)) {
+ 		FC_EXCH_DBG(ep, "Exchange timer canceled due to ABTS response\n");
+ 		fc_exch_release(ep);	/* release from pending timer hold */
++		return;
+ 	}
+ 
+ 	spin_lock_bh(&ep->ex_lock);
+diff --git a/drivers/scsi/mvsas/mv_init.c b/drivers/scsi/mvsas/mv_init.c
+index b03c0f35d7b04..0cfea7b2ab13a 100644
+--- a/drivers/scsi/mvsas/mv_init.c
++++ b/drivers/scsi/mvsas/mv_init.c
+@@ -697,7 +697,7 @@ static ssize_t
+ mvs_show_driver_version(struct device *cdev,
+ 		struct device_attribute *attr,  char *buffer)
+ {
+-	return snprintf(buffer, PAGE_SIZE, "%s\n", DRV_VERSION);
++	return sysfs_emit(buffer, "%s\n", DRV_VERSION);
+ }
+ 
+ static DEVICE_ATTR(driver_version,
+@@ -749,7 +749,7 @@ mvs_store_interrupt_coalescing(struct device *cdev,
+ static ssize_t mvs_show_interrupt_coalescing(struct device *cdev,
+ 			struct device_attribute *attr, char *buffer)
+ {
+-	return snprintf(buffer, PAGE_SIZE, "%d\n", interrupt_coalescing);
++	return sysfs_emit(buffer, "%d\n", interrupt_coalescing);
+ }
+ 
+ static DEVICE_ATTR(interrupt_coalescing,
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index cd0e1d31db701..da9fbe62a34d1 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -1711,7 +1711,6 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 	}
+ 
+ 	task = sas_alloc_slow_task(GFP_ATOMIC);
+-
+ 	if (!task) {
+ 		pm8001_dbg(pm8001_ha, FAIL, "cannot allocate task\n");
+ 		return;
+@@ -1720,8 +1719,10 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 	task->task_done = pm8001_task_done;
+ 
+ 	res = pm8001_tag_alloc(pm8001_ha, &ccb_tag);
+-	if (res)
++	if (res) {
++		sas_free_task(task);
+ 		return;
++	}
+ 
+ 	ccb = &pm8001_ha->ccb_info[ccb_tag];
+ 	ccb->device = pm8001_ha_dev;
+@@ -1738,8 +1739,10 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 
+ 	ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &task_abort,
+ 			sizeof(task_abort), 0);
+-	if (ret)
++	if (ret) {
++		sas_free_task(task);
+ 		pm8001_tag_free(pm8001_ha, ccb_tag);
++	}
+ 
+ }
+ 
+@@ -3669,12 +3672,11 @@ int pm8001_mpi_task_abort_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	mb();
+ 
+ 	if (pm8001_dev->id & NCQ_ABORT_ALL_FLAG) {
+-		pm8001_tag_free(pm8001_ha, tag);
+ 		sas_free_task(t);
+-		/* clear the flag */
+-		pm8001_dev->id &= 0xBFFFFFFF;
+-	} else
++		pm8001_dev->id &= ~NCQ_ABORT_ALL_FLAG;
++	} else {
+ 		t->task_done(t);
++	}
+ 
+ 	return 0;
+ }
+@@ -4428,6 +4430,9 @@ static int pm8001_chip_reg_dev_req(struct pm8001_hba_info *pm8001_ha,
+ 		SAS_ADDR_SIZE);
+ 	rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+ 			sizeof(payload), 0);
++	if (rc)
++		pm8001_tag_free(pm8001_ha, tag);
++
+ 	return rc;
+ }
+ 
+@@ -4840,6 +4845,11 @@ pm8001_chip_fw_flash_update_req(struct pm8001_hba_info *pm8001_ha,
+ 	ccb->ccb_tag = tag;
+ 	rc = pm8001_chip_fw_flash_update_build(pm8001_ha, &flash_update_info,
+ 		tag);
++	if (rc) {
++		kfree(fw_control_context);
++		pm8001_tag_free(pm8001_ha, tag);
++	}
++
+ 	return rc;
+ }
+ 
+@@ -4944,6 +4954,9 @@ pm8001_chip_set_dev_state_req(struct pm8001_hba_info *pm8001_ha,
+ 	payload.nds = cpu_to_le32(state);
+ 	rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+ 			sizeof(payload), 0);
++	if (rc)
++		pm8001_tag_free(pm8001_ha, tag);
++
+ 	return rc;
+ 
+ }
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index 75ac4d86d9c4b..ba5852548bee3 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -831,10 +831,10 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
+ 
+ 		res = PM8001_CHIP_DISP->task_abort(pm8001_ha,
+ 			pm8001_dev, flag, task_tag, ccb_tag);
+-
+ 		if (res) {
+ 			del_timer(&task->slow_task->timer);
+ 			pm8001_dbg(pm8001_ha, FAIL, "Executing internal task failed\n");
++			pm8001_tag_free(pm8001_ha, ccb_tag);
+ 			goto ex_err;
+ 		}
+ 		wait_for_completion(&task->slow_task->completion);
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index b5e60553acdc5..4c03bf08b543c 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -66,18 +66,16 @@ int pm80xx_bar4_shift(struct pm8001_hba_info *pm8001_ha, u32 shift_value)
+ }
+ 
+ static void pm80xx_pci_mem_copy(struct pm8001_hba_info  *pm8001_ha, u32 soffset,
+-				const void *destination,
++				__le32 *destination,
+ 				u32 dw_count, u32 bus_base_number)
+ {
+ 	u32 index, value, offset;
+-	u32 *destination1;
+-	destination1 = (u32 *)destination;
+ 
+-	for (index = 0; index < dw_count; index += 4, destination1++) {
++	for (index = 0; index < dw_count; index += 4, destination++) {
+ 		offset = (soffset + index);
+ 		if (offset < (64 * 1024)) {
+ 			value = pm8001_cr32(pm8001_ha, bus_base_number, offset);
+-			*destination1 =  cpu_to_le32(value);
++			*destination = cpu_to_le32(value);
+ 		}
+ 	}
+ 	return;
+@@ -4849,8 +4847,13 @@ static int pm80xx_chip_phy_ctl_req(struct pm8001_hba_info *pm8001_ha,
+ 	payload.tag = cpu_to_le32(tag);
+ 	payload.phyop_phyid =
+ 		cpu_to_le32(((phy_op & 0xFF) << 8) | (phyId & 0xFF));
+-	return pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+-			sizeof(payload), 0);
++
++	rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
++				  sizeof(payload), 0);
++	if (rc)
++		pm8001_tag_free(pm8001_ha, tag);
++
++	return rc;
+ }
+ 
+ static u32 pm80xx_chip_is_our_interrupt(struct pm8001_hba_info *pm8001_ha)
+diff --git a/drivers/scsi/zorro7xx.c b/drivers/scsi/zorro7xx.c
+index 27b9e2baab1a6..7acf9193a9e80 100644
+--- a/drivers/scsi/zorro7xx.c
++++ b/drivers/scsi/zorro7xx.c
+@@ -159,6 +159,8 @@ static void zorro7xx_remove_one(struct zorro_dev *z)
+ 	scsi_remove_host(host);
+ 
+ 	NCR_700_release(host);
++	if (host->base > 0x01000000)
++		iounmap(hostdata->base);
+ 	kfree(hostdata);
+ 	free_irq(host->irq, host);
+ 	zorro_release_device(z);
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 4a80f043b7b17..766b00350e391 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -1032,7 +1032,7 @@ static int bcm_qspi_exec_mem_op(struct spi_mem *mem,
+ 	addr = op->addr.val;
+ 	len = op->data.nbytes;
+ 
+-	if (bcm_qspi_bspi_ver_three(qspi) == true) {
++	if (has_bspi(qspi) && bcm_qspi_bspi_ver_three(qspi) == true) {
+ 		/*
+ 		 * The address coming into this function is a raw flash offset.
+ 		 * But for BSPI <= V3, we need to convert it to a remapped BSPI
+@@ -1051,7 +1051,7 @@ static int bcm_qspi_exec_mem_op(struct spi_mem *mem,
+ 	    len < 4)
+ 		mspi_read = true;
+ 
+-	if (mspi_read)
++	if (!has_bspi(qspi) || mspi_read)
+ 		return bcm_qspi_mspi_exec_mem_op(spi, op);
+ 
+ 	ret = bcm_qspi_bspi_set_mode(qspi, op, 0);
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index 38b10fd5d9921..95b91fe45cb38 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -2280,6 +2280,9 @@ void vchiq_msg_queue_push(unsigned int handle, struct vchiq_header *header)
+ 	struct vchiq_service *service = find_service_by_handle(handle);
+ 	int pos;
+ 
++	if (!service)
++		return;
++
+ 	while (service->msg_queue_write == service->msg_queue_read +
+ 		VCHIQ_MAX_SLOTS) {
+ 		if (wait_for_completion_interruptible(&service->msg_queue_pop))
+@@ -2299,6 +2302,9 @@ struct vchiq_header *vchiq_msg_hold(unsigned int handle)
+ 	struct vchiq_header *header;
+ 	int pos;
+ 
++	if (!service)
++		return NULL;
++
+ 	if (service->msg_queue_write == service->msg_queue_read)
+ 		return NULL;
+ 
+diff --git a/drivers/staging/wfx/main.c b/drivers/staging/wfx/main.c
+index e7bc1988124a8..d5dacd5583c6e 100644
+--- a/drivers/staging/wfx/main.c
++++ b/drivers/staging/wfx/main.c
+@@ -309,7 +309,8 @@ struct wfx_dev *wfx_init_common(struct device *dev,
+ 	wdev->pdata.gpio_wakeup = devm_gpiod_get_optional(dev, "wakeup",
+ 							  GPIOD_OUT_LOW);
+ 	if (IS_ERR(wdev->pdata.gpio_wakeup))
+-		return NULL;
++		goto err;
++
+ 	if (wdev->pdata.gpio_wakeup)
+ 		gpiod_set_consumer_name(wdev->pdata.gpio_wakeup, "wfx wakeup");
+ 
+@@ -328,6 +329,10 @@ struct wfx_dev *wfx_init_common(struct device *dev,
+ 		return NULL;
+ 
+ 	return wdev;
++
++err:
++	ieee80211_free_hw(hw);
++	return NULL;
+ }
+ 
+ int wfx_probe(struct wfx_dev *wdev)
+diff --git a/drivers/tty/serial/samsung_tty.c b/drivers/tty/serial/samsung_tty.c
+index 8ae3e03fbd8ce..81faead3c4f80 100644
+--- a/drivers/tty/serial/samsung_tty.c
++++ b/drivers/tty/serial/samsung_tty.c
+@@ -883,11 +883,8 @@ static irqreturn_t s3c24xx_serial_tx_chars(int irq, void *id)
+ 		goto out;
+ 	}
+ 
+-	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) {
+-		spin_unlock(&port->lock);
++	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ 		uart_write_wakeup(port);
+-		spin_lock(&port->lock);
+-	}
+ 
+ 	if (uart_circ_empty(xmit))
+ 		s3c24xx_serial_stop_tx(port);
+diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c
+index e196673f5c647..efaf0db595f46 100644
+--- a/drivers/usb/dwc3/dwc3-omap.c
++++ b/drivers/usb/dwc3/dwc3-omap.c
+@@ -242,7 +242,7 @@ static void dwc3_omap_set_mailbox(struct dwc3_omap *omap,
+ 		break;
+ 
+ 	case OMAP_DWC3_ID_FLOAT:
+-		if (omap->vbus_reg)
++		if (omap->vbus_reg && regulator_is_enabled(omap->vbus_reg))
+ 			regulator_disable(omap->vbus_reg);
+ 		val = dwc3_omap_read_utmi_ctrl(omap);
+ 		val |= USBOTGSS_UTMI_OTG_CTRL_IDDIG;
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index 57ee72fead45a..de178bf264c21 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -32,9 +32,6 @@
+ #include <linux/workqueue.h>
+ 
+ /* XUSB_DEV registers */
+-#define SPARAM 0x000
+-#define  SPARAM_ERSTMAX_MASK GENMASK(20, 16)
+-#define  SPARAM_ERSTMAX(x) (((x) << 16) & SPARAM_ERSTMAX_MASK)
+ #define DB 0x004
+ #define  DB_TARGET_MASK GENMASK(15, 8)
+ #define  DB_TARGET(x) (((x) << 8) & DB_TARGET_MASK)
+@@ -275,8 +272,10 @@ BUILD_EP_CONTEXT_RW(deq_hi, deq_hi, 0, 0xffffffff)
+ BUILD_EP_CONTEXT_RW(avg_trb_len, tx_info, 0, 0xffff)
+ BUILD_EP_CONTEXT_RW(max_esit_payload, tx_info, 16, 0xffff)
+ BUILD_EP_CONTEXT_RW(edtla, rsvd[0], 0, 0xffffff)
+-BUILD_EP_CONTEXT_RW(seq_num, rsvd[0], 24, 0xff)
++BUILD_EP_CONTEXT_RW(rsvd, rsvd[0], 24, 0x1)
+ BUILD_EP_CONTEXT_RW(partial_td, rsvd[0], 25, 0x1)
++BUILD_EP_CONTEXT_RW(splitxstate, rsvd[0], 26, 0x1)
++BUILD_EP_CONTEXT_RW(seq_num, rsvd[0], 27, 0x1f)
+ BUILD_EP_CONTEXT_RW(cerrcnt, rsvd[1], 18, 0x3)
+ BUILD_EP_CONTEXT_RW(data_offset, rsvd[2], 0, 0x1ffff)
+ BUILD_EP_CONTEXT_RW(numtrbs, rsvd[2], 22, 0x1f)
+@@ -1557,6 +1556,9 @@ static int __tegra_xudc_ep_set_halt(struct tegra_xudc_ep *ep, bool halt)
+ 		ep_reload(xudc, ep->index);
+ 
+ 		ep_ctx_write_state(ep->context, EP_STATE_RUNNING);
++		ep_ctx_write_rsvd(ep->context, 0);
++		ep_ctx_write_partial_td(ep->context, 0);
++		ep_ctx_write_splitxstate(ep->context, 0);
+ 		ep_ctx_write_seq_num(ep->context, 0);
+ 
+ 		ep_reload(xudc, ep->index);
+@@ -2812,7 +2814,10 @@ static void tegra_xudc_reset(struct tegra_xudc *xudc)
+ 	xudc->setup_seq_num = 0;
+ 	xudc->queued_setup_packet = false;
+ 
+-	ep_ctx_write_seq_num(ep0->context, xudc->setup_seq_num);
++	ep_ctx_write_rsvd(ep0->context, 0);
++	ep_ctx_write_partial_td(ep0->context, 0);
++	ep_ctx_write_splitxstate(ep0->context, 0);
++	ep_ctx_write_seq_num(ep0->context, 0);
+ 
+ 	deq_ptr = trb_virt_to_phys(ep0, &ep0->transfer_ring[ep0->deq_ptr]);
+ 
+@@ -3295,11 +3300,6 @@ static void tegra_xudc_init_event_ring(struct tegra_xudc *xudc)
+ 	unsigned int i;
+ 	u32 val;
+ 
+-	val = xudc_readl(xudc, SPARAM);
+-	val &= ~(SPARAM_ERSTMAX_MASK);
+-	val |= SPARAM_ERSTMAX(XUDC_NR_EVENT_RINGS);
+-	xudc_writel(xudc, val, SPARAM);
+-
+ 	for (i = 0; i < ARRAY_SIZE(xudc->event_ring); i++) {
+ 		memset(xudc->event_ring[i], 0, XUDC_EVENT_RING_SIZE *
+ 		       sizeof(*xudc->event_ring[i]));
+diff --git a/drivers/usb/host/ehci-pci.c b/drivers/usb/host/ehci-pci.c
+index e87cf3a00fa4b..638f03b897394 100644
+--- a/drivers/usb/host/ehci-pci.c
++++ b/drivers/usb/host/ehci-pci.c
+@@ -21,6 +21,9 @@ static const char hcd_name[] = "ehci-pci";
+ /* defined here to avoid adding to pci_ids.h for single instance use */
+ #define PCI_DEVICE_ID_INTEL_CE4100_USB	0x2e70
+ 
++#define PCI_VENDOR_ID_ASPEED		0x1a03
++#define PCI_DEVICE_ID_ASPEED_EHCI	0x2603
++
+ /*-------------------------------------------------------------------------*/
+ #define PCI_DEVICE_ID_INTEL_QUARK_X1000_SOC		0x0939
+ static inline bool is_intel_quark_x1000(struct pci_dev *pdev)
+@@ -222,6 +225,12 @@ static int ehci_pci_setup(struct usb_hcd *hcd)
+ 			ehci->has_synopsys_hc_bug = 1;
+ 		}
+ 		break;
++	case PCI_VENDOR_ID_ASPEED:
++		if (pdev->device == PCI_DEVICE_ID_ASPEED_EHCI) {
++			ehci_info(ehci, "applying Aspeed HC workaround\n");
++			ehci->is_aspeed = 1;
++		}
++		break;
+ 	}
+ 
+ 	/* optional debug port, normally in the first BAR */
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index da02c3e96e7b2..e303f6f073d2b 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -472,6 +472,7 @@ static void vhost_tx_batch(struct vhost_net *net,
+ 		goto signal_used;
+ 
+ 	msghdr->msg_control = &ctl;
++	msghdr->msg_controllen = sizeof(ctl);
+ 	err = sock->ops->sendmsg(sock, msghdr, 0);
+ 	if (unlikely(err < 0)) {
+ 		vq_err(&nvq->vq, "Fail to batch sending packets\n");
+diff --git a/drivers/w1/slaves/w1_therm.c b/drivers/w1/slaves/w1_therm.c
+index 974d02bb3a45c..6546d029c7fd6 100644
+--- a/drivers/w1/slaves/w1_therm.c
++++ b/drivers/w1/slaves/w1_therm.c
+@@ -2092,16 +2092,20 @@ static ssize_t w1_seq_show(struct device *device,
+ 		if (sl->reg_num.id == reg_num->id)
+ 			seq = i;
+ 
++		if (w1_reset_bus(sl->master))
++			goto error;
++
++		/* Put the device into chain DONE state */
++		w1_write_8(sl->master, W1_MATCH_ROM);
++		w1_write_block(sl->master, (u8 *)&rn, 8);
+ 		w1_write_8(sl->master, W1_42_CHAIN);
+ 		w1_write_8(sl->master, W1_42_CHAIN_DONE);
+ 		w1_write_8(sl->master, W1_42_CHAIN_DONE_INV);
+-		w1_read_block(sl->master, &ack, sizeof(ack));
+ 
+ 		/* check for acknowledgment */
+ 		ack = w1_read_8(sl->master);
+ 		if (ack != W1_42_SUCCESS_CONFIRM_BYTE)
+ 			goto error;
+-
+ 	}
+ 
+ 	/* Exit from CHAIN state */
+diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
+index f39d02e7f7efe..16f44bc481ab4 100644
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -121,7 +121,7 @@ struct extent_buffer {
+  */
+ struct extent_changeset {
+ 	/* How many bytes are set/cleared in this operation */
+-	unsigned int bytes_changed;
++	u64 bytes_changed;
+ 
+ 	/* Changed ranges */
+ 	struct ulist range_changed;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 1d9262a35473c..f7f4ac01589bc 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4023,6 +4023,13 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ 			   dest->root_key.objectid);
+ 		return -EPERM;
+ 	}
++	if (atomic_read(&dest->nr_swapfiles)) {
++		spin_unlock(&dest->root_item_lock);
++		btrfs_warn(fs_info,
++			   "attempt to delete subvolume %llu with active swapfile",
++			   root->root_key.objectid);
++		return -EPERM;
++	}
+ 	root_flags = btrfs_root_flags(&dest->root_item);
+ 	btrfs_set_root_flags(&dest->root_item,
+ 			     root_flags | BTRFS_ROOT_SUBVOL_DEAD);
+@@ -10215,8 +10222,23 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ 	 * set. We use this counter to prevent snapshots. We must increment it
+ 	 * before walking the extents because we don't want a concurrent
+ 	 * snapshot to run after we've already checked the extents.
++	 *
++	 * It is possible that subvolume is marked for deletion but still not
++	 * removed yet. To prevent this race, we check the root status before
++	 * activating the swapfile.
+ 	 */
++	spin_lock(&root->root_item_lock);
++	if (btrfs_root_dead(root)) {
++		spin_unlock(&root->root_item_lock);
++
++		btrfs_exclop_finish(fs_info);
++		btrfs_warn(fs_info,
++		"cannot activate swapfile because subvolume %llu is being deleted",
++			root->root_key.objectid);
++		return -EPERM;
++	}
+ 	atomic_inc(&root->nr_swapfiles);
++	spin_unlock(&root->root_item_lock);
+ 
+ 	isize = ALIGN_DOWN(inode->i_size, fs_info->sectorsize);
+ 
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index f63c1a090139c..1fddb9cd3e88e 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -478,8 +478,11 @@ more:
+ 					2 : (fpos_off(rde->offset) + 1);
+ 			err = note_last_dentry(dfi, rde->name, rde->name_len,
+ 					       next_offset);
+-			if (err)
++			if (err) {
++				ceph_mdsc_put_request(dfi->last_readdir);
++				dfi->last_readdir = NULL;
+ 				return err;
++			}
+ 		} else if (req->r_reply_info.dir_end) {
+ 			dfi->next_offset = 2;
+ 			/* keep last name */
+@@ -520,6 +523,12 @@ more:
+ 		if (!dir_emit(ctx, rde->name, rde->name_len,
+ 			      ceph_present_ino(inode->i_sb, le64_to_cpu(rde->inode.in->ino)),
+ 			      le32_to_cpu(rde->inode.in->mode) >> 12)) {
++			/*
++			 * NOTE: Here no need to put the 'dfi->last_readdir',
++			 * because when dir_emit stops us it's most likely
++			 * doesn't have enough memory, etc. So for next readdir
++			 * it will continue.
++			 */
+ 			dout("filldir stopping us...\n");
+ 			return 0;
+ 		}
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index b34c02985d9d2..6c047570d6a94 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -2200,7 +2200,7 @@ int gfs2_setattr_size(struct inode *inode, u64 newsize)
+ 
+ 	ret = do_shrink(inode, newsize);
+ out:
+-	gfs2_rs_delete(ip, NULL);
++	gfs2_rs_delete(ip);
+ 	gfs2_qa_put(ip);
+ 	return ret;
+ }
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index cfd9d03f604fe..2e6f622ed4283 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -717,7 +717,8 @@ static int gfs2_release(struct inode *inode, struct file *file)
+ 	file->private_data = NULL;
+ 
+ 	if (file->f_mode & FMODE_WRITE) {
+-		gfs2_rs_delete(ip, &inode->i_writecount);
++		if (gfs2_rs_active(&ip->i_res))
++			gfs2_rs_delete(ip);
+ 		gfs2_qa_put(ip);
+ 	}
+ 	return 0;
+diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
+index 65ae4fc28ede4..74a6b0800e059 100644
+--- a/fs/gfs2/inode.c
++++ b/fs/gfs2/inode.c
+@@ -811,7 +811,7 @@ fail_free_inode:
+ 		if (free_vfs_inode) /* else evict will do the put for us */
+ 			gfs2_glock_put(ip->i_gl);
+ 	}
+-	gfs2_rs_delete(ip, NULL);
++	gfs2_rs_deltree(&ip->i_res);
+ 	gfs2_qa_put(ip);
+ fail_free_acls:
+ 	posix_acl_release(default_acl);
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index eb775e93de97c..dc55b029afaa4 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -664,13 +664,14 @@ void gfs2_rs_deltree(struct gfs2_blkreserv *rs)
+ /**
+  * gfs2_rs_delete - delete a multi-block reservation
+  * @ip: The inode for this reservation
+- * @wcount: The inode's write count, or NULL
+  *
+  */
+-void gfs2_rs_delete(struct gfs2_inode *ip, atomic_t *wcount)
++void gfs2_rs_delete(struct gfs2_inode *ip)
+ {
++	struct inode *inode = &ip->i_inode;
++
+ 	down_write(&ip->i_rw_mutex);
+-	if ((wcount == NULL) || (atomic_read(wcount) <= 1))
++	if (atomic_read(&inode->i_writecount) <= 1)
+ 		gfs2_rs_deltree(&ip->i_res);
+ 	up_write(&ip->i_rw_mutex);
+ }
+diff --git a/fs/gfs2/rgrp.h b/fs/gfs2/rgrp.h
+index 9a587ada51eda..2d3c150c55bd5 100644
+--- a/fs/gfs2/rgrp.h
++++ b/fs/gfs2/rgrp.h
+@@ -45,7 +45,7 @@ extern int gfs2_alloc_blocks(struct gfs2_inode *ip, u64 *bn, unsigned int *n,
+ 			     bool dinode, u64 *generation);
+ 
+ extern void gfs2_rs_deltree(struct gfs2_blkreserv *rs);
+-extern void gfs2_rs_delete(struct gfs2_inode *ip, atomic_t *wcount);
++extern void gfs2_rs_delete(struct gfs2_inode *ip);
+ extern void __gfs2_free_blocks(struct gfs2_inode *ip, struct gfs2_rgrpd *rgd,
+ 			       u64 bstart, u32 blen, int meta);
+ extern void gfs2_free_meta(struct gfs2_inode *ip, struct gfs2_rgrpd *rgd,
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index d2b7ecbd1b150..d14b98aa1c3eb 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1434,7 +1434,7 @@ out:
+ 	truncate_inode_pages_final(&inode->i_data);
+ 	if (ip->i_qadata)
+ 		gfs2_assert_warn(sdp, ip->i_qadata->qa_ref == 0);
+-	gfs2_rs_delete(ip, NULL);
++	gfs2_rs_deltree(&ip->i_res);
+ 	gfs2_ordered_del_inode(ip);
+ 	clear_inode(inode);
+ 	gfs2_dir_hash_inval(ip);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 5959b0359524c..ab9290ab4cae0 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1556,6 +1556,7 @@ static void __io_queue_deferred(struct io_ring_ctx *ctx)
+ 
+ static void io_flush_timeouts(struct io_ring_ctx *ctx)
+ {
++	struct io_kiocb *req, *tmp;
+ 	u32 seq;
+ 
+ 	if (list_empty(&ctx->timeout_list))
+@@ -1563,10 +1564,8 @@ static void io_flush_timeouts(struct io_ring_ctx *ctx)
+ 
+ 	seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
+ 
+-	do {
++	list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
+ 		u32 events_needed, events_got;
+-		struct io_kiocb *req = list_first_entry(&ctx->timeout_list,
+-						struct io_kiocb, timeout.list);
+ 
+ 		if (io_is_timeout_noseq(req))
+ 			break;
+@@ -1583,9 +1582,8 @@ static void io_flush_timeouts(struct io_ring_ctx *ctx)
+ 		if (events_got < events_needed)
+ 			break;
+ 
+-		list_del_init(&req->timeout.list);
+ 		io_kill_timeout(req, 0);
+-	} while (!list_empty(&ctx->timeout_list));
++	}
+ 
+ 	ctx->cq_last_tm_flush = seq;
+ }
+@@ -5639,6 +5637,7 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ 	else
+ 		data->mode = HRTIMER_MODE_REL;
+ 
++	INIT_LIST_HEAD(&req->timeout.list);
+ 	hrtimer_init(&data->timer, CLOCK_MONOTONIC, data->mode);
+ 	return 0;
+ }
+@@ -6282,12 +6281,12 @@ static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
+ 	if (!list_empty(&req->link_list)) {
+ 		prev = list_entry(req->link_list.prev, struct io_kiocb,
+ 				  link_list);
+-		if (refcount_inc_not_zero(&prev->refs))
+-			list_del_init(&req->link_list);
+-		else
++		list_del_init(&req->link_list);
++		if (!refcount_inc_not_zero(&prev->refs))
+ 			prev = NULL;
+ 	}
+ 
++	list_del(&req->timeout.list);
+ 	spin_unlock_irqrestore(&ctx->completion_lock, flags);
+ 
+ 	if (prev) {
+@@ -7346,8 +7345,12 @@ static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
+ 		refcount_add(skb->truesize, &sk->sk_wmem_alloc);
+ 		skb_queue_head(&sk->sk_receive_queue, skb);
+ 
+-		for (i = 0; i < nr_files; i++)
+-			fput(fpl->fp[i]);
++		for (i = 0; i < nr; i++) {
++			struct file *file = io_file_from_index(ctx, i + offset);
++
++			if (file)
++				fput(file);
++		}
+ 	} else {
+ 		kfree_skb(skb);
+ 		free_uid(fpl->user);
+diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c
+index b0eb9c85eea0c..980aa3300f106 100644
+--- a/fs/jfs/inode.c
++++ b/fs/jfs/inode.c
+@@ -146,12 +146,13 @@ void jfs_evict_inode(struct inode *inode)
+ 		dquot_initialize(inode);
+ 
+ 		if (JFS_IP(inode)->fileset == FILESYSTEM_I) {
++			struct inode *ipimap = JFS_SBI(inode->i_sb)->ipimap;
+ 			truncate_inode_pages_final(&inode->i_data);
+ 
+ 			if (test_cflag(COMMIT_Freewmap, inode))
+ 				jfs_free_zero_link(inode);
+ 
+-			if (JFS_SBI(inode->i_sb)->ipimap)
++			if (ipimap && JFS_IP(ipimap)->i_imap)
+ 				diFree(inode);
+ 
+ 			/*
+diff --git a/fs/minix/inode.c b/fs/minix/inode.c
+index 34f546404aa11..e938f5b1e4b94 100644
+--- a/fs/minix/inode.c
++++ b/fs/minix/inode.c
+@@ -446,7 +446,8 @@ static const struct address_space_operations minix_aops = {
+ 	.writepage = minix_writepage,
+ 	.write_begin = minix_write_begin,
+ 	.write_end = generic_write_end,
+-	.bmap = minix_bmap
++	.bmap = minix_bmap,
++	.direct_IO = noop_direct_IO
+ };
+ 
+ static const struct inode_operations minix_symlink_inode_operations = {
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 2ad56ff4752c7..9f88ca7b20015 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1628,16 +1628,6 @@ const struct dentry_operations nfs4_dentry_operations = {
+ };
+ EXPORT_SYMBOL_GPL(nfs4_dentry_operations);
+ 
+-static fmode_t flags_to_mode(int flags)
+-{
+-	fmode_t res = (__force fmode_t)flags & FMODE_EXEC;
+-	if ((flags & O_ACCMODE) != O_WRONLY)
+-		res |= FMODE_READ;
+-	if ((flags & O_ACCMODE) != O_RDONLY)
+-		res |= FMODE_WRITE;
+-	return res;
+-}
+-
+ static struct nfs_open_context *create_nfs_open_context(struct dentry *dentry, int open_flags, struct file *filp)
+ {
+ 	return alloc_nfs_open_context(dentry, flags_to_mode(open_flags), filp);
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 3c0335c15a730..c220810c61d14 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -172,8 +172,8 @@ ssize_t nfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 	VM_BUG_ON(iov_iter_count(iter) != PAGE_SIZE);
+ 
+ 	if (iov_iter_rw(iter) == READ)
+-		return nfs_file_direct_read(iocb, iter);
+-	return nfs_file_direct_write(iocb, iter);
++		return nfs_file_direct_read(iocb, iter, true);
++	return nfs_file_direct_write(iocb, iter, true);
+ }
+ 
+ static void nfs_direct_release_pages(struct page **pages, unsigned int npages)
+@@ -424,6 +424,7 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
+  * nfs_file_direct_read - file direct read operation for NFS files
+  * @iocb: target I/O control block
+  * @iter: vector of user buffers into which to read data
++ * @swap: flag indicating this is swap IO, not O_DIRECT IO
+  *
+  * We use this function for direct reads instead of calling
+  * generic_file_aio_read() in order to avoid gfar's check to see if
+@@ -439,7 +440,8 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
+  * client must read the updated atime from the server back into its
+  * cache.
+  */
+-ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter)
++ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter,
++			     bool swap)
+ {
+ 	struct file *file = iocb->ki_filp;
+ 	struct address_space *mapping = file->f_mapping;
+@@ -481,12 +483,14 @@ ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter)
+ 	if (iter_is_iovec(iter))
+ 		dreq->flags = NFS_ODIRECT_SHOULD_DIRTY;
+ 
+-	nfs_start_io_direct(inode);
++	if (!swap)
++		nfs_start_io_direct(inode);
+ 
+ 	NFS_I(inode)->read_io += count;
+ 	requested = nfs_direct_read_schedule_iovec(dreq, iter, iocb->ki_pos);
+ 
+-	nfs_end_io_direct(inode);
++	if (!swap)
++		nfs_end_io_direct(inode);
+ 
+ 	if (requested > 0) {
+ 		result = nfs_direct_wait(dreq);
+@@ -789,7 +793,7 @@ static const struct nfs_pgio_completion_ops nfs_direct_write_completion_ops = {
+  */
+ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+ 					       struct iov_iter *iter,
+-					       loff_t pos)
++					       loff_t pos, int ioflags)
+ {
+ 	struct nfs_pageio_descriptor desc;
+ 	struct inode *inode = dreq->inode;
+@@ -797,7 +801,7 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+ 	size_t requested_bytes = 0;
+ 	size_t wsize = max_t(size_t, NFS_SERVER(inode)->wsize, PAGE_SIZE);
+ 
+-	nfs_pageio_init_write(&desc, inode, FLUSH_COND_STABLE, false,
++	nfs_pageio_init_write(&desc, inode, ioflags, false,
+ 			      &nfs_direct_write_completion_ops);
+ 	desc.pg_dreq = dreq;
+ 	get_dreq(dreq);
+@@ -875,6 +879,7 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+  * nfs_file_direct_write - file direct write operation for NFS files
+  * @iocb: target I/O control block
+  * @iter: vector of user buffers from which to write data
++ * @swap: flag indicating this is swap IO, not O_DIRECT IO
+  *
+  * We use this function for direct writes instead of calling
+  * generic_file_aio_write() in order to avoid taking the inode
+@@ -891,7 +896,8 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+  * Note that O_APPEND is not supported for NFS direct writes, as there
+  * is no atomic O_APPEND write facility in the NFS protocol.
+  */
+-ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter)
++ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter,
++			      bool swap)
+ {
+ 	ssize_t result, requested;
+ 	size_t count;
+@@ -905,7 +911,11 @@ ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter)
+ 	dfprintk(FILE, "NFS: direct write(%pD2, %zd@%Ld)\n",
+ 		file, iov_iter_count(iter), (long long) iocb->ki_pos);
+ 
+-	result = generic_write_checks(iocb, iter);
++	if (swap)
++		/* bypass generic checks */
++		result =  iov_iter_count(iter);
++	else
++		result = generic_write_checks(iocb, iter);
+ 	if (result <= 0)
+ 		return result;
+ 	count = result;
+@@ -936,16 +946,22 @@ ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter)
+ 		dreq->iocb = iocb;
+ 	pnfs_init_ds_commit_info_ops(&dreq->ds_cinfo, inode);
+ 
+-	nfs_start_io_direct(inode);
++	if (swap) {
++		requested = nfs_direct_write_schedule_iovec(dreq, iter, pos,
++							    FLUSH_STABLE);
++	} else {
++		nfs_start_io_direct(inode);
+ 
+-	requested = nfs_direct_write_schedule_iovec(dreq, iter, pos);
++		requested = nfs_direct_write_schedule_iovec(dreq, iter, pos,
++							    FLUSH_COND_STABLE);
+ 
+-	if (mapping->nrpages) {
+-		invalidate_inode_pages2_range(mapping,
+-					      pos >> PAGE_SHIFT, end);
+-	}
++		if (mapping->nrpages) {
++			invalidate_inode_pages2_range(mapping,
++						      pos >> PAGE_SHIFT, end);
++		}
+ 
+-	nfs_end_io_direct(inode);
++		nfs_end_io_direct(inode);
++	}
+ 
+ 	if (requested > 0) {
+ 		result = nfs_direct_wait(dreq);
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index 63940a7a70be1..7b47f9b063f1f 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -161,7 +161,7 @@ nfs_file_read(struct kiocb *iocb, struct iov_iter *to)
+ 	ssize_t result;
+ 
+ 	if (iocb->ki_flags & IOCB_DIRECT)
+-		return nfs_file_direct_read(iocb, to);
++		return nfs_file_direct_read(iocb, to, false);
+ 
+ 	dprintk("NFS: read(%pD2, %zu@%lu)\n",
+ 		iocb->ki_filp,
+@@ -616,7 +616,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ 		return result;
+ 
+ 	if (iocb->ki_flags & IOCB_DIRECT)
+-		return nfs_file_direct_write(iocb, from);
++		return nfs_file_direct_write(iocb, from, false);
+ 
+ 	dprintk("NFS: write(%pD2, %zu@%Ld)\n",
+ 		file, iov_iter_count(from), (long long) iocb->ki_pos);
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index f27ecc2e490f2..1adece1cff3ed 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1139,7 +1139,6 @@ int nfs_open(struct inode *inode, struct file *filp)
+ 	nfs_fscache_open_file(inode, filp);
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(nfs_open);
+ 
+ /*
+  * This function is called whenever some part of NFS notices that
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 98554dd18a715..7009a8dddd45b 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -42,6 +42,16 @@ static inline bool nfs_lookup_is_soft_revalidate(const struct dentry *dentry)
+ 	return true;
+ }
+ 
++static inline fmode_t flags_to_mode(int flags)
++{
++	fmode_t res = (__force fmode_t)flags & FMODE_EXEC;
++	if ((flags & O_ACCMODE) != O_WRONLY)
++		res |= FMODE_READ;
++	if ((flags & O_ACCMODE) != O_RDONLY)
++		res |= FMODE_WRITE;
++	return res;
++}
++
+ /*
+  * Note: RFC 1813 doesn't limit the number of auth flavors that
+  * a server can return, so make something up.
+@@ -578,6 +588,13 @@ nfs_write_match_verf(const struct nfs_writeverf *verf,
+ 		!nfs_write_verifier_cmp(&req->wb_verf, &verf->verifier);
+ }
+ 
++static inline gfp_t nfs_io_gfp_mask(void)
++{
++	if (current->flags & PF_WQ_WORKER)
++		return GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
++	return GFP_KERNEL;
++}
++
+ /* unlink.c */
+ extern struct rpc_task *
+ nfs_async_rename(struct inode *old_dir, struct inode *new_dir,
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 2587b1b8e2ef7..dad32b171e677 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -567,8 +567,10 @@ static int _nfs42_proc_copy_notify(struct file *src, struct file *dst,
+ 
+ 	ctx = get_nfs_open_context(nfs_file_open_context(src));
+ 	l_ctx = nfs_get_lock_context(ctx);
+-	if (IS_ERR(l_ctx))
+-		return PTR_ERR(l_ctx);
++	if (IS_ERR(l_ctx)) {
++		status = PTR_ERR(l_ctx);
++		goto out;
++	}
+ 
+ 	status = nfs4_set_rw_stateid(&args->cna_src_stateid, ctx, l_ctx,
+ 				     FMODE_READ);
+@@ -576,7 +578,7 @@ static int _nfs42_proc_copy_notify(struct file *src, struct file *dst,
+ 	if (status) {
+ 		if (status == -EAGAIN)
+ 			status = -NFS4ERR_BAD_STATEID;
+-		return status;
++		goto out;
+ 	}
+ 
+ 	status = nfs4_call_sync(src_server->client, src_server, &msg,
+@@ -584,6 +586,7 @@ static int _nfs42_proc_copy_notify(struct file *src, struct file *dst,
+ 	if (status == -ENOTSUPP)
+ 		src_server->caps &= ~NFS_CAP_COPY_NOTIFY;
+ 
++out:
+ 	put_nfs_open_context(nfs_file_open_context(src));
+ 	return status;
+ }
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index a1e5c6b85dedc..9fdecd9090493 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -32,6 +32,7 @@ nfs4_file_open(struct inode *inode, struct file *filp)
+ 	struct dentry *parent = NULL;
+ 	struct inode *dir;
+ 	unsigned openflags = filp->f_flags;
++	fmode_t f_mode;
+ 	struct iattr attr;
+ 	int err;
+ 
+@@ -50,8 +51,9 @@ nfs4_file_open(struct inode *inode, struct file *filp)
+ 	if (err)
+ 		return err;
+ 
++	f_mode = filp->f_mode;
+ 	if ((openflags & O_ACCMODE) == 3)
+-		return nfs_open(inode, filp);
++		f_mode |= flags_to_mode(openflags);
+ 
+ 	/* We can't create new files here */
+ 	openflags &= ~(O_CREAT|O_EXCL);
+@@ -59,7 +61,7 @@ nfs4_file_open(struct inode *inode, struct file *filp)
+ 	parent = dget_parent(dentry);
+ 	dir = d_inode(parent);
+ 
+-	ctx = alloc_nfs_open_context(file_dentry(filp), filp->f_mode, filp);
++	ctx = alloc_nfs_open_context(file_dentry(filp), f_mode, filp);
+ 	err = PTR_ERR(ctx);
+ 	if (IS_ERR(ctx))
+ 		goto out;
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index cbeec29e9f21a..a8fe8f84c5ae0 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -49,6 +49,7 @@
+ #include <linux/workqueue.h>
+ #include <linux/bitops.h>
+ #include <linux/jiffies.h>
++#include <linux/sched/mm.h>
+ 
+ #include <linux/sunrpc/clnt.h>
+ 
+@@ -2557,9 +2558,17 @@ static void nfs4_layoutreturn_any_run(struct nfs_client *clp)
+ 
+ static void nfs4_state_manager(struct nfs_client *clp)
+ {
++	unsigned int memflags;
+ 	int status = 0;
+ 	const char *section = "", *section_sep = "";
+ 
++	/*
++	 * State recovery can deadlock if the direct reclaim code tries
++	 * start NFS writeback. So ensure memory allocations are all
++	 * GFP_NOFS.
++	 */
++	memflags = memalloc_nofs_save();
++
+ 	/* Ensure exclusive access to NFSv4 state */
+ 	do {
+ 		trace_nfs4_state_mgr(clp);
+@@ -2654,6 +2663,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
+ 			clear_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state);
+ 		}
+ 
++		memalloc_nofs_restore(memflags);
+ 		nfs4_end_drain_session(clp);
+ 		nfs4_clear_state_manager_bit(clp);
+ 
+@@ -2671,6 +2681,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
+ 			return;
+ 		if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
+ 			return;
++		memflags = memalloc_nofs_save();
+ 	} while (refcount_read(&clp->cl_count) > 1 && !signalled());
+ 	goto out_drain;
+ 
+@@ -2683,6 +2694,7 @@ out_error:
+ 			clp->cl_hostname, -status);
+ 	ssleep(1);
+ out_drain:
++	memalloc_nofs_restore(memflags);
+ 	nfs4_end_drain_session(clp);
+ 	nfs4_clear_state_manager_bit(clp);
+ }
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 98b9c1ed366ee..17fef6eb490c5 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -90,10 +90,10 @@ void nfs_set_pgio_error(struct nfs_pgio_header *hdr, int error, loff_t pos)
+ 	}
+ }
+ 
+-static inline struct nfs_page *
+-nfs_page_alloc(void)
++static inline struct nfs_page *nfs_page_alloc(void)
+ {
+-	struct nfs_page	*p = kmem_cache_zalloc(nfs_page_cachep, GFP_KERNEL);
++	struct nfs_page *p =
++		kmem_cache_zalloc(nfs_page_cachep, nfs_io_gfp_mask());
+ 	if (p)
+ 		INIT_LIST_HEAD(&p->wb_list);
+ 	return p;
+@@ -901,7 +901,7 @@ int nfs_generic_pgio(struct nfs_pageio_descriptor *desc,
+ 	struct nfs_commit_info cinfo;
+ 	struct nfs_page_array *pg_array = &hdr->page_array;
+ 	unsigned int pagecount, pageused;
+-	gfp_t gfp_flags = GFP_KERNEL;
++	gfp_t gfp_flags = nfs_io_gfp_mask();
+ 
+ 	pagecount = nfs_page_array_len(mirror->pg_base, mirror->pg_count);
+ 	pg_array->npages = pagecount;
+@@ -984,7 +984,7 @@ nfs_pageio_alloc_mirrors(struct nfs_pageio_descriptor *desc,
+ 	desc->pg_mirrors_dynamic = NULL;
+ 	if (mirror_count == 1)
+ 		return desc->pg_mirrors_static;
+-	ret = kmalloc_array(mirror_count, sizeof(*ret), GFP_KERNEL);
++	ret = kmalloc_array(mirror_count, sizeof(*ret), nfs_io_gfp_mask());
+ 	if (ret != NULL) {
+ 		for (i = 0; i < mirror_count; i++)
+ 			nfs_pageio_mirror_init(&ret[i], desc->pg_bsize);
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index 7b9d701bef016..a2ad8bb87e2db 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -419,7 +419,7 @@ static struct nfs_commit_data *
+ pnfs_bucket_fetch_commitdata(struct pnfs_commit_bucket *bucket,
+ 			     struct nfs_commit_info *cinfo)
+ {
+-	struct nfs_commit_data *data = nfs_commitdata_alloc(false);
++	struct nfs_commit_data *data = nfs_commitdata_alloc();
+ 
+ 	if (!data)
+ 		return NULL;
+@@ -515,7 +515,11 @@ pnfs_generic_commit_pagelist(struct inode *inode, struct list_head *mds_pages,
+ 	unsigned int nreq = 0;
+ 
+ 	if (!list_empty(mds_pages)) {
+-		data = nfs_commitdata_alloc(true);
++		data = nfs_commitdata_alloc();
++		if (!data) {
++			nfs_retry_commit(mds_pages, NULL, cinfo, -1);
++			return -ENOMEM;
++		}
+ 		data->ds_commit_index = -1;
+ 		list_splice_init(mds_pages, &data->pages);
+ 		list_add_tail(&data->list, &list);
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index cc926e69ee9ba..5d07799513a65 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -70,27 +70,17 @@ static mempool_t *nfs_wdata_mempool;
+ static struct kmem_cache *nfs_cdata_cachep;
+ static mempool_t *nfs_commit_mempool;
+ 
+-struct nfs_commit_data *nfs_commitdata_alloc(bool never_fail)
++struct nfs_commit_data *nfs_commitdata_alloc(void)
+ {
+ 	struct nfs_commit_data *p;
+ 
+-	if (never_fail)
+-		p = mempool_alloc(nfs_commit_mempool, GFP_NOIO);
+-	else {
+-		/* It is OK to do some reclaim, not no safe to wait
+-		 * for anything to be returned to the pool.
+-		 * mempool_alloc() cannot handle that particular combination,
+-		 * so we need two separate attempts.
+-		 */
++	p = kmem_cache_zalloc(nfs_cdata_cachep, nfs_io_gfp_mask());
++	if (!p) {
+ 		p = mempool_alloc(nfs_commit_mempool, GFP_NOWAIT);
+-		if (!p)
+-			p = kmem_cache_alloc(nfs_cdata_cachep, GFP_NOIO |
+-					     __GFP_NOWARN | __GFP_NORETRY);
+ 		if (!p)
+ 			return NULL;
++		memset(p, 0, sizeof(*p));
+ 	}
+-
+-	memset(p, 0, sizeof(*p));
+ 	INIT_LIST_HEAD(&p->pages);
+ 	return p;
+ }
+@@ -104,9 +94,15 @@ EXPORT_SYMBOL_GPL(nfs_commit_free);
+ 
+ static struct nfs_pgio_header *nfs_writehdr_alloc(void)
+ {
+-	struct nfs_pgio_header *p = mempool_alloc(nfs_wdata_mempool, GFP_KERNEL);
++	struct nfs_pgio_header *p;
+ 
+-	memset(p, 0, sizeof(*p));
++	p = kmem_cache_zalloc(nfs_wdata_cachep, nfs_io_gfp_mask());
++	if (!p) {
++		p = mempool_alloc(nfs_wdata_mempool, GFP_NOWAIT);
++		if (!p)
++			return NULL;
++		memset(p, 0, sizeof(*p));
++	}
+ 	p->rw_mode = FMODE_WRITE;
+ 	return p;
+ }
+@@ -1800,7 +1796,11 @@ nfs_commit_list(struct inode *inode, struct list_head *head, int how,
+ 	if (list_empty(head))
+ 		return 0;
+ 
+-	data = nfs_commitdata_alloc(true);
++	data = nfs_commitdata_alloc();
++	if (!data) {
++		nfs_retry_commit(head, NULL, cinfo, -1);
++		return -ENOMEM;
++	}
+ 
+ 	/* Set up the argument struct */
+ 	nfs_init_commit(data, head, NULL, cinfo);
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 5daffd46369dd..9257ee893bdb8 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -353,15 +353,18 @@ static int do_tmpfile(struct inode *dir, struct dentry *dentry,
+ {
+ 	struct inode *inode;
+ 	struct ubifs_info *c = dir->i_sb->s_fs_info;
+-	struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1};
++	struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1,
++					.dirtied_ino = 1};
+ 	struct ubifs_budget_req ino_req = { .dirtied_ino = 1 };
+ 	struct ubifs_inode *ui, *dir_ui = ubifs_inode(dir);
+ 	int err, instantiated = 0;
+ 	struct fscrypt_name nm;
+ 
+ 	/*
+-	 * Budget request settings: new dirty inode, new direntry,
+-	 * budget for dirtied inode will be released via writeback.
++	 * Budget request settings: new inode, new direntry, changing the
++	 * parent directory inode.
++	 * Allocate budget separately for new dirtied inode, the budget will
++	 * be released via writeback.
+ 	 */
+ 
+ 	dbg_gen("dent '%pd', mode %#hx in dir ino %lu",
+@@ -949,7 +952,8 @@ static int ubifs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+ 	struct ubifs_inode *dir_ui = ubifs_inode(dir);
+ 	struct ubifs_info *c = dir->i_sb->s_fs_info;
+ 	int err, sz_change;
+-	struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1 };
++	struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1,
++					.dirtied_ino = 1};
+ 	struct fscrypt_name nm;
+ 
+ 	/*
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index 8e144306e2622..b216899b4745e 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -224,6 +224,15 @@ struct gpio_irq_chip {
+ 				unsigned long *valid_mask,
+ 				unsigned int ngpios);
+ 
++	/**
++	 * @initialized:
++	 *
++	 * Flag to track GPIO chip irq member's initialization.
++	 * This flag will make sure GPIO chip irq members are not used
++	 * before they are initialized.
++	 */
++	bool initialized;
++
+ 	/**
+ 	 * @valid_mask:
+ 	 *
+diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
+index f514a7dd8c9cf..510f876564796 100644
+--- a/include/linux/ipv6.h
++++ b/include/linux/ipv6.h
+@@ -50,7 +50,7 @@ struct ipv6_devconf {
+ 	__s32		use_optimistic;
+ #endif
+ #ifdef CONFIG_IPV6_MROUTE
+-	__s32		mc_forwarding;
++	atomic_t	mc_forwarding;
+ #endif
+ 	__s32		disable_ipv6;
+ 	__s32		drop_unicast_in_l2_multicast;
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index c142a152d6a41..f3016b8e698ab 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -1252,13 +1252,16 @@ static inline unsigned long *section_to_usemap(struct mem_section *ms)
+ 
+ static inline struct mem_section *__nr_to_section(unsigned long nr)
+ {
++	unsigned long root = SECTION_NR_TO_ROOT(nr);
++
++	if (unlikely(root >= NR_SECTION_ROOTS))
++		return NULL;
++
+ #ifdef CONFIG_SPARSEMEM_EXTREME
+-	if (!mem_section)
++	if (!mem_section || !mem_section[root])
+ 		return NULL;
+ #endif
+-	if (!mem_section[SECTION_NR_TO_ROOT(nr)])
+-		return NULL;
+-	return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
++	return &mem_section[root][nr & SECTION_ROOT_MASK];
+ }
+ extern unsigned long __section_nr(struct mem_section *ms);
+ extern size_t mem_section_usage_size(void);
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index 1e0a3497bdb46..e39342945a80b 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -478,10 +478,10 @@ static inline const struct cred *nfs_file_cred(struct file *file)
+  * linux/fs/nfs/direct.c
+  */
+ extern ssize_t nfs_direct_IO(struct kiocb *, struct iov_iter *);
+-extern ssize_t nfs_file_direct_read(struct kiocb *iocb,
+-			struct iov_iter *iter);
+-extern ssize_t nfs_file_direct_write(struct kiocb *iocb,
+-			struct iov_iter *iter);
++ssize_t nfs_file_direct_read(struct kiocb *iocb,
++			     struct iov_iter *iter, bool swap);
++ssize_t nfs_file_direct_write(struct kiocb *iocb,
++			      struct iov_iter *iter, bool swap);
+ 
+ /*
+  * linux/fs/nfs/dir.c
+@@ -551,7 +551,7 @@ extern int nfs_wb_all(struct inode *inode);
+ extern int nfs_wb_page(struct inode *inode, struct page *page);
+ extern int nfs_wb_page_cancel(struct inode *inode, struct page* page);
+ extern int  nfs_commit_inode(struct inode *, int);
+-extern struct nfs_commit_data *nfs_commitdata_alloc(bool never_fail);
++extern struct nfs_commit_data *nfs_commitdata_alloc(void);
+ extern void nfs_commit_free(struct nfs_commit_data *data);
+ bool nfs_commit_end(struct nfs_mds_commit_info *cinfo);
+ 
+diff --git a/include/net/arp.h b/include/net/arp.h
+index 4950191f6b2bf..4a23a97195f33 100644
+--- a/include/net/arp.h
++++ b/include/net/arp.h
+@@ -71,6 +71,7 @@ void arp_send(int type, int ptype, __be32 dest_ip,
+ 	      const unsigned char *src_hw, const unsigned char *th);
+ int arp_mc_map(__be32 addr, u8 *haddr, struct net_device *dev, int dir);
+ void arp_ifdown(struct net_device *dev);
++int arp_invalidate(struct net_device *dev, __be32 ip, bool force);
+ 
+ struct sk_buff *arp_create(int type, int ptype, __be32 dest_ip,
+ 			   struct net_device *dev, __be32 src_ip,
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index 9125effbf4483..3fecc4a411a13 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -180,19 +180,21 @@ void bt_err_ratelimited(const char *fmt, ...);
+ #define BT_DBG(fmt, ...)	pr_debug(fmt "\n", ##__VA_ARGS__)
+ #endif
+ 
++#define bt_dev_name(hdev) ((hdev) ? (hdev)->name : "null")
++
+ #define bt_dev_info(hdev, fmt, ...)				\
+-	BT_INFO("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++	BT_INFO("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ #define bt_dev_warn(hdev, fmt, ...)				\
+-	BT_WARN("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++	BT_WARN("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ #define bt_dev_err(hdev, fmt, ...)				\
+-	BT_ERR("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++	BT_ERR("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ #define bt_dev_dbg(hdev, fmt, ...)				\
+-	BT_DBG("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++	BT_DBG("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ 
+ #define bt_dev_warn_ratelimited(hdev, fmt, ...)			\
+-	bt_warn_ratelimited("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++	bt_warn_ratelimited("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ #define bt_dev_err_ratelimited(hdev, fmt, ...)			\
+-	bt_err_ratelimited("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++	bt_err_ratelimited("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ 
+ /* Connection and socket states */
+ enum {
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index b43a86d054948..0f39fdcb2273c 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -4180,7 +4180,8 @@ struct bpf_sock {
+ 	__u32 src_ip4;
+ 	__u32 src_ip6[4];
+ 	__u32 src_port;		/* host byte order */
+-	__u32 dst_port;		/* network byte order */
++	__be16 dst_port;	/* network byte order */
++	__u16 :16;		/* zero padding */
+ 	__u32 dst_ip4;
+ 	__u32 dst_ip6[4];
+ 	__u32 state;
+diff --git a/include/uapi/linux/can/isotp.h b/include/uapi/linux/can/isotp.h
+index c55935b64ccc8..590f8aea2b6d2 100644
+--- a/include/uapi/linux/can/isotp.h
++++ b/include/uapi/linux/can/isotp.h
+@@ -137,20 +137,16 @@ struct can_isotp_ll_options {
+ #define CAN_ISOTP_WAIT_TX_DONE	0x400	/* wait for tx completion */
+ #define CAN_ISOTP_SF_BROADCAST	0x800	/* 1-to-N functional addressing */
+ 
+-/* default values */
++/* protocol machine default values */
+ 
+ #define CAN_ISOTP_DEFAULT_FLAGS		0
+ #define CAN_ISOTP_DEFAULT_EXT_ADDRESS	0x00
+ #define CAN_ISOTP_DEFAULT_PAD_CONTENT	0xCC /* prevent bit-stuffing */
+-#define CAN_ISOTP_DEFAULT_FRAME_TXTIME	0
++#define CAN_ISOTP_DEFAULT_FRAME_TXTIME	50000 /* 50 micro seconds */
+ #define CAN_ISOTP_DEFAULT_RECV_BS	0
+ #define CAN_ISOTP_DEFAULT_RECV_STMIN	0x00
+ #define CAN_ISOTP_DEFAULT_RECV_WFTMAX	0
+ 
+-#define CAN_ISOTP_DEFAULT_LL_MTU	CAN_MTU
+-#define CAN_ISOTP_DEFAULT_LL_TX_DL	CAN_MAX_DLEN
+-#define CAN_ISOTP_DEFAULT_LL_TX_FLAGS	0
+-
+ /*
+  * Remark on CAN_ISOTP_DEFAULT_RECV_* values:
+  *
+@@ -162,4 +158,24 @@ struct can_isotp_ll_options {
+  * consistency and copied directly into the flow control (FC) frame.
+  */
+ 
++/* link layer default values => make use of Classical CAN frames */
++
++#define CAN_ISOTP_DEFAULT_LL_MTU	CAN_MTU
++#define CAN_ISOTP_DEFAULT_LL_TX_DL	CAN_MAX_DLEN
++#define CAN_ISOTP_DEFAULT_LL_TX_FLAGS	0
++
++/*
++ * The CAN_ISOTP_DEFAULT_FRAME_TXTIME has become a non-zero value as
++ * it only makes sense for isotp implementation tests to run without
++ * a N_As value. As user space applications usually do not set the
++ * frame_txtime element of struct can_isotp_options the new in-kernel
++ * default is very likely overwritten with zero when the sockopt()
++ * CAN_ISOTP_OPTS is invoked.
++ * To make sure that a N_As value of zero is only set intentional the
++ * value '0' is now interpreted as 'do not change the current value'.
++ * When a frame_txtime of zero is required for testing purposes this
++ * CAN_ISOTP_FRAME_TXTIME_ZERO u32 value has to be set in frame_txtime.
++ */
++#define CAN_ISOTP_FRAME_TXTIME_ZERO	0xFFFFFFFF
++
+ #endif /* !_UAPI_CAN_ISOTP_H */
+diff --git a/init/main.c b/init/main.c
+index 4fe58ed4aca7b..3526eaec7508f 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -1104,7 +1104,7 @@ static int __init initcall_blacklist(char *str)
+ 		}
+ 	} while (str_entry);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static bool __init_or_module initcall_blacklisted(initcall_t fn)
+@@ -1367,7 +1367,9 @@ static noinline void __init kernel_init_freeable(void);
+ bool rodata_enabled __ro_after_init = true;
+ static int __init set_debug_rodata(char *str)
+ {
+-	return strtobool(str, &rodata_enabled);
++	if (strtobool(str, &rodata_enabled))
++		pr_warn("Invalid option string for rodata: '%s'\n", str);
++	return 1;
+ }
+ __setup("rodata=", set_debug_rodata);
+ #endif
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index 8f0ea12d7cee2..1a0a9f820c69b 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -505,10 +505,11 @@ static ssize_t __cgroup1_procs_write(struct kernfs_open_file *of,
+ 		goto out_unlock;
+ 
+ 	/*
+-	 * Even if we're attaching all tasks in the thread group, we only
+-	 * need to check permissions on one of them.
++	 * Even if we're attaching all tasks in the thread group, we only need
++	 * to check permissions on one of them. Check permissions using the
++	 * credentials from file open to protect against inherited fd attacks.
+ 	 */
+-	cred = current_cred();
++	cred = of->file->f_cred;
+ 	tcred = get_task_cred(task);
+ 	if (!uid_eq(cred->euid, GLOBAL_ROOT_UID) &&
+ 	    !uid_eq(cred->euid, tcred->uid) &&
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 3f8447a5393e9..0853289d321a5 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -4788,6 +4788,7 @@ static ssize_t cgroup_procs_write(struct kernfs_open_file *of,
+ 	struct cgroup_file_ctx *ctx = of->priv;
+ 	struct cgroup *src_cgrp, *dst_cgrp;
+ 	struct task_struct *task;
++	const struct cred *saved_cred;
+ 	ssize_t ret;
+ 	bool locked;
+ 
+@@ -4805,9 +4806,16 @@ static ssize_t cgroup_procs_write(struct kernfs_open_file *of,
+ 	src_cgrp = task_cgroup_from_root(task, &cgrp_dfl_root);
+ 	spin_unlock_irq(&css_set_lock);
+ 
++	/*
++	 * Process and thread migrations follow same delegation rule. Check
++	 * permissions using the credentials from file open to protect against
++	 * inherited fd attacks.
++	 */
++	saved_cred = override_creds(of->file->f_cred);
+ 	ret = cgroup_attach_permissions(src_cgrp, dst_cgrp,
+ 					of->file->f_path.dentry->d_sb, true,
+ 					ctx->ns);
++	revert_creds(saved_cred);
+ 	if (ret)
+ 		goto out_finish;
+ 
+@@ -4832,6 +4840,7 @@ static ssize_t cgroup_threads_write(struct kernfs_open_file *of,
+ 	struct cgroup_file_ctx *ctx = of->priv;
+ 	struct cgroup *src_cgrp, *dst_cgrp;
+ 	struct task_struct *task;
++	const struct cred *saved_cred;
+ 	ssize_t ret;
+ 	bool locked;
+ 
+@@ -4851,10 +4860,16 @@ static ssize_t cgroup_threads_write(struct kernfs_open_file *of,
+ 	src_cgrp = task_cgroup_from_root(task, &cgrp_dfl_root);
+ 	spin_unlock_irq(&css_set_lock);
+ 
+-	/* thread migrations follow the cgroup.procs delegation rule */
++	/*
++	 * Process and thread migrations follow same delegation rule. Check
++	 * permissions using the credentials from file open to protect against
++	 * inherited fd attacks.
++	 */
++	saved_cred = override_creds(of->file->f_cred);
+ 	ret = cgroup_attach_permissions(src_cgrp, dst_cgrp,
+ 					of->file->f_path.dentry->d_sb, false,
+ 					ctx->ns);
++	revert_creds(saved_cred);
+ 	if (ret)
+ 		goto out_finish;
+ 
+diff --git a/lib/lz4/lz4_decompress.c b/lib/lz4/lz4_decompress.c
+index 8a7724a6ce2fb..5b6705c4b2d26 100644
+--- a/lib/lz4/lz4_decompress.c
++++ b/lib/lz4/lz4_decompress.c
+@@ -271,8 +271,12 @@ static FORCE_INLINE int LZ4_decompress_generic(
+ 			ip += length;
+ 			op += length;
+ 
+-			/* Necessarily EOF, due to parsing restrictions */
+-			if (!partialDecoding || (cpy == oend))
++			/* Necessarily EOF when !partialDecoding.
++			 * When partialDecoding, it is EOF if we've either
++			 * filled the output buffer or
++			 * can't proceed with reading an offset for following match.
++			 */
++			if (!partialDecoding || (cpy == oend) || (ip >= (iend - 2)))
+ 				break;
+ 		} else {
+ 			/* may overwrite up to WILDCOPYLENGTH beyond cpy */
+diff --git a/lib/test_ubsan.c b/lib/test_ubsan.c
+index 9ea10adf7a66f..b1d0a6ecfe1b8 100644
+--- a/lib/test_ubsan.c
++++ b/lib/test_ubsan.c
+@@ -89,16 +89,6 @@ static void test_ubsan_misaligned_access(void)
+ 	*ptr = val;
+ }
+ 
+-static void test_ubsan_object_size_mismatch(void)
+-{
+-	/* "((aligned(8)))" helps this not into be misaligned for ptr-access. */
+-	volatile int val __aligned(8) = 4;
+-	volatile long long *ptr, val2;
+-
+-	ptr = (long long *)&val;
+-	val2 = *ptr;
+-}
+-
+ static const test_ubsan_fp test_ubsan_array[] = {
+ 	test_ubsan_add_overflow,
+ 	test_ubsan_sub_overflow,
+@@ -110,7 +100,6 @@ static const test_ubsan_fp test_ubsan_array[] = {
+ 	test_ubsan_load_invalid_value,
+ 	//test_ubsan_null_ptr_deref, /* exclude it because there is a crash */
+ 	test_ubsan_misaligned_access,
+-	test_ubsan_object_size_mismatch,
+ };
+ 
+ static int __init test_ubsan_init(void)
+diff --git a/mm/memory.c b/mm/memory.c
+index af27127c235e2..14f91c7467d6f 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1204,6 +1204,17 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ 	return ret;
+ }
+ 
++/* Whether we should zap all COWed (private) pages too */
++static inline bool should_zap_cows(struct zap_details *details)
++{
++	/* By default, zap all pages */
++	if (!details)
++		return true;
++
++	/* Or, we zap COWed pages only if the caller wants to */
++	return !details->check_mapping;
++}
++
+ static unsigned long zap_pte_range(struct mmu_gather *tlb,
+ 				struct vm_area_struct *vma, pmd_t *pmd,
+ 				unsigned long addr, unsigned long end,
+@@ -1295,16 +1306,18 @@ again:
+ 			continue;
+ 		}
+ 
+-		/* If details->check_mapping, we leave swap entries. */
+-		if (unlikely(details))
+-			continue;
+-
+-		if (!non_swap_entry(entry))
++		if (!non_swap_entry(entry)) {
++			/* Genuine swap entry, hence a private anon page */
++			if (!should_zap_cows(details))
++				continue;
+ 			rss[MM_SWAPENTS]--;
+-		else if (is_migration_entry(entry)) {
++		} else if (is_migration_entry(entry)) {
+ 			struct page *page;
+ 
+ 			page = migration_entry_to_page(entry);
++			if (details && details->check_mapping &&
++			    details->check_mapping != page_rmapping(page))
++				continue;
+ 			rss[mm_counter(page)]--;
+ 		}
+ 		if (unlikely(!free_swap_and_cache(entry)))
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index eb97aed2fbe7d..7315b978e834b 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -2636,6 +2636,7 @@ alloc_new:
+ 	mpol_new = kmem_cache_alloc(policy_cache, GFP_KERNEL);
+ 	if (!mpol_new)
+ 		goto err_out;
++	atomic_set(&mpol_new->refcnt, 1);
+ 	goto restart;
+ }
+ 
+diff --git a/mm/mremap.c b/mm/mremap.c
+index 138abbae4f758..d4c8d6cca3f46 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -260,6 +260,9 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ 	struct mmu_notifier_range range;
+ 	pmd_t *old_pmd, *new_pmd;
+ 
++	if (!len)
++		return 0;
++
+ 	old_end = old_addr + len;
+ 	flush_cache_range(vma, old_addr, old_end);
+ 
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 14f84f70c5571..44ad7bf2e5631 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1640,7 +1640,30 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ 
+ 			/* MADV_FREE page check */
+ 			if (!PageSwapBacked(page)) {
+-				if (!PageDirty(page)) {
++				int ref_count, map_count;
++
++				/*
++				 * Synchronize with gup_pte_range():
++				 * - clear PTE; barrier; read refcount
++				 * - inc refcount; barrier; read PTE
++				 */
++				smp_mb();
++
++				ref_count = page_ref_count(page);
++				map_count = page_mapcount(page);
++
++				/*
++				 * Order reads for page refcount and dirty flag
++				 * (see comments in __remove_mapping()).
++				 */
++				smp_rmb();
++
++				/*
++				 * The only page refs must be one from isolation
++				 * plus the rmap(s) (dropped by discard:).
++				 */
++				if (ref_count == 1 + map_count &&
++				    !PageDirty(page)) {
+ 					/* Invalidate as we cleared the pte */
+ 					mmu_notifier_invalidate_range(mm,
+ 						address, address + PAGE_SIZE);
+diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c
+index 139894ca788b9..c8a341cd652c7 100644
+--- a/net/batman-adv/multicast.c
++++ b/net/batman-adv/multicast.c
+@@ -136,7 +136,7 @@ static u8 batadv_mcast_mla_rtr_flags_softif_get_ipv6(struct net_device *dev)
+ {
+ 	struct inet6_dev *in6_dev = __in6_dev_get(dev);
+ 
+-	if (in6_dev && in6_dev->cnf.mc_forwarding)
++	if (in6_dev && atomic_read(&in6_dev->cnf.mc_forwarding))
+ 		return BATADV_NO_FLAGS;
+ 	else
+ 		return BATADV_MCAST_WANT_NO_RTR6;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 72b4127360c7f..e926e80d9731b 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5061,8 +5061,9 @@ static void hci_disconn_phylink_complete_evt(struct hci_dev *hdev,
+ 	hci_dev_lock(hdev);
+ 
+ 	hcon = hci_conn_hash_lookup_handle(hdev, ev->phy_handle);
+-	if (hcon) {
++	if (hcon && hcon->type == AMP_LINK) {
+ 		hcon->state = BT_CLOSED;
++		hci_disconn_cfm(hcon, ev->reason);
+ 		hci_conn_del(hcon);
+ 	}
+ 
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 0ddbc415ce156..012c1a0abda8c 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1438,6 +1438,7 @@ static void l2cap_ecred_connect(struct l2cap_chan *chan)
+ 
+ 	l2cap_ecred_init(chan, 0);
+ 
++	memset(&data, 0, sizeof(data));
+ 	data.pdu.req.psm     = chan->psm;
+ 	data.pdu.req.mtu     = cpu_to_le16(chan->imtu);
+ 	data.pdu.req.mps     = cpu_to_le16(chan->mps);
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 63e6e8923200b..9a4a9c5a9f24c 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -141,6 +141,7 @@ struct isotp_sock {
+ 	struct can_isotp_options opt;
+ 	struct can_isotp_fc_options rxfc, txfc;
+ 	struct can_isotp_ll_options ll;
++	u32 frame_txtime;
+ 	u32 force_tx_stmin;
+ 	u32 force_rx_stmin;
+ 	struct tpcon rx, tx;
+@@ -360,7 +361,7 @@ static int isotp_rcv_fc(struct isotp_sock *so, struct canfd_frame *cf, int ae)
+ 
+ 		so->tx_gap = ktime_set(0, 0);
+ 		/* add transmission time for CAN frame N_As */
+-		so->tx_gap = ktime_add_ns(so->tx_gap, so->opt.frame_txtime);
++		so->tx_gap = ktime_add_ns(so->tx_gap, so->frame_txtime);
+ 		/* add waiting time for consecutive frames N_Cs */
+ 		if (so->opt.flags & CAN_ISOTP_FORCE_TXSTMIN)
+ 			so->tx_gap = ktime_add_ns(so->tx_gap,
+@@ -1245,6 +1246,14 @@ static int isotp_setsockopt_locked(struct socket *sock, int level, int optname,
+ 		/* no separate rx_ext_address is given => use ext_address */
+ 		if (!(so->opt.flags & CAN_ISOTP_RX_EXT_ADDR))
+ 			so->opt.rx_ext_address = so->opt.ext_address;
++
++		/* check for frame_txtime changes (0 => no changes) */
++		if (so->opt.frame_txtime) {
++			if (so->opt.frame_txtime == CAN_ISOTP_FRAME_TXTIME_ZERO)
++				so->frame_txtime = 0;
++			else
++				so->frame_txtime = so->opt.frame_txtime;
++		}
+ 		break;
+ 
+ 	case CAN_ISOTP_RECV_FC:
+@@ -1446,6 +1455,7 @@ static int isotp_init(struct sock *sk)
+ 	so->opt.rxpad_content = CAN_ISOTP_DEFAULT_PAD_CONTENT;
+ 	so->opt.txpad_content = CAN_ISOTP_DEFAULT_PAD_CONTENT;
+ 	so->opt.frame_txtime = CAN_ISOTP_DEFAULT_FRAME_TXTIME;
++	so->frame_txtime = CAN_ISOTP_DEFAULT_FRAME_TXTIME;
+ 	so->rxfc.bs = CAN_ISOTP_DEFAULT_RECV_BS;
+ 	so->rxfc.stmin = CAN_ISOTP_DEFAULT_RECV_STMIN;
+ 	so->rxfc.wftmax = CAN_ISOTP_DEFAULT_RECV_WFTMAX;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 659a328024713..ddf9792c0cb2e 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -6492,24 +6492,33 @@ BPF_CALL_5(bpf_tcp_check_syncookie, struct sock *, sk, void *, iph, u32, iph_len
+ 	if (!th->ack || th->rst || th->syn)
+ 		return -ENOENT;
+ 
++	if (unlikely(iph_len < sizeof(struct iphdr)))
++		return -EINVAL;
++
+ 	if (tcp_synq_no_recent_overflow(sk))
+ 		return -ENOENT;
+ 
+ 	cookie = ntohl(th->ack_seq) - 1;
+ 
+-	switch (sk->sk_family) {
+-	case AF_INET:
+-		if (unlikely(iph_len < sizeof(struct iphdr)))
++	/* Both struct iphdr and struct ipv6hdr have the version field at the
++	 * same offset so we can cast to the shorter header (struct iphdr).
++	 */
++	switch (((struct iphdr *)iph)->version) {
++	case 4:
++		if (sk->sk_family == AF_INET6 && ipv6_only_sock(sk))
+ 			return -EINVAL;
+ 
+ 		ret = __cookie_v4_check((struct iphdr *)iph, th, cookie);
+ 		break;
+ 
+ #if IS_BUILTIN(CONFIG_IPV6)
+-	case AF_INET6:
++	case 6:
+ 		if (unlikely(iph_len < sizeof(struct ipv6hdr)))
+ 			return -EINVAL;
+ 
++		if (sk->sk_family != AF_INET6)
++			return -EINVAL;
++
+ 		ret = __cookie_v6_check((struct ipv6hdr *)iph, th, cookie);
+ 		break;
+ #endif /* CONFIG_IPV6 */
+@@ -7709,6 +7718,7 @@ bool bpf_sock_is_valid_access(int off, int size, enum bpf_access_type type,
+ 			      struct bpf_insn_access_aux *info)
+ {
+ 	const int size_default = sizeof(__u32);
++	int field_size;
+ 
+ 	if (off < 0 || off >= sizeof(struct bpf_sock))
+ 		return false;
+@@ -7720,7 +7730,6 @@ bool bpf_sock_is_valid_access(int off, int size, enum bpf_access_type type,
+ 	case offsetof(struct bpf_sock, family):
+ 	case offsetof(struct bpf_sock, type):
+ 	case offsetof(struct bpf_sock, protocol):
+-	case offsetof(struct bpf_sock, dst_port):
+ 	case offsetof(struct bpf_sock, src_port):
+ 	case offsetof(struct bpf_sock, rx_queue_mapping):
+ 	case bpf_ctx_range(struct bpf_sock, src_ip4):
+@@ -7729,6 +7738,14 @@ bool bpf_sock_is_valid_access(int off, int size, enum bpf_access_type type,
+ 	case bpf_ctx_range_till(struct bpf_sock, dst_ip6[0], dst_ip6[3]):
+ 		bpf_ctx_record_field_size(info, size_default);
+ 		return bpf_ctx_narrow_access_ok(off, size, size_default);
++	case bpf_ctx_range(struct bpf_sock, dst_port):
++		field_size = size == size_default ?
++			size_default : sizeof_field(struct bpf_sock, dst_port);
++		bpf_ctx_record_field_size(info, field_size);
++		return bpf_ctx_narrow_access_ok(off, size, field_size);
++	case offsetofend(struct bpf_sock, dst_port) ...
++	     offsetof(struct bpf_sock, dst_ip4) - 1:
++		return false;
+ 	}
+ 
+ 	return size == size_default;
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 9ff6d4160daba..873081cda9507 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3626,13 +3626,24 @@ static int rtnl_alt_ifname(int cmd, struct net_device *dev, struct nlattr *attr,
+ 			   bool *changed, struct netlink_ext_ack *extack)
+ {
+ 	char *alt_ifname;
++	size_t size;
+ 	int err;
+ 
+ 	err = nla_validate(attr, attr->nla_len, IFLA_MAX, ifla_policy, extack);
+ 	if (err)
+ 		return err;
+ 
+-	alt_ifname = nla_strdup(attr, GFP_KERNEL);
++	if (cmd == RTM_NEWLINKPROP) {
++		size = rtnl_prop_list_size(dev);
++		size += nla_total_size(ALTIFNAMSIZ);
++		if (size >= U16_MAX) {
++			NL_SET_ERR_MSG(extack,
++				       "effective property list too long");
++			return -EINVAL;
++		}
++	}
++
++	alt_ifname = nla_strdup(attr, GFP_KERNEL_ACCOUNT);
+ 	if (!alt_ifname)
+ 		return -ENOMEM;
+ 
+diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
+index 922dd73e57406..83a47998c4b18 100644
+--- a/net/ipv4/arp.c
++++ b/net/ipv4/arp.c
+@@ -1116,13 +1116,18 @@ static int arp_req_get(struct arpreq *r, struct net_device *dev)
+ 	return err;
+ }
+ 
+-static int arp_invalidate(struct net_device *dev, __be32 ip)
++int arp_invalidate(struct net_device *dev, __be32 ip, bool force)
+ {
+ 	struct neighbour *neigh = neigh_lookup(&arp_tbl, &ip, dev);
+ 	int err = -ENXIO;
+ 	struct neigh_table *tbl = &arp_tbl;
+ 
+ 	if (neigh) {
++		if ((neigh->nud_state & NUD_VALID) && !force) {
++			neigh_release(neigh);
++			return 0;
++		}
++
+ 		if (neigh->nud_state & ~NUD_NOARP)
+ 			err = neigh_update(neigh, NULL, NUD_FAILED,
+ 					   NEIGH_UPDATE_F_OVERRIDE|
+@@ -1169,7 +1174,7 @@ static int arp_req_delete(struct net *net, struct arpreq *r,
+ 		if (!dev)
+ 			return -EINVAL;
+ 	}
+-	return arp_invalidate(dev, ip);
++	return arp_invalidate(dev, ip, true);
+ }
+ 
+ /*
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 917ea953dfad8..0df4594b49c78 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1112,9 +1112,11 @@ void fib_add_ifaddr(struct in_ifaddr *ifa)
+ 		return;
+ 
+ 	/* Add broadcast address, if it is explicitly assigned. */
+-	if (ifa->ifa_broadcast && ifa->ifa_broadcast != htonl(0xFFFFFFFF))
++	if (ifa->ifa_broadcast && ifa->ifa_broadcast != htonl(0xFFFFFFFF)) {
+ 		fib_magic(RTM_NEWROUTE, RTN_BROADCAST, ifa->ifa_broadcast, 32,
+ 			  prim, 0);
++		arp_invalidate(dev, ifa->ifa_broadcast, false);
++	}
+ 
+ 	if (!ipv4_is_zeronet(prefix) && !(ifa->ifa_flags & IFA_F_SECONDARY) &&
+ 	    (prefix != addr || ifa->ifa_prefixlen < 32)) {
+@@ -1130,6 +1132,7 @@ void fib_add_ifaddr(struct in_ifaddr *ifa)
+ 				  prim, 0);
+ 			fib_magic(RTM_NEWROUTE, RTN_BROADCAST, prefix | ~mask,
+ 				  32, prim, 0);
++			arp_invalidate(dev, prefix | ~mask, false);
+ 		}
+ 	}
+ }
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 838a876c168ca..c8c7b76c3b2e2 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -888,8 +888,13 @@ int fib_nh_match(struct net *net, struct fib_config *cfg, struct fib_info *fi,
+ 	}
+ 
+ 	if (cfg->fc_oif || cfg->fc_gw_family) {
+-		struct fib_nh *nh = fib_info_nh(fi, 0);
++		struct fib_nh *nh;
++
++		/* cannot match on nexthop object attributes */
++		if (fi->nh)
++			return 1;
+ 
++		nh = fib_info_nh(fi, 0);
+ 		if (cfg->fc_encap) {
+ 			if (fib_encap_match(net, cfg->fc_encap_type,
+ 					    cfg->fc_encap, nh, cfg, extack))
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index e093847c334da..915b8e1bd9efb 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -637,7 +637,9 @@ int __inet_hash(struct sock *sk, struct sock *osk)
+ 	int err = 0;
+ 
+ 	if (sk->sk_state != TCP_LISTEN) {
++		local_bh_disable();
+ 		inet_ehash_nolisten(sk, osk, NULL);
++		local_bh_enable();
+ 		return 0;
+ 	}
+ 	WARN_ON(!sk_unhashed(sk));
+@@ -669,45 +671,54 @@ int inet_hash(struct sock *sk)
+ {
+ 	int err = 0;
+ 
+-	if (sk->sk_state != TCP_CLOSE) {
+-		local_bh_disable();
++	if (sk->sk_state != TCP_CLOSE)
+ 		err = __inet_hash(sk, NULL);
+-		local_bh_enable();
+-	}
+ 
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(inet_hash);
+ 
+-void inet_unhash(struct sock *sk)
++static void __inet_unhash(struct sock *sk, struct inet_listen_hashbucket *ilb)
+ {
+-	struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
+-	struct inet_listen_hashbucket *ilb = NULL;
+-	spinlock_t *lock;
+-
+ 	if (sk_unhashed(sk))
+ 		return;
+ 
+-	if (sk->sk_state == TCP_LISTEN) {
+-		ilb = &hashinfo->listening_hash[inet_sk_listen_hashfn(sk)];
+-		lock = &ilb->lock;
+-	} else {
+-		lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
+-	}
+-	spin_lock_bh(lock);
+-	if (sk_unhashed(sk))
+-		goto unlock;
+-
+ 	if (rcu_access_pointer(sk->sk_reuseport_cb))
+ 		reuseport_detach_sock(sk);
+ 	if (ilb) {
++		struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
++
+ 		inet_unhash2(hashinfo, sk);
+ 		ilb->count--;
+ 	}
+ 	__sk_nulls_del_node_init_rcu(sk);
+ 	sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+-unlock:
+-	spin_unlock_bh(lock);
++}
++
++void inet_unhash(struct sock *sk)
++{
++	struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
++
++	if (sk_unhashed(sk))
++		return;
++
++	if (sk->sk_state == TCP_LISTEN) {
++		struct inet_listen_hashbucket *ilb;
++
++		ilb = &hashinfo->listening_hash[inet_sk_listen_hashfn(sk)];
++		/* Don't disable bottom halves while acquiring the lock to
++		 * avoid circular locking dependency on PREEMPT_RT.
++		 */
++		spin_lock(&ilb->lock);
++		__inet_unhash(sk, ilb);
++		spin_unlock(&ilb->lock);
++	} else {
++		spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
++
++		spin_lock_bh(lock);
++		__inet_unhash(sk, NULL);
++		spin_unlock_bh(lock);
++	}
+ }
+ EXPORT_SYMBOL_GPL(inet_unhash);
+ 
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 7c5bf39dca5d1..86bcb18256982 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -542,7 +542,7 @@ static int inet6_netconf_fill_devconf(struct sk_buff *skb, int ifindex,
+ #ifdef CONFIG_IPV6_MROUTE
+ 	if ((all || type == NETCONFA_MC_FORWARDING) &&
+ 	    nla_put_s32(skb, NETCONFA_MC_FORWARDING,
+-			devconf->mc_forwarding) < 0)
++			atomic_read(&devconf->mc_forwarding)) < 0)
+ 		goto nla_put_failure;
+ #endif
+ 	if ((all || type == NETCONFA_PROXY_NEIGH) &&
+@@ -5515,7 +5515,7 @@ static inline void ipv6_store_devconf(struct ipv6_devconf *cnf,
+ 	array[DEVCONF_USE_OPTIMISTIC] = cnf->use_optimistic;
+ #endif
+ #ifdef CONFIG_IPV6_MROUTE
+-	array[DEVCONF_MC_FORWARDING] = cnf->mc_forwarding;
++	array[DEVCONF_MC_FORWARDING] = atomic_read(&cnf->mc_forwarding);
+ #endif
+ 	array[DEVCONF_DISABLE_IPV6] = cnf->disable_ipv6;
+ 	array[DEVCONF_ACCEPT_DAD] = cnf->accept_dad;
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index 67c9114835c84..0a2e7f2283911 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -333,11 +333,8 @@ int inet6_hash(struct sock *sk)
+ {
+ 	int err = 0;
+ 
+-	if (sk->sk_state != TCP_CLOSE) {
+-		local_bh_disable();
++	if (sk->sk_state != TCP_CLOSE)
+ 		err = __inet_hash(sk, NULL);
+-		local_bh_enable();
+-	}
+ 
+ 	return err;
+ }
+diff --git a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c
+index 06d60662717d1..15ea3d082534d 100644
+--- a/net/ipv6/ip6_input.c
++++ b/net/ipv6/ip6_input.c
+@@ -509,7 +509,7 @@ int ip6_mc_input(struct sk_buff *skb)
+ 	/*
+ 	 *      IPv6 multicast router mode is now supported ;)
+ 	 */
+-	if (dev_net(skb->dev)->ipv6.devconf_all->mc_forwarding &&
++	if (atomic_read(&dev_net(skb->dev)->ipv6.devconf_all->mc_forwarding) &&
+ 	    !(ipv6_addr_type(&hdr->daddr) &
+ 	      (IPV6_ADDR_LOOPBACK|IPV6_ADDR_LINKLOCAL)) &&
+ 	    likely(!(IP6CB(skb)->flags & IP6SKB_FORWARDED))) {
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 41cb348a7c3c4..5f0ac47acc74b 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -740,7 +740,7 @@ static int mif6_delete(struct mr_table *mrt, int vifi, int notify,
+ 
+ 	in6_dev = __in6_dev_get(dev);
+ 	if (in6_dev) {
+-		in6_dev->cnf.mc_forwarding--;
++		atomic_dec(&in6_dev->cnf.mc_forwarding);
+ 		inet6_netconf_notify_devconf(dev_net(dev), RTM_NEWNETCONF,
+ 					     NETCONFA_MC_FORWARDING,
+ 					     dev->ifindex, &in6_dev->cnf);
+@@ -908,7 +908,7 @@ static int mif6_add(struct net *net, struct mr_table *mrt,
+ 
+ 	in6_dev = __in6_dev_get(dev);
+ 	if (in6_dev) {
+-		in6_dev->cnf.mc_forwarding++;
++		atomic_inc(&in6_dev->cnf.mc_forwarding);
+ 		inet6_netconf_notify_devconf(dev_net(dev), RTM_NEWNETCONF,
+ 					     NETCONFA_MC_FORWARDING,
+ 					     dev->ifindex, &in6_dev->cnf);
+@@ -1558,7 +1558,7 @@ static int ip6mr_sk_init(struct mr_table *mrt, struct sock *sk)
+ 	} else {
+ 		rcu_assign_pointer(mrt->mroute_sk, sk);
+ 		sock_set_flag(sk, SOCK_RCU_FREE);
+-		net->ipv6.devconf_all->mc_forwarding++;
++		atomic_inc(&net->ipv6.devconf_all->mc_forwarding);
+ 	}
+ 	write_unlock_bh(&mrt_lock);
+ 
+@@ -1591,7 +1591,7 @@ int ip6mr_sk_done(struct sock *sk)
+ 			 * so the RCU grace period before sk freeing
+ 			 * is guaranteed by sk_destruct()
+ 			 */
+-			net->ipv6.devconf_all->mc_forwarding--;
++			atomic_dec(&net->ipv6.devconf_all->mc_forwarding);
+ 			write_unlock_bh(&mrt_lock);
+ 			inet6_netconf_notify_devconf(net, RTM_NEWNETCONF,
+ 						     NETCONFA_MC_FORWARDING,
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 352e645c546eb..776b1b58c5dc6 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -4398,7 +4398,7 @@ static int ip6_pkt_drop(struct sk_buff *skb, u8 code, int ipstats_mib_noroutes)
+ 	struct inet6_dev *idev;
+ 	int type;
+ 
+-	if (netif_is_l3_master(skb->dev) &&
++	if (netif_is_l3_master(skb->dev) ||
+ 	    dst->dev == net->loopback_dev)
+ 		idev = __in6_dev_get_safely(dev_get_by_index_rcu(net, IP6CB(skb)->iif));
+ 	else
+diff --git a/net/netlabel/netlabel_kapi.c b/net/netlabel/netlabel_kapi.c
+index 5e1239cef0005..91b35b7c80d82 100644
+--- a/net/netlabel/netlabel_kapi.c
++++ b/net/netlabel/netlabel_kapi.c
+@@ -885,6 +885,8 @@ int netlbl_bitmap_walk(const unsigned char *bitmap, u32 bitmap_len,
+ 	unsigned char bitmask;
+ 	unsigned char byte;
+ 
++	if (offset >= bitmap_len)
++		return -1;
+ 	byte_offset = offset / 8;
+ 	byte = bitmap[byte_offset];
+ 	bit_spot = offset;
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 525c1540f10e6..6d8d700216662 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -1044,7 +1044,7 @@ static int clone(struct datapath *dp, struct sk_buff *skb,
+ 	int rem = nla_len(attr);
+ 	bool dont_clone_flow_key;
+ 
+-	/* The first action is always 'OVS_CLONE_ATTR_ARG'. */
++	/* The first action is always 'OVS_CLONE_ATTR_EXEC'. */
+ 	clone_arg = nla_data(attr);
+ 	dont_clone_flow_key = nla_get_u32(clone_arg);
+ 	actions = nla_next(clone_arg, &rem);
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 8c4bdfa627ca9..98a7e6f64ab0b 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2288,6 +2288,62 @@ static struct sw_flow_actions *nla_alloc_flow_actions(int size)
+ 	return sfa;
+ }
+ 
++static void ovs_nla_free_nested_actions(const struct nlattr *actions, int len);
++
++static void ovs_nla_free_check_pkt_len_action(const struct nlattr *action)
++{
++	const struct nlattr *a;
++	int rem;
++
++	nla_for_each_nested(a, action, rem) {
++		switch (nla_type(a)) {
++		case OVS_CHECK_PKT_LEN_ATTR_ACTIONS_IF_LESS_EQUAL:
++		case OVS_CHECK_PKT_LEN_ATTR_ACTIONS_IF_GREATER:
++			ovs_nla_free_nested_actions(nla_data(a), nla_len(a));
++			break;
++		}
++	}
++}
++
++static void ovs_nla_free_clone_action(const struct nlattr *action)
++{
++	const struct nlattr *a = nla_data(action);
++	int rem = nla_len(action);
++
++	switch (nla_type(a)) {
++	case OVS_CLONE_ATTR_EXEC:
++		/* The real list of actions follows this attribute. */
++		a = nla_next(a, &rem);
++		ovs_nla_free_nested_actions(a, rem);
++		break;
++	}
++}
++
++static void ovs_nla_free_dec_ttl_action(const struct nlattr *action)
++{
++	const struct nlattr *a = nla_data(action);
++
++	switch (nla_type(a)) {
++	case OVS_DEC_TTL_ATTR_ACTION:
++		ovs_nla_free_nested_actions(nla_data(a), nla_len(a));
++		break;
++	}
++}
++
++static void ovs_nla_free_sample_action(const struct nlattr *action)
++{
++	const struct nlattr *a = nla_data(action);
++	int rem = nla_len(action);
++
++	switch (nla_type(a)) {
++	case OVS_SAMPLE_ATTR_ARG:
++		/* The real list of actions follows this attribute. */
++		a = nla_next(a, &rem);
++		ovs_nla_free_nested_actions(a, rem);
++		break;
++	}
++}
++
+ static void ovs_nla_free_set_action(const struct nlattr *a)
+ {
+ 	const struct nlattr *ovs_key = nla_data(a);
+@@ -2301,25 +2357,54 @@ static void ovs_nla_free_set_action(const struct nlattr *a)
+ 	}
+ }
+ 
+-void ovs_nla_free_flow_actions(struct sw_flow_actions *sf_acts)
++static void ovs_nla_free_nested_actions(const struct nlattr *actions, int len)
+ {
+ 	const struct nlattr *a;
+ 	int rem;
+ 
+-	if (!sf_acts)
++	/* Whenever new actions are added, the need to update this
++	 * function should be considered.
++	 */
++	BUILD_BUG_ON(OVS_ACTION_ATTR_MAX != 23);
++
++	if (!actions)
+ 		return;
+ 
+-	nla_for_each_attr(a, sf_acts->actions, sf_acts->actions_len, rem) {
++	nla_for_each_attr(a, actions, len, rem) {
+ 		switch (nla_type(a)) {
+-		case OVS_ACTION_ATTR_SET:
+-			ovs_nla_free_set_action(a);
++		case OVS_ACTION_ATTR_CHECK_PKT_LEN:
++			ovs_nla_free_check_pkt_len_action(a);
++			break;
++
++		case OVS_ACTION_ATTR_CLONE:
++			ovs_nla_free_clone_action(a);
+ 			break;
++
+ 		case OVS_ACTION_ATTR_CT:
+ 			ovs_ct_free_action(a);
+ 			break;
++
++		case OVS_ACTION_ATTR_DEC_TTL:
++			ovs_nla_free_dec_ttl_action(a);
++			break;
++
++		case OVS_ACTION_ATTR_SAMPLE:
++			ovs_nla_free_sample_action(a);
++			break;
++
++		case OVS_ACTION_ATTR_SET:
++			ovs_nla_free_set_action(a);
++			break;
+ 		}
+ 	}
++}
++
++void ovs_nla_free_flow_actions(struct sw_flow_actions *sf_acts)
++{
++	if (!sf_acts)
++		return;
+ 
++	ovs_nla_free_nested_actions(sf_acts->actions, sf_acts->actions_len);
+ 	kfree(sf_acts);
+ }
+ 
+@@ -3419,7 +3504,9 @@ static int clone_action_to_attr(const struct nlattr *attr,
+ 	if (!start)
+ 		return -EMSGSIZE;
+ 
+-	err = ovs_nla_put_actions(nla_data(attr), rem, skb);
++	/* Skipping the OVS_CLONE_ATTR_EXEC that is always the first attribute. */
++	attr = nla_next(nla_data(attr), &rem);
++	err = ovs_nla_put_actions(attr, rem, skb);
+ 
+ 	if (err)
+ 		nla_nest_cancel(skb, start);
+diff --git a/net/rxrpc/net_ns.c b/net/rxrpc/net_ns.c
+index 25bbc4cc8b135..f15d6942da453 100644
+--- a/net/rxrpc/net_ns.c
++++ b/net/rxrpc/net_ns.c
+@@ -113,8 +113,8 @@ static __net_exit void rxrpc_exit_net(struct net *net)
+ 	struct rxrpc_net *rxnet = rxrpc_net(net);
+ 
+ 	rxnet->live = false;
+-	del_timer_sync(&rxnet->peer_keepalive_timer);
+ 	cancel_work_sync(&rxnet->peer_keepalive_work);
++	del_timer_sync(&rxnet->peer_keepalive_timer);
+ 	rxrpc_destroy_all_calls(rxnet);
+ 	rxrpc_destroy_all_connections(rxnet);
+ 	rxrpc_destroy_all_peers(rxnet);
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index d69aac6c1fcea..ef2fd28999baf 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1426,7 +1426,7 @@ static struct smc_buf_desc *smc_buf_get_slot(int compressed_bufsize,
+  */
+ static inline int smc_rmb_wnd_update_limit(int rmbe_size)
+ {
+-	return min_t(int, rmbe_size / 10, SOCK_MIN_SNDBUF / 2);
++	return max_t(int, rmbe_size / 10, SOCK_MIN_SNDBUF / 2);
+ }
+ 
+ /* map an rmb buf to a link */
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 84c8a534029c9..c5af31312e0cf 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2175,6 +2175,7 @@ call_transmit_status(struct rpc_task *task)
+ 		 * socket just returned a connection error,
+ 		 * then hold onto the transport lock.
+ 		 */
++	case -ENOMEM:
+ 	case -ENOBUFS:
+ 		rpc_delay(task, HZ>>2);
+ 		fallthrough;
+@@ -2258,6 +2259,7 @@ call_bc_transmit_status(struct rpc_task *task)
+ 	case -ENOTCONN:
+ 	case -EPIPE:
+ 		break;
++	case -ENOMEM:
+ 	case -ENOBUFS:
+ 		rpc_delay(task, HZ>>2);
+ 		fallthrough;
+@@ -2340,6 +2342,11 @@ call_status(struct rpc_task *task)
+ 	case -EPIPE:
+ 	case -EAGAIN:
+ 		break;
++	case -ENFILE:
++	case -ENOBUFS:
++	case -ENOMEM:
++		rpc_delay(task, HZ>>2);
++		break;
+ 	case -EIO:
+ 		/* shutdown or soft timeout */
+ 		goto out_exit;
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index c045f63d11fa6..f0f55fbd13752 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -186,11 +186,6 @@ static void __rpc_add_wait_queue_priority(struct rpc_wait_queue *queue,
+ 
+ /*
+  * Add new request to wait queue.
+- *
+- * Swapper tasks always get inserted at the head of the queue.
+- * This should avoid many nasty memory deadlocks and hopefully
+- * improve overall performance.
+- * Everyone else gets appended to the queue to ensure proper FIFO behavior.
+  */
+ static void __rpc_add_wait_queue(struct rpc_wait_queue *queue,
+ 		struct rpc_task *task,
+@@ -199,8 +194,6 @@ static void __rpc_add_wait_queue(struct rpc_wait_queue *queue,
+ 	INIT_LIST_HEAD(&task->u.tk_wait.timer_list);
+ 	if (RPC_IS_PRIORITY(queue))
+ 		__rpc_add_wait_queue_priority(queue, task, queue_priority);
+-	else if (RPC_IS_SWAPPER(task))
+-		list_add(&task->u.tk_wait.list, &queue->tasks[0]);
+ 	else
+ 		list_add_tail(&task->u.tk_wait.list, &queue->tasks[0]);
+ 	task->tk_waitqueue = queue;
+@@ -1012,8 +1005,10 @@ int rpc_malloc(struct rpc_task *task)
+ 	struct rpc_buffer *buf;
+ 	gfp_t gfp = GFP_NOFS;
+ 
++	if (RPC_IS_ASYNC(task))
++		gfp = GFP_NOWAIT | __GFP_NOWARN;
+ 	if (RPC_IS_SWAPPER(task))
+-		gfp = __GFP_MEMALLOC | GFP_NOWAIT | __GFP_NOWARN;
++		gfp |= __GFP_MEMALLOC;
+ 
+ 	size += sizeof(struct rpc_buffer);
+ 	if (size <= RPC_BUFFER_MAXSIZE)
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index eba1714bf09ab..6d5bb8bfed38b 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1091,7 +1091,9 @@ static int svc_tcp_sendmsg(struct socket *sock, struct msghdr *msg,
+ 	int flags, ret;
+ 
+ 	*sentp = 0;
+-	xdr_alloc_bvec(xdr, GFP_KERNEL);
++	ret = xdr_alloc_bvec(xdr, GFP_KERNEL);
++	if (ret < 0)
++		return ret;
+ 
+ 	msg->msg_flags = MSG_MORE;
+ 	ret = kernel_sendmsg(sock, msg, &rm, 1, rm.iov_len);
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 46304e647c492..6bc225d64d23f 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -1306,17 +1306,6 @@ xprt_request_enqueue_transmit(struct rpc_task *task)
+ 				INIT_LIST_HEAD(&req->rq_xmit2);
+ 				goto out;
+ 			}
+-		} else if (RPC_IS_SWAPPER(task)) {
+-			list_for_each_entry(pos, &xprt->xmit_queue, rq_xmit) {
+-				if (pos->rq_cong || pos->rq_bytes_sent)
+-					continue;
+-				if (RPC_IS_SWAPPER(pos->rq_task))
+-					continue;
+-				/* Note: req is added _before_ pos */
+-				list_add_tail(&req->rq_xmit, &pos->rq_xmit);
+-				INIT_LIST_HEAD(&req->rq_xmit2);
+-				goto out;
+-			}
+ 		} else if (!req->rq_seqno) {
+ 			list_for_each_entry(pos, &xprt->xmit_queue, rq_xmit) {
+ 				if (pos->rq_task->tk_owner != task->tk_owner)
+@@ -1635,12 +1624,15 @@ out:
+ static struct rpc_rqst *xprt_dynamic_alloc_slot(struct rpc_xprt *xprt)
+ {
+ 	struct rpc_rqst *req = ERR_PTR(-EAGAIN);
++	gfp_t gfp_mask = GFP_KERNEL;
+ 
+ 	if (xprt->num_reqs >= xprt->max_reqs)
+ 		goto out;
+ 	++xprt->num_reqs;
+ 	spin_unlock(&xprt->reserve_lock);
+-	req = kzalloc(sizeof(struct rpc_rqst), GFP_NOFS);
++	if (current->flags & PF_WQ_WORKER)
++		gfp_mask |= __GFP_NORETRY | __GFP_NOWARN;
++	req = kzalloc(sizeof(*req), gfp_mask);
+ 	spin_lock(&xprt->reserve_lock);
+ 	if (req != NULL)
+ 		goto out;
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 8e2368a0c2a29..9cf10cfb85c65 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -519,7 +519,7 @@ xprt_rdma_alloc_slot(struct rpc_xprt *xprt, struct rpc_task *task)
+ 	return;
+ 
+ out_sleep:
+-	task->tk_status = -EAGAIN;
++	task->tk_status = -ENOMEM;
+ 	xprt_add_backlog(xprt, task);
+ }
+ 
+@@ -572,8 +572,10 @@ xprt_rdma_allocate(struct rpc_task *task)
+ 	gfp_t flags;
+ 
+ 	flags = RPCRDMA_DEF_GFP;
++	if (RPC_IS_ASYNC(task))
++		flags = GFP_NOWAIT | __GFP_NOWARN;
+ 	if (RPC_IS_SWAPPER(task))
+-		flags = __GFP_MEMALLOC | GFP_NOWAIT | __GFP_NOWARN;
++		flags |= __GFP_MEMALLOC;
+ 
+ 	if (!rpcrdma_check_regbuf(r_xprt, req->rl_sendbuf, rqst->rq_callsize,
+ 				  flags))
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 16c7758e7bf30..bd123f1d09230 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -754,12 +754,12 @@ xs_stream_start_connect(struct sock_xprt *transport)
+ /**
+  * xs_nospace - handle transmit was incomplete
+  * @req: pointer to RPC request
++ * @transport: pointer to struct sock_xprt
+  *
+  */
+-static int xs_nospace(struct rpc_rqst *req)
++static int xs_nospace(struct rpc_rqst *req, struct sock_xprt *transport)
+ {
+-	struct rpc_xprt *xprt = req->rq_xprt;
+-	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
++	struct rpc_xprt *xprt = &transport->xprt;
+ 	struct sock *sk = transport->inet;
+ 	int ret = -EAGAIN;
+ 
+@@ -770,25 +770,49 @@ static int xs_nospace(struct rpc_rqst *req)
+ 
+ 	/* Don't race with disconnect */
+ 	if (xprt_connected(xprt)) {
++		struct socket_wq *wq;
++
++		rcu_read_lock();
++		wq = rcu_dereference(sk->sk_wq);
++		set_bit(SOCKWQ_ASYNC_NOSPACE, &wq->flags);
++		rcu_read_unlock();
++
+ 		/* wait for more buffer space */
++		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+ 		sk->sk_write_pending++;
+ 		xprt_wait_for_buffer_space(xprt);
+ 	} else
+ 		ret = -ENOTCONN;
+ 
+ 	spin_unlock(&xprt->transport_lock);
++	return ret;
++}
+ 
+-	/* Race breaker in case memory is freed before above code is called */
+-	if (ret == -EAGAIN) {
+-		struct socket_wq *wq;
++static int xs_sock_nospace(struct rpc_rqst *req)
++{
++	struct sock_xprt *transport =
++		container_of(req->rq_xprt, struct sock_xprt, xprt);
++	struct sock *sk = transport->inet;
++	int ret = -EAGAIN;
+ 
+-		rcu_read_lock();
+-		wq = rcu_dereference(sk->sk_wq);
+-		set_bit(SOCKWQ_ASYNC_NOSPACE, &wq->flags);
+-		rcu_read_unlock();
++	lock_sock(sk);
++	if (!sock_writeable(sk))
++		ret = xs_nospace(req, transport);
++	release_sock(sk);
++	return ret;
++}
+ 
+-		sk->sk_write_space(sk);
+-	}
++static int xs_stream_nospace(struct rpc_rqst *req)
++{
++	struct sock_xprt *transport =
++		container_of(req->rq_xprt, struct sock_xprt, xprt);
++	struct sock *sk = transport->inet;
++	int ret = -EAGAIN;
++
++	lock_sock(sk);
++	if (!sk_stream_memory_free(sk))
++		ret = xs_nospace(req, transport);
++	release_sock(sk);
+ 	return ret;
+ }
+ 
+@@ -878,7 +902,7 @@ static int xs_local_send_request(struct rpc_rqst *req)
+ 	case -ENOBUFS:
+ 		break;
+ 	case -EAGAIN:
+-		status = xs_nospace(req);
++		status = xs_stream_nospace(req);
+ 		break;
+ 	default:
+ 		dprintk("RPC:       sendmsg returned unrecognized error %d\n",
+@@ -954,7 +978,7 @@ process_status:
+ 		/* Should we call xs_close() here? */
+ 		break;
+ 	case -EAGAIN:
+-		status = xs_nospace(req);
++		status = xs_sock_nospace(req);
+ 		break;
+ 	case -ENETUNREACH:
+ 	case -ENOBUFS:
+@@ -1069,7 +1093,7 @@ static int xs_tcp_send_request(struct rpc_rqst *req)
+ 		/* Should we call xs_close() here? */
+ 		break;
+ 	case -EAGAIN:
+-		status = xs_nospace(req);
++		status = xs_stream_nospace(req);
+ 		break;
+ 	case -ECONNRESET:
+ 	case -ECONNREFUSED:
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 8cd011ea9fbb8..21f20c3cda971 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1483,7 +1483,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
+ 	}
+ 	if (prot->version == TLS_1_3_VERSION)
+ 		memcpy(iv + iv_offset, tls_ctx->rx.iv,
+-		       crypto_aead_ivsize(ctx->aead_recv));
++		       prot->iv_size + prot->salt_size);
+ 	else
+ 		memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size);
+ 
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index fd614a5a00b42..c1b2655682a8a 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -702,8 +702,12 @@ static bool cfg80211_find_ssid_match(struct cfg80211_colocated_ap *ap,
+ 
+ 	for (i = 0; i < request->n_ssids; i++) {
+ 		/* wildcard ssid in the scan request */
+-		if (!request->ssids[i].ssid_len)
++		if (!request->ssids[i].ssid_len) {
++			if (ap->multi_bss && !ap->transmitted_bssid)
++				continue;
++
+ 			return true;
++		}
+ 
+ 		if (ap->ssid_len &&
+ 		    ap->ssid_len == request->ssids[i].ssid_len) {
+@@ -830,6 +834,9 @@ static int cfg80211_scan_6ghz(struct cfg80211_registered_device *rdev)
+ 		    !cfg80211_find_ssid_match(ap, request))
+ 			continue;
+ 
++		if (!request->n_ssids && ap->multi_bss && !ap->transmitted_bssid)
++			continue;
++
+ 		cfg80211_scan_req_add_chan(request, chan, true);
+ 		memcpy(scan_6ghz_params->bssid, ap->bssid, ETH_ALEN);
+ 		scan_6ghz_params->short_ssid = ap->short_ssid;
+diff --git a/scripts/Makefile.ubsan b/scripts/Makefile.ubsan
+index 9716dab06bc7a..2156e18391a3f 100644
+--- a/scripts/Makefile.ubsan
++++ b/scripts/Makefile.ubsan
+@@ -23,7 +23,6 @@ ifdef CONFIG_UBSAN_MISC
+       CFLAGS_UBSAN += $(call cc-option, -fsanitize=integer-divide-by-zero)
+       CFLAGS_UBSAN += $(call cc-option, -fsanitize=unreachable)
+       CFLAGS_UBSAN += $(call cc-option, -fsanitize=signed-integer-overflow)
+-      CFLAGS_UBSAN += $(call cc-option, -fsanitize=object-size)
+       CFLAGS_UBSAN += $(call cc-option, -fsanitize=bool)
+       CFLAGS_UBSAN += $(call cc-option, -fsanitize=enum)
+ endif
+diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
+index 22ea350dab588..221250973d078 100644
+--- a/tools/build/feature/Makefile
++++ b/tools/build/feature/Makefile
+@@ -210,9 +210,16 @@ strip-libs = $(filter-out -l%,$(1))
+ PERL_EMBED_LDOPTS = $(shell perl -MExtUtils::Embed -e ldopts 2>/dev/null)
+ PERL_EMBED_LDFLAGS = $(call strip-libs,$(PERL_EMBED_LDOPTS))
+ PERL_EMBED_LIBADD = $(call grep-libs,$(PERL_EMBED_LDOPTS))
+-PERL_EMBED_CCOPTS = `perl -MExtUtils::Embed -e ccopts 2>/dev/null`
++PERL_EMBED_CCOPTS = $(shell perl -MExtUtils::Embed -e ccopts 2>/dev/null)
+ FLAGS_PERL_EMBED=$(PERL_EMBED_CCOPTS) $(PERL_EMBED_LDOPTS)
+ 
++ifeq ($(CC_NO_CLANG), 0)
++  PERL_EMBED_LDOPTS := $(filter-out -specs=%,$(PERL_EMBED_LDOPTS))
++  PERL_EMBED_CCOPTS := $(filter-out -flto=auto -ffat-lto-objects, $(PERL_EMBED_CCOPTS))
++  PERL_EMBED_CCOPTS := $(filter-out -specs=%,$(PERL_EMBED_CCOPTS))
++  FLAGS_PERL_EMBED += -Wno-compound-token-split-by-macro
++endif
++
+ $(OUTPUT)test-libperl.bin:
+ 	$(BUILD) $(FLAGS_PERL_EMBED)
+ 
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index 154b75fc1373e..f2a353bba25f4 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -147,7 +147,7 @@ GLOBAL_SYM_COUNT = $(shell readelf -s --wide $(BPF_IN_SHARED) | \
+ 			   sort -u | wc -l)
+ VERSIONED_SYM_COUNT = $(shell readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \
+ 			      sed 's/\[.*\]//' | \
+-			      awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \
++			      awk '/GLOBAL/ && /DEFAULT/ && !/UND|ABS/ {print $$NF}' | \
+ 			      grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | sort -u | wc -l)
+ 
+ CMD_TARGETS = $(LIB_TARGET) $(PC_FILE)
+@@ -216,7 +216,7 @@ check_abi: $(OUTPUT)libbpf.so $(VERSION_SCRIPT)
+ 		    sort -u > $(OUTPUT)libbpf_global_syms.tmp;		 \
+ 		readelf --dyn-syms --wide $(OUTPUT)libbpf.so |		 \
+ 		    sed 's/\[.*\]//' |					 \
+-		    awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'|  \
++		    awk '/GLOBAL/ && /DEFAULT/ && !/UND|ABS/ {print $$NF}'|  \
+ 		    grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 |		 \
+ 		    sort -u > $(OUTPUT)libbpf_versioned_syms.tmp; 	 \
+ 		diff -u $(OUTPUT)libbpf_global_syms.tmp			 \
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 68408a5ecfed2..41dff8d38448d 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -255,6 +255,9 @@ ifdef PYTHON_CONFIG
+   PYTHON_EMBED_LIBADD := $(call grep-libs,$(PYTHON_EMBED_LDOPTS)) -lutil
+   PYTHON_EMBED_CCOPTS := $(shell $(PYTHON_CONFIG_SQ) --includes 2>/dev/null)
+   FLAGS_PYTHON_EMBED := $(PYTHON_EMBED_CCOPTS) $(PYTHON_EMBED_LDOPTS)
++  ifeq ($(CC_NO_CLANG), 0)
++    PYTHON_EMBED_CCOPTS := $(filter-out -ffat-lto-objects, $(PYTHON_EMBED_CCOPTS))
++  endif
+ endif
+ 
+ FEATURE_CHECK_CFLAGS-libpython := $(PYTHON_EMBED_CCOPTS)
+@@ -760,6 +763,9 @@ else
+     LDFLAGS += $(PERL_EMBED_LDFLAGS)
+     EXTLIBS += $(PERL_EMBED_LIBADD)
+     CFLAGS += -DHAVE_LIBPERL_SUPPORT
++    ifeq ($(CC_NO_CLANG), 0)
++      CFLAGS += -Wno-compound-token-split-by-macro
++    endif
+     $(call detected,CONFIG_LIBPERL)
+   endif
+ endif
+diff --git a/tools/perf/arch/arm64/util/arm-spe.c b/tools/perf/arch/arm64/util/arm-spe.c
+index e3593063b3d17..37765e2bd9ddb 100644
+--- a/tools/perf/arch/arm64/util/arm-spe.c
++++ b/tools/perf/arch/arm64/util/arm-spe.c
+@@ -124,6 +124,12 @@ static int arm_spe_recording_options(struct auxtrace_record *itr,
+ 	evsel__set_sample_bit(arm_spe_evsel, TIME);
+ 	evsel__set_sample_bit(arm_spe_evsel, TID);
+ 
++	/*
++	 * Set this only so that perf report knows that SPE generates memory info. It has no effect
++	 * on the opening of the event or the SPE data produced.
++	 */
++	evsel__set_sample_bit(arm_spe_evsel, DATA_SRC);
++
+ 	/* Add dummy event to keep tracking */
+ 	err = parse_events(evlist, "dummy:u", NULL);
+ 	if (err)
+diff --git a/tools/perf/perf.c b/tools/perf/perf.c
+index 27f94b0bb8747..505e2a2f1872b 100644
+--- a/tools/perf/perf.c
++++ b/tools/perf/perf.c
+@@ -433,7 +433,7 @@ void pthread__unblock_sigwinch(void)
+ static int libperf_print(enum libperf_print_level level,
+ 			 const char *fmt, va_list ap)
+ {
+-	return eprintf(level, verbose, fmt, ap);
++	return veprintf(level, verbose, fmt, ap);
+ }
+ 
+ int main(int argc, const char **argv)
+diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
+index 9dddec19a494e..354e1e04a2662 100644
+--- a/tools/perf/util/session.c
++++ b/tools/perf/util/session.c
+@@ -2056,6 +2056,7 @@ prefetch_event(char *buf, u64 head, size_t mmap_size,
+ 	       bool needs_swap, union perf_event *error)
+ {
+ 	union perf_event *event;
++	u16 event_size;
+ 
+ 	/*
+ 	 * Ensure we have enough space remaining to read
+@@ -2068,15 +2069,23 @@ prefetch_event(char *buf, u64 head, size_t mmap_size,
+ 	if (needs_swap)
+ 		perf_event_header__bswap(&event->header);
+ 
+-	if (head + event->header.size <= mmap_size)
++	event_size = event->header.size;
++	if (head + event_size <= mmap_size)
+ 		return event;
+ 
+ 	/* We're not fetching the event so swap back again */
+ 	if (needs_swap)
+ 		perf_event_header__bswap(&event->header);
+ 
+-	pr_debug("%s: head=%#" PRIx64 " event->header_size=%#x, mmap_size=%#zx:"
+-		 " fuzzed or compressed perf.data?\n",__func__, head, event->header.size, mmap_size);
++	/* Check if the event fits into the next mmapped buf. */
++	if (event_size <= mmap_size - head % page_size) {
++		/* Remap buf and fetch again. */
++		return NULL;
++	}
++
++	/* Invalid input. Event size should never exceed mmap_size. */
++	pr_debug("%s: head=%#" PRIx64 " event->header.size=%#x, mmap_size=%#zx:"
++		 " fuzzed or compressed perf.data?\n", __func__, head, event_size, mmap_size);
+ 
+ 	return error;
+ }
+diff --git a/tools/perf/util/setup.py b/tools/perf/util/setup.py
+index c5e3e9a68162d..b670469a8124f 100644
+--- a/tools/perf/util/setup.py
++++ b/tools/perf/util/setup.py
+@@ -1,12 +1,14 @@
+-from os import getenv
++from os import getenv, path
+ from subprocess import Popen, PIPE
+ from re import sub
+ 
+ cc = getenv("CC")
+ cc_is_clang = b"clang version" in Popen([cc.split()[0], "-v"], stderr=PIPE).stderr.readline()
++src_feature_tests  = getenv('srctree') + '/tools/build/feature'
+ 
+ def clang_has_option(option):
+-    return [o for o in Popen([cc, option], stderr=PIPE).stderr.readlines() if b"unknown argument" in o] == [ ]
++    cc_output = Popen([cc, option, path.join(src_feature_tests, "test-hello.c") ], stderr=PIPE).stderr.readlines()
++    return [o for o in cc_output if ((b"unknown argument" in o) or (b"is not supported" in o))] == [ ]
+ 
+ if cc_is_clang:
+     from distutils.sysconfig import get_config_vars
+@@ -23,6 +25,8 @@ if cc_is_clang:
+             vars[var] = sub("-fstack-protector-strong", "", vars[var])
+         if not clang_has_option("-fno-semantic-interposition"):
+             vars[var] = sub("-fno-semantic-interposition", "", vars[var])
++        if not clang_has_option("-ffat-lto-objects"):
++            vars[var] = sub("-ffat-lto-objects", "", vars[var])
+ 
+ from distutils.core import setup, Extension
+ 
+diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
+index 05853b0b88318..5b16c7b0ae4fa 100644
+--- a/tools/testing/selftests/cgroup/cgroup_util.c
++++ b/tools/testing/selftests/cgroup/cgroup_util.c
+@@ -219,7 +219,7 @@ int cg_find_unified_root(char *root, size_t len)
+ 
+ int cg_create(const char *cgroup)
+ {
+-	return mkdir(cgroup, 0644);
++	return mkdir(cgroup, 0755);
+ }
+ 
+ int cg_wait_for_proc_count(const char *cgroup, int count)
+@@ -337,13 +337,13 @@ pid_t clone_into_cgroup(int cgroup_fd)
+ #ifdef CLONE_ARGS_SIZE_VER2
+ 	pid_t pid;
+ 
+-	struct clone_args args = {
++	struct __clone_args args = {
+ 		.flags = CLONE_INTO_CGROUP,
+ 		.exit_signal = SIGCHLD,
+ 		.cgroup = cgroup_fd,
+ 	};
+ 
+-	pid = sys_clone3(&args, sizeof(struct clone_args));
++	pid = sys_clone3(&args, sizeof(struct __clone_args));
+ 	/*
+ 	 * Verify that this is a genuine test failure:
+ 	 * ENOSYS -> clone3() not available
+diff --git a/tools/testing/selftests/cgroup/test_core.c b/tools/testing/selftests/cgroup/test_core.c
+index 3df648c378765..6001235030631 100644
+--- a/tools/testing/selftests/cgroup/test_core.c
++++ b/tools/testing/selftests/cgroup/test_core.c
+@@ -1,11 +1,14 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ 
++#define _GNU_SOURCE
+ #include <linux/limits.h>
++#include <linux/sched.h>
+ #include <sys/types.h>
+ #include <sys/mman.h>
+ #include <sys/wait.h>
+ #include <unistd.h>
+ #include <fcntl.h>
++#include <sched.h>
+ #include <stdio.h>
+ #include <errno.h>
+ #include <signal.h>
+@@ -674,6 +677,166 @@ cleanup:
+ 	return ret;
+ }
+ 
++/*
++ * cgroup migration permission check should be performed based on the
++ * credentials at the time of open instead of write.
++ */
++static int test_cgcore_lesser_euid_open(const char *root)
++{
++	const uid_t test_euid = 65534;	/* usually nobody, any !root is fine */
++	int ret = KSFT_FAIL;
++	char *cg_test_a = NULL, *cg_test_b = NULL;
++	char *cg_test_a_procs = NULL, *cg_test_b_procs = NULL;
++	int cg_test_b_procs_fd = -1;
++	uid_t saved_uid;
++
++	cg_test_a = cg_name(root, "cg_test_a");
++	cg_test_b = cg_name(root, "cg_test_b");
++
++	if (!cg_test_a || !cg_test_b)
++		goto cleanup;
++
++	cg_test_a_procs = cg_name(cg_test_a, "cgroup.procs");
++	cg_test_b_procs = cg_name(cg_test_b, "cgroup.procs");
++
++	if (!cg_test_a_procs || !cg_test_b_procs)
++		goto cleanup;
++
++	if (cg_create(cg_test_a) || cg_create(cg_test_b))
++		goto cleanup;
++
++	if (cg_enter_current(cg_test_a))
++		goto cleanup;
++
++	if (chown(cg_test_a_procs, test_euid, -1) ||
++	    chown(cg_test_b_procs, test_euid, -1))
++		goto cleanup;
++
++	saved_uid = geteuid();
++	if (seteuid(test_euid))
++		goto cleanup;
++
++	cg_test_b_procs_fd = open(cg_test_b_procs, O_RDWR);
++
++	if (seteuid(saved_uid))
++		goto cleanup;
++
++	if (cg_test_b_procs_fd < 0)
++		goto cleanup;
++
++	if (write(cg_test_b_procs_fd, "0", 1) >= 0 || errno != EACCES)
++		goto cleanup;
++
++	ret = KSFT_PASS;
++
++cleanup:
++	cg_enter_current(root);
++	if (cg_test_b_procs_fd >= 0)
++		close(cg_test_b_procs_fd);
++	if (cg_test_b)
++		cg_destroy(cg_test_b);
++	if (cg_test_a)
++		cg_destroy(cg_test_a);
++	free(cg_test_b_procs);
++	free(cg_test_a_procs);
++	free(cg_test_b);
++	free(cg_test_a);
++	return ret;
++}
++
++struct lesser_ns_open_thread_arg {
++	const char	*path;
++	int		fd;
++	int		err;
++};
++
++static int lesser_ns_open_thread_fn(void *arg)
++{
++	struct lesser_ns_open_thread_arg *targ = arg;
++
++	targ->fd = open(targ->path, O_RDWR);
++	targ->err = errno;
++	return 0;
++}
++
++/*
++ * cgroup migration permission check should be performed based on the cgroup
++ * namespace at the time of open instead of write.
++ */
++static int test_cgcore_lesser_ns_open(const char *root)
++{
++	static char stack[65536];
++	const uid_t test_euid = 65534;	/* usually nobody, any !root is fine */
++	int ret = KSFT_FAIL;
++	char *cg_test_a = NULL, *cg_test_b = NULL;
++	char *cg_test_a_procs = NULL, *cg_test_b_procs = NULL;
++	int cg_test_b_procs_fd = -1;
++	struct lesser_ns_open_thread_arg targ = { .fd = -1 };
++	pid_t pid;
++	int status;
++
++	cg_test_a = cg_name(root, "cg_test_a");
++	cg_test_b = cg_name(root, "cg_test_b");
++
++	if (!cg_test_a || !cg_test_b)
++		goto cleanup;
++
++	cg_test_a_procs = cg_name(cg_test_a, "cgroup.procs");
++	cg_test_b_procs = cg_name(cg_test_b, "cgroup.procs");
++
++	if (!cg_test_a_procs || !cg_test_b_procs)
++		goto cleanup;
++
++	if (cg_create(cg_test_a) || cg_create(cg_test_b))
++		goto cleanup;
++
++	if (cg_enter_current(cg_test_b))
++		goto cleanup;
++
++	if (chown(cg_test_a_procs, test_euid, -1) ||
++	    chown(cg_test_b_procs, test_euid, -1))
++		goto cleanup;
++
++	targ.path = cg_test_b_procs;
++	pid = clone(lesser_ns_open_thread_fn, stack + sizeof(stack),
++		    CLONE_NEWCGROUP | CLONE_FILES | CLONE_VM | SIGCHLD,
++		    &targ);
++	if (pid < 0)
++		goto cleanup;
++
++	if (waitpid(pid, &status, 0) < 0)
++		goto cleanup;
++
++	if (!WIFEXITED(status))
++		goto cleanup;
++
++	cg_test_b_procs_fd = targ.fd;
++	if (cg_test_b_procs_fd < 0)
++		goto cleanup;
++
++	if (cg_enter_current(cg_test_a))
++		goto cleanup;
++
++	if ((status = write(cg_test_b_procs_fd, "0", 1)) >= 0 || errno != ENOENT)
++		goto cleanup;
++
++	ret = KSFT_PASS;
++
++cleanup:
++	cg_enter_current(root);
++	if (cg_test_b_procs_fd >= 0)
++		close(cg_test_b_procs_fd);
++	if (cg_test_b)
++		cg_destroy(cg_test_b);
++	if (cg_test_a)
++		cg_destroy(cg_test_a);
++	free(cg_test_b_procs);
++	free(cg_test_a_procs);
++	free(cg_test_b);
++	free(cg_test_a);
++	return ret;
++}
++
+ #define T(x) { x, #x }
+ struct corecg_test {
+ 	int (*fn)(const char *root);
+@@ -689,6 +852,8 @@ struct corecg_test {
+ 	T(test_cgcore_proc_migration),
+ 	T(test_cgcore_thread_migration),
+ 	T(test_cgcore_destroy),
++	T(test_cgcore_lesser_euid_open),
++	T(test_cgcore_lesser_ns_open),
+ };
+ #undef T
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-04-13 20:20 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-04-13 20:20 UTC (permalink / raw
  To: gentoo-commits

commit:     6f5bf0970c0bae5f810a1e51c9068579589f373f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 13 20:20:29 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 13 20:20:29 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6f5bf097

Remove deprecated select AUTOFS4_FS

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index c9e2f30a..ab910775 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -8,7 +8,7 @@
 +source "distro/Kconfig"
 --- /dev/null	2022-01-30 08:12:05.041788304 -0500
 +++ b/distro/Kconfig	2022-01-30 15:28:10.030352980 -0500
-@@ -0,0 +1,286 @@
+@@ -0,0 +1,285 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -121,7 +121,6 @@
 +
 +	depends on GENTOO_LINUX && GENTOO_LINUX_UDEV
 +
-+	select AUTOFS4_FS
 +	select AUTOFS_FS
 +	select BLK_DEV_BSG
 +	select BPF_SYSCALL


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-04-20 12:07 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-04-20 12:07 UTC (permalink / raw
  To: gentoo-commits

commit:     760cba0b4f0006dcfc2311e65b17747a161e6dcd
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 20 12:07:48 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 20 12:07:48 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=760cba0b

Linux patch 5.10.112

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1111_linux-5.10.112.patch | 4003 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4007 insertions(+)

diff --git a/0000_README b/0000_README
index 9076962f..a85d4035 100644
--- a/0000_README
+++ b/0000_README
@@ -487,6 +487,10 @@ Patch:  1110_linux-5.10.111.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.111
 
+Patch:  1111_linux-5.10.112.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.112
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1111_linux-5.10.112.patch b/1111_linux-5.10.112.patch
new file mode 100644
index 00000000..fd0346bc
--- /dev/null
+++ b/1111_linux-5.10.112.patch
@@ -0,0 +1,4003 @@
+diff --git a/Makefile b/Makefile
+index 8695a13fe7cd6..05013bf5a469b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 111
++SUBLEVEL = 112
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/mach-davinci/board-da850-evm.c b/arch/arm/mach-davinci/board-da850-evm.c
+index 428012687a802..7f7f6bae21c2d 100644
+--- a/arch/arm/mach-davinci/board-da850-evm.c
++++ b/arch/arm/mach-davinci/board-da850-evm.c
+@@ -1101,11 +1101,13 @@ static int __init da850_evm_config_emac(void)
+ 	int ret;
+ 	u32 val;
+ 	struct davinci_soc_info *soc_info = &davinci_soc_info;
+-	u8 rmii_en = soc_info->emac_pdata->rmii_en;
++	u8 rmii_en;
+ 
+ 	if (!machine_is_davinci_da850_evm())
+ 		return 0;
+ 
++	rmii_en = soc_info->emac_pdata->rmii_en;
++
+ 	cfg_chip3_base = DA8XX_SYSCFG0_VIRT(DA8XX_CFGCHIP3_REG);
+ 
+ 	val = __raw_readl(cfg_chip3_base);
+diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
+index 73039949b5ce2..5f8e4c2df53cc 100644
+--- a/arch/arm64/kernel/alternative.c
++++ b/arch/arm64/kernel/alternative.c
+@@ -41,7 +41,7 @@ bool alternative_is_applied(u16 cpufeature)
+ /*
+  * Check if the target PC is within an alternative block.
+  */
+-static bool branch_insn_requires_update(struct alt_instr *alt, unsigned long pc)
++static __always_inline bool branch_insn_requires_update(struct alt_instr *alt, unsigned long pc)
+ {
+ 	unsigned long replptr = (unsigned long)ALT_REPL_PTR(alt);
+ 	return !(pc >= replptr && pc <= (replptr + alt->alt_len));
+@@ -49,7 +49,7 @@ static bool branch_insn_requires_update(struct alt_instr *alt, unsigned long pc)
+ 
+ #define align_down(x, a)	((unsigned long)(x) & ~(((unsigned long)(a)) - 1))
+ 
+-static u32 get_alt_insn(struct alt_instr *alt, __le32 *insnptr, __le32 *altinsnptr)
++static __always_inline u32 get_alt_insn(struct alt_instr *alt, __le32 *insnptr, __le32 *altinsnptr)
+ {
+ 	u32 insn;
+ 
+@@ -94,7 +94,7 @@ static u32 get_alt_insn(struct alt_instr *alt, __le32 *insnptr, __le32 *altinsnp
+ 	return insn;
+ }
+ 
+-static void patch_alternative(struct alt_instr *alt,
++static noinstr void patch_alternative(struct alt_instr *alt,
+ 			      __le32 *origptr, __le32 *updptr, int nr_inst)
+ {
+ 	__le32 *replptr;
+diff --git a/arch/arm64/kernel/cpuidle.c b/arch/arm64/kernel/cpuidle.c
+index b512b5503f6e6..d4ff9ae673fa4 100644
+--- a/arch/arm64/kernel/cpuidle.c
++++ b/arch/arm64/kernel/cpuidle.c
+@@ -54,6 +54,9 @@ static int psci_acpi_cpu_init_idle(unsigned int cpu)
+ 	struct acpi_lpi_state *lpi;
+ 	struct acpi_processor *pr = per_cpu(processors, cpu);
+ 
++	if (unlikely(!pr || !pr->flags.has_lpi))
++		return -EINVAL;
++
+ 	/*
+ 	 * If the PSCI cpu_suspend function hook has not been initialized
+ 	 * idle states must not be enabled, so bail out
+@@ -61,9 +64,6 @@ static int psci_acpi_cpu_init_idle(unsigned int cpu)
+ 	if (!psci_ops.cpu_suspend)
+ 		return -EOPNOTSUPP;
+ 
+-	if (unlikely(!pr || !pr->flags.has_lpi))
+-		return -EINVAL;
+-
+ 	count = pr->power.count - 1;
+ 	if (count <= 0)
+ 		return -ENODEV;
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 0eb41dce55da3..b0e4001efb50f 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1340,8 +1340,9 @@ static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+ 		return -ENOTSUPP;
+ }
+ 
+-int kvm_mmu_module_init(void);
+-void kvm_mmu_module_exit(void);
++void kvm_mmu_x86_module_init(void);
++int kvm_mmu_vendor_module_init(void);
++void kvm_mmu_vendor_module_exit(void);
+ 
+ void kvm_mmu_destroy(struct kvm_vcpu *vcpu);
+ int kvm_mmu_create(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 20d29ae8ed702..99ea1ec12ffe0 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5876,12 +5876,24 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
+ 	return 0;
+ }
+ 
+-int kvm_mmu_module_init(void)
++/*
++ * nx_huge_pages needs to be resolved to true/false when kvm.ko is loaded, as
++ * its default value of -1 is technically undefined behavior for a boolean.
++ */
++void kvm_mmu_x86_module_init(void)
+ {
+-	int ret = -ENOMEM;
+-
+ 	if (nx_huge_pages == -1)
+ 		__set_nx_huge_pages(get_nx_auto_mode());
++}
++
++/*
++ * The bulk of the MMU initialization is deferred until the vendor module is
++ * loaded as many of the masks/values may be modified by VMX or SVM, i.e. need
++ * to be reset when a potentially different vendor module is loaded.
++ */
++int kvm_mmu_vendor_module_init(void)
++{
++	int ret = -ENOMEM;
+ 
+ 	/*
+ 	 * MMU roles use union aliasing which is, generally speaking, an
+@@ -5955,7 +5967,7 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
+ 	mmu_free_memory_caches(vcpu);
+ }
+ 
+-void kvm_mmu_module_exit(void)
++void kvm_mmu_vendor_module_exit(void)
+ {
+ 	mmu_destroy_caches();
+ 	percpu_counter_destroy(&kvm_total_used_mmu_pages);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 70d23bec09f5c..4588f73bf59a4 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -8005,7 +8005,7 @@ int kvm_arch_init(void *opaque)
+ 		goto out_free_x86_emulator_cache;
+ 	}
+ 
+-	r = kvm_mmu_module_init();
++	r = kvm_mmu_vendor_module_init();
+ 	if (r)
+ 		goto out_free_percpu;
+ 
+@@ -8065,7 +8065,7 @@ void kvm_arch_exit(void)
+ 	cancel_work_sync(&pvclock_gtod_work);
+ #endif
+ 	kvm_x86_ops.hardware_enable = NULL;
+-	kvm_mmu_module_exit();
++	kvm_mmu_vendor_module_exit();
+ 	free_percpu(user_return_msrs);
+ 	kmem_cache_destroy(x86_emulator_cache);
+ 	kmem_cache_destroy(x86_fpu_cache);
+@@ -11426,3 +11426,19 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_unaccelerated_access);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_incomplete_ipi);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_ga_log);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_apicv_update_request);
++
++static int __init kvm_x86_init(void)
++{
++	kvm_mmu_x86_module_init();
++	return 0;
++}
++module_init(kvm_x86_init);
++
++static void __exit kvm_x86_exit(void)
++{
++	/*
++	 * If module_init() is implemented, module_exit() must also be
++	 * implemented to allow module unload.
++	 */
++}
++module_exit(kvm_x86_exit);
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 8377c3ed10ffa..9921b481c7ee1 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -1080,6 +1080,11 @@ static int flatten_lpi_states(struct acpi_processor *pr,
+ 	return 0;
+ }
+ 
++int __weak acpi_processor_ffh_lpi_probe(unsigned int cpu)
++{
++	return -EOPNOTSUPP;
++}
++
+ static int acpi_processor_get_lpi_info(struct acpi_processor *pr)
+ {
+ 	int ret, i;
+@@ -1088,6 +1093,11 @@ static int acpi_processor_get_lpi_info(struct acpi_processor *pr)
+ 	struct acpi_device *d = NULL;
+ 	struct acpi_lpi_states_array info[2], *tmp, *prev, *curr;
+ 
++	/* make sure our architecture has support */
++	ret = acpi_processor_ffh_lpi_probe(pr->id);
++	if (ret == -EOPNOTSUPP)
++		return ret;
++
+ 	if (!osc_pc_lpi_support_confirmed)
+ 		return -EOPNOTSUPP;
+ 
+@@ -1139,11 +1149,6 @@ static int acpi_processor_get_lpi_info(struct acpi_processor *pr)
+ 	return 0;
+ }
+ 
+-int __weak acpi_processor_ffh_lpi_probe(unsigned int cpu)
+-{
+-	return -ENODEV;
+-}
+-
+ int __weak acpi_processor_ffh_lpi_enter(struct acpi_lpi_state *lpi)
+ {
+ 	return -ENODEV;
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index d2b544bdc7b5e..f963a0a7da46a 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -3974,6 +3974,9 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "Crucial_CT*MX100*",		"MU01",	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
++	{ "Samsung SSD 840 EVO*",	NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
++						ATA_HORKAGE_NO_DMA_LOG |
++						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "Samsung SSD 840*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "Samsung SSD 850*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+diff --git a/drivers/firmware/arm_scmi/clock.c b/drivers/firmware/arm_scmi/clock.c
+index 4645677d86f1b..a45678cd9b740 100644
+--- a/drivers/firmware/arm_scmi/clock.c
++++ b/drivers/firmware/arm_scmi/clock.c
+@@ -202,7 +202,8 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
+ 
+ 	if (rate_discrete && rate) {
+ 		clk->list.num_rates = tot_rate_cnt;
+-		sort(rate, tot_rate_cnt, sizeof(*rate), rate_cmp_func, NULL);
++		sort(clk->list.rates, tot_rate_cnt, sizeof(*rate),
++		     rate_cmp_func, NULL);
+ 	}
+ 
+ 	clk->rate_discrete = rate_discrete;
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index 55e4f402ec8b6..44ee319da1b35 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -276,8 +276,8 @@ static acpi_status acpi_gpiochip_alloc_event(struct acpi_resource *ares,
+ 	pin = agpio->pin_table[0];
+ 
+ 	if (pin <= 255) {
+-		char ev_name[5];
+-		sprintf(ev_name, "_%c%02hhX",
++		char ev_name[8];
++		sprintf(ev_name, "_%c%02X",
+ 			agpio->triggering == ACPI_EDGE_SENSITIVE ? 'E' : 'L',
+ 			pin);
+ 		if (ACPI_SUCCESS(acpi_get_handle(handle, ev_name, &evt_handle)))
+diff --git a/drivers/gpu/drm/amd/amdgpu/ObjectID.h b/drivers/gpu/drm/amd/amdgpu/ObjectID.h
+index 5b393622f5920..a0f0a17e224fe 100644
+--- a/drivers/gpu/drm/amd/amdgpu/ObjectID.h
++++ b/drivers/gpu/drm/amd/amdgpu/ObjectID.h
+@@ -119,6 +119,7 @@
+ #define CONNECTOR_OBJECT_ID_eDP                   0x14
+ #define CONNECTOR_OBJECT_ID_MXM                   0x15
+ #define CONNECTOR_OBJECT_ID_LVDS_eDP              0x16
++#define CONNECTOR_OBJECT_ID_USBC                  0x17
+ 
+ /* deleted */
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index 26f8a21383774..1b4c7ced8b92c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -1024,11 +1024,15 @@ int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct kgd_dev *kgd,
+ 					   struct dma_fence **ef)
+ {
+ 	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+-	struct drm_file *drm_priv = filp->private_data;
+-	struct amdgpu_fpriv *drv_priv = drm_priv->driver_priv;
+-	struct amdgpu_vm *avm = &drv_priv->vm;
++	struct amdgpu_fpriv *drv_priv;
++	struct amdgpu_vm *avm;
+ 	int ret;
+ 
++	ret = amdgpu_file_to_fpriv(filp, &drv_priv);
++	if (ret)
++		return ret;
++	avm = &drv_priv->vm;
++
+ 	/* Already a compute VM? */
+ 	if (avm->process_info)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index ed13a2f76884c..30659c1776e81 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -632,7 +632,7 @@ MODULE_PARM_DESC(sched_policy,
+  * Maximum number of processes that HWS can schedule concurrently. The maximum is the
+  * number of VMIDs assigned to the HWS, which is also the default.
+  */
+-int hws_max_conc_proc = 8;
++int hws_max_conc_proc = -1;
+ module_param(hws_max_conc_proc, int, 0444);
+ MODULE_PARM_DESC(hws_max_conc_proc,
+ 	"Max # processes HWS can execute concurrently when sched_policy=0 (0 = no concurrency, #VMIDs for KFD = Maximum(default))");
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index b19f7bd37781f..405bb3efa2a96 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1248,6 +1248,8 @@ static const struct amdgpu_gfxoff_quirk amdgpu_gfxoff_quirk_list[] = {
+ 	{ 0x1002, 0x15dd, 0x103c, 0x83e7, 0xd3 },
+ 	/* GFXOFF is unstable on C6 parts with a VBIOS 113-RAVEN-114 */
+ 	{ 0x1002, 0x15dd, 0x1002, 0x15dd, 0xc6 },
++	/* Apple MacBook Pro (15-inch, 2019) Radeon Pro Vega 20 4 GB */
++	{ 0x1002, 0x69af, 0x106b, 0x019a, 0xc0 },
+ 	{ 0, 0, 0, 0, 0 },
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index 2099f6ebd8338..bdb8e596bda6a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -1429,8 +1429,11 @@ static int vcn_v3_0_start_sriov(struct amdgpu_device *adev)
+ 
+ static int vcn_v3_0_stop_dpg_mode(struct amdgpu_device *adev, int inst_idx)
+ {
++	struct dpg_pause_state state = {.fw_based = VCN_DPG_STATE__UNPAUSE};
+ 	uint32_t tmp;
+ 
++	vcn_v3_0_pause_dpg_mode(adev, 0, &state);
++
+ 	/* Wait for power status to be 1 */
+ 	SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
+ 		UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 84313135c2eae..148e43dee657a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -664,15 +664,10 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 			- kfd->vm_info.first_vmid_kfd + 1;
+ 
+ 	/* Verify module parameters regarding mapped process number*/
+-	if ((hws_max_conc_proc < 0)
+-			|| (hws_max_conc_proc > kfd->vm_info.vmid_num_kfd)) {
+-		dev_err(kfd_device,
+-			"hws_max_conc_proc %d must be between 0 and %d, use %d instead\n",
+-			hws_max_conc_proc, kfd->vm_info.vmid_num_kfd,
+-			kfd->vm_info.vmid_num_kfd);
++	if (hws_max_conc_proc >= 0)
++		kfd->max_proc_per_quantum = min((u32)hws_max_conc_proc, kfd->vm_info.vmid_num_kfd);
++	else
+ 		kfd->max_proc_per_quantum = kfd->vm_info.vmid_num_kfd;
+-	} else
+-		kfd->max_proc_per_quantum = hws_max_conc_proc;
+ 
+ 	/* calculate max size of mqds needed for queues */
+ 	size = max_num_of_queues_per_device *
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+index ba2c2ce0c55af..159be13ef20bb 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+@@ -531,6 +531,8 @@ static struct kfd_event_waiter *alloc_event_waiters(uint32_t num_events)
+ 	event_waiters = kmalloc_array(num_events,
+ 					sizeof(struct kfd_event_waiter),
+ 					GFP_KERNEL);
++	if (!event_waiters)
++		return NULL;
+ 
+ 	for (i = 0; (event_waiters) && (i < num_events) ; i++) {
+ 		init_wait(&event_waiters[i].wait);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index e828f9414ba2c..7bb151283f44b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2022,7 +2022,8 @@ static int dm_resume(void *handle)
+ 		 * this is the case when traversing through already created
+ 		 * MST connectors, should be skipped
+ 		 */
+-		if (aconnector->mst_port)
++		if (aconnector->dc_link &&
++		    aconnector->dc_link->type == dc_connection_mst_branch)
+ 			continue;
+ 
+ 		mutex_lock(&aconnector->hpd_lock);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 5c5ccbad96588..1e47afc4ccc1d 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1701,8 +1701,8 @@ bool dc_is_stream_unchanged(
+ 	if (old_stream->ignore_msa_timing_param != stream->ignore_msa_timing_param)
+ 		return false;
+ 
+-	// Only Have Audio left to check whether it is same or not. This is a corner case for Tiled sinks
+-	if (old_stream->audio_info.mode_count != stream->audio_info.mode_count)
++	/*compare audio info*/
++	if (memcmp(&old_stream->audio_info, &stream->audio_info, sizeof(stream->audio_info)) != 0)
+ 		return false;
+ 
+ 	return true;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 532f6a1145b55..31a13daf4289c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -2387,14 +2387,18 @@ void dcn10_update_mpcc(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ 				&blnd_cfg.black_color);
+ 	}
+ 
+-	if (per_pixel_alpha)
+-		blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_PER_PIXEL_ALPHA;
+-	else
+-		blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_GLOBAL_ALPHA;
+-
+ 	blnd_cfg.overlap_only = false;
+ 	blnd_cfg.global_gain = 0xff;
+ 
++	if (per_pixel_alpha && pipe_ctx->plane_state->global_alpha) {
++		blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_PER_PIXEL_ALPHA_COMBINED_GLOBAL_GAIN;
++		blnd_cfg.global_gain = pipe_ctx->plane_state->global_alpha_value;
++	} else if (per_pixel_alpha) {
++		blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_PER_PIXEL_ALPHA;
++	} else {
++		blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_GLOBAL_ALPHA;
++	}
++
+ 	if (pipe_ctx->plane_state->global_alpha)
+ 		blnd_cfg.global_alpha = pipe_ctx->plane_state->global_alpha_value;
+ 	else
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index 79a2b9c785f05..3d778760a3b55 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -2270,14 +2270,18 @@ void dcn20_update_mpcc(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ 				pipe_ctx, &blnd_cfg.black_color);
+ 	}
+ 
+-	if (per_pixel_alpha)
+-		blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_PER_PIXEL_ALPHA;
+-	else
+-		blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_GLOBAL_ALPHA;
+-
+ 	blnd_cfg.overlap_only = false;
+ 	blnd_cfg.global_gain = 0xff;
+ 
++	if (per_pixel_alpha && pipe_ctx->plane_state->global_alpha) {
++		blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_PER_PIXEL_ALPHA_COMBINED_GLOBAL_GAIN;
++		blnd_cfg.global_gain = pipe_ctx->plane_state->global_alpha_value;
++	} else if (per_pixel_alpha) {
++		blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_PER_PIXEL_ALPHA;
++	} else {
++		blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_GLOBAL_ALPHA;
++	}
++
+ 	if (pipe_ctx->plane_state->global_alpha)
+ 		blnd_cfg.global_alpha = pipe_ctx->plane_state->global_alpha_value;
+ 	else
+diff --git a/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c b/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
+index 0fdf7a3e96dea..96e18050a6175 100644
+--- a/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
++++ b/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
+@@ -100,7 +100,8 @@ enum vsc_packet_revision {
+ //PB7 = MD0
+ #define MASK_VTEM_MD0__VRR_EN         0x01
+ #define MASK_VTEM_MD0__M_CONST        0x02
+-#define MASK_VTEM_MD0__RESERVED2      0x0C
++#define MASK_VTEM_MD0__QMS_EN         0x04
++#define MASK_VTEM_MD0__RESERVED2      0x08
+ #define MASK_VTEM_MD0__FVA_FACTOR_M1  0xF0
+ 
+ //MD1
+@@ -109,7 +110,7 @@ enum vsc_packet_revision {
+ //MD2
+ #define MASK_VTEM_MD2__BASE_REFRESH_RATE_98  0x03
+ #define MASK_VTEM_MD2__RB                    0x04
+-#define MASK_VTEM_MD2__RESERVED3             0xF8
++#define MASK_VTEM_MD2__NEXT_TFR              0xF8
+ 
+ //MD3
+ #define MASK_VTEM_MD3__BASE_REFRESH_RATE_07  0xFF
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 9e09805575db4..39563daff4a0b 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1222,7 +1222,7 @@ a6xx_create_private_address_space(struct msm_gpu *gpu)
+ 		return ERR_CAST(mmu);
+ 
+ 	return msm_gem_address_space_create(mmu,
+-		"gpu", 0x100000000ULL, 0x1ffffffffULL);
++		"gpu", 0x100000000ULL, SZ_4G);
+ }
+ 
+ static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+index 1d28dfba2c9bb..fb421ca56b3da 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+@@ -644,7 +644,7 @@ struct drm_connector *msm_dsi_manager_connector_init(u8 id)
+ 	return connector;
+ 
+ fail:
+-	connector->funcs->destroy(msm_dsi->connector);
++	connector->funcs->destroy(connector);
+ 	return ERR_PTR(ret);
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index 819567e40565c..9c05bf6c45510 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -849,6 +849,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
+ 					get_pid_task(aspace->pid, PIDTYPE_PID);
+ 				if (task) {
+ 					comm = kstrdup(task->comm, GFP_KERNEL);
++					put_task_struct(task);
+ 				} else {
+ 					comm = NULL;
+ 				}
+diff --git a/drivers/gpu/ipu-v3/ipu-di.c b/drivers/gpu/ipu-v3/ipu-di.c
+index b4a31d506fccf..74eca68891add 100644
+--- a/drivers/gpu/ipu-v3/ipu-di.c
++++ b/drivers/gpu/ipu-v3/ipu-di.c
+@@ -451,8 +451,9 @@ static void ipu_di_config_clock(struct ipu_di *di,
+ 
+ 		error = rate / (sig->mode.pixelclock / 1000);
+ 
+-		dev_dbg(di->ipu->dev, "  IPU clock can give %lu with divider %u, error %d.%u%%\n",
+-			rate, div, (signed)(error - 1000) / 10, error % 10);
++		dev_dbg(di->ipu->dev, "  IPU clock can give %lu with divider %u, error %c%d.%d%%\n",
++			rate, div, error < 1000 ? '-' : '+',
++			abs(error - 1000) / 10, abs(error - 1000) % 10);
+ 
+ 		/* Allow a 1% error */
+ 		if (error < 1010 && error >= 990) {
+diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
+index 356e22159e834..769851b6e74c5 100644
+--- a/drivers/hv/ring_buffer.c
++++ b/drivers/hv/ring_buffer.c
+@@ -378,7 +378,16 @@ int hv_ringbuffer_read(struct vmbus_channel *channel,
+ static u32 hv_pkt_iter_avail(const struct hv_ring_buffer_info *rbi)
+ {
+ 	u32 priv_read_loc = rbi->priv_read_index;
+-	u32 write_loc = READ_ONCE(rbi->ring_buffer->write_index);
++	u32 write_loc;
++
++	/*
++	 * The Hyper-V host writes the packet data, then uses
++	 * store_release() to update the write_index.  Use load_acquire()
++	 * here to prevent loads of the packet data from being re-ordered
++	 * before the read of the write_index and potentially getting
++	 * stale data.
++	 */
++	write_loc = virt_load_acquire(&rbi->ring_buffer->write_index);
+ 
+ 	if (write_loc >= priv_read_loc)
+ 		return write_loc - priv_read_loc;
+diff --git a/drivers/i2c/busses/i2c-pasemi.c b/drivers/i2c/busses/i2c-pasemi.c
+index 20f2772c0e79b..2c909522f0f38 100644
+--- a/drivers/i2c/busses/i2c-pasemi.c
++++ b/drivers/i2c/busses/i2c-pasemi.c
+@@ -137,6 +137,12 @@ static int pasemi_i2c_xfer_msg(struct i2c_adapter *adapter,
+ 
+ 		TXFIFO_WR(smbus, msg->buf[msg->len-1] |
+ 			  (stop ? MTXFIFO_STOP : 0));
++
++		if (stop) {
++			err = pasemi_smb_waitready(smbus);
++			if (err)
++				goto reset_out;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c
+index 3690e28cc7ea2..a16e066989fa5 100644
+--- a/drivers/infiniband/ulp/iser/iscsi_iser.c
++++ b/drivers/infiniband/ulp/iser/iscsi_iser.c
+@@ -499,6 +499,7 @@ iscsi_iser_conn_bind(struct iscsi_cls_session *cls_session,
+ 	iser_conn->iscsi_conn = conn;
+ 
+ out:
++	iscsi_put_endpoint(ep);
+ 	mutex_unlock(&iser_conn->state_mutex);
+ 	return error;
+ }
+@@ -988,6 +989,7 @@ static struct iscsi_transport iscsi_iser_transport = {
+ 	/* connection management */
+ 	.create_conn            = iscsi_iser_conn_create,
+ 	.bind_conn              = iscsi_iser_conn_bind,
++	.unbind_conn		= iscsi_conn_unbind,
+ 	.destroy_conn           = iscsi_conn_teardown,
+ 	.attr_is_visible	= iser_attr_is_visible,
+ 	.set_param              = iscsi_iser_set_param,
+diff --git a/drivers/md/dm-historical-service-time.c b/drivers/md/dm-historical-service-time.c
+index 186f91e2752c1..06fe43c13ba38 100644
+--- a/drivers/md/dm-historical-service-time.c
++++ b/drivers/md/dm-historical-service-time.c
+@@ -429,7 +429,7 @@ static struct dm_path *hst_select_path(struct path_selector *ps,
+ {
+ 	struct selector *s = ps->context;
+ 	struct path_info *pi = NULL, *best = NULL;
+-	u64 time_now = sched_clock();
++	u64 time_now = ktime_get_ns();
+ 	struct dm_path *ret = NULL;
+ 	unsigned long flags;
+ 
+@@ -470,7 +470,7 @@ static int hst_start_io(struct path_selector *ps, struct dm_path *path,
+ 
+ static u64 path_service_time(struct path_info *pi, u64 start_time)
+ {
+-	u64 sched_now = ktime_get_ns();
++	u64 now = ktime_get_ns();
+ 
+ 	/* if a previous disk request has finished after this IO was
+ 	 * sent to the hardware, pretend the submission happened
+@@ -479,11 +479,11 @@ static u64 path_service_time(struct path_info *pi, u64 start_time)
+ 	if (time_after64(pi->last_finish, start_time))
+ 		start_time = pi->last_finish;
+ 
+-	pi->last_finish = sched_now;
+-	if (time_before64(sched_now, start_time))
++	pi->last_finish = now;
++	if (time_before64(now, start_time))
+ 		return 0;
+ 
+-	return sched_now - start_time;
++	return now - start_time;
+ }
+ 
+ static int hst_end_io(struct path_selector *ps, struct dm_path *path,
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index f7471a2642dd4..6f085e96c3f33 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -4232,6 +4232,7 @@ try_smaller_buffer:
+ 	}
+ 
+ 	if (ic->internal_hash) {
++		size_t recalc_tags_size;
+ 		ic->recalc_wq = alloc_workqueue("dm-integrity-recalc", WQ_MEM_RECLAIM, 1);
+ 		if (!ic->recalc_wq ) {
+ 			ti->error = "Cannot allocate workqueue";
+@@ -4245,8 +4246,10 @@ try_smaller_buffer:
+ 			r = -ENOMEM;
+ 			goto bad;
+ 		}
+-		ic->recalc_tags = kvmalloc_array(RECALC_SECTORS >> ic->sb->log2_sectors_per_block,
+-						 ic->tag_size, GFP_KERNEL);
++		recalc_tags_size = (RECALC_SECTORS >> ic->sb->log2_sectors_per_block) * ic->tag_size;
++		if (crypto_shash_digestsize(ic->internal_hash) > ic->tag_size)
++			recalc_tags_size += crypto_shash_digestsize(ic->internal_hash) - ic->tag_size;
++		ic->recalc_tags = kvmalloc(recalc_tags_size, GFP_KERNEL);
+ 		if (!ic->recalc_tags) {
+ 			ti->error = "Cannot allocate tags for recalculating";
+ 			r = -ENOMEM;
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index 6759091b15e09..d99ea8973b678 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -895,7 +895,7 @@ static int rga_probe(struct platform_device *pdev)
+ 	}
+ 	rga->dst_mmu_pages =
+ 		(unsigned int *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 3);
+-	if (rga->dst_mmu_pages) {
++	if (!rga->dst_mmu_pages) {
+ 		ret = -ENOMEM;
+ 		goto free_src_pages;
+ 	}
+diff --git a/drivers/memory/atmel-ebi.c b/drivers/memory/atmel-ebi.c
+index c267283b01fda..e749dcb3ddea9 100644
+--- a/drivers/memory/atmel-ebi.c
++++ b/drivers/memory/atmel-ebi.c
+@@ -544,20 +544,27 @@ static int atmel_ebi_probe(struct platform_device *pdev)
+ 	smc_np = of_parse_phandle(dev->of_node, "atmel,smc", 0);
+ 
+ 	ebi->smc.regmap = syscon_node_to_regmap(smc_np);
+-	if (IS_ERR(ebi->smc.regmap))
+-		return PTR_ERR(ebi->smc.regmap);
++	if (IS_ERR(ebi->smc.regmap)) {
++		ret = PTR_ERR(ebi->smc.regmap);
++		goto put_node;
++	}
+ 
+ 	ebi->smc.layout = atmel_hsmc_get_reg_layout(smc_np);
+-	if (IS_ERR(ebi->smc.layout))
+-		return PTR_ERR(ebi->smc.layout);
++	if (IS_ERR(ebi->smc.layout)) {
++		ret = PTR_ERR(ebi->smc.layout);
++		goto put_node;
++	}
+ 
+ 	ebi->smc.clk = of_clk_get(smc_np, 0);
+ 	if (IS_ERR(ebi->smc.clk)) {
+-		if (PTR_ERR(ebi->smc.clk) != -ENOENT)
+-			return PTR_ERR(ebi->smc.clk);
++		if (PTR_ERR(ebi->smc.clk) != -ENOENT) {
++			ret = PTR_ERR(ebi->smc.clk);
++			goto put_node;
++		}
+ 
+ 		ebi->smc.clk = NULL;
+ 	}
++	of_node_put(smc_np);
+ 	ret = clk_prepare_enable(ebi->smc.clk);
+ 	if (ret)
+ 		return ret;
+@@ -608,6 +615,10 @@ static int atmel_ebi_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	return of_platform_populate(np, NULL, NULL, dev);
++
++put_node:
++	of_node_put(smc_np);
++	return ret;
+ }
+ 
+ static __maybe_unused int atmel_ebi_resume(struct device *dev)
+diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
+index 9019121a80f53..781af51e3f793 100644
+--- a/drivers/memory/renesas-rpc-if.c
++++ b/drivers/memory/renesas-rpc-if.c
+@@ -592,6 +592,7 @@ static int rpcif_probe(struct platform_device *pdev)
+ 	struct platform_device *vdev;
+ 	struct device_node *flash;
+ 	const char *name;
++	int ret;
+ 
+ 	flash = of_get_next_child(pdev->dev.of_node, NULL);
+ 	if (!flash) {
+@@ -615,7 +616,14 @@ static int rpcif_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 	vdev->dev.parent = &pdev->dev;
+ 	platform_set_drvdata(pdev, vdev);
+-	return platform_device_add(vdev);
++
++	ret = platform_device_add(vdev);
++	if (ret) {
++		platform_device_put(vdev);
++		return ret;
++	}
++
++	return 0;
+ }
+ 
+ static int rpcif_remove(struct platform_device *pdev)
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index cd8d9b0e0edb3..c96dfc11aa6fc 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -1466,7 +1466,7 @@ static int felix_pci_probe(struct pci_dev *pdev,
+ 
+ 	err = dsa_register_switch(ds);
+ 	if (err) {
+-		dev_err(&pdev->dev, "Failed to register DSA switch: %d\n", err);
++		dev_err_probe(&pdev->dev, err, "Failed to register DSA switch\n");
+ 		goto err_register_ds;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 7dcd5613ee56f..a2062144d7ca1 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -76,7 +76,7 @@ static inline void bcmgenet_writel(u32 value, void __iomem *offset)
+ 	if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ 		__raw_writel(value, offset);
+ 	else
+-		writel(value, offset);
++		writel_relaxed(value, offset);
+ }
+ 
+ static inline u32 bcmgenet_readl(void __iomem *offset)
+@@ -84,7 +84,7 @@ static inline u32 bcmgenet_readl(void __iomem *offset)
+ 	if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ 		return __raw_readl(offset);
+ 	else
+-		return readl(offset);
++		return readl_relaxed(offset);
+ }
+ 
+ static inline void dmadesc_set_length_status(struct bcmgenet_priv *priv,
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/i2c.c b/drivers/net/ethernet/mellanox/mlxsw/i2c.c
+index 939b692ffc335..ce843ea914646 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/i2c.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/i2c.c
+@@ -650,6 +650,7 @@ static int mlxsw_i2c_probe(struct i2c_client *client,
+ 	return 0;
+ 
+ errout:
++	mutex_destroy(&mlxsw_i2c->cmd.lock);
+ 	i2c_set_clientdata(client, NULL);
+ 
+ 	return err;
+diff --git a/drivers/net/ethernet/micrel/Kconfig b/drivers/net/ethernet/micrel/Kconfig
+index 42bc014136fe3..9ceb7e1fb1696 100644
+--- a/drivers/net/ethernet/micrel/Kconfig
++++ b/drivers/net/ethernet/micrel/Kconfig
+@@ -37,6 +37,7 @@ config KS8851
+ config KS8851_MLL
+ 	tristate "Micrel KS8851 MLL"
+ 	depends on HAS_IOMEM
++	depends on PTP_1588_CLOCK_OPTIONAL
+ 	select MII
+ 	select CRC32
+ 	select EEPROM_93CX6
+diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+index fc99ad8e4a388..1664e9184c9ca 100644
+--- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
++++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+@@ -2895,11 +2895,9 @@ static netdev_tx_t myri10ge_sw_tso(struct sk_buff *skb,
+ 		status = myri10ge_xmit(curr, dev);
+ 		if (status != 0) {
+ 			dev_kfree_skb_any(curr);
+-			if (segs != NULL) {
+-				curr = segs;
+-				segs = next;
++			skb_list_walk_safe(next, curr, next) {
+ 				curr->next = NULL;
+-				dev_kfree_skb_any(segs);
++				dev_kfree_skb_any(curr);
+ 			}
+ 			goto drop;
+ 		}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.c b/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.c
+index cd478d2cd871a..00f6d347eaf75 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.c
++++ b/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.c
+@@ -57,10 +57,6 @@
+ #define TSE_PCS_USE_SGMII_ENA				BIT(0)
+ #define TSE_PCS_IF_USE_SGMII				0x03
+ 
+-#define SGMII_ADAPTER_CTRL_REG				0x00
+-#define SGMII_ADAPTER_DISABLE				0x0001
+-#define SGMII_ADAPTER_ENABLE				0x0000
+-
+ #define AUTONEGO_LINK_TIMER				20
+ 
+ static int tse_pcs_reset(void __iomem *base, struct tse_pcs *pcs)
+@@ -202,12 +198,8 @@ void tse_pcs_fix_mac_speed(struct tse_pcs *pcs, struct phy_device *phy_dev,
+ 			   unsigned int speed)
+ {
+ 	void __iomem *tse_pcs_base = pcs->tse_pcs_base;
+-	void __iomem *sgmii_adapter_base = pcs->sgmii_adapter_base;
+ 	u32 val;
+ 
+-	writew(SGMII_ADAPTER_ENABLE,
+-	       sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+-
+ 	pcs->autoneg = phy_dev->autoneg;
+ 
+ 	if (phy_dev->autoneg == AUTONEG_ENABLE) {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.h b/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.h
+index 442812c0a4bdc..694ac25ef426b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.h
++++ b/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.h
+@@ -10,6 +10,10 @@
+ #include <linux/phy.h>
+ #include <linux/timer.h>
+ 
++#define SGMII_ADAPTER_CTRL_REG		0x00
++#define SGMII_ADAPTER_ENABLE		0x0000
++#define SGMII_ADAPTER_DISABLE		0x0001
++
+ struct tse_pcs {
+ 	struct device *dev;
+ 	void __iomem *tse_pcs_base;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+index f37b6d57b2fe2..8bb0106cb7ea3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+@@ -18,9 +18,6 @@
+ 
+ #include "altr_tse_pcs.h"
+ 
+-#define SGMII_ADAPTER_CTRL_REG                          0x00
+-#define SGMII_ADAPTER_DISABLE                           0x0001
+-
+ #define SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII 0x0
+ #define SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RGMII 0x1
+ #define SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RMII 0x2
+@@ -62,16 +59,14 @@ static void socfpga_dwmac_fix_mac_speed(void *priv, unsigned int speed)
+ {
+ 	struct socfpga_dwmac *dwmac = (struct socfpga_dwmac *)priv;
+ 	void __iomem *splitter_base = dwmac->splitter_base;
+-	void __iomem *tse_pcs_base = dwmac->pcs.tse_pcs_base;
+ 	void __iomem *sgmii_adapter_base = dwmac->pcs.sgmii_adapter_base;
+ 	struct device *dev = dwmac->dev;
+ 	struct net_device *ndev = dev_get_drvdata(dev);
+ 	struct phy_device *phy_dev = ndev->phydev;
+ 	u32 val;
+ 
+-	if ((tse_pcs_base) && (sgmii_adapter_base))
+-		writew(SGMII_ADAPTER_DISABLE,
+-		       sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
++	writew(SGMII_ADAPTER_DISABLE,
++	       sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+ 
+ 	if (splitter_base) {
+ 		val = readl(splitter_base + EMAC_SPLITTER_CTRL_REG);
+@@ -93,7 +88,9 @@ static void socfpga_dwmac_fix_mac_speed(void *priv, unsigned int speed)
+ 		writel(val, splitter_base + EMAC_SPLITTER_CTRL_REG);
+ 	}
+ 
+-	if (tse_pcs_base && sgmii_adapter_base)
++	writew(SGMII_ADAPTER_ENABLE,
++	       sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
++	if (phy_dev)
+ 		tse_pcs_fix_mac_speed(&dwmac->pcs, phy_dev, speed);
+ }
+ 
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index bbdcba88c021e..3d91baf2e55aa 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -2060,15 +2060,14 @@ static int axienet_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto cleanup_clk;
+ 
+-	lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+-	if (lp->phy_node) {
+-		ret = axienet_mdio_setup(lp);
+-		if (ret)
+-			dev_warn(&pdev->dev,
+-				 "error registering MDIO bus: %d\n", ret);
+-	}
++	ret = axienet_mdio_setup(lp);
++	if (ret)
++		dev_warn(&pdev->dev,
++			 "error registering MDIO bus: %d\n", ret);
++
+ 	if (lp->phy_mode == PHY_INTERFACE_MODE_SGMII ||
+ 	    lp->phy_mode == PHY_INTERFACE_MODE_1000BASEX) {
++		lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+ 		if (!lp->phy_node) {
+ 			dev_err(&pdev->dev, "phy-handle required for 1000BaseX/SGMII\n");
+ 			ret = -EINVAL;
+diff --git a/drivers/net/hamradio/6pack.c b/drivers/net/hamradio/6pack.c
+index 02d6f3ad9aca8..83dc1c2c3b84b 100644
+--- a/drivers/net/hamradio/6pack.c
++++ b/drivers/net/hamradio/6pack.c
+@@ -311,7 +311,6 @@ static void sp_setup(struct net_device *dev)
+ {
+ 	/* Finish setting up the DEVICE info. */
+ 	dev->netdev_ops		= &sp_netdev_ops;
+-	dev->needs_free_netdev	= true;
+ 	dev->mtu		= SIXP_MTU;
+ 	dev->hard_header_len	= AX25_MAX_HEADER_LEN;
+ 	dev->header_ops 	= &ax25_header_ops;
+@@ -679,9 +678,11 @@ static void sixpack_close(struct tty_struct *tty)
+ 	del_timer_sync(&sp->tx_t);
+ 	del_timer_sync(&sp->resync_t);
+ 
+-	/* Free all 6pack frame buffers. */
++	/* Free all 6pack frame buffers after unreg. */
+ 	kfree(sp->rbuff);
+ 	kfree(sp->xbuff);
++
++	free_netdev(sp->dev);
+ }
+ 
+ /* Perform I/O control on an active 6pack channel. */
+diff --git a/drivers/net/mdio/mdio-bcm-unimac.c b/drivers/net/mdio/mdio-bcm-unimac.c
+index fbd36891ee643..5d171e7f118df 100644
+--- a/drivers/net/mdio/mdio-bcm-unimac.c
++++ b/drivers/net/mdio/mdio-bcm-unimac.c
+@@ -5,20 +5,18 @@
+  * Copyright (C) 2014-2017 Broadcom
+  */
+ 
++#include <linux/clk.h>
++#include <linux/delay.h>
++#include <linux/io.h>
+ #include <linux/kernel.h>
+-#include <linux/phy.h>
+-#include <linux/platform_device.h>
+-#include <linux/sched.h>
+ #include <linux/module.h>
+-#include <linux/io.h>
+-#include <linux/delay.h>
+-#include <linux/clk.h>
+-
+ #include <linux/of.h>
+-#include <linux/of_platform.h>
+ #include <linux/of_mdio.h>
+-
++#include <linux/of_platform.h>
++#include <linux/phy.h>
+ #include <linux/platform_data/mdio-bcm-unimac.h>
++#include <linux/platform_device.h>
++#include <linux/sched.h>
+ 
+ #define MDIO_CMD		0x00
+ #define  MDIO_START_BUSY	(1 << 29)
+diff --git a/drivers/net/mdio/mdio-bitbang.c b/drivers/net/mdio/mdio-bitbang.c
+index 5136275c8e739..99588192cc78f 100644
+--- a/drivers/net/mdio/mdio-bitbang.c
++++ b/drivers/net/mdio/mdio-bitbang.c
+@@ -14,10 +14,10 @@
+  * Vitaly Bordug <vbordug@ru.mvista.com>
+  */
+ 
+-#include <linux/module.h>
++#include <linux/delay.h>
+ #include <linux/mdio-bitbang.h>
++#include <linux/module.h>
+ #include <linux/types.h>
+-#include <linux/delay.h>
+ 
+ #define MDIO_READ 2
+ #define MDIO_WRITE 1
+diff --git a/drivers/net/mdio/mdio-cavium.c b/drivers/net/mdio/mdio-cavium.c
+index 1afd6fc1a3517..95ce274c1be14 100644
+--- a/drivers/net/mdio/mdio-cavium.c
++++ b/drivers/net/mdio/mdio-cavium.c
+@@ -4,9 +4,9 @@
+  */
+ 
+ #include <linux/delay.h>
++#include <linux/io.h>
+ #include <linux/module.h>
+ #include <linux/phy.h>
+-#include <linux/io.h>
+ 
+ #include "mdio-cavium.h"
+ 
+diff --git a/drivers/net/mdio/mdio-gpio.c b/drivers/net/mdio/mdio-gpio.c
+index 1b00235d7dc5b..56c8f914f8930 100644
+--- a/drivers/net/mdio/mdio-gpio.c
++++ b/drivers/net/mdio/mdio-gpio.c
+@@ -17,15 +17,15 @@
+  * Vitaly Bordug <vbordug@ru.mvista.com>
+  */
+ 
+-#include <linux/module.h>
+-#include <linux/slab.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/interrupt.h>
+-#include <linux/platform_device.h>
+-#include <linux/platform_data/mdio-gpio.h>
+ #include <linux/mdio-bitbang.h>
+ #include <linux/mdio-gpio.h>
+-#include <linux/gpio/consumer.h>
++#include <linux/module.h>
+ #include <linux/of_mdio.h>
++#include <linux/platform_data/mdio-gpio.h>
++#include <linux/platform_device.h>
++#include <linux/slab.h>
+ 
+ struct mdio_gpio_info {
+ 	struct mdiobb_ctrl ctrl;
+diff --git a/drivers/net/mdio/mdio-ipq4019.c b/drivers/net/mdio/mdio-ipq4019.c
+index 25c25ea6da66f..9cd71d896963d 100644
+--- a/drivers/net/mdio/mdio-ipq4019.c
++++ b/drivers/net/mdio/mdio-ipq4019.c
+@@ -3,10 +3,10 @@
+ /* Copyright (c) 2020 Sartura Ltd. */
+ 
+ #include <linux/delay.h>
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+ #include <linux/io.h>
+ #include <linux/iopoll.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
+ #include <linux/of_address.h>
+ #include <linux/of_mdio.h>
+ #include <linux/phy.h>
+diff --git a/drivers/net/mdio/mdio-ipq8064.c b/drivers/net/mdio/mdio-ipq8064.c
+index f0a6bfa61645e..49d4e9aa30bbf 100644
+--- a/drivers/net/mdio/mdio-ipq8064.c
++++ b/drivers/net/mdio/mdio-ipq8064.c
+@@ -7,12 +7,12 @@
+ 
+ #include <linux/delay.h>
+ #include <linux/kernel.h>
++#include <linux/mfd/syscon.h>
+ #include <linux/module.h>
+-#include <linux/regmap.h>
+ #include <linux/of_mdio.h>
+ #include <linux/of_address.h>
+ #include <linux/platform_device.h>
+-#include <linux/mfd/syscon.h>
++#include <linux/regmap.h>
+ 
+ /* MII address register definitions */
+ #define MII_ADDR_REG_ADDR                       0x10
+diff --git a/drivers/net/mdio/mdio-mscc-miim.c b/drivers/net/mdio/mdio-mscc-miim.c
+index 1c9232fca1e2f..037649bef92ea 100644
+--- a/drivers/net/mdio/mdio-mscc-miim.c
++++ b/drivers/net/mdio/mdio-mscc-miim.c
+@@ -6,14 +6,14 @@
+  * Copyright (c) 2017 Microsemi Corporation
+  */
+ 
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-#include <linux/phy.h>
+-#include <linux/platform_device.h>
+ #include <linux/bitops.h>
+ #include <linux/io.h>
+ #include <linux/iopoll.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
+ #include <linux/of_mdio.h>
++#include <linux/phy.h>
++#include <linux/platform_device.h>
+ 
+ #define MSCC_MIIM_REG_STATUS		0x0
+ #define		MSCC_MIIM_STATUS_STAT_PENDING	BIT(2)
+diff --git a/drivers/net/mdio/mdio-mux-bcm-iproc.c b/drivers/net/mdio/mdio-mux-bcm-iproc.c
+index 42fb5f166136b..641cfa41f492a 100644
+--- a/drivers/net/mdio/mdio-mux-bcm-iproc.c
++++ b/drivers/net/mdio/mdio-mux-bcm-iproc.c
+@@ -3,14 +3,14 @@
+  * Copyright 2016 Broadcom
+  */
+ #include <linux/clk.h>
+-#include <linux/platform_device.h>
++#include <linux/delay.h>
+ #include <linux/device.h>
+-#include <linux/of_mdio.h>
++#include <linux/iopoll.h>
++#include <linux/mdio-mux.h>
+ #include <linux/module.h>
++#include <linux/of_mdio.h>
+ #include <linux/phy.h>
+-#include <linux/mdio-mux.h>
+-#include <linux/delay.h>
+-#include <linux/iopoll.h>
++#include <linux/platform_device.h>
+ 
+ #define MDIO_RATE_ADJ_EXT_OFFSET	0x000
+ #define MDIO_RATE_ADJ_INT_OFFSET	0x004
+diff --git a/drivers/net/mdio/mdio-mux-gpio.c b/drivers/net/mdio/mdio-mux-gpio.c
+index 10a758fdc9e63..3c7f16f06b452 100644
+--- a/drivers/net/mdio/mdio-mux-gpio.c
++++ b/drivers/net/mdio/mdio-mux-gpio.c
+@@ -3,13 +3,13 @@
+  * Copyright (C) 2011, 2012 Cavium, Inc.
+  */
+ 
+-#include <linux/platform_device.h>
+ #include <linux/device.h>
+-#include <linux/of_mdio.h>
++#include <linux/gpio/consumer.h>
++#include <linux/mdio-mux.h>
+ #include <linux/module.h>
++#include <linux/of_mdio.h>
+ #include <linux/phy.h>
+-#include <linux/mdio-mux.h>
+-#include <linux/gpio/consumer.h>
++#include <linux/platform_device.h>
+ 
+ #define DRV_VERSION "1.1"
+ #define DRV_DESCRIPTION "GPIO controlled MDIO bus multiplexer driver"
+diff --git a/drivers/net/mdio/mdio-mux-mmioreg.c b/drivers/net/mdio/mdio-mux-mmioreg.c
+index d1a8780e24d88..c02fb2a067eef 100644
+--- a/drivers/net/mdio/mdio-mux-mmioreg.c
++++ b/drivers/net/mdio/mdio-mux-mmioreg.c
+@@ -7,13 +7,13 @@
+  * Copyright 2012 Freescale Semiconductor, Inc.
+  */
+ 
+-#include <linux/platform_device.h>
+ #include <linux/device.h>
++#include <linux/mdio-mux.h>
++#include <linux/module.h>
+ #include <linux/of_address.h>
+ #include <linux/of_mdio.h>
+-#include <linux/module.h>
+ #include <linux/phy.h>
+-#include <linux/mdio-mux.h>
++#include <linux/platform_device.h>
+ 
+ struct mdio_mux_mmioreg_state {
+ 	void *mux_handle;
+diff --git a/drivers/net/mdio/mdio-mux-multiplexer.c b/drivers/net/mdio/mdio-mux-multiplexer.c
+index d6564381aa3e4..527acfc3c045a 100644
+--- a/drivers/net/mdio/mdio-mux-multiplexer.c
++++ b/drivers/net/mdio/mdio-mux-multiplexer.c
+@@ -4,10 +4,10 @@
+  * Copyright 2019 NXP
+  */
+ 
+-#include <linux/platform_device.h>
+ #include <linux/mdio-mux.h>
+ #include <linux/module.h>
+ #include <linux/mux/consumer.h>
++#include <linux/platform_device.h>
+ 
+ struct mdio_mux_multiplexer_state {
+ 	struct mux_control *muxc;
+diff --git a/drivers/net/mdio/mdio-mux.c b/drivers/net/mdio/mdio-mux.c
+index ccb3ee704eb1c..3dde0c2b3e097 100644
+--- a/drivers/net/mdio/mdio-mux.c
++++ b/drivers/net/mdio/mdio-mux.c
+@@ -3,12 +3,12 @@
+  * Copyright (C) 2011, 2012 Cavium, Inc.
+  */
+ 
+-#include <linux/platform_device.h>
+-#include <linux/mdio-mux.h>
+-#include <linux/of_mdio.h>
+ #include <linux/device.h>
++#include <linux/mdio-mux.h>
+ #include <linux/module.h>
++#include <linux/of_mdio.h>
+ #include <linux/phy.h>
++#include <linux/platform_device.h>
+ 
+ #define DRV_DESCRIPTION "MDIO bus multiplexer driver"
+ 
+diff --git a/drivers/net/mdio/mdio-octeon.c b/drivers/net/mdio/mdio-octeon.c
+index 6faf39314ac93..e096e68ac667b 100644
+--- a/drivers/net/mdio/mdio-octeon.c
++++ b/drivers/net/mdio/mdio-octeon.c
+@@ -3,13 +3,13 @@
+  * Copyright (C) 2009-2015 Cavium, Inc.
+  */
+ 
+-#include <linux/platform_device.h>
++#include <linux/gfp.h>
++#include <linux/io.h>
++#include <linux/module.h>
+ #include <linux/of_address.h>
+ #include <linux/of_mdio.h>
+-#include <linux/module.h>
+-#include <linux/gfp.h>
+ #include <linux/phy.h>
+-#include <linux/io.h>
++#include <linux/platform_device.h>
+ 
+ #include "mdio-cavium.h"
+ 
+diff --git a/drivers/net/mdio/mdio-thunder.c b/drivers/net/mdio/mdio-thunder.c
+index dd7430c998a2a..822d2cdd2f359 100644
+--- a/drivers/net/mdio/mdio-thunder.c
++++ b/drivers/net/mdio/mdio-thunder.c
+@@ -3,14 +3,14 @@
+  * Copyright (C) 2009-2016 Cavium, Inc.
+  */
+ 
+-#include <linux/of_address.h>
+-#include <linux/of_mdio.h>
+-#include <linux/module.h>
++#include <linux/acpi.h>
+ #include <linux/gfp.h>
+-#include <linux/phy.h>
+ #include <linux/io.h>
+-#include <linux/acpi.h>
++#include <linux/module.h>
++#include <linux/of_address.h>
++#include <linux/of_mdio.h>
+ #include <linux/pci.h>
++#include <linux/phy.h>
+ 
+ #include "mdio-cavium.h"
+ 
+diff --git a/drivers/net/mdio/mdio-xgene.c b/drivers/net/mdio/mdio-xgene.c
+index 461207cdf5d6e..7ab4e26db08c2 100644
+--- a/drivers/net/mdio/mdio-xgene.c
++++ b/drivers/net/mdio/mdio-xgene.c
+@@ -13,11 +13,11 @@
+ #include <linux/io.h>
+ #include <linux/mdio/mdio-xgene.h>
+ #include <linux/module.h>
+-#include <linux/of_platform.h>
+-#include <linux/of_net.h>
+ #include <linux/of_mdio.h>
+-#include <linux/prefetch.h>
++#include <linux/of_net.h>
++#include <linux/of_platform.h>
+ #include <linux/phy.h>
++#include <linux/prefetch.h>
+ #include <net/ip.h>
+ 
+ static bool xgene_mdio_status;
+diff --git a/drivers/net/mdio/of_mdio.c b/drivers/net/mdio/of_mdio.c
+index 4daf94bb56a53..ea0bf13e8ac3f 100644
+--- a/drivers/net/mdio/of_mdio.c
++++ b/drivers/net/mdio/of_mdio.c
+@@ -8,17 +8,17 @@
+  * out of the OpenFirmware device tree and using it to populate an mii_bus.
+  */
+ 
+-#include <linux/kernel.h>
+ #include <linux/device.h>
+-#include <linux/netdevice.h>
+ #include <linux/err.h>
+-#include <linux/phy.h>
+-#include <linux/phy_fixed.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/netdevice.h>
+ #include <linux/of.h>
+ #include <linux/of_irq.h>
+ #include <linux/of_mdio.h>
+ #include <linux/of_net.h>
+-#include <linux/module.h>
++#include <linux/phy.h>
++#include <linux/phy_fixed.h>
+ 
+ #define DEFAULT_GPIO_RESET_DELAY	10	/* in microseconds */
+ 
+diff --git a/drivers/net/slip/slip.c b/drivers/net/slip/slip.c
+index f81fb0b13a944..369bd30fed35f 100644
+--- a/drivers/net/slip/slip.c
++++ b/drivers/net/slip/slip.c
+@@ -468,7 +468,7 @@ static void sl_tx_timeout(struct net_device *dev, unsigned int txqueue)
+ 	spin_lock(&sl->lock);
+ 
+ 	if (netif_queue_stopped(dev)) {
+-		if (!netif_running(dev))
++		if (!netif_running(dev) || !sl->tty)
+ 			goto out;
+ 
+ 		/* May be we must check transmitter timeout here ?
+diff --git a/drivers/net/usb/aqc111.c b/drivers/net/usb/aqc111.c
+index 0717c18015c9c..c9c4095181744 100644
+--- a/drivers/net/usb/aqc111.c
++++ b/drivers/net/usb/aqc111.c
+@@ -1102,10 +1102,15 @@ static int aqc111_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 	if (start_of_descs != desc_offset)
+ 		goto err;
+ 
+-	/* self check desc_offset from header*/
+-	if (desc_offset >= skb_len)
++	/* self check desc_offset from header and make sure that the
++	 * bounds of the metadata array are inside the SKB
++	 */
++	if (pkt_count * 2 + desc_offset >= skb_len)
+ 		goto err;
+ 
++	/* Packets must not overlap the metadata array */
++	skb_trim(skb, desc_offset);
++
+ 	if (pkt_count == 0)
+ 		goto err;
+ 
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index f7e3eb309a26e..5be8ed9105535 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -292,7 +292,7 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	rcu_read_lock();
+ 	rcv = rcu_dereference(priv->peer);
+-	if (unlikely(!rcv)) {
++	if (unlikely(!rcv) || !pskb_may_pull(skb, ETH_HLEN)) {
+ 		kfree_skb(skb);
+ 		goto drop;
+ 	}
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index af367696fd92f..ac354dfc50559 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -839,7 +839,7 @@ static bool ath9k_txq_list_has_key(struct list_head *txq_list, u32 keyix)
+ 			continue;
+ 
+ 		txinfo = IEEE80211_SKB_CB(bf->bf_mpdu);
+-		fi = (struct ath_frame_info *)&txinfo->rate_driver_data[0];
++		fi = (struct ath_frame_info *)&txinfo->status.status_driver_data[0];
+ 		if (fi->keyix == keyix)
+ 			return true;
+ 	}
+diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
+index 5691bd6eb82c2..6555abf02f18b 100644
+--- a/drivers/net/wireless/ath/ath9k/xmit.c
++++ b/drivers/net/wireless/ath/ath9k/xmit.c
+@@ -141,8 +141,8 @@ static struct ath_frame_info *get_frame_info(struct sk_buff *skb)
+ {
+ 	struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ 	BUILD_BUG_ON(sizeof(struct ath_frame_info) >
+-		     sizeof(tx_info->rate_driver_data));
+-	return (struct ath_frame_info *) &tx_info->rate_driver_data[0];
++		     sizeof(tx_info->status.status_driver_data));
++	return (struct ath_frame_info *) &tx_info->status.status_driver_data[0];
+ }
+ 
+ static void ath_send_bar(struct ath_atx_tid *tid, u16 seqno)
+@@ -2501,6 +2501,16 @@ skip_tx_complete:
+ 	spin_unlock_irqrestore(&sc->tx.txbuflock, flags);
+ }
+ 
++static void ath_clear_tx_status(struct ieee80211_tx_info *tx_info)
++{
++	void *ptr = &tx_info->status;
++
++	memset(ptr + sizeof(tx_info->status.rates), 0,
++	       sizeof(tx_info->status) -
++	       sizeof(tx_info->status.rates) -
++	       sizeof(tx_info->status.status_driver_data));
++}
++
+ static void ath_tx_rc_status(struct ath_softc *sc, struct ath_buf *bf,
+ 			     struct ath_tx_status *ts, int nframes, int nbad,
+ 			     int txok)
+@@ -2512,6 +2522,8 @@ static void ath_tx_rc_status(struct ath_softc *sc, struct ath_buf *bf,
+ 	struct ath_hw *ah = sc->sc_ah;
+ 	u8 i, tx_rateindex;
+ 
++	ath_clear_tx_status(tx_info);
++
+ 	if (txok)
+ 		tx_info->status.ack_signal = ts->ts_rssi;
+ 
+@@ -2526,6 +2538,13 @@ static void ath_tx_rc_status(struct ath_softc *sc, struct ath_buf *bf,
+ 	tx_info->status.ampdu_len = nframes;
+ 	tx_info->status.ampdu_ack_len = nframes - nbad;
+ 
++	tx_info->status.rates[tx_rateindex].count = ts->ts_longretry + 1;
++
++	for (i = tx_rateindex + 1; i < hw->max_rates; i++) {
++		tx_info->status.rates[i].count = 0;
++		tx_info->status.rates[i].idx = -1;
++	}
++
+ 	if ((ts->ts_status & ATH9K_TXERR_FILT) == 0 &&
+ 	    (tx_info->flags & IEEE80211_TX_CTL_NO_ACK) == 0) {
+ 		/*
+@@ -2547,16 +2566,6 @@ static void ath_tx_rc_status(struct ath_softc *sc, struct ath_buf *bf,
+ 			tx_info->status.rates[tx_rateindex].count =
+ 				hw->max_rate_tries;
+ 	}
+-
+-	for (i = tx_rateindex + 1; i < hw->max_rates; i++) {
+-		tx_info->status.rates[i].count = 0;
+-		tx_info->status.rates[i].idx = -1;
+-	}
+-
+-	tx_info->status.rates[tx_rateindex].count = ts->ts_longretry + 1;
+-
+-	/* we report airtime in ath_tx_count_airtime(), don't report twice */
+-	tx_info->status.tx_time = 0;
+ }
+ 
+ static void ath_tx_processq(struct ath_softc *sc, struct ath_txq *txq)
+diff --git a/drivers/perf/fsl_imx8_ddr_perf.c b/drivers/perf/fsl_imx8_ddr_perf.c
+index 7f7bc0993670f..e09bbf3890c49 100644
+--- a/drivers/perf/fsl_imx8_ddr_perf.c
++++ b/drivers/perf/fsl_imx8_ddr_perf.c
+@@ -29,7 +29,7 @@
+ #define CNTL_OVER_MASK		0xFFFFFFFE
+ 
+ #define CNTL_CSV_SHIFT		24
+-#define CNTL_CSV_MASK		(0xFF << CNTL_CSV_SHIFT)
++#define CNTL_CSV_MASK		(0xFFU << CNTL_CSV_SHIFT)
+ 
+ #define EVENT_CYCLES_ID		0
+ #define EVENT_CYCLES_COUNTER	0
+diff --git a/drivers/regulator/wm8994-regulator.c b/drivers/regulator/wm8994-regulator.c
+index cadea0344486f..40befdd9dfa92 100644
+--- a/drivers/regulator/wm8994-regulator.c
++++ b/drivers/regulator/wm8994-regulator.c
+@@ -71,6 +71,35 @@ static const struct regulator_ops wm8994_ldo2_ops = {
+ };
+ 
+ static const struct regulator_desc wm8994_ldo_desc[] = {
++	{
++		.name = "LDO1",
++		.id = 1,
++		.type = REGULATOR_VOLTAGE,
++		.n_voltages = WM8994_LDO1_MAX_SELECTOR + 1,
++		.vsel_reg = WM8994_LDO_1,
++		.vsel_mask = WM8994_LDO1_VSEL_MASK,
++		.ops = &wm8994_ldo1_ops,
++		.min_uV = 2400000,
++		.uV_step = 100000,
++		.enable_time = 3000,
++		.off_on_delay = 36000,
++		.owner = THIS_MODULE,
++	},
++	{
++		.name = "LDO2",
++		.id = 2,
++		.type = REGULATOR_VOLTAGE,
++		.n_voltages = WM8994_LDO2_MAX_SELECTOR + 1,
++		.vsel_reg = WM8994_LDO_2,
++		.vsel_mask = WM8994_LDO2_VSEL_MASK,
++		.ops = &wm8994_ldo2_ops,
++		.enable_time = 3000,
++		.off_on_delay = 36000,
++		.owner = THIS_MODULE,
++	},
++};
++
++static const struct regulator_desc wm8958_ldo_desc[] = {
+ 	{
+ 		.name = "LDO1",
+ 		.id = 1,
+@@ -172,9 +201,16 @@ static int wm8994_ldo_probe(struct platform_device *pdev)
+ 	 * regulator core and we need not worry about it on the
+ 	 * error path.
+ 	 */
+-	ldo->regulator = devm_regulator_register(&pdev->dev,
+-						 &wm8994_ldo_desc[id],
+-						 &config);
++	if (ldo->wm8994->type == WM8994) {
++		ldo->regulator = devm_regulator_register(&pdev->dev,
++							 &wm8994_ldo_desc[id],
++							 &config);
++	} else {
++		ldo->regulator = devm_regulator_register(&pdev->dev,
++							 &wm8958_ldo_desc[id],
++							 &config);
++	}
++
+ 	if (IS_ERR(ldo->regulator)) {
+ 		ret = PTR_ERR(ldo->regulator);
+ 		dev_err(wm8994->dev, "Failed to register LDO%d: %d\n",
+diff --git a/drivers/scsi/be2iscsi/be_iscsi.c b/drivers/scsi/be2iscsi/be_iscsi.c
+index a13c203ef7a9a..c4881657a807b 100644
+--- a/drivers/scsi/be2iscsi/be_iscsi.c
++++ b/drivers/scsi/be2iscsi/be_iscsi.c
+@@ -182,6 +182,7 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
+ 	struct beiscsi_endpoint *beiscsi_ep;
+ 	struct iscsi_endpoint *ep;
+ 	uint16_t cri_index;
++	int rc = 0;
+ 
+ 	ep = iscsi_lookup_endpoint(transport_fd);
+ 	if (!ep)
+@@ -189,15 +190,17 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
+ 
+ 	beiscsi_ep = ep->dd_data;
+ 
+-	if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
+-		return -EINVAL;
++	if (iscsi_conn_bind(cls_session, cls_conn, is_leading)) {
++		rc = -EINVAL;
++		goto put_ep;
++	}
+ 
+ 	if (beiscsi_ep->phba != phba) {
+ 		beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
+ 			    "BS_%d : beiscsi_ep->hba=%p not equal to phba=%p\n",
+ 			    beiscsi_ep->phba, phba);
+-
+-		return -EEXIST;
++		rc = -EEXIST;
++		goto put_ep;
+ 	}
+ 	cri_index = BE_GET_CRI_FROM_CID(beiscsi_ep->ep_cid);
+ 	if (phba->conn_table[cri_index]) {
+@@ -209,7 +212,8 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
+ 				      beiscsi_ep->ep_cid,
+ 				      beiscsi_conn,
+ 				      phba->conn_table[cri_index]);
+-			return -EINVAL;
++			rc = -EINVAL;
++			goto put_ep;
+ 		}
+ 	}
+ 
+@@ -226,7 +230,10 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
+ 		    "BS_%d : cid %d phba->conn_table[%u]=%p\n",
+ 		    beiscsi_ep->ep_cid, cri_index, beiscsi_conn);
+ 	phba->conn_table[cri_index] = beiscsi_conn;
+-	return 0;
++
++put_ep:
++	iscsi_put_endpoint(ep);
++	return rc;
+ }
+ 
+ static int beiscsi_iface_create_ipv4(struct beiscsi_hba *phba)
+diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
+index 987dc8135a9b4..b977e039bb789 100644
+--- a/drivers/scsi/be2iscsi/be_main.c
++++ b/drivers/scsi/be2iscsi/be_main.c
+@@ -5810,6 +5810,7 @@ struct iscsi_transport beiscsi_iscsi_transport = {
+ 	.destroy_session = beiscsi_session_destroy,
+ 	.create_conn = beiscsi_conn_create,
+ 	.bind_conn = beiscsi_conn_bind,
++	.unbind_conn = iscsi_conn_unbind,
+ 	.destroy_conn = iscsi_conn_teardown,
+ 	.attr_is_visible = beiscsi_attr_is_visible,
+ 	.set_iface_param = beiscsi_iface_set_param,
+diff --git a/drivers/scsi/bnx2i/bnx2i_iscsi.c b/drivers/scsi/bnx2i/bnx2i_iscsi.c
+index 21efc73b87bee..649664dc6da44 100644
+--- a/drivers/scsi/bnx2i/bnx2i_iscsi.c
++++ b/drivers/scsi/bnx2i/bnx2i_iscsi.c
+@@ -1422,17 +1422,23 @@ static int bnx2i_conn_bind(struct iscsi_cls_session *cls_session,
+ 	 * Forcefully terminate all in progress connection recovery at the
+ 	 * earliest, either in bind(), send_pdu(LOGIN), or conn_start()
+ 	 */
+-	if (bnx2i_adapter_ready(hba))
+-		return -EIO;
++	if (bnx2i_adapter_ready(hba)) {
++		ret_code = -EIO;
++		goto put_ep;
++	}
+ 
+ 	bnx2i_ep = ep->dd_data;
+ 	if ((bnx2i_ep->state == EP_STATE_TCP_FIN_RCVD) ||
+-	    (bnx2i_ep->state == EP_STATE_TCP_RST_RCVD))
++	    (bnx2i_ep->state == EP_STATE_TCP_RST_RCVD)) {
+ 		/* Peer disconnect via' FIN or RST */
+-		return -EINVAL;
++		ret_code = -EINVAL;
++		goto put_ep;
++	}
+ 
+-	if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
+-		return -EINVAL;
++	if (iscsi_conn_bind(cls_session, cls_conn, is_leading)) {
++		ret_code = -EINVAL;
++		goto put_ep;
++	}
+ 
+ 	if (bnx2i_ep->hba != hba) {
+ 		/* Error - TCP connection does not belong to this device
+@@ -1443,7 +1449,8 @@ static int bnx2i_conn_bind(struct iscsi_cls_session *cls_session,
+ 		iscsi_conn_printk(KERN_ALERT, cls_conn->dd_data,
+ 				  "belong to hba (%s)\n",
+ 				  hba->netdev->name);
+-		return -EEXIST;
++		ret_code = -EEXIST;
++		goto put_ep;
+ 	}
+ 	bnx2i_ep->conn = bnx2i_conn;
+ 	bnx2i_conn->ep = bnx2i_ep;
+@@ -1460,6 +1467,8 @@ static int bnx2i_conn_bind(struct iscsi_cls_session *cls_session,
+ 		bnx2i_put_rq_buf(bnx2i_conn, 0);
+ 
+ 	bnx2i_arm_cq_event_coalescing(bnx2i_conn->ep, CNIC_ARM_CQE);
++put_ep:
++	iscsi_put_endpoint(ep);
+ 	return ret_code;
+ }
+ 
+@@ -2278,6 +2287,7 @@ struct iscsi_transport bnx2i_iscsi_transport = {
+ 	.destroy_session	= bnx2i_session_destroy,
+ 	.create_conn		= bnx2i_conn_create,
+ 	.bind_conn		= bnx2i_conn_bind,
++	.unbind_conn		= iscsi_conn_unbind,
+ 	.destroy_conn		= bnx2i_conn_destroy,
+ 	.attr_is_visible	= bnx2i_attr_is_visible,
+ 	.set_param		= iscsi_set_param,
+diff --git a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
+index 37d99357120fa..edcd3fab6973c 100644
+--- a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
++++ b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
+@@ -117,6 +117,7 @@ static struct iscsi_transport cxgb3i_iscsi_transport = {
+ 	/* connection management */
+ 	.create_conn	= cxgbi_create_conn,
+ 	.bind_conn	= cxgbi_bind_conn,
++	.unbind_conn	= iscsi_conn_unbind,
+ 	.destroy_conn	= iscsi_tcp_conn_teardown,
+ 	.start_conn	= iscsi_conn_start,
+ 	.stop_conn	= iscsi_conn_stop,
+diff --git a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
+index 2c3491528d424..efb3e2b3398e2 100644
+--- a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
++++ b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
+@@ -134,6 +134,7 @@ static struct iscsi_transport cxgb4i_iscsi_transport = {
+ 	/* connection management */
+ 	.create_conn	= cxgbi_create_conn,
+ 	.bind_conn		= cxgbi_bind_conn,
++	.unbind_conn	= iscsi_conn_unbind,
+ 	.destroy_conn	= iscsi_tcp_conn_teardown,
+ 	.start_conn		= iscsi_conn_start,
+ 	.stop_conn		= iscsi_conn_stop,
+diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
+index ecb134b4699f2..506b561670af0 100644
+--- a/drivers/scsi/cxgbi/libcxgbi.c
++++ b/drivers/scsi/cxgbi/libcxgbi.c
+@@ -2690,11 +2690,13 @@ int cxgbi_bind_conn(struct iscsi_cls_session *cls_session,
+ 	err = csk->cdev->csk_ddp_setup_pgidx(csk, csk->tid,
+ 					     ppm->tformat.pgsz_idx_dflt);
+ 	if (err < 0)
+-		return err;
++		goto put_ep;
+ 
+ 	err = iscsi_conn_bind(cls_session, cls_conn, is_leading);
+-	if (err)
+-		return -EINVAL;
++	if (err) {
++		err = -EINVAL;
++		goto put_ep;
++	}
+ 
+ 	/*  calculate the tag idx bits needed for this conn based on cmds_max */
+ 	cconn->task_idx_bits = (__ilog2_u32(conn->session->cmds_max - 1)) + 1;
+@@ -2715,7 +2717,9 @@ int cxgbi_bind_conn(struct iscsi_cls_session *cls_session,
+ 	/*  init recv engine */
+ 	iscsi_tcp_hdr_recv_prep(tcp_conn);
+ 
+-	return 0;
++put_ep:
++	iscsi_put_endpoint(ep);
++	return err;
+ }
+ EXPORT_SYMBOL_GPL(cxgbi_bind_conn);
+ 
+diff --git a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
+index cc3908c2d2f94..a3431485def8f 100644
+--- a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
++++ b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
+@@ -35,7 +35,7 @@
+ 
+ #define IBMVSCSIS_VERSION	"v0.2"
+ 
+-#define	INITIAL_SRP_LIMIT	800
++#define	INITIAL_SRP_LIMIT	1024
+ #define	DEFAULT_MAX_SECTORS	256
+ #define MAX_TXU			1024 * 1024
+ 
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index d4e66c595eb87..05799b41974d5 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -1367,23 +1367,32 @@ void iscsi_session_failure(struct iscsi_session *session,
+ }
+ EXPORT_SYMBOL_GPL(iscsi_session_failure);
+ 
+-void iscsi_conn_failure(struct iscsi_conn *conn, enum iscsi_err err)
++static bool iscsi_set_conn_failed(struct iscsi_conn *conn)
+ {
+ 	struct iscsi_session *session = conn->session;
+ 
+-	spin_lock_bh(&session->frwd_lock);
+-	if (session->state == ISCSI_STATE_FAILED) {
+-		spin_unlock_bh(&session->frwd_lock);
+-		return;
+-	}
++	if (session->state == ISCSI_STATE_FAILED)
++		return false;
+ 
+ 	if (conn->stop_stage == 0)
+ 		session->state = ISCSI_STATE_FAILED;
+-	spin_unlock_bh(&session->frwd_lock);
+ 
+ 	set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
+ 	set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_rx);
+-	iscsi_conn_error_event(conn->cls_conn, err);
++	return true;
++}
++
++void iscsi_conn_failure(struct iscsi_conn *conn, enum iscsi_err err)
++{
++	struct iscsi_session *session = conn->session;
++	bool needs_evt;
++
++	spin_lock_bh(&session->frwd_lock);
++	needs_evt = iscsi_set_conn_failed(conn);
++	spin_unlock_bh(&session->frwd_lock);
++
++	if (needs_evt)
++		iscsi_conn_error_event(conn->cls_conn, err);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_conn_failure);
+ 
+@@ -2117,6 +2126,51 @@ done:
+ 	spin_unlock(&session->frwd_lock);
+ }
+ 
++/**
++ * iscsi_conn_unbind - prevent queueing to conn.
++ * @cls_conn: iscsi conn ep is bound to.
++ * @is_active: is the conn in use for boot or is this for EH/termination
++ *
++ * This must be called by drivers implementing the ep_disconnect callout.
++ * It disables queueing to the connection from libiscsi in preparation for
++ * an ep_disconnect call.
++ */
++void iscsi_conn_unbind(struct iscsi_cls_conn *cls_conn, bool is_active)
++{
++	struct iscsi_session *session;
++	struct iscsi_conn *conn;
++
++	if (!cls_conn)
++		return;
++
++	conn = cls_conn->dd_data;
++	session = conn->session;
++	/*
++	 * Wait for iscsi_eh calls to exit. We don't wait for the tmf to
++	 * complete or timeout. The caller just wants to know what's running
++	 * is everything that needs to be cleaned up, and no cmds will be
++	 * queued.
++	 */
++	mutex_lock(&session->eh_mutex);
++
++	iscsi_suspend_queue(conn);
++	iscsi_suspend_tx(conn);
++
++	spin_lock_bh(&session->frwd_lock);
++	if (!is_active) {
++		/*
++		 * if logout timed out before userspace could even send a PDU
++		 * the state might still be in ISCSI_STATE_LOGGED_IN and
++		 * allowing new cmds and TMFs.
++		 */
++		if (session->state == ISCSI_STATE_LOGGED_IN)
++			iscsi_set_conn_failed(conn);
++	}
++	spin_unlock_bh(&session->frwd_lock);
++	mutex_unlock(&session->eh_mutex);
++}
++EXPORT_SYMBOL_GPL(iscsi_conn_unbind);
++
+ static void iscsi_prep_abort_task_pdu(struct iscsi_task *task,
+ 				      struct iscsi_tm *hdr)
+ {
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 1149bfc42fe64..134e4ee5dc481 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -13614,6 +13614,8 @@ lpfc_io_slot_reset_s4(struct pci_dev *pdev)
+ 	psli->sli_flag &= ~LPFC_SLI_ACTIVE;
+ 	spin_unlock_irq(&phba->hbalock);
+ 
++	/* Init cpu_map array */
++	lpfc_cpu_map_array_init(phba);
+ 	/* Configure and enable interrupt */
+ 	intr_mode = lpfc_sli4_enable_intr(phba, phba->intr_mode);
+ 	if (intr_mode == LPFC_INTR_ERROR) {
+diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
+index 6b8ec57e8bdfa..c088a848776ef 100644
+--- a/drivers/scsi/megaraid/megaraid_sas.h
++++ b/drivers/scsi/megaraid/megaraid_sas.h
+@@ -2554,6 +2554,9 @@ struct megasas_instance_template {
+ #define MEGASAS_IS_LOGICAL(sdev)					\
+ 	((sdev->channel < MEGASAS_MAX_PD_CHANNELS) ? 0 : 1)
+ 
++#define MEGASAS_IS_LUN_VALID(sdev)					\
++	(((sdev)->lun == 0) ? 1 : 0)
++
+ #define MEGASAS_DEV_INDEX(scp)						\
+ 	(((scp->device->channel % 2) * MEGASAS_MAX_DEV_PER_CHANNEL) +	\
+ 	scp->device->id)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 1a70cc995c28c..84a2e9292fd03 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -2111,6 +2111,9 @@ static int megasas_slave_alloc(struct scsi_device *sdev)
+ 			goto scan_target;
+ 		}
+ 		return -ENXIO;
++	} else if (!MEGASAS_IS_LUN_VALID(sdev)) {
++		sdev_printk(KERN_INFO, sdev, "%s: invalid LUN\n", __func__);
++		return -ENXIO;
+ 	}
+ 
+ scan_target:
+@@ -2141,6 +2144,10 @@ static void megasas_slave_destroy(struct scsi_device *sdev)
+ 	instance = megasas_lookup_instance(sdev->host->host_no);
+ 
+ 	if (MEGASAS_IS_LOGICAL(sdev)) {
++		if (!MEGASAS_IS_LUN_VALID(sdev)) {
++			sdev_printk(KERN_INFO, sdev, "%s: invalid LUN\n", __func__);
++			return;
++		}
+ 		ld_tgt_id = MEGASAS_TARGET_ID(sdev);
+ 		instance->ld_tgtid_status[ld_tgt_id] = LD_TARGET_ID_DELETED;
+ 		if (megasas_dbg_lvl & LD_PD_DEBUG)
+diff --git a/drivers/scsi/mvsas/mv_init.c b/drivers/scsi/mvsas/mv_init.c
+index 0cfea7b2ab13a..85ca8421fb862 100644
+--- a/drivers/scsi/mvsas/mv_init.c
++++ b/drivers/scsi/mvsas/mv_init.c
+@@ -646,6 +646,7 @@ static struct pci_device_id mvs_pci_table[] = {
+ 	{ PCI_VDEVICE(ARECA, PCI_DEVICE_ID_ARECA_1300), chip_1300 },
+ 	{ PCI_VDEVICE(ARECA, PCI_DEVICE_ID_ARECA_1320), chip_1320 },
+ 	{ PCI_VDEVICE(ADAPTEC2, 0x0450), chip_6440 },
++	{ PCI_VDEVICE(TTI, 0x2640), chip_6440 },
+ 	{ PCI_VDEVICE(TTI, 0x2710), chip_9480 },
+ 	{ PCI_VDEVICE(TTI, 0x2720), chip_9480 },
+ 	{ PCI_VDEVICE(TTI, 0x2721), chip_9480 },
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 4c03bf08b543c..0305c8999ba5d 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -765,6 +765,10 @@ static void init_default_table_values(struct pm8001_hba_info *pm8001_ha)
+ 	pm8001_ha->main_cfg_tbl.pm80xx_tbl.pcs_event_log_severity	= 0x01;
+ 	pm8001_ha->main_cfg_tbl.pm80xx_tbl.fatal_err_interrupt		= 0x01;
+ 
++	/* Enable higher IQs and OQs, 32 to 63, bit 16 */
++	if (pm8001_ha->max_q_num > 32)
++		pm8001_ha->main_cfg_tbl.pm80xx_tbl.fatal_err_interrupt |=
++							1 << 16;
+ 	/* Disable end to end CRC checking */
+ 	pm8001_ha->main_cfg_tbl.pm80xx_tbl.crc_core_dump = (0x1 << 16);
+ 
+@@ -1024,6 +1028,13 @@ static int mpi_init_check(struct pm8001_hba_info *pm8001_ha)
+ 	if (0x0000 != gst_len_mpistate)
+ 		return -EBUSY;
+ 
++	/*
++	 *  As per controller datasheet, after successful MPI
++	 *  initialization minimum 500ms delay is required before
++	 *  issuing commands.
++	 */
++	msleep(500);
++
+ 	return 0;
+ }
+ 
+@@ -1682,10 +1693,11 @@ static void
+ pm80xx_chip_interrupt_enable(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ {
+ #ifdef PM8001_USE_MSIX
+-	u32 mask;
+-	mask = (u32)(1 << vec);
+-
+-	pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR, (u32)(mask & 0xFFFFFFFF));
++	if (vec < 32)
++		pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR, 1U << vec);
++	else
++		pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR_U,
++			    1U << (vec - 32));
+ 	return;
+ #endif
+ 	pm80xx_chip_intx_interrupt_enable(pm8001_ha);
+@@ -1701,12 +1713,15 @@ static void
+ pm80xx_chip_interrupt_disable(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ {
+ #ifdef PM8001_USE_MSIX
+-	u32 mask;
+-	if (vec == 0xFF)
+-		mask = 0xFFFFFFFF;
++	if (vec == 0xFF) {
++		/* disable all vectors 0-31, 32-63 */
++		pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, 0xFFFFFFFF);
++		pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_U, 0xFFFFFFFF);
++	} else if (vec < 32)
++		pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, 1U << vec);
+ 	else
+-		mask = (u32)(1 << vec);
+-	pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, (u32)(mask & 0xFFFFFFFF));
++		pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_U,
++			    1U << (vec - 32));
+ 	return;
+ #endif
+ 	pm80xx_chip_intx_interrupt_disable(pm8001_ha);
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index f51723e2d5227..5f7e62f19d83a 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -387,6 +387,7 @@ static int qedi_conn_bind(struct iscsi_cls_session *cls_session,
+ 	struct qedi_ctx *qedi = iscsi_host_priv(shost);
+ 	struct qedi_endpoint *qedi_ep;
+ 	struct iscsi_endpoint *ep;
++	int rc = 0;
+ 
+ 	ep = iscsi_lookup_endpoint(transport_fd);
+ 	if (!ep)
+@@ -394,11 +395,16 @@ static int qedi_conn_bind(struct iscsi_cls_session *cls_session,
+ 
+ 	qedi_ep = ep->dd_data;
+ 	if ((qedi_ep->state == EP_STATE_TCP_FIN_RCVD) ||
+-	    (qedi_ep->state == EP_STATE_TCP_RST_RCVD))
+-		return -EINVAL;
++	    (qedi_ep->state == EP_STATE_TCP_RST_RCVD)) {
++		rc = -EINVAL;
++		goto put_ep;
++	}
++
++	if (iscsi_conn_bind(cls_session, cls_conn, is_leading)) {
++		rc = -EINVAL;
++		goto put_ep;
++	}
+ 
+-	if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
+-		return -EINVAL;
+ 
+ 	qedi_ep->conn = qedi_conn;
+ 	qedi_conn->ep = qedi_ep;
+@@ -408,13 +414,18 @@ static int qedi_conn_bind(struct iscsi_cls_session *cls_session,
+ 	qedi_conn->cmd_cleanup_req = 0;
+ 	qedi_conn->cmd_cleanup_cmpl = 0;
+ 
+-	if (qedi_bind_conn_to_iscsi_cid(qedi, qedi_conn))
+-		return -EINVAL;
++	if (qedi_bind_conn_to_iscsi_cid(qedi, qedi_conn)) {
++		rc = -EINVAL;
++		goto put_ep;
++	}
++
+ 
+ 	spin_lock_init(&qedi_conn->tmf_work_lock);
+ 	INIT_LIST_HEAD(&qedi_conn->tmf_work_list);
+ 	init_waitqueue_head(&qedi_conn->wait_queue);
+-	return 0;
++put_ep:
++	iscsi_put_endpoint(ep);
++	return rc;
+ }
+ 
+ static int qedi_iscsi_update_conn(struct qedi_ctx *qedi,
+@@ -1428,6 +1439,7 @@ struct iscsi_transport qedi_iscsi_transport = {
+ 	.destroy_session = qedi_session_destroy,
+ 	.create_conn = qedi_conn_create,
+ 	.bind_conn = qedi_conn_bind,
++	.unbind_conn = iscsi_conn_unbind,
+ 	.start_conn = qedi_conn_start,
+ 	.stop_conn = iscsi_conn_stop,
+ 	.destroy_conn = qedi_conn_destroy,
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index 2c23b692e318c..8d82d2a83059d 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -259,6 +259,7 @@ static struct iscsi_transport qla4xxx_iscsi_transport = {
+ 	.start_conn             = qla4xxx_conn_start,
+ 	.create_conn            = qla4xxx_conn_create,
+ 	.bind_conn              = qla4xxx_conn_bind,
++	.unbind_conn		= iscsi_conn_unbind,
+ 	.stop_conn              = iscsi_conn_stop,
+ 	.destroy_conn           = qla4xxx_conn_destroy,
+ 	.set_param              = iscsi_set_param,
+@@ -3237,6 +3238,7 @@ static int qla4xxx_conn_bind(struct iscsi_cls_session *cls_session,
+ 	conn = cls_conn->dd_data;
+ 	qla_conn = conn->dd_data;
+ 	qla_conn->qla_ep = ep->dd_data;
++	iscsi_put_endpoint(ep);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index a5759d0e388a8..ef7cd7520e7c7 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -86,16 +86,10 @@ struct iscsi_internal {
+ 	struct transport_container session_cont;
+ };
+ 
+-/* Worker to perform connection failure on unresponsive connections
+- * completely in kernel space.
+- */
+-static void stop_conn_work_fn(struct work_struct *work);
+-static DECLARE_WORK(stop_conn_work, stop_conn_work_fn);
+-
+ static atomic_t iscsi_session_nr; /* sysfs session id for next new session */
+ static struct workqueue_struct *iscsi_eh_timer_workq;
+ 
+-static struct workqueue_struct *iscsi_destroy_workq;
++static struct workqueue_struct *iscsi_conn_cleanup_workq;
+ 
+ static DEFINE_IDA(iscsi_sess_ida);
+ /*
+@@ -268,9 +262,20 @@ void iscsi_destroy_endpoint(struct iscsi_endpoint *ep)
+ }
+ EXPORT_SYMBOL_GPL(iscsi_destroy_endpoint);
+ 
++void iscsi_put_endpoint(struct iscsi_endpoint *ep)
++{
++	put_device(&ep->dev);
++}
++EXPORT_SYMBOL_GPL(iscsi_put_endpoint);
++
++/**
++ * iscsi_lookup_endpoint - get ep from handle
++ * @handle: endpoint handle
++ *
++ * Caller must do a iscsi_put_endpoint.
++ */
+ struct iscsi_endpoint *iscsi_lookup_endpoint(u64 handle)
+ {
+-	struct iscsi_endpoint *ep;
+ 	struct device *dev;
+ 
+ 	dev = class_find_device(&iscsi_endpoint_class, NULL, &handle,
+@@ -278,13 +283,7 @@ struct iscsi_endpoint *iscsi_lookup_endpoint(u64 handle)
+ 	if (!dev)
+ 		return NULL;
+ 
+-	ep = iscsi_dev_to_endpoint(dev);
+-	/*
+-	 * we can drop this now because the interface will prevent
+-	 * removals and lookups from racing.
+-	 */
+-	put_device(dev);
+-	return ep;
++	return iscsi_dev_to_endpoint(dev);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_lookup_endpoint);
+ 
+@@ -1598,12 +1597,6 @@ static DECLARE_TRANSPORT_CLASS(iscsi_connection_class,
+ static struct sock *nls;
+ static DEFINE_MUTEX(rx_queue_mutex);
+ 
+-/*
+- * conn_mutex protects the {start,bind,stop,destroy}_conn from racing
+- * against the kernel stop_connection recovery mechanism
+- */
+-static DEFINE_MUTEX(conn_mutex);
+-
+ static LIST_HEAD(sesslist);
+ static DEFINE_SPINLOCK(sesslock);
+ static LIST_HEAD(connlist);
+@@ -2225,6 +2218,155 @@ void iscsi_remove_session(struct iscsi_cls_session *session)
+ }
+ EXPORT_SYMBOL_GPL(iscsi_remove_session);
+ 
++static void iscsi_stop_conn(struct iscsi_cls_conn *conn, int flag)
++{
++	ISCSI_DBG_TRANS_CONN(conn, "Stopping conn.\n");
++
++	switch (flag) {
++	case STOP_CONN_RECOVER:
++		WRITE_ONCE(conn->state, ISCSI_CONN_FAILED);
++		break;
++	case STOP_CONN_TERM:
++		WRITE_ONCE(conn->state, ISCSI_CONN_DOWN);
++		break;
++	default:
++		iscsi_cls_conn_printk(KERN_ERR, conn, "invalid stop flag %d\n",
++				      flag);
++		return;
++	}
++
++	conn->transport->stop_conn(conn, flag);
++	ISCSI_DBG_TRANS_CONN(conn, "Stopping conn done.\n");
++}
++
++static void iscsi_ep_disconnect(struct iscsi_cls_conn *conn, bool is_active)
++{
++	struct iscsi_cls_session *session = iscsi_conn_to_session(conn);
++	struct iscsi_endpoint *ep;
++
++	ISCSI_DBG_TRANS_CONN(conn, "disconnect ep.\n");
++	WRITE_ONCE(conn->state, ISCSI_CONN_FAILED);
++
++	if (!conn->ep || !session->transport->ep_disconnect)
++		return;
++
++	ep = conn->ep;
++	conn->ep = NULL;
++
++	session->transport->unbind_conn(conn, is_active);
++	session->transport->ep_disconnect(ep);
++	ISCSI_DBG_TRANS_CONN(conn, "disconnect ep done.\n");
++}
++
++static void iscsi_if_disconnect_bound_ep(struct iscsi_cls_conn *conn,
++					 struct iscsi_endpoint *ep,
++					 bool is_active)
++{
++	/* Check if this was a conn error and the kernel took ownership */
++	spin_lock_irq(&conn->lock);
++	if (!test_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
++		spin_unlock_irq(&conn->lock);
++		iscsi_ep_disconnect(conn, is_active);
++	} else {
++		spin_unlock_irq(&conn->lock);
++		ISCSI_DBG_TRANS_CONN(conn, "flush kernel conn cleanup.\n");
++		mutex_unlock(&conn->ep_mutex);
++
++		flush_work(&conn->cleanup_work);
++		/*
++		 * Userspace is now done with the EP so we can release the ref
++		 * iscsi_cleanup_conn_work_fn took.
++		 */
++		iscsi_put_endpoint(ep);
++		mutex_lock(&conn->ep_mutex);
++	}
++}
++
++static int iscsi_if_stop_conn(struct iscsi_transport *transport,
++			      struct iscsi_uevent *ev)
++{
++	int flag = ev->u.stop_conn.flag;
++	struct iscsi_cls_conn *conn;
++
++	conn = iscsi_conn_lookup(ev->u.stop_conn.sid, ev->u.stop_conn.cid);
++	if (!conn)
++		return -EINVAL;
++
++	ISCSI_DBG_TRANS_CONN(conn, "iscsi if conn stop.\n");
++	/*
++	 * If this is a termination we have to call stop_conn with that flag
++	 * so the correct states get set. If we haven't run the work yet try to
++	 * avoid the extra run.
++	 */
++	if (flag == STOP_CONN_TERM) {
++		cancel_work_sync(&conn->cleanup_work);
++		iscsi_stop_conn(conn, flag);
++	} else {
++		/*
++		 * For offload, when iscsid is restarted it won't know about
++		 * existing endpoints so it can't do a ep_disconnect. We clean
++		 * it up here for userspace.
++		 */
++		mutex_lock(&conn->ep_mutex);
++		if (conn->ep)
++			iscsi_if_disconnect_bound_ep(conn, conn->ep, true);
++		mutex_unlock(&conn->ep_mutex);
++
++		/*
++		 * Figure out if it was the kernel or userspace initiating this.
++		 */
++		spin_lock_irq(&conn->lock);
++		if (!test_and_set_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
++			spin_unlock_irq(&conn->lock);
++			iscsi_stop_conn(conn, flag);
++		} else {
++			spin_unlock_irq(&conn->lock);
++			ISCSI_DBG_TRANS_CONN(conn,
++					     "flush kernel conn cleanup.\n");
++			flush_work(&conn->cleanup_work);
++		}
++		/*
++		 * Only clear for recovery to avoid extra cleanup runs during
++		 * termination.
++		 */
++		spin_lock_irq(&conn->lock);
++		clear_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags);
++		spin_unlock_irq(&conn->lock);
++	}
++	ISCSI_DBG_TRANS_CONN(conn, "iscsi if conn stop done.\n");
++	return 0;
++}
++
++static void iscsi_cleanup_conn_work_fn(struct work_struct *work)
++{
++	struct iscsi_cls_conn *conn = container_of(work, struct iscsi_cls_conn,
++						   cleanup_work);
++	struct iscsi_cls_session *session = iscsi_conn_to_session(conn);
++
++	mutex_lock(&conn->ep_mutex);
++	/*
++	 * Get a ref to the ep, so we don't release its ID until after
++	 * userspace is done referencing it in iscsi_if_disconnect_bound_ep.
++	 */
++	if (conn->ep)
++		get_device(&conn->ep->dev);
++	iscsi_ep_disconnect(conn, false);
++
++	if (system_state != SYSTEM_RUNNING) {
++		/*
++		 * If the user has set up for the session to never timeout
++		 * then hang like they wanted. For all other cases fail right
++		 * away since userspace is not going to relogin.
++		 */
++		if (session->recovery_tmo > 0)
++			session->recovery_tmo = 0;
++	}
++
++	iscsi_stop_conn(conn, STOP_CONN_RECOVER);
++	mutex_unlock(&conn->ep_mutex);
++	ISCSI_DBG_TRANS_CONN(conn, "cleanup done.\n");
++}
++
+ void iscsi_free_session(struct iscsi_cls_session *session)
+ {
+ 	ISCSI_DBG_TRANS_SESSION(session, "Freeing session\n");
+@@ -2263,11 +2405,12 @@ iscsi_create_conn(struct iscsi_cls_session *session, int dd_size, uint32_t cid)
+ 		conn->dd_data = &conn[1];
+ 
+ 	mutex_init(&conn->ep_mutex);
++	spin_lock_init(&conn->lock);
+ 	INIT_LIST_HEAD(&conn->conn_list);
+-	INIT_LIST_HEAD(&conn->conn_list_err);
++	INIT_WORK(&conn->cleanup_work, iscsi_cleanup_conn_work_fn);
+ 	conn->transport = transport;
+ 	conn->cid = cid;
+-	conn->state = ISCSI_CONN_DOWN;
++	WRITE_ONCE(conn->state, ISCSI_CONN_DOWN);
+ 
+ 	/* this is released in the dev's release function */
+ 	if (!get_device(&session->dev))
+@@ -2321,7 +2464,6 @@ int iscsi_destroy_conn(struct iscsi_cls_conn *conn)
+ 
+ 	spin_lock_irqsave(&connlock, flags);
+ 	list_del(&conn->conn_list);
+-	list_del(&conn->conn_list_err);
+ 	spin_unlock_irqrestore(&connlock, flags);
+ 
+ 	transport_unregister_device(&conn->dev);
+@@ -2448,77 +2590,6 @@ int iscsi_offload_mesg(struct Scsi_Host *shost,
+ }
+ EXPORT_SYMBOL_GPL(iscsi_offload_mesg);
+ 
+-/*
+- * This can be called without the rx_queue_mutex, if invoked by the kernel
+- * stop work. But, in that case, it is guaranteed not to race with
+- * iscsi_destroy by conn_mutex.
+- */
+-static void iscsi_if_stop_conn(struct iscsi_cls_conn *conn, int flag)
+-{
+-	/*
+-	 * It is important that this path doesn't rely on
+-	 * rx_queue_mutex, otherwise, a thread doing allocation on a
+-	 * start_session/start_connection could sleep waiting on a
+-	 * writeback to a failed iscsi device, that cannot be recovered
+-	 * because the lock is held.  If we don't hold it here, the
+-	 * kernel stop_conn_work_fn has a chance to stop the broken
+-	 * session and resolve the allocation.
+-	 *
+-	 * Still, the user invoked .stop_conn() needs to be serialized
+-	 * with stop_conn_work_fn by a private mutex.  Not pretty, but
+-	 * it works.
+-	 */
+-	mutex_lock(&conn_mutex);
+-	switch (flag) {
+-	case STOP_CONN_RECOVER:
+-		conn->state = ISCSI_CONN_FAILED;
+-		break;
+-	case STOP_CONN_TERM:
+-		conn->state = ISCSI_CONN_DOWN;
+-		break;
+-	default:
+-		iscsi_cls_conn_printk(KERN_ERR, conn,
+-				      "invalid stop flag %d\n", flag);
+-		goto unlock;
+-	}
+-
+-	conn->transport->stop_conn(conn, flag);
+-unlock:
+-	mutex_unlock(&conn_mutex);
+-}
+-
+-static void stop_conn_work_fn(struct work_struct *work)
+-{
+-	struct iscsi_cls_conn *conn, *tmp;
+-	unsigned long flags;
+-	LIST_HEAD(recovery_list);
+-
+-	spin_lock_irqsave(&connlock, flags);
+-	if (list_empty(&connlist_err)) {
+-		spin_unlock_irqrestore(&connlock, flags);
+-		return;
+-	}
+-	list_splice_init(&connlist_err, &recovery_list);
+-	spin_unlock_irqrestore(&connlock, flags);
+-
+-	list_for_each_entry_safe(conn, tmp, &recovery_list, conn_list_err) {
+-		uint32_t sid = iscsi_conn_get_sid(conn);
+-		struct iscsi_cls_session *session;
+-
+-		session = iscsi_session_lookup(sid);
+-		if (session) {
+-			if (system_state != SYSTEM_RUNNING) {
+-				session->recovery_tmo = 0;
+-				iscsi_if_stop_conn(conn, STOP_CONN_TERM);
+-			} else {
+-				iscsi_if_stop_conn(conn, STOP_CONN_RECOVER);
+-			}
+-		}
+-
+-		list_del_init(&conn->conn_list_err);
+-	}
+-}
+-
+ void iscsi_conn_error_event(struct iscsi_cls_conn *conn, enum iscsi_err error)
+ {
+ 	struct nlmsghdr	*nlh;
+@@ -2527,11 +2598,31 @@ void iscsi_conn_error_event(struct iscsi_cls_conn *conn, enum iscsi_err error)
+ 	struct iscsi_internal *priv;
+ 	int len = nlmsg_total_size(sizeof(*ev));
+ 	unsigned long flags;
++	int state;
+ 
+-	spin_lock_irqsave(&connlock, flags);
+-	list_add(&conn->conn_list_err, &connlist_err);
+-	spin_unlock_irqrestore(&connlock, flags);
+-	queue_work(system_unbound_wq, &stop_conn_work);
++	spin_lock_irqsave(&conn->lock, flags);
++	/*
++	 * Userspace will only do a stop call if we are at least bound. And, we
++	 * only need to do the in kernel cleanup if in the UP state so cmds can
++	 * be released to upper layers. If in other states just wait for
++	 * userspace to avoid races that can leave the cleanup_work queued.
++	 */
++	state = READ_ONCE(conn->state);
++	switch (state) {
++	case ISCSI_CONN_BOUND:
++	case ISCSI_CONN_UP:
++		if (!test_and_set_bit(ISCSI_CLS_CONN_BIT_CLEANUP,
++				      &conn->flags)) {
++			queue_work(iscsi_conn_cleanup_workq,
++				   &conn->cleanup_work);
++		}
++		break;
++	default:
++		ISCSI_DBG_TRANS_CONN(conn, "Got conn error in state %d\n",
++				     state);
++		break;
++	}
++	spin_unlock_irqrestore(&conn->lock, flags);
+ 
+ 	priv = iscsi_if_transport_lookup(conn->transport);
+ 	if (!priv)
+@@ -2861,26 +2952,17 @@ static int
+ iscsi_if_destroy_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ {
+ 	struct iscsi_cls_conn *conn;
+-	unsigned long flags;
+ 
+ 	conn = iscsi_conn_lookup(ev->u.d_conn.sid, ev->u.d_conn.cid);
+ 	if (!conn)
+ 		return -EINVAL;
+ 
+-	spin_lock_irqsave(&connlock, flags);
+-	if (!list_empty(&conn->conn_list_err)) {
+-		spin_unlock_irqrestore(&connlock, flags);
+-		return -EAGAIN;
+-	}
+-	spin_unlock_irqrestore(&connlock, flags);
+-
++	ISCSI_DBG_TRANS_CONN(conn, "Flushing cleanup during destruction\n");
++	flush_work(&conn->cleanup_work);
+ 	ISCSI_DBG_TRANS_CONN(conn, "Destroying transport conn\n");
+ 
+-	mutex_lock(&conn_mutex);
+ 	if (transport->destroy_conn)
+ 		transport->destroy_conn(conn);
+-	mutex_unlock(&conn_mutex);
+-
+ 	return 0;
+ }
+ 
+@@ -2890,7 +2972,7 @@ iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ 	char *data = (char*)ev + sizeof(*ev);
+ 	struct iscsi_cls_conn *conn;
+ 	struct iscsi_cls_session *session;
+-	int err = 0, value = 0;
++	int err = 0, value = 0, state;
+ 
+ 	if (ev->u.set_param.len > PAGE_SIZE)
+ 		return -EINVAL;
+@@ -2907,8 +2989,8 @@ iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ 			session->recovery_tmo = value;
+ 		break;
+ 	default:
+-		if ((conn->state == ISCSI_CONN_BOUND) ||
+-			(conn->state == ISCSI_CONN_UP)) {
++		state = READ_ONCE(conn->state);
++		if (state == ISCSI_CONN_BOUND || state == ISCSI_CONN_UP) {
+ 			err = transport->set_param(conn, ev->u.set_param.param,
+ 					data, ev->u.set_param.len);
+ 		} else {
+@@ -2968,15 +3050,22 @@ static int iscsi_if_ep_disconnect(struct iscsi_transport *transport,
+ 	ep = iscsi_lookup_endpoint(ep_handle);
+ 	if (!ep)
+ 		return -EINVAL;
++
+ 	conn = ep->conn;
+-	if (conn) {
+-		mutex_lock(&conn->ep_mutex);
+-		conn->ep = NULL;
+-		mutex_unlock(&conn->ep_mutex);
+-		conn->state = ISCSI_CONN_FAILED;
++	if (!conn) {
++		/*
++		 * conn was not even bound yet, so we can't get iscsi conn
++		 * failures yet.
++		 */
++		transport->ep_disconnect(ep);
++		goto put_ep;
+ 	}
+ 
+-	transport->ep_disconnect(ep);
++	mutex_lock(&conn->ep_mutex);
++	iscsi_if_disconnect_bound_ep(conn, ep, false);
++	mutex_unlock(&conn->ep_mutex);
++put_ep:
++	iscsi_put_endpoint(ep);
+ 	return 0;
+ }
+ 
+@@ -3002,6 +3091,7 @@ iscsi_if_transport_ep(struct iscsi_transport *transport,
+ 
+ 		ev->r.retcode = transport->ep_poll(ep,
+ 						   ev->u.ep_poll.timeout_ms);
++		iscsi_put_endpoint(ep);
+ 		break;
+ 	case ISCSI_UEVENT_TRANSPORT_EP_DISCONNECT:
+ 		rc = iscsi_if_ep_disconnect(transport,
+@@ -3632,18 +3722,123 @@ exit_host_stats:
+ 	return err;
+ }
+ 
++static int iscsi_if_transport_conn(struct iscsi_transport *transport,
++				   struct nlmsghdr *nlh)
++{
++	struct iscsi_uevent *ev = nlmsg_data(nlh);
++	struct iscsi_cls_session *session;
++	struct iscsi_cls_conn *conn = NULL;
++	struct iscsi_endpoint *ep;
++	uint32_t pdu_len;
++	int err = 0;
++
++	switch (nlh->nlmsg_type) {
++	case ISCSI_UEVENT_CREATE_CONN:
++		return iscsi_if_create_conn(transport, ev);
++	case ISCSI_UEVENT_DESTROY_CONN:
++		return iscsi_if_destroy_conn(transport, ev);
++	case ISCSI_UEVENT_STOP_CONN:
++		return iscsi_if_stop_conn(transport, ev);
++	}
++
++	/*
++	 * The following cmds need to be run under the ep_mutex so in kernel
++	 * conn cleanup (ep_disconnect + unbind and conn) is not done while
++	 * these are running. They also must not run if we have just run a conn
++	 * cleanup because they would set the state in a way that might allow
++	 * IO or send IO themselves.
++	 */
++	switch (nlh->nlmsg_type) {
++	case ISCSI_UEVENT_START_CONN:
++		conn = iscsi_conn_lookup(ev->u.start_conn.sid,
++					 ev->u.start_conn.cid);
++		break;
++	case ISCSI_UEVENT_BIND_CONN:
++		conn = iscsi_conn_lookup(ev->u.b_conn.sid, ev->u.b_conn.cid);
++		break;
++	case ISCSI_UEVENT_SEND_PDU:
++		conn = iscsi_conn_lookup(ev->u.send_pdu.sid, ev->u.send_pdu.cid);
++		break;
++	}
++
++	if (!conn)
++		return -EINVAL;
++
++	mutex_lock(&conn->ep_mutex);
++	spin_lock_irq(&conn->lock);
++	if (test_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
++		spin_unlock_irq(&conn->lock);
++		mutex_unlock(&conn->ep_mutex);
++		ev->r.retcode = -ENOTCONN;
++		return 0;
++	}
++	spin_unlock_irq(&conn->lock);
++
++	switch (nlh->nlmsg_type) {
++	case ISCSI_UEVENT_BIND_CONN:
++		session = iscsi_session_lookup(ev->u.b_conn.sid);
++		if (!session) {
++			err = -EINVAL;
++			break;
++		}
++
++		ev->r.retcode =	transport->bind_conn(session, conn,
++						ev->u.b_conn.transport_eph,
++						ev->u.b_conn.is_leading);
++		if (!ev->r.retcode)
++			WRITE_ONCE(conn->state, ISCSI_CONN_BOUND);
++
++		if (ev->r.retcode || !transport->ep_connect)
++			break;
++
++		ep = iscsi_lookup_endpoint(ev->u.b_conn.transport_eph);
++		if (ep) {
++			ep->conn = conn;
++			conn->ep = ep;
++			iscsi_put_endpoint(ep);
++		} else {
++			err = -ENOTCONN;
++			iscsi_cls_conn_printk(KERN_ERR, conn,
++					      "Could not set ep conn binding\n");
++		}
++		break;
++	case ISCSI_UEVENT_START_CONN:
++		ev->r.retcode = transport->start_conn(conn);
++		if (!ev->r.retcode)
++			WRITE_ONCE(conn->state, ISCSI_CONN_UP);
++
++		break;
++	case ISCSI_UEVENT_SEND_PDU:
++		pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev);
++
++		if ((ev->u.send_pdu.hdr_size > pdu_len) ||
++		    (ev->u.send_pdu.data_size > (pdu_len - ev->u.send_pdu.hdr_size))) {
++			err = -EINVAL;
++			break;
++		}
++
++		ev->r.retcode =	transport->send_pdu(conn,
++				(struct iscsi_hdr *)((char *)ev + sizeof(*ev)),
++				(char *)ev + sizeof(*ev) + ev->u.send_pdu.hdr_size,
++				ev->u.send_pdu.data_size);
++		break;
++	default:
++		err = -ENOSYS;
++	}
++
++	mutex_unlock(&conn->ep_mutex);
++	return err;
++}
+ 
+ static int
+ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ {
+ 	int err = 0;
+ 	u32 portid;
+-	u32 pdu_len;
+ 	struct iscsi_uevent *ev = nlmsg_data(nlh);
+ 	struct iscsi_transport *transport = NULL;
+ 	struct iscsi_internal *priv;
+ 	struct iscsi_cls_session *session;
+-	struct iscsi_cls_conn *conn;
+ 	struct iscsi_endpoint *ep = NULL;
+ 
+ 	if (!netlink_capable(skb, CAP_SYS_ADMIN))
+@@ -3684,6 +3879,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 					ev->u.c_bound_session.initial_cmdsn,
+ 					ev->u.c_bound_session.cmds_max,
+ 					ev->u.c_bound_session.queue_depth);
++		iscsi_put_endpoint(ep);
+ 		break;
+ 	case ISCSI_UEVENT_DESTROY_SESSION:
+ 		session = iscsi_session_lookup(ev->u.d_session.sid);
+@@ -3708,7 +3904,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 			list_del_init(&session->sess_list);
+ 			spin_unlock_irqrestore(&sesslock, flags);
+ 
+-			queue_work(iscsi_destroy_workq, &session->destroy_work);
++			queue_work(system_unbound_wq, &session->destroy_work);
+ 		}
+ 		break;
+ 	case ISCSI_UEVENT_UNBIND_SESSION:
+@@ -3719,89 +3915,16 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 		else
+ 			err = -EINVAL;
+ 		break;
+-	case ISCSI_UEVENT_CREATE_CONN:
+-		err = iscsi_if_create_conn(transport, ev);
+-		break;
+-	case ISCSI_UEVENT_DESTROY_CONN:
+-		err = iscsi_if_destroy_conn(transport, ev);
+-		break;
+-	case ISCSI_UEVENT_BIND_CONN:
+-		session = iscsi_session_lookup(ev->u.b_conn.sid);
+-		conn = iscsi_conn_lookup(ev->u.b_conn.sid, ev->u.b_conn.cid);
+-
+-		if (conn && conn->ep)
+-			iscsi_if_ep_disconnect(transport, conn->ep->id);
+-
+-		if (!session || !conn) {
+-			err = -EINVAL;
+-			break;
+-		}
+-
+-		mutex_lock(&conn_mutex);
+-		ev->r.retcode =	transport->bind_conn(session, conn,
+-						ev->u.b_conn.transport_eph,
+-						ev->u.b_conn.is_leading);
+-		if (!ev->r.retcode)
+-			conn->state = ISCSI_CONN_BOUND;
+-		mutex_unlock(&conn_mutex);
+-
+-		if (ev->r.retcode || !transport->ep_connect)
+-			break;
+-
+-		ep = iscsi_lookup_endpoint(ev->u.b_conn.transport_eph);
+-		if (ep) {
+-			ep->conn = conn;
+-
+-			mutex_lock(&conn->ep_mutex);
+-			conn->ep = ep;
+-			mutex_unlock(&conn->ep_mutex);
+-		} else
+-			iscsi_cls_conn_printk(KERN_ERR, conn,
+-					      "Could not set ep conn "
+-					      "binding\n");
+-		break;
+ 	case ISCSI_UEVENT_SET_PARAM:
+ 		err = iscsi_set_param(transport, ev);
+ 		break;
+-	case ISCSI_UEVENT_START_CONN:
+-		conn = iscsi_conn_lookup(ev->u.start_conn.sid, ev->u.start_conn.cid);
+-		if (conn) {
+-			mutex_lock(&conn_mutex);
+-			ev->r.retcode = transport->start_conn(conn);
+-			if (!ev->r.retcode)
+-				conn->state = ISCSI_CONN_UP;
+-			mutex_unlock(&conn_mutex);
+-		}
+-		else
+-			err = -EINVAL;
+-		break;
++	case ISCSI_UEVENT_CREATE_CONN:
++	case ISCSI_UEVENT_DESTROY_CONN:
+ 	case ISCSI_UEVENT_STOP_CONN:
+-		conn = iscsi_conn_lookup(ev->u.stop_conn.sid, ev->u.stop_conn.cid);
+-		if (conn)
+-			iscsi_if_stop_conn(conn, ev->u.stop_conn.flag);
+-		else
+-			err = -EINVAL;
+-		break;
++	case ISCSI_UEVENT_START_CONN:
++	case ISCSI_UEVENT_BIND_CONN:
+ 	case ISCSI_UEVENT_SEND_PDU:
+-		pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev);
+-
+-		if ((ev->u.send_pdu.hdr_size > pdu_len) ||
+-		    (ev->u.send_pdu.data_size > (pdu_len - ev->u.send_pdu.hdr_size))) {
+-			err = -EINVAL;
+-			break;
+-		}
+-
+-		conn = iscsi_conn_lookup(ev->u.send_pdu.sid, ev->u.send_pdu.cid);
+-		if (conn) {
+-			mutex_lock(&conn_mutex);
+-			ev->r.retcode =	transport->send_pdu(conn,
+-				(struct iscsi_hdr*)((char*)ev + sizeof(*ev)),
+-				(char*)ev + sizeof(*ev) + ev->u.send_pdu.hdr_size,
+-				ev->u.send_pdu.data_size);
+-			mutex_unlock(&conn_mutex);
+-		}
+-		else
+-			err = -EINVAL;
++		err = iscsi_if_transport_conn(transport, nlh);
+ 		break;
+ 	case ISCSI_UEVENT_GET_STATS:
+ 		err = iscsi_if_get_stats(transport, nlh);
+@@ -3991,10 +4114,11 @@ static ssize_t show_conn_state(struct device *dev,
+ {
+ 	struct iscsi_cls_conn *conn = iscsi_dev_to_conn(dev->parent);
+ 	const char *state = "unknown";
++	int conn_state = READ_ONCE(conn->state);
+ 
+-	if (conn->state >= 0 &&
+-	    conn->state < ARRAY_SIZE(connection_state_names))
+-		state = connection_state_names[conn->state];
++	if (conn_state >= 0 &&
++	    conn_state < ARRAY_SIZE(connection_state_names))
++		state = connection_state_names[conn_state];
+ 
+ 	return sysfs_emit(buf, "%s\n", state);
+ }
+@@ -4649,6 +4773,7 @@ iscsi_register_transport(struct iscsi_transport *tt)
+ 	int err;
+ 
+ 	BUG_ON(!tt);
++	WARN_ON(tt->ep_disconnect && !tt->unbind_conn);
+ 
+ 	priv = iscsi_if_transport_lookup(tt);
+ 	if (priv)
+@@ -4803,10 +4928,10 @@ static __init int iscsi_transport_init(void)
+ 		goto release_nls;
+ 	}
+ 
+-	iscsi_destroy_workq = alloc_workqueue("%s",
+-			WQ_SYSFS | __WQ_LEGACY | WQ_MEM_RECLAIM | WQ_UNBOUND,
+-			1, "iscsi_destroy");
+-	if (!iscsi_destroy_workq) {
++	iscsi_conn_cleanup_workq = alloc_workqueue("%s",
++			WQ_SYSFS | WQ_MEM_RECLAIM | WQ_UNBOUND, 0,
++			"iscsi_conn_cleanup");
++	if (!iscsi_conn_cleanup_workq) {
+ 		err = -ENOMEM;
+ 		goto destroy_wq;
+ 	}
+@@ -4836,7 +4961,7 @@ unregister_transport_class:
+ 
+ static void __exit iscsi_transport_exit(void)
+ {
+-	destroy_workqueue(iscsi_destroy_workq);
++	destroy_workqueue(iscsi_conn_cleanup_workq);
+ 	destroy_workqueue(iscsi_eh_timer_workq);
+ 	netlink_kernel_release(nls);
+ 	bus_unregister(&iscsi_flashnode_bus);
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index c6950f157b99f..c283e45ac300b 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -1676,6 +1676,7 @@ static struct page *tcmu_try_get_block_page(struct tcmu_dev *udev, uint32_t dbi)
+ 	mutex_lock(&udev->cmdr_lock);
+ 	page = tcmu_get_block_page(udev, dbi);
+ 	if (likely(page)) {
++		get_page(page);
+ 		mutex_unlock(&udev->cmdr_lock);
+ 		return page;
+ 	}
+@@ -1714,6 +1715,7 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
+ 		/* For the vmalloc()ed cmd area pages */
+ 		addr = (void *)(unsigned long)info->mem[mi].addr + offset;
+ 		page = vmalloc_to_page(addr);
++		get_page(page);
+ 	} else {
+ 		uint32_t dbi;
+ 
+@@ -1724,7 +1726,6 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
+ 			return VM_FAULT_SIGBUS;
+ 	}
+ 
+-	get_page(page);
+ 	vmf->page = page;
+ 	return 0;
+ }
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index c99e293b50f54..e351f53199505 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -2570,7 +2570,6 @@ int btrfs_start_dirty_block_groups(struct btrfs_trans_handle *trans)
+ 	struct btrfs_path *path = NULL;
+ 	LIST_HEAD(dirty);
+ 	struct list_head *io = &cur_trans->io_bgs;
+-	int num_started = 0;
+ 	int loops = 0;
+ 
+ 	spin_lock(&cur_trans->dirty_bgs_lock);
+@@ -2636,7 +2635,6 @@ again:
+ 			cache->io_ctl.inode = NULL;
+ 			ret = btrfs_write_out_cache(trans, cache, path);
+ 			if (ret == 0 && cache->io_ctl.inode) {
+-				num_started++;
+ 				should_put = 0;
+ 
+ 				/*
+@@ -2737,7 +2735,6 @@ int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans)
+ 	int should_put;
+ 	struct btrfs_path *path;
+ 	struct list_head *io = &cur_trans->io_bgs;
+-	int num_started = 0;
+ 
+ 	path = btrfs_alloc_path();
+ 	if (!path)
+@@ -2795,7 +2792,6 @@ int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans)
+ 			cache->io_ctl.inode = NULL;
+ 			ret = btrfs_write_out_cache(trans, cache, path);
+ 			if (ret == 0 && cache->io_ctl.inode) {
+-				num_started++;
+ 				should_put = 0;
+ 				list_add_tail(&cache->io_list, io);
+ 			} else {
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index a5bcad0278835..87e55b024ac2e 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1596,9 +1596,10 @@ again:
+ 
+ 	ret = btrfs_insert_fs_root(fs_info, root);
+ 	if (ret) {
+-		btrfs_put_root(root);
+-		if (ret == -EEXIST)
++		if (ret == -EEXIST) {
++			btrfs_put_root(root);
+ 			goto again;
++		}
+ 		goto fail;
+ 	}
+ 	return root;
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index f59ec55e5feb2..416a1b753ff62 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2833,8 +2833,9 @@ out:
+ 	return ret;
+ }
+ 
+-static int btrfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
++static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
+ {
++	struct inode *inode = file_inode(file);
+ 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ 	struct btrfs_root *root = BTRFS_I(inode)->root;
+ 	struct extent_state *cached_state = NULL;
+@@ -2866,6 +2867,10 @@ static int btrfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 		goto out_only_mutex;
+ 	}
+ 
++	ret = file_modified(file);
++	if (ret)
++		goto out_only_mutex;
++
+ 	lockstart = round_up(offset, btrfs_inode_sectorsize(BTRFS_I(inode)));
+ 	lockend = round_down(offset + len,
+ 			     btrfs_inode_sectorsize(BTRFS_I(inode))) - 1;
+@@ -3301,7 +3306,7 @@ static long btrfs_fallocate(struct file *file, int mode,
+ 		return -EOPNOTSUPP;
+ 
+ 	if (mode & FALLOC_FL_PUNCH_HOLE)
+-		return btrfs_punch_hole(inode, offset, len);
++		return btrfs_punch_hole(file, offset, len);
+ 
+ 	/*
+ 	 * Only trigger disk allocation, don't trigger qgroup reserve
+@@ -3323,6 +3328,10 @@ static long btrfs_fallocate(struct file *file, int mode,
+ 			goto out;
+ 	}
+ 
++	ret = file_modified(file);
++	if (ret)
++		goto out;
++
+ 	/*
+ 	 * TODO: Move these two operations after we have checked
+ 	 * accurate reserved space, or fallocate can still fail but
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index f7f4ac01589bc..4a5248097d7aa 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -995,7 +995,6 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
+ 	int ret = 0;
+ 
+ 	if (btrfs_is_free_space_inode(inode)) {
+-		WARN_ON_ONCE(1);
+ 		ret = -EINVAL;
+ 		goto out_unlock;
+ 	}
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index e462de9917237..366d047638646 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4220,10 +4220,12 @@ static int balance_kthread(void *data)
+ 	struct btrfs_fs_info *fs_info = data;
+ 	int ret = 0;
+ 
++	sb_start_write(fs_info->sb);
+ 	mutex_lock(&fs_info->balance_mutex);
+ 	if (fs_info->balance_ctl)
+ 		ret = btrfs_balance(fs_info, fs_info->balance_ctl, NULL);
+ 	mutex_unlock(&fs_info->balance_mutex);
++	sb_end_write(fs_info->sb);
+ 
+ 	return ret;
+ }
+diff --git a/fs/cifs/link.c b/fs/cifs/link.c
+index 94dab4309fbb4..85d30fef98a29 100644
+--- a/fs/cifs/link.c
++++ b/fs/cifs/link.c
+@@ -97,6 +97,9 @@ parse_mf_symlink(const u8 *buf, unsigned int buf_len, unsigned int *_link_len,
+ 	if (rc != 1)
+ 		return -EINVAL;
+ 
++	if (link_len > CIFS_MF_SYMLINK_LINK_MAXLEN)
++		return -EINVAL;
++
+ 	rc = symlink_hash(link_len, link_str, md5_hash);
+ 	if (rc) {
+ 		cifs_dbg(FYI, "%s: MD5 hash failure: %d\n", __func__, rc);
+diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
+index 6661ee1cff479..a0c4b99d28994 100644
+--- a/include/asm-generic/tlb.h
++++ b/include/asm-generic/tlb.h
+@@ -563,10 +563,14 @@ static inline void tlb_flush_p4d_range(struct mmu_gather *tlb,
+ #define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
+ 	do {							\
+ 		unsigned long _sz = huge_page_size(h);		\
+-		if (_sz == PMD_SIZE)				\
+-			tlb_flush_pmd_range(tlb, address, _sz);	\
+-		else if (_sz == PUD_SIZE)			\
++		if (_sz >= P4D_SIZE)				\
++			tlb_flush_p4d_range(tlb, address, _sz);	\
++		else if (_sz >= PUD_SIZE)			\
+ 			tlb_flush_pud_range(tlb, address, _sz);	\
++		else if (_sz >= PMD_SIZE)			\
++			tlb_flush_pmd_range(tlb, address, _sz);	\
++		else						\
++			tlb_flush_pte_range(tlb, address, _sz);	\
+ 		__tlb_remove_tlb_entry(tlb, ptep, address);	\
+ 	} while (0)
+ 
+diff --git a/include/net/ax25.h b/include/net/ax25.h
+index 8b7eb46ad72d8..aadff553e4b73 100644
+--- a/include/net/ax25.h
++++ b/include/net/ax25.h
+@@ -236,6 +236,7 @@ typedef struct ax25_dev {
+ #if defined(CONFIG_AX25_DAMA_SLAVE) || defined(CONFIG_AX25_DAMA_MASTER)
+ 	ax25_dama_info		dama;
+ #endif
++	refcount_t		refcount;
+ } ax25_dev;
+ 
+ typedef struct ax25_cb {
+@@ -290,6 +291,17 @@ static __inline__ void ax25_cb_put(ax25_cb *ax25)
+ 	}
+ }
+ 
++static inline void ax25_dev_hold(ax25_dev *ax25_dev)
++{
++	refcount_inc(&ax25_dev->refcount);
++}
++
++static inline void ax25_dev_put(ax25_dev *ax25_dev)
++{
++	if (refcount_dec_and_test(&ax25_dev->refcount)) {
++		kfree(ax25_dev);
++	}
++}
+ static inline __be16 ax25_type_trans(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	skb->dev      = dev;
+diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
+index cc10b10dc3a19..5eecf44369659 100644
+--- a/include/net/flow_dissector.h
++++ b/include/net/flow_dissector.h
+@@ -59,6 +59,8 @@ struct flow_dissector_key_vlan {
+ 		__be16	vlan_tci;
+ 	};
+ 	__be16	vlan_tpid;
++	__be16	vlan_eth_type;
++	u16	padding;
+ };
+ 
+ struct flow_dissector_mpls_lse {
+diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
+index 2b5f97224f693..fa00e2543ad65 100644
+--- a/include/scsi/libiscsi.h
++++ b/include/scsi/libiscsi.h
+@@ -421,6 +421,7 @@ extern int iscsi_conn_start(struct iscsi_cls_conn *);
+ extern void iscsi_conn_stop(struct iscsi_cls_conn *, int);
+ extern int iscsi_conn_bind(struct iscsi_cls_session *, struct iscsi_cls_conn *,
+ 			   int);
++extern void iscsi_conn_unbind(struct iscsi_cls_conn *cls_conn, bool is_active);
+ extern void iscsi_conn_failure(struct iscsi_conn *conn, enum iscsi_err err);
+ extern void iscsi_session_failure(struct iscsi_session *session,
+ 				  enum iscsi_err err);
+diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
+index f28bb20d62713..037c77fb5dc55 100644
+--- a/include/scsi/scsi_transport_iscsi.h
++++ b/include/scsi/scsi_transport_iscsi.h
+@@ -82,6 +82,7 @@ struct iscsi_transport {
+ 	void (*destroy_session) (struct iscsi_cls_session *session);
+ 	struct iscsi_cls_conn *(*create_conn) (struct iscsi_cls_session *sess,
+ 				uint32_t cid);
++	void (*unbind_conn) (struct iscsi_cls_conn *conn, bool is_active);
+ 	int (*bind_conn) (struct iscsi_cls_session *session,
+ 			  struct iscsi_cls_conn *cls_conn,
+ 			  uint64_t transport_eph, int is_leading);
+@@ -196,15 +197,25 @@ enum iscsi_connection_state {
+ 	ISCSI_CONN_BOUND,
+ };
+ 
++#define ISCSI_CLS_CONN_BIT_CLEANUP	1
++
+ struct iscsi_cls_conn {
+ 	struct list_head conn_list;	/* item in connlist */
+-	struct list_head conn_list_err;	/* item in connlist_err */
+ 	void *dd_data;			/* LLD private data */
+ 	struct iscsi_transport *transport;
+ 	uint32_t cid;			/* connection id */
++	/*
++	 * This protects the conn startup and binding/unbinding of the ep to
++	 * the conn. Unbinding includes ep_disconnect and stop_conn.
++	 */
+ 	struct mutex ep_mutex;
+ 	struct iscsi_endpoint *ep;
+ 
++	/* Used when accessing flags and queueing work. */
++	spinlock_t lock;
++	unsigned long flags;
++	struct work_struct cleanup_work;
++
+ 	struct device dev;		/* sysfs transport/container device */
+ 	enum iscsi_connection_state state;
+ };
+@@ -443,6 +454,7 @@ extern int iscsi_scan_finished(struct Scsi_Host *shost, unsigned long time);
+ extern struct iscsi_endpoint *iscsi_create_endpoint(int dd_size);
+ extern void iscsi_destroy_endpoint(struct iscsi_endpoint *ep);
+ extern struct iscsi_endpoint *iscsi_lookup_endpoint(u64 handle);
++extern void iscsi_put_endpoint(struct iscsi_endpoint *ep);
+ extern int iscsi_block_scsi_eh(struct scsi_cmnd *cmd);
+ extern struct iscsi_iface *iscsi_create_iface(struct Scsi_Host *shost,
+ 					      struct iscsi_transport *t,
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 23db248a7fdbe..ed1bbac004d52 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -1874,17 +1874,18 @@ DECLARE_EVENT_CLASS(svc_deferred_event,
+ 	TP_STRUCT__entry(
+ 		__field(const void *, dr)
+ 		__field(u32, xid)
+-		__string(addr, dr->xprt->xpt_remotebuf)
++		__array(__u8, addr, INET6_ADDRSTRLEN + 10)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->dr = dr;
+ 		__entry->xid = be32_to_cpu(*(__be32 *)(dr->args +
+ 						       (dr->xprt_hlen>>2)));
+-		__assign_str(addr, dr->xprt->xpt_remotebuf);
++		snprintf(__entry->addr, sizeof(__entry->addr) - 1,
++			 "%pISpc", (struct sockaddr *)&dr->addr);
+ 	),
+ 
+-	TP_printk("addr=%s dr=%p xid=0x%08x", __get_str(addr), __entry->dr,
++	TP_printk("addr=%s dr=%p xid=0x%08x", __entry->addr, __entry->dr,
+ 		__entry->xid)
+ );
+ 
+diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
+index b986155787376..c9d380318dd89 100644
+--- a/kernel/dma/direct.h
++++ b/kernel/dma/direct.h
+@@ -114,6 +114,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
+ 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
+ 
+ 	if (unlikely(is_swiotlb_buffer(phys)))
+-		swiotlb_tbl_unmap_single(dev, phys, size, size, dir, attrs);
++		swiotlb_tbl_unmap_single(dev, phys, size, size, dir,
++					 attrs | DMA_ATTR_SKIP_CPU_SYNC);
+ }
+ #endif /* _KERNEL_DMA_DIRECT_H */
+diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
+index 4d89ad4fae3bb..5fb78addff51b 100644
+--- a/kernel/irq/affinity.c
++++ b/kernel/irq/affinity.c
+@@ -269,8 +269,9 @@ static int __irq_build_affinity_masks(unsigned int startvec,
+ 	 */
+ 	if (numvecs <= nodes) {
+ 		for_each_node_mask(n, nodemsk) {
+-			cpumask_or(&masks[curvec].mask, &masks[curvec].mask,
+-				   node_to_cpumask[n]);
++			/* Ensure that only CPUs which are in both masks are set */
++			cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
++			cpumask_or(&masks[curvec].mask, &masks[curvec].mask, nmsk);
+ 			if (++curvec == last_affv)
+ 				curvec = firstvec;
+ 		}
+diff --git a/kernel/smp.c b/kernel/smp.c
+index f73a597c8e4cf..b0684b4c111e9 100644
+--- a/kernel/smp.c
++++ b/kernel/smp.c
+@@ -346,7 +346,7 @@ static void flush_smp_call_function_queue(bool warn_cpu_offline)
+ 
+ 	/* There shouldn't be any pending callbacks on an offline CPU. */
+ 	if (unlikely(warn_cpu_offline && !cpu_online(smp_processor_id()) &&
+-		     !warned && !llist_empty(head))) {
++		     !warned && entry != NULL)) {
+ 		warned = true;
+ 		WARN(1, "IPI on offline CPU %d\n", smp_processor_id());
+ 
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index e8d351b7f9b03..9c352357fc8bc 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -136,7 +136,7 @@ static void tick_sched_do_timer(struct tick_sched *ts, ktime_t now)
+ 	 */
+ 	if (unlikely(tick_do_timer_cpu == TICK_DO_TIMER_NONE)) {
+ #ifdef CONFIG_NO_HZ_FULL
+-		WARN_ON(tick_nohz_full_running);
++		WARN_ON_ONCE(tick_nohz_full_running);
+ #endif
+ 		tick_do_timer_cpu = cpu;
+ 	}
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index a3ec21be3b140..e87e638c31bdf 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -1738,11 +1738,14 @@ static inline void __run_timers(struct timer_base *base)
+ 	       time_after_eq(jiffies, base->next_expiry)) {
+ 		levels = collect_expired_timers(base, heads);
+ 		/*
+-		 * The only possible reason for not finding any expired
+-		 * timer at this clk is that all matching timers have been
+-		 * dequeued.
++		 * The two possible reasons for not finding any expired
++		 * timer at this clk are that all matching timers have been
++		 * dequeued or no timer has been queued since
++		 * base::next_expiry was set to base::clk +
++		 * NEXT_TIMER_MAX_DELTA.
+ 		 */
+-		WARN_ON_ONCE(!levels && !base->next_expiry_recalc);
++		WARN_ON_ONCE(!levels && !base->next_expiry_recalc
++			     && base->timers_pending);
+ 		base->clk++;
+ 		base->next_expiry = __next_timer_interrupt(base);
+ 
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 4801751cb6b6d..5bfae0686199e 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -1123,7 +1123,7 @@ EXPORT_SYMBOL(kmemleak_no_scan);
+ void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
+ 			       gfp_t gfp)
+ {
+-	if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
++	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
+ 		kmemleak_alloc(__va(phys), size, min_count, gfp);
+ }
+ EXPORT_SYMBOL(kmemleak_alloc_phys);
+@@ -1137,7 +1137,7 @@ EXPORT_SYMBOL(kmemleak_alloc_phys);
+  */
+ void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
+ {
+-	if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
++	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
+ 		kmemleak_free_part(__va(phys), size);
+ }
+ EXPORT_SYMBOL(kmemleak_free_part_phys);
+@@ -1149,7 +1149,7 @@ EXPORT_SYMBOL(kmemleak_free_part_phys);
+  */
+ void __ref kmemleak_not_leak_phys(phys_addr_t phys)
+ {
+-	if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
++	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
+ 		kmemleak_not_leak(__va(phys));
+ }
+ EXPORT_SYMBOL(kmemleak_not_leak_phys);
+@@ -1161,7 +1161,7 @@ EXPORT_SYMBOL(kmemleak_not_leak_phys);
+  */
+ void __ref kmemleak_ignore_phys(phys_addr_t phys)
+ {
+-	if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
++	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
+ 		kmemleak_ignore(__va(phys));
+ }
+ EXPORT_SYMBOL(kmemleak_ignore_phys);
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 42f64ed2be478..f022e0024e8db 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -5653,7 +5653,7 @@ static int build_zonerefs_node(pg_data_t *pgdat, struct zoneref *zonerefs)
+ 	do {
+ 		zone_type--;
+ 		zone = pgdat->node_zones + zone_type;
+-		if (managed_zone(zone)) {
++		if (populated_zone(zone)) {
+ 			zoneref_set_zone(zone, &zonerefs[nr_zones++]);
+ 			check_highest_zone(zone_type);
+ 		}
+diff --git a/mm/page_io.c b/mm/page_io.c
+index 96479817ffae3..f0ada4455895c 100644
+--- a/mm/page_io.c
++++ b/mm/page_io.c
+@@ -69,54 +69,6 @@ void end_swap_bio_write(struct bio *bio)
+ 	bio_put(bio);
+ }
+ 
+-static void swap_slot_free_notify(struct page *page)
+-{
+-	struct swap_info_struct *sis;
+-	struct gendisk *disk;
+-	swp_entry_t entry;
+-
+-	/*
+-	 * There is no guarantee that the page is in swap cache - the software
+-	 * suspend code (at least) uses end_swap_bio_read() against a non-
+-	 * swapcache page.  So we must check PG_swapcache before proceeding with
+-	 * this optimization.
+-	 */
+-	if (unlikely(!PageSwapCache(page)))
+-		return;
+-
+-	sis = page_swap_info(page);
+-	if (data_race(!(sis->flags & SWP_BLKDEV)))
+-		return;
+-
+-	/*
+-	 * The swap subsystem performs lazy swap slot freeing,
+-	 * expecting that the page will be swapped out again.
+-	 * So we can avoid an unnecessary write if the page
+-	 * isn't redirtied.
+-	 * This is good for real swap storage because we can
+-	 * reduce unnecessary I/O and enhance wear-leveling
+-	 * if an SSD is used as the as swap device.
+-	 * But if in-memory swap device (eg zram) is used,
+-	 * this causes a duplicated copy between uncompressed
+-	 * data in VM-owned memory and compressed data in
+-	 * zram-owned memory.  So let's free zram-owned memory
+-	 * and make the VM-owned decompressed page *dirty*,
+-	 * so the page should be swapped out somewhere again if
+-	 * we again wish to reclaim it.
+-	 */
+-	disk = sis->bdev->bd_disk;
+-	entry.val = page_private(page);
+-	if (disk->fops->swap_slot_free_notify && __swap_count(entry) == 1) {
+-		unsigned long offset;
+-
+-		offset = swp_offset(entry);
+-
+-		SetPageDirty(page);
+-		disk->fops->swap_slot_free_notify(sis->bdev,
+-				offset);
+-	}
+-}
+-
+ static void end_swap_bio_read(struct bio *bio)
+ {
+ 	struct page *page = bio_first_page_all(bio);
+@@ -132,7 +84,6 @@ static void end_swap_bio_read(struct bio *bio)
+ 	}
+ 
+ 	SetPageUptodate(page);
+-	swap_slot_free_notify(page);
+ out:
+ 	unlock_page(page);
+ 	WRITE_ONCE(bio->bi_private, NULL);
+@@ -409,11 +360,6 @@ int swap_readpage(struct page *page, bool synchronous)
+ 	if (sis->flags & SWP_SYNCHRONOUS_IO) {
+ 		ret = bdev_read_page(sis->bdev, swap_page_sector(page), page);
+ 		if (!ret) {
+-			if (trylock_page(page)) {
+-				swap_slot_free_notify(page);
+-				unlock_page(page);
+-			}
+-
+ 			count_vm_event(PSWPIN);
+ 			goto out;
+ 		}
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 9e0eef7fe9add..5fff027f25fad 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -89,17 +89,21 @@ again:
+ 			sk = s->sk;
+ 			if (!sk) {
+ 				spin_unlock_bh(&ax25_list_lock);
+-				s->ax25_dev = NULL;
+ 				ax25_disconnect(s, ENETUNREACH);
++				s->ax25_dev = NULL;
+ 				spin_lock_bh(&ax25_list_lock);
+ 				goto again;
+ 			}
+ 			sock_hold(sk);
+ 			spin_unlock_bh(&ax25_list_lock);
+ 			lock_sock(sk);
++			ax25_disconnect(s, ENETUNREACH);
+ 			s->ax25_dev = NULL;
++			if (sk->sk_socket) {
++				dev_put(ax25_dev->dev);
++				ax25_dev_put(ax25_dev);
++			}
+ 			release_sock(sk);
+-			ax25_disconnect(s, ENETUNREACH);
+ 			spin_lock_bh(&ax25_list_lock);
+ 			sock_put(sk);
+ 			/* The entry could have been deleted from the
+@@ -365,21 +369,25 @@ static int ax25_ctl_ioctl(const unsigned int cmd, void __user *arg)
+ 	if (copy_from_user(&ax25_ctl, arg, sizeof(ax25_ctl)))
+ 		return -EFAULT;
+ 
+-	if ((ax25_dev = ax25_addr_ax25dev(&ax25_ctl.port_addr)) == NULL)
+-		return -ENODEV;
+-
+ 	if (ax25_ctl.digi_count > AX25_MAX_DIGIS)
+ 		return -EINVAL;
+ 
+ 	if (ax25_ctl.arg > ULONG_MAX / HZ && ax25_ctl.cmd != AX25_KILL)
+ 		return -EINVAL;
+ 
++	ax25_dev = ax25_addr_ax25dev(&ax25_ctl.port_addr);
++	if (!ax25_dev)
++		return -ENODEV;
++
+ 	digi.ndigi = ax25_ctl.digi_count;
+ 	for (k = 0; k < digi.ndigi; k++)
+ 		digi.calls[k] = ax25_ctl.digi_addr[k];
+ 
+-	if ((ax25 = ax25_find_cb(&ax25_ctl.source_addr, &ax25_ctl.dest_addr, &digi, ax25_dev->dev)) == NULL)
++	ax25 = ax25_find_cb(&ax25_ctl.source_addr, &ax25_ctl.dest_addr, &digi, ax25_dev->dev);
++	if (!ax25) {
++		ax25_dev_put(ax25_dev);
+ 		return -ENOTCONN;
++	}
+ 
+ 	switch (ax25_ctl.cmd) {
+ 	case AX25_KILL:
+@@ -446,6 +454,7 @@ static int ax25_ctl_ioctl(const unsigned int cmd, void __user *arg)
+ 	  }
+ 
+ out_put:
++	ax25_dev_put(ax25_dev);
+ 	ax25_cb_put(ax25);
+ 	return ret;
+ 
+@@ -971,14 +980,16 @@ static int ax25_release(struct socket *sock)
+ {
+ 	struct sock *sk = sock->sk;
+ 	ax25_cb *ax25;
++	ax25_dev *ax25_dev;
+ 
+ 	if (sk == NULL)
+ 		return 0;
+ 
+ 	sock_hold(sk);
+-	sock_orphan(sk);
+ 	lock_sock(sk);
++	sock_orphan(sk);
+ 	ax25 = sk_to_ax25(sk);
++	ax25_dev = ax25->ax25_dev;
+ 
+ 	if (sk->sk_type == SOCK_SEQPACKET) {
+ 		switch (ax25->state) {
+@@ -1040,6 +1051,15 @@ static int ax25_release(struct socket *sock)
+ 		sk->sk_state_change(sk);
+ 		ax25_destroy_socket(ax25);
+ 	}
++	if (ax25_dev) {
++		del_timer_sync(&ax25->timer);
++		del_timer_sync(&ax25->t1timer);
++		del_timer_sync(&ax25->t2timer);
++		del_timer_sync(&ax25->t3timer);
++		del_timer_sync(&ax25->idletimer);
++		dev_put(ax25_dev->dev);
++		ax25_dev_put(ax25_dev);
++	}
+ 
+ 	sock->sk   = NULL;
+ 	release_sock(sk);
+@@ -1116,8 +1136,10 @@ static int ax25_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ 		}
+ 	}
+ 
+-	if (ax25_dev != NULL)
++	if (ax25_dev) {
+ 		ax25_fillin_cb(ax25, ax25_dev);
++		dev_hold(ax25_dev->dev);
++	}
+ 
+ done:
+ 	ax25_cb_add(ax25);
+diff --git a/net/ax25/ax25_dev.c b/net/ax25/ax25_dev.c
+index 4ac2e0847652a..d2e0cc67d91a7 100644
+--- a/net/ax25/ax25_dev.c
++++ b/net/ax25/ax25_dev.c
+@@ -37,6 +37,7 @@ ax25_dev *ax25_addr_ax25dev(ax25_address *addr)
+ 	for (ax25_dev = ax25_dev_list; ax25_dev != NULL; ax25_dev = ax25_dev->next)
+ 		if (ax25cmp(addr, (ax25_address *)ax25_dev->dev->dev_addr) == 0) {
+ 			res = ax25_dev;
++			ax25_dev_hold(ax25_dev);
+ 		}
+ 	spin_unlock_bh(&ax25_dev_lock);
+ 
+@@ -56,6 +57,7 @@ void ax25_dev_device_up(struct net_device *dev)
+ 		return;
+ 	}
+ 
++	refcount_set(&ax25_dev->refcount, 1);
+ 	dev->ax25_ptr     = ax25_dev;
+ 	ax25_dev->dev     = dev;
+ 	dev_hold(dev);
+@@ -84,6 +86,7 @@ void ax25_dev_device_up(struct net_device *dev)
+ 	ax25_dev->next = ax25_dev_list;
+ 	ax25_dev_list  = ax25_dev;
+ 	spin_unlock_bh(&ax25_dev_lock);
++	ax25_dev_hold(ax25_dev);
+ 
+ 	ax25_register_dev_sysctl(ax25_dev);
+ }
+@@ -113,9 +116,10 @@ void ax25_dev_device_down(struct net_device *dev)
+ 	if ((s = ax25_dev_list) == ax25_dev) {
+ 		ax25_dev_list = s->next;
+ 		spin_unlock_bh(&ax25_dev_lock);
++		ax25_dev_put(ax25_dev);
+ 		dev->ax25_ptr = NULL;
+ 		dev_put(dev);
+-		kfree(ax25_dev);
++		ax25_dev_put(ax25_dev);
+ 		return;
+ 	}
+ 
+@@ -123,9 +127,10 @@ void ax25_dev_device_down(struct net_device *dev)
+ 		if (s->next == ax25_dev) {
+ 			s->next = ax25_dev->next;
+ 			spin_unlock_bh(&ax25_dev_lock);
++			ax25_dev_put(ax25_dev);
+ 			dev->ax25_ptr = NULL;
+ 			dev_put(dev);
+-			kfree(ax25_dev);
++			ax25_dev_put(ax25_dev);
+ 			return;
+ 		}
+ 
+@@ -133,6 +138,7 @@ void ax25_dev_device_down(struct net_device *dev)
+ 	}
+ 	spin_unlock_bh(&ax25_dev_lock);
+ 	dev->ax25_ptr = NULL;
++	ax25_dev_put(ax25_dev);
+ }
+ 
+ int ax25_fwd_ioctl(unsigned int cmd, struct ax25_fwd_struct *fwd)
+@@ -144,20 +150,32 @@ int ax25_fwd_ioctl(unsigned int cmd, struct ax25_fwd_struct *fwd)
+ 
+ 	switch (cmd) {
+ 	case SIOCAX25ADDFWD:
+-		if ((fwd_dev = ax25_addr_ax25dev(&fwd->port_to)) == NULL)
++		fwd_dev = ax25_addr_ax25dev(&fwd->port_to);
++		if (!fwd_dev) {
++			ax25_dev_put(ax25_dev);
+ 			return -EINVAL;
+-		if (ax25_dev->forward != NULL)
++		}
++		if (ax25_dev->forward) {
++			ax25_dev_put(fwd_dev);
++			ax25_dev_put(ax25_dev);
+ 			return -EINVAL;
++		}
+ 		ax25_dev->forward = fwd_dev->dev;
++		ax25_dev_put(fwd_dev);
++		ax25_dev_put(ax25_dev);
+ 		break;
+ 
+ 	case SIOCAX25DELFWD:
+-		if (ax25_dev->forward == NULL)
++		if (!ax25_dev->forward) {
++			ax25_dev_put(ax25_dev);
+ 			return -EINVAL;
++		}
+ 		ax25_dev->forward = NULL;
++		ax25_dev_put(ax25_dev);
+ 		break;
+ 
+ 	default:
++		ax25_dev_put(ax25_dev);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/net/ax25/ax25_route.c b/net/ax25/ax25_route.c
+index b40e0bce67ead..dc2168d2a32a9 100644
+--- a/net/ax25/ax25_route.c
++++ b/net/ax25/ax25_route.c
+@@ -75,11 +75,13 @@ static int __must_check ax25_rt_add(struct ax25_routes_struct *route)
+ 	ax25_dev *ax25_dev;
+ 	int i;
+ 
+-	if ((ax25_dev = ax25_addr_ax25dev(&route->port_addr)) == NULL)
+-		return -EINVAL;
+ 	if (route->digi_count > AX25_MAX_DIGIS)
+ 		return -EINVAL;
+ 
++	ax25_dev = ax25_addr_ax25dev(&route->port_addr);
++	if (!ax25_dev)
++		return -EINVAL;
++
+ 	write_lock_bh(&ax25_route_lock);
+ 
+ 	ax25_rt = ax25_route_list;
+@@ -91,6 +93,7 @@ static int __must_check ax25_rt_add(struct ax25_routes_struct *route)
+ 			if (route->digi_count != 0) {
+ 				if ((ax25_rt->digipeat = kmalloc(sizeof(ax25_digi), GFP_ATOMIC)) == NULL) {
+ 					write_unlock_bh(&ax25_route_lock);
++					ax25_dev_put(ax25_dev);
+ 					return -ENOMEM;
+ 				}
+ 				ax25_rt->digipeat->lastrepeat = -1;
+@@ -101,6 +104,7 @@ static int __must_check ax25_rt_add(struct ax25_routes_struct *route)
+ 				}
+ 			}
+ 			write_unlock_bh(&ax25_route_lock);
++			ax25_dev_put(ax25_dev);
+ 			return 0;
+ 		}
+ 		ax25_rt = ax25_rt->next;
+@@ -108,6 +112,7 @@ static int __must_check ax25_rt_add(struct ax25_routes_struct *route)
+ 
+ 	if ((ax25_rt = kmalloc(sizeof(ax25_route), GFP_ATOMIC)) == NULL) {
+ 		write_unlock_bh(&ax25_route_lock);
++		ax25_dev_put(ax25_dev);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -120,6 +125,7 @@ static int __must_check ax25_rt_add(struct ax25_routes_struct *route)
+ 		if ((ax25_rt->digipeat = kmalloc(sizeof(ax25_digi), GFP_ATOMIC)) == NULL) {
+ 			write_unlock_bh(&ax25_route_lock);
+ 			kfree(ax25_rt);
++			ax25_dev_put(ax25_dev);
+ 			return -ENOMEM;
+ 		}
+ 		ax25_rt->digipeat->lastrepeat = -1;
+@@ -132,6 +138,7 @@ static int __must_check ax25_rt_add(struct ax25_routes_struct *route)
+ 	ax25_rt->next   = ax25_route_list;
+ 	ax25_route_list = ax25_rt;
+ 	write_unlock_bh(&ax25_route_lock);
++	ax25_dev_put(ax25_dev);
+ 
+ 	return 0;
+ }
+@@ -173,6 +180,7 @@ static int ax25_rt_del(struct ax25_routes_struct *route)
+ 		}
+ 	}
+ 	write_unlock_bh(&ax25_route_lock);
++	ax25_dev_put(ax25_dev);
+ 
+ 	return 0;
+ }
+@@ -215,6 +223,7 @@ static int ax25_rt_opt(struct ax25_route_opt_struct *rt_option)
+ 
+ out:
+ 	write_unlock_bh(&ax25_route_lock);
++	ax25_dev_put(ax25_dev);
+ 	return err;
+ }
+ 
+diff --git a/net/ax25/ax25_subr.c b/net/ax25/ax25_subr.c
+index 15ab812c4fe4b..3a476e4f6cd0b 100644
+--- a/net/ax25/ax25_subr.c
++++ b/net/ax25/ax25_subr.c
+@@ -261,12 +261,20 @@ void ax25_disconnect(ax25_cb *ax25, int reason)
+ {
+ 	ax25_clear_queues(ax25);
+ 
+-	if (!ax25->sk || !sock_flag(ax25->sk, SOCK_DESTROY))
+-		ax25_stop_heartbeat(ax25);
+-	ax25_stop_t1timer(ax25);
+-	ax25_stop_t2timer(ax25);
+-	ax25_stop_t3timer(ax25);
+-	ax25_stop_idletimer(ax25);
++	if (reason == ENETUNREACH) {
++		del_timer_sync(&ax25->timer);
++		del_timer_sync(&ax25->t1timer);
++		del_timer_sync(&ax25->t2timer);
++		del_timer_sync(&ax25->t3timer);
++		del_timer_sync(&ax25->idletimer);
++	} else {
++		if (!ax25->sk || !sock_flag(ax25->sk, SOCK_DESTROY))
++			ax25_stop_heartbeat(ax25);
++		ax25_stop_t1timer(ax25);
++		ax25_stop_t2timer(ax25);
++		ax25_stop_t3timer(ax25);
++		ax25_stop_idletimer(ax25);
++	}
+ 
+ 	ax25->state = AX25_STATE_0;
+ 
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 813c709c61cfb..f9baa9b1c77f7 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1171,6 +1171,7 @@ proto_again:
+ 					 VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
+ 			}
+ 			key_vlan->vlan_tpid = saved_vlan_tpid;
++			key_vlan->vlan_eth_type = proto;
+ 		}
+ 
+ 		fdret = FLOW_DISSECT_RET_PROTO_AGAIN;
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 2aa39ce7093df..05e19e5d65140 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -508,7 +508,7 @@ int ip6_forward(struct sk_buff *skb)
+ 		goto drop;
+ 
+ 	if (!net->ipv6.devconf_all->disable_policy &&
+-	    !idev->cnf.disable_policy &&
++	    (!idev || !idev->cnf.disable_policy) &&
+ 	    !xfrm6_policy_check(NULL, XFRM_POLICY_FWD, skb)) {
+ 		__IP6_INC_STATS(net, idev, IPSTATS_MIB_INDISCARDS);
+ 		goto drop;
+diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
+index e38719e2ee582..2cfff70f70e06 100644
+--- a/net/nfc/nci/core.c
++++ b/net/nfc/nci/core.c
+@@ -548,6 +548,10 @@ static int nci_close_device(struct nci_dev *ndev)
+ 	mutex_lock(&ndev->req_lock);
+ 
+ 	if (!test_and_clear_bit(NCI_UP, &ndev->flags)) {
++		/* Need to flush the cmd wq in case
++		 * there is a queued/running cmd_work
++		 */
++		flush_workqueue(ndev->cmd_wq);
+ 		del_timer_sync(&ndev->cmd_timer);
+ 		del_timer_sync(&ndev->data_timer);
+ 		mutex_unlock(&ndev->req_lock);
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 9a789a057a741..b8ffb7e4f696c 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1656,10 +1656,10 @@ static int tcf_chain_tp_insert(struct tcf_chain *chain,
+ 	if (chain->flushing)
+ 		return -EAGAIN;
+ 
++	RCU_INIT_POINTER(tp->next, tcf_chain_tp_prev(chain, chain_info));
+ 	if (*chain_info->pprev == chain->filter_chain)
+ 		tcf_chain0_head_change(chain, tp);
+ 	tcf_proto_get(tp);
+-	RCU_INIT_POINTER(tp->next, tcf_chain_tp_prev(chain, chain_info));
+ 	rcu_assign_pointer(*chain_info->pprev, tp);
+ 
+ 	return 0;
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 8ff6945b9f8f4..35ee6d8226e61 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -998,6 +998,7 @@ static int fl_set_key_mpls(struct nlattr **tb,
+ static void fl_set_key_vlan(struct nlattr **tb,
+ 			    __be16 ethertype,
+ 			    int vlan_id_key, int vlan_prio_key,
++			    int vlan_next_eth_type_key,
+ 			    struct flow_dissector_key_vlan *key_val,
+ 			    struct flow_dissector_key_vlan *key_mask)
+ {
+@@ -1016,6 +1017,11 @@ static void fl_set_key_vlan(struct nlattr **tb,
+ 	}
+ 	key_val->vlan_tpid = ethertype;
+ 	key_mask->vlan_tpid = cpu_to_be16(~0);
++	if (tb[vlan_next_eth_type_key]) {
++		key_val->vlan_eth_type =
++			nla_get_be16(tb[vlan_next_eth_type_key]);
++		key_mask->vlan_eth_type = cpu_to_be16(~0);
++	}
+ }
+ 
+ static void fl_set_key_flag(u32 flower_key, u32 flower_mask,
+@@ -1497,8 +1503,9 @@ static int fl_set_key(struct net *net, struct nlattr **tb,
+ 
+ 		if (eth_type_vlan(ethertype)) {
+ 			fl_set_key_vlan(tb, ethertype, TCA_FLOWER_KEY_VLAN_ID,
+-					TCA_FLOWER_KEY_VLAN_PRIO, &key->vlan,
+-					&mask->vlan);
++					TCA_FLOWER_KEY_VLAN_PRIO,
++					TCA_FLOWER_KEY_VLAN_ETH_TYPE,
++					&key->vlan, &mask->vlan);
+ 
+ 			if (tb[TCA_FLOWER_KEY_VLAN_ETH_TYPE]) {
+ 				ethertype = nla_get_be16(tb[TCA_FLOWER_KEY_VLAN_ETH_TYPE]);
+@@ -1506,6 +1513,7 @@ static int fl_set_key(struct net *net, struct nlattr **tb,
+ 					fl_set_key_vlan(tb, ethertype,
+ 							TCA_FLOWER_KEY_CVLAN_ID,
+ 							TCA_FLOWER_KEY_CVLAN_PRIO,
++							TCA_FLOWER_KEY_CVLAN_ETH_TYPE,
+ 							&key->cvlan, &mask->cvlan);
+ 					fl_set_key_val(tb, &key->basic.n_proto,
+ 						       TCA_FLOWER_KEY_CVLAN_ETH_TYPE,
+@@ -2861,13 +2869,13 @@ static int fl_dump_key(struct sk_buff *skb, struct net *net,
+ 		goto nla_put_failure;
+ 
+ 	if (mask->basic.n_proto) {
+-		if (mask->cvlan.vlan_tpid) {
++		if (mask->cvlan.vlan_eth_type) {
+ 			if (nla_put_be16(skb, TCA_FLOWER_KEY_CVLAN_ETH_TYPE,
+ 					 key->basic.n_proto))
+ 				goto nla_put_failure;
+-		} else if (mask->vlan.vlan_tpid) {
++		} else if (mask->vlan.vlan_eth_type) {
+ 			if (nla_put_be16(skb, TCA_FLOWER_KEY_VLAN_ETH_TYPE,
+-					 key->basic.n_proto))
++					 key->vlan.vlan_eth_type))
+ 				goto nla_put_failure;
+ 		}
+ 	}
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 806babdd838d2..eca525791013e 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -427,7 +427,8 @@ static int taprio_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 	if (unlikely(!child))
+ 		return qdisc_drop(skb, sch, to_free);
+ 
+-	if (skb->sk && sock_flag(skb->sk, SOCK_TXTIME)) {
++	/* sk_flags are only safe to use on full sockets. */
++	if (skb->sk && sk_fullsock(skb->sk) && sock_flag(skb->sk, SOCK_TXTIME)) {
+ 		if (!is_valid_interval(skb, sch))
+ 			return qdisc_drop(skb, sch, to_free);
+ 	} else if (TXTIME_ASSIST_IS_ENABLED(q->flags)) {
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 0a9e2c7d8e5f5..e9b4ea3d934fa 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -5518,7 +5518,7 @@ int sctp_do_peeloff(struct sock *sk, sctp_assoc_t id, struct socket **sockp)
+ 	 * Set the daddr and initialize id to something more random and also
+ 	 * copy over any ip options.
+ 	 */
+-	sp->pf->to_sk_daddr(&asoc->peer.primary_addr, sk);
++	sp->pf->to_sk_daddr(&asoc->peer.primary_addr, sock->sk);
+ 	sp->pf->copy_ip_options(sk, sock->sk);
+ 
+ 	/* Populate the fields of the newsk from the oldsk and migrate the
+diff --git a/net/smc/smc_pnet.c b/net/smc/smc_pnet.c
+index 9007c7e3bae4e..30bae60d626c6 100644
+--- a/net/smc/smc_pnet.c
++++ b/net/smc/smc_pnet.c
+@@ -310,8 +310,9 @@ static struct smc_ib_device *smc_pnet_find_ib(char *ib_name)
+ 	list_for_each_entry(ibdev, &smc_ib_devices.list, list) {
+ 		if (!strncmp(ibdev->ibdev->name, ib_name,
+ 			     sizeof(ibdev->ibdev->name)) ||
+-		    !strncmp(dev_name(ibdev->ibdev->dev.parent), ib_name,
+-			     IB_DEVICE_NAME_MAX - 1)) {
++		    (ibdev->ibdev->dev.parent &&
++		     !strncmp(dev_name(ibdev->ibdev->dev.parent), ib_name,
++			     IB_DEVICE_NAME_MAX - 1))) {
+ 			goto out;
+ 		}
+ 	}
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 0df8b9a19952c..12f44ad4e0d8e 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -475,7 +475,8 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ 				   .len = IEEE80211_MAX_MESH_ID_LEN },
+ 	[NL80211_ATTR_MPATH_NEXT_HOP] = NLA_POLICY_ETH_ADDR_COMPAT,
+ 
+-	[NL80211_ATTR_REG_ALPHA2] = { .type = NLA_STRING, .len = 2 },
++	/* allow 3 for NUL-termination, we used to declare this NLA_STRING */
++	[NL80211_ATTR_REG_ALPHA2] = NLA_POLICY_RANGE(NLA_BINARY, 2, 3),
+ 	[NL80211_ATTR_REG_RULES] = { .type = NLA_NESTED },
+ 
+ 	[NL80211_ATTR_BSS_CTS_PROT] = { .type = NLA_U8 },
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index c1b2655682a8a..6dc9b7e22b71d 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1968,11 +1968,13 @@ cfg80211_inform_single_bss_data(struct wiphy *wiphy,
+ 		/* this is a nontransmitting bss, we need to add it to
+ 		 * transmitting bss' list if it is not there
+ 		 */
++		spin_lock_bh(&rdev->bss_lock);
+ 		if (cfg80211_add_nontrans_list(non_tx_data->tx_bss,
+ 					       &res->pub)) {
+ 			if (__cfg80211_unlink_bss(rdev, res))
+ 				rdev->bss_generation++;
+ 		}
++		spin_unlock_bh(&rdev->bss_lock);
+ 	}
+ 
+ 	trace_cfg80211_return_bss(&res->pub);
+diff --git a/scripts/gcc-plugins/latent_entropy_plugin.c b/scripts/gcc-plugins/latent_entropy_plugin.c
+index cbe1d6c4b1a51..c84bef1d28955 100644
+--- a/scripts/gcc-plugins/latent_entropy_plugin.c
++++ b/scripts/gcc-plugins/latent_entropy_plugin.c
+@@ -86,25 +86,31 @@ static struct plugin_info latent_entropy_plugin_info = {
+ 	.help		= "disable\tturn off latent entropy instrumentation\n",
+ };
+ 
+-static unsigned HOST_WIDE_INT seed;
+-/*
+- * get_random_seed() (this is a GCC function) generates the seed.
+- * This is a simple random generator without any cryptographic security because
+- * the entropy doesn't come from here.
+- */
++static unsigned HOST_WIDE_INT deterministic_seed;
++static unsigned HOST_WIDE_INT rnd_buf[32];
++static size_t rnd_idx = ARRAY_SIZE(rnd_buf);
++static int urandom_fd = -1;
++
+ static unsigned HOST_WIDE_INT get_random_const(void)
+ {
+-	unsigned int i;
+-	unsigned HOST_WIDE_INT ret = 0;
+-
+-	for (i = 0; i < 8 * sizeof(ret); i++) {
+-		ret = (ret << 1) | (seed & 1);
+-		seed >>= 1;
+-		if (ret & 1)
+-			seed ^= 0xD800000000000000ULL;
++	if (deterministic_seed) {
++		unsigned HOST_WIDE_INT w = deterministic_seed;
++		w ^= w << 13;
++		w ^= w >> 7;
++		w ^= w << 17;
++		deterministic_seed = w;
++		return deterministic_seed;
+ 	}
+ 
+-	return ret;
++	if (urandom_fd < 0) {
++		urandom_fd = open("/dev/urandom", O_RDONLY);
++		gcc_assert(urandom_fd >= 0);
++	}
++	if (rnd_idx >= ARRAY_SIZE(rnd_buf)) {
++		gcc_assert(read(urandom_fd, rnd_buf, sizeof(rnd_buf)) == sizeof(rnd_buf));
++		rnd_idx = 0;
++	}
++	return rnd_buf[rnd_idx++];
+ }
+ 
+ static tree tree_get_random_const(tree type)
+@@ -549,8 +555,6 @@ static void latent_entropy_start_unit(void *gcc_data __unused,
+ 	tree type, id;
+ 	int quals;
+ 
+-	seed = get_random_seed(false);
+-
+ 	if (in_lto_p)
+ 		return;
+ 
+@@ -585,6 +589,12 @@ __visible int plugin_init(struct plugin_name_args *plugin_info,
+ 	const struct plugin_argument * const argv = plugin_info->argv;
+ 	int i;
+ 
++	/*
++	 * Call get_random_seed() with noinit=true, so that this returns
++	 * 0 in the case where no seed has been passed via -frandom-seed.
++	 */
++	deterministic_seed = get_random_seed(true);
++
+ 	static const struct ggc_root_tab gt_ggc_r_gt_latent_entropy[] = {
+ 		{
+ 			.base = &latent_entropy_decl,
+diff --git a/sound/core/pcm_misc.c b/sound/core/pcm_misc.c
+index 257d412eac5dd..30f0f96e00004 100644
+--- a/sound/core/pcm_misc.c
++++ b/sound/core/pcm_misc.c
+@@ -429,7 +429,7 @@ int snd_pcm_format_set_silence(snd_pcm_format_t format, void *data, unsigned int
+ 		return 0;
+ 	width = pcm_formats[(INT)format].phys; /* physical width */
+ 	pat = pcm_formats[(INT)format].silence;
+-	if (! width)
++	if (!width || !pat)
+ 		return -EINVAL;
+ 	/* signed or 1 byte data */
+ 	if (pcm_formats[(INT)format].signd == 1 || width <= 8) {
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index b886326ce9b96..11d653190e6ea 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2626,6 +2626,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65f1, "Clevo PC50HS", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65f5, "Clevo PD50PN[NRT]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+@@ -8994,6 +8995,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x505d, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x505f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5062, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
++	SND_PCI_QUIRK(0x17aa, 0x508b, "Thinkpad X12 Gen 1", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x511e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 3b273580fb840..3a0a7930cd10a 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -1442,7 +1442,9 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
+ 	bool use_uncore_alias;
+ 	LIST_HEAD(config_terms);
+ 
+-	if (verbose > 1) {
++	pmu = parse_state->fake_pmu ?: perf_pmu__find(name);
++
++	if (verbose > 1 && !(pmu && pmu->selectable)) {
+ 		fprintf(stderr, "Attempting to add event pmu '%s' with '",
+ 			name);
+ 		if (head_config) {
+@@ -1455,7 +1457,6 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
+ 		fprintf(stderr, "' that may result in non-fatal errors\n");
+ 	}
+ 
+-	pmu = parse_state->fake_pmu ?: perf_pmu__find(name);
+ 	if (!pmu) {
+ 		char *err_str;
+ 
+diff --git a/tools/testing/selftests/mqueue/mq_perf_tests.c b/tools/testing/selftests/mqueue/mq_perf_tests.c
+index b019e0b8221c7..84fda3b490735 100644
+--- a/tools/testing/selftests/mqueue/mq_perf_tests.c
++++ b/tools/testing/selftests/mqueue/mq_perf_tests.c
+@@ -180,6 +180,9 @@ void shutdown(int exit_val, char *err_cause, int line_no)
+ 	if (in_shutdown++)
+ 		return;
+ 
++	/* Free the cpu_set allocated using CPU_ALLOC in main function */
++	CPU_FREE(cpu_set);
++
+ 	for (i = 0; i < num_cpus_to_pin; i++)
+ 		if (cpu_threads[i]) {
+ 			pthread_kill(cpu_threads[i], SIGUSR1);
+@@ -551,6 +554,12 @@ int main(int argc, char *argv[])
+ 		perror("sysconf(_SC_NPROCESSORS_ONLN)");
+ 		exit(1);
+ 	}
++
++	if (getuid() != 0)
++		ksft_exit_skip("Not running as root, but almost all tests "
++			"require root in order to modify\nsystem settings.  "
++			"Exiting.\n");
++
+ 	cpus_online = min(MAX_CPUS, sysconf(_SC_NPROCESSORS_ONLN));
+ 	cpu_set = CPU_ALLOC(cpus_online);
+ 	if (cpu_set == NULL) {
+@@ -589,7 +598,7 @@ int main(int argc, char *argv[])
+ 						cpu_set)) {
+ 					fprintf(stderr, "Any given CPU may "
+ 						"only be given once.\n");
+-					exit(1);
++					goto err_code;
+ 				} else
+ 					CPU_SET_S(cpus_to_pin[cpu],
+ 						  cpu_set_size, cpu_set);
+@@ -607,7 +616,7 @@ int main(int argc, char *argv[])
+ 				queue_path = malloc(strlen(option) + 2);
+ 				if (!queue_path) {
+ 					perror("malloc()");
+-					exit(1);
++					goto err_code;
+ 				}
+ 				queue_path[0] = '/';
+ 				queue_path[1] = 0;
+@@ -622,17 +631,12 @@ int main(int argc, char *argv[])
+ 		fprintf(stderr, "Must pass at least one CPU to continuous "
+ 			"mode.\n");
+ 		poptPrintUsage(popt_context, stderr, 0);
+-		exit(1);
++		goto err_code;
+ 	} else if (!continuous_mode) {
+ 		num_cpus_to_pin = 1;
+ 		cpus_to_pin[0] = cpus_online - 1;
+ 	}
+ 
+-	if (getuid() != 0)
+-		ksft_exit_skip("Not running as root, but almost all tests "
+-			"require root in order to modify\nsystem settings.  "
+-			"Exiting.\n");
+-
+ 	max_msgs = fopen(MAX_MSGS, "r+");
+ 	max_msgsize = fopen(MAX_MSGSIZE, "r+");
+ 	if (!max_msgs)
+@@ -740,4 +744,9 @@ int main(int argc, char *argv[])
+ 			sleep(1);
+ 	}
+ 	shutdown(0, "", 0);
++
++err_code:
++	CPU_FREE(cpu_set);
++	exit(1);
++
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-04-26 12:17 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-04-26 12:17 UTC (permalink / raw
  To: gentoo-commits

commit:     8e997a3231771c98aebfdd4bc66617edc044dfdd
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 26 12:17:16 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Apr 26 12:17:16 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8e997a32

gpio: Request interrupts after IRQ is initialized

Bug: https://bugs.gentoo.org/840942

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 ++
 ...quest-interrupts-after-IRQ-is-initialized.patch | 75 ++++++++++++++++++++++
 2 files changed, 79 insertions(+)

diff --git a/0000_README b/0000_README
index a85d4035..79bdf239 100644
--- a/0000_README
+++ b/0000_README
@@ -503,6 +503,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Path:   2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/drivers/gpio?id=06fb4ecfeac7e00d6704fa5ed19299f2fefb3cc9
+Desc:   gpio: Request interrupts after IRQ is initialized
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch b/2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
new file mode 100644
index 00000000..0a1d4624
--- /dev/null
+++ b/2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
@@ -0,0 +1,75 @@
+From 06fb4ecfeac7e00d6704fa5ed19299f2fefb3cc9 Mon Sep 17 00:00:00 2001
+From: Mario Limonciello <mario.limonciello@amd.com>
+Date: Fri, 22 Apr 2022 08:14:52 -0500
+Subject: gpio: Request interrupts after IRQ is initialized
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Commit 5467801f1fcb ("gpio: Restrict usage of GPIO chip irq members
+before initialization") attempted to fix a race condition that lead to a
+NULL pointer, but in the process caused a regression for _AEI/_EVT
+declared GPIOs.
+
+This manifests in messages showing deferred probing while trying to
+allocate IRQs like so:
+
+  amd_gpio AMDI0030:00: Failed to translate GPIO pin 0x0000 to IRQ, err -517
+  amd_gpio AMDI0030:00: Failed to translate GPIO pin 0x002C to IRQ, err -517
+  amd_gpio AMDI0030:00: Failed to translate GPIO pin 0x003D to IRQ, err -517
+  [ .. more of the same .. ]
+
+The code for walking _AEI doesn't handle deferred probing and so this
+leads to non-functional GPIO interrupts.
+
+Fix this issue by moving the call to `acpi_gpiochip_request_interrupts`
+to occur after gc->irc.initialized is set.
+
+Fixes: 5467801f1fcb ("gpio: Restrict usage of GPIO chip irq members before initialization")
+Link: https://lore.kernel.org/linux-gpio/BL1PR12MB51577A77F000A008AA694675E2EF9@BL1PR12MB5157.namprd12.prod.outlook.com/
+Link: https://bugzilla.suse.com/show_bug.cgi?id=1198697
+Link: https://bugzilla.kernel.org/show_bug.cgi?id=215850
+Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1979
+Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1976
+Reported-by: Mario Limonciello <mario.limonciello@amd.com>
+Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
+Reviewed-by: Shreeya Patel <shreeya.patel@collabora.com>
+Tested-By: Samuel Čavoj <samuel@cavoj.net>
+Tested-By: lukeluk498@gmail.com Link:
+Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
+Acked-by: Linus Walleij <linus.walleij@linaro.org>
+Reviewed-and-tested-by: Takashi Iwai <tiwai@suse.de>
+Cc: Shreeya Patel <shreeya.patel@collabora.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+---
+ drivers/gpio/gpiolib.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+(limited to 'drivers/gpio')
+
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 085348e089860..b7694171655cf 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1601,8 +1601,6 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
+ 
+ 	gpiochip_set_irq_hooks(gc);
+ 
+-	acpi_gpiochip_request_interrupts(gc);
+-
+ 	/*
+ 	 * Using barrier() here to prevent compiler from reordering
+ 	 * gc->irq.initialized before initialization of above
+@@ -1612,6 +1610,8 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
+ 
+ 	gc->irq.initialized = true;
+ 
++	acpi_gpiochip_request_interrupts(gc);
++
+ 	return 0;
+ }
+ 
+-- 
+cgit 1.2.3-1.el7
+


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-04-27 12:20 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-04-27 12:20 UTC (permalink / raw
  To: gentoo-commits

commit:     7a8071f7fc7e8d3ba7cb7e27a7175f45de7cefe2
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 27 12:20:07 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 27 12:20:07 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7a8071f7

Linux patch 5.10.113

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1112_linux-5.10.113.patch | 2842 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2846 insertions(+)

diff --git a/0000_README b/0000_README
index 79bdf239..e6bddb24 100644
--- a/0000_README
+++ b/0000_README
@@ -491,6 +491,10 @@ Patch:  1111_linux-5.10.112.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.112
 
+Patch:  1112_linux-5.10.113.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.113
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1112_linux-5.10.113.patch b/1112_linux-5.10.113.patch
new file mode 100644
index 00000000..3839f947
--- /dev/null
+++ b/1112_linux-5.10.113.patch
@@ -0,0 +1,2842 @@
+diff --git a/Documentation/filesystems/ext4/attributes.rst b/Documentation/filesystems/ext4/attributes.rst
+index 54386a010a8d7..871d2da7a0a91 100644
+--- a/Documentation/filesystems/ext4/attributes.rst
++++ b/Documentation/filesystems/ext4/attributes.rst
+@@ -76,7 +76,7 @@ The beginning of an extended attribute block is in
+      - Checksum of the extended attribute block.
+    * - 0x14
+      - \_\_u32
+-     - h\_reserved[2]
++     - h\_reserved[3]
+      - Zero.
+ 
+ The checksum is calculated against the FS UUID, the 64-bit block number
+diff --git a/Makefile b/Makefile
+index 05013bf5a469b..99bbaa9f54f4c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 112
++SUBLEVEL = 113
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S
+index ae656bfc31c3d..301ade4d0b943 100644
+--- a/arch/arc/kernel/entry.S
++++ b/arch/arc/kernel/entry.S
+@@ -199,6 +199,7 @@ tracesys_exit:
+ 	st  r0, [sp, PT_r0]     ; sys call return value in pt_regs
+ 
+ 	;POST Sys Call Ptrace Hook
++	mov r0, sp		; pt_regs needed
+ 	bl  @syscall_trace_exit
+ 	b   ret_from_exception ; NOT ret_from_system_call at is saves r0 which
+ 	; we'd done before calling post hook above
+diff --git a/arch/arm/mach-vexpress/spc.c b/arch/arm/mach-vexpress/spc.c
+index 1da11bdb1dfbd..1c6500c4e6a17 100644
+--- a/arch/arm/mach-vexpress/spc.c
++++ b/arch/arm/mach-vexpress/spc.c
+@@ -580,7 +580,7 @@ static int __init ve_spc_clk_init(void)
+ 		}
+ 
+ 		cluster = topology_physical_package_id(cpu_dev->id);
+-		if (init_opp_table[cluster])
++		if (cluster < 0 || init_opp_table[cluster])
+ 			continue;
+ 
+ 		if (ve_init_opp_table(cpu_dev))
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-var-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-var-som.dtsi
+index 49082529764f0..0fac1f3f7f478 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-var-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-var-som.dtsi
+@@ -89,12 +89,12 @@
+ 		pendown-gpio = <&gpio1 3 GPIO_ACTIVE_LOW>;
+ 
+ 		ti,x-min = /bits/ 16 <125>;
+-		touchscreen-size-x = /bits/ 16 <4008>;
++		touchscreen-size-x = <4008>;
+ 		ti,y-min = /bits/ 16 <282>;
+-		touchscreen-size-y = /bits/ 16 <3864>;
++		touchscreen-size-y = <3864>;
+ 		ti,x-plate-ohms = /bits/ 16 <180>;
+-		touchscreen-max-pressure = /bits/ 16 <255>;
+-		touchscreen-average-samples = /bits/ 16 <10>;
++		touchscreen-max-pressure = <255>;
++		touchscreen-average-samples = <10>;
+ 		ti,debounce-tol = /bits/ 16 <3>;
+ 		ti,debounce-rep = /bits/ 16 <1>;
+ 		ti,settle-delay-usec = /bits/ 16 <150>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
+index 7f356edf9f916..f6287f174355c 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
+@@ -70,12 +70,12 @@
+ 		pendown-gpio = <&gpio1 3 GPIO_ACTIVE_LOW>;
+ 
+ 		ti,x-min = /bits/ 16 <125>;
+-		touchscreen-size-x = /bits/ 16 <4008>;
++		touchscreen-size-x = <4008>;
+ 		ti,y-min = /bits/ 16 <282>;
+-		touchscreen-size-y = /bits/ 16 <3864>;
++		touchscreen-size-y = <3864>;
+ 		ti,x-plate-ohms = /bits/ 16 <180>;
+-		touchscreen-max-pressure = /bits/ 16 <255>;
+-		touchscreen-average-samples = /bits/ 16 <10>;
++		touchscreen-max-pressure = <255>;
++		touchscreen-average-samples = <10>;
+ 		ti,debounce-tol = /bits/ 16 <3>;
+ 		ti,debounce-rep = /bits/ 16 <1>;
+ 		ti,settle-delay-usec = /bits/ 16 <150>;
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index f3a70dc7c5942..3f74db7b0a31d 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -512,13 +512,12 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
+ 
+ #define pmd_none(pmd)		(!pmd_val(pmd))
+ 
+-#define pmd_bad(pmd)		(!(pmd_val(pmd) & PMD_TABLE_BIT))
+-
+ #define pmd_table(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
+ 				 PMD_TYPE_TABLE)
+ #define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
+ 				 PMD_TYPE_SECT)
+-#define pmd_leaf(pmd)		pmd_sect(pmd)
++#define pmd_leaf(pmd)		(pmd_present(pmd) && !pmd_table(pmd))
++#define pmd_bad(pmd)		(!pmd_table(pmd))
+ 
+ #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3
+ static inline bool pud_sect(pud_t pud) { return false; }
+@@ -602,9 +601,9 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+ 	pr_err("%s:%d: bad pmd %016llx.\n", __FILE__, __LINE__, pmd_val(e))
+ 
+ #define pud_none(pud)		(!pud_val(pud))
+-#define pud_bad(pud)		(!(pud_val(pud) & PUD_TABLE_BIT))
++#define pud_bad(pud)		(!pud_table(pud))
+ #define pud_present(pud)	pte_present(pud_pte(pud))
+-#define pud_leaf(pud)		pud_sect(pud)
++#define pud_leaf(pud)		(pud_present(pud) && !pud_table(pud))
+ #define pud_valid(pud)		pte_valid(pud_pte(pud))
+ 
+ static inline void set_pud(pud_t *pudp, pud_t pud)
+diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
+index 8da93fdfa59e9..c640053ab03f2 100644
+--- a/arch/powerpc/kvm/book3s_64_vio.c
++++ b/arch/powerpc/kvm/book3s_64_vio.c
+@@ -421,13 +421,19 @@ static void kvmppc_tce_put(struct kvmppc_spapr_tce_table *stt,
+ 	tbl[idx % TCES_PER_PAGE] = tce;
+ }
+ 
+-static void kvmppc_clear_tce(struct mm_struct *mm, struct iommu_table *tbl,
+-		unsigned long entry)
++static void kvmppc_clear_tce(struct mm_struct *mm, struct kvmppc_spapr_tce_table *stt,
++		struct iommu_table *tbl, unsigned long entry)
+ {
+-	unsigned long hpa = 0;
+-	enum dma_data_direction dir = DMA_NONE;
++	unsigned long i;
++	unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift);
++	unsigned long io_entry = entry << (stt->page_shift - tbl->it_page_shift);
++
++	for (i = 0; i < subpages; ++i) {
++		unsigned long hpa = 0;
++		enum dma_data_direction dir = DMA_NONE;
+ 
+-	iommu_tce_xchg_no_kill(mm, tbl, entry, &hpa, &dir);
++		iommu_tce_xchg_no_kill(mm, tbl, io_entry + i, &hpa, &dir);
++	}
+ }
+ 
+ static long kvmppc_tce_iommu_mapped_dec(struct kvm *kvm,
+@@ -486,6 +492,8 @@ static long kvmppc_tce_iommu_unmap(struct kvm *kvm,
+ 			break;
+ 	}
+ 
++	iommu_tce_kill(tbl, io_entry, subpages);
++
+ 	return ret;
+ }
+ 
+@@ -545,6 +553,8 @@ static long kvmppc_tce_iommu_map(struct kvm *kvm,
+ 			break;
+ 	}
+ 
++	iommu_tce_kill(tbl, io_entry, subpages);
++
+ 	return ret;
+ }
+ 
+@@ -591,10 +601,9 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
+ 			ret = kvmppc_tce_iommu_map(vcpu->kvm, stt, stit->tbl,
+ 					entry, ua, dir);
+ 
+-		iommu_tce_kill(stit->tbl, entry, 1);
+ 
+ 		if (ret != H_SUCCESS) {
+-			kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry);
++			kvmppc_clear_tce(vcpu->kvm->mm, stt, stit->tbl, entry);
+ 			goto unlock_exit;
+ 		}
+ 	}
+@@ -670,13 +679,13 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
+ 		 */
+ 		if (get_user(tce, tces + i)) {
+ 			ret = H_TOO_HARD;
+-			goto invalidate_exit;
++			goto unlock_exit;
+ 		}
+ 		tce = be64_to_cpu(tce);
+ 
+ 		if (kvmppc_tce_to_ua(vcpu->kvm, tce, &ua)) {
+ 			ret = H_PARAMETER;
+-			goto invalidate_exit;
++			goto unlock_exit;
+ 		}
+ 
+ 		list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
+@@ -685,19 +694,15 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
+ 					iommu_tce_direction(tce));
+ 
+ 			if (ret != H_SUCCESS) {
+-				kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl,
+-						entry);
+-				goto invalidate_exit;
++				kvmppc_clear_tce(vcpu->kvm->mm, stt, stit->tbl,
++						 entry + i);
++				goto unlock_exit;
+ 			}
+ 		}
+ 
+ 		kvmppc_tce_put(stt, entry + i, tce);
+ 	}
+ 
+-invalidate_exit:
+-	list_for_each_entry_lockless(stit, &stt->iommu_tables, next)
+-		iommu_tce_kill(stit->tbl, entry, npages);
+-
+ unlock_exit:
+ 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
+ 
+@@ -736,20 +741,16 @@ long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu,
+ 				continue;
+ 
+ 			if (ret == H_TOO_HARD)
+-				goto invalidate_exit;
++				return ret;
+ 
+ 			WARN_ON_ONCE(1);
+-			kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry);
++			kvmppc_clear_tce(vcpu->kvm->mm, stt, stit->tbl, entry + i);
+ 		}
+ 	}
+ 
+ 	for (i = 0; i < npages; ++i, ioba += (1ULL << stt->page_shift))
+ 		kvmppc_tce_put(stt, ioba >> stt->page_shift, tce_value);
+ 
+-invalidate_exit:
+-	list_for_each_entry_lockless(stit, &stt->iommu_tables, next)
+-		iommu_tce_kill(stit->tbl, ioba >> stt->page_shift, npages);
+-
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(kvmppc_h_stuff_tce);
+diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
+index e5ba96c41f3fc..57af53a6a2d84 100644
+--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
++++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
+@@ -247,13 +247,19 @@ static void iommu_tce_kill_rm(struct iommu_table *tbl,
+ 		tbl->it_ops->tce_kill(tbl, entry, pages, true);
+ }
+ 
+-static void kvmppc_rm_clear_tce(struct kvm *kvm, struct iommu_table *tbl,
+-		unsigned long entry)
++static void kvmppc_rm_clear_tce(struct kvm *kvm, struct kvmppc_spapr_tce_table *stt,
++		struct iommu_table *tbl, unsigned long entry)
+ {
+-	unsigned long hpa = 0;
+-	enum dma_data_direction dir = DMA_NONE;
++	unsigned long i;
++	unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift);
++	unsigned long io_entry = entry << (stt->page_shift - tbl->it_page_shift);
++
++	for (i = 0; i < subpages; ++i) {
++		unsigned long hpa = 0;
++		enum dma_data_direction dir = DMA_NONE;
+ 
+-	iommu_tce_xchg_no_kill_rm(kvm->mm, tbl, entry, &hpa, &dir);
++		iommu_tce_xchg_no_kill_rm(kvm->mm, tbl, io_entry + i, &hpa, &dir);
++	}
+ }
+ 
+ static long kvmppc_rm_tce_iommu_mapped_dec(struct kvm *kvm,
+@@ -316,6 +322,8 @@ static long kvmppc_rm_tce_iommu_unmap(struct kvm *kvm,
+ 			break;
+ 	}
+ 
++	iommu_tce_kill_rm(tbl, io_entry, subpages);
++
+ 	return ret;
+ }
+ 
+@@ -379,6 +387,8 @@ static long kvmppc_rm_tce_iommu_map(struct kvm *kvm,
+ 			break;
+ 	}
+ 
++	iommu_tce_kill_rm(tbl, io_entry, subpages);
++
+ 	return ret;
+ }
+ 
+@@ -424,10 +434,8 @@ long kvmppc_rm_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
+ 			ret = kvmppc_rm_tce_iommu_map(vcpu->kvm, stt,
+ 					stit->tbl, entry, ua, dir);
+ 
+-		iommu_tce_kill_rm(stit->tbl, entry, 1);
+-
+ 		if (ret != H_SUCCESS) {
+-			kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl, entry);
++			kvmppc_rm_clear_tce(vcpu->kvm, stt, stit->tbl, entry);
+ 			return ret;
+ 		}
+ 	}
+@@ -569,7 +577,7 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
+ 		ua = 0;
+ 		if (kvmppc_rm_tce_to_ua(vcpu->kvm, tce, &ua)) {
+ 			ret = H_PARAMETER;
+-			goto invalidate_exit;
++			goto unlock_exit;
+ 		}
+ 
+ 		list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
+@@ -578,19 +586,15 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
+ 					iommu_tce_direction(tce));
+ 
+ 			if (ret != H_SUCCESS) {
+-				kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl,
+-						entry);
+-				goto invalidate_exit;
++				kvmppc_rm_clear_tce(vcpu->kvm, stt, stit->tbl,
++						entry + i);
++				goto unlock_exit;
+ 			}
+ 		}
+ 
+ 		kvmppc_rm_tce_put(stt, entry + i, tce);
+ 	}
+ 
+-invalidate_exit:
+-	list_for_each_entry_lockless(stit, &stt->iommu_tables, next)
+-		iommu_tce_kill_rm(stit->tbl, entry, npages);
+-
+ unlock_exit:
+ 	if (!prereg)
+ 		arch_spin_unlock(&kvm->mmu_lock.rlock.raw_lock);
+@@ -632,20 +636,16 @@ long kvmppc_rm_h_stuff_tce(struct kvm_vcpu *vcpu,
+ 				continue;
+ 
+ 			if (ret == H_TOO_HARD)
+-				goto invalidate_exit;
++				return ret;
+ 
+ 			WARN_ON_ONCE_RM(1);
+-			kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl, entry);
++			kvmppc_rm_clear_tce(vcpu->kvm, stt, stit->tbl, entry + i);
+ 		}
+ 	}
+ 
+ 	for (i = 0; i < npages; ++i, ioba += (1ULL << stt->page_shift))
+ 		kvmppc_rm_tce_put(stt, ioba >> stt->page_shift, tce_value);
+ 
+-invalidate_exit:
+-	list_for_each_entry_lockless(stit, &stt->iommu_tables, next)
+-		iommu_tce_kill_rm(stit->tbl, ioba >> stt->page_shift, npages);
+-
+ 	return ret;
+ }
+ 
+diff --git a/arch/powerpc/perf/power9-pmu.c b/arch/powerpc/perf/power9-pmu.c
+index 2a57e93a79dcf..7245355bee28b 100644
+--- a/arch/powerpc/perf/power9-pmu.c
++++ b/arch/powerpc/perf/power9-pmu.c
+@@ -133,11 +133,11 @@ int p9_dd22_bl_ev[] = {
+ 
+ /* Table of alternatives, sorted by column 0 */
+ static const unsigned int power9_event_alternatives[][MAX_ALT] = {
+-	{ PM_INST_DISP,			PM_INST_DISP_ALT },
+-	{ PM_RUN_CYC_ALT,		PM_RUN_CYC },
+-	{ PM_RUN_INST_CMPL_ALT,		PM_RUN_INST_CMPL },
+-	{ PM_LD_MISS_L1,		PM_LD_MISS_L1_ALT },
+ 	{ PM_BR_2PATH,			PM_BR_2PATH_ALT },
++	{ PM_INST_DISP,			PM_INST_DISP_ALT },
++	{ PM_RUN_CYC_ALT,               PM_RUN_CYC },
++	{ PM_LD_MISS_L1,                PM_LD_MISS_L1_ALT },
++	{ PM_RUN_INST_CMPL_ALT,         PM_RUN_INST_CMPL },
+ };
+ 
+ static int power9_get_alternatives(u64 event, unsigned int flags, u64 alt[])
+diff --git a/arch/x86/include/asm/compat.h b/arch/x86/include/asm/compat.h
+index 0e327a01f50fb..46a067bd7e0ba 100644
+--- a/arch/x86/include/asm/compat.h
++++ b/arch/x86/include/asm/compat.h
+@@ -29,15 +29,13 @@ typedef u32		compat_caddr_t;
+ typedef __kernel_fsid_t	compat_fsid_t;
+ 
+ struct compat_stat {
+-	compat_dev_t	st_dev;
+-	u16		__pad1;
++	u32		st_dev;
+ 	compat_ino_t	st_ino;
+ 	compat_mode_t	st_mode;
+ 	compat_nlink_t	st_nlink;
+ 	__compat_uid_t	st_uid;
+ 	__compat_gid_t	st_gid;
+-	compat_dev_t	st_rdev;
+-	u16		__pad2;
++	u32		st_rdev;
+ 	u32		st_size;
+ 	u32		st_blksize;
+ 	u32		st_blocks;
+diff --git a/arch/xtensa/kernel/coprocessor.S b/arch/xtensa/kernel/coprocessor.S
+index 45cc0ae0af6f9..c7b9f12896f20 100644
+--- a/arch/xtensa/kernel/coprocessor.S
++++ b/arch/xtensa/kernel/coprocessor.S
+@@ -29,7 +29,7 @@
+ 	.if XTENSA_HAVE_COPROCESSOR(x);					\
+ 		.align 4;						\
+ 	.Lsave_cp_regs_cp##x:						\
+-		xchal_cp##x##_store a2 a4 a5 a6 a7;			\
++		xchal_cp##x##_store a2 a3 a4 a5 a6;			\
+ 		jx	a0;						\
+ 	.endif
+ 
+@@ -46,7 +46,7 @@
+ 	.if XTENSA_HAVE_COPROCESSOR(x);					\
+ 		.align 4;						\
+ 	.Lload_cp_regs_cp##x:						\
+-		xchal_cp##x##_load a2 a4 a5 a6 a7;			\
++		xchal_cp##x##_load a2 a3 a4 a5 a6;			\
+ 		jx	a0;						\
+ 	.endif
+ 
+diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
+index 0dde21e0d3de4..ad1841cecdfb7 100644
+--- a/arch/xtensa/kernel/jump_label.c
++++ b/arch/xtensa/kernel/jump_label.c
+@@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
+ {
+ 	struct patch *patch = data;
+ 
+-	if (atomic_inc_return(&patch->cpu_count) == 1) {
++	if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
+ 		local_patch_text(patch->addr, patch->data, patch->sz);
+ 		atomic_inc(&patch->cpu_count);
+ 	} else {
+diff --git a/block/ioctl.c b/block/ioctl.c
+index ed240e170e148..e7eed7dadb5cf 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -679,7 +679,7 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
+ 			       (bdev->bd_bdi->ra_pages * PAGE_SIZE) / 512);
+ 	case BLKGETSIZE:
+ 		size = i_size_read(bdev->bd_inode);
+-		if ((size >> 9) > ~0UL)
++		if ((size >> 9) > ~(compat_ulong_t)0)
+ 			return -EFBIG;
+ 		return compat_put_ulong(argp, size >> 9);
+ 
+diff --git a/drivers/ata/pata_marvell.c b/drivers/ata/pata_marvell.c
+index b066809ba9a11..c56f4043b0cc0 100644
+--- a/drivers/ata/pata_marvell.c
++++ b/drivers/ata/pata_marvell.c
+@@ -83,6 +83,8 @@ static int marvell_cable_detect(struct ata_port *ap)
+ 	switch(ap->port_no)
+ 	{
+ 	case 0:
++		if (!ap->ioaddr.bmdma_addr)
++			return ATA_CBL_PATA_UNK;
+ 		if (ioread8(ap->ioaddr.bmdma_addr + 1) & 1)
+ 			return ATA_CBL_PATA40;
+ 		return ATA_CBL_PATA80;
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 90afba0b36fe9..47552db6b8dc3 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -1390,7 +1390,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ {
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+ 	struct at_xdmac		*atxdmac = to_at_xdmac(atchan->chan.device);
+-	struct at_xdmac_desc	*desc, *_desc;
++	struct at_xdmac_desc	*desc, *_desc, *iter;
+ 	struct list_head	*descs_list;
+ 	enum dma_status		ret;
+ 	int			residue, retry;
+@@ -1505,11 +1505,13 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ 	 * microblock.
+ 	 */
+ 	descs_list = &desc->descs_list;
+-	list_for_each_entry_safe(desc, _desc, descs_list, desc_node) {
+-		dwidth = at_xdmac_get_dwidth(desc->lld.mbr_cfg);
+-		residue -= (desc->lld.mbr_ubc & 0xffffff) << dwidth;
+-		if ((desc->lld.mbr_nda & 0xfffffffc) == cur_nda)
++	list_for_each_entry_safe(iter, _desc, descs_list, desc_node) {
++		dwidth = at_xdmac_get_dwidth(iter->lld.mbr_cfg);
++		residue -= (iter->lld.mbr_ubc & 0xffffff) << dwidth;
++		if ((iter->lld.mbr_nda & 0xfffffffc) == cur_nda) {
++			desc = iter;
+ 			break;
++		}
+ 	}
+ 	residue += cur_ubc << dwidth;
+ 
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index 7b41cdff1a2ce..51af0dfc3c63e 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -1098,6 +1098,9 @@ static ssize_t wq_max_transfer_size_store(struct device *dev, struct device_attr
+ 	u64 xfer_size;
+ 	int rc;
+ 
++	if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
++		return -EPERM;
++
+ 	if (wq->state != IDXD_WQ_DISABLED)
+ 		return -EPERM;
+ 
+@@ -1132,6 +1135,9 @@ static ssize_t wq_max_batch_size_store(struct device *dev, struct device_attribu
+ 	u64 batch_size;
+ 	int rc;
+ 
++	if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
++		return -EPERM;
++
+ 	if (wq->state != IDXD_WQ_DISABLED)
+ 		return -EPERM;
+ 
+diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
+index 306f93e4b26a8..792c91cd16080 100644
+--- a/drivers/dma/imx-sdma.c
++++ b/drivers/dma/imx-sdma.c
+@@ -1789,7 +1789,7 @@ static int sdma_event_remap(struct sdma_engine *sdma)
+ 	u32 reg, val, shift, num_map, i;
+ 	int ret = 0;
+ 
+-	if (IS_ERR(np) || IS_ERR(gpr_np))
++	if (IS_ERR(np) || !gpr_np)
+ 		goto out;
+ 
+ 	event_remap = of_find_property(np, propname, NULL);
+@@ -1837,7 +1837,7 @@ static int sdma_event_remap(struct sdma_engine *sdma)
+ 	}
+ 
+ out:
+-	if (!IS_ERR(gpr_np))
++	if (gpr_np)
+ 		of_node_put(gpr_np);
+ 
+ 	return ret;
+diff --git a/drivers/dma/mediatek/mtk-uart-apdma.c b/drivers/dma/mediatek/mtk-uart-apdma.c
+index 375e7e647df6b..a1517ef1f4a01 100644
+--- a/drivers/dma/mediatek/mtk-uart-apdma.c
++++ b/drivers/dma/mediatek/mtk-uart-apdma.c
+@@ -274,7 +274,7 @@ static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
+ 	unsigned int status;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(mtkd->ddev.dev);
++	ret = pm_runtime_resume_and_get(mtkd->ddev.dev);
+ 	if (ret < 0) {
+ 		pm_runtime_put_noidle(chan->device->dev);
+ 		return ret;
+@@ -288,18 +288,21 @@ static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
+ 	ret = readx_poll_timeout(readl, c->base + VFF_EN,
+ 			  status, !status, 10, 100);
+ 	if (ret)
+-		return ret;
++		goto err_pm;
+ 
+ 	ret = request_irq(c->irq, mtk_uart_apdma_irq_handler,
+ 			  IRQF_TRIGGER_NONE, KBUILD_MODNAME, chan);
+ 	if (ret < 0) {
+ 		dev_err(chan->device->dev, "Can't request dma IRQ\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm;
+ 	}
+ 
+ 	if (mtkd->support_33bits)
+ 		mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
+ 
++err_pm:
++	pm_runtime_put_noidle(mtkd->ddev.dev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
+index 92906b56b1a2b..fea44dc0484b5 100644
+--- a/drivers/edac/synopsys_edac.c
++++ b/drivers/edac/synopsys_edac.c
+@@ -163,6 +163,11 @@
+ #define ECC_STAT_CECNT_SHIFT		8
+ #define ECC_STAT_BITNUM_MASK		0x7F
+ 
++/* ECC error count register definitions */
++#define ECC_ERRCNT_UECNT_MASK		0xFFFF0000
++#define ECC_ERRCNT_UECNT_SHIFT		16
++#define ECC_ERRCNT_CECNT_MASK		0xFFFF
++
+ /* DDR QOS Interrupt register definitions */
+ #define DDR_QOS_IRQ_STAT_OFST		0x20200
+ #define DDR_QOSUE_MASK			0x4
+@@ -418,15 +423,16 @@ static int zynqmp_get_error_info(struct synps_edac_priv *priv)
+ 	base = priv->baseaddr;
+ 	p = &priv->stat;
+ 
++	regval = readl(base + ECC_ERRCNT_OFST);
++	p->ce_cnt = regval & ECC_ERRCNT_CECNT_MASK;
++	p->ue_cnt = (regval & ECC_ERRCNT_UECNT_MASK) >> ECC_ERRCNT_UECNT_SHIFT;
++	if (!p->ce_cnt)
++		goto ue_err;
++
+ 	regval = readl(base + ECC_STAT_OFST);
+ 	if (!regval)
+ 		return 1;
+ 
+-	p->ce_cnt = (regval & ECC_STAT_CECNT_MASK) >> ECC_STAT_CECNT_SHIFT;
+-	p->ue_cnt = (regval & ECC_STAT_UECNT_MASK) >> ECC_STAT_UECNT_SHIFT;
+-	if (!p->ce_cnt)
+-		goto ue_err;
+-
+ 	p->ceinfo.bitpos = (regval & ECC_STAT_BITNUM_MASK);
+ 
+ 	regval = readl(base + ECC_CEADDR0_OFST);
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index d180787482009..59d8affad343a 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1612,8 +1612,6 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
+ 
+ 	gpiochip_set_irq_hooks(gc);
+ 
+-	acpi_gpiochip_request_interrupts(gc);
+-
+ 	/*
+ 	 * Using barrier() here to prevent compiler from reordering
+ 	 * gc->irq.initialized before initialization of above
+@@ -1623,6 +1621,8 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
+ 
+ 	gc->irq.initialized = true;
+ 
++	acpi_gpiochip_request_interrupts(gc);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+index 83423092de2ff..da07993339702 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+@@ -179,7 +179,10 @@ static void mdp5_plane_reset(struct drm_plane *plane)
+ 		drm_framebuffer_put(plane->state->fb);
+ 
+ 	kfree(to_mdp5_plane_state(plane->state));
++	plane->state = NULL;
+ 	mdp5_state = kzalloc(sizeof(*mdp5_state), GFP_KERNEL);
++	if (!mdp5_state)
++		return;
+ 
+ 	/* assign default blend parameters */
+ 	mdp5_state->alpha = 255;
+diff --git a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
+index bbdd086be7f59..4b92c63414905 100644
+--- a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
++++ b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
+@@ -229,7 +229,7 @@ static void rpi_touchscreen_i2c_write(struct rpi_touchscreen *ts,
+ 
+ 	ret = i2c_smbus_write_byte_data(ts->i2c, reg, val);
+ 	if (ret)
+-		dev_err(&ts->dsi->dev, "I2C write failed: %d\n", ret);
++		dev_err(&ts->i2c->dev, "I2C write failed: %d\n", ret);
+ }
+ 
+ static int rpi_touchscreen_write(struct rpi_touchscreen *ts, u16 reg, u32 val)
+@@ -265,7 +265,7 @@ static int rpi_touchscreen_noop(struct drm_panel *panel)
+ 	return 0;
+ }
+ 
+-static int rpi_touchscreen_enable(struct drm_panel *panel)
++static int rpi_touchscreen_prepare(struct drm_panel *panel)
+ {
+ 	struct rpi_touchscreen *ts = panel_to_ts(panel);
+ 	int i;
+@@ -295,6 +295,13 @@ static int rpi_touchscreen_enable(struct drm_panel *panel)
+ 	rpi_touchscreen_write(ts, DSI_STARTDSI, 0x01);
+ 	msleep(100);
+ 
++	return 0;
++}
++
++static int rpi_touchscreen_enable(struct drm_panel *panel)
++{
++	struct rpi_touchscreen *ts = panel_to_ts(panel);
++
+ 	/* Turn on the backlight. */
+ 	rpi_touchscreen_i2c_write(ts, REG_PWM, 255);
+ 
+@@ -349,7 +356,7 @@ static int rpi_touchscreen_get_modes(struct drm_panel *panel,
+ static const struct drm_panel_funcs rpi_touchscreen_funcs = {
+ 	.disable = rpi_touchscreen_disable,
+ 	.unprepare = rpi_touchscreen_noop,
+-	.prepare = rpi_touchscreen_noop,
++	.prepare = rpi_touchscreen_prepare,
+ 	.enable = rpi_touchscreen_enable,
+ 	.get_modes = rpi_touchscreen_get_modes,
+ };
+diff --git a/drivers/gpu/drm/vc4/vc4_dsi.c b/drivers/gpu/drm/vc4/vc4_dsi.c
+index eaf276978ee7f..ad84b56f4091d 100644
+--- a/drivers/gpu/drm/vc4/vc4_dsi.c
++++ b/drivers/gpu/drm/vc4/vc4_dsi.c
+@@ -835,7 +835,7 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
+ 	unsigned long phy_clock;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret) {
+ 		DRM_ERROR("Failed to runtime PM enable on DSI%d\n", dsi->port);
+ 		return;
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 2836d44094aba..b3d8d9e0e6f6e 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -607,18 +607,17 @@ static void start_io_acct(struct dm_io *io)
+ 				    false, 0, &io->stats_aux);
+ }
+ 
+-static void end_io_acct(struct dm_io *io)
++static void end_io_acct(struct mapped_device *md, struct bio *bio,
++			unsigned long start_time, struct dm_stats_aux *stats_aux)
+ {
+-	struct mapped_device *md = io->md;
+-	struct bio *bio = io->orig_bio;
+-	unsigned long duration = jiffies - io->start_time;
++	unsigned long duration = jiffies - start_time;
+ 
+-	bio_end_io_acct(bio, io->start_time);
++	bio_end_io_acct(bio, start_time);
+ 
+ 	if (unlikely(dm_stats_used(&md->stats)))
+ 		dm_stats_account_io(&md->stats, bio_data_dir(bio),
+ 				    bio->bi_iter.bi_sector, bio_sectors(bio),
+-				    true, duration, &io->stats_aux);
++				    true, duration, stats_aux);
+ 
+ 	/* nudge anyone waiting on suspend queue */
+ 	if (unlikely(wq_has_sleeper(&md->wait)))
+@@ -903,6 +902,8 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
+ 	blk_status_t io_error;
+ 	struct bio *bio;
+ 	struct mapped_device *md = io->md;
++	unsigned long start_time = 0;
++	struct dm_stats_aux stats_aux;
+ 
+ 	/* Push-back supersedes any I/O errors */
+ 	if (unlikely(error)) {
+@@ -929,8 +930,10 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
+ 
+ 		io_error = io->status;
+ 		bio = io->orig_bio;
+-		end_io_acct(io);
++		start_time = io->start_time;
++		stats_aux = io->stats_aux;
+ 		free_io(md, io);
++		end_io_acct(md, bio, start_time, &stats_aux);
+ 
+ 		if (io_error == BLK_STS_DM_REQUEUE)
+ 			return;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index 0cf8ae8aeac83..2fb4126ae8d8a 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -480,8 +480,8 @@ int aq_nic_start(struct aq_nic_s *self)
+ 	if (err < 0)
+ 		goto err_exit;
+ 
+-	for (i = 0U, aq_vec = self->aq_vec[0];
+-		self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i]) {
++	for (i = 0U; self->aq_vecs > i; ++i) {
++		aq_vec = self->aq_vec[i];
+ 		err = aq_vec_start(aq_vec);
+ 		if (err < 0)
+ 			goto err_exit;
+@@ -511,8 +511,8 @@ int aq_nic_start(struct aq_nic_s *self)
+ 		mod_timer(&self->polling_timer, jiffies +
+ 			  AQ_CFG_POLLING_TIMER_INTERVAL);
+ 	} else {
+-		for (i = 0U, aq_vec = self->aq_vec[0];
+-			self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i]) {
++		for (i = 0U; self->aq_vecs > i; ++i) {
++			aq_vec = self->aq_vec[i];
+ 			err = aq_pci_func_alloc_irq(self, i, self->ndev->name,
+ 						    aq_vec_isr, aq_vec,
+ 						    aq_vec_get_affinity_mask(aq_vec));
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index 1826253f97dc4..bdfd462c74db9 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -450,22 +450,22 @@ err_exit:
+ 
+ static int aq_pm_freeze(struct device *dev)
+ {
+-	return aq_suspend_common(dev, false);
++	return aq_suspend_common(dev, true);
+ }
+ 
+ static int aq_pm_suspend_poweroff(struct device *dev)
+ {
+-	return aq_suspend_common(dev, true);
++	return aq_suspend_common(dev, false);
+ }
+ 
+ static int aq_pm_thaw(struct device *dev)
+ {
+-	return atl_resume_common(dev, false);
++	return atl_resume_common(dev, true);
+ }
+ 
+ static int aq_pm_resume_restore(struct device *dev)
+ {
+-	return atl_resume_common(dev, true);
++	return atl_resume_common(dev, false);
+ }
+ 
+ static const struct dev_pm_ops aq_pm_ops = {
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_vec.c b/drivers/net/ethernet/aquantia/atlantic/aq_vec.c
+index f4774cf051c97..6ab1f3212d246 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_vec.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_vec.c
+@@ -43,8 +43,8 @@ static int aq_vec_poll(struct napi_struct *napi, int budget)
+ 	if (!self) {
+ 		err = -EINVAL;
+ 	} else {
+-		for (i = 0U, ring = self->ring[0];
+-			self->tx_rings > i; ++i, ring = self->ring[i]) {
++		for (i = 0U; self->tx_rings > i; ++i) {
++			ring = self->ring[i];
+ 			u64_stats_update_begin(&ring[AQ_VEC_RX_ID].stats.rx.syncp);
+ 			ring[AQ_VEC_RX_ID].stats.rx.polls++;
+ 			u64_stats_update_end(&ring[AQ_VEC_RX_ID].stats.rx.syncp);
+@@ -182,8 +182,8 @@ int aq_vec_init(struct aq_vec_s *self, const struct aq_hw_ops *aq_hw_ops,
+ 	self->aq_hw_ops = aq_hw_ops;
+ 	self->aq_hw = aq_hw;
+ 
+-	for (i = 0U, ring = self->ring[0];
+-		self->tx_rings > i; ++i, ring = self->ring[i]) {
++	for (i = 0U; self->tx_rings > i; ++i) {
++		ring = self->ring[i];
+ 		err = aq_ring_init(&ring[AQ_VEC_TX_ID], ATL_RING_TX);
+ 		if (err < 0)
+ 			goto err_exit;
+@@ -224,8 +224,8 @@ int aq_vec_start(struct aq_vec_s *self)
+ 	unsigned int i = 0U;
+ 	int err = 0;
+ 
+-	for (i = 0U, ring = self->ring[0];
+-		self->tx_rings > i; ++i, ring = self->ring[i]) {
++	for (i = 0U; self->tx_rings > i; ++i) {
++		ring = self->ring[i];
+ 		err = self->aq_hw_ops->hw_ring_tx_start(self->aq_hw,
+ 							&ring[AQ_VEC_TX_ID]);
+ 		if (err < 0)
+@@ -248,8 +248,8 @@ void aq_vec_stop(struct aq_vec_s *self)
+ 	struct aq_ring_s *ring = NULL;
+ 	unsigned int i = 0U;
+ 
+-	for (i = 0U, ring = self->ring[0];
+-		self->tx_rings > i; ++i, ring = self->ring[i]) {
++	for (i = 0U; self->tx_rings > i; ++i) {
++		ring = self->ring[i];
+ 		self->aq_hw_ops->hw_ring_tx_stop(self->aq_hw,
+ 						 &ring[AQ_VEC_TX_ID]);
+ 
+@@ -268,8 +268,8 @@ void aq_vec_deinit(struct aq_vec_s *self)
+ 	if (!self)
+ 		goto err_exit;
+ 
+-	for (i = 0U, ring = self->ring[0];
+-		self->tx_rings > i; ++i, ring = self->ring[i]) {
++	for (i = 0U; self->tx_rings > i; ++i) {
++		ring = self->ring[i];
+ 		aq_ring_tx_clean(&ring[AQ_VEC_TX_ID]);
+ 		aq_ring_rx_deinit(&ring[AQ_VEC_RX_ID]);
+ 	}
+@@ -297,8 +297,8 @@ void aq_vec_ring_free(struct aq_vec_s *self)
+ 	if (!self)
+ 		goto err_exit;
+ 
+-	for (i = 0U, ring = self->ring[0];
+-		self->tx_rings > i; ++i, ring = self->ring[i]) {
++	for (i = 0U; self->tx_rings > i; ++i) {
++		ring = self->ring[i];
+ 		aq_ring_free(&ring[AQ_VEC_TX_ID]);
+ 		if (i < self->rx_rings)
+ 			aq_ring_free(&ring[AQ_VEC_RX_ID]);
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index f29ec765d684a..bd13f91efe7c5 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1531,6 +1531,7 @@ static void macb_tx_restart(struct macb_queue *queue)
+ 	unsigned int head = queue->tx_head;
+ 	unsigned int tail = queue->tx_tail;
+ 	struct macb *bp = queue->bp;
++	unsigned int head_idx, tbqp;
+ 
+ 	if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ 		queue_writel(queue, ISR, MACB_BIT(TXUBR));
+@@ -1538,6 +1539,13 @@ static void macb_tx_restart(struct macb_queue *queue)
+ 	if (head == tail)
+ 		return;
+ 
++	tbqp = queue_readl(queue, TBQP) / macb_dma_desc_get_size(bp);
++	tbqp = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, tbqp));
++	head_idx = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, head));
++
++	if (tbqp == head_idx)
++		return;
++
+ 	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+index 1268996b70301..2f9075429c43e 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+@@ -489,11 +489,15 @@ static int dpaa_get_ts_info(struct net_device *net_dev,
+ 	info->phc_index = -1;
+ 
+ 	fman_node = of_get_parent(mac_node);
+-	if (fman_node)
++	if (fman_node) {
+ 		ptp_node = of_parse_phandle(fman_node, "ptimer-handle", 0);
++		of_node_put(fman_node);
++	}
+ 
+-	if (ptp_node)
++	if (ptp_node) {
+ 		ptp_dev = of_find_device_by_node(ptp_node);
++		of_node_put(ptp_node);
++	}
+ 
+ 	if (ptp_dev)
+ 		ptp = platform_get_drvdata(ptp_dev);
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 15b1503d5b6ca..1f51252b465a6 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -1006,8 +1006,8 @@ static s32 e1000_platform_pm_pch_lpt(struct e1000_hw *hw, bool link)
+ {
+ 	u32 reg = link << (E1000_LTRV_REQ_SHIFT + E1000_LTRV_NOSNOOP_SHIFT) |
+ 	    link << E1000_LTRV_REQ_SHIFT | E1000_LTRV_SEND;
+-	u16 max_ltr_enc_d = 0;	/* maximum LTR decoded by platform */
+-	u16 lat_enc_d = 0;	/* latency decoded */
++	u32 max_ltr_enc_d = 0;	/* maximum LTR decoded by platform */
++	u32 lat_enc_d = 0;	/* latency decoded */
+ 	u16 lat_enc = 0;	/* latency encoded */
+ 
+ 	if (link) {
+diff --git a/drivers/net/ethernet/intel/igc/igc_i225.c b/drivers/net/ethernet/intel/igc/igc_i225.c
+index 553d6bc78e6bd..624236a4202e5 100644
+--- a/drivers/net/ethernet/intel/igc/igc_i225.c
++++ b/drivers/net/ethernet/intel/igc/igc_i225.c
+@@ -156,8 +156,15 @@ void igc_release_swfw_sync_i225(struct igc_hw *hw, u16 mask)
+ {
+ 	u32 swfw_sync;
+ 
+-	while (igc_get_hw_semaphore_i225(hw))
+-		; /* Empty */
++	/* Releasing the resource requires first getting the HW semaphore.
++	 * If we fail to get the semaphore, there is nothing we can do,
++	 * except log an error and quit. We are not allowed to hang here
++	 * indefinitely, as it may cause denial of service or system crash.
++	 */
++	if (igc_get_hw_semaphore_i225(hw)) {
++		hw_dbg("Failed to release SW_FW_SYNC.\n");
++		return;
++	}
+ 
+ 	swfw_sync = rd32(IGC_SW_FW_SYNC);
+ 	swfw_sync &= ~mask;
+diff --git a/drivers/net/ethernet/intel/igc/igc_phy.c b/drivers/net/ethernet/intel/igc/igc_phy.c
+index e380b7a3ea63b..8de4de2e56362 100644
+--- a/drivers/net/ethernet/intel/igc/igc_phy.c
++++ b/drivers/net/ethernet/intel/igc/igc_phy.c
+@@ -583,7 +583,7 @@ static s32 igc_read_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 *data)
+ 	 * the lower time out
+ 	 */
+ 	for (i = 0; i < IGC_GEN_POLL_TIMEOUT; i++) {
+-		usleep_range(500, 1000);
++		udelay(50);
+ 		mdic = rd32(IGC_MDIC);
+ 		if (mdic & IGC_MDIC_READY)
+ 			break;
+@@ -640,7 +640,7 @@ static s32 igc_write_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 data)
+ 	 * the lower time out
+ 	 */
+ 	for (i = 0; i < IGC_GEN_POLL_TIMEOUT; i++) {
+-		usleep_range(500, 1000);
++		udelay(50);
+ 		mdic = rd32(IGC_MDIC);
+ 		if (mdic & IGC_MDIC_READY)
+ 			break;
+diff --git a/drivers/net/ethernet/micrel/Kconfig b/drivers/net/ethernet/micrel/Kconfig
+index 9ceb7e1fb1696..42bc014136fe3 100644
+--- a/drivers/net/ethernet/micrel/Kconfig
++++ b/drivers/net/ethernet/micrel/Kconfig
+@@ -37,7 +37,6 @@ config KS8851
+ config KS8851_MLL
+ 	tristate "Micrel KS8851 MLL"
+ 	depends on HAS_IOMEM
+-	depends on PTP_1588_CLOCK_OPTIONAL
+ 	select MII
+ 	select CRC32
+ 	select EEPROM_93CX6
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+index 07b1b8374cd26..53efcc9c40e28 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+@@ -68,9 +68,9 @@ static int init_systime(void __iomem *ioaddr, u32 sec, u32 nsec)
+ 	writel(value, ioaddr + PTP_TCR);
+ 
+ 	/* wait for present system time initialize to complete */
+-	return readl_poll_timeout(ioaddr + PTP_TCR, value,
++	return readl_poll_timeout_atomic(ioaddr + PTP_TCR, value,
+ 				 !(value & PTP_TCR_TSINIT),
+-				 10000, 100000);
++				 10, 100000);
+ }
+ 
+ static int config_addend(void __iomem *ioaddr, u32 addend)
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 48fbdce6a70e7..72d670667f64f 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -710,11 +710,11 @@ static int vxlan_fdb_append(struct vxlan_fdb *f,
+ 
+ 	rd = kmalloc(sizeof(*rd), GFP_ATOMIC);
+ 	if (rd == NULL)
+-		return -ENOBUFS;
++		return -ENOMEM;
+ 
+ 	if (dst_cache_init(&rd->dst_cache, GFP_ATOMIC)) {
+ 		kfree(rd);
+-		return -ENOBUFS;
++		return -ENOMEM;
+ 	}
+ 
+ 	rd->remote_ip = *ip;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 6d5d5c39c6359..9929e90866f04 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -557,7 +557,7 @@ enum brcmf_sdio_frmtype {
+ 	BRCMF_SDIO_FT_SUB,
+ };
+ 
+-#define SDIOD_DRVSTR_KEY(chip, pmu)     (((chip) << 16) | (pmu))
++#define SDIOD_DRVSTR_KEY(chip, pmu)     (((unsigned int)(chip) << 16) | (pmu))
+ 
+ /* SDIO Pad drive strength to select value mappings */
+ struct sdiod_drive_str {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
+index ecaf85b483ac3..e57e49a722dc0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
+@@ -80,7 +80,7 @@ mt76x2e_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	mt76_rmw_field(dev, 0x15a10, 0x1f << 16, 0x9);
+ 
+ 	/* RG_SSUSB_G1_CDR_BIC_LTR = 0xf */
+-	mt76_rmw_field(dev, 0x15a0c, 0xf << 28, 0xf);
++	mt76_rmw_field(dev, 0x15a0c, 0xfU << 28, 0xf);
+ 
+ 	/* RG_SSUSB_CDR_BR_PE1D = 0x3 */
+ 	mt76_rmw_field(dev, 0x15c58, 0x3 << 6, 0x3);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 853b9a24f744e..ad4f1cfbad2e0 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1270,6 +1270,8 @@ static int nvme_process_ns_desc(struct nvme_ctrl *ctrl, struct nvme_ns_ids *ids,
+ 				 warn_str, cur->nidl);
+ 			return -1;
+ 		}
++		if (ctrl->quirks & NVME_QUIRK_BOGUS_NID)
++			return NVME_NIDT_EUI64_LEN;
+ 		memcpy(ids->eui64, data + sizeof(*cur), NVME_NIDT_EUI64_LEN);
+ 		return NVME_NIDT_EUI64_LEN;
+ 	case NVME_NIDT_NGUID:
+@@ -1278,6 +1280,8 @@ static int nvme_process_ns_desc(struct nvme_ctrl *ctrl, struct nvme_ns_ids *ids,
+ 				 warn_str, cur->nidl);
+ 			return -1;
+ 		}
++		if (ctrl->quirks & NVME_QUIRK_BOGUS_NID)
++			return NVME_NIDT_NGUID_LEN;
+ 		memcpy(ids->nguid, data + sizeof(*cur), NVME_NIDT_NGUID_LEN);
+ 		return NVME_NIDT_NGUID_LEN;
+ 	case NVME_NIDT_UUID:
+@@ -1286,6 +1290,8 @@ static int nvme_process_ns_desc(struct nvme_ctrl *ctrl, struct nvme_ns_ids *ids,
+ 				 warn_str, cur->nidl);
+ 			return -1;
+ 		}
++		if (ctrl->quirks & NVME_QUIRK_BOGUS_NID)
++			return NVME_NIDT_UUID_LEN;
+ 		uuid_copy(&ids->uuid, data + sizeof(*cur));
+ 		return NVME_NIDT_UUID_LEN;
+ 	case NVME_NIDT_CSI:
+@@ -1381,12 +1387,18 @@ static int nvme_identify_ns(struct nvme_ctrl *ctrl, unsigned nsid,
+ 	if ((*id)->ncap == 0) /* namespace not allocated or attached */
+ 		goto out_free_id;
+ 
+-	if (ctrl->vs >= NVME_VS(1, 1, 0) &&
+-	    !memchr_inv(ids->eui64, 0, sizeof(ids->eui64)))
+-		memcpy(ids->eui64, (*id)->eui64, sizeof(ids->eui64));
+-	if (ctrl->vs >= NVME_VS(1, 2, 0) &&
+-	    !memchr_inv(ids->nguid, 0, sizeof(ids->nguid)))
+-		memcpy(ids->nguid, (*id)->nguid, sizeof(ids->nguid));
++
++	if (ctrl->quirks & NVME_QUIRK_BOGUS_NID) {
++		dev_info(ctrl->device,
++			 "Ignoring bogus Namespace Identifiers\n");
++	} else {
++		if (ctrl->vs >= NVME_VS(1, 1, 0) &&
++		    !memchr_inv(ids->eui64, 0, sizeof(ids->eui64)))
++			memcpy(ids->eui64, (*id)->eui64, sizeof(ids->eui64));
++		if (ctrl->vs >= NVME_VS(1, 2, 0) &&
++		    !memchr_inv(ids->nguid, 0, sizeof(ids->nguid)))
++			memcpy(ids->nguid, (*id)->nguid, sizeof(ids->nguid));
++	}
+ 
+ 	return 0;
+ 
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 5dd1dd8021ba1..10e5ae3a8c0df 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -150,6 +150,11 @@ enum nvme_quirks {
+ 	 * encoding the generation sequence number.
+ 	 */
+ 	NVME_QUIRK_SKIP_CID_GEN			= (1 << 17),
++
++	/*
++	 * Reports garbage in the namespace identifiers (eui64, nguid, uuid).
++	 */
++	NVME_QUIRK_BOGUS_NID			= (1 << 18),
+ };
+ 
+ /*
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 97afeb898b253..6939b03a16c58 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3212,7 +3212,10 @@ static const struct pci_device_id nvme_id_table[] = {
+ 		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ 	{ PCI_VDEVICE(INTEL, 0x5845),	/* Qemu emulated controller */
+ 		.driver_data = NVME_QUIRK_IDENTIFY_CNS |
+-				NVME_QUIRK_DISABLE_WRITE_ZEROES, },
++				NVME_QUIRK_DISABLE_WRITE_ZEROES |
++				NVME_QUIRK_BOGUS_NID, },
++	{ PCI_VDEVICE(REDHAT, 0x0010),	/* Qemu emulated controller */
++		.driver_data = NVME_QUIRK_BOGUS_NID, },
+ 	{ PCI_DEVICE(0x126f, 0x2263),	/* Silicon Motion unidentified */
+ 		.driver_data = NVME_QUIRK_NO_NS_DESC_LIST, },
+ 	{ PCI_DEVICE(0x1bb1, 0x0100),   /* Seagate Nytro Flash Storage */
+diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
+index cb2f55f450e4a..7fd11ef5cb8a2 100644
+--- a/drivers/perf/arm_pmu.c
++++ b/drivers/perf/arm_pmu.c
+@@ -398,6 +398,9 @@ validate_group(struct perf_event *event)
+ 	if (!validate_event(event->pmu, &fake_pmu, leader))
+ 		return -EINVAL;
+ 
++	if (event == leader)
++		return 0;
++
+ 	for_each_sibling_event(sibling, leader) {
+ 		if (!validate_event(event->pmu, &fake_pmu, sibling))
+ 			return -EINVAL;
+@@ -487,12 +490,7 @@ __hw_perf_event_init(struct perf_event *event)
+ 		local64_set(&hwc->period_left, hwc->sample_period);
+ 	}
+ 
+-	if (event->group_leader != event) {
+-		if (validate_group(event) != 0)
+-			return -EINVAL;
+-	}
+-
+-	return 0;
++	return validate_group(event);
+ }
+ 
+ static int armpmu_event_init(struct perf_event *event)
+diff --git a/drivers/platform/x86/samsung-laptop.c b/drivers/platform/x86/samsung-laptop.c
+index d5cec6e35bb83..0e456c39a603d 100644
+--- a/drivers/platform/x86/samsung-laptop.c
++++ b/drivers/platform/x86/samsung-laptop.c
+@@ -1121,8 +1121,6 @@ static void kbd_led_set(struct led_classdev *led_cdev,
+ 
+ 	if (value > samsung->kbd_led.max_brightness)
+ 		value = samsung->kbd_led.max_brightness;
+-	else if (value < 0)
+-		value = 0;
+ 
+ 	samsung->kbd_led_wk = value;
+ 	queue_work(samsung->led_workqueue, &samsung->kbd_led_work);
+diff --git a/drivers/reset/tegra/reset-bpmp.c b/drivers/reset/tegra/reset-bpmp.c
+index 24d3395964cc4..4c5bba52b1059 100644
+--- a/drivers/reset/tegra/reset-bpmp.c
++++ b/drivers/reset/tegra/reset-bpmp.c
+@@ -20,6 +20,7 @@ static int tegra_bpmp_reset_common(struct reset_controller_dev *rstc,
+ 	struct tegra_bpmp *bpmp = to_tegra_bpmp(rstc);
+ 	struct mrq_reset_request request;
+ 	struct tegra_bpmp_message msg;
++	int err;
+ 
+ 	memset(&request, 0, sizeof(request));
+ 	request.cmd = command;
+@@ -30,7 +31,13 @@ static int tegra_bpmp_reset_common(struct reset_controller_dev *rstc,
+ 	msg.tx.data = &request;
+ 	msg.tx.size = sizeof(request);
+ 
+-	return tegra_bpmp_transfer(bpmp, &msg);
++	err = tegra_bpmp_transfer(bpmp, &msg);
++	if (err)
++		return err;
++	if (msg.rx.ret)
++		return -EINVAL;
++
++	return 0;
+ }
+ 
+ static int tegra_bpmp_reset_module(struct reset_controller_dev *rstc,
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index 5f7e62f19d83a..3bcadb3dd40d2 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -828,6 +828,37 @@ static int qedi_task_xmit(struct iscsi_task *task)
+ 	return qedi_iscsi_send_ioreq(task);
+ }
+ 
++static void qedi_offload_work(struct work_struct *work)
++{
++	struct qedi_endpoint *qedi_ep =
++		container_of(work, struct qedi_endpoint, offload_work);
++	struct qedi_ctx *qedi;
++	int wait_delay = 5 * HZ;
++	int ret;
++
++	qedi = qedi_ep->qedi;
++
++	ret = qedi_iscsi_offload_conn(qedi_ep);
++	if (ret) {
++		QEDI_ERR(&qedi->dbg_ctx,
++			 "offload error: iscsi_cid=%u, qedi_ep=%p, ret=%d\n",
++			 qedi_ep->iscsi_cid, qedi_ep, ret);
++		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
++		return;
++	}
++
++	ret = wait_event_interruptible_timeout(qedi_ep->tcp_ofld_wait,
++					       (qedi_ep->state ==
++					       EP_STATE_OFLDCONN_COMPL),
++					       wait_delay);
++	if (ret <= 0 || qedi_ep->state != EP_STATE_OFLDCONN_COMPL) {
++		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
++		QEDI_ERR(&qedi->dbg_ctx,
++			 "Offload conn TIMEOUT iscsi_cid=%u, qedi_ep=%p\n",
++			 qedi_ep->iscsi_cid, qedi_ep);
++	}
++}
++
+ static struct iscsi_endpoint *
+ qedi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
+ 		int non_blocking)
+@@ -876,6 +907,7 @@ qedi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
+ 	}
+ 	qedi_ep = ep->dd_data;
+ 	memset(qedi_ep, 0, sizeof(struct qedi_endpoint));
++	INIT_WORK(&qedi_ep->offload_work, qedi_offload_work);
+ 	qedi_ep->state = EP_STATE_IDLE;
+ 	qedi_ep->iscsi_cid = (u32)-1;
+ 	qedi_ep->qedi = qedi;
+@@ -1026,12 +1058,11 @@ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
+ 	qedi_ep = ep->dd_data;
+ 	qedi = qedi_ep->qedi;
+ 
++	flush_work(&qedi_ep->offload_work);
++
+ 	if (qedi_ep->state == EP_STATE_OFLDCONN_START)
+ 		goto ep_exit_recover;
+ 
+-	if (qedi_ep->state != EP_STATE_OFLDCONN_NONE)
+-		flush_work(&qedi_ep->offload_work);
+-
+ 	if (qedi_ep->conn) {
+ 		qedi_conn = qedi_ep->conn;
+ 		conn = qedi_conn->cls_conn->dd_data;
+@@ -1196,37 +1227,6 @@ static int qedi_data_avail(struct qedi_ctx *qedi, u16 vlanid)
+ 	return rc;
+ }
+ 
+-static void qedi_offload_work(struct work_struct *work)
+-{
+-	struct qedi_endpoint *qedi_ep =
+-		container_of(work, struct qedi_endpoint, offload_work);
+-	struct qedi_ctx *qedi;
+-	int wait_delay = 5 * HZ;
+-	int ret;
+-
+-	qedi = qedi_ep->qedi;
+-
+-	ret = qedi_iscsi_offload_conn(qedi_ep);
+-	if (ret) {
+-		QEDI_ERR(&qedi->dbg_ctx,
+-			 "offload error: iscsi_cid=%u, qedi_ep=%p, ret=%d\n",
+-			 qedi_ep->iscsi_cid, qedi_ep, ret);
+-		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
+-		return;
+-	}
+-
+-	ret = wait_event_interruptible_timeout(qedi_ep->tcp_ofld_wait,
+-					       (qedi_ep->state ==
+-					       EP_STATE_OFLDCONN_COMPL),
+-					       wait_delay);
+-	if ((ret <= 0) || (qedi_ep->state != EP_STATE_OFLDCONN_COMPL)) {
+-		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
+-		QEDI_ERR(&qedi->dbg_ctx,
+-			 "Offload conn TIMEOUT iscsi_cid=%u, qedi_ep=%p\n",
+-			 qedi_ep->iscsi_cid, qedi_ep);
+-	}
+-}
+-
+ static int qedi_set_path(struct Scsi_Host *shost, struct iscsi_path *path_data)
+ {
+ 	struct qedi_ctx *qedi;
+@@ -1342,7 +1342,6 @@ static int qedi_set_path(struct Scsi_Host *shost, struct iscsi_path *path_data)
+ 			  qedi_ep->dst_addr, qedi_ep->dst_port);
+ 	}
+ 
+-	INIT_WORK(&qedi_ep->offload_work, qedi_offload_work);
+ 	queue_work(qedi->offload_thread, &qedi_ep->offload_work);
+ 
+ 	ret = 0;
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index 1e63fd4821f96..8aa89d93db118 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -277,6 +277,9 @@ static int atmel_qspi_find_mode(const struct spi_mem_op *op)
+ static bool atmel_qspi_supports_op(struct spi_mem *mem,
+ 				   const struct spi_mem_op *op)
+ {
++	if (!spi_mem_default_supports_op(mem, op))
++		return false;
++
+ 	if (atmel_qspi_find_mode(op) < 0)
+ 		return false;
+ 
+diff --git a/drivers/spi/spi-mtk-nor.c b/drivers/spi/spi-mtk-nor.c
+index 288f6c2bbd573..106e3cacba4c3 100644
+--- a/drivers/spi/spi-mtk-nor.c
++++ b/drivers/spi/spi-mtk-nor.c
+@@ -895,7 +895,17 @@ static int __maybe_unused mtk_nor_suspend(struct device *dev)
+ 
+ static int __maybe_unused mtk_nor_resume(struct device *dev)
+ {
+-	return pm_runtime_force_resume(dev);
++	struct spi_controller *ctlr = dev_get_drvdata(dev);
++	struct mtk_nor *sp = spi_controller_get_devdata(ctlr);
++	int ret;
++
++	ret = pm_runtime_force_resume(dev);
++	if (ret)
++		return ret;
++
++	mtk_nor_init(sp);
++
++	return 0;
+ }
+ 
+ static const struct dev_pm_ops mtk_nor_pm_ops = {
+diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
+index e1fe03ceb7f13..e6d4a3ee6cda5 100644
+--- a/drivers/staging/android/ion/ion.c
++++ b/drivers/staging/android/ion/ion.c
+@@ -114,6 +114,9 @@ static void *ion_buffer_kmap_get(struct ion_buffer *buffer)
+ 	void *vaddr;
+ 
+ 	if (buffer->kmap_cnt) {
++		if (buffer->kmap_cnt == INT_MAX)
++			return ERR_PTR(-EOVERFLOW);
++
+ 		buffer->kmap_cnt++;
+ 		return buffer->vaddr;
+ 	}
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index aa5a4d759ca23..370188b2a55d2 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -898,7 +898,7 @@ cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	ssize_t rc;
+ 	struct inode *inode = file_inode(iocb->ki_filp);
+ 
+-	if (iocb->ki_filp->f_flags & O_DIRECT)
++	if (iocb->ki_flags & IOCB_DIRECT)
+ 		return cifs_user_readv(iocb, iter);
+ 
+ 	rc = cifs_revalidate_mapping(inode);
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 455eb349c76f8..8329961546b58 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -2159,6 +2159,10 @@ static inline int ext4_forced_shutdown(struct ext4_sb_info *sbi)
+  * Structure of a directory entry
+  */
+ #define EXT4_NAME_LEN 255
++/*
++ * Base length of the ext4 directory entry excluding the name length
++ */
++#define EXT4_BASE_DIR_LEN (sizeof(struct ext4_dir_entry_2) - EXT4_NAME_LEN)
+ 
+ struct ext4_dir_entry {
+ 	__le32	inode;			/* Inode number */
+@@ -2870,7 +2874,7 @@ extern int ext4_inode_attach_jinode(struct inode *inode);
+ extern int ext4_can_truncate(struct inode *inode);
+ extern int ext4_truncate(struct inode *);
+ extern int ext4_break_layouts(struct inode *);
+-extern int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length);
++extern int ext4_punch_hole(struct file *file, loff_t offset, loff_t length);
+ extern void ext4_set_inode_flags(struct inode *, bool init);
+ extern int ext4_alloc_da_blocks(struct inode *inode);
+ extern void ext4_set_aops(struct inode *inode);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 0fda3051760d1..80b876ab6b1fe 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4498,9 +4498,9 @@ retry:
+ 	return ret > 0 ? ret2 : ret;
+ }
+ 
+-static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len);
++static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len);
+ 
+-static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len);
++static int ext4_insert_range(struct file *file, loff_t offset, loff_t len);
+ 
+ static long ext4_zero_range(struct file *file, loff_t offset,
+ 			    loff_t len, int mode)
+@@ -4571,6 +4571,10 @@ static long ext4_zero_range(struct file *file, loff_t offset,
+ 	/* Wait all existing dio workers, newcomers will block on i_mutex */
+ 	inode_dio_wait(inode);
+ 
++	ret = file_modified(file);
++	if (ret)
++		goto out_mutex;
++
+ 	/* Preallocate the range including the unaligned edges */
+ 	if (partial_begin || partial_end) {
+ 		ret = ext4_alloc_file_blocks(file,
+@@ -4689,7 +4693,7 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ 	ext4_fc_start_update(inode);
+ 
+ 	if (mode & FALLOC_FL_PUNCH_HOLE) {
+-		ret = ext4_punch_hole(inode, offset, len);
++		ret = ext4_punch_hole(file, offset, len);
+ 		goto exit;
+ 	}
+ 
+@@ -4698,12 +4702,12 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ 		goto exit;
+ 
+ 	if (mode & FALLOC_FL_COLLAPSE_RANGE) {
+-		ret = ext4_collapse_range(inode, offset, len);
++		ret = ext4_collapse_range(file, offset, len);
+ 		goto exit;
+ 	}
+ 
+ 	if (mode & FALLOC_FL_INSERT_RANGE) {
+-		ret = ext4_insert_range(inode, offset, len);
++		ret = ext4_insert_range(file, offset, len);
+ 		goto exit;
+ 	}
+ 
+@@ -4739,6 +4743,10 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ 	/* Wait all existing dio workers, newcomers will block on i_mutex */
+ 	inode_dio_wait(inode);
+ 
++	ret = file_modified(file);
++	if (ret)
++		goto out;
++
+ 	ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size, flags);
+ 	if (ret)
+ 		goto out;
+@@ -5241,8 +5249,9 @@ out:
+  * This implements the fallocate's collapse range functionality for ext4
+  * Returns: 0 and non-zero on error.
+  */
+-static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
++static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len)
+ {
++	struct inode *inode = file_inode(file);
+ 	struct super_block *sb = inode->i_sb;
+ 	ext4_lblk_t punch_start, punch_stop;
+ 	handle_t *handle;
+@@ -5293,6 +5302,10 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
+ 	/* Wait for existing dio to complete */
+ 	inode_dio_wait(inode);
+ 
++	ret = file_modified(file);
++	if (ret)
++		goto out_mutex;
++
+ 	/*
+ 	 * Prevent page faults from reinstantiating pages we have released from
+ 	 * page cache.
+@@ -5387,8 +5400,9 @@ out_mutex:
+  * by len bytes.
+  * Returns 0 on success, error otherwise.
+  */
+-static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
++static int ext4_insert_range(struct file *file, loff_t offset, loff_t len)
+ {
++	struct inode *inode = file_inode(file);
+ 	struct super_block *sb = inode->i_sb;
+ 	handle_t *handle;
+ 	struct ext4_ext_path *path;
+@@ -5444,6 +5458,10 @@ static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
+ 	/* Wait for existing dio to complete */
+ 	inode_dio_wait(inode);
+ 
++	ret = file_modified(file);
++	if (ret)
++		goto out_mutex;
++
+ 	/*
+ 	 * Prevent page faults from reinstantiating pages we have released from
+ 	 * page cache.
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 96546df39bcf9..31ab73c4b07e7 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4028,12 +4028,14 @@ int ext4_break_layouts(struct inode *inode)
+  * Returns: 0 on success or negative on failure
+  */
+ 
+-int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
++int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
+ {
++	struct inode *inode = file_inode(file);
+ 	struct super_block *sb = inode->i_sb;
+ 	ext4_lblk_t first_block, stop_block;
+ 	struct address_space *mapping = inode->i_mapping;
+-	loff_t first_block_offset, last_block_offset;
++	loff_t first_block_offset, last_block_offset, max_length;
++	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ 	handle_t *handle;
+ 	unsigned int credits;
+ 	int ret = 0, ret2 = 0;
+@@ -4076,6 +4078,14 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+ 		   offset;
+ 	}
+ 
++	/*
++	 * For punch hole the length + offset needs to be within one block
++	 * before last range. Adjust the length if it goes beyond that limit.
++	 */
++	max_length = sbi->s_bitmap_maxbytes - inode->i_sb->s_blocksize;
++	if (offset + length > max_length)
++		length = max_length - offset;
++
+ 	if (offset & (sb->s_blocksize - 1) ||
+ 	    (offset + length) & (sb->s_blocksize - 1)) {
+ 		/*
+@@ -4091,6 +4101,10 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+ 	/* Wait all existing dio workers, newcomers will block on i_mutex */
+ 	inode_dio_wait(inode);
+ 
++	ret = file_modified(file);
++	if (ret)
++		goto out_mutex;
++
+ 	/*
+ 	 * Prevent page faults from reinstantiating pages we have released from
+ 	 * page cache.
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index a622e186b7ee1..47ea35e98ffe9 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1388,10 +1388,10 @@ int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size,
+ 
+ 	de = (struct ext4_dir_entry_2 *)search_buf;
+ 	dlimit = search_buf + buf_size;
+-	while ((char *) de < dlimit) {
++	while ((char *) de < dlimit - EXT4_BASE_DIR_LEN) {
+ 		/* this code is executed quadratically often */
+ 		/* do minimal checking `by hand' */
+-		if ((char *) de + de->name_len <= dlimit &&
++		if (de->name + de->name_len <= dlimit &&
+ 		    ext4_match(dir, fname, de)) {
+ 			/* found a match - just to be sure, do
+ 			 * a full check */
+diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
+index defd2e10dfd10..4569075a7da0c 100644
+--- a/fs/ext4/page-io.c
++++ b/fs/ext4/page-io.c
+@@ -137,8 +137,10 @@ static void ext4_finish_bio(struct bio *bio)
+ 				continue;
+ 			}
+ 			clear_buffer_async_write(bh);
+-			if (bio->bi_status)
++			if (bio->bi_status) {
++				set_buffer_write_io_error(bh);
+ 				buffer_io_error(bh);
++			}
+ 		} while ((bh = bh->b_this_page) != head);
+ 		spin_unlock_irqrestore(&head->b_uptodate_lock, flags);
+ 		if (!under_io) {
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 9e210bc85c817..5e6c034583176 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3870,9 +3870,11 @@ static int count_overhead(struct super_block *sb, ext4_group_t grp,
+ 	ext4_fsblk_t		first_block, last_block, b;
+ 	ext4_group_t		i, ngroups = ext4_get_groups_count(sb);
+ 	int			s, j, count = 0;
++	int			has_super = ext4_bg_has_super(sb, grp);
+ 
+ 	if (!ext4_has_feature_bigalloc(sb))
+-		return (ext4_bg_has_super(sb, grp) + ext4_bg_num_gdb(sb, grp) +
++		return (has_super + ext4_bg_num_gdb(sb, grp) +
++			(has_super ? le16_to_cpu(sbi->s_es->s_reserved_gdt_blocks) : 0) +
+ 			sbi->s_itb_per_group + 2);
+ 
+ 	first_block = le32_to_cpu(sbi->s_es->s_first_data_block) +
+@@ -4931,9 +4933,18 @@ no_journal:
+ 	 * Get the # of file system overhead blocks from the
+ 	 * superblock if present.
+ 	 */
+-	if (es->s_overhead_clusters)
+-		sbi->s_overhead = le32_to_cpu(es->s_overhead_clusters);
+-	else {
++	sbi->s_overhead = le32_to_cpu(es->s_overhead_clusters);
++	/* ignore the precalculated value if it is ridiculous */
++	if (sbi->s_overhead > ext4_blocks_count(es))
++		sbi->s_overhead = 0;
++	/*
++	 * If the bigalloc feature is not enabled recalculating the
++	 * overhead doesn't take long, so we might as well just redo
++	 * it to make sure we are using the correct value.
++	 */
++	if (!ext4_has_feature_bigalloc(sb))
++		sbi->s_overhead = 0;
++	if (sbi->s_overhead == 0) {
+ 		err = ext4_calculate_overhead(sb);
+ 		if (err)
+ 			goto failed_mount_wq;
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index dc55b029afaa4..c5bde789a16db 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -906,15 +906,15 @@ static int read_rindex_entry(struct gfs2_inode *ip)
+ 	rgd->rd_bitbytes = be32_to_cpu(buf.ri_bitbytes);
+ 	spin_lock_init(&rgd->rd_rsspin);
+ 
+-	error = compute_bitstructs(rgd);
+-	if (error)
+-		goto fail;
+-
+ 	error = gfs2_glock_get(sdp, rgd->rd_addr,
+ 			       &gfs2_rgrp_glops, CREATE, &rgd->rd_gl);
+ 	if (error)
+ 		goto fail;
+ 
++	error = compute_bitstructs(rgd);
++	if (error)
++		goto fail_glock;
++
+ 	rgd->rd_rgl = (struct gfs2_rgrp_lvb *)rgd->rd_gl->gl_lksb.sb_lvbptr;
+ 	rgd->rd_flags &= ~(GFS2_RDF_UPTODATE | GFS2_RDF_PREFERRED);
+ 	if (rgd->rd_data > sdp->sd_max_rg_data)
+@@ -928,6 +928,7 @@ static int read_rindex_entry(struct gfs2_inode *ip)
+ 	}
+ 
+ 	error = 0; /* someone else read in the rgrp; free it and ignore it */
++fail_glock:
+ 	gfs2_glock_put(rgd->rd_gl);
+ 
+ fail:
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index 5fc9ccab907c3..a2f43f1a85f8d 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -206,7 +206,7 @@ hugetlb_get_unmapped_area_bottomup(struct file *file, unsigned long addr,
+ 	info.flags = 0;
+ 	info.length = len;
+ 	info.low_limit = current->mm->mmap_base;
+-	info.high_limit = TASK_SIZE;
++	info.high_limit = arch_get_mmap_end(addr);
+ 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
+ 	info.align_offset = 0;
+ 	return vm_unmapped_area(&info);
+@@ -222,7 +222,7 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr,
+ 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+ 	info.length = len;
+ 	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+-	info.high_limit = current->mm->mmap_base;
++	info.high_limit = arch_get_mmap_base(addr, current->mm->mmap_base);
+ 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
+ 	info.align_offset = 0;
+ 	addr = vm_unmapped_area(&info);
+@@ -237,7 +237,7 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr,
+ 		VM_BUG_ON(addr != -ENOMEM);
+ 		info.flags = 0;
+ 		info.low_limit = current->mm->mmap_base;
+-		info.high_limit = TASK_SIZE;
++		info.high_limit = arch_get_mmap_end(addr);
+ 		addr = vm_unmapped_area(&info);
+ 	}
+ 
+@@ -251,6 +251,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ 	struct mm_struct *mm = current->mm;
+ 	struct vm_area_struct *vma;
+ 	struct hstate *h = hstate_file(file);
++	const unsigned long mmap_end = arch_get_mmap_end(addr);
+ 
+ 	if (len & ~huge_page_mask(h))
+ 		return -EINVAL;
+@@ -266,7 +267,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ 	if (addr) {
+ 		addr = ALIGN(addr, huge_page_size(h));
+ 		vma = find_vma(mm, addr);
+-		if (TASK_SIZE - len >= addr &&
++		if (mmap_end - len >= addr &&
+ 		    (!vma || addr + len <= vm_start_gap(vma)))
+ 			return addr;
+ 	}
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index b121d7d434c67..867362f45cf63 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -501,7 +501,6 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ 	}
+ 	spin_unlock(&commit_transaction->t_handle_lock);
+ 	commit_transaction->t_state = T_SWITCH;
+-	write_unlock(&journal->j_state_lock);
+ 
+ 	J_ASSERT (atomic_read(&commit_transaction->t_outstanding_credits) <=
+ 			journal->j_max_transaction_buffers);
+@@ -521,6 +520,8 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ 	 * has reserved.  This is consistent with the existing behaviour
+ 	 * that multiple jbd2_journal_get_write_access() calls to the same
+ 	 * buffer are perfectly permissible.
++	 * We use journal->j_state_lock here to serialize processing of
++	 * t_reserved_list with eviction of buffers from journal_unmap_buffer().
+ 	 */
+ 	while (commit_transaction->t_reserved_list) {
+ 		jh = commit_transaction->t_reserved_list;
+@@ -540,6 +541,7 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ 		jbd2_journal_refile_buffer(journal, jh);
+ 	}
+ 
++	write_unlock(&journal->j_state_lock);
+ 	/*
+ 	 * Now try to drop any written-back buffers from the journal's
+ 	 * checkpoint lists.  We do this *before* commit because it potentially
+diff --git a/fs/stat.c b/fs/stat.c
+index 1196af4d1ea03..04550c0ba5407 100644
+--- a/fs/stat.c
++++ b/fs/stat.c
+@@ -306,9 +306,6 @@ SYSCALL_DEFINE2(fstat, unsigned int, fd, struct __old_kernel_stat __user *, stat
+ #  define choose_32_64(a,b) b
+ #endif
+ 
+-#define valid_dev(x)  choose_32_64(old_valid_dev(x),true)
+-#define encode_dev(x) choose_32_64(old_encode_dev,new_encode_dev)(x)
+-
+ #ifndef INIT_STRUCT_STAT_PADDING
+ #  define INIT_STRUCT_STAT_PADDING(st) memset(&st, 0, sizeof(st))
+ #endif
+@@ -317,7 +314,9 @@ static int cp_new_stat(struct kstat *stat, struct stat __user *statbuf)
+ {
+ 	struct stat tmp;
+ 
+-	if (!valid_dev(stat->dev) || !valid_dev(stat->rdev))
++	if (sizeof(tmp.st_dev) < 4 && !old_valid_dev(stat->dev))
++		return -EOVERFLOW;
++	if (sizeof(tmp.st_rdev) < 4 && !old_valid_dev(stat->rdev))
+ 		return -EOVERFLOW;
+ #if BITS_PER_LONG == 32
+ 	if (stat->size > MAX_NON_LFS)
+@@ -325,7 +324,7 @@ static int cp_new_stat(struct kstat *stat, struct stat __user *statbuf)
+ #endif
+ 
+ 	INIT_STRUCT_STAT_PADDING(tmp);
+-	tmp.st_dev = encode_dev(stat->dev);
++	tmp.st_dev = new_encode_dev(stat->dev);
+ 	tmp.st_ino = stat->ino;
+ 	if (sizeof(tmp.st_ino) < sizeof(stat->ino) && tmp.st_ino != stat->ino)
+ 		return -EOVERFLOW;
+@@ -335,7 +334,7 @@ static int cp_new_stat(struct kstat *stat, struct stat __user *statbuf)
+ 		return -EOVERFLOW;
+ 	SET_UID(tmp.st_uid, from_kuid_munged(current_user_ns(), stat->uid));
+ 	SET_GID(tmp.st_gid, from_kgid_munged(current_user_ns(), stat->gid));
+-	tmp.st_rdev = encode_dev(stat->rdev);
++	tmp.st_rdev = new_encode_dev(stat->rdev);
+ 	tmp.st_size = stat->size;
+ 	tmp.st_atime = stat->atime.tv_sec;
+ 	tmp.st_mtime = stat->mtime.tv_sec;
+@@ -616,11 +615,13 @@ static int cp_compat_stat(struct kstat *stat, struct compat_stat __user *ubuf)
+ {
+ 	struct compat_stat tmp;
+ 
+-	if (!old_valid_dev(stat->dev) || !old_valid_dev(stat->rdev))
++	if (sizeof(tmp.st_dev) < 4 && !old_valid_dev(stat->dev))
++		return -EOVERFLOW;
++	if (sizeof(tmp.st_rdev) < 4 && !old_valid_dev(stat->rdev))
+ 		return -EOVERFLOW;
+ 
+ 	memset(&tmp, 0, sizeof(tmp));
+-	tmp.st_dev = old_encode_dev(stat->dev);
++	tmp.st_dev = new_encode_dev(stat->dev);
+ 	tmp.st_ino = stat->ino;
+ 	if (sizeof(tmp.st_ino) < sizeof(stat->ino) && tmp.st_ino != stat->ino)
+ 		return -EOVERFLOW;
+@@ -630,7 +631,7 @@ static int cp_compat_stat(struct kstat *stat, struct compat_stat __user *ubuf)
+ 		return -EOVERFLOW;
+ 	SET_UID(tmp.st_uid, from_kuid_munged(current_user_ns(), stat->uid));
+ 	SET_GID(tmp.st_gid, from_kgid_munged(current_user_ns(), stat->gid));
+-	tmp.st_rdev = old_encode_dev(stat->rdev);
++	tmp.st_rdev = new_encode_dev(stat->rdev);
+ 	if ((u64) stat->size > MAX_NON_LFS)
+ 		return -EOVERFLOW;
+ 	tmp.st_size = stat->size;
+diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
+index 2e5debc0373c5..99209f50915f4 100644
+--- a/include/linux/etherdevice.h
++++ b/include/linux/etherdevice.h
+@@ -127,7 +127,7 @@ static inline bool is_multicast_ether_addr(const u8 *addr)
+ #endif
+ }
+ 
+-static inline bool is_multicast_ether_addr_64bits(const u8 addr[6+2])
++static inline bool is_multicast_ether_addr_64bits(const u8 *addr)
+ {
+ #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
+ #ifdef __BIG_ENDIAN
+@@ -352,8 +352,7 @@ static inline bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
+  * Please note that alignment of addr1 & addr2 are only guaranteed to be 16 bits.
+  */
+ 
+-static inline bool ether_addr_equal_64bits(const u8 addr1[6+2],
+-					   const u8 addr2[6+2])
++static inline bool ether_addr_equal_64bits(const u8 *addr1, const u8 *addr2)
+ {
+ #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
+ 	u64 fold = (*(const u64 *)addr1) ^ (*(const u64 *)addr2);
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index f996d1f343bb7..4bca80c9931fb 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1325,6 +1325,7 @@ struct task_struct {
+ 	int				pagefault_disabled;
+ #ifdef CONFIG_MMU
+ 	struct task_struct		*oom_reaper_list;
++	struct timer_list		oom_reaper_timer;
+ #endif
+ #ifdef CONFIG_VMAP_STACK
+ 	struct vm_struct		*stack_vm_area;
+diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
+index dc1f4dcd9a825..e3e5e149b00e6 100644
+--- a/include/linux/sched/mm.h
++++ b/include/linux/sched/mm.h
+@@ -106,6 +106,14 @@ static inline void mm_update_next_owner(struct mm_struct *mm)
+ #endif /* CONFIG_MEMCG */
+ 
+ #ifdef CONFIG_MMU
++#ifndef arch_get_mmap_end
++#define arch_get_mmap_end(addr)	(TASK_SIZE)
++#endif
++
++#ifndef arch_get_mmap_base
++#define arch_get_mmap_base(addr, base) (base)
++#endif
++
+ extern void arch_pick_mmap_layout(struct mm_struct *mm,
+ 				  struct rlimit *rlim_stack);
+ extern unsigned long
+diff --git a/include/net/esp.h b/include/net/esp.h
+index 90cd02ff77ef6..9c5637d41d951 100644
+--- a/include/net/esp.h
++++ b/include/net/esp.h
+@@ -4,8 +4,6 @@
+ 
+ #include <linux/skbuff.h>
+ 
+-#define ESP_SKB_FRAG_MAXSIZE (PAGE_SIZE << SKB_FRAG_PAGE_ORDER)
+-
+ struct ip_esp_hdr;
+ 
+ static inline struct ip_esp_hdr *ip_esp_hdr(const struct sk_buff *skb)
+diff --git a/include/net/netns/ipv6.h b/include/net/netns/ipv6.h
+index 1c0fbe3abf247..f179996c61844 100644
+--- a/include/net/netns/ipv6.h
++++ b/include/net/netns/ipv6.h
+@@ -78,8 +78,8 @@ struct netns_ipv6 {
+ 	struct dst_ops		ip6_dst_ops;
+ 	rwlock_t		fib6_walker_lock;
+ 	spinlock_t		fib6_gc_lock;
+-	unsigned int		 ip6_rt_gc_expire;
+-	unsigned long		 ip6_rt_last_gc;
++	atomic_t		ip6_rt_gc_expire;
++	unsigned long		ip6_rt_last_gc;
+ 	unsigned char		flowlabel_has_excl;
+ #ifdef CONFIG_IPV6_MULTIPLE_TABLES
+ 	bool			fib6_has_custom_rules;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 79d8b27cf2fc6..9aa6563587d88 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6221,7 +6221,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ again:
+ 	mutex_lock(&event->mmap_mutex);
+ 	if (event->rb) {
+-		if (event->rb->nr_pages != nr_pages) {
++		if (data_page_nr(event->rb) != nr_pages) {
+ 			ret = -EINVAL;
+ 			goto unlock;
+ 		}
+diff --git a/kernel/events/internal.h b/kernel/events/internal.h
+index 228801e207886..aa23ffdaf819f 100644
+--- a/kernel/events/internal.h
++++ b/kernel/events/internal.h
+@@ -116,6 +116,11 @@ static inline int page_order(struct perf_buffer *rb)
+ }
+ #endif
+ 
++static inline int data_page_nr(struct perf_buffer *rb)
++{
++	return rb->nr_pages << page_order(rb);
++}
++
+ static inline unsigned long perf_data_size(struct perf_buffer *rb)
+ {
+ 	return rb->nr_pages << (PAGE_SHIFT + page_order(rb));
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index ef91ae75ca56f..4032cd4750001 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -856,11 +856,6 @@ void rb_free(struct perf_buffer *rb)
+ }
+ 
+ #else
+-static int data_page_nr(struct perf_buffer *rb)
+-{
+-	return rb->nr_pages << page_order(rb);
+-}
+-
+ static struct page *
+ __perf_mmap_to_page(struct perf_buffer *rb, unsigned long pgoff)
+ {
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index acd9833b8ec22..1a306ef51bbe5 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3748,11 +3748,11 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
+ 
+ 	se->avg.runnable_sum = se->avg.runnable_avg * divider;
+ 
+-	se->avg.load_sum = divider;
+-	if (se_weight(se)) {
+-		se->avg.load_sum =
+-			div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
+-	}
++	se->avg.load_sum = se->avg.load_avg * divider;
++	if (se_weight(se) < se->avg.load_sum)
++		se->avg.load_sum = div_u64(se->avg.load_sum, se_weight(se));
++	else
++		se->avg.load_sum = 1;
+ 
+ 	enqueue_load_avg(cfs_rq, se);
+ 	cfs_rq->avg.util_avg += se->avg.util_avg;
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index d0309de2f84fe..4bc90965abb25 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -1219,7 +1219,14 @@ static void
+ stacktrace_trigger(struct event_trigger_data *data, void *rec,
+ 		   struct ring_buffer_event *event)
+ {
+-	trace_dump_stack(STACK_SKIP);
++	struct trace_event_file *file = data->private_data;
++	unsigned long flags;
++
++	if (file) {
++		local_save_flags(flags);
++		__trace_stack(file->tr, flags, STACK_SKIP, preempt_count());
++	} else
++		trace_dump_stack(STACK_SKIP);
+ }
+ 
+ static void
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 46c160d4eac14..102f73ed4b1b9 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -2140,14 +2140,6 @@ unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info)
+ 	return addr;
+ }
+ 
+-#ifndef arch_get_mmap_end
+-#define arch_get_mmap_end(addr)	(TASK_SIZE)
+-#endif
+-
+-#ifndef arch_get_mmap_base
+-#define arch_get_mmap_base(addr, base) (base)
+-#endif
+-
+ /* Get an address range which is currently unmapped.
+  * For shmat() with addr=0.
+  *
+diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
+index 07f42a7a60657..9165ca619c8cf 100644
+--- a/mm/mmu_notifier.c
++++ b/mm/mmu_notifier.c
+@@ -1043,6 +1043,18 @@ int mmu_interval_notifier_insert_locked(
+ }
+ EXPORT_SYMBOL_GPL(mmu_interval_notifier_insert_locked);
+ 
++static bool
++mmu_interval_seq_released(struct mmu_notifier_subscriptions *subscriptions,
++			  unsigned long seq)
++{
++	bool ret;
++
++	spin_lock(&subscriptions->lock);
++	ret = subscriptions->invalidate_seq != seq;
++	spin_unlock(&subscriptions->lock);
++	return ret;
++}
++
+ /**
+  * mmu_interval_notifier_remove - Remove a interval notifier
+  * @interval_sub: Interval subscription to unregister
+@@ -1090,7 +1102,7 @@ void mmu_interval_notifier_remove(struct mmu_interval_notifier *interval_sub)
+ 	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
+ 	if (seq)
+ 		wait_event(subscriptions->wq,
+-			   READ_ONCE(subscriptions->invalidate_seq) != seq);
++			   mmu_interval_seq_released(subscriptions, seq));
+ 
+ 	/* pairs with mmgrab in mmu_interval_notifier_insert() */
+ 	mmdrop(mm);
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index 419a814f467e0..3d7c557fb70c9 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -633,7 +633,7 @@ done:
+ 	 */
+ 	set_bit(MMF_OOM_SKIP, &mm->flags);
+ 
+-	/* Drop a reference taken by wake_oom_reaper */
++	/* Drop a reference taken by queue_oom_reaper */
+ 	put_task_struct(tsk);
+ }
+ 
+@@ -643,12 +643,12 @@ static int oom_reaper(void *unused)
+ 		struct task_struct *tsk = NULL;
+ 
+ 		wait_event_freezable(oom_reaper_wait, oom_reaper_list != NULL);
+-		spin_lock(&oom_reaper_lock);
++		spin_lock_irq(&oom_reaper_lock);
+ 		if (oom_reaper_list != NULL) {
+ 			tsk = oom_reaper_list;
+ 			oom_reaper_list = tsk->oom_reaper_list;
+ 		}
+-		spin_unlock(&oom_reaper_lock);
++		spin_unlock_irq(&oom_reaper_lock);
+ 
+ 		if (tsk)
+ 			oom_reap_task(tsk);
+@@ -657,22 +657,48 @@ static int oom_reaper(void *unused)
+ 	return 0;
+ }
+ 
+-static void wake_oom_reaper(struct task_struct *tsk)
++static void wake_oom_reaper(struct timer_list *timer)
+ {
+-	/* mm is already queued? */
+-	if (test_and_set_bit(MMF_OOM_REAP_QUEUED, &tsk->signal->oom_mm->flags))
+-		return;
++	struct task_struct *tsk = container_of(timer, struct task_struct,
++			oom_reaper_timer);
++	struct mm_struct *mm = tsk->signal->oom_mm;
++	unsigned long flags;
+ 
+-	get_task_struct(tsk);
++	/* The victim managed to terminate on its own - see exit_mmap */
++	if (test_bit(MMF_OOM_SKIP, &mm->flags)) {
++		put_task_struct(tsk);
++		return;
++	}
+ 
+-	spin_lock(&oom_reaper_lock);
++	spin_lock_irqsave(&oom_reaper_lock, flags);
+ 	tsk->oom_reaper_list = oom_reaper_list;
+ 	oom_reaper_list = tsk;
+-	spin_unlock(&oom_reaper_lock);
++	spin_unlock_irqrestore(&oom_reaper_lock, flags);
+ 	trace_wake_reaper(tsk->pid);
+ 	wake_up(&oom_reaper_wait);
+ }
+ 
++/*
++ * Give the OOM victim time to exit naturally before invoking the oom_reaping.
++ * The timers timeout is arbitrary... the longer it is, the longer the worst
++ * case scenario for the OOM can take. If it is too small, the oom_reaper can
++ * get in the way and release resources needed by the process exit path.
++ * e.g. The futex robust list can sit in Anon|Private memory that gets reaped
++ * before the exit path is able to wake the futex waiters.
++ */
++#define OOM_REAPER_DELAY (2*HZ)
++static void queue_oom_reaper(struct task_struct *tsk)
++{
++	/* mm is already queued? */
++	if (test_and_set_bit(MMF_OOM_REAP_QUEUED, &tsk->signal->oom_mm->flags))
++		return;
++
++	get_task_struct(tsk);
++	timer_setup(&tsk->oom_reaper_timer, wake_oom_reaper, 0);
++	tsk->oom_reaper_timer.expires = jiffies + OOM_REAPER_DELAY;
++	add_timer(&tsk->oom_reaper_timer);
++}
++
+ static int __init oom_init(void)
+ {
+ 	oom_reaper_th = kthread_run(oom_reaper, NULL, "oom_reaper");
+@@ -680,7 +706,7 @@ static int __init oom_init(void)
+ }
+ subsys_initcall(oom_init)
+ #else
+-static inline void wake_oom_reaper(struct task_struct *tsk)
++static inline void queue_oom_reaper(struct task_struct *tsk)
+ {
+ }
+ #endif /* CONFIG_MMU */
+@@ -931,7 +957,7 @@ static void __oom_kill_process(struct task_struct *victim, const char *message)
+ 	rcu_read_unlock();
+ 
+ 	if (can_oom_reap)
+-		wake_oom_reaper(victim);
++		queue_oom_reaper(victim);
+ 
+ 	mmdrop(mm);
+ 	put_task_struct(victim);
+@@ -967,7 +993,7 @@ static void oom_kill_process(struct oom_control *oc, const char *message)
+ 	task_lock(victim);
+ 	if (task_will_free_mem(victim)) {
+ 		mark_oom_victim(victim);
+-		wake_oom_reaper(victim);
++		queue_oom_reaper(victim);
+ 		task_unlock(victim);
+ 		put_task_struct(victim);
+ 		return;
+@@ -1065,7 +1091,7 @@ bool out_of_memory(struct oom_control *oc)
+ 	 */
+ 	if (task_will_free_mem(current)) {
+ 		mark_oom_victim(current);
+-		wake_oom_reaper(current);
++		queue_oom_reaper(current);
+ 		return true;
+ 	}
+ 
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index f022e0024e8db..f3418edb136be 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -7678,7 +7678,7 @@ void __init mem_init_print_info(const char *str)
+ 	 */
+ #define adj_init_size(start, end, size, pos, adj) \
+ 	do { \
+-		if (start <= pos && pos < end && size > adj) \
++		if (&start[0] <= &pos[0] && &pos[0] < &end[0] && size > adj) \
+ 			size -= adj; \
+ 	} while (0)
+ 
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 9a4a9c5a9f24c..c515bbd46c679 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -864,6 +864,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	struct canfd_frame *cf;
+ 	int ae = (so->opt.flags & CAN_ISOTP_EXTEND_ADDR) ? 1 : 0;
+ 	int wait_tx_done = (so->opt.flags & CAN_ISOTP_WAIT_TX_DONE) ? 1 : 0;
++	s64 hrtimer_sec = 0;
+ 	int off;
+ 	int err;
+ 
+@@ -962,7 +963,9 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 		isotp_create_fframe(cf, so, ae);
+ 
+ 		/* start timeout for FC */
+-		hrtimer_start(&so->txtimer, ktime_set(1, 0), HRTIMER_MODE_REL_SOFT);
++		hrtimer_sec = 1;
++		hrtimer_start(&so->txtimer, ktime_set(hrtimer_sec, 0),
++			      HRTIMER_MODE_REL_SOFT);
+ 	}
+ 
+ 	/* send the first or only CAN frame */
+@@ -975,6 +978,11 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	if (err) {
+ 		pr_notice_once("can-isotp: %s: can_send_ret %d\n",
+ 			       __func__, err);
++
++		/* no transmission -> no timeout monitoring */
++		if (hrtimer_sec)
++			hrtimer_cancel(&so->txtimer);
++
+ 		goto err_out_drop;
+ 	}
+ 
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 9aae82145bc16..20d7381378418 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -448,7 +448,6 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
+ 	struct page *page;
+ 	struct sk_buff *trailer;
+ 	int tailen = esp->tailen;
+-	unsigned int allocsz;
+ 
+ 	/* this is non-NULL only with TCP/UDP Encapsulation */
+ 	if (x->encap) {
+@@ -458,8 +457,8 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
+ 			return err;
+ 	}
+ 
+-	allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES);
+-	if (allocsz > ESP_SKB_FRAG_MAXSIZE)
++	if (ALIGN(tailen, L1_CACHE_BYTES) > PAGE_SIZE ||
++	    ALIGN(skb->data_len, L1_CACHE_BYTES) > PAGE_SIZE)
+ 		goto cow;
+ 
+ 	if (!skb_cloned(skb)) {
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 20c7bef6829e1..cb28f8928f9ee 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -483,7 +483,6 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
+ 	struct page *page;
+ 	struct sk_buff *trailer;
+ 	int tailen = esp->tailen;
+-	unsigned int allocsz;
+ 
+ 	if (x->encap) {
+ 		int err = esp6_output_encap(x, skb, esp);
+@@ -492,8 +491,8 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
+ 			return err;
+ 	}
+ 
+-	allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES);
+-	if (allocsz > ESP_SKB_FRAG_MAXSIZE)
++	if (ALIGN(tailen, L1_CACHE_BYTES) > PAGE_SIZE ||
++	    ALIGN(skb->data_len, L1_CACHE_BYTES) > PAGE_SIZE)
+ 		goto cow;
+ 
+ 	if (!skb_cloned(skb)) {
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 9a0263f252323..1f6c752f13b40 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -733,9 +733,6 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ 	else
+ 		fl6->daddr = tunnel->parms.raddr;
+ 
+-	if (skb_cow_head(skb, dev->needed_headroom ?: tunnel->hlen))
+-		return -ENOMEM;
+-
+ 	/* Push GRE header. */
+ 	protocol = (dev->type == ARPHRD_ETHER) ? htons(ETH_P_TEB) : proto;
+ 
+@@ -743,6 +740,7 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ 		struct ip_tunnel_info *tun_info;
+ 		const struct ip_tunnel_key *key;
+ 		__be16 flags;
++		int tun_hlen;
+ 
+ 		tun_info = skb_tunnel_info_txcheck(skb);
+ 		if (IS_ERR(tun_info) ||
+@@ -760,9 +758,12 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ 		dsfield = key->tos;
+ 		flags = key->tun_flags &
+ 			(TUNNEL_CSUM | TUNNEL_KEY | TUNNEL_SEQ);
+-		tunnel->tun_hlen = gre_calc_hlen(flags);
++		tun_hlen = gre_calc_hlen(flags);
+ 
+-		gre_build_header(skb, tunnel->tun_hlen,
++		if (skb_cow_head(skb, dev->needed_headroom ?: tun_hlen + tunnel->encap_hlen))
++			return -ENOMEM;
++
++		gre_build_header(skb, tun_hlen,
+ 				 flags, protocol,
+ 				 tunnel_id_to_key32(tun_info->key.tun_id),
+ 				 (flags & TUNNEL_SEQ) ? htonl(tunnel->o_seqno++)
+@@ -772,6 +773,9 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ 		if (tunnel->parms.o_flags & TUNNEL_SEQ)
+ 			tunnel->o_seqno++;
+ 
++		if (skb_cow_head(skb, dev->needed_headroom ?: tunnel->hlen))
++			return -ENOMEM;
++
+ 		gre_build_header(skb, tunnel->tun_hlen, tunnel->parms.o_flags,
+ 				 protocol, tunnel->parms.o_key,
+ 				 htonl(tunnel->o_seqno));
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 776b1b58c5dc6..6ace9f0ac22f3 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3192,6 +3192,7 @@ static int ip6_dst_gc(struct dst_ops *ops)
+ 	int rt_elasticity = net->ipv6.sysctl.ip6_rt_gc_elasticity;
+ 	int rt_gc_timeout = net->ipv6.sysctl.ip6_rt_gc_timeout;
+ 	unsigned long rt_last_gc = net->ipv6.ip6_rt_last_gc;
++	unsigned int val;
+ 	int entries;
+ 
+ 	entries = dst_entries_get_fast(ops);
+@@ -3202,13 +3203,13 @@ static int ip6_dst_gc(struct dst_ops *ops)
+ 	    entries <= rt_max_size)
+ 		goto out;
+ 
+-	net->ipv6.ip6_rt_gc_expire++;
+-	fib6_run_gc(net->ipv6.ip6_rt_gc_expire, net, true);
++	fib6_run_gc(atomic_inc_return(&net->ipv6.ip6_rt_gc_expire), net, true);
+ 	entries = dst_entries_get_slow(ops);
+ 	if (entries < ops->gc_thresh)
+-		net->ipv6.ip6_rt_gc_expire = rt_gc_timeout>>1;
++		atomic_set(&net->ipv6.ip6_rt_gc_expire, rt_gc_timeout >> 1);
+ out:
+-	net->ipv6.ip6_rt_gc_expire -= net->ipv6.ip6_rt_gc_expire>>rt_elasticity;
++	val = atomic_read(&net->ipv6.ip6_rt_gc_expire);
++	atomic_set(&net->ipv6.ip6_rt_gc_expire, val - (val >> rt_elasticity));
+ 	return entries > rt_max_size;
+ }
+ 
+@@ -6363,7 +6364,7 @@ static int __net_init ip6_route_net_init(struct net *net)
+ 	net->ipv6.sysctl.ip6_rt_min_advmss = IPV6_MIN_MTU - 20 - 40;
+ 	net->ipv6.sysctl.skip_notify_on_dev_down = 0;
+ 
+-	net->ipv6.ip6_rt_gc_expire = 30*HZ;
++	atomic_set(&net->ipv6.ip6_rt_gc_expire, 30*HZ);
+ 
+ 	ret = 0;
+ out:
+diff --git a/net/l3mdev/l3mdev.c b/net/l3mdev/l3mdev.c
+index 864326f150e2f..f2c3a61ad134b 100644
+--- a/net/l3mdev/l3mdev.c
++++ b/net/l3mdev/l3mdev.c
+@@ -147,7 +147,7 @@ int l3mdev_master_upper_ifindex_by_index_rcu(struct net *net, int ifindex)
+ 
+ 	dev = dev_get_by_index_rcu(net, ifindex);
+ 	while (dev && !netif_is_l3_master(dev))
+-		dev = netdev_master_upper_dev_get(dev);
++		dev = netdev_master_upper_dev_get_rcu(dev);
+ 
+ 	return dev ? dev->ifindex : 0;
+ }
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index f37916156ca52..cbfb601c4ee98 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -2276,6 +2276,13 @@ static int netlink_dump(struct sock *sk)
+ 	 * single netdev. The outcome is MSG_TRUNC error.
+ 	 */
+ 	skb_reserve(skb, skb_tailroom(skb) - alloc_size);
++
++	/* Make sure malicious BPF programs can not read unitialized memory
++	 * from skb->head -> skb->data
++	 */
++	skb_reset_network_header(skb);
++	skb_reset_mac_header(skb);
++
+ 	netlink_skb_set_owner_r(skb, sk);
+ 
+ 	if (nlk->dump_done_errno > 0) {
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 98a7e6f64ab0b..293a798e89f42 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2436,7 +2436,7 @@ static struct nlattr *reserve_sfa_size(struct sw_flow_actions **sfa,
+ 	new_acts_size = max(next_offset + req_size, ksize(*sfa) * 2);
+ 
+ 	if (new_acts_size > MAX_ACTIONS_BUFSIZE) {
+-		if ((MAX_ACTIONS_BUFSIZE - next_offset) < req_size) {
++		if ((next_offset + req_size) > MAX_ACTIONS_BUFSIZE) {
+ 			OVS_NLERR(log, "Flow action size exceeds max %u",
+ 				  MAX_ACTIONS_BUFSIZE);
+ 			return ERR_PTR(-EMSGSIZE);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index d0c95d7dd292d..5ee600d108a0a 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2817,8 +2817,9 @@ tpacket_error:
+ 
+ 		status = TP_STATUS_SEND_REQUEST;
+ 		err = po->xmit(skb);
+-		if (unlikely(err > 0)) {
+-			err = net_xmit_errno(err);
++		if (unlikely(err != 0)) {
++			if (err > 0)
++				err = net_xmit_errno(err);
+ 			if (err && __packet_get_status(po, ph) ==
+ 				   TP_STATUS_AVAILABLE) {
+ 				/* skb was destructed already */
+@@ -3019,8 +3020,12 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 		skb->no_fcs = 1;
+ 
+ 	err = po->xmit(skb);
+-	if (err > 0 && (err = net_xmit_errno(err)) != 0)
+-		goto out_unlock;
++	if (unlikely(err != 0)) {
++		if (err > 0)
++			err = net_xmit_errno(err);
++		if (err)
++			goto out_unlock;
++	}
+ 
+ 	dev_put(dev);
+ 
+diff --git a/net/rxrpc/net_ns.c b/net/rxrpc/net_ns.c
+index f15d6942da453..cc7e30733feb0 100644
+--- a/net/rxrpc/net_ns.c
++++ b/net/rxrpc/net_ns.c
+@@ -113,7 +113,9 @@ static __net_exit void rxrpc_exit_net(struct net *net)
+ 	struct rxrpc_net *rxnet = rxrpc_net(net);
+ 
+ 	rxnet->live = false;
++	del_timer_sync(&rxnet->peer_keepalive_timer);
+ 	cancel_work_sync(&rxnet->peer_keepalive_work);
++	/* Remove the timer again as the worker may have restarted it. */
+ 	del_timer_sync(&rxnet->peer_keepalive_timer);
+ 	rxrpc_destroy_all_calls(rxnet);
+ 	rxrpc_destroy_all_connections(rxnet);
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index 54209a18d7fec..da042bc8b239d 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -386,14 +386,19 @@ static int u32_init(struct tcf_proto *tp)
+ 	return 0;
+ }
+ 
+-static int u32_destroy_key(struct tc_u_knode *n, bool free_pf)
++static void __u32_destroy_key(struct tc_u_knode *n)
+ {
+ 	struct tc_u_hnode *ht = rtnl_dereference(n->ht_down);
+ 
+ 	tcf_exts_destroy(&n->exts);
+-	tcf_exts_put_net(&n->exts);
+ 	if (ht && --ht->refcnt == 0)
+ 		kfree(ht);
++	kfree(n);
++}
++
++static void u32_destroy_key(struct tc_u_knode *n, bool free_pf)
++{
++	tcf_exts_put_net(&n->exts);
+ #ifdef CONFIG_CLS_U32_PERF
+ 	if (free_pf)
+ 		free_percpu(n->pf);
+@@ -402,8 +407,7 @@ static int u32_destroy_key(struct tc_u_knode *n, bool free_pf)
+ 	if (free_pf)
+ 		free_percpu(n->pcpu_success);
+ #endif
+-	kfree(n);
+-	return 0;
++	__u32_destroy_key(n);
+ }
+ 
+ /* u32_delete_key_rcu should be called when free'ing a copied
+@@ -810,10 +814,6 @@ static struct tc_u_knode *u32_init_knode(struct net *net, struct tcf_proto *tp,
+ 	new->flags = n->flags;
+ 	RCU_INIT_POINTER(new->ht_down, ht);
+ 
+-	/* bump reference count as long as we hold pointer to structure */
+-	if (ht)
+-		ht->refcnt++;
+-
+ #ifdef CONFIG_CLS_U32_PERF
+ 	/* Statistics may be incremented by readers during update
+ 	 * so we must keep them in tact. When the node is later destroyed
+@@ -835,6 +835,10 @@ static struct tc_u_knode *u32_init_knode(struct net *net, struct tcf_proto *tp,
+ 		return NULL;
+ 	}
+ 
++	/* bump reference count as long as we hold pointer to structure */
++	if (ht)
++		ht->refcnt++;
++
+ 	return new;
+ }
+ 
+@@ -898,13 +902,13 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 				    tca[TCA_RATE], ovr, extack);
+ 
+ 		if (err) {
+-			u32_destroy_key(new, false);
++			__u32_destroy_key(new);
+ 			return err;
+ 		}
+ 
+ 		err = u32_replace_hw_knode(tp, new, flags, extack);
+ 		if (err) {
+-			u32_destroy_key(new, false);
++			__u32_destroy_key(new);
+ 			return err;
+ 		}
+ 
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 4f16d406ad8ea..1b98f3241150b 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -2144,8 +2144,10 @@ static int smc_shutdown(struct socket *sock, int how)
+ 	if (smc->use_fallback) {
+ 		rc = kernel_sock_shutdown(smc->clcsock, how);
+ 		sk->sk_shutdown = smc->clcsock->sk->sk_shutdown;
+-		if (sk->sk_shutdown == SHUTDOWN_MASK)
++		if (sk->sk_shutdown == SHUTDOWN_MASK) {
+ 			sk->sk_state = SMC_CLOSED;
++			sock_put(sk);
++		}
+ 		goto out;
+ 	}
+ 	switch (how) {
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 11d653190e6ea..b5168959fcf63 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8897,6 +8897,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x8562, "Clevo NH[5|7][0-9]RZ[Q]", ALC269_FIXUP_DMIC),
+ 	SND_PCI_QUIRK(0x1558, 0x8668, "Clevo NP50B[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x866d, "Clevo NP5[05]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x867c, "Clevo NP7[01]PNP", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x867d, "Clevo NP7[01]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8680, "Clevo NJ50LU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME),
+diff --git a/sound/soc/atmel/sam9g20_wm8731.c b/sound/soc/atmel/sam9g20_wm8731.c
+index 8a55d59a6c2aa..d243de5f23dc1 100644
+--- a/sound/soc/atmel/sam9g20_wm8731.c
++++ b/sound/soc/atmel/sam9g20_wm8731.c
+@@ -46,35 +46,6 @@
+  */
+ #undef ENABLE_MIC_INPUT
+ 
+-static struct clk *mclk;
+-
+-static int at91sam9g20ek_set_bias_level(struct snd_soc_card *card,
+-					struct snd_soc_dapm_context *dapm,
+-					enum snd_soc_bias_level level)
+-{
+-	static int mclk_on;
+-	int ret = 0;
+-
+-	switch (level) {
+-	case SND_SOC_BIAS_ON:
+-	case SND_SOC_BIAS_PREPARE:
+-		if (!mclk_on)
+-			ret = clk_enable(mclk);
+-		if (ret == 0)
+-			mclk_on = 1;
+-		break;
+-
+-	case SND_SOC_BIAS_OFF:
+-	case SND_SOC_BIAS_STANDBY:
+-		if (mclk_on)
+-			clk_disable(mclk);
+-		mclk_on = 0;
+-		break;
+-	}
+-
+-	return ret;
+-}
+-
+ static const struct snd_soc_dapm_widget at91sam9g20ek_dapm_widgets[] = {
+ 	SND_SOC_DAPM_MIC("Int Mic", NULL),
+ 	SND_SOC_DAPM_SPK("Ext Spk", NULL),
+@@ -135,7 +106,6 @@ static struct snd_soc_card snd_soc_at91sam9g20ek = {
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &at91sam9g20ek_dai,
+ 	.num_links = 1,
+-	.set_bias_level = at91sam9g20ek_set_bias_level,
+ 
+ 	.dapm_widgets = at91sam9g20ek_dapm_widgets,
+ 	.num_dapm_widgets = ARRAY_SIZE(at91sam9g20ek_dapm_widgets),
+@@ -148,7 +118,6 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
+ {
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct device_node *codec_np, *cpu_np;
+-	struct clk *pllb;
+ 	struct snd_soc_card *card = &snd_soc_at91sam9g20ek;
+ 	int ret;
+ 
+@@ -162,31 +131,6 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	/*
+-	 * Codec MCLK is supplied by PCK0 - set it up.
+-	 */
+-	mclk = clk_get(NULL, "pck0");
+-	if (IS_ERR(mclk)) {
+-		dev_err(&pdev->dev, "Failed to get MCLK\n");
+-		ret = PTR_ERR(mclk);
+-		goto err;
+-	}
+-
+-	pllb = clk_get(NULL, "pllb");
+-	if (IS_ERR(pllb)) {
+-		dev_err(&pdev->dev, "Failed to get PLLB\n");
+-		ret = PTR_ERR(pllb);
+-		goto err_mclk;
+-	}
+-	ret = clk_set_parent(mclk, pllb);
+-	clk_put(pllb);
+-	if (ret != 0) {
+-		dev_err(&pdev->dev, "Failed to set MCLK parent\n");
+-		goto err_mclk;
+-	}
+-
+-	clk_set_rate(mclk, MCLK_RATE);
+-
+ 	card->dev = &pdev->dev;
+ 
+ 	/* Parse device node info */
+@@ -230,9 +174,6 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
+ 
+ 	return ret;
+ 
+-err_mclk:
+-	clk_put(mclk);
+-	mclk = NULL;
+ err:
+ 	atmel_ssc_put_audio(0);
+ 	return ret;
+@@ -242,8 +183,6 @@ static int at91sam9g20ek_audio_remove(struct platform_device *pdev)
+ {
+ 	struct snd_soc_card *card = platform_get_drvdata(pdev);
+ 
+-	clk_disable(mclk);
+-	mclk = NULL;
+ 	snd_soc_unregister_card(card);
+ 	atmel_ssc_put_audio(0);
+ 
+diff --git a/sound/soc/codecs/msm8916-wcd-digital.c b/sound/soc/codecs/msm8916-wcd-digital.c
+index 9ad7fc0baf072..20a07c92b2fc2 100644
+--- a/sound/soc/codecs/msm8916-wcd-digital.c
++++ b/sound/soc/codecs/msm8916-wcd-digital.c
+@@ -1206,9 +1206,16 @@ static int msm8916_wcd_digital_probe(struct platform_device *pdev)
+ 
+ 	dev_set_drvdata(dev, priv);
+ 
+-	return devm_snd_soc_register_component(dev, &msm8916_wcd_digital,
++	ret = devm_snd_soc_register_component(dev, &msm8916_wcd_digital,
+ 				      msm8916_wcd_digital_dai,
+ 				      ARRAY_SIZE(msm8916_wcd_digital_dai));
++	if (ret)
++		goto err_mclk;
++
++	return 0;
++
++err_mclk:
++	clk_disable_unprepare(priv->mclk);
+ err_clk:
+ 	clk_disable_unprepare(priv->ahbclk);
+ 	return ret;
+diff --git a/sound/soc/codecs/wcd934x.c b/sound/soc/codecs/wcd934x.c
+index 8540ac230d0ed..fd704df9b1758 100644
+--- a/sound/soc/codecs/wcd934x.c
++++ b/sound/soc/codecs/wcd934x.c
+@@ -1188,29 +1188,7 @@ static int wcd934x_set_sido_input_src(struct wcd934x_codec *wcd, int sido_src)
+ 	if (sido_src == wcd->sido_input_src)
+ 		return 0;
+ 
+-	if (sido_src == SIDO_SOURCE_INTERNAL) {
+-		regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
+-				   WCD934X_ANA_BUCK_HI_ACCU_EN_MASK, 0);
+-		usleep_range(100, 110);
+-		regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
+-				   WCD934X_ANA_BUCK_HI_ACCU_PRE_ENX_MASK, 0x0);
+-		usleep_range(100, 110);
+-		regmap_update_bits(wcd->regmap, WCD934X_ANA_RCO,
+-				   WCD934X_ANA_RCO_BG_EN_MASK, 0);
+-		usleep_range(100, 110);
+-		regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
+-				   WCD934X_ANA_BUCK_PRE_EN1_MASK,
+-				   WCD934X_ANA_BUCK_PRE_EN1_ENABLE);
+-		usleep_range(100, 110);
+-		regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
+-				   WCD934X_ANA_BUCK_PRE_EN2_MASK,
+-				   WCD934X_ANA_BUCK_PRE_EN2_ENABLE);
+-		usleep_range(100, 110);
+-		regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
+-				   WCD934X_ANA_BUCK_HI_ACCU_EN_MASK,
+-				   WCD934X_ANA_BUCK_HI_ACCU_ENABLE);
+-		usleep_range(100, 110);
+-	} else if (sido_src == SIDO_SOURCE_RCO_BG) {
++	if (sido_src == SIDO_SOURCE_RCO_BG) {
+ 		regmap_update_bits(wcd->regmap, WCD934X_ANA_RCO,
+ 				   WCD934X_ANA_RCO_BG_EN_MASK,
+ 				   WCD934X_ANA_RCO_BG_ENABLE);
+@@ -1296,8 +1274,6 @@ static int wcd934x_disable_ana_bias_and_syclk(struct wcd934x_codec *wcd)
+ 	regmap_update_bits(wcd->regmap, WCD934X_CLK_SYS_MCLK_PRG,
+ 			   WCD934X_EXT_CLK_BUF_EN_MASK |
+ 			   WCD934X_MCLK_EN_MASK, 0x0);
+-	wcd934x_set_sido_input_src(wcd, SIDO_SOURCE_INTERNAL);
+-
+ 	regmap_update_bits(wcd->regmap, WCD934X_ANA_BIAS,
+ 			   WCD934X_ANA_BIAS_EN_MASK, 0);
+ 	regmap_update_bits(wcd->regmap, WCD934X_ANA_BIAS,
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 2924d89bf0daf..417732bdf2860 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -1683,8 +1683,7 @@ static void dapm_seq_run(struct snd_soc_card *card,
+ 		switch (w->id) {
+ 		case snd_soc_dapm_pre:
+ 			if (!w->event)
+-				list_for_each_entry_safe_continue(w, n, list,
+-								  power_list);
++				continue;
+ 
+ 			if (event == SND_SOC_DAPM_STREAM_START)
+ 				ret = w->event(w,
+@@ -1696,8 +1695,7 @@ static void dapm_seq_run(struct snd_soc_card *card,
+ 
+ 		case snd_soc_dapm_post:
+ 			if (!w->event)
+-				list_for_each_entry_safe_continue(w, n, list,
+-								  power_list);
++				continue;
+ 
+ 			if (event == SND_SOC_DAPM_STREAM_START)
+ 				ret = w->event(w,
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index fa91290ad89db..84676a8fb60dc 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1210,6 +1210,7 @@ static void snd_usbmidi_output_drain(struct snd_rawmidi_substream *substream)
+ 		} while (drain_urbs && timeout);
+ 		finish_wait(&ep->drain_wait, &wait);
+ 	}
++	port->active = 0;
+ 	spin_unlock_irq(&ep->buffer_lock);
+ }
+ 
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index e54a98f465490..d8e31ee03b9d0 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -8,7 +8,7 @@
+  */
+ 
+ /* handling of USB vendor/product ID pairs as 32-bit numbers */
+-#define USB_ID(vendor, product) (((vendor) << 16) | (product))
++#define USB_ID(vendor, product) (((unsigned int)(vendor) << 16) | (product))
+ #define USB_ID_VENDOR(id) ((id) >> 16)
+ #define USB_ID_PRODUCT(id) ((u16)(id))
+ 
+diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
+index 17465d454a0e3..f76b1a9d5a6e1 100644
+--- a/tools/lib/perf/evlist.c
++++ b/tools/lib/perf/evlist.c
+@@ -571,7 +571,6 @@ int perf_evlist__mmap_ops(struct perf_evlist *evlist,
+ {
+ 	struct perf_evsel *evsel;
+ 	const struct perf_cpu_map *cpus = evlist->cpus;
+-	const struct perf_thread_map *threads = evlist->threads;
+ 
+ 	if (!ops || !ops->get || !ops->mmap)
+ 		return -EINVAL;
+@@ -583,7 +582,7 @@ int perf_evlist__mmap_ops(struct perf_evlist *evlist,
+ 	perf_evlist__for_each_entry(evlist, evsel) {
+ 		if ((evsel->attr.read_format & PERF_FORMAT_ID) &&
+ 		    evsel->sample_id == NULL &&
+-		    perf_evsel__alloc_id(evsel, perf_cpu_map__nr(cpus), threads->nr) < 0)
++		    perf_evsel__alloc_id(evsel, evsel->fd->max_x, evsel->fd->max_y) < 0)
+ 			return -ENOMEM;
+ 	}
+ 
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 91cab5cdfbc16..b55ee073c2f72 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -340,6 +340,7 @@ static int report__setup_sample_type(struct report *rep)
+ 	struct perf_session *session = rep->session;
+ 	u64 sample_type = evlist__combined_sample_type(session->evlist);
+ 	bool is_pipe = perf_data__is_pipe(session->data);
++	struct evsel *evsel;
+ 
+ 	if (session->itrace_synth_opts->callchain ||
+ 	    session->itrace_synth_opts->add_callchain ||
+@@ -394,6 +395,19 @@ static int report__setup_sample_type(struct report *rep)
+ 	}
+ 
+ 	if (sort__mode == SORT_MODE__MEMORY) {
++		/*
++		 * FIXUP: prior to kernel 5.18, Arm SPE missed to set
++		 * PERF_SAMPLE_DATA_SRC bit in sample type.  For backward
++		 * compatibility, set the bit if it's an old perf data file.
++		 */
++		evlist__for_each_entry(session->evlist, evsel) {
++			if (strstr(evsel->name, "arm_spe") &&
++				!(sample_type & PERF_SAMPLE_DATA_SRC)) {
++				evsel->core.attr.sample_type |= PERF_SAMPLE_DATA_SRC;
++				sample_type |= PERF_SAMPLE_DATA_SRC;
++			}
++		}
++
+ 		if (!is_pipe && !(sample_type & PERF_SAMPLE_DATA_SRC)) {
+ 			ui__error("Selected --mem-mode but no mem data. "
+ 				  "Did you call perf record without -d?\n");
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/vxlan_flooding.sh b/tools/testing/selftests/drivers/net/mlxsw/vxlan_flooding.sh
+index fedcb7b35af9f..af5ea50ed5c0e 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/vxlan_flooding.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/vxlan_flooding.sh
+@@ -172,6 +172,17 @@ flooding_filters_add()
+ 	local lsb
+ 	local i
+ 
++	# Prevent unwanted packets from entering the bridge and interfering
++	# with the test.
++	tc qdisc add dev br0 clsact
++	tc filter add dev br0 egress protocol all pref 1 handle 1 \
++		matchall skip_hw action drop
++	tc qdisc add dev $h1 clsact
++	tc filter add dev $h1 egress protocol all pref 1 handle 1 \
++		flower skip_hw dst_mac de:ad:be:ef:13:37 action pass
++	tc filter add dev $h1 egress protocol all pref 2 handle 2 \
++		matchall skip_hw action drop
++
+ 	tc qdisc add dev $rp2 clsact
+ 
+ 	for i in $(eval echo {1..$num_remotes}); do
+@@ -194,6 +205,12 @@ flooding_filters_del()
+ 	done
+ 
+ 	tc qdisc del dev $rp2 clsact
++
++	tc filter del dev $h1 egress protocol all pref 2 handle 2 matchall
++	tc filter del dev $h1 egress protocol all pref 1 handle 1 flower
++	tc qdisc del dev $h1 clsact
++	tc filter del dev br0 egress protocol all pref 1 handle 1 matchall
++	tc qdisc del dev br0 clsact
+ }
+ 
+ flooding_check_packets()


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-04-27 12:24 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-04-27 12:24 UTC (permalink / raw
  To: gentoo-commits

commit:     f5bc9221057b2aa457c445a0d45820614ceb96bb
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 27 12:24:20 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 27 12:24:20 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f5bc9221

Remove redundant patch

Removed:
2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 --
 ...quest-interrupts-after-IRQ-is-initialized.patch | 75 ----------------------
 2 files changed, 79 deletions(-)

diff --git a/0000_README b/0000_README
index e6bddb24..5c68defb 100644
--- a/0000_README
+++ b/0000_README
@@ -507,10 +507,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Path:   2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/drivers/gpio?id=06fb4ecfeac7e00d6704fa5ed19299f2fefb3cc9
-Desc:   gpio: Request interrupts after IRQ is initialized
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch b/2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
deleted file mode 100644
index 0a1d4624..00000000
--- a/2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
+++ /dev/null
@@ -1,75 +0,0 @@
-From 06fb4ecfeac7e00d6704fa5ed19299f2fefb3cc9 Mon Sep 17 00:00:00 2001
-From: Mario Limonciello <mario.limonciello@amd.com>
-Date: Fri, 22 Apr 2022 08:14:52 -0500
-Subject: gpio: Request interrupts after IRQ is initialized
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-Commit 5467801f1fcb ("gpio: Restrict usage of GPIO chip irq members
-before initialization") attempted to fix a race condition that lead to a
-NULL pointer, but in the process caused a regression for _AEI/_EVT
-declared GPIOs.
-
-This manifests in messages showing deferred probing while trying to
-allocate IRQs like so:
-
-  amd_gpio AMDI0030:00: Failed to translate GPIO pin 0x0000 to IRQ, err -517
-  amd_gpio AMDI0030:00: Failed to translate GPIO pin 0x002C to IRQ, err -517
-  amd_gpio AMDI0030:00: Failed to translate GPIO pin 0x003D to IRQ, err -517
-  [ .. more of the same .. ]
-
-The code for walking _AEI doesn't handle deferred probing and so this
-leads to non-functional GPIO interrupts.
-
-Fix this issue by moving the call to `acpi_gpiochip_request_interrupts`
-to occur after gc->irc.initialized is set.
-
-Fixes: 5467801f1fcb ("gpio: Restrict usage of GPIO chip irq members before initialization")
-Link: https://lore.kernel.org/linux-gpio/BL1PR12MB51577A77F000A008AA694675E2EF9@BL1PR12MB5157.namprd12.prod.outlook.com/
-Link: https://bugzilla.suse.com/show_bug.cgi?id=1198697
-Link: https://bugzilla.kernel.org/show_bug.cgi?id=215850
-Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1979
-Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1976
-Reported-by: Mario Limonciello <mario.limonciello@amd.com>
-Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
-Reviewed-by: Shreeya Patel <shreeya.patel@collabora.com>
-Tested-By: Samuel Čavoj <samuel@cavoj.net>
-Tested-By: lukeluk498@gmail.com Link:
-Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
-Acked-by: Linus Walleij <linus.walleij@linaro.org>
-Reviewed-and-tested-by: Takashi Iwai <tiwai@suse.de>
-Cc: Shreeya Patel <shreeya.patel@collabora.com>
-Cc: stable@vger.kernel.org
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
----
- drivers/gpio/gpiolib.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-(limited to 'drivers/gpio')
-
-diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
-index 085348e089860..b7694171655cf 100644
---- a/drivers/gpio/gpiolib.c
-+++ b/drivers/gpio/gpiolib.c
-@@ -1601,8 +1601,6 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
- 
- 	gpiochip_set_irq_hooks(gc);
- 
--	acpi_gpiochip_request_interrupts(gc);
--
- 	/*
- 	 * Using barrier() here to prevent compiler from reordering
- 	 * gc->irq.initialized before initialization of above
-@@ -1612,6 +1610,8 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
- 
- 	gc->irq.initialized = true;
- 
-+	acpi_gpiochip_request_interrupts(gc);
-+
- 	return 0;
- }
- 
--- 
-cgit 1.2.3-1.el7
-


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-05-09 10:56 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-05-09 10:56 UTC (permalink / raw
  To: gentoo-commits

commit:     133b96e06019d0827355c84ae0f0293ea6799de4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon May  9 10:55:52 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon May  9 10:55:52 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=133b96e0

Linux patch 5.10.114

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1113_linux-5.10.114.patch | 4423 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4427 insertions(+)

diff --git a/0000_README b/0000_README
index 5c68defb..439eef26 100644
--- a/0000_README
+++ b/0000_README
@@ -495,6 +495,10 @@ Patch:  1112_linux-5.10.113.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.113
 
+Patch:  1113_linux-5.10.114.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.114
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1113_linux-5.10.114.patch b/1113_linux-5.10.114.patch
new file mode 100644
index 00000000..5551a035
--- /dev/null
+++ b/1113_linux-5.10.114.patch
@@ -0,0 +1,4423 @@
+diff --git a/Makefile b/Makefile
+index 99bbaa9f54f4c..b76e6d0aa85de 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 113
++SUBLEVEL = 114
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/am3517-evm.dts b/arch/arm/boot/dts/am3517-evm.dts
+index 0d2fac98ce7d2..c8b80f156ec98 100644
+--- a/arch/arm/boot/dts/am3517-evm.dts
++++ b/arch/arm/boot/dts/am3517-evm.dts
+@@ -161,6 +161,8 @@
+ 
+ 	/* HS USB Host PHY on PORT 1 */
+ 	hsusb1_phy: hsusb1_phy {
++		pinctrl-names = "default";
++		pinctrl-0 = <&hsusb1_rst_pins>;
+ 		compatible = "usb-nop-xceiv";
+ 		reset-gpios = <&gpio2 25 GPIO_ACTIVE_LOW>; /* gpio_57 */
+ 		#phy-cells = <0>;
+@@ -168,7 +170,9 @@
+ };
+ 
+ &davinci_emac {
+-	     status = "okay";
++	pinctrl-names = "default";
++	pinctrl-0 = <&ethernet_pins>;
++	status = "okay";
+ };
+ 
+ &davinci_mdio {
+@@ -193,6 +197,8 @@
+ };
+ 
+ &i2c2 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c2_pins>;
+ 	clock-frequency = <400000>;
+ 	/* User DIP swithes [1:8] / User LEDS [1:2] */
+ 	tca6416: gpio@21 {
+@@ -205,6 +211,8 @@
+ };
+ 
+ &i2c3 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c3_pins>;
+ 	clock-frequency = <400000>;
+ };
+ 
+@@ -223,6 +231,8 @@
+ };
+ 
+ &usbhshost {
++	pinctrl-names = "default";
++	pinctrl-0 = <&hsusb1_pins>;
+ 	port1-mode = "ehci-phy";
+ };
+ 
+@@ -231,8 +241,35 @@
+ };
+ 
+ &omap3_pmx_core {
+-	pinctrl-names = "default";
+-	pinctrl-0 = <&hsusb1_rst_pins>;
++
++	ethernet_pins: pinmux_ethernet_pins {
++		pinctrl-single,pins = <
++			OMAP3_CORE1_IOPAD(0x21fe, PIN_INPUT | MUX_MODE0) /* rmii_mdio_data */
++			OMAP3_CORE1_IOPAD(0x2200, MUX_MODE0) /* rmii_mdio_clk */
++			OMAP3_CORE1_IOPAD(0x2202, PIN_INPUT_PULLDOWN | MUX_MODE0) /* rmii_rxd0 */
++			OMAP3_CORE1_IOPAD(0x2204, PIN_INPUT_PULLDOWN | MUX_MODE0) /* rmii_rxd1 */
++			OMAP3_CORE1_IOPAD(0x2206, PIN_INPUT_PULLDOWN | MUX_MODE0) /* rmii_crs_dv */
++			OMAP3_CORE1_IOPAD(0x2208, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* rmii_rxer */
++			OMAP3_CORE1_IOPAD(0x220a, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* rmii_txd0 */
++			OMAP3_CORE1_IOPAD(0x220c, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* rmii_txd1 */
++			OMAP3_CORE1_IOPAD(0x220e, PIN_OUTPUT_PULLDOWN |MUX_MODE0) /* rmii_txen */
++			OMAP3_CORE1_IOPAD(0x2210, PIN_INPUT_PULLDOWN | MUX_MODE0) /* rmii_50mhz_clk */
++		>;
++	};
++
++	i2c2_pins: pinmux_i2c2_pins {
++		pinctrl-single,pins = <
++			OMAP3_CORE1_IOPAD(0x21be, PIN_INPUT_PULLUP | MUX_MODE0)  /* i2c2_scl */
++			OMAP3_CORE1_IOPAD(0x21c0, PIN_INPUT_PULLUP | MUX_MODE0)  /* i2c2_sda */
++		>;
++	};
++
++	i2c3_pins: pinmux_i2c3_pins {
++		pinctrl-single,pins = <
++			OMAP3_CORE1_IOPAD(0x21c2, PIN_INPUT_PULLUP | MUX_MODE0)  /* i2c3_scl */
++			OMAP3_CORE1_IOPAD(0x21c4, PIN_INPUT_PULLUP | MUX_MODE0)  /* i2c3_sda */
++		>;
++	};
+ 
+ 	leds_pins: pinmux_leds_pins {
+ 		pinctrl-single,pins = <
+@@ -300,8 +337,6 @@
+ };
+ 
+ &omap3_pmx_core2 {
+-	pinctrl-names = "default";
+-	pinctrl-0 = <&hsusb1_pins>;
+ 
+ 	hsusb1_pins: pinmux_hsusb1_pins {
+ 		pinctrl-single,pins = <
+diff --git a/arch/arm/boot/dts/am3517-som.dtsi b/arch/arm/boot/dts/am3517-som.dtsi
+index 8b669e2eafec4..f7b680f6c48ad 100644
+--- a/arch/arm/boot/dts/am3517-som.dtsi
++++ b/arch/arm/boot/dts/am3517-som.dtsi
+@@ -69,6 +69,8 @@
+ };
+ 
+ &i2c1 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c1_pins>;
+ 	clock-frequency = <400000>;
+ 
+ 	s35390a: s35390a@30 {
+@@ -179,6 +181,13 @@
+ 
+ &omap3_pmx_core {
+ 
++	i2c1_pins: pinmux_i2c1_pins {
++		pinctrl-single,pins = <
++			OMAP3_CORE1_IOPAD(0x21ba, PIN_INPUT_PULLUP | MUX_MODE0)  /* i2c1_scl */
++			OMAP3_CORE1_IOPAD(0x21bc, PIN_INPUT_PULLUP | MUX_MODE0)  /* i2c1_sda */
++		>;
++	};
++
+ 	wl12xx_buffer_pins: pinmux_wl12xx_buffer_pins {
+ 		pinctrl-single,pins = <
+ 			OMAP3_CORE1_IOPAD(0x2156, PIN_OUTPUT | MUX_MODE4)  /* mmc1_dat7.gpio_129 */
+diff --git a/arch/arm/boot/dts/at91-sama5d4_xplained.dts b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+index e42dae06b5826..73cb157c4ef54 100644
+--- a/arch/arm/boot/dts/at91-sama5d4_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+@@ -91,7 +91,7 @@
+ 
+ 			spi1: spi@fc018000 {
+ 				pinctrl-names = "default";
+-				pinctrl-0 = <&pinctrl_spi0_cs>;
++				pinctrl-0 = <&pinctrl_spi1_cs>;
+ 				cs-gpios = <&pioB 21 0>;
+ 				status = "okay";
+ 			};
+@@ -149,7 +149,7 @@
+ 						atmel,pins =
+ 							<AT91_PIOE 1 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_UP_DEGLITCH>;
+ 					};
+-					pinctrl_spi0_cs: spi0_cs_default {
++					pinctrl_spi1_cs: spi1_cs_default {
+ 						atmel,pins =
+ 							<AT91_PIOB 21 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+ 					};
+diff --git a/arch/arm/boot/dts/at91sam9g20ek_common.dtsi b/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
+index 87bb39060e8be..ca03685f0f086 100644
+--- a/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
++++ b/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
+@@ -219,6 +219,12 @@
+ 		wm8731: wm8731@1b {
+ 			compatible = "wm8731";
+ 			reg = <0x1b>;
++
++			/* PCK0 at 12MHz */
++			clocks = <&pmc PMC_TYPE_SYSTEM 8>;
++			clock-names = "mclk";
++			assigned-clocks = <&pmc PMC_TYPE_SYSTEM 8>;
++			assigned-clock-rates = <12000000>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-apalis.dtsi b/arch/arm/boot/dts/imx6qdl-apalis.dtsi
+index 30fa349f9d054..a696873dc1abe 100644
+--- a/arch/arm/boot/dts/imx6qdl-apalis.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-apalis.dtsi
+@@ -286,6 +286,8 @@
+ 	codec: sgtl5000@a {
+ 		compatible = "fsl,sgtl5000";
+ 		reg = <0x0a>;
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_sgtl5000>;
+ 		clocks = <&clks IMX6QDL_CLK_CKO>;
+ 		VDDA-supply = <&reg_module_3v3_audio>;
+ 		VDDIO-supply = <&reg_module_3v3>;
+@@ -516,8 +518,6 @@
+ 			MX6QDL_PAD_DISP0_DAT21__AUD4_TXD	0x130b0
+ 			MX6QDL_PAD_DISP0_DAT22__AUD4_TXFS	0x130b0
+ 			MX6QDL_PAD_DISP0_DAT23__AUD4_RXD	0x130b0
+-			/* SGTL5000 sys_mclk */
+-			MX6QDL_PAD_GPIO_5__CCM_CLKO1		0x130b0
+ 		>;
+ 	};
+ 
+@@ -810,6 +810,12 @@
+ 		>;
+ 	};
+ 
++	pinctrl_sgtl5000: sgtl5000grp {
++		fsl,pins = <
++			MX6QDL_PAD_GPIO_5__CCM_CLKO1	0x130b0
++		>;
++	};
++
+ 	pinctrl_spdif: spdifgrp {
+ 		fsl,pins = <
+ 			MX6QDL_PAD_GPIO_16__SPDIF_IN  0x1b0b0
+diff --git a/arch/arm/boot/dts/imx6ull-colibri.dtsi b/arch/arm/boot/dts/imx6ull-colibri.dtsi
+index 4436556624d67..548cfcc7a01da 100644
+--- a/arch/arm/boot/dts/imx6ull-colibri.dtsi
++++ b/arch/arm/boot/dts/imx6ull-colibri.dtsi
+@@ -37,7 +37,7 @@
+ 
+ 	reg_sd1_vmmc: regulator-sd1-vmmc {
+ 		compatible = "regulator-gpio";
+-		gpio = <&gpio5 9 GPIO_ACTIVE_HIGH>;
++		gpios = <&gpio5 9 GPIO_ACTIVE_HIGH>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_snvs_reg_sd>;
+ 		regulator-always-on;
+diff --git a/arch/arm/boot/dts/logicpd-som-lv-35xx-devkit.dts b/arch/arm/boot/dts/logicpd-som-lv-35xx-devkit.dts
+index 2a0a98fe67f06..3240c67e0c392 100644
+--- a/arch/arm/boot/dts/logicpd-som-lv-35xx-devkit.dts
++++ b/arch/arm/boot/dts/logicpd-som-lv-35xx-devkit.dts
+@@ -11,3 +11,18 @@
+ 	model = "LogicPD Zoom OMAP35xx SOM-LV Development Kit";
+ 	compatible = "logicpd,dm3730-som-lv-devkit", "ti,omap3430", "ti,omap3";
+ };
++
++&omap3_pmx_core2 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&hsusb2_2_pins>;
++	hsusb2_2_pins: pinmux_hsusb2_2_pins {
++		pinctrl-single,pins = <
++			OMAP3430_CORE2_IOPAD(0x25f0, PIN_OUTPUT | MUX_MODE3)            /* etk_d10.hsusb2_clk */
++			OMAP3430_CORE2_IOPAD(0x25f2, PIN_OUTPUT | MUX_MODE3)            /* etk_d11.hsusb2_stp */
++			OMAP3430_CORE2_IOPAD(0x25f4, PIN_INPUT_PULLDOWN | MUX_MODE3)    /* etk_d12.hsusb2_dir */
++			OMAP3430_CORE2_IOPAD(0x25f6, PIN_INPUT_PULLDOWN | MUX_MODE3)    /* etk_d13.hsusb2_nxt */
++			OMAP3430_CORE2_IOPAD(0x25f8, PIN_INPUT_PULLDOWN | MUX_MODE3)    /* etk_d14.hsusb2_data0 */
++			OMAP3430_CORE2_IOPAD(0x25fa, PIN_INPUT_PULLDOWN | MUX_MODE3)    /* etk_d15.hsusb2_data1 */
++		>;
++	};
++};
+diff --git a/arch/arm/boot/dts/logicpd-som-lv-37xx-devkit.dts b/arch/arm/boot/dts/logicpd-som-lv-37xx-devkit.dts
+index a604d92221a4f..c757f0d7781c1 100644
+--- a/arch/arm/boot/dts/logicpd-som-lv-37xx-devkit.dts
++++ b/arch/arm/boot/dts/logicpd-som-lv-37xx-devkit.dts
+@@ -11,3 +11,18 @@
+ 	model = "LogicPD Zoom DM3730 SOM-LV Development Kit";
+ 	compatible = "logicpd,dm3730-som-lv-devkit", "ti,omap3630", "ti,omap3";
+ };
++
++&omap3_pmx_core2 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&hsusb2_2_pins>;
++	hsusb2_2_pins: pinmux_hsusb2_2_pins {
++		pinctrl-single,pins = <
++			OMAP3630_CORE2_IOPAD(0x25f0, PIN_OUTPUT | MUX_MODE3)            /* etk_d10.hsusb2_clk */
++			OMAP3630_CORE2_IOPAD(0x25f2, PIN_OUTPUT | MUX_MODE3)            /* etk_d11.hsusb2_stp */
++			OMAP3630_CORE2_IOPAD(0x25f4, PIN_INPUT_PULLDOWN | MUX_MODE3)    /* etk_d12.hsusb2_dir */
++			OMAP3630_CORE2_IOPAD(0x25f6, PIN_INPUT_PULLDOWN | MUX_MODE3)    /* etk_d13.hsusb2_nxt */
++			OMAP3630_CORE2_IOPAD(0x25f8, PIN_INPUT_PULLDOWN | MUX_MODE3)    /* etk_d14.hsusb2_data0 */
++			OMAP3630_CORE2_IOPAD(0x25fa, PIN_INPUT_PULLDOWN | MUX_MODE3)    /* etk_d15.hsusb2_data1 */
++		>;
++	};
++};
+diff --git a/arch/arm/boot/dts/logicpd-som-lv.dtsi b/arch/arm/boot/dts/logicpd-som-lv.dtsi
+index b56524cc7fe27..55b619c99e24d 100644
+--- a/arch/arm/boot/dts/logicpd-som-lv.dtsi
++++ b/arch/arm/boot/dts/logicpd-som-lv.dtsi
+@@ -265,21 +265,6 @@
+ 	};
+ };
+ 
+-&omap3_pmx_core2 {
+-	pinctrl-names = "default";
+-	pinctrl-0 = <&hsusb2_2_pins>;
+-	hsusb2_2_pins: pinmux_hsusb2_2_pins {
+-		pinctrl-single,pins = <
+-			OMAP3630_CORE2_IOPAD(0x25f0, PIN_OUTPUT | MUX_MODE3)            /* etk_d10.hsusb2_clk */
+-			OMAP3630_CORE2_IOPAD(0x25f2, PIN_OUTPUT | MUX_MODE3)            /* etk_d11.hsusb2_stp */
+-			OMAP3630_CORE2_IOPAD(0x25f4, PIN_INPUT_PULLDOWN | MUX_MODE3)    /* etk_d12.hsusb2_dir */
+-			OMAP3630_CORE2_IOPAD(0x25f6, PIN_INPUT_PULLDOWN | MUX_MODE3)    /* etk_d13.hsusb2_nxt */
+-			OMAP3630_CORE2_IOPAD(0x25f8, PIN_INPUT_PULLDOWN | MUX_MODE3)    /* etk_d14.hsusb2_data0 */
+-			OMAP3630_CORE2_IOPAD(0x25fa, PIN_INPUT_PULLDOWN | MUX_MODE3)    /* etk_d15.hsusb2_data1 */
+-		>;
+-	};
+-};
+-
+ &uart2 {
+ 	interrupts-extended = <&intc 73 &omap3_pmx_core OMAP3_UART2_RX>;
+ 	pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/omap3-gta04.dtsi b/arch/arm/boot/dts/omap3-gta04.dtsi
+index 80c9e5e34136a..cc8a378dd076e 100644
+--- a/arch/arm/boot/dts/omap3-gta04.dtsi
++++ b/arch/arm/boot/dts/omap3-gta04.dtsi
+@@ -31,6 +31,8 @@
+ 	aliases {
+ 		display0 = &lcd;
+ 		display1 = &tv0;
++		/delete-property/ mmc2;
++		/delete-property/ mmc3;
+ 	};
+ 
+ 	ldo_3v3: fixedregulator {
+diff --git a/arch/arm/mach-exynos/Kconfig b/arch/arm/mach-exynos/Kconfig
+index d2d249706ebb3..56314b1c74089 100644
+--- a/arch/arm/mach-exynos/Kconfig
++++ b/arch/arm/mach-exynos/Kconfig
+@@ -20,7 +20,6 @@ menuconfig ARCH_EXYNOS
+ 	select EXYNOS_PMU
+ 	select EXYNOS_SROM
+ 	select EXYNOS_PM_DOMAINS if PM_GENERIC_DOMAINS
+-	select GPIOLIB
+ 	select HAVE_ARM_ARCH_TIMER if ARCH_EXYNOS5
+ 	select HAVE_ARM_SCU if SMP
+ 	select HAVE_S3C2410_I2C if I2C
+diff --git a/arch/arm/mach-omap2/omap4-common.c b/arch/arm/mach-omap2/omap4-common.c
+index 5c3845730dbf5..0b80f8bcd3047 100644
+--- a/arch/arm/mach-omap2/omap4-common.c
++++ b/arch/arm/mach-omap2/omap4-common.c
+@@ -314,10 +314,12 @@ void __init omap_gic_of_init(void)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "arm,cortex-a9-gic");
+ 	gic_dist_base_addr = of_iomap(np, 0);
++	of_node_put(np);
+ 	WARN_ON(!gic_dist_base_addr);
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "arm,cortex-a9-twd-timer");
+ 	twd_base = of_iomap(np, 0);
++	of_node_put(np);
+ 	WARN_ON(!twd_base);
+ 
+ skip_errata_init:
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-a311d.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-a311d.dtsi
+index d61f43052a344..8e9ad1e51d665 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-a311d.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-a311d.dtsi
+@@ -11,26 +11,6 @@
+ 		compatible = "operating-points-v2";
+ 		opp-shared;
+ 
+-		opp-100000000 {
+-			opp-hz = /bits/ 64 <100000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+-		opp-250000000 {
+-			opp-hz = /bits/ 64 <250000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+-		opp-500000000 {
+-			opp-hz = /bits/ 64 <500000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+-		opp-667000000 {
+-			opp-hz = /bits/ 64 <667000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+ 			opp-microvolt = <761000>;
+@@ -71,26 +51,6 @@
+ 		compatible = "operating-points-v2";
+ 		opp-shared;
+ 
+-		opp-100000000 {
+-			opp-hz = /bits/ 64 <100000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+-		opp-250000000 {
+-			opp-hz = /bits/ 64 <250000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+-		opp-500000000 {
+-			opp-hz = /bits/ 64 <500000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+-		opp-667000000 {
+-			opp-hz = /bits/ 64 <667000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+ 			opp-microvolt = <731000>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-s922x.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-s922x.dtsi
+index 1e5d0ee5d541b..44c23c984034c 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-s922x.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-s922x.dtsi
+@@ -11,26 +11,6 @@
+ 		compatible = "operating-points-v2";
+ 		opp-shared;
+ 
+-		opp-100000000 {
+-			opp-hz = /bits/ 64 <100000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+-		opp-250000000 {
+-			opp-hz = /bits/ 64 <250000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+-		opp-500000000 {
+-			opp-hz = /bits/ 64 <500000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+-		opp-667000000 {
+-			opp-hz = /bits/ 64 <667000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+ 			opp-microvolt = <731000>;
+@@ -76,26 +56,6 @@
+ 		compatible = "operating-points-v2";
+ 		opp-shared;
+ 
+-		opp-100000000 {
+-			opp-hz = /bits/ 64 <100000000>;
+-			opp-microvolt = <751000>;
+-		};
+-
+-		opp-250000000 {
+-			opp-hz = /bits/ 64 <250000000>;
+-			opp-microvolt = <751000>;
+-		};
+-
+-		opp-500000000 {
+-			opp-hz = /bits/ 64 <500000000>;
+-			opp-microvolt = <751000>;
+-		};
+-
+-		opp-667000000 {
+-			opp-hz = /bits/ 64 <667000000>;
+-			opp-microvolt = <751000>;
+-		};
+-
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+ 			opp-microvolt = <771000>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi b/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi
+index c309517abae32..defe0b8d4d275 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi
+@@ -95,26 +95,6 @@
+ 		compatible = "operating-points-v2";
+ 		opp-shared;
+ 
+-		opp-100000000 {
+-			opp-hz = /bits/ 64 <100000000>;
+-			opp-microvolt = <730000>;
+-		};
+-
+-		opp-250000000 {
+-			opp-hz = /bits/ 64 <250000000>;
+-			opp-microvolt = <730000>;
+-		};
+-
+-		opp-500000000 {
+-			opp-hz = /bits/ 64 <500000000>;
+-			opp-microvolt = <730000>;
+-		};
+-
+-		opp-667000000 {
+-			opp-hz = /bits/ 64 <666666666>;
+-			opp-microvolt = <750000>;
+-		};
+-
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+ 			opp-microvolt = <770000>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-ddr4-evk.dts b/arch/arm64/boot/dts/freescale/imx8mn-ddr4-evk.dts
+index 7dfee715a2c4d..d8ce217c60166 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-ddr4-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mn-ddr4-evk.dts
+@@ -59,6 +59,10 @@
+ 		interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+ 		rohm,reset-snvs-powered;
+ 
++		#clock-cells = <0>;
++		clocks = <&osc_32k 0>;
++		clock-output-names = "clk-32k-out";
++
+ 		regulators {
+ 			buck1_reg: BUCK1 {
+ 				regulator-name = "buck1";
+diff --git a/arch/powerpc/perf/Makefile b/arch/powerpc/perf/Makefile
+index c02854dea2b20..da9f60ede97b2 100644
+--- a/arch/powerpc/perf/Makefile
++++ b/arch/powerpc/perf/Makefile
+@@ -5,11 +5,11 @@ ifdef CONFIG_COMPAT
+ obj-$(CONFIG_PERF_EVENTS)	+= callchain_32.o
+ endif
+ 
+-obj-$(CONFIG_PPC_PERF_CTRS)	+= core-book3s.o bhrb.o
++obj-$(CONFIG_PPC_PERF_CTRS)	+= core-book3s.o
+ obj64-$(CONFIG_PPC_PERF_CTRS)	+= ppc970-pmu.o power5-pmu.o \
+ 				   power5+-pmu.o power6-pmu.o power7-pmu.o \
+ 				   isa207-common.o power8-pmu.o power9-pmu.o \
+-				   generic-compat-pmu.o power10-pmu.o
++				   generic-compat-pmu.o power10-pmu.o bhrb.o
+ obj32-$(CONFIG_PPC_PERF_CTRS)	+= mpc7450-pmu.o
+ 
+ obj-$(CONFIG_PPC_POWERNV)	+= imc-pmu.o
+diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
+index 3fe7a5296aa58..1612e11f7bf6d 100644
+--- a/arch/riscv/kernel/patch.c
++++ b/arch/riscv/kernel/patch.c
+@@ -100,7 +100,7 @@ static int patch_text_cb(void *data)
+ 	struct patch_insn *patch = data;
+ 	int ret = 0;
+ 
+-	if (atomic_inc_return(&patch->cpu_count) == 1) {
++	if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
+ 		ret =
+ 		    patch_text_nosync(patch->addr, &patch->insn,
+ 					    GET_INSN_LENGTH(patch->insn));
+diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
+index 2b7cc5397f80d..91a06cef50c1b 100644
+--- a/arch/x86/include/asm/microcode.h
++++ b/arch/x86/include/asm/microcode.h
+@@ -133,11 +133,13 @@ extern void load_ucode_ap(void);
+ void reload_early_microcode(void);
+ extern bool get_builtin_firmware(struct cpio_data *cd, const char *name);
+ extern bool initrd_gone;
++void microcode_bsp_resume(void);
+ #else
+ static inline int __init microcode_init(void)			{ return 0; };
+ static inline void __init load_ucode_bsp(void)			{ }
+ static inline void load_ucode_ap(void)				{ }
+ static inline void reload_early_microcode(void)			{ }
++static inline void microcode_bsp_resume(void)			{ }
+ static inline bool
+ get_builtin_firmware(struct cpio_data *cd, const char *name)	{ return false; }
+ #endif
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index bbbd248fe9132..0b1732b98e719 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -775,9 +775,9 @@ static struct subsys_interface mc_cpu_interface = {
+ };
+ 
+ /**
+- * mc_bp_resume - Update boot CPU microcode during resume.
++ * microcode_bsp_resume - Update boot CPU microcode during resume.
+  */
+-static void mc_bp_resume(void)
++void microcode_bsp_resume(void)
+ {
+ 	int cpu = smp_processor_id();
+ 	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+@@ -789,7 +789,7 @@ static void mc_bp_resume(void)
+ }
+ 
+ static struct syscore_ops mc_syscore_ops = {
+-	.resume			= mc_bp_resume,
++	.resume			= microcode_bsp_resume,
+ };
+ 
+ static int mc_cpu_starting(unsigned int cpu)
+diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
+index 508c81e97ab10..f1c0befb62df5 100644
+--- a/arch/x86/lib/usercopy_64.c
++++ b/arch/x86/lib/usercopy_64.c
+@@ -121,7 +121,7 @@ void __memcpy_flushcache(void *_dst, const void *_src, size_t size)
+ 
+ 	/* cache copy and flush to align dest */
+ 	if (!IS_ALIGNED(dest, 8)) {
+-		unsigned len = min_t(unsigned, size, ALIGN(dest, 8) - dest);
++		size_t len = min_t(size_t, size, ALIGN(dest, 8) - dest);
+ 
+ 		memcpy((void *) dest, (void *) source, len);
+ 		clean_cache_range((void *) dest, len);
+diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
+index c552cd2d0632b..326d6d1737338 100644
+--- a/arch/x86/pci/xen.c
++++ b/arch/x86/pci/xen.c
+@@ -476,7 +476,6 @@ static __init void xen_setup_pci_msi(void)
+ 			xen_msi_ops.setup_msi_irqs = xen_setup_msi_irqs;
+ 		}
+ 		xen_msi_ops.teardown_msi_irqs = xen_pv_teardown_msi_irqs;
+-		pci_msi_ignore_mask = 1;
+ 	} else if (xen_hvm_domain()) {
+ 		xen_msi_ops.setup_msi_irqs = xen_hvm_setup_msi_irqs;
+ 		xen_msi_ops.teardown_msi_irqs = xen_teardown_msi_irqs;
+@@ -490,6 +489,11 @@ static __init void xen_setup_pci_msi(void)
+ 	 * in allocating the native domain and never use it.
+ 	 */
+ 	x86_init.irqs.create_pci_msi_domain = xen_create_pci_msi_domain;
++	/*
++	 * With XEN PIRQ/Eventchannels in use PCI/MSI[-X] masking is solely
++	 * controlled by the hypervisor.
++	 */
++	pci_msi_ignore_mask = 1;
+ }
+ 
+ #else /* CONFIG_PCI_MSI */
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index decebcd8ee1c7..d023c85e3c536 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -25,6 +25,7 @@
+ #include <asm/cpu.h>
+ #include <asm/mmu_context.h>
+ #include <asm/cpu_device_id.h>
++#include <asm/microcode.h>
+ 
+ #ifdef CONFIG_X86_32
+ __visible unsigned long saved_context_ebx;
+@@ -265,11 +266,18 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
+ 	x86_platform.restore_sched_clock_state();
+ 	mtrr_bp_restore();
+ 	perf_restore_debug_store();
+-	msr_restore_context(ctxt);
+ 
+ 	c = &cpu_data(smp_processor_id());
+ 	if (cpu_has(c, X86_FEATURE_MSR_IA32_FEAT_CTL))
+ 		init_ia32_feat_ctl(c);
++
++	microcode_bsp_resume();
++
++	/*
++	 * This needs to happen after the microcode has been updated upon resume
++	 * because some of the MSRs are "emulated" in microcode.
++	 */
++	msr_restore_context(ctxt);
+ }
+ 
+ /* Needed by apm.c */
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 9af32b44b7173..fb8f959a7f327 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2257,7 +2257,17 @@ static void ioc_timer_fn(struct timer_list *timer)
+ 				iocg->hweight_donating = hwa;
+ 				iocg->hweight_after_donation = new_hwi;
+ 				list_add(&iocg->surplus_list, &surpluses);
+-			} else {
++			} else if (!iocg->abs_vdebt) {
++				/*
++				 * @iocg doesn't have enough to donate. Reset
++				 * its inuse to active.
++				 *
++				 * Don't reset debtors as their inuse's are
++				 * owned by debt handling. This shouldn't affect
++				 * donation calculuation in any meaningful way
++				 * as @iocg doesn't have a meaningful amount of
++				 * share anyway.
++				 */
+ 				TRACE_IOCG_PATH(inuse_shortage, iocg, &now,
+ 						iocg->inuse, iocg->active,
+ 						iocg->hweight_inuse, new_hwi);
+diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
+index de8587cc119e8..8272a3a002a34 100644
+--- a/drivers/base/arch_topology.c
++++ b/drivers/base/arch_topology.c
+@@ -515,7 +515,7 @@ void update_siblings_masks(unsigned int cpuid)
+ 	for_each_online_cpu(cpu) {
+ 		cpu_topo = &cpu_topology[cpu];
+ 
+-		if (cpuid_topo->llc_id == cpu_topo->llc_id) {
++		if (cpu_topo->llc_id != -1 && cpuid_topo->llc_id == cpu_topo->llc_id) {
+ 			cpumask_set_cpu(cpu, &cpuid_topo->llc_sibling);
+ 			cpumask_set_cpu(cpuid, &cpu_topo->llc_sibling);
+ 		}
+diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
+index f2548049aa0e9..40c53632512b7 100644
+--- a/drivers/block/Kconfig
++++ b/drivers/block/Kconfig
+@@ -39,6 +39,22 @@ config BLK_DEV_FD
+ 	  To compile this driver as a module, choose M here: the
+ 	  module will be called floppy.
+ 
++config BLK_DEV_FD_RAWCMD
++	bool "Support for raw floppy disk commands (DEPRECATED)"
++	depends on BLK_DEV_FD
++	help
++	  If you want to use actual physical floppies and expect to do
++	  special low-level hardware accesses to them (access and use
++	  non-standard formats, for example), then enable this.
++
++	  Note that the code enabled by this option is rarely used and
++	  might be unstable or insecure, and distros should not enable it.
++
++	  Note: FDRAWCMD is deprecated and will be removed from the kernel
++	  in the near future.
++
++	  If unsure, say N.
++
+ config AMIGA_FLOPPY
+ 	tristate "Amiga floppy support"
+ 	depends on AMIGA
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index aaee15058d181..c9411fe2f0af8 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -3069,6 +3069,8 @@ static const char *drive_name(int type, int drive)
+ 		return "(null)";
+ }
+ 
++#ifdef CONFIG_BLK_DEV_FD_RAWCMD
++
+ /* raw commands */
+ static void raw_cmd_done(int flag)
+ {
+@@ -3273,6 +3275,35 @@ static int raw_cmd_ioctl(int cmd, void __user *param)
+ 	return ret;
+ }
+ 
++static int floppy_raw_cmd_ioctl(int type, int drive, int cmd,
++				void __user *param)
++{
++	int ret;
++
++	pr_warn_once("Note: FDRAWCMD is deprecated and will be removed from the kernel in the near future.\n");
++
++	if (type)
++		return -EINVAL;
++	if (lock_fdc(drive))
++		return -EINTR;
++	set_floppy(drive);
++	ret = raw_cmd_ioctl(cmd, param);
++	if (ret == -EINTR)
++		return -EINTR;
++	process_fd_request();
++	return ret;
++}
++
++#else /* CONFIG_BLK_DEV_FD_RAWCMD */
++
++static int floppy_raw_cmd_ioctl(int type, int drive, int cmd,
++				void __user *param)
++{
++	return -EOPNOTSUPP;
++}
++
++#endif
++
+ static int invalidate_drive(struct block_device *bdev)
+ {
+ 	/* invalidate the buffer track to force a reread */
+@@ -3461,7 +3492,6 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int
+ {
+ 	int drive = (long)bdev->bd_disk->private_data;
+ 	int type = ITYPE(drive_state[drive].fd_device);
+-	int i;
+ 	int ret;
+ 	int size;
+ 	union inparam {
+@@ -3612,16 +3642,7 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int
+ 		outparam = &write_errors[drive];
+ 		break;
+ 	case FDRAWCMD:
+-		if (type)
+-			return -EINVAL;
+-		if (lock_fdc(drive))
+-			return -EINTR;
+-		set_floppy(drive);
+-		i = raw_cmd_ioctl(cmd, (void __user *)param);
+-		if (i == -EINTR)
+-			return -EINTR;
+-		process_fd_request();
+-		return i;
++		return floppy_raw_cmd_ioctl(type, drive, cmd, (void __user *)param);
+ 	case FDTWADDLE:
+ 		if (lock_fdc(drive))
+ 			return -EINTR;
+diff --git a/drivers/bus/sunxi-rsb.c b/drivers/bus/sunxi-rsb.c
+index 1bb00a959c67f..9b1a5e62417cb 100644
+--- a/drivers/bus/sunxi-rsb.c
++++ b/drivers/bus/sunxi-rsb.c
+@@ -224,6 +224,8 @@ static struct sunxi_rsb_device *sunxi_rsb_device_create(struct sunxi_rsb *rsb,
+ 
+ 	dev_dbg(&rdev->dev, "device %s registered\n", dev_name(&rdev->dev));
+ 
++	return rdev;
++
+ err_device_add:
+ 	put_device(&rdev->dev);
+ 
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 18f0650c5d405..ac559c2620335 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -3130,13 +3130,27 @@ static int sysc_check_disabled_devices(struct sysc *ddata)
+  */
+ static int sysc_check_active_timer(struct sysc *ddata)
+ {
++	int error;
++
+ 	if (ddata->cap->type != TI_SYSC_OMAP2_TIMER &&
+ 	    ddata->cap->type != TI_SYSC_OMAP4_TIMER)
+ 		return 0;
+ 
++	/*
++	 * Quirk for omap3 beagleboard revision A to B4 to use gpt12.
++	 * Revision C and later are fixed with commit 23885389dbbb ("ARM:
++	 * dts: Fix timer regression for beagleboard revision c"). This all
++	 * can be dropped if we stop supporting old beagleboard revisions
++	 * A to B4 at some point.
++	 */
++	if (sysc_soc->soc == SOC_3430)
++		error = -ENXIO;
++	else
++		error = -EBUSY;
++
+ 	if ((ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT) &&
+ 	    (ddata->cfg.quirks & SYSC_QUIRK_NO_IDLE))
+-		return -ENXIO;
++		return error;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/clk/sunxi/clk-sun9i-mmc.c b/drivers/clk/sunxi/clk-sun9i-mmc.c
+index 542b31d6e96dd..636bcf2439ef2 100644
+--- a/drivers/clk/sunxi/clk-sun9i-mmc.c
++++ b/drivers/clk/sunxi/clk-sun9i-mmc.c
+@@ -109,6 +109,8 @@ static int sun9i_a80_mmc_config_clk_probe(struct platform_device *pdev)
+ 	spin_lock_init(&data->lock);
+ 
+ 	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!r)
++		return -EINVAL;
+ 	/* one clock/reset pair per word */
+ 	count = DIV_ROUND_UP((resource_size(r)), SUN9I_MMC_WIDTH);
+ 	data->membase = devm_ioremap_resource(&pdev->dev, r);
+diff --git a/drivers/cpufreq/sun50i-cpufreq-nvmem.c b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+index 2deed8d8773fa..75e1bf3a08f7c 100644
+--- a/drivers/cpufreq/sun50i-cpufreq-nvmem.c
++++ b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+@@ -98,8 +98,10 @@ static int sun50i_cpufreq_nvmem_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	ret = sun50i_cpufreq_get_efuse(&speed);
+-	if (ret)
++	if (ret) {
++		kfree(opp_tables);
+ 		return ret;
++	}
+ 
+ 	snprintf(name, MAX_NAME_LEN, "speed%d", speed);
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index 2645ebc63a14d..195b7e02dc4b0 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -138,19 +138,33 @@ void program_sh_mem_settings(struct device_queue_manager *dqm,
+ }
+ 
+ static void increment_queue_count(struct device_queue_manager *dqm,
+-			enum kfd_queue_type type)
++				  struct qcm_process_device *qpd,
++				  struct queue *q)
+ {
+ 	dqm->active_queue_count++;
+-	if (type == KFD_QUEUE_TYPE_COMPUTE || type == KFD_QUEUE_TYPE_DIQ)
++	if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE ||
++	    q->properties.type == KFD_QUEUE_TYPE_DIQ)
+ 		dqm->active_cp_queue_count++;
++
++	if (q->properties.is_gws) {
++		dqm->gws_queue_count++;
++		qpd->mapped_gws_queue = true;
++	}
+ }
+ 
+ static void decrement_queue_count(struct device_queue_manager *dqm,
+-			enum kfd_queue_type type)
++				  struct qcm_process_device *qpd,
++				  struct queue *q)
+ {
+ 	dqm->active_queue_count--;
+-	if (type == KFD_QUEUE_TYPE_COMPUTE || type == KFD_QUEUE_TYPE_DIQ)
++	if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE ||
++	    q->properties.type == KFD_QUEUE_TYPE_DIQ)
+ 		dqm->active_cp_queue_count--;
++
++	if (q->properties.is_gws) {
++		dqm->gws_queue_count--;
++		qpd->mapped_gws_queue = false;
++	}
+ }
+ 
+ static int allocate_doorbell(struct qcm_process_device *qpd, struct queue *q)
+@@ -377,7 +391,7 @@ add_queue_to_list:
+ 	list_add(&q->list, &qpd->queues_list);
+ 	qpd->queue_count++;
+ 	if (q->properties.is_active)
+-		increment_queue_count(dqm, q->properties.type);
++		increment_queue_count(dqm, qpd, q);
+ 
+ 	/*
+ 	 * Unconditionally increment this counter, regardless of the queue's
+@@ -502,13 +516,8 @@ static int destroy_queue_nocpsch_locked(struct device_queue_manager *dqm,
+ 		deallocate_vmid(dqm, qpd, q);
+ 	}
+ 	qpd->queue_count--;
+-	if (q->properties.is_active) {
+-		decrement_queue_count(dqm, q->properties.type);
+-		if (q->properties.is_gws) {
+-			dqm->gws_queue_count--;
+-			qpd->mapped_gws_queue = false;
+-		}
+-	}
++	if (q->properties.is_active)
++		decrement_queue_count(dqm, qpd, q);
+ 
+ 	return retval;
+ }
+@@ -598,12 +607,11 @@ static int update_queue(struct device_queue_manager *dqm, struct queue *q)
+ 	 * dqm->active_queue_count to determine whether a new runlist must be
+ 	 * uploaded.
+ 	 */
+-	if (q->properties.is_active && !prev_active)
+-		increment_queue_count(dqm, q->properties.type);
+-	else if (!q->properties.is_active && prev_active)
+-		decrement_queue_count(dqm, q->properties.type);
+-
+-	if (q->gws && !q->properties.is_gws) {
++	if (q->properties.is_active && !prev_active) {
++		increment_queue_count(dqm, &pdd->qpd, q);
++	} else if (!q->properties.is_active && prev_active) {
++		decrement_queue_count(dqm, &pdd->qpd, q);
++	} else if (q->gws && !q->properties.is_gws) {
+ 		if (q->properties.is_active) {
+ 			dqm->gws_queue_count++;
+ 			pdd->qpd.mapped_gws_queue = true;
+@@ -665,11 +673,7 @@ static int evict_process_queues_nocpsch(struct device_queue_manager *dqm,
+ 		mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
+ 				q->properties.type)];
+ 		q->properties.is_active = false;
+-		decrement_queue_count(dqm, q->properties.type);
+-		if (q->properties.is_gws) {
+-			dqm->gws_queue_count--;
+-			qpd->mapped_gws_queue = false;
+-		}
++		decrement_queue_count(dqm, qpd, q);
+ 
+ 		if (WARN_ONCE(!dqm->sched_running, "Evict when stopped\n"))
+ 			continue;
+@@ -713,7 +717,7 @@ static int evict_process_queues_cpsch(struct device_queue_manager *dqm,
+ 			continue;
+ 
+ 		q->properties.is_active = false;
+-		decrement_queue_count(dqm, q->properties.type);
++		decrement_queue_count(dqm, qpd, q);
+ 	}
+ 	pdd->last_evict_timestamp = get_jiffies_64();
+ 	retval = execute_queues_cpsch(dqm,
+@@ -784,11 +788,7 @@ static int restore_process_queues_nocpsch(struct device_queue_manager *dqm,
+ 		mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
+ 				q->properties.type)];
+ 		q->properties.is_active = true;
+-		increment_queue_count(dqm, q->properties.type);
+-		if (q->properties.is_gws) {
+-			dqm->gws_queue_count++;
+-			qpd->mapped_gws_queue = true;
+-		}
++		increment_queue_count(dqm, qpd, q);
+ 
+ 		if (WARN_ONCE(!dqm->sched_running, "Restore when stopped\n"))
+ 			continue;
+@@ -846,7 +846,7 @@ static int restore_process_queues_cpsch(struct device_queue_manager *dqm,
+ 			continue;
+ 
+ 		q->properties.is_active = true;
+-		increment_queue_count(dqm, q->properties.type);
++		increment_queue_count(dqm, &pdd->qpd, q);
+ 	}
+ 	retval = execute_queues_cpsch(dqm,
+ 				KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+@@ -1247,7 +1247,7 @@ static int create_kernel_queue_cpsch(struct device_queue_manager *dqm,
+ 			dqm->total_queue_count);
+ 
+ 	list_add(&kq->list, &qpd->priv_queue_list);
+-	increment_queue_count(dqm, kq->queue->properties.type);
++	increment_queue_count(dqm, qpd, kq->queue);
+ 	qpd->is_debug = true;
+ 	execute_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+ 	dqm_unlock(dqm);
+@@ -1261,7 +1261,7 @@ static void destroy_kernel_queue_cpsch(struct device_queue_manager *dqm,
+ {
+ 	dqm_lock(dqm);
+ 	list_del(&kq->list);
+-	decrement_queue_count(dqm, kq->queue->properties.type);
++	decrement_queue_count(dqm, qpd, kq->queue);
+ 	qpd->is_debug = false;
+ 	execute_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES, 0);
+ 	/*
+@@ -1328,7 +1328,7 @@ static int create_queue_cpsch(struct device_queue_manager *dqm, struct queue *q,
+ 	qpd->queue_count++;
+ 
+ 	if (q->properties.is_active) {
+-		increment_queue_count(dqm, q->properties.type);
++		increment_queue_count(dqm, qpd, q);
+ 
+ 		execute_queues_cpsch(dqm,
+ 				KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+@@ -1513,15 +1513,11 @@ static int destroy_queue_cpsch(struct device_queue_manager *dqm,
+ 	list_del(&q->list);
+ 	qpd->queue_count--;
+ 	if (q->properties.is_active) {
+-		decrement_queue_count(dqm, q->properties.type);
++		decrement_queue_count(dqm, qpd, q);
+ 		retval = execute_queues_cpsch(dqm,
+ 				KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+ 		if (retval == -ETIME)
+ 			qpd->reset_wavefronts = true;
+-		if (q->properties.is_gws) {
+-			dqm->gws_queue_count--;
+-			qpd->mapped_gws_queue = false;
+-		}
+ 	}
+ 
+ 	/*
+@@ -1732,7 +1728,7 @@ static int process_termination_cpsch(struct device_queue_manager *dqm,
+ 	/* Clean all kernel queues */
+ 	list_for_each_entry_safe(kq, kq_next, &qpd->priv_queue_list, list) {
+ 		list_del(&kq->list);
+-		decrement_queue_count(dqm, kq->queue->properties.type);
++		decrement_queue_count(dqm, qpd, kq->queue);
+ 		qpd->is_debug = false;
+ 		dqm->total_queue_count--;
+ 		filter = KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES;
+@@ -1745,13 +1741,8 @@ static int process_termination_cpsch(struct device_queue_manager *dqm,
+ 		else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI)
+ 			deallocate_sdma_queue(dqm, q);
+ 
+-		if (q->properties.is_active) {
+-			decrement_queue_count(dqm, q->properties.type);
+-			if (q->properties.is_gws) {
+-				dqm->gws_queue_count--;
+-				qpd->mapped_gws_queue = false;
+-			}
+-		}
++		if (q->properties.is_active)
++			decrement_queue_count(dqm, qpd, q);
+ 
+ 		dqm->total_queue_count--;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index 7ed4d7c8734f0..01c4e8753294b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -1267,6 +1267,7 @@ static struct clock_source *dcn21_clock_source_create(
+ 		return &clk_src->base;
+ 	}
+ 
++	kfree(clk_src);
+ 	BREAK_TO_DEBUGGER();
+ 	return NULL;
+ }
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 12488996a7f4f..f1ab26307db6f 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -7202,7 +7202,7 @@ enum {
+ #define _SEL_FETCH_PLANE_BASE_6_A		0x70940
+ #define _SEL_FETCH_PLANE_BASE_7_A		0x70960
+ #define _SEL_FETCH_PLANE_BASE_CUR_A		0x70880
+-#define _SEL_FETCH_PLANE_BASE_1_B		0x70990
++#define _SEL_FETCH_PLANE_BASE_1_B		0x71890
+ 
+ #define _SEL_FETCH_PLANE_BASE_A(plane) _PICK(plane, \
+ 					     _SEL_FETCH_PLANE_BASE_1_A, \
+diff --git a/drivers/iio/dac/ad5446.c b/drivers/iio/dac/ad5446.c
+index e86886ca5ee41..b168eb14c2465 100644
+--- a/drivers/iio/dac/ad5446.c
++++ b/drivers/iio/dac/ad5446.c
+@@ -178,7 +178,7 @@ static int ad5446_read_raw(struct iio_dev *indio_dev,
+ 
+ 	switch (m) {
+ 	case IIO_CHAN_INFO_RAW:
+-		*val = st->cached_val;
++		*val = st->cached_val >> chan->scan_type.shift;
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_SCALE:
+ 		*val = st->vref_mv;
+diff --git a/drivers/iio/dac/ad5592r-base.c b/drivers/iio/dac/ad5592r-base.c
+index 0405e92b9e8c3..987264410278c 100644
+--- a/drivers/iio/dac/ad5592r-base.c
++++ b/drivers/iio/dac/ad5592r-base.c
+@@ -523,7 +523,7 @@ static int ad5592r_alloc_channels(struct iio_dev *iio_dev)
+ 		if (!ret)
+ 			st->channel_modes[reg] = tmp;
+ 
+-		fwnode_property_read_u32(child, "adi,off-state", &tmp);
++		ret = fwnode_property_read_u32(child, "adi,off-state", &tmp);
+ 		if (!ret)
+ 			st->channel_offstate[reg] = tmp;
+ 	}
+diff --git a/drivers/iio/imu/bmi160/bmi160_core.c b/drivers/iio/imu/bmi160/bmi160_core.c
+index 82f03a4dc47a7..5fd61889f5931 100644
+--- a/drivers/iio/imu/bmi160/bmi160_core.c
++++ b/drivers/iio/imu/bmi160/bmi160_core.c
+@@ -731,7 +731,7 @@ static int bmi160_chip_init(struct bmi160_data *data, bool use_spi)
+ 
+ 	ret = regmap_write(data->regmap, BMI160_REG_CMD, BMI160_CMD_SOFTRESET);
+ 	if (ret)
+-		return ret;
++		goto disable_regulator;
+ 
+ 	usleep_range(BMI160_SOFTRESET_USLEEP, BMI160_SOFTRESET_USLEEP + 1);
+ 
+@@ -742,29 +742,37 @@ static int bmi160_chip_init(struct bmi160_data *data, bool use_spi)
+ 	if (use_spi) {
+ 		ret = regmap_read(data->regmap, BMI160_REG_DUMMY, &val);
+ 		if (ret)
+-			return ret;
++			goto disable_regulator;
+ 	}
+ 
+ 	ret = regmap_read(data->regmap, BMI160_REG_CHIP_ID, &val);
+ 	if (ret) {
+ 		dev_err(dev, "Error reading chip id\n");
+-		return ret;
++		goto disable_regulator;
+ 	}
+ 	if (val != BMI160_CHIP_ID_VAL) {
+ 		dev_err(dev, "Wrong chip id, got %x expected %x\n",
+ 			val, BMI160_CHIP_ID_VAL);
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto disable_regulator;
+ 	}
+ 
+ 	ret = bmi160_set_mode(data, BMI160_ACCEL, true);
+ 	if (ret)
+-		return ret;
++		goto disable_regulator;
+ 
+ 	ret = bmi160_set_mode(data, BMI160_GYRO, true);
+ 	if (ret)
+-		return ret;
++		goto disable_accel;
+ 
+ 	return 0;
++
++disable_accel:
++	bmi160_set_mode(data, BMI160_ACCEL, false);
++
++disable_regulator:
++	regulator_bulk_disable(ARRAY_SIZE(data->supplies), data->supplies);
++	return ret;
+ }
+ 
+ static int bmi160_data_rdy_trigger_set_state(struct iio_trigger *trig,
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_i2c.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_i2c.c
+index 85b1934cec60e..53891010a91de 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_i2c.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_i2c.c
+@@ -18,12 +18,15 @@ static int inv_icm42600_i2c_bus_setup(struct inv_icm42600_state *st)
+ 	unsigned int mask, val;
+ 	int ret;
+ 
+-	/* setup interface registers */
+-	ret = regmap_update_bits(st->map, INV_ICM42600_REG_INTF_CONFIG6,
+-				 INV_ICM42600_INTF_CONFIG6_MASK,
+-				 INV_ICM42600_INTF_CONFIG6_I3C_EN);
+-	if (ret)
+-		return ret;
++	/*
++	 * setup interface registers
++	 * This register write to REG_INTF_CONFIG6 enables a spike filter that
++	 * is impacting the line and can prevent the I2C ACK to be seen by the
++	 * controller. So we don't test the return value.
++	 */
++	regmap_update_bits(st->map, INV_ICM42600_REG_INTF_CONFIG6,
++			   INV_ICM42600_INTF_CONFIG6_MASK,
++			   INV_ICM42600_INTF_CONFIG6_I3C_EN);
+ 
+ 	ret = regmap_update_bits(st->map, INV_ICM42600_REG_INTF_CONFIG4,
+ 				 INV_ICM42600_INTF_CONFIG4_I3C_BUS_ONLY, 0);
+diff --git a/drivers/iio/magnetometer/ak8975.c b/drivers/iio/magnetometer/ak8975.c
+index d988b6ac36593..3774e5975f770 100644
+--- a/drivers/iio/magnetometer/ak8975.c
++++ b/drivers/iio/magnetometer/ak8975.c
+@@ -389,6 +389,7 @@ static int ak8975_power_on(const struct ak8975_data *data)
+ 	if (ret) {
+ 		dev_warn(&data->client->dev,
+ 			 "Failed to enable specified Vid supply\n");
++		regulator_disable(data->vdd);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
+index 4c2ce210c1237..95b09148167e8 100644
+--- a/drivers/lightnvm/Kconfig
++++ b/drivers/lightnvm/Kconfig
+@@ -5,7 +5,7 @@
+ 
+ menuconfig NVM
+ 	bool "Open-Channel SSD target support"
+-	depends on BLOCK
++	depends on BLOCK && BROKEN
+ 	help
+ 	  Say Y here to get to enable Open-channel SSDs.
+ 
+diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
+index 781af51e3f793..1dfb81dea9611 100644
+--- a/drivers/memory/renesas-rpc-if.c
++++ b/drivers/memory/renesas-rpc-if.c
+@@ -163,25 +163,39 @@ static const struct regmap_access_table rpcif_volatile_table = {
+ 
+ 
+ /*
+- * Custom accessor functions to ensure SMRDR0 and SMWDR0 are always accessed
+- * with proper width. Requires SMENR_SPIDE to be correctly set before!
++ * Custom accessor functions to ensure SM[RW]DR[01] are always accessed with
++ * proper width.  Requires rpcif.xfer_size to be correctly set before!
+  */
+ static int rpcif_reg_read(void *context, unsigned int reg, unsigned int *val)
+ {
+ 	struct rpcif *rpc = context;
+ 
+-	if (reg == RPCIF_SMRDR0 || reg == RPCIF_SMWDR0) {
+-		u32 spide = readl(rpc->base + RPCIF_SMENR) & RPCIF_SMENR_SPIDE(0xF);
+-
+-		if (spide == 0x8) {
++	switch (reg) {
++	case RPCIF_SMRDR0:
++	case RPCIF_SMWDR0:
++		switch (rpc->xfer_size) {
++		case 1:
+ 			*val = readb(rpc->base + reg);
+ 			return 0;
+-		} else if (spide == 0xC) {
++
++		case 2:
+ 			*val = readw(rpc->base + reg);
+ 			return 0;
+-		} else if (spide != 0xF) {
++
++		case 4:
++		case 8:
++			*val = readl(rpc->base + reg);
++			return 0;
++
++		default:
+ 			return -EILSEQ;
+ 		}
++
++	case RPCIF_SMRDR1:
++	case RPCIF_SMWDR1:
++		if (rpc->xfer_size != 8)
++			return -EILSEQ;
++		break;
+ 	}
+ 
+ 	*val = readl(rpc->base + reg);
+@@ -193,18 +207,34 @@ static int rpcif_reg_write(void *context, unsigned int reg, unsigned int val)
+ {
+ 	struct rpcif *rpc = context;
+ 
+-	if (reg == RPCIF_SMRDR0 || reg == RPCIF_SMWDR0) {
+-		u32 spide = readl(rpc->base + RPCIF_SMENR) & RPCIF_SMENR_SPIDE(0xF);
+-
+-		if (spide == 0x8) {
++	switch (reg) {
++	case RPCIF_SMWDR0:
++		switch (rpc->xfer_size) {
++		case 1:
+ 			writeb(val, rpc->base + reg);
+ 			return 0;
+-		} else if (spide == 0xC) {
++
++		case 2:
+ 			writew(val, rpc->base + reg);
+ 			return 0;
+-		} else if (spide != 0xF) {
++
++		case 4:
++		case 8:
++			writel(val, rpc->base + reg);
++			return 0;
++
++		default:
+ 			return -EILSEQ;
+ 		}
++
++	case RPCIF_SMWDR1:
++		if (rpc->xfer_size != 8)
++			return -EILSEQ;
++		break;
++
++	case RPCIF_SMRDR0:
++	case RPCIF_SMRDR1:
++		return -EPERM;
+ 	}
+ 
+ 	writel(val, rpc->base + reg);
+@@ -455,6 +485,7 @@ int rpcif_manual_xfer(struct rpcif *rpc)
+ 
+ 			smenr |= RPCIF_SMENR_SPIDE(rpcif_bits_set(rpc, nbytes));
+ 			regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
++			rpc->xfer_size = nbytes;
+ 
+ 			memcpy(data, rpc->buffer + pos, nbytes);
+ 			if (nbytes == 8) {
+@@ -519,6 +550,7 @@ int rpcif_manual_xfer(struct rpcif *rpc)
+ 			regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
+ 			regmap_write(rpc->regmap, RPCIF_SMCR,
+ 				     rpc->smcr | RPCIF_SMCR_SPIE);
++			rpc->xfer_size = nbytes;
+ 			ret = wait_msg_xfer_end(rpc);
+ 			if (ret)
+ 				goto err_out;
+diff --git a/drivers/mtd/nand/raw/mtk_ecc.c b/drivers/mtd/nand/raw/mtk_ecc.c
+index 75f1fa3d4d35c..c115e03ede889 100644
+--- a/drivers/mtd/nand/raw/mtk_ecc.c
++++ b/drivers/mtd/nand/raw/mtk_ecc.c
+@@ -43,6 +43,7 @@
+ 
+ struct mtk_ecc_caps {
+ 	u32 err_mask;
++	u32 err_shift;
+ 	const u8 *ecc_strength;
+ 	const u32 *ecc_regs;
+ 	u8 num_ecc_strength;
+@@ -76,7 +77,7 @@ static const u8 ecc_strength_mt2712[] = {
+ };
+ 
+ static const u8 ecc_strength_mt7622[] = {
+-	4, 6, 8, 10, 12, 14, 16
++	4, 6, 8, 10, 12
+ };
+ 
+ enum mtk_ecc_regs {
+@@ -221,7 +222,7 @@ void mtk_ecc_get_stats(struct mtk_ecc *ecc, struct mtk_ecc_stats *stats,
+ 	for (i = 0; i < sectors; i++) {
+ 		offset = (i >> 2) << 2;
+ 		err = readl(ecc->regs + ECC_DECENUM0 + offset);
+-		err = err >> ((i % 4) * 8);
++		err = err >> ((i % 4) * ecc->caps->err_shift);
+ 		err &= ecc->caps->err_mask;
+ 		if (err == ecc->caps->err_mask) {
+ 			/* uncorrectable errors */
+@@ -449,6 +450,7 @@ EXPORT_SYMBOL(mtk_ecc_get_parity_bits);
+ 
+ static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = {
+ 	.err_mask = 0x3f,
++	.err_shift = 8,
+ 	.ecc_strength = ecc_strength_mt2701,
+ 	.ecc_regs = mt2701_ecc_regs,
+ 	.num_ecc_strength = 20,
+@@ -459,6 +461,7 @@ static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = {
+ 
+ static const struct mtk_ecc_caps mtk_ecc_caps_mt2712 = {
+ 	.err_mask = 0x7f,
++	.err_shift = 8,
+ 	.ecc_strength = ecc_strength_mt2712,
+ 	.ecc_regs = mt2712_ecc_regs,
+ 	.num_ecc_strength = 23,
+@@ -468,10 +471,11 @@ static const struct mtk_ecc_caps mtk_ecc_caps_mt2712 = {
+ };
+ 
+ static const struct mtk_ecc_caps mtk_ecc_caps_mt7622 = {
+-	.err_mask = 0x3f,
++	.err_mask = 0x1f,
++	.err_shift = 5,
+ 	.ecc_strength = ecc_strength_mt7622,
+ 	.ecc_regs = mt7622_ecc_regs,
+-	.num_ecc_strength = 7,
++	.num_ecc_strength = 5,
+ 	.ecc_mode_shift = 4,
+ 	.parity_bits = 13,
+ 	.pg_irq_sel = 0,
+diff --git a/drivers/mtd/nand/raw/sh_flctl.c b/drivers/mtd/nand/raw/sh_flctl.c
+index 13df4bdf792af..8f89e2d3d817f 100644
+--- a/drivers/mtd/nand/raw/sh_flctl.c
++++ b/drivers/mtd/nand/raw/sh_flctl.c
+@@ -384,7 +384,8 @@ static int flctl_dma_fifo0_transfer(struct sh_flctl *flctl, unsigned long *buf,
+ 	dma_addr_t dma_addr;
+ 	dma_cookie_t cookie;
+ 	uint32_t reg;
+-	int ret;
++	int ret = 0;
++	unsigned long time_left;
+ 
+ 	if (dir == DMA_FROM_DEVICE) {
+ 		chan = flctl->chan_fifo0_rx;
+@@ -425,13 +426,14 @@ static int flctl_dma_fifo0_transfer(struct sh_flctl *flctl, unsigned long *buf,
+ 		goto out;
+ 	}
+ 
+-	ret =
++	time_left =
+ 	wait_for_completion_timeout(&flctl->dma_complete,
+ 				msecs_to_jiffies(3000));
+ 
+-	if (ret <= 0) {
++	if (time_left == 0) {
+ 		dmaengine_terminate_all(chan);
+ 		dev_err(&flctl->pdev->dev, "wait_for_completion_timeout\n");
++		ret = -ETIMEDOUT;
+ 	}
+ 
+ out:
+@@ -441,7 +443,7 @@ out:
+ 
+ 	dma_unmap_single(chan->device->dev, dma_addr, len, dir);
+ 
+-	/* ret > 0 is success */
++	/* ret == 0 is success */
+ 	return ret;
+ }
+ 
+@@ -465,7 +467,7 @@ static void read_fiforeg(struct sh_flctl *flctl, int rlen, int offset)
+ 
+ 	/* initiate DMA transfer */
+ 	if (flctl->chan_fifo0_rx && rlen >= 32 &&
+-		flctl_dma_fifo0_transfer(flctl, buf, rlen, DMA_FROM_DEVICE) > 0)
++		!flctl_dma_fifo0_transfer(flctl, buf, rlen, DMA_FROM_DEVICE))
+ 			goto convert;	/* DMA success */
+ 
+ 	/* do polling transfer */
+@@ -524,7 +526,7 @@ static void write_ec_fiforeg(struct sh_flctl *flctl, int rlen,
+ 
+ 	/* initiate DMA transfer */
+ 	if (flctl->chan_fifo0_tx && rlen >= 32 &&
+-		flctl_dma_fifo0_transfer(flctl, buf, rlen, DMA_TO_DEVICE) > 0)
++		!flctl_dma_fifo0_transfer(flctl, buf, rlen, DMA_TO_DEVICE))
+ 			return;	/* DMA success */
+ 
+ 	/* do polling transfer */
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 80ef7ea779545..4abae06499a96 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1629,9 +1629,6 @@ static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
+ 		break;
+ 	case PHY_INTERFACE_MODE_RMII:
+ 		miicfg |= GSWIP_MII_CFG_MODE_RMIIM;
+-
+-		/* Configure the RMII clock as output: */
+-		miicfg |= GSWIP_MII_CFG_RMII_CLK;
+ 		break;
+ 	case PHY_INTERFACE_MODE_RGMII:
+ 	case PHY_INTERFACE_MODE_RGMII_ID:
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+index 6333471916be1..afb6d3ee1f564 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+@@ -14213,10 +14213,6 @@ static int bnx2x_eeh_nic_unload(struct bnx2x *bp)
+ 
+ 	/* Stop Tx */
+ 	bnx2x_tx_disable(bp);
+-	/* Delete all NAPI objects */
+-	bnx2x_del_all_napi(bp);
+-	if (CNIC_LOADED(bp))
+-		bnx2x_del_all_napi_cnic(bp);
+ 	netdev_reset_tc(bp->dev);
+ 
+ 	del_timer_sync(&bp->timer);
+@@ -14321,6 +14317,11 @@ static pci_ers_result_t bnx2x_io_slot_reset(struct pci_dev *pdev)
+ 		bnx2x_drain_tx_queues(bp);
+ 		bnx2x_send_unload_req(bp, UNLOAD_RECOVERY);
+ 		bnx2x_netif_stop(bp, 1);
++		bnx2x_del_all_napi(bp);
++
++		if (CNIC_LOADED(bp))
++			bnx2x_del_all_napi_cnic(bp);
++
+ 		bnx2x_free_irq(bp);
+ 
+ 		/* Report UNLOAD_DONE to MCP */
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index a2062144d7ca1..9ffdaa84ba124 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -1987,6 +1987,11 @@ static struct sk_buff *bcmgenet_add_tsb(struct net_device *dev,
+ 	return skb;
+ }
+ 
++static void bcmgenet_hide_tsb(struct sk_buff *skb)
++{
++	__skb_pull(skb, sizeof(struct status_64));
++}
++
+ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+@@ -2093,6 +2098,8 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	}
+ 
+ 	GENET_CB(skb)->last_cb = tx_cb_ptr;
++
++	bcmgenet_hide_tsb(skb);
+ 	skb_tx_timestamp(skb);
+ 
+ 	/* Decrement total BD count and advance our write pointer */
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 166bc3f3b34cc..d8bdaf2e5365c 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3529,7 +3529,7 @@ static int fec_enet_init_stop_mode(struct fec_enet_private *fep,
+ 					 ARRAY_SIZE(out_val));
+ 	if (ret) {
+ 		dev_dbg(&fep->pdev->dev, "no stop mode property\n");
+-		return ret;
++		goto out;
+ 	}
+ 
+ 	fep->stop_gpr.gpr = syscon_node_to_regmap(gpr_np);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index 5d39967672561..51b7b46f2f309 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -91,6 +91,13 @@ static int hclge_send_mbx_msg(struct hclge_vport *vport, u8 *msg, u16 msg_len,
+ 	enum hclge_cmd_status status;
+ 	struct hclge_desc desc;
+ 
++	if (msg_len > HCLGE_MBX_MAX_MSG_SIZE) {
++		dev_err(&hdev->pdev->dev,
++			"msg data length(=%u) exceeds maximum(=%u)\n",
++			msg_len, HCLGE_MBX_MAX_MSG_SIZE);
++		return -EMSGSIZE;
++	}
++
+ 	resp_pf_to_vf = (struct hclge_mbx_pf_to_vf_cmd *)desc.data;
+ 
+ 	hclge_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_PF_TO_VF, false);
+@@ -173,7 +180,7 @@ static int hclge_get_ring_chain_from_mbx(
+ 	ring_num = req->msg.ring_num;
+ 
+ 	if (ring_num > HCLGE_MBX_MAX_RING_CHAIN_PARAM_NUM)
+-		return -ENOMEM;
++		return -EINVAL;
+ 
+ 	for (i = 0; i < ring_num; i++) {
+ 		if (req->msg.param[i].tqp_index >= vport->nic.kinfo.rss_size) {
+@@ -577,9 +584,9 @@ static int hclge_set_vf_mtu(struct hclge_vport *vport,
+ 	return hclge_set_vport_mtu(vport, mtu);
+ }
+ 
+-static void hclge_get_queue_id_in_pf(struct hclge_vport *vport,
+-				     struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+-				     struct hclge_respond_to_vf_msg *resp_msg)
++static int hclge_get_queue_id_in_pf(struct hclge_vport *vport,
++				    struct hclge_mbx_vf_to_pf_cmd *mbx_req,
++				    struct hclge_respond_to_vf_msg *resp_msg)
+ {
+ 	struct hnae3_handle *handle = &vport->nic;
+ 	struct hclge_dev *hdev = vport->back;
+@@ -589,17 +596,18 @@ static void hclge_get_queue_id_in_pf(struct hclge_vport *vport,
+ 	if (queue_id >= handle->kinfo.num_tqps) {
+ 		dev_err(&hdev->pdev->dev, "Invalid queue id(%u) from VF %u\n",
+ 			queue_id, mbx_req->mbx_src_vfid);
+-		return;
++		return -EINVAL;
+ 	}
+ 
+ 	qid_in_pf = hclge_covert_handle_qid_global(&vport->nic, queue_id);
+ 	memcpy(resp_msg->data, &qid_in_pf, sizeof(qid_in_pf));
+ 	resp_msg->len = sizeof(qid_in_pf);
++	return 0;
+ }
+ 
+-static void hclge_get_rss_key(struct hclge_vport *vport,
+-			      struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+-			      struct hclge_respond_to_vf_msg *resp_msg)
++static int hclge_get_rss_key(struct hclge_vport *vport,
++			     struct hclge_mbx_vf_to_pf_cmd *mbx_req,
++			     struct hclge_respond_to_vf_msg *resp_msg)
+ {
+ #define HCLGE_RSS_MBX_RESP_LEN	8
+ 	struct hclge_dev *hdev = vport->back;
+@@ -615,13 +623,14 @@ static void hclge_get_rss_key(struct hclge_vport *vport,
+ 		dev_warn(&hdev->pdev->dev,
+ 			 "failed to get the rss hash key, the index(%u) invalid !\n",
+ 			 index);
+-		return;
++		return -EINVAL;
+ 	}
+ 
+ 	memcpy(resp_msg->data,
+ 	       &hdev->vport[0].rss_hash_key[index * HCLGE_RSS_MBX_RESP_LEN],
+ 	       HCLGE_RSS_MBX_RESP_LEN);
+ 	resp_msg->len = HCLGE_RSS_MBX_RESP_LEN;
++	return 0;
+ }
+ 
+ static void hclge_link_fail_parse(struct hclge_dev *hdev, u8 link_fail_code)
+@@ -800,10 +809,10 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
+ 					"VF fail(%d) to set mtu\n", ret);
+ 			break;
+ 		case HCLGE_MBX_GET_QID_IN_PF:
+-			hclge_get_queue_id_in_pf(vport, req, &resp_msg);
++			ret = hclge_get_queue_id_in_pf(vport, req, &resp_msg);
+ 			break;
+ 		case HCLGE_MBX_GET_RSS_KEY:
+-			hclge_get_rss_key(vport, req, &resp_msg);
++			ret = hclge_get_rss_key(vport, req, &resp_msg);
+ 			break;
+ 		case HCLGE_MBX_GET_LINK_MODE:
+ 			hclge_get_link_mode(vport, req);
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 95bee3d915934..61fb2a092451b 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -117,7 +117,7 @@ struct ibmvnic_stat {
+ 
+ #define IBMVNIC_STAT_OFF(stat) (offsetof(struct ibmvnic_adapter, stats) + \
+ 			     offsetof(struct ibmvnic_statistics, stat))
+-#define IBMVNIC_GET_STAT(a, off) (*((u64 *)(((unsigned long)(a)) + off)))
++#define IBMVNIC_GET_STAT(a, off) (*((u64 *)(((unsigned long)(a)) + (off))))
+ 
+ static const struct ibmvnic_stat ibmvnic_stats[] = {
+ 	{"rx_packets", IBMVNIC_STAT_OFF(rx_packets)},
+@@ -2063,14 +2063,14 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 			rc = reset_tx_pools(adapter);
+ 			if (rc) {
+ 				netdev_dbg(adapter->netdev, "reset tx pools failed (%d)\n",
+-						rc);
++					   rc);
+ 				goto out;
+ 			}
+ 
+ 			rc = reset_rx_pools(adapter);
+ 			if (rc) {
+ 				netdev_dbg(adapter->netdev, "reset rx pools failed (%d)\n",
+-						rc);
++					   rc);
+ 				goto out;
+ 			}
+ 		}
+@@ -2331,7 +2331,8 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
+ 
+ 	if (adapter->state == VNIC_PROBING) {
+ 		netdev_warn(netdev, "Adapter reset during probe\n");
+-		ret = adapter->init_done_rc = EAGAIN;
++		adapter->init_done_rc = EAGAIN;
++		ret = EAGAIN;
+ 		goto err;
+ 	}
+ 
+@@ -2665,13 +2666,8 @@ static void ibmvnic_get_ringparam(struct net_device *netdev,
+ {
+ 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+ 
+-	if (adapter->priv_flags & IBMVNIC_USE_SERVER_MAXES) {
+-		ring->rx_max_pending = adapter->max_rx_add_entries_per_subcrq;
+-		ring->tx_max_pending = adapter->max_tx_entries_per_subcrq;
+-	} else {
+-		ring->rx_max_pending = IBMVNIC_MAX_QUEUE_SZ;
+-		ring->tx_max_pending = IBMVNIC_MAX_QUEUE_SZ;
+-	}
++	ring->rx_max_pending = adapter->max_rx_add_entries_per_subcrq;
++	ring->tx_max_pending = adapter->max_tx_entries_per_subcrq;
+ 	ring->rx_mini_max_pending = 0;
+ 	ring->rx_jumbo_max_pending = 0;
+ 	ring->rx_pending = adapter->req_rx_add_entries_per_subcrq;
+@@ -2684,23 +2680,21 @@ static int ibmvnic_set_ringparam(struct net_device *netdev,
+ 				 struct ethtool_ringparam *ring)
+ {
+ 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+-	int ret;
+ 
+-	ret = 0;
++	if (ring->rx_pending > adapter->max_rx_add_entries_per_subcrq  ||
++	    ring->tx_pending > adapter->max_tx_entries_per_subcrq) {
++		netdev_err(netdev, "Invalid request.\n");
++		netdev_err(netdev, "Max tx buffers = %llu\n",
++			   adapter->max_rx_add_entries_per_subcrq);
++		netdev_err(netdev, "Max rx buffers = %llu\n",
++			   adapter->max_tx_entries_per_subcrq);
++		return -EINVAL;
++	}
++
+ 	adapter->desired.rx_entries = ring->rx_pending;
+ 	adapter->desired.tx_entries = ring->tx_pending;
+ 
+-	ret = wait_for_reset(adapter);
+-
+-	if (!ret &&
+-	    (adapter->req_rx_add_entries_per_subcrq != ring->rx_pending ||
+-	     adapter->req_tx_entries_per_subcrq != ring->tx_pending))
+-		netdev_info(netdev,
+-			    "Could not match full ringsize request. Requested: RX %d, TX %d; Allowed: RX %llu, TX %llu\n",
+-			    ring->rx_pending, ring->tx_pending,
+-			    adapter->req_rx_add_entries_per_subcrq,
+-			    adapter->req_tx_entries_per_subcrq);
+-	return ret;
++	return wait_for_reset(adapter);
+ }
+ 
+ static void ibmvnic_get_channels(struct net_device *netdev,
+@@ -2708,14 +2702,8 @@ static void ibmvnic_get_channels(struct net_device *netdev,
+ {
+ 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+ 
+-	if (adapter->priv_flags & IBMVNIC_USE_SERVER_MAXES) {
+-		channels->max_rx = adapter->max_rx_queues;
+-		channels->max_tx = adapter->max_tx_queues;
+-	} else {
+-		channels->max_rx = IBMVNIC_MAX_QUEUES;
+-		channels->max_tx = IBMVNIC_MAX_QUEUES;
+-	}
+-
++	channels->max_rx = adapter->max_rx_queues;
++	channels->max_tx = adapter->max_tx_queues;
+ 	channels->max_other = 0;
+ 	channels->max_combined = 0;
+ 	channels->rx_count = adapter->req_rx_queues;
+@@ -2728,23 +2716,11 @@ static int ibmvnic_set_channels(struct net_device *netdev,
+ 				struct ethtool_channels *channels)
+ {
+ 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+-	int ret;
+ 
+-	ret = 0;
+ 	adapter->desired.rx_queues = channels->rx_count;
+ 	adapter->desired.tx_queues = channels->tx_count;
+ 
+-	ret = wait_for_reset(adapter);
+-
+-	if (!ret &&
+-	    (adapter->req_rx_queues != channels->rx_count ||
+-	     adapter->req_tx_queues != channels->tx_count))
+-		netdev_info(netdev,
+-			    "Could not match full channels request. Requested: RX %d, TX %d; Allowed: RX %llu, TX %llu\n",
+-			    channels->rx_count, channels->tx_count,
+-			    adapter->req_rx_queues, adapter->req_tx_queues);
+-	return ret;
+-
++	return wait_for_reset(adapter);
+ }
+ 
+ static void ibmvnic_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+@@ -2752,43 +2728,32 @@ static void ibmvnic_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+ 	struct ibmvnic_adapter *adapter = netdev_priv(dev);
+ 	int i;
+ 
+-	switch (stringset) {
+-	case ETH_SS_STATS:
+-		for (i = 0; i < ARRAY_SIZE(ibmvnic_stats);
+-				i++, data += ETH_GSTRING_LEN)
+-			memcpy(data, ibmvnic_stats[i].name, ETH_GSTRING_LEN);
++	if (stringset != ETH_SS_STATS)
++		return;
+ 
+-		for (i = 0; i < adapter->req_tx_queues; i++) {
+-			snprintf(data, ETH_GSTRING_LEN, "tx%d_packets", i);
+-			data += ETH_GSTRING_LEN;
++	for (i = 0; i < ARRAY_SIZE(ibmvnic_stats); i++, data += ETH_GSTRING_LEN)
++		memcpy(data, ibmvnic_stats[i].name, ETH_GSTRING_LEN);
+ 
+-			snprintf(data, ETH_GSTRING_LEN, "tx%d_bytes", i);
+-			data += ETH_GSTRING_LEN;
++	for (i = 0; i < adapter->req_tx_queues; i++) {
++		snprintf(data, ETH_GSTRING_LEN, "tx%d_packets", i);
++		data += ETH_GSTRING_LEN;
+ 
+-			snprintf(data, ETH_GSTRING_LEN,
+-				 "tx%d_dropped_packets", i);
+-			data += ETH_GSTRING_LEN;
+-		}
++		snprintf(data, ETH_GSTRING_LEN, "tx%d_bytes", i);
++		data += ETH_GSTRING_LEN;
+ 
+-		for (i = 0; i < adapter->req_rx_queues; i++) {
+-			snprintf(data, ETH_GSTRING_LEN, "rx%d_packets", i);
+-			data += ETH_GSTRING_LEN;
++		snprintf(data, ETH_GSTRING_LEN, "tx%d_dropped_packets", i);
++		data += ETH_GSTRING_LEN;
++	}
+ 
+-			snprintf(data, ETH_GSTRING_LEN, "rx%d_bytes", i);
+-			data += ETH_GSTRING_LEN;
++	for (i = 0; i < adapter->req_rx_queues; i++) {
++		snprintf(data, ETH_GSTRING_LEN, "rx%d_packets", i);
++		data += ETH_GSTRING_LEN;
+ 
+-			snprintf(data, ETH_GSTRING_LEN, "rx%d_interrupts", i);
+-			data += ETH_GSTRING_LEN;
+-		}
+-		break;
++		snprintf(data, ETH_GSTRING_LEN, "rx%d_bytes", i);
++		data += ETH_GSTRING_LEN;
+ 
+-	case ETH_SS_PRIV_FLAGS:
+-		for (i = 0; i < ARRAY_SIZE(ibmvnic_priv_flags); i++)
+-			strcpy(data + i * ETH_GSTRING_LEN,
+-			       ibmvnic_priv_flags[i]);
+-		break;
+-	default:
+-		return;
++		snprintf(data, ETH_GSTRING_LEN, "rx%d_interrupts", i);
++		data += ETH_GSTRING_LEN;
+ 	}
+ }
+ 
+@@ -2801,8 +2766,6 @@ static int ibmvnic_get_sset_count(struct net_device *dev, int sset)
+ 		return ARRAY_SIZE(ibmvnic_stats) +
+ 		       adapter->req_tx_queues * NUM_TX_STATS +
+ 		       adapter->req_rx_queues * NUM_RX_STATS;
+-	case ETH_SS_PRIV_FLAGS:
+-		return ARRAY_SIZE(ibmvnic_priv_flags);
+ 	default:
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -2833,8 +2796,8 @@ static void ibmvnic_get_ethtool_stats(struct net_device *dev,
+ 		return;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(ibmvnic_stats); i++)
+-		data[i] = be64_to_cpu(IBMVNIC_GET_STAT(adapter,
+-						ibmvnic_stats[i].offset));
++		data[i] = be64_to_cpu(IBMVNIC_GET_STAT
++				      (adapter, ibmvnic_stats[i].offset));
+ 
+ 	for (j = 0; j < adapter->req_tx_queues; j++) {
+ 		data[i] = adapter->tx_stats_buffers[j].packets;
+@@ -2855,25 +2818,6 @@ static void ibmvnic_get_ethtool_stats(struct net_device *dev,
+ 	}
+ }
+ 
+-static u32 ibmvnic_get_priv_flags(struct net_device *netdev)
+-{
+-	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+-
+-	return adapter->priv_flags;
+-}
+-
+-static int ibmvnic_set_priv_flags(struct net_device *netdev, u32 flags)
+-{
+-	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+-	bool which_maxes = !!(flags & IBMVNIC_USE_SERVER_MAXES);
+-
+-	if (which_maxes)
+-		adapter->priv_flags |= IBMVNIC_USE_SERVER_MAXES;
+-	else
+-		adapter->priv_flags &= ~IBMVNIC_USE_SERVER_MAXES;
+-
+-	return 0;
+-}
+ static const struct ethtool_ops ibmvnic_ethtool_ops = {
+ 	.get_drvinfo		= ibmvnic_get_drvinfo,
+ 	.get_msglevel		= ibmvnic_get_msglevel,
+@@ -2887,8 +2831,6 @@ static const struct ethtool_ops ibmvnic_ethtool_ops = {
+ 	.get_sset_count         = ibmvnic_get_sset_count,
+ 	.get_ethtool_stats	= ibmvnic_get_ethtool_stats,
+ 	.get_link_ksettings	= ibmvnic_get_link_ksettings,
+-	.get_priv_flags		= ibmvnic_get_priv_flags,
+-	.set_priv_flags		= ibmvnic_set_priv_flags,
+ };
+ 
+ /* Routines for managing CRQs/sCRQs  */
+@@ -3119,7 +3061,7 @@ static int enable_scrq_irq(struct ibmvnic_adapter *adapter,
+ 		/* H_EOI would fail with rc = H_FUNCTION when running
+ 		 * in XIVE mode which is expected, but not an error.
+ 		 */
+-		if (rc && (rc != H_FUNCTION))
++		if (rc && rc != H_FUNCTION)
+ 			dev_err(dev, "H_EOI FAILED irq 0x%llx. rc=%ld\n",
+ 				val, rc);
+ 	}
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
+index b27211063c643..2eb9a4d5533ab 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.h
++++ b/drivers/net/ethernet/ibm/ibmvnic.h
+@@ -41,11 +41,6 @@
+ 
+ #define IBMVNIC_RESET_DELAY 100
+ 
+-static const char ibmvnic_priv_flags[][ETH_GSTRING_LEN] = {
+-#define IBMVNIC_USE_SERVER_MAXES 0x1
+-	"use-server-maxes"
+-};
+-
+ struct ibmvnic_login_buffer {
+ 	__be32 len;
+ 	__be32 version;
+@@ -974,7 +969,6 @@ struct ibmvnic_adapter {
+ 	struct ibmvnic_control_ip_offload_buffer ip_offload_ctrl;
+ 	dma_addr_t ip_offload_ctrl_tok;
+ 	u32 msg_enable;
+-	u32 priv_flags;
+ 
+ 	/* Vital Product Data (VPD) */
+ 	struct ibmvnic_vpd *vpd;
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+index 54d47265a7ac1..319620856cba1 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+@@ -903,7 +903,8 @@ int ixgbe_ipsec_vf_add_sa(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+ 	/* Tx IPsec offload doesn't seem to work on this
+ 	 * device, so block these requests for now.
+ 	 */
+-	if (!(sam->flags & XFRM_OFFLOAD_INBOUND)) {
++	sam->flags = sam->flags & ~XFRM_OFFLOAD_IPV6;
++	if (sam->flags != XFRM_OFFLOAD_INBOUND) {
+ 		err = -EOPNOTSUPP;
+ 		goto err_out;
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+index 8bb0106cb7ea3..142bf912011e2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+@@ -65,8 +65,9 @@ static void socfpga_dwmac_fix_mac_speed(void *priv, unsigned int speed)
+ 	struct phy_device *phy_dev = ndev->phydev;
+ 	u32 val;
+ 
+-	writew(SGMII_ADAPTER_DISABLE,
+-	       sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
++	if (sgmii_adapter_base)
++		writew(SGMII_ADAPTER_DISABLE,
++		       sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+ 
+ 	if (splitter_base) {
+ 		val = readl(splitter_base + EMAC_SPLITTER_CTRL_REG);
+@@ -88,10 +89,11 @@ static void socfpga_dwmac_fix_mac_speed(void *priv, unsigned int speed)
+ 		writel(val, splitter_base + EMAC_SPLITTER_CTRL_REG);
+ 	}
+ 
+-	writew(SGMII_ADAPTER_ENABLE,
+-	       sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+-	if (phy_dev)
++	if (phy_dev && sgmii_adapter_base) {
++		writew(SGMII_ADAPTER_ENABLE,
++		       sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+ 		tse_pcs_fix_mac_speed(&dwmac->pcs, phy_dev, speed);
++	}
+ }
+ 
+ static int socfpga_dwmac_parse_data(struct socfpga_dwmac *dwmac, struct device *dev)
+diff --git a/drivers/net/hippi/rrunner.c b/drivers/net/hippi/rrunner.c
+index 22010384c4a33..b9646b369f8e7 100644
+--- a/drivers/net/hippi/rrunner.c
++++ b/drivers/net/hippi/rrunner.c
+@@ -1353,7 +1353,9 @@ static int rr_close(struct net_device *dev)
+ 
+ 	rrpriv->fw_running = 0;
+ 
++	spin_unlock_irqrestore(&rrpriv->lock, flags);
+ 	del_timer_sync(&rrpriv->timer);
++	spin_lock_irqsave(&rrpriv->lock, flags);
+ 
+ 	writel(0, &regs->TxPi);
+ 	writel(0, &regs->IpRxPi);
+diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c
+index b1bb9b8e1e4ed..2b64318efdba6 100644
+--- a/drivers/net/phy/marvell10g.c
++++ b/drivers/net/phy/marvell10g.c
+@@ -650,7 +650,7 @@ static int mv3310_read_status_copper(struct phy_device *phydev)
+ 
+ 	cssr1 = phy_read_mmd(phydev, MDIO_MMD_PCS, MV_PCS_CSSR1);
+ 	if (cssr1 < 0)
+-		return val;
++		return cssr1;
+ 
+ 	/* If the link settings are not resolved, mark the link down */
+ 	if (!(cssr1 & MV_PCS_CSSR1_RESOLVED)) {
+diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
+index e189eb95678d8..e0693cd965ec4 100644
+--- a/drivers/net/wireguard/device.c
++++ b/drivers/net/wireguard/device.c
+@@ -19,6 +19,7 @@
+ #include <linux/if_arp.h>
+ #include <linux/icmp.h>
+ #include <linux/suspend.h>
++#include <net/dst_metadata.h>
+ #include <net/icmp.h>
+ #include <net/rtnetlink.h>
+ #include <net/ip_tunnels.h>
+@@ -152,7 +153,7 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		goto err_peer;
+ 	}
+ 
+-	mtu = skb_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
++	mtu = skb_valid_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
+ 
+ 	__skb_queue_head_init(&packets);
+ 	if (!skb_is_gso(skb)) {
+diff --git a/drivers/phy/motorola/phy-mapphone-mdm6600.c b/drivers/phy/motorola/phy-mapphone-mdm6600.c
+index 5172971f4c360..3cd4d51c247c3 100644
+--- a/drivers/phy/motorola/phy-mapphone-mdm6600.c
++++ b/drivers/phy/motorola/phy-mapphone-mdm6600.c
+@@ -629,7 +629,8 @@ idle:
+ cleanup:
+ 	if (error < 0)
+ 		phy_mdm6600_device_power_off(ddata);
+-
++	pm_runtime_disable(ddata->dev);
++	pm_runtime_dont_use_autosuspend(ddata->dev);
+ 	return error;
+ }
+ 
+diff --git a/drivers/phy/samsung/phy-exynos5250-sata.c b/drivers/phy/samsung/phy-exynos5250-sata.c
+index 4dd7324d91b26..ea46576404b8e 100644
+--- a/drivers/phy/samsung/phy-exynos5250-sata.c
++++ b/drivers/phy/samsung/phy-exynos5250-sata.c
+@@ -190,6 +190,7 @@ static int exynos_sata_phy_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 
+ 	sata_phy->client = of_find_i2c_device_by_node(node);
++	of_node_put(node);
+ 	if (!sata_phy->client)
+ 		return -EPROBE_DEFER;
+ 
+@@ -198,20 +199,21 @@ static int exynos_sata_phy_probe(struct platform_device *pdev)
+ 	sata_phy->phyclk = devm_clk_get(dev, "sata_phyctrl");
+ 	if (IS_ERR(sata_phy->phyclk)) {
+ 		dev_err(dev, "failed to get clk for PHY\n");
+-		return PTR_ERR(sata_phy->phyclk);
++		ret = PTR_ERR(sata_phy->phyclk);
++		goto put_dev;
+ 	}
+ 
+ 	ret = clk_prepare_enable(sata_phy->phyclk);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to enable source clk\n");
+-		return ret;
++		goto put_dev;
+ 	}
+ 
+ 	sata_phy->phy = devm_phy_create(dev, NULL, &exynos_sata_phy_ops);
+ 	if (IS_ERR(sata_phy->phy)) {
+-		clk_disable_unprepare(sata_phy->phyclk);
+ 		dev_err(dev, "failed to create PHY\n");
+-		return PTR_ERR(sata_phy->phy);
++		ret = PTR_ERR(sata_phy->phy);
++		goto clk_disable;
+ 	}
+ 
+ 	phy_set_drvdata(sata_phy->phy, sata_phy);
+@@ -219,11 +221,18 @@ static int exynos_sata_phy_probe(struct platform_device *pdev)
+ 	phy_provider = devm_of_phy_provider_register(dev,
+ 					of_phy_simple_xlate);
+ 	if (IS_ERR(phy_provider)) {
+-		clk_disable_unprepare(sata_phy->phyclk);
+-		return PTR_ERR(phy_provider);
++		ret = PTR_ERR(phy_provider);
++		goto clk_disable;
+ 	}
+ 
+ 	return 0;
++
++clk_disable:
++	clk_disable_unprepare(sata_phy->phyclk);
++put_dev:
++	put_device(&sata_phy->client->dev);
++
++	return ret;
+ }
+ 
+ static const struct of_device_id exynos_sata_phy_of_match[] = {
+diff --git a/drivers/phy/ti/phy-am654-serdes.c b/drivers/phy/ti/phy-am654-serdes.c
+index 2ff56ce77b307..21c0088f5ca9e 100644
+--- a/drivers/phy/ti/phy-am654-serdes.c
++++ b/drivers/phy/ti/phy-am654-serdes.c
+@@ -838,7 +838,7 @@ static int serdes_am654_probe(struct platform_device *pdev)
+ 
+ clk_err:
+ 	of_clk_del_provider(node);
+-
++	pm_runtime_disable(dev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/phy/ti/phy-omap-usb2.c b/drivers/phy/ti/phy-omap-usb2.c
+index 4fec90d2624ff..f77ac041d8368 100644
+--- a/drivers/phy/ti/phy-omap-usb2.c
++++ b/drivers/phy/ti/phy-omap-usb2.c
+@@ -215,7 +215,7 @@ static int omap_usb2_enable_clocks(struct omap_usb *phy)
+ 	return 0;
+ 
+ err1:
+-	clk_disable(phy->wkupclk);
++	clk_disable_unprepare(phy->wkupclk);
+ 
+ err0:
+ 	return ret;
+diff --git a/drivers/pinctrl/mediatek/Kconfig b/drivers/pinctrl/mediatek/Kconfig
+index eef17f2286694..4ed41d361589b 100644
+--- a/drivers/pinctrl/mediatek/Kconfig
++++ b/drivers/pinctrl/mediatek/Kconfig
+@@ -30,6 +30,7 @@ config PINCTRL_MTK_MOORE
+ 	select GENERIC_PINMUX_FUNCTIONS
+ 	select GPIOLIB
+ 	select OF_GPIO
++	select EINT_MTK
+ 	select PINCTRL_MTK_V2
+ 
+ config PINCTRL_MTK_PARIS
+diff --git a/drivers/pinctrl/pinctrl-pistachio.c b/drivers/pinctrl/pinctrl-pistachio.c
+index ec761ba2a2da8..989a37fb402d1 100644
+--- a/drivers/pinctrl/pinctrl-pistachio.c
++++ b/drivers/pinctrl/pinctrl-pistachio.c
+@@ -1374,10 +1374,10 @@ static int pistachio_gpio_register(struct pistachio_pinctrl *pctl)
+ 		}
+ 
+ 		irq = irq_of_parse_and_map(child, 0);
+-		if (irq < 0) {
+-			dev_err(pctl->dev, "No IRQ for bank %u: %d\n", i, irq);
++		if (!irq) {
++			dev_err(pctl->dev, "No IRQ for bank %u\n", i);
+ 			of_node_put(child);
+-			ret = irq;
++			ret = -EINVAL;
+ 			goto err;
+ 		}
+ 
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 9df48e0cf4cb4..07b1204174bf1 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -663,95 +663,110 @@ static  struct rockchip_mux_recalced_data rk3128_mux_recalced_data[] = {
+ 
+ static struct rockchip_mux_recalced_data rk3308_mux_recalced_data[] = {
+ 	{
++		/* gpio1b6_sel */
+ 		.num = 1,
+ 		.pin = 14,
+ 		.reg = 0x28,
+ 		.bit = 12,
+ 		.mask = 0xf
+ 	}, {
++		/* gpio1b7_sel */
+ 		.num = 1,
+ 		.pin = 15,
+ 		.reg = 0x2c,
+ 		.bit = 0,
+ 		.mask = 0x3
+ 	}, {
++		/* gpio1c2_sel */
+ 		.num = 1,
+ 		.pin = 18,
+ 		.reg = 0x30,
+ 		.bit = 4,
+ 		.mask = 0xf
+ 	}, {
++		/* gpio1c3_sel */
+ 		.num = 1,
+ 		.pin = 19,
+ 		.reg = 0x30,
+ 		.bit = 8,
+ 		.mask = 0xf
+ 	}, {
++		/* gpio1c4_sel */
+ 		.num = 1,
+ 		.pin = 20,
+ 		.reg = 0x30,
+ 		.bit = 12,
+ 		.mask = 0xf
+ 	}, {
++		/* gpio1c5_sel */
+ 		.num = 1,
+ 		.pin = 21,
+ 		.reg = 0x34,
+ 		.bit = 0,
+ 		.mask = 0xf
+ 	}, {
++		/* gpio1c6_sel */
+ 		.num = 1,
+ 		.pin = 22,
+ 		.reg = 0x34,
+ 		.bit = 4,
+ 		.mask = 0xf
+ 	}, {
++		/* gpio1c7_sel */
+ 		.num = 1,
+ 		.pin = 23,
+ 		.reg = 0x34,
+ 		.bit = 8,
+ 		.mask = 0xf
+ 	}, {
++		/* gpio3b4_sel */
+ 		.num = 3,
+ 		.pin = 12,
+ 		.reg = 0x68,
+ 		.bit = 8,
+ 		.mask = 0xf
+ 	}, {
++		/* gpio3b5_sel */
+ 		.num = 3,
+ 		.pin = 13,
+ 		.reg = 0x68,
+ 		.bit = 12,
+ 		.mask = 0xf
+ 	}, {
++		/* gpio2a2_sel */
+ 		.num = 2,
+ 		.pin = 2,
+-		.reg = 0x608,
+-		.bit = 0,
+-		.mask = 0x7
++		.reg = 0x40,
++		.bit = 4,
++		.mask = 0x3
+ 	}, {
++		/* gpio2a3_sel */
+ 		.num = 2,
+ 		.pin = 3,
+-		.reg = 0x608,
+-		.bit = 4,
+-		.mask = 0x7
++		.reg = 0x40,
++		.bit = 6,
++		.mask = 0x3
+ 	}, {
++		/* gpio2c0_sel */
+ 		.num = 2,
+ 		.pin = 16,
+-		.reg = 0x610,
+-		.bit = 8,
+-		.mask = 0x7
++		.reg = 0x50,
++		.bit = 0,
++		.mask = 0x3
+ 	}, {
++		/* gpio3b2_sel */
+ 		.num = 3,
+ 		.pin = 10,
+-		.reg = 0x610,
+-		.bit = 0,
+-		.mask = 0x7
++		.reg = 0x68,
++		.bit = 4,
++		.mask = 0x3
+ 	}, {
++		/* gpio3b3_sel */
+ 		.num = 3,
+ 		.pin = 11,
+-		.reg = 0x610,
+-		.bit = 4,
+-		.mask = 0x7
++		.reg = 0x68,
++		.bit = 6,
++		.mask = 0x3
+ 	},
+ };
+ 
+diff --git a/drivers/pinctrl/samsung/Kconfig b/drivers/pinctrl/samsung/Kconfig
+index dfd805e768624..7b0576f71376e 100644
+--- a/drivers/pinctrl/samsung/Kconfig
++++ b/drivers/pinctrl/samsung/Kconfig
+@@ -4,14 +4,13 @@
+ #
+ config PINCTRL_SAMSUNG
+ 	bool
+-	depends on OF_GPIO
++	select GPIOLIB
+ 	select PINMUX
+ 	select PINCONF
+ 
+ config PINCTRL_EXYNOS
+ 	bool "Pinctrl common driver part for Samsung Exynos SoCs"
+-	depends on OF_GPIO
+-	depends on ARCH_EXYNOS || ARCH_S5PV210 || COMPILE_TEST
++	depends on ARCH_EXYNOS || ARCH_S5PV210 || (COMPILE_TEST && OF)
+ 	select PINCTRL_SAMSUNG
+ 	select PINCTRL_EXYNOS_ARM if ARM && (ARCH_EXYNOS || ARCH_S5PV210)
+ 	select PINCTRL_EXYNOS_ARM64 if ARM64 && ARCH_EXYNOS
+@@ -26,12 +25,10 @@ config PINCTRL_EXYNOS_ARM64
+ 
+ config PINCTRL_S3C24XX
+ 	bool "Samsung S3C24XX SoC pinctrl driver"
+-	depends on OF_GPIO
+-	depends on ARCH_S3C24XX || COMPILE_TEST
++	depends on ARCH_S3C24XX || (COMPILE_TEST && OF)
+ 	select PINCTRL_SAMSUNG
+ 
+ config PINCTRL_S3C64XX
+ 	bool "Samsung S3C64XX SoC pinctrl driver"
+-	depends on OF_GPIO
+-	depends on ARCH_S3C64XX || COMPILE_TEST
++	depends on ARCH_S3C64XX || (COMPILE_TEST && OF)
+ 	select PINCTRL_SAMSUNG
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index e13723bb2be41..b017dd400c460 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -225,6 +225,13 @@ static void stm32_gpio_free(struct gpio_chip *chip, unsigned offset)
+ 	pinctrl_gpio_free(chip->base + offset);
+ }
+ 
++static int stm32_gpio_get_noclk(struct gpio_chip *chip, unsigned int offset)
++{
++	struct stm32_gpio_bank *bank = gpiochip_get_data(chip);
++
++	return !!(readl_relaxed(bank->base + STM32_GPIO_IDR) & BIT(offset));
++}
++
+ static int stm32_gpio_get(struct gpio_chip *chip, unsigned offset)
+ {
+ 	struct stm32_gpio_bank *bank = gpiochip_get_data(chip);
+@@ -232,7 +239,7 @@ static int stm32_gpio_get(struct gpio_chip *chip, unsigned offset)
+ 
+ 	clk_enable(bank->clk);
+ 
+-	ret = !!(readl_relaxed(bank->base + STM32_GPIO_IDR) & BIT(offset));
++	ret = stm32_gpio_get_noclk(chip, offset);
+ 
+ 	clk_disable(bank->clk);
+ 
+@@ -311,8 +318,12 @@ static void stm32_gpio_irq_trigger(struct irq_data *d)
+ 	struct stm32_gpio_bank *bank = d->domain->host_data;
+ 	int level;
+ 
++	/* Do not access the GPIO if this is not LEVEL triggered IRQ. */
++	if (!(bank->irq_type[d->hwirq] & IRQ_TYPE_LEVEL_MASK))
++		return;
++
+ 	/* If level interrupt type then retrig */
+-	level = stm32_gpio_get(&bank->gpio_chip, d->hwirq);
++	level = stm32_gpio_get_noclk(&bank->gpio_chip, d->hwirq);
+ 	if ((level == 0 && bank->irq_type[d->hwirq] == IRQ_TYPE_LEVEL_LOW) ||
+ 	    (level == 1 && bank->irq_type[d->hwirq] == IRQ_TYPE_LEVEL_HIGH))
+ 		irq_chip_retrigger_hierarchy(d);
+@@ -354,6 +365,7 @@ static int stm32_gpio_irq_request_resources(struct irq_data *irq_data)
+ {
+ 	struct stm32_gpio_bank *bank = irq_data->domain->host_data;
+ 	struct stm32_pinctrl *pctl = dev_get_drvdata(bank->gpio_chip.parent);
++	unsigned long flags;
+ 	int ret;
+ 
+ 	ret = stm32_gpio_direction_input(&bank->gpio_chip, irq_data->hwirq);
+@@ -367,6 +379,10 @@ static int stm32_gpio_irq_request_resources(struct irq_data *irq_data)
+ 		return ret;
+ 	}
+ 
++	flags = irqd_get_trigger_type(irq_data);
++	if (flags & IRQ_TYPE_LEVEL_MASK)
++		clk_enable(bank->clk);
++
+ 	return 0;
+ }
+ 
+@@ -374,6 +390,9 @@ static void stm32_gpio_irq_release_resources(struct irq_data *irq_data)
+ {
+ 	struct stm32_gpio_bank *bank = irq_data->domain->host_data;
+ 
++	if (bank->irq_type[irq_data->hwirq] & IRQ_TYPE_LEVEL_MASK)
++		clk_disable(bank->clk);
++
+ 	gpiochip_unlock_as_irq(&bank->gpio_chip, irq_data->hwirq);
+ }
+ 
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index 72a26867c2092..28913867cd4bc 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -67,7 +67,7 @@ static int evaluate_odvp(struct int3400_thermal_priv *priv);
+ struct odvp_attr {
+ 	int odvp;
+ 	struct int3400_thermal_priv *priv;
+-	struct kobj_attribute attr;
++	struct device_attribute attr;
+ };
+ 
+ static ssize_t data_vault_read(struct file *file, struct kobject *kobj,
+@@ -269,7 +269,7 @@ static int int3400_thermal_run_osc(acpi_handle handle,
+ 	return result;
+ }
+ 
+-static ssize_t odvp_show(struct kobject *kobj, struct kobj_attribute *attr,
++static ssize_t odvp_show(struct device *dev, struct device_attribute *attr,
+ 			 char *buf)
+ {
+ 	struct odvp_attr *odvp_attr;
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index 05562b3cca451..aafc3bb60e52a 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -72,6 +72,8 @@ module_param(debug, int, 0600);
+  */
+ #define MAX_MRU 1500
+ #define MAX_MTU 1500
++/* SOF, ADDR, CTRL, LEN1, LEN2, ..., FCS, EOF */
++#define PROT_OVERHEAD 7
+ #define	GSM_NET_TX_TIMEOUT (HZ*10)
+ 
+ /**
+@@ -230,6 +232,7 @@ struct gsm_mux {
+ 	int initiator;			/* Did we initiate connection */
+ 	bool dead;			/* Has the mux been shut down */
+ 	struct gsm_dlci *dlci[NUM_DLCI];
++	int old_c_iflag;		/* termios c_iflag value before attach */
+ 	bool constipated;		/* Asked by remote to shut up */
+ 
+ 	spinlock_t tx_lock;
+@@ -818,7 +821,7 @@ static int gsm_dlci_data_output(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+ 			break;
+ 		case 2:	/* Unstructed with modem bits.
+ 		Always one byte as we never send inline break data */
+-			*dp++ = gsm_encode_modem(dlci);
++			*dp++ = (gsm_encode_modem(dlci) << 1) | EA;
+ 			break;
+ 		}
+ 		WARN_ON(kfifo_out_locked(&dlci->fifo, dp , len, &dlci->lock) != len);
+@@ -1295,11 +1298,12 @@ static void gsm_control_response(struct gsm_mux *gsm, unsigned int command,
+ 
+ static void gsm_control_transmit(struct gsm_mux *gsm, struct gsm_control *ctrl)
+ {
+-	struct gsm_msg *msg = gsm_data_alloc(gsm, 0, ctrl->len + 1, gsm->ftype);
++	struct gsm_msg *msg = gsm_data_alloc(gsm, 0, ctrl->len + 2, gsm->ftype);
+ 	if (msg == NULL)
+ 		return;
+-	msg->data[0] = (ctrl->cmd << 1) | 2 | EA;	/* command */
+-	memcpy(msg->data + 1, ctrl->data, ctrl->len);
++	msg->data[0] = (ctrl->cmd << 1) | CR | EA;	/* command */
++	msg->data[1] = (ctrl->len << 1) | EA;
++	memcpy(msg->data + 2, ctrl->data, ctrl->len);
+ 	gsm_data_queue(gsm->dlci[0], msg);
+ }
+ 
+@@ -1322,7 +1326,6 @@ static void gsm_control_retransmit(struct timer_list *t)
+ 	spin_lock_irqsave(&gsm->control_lock, flags);
+ 	ctrl = gsm->pending_cmd;
+ 	if (ctrl) {
+-		gsm->cretries--;
+ 		if (gsm->cretries == 0) {
+ 			gsm->pending_cmd = NULL;
+ 			ctrl->error = -ETIMEDOUT;
+@@ -1331,6 +1334,7 @@ static void gsm_control_retransmit(struct timer_list *t)
+ 			wake_up(&gsm->event);
+ 			return;
+ 		}
++		gsm->cretries--;
+ 		gsm_control_transmit(gsm, ctrl);
+ 		mod_timer(&gsm->t2_timer, jiffies + gsm->t2 * HZ / 100);
+ 	}
+@@ -1371,7 +1375,7 @@ retry:
+ 
+ 	/* If DLCI0 is in ADM mode skip retries, it won't respond */
+ 	if (gsm->dlci[0]->mode == DLCI_MODE_ADM)
+-		gsm->cretries = 1;
++		gsm->cretries = 0;
+ 	else
+ 		gsm->cretries = gsm->n2;
+ 
+@@ -1419,13 +1423,17 @@ static int gsm_control_wait(struct gsm_mux *gsm, struct gsm_control *control)
+ 
+ static void gsm_dlci_close(struct gsm_dlci *dlci)
+ {
++	unsigned long flags;
++
+ 	del_timer(&dlci->t1);
+ 	if (debug & 8)
+ 		pr_debug("DLCI %d goes closed.\n", dlci->addr);
+ 	dlci->state = DLCI_CLOSED;
+ 	if (dlci->addr != 0) {
+ 		tty_port_tty_hangup(&dlci->port, false);
++		spin_lock_irqsave(&dlci->lock, flags);
+ 		kfifo_reset(&dlci->fifo);
++		spin_unlock_irqrestore(&dlci->lock, flags);
+ 		/* Ensure that gsmtty_open() can return. */
+ 		tty_port_set_initialized(&dlci->port, 0);
+ 		wake_up_interruptible(&dlci->port.open_wait);
+@@ -1810,7 +1818,6 @@ static void gsm_queue(struct gsm_mux *gsm)
+ 		gsm_response(gsm, address, UA);
+ 		gsm_dlci_close(dlci);
+ 		break;
+-	case UA:
+ 	case UA|PF:
+ 		if (cr == 0 || dlci == NULL)
+ 			break;
+@@ -1954,6 +1961,16 @@ static void gsm0_receive(struct gsm_mux *gsm, unsigned char c)
+ 
+ static void gsm1_receive(struct gsm_mux *gsm, unsigned char c)
+ {
++	/* handle XON/XOFF */
++	if ((c & ISO_IEC_646_MASK) == XON) {
++		gsm->constipated = true;
++		return;
++	} else if ((c & ISO_IEC_646_MASK) == XOFF) {
++		gsm->constipated = false;
++		/* Kick the link in case it is idling */
++		gsm_data_kick(gsm, NULL);
++		return;
++	}
+ 	if (c == GSM1_SOF) {
+ 		/* EOF is only valid in frame if we have got to the data state
+ 		   and received at least one byte (the FCS) */
+@@ -1968,7 +1985,8 @@ static void gsm1_receive(struct gsm_mux *gsm, unsigned char c)
+ 		}
+ 		/* Any partial frame was a runt so go back to start */
+ 		if (gsm->state != GSM_START) {
+-			gsm->malformed++;
++			if (gsm->state != GSM_SEARCH)
++				gsm->malformed++;
+ 			gsm->state = GSM_START;
+ 		}
+ 		/* A SOF in GSM_START means we are still reading idling or
+@@ -2040,74 +2058,43 @@ static void gsm_error(struct gsm_mux *gsm,
+ 	gsm->io_error++;
+ }
+ 
+-static int gsm_disconnect(struct gsm_mux *gsm)
+-{
+-	struct gsm_dlci *dlci = gsm->dlci[0];
+-	struct gsm_control *gc;
+-
+-	if (!dlci)
+-		return 0;
+-
+-	/* In theory disconnecting DLCI 0 is sufficient but for some
+-	   modems this is apparently not the case. */
+-	gc = gsm_control_send(gsm, CMD_CLD, NULL, 0);
+-	if (gc)
+-		gsm_control_wait(gsm, gc);
+-
+-	del_timer_sync(&gsm->t2_timer);
+-	/* Now we are sure T2 has stopped */
+-
+-	gsm_dlci_begin_close(dlci);
+-	wait_event_interruptible(gsm->event,
+-				dlci->state == DLCI_CLOSED);
+-
+-	if (signal_pending(current))
+-		return -EINTR;
+-
+-	return 0;
+-}
+-
+ /**
+  *	gsm_cleanup_mux		-	generic GSM protocol cleanup
+  *	@gsm: our mux
++ *	@disc: disconnect link?
+  *
+  *	Clean up the bits of the mux which are the same for all framing
+  *	protocols. Remove the mux from the mux table, stop all the timers
+  *	and then shut down each device hanging up the channels as we go.
+  */
+ 
+-static void gsm_cleanup_mux(struct gsm_mux *gsm)
++static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
+ {
+ 	int i;
+ 	struct gsm_dlci *dlci = gsm->dlci[0];
+ 	struct gsm_msg *txq, *ntxq;
+ 
+ 	gsm->dead = true;
++	mutex_lock(&gsm->mutex);
+ 
+-	spin_lock(&gsm_mux_lock);
+-	for (i = 0; i < MAX_MUX; i++) {
+-		if (gsm_mux[i] == gsm) {
+-			gsm_mux[i] = NULL;
+-			break;
++	if (dlci) {
++		if (disc && dlci->state != DLCI_CLOSED) {
++			gsm_dlci_begin_close(dlci);
++			wait_event(gsm->event, dlci->state == DLCI_CLOSED);
+ 		}
++		dlci->dead = true;
+ 	}
+-	spin_unlock(&gsm_mux_lock);
+-	/* open failed before registering => nothing to do */
+-	if (i == MAX_MUX)
+-		return;
+ 
++	/* Finish outstanding timers, making sure they are done */
+ 	del_timer_sync(&gsm->t2_timer);
+-	/* Now we are sure T2 has stopped */
+-	if (dlci)
+-		dlci->dead = true;
+ 
+-	/* Free up any link layer users */
+-	mutex_lock(&gsm->mutex);
+-	for (i = 0; i < NUM_DLCI; i++)
++	/* Free up any link layer users and finally the control channel */
++	for (i = NUM_DLCI - 1; i >= 0; i--)
+ 		if (gsm->dlci[i])
+ 			gsm_dlci_release(gsm->dlci[i]);
+ 	mutex_unlock(&gsm->mutex);
+ 	/* Now wipe the queues */
++	tty_ldisc_flush(gsm->tty);
+ 	list_for_each_entry_safe(txq, ntxq, &gsm->tx_list, list)
+ 		kfree(txq);
+ 	INIT_LIST_HEAD(&gsm->tx_list);
+@@ -2125,7 +2112,6 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm)
+ static int gsm_activate_mux(struct gsm_mux *gsm)
+ {
+ 	struct gsm_dlci *dlci;
+-	int i = 0;
+ 
+ 	timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0);
+ 	init_waitqueue_head(&gsm->event);
+@@ -2137,18 +2123,6 @@ static int gsm_activate_mux(struct gsm_mux *gsm)
+ 	else
+ 		gsm->receive = gsm1_receive;
+ 
+-	spin_lock(&gsm_mux_lock);
+-	for (i = 0; i < MAX_MUX; i++) {
+-		if (gsm_mux[i] == NULL) {
+-			gsm->num = i;
+-			gsm_mux[i] = gsm;
+-			break;
+-		}
+-	}
+-	spin_unlock(&gsm_mux_lock);
+-	if (i == MAX_MUX)
+-		return -EBUSY;
+-
+ 	dlci = gsm_dlci_alloc(gsm, 0);
+ 	if (dlci == NULL)
+ 		return -ENOMEM;
+@@ -2164,6 +2138,15 @@ static int gsm_activate_mux(struct gsm_mux *gsm)
+  */
+ static void gsm_free_mux(struct gsm_mux *gsm)
+ {
++	int i;
++
++	for (i = 0; i < MAX_MUX; i++) {
++		if (gsm == gsm_mux[i]) {
++			gsm_mux[i] = NULL;
++			break;
++		}
++	}
++	mutex_destroy(&gsm->mutex);
+ 	kfree(gsm->txframe);
+ 	kfree(gsm->buf);
+ 	kfree(gsm);
+@@ -2183,12 +2166,20 @@ static void gsm_free_muxr(struct kref *ref)
+ 
+ static inline void mux_get(struct gsm_mux *gsm)
+ {
++	unsigned long flags;
++
++	spin_lock_irqsave(&gsm_mux_lock, flags);
+ 	kref_get(&gsm->ref);
++	spin_unlock_irqrestore(&gsm_mux_lock, flags);
+ }
+ 
+ static inline void mux_put(struct gsm_mux *gsm)
+ {
++	unsigned long flags;
++
++	spin_lock_irqsave(&gsm_mux_lock, flags);
+ 	kref_put(&gsm->ref, gsm_free_muxr);
++	spin_unlock_irqrestore(&gsm_mux_lock, flags);
+ }
+ 
+ static inline unsigned int mux_num_to_base(struct gsm_mux *gsm)
+@@ -2209,6 +2200,7 @@ static inline unsigned int mux_line_to_num(unsigned int line)
+ 
+ static struct gsm_mux *gsm_alloc_mux(void)
+ {
++	int i;
+ 	struct gsm_mux *gsm = kzalloc(sizeof(struct gsm_mux), GFP_KERNEL);
+ 	if (gsm == NULL)
+ 		return NULL;
+@@ -2217,7 +2209,7 @@ static struct gsm_mux *gsm_alloc_mux(void)
+ 		kfree(gsm);
+ 		return NULL;
+ 	}
+-	gsm->txframe = kmalloc(2 * MAX_MRU + 2, GFP_KERNEL);
++	gsm->txframe = kmalloc(2 * (MAX_MTU + PROT_OVERHEAD - 1), GFP_KERNEL);
+ 	if (gsm->txframe == NULL) {
+ 		kfree(gsm->buf);
+ 		kfree(gsm);
+@@ -2238,6 +2230,26 @@ static struct gsm_mux *gsm_alloc_mux(void)
+ 	gsm->mtu = 64;
+ 	gsm->dead = true;	/* Avoid early tty opens */
+ 
++	/* Store the instance to the mux array or abort if no space is
++	 * available.
++	 */
++	spin_lock(&gsm_mux_lock);
++	for (i = 0; i < MAX_MUX; i++) {
++		if (!gsm_mux[i]) {
++			gsm_mux[i] = gsm;
++			gsm->num = i;
++			break;
++		}
++	}
++	spin_unlock(&gsm_mux_lock);
++	if (i == MAX_MUX) {
++		mutex_destroy(&gsm->mutex);
++		kfree(gsm->txframe);
++		kfree(gsm->buf);
++		kfree(gsm);
++		return NULL;
++	}
++
+ 	return gsm;
+ }
+ 
+@@ -2273,7 +2285,7 @@ static int gsm_config(struct gsm_mux *gsm, struct gsm_config *c)
+ 	/* Check the MRU/MTU range looks sane */
+ 	if (c->mru > MAX_MRU || c->mtu > MAX_MTU || c->mru < 8 || c->mtu < 8)
+ 		return -EINVAL;
+-	if (c->n2 < 3)
++	if (c->n2 > 255)
+ 		return -EINVAL;
+ 	if (c->encapsulation > 1)	/* Basic, advanced, no I */
+ 		return -EINVAL;
+@@ -2304,19 +2316,11 @@ static int gsm_config(struct gsm_mux *gsm, struct gsm_config *c)
+ 
+ 	/*
+ 	 * Close down what is needed, restart and initiate the new
+-	 * configuration
++	 * configuration. On the first time there is no DLCI[0]
++	 * and closing or cleaning up is not necessary.
+ 	 */
+-
+-	if (need_close || need_restart) {
+-		int ret;
+-
+-		ret = gsm_disconnect(gsm);
+-
+-		if (ret)
+-			return ret;
+-	}
+-	if (need_restart)
+-		gsm_cleanup_mux(gsm);
++	if (need_close || need_restart)
++		gsm_cleanup_mux(gsm, true);
+ 
+ 	gsm->initiator = c->initiator;
+ 	gsm->mru = c->mru;
+@@ -2385,6 +2389,9 @@ static int gsmld_attach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
+ 	int ret, i;
+ 
+ 	gsm->tty = tty_kref_get(tty);
++	/* Turn off tty XON/XOFF handling to handle it explicitly. */
++	gsm->old_c_iflag = tty->termios.c_iflag;
++	tty->termios.c_iflag &= (IXON | IXOFF);
+ 	ret =  gsm_activate_mux(gsm);
+ 	if (ret != 0)
+ 		tty_kref_put(gsm->tty);
+@@ -2425,7 +2432,8 @@ static void gsmld_detach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
+ 	WARN_ON(tty != gsm->tty);
+ 	for (i = 1; i < NUM_DLCI; i++)
+ 		tty_unregister_device(gsm_tty_driver, base + i);
+-	gsm_cleanup_mux(gsm);
++	/* Restore tty XON/XOFF handling. */
++	gsm->tty->termios.c_iflag = gsm->old_c_iflag;
+ 	tty_kref_put(gsm->tty);
+ 	gsm->tty = NULL;
+ }
+@@ -2493,6 +2501,12 @@ static void gsmld_close(struct tty_struct *tty)
+ {
+ 	struct gsm_mux *gsm = tty->disc_data;
+ 
++	/* The ldisc locks and closes the port before calling our close. This
++	 * means we have no way to do a proper disconnect. We will not bother
++	 * to do one.
++	 */
++	gsm_cleanup_mux(gsm, false);
++
+ 	gsmld_detach_gsm(tty, gsm);
+ 
+ 	gsmld_flush_buffer(tty);
+@@ -2531,7 +2545,7 @@ static int gsmld_open(struct tty_struct *tty)
+ 
+ 	ret = gsmld_attach_gsm(tty, gsm);
+ 	if (ret != 0) {
+-		gsm_cleanup_mux(gsm);
++		gsm_cleanup_mux(gsm, false);
+ 		mux_put(gsm);
+ 	}
+ 	return ret;
+@@ -2888,19 +2902,17 @@ static struct tty_ldisc_ops tty_ldisc_packet = {
+ 
+ static int gsmtty_modem_update(struct gsm_dlci *dlci, u8 brk)
+ {
+-	u8 modembits[5];
++	u8 modembits[3];
+ 	struct gsm_control *ctrl;
+ 	int len = 2;
+ 
+-	if (brk)
++	modembits[0] = (dlci->addr << 2) | 2 | EA;  /* DLCI, Valid, EA */
++	modembits[1] = (gsm_encode_modem(dlci) << 1) | EA;
++	if (brk) {
++		modembits[2] = (brk << 4) | 2 | EA; /* Length, Break, EA */
+ 		len++;
+-
+-	modembits[0] = len << 1 | EA;		/* Data bytes */
+-	modembits[1] = dlci->addr << 2 | 3;	/* DLCI, EA, 1 */
+-	modembits[2] = gsm_encode_modem(dlci) << 1 | EA;
+-	if (brk)
+-		modembits[3] = brk << 4 | 2 | EA;	/* Valid, EA */
+-	ctrl = gsm_control_send(dlci->gsm, CMD_MSC, modembits, len + 1);
++	}
++	ctrl = gsm_control_send(dlci->gsm, CMD_MSC, modembits, len);
+ 	if (ctrl == NULL)
+ 		return -ENOMEM;
+ 	return gsm_control_wait(dlci->gsm, ctrl);
+@@ -3085,13 +3097,17 @@ static int gsmtty_chars_in_buffer(struct tty_struct *tty)
+ static void gsmtty_flush_buffer(struct tty_struct *tty)
+ {
+ 	struct gsm_dlci *dlci = tty->driver_data;
++	unsigned long flags;
++
+ 	if (dlci->state == DLCI_CLOSED)
+ 		return;
+ 	/* Caution needed: If we implement reliable transport classes
+ 	   then the data being transmitted can't simply be junked once
+ 	   it has first hit the stack. Until then we can just blow it
+ 	   away */
++	spin_lock_irqsave(&dlci->lock, flags);
+ 	kfifo_reset(&dlci->fifo);
++	spin_unlock_irqrestore(&dlci->lock, flags);
+ 	/* Need to unhook this DLCI from the transmit queue logic */
+ }
+ 
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 3a985e953b8e9..da2373787f853 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -2940,7 +2940,7 @@ enum pci_board_num_t {
+ 	pbn_panacom2,
+ 	pbn_panacom4,
+ 	pbn_plx_romulus,
+-	pbn_endrun_2_4000000,
++	pbn_endrun_2_3906250,
+ 	pbn_oxsemi,
+ 	pbn_oxsemi_1_4000000,
+ 	pbn_oxsemi_2_4000000,
+@@ -3468,10 +3468,10 @@ static struct pciserial_board pci_boards[] = {
+ 	* signal now many ports are available
+ 	* 2 port 952 Uart support
+ 	*/
+-	[pbn_endrun_2_4000000] = {
++	[pbn_endrun_2_3906250] = {
+ 		.flags		= FL_BASE0,
+ 		.num_ports	= 2,
+-		.base_baud	= 4000000,
++		.base_baud	= 3906250,
+ 		.uart_offset	= 0x200,
+ 		.first_offset	= 0x1000,
+ 	},
+@@ -4386,7 +4386,7 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 	*/
+ 	{	PCI_VENDOR_ID_ENDRUN, PCI_DEVICE_ID_ENDRUN_1588,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_endrun_2_4000000 },
++		pbn_endrun_2_3906250 },
+ 	/*
+ 	 * Quatech cards. These actually have configurable clocks but for
+ 	 * now we just use the default.
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 3055353514e1d..e0fa24f0f732d 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -3311,7 +3311,7 @@ static void serial8250_console_restore(struct uart_8250_port *up)
+ 
+ 	serial8250_set_divisor(port, baud, quot, frac);
+ 	serial_port_out(port, UART_LCR, up->lcr);
+-	serial8250_out_MCR(up, UART_MCR_DTR | UART_MCR_RTS);
++	serial8250_out_MCR(up, up->mcr | UART_MCR_DTR | UART_MCR_RTS);
+ }
+ 
+ /*
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 93cd8ad57f385..bfbca711bbf9b 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -1470,7 +1470,7 @@ static int imx_uart_startup(struct uart_port *port)
+ 	imx_uart_writel(sport, ucr1, UCR1);
+ 
+ 	ucr4 = imx_uart_readl(sport, UCR4) & ~(UCR4_OREN | UCR4_INVR);
+-	if (!sport->dma_is_enabled)
++	if (!dma_is_inited)
+ 		ucr4 |= UCR4_OREN;
+ 	if (sport->inverted_rx)
+ 		ucr4 |= UCR4_INVR;
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index e111622944139..d5056cc34974a 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -2697,6 +2697,7 @@ int __cdns3_gadget_ep_clear_halt(struct cdns3_endpoint *priv_ep)
+ 	struct usb_request *request;
+ 	struct cdns3_request *priv_req;
+ 	struct cdns3_trb *trb = NULL;
++	struct cdns3_trb trb_tmp;
+ 	int ret;
+ 	int val;
+ 
+@@ -2706,8 +2707,10 @@ int __cdns3_gadget_ep_clear_halt(struct cdns3_endpoint *priv_ep)
+ 	if (request) {
+ 		priv_req = to_cdns3_request(request);
+ 		trb = priv_req->trb;
+-		if (trb)
++		if (trb) {
++			trb_tmp = *trb;
+ 			trb->control = trb->control ^ cpu_to_le32(TRB_CYCLE);
++		}
+ 	}
+ 
+ 	writel(EP_CMD_CSTALL | EP_CMD_EPRST, &priv_dev->regs->ep_cmd);
+@@ -2722,7 +2725,7 @@ int __cdns3_gadget_ep_clear_halt(struct cdns3_endpoint *priv_ep)
+ 
+ 	if (request) {
+ 		if (trb)
+-			trb->control = trb->control ^ cpu_to_le32(TRB_CYCLE);
++			*trb = trb_tmp;
+ 
+ 		cdns3_rearm_transfer(priv_ep, 1);
+ 	}
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index baf80e2ac7d8e..7e5fd6afd9f45 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -404,6 +404,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x0b05, 0x17e0), .driver_info =
+ 			USB_QUIRK_IGNORE_REMOTE_WAKEUP },
+ 
++	/* Realtek Semiconductor Corp. Mass Storage Device (Multicard Reader)*/
++	{ USB_DEVICE(0x0bda, 0x0151), .driver_info = USB_QUIRK_CONFIG_INTF_STRINGS },
++
+ 	/* Realtek hub in Dell WD19 (Type-C) */
+ 	{ USB_DEVICE(0x0bda, 0x0487), .driver_info = USB_QUIRK_NO_LPM },
+ 	{ USB_DEVICE(0x0bda, 0x5487), .driver_info = USB_QUIRK_RESET_RESUME },
+@@ -508,6 +511,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* DJI CineSSD */
+ 	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+ 
++	/* VCOM device */
++	{ USB_DEVICE(0x4296, 0x7570), .driver_info = USB_QUIRK_CONFIG_INTF_STRINGS },
++
+ 	/* INTEL VALUE SSD */
+ 	{ USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 1580d51aea4f7..d97da7cef8679 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -275,7 +275,8 @@ static int dwc3_core_soft_reset(struct dwc3 *dwc)
+ 
+ 	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+ 	reg |= DWC3_DCTL_CSFTRST;
+-	dwc3_writel(dwc->regs, DWC3_DCTL, reg);
++	reg &= ~DWC3_DCTL_RUN_STOP;
++	dwc3_gadget_dctl_write_safe(dwc, reg);
+ 
+ 	/*
+ 	 * For DWC_usb31 controller 1.90a and later, the DCTL.CSFRST bit
+@@ -1277,10 +1278,10 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+ 	u8			lpm_nyet_threshold;
+ 	u8			tx_de_emphasis;
+ 	u8			hird_threshold;
+-	u8			rx_thr_num_pkt_prd;
+-	u8			rx_max_burst_prd;
+-	u8			tx_thr_num_pkt_prd;
+-	u8			tx_max_burst_prd;
++	u8			rx_thr_num_pkt_prd = 0;
++	u8			rx_max_burst_prd = 0;
++	u8			tx_thr_num_pkt_prd = 0;
++	u8			tx_max_burst_prd = 0;
+ 
+ 	/* default to highest possible threshold */
+ 	lpm_nyet_threshold = 0xf;
+diff --git a/drivers/usb/dwc3/drd.c b/drivers/usb/dwc3/drd.c
+index 3e1c1aacf002b..0a96f44ccca78 100644
+--- a/drivers/usb/dwc3/drd.c
++++ b/drivers/usb/dwc3/drd.c
+@@ -568,16 +568,15 @@ int dwc3_drd_init(struct dwc3 *dwc)
+ {
+ 	int ret, irq;
+ 
++	if (ROLE_SWITCH &&
++	    device_property_read_bool(dwc->dev, "usb-role-switch"))
++		return dwc3_setup_role_switch(dwc);
++
+ 	dwc->edev = dwc3_get_extcon(dwc);
+ 	if (IS_ERR(dwc->edev))
+ 		return PTR_ERR(dwc->edev);
+ 
+-	if (ROLE_SWITCH &&
+-	    device_property_read_bool(dwc->dev, "usb-role-switch")) {
+-		ret = dwc3_setup_role_switch(dwc);
+-		if (ret < 0)
+-			return ret;
+-	} else if (dwc->edev) {
++	if (dwc->edev) {
+ 		dwc->edev_nb.notifier_call = dwc3_drd_notifier;
+ 		ret = extcon_register_notifier(dwc->edev, EXTCON_USB_HOST,
+ 					       &dwc->edev_nb);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index b68fe48ac5792..1f0503dc96eed 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2859,6 +2859,7 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+ 		const struct dwc3_event_depevt *event,
+ 		struct dwc3_request *req, int status)
+ {
++	int request_status;
+ 	int ret;
+ 
+ 	if (req->request.num_mapped_sgs)
+@@ -2879,7 +2880,35 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+ 		req->needs_extra_trb = false;
+ 	}
+ 
+-	dwc3_gadget_giveback(dep, req, status);
++	/*
++	 * The event status only reflects the status of the TRB with IOC set.
++	 * For the requests that don't set interrupt on completion, the driver
++	 * needs to check and return the status of the completed TRBs associated
++	 * with the request. Use the status of the last TRB of the request.
++	 */
++	if (req->request.no_interrupt) {
++		struct dwc3_trb *trb;
++
++		trb = dwc3_ep_prev_trb(dep, dep->trb_dequeue);
++		switch (DWC3_TRB_SIZE_TRBSTS(trb->size)) {
++		case DWC3_TRBSTS_MISSED_ISOC:
++			/* Isoc endpoint only */
++			request_status = -EXDEV;
++			break;
++		case DWC3_TRB_STS_XFER_IN_PROG:
++			/* Applicable when End Transfer with ForceRM=0 */
++		case DWC3_TRBSTS_SETUP_PENDING:
++			/* Control endpoint only */
++		case DWC3_TRBSTS_OK:
++		default:
++			request_status = 0;
++			break;
++		}
++	} else {
++		request_status = status;
++	}
++
++	dwc3_gadget_giveback(dep, req, request_status);
+ 
+ out:
+ 	return ret;
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index 9b7fa53d6642b..d51ea1c052f24 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -1443,6 +1443,8 @@ static void configfs_composite_unbind(struct usb_gadget *gadget)
+ 	usb_ep_autoconfig_reset(cdev->gadget);
+ 	spin_lock_irqsave(&gi->spinlock, flags);
+ 	cdev->gadget = NULL;
++	cdev->deactivations = 0;
++	gadget->deactivated = false;
+ 	set_gadget_data(gadget, NULL);
+ 	spin_unlock_irqrestore(&gi->spinlock, flags);
+ }
+diff --git a/drivers/usb/gadget/function/uvc_queue.c b/drivers/usb/gadget/function/uvc_queue.c
+index 61e2c94cc0b0c..cab1e30462c24 100644
+--- a/drivers/usb/gadget/function/uvc_queue.c
++++ b/drivers/usb/gadget/function/uvc_queue.c
+@@ -242,6 +242,8 @@ void uvcg_queue_cancel(struct uvc_video_queue *queue, int disconnect)
+ 		buf->state = UVC_BUF_STATE_ERROR;
+ 		vb2_buffer_done(&buf->buf.vb2_buf, VB2_BUF_STATE_ERROR);
+ 	}
++	queue->buf_used = 0;
++
+ 	/* This must be protected by the irqlock spinlock to avoid race
+ 	 * conditions between uvc_queue_buffer and the disconnection event that
+ 	 * could result in an interruptible wait in uvc_dequeue_buffer. Do not
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 460a8a86e3111..1eb3b5deb940e 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -1348,7 +1348,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				}
+ 				spin_unlock_irqrestore(&xhci->lock, flags);
+ 				if (!wait_for_completion_timeout(&bus_state->u3exit_done[wIndex],
+-								 msecs_to_jiffies(100)))
++								 msecs_to_jiffies(500)))
+ 					xhci_dbg(xhci, "missing U0 port change event for port %d-%d\n",
+ 						 hcd->self.busnum, wIndex + 1);
+ 				spin_lock_irqsave(&xhci->lock, flags);
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index dafb58f05c9fb..48e2c04741891 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -59,6 +59,7 @@
+ #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI		0x9a13
+ #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI		0x1138
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI		0x461e
++#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI	0x51ed
+ 
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_4			0x43b9
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3			0x43ba
+@@ -261,7 +262,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
+-	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI))
++	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI))
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 76389c0dda8bc..fa3a7ac15f825 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -2969,6 +2969,8 @@ irqreturn_t xhci_irq(struct usb_hcd *hcd)
+ 		if (event_loop++ < TRBS_PER_SEGMENT / 2)
+ 			continue;
+ 		xhci_update_erst_dequeue(xhci, event_ring_deq);
++		event_ring_deq = xhci->event_ring->dequeue;
++
+ 		event_loop = 0;
+ 	}
+ 
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 95effd28179b4..a1ed5e0d06128 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -779,6 +779,17 @@ void xhci_shutdown(struct usb_hcd *hcd)
+ 	if (xhci->quirks & XHCI_SPURIOUS_REBOOT)
+ 		usb_disable_xhci_ports(to_pci_dev(hcd->self.sysdev));
+ 
++	/* Don't poll the roothubs after shutdown. */
++	xhci_dbg(xhci, "%s: stopping usb%d port polling.\n",
++			__func__, hcd->self.busnum);
++	clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);
++	del_timer_sync(&hcd->rh_timer);
++
++	if (xhci->shared_hcd) {
++		clear_bit(HCD_FLAG_POLL_RH, &xhci->shared_hcd->flags);
++		del_timer_sync(&xhci->shared_hcd->rh_timer);
++	}
++
+ 	spin_lock_irq(&xhci->lock);
+ 	xhci_halt(xhci);
+ 	/* Workaround for spurious wakeups at shutdown with HSW */
+diff --git a/drivers/usb/misc/uss720.c b/drivers/usb/misc/uss720.c
+index 748139d262633..0be8efcda15d5 100644
+--- a/drivers/usb/misc/uss720.c
++++ b/drivers/usb/misc/uss720.c
+@@ -71,6 +71,7 @@ static void destroy_priv(struct kref *kref)
+ 
+ 	dev_dbg(&priv->usbdev->dev, "destroying priv datastructure\n");
+ 	usb_put_dev(priv->usbdev);
++	priv->usbdev = NULL;
+ 	kfree(priv);
+ }
+ 
+@@ -736,7 +737,6 @@ static int uss720_probe(struct usb_interface *intf,
+ 	parport_announce_port(pp);
+ 
+ 	usb_set_intfdata(intf, pp);
+-	usb_put_dev(usbdev);
+ 	return 0;
+ 
+ probe_abort:
+@@ -754,7 +754,6 @@ static void uss720_disconnect(struct usb_interface *intf)
+ 	usb_set_intfdata(intf, NULL);
+ 	if (pp) {
+ 		priv = pp->private_data;
+-		priv->usbdev = NULL;
+ 		priv->pp = NULL;
+ 		dev_dbg(&intf->dev, "parport_remove_port\n");
+ 		parport_remove_port(pp);
+diff --git a/drivers/usb/mtu3/mtu3_dr.c b/drivers/usb/mtu3/mtu3_dr.c
+index 04f666e857312..16731324e5d8a 100644
+--- a/drivers/usb/mtu3/mtu3_dr.c
++++ b/drivers/usb/mtu3/mtu3_dr.c
+@@ -41,10 +41,8 @@ static char *mailbox_state_string(enum mtu3_vbus_id_state state)
+ 
+ static void toggle_opstate(struct ssusb_mtk *ssusb)
+ {
+-	if (!ssusb->otg_switch.is_u3_drd) {
+-		mtu3_setbits(ssusb->mac_base, U3D_DEVICE_CONTROL, DC_SESSION);
+-		mtu3_setbits(ssusb->mac_base, U3D_POWER_MANAGEMENT, SOFT_CONN);
+-	}
++	mtu3_setbits(ssusb->mac_base, U3D_DEVICE_CONTROL, DC_SESSION);
++	mtu3_setbits(ssusb->mac_base, U3D_POWER_MANAGEMENT, SOFT_CONN);
+ }
+ 
+ /* only port0 supports dual-role mode */
+diff --git a/drivers/usb/phy/phy-generic.c b/drivers/usb/phy/phy-generic.c
+index 661a229c105dd..34b9f81401871 100644
+--- a/drivers/usb/phy/phy-generic.c
++++ b/drivers/usb/phy/phy-generic.c
+@@ -268,6 +268,13 @@ int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_generic *nop)
+ 			return -EPROBE_DEFER;
+ 	}
+ 
++	nop->vbus_draw = devm_regulator_get_exclusive(dev, "vbus");
++	if (PTR_ERR(nop->vbus_draw) == -ENODEV)
++		nop->vbus_draw = NULL;
++	if (IS_ERR(nop->vbus_draw))
++		return dev_err_probe(dev, PTR_ERR(nop->vbus_draw),
++				     "could not get vbus regulator\n");
++
+ 	nop->dev		= dev;
+ 	nop->phy.dev		= nop->dev;
+ 	nop->phy.label		= "nop-xceiv";
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 7ac668023da87..067b206bd2527 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -198,6 +198,8 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x16DC, 0x0015) }, /* W-IE-NE-R Plein & Baus GmbH CML Control, Monitoring and Data Logger */
+ 	{ USB_DEVICE(0x17A8, 0x0001) }, /* Kamstrup Optical Eye/3-wire */
+ 	{ USB_DEVICE(0x17A8, 0x0005) }, /* Kamstrup M-Bus Master MultiPort 250D */
++	{ USB_DEVICE(0x17A8, 0x0101) }, /* Kamstrup 868 MHz wM-Bus C-Mode Meter Reader (Int Ant) */
++	{ USB_DEVICE(0x17A8, 0x0102) }, /* Kamstrup 868 MHz wM-Bus C-Mode Meter Reader (Ext Ant) */
+ 	{ USB_DEVICE(0x17F4, 0xAAAA) }, /* Wavesense Jazz blood glucose meter */
+ 	{ USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */
+ 	{ USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index b878f4c87fee8..f14a090ce6b62 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -432,6 +432,8 @@ static void option_instat_callback(struct urb *urb);
+ #define CINTERION_PRODUCT_CLS8			0x00b0
+ #define CINTERION_PRODUCT_MV31_MBIM		0x00b3
+ #define CINTERION_PRODUCT_MV31_RMNET		0x00b7
++#define CINTERION_PRODUCT_MV32_WA		0x00f1
++#define CINTERION_PRODUCT_MV32_WB		0x00f2
+ 
+ /* Olivetti products */
+ #define OLIVETTI_VENDOR_ID			0x0b3c
+@@ -1217,6 +1219,10 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1056, 0xff),	/* Telit FD980 */
+ 	  .driver_info = NCTRL(2) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1057, 0xff),	/* Telit FN980 */
++	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1058, 0xff),	/* Telit FN980 (PCIe) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1060, 0xff),	/* Telit LN920 (rmnet) */
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1061, 0xff),	/* Telit LN920 (MBIM) */
+@@ -1233,6 +1239,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff),	/* Telit FN990 (ECM) */
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff),	/* Telit FN990 (PCIe) */
++	  .driver_info = RSVD(0) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+@@ -1969,6 +1977,10 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(3)},
+ 	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_RMNET, 0xff),
+ 	  .driver_info = RSVD(0)},
++	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WA, 0xff),
++	  .driver_info = RSVD(3)},
++	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WB, 0xff),
++	  .driver_info = RSVD(3)},
+ 	{ USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100),
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD120),
+diff --git a/drivers/usb/serial/whiteheat.c b/drivers/usb/serial/whiteheat.c
+index ca3bd58f2025a..5576cfa50bc80 100644
+--- a/drivers/usb/serial/whiteheat.c
++++ b/drivers/usb/serial/whiteheat.c
+@@ -599,9 +599,8 @@ static int firm_send_command(struct usb_serial_port *port, __u8 command,
+ 		switch (command) {
+ 		case WHITEHEAT_GET_DTR_RTS:
+ 			info = usb_get_serial_port_data(port);
+-			memcpy(&info->mcr, command_info->result_buffer,
+-					sizeof(struct whiteheat_dr_info));
+-				break;
++			info->mcr = command_info->result_buffer[0];
++			break;
+ 		}
+ 	}
+ exit:
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 3bfa8005ae65d..dfda8f5487c09 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -928,6 +928,8 @@ static int ucsi_dr_swap(struct typec_port *port, enum typec_data_role role)
+ 	     role == TYPEC_HOST))
+ 		goto out_unlock;
+ 
++	reinit_completion(&con->complete);
++
+ 	command = UCSI_SET_UOR | UCSI_CONNECTOR_NUMBER(con->num);
+ 	command |= UCSI_SET_UOR_ROLE(role);
+ 	command |= UCSI_SET_UOR_ACCEPT_ROLE_SWAPS;
+@@ -935,14 +937,18 @@ static int ucsi_dr_swap(struct typec_port *port, enum typec_data_role role)
+ 	if (ret < 0)
+ 		goto out_unlock;
+ 
++	mutex_unlock(&con->lock);
++
+ 	if (!wait_for_completion_timeout(&con->complete,
+-					msecs_to_jiffies(UCSI_SWAP_TIMEOUT_MS)))
+-		ret = -ETIMEDOUT;
++					 msecs_to_jiffies(UCSI_SWAP_TIMEOUT_MS)))
++		return -ETIMEDOUT;
++
++	return 0;
+ 
+ out_unlock:
+ 	mutex_unlock(&con->lock);
+ 
+-	return ret < 0 ? ret : 0;
++	return ret;
+ }
+ 
+ static int ucsi_pr_swap(struct typec_port *port, enum typec_role role)
+@@ -964,6 +970,8 @@ static int ucsi_pr_swap(struct typec_port *port, enum typec_role role)
+ 	if (cur_role == role)
+ 		goto out_unlock;
+ 
++	reinit_completion(&con->complete);
++
+ 	command = UCSI_SET_PDR | UCSI_CONNECTOR_NUMBER(con->num);
+ 	command |= UCSI_SET_PDR_ROLE(role);
+ 	command |= UCSI_SET_PDR_ACCEPT_ROLE_SWAPS;
+@@ -971,11 +979,13 @@ static int ucsi_pr_swap(struct typec_port *port, enum typec_role role)
+ 	if (ret < 0)
+ 		goto out_unlock;
+ 
++	mutex_unlock(&con->lock);
++
+ 	if (!wait_for_completion_timeout(&con->complete,
+-				msecs_to_jiffies(UCSI_SWAP_TIMEOUT_MS))) {
+-		ret = -ETIMEDOUT;
+-		goto out_unlock;
+-	}
++					 msecs_to_jiffies(UCSI_SWAP_TIMEOUT_MS)))
++		return -ETIMEDOUT;
++
++	mutex_lock(&con->lock);
+ 
+ 	/* Something has gone wrong while swapping the role */
+ 	if (UCSI_CONSTAT_PWR_OPMODE(con->status.flags) !=
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index 90f48b71fd8f7..d9eec1b60e665 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -1649,8 +1649,9 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ 	const struct device_attribute *attr;
+ 	struct dlfb_data *dlfb;
+ 	struct fb_info *info;
+-	int retval = -ENOMEM;
++	int retval;
+ 	struct usb_device *usbdev = interface_to_usbdev(intf);
++	struct usb_endpoint_descriptor *out;
+ 
+ 	/* usb initialization */
+ 	dlfb = kzalloc(sizeof(*dlfb), GFP_KERNEL);
+@@ -1664,6 +1665,12 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ 	dlfb->udev = usb_get_dev(usbdev);
+ 	usb_set_intfdata(intf, dlfb);
+ 
++	retval = usb_find_common_endpoints(intf->cur_altsetting, NULL, &out, NULL, NULL);
++	if (retval) {
++		dev_err(&intf->dev, "Device should have at lease 1 bulk endpoint!\n");
++		goto error;
++	}
++
+ 	dev_dbg(&intf->dev, "console enable=%d\n", console);
+ 	dev_dbg(&intf->dev, "fb_defio enable=%d\n", fb_defio);
+ 	dev_dbg(&intf->dev, "shadow enable=%d\n", shadow);
+@@ -1673,6 +1680,7 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ 	if (!dlfb_parse_vendor_descriptor(dlfb, intf)) {
+ 		dev_err(&intf->dev,
+ 			"firmware not recognized, incompatible device?\n");
++		retval = -ENODEV;
+ 		goto error;
+ 	}
+ 
+@@ -1686,8 +1694,10 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ 
+ 	/* allocates framebuffer driver structure, not framebuffer memory */
+ 	info = framebuffer_alloc(0, &dlfb->udev->dev);
+-	if (!info)
++	if (!info) {
++		retval = -ENOMEM;
+ 		goto error;
++	}
+ 
+ 	dlfb->info = info;
+ 	info->par = dlfb;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 0e8f484031da9..c758ff41b6386 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1744,9 +1744,17 @@ smb2_copychunk_range(const unsigned int xid,
+ 	int chunks_copied = 0;
+ 	bool chunk_sizes_updated = false;
+ 	ssize_t bytes_written, total_bytes_written = 0;
++	struct inode *inode;
+ 
+ 	pcchunk = kmalloc(sizeof(struct copychunk_ioctl), GFP_KERNEL);
+ 
++	/*
++	 * We need to flush all unwritten data before we can send the
++	 * copychunk ioctl to the server.
++	 */
++	inode = d_inode(trgtfile->dentry);
++	filemap_write_and_wait(inode->i_mapping);
++
+ 	if (pcchunk == NULL)
+ 		return -ENOMEM;
+ 
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 5e6c034583176..3e26edeca8c73 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1176,18 +1176,23 @@ static void ext4_put_super(struct super_block *sb)
+ 	int aborted = 0;
+ 	int i, err;
+ 
+-	ext4_unregister_li_request(sb);
+-	ext4_quota_off_umount(sb);
+-
+-	destroy_workqueue(sbi->rsv_conversion_wq);
+-
+ 	/*
+ 	 * Unregister sysfs before destroying jbd2 journal.
+ 	 * Since we could still access attr_journal_task attribute via sysfs
+ 	 * path which could have sbi->s_journal->j_task as NULL
++	 * Unregister sysfs before flush sbi->s_error_work.
++	 * Since user may read /proc/fs/ext4/xx/mb_groups during umount, If
++	 * read metadata verify failed then will queue error work.
++	 * flush_stashed_error_work will call start_this_handle may trigger
++	 * BUG_ON.
+ 	 */
+ 	ext4_unregister_sysfs(sb);
+ 
++	ext4_unregister_li_request(sb);
++	ext4_quota_off_umount(sb);
++
++	destroy_workqueue(sbi->rsv_conversion_wq);
++
+ 	if (sbi->s_journal) {
+ 		aborted = is_journal_aborted(sbi->s_journal);
+ 		err = jbd2_journal_destroy(sbi->s_journal);
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index e60759d8bb5fb..08ab5d1e3a3e8 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -32,6 +32,17 @@ static inline int zonefs_zone_mgmt(struct inode *inode,
+ 
+ 	lockdep_assert_held(&zi->i_truncate_mutex);
+ 
++	/*
++	 * With ZNS drives, closing an explicitly open zone that has not been
++	 * written will change the zone state to "closed", that is, the zone
++	 * will remain active. Since this can then cause failure of explicit
++	 * open operation on other zones if the drive active zone resources
++	 * are exceeded, make sure that the zone does not remain active by
++	 * resetting it.
++	 */
++	if (op == REQ_OP_ZONE_CLOSE && !zi->i_wpoffset)
++		op = REQ_OP_ZONE_RESET;
++
+ 	ret = blkdev_zone_mgmt(inode->i_sb->s_bdev, op, zi->i_zsector,
+ 			       zi->i_zone_size >> SECTOR_SHIFT, GFP_NOFS);
+ 	if (ret) {
+@@ -1152,6 +1163,7 @@ static struct inode *zonefs_alloc_inode(struct super_block *sb)
+ 	mutex_init(&zi->i_truncate_mutex);
+ 	init_rwsem(&zi->i_mmap_sem);
+ 	zi->i_wr_refcnt = 0;
++	zi->i_flags = 0;
+ 
+ 	return &zi->i_vnode;
+ }
+@@ -1306,12 +1318,13 @@ static void zonefs_init_dir_inode(struct inode *parent, struct inode *inode,
+ 	inc_nlink(parent);
+ }
+ 
+-static void zonefs_init_file_inode(struct inode *inode, struct blk_zone *zone,
+-				   enum zonefs_ztype type)
++static int zonefs_init_file_inode(struct inode *inode, struct blk_zone *zone,
++				  enum zonefs_ztype type)
+ {
+ 	struct super_block *sb = inode->i_sb;
+ 	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+ 	struct zonefs_inode_info *zi = ZONEFS_I(inode);
++	int ret = 0;
+ 
+ 	inode->i_ino = zone->start >> sbi->s_zone_sectors_shift;
+ 	inode->i_mode = S_IFREG | sbi->s_perm;
+@@ -1336,6 +1349,22 @@ static void zonefs_init_file_inode(struct inode *inode, struct blk_zone *zone,
+ 	sb->s_maxbytes = max(zi->i_max_size, sb->s_maxbytes);
+ 	sbi->s_blocks += zi->i_max_size >> sb->s_blocksize_bits;
+ 	sbi->s_used_blocks += zi->i_wpoffset >> sb->s_blocksize_bits;
++
++	/*
++	 * For sequential zones, make sure that any open zone is closed first
++	 * to ensure that the initial number of open zones is 0, in sync with
++	 * the open zone accounting done when the mount option
++	 * ZONEFS_MNTOPT_EXPLICIT_OPEN is used.
++	 */
++	if (type == ZONEFS_ZTYPE_SEQ &&
++	    (zone->cond == BLK_ZONE_COND_IMP_OPEN ||
++	     zone->cond == BLK_ZONE_COND_EXP_OPEN)) {
++		mutex_lock(&zi->i_truncate_mutex);
++		ret = zonefs_zone_mgmt(inode, REQ_OP_ZONE_CLOSE);
++		mutex_unlock(&zi->i_truncate_mutex);
++	}
++
++	return ret;
+ }
+ 
+ static struct dentry *zonefs_create_inode(struct dentry *parent,
+@@ -1345,6 +1374,7 @@ static struct dentry *zonefs_create_inode(struct dentry *parent,
+ 	struct inode *dir = d_inode(parent);
+ 	struct dentry *dentry;
+ 	struct inode *inode;
++	int ret;
+ 
+ 	dentry = d_alloc_name(parent, name);
+ 	if (!dentry)
+@@ -1355,10 +1385,16 @@ static struct dentry *zonefs_create_inode(struct dentry *parent,
+ 		goto dput;
+ 
+ 	inode->i_ctime = inode->i_mtime = inode->i_atime = dir->i_ctime;
+-	if (zone)
+-		zonefs_init_file_inode(inode, zone, type);
+-	else
++	if (zone) {
++		ret = zonefs_init_file_inode(inode, zone, type);
++		if (ret) {
++			iput(inode);
++			goto dput;
++		}
++	} else {
+ 		zonefs_init_dir_inode(dir, inode, type);
++	}
++
+ 	d_add(dentry, inode);
+ 	dir->i_size++;
+ 
+diff --git a/include/linux/kernel.h b/include/linux/kernel.h
+index 2f05e9128201c..f5392d96d6886 100644
+--- a/include/linux/kernel.h
++++ b/include/linux/kernel.h
+@@ -635,7 +635,7 @@ static inline char *hex_byte_pack_upper(char *buf, u8 byte)
+ 	return buf;
+ }
+ 
+-extern int hex_to_bin(char ch);
++extern int hex_to_bin(unsigned char ch);
+ extern int __must_check hex2bin(u8 *dst, const char *src, size_t count);
+ extern char *bin2hex(char *dst, const void *src, size_t count);
+ 
+diff --git a/include/linux/mtd/mtd.h b/include/linux/mtd/mtd.h
+index 157357ec14419..fc41fecfe44f9 100644
+--- a/include/linux/mtd/mtd.h
++++ b/include/linux/mtd/mtd.h
+@@ -388,10 +388,8 @@ struct mtd_info {
+ 	/* List of partitions attached to this MTD device */
+ 	struct list_head partitions;
+ 
+-	union {
+-		struct mtd_part part;
+-		struct mtd_master master;
+-	};
++	struct mtd_part part;
++	struct mtd_master master;
+ };
+ 
+ static inline struct mtd_info *mtd_get_master(struct mtd_info *mtd)
+diff --git a/include/memory/renesas-rpc-if.h b/include/memory/renesas-rpc-if.h
+index aceb2c360d3fa..0e3dac0302196 100644
+--- a/include/memory/renesas-rpc-if.h
++++ b/include/memory/renesas-rpc-if.h
+@@ -65,6 +65,7 @@ struct	rpcif {
+ 	size_t size;
+ 	enum rpcif_data_dir dir;
+ 	u8 bus_size;
++	u8 xfer_size;
+ 	void *buffer;
+ 	u32 xferlen;
+ 	u32 smcr;
+diff --git a/include/net/ip6_tunnel.h b/include/net/ip6_tunnel.h
+index 028eaea1c8544..42d50856fcf24 100644
+--- a/include/net/ip6_tunnel.h
++++ b/include/net/ip6_tunnel.h
+@@ -57,7 +57,7 @@ struct ip6_tnl {
+ 
+ 	/* These fields used only by GRE */
+ 	__u32 i_seqno;	/* The last seen seqno	*/
+-	__u32 o_seqno;	/* The last output seqno */
++	atomic_t o_seqno;	/* The last output seqno */
+ 	int hlen;       /* tun_hlen + encap_hlen */
+ 	int tun_hlen;	/* Precalculated header length */
+ 	int encap_hlen; /* Encap header length (FOU,GUE) */
+diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h
+index 61620677b0347..c3e55a9ae5857 100644
+--- a/include/net/ip_tunnels.h
++++ b/include/net/ip_tunnels.h
+@@ -113,7 +113,7 @@ struct ip_tunnel {
+ 
+ 	/* These four fields used only by GRE */
+ 	u32		i_seqno;	/* The last seen seqno	*/
+-	u32		o_seqno;	/* The last output seqno */
++	atomic_t	o_seqno;	/* The last output seqno */
+ 	int		tun_hlen;	/* Precalculated header length */
+ 
+ 	/* These four fields used only by ERSPAN */
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 334b8d1b54429..2a28e09255735 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -460,6 +460,7 @@ int __cookie_v4_check(const struct iphdr *iph, const struct tcphdr *th,
+ 		      u32 cookie);
+ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb);
+ struct request_sock *cookie_tcp_reqsk_alloc(const struct request_sock_ops *ops,
++					    const struct tcp_request_sock_ops *af_ops,
+ 					    struct sock *sk, struct sk_buff *skb);
+ #ifdef CONFIG_SYN_COOKIES
+ 
+@@ -598,6 +599,7 @@ void tcp_synack_rtt_meas(struct sock *sk, struct request_sock *req);
+ void tcp_reset(struct sock *sk);
+ void tcp_skb_mark_lost_uncond_verify(struct tcp_sock *tp, struct sk_buff *skb);
+ void tcp_fin(struct sock *sk);
++void tcp_check_space(struct sock *sk);
+ 
+ /* tcp_timer.c */
+ void tcp_init_xmit_timers(struct sock *);
+@@ -1041,6 +1043,7 @@ struct rate_sample {
+ 	int  losses;		/* number of packets marked lost upon ACK */
+ 	u32  acked_sacked;	/* number of packets newly (S)ACKed upon ACK */
+ 	u32  prior_in_flight;	/* in flight before this ACK */
++	u32  last_end_seq;	/* end_seq of most recently ACKed packet */
+ 	bool is_app_limited;	/* is sample from packet with bubble in pipe? */
+ 	bool is_retrans;	/* is sample from retransmission? */
+ 	bool is_ack_delayed;	/* is this (likely) a delayed ACK? */
+@@ -1151,6 +1154,11 @@ void tcp_rate_gen(struct sock *sk, u32 delivered, u32 lost,
+ 		  bool is_sack_reneg, struct rate_sample *rs);
+ void tcp_rate_check_app_limited(struct sock *sk);
+ 
++static inline bool tcp_skb_sent_after(u64 t1, u64 t2, u32 seq1, u32 seq2)
++{
++	return t1 > t2 || (t1 == t2 && after(seq1, seq2));
++}
++
+ /* These functions determine how the current flow behaves in respect of SACK
+  * handling. SACK is negotiated with the peer, and therefore it can vary
+  * between different flows.
+diff --git a/lib/hexdump.c b/lib/hexdump.c
+index 9301578f98e8c..06833d404398d 100644
+--- a/lib/hexdump.c
++++ b/lib/hexdump.c
+@@ -22,15 +22,33 @@ EXPORT_SYMBOL(hex_asc_upper);
+  *
+  * hex_to_bin() converts one hex digit to its actual value or -1 in case of bad
+  * input.
++ *
++ * This function is used to load cryptographic keys, so it is coded in such a
++ * way that there are no conditions or memory accesses that depend on data.
++ *
++ * Explanation of the logic:
++ * (ch - '9' - 1) is negative if ch <= '9'
++ * ('0' - 1 - ch) is negative if ch >= '0'
++ * we "and" these two values, so the result is negative if ch is in the range
++ *	'0' ... '9'
++ * we are only interested in the sign, so we do a shift ">> 8"; note that right
++ *	shift of a negative value is implementation-defined, so we cast the
++ *	value to (unsigned) before the shift --- we have 0xffffff if ch is in
++ *	the range '0' ... '9', 0 otherwise
++ * we "and" this value with (ch - '0' + 1) --- we have a value 1 ... 10 if ch is
++ *	in the range '0' ... '9', 0 otherwise
++ * we add this value to -1 --- we have a value 0 ... 9 if ch is in the range '0'
++ *	... '9', -1 otherwise
++ * the next line is similar to the previous one, but we need to decode both
++ *	uppercase and lowercase letters, so we use (ch & 0xdf), which converts
++ *	lowercase to uppercase
+  */
+-int hex_to_bin(char ch)
++int hex_to_bin(unsigned char ch)
+ {
+-	if ((ch >= '0') && (ch <= '9'))
+-		return ch - '0';
+-	ch = tolower(ch);
+-	if ((ch >= 'a') && (ch <= 'f'))
+-		return ch - 'a' + 10;
+-	return -1;
++	unsigned char cu = ch & 0xdf;
++	return -1 +
++		((ch - '0' +  1) & (unsigned)((ch - '9' - 1) & ('0' - 1 - ch)) >> 8) +
++		((cu - 'A' + 11) & (unsigned)((cu - 'F' - 1) & ('A' - 1 - cu)) >> 8);
+ }
+ EXPORT_SYMBOL(hex_to_bin);
+ 
+@@ -45,10 +63,13 @@ EXPORT_SYMBOL(hex_to_bin);
+ int hex2bin(u8 *dst, const char *src, size_t count)
+ {
+ 	while (count--) {
+-		int hi = hex_to_bin(*src++);
+-		int lo = hex_to_bin(*src++);
++		int hi, lo;
+ 
+-		if ((hi < 0) || (lo < 0))
++		hi = hex_to_bin(*src++);
++		if (unlikely(hi < 0))
++			return -EINVAL;
++		lo = hex_to_bin(*src++);
++		if (unlikely(lo < 0))
+ 			return -EINVAL;
+ 
+ 		*dst++ = (hi << 4) | lo;
+diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
+index 0e3f8494628f6..622193846b6b5 100644
+--- a/mm/kasan/quarantine.c
++++ b/mm/kasan/quarantine.c
+@@ -299,6 +299,13 @@ static void per_cpu_remove_cache(void *arg)
+ 	struct qlist_head *q;
+ 
+ 	q = this_cpu_ptr(&cpu_quarantine);
++	/*
++	 * Ensure the ordering between the writing to q->offline and
++	 * per_cpu_remove_cache.  Prevent cpu_quarantine from being corrupted
++	 * by interrupt.
++	 */
++	if (READ_ONCE(q->offline))
++		return;
+ 	qlist_move_cache(q, &to_free, cache);
+ 	qlist_free_all(&to_free, cache);
+ }
+diff --git a/net/core/lwt_bpf.c b/net/core/lwt_bpf.c
+index 2f7940bcf7151..3fd207fe1284a 100644
+--- a/net/core/lwt_bpf.c
++++ b/net/core/lwt_bpf.c
+@@ -158,10 +158,8 @@ static int bpf_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 	return dst->lwtstate->orig_output(net, sk, skb);
+ }
+ 
+-static int xmit_check_hhlen(struct sk_buff *skb)
++static int xmit_check_hhlen(struct sk_buff *skb, int hh_len)
+ {
+-	int hh_len = skb_dst(skb)->dev->hard_header_len;
+-
+ 	if (skb_headroom(skb) < hh_len) {
+ 		int nhead = HH_DATA_ALIGN(hh_len - skb_headroom(skb));
+ 
+@@ -273,6 +271,7 @@ static int bpf_xmit(struct sk_buff *skb)
+ 
+ 	bpf = bpf_lwt_lwtunnel(dst->lwtstate);
+ 	if (bpf->xmit.prog) {
++		int hh_len = dst->dev->hard_header_len;
+ 		__be16 proto = skb->protocol;
+ 		int ret;
+ 
+@@ -290,7 +289,7 @@ static int bpf_xmit(struct sk_buff *skb)
+ 			/* If the header was expanded, headroom might be too
+ 			 * small for L2 header to come, expand as needed.
+ 			 */
+-			ret = xmit_check_hhlen(skb);
++			ret = xmit_check_hhlen(skb, hh_len);
+ 			if (unlikely(ret))
+ 				return ret;
+ 
+diff --git a/net/dsa/port.c b/net/dsa/port.c
+index 73569c9af3cc0..c9d552c4c358c 100644
+--- a/net/dsa/port.c
++++ b/net/dsa/port.c
+@@ -721,8 +721,10 @@ int dsa_port_link_register_of(struct dsa_port *dp)
+ 			if (ds->ops->phylink_mac_link_down)
+ 				ds->ops->phylink_mac_link_down(ds, port,
+ 					MLO_AN_FIXED, PHY_INTERFACE_MODE_NA);
++			of_node_put(phy_np);
+ 			return dsa_port_phylink_register(dp);
+ 		}
++		of_node_put(phy_np);
+ 		return 0;
+ 	}
+ 
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index e4504dd510c6d..2a80038575d27 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -454,14 +454,12 @@ static void __gre_xmit(struct sk_buff *skb, struct net_device *dev,
+ 		       __be16 proto)
+ {
+ 	struct ip_tunnel *tunnel = netdev_priv(dev);
+-
+-	if (tunnel->parms.o_flags & TUNNEL_SEQ)
+-		tunnel->o_seqno++;
++	__be16 flags = tunnel->parms.o_flags;
+ 
+ 	/* Push GRE header. */
+ 	gre_build_header(skb, tunnel->tun_hlen,
+-			 tunnel->parms.o_flags, proto, tunnel->parms.o_key,
+-			 htonl(tunnel->o_seqno));
++			 flags, proto, tunnel->parms.o_key,
++			 (flags & TUNNEL_SEQ) ? htonl(atomic_fetch_inc(&tunnel->o_seqno)) : 0);
+ 
+ 	ip_tunnel_xmit(skb, dev, tnl_params, tnl_params->protocol);
+ }
+@@ -499,7 +497,7 @@ static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev,
+ 		(TUNNEL_CSUM | TUNNEL_KEY | TUNNEL_SEQ);
+ 	gre_build_header(skb, tunnel_hlen, flags, proto,
+ 			 tunnel_id_to_key32(tun_info->key.tun_id),
+-			 (flags & TUNNEL_SEQ) ? htonl(tunnel->o_seqno++) : 0);
++			 (flags & TUNNEL_SEQ) ? htonl(atomic_fetch_inc(&tunnel->o_seqno)) : 0);
+ 
+ 	ip_md_tunnel_xmit(skb, dev, IPPROTO_GRE, tunnel_hlen);
+ 
+@@ -576,7 +574,7 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	}
+ 
+ 	gre_build_header(skb, 8, TUNNEL_SEQ,
+-			 proto, 0, htonl(tunnel->o_seqno++));
++			 proto, 0, htonl(atomic_fetch_inc(&tunnel->o_seqno)));
+ 
+ 	ip_md_tunnel_xmit(skb, dev, IPPROTO_GRE, tunnel_hlen);
+ 
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index 00dc3f943c80b..0b616094e7947 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -283,6 +283,7 @@ bool cookie_ecn_ok(const struct tcp_options_received *tcp_opt,
+ EXPORT_SYMBOL(cookie_ecn_ok);
+ 
+ struct request_sock *cookie_tcp_reqsk_alloc(const struct request_sock_ops *ops,
++					    const struct tcp_request_sock_ops *af_ops,
+ 					    struct sock *sk,
+ 					    struct sk_buff *skb)
+ {
+@@ -299,6 +300,10 @@ struct request_sock *cookie_tcp_reqsk_alloc(const struct request_sock_ops *ops,
+ 		return NULL;
+ 
+ 	treq = tcp_rsk(req);
++
++	/* treq->af_specific might be used to perform TCP_MD5 lookup */
++	treq->af_specific = af_ops;
++
+ 	treq->syn_tos = TCP_SKB_CB(skb)->ip_dsfield;
+ #if IS_ENABLED(CONFIG_MPTCP)
+ 	treq->is_mptcp = sk_is_mptcp(sk);
+@@ -366,7 +371,8 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb)
+ 		goto out;
+ 
+ 	ret = NULL;
+-	req = cookie_tcp_reqsk_alloc(&tcp_request_sock_ops, sk, skb);
++	req = cookie_tcp_reqsk_alloc(&tcp_request_sock_ops,
++				     &tcp_request_sock_ipv4_ops, sk, skb);
+ 	if (!req)
+ 		goto out;
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 12dd08af12b5e..2e267b2e33e5a 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -3814,7 +3814,8 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
+ 		tcp_process_tlp_ack(sk, ack, flag);
+ 
+ 	if (tcp_ack_is_dubious(sk, flag)) {
+-		if (!(flag & (FLAG_SND_UNA_ADVANCED | FLAG_NOT_DUP))) {
++		if (!(flag & (FLAG_SND_UNA_ADVANCED |
++			      FLAG_NOT_DUP | FLAG_DSACKING_ACK))) {
+ 			num_dupack = 1;
+ 			/* Consider if pure acks were aggregated in tcp_add_backlog() */
+ 			if (!(flag & FLAG_DATA))
+@@ -5370,7 +5371,17 @@ static void tcp_new_space(struct sock *sk)
+ 	sk->sk_write_space(sk);
+ }
+ 
+-static void tcp_check_space(struct sock *sk)
++/* Caller made space either from:
++ * 1) Freeing skbs in rtx queues (after tp->snd_una has advanced)
++ * 2) Sent skbs from output queue (and thus advancing tp->snd_nxt)
++ *
++ * We might be able to generate EPOLLOUT to the application if:
++ * 1) Space consumed in output/rtx queues is below sk->sk_sndbuf/2
++ * 2) notsent amount (tp->write_seq - tp->snd_nxt) became
++ *    small enough that tcp_stream_memory_free() decides it
++ *    is time to generate EPOLLOUT.
++ */
++void tcp_check_space(struct sock *sk)
+ {
+ 	/* pairs with tcp_poll() */
+ 	smp_mb();
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index f0f67b25c97ab..62f5ef9e6f938 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -538,7 +538,7 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
+ 	newtp->tsoffset = treq->ts_off;
+ #ifdef CONFIG_TCP_MD5SIG
+ 	newtp->md5sig_info = NULL;	/*XXX*/
+-	if (newtp->af_specific->md5_lookup(sk, newsk))
++	if (treq->af_specific->req_md5_lookup(sk, req_to_sk(req)))
+ 		newtp->tcp_header_len += TCPOLEN_MD5SIG_ALIGNED;
+ #endif
+ 	if (skb->len >= TCP_MSS_DEFAULT + newtp->tcp_header_len)
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index ce9987e6ff252..e37ad0b3645c9 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -82,6 +82,7 @@ static void tcp_event_new_data_sent(struct sock *sk, struct sk_buff *skb)
+ 
+ 	NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPORIGDATASENT,
+ 		      tcp_skb_pcount(skb));
++	tcp_check_space(sk);
+ }
+ 
+ /* SND.NXT, if window was not shrunk or the amount of shrunk was less than one
+diff --git a/net/ipv4/tcp_rate.c b/net/ipv4/tcp_rate.c
+index 0de6935659635..6ab197928abbc 100644
+--- a/net/ipv4/tcp_rate.c
++++ b/net/ipv4/tcp_rate.c
+@@ -73,26 +73,31 @@ void tcp_rate_skb_sent(struct sock *sk, struct sk_buff *skb)
+  *
+  * If an ACK (s)acks multiple skbs (e.g., stretched-acks), this function is
+  * called multiple times. We favor the information from the most recently
+- * sent skb, i.e., the skb with the highest prior_delivered count.
++ * sent skb, i.e., the skb with the most recently sent time and the highest
++ * sequence.
+  */
+ void tcp_rate_skb_delivered(struct sock *sk, struct sk_buff *skb,
+ 			    struct rate_sample *rs)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	struct tcp_skb_cb *scb = TCP_SKB_CB(skb);
++	u64 tx_tstamp;
+ 
+ 	if (!scb->tx.delivered_mstamp)
+ 		return;
+ 
++	tx_tstamp = tcp_skb_timestamp_us(skb);
+ 	if (!rs->prior_delivered ||
+-	    after(scb->tx.delivered, rs->prior_delivered)) {
++	    tcp_skb_sent_after(tx_tstamp, tp->first_tx_mstamp,
++			       scb->end_seq, rs->last_end_seq)) {
+ 		rs->prior_delivered  = scb->tx.delivered;
+ 		rs->prior_mstamp     = scb->tx.delivered_mstamp;
+ 		rs->is_app_limited   = scb->tx.is_app_limited;
+ 		rs->is_retrans	     = scb->sacked & TCPCB_RETRANS;
++		rs->last_end_seq     = scb->end_seq;
+ 
+ 		/* Record send time of most recently ACKed packet: */
+-		tp->first_tx_mstamp  = tcp_skb_timestamp_us(skb);
++		tp->first_tx_mstamp  = tx_tstamp;
+ 		/* Find the duration of the "send phase" of this window: */
+ 		rs->interval_us = tcp_stamp_us_delta(tp->first_tx_mstamp,
+ 						     scb->tx.first_tx_mstamp);
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 1f6c752f13b40..3f88ba6555ab8 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -724,6 +724,7 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ {
+ 	struct ip6_tnl *tunnel = netdev_priv(dev);
+ 	__be16 protocol;
++	__be16 flags;
+ 
+ 	if (dev->type == ARPHRD_ETHER)
+ 		IPCB(skb)->flags = 0;
+@@ -739,7 +740,6 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ 	if (tunnel->parms.collect_md) {
+ 		struct ip_tunnel_info *tun_info;
+ 		const struct ip_tunnel_key *key;
+-		__be16 flags;
+ 		int tun_hlen;
+ 
+ 		tun_info = skb_tunnel_info_txcheck(skb);
+@@ -766,19 +766,19 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ 		gre_build_header(skb, tun_hlen,
+ 				 flags, protocol,
+ 				 tunnel_id_to_key32(tun_info->key.tun_id),
+-				 (flags & TUNNEL_SEQ) ? htonl(tunnel->o_seqno++)
++				 (flags & TUNNEL_SEQ) ? htonl(atomic_fetch_inc(&tunnel->o_seqno))
+ 						      : 0);
+ 
+ 	} else {
+-		if (tunnel->parms.o_flags & TUNNEL_SEQ)
+-			tunnel->o_seqno++;
+-
+ 		if (skb_cow_head(skb, dev->needed_headroom ?: tunnel->hlen))
+ 			return -ENOMEM;
+ 
+-		gre_build_header(skb, tunnel->tun_hlen, tunnel->parms.o_flags,
++		flags = tunnel->parms.o_flags;
++
++		gre_build_header(skb, tunnel->tun_hlen, flags,
+ 				 protocol, tunnel->parms.o_key,
+-				 htonl(tunnel->o_seqno));
++				 (flags & TUNNEL_SEQ) ? htonl(atomic_fetch_inc(&tunnel->o_seqno))
++						      : 0);
+ 	}
+ 
+ 	return ip6_tnl_xmit(skb, dev, dsfield, fl6, encap_limit, pmtu,
+@@ -1056,7 +1056,7 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 	/* Push GRE header. */
+ 	proto = (t->parms.erspan_ver == 1) ? htons(ETH_P_ERSPAN)
+ 					   : htons(ETH_P_ERSPAN2);
+-	gre_build_header(skb, 8, TUNNEL_SEQ, proto, 0, htonl(t->o_seqno++));
++	gre_build_header(skb, 8, TUNNEL_SEQ, proto, 0, htonl(atomic_fetch_inc(&t->o_seqno)));
+ 
+ 	/* TooBig packet may have updated dst->dev's mtu */
+ 	if (!t->parms.collect_md && dst && dst_mtu(dst) > dst->dev->mtu)
+diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
+index 9b6cae1e49d91..5fa791cf39ca9 100644
+--- a/net/ipv6/syncookies.c
++++ b/net/ipv6/syncookies.c
+@@ -170,7 +170,8 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
+ 		goto out;
+ 
+ 	ret = NULL;
+-	req = cookie_tcp_reqsk_alloc(&tcp6_request_sock_ops, sk, skb);
++	req = cookie_tcp_reqsk_alloc(&tcp6_request_sock_ops,
++				     &tcp_request_sock_ipv6_ops, sk, skb);
+ 	if (!req)
+ 		goto out;
+ 
+diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
+index 2c467c422dc63..fb67f1ca2495b 100644
+--- a/net/netfilter/ipvs/ip_vs_conn.c
++++ b/net/netfilter/ipvs/ip_vs_conn.c
+@@ -1495,7 +1495,7 @@ int __init ip_vs_conn_init(void)
+ 	pr_info("Connection hash table configured "
+ 		"(size=%d, memory=%ldKbytes)\n",
+ 		ip_vs_conn_tab_size,
+-		(long)(ip_vs_conn_tab_size*sizeof(struct list_head))/1024);
++		(long)(ip_vs_conn_tab_size*sizeof(*ip_vs_conn_tab))/1024);
+ 	IP_VS_DBG(0, "Each connection entry needs %zd bytes at least\n",
+ 		  sizeof(struct ip_vs_conn));
+ 
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 217ab3644c25b..94a5446c5eae8 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -348,7 +348,11 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 				*ext = &rbe->ext;
+ 				return -EEXIST;
+ 			} else {
+-				p = &parent->rb_left;
++				overlap = false;
++				if (nft_rbtree_interval_end(rbe))
++					p = &parent->rb_left;
++				else
++					p = &parent->rb_right;
+ 			}
+ 		}
+ 
+diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c
+index a28aca5124cef..8a0125e966c83 100644
+--- a/net/netfilter/nft_socket.c
++++ b/net/netfilter/nft_socket.c
+@@ -33,6 +33,32 @@ static void nft_socket_wildcard(const struct nft_pktinfo *pkt,
+ 	}
+ }
+ 
++static struct sock *nft_socket_do_lookup(const struct nft_pktinfo *pkt)
++{
++	const struct net_device *indev = nft_in(pkt);
++	const struct sk_buff *skb = pkt->skb;
++	struct sock *sk = NULL;
++
++	if (!indev)
++		return NULL;
++
++	switch (nft_pf(pkt)) {
++	case NFPROTO_IPV4:
++		sk = nf_sk_lookup_slow_v4(nft_net(pkt), skb, indev);
++		break;
++#if IS_ENABLED(CONFIG_NF_TABLES_IPV6)
++	case NFPROTO_IPV6:
++		sk = nf_sk_lookup_slow_v6(nft_net(pkt), skb, indev);
++		break;
++#endif
++	default:
++		WARN_ON_ONCE(1);
++		break;
++	}
++
++	return sk;
++}
++
+ static void nft_socket_eval(const struct nft_expr *expr,
+ 			    struct nft_regs *regs,
+ 			    const struct nft_pktinfo *pkt)
+@@ -46,20 +72,7 @@ static void nft_socket_eval(const struct nft_expr *expr,
+ 		sk = NULL;
+ 
+ 	if (!sk)
+-		switch(nft_pf(pkt)) {
+-		case NFPROTO_IPV4:
+-			sk = nf_sk_lookup_slow_v4(nft_net(pkt), skb, nft_in(pkt));
+-			break;
+-#if IS_ENABLED(CONFIG_NF_TABLES_IPV6)
+-		case NFPROTO_IPV6:
+-			sk = nf_sk_lookup_slow_v6(nft_net(pkt), skb, nft_in(pkt));
+-			break;
+-#endif
+-		default:
+-			WARN_ON_ONCE(1);
+-			regs->verdict.code = NFT_BREAK;
+-			return;
+-		}
++		sk = nft_socket_do_lookup(pkt);
+ 
+ 	if (!sk) {
+ 		regs->verdict.code = NFT_BREAK;
+@@ -150,6 +163,16 @@ static int nft_socket_dump(struct sk_buff *skb,
+ 	return 0;
+ }
+ 
++static int nft_socket_validate(const struct nft_ctx *ctx,
++			       const struct nft_expr *expr,
++			       const struct nft_data **data)
++{
++	return nft_chain_validate_hooks(ctx->chain,
++					(1 << NF_INET_PRE_ROUTING) |
++					(1 << NF_INET_LOCAL_IN) |
++					(1 << NF_INET_LOCAL_OUT));
++}
++
+ static struct nft_expr_type nft_socket_type;
+ static const struct nft_expr_ops nft_socket_ops = {
+ 	.type		= &nft_socket_type,
+@@ -157,6 +180,7 @@ static const struct nft_expr_ops nft_socket_ops = {
+ 	.eval		= nft_socket_eval,
+ 	.init		= nft_socket_init,
+ 	.dump		= nft_socket_dump,
++	.validate	= nft_socket_validate,
+ };
+ 
+ static struct nft_expr_type nft_socket_type __read_mostly = {
+diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c
+index 0948f14ce221a..d4e5969771f0f 100644
+--- a/net/sctp/sm_sideeffect.c
++++ b/net/sctp/sm_sideeffect.c
+@@ -458,6 +458,10 @@ void sctp_generate_reconf_event(struct timer_list *t)
+ 		goto out_unlock;
+ 	}
+ 
++	/* This happens when the response arrives after the timer is triggered. */
++	if (!asoc->strreset_chunk)
++		goto out_unlock;
++
+ 	error = sctp_do_sm(net, SCTP_EVENT_T_TIMEOUT,
+ 			   SCTP_ST_TIMEOUT(SCTP_EVENT_TIMEOUT_RECONF),
+ 			   asoc->state, asoc->ep, asoc,
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 1b98f3241150b..35db3260e8d56 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1057,6 +1057,8 @@ static void smc_connect_work(struct work_struct *work)
+ 		smc->sk.sk_state = SMC_CLOSED;
+ 		if (rc == -EPIPE || rc == -EAGAIN)
+ 			smc->sk.sk_err = EPIPE;
++		else if (rc == -ECONNREFUSED)
++			smc->sk.sk_err = ECONNREFUSED;
+ 		else if (signal_pending(current))
+ 			smc->sk.sk_err = -sock_intr_errno(timeo);
+ 		sock_put(&smc->sk); /* passive closing */
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index f718c7346088f..1f56225a10e3c 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -483,11 +483,13 @@ handle_error:
+ 		copy = min_t(size_t, size, (pfrag->size - pfrag->offset));
+ 		copy = min_t(size_t, copy, (max_open_record_len - record->len));
+ 
+-		rc = tls_device_copy_data(page_address(pfrag->page) +
+-					  pfrag->offset, copy, msg_iter);
+-		if (rc)
+-			goto handle_error;
+-		tls_append_frag(record, pfrag, copy);
++		if (copy) {
++			rc = tls_device_copy_data(page_address(pfrag->page) +
++						  pfrag->offset, copy, msg_iter);
++			if (rc)
++				goto handle_error;
++			tls_append_frag(record, pfrag, copy);
++		}
+ 
+ 		size -= copy;
+ 		if (!size) {
+diff --git a/sound/soc/codecs/wm8731.c b/sound/soc/codecs/wm8731.c
+index 304bf725a6132..24a009d73f1e7 100644
+--- a/sound/soc/codecs/wm8731.c
++++ b/sound/soc/codecs/wm8731.c
+@@ -602,7 +602,7 @@ static int wm8731_hw_init(struct device *dev, struct wm8731_priv *wm8731)
+ 	ret = wm8731_reset(wm8731->regmap);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Failed to issue reset: %d\n", ret);
+-		goto err_regulator_enable;
++		goto err;
+ 	}
+ 
+ 	/* Clear POWEROFF, keep everything else disabled */
+@@ -619,10 +619,7 @@ static int wm8731_hw_init(struct device *dev, struct wm8731_priv *wm8731)
+ 
+ 	regcache_mark_dirty(wm8731->regmap);
+ 
+-err_regulator_enable:
+-	/* Regulators will be enabled by bias management */
+-	regulator_bulk_disable(ARRAY_SIZE(wm8731->supplies), wm8731->supplies);
+-
++err:
+ 	return ret;
+ }
+ 
+@@ -766,21 +763,27 @@ static int wm8731_i2c_probe(struct i2c_client *i2c,
+ 		ret = PTR_ERR(wm8731->regmap);
+ 		dev_err(&i2c->dev, "Failed to allocate register map: %d\n",
+ 			ret);
+-		return ret;
++		goto err_regulator_enable;
+ 	}
+ 
+ 	ret = wm8731_hw_init(&i2c->dev, wm8731);
+ 	if (ret != 0)
+-		return ret;
++		goto err_regulator_enable;
+ 
+ 	ret = devm_snd_soc_register_component(&i2c->dev,
+ 			&soc_component_dev_wm8731, &wm8731_dai, 1);
+ 	if (ret != 0) {
+ 		dev_err(&i2c->dev, "Failed to register CODEC: %d\n", ret);
+-		return ret;
++		goto err_regulator_enable;
+ 	}
+ 
+ 	return 0;
++
++err_regulator_enable:
++	/* Regulators will be enabled by bias management */
++	regulator_bulk_disable(ARRAY_SIZE(wm8731->supplies), wm8731->supplies);
++
++	return ret;
+ }
+ 
+ static int wm8731_i2c_remove(struct i2c_client *client)
+diff --git a/sound/soc/intel/common/soc-acpi-intel-tgl-match.c b/sound/soc/intel/common/soc-acpi-intel-tgl-match.c
+index 9f243e60b95c2..15d862cdcd2fe 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-tgl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-tgl-match.c
+@@ -126,13 +126,13 @@ static const struct snd_soc_acpi_adr_device mx8373_1_adr[] = {
+ 	{
+ 		.adr = 0x000123019F837300,
+ 		.num_endpoints = 1,
+-		.endpoints = &spk_l_endpoint,
++		.endpoints = &spk_r_endpoint,
+ 		.name_prefix = "Right"
+ 	},
+ 	{
+ 		.adr = 0x000127019F837300,
+ 		.num_endpoints = 1,
+-		.endpoints = &spk_r_endpoint,
++		.endpoints = &spk_l_endpoint,
+ 		.name_prefix = "Left"
+ 	}
+ };
+diff --git a/tools/perf/arch/arm64/util/Build b/tools/perf/arch/arm64/util/Build
+index b53294d74b011..eddaf9bf5729c 100644
+--- a/tools/perf/arch/arm64/util/Build
++++ b/tools/perf/arch/arm64/util/Build
+@@ -1,5 +1,4 @@
+ perf-y += header.o
+-perf-y += machine.o
+ perf-y += perf_regs.o
+ perf-y += tsc.o
+ perf-$(CONFIG_DWARF)     += dwarf-regs.o
+diff --git a/tools/perf/arch/arm64/util/machine.c b/tools/perf/arch/arm64/util/machine.c
+deleted file mode 100644
+index d41b27e781d38..0000000000000
+--- a/tools/perf/arch/arm64/util/machine.c
++++ /dev/null
+@@ -1,27 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-
+-#include <stdio.h>
+-#include <string.h>
+-#include "debug.h"
+-#include "symbol.h"
+-
+-/* On arm64, kernel text segment start at high memory address,
+- * for example 0xffff 0000 8xxx xxxx. Modules start at a low memory
+- * address, like 0xffff 0000 00ax xxxx. When only samll amount of
+- * memory is used by modules, gap between end of module's text segment
+- * and start of kernel text segment may be reach 2G.
+- * Therefore do not fill this gap and do not assign it to the kernel dso map.
+- */
+-
+-#define SYMBOL_LIMIT (1 << 12) /* 4K */
+-
+-void arch__symbols__fixup_end(struct symbol *p, struct symbol *c)
+-{
+-	if ((strchr(p->name, '[') && strchr(c->name, '[') == NULL) ||
+-			(strchr(p->name, '[') == NULL && strchr(c->name, '[')))
+-		/* Limit range of last symbol in module and kernel */
+-		p->end += SYMBOL_LIMIT;
+-	else
+-		p->end = c->start;
+-	pr_debug4("%s sym:%s end:%#lx\n", __func__, p->name, p->end);
+-}
+diff --git a/tools/perf/arch/s390/util/machine.c b/tools/perf/arch/s390/util/machine.c
+index 724efb2d842d7..7219ecdb84234 100644
+--- a/tools/perf/arch/s390/util/machine.c
++++ b/tools/perf/arch/s390/util/machine.c
+@@ -34,19 +34,3 @@ int arch__fix_module_text_start(u64 *start, u64 *size, const char *name)
+ 
+ 	return 0;
+ }
+-
+-/* On s390 kernel text segment start is located at very low memory addresses,
+- * for example 0x10000. Modules are located at very high memory addresses,
+- * for example 0x3ff xxxx xxxx. The gap between end of kernel text segment
+- * and beginning of first module's text segment is very big.
+- * Therefore do not fill this gap and do not assign it to the kernel dso map.
+- */
+-void arch__symbols__fixup_end(struct symbol *p, struct symbol *c)
+-{
+-	if (strchr(p->name, '[') == NULL && strchr(c->name, '['))
+-		/* Last kernel symbol mapped to end of page */
+-		p->end = roundup(p->end, page_size);
+-	else
+-		p->end = c->start;
+-	pr_debug4("%s sym:%s end:%#lx\n", __func__, p->name, p->end);
+-}
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 7356eb398b32a..94809aed8b447 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -1245,7 +1245,7 @@ int dso__load_sym(struct dso *dso, struct map *map, struct symsrc *syms_ss,
+ 	 * For misannotated, zeroed, ASM function sizes.
+ 	 */
+ 	if (nr > 0) {
+-		symbols__fixup_end(&dso->symbols);
++		symbols__fixup_end(&dso->symbols, false);
+ 		symbols__fixup_duplicate(&dso->symbols);
+ 		if (kmap) {
+ 			/*
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 3609da7cce0ab..33954835c8231 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -101,11 +101,6 @@ static int prefix_underscores_count(const char *str)
+ 	return tail - str;
+ }
+ 
+-void __weak arch__symbols__fixup_end(struct symbol *p, struct symbol *c)
+-{
+-	p->end = c->start;
+-}
+-
+ const char * __weak arch__normalize_symbol_name(const char *name)
+ {
+ 	return name;
+@@ -217,7 +212,8 @@ again:
+ 	}
+ }
+ 
+-void symbols__fixup_end(struct rb_root_cached *symbols)
++/* Update zero-sized symbols using the address of the next symbol */
++void symbols__fixup_end(struct rb_root_cached *symbols, bool is_kallsyms)
+ {
+ 	struct rb_node *nd, *prevnd = rb_first_cached(symbols);
+ 	struct symbol *curr, *prev;
+@@ -231,8 +227,29 @@ void symbols__fixup_end(struct rb_root_cached *symbols)
+ 		prev = curr;
+ 		curr = rb_entry(nd, struct symbol, rb_node);
+ 
+-		if (prev->end == prev->start || prev->end != curr->start)
+-			arch__symbols__fixup_end(prev, curr);
++		/*
++		 * On some architecture kernel text segment start is located at
++		 * some low memory address, while modules are located at high
++		 * memory addresses (or vice versa).  The gap between end of
++		 * kernel text segment and beginning of first module's text
++		 * segment is very big.  Therefore do not fill this gap and do
++		 * not assign it to the kernel dso map (kallsyms).
++		 *
++		 * In kallsyms, it determines module symbols using '[' character
++		 * like in:
++		 *   ffffffffc1937000 T hdmi_driver_init  [snd_hda_codec_hdmi]
++		 */
++		if (prev->end == prev->start) {
++			/* Last kernel/module symbol mapped to end of page */
++			if (is_kallsyms && (!strchr(prev->name, '[') !=
++					    !strchr(curr->name, '[')))
++				prev->end = roundup(prev->end + 4096, 4096);
++			else
++				prev->end = curr->start;
++
++			pr_debug4("%s sym:%s end:%#" PRIx64 "\n",
++				  __func__, prev->name, prev->end);
++		}
+ 	}
+ 
+ 	/* Last entry */
+@@ -1456,7 +1473,7 @@ int __dso__load_kallsyms(struct dso *dso, const char *filename,
+ 	if (kallsyms__delta(kmap, filename, &delta))
+ 		return -1;
+ 
+-	symbols__fixup_end(&dso->symbols);
++	symbols__fixup_end(&dso->symbols, true);
+ 	symbols__fixup_duplicate(&dso->symbols);
+ 
+ 	if (dso->kernel == DSO_SPACE__KERNEL_GUEST)
+@@ -1651,7 +1668,7 @@ int dso__load_bfd_symbols(struct dso *dso, const char *debugfile)
+ #undef bfd_asymbol_section
+ #endif
+ 
+-	symbols__fixup_end(&dso->symbols);
++	symbols__fixup_end(&dso->symbols, false);
+ 	symbols__fixup_duplicate(&dso->symbols);
+ 	dso->adjust_symbols = 1;
+ 
+diff --git a/tools/perf/util/symbol.h b/tools/perf/util/symbol.h
+index 954d6a049ee23..28721d761d91e 100644
+--- a/tools/perf/util/symbol.h
++++ b/tools/perf/util/symbol.h
+@@ -192,7 +192,7 @@ void __symbols__insert(struct rb_root_cached *symbols, struct symbol *sym,
+ 		       bool kernel);
+ void symbols__insert(struct rb_root_cached *symbols, struct symbol *sym);
+ void symbols__fixup_duplicate(struct rb_root_cached *symbols);
+-void symbols__fixup_end(struct rb_root_cached *symbols);
++void symbols__fixup_end(struct rb_root_cached *symbols, bool is_kallsyms);
+ void maps__fixup_end(struct maps *maps);
+ 
+ typedef int (*mapfn_t)(u64 start, u64 len, u64 pgoff, void *data);
+@@ -230,7 +230,6 @@ const char *arch__normalize_symbol_name(const char *name);
+ #define SYMBOL_A 0
+ #define SYMBOL_B 1
+ 
+-void arch__symbols__fixup_end(struct symbol *p, struct symbol *c);
+ int arch__compare_symbol_names(const char *namea, const char *nameb);
+ int arch__compare_symbol_names_n(const char *namea, const char *nameb,
+ 				 unsigned int n);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-05-12 11:29 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-05-12 11:29 UTC (permalink / raw
  To: gentoo-commits

commit:     652a47cd4bcad3632bff3a6ca6e7275e30f5207d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 12 11:29:10 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 12 11:29:10 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=652a47cd

Linux patch 5.10.115

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1114_linux-5.10.115.patch | 2390 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2394 insertions(+)

diff --git a/0000_README b/0000_README
index 439eef26..46bc953b 100644
--- a/0000_README
+++ b/0000_README
@@ -499,6 +499,10 @@ Patch:  1113_linux-5.10.114.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.114
 
+Patch:  1114_linux-5.10.115.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.115
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1114_linux-5.10.115.patch b/1114_linux-5.10.115.patch
new file mode 100644
index 00000000..eea14ce9
--- /dev/null
+++ b/1114_linux-5.10.115.patch
@@ -0,0 +1,2390 @@
+diff --git a/Makefile b/Makefile
+index b76e6d0aa85de..86d3e137d7f2d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 114
++SUBLEVEL = 115
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/mips/include/asm/timex.h b/arch/mips/include/asm/timex.h
+index b05bb70a2e46f..8026baf46e729 100644
+--- a/arch/mips/include/asm/timex.h
++++ b/arch/mips/include/asm/timex.h
+@@ -40,9 +40,9 @@
+ typedef unsigned int cycles_t;
+ 
+ /*
+- * On R4000/R4400 before version 5.0 an erratum exists such that if the
+- * cycle counter is read in the exact moment that it is matching the
+- * compare register, no interrupt will be generated.
++ * On R4000/R4400 an erratum exists such that if the cycle counter is
++ * read in the exact moment that it is matching the compare register,
++ * no interrupt will be generated.
+  *
+  * There is a suggested workaround and also the erratum can't strike if
+  * the compare interrupt isn't being used as the clock source device.
+@@ -63,7 +63,7 @@ static inline int can_use_mips_counter(unsigned int prid)
+ 	if (!__builtin_constant_p(cpu_has_counter))
+ 		asm volatile("" : "=m" (cpu_data[0].options));
+ 	if (likely(cpu_has_counter &&
+-		   prid >= (PRID_IMP_R4000 | PRID_REV_ENCODE_44(5, 0))))
++		   prid > (PRID_IMP_R4000 | PRID_REV_ENCODE_44(15, 15))))
+ 		return 1;
+ 	else
+ 		return 0;
+diff --git a/arch/mips/kernel/time.c b/arch/mips/kernel/time.c
+index caa01457dce60..ed339d7979f3f 100644
+--- a/arch/mips/kernel/time.c
++++ b/arch/mips/kernel/time.c
+@@ -141,15 +141,10 @@ static __init int cpu_has_mfc0_count_bug(void)
+ 	case CPU_R4400MC:
+ 		/*
+ 		 * The published errata for the R4400 up to 3.0 say the CPU
+-		 * has the mfc0 from count bug.
++		 * has the mfc0 from count bug.  This seems the last version
++		 * produced.
+ 		 */
+-		if ((current_cpu_data.processor_id & 0xff) <= 0x30)
+-			return 1;
+-
+-		/*
+-		 * we assume newer revisions are ok
+-		 */
+-		return 0;
++		return 1;
+ 	}
+ 
+ 	return 0;
+diff --git a/arch/parisc/kernel/processor.c b/arch/parisc/kernel/processor.c
+index 7f2d0c0ecc804..176ef00bdd15e 100644
+--- a/arch/parisc/kernel/processor.c
++++ b/arch/parisc/kernel/processor.c
+@@ -419,8 +419,7 @@ show_cpuinfo (struct seq_file *m, void *v)
+ 		}
+ 		seq_printf(m, " (0x%02lx)\n", boot_cpu_data.pdc.capabilities);
+ 
+-		seq_printf(m, "model\t\t: %s\n"
+-				"model name\t: %s\n",
++		seq_printf(m, "model\t\t: %s - %s\n",
+ 				 boot_cpu_data.pdc.sys_model_name,
+ 				 cpuinfo->dev ?
+ 				 cpuinfo->dev->name : "Unknown");
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index 18e952fed021b..6c3d38b5a8add 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -66,6 +66,7 @@ static DEFINE_PER_CPU_DECRYPTED(struct kvm_vcpu_pv_apf_data, apf_reason) __align
+ DEFINE_PER_CPU_DECRYPTED(struct kvm_steal_time, steal_time) __aligned(64) __visible;
+ static int has_steal_clock = 0;
+ 
++static int has_guest_poll = 0;
+ /*
+  * No need for any "IO delay" on KVM
+  */
+@@ -624,14 +625,26 @@ static int kvm_cpu_down_prepare(unsigned int cpu)
+ 
+ static int kvm_suspend(void)
+ {
++	u64 val = 0;
++
+ 	kvm_guest_cpu_offline(false);
+ 
++#ifdef CONFIG_ARCH_CPUIDLE_HALTPOLL
++	if (kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL))
++		rdmsrl(MSR_KVM_POLL_CONTROL, val);
++	has_guest_poll = !(val & 1);
++#endif
+ 	return 0;
+ }
+ 
+ static void kvm_resume(void)
+ {
+ 	kvm_cpu_online(raw_smp_processor_id());
++
++#ifdef CONFIG_ARCH_CPUIDLE_HALTPOLL
++	if (kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL) && has_guest_poll)
++		wrmsrl(MSR_KVM_POLL_CONTROL, 0);
++#endif
+ }
+ 
+ static struct syscore_ops kvm_syscore_ops = {
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 41b0dc37720e0..6e1ea5e85e598 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -668,6 +668,11 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 		union cpuid10_eax eax;
+ 		union cpuid10_edx edx;
+ 
++		if (!static_cpu_has(X86_FEATURE_ARCH_PERFMON)) {
++			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
++			break;
++		}
++
+ 		perf_get_x86_pmu_capability(&cap);
+ 
+ 		/*
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index de11149e28e09..a3ef793fce5f1 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -113,7 +113,8 @@ static inline u32 kvm_x2apic_id(struct kvm_lapic *apic)
+ 
+ static bool kvm_can_post_timer_interrupt(struct kvm_vcpu *vcpu)
+ {
+-	return pi_inject_timer && kvm_vcpu_apicv_active(vcpu);
++	return pi_inject_timer && kvm_vcpu_apicv_active(vcpu) &&
++		(kvm_mwait_in_guest(vcpu->kvm) || kvm_hlt_in_guest(vcpu->kvm));
+ }
+ 
+ bool kvm_can_use_hv_timer(struct kvm_vcpu *vcpu)
+@@ -2106,10 +2107,9 @@ int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
+ 		break;
+ 
+ 	case APIC_SELF_IPI:
+-		if (apic_x2apic_mode(apic)) {
+-			kvm_lapic_reg_write(apic, APIC_ICR,
+-					    APIC_DEST_SELF | (val & APIC_VECTOR_MASK));
+-		} else
++		if (apic_x2apic_mode(apic))
++			kvm_apic_send_ipi(apic, APIC_DEST_SELF | (val & APIC_VECTOR_MASK), 0);
++		else
+ 			ret = 1;
+ 		break;
+ 	default:
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 99ea1ec12ffe0..70ef5b542681c 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -3140,6 +3140,8 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
+ 		return;
+ 
+ 	sp = to_shadow_page(*root_hpa & PT64_BASE_ADDR_MASK);
++	if (WARN_ON(!sp))
++		return;
+ 
+ 	if (kvm_mmu_put_root(kvm, sp)) {
+ 		if (sp->tdp_mmu_page)
+diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
+index 0e9c2322d3988..49e5be735f147 100644
+--- a/arch/x86/kvm/svm/pmu.c
++++ b/arch/x86/kvm/svm/pmu.c
+@@ -44,6 +44,22 @@ static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
+ 	[7] = { 0xd1, 0x00, PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
+ };
+ 
++/* duplicated from amd_f17h_perfmon_event_map. */
++static struct kvm_event_hw_type_mapping amd_f17h_event_mapping[] = {
++	[0] = { 0x76, 0x00, PERF_COUNT_HW_CPU_CYCLES },
++	[1] = { 0xc0, 0x00, PERF_COUNT_HW_INSTRUCTIONS },
++	[2] = { 0x60, 0xff, PERF_COUNT_HW_CACHE_REFERENCES },
++	[3] = { 0x64, 0x09, PERF_COUNT_HW_CACHE_MISSES },
++	[4] = { 0xc2, 0x00, PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
++	[5] = { 0xc3, 0x00, PERF_COUNT_HW_BRANCH_MISSES },
++	[6] = { 0x87, 0x02, PERF_COUNT_HW_STALLED_CYCLES_FRONTEND },
++	[7] = { 0x87, 0x01, PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
++};
++
++/* amd_pmc_perf_hw_id depends on these being the same size */
++static_assert(ARRAY_SIZE(amd_event_mapping) ==
++	     ARRAY_SIZE(amd_f17h_event_mapping));
++
+ static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
+ {
+ 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
+@@ -128,19 +144,25 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
+ 
+ static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
+ {
++	struct kvm_event_hw_type_mapping *event_mapping;
+ 	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
+ 	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
+ 	int i;
+ 
++	if (guest_cpuid_family(pmc->vcpu) >= 0x17)
++		event_mapping = amd_f17h_event_mapping;
++	else
++		event_mapping = amd_event_mapping;
++
+ 	for (i = 0; i < ARRAY_SIZE(amd_event_mapping); i++)
+-		if (amd_event_mapping[i].eventsel == event_select
+-		    && amd_event_mapping[i].unit_mask == unit_mask)
++		if (event_mapping[i].eventsel == event_select
++		    && event_mapping[i].unit_mask == unit_mask)
+ 			break;
+ 
+ 	if (i == ARRAY_SIZE(amd_event_mapping))
+ 		return PERF_COUNT_HW_MAX;
+ 
+-	return amd_event_mapping[i].event_type;
++	return event_mapping[i].event_type;
+ }
+ 
+ /* return PERF_COUNT_HW_MAX as AMD doesn't have fixed events */
+diff --git a/block/blk-map.c b/block/blk-map.c
+index 21630dccac628..ede73f4f70147 100644
+--- a/block/blk-map.c
++++ b/block/blk-map.c
+@@ -488,7 +488,7 @@ static struct bio *bio_copy_kern(struct request_queue *q, void *data,
+ 		if (bytes > len)
+ 			bytes = len;
+ 
+-		page = alloc_page(q->bounce_gfp | gfp_mask);
++		page = alloc_page(q->bounce_gfp | __GFP_ZERO | gfp_mask);
+ 		if (!page)
+ 			goto cleanup;
+ 
+diff --git a/drivers/firewire/core-card.c b/drivers/firewire/core-card.c
+index 54be88167c60b..f3b3953cac834 100644
+--- a/drivers/firewire/core-card.c
++++ b/drivers/firewire/core-card.c
+@@ -668,6 +668,7 @@ EXPORT_SYMBOL_GPL(fw_card_release);
+ void fw_core_remove_card(struct fw_card *card)
+ {
+ 	struct fw_card_driver dummy_driver = dummy_driver_template;
++	unsigned long flags;
+ 
+ 	card->driver->update_phy_reg(card, 4,
+ 				     PHY_LINK_ACTIVE | PHY_CONTENDER, 0);
+@@ -682,7 +683,9 @@ void fw_core_remove_card(struct fw_card *card)
+ 	dummy_driver.stop_iso		= card->driver->stop_iso;
+ 	card->driver = &dummy_driver;
+ 
++	spin_lock_irqsave(&card->lock, flags);
+ 	fw_destroy_nodes(card);
++	spin_unlock_irqrestore(&card->lock, flags);
+ 
+ 	/* Wait for all users, especially device workqueue jobs, to finish. */
+ 	fw_card_put(card);
+diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
+index fb6c651214f32..b0cc3f1e9bb00 100644
+--- a/drivers/firewire/core-cdev.c
++++ b/drivers/firewire/core-cdev.c
+@@ -1480,6 +1480,7 @@ static void outbound_phy_packet_callback(struct fw_packet *packet,
+ {
+ 	struct outbound_phy_packet_event *e =
+ 		container_of(packet, struct outbound_phy_packet_event, p);
++	struct client *e_client;
+ 
+ 	switch (status) {
+ 	/* expected: */
+@@ -1496,9 +1497,10 @@ static void outbound_phy_packet_callback(struct fw_packet *packet,
+ 	}
+ 	e->phy_packet.data[0] = packet->timestamp;
+ 
++	e_client = e->client;
+ 	queue_event(e->client, &e->event, &e->phy_packet,
+ 		    sizeof(e->phy_packet) + e->phy_packet.length, NULL, 0);
+-	client_put(e->client);
++	client_put(e_client);
+ }
+ 
+ static int ioctl_send_phy_packet(struct client *client, union ioctl_arg *arg)
+diff --git a/drivers/firewire/core-topology.c b/drivers/firewire/core-topology.c
+index ec68ed27b0a5f..4cdbfef79f2e1 100644
+--- a/drivers/firewire/core-topology.c
++++ b/drivers/firewire/core-topology.c
+@@ -374,16 +374,13 @@ static void report_found_node(struct fw_card *card,
+ 	card->bm_retries = 0;
+ }
+ 
++/* Must be called with card->lock held */
+ void fw_destroy_nodes(struct fw_card *card)
+ {
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&card->lock, flags);
+ 	card->color++;
+ 	if (card->local_node != NULL)
+ 		for_each_fw_node(card, card->local_node, report_lost_node);
+ 	card->local_node = NULL;
+-	spin_unlock_irqrestore(&card->lock, flags);
+ }
+ 
+ static void move_tree(struct fw_node *node0, struct fw_node *node1, int port)
+@@ -509,6 +506,8 @@ void fw_core_handle_bus_reset(struct fw_card *card, int node_id, int generation,
+ 	struct fw_node *local_node;
+ 	unsigned long flags;
+ 
++	spin_lock_irqsave(&card->lock, flags);
++
+ 	/*
+ 	 * If the selfID buffer is not the immediate successor of the
+ 	 * previously processed one, we cannot reliably compare the
+@@ -520,8 +519,6 @@ void fw_core_handle_bus_reset(struct fw_card *card, int node_id, int generation,
+ 		card->bm_retries = 0;
+ 	}
+ 
+-	spin_lock_irqsave(&card->lock, flags);
+-
+ 	card->broadcast_channel_allocated = card->broadcast_channel_auto_allocated;
+ 	card->node_id = node_id;
+ 	/*
+diff --git a/drivers/firewire/core-transaction.c b/drivers/firewire/core-transaction.c
+index ac487c96bb717..6c20815cc8d16 100644
+--- a/drivers/firewire/core-transaction.c
++++ b/drivers/firewire/core-transaction.c
+@@ -73,24 +73,25 @@ static int try_cancel_split_timeout(struct fw_transaction *t)
+ static int close_transaction(struct fw_transaction *transaction,
+ 			     struct fw_card *card, int rcode)
+ {
+-	struct fw_transaction *t;
++	struct fw_transaction *t = NULL, *iter;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&card->lock, flags);
+-	list_for_each_entry(t, &card->transaction_list, link) {
+-		if (t == transaction) {
+-			if (!try_cancel_split_timeout(t)) {
++	list_for_each_entry(iter, &card->transaction_list, link) {
++		if (iter == transaction) {
++			if (!try_cancel_split_timeout(iter)) {
+ 				spin_unlock_irqrestore(&card->lock, flags);
+ 				goto timed_out;
+ 			}
+-			list_del_init(&t->link);
+-			card->tlabel_mask &= ~(1ULL << t->tlabel);
++			list_del_init(&iter->link);
++			card->tlabel_mask &= ~(1ULL << iter->tlabel);
++			t = iter;
+ 			break;
+ 		}
+ 	}
+ 	spin_unlock_irqrestore(&card->lock, flags);
+ 
+-	if (&t->link != &card->transaction_list) {
++	if (t) {
+ 		t->callback(card, rcode, NULL, 0, t->callback_data);
+ 		return 0;
+ 	}
+@@ -935,7 +936,7 @@ EXPORT_SYMBOL(fw_core_handle_request);
+ 
+ void fw_core_handle_response(struct fw_card *card, struct fw_packet *p)
+ {
+-	struct fw_transaction *t;
++	struct fw_transaction *t = NULL, *iter;
+ 	unsigned long flags;
+ 	u32 *data;
+ 	size_t data_length;
+@@ -947,20 +948,21 @@ void fw_core_handle_response(struct fw_card *card, struct fw_packet *p)
+ 	rcode	= HEADER_GET_RCODE(p->header[1]);
+ 
+ 	spin_lock_irqsave(&card->lock, flags);
+-	list_for_each_entry(t, &card->transaction_list, link) {
+-		if (t->node_id == source && t->tlabel == tlabel) {
+-			if (!try_cancel_split_timeout(t)) {
++	list_for_each_entry(iter, &card->transaction_list, link) {
++		if (iter->node_id == source && iter->tlabel == tlabel) {
++			if (!try_cancel_split_timeout(iter)) {
+ 				spin_unlock_irqrestore(&card->lock, flags);
+ 				goto timed_out;
+ 			}
+-			list_del_init(&t->link);
+-			card->tlabel_mask &= ~(1ULL << t->tlabel);
++			list_del_init(&iter->link);
++			card->tlabel_mask &= ~(1ULL << iter->tlabel);
++			t = iter;
+ 			break;
+ 		}
+ 	}
+ 	spin_unlock_irqrestore(&card->lock, flags);
+ 
+-	if (&t->link == &card->transaction_list) {
++	if (!t) {
+  timed_out:
+ 		fw_notice(card, "unsolicited response (source %x, tlabel %x)\n",
+ 			  source, tlabel);
+diff --git a/drivers/firewire/sbp2.c b/drivers/firewire/sbp2.c
+index 4d5054211550b..2ceed9287435f 100644
+--- a/drivers/firewire/sbp2.c
++++ b/drivers/firewire/sbp2.c
+@@ -408,7 +408,7 @@ static void sbp2_status_write(struct fw_card *card, struct fw_request *request,
+ 			      void *payload, size_t length, void *callback_data)
+ {
+ 	struct sbp2_logical_unit *lu = callback_data;
+-	struct sbp2_orb *orb;
++	struct sbp2_orb *orb = NULL, *iter;
+ 	struct sbp2_status status;
+ 	unsigned long flags;
+ 
+@@ -433,17 +433,18 @@ static void sbp2_status_write(struct fw_card *card, struct fw_request *request,
+ 
+ 	/* Lookup the orb corresponding to this status write. */
+ 	spin_lock_irqsave(&lu->tgt->lock, flags);
+-	list_for_each_entry(orb, &lu->orb_list, link) {
++	list_for_each_entry(iter, &lu->orb_list, link) {
+ 		if (STATUS_GET_ORB_HIGH(status) == 0 &&
+-		    STATUS_GET_ORB_LOW(status) == orb->request_bus) {
+-			orb->rcode = RCODE_COMPLETE;
+-			list_del(&orb->link);
++		    STATUS_GET_ORB_LOW(status) == iter->request_bus) {
++			iter->rcode = RCODE_COMPLETE;
++			list_del(&iter->link);
++			orb = iter;
+ 			break;
+ 		}
+ 	}
+ 	spin_unlock_irqrestore(&lu->tgt->lock, flags);
+ 
+-	if (&orb->link != &lu->orb_list) {
++	if (orb) {
+ 		orb->callback(orb, &status);
+ 		kref_put(&orb->kref, free_orb); /* orb callback reference */
+ 	} else {
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index a78167b2c9ca2..e936e1eb1f95c 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -761,11 +761,11 @@ static bool pca953x_irq_pending(struct pca953x_chip *chip, unsigned long *pendin
+ 	bitmap_xor(cur_stat, new_stat, old_stat, gc->ngpio);
+ 	bitmap_and(trigger, cur_stat, chip->irq_mask, gc->ngpio);
+ 
++	bitmap_copy(chip->irq_stat, new_stat, gc->ngpio);
++
+ 	if (bitmap_empty(trigger, gc->ngpio))
+ 		return false;
+ 
+-	bitmap_copy(chip->irq_stat, new_stat, gc->ngpio);
+-
+ 	bitmap_and(cur_stat, chip->irq_trig_fall, old_stat, gc->ngpio);
+ 	bitmap_and(old_stat, chip->irq_trig_raise, new_stat, gc->ngpio);
+ 	bitmap_or(new_stat, old_stat, cur_stat, gc->ngpio);
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 2f895a2b8411d..921a99578ff0e 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -912,7 +912,7 @@ static void of_gpiochip_init_valid_mask(struct gpio_chip *chip)
+ 					   i, &start);
+ 		of_property_read_u32_index(np, "gpio-reserved-ranges",
+ 					   i + 1, &count);
+-		if (start >= chip->ngpio || start + count >= chip->ngpio)
++		if (start >= chip->ngpio || start + count > chip->ngpio)
+ 			continue;
+ 
+ 		bitmap_clear(chip->valid_mask, start, count);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 0e359a299f9ec..3f4403e778140 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -2822,7 +2822,7 @@ static void dp_test_get_audio_test_data(struct dc_link *link, bool disable_video
+ 		&dpcd_pattern_type.value,
+ 		sizeof(dpcd_pattern_type));
+ 
+-	channel_count = dpcd_test_mode.bits.channel_count + 1;
++	channel_count = min(dpcd_test_mode.bits.channel_count + 1, AUDIO_CHANNELS_COUNT);
+ 
+ 	// read pattern periods for requested channels when sawTooth pattern is requested
+ 	if (dpcd_pattern_type.value == AUDIO_TEST_PATTERN_SAWTOOTH ||
+diff --git a/drivers/hwmon/adt7470.c b/drivers/hwmon/adt7470.c
+index 740f39a54ab08..71e357956ce4c 100644
+--- a/drivers/hwmon/adt7470.c
++++ b/drivers/hwmon/adt7470.c
+@@ -20,6 +20,7 @@
+ #include <linux/kthread.h>
+ #include <linux/slab.h>
+ #include <linux/util_macros.h>
++#include <linux/sched.h>
+ 
+ /* Addresses to scan */
+ static const unsigned short normal_i2c[] = { 0x2C, 0x2E, 0x2F, I2C_CLIENT_END };
+@@ -260,11 +261,10 @@ static int adt7470_update_thread(void *p)
+ 		adt7470_read_temperatures(client, data);
+ 		mutex_unlock(&data->lock);
+ 
+-		set_current_state(TASK_INTERRUPTIBLE);
+ 		if (kthread_should_stop())
+ 			break;
+ 
+-		schedule_timeout(msecs_to_jiffies(data->auto_update_interval));
++		schedule_timeout_interruptible(msecs_to_jiffies(data->auto_update_interval));
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/infiniband/sw/siw/siw_cm.c b/drivers/infiniband/sw/siw/siw_cm.c
+index 66764f7ef072a..6e7399c2ca8c9 100644
+--- a/drivers/infiniband/sw/siw/siw_cm.c
++++ b/drivers/infiniband/sw/siw/siw_cm.c
+@@ -968,14 +968,15 @@ static void siw_accept_newconn(struct siw_cep *cep)
+ 
+ 		siw_cep_set_inuse(new_cep);
+ 		rv = siw_proc_mpareq(new_cep);
+-		siw_cep_set_free(new_cep);
+-
+ 		if (rv != -EAGAIN) {
+ 			siw_cep_put(cep);
+ 			new_cep->listen_cep = NULL;
+-			if (rv)
++			if (rv) {
++				siw_cep_set_free(new_cep);
+ 				goto error;
++			}
+ 		}
++		siw_cep_set_free(new_cep);
+ 	}
+ 	return;
+ 
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index b21c8224b1c84..21749859ad459 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -1626,7 +1626,8 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
+ 				  unsigned long pfn, unsigned int pages,
+ 				  int ih, int map)
+ {
+-	unsigned int mask = ilog2(__roundup_pow_of_two(pages));
++	unsigned int aligned_pages = __roundup_pow_of_two(pages);
++	unsigned int mask = ilog2(aligned_pages);
+ 	uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
+ 	u16 did = domain->iommu_did[iommu->seq_id];
+ 
+@@ -1638,10 +1639,30 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
+ 	if (domain_use_first_level(domain)) {
+ 		domain_flush_piotlb(iommu, domain, addr, pages, ih);
+ 	} else {
++		unsigned long bitmask = aligned_pages - 1;
++
++		/*
++		 * PSI masks the low order bits of the base address. If the
++		 * address isn't aligned to the mask, then compute a mask value
++		 * needed to ensure the target range is flushed.
++		 */
++		if (unlikely(bitmask & pfn)) {
++			unsigned long end_pfn = pfn + pages - 1, shared_bits;
++
++			/*
++			 * Since end_pfn <= pfn + bitmask, the only way bits
++			 * higher than bitmask can differ in pfn and end_pfn is
++			 * by carrying. This means after masking out bitmask,
++			 * high bits starting with the first set bit in
++			 * shared_bits are all equal in both pfn and end_pfn.
++			 */
++			shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
++			mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
++		}
++
+ 		/*
+ 		 * Fallback to domain selective flush if no PSI support or
+-		 * the size is too big. PSI requires page size to be 2 ^ x,
+-		 * and the base address is naturally aligned to the size.
++		 * the size is too big.
+ 		 */
+ 		if (!cap_pgsel_inv(iommu->cap) ||
+ 		    mask > cap_max_amask_val(iommu->cap))
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index b3d8d9e0e6f6e..ab0e2338e47ec 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -612,13 +612,15 @@ static void end_io_acct(struct mapped_device *md, struct bio *bio,
+ {
+ 	unsigned long duration = jiffies - start_time;
+ 
+-	bio_end_io_acct(bio, start_time);
+-
+ 	if (unlikely(dm_stats_used(&md->stats)))
+ 		dm_stats_account_io(&md->stats, bio_data_dir(bio),
+ 				    bio->bi_iter.bi_sector, bio_sectors(bio),
+ 				    true, duration, stats_aux);
+ 
++	smp_wmb();
++
++	bio_end_io_acct(bio, start_time);
++
+ 	/* nudge anyone waiting on suspend queue */
+ 	if (unlikely(wq_has_sleeper(&md->wait)))
+ 		wake_up(&md->wait);
+@@ -2348,6 +2350,8 @@ static int dm_wait_for_bios_completion(struct mapped_device *md, long task_state
+ 	}
+ 	finish_wait(&md->wait, &wait);
+ 
++	smp_rmb();
++
+ 	return r;
+ }
+ 
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index 7494d595035e3..87807ef010a96 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -1378,13 +1378,17 @@ static int mmc_select_hs400es(struct mmc_card *card)
+ 		goto out_err;
+ 	}
+ 
++	/*
++	 * Bump to HS timing and frequency. Some cards don't handle
++	 * SEND_STATUS reliably at the initial frequency.
++	 */
+ 	mmc_set_timing(host, MMC_TIMING_MMC_HS);
++	mmc_set_bus_speed(card);
++
+ 	err = mmc_switch_status(card, true);
+ 	if (err)
+ 		goto out_err;
+ 
+-	mmc_set_clock(host, card->ext_csd.hs_max_dtr);
+-
+ 	/* Switch card to DDR with strobe bit */
+ 	val = EXT_CSD_DDR_BUS_WIDTH_8 | EXT_CSD_BUS_WIDTH_STROBE;
+ 	err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
+@@ -1442,7 +1446,7 @@ out_err:
+ static int mmc_select_hs200(struct mmc_card *card)
+ {
+ 	struct mmc_host *host = card->host;
+-	unsigned int old_timing, old_signal_voltage;
++	unsigned int old_timing, old_signal_voltage, old_clock;
+ 	int err = -EINVAL;
+ 	u8 val;
+ 
+@@ -1473,8 +1477,17 @@ static int mmc_select_hs200(struct mmc_card *card)
+ 				   false, true);
+ 		if (err)
+ 			goto err;
++
++		/*
++		 * Bump to HS timing and frequency. Some cards don't handle
++		 * SEND_STATUS reliably at the initial frequency.
++		 * NB: We can't move to full (HS200) speeds until after we've
++		 * successfully switched over.
++		 */
+ 		old_timing = host->ios.timing;
++		old_clock = host->ios.clock;
+ 		mmc_set_timing(host, MMC_TIMING_MMC_HS200);
++		mmc_set_clock(card->host, card->ext_csd.hs_max_dtr);
+ 
+ 		/*
+ 		 * For HS200, CRC errors are not a reliable way to know the
+@@ -1487,8 +1500,10 @@ static int mmc_select_hs200(struct mmc_card *card)
+ 		 * mmc_select_timing() assumes timing has not changed if
+ 		 * it is a switch error.
+ 		 */
+-		if (err == -EBADMSG)
++		if (err == -EBADMSG) {
++			mmc_set_clock(host, old_clock);
+ 			mmc_set_timing(host, old_timing);
++		}
+ 	}
+ err:
+ 	if (err) {
+diff --git a/drivers/mmc/host/rtsx_pci_sdmmc.c b/drivers/mmc/host/rtsx_pci_sdmmc.c
+index e00167bcfaf6d..b5cb83bb9b851 100644
+--- a/drivers/mmc/host/rtsx_pci_sdmmc.c
++++ b/drivers/mmc/host/rtsx_pci_sdmmc.c
+@@ -37,10 +37,7 @@ struct realtek_pci_sdmmc {
+ 	bool			double_clk;
+ 	bool			eject;
+ 	bool			initial_mode;
+-	int			power_state;
+-#define SDMMC_POWER_ON		1
+-#define SDMMC_POWER_OFF		0
+-
++	int			prev_power_state;
+ 	int			sg_count;
+ 	s32			cookie;
+ 	int			cookie_sg_count;
+@@ -902,14 +899,21 @@ static int sd_set_bus_width(struct realtek_pci_sdmmc *host,
+ 	return err;
+ }
+ 
+-static int sd_power_on(struct realtek_pci_sdmmc *host)
++static int sd_power_on(struct realtek_pci_sdmmc *host, unsigned char power_mode)
+ {
+ 	struct rtsx_pcr *pcr = host->pcr;
+ 	int err;
+ 
+-	if (host->power_state == SDMMC_POWER_ON)
++	if (host->prev_power_state == MMC_POWER_ON)
+ 		return 0;
+ 
++	if (host->prev_power_state == MMC_POWER_UP) {
++		rtsx_pci_write_register(pcr, SD_BUS_STAT, SD_CLK_TOGGLE_EN, 0);
++		goto finish;
++	}
++
++	msleep(100);
++
+ 	rtsx_pci_init_cmd(pcr);
+ 	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, CARD_SELECT, 0x07, SD_MOD_SEL);
+ 	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, CARD_SHARE_MODE,
+@@ -928,11 +932,17 @@ static int sd_power_on(struct realtek_pci_sdmmc *host)
+ 	if (err < 0)
+ 		return err;
+ 
++	mdelay(1);
++
+ 	err = rtsx_pci_write_register(pcr, CARD_OE, SD_OUTPUT_EN, SD_OUTPUT_EN);
+ 	if (err < 0)
+ 		return err;
+ 
+-	host->power_state = SDMMC_POWER_ON;
++	/* send at least 74 clocks */
++	rtsx_pci_write_register(pcr, SD_BUS_STAT, SD_CLK_TOGGLE_EN, SD_CLK_TOGGLE_EN);
++
++finish:
++	host->prev_power_state = power_mode;
+ 	return 0;
+ }
+ 
+@@ -941,7 +951,7 @@ static int sd_power_off(struct realtek_pci_sdmmc *host)
+ 	struct rtsx_pcr *pcr = host->pcr;
+ 	int err;
+ 
+-	host->power_state = SDMMC_POWER_OFF;
++	host->prev_power_state = MMC_POWER_OFF;
+ 
+ 	rtsx_pci_init_cmd(pcr);
+ 
+@@ -967,7 +977,7 @@ static int sd_set_power_mode(struct realtek_pci_sdmmc *host,
+ 	if (power_mode == MMC_POWER_OFF)
+ 		err = sd_power_off(host);
+ 	else
+-		err = sd_power_on(host);
++		err = sd_power_on(host, power_mode);
+ 
+ 	return err;
+ }
+@@ -1404,10 +1414,11 @@ static int rtsx_pci_sdmmc_drv_probe(struct platform_device *pdev)
+ 
+ 	host = mmc_priv(mmc);
+ 	host->pcr = pcr;
++	mmc->ios.power_delay_ms = 5;
+ 	host->mmc = mmc;
+ 	host->pdev = pdev;
+ 	host->cookie = -1;
+-	host->power_state = SDMMC_POWER_OFF;
++	host->prev_power_state = MMC_POWER_OFF;
+ 	INIT_WORK(&host->work, sd_request);
+ 	platform_set_drvdata(pdev, host);
+ 	pcr->slots[RTSX_SD_CARD].p_dev = pdev;
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index 588b9a5641179..192cb8b20b472 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -16,6 +16,7 @@
+ #include <linux/regulator/consumer.h>
+ #include <linux/interconnect.h>
+ #include <linux/pinctrl/consumer.h>
++#include <linux/reset.h>
+ 
+ #include "sdhci-pltfm.h"
+ #include "cqhci.h"
+@@ -2228,6 +2229,43 @@ static inline void sdhci_msm_get_of_property(struct platform_device *pdev,
+ 	of_property_read_u32(node, "qcom,dll-config", &msm_host->dll_config);
+ }
+ 
++static int sdhci_msm_gcc_reset(struct device *dev, struct sdhci_host *host)
++{
++	struct reset_control *reset;
++	int ret = 0;
++
++	reset = reset_control_get_optional_exclusive(dev, NULL);
++	if (IS_ERR(reset))
++		return dev_err_probe(dev, PTR_ERR(reset),
++				"unable to acquire core_reset\n");
++
++	if (!reset)
++		return ret;
++
++	ret = reset_control_assert(reset);
++	if (ret) {
++		reset_control_put(reset);
++		return dev_err_probe(dev, ret, "core_reset assert failed\n");
++	}
++
++	/*
++	 * The hardware requirement for delay between assert/deassert
++	 * is at least 3-4 sleep clock (32.7KHz) cycles, which comes to
++	 * ~125us (4/32768). To be on the safe side add 200us delay.
++	 */
++	usleep_range(200, 210);
++
++	ret = reset_control_deassert(reset);
++	if (ret) {
++		reset_control_put(reset);
++		return dev_err_probe(dev, ret, "core_reset deassert failed\n");
++	}
++
++	usleep_range(200, 210);
++	reset_control_put(reset);
++
++	return ret;
++}
+ 
+ static int sdhci_msm_probe(struct platform_device *pdev)
+ {
+@@ -2276,6 +2314,10 @@ static int sdhci_msm_probe(struct platform_device *pdev)
+ 
+ 	msm_host->saved_tuning_phase = INVALID_TUNING_PHASE;
+ 
++	ret = sdhci_msm_gcc_reset(&pdev->dev, host);
++	if (ret)
++		goto pltfm_free;
++
+ 	/* Setup SDCC bus voter clock. */
+ 	msm_host->bus_clk = devm_clk_get(&pdev->dev, "bus");
+ 	if (!IS_ERR(msm_host->bus_clk)) {
+diff --git a/drivers/net/can/grcan.c b/drivers/net/can/grcan.c
+index 39802f107eb1b..4923f9ac6a43b 100644
+--- a/drivers/net/can/grcan.c
++++ b/drivers/net/can/grcan.c
+@@ -241,13 +241,14 @@ struct grcan_device_config {
+ 		.rxsize		= GRCAN_DEFAULT_BUFFER_SIZE,	\
+ 		}
+ 
+-#define GRCAN_TXBUG_SAFE_GRLIB_VERSION	0x4100
++#define GRCAN_TXBUG_SAFE_GRLIB_VERSION	4100
+ #define GRLIB_VERSION_MASK		0xffff
+ 
+ /* GRCAN private data structure */
+ struct grcan_priv {
+ 	struct can_priv can;	/* must be the first member */
+ 	struct net_device *dev;
++	struct device *ofdev_dev;
+ 	struct napi_struct napi;
+ 
+ 	struct grcan_registers __iomem *regs;	/* ioremap'ed registers */
+@@ -924,7 +925,7 @@ static void grcan_free_dma_buffers(struct net_device *dev)
+ 	struct grcan_priv *priv = netdev_priv(dev);
+ 	struct grcan_dma *dma = &priv->dma;
+ 
+-	dma_free_coherent(&dev->dev, dma->base_size, dma->base_buf,
++	dma_free_coherent(priv->ofdev_dev, dma->base_size, dma->base_buf,
+ 			  dma->base_handle);
+ 	memset(dma, 0, sizeof(*dma));
+ }
+@@ -949,7 +950,7 @@ static int grcan_allocate_dma_buffers(struct net_device *dev,
+ 
+ 	/* Extra GRCAN_BUFFER_ALIGNMENT to allow for alignment */
+ 	dma->base_size = lsize + ssize + GRCAN_BUFFER_ALIGNMENT;
+-	dma->base_buf = dma_alloc_coherent(&dev->dev,
++	dma->base_buf = dma_alloc_coherent(priv->ofdev_dev,
+ 					   dma->base_size,
+ 					   &dma->base_handle,
+ 					   GFP_KERNEL);
+@@ -1113,8 +1114,10 @@ static int grcan_close(struct net_device *dev)
+ 
+ 	priv->closing = true;
+ 	if (priv->need_txbug_workaround) {
++		spin_unlock_irqrestore(&priv->lock, flags);
+ 		del_timer_sync(&priv->hang_timer);
+ 		del_timer_sync(&priv->rr_timer);
++		spin_lock_irqsave(&priv->lock, flags);
+ 	}
+ 	netif_stop_queue(dev);
+ 	grcan_stop_hardware(dev);
+@@ -1134,7 +1137,7 @@ static int grcan_close(struct net_device *dev)
+ 	return 0;
+ }
+ 
+-static int grcan_transmit_catch_up(struct net_device *dev, int budget)
++static void grcan_transmit_catch_up(struct net_device *dev)
+ {
+ 	struct grcan_priv *priv = netdev_priv(dev);
+ 	unsigned long flags;
+@@ -1142,7 +1145,7 @@ static int grcan_transmit_catch_up(struct net_device *dev, int budget)
+ 
+ 	spin_lock_irqsave(&priv->lock, flags);
+ 
+-	work_done = catch_up_echo_skb(dev, budget, true);
++	work_done = catch_up_echo_skb(dev, -1, true);
+ 	if (work_done) {
+ 		if (!priv->resetting && !priv->closing &&
+ 		    !(priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY))
+@@ -1156,8 +1159,6 @@ static int grcan_transmit_catch_up(struct net_device *dev, int budget)
+ 	}
+ 
+ 	spin_unlock_irqrestore(&priv->lock, flags);
+-
+-	return work_done;
+ }
+ 
+ static int grcan_receive(struct net_device *dev, int budget)
+@@ -1239,19 +1240,13 @@ static int grcan_poll(struct napi_struct *napi, int budget)
+ 	struct net_device *dev = priv->dev;
+ 	struct grcan_registers __iomem *regs = priv->regs;
+ 	unsigned long flags;
+-	int tx_work_done, rx_work_done;
+-	int rx_budget = budget / 2;
+-	int tx_budget = budget - rx_budget;
++	int work_done;
+ 
+-	/* Half of the budget for receiving messages */
+-	rx_work_done = grcan_receive(dev, rx_budget);
++	work_done = grcan_receive(dev, budget);
+ 
+-	/* Half of the budget for transmitting messages as that can trigger echo
+-	 * frames being received
+-	 */
+-	tx_work_done = grcan_transmit_catch_up(dev, tx_budget);
++	grcan_transmit_catch_up(dev);
+ 
+-	if (rx_work_done < rx_budget && tx_work_done < tx_budget) {
++	if (work_done < budget) {
+ 		napi_complete(napi);
+ 
+ 		/* Guarantee no interference with a running reset that otherwise
+@@ -1268,7 +1263,7 @@ static int grcan_poll(struct napi_struct *napi, int budget)
+ 		spin_unlock_irqrestore(&priv->lock, flags);
+ 	}
+ 
+-	return rx_work_done + tx_work_done;
++	return work_done;
+ }
+ 
+ /* Work tx bug by waiting while for the risky situation to clear. If that fails,
+@@ -1600,6 +1595,7 @@ static int grcan_setup_netdev(struct platform_device *ofdev,
+ 	memcpy(&priv->config, &grcan_module_config,
+ 	       sizeof(struct grcan_device_config));
+ 	priv->dev = dev;
++	priv->ofdev_dev = &ofdev->dev;
+ 	priv->regs = base;
+ 	priv->can.bittiming_const = &grcan_bittiming_const;
+ 	priv->can.do_set_bittiming = grcan_set_bittiming;
+@@ -1652,6 +1648,7 @@ exit_free_candev:
+ static int grcan_probe(struct platform_device *ofdev)
+ {
+ 	struct device_node *np = ofdev->dev.of_node;
++	struct device_node *sysid_parent;
+ 	u32 sysid, ambafreq;
+ 	int irq, err;
+ 	void __iomem *base;
+@@ -1660,10 +1657,15 @@ static int grcan_probe(struct platform_device *ofdev)
+ 	/* Compare GRLIB version number with the first that does not
+ 	 * have the tx bug (see start_xmit)
+ 	 */
+-	err = of_property_read_u32(np, "systemid", &sysid);
+-	if (!err && ((sysid & GRLIB_VERSION_MASK)
+-		     >= GRCAN_TXBUG_SAFE_GRLIB_VERSION))
+-		txbug = false;
++	sysid_parent = of_find_node_by_path("/ambapp0");
++	if (sysid_parent) {
++		of_node_get(sysid_parent);
++		err = of_property_read_u32(sysid_parent, "systemid", &sysid);
++		if (!err && ((sysid & GRLIB_VERSION_MASK) >=
++			     GRCAN_TXBUG_SAFE_GRLIB_VERSION))
++			txbug = false;
++		of_node_put(sysid_parent);
++	}
+ 
+ 	err = of_property_read_u32(np, "freq", &ambafreq);
+ 	if (err) {
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 5ee8809bc2711..c355824ddb814 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1663,6 +1663,7 @@ mt7530_setup(struct dsa_switch *ds)
+ 				ret = of_get_phy_mode(mac_np, &interface);
+ 				if (ret && ret != -ENODEV) {
+ 					of_node_put(mac_np);
++					of_node_put(phy_node);
+ 					return ret;
+ 				}
+ 				id = of_mdio_parse_addr(ds->dev, phy_node);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index cb0c270418a4d..b818d5f342d53 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2575,6 +2575,10 @@ static int bnxt_poll_p5(struct napi_struct *napi, int budget)
+ 			u32 idx = le32_to_cpu(nqcmp->cq_handle_low);
+ 			struct bnxt_cp_ring_info *cpr2;
+ 
++			/* No more budget for RX work */
++			if (budget && work_done >= budget && idx == BNXT_RX_HDL)
++				break;
++
+ 			cpr2 = cpr->cp_ring_arr[idx];
+ 			work_done += __bnxt_poll_work(bp, cpr2,
+ 						      budget - work_done);
+@@ -10453,7 +10457,7 @@ static bool bnxt_rfs_capable(struct bnxt *bp)
+ 
+ 	if (bp->flags & BNXT_FLAG_CHIP_P5)
+ 		return bnxt_rfs_supported(bp);
+-	if (!(bp->flags & BNXT_FLAG_MSIX_CAP) || !bnxt_can_reserve_rings(bp))
++	if (!(bp->flags & BNXT_FLAG_MSIX_CAP) || !bnxt_can_reserve_rings(bp) || !bp->rx_nr_rings)
+ 		return false;
+ 
+ 	vnics = 1 + bp->rx_nr_rings;
+@@ -12481,10 +12485,9 @@ static int bnxt_init_dflt_ring_mode(struct bnxt *bp)
+ 		goto init_dflt_ring_err;
+ 
+ 	bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
+-	if (bnxt_rfs_supported(bp) && bnxt_rfs_capable(bp)) {
+-		bp->flags |= BNXT_FLAG_RFS;
+-		bp->dev->features |= NETIF_F_NTUPLE;
+-	}
++
++	bnxt_set_dflt_rfs(bp);
++
+ init_dflt_ring_err:
+ 	bnxt_ulp_irq_restart(bp, rc);
+ 	return rc;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
+index 5dc3743f80915..f04ac00e3e703 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
+@@ -771,7 +771,7 @@ struct hinic_hw_wqe *hinic_get_wqe(struct hinic_wq *wq, unsigned int wqe_size,
+ 	/* If we only have one page, still need to get shadown wqe when
+ 	 * wqe rolling-over page
+ 	 */
+-	if (curr_pg != end_pg || MASKED_WQE_IDX(wq, end_prod_idx) < *prod_idx) {
++	if (curr_pg != end_pg || end_prod_idx < *prod_idx) {
+ 		void *shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
+ 
+ 		copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *prod_idx);
+@@ -841,7 +841,10 @@ struct hinic_hw_wqe *hinic_read_wqe(struct hinic_wq *wq, unsigned int wqe_size,
+ 
+ 	*cons_idx = curr_cons_idx;
+ 
+-	if (curr_pg != end_pg) {
++	/* If we only have one page, still need to get shadown wqe when
++	 * wqe rolling-over page
++	 */
++	if (curr_pg != end_pg || end_cons_idx < curr_cons_idx) {
+ 		void *shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
+ 
+ 		copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *cons_idx);
+diff --git a/drivers/net/ethernet/mediatek/mtk_sgmii.c b/drivers/net/ethernet/mediatek/mtk_sgmii.c
+index 32d83421226a2..5897940a418b6 100644
+--- a/drivers/net/ethernet/mediatek/mtk_sgmii.c
++++ b/drivers/net/ethernet/mediatek/mtk_sgmii.c
+@@ -26,6 +26,7 @@ int mtk_sgmii_init(struct mtk_sgmii *ss, struct device_node *r, u32 ana_rgc3)
+ 			break;
+ 
+ 		ss->regmap[i] = syscon_node_to_regmap(np);
++		of_node_put(np);
+ 		if (IS_ERR(ss->regmap[i]))
+ 			return PTR_ERR(ss->regmap[i]);
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/rsc_dump.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/rsc_dump.c
+index ed4fb79b4db76..75b6060f7a9ae 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/rsc_dump.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/rsc_dump.c
+@@ -31,6 +31,7 @@ static const char *const mlx5_rsc_sgmt_name[] = {
+ struct mlx5_rsc_dump {
+ 	u32 pdn;
+ 	struct mlx5_core_mkey mkey;
++	u32 number_of_menu_items;
+ 	u16 fw_segment_type[MLX5_SGMT_TYPE_NUM];
+ };
+ 
+@@ -50,21 +51,37 @@ static int mlx5_rsc_dump_sgmt_get_by_name(char *name)
+ 	return -EINVAL;
+ }
+ 
+-static void mlx5_rsc_dump_read_menu_sgmt(struct mlx5_rsc_dump *rsc_dump, struct page *page)
++#define MLX5_RSC_DUMP_MENU_HEADER_SIZE (MLX5_ST_SZ_BYTES(resource_dump_info_segment) + \
++					MLX5_ST_SZ_BYTES(resource_dump_command_segment) + \
++					MLX5_ST_SZ_BYTES(resource_dump_menu_segment))
++
++static int mlx5_rsc_dump_read_menu_sgmt(struct mlx5_rsc_dump *rsc_dump, struct page *page,
++					int read_size, int start_idx)
+ {
+ 	void *data = page_address(page);
+ 	enum mlx5_sgmt_type sgmt_idx;
+ 	int num_of_items;
+ 	char *sgmt_name;
+ 	void *member;
++	int size = 0;
+ 	void *menu;
+ 	int i;
+ 
+-	menu = MLX5_ADDR_OF(menu_resource_dump_response, data, menu);
+-	num_of_items = MLX5_GET(resource_dump_menu_segment, menu, num_of_records);
++	if (!start_idx) {
++		menu = MLX5_ADDR_OF(menu_resource_dump_response, data, menu);
++		rsc_dump->number_of_menu_items = MLX5_GET(resource_dump_menu_segment, menu,
++							  num_of_records);
++		size = MLX5_RSC_DUMP_MENU_HEADER_SIZE;
++		data += size;
++	}
++	num_of_items = rsc_dump->number_of_menu_items;
++
++	for (i = 0; start_idx + i < num_of_items; i++) {
++		size += MLX5_ST_SZ_BYTES(resource_dump_menu_record);
++		if (size >= read_size)
++			return start_idx + i;
+ 
+-	for (i = 0; i < num_of_items; i++) {
+-		member = MLX5_ADDR_OF(resource_dump_menu_segment, menu, record[i]);
++		member = data + MLX5_ST_SZ_BYTES(resource_dump_menu_record) * i;
+ 		sgmt_name =  MLX5_ADDR_OF(resource_dump_menu_record, member, segment_name);
+ 		sgmt_idx = mlx5_rsc_dump_sgmt_get_by_name(sgmt_name);
+ 		if (sgmt_idx == -EINVAL)
+@@ -72,6 +89,7 @@ static void mlx5_rsc_dump_read_menu_sgmt(struct mlx5_rsc_dump *rsc_dump, struct
+ 		rsc_dump->fw_segment_type[sgmt_idx] = MLX5_GET(resource_dump_menu_record,
+ 							       member, segment_type);
+ 	}
++	return 0;
+ }
+ 
+ static int mlx5_rsc_dump_trigger(struct mlx5_core_dev *dev, struct mlx5_rsc_dump_cmd *cmd,
+@@ -168,6 +186,7 @@ static int mlx5_rsc_dump_menu(struct mlx5_core_dev *dev)
+ 	struct mlx5_rsc_dump_cmd *cmd = NULL;
+ 	struct mlx5_rsc_key key = {};
+ 	struct page *page;
++	int start_idx = 0;
+ 	int size;
+ 	int err;
+ 
+@@ -189,7 +208,7 @@ static int mlx5_rsc_dump_menu(struct mlx5_core_dev *dev)
+ 		if (err < 0)
+ 			goto destroy_cmd;
+ 
+-		mlx5_rsc_dump_read_menu_sgmt(dev->rsc_dump, page);
++		start_idx = mlx5_rsc_dump_read_menu_sgmt(dev->rsc_dump, page, size, start_idx);
+ 
+ 	} while (err > 0);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+index 673f1c82d3815..c9d5d8d93994d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+@@ -309,8 +309,8 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 		if (err)
+ 			return err;
+ 
+-		err = update_buffer_lossy(max_mtu, curr_pfc_en, prio2buffer, port_buff_cell_sz,
+-					  xoff, &port_buffer, &update_buffer);
++		err = update_buffer_lossy(max_mtu, curr_pfc_en, prio2buffer, xoff,
++					  port_buff_cell_sz, &port_buffer, &update_buffer);
+ 		if (err)
+ 			return err;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index 0469f53dfb99e..511c431465625 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -1629,6 +1629,8 @@ mlx5_tc_ct_flush_ft_entry(void *ptr, void *arg)
+ static void
+ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft)
+ {
++	struct mlx5e_priv *priv;
++
+ 	if (!refcount_dec_and_test(&ft->refcount))
+ 		return;
+ 
+@@ -1638,6 +1640,8 @@ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft)
+ 	rhashtable_free_and_destroy(&ft->ct_entries_ht,
+ 				    mlx5_tc_ct_flush_ft_entry,
+ 				    ct_priv);
++	priv = netdev_priv(ct_priv->netdev);
++	flush_workqueue(priv->wq);
+ 	mlx5_tc_ct_free_pre_ct_tables(ft);
+ 	mapping_remove(ct_priv->zone_mapping, ft->zone_restore_id);
+ 	kfree(ft);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+index f23c67575073a..7c0ae7c38eefd 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+@@ -1210,6 +1210,16 @@ static int mlx5e_trust_initialize(struct mlx5e_priv *priv)
+ 	if (err)
+ 		return err;
+ 
++	if (priv->dcbx_dp.trust_state == MLX5_QPTS_TRUST_PCP && priv->dcbx.dscp_app_cnt) {
++		/*
++		 * Align the driver state with the register state.
++		 * Temporary state change is required to enable the app list reset.
++		 */
++		priv->dcbx_dp.trust_state = MLX5_QPTS_TRUST_DSCP;
++		mlx5e_dcbnl_delete_app(priv);
++		priv->dcbx_dp.trust_state = MLX5_QPTS_TRUST_PCP;
++	}
++
+ 	mlx5e_params_calc_trust_tx_min_inline_mode(priv->mdev, &priv->channels.params,
+ 						   priv->dcbx_dp.trust_state);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 1ad1692a5b2d7..f1bf7f6758d0d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -2396,6 +2396,17 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ 				 match.key->vlan_priority);
+ 
+ 			*match_level = MLX5_MATCH_L2;
++
++			if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CVLAN) &&
++			    match.mask->vlan_eth_type &&
++			    MLX5_CAP_FLOWTABLE_TYPE(priv->mdev,
++						    ft_field_support.outer_second_vid,
++						    fs_type)) {
++				MLX5_SET(fte_match_set_misc, misc_c,
++					 outer_second_cvlan_tag, 1);
++				spec->match_criteria_enable |=
++					MLX5_MATCH_MISC_PARAMETERS;
++			}
+ 		}
+ 	} else if (*match_level != MLX5_MATCH_NONE) {
+ 		/* cvlan_tag enabled in match criteria and
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index 9b472e793ee36..e29db4c39b37f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -134,14 +134,19 @@ static void mlx5_stop_sync_reset_poll(struct mlx5_core_dev *dev)
+ 	del_timer_sync(&fw_reset->timer);
+ }
+ 
+-static void mlx5_sync_reset_clear_reset_requested(struct mlx5_core_dev *dev, bool poll_health)
++static int mlx5_sync_reset_clear_reset_requested(struct mlx5_core_dev *dev, bool poll_health)
+ {
+ 	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+ 
++	if (!test_and_clear_bit(MLX5_FW_RESET_FLAGS_RESET_REQUESTED, &fw_reset->reset_flags)) {
++		mlx5_core_warn(dev, "Reset request was already cleared\n");
++		return -EALREADY;
++	}
++
+ 	mlx5_stop_sync_reset_poll(dev);
+-	clear_bit(MLX5_FW_RESET_FLAGS_RESET_REQUESTED, &fw_reset->reset_flags);
+ 	if (poll_health)
+ 		mlx5_start_health_poll(dev);
++	return 0;
+ }
+ 
+ #define MLX5_RESET_POLL_INTERVAL	(HZ / 10)
+@@ -185,13 +190,17 @@ static int mlx5_fw_reset_set_reset_sync_nack(struct mlx5_core_dev *dev)
+ 	return mlx5_reg_mfrl_set(dev, MLX5_MFRL_REG_RESET_LEVEL3, 0, 2, false);
+ }
+ 
+-static void mlx5_sync_reset_set_reset_requested(struct mlx5_core_dev *dev)
++static int mlx5_sync_reset_set_reset_requested(struct mlx5_core_dev *dev)
+ {
+ 	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+ 
++	if (test_and_set_bit(MLX5_FW_RESET_FLAGS_RESET_REQUESTED, &fw_reset->reset_flags)) {
++		mlx5_core_warn(dev, "Reset request was already set\n");
++		return -EALREADY;
++	}
+ 	mlx5_stop_health_poll(dev, true);
+-	set_bit(MLX5_FW_RESET_FLAGS_RESET_REQUESTED, &fw_reset->reset_flags);
+ 	mlx5_start_sync_reset_poll(dev);
++	return 0;
+ }
+ 
+ static void mlx5_fw_live_patch_event(struct work_struct *work)
+@@ -225,7 +234,9 @@ static void mlx5_sync_reset_request_event(struct work_struct *work)
+ 			       err ? "Failed" : "Sent");
+ 		return;
+ 	}
+-	mlx5_sync_reset_set_reset_requested(dev);
++	if (mlx5_sync_reset_set_reset_requested(dev))
++		return;
++
+ 	err = mlx5_fw_reset_set_reset_sync_ack(dev);
+ 	if (err)
+ 		mlx5_core_warn(dev, "PCI Sync FW Update Reset Ack Failed. Error code: %d\n", err);
+@@ -325,7 +336,8 @@ static void mlx5_sync_reset_now_event(struct work_struct *work)
+ 	struct mlx5_core_dev *dev = fw_reset->dev;
+ 	int err;
+ 
+-	mlx5_sync_reset_clear_reset_requested(dev, false);
++	if (mlx5_sync_reset_clear_reset_requested(dev, false))
++		return;
+ 
+ 	mlx5_core_warn(dev, "Sync Reset now. Device is going to reset.\n");
+ 
+@@ -354,10 +366,8 @@ static void mlx5_sync_reset_abort_event(struct work_struct *work)
+ 						      reset_abort_work);
+ 	struct mlx5_core_dev *dev = fw_reset->dev;
+ 
+-	if (!test_bit(MLX5_FW_RESET_FLAGS_RESET_REQUESTED, &fw_reset->reset_flags))
++	if (mlx5_sync_reset_clear_reset_requested(dev, true))
+ 		return;
+-
+-	mlx5_sync_reset_clear_reset_requested(dev, true);
+ 	mlx5_core_warn(dev, "PCI Sync FW Update Reset Aborted.\n");
+ }
+ 
+diff --git a/drivers/net/ethernet/smsc/smsc911x.c b/drivers/net/ethernet/smsc/smsc911x.c
+index 823d9a7184fe6..6c4479c310975 100644
+--- a/drivers/net/ethernet/smsc/smsc911x.c
++++ b/drivers/net/ethernet/smsc/smsc911x.c
+@@ -2422,7 +2422,7 @@ static int smsc911x_drv_probe(struct platform_device *pdev)
+ 	if (irq == -EPROBE_DEFER) {
+ 		retval = -EPROBE_DEFER;
+ 		goto out_0;
+-	} else if (irq <= 0) {
++	} else if (irq < 0) {
+ 		pr_warn("Could not allocate irq resource\n");
+ 		retval = -ENODEV;
+ 		goto out_0;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index a9087dae767de..fb065b074553e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -243,6 +243,7 @@ static int intel_mgbe_common_data(struct pci_dev *pdev,
+ 	plat->has_gmac4 = 1;
+ 	plat->force_sf_dma_mode = 0;
+ 	plat->tso_en = 1;
++	plat->sph_disable = 1;
+ 
+ 	plat->rx_sched_algorithm = MTL_RX_ALGORITHM_SP;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index cad6588840d8b..958bbcfc2668d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -895,6 +895,7 @@ static int sun8i_dwmac_register_mdio_mux(struct stmmac_priv *priv)
+ 
+ 	ret = mdio_mux_init(priv->device, mdio_mux, mdio_mux_syscon_switch_fn,
+ 			    &gmac->mux_handle, priv, priv->mii);
++	of_node_put(mdio_mux);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index a46c32257de42..e9aa9a5eba6be 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -5046,7 +5046,7 @@ int stmmac_dvr_probe(struct device *device,
+ 		dev_info(priv->device, "TSO feature enabled\n");
+ 	}
+ 
+-	if (priv->dma_cap.sphen) {
++	if (priv->dma_cap.sphen && !priv->plat->sph_disable) {
+ 		ndev->hw_features |= NETIF_F_GRO;
+ 		priv->sph = true;
+ 		dev_info(priv->device, "SPH feature enabled\n");
+diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
+index 31cefd6ef6806..a1ee205d6a889 100644
+--- a/drivers/net/ethernet/ti/cpsw_new.c
++++ b/drivers/net/ethernet/ti/cpsw_new.c
+@@ -1255,8 +1255,10 @@ static int cpsw_probe_dt(struct cpsw_common *cpsw)
+ 	data->slave_data = devm_kcalloc(dev, CPSW_SLAVE_PORTS_NUM,
+ 					sizeof(struct cpsw_slave_data),
+ 					GFP_KERNEL);
+-	if (!data->slave_data)
++	if (!data->slave_data) {
++		of_node_put(tmp_node);
+ 		return -ENOMEM;
++	}
+ 
+ 	/* Populate all the child nodes here...
+ 	 */
+@@ -1353,6 +1355,7 @@ static int cpsw_probe_dt(struct cpsw_common *cpsw)
+ 
+ err_node_put:
+ 	of_node_put(port_np);
++	of_node_put(tmp_node);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+index 4bd44fbc6ecfa..e29b5523b19bd 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+@@ -820,10 +820,10 @@ static int xemaclite_mdio_write(struct mii_bus *bus, int phy_id, int reg,
+ static int xemaclite_mdio_setup(struct net_local *lp, struct device *dev)
+ {
+ 	struct mii_bus *bus;
+-	int rc;
+ 	struct resource res;
+ 	struct device_node *np = of_get_parent(lp->phy_node);
+ 	struct device_node *npp;
++	int rc, ret;
+ 
+ 	/* Don't register the MDIO bus if the phy_node or its parent node
+ 	 * can't be found.
+@@ -833,8 +833,14 @@ static int xemaclite_mdio_setup(struct net_local *lp, struct device *dev)
+ 		return -ENODEV;
+ 	}
+ 	npp = of_get_parent(np);
+-
+-	of_address_to_resource(npp, 0, &res);
++	ret = of_address_to_resource(npp, 0, &res);
++	of_node_put(npp);
++	if (ret) {
++		dev_err(dev, "%s resource error!\n",
++			dev->of_node->full_name);
++		of_node_put(np);
++		return ret;
++	}
+ 	if (lp->ndev->mem_start != res.start) {
+ 		struct phy_device *phydev;
+ 		phydev = of_phy_find_device(lp->phy_node);
+@@ -843,6 +849,7 @@ static int xemaclite_mdio_setup(struct net_local *lp, struct device *dev)
+ 				 "MDIO of the phy is not registered yet\n");
+ 		else
+ 			put_device(&phydev->mdio.dev);
++		of_node_put(np);
+ 		return 0;
+ 	}
+ 
+@@ -855,6 +862,7 @@ static int xemaclite_mdio_setup(struct net_local *lp, struct device *dev)
+ 	bus = mdiobus_alloc();
+ 	if (!bus) {
+ 		dev_err(dev, "Failed to allocate mdiobus\n");
++		of_node_put(np);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -867,6 +875,7 @@ static int xemaclite_mdio_setup(struct net_local *lp, struct device *dev)
+ 	bus->parent = dev;
+ 
+ 	rc = of_mdiobus_register(bus, np);
++	of_node_put(np);
+ 	if (rc) {
+ 		dev_err(dev, "Failed to register mdio bus.\n");
+ 		goto err_register;
+diff --git a/drivers/nfc/nfcmrvl/main.c b/drivers/nfc/nfcmrvl/main.c
+index 529be35ac1782..54d228acc0f5d 100644
+--- a/drivers/nfc/nfcmrvl/main.c
++++ b/drivers/nfc/nfcmrvl/main.c
+@@ -194,6 +194,7 @@ void nfcmrvl_nci_unregister_dev(struct nfcmrvl_private *priv)
+ {
+ 	struct nci_dev *ndev = priv->ndev;
+ 
++	nci_unregister_device(ndev);
+ 	if (priv->ndev->nfc_dev->fw_download_in_progress)
+ 		nfcmrvl_fw_dnld_abort(priv);
+ 
+@@ -202,7 +203,6 @@ void nfcmrvl_nci_unregister_dev(struct nfcmrvl_private *priv)
+ 	if (gpio_is_valid(priv->config.reset_n_io))
+ 		gpio_free(priv->config.reset_n_io);
+ 
+-	nci_unregister_device(ndev);
+ 	nci_free_device(ndev);
+ 	kfree(priv);
+ }
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index af051fb886998..0c603e069f21d 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -114,6 +114,7 @@
+ #define PCIE_MSI_ADDR_HIGH_REG			(CONTROL_BASE_ADDR + 0x54)
+ #define PCIE_MSI_STATUS_REG			(CONTROL_BASE_ADDR + 0x58)
+ #define PCIE_MSI_MASK_REG			(CONTROL_BASE_ADDR + 0x5C)
++#define     PCIE_MSI_ALL_MASK			GENMASK(31, 0)
+ #define PCIE_MSI_PAYLOAD_REG			(CONTROL_BASE_ADDR + 0x9C)
+ #define     PCIE_MSI_DATA_MASK			GENMASK(15, 0)
+ 
+@@ -577,6 +578,7 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG);
+ 
+ 	/* Clear all interrupts */
++	advk_writel(pcie, PCIE_MSI_ALL_MASK, PCIE_MSI_STATUS_REG);
+ 	advk_writel(pcie, PCIE_ISR0_ALL_MASK, PCIE_ISR0_REG);
+ 	advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_REG);
+ 	advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_STATUS_REG);
+@@ -589,7 +591,7 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_MASK_REG);
+ 
+ 	/* Unmask all MSIs */
+-	advk_writel(pcie, 0, PCIE_MSI_MASK_REG);
++	advk_writel(pcie, ~(u32)PCIE_MSI_ALL_MASK, PCIE_MSI_MASK_REG);
+ 
+ 	/* Enable summary interrupt for GIC SPI source */
+ 	reg = PCIE_IRQ_ALL_MASK & (~PCIE_IRQ_ENABLE_INTS_MASK);
+@@ -1386,23 +1388,19 @@ static void advk_pcie_remove_irq_domain(struct advk_pcie *pcie)
+ static void advk_pcie_handle_msi(struct advk_pcie *pcie)
+ {
+ 	u32 msi_val, msi_mask, msi_status, msi_idx;
+-	u16 msi_data;
++	int virq;
+ 
+ 	msi_mask = advk_readl(pcie, PCIE_MSI_MASK_REG);
+ 	msi_val = advk_readl(pcie, PCIE_MSI_STATUS_REG);
+-	msi_status = msi_val & ~msi_mask;
++	msi_status = msi_val & ((~msi_mask) & PCIE_MSI_ALL_MASK);
+ 
+ 	for (msi_idx = 0; msi_idx < MSI_IRQ_NUM; msi_idx++) {
+ 		if (!(BIT(msi_idx) & msi_status))
+ 			continue;
+ 
+-		/*
+-		 * msi_idx contains bits [4:0] of the msi_data and msi_data
+-		 * contains 16bit MSI interrupt number
+-		 */
+ 		advk_writel(pcie, BIT(msi_idx), PCIE_MSI_STATUS_REG);
+-		msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & PCIE_MSI_DATA_MASK;
+-		generic_handle_irq(msi_data);
++		virq = irq_find_mapping(pcie->msi_inner_domain, msi_idx);
++		generic_handle_irq(virq);
+ 	}
+ 
+ 	advk_writel(pcie, PCIE_ISR0_MSI_INT_PENDING,
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index 2adfab552e22a..f4edfe383e9d9 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -1462,6 +1462,13 @@ int dasd_start_IO(struct dasd_ccw_req *cqr)
+ 		if (!cqr->lpm)
+ 			cqr->lpm = dasd_path_get_opm(device);
+ 	}
++	/*
++	 * remember the amount of formatted tracks to prevent double format on
++	 * ESE devices
++	 */
++	if (cqr->block)
++		cqr->trkcount = atomic_read(&cqr->block->trkcount);
++
+ 	if (cqr->cpmode == 1) {
+ 		rc = ccw_device_tm_start(device->cdev, cqr->cpaddr,
+ 					 (long) cqr, cqr->lpm);
+@@ -1680,6 +1687,7 @@ void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ 	unsigned long now;
+ 	int nrf_suppressed = 0;
+ 	int fp_suppressed = 0;
++	struct request *req;
+ 	u8 *sense = NULL;
+ 	int expires;
+ 
+@@ -1780,7 +1788,12 @@ void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ 	}
+ 
+ 	if (dasd_ese_needs_format(cqr->block, irb)) {
+-		if (rq_data_dir((struct request *)cqr->callback_data) == READ) {
++		req = dasd_get_callback_data(cqr);
++		if (!req) {
++			cqr->status = DASD_CQR_ERROR;
++			return;
++		}
++		if (rq_data_dir(req) == READ) {
+ 			device->discipline->ese_read(cqr, irb);
+ 			cqr->status = DASD_CQR_SUCCESS;
+ 			cqr->stopclk = now;
+@@ -2799,8 +2812,7 @@ static void __dasd_cleanup_cqr(struct dasd_ccw_req *cqr)
+ 		 * complete a request partially.
+ 		 */
+ 		if (proc_bytes) {
+-			blk_update_request(req, BLK_STS_OK,
+-					   blk_rq_bytes(req) - proc_bytes);
++			blk_update_request(req, BLK_STS_OK, proc_bytes);
+ 			blk_mq_requeue_request(req, true);
+ 		} else if (likely(!blk_should_fake_timeout(req->q))) {
+ 			blk_mq_complete_request(req);
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index ad44d22e88591..7749deb614d75 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -3026,13 +3026,24 @@ static int dasd_eckd_format_device(struct dasd_device *base,
+ }
+ 
+ static bool test_and_set_format_track(struct dasd_format_entry *to_format,
+-				      struct dasd_block *block)
++				      struct dasd_ccw_req *cqr)
+ {
++	struct dasd_block *block = cqr->block;
+ 	struct dasd_format_entry *format;
+ 	unsigned long flags;
+ 	bool rc = false;
+ 
+ 	spin_lock_irqsave(&block->format_lock, flags);
++	if (cqr->trkcount != atomic_read(&block->trkcount)) {
++		/*
++		 * The number of formatted tracks has changed after request
++		 * start and we can not tell if the current track was involved.
++		 * To avoid data corruption treat it as if the current track is
++		 * involved
++		 */
++		rc = true;
++		goto out;
++	}
+ 	list_for_each_entry(format, &block->format_list, list) {
+ 		if (format->track == to_format->track) {
+ 			rc = true;
+@@ -3052,6 +3063,7 @@ static void clear_format_track(struct dasd_format_entry *format,
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&block->format_lock, flags);
++	atomic_inc(&block->trkcount);
+ 	list_del_init(&format->list);
+ 	spin_unlock_irqrestore(&block->format_lock, flags);
+ }
+@@ -3088,7 +3100,7 @@ dasd_eckd_ese_format(struct dasd_device *startdev, struct dasd_ccw_req *cqr,
+ 	sector_t curr_trk;
+ 	int rc;
+ 
+-	req = cqr->callback_data;
++	req = dasd_get_callback_data(cqr);
+ 	block = cqr->block;
+ 	base = block->base;
+ 	private = base->private;
+@@ -3113,8 +3125,11 @@ dasd_eckd_ese_format(struct dasd_device *startdev, struct dasd_ccw_req *cqr,
+ 	}
+ 	format->track = curr_trk;
+ 	/* test if track is already in formatting by another thread */
+-	if (test_and_set_format_track(format, block))
++	if (test_and_set_format_track(format, cqr)) {
++		/* this is no real error so do not count down retries */
++		cqr->retries++;
+ 		return ERR_PTR(-EEXIST);
++	}
+ 
+ 	fdata.start_unit = curr_trk;
+ 	fdata.stop_unit = curr_trk;
+@@ -3213,12 +3228,11 @@ static int dasd_eckd_ese_read(struct dasd_ccw_req *cqr, struct irb *irb)
+ 				cqr->proc_bytes = blk_count * blksize;
+ 				return 0;
+ 			}
+-			if (dst && !skip_block) {
+-				dst += off;
++			if (dst && !skip_block)
+ 				memset(dst, 0, blksize);
+-			} else {
++			else
+ 				skip_block--;
+-			}
++			dst += blksize;
+ 			blk_count++;
+ 		}
+ 	}
+diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h
+index fa552f9f16667..9d9685c25253d 100644
+--- a/drivers/s390/block/dasd_int.h
++++ b/drivers/s390/block/dasd_int.h
+@@ -188,6 +188,7 @@ struct dasd_ccw_req {
+ 	void (*callback)(struct dasd_ccw_req *, void *data);
+ 	void *callback_data;
+ 	unsigned int proc_bytes;	/* bytes for partial completion */
++	unsigned int trkcount;		/* count formatted tracks */
+ };
+ 
+ /*
+@@ -575,6 +576,7 @@ struct dasd_block {
+ 
+ 	struct list_head format_list;
+ 	spinlock_t format_lock;
++	atomic_t trkcount;
+ };
+ 
+ struct dasd_attention_data {
+@@ -723,6 +725,18 @@ dasd_check_blocksize(int bsize)
+ 	return 0;
+ }
+ 
++/*
++ * return the callback data of the original request in case there are
++ * ERP requests build on top of it
++ */
++static inline void *dasd_get_callback_data(struct dasd_ccw_req *cqr)
++{
++	while (cqr->refers)
++		cqr = cqr->refers;
++
++	return cqr->callback_data;
++}
++
+ /* externals in dasd.c */
+ #define DASD_PROFILE_OFF	 0
+ #define DASD_PROFILE_ON 	 1
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 62784b99a8074..c246ccc6bf057 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -5334,6 +5334,18 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ 		mutex_lock(&inode->log_mutex);
+ 	}
+ 
++	/*
++	 * For symlinks, we must always log their content, which is stored in an
++	 * inline extent, otherwise we could end up with an empty symlink after
++	 * log replay, which is invalid on linux (symlink(2) returns -ENOENT if
++	 * one attempts to create an empty symlink).
++	 * We don't need to worry about flushing delalloc, because when we create
++	 * the inline extent when the symlink is created (we never have delalloc
++	 * for symlinks).
++	 */
++	if (S_ISLNK(inode->vfs_inode.i_mode))
++		inode_only = LOG_INODE_ALL;
++
+ 	/*
+ 	 * a brute force approach to making sure we get the most uptodate
+ 	 * copies of everything.
+@@ -5724,7 +5736,7 @@ process_leaf:
+ 			}
+ 
+ 			ctx->log_new_dentries = false;
+-			if (type == BTRFS_FT_DIR || type == BTRFS_FT_SYMLINK)
++			if (type == BTRFS_FT_DIR)
+ 				log_mode = LOG_INODE_ALL;
+ 			ret = btrfs_log_inode(trans, root, BTRFS_I(di_inode),
+ 					      log_mode, ctx);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 77199d3560429..b6d60e69043ae 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -369,6 +369,14 @@ static void nfs4_setup_readdir(u64 cookie, __be32 *verifier, struct dentry *dent
+ 	kunmap_atomic(start);
+ }
+ 
++static void nfs4_fattr_set_prechange(struct nfs_fattr *fattr, u64 version)
++{
++	if (!(fattr->valid & NFS_ATTR_FATTR_PRECHANGE)) {
++		fattr->pre_change_attr = version;
++		fattr->valid |= NFS_ATTR_FATTR_PRECHANGE;
++	}
++}
++
+ static void nfs4_test_and_free_stateid(struct nfs_server *server,
+ 		nfs4_stateid *stateid,
+ 		const struct cred *cred)
+@@ -6464,7 +6472,9 @@ static void nfs4_delegreturn_release(void *calldata)
+ 		pnfs_roc_release(&data->lr.arg, &data->lr.res,
+ 				 data->res.lr_ret);
+ 	if (inode) {
+-		nfs_post_op_update_inode_force_wcc(inode, &data->fattr);
++		nfs4_fattr_set_prechange(&data->fattr,
++					 inode_peek_iversion_raw(inode));
++		nfs_refresh_inode(inode, &data->fattr);
+ 		nfs_iput_and_deactive(inode);
+ 	}
+ 	kfree(calldata);
+diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
+index b56e1dedcf2fd..40df88728a6f4 100644
+--- a/include/linux/stmmac.h
++++ b/include/linux/stmmac.h
+@@ -203,5 +203,6 @@ struct plat_stmmacenet_data {
+ 	bool vlan_fail_q_en;
+ 	u8 vlan_fail_q;
+ 	unsigned int eee_usecs_rate;
++	bool sph_disable;
+ };
+ #endif
+diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
+index 54363527feea4..e58342ace11f2 100644
+--- a/kernel/irq/internals.h
++++ b/kernel/irq/internals.h
+@@ -29,12 +29,14 @@ extern struct irqaction chained_action;
+  * IRQTF_WARNED    - warning "IRQ_WAKE_THREAD w/o thread_fn" has been printed
+  * IRQTF_AFFINITY  - irq thread is requested to adjust affinity
+  * IRQTF_FORCED_THREAD  - irq action is force threaded
++ * IRQTF_READY     - signals that irq thread is ready
+  */
+ enum {
+ 	IRQTF_RUNTHREAD,
+ 	IRQTF_WARNED,
+ 	IRQTF_AFFINITY,
+ 	IRQTF_FORCED_THREAD,
++	IRQTF_READY,
+ };
+ 
+ /*
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index 1a7723604399c..ca36c6179aa76 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -405,6 +405,7 @@ static struct irq_desc *alloc_desc(int irq, int node, unsigned int flags,
+ 	lockdep_set_class(&desc->lock, &irq_desc_lock_class);
+ 	mutex_init(&desc->request_mutex);
+ 	init_rcu_head(&desc->rcu);
++	init_waitqueue_head(&desc->wait_for_threads);
+ 
+ 	desc_set_defaults(irq, desc, node, affinity, owner);
+ 	irqd_set(&desc->irq_data, flags);
+@@ -573,6 +574,7 @@ int __init early_irq_init(void)
+ 		raw_spin_lock_init(&desc[i].lock);
+ 		lockdep_set_class(&desc[i].lock, &irq_desc_lock_class);
+ 		mutex_init(&desc[i].request_mutex);
++		init_waitqueue_head(&desc[i].wait_for_threads);
+ 		desc_set_defaults(i, &desc[i], node, NULL, NULL);
+ 	}
+ 	return arch_early_irq_init();
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 79dc02b956dc3..92d94615cbbbd 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -1148,6 +1148,31 @@ static void irq_wake_secondary(struct irq_desc *desc, struct irqaction *action)
+ 	raw_spin_unlock_irq(&desc->lock);
+ }
+ 
++/*
++ * Internal function to notify that a interrupt thread is ready.
++ */
++static void irq_thread_set_ready(struct irq_desc *desc,
++				 struct irqaction *action)
++{
++	set_bit(IRQTF_READY, &action->thread_flags);
++	wake_up(&desc->wait_for_threads);
++}
++
++/*
++ * Internal function to wake up a interrupt thread and wait until it is
++ * ready.
++ */
++static void wake_up_and_wait_for_irq_thread_ready(struct irq_desc *desc,
++						  struct irqaction *action)
++{
++	if (!action || !action->thread)
++		return;
++
++	wake_up_process(action->thread);
++	wait_event(desc->wait_for_threads,
++		   test_bit(IRQTF_READY, &action->thread_flags));
++}
++
+ /*
+  * Interrupt handler thread
+  */
+@@ -1159,6 +1184,8 @@ static int irq_thread(void *data)
+ 	irqreturn_t (*handler_fn)(struct irq_desc *desc,
+ 			struct irqaction *action);
+ 
++	irq_thread_set_ready(desc, action);
++
+ 	if (force_irqthreads && test_bit(IRQTF_FORCED_THREAD,
+ 					&action->thread_flags))
+ 		handler_fn = irq_forced_thread_fn;
+@@ -1583,8 +1610,6 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
+ 	}
+ 
+ 	if (!shared) {
+-		init_waitqueue_head(&desc->wait_for_threads);
+-
+ 		/* Setup the type (level, edge polarity) if configured: */
+ 		if (new->flags & IRQF_TRIGGER_MASK) {
+ 			ret = __irq_set_trigger(desc,
+@@ -1674,14 +1699,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
+ 
+ 	irq_setup_timings(desc, new);
+ 
+-	/*
+-	 * Strictly no need to wake it up, but hung_task complains
+-	 * when no hard interrupt wakes the thread up.
+-	 */
+-	if (new->thread)
+-		wake_up_process(new->thread);
+-	if (new->secondary)
+-		wake_up_process(new->secondary->thread);
++	wake_up_and_wait_for_irq_thread_ready(desc, new);
++	wake_up_and_wait_for_irq_thread_ready(desc, new->secondary);
+ 
+ 	register_irq_proc(irq, desc);
+ 	new->dir = NULL;
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 844c35803739e..b41009a283caf 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -2456,7 +2456,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
+ 	div = READ_ONCE(rcu_divisor);
+ 	div = div < 0 ? 7 : div > sizeof(long) * 8 - 2 ? sizeof(long) * 8 - 2 : div;
+ 	bl = max(rdp->blimit, pending >> div);
+-	if (unlikely(bl > 100)) {
++	if (in_serving_softirq() && unlikely(bl > 100)) {
+ 		long rrn = READ_ONCE(rcu_resched_ns);
+ 
+ 		rrn = rrn < NSEC_PER_MSEC ? NSEC_PER_MSEC : rrn > NSEC_PER_SEC ? NSEC_PER_SEC : rrn;
+@@ -2490,19 +2490,23 @@ static void rcu_do_batch(struct rcu_data *rdp)
+ 		 * Stop only if limit reached and CPU has something to do.
+ 		 * Note: The rcl structure counts down from zero.
+ 		 */
+-		if (-rcl.len >= bl && !offloaded &&
+-		    (need_resched() ||
+-		     (!is_idle_task(current) && !rcu_is_callbacks_kthread())))
+-			break;
+-		if (unlikely(tlimit)) {
+-			/* only call local_clock() every 32 callbacks */
+-			if (likely((-rcl.len & 31) || local_clock() < tlimit))
+-				continue;
+-			/* Exceeded the time limit, so leave. */
+-			break;
+-		}
+-		if (offloaded) {
+-			WARN_ON_ONCE(in_serving_softirq());
++		if (in_serving_softirq()) {
++			if (-rcl.len >= bl && (need_resched() ||
++					(!is_idle_task(current) && !rcu_is_callbacks_kthread())))
++				break;
++
++			/*
++			 * Make sure we don't spend too much time here and deprive other
++			 * softirq vectors of CPU cycles.
++			 */
++			if (unlikely(tlimit)) {
++				/* only call local_clock() every 32 callbacks */
++				if (likely((-rcl.len & 31) || local_clock() < tlimit))
++					continue;
++				/* Exceeded the time limit, so leave. */
++				break;
++			}
++		} else {
+ 			local_bh_enable();
+ 			lockdep_assert_irqs_enabled();
+ 			cond_resched_tasks_rcu_qs();
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index c515bbd46c679..f2f0bc7f0cb4c 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1144,6 +1144,11 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 
+ 	lock_sock(sk);
+ 
++	if (so->bound) {
++		err = -EINVAL;
++		goto out;
++	}
++
+ 	/* do not register frame reception for functional addressing */
+ 	if (so->opt.flags & CAN_ISOTP_SF_BROADCAST)
+ 		do_rx_reg = 0;
+@@ -1154,10 +1159,6 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 		goto out;
+ 	}
+ 
+-	if (so->bound && addr->can_ifindex == so->ifindex &&
+-	    rx_id == so->rxid && tx_id == so->txid)
+-		goto out;
+-
+ 	dev = dev_get_by_index(net, addr->can_ifindex);
+ 	if (!dev) {
+ 		err = -ENODEV;
+@@ -1184,19 +1185,6 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 
+ 	dev_put(dev);
+ 
+-	if (so->bound && do_rx_reg) {
+-		/* unregister old filter */
+-		if (so->ifindex) {
+-			dev = dev_get_by_index(net, so->ifindex);
+-			if (dev) {
+-				can_rx_unregister(net, dev, so->rxid,
+-						  SINGLE_MASK(so->rxid),
+-						  isotp_rcv, sk);
+-				dev_put(dev);
+-			}
+-		}
+-	}
+-
+ 	/* switch to new settings */
+ 	so->ifindex = ifindex;
+ 	so->rxid = rx_id;
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 0c321996c6eb0..3817988a5a1d4 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -2401,9 +2401,10 @@ int ip_mc_source(int add, int omode, struct sock *sk, struct
+ 				newpsl->sl_addr[i] = psl->sl_addr[i];
+ 			/* decrease mem now to avoid the memleak warning */
+ 			atomic_sub(IP_SFLSIZE(psl->sl_max), &sk->sk_omem_alloc);
+-			kfree_rcu(psl, rcu);
+ 		}
+ 		rcu_assign_pointer(pmc->sflist, newpsl);
++		if (psl)
++			kfree_rcu(psl, rcu);
+ 		psl = newpsl;
+ 	}
+ 	rv = 1;	/* > 0 for insert logic below if sl_count is 0 */
+@@ -2501,11 +2502,13 @@ int ip_mc_msfilter(struct sock *sk, struct ip_msfilter *msf, int ifindex)
+ 			psl->sl_count, psl->sl_addr, 0);
+ 		/* decrease mem now to avoid the memleak warning */
+ 		atomic_sub(IP_SFLSIZE(psl->sl_max), &sk->sk_omem_alloc);
+-		kfree_rcu(psl, rcu);
+-	} else
++	} else {
+ 		(void) ip_mc_del_src(in_dev, &msf->imsf_multiaddr, pmc->sfmode,
+ 			0, NULL, 0);
++	}
+ 	rcu_assign_pointer(pmc->sflist, newpsl);
++	if (psl)
++		kfree_rcu(psl, rcu);
+ 	pmc->sfmode = msf->imsf_fmode;
+ 	err = 0;
+ done:
+diff --git a/net/nfc/core.c b/net/nfc/core.c
+index 6800470dd6df7..3b2983813ff13 100644
+--- a/net/nfc/core.c
++++ b/net/nfc/core.c
+@@ -38,7 +38,7 @@ int nfc_fw_download(struct nfc_dev *dev, const char *firmware_name)
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (!device_is_registered(&dev->dev)) {
++	if (dev->shutting_down) {
+ 		rc = -ENODEV;
+ 		goto error;
+ 	}
+@@ -94,7 +94,7 @@ int nfc_dev_up(struct nfc_dev *dev)
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (!device_is_registered(&dev->dev)) {
++	if (dev->shutting_down) {
+ 		rc = -ENODEV;
+ 		goto error;
+ 	}
+@@ -142,7 +142,7 @@ int nfc_dev_down(struct nfc_dev *dev)
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (!device_is_registered(&dev->dev)) {
++	if (dev->shutting_down) {
+ 		rc = -ENODEV;
+ 		goto error;
+ 	}
+@@ -206,7 +206,7 @@ int nfc_start_poll(struct nfc_dev *dev, u32 im_protocols, u32 tm_protocols)
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (!device_is_registered(&dev->dev)) {
++	if (dev->shutting_down) {
+ 		rc = -ENODEV;
+ 		goto error;
+ 	}
+@@ -245,7 +245,7 @@ int nfc_stop_poll(struct nfc_dev *dev)
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (!device_is_registered(&dev->dev)) {
++	if (dev->shutting_down) {
+ 		rc = -ENODEV;
+ 		goto error;
+ 	}
+@@ -290,7 +290,7 @@ int nfc_dep_link_up(struct nfc_dev *dev, int target_index, u8 comm_mode)
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (!device_is_registered(&dev->dev)) {
++	if (dev->shutting_down) {
+ 		rc = -ENODEV;
+ 		goto error;
+ 	}
+@@ -334,7 +334,7 @@ int nfc_dep_link_down(struct nfc_dev *dev)
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (!device_is_registered(&dev->dev)) {
++	if (dev->shutting_down) {
+ 		rc = -ENODEV;
+ 		goto error;
+ 	}
+@@ -400,7 +400,7 @@ int nfc_activate_target(struct nfc_dev *dev, u32 target_idx, u32 protocol)
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (!device_is_registered(&dev->dev)) {
++	if (dev->shutting_down) {
+ 		rc = -ENODEV;
+ 		goto error;
+ 	}
+@@ -446,7 +446,7 @@ int nfc_deactivate_target(struct nfc_dev *dev, u32 target_idx, u8 mode)
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (!device_is_registered(&dev->dev)) {
++	if (dev->shutting_down) {
+ 		rc = -ENODEV;
+ 		goto error;
+ 	}
+@@ -493,7 +493,7 @@ int nfc_data_exchange(struct nfc_dev *dev, u32 target_idx, struct sk_buff *skb,
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (!device_is_registered(&dev->dev)) {
++	if (dev->shutting_down) {
+ 		rc = -ENODEV;
+ 		kfree_skb(skb);
+ 		goto error;
+@@ -550,7 +550,7 @@ int nfc_enable_se(struct nfc_dev *dev, u32 se_idx)
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (!device_is_registered(&dev->dev)) {
++	if (dev->shutting_down) {
+ 		rc = -ENODEV;
+ 		goto error;
+ 	}
+@@ -599,7 +599,7 @@ int nfc_disable_se(struct nfc_dev *dev, u32 se_idx)
+ 
+ 	device_lock(&dev->dev);
+ 
+-	if (!device_is_registered(&dev->dev)) {
++	if (dev->shutting_down) {
+ 		rc = -ENODEV;
+ 		goto error;
+ 	}
+@@ -1126,6 +1126,7 @@ int nfc_register_device(struct nfc_dev *dev)
+ 			dev->rfkill = NULL;
+ 		}
+ 	}
++	dev->shutting_down = false;
+ 	device_unlock(&dev->dev);
+ 
+ 	rc = nfc_genl_device_added(dev);
+@@ -1158,12 +1159,10 @@ void nfc_unregister_device(struct nfc_dev *dev)
+ 		rfkill_unregister(dev->rfkill);
+ 		rfkill_destroy(dev->rfkill);
+ 	}
++	dev->shutting_down = true;
+ 	device_unlock(&dev->dev);
+ 
+ 	if (dev->ops->check_presence) {
+-		device_lock(&dev->dev);
+-		dev->shutting_down = true;
+-		device_unlock(&dev->dev);
+ 		del_timer_sync(&dev->check_pres_timer);
+ 		cancel_work_sync(&dev->check_pres_work);
+ 	}
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index 78acc4e9ac932..b8939ebaa6d3e 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -1244,7 +1244,7 @@ int nfc_genl_fw_download_done(struct nfc_dev *dev, const char *firmware_name,
+ 	struct sk_buff *msg;
+ 	void *hdr;
+ 
+-	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
++	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC);
+ 	if (!msg)
+ 		return -ENOMEM;
+ 
+@@ -1260,7 +1260,7 @@ int nfc_genl_fw_download_done(struct nfc_dev *dev, const char *firmware_name,
+ 
+ 	genlmsg_end(msg, hdr);
+ 
+-	genlmsg_multicast(&nfc_genl_family, msg, 0, 0, GFP_KERNEL);
++	genlmsg_multicast(&nfc_genl_family, msg, 0, 0, GFP_ATOMIC);
+ 
+ 	return 0;
+ 
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index bd123f1d09230..0ca25e3cc5806 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -2826,9 +2826,6 @@ static struct rpc_xprt *xs_setup_local(struct xprt_create *args)
+ 		}
+ 		xprt_set_bound(xprt);
+ 		xs_format_peer_addresses(xprt, "local", RPCBIND_NETID_LOCAL);
+-		ret = ERR_PTR(xs_local_setup_socket(transport));
+-		if (ret)
+-			goto out_err;
+ 		break;
+ 	default:
+ 		ret = ERR_PTR(-EAFNOSUPPORT);
+diff --git a/sound/firewire/fireworks/fireworks_hwdep.c b/sound/firewire/fireworks/fireworks_hwdep.c
+index e93eb4616c5f4..c739173c668f3 100644
+--- a/sound/firewire/fireworks/fireworks_hwdep.c
++++ b/sound/firewire/fireworks/fireworks_hwdep.c
+@@ -34,6 +34,7 @@ hwdep_read_resp_buf(struct snd_efw *efw, char __user *buf, long remained,
+ 	type = SNDRV_FIREWIRE_EVENT_EFW_RESPONSE;
+ 	if (copy_to_user(buf, &type, sizeof(type)))
+ 		return -EFAULT;
++	count += sizeof(type);
+ 	remained -= sizeof(type);
+ 	buf += sizeof(type);
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index b5168959fcf63..a5b1fd62a99cf 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8969,6 +8969,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x3820, "Yoga Duet 7 13ITL6", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ 	SND_PCI_QUIRK(0x17aa, 0x3834, "Lenovo IdeaPad Slim 9i 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+diff --git a/sound/soc/codecs/da7219.c b/sound/soc/codecs/da7219.c
+index 0b3b7909efc92..5f8c96dea094a 100644
+--- a/sound/soc/codecs/da7219.c
++++ b/sound/soc/codecs/da7219.c
+@@ -446,7 +446,7 @@ static int da7219_tonegen_freq_put(struct snd_kcontrol *kcontrol,
+ 	struct soc_mixer_control *mixer_ctrl =
+ 		(struct soc_mixer_control *) kcontrol->private_value;
+ 	unsigned int reg = mixer_ctrl->reg;
+-	__le16 val;
++	__le16 val_new, val_old;
+ 	int ret;
+ 
+ 	/*
+@@ -454,13 +454,19 @@ static int da7219_tonegen_freq_put(struct snd_kcontrol *kcontrol,
+ 	 * Therefore we need to convert to little endian here to align with
+ 	 * HW registers.
+ 	 */
+-	val = cpu_to_le16(ucontrol->value.integer.value[0]);
++	val_new = cpu_to_le16(ucontrol->value.integer.value[0]);
+ 
+ 	mutex_lock(&da7219->ctrl_lock);
+-	ret = regmap_raw_write(da7219->regmap, reg, &val, sizeof(val));
++	ret = regmap_raw_read(da7219->regmap, reg, &val_old, sizeof(val_old));
++	if (ret == 0 && (val_old != val_new))
++		ret = regmap_raw_write(da7219->regmap, reg,
++				&val_new, sizeof(val_new));
+ 	mutex_unlock(&da7219->ctrl_lock);
+ 
+-	return ret;
++	if (ret < 0)
++		return ret;
++
++	return val_old != val_new;
+ }
+ 
+ 
+diff --git a/sound/soc/codecs/wm8958-dsp2.c b/sound/soc/codecs/wm8958-dsp2.c
+index 3bce9a14f0f31..f7256761d0432 100644
+--- a/sound/soc/codecs/wm8958-dsp2.c
++++ b/sound/soc/codecs/wm8958-dsp2.c
+@@ -530,7 +530,7 @@ static int wm8958_mbc_put(struct snd_kcontrol *kcontrol,
+ 
+ 	wm8958_dsp_apply(component, mbc, wm8994->mbc_ena[mbc]);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ #define WM8958_MBC_SWITCH(xname, xval) {\
+@@ -656,7 +656,7 @@ static int wm8958_vss_put(struct snd_kcontrol *kcontrol,
+ 
+ 	wm8958_dsp_apply(component, vss, wm8994->vss_ena[vss]);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ 
+@@ -730,7 +730,7 @@ static int wm8958_hpf_put(struct snd_kcontrol *kcontrol,
+ 
+ 	wm8958_dsp_apply(component, hpf % 3, ucontrol->value.integer.value[0]);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ #define WM8958_HPF_SWITCH(xname, xval) {\
+@@ -824,7 +824,7 @@ static int wm8958_enh_eq_put(struct snd_kcontrol *kcontrol,
+ 
+ 	wm8958_dsp_apply(component, eq, ucontrol->value.integer.value[0]);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ #define WM8958_ENH_EQ_SWITCH(xname, xval) {\
+diff --git a/sound/soc/meson/aiu-acodec-ctrl.c b/sound/soc/meson/aiu-acodec-ctrl.c
+index 7078197e0cc50..e11b6a5cd772e 100644
+--- a/sound/soc/meson/aiu-acodec-ctrl.c
++++ b/sound/soc/meson/aiu-acodec-ctrl.c
+@@ -58,7 +58,7 @@ static int aiu_acodec_ctrl_mux_put_enum(struct snd_kcontrol *kcontrol,
+ 
+ 	snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static SOC_ENUM_SINGLE_DECL(aiu_acodec_ctrl_mux_enum, AIU_ACODEC_CTRL,
+diff --git a/sound/soc/meson/aiu-codec-ctrl.c b/sound/soc/meson/aiu-codec-ctrl.c
+index 4b773d3e8b073..a807e5007d6ea 100644
+--- a/sound/soc/meson/aiu-codec-ctrl.c
++++ b/sound/soc/meson/aiu-codec-ctrl.c
+@@ -57,7 +57,7 @@ static int aiu_codec_ctrl_mux_put_enum(struct snd_kcontrol *kcontrol,
+ 
+ 	snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static SOC_ENUM_SINGLE_DECL(aiu_hdmi_ctrl_mux_enum, AIU_HDMI_CLK_DATA_CTRL,
+diff --git a/sound/soc/meson/g12a-tohdmitx.c b/sound/soc/meson/g12a-tohdmitx.c
+index 9b2b59536ced0..6c99052feafd8 100644
+--- a/sound/soc/meson/g12a-tohdmitx.c
++++ b/sound/soc/meson/g12a-tohdmitx.c
+@@ -67,7 +67,7 @@ static int g12a_tohdmitx_i2s_mux_put_enum(struct snd_kcontrol *kcontrol,
+ 
+ 	snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static SOC_ENUM_SINGLE_DECL(g12a_tohdmitx_i2s_mux_enum, TOHDMITX_CTRL0,
+diff --git a/sound/soc/soc-generic-dmaengine-pcm.c b/sound/soc/soc-generic-dmaengine-pcm.c
+index 0d100b4e43f7e..9ef80a48707eb 100644
+--- a/sound/soc/soc-generic-dmaengine-pcm.c
++++ b/sound/soc/soc-generic-dmaengine-pcm.c
+@@ -83,10 +83,10 @@ static int dmaengine_pcm_hw_params(struct snd_soc_component *component,
+ 
+ 	memset(&slave_config, 0, sizeof(slave_config));
+ 
+-	if (pcm->config && pcm->config->prepare_slave_config)
+-		prepare_slave_config = pcm->config->prepare_slave_config;
+-	else
++	if (!pcm->config)
+ 		prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config;
++	else
++		prepare_slave_config = pcm->config->prepare_slave_config;
+ 
+ 	if (prepare_slave_config) {
+ 		ret = prepare_slave_config(substream, params, &slave_config);
+diff --git a/tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh b/tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh
+index beee0d5646a61..11189f3096270 100755
+--- a/tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh
++++ b/tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh
+@@ -185,7 +185,7 @@ setup_prepare()
+ 
+ 	tc filter add dev $eth0 ingress chain $(IS2 0 0) pref 1 \
+ 		protocol ipv4 flower skip_sw ip_proto udp dst_port 5201 \
+-		action police rate 50mbit burst 64k \
++		action police rate 50mbit burst 64k conform-exceed drop/pipe \
+ 		action goto chain $(IS2 1 0)
+ }
+ 
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh
+index a3402cd8d5b68..9ff22f28032dd 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh
+@@ -61,9 +61,12 @@ setup_prepare()
+ 
+ 	vrf_prepare
+ 	mirror_gre_topo_create
++	# Avoid changing br1's PVID while it is operational as a L3 interface.
++	ip link set dev br1 down
+ 
+ 	ip link set dev $swp3 master br1
+ 	bridge vlan add dev br1 vid 555 pvid untagged self
++	ip link set dev br1 up
+ 	ip address add dev br1 192.0.2.129/28
+ 	ip address add dev br1 2001:db8:2::1/64
+ 
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index dc21dc49b426f..e36745995f224 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -951,7 +951,7 @@ TEST(ERRNO_valid)
+ 	ASSERT_EQ(0, ret);
+ 
+ 	EXPECT_EQ(parent, syscall(__NR_getppid));
+-	EXPECT_EQ(-1, read(0, NULL, 0));
++	EXPECT_EQ(-1, read(-1, NULL, 0));
+ 	EXPECT_EQ(E2BIG, errno);
+ }
+ 
+@@ -970,7 +970,7 @@ TEST(ERRNO_zero)
+ 
+ 	EXPECT_EQ(parent, syscall(__NR_getppid));
+ 	/* "errno" of 0 is ok. */
+-	EXPECT_EQ(0, read(0, NULL, 0));
++	EXPECT_EQ(0, read(-1, NULL, 0));
+ }
+ 
+ /*
+@@ -991,7 +991,7 @@ TEST(ERRNO_capped)
+ 	ASSERT_EQ(0, ret);
+ 
+ 	EXPECT_EQ(parent, syscall(__NR_getppid));
+-	EXPECT_EQ(-1, read(0, NULL, 0));
++	EXPECT_EQ(-1, read(-1, NULL, 0));
+ 	EXPECT_EQ(4095, errno);
+ }
+ 
+@@ -1022,7 +1022,7 @@ TEST(ERRNO_order)
+ 	ASSERT_EQ(0, ret);
+ 
+ 	EXPECT_EQ(parent, syscall(__NR_getppid));
+-	EXPECT_EQ(-1, read(0, NULL, 0));
++	EXPECT_EQ(-1, read(-1, NULL, 0));
+ 	EXPECT_EQ(12, errno);
+ }
+ 
+@@ -2575,7 +2575,7 @@ void *tsync_sibling(void *data)
+ 	ret = prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0);
+ 	if (!ret)
+ 		return (void *)SIBLING_EXIT_NEWPRIVS;
+-	read(0, NULL, 0);
++	read(-1, NULL, 0);
+ 	return (void *)SIBLING_EXIT_UNKILLED;
+ }
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-05-15 22:10 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-05-15 22:10 UTC (permalink / raw
  To: gentoo-commits

commit:     e3c14a90a3d9b3c0076842b5fbc28e710836fe1e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May 15 22:10:17 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May 15 22:10:17 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e3c14a90

Linux patch 5.10.116

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 +
 1115_linux-5.10.116.patch | 563 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 567 insertions(+)

diff --git a/0000_README b/0000_README
index 46bc953b..3d86ae35 100644
--- a/0000_README
+++ b/0000_README
@@ -503,6 +503,10 @@ Patch:  1114_linux-5.10.115.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.115
 
+Patch:  1115_linux-5.10.116.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.116
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1115_linux-5.10.116.patch b/1115_linux-5.10.116.patch
new file mode 100644
index 00000000..ef864f4c
--- /dev/null
+++ b/1115_linux-5.10.116.patch
@@ -0,0 +1,563 @@
+diff --git a/Documentation/vm/memory-model.rst b/Documentation/vm/memory-model.rst
+index 9daadf9faba14..ce398a7dc6cd5 100644
+--- a/Documentation/vm/memory-model.rst
++++ b/Documentation/vm/memory-model.rst
+@@ -51,8 +51,7 @@ call :c:func:`free_area_init` function. Yet, the mappings array is not
+ usable until the call to :c:func:`memblock_free_all` that hands all the
+ memory to the page allocator.
+ 
+-If an architecture enables `CONFIG_ARCH_HAS_HOLES_MEMORYMODEL` option,
+-it may free parts of the `mem_map` array that do not cover the
++An architecture may free parts of the `mem_map` array that do not cover the
+ actual physical pages. In such case, the architecture specific
+ :c:func:`pfn_valid` implementation should take the holes in the
+ `mem_map` into account.
+diff --git a/Makefile b/Makefile
+index 86d3e137d7f2d..c999de1b6a6b2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 115
++SUBLEVEL = 116
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index a0eac00e2c81a..b587ecc6f9493 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -25,7 +25,7 @@ config ARM
+ 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+ 	select ARCH_HAVE_CUSTOM_GPIO_H
+ 	select ARCH_HAS_GCOV_PROFILE_ALL
+-	select ARCH_KEEP_MEMBLOCK if HAVE_ARCH_PFN_VALID || KEXEC
++	select ARCH_KEEP_MEMBLOCK
+ 	select ARCH_MIGHT_HAVE_PC_PARPORT
+ 	select ARCH_NO_SG_CHAIN if !ARM_HAS_SG_CHAIN
+ 	select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
+@@ -521,7 +521,6 @@ config ARCH_S3C24XX
+ config ARCH_OMAP1
+ 	bool "TI OMAP1"
+ 	depends on MMU
+-	select ARCH_HAS_HOLES_MEMORYMODEL
+ 	select ARCH_OMAP
+ 	select CLKDEV_LOOKUP
+ 	select CLKSRC_MMIO
+@@ -1481,9 +1480,6 @@ config OABI_COMPAT
+ 	  UNPREDICTABLE (in fact it can be predicted that it won't work
+ 	  at all). If in doubt say N.
+ 
+-config ARCH_HAS_HOLES_MEMORYMODEL
+-	bool
+-
+ config ARCH_SELECT_MEMORY_MODEL
+ 	bool
+ 
+@@ -1495,7 +1491,7 @@ config ARCH_SPARSEMEM_ENABLE
+ 	select SPARSEMEM_STATIC if SPARSEMEM
+ 
+ config HAVE_ARCH_PFN_VALID
+-	def_bool ARCH_HAS_HOLES_MEMORYMODEL || !SPARSEMEM
++	def_bool y
+ 
+ config HIGHMEM
+ 	bool "High Memory Support"
+diff --git a/arch/arm/mach-bcm/Kconfig b/arch/arm/mach-bcm/Kconfig
+index ae790908fc74a..9b594ae98153c 100644
+--- a/arch/arm/mach-bcm/Kconfig
++++ b/arch/arm/mach-bcm/Kconfig
+@@ -211,7 +211,6 @@ config ARCH_BRCMSTB
+ 	select BCM7038_L1_IRQ
+ 	select BRCMSTB_L2_IRQ
+ 	select BCM7120_L2_IRQ
+-	select ARCH_HAS_HOLES_MEMORYMODEL
+ 	select ZONE_DMA if ARM_LPAE
+ 	select SOC_BRCMSTB
+ 	select SOC_BUS
+diff --git a/arch/arm/mach-davinci/Kconfig b/arch/arm/mach-davinci/Kconfig
+index f56ff8c24043d..de11030748d0b 100644
+--- a/arch/arm/mach-davinci/Kconfig
++++ b/arch/arm/mach-davinci/Kconfig
+@@ -5,7 +5,6 @@ menuconfig ARCH_DAVINCI
+ 	depends on ARCH_MULTI_V5
+ 	select DAVINCI_TIMER
+ 	select ZONE_DMA
+-	select ARCH_HAS_HOLES_MEMORYMODEL
+ 	select PM_GENERIC_DOMAINS if PM
+ 	select PM_GENERIC_DOMAINS_OF if PM && OF
+ 	select REGMAP_MMIO
+diff --git a/arch/arm/mach-exynos/Kconfig b/arch/arm/mach-exynos/Kconfig
+index 56314b1c74089..b5df98ee5d176 100644
+--- a/arch/arm/mach-exynos/Kconfig
++++ b/arch/arm/mach-exynos/Kconfig
+@@ -8,7 +8,6 @@
+ menuconfig ARCH_EXYNOS
+ 	bool "Samsung Exynos"
+ 	depends on ARCH_MULTI_V7
+-	select ARCH_HAS_HOLES_MEMORYMODEL
+ 	select ARCH_SUPPORTS_BIG_ENDIAN
+ 	select ARM_AMBA
+ 	select ARM_GIC
+diff --git a/arch/arm/mach-highbank/Kconfig b/arch/arm/mach-highbank/Kconfig
+index 1bc68913d62c1..9de38ce8124f2 100644
+--- a/arch/arm/mach-highbank/Kconfig
++++ b/arch/arm/mach-highbank/Kconfig
+@@ -2,7 +2,6 @@
+ config ARCH_HIGHBANK
+ 	bool "Calxeda ECX-1000/2000 (Highbank/Midway)"
+ 	depends on ARCH_MULTI_V7
+-	select ARCH_HAS_HOLES_MEMORYMODEL
+ 	select ARCH_SUPPORTS_BIG_ENDIAN
+ 	select ARM_AMBA
+ 	select ARM_ERRATA_764369 if SMP
+diff --git a/arch/arm/mach-omap2/Kconfig b/arch/arm/mach-omap2/Kconfig
+index 3f62a0c9450dd..164985505f9e5 100644
+--- a/arch/arm/mach-omap2/Kconfig
++++ b/arch/arm/mach-omap2/Kconfig
+@@ -93,7 +93,6 @@ config SOC_DRA7XX
+ config ARCH_OMAP2PLUS
+ 	bool
+ 	select ARCH_HAS_BANDGAP
+-	select ARCH_HAS_HOLES_MEMORYMODEL
+ 	select ARCH_HAS_RESET_CONTROLLER
+ 	select ARCH_OMAP
+ 	select CLKSRC_MMIO
+diff --git a/arch/arm/mach-s5pv210/Kconfig b/arch/arm/mach-s5pv210/Kconfig
+index 95d4e82848662..d644b45bc29d0 100644
+--- a/arch/arm/mach-s5pv210/Kconfig
++++ b/arch/arm/mach-s5pv210/Kconfig
+@@ -8,7 +8,6 @@
+ config ARCH_S5PV210
+ 	bool "Samsung S5PV210/S5PC110"
+ 	depends on ARCH_MULTI_V7
+-	select ARCH_HAS_HOLES_MEMORYMODEL
+ 	select ARM_VIC
+ 	select CLKSRC_SAMSUNG_PWM
+ 	select COMMON_CLK_SAMSUNG
+diff --git a/arch/arm/mach-tango/Kconfig b/arch/arm/mach-tango/Kconfig
+index 25b2fd4348617..a9eeda36aeb15 100644
+--- a/arch/arm/mach-tango/Kconfig
++++ b/arch/arm/mach-tango/Kconfig
+@@ -3,7 +3,6 @@ config ARCH_TANGO
+ 	bool "Sigma Designs Tango4 (SMP87xx)"
+ 	depends on ARCH_MULTI_V7
+ 	# Cortex-A9 MPCore r3p0, PL310 r3p2
+-	select ARCH_HAS_HOLES_MEMORYMODEL
+ 	select ARM_ERRATA_754322
+ 	select ARM_ERRATA_764369 if SMP
+ 	select ARM_ERRATA_775420
+diff --git a/arch/mips/bmips/setup.c b/arch/mips/bmips/setup.c
+index 19308df5f5779..1b06b25aea87d 100644
+--- a/arch/mips/bmips/setup.c
++++ b/arch/mips/bmips/setup.c
+@@ -167,7 +167,7 @@ void __init plat_mem_setup(void)
+ 		dtb = phys_to_virt(fw_arg2);
+ 	else if (fw_passed_dtb) /* UHI interface or appended dtb */
+ 		dtb = (void *)fw_passed_dtb;
+-	else if (__dtb_start != __dtb_end)
++	else if (&__dtb_start != &__dtb_end)
+ 		dtb = (void *)__dtb_start;
+ 	else
+ 		panic("no dtb found");
+diff --git a/arch/mips/lantiq/prom.c b/arch/mips/lantiq/prom.c
+index 51a218f04fe0d..3f568f5aae2d1 100644
+--- a/arch/mips/lantiq/prom.c
++++ b/arch/mips/lantiq/prom.c
+@@ -79,7 +79,7 @@ void __init plat_mem_setup(void)
+ 
+ 	if (fw_passed_dtb) /* UHI interface */
+ 		dtb = (void *)fw_passed_dtb;
+-	else if (__dtb_start != __dtb_end)
++	else if (&__dtb_start != &__dtb_end)
+ 		dtb = (void *)__dtb_start;
+ 	else
+ 		panic("no dtb found");
+diff --git a/arch/mips/pic32/pic32mzda/init.c b/arch/mips/pic32/pic32mzda/init.c
+index 50f376f058f43..f232c77ff5265 100644
+--- a/arch/mips/pic32/pic32mzda/init.c
++++ b/arch/mips/pic32/pic32mzda/init.c
+@@ -28,7 +28,7 @@ static ulong get_fdtaddr(void)
+ 	if (fw_passed_dtb && !fw_arg2 && !fw_arg3)
+ 		return (ulong)fw_passed_dtb;
+ 
+-	if (__dtb_start < __dtb_end)
++	if (&__dtb_start < &__dtb_end)
+ 		ftaddr = (ulong)__dtb_start;
+ 
+ 	return ftaddr;
+diff --git a/arch/mips/ralink/of.c b/arch/mips/ralink/of.c
+index a971f1aca096c..3017263ac4f9f 100644
+--- a/arch/mips/ralink/of.c
++++ b/arch/mips/ralink/of.c
+@@ -77,7 +77,7 @@ void __init plat_mem_setup(void)
+ 	 */
+ 	if (fw_passed_dtb)
+ 		dtb = (void *)fw_passed_dtb;
+-	else if (__dtb_start != __dtb_end)
++	else if (&__dtb_start != &__dtb_end)
+ 		dtb = (void *)__dtb_start;
+ 
+ 	__dt_setup_arch(dtb);
+diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
+index f8d0146bf7852..54f77b4a0b494 100644
+--- a/drivers/block/drbd/drbd_nl.c
++++ b/drivers/block/drbd/drbd_nl.c
+@@ -790,9 +790,11 @@ int drbd_adm_set_role(struct sk_buff *skb, struct genl_info *info)
+ 	mutex_lock(&adm_ctx.resource->adm_mutex);
+ 
+ 	if (info->genlhdr->cmd == DRBD_ADM_PRIMARY)
+-		retcode = drbd_set_role(adm_ctx.device, R_PRIMARY, parms.assume_uptodate);
++		retcode = (enum drbd_ret_code)drbd_set_role(adm_ctx.device,
++						R_PRIMARY, parms.assume_uptodate);
+ 	else
+-		retcode = drbd_set_role(adm_ctx.device, R_SECONDARY, 0);
++		retcode = (enum drbd_ret_code)drbd_set_role(adm_ctx.device,
++						R_SECONDARY, 0);
+ 
+ 	mutex_unlock(&adm_ctx.resource->adm_mutex);
+ 	genl_lock();
+@@ -1962,7 +1964,7 @@ int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info)
+ 	drbd_flush_workqueue(&connection->sender_work);
+ 
+ 	rv = _drbd_request_state(device, NS(disk, D_ATTACHING), CS_VERBOSE);
+-	retcode = rv;  /* FIXME: Type mismatch. */
++	retcode = (enum drbd_ret_code)rv;
+ 	drbd_resume_io(device);
+ 	if (rv < SS_SUCCESS)
+ 		goto fail;
+@@ -2687,7 +2689,8 @@ int drbd_adm_connect(struct sk_buff *skb, struct genl_info *info)
+ 	}
+ 	rcu_read_unlock();
+ 
+-	retcode = conn_request_state(connection, NS(conn, C_UNCONNECTED), CS_VERBOSE);
++	retcode = (enum drbd_ret_code)conn_request_state(connection,
++					NS(conn, C_UNCONNECTED), CS_VERBOSE);
+ 
+ 	conn_reconfig_done(connection);
+ 	mutex_unlock(&adm_ctx.resource->adm_mutex);
+@@ -2800,7 +2803,7 @@ int drbd_adm_disconnect(struct sk_buff *skb, struct genl_info *info)
+ 	mutex_lock(&adm_ctx.resource->adm_mutex);
+ 	rv = conn_try_disconnect(connection, parms.force_disconnect);
+ 	if (rv < SS_SUCCESS)
+-		retcode = rv;  /* FIXME: Type mismatch. */
++		retcode = (enum drbd_ret_code)rv;
+ 	else
+ 		retcode = NO_ERROR;
+ 	mutex_unlock(&adm_ctx.resource->adm_mutex);
+diff --git a/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c b/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
+index 92280cc05e2db..dae8e489c8cf4 100644
+--- a/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
++++ b/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
+@@ -53,8 +53,8 @@
+  */
+ 
+ struct gpio_service *dal_gpio_service_create(
+-	enum dce_version dce_version_major,
+-	enum dce_version dce_version_minor,
++	enum dce_version dce_version,
++	enum dce_environment dce_environment,
+ 	struct dc_context *ctx)
+ {
+ 	struct gpio_service *service;
+@@ -67,14 +67,14 @@ struct gpio_service *dal_gpio_service_create(
+ 		return NULL;
+ 	}
+ 
+-	if (!dal_hw_translate_init(&service->translate, dce_version_major,
+-			dce_version_minor)) {
++	if (!dal_hw_translate_init(&service->translate, dce_version,
++			dce_environment)) {
+ 		BREAK_TO_DEBUGGER();
+ 		goto failure_1;
+ 	}
+ 
+-	if (!dal_hw_factory_init(&service->factory, dce_version_major,
+-			dce_version_minor)) {
++	if (!dal_hw_factory_init(&service->factory, dce_version,
++			dce_environment)) {
+ 		BREAK_TO_DEBUGGER();
+ 		goto failure_1;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/include/gpio_service_interface.h b/drivers/gpu/drm/amd/display/include/gpio_service_interface.h
+index 9c55d247227ea..7e3240e73c1fc 100644
+--- a/drivers/gpu/drm/amd/display/include/gpio_service_interface.h
++++ b/drivers/gpu/drm/amd/display/include/gpio_service_interface.h
+@@ -42,8 +42,8 @@ void dal_gpio_destroy(
+ 	struct gpio **ptr);
+ 
+ struct gpio_service *dal_gpio_service_create(
+-	enum dce_version dce_version_major,
+-	enum dce_version dce_version_minor,
++	enum dce_version dce_version,
++	enum dce_environment dce_environment,
+ 	struct dc_context *ctx);
+ 
+ struct gpio *dal_gpio_service_create_irq(
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_asm.c b/drivers/net/ethernet/netronome/nfp/nfp_asm.c
+index 2643ea5948f48..154399c5453fe 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_asm.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_asm.c
+@@ -196,7 +196,7 @@ int swreg_to_unrestricted(swreg dst, swreg lreg, swreg rreg,
+ 	}
+ 
+ 	reg->dst_lmextn = swreg_lmextn(dst);
+-	reg->src_lmextn = swreg_lmextn(lreg) | swreg_lmextn(rreg);
++	reg->src_lmextn = swreg_lmextn(lreg) || swreg_lmextn(rreg);
+ 
+ 	return 0;
+ }
+@@ -277,7 +277,7 @@ int swreg_to_restricted(swreg dst, swreg lreg, swreg rreg,
+ 	}
+ 
+ 	reg->dst_lmextn = swreg_lmextn(dst);
+-	reg->src_lmextn = swreg_lmextn(lreg) | swreg_lmextn(rreg);
++	reg->src_lmextn = swreg_lmextn(lreg) || swreg_lmextn(rreg);
+ 
+ 	return 0;
+ }
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index e502414b35564..4d2e64e9016c1 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -193,8 +193,6 @@ kclist_add_private(unsigned long pfn, unsigned long nr_pages, void *arg)
+ 		return 1;
+ 
+ 	p = pfn_to_page(pfn);
+-	if (!memmap_valid_within(pfn, p, page_zone(p)))
+-		return 1;
+ 
+ 	ent = kmalloc(sizeof(*ent), GFP_KERNEL);
+ 	if (!ent)
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index f3016b8e698ab..b2e4599b88832 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -1443,37 +1443,6 @@ struct mminit_pfnnid_cache {
+ #define pfn_valid_within(pfn) (1)
+ #endif
+ 
+-#ifdef CONFIG_ARCH_HAS_HOLES_MEMORYMODEL
+-/*
+- * pfn_valid() is meant to be able to tell if a given PFN has valid memmap
+- * associated with it or not. This means that a struct page exists for this
+- * pfn. The caller cannot assume the page is fully initialized in general.
+- * Hotplugable pages might not have been onlined yet. pfn_to_online_page()
+- * will ensure the struct page is fully online and initialized. Special pages
+- * (e.g. ZONE_DEVICE) are never onlined and should be treated accordingly.
+- *
+- * In FLATMEM, it is expected that holes always have valid memmap as long as
+- * there is valid PFNs either side of the hole. In SPARSEMEM, it is assumed
+- * that a valid section has a memmap for the entire section.
+- *
+- * However, an ARM, and maybe other embedded architectures in the future
+- * free memmap backing holes to save memory on the assumption the memmap is
+- * never used. The page_zone linkages are then broken even though pfn_valid()
+- * returns true. A walker of the full memmap must then do this additional
+- * check to ensure the memmap they are looking at is sane by making sure
+- * the zone and PFN linkages are still valid. This is expensive, but walkers
+- * of the full memmap are extremely rare.
+- */
+-bool memmap_valid_within(unsigned long pfn,
+-					struct page *page, struct zone *zone);
+-#else
+-static inline bool memmap_valid_within(unsigned long pfn,
+-					struct page *page, struct zone *zone)
+-{
+-	return true;
+-}
+-#endif /* CONFIG_ARCH_HAS_HOLES_MEMORYMODEL */
+-
+ #endif /* !__GENERATING_BOUNDS.H */
+ #endif /* !__ASSEMBLY__ */
+ #endif /* _LINUX_MMZONE_H */
+diff --git a/include/linux/regulator/consumer.h b/include/linux/regulator/consumer.h
+index 2024944fd2f78..20e84a84fb779 100644
+--- a/include/linux/regulator/consumer.h
++++ b/include/linux/regulator/consumer.h
+@@ -331,6 +331,12 @@ regulator_get_exclusive(struct device *dev, const char *id)
+ 	return ERR_PTR(-ENODEV);
+ }
+ 
++static inline struct regulator *__must_check
++devm_regulator_get_exclusive(struct device *dev, const char *id)
++{
++	return ERR_PTR(-ENODEV);
++}
++
+ static inline struct regulator *__must_check
+ regulator_get_optional(struct device *dev, const char *id)
+ {
+@@ -486,6 +492,11 @@ static inline int regulator_get_voltage(struct regulator *regulator)
+ 	return -EINVAL;
+ }
+ 
++static inline int regulator_sync_voltage(struct regulator *regulator)
++{
++	return -EINVAL;
++}
++
+ static inline int regulator_is_supported_voltage(struct regulator *regulator,
+ 				   int min_uV, int max_uV)
+ {
+@@ -578,6 +589,25 @@ static inline int devm_regulator_unregister_notifier(struct regulator *regulator
+ 	return 0;
+ }
+ 
++static inline int regulator_suspend_enable(struct regulator_dev *rdev,
++					   suspend_state_t state)
++{
++	return -EINVAL;
++}
++
++static inline int regulator_suspend_disable(struct regulator_dev *rdev,
++					    suspend_state_t state)
++{
++	return -EINVAL;
++}
++
++static inline int regulator_set_suspend_voltage(struct regulator *regulator,
++						int min_uV, int max_uV,
++						suspend_state_t state)
++{
++	return -EINVAL;
++}
++
+ static inline void *regulator_get_drvdata(struct regulator *regulator)
+ {
+ 	return NULL;
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index a592a826e2fb5..09104ca14a3e4 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -35,6 +35,9 @@
+ /* HCI priority */
+ #define HCI_PRIO_MAX	7
+ 
++/* HCI maximum id value */
++#define HCI_MAX_ID 10000
++
+ /* HCI Core structures */
+ struct inquiry_data {
+ 	bdaddr_t	bdaddr;
+diff --git a/mm/memory.c b/mm/memory.c
+index 14f91c7467d6f..72236b1ce5903 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -5295,6 +5295,8 @@ long copy_huge_page_from_user(struct page *dst_page,
+ 		if (rc)
+ 			break;
+ 
++		flush_dcache_page(subpage);
++
+ 		cond_resched();
+ 	}
+ 	return ret_val;
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 278e6f3fa62ce..495bdac5cf929 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -1010,9 +1010,12 @@ static int move_to_new_page(struct page *newpage, struct page *page,
+ 		if (!PageMappingFlags(page))
+ 			page->mapping = NULL;
+ 
+-		if (likely(!is_zone_device_page(newpage)))
+-			flush_dcache_page(newpage);
++		if (likely(!is_zone_device_page(newpage))) {
++			int i, nr = compound_nr(newpage);
+ 
++			for (i = 0; i < nr; i++)
++				flush_dcache_page(newpage + i);
++		}
+ 	}
+ out:
+ 	return rc;
+diff --git a/mm/mmzone.c b/mm/mmzone.c
+index 4686fdc23bb96..f337831affc2d 100644
+--- a/mm/mmzone.c
++++ b/mm/mmzone.c
+@@ -72,20 +72,6 @@ struct zoneref *__next_zones_zonelist(struct zoneref *z,
+ 	return z;
+ }
+ 
+-#ifdef CONFIG_ARCH_HAS_HOLES_MEMORYMODEL
+-bool memmap_valid_within(unsigned long pfn,
+-					struct page *page, struct zone *zone)
+-{
+-	if (page_to_pfn(page) != pfn)
+-		return false;
+-
+-	if (page_zone(page) != zone)
+-		return false;
+-
+-	return true;
+-}
+-#endif /* CONFIG_ARCH_HAS_HOLES_MEMORYMODEL */
+-
+ void lruvec_init(struct lruvec *lruvec)
+ {
+ 	enum lru_list lru;
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 9a3d451402d7b..078d95cd32c53 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -83,6 +83,8 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
+ 			/* don't free the page */
+ 			goto out;
+ 		}
++
++		flush_dcache_page(page);
+ 	} else {
+ 		page = *pagep;
+ 		*pagep = NULL;
+@@ -595,6 +597,7 @@ retry:
+ 				err = -EFAULT;
+ 				goto out;
+ 			}
++			flush_dcache_page(page);
+ 			goto retry;
+ 		} else
+ 			BUG_ON(page);
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index 698bc0bc18d14..e292e63afebf2 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1503,10 +1503,6 @@ static void pagetypeinfo_showblockcount_print(struct seq_file *m,
+ 		if (!page)
+ 			continue;
+ 
+-		/* Watch for unexpected holes punched in the memmap */
+-		if (!memmap_valid_within(pfn, page, zone))
+-			continue;
+-
+ 		if (page_zone(page) != zone)
+ 			continue;
+ 
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 2e7998bad133b..c331b4176de73 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3718,10 +3718,10 @@ int hci_register_dev(struct hci_dev *hdev)
+ 	 */
+ 	switch (hdev->dev_type) {
+ 	case HCI_PRIMARY:
+-		id = ida_simple_get(&hci_index_ida, 0, 0, GFP_KERNEL);
++		id = ida_simple_get(&hci_index_ida, 0, HCI_MAX_ID, GFP_KERNEL);
+ 		break;
+ 	case HCI_AMP:
+-		id = ida_simple_get(&hci_index_ida, 1, 0, GFP_KERNEL);
++		id = ida_simple_get(&hci_index_ida, 1, HCI_MAX_ID, GFP_KERNEL);
+ 		break;
+ 	default:
+ 		return -EINVAL;
+@@ -3730,7 +3730,7 @@ int hci_register_dev(struct hci_dev *hdev)
+ 	if (id < 0)
+ 		return id;
+ 
+-	sprintf(hdev->name, "hci%d", id);
++	snprintf(hdev->name, sizeof(hdev->name), "hci%d", id);
+ 	hdev->id = id;
+ 
+ 	BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-05-18  9:48 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-05-18  9:48 UTC (permalink / raw
  To: gentoo-commits

commit:     07cf238278a01e2462edd0e487c3803e15a26ead
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 18 09:47:46 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 18 09:47:46 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=07cf2382

Linux patch 5.10.117

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1116_linux-5.10.117.patch | 1969 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1973 insertions(+)

diff --git a/0000_README b/0000_README
index 3d86ae35..28820046 100644
--- a/0000_README
+++ b/0000_README
@@ -507,6 +507,10 @@ Patch:  1115_linux-5.10.116.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.116
 
+Patch:  1116_linux-5.10.117.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.117
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1116_linux-5.10.117.patch b/1116_linux-5.10.117.patch
new file mode 100644
index 00000000..85108d4f
--- /dev/null
+++ b/1116_linux-5.10.117.patch
@@ -0,0 +1,1969 @@
+diff --git a/Makefile b/Makefile
+index c999de1b6a6b2..8f611d79d5e11 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 116
++SUBLEVEL = 117
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
+index ab2b654084fa3..b13e8a6c14b0b 100644
+--- a/arch/arm/include/asm/io.h
++++ b/arch/arm/include/asm/io.h
+@@ -442,6 +442,9 @@ extern void pci_iounmap(struct pci_dev *dev, void __iomem *addr);
+ extern int valid_phys_addr_range(phys_addr_t addr, size_t size);
+ extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);
+ extern int devmem_is_allowed(unsigned long pfn);
++extern bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
++					unsigned long flags);
++#define arch_memremap_can_ram_remap arch_memremap_can_ram_remap
+ #endif
+ 
+ /*
+diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
+index 80fb5a4a5c050..2660bdfcad4d0 100644
+--- a/arch/arm/mm/ioremap.c
++++ b/arch/arm/mm/ioremap.c
+@@ -479,3 +479,11 @@ void __init early_ioremap_init(void)
+ {
+ 	early_ioremap_setup();
+ }
++
++bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
++				 unsigned long flags)
++{
++	unsigned long pfn = PHYS_PFN(offset);
++
++	return memblock_is_map_memory(pfn);
++}
+diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
+index fd172c41df905..5d0d94afc3086 100644
+--- a/arch/arm64/include/asm/io.h
++++ b/arch/arm64/include/asm/io.h
+@@ -203,4 +203,8 @@ extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);
+ 
+ extern int devmem_is_allowed(unsigned long pfn);
+ 
++extern bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
++					unsigned long flags);
++#define arch_memremap_can_ram_remap arch_memremap_can_ram_remap
++
+ #endif	/* __ASM_IO_H */
+diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
+index b5e83c46b23e7..f173a01a0c0ef 100644
+--- a/arch/arm64/mm/ioremap.c
++++ b/arch/arm64/mm/ioremap.c
+@@ -13,6 +13,7 @@
+ #include <linux/mm.h>
+ #include <linux/vmalloc.h>
+ #include <linux/io.h>
++#include <linux/memblock.h>
+ 
+ #include <asm/fixmap.h>
+ #include <asm/tlbflush.h>
+@@ -99,3 +100,11 @@ void __init early_ioremap_init(void)
+ {
+ 	early_ioremap_setup();
+ }
++
++bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
++				 unsigned long flags)
++{
++	unsigned long pfn = PHYS_PFN(offset);
++
++	return memblock_is_map_memory(pfn);
++}
+diff --git a/arch/s390/Makefile b/arch/s390/Makefile
+index 92506918da633..a8cb00f30a7c3 100644
+--- a/arch/s390/Makefile
++++ b/arch/s390/Makefile
+@@ -32,6 +32,16 @@ KBUILD_CFLAGS_DECOMPRESSOR += -fno-stack-protector
+ KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-disable-warning, address-of-packed-member)
+ KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),-g)
+ KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO_DWARF4), $(call cc-option, -gdwarf-4,))
++
++ifdef CONFIG_CC_IS_GCC
++	ifeq ($(call cc-ifversion, -ge, 1200, y), y)
++		ifeq ($(call cc-ifversion, -lt, 1300, y), y)
++			KBUILD_CFLAGS += $(call cc-disable-warning, array-bounds)
++			KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-disable-warning, array-bounds)
++		endif
++	endif
++endif
++
+ UTS_MACHINE	:= s390x
+ STACK_SIZE	:= $(if $(CONFIG_KASAN),65536,16384)
+ CHECKFLAGS	+= -D__s390__ -D__s390x__
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 1372f40d0371f..a4dd500bc141a 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -793,6 +793,8 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+ 		  size_t offset, u32 opt_flags)
+ {
+ 	struct firmware *fw = NULL;
++	struct cred *kern_cred = NULL;
++	const struct cred *old_cred;
+ 	bool nondirect = false;
+ 	int ret;
+ 
+@@ -809,6 +811,18 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+ 	if (ret <= 0) /* error or already assigned */
+ 		goto out;
+ 
++	/*
++	 * We are about to try to access the firmware file. Because we may have been
++	 * called by a driver when serving an unrelated request from userland, we use
++	 * the kernel credentials to read the file.
++	 */
++	kern_cred = prepare_kernel_cred(NULL);
++	if (!kern_cred) {
++		ret = -ENOMEM;
++		goto out;
++	}
++	old_cred = override_creds(kern_cred);
++
+ 	ret = fw_get_filesystem_firmware(device, fw->priv, "", NULL);
+ 
+ 	/* Only full reads can support decompression, platform, and sysfs. */
+@@ -834,6 +848,9 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+ 	} else
+ 		ret = assign_fw(fw, device);
+ 
++	revert_creds(old_cred);
++	put_cred(kern_cred);
++
+  out:
+ 	if (ret < 0) {
+ 		fw_abort_batch_reqs(fw);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_backlight.c b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+index c7a94c94dbf37..f2f3280c3a50e 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_backlight.c
++++ b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+@@ -51,8 +51,9 @@ static bool
+ nouveau_get_backlight_name(char backlight_name[BL_NAME_SIZE],
+ 			   struct nouveau_backlight *bl)
+ {
+-	const int nb = ida_simple_get(&bl_ida, 0, 0, GFP_KERNEL);
+-	if (nb < 0 || nb >= 100)
++	const int nb = ida_alloc_max(&bl_ida, 99, GFP_KERNEL);
++
++	if (nb < 0)
+ 		return false;
+ 	if (nb > 0)
+ 		snprintf(backlight_name, BL_NAME_SIZE, "nv_backlight%d", nb);
+@@ -280,7 +281,7 @@ nouveau_backlight_init(struct drm_connector *connector)
+ 					    nv_encoder, ops, &props);
+ 	if (IS_ERR(bl->dev)) {
+ 		if (bl->id >= 0)
+-			ida_simple_remove(&bl_ida, bl->id);
++			ida_free(&bl_ida, bl->id);
+ 		ret = PTR_ERR(bl->dev);
+ 		goto fail_alloc;
+ 	}
+@@ -306,7 +307,7 @@ nouveau_backlight_fini(struct drm_connector *connector)
+ 		return;
+ 
+ 	if (bl->id >= 0)
+-		ida_simple_remove(&bl_ida, bl->id);
++		ida_free(&bl_ida, bl->id);
+ 
+ 	backlight_device_unregister(bl->dev);
+ 	nv_conn->backlight = NULL;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
+index d0d52c1d4aee0..950a3de3e1166 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
+@@ -123,7 +123,7 @@ nvkm_device_tegra_probe_iommu(struct nvkm_device_tegra *tdev)
+ 
+ 	mutex_init(&tdev->iommu.mutex);
+ 
+-	if (iommu_present(&platform_bus_type)) {
++	if (device_iommu_mapped(dev)) {
+ 		tdev->iommu.domain = iommu_domain_alloc(&platform_bus_type);
+ 		if (!tdev->iommu.domain)
+ 			goto error;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+index c59806d40e15b..97d9d2557447b 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+@@ -498,7 +498,7 @@ static int vmw_fb_kms_detach(struct vmw_fb_par *par,
+ 
+ static int vmw_fb_kms_framebuffer(struct fb_info *info)
+ {
+-	struct drm_mode_fb_cmd2 mode_cmd;
++	struct drm_mode_fb_cmd2 mode_cmd = {0};
+ 	struct vmw_fb_par *par = info->par;
+ 	struct fb_var_screeninfo *var = &info->var;
+ 	struct drm_framebuffer *cur_fb;
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 0c2b032ee6176..f741c7492ee40 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -922,7 +922,7 @@ config SENSORS_LTC4261
+ 
+ config SENSORS_LTQ_CPUTEMP
+ 	bool "Lantiq cpu temperature sensor driver"
+-	depends on LANTIQ
++	depends on SOC_XWAY
+ 	help
+ 	  If you say yes here you get support for the temperature
+ 	  sensor inside your CPU.
+diff --git a/drivers/hwmon/f71882fg.c b/drivers/hwmon/f71882fg.c
+index 4dec793fd07d5..94b35723ee7ad 100644
+--- a/drivers/hwmon/f71882fg.c
++++ b/drivers/hwmon/f71882fg.c
+@@ -1577,8 +1577,9 @@ static ssize_t show_temp(struct device *dev, struct device_attribute *devattr,
+ 		temp *= 125;
+ 		if (sign)
+ 			temp -= 128000;
+-	} else
+-		temp = data->temp[nr] * 1000;
++	} else {
++		temp = ((s8)data->temp[nr]) * 1000;
++	}
+ 
+ 	return sprintf(buf, "%d\n", temp);
+ }
+diff --git a/drivers/hwmon/tmp401.c b/drivers/hwmon/tmp401.c
+index 9dc210b55e69b..48466b0a4bb05 100644
+--- a/drivers/hwmon/tmp401.c
++++ b/drivers/hwmon/tmp401.c
+@@ -730,10 +730,21 @@ static int tmp401_probe(struct i2c_client *client)
+ 	return 0;
+ }
+ 
++static const struct of_device_id __maybe_unused tmp4xx_of_match[] = {
++	{ .compatible = "ti,tmp401", },
++	{ .compatible = "ti,tmp411", },
++	{ .compatible = "ti,tmp431", },
++	{ .compatible = "ti,tmp432", },
++	{ .compatible = "ti,tmp435", },
++	{ },
++};
++MODULE_DEVICE_TABLE(of, tmp4xx_of_match);
++
+ static struct i2c_driver tmp401_driver = {
+ 	.class		= I2C_CLASS_HWMON,
+ 	.driver = {
+ 		.name	= "tmp401",
++		.of_match_table = of_match_ptr(tmp4xx_of_match),
+ 	},
+ 	.probe_new	= tmp401_probe,
+ 	.id_table	= tmp401_id,
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 08a675a5328d7..b712b4f27efd9 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -710,6 +710,9 @@ static void bcm_sf2_sw_mac_link_down(struct dsa_switch *ds, int port,
+ 	struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
+ 	u32 reg, offset;
+ 
++	if (priv->wol_ports_mask & BIT(port))
++		return;
++
+ 	if (port != core_readl(priv, CORE_IMP0_PRT_ID)) {
+ 		if (priv->type == BCM7445_DEVICE_ID)
+ 			offset = CORE_STS_OVERRIDE_GMIIP_PORT(port);
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index bdfd462c74db9..fc5ea434a27c9 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -455,7 +455,7 @@ static int aq_pm_freeze(struct device *dev)
+ 
+ static int aq_pm_suspend_poweroff(struct device *dev)
+ {
+-	return aq_suspend_common(dev, false);
++	return aq_suspend_common(dev, true);
+ }
+ 
+ static int aq_pm_thaw(struct device *dev)
+@@ -465,7 +465,7 @@ static int aq_pm_thaw(struct device *dev)
+ 
+ static int aq_pm_resume_restore(struct device *dev)
+ {
+-	return atl_resume_common(dev, false);
++	return atl_resume_common(dev, true);
+ }
+ 
+ static const struct dev_pm_ops aq_pm_ops = {
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 9ffdaa84ba124..e0a6a2e62d23b 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -3946,6 +3946,10 @@ static int bcmgenet_probe(struct platform_device *pdev)
+ 		goto err;
+ 	}
+ 	priv->wol_irq = platform_get_irq_optional(pdev, 2);
++	if (priv->wol_irq == -EPROBE_DEFER) {
++		err = priv->wol_irq;
++		goto err;
++	}
+ 
+ 	priv->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(priv->base)) {
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index bd18a780a0008..4a18a7c7dd4c2 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -7175,42 +7175,43 @@ static void i40e_free_macvlan_channels(struct i40e_vsi *vsi)
+ static int i40e_fwd_ring_up(struct i40e_vsi *vsi, struct net_device *vdev,
+ 			    struct i40e_fwd_adapter *fwd)
+ {
++	struct i40e_channel *ch = NULL, *ch_tmp, *iter;
+ 	int ret = 0, num_tc = 1,  i, aq_err;
+-	struct i40e_channel *ch, *ch_tmp;
+ 	struct i40e_pf *pf = vsi->back;
+ 	struct i40e_hw *hw = &pf->hw;
+ 
+-	if (list_empty(&vsi->macvlan_list))
+-		return -EINVAL;
+-
+ 	/* Go through the list and find an available channel */
+-	list_for_each_entry_safe(ch, ch_tmp, &vsi->macvlan_list, list) {
+-		if (!i40e_is_channel_macvlan(ch)) {
+-			ch->fwd = fwd;
++	list_for_each_entry_safe(iter, ch_tmp, &vsi->macvlan_list, list) {
++		if (!i40e_is_channel_macvlan(iter)) {
++			iter->fwd = fwd;
+ 			/* record configuration for macvlan interface in vdev */
+ 			for (i = 0; i < num_tc; i++)
+ 				netdev_bind_sb_channel_queue(vsi->netdev, vdev,
+ 							     i,
+-							     ch->num_queue_pairs,
+-							     ch->base_queue);
+-			for (i = 0; i < ch->num_queue_pairs; i++) {
++							     iter->num_queue_pairs,
++							     iter->base_queue);
++			for (i = 0; i < iter->num_queue_pairs; i++) {
+ 				struct i40e_ring *tx_ring, *rx_ring;
+ 				u16 pf_q;
+ 
+-				pf_q = ch->base_queue + i;
++				pf_q = iter->base_queue + i;
+ 
+ 				/* Get to TX ring ptr */
+ 				tx_ring = vsi->tx_rings[pf_q];
+-				tx_ring->ch = ch;
++				tx_ring->ch = iter;
+ 
+ 				/* Get the RX ring ptr */
+ 				rx_ring = vsi->rx_rings[pf_q];
+-				rx_ring->ch = ch;
++				rx_ring->ch = iter;
+ 			}
++			ch = iter;
+ 			break;
+ 		}
+ 	}
+ 
++	if (!ch)
++		return -EINVAL;
++
+ 	/* Guarantee all rings are updated before we update the
+ 	 * MAC address filter.
+ 	 */
+diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
+index c4c4649b2088e..b221b83ec5a6f 100644
+--- a/drivers/net/ethernet/mscc/ocelot_flower.c
++++ b/drivers/net/ethernet/mscc/ocelot_flower.c
+@@ -206,9 +206,10 @@ static int ocelot_flower_parse_action(struct ocelot *ocelot, int port,
+ 			filter->type = OCELOT_VCAP_FILTER_OFFLOAD;
+ 			break;
+ 		case FLOW_ACTION_TRAP:
+-			if (filter->block_id != VCAP_IS2) {
++			if (filter->block_id != VCAP_IS2 ||
++			    filter->lookup != 0) {
+ 				NL_SET_ERR_MSG_MOD(extack,
+-						   "Trap action can only be offloaded to VCAP IS2");
++						   "Trap action can only be offloaded to VCAP IS2 lookup 0");
+ 				return -EOPNOTSUPP;
+ 			}
+ 			if (filter->goto_target != -1) {
+diff --git a/drivers/net/ethernet/mscc/ocelot_vcap.c b/drivers/net/ethernet/mscc/ocelot_vcap.c
+index d8c778ee6f1b8..1185725906073 100644
+--- a/drivers/net/ethernet/mscc/ocelot_vcap.c
++++ b/drivers/net/ethernet/mscc/ocelot_vcap.c
+@@ -373,7 +373,6 @@ static void is2_entry_set(struct ocelot *ocelot, int ix,
+ 			 OCELOT_VCAP_BIT_0);
+ 	vcap_key_set(vcap, &data, VCAP_IS2_HK_IGR_PORT_MASK, 0,
+ 		     ~filter->ingress_port_mask);
+-	vcap_key_bit_set(vcap, &data, VCAP_IS2_HK_FIRST, OCELOT_VCAP_BIT_ANY);
+ 	vcap_key_bit_set(vcap, &data, VCAP_IS2_HK_HOST_MATCH,
+ 			 OCELOT_VCAP_BIT_ANY);
+ 	vcap_key_bit_set(vcap, &data, VCAP_IS2_HK_L2_MC, filter->dmac_mc);
+@@ -1143,6 +1142,8 @@ int ocelot_vcap_filter_add(struct ocelot *ocelot,
+ 		struct ocelot_vcap_filter *tmp;
+ 
+ 		tmp = ocelot_vcap_block_find_filter_by_index(block, i);
++		/* Read back the filter's counters before moving it */
++		vcap_entry_get(ocelot, i - 1, tmp);
+ 		vcap_entry_set(ocelot, i, tmp);
+ 	}
+ 
+@@ -1181,7 +1182,11 @@ int ocelot_vcap_filter_del(struct ocelot *ocelot,
+ 	struct ocelot_vcap_filter del_filter;
+ 	int i, index;
+ 
++	/* Need to inherit the block_id so that vcap_entry_set()
++	 * does not get confused and knows where to install it.
++	 */
+ 	memset(&del_filter, 0, sizeof(del_filter));
++	del_filter.block_id = filter->block_id;
+ 
+ 	/* Gets index of the filter */
+ 	index = ocelot_vcap_block_get_filter_index(block, filter);
+@@ -1196,6 +1201,8 @@ int ocelot_vcap_filter_del(struct ocelot *ocelot,
+ 		struct ocelot_vcap_filter *tmp;
+ 
+ 		tmp = ocelot_vcap_block_find_filter_by_index(block, i);
++		/* Read back the filter's counters before moving it */
++		vcap_entry_get(ocelot, i + 1, tmp);
+ 		vcap_entry_set(ocelot, i, tmp);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+index b0d8499d373bd..31fbe8904222b 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+@@ -251,7 +251,7 @@ static int ionic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	err = ionic_map_bars(ionic);
+ 	if (err)
+-		goto err_out_pci_disable_device;
++		goto err_out_pci_release_regions;
+ 
+ 	/* Configure the device */
+ 	err = ionic_setup(ionic);
+@@ -353,6 +353,7 @@ err_out_teardown:
+ 
+ err_out_unmap_bars:
+ 	ionic_unmap_bars(ionic);
++err_out_pci_release_regions:
+ 	pci_release_regions(pdev);
+ err_out_pci_disable_device:
+ 	pci_disable_device(pdev);
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index 4fa72b573c172..6f950979d25e4 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -3563,6 +3563,11 @@ static int efx_ef10_mtd_probe(struct efx_nic *efx)
+ 		n_parts++;
+ 	}
+ 
++	if (!n_parts) {
++		kfree(parts);
++		return 0;
++	}
++
+ 	rc = efx_mtd_add(efx, &parts[0].common, n_parts, sizeof(*parts));
+ fail:
+ 	if (rc)
+diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
+index 0a8799a208cf4..2ab8571ef1cc0 100644
+--- a/drivers/net/ethernet/sfc/efx_channels.c
++++ b/drivers/net/ethernet/sfc/efx_channels.c
+@@ -744,7 +744,9 @@ void efx_remove_channels(struct efx_nic *efx)
+ 
+ int efx_realloc_channels(struct efx_nic *efx, u32 rxq_entries, u32 txq_entries)
+ {
+-	struct efx_channel *other_channel[EFX_MAX_CHANNELS], *channel;
++	struct efx_channel *other_channel[EFX_MAX_CHANNELS], *channel,
++			   *ptp_channel = efx_ptp_channel(efx);
++	struct efx_ptp_data *ptp_data = efx->ptp_data;
+ 	unsigned int i, next_buffer_table = 0;
+ 	u32 old_rxq_entries, old_txq_entries;
+ 	int rc, rc2;
+@@ -797,11 +799,8 @@ int efx_realloc_channels(struct efx_nic *efx, u32 rxq_entries, u32 txq_entries)
+ 	old_txq_entries = efx->txq_entries;
+ 	efx->rxq_entries = rxq_entries;
+ 	efx->txq_entries = txq_entries;
+-	for (i = 0; i < efx->n_channels; i++) {
+-		channel = efx->channel[i];
+-		efx->channel[i] = other_channel[i];
+-		other_channel[i] = channel;
+-	}
++	for (i = 0; i < efx->n_channels; i++)
++		swap(efx->channel[i], other_channel[i]);
+ 
+ 	/* Restart buffer table allocation */
+ 	efx->next_buffer_table = next_buffer_table;
+@@ -817,6 +816,7 @@ int efx_realloc_channels(struct efx_nic *efx, u32 rxq_entries, u32 txq_entries)
+ 	}
+ 
+ out:
++	efx->ptp_data = NULL;
+ 	/* Destroy unused channel structures */
+ 	for (i = 0; i < efx->n_channels; i++) {
+ 		channel = other_channel[i];
+@@ -827,6 +827,7 @@ out:
+ 		}
+ 	}
+ 
++	efx->ptp_data = ptp_data;
+ 	rc2 = efx_soft_enable_interrupts(efx);
+ 	if (rc2) {
+ 		rc = rc ? rc : rc2;
+@@ -843,11 +844,9 @@ rollback:
+ 	/* Swap back */
+ 	efx->rxq_entries = old_rxq_entries;
+ 	efx->txq_entries = old_txq_entries;
+-	for (i = 0; i < efx->n_channels; i++) {
+-		channel = efx->channel[i];
+-		efx->channel[i] = other_channel[i];
+-		other_channel[i] = channel;
+-	}
++	for (i = 0; i < efx->n_channels; i++)
++		swap(efx->channel[i], other_channel[i]);
++	efx_ptp_update_channel(efx, ptp_channel);
+ 	goto out;
+ }
+ 
+diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c
+index 797e51802ccbb..725b0f38813a9 100644
+--- a/drivers/net/ethernet/sfc/ptp.c
++++ b/drivers/net/ethernet/sfc/ptp.c
+@@ -45,6 +45,7 @@
+ #include "farch_regs.h"
+ #include "tx.h"
+ #include "nic.h" /* indirectly includes ptp.h */
++#include "efx_channels.h"
+ 
+ /* Maximum number of events expected to make up a PTP event */
+ #define	MAX_EVENT_FRAGS			3
+@@ -541,6 +542,12 @@ struct efx_channel *efx_ptp_channel(struct efx_nic *efx)
+ 	return efx->ptp_data ? efx->ptp_data->channel : NULL;
+ }
+ 
++void efx_ptp_update_channel(struct efx_nic *efx, struct efx_channel *channel)
++{
++	if (efx->ptp_data)
++		efx->ptp_data->channel = channel;
++}
++
+ static u32 last_sync_timestamp_major(struct efx_nic *efx)
+ {
+ 	struct efx_channel *channel = efx_ptp_channel(efx);
+@@ -1443,6 +1450,11 @@ int efx_ptp_probe(struct efx_nic *efx, struct efx_channel *channel)
+ 	int rc = 0;
+ 	unsigned int pos;
+ 
++	if (efx->ptp_data) {
++		efx->ptp_data->channel = channel;
++		return 0;
++	}
++
+ 	ptp = kzalloc(sizeof(struct efx_ptp_data), GFP_KERNEL);
+ 	efx->ptp_data = ptp;
+ 	if (!efx->ptp_data)
+@@ -2179,7 +2191,7 @@ static const struct efx_channel_type efx_ptp_channel_type = {
+ 	.pre_probe		= efx_ptp_probe_channel,
+ 	.post_remove		= efx_ptp_remove_channel,
+ 	.get_name		= efx_ptp_get_channel_name,
+-	/* no copy operation; there is no need to reallocate this channel */
++	.copy                   = efx_copy_channel,
+ 	.receive_skb		= efx_ptp_rx,
+ 	.want_txqs		= efx_ptp_want_txqs,
+ 	.keep_eventq		= false,
+diff --git a/drivers/net/ethernet/sfc/ptp.h b/drivers/net/ethernet/sfc/ptp.h
+index 9855e8c9e544d..7b1ef7002b3f0 100644
+--- a/drivers/net/ethernet/sfc/ptp.h
++++ b/drivers/net/ethernet/sfc/ptp.h
+@@ -16,6 +16,7 @@ struct ethtool_ts_info;
+ int efx_ptp_probe(struct efx_nic *efx, struct efx_channel *channel);
+ void efx_ptp_defer_probe_with_channel(struct efx_nic *efx);
+ struct efx_channel *efx_ptp_channel(struct efx_nic *efx);
++void efx_ptp_update_channel(struct efx_nic *efx, struct efx_channel *channel);
+ void efx_ptp_remove(struct efx_nic *efx);
+ int efx_ptp_set_ts_config(struct efx_nic *efx, struct ifreq *ifr);
+ int efx_ptp_get_ts_config(struct efx_nic *efx, struct ifreq *ifr);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+index e29b5523b19bd..f6ea4a0ad5dff 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+@@ -932,8 +932,6 @@ static int xemaclite_open(struct net_device *dev)
+ 	xemaclite_disable_interrupts(lp);
+ 
+ 	if (lp->phy_node) {
+-		u32 bmcr;
+-
+ 		lp->phy_dev = of_phy_connect(lp->ndev, lp->phy_node,
+ 					     xemaclite_adjust_link, 0,
+ 					     PHY_INTERFACE_MODE_MII);
+@@ -944,19 +942,6 @@ static int xemaclite_open(struct net_device *dev)
+ 
+ 		/* EmacLite doesn't support giga-bit speeds */
+ 		phy_set_max_speed(lp->phy_dev, SPEED_100);
+-
+-		/* Don't advertise 1000BASE-T Full/Half duplex speeds */
+-		phy_write(lp->phy_dev, MII_CTRL1000, 0);
+-
+-		/* Advertise only 10 and 100mbps full/half duplex speeds */
+-		phy_write(lp->phy_dev, MII_ADVERTISE, ADVERTISE_ALL |
+-			  ADVERTISE_CSMA);
+-
+-		/* Restart auto negotiation */
+-		bmcr = phy_read(lp->phy_dev, MII_BMCR);
+-		bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
+-		phy_write(lp->phy_dev, MII_BMCR, bmcr);
+-
+ 		phy_start(lp->phy_dev);
+ 	}
+ 
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index db7866b6f7525..18e67eb6d8b4f 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -124,10 +124,15 @@ EXPORT_SYMBOL(phy_print_status);
+  */
+ static int phy_clear_interrupt(struct phy_device *phydev)
+ {
+-	if (phydev->drv->ack_interrupt)
+-		return phydev->drv->ack_interrupt(phydev);
++	int ret = 0;
+ 
+-	return 0;
++	if (phydev->drv->ack_interrupt) {
++		mutex_lock(&phydev->lock);
++		ret = phydev->drv->ack_interrupt(phydev);
++		mutex_unlock(&phydev->lock);
++	}
++
++	return ret;
+ }
+ 
+ /**
+@@ -981,6 +986,36 @@ int phy_disable_interrupts(struct phy_device *phydev)
+ 	return phy_clear_interrupt(phydev);
+ }
+ 
++/**
++ * phy_did_interrupt - Checks if the PHY generated an interrupt
++ * @phydev: target phy_device struct
++ */
++static int phy_did_interrupt(struct phy_device *phydev)
++{
++	int ret;
++
++	mutex_lock(&phydev->lock);
++	ret = phydev->drv->did_interrupt(phydev);
++	mutex_unlock(&phydev->lock);
++
++	return ret;
++}
++
++/**
++ * phy_handle_interrupt - Handle PHY interrupt
++ * @phydev: target phy_device struct
++ */
++static irqreturn_t phy_handle_interrupt(struct phy_device *phydev)
++{
++	irqreturn_t ret;
++
++	mutex_lock(&phydev->lock);
++	ret = phydev->drv->handle_interrupt(phydev);
++	mutex_unlock(&phydev->lock);
++
++	return ret;
++}
++
+ /**
+  * phy_interrupt - PHY interrupt handler
+  * @irq: interrupt line
+@@ -994,9 +1029,9 @@ static irqreturn_t phy_interrupt(int irq, void *phy_dat)
+ 	struct phy_driver *drv = phydev->drv;
+ 
+ 	if (drv->handle_interrupt)
+-		return drv->handle_interrupt(phydev);
++		return phy_handle_interrupt(phydev);
+ 
+-	if (drv->did_interrupt && !drv->did_interrupt(phydev))
++	if (drv->did_interrupt && !phy_did_interrupt(phydev))
+ 		return IRQ_NONE;
+ 
+ 	/* reschedule state queue work to run as soon as possible */
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index efffa65f82143..96068e0d841ae 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -249,6 +249,7 @@ struct sfp {
+ 	struct sfp_eeprom_id id;
+ 	unsigned int module_power_mW;
+ 	unsigned int module_t_start_up;
++	bool tx_fault_ignore;
+ 
+ #if IS_ENABLED(CONFIG_HWMON)
+ 	struct sfp_diag diag;
+@@ -1893,6 +1894,12 @@ static int sfp_sm_mod_probe(struct sfp *sfp, bool report)
+ 	else
+ 		sfp->module_t_start_up = T_START_UP;
+ 
++	if (!memcmp(id.base.vendor_name, "HUAWEI          ", 16) &&
++	    !memcmp(id.base.vendor_pn, "MA5671A         ", 16))
++		sfp->tx_fault_ignore = true;
++	else
++		sfp->tx_fault_ignore = false;
++
+ 	return 0;
+ }
+ 
+@@ -2320,7 +2327,10 @@ static void sfp_check_state(struct sfp *sfp)
+ 	mutex_lock(&sfp->st_mutex);
+ 	state = sfp_get_state(sfp);
+ 	changed = state ^ sfp->state;
+-	changed &= SFP_F_PRESENT | SFP_F_LOS | SFP_F_TX_FAULT;
++	if (sfp->tx_fault_ignore)
++		changed &= SFP_F_PRESENT | SFP_F_LOS;
++	else
++		changed &= SFP_F_PRESENT | SFP_F_LOS | SFP_F_TX_FAULT;
+ 
+ 	for (i = 0; i < GPIO_MAX; i++)
+ 		if (changed & BIT(i))
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index fcad5cdcabfa4..3c931b1b2a0bb 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -367,7 +367,7 @@ void iwl_dbg_tlv_del_timers(struct iwl_trans *trans)
+ 	struct iwl_dbg_tlv_timer_node *node, *tmp;
+ 
+ 	list_for_each_entry_safe(node, tmp, timer_list, list) {
+-		del_timer(&node->timer);
++		del_timer_sync(&node->timer);
+ 		list_del(&node->list);
+ 		kfree(node);
+ 	}
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index cc550ba0c9dfe..afd2d5add04b1 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -2264,11 +2264,13 @@ static void hw_scan_work(struct work_struct *work)
+ 			if (req->ie_len)
+ 				skb_put_data(probe, req->ie, req->ie_len);
+ 
++			rcu_read_lock();
+ 			if (!ieee80211_tx_prepare_skb(hwsim->hw,
+ 						      hwsim->hw_scan_vif,
+ 						      probe,
+ 						      hwsim->tmp_chan->band,
+ 						      NULL)) {
++				rcu_read_unlock();
+ 				kfree_skb(probe);
+ 				continue;
+ 			}
+@@ -2276,6 +2278,7 @@ static void hw_scan_work(struct work_struct *work)
+ 			local_bh_disable();
+ 			mac80211_hwsim_tx_frame(hwsim->hw, probe,
+ 						hwsim->tmp_chan);
++			rcu_read_unlock();
+ 			local_bh_enable();
+ 		}
+ 	}
+diff --git a/drivers/s390/net/ctcm_mpc.c b/drivers/s390/net/ctcm_mpc.c
+index 85a1a4533cbeb..20a6097e1b204 100644
+--- a/drivers/s390/net/ctcm_mpc.c
++++ b/drivers/s390/net/ctcm_mpc.c
+@@ -626,8 +626,6 @@ static void mpc_rcvd_sweep_resp(struct mpcg_info *mpcginfo)
+ 		ctcm_clear_busy_do(dev);
+ 	}
+ 
+-	kfree(mpcginfo);
+-
+ 	return;
+ 
+ }
+@@ -1206,10 +1204,10 @@ static void ctcmpc_unpack_skb(struct channel *ch, struct sk_buff *pskb)
+ 						CTCM_FUNTAIL, dev->name);
+ 			priv->stats.rx_dropped++;
+ 			/* mpcginfo only used for non-data transfers */
+-			kfree(mpcginfo);
+ 			if (do_debug_data)
+ 				ctcmpc_dump_skb(pskb, -8);
+ 		}
++		kfree(mpcginfo);
+ 	}
+ done:
+ 
+@@ -1991,7 +1989,6 @@ static void mpc_action_rcvd_xid0(fsm_instance *fsm, int event, void *arg)
+ 		}
+ 		break;
+ 	}
+-	kfree(mpcginfo);
+ 
+ 	CTCM_PR_DEBUG("ctcmpc:%s() %s xid2:%i xid7:%i xidt_p2:%i \n",
+ 		__func__, ch->id, grp->outstanding_xid2,
+@@ -2052,7 +2049,6 @@ static void mpc_action_rcvd_xid7(fsm_instance *fsm, int event, void *arg)
+ 		mpc_validate_xid(mpcginfo);
+ 		break;
+ 	}
+-	kfree(mpcginfo);
+ 	return;
+ }
+ 
+diff --git a/drivers/s390/net/ctcm_sysfs.c b/drivers/s390/net/ctcm_sysfs.c
+index ded1930a00b2d..e3813a7aa5e68 100644
+--- a/drivers/s390/net/ctcm_sysfs.c
++++ b/drivers/s390/net/ctcm_sysfs.c
+@@ -39,11 +39,12 @@ static ssize_t ctcm_buffer_write(struct device *dev,
+ 	struct ctcm_priv *priv = dev_get_drvdata(dev);
+ 	int rc;
+ 
+-	ndev = priv->channel[CTCM_READ]->netdev;
+-	if (!(priv && priv->channel[CTCM_READ] && ndev)) {
++	if (!(priv && priv->channel[CTCM_READ] &&
++	      priv->channel[CTCM_READ]->netdev)) {
+ 		CTCM_DBF_TEXT(SETUP, CTC_DBF_ERROR, "bfnondev");
+ 		return -ENODEV;
+ 	}
++	ndev = priv->channel[CTCM_READ]->netdev;
+ 
+ 	rc = kstrtouint(buf, 0, &bs1);
+ 	if (rc)
+diff --git a/drivers/s390/net/lcs.c b/drivers/s390/net/lcs.c
+index 440219bcaa2be..06a322bdced6d 100644
+--- a/drivers/s390/net/lcs.c
++++ b/drivers/s390/net/lcs.c
+@@ -1735,10 +1735,11 @@ lcs_get_control(struct lcs_card *card, struct lcs_cmd *cmd)
+ 			lcs_schedule_recovery(card);
+ 			break;
+ 		case LCS_CMD_STOPLAN:
+-			pr_warn("Stoplan for %s initiated by LGW\n",
+-				card->dev->name);
+-			if (card->dev)
++			if (card->dev) {
++				pr_warn("Stoplan for %s initiated by LGW\n",
++					card->dev->name);
+ 				netif_carrier_off(card->dev);
++			}
+ 			break;
+ 		default:
+ 			LCS_DBF_TEXT(5, trace, "noLGWcmd");
+diff --git a/drivers/slimbus/qcom-ctrl.c b/drivers/slimbus/qcom-ctrl.c
+index f04b961b96cd4..ec58091fc948a 100644
+--- a/drivers/slimbus/qcom-ctrl.c
++++ b/drivers/slimbus/qcom-ctrl.c
+@@ -510,9 +510,9 @@ static int qcom_slim_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	ctrl->irq = platform_get_irq(pdev, 0);
+-	if (!ctrl->irq) {
++	if (ctrl->irq < 0) {
+ 		dev_err(&pdev->dev, "no slimbus IRQ\n");
+-		return -ENODEV;
++		return ctrl->irq;
+ 	}
+ 
+ 	sctrl = &ctrl->ctrl;
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index aafc3bb60e52a..b05b7862778c5 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -2276,6 +2276,7 @@ static void gsm_copy_config_values(struct gsm_mux *gsm,
+ 
+ static int gsm_config(struct gsm_mux *gsm, struct gsm_config *c)
+ {
++	int ret = 0;
+ 	int need_close = 0;
+ 	int need_restart = 0;
+ 
+@@ -2343,10 +2344,13 @@ static int gsm_config(struct gsm_mux *gsm, struct gsm_config *c)
+ 	 * FIXME: We need to separate activation/deactivation from adding
+ 	 * and removing from the mux array
+ 	 */
+-	if (need_restart)
+-		gsm_activate_mux(gsm);
+-	if (gsm->initiator && need_close)
+-		gsm_dlci_begin_open(gsm->dlci[0]);
++	if (gsm->dead) {
++		ret = gsm_activate_mux(gsm);
++		if (ret)
++			return ret;
++		if (gsm->initiator)
++			gsm_dlci_begin_open(gsm->dlci[0]);
++	}
+ 	return 0;
+ }
+ 
+diff --git a/drivers/tty/serial/8250/8250_mtk.c b/drivers/tty/serial/8250/8250_mtk.c
+index fb65dc601b237..de48a58460f47 100644
+--- a/drivers/tty/serial/8250/8250_mtk.c
++++ b/drivers/tty/serial/8250/8250_mtk.c
+@@ -37,6 +37,7 @@
+ #define MTK_UART_IER_RTSI	0x40	/* Enable RTS Modem status interrupt */
+ #define MTK_UART_IER_CTSI	0x80	/* Enable CTS Modem status interrupt */
+ 
++#define MTK_UART_EFR		38	/* I/O: Extended Features Register */
+ #define MTK_UART_EFR_EN		0x10	/* Enable enhancement feature */
+ #define MTK_UART_EFR_RTS	0x40	/* Enable hardware rx flow control */
+ #define MTK_UART_EFR_CTS	0x80	/* Enable hardware tx flow control */
+@@ -53,6 +54,9 @@
+ #define MTK_UART_TX_TRIGGER	1
+ #define MTK_UART_RX_TRIGGER	MTK_UART_RX_SIZE
+ 
++#define MTK_UART_XON1		40	/* I/O: Xon character 1 */
++#define MTK_UART_XOFF1		42	/* I/O: Xoff character 1 */
++
+ #ifdef CONFIG_SERIAL_8250_DMA
+ enum dma_rx_status {
+ 	DMA_RX_START = 0,
+@@ -169,7 +173,7 @@ static void mtk8250_dma_enable(struct uart_8250_port *up)
+ 		   MTK_UART_DMA_EN_RX | MTK_UART_DMA_EN_TX);
+ 
+ 	serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+-	serial_out(up, UART_EFR, UART_EFR_ECB);
++	serial_out(up, MTK_UART_EFR, UART_EFR_ECB);
+ 	serial_out(up, UART_LCR, lcr);
+ 
+ 	if (dmaengine_slave_config(dma->rxchan, &dma->rxconf) != 0)
+@@ -232,7 +236,7 @@ static void mtk8250_set_flow_ctrl(struct uart_8250_port *up, int mode)
+ 	int lcr = serial_in(up, UART_LCR);
+ 
+ 	serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+-	serial_out(up, UART_EFR, UART_EFR_ECB);
++	serial_out(up, MTK_UART_EFR, UART_EFR_ECB);
+ 	serial_out(up, UART_LCR, lcr);
+ 	lcr = serial_in(up, UART_LCR);
+ 
+@@ -241,7 +245,7 @@ static void mtk8250_set_flow_ctrl(struct uart_8250_port *up, int mode)
+ 		serial_out(up, MTK_UART_ESCAPE_DAT, MTK_UART_ESCAPE_CHAR);
+ 		serial_out(up, MTK_UART_ESCAPE_EN, 0x00);
+ 		serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+-		serial_out(up, UART_EFR, serial_in(up, UART_EFR) &
++		serial_out(up, MTK_UART_EFR, serial_in(up, MTK_UART_EFR) &
+ 			(~(MTK_UART_EFR_HW_FC | MTK_UART_EFR_SW_FC_MASK)));
+ 		serial_out(up, UART_LCR, lcr);
+ 		mtk8250_disable_intrs(up, MTK_UART_IER_XOFFI |
+@@ -255,8 +259,8 @@ static void mtk8250_set_flow_ctrl(struct uart_8250_port *up, int mode)
+ 		serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+ 
+ 		/*enable hw flow control*/
+-		serial_out(up, UART_EFR, MTK_UART_EFR_HW_FC |
+-			(serial_in(up, UART_EFR) &
++		serial_out(up, MTK_UART_EFR, MTK_UART_EFR_HW_FC |
++			(serial_in(up, MTK_UART_EFR) &
+ 			(~(MTK_UART_EFR_HW_FC | MTK_UART_EFR_SW_FC_MASK))));
+ 
+ 		serial_out(up, UART_LCR, lcr);
+@@ -270,12 +274,12 @@ static void mtk8250_set_flow_ctrl(struct uart_8250_port *up, int mode)
+ 		serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+ 
+ 		/*enable sw flow control */
+-		serial_out(up, UART_EFR, MTK_UART_EFR_XON1_XOFF1 |
+-			(serial_in(up, UART_EFR) &
++		serial_out(up, MTK_UART_EFR, MTK_UART_EFR_XON1_XOFF1 |
++			(serial_in(up, MTK_UART_EFR) &
+ 			(~(MTK_UART_EFR_HW_FC | MTK_UART_EFR_SW_FC_MASK))));
+ 
+-		serial_out(up, UART_XON1, START_CHAR(port->state->port.tty));
+-		serial_out(up, UART_XOFF1, STOP_CHAR(port->state->port.tty));
++		serial_out(up, MTK_UART_XON1, START_CHAR(port->state->port.tty));
++		serial_out(up, MTK_UART_XOFF1, STOP_CHAR(port->state->port.tty));
+ 		serial_out(up, UART_LCR, lcr);
+ 		mtk8250_disable_intrs(up, MTK_UART_IER_CTSI|MTK_UART_IER_RTSI);
+ 		mtk8250_enable_intrs(up, MTK_UART_IER_XOFFI);
+diff --git a/drivers/tty/serial/digicolor-usart.c b/drivers/tty/serial/digicolor-usart.c
+index 13ac36e2da4f0..c7f81aa1ce912 100644
+--- a/drivers/tty/serial/digicolor-usart.c
++++ b/drivers/tty/serial/digicolor-usart.c
+@@ -471,11 +471,10 @@ static int digicolor_uart_probe(struct platform_device *pdev)
+ 	if (IS_ERR(uart_clk))
+ 		return PTR_ERR(uart_clk);
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	dp->port.mapbase = res->start;
+-	dp->port.membase = devm_ioremap_resource(&pdev->dev, res);
++	dp->port.membase = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ 	if (IS_ERR(dp->port.membase))
+ 		return PTR_ERR(dp->port.membase);
++	dp->port.mapbase = res->start;
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0)
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index d1e4a7379bebd..80332b6a1963e 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -755,6 +755,7 @@ static int wdm_release(struct inode *inode, struct file *file)
+ 			poison_urbs(desc);
+ 			spin_lock_irq(&desc->iuspin);
+ 			desc->resp_count = 0;
++			clear_bit(WDM_RESPONDING, &desc->flags);
+ 			spin_unlock_irq(&desc->iuspin);
+ 			desc->manage_power(desc->intf, 0);
+ 			unpoison_urbs(desc);
+diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
+index f48a00e497945..fecdba85ab27f 100644
+--- a/drivers/usb/gadget/function/f_uvc.c
++++ b/drivers/usb/gadget/function/f_uvc.c
+@@ -883,17 +883,42 @@ static void uvc_free(struct usb_function *f)
+ 	kfree(uvc);
+ }
+ 
+-static void uvc_unbind(struct usb_configuration *c, struct usb_function *f)
++static void uvc_function_unbind(struct usb_configuration *c,
++				struct usb_function *f)
+ {
+ 	struct usb_composite_dev *cdev = c->cdev;
+ 	struct uvc_device *uvc = to_uvc(f);
++	long wait_ret = 1;
+ 
+-	uvcg_info(f, "%s\n", __func__);
++	uvcg_info(f, "%s()\n", __func__);
++
++	/* If we know we're connected via v4l2, then there should be a cleanup
++	 * of the device from userspace either via UVC_EVENT_DISCONNECT or
++	 * though the video device removal uevent. Allow some time for the
++	 * application to close out before things get deleted.
++	 */
++	if (uvc->func_connected) {
++		uvcg_dbg(f, "waiting for clean disconnect\n");
++		wait_ret = wait_event_interruptible_timeout(uvc->func_connected_queue,
++				uvc->func_connected == false, msecs_to_jiffies(500));
++		uvcg_dbg(f, "done waiting with ret: %ld\n", wait_ret);
++	}
+ 
+ 	device_remove_file(&uvc->vdev.dev, &dev_attr_function_name);
+ 	video_unregister_device(&uvc->vdev);
+ 	v4l2_device_unregister(&uvc->v4l2_dev);
+ 
++	if (uvc->func_connected) {
++		/* Wait for the release to occur to ensure there are no longer any
++		 * pending operations that may cause panics when resources are cleaned
++		 * up.
++		 */
++		uvcg_warn(f, "%s no clean disconnect, wait for release\n", __func__);
++		wait_ret = wait_event_interruptible_timeout(uvc->func_connected_queue,
++				uvc->func_connected == false, msecs_to_jiffies(1000));
++		uvcg_dbg(f, "done waiting for release with ret: %ld\n", wait_ret);
++	}
++
+ 	usb_ep_free_request(cdev->gadget->ep0, uvc->control_req);
+ 	kfree(uvc->control_buf);
+ 
+@@ -912,6 +937,7 @@ static struct usb_function *uvc_alloc(struct usb_function_instance *fi)
+ 
+ 	mutex_init(&uvc->video.mutex);
+ 	uvc->state = UVC_STATE_DISCONNECTED;
++	init_waitqueue_head(&uvc->func_connected_queue);
+ 	opts = fi_to_f_uvc_opts(fi);
+ 
+ 	mutex_lock(&opts->lock);
+@@ -942,7 +968,7 @@ static struct usb_function *uvc_alloc(struct usb_function_instance *fi)
+ 	/* Register the function. */
+ 	uvc->func.name = "uvc";
+ 	uvc->func.bind = uvc_function_bind;
+-	uvc->func.unbind = uvc_unbind;
++	uvc->func.unbind = uvc_function_unbind;
+ 	uvc->func.get_alt = uvc_function_get_alt;
+ 	uvc->func.set_alt = uvc_function_set_alt;
+ 	uvc->func.disable = uvc_function_disable;
+diff --git a/drivers/usb/gadget/function/uvc.h b/drivers/usb/gadget/function/uvc.h
+index 893aaa70f81a4..6c4fc4913f4fd 100644
+--- a/drivers/usb/gadget/function/uvc.h
++++ b/drivers/usb/gadget/function/uvc.h
+@@ -14,6 +14,7 @@
+ #include <linux/spinlock.h>
+ #include <linux/usb/composite.h>
+ #include <linux/videodev2.h>
++#include <linux/wait.h>
+ 
+ #include <media/v4l2-device.h>
+ #include <media/v4l2-dev.h>
+@@ -118,6 +119,7 @@ struct uvc_device {
+ 	struct usb_function func;
+ 	struct uvc_video video;
+ 	bool func_connected;
++	wait_queue_head_t func_connected_queue;
+ 
+ 	/* Descriptors */
+ 	struct {
+diff --git a/drivers/usb/gadget/function/uvc_v4l2.c b/drivers/usb/gadget/function/uvc_v4l2.c
+index 197c26f7aec63..65abd55ce2348 100644
+--- a/drivers/usb/gadget/function/uvc_v4l2.c
++++ b/drivers/usb/gadget/function/uvc_v4l2.c
+@@ -252,10 +252,11 @@ uvc_v4l2_subscribe_event(struct v4l2_fh *fh,
+ 
+ static void uvc_v4l2_disable(struct uvc_device *uvc)
+ {
+-	uvc->func_connected = false;
+ 	uvc_function_disconnect(uvc);
+ 	uvcg_video_enable(&uvc->video, 0);
+ 	uvcg_free_buffers(&uvc->video.queue);
++	uvc->func_connected = false;
++	wake_up_interruptible(&uvc->func_connected_queue);
+ }
+ 
+ static int
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index f14a090ce6b62..2eb4083c5b45c 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -2123,10 +2123,14 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE(0x1508, 0x1001),						/* Fibocom NL668 (IOT version) */
+ 	  .driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
++	{ USB_DEVICE(0x1782, 0x4d10) },						/* Fibocom L610 (AT mode) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x1782, 0x4d11, 0xff) },			/* Fibocom L610 (ECM/RNDIS mode) */
+ 	{ USB_DEVICE(0x2cb7, 0x0104),						/* Fibocom NL678 series */
+ 	  .driver_info = RSVD(4) | RSVD(5) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0105, 0xff),			/* Fibocom NL678 series */
+ 	  .driver_info = RSVD(6) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0106, 0xff) },			/* Fibocom MA510 (ECM mode w/ diag intf.) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x010a, 0xff) },			/* Fibocom MA510 (ECM mode) */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) },	/* Fibocom FG150 Diag */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) },		/* Fibocom FG150 AT */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) },			/* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index d736822e95e18..16118d9f23920 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -106,6 +106,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(HP_VENDOR_ID, HP_LCM220_PRODUCT_ID) },
+ 	{ USB_DEVICE(HP_VENDOR_ID, HP_LCM960_PRODUCT_ID) },
+ 	{ USB_DEVICE(HP_VENDOR_ID, HP_LM920_PRODUCT_ID) },
++	{ USB_DEVICE(HP_VENDOR_ID, HP_LM930_PRODUCT_ID) },
+ 	{ USB_DEVICE(HP_VENDOR_ID, HP_LM940_PRODUCT_ID) },
+ 	{ USB_DEVICE(HP_VENDOR_ID, HP_TD620_PRODUCT_ID) },
+ 	{ USB_DEVICE(CRESSI_VENDOR_ID, CRESSI_EDY_PRODUCT_ID) },
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index c5406452b774e..732f9b13ad5d5 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -135,6 +135,7 @@
+ #define HP_TD620_PRODUCT_ID	0x0956
+ #define HP_LD960_PRODUCT_ID	0x0b39
+ #define HP_LD381_PRODUCT_ID	0x0f7f
++#define HP_LM930_PRODUCT_ID	0x0f9b
+ #define HP_LCM220_PRODUCT_ID	0x3139
+ #define HP_LCM960_PRODUCT_ID	0x3239
+ #define HP_LD220_PRODUCT_ID	0x3524
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index c18bf8164bc2e..586ef5551e76e 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -166,6 +166,8 @@ static const struct usb_device_id id_table[] = {
+ 	{DEVICE_SWI(0x1199, 0x9090)},	/* Sierra Wireless EM7565 QDL */
+ 	{DEVICE_SWI(0x1199, 0x9091)},	/* Sierra Wireless EM7565 */
+ 	{DEVICE_SWI(0x1199, 0x90d2)},	/* Sierra Wireless EM9191 QDL */
++	{DEVICE_SWI(0x1199, 0xc080)},	/* Sierra Wireless EM7590 QDL */
++	{DEVICE_SWI(0x1199, 0xc081)},	/* Sierra Wireless EM7590 */
+ 	{DEVICE_SWI(0x413c, 0x81a2)},	/* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */
+ 	{DEVICE_SWI(0x413c, 0x81a3)},	/* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */
+ 	{DEVICE_SWI(0x413c, 0x81a4)},	/* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index a06da1854c10c..49420e28a1f76 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -709,7 +709,7 @@ static int tcpci_remove(struct i2c_client *client)
+ 	/* Disable chip interrupts before unregistering port */
+ 	err = tcpci_write16(chip->tcpci, TCPC_ALERT_MASK, 0);
+ 	if (err < 0)
+-		return err;
++		dev_warn(&client->dev, "Failed to disable irqs (%pe)\n", ERR_PTR(err));
+ 
+ 	tcpci_unregister_port(chip->tcpci);
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpci_mt6360.c b/drivers/usb/typec/tcpm/tcpci_mt6360.c
+index f1bd9e09bc87f..8a952eaf90163 100644
+--- a/drivers/usb/typec/tcpm/tcpci_mt6360.c
++++ b/drivers/usb/typec/tcpm/tcpci_mt6360.c
+@@ -15,6 +15,9 @@
+ 
+ #include "tcpci.h"
+ 
++#define MT6360_REG_PHYCTRL1	0x80
++#define MT6360_REG_PHYCTRL3	0x82
++#define MT6360_REG_PHYCTRL7	0x86
+ #define MT6360_REG_VCONNCTRL1	0x8C
+ #define MT6360_REG_MODECTRL2	0x8F
+ #define MT6360_REG_SWRESET	0xA0
+@@ -22,6 +25,8 @@
+ #define MT6360_REG_DRPCTRL1	0xA2
+ #define MT6360_REG_DRPCTRL2	0xA3
+ #define MT6360_REG_I2CTORST	0xBF
++#define MT6360_REG_PHYCTRL11	0xCA
++#define MT6360_REG_RXCTRL1	0xCE
+ #define MT6360_REG_RXCTRL2	0xCF
+ #define MT6360_REG_CTDCTRL2	0xEC
+ 
+@@ -106,6 +111,27 @@ static int mt6360_tcpc_init(struct tcpci *tcpci, struct tcpci_data *tdata)
+ 	if (ret)
+ 		return ret;
+ 
++	/* BMC PHY */
++	ret = mt6360_tcpc_write16(regmap, MT6360_REG_PHYCTRL1, 0x3A70);
++	if (ret)
++		return ret;
++
++	ret = regmap_write(regmap, MT6360_REG_PHYCTRL3,  0x82);
++	if (ret)
++		return ret;
++
++	ret = regmap_write(regmap, MT6360_REG_PHYCTRL7, 0x36);
++	if (ret)
++		return ret;
++
++	ret = mt6360_tcpc_write16(regmap, MT6360_REG_PHYCTRL11, 0x3C60);
++	if (ret)
++		return ret;
++
++	ret = regmap_write(regmap, MT6360_REG_RXCTRL1, 0xE8);
++	if (ret)
++		return ret;
++
+ 	/* Set shipping mode off, AUTOIDLE on */
+ 	return regmap_write(regmap, MT6360_REG_MODECTRL2, 0x7A);
+ }
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 450050801f3b6..93d986856f1c9 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -592,9 +592,15 @@ static int ceph_finish_async_create(struct inode *dir, struct dentry *dentry,
+ 	iinfo.change_attr = 1;
+ 	ceph_encode_timespec64(&iinfo.btime, &now);
+ 
+-	iinfo.xattr_len = ARRAY_SIZE(xattr_buf);
+-	iinfo.xattr_data = xattr_buf;
+-	memset(iinfo.xattr_data, 0, iinfo.xattr_len);
++	if (req->r_pagelist) {
++		iinfo.xattr_len = req->r_pagelist->length;
++		iinfo.xattr_data = req->r_pagelist->mapped_tail;
++	} else {
++		/* fake it */
++		iinfo.xattr_len = ARRAY_SIZE(xattr_buf);
++		iinfo.xattr_data = xattr_buf;
++		memset(iinfo.xattr_data, 0, iinfo.xattr_len);
++	}
+ 
+ 	in.ino = cpu_to_le64(vino.ino);
+ 	in.snapid = cpu_to_le64(CEPH_NOSNAP);
+@@ -706,6 +712,10 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ 		err = ceph_security_init_secctx(dentry, mode, &as_ctx);
+ 		if (err < 0)
+ 			goto out_ctx;
++		/* Async create can't handle more than a page of xattrs */
++		if (as_ctx.pagelist &&
++		    !list_is_singular(&as_ctx.pagelist->head))
++			try_async = false;
+ 	} else if (!d_in_lookup(dentry)) {
+ 		/* If it's not being looked up, it's negative */
+ 		return -ENOENT;
+diff --git a/fs/file_table.c b/fs/file_table.c
+index 709ada3151da5..7a3b4a7f68086 100644
+--- a/fs/file_table.c
++++ b/fs/file_table.c
+@@ -376,6 +376,7 @@ void __fput_sync(struct file *file)
+ }
+ 
+ EXPORT_SYMBOL(fput);
++EXPORT_SYMBOL(__fput_sync);
+ 
+ void __init files_init(void)
+ {
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index 6c047570d6a94..b4fde3a8eeb4b 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -1235,13 +1235,12 @@ static int gfs2_iomap_end(struct inode *inode, loff_t pos, loff_t length,
+ 
+ 	if (length != written && (iomap->flags & IOMAP_F_NEW)) {
+ 		/* Deallocate blocks that were just allocated. */
+-		loff_t blockmask = i_blocksize(inode) - 1;
+-		loff_t end = (pos + length) & ~blockmask;
++		loff_t hstart = round_up(pos + written, i_blocksize(inode));
++		loff_t hend = iomap->offset + iomap->length;
+ 
+-		pos = (pos + written + blockmask) & ~blockmask;
+-		if (pos < end) {
+-			truncate_pagecache_range(inode, pos, end - 1);
+-			punch_hole(ip, pos, end - pos);
++		if (hstart < hend) {
++			truncate_pagecache_range(inode, hstart, hend - 1);
++			punch_hole(ip, hstart, hend - hstart);
+ 		}
+ 	}
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index ab9290ab4cae0..4330603eae35d 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1156,7 +1156,7 @@ static inline void __io_req_init_async(struct io_kiocb *req)
+  */
+ static inline void io_req_init_async(struct io_kiocb *req)
+ {
+-	struct io_uring_task *tctx = current->io_uring;
++	struct io_uring_task *tctx = req->task->io_uring;
+ 
+ 	if (req->flags & REQ_F_WORK_INITIALIZED)
+ 		return;
+diff --git a/fs/nfs/fs_context.c b/fs/nfs/fs_context.c
+index 05b39e8f97b9a..d60c086c6e9c7 100644
+--- a/fs/nfs/fs_context.c
++++ b/fs/nfs/fs_context.c
+@@ -476,7 +476,7 @@ static int nfs_fs_context_parse_param(struct fs_context *fc,
+ 		if (result.negated)
+ 			ctx->flags &= ~NFS_MOUNT_SOFTREVAL;
+ 		else
+-			ctx->flags &= NFS_MOUNT_SOFTREVAL;
++			ctx->flags |= NFS_MOUNT_SOFTREVAL;
+ 		break;
+ 	case Opt_posix:
+ 		if (result.negated)
+diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h
+index f96b7f8d82e52..e2a92697a6638 100644
+--- a/include/linux/netdev_features.h
++++ b/include/linux/netdev_features.h
+@@ -158,7 +158,7 @@ enum {
+ #define NETIF_F_GSO_FRAGLIST	__NETIF_F(GSO_FRAGLIST)
+ #define NETIF_F_HW_MACSEC	__NETIF_F(HW_MACSEC)
+ 
+-/* Finds the next feature with the highest number of the range of start till 0.
++/* Finds the next feature with the highest number of the range of start-1 till 0.
+  */
+ static inline int find_next_netdev_feature(u64 feature, unsigned long start)
+ {
+@@ -177,7 +177,7 @@ static inline int find_next_netdev_feature(u64 feature, unsigned long start)
+ 	for ((bit) = find_next_netdev_feature((mask_addr),		\
+ 					      NETDEV_FEATURE_COUNT);	\
+ 	     (bit) >= 0;						\
+-	     (bit) = find_next_netdev_feature((mask_addr), (bit) - 1))
++	     (bit) = find_next_netdev_feature((mask_addr), (bit)))
+ 
+ /* Features valid for ethtool to change */
+ /* = all defined minus driver/device-class-related */
+diff --git a/include/linux/sunrpc/xprtsock.h b/include/linux/sunrpc/xprtsock.h
+index 8c2a712cb2420..689062afdd610 100644
+--- a/include/linux/sunrpc/xprtsock.h
++++ b/include/linux/sunrpc/xprtsock.h
+@@ -89,5 +89,6 @@ struct sock_xprt {
+ #define XPRT_SOCK_WAKE_WRITE	(5)
+ #define XPRT_SOCK_WAKE_PENDING	(6)
+ #define XPRT_SOCK_WAKE_DISCONNECT	(7)
++#define XPRT_SOCK_CONNECT_SENT	(8)
+ 
+ #endif /* _LINUX_SUNRPC_XPRTSOCK_H */
+diff --git a/include/net/tc_act/tc_pedit.h b/include/net/tc_act/tc_pedit.h
+index 748cf87a4d7ea..3e02709a1df65 100644
+--- a/include/net/tc_act/tc_pedit.h
++++ b/include/net/tc_act/tc_pedit.h
+@@ -14,6 +14,7 @@ struct tcf_pedit {
+ 	struct tc_action	common;
+ 	unsigned char		tcfp_nkeys;
+ 	unsigned char		tcfp_flags;
++	u32			tcfp_off_max_hint;
+ 	struct tc_pedit_key	*tcfp_keys;
+ 	struct tcf_pedit_key_ex	*tcfp_keys_ex;
+ };
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index ed1bbac004d52..8220369ee6105 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -1006,7 +1006,6 @@ DEFINE_RPC_XPRT_LIFETIME_EVENT(connect);
+ DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_auto);
+ DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_done);
+ DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_force);
+-DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_cleanup);
+ DEFINE_RPC_XPRT_LIFETIME_EVENT(destroy);
+ 
+ DECLARE_EVENT_CLASS(rpc_xprt_event,
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 0aa224c31f10a..ec39e123c2a51 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -3301,8 +3301,11 @@ static struct notifier_block cpuset_track_online_nodes_nb = {
+  */
+ void __init cpuset_init_smp(void)
+ {
+-	cpumask_copy(top_cpuset.cpus_allowed, cpu_active_mask);
+-	top_cpuset.mems_allowed = node_states[N_MEMORY];
++	/*
++	 * cpus_allowd/mems_allowed set to v2 values in the initial
++	 * cpuset_bind() call will be reset to v1 values in another
++	 * cpuset_bind() call when v1 cpuset is mounted.
++	 */
+ 	top_cpuset.old_mems_allowed = top_cpuset.mems_allowed;
+ 
+ 	cpumask_copy(top_cpuset.effective_cpus, cpu_active_mask);
+diff --git a/lib/dim/net_dim.c b/lib/dim/net_dim.c
+index a4db51c212663..dae3b51ac3d9b 100644
+--- a/lib/dim/net_dim.c
++++ b/lib/dim/net_dim.c
+@@ -12,41 +12,41 @@
+  *        Each profile size must be of NET_DIM_PARAMS_NUM_PROFILES
+  */
+ #define NET_DIM_PARAMS_NUM_PROFILES 5
+-#define NET_DIM_DEFAULT_RX_CQ_MODERATION_PKTS_FROM_EQE 256
+-#define NET_DIM_DEFAULT_TX_CQ_MODERATION_PKTS_FROM_EQE 128
++#define NET_DIM_DEFAULT_RX_CQ_PKTS_FROM_EQE 256
++#define NET_DIM_DEFAULT_TX_CQ_PKTS_FROM_EQE 128
+ #define NET_DIM_DEF_PROFILE_CQE 1
+ #define NET_DIM_DEF_PROFILE_EQE 1
+ 
+ #define NET_DIM_RX_EQE_PROFILES { \
+-	{1,   NET_DIM_DEFAULT_RX_CQ_MODERATION_PKTS_FROM_EQE}, \
+-	{8,   NET_DIM_DEFAULT_RX_CQ_MODERATION_PKTS_FROM_EQE}, \
+-	{64,  NET_DIM_DEFAULT_RX_CQ_MODERATION_PKTS_FROM_EQE}, \
+-	{128, NET_DIM_DEFAULT_RX_CQ_MODERATION_PKTS_FROM_EQE}, \
+-	{256, NET_DIM_DEFAULT_RX_CQ_MODERATION_PKTS_FROM_EQE}, \
++	{.usec = 1,   .pkts = NET_DIM_DEFAULT_RX_CQ_PKTS_FROM_EQE,}, \
++	{.usec = 8,   .pkts = NET_DIM_DEFAULT_RX_CQ_PKTS_FROM_EQE,}, \
++	{.usec = 64,  .pkts = NET_DIM_DEFAULT_RX_CQ_PKTS_FROM_EQE,}, \
++	{.usec = 128, .pkts = NET_DIM_DEFAULT_RX_CQ_PKTS_FROM_EQE,}, \
++	{.usec = 256, .pkts = NET_DIM_DEFAULT_RX_CQ_PKTS_FROM_EQE,}  \
+ }
+ 
+ #define NET_DIM_RX_CQE_PROFILES { \
+-	{2,  256},             \
+-	{8,  128},             \
+-	{16, 64},              \
+-	{32, 64},              \
+-	{64, 64}               \
++	{.usec = 2,  .pkts = 256,},             \
++	{.usec = 8,  .pkts = 128,},             \
++	{.usec = 16, .pkts = 64,},              \
++	{.usec = 32, .pkts = 64,},              \
++	{.usec = 64, .pkts = 64,}               \
+ }
+ 
+ #define NET_DIM_TX_EQE_PROFILES { \
+-	{1,   NET_DIM_DEFAULT_TX_CQ_MODERATION_PKTS_FROM_EQE},  \
+-	{8,   NET_DIM_DEFAULT_TX_CQ_MODERATION_PKTS_FROM_EQE},  \
+-	{32,  NET_DIM_DEFAULT_TX_CQ_MODERATION_PKTS_FROM_EQE},  \
+-	{64,  NET_DIM_DEFAULT_TX_CQ_MODERATION_PKTS_FROM_EQE},  \
+-	{128, NET_DIM_DEFAULT_TX_CQ_MODERATION_PKTS_FROM_EQE}   \
++	{.usec = 1,   .pkts = NET_DIM_DEFAULT_TX_CQ_PKTS_FROM_EQE,},  \
++	{.usec = 8,   .pkts = NET_DIM_DEFAULT_TX_CQ_PKTS_FROM_EQE,},  \
++	{.usec = 32,  .pkts = NET_DIM_DEFAULT_TX_CQ_PKTS_FROM_EQE,},  \
++	{.usec = 64,  .pkts = NET_DIM_DEFAULT_TX_CQ_PKTS_FROM_EQE,},  \
++	{.usec = 128, .pkts = NET_DIM_DEFAULT_TX_CQ_PKTS_FROM_EQE,}   \
+ }
+ 
+ #define NET_DIM_TX_CQE_PROFILES { \
+-	{5,  128},  \
+-	{8,  64},  \
+-	{16, 32},  \
+-	{32, 32},  \
+-	{64, 32}   \
++	{.usec = 5,  .pkts = 128,},  \
++	{.usec = 8,  .pkts = 64,},  \
++	{.usec = 16, .pkts = 32,},  \
++	{.usec = 32, .pkts = 32,},  \
++	{.usec = 64, .pkts = 32,}   \
+ }
+ 
+ static const struct dim_cq_moder
+diff --git a/net/batman-adv/fragmentation.c b/net/batman-adv/fragmentation.c
+index 1f1f5b0873b29..895d834d479d1 100644
+--- a/net/batman-adv/fragmentation.c
++++ b/net/batman-adv/fragmentation.c
+@@ -478,6 +478,17 @@ int batadv_frag_send_packet(struct sk_buff *skb,
+ 		goto free_skb;
+ 	}
+ 
++	/* GRO might have added fragments to the fragment list instead of
++	 * frags[]. But this is not handled by skb_split and must be
++	 * linearized to avoid incorrect length information after all
++	 * batman-adv fragments were created and submitted to the
++	 * hard-interface
++	 */
++	if (skb_has_frag_list(skb) && __skb_linearize(skb)) {
++		ret = -ENOMEM;
++		goto free_skb;
++	}
++
+ 	/* Create one header to be copied to all fragments */
+ 	frag_header.packet_type = BATADV_UNICAST_FRAG;
+ 	frag_header.version = BATADV_COMPAT_VERSION;
+diff --git a/net/core/secure_seq.c b/net/core/secure_seq.c
+index b5bc680d47553..b8a33c841846f 100644
+--- a/net/core/secure_seq.c
++++ b/net/core/secure_seq.c
+@@ -22,6 +22,8 @@
+ static siphash_key_t net_secret __read_mostly;
+ static siphash_key_t ts_secret __read_mostly;
+ 
++#define EPHEMERAL_PORT_SHUFFLE_PERIOD (10 * HZ)
++
+ static __always_inline void net_secret_init(void)
+ {
+ 	net_get_random_once(&net_secret, sizeof(net_secret));
+@@ -100,11 +102,13 @@ u32 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
+ 	const struct {
+ 		struct in6_addr saddr;
+ 		struct in6_addr daddr;
++		unsigned int timeseed;
+ 		__be16 dport;
+ 	} __aligned(SIPHASH_ALIGNMENT) combined = {
+ 		.saddr = *(struct in6_addr *)saddr,
+ 		.daddr = *(struct in6_addr *)daddr,
+-		.dport = dport
++		.timeseed = jiffies / EPHEMERAL_PORT_SHUFFLE_PERIOD,
++		.dport = dport,
+ 	};
+ 	net_secret_init();
+ 	return siphash(&combined, offsetofend(typeof(combined), dport),
+@@ -145,8 +149,10 @@ EXPORT_SYMBOL_GPL(secure_tcp_seq);
+ u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport)
+ {
+ 	net_secret_init();
+-	return siphash_3u32((__force u32)saddr, (__force u32)daddr,
+-			    (__force u16)dport, &net_secret);
++	return siphash_4u32((__force u32)saddr, (__force u32)daddr,
++			    (__force u16)dport,
++			    jiffies / EPHEMERAL_PORT_SHUFFLE_PERIOD,
++			    &net_secret);
+ }
+ EXPORT_SYMBOL_GPL(secure_ipv4_port_ephemeral);
+ #endif
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index e60ca03543a53..2853a3f0fc632 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -305,6 +305,7 @@ static int ping_check_bind_addr(struct sock *sk, struct inet_sock *isk,
+ 	struct net *net = sock_net(sk);
+ 	if (sk->sk_family == AF_INET) {
+ 		struct sockaddr_in *addr = (struct sockaddr_in *) uaddr;
++		u32 tb_id = RT_TABLE_LOCAL;
+ 		int chk_addr_ret;
+ 
+ 		if (addr_len < sizeof(*addr))
+@@ -320,8 +321,10 @@ static int ping_check_bind_addr(struct sock *sk, struct inet_sock *isk,
+ 
+ 		if (addr->sin_addr.s_addr == htonl(INADDR_ANY))
+ 			chk_addr_ret = RTN_LOCAL;
+-		else
+-			chk_addr_ret = inet_addr_type(net, addr->sin_addr.s_addr);
++		else {
++			tb_id = l3mdev_fib_table_by_index(net, sk->sk_bound_dev_if) ? : tb_id;
++			chk_addr_ret = inet_addr_type_table(net, addr->sin_addr.s_addr, tb_id);
++		}
+ 
+ 		if ((!inet_can_nonlocal_bind(net, isk) &&
+ 		     chk_addr_ret != RTN_LOCAL) ||
+@@ -359,6 +362,14 @@ static int ping_check_bind_addr(struct sock *sk, struct inet_sock *isk,
+ 				return -ENODEV;
+ 			}
+ 		}
++
++		if (!dev && sk->sk_bound_dev_if) {
++			dev = dev_get_by_index_rcu(net, sk->sk_bound_dev_if);
++			if (!dev) {
++				rcu_read_unlock();
++				return -ENODEV;
++			}
++		}
+ 		has_addr = pingv6_ops.ipv6_chk_addr(net, &addr->sin6_addr, dev,
+ 						    scoped);
+ 		rcu_read_unlock();
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index c72d0de8bf714..4080e3c6c50d8 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1792,6 +1792,7 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ #endif
+ 	RT_CACHE_STAT_INC(in_slow_mc);
+ 
++	skb_dst_drop(skb);
+ 	skb_dst_set(skb, &rth->dst);
+ 	return 0;
+ }
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 0dba353d3f8fe..3988403064ab6 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -3528,6 +3528,12 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata,
+ 				cbss->transmitted_bss->bssid);
+ 		bss_conf->bssid_indicator = cbss->max_bssid_indicator;
+ 		bss_conf->bssid_index = cbss->bssid_index;
++	} else {
++		bss_conf->nontransmitted = false;
++		memset(bss_conf->transmitter_bssid, 0,
++		       sizeof(bss_conf->transmitter_bssid));
++		bss_conf->bssid_indicator = 0;
++		bss_conf->bssid_index = 0;
+ 	}
+ 
+ 	/*
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index cbfb601c4ee98..d96a610929d9a 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1988,7 +1988,6 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 		copied = len;
+ 	}
+ 
+-	skb_reset_transport_header(data_skb);
+ 	err = skb_copy_datagram_msg(data_skb, 0, msg, copied);
+ 
+ 	if (msg->msg_name) {
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index b45304446e13d..90510298b32a6 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -149,7 +149,7 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+ 	struct nlattr *pattr;
+ 	struct tcf_pedit *p;
+ 	int ret = 0, err;
+-	int ksize;
++	int i, ksize;
+ 	u32 index;
+ 
+ 	if (!nla) {
+@@ -228,6 +228,18 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+ 		p->tcfp_nkeys = parm->nkeys;
+ 	}
+ 	memcpy(p->tcfp_keys, parm->keys, ksize);
++	p->tcfp_off_max_hint = 0;
++	for (i = 0; i < p->tcfp_nkeys; ++i) {
++		u32 cur = p->tcfp_keys[i].off;
++
++		/* The AT option can read a single byte, we can bound the actual
++		 * value with uchar max.
++		 */
++		cur += (0xff & p->tcfp_keys[i].offmask) >> p->tcfp_keys[i].shift;
++
++		/* Each key touches 4 bytes starting from the computed offset */
++		p->tcfp_off_max_hint = max(p->tcfp_off_max_hint, cur + 4);
++	}
+ 
+ 	p->tcfp_flags = parm->flags;
+ 	goto_ch = tcf_action_set_ctrlact(*a, parm->action, goto_ch);
+@@ -308,13 +320,18 @@ static int tcf_pedit_act(struct sk_buff *skb, const struct tc_action *a,
+ 			 struct tcf_result *res)
+ {
+ 	struct tcf_pedit *p = to_pedit(a);
++	u32 max_offset;
+ 	int i;
+ 
+-	if (skb_unclone(skb, GFP_ATOMIC))
+-		return p->tcf_action;
+-
+ 	spin_lock(&p->tcf_lock);
+ 
++	max_offset = (skb_transport_header_was_set(skb) ?
++		      skb_transport_offset(skb) :
++		      skb_network_offset(skb)) +
++		     p->tcfp_off_max_hint;
++	if (skb_ensure_writable(skb, min(skb->len, max_offset)))
++		goto unlock;
++
+ 	tcf_lastuse_update(&p->tcf_tm);
+ 
+ 	if (p->tcfp_nkeys > 0) {
+@@ -403,6 +420,7 @@ bad:
+ 	p->tcf_qstats.overlimits++;
+ done:
+ 	bstats_update(&p->tcf_bstats, skb);
++unlock:
+ 	spin_unlock(&p->tcf_lock);
+ 	return p->tcf_action;
+ }
+diff --git a/net/smc/smc_rx.c b/net/smc/smc_rx.c
+index fcfac59f8b728..7f7e983e42b1f 100644
+--- a/net/smc/smc_rx.c
++++ b/net/smc/smc_rx.c
+@@ -346,12 +346,12 @@ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
+ 				}
+ 				break;
+ 			}
++			if (!timeo)
++				return -EAGAIN;
+ 			if (signal_pending(current)) {
+ 				read_done = sock_intr_errno(timeo);
+ 				break;
+ 			}
+-			if (!timeo)
+-				return -EAGAIN;
+ 		}
+ 
+ 		if (!smc_rx_data_available(conn)) {
+diff --git a/net/sunrpc/rpc_pipe.c b/net/sunrpc/rpc_pipe.c
+index 5f854ffbab925..bb13620e62468 100644
+--- a/net/sunrpc/rpc_pipe.c
++++ b/net/sunrpc/rpc_pipe.c
+@@ -478,6 +478,7 @@ rpc_get_inode(struct super_block *sb, umode_t mode)
+ 		inode->i_fop = &simple_dir_operations;
+ 		inode->i_op = &simple_dir_inode_operations;
+ 		inc_nlink(inode);
++		break;
+ 	default:
+ 		break;
+ 	}
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 6bc225d64d23f..13d5323f80983 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -731,6 +731,21 @@ void xprt_disconnect_done(struct rpc_xprt *xprt)
+ }
+ EXPORT_SYMBOL_GPL(xprt_disconnect_done);
+ 
++/**
++ * xprt_schedule_autoclose_locked - Try to schedule an autoclose RPC call
++ * @xprt: transport to disconnect
++ */
++static void xprt_schedule_autoclose_locked(struct rpc_xprt *xprt)
++{
++	if (test_and_set_bit(XPRT_CLOSE_WAIT, &xprt->state))
++		return;
++	if (test_and_set_bit(XPRT_LOCKED, &xprt->state) == 0)
++		queue_work(xprtiod_workqueue, &xprt->task_cleanup);
++	else if (xprt->snd_task && !test_bit(XPRT_SND_IS_COOKIE, &xprt->state))
++		rpc_wake_up_queued_task_set_status(&xprt->pending,
++						   xprt->snd_task, -ENOTCONN);
++}
++
+ /**
+  * xprt_force_disconnect - force a transport to disconnect
+  * @xprt: transport to disconnect
+@@ -742,13 +757,7 @@ void xprt_force_disconnect(struct rpc_xprt *xprt)
+ 
+ 	/* Don't race with the test_bit() in xprt_clear_locked() */
+ 	spin_lock(&xprt->transport_lock);
+-	set_bit(XPRT_CLOSE_WAIT, &xprt->state);
+-	/* Try to schedule an autoclose RPC call */
+-	if (test_and_set_bit(XPRT_LOCKED, &xprt->state) == 0)
+-		queue_work(xprtiod_workqueue, &xprt->task_cleanup);
+-	else if (xprt->snd_task && !test_bit(XPRT_SND_IS_COOKIE, &xprt->state))
+-		rpc_wake_up_queued_task_set_status(&xprt->pending,
+-						   xprt->snd_task, -ENOTCONN);
++	xprt_schedule_autoclose_locked(xprt);
+ 	spin_unlock(&xprt->transport_lock);
+ }
+ EXPORT_SYMBOL_GPL(xprt_force_disconnect);
+@@ -788,11 +797,7 @@ void xprt_conditional_disconnect(struct rpc_xprt *xprt, unsigned int cookie)
+ 		goto out;
+ 	if (test_bit(XPRT_CLOSING, &xprt->state))
+ 		goto out;
+-	set_bit(XPRT_CLOSE_WAIT, &xprt->state);
+-	/* Try to schedule an autoclose RPC call */
+-	if (test_and_set_bit(XPRT_LOCKED, &xprt->state) == 0)
+-		queue_work(xprtiod_workqueue, &xprt->task_cleanup);
+-	xprt_wake_pending_tasks(xprt, -EAGAIN);
++	xprt_schedule_autoclose_locked(xprt);
+ out:
+ 	spin_unlock(&xprt->transport_lock);
+ }
+@@ -881,12 +886,7 @@ void xprt_connect(struct rpc_task *task)
+ 	if (!xprt_lock_write(xprt, task))
+ 		return;
+ 
+-	if (test_and_clear_bit(XPRT_CLOSE_WAIT, &xprt->state)) {
+-		trace_xprt_disconnect_cleanup(xprt);
+-		xprt->ops->close(xprt);
+-	}
+-
+-	if (!xprt_connected(xprt)) {
++	if (!xprt_connected(xprt) && !test_bit(XPRT_CLOSE_WAIT, &xprt->state)) {
+ 		task->tk_rqstp->rq_connect_cookie = xprt->connect_cookie;
+ 		rpc_sleep_on_timeout(&xprt->pending, task, NULL,
+ 				xprt_request_timeout(task->tk_rqstp));
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 0ca25e3cc5806..ae5b5380f0f03 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -871,7 +871,7 @@ static int xs_local_send_request(struct rpc_rqst *req)
+ 
+ 	/* Close the stream if the previous transmission was incomplete */
+ 	if (xs_send_request_was_aborted(transport, req)) {
+-		xs_close(xprt);
++		xprt_force_disconnect(xprt);
+ 		return -ENOTCONN;
+ 	}
+ 
+@@ -909,7 +909,7 @@ static int xs_local_send_request(struct rpc_rqst *req)
+ 			-status);
+ 		fallthrough;
+ 	case -EPIPE:
+-		xs_close(xprt);
++		xprt_force_disconnect(xprt);
+ 		status = -ENOTCONN;
+ 	}
+ 
+@@ -1191,6 +1191,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
+ 
+ 	if (sk == NULL)
+ 		return;
++	/*
++	 * Make sure we're calling this in a context from which it is safe
++	 * to call __fput_sync(). In practice that means rpciod and the
++	 * system workqueue.
++	 */
++	if (!(current->flags & PF_WQ_WORKER)) {
++		WARN_ON_ONCE(1);
++		set_bit(XPRT_CLOSE_WAIT, &xprt->state);
++		return;
++	}
+ 
+ 	if (atomic_read(&transport->xprt.swapper))
+ 		sk_clear_memalloc(sk);
+@@ -1214,7 +1224,7 @@ static void xs_reset_transport(struct sock_xprt *transport)
+ 	mutex_unlock(&transport->recv_mutex);
+ 
+ 	trace_rpc_socket_close(xprt, sock);
+-	fput(filp);
++	__fput_sync(filp);
+ 
+ 	xprt_disconnect_done(xprt);
+ }
+@@ -1907,6 +1917,7 @@ static int xs_local_setup_socket(struct sock_xprt *transport)
+ 		xprt->stat.connect_time += (long)jiffies -
+ 					   xprt->stat.connect_start;
+ 		xprt_set_connected(xprt);
++		break;
+ 	case -ENOBUFS:
+ 		break;
+ 	case -ENOENT:
+@@ -2260,10 +2271,14 @@ static void xs_tcp_setup_socket(struct work_struct *work)
+ 	struct rpc_xprt *xprt = &transport->xprt;
+ 	int status = -EIO;
+ 
+-	if (!sock) {
+-		sock = xs_create_sock(xprt, transport,
+-				xs_addr(xprt)->sa_family, SOCK_STREAM,
+-				IPPROTO_TCP, true);
++	if (xprt_connected(xprt))
++		goto out;
++	if (test_and_clear_bit(XPRT_SOCK_CONNECT_SENT,
++			       &transport->sock_state) ||
++	    !sock) {
++		xs_reset_transport(transport);
++		sock = xs_create_sock(xprt, transport, xs_addr(xprt)->sa_family,
++				      SOCK_STREAM, IPPROTO_TCP, true);
+ 		if (IS_ERR(sock)) {
+ 			status = PTR_ERR(sock);
+ 			goto out;
+@@ -2294,6 +2309,8 @@ static void xs_tcp_setup_socket(struct work_struct *work)
+ 		break;
+ 	case 0:
+ 	case -EINPROGRESS:
++		set_bit(XPRT_SOCK_CONNECT_SENT, &transport->sock_state);
++		fallthrough;
+ 	case -EALREADY:
+ 		xprt_unlock_connect(xprt, transport);
+ 		return;
+@@ -2347,11 +2364,7 @@ static void xs_connect(struct rpc_xprt *xprt, struct rpc_task *task)
+ 
+ 	if (transport->sock != NULL) {
+ 		dprintk("RPC:       xs_connect delayed xprt %p for %lu "
+-				"seconds\n",
+-				xprt, xprt->reestablish_timeout / HZ);
+-
+-		/* Start by resetting any existing state */
+-		xs_reset_transport(transport);
++			"seconds\n", xprt, xprt->reestablish_timeout / HZ);
+ 
+ 		delay = xprt_reconnect_delay(xprt);
+ 		xprt_reconnect_backoff(xprt, XS_TCP_INIT_REEST_TO);
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 1f56225a10e3c..3c82286e5bcca 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -1345,7 +1345,10 @@ static int tls_device_down(struct net_device *netdev)
+ 
+ 		/* Device contexts for RX and TX will be freed in on sk_destruct
+ 		 * by tls_device_free_ctx. rx_conf and tx_conf stay in TLS_HW.
++		 * Now release the ref taken above.
+ 		 */
++		if (refcount_dec_and_test(&ctx->refcount))
++			tls_device_free_ctx(ctx);
+ 	}
+ 
+ 	up_write(&device_offload_lock);
+diff --git a/sound/soc/codecs/max98090.c b/sound/soc/codecs/max98090.c
+index 945a79e4f3eb4..5b6405392f085 100644
+--- a/sound/soc/codecs/max98090.c
++++ b/sound/soc/codecs/max98090.c
+@@ -413,6 +413,9 @@ static int max98090_put_enab_tlv(struct snd_kcontrol *kcontrol,
+ 
+ 	val = (val >> mc->shift) & mask;
+ 
++	if (sel < 0 || sel > mc->max)
++		return -EINVAL;
++
+ 	*select = sel;
+ 
+ 	/* Setting a volume is only valid if it is already On */
+@@ -427,7 +430,7 @@ static int max98090_put_enab_tlv(struct snd_kcontrol *kcontrol,
+ 		mask << mc->shift,
+ 		sel << mc->shift);
+ 
+-	return 0;
++	return *select != val;
+ }
+ 
+ static const char *max98090_perf_pwr_text[] =
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 2bc9fa6a34b8f..15bfcdbdfaa4e 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -510,7 +510,15 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
+ 	unsigned int mask = (1 << fls(max)) - 1;
+ 	unsigned int invert = mc->invert;
+ 	unsigned int val, val_mask;
+-	int err, ret;
++	int err, ret, tmp;
++
++	tmp = ucontrol->value.integer.value[0];
++	if (tmp < 0)
++		return -EINVAL;
++	if (mc->platform_max && tmp > mc->platform_max)
++		return -EINVAL;
++	if (tmp > mc->max - mc->min + 1)
++		return -EINVAL;
+ 
+ 	if (invert)
+ 		val = (max - ucontrol->value.integer.value[0]) & mask;
+@@ -525,6 +533,14 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
+ 	ret = err;
+ 
+ 	if (snd_soc_volsw_is_stereo(mc)) {
++		tmp = ucontrol->value.integer.value[1];
++		if (tmp < 0)
++			return -EINVAL;
++		if (mc->platform_max && tmp > mc->platform_max)
++			return -EINVAL;
++		if (tmp > mc->max - mc->min + 1)
++			return -EINVAL;
++
+ 		if (invert)
+ 			val = (max - ucontrol->value.integer.value[1]) & mask;
+ 		else
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index 01ec6876e8f58..d8479552e2224 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -44,9 +44,9 @@ CAN_BUILD_I386 := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_32bit_prog
+ CAN_BUILD_X86_64 := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_64bit_program.c)
+ CAN_BUILD_WITH_NOPIE := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_program.c -no-pie)
+ 
+-TARGETS := protection_keys
+-BINARIES_32 := $(TARGETS:%=%_32)
+-BINARIES_64 := $(TARGETS:%=%_64)
++VMTARGETS := protection_keys
++BINARIES_32 := $(VMTARGETS:%=%_32)
++BINARIES_64 := $(VMTARGETS:%=%_64)
+ 
+ ifeq ($(CAN_BUILD_WITH_NOPIE),1)
+ CFLAGS += -no-pie
+@@ -101,7 +101,7 @@ $(BINARIES_32): CFLAGS += -m32
+ $(BINARIES_32): LDLIBS += -lrt -ldl -lm
+ $(BINARIES_32): $(OUTPUT)/%_32: %.c
+ 	$(CC) $(CFLAGS) $(EXTRA_CFLAGS) $(notdir $^) $(LDLIBS) -o $@
+-$(foreach t,$(TARGETS),$(eval $(call gen-target-rule-32,$(t))))
++$(foreach t,$(VMTARGETS),$(eval $(call gen-target-rule-32,$(t))))
+ endif
+ 
+ ifeq ($(CAN_BUILD_X86_64),1)
+@@ -109,7 +109,7 @@ $(BINARIES_64): CFLAGS += -m64
+ $(BINARIES_64): LDLIBS += -lrt -ldl
+ $(BINARIES_64): $(OUTPUT)/%_64: %.c
+ 	$(CC) $(CFLAGS) $(EXTRA_CFLAGS) $(notdir $^) $(LDLIBS) -o $@
+-$(foreach t,$(TARGETS),$(eval $(call gen-target-rule-64,$(t))))
++$(foreach t,$(VMTARGETS),$(eval $(call gen-target-rule-64,$(t))))
+ endif
+ 
+ # x86_64 users should be encouraged to install 32-bit libraries


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-05-25 11:54 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-05-25 11:54 UTC (permalink / raw
  To: gentoo-commits

commit:     9508963d410e67c508a455358ee82b8e8dbbaac1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 25 11:54:13 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 25 11:54:13 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9508963d

Linux patch 5.10.118

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1117_linux-5.10.118.patch | 3822 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3826 insertions(+)

diff --git a/0000_README b/0000_README
index 28820046..51c56a3e 100644
--- a/0000_README
+++ b/0000_README
@@ -511,6 +511,10 @@ Patch:  1116_linux-5.10.117.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.117
 
+Patch:  1117_linux-5.10.118.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.118
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1117_linux-5.10.118.patch b/1117_linux-5.10.118.patch
new file mode 100644
index 00000000..969662bd
--- /dev/null
+++ b/1117_linux-5.10.118.patch
@@ -0,0 +1,3822 @@
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index 7195102472929..f01eed0ee23ad 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -160,6 +160,9 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Qualcomm Tech. | Kryo4xx Silver  | N/A             | ARM64_ERRATUM_1024718       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Qualcomm Tech. | Kryo4xx Gold    | N/A             | ARM64_ERRATUM_1286807       |
+++----------------+-----------------+-----------------+-----------------------------+
++
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Fujitsu        | A64FX           | E#010001        | FUJITSU_ERRATUM_010001      |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst
+index 17706dc91ec9f..1887d92e8e926 100644
+--- a/Documentation/core-api/dma-attributes.rst
++++ b/Documentation/core-api/dma-attributes.rst
+@@ -130,11 +130,3 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
+ subsystem that the buffer is fully accessible at the elevated privilege
+ level (and ideally inaccessible or at least read-only at the
+ lesser-privileged levels).
+-
+-DMA_ATTR_OVERWRITE
+-------------------
+-
+-This is a hint to the DMA-mapping subsystem that the device is expected to
+-overwrite the entire mapped size, thus the caller does not require any of the
+-previous buffer contents to be preserved. This allows bounce-buffering
+-implementations to optimise DMA_FROM_DEVICE transfers.
+diff --git a/Documentation/devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml b/Documentation/devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml
+index c78ab7e2eee70..fa83c69820f86 100644
+--- a/Documentation/devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml
+@@ -58,7 +58,7 @@ patternProperties:
+           $ref: "/schemas/types.yaml#/definitions/string"
+           enum: [ ADC0, ADC1, ADC10, ADC11, ADC12, ADC13, ADC14, ADC15, ADC2,
+                   ADC3, ADC4, ADC5, ADC6, ADC7, ADC8, ADC9, BMCINT, EMMCG1, EMMCG4,
+-                  EMMCG8, ESPI, ESPIALT, FSI1, FSI2, FWSPIABR, FWSPID, FWQSPID, FWSPIWP,
++                  EMMCG8, ESPI, ESPIALT, FSI1, FSI2, FWSPIABR, FWSPID, FWSPIWP,
+                   GPIT0, GPIT1, GPIT2, GPIT3, GPIT4, GPIT5, GPIT6, GPIT7, GPIU0, GPIU1,
+                   GPIU2, GPIU3, GPIU4, GPIU5, GPIU6, GPIU7, HVI3C3, HVI3C4, I2C1, I2C10,
+                   I2C11, I2C12, I2C13, I2C14, I2C15, I2C16, I2C2, I2C3, I2C4, I2C5,
+diff --git a/Makefile b/Makefile
+index 8f611d79d5e11..f9210e43121dc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 117
++SUBLEVEL = 118
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+index a362714ae9fc0..1ef89dd55d92b 100644
+--- a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
++++ b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+@@ -117,11 +117,6 @@
+ 		groups = "FWSPID";
+ 	};
+ 
+-	pinctrl_fwqspid_default: fwqspid_default {
+-		function = "FWSPID";
+-		groups = "FWQSPID";
+-	};
+-
+ 	pinctrl_fwspiwp_default: fwspiwp_default {
+ 		function = "FWSPIWP";
+ 		groups = "FWSPIWP";
+@@ -653,12 +648,12 @@
+ 	};
+ 
+ 	pinctrl_qspi1_default: qspi1_default {
+-		function = "QSPI1";
++		function = "SPI1";
+ 		groups = "QSPI1";
+ 	};
+ 
+ 	pinctrl_qspi2_default: qspi2_default {
+-		function = "QSPI2";
++		function = "SPI2";
+ 		groups = "QSPI2";
+ 	};
+ 
+diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
+index c3ebe3584103b..030351d169aac 100644
+--- a/arch/arm/kernel/entry-armv.S
++++ b/arch/arm/kernel/entry-armv.S
+@@ -1043,7 +1043,7 @@ vector_bhb_loop8_\name:
+ 
+ 	@ bhb workaround
+ 	mov	r0, #8
+-3:	b	. + 4
++3:	W(b)	. + 4
+ 	subs	r0, r0, #1
+ 	bne	3b
+ 	dsb
+diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c
+index db798eac74315..8247749998259 100644
+--- a/arch/arm/kernel/stacktrace.c
++++ b/arch/arm/kernel/stacktrace.c
+@@ -53,17 +53,17 @@ int notrace unwind_frame(struct stackframe *frame)
+ 		return -EINVAL;
+ 
+ 	frame->sp = frame->fp;
+-	frame->fp = *(unsigned long *)(fp);
+-	frame->pc = *(unsigned long *)(fp + 4);
++	frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
++	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 4));
+ #else
+ 	/* check current frame pointer is within bounds */
+ 	if (fp < low + 12 || fp > high - 4)
+ 		return -EINVAL;
+ 
+ 	/* restore the registers from the stack frame */
+-	frame->fp = *(unsigned long *)(fp - 12);
+-	frame->sp = *(unsigned long *)(fp - 8);
+-	frame->pc = *(unsigned long *)(fp - 4);
++	frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 12));
++	frame->sp = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 8));
++	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 4));
+ #endif
+ 
+ 	return 0;
+diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
+index 06dbfb968182d..fb9f3eb6bf483 100644
+--- a/arch/arm/mm/proc-v7-bugs.c
++++ b/arch/arm/mm/proc-v7-bugs.c
+@@ -288,6 +288,7 @@ void cpu_v7_ca15_ibe(void)
+ {
+ 	if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0)))
+ 		cpu_v7_spectre_v2_init();
++	cpu_v7_spectre_bhb_init();
+ }
+ 
+ void cpu_v7_bugs_init(void)
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 533559c7d2b31..ca42d58e8c821 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -220,6 +220,8 @@ static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = {
+ #ifdef CONFIG_ARM64_ERRATUM_1286807
+ 	{
+ 		ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 0),
++		/* Kryo4xx Gold (rcpe to rfpe) => (r0p0 to r3p0) */
++		ERRATA_MIDR_RANGE(MIDR_QCOM_KRYO_4XX_GOLD, 0xc, 0xe, 0xf, 0xe),
+ 	},
+ #endif
+ 	{},
+diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
+index 7a66a7d9c1ffc..4a069f85bd91b 100644
+--- a/arch/arm64/kernel/mte.c
++++ b/arch/arm64/kernel/mte.c
+@@ -45,6 +45,9 @@ void mte_sync_tags(pte_t *ptep, pte_t pte)
+ 		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+ 			mte_sync_page_tags(page, ptep, check_swap);
+ 	}
++
++	/* ensure the tags are visible before the PTE is set */
++	smp_wmb();
+ }
+ 
+ int memcmp_pages(struct page *page1, struct page *page2)
+diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
+index c07d7a0349410..69ec670bcc709 100644
+--- a/arch/arm64/kernel/paravirt.c
++++ b/arch/arm64/kernel/paravirt.c
+@@ -30,7 +30,7 @@ struct paravirt_patch_template pv_ops;
+ EXPORT_SYMBOL_GPL(pv_ops);
+ 
+ struct pv_time_stolen_time_region {
+-	struct pvclock_vcpu_stolen_time *kaddr;
++	struct pvclock_vcpu_stolen_time __rcu *kaddr;
+ };
+ 
+ static DEFINE_PER_CPU(struct pv_time_stolen_time_region, stolen_time_region);
+@@ -47,7 +47,9 @@ early_param("no-steal-acc", parse_no_stealacc);
+ /* return stolen time in ns by asking the hypervisor */
+ static u64 pv_steal_clock(int cpu)
+ {
++	struct pvclock_vcpu_stolen_time *kaddr = NULL;
+ 	struct pv_time_stolen_time_region *reg;
++	u64 ret = 0;
+ 
+ 	reg = per_cpu_ptr(&stolen_time_region, cpu);
+ 
+@@ -56,28 +58,37 @@ static u64 pv_steal_clock(int cpu)
+ 	 * online notification callback runs. Until the callback
+ 	 * has run we just return zero.
+ 	 */
+-	if (!reg->kaddr)
++	rcu_read_lock();
++	kaddr = rcu_dereference(reg->kaddr);
++	if (!kaddr) {
++		rcu_read_unlock();
+ 		return 0;
++	}
+ 
+-	return le64_to_cpu(READ_ONCE(reg->kaddr->stolen_time));
++	ret = le64_to_cpu(READ_ONCE(kaddr->stolen_time));
++	rcu_read_unlock();
++	return ret;
+ }
+ 
+ static int stolen_time_cpu_down_prepare(unsigned int cpu)
+ {
++	struct pvclock_vcpu_stolen_time *kaddr = NULL;
+ 	struct pv_time_stolen_time_region *reg;
+ 
+ 	reg = this_cpu_ptr(&stolen_time_region);
+ 	if (!reg->kaddr)
+ 		return 0;
+ 
+-	memunmap(reg->kaddr);
+-	memset(reg, 0, sizeof(*reg));
++	kaddr = rcu_replace_pointer(reg->kaddr, NULL, true);
++	synchronize_rcu();
++	memunmap(kaddr);
+ 
+ 	return 0;
+ }
+ 
+ static int stolen_time_cpu_online(unsigned int cpu)
+ {
++	struct pvclock_vcpu_stolen_time *kaddr = NULL;
+ 	struct pv_time_stolen_time_region *reg;
+ 	struct arm_smccc_res res;
+ 
+@@ -88,17 +99,19 @@ static int stolen_time_cpu_online(unsigned int cpu)
+ 	if (res.a0 == SMCCC_RET_NOT_SUPPORTED)
+ 		return -EINVAL;
+ 
+-	reg->kaddr = memremap(res.a0,
++	kaddr = memremap(res.a0,
+ 			      sizeof(struct pvclock_vcpu_stolen_time),
+ 			      MEMREMAP_WB);
+ 
++	rcu_assign_pointer(reg->kaddr, kaddr);
++
+ 	if (!reg->kaddr) {
+ 		pr_warn("Failed to map stolen time data structure\n");
+ 		return -ENOMEM;
+ 	}
+ 
+-	if (le32_to_cpu(reg->kaddr->revision) != 0 ||
+-	    le32_to_cpu(reg->kaddr->attributes) != 0) {
++	if (le32_to_cpu(kaddr->revision) != 0 ||
++	    le32_to_cpu(kaddr->attributes) != 0) {
+ 		pr_warn_once("Unexpected revision or attributes in stolen time data\n");
+ 		return -ENXIO;
+ 	}
+diff --git a/arch/mips/lantiq/falcon/sysctrl.c b/arch/mips/lantiq/falcon/sysctrl.c
+index 42222f849bd25..446a2536999bf 100644
+--- a/arch/mips/lantiq/falcon/sysctrl.c
++++ b/arch/mips/lantiq/falcon/sysctrl.c
+@@ -167,6 +167,8 @@ static inline void clkdev_add_sys(const char *dev, unsigned int module,
+ {
+ 	struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL);
+ 
++	if (!clk)
++		return;
+ 	clk->cl.dev_id = dev;
+ 	clk->cl.con_id = NULL;
+ 	clk->cl.clk = clk;
+diff --git a/arch/mips/lantiq/xway/gptu.c b/arch/mips/lantiq/xway/gptu.c
+index 3d5683e75cf1e..200fe9ff641d6 100644
+--- a/arch/mips/lantiq/xway/gptu.c
++++ b/arch/mips/lantiq/xway/gptu.c
+@@ -122,6 +122,8 @@ static inline void clkdev_add_gptu(struct device *dev, const char *con,
+ {
+ 	struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL);
+ 
++	if (!clk)
++		return;
+ 	clk->cl.dev_id = dev_name(dev);
+ 	clk->cl.con_id = con;
+ 	clk->cl.clk = clk;
+diff --git a/arch/mips/lantiq/xway/sysctrl.c b/arch/mips/lantiq/xway/sysctrl.c
+index 917fac1636b71..084f6caba5f23 100644
+--- a/arch/mips/lantiq/xway/sysctrl.c
++++ b/arch/mips/lantiq/xway/sysctrl.c
+@@ -315,6 +315,8 @@ static void clkdev_add_pmu(const char *dev, const char *con, bool deactivate,
+ {
+ 	struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL);
+ 
++	if (!clk)
++		return;
+ 	clk->cl.dev_id = dev;
+ 	clk->cl.con_id = con;
+ 	clk->cl.clk = clk;
+@@ -338,6 +340,8 @@ static void clkdev_add_cgu(const char *dev, const char *con,
+ {
+ 	struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL);
+ 
++	if (!clk)
++		return;
+ 	clk->cl.dev_id = dev;
+ 	clk->cl.con_id = con;
+ 	clk->cl.clk = clk;
+@@ -356,24 +360,28 @@ static void clkdev_add_pci(void)
+ 	struct clk *clk_ext = kzalloc(sizeof(struct clk), GFP_KERNEL);
+ 
+ 	/* main pci clock */
+-	clk->cl.dev_id = "17000000.pci";
+-	clk->cl.con_id = NULL;
+-	clk->cl.clk = clk;
+-	clk->rate = CLOCK_33M;
+-	clk->rates = valid_pci_rates;
+-	clk->enable = pci_enable;
+-	clk->disable = pmu_disable;
+-	clk->module = 0;
+-	clk->bits = PMU_PCI;
+-	clkdev_add(&clk->cl);
++	if (clk) {
++		clk->cl.dev_id = "17000000.pci";
++		clk->cl.con_id = NULL;
++		clk->cl.clk = clk;
++		clk->rate = CLOCK_33M;
++		clk->rates = valid_pci_rates;
++		clk->enable = pci_enable;
++		clk->disable = pmu_disable;
++		clk->module = 0;
++		clk->bits = PMU_PCI;
++		clkdev_add(&clk->cl);
++	}
+ 
+ 	/* use internal/external bus clock */
+-	clk_ext->cl.dev_id = "17000000.pci";
+-	clk_ext->cl.con_id = "external";
+-	clk_ext->cl.clk = clk_ext;
+-	clk_ext->enable = pci_ext_enable;
+-	clk_ext->disable = pci_ext_disable;
+-	clkdev_add(&clk_ext->cl);
++	if (clk_ext) {
++		clk_ext->cl.dev_id = "17000000.pci";
++		clk_ext->cl.con_id = "external";
++		clk_ext->cl.clk = clk_ext;
++		clk_ext->enable = pci_ext_enable;
++		clk_ext->disable = pci_ext_disable;
++		clkdev_add(&clk_ext->cl);
++	}
+ }
+ 
+ /* xway socs can generate clocks on gpio pins */
+@@ -393,9 +401,15 @@ static void clkdev_add_clkout(void)
+ 		char *name;
+ 
+ 		name = kzalloc(sizeof("clkout0"), GFP_KERNEL);
++		if (!name)
++			continue;
+ 		sprintf(name, "clkout%d", i);
+ 
+ 		clk = kzalloc(sizeof(struct clk), GFP_KERNEL);
++		if (!clk) {
++			kfree(name);
++			continue;
++		}
+ 		clk->cl.dev_id = "1f103000.cgu";
+ 		clk->cl.con_id = name;
+ 		clk->cl.clk = clk;
+diff --git a/arch/riscv/boot/dts/sifive/fu540-c000.dtsi b/arch/riscv/boot/dts/sifive/fu540-c000.dtsi
+index 7db8610534834..64c06c9b41dc8 100644
+--- a/arch/riscv/boot/dts/sifive/fu540-c000.dtsi
++++ b/arch/riscv/boot/dts/sifive/fu540-c000.dtsi
+@@ -166,7 +166,7 @@
+ 			clocks = <&prci PRCI_CLK_TLCLK>;
+ 			status = "disabled";
+ 		};
+-		dma: dma@3000000 {
++		dma: dma-controller@3000000 {
+ 			compatible = "sifive,fu540-c000-pdma";
+ 			reg = <0x0 0x3000000 0x0 0x8000>;
+ 			interrupt-parent = <&plic0>;
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index e14e4a3a647a2..74799439b2598 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -69,6 +69,7 @@ struct zpci_dev *get_zdev_by_fid(u32 fid)
+ 	list_for_each_entry(tmp, &zpci_list, entry) {
+ 		if (tmp->fid == fid) {
+ 			zdev = tmp;
++			zpci_zdev_get(zdev);
+ 			break;
+ 		}
+ 	}
+diff --git a/arch/s390/pci/pci_bus.h b/arch/s390/pci/pci_bus.h
+index 55c9488e504cc..8d2fcd091ca73 100644
+--- a/arch/s390/pci/pci_bus.h
++++ b/arch/s390/pci/pci_bus.h
+@@ -13,7 +13,8 @@ void zpci_bus_device_unregister(struct zpci_dev *zdev);
+ void zpci_release_device(struct kref *kref);
+ static inline void zpci_zdev_put(struct zpci_dev *zdev)
+ {
+-	kref_put(&zdev->kref, zpci_release_device);
++	if (zdev)
++		kref_put(&zdev->kref, zpci_release_device);
+ }
+ 
+ static inline void zpci_zdev_get(struct zpci_dev *zdev)
+diff --git a/arch/s390/pci/pci_clp.c b/arch/s390/pci/pci_clp.c
+index 0a0e8b8293bef..d1a5c80a41cb5 100644
+--- a/arch/s390/pci/pci_clp.c
++++ b/arch/s390/pci/pci_clp.c
+@@ -22,6 +22,8 @@
+ #include <asm/clp.h>
+ #include <uapi/asm/clp.h>
+ 
++#include "pci_bus.h"
++
+ bool zpci_unique_uid;
+ 
+ void update_uid_checking(bool new)
+@@ -372,8 +374,11 @@ static void __clp_add(struct clp_fh_list_entry *entry, void *data)
+ 		return;
+ 
+ 	zdev = get_zdev_by_fid(entry->fid);
+-	if (!zdev)
+-		zpci_create_device(entry->fid, entry->fh, entry->config_state);
++	if (zdev) {
++		zpci_zdev_put(zdev);
++		return;
++	}
++	zpci_create_device(entry->fid, entry->fh, entry->config_state);
+ }
+ 
+ int clp_scan_pci_devices(void)
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index b7cfde7e80a8a..6ced44b5be8ab 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -61,10 +61,12 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
+ 	       pdev ? pci_name(pdev) : "n/a", ccdf->pec, ccdf->fid);
+ 
+ 	if (!pdev)
+-		return;
++		goto no_pdev;
+ 
+ 	pdev->error_state = pci_channel_io_perm_failure;
+ 	pci_dev_put(pdev);
++no_pdev:
++	zpci_zdev_put(zdev);
+ }
+ 
+ void zpci_event_error(void *data)
+@@ -76,6 +78,7 @@ void zpci_event_error(void *data)
+ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ {
+ 	struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid);
++	bool existing_zdev = !!zdev;
+ 	enum zpci_state state;
+ 	struct pci_dev *pdev;
+ 	int ret;
+@@ -161,6 +164,8 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ 	default:
+ 		break;
+ 	}
++	if (existing_zdev)
++		zpci_zdev_put(zdev);
+ }
+ 
+ void zpci_event_availability(void *data)
+diff --git a/arch/x86/crypto/chacha-avx512vl-x86_64.S b/arch/x86/crypto/chacha-avx512vl-x86_64.S
+index bb193fde123a0..8713c16c2501a 100644
+--- a/arch/x86/crypto/chacha-avx512vl-x86_64.S
++++ b/arch/x86/crypto/chacha-avx512vl-x86_64.S
+@@ -172,7 +172,7 @@ SYM_FUNC_START(chacha_2block_xor_avx512vl)
+ 	# xor remaining bytes from partial register into output
+ 	mov		%rcx,%rax
+ 	and		$0xf,%rcx
+-	jz		.Ldone8
++	jz		.Ldone2
+ 	mov		%rax,%r9
+ 	and		$~0xf,%r9
+ 
+@@ -438,7 +438,7 @@ SYM_FUNC_START(chacha_4block_xor_avx512vl)
+ 	# xor remaining bytes from partial register into output
+ 	mov		%rcx,%rax
+ 	and		$0xf,%rcx
+-	jz		.Ldone8
++	jz		.Ldone4
+ 	mov		%rax,%r9
+ 	and		$~0xf,%r9
+ 
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 70ef5b542681c..306268f90455f 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5375,6 +5375,7 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
+ {
+ 	struct kvm_mmu_page *sp, *node;
+ 	int nr_zapped, batch = 0;
++	bool unstable;
+ 
+ restart:
+ 	list_for_each_entry_safe_reverse(sp, node,
+@@ -5406,11 +5407,12 @@ restart:
+ 			goto restart;
+ 		}
+ 
+-		if (__kvm_mmu_prepare_zap_page(kvm, sp,
+-				&kvm->arch.zapped_obsolete_pages, &nr_zapped)) {
+-			batch += nr_zapped;
++		unstable = __kvm_mmu_prepare_zap_page(kvm, sp,
++				&kvm->arch.zapped_obsolete_pages, &nr_zapped);
++		batch += nr_zapped;
++
++		if (unstable)
+ 			goto restart;
+-		}
+ 	}
+ 
+ 	/*
+diff --git a/arch/x86/um/shared/sysdep/syscalls_64.h b/arch/x86/um/shared/sysdep/syscalls_64.h
+index 8a7d5e1da98e5..1e6875b4ffd83 100644
+--- a/arch/x86/um/shared/sysdep/syscalls_64.h
++++ b/arch/x86/um/shared/sysdep/syscalls_64.h
+@@ -10,13 +10,12 @@
+ #include <linux/msg.h>
+ #include <linux/shm.h>
+ 
+-typedef long syscall_handler_t(void);
++typedef long syscall_handler_t(long, long, long, long, long, long);
+ 
+ extern syscall_handler_t *sys_call_table[];
+ 
+ #define EXECUTE_SYSCALL(syscall, regs) \
+-	(((long (*)(long, long, long, long, long, long)) \
+-	  (*sys_call_table[syscall]))(UPT_SYSCALL_ARG1(&regs->regs), \
++	(((*sys_call_table[syscall]))(UPT_SYSCALL_ARG1(&regs->regs), \
+ 		 		      UPT_SYSCALL_ARG2(&regs->regs), \
+ 				      UPT_SYSCALL_ARG3(&regs->regs), \
+ 				      UPT_SYSCALL_ARG4(&regs->regs), \
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 65b95aef8dbc9..3cdbd81f983fa 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -184,7 +184,7 @@ void tl_release(struct drbd_connection *connection, unsigned int barrier_nr,
+ 		unsigned int set_size)
+ {
+ 	struct drbd_request *r;
+-	struct drbd_request *req = NULL;
++	struct drbd_request *req = NULL, *tmp = NULL;
+ 	int expect_epoch = 0;
+ 	int expect_size = 0;
+ 
+@@ -238,8 +238,11 @@ void tl_release(struct drbd_connection *connection, unsigned int barrier_nr,
+ 	 * to catch requests being barrier-acked "unexpectedly".
+ 	 * It usually should find the same req again, or some READ preceding it. */
+ 	list_for_each_entry(req, &connection->transfer_log, tl_requests)
+-		if (req->epoch == expect_epoch)
++		if (req->epoch == expect_epoch) {
++			tmp = req;
+ 			break;
++		}
++	req = list_prepare_entry(tmp, &connection->transfer_log, tl_requests);
+ 	list_for_each_entry_safe_from(req, r, &connection->transfer_log, tl_requests) {
+ 		if (req->epoch != expect_epoch)
+ 			break;
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index c9411fe2f0af8..4ef407a33996a 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -509,8 +509,8 @@ static unsigned long fdc_busy;
+ static DECLARE_WAIT_QUEUE_HEAD(fdc_wait);
+ static DECLARE_WAIT_QUEUE_HEAD(command_done);
+ 
+-/* Errors during formatting are counted here. */
+-static int format_errors;
++/* errors encountered on the current (or last) request */
++static int floppy_errors;
+ 
+ /* Format request descriptor. */
+ static struct format_descr format_req;
+@@ -530,7 +530,6 @@ static struct format_descr format_req;
+ static char *floppy_track_buffer;
+ static int max_buffer_sectors;
+ 
+-static int *errors;
+ typedef void (*done_f)(int);
+ static const struct cont_t {
+ 	void (*interrupt)(void);
+@@ -1455,7 +1454,7 @@ static int interpret_errors(void)
+ 			if (drive_params[current_drive].flags & FTD_MSG)
+ 				DPRINT("Over/Underrun - retrying\n");
+ 			bad = 0;
+-		} else if (*errors >= drive_params[current_drive].max_errors.reporting) {
++		} else if (floppy_errors >= drive_params[current_drive].max_errors.reporting) {
+ 			print_errors();
+ 		}
+ 		if (reply_buffer[ST2] & ST2_WC || reply_buffer[ST2] & ST2_BC)
+@@ -2095,7 +2094,7 @@ static void bad_flp_intr(void)
+ 		if (!next_valid_format(current_drive))
+ 			return;
+ 	}
+-	err_count = ++(*errors);
++	err_count = ++floppy_errors;
+ 	INFBOUND(write_errors[current_drive].badness, err_count);
+ 	if (err_count > drive_params[current_drive].max_errors.abort)
+ 		cont->done(0);
+@@ -2240,9 +2239,8 @@ static int do_format(int drive, struct format_descr *tmp_format_req)
+ 		return -EINVAL;
+ 	}
+ 	format_req = *tmp_format_req;
+-	format_errors = 0;
+ 	cont = &format_cont;
+-	errors = &format_errors;
++	floppy_errors = 0;
+ 	ret = wait_til_done(redo_format, true);
+ 	if (ret == -EINTR)
+ 		return -EINTR;
+@@ -2721,7 +2719,7 @@ static int make_raw_rw_request(void)
+ 		 */
+ 		if (!direct ||
+ 		    (indirect * 2 > direct * 3 &&
+-		     *errors < drive_params[current_drive].max_errors.read_track &&
++		     floppy_errors < drive_params[current_drive].max_errors.read_track &&
+ 		     ((!probing ||
+ 		       (drive_params[current_drive].read_track & (1 << drive_state[current_drive].probed_format)))))) {
+ 			max_size = blk_rq_sectors(current_req);
+@@ -2846,10 +2844,11 @@ static int set_next_request(void)
+ 	current_req = list_first_entry_or_null(&floppy_reqs, struct request,
+ 					       queuelist);
+ 	if (current_req) {
+-		current_req->error_count = 0;
++		floppy_errors = 0;
+ 		list_del_init(&current_req->queuelist);
++		return 1;
+ 	}
+-	return current_req != NULL;
++	return 0;
+ }
+ 
+ /* Starts or continues processing request. Will automatically unlock the
+@@ -2908,7 +2907,6 @@ do_request:
+ 		_floppy = floppy_type + drive_params[current_drive].autodetect[drive_state[current_drive].probed_format];
+ 	} else
+ 		probing = 0;
+-	errors = &(current_req->error_count);
+ 	tmp = make_raw_rw_request();
+ 	if (tmp < 2) {
+ 		request_done(tmp);
+diff --git a/drivers/clk/at91/clk-generated.c b/drivers/clk/at91/clk-generated.c
+index b656d25a97678..fe772baeb15ff 100644
+--- a/drivers/clk/at91/clk-generated.c
++++ b/drivers/clk/at91/clk-generated.c
+@@ -106,6 +106,10 @@ static void clk_generated_best_diff(struct clk_rate_request *req,
+ 		tmp_rate = parent_rate;
+ 	else
+ 		tmp_rate = parent_rate / div;
++
++	if (tmp_rate < req->min_rate || tmp_rate > req->max_rate)
++		return;
++
+ 	tmp_diff = abs(req->rate - tmp_rate);
+ 
+ 	if (*best_diff < 0 || *best_diff >= tmp_diff) {
+diff --git a/drivers/crypto/qcom-rng.c b/drivers/crypto/qcom-rng.c
+index 11f30fd48c141..031b5f701a0a3 100644
+--- a/drivers/crypto/qcom-rng.c
++++ b/drivers/crypto/qcom-rng.c
+@@ -65,6 +65,7 @@ static int qcom_rng_read(struct qcom_rng *rng, u8 *data, unsigned int max)
+ 		} else {
+ 			/* copy only remaining bytes */
+ 			memcpy(data, &val, max - currsize);
++			break;
+ 		}
+ 	} while (currsize < max);
+ 
+diff --git a/drivers/crypto/stm32/stm32-crc32.c b/drivers/crypto/stm32/stm32-crc32.c
+index be1bf39a317de..90a920e7f6642 100644
+--- a/drivers/crypto/stm32/stm32-crc32.c
++++ b/drivers/crypto/stm32/stm32-crc32.c
+@@ -384,8 +384,10 @@ static int stm32_crc_remove(struct platform_device *pdev)
+ 	struct stm32_crc *crc = platform_get_drvdata(pdev);
+ 	int ret = pm_runtime_get_sync(crc->dev);
+ 
+-	if (ret < 0)
++	if (ret < 0) {
++		pm_runtime_put_noidle(crc->dev);
+ 		return ret;
++	}
+ 
+ 	spin_lock(&crc_list.lock);
+ 	list_del(&crc->list);
+diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
+index ed7c5fc47f524..2ab34a8e6273d 100644
+--- a/drivers/gpio/gpio-mvebu.c
++++ b/drivers/gpio/gpio-mvebu.c
+@@ -700,6 +700,9 @@ static int mvebu_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	unsigned long flags;
+ 	unsigned int on, off;
+ 
++	if (state->polarity != PWM_POLARITY_NORMAL)
++		return -EINVAL;
++
+ 	val = (unsigned long long) mvpwm->clk_rate * state->duty_cycle;
+ 	do_div(val, NSEC_PER_SEC);
+ 	if (val > UINT_MAX)
+diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c
+index 58776f2d69ff8..1ae612c796eef 100644
+--- a/drivers/gpio/gpio-vf610.c
++++ b/drivers/gpio/gpio-vf610.c
+@@ -125,9 +125,13 @@ static int vf610_gpio_direction_output(struct gpio_chip *chip, unsigned gpio,
+ {
+ 	struct vf610_gpio_port *port = gpiochip_get_data(chip);
+ 	unsigned long mask = BIT(gpio);
++	u32 val;
+ 
+-	if (port->sdata && port->sdata->have_paddr)
+-		vf610_gpio_writel(mask, port->gpio_base + GPIO_PDDR);
++	if (port->sdata && port->sdata->have_paddr) {
++		val = vf610_gpio_readl(port->gpio_base + GPIO_PDDR);
++		val |= mask;
++		vf610_gpio_writel(val, port->gpio_base + GPIO_PDDR);
++	}
+ 
+ 	vf610_gpio_set(chip, gpio, value);
+ 
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 1f54e9470165a..ab423b0413ee5 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -4792,6 +4792,7 @@ static void fetch_monitor_name(struct drm_dp_mst_topology_mgr *mgr,
+ 
+ 	mst_edid = drm_dp_mst_get_edid(port->connector, mgr, port);
+ 	drm_edid_get_monitor_name(mst_edid, name, namelen);
++	kfree(mst_edid);
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/i915/display/intel_opregion.c b/drivers/gpu/drm/i915/display/intel_opregion.c
+index 6d083b98f6ae6..abff2d6cedd12 100644
+--- a/drivers/gpu/drm/i915/display/intel_opregion.c
++++ b/drivers/gpu/drm/i915/display/intel_opregion.c
+@@ -376,21 +376,6 @@ int intel_opregion_notify_encoder(struct intel_encoder *intel_encoder,
+ 		return -EINVAL;
+ 	}
+ 
+-	/*
+-	 * The port numbering and mapping here is bizarre. The now-obsolete
+-	 * swsci spec supports ports numbered [0..4]. Port E is handled as a
+-	 * special case, but port F and beyond are not. The functionality is
+-	 * supposed to be obsolete for new platforms. Just bail out if the port
+-	 * number is out of bounds after mapping.
+-	 */
+-	if (port > 4) {
+-		drm_dbg_kms(&dev_priv->drm,
+-			    "[ENCODER:%d:%s] port %c (index %u) out of bounds for display power state notification\n",
+-			    intel_encoder->base.base.id, intel_encoder->base.name,
+-			    port_name(intel_encoder->port), port);
+-		return -EINVAL;
+-	}
+-
+ 	if (!enable)
+ 		parm |= 4 << 8;
+ 
+diff --git a/drivers/i2c/busses/i2c-mt7621.c b/drivers/i2c/busses/i2c-mt7621.c
+index 45fe4a7fe0c03..901f0fb04fee4 100644
+--- a/drivers/i2c/busses/i2c-mt7621.c
++++ b/drivers/i2c/busses/i2c-mt7621.c
+@@ -304,7 +304,8 @@ static int mtk_i2c_probe(struct platform_device *pdev)
+ 
+ 	if (i2c->bus_freq == 0) {
+ 		dev_warn(i2c->dev, "clock-frequency 0 not supported\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_disable_clk;
+ 	}
+ 
+ 	adap = &i2c->adap;
+@@ -322,10 +323,15 @@ static int mtk_i2c_probe(struct platform_device *pdev)
+ 
+ 	ret = i2c_add_adapter(adap);
+ 	if (ret < 0)
+-		return ret;
++		goto err_disable_clk;
+ 
+ 	dev_info(&pdev->dev, "clock %u kHz\n", i2c->bus_freq / 1000);
+ 
++	return 0;
++
++err_disable_clk:
++	clk_disable_unprepare(i2c->clk);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index 3cfd2c18eebd9..49504dcd5dc63 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -47,6 +47,17 @@ static DEFINE_MUTEX(input_mutex);
+ 
+ static const struct input_value input_value_sync = { EV_SYN, SYN_REPORT, 1 };
+ 
++static const unsigned int input_max_code[EV_CNT] = {
++	[EV_KEY] = KEY_MAX,
++	[EV_REL] = REL_MAX,
++	[EV_ABS] = ABS_MAX,
++	[EV_MSC] = MSC_MAX,
++	[EV_SW] = SW_MAX,
++	[EV_LED] = LED_MAX,
++	[EV_SND] = SND_MAX,
++	[EV_FF] = FF_MAX,
++};
++
+ static inline int is_event_supported(unsigned int code,
+ 				     unsigned long *bm, unsigned int max)
+ {
+@@ -1976,6 +1987,14 @@ EXPORT_SYMBOL(input_get_timestamp);
+  */
+ void input_set_capability(struct input_dev *dev, unsigned int type, unsigned int code)
+ {
++	if (type < EV_CNT && input_max_code[type] &&
++	    code > input_max_code[type]) {
++		pr_err("%s: invalid code %u for type %u\n", __func__, code,
++		       type);
++		dump_stack();
++		return;
++	}
++
+ 	switch (type) {
+ 	case EV_KEY:
+ 		__set_bit(code, dev->keybit);
+diff --git a/drivers/input/touchscreen/ili210x.c b/drivers/input/touchscreen/ili210x.c
+index 30576a5f2f045..f437eefec94ad 100644
+--- a/drivers/input/touchscreen/ili210x.c
++++ b/drivers/input/touchscreen/ili210x.c
+@@ -420,9 +420,9 @@ static int ili210x_i2c_probe(struct i2c_client *client,
+ 		if (error)
+ 			return error;
+ 
+-		usleep_range(50, 100);
++		usleep_range(12000, 15000);
+ 		gpiod_set_value_cansleep(reset_gpio, 0);
+-		msleep(100);
++		msleep(160);
+ 	}
+ 
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+diff --git a/drivers/input/touchscreen/stmfts.c b/drivers/input/touchscreen/stmfts.c
+index 9a64e1dbc04ad..64b690a72d105 100644
+--- a/drivers/input/touchscreen/stmfts.c
++++ b/drivers/input/touchscreen/stmfts.c
+@@ -339,11 +339,11 @@ static int stmfts_input_open(struct input_dev *dev)
+ 
+ 	err = pm_runtime_get_sync(&sdata->client->dev);
+ 	if (err < 0)
+-		return err;
++		goto out;
+ 
+ 	err = i2c_smbus_write_byte(sdata->client, STMFTS_MS_MT_SENSE_ON);
+ 	if (err)
+-		return err;
++		goto out;
+ 
+ 	mutex_lock(&sdata->mutex);
+ 	sdata->running = true;
+@@ -366,7 +366,9 @@ static int stmfts_input_open(struct input_dev *dev)
+ 				 "failed to enable touchkey\n");
+ 	}
+ 
+-	return 0;
++out:
++	pm_runtime_put_noidle(&sdata->client->dev);
++	return err;
+ }
+ 
+ static void stmfts_input_close(struct input_dev *dev)
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index 72f8751784c31..e9c6f1fa0b1a7 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -345,7 +345,6 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 		     int budget)
+ {
+ 	struct net_device *ndev = aq_nic_get_ndev(self->aq_nic);
+-	bool is_rsc_completed = true;
+ 	int err = 0;
+ 
+ 	for (; (self->sw_head != self->hw_head) && budget;
+@@ -363,12 +362,17 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 			continue;
+ 
+ 		if (!buff->is_eop) {
++			unsigned int frag_cnt = 0U;
+ 			buff_ = buff;
+ 			do {
++				bool is_rsc_completed = true;
++
+ 				if (buff_->next >= self->size) {
+ 					err = -EIO;
+ 					goto err_exit;
+ 				}
++
++				frag_cnt++;
+ 				next_ = buff_->next,
+ 				buff_ = &self->buff_ring[next_];
+ 				is_rsc_completed =
+@@ -376,18 +380,17 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 							    next_,
+ 							    self->hw_head);
+ 
+-				if (unlikely(!is_rsc_completed))
+-					break;
++				if (unlikely(!is_rsc_completed) ||
++						frag_cnt > MAX_SKB_FRAGS) {
++					err = 0;
++					goto err_exit;
++				}
+ 
+ 				buff->is_error |= buff_->is_error;
+ 				buff->is_cso_err |= buff_->is_cso_err;
+ 
+ 			} while (!buff_->is_eop);
+ 
+-			if (!is_rsc_completed) {
+-				err = 0;
+-				goto err_exit;
+-			}
+ 			if (buff->is_error ||
+ 			    (buff->is_lro && buff->is_cso_err)) {
+ 				buff_ = buff;
+@@ -445,7 +448,7 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 		       ALIGN(hdr_len, sizeof(long)));
+ 
+ 		if (buff->len - hdr_len > 0) {
+-			skb_add_rx_frag(skb, 0, buff->rxdata.page,
++			skb_add_rx_frag(skb, i++, buff->rxdata.page,
+ 					buff->rxdata.pg_off + hdr_len,
+ 					buff->len - hdr_len,
+ 					AQ_CFG_RX_FRAME_MAX);
+@@ -454,7 +457,6 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 
+ 		if (!buff->is_eop) {
+ 			buff_ = buff;
+-			i = 1U;
+ 			do {
+ 				next_ = buff_->next;
+ 				buff_ = &self->buff_ring[next_];
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+index 9f1b15077e7d6..45c17c585d743 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+@@ -889,6 +889,13 @@ int hw_atl_b0_hw_ring_tx_head_update(struct aq_hw_s *self,
+ 		err = -ENXIO;
+ 		goto err_exit;
+ 	}
++
++	/* Validate that the new hw_head_ is reasonable. */
++	if (hw_head_ >= ring->size) {
++		err = -ENXIO;
++		goto err_exit;
++	}
++
+ 	ring->hw_head = hw_head_;
+ 	err = aq_hw_err_from_flags(self);
+ 
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index 1a703b95208b0..82d369d9f7a50 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -2592,8 +2592,10 @@ static int bcm_sysport_probe(struct platform_device *pdev)
+ 		device_set_wakeup_capable(&pdev->dev, 1);
+ 
+ 	priv->wol_clk = devm_clk_get_optional(&pdev->dev, "sw_sysportwol");
+-	if (IS_ERR(priv->wol_clk))
+-		return PTR_ERR(priv->wol_clk);
++	if (IS_ERR(priv->wol_clk)) {
++		ret = PTR_ERR(priv->wol_clk);
++		goto err_deregister_fixed_link;
++	}
+ 
+ 	/* Set the needed headroom once and for all */
+ 	BUILD_BUG_ON(sizeof(struct bcm_tsb) != 8);
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index bd13f91efe7c5..792c8147c2c4c 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1092,7 +1092,6 @@ static void gem_rx_refill(struct macb_queue *queue)
+ 		/* Make hw descriptor updates visible to CPU */
+ 		rmb();
+ 
+-		queue->rx_prepared_head++;
+ 		desc = macb_rx_desc(queue, entry);
+ 
+ 		if (!queue->rx_skbuff[entry]) {
+@@ -1131,6 +1130,7 @@ static void gem_rx_refill(struct macb_queue *queue)
+ 			dma_wmb();
+ 			desc->addr &= ~MACB_BIT(RX_USED);
+ 		}
++		queue->rx_prepared_head++;
+ 	}
+ 
+ 	/* Make descriptor updates visible to hardware */
+diff --git a/drivers/net/ethernet/dec/tulip/tulip_core.c b/drivers/net/ethernet/dec/tulip/tulip_core.c
+index e7b0d7de40fd6..c22d945a79fd4 100644
+--- a/drivers/net/ethernet/dec/tulip/tulip_core.c
++++ b/drivers/net/ethernet/dec/tulip/tulip_core.c
+@@ -1396,8 +1396,10 @@ static int tulip_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	/* alloc_etherdev ensures aligned and zeroed private structures */
+ 	dev = alloc_etherdev (sizeof (*tp));
+-	if (!dev)
++	if (!dev) {
++		pci_disable_device(pdev);
+ 		return -ENOMEM;
++	}
+ 
+ 	SET_NETDEV_DEV(dev, &pdev->dev);
+ 	if (pci_resource_len (pdev, 0) < tulip_tbl[chip_idx].io_size) {
+@@ -1774,6 +1776,7 @@ err_out_free_res:
+ 
+ err_out_free_netdev:
+ 	free_netdev (dev);
++	pci_disable_device(pdev);
+ 	return -ENODEV;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index eb0625b52e453..aae79fdd51727 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -5271,9 +5271,10 @@ static int ice_up_complete(struct ice_vsi *vsi)
+ 		netif_carrier_on(vsi->netdev);
+ 	}
+ 
+-	/* clear this now, and the first stats read will be used as baseline */
+-	vsi->stat_offsets_loaded = false;
+-
++	/* Perform an initial read of the statistics registers now to
++	 * set the baseline so counters are ready when interface is up
++	 */
++	ice_update_eth_stats(vsi);
+ 	ice_service_task_schedule(pf);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index f854d41c6c94d..5e67c9c119d2f 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -5499,7 +5499,8 @@ static void igb_watchdog_task(struct work_struct *work)
+ 				break;
+ 			}
+ 
+-			if (adapter->link_speed != SPEED_1000)
++			if (adapter->link_speed != SPEED_1000 ||
++			    !hw->phy.ops.read_reg)
+ 				goto no_wait;
+ 
+ 			/* wait for Remote receiver status OK */
+diff --git a/drivers/net/ethernet/intel/igc/igc_base.c b/drivers/net/ethernet/intel/igc/igc_base.c
+index fd37d2c203af7..7f3523f0d196f 100644
+--- a/drivers/net/ethernet/intel/igc/igc_base.c
++++ b/drivers/net/ethernet/intel/igc/igc_base.c
+@@ -187,15 +187,7 @@ static s32 igc_init_phy_params_base(struct igc_hw *hw)
+ 
+ 	igc_check_for_copper_link(hw);
+ 
+-	/* Verify phy id and set remaining function pointers */
+-	switch (phy->id) {
+-	case I225_I_PHY_ID:
+-		phy->type	= igc_phy_i225;
+-		break;
+-	default:
+-		ret_val = -IGC_ERR_PHY;
+-		goto out;
+-	}
++	phy->type = igc_phy_i225;
+ 
+ out:
+ 	return ret_val;
+diff --git a/drivers/net/ethernet/intel/igc/igc_hw.h b/drivers/net/ethernet/intel/igc/igc_hw.h
+index 55dae7c4703f8..7e29f41f70e0b 100644
+--- a/drivers/net/ethernet/intel/igc/igc_hw.h
++++ b/drivers/net/ethernet/intel/igc/igc_hw.h
+@@ -22,6 +22,7 @@
+ #define IGC_DEV_ID_I220_V			0x15F7
+ #define IGC_DEV_ID_I225_K			0x3100
+ #define IGC_DEV_ID_I225_K2			0x3101
++#define IGC_DEV_ID_I226_K			0x3102
+ #define IGC_DEV_ID_I225_LMVP			0x5502
+ #define IGC_DEV_ID_I225_IT			0x0D9F
+ #define IGC_DEV_ID_I226_LM			0x125B
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 61cebb7df6bcb..fd9257c7059a0 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -4177,20 +4177,12 @@ bool igc_has_link(struct igc_adapter *adapter)
+ 	 * false until the igc_check_for_link establishes link
+ 	 * for copper adapters ONLY
+ 	 */
+-	switch (hw->phy.media_type) {
+-	case igc_media_type_copper:
+-		if (!hw->mac.get_link_status)
+-			return true;
+-		hw->mac.ops.check_for_link(hw);
+-		link_active = !hw->mac.get_link_status;
+-		break;
+-	default:
+-	case igc_media_type_unknown:
+-		break;
+-	}
++	if (!hw->mac.get_link_status)
++		return true;
++	hw->mac.ops.check_for_link(hw);
++	link_active = !hw->mac.get_link_status;
+ 
+-	if (hw->mac.type == igc_i225 &&
+-	    hw->phy.id == I225_I_PHY_ID) {
++	if (hw->mac.type == igc_i225) {
+ 		if (!netif_carrier_ok(adapter->netdev)) {
+ 			adapter->flags &= ~IGC_FLAG_NEED_LINK_UPDATE;
+ 		} else if (!(adapter->flags & IGC_FLAG_NEED_LINK_UPDATE)) {
+diff --git a/drivers/net/ethernet/intel/igc/igc_phy.c b/drivers/net/ethernet/intel/igc/igc_phy.c
+index 8de4de2e56362..3a103406eadb2 100644
+--- a/drivers/net/ethernet/intel/igc/igc_phy.c
++++ b/drivers/net/ethernet/intel/igc/igc_phy.c
+@@ -249,8 +249,7 @@ static s32 igc_phy_setup_autoneg(struct igc_hw *hw)
+ 			return ret_val;
+ 	}
+ 
+-	if ((phy->autoneg_mask & ADVERTISE_2500_FULL) &&
+-	    hw->phy.id == I225_I_PHY_ID) {
++	if (phy->autoneg_mask & ADVERTISE_2500_FULL) {
+ 		/* Read the MULTI GBT AN Control Register - reg 7.32 */
+ 		ret_val = phy->ops.read_reg(hw, (STANDARD_AN_REG_MASK <<
+ 					    MMD_DEVADDR_SHIFT) |
+@@ -390,8 +389,7 @@ static s32 igc_phy_setup_autoneg(struct igc_hw *hw)
+ 		ret_val = phy->ops.write_reg(hw, PHY_1000T_CTRL,
+ 					     mii_1000t_ctrl_reg);
+ 
+-	if ((phy->autoneg_mask & ADVERTISE_2500_FULL) &&
+-	    hw->phy.id == I225_I_PHY_ID)
++	if (phy->autoneg_mask & ADVERTISE_2500_FULL)
+ 		ret_val = phy->ops.write_reg(hw,
+ 					     (STANDARD_AN_REG_MASK <<
+ 					     MMD_DEVADDR_SHIFT) |
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 16e98ac47624c..d9cc0ed6c5f75 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -4009,6 +4009,13 @@ static netdev_features_t mlx5e_fix_features(struct net_device *netdev,
+ 		}
+ 	}
+ 
++	if (params->xdp_prog) {
++		if (features & NETIF_F_LRO) {
++			netdev_warn(netdev, "LRO is incompatible with XDP\n");
++			features &= ~NETIF_F_LRO;
++		}
++	}
++
+ 	if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS)) {
+ 		features &= ~NETIF_F_RXHASH;
+ 		if (netdev->features & NETIF_F_RXHASH)
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index c9f32fc50254f..2219e4c59ae60 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -3628,7 +3628,8 @@ static void ql_reset_work(struct work_struct *work)
+ 		qdev->mem_map_registers;
+ 	unsigned long hw_flags;
+ 
+-	if (test_bit((QL_RESET_PER_SCSI | QL_RESET_START), &qdev->flags)) {
++	if (test_bit(QL_RESET_PER_SCSI, &qdev->flags) ||
++	    test_bit(QL_RESET_START, &qdev->flags)) {
+ 		clear_bit(QL_LINK_MASTER, &qdev->flags);
+ 
+ 		/*
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+index 272cb47af9f2e..a7a1227c9b92f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+@@ -175,7 +175,7 @@ static int stmmac_pci_probe(struct pci_dev *pdev,
+ 		return -ENOMEM;
+ 
+ 	/* Enable pci device */
+-	ret = pci_enable_device(pdev);
++	ret = pcim_enable_device(pdev);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "%s: ERROR: failed to enable device\n",
+ 			__func__);
+@@ -227,8 +227,6 @@ static void stmmac_pci_remove(struct pci_dev *pdev)
+ 		pcim_iounmap_regions(pdev, BIT(i));
+ 		break;
+ 	}
+-
+-	pci_disable_device(pdev);
+ }
+ 
+ static int __maybe_unused stmmac_pci_suspend(struct device *dev)
+diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
+index 2a65efd3e8da9..fe91b72eca36c 100644
+--- a/drivers/net/ipa/gsi.c
++++ b/drivers/net/ipa/gsi.c
+@@ -1209,9 +1209,10 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
+ 	struct gsi_event *event_done;
+ 	struct gsi_event *event;
+ 	struct gsi_trans *trans;
++	u32 trans_count = 0;
+ 	u32 byte_count = 0;
+-	u32 old_index;
+ 	u32 event_avail;
++	u32 old_index;
+ 
+ 	trans_info = &channel->trans_info;
+ 
+@@ -1232,6 +1233,7 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
+ 	do {
+ 		trans->len = __le16_to_cpu(event->len);
+ 		byte_count += trans->len;
++		trans_count++;
+ 
+ 		/* Move on to the next event and transaction */
+ 		if (--event_avail)
+@@ -1243,7 +1245,7 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
+ 
+ 	/* We record RX bytes when they are received */
+ 	channel->byte_count += byte_count;
+-	channel->trans_count++;
++	channel->trans_count += trans_count;
+ }
+ 
+ /* Initialize a ring, including allocating DMA memory for its entries */
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index 932a39945cc62..6678a734cc4d3 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -595,6 +595,7 @@ vmxnet3_rq_alloc_rx_buf(struct vmxnet3_rx_queue *rq, u32 ring_idx,
+ 				if (dma_mapping_error(&adapter->pdev->dev,
+ 						      rbi->dma_addr)) {
+ 					dev_kfree_skb_any(rbi->skb);
++					rbi->skb = NULL;
+ 					rq->stats.rx_buf_alloc_failure++;
+ 					break;
+ 				}
+@@ -619,6 +620,7 @@ vmxnet3_rq_alloc_rx_buf(struct vmxnet3_rx_queue *rq, u32 ring_idx,
+ 				if (dma_mapping_error(&adapter->pdev->dev,
+ 						      rbi->dma_addr)) {
+ 					put_page(rbi->page);
++					rbi->page = NULL;
+ 					rq->stats.rx_buf_alloc_failure++;
+ 					break;
+ 				}
+@@ -1654,6 +1656,10 @@ vmxnet3_rq_cleanup(struct vmxnet3_rx_queue *rq,
+ 	u32 i, ring_idx;
+ 	struct Vmxnet3_RxDesc *rxd;
+ 
++	/* ring has already been cleaned up */
++	if (!rq->rx_ring[0].base)
++		return;
++
+ 	for (ring_idx = 0; ring_idx < 2; ring_idx++) {
+ 		for (i = 0; i < rq->rx_ring[ring_idx].size; i++) {
+ #ifdef __BIG_ENDIAN_BITFIELD
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index ad4f1cfbad2e0..e73a5c62a858d 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4420,6 +4420,7 @@ void nvme_start_ctrl(struct nvme_ctrl *ctrl)
+ 	if (ctrl->queue_count > 1) {
+ 		nvme_queue_scan(ctrl);
+ 		nvme_start_queues(ctrl);
++		nvme_mpath_update(ctrl);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(nvme_start_ctrl);
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 18a756444d5a9..a9e15c8f907b7 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -484,8 +484,17 @@ static void nvme_update_ns_ana_state(struct nvme_ana_group_desc *desc,
+ 	ns->ana_grpid = le32_to_cpu(desc->grpid);
+ 	ns->ana_state = desc->state;
+ 	clear_bit(NVME_NS_ANA_PENDING, &ns->flags);
+-
+-	if (nvme_state_is_live(ns->ana_state))
++	/*
++	 * nvme_mpath_set_live() will trigger I/O to the multipath path device
++	 * and in turn to this path device.  However we cannot accept this I/O
++	 * if the controller is not live.  This may deadlock if called from
++	 * nvme_mpath_init_identify() and the ctrl will never complete
++	 * initialization, preventing I/O from completing.  For this case we
++	 * will reprocess the ANA log page in nvme_mpath_update() once the
++	 * controller is ready.
++	 */
++	if (nvme_state_is_live(ns->ana_state) &&
++	    ns->ctrl->state == NVME_CTRL_LIVE)
+ 		nvme_mpath_set_live(ns);
+ }
+ 
+@@ -572,6 +581,18 @@ static void nvme_ana_work(struct work_struct *work)
+ 	nvme_read_ana_log(ctrl);
+ }
+ 
++void nvme_mpath_update(struct nvme_ctrl *ctrl)
++{
++	u32 nr_change_groups = 0;
++
++	if (!ctrl->ana_log_buf)
++		return;
++
++	mutex_lock(&ctrl->ana_lock);
++	nvme_parse_ana_log(ctrl, &nr_change_groups, nvme_update_ana_state);
++	mutex_unlock(&ctrl->ana_lock);
++}
++
+ static void nvme_anatt_timeout(struct timer_list *t)
+ {
+ 	struct nvme_ctrl *ctrl = from_timer(ctrl, t, anatt_timer);
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 10e5ae3a8c0df..95b9657cabaf1 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -712,6 +712,7 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id);
+ void nvme_mpath_remove_disk(struct nvme_ns_head *head);
+ int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id);
+ void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl);
++void nvme_mpath_update(struct nvme_ctrl *ctrl);
+ void nvme_mpath_uninit(struct nvme_ctrl *ctrl);
+ void nvme_mpath_stop(struct nvme_ctrl *ctrl);
+ bool nvme_mpath_clear_current_path(struct nvme_ns *ns);
+@@ -798,6 +799,9 @@ static inline int nvme_mpath_init_identify(struct nvme_ctrl *ctrl,
+ "Please enable CONFIG_NVME_MULTIPATH for full support of multi-port devices.\n");
+ 	return 0;
+ }
++static inline void nvme_mpath_update(struct nvme_ctrl *ctrl)
++{
++}
+ static inline void nvme_mpath_uninit(struct nvme_ctrl *ctrl)
+ {
+ }
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 6939b03a16c58..a36db0701d178 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3265,7 +3265,10 @@ static const struct pci_device_id nvme_id_table[] = {
+ 				NVME_QUIRK_128_BYTES_SQES |
+ 				NVME_QUIRK_SHARED_TAGS |
+ 				NVME_QUIRK_SKIP_CID_GEN },
+-
++	{ PCI_DEVICE(0x144d, 0xa808),   /* Samsung X5 */
++		.driver_data =  NVME_QUIRK_DELAY_BEFORE_CHK_RDY|
++				NVME_QUIRK_NO_DEEPEST_PS |
++				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ 	{ PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
+ 	{ 0, }
+ };
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 0d7109018a91f..cda17c6151480 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2829,6 +2829,16 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."),
+ 			DMI_MATCH(DMI_BOARD_NAME, "X299 DESIGNARE EX-CF"),
+ 		},
++		/*
++		 * Downstream device is not accessible after putting a root port
++		 * into D3cold and back into D0 on Elo i2.
++		 */
++		.ident = "Elo i2",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Elo Touch Solutions"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Elo i2"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "RevB"),
++		},
+ 	},
+ #endif
+ 	{ }
+diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
+index 5c1a109842a76..c2ba4064ce5b2 100644
+--- a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
++++ b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
+@@ -1224,18 +1224,12 @@ FUNC_GROUP_DECL(SALT8, AA12);
+ FUNC_GROUP_DECL(WDTRST4, AA12);
+ 
+ #define AE12 196
+-SIG_EXPR_LIST_DECL_SEMG(AE12, FWSPIDQ2, FWQSPID, FWSPID,
+-			SIG_DESC_SET(SCU438, 4));
+ SIG_EXPR_LIST_DECL_SESG(AE12, GPIOY4, GPIOY4);
+-PIN_DECL_(AE12, SIG_EXPR_LIST_PTR(AE12, FWSPIDQ2),
+-	  SIG_EXPR_LIST_PTR(AE12, GPIOY4));
++PIN_DECL_(AE12, SIG_EXPR_LIST_PTR(AE12, GPIOY4));
+ 
+ #define AF12 197
+-SIG_EXPR_LIST_DECL_SEMG(AF12, FWSPIDQ3, FWQSPID, FWSPID,
+-			SIG_DESC_SET(SCU438, 5));
+ SIG_EXPR_LIST_DECL_SESG(AF12, GPIOY5, GPIOY5);
+-PIN_DECL_(AF12, SIG_EXPR_LIST_PTR(AF12, FWSPIDQ3),
+-	  SIG_EXPR_LIST_PTR(AF12, GPIOY5));
++PIN_DECL_(AF12, SIG_EXPR_LIST_PTR(AF12, GPIOY5));
+ 
+ #define AC12 198
+ SSSF_PIN_DECL(AC12, GPIOY6, FWSPIABR, SIG_DESC_SET(SCU438, 6));
+@@ -1508,9 +1502,8 @@ SIG_EXPR_LIST_DECL_SEMG(Y4, EMMCDAT7, EMMCG8, EMMC, SIG_DESC_SET(SCU404, 3));
+ PIN_DECL_3(Y4, GPIO18E3, FWSPIDMISO, VBMISO, EMMCDAT7);
+ 
+ GROUP_DECL(FWSPID, Y1, Y2, Y3, Y4);
+-GROUP_DECL(FWQSPID, Y1, Y2, Y3, Y4, AE12, AF12);
+ GROUP_DECL(EMMCG8, AB4, AA4, AC4, AA5, Y5, AB5, AB6, AC5, Y1, Y2, Y3, Y4);
+-FUNC_DECL_2(FWSPID, FWSPID, FWQSPID);
++FUNC_DECL_1(FWSPID, FWSPID);
+ FUNC_GROUP_DECL(VB, Y1, Y2, Y3, Y4);
+ FUNC_DECL_3(EMMC, EMMCG1, EMMCG4, EMMCG8);
+ /*
+@@ -1906,7 +1899,6 @@ static const struct aspeed_pin_group aspeed_g6_groups[] = {
+ 	ASPEED_PINCTRL_GROUP(FSI2),
+ 	ASPEED_PINCTRL_GROUP(FWSPIABR),
+ 	ASPEED_PINCTRL_GROUP(FWSPID),
+-	ASPEED_PINCTRL_GROUP(FWQSPID),
+ 	ASPEED_PINCTRL_GROUP(FWSPIWP),
+ 	ASPEED_PINCTRL_GROUP(GPIT0),
+ 	ASPEED_PINCTRL_GROUP(GPIT1),
+diff --git a/drivers/platform/chrome/cros_ec_debugfs.c b/drivers/platform/chrome/cros_ec_debugfs.c
+index 272c89837d745..0dbceee87a4b1 100644
+--- a/drivers/platform/chrome/cros_ec_debugfs.c
++++ b/drivers/platform/chrome/cros_ec_debugfs.c
+@@ -25,6 +25,9 @@
+ 
+ #define CIRC_ADD(idx, size, value)	(((idx) + (value)) & ((size) - 1))
+ 
++/* waitqueue for log readers */
++static DECLARE_WAIT_QUEUE_HEAD(cros_ec_debugfs_log_wq);
++
+ /**
+  * struct cros_ec_debugfs - EC debugging information.
+  *
+@@ -33,7 +36,6 @@
+  * @log_buffer: circular buffer for console log information
+  * @read_msg: preallocated EC command and buffer to read console log
+  * @log_mutex: mutex to protect circular buffer
+- * @log_wq: waitqueue for log readers
+  * @log_poll_work: recurring task to poll EC for new console log data
+  * @panicinfo_blob: panicinfo debugfs blob
+  */
+@@ -44,7 +46,6 @@ struct cros_ec_debugfs {
+ 	struct circ_buf log_buffer;
+ 	struct cros_ec_command *read_msg;
+ 	struct mutex log_mutex;
+-	wait_queue_head_t log_wq;
+ 	struct delayed_work log_poll_work;
+ 	/* EC panicinfo */
+ 	struct debugfs_blob_wrapper panicinfo_blob;
+@@ -107,7 +108,7 @@ static void cros_ec_console_log_work(struct work_struct *__work)
+ 			buf_space--;
+ 		}
+ 
+-		wake_up(&debug_info->log_wq);
++		wake_up(&cros_ec_debugfs_log_wq);
+ 	}
+ 
+ 	mutex_unlock(&debug_info->log_mutex);
+@@ -141,7 +142,7 @@ static ssize_t cros_ec_console_log_read(struct file *file, char __user *buf,
+ 
+ 		mutex_unlock(&debug_info->log_mutex);
+ 
+-		ret = wait_event_interruptible(debug_info->log_wq,
++		ret = wait_event_interruptible(cros_ec_debugfs_log_wq,
+ 					CIRC_CNT(cb->head, cb->tail, LOG_SIZE));
+ 		if (ret < 0)
+ 			return ret;
+@@ -173,7 +174,7 @@ static __poll_t cros_ec_console_log_poll(struct file *file,
+ 	struct cros_ec_debugfs *debug_info = file->private_data;
+ 	__poll_t mask = 0;
+ 
+-	poll_wait(file, &debug_info->log_wq, wait);
++	poll_wait(file, &cros_ec_debugfs_log_wq, wait);
+ 
+ 	mutex_lock(&debug_info->log_mutex);
+ 	if (CIRC_CNT(debug_info->log_buffer.head,
+@@ -377,7 +378,6 @@ static int cros_ec_create_console_log(struct cros_ec_debugfs *debug_info)
+ 	debug_info->log_buffer.tail = 0;
+ 
+ 	mutex_init(&debug_info->log_mutex);
+-	init_waitqueue_head(&debug_info->log_wq);
+ 
+ 	debugfs_create_file("console_log", S_IFREG | 0444, debug_info->dir,
+ 			    debug_info, &cros_ec_console_log_fops);
+diff --git a/drivers/rtc/class.c b/drivers/rtc/class.c
+index 7c88d190c51fc..625effe6cb65f 100644
+--- a/drivers/rtc/class.c
++++ b/drivers/rtc/class.c
+@@ -26,6 +26,15 @@ struct class *rtc_class;
+ static void rtc_device_release(struct device *dev)
+ {
+ 	struct rtc_device *rtc = to_rtc_device(dev);
++	struct timerqueue_head *head = &rtc->timerqueue;
++	struct timerqueue_node *node;
++
++	mutex_lock(&rtc->ops_lock);
++	while ((node = timerqueue_getnext(head)))
++		timerqueue_del(head, node);
++	mutex_unlock(&rtc->ops_lock);
++
++	cancel_work_sync(&rtc->irqwork);
+ 
+ 	ida_simple_remove(&rtc_ida, rtc->id);
+ 	kfree(rtc);
+diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
+index 5add637c9ad23..b036ff33fbe61 100644
+--- a/drivers/rtc/rtc-mc146818-lib.c
++++ b/drivers/rtc/rtc-mc146818-lib.c
+@@ -99,6 +99,17 @@ unsigned int mc146818_get_time(struct rtc_time *time)
+ }
+ EXPORT_SYMBOL_GPL(mc146818_get_time);
+ 
++/* AMD systems don't allow access to AltCentury with DV1 */
++static bool apply_amd_register_a_behavior(void)
++{
++#ifdef CONFIG_X86
++	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
++	    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
++		return true;
++#endif
++	return false;
++}
++
+ /* Set the current date and time in the real time clock. */
+ int mc146818_set_time(struct rtc_time *time)
+ {
+@@ -172,7 +183,10 @@ int mc146818_set_time(struct rtc_time *time)
+ 	save_control = CMOS_READ(RTC_CONTROL);
+ 	CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
+ 	save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
+-	CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
++	if (apply_amd_register_a_behavior())
++		CMOS_WRITE((save_freq_select & ~RTC_AMD_BANK_SELECT), RTC_FREQ_SELECT);
++	else
++		CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
+ 
+ #ifdef CONFIG_MACH_DECSTATION
+ 	CMOS_WRITE(real_yrs, RTC_DEC_YEAR);
+diff --git a/drivers/rtc/rtc-pcf2127.c b/drivers/rtc/rtc-pcf2127.c
+index f0a6861ff3aef..715513311ece5 100644
+--- a/drivers/rtc/rtc-pcf2127.c
++++ b/drivers/rtc/rtc-pcf2127.c
+@@ -366,7 +366,8 @@ static int pcf2127_watchdog_init(struct device *dev, struct pcf2127 *pcf2127)
+ static int pcf2127_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm)
+ {
+ 	struct pcf2127 *pcf2127 = dev_get_drvdata(dev);
+-	unsigned int buf[5], ctrl2;
++	u8 buf[5];
++	unsigned int ctrl2;
+ 	int ret;
+ 
+ 	ret = regmap_read(pcf2127->regmap, PCF2127_REG_CTRL2, &ctrl2);
+diff --git a/drivers/rtc/rtc-sun6i.c b/drivers/rtc/rtc-sun6i.c
+index f2818cdd11d82..52b36b7c61298 100644
+--- a/drivers/rtc/rtc-sun6i.c
++++ b/drivers/rtc/rtc-sun6i.c
+@@ -138,7 +138,7 @@ struct sun6i_rtc_dev {
+ 	const struct sun6i_rtc_clk_data *data;
+ 	void __iomem *base;
+ 	int irq;
+-	unsigned long alarm;
++	time64_t alarm;
+ 
+ 	struct clk_hw hw;
+ 	struct clk_hw *int_osc;
+@@ -510,10 +510,8 @@ static int sun6i_rtc_setalarm(struct device *dev, struct rtc_wkalrm *wkalrm)
+ 	struct sun6i_rtc_dev *chip = dev_get_drvdata(dev);
+ 	struct rtc_time *alrm_tm = &wkalrm->time;
+ 	struct rtc_time tm_now;
+-	unsigned long time_now = 0;
+-	unsigned long time_set = 0;
+-	unsigned long time_gap = 0;
+-	int ret = 0;
++	time64_t time_now, time_set;
++	int ret;
+ 
+ 	ret = sun6i_rtc_gettime(dev, &tm_now);
+ 	if (ret < 0) {
+@@ -528,9 +526,7 @@ static int sun6i_rtc_setalarm(struct device *dev, struct rtc_wkalrm *wkalrm)
+ 		return -EINVAL;
+ 	}
+ 
+-	time_gap = time_set - time_now;
+-
+-	if (time_gap > U32_MAX) {
++	if ((time_set - time_now) > U32_MAX) {
+ 		dev_err(dev, "Date too far in the future\n");
+ 		return -EINVAL;
+ 	}
+@@ -539,7 +535,7 @@ static int sun6i_rtc_setalarm(struct device *dev, struct rtc_wkalrm *wkalrm)
+ 	writel(0, chip->base + SUN6I_ALRM_COUNTER);
+ 	usleep_range(100, 300);
+ 
+-	writel(time_gap, chip->base + SUN6I_ALRM_COUNTER);
++	writel(time_set - time_now, chip->base + SUN6I_ALRM_COUNTER);
+ 	chip->alarm = time_set;
+ 
+ 	sun6i_rtc_setaie(wkalrm->enabled, chip);
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index cf9ae0ab489a0..ba823e8eb902b 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -3773,6 +3773,9 @@ int qlt_abort_cmd(struct qla_tgt_cmd *cmd)
+ 
+ 	spin_lock_irqsave(&cmd->cmd_lock, flags);
+ 	if (cmd->aborted) {
++		if (cmd->sg_mapped)
++			qlt_unmap_sg(vha, cmd);
++
+ 		spin_unlock_irqrestore(&cmd->cmd_lock, flags);
+ 		/*
+ 		 * It's normal to see 2 calls in this path:
+diff --git a/drivers/usb/gadget/legacy/raw_gadget.c b/drivers/usb/gadget/legacy/raw_gadget.c
+index 33efa6915b91d..34cecd3660bfc 100644
+--- a/drivers/usb/gadget/legacy/raw_gadget.c
++++ b/drivers/usb/gadget/legacy/raw_gadget.c
+@@ -144,6 +144,7 @@ enum dev_state {
+ 	STATE_DEV_INVALID = 0,
+ 	STATE_DEV_OPENED,
+ 	STATE_DEV_INITIALIZED,
++	STATE_DEV_REGISTERING,
+ 	STATE_DEV_RUNNING,
+ 	STATE_DEV_CLOSED,
+ 	STATE_DEV_FAILED
+@@ -507,6 +508,7 @@ static int raw_ioctl_run(struct raw_dev *dev, unsigned long value)
+ 		ret = -EINVAL;
+ 		goto out_unlock;
+ 	}
++	dev->state = STATE_DEV_REGISTERING;
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+ 
+ 	ret = usb_gadget_probe_driver(&dev->driver);
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index e303f6f073d2b..5beb20768b204 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -1450,13 +1450,9 @@ err:
+ 	return ERR_PTR(r);
+ }
+ 
+-static struct ptr_ring *get_tap_ptr_ring(int fd)
++static struct ptr_ring *get_tap_ptr_ring(struct file *file)
+ {
+ 	struct ptr_ring *ring;
+-	struct file *file = fget(fd);
+-
+-	if (!file)
+-		return NULL;
+ 	ring = tun_get_tx_ring(file);
+ 	if (!IS_ERR(ring))
+ 		goto out;
+@@ -1465,7 +1461,6 @@ static struct ptr_ring *get_tap_ptr_ring(int fd)
+ 		goto out;
+ 	ring = NULL;
+ out:
+-	fput(file);
+ 	return ring;
+ }
+ 
+@@ -1552,8 +1547,12 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
+ 		r = vhost_net_enable_vq(n, vq);
+ 		if (r)
+ 			goto err_used;
+-		if (index == VHOST_NET_VQ_RX)
+-			nvq->rx_ring = get_tap_ptr_ring(fd);
++		if (index == VHOST_NET_VQ_RX) {
++			if (sock)
++				nvq->rx_ring = get_tap_ptr_ring(sock->file);
++			else
++				nvq->rx_ring = NULL;
++		}
+ 
+ 		oldubufs = nvq->ubufs;
+ 		nvq->ubufs = ubufs;
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index e4d60009d9083..04578aa87e4da 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -97,8 +97,11 @@ static void vhost_vdpa_setup_vq_irq(struct vhost_vdpa *v, u16 qid)
+ 		return;
+ 
+ 	irq = ops->get_vq_irq(vdpa, qid);
++	if (irq < 0)
++		return;
++
+ 	irq_bypass_unregister_producer(&vq->call_ctx.producer);
+-	if (!vq->call_ctx.ctx || irq < 0)
++	if (!vq->call_ctx.ctx)
+ 		return;
+ 
+ 	vq->call_ctx.producer.token = vq->call_ctx.ctx;
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index f81a972bdd294..7e7a9454bcb9d 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -729,10 +729,22 @@ int afs_getattr(const struct path *path, struct kstat *stat,
+ {
+ 	struct inode *inode = d_inode(path->dentry);
+ 	struct afs_vnode *vnode = AFS_FS_I(inode);
+-	int seq = 0;
++	struct key *key;
++	int ret, seq = 0;
+ 
+ 	_enter("{ ino=%lu v=%u }", inode->i_ino, inode->i_generation);
+ 
++	if (!(query_flags & AT_STATX_DONT_SYNC) &&
++	    !test_bit(AFS_VNODE_CB_PROMISED, &vnode->flags)) {
++		key = afs_request_key(vnode->volume->cell);
++		if (IS_ERR(key))
++			return PTR_ERR(key);
++		ret = afs_validate(vnode, key);
++		key_put(key);
++		if (ret < 0)
++			return ret;
++	}
++
+ 	do {
+ 		read_seqbegin_or_lock(&vnode->cb_lock, &seq);
+ 		generic_fillattr(inode, stat);
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index 2e6f622ed4283..55a8eb3c19634 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -858,14 +858,16 @@ static ssize_t gfs2_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ 			return ret;
+ 		iocb->ki_flags &= ~IOCB_DIRECT;
+ 	}
++	pagefault_disable();
+ 	iocb->ki_flags |= IOCB_NOIO;
+ 	ret = generic_file_read_iter(iocb, to);
+ 	iocb->ki_flags &= ~IOCB_NOIO;
++	pagefault_enable();
+ 	if (ret >= 0) {
+ 		if (!iov_iter_count(to))
+ 			return ret;
+ 		written = ret;
+-	} else {
++	} else if (ret != -EFAULT) {
+ 		if (ret != -EAGAIN)
+ 			return ret;
+ 		if (iocb->ki_flags & IOCB_NOWAIT)
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 4330603eae35d..3ecf71151fb1f 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4252,12 +4252,8 @@ static int io_statx(struct io_kiocb *req, bool force_nonblock)
+ 	struct io_statx *ctx = &req->statx;
+ 	int ret;
+ 
+-	if (force_nonblock) {
+-		/* only need file table for an actual valid fd */
+-		if (ctx->dfd == -1 || ctx->dfd == AT_FDCWD)
+-			req->flags |= REQ_F_NO_FILE_TABLE;
++	if (force_nonblock)
+ 		return -EAGAIN;
+-	}
+ 
+ 	ret = do_statx(ctx->dfd, ctx->filename, ctx->flags, ctx->mask,
+ 		       ctx->buffer);
+diff --git a/fs/ioctl.c b/fs/ioctl.c
+index 4e6cc0a7d69c9..7bcc60091287c 100644
+--- a/fs/ioctl.c
++++ b/fs/ioctl.c
+@@ -170,7 +170,7 @@ int fiemap_prep(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 
+ 	if (*len == 0)
+ 		return -EINVAL;
+-	if (start > maxbytes)
++	if (start >= maxbytes)
+ 		return -EFBIG;
+ 
+ 	/*
+diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
+index 4391fd3abd8f8..e00e184b12615 100644
+--- a/fs/nilfs2/btnode.c
++++ b/fs/nilfs2/btnode.c
+@@ -20,6 +20,23 @@
+ #include "page.h"
+ #include "btnode.h"
+ 
++
++/**
++ * nilfs_init_btnc_inode - initialize B-tree node cache inode
++ * @btnc_inode: inode to be initialized
++ *
++ * nilfs_init_btnc_inode() sets up an inode for B-tree node cache.
++ */
++void nilfs_init_btnc_inode(struct inode *btnc_inode)
++{
++	struct nilfs_inode_info *ii = NILFS_I(btnc_inode);
++
++	btnc_inode->i_mode = S_IFREG;
++	ii->i_flags = 0;
++	memset(&ii->i_bmap_data, 0, sizeof(struct nilfs_bmap));
++	mapping_set_gfp_mask(btnc_inode->i_mapping, GFP_NOFS);
++}
++
+ void nilfs_btnode_cache_clear(struct address_space *btnc)
+ {
+ 	invalidate_mapping_pages(btnc, 0, -1);
+@@ -29,7 +46,7 @@ void nilfs_btnode_cache_clear(struct address_space *btnc)
+ struct buffer_head *
+ nilfs_btnode_create_block(struct address_space *btnc, __u64 blocknr)
+ {
+-	struct inode *inode = NILFS_BTNC_I(btnc);
++	struct inode *inode = btnc->host;
+ 	struct buffer_head *bh;
+ 
+ 	bh = nilfs_grab_buffer(inode, btnc, blocknr, BIT(BH_NILFS_Node));
+@@ -57,7 +74,7 @@ int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr,
+ 			      struct buffer_head **pbh, sector_t *submit_ptr)
+ {
+ 	struct buffer_head *bh;
+-	struct inode *inode = NILFS_BTNC_I(btnc);
++	struct inode *inode = btnc->host;
+ 	struct page *page;
+ 	int err;
+ 
+@@ -157,7 +174,7 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
+ 				    struct nilfs_btnode_chkey_ctxt *ctxt)
+ {
+ 	struct buffer_head *obh, *nbh;
+-	struct inode *inode = NILFS_BTNC_I(btnc);
++	struct inode *inode = btnc->host;
+ 	__u64 oldkey = ctxt->oldkey, newkey = ctxt->newkey;
+ 	int err;
+ 
+diff --git a/fs/nilfs2/btnode.h b/fs/nilfs2/btnode.h
+index 0f88dbc9bcb3e..05ab64d354dc9 100644
+--- a/fs/nilfs2/btnode.h
++++ b/fs/nilfs2/btnode.h
+@@ -30,6 +30,7 @@ struct nilfs_btnode_chkey_ctxt {
+ 	struct buffer_head *newbh;
+ };
+ 
++void nilfs_init_btnc_inode(struct inode *btnc_inode);
+ void nilfs_btnode_cache_clear(struct address_space *);
+ struct buffer_head *nilfs_btnode_create_block(struct address_space *btnc,
+ 					      __u64 blocknr);
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index f42ab57201e7b..77efd69213a3d 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -58,7 +58,8 @@ static void nilfs_btree_free_path(struct nilfs_btree_path *path)
+ static int nilfs_btree_get_new_block(const struct nilfs_bmap *btree,
+ 				     __u64 ptr, struct buffer_head **bhp)
+ {
+-	struct address_space *btnc = &NILFS_BMAP_I(btree)->i_btnode_cache;
++	struct inode *btnc_inode = NILFS_BMAP_I(btree)->i_assoc_inode;
++	struct address_space *btnc = btnc_inode->i_mapping;
+ 	struct buffer_head *bh;
+ 
+ 	bh = nilfs_btnode_create_block(btnc, ptr);
+@@ -470,7 +471,8 @@ static int __nilfs_btree_get_block(const struct nilfs_bmap *btree, __u64 ptr,
+ 				   struct buffer_head **bhp,
+ 				   const struct nilfs_btree_readahead_info *ra)
+ {
+-	struct address_space *btnc = &NILFS_BMAP_I(btree)->i_btnode_cache;
++	struct inode *btnc_inode = NILFS_BMAP_I(btree)->i_assoc_inode;
++	struct address_space *btnc = btnc_inode->i_mapping;
+ 	struct buffer_head *bh, *ra_bh;
+ 	sector_t submit_ptr = 0;
+ 	int ret;
+@@ -1742,6 +1744,10 @@ nilfs_btree_prepare_convert_and_insert(struct nilfs_bmap *btree, __u64 key,
+ 		dat = nilfs_bmap_get_dat(btree);
+ 	}
+ 
++	ret = nilfs_attach_btree_node_cache(&NILFS_BMAP_I(btree)->vfs_inode);
++	if (ret < 0)
++		return ret;
++
+ 	ret = nilfs_bmap_prepare_alloc_ptr(btree, dreq, dat);
+ 	if (ret < 0)
+ 		return ret;
+@@ -1914,7 +1920,7 @@ static int nilfs_btree_prepare_update_v(struct nilfs_bmap *btree,
+ 		path[level].bp_ctxt.newkey = path[level].bp_newreq.bpr_ptr;
+ 		path[level].bp_ctxt.bh = path[level].bp_bh;
+ 		ret = nilfs_btnode_prepare_change_key(
+-			&NILFS_BMAP_I(btree)->i_btnode_cache,
++			NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping,
+ 			&path[level].bp_ctxt);
+ 		if (ret < 0) {
+ 			nilfs_dat_abort_update(dat,
+@@ -1940,7 +1946,7 @@ static void nilfs_btree_commit_update_v(struct nilfs_bmap *btree,
+ 
+ 	if (buffer_nilfs_node(path[level].bp_bh)) {
+ 		nilfs_btnode_commit_change_key(
+-			&NILFS_BMAP_I(btree)->i_btnode_cache,
++			NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping,
+ 			&path[level].bp_ctxt);
+ 		path[level].bp_bh = path[level].bp_ctxt.bh;
+ 	}
+@@ -1959,7 +1965,7 @@ static void nilfs_btree_abort_update_v(struct nilfs_bmap *btree,
+ 			       &path[level].bp_newreq.bpr_req);
+ 	if (buffer_nilfs_node(path[level].bp_bh))
+ 		nilfs_btnode_abort_change_key(
+-			&NILFS_BMAP_I(btree)->i_btnode_cache,
++			NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping,
+ 			&path[level].bp_ctxt);
+ }
+ 
+@@ -2135,7 +2141,8 @@ static void nilfs_btree_add_dirty_buffer(struct nilfs_bmap *btree,
+ static void nilfs_btree_lookup_dirty_buffers(struct nilfs_bmap *btree,
+ 					     struct list_head *listp)
+ {
+-	struct address_space *btcache = &NILFS_BMAP_I(btree)->i_btnode_cache;
++	struct inode *btnc_inode = NILFS_BMAP_I(btree)->i_assoc_inode;
++	struct address_space *btcache = btnc_inode->i_mapping;
+ 	struct list_head lists[NILFS_BTREE_LEVEL_MAX];
+ 	struct pagevec pvec;
+ 	struct buffer_head *bh, *head;
+@@ -2189,12 +2196,12 @@ static int nilfs_btree_assign_p(struct nilfs_bmap *btree,
+ 		path[level].bp_ctxt.newkey = blocknr;
+ 		path[level].bp_ctxt.bh = *bh;
+ 		ret = nilfs_btnode_prepare_change_key(
+-			&NILFS_BMAP_I(btree)->i_btnode_cache,
++			NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping,
+ 			&path[level].bp_ctxt);
+ 		if (ret < 0)
+ 			return ret;
+ 		nilfs_btnode_commit_change_key(
+-			&NILFS_BMAP_I(btree)->i_btnode_cache,
++			NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping,
+ 			&path[level].bp_ctxt);
+ 		*bh = path[level].bp_ctxt.bh;
+ 	}
+@@ -2399,6 +2406,10 @@ int nilfs_btree_init(struct nilfs_bmap *bmap)
+ 
+ 	if (nilfs_btree_root_broken(nilfs_btree_get_root(bmap), bmap->b_inode))
+ 		ret = -EIO;
++	else
++		ret = nilfs_attach_btree_node_cache(
++			&NILFS_BMAP_I(bmap)->vfs_inode);
++
+ 	return ret;
+ }
+ 
+diff --git a/fs/nilfs2/dat.c b/fs/nilfs2/dat.c
+index 8bccdf1158fce..1a3d183027b9e 100644
+--- a/fs/nilfs2/dat.c
++++ b/fs/nilfs2/dat.c
+@@ -497,7 +497,9 @@ int nilfs_dat_read(struct super_block *sb, size_t entry_size,
+ 	di = NILFS_DAT_I(dat);
+ 	lockdep_set_class(&di->mi.mi_sem, &dat_lock_key);
+ 	nilfs_palloc_setup_cache(dat, &di->palloc_cache);
+-	nilfs_mdt_setup_shadow_map(dat, &di->shadow);
++	err = nilfs_mdt_setup_shadow_map(dat, &di->shadow);
++	if (err)
++		goto failed;
+ 
+ 	err = nilfs_read_inode_common(dat, raw_inode);
+ 	if (err)
+diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c
+index 4483204968568..aadea660c66c9 100644
+--- a/fs/nilfs2/gcinode.c
++++ b/fs/nilfs2/gcinode.c
+@@ -126,9 +126,10 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff,
+ int nilfs_gccache_submit_read_node(struct inode *inode, sector_t pbn,
+ 				   __u64 vbn, struct buffer_head **out_bh)
+ {
++	struct inode *btnc_inode = NILFS_I(inode)->i_assoc_inode;
+ 	int ret;
+ 
+-	ret = nilfs_btnode_submit_block(&NILFS_I(inode)->i_btnode_cache,
++	ret = nilfs_btnode_submit_block(btnc_inode->i_mapping,
+ 					vbn ? : pbn, pbn, REQ_OP_READ, 0,
+ 					out_bh, &pbn);
+ 	if (ret == -EEXIST) /* internal code (cache hit) */
+@@ -170,7 +171,7 @@ int nilfs_init_gcinode(struct inode *inode)
+ 	ii->i_flags = 0;
+ 	nilfs_bmap_init_gc(ii->i_bmap);
+ 
+-	return 0;
++	return nilfs_attach_btree_node_cache(inode);
+ }
+ 
+ /**
+@@ -185,7 +186,7 @@ void nilfs_remove_all_gcinodes(struct the_nilfs *nilfs)
+ 		ii = list_first_entry(head, struct nilfs_inode_info, i_dirty);
+ 		list_del_init(&ii->i_dirty);
+ 		truncate_inode_pages(&ii->vfs_inode.i_data, 0);
+-		nilfs_btnode_cache_clear(&ii->i_btnode_cache);
++		nilfs_btnode_cache_clear(ii->i_assoc_inode->i_mapping);
+ 		iput(&ii->vfs_inode);
+ 	}
+ }
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index 745d371d6fea6..95684fa3c985a 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -29,12 +29,16 @@
+  * @cno: checkpoint number
+  * @root: pointer on NILFS root object (mounted checkpoint)
+  * @for_gc: inode for GC flag
++ * @for_btnc: inode for B-tree node cache flag
++ * @for_shadow: inode for shadowed page cache flag
+  */
+ struct nilfs_iget_args {
+ 	u64 ino;
+ 	__u64 cno;
+ 	struct nilfs_root *root;
+-	int for_gc;
++	bool for_gc;
++	bool for_btnc;
++	bool for_shadow;
+ };
+ 
+ static int nilfs_iget_test(struct inode *inode, void *opaque);
+@@ -314,7 +318,8 @@ static int nilfs_insert_inode_locked(struct inode *inode,
+ 				     unsigned long ino)
+ {
+ 	struct nilfs_iget_args args = {
+-		.ino = ino, .root = root, .cno = 0, .for_gc = 0
++		.ino = ino, .root = root, .cno = 0, .for_gc = false,
++		.for_btnc = false, .for_shadow = false
+ 	};
+ 
+ 	return insert_inode_locked4(inode, ino, nilfs_iget_test, &args);
+@@ -527,6 +532,19 @@ static int nilfs_iget_test(struct inode *inode, void *opaque)
+ 		return 0;
+ 
+ 	ii = NILFS_I(inode);
++	if (test_bit(NILFS_I_BTNC, &ii->i_state)) {
++		if (!args->for_btnc)
++			return 0;
++	} else if (args->for_btnc) {
++		return 0;
++	}
++	if (test_bit(NILFS_I_SHADOW, &ii->i_state)) {
++		if (!args->for_shadow)
++			return 0;
++	} else if (args->for_shadow) {
++		return 0;
++	}
++
+ 	if (!test_bit(NILFS_I_GCINODE, &ii->i_state))
+ 		return !args->for_gc;
+ 
+@@ -538,15 +556,17 @@ static int nilfs_iget_set(struct inode *inode, void *opaque)
+ 	struct nilfs_iget_args *args = opaque;
+ 
+ 	inode->i_ino = args->ino;
+-	if (args->for_gc) {
++	NILFS_I(inode)->i_cno = args->cno;
++	NILFS_I(inode)->i_root = args->root;
++	if (args->root && args->ino == NILFS_ROOT_INO)
++		nilfs_get_root(args->root);
++
++	if (args->for_gc)
+ 		NILFS_I(inode)->i_state = BIT(NILFS_I_GCINODE);
+-		NILFS_I(inode)->i_cno = args->cno;
+-		NILFS_I(inode)->i_root = NULL;
+-	} else {
+-		if (args->root && args->ino == NILFS_ROOT_INO)
+-			nilfs_get_root(args->root);
+-		NILFS_I(inode)->i_root = args->root;
+-	}
++	if (args->for_btnc)
++		NILFS_I(inode)->i_state |= BIT(NILFS_I_BTNC);
++	if (args->for_shadow)
++		NILFS_I(inode)->i_state |= BIT(NILFS_I_SHADOW);
+ 	return 0;
+ }
+ 
+@@ -554,7 +574,8 @@ struct inode *nilfs_ilookup(struct super_block *sb, struct nilfs_root *root,
+ 			    unsigned long ino)
+ {
+ 	struct nilfs_iget_args args = {
+-		.ino = ino, .root = root, .cno = 0, .for_gc = 0
++		.ino = ino, .root = root, .cno = 0, .for_gc = false,
++		.for_btnc = false, .for_shadow = false
+ 	};
+ 
+ 	return ilookup5(sb, ino, nilfs_iget_test, &args);
+@@ -564,7 +585,8 @@ struct inode *nilfs_iget_locked(struct super_block *sb, struct nilfs_root *root,
+ 				unsigned long ino)
+ {
+ 	struct nilfs_iget_args args = {
+-		.ino = ino, .root = root, .cno = 0, .for_gc = 0
++		.ino = ino, .root = root, .cno = 0, .for_gc = false,
++		.for_btnc = false, .for_shadow = false
+ 	};
+ 
+ 	return iget5_locked(sb, ino, nilfs_iget_test, nilfs_iget_set, &args);
+@@ -595,7 +617,8 @@ struct inode *nilfs_iget_for_gc(struct super_block *sb, unsigned long ino,
+ 				__u64 cno)
+ {
+ 	struct nilfs_iget_args args = {
+-		.ino = ino, .root = NULL, .cno = cno, .for_gc = 1
++		.ino = ino, .root = NULL, .cno = cno, .for_gc = true,
++		.for_btnc = false, .for_shadow = false
+ 	};
+ 	struct inode *inode;
+ 	int err;
+@@ -615,6 +638,113 @@ struct inode *nilfs_iget_for_gc(struct super_block *sb, unsigned long ino,
+ 	return inode;
+ }
+ 
++/**
++ * nilfs_attach_btree_node_cache - attach a B-tree node cache to the inode
++ * @inode: inode object
++ *
++ * nilfs_attach_btree_node_cache() attaches a B-tree node cache to @inode,
++ * or does nothing if the inode already has it.  This function allocates
++ * an additional inode to maintain page cache of B-tree nodes one-on-one.
++ *
++ * Return Value: On success, 0 is returned. On errors, one of the following
++ * negative error code is returned.
++ *
++ * %-ENOMEM - Insufficient memory available.
++ */
++int nilfs_attach_btree_node_cache(struct inode *inode)
++{
++	struct nilfs_inode_info *ii = NILFS_I(inode);
++	struct inode *btnc_inode;
++	struct nilfs_iget_args args;
++
++	if (ii->i_assoc_inode)
++		return 0;
++
++	args.ino = inode->i_ino;
++	args.root = ii->i_root;
++	args.cno = ii->i_cno;
++	args.for_gc = test_bit(NILFS_I_GCINODE, &ii->i_state) != 0;
++	args.for_btnc = true;
++	args.for_shadow = test_bit(NILFS_I_SHADOW, &ii->i_state) != 0;
++
++	btnc_inode = iget5_locked(inode->i_sb, inode->i_ino, nilfs_iget_test,
++				  nilfs_iget_set, &args);
++	if (unlikely(!btnc_inode))
++		return -ENOMEM;
++	if (btnc_inode->i_state & I_NEW) {
++		nilfs_init_btnc_inode(btnc_inode);
++		unlock_new_inode(btnc_inode);
++	}
++	NILFS_I(btnc_inode)->i_assoc_inode = inode;
++	NILFS_I(btnc_inode)->i_bmap = ii->i_bmap;
++	ii->i_assoc_inode = btnc_inode;
++
++	return 0;
++}
++
++/**
++ * nilfs_detach_btree_node_cache - detach the B-tree node cache from the inode
++ * @inode: inode object
++ *
++ * nilfs_detach_btree_node_cache() detaches the B-tree node cache and its
++ * holder inode bound to @inode, or does nothing if @inode doesn't have it.
++ */
++void nilfs_detach_btree_node_cache(struct inode *inode)
++{
++	struct nilfs_inode_info *ii = NILFS_I(inode);
++	struct inode *btnc_inode = ii->i_assoc_inode;
++
++	if (btnc_inode) {
++		NILFS_I(btnc_inode)->i_assoc_inode = NULL;
++		ii->i_assoc_inode = NULL;
++		iput(btnc_inode);
++	}
++}
++
++/**
++ * nilfs_iget_for_shadow - obtain inode for shadow mapping
++ * @inode: inode object that uses shadow mapping
++ *
++ * nilfs_iget_for_shadow() allocates a pair of inodes that holds page
++ * caches for shadow mapping.  The page cache for data pages is set up
++ * in one inode and the one for b-tree node pages is set up in the
++ * other inode, which is attached to the former inode.
++ *
++ * Return Value: On success, a pointer to the inode for data pages is
++ * returned. On errors, one of the following negative error code is returned
++ * in a pointer type.
++ *
++ * %-ENOMEM - Insufficient memory available.
++ */
++struct inode *nilfs_iget_for_shadow(struct inode *inode)
++{
++	struct nilfs_iget_args args = {
++		.ino = inode->i_ino, .root = NULL, .cno = 0, .for_gc = false,
++		.for_btnc = false, .for_shadow = true
++	};
++	struct inode *s_inode;
++	int err;
++
++	s_inode = iget5_locked(inode->i_sb, inode->i_ino, nilfs_iget_test,
++			       nilfs_iget_set, &args);
++	if (unlikely(!s_inode))
++		return ERR_PTR(-ENOMEM);
++	if (!(s_inode->i_state & I_NEW))
++		return inode;
++
++	NILFS_I(s_inode)->i_flags = 0;
++	memset(NILFS_I(s_inode)->i_bmap, 0, sizeof(struct nilfs_bmap));
++	mapping_set_gfp_mask(s_inode->i_mapping, GFP_NOFS);
++
++	err = nilfs_attach_btree_node_cache(s_inode);
++	if (unlikely(err)) {
++		iget_failed(s_inode);
++		return ERR_PTR(err);
++	}
++	unlock_new_inode(s_inode);
++	return s_inode;
++}
++
+ void nilfs_write_inode_common(struct inode *inode,
+ 			      struct nilfs_inode *raw_inode, int has_bmap)
+ {
+@@ -762,7 +892,8 @@ static void nilfs_clear_inode(struct inode *inode)
+ 	if (test_bit(NILFS_I_BMAP, &ii->i_state))
+ 		nilfs_bmap_clear(ii->i_bmap);
+ 
+-	nilfs_btnode_cache_clear(&ii->i_btnode_cache);
++	if (!test_bit(NILFS_I_BTNC, &ii->i_state))
++		nilfs_detach_btree_node_cache(inode);
+ 
+ 	if (ii->i_root && inode->i_ino == NILFS_ROOT_INO)
+ 		nilfs_put_root(ii->i_root);
+diff --git a/fs/nilfs2/mdt.c b/fs/nilfs2/mdt.c
+index c0361ce45f62d..e80ef2c0a785c 100644
+--- a/fs/nilfs2/mdt.c
++++ b/fs/nilfs2/mdt.c
+@@ -469,9 +469,18 @@ int nilfs_mdt_init(struct inode *inode, gfp_t gfp_mask, size_t objsz)
+ void nilfs_mdt_clear(struct inode *inode)
+ {
+ 	struct nilfs_mdt_info *mdi = NILFS_MDT(inode);
++	struct nilfs_shadow_map *shadow = mdi->mi_shadow;
+ 
+ 	if (mdi->mi_palloc_cache)
+ 		nilfs_palloc_destroy_cache(inode);
++
++	if (shadow) {
++		struct inode *s_inode = shadow->inode;
++
++		shadow->inode = NULL;
++		iput(s_inode);
++		mdi->mi_shadow = NULL;
++	}
+ }
+ 
+ /**
+@@ -505,12 +514,15 @@ int nilfs_mdt_setup_shadow_map(struct inode *inode,
+ 			       struct nilfs_shadow_map *shadow)
+ {
+ 	struct nilfs_mdt_info *mi = NILFS_MDT(inode);
++	struct inode *s_inode;
+ 
+ 	INIT_LIST_HEAD(&shadow->frozen_buffers);
+-	address_space_init_once(&shadow->frozen_data);
+-	nilfs_mapping_init(&shadow->frozen_data, inode);
+-	address_space_init_once(&shadow->frozen_btnodes);
+-	nilfs_mapping_init(&shadow->frozen_btnodes, inode);
++
++	s_inode = nilfs_iget_for_shadow(inode);
++	if (IS_ERR(s_inode))
++		return PTR_ERR(s_inode);
++
++	shadow->inode = s_inode;
+ 	mi->mi_shadow = shadow;
+ 	return 0;
+ }
+@@ -524,14 +536,15 @@ int nilfs_mdt_save_to_shadow_map(struct inode *inode)
+ 	struct nilfs_mdt_info *mi = NILFS_MDT(inode);
+ 	struct nilfs_inode_info *ii = NILFS_I(inode);
+ 	struct nilfs_shadow_map *shadow = mi->mi_shadow;
++	struct inode *s_inode = shadow->inode;
+ 	int ret;
+ 
+-	ret = nilfs_copy_dirty_pages(&shadow->frozen_data, inode->i_mapping);
++	ret = nilfs_copy_dirty_pages(s_inode->i_mapping, inode->i_mapping);
+ 	if (ret)
+ 		goto out;
+ 
+-	ret = nilfs_copy_dirty_pages(&shadow->frozen_btnodes,
+-				     &ii->i_btnode_cache);
++	ret = nilfs_copy_dirty_pages(NILFS_I(s_inode)->i_assoc_inode->i_mapping,
++				     ii->i_assoc_inode->i_mapping);
+ 	if (ret)
+ 		goto out;
+ 
+@@ -547,7 +560,7 @@ int nilfs_mdt_freeze_buffer(struct inode *inode, struct buffer_head *bh)
+ 	struct page *page;
+ 	int blkbits = inode->i_blkbits;
+ 
+-	page = grab_cache_page(&shadow->frozen_data, bh->b_page->index);
++	page = grab_cache_page(shadow->inode->i_mapping, bh->b_page->index);
+ 	if (!page)
+ 		return -ENOMEM;
+ 
+@@ -579,7 +592,7 @@ nilfs_mdt_get_frozen_buffer(struct inode *inode, struct buffer_head *bh)
+ 	struct page *page;
+ 	int n;
+ 
+-	page = find_lock_page(&shadow->frozen_data, bh->b_page->index);
++	page = find_lock_page(shadow->inode->i_mapping, bh->b_page->index);
+ 	if (page) {
+ 		if (page_has_buffers(page)) {
+ 			n = bh_offset(bh) >> inode->i_blkbits;
+@@ -620,10 +633,11 @@ void nilfs_mdt_restore_from_shadow_map(struct inode *inode)
+ 		nilfs_palloc_clear_cache(inode);
+ 
+ 	nilfs_clear_dirty_pages(inode->i_mapping, true);
+-	nilfs_copy_back_pages(inode->i_mapping, &shadow->frozen_data);
++	nilfs_copy_back_pages(inode->i_mapping, shadow->inode->i_mapping);
+ 
+-	nilfs_clear_dirty_pages(&ii->i_btnode_cache, true);
+-	nilfs_copy_back_pages(&ii->i_btnode_cache, &shadow->frozen_btnodes);
++	nilfs_clear_dirty_pages(ii->i_assoc_inode->i_mapping, true);
++	nilfs_copy_back_pages(ii->i_assoc_inode->i_mapping,
++			      NILFS_I(shadow->inode)->i_assoc_inode->i_mapping);
+ 
+ 	nilfs_bmap_restore(ii->i_bmap, &shadow->bmap_store);
+ 
+@@ -638,10 +652,11 @@ void nilfs_mdt_clear_shadow_map(struct inode *inode)
+ {
+ 	struct nilfs_mdt_info *mi = NILFS_MDT(inode);
+ 	struct nilfs_shadow_map *shadow = mi->mi_shadow;
++	struct inode *shadow_btnc_inode = NILFS_I(shadow->inode)->i_assoc_inode;
+ 
+ 	down_write(&mi->mi_sem);
+ 	nilfs_release_frozen_buffers(shadow);
+-	truncate_inode_pages(&shadow->frozen_data, 0);
+-	truncate_inode_pages(&shadow->frozen_btnodes, 0);
++	truncate_inode_pages(shadow->inode->i_mapping, 0);
++	truncate_inode_pages(shadow_btnc_inode->i_mapping, 0);
+ 	up_write(&mi->mi_sem);
+ }
+diff --git a/fs/nilfs2/mdt.h b/fs/nilfs2/mdt.h
+index e77aea4bb921c..9d8ac0d27c16e 100644
+--- a/fs/nilfs2/mdt.h
++++ b/fs/nilfs2/mdt.h
+@@ -18,14 +18,12 @@
+ /**
+  * struct nilfs_shadow_map - shadow mapping of meta data file
+  * @bmap_store: shadow copy of bmap state
+- * @frozen_data: shadowed dirty data pages
+- * @frozen_btnodes: shadowed dirty b-tree nodes' pages
++ * @inode: holder of page caches used in shadow mapping
+  * @frozen_buffers: list of frozen buffers
+  */
+ struct nilfs_shadow_map {
+ 	struct nilfs_bmap_store bmap_store;
+-	struct address_space frozen_data;
+-	struct address_space frozen_btnodes;
++	struct inode *inode;
+ 	struct list_head frozen_buffers;
+ };
+ 
+diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h
+index f8450ee3fd06c..9ca165bc97d2b 100644
+--- a/fs/nilfs2/nilfs.h
++++ b/fs/nilfs2/nilfs.h
+@@ -28,7 +28,7 @@
+  * @i_xattr: <TODO>
+  * @i_dir_start_lookup: page index of last successful search
+  * @i_cno: checkpoint number for GC inode
+- * @i_btnode_cache: cached pages of b-tree nodes
++ * @i_assoc_inode: associated inode (B-tree node cache holder or back pointer)
+  * @i_dirty: list for connecting dirty files
+  * @xattr_sem: semaphore for extended attributes processing
+  * @i_bh: buffer contains disk inode
+@@ -43,7 +43,7 @@ struct nilfs_inode_info {
+ 	__u64 i_xattr;	/* sector_t ??? */
+ 	__u32 i_dir_start_lookup;
+ 	__u64 i_cno;		/* check point number for GC inode */
+-	struct address_space i_btnode_cache;
++	struct inode *i_assoc_inode;
+ 	struct list_head i_dirty;	/* List for connecting dirty files */
+ 
+ #ifdef CONFIG_NILFS_XATTR
+@@ -75,13 +75,6 @@ NILFS_BMAP_I(const struct nilfs_bmap *bmap)
+ 	return container_of(bmap, struct nilfs_inode_info, i_bmap_data);
+ }
+ 
+-static inline struct inode *NILFS_BTNC_I(struct address_space *btnc)
+-{
+-	struct nilfs_inode_info *ii =
+-		container_of(btnc, struct nilfs_inode_info, i_btnode_cache);
+-	return &ii->vfs_inode;
+-}
+-
+ /*
+  * Dynamic state flags of NILFS on-memory inode (i_state)
+  */
+@@ -98,6 +91,8 @@ enum {
+ 	NILFS_I_INODE_SYNC,		/* dsync is not allowed for inode */
+ 	NILFS_I_BMAP,			/* has bmap and btnode_cache */
+ 	NILFS_I_GCINODE,		/* inode for GC, on memory only */
++	NILFS_I_BTNC,			/* inode for btree node cache */
++	NILFS_I_SHADOW,			/* inode for shadowed page cache */
+ };
+ 
+ /*
+@@ -264,6 +259,9 @@ struct inode *nilfs_iget(struct super_block *sb, struct nilfs_root *root,
+ 			 unsigned long ino);
+ extern struct inode *nilfs_iget_for_gc(struct super_block *sb,
+ 				       unsigned long ino, __u64 cno);
++int nilfs_attach_btree_node_cache(struct inode *inode);
++void nilfs_detach_btree_node_cache(struct inode *inode);
++struct inode *nilfs_iget_for_shadow(struct inode *inode);
+ extern void nilfs_update_inode(struct inode *, struct buffer_head *, int);
+ extern void nilfs_truncate(struct inode *);
+ extern void nilfs_evict_inode(struct inode *);
+diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
+index 171fb5cd427fd..d1a148f0cae33 100644
+--- a/fs/nilfs2/page.c
++++ b/fs/nilfs2/page.c
+@@ -448,10 +448,9 @@ void nilfs_mapping_init(struct address_space *mapping, struct inode *inode)
+ /*
+  * NILFS2 needs clear_page_dirty() in the following two cases:
+  *
+- * 1) For B-tree node pages and data pages of the dat/gcdat, NILFS2 clears
+- *    page dirty flags when it copies back pages from the shadow cache
+- *    (gcdat->{i_mapping,i_btnode_cache}) to its original cache
+- *    (dat->{i_mapping,i_btnode_cache}).
++ * 1) For B-tree node pages and data pages of DAT file, NILFS2 clears dirty
++ *    flag of pages when it copies back pages from shadow cache to the
++ *    original cache.
+  *
+  * 2) Some B-tree operations like insertion or deletion may dispose buffers
+  *    in dirty state, and this needs to cancel the dirty state of their pages.
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index e3726aca28ed6..8350c2eaee75a 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -738,15 +738,18 @@ static void nilfs_lookup_dirty_node_buffers(struct inode *inode,
+ 					    struct list_head *listp)
+ {
+ 	struct nilfs_inode_info *ii = NILFS_I(inode);
+-	struct address_space *mapping = &ii->i_btnode_cache;
++	struct inode *btnc_inode = ii->i_assoc_inode;
+ 	struct pagevec pvec;
+ 	struct buffer_head *bh, *head;
+ 	unsigned int i;
+ 	pgoff_t index = 0;
+ 
++	if (!btnc_inode)
++		return;
++
+ 	pagevec_init(&pvec);
+ 
+-	while (pagevec_lookup_tag(&pvec, mapping, &index,
++	while (pagevec_lookup_tag(&pvec, btnc_inode->i_mapping, &index,
+ 					PAGECACHE_TAG_DIRTY)) {
+ 		for (i = 0; i < pagevec_count(&pvec); i++) {
+ 			bh = head = page_buffers(pvec.pages[i]);
+@@ -2415,7 +2418,7 @@ nilfs_remove_written_gcinodes(struct the_nilfs *nilfs, struct list_head *head)
+ 			continue;
+ 		list_del_init(&ii->i_dirty);
+ 		truncate_inode_pages(&ii->vfs_inode.i_data, 0);
+-		nilfs_btnode_cache_clear(&ii->i_btnode_cache);
++		nilfs_btnode_cache_clear(ii->i_assoc_inode->i_mapping);
+ 		iput(&ii->vfs_inode);
+ 	}
+ }
+diff --git a/fs/nilfs2/super.c b/fs/nilfs2/super.c
+index 4abd928b0bc83..b9d30e8c43b06 100644
+--- a/fs/nilfs2/super.c
++++ b/fs/nilfs2/super.c
+@@ -157,7 +157,8 @@ struct inode *nilfs_alloc_inode(struct super_block *sb)
+ 	ii->i_bh = NULL;
+ 	ii->i_state = 0;
+ 	ii->i_cno = 0;
+-	nilfs_mapping_init(&ii->i_btnode_cache, &ii->vfs_inode);
++	ii->i_assoc_inode = NULL;
++	ii->i_bmap = &ii->i_bmap_data;
+ 	return &ii->vfs_inode;
+ }
+ 
+@@ -1377,8 +1378,6 @@ static void nilfs_inode_init_once(void *obj)
+ #ifdef CONFIG_NILFS_XATTR
+ 	init_rwsem(&ii->xattr_sem);
+ #endif
+-	address_space_init_once(&ii->i_btnode_cache);
+-	ii->i_bmap = &ii->i_bmap_data;
+ 	inode_init_once(&ii->vfs_inode);
+ }
+ 
+diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
+index 83fa08a065071..787fff5ec7f58 100644
+--- a/include/linux/ceph/osd_client.h
++++ b/include/linux/ceph/osd_client.h
+@@ -287,6 +287,9 @@ struct ceph_osd_linger_request {
+ 	rados_watcherrcb_t errcb;
+ 	void *data;
+ 
++	struct ceph_pagelist *request_pl;
++	struct page **notify_id_pages;
++
+ 	struct page ***preply_pages;
+ 	size_t *preply_len;
+ };
+diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
+index a9361178c5dbb..a7d70cdee25e3 100644
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -61,14 +61,6 @@
+  */
+ #define DMA_ATTR_PRIVILEGED		(1UL << 9)
+ 
+-/*
+- * This is a hint to the DMA-mapping subsystem that the device is expected
+- * to overwrite the entire mapped size, thus the caller does not require any
+- * of the previous buffer contents to be preserved. This allows
+- * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
+- */
+-#define DMA_ATTR_OVERWRITE		(1UL << 10)
+-
+ /*
+  * A dma_addr_t can hold any valid DMA or bus address for the platform.  It can
+  * be given to a device to use as a DMA source or target.  It is specific to a
+diff --git a/include/linux/mc146818rtc.h b/include/linux/mc146818rtc.h
+index 0661af17a7584..1e02058113944 100644
+--- a/include/linux/mc146818rtc.h
++++ b/include/linux/mc146818rtc.h
+@@ -86,6 +86,8 @@ struct cmos_rtc_board_info {
+    /* 2 values for divider stage reset, others for "testing purposes only" */
+ #  define RTC_DIV_RESET1	0x60
+ #  define RTC_DIV_RESET2	0x70
++   /* In AMD BKDG bit 5 and 6 are reserved, bit 4 is for select dv0 bank */
++#  define RTC_AMD_BANK_SELECT	0x10
+   /* Periodic intr. / Square wave rate select. 0=none, 1=32.8kHz,... 15=2Hz */
+ # define RTC_RATE_SELECT 	0x0F
+ 
+diff --git a/include/net/ip.h b/include/net/ip.h
+index de2dc22a78f93..76aaa7eb5b823 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -55,6 +55,7 @@ struct inet_skb_parm {
+ #define IPSKB_DOREDIRECT	BIT(5)
+ #define IPSKB_FRAG_PMTU		BIT(6)
+ #define IPSKB_L3SLAVE		BIT(7)
++#define IPSKB_NOPOLICY		BIT(8)
+ 
+ 	u16			frag_max_size;
+ };
+diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h
+index 22e1bc72b979c..69e4161462fb4 100644
+--- a/include/net/netns/xfrm.h
++++ b/include/net/netns/xfrm.h
+@@ -64,6 +64,9 @@ struct netns_xfrm {
+ 	u32			sysctl_aevent_rseqth;
+ 	int			sysctl_larval_drop;
+ 	u32			sysctl_acq_expires;
++
++	u8			policy_default[XFRM_POLICY_MAX];
++
+ #ifdef CONFIG_SYSCTL
+ 	struct ctl_table_header	*sysctl_hdr;
+ #endif
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 0049a74596490..8a9943d935f14 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1091,6 +1091,27 @@ xfrm_state_addr_cmp(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x, un
+ int __xfrm_policy_check(struct sock *, int dir, struct sk_buff *skb,
+ 			unsigned short family);
+ 
++static inline bool __xfrm_check_nopolicy(struct net *net, struct sk_buff *skb,
++					 int dir)
++{
++	if (!net->xfrm.policy_count[dir] && !secpath_exists(skb))
++		return net->xfrm.policy_default[dir] == XFRM_USERPOLICY_ACCEPT;
++
++	return false;
++}
++
++static inline bool __xfrm_check_dev_nopolicy(struct sk_buff *skb,
++					     int dir, unsigned short family)
++{
++	if (dir != XFRM_POLICY_OUT && family == AF_INET) {
++		/* same dst may be used for traffic originating from
++		 * devices with different policy settings.
++		 */
++		return IPCB(skb)->flags & IPSKB_NOPOLICY;
++	}
++	return skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY);
++}
++
+ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
+ 				       struct sk_buff *skb,
+ 				       unsigned int family, int reverse)
+@@ -1101,9 +1122,9 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
+ 	if (sk && sk->sk_policy[XFRM_POLICY_IN])
+ 		return __xfrm_policy_check(sk, ndir, skb, family);
+ 
+-	return	(!net->xfrm.policy_count[dir] && !secpath_exists(skb)) ||
+-		(skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) ||
+-		__xfrm_policy_check(sk, ndir, skb, family);
++	return __xfrm_check_nopolicy(net, skb, dir) ||
++	       __xfrm_check_dev_nopolicy(skb, dir, family) ||
++	       __xfrm_policy_check(sk, ndir, skb, family);
+ }
+ 
+ static inline int xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, unsigned short family)
+@@ -1155,9 +1176,12 @@ static inline int xfrm_route_forward(struct sk_buff *skb, unsigned short family)
+ {
+ 	struct net *net = dev_net(skb->dev);
+ 
+-	return	!net->xfrm.policy_count[XFRM_POLICY_OUT] ||
+-		(skb_dst(skb)->flags & DST_NOXFRM) ||
+-		__xfrm_route_forward(skb, family);
++	if (!net->xfrm.policy_count[XFRM_POLICY_OUT] &&
++	    net->xfrm.policy_default[XFRM_POLICY_OUT] == XFRM_USERPOLICY_ACCEPT)
++		return true;
++
++	return (skb_dst(skb)->flags & DST_NOXFRM) ||
++	       __xfrm_route_forward(skb, family);
+ }
+ 
+ static inline int xfrm4_route_forward(struct sk_buff *skb)
+diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
+index 7f30393b92c3b..f76d11725c6c6 100644
+--- a/include/uapi/linux/dma-buf.h
++++ b/include/uapi/linux/dma-buf.h
+@@ -44,7 +44,7 @@ struct dma_buf_sync {
+  * between them in actual uapi, they're just different numbers.
+  */
+ #define DMA_BUF_SET_NAME	_IOW(DMA_BUF_BASE, 1, const char *)
+-#define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
+-#define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
++#define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, __u32)
++#define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, __u64)
+ 
+ #endif
+diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
+index 90ddb49fce84e..65e13a099b1a0 100644
+--- a/include/uapi/linux/xfrm.h
++++ b/include/uapi/linux/xfrm.h
+@@ -215,6 +215,11 @@ enum {
+ 
+ 	XFRM_MSG_MAPPING,
+ #define XFRM_MSG_MAPPING XFRM_MSG_MAPPING
++
++	XFRM_MSG_SETDEFAULT,
++#define XFRM_MSG_SETDEFAULT XFRM_MSG_SETDEFAULT
++	XFRM_MSG_GETDEFAULT,
++#define XFRM_MSG_GETDEFAULT XFRM_MSG_GETDEFAULT
+ 	__XFRM_MSG_MAX
+ };
+ #define XFRM_MSG_MAX (__XFRM_MSG_MAX - 1)
+@@ -515,6 +520,15 @@ struct xfrm_user_offload {
+ #define XFRM_OFFLOAD_IPV6	1
+ #define XFRM_OFFLOAD_INBOUND	2
+ 
++struct xfrm_userpolicy_default {
++#define XFRM_USERPOLICY_UNSPEC	0
++#define XFRM_USERPOLICY_BLOCK	1
++#define XFRM_USERPOLICY_ACCEPT	2
++	__u8				in;
++	__u8				fwd;
++	__u8				out;
++};
++
+ #ifndef __KERNEL__
+ /* backwards compatibility for userspace */
+ #define XFRMGRP_ACQUIRE		1
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 62b1e5fa86736..274587a57717f 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -597,10 +597,14 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
+ 		io_tlb_orig_addr[index + i] = slot_addr(orig_addr, i);
+ 
+ 	tlb_addr = slot_addr(io_tlb_start, index) + offset;
+-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+-	    (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
+-	    dir == DMA_BIDIRECTIONAL))
+-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
++	/*
++	 * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
++	 * to the tlb buffer, if we knew for sure the device will
++	 * overwirte the entire current content. But we don't. Thus
++	 * unconditional bounce may prevent leaking swiotlb content (i.e.
++	 * kernel memory) to user-space.
++	 */
++	swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
+ 	return tlb_addr;
+ }
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 9aa6563587d88..8ba155a7b59ed 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -11946,6 +11946,9 @@ SYSCALL_DEFINE5(perf_event_open,
+ 		 * Do not allow to attach to a group in a different task
+ 		 * or CPU context. If we're moving SW events, we'll fix
+ 		 * this up later, so allow that.
++		 *
++		 * Racy, not holding group_leader->ctx->mutex, see comment with
++		 * perf_event_ctx_lock().
+ 		 */
+ 		if (!move_group && group_leader->ctx != ctx)
+ 			goto err_context;
+@@ -12013,6 +12016,7 @@ SYSCALL_DEFINE5(perf_event_open,
+ 			} else {
+ 				perf_event_ctx_unlock(group_leader, gctx);
+ 				move_group = 0;
++				goto not_move_group;
+ 			}
+ 		}
+ 
+@@ -12029,7 +12033,17 @@ SYSCALL_DEFINE5(perf_event_open,
+ 		}
+ 	} else {
+ 		mutex_lock(&ctx->mutex);
++
++		/*
++		 * Now that we hold ctx->lock, (re)validate group_leader->ctx == ctx,
++		 * see the group_leader && !move_group test earlier.
++		 */
++		if (group_leader && group_leader->ctx != ctx) {
++			err = -EINVAL;
++			goto err_locked;
++		}
+ 	}
++not_move_group:
+ 
+ 	if (ctx->task == TASK_TOMBSTONE) {
+ 		err = -ESRCH;
+diff --git a/kernel/module.c b/kernel/module.c
+index 5f4403198f04b..6a0fd245c0483 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -2280,6 +2280,15 @@ void *__symbol_get(const char *symbol)
+ }
+ EXPORT_SYMBOL_GPL(__symbol_get);
+ 
++static bool module_init_layout_section(const char *sname)
++{
++#ifndef CONFIG_MODULE_UNLOAD
++	if (module_exit_section(sname))
++		return true;
++#endif
++	return module_init_section(sname);
++}
++
+ /*
+  * Ensure that an exported symbol [global namespace] does not already exist
+  * in the kernel or in some other module's exported symbol table.
+@@ -2489,7 +2498,7 @@ static void layout_sections(struct module *mod, struct load_info *info)
+ 			if ((s->sh_flags & masks[m][0]) != masks[m][0]
+ 			    || (s->sh_flags & masks[m][1])
+ 			    || s->sh_entsize != ~0UL
+-			    || module_init_section(sname))
++			    || module_init_layout_section(sname))
+ 				continue;
+ 			s->sh_entsize = get_offset(mod, &mod->core_layout.size, s, i);
+ 			pr_debug("\t%s\n", sname);
+@@ -2522,7 +2531,7 @@ static void layout_sections(struct module *mod, struct load_info *info)
+ 			if ((s->sh_flags & masks[m][0]) != masks[m][0]
+ 			    || (s->sh_flags & masks[m][1])
+ 			    || s->sh_entsize != ~0UL
+-			    || !module_init_section(sname))
++			    || !module_init_layout_section(sname))
+ 				continue;
+ 			s->sh_entsize = (get_offset(mod, &mod->init_layout.size, s, i)
+ 					 | INIT_OFFSET_MASK);
+@@ -3171,11 +3180,6 @@ static int rewrite_section_headers(struct load_info *info, int flags)
+ 		   temporary image. */
+ 		shdr->sh_addr = (size_t)info->hdr + shdr->sh_offset;
+ 
+-#ifndef CONFIG_MODULE_UNLOAD
+-		/* Don't load .exit sections */
+-		if (module_exit_section(info->secstrings+shdr->sh_name))
+-			shdr->sh_flags &= ~(unsigned long)SHF_ALLOC;
+-#endif
+ 	}
+ 
+ 	/* Track but don't keep modinfo and version sections. */
+diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c
+index 59a318b9f6463..bf5bf148091f9 100644
+--- a/net/bridge/br_input.c
++++ b/net/bridge/br_input.c
+@@ -43,6 +43,13 @@ static int br_pass_frame_up(struct sk_buff *skb)
+ 	u64_stats_update_end(&brstats->syncp);
+ 
+ 	vg = br_vlan_group_rcu(br);
++
++	/* Reset the offload_fwd_mark because there could be a stacked
++	 * bridge above, and it should not think this bridge it doing
++	 * that bridge's work forwarding out its ports.
++	 */
++	br_switchdev_frame_unmark(skb);
++
+ 	/* Bridge is just like any other port.  Make sure the
+ 	 * packet is allowed except in promisc modue when someone
+ 	 * may be running packet capture.
+diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
+index 7901ab6c79fd2..1e9fab79e2456 100644
+--- a/net/ceph/osd_client.c
++++ b/net/ceph/osd_client.c
+@@ -537,43 +537,6 @@ static void request_init(struct ceph_osd_request *req)
+ 	target_init(&req->r_t);
+ }
+ 
+-/*
+- * This is ugly, but it allows us to reuse linger registration and ping
+- * requests, keeping the structure of the code around send_linger{_ping}()
+- * reasonable.  Setting up a min_nr=2 mempool for each linger request
+- * and dealing with copying ops (this blasts req only, watch op remains
+- * intact) isn't any better.
+- */
+-static void request_reinit(struct ceph_osd_request *req)
+-{
+-	struct ceph_osd_client *osdc = req->r_osdc;
+-	bool mempool = req->r_mempool;
+-	unsigned int num_ops = req->r_num_ops;
+-	u64 snapid = req->r_snapid;
+-	struct ceph_snap_context *snapc = req->r_snapc;
+-	bool linger = req->r_linger;
+-	struct ceph_msg *request_msg = req->r_request;
+-	struct ceph_msg *reply_msg = req->r_reply;
+-
+-	dout("%s req %p\n", __func__, req);
+-	WARN_ON(kref_read(&req->r_kref) != 1);
+-	request_release_checks(req);
+-
+-	WARN_ON(kref_read(&request_msg->kref) != 1);
+-	WARN_ON(kref_read(&reply_msg->kref) != 1);
+-	target_destroy(&req->r_t);
+-
+-	request_init(req);
+-	req->r_osdc = osdc;
+-	req->r_mempool = mempool;
+-	req->r_num_ops = num_ops;
+-	req->r_snapid = snapid;
+-	req->r_snapc = snapc;
+-	req->r_linger = linger;
+-	req->r_request = request_msg;
+-	req->r_reply = reply_msg;
+-}
+-
+ struct ceph_osd_request *ceph_osdc_alloc_request(struct ceph_osd_client *osdc,
+ 					       struct ceph_snap_context *snapc,
+ 					       unsigned int num_ops,
+@@ -918,14 +881,30 @@ EXPORT_SYMBOL(osd_req_op_xattr_init);
+  * @watch_opcode: CEPH_OSD_WATCH_OP_*
+  */
+ static void osd_req_op_watch_init(struct ceph_osd_request *req, int which,
+-				  u64 cookie, u8 watch_opcode)
++				  u8 watch_opcode, u64 cookie, u32 gen)
+ {
+ 	struct ceph_osd_req_op *op;
+ 
+ 	op = osd_req_op_init(req, which, CEPH_OSD_OP_WATCH, 0);
+ 	op->watch.cookie = cookie;
+ 	op->watch.op = watch_opcode;
+-	op->watch.gen = 0;
++	op->watch.gen = gen;
++}
++
++/*
++ * prot_ver, timeout and notify payload (may be empty) should already be
++ * encoded in @request_pl
++ */
++static void osd_req_op_notify_init(struct ceph_osd_request *req, int which,
++				   u64 cookie, struct ceph_pagelist *request_pl)
++{
++	struct ceph_osd_req_op *op;
++
++	op = osd_req_op_init(req, which, CEPH_OSD_OP_NOTIFY, 0);
++	op->notify.cookie = cookie;
++
++	ceph_osd_data_pagelist_init(&op->notify.request_data, request_pl);
++	op->indata_len = request_pl->length;
+ }
+ 
+ /*
+@@ -2727,10 +2706,13 @@ static void linger_release(struct kref *kref)
+ 	WARN_ON(!list_empty(&lreq->pending_lworks));
+ 	WARN_ON(lreq->osd);
+ 
+-	if (lreq->reg_req)
+-		ceph_osdc_put_request(lreq->reg_req);
+-	if (lreq->ping_req)
+-		ceph_osdc_put_request(lreq->ping_req);
++	if (lreq->request_pl)
++		ceph_pagelist_release(lreq->request_pl);
++	if (lreq->notify_id_pages)
++		ceph_release_page_vector(lreq->notify_id_pages, 1);
++
++	ceph_osdc_put_request(lreq->reg_req);
++	ceph_osdc_put_request(lreq->ping_req);
+ 	target_destroy(&lreq->t);
+ 	kfree(lreq);
+ }
+@@ -2999,6 +2981,12 @@ static void linger_commit_cb(struct ceph_osd_request *req)
+ 	struct ceph_osd_linger_request *lreq = req->r_priv;
+ 
+ 	mutex_lock(&lreq->lock);
++	if (req != lreq->reg_req) {
++		dout("%s lreq %p linger_id %llu unknown req (%p != %p)\n",
++		     __func__, lreq, lreq->linger_id, req, lreq->reg_req);
++		goto out;
++	}
++
+ 	dout("%s lreq %p linger_id %llu result %d\n", __func__, lreq,
+ 	     lreq->linger_id, req->r_result);
+ 	linger_reg_commit_complete(lreq, req->r_result);
+@@ -3022,6 +3010,7 @@ static void linger_commit_cb(struct ceph_osd_request *req)
+ 		}
+ 	}
+ 
++out:
+ 	mutex_unlock(&lreq->lock);
+ 	linger_put(lreq);
+ }
+@@ -3044,6 +3033,12 @@ static void linger_reconnect_cb(struct ceph_osd_request *req)
+ 	struct ceph_osd_linger_request *lreq = req->r_priv;
+ 
+ 	mutex_lock(&lreq->lock);
++	if (req != lreq->reg_req) {
++		dout("%s lreq %p linger_id %llu unknown req (%p != %p)\n",
++		     __func__, lreq, lreq->linger_id, req, lreq->reg_req);
++		goto out;
++	}
++
+ 	dout("%s lreq %p linger_id %llu result %d last_error %d\n", __func__,
+ 	     lreq, lreq->linger_id, req->r_result, lreq->last_error);
+ 	if (req->r_result < 0) {
+@@ -3053,46 +3048,64 @@ static void linger_reconnect_cb(struct ceph_osd_request *req)
+ 		}
+ 	}
+ 
++out:
+ 	mutex_unlock(&lreq->lock);
+ 	linger_put(lreq);
+ }
+ 
+ static void send_linger(struct ceph_osd_linger_request *lreq)
+ {
+-	struct ceph_osd_request *req = lreq->reg_req;
+-	struct ceph_osd_req_op *op = &req->r_ops[0];
++	struct ceph_osd_client *osdc = lreq->osdc;
++	struct ceph_osd_request *req;
++	int ret;
+ 
+-	verify_osdc_wrlocked(req->r_osdc);
++	verify_osdc_wrlocked(osdc);
++	mutex_lock(&lreq->lock);
+ 	dout("%s lreq %p linger_id %llu\n", __func__, lreq, lreq->linger_id);
+ 
+-	if (req->r_osd)
+-		cancel_linger_request(req);
++	if (lreq->reg_req) {
++		if (lreq->reg_req->r_osd)
++			cancel_linger_request(lreq->reg_req);
++		ceph_osdc_put_request(lreq->reg_req);
++	}
++
++	req = ceph_osdc_alloc_request(osdc, NULL, 1, true, GFP_NOIO);
++	BUG_ON(!req);
+ 
+-	request_reinit(req);
+ 	target_copy(&req->r_t, &lreq->t);
+ 	req->r_mtime = lreq->mtime;
+ 
+-	mutex_lock(&lreq->lock);
+ 	if (lreq->is_watch && lreq->committed) {
+-		WARN_ON(op->op != CEPH_OSD_OP_WATCH ||
+-			op->watch.cookie != lreq->linger_id);
+-		op->watch.op = CEPH_OSD_WATCH_OP_RECONNECT;
+-		op->watch.gen = ++lreq->register_gen;
++		osd_req_op_watch_init(req, 0, CEPH_OSD_WATCH_OP_RECONNECT,
++				      lreq->linger_id, ++lreq->register_gen);
+ 		dout("lreq %p reconnect register_gen %u\n", lreq,
+-		     op->watch.gen);
++		     req->r_ops[0].watch.gen);
+ 		req->r_callback = linger_reconnect_cb;
+ 	} else {
+-		if (!lreq->is_watch)
++		if (lreq->is_watch) {
++			osd_req_op_watch_init(req, 0, CEPH_OSD_WATCH_OP_WATCH,
++					      lreq->linger_id, 0);
++		} else {
+ 			lreq->notify_id = 0;
+-		else
+-			WARN_ON(op->watch.op != CEPH_OSD_WATCH_OP_WATCH);
++
++			refcount_inc(&lreq->request_pl->refcnt);
++			osd_req_op_notify_init(req, 0, lreq->linger_id,
++					       lreq->request_pl);
++			ceph_osd_data_pages_init(
++			    osd_req_op_data(req, 0, notify, response_data),
++			    lreq->notify_id_pages, PAGE_SIZE, 0, false, false);
++		}
+ 		dout("lreq %p register\n", lreq);
+ 		req->r_callback = linger_commit_cb;
+ 	}
+-	mutex_unlock(&lreq->lock);
++
++	ret = ceph_osdc_alloc_messages(req, GFP_NOIO);
++	BUG_ON(ret);
+ 
+ 	req->r_priv = linger_get(lreq);
+ 	req->r_linger = true;
++	lreq->reg_req = req;
++	mutex_unlock(&lreq->lock);
+ 
+ 	submit_request(req, true);
+ }
+@@ -3102,6 +3115,12 @@ static void linger_ping_cb(struct ceph_osd_request *req)
+ 	struct ceph_osd_linger_request *lreq = req->r_priv;
+ 
+ 	mutex_lock(&lreq->lock);
++	if (req != lreq->ping_req) {
++		dout("%s lreq %p linger_id %llu unknown req (%p != %p)\n",
++		     __func__, lreq, lreq->linger_id, req, lreq->ping_req);
++		goto out;
++	}
++
+ 	dout("%s lreq %p linger_id %llu result %d ping_sent %lu last_error %d\n",
+ 	     __func__, lreq, lreq->linger_id, req->r_result, lreq->ping_sent,
+ 	     lreq->last_error);
+@@ -3117,6 +3136,7 @@ static void linger_ping_cb(struct ceph_osd_request *req)
+ 		     lreq->register_gen, req->r_ops[0].watch.gen);
+ 	}
+ 
++out:
+ 	mutex_unlock(&lreq->lock);
+ 	linger_put(lreq);
+ }
+@@ -3124,8 +3144,8 @@ static void linger_ping_cb(struct ceph_osd_request *req)
+ static void send_linger_ping(struct ceph_osd_linger_request *lreq)
+ {
+ 	struct ceph_osd_client *osdc = lreq->osdc;
+-	struct ceph_osd_request *req = lreq->ping_req;
+-	struct ceph_osd_req_op *op = &req->r_ops[0];
++	struct ceph_osd_request *req;
++	int ret;
+ 
+ 	if (ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSERD)) {
+ 		dout("%s PAUSERD\n", __func__);
+@@ -3137,19 +3157,26 @@ static void send_linger_ping(struct ceph_osd_linger_request *lreq)
+ 	     __func__, lreq, lreq->linger_id, lreq->ping_sent,
+ 	     lreq->register_gen);
+ 
+-	if (req->r_osd)
+-		cancel_linger_request(req);
++	if (lreq->ping_req) {
++		if (lreq->ping_req->r_osd)
++			cancel_linger_request(lreq->ping_req);
++		ceph_osdc_put_request(lreq->ping_req);
++	}
+ 
+-	request_reinit(req);
+-	target_copy(&req->r_t, &lreq->t);
++	req = ceph_osdc_alloc_request(osdc, NULL, 1, true, GFP_NOIO);
++	BUG_ON(!req);
+ 
+-	WARN_ON(op->op != CEPH_OSD_OP_WATCH ||
+-		op->watch.cookie != lreq->linger_id ||
+-		op->watch.op != CEPH_OSD_WATCH_OP_PING);
+-	op->watch.gen = lreq->register_gen;
++	target_copy(&req->r_t, &lreq->t);
++	osd_req_op_watch_init(req, 0, CEPH_OSD_WATCH_OP_PING, lreq->linger_id,
++			      lreq->register_gen);
+ 	req->r_callback = linger_ping_cb;
++
++	ret = ceph_osdc_alloc_messages(req, GFP_NOIO);
++	BUG_ON(ret);
++
+ 	req->r_priv = linger_get(lreq);
+ 	req->r_linger = true;
++	lreq->ping_req = req;
+ 
+ 	ceph_osdc_get_request(req);
+ 	account_request(req);
+@@ -3165,12 +3192,6 @@ static void linger_submit(struct ceph_osd_linger_request *lreq)
+ 
+ 	down_write(&osdc->lock);
+ 	linger_register(lreq);
+-	if (lreq->is_watch) {
+-		lreq->reg_req->r_ops[0].watch.cookie = lreq->linger_id;
+-		lreq->ping_req->r_ops[0].watch.cookie = lreq->linger_id;
+-	} else {
+-		lreq->reg_req->r_ops[0].notify.cookie = lreq->linger_id;
+-	}
+ 
+ 	calc_target(osdc, &lreq->t, false);
+ 	osd = lookup_create_osd(osdc, lreq->t.osd, true);
+@@ -3202,9 +3223,9 @@ static void cancel_linger_map_check(struct ceph_osd_linger_request *lreq)
+  */
+ static void __linger_cancel(struct ceph_osd_linger_request *lreq)
+ {
+-	if (lreq->is_watch && lreq->ping_req->r_osd)
++	if (lreq->ping_req && lreq->ping_req->r_osd)
+ 		cancel_linger_request(lreq->ping_req);
+-	if (lreq->reg_req->r_osd)
++	if (lreq->reg_req && lreq->reg_req->r_osd)
+ 		cancel_linger_request(lreq->reg_req);
+ 	cancel_linger_map_check(lreq);
+ 	unlink_linger(lreq->osd, lreq);
+@@ -4651,43 +4672,6 @@ again:
+ }
+ EXPORT_SYMBOL(ceph_osdc_sync);
+ 
+-static struct ceph_osd_request *
+-alloc_linger_request(struct ceph_osd_linger_request *lreq)
+-{
+-	struct ceph_osd_request *req;
+-
+-	req = ceph_osdc_alloc_request(lreq->osdc, NULL, 1, false, GFP_NOIO);
+-	if (!req)
+-		return NULL;
+-
+-	ceph_oid_copy(&req->r_base_oid, &lreq->t.base_oid);
+-	ceph_oloc_copy(&req->r_base_oloc, &lreq->t.base_oloc);
+-	return req;
+-}
+-
+-static struct ceph_osd_request *
+-alloc_watch_request(struct ceph_osd_linger_request *lreq, u8 watch_opcode)
+-{
+-	struct ceph_osd_request *req;
+-
+-	req = alloc_linger_request(lreq);
+-	if (!req)
+-		return NULL;
+-
+-	/*
+-	 * Pass 0 for cookie because we don't know it yet, it will be
+-	 * filled in by linger_submit().
+-	 */
+-	osd_req_op_watch_init(req, 0, 0, watch_opcode);
+-
+-	if (ceph_osdc_alloc_messages(req, GFP_NOIO)) {
+-		ceph_osdc_put_request(req);
+-		return NULL;
+-	}
+-
+-	return req;
+-}
+-
+ /*
+  * Returns a handle, caller owns a ref.
+  */
+@@ -4717,18 +4701,6 @@ ceph_osdc_watch(struct ceph_osd_client *osdc,
+ 	lreq->t.flags = CEPH_OSD_FLAG_WRITE;
+ 	ktime_get_real_ts64(&lreq->mtime);
+ 
+-	lreq->reg_req = alloc_watch_request(lreq, CEPH_OSD_WATCH_OP_WATCH);
+-	if (!lreq->reg_req) {
+-		ret = -ENOMEM;
+-		goto err_put_lreq;
+-	}
+-
+-	lreq->ping_req = alloc_watch_request(lreq, CEPH_OSD_WATCH_OP_PING);
+-	if (!lreq->ping_req) {
+-		ret = -ENOMEM;
+-		goto err_put_lreq;
+-	}
+-
+ 	linger_submit(lreq);
+ 	ret = linger_reg_commit_wait(lreq);
+ 	if (ret) {
+@@ -4766,8 +4738,8 @@ int ceph_osdc_unwatch(struct ceph_osd_client *osdc,
+ 	ceph_oloc_copy(&req->r_base_oloc, &lreq->t.base_oloc);
+ 	req->r_flags = CEPH_OSD_FLAG_WRITE;
+ 	ktime_get_real_ts64(&req->r_mtime);
+-	osd_req_op_watch_init(req, 0, lreq->linger_id,
+-			      CEPH_OSD_WATCH_OP_UNWATCH);
++	osd_req_op_watch_init(req, 0, CEPH_OSD_WATCH_OP_UNWATCH,
++			      lreq->linger_id, 0);
+ 
+ 	ret = ceph_osdc_alloc_messages(req, GFP_NOIO);
+ 	if (ret)
+@@ -4853,35 +4825,6 @@ out_put_req:
+ }
+ EXPORT_SYMBOL(ceph_osdc_notify_ack);
+ 
+-static int osd_req_op_notify_init(struct ceph_osd_request *req, int which,
+-				  u64 cookie, u32 prot_ver, u32 timeout,
+-				  void *payload, u32 payload_len)
+-{
+-	struct ceph_osd_req_op *op;
+-	struct ceph_pagelist *pl;
+-	int ret;
+-
+-	op = osd_req_op_init(req, which, CEPH_OSD_OP_NOTIFY, 0);
+-	op->notify.cookie = cookie;
+-
+-	pl = ceph_pagelist_alloc(GFP_NOIO);
+-	if (!pl)
+-		return -ENOMEM;
+-
+-	ret = ceph_pagelist_encode_32(pl, 1); /* prot_ver */
+-	ret |= ceph_pagelist_encode_32(pl, timeout);
+-	ret |= ceph_pagelist_encode_32(pl, payload_len);
+-	ret |= ceph_pagelist_append(pl, payload, payload_len);
+-	if (ret) {
+-		ceph_pagelist_release(pl);
+-		return -ENOMEM;
+-	}
+-
+-	ceph_osd_data_pagelist_init(&op->notify.request_data, pl);
+-	op->indata_len = pl->length;
+-	return 0;
+-}
+-
+ /*
+  * @timeout: in seconds
+  *
+@@ -4900,7 +4843,6 @@ int ceph_osdc_notify(struct ceph_osd_client *osdc,
+ 		     size_t *preply_len)
+ {
+ 	struct ceph_osd_linger_request *lreq;
+-	struct page **pages;
+ 	int ret;
+ 
+ 	WARN_ON(!timeout);
+@@ -4913,41 +4855,35 @@ int ceph_osdc_notify(struct ceph_osd_client *osdc,
+ 	if (!lreq)
+ 		return -ENOMEM;
+ 
+-	lreq->preply_pages = preply_pages;
+-	lreq->preply_len = preply_len;
+-
+-	ceph_oid_copy(&lreq->t.base_oid, oid);
+-	ceph_oloc_copy(&lreq->t.base_oloc, oloc);
+-	lreq->t.flags = CEPH_OSD_FLAG_READ;
+-
+-	lreq->reg_req = alloc_linger_request(lreq);
+-	if (!lreq->reg_req) {
++	lreq->request_pl = ceph_pagelist_alloc(GFP_NOIO);
++	if (!lreq->request_pl) {
+ 		ret = -ENOMEM;
+ 		goto out_put_lreq;
+ 	}
+ 
+-	/*
+-	 * Pass 0 for cookie because we don't know it yet, it will be
+-	 * filled in by linger_submit().
+-	 */
+-	ret = osd_req_op_notify_init(lreq->reg_req, 0, 0, 1, timeout,
+-				     payload, payload_len);
+-	if (ret)
++	ret = ceph_pagelist_encode_32(lreq->request_pl, 1); /* prot_ver */
++	ret |= ceph_pagelist_encode_32(lreq->request_pl, timeout);
++	ret |= ceph_pagelist_encode_32(lreq->request_pl, payload_len);
++	ret |= ceph_pagelist_append(lreq->request_pl, payload, payload_len);
++	if (ret) {
++		ret = -ENOMEM;
+ 		goto out_put_lreq;
++	}
+ 
+ 	/* for notify_id */
+-	pages = ceph_alloc_page_vector(1, GFP_NOIO);
+-	if (IS_ERR(pages)) {
+-		ret = PTR_ERR(pages);
++	lreq->notify_id_pages = ceph_alloc_page_vector(1, GFP_NOIO);
++	if (IS_ERR(lreq->notify_id_pages)) {
++		ret = PTR_ERR(lreq->notify_id_pages);
++		lreq->notify_id_pages = NULL;
+ 		goto out_put_lreq;
+ 	}
+-	ceph_osd_data_pages_init(osd_req_op_data(lreq->reg_req, 0, notify,
+-						 response_data),
+-				 pages, PAGE_SIZE, 0, false, true);
+ 
+-	ret = ceph_osdc_alloc_messages(lreq->reg_req, GFP_NOIO);
+-	if (ret)
+-		goto out_put_lreq;
++	lreq->preply_pages = preply_pages;
++	lreq->preply_len = preply_len;
++
++	ceph_oid_copy(&lreq->t.base_oid, oid);
++	ceph_oloc_copy(&lreq->t.base_oloc, oloc);
++	lreq->t.flags = CEPH_OSD_FLAG_READ;
+ 
+ 	linger_submit(lreq);
+ 	ret = linger_reg_commit_wait(lreq);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 4080e3c6c50d8..aab8ac383d5d1 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1765,6 +1765,7 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ 	struct in_device *in_dev = __in_dev_get_rcu(dev);
+ 	unsigned int flags = RTCF_MULTICAST;
+ 	struct rtable *rth;
++	bool no_policy;
+ 	u32 itag = 0;
+ 	int err;
+ 
+@@ -1775,8 +1776,12 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ 	if (our)
+ 		flags |= RTCF_LOCAL;
+ 
++	no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY);
++	if (no_policy)
++		IPCB(skb)->flags |= IPSKB_NOPOLICY;
++
+ 	rth = rt_dst_alloc(dev_net(dev)->loopback_dev, flags, RTN_MULTICAST,
+-			   IN_DEV_CONF_GET(in_dev, NOPOLICY), false);
++			   no_policy, false);
+ 	if (!rth)
+ 		return -ENOBUFS;
+ 
+@@ -1835,7 +1840,7 @@ static int __mkroute_input(struct sk_buff *skb,
+ 	struct rtable *rth;
+ 	int err;
+ 	struct in_device *out_dev;
+-	bool do_cache;
++	bool do_cache, no_policy;
+ 	u32 itag = 0;
+ 
+ 	/* get a working reference to the output device */
+@@ -1880,6 +1885,10 @@ static int __mkroute_input(struct sk_buff *skb,
+ 		}
+ 	}
+ 
++	no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY);
++	if (no_policy)
++		IPCB(skb)->flags |= IPSKB_NOPOLICY;
++
+ 	fnhe = find_exception(nhc, daddr);
+ 	if (do_cache) {
+ 		if (fnhe)
+@@ -1892,9 +1901,8 @@ static int __mkroute_input(struct sk_buff *skb,
+ 		}
+ 	}
+ 
+-	rth = rt_dst_alloc(out_dev->dev, 0, res->type,
+-			   IN_DEV_CONF_GET(in_dev, NOPOLICY),
+-			   IN_DEV_CONF_GET(out_dev, NOXFRM));
++	rth = rt_dst_alloc(out_dev->dev, 0, res->type, no_policy,
++			   IN_DEV_ORCONF(out_dev, NOXFRM));
+ 	if (!rth) {
+ 		err = -ENOBUFS;
+ 		goto cleanup;
+@@ -2145,6 +2153,7 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ 	struct rtable	*rth;
+ 	struct flowi4	fl4;
+ 	bool do_cache = true;
++	bool no_policy;
+ 
+ 	/* IP on this device is disabled. */
+ 
+@@ -2262,6 +2271,10 @@ brd_input:
+ 	RT_CACHE_STAT_INC(in_brd);
+ 
+ local_input:
++	no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY);
++	if (no_policy)
++		IPCB(skb)->flags |= IPSKB_NOPOLICY;
++
+ 	do_cache &= res->fi && !itag;
+ 	if (do_cache) {
+ 		struct fib_nh_common *nhc = FIB_RES_NHC(*res);
+@@ -2276,7 +2289,7 @@ local_input:
+ 
+ 	rth = rt_dst_alloc(ip_rt_get_dev(net, res),
+ 			   flags | RTCF_LOCAL, res->type,
+-			   IN_DEV_CONF_GET(in_dev, NOPOLICY), false);
++			   no_policy, false);
+ 	if (!rth)
+ 		goto e_nobufs;
+ 
+@@ -2499,8 +2512,8 @@ static struct rtable *__mkroute_output(const struct fib_result *res,
+ 
+ add:
+ 	rth = rt_dst_alloc(dev_out, flags, type,
+-			   IN_DEV_CONF_GET(in_dev, NOPOLICY),
+-			   IN_DEV_CONF_GET(in_dev, NOXFRM));
++			   IN_DEV_ORCONF(in_dev, NOPOLICY),
++			   IN_DEV_ORCONF(in_dev, NOXFRM));
+ 	if (!rth)
+ 		return ERR_PTR(-ENOBUFS);
+ 
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index bd9b5c573b5a4..61505b0df57db 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2830,8 +2830,10 @@ static int pfkey_process(struct sock *sk, struct sk_buff *skb, const struct sadb
+ 	void *ext_hdrs[SADB_EXT_MAX];
+ 	int err;
+ 
+-	pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
+-			BROADCAST_PROMISC_ONLY, NULL, sock_net(sk));
++	err = pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
++			      BROADCAST_PROMISC_ONLY, NULL, sock_net(sk));
++	if (err)
++		return err;
+ 
+ 	memset(ext_hdrs, 0, sizeof(ext_hdrs));
+ 	err = parse_exthdrs(skb, hdr, ext_hdrs);
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 1e7614abd947d..e991abb45f68f 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -1387,8 +1387,7 @@ static void ieee80211_rx_reorder_ampdu(struct ieee80211_rx_data *rx,
+ 		goto dont_reorder;
+ 
+ 	/* not part of a BA session */
+-	if (ack_policy != IEEE80211_QOS_CTL_ACK_POLICY_BLOCKACK &&
+-	    ack_policy != IEEE80211_QOS_CTL_ACK_POLICY_NORMAL)
++	if (ack_policy == IEEE80211_QOS_CTL_ACK_POLICY_NOACK)
+ 		goto dont_reorder;
+ 
+ 	/* new, potentially un-ordered, ampdu frame - process it */
+diff --git a/net/nfc/nci/data.c b/net/nfc/nci/data.c
+index ce3382be937ff..b002e18f38c81 100644
+--- a/net/nfc/nci/data.c
++++ b/net/nfc/nci/data.c
+@@ -118,7 +118,7 @@ static int nci_queue_tx_data_frags(struct nci_dev *ndev,
+ 
+ 		skb_frag = nci_skb_alloc(ndev,
+ 					 (NCI_DATA_HDR_SIZE + frag_len),
+-					 GFP_KERNEL);
++					 GFP_ATOMIC);
+ 		if (skb_frag == NULL) {
+ 			rc = -ENOMEM;
+ 			goto free_exit;
+diff --git a/net/nfc/nci/hci.c b/net/nfc/nci/hci.c
+index 04e55ccb33836..4fe336ff2bfa1 100644
+--- a/net/nfc/nci/hci.c
++++ b/net/nfc/nci/hci.c
+@@ -153,7 +153,7 @@ static int nci_hci_send_data(struct nci_dev *ndev, u8 pipe,
+ 
+ 	i = 0;
+ 	skb = nci_skb_alloc(ndev, conn_info->max_pkt_payload_len +
+-			    NCI_DATA_HDR_SIZE, GFP_KERNEL);
++			    NCI_DATA_HDR_SIZE, GFP_ATOMIC);
+ 	if (!skb)
+ 		return -ENOMEM;
+ 
+@@ -186,7 +186,7 @@ static int nci_hci_send_data(struct nci_dev *ndev, u8 pipe,
+ 		if (i < data_len) {
+ 			skb = nci_skb_alloc(ndev,
+ 					    conn_info->max_pkt_payload_len +
+-					    NCI_DATA_HDR_SIZE, GFP_KERNEL);
++					    NCI_DATA_HDR_SIZE, GFP_ATOMIC);
+ 			if (!skb)
+ 				return -ENOMEM;
+ 
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index 90510298b32a6..0d5463ddfd62f 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -232,6 +232,10 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+ 	for (i = 0; i < p->tcfp_nkeys; ++i) {
+ 		u32 cur = p->tcfp_keys[i].off;
+ 
++		/* sanitize the shift value for any later use */
++		p->tcfp_keys[i].shift = min_t(size_t, BITS_PER_TYPE(int) - 1,
++					      p->tcfp_keys[i].shift);
++
+ 		/* The AT option can read a single byte, we can bound the actual
+ 		 * value with uchar max.
+ 		 */
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 12f44ad4e0d8e..f8d5f35cfc664 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -2955,6 +2955,15 @@ int nl80211_parse_chandef(struct cfg80211_registered_device *rdev,
+ 	} else if (attrs[NL80211_ATTR_CHANNEL_WIDTH]) {
+ 		chandef->width =
+ 			nla_get_u32(attrs[NL80211_ATTR_CHANNEL_WIDTH]);
++		if (chandef->chan->band == NL80211_BAND_S1GHZ) {
++			/* User input error for channel width doesn't match channel  */
++			if (chandef->width != ieee80211_s1g_channel_width(chandef->chan)) {
++				NL_SET_ERR_MSG_ATTR(extack,
++						    attrs[NL80211_ATTR_CHANNEL_WIDTH],
++						    "bad channel width");
++				return -EINVAL;
++			}
++		}
+ 		if (attrs[NL80211_ATTR_CENTER_FREQ1]) {
+ 			chandef->center_freq1 =
+ 				nla_get_u32(attrs[NL80211_ATTR_CENTER_FREQ1]);
+@@ -11086,18 +11095,23 @@ static int nl80211_set_tx_bitrate_mask(struct sk_buff *skb,
+ 	struct cfg80211_bitrate_mask mask;
+ 	struct cfg80211_registered_device *rdev = info->user_ptr[0];
+ 	struct net_device *dev = info->user_ptr[1];
++	struct wireless_dev *wdev = dev->ieee80211_ptr;
+ 	int err;
+ 
+ 	if (!rdev->ops->set_bitrate_mask)
+ 		return -EOPNOTSUPP;
+ 
++	wdev_lock(wdev);
+ 	err = nl80211_parse_tx_bitrate_mask(info, info->attrs,
+ 					    NL80211_ATTR_TX_RATES, &mask,
+ 					    dev);
+ 	if (err)
+-		return err;
++		goto out;
+ 
+-	return rdev_set_bitrate_mask(rdev, dev, NULL, &mask);
++	err = rdev_set_bitrate_mask(rdev, dev, NULL, &mask);
++out:
++	wdev_unlock(wdev);
++	return err;
+ }
+ 
+ static int nl80211_register_mgmt(struct sk_buff *skb, struct genl_info *info)
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 3d0ffd9270041..93cbcc8f9b39e 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3161,6 +3161,11 @@ ok:
+ 	return dst;
+ 
+ nopol:
++	if (!(dst_orig->dev->flags & IFF_LOOPBACK) &&
++	    net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK) {
++		err = -EPERM;
++		goto error;
++	}
+ 	if (!(flags & XFRM_LOOKUP_ICMP)) {
+ 		dst = dst_orig;
+ 		goto ok;
+@@ -3608,6 +3613,11 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 	}
+ 
+ 	if (!pol) {
++		if (net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK) {
++			XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOPOLS);
++			return 0;
++		}
++
+ 		if (sp && secpath_has_nontransport(sp, 0, &xerr_idx)) {
+ 			xfrm_secpath_reject(xerr_idx, skb, &fl);
+ 			XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOPOLS);
+@@ -3662,6 +3672,13 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 				tpp[ti++] = &pols[pi]->xfrm_vec[i];
+ 		}
+ 		xfrm_nr = ti;
++
++		if (net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK &&
++		    !xfrm_nr) {
++			XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOSTATES);
++			goto reject;
++		}
++
+ 		if (npols > 1) {
+ 			xfrm_tmpl_sort(stp, tpp, xfrm_nr, family);
+ 			tpp = stp;
+@@ -4146,6 +4163,9 @@ static int __net_init xfrm_net_init(struct net *net)
+ 	spin_lock_init(&net->xfrm.xfrm_policy_lock);
+ 	seqcount_spinlock_init(&net->xfrm.xfrm_policy_hash_generation, &net->xfrm.xfrm_policy_lock);
+ 	mutex_init(&net->xfrm.xfrm_cfg_mutex);
++	net->xfrm.policy_default[XFRM_POLICY_IN] = XFRM_USERPOLICY_ACCEPT;
++	net->xfrm.policy_default[XFRM_POLICY_FWD] = XFRM_USERPOLICY_ACCEPT;
++	net->xfrm.policy_default[XFRM_POLICY_OUT] = XFRM_USERPOLICY_ACCEPT;
+ 
+ 	rv = xfrm_statistics_init(net);
+ 	if (rv < 0)
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 1ece01cd67a42..d9841f44487f2 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -1914,6 +1914,90 @@ static struct sk_buff *xfrm_policy_netlink(struct sk_buff *in_skb,
+ 	return skb;
+ }
+ 
++static int xfrm_notify_userpolicy(struct net *net)
++{
++	struct xfrm_userpolicy_default *up;
++	int len = NLMSG_ALIGN(sizeof(*up));
++	struct nlmsghdr *nlh;
++	struct sk_buff *skb;
++
++	skb = nlmsg_new(len, GFP_ATOMIC);
++	if (skb == NULL)
++		return -ENOMEM;
++
++	nlh = nlmsg_put(skb, 0, 0, XFRM_MSG_GETDEFAULT, sizeof(*up), 0);
++	if (nlh == NULL) {
++		kfree_skb(skb);
++		return -EMSGSIZE;
++	}
++
++	up = nlmsg_data(nlh);
++	up->in = net->xfrm.policy_default[XFRM_POLICY_IN];
++	up->fwd = net->xfrm.policy_default[XFRM_POLICY_FWD];
++	up->out = net->xfrm.policy_default[XFRM_POLICY_OUT];
++
++	nlmsg_end(skb, nlh);
++
++	return xfrm_nlmsg_multicast(net, skb, 0, XFRMNLGRP_POLICY);
++}
++
++static bool xfrm_userpolicy_is_valid(__u8 policy)
++{
++	return policy == XFRM_USERPOLICY_BLOCK ||
++	       policy == XFRM_USERPOLICY_ACCEPT;
++}
++
++static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh,
++			    struct nlattr **attrs)
++{
++	struct net *net = sock_net(skb->sk);
++	struct xfrm_userpolicy_default *up = nlmsg_data(nlh);
++
++	if (xfrm_userpolicy_is_valid(up->in))
++		net->xfrm.policy_default[XFRM_POLICY_IN] = up->in;
++
++	if (xfrm_userpolicy_is_valid(up->fwd))
++		net->xfrm.policy_default[XFRM_POLICY_FWD] = up->fwd;
++
++	if (xfrm_userpolicy_is_valid(up->out))
++		net->xfrm.policy_default[XFRM_POLICY_OUT] = up->out;
++
++	rt_genid_bump_all(net);
++
++	xfrm_notify_userpolicy(net);
++	return 0;
++}
++
++static int xfrm_get_default(struct sk_buff *skb, struct nlmsghdr *nlh,
++			    struct nlattr **attrs)
++{
++	struct sk_buff *r_skb;
++	struct nlmsghdr *r_nlh;
++	struct net *net = sock_net(skb->sk);
++	struct xfrm_userpolicy_default *r_up;
++	int len = NLMSG_ALIGN(sizeof(struct xfrm_userpolicy_default));
++	u32 portid = NETLINK_CB(skb).portid;
++	u32 seq = nlh->nlmsg_seq;
++
++	r_skb = nlmsg_new(len, GFP_ATOMIC);
++	if (!r_skb)
++		return -ENOMEM;
++
++	r_nlh = nlmsg_put(r_skb, portid, seq, XFRM_MSG_GETDEFAULT, sizeof(*r_up), 0);
++	if (!r_nlh) {
++		kfree_skb(r_skb);
++		return -EMSGSIZE;
++	}
++
++	r_up = nlmsg_data(r_nlh);
++	r_up->in = net->xfrm.policy_default[XFRM_POLICY_IN];
++	r_up->fwd = net->xfrm.policy_default[XFRM_POLICY_FWD];
++	r_up->out = net->xfrm.policy_default[XFRM_POLICY_OUT];
++	nlmsg_end(r_skb, r_nlh);
++
++	return nlmsg_unicast(net->xfrm.nlsk, r_skb, portid);
++}
++
+ static int xfrm_get_policy(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 		struct nlattr **attrs)
+ {
+@@ -2621,6 +2705,8 @@ const int xfrm_msg_min[XFRM_NR_MSGTYPES] = {
+ 	[XFRM_MSG_GETSADINFO  - XFRM_MSG_BASE] = sizeof(u32),
+ 	[XFRM_MSG_NEWSPDINFO  - XFRM_MSG_BASE] = sizeof(u32),
+ 	[XFRM_MSG_GETSPDINFO  - XFRM_MSG_BASE] = sizeof(u32),
++	[XFRM_MSG_SETDEFAULT  - XFRM_MSG_BASE] = XMSGSIZE(xfrm_userpolicy_default),
++	[XFRM_MSG_GETDEFAULT  - XFRM_MSG_BASE] = XMSGSIZE(xfrm_userpolicy_default),
+ };
+ EXPORT_SYMBOL_GPL(xfrm_msg_min);
+ 
+@@ -2700,6 +2786,8 @@ static const struct xfrm_link {
+ 						   .nla_pol = xfrma_spd_policy,
+ 						   .nla_max = XFRMA_SPD_MAX },
+ 	[XFRM_MSG_GETSPDINFO  - XFRM_MSG_BASE] = { .doit = xfrm_get_spdinfo   },
++	[XFRM_MSG_SETDEFAULT  - XFRM_MSG_BASE] = { .doit = xfrm_set_default   },
++	[XFRM_MSG_GETDEFAULT  - XFRM_MSG_BASE] = { .doit = xfrm_get_default   },
+ };
+ 
+ static int xfrm_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
+diff --git a/security/selinux/nlmsgtab.c b/security/selinux/nlmsgtab.c
+index b692319186862..c4fb57e90b6ac 100644
+--- a/security/selinux/nlmsgtab.c
++++ b/security/selinux/nlmsgtab.c
+@@ -123,6 +123,8 @@ static const struct nlmsg_perm nlmsg_xfrm_perms[] =
+ 	{ XFRM_MSG_NEWSPDINFO,	NETLINK_XFRM_SOCKET__NLMSG_WRITE },
+ 	{ XFRM_MSG_GETSPDINFO,	NETLINK_XFRM_SOCKET__NLMSG_READ  },
+ 	{ XFRM_MSG_MAPPING,	NETLINK_XFRM_SOCKET__NLMSG_READ  },
++	{ XFRM_MSG_SETDEFAULT,	NETLINK_XFRM_SOCKET__NLMSG_WRITE },
++	{ XFRM_MSG_GETDEFAULT,	NETLINK_XFRM_SOCKET__NLMSG_READ  },
+ };
+ 
+ static const struct nlmsg_perm nlmsg_audit_perms[] =
+@@ -186,7 +188,7 @@ int selinux_nlmsg_lookup(u16 sclass, u16 nlmsg_type, u32 *perm)
+ 		 * structures at the top of this file with the new mappings
+ 		 * before updating the BUILD_BUG_ON() macro!
+ 		 */
+-		BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_MAPPING);
++		BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_GETDEFAULT);
+ 		err = nlmsg_perm(nlmsg_type, perm, nlmsg_xfrm_perms,
+ 				 sizeof(nlmsg_xfrm_perms));
+ 		break;
+diff --git a/security/selinux/ss/hashtab.c b/security/selinux/ss/hashtab.c
+index 7335f67ce54eb..e8960a59586cb 100644
+--- a/security/selinux/ss/hashtab.c
++++ b/security/selinux/ss/hashtab.c
+@@ -178,7 +178,8 @@ int hashtab_duplicate(struct hashtab *new, struct hashtab *orig,
+ 			kmem_cache_free(hashtab_node_cachep, cur);
+ 		}
+ 	}
+-	kmem_cache_free(hashtab_node_cachep, new);
++	kfree(new->htable);
++	memset(new, 0, sizeof(*new));
+ 	return -ENOMEM;
+ }
+ 
+diff --git a/sound/isa/wavefront/wavefront_synth.c b/sound/isa/wavefront/wavefront_synth.c
+index d6420d224d097..09b368761cc00 100644
+--- a/sound/isa/wavefront/wavefront_synth.c
++++ b/sound/isa/wavefront/wavefront_synth.c
+@@ -1088,7 +1088,8 @@ wavefront_send_sample (snd_wavefront_t *dev,
+ 
+ 			if (dataptr < data_end) {
+ 		
+-				__get_user (sample_short, dataptr);
++				if (get_user(sample_short, dataptr))
++					return -EFAULT;
+ 				dataptr += skip;
+ 		
+ 				if (data_is_unsigned) { /* GUS ? */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index a5b1fd62a99cf..f630f9257ad11 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9011,6 +9011,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+ 	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+ 	SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_SET_COEF_DEFAULTS),
++	SND_PCI_QUIRK(0x1d05, 0x1096, "TongFang GMxMRxx", ALC269_FIXUP_NO_SHUTUP),
++	SND_PCI_QUIRK(0x1d05, 0x1100, "TongFang GKxNRxx", ALC269_FIXUP_NO_SHUTUP),
++	SND_PCI_QUIRK(0x1d05, 0x1111, "TongFang GMxZGxx", ALC269_FIXUP_NO_SHUTUP),
++	SND_PCI_QUIRK(0x1d05, 0x1119, "TongFang GMxZGxx", ALC269_FIXUP_NO_SHUTUP),
++	SND_PCI_QUIRK(0x1d05, 0x1129, "TongFang GMxZGxx", ALC269_FIXUP_NO_SHUTUP),
++	SND_PCI_QUIRK(0x1d05, 0x1147, "TongFang GMxTGxx", ALC269_FIXUP_NO_SHUTUP),
++	SND_PCI_QUIRK(0x1d05, 0x115c, "TongFang GMxTGxx", ALC269_FIXUP_NO_SHUTUP),
++	SND_PCI_QUIRK(0x1d05, 0x121b, "TongFang GMxAGxx", ALC269_FIXUP_NO_SHUTUP),
+ 	SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+@@ -10876,6 +10884,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD),
+ 	SND_PCI_QUIRK(0x14cd, 0x5003, "USI", ALC662_FIXUP_USI_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC662_FIXUP_LENOVO_MULTI_CODECS),
++	SND_PCI_QUIRK(0x17aa, 0x1057, "Lenovo P360", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32ca, "Lenovo ThinkCentre M80", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN),
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index aabd3a10ec5b4..1ac91c46da3cf 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3208,6 +3208,15 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ 	}
+ },
+ 
++/* Rane SL-1 */
++{
++	USB_DEVICE(0x13e5, 0x0001),
++	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++		.ifnum = QUIRK_ANY_INTERFACE,
++		.type = QUIRK_AUDIO_STANDARD_INTERFACE
++        }
++},
++
+ /* disabled due to regression for other devices;
+  * see https://bugzilla.kernel.org/show_bug.cgi?id=199905
+  */
+diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c
+index 11726ec6285f3..88c11305bdd5d 100644
+--- a/tools/perf/bench/numa.c
++++ b/tools/perf/bench/numa.c
+@@ -1656,7 +1656,7 @@ static int __bench_numa(const char *name)
+ 		"GB/sec,", "total-speed",	"GB/sec total speed");
+ 
+ 	if (g->p.show_details >= 2) {
+-		char tname[14 + 2 * 10 + 1];
++		char tname[14 + 2 * 11 + 1];
+ 		struct thread_data *td;
+ 		for (p = 0; p < g->p.nr_proc; p++) {
+ 			for (t = 0; t < g->p.nr_threads; t++) {
+diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
+index ace976d891252..4a11ea2261cbe 100755
+--- a/tools/testing/selftests/net/fcnal-test.sh
++++ b/tools/testing/selftests/net/fcnal-test.sh
+@@ -794,10 +794,16 @@ ipv4_ping()
+ 	setup
+ 	set_sysctl net.ipv4.raw_l3mdev_accept=1 2>/dev/null
+ 	ipv4_ping_novrf
++	setup
++	set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
++	ipv4_ping_novrf
+ 
+ 	log_subsection "With VRF"
+ 	setup "yes"
+ 	ipv4_ping_vrf
++	setup "yes"
++	set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
++	ipv4_ping_vrf
+ }
+ 
+ ################################################################################
+@@ -2261,10 +2267,16 @@ ipv6_ping()
+ 	log_subsection "No VRF"
+ 	setup
+ 	ipv6_ping_novrf
++	setup
++	set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
++	ipv6_ping_novrf
+ 
+ 	log_subsection "With VRF"
+ 	setup "yes"
+ 	ipv6_ping_vrf
++	setup "yes"
++	set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
++	ipv6_ping_vrf
+ }
+ 
+ ################################################################################
+diff --git a/tools/virtio/Makefile b/tools/virtio/Makefile
+index 0d7bbe49359d8..1b25cc7c64bbd 100644
+--- a/tools/virtio/Makefile
++++ b/tools/virtio/Makefile
+@@ -5,7 +5,8 @@ virtio_test: virtio_ring.o virtio_test.o
+ vringh_test: vringh_test.o vringh.o virtio_ring.o
+ 
+ CFLAGS += -g -O2 -Werror -Wno-maybe-uninitialized -Wall -I. -I../include/ -I ../../usr/include/ -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -include ../../include/linux/kconfig.h
+-LDFLAGS += -lpthread
++CFLAGS += -pthread
++LDFLAGS += -pthread
+ vpath %.c ../../drivers/virtio ../../drivers/vhost
+ mod:
+ 	${MAKE} -C `pwd`/../.. M=`pwd`/vhost_test V=${V}


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-05-30 13:59 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-05-30 13:59 UTC (permalink / raw
  To: gentoo-commits

commit:     45dd19c15b74d99ba9dcf4e008ade81aaf3e657f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon May 30 13:59:41 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon May 30 13:59:41 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=45dd19c1

Linux patch 5.10.119

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1118_linux-5.10.119.patch | 6681 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6685 insertions(+)

diff --git a/0000_README b/0000_README
index 51c56a3e..32647390 100644
--- a/0000_README
+++ b/0000_README
@@ -515,6 +515,10 @@ Patch:  1117_linux-5.10.118.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.118
 
+Patch:  1118_linux-5.10.119.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.119
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1118_linux-5.10.119.patch b/1118_linux-5.10.119.patch
new file mode 100644
index 00000000..7794e863
--- /dev/null
+++ b/1118_linux-5.10.119.patch
@@ -0,0 +1,6681 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 611172f68bb57..5e34deec819fa 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4035,6 +4035,12 @@
+ 			fully seed the kernel's CRNG. Default is controlled
+ 			by CONFIG_RANDOM_TRUST_CPU.
+ 
++	random.trust_bootloader={on,off}
++			[KNL] Enable or disable trusting the use of a
++			seed passed by the bootloader (if available) to
++			fully seed the kernel's CRNG. Default is controlled
++			by CONFIG_RANDOM_TRUST_BOOTLOADER.
++
+ 	ras=option[,option,...]	[KNL] RAS-specific options
+ 
+ 		cec_disable	[X86]
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index e338306f45873..a4b1ebc2e70b0 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1006,28 +1006,22 @@ This is a directory, with the following entries:
+ * ``boot_id``: a UUID generated the first time this is retrieved, and
+   unvarying after that;
+ 
++* ``uuid``: a UUID generated every time this is retrieved (this can
++  thus be used to generate UUIDs at will);
++
+ * ``entropy_avail``: the pool's entropy count, in bits;
+ 
+ * ``poolsize``: the entropy pool size, in bits;
+ 
+ * ``urandom_min_reseed_secs``: obsolete (used to determine the minimum
+-  number of seconds between urandom pool reseeding).
+-
+-* ``uuid``: a UUID generated every time this is retrieved (this can
+-  thus be used to generate UUIDs at will);
++  number of seconds between urandom pool reseeding). This file is
++  writable for compatibility purposes, but writing to it has no effect
++  on any RNG behavior;
+ 
+ * ``write_wakeup_threshold``: when the entropy count drops below this
+   (as a number of bits), processes waiting to write to ``/dev/random``
+-  are woken up.
+-
+-If ``drivers/char/random.c`` is built with ``ADD_INTERRUPT_BENCH``
+-defined, these additional entries are present:
+-
+-* ``add_interrupt_avg_cycles``: the average number of cycles between
+-  interrupts used to feed the pool;
+-
+-* ``add_interrupt_avg_deviation``: the standard deviation seen on the
+-  number of cycles between interrupts used to feed the pool.
++  are woken up. This file is writable for compatibility purposes, but
++  writing to it has no effect on any RNG behavior.
+ 
+ 
+ randomize_va_space
+diff --git a/MAINTAINERS b/MAINTAINERS
+index c64c9354c287f..7c118b507912f 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -14671,6 +14671,8 @@ F:	arch/mips/generic/board-ranchu.c
+ 
+ RANDOM NUMBER DRIVER
+ M:	"Theodore Ts'o" <tytso@mit.edu>
++M:	Jason A. Donenfeld <Jason@zx2c4.com>
++T:	git https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git
+ S:	Maintained
+ F:	drivers/char/random.c
+ 
+diff --git a/Makefile b/Makefile
+index f9210e43121dc..b442cc5bbfc30 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 118
++SUBLEVEL = 119
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/alpha/include/asm/timex.h b/arch/alpha/include/asm/timex.h
+index b565cc6f408e9..f89798da8a147 100644
+--- a/arch/alpha/include/asm/timex.h
++++ b/arch/alpha/include/asm/timex.h
+@@ -28,5 +28,6 @@ static inline cycles_t get_cycles (void)
+ 	__asm__ __volatile__ ("rpcc %0" : "=r"(ret));
+ 	return ret;
+ }
++#define get_cycles get_cycles
+ 
+ #endif
+diff --git a/arch/arm/include/asm/timex.h b/arch/arm/include/asm/timex.h
+index 7c3b3671d6c25..6d1337c169cd3 100644
+--- a/arch/arm/include/asm/timex.h
++++ b/arch/arm/include/asm/timex.h
+@@ -11,5 +11,6 @@
+ 
+ typedef unsigned long cycles_t;
+ #define get_cycles()	({ cycles_t c; read_current_timer(&c) ? 0 : c; })
++#define random_get_entropy() (((unsigned long)get_cycles()) ?: random_get_entropy_fallback())
+ 
+ #endif
+diff --git a/arch/ia64/include/asm/timex.h b/arch/ia64/include/asm/timex.h
+index 869a3ac6bf23a..7ccc077a60bed 100644
+--- a/arch/ia64/include/asm/timex.h
++++ b/arch/ia64/include/asm/timex.h
+@@ -39,6 +39,7 @@ get_cycles (void)
+ 	ret = ia64_getreg(_IA64_REG_AR_ITC);
+ 	return ret;
+ }
++#define get_cycles get_cycles
+ 
+ extern void ia64_cpu_local_tick (void);
+ extern unsigned long long ia64_native_sched_clock (void);
+diff --git a/arch/m68k/include/asm/timex.h b/arch/m68k/include/asm/timex.h
+index 6a21d93582805..f4a7a340f4cae 100644
+--- a/arch/m68k/include/asm/timex.h
++++ b/arch/m68k/include/asm/timex.h
+@@ -35,7 +35,7 @@ static inline unsigned long random_get_entropy(void)
+ {
+ 	if (mach_random_get_entropy)
+ 		return mach_random_get_entropy();
+-	return 0;
++	return random_get_entropy_fallback();
+ }
+ #define random_get_entropy	random_get_entropy
+ 
+diff --git a/arch/mips/include/asm/timex.h b/arch/mips/include/asm/timex.h
+index 8026baf46e729..2e107886f97ac 100644
+--- a/arch/mips/include/asm/timex.h
++++ b/arch/mips/include/asm/timex.h
+@@ -76,25 +76,24 @@ static inline cycles_t get_cycles(void)
+ 	else
+ 		return 0;	/* no usable counter */
+ }
++#define get_cycles get_cycles
+ 
+ /*
+  * Like get_cycles - but where c0_count is not available we desperately
+  * use c0_random in an attempt to get at least a little bit of entropy.
+- *
+- * R6000 and R6000A neither have a count register nor a random register.
+- * That leaves no entropy source in the CPU itself.
+  */
+ static inline unsigned long random_get_entropy(void)
+ {
+-	unsigned int prid = read_c0_prid();
+-	unsigned int imp = prid & PRID_IMP_MASK;
++	unsigned int c0_random;
+ 
+-	if (can_use_mips_counter(prid))
++	if (can_use_mips_counter(read_c0_prid()))
+ 		return read_c0_count();
+-	else if (likely(imp != PRID_IMP_R6000 && imp != PRID_IMP_R6000A))
+-		return read_c0_random();
++
++	if (cpu_has_3kex)
++		c0_random = (read_c0_random() >> 8) & 0x3f;
+ 	else
+-		return 0;	/* no usable register */
++		c0_random = read_c0_random() & 0x3f;
++	return (random_get_entropy_fallback() << 6) | (0x3f - c0_random);
+ }
+ #define random_get_entropy random_get_entropy
+ 
+diff --git a/arch/nios2/include/asm/timex.h b/arch/nios2/include/asm/timex.h
+index a769f871b28d9..40a1adc9bd03e 100644
+--- a/arch/nios2/include/asm/timex.h
++++ b/arch/nios2/include/asm/timex.h
+@@ -8,5 +8,8 @@
+ typedef unsigned long cycles_t;
+ 
+ extern cycles_t get_cycles(void);
++#define get_cycles get_cycles
++
++#define random_get_entropy() (((unsigned long)get_cycles()) ?: random_get_entropy_fallback())
+ 
+ #endif
+diff --git a/arch/parisc/include/asm/timex.h b/arch/parisc/include/asm/timex.h
+index 06b510f8172e3..b4622cb06a75e 100644
+--- a/arch/parisc/include/asm/timex.h
++++ b/arch/parisc/include/asm/timex.h
+@@ -13,9 +13,10 @@
+ 
+ typedef unsigned long cycles_t;
+ 
+-static inline cycles_t get_cycles (void)
++static inline cycles_t get_cycles(void)
+ {
+ 	return mfctl(16);
+ }
++#define get_cycles get_cycles
+ 
+ #endif
+diff --git a/arch/powerpc/include/asm/timex.h b/arch/powerpc/include/asm/timex.h
+index 95988870a57bc..171602fd358e1 100644
+--- a/arch/powerpc/include/asm/timex.h
++++ b/arch/powerpc/include/asm/timex.h
+@@ -19,6 +19,7 @@ static inline cycles_t get_cycles(void)
+ {
+ 	return mftb();
+ }
++#define get_cycles get_cycles
+ 
+ #endif	/* __KERNEL__ */
+ #endif	/* _ASM_POWERPC_TIMEX_H */
+diff --git a/arch/riscv/include/asm/timex.h b/arch/riscv/include/asm/timex.h
+index 81de51e6aa32b..a06697846e695 100644
+--- a/arch/riscv/include/asm/timex.h
++++ b/arch/riscv/include/asm/timex.h
+@@ -41,7 +41,7 @@ static inline u32 get_cycles_hi(void)
+ static inline unsigned long random_get_entropy(void)
+ {
+ 	if (unlikely(clint_time_val == NULL))
+-		return 0;
++		return random_get_entropy_fallback();
+ 	return get_cycles();
+ }
+ #define random_get_entropy()	random_get_entropy()
+diff --git a/arch/s390/include/asm/timex.h b/arch/s390/include/asm/timex.h
+index 289aaff4d365f..588aa0f2c842c 100644
+--- a/arch/s390/include/asm/timex.h
++++ b/arch/s390/include/asm/timex.h
+@@ -172,6 +172,7 @@ static inline cycles_t get_cycles(void)
+ {
+ 	return (cycles_t) get_tod_clock() >> 2;
+ }
++#define get_cycles get_cycles
+ 
+ int get_phys_clock(unsigned long *clock);
+ void init_cpu_timer(void);
+diff --git a/arch/sparc/include/asm/timex_32.h b/arch/sparc/include/asm/timex_32.h
+index 542915b462097..f86326a6f89e0 100644
+--- a/arch/sparc/include/asm/timex_32.h
++++ b/arch/sparc/include/asm/timex_32.h
+@@ -9,8 +9,6 @@
+ 
+ #define CLOCK_TICK_RATE	1193180 /* Underlying HZ */
+ 
+-/* XXX Maybe do something better at some point... -DaveM */
+-typedef unsigned long cycles_t;
+-#define get_cycles()	(0)
++#include <asm-generic/timex.h>
+ 
+ #endif
+diff --git a/arch/um/include/asm/timex.h b/arch/um/include/asm/timex.h
+index e392a9a5bc9bd..9f27176adb26d 100644
+--- a/arch/um/include/asm/timex.h
++++ b/arch/um/include/asm/timex.h
+@@ -2,13 +2,8 @@
+ #ifndef __UM_TIMEX_H
+ #define __UM_TIMEX_H
+ 
+-typedef unsigned long cycles_t;
+-
+-static inline cycles_t get_cycles (void)
+-{
+-	return 0;
+-}
+-
+ #define CLOCK_TICK_RATE (HZ)
+ 
++#include <asm-generic/timex.h>
++
+ #endif
+diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
+index a31de0c6ccde2..66f25c2938bfe 100644
+--- a/arch/x86/crypto/Makefile
++++ b/arch/x86/crypto/Makefile
+@@ -66,7 +66,9 @@ obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o
+ sha512-ssse3-y := sha512-ssse3-asm.o sha512-avx-asm.o sha512-avx2-asm.o sha512_ssse3_glue.o
+ 
+ obj-$(CONFIG_CRYPTO_BLAKE2S_X86) += blake2s-x86_64.o
+-blake2s-x86_64-y := blake2s-core.o blake2s-glue.o
++blake2s-x86_64-y := blake2s-shash.o
++obj-$(if $(CONFIG_CRYPTO_BLAKE2S_X86),y) += libblake2s-x86_64.o
++libblake2s-x86_64-y := blake2s-core.o blake2s-glue.o
+ 
+ obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o
+ ghash-clmulni-intel-y := ghash-clmulni-intel_asm.o ghash-clmulni-intel_glue.o
+diff --git a/arch/x86/crypto/blake2s-glue.c b/arch/x86/crypto/blake2s-glue.c
+index c025a01cf7084..69853c13e8fb0 100644
+--- a/arch/x86/crypto/blake2s-glue.c
++++ b/arch/x86/crypto/blake2s-glue.c
+@@ -5,7 +5,6 @@
+ 
+ #include <crypto/internal/blake2s.h>
+ #include <crypto/internal/simd.h>
+-#include <crypto/internal/hash.h>
+ 
+ #include <linux/types.h>
+ #include <linux/jump_label.h>
+@@ -28,9 +27,8 @@ asmlinkage void blake2s_compress_avx512(struct blake2s_state *state,
+ static __ro_after_init DEFINE_STATIC_KEY_FALSE(blake2s_use_ssse3);
+ static __ro_after_init DEFINE_STATIC_KEY_FALSE(blake2s_use_avx512);
+ 
+-void blake2s_compress_arch(struct blake2s_state *state,
+-			   const u8 *block, size_t nblocks,
+-			   const u32 inc)
++void blake2s_compress(struct blake2s_state *state, const u8 *block,
++		      size_t nblocks, const u32 inc)
+ {
+ 	/* SIMD disables preemption, so relax after processing each page. */
+ 	BUILD_BUG_ON(SZ_4K / BLAKE2S_BLOCK_SIZE < 8);
+@@ -56,147 +54,12 @@ void blake2s_compress_arch(struct blake2s_state *state,
+ 		block += blocks * BLAKE2S_BLOCK_SIZE;
+ 	} while (nblocks);
+ }
+-EXPORT_SYMBOL(blake2s_compress_arch);
+-
+-static int crypto_blake2s_setkey(struct crypto_shash *tfm, const u8 *key,
+-				 unsigned int keylen)
+-{
+-	struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(tfm);
+-
+-	if (keylen == 0 || keylen > BLAKE2S_KEY_SIZE)
+-		return -EINVAL;
+-
+-	memcpy(tctx->key, key, keylen);
+-	tctx->keylen = keylen;
+-
+-	return 0;
+-}
+-
+-static int crypto_blake2s_init(struct shash_desc *desc)
+-{
+-	struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+-	struct blake2s_state *state = shash_desc_ctx(desc);
+-	const int outlen = crypto_shash_digestsize(desc->tfm);
+-
+-	if (tctx->keylen)
+-		blake2s_init_key(state, outlen, tctx->key, tctx->keylen);
+-	else
+-		blake2s_init(state, outlen);
+-
+-	return 0;
+-}
+-
+-static int crypto_blake2s_update(struct shash_desc *desc, const u8 *in,
+-				 unsigned int inlen)
+-{
+-	struct blake2s_state *state = shash_desc_ctx(desc);
+-	const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen;
+-
+-	if (unlikely(!inlen))
+-		return 0;
+-	if (inlen > fill) {
+-		memcpy(state->buf + state->buflen, in, fill);
+-		blake2s_compress_arch(state, state->buf, 1, BLAKE2S_BLOCK_SIZE);
+-		state->buflen = 0;
+-		in += fill;
+-		inlen -= fill;
+-	}
+-	if (inlen > BLAKE2S_BLOCK_SIZE) {
+-		const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE);
+-		/* Hash one less (full) block than strictly possible */
+-		blake2s_compress_arch(state, in, nblocks - 1, BLAKE2S_BLOCK_SIZE);
+-		in += BLAKE2S_BLOCK_SIZE * (nblocks - 1);
+-		inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1);
+-	}
+-	memcpy(state->buf + state->buflen, in, inlen);
+-	state->buflen += inlen;
+-
+-	return 0;
+-}
+-
+-static int crypto_blake2s_final(struct shash_desc *desc, u8 *out)
+-{
+-	struct blake2s_state *state = shash_desc_ctx(desc);
+-
+-	blake2s_set_lastblock(state);
+-	memset(state->buf + state->buflen, 0,
+-	       BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */
+-	blake2s_compress_arch(state, state->buf, 1, state->buflen);
+-	cpu_to_le32_array(state->h, ARRAY_SIZE(state->h));
+-	memcpy(out, state->h, state->outlen);
+-	memzero_explicit(state, sizeof(*state));
+-
+-	return 0;
+-}
+-
+-static struct shash_alg blake2s_algs[] = {{
+-	.base.cra_name		= "blake2s-128",
+-	.base.cra_driver_name	= "blake2s-128-x86",
+-	.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
+-	.base.cra_ctxsize	= sizeof(struct blake2s_tfm_ctx),
+-	.base.cra_priority	= 200,
+-	.base.cra_blocksize     = BLAKE2S_BLOCK_SIZE,
+-	.base.cra_module	= THIS_MODULE,
+-
+-	.digestsize		= BLAKE2S_128_HASH_SIZE,
+-	.setkey			= crypto_blake2s_setkey,
+-	.init			= crypto_blake2s_init,
+-	.update			= crypto_blake2s_update,
+-	.final			= crypto_blake2s_final,
+-	.descsize		= sizeof(struct blake2s_state),
+-}, {
+-	.base.cra_name		= "blake2s-160",
+-	.base.cra_driver_name	= "blake2s-160-x86",
+-	.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
+-	.base.cra_ctxsize	= sizeof(struct blake2s_tfm_ctx),
+-	.base.cra_priority	= 200,
+-	.base.cra_blocksize     = BLAKE2S_BLOCK_SIZE,
+-	.base.cra_module	= THIS_MODULE,
+-
+-	.digestsize		= BLAKE2S_160_HASH_SIZE,
+-	.setkey			= crypto_blake2s_setkey,
+-	.init			= crypto_blake2s_init,
+-	.update			= crypto_blake2s_update,
+-	.final			= crypto_blake2s_final,
+-	.descsize		= sizeof(struct blake2s_state),
+-}, {
+-	.base.cra_name		= "blake2s-224",
+-	.base.cra_driver_name	= "blake2s-224-x86",
+-	.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
+-	.base.cra_ctxsize	= sizeof(struct blake2s_tfm_ctx),
+-	.base.cra_priority	= 200,
+-	.base.cra_blocksize     = BLAKE2S_BLOCK_SIZE,
+-	.base.cra_module	= THIS_MODULE,
+-
+-	.digestsize		= BLAKE2S_224_HASH_SIZE,
+-	.setkey			= crypto_blake2s_setkey,
+-	.init			= crypto_blake2s_init,
+-	.update			= crypto_blake2s_update,
+-	.final			= crypto_blake2s_final,
+-	.descsize		= sizeof(struct blake2s_state),
+-}, {
+-	.base.cra_name		= "blake2s-256",
+-	.base.cra_driver_name	= "blake2s-256-x86",
+-	.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
+-	.base.cra_ctxsize	= sizeof(struct blake2s_tfm_ctx),
+-	.base.cra_priority	= 200,
+-	.base.cra_blocksize     = BLAKE2S_BLOCK_SIZE,
+-	.base.cra_module	= THIS_MODULE,
+-
+-	.digestsize		= BLAKE2S_256_HASH_SIZE,
+-	.setkey			= crypto_blake2s_setkey,
+-	.init			= crypto_blake2s_init,
+-	.update			= crypto_blake2s_update,
+-	.final			= crypto_blake2s_final,
+-	.descsize		= sizeof(struct blake2s_state),
+-}};
++EXPORT_SYMBOL(blake2s_compress);
+ 
+ static int __init blake2s_mod_init(void)
+ {
+-	if (!boot_cpu_has(X86_FEATURE_SSSE3))
+-		return 0;
+-
+-	static_branch_enable(&blake2s_use_ssse3);
++	if (boot_cpu_has(X86_FEATURE_SSSE3))
++		static_branch_enable(&blake2s_use_ssse3);
+ 
+ 	if (IS_ENABLED(CONFIG_AS_AVX512) &&
+ 	    boot_cpu_has(X86_FEATURE_AVX) &&
+@@ -207,26 +70,9 @@ static int __init blake2s_mod_init(void)
+ 			      XFEATURE_MASK_AVX512, NULL))
+ 		static_branch_enable(&blake2s_use_avx512);
+ 
+-	return IS_REACHABLE(CONFIG_CRYPTO_HASH) ?
+-		crypto_register_shashes(blake2s_algs,
+-					ARRAY_SIZE(blake2s_algs)) : 0;
+-}
+-
+-static void __exit blake2s_mod_exit(void)
+-{
+-	if (IS_REACHABLE(CONFIG_CRYPTO_HASH) && boot_cpu_has(X86_FEATURE_SSSE3))
+-		crypto_unregister_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs));
++	return 0;
+ }
+ 
+ module_init(blake2s_mod_init);
+-module_exit(blake2s_mod_exit);
+ 
+-MODULE_ALIAS_CRYPTO("blake2s-128");
+-MODULE_ALIAS_CRYPTO("blake2s-128-x86");
+-MODULE_ALIAS_CRYPTO("blake2s-160");
+-MODULE_ALIAS_CRYPTO("blake2s-160-x86");
+-MODULE_ALIAS_CRYPTO("blake2s-224");
+-MODULE_ALIAS_CRYPTO("blake2s-224-x86");
+-MODULE_ALIAS_CRYPTO("blake2s-256");
+-MODULE_ALIAS_CRYPTO("blake2s-256-x86");
+ MODULE_LICENSE("GPL v2");
+diff --git a/arch/x86/crypto/blake2s-shash.c b/arch/x86/crypto/blake2s-shash.c
+new file mode 100644
+index 0000000000000..59ae28abe35cc
+--- /dev/null
++++ b/arch/x86/crypto/blake2s-shash.c
+@@ -0,0 +1,77 @@
++// SPDX-License-Identifier: GPL-2.0 OR MIT
++/*
++ * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
++ */
++
++#include <crypto/internal/blake2s.h>
++#include <crypto/internal/simd.h>
++#include <crypto/internal/hash.h>
++
++#include <linux/types.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/sizes.h>
++
++#include <asm/cpufeature.h>
++#include <asm/processor.h>
++
++static int crypto_blake2s_update_x86(struct shash_desc *desc,
++				     const u8 *in, unsigned int inlen)
++{
++	return crypto_blake2s_update(desc, in, inlen, false);
++}
++
++static int crypto_blake2s_final_x86(struct shash_desc *desc, u8 *out)
++{
++	return crypto_blake2s_final(desc, out, false);
++}
++
++#define BLAKE2S_ALG(name, driver_name, digest_size)			\
++	{								\
++		.base.cra_name		= name,				\
++		.base.cra_driver_name	= driver_name,			\
++		.base.cra_priority	= 200,				\
++		.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,	\
++		.base.cra_blocksize	= BLAKE2S_BLOCK_SIZE,		\
++		.base.cra_ctxsize	= sizeof(struct blake2s_tfm_ctx), \
++		.base.cra_module	= THIS_MODULE,			\
++		.digestsize		= digest_size,			\
++		.setkey			= crypto_blake2s_setkey,	\
++		.init			= crypto_blake2s_init,		\
++		.update			= crypto_blake2s_update_x86,	\
++		.final			= crypto_blake2s_final_x86,	\
++		.descsize		= sizeof(struct blake2s_state),	\
++	}
++
++static struct shash_alg blake2s_algs[] = {
++	BLAKE2S_ALG("blake2s-128", "blake2s-128-x86", BLAKE2S_128_HASH_SIZE),
++	BLAKE2S_ALG("blake2s-160", "blake2s-160-x86", BLAKE2S_160_HASH_SIZE),
++	BLAKE2S_ALG("blake2s-224", "blake2s-224-x86", BLAKE2S_224_HASH_SIZE),
++	BLAKE2S_ALG("blake2s-256", "blake2s-256-x86", BLAKE2S_256_HASH_SIZE),
++};
++
++static int __init blake2s_mod_init(void)
++{
++	if (IS_REACHABLE(CONFIG_CRYPTO_HASH) && boot_cpu_has(X86_FEATURE_SSSE3))
++		return crypto_register_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs));
++	return 0;
++}
++
++static void __exit blake2s_mod_exit(void)
++{
++	if (IS_REACHABLE(CONFIG_CRYPTO_HASH) && boot_cpu_has(X86_FEATURE_SSSE3))
++		crypto_unregister_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs));
++}
++
++module_init(blake2s_mod_init);
++module_exit(blake2s_mod_exit);
++
++MODULE_ALIAS_CRYPTO("blake2s-128");
++MODULE_ALIAS_CRYPTO("blake2s-128-x86");
++MODULE_ALIAS_CRYPTO("blake2s-160");
++MODULE_ALIAS_CRYPTO("blake2s-160-x86");
++MODULE_ALIAS_CRYPTO("blake2s-224");
++MODULE_ALIAS_CRYPTO("blake2s-224-x86");
++MODULE_ALIAS_CRYPTO("blake2s-256");
++MODULE_ALIAS_CRYPTO("blake2s-256-x86");
++MODULE_LICENSE("GPL v2");
+diff --git a/arch/x86/include/asm/timex.h b/arch/x86/include/asm/timex.h
+index a4a8b1b16c0c1..956e4145311b1 100644
+--- a/arch/x86/include/asm/timex.h
++++ b/arch/x86/include/asm/timex.h
+@@ -5,6 +5,15 @@
+ #include <asm/processor.h>
+ #include <asm/tsc.h>
+ 
++static inline unsigned long random_get_entropy(void)
++{
++	if (!IS_ENABLED(CONFIG_X86_TSC) &&
++	    !cpu_feature_enabled(X86_FEATURE_TSC))
++		return random_get_entropy_fallback();
++	return rdtsc();
++}
++#define random_get_entropy random_get_entropy
++
+ /* Assume we use the PIT time source for the clock tick */
+ #define CLOCK_TICK_RATE		PIT_TICK_RATE
+ 
+diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
+index 01a300a9700b9..fbdc3d9514943 100644
+--- a/arch/x86/include/asm/tsc.h
++++ b/arch/x86/include/asm/tsc.h
+@@ -20,13 +20,12 @@ extern void disable_TSC(void);
+ 
+ static inline cycles_t get_cycles(void)
+ {
+-#ifndef CONFIG_X86_TSC
+-	if (!boot_cpu_has(X86_FEATURE_TSC))
++	if (!IS_ENABLED(CONFIG_X86_TSC) &&
++	    !cpu_feature_enabled(X86_FEATURE_TSC))
+ 		return 0;
+-#endif
+-
+ 	return rdtsc();
+ }
++#define get_cycles get_cycles
+ 
+ extern struct system_counterval_t convert_art_to_tsc(u64 art);
+ extern struct system_counterval_t convert_art_ns_to_tsc(u64 art_ns);
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index 65d11711cd7bb..021cd067733e3 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -84,7 +84,7 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_hyperv_stimer0)
+ 	inc_irq_stat(hyperv_stimer0_count);
+ 	if (hv_stimer0_handler)
+ 		hv_stimer0_handler();
+-	add_interrupt_randomness(HYPERV_STIMER0_VECTOR, 0);
++	add_interrupt_randomness(HYPERV_STIMER0_VECTOR);
+ 	ack_APIC_irq();
+ 
+ 	set_irq_regs(old_regs);
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index a3ef793fce5f1..6ed6b090be941 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -297,6 +297,10 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
+ 
+ 		atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY);
+ 	}
++
++	/* Check if there are APF page ready requests pending */
++	if (enabled)
++		kvm_make_request(KVM_REQ_APF_READY, apic->vcpu);
+ }
+ 
+ static inline void kvm_apic_set_xapic_id(struct kvm_lapic *apic, u8 id)
+@@ -2260,6 +2264,8 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value)
+ 		if (value & MSR_IA32_APICBASE_ENABLE) {
+ 			kvm_apic_set_xapic_id(apic, vcpu->vcpu_id);
+ 			static_key_slow_dec_deferred(&apic_hw_disabled);
++			/* Check if there are APF page ready requests pending */
++			kvm_make_request(KVM_REQ_APF_READY, vcpu);
+ 		} else {
+ 			static_key_slow_inc(&apic_hw_disabled.key);
+ 			atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 306268f90455f..6096d0f1a62af 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5178,14 +5178,16 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
+ 	uint i;
+ 
+ 	if (pcid == kvm_get_active_pcid(vcpu)) {
+-		mmu->invlpg(vcpu, gva, mmu->root_hpa);
++		if (mmu->invlpg)
++			mmu->invlpg(vcpu, gva, mmu->root_hpa);
+ 		tlb_flush = true;
+ 	}
+ 
+ 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
+ 		if (VALID_PAGE(mmu->prev_roots[i].hpa) &&
+ 		    pcid == kvm_get_pcid(vcpu, mmu->prev_roots[i].pgd)) {
+-			mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
++			if (mmu->invlpg)
++				mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
+ 			tlb_flush = true;
+ 		}
+ 	}
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 4588f73bf59a4..ae18062c26a66 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -11146,7 +11146,7 @@ bool kvm_arch_can_dequeue_async_page_present(struct kvm_vcpu *vcpu)
+ 	if (!kvm_pv_async_pf_enabled(vcpu))
+ 		return true;
+ 	else
+-		return apf_pageready_slot_free(vcpu);
++		return kvm_lapic_enabled(vcpu) && apf_pageready_slot_free(vcpu);
+ }
+ 
+ void kvm_arch_start_assignment(struct kvm *kvm)
+diff --git a/arch/xtensa/include/asm/timex.h b/arch/xtensa/include/asm/timex.h
+index 233ec75e60c69..3f2462f2d0270 100644
+--- a/arch/xtensa/include/asm/timex.h
++++ b/arch/xtensa/include/asm/timex.h
+@@ -29,10 +29,6 @@
+ 
+ extern unsigned long ccount_freq;
+ 
+-typedef unsigned long long cycles_t;
+-
+-#define get_cycles()	(0)
+-
+ void local_timer_setup(unsigned cpu);
+ 
+ /*
+@@ -59,4 +55,6 @@ static inline void set_linux_timer (unsigned long ccompare)
+ 	xtensa_set_sr(ccompare, SREG_CCOMPARE + LINUX_TIMER);
+ }
+ 
++#include <asm-generic/timex.h>
++
+ #endif	/* _XTENSA_TIMEX_H */
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 1157f82dc9cf4..0dee9242491cb 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -1936,9 +1936,10 @@ config CRYPTO_STATS
+ config CRYPTO_HASH_INFO
+ 	bool
+ 
+-source "lib/crypto/Kconfig"
+ source "drivers/crypto/Kconfig"
+ source "crypto/asymmetric_keys/Kconfig"
+ source "certs/Kconfig"
+ 
+ endif	# if CRYPTO
++
++source "lib/crypto/Kconfig"
+diff --git a/crypto/blake2s_generic.c b/crypto/blake2s_generic.c
+index 005783ff45ad0..5f96a21f87883 100644
+--- a/crypto/blake2s_generic.c
++++ b/crypto/blake2s_generic.c
+@@ -1,149 +1,55 @@
+ // SPDX-License-Identifier: GPL-2.0 OR MIT
+ /*
++ * shash interface to the generic implementation of BLAKE2s
++ *
+  * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+  */
+ 
+ #include <crypto/internal/blake2s.h>
+-#include <crypto/internal/simd.h>
+ #include <crypto/internal/hash.h>
+ 
+ #include <linux/types.h>
+-#include <linux/jump_label.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ 
+-static int crypto_blake2s_setkey(struct crypto_shash *tfm, const u8 *key,
+-				 unsigned int keylen)
++static int crypto_blake2s_update_generic(struct shash_desc *desc,
++					 const u8 *in, unsigned int inlen)
+ {
+-	struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(tfm);
+-
+-	if (keylen == 0 || keylen > BLAKE2S_KEY_SIZE)
+-		return -EINVAL;
+-
+-	memcpy(tctx->key, key, keylen);
+-	tctx->keylen = keylen;
+-
+-	return 0;
++	return crypto_blake2s_update(desc, in, inlen, true);
+ }
+ 
+-static int crypto_blake2s_init(struct shash_desc *desc)
++static int crypto_blake2s_final_generic(struct shash_desc *desc, u8 *out)
+ {
+-	struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+-	struct blake2s_state *state = shash_desc_ctx(desc);
+-	const int outlen = crypto_shash_digestsize(desc->tfm);
+-
+-	if (tctx->keylen)
+-		blake2s_init_key(state, outlen, tctx->key, tctx->keylen);
+-	else
+-		blake2s_init(state, outlen);
+-
+-	return 0;
++	return crypto_blake2s_final(desc, out, true);
+ }
+ 
+-static int crypto_blake2s_update(struct shash_desc *desc, const u8 *in,
+-				 unsigned int inlen)
+-{
+-	struct blake2s_state *state = shash_desc_ctx(desc);
+-	const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen;
+-
+-	if (unlikely(!inlen))
+-		return 0;
+-	if (inlen > fill) {
+-		memcpy(state->buf + state->buflen, in, fill);
+-		blake2s_compress_generic(state, state->buf, 1, BLAKE2S_BLOCK_SIZE);
+-		state->buflen = 0;
+-		in += fill;
+-		inlen -= fill;
+-	}
+-	if (inlen > BLAKE2S_BLOCK_SIZE) {
+-		const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE);
+-		/* Hash one less (full) block than strictly possible */
+-		blake2s_compress_generic(state, in, nblocks - 1, BLAKE2S_BLOCK_SIZE);
+-		in += BLAKE2S_BLOCK_SIZE * (nblocks - 1);
+-		inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1);
++#define BLAKE2S_ALG(name, driver_name, digest_size)			\
++	{								\
++		.base.cra_name		= name,				\
++		.base.cra_driver_name	= driver_name,			\
++		.base.cra_priority	= 100,				\
++		.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,	\
++		.base.cra_blocksize	= BLAKE2S_BLOCK_SIZE,		\
++		.base.cra_ctxsize	= sizeof(struct blake2s_tfm_ctx), \
++		.base.cra_module	= THIS_MODULE,			\
++		.digestsize		= digest_size,			\
++		.setkey			= crypto_blake2s_setkey,	\
++		.init			= crypto_blake2s_init,		\
++		.update			= crypto_blake2s_update_generic, \
++		.final			= crypto_blake2s_final_generic,	\
++		.descsize		= sizeof(struct blake2s_state),	\
+ 	}
+-	memcpy(state->buf + state->buflen, in, inlen);
+-	state->buflen += inlen;
+-
+-	return 0;
+-}
+-
+-static int crypto_blake2s_final(struct shash_desc *desc, u8 *out)
+-{
+-	struct blake2s_state *state = shash_desc_ctx(desc);
+-
+-	blake2s_set_lastblock(state);
+-	memset(state->buf + state->buflen, 0,
+-	       BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */
+-	blake2s_compress_generic(state, state->buf, 1, state->buflen);
+-	cpu_to_le32_array(state->h, ARRAY_SIZE(state->h));
+-	memcpy(out, state->h, state->outlen);
+-	memzero_explicit(state, sizeof(*state));
+-
+-	return 0;
+-}
+-
+-static struct shash_alg blake2s_algs[] = {{
+-	.base.cra_name		= "blake2s-128",
+-	.base.cra_driver_name	= "blake2s-128-generic",
+-	.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
+-	.base.cra_ctxsize	= sizeof(struct blake2s_tfm_ctx),
+-	.base.cra_priority	= 200,
+-	.base.cra_blocksize     = BLAKE2S_BLOCK_SIZE,
+-	.base.cra_module	= THIS_MODULE,
+-
+-	.digestsize		= BLAKE2S_128_HASH_SIZE,
+-	.setkey			= crypto_blake2s_setkey,
+-	.init			= crypto_blake2s_init,
+-	.update			= crypto_blake2s_update,
+-	.final			= crypto_blake2s_final,
+-	.descsize		= sizeof(struct blake2s_state),
+-}, {
+-	.base.cra_name		= "blake2s-160",
+-	.base.cra_driver_name	= "blake2s-160-generic",
+-	.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
+-	.base.cra_ctxsize	= sizeof(struct blake2s_tfm_ctx),
+-	.base.cra_priority	= 200,
+-	.base.cra_blocksize     = BLAKE2S_BLOCK_SIZE,
+-	.base.cra_module	= THIS_MODULE,
+-
+-	.digestsize		= BLAKE2S_160_HASH_SIZE,
+-	.setkey			= crypto_blake2s_setkey,
+-	.init			= crypto_blake2s_init,
+-	.update			= crypto_blake2s_update,
+-	.final			= crypto_blake2s_final,
+-	.descsize		= sizeof(struct blake2s_state),
+-}, {
+-	.base.cra_name		= "blake2s-224",
+-	.base.cra_driver_name	= "blake2s-224-generic",
+-	.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
+-	.base.cra_ctxsize	= sizeof(struct blake2s_tfm_ctx),
+-	.base.cra_priority	= 200,
+-	.base.cra_blocksize     = BLAKE2S_BLOCK_SIZE,
+-	.base.cra_module	= THIS_MODULE,
+-
+-	.digestsize		= BLAKE2S_224_HASH_SIZE,
+-	.setkey			= crypto_blake2s_setkey,
+-	.init			= crypto_blake2s_init,
+-	.update			= crypto_blake2s_update,
+-	.final			= crypto_blake2s_final,
+-	.descsize		= sizeof(struct blake2s_state),
+-}, {
+-	.base.cra_name		= "blake2s-256",
+-	.base.cra_driver_name	= "blake2s-256-generic",
+-	.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
+-	.base.cra_ctxsize	= sizeof(struct blake2s_tfm_ctx),
+-	.base.cra_priority	= 200,
+-	.base.cra_blocksize     = BLAKE2S_BLOCK_SIZE,
+-	.base.cra_module	= THIS_MODULE,
+ 
+-	.digestsize		= BLAKE2S_256_HASH_SIZE,
+-	.setkey			= crypto_blake2s_setkey,
+-	.init			= crypto_blake2s_init,
+-	.update			= crypto_blake2s_update,
+-	.final			= crypto_blake2s_final,
+-	.descsize		= sizeof(struct blake2s_state),
+-}};
++static struct shash_alg blake2s_algs[] = {
++	BLAKE2S_ALG("blake2s-128", "blake2s-128-generic",
++		    BLAKE2S_128_HASH_SIZE),
++	BLAKE2S_ALG("blake2s-160", "blake2s-160-generic",
++		    BLAKE2S_160_HASH_SIZE),
++	BLAKE2S_ALG("blake2s-224", "blake2s-224-generic",
++		    BLAKE2S_224_HASH_SIZE),
++	BLAKE2S_ALG("blake2s-256", "blake2s-256-generic",
++		    BLAKE2S_256_HASH_SIZE),
++};
+ 
+ static int __init blake2s_mod_init(void)
+ {
+diff --git a/crypto/drbg.c b/crypto/drbg.c
+index 3132967a17497..19ea8d6628ffb 100644
+--- a/crypto/drbg.c
++++ b/crypto/drbg.c
+@@ -1490,12 +1490,13 @@ static int drbg_generate_long(struct drbg_state *drbg,
+ 	return 0;
+ }
+ 
+-static void drbg_schedule_async_seed(struct random_ready_callback *rdy)
++static int drbg_schedule_async_seed(struct notifier_block *nb, unsigned long action, void *data)
+ {
+-	struct drbg_state *drbg = container_of(rdy, struct drbg_state,
++	struct drbg_state *drbg = container_of(nb, struct drbg_state,
+ 					       random_ready);
+ 
+ 	schedule_work(&drbg->seed_work);
++	return 0;
+ }
+ 
+ static int drbg_prepare_hrng(struct drbg_state *drbg)
+@@ -1510,10 +1511,8 @@ static int drbg_prepare_hrng(struct drbg_state *drbg)
+ 
+ 	INIT_WORK(&drbg->seed_work, drbg_async_seed);
+ 
+-	drbg->random_ready.owner = THIS_MODULE;
+-	drbg->random_ready.func = drbg_schedule_async_seed;
+-
+-	err = add_random_ready_callback(&drbg->random_ready);
++	drbg->random_ready.notifier_call = drbg_schedule_async_seed;
++	err = register_random_ready_notifier(&drbg->random_ready);
+ 
+ 	switch (err) {
+ 	case 0:
+@@ -1524,7 +1523,7 @@ static int drbg_prepare_hrng(struct drbg_state *drbg)
+ 		fallthrough;
+ 
+ 	default:
+-		drbg->random_ready.func = NULL;
++		drbg->random_ready.notifier_call = NULL;
+ 		return err;
+ 	}
+ 
+@@ -1628,8 +1627,8 @@ free_everything:
+  */
+ static int drbg_uninstantiate(struct drbg_state *drbg)
+ {
+-	if (drbg->random_ready.func) {
+-		del_random_ready_callback(&drbg->random_ready);
++	if (drbg->random_ready.notifier_call) {
++		unregister_random_ready_notifier(&drbg->random_ready);
+ 		cancel_work_sync(&drbg->seed_work);
+ 	}
+ 
+diff --git a/drivers/acpi/sysfs.c b/drivers/acpi/sysfs.c
+index a5cc4f3bb1e31..1d94c4625f365 100644
+--- a/drivers/acpi/sysfs.c
++++ b/drivers/acpi/sysfs.c
+@@ -439,18 +439,29 @@ static ssize_t acpi_data_show(struct file *filp, struct kobject *kobj,
+ {
+ 	struct acpi_data_attr *data_attr;
+ 	void __iomem *base;
+-	ssize_t rc;
++	ssize_t size;
+ 
+ 	data_attr = container_of(bin_attr, struct acpi_data_attr, attr);
++	size = data_attr->attr.size;
++
++	if (offset < 0)
++		return -EINVAL;
++
++	if (offset >= size)
++		return 0;
+ 
+-	base = acpi_os_map_memory(data_attr->addr, data_attr->attr.size);
++	if (count > size - offset)
++		count = size - offset;
++
++	base = acpi_os_map_iomem(data_attr->addr, size);
+ 	if (!base)
+ 		return -ENOMEM;
+-	rc = memory_read_from_buffer(buf, count, &offset, base,
+-				     data_attr->attr.size);
+-	acpi_os_unmap_memory(base, data_attr->attr.size);
+ 
+-	return rc;
++	memcpy_fromio(buf, base + offset, count);
++
++	acpi_os_unmap_iomem(base, size);
++
++	return count;
+ }
+ 
+ static int acpi_bert_data_init(void *th, struct acpi_data_attr *data_attr)
+diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
+index d229a2d0c0174..3e2703a496328 100644
+--- a/drivers/char/Kconfig
++++ b/drivers/char/Kconfig
+@@ -495,4 +495,5 @@ config RANDOM_TRUST_BOOTLOADER
+ 	device randomness. Say Y here to assume the entropy provided by the
+ 	booloader is trustworthy so it will be added to the kernel's entropy
+ 	pool. Otherwise, say N here so it will be regarded as device input that
+-	only mixes the entropy pool.
++	only mixes the entropy pool. This can also be configured at boot with
++	"random.trust_bootloader=on/off".
+diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
+index 8c1c47dd9f464..5749998feaa46 100644
+--- a/drivers/char/hw_random/core.c
++++ b/drivers/char/hw_random/core.c
+@@ -15,6 +15,7 @@
+ #include <linux/err.h>
+ #include <linux/fs.h>
+ #include <linux/hw_random.h>
++#include <linux/random.h>
+ #include <linux/kernel.h>
+ #include <linux/kthread.h>
+ #include <linux/sched/signal.h>
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 5f541c9465598..00b50ccc9fae6 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -1,310 +1,26 @@
++// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
+ /*
+- * random.c -- A strong random number generator
+- *
+- * Copyright (C) 2017 Jason A. Donenfeld <Jason@zx2c4.com>. All
+- * Rights Reserved.
+- *
++ * Copyright (C) 2017-2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+  * Copyright Matt Mackall <mpm@selenic.com>, 2003, 2004, 2005
+- *
+- * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999.  All
+- * rights reserved.
+- *
+- * Redistribution and use in source and binary forms, with or without
+- * modification, are permitted provided that the following conditions
+- * are met:
+- * 1. Redistributions of source code must retain the above copyright
+- *    notice, and the entire permission notice in its entirety,
+- *    including the disclaimer of warranties.
+- * 2. Redistributions in binary form must reproduce the above copyright
+- *    notice, this list of conditions and the following disclaimer in the
+- *    documentation and/or other materials provided with the distribution.
+- * 3. The name of the author may not be used to endorse or promote
+- *    products derived from this software without specific prior
+- *    written permission.
+- *
+- * ALTERNATIVELY, this product may be distributed under the terms of
+- * the GNU General Public License, in which case the provisions of the GPL are
+- * required INSTEAD OF the above restrictions.  (This clause is
+- * necessary due to a potential bad interaction between the GPL and
+- * the restrictions contained in a BSD-style copyright.)
+- *
+- * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+- * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+- * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF
+- * WHICH ARE HEREBY DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR BE
+- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
+- * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+- * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+- * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+- * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH
+- * DAMAGE.
+- */
+-
+-/*
+- * (now, with legal B.S. out of the way.....)
+- *
+- * This routine gathers environmental noise from device drivers, etc.,
+- * and returns good random numbers, suitable for cryptographic use.
+- * Besides the obvious cryptographic uses, these numbers are also good
+- * for seeding TCP sequence numbers, and other places where it is
+- * desirable to have numbers which are not only random, but hard to
+- * predict by an attacker.
+- *
+- * Theory of operation
+- * ===================
+- *
+- * Computers are very predictable devices.  Hence it is extremely hard
+- * to produce truly random numbers on a computer --- as opposed to
+- * pseudo-random numbers, which can easily generated by using a
+- * algorithm.  Unfortunately, it is very easy for attackers to guess
+- * the sequence of pseudo-random number generators, and for some
+- * applications this is not acceptable.  So instead, we must try to
+- * gather "environmental noise" from the computer's environment, which
+- * must be hard for outside attackers to observe, and use that to
+- * generate random numbers.  In a Unix environment, this is best done
+- * from inside the kernel.
+- *
+- * Sources of randomness from the environment include inter-keyboard
+- * timings, inter-interrupt timings from some interrupts, and other
+- * events which are both (a) non-deterministic and (b) hard for an
+- * outside observer to measure.  Randomness from these sources are
+- * added to an "entropy pool", which is mixed using a CRC-like function.
+- * This is not cryptographically strong, but it is adequate assuming
+- * the randomness is not chosen maliciously, and it is fast enough that
+- * the overhead of doing it on every interrupt is very reasonable.
+- * As random bytes are mixed into the entropy pool, the routines keep
+- * an *estimate* of how many bits of randomness have been stored into
+- * the random number generator's internal state.
+- *
+- * When random bytes are desired, they are obtained by taking the SHA
+- * hash of the contents of the "entropy pool".  The SHA hash avoids
+- * exposing the internal state of the entropy pool.  It is believed to
+- * be computationally infeasible to derive any useful information
+- * about the input of SHA from its output.  Even if it is possible to
+- * analyze SHA in some clever way, as long as the amount of data
+- * returned from the generator is less than the inherent entropy in
+- * the pool, the output data is totally unpredictable.  For this
+- * reason, the routine decreases its internal estimate of how many
+- * bits of "true randomness" are contained in the entropy pool as it
+- * outputs random numbers.
+- *
+- * If this estimate goes to zero, the routine can still generate
+- * random numbers; however, an attacker may (at least in theory) be
+- * able to infer the future output of the generator from prior
+- * outputs.  This requires successful cryptanalysis of SHA, which is
+- * not believed to be feasible, but there is a remote possibility.
+- * Nonetheless, these numbers should be useful for the vast majority
+- * of purposes.
+- *
+- * Exported interfaces ---- output
+- * ===============================
+- *
+- * There are four exported interfaces; two for use within the kernel,
+- * and two or use from userspace.
+- *
+- * Exported interfaces ---- userspace output
+- * -----------------------------------------
+- *
+- * The userspace interfaces are two character devices /dev/random and
+- * /dev/urandom.  /dev/random is suitable for use when very high
+- * quality randomness is desired (for example, for key generation or
+- * one-time pads), as it will only return a maximum of the number of
+- * bits of randomness (as estimated by the random number generator)
+- * contained in the entropy pool.
+- *
+- * The /dev/urandom device does not have this limit, and will return
+- * as many bytes as are requested.  As more and more random bytes are
+- * requested without giving time for the entropy pool to recharge,
+- * this will result in random numbers that are merely cryptographically
+- * strong.  For many applications, however, this is acceptable.
+- *
+- * Exported interfaces ---- kernel output
+- * --------------------------------------
+- *
+- * The primary kernel interface is
+- *
+- * 	void get_random_bytes(void *buf, int nbytes);
+- *
+- * This interface will return the requested number of random bytes,
+- * and place it in the requested buffer.  This is equivalent to a
+- * read from /dev/urandom.
+- *
+- * For less critical applications, there are the functions:
+- *
+- * 	u32 get_random_u32()
+- * 	u64 get_random_u64()
+- * 	unsigned int get_random_int()
+- * 	unsigned long get_random_long()
+- *
+- * These are produced by a cryptographic RNG seeded from get_random_bytes,
+- * and so do not deplete the entropy pool as much.  These are recommended
+- * for most in-kernel operations *if the result is going to be stored in
+- * the kernel*.
+- *
+- * Specifically, the get_random_int() family do not attempt to do
+- * "anti-backtracking".  If you capture the state of the kernel (e.g.
+- * by snapshotting the VM), you can figure out previous get_random_int()
+- * return values.  But if the value is stored in the kernel anyway,
+- * this is not a problem.
+- *
+- * It *is* safe to expose get_random_int() output to attackers (e.g. as
+- * network cookies); given outputs 1..n, it's not feasible to predict
+- * outputs 0 or n+1.  The only concern is an attacker who breaks into
+- * the kernel later; the get_random_int() engine is not reseeded as
+- * often as the get_random_bytes() one.
+- *
+- * get_random_bytes() is needed for keys that need to stay secret after
+- * they are erased from the kernel.  For example, any key that will
+- * be wrapped and stored encrypted.  And session encryption keys: we'd
+- * like to know that after the session is closed and the keys erased,
+- * the plaintext is unrecoverable to someone who recorded the ciphertext.
+- *
+- * But for network ports/cookies, stack canaries, PRNG seeds, address
+- * space layout randomization, session *authentication* keys, or other
+- * applications where the sensitive data is stored in the kernel in
+- * plaintext for as long as it's sensitive, the get_random_int() family
+- * is just fine.
+- *
+- * Consider ASLR.  We want to keep the address space secret from an
+- * outside attacker while the process is running, but once the address
+- * space is torn down, it's of no use to an attacker any more.  And it's
+- * stored in kernel data structures as long as it's alive, so worrying
+- * about an attacker's ability to extrapolate it from the get_random_int()
+- * CRNG is silly.
+- *
+- * Even some cryptographic keys are safe to generate with get_random_int().
+- * In particular, keys for SipHash are generally fine.  Here, knowledge
+- * of the key authorizes you to do something to a kernel object (inject
+- * packets to a network connection, or flood a hash table), and the
+- * key is stored with the object being protected.  Once it goes away,
+- * we no longer care if anyone knows the key.
+- *
+- * prandom_u32()
+- * -------------
+- *
+- * For even weaker applications, see the pseudorandom generator
+- * prandom_u32(), prandom_max(), and prandom_bytes().  If the random
+- * numbers aren't security-critical at all, these are *far* cheaper.
+- * Useful for self-tests, random error simulation, randomized backoffs,
+- * and any other application where you trust that nobody is trying to
+- * maliciously mess with you by guessing the "random" numbers.
+- *
+- * Exported interfaces ---- input
+- * ==============================
+- *
+- * The current exported interfaces for gathering environmental noise
+- * from the devices are:
+- *
+- *	void add_device_randomness(const void *buf, unsigned int size);
+- * 	void add_input_randomness(unsigned int type, unsigned int code,
+- *                                unsigned int value);
+- *	void add_interrupt_randomness(int irq, int irq_flags);
+- * 	void add_disk_randomness(struct gendisk *disk);
+- *
+- * add_device_randomness() is for adding data to the random pool that
+- * is likely to differ between two devices (or possibly even per boot).
+- * This would be things like MAC addresses or serial numbers, or the
+- * read-out of the RTC. This does *not* add any actual entropy to the
+- * pool, but it initializes the pool to different values for devices
+- * that might otherwise be identical and have very little entropy
+- * available to them (particularly common in the embedded world).
+- *
+- * add_input_randomness() uses the input layer interrupt timing, as well as
+- * the event type information from the hardware.
+- *
+- * add_interrupt_randomness() uses the interrupt timing as random
+- * inputs to the entropy pool. Using the cycle counters and the irq source
+- * as inputs, it feeds the randomness roughly once a second.
+- *
+- * add_disk_randomness() uses what amounts to the seek time of block
+- * layer request events, on a per-disk_devt basis, as input to the
+- * entropy pool. Note that high-speed solid state drives with very low
+- * seek times do not make for good sources of entropy, as their seek
+- * times are usually fairly consistent.
+- *
+- * All of these routines try to estimate how many bits of randomness a
+- * particular randomness source.  They do this by keeping track of the
+- * first and second order deltas of the event timings.
+- *
+- * Ensuring unpredictability at system startup
+- * ============================================
+- *
+- * When any operating system starts up, it will go through a sequence
+- * of actions that are fairly predictable by an adversary, especially
+- * if the start-up does not involve interaction with a human operator.
+- * This reduces the actual number of bits of unpredictability in the
+- * entropy pool below the value in entropy_count.  In order to
+- * counteract this effect, it helps to carry information in the
+- * entropy pool across shut-downs and start-ups.  To do this, put the
+- * following lines an appropriate script which is run during the boot
+- * sequence:
+- *
+- *	echo "Initializing random number generator..."
+- *	random_seed=/var/run/random-seed
+- *	# Carry a random seed from start-up to start-up
+- *	# Load and then save the whole entropy pool
+- *	if [ -f $random_seed ]; then
+- *		cat $random_seed >/dev/urandom
+- *	else
+- *		touch $random_seed
+- *	fi
+- *	chmod 600 $random_seed
+- *	dd if=/dev/urandom of=$random_seed count=1 bs=512
+- *
+- * and the following lines in an appropriate script which is run as
+- * the system is shutdown:
+- *
+- *	# Carry a random seed from shut-down to start-up
+- *	# Save the whole entropy pool
+- *	echo "Saving random seed..."
+- *	random_seed=/var/run/random-seed
+- *	touch $random_seed
+- *	chmod 600 $random_seed
+- *	dd if=/dev/urandom of=$random_seed count=1 bs=512
+- *
+- * For example, on most modern systems using the System V init
+- * scripts, such code fragments would be found in
+- * /etc/rc.d/init.d/random.  On older Linux systems, the correct script
+- * location might be in /etc/rcb.d/rc.local or /etc/rc.d/rc.0.
+- *
+- * Effectively, these commands cause the contents of the entropy pool
+- * to be saved at shut-down time and reloaded into the entropy pool at
+- * start-up.  (The 'dd' in the addition to the bootup script is to
+- * make sure that /etc/random-seed is different for every start-up,
+- * even if the system crashes without executing rc.0.)  Even with
+- * complete knowledge of the start-up activities, predicting the state
+- * of the entropy pool requires knowledge of the previous history of
+- * the system.
+- *
+- * Configuring the /dev/random driver under Linux
+- * ==============================================
+- *
+- * The /dev/random driver under Linux uses minor numbers 8 and 9 of
+- * the /dev/mem major number (#1).  So if your system does not have
+- * /dev/random and /dev/urandom created already, they can be created
+- * by using the commands:
+- *
+- * 	mknod /dev/random c 1 8
+- * 	mknod /dev/urandom c 1 9
+- *
+- * Acknowledgements:
+- * =================
+- *
+- * Ideas for constructing this random number generator were derived
+- * from Pretty Good Privacy's random number generator, and from private
+- * discussions with Phil Karn.  Colin Plumb provided a faster random
+- * number generator, which speed up the mixing function of the entropy
+- * pool, taken from PGPfone.  Dale Worley has also contributed many
+- * useful ideas and suggestions to improve this driver.
+- *
+- * Any flaws in the design are solely my responsibility, and should
+- * not be attributed to the Phil, Colin, or any of authors of PGP.
+- *
+- * Further background information on this topic may be obtained from
+- * RFC 1750, "Randomness Recommendations for Security", by Donald
+- * Eastlake, Steve Crocker, and Jeff Schiller.
++ * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All rights reserved.
++ *
++ * This driver produces cryptographically secure pseudorandom data. It is divided
++ * into roughly six sections, each with a section header:
++ *
++ *   - Initialization and readiness waiting.
++ *   - Fast key erasure RNG, the "crng".
++ *   - Entropy accumulation and extraction routines.
++ *   - Entropy collection routines.
++ *   - Userspace reader/writer interfaces.
++ *   - Sysctl interface.
++ *
++ * The high level overview is that there is one input pool, into which
++ * various pieces of data are hashed. Prior to initialization, some of that
++ * data is then "credited" as having a certain number of bits of entropy.
++ * When enough bits of entropy are available, the hash is finalized and
++ * handed as a key to a stream cipher that expands it indefinitely for
++ * various consumers. This key is periodically refreshed as the various
++ * entropy collectors, described below, add data to the input pool.
+  */
+ 
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+@@ -327,7 +43,6 @@
+ #include <linux/spinlock.h>
+ #include <linux/kthread.h>
+ #include <linux/percpu.h>
+-#include <linux/fips.h>
+ #include <linux/ptrace.h>
+ #include <linux/workqueue.h>
+ #include <linux/irq.h>
+@@ -335,1503 +50,1082 @@
+ #include <linux/syscalls.h>
+ #include <linux/completion.h>
+ #include <linux/uuid.h>
++#include <linux/uaccess.h>
++#include <linux/siphash.h>
++#include <linux/uio.h>
+ #include <crypto/chacha.h>
+-#include <crypto/sha.h>
+-
++#include <crypto/blake2s.h>
+ #include <asm/processor.h>
+-#include <linux/uaccess.h>
+ #include <asm/irq.h>
+ #include <asm/irq_regs.h>
+ #include <asm/io.h>
+ 
+-#define CREATE_TRACE_POINTS
+-#include <trace/events/random.h>
+-
+-/* #define ADD_INTERRUPT_BENCH */
++/*********************************************************************
++ *
++ * Initialization and readiness waiting.
++ *
++ * Much of the RNG infrastructure is devoted to various dependencies
++ * being able to wait until the RNG has collected enough entropy and
++ * is ready for safe consumption.
++ *
++ *********************************************************************/
+ 
+ /*
+- * Configuration information
++ * crng_init is protected by base_crng->lock, and only increases
++ * its value (from empty->early->ready).
+  */
+-#define INPUT_POOL_SHIFT	12
+-#define INPUT_POOL_WORDS	(1 << (INPUT_POOL_SHIFT-5))
+-#define OUTPUT_POOL_SHIFT	10
+-#define OUTPUT_POOL_WORDS	(1 << (OUTPUT_POOL_SHIFT-5))
+-#define EXTRACT_SIZE		10
+-
++static enum {
++	CRNG_EMPTY = 0, /* Little to no entropy collected */
++	CRNG_EARLY = 1, /* At least POOL_EARLY_BITS collected */
++	CRNG_READY = 2  /* Fully initialized with POOL_READY_BITS collected */
++} crng_init __read_mostly = CRNG_EMPTY;
++static DEFINE_STATIC_KEY_FALSE(crng_is_ready);
++#define crng_ready() (static_branch_likely(&crng_is_ready) || crng_init >= CRNG_READY)
++/* Various types of waiters for crng_init->CRNG_READY transition. */
++static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
++static struct fasync_struct *fasync;
++static DEFINE_SPINLOCK(random_ready_chain_lock);
++static RAW_NOTIFIER_HEAD(random_ready_chain);
+ 
+-#define LONGS(x) (((x) + sizeof(unsigned long) - 1)/sizeof(unsigned long))
++/* Control how we warn userspace. */
++static struct ratelimit_state urandom_warning =
++	RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3);
++static int ratelimit_disable __read_mostly =
++	IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM);
++module_param_named(ratelimit_disable, ratelimit_disable, int, 0644);
++MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression");
+ 
+ /*
+- * To allow fractional bits to be tracked, the entropy_count field is
+- * denominated in units of 1/8th bits.
++ * Returns whether or not the input pool has been seeded and thus guaranteed
++ * to supply cryptographically secure random numbers. This applies to: the
++ * /dev/urandom device, the get_random_bytes function, and the get_random_{u32,
++ * ,u64,int,long} family of functions.
+  *
+- * 2*(ENTROPY_SHIFT + poolbitshift) must <= 31, or the multiply in
+- * credit_entropy_bits() needs to be 64 bits wide.
++ * Returns: true if the input pool has been seeded.
++ *          false if the input pool has not been seeded.
+  */
+-#define ENTROPY_SHIFT 3
+-#define ENTROPY_BITS(r) ((r)->entropy_count >> ENTROPY_SHIFT)
++bool rng_is_initialized(void)
++{
++	return crng_ready();
++}
++EXPORT_SYMBOL(rng_is_initialized);
+ 
+-/*
+- * If the entropy count falls under this number of bits, then we
+- * should wake up processes which are selecting or polling on write
+- * access to /dev/random.
+- */
+-static int random_write_wakeup_bits = 28 * OUTPUT_POOL_WORDS;
++static void __cold crng_set_ready(struct work_struct *work)
++{
++	static_branch_enable(&crng_is_ready);
++}
++
++/* Used by wait_for_random_bytes(), and considered an entropy collector, below. */
++static void try_to_generate_entropy(void);
+ 
+ /*
+- * Originally, we used a primitive polynomial of degree .poolwords
+- * over GF(2).  The taps for various sizes are defined below.  They
+- * were chosen to be evenly spaced except for the last tap, which is 1
+- * to get the twisting happening as fast as possible.
+- *
+- * For the purposes of better mixing, we use the CRC-32 polynomial as
+- * well to make a (modified) twisted Generalized Feedback Shift
+- * Register.  (See M. Matsumoto & Y. Kurita, 1992.  Twisted GFSR
+- * generators.  ACM Transactions on Modeling and Computer Simulation
+- * 2(3):179-194.  Also see M. Matsumoto & Y. Kurita, 1994.  Twisted
+- * GFSR generators II.  ACM Transactions on Modeling and Computer
+- * Simulation 4:254-266)
+- *
+- * Thanks to Colin Plumb for suggesting this.
+- *
+- * The mixing operation is much less sensitive than the output hash,
+- * where we use SHA-1.  All that we want of mixing operation is that
+- * it be a good non-cryptographic hash; i.e. it not produce collisions
+- * when fed "random" data of the sort we expect to see.  As long as
+- * the pool state differs for different inputs, we have preserved the
+- * input entropy and done a good job.  The fact that an intelligent
+- * attacker can construct inputs that will produce controlled
+- * alterations to the pool's state is not important because we don't
+- * consider such inputs to contribute any randomness.  The only
+- * property we need with respect to them is that the attacker can't
+- * increase his/her knowledge of the pool's state.  Since all
+- * additions are reversible (knowing the final state and the input,
+- * you can reconstruct the initial state), if an attacker has any
+- * uncertainty about the initial state, he/she can only shuffle that
+- * uncertainty about, but never cause any collisions (which would
+- * decrease the uncertainty).
++ * Wait for the input pool to be seeded and thus guaranteed to supply
++ * cryptographically secure random numbers. This applies to: the /dev/urandom
++ * device, the get_random_bytes function, and the get_random_{u32,u64,int,long}
++ * family of functions. Using any of these functions without first calling
++ * this function forfeits the guarantee of security.
+  *
+- * Our mixing functions were analyzed by Lacharme, Roeck, Strubel, and
+- * Videau in their paper, "The Linux Pseudorandom Number Generator
+- * Revisited" (see: http://eprint.iacr.org/2012/251.pdf).  In their
+- * paper, they point out that we are not using a true Twisted GFSR,
+- * since Matsumoto & Kurita used a trinomial feedback polynomial (that
+- * is, with only three taps, instead of the six that we are using).
+- * As a result, the resulting polynomial is neither primitive nor
+- * irreducible, and hence does not have a maximal period over
+- * GF(2**32).  They suggest a slight change to the generator
+- * polynomial which improves the resulting TGFSR polynomial to be
+- * irreducible, which we have made here.
++ * Returns: 0 if the input pool has been seeded.
++ *          -ERESTARTSYS if the function was interrupted by a signal.
+  */
+-static const struct poolinfo {
+-	int poolbitshift, poolwords, poolbytes, poolfracbits;
+-#define S(x) ilog2(x)+5, (x), (x)*4, (x) << (ENTROPY_SHIFT+5)
+-	int tap1, tap2, tap3, tap4, tap5;
+-} poolinfo_table[] = {
+-	/* was: x^128 + x^103 + x^76 + x^51 +x^25 + x + 1 */
+-	/* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */
+-	{ S(128),	104,	76,	51,	25,	1 },
+-};
++int wait_for_random_bytes(void)
++{
++	while (!crng_ready()) {
++		int ret;
++
++		try_to_generate_entropy();
++		ret = wait_event_interruptible_timeout(crng_init_wait, crng_ready(), HZ);
++		if (ret)
++			return ret > 0 ? 0 : ret;
++	}
++	return 0;
++}
++EXPORT_SYMBOL(wait_for_random_bytes);
+ 
+ /*
+- * Static global variables
++ * Add a callback function that will be invoked when the input
++ * pool is initialised.
++ *
++ * returns: 0 if callback is successfully added
++ *	    -EALREADY if pool is already initialised (callback not called)
+  */
+-static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);
+-static struct fasync_struct *fasync;
+-
+-static DEFINE_SPINLOCK(random_ready_list_lock);
+-static LIST_HEAD(random_ready_list);
++int __cold register_random_ready_notifier(struct notifier_block *nb)
++{
++	unsigned long flags;
++	int ret = -EALREADY;
+ 
+-struct crng_state {
+-	__u32		state[16];
+-	unsigned long	init_time;
+-	spinlock_t	lock;
+-};
++	if (crng_ready())
++		return ret;
+ 
+-static struct crng_state primary_crng = {
+-	.lock = __SPIN_LOCK_UNLOCKED(primary_crng.lock),
+-};
++	spin_lock_irqsave(&random_ready_chain_lock, flags);
++	if (!crng_ready())
++		ret = raw_notifier_chain_register(&random_ready_chain, nb);
++	spin_unlock_irqrestore(&random_ready_chain_lock, flags);
++	return ret;
++}
++EXPORT_SYMBOL(register_random_ready_notifier);
+ 
+ /*
+- * crng_init =  0 --> Uninitialized
+- *		1 --> Initialized
+- *		2 --> Initialized from input_pool
+- *
+- * crng_init is protected by primary_crng->lock, and only increases
+- * its value (from 0->1->2).
++ * Delete a previously registered readiness callback function.
+  */
+-static int crng_init = 0;
+-static bool crng_need_final_init = false;
+-#define crng_ready() (likely(crng_init > 1))
+-static int crng_init_cnt = 0;
+-static unsigned long crng_global_init_time = 0;
+-#define CRNG_INIT_CNT_THRESH (2*CHACHA_KEY_SIZE)
+-static void _extract_crng(struct crng_state *crng, __u8 out[CHACHA_BLOCK_SIZE]);
+-static void _crng_backtrack_protect(struct crng_state *crng,
+-				    __u8 tmp[CHACHA_BLOCK_SIZE], int used);
+-static void process_random_ready_list(void);
+-static void _get_random_bytes(void *buf, int nbytes);
+-
+-static struct ratelimit_state unseeded_warning =
+-	RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3);
+-static struct ratelimit_state urandom_warning =
+-	RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3);
++int __cold unregister_random_ready_notifier(struct notifier_block *nb)
++{
++	unsigned long flags;
++	int ret;
+ 
+-static int ratelimit_disable __read_mostly;
++	spin_lock_irqsave(&random_ready_chain_lock, flags);
++	ret = raw_notifier_chain_unregister(&random_ready_chain, nb);
++	spin_unlock_irqrestore(&random_ready_chain_lock, flags);
++	return ret;
++}
++EXPORT_SYMBOL(unregister_random_ready_notifier);
+ 
+-module_param_named(ratelimit_disable, ratelimit_disable, int, 0644);
+-MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression");
++static void __cold process_random_ready_list(void)
++{
++	unsigned long flags;
+ 
+-/**********************************************************************
++	spin_lock_irqsave(&random_ready_chain_lock, flags);
++	raw_notifier_call_chain(&random_ready_chain, 0, NULL);
++	spin_unlock_irqrestore(&random_ready_chain_lock, flags);
++}
++
++#define warn_unseeded_randomness() \
++	if (IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) && !crng_ready()) \
++		printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", \
++				__func__, (void *)_RET_IP_, crng_init)
++
++
++/*********************************************************************
+  *
+- * OS independent entropy store.   Here are the functions which handle
+- * storing entropy in an entropy pool.
++ * Fast key erasure RNG, the "crng".
+  *
+- **********************************************************************/
++ * These functions expand entropy from the entropy extractor into
++ * long streams for external consumption using the "fast key erasure"
++ * RNG described at <https://blog.cr.yp.to/20170723-random.html>.
++ *
++ * There are a few exported interfaces for use by other drivers:
++ *
++ *	void get_random_bytes(void *buf, size_t len)
++ *	u32 get_random_u32()
++ *	u64 get_random_u64()
++ *	unsigned int get_random_int()
++ *	unsigned long get_random_long()
++ *
++ * These interfaces will return the requested number of random bytes
++ * into the given buffer or as a return value. This is equivalent to
++ * a read from /dev/urandom. The u32, u64, int, and long family of
++ * functions may be higher performance for one-off random integers,
++ * because they do a bit of buffering and do not invoke reseeding
++ * until the buffer is emptied.
++ *
++ *********************************************************************/
+ 
+-struct entropy_store;
+-struct entropy_store {
+-	/* read-only data: */
+-	const struct poolinfo *poolinfo;
+-	__u32 *pool;
+-	const char *name;
++enum {
++	CRNG_RESEED_START_INTERVAL = HZ,
++	CRNG_RESEED_INTERVAL = 60 * HZ
++};
+ 
+-	/* read-write data: */
++static struct {
++	u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long));
++	unsigned long birth;
++	unsigned long generation;
+ 	spinlock_t lock;
+-	unsigned short add_ptr;
+-	unsigned short input_rotate;
+-	int entropy_count;
+-	unsigned int initialized:1;
+-	unsigned int last_data_init:1;
+-	__u8 last_data[EXTRACT_SIZE];
++} base_crng = {
++	.lock = __SPIN_LOCK_UNLOCKED(base_crng.lock)
+ };
+ 
+-static ssize_t extract_entropy(struct entropy_store *r, void *buf,
+-			       size_t nbytes, int min, int rsvd);
+-static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
+-				size_t nbytes, int fips);
+-
+-static void crng_reseed(struct crng_state *crng, struct entropy_store *r);
+-static __u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy;
+-
+-static struct entropy_store input_pool = {
+-	.poolinfo = &poolinfo_table[0],
+-	.name = "input",
+-	.lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
+-	.pool = input_pool_data
++struct crng {
++	u8 key[CHACHA_KEY_SIZE];
++	unsigned long generation;
++	local_lock_t lock;
+ };
+ 
+-static __u32 const twist_table[8] = {
+-	0x00000000, 0x3b6e20c8, 0x76dc4190, 0x4db26158,
+-	0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 };
+-
+-/*
+- * This function adds bytes into the entropy "pool".  It does not
+- * update the entropy estimate.  The caller should call
+- * credit_entropy_bits if this is appropriate.
+- *
+- * The pool is stirred with a primitive polynomial of the appropriate
+- * degree, and then twisted.  We twist by three bits at a time because
+- * it's cheap to do so and helps slightly in the expected case where
+- * the entropy is concentrated in the low-order bits.
+- */
+-static void _mix_pool_bytes(struct entropy_store *r, const void *in,
+-			    int nbytes)
+-{
+-	unsigned long i, tap1, tap2, tap3, tap4, tap5;
+-	int input_rotate;
+-	int wordmask = r->poolinfo->poolwords - 1;
+-	const char *bytes = in;
+-	__u32 w;
+-
+-	tap1 = r->poolinfo->tap1;
+-	tap2 = r->poolinfo->tap2;
+-	tap3 = r->poolinfo->tap3;
+-	tap4 = r->poolinfo->tap4;
+-	tap5 = r->poolinfo->tap5;
+-
+-	input_rotate = r->input_rotate;
+-	i = r->add_ptr;
+-
+-	/* mix one byte at a time to simplify size handling and churn faster */
+-	while (nbytes--) {
+-		w = rol32(*bytes++, input_rotate);
+-		i = (i - 1) & wordmask;
+-
+-		/* XOR in the various taps */
+-		w ^= r->pool[i];
+-		w ^= r->pool[(i + tap1) & wordmask];
+-		w ^= r->pool[(i + tap2) & wordmask];
+-		w ^= r->pool[(i + tap3) & wordmask];
+-		w ^= r->pool[(i + tap4) & wordmask];
+-		w ^= r->pool[(i + tap5) & wordmask];
+-
+-		/* Mix the result back in with a twist */
+-		r->pool[i] = (w >> 3) ^ twist_table[w & 7];
+-
+-		/*
+-		 * Normally, we add 7 bits of rotation to the pool.
+-		 * At the beginning of the pool, add an extra 7 bits
+-		 * rotation, so that successive passes spread the
+-		 * input bits across the pool evenly.
+-		 */
+-		input_rotate = (input_rotate + (i ? 7 : 14)) & 31;
+-	}
+-
+-	r->input_rotate = input_rotate;
+-	r->add_ptr = i;
+-}
++static DEFINE_PER_CPU(struct crng, crngs) = {
++	.generation = ULONG_MAX,
++	.lock = INIT_LOCAL_LOCK(crngs.lock),
++};
+ 
+-static void __mix_pool_bytes(struct entropy_store *r, const void *in,
+-			     int nbytes)
+-{
+-	trace_mix_pool_bytes_nolock(r->name, nbytes, _RET_IP_);
+-	_mix_pool_bytes(r, in, nbytes);
+-}
++/* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */
++static void extract_entropy(void *buf, size_t len);
+ 
+-static void mix_pool_bytes(struct entropy_store *r, const void *in,
+-			   int nbytes)
++/* This extracts a new crng key from the input pool. */
++static void crng_reseed(void)
+ {
+ 	unsigned long flags;
++	unsigned long next_gen;
++	u8 key[CHACHA_KEY_SIZE];
+ 
+-	trace_mix_pool_bytes(r->name, nbytes, _RET_IP_);
+-	spin_lock_irqsave(&r->lock, flags);
+-	_mix_pool_bytes(r, in, nbytes);
+-	spin_unlock_irqrestore(&r->lock, flags);
+-}
++	extract_entropy(key, sizeof(key));
+ 
+-struct fast_pool {
+-	__u32		pool[4];
+-	unsigned long	last;
+-	unsigned short	reg_idx;
+-	unsigned char	count;
+-};
++	/*
++	 * We copy the new key into the base_crng, overwriting the old one,
++	 * and update the generation counter. We avoid hitting ULONG_MAX,
++	 * because the per-cpu crngs are initialized to ULONG_MAX, so this
++	 * forces new CPUs that come online to always initialize.
++	 */
++	spin_lock_irqsave(&base_crng.lock, flags);
++	memcpy(base_crng.key, key, sizeof(base_crng.key));
++	next_gen = base_crng.generation + 1;
++	if (next_gen == ULONG_MAX)
++		++next_gen;
++	WRITE_ONCE(base_crng.generation, next_gen);
++	WRITE_ONCE(base_crng.birth, jiffies);
++	if (!static_branch_likely(&crng_is_ready))
++		crng_init = CRNG_READY;
++	spin_unlock_irqrestore(&base_crng.lock, flags);
++	memzero_explicit(key, sizeof(key));
++}
+ 
+ /*
+- * This is a fast mixing routine used by the interrupt randomness
+- * collector.  It's hardcoded for an 128 bit pool and assumes that any
+- * locks that might be needed are taken by the caller.
++ * This generates a ChaCha block using the provided key, and then
++ * immediately overwites that key with half the block. It returns
++ * the resultant ChaCha state to the user, along with the second
++ * half of the block containing 32 bytes of random data that may
++ * be used; random_data_len may not be greater than 32.
++ *
++ * The returned ChaCha state contains within it a copy of the old
++ * key value, at index 4, so the state should always be zeroed out
++ * immediately after using in order to maintain forward secrecy.
++ * If the state cannot be erased in a timely manner, then it is
++ * safer to set the random_data parameter to &chacha_state[4] so
++ * that this function overwrites it before returning.
+  */
+-static void fast_mix(struct fast_pool *f)
++static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE],
++				  u32 chacha_state[CHACHA_STATE_WORDS],
++				  u8 *random_data, size_t random_data_len)
+ {
+-	__u32 a = f->pool[0],	b = f->pool[1];
+-	__u32 c = f->pool[2],	d = f->pool[3];
+-
+-	a += b;			c += d;
+-	b = rol32(b, 6);	d = rol32(d, 27);
+-	d ^= a;			b ^= c;
++	u8 first_block[CHACHA_BLOCK_SIZE];
+ 
+-	a += b;			c += d;
+-	b = rol32(b, 16);	d = rol32(d, 14);
+-	d ^= a;			b ^= c;
++	BUG_ON(random_data_len > 32);
+ 
+-	a += b;			c += d;
+-	b = rol32(b, 6);	d = rol32(d, 27);
+-	d ^= a;			b ^= c;
++	chacha_init_consts(chacha_state);
++	memcpy(&chacha_state[4], key, CHACHA_KEY_SIZE);
++	memset(&chacha_state[12], 0, sizeof(u32) * 4);
++	chacha20_block(chacha_state, first_block);
+ 
+-	a += b;			c += d;
+-	b = rol32(b, 16);	d = rol32(d, 14);
+-	d ^= a;			b ^= c;
+-
+-	f->pool[0] = a;  f->pool[1] = b;
+-	f->pool[2] = c;  f->pool[3] = d;
+-	f->count++;
++	memcpy(key, first_block, CHACHA_KEY_SIZE);
++	memcpy(random_data, first_block + CHACHA_KEY_SIZE, random_data_len);
++	memzero_explicit(first_block, sizeof(first_block));
+ }
+ 
+-static void process_random_ready_list(void)
+-{
+-	unsigned long flags;
+-	struct random_ready_callback *rdy, *tmp;
+-
+-	spin_lock_irqsave(&random_ready_list_lock, flags);
+-	list_for_each_entry_safe(rdy, tmp, &random_ready_list, list) {
+-		struct module *owner = rdy->owner;
+-
+-		list_del_init(&rdy->list);
+-		rdy->func(rdy);
+-		module_put(owner);
++/*
++ * Return whether the crng seed is considered to be sufficiently old
++ * that a reseeding is needed. This happens if the last reseeding
++ * was CRNG_RESEED_INTERVAL ago, or during early boot, at an interval
++ * proportional to the uptime.
++ */
++static bool crng_has_old_seed(void)
++{
++	static bool early_boot = true;
++	unsigned long interval = CRNG_RESEED_INTERVAL;
++
++	if (unlikely(READ_ONCE(early_boot))) {
++		time64_t uptime = ktime_get_seconds();
++		if (uptime >= CRNG_RESEED_INTERVAL / HZ * 2)
++			WRITE_ONCE(early_boot, false);
++		else
++			interval = max_t(unsigned int, CRNG_RESEED_START_INTERVAL,
++					 (unsigned int)uptime / 2 * HZ);
+ 	}
+-	spin_unlock_irqrestore(&random_ready_list_lock, flags);
++	return time_is_before_jiffies(READ_ONCE(base_crng.birth) + interval);
+ }
+ 
+ /*
+- * Credit (or debit) the entropy store with n bits of entropy.
+- * Use credit_entropy_bits_safe() if the value comes from userspace
+- * or otherwise should be checked for extreme values.
++ * This function returns a ChaCha state that you may use for generating
++ * random data. It also returns up to 32 bytes on its own of random data
++ * that may be used; random_data_len may not be greater than 32.
+  */
+-static void credit_entropy_bits(struct entropy_store *r, int nbits)
++static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS],
++			    u8 *random_data, size_t random_data_len)
+ {
+-	int entropy_count, orig, has_initialized = 0;
+-	const int pool_size = r->poolinfo->poolfracbits;
+-	int nfrac = nbits << ENTROPY_SHIFT;
+-
+-	if (!nbits)
+-		return;
++	unsigned long flags;
++	struct crng *crng;
+ 
+-retry:
+-	entropy_count = orig = READ_ONCE(r->entropy_count);
+-	if (nfrac < 0) {
+-		/* Debit */
+-		entropy_count += nfrac;
+-	} else {
+-		/*
+-		 * Credit: we have to account for the possibility of
+-		 * overwriting already present entropy.	 Even in the
+-		 * ideal case of pure Shannon entropy, new contributions
+-		 * approach the full value asymptotically:
+-		 *
+-		 * entropy <- entropy + (pool_size - entropy) *
+-		 *	(1 - exp(-add_entropy/pool_size))
+-		 *
+-		 * For add_entropy <= pool_size/2 then
+-		 * (1 - exp(-add_entropy/pool_size)) >=
+-		 *    (add_entropy/pool_size)*0.7869...
+-		 * so we can approximate the exponential with
+-		 * 3/4*add_entropy/pool_size and still be on the
+-		 * safe side by adding at most pool_size/2 at a time.
+-		 *
+-		 * The use of pool_size-2 in the while statement is to
+-		 * prevent rounding artifacts from making the loop
+-		 * arbitrarily long; this limits the loop to log2(pool_size)*2
+-		 * turns no matter how large nbits is.
+-		 */
+-		int pnfrac = nfrac;
+-		const int s = r->poolinfo->poolbitshift + ENTROPY_SHIFT + 2;
+-		/* The +2 corresponds to the /4 in the denominator */
+-
+-		do {
+-			unsigned int anfrac = min(pnfrac, pool_size/2);
+-			unsigned int add =
+-				((pool_size - entropy_count)*anfrac*3) >> s;
+-
+-			entropy_count += add;
+-			pnfrac -= anfrac;
+-		} while (unlikely(entropy_count < pool_size-2 && pnfrac));
+-	}
++	BUG_ON(random_data_len > 32);
+ 
+-	if (WARN_ON(entropy_count < 0)) {
+-		pr_warn("negative entropy/overflow: pool %s count %d\n",
+-			r->name, entropy_count);
+-		entropy_count = 0;
+-	} else if (entropy_count > pool_size)
+-		entropy_count = pool_size;
+-	if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
+-		goto retry;
+-
+-	if (has_initialized) {
+-		r->initialized = 1;
+-		kill_fasync(&fasync, SIGIO, POLL_IN);
++	/*
++	 * For the fast path, we check whether we're ready, unlocked first, and
++	 * then re-check once locked later. In the case where we're really not
++	 * ready, we do fast key erasure with the base_crng directly, extracting
++	 * when crng_init is CRNG_EMPTY.
++	 */
++	if (!crng_ready()) {
++		bool ready;
++
++		spin_lock_irqsave(&base_crng.lock, flags);
++		ready = crng_ready();
++		if (!ready) {
++			if (crng_init == CRNG_EMPTY)
++				extract_entropy(base_crng.key, sizeof(base_crng.key));
++			crng_fast_key_erasure(base_crng.key, chacha_state,
++					      random_data, random_data_len);
++		}
++		spin_unlock_irqrestore(&base_crng.lock, flags);
++		if (!ready)
++			return;
+ 	}
+ 
+-	trace_credit_entropy_bits(r->name, nbits,
+-				  entropy_count >> ENTROPY_SHIFT, _RET_IP_);
++	/*
++	 * If the base_crng is old enough, we reseed, which in turn bumps the
++	 * generation counter that we check below.
++	 */
++	if (unlikely(crng_has_old_seed()))
++		crng_reseed();
+ 
+-	if (r == &input_pool) {
+-		int entropy_bits = entropy_count >> ENTROPY_SHIFT;
++	local_lock_irqsave(&crngs.lock, flags);
++	crng = raw_cpu_ptr(&crngs);
+ 
+-		if (crng_init < 2) {
+-			if (entropy_bits < 128)
+-				return;
+-			crng_reseed(&primary_crng, r);
+-			entropy_bits = ENTROPY_BITS(r);
+-		}
++	/*
++	 * If our per-cpu crng is older than the base_crng, then it means
++	 * somebody reseeded the base_crng. In that case, we do fast key
++	 * erasure on the base_crng, and use its output as the new key
++	 * for our per-cpu crng. This brings us up to date with base_crng.
++	 */
++	if (unlikely(crng->generation != READ_ONCE(base_crng.generation))) {
++		spin_lock(&base_crng.lock);
++		crng_fast_key_erasure(base_crng.key, chacha_state,
++				      crng->key, sizeof(crng->key));
++		crng->generation = base_crng.generation;
++		spin_unlock(&base_crng.lock);
+ 	}
++
++	/*
++	 * Finally, when we've made it this far, our per-cpu crng has an up
++	 * to date key, and we can do fast key erasure with it to produce
++	 * some random data and a ChaCha state for the caller. All other
++	 * branches of this function are "unlikely", so most of the time we
++	 * should wind up here immediately.
++	 */
++	crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len);
++	local_unlock_irqrestore(&crngs.lock, flags);
+ }
+ 
+-static int credit_entropy_bits_safe(struct entropy_store *r, int nbits)
++static void _get_random_bytes(void *buf, size_t len)
+ {
+-	const int nbits_max = r->poolinfo->poolwords * 32;
+-
+-	if (nbits < 0)
+-		return -EINVAL;
++	u32 chacha_state[CHACHA_STATE_WORDS];
++	u8 tmp[CHACHA_BLOCK_SIZE];
++	size_t first_block_len;
+ 
+-	/* Cap the value to avoid overflows */
+-	nbits = min(nbits,  nbits_max);
++	if (!len)
++		return;
+ 
+-	credit_entropy_bits(r, nbits);
+-	return 0;
+-}
++	first_block_len = min_t(size_t, 32, len);
++	crng_make_state(chacha_state, buf, first_block_len);
++	len -= first_block_len;
++	buf += first_block_len;
+ 
+-/*********************************************************************
+- *
+- * CRNG using CHACHA20
+- *
+- *********************************************************************/
++	while (len) {
++		if (len < CHACHA_BLOCK_SIZE) {
++			chacha20_block(chacha_state, tmp);
++			memcpy(buf, tmp, len);
++			memzero_explicit(tmp, sizeof(tmp));
++			break;
++		}
+ 
+-#define CRNG_RESEED_INTERVAL (300*HZ)
++		chacha20_block(chacha_state, buf);
++		if (unlikely(chacha_state[12] == 0))
++			++chacha_state[13];
++		len -= CHACHA_BLOCK_SIZE;
++		buf += CHACHA_BLOCK_SIZE;
++	}
+ 
+-static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
++	memzero_explicit(chacha_state, sizeof(chacha_state));
++}
+ 
+-#ifdef CONFIG_NUMA
+ /*
+- * Hack to deal with crazy userspace progams when they are all trying
+- * to access /dev/urandom in parallel.  The programs are almost
+- * certainly doing something terribly wrong, but we'll work around
+- * their brain damage.
++ * This function is the exported kernel interface.  It returns some
++ * number of good random numbers, suitable for key generation, seeding
++ * TCP sequence numbers, etc.  It does not rely on the hardware random
++ * number generator.  For random bytes direct from the hardware RNG
++ * (when available), use get_random_bytes_arch(). In order to ensure
++ * that the randomness provided by this function is okay, the function
++ * wait_for_random_bytes() should be called and return 0 at least once
++ * at any point prior.
+  */
+-static struct crng_state **crng_node_pool __read_mostly;
+-#endif
+-
+-static void invalidate_batched_entropy(void);
+-static void numa_crng_init(void);
+-
+-static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
+-static int __init parse_trust_cpu(char *arg)
++void get_random_bytes(void *buf, size_t len)
+ {
+-	return kstrtobool(arg, &trust_cpu);
++	warn_unseeded_randomness();
++	_get_random_bytes(buf, len);
+ }
+-early_param("random.trust_cpu", parse_trust_cpu);
++EXPORT_SYMBOL(get_random_bytes);
+ 
+-static bool crng_init_try_arch(struct crng_state *crng)
++static ssize_t get_random_bytes_user(struct iov_iter *iter)
+ {
+-	int		i;
+-	bool		arch_init = true;
+-	unsigned long	rv;
+-
+-	for (i = 4; i < 16; i++) {
+-		if (!arch_get_random_seed_long(&rv) &&
+-		    !arch_get_random_long(&rv)) {
+-			rv = random_get_entropy();
+-			arch_init = false;
+-		}
+-		crng->state[i] ^= rv;
++	u32 chacha_state[CHACHA_STATE_WORDS];
++	u8 block[CHACHA_BLOCK_SIZE];
++	size_t ret = 0, copied;
++
++	if (unlikely(!iov_iter_count(iter)))
++		return 0;
++
++	/*
++	 * Immediately overwrite the ChaCha key at index 4 with random
++	 * bytes, in case userspace causes copy_to_user() below to sleep
++	 * forever, so that we still retain forward secrecy in that case.
++	 */
++	crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE);
++	/*
++	 * However, if we're doing a read of len <= 32, we don't need to
++	 * use chacha_state after, so we can simply return those bytes to
++	 * the user directly.
++	 */
++	if (iov_iter_count(iter) <= CHACHA_KEY_SIZE) {
++		ret = copy_to_iter(&chacha_state[4], CHACHA_KEY_SIZE, iter);
++		goto out_zero_chacha;
+ 	}
+ 
+-	return arch_init;
+-}
++	for (;;) {
++		chacha20_block(chacha_state, block);
++		if (unlikely(chacha_state[12] == 0))
++			++chacha_state[13];
+ 
+-static bool __init crng_init_try_arch_early(struct crng_state *crng)
+-{
+-	int		i;
+-	bool		arch_init = true;
+-	unsigned long	rv;
+-
+-	for (i = 4; i < 16; i++) {
+-		if (!arch_get_random_seed_long_early(&rv) &&
+-		    !arch_get_random_long_early(&rv)) {
+-			rv = random_get_entropy();
+-			arch_init = false;
++		copied = copy_to_iter(block, sizeof(block), iter);
++		ret += copied;
++		if (!iov_iter_count(iter) || copied != sizeof(block))
++			break;
++
++		BUILD_BUG_ON(PAGE_SIZE % sizeof(block) != 0);
++		if (ret % PAGE_SIZE == 0) {
++			if (signal_pending(current))
++				break;
++			cond_resched();
+ 		}
+-		crng->state[i] ^= rv;
+ 	}
+ 
+-	return arch_init;
++	memzero_explicit(block, sizeof(block));
++out_zero_chacha:
++	memzero_explicit(chacha_state, sizeof(chacha_state));
++	return ret ? ret : -EFAULT;
+ }
+ 
+-static void __maybe_unused crng_initialize_secondary(struct crng_state *crng)
+-{
+-	chacha_init_consts(crng->state);
+-	_get_random_bytes(&crng->state[4], sizeof(__u32) * 12);
+-	crng_init_try_arch(crng);
+-	crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
+-}
++/*
++ * Batched entropy returns random integers. The quality of the random
++ * number is good as /dev/urandom. In order to ensure that the randomness
++ * provided by this function is okay, the function wait_for_random_bytes()
++ * should be called and return 0 at least once at any point prior.
++ */
+ 
+-static void __init crng_initialize_primary(struct crng_state *crng)
++#define DEFINE_BATCHED_ENTROPY(type)						\
++struct batch_ ##type {								\
++	/*									\
++	 * We make this 1.5x a ChaCha block, so that we get the			\
++	 * remaining 32 bytes from fast key erasure, plus one full		\
++	 * block from the detached ChaCha state. We can increase		\
++	 * the size of this later if needed so long as we keep the		\
++	 * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE.		\
++	 */									\
++	type entropy[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(type))];		\
++	local_lock_t lock;							\
++	unsigned long generation;						\
++	unsigned int position;							\
++};										\
++										\
++static DEFINE_PER_CPU(struct batch_ ##type, batched_entropy_ ##type) = {	\
++	.lock = INIT_LOCAL_LOCK(batched_entropy_ ##type.lock),			\
++	.position = UINT_MAX							\
++};										\
++										\
++type get_random_ ##type(void)							\
++{										\
++	type ret;								\
++	unsigned long flags;							\
++	struct batch_ ##type *batch;						\
++	unsigned long next_gen;							\
++										\
++	warn_unseeded_randomness();						\
++										\
++	if  (!crng_ready()) {							\
++		_get_random_bytes(&ret, sizeof(ret));				\
++		return ret;							\
++	}									\
++										\
++	local_lock_irqsave(&batched_entropy_ ##type.lock, flags);		\
++	batch = raw_cpu_ptr(&batched_entropy_##type);				\
++										\
++	next_gen = READ_ONCE(base_crng.generation);				\
++	if (batch->position >= ARRAY_SIZE(batch->entropy) ||			\
++	    next_gen != batch->generation) {					\
++		_get_random_bytes(batch->entropy, sizeof(batch->entropy));	\
++		batch->position = 0;						\
++		batch->generation = next_gen;					\
++	}									\
++										\
++	ret = batch->entropy[batch->position];					\
++	batch->entropy[batch->position] = 0;					\
++	++batch->position;							\
++	local_unlock_irqrestore(&batched_entropy_ ##type.lock, flags);		\
++	return ret;								\
++}										\
++EXPORT_SYMBOL(get_random_ ##type);
++
++DEFINE_BATCHED_ENTROPY(u64)
++DEFINE_BATCHED_ENTROPY(u32)
++
++#ifdef CONFIG_SMP
++/*
++ * This function is called when the CPU is coming up, with entry
++ * CPUHP_RANDOM_PREPARE, which comes before CPUHP_WORKQUEUE_PREP.
++ */
++int __cold random_prepare_cpu(unsigned int cpu)
+ {
+-	chacha_init_consts(crng->state);
+-	_extract_entropy(&input_pool, &crng->state[4], sizeof(__u32) * 12, 0);
+-	if (crng_init_try_arch_early(crng) && trust_cpu) {
+-		invalidate_batched_entropy();
+-		numa_crng_init();
+-		crng_init = 2;
+-		pr_notice("crng done (trusting CPU's manufacturer)\n");
+-	}
+-	crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
+-}
+-
+-static void crng_finalize_init(struct crng_state *crng)
+-{
+-	if (crng != &primary_crng || crng_init >= 2)
+-		return;
+-	if (!system_wq) {
+-		/* We can't call numa_crng_init until we have workqueues,
+-		 * so mark this for processing later. */
+-		crng_need_final_init = true;
+-		return;
+-	}
+-
+-	invalidate_batched_entropy();
+-	numa_crng_init();
+-	crng_init = 2;
+-	process_random_ready_list();
+-	wake_up_interruptible(&crng_init_wait);
+-	kill_fasync(&fasync, SIGIO, POLL_IN);
+-	pr_notice("crng init done\n");
+-	if (unseeded_warning.missed) {
+-		pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
+-			  unseeded_warning.missed);
+-		unseeded_warning.missed = 0;
+-	}
+-	if (urandom_warning.missed) {
+-		pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
+-			  urandom_warning.missed);
+-		urandom_warning.missed = 0;
+-	}
+-}
+-
+-#ifdef CONFIG_NUMA
+-static void do_numa_crng_init(struct work_struct *work)
+-{
+-	int i;
+-	struct crng_state *crng;
+-	struct crng_state **pool;
+-
+-	pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL|__GFP_NOFAIL);
+-	for_each_online_node(i) {
+-		crng = kmalloc_node(sizeof(struct crng_state),
+-				    GFP_KERNEL | __GFP_NOFAIL, i);
+-		spin_lock_init(&crng->lock);
+-		crng_initialize_secondary(crng);
+-		pool[i] = crng;
+-	}
+-	/* pairs with READ_ONCE() in select_crng() */
+-	if (cmpxchg_release(&crng_node_pool, NULL, pool) != NULL) {
+-		for_each_node(i)
+-			kfree(pool[i]);
+-		kfree(pool);
+-	}
+-}
+-
+-static DECLARE_WORK(numa_crng_init_work, do_numa_crng_init);
+-
+-static void numa_crng_init(void)
+-{
+-	schedule_work(&numa_crng_init_work);
+-}
+-
+-static struct crng_state *select_crng(void)
+-{
+-	struct crng_state **pool;
+-	int nid = numa_node_id();
+-
+-	/* pairs with cmpxchg_release() in do_numa_crng_init() */
+-	pool = READ_ONCE(crng_node_pool);
+-	if (pool && pool[nid])
+-		return pool[nid];
+-
+-	return &primary_crng;
+-}
+-#else
+-static void numa_crng_init(void) {}
+-
+-static struct crng_state *select_crng(void)
+-{
+-	return &primary_crng;
++	/*
++	 * When the cpu comes back online, immediately invalidate both
++	 * the per-cpu crng and all batches, so that we serve fresh
++	 * randomness.
++	 */
++	per_cpu_ptr(&crngs, cpu)->generation = ULONG_MAX;
++	per_cpu_ptr(&batched_entropy_u32, cpu)->position = UINT_MAX;
++	per_cpu_ptr(&batched_entropy_u64, cpu)->position = UINT_MAX;
++	return 0;
+ }
+ #endif
+ 
+ /*
+- * crng_fast_load() can be called by code in the interrupt service
+- * path.  So we can't afford to dilly-dally. Returns the number of
+- * bytes processed from cp.
+- */
+-static size_t crng_fast_load(const char *cp, size_t len)
+-{
+-	unsigned long flags;
+-	char *p;
+-	size_t ret = 0;
+-
+-	if (!spin_trylock_irqsave(&primary_crng.lock, flags))
+-		return 0;
+-	if (crng_init != 0) {
+-		spin_unlock_irqrestore(&primary_crng.lock, flags);
+-		return 0;
+-	}
+-	p = (unsigned char *) &primary_crng.state[4];
+-	while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
+-		p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp;
+-		cp++; crng_init_cnt++; len--; ret++;
+-	}
+-	spin_unlock_irqrestore(&primary_crng.lock, flags);
+-	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
+-		invalidate_batched_entropy();
+-		crng_init = 1;
+-		pr_notice("fast init done\n");
+-	}
+-	return ret;
+-}
+-
+-/*
+- * crng_slow_load() is called by add_device_randomness, which has two
+- * attributes.  (1) We can't trust the buffer passed to it is
+- * guaranteed to be unpredictable (so it might not have any entropy at
+- * all), and (2) it doesn't have the performance constraints of
+- * crng_fast_load().
+- *
+- * So we do something more comprehensive which is guaranteed to touch
+- * all of the primary_crng's state, and which uses a LFSR with a
+- * period of 255 as part of the mixing algorithm.  Finally, we do
+- * *not* advance crng_init_cnt since buffer we may get may be something
+- * like a fixed DMI table (for example), which might very well be
+- * unique to the machine, but is otherwise unvarying.
+- */
+-static int crng_slow_load(const char *cp, size_t len)
+-{
+-	unsigned long		flags;
+-	static unsigned char	lfsr = 1;
+-	unsigned char		tmp;
+-	unsigned		i, max = CHACHA_KEY_SIZE;
+-	const char *		src_buf = cp;
+-	char *			dest_buf = (char *) &primary_crng.state[4];
+-
+-	if (!spin_trylock_irqsave(&primary_crng.lock, flags))
+-		return 0;
+-	if (crng_init != 0) {
+-		spin_unlock_irqrestore(&primary_crng.lock, flags);
+-		return 0;
+-	}
+-	if (len > max)
+-		max = len;
+-
+-	for (i = 0; i < max ; i++) {
+-		tmp = lfsr;
+-		lfsr >>= 1;
+-		if (tmp & 1)
+-			lfsr ^= 0xE1;
+-		tmp = dest_buf[i % CHACHA_KEY_SIZE];
+-		dest_buf[i % CHACHA_KEY_SIZE] ^= src_buf[i % len] ^ lfsr;
+-		lfsr += (tmp << 3) | (tmp >> 5);
+-	}
+-	spin_unlock_irqrestore(&primary_crng.lock, flags);
+-	return 1;
+-}
+-
+-static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
+-{
+-	unsigned long	flags;
+-	int		i, num;
+-	union {
+-		__u8	block[CHACHA_BLOCK_SIZE];
+-		__u32	key[8];
+-	} buf;
+-
+-	if (r) {
+-		num = extract_entropy(r, &buf, 32, 16, 0);
+-		if (num == 0)
+-			return;
+-	} else {
+-		_extract_crng(&primary_crng, buf.block);
+-		_crng_backtrack_protect(&primary_crng, buf.block,
+-					CHACHA_KEY_SIZE);
+-	}
+-	spin_lock_irqsave(&crng->lock, flags);
+-	for (i = 0; i < 8; i++) {
+-		unsigned long	rv;
+-		if (!arch_get_random_seed_long(&rv) &&
+-		    !arch_get_random_long(&rv))
+-			rv = random_get_entropy();
+-		crng->state[i+4] ^= buf.key[i] ^ rv;
+-	}
+-	memzero_explicit(&buf, sizeof(buf));
+-	WRITE_ONCE(crng->init_time, jiffies);
+-	spin_unlock_irqrestore(&crng->lock, flags);
+-	crng_finalize_init(crng);
+-}
+-
+-static void _extract_crng(struct crng_state *crng,
+-			  __u8 out[CHACHA_BLOCK_SIZE])
+-{
+-	unsigned long v, flags, init_time;
+-
+-	if (crng_ready()) {
+-		init_time = READ_ONCE(crng->init_time);
+-		if (time_after(READ_ONCE(crng_global_init_time), init_time) ||
+-		    time_after(jiffies, init_time + CRNG_RESEED_INTERVAL))
+-			crng_reseed(crng, crng == &primary_crng ?
+-				    &input_pool : NULL);
+-	}
+-	spin_lock_irqsave(&crng->lock, flags);
+-	if (arch_get_random_long(&v))
+-		crng->state[14] ^= v;
+-	chacha20_block(&crng->state[0], out);
+-	if (crng->state[12] == 0)
+-		crng->state[13]++;
+-	spin_unlock_irqrestore(&crng->lock, flags);
+-}
+-
+-static void extract_crng(__u8 out[CHACHA_BLOCK_SIZE])
+-{
+-	_extract_crng(select_crng(), out);
+-}
+-
+-/*
+- * Use the leftover bytes from the CRNG block output (if there is
+- * enough) to mutate the CRNG key to provide backtracking protection.
++ * This function will use the architecture-specific hardware random
++ * number generator if it is available. It is not recommended for
++ * use. Use get_random_bytes() instead. It returns the number of
++ * bytes filled in.
+  */
+-static void _crng_backtrack_protect(struct crng_state *crng,
+-				    __u8 tmp[CHACHA_BLOCK_SIZE], int used)
++size_t __must_check get_random_bytes_arch(void *buf, size_t len)
+ {
+-	unsigned long	flags;
+-	__u32		*s, *d;
+-	int		i;
+-
+-	used = round_up(used, sizeof(__u32));
+-	if (used + CHACHA_KEY_SIZE > CHACHA_BLOCK_SIZE) {
+-		extract_crng(tmp);
+-		used = 0;
+-	}
+-	spin_lock_irqsave(&crng->lock, flags);
+-	s = (__u32 *) &tmp[used];
+-	d = &crng->state[4];
+-	for (i=0; i < 8; i++)
+-		*d++ ^= *s++;
+-	spin_unlock_irqrestore(&crng->lock, flags);
+-}
++	size_t left = len;
++	u8 *p = buf;
+ 
+-static void crng_backtrack_protect(__u8 tmp[CHACHA_BLOCK_SIZE], int used)
+-{
+-	_crng_backtrack_protect(select_crng(), tmp, used);
+-}
+-
+-static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
+-{
+-	ssize_t ret = 0, i = CHACHA_BLOCK_SIZE;
+-	__u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4);
+-	int large_request = (nbytes > 256);
+-
+-	while (nbytes) {
+-		if (large_request && need_resched()) {
+-			if (signal_pending(current)) {
+-				if (ret == 0)
+-					ret = -ERESTARTSYS;
+-				break;
+-			}
+-			schedule();
+-		}
++	while (left) {
++		unsigned long v;
++		size_t block_len = min_t(size_t, left, sizeof(unsigned long));
+ 
+-		extract_crng(tmp);
+-		i = min_t(int, nbytes, CHACHA_BLOCK_SIZE);
+-		if (copy_to_user(buf, tmp, i)) {
+-			ret = -EFAULT;
++		if (!arch_get_random_long(&v))
+ 			break;
+-		}
+ 
+-		nbytes -= i;
+-		buf += i;
+-		ret += i;
++		memcpy(p, &v, block_len);
++		p += block_len;
++		left -= block_len;
+ 	}
+-	crng_backtrack_protect(tmp, i);
+-
+-	/* Wipe data just written to memory */
+-	memzero_explicit(tmp, sizeof(tmp));
+ 
+-	return ret;
++	return len - left;
+ }
++EXPORT_SYMBOL(get_random_bytes_arch);
+ 
+ 
+-/*********************************************************************
++/**********************************************************************
+  *
+- * Entropy input management
++ * Entropy accumulation and extraction routines.
+  *
+- *********************************************************************/
++ * Callers may add entropy via:
++ *
++ *     static void mix_pool_bytes(const void *buf, size_t len)
++ *
++ * After which, if added entropy should be credited:
++ *
++ *     static void credit_init_bits(size_t bits)
++ *
++ * Finally, extract entropy via:
++ *
++ *     static void extract_entropy(void *buf, size_t len)
++ *
++ **********************************************************************/
+ 
+-/* There is one of these per entropy source */
+-struct timer_rand_state {
+-	cycles_t last_time;
+-	long last_delta, last_delta2;
++enum {
++	POOL_BITS = BLAKE2S_HASH_SIZE * 8,
++	POOL_READY_BITS = POOL_BITS, /* When crng_init->CRNG_READY */
++	POOL_EARLY_BITS = POOL_READY_BITS / 2 /* When crng_init->CRNG_EARLY */
+ };
+ 
+-#define INIT_TIMER_RAND_STATE { INITIAL_JIFFIES, };
++static struct {
++	struct blake2s_state hash;
++	spinlock_t lock;
++	unsigned int init_bits;
++} input_pool = {
++	.hash.h = { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE),
++		    BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4,
++		    BLAKE2S_IV5, BLAKE2S_IV6, BLAKE2S_IV7 },
++	.hash.outlen = BLAKE2S_HASH_SIZE,
++	.lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
++};
++
++static void _mix_pool_bytes(const void *buf, size_t len)
++{
++	blake2s_update(&input_pool.hash, buf, len);
++}
+ 
+ /*
+- * Add device- or boot-specific data to the input pool to help
+- * initialize it.
+- *
+- * None of this adds any entropy; it is meant to avoid the problem of
+- * the entropy pool having similar initial state across largely
+- * identical devices.
++ * This function adds bytes into the input pool. It does not
++ * update the initialization bit counter; the caller should call
++ * credit_init_bits if this is appropriate.
+  */
+-void add_device_randomness(const void *buf, unsigned int size)
++static void mix_pool_bytes(const void *buf, size_t len)
+ {
+-	unsigned long time = random_get_entropy() ^ jiffies;
+ 	unsigned long flags;
+ 
+-	if (!crng_ready() && size)
+-		crng_slow_load(buf, size);
+-
+-	trace_add_device_randomness(size, _RET_IP_);
+ 	spin_lock_irqsave(&input_pool.lock, flags);
+-	_mix_pool_bytes(&input_pool, buf, size);
+-	_mix_pool_bytes(&input_pool, &time, sizeof(time));
++	_mix_pool_bytes(buf, len);
+ 	spin_unlock_irqrestore(&input_pool.lock, flags);
+ }
+-EXPORT_SYMBOL(add_device_randomness);
+-
+-static struct timer_rand_state input_timer_state = INIT_TIMER_RAND_STATE;
+ 
+ /*
+- * This function adds entropy to the entropy "pool" by using timing
+- * delays.  It uses the timer_rand_state structure to make an estimate
+- * of how many bits of entropy this call has added to the pool.
+- *
+- * The number "num" is also added to the pool - it should somehow describe
+- * the type of event which just happened.  This is currently 0-255 for
+- * keyboard scan codes, and 256 upwards for interrupts.
+- *
++ * This is an HKDF-like construction for using the hashed collected entropy
++ * as a PRF key, that's then expanded block-by-block.
+  */
+-static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
++static void extract_entropy(void *buf, size_t len)
+ {
+-	struct entropy_store	*r;
++	unsigned long flags;
++	u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE];
+ 	struct {
+-		long jiffies;
+-		unsigned cycles;
+-		unsigned num;
+-	} sample;
+-	long delta, delta2, delta3;
+-
+-	sample.jiffies = jiffies;
+-	sample.cycles = random_get_entropy();
+-	sample.num = num;
+-	r = &input_pool;
+-	mix_pool_bytes(r, &sample, sizeof(sample));
+-
+-	/*
+-	 * Calculate number of bits of randomness we probably added.
+-	 * We take into account the first, second and third-order deltas
+-	 * in order to make our estimate.
+-	 */
+-	delta = sample.jiffies - READ_ONCE(state->last_time);
+-	WRITE_ONCE(state->last_time, sample.jiffies);
+-
+-	delta2 = delta - READ_ONCE(state->last_delta);
+-	WRITE_ONCE(state->last_delta, delta);
+-
+-	delta3 = delta2 - READ_ONCE(state->last_delta2);
+-	WRITE_ONCE(state->last_delta2, delta2);
+-
+-	if (delta < 0)
+-		delta = -delta;
+-	if (delta2 < 0)
+-		delta2 = -delta2;
+-	if (delta3 < 0)
+-		delta3 = -delta3;
+-	if (delta > delta2)
+-		delta = delta2;
+-	if (delta > delta3)
+-		delta = delta3;
+-
+-	/*
+-	 * delta is now minimum absolute delta.
+-	 * Round down by 1 bit on general principles,
+-	 * and limit entropy estimate to 12 bits.
+-	 */
+-	credit_entropy_bits(r, min_t(int, fls(delta>>1), 11));
+-}
+-
+-void add_input_randomness(unsigned int type, unsigned int code,
+-				 unsigned int value)
+-{
+-	static unsigned char last_value;
+-
+-	/* ignore autorepeat and the like */
+-	if (value == last_value)
+-		return;
++		unsigned long rdseed[32 / sizeof(long)];
++		size_t counter;
++	} block;
++	size_t i;
++
++	for (i = 0; i < ARRAY_SIZE(block.rdseed); ++i) {
++		if (!arch_get_random_seed_long(&block.rdseed[i]) &&
++		    !arch_get_random_long(&block.rdseed[i]))
++			block.rdseed[i] = random_get_entropy();
++	}
+ 
+-	last_value = value;
+-	add_timer_randomness(&input_timer_state,
+-			     (type << 4) ^ code ^ (code >> 4) ^ value);
+-	trace_add_input_randomness(ENTROPY_BITS(&input_pool));
+-}
+-EXPORT_SYMBOL_GPL(add_input_randomness);
++	spin_lock_irqsave(&input_pool.lock, flags);
+ 
+-static DEFINE_PER_CPU(struct fast_pool, irq_randomness);
++	/* seed = HASHPRF(last_key, entropy_input) */
++	blake2s_final(&input_pool.hash, seed);
+ 
+-#ifdef ADD_INTERRUPT_BENCH
+-static unsigned long avg_cycles, avg_deviation;
++	/* next_key = HASHPRF(seed, RDSEED || 0) */
++	block.counter = 0;
++	blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed));
++	blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key));
+ 
+-#define AVG_SHIFT 8     /* Exponential average factor k=1/256 */
+-#define FIXED_1_2 (1 << (AVG_SHIFT-1))
++	spin_unlock_irqrestore(&input_pool.lock, flags);
++	memzero_explicit(next_key, sizeof(next_key));
++
++	while (len) {
++		i = min_t(size_t, len, BLAKE2S_HASH_SIZE);
++		/* output = HASHPRF(seed, RDSEED || ++counter) */
++		++block.counter;
++		blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed));
++		len -= i;
++		buf += i;
++	}
+ 
+-static void add_interrupt_bench(cycles_t start)
+-{
+-        long delta = random_get_entropy() - start;
+-
+-        /* Use a weighted moving average */
+-        delta = delta - ((avg_cycles + FIXED_1_2) >> AVG_SHIFT);
+-        avg_cycles += delta;
+-        /* And average deviation */
+-        delta = abs(delta) - ((avg_deviation + FIXED_1_2) >> AVG_SHIFT);
+-        avg_deviation += delta;
++	memzero_explicit(seed, sizeof(seed));
++	memzero_explicit(&block, sizeof(block));
+ }
+-#else
+-#define add_interrupt_bench(x)
+-#endif
+-
+-static __u32 get_reg(struct fast_pool *f, struct pt_regs *regs)
+-{
+-	__u32 *ptr = (__u32 *) regs;
+-	unsigned int idx;
+ 
+-	if (regs == NULL)
+-		return 0;
+-	idx = READ_ONCE(f->reg_idx);
+-	if (idx >= sizeof(struct pt_regs) / sizeof(__u32))
+-		idx = 0;
+-	ptr += idx++;
+-	WRITE_ONCE(f->reg_idx, idx);
+-	return *ptr;
+-}
++#define credit_init_bits(bits) if (!crng_ready()) _credit_init_bits(bits)
+ 
+-void add_interrupt_randomness(int irq, int irq_flags)
++static void __cold _credit_init_bits(size_t bits)
+ {
+-	struct entropy_store	*r;
+-	struct fast_pool	*fast_pool = this_cpu_ptr(&irq_randomness);
+-	struct pt_regs		*regs = get_irq_regs();
+-	unsigned long		now = jiffies;
+-	cycles_t		cycles = random_get_entropy();
+-	__u32			c_high, j_high;
+-	__u64			ip;
+-	unsigned long		seed;
+-	int			credit = 0;
+-
+-	if (cycles == 0)
+-		cycles = get_reg(fast_pool, regs);
+-	c_high = (sizeof(cycles) > 4) ? cycles >> 32 : 0;
+-	j_high = (sizeof(now) > 4) ? now >> 32 : 0;
+-	fast_pool->pool[0] ^= cycles ^ j_high ^ irq;
+-	fast_pool->pool[1] ^= now ^ c_high;
+-	ip = regs ? instruction_pointer(regs) : _RET_IP_;
+-	fast_pool->pool[2] ^= ip;
+-	fast_pool->pool[3] ^= (sizeof(ip) > 4) ? ip >> 32 :
+-		get_reg(fast_pool, regs);
+-
+-	fast_mix(fast_pool);
+-	add_interrupt_bench(cycles);
+-
+-	if (unlikely(crng_init == 0)) {
+-		if ((fast_pool->count >= 64) &&
+-		    crng_fast_load((char *) fast_pool->pool,
+-				   sizeof(fast_pool->pool)) > 0) {
+-			fast_pool->count = 0;
+-			fast_pool->last = now;
+-		}
+-		return;
+-	}
+-
+-	if ((fast_pool->count < 64) &&
+-	    !time_after(now, fast_pool->last + HZ))
+-		return;
++	static struct execute_work set_ready;
++	unsigned int new, orig, add;
++	unsigned long flags;
+ 
+-	r = &input_pool;
+-	if (!spin_trylock(&r->lock))
++	if (!bits)
+ 		return;
+ 
+-	fast_pool->last = now;
+-	__mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool));
++	add = min_t(size_t, bits, POOL_BITS);
+ 
+-	/*
+-	 * If we have architectural seed generator, produce a seed and
+-	 * add it to the pool.  For the sake of paranoia don't let the
+-	 * architectural seed generator dominate the input from the
+-	 * interrupt noise.
+-	 */
+-	if (arch_get_random_seed_long(&seed)) {
+-		__mix_pool_bytes(r, &seed, sizeof(seed));
+-		credit = 1;
++	do {
++		orig = READ_ONCE(input_pool.init_bits);
++		new = min_t(unsigned int, POOL_BITS, orig + add);
++	} while (cmpxchg(&input_pool.init_bits, orig, new) != orig);
++
++	if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) {
++		crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */
++		execute_in_process_context(crng_set_ready, &set_ready);
++		process_random_ready_list();
++		wake_up_interruptible(&crng_init_wait);
++		kill_fasync(&fasync, SIGIO, POLL_IN);
++		pr_notice("crng init done\n");
++		if (urandom_warning.missed)
++			pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
++				  urandom_warning.missed);
++	} else if (orig < POOL_EARLY_BITS && new >= POOL_EARLY_BITS) {
++		spin_lock_irqsave(&base_crng.lock, flags);
++		/* Check if crng_init is CRNG_EMPTY, to avoid race with crng_reseed(). */
++		if (crng_init == CRNG_EMPTY) {
++			extract_entropy(base_crng.key, sizeof(base_crng.key));
++			crng_init = CRNG_EARLY;
++		}
++		spin_unlock_irqrestore(&base_crng.lock, flags);
+ 	}
+-	spin_unlock(&r->lock);
+-
+-	fast_pool->count = 0;
+-
+-	/* award one bit for the contents of the fast pool */
+-	credit_entropy_bits(r, credit + 1);
+ }
+-EXPORT_SYMBOL_GPL(add_interrupt_randomness);
+ 
+-#ifdef CONFIG_BLOCK
+-void add_disk_randomness(struct gendisk *disk)
+-{
+-	if (!disk || !disk->random)
+-		return;
+-	/* first major is 1, so we get >= 0x200 here */
+-	add_timer_randomness(disk->random, 0x100 + disk_devt(disk));
+-	trace_add_disk_randomness(disk_devt(disk), ENTROPY_BITS(&input_pool));
+-}
+-EXPORT_SYMBOL_GPL(add_disk_randomness);
+-#endif
+ 
+-/*********************************************************************
++/**********************************************************************
+  *
+- * Entropy extraction routines
++ * Entropy collection routines.
+  *
+- *********************************************************************/
++ * The following exported functions are used for pushing entropy into
++ * the above entropy accumulation routines:
++ *
++ *	void add_device_randomness(const void *buf, size_t len);
++ *	void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy);
++ *	void add_bootloader_randomness(const void *buf, size_t len);
++ *	void add_interrupt_randomness(int irq);
++ *	void add_input_randomness(unsigned int type, unsigned int code, unsigned int value);
++ *	void add_disk_randomness(struct gendisk *disk);
++ *
++ * add_device_randomness() adds data to the input pool that
++ * is likely to differ between two devices (or possibly even per boot).
++ * This would be things like MAC addresses or serial numbers, or the
++ * read-out of the RTC. This does *not* credit any actual entropy to
++ * the pool, but it initializes the pool to different values for devices
++ * that might otherwise be identical and have very little entropy
++ * available to them (particularly common in the embedded world).
++ *
++ * add_hwgenerator_randomness() is for true hardware RNGs, and will credit
++ * entropy as specified by the caller. If the entropy pool is full it will
++ * block until more entropy is needed.
++ *
++ * add_bootloader_randomness() is called by bootloader drivers, such as EFI
++ * and device tree, and credits its input depending on whether or not the
++ * configuration option CONFIG_RANDOM_TRUST_BOOTLOADER is set.
++ *
++ * add_interrupt_randomness() uses the interrupt timing as random
++ * inputs to the entropy pool. Using the cycle counters and the irq source
++ * as inputs, it feeds the input pool roughly once a second or after 64
++ * interrupts, crediting 1 bit of entropy for whichever comes first.
++ *
++ * add_input_randomness() uses the input layer interrupt timing, as well
++ * as the event type information from the hardware.
++ *
++ * add_disk_randomness() uses what amounts to the seek time of block
++ * layer request events, on a per-disk_devt basis, as input to the
++ * entropy pool. Note that high-speed solid state drives with very low
++ * seek times do not make for good sources of entropy, as their seek
++ * times are usually fairly consistent.
++ *
++ * The last two routines try to estimate how many bits of entropy
++ * to credit. They do this by keeping track of the first and second
++ * order deltas of the event timings.
++ *
++ **********************************************************************/
+ 
+-/*
+- * This function decides how many bytes to actually take from the
+- * given pool, and also debits the entropy count accordingly.
+- */
+-static size_t account(struct entropy_store *r, size_t nbytes, int min,
+-		      int reserved)
++static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
++static bool trust_bootloader __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER);
++static int __init parse_trust_cpu(char *arg)
+ {
+-	int entropy_count, orig, have_bytes;
+-	size_t ibytes, nfrac;
+-
+-	BUG_ON(r->entropy_count > r->poolinfo->poolfracbits);
+-
+-	/* Can we pull enough? */
+-retry:
+-	entropy_count = orig = READ_ONCE(r->entropy_count);
+-	ibytes = nbytes;
+-	/* never pull more than available */
+-	have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);
+-
+-	if ((have_bytes -= reserved) < 0)
+-		have_bytes = 0;
+-	ibytes = min_t(size_t, ibytes, have_bytes);
+-	if (ibytes < min)
+-		ibytes = 0;
+-
+-	if (WARN_ON(entropy_count < 0)) {
+-		pr_warn("negative entropy count: pool %s count %d\n",
+-			r->name, entropy_count);
+-		entropy_count = 0;
+-	}
+-	nfrac = ibytes << (ENTROPY_SHIFT + 3);
+-	if ((size_t) entropy_count > nfrac)
+-		entropy_count -= nfrac;
+-	else
+-		entropy_count = 0;
+-
+-	if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
+-		goto retry;
+-
+-	trace_debit_entropy(r->name, 8 * ibytes);
+-	if (ibytes && ENTROPY_BITS(r) < random_write_wakeup_bits) {
+-		wake_up_interruptible(&random_write_wait);
+-		kill_fasync(&fasync, SIGIO, POLL_OUT);
+-	}
+-
+-	return ibytes;
++	return kstrtobool(arg, &trust_cpu);
+ }
+-
+-/*
+- * This function does the actual extraction for extract_entropy and
+- * extract_entropy_user.
+- *
+- * Note: we assume that .poolwords is a multiple of 16 words.
+- */
+-static void extract_buf(struct entropy_store *r, __u8 *out)
++static int __init parse_trust_bootloader(char *arg)
+ {
+-	int i;
+-	union {
+-		__u32 w[5];
+-		unsigned long l[LONGS(20)];
+-	} hash;
+-	__u32 workspace[SHA1_WORKSPACE_WORDS];
+-	unsigned long flags;
+-
+-	/*
+-	 * If we have an architectural hardware random number
+-	 * generator, use it for SHA's initial vector
+-	 */
+-	sha1_init(hash.w);
+-	for (i = 0; i < LONGS(20); i++) {
+-		unsigned long v;
+-		if (!arch_get_random_long(&v))
+-			break;
+-		hash.l[i] = v;
+-	}
+-
+-	/* Generate a hash across the pool, 16 words (512 bits) at a time */
+-	spin_lock_irqsave(&r->lock, flags);
+-	for (i = 0; i < r->poolinfo->poolwords; i += 16)
+-		sha1_transform(hash.w, (__u8 *)(r->pool + i), workspace);
+-
+-	/*
+-	 * We mix the hash back into the pool to prevent backtracking
+-	 * attacks (where the attacker knows the state of the pool
+-	 * plus the current outputs, and attempts to find previous
+-	 * ouputs), unless the hash function can be inverted. By
+-	 * mixing at least a SHA1 worth of hash data back, we make
+-	 * brute-forcing the feedback as hard as brute-forcing the
+-	 * hash.
+-	 */
+-	__mix_pool_bytes(r, hash.w, sizeof(hash.w));
+-	spin_unlock_irqrestore(&r->lock, flags);
+-
+-	memzero_explicit(workspace, sizeof(workspace));
+-
+-	/*
+-	 * In case the hash function has some recognizable output
+-	 * pattern, we fold it in half. Thus, we always feed back
+-	 * twice as much data as we output.
+-	 */
+-	hash.w[0] ^= hash.w[3];
+-	hash.w[1] ^= hash.w[4];
+-	hash.w[2] ^= rol32(hash.w[2], 16);
+-
+-	memcpy(out, &hash, EXTRACT_SIZE);
+-	memzero_explicit(&hash, sizeof(hash));
++	return kstrtobool(arg, &trust_bootloader);
+ }
++early_param("random.trust_cpu", parse_trust_cpu);
++early_param("random.trust_bootloader", parse_trust_bootloader);
+ 
+-static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
+-				size_t nbytes, int fips)
++/*
++ * The first collection of entropy occurs at system boot while interrupts
++ * are still turned off. Here we push in latent entropy, RDSEED, a timestamp,
++ * utsname(), and the command line. Depending on the above configuration knob,
++ * RDSEED may be considered sufficient for initialization. Note that much
++ * earlier setup may already have pushed entropy into the input pool by the
++ * time we get here.
++ */
++int __init random_init(const char *command_line)
+ {
+-	ssize_t ret = 0, i;
+-	__u8 tmp[EXTRACT_SIZE];
+-	unsigned long flags;
++	ktime_t now = ktime_get_real();
++	unsigned int i, arch_bytes;
++	unsigned long entropy;
+ 
+-	while (nbytes) {
+-		extract_buf(r, tmp);
++#if defined(LATENT_ENTROPY_PLUGIN)
++	static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy;
++	_mix_pool_bytes(compiletime_seed, sizeof(compiletime_seed));
++#endif
+ 
+-		if (fips) {
+-			spin_lock_irqsave(&r->lock, flags);
+-			if (!memcmp(tmp, r->last_data, EXTRACT_SIZE))
+-				panic("Hardware RNG duplicated output!\n");
+-			memcpy(r->last_data, tmp, EXTRACT_SIZE);
+-			spin_unlock_irqrestore(&r->lock, flags);
++	for (i = 0, arch_bytes = BLAKE2S_BLOCK_SIZE;
++	     i < BLAKE2S_BLOCK_SIZE; i += sizeof(entropy)) {
++		if (!arch_get_random_seed_long_early(&entropy) &&
++		    !arch_get_random_long_early(&entropy)) {
++			entropy = random_get_entropy();
++			arch_bytes -= sizeof(entropy);
+ 		}
+-		i = min_t(int, nbytes, EXTRACT_SIZE);
+-		memcpy(buf, tmp, i);
+-		nbytes -= i;
+-		buf += i;
+-		ret += i;
++		_mix_pool_bytes(&entropy, sizeof(entropy));
+ 	}
++	_mix_pool_bytes(&now, sizeof(now));
++	_mix_pool_bytes(utsname(), sizeof(*(utsname())));
++	_mix_pool_bytes(command_line, strlen(command_line));
++	add_latent_entropy();
+ 
+-	/* Wipe data just returned from memory */
+-	memzero_explicit(tmp, sizeof(tmp));
++	if (crng_ready())
++		crng_reseed();
++	else if (trust_cpu)
++		credit_init_bits(arch_bytes * 8);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ /*
+- * This function extracts randomness from the "entropy pool", and
+- * returns it in a buffer.
++ * Add device- or boot-specific data to the input pool to help
++ * initialize it.
+  *
+- * The min parameter specifies the minimum amount we can pull before
+- * failing to avoid races that defeat catastrophic reseeding while the
+- * reserved parameter indicates how much entropy we must leave in the
+- * pool after each pull to avoid starving other readers.
++ * None of this adds any entropy; it is meant to avoid the problem of
++ * the entropy pool having similar initial state across largely
++ * identical devices.
+  */
+-static ssize_t extract_entropy(struct entropy_store *r, void *buf,
+-				 size_t nbytes, int min, int reserved)
++void add_device_randomness(const void *buf, size_t len)
+ {
+-	__u8 tmp[EXTRACT_SIZE];
++	unsigned long entropy = random_get_entropy();
+ 	unsigned long flags;
+ 
+-	/* if last_data isn't primed, we need EXTRACT_SIZE extra bytes */
+-	if (fips_enabled) {
+-		spin_lock_irqsave(&r->lock, flags);
+-		if (!r->last_data_init) {
+-			r->last_data_init = 1;
+-			spin_unlock_irqrestore(&r->lock, flags);
+-			trace_extract_entropy(r->name, EXTRACT_SIZE,
+-					      ENTROPY_BITS(r), _RET_IP_);
+-			extract_buf(r, tmp);
+-			spin_lock_irqsave(&r->lock, flags);
+-			memcpy(r->last_data, tmp, EXTRACT_SIZE);
+-		}
+-		spin_unlock_irqrestore(&r->lock, flags);
+-	}
+-
+-	trace_extract_entropy(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
+-	nbytes = account(r, nbytes, min, reserved);
+-
+-	return _extract_entropy(r, buf, nbytes, fips_enabled);
++	spin_lock_irqsave(&input_pool.lock, flags);
++	_mix_pool_bytes(&entropy, sizeof(entropy));
++	_mix_pool_bytes(buf, len);
++	spin_unlock_irqrestore(&input_pool.lock, flags);
+ }
++EXPORT_SYMBOL(add_device_randomness);
+ 
+-#define warn_unseeded_randomness(previous) \
+-	_warn_unseeded_randomness(__func__, (void *) _RET_IP_, (previous))
+-
+-static void _warn_unseeded_randomness(const char *func_name, void *caller,
+-				      void **previous)
++/*
++ * Interface for in-kernel drivers of true hardware RNGs.
++ * Those devices may produce endless random bits and will be throttled
++ * when our pool is full.
++ */
++void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy)
+ {
+-#ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM
+-	const bool print_once = false;
+-#else
+-	static bool print_once __read_mostly;
+-#endif
++	mix_pool_bytes(buf, len);
++	credit_init_bits(entropy);
+ 
+-	if (print_once ||
+-	    crng_ready() ||
+-	    (previous && (caller == READ_ONCE(*previous))))
+-		return;
+-	WRITE_ONCE(*previous, caller);
+-#ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM
+-	print_once = true;
+-#endif
+-	if (__ratelimit(&unseeded_warning))
+-		printk_deferred(KERN_NOTICE "random: %s called from %pS "
+-				"with crng_init=%d\n", func_name, caller,
+-				crng_init);
++	/*
++	 * Throttle writing to once every CRNG_RESEED_INTERVAL, unless
++	 * we're not yet initialized.
++	 */
++	if (!kthread_should_stop() && crng_ready())
++		schedule_timeout_interruptible(CRNG_RESEED_INTERVAL);
+ }
++EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);
+ 
+ /*
+- * This function is the exported kernel interface.  It returns some
+- * number of good random numbers, suitable for key generation, seeding
+- * TCP sequence numbers, etc.  It does not rely on the hardware random
+- * number generator.  For random bytes direct from the hardware RNG
+- * (when available), use get_random_bytes_arch(). In order to ensure
+- * that the randomness provided by this function is okay, the function
+- * wait_for_random_bytes() should be called and return 0 at least once
+- * at any point prior.
++ * Handle random seed passed by bootloader, and credit it if
++ * CONFIG_RANDOM_TRUST_BOOTLOADER is set.
+  */
+-static void _get_random_bytes(void *buf, int nbytes)
++void __cold add_bootloader_randomness(const void *buf, size_t len)
+ {
+-	__u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4);
+-
+-	trace_get_random_bytes(nbytes, _RET_IP_);
+-
+-	while (nbytes >= CHACHA_BLOCK_SIZE) {
+-		extract_crng(buf);
+-		buf += CHACHA_BLOCK_SIZE;
+-		nbytes -= CHACHA_BLOCK_SIZE;
+-	}
+-
+-	if (nbytes > 0) {
+-		extract_crng(tmp);
+-		memcpy(buf, tmp, nbytes);
+-		crng_backtrack_protect(tmp, nbytes);
+-	} else
+-		crng_backtrack_protect(tmp, CHACHA_BLOCK_SIZE);
+-	memzero_explicit(tmp, sizeof(tmp));
++	mix_pool_bytes(buf, len);
++	if (trust_bootloader)
++		credit_init_bits(len * 8);
+ }
++EXPORT_SYMBOL_GPL(add_bootloader_randomness);
+ 
+-void get_random_bytes(void *buf, int nbytes)
+-{
+-	static void *previous;
+-
+-	warn_unseeded_randomness(&previous);
+-	_get_random_bytes(buf, nbytes);
+-}
+-EXPORT_SYMBOL(get_random_bytes);
++struct fast_pool {
++	struct work_struct mix;
++	unsigned long pool[4];
++	unsigned long last;
++	unsigned int count;
++};
+ 
++static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
++#ifdef CONFIG_64BIT
++#define FASTMIX_PERM SIPHASH_PERMUTATION
++	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
++#else
++#define FASTMIX_PERM HSIPHASH_PERMUTATION
++	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
++#endif
++};
+ 
+ /*
+- * Each time the timer fires, we expect that we got an unpredictable
+- * jump in the cycle counter. Even if the timer is running on another
+- * CPU, the timer activity will be touching the stack of the CPU that is
+- * generating entropy..
+- *
+- * Note that we don't re-arm the timer in the timer itself - we are
+- * happy to be scheduled away, since that just makes the load more
+- * complex, but we do not want the timer to keep ticking unless the
+- * entropy loop is running.
+- *
+- * So the re-arming always happens in the entropy loop itself.
++ * This is [Half]SipHash-1-x, starting from an empty key. Because
++ * the key is fixed, it assumes that its inputs are non-malicious,
++ * and therefore this has no security on its own. s represents the
++ * four-word SipHash state, while v represents a two-word input.
+  */
+-static void entropy_timer(struct timer_list *t)
++static void fast_mix(unsigned long s[4], unsigned long v1, unsigned long v2)
+ {
+-	credit_entropy_bits(&input_pool, 1);
++	s[3] ^= v1;
++	FASTMIX_PERM(s[0], s[1], s[2], s[3]);
++	s[0] ^= v1;
++	s[3] ^= v2;
++	FASTMIX_PERM(s[0], s[1], s[2], s[3]);
++	s[0] ^= v2;
+ }
+ 
++#ifdef CONFIG_SMP
+ /*
+- * If we have an actual cycle counter, see if we can
+- * generate enough entropy with timing noise
++ * This function is called when the CPU has just come online, with
++ * entry CPUHP_AP_RANDOM_ONLINE, just after CPUHP_AP_WORKQUEUE_ONLINE.
+  */
+-static void try_to_generate_entropy(void)
++int __cold random_online_cpu(unsigned int cpu)
+ {
+-	struct {
+-		unsigned long now;
+-		struct timer_list timer;
+-	} stack;
++	/*
++	 * During CPU shutdown and before CPU onlining, add_interrupt_
++	 * randomness() may schedule mix_interrupt_randomness(), and
++	 * set the MIX_INFLIGHT flag. However, because the worker can
++	 * be scheduled on a different CPU during this period, that
++	 * flag will never be cleared. For that reason, we zero out
++	 * the flag here, which runs just after workqueues are onlined
++	 * for the CPU again. This also has the effect of setting the
++	 * irq randomness count to zero so that new accumulated irqs
++	 * are fresh.
++	 */
++	per_cpu_ptr(&irq_randomness, cpu)->count = 0;
++	return 0;
++}
++#endif
+ 
+-	stack.now = random_get_entropy();
++static void mix_interrupt_randomness(struct work_struct *work)
++{
++	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
++	/*
++	 * The size of the copied stack pool is explicitly 2 longs so that we
++	 * only ever ingest half of the siphash output each time, retaining
++	 * the other half as the next "key" that carries over. The entropy is
++	 * supposed to be sufficiently dispersed between bits so on average
++	 * we don't wind up "losing" some.
++	 */
++	unsigned long pool[2];
++	unsigned int count;
+ 
+-	/* Slow counter - or none. Don't even bother */
+-	if (stack.now == random_get_entropy())
++	/* Check to see if we're running on the wrong CPU due to hotplug. */
++	local_irq_disable();
++	if (fast_pool != this_cpu_ptr(&irq_randomness)) {
++		local_irq_enable();
+ 		return;
+-
+-	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
+-	while (!crng_ready()) {
+-		if (!timer_pending(&stack.timer))
+-			mod_timer(&stack.timer, jiffies+1);
+-		mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now));
+-		schedule();
+-		stack.now = random_get_entropy();
+ 	}
+ 
+-	del_timer_sync(&stack.timer);
+-	destroy_timer_on_stack(&stack.timer);
+-	mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now));
+-}
+-
+-/*
+- * Wait for the urandom pool to be seeded and thus guaranteed to supply
+- * cryptographically secure random numbers. This applies to: the /dev/urandom
+- * device, the get_random_bytes function, and the get_random_{u32,u64,int,long}
+- * family of functions. Using any of these functions without first calling
+- * this function forfeits the guarantee of security.
+- *
+- * Returns: 0 if the urandom pool has been seeded.
+- *          -ERESTARTSYS if the function was interrupted by a signal.
+- */
+-int wait_for_random_bytes(void)
+-{
+-	if (likely(crng_ready()))
+-		return 0;
+-
+-	do {
+-		int ret;
+-		ret = wait_event_interruptible_timeout(crng_init_wait, crng_ready(), HZ);
+-		if (ret)
+-			return ret > 0 ? 0 : ret;
++	/*
++	 * Copy the pool to the stack so that the mixer always has a
++	 * consistent view, before we reenable irqs again.
++	 */
++	memcpy(pool, fast_pool->pool, sizeof(pool));
++	count = fast_pool->count;
++	fast_pool->count = 0;
++	fast_pool->last = jiffies;
++	local_irq_enable();
+ 
+-		try_to_generate_entropy();
+-	} while (!crng_ready());
++	mix_pool_bytes(pool, sizeof(pool));
++	credit_init_bits(max(1u, (count & U16_MAX) / 64));
+ 
+-	return 0;
++	memzero_explicit(pool, sizeof(pool));
+ }
+-EXPORT_SYMBOL(wait_for_random_bytes);
+ 
+-/*
+- * Returns whether or not the urandom pool has been seeded and thus guaranteed
+- * to supply cryptographically secure random numbers. This applies to: the
+- * /dev/urandom device, the get_random_bytes function, and the get_random_{u32,
+- * ,u64,int,long} family of functions.
+- *
+- * Returns: true if the urandom pool has been seeded.
+- *          false if the urandom pool has not been seeded.
+- */
+-bool rng_is_initialized(void)
+-{
+-	return crng_ready();
+-}
+-EXPORT_SYMBOL(rng_is_initialized);
+-
+-/*
+- * Add a callback function that will be invoked when the nonblocking
+- * pool is initialised.
+- *
+- * returns: 0 if callback is successfully added
+- *	    -EALREADY if pool is already initialised (callback not called)
+- *	    -ENOENT if module for callback is not alive
+- */
+-int add_random_ready_callback(struct random_ready_callback *rdy)
++void add_interrupt_randomness(int irq)
+ {
+-	struct module *owner;
+-	unsigned long flags;
+-	int err = -EALREADY;
++	enum { MIX_INFLIGHT = 1U << 31 };
++	unsigned long entropy = random_get_entropy();
++	struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
++	struct pt_regs *regs = get_irq_regs();
++	unsigned int new_count;
+ 
+-	if (crng_ready())
+-		return err;
++	fast_mix(fast_pool->pool, entropy,
++		 (regs ? instruction_pointer(regs) : _RET_IP_) ^ swab(irq));
++	new_count = ++fast_pool->count;
+ 
+-	owner = rdy->owner;
+-	if (!try_module_get(owner))
+-		return -ENOENT;
+-
+-	spin_lock_irqsave(&random_ready_list_lock, flags);
+-	if (crng_ready())
+-		goto out;
+-
+-	owner = NULL;
+-
+-	list_add(&rdy->list, &random_ready_list);
+-	err = 0;
+-
+-out:
+-	spin_unlock_irqrestore(&random_ready_list_lock, flags);
++	if (new_count & MIX_INFLIGHT)
++		return;
+ 
+-	module_put(owner);
++	if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ))
++		return;
+ 
+-	return err;
++	if (unlikely(!fast_pool->mix.func))
++		INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
++	fast_pool->count |= MIX_INFLIGHT;
++	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
+ }
+-EXPORT_SYMBOL(add_random_ready_callback);
++EXPORT_SYMBOL_GPL(add_interrupt_randomness);
++
++/* There is one of these per entropy source */
++struct timer_rand_state {
++	unsigned long last_time;
++	long last_delta, last_delta2;
++};
+ 
+ /*
+- * Delete a previously registered readiness callback function.
++ * This function adds entropy to the entropy "pool" by using timing
++ * delays. It uses the timer_rand_state structure to make an estimate
++ * of how many bits of entropy this call has added to the pool. The
++ * value "num" is also added to the pool; it should somehow describe
++ * the type of event that just happened.
+  */
+-void del_random_ready_callback(struct random_ready_callback *rdy)
++static void add_timer_randomness(struct timer_rand_state *state, unsigned int num)
+ {
+-	unsigned long flags;
+-	struct module *owner = NULL;
++	unsigned long entropy = random_get_entropy(), now = jiffies, flags;
++	long delta, delta2, delta3;
++	unsigned int bits;
+ 
+-	spin_lock_irqsave(&random_ready_list_lock, flags);
+-	if (!list_empty(&rdy->list)) {
+-		list_del_init(&rdy->list);
+-		owner = rdy->owner;
++	/*
++	 * If we're in a hard IRQ, add_interrupt_randomness() will be called
++	 * sometime after, so mix into the fast pool.
++	 */
++	if (in_irq()) {
++		fast_mix(this_cpu_ptr(&irq_randomness)->pool, entropy, num);
++	} else {
++		spin_lock_irqsave(&input_pool.lock, flags);
++		_mix_pool_bytes(&entropy, sizeof(entropy));
++		_mix_pool_bytes(&num, sizeof(num));
++		spin_unlock_irqrestore(&input_pool.lock, flags);
+ 	}
+-	spin_unlock_irqrestore(&random_ready_list_lock, flags);
+ 
+-	module_put(owner);
+-}
+-EXPORT_SYMBOL(del_random_ready_callback);
++	if (crng_ready())
++		return;
+ 
+-/*
+- * This function will use the architecture-specific hardware random
+- * number generator if it is available.  The arch-specific hw RNG will
+- * almost certainly be faster than what we can do in software, but it
+- * is impossible to verify that it is implemented securely (as
+- * opposed, to, say, the AES encryption of a sequence number using a
+- * key known by the NSA).  So it's useful if we need the speed, but
+- * only if we're willing to trust the hardware manufacturer not to
+- * have put in a back door.
+- *
+- * Return number of bytes filled in.
+- */
+-int __must_check get_random_bytes_arch(void *buf, int nbytes)
+-{
+-	int left = nbytes;
+-	char *p = buf;
++	/*
++	 * Calculate number of bits of randomness we probably added.
++	 * We take into account the first, second and third-order deltas
++	 * in order to make our estimate.
++	 */
++	delta = now - READ_ONCE(state->last_time);
++	WRITE_ONCE(state->last_time, now);
+ 
+-	trace_get_random_bytes_arch(left, _RET_IP_);
+-	while (left) {
+-		unsigned long v;
+-		int chunk = min_t(int, left, sizeof(unsigned long));
++	delta2 = delta - READ_ONCE(state->last_delta);
++	WRITE_ONCE(state->last_delta, delta);
+ 
+-		if (!arch_get_random_long(&v))
+-			break;
++	delta3 = delta2 - READ_ONCE(state->last_delta2);
++	WRITE_ONCE(state->last_delta2, delta2);
+ 
+-		memcpy(p, &v, chunk);
+-		p += chunk;
+-		left -= chunk;
+-	}
++	if (delta < 0)
++		delta = -delta;
++	if (delta2 < 0)
++		delta2 = -delta2;
++	if (delta3 < 0)
++		delta3 = -delta3;
++	if (delta > delta2)
++		delta = delta2;
++	if (delta > delta3)
++		delta = delta3;
++
++	/*
++	 * delta is now minimum absolute delta. Round down by 1 bit
++	 * on general principles, and limit entropy estimate to 11 bits.
++	 */
++	bits = min(fls(delta >> 1), 11);
+ 
+-	return nbytes - left;
++	/*
++	 * As mentioned above, if we're in a hard IRQ, add_interrupt_randomness()
++	 * will run after this, which uses a different crediting scheme of 1 bit
++	 * per every 64 interrupts. In order to let that function do accounting
++	 * close to the one in this function, we credit a full 64/64 bit per bit,
++	 * and then subtract one to account for the extra one added.
++	 */
++	if (in_irq())
++		this_cpu_ptr(&irq_randomness)->count += max(1u, bits * 64) - 1;
++	else
++		_credit_init_bits(bits);
+ }
+-EXPORT_SYMBOL(get_random_bytes_arch);
+ 
+-/*
+- * init_std_data - initialize pool with system data
+- *
+- * @r: pool to initialize
+- *
+- * This function clears the pool's entropy count and mixes some system
+- * data into the pool to prepare it for use. The pool is not cleared
+- * as that can only decrease the entropy in the pool.
+- */
+-static void __init init_std_data(struct entropy_store *r)
++void add_input_randomness(unsigned int type, unsigned int code, unsigned int value)
+ {
+-	int i;
+-	ktime_t now = ktime_get_real();
+-	unsigned long rv;
+-
+-	mix_pool_bytes(r, &now, sizeof(now));
+-	for (i = r->poolinfo->poolbytes; i > 0; i -= sizeof(rv)) {
+-		if (!arch_get_random_seed_long(&rv) &&
+-		    !arch_get_random_long(&rv))
+-			rv = random_get_entropy();
+-		mix_pool_bytes(r, &rv, sizeof(rv));
+-	}
+-	mix_pool_bytes(r, utsname(), sizeof(*(utsname())));
++	static unsigned char last_value;
++	static struct timer_rand_state input_timer_state = { INITIAL_JIFFIES };
++
++	/* Ignore autorepeat and the like. */
++	if (value == last_value)
++		return;
++
++	last_value = value;
++	add_timer_randomness(&input_timer_state,
++			     (type << 4) ^ code ^ (code >> 4) ^ value);
+ }
++EXPORT_SYMBOL_GPL(add_input_randomness);
+ 
+-/*
+- * Note that setup_arch() may call add_device_randomness()
+- * long before we get here. This allows seeding of the pools
+- * with some platform dependent data very early in the boot
+- * process. But it limits our options here. We must use
+- * statically allocated structures that already have all
+- * initializations complete at compile time. We should also
+- * take care not to overwrite the precious per platform data
+- * we were given.
+- */
+-int __init rand_initialize(void)
++#ifdef CONFIG_BLOCK
++void add_disk_randomness(struct gendisk *disk)
+ {
+-	init_std_data(&input_pool);
+-	if (crng_need_final_init)
+-		crng_finalize_init(&primary_crng);
+-	crng_initialize_primary(&primary_crng);
+-	crng_global_init_time = jiffies;
+-	if (ratelimit_disable) {
+-		urandom_warning.interval = 0;
+-		unseeded_warning.interval = 0;
+-	}
+-	return 0;
++	if (!disk || !disk->random)
++		return;
++	/* First major is 1, so we get >= 0x200 here. */
++	add_timer_randomness(disk->random, 0x100 + disk_devt(disk));
+ }
++EXPORT_SYMBOL_GPL(add_disk_randomness);
+ 
+-#ifdef CONFIG_BLOCK
+-void rand_initialize_disk(struct gendisk *disk)
++void __cold rand_initialize_disk(struct gendisk *disk)
+ {
+ 	struct timer_rand_state *state;
+ 
+@@ -1847,116 +1141,189 @@ void rand_initialize_disk(struct gendisk *disk)
+ }
+ #endif
+ 
+-static ssize_t
+-urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes,
+-		    loff_t *ppos)
++/*
++ * Each time the timer fires, we expect that we got an unpredictable
++ * jump in the cycle counter. Even if the timer is running on another
++ * CPU, the timer activity will be touching the stack of the CPU that is
++ * generating entropy..
++ *
++ * Note that we don't re-arm the timer in the timer itself - we are
++ * happy to be scheduled away, since that just makes the load more
++ * complex, but we do not want the timer to keep ticking unless the
++ * entropy loop is running.
++ *
++ * So the re-arming always happens in the entropy loop itself.
++ */
++static void __cold entropy_timer(struct timer_list *t)
+ {
+-	int ret;
+-
+-	nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
+-	ret = extract_crng_user(buf, nbytes);
+-	trace_urandom_read(8 * nbytes, 0, ENTROPY_BITS(&input_pool));
+-	return ret;
++	credit_init_bits(1);
+ }
+ 
+-static ssize_t
+-urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
++/*
++ * If we have an actual cycle counter, see if we can
++ * generate enough entropy with timing noise
++ */
++static void __cold try_to_generate_entropy(void)
+ {
+-	unsigned long flags;
+-	static int maxwarn = 10;
++	struct {
++		unsigned long entropy;
++		struct timer_list timer;
++	} stack;
++
++	stack.entropy = random_get_entropy();
+ 
+-	if (!crng_ready() && maxwarn > 0) {
+-		maxwarn--;
+-		if (__ratelimit(&urandom_warning))
+-			pr_notice("%s: uninitialized urandom read (%zd bytes read)\n",
+-				  current->comm, nbytes);
+-		spin_lock_irqsave(&primary_crng.lock, flags);
+-		crng_init_cnt = 0;
+-		spin_unlock_irqrestore(&primary_crng.lock, flags);
++	/* Slow counter - or none. Don't even bother */
++	if (stack.entropy == random_get_entropy())
++		return;
++
++	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
++	while (!crng_ready() && !signal_pending(current)) {
++		if (!timer_pending(&stack.timer))
++			mod_timer(&stack.timer, jiffies + 1);
++		mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
++		schedule();
++		stack.entropy = random_get_entropy();
+ 	}
+ 
+-	return urandom_read_nowarn(file, buf, nbytes, ppos);
++	del_timer_sync(&stack.timer);
++	destroy_timer_on_stack(&stack.timer);
++	mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
+ }
+ 
+-static ssize_t
+-random_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
++
++/**********************************************************************
++ *
++ * Userspace reader/writer interfaces.
++ *
++ * getrandom(2) is the primary modern interface into the RNG and should
++ * be used in preference to anything else.
++ *
++ * Reading from /dev/random has the same functionality as calling
++ * getrandom(2) with flags=0. In earlier versions, however, it had
++ * vastly different semantics and should therefore be avoided, to
++ * prevent backwards compatibility issues.
++ *
++ * Reading from /dev/urandom has the same functionality as calling
++ * getrandom(2) with flags=GRND_INSECURE. Because it does not block
++ * waiting for the RNG to be ready, it should not be used.
++ *
++ * Writing to either /dev/random or /dev/urandom adds entropy to
++ * the input pool but does not credit it.
++ *
++ * Polling on /dev/random indicates when the RNG is initialized, on
++ * the read side, and when it wants new entropy, on the write side.
++ *
++ * Both /dev/random and /dev/urandom have the same set of ioctls for
++ * adding entropy, getting the entropy count, zeroing the count, and
++ * reseeding the crng.
++ *
++ **********************************************************************/
++
++SYSCALL_DEFINE3(getrandom, char __user *, ubuf, size_t, len, unsigned int, flags)
+ {
++	struct iov_iter iter;
++	struct iovec iov;
+ 	int ret;
+ 
+-	ret = wait_for_random_bytes();
+-	if (ret != 0)
++	if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE))
++		return -EINVAL;
++
++	/*
++	 * Requesting insecure and blocking randomness at the same time makes
++	 * no sense.
++	 */
++	if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM))
++		return -EINVAL;
++
++	if (!crng_ready() && !(flags & GRND_INSECURE)) {
++		if (flags & GRND_NONBLOCK)
++			return -EAGAIN;
++		ret = wait_for_random_bytes();
++		if (unlikely(ret))
++			return ret;
++	}
++
++	ret = import_single_range(READ, ubuf, len, &iov, &iter);
++	if (unlikely(ret))
+ 		return ret;
+-	return urandom_read_nowarn(file, buf, nbytes, ppos);
++	return get_random_bytes_user(&iter);
+ }
+ 
+-static __poll_t
+-random_poll(struct file *file, poll_table * wait)
++static __poll_t random_poll(struct file *file, poll_table *wait)
+ {
+-	__poll_t mask;
+-
+ 	poll_wait(file, &crng_init_wait, wait);
+-	poll_wait(file, &random_write_wait, wait);
+-	mask = 0;
+-	if (crng_ready())
+-		mask |= EPOLLIN | EPOLLRDNORM;
+-	if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits)
+-		mask |= EPOLLOUT | EPOLLWRNORM;
+-	return mask;
++	return crng_ready() ? EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM;
+ }
+ 
+-static int
+-write_pool(struct entropy_store *r, const char __user *buffer, size_t count)
++static ssize_t write_pool_user(struct iov_iter *iter)
+ {
+-	size_t bytes;
+-	__u32 t, buf[16];
+-	const char __user *p = buffer;
++	u8 block[BLAKE2S_BLOCK_SIZE];
++	ssize_t ret = 0;
++	size_t copied;
+ 
+-	while (count > 0) {
+-		int b, i = 0;
++	if (unlikely(!iov_iter_count(iter)))
++		return 0;
+ 
+-		bytes = min(count, sizeof(buf));
+-		if (copy_from_user(&buf, p, bytes))
+-			return -EFAULT;
++	for (;;) {
++		copied = copy_from_iter(block, sizeof(block), iter);
++		ret += copied;
++		mix_pool_bytes(block, copied);
++		if (!iov_iter_count(iter) || copied != sizeof(block))
++			break;
+ 
+-		for (b = bytes ; b > 0 ; b -= sizeof(__u32), i++) {
+-			if (!arch_get_random_int(&t))
++		BUILD_BUG_ON(PAGE_SIZE % sizeof(block) != 0);
++		if (ret % PAGE_SIZE == 0) {
++			if (signal_pending(current))
+ 				break;
+-			buf[i] ^= t;
++			cond_resched();
+ 		}
++	}
++
++	memzero_explicit(block, sizeof(block));
++	return ret ? ret : -EFAULT;
++}
++
++static ssize_t random_write_iter(struct kiocb *kiocb, struct iov_iter *iter)
++{
++	return write_pool_user(iter);
++}
+ 
+-		count -= bytes;
+-		p += bytes;
++static ssize_t urandom_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
++{
++	static int maxwarn = 10;
+ 
+-		mix_pool_bytes(r, buf, bytes);
+-		cond_resched();
++	if (!crng_ready()) {
++		if (!ratelimit_disable && maxwarn <= 0)
++			++urandom_warning.missed;
++		else if (ratelimit_disable || __ratelimit(&urandom_warning)) {
++			--maxwarn;
++			pr_notice("%s: uninitialized urandom read (%zu bytes read)\n",
++				  current->comm, iov_iter_count(iter));
++		}
+ 	}
+ 
+-	return 0;
++	return get_random_bytes_user(iter);
+ }
+ 
+-static ssize_t random_write(struct file *file, const char __user *buffer,
+-			    size_t count, loff_t *ppos)
++static ssize_t random_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
+ {
+-	size_t ret;
++	int ret;
+ 
+-	ret = write_pool(&input_pool, buffer, count);
+-	if (ret)
++	ret = wait_for_random_bytes();
++	if (ret != 0)
+ 		return ret;
+-
+-	return (ssize_t)count;
++	return get_random_bytes_user(iter);
+ }
+ 
+ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+ {
+-	int size, ent_count;
+ 	int __user *p = (int __user *)arg;
+-	int retval;
++	int ent_count;
+ 
+ 	switch (cmd) {
+ 	case RNDGETENTCNT:
+-		/* inherently racy, no point locking */
+-		ent_count = ENTROPY_BITS(&input_pool);
+-		if (put_user(ent_count, p))
++		/* Inherently racy, no point locking. */
++		if (put_user(input_pool.init_bits, p))
+ 			return -EFAULT;
+ 		return 0;
+ 	case RNDADDTOENTCNT:
+@@ -1964,41 +1331,48 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+ 			return -EPERM;
+ 		if (get_user(ent_count, p))
+ 			return -EFAULT;
+-		return credit_entropy_bits_safe(&input_pool, ent_count);
+-	case RNDADDENTROPY:
++		if (ent_count < 0)
++			return -EINVAL;
++		credit_init_bits(ent_count);
++		return 0;
++	case RNDADDENTROPY: {
++		struct iov_iter iter;
++		struct iovec iov;
++		ssize_t ret;
++		int len;
++
+ 		if (!capable(CAP_SYS_ADMIN))
+ 			return -EPERM;
+ 		if (get_user(ent_count, p++))
+ 			return -EFAULT;
+ 		if (ent_count < 0)
+ 			return -EINVAL;
+-		if (get_user(size, p++))
++		if (get_user(len, p++))
++			return -EFAULT;
++		ret = import_single_range(WRITE, p, len, &iov, &iter);
++		if (unlikely(ret))
++			return ret;
++		ret = write_pool_user(&iter);
++		if (unlikely(ret < 0))
++			return ret;
++		/* Since we're crediting, enforce that it was all written into the pool. */
++		if (unlikely(ret != len))
+ 			return -EFAULT;
+-		retval = write_pool(&input_pool, (const char __user *)p,
+-				    size);
+-		if (retval < 0)
+-			return retval;
+-		return credit_entropy_bits_safe(&input_pool, ent_count);
++		credit_init_bits(ent_count);
++		return 0;
++	}
+ 	case RNDZAPENTCNT:
+ 	case RNDCLEARPOOL:
+-		/*
+-		 * Clear the entropy pool counters. We no longer clear
+-		 * the entropy pool, as that's silly.
+-		 */
++		/* No longer has any effect. */
+ 		if (!capable(CAP_SYS_ADMIN))
+ 			return -EPERM;
+-		if (xchg(&input_pool.entropy_count, 0) && random_write_wakeup_bits) {
+-			wake_up_interruptible(&random_write_wait);
+-			kill_fasync(&fasync, SIGIO, POLL_OUT);
+-		}
+ 		return 0;
+ 	case RNDRESEEDCRNG:
+ 		if (!capable(CAP_SYS_ADMIN))
+ 			return -EPERM;
+-		if (crng_init < 2)
++		if (!crng_ready())
+ 			return -ENODATA;
+-		crng_reseed(&primary_crng, &input_pool);
+-		WRITE_ONCE(crng_global_init_time, jiffies - 1);
++		crng_reseed();
+ 		return 0;
+ 	default:
+ 		return -EINVAL;
+@@ -2011,55 +1385,56 @@ static int random_fasync(int fd, struct file *filp, int on)
+ }
+ 
+ const struct file_operations random_fops = {
+-	.read  = random_read,
+-	.write = random_write,
+-	.poll  = random_poll,
++	.read_iter = random_read_iter,
++	.write_iter = random_write_iter,
++	.poll = random_poll,
+ 	.unlocked_ioctl = random_ioctl,
+ 	.compat_ioctl = compat_ptr_ioctl,
+ 	.fasync = random_fasync,
+ 	.llseek = noop_llseek,
++	.splice_read = generic_file_splice_read,
++	.splice_write = iter_file_splice_write,
+ };
+ 
+ const struct file_operations urandom_fops = {
+-	.read  = urandom_read,
+-	.write = random_write,
++	.read_iter = urandom_read_iter,
++	.write_iter = random_write_iter,
+ 	.unlocked_ioctl = random_ioctl,
+ 	.compat_ioctl = compat_ptr_ioctl,
+ 	.fasync = random_fasync,
+ 	.llseek = noop_llseek,
++	.splice_read = generic_file_splice_read,
++	.splice_write = iter_file_splice_write,
+ };
+ 
+-SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
+-		unsigned int, flags)
+-{
+-	int ret;
+-
+-	if (flags & ~(GRND_NONBLOCK|GRND_RANDOM|GRND_INSECURE))
+-		return -EINVAL;
+-
+-	/*
+-	 * Requesting insecure and blocking randomness at the same time makes
+-	 * no sense.
+-	 */
+-	if ((flags & (GRND_INSECURE|GRND_RANDOM)) == (GRND_INSECURE|GRND_RANDOM))
+-		return -EINVAL;
+-
+-	if (count > INT_MAX)
+-		count = INT_MAX;
+-
+-	if (!(flags & GRND_INSECURE) && !crng_ready()) {
+-		if (flags & GRND_NONBLOCK)
+-			return -EAGAIN;
+-		ret = wait_for_random_bytes();
+-		if (unlikely(ret))
+-			return ret;
+-	}
+-	return urandom_read_nowarn(NULL, buf, count, NULL);
+-}
+ 
+ /********************************************************************
+  *
+- * Sysctl interface
++ * Sysctl interface.
++ *
++ * These are partly unused legacy knobs with dummy values to not break
++ * userspace and partly still useful things. They are usually accessible
++ * in /proc/sys/kernel/random/ and are as follows:
++ *
++ * - boot_id - a UUID representing the current boot.
++ *
++ * - uuid - a random UUID, different each time the file is read.
++ *
++ * - poolsize - the number of bits of entropy that the input pool can
++ *   hold, tied to the POOL_BITS constant.
++ *
++ * - entropy_avail - the number of bits of entropy currently in the
++ *   input pool. Always <= poolsize.
++ *
++ * - write_wakeup_threshold - the amount of entropy in the input pool
++ *   below which write polls to /dev/random will unblock, requesting
++ *   more entropy, tied to the POOL_READY_BITS constant. It is writable
++ *   to avoid breaking old userspaces, but writing to it does not
++ *   change any behavior of the RNG.
++ *
++ * - urandom_min_reseed_secs - fixed to the value CRNG_RESEED_INTERVAL.
++ *   It is writable to avoid breaking old userspaces, but writing
++ *   to it does not change any behavior of the RNG.
+  *
+  ********************************************************************/
+ 
+@@ -2067,25 +1442,28 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
+ 
+ #include <linux/sysctl.h>
+ 
+-static int min_write_thresh;
+-static int max_write_thresh = INPUT_POOL_WORDS * 32;
+-static int random_min_urandom_seed = 60;
+-static char sysctl_bootid[16];
++static int sysctl_random_min_urandom_seed = CRNG_RESEED_INTERVAL / HZ;
++static int sysctl_random_write_wakeup_bits = POOL_READY_BITS;
++static int sysctl_poolsize = POOL_BITS;
++static u8 sysctl_bootid[UUID_SIZE];
+ 
+ /*
+  * This function is used to return both the bootid UUID, and random
+- * UUID.  The difference is in whether table->data is NULL; if it is,
++ * UUID. The difference is in whether table->data is NULL; if it is,
+  * then a new UUID is generated and returned to the user.
+- *
+- * If the user accesses this via the proc interface, the UUID will be
+- * returned as an ASCII string in the standard UUID format; if via the
+- * sysctl system call, as 16 bytes of binary data.
+  */
+-static int proc_do_uuid(struct ctl_table *table, int write,
+-			void *buffer, size_t *lenp, loff_t *ppos)
+-{
+-	struct ctl_table fake_table;
+-	unsigned char buf[64], tmp_uuid[16], *uuid;
++static int proc_do_uuid(struct ctl_table *table, int write, void *buf,
++			size_t *lenp, loff_t *ppos)
++{
++	u8 tmp_uuid[UUID_SIZE], *uuid;
++	char uuid_string[UUID_STRING_LEN + 1];
++	struct ctl_table fake_table = {
++		.data = uuid_string,
++		.maxlen = UUID_STRING_LEN
++	};
++
++	if (write)
++		return -EPERM;
+ 
+ 	uuid = table->data;
+ 	if (!uuid) {
+@@ -2100,32 +1478,17 @@ static int proc_do_uuid(struct ctl_table *table, int write,
+ 		spin_unlock(&bootid_spinlock);
+ 	}
+ 
+-	sprintf(buf, "%pU", uuid);
+-
+-	fake_table.data = buf;
+-	fake_table.maxlen = sizeof(buf);
+-
+-	return proc_dostring(&fake_table, write, buffer, lenp, ppos);
++	snprintf(uuid_string, sizeof(uuid_string), "%pU", uuid);
++	return proc_dostring(&fake_table, 0, buf, lenp, ppos);
+ }
+ 
+-/*
+- * Return entropy available scaled to integral bits
+- */
+-static int proc_do_entropy(struct ctl_table *table, int write,
+-			   void *buffer, size_t *lenp, loff_t *ppos)
++/* The same as proc_dointvec, but writes don't change anything. */
++static int proc_do_rointvec(struct ctl_table *table, int write, void *buf,
++			    size_t *lenp, loff_t *ppos)
+ {
+-	struct ctl_table fake_table;
+-	int entropy_count;
+-
+-	entropy_count = *(int *)table->data >> ENTROPY_SHIFT;
+-
+-	fake_table.data = &entropy_count;
+-	fake_table.maxlen = sizeof(entropy_count);
+-
+-	return proc_dointvec(&fake_table, write, buffer, lenp, ppos);
++	return write ? 0 : proc_dointvec(table, 0, buf, lenp, ppos);
+ }
+ 
+-static int sysctl_poolsize = INPUT_POOL_WORDS * 32;
+ extern struct ctl_table random_table[];
+ struct ctl_table random_table[] = {
+ 	{
+@@ -2137,222 +1500,36 @@ struct ctl_table random_table[] = {
+ 	},
+ 	{
+ 		.procname	= "entropy_avail",
++		.data		= &input_pool.init_bits,
+ 		.maxlen		= sizeof(int),
+ 		.mode		= 0444,
+-		.proc_handler	= proc_do_entropy,
+-		.data		= &input_pool.entropy_count,
++		.proc_handler	= proc_dointvec,
+ 	},
+ 	{
+ 		.procname	= "write_wakeup_threshold",
+-		.data		= &random_write_wakeup_bits,
++		.data		= &sysctl_random_write_wakeup_bits,
+ 		.maxlen		= sizeof(int),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
+-		.extra1		= &min_write_thresh,
+-		.extra2		= &max_write_thresh,
++		.proc_handler	= proc_do_rointvec,
+ 	},
+ 	{
+ 		.procname	= "urandom_min_reseed_secs",
+-		.data		= &random_min_urandom_seed,
++		.data		= &sysctl_random_min_urandom_seed,
+ 		.maxlen		= sizeof(int),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_do_rointvec,
+ 	},
+ 	{
+ 		.procname	= "boot_id",
+ 		.data		= &sysctl_bootid,
+-		.maxlen		= 16,
+ 		.mode		= 0444,
+ 		.proc_handler	= proc_do_uuid,
+ 	},
+ 	{
+ 		.procname	= "uuid",
+-		.maxlen		= 16,
+ 		.mode		= 0444,
+ 		.proc_handler	= proc_do_uuid,
+ 	},
+-#ifdef ADD_INTERRUPT_BENCH
+-	{
+-		.procname	= "add_interrupt_avg_cycles",
+-		.data		= &avg_cycles,
+-		.maxlen		= sizeof(avg_cycles),
+-		.mode		= 0444,
+-		.proc_handler	= proc_doulongvec_minmax,
+-	},
+-	{
+-		.procname	= "add_interrupt_avg_deviation",
+-		.data		= &avg_deviation,
+-		.maxlen		= sizeof(avg_deviation),
+-		.mode		= 0444,
+-		.proc_handler	= proc_doulongvec_minmax,
+-	},
+-#endif
+ 	{ }
+ };
+-#endif 	/* CONFIG_SYSCTL */
+-
+-struct batched_entropy {
+-	union {
+-		u64 entropy_u64[CHACHA_BLOCK_SIZE / sizeof(u64)];
+-		u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)];
+-	};
+-	unsigned int position;
+-	spinlock_t batch_lock;
+-};
+-
+-/*
+- * Get a random word for internal kernel use only. The quality of the random
+- * number is good as /dev/urandom, but there is no backtrack protection, with
+- * the goal of being quite fast and not depleting entropy. In order to ensure
+- * that the randomness provided by this function is okay, the function
+- * wait_for_random_bytes() should be called and return 0 at least once at any
+- * point prior.
+- */
+-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
+-	.batch_lock	= __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
+-};
+-
+-u64 get_random_u64(void)
+-{
+-	u64 ret;
+-	unsigned long flags;
+-	struct batched_entropy *batch;
+-	static void *previous;
+-
+-	warn_unseeded_randomness(&previous);
+-
+-	batch = raw_cpu_ptr(&batched_entropy_u64);
+-	spin_lock_irqsave(&batch->batch_lock, flags);
+-	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
+-		extract_crng((u8 *)batch->entropy_u64);
+-		batch->position = 0;
+-	}
+-	ret = batch->entropy_u64[batch->position++];
+-	spin_unlock_irqrestore(&batch->batch_lock, flags);
+-	return ret;
+-}
+-EXPORT_SYMBOL(get_random_u64);
+-
+-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = {
+-	.batch_lock	= __SPIN_LOCK_UNLOCKED(batched_entropy_u32.lock),
+-};
+-u32 get_random_u32(void)
+-{
+-	u32 ret;
+-	unsigned long flags;
+-	struct batched_entropy *batch;
+-	static void *previous;
+-
+-	warn_unseeded_randomness(&previous);
+-
+-	batch = raw_cpu_ptr(&batched_entropy_u32);
+-	spin_lock_irqsave(&batch->batch_lock, flags);
+-	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
+-		extract_crng((u8 *)batch->entropy_u32);
+-		batch->position = 0;
+-	}
+-	ret = batch->entropy_u32[batch->position++];
+-	spin_unlock_irqrestore(&batch->batch_lock, flags);
+-	return ret;
+-}
+-EXPORT_SYMBOL(get_random_u32);
+-
+-/* It's important to invalidate all potential batched entropy that might
+- * be stored before the crng is initialized, which we can do lazily by
+- * simply resetting the counter to zero so that it's re-extracted on the
+- * next usage. */
+-static void invalidate_batched_entropy(void)
+-{
+-	int cpu;
+-	unsigned long flags;
+-
+-	for_each_possible_cpu (cpu) {
+-		struct batched_entropy *batched_entropy;
+-
+-		batched_entropy = per_cpu_ptr(&batched_entropy_u32, cpu);
+-		spin_lock_irqsave(&batched_entropy->batch_lock, flags);
+-		batched_entropy->position = 0;
+-		spin_unlock(&batched_entropy->batch_lock);
+-
+-		batched_entropy = per_cpu_ptr(&batched_entropy_u64, cpu);
+-		spin_lock(&batched_entropy->batch_lock);
+-		batched_entropy->position = 0;
+-		spin_unlock_irqrestore(&batched_entropy->batch_lock, flags);
+-	}
+-}
+-
+-/**
+- * randomize_page - Generate a random, page aligned address
+- * @start:	The smallest acceptable address the caller will take.
+- * @range:	The size of the area, starting at @start, within which the
+- *		random address must fall.
+- *
+- * If @start + @range would overflow, @range is capped.
+- *
+- * NOTE: Historical use of randomize_range, which this replaces, presumed that
+- * @start was already page aligned.  We now align it regardless.
+- *
+- * Return: A page aligned address within [start, start + range).  On error,
+- * @start is returned.
+- */
+-unsigned long
+-randomize_page(unsigned long start, unsigned long range)
+-{
+-	if (!PAGE_ALIGNED(start)) {
+-		range -= PAGE_ALIGN(start) - start;
+-		start = PAGE_ALIGN(start);
+-	}
+-
+-	if (start > ULONG_MAX - range)
+-		range = ULONG_MAX - start;
+-
+-	range >>= PAGE_SHIFT;
+-
+-	if (range == 0)
+-		return start;
+-
+-	return start + (get_random_long() % range << PAGE_SHIFT);
+-}
+-
+-/* Interface for in-kernel drivers of true hardware RNGs.
+- * Those devices may produce endless random bits and will be throttled
+- * when our pool is full.
+- */
+-void add_hwgenerator_randomness(const char *buffer, size_t count,
+-				size_t entropy)
+-{
+-	struct entropy_store *poolp = &input_pool;
+-
+-	if (unlikely(crng_init == 0)) {
+-		size_t ret = crng_fast_load(buffer, count);
+-		count -= ret;
+-		buffer += ret;
+-		if (!count || crng_init == 0)
+-			return;
+-	}
+-
+-	/* Suspend writing if we're above the trickle threshold.
+-	 * We'll be woken up again once below random_write_wakeup_thresh,
+-	 * or when the calling thread is about to terminate.
+-	 */
+-	wait_event_interruptible(random_write_wait,
+-			!system_wq || kthread_should_stop() ||
+-			ENTROPY_BITS(&input_pool) <= random_write_wakeup_bits);
+-	mix_pool_bytes(poolp, buffer, count);
+-	credit_entropy_bits(poolp, entropy);
+-}
+-EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);
+-
+-/* Handle random seed passed by bootloader.
+- * If the seed is trustworthy, it would be regarded as hardware RNGs. Otherwise
+- * it would be regarded as device data.
+- * The decision is controlled by CONFIG_RANDOM_TRUST_BOOTLOADER.
+- */
+-void add_bootloader_randomness(const void *buf, unsigned int size)
+-{
+-	if (IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER))
+-		add_hwgenerator_randomness(buf, size, size * 8);
+-	else
+-		add_device_randomness(buf, size);
+-}
+-EXPORT_SYMBOL_GPL(add_bootloader_randomness);
++#endif	/* CONFIG_SYSCTL */
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index b9ac357e465db..5d820037e2918 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1351,7 +1351,7 @@ static void vmbus_isr(void)
+ 			tasklet_schedule(&hv_cpu->msg_dpc);
+ 	}
+ 
+-	add_interrupt_randomness(hv_get_vector(), 0);
++	add_interrupt_randomness(hv_get_vector());
+ }
+ 
+ /*
+diff --git a/drivers/media/test-drivers/vim2m.c b/drivers/media/test-drivers/vim2m.c
+index a776bb8e0e093..a24624353f9ed 100644
+--- a/drivers/media/test-drivers/vim2m.c
++++ b/drivers/media/test-drivers/vim2m.c
+@@ -1325,12 +1325,6 @@ static int vim2m_probe(struct platform_device *pdev)
+ 	vfd->lock = &dev->dev_mutex;
+ 	vfd->v4l2_dev = &dev->v4l2_dev;
+ 
+-	ret = video_register_device(vfd, VFL_TYPE_VIDEO, 0);
+-	if (ret) {
+-		v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
+-		goto error_v4l2;
+-	}
+-
+ 	video_set_drvdata(vfd, dev);
+ 	v4l2_info(&dev->v4l2_dev,
+ 		  "Device registered as /dev/video%d\n", vfd->num);
+@@ -1353,12 +1347,20 @@ static int vim2m_probe(struct platform_device *pdev)
+ 	media_device_init(&dev->mdev);
+ 	dev->mdev.ops = &m2m_media_ops;
+ 	dev->v4l2_dev.mdev = &dev->mdev;
++#endif
++
++	ret = video_register_device(vfd, VFL_TYPE_VIDEO, 0);
++	if (ret) {
++		v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
++		goto error_m2m;
++	}
+ 
++#ifdef CONFIG_MEDIA_CONTROLLER
+ 	ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd,
+ 						 MEDIA_ENT_F_PROC_VIDEO_SCALER);
+ 	if (ret) {
+ 		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem media controller\n");
+-		goto error_dev;
++		goto error_v4l2;
+ 	}
+ 
+ 	ret = media_device_register(&dev->mdev);
+@@ -1373,11 +1375,13 @@ static int vim2m_probe(struct platform_device *pdev)
+ error_m2m_mc:
+ 	v4l2_m2m_unregister_media_controller(dev->m2m_dev);
+ #endif
+-error_dev:
++error_v4l2:
+ 	video_unregister_device(&dev->vfd);
+ 	/* vim2m_device_release called by video_unregister_device to release various objects */
+ 	return ret;
+-error_v4l2:
++error_m2m:
++	v4l2_m2m_release(dev->m2m_dev);
++error_dev:
+ 	v4l2_device_unregister(&dev->v4l2_dev);
+ error_free:
+ 	kfree(dev);
+diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
+index cb865b7ec3750..f208080243055 100644
+--- a/drivers/net/Kconfig
++++ b/drivers/net/Kconfig
+@@ -80,7 +80,6 @@ config WIREGUARD
+ 	select CRYPTO
+ 	select CRYPTO_LIB_CURVE25519
+ 	select CRYPTO_LIB_CHACHA20POLY1305
+-	select CRYPTO_LIB_BLAKE2S
+ 	select CRYPTO_CHACHA20_X86_64 if X86 && 64BIT
+ 	select CRYPTO_POLY1305_X86_64 if X86 && 64BIT
+ 	select CRYPTO_BLAKE2S_X86 if X86 && 64BIT
+diff --git a/drivers/net/wireguard/noise.c b/drivers/net/wireguard/noise.c
+index c0cfd9b36c0b5..720952b92e784 100644
+--- a/drivers/net/wireguard/noise.c
++++ b/drivers/net/wireguard/noise.c
+@@ -302,6 +302,41 @@ void wg_noise_set_static_identity_private_key(
+ 		static_identity->static_public, private_key);
+ }
+ 
++static void hmac(u8 *out, const u8 *in, const u8 *key, const size_t inlen, const size_t keylen)
++{
++	struct blake2s_state state;
++	u8 x_key[BLAKE2S_BLOCK_SIZE] __aligned(__alignof__(u32)) = { 0 };
++	u8 i_hash[BLAKE2S_HASH_SIZE] __aligned(__alignof__(u32));
++	int i;
++
++	if (keylen > BLAKE2S_BLOCK_SIZE) {
++		blake2s_init(&state, BLAKE2S_HASH_SIZE);
++		blake2s_update(&state, key, keylen);
++		blake2s_final(&state, x_key);
++	} else
++		memcpy(x_key, key, keylen);
++
++	for (i = 0; i < BLAKE2S_BLOCK_SIZE; ++i)
++		x_key[i] ^= 0x36;
++
++	blake2s_init(&state, BLAKE2S_HASH_SIZE);
++	blake2s_update(&state, x_key, BLAKE2S_BLOCK_SIZE);
++	blake2s_update(&state, in, inlen);
++	blake2s_final(&state, i_hash);
++
++	for (i = 0; i < BLAKE2S_BLOCK_SIZE; ++i)
++		x_key[i] ^= 0x5c ^ 0x36;
++
++	blake2s_init(&state, BLAKE2S_HASH_SIZE);
++	blake2s_update(&state, x_key, BLAKE2S_BLOCK_SIZE);
++	blake2s_update(&state, i_hash, BLAKE2S_HASH_SIZE);
++	blake2s_final(&state, i_hash);
++
++	memcpy(out, i_hash, BLAKE2S_HASH_SIZE);
++	memzero_explicit(x_key, BLAKE2S_BLOCK_SIZE);
++	memzero_explicit(i_hash, BLAKE2S_HASH_SIZE);
++}
++
+ /* This is Hugo Krawczyk's HKDF:
+  *  - https://eprint.iacr.org/2010/264.pdf
+  *  - https://tools.ietf.org/html/rfc5869
+@@ -322,14 +357,14 @@ static void kdf(u8 *first_dst, u8 *second_dst, u8 *third_dst, const u8 *data,
+ 		 ((third_len || third_dst) && (!second_len || !second_dst))));
+ 
+ 	/* Extract entropy from data into secret */
+-	blake2s256_hmac(secret, data, chaining_key, data_len, NOISE_HASH_LEN);
++	hmac(secret, data, chaining_key, data_len, NOISE_HASH_LEN);
+ 
+ 	if (!first_dst || !first_len)
+ 		goto out;
+ 
+ 	/* Expand first key: key = secret, data = 0x1 */
+ 	output[0] = 1;
+-	blake2s256_hmac(output, output, secret, 1, BLAKE2S_HASH_SIZE);
++	hmac(output, output, secret, 1, BLAKE2S_HASH_SIZE);
+ 	memcpy(first_dst, output, first_len);
+ 
+ 	if (!second_dst || !second_len)
+@@ -337,8 +372,7 @@ static void kdf(u8 *first_dst, u8 *second_dst, u8 *third_dst, const u8 *data,
+ 
+ 	/* Expand second key: key = secret, data = first-key || 0x2 */
+ 	output[BLAKE2S_HASH_SIZE] = 2;
+-	blake2s256_hmac(output, output, secret, BLAKE2S_HASH_SIZE + 1,
+-			BLAKE2S_HASH_SIZE);
++	hmac(output, output, secret, BLAKE2S_HASH_SIZE + 1, BLAKE2S_HASH_SIZE);
+ 	memcpy(second_dst, output, second_len);
+ 
+ 	if (!third_dst || !third_len)
+@@ -346,8 +380,7 @@ static void kdf(u8 *first_dst, u8 *second_dst, u8 *third_dst, const u8 *data,
+ 
+ 	/* Expand third key: key = secret, data = second-key || 0x3 */
+ 	output[BLAKE2S_HASH_SIZE] = 3;
+-	blake2s256_hmac(output, output, secret, BLAKE2S_HASH_SIZE + 1,
+-			BLAKE2S_HASH_SIZE);
++	hmac(output, output, secret, BLAKE2S_HASH_SIZE + 1, BLAKE2S_HASH_SIZE);
+ 	memcpy(third_dst, output, third_len);
+ 
+ out:
+diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
+index 902ac81699484..083ff72976cf0 100644
+--- a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
++++ b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
+@@ -1351,9 +1351,11 @@ static int rtw_wx_set_scan(struct net_device *dev, struct iw_request_info *a,
+ 
+ 					sec_len = *(pos++); len -= 1;
+ 
+-					if (sec_len > 0 && sec_len <= len) {
++					if (sec_len > 0 &&
++					    sec_len <= len &&
++					    sec_len <= 32) {
+ 						ssid[ssid_index].SsidLength = sec_len;
+-						memcpy(ssid[ssid_index].Ssid, pos, ssid[ssid_index].SsidLength);
++						memcpy(ssid[ssid_index].Ssid, pos, sec_len);
+ 						/* DBG_871X("%s COMBO_SCAN with specific ssid:%s, %d\n", __func__ */
+ 						/* 	, ssid[ssid_index].Ssid, ssid[ssid_index].SsidLength); */
+ 						ssid_index++;
+diff --git a/include/crypto/blake2s.h b/include/crypto/blake2s.h
+index b471deac28ff8..4e30e1799e614 100644
+--- a/include/crypto/blake2s.h
++++ b/include/crypto/blake2s.h
+@@ -3,15 +3,14 @@
+  * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+  */
+ 
+-#ifndef BLAKE2S_H
+-#define BLAKE2S_H
++#ifndef _CRYPTO_BLAKE2S_H
++#define _CRYPTO_BLAKE2S_H
+ 
++#include <linux/bug.h>
+ #include <linux/types.h>
+ #include <linux/kernel.h>
+ #include <linux/string.h>
+ 
+-#include <asm/bug.h>
+-
+ enum blake2s_lengths {
+ 	BLAKE2S_BLOCK_SIZE = 64,
+ 	BLAKE2S_HASH_SIZE = 32,
+@@ -24,6 +23,7 @@ enum blake2s_lengths {
+ };
+ 
+ struct blake2s_state {
++	/* 'h', 't', and 'f' are used in assembly code, so keep them as-is. */
+ 	u32 h[8];
+ 	u32 t[2];
+ 	u32 f[2];
+@@ -43,29 +43,34 @@ enum blake2s_iv {
+ 	BLAKE2S_IV7 = 0x5BE0CD19UL,
+ };
+ 
+-void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen);
+-void blake2s_final(struct blake2s_state *state, u8 *out);
+-
+-static inline void blake2s_init_param(struct blake2s_state *state,
+-				      const u32 param)
++static inline void __blake2s_init(struct blake2s_state *state, size_t outlen,
++				  const void *key, size_t keylen)
+ {
+-	*state = (struct blake2s_state){{
+-		BLAKE2S_IV0 ^ param,
+-		BLAKE2S_IV1,
+-		BLAKE2S_IV2,
+-		BLAKE2S_IV3,
+-		BLAKE2S_IV4,
+-		BLAKE2S_IV5,
+-		BLAKE2S_IV6,
+-		BLAKE2S_IV7,
+-	}};
++	state->h[0] = BLAKE2S_IV0 ^ (0x01010000 | keylen << 8 | outlen);
++	state->h[1] = BLAKE2S_IV1;
++	state->h[2] = BLAKE2S_IV2;
++	state->h[3] = BLAKE2S_IV3;
++	state->h[4] = BLAKE2S_IV4;
++	state->h[5] = BLAKE2S_IV5;
++	state->h[6] = BLAKE2S_IV6;
++	state->h[7] = BLAKE2S_IV7;
++	state->t[0] = 0;
++	state->t[1] = 0;
++	state->f[0] = 0;
++	state->f[1] = 0;
++	state->buflen = 0;
++	state->outlen = outlen;
++	if (keylen) {
++		memcpy(state->buf, key, keylen);
++		memset(&state->buf[keylen], 0, BLAKE2S_BLOCK_SIZE - keylen);
++		state->buflen = BLAKE2S_BLOCK_SIZE;
++	}
+ }
+ 
+ static inline void blake2s_init(struct blake2s_state *state,
+ 				const size_t outlen)
+ {
+-	blake2s_init_param(state, 0x01010000 | outlen);
+-	state->outlen = outlen;
++	__blake2s_init(state, outlen, NULL, 0);
+ }
+ 
+ static inline void blake2s_init_key(struct blake2s_state *state,
+@@ -75,12 +80,12 @@ static inline void blake2s_init_key(struct blake2s_state *state,
+ 	WARN_ON(IS_ENABLED(DEBUG) && (!outlen || outlen > BLAKE2S_HASH_SIZE ||
+ 		!key || !keylen || keylen > BLAKE2S_KEY_SIZE));
+ 
+-	blake2s_init_param(state, 0x01010000 | keylen << 8 | outlen);
+-	memcpy(state->buf, key, keylen);
+-	state->buflen = BLAKE2S_BLOCK_SIZE;
+-	state->outlen = outlen;
++	__blake2s_init(state, outlen, key, keylen);
+ }
+ 
++void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen);
++void blake2s_final(struct blake2s_state *state, u8 *out);
++
+ static inline void blake2s(u8 *out, const u8 *in, const u8 *key,
+ 			   const size_t outlen, const size_t inlen,
+ 			   const size_t keylen)
+@@ -91,16 +96,9 @@ static inline void blake2s(u8 *out, const u8 *in, const u8 *key,
+ 		outlen > BLAKE2S_HASH_SIZE || keylen > BLAKE2S_KEY_SIZE ||
+ 		(!key && keylen)));
+ 
+-	if (keylen)
+-		blake2s_init_key(&state, outlen, key, keylen);
+-	else
+-		blake2s_init(&state, outlen);
+-
++	__blake2s_init(&state, outlen, key, keylen);
+ 	blake2s_update(&state, in, inlen);
+ 	blake2s_final(&state, out);
+ }
+ 
+-void blake2s256_hmac(u8 *out, const u8 *in, const u8 *key, const size_t inlen,
+-		     const size_t keylen);
+-
+-#endif /* BLAKE2S_H */
++#endif /* _CRYPTO_BLAKE2S_H */
+diff --git a/include/crypto/chacha.h b/include/crypto/chacha.h
+index dabaee6987186..b3ea73b819443 100644
+--- a/include/crypto/chacha.h
++++ b/include/crypto/chacha.h
+@@ -47,12 +47,19 @@ static inline void hchacha_block(const u32 *state, u32 *out, int nrounds)
+ 		hchacha_block_generic(state, out, nrounds);
+ }
+ 
++enum chacha_constants { /* expand 32-byte k */
++	CHACHA_CONSTANT_EXPA = 0x61707865U,
++	CHACHA_CONSTANT_ND_3 = 0x3320646eU,
++	CHACHA_CONSTANT_2_BY = 0x79622d32U,
++	CHACHA_CONSTANT_TE_K = 0x6b206574U
++};
++
+ static inline void chacha_init_consts(u32 *state)
+ {
+-	state[0]  = 0x61707865; /* "expa" */
+-	state[1]  = 0x3320646e; /* "nd 3" */
+-	state[2]  = 0x79622d32; /* "2-by" */
+-	state[3]  = 0x6b206574; /* "te k" */
++	state[0]  = CHACHA_CONSTANT_EXPA;
++	state[1]  = CHACHA_CONSTANT_ND_3;
++	state[2]  = CHACHA_CONSTANT_2_BY;
++	state[3]  = CHACHA_CONSTANT_TE_K;
+ }
+ 
+ void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv);
+diff --git a/include/crypto/drbg.h b/include/crypto/drbg.h
+index c4165126937e4..88e4d145f7cda 100644
+--- a/include/crypto/drbg.h
++++ b/include/crypto/drbg.h
+@@ -136,7 +136,7 @@ struct drbg_state {
+ 	const struct drbg_state_ops *d_ops;
+ 	const struct drbg_core *core;
+ 	struct drbg_string test_data;
+-	struct random_ready_callback random_ready;
++	struct notifier_block random_ready;
+ };
+ 
+ static inline __u8 drbg_statelen(struct drbg_state *drbg)
+diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h
+index 74ff77032e526..52363eee2b20e 100644
+--- a/include/crypto/internal/blake2s.h
++++ b/include/crypto/internal/blake2s.h
+@@ -1,24 +1,129 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR MIT */
++/*
++ * Helper functions for BLAKE2s implementations.
++ * Keep this in sync with the corresponding BLAKE2b header.
++ */
+ 
+-#ifndef BLAKE2S_INTERNAL_H
+-#define BLAKE2S_INTERNAL_H
++#ifndef _CRYPTO_INTERNAL_BLAKE2S_H
++#define _CRYPTO_INTERNAL_BLAKE2S_H
+ 
+ #include <crypto/blake2s.h>
++#include <crypto/internal/hash.h>
++#include <linux/string.h>
++
++void blake2s_compress_generic(struct blake2s_state *state, const u8 *block,
++			      size_t nblocks, const u32 inc);
++
++void blake2s_compress(struct blake2s_state *state, const u8 *block,
++		      size_t nblocks, const u32 inc);
++
++bool blake2s_selftest(void);
++
++static inline void blake2s_set_lastblock(struct blake2s_state *state)
++{
++	state->f[0] = -1;
++}
++
++/* Helper functions for BLAKE2s shared by the library and shash APIs */
++
++static __always_inline void
++__blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen,
++		 bool force_generic)
++{
++	const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen;
++
++	if (unlikely(!inlen))
++		return;
++	if (inlen > fill) {
++		memcpy(state->buf + state->buflen, in, fill);
++		if (force_generic)
++			blake2s_compress_generic(state, state->buf, 1,
++						 BLAKE2S_BLOCK_SIZE);
++		else
++			blake2s_compress(state, state->buf, 1,
++					 BLAKE2S_BLOCK_SIZE);
++		state->buflen = 0;
++		in += fill;
++		inlen -= fill;
++	}
++	if (inlen > BLAKE2S_BLOCK_SIZE) {
++		const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE);
++		/* Hash one less (full) block than strictly possible */
++		if (force_generic)
++			blake2s_compress_generic(state, in, nblocks - 1,
++						 BLAKE2S_BLOCK_SIZE);
++		else
++			blake2s_compress(state, in, nblocks - 1,
++					 BLAKE2S_BLOCK_SIZE);
++		in += BLAKE2S_BLOCK_SIZE * (nblocks - 1);
++		inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1);
++	}
++	memcpy(state->buf + state->buflen, in, inlen);
++	state->buflen += inlen;
++}
++
++static __always_inline void
++__blake2s_final(struct blake2s_state *state, u8 *out, bool force_generic)
++{
++	blake2s_set_lastblock(state);
++	memset(state->buf + state->buflen, 0,
++	       BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */
++	if (force_generic)
++		blake2s_compress_generic(state, state->buf, 1, state->buflen);
++	else
++		blake2s_compress(state, state->buf, 1, state->buflen);
++	cpu_to_le32_array(state->h, ARRAY_SIZE(state->h));
++	memcpy(out, state->h, state->outlen);
++}
++
++/* Helper functions for shash implementations of BLAKE2s */
+ 
+ struct blake2s_tfm_ctx {
+ 	u8 key[BLAKE2S_KEY_SIZE];
+ 	unsigned int keylen;
+ };
+ 
+-void blake2s_compress_generic(struct blake2s_state *state,const u8 *block,
+-			      size_t nblocks, const u32 inc);
++static inline int crypto_blake2s_setkey(struct crypto_shash *tfm,
++					const u8 *key, unsigned int keylen)
++{
++	struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(tfm);
+ 
+-void blake2s_compress_arch(struct blake2s_state *state,const u8 *block,
+-			   size_t nblocks, const u32 inc);
++	if (keylen == 0 || keylen > BLAKE2S_KEY_SIZE)
++		return -EINVAL;
+ 
+-static inline void blake2s_set_lastblock(struct blake2s_state *state)
++	memcpy(tctx->key, key, keylen);
++	tctx->keylen = keylen;
++
++	return 0;
++}
++
++static inline int crypto_blake2s_init(struct shash_desc *desc)
+ {
+-	state->f[0] = -1;
++	const struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++	struct blake2s_state *state = shash_desc_ctx(desc);
++	unsigned int outlen = crypto_shash_digestsize(desc->tfm);
++
++	__blake2s_init(state, outlen, tctx->key, tctx->keylen);
++	return 0;
++}
++
++static inline int crypto_blake2s_update(struct shash_desc *desc,
++					const u8 *in, unsigned int inlen,
++					bool force_generic)
++{
++	struct blake2s_state *state = shash_desc_ctx(desc);
++
++	__blake2s_update(state, in, inlen, force_generic);
++	return 0;
++}
++
++static inline int crypto_blake2s_final(struct shash_desc *desc, u8 *out,
++				       bool force_generic)
++{
++	struct blake2s_state *state = shash_desc_ctx(desc);
++
++	__blake2s_final(state, out, force_generic);
++	return 0;
+ }
+ 
+-#endif /* BLAKE2S_INTERNAL_H */
++#endif /* _CRYPTO_INTERNAL_BLAKE2S_H */
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index 8fb893ed205e3..fc945f9df2c1d 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -61,6 +61,7 @@ enum cpuhp_state {
+ 	CPUHP_LUSTRE_CFS_DEAD,
+ 	CPUHP_AP_ARM_CACHE_B15_RAC_DEAD,
+ 	CPUHP_PADATA_DEAD,
++	CPUHP_RANDOM_PREPARE,
+ 	CPUHP_WORKQUEUE_PREP,
+ 	CPUHP_POWER_NUMA_PREPARE,
+ 	CPUHP_HRTIMERS_PREPARE,
+@@ -187,6 +188,7 @@ enum cpuhp_state {
+ 	CPUHP_AP_PERF_POWERPC_HV_GPCI_ONLINE,
+ 	CPUHP_AP_WATCHDOG_ONLINE,
+ 	CPUHP_AP_WORKQUEUE_ONLINE,
++	CPUHP_AP_RANDOM_ONLINE,
+ 	CPUHP_AP_RCUTREE_ONLINE,
+ 	CPUHP_AP_BASE_CACHEINFO_ONLINE,
+ 	CPUHP_AP_ONLINE_DYN,
+diff --git a/include/linux/hw_random.h b/include/linux/hw_random.h
+index 8e6dd908da216..aa1d4da03538b 100644
+--- a/include/linux/hw_random.h
++++ b/include/linux/hw_random.h
+@@ -60,7 +60,5 @@ extern int devm_hwrng_register(struct device *dev, struct hwrng *rng);
+ /** Unregister a Hardware Random Number Generator driver. */
+ extern void hwrng_unregister(struct hwrng *rng);
+ extern void devm_hwrng_unregister(struct device *dve, struct hwrng *rng);
+-/** Feed random bits into the pool. */
+-extern void add_hwgenerator_randomness(const char *buffer, size_t count, size_t entropy);
+ 
+ #endif /* LINUX_HWRANDOM_H_ */
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 289c26f055cdd..5b4d88faf114a 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2585,6 +2585,7 @@ extern int install_special_mapping(struct mm_struct *mm,
+ 				   unsigned long flags, struct page **pages);
+ 
+ unsigned long randomize_stack_top(unsigned long stack_top);
++unsigned long randomize_page(unsigned long start, unsigned long range);
+ 
+ extern unsigned long get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
+ 
+diff --git a/include/linux/prandom.h b/include/linux/prandom.h
+index 056d31317e499..a4aadd2dc153e 100644
+--- a/include/linux/prandom.h
++++ b/include/linux/prandom.h
+@@ -10,6 +10,7 @@
+ 
+ #include <linux/types.h>
+ #include <linux/percpu.h>
++#include <linux/siphash.h>
+ 
+ u32 prandom_u32(void);
+ void prandom_bytes(void *buf, size_t nbytes);
+@@ -27,15 +28,10 @@ DECLARE_PER_CPU(unsigned long, net_rand_noise);
+  * The core SipHash round function.  Each line can be executed in
+  * parallel given enough CPU resources.
+  */
+-#define PRND_SIPROUND(v0, v1, v2, v3) ( \
+-	v0 += v1, v1 = rol64(v1, 13),  v2 += v3, v3 = rol64(v3, 16), \
+-	v1 ^= v0, v0 = rol64(v0, 32),  v3 ^= v2,                     \
+-	v0 += v3, v3 = rol64(v3, 21),  v2 += v1, v1 = rol64(v1, 17), \
+-	v3 ^= v0,                      v1 ^= v2, v2 = rol64(v2, 32)  \
+-)
++#define PRND_SIPROUND(v0, v1, v2, v3) SIPHASH_PERMUTATION(v0, v1, v2, v3)
+ 
+-#define PRND_K0 (0x736f6d6570736575 ^ 0x6c7967656e657261)
+-#define PRND_K1 (0x646f72616e646f6d ^ 0x7465646279746573)
++#define PRND_K0 (SIPHASH_CONST_0 ^ SIPHASH_CONST_2)
++#define PRND_K1 (SIPHASH_CONST_1 ^ SIPHASH_CONST_3)
+ 
+ #elif BITS_PER_LONG == 32
+ /*
+@@ -43,14 +39,9 @@ DECLARE_PER_CPU(unsigned long, net_rand_noise);
+  * This is weaker, but 32-bit machines are not used for high-traffic
+  * applications, so there is less output for an attacker to analyze.
+  */
+-#define PRND_SIPROUND(v0, v1, v2, v3) ( \
+-	v0 += v1, v1 = rol32(v1,  5),  v2 += v3, v3 = rol32(v3,  8), \
+-	v1 ^= v0, v0 = rol32(v0, 16),  v3 ^= v2,                     \
+-	v0 += v3, v3 = rol32(v3,  7),  v2 += v1, v1 = rol32(v1, 13), \
+-	v3 ^= v0,                      v1 ^= v2, v2 = rol32(v2, 16)  \
+-)
+-#define PRND_K0 0x6c796765
+-#define PRND_K1 0x74656462
++#define PRND_SIPROUND(v0, v1, v2, v3) HSIPHASH_PERMUTATION(v0, v1, v2, v3)
++#define PRND_K0 (HSIPHASH_CONST_0 ^ HSIPHASH_CONST_2)
++#define PRND_K1 (HSIPHASH_CONST_1 ^ HSIPHASH_CONST_3)
+ 
+ #else
+ #error Unsupported BITS_PER_LONG
+diff --git a/include/linux/random.h b/include/linux/random.h
+index f45b8be3e3c4e..917470c4490ac 100644
+--- a/include/linux/random.h
++++ b/include/linux/random.h
+@@ -1,9 +1,5 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- * include/linux/random.h
+- *
+- * Include file for the random number generator.
+- */
++
+ #ifndef _LINUX_RANDOM_H
+ #define _LINUX_RANDOM_H
+ 
+@@ -14,41 +10,26 @@
+ 
+ #include <uapi/linux/random.h>
+ 
+-struct random_ready_callback {
+-	struct list_head list;
+-	void (*func)(struct random_ready_callback *rdy);
+-	struct module *owner;
+-};
++struct notifier_block;
+ 
+-extern void add_device_randomness(const void *, unsigned int);
+-extern void add_bootloader_randomness(const void *, unsigned int);
++void add_device_randomness(const void *buf, size_t len);
++void add_bootloader_randomness(const void *buf, size_t len);
++void add_input_randomness(unsigned int type, unsigned int code,
++			  unsigned int value) __latent_entropy;
++void add_interrupt_randomness(int irq) __latent_entropy;
++void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy);
+ 
+ #if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__)
+ static inline void add_latent_entropy(void)
+ {
+-	add_device_randomness((const void *)&latent_entropy,
+-			      sizeof(latent_entropy));
++	add_device_randomness((const void *)&latent_entropy, sizeof(latent_entropy));
+ }
+ #else
+-static inline void add_latent_entropy(void) {}
+-#endif
+-
+-extern void add_input_randomness(unsigned int type, unsigned int code,
+-				 unsigned int value) __latent_entropy;
+-extern void add_interrupt_randomness(int irq, int irq_flags) __latent_entropy;
+-
+-extern void get_random_bytes(void *buf, int nbytes);
+-extern int wait_for_random_bytes(void);
+-extern int __init rand_initialize(void);
+-extern bool rng_is_initialized(void);
+-extern int add_random_ready_callback(struct random_ready_callback *rdy);
+-extern void del_random_ready_callback(struct random_ready_callback *rdy);
+-extern int __must_check get_random_bytes_arch(void *buf, int nbytes);
+-
+-#ifndef MODULE
+-extern const struct file_operations random_fops, urandom_fops;
++static inline void add_latent_entropy(void) { }
+ #endif
+ 
++void get_random_bytes(void *buf, size_t len);
++size_t __must_check get_random_bytes_arch(void *buf, size_t len);
+ u32 get_random_u32(void);
+ u64 get_random_u64(void);
+ static inline unsigned int get_random_int(void)
+@@ -80,36 +61,38 @@ static inline unsigned long get_random_long(void)
+ 
+ static inline unsigned long get_random_canary(void)
+ {
+-	unsigned long val = get_random_long();
+-
+-	return val & CANARY_MASK;
++	return get_random_long() & CANARY_MASK;
+ }
+ 
++int __init random_init(const char *command_line);
++bool rng_is_initialized(void);
++int wait_for_random_bytes(void);
++int register_random_ready_notifier(struct notifier_block *nb);
++int unregister_random_ready_notifier(struct notifier_block *nb);
++
+ /* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbytes).
+  * Returns the result of the call to wait_for_random_bytes. */
+-static inline int get_random_bytes_wait(void *buf, int nbytes)
++static inline int get_random_bytes_wait(void *buf, size_t nbytes)
+ {
+ 	int ret = wait_for_random_bytes();
+ 	get_random_bytes(buf, nbytes);
+ 	return ret;
+ }
+ 
+-#define declare_get_random_var_wait(var) \
+-	static inline int get_random_ ## var ## _wait(var *out) { \
++#define declare_get_random_var_wait(name, ret_type) \
++	static inline int get_random_ ## name ## _wait(ret_type *out) { \
+ 		int ret = wait_for_random_bytes(); \
+ 		if (unlikely(ret)) \
+ 			return ret; \
+-		*out = get_random_ ## var(); \
++		*out = get_random_ ## name(); \
+ 		return 0; \
+ 	}
+-declare_get_random_var_wait(u32)
+-declare_get_random_var_wait(u64)
+-declare_get_random_var_wait(int)
+-declare_get_random_var_wait(long)
++declare_get_random_var_wait(u32, u32)
++declare_get_random_var_wait(u64, u32)
++declare_get_random_var_wait(int, unsigned int)
++declare_get_random_var_wait(long, unsigned long)
+ #undef declare_get_random_var
+ 
+-unsigned long randomize_page(unsigned long start, unsigned long range);
+-
+ /*
+  * This is designed to be standalone for just prandom
+  * users, but for now we include it from <linux/random.h>
+@@ -120,22 +103,10 @@ unsigned long randomize_page(unsigned long start, unsigned long range);
+ #ifdef CONFIG_ARCH_RANDOM
+ # include <asm/archrandom.h>
+ #else
+-static inline bool __must_check arch_get_random_long(unsigned long *v)
+-{
+-	return false;
+-}
+-static inline bool __must_check arch_get_random_int(unsigned int *v)
+-{
+-	return false;
+-}
+-static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
+-{
+-	return false;
+-}
+-static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
+-{
+-	return false;
+-}
++static inline bool __must_check arch_get_random_long(unsigned long *v) { return false; }
++static inline bool __must_check arch_get_random_int(unsigned int *v) { return false; }
++static inline bool __must_check arch_get_random_seed_long(unsigned long *v) { return false; }
++static inline bool __must_check arch_get_random_seed_int(unsigned int *v) { return false; }
+ #endif
+ 
+ /*
+@@ -158,4 +129,13 @@ static inline bool __init arch_get_random_long_early(unsigned long *v)
+ }
+ #endif
+ 
++#ifdef CONFIG_SMP
++int random_prepare_cpu(unsigned int cpu);
++int random_online_cpu(unsigned int cpu);
++#endif
++
++#ifndef MODULE
++extern const struct file_operations random_fops, urandom_fops;
++#endif
++
+ #endif /* _LINUX_RANDOM_H */
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 35355429648e3..330029ef7e894 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -121,10 +121,12 @@ enum lockdown_reason {
+ 	LOCKDOWN_DEBUGFS,
+ 	LOCKDOWN_XMON_WR,
+ 	LOCKDOWN_BPF_WRITE_USER,
++	LOCKDOWN_DBG_WRITE_KERNEL,
+ 	LOCKDOWN_INTEGRITY_MAX,
+ 	LOCKDOWN_KCORE,
+ 	LOCKDOWN_KPROBES,
+ 	LOCKDOWN_BPF_READ,
++	LOCKDOWN_DBG_READ_KERNEL,
+ 	LOCKDOWN_PERF,
+ 	LOCKDOWN_TRACEFS,
+ 	LOCKDOWN_XMON_RW,
+diff --git a/include/linux/siphash.h b/include/linux/siphash.h
+index 0cda61855d907..0bb5ecd507bef 100644
+--- a/include/linux/siphash.h
++++ b/include/linux/siphash.h
+@@ -136,4 +136,32 @@ static inline u32 hsiphash(const void *data, size_t len,
+ 	return ___hsiphash_aligned(data, len, key);
+ }
+ 
++/*
++ * These macros expose the raw SipHash and HalfSipHash permutations.
++ * Do not use them directly! If you think you have a use for them,
++ * be sure to CC the maintainer of this file explaining why.
++ */
++
++#define SIPHASH_PERMUTATION(a, b, c, d) ( \
++	(a) += (b), (b) = rol64((b), 13), (b) ^= (a), (a) = rol64((a), 32), \
++	(c) += (d), (d) = rol64((d), 16), (d) ^= (c), \
++	(a) += (d), (d) = rol64((d), 21), (d) ^= (a), \
++	(c) += (b), (b) = rol64((b), 17), (b) ^= (c), (c) = rol64((c), 32))
++
++#define SIPHASH_CONST_0 0x736f6d6570736575ULL
++#define SIPHASH_CONST_1 0x646f72616e646f6dULL
++#define SIPHASH_CONST_2 0x6c7967656e657261ULL
++#define SIPHASH_CONST_3 0x7465646279746573ULL
++
++#define HSIPHASH_PERMUTATION(a, b, c, d) ( \
++	(a) += (b), (b) = rol32((b), 5), (b) ^= (a), (a) = rol32((a), 16), \
++	(c) += (d), (d) = rol32((d), 8), (d) ^= (c), \
++	(a) += (d), (d) = rol32((d), 7), (d) ^= (a), \
++	(c) += (b), (b) = rol32((b), 13), (b) ^= (c), (c) = rol32((c), 16))
++
++#define HSIPHASH_CONST_0 0U
++#define HSIPHASH_CONST_1 0U
++#define HSIPHASH_CONST_2 0x6c796765U
++#define HSIPHASH_CONST_3 0x74656462U
++
+ #endif /* _LINUX_SIPHASH_H */
+diff --git a/include/linux/timex.h b/include/linux/timex.h
+index ce08597636705..2efab9a806a9d 100644
+--- a/include/linux/timex.h
++++ b/include/linux/timex.h
+@@ -62,6 +62,8 @@
+ #include <linux/types.h>
+ #include <linux/param.h>
+ 
++unsigned long random_get_entropy_fallback(void);
++
+ #include <asm/timex.h>
+ 
+ #ifndef random_get_entropy
+@@ -74,8 +76,14 @@
+  *
+  * By default we use get_cycles() for this purpose, but individual
+  * architectures may override this in their asm/timex.h header file.
++ * If a given arch does not have get_cycles(), then we fallback to
++ * using random_get_entropy_fallback().
+  */
+-#define random_get_entropy()	get_cycles()
++#ifdef get_cycles
++#define random_get_entropy()	((unsigned long)get_cycles())
++#else
++#define random_get_entropy()	random_get_entropy_fallback()
++#endif
+ #endif
+ 
+ /*
+diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h
+index ca6a3ea9057ec..d4d611064a76f 100644
+--- a/include/net/inet_hashtables.h
++++ b/include/net/inet_hashtables.h
+@@ -419,7 +419,7 @@ static inline void sk_rcv_saddr_set(struct sock *sk, __be32 addr)
+ }
+ 
+ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+-			struct sock *sk, u32 port_offset,
++			struct sock *sk, u64 port_offset,
+ 			int (*check_established)(struct inet_timewait_death_row *,
+ 						 struct sock *, __u16,
+ 						 struct inet_timewait_sock **));
+diff --git a/include/net/secure_seq.h b/include/net/secure_seq.h
+index d7d2495f83c27..dac91aa38c5af 100644
+--- a/include/net/secure_seq.h
++++ b/include/net/secure_seq.h
+@@ -4,8 +4,8 @@
+ 
+ #include <linux/types.h>
+ 
+-u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport);
+-u32 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
++u64 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport);
++u64 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
+ 			       __be16 dport);
+ u32 secure_tcp_seq(__be32 saddr, __be32 daddr,
+ 		   __be16 sport, __be16 dport);
+diff --git a/include/trace/events/random.h b/include/trace/events/random.h
+deleted file mode 100644
+index 9570a10cb949b..0000000000000
+--- a/include/trace/events/random.h
++++ /dev/null
+@@ -1,330 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#undef TRACE_SYSTEM
+-#define TRACE_SYSTEM random
+-
+-#if !defined(_TRACE_RANDOM_H) || defined(TRACE_HEADER_MULTI_READ)
+-#define _TRACE_RANDOM_H
+-
+-#include <linux/writeback.h>
+-#include <linux/tracepoint.h>
+-
+-TRACE_EVENT(add_device_randomness,
+-	TP_PROTO(int bytes, unsigned long IP),
+-
+-	TP_ARGS(bytes, IP),
+-
+-	TP_STRUCT__entry(
+-		__field(	  int,	bytes			)
+-		__field(unsigned long,	IP			)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->bytes		= bytes;
+-		__entry->IP		= IP;
+-	),
+-
+-	TP_printk("bytes %d caller %pS",
+-		__entry->bytes, (void *)__entry->IP)
+-);
+-
+-DECLARE_EVENT_CLASS(random__mix_pool_bytes,
+-	TP_PROTO(const char *pool_name, int bytes, unsigned long IP),
+-
+-	TP_ARGS(pool_name, bytes, IP),
+-
+-	TP_STRUCT__entry(
+-		__field( const char *,	pool_name		)
+-		__field(	  int,	bytes			)
+-		__field(unsigned long,	IP			)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->pool_name	= pool_name;
+-		__entry->bytes		= bytes;
+-		__entry->IP		= IP;
+-	),
+-
+-	TP_printk("%s pool: bytes %d caller %pS",
+-		  __entry->pool_name, __entry->bytes, (void *)__entry->IP)
+-);
+-
+-DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes,
+-	TP_PROTO(const char *pool_name, int bytes, unsigned long IP),
+-
+-	TP_ARGS(pool_name, bytes, IP)
+-);
+-
+-DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes_nolock,
+-	TP_PROTO(const char *pool_name, int bytes, unsigned long IP),
+-
+-	TP_ARGS(pool_name, bytes, IP)
+-);
+-
+-TRACE_EVENT(credit_entropy_bits,
+-	TP_PROTO(const char *pool_name, int bits, int entropy_count,
+-		 unsigned long IP),
+-
+-	TP_ARGS(pool_name, bits, entropy_count, IP),
+-
+-	TP_STRUCT__entry(
+-		__field( const char *,	pool_name		)
+-		__field(	  int,	bits			)
+-		__field(	  int,	entropy_count		)
+-		__field(unsigned long,	IP			)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->pool_name	= pool_name;
+-		__entry->bits		= bits;
+-		__entry->entropy_count	= entropy_count;
+-		__entry->IP		= IP;
+-	),
+-
+-	TP_printk("%s pool: bits %d entropy_count %d caller %pS",
+-		  __entry->pool_name, __entry->bits,
+-		  __entry->entropy_count, (void *)__entry->IP)
+-);
+-
+-TRACE_EVENT(push_to_pool,
+-	TP_PROTO(const char *pool_name, int pool_bits, int input_bits),
+-
+-	TP_ARGS(pool_name, pool_bits, input_bits),
+-
+-	TP_STRUCT__entry(
+-		__field( const char *,	pool_name		)
+-		__field(	  int,	pool_bits		)
+-		__field(	  int,	input_bits		)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->pool_name	= pool_name;
+-		__entry->pool_bits	= pool_bits;
+-		__entry->input_bits	= input_bits;
+-	),
+-
+-	TP_printk("%s: pool_bits %d input_pool_bits %d",
+-		  __entry->pool_name, __entry->pool_bits,
+-		  __entry->input_bits)
+-);
+-
+-TRACE_EVENT(debit_entropy,
+-	TP_PROTO(const char *pool_name, int debit_bits),
+-
+-	TP_ARGS(pool_name, debit_bits),
+-
+-	TP_STRUCT__entry(
+-		__field( const char *,	pool_name		)
+-		__field(	  int,	debit_bits		)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->pool_name	= pool_name;
+-		__entry->debit_bits	= debit_bits;
+-	),
+-
+-	TP_printk("%s: debit_bits %d", __entry->pool_name,
+-		  __entry->debit_bits)
+-);
+-
+-TRACE_EVENT(add_input_randomness,
+-	TP_PROTO(int input_bits),
+-
+-	TP_ARGS(input_bits),
+-
+-	TP_STRUCT__entry(
+-		__field(	  int,	input_bits		)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->input_bits	= input_bits;
+-	),
+-
+-	TP_printk("input_pool_bits %d", __entry->input_bits)
+-);
+-
+-TRACE_EVENT(add_disk_randomness,
+-	TP_PROTO(dev_t dev, int input_bits),
+-
+-	TP_ARGS(dev, input_bits),
+-
+-	TP_STRUCT__entry(
+-		__field(	dev_t,	dev			)
+-		__field(	  int,	input_bits		)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->dev		= dev;
+-		__entry->input_bits	= input_bits;
+-	),
+-
+-	TP_printk("dev %d,%d input_pool_bits %d", MAJOR(__entry->dev),
+-		  MINOR(__entry->dev), __entry->input_bits)
+-);
+-
+-TRACE_EVENT(xfer_secondary_pool,
+-	TP_PROTO(const char *pool_name, int xfer_bits, int request_bits,
+-		 int pool_entropy, int input_entropy),
+-
+-	TP_ARGS(pool_name, xfer_bits, request_bits, pool_entropy,
+-		input_entropy),
+-
+-	TP_STRUCT__entry(
+-		__field( const char *,	pool_name		)
+-		__field(	  int,	xfer_bits		)
+-		__field(	  int,	request_bits		)
+-		__field(	  int,	pool_entropy		)
+-		__field(	  int,	input_entropy		)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->pool_name	= pool_name;
+-		__entry->xfer_bits	= xfer_bits;
+-		__entry->request_bits	= request_bits;
+-		__entry->pool_entropy	= pool_entropy;
+-		__entry->input_entropy	= input_entropy;
+-	),
+-
+-	TP_printk("pool %s xfer_bits %d request_bits %d pool_entropy %d "
+-		  "input_entropy %d", __entry->pool_name, __entry->xfer_bits,
+-		  __entry->request_bits, __entry->pool_entropy,
+-		  __entry->input_entropy)
+-);
+-
+-DECLARE_EVENT_CLASS(random__get_random_bytes,
+-	TP_PROTO(int nbytes, unsigned long IP),
+-
+-	TP_ARGS(nbytes, IP),
+-
+-	TP_STRUCT__entry(
+-		__field(	  int,	nbytes			)
+-		__field(unsigned long,	IP			)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->nbytes		= nbytes;
+-		__entry->IP		= IP;
+-	),
+-
+-	TP_printk("nbytes %d caller %pS", __entry->nbytes, (void *)__entry->IP)
+-);
+-
+-DEFINE_EVENT(random__get_random_bytes, get_random_bytes,
+-	TP_PROTO(int nbytes, unsigned long IP),
+-
+-	TP_ARGS(nbytes, IP)
+-);
+-
+-DEFINE_EVENT(random__get_random_bytes, get_random_bytes_arch,
+-	TP_PROTO(int nbytes, unsigned long IP),
+-
+-	TP_ARGS(nbytes, IP)
+-);
+-
+-DECLARE_EVENT_CLASS(random__extract_entropy,
+-	TP_PROTO(const char *pool_name, int nbytes, int entropy_count,
+-		 unsigned long IP),
+-
+-	TP_ARGS(pool_name, nbytes, entropy_count, IP),
+-
+-	TP_STRUCT__entry(
+-		__field( const char *,	pool_name		)
+-		__field(	  int,	nbytes			)
+-		__field(	  int,	entropy_count		)
+-		__field(unsigned long,	IP			)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->pool_name	= pool_name;
+-		__entry->nbytes		= nbytes;
+-		__entry->entropy_count	= entropy_count;
+-		__entry->IP		= IP;
+-	),
+-
+-	TP_printk("%s pool: nbytes %d entropy_count %d caller %pS",
+-		  __entry->pool_name, __entry->nbytes, __entry->entropy_count,
+-		  (void *)__entry->IP)
+-);
+-
+-
+-DEFINE_EVENT(random__extract_entropy, extract_entropy,
+-	TP_PROTO(const char *pool_name, int nbytes, int entropy_count,
+-		 unsigned long IP),
+-
+-	TP_ARGS(pool_name, nbytes, entropy_count, IP)
+-);
+-
+-DEFINE_EVENT(random__extract_entropy, extract_entropy_user,
+-	TP_PROTO(const char *pool_name, int nbytes, int entropy_count,
+-		 unsigned long IP),
+-
+-	TP_ARGS(pool_name, nbytes, entropy_count, IP)
+-);
+-
+-TRACE_EVENT(random_read,
+-	TP_PROTO(int got_bits, int need_bits, int pool_left, int input_left),
+-
+-	TP_ARGS(got_bits, need_bits, pool_left, input_left),
+-
+-	TP_STRUCT__entry(
+-		__field(	  int,	got_bits		)
+-		__field(	  int,	need_bits		)
+-		__field(	  int,	pool_left		)
+-		__field(	  int,	input_left		)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->got_bits	= got_bits;
+-		__entry->need_bits	= need_bits;
+-		__entry->pool_left	= pool_left;
+-		__entry->input_left	= input_left;
+-	),
+-
+-	TP_printk("got_bits %d still_needed_bits %d "
+-		  "blocking_pool_entropy_left %d input_entropy_left %d",
+-		  __entry->got_bits, __entry->got_bits, __entry->pool_left,
+-		  __entry->input_left)
+-);
+-
+-TRACE_EVENT(urandom_read,
+-	TP_PROTO(int got_bits, int pool_left, int input_left),
+-
+-	TP_ARGS(got_bits, pool_left, input_left),
+-
+-	TP_STRUCT__entry(
+-		__field(	  int,	got_bits		)
+-		__field(	  int,	pool_left		)
+-		__field(	  int,	input_left		)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->got_bits	= got_bits;
+-		__entry->pool_left	= pool_left;
+-		__entry->input_left	= input_left;
+-	),
+-
+-	TP_printk("got_bits %d nonblocking_pool_entropy_left %d "
+-		  "input_entropy_left %d", __entry->got_bits,
+-		  __entry->pool_left, __entry->input_left)
+-);
+-
+-TRACE_EVENT(prandom_u32,
+-
+-	TP_PROTO(unsigned int ret),
+-
+-	TP_ARGS(ret),
+-
+-	TP_STRUCT__entry(
+-		__field(   unsigned int, ret)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->ret = ret;
+-	),
+-
+-	TP_printk("ret=%u" , __entry->ret)
+-);
+-
+-#endif /* _TRACE_RANDOM_H */
+-
+-/* This part must be outside protection */
+-#include <trace/define_trace.h>
+diff --git a/init/main.c b/init/main.c
+index 3526eaec7508f..d8bfe61b5a889 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -952,21 +952,18 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 	hrtimers_init();
+ 	softirq_init();
+ 	timekeeping_init();
++	time_init();
+ 
+ 	/*
+ 	 * For best initial stack canary entropy, prepare it after:
+ 	 * - setup_arch() for any UEFI RNG entropy and boot cmdline access
+-	 * - timekeeping_init() for ktime entropy used in rand_initialize()
+-	 * - rand_initialize() to get any arch-specific entropy like RDRAND
+-	 * - add_latent_entropy() to get any latent entropy
+-	 * - adding command line entropy
++	 * - timekeeping_init() for ktime entropy used in random_init()
++	 * - time_init() for making random_get_entropy() work on some platforms
++	 * - random_init() to initialize the RNG from from early entropy sources
+ 	 */
+-	rand_initialize();
+-	add_latent_entropy();
+-	add_device_randomness(command_line, strlen(command_line));
++	random_init(command_line);
+ 	boot_init_stack_canary();
+ 
+-	time_init();
+ 	perf_event_init();
+ 	profile_init();
+ 	call_function_init();
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index c06ced18f78ad..3c9ee966c56a5 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -34,6 +34,7 @@
+ #include <linux/scs.h>
+ #include <linux/percpu-rwsem.h>
+ #include <linux/cpuset.h>
++#include <linux/random.h>
+ 
+ #include <trace/events/power.h>
+ #define CREATE_TRACE_POINTS
+@@ -1581,6 +1582,11 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ 		.startup.single		= perf_event_init_cpu,
+ 		.teardown.single	= perf_event_exit_cpu,
+ 	},
++	[CPUHP_RANDOM_PREPARE] = {
++		.name			= "random:prepare",
++		.startup.single		= random_prepare_cpu,
++		.teardown.single	= NULL,
++	},
+ 	[CPUHP_WORKQUEUE_PREP] = {
+ 		.name			= "workqueue:prepare",
+ 		.startup.single		= workqueue_prepare_cpu,
+@@ -1697,6 +1703,11 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ 		.startup.single		= workqueue_online_cpu,
+ 		.teardown.single	= workqueue_offline_cpu,
+ 	},
++	[CPUHP_AP_RANDOM_ONLINE] = {
++		.name			= "random:online",
++		.startup.single		= random_online_cpu,
++		.teardown.single	= NULL,
++	},
+ 	[CPUHP_AP_RCUTREE_ONLINE] = {
+ 		.name			= "RCU/tree:online",
+ 		.startup.single		= rcutree_online_cpu,
+diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
+index 8661eb2b17711..0f31b22abe8d9 100644
+--- a/kernel/debug/debug_core.c
++++ b/kernel/debug/debug_core.c
+@@ -56,6 +56,7 @@
+ #include <linux/vmacache.h>
+ #include <linux/rcupdate.h>
+ #include <linux/irq.h>
++#include <linux/security.h>
+ 
+ #include <asm/cacheflush.h>
+ #include <asm/byteorder.h>
+@@ -756,6 +757,29 @@ cpu_master_loop:
+ 				continue;
+ 			kgdb_connected = 0;
+ 		} else {
++			/*
++			 * This is a brutal way to interfere with the debugger
++			 * and prevent gdb being used to poke at kernel memory.
++			 * This could cause trouble if lockdown is applied when
++			 * there is already an active gdb session. For now the
++			 * answer is simply "don't do that". Typically lockdown
++			 * *will* be applied before the debug core gets started
++			 * so only developers using kgdb for fairly advanced
++			 * early kernel debug can be biten by this. Hopefully
++			 * they are sophisticated enough to take care of
++			 * themselves, especially with help from the lockdown
++			 * message printed on the console!
++			 */
++			if (security_locked_down(LOCKDOWN_DBG_WRITE_KERNEL)) {
++				if (IS_ENABLED(CONFIG_KGDB_KDB)) {
++					/* Switch back to kdb if possible... */
++					dbg_kdb_mode = 1;
++					continue;
++				} else {
++					/* ... otherwise just bail */
++					break;
++				}
++			}
+ 			error = gdb_serial_stub(ks);
+ 		}
+ 
+diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
+index 930ac1b25ec7c..4e09fab52faf5 100644
+--- a/kernel/debug/kdb/kdb_main.c
++++ b/kernel/debug/kdb/kdb_main.c
+@@ -45,6 +45,7 @@
+ #include <linux/proc_fs.h>
+ #include <linux/uaccess.h>
+ #include <linux/slab.h>
++#include <linux/security.h>
+ #include "kdb_private.h"
+ 
+ #undef	MODULE_PARAM_PREFIX
+@@ -197,10 +198,62 @@ struct task_struct *kdb_curr_task(int cpu)
+ }
+ 
+ /*
+- * Check whether the flags of the current command and the permissions
+- * of the kdb console has allow a command to be run.
++ * Update the permissions flags (kdb_cmd_enabled) to match the
++ * current lockdown state.
++ *
++ * Within this function the calls to security_locked_down() are "lazy". We
++ * avoid calling them if the current value of kdb_cmd_enabled already excludes
++ * flags that might be subject to lockdown. Additionally we deliberately check
++ * the lockdown flags independently (even though read lockdown implies write
++ * lockdown) since that results in both simpler code and clearer messages to
++ * the user on first-time debugger entry.
++ *
++ * The permission masks during a read+write lockdown permits the following
++ * flags: INSPECT, SIGNAL, REBOOT (and ALWAYS_SAFE).
++ *
++ * The INSPECT commands are not blocked during lockdown because they are
++ * not arbitrary memory reads. INSPECT covers the backtrace family (sometimes
++ * forcing them to have no arguments) and lsmod. These commands do expose
++ * some kernel state but do not allow the developer seated at the console to
++ * choose what state is reported. SIGNAL and REBOOT should not be controversial,
++ * given these are allowed for root during lockdown already.
++ */
++static void kdb_check_for_lockdown(void)
++{
++	const int write_flags = KDB_ENABLE_MEM_WRITE |
++				KDB_ENABLE_REG_WRITE |
++				KDB_ENABLE_FLOW_CTRL;
++	const int read_flags = KDB_ENABLE_MEM_READ |
++			       KDB_ENABLE_REG_READ;
++
++	bool need_to_lockdown_write = false;
++	bool need_to_lockdown_read = false;
++
++	if (kdb_cmd_enabled & (KDB_ENABLE_ALL | write_flags))
++		need_to_lockdown_write =
++			security_locked_down(LOCKDOWN_DBG_WRITE_KERNEL);
++
++	if (kdb_cmd_enabled & (KDB_ENABLE_ALL | read_flags))
++		need_to_lockdown_read =
++			security_locked_down(LOCKDOWN_DBG_READ_KERNEL);
++
++	/* De-compose KDB_ENABLE_ALL if required */
++	if (need_to_lockdown_write || need_to_lockdown_read)
++		if (kdb_cmd_enabled & KDB_ENABLE_ALL)
++			kdb_cmd_enabled = KDB_ENABLE_MASK & ~KDB_ENABLE_ALL;
++
++	if (need_to_lockdown_write)
++		kdb_cmd_enabled &= ~write_flags;
++
++	if (need_to_lockdown_read)
++		kdb_cmd_enabled &= ~read_flags;
++}
++
++/*
++ * Check whether the flags of the current command, the permissions of the kdb
++ * console and the lockdown state allow a command to be run.
+  */
+-static inline bool kdb_check_flags(kdb_cmdflags_t flags, int permissions,
++static bool kdb_check_flags(kdb_cmdflags_t flags, int permissions,
+ 				   bool no_args)
+ {
+ 	/* permissions comes from userspace so needs massaging slightly */
+@@ -1194,6 +1247,9 @@ static int kdb_local(kdb_reason_t reason, int error, struct pt_regs *regs,
+ 		kdb_curr_task(raw_smp_processor_id());
+ 
+ 	KDB_DEBUG_STATE("kdb_local 1", reason);
++
++	kdb_check_for_lockdown();
++
+ 	kdb_go_count = 0;
+ 	if (reason == KDB_REASON_DEBUG) {
+ 		/* special case below */
+diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
+index 762a928e18f92..8806444a68550 100644
+--- a/kernel/irq/handle.c
++++ b/kernel/irq/handle.c
+@@ -195,7 +195,7 @@ irqreturn_t handle_irq_event_percpu(struct irq_desc *desc)
+ 
+ 	retval = __handle_irq_event_percpu(desc, &flags);
+ 
+-	add_interrupt_randomness(desc->irq_data.irq, flags);
++	add_interrupt_randomness(desc->irq_data.irq);
+ 
+ 	if (!noirqdebug)
+ 		note_interrupt(desc, retval);
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index cc4dc2857a870..e12ce2821dba5 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -17,6 +17,7 @@
+ #include <linux/clocksource.h>
+ #include <linux/jiffies.h>
+ #include <linux/time.h>
++#include <linux/timex.h>
+ #include <linux/tick.h>
+ #include <linux/stop_machine.h>
+ #include <linux/pvclock_gtod.h>
+@@ -2378,6 +2379,20 @@ static int timekeeping_validate_timex(const struct __kernel_timex *txc)
+ 	return 0;
+ }
+ 
++/**
++ * random_get_entropy_fallback - Returns the raw clock source value,
++ * used by random.c for platforms with no valid random_get_entropy().
++ */
++unsigned long random_get_entropy_fallback(void)
++{
++	struct tk_read_base *tkr = &tk_core.timekeeper.tkr_mono;
++	struct clocksource *clock = READ_ONCE(tkr->clock);
++
++	if (unlikely(timekeeping_suspended || !clock))
++		return 0;
++	return clock->read(clock);
++}
++EXPORT_SYMBOL_GPL(random_get_entropy_fallback);
+ 
+ /**
+  * do_adjtimex() - Accessor function to NTP __do_adjtimex function
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 95f909540587c..3656fa8837834 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1426,8 +1426,7 @@ config WARN_ALL_UNSEEDED_RANDOM
+ 	  so architecture maintainers really need to do what they can
+ 	  to get the CRNG seeded sooner after the system is booted.
+ 	  However, since users cannot do anything actionable to
+-	  address this, by default the kernel will issue only a single
+-	  warning for the first use of unseeded randomness.
++	  address this, by default this option is disabled.
+ 
+ 	  Say Y here if you want to receive warnings for all uses of
+ 	  unseeded randomness.  This will be of use primarily for
+diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
+index 14c032de276e6..af3da5a8bde8d 100644
+--- a/lib/crypto/Kconfig
++++ b/lib/crypto/Kconfig
+@@ -1,7 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+ 
+-comment "Crypto library routines"
+-
+ config CRYPTO_LIB_AES
+ 	tristate
+ 
+@@ -9,14 +7,14 @@ config CRYPTO_LIB_ARC4
+ 	tristate
+ 
+ config CRYPTO_ARCH_HAVE_LIB_BLAKE2S
+-	tristate
++	bool
+ 	help
+ 	  Declares whether the architecture provides an arch-specific
+ 	  accelerated implementation of the Blake2s library interface,
+ 	  either builtin or as a module.
+ 
+ config CRYPTO_LIB_BLAKE2S_GENERIC
+-	tristate
++	def_bool !CRYPTO_ARCH_HAVE_LIB_BLAKE2S
+ 	help
+ 	  This symbol can be depended upon by arch implementations of the
+ 	  Blake2s library interface that require the generic code as a
+@@ -24,15 +22,6 @@ config CRYPTO_LIB_BLAKE2S_GENERIC
+ 	  implementation is enabled, this implementation serves the users
+ 	  of CRYPTO_LIB_BLAKE2S.
+ 
+-config CRYPTO_LIB_BLAKE2S
+-	tristate "BLAKE2s hash function library"
+-	depends on CRYPTO_ARCH_HAVE_LIB_BLAKE2S || !CRYPTO_ARCH_HAVE_LIB_BLAKE2S
+-	select CRYPTO_LIB_BLAKE2S_GENERIC if CRYPTO_ARCH_HAVE_LIB_BLAKE2S=n
+-	help
+-	  Enable the Blake2s library interface. This interface may be fulfilled
+-	  by either the generic implementation or an arch-specific one, if one
+-	  is available and enabled.
+-
+ config CRYPTO_ARCH_HAVE_LIB_CHACHA
+ 	tristate
+ 	help
+@@ -51,7 +40,7 @@ config CRYPTO_LIB_CHACHA_GENERIC
+ 	  of CRYPTO_LIB_CHACHA.
+ 
+ config CRYPTO_LIB_CHACHA
+-	tristate "ChaCha library interface"
++	tristate
+ 	depends on CRYPTO_ARCH_HAVE_LIB_CHACHA || !CRYPTO_ARCH_HAVE_LIB_CHACHA
+ 	select CRYPTO_LIB_CHACHA_GENERIC if CRYPTO_ARCH_HAVE_LIB_CHACHA=n
+ 	help
+@@ -76,7 +65,7 @@ config CRYPTO_LIB_CURVE25519_GENERIC
+ 	  of CRYPTO_LIB_CURVE25519.
+ 
+ config CRYPTO_LIB_CURVE25519
+-	tristate "Curve25519 scalar multiplication library"
++	tristate
+ 	depends on CRYPTO_ARCH_HAVE_LIB_CURVE25519 || !CRYPTO_ARCH_HAVE_LIB_CURVE25519
+ 	select CRYPTO_LIB_CURVE25519_GENERIC if CRYPTO_ARCH_HAVE_LIB_CURVE25519=n
+ 	help
+@@ -111,7 +100,7 @@ config CRYPTO_LIB_POLY1305_GENERIC
+ 	  of CRYPTO_LIB_POLY1305.
+ 
+ config CRYPTO_LIB_POLY1305
+-	tristate "Poly1305 library interface"
++	tristate
+ 	depends on CRYPTO_ARCH_HAVE_LIB_POLY1305 || !CRYPTO_ARCH_HAVE_LIB_POLY1305
+ 	select CRYPTO_LIB_POLY1305_GENERIC if CRYPTO_ARCH_HAVE_LIB_POLY1305=n
+ 	help
+@@ -120,7 +109,7 @@ config CRYPTO_LIB_POLY1305
+ 	  is available and enabled.
+ 
+ config CRYPTO_LIB_CHACHA20POLY1305
+-	tristate "ChaCha20-Poly1305 AEAD support (8-byte nonce library version)"
++	tristate
+ 	depends on CRYPTO_ARCH_HAVE_LIB_CHACHA || !CRYPTO_ARCH_HAVE_LIB_CHACHA
+ 	depends on CRYPTO_ARCH_HAVE_LIB_POLY1305 || !CRYPTO_ARCH_HAVE_LIB_POLY1305
+ 	select CRYPTO_LIB_CHACHA
+diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile
+index 3a435629d9ce9..26be2bbe09c59 100644
+--- a/lib/crypto/Makefile
++++ b/lib/crypto/Makefile
+@@ -10,11 +10,10 @@ libaes-y					:= aes.o
+ obj-$(CONFIG_CRYPTO_LIB_ARC4)			+= libarc4.o
+ libarc4-y					:= arc4.o
+ 
+-obj-$(CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC)	+= libblake2s-generic.o
+-libblake2s-generic-y				+= blake2s-generic.o
+-
+-obj-$(CONFIG_CRYPTO_LIB_BLAKE2S)		+= libblake2s.o
+-libblake2s-y					+= blake2s.o
++# blake2s is used by the /dev/random driver which is always builtin
++obj-y						+= libblake2s.o
++libblake2s-y					:= blake2s.o
++libblake2s-$(CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC)	+= blake2s-generic.o
+ 
+ obj-$(CONFIG_CRYPTO_LIB_CHACHA20POLY1305)	+= libchacha20poly1305.o
+ libchacha20poly1305-y				+= chacha20poly1305.o
+diff --git a/lib/crypto/blake2s-generic.c b/lib/crypto/blake2s-generic.c
+index 04ff8df245136..75ccb3e633e65 100644
+--- a/lib/crypto/blake2s-generic.c
++++ b/lib/crypto/blake2s-generic.c
+@@ -37,7 +37,11 @@ static inline void blake2s_increment_counter(struct blake2s_state *state,
+ 	state->t[1] += (state->t[0] < inc);
+ }
+ 
+-void blake2s_compress_generic(struct blake2s_state *state,const u8 *block,
++void blake2s_compress(struct blake2s_state *state, const u8 *block,
++		      size_t nblocks, const u32 inc)
++		      __weak __alias(blake2s_compress_generic);
++
++void blake2s_compress_generic(struct blake2s_state *state, const u8 *block,
+ 			      size_t nblocks, const u32 inc)
+ {
+ 	u32 m[16];
+diff --git a/lib/crypto/blake2s-selftest.c b/lib/crypto/blake2s-selftest.c
+index 79ef404a990d2..409e4b7287704 100644
+--- a/lib/crypto/blake2s-selftest.c
++++ b/lib/crypto/blake2s-selftest.c
+@@ -3,7 +3,7 @@
+  * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+  */
+ 
+-#include <crypto/blake2s.h>
++#include <crypto/internal/blake2s.h>
+ #include <linux/string.h>
+ 
+ /*
+@@ -15,7 +15,6 @@
+  * #include <stdio.h>
+  *
+  * #include <openssl/evp.h>
+- * #include <openssl/hmac.h>
+  *
+  * #define BLAKE2S_TESTVEC_COUNT	256
+  *
+@@ -58,16 +57,6 @@
+  *	}
+  *	printf("};\n\n");
+  *
+- *	printf("static const u8 blake2s_hmac_testvecs[][BLAKE2S_HASH_SIZE] __initconst = {\n");
+- *
+- *	HMAC(EVP_blake2s256(), key, sizeof(key), buf, sizeof(buf), hash, NULL);
+- *	print_vec(hash, BLAKE2S_OUTBYTES);
+- *
+- *	HMAC(EVP_blake2s256(), buf, sizeof(buf), key, sizeof(key), hash, NULL);
+- *	print_vec(hash, BLAKE2S_OUTBYTES);
+- *
+- *	printf("};\n");
+- *
+  *	return 0;
+  *}
+  */
+@@ -554,15 +543,6 @@ static const u8 blake2s_testvecs[][BLAKE2S_HASH_SIZE] __initconst = {
+     0xd6, 0x98, 0x6b, 0x07, 0x10, 0x65, 0x52, 0x65, },
+ };
+ 
+-static const u8 blake2s_hmac_testvecs[][BLAKE2S_HASH_SIZE] __initconst = {
+-  { 0xce, 0xe1, 0x57, 0x69, 0x82, 0xdc, 0xbf, 0x43, 0xad, 0x56, 0x4c, 0x70,
+-    0xed, 0x68, 0x16, 0x96, 0xcf, 0xa4, 0x73, 0xe8, 0xe8, 0xfc, 0x32, 0x79,
+-    0x08, 0x0a, 0x75, 0x82, 0xda, 0x3f, 0x05, 0x11, },
+-  { 0x77, 0x2f, 0x0c, 0x71, 0x41, 0xf4, 0x4b, 0x2b, 0xb3, 0xc6, 0xb6, 0xf9,
+-    0x60, 0xde, 0xe4, 0x52, 0x38, 0x66, 0xe8, 0xbf, 0x9b, 0x96, 0xc4, 0x9f,
+-    0x60, 0xd9, 0x24, 0x37, 0x99, 0xd6, 0xec, 0x31, },
+-};
+-
+ bool __init blake2s_selftest(void)
+ {
+ 	u8 key[BLAKE2S_KEY_SIZE];
+@@ -607,16 +587,5 @@ bool __init blake2s_selftest(void)
+ 		}
+ 	}
+ 
+-	if (success) {
+-		blake2s256_hmac(hash, buf, key, sizeof(buf), sizeof(key));
+-		success &= !memcmp(hash, blake2s_hmac_testvecs[0], BLAKE2S_HASH_SIZE);
+-
+-		blake2s256_hmac(hash, key, buf, sizeof(key), sizeof(buf));
+-		success &= !memcmp(hash, blake2s_hmac_testvecs[1], BLAKE2S_HASH_SIZE);
+-
+-		if (!success)
+-			pr_err("blake2s256_hmac self-test: FAIL\n");
+-	}
+-
+ 	return success;
+ }
+diff --git a/lib/crypto/blake2s.c b/lib/crypto/blake2s.c
+index 41025a30c524c..80b194f9a0a09 100644
+--- a/lib/crypto/blake2s.c
++++ b/lib/crypto/blake2s.c
+@@ -15,98 +15,21 @@
+ #include <linux/module.h>
+ #include <linux/init.h>
+ #include <linux/bug.h>
+-#include <asm/unaligned.h>
+-
+-bool blake2s_selftest(void);
+ 
+ void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen)
+ {
+-	const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen;
+-
+-	if (unlikely(!inlen))
+-		return;
+-	if (inlen > fill) {
+-		memcpy(state->buf + state->buflen, in, fill);
+-		if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S))
+-			blake2s_compress_arch(state, state->buf, 1,
+-					      BLAKE2S_BLOCK_SIZE);
+-		else
+-			blake2s_compress_generic(state, state->buf, 1,
+-						 BLAKE2S_BLOCK_SIZE);
+-		state->buflen = 0;
+-		in += fill;
+-		inlen -= fill;
+-	}
+-	if (inlen > BLAKE2S_BLOCK_SIZE) {
+-		const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE);
+-		/* Hash one less (full) block than strictly possible */
+-		if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S))
+-			blake2s_compress_arch(state, in, nblocks - 1,
+-					      BLAKE2S_BLOCK_SIZE);
+-		else
+-			blake2s_compress_generic(state, in, nblocks - 1,
+-						 BLAKE2S_BLOCK_SIZE);
+-		in += BLAKE2S_BLOCK_SIZE * (nblocks - 1);
+-		inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1);
+-	}
+-	memcpy(state->buf + state->buflen, in, inlen);
+-	state->buflen += inlen;
++	__blake2s_update(state, in, inlen, false);
+ }
+ EXPORT_SYMBOL(blake2s_update);
+ 
+ void blake2s_final(struct blake2s_state *state, u8 *out)
+ {
+ 	WARN_ON(IS_ENABLED(DEBUG) && !out);
+-	blake2s_set_lastblock(state);
+-	memset(state->buf + state->buflen, 0,
+-	       BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */
+-	if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S))
+-		blake2s_compress_arch(state, state->buf, 1, state->buflen);
+-	else
+-		blake2s_compress_generic(state, state->buf, 1, state->buflen);
+-	cpu_to_le32_array(state->h, ARRAY_SIZE(state->h));
+-	memcpy(out, state->h, state->outlen);
++	__blake2s_final(state, out, false);
+ 	memzero_explicit(state, sizeof(*state));
+ }
+ EXPORT_SYMBOL(blake2s_final);
+ 
+-void blake2s256_hmac(u8 *out, const u8 *in, const u8 *key, const size_t inlen,
+-		     const size_t keylen)
+-{
+-	struct blake2s_state state;
+-	u8 x_key[BLAKE2S_BLOCK_SIZE] __aligned(__alignof__(u32)) = { 0 };
+-	u8 i_hash[BLAKE2S_HASH_SIZE] __aligned(__alignof__(u32));
+-	int i;
+-
+-	if (keylen > BLAKE2S_BLOCK_SIZE) {
+-		blake2s_init(&state, BLAKE2S_HASH_SIZE);
+-		blake2s_update(&state, key, keylen);
+-		blake2s_final(&state, x_key);
+-	} else
+-		memcpy(x_key, key, keylen);
+-
+-	for (i = 0; i < BLAKE2S_BLOCK_SIZE; ++i)
+-		x_key[i] ^= 0x36;
+-
+-	blake2s_init(&state, BLAKE2S_HASH_SIZE);
+-	blake2s_update(&state, x_key, BLAKE2S_BLOCK_SIZE);
+-	blake2s_update(&state, in, inlen);
+-	blake2s_final(&state, i_hash);
+-
+-	for (i = 0; i < BLAKE2S_BLOCK_SIZE; ++i)
+-		x_key[i] ^= 0x5c ^ 0x36;
+-
+-	blake2s_init(&state, BLAKE2S_HASH_SIZE);
+-	blake2s_update(&state, x_key, BLAKE2S_BLOCK_SIZE);
+-	blake2s_update(&state, i_hash, BLAKE2S_HASH_SIZE);
+-	blake2s_final(&state, i_hash);
+-
+-	memcpy(out, i_hash, BLAKE2S_HASH_SIZE);
+-	memzero_explicit(x_key, BLAKE2S_BLOCK_SIZE);
+-	memzero_explicit(i_hash, BLAKE2S_HASH_SIZE);
+-}
+-EXPORT_SYMBOL(blake2s256_hmac);
+-
+ static int __init mod_init(void)
+ {
+ 	if (!IS_ENABLED(CONFIG_CRYPTO_MANAGER_DISABLE_TESTS) &&
+diff --git a/lib/random32.c b/lib/random32.c
+index 4d0e05e471d72..f0ab17c2244be 100644
+--- a/lib/random32.c
++++ b/lib/random32.c
+@@ -39,8 +39,9 @@
+ #include <linux/random.h>
+ #include <linux/sched.h>
+ #include <linux/bitops.h>
++#include <linux/slab.h>
++#include <linux/notifier.h>
+ #include <asm/unaligned.h>
+-#include <trace/events/random.h>
+ 
+ /**
+  *	prandom_u32_state - seeded pseudo-random number generator.
+@@ -386,7 +387,6 @@ u32 prandom_u32(void)
+ 	struct siprand_state *state = get_cpu_ptr(&net_rand_state);
+ 	u32 res = siprand_u32(state);
+ 
+-	trace_prandom_u32(res);
+ 	put_cpu_ptr(&net_rand_state);
+ 	return res;
+ }
+@@ -552,9 +552,11 @@ static void prandom_reseed(struct timer_list *unused)
+  * To avoid worrying about whether it's safe to delay that interrupt
+  * long enough to seed all CPUs, just schedule an immediate timer event.
+  */
+-static void prandom_timer_start(struct random_ready_callback *unused)
++static int prandom_timer_start(struct notifier_block *nb,
++			       unsigned long action, void *data)
+ {
+ 	mod_timer(&seed_timer, jiffies);
++	return 0;
+ }
+ 
+ #ifdef CONFIG_RANDOM32_SELFTEST
+@@ -618,13 +620,13 @@ core_initcall(prandom32_state_selftest);
+  */
+ static int __init prandom_init_late(void)
+ {
+-	static struct random_ready_callback random_ready = {
+-		.func = prandom_timer_start
++	static struct notifier_block random_ready = {
++		.notifier_call = prandom_timer_start
+ 	};
+-	int ret = add_random_ready_callback(&random_ready);
++	int ret = register_random_ready_notifier(&random_ready);
+ 
+ 	if (ret == -EALREADY) {
+-		prandom_timer_start(&random_ready);
++		prandom_timer_start(&random_ready, 0, NULL);
+ 		ret = 0;
+ 	}
+ 	return ret;
+diff --git a/lib/sha1.c b/lib/sha1.c
+index 49257a915bb60..5ad4e49482728 100644
+--- a/lib/sha1.c
++++ b/lib/sha1.c
+@@ -9,6 +9,7 @@
+ #include <linux/kernel.h>
+ #include <linux/export.h>
+ #include <linux/bitops.h>
++#include <linux/string.h>
+ #include <crypto/sha.h>
+ #include <asm/unaligned.h>
+ 
+@@ -55,7 +56,8 @@
+ #define SHA_ROUND(t, input, fn, constant, A, B, C, D, E) do { \
+ 	__u32 TEMP = input(t); setW(t, TEMP); \
+ 	E += TEMP + rol32(A,5) + (fn) + (constant); \
+-	B = ror32(B, 2); } while (0)
++	B = ror32(B, 2); \
++	TEMP = E; E = D; D = C; C = B; B = A; A = TEMP; } while (0)
+ 
+ #define T_0_15(t, A, B, C, D, E)  SHA_ROUND(t, SHA_SRC, (((C^D)&B)^D) , 0x5a827999, A, B, C, D, E )
+ #define T_16_19(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, (((C^D)&B)^D) , 0x5a827999, A, B, C, D, E )
+@@ -84,6 +86,7 @@
+ void sha1_transform(__u32 *digest, const char *data, __u32 *array)
+ {
+ 	__u32 A, B, C, D, E;
++	unsigned int i = 0;
+ 
+ 	A = digest[0];
+ 	B = digest[1];
+@@ -92,94 +95,24 @@ void sha1_transform(__u32 *digest, const char *data, __u32 *array)
+ 	E = digest[4];
+ 
+ 	/* Round 1 - iterations 0-16 take their input from 'data' */
+-	T_0_15( 0, A, B, C, D, E);
+-	T_0_15( 1, E, A, B, C, D);
+-	T_0_15( 2, D, E, A, B, C);
+-	T_0_15( 3, C, D, E, A, B);
+-	T_0_15( 4, B, C, D, E, A);
+-	T_0_15( 5, A, B, C, D, E);
+-	T_0_15( 6, E, A, B, C, D);
+-	T_0_15( 7, D, E, A, B, C);
+-	T_0_15( 8, C, D, E, A, B);
+-	T_0_15( 9, B, C, D, E, A);
+-	T_0_15(10, A, B, C, D, E);
+-	T_0_15(11, E, A, B, C, D);
+-	T_0_15(12, D, E, A, B, C);
+-	T_0_15(13, C, D, E, A, B);
+-	T_0_15(14, B, C, D, E, A);
+-	T_0_15(15, A, B, C, D, E);
++	for (; i < 16; ++i)
++		T_0_15(i, A, B, C, D, E);
+ 
+ 	/* Round 1 - tail. Input from 512-bit mixing array */
+-	T_16_19(16, E, A, B, C, D);
+-	T_16_19(17, D, E, A, B, C);
+-	T_16_19(18, C, D, E, A, B);
+-	T_16_19(19, B, C, D, E, A);
++	for (; i < 20; ++i)
++		T_16_19(i, A, B, C, D, E);
+ 
+ 	/* Round 2 */
+-	T_20_39(20, A, B, C, D, E);
+-	T_20_39(21, E, A, B, C, D);
+-	T_20_39(22, D, E, A, B, C);
+-	T_20_39(23, C, D, E, A, B);
+-	T_20_39(24, B, C, D, E, A);
+-	T_20_39(25, A, B, C, D, E);
+-	T_20_39(26, E, A, B, C, D);
+-	T_20_39(27, D, E, A, B, C);
+-	T_20_39(28, C, D, E, A, B);
+-	T_20_39(29, B, C, D, E, A);
+-	T_20_39(30, A, B, C, D, E);
+-	T_20_39(31, E, A, B, C, D);
+-	T_20_39(32, D, E, A, B, C);
+-	T_20_39(33, C, D, E, A, B);
+-	T_20_39(34, B, C, D, E, A);
+-	T_20_39(35, A, B, C, D, E);
+-	T_20_39(36, E, A, B, C, D);
+-	T_20_39(37, D, E, A, B, C);
+-	T_20_39(38, C, D, E, A, B);
+-	T_20_39(39, B, C, D, E, A);
++	for (; i < 40; ++i)
++		T_20_39(i, A, B, C, D, E);
+ 
+ 	/* Round 3 */
+-	T_40_59(40, A, B, C, D, E);
+-	T_40_59(41, E, A, B, C, D);
+-	T_40_59(42, D, E, A, B, C);
+-	T_40_59(43, C, D, E, A, B);
+-	T_40_59(44, B, C, D, E, A);
+-	T_40_59(45, A, B, C, D, E);
+-	T_40_59(46, E, A, B, C, D);
+-	T_40_59(47, D, E, A, B, C);
+-	T_40_59(48, C, D, E, A, B);
+-	T_40_59(49, B, C, D, E, A);
+-	T_40_59(50, A, B, C, D, E);
+-	T_40_59(51, E, A, B, C, D);
+-	T_40_59(52, D, E, A, B, C);
+-	T_40_59(53, C, D, E, A, B);
+-	T_40_59(54, B, C, D, E, A);
+-	T_40_59(55, A, B, C, D, E);
+-	T_40_59(56, E, A, B, C, D);
+-	T_40_59(57, D, E, A, B, C);
+-	T_40_59(58, C, D, E, A, B);
+-	T_40_59(59, B, C, D, E, A);
++	for (; i < 60; ++i)
++		T_40_59(i, A, B, C, D, E);
+ 
+ 	/* Round 4 */
+-	T_60_79(60, A, B, C, D, E);
+-	T_60_79(61, E, A, B, C, D);
+-	T_60_79(62, D, E, A, B, C);
+-	T_60_79(63, C, D, E, A, B);
+-	T_60_79(64, B, C, D, E, A);
+-	T_60_79(65, A, B, C, D, E);
+-	T_60_79(66, E, A, B, C, D);
+-	T_60_79(67, D, E, A, B, C);
+-	T_60_79(68, C, D, E, A, B);
+-	T_60_79(69, B, C, D, E, A);
+-	T_60_79(70, A, B, C, D, E);
+-	T_60_79(71, E, A, B, C, D);
+-	T_60_79(72, D, E, A, B, C);
+-	T_60_79(73, C, D, E, A, B);
+-	T_60_79(74, B, C, D, E, A);
+-	T_60_79(75, A, B, C, D, E);
+-	T_60_79(76, E, A, B, C, D);
+-	T_60_79(77, D, E, A, B, C);
+-	T_60_79(78, C, D, E, A, B);
+-	T_60_79(79, B, C, D, E, A);
++	for (; i < 80; ++i)
++		T_60_79(i, A, B, C, D, E);
+ 
+ 	digest[0] += A;
+ 	digest[1] += B;
+diff --git a/lib/siphash.c b/lib/siphash.c
+index 025f0cbf6d7a7..b4055b1cc2f67 100644
+--- a/lib/siphash.c
++++ b/lib/siphash.c
+@@ -18,19 +18,13 @@
+ #include <asm/word-at-a-time.h>
+ #endif
+ 
+-#define SIPROUND \
+-	do { \
+-	v0 += v1; v1 = rol64(v1, 13); v1 ^= v0; v0 = rol64(v0, 32); \
+-	v2 += v3; v3 = rol64(v3, 16); v3 ^= v2; \
+-	v0 += v3; v3 = rol64(v3, 21); v3 ^= v0; \
+-	v2 += v1; v1 = rol64(v1, 17); v1 ^= v2; v2 = rol64(v2, 32); \
+-	} while (0)
++#define SIPROUND SIPHASH_PERMUTATION(v0, v1, v2, v3)
+ 
+ #define PREAMBLE(len) \
+-	u64 v0 = 0x736f6d6570736575ULL; \
+-	u64 v1 = 0x646f72616e646f6dULL; \
+-	u64 v2 = 0x6c7967656e657261ULL; \
+-	u64 v3 = 0x7465646279746573ULL; \
++	u64 v0 = SIPHASH_CONST_0; \
++	u64 v1 = SIPHASH_CONST_1; \
++	u64 v2 = SIPHASH_CONST_2; \
++	u64 v3 = SIPHASH_CONST_3; \
+ 	u64 b = ((u64)(len)) << 56; \
+ 	v3 ^= key->key[1]; \
+ 	v2 ^= key->key[0]; \
+@@ -389,19 +383,13 @@ u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
+ }
+ EXPORT_SYMBOL(hsiphash_4u32);
+ #else
+-#define HSIPROUND \
+-	do { \
+-	v0 += v1; v1 = rol32(v1, 5); v1 ^= v0; v0 = rol32(v0, 16); \
+-	v2 += v3; v3 = rol32(v3, 8); v3 ^= v2; \
+-	v0 += v3; v3 = rol32(v3, 7); v3 ^= v0; \
+-	v2 += v1; v1 = rol32(v1, 13); v1 ^= v2; v2 = rol32(v2, 16); \
+-	} while (0)
++#define HSIPROUND HSIPHASH_PERMUTATION(v0, v1, v2, v3)
+ 
+ #define HPREAMBLE(len) \
+-	u32 v0 = 0; \
+-	u32 v1 = 0; \
+-	u32 v2 = 0x6c796765U; \
+-	u32 v3 = 0x74656462U; \
++	u32 v0 = HSIPHASH_CONST_0; \
++	u32 v1 = HSIPHASH_CONST_1; \
++	u32 v2 = HSIPHASH_CONST_2; \
++	u32 v3 = HSIPHASH_CONST_3; \
+ 	u32 b = ((u32)(len)) << 24; \
+ 	v3 ^= key->key[1]; \
+ 	v2 ^= key->key[0]; \
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index 8ade1a86d8187..daf32a489dc06 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -756,14 +756,16 @@ static void enable_ptr_key_workfn(struct work_struct *work)
+ 
+ static DECLARE_WORK(enable_ptr_key_work, enable_ptr_key_workfn);
+ 
+-static void fill_random_ptr_key(struct random_ready_callback *unused)
++static int fill_random_ptr_key(struct notifier_block *nb,
++			       unsigned long action, void *data)
+ {
+ 	/* This may be in an interrupt handler. */
+ 	queue_work(system_unbound_wq, &enable_ptr_key_work);
++	return 0;
+ }
+ 
+-static struct random_ready_callback random_ready = {
+-	.func = fill_random_ptr_key
++static struct notifier_block random_ready = {
++	.notifier_call = fill_random_ptr_key
+ };
+ 
+ static int __init initialize_ptr_random(void)
+@@ -777,7 +779,7 @@ static int __init initialize_ptr_random(void)
+ 		return 0;
+ 	}
+ 
+-	ret = add_random_ready_callback(&random_ready);
++	ret = register_random_ready_notifier(&random_ready);
+ 	if (!ret) {
+ 		return 0;
+ 	} else if (ret == -EALREADY) {
+diff --git a/mm/util.c b/mm/util.c
+index 8904727607907..ba9643de689ea 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -331,6 +331,38 @@ unsigned long randomize_stack_top(unsigned long stack_top)
+ #endif
+ }
+ 
++/**
++ * randomize_page - Generate a random, page aligned address
++ * @start:	The smallest acceptable address the caller will take.
++ * @range:	The size of the area, starting at @start, within which the
++ *		random address must fall.
++ *
++ * If @start + @range would overflow, @range is capped.
++ *
++ * NOTE: Historical use of randomize_range, which this replaces, presumed that
++ * @start was already page aligned.  We now align it regardless.
++ *
++ * Return: A page aligned address within [start, start + range).  On error,
++ * @start is returned.
++ */
++unsigned long randomize_page(unsigned long start, unsigned long range)
++{
++	if (!PAGE_ALIGNED(start)) {
++		range -= PAGE_ALIGN(start) - start;
++		start = PAGE_ALIGN(start);
++	}
++
++	if (start > ULONG_MAX - range)
++		range = ULONG_MAX - start;
++
++	range >>= PAGE_SHIFT;
++
++	if (range == 0)
++		return start;
++
++	return start + (get_random_long() % range << PAGE_SHIFT);
++}
++
+ #ifdef CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
+ unsigned long arch_randomize_brk(struct mm_struct *mm)
+ {
+diff --git a/net/core/secure_seq.c b/net/core/secure_seq.c
+index b8a33c841846f..7131cd1fb2ad5 100644
+--- a/net/core/secure_seq.c
++++ b/net/core/secure_seq.c
+@@ -96,7 +96,7 @@ u32 secure_tcpv6_seq(const __be32 *saddr, const __be32 *daddr,
+ }
+ EXPORT_SYMBOL(secure_tcpv6_seq);
+ 
+-u32 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
++u64 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
+ 			       __be16 dport)
+ {
+ 	const struct {
+@@ -146,7 +146,7 @@ u32 secure_tcp_seq(__be32 saddr, __be32 daddr,
+ }
+ EXPORT_SYMBOL_GPL(secure_tcp_seq);
+ 
+-u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport)
++u64 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport)
+ {
+ 	net_secret_init();
+ 	return siphash_4u32((__force u32)saddr, (__force u32)daddr,
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 915b8e1bd9efb..44b524136f953 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -504,7 +504,7 @@ not_unique:
+ 	return -EADDRNOTAVAIL;
+ }
+ 
+-static u32 inet_sk_port_offset(const struct sock *sk)
++static u64 inet_sk_port_offset(const struct sock *sk)
+ {
+ 	const struct inet_sock *inet = inet_sk(sk);
+ 
+@@ -722,8 +722,19 @@ void inet_unhash(struct sock *sk)
+ }
+ EXPORT_SYMBOL_GPL(inet_unhash);
+ 
++/* RFC 6056 3.3.4.  Algorithm 4: Double-Hash Port Selection Algorithm
++ * Note that we use 32bit integers (vs RFC 'short integers')
++ * because 2^16 is not a multiple of num_ephemeral and this
++ * property might be used by clever attacker.
++ * RFC claims using TABLE_LENGTH=10 buckets gives an improvement,
++ * we use 256 instead to really give more isolation and
++ * privacy, this only consumes 1 KB of kernel memory.
++ */
++#define INET_TABLE_PERTURB_SHIFT 8
++static u32 table_perturb[1 << INET_TABLE_PERTURB_SHIFT];
++
+ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+-		struct sock *sk, u32 port_offset,
++		struct sock *sk, u64 port_offset,
+ 		int (*check_established)(struct inet_timewait_death_row *,
+ 			struct sock *, __u16, struct inet_timewait_sock **))
+ {
+@@ -735,8 +746,8 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+ 	struct inet_bind_bucket *tb;
+ 	u32 remaining, offset;
+ 	int ret, i, low, high;
+-	static u32 hint;
+ 	int l3mdev;
++	u32 index;
+ 
+ 	if (port) {
+ 		head = &hinfo->bhash[inet_bhashfn(net, port,
+@@ -763,7 +774,12 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+ 	if (likely(remaining > 1))
+ 		remaining &= ~1U;
+ 
+-	offset = (hint + port_offset) % remaining;
++	net_get_random_once(table_perturb, sizeof(table_perturb));
++	index = hash_32(port_offset, INET_TABLE_PERTURB_SHIFT);
++
++	offset = READ_ONCE(table_perturb[index]) + port_offset;
++	offset %= remaining;
++
+ 	/* In first pass we try ports of @low parity.
+ 	 * inet_csk_get_port() does the opposite choice.
+ 	 */
+@@ -817,7 +833,7 @@ next_port:
+ 	return -EADDRNOTAVAIL;
+ 
+ ok:
+-	hint += i + 2;
++	WRITE_ONCE(table_perturb[index], READ_ONCE(table_perturb[index]) + i + 2);
+ 
+ 	/* Head lock still held and bh's disabled */
+ 	inet_bind_hash(sk, tb, port);
+@@ -840,7 +856,7 @@ ok:
+ int inet_hash_connect(struct inet_timewait_death_row *death_row,
+ 		      struct sock *sk)
+ {
+-	u32 port_offset = 0;
++	u64 port_offset = 0;
+ 
+ 	if (!inet_sk(sk)->inet_num)
+ 		port_offset = inet_sk_port_offset(sk);
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index 0a2e7f2283911..40203255ed88b 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -308,7 +308,7 @@ not_unique:
+ 	return -EADDRNOTAVAIL;
+ }
+ 
+-static u32 inet6_sk_port_offset(const struct sock *sk)
++static u64 inet6_sk_port_offset(const struct sock *sk)
+ {
+ 	const struct inet_sock *inet = inet_sk(sk);
+ 
+@@ -320,7 +320,7 @@ static u32 inet6_sk_port_offset(const struct sock *sk)
+ int inet6_hash_connect(struct inet_timewait_death_row *death_row,
+ 		       struct sock *sk)
+ {
+-	u32 port_offset = 0;
++	u64 port_offset = 0;
+ 
+ 	if (!inet_sk(sk)->inet_num)
+ 		port_offset = inet6_sk_port_offset(sk);
+diff --git a/security/security.c b/security/security.c
+index d9d42d64f89f2..360706cdababc 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -59,10 +59,12 @@ const char *const lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX+1] = {
+ 	[LOCKDOWN_DEBUGFS] = "debugfs access",
+ 	[LOCKDOWN_XMON_WR] = "xmon write access",
+ 	[LOCKDOWN_BPF_WRITE_USER] = "use of bpf to write user RAM",
++	[LOCKDOWN_DBG_WRITE_KERNEL] = "use of kgdb/kdb to write kernel RAM",
+ 	[LOCKDOWN_INTEGRITY_MAX] = "integrity",
+ 	[LOCKDOWN_KCORE] = "/proc/kcore access",
+ 	[LOCKDOWN_KPROBES] = "use of kprobes",
+ 	[LOCKDOWN_BPF_READ] = "use of bpf to read kernel RAM",
++	[LOCKDOWN_DBG_READ_KERNEL] = "use of kgdb/kdb to read kernel RAM",
+ 	[LOCKDOWN_PERF] = "unsafe use of perf",
+ 	[LOCKDOWN_TRACEFS] = "use of tracefs",
+ 	[LOCKDOWN_XMON_RW] = "xmon read and write access",
+diff --git a/sound/pci/ctxfi/ctatc.c b/sound/pci/ctxfi/ctatc.c
+index f8ac96cf38a43..06775519dab00 100644
+--- a/sound/pci/ctxfi/ctatc.c
++++ b/sound/pci/ctxfi/ctatc.c
+@@ -36,6 +36,7 @@
+ 			    | ((IEC958_AES3_CON_FS_48000) << 24))
+ 
+ static const struct snd_pci_quirk subsys_20k1_list[] = {
++	SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0021, "SB046x", CTSB046X),
+ 	SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0022, "SB055x", CTSB055X),
+ 	SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x002f, "SB055x", CTSB055X),
+ 	SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0029, "SB073x", CTSB073X),
+@@ -64,6 +65,7 @@ static const struct snd_pci_quirk subsys_20k2_list[] = {
+ 
+ static const char *ct_subsys_name[NUM_CTCARDS] = {
+ 	/* 20k1 models */
++	[CTSB046X]	= "SB046x",
+ 	[CTSB055X]	= "SB055x",
+ 	[CTSB073X]	= "SB073x",
+ 	[CTUAA]		= "UAA",
+diff --git a/sound/pci/ctxfi/cthardware.h b/sound/pci/ctxfi/cthardware.h
+index 9e6b83bd432d9..b50d61a08e283 100644
+--- a/sound/pci/ctxfi/cthardware.h
++++ b/sound/pci/ctxfi/cthardware.h
+@@ -26,8 +26,9 @@ enum CHIPTYP {
+ 
+ enum CTCARDS {
+ 	/* 20k1 models */
++	CTSB046X,
++	CT20K1_MODEL_FIRST = CTSB046X,
+ 	CTSB055X,
+-	CT20K1_MODEL_FIRST = CTSB055X,
+ 	CTSB073X,
+ 	CTUAA,
+ 	CT20K1_UNKNOWN,


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-06-06 11:03 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-06-06 11:03 UTC (permalink / raw
  To: gentoo-commits

commit:     4b2fca6d71c3e7831becbf4ea5a7ac5e0130e589
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun  6 11:03:38 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun  6 11:03:38 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4b2fca6d

Linux patch 5.10.120

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1119_linux-5.10.120.patch | 2010 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2014 insertions(+)

diff --git a/0000_README b/0000_README
index 32647390..773deb53 100644
--- a/0000_README
+++ b/0000_README
@@ -519,6 +519,10 @@ Patch:  1118_linux-5.10.119.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.119
 
+Patch:  1119_linux-5.10.120.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.120
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1119_linux-5.10.120.patch b/1119_linux-5.10.120.patch
new file mode 100644
index 00000000..baad5e2a
--- /dev/null
+++ b/1119_linux-5.10.120.patch
@@ -0,0 +1,2010 @@
+diff --git a/Documentation/process/submitting-patches.rst b/Documentation/process/submitting-patches.rst
+index 5a267f5d1a501..edd263e0992dc 100644
+--- a/Documentation/process/submitting-patches.rst
++++ b/Documentation/process/submitting-patches.rst
+@@ -71,7 +71,7 @@ as you intend it to.
+ 
+ The maintainer will thank you if you write your patch description in a
+ form which can be easily pulled into Linux's source code management
+-system, ``git``, as a "commit log".  See :ref:`explicit_in_reply_to`.
++system, ``git``, as a "commit log".  See :ref:`the_canonical_patch_format`.
+ 
+ Solve only one problem per patch.  If your description starts to get
+ long, that's a sign that you probably need to split up your patch.
+diff --git a/Makefile b/Makefile
+index b442cc5bbfc30..fdd2ac273f420 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 119
++SUBLEVEL = 120
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi
+index bd4450dbdcb61..986fa0b1a8774 100644
+--- a/arch/arm/boot/dts/s5pv210-aries.dtsi
++++ b/arch/arm/boot/dts/s5pv210-aries.dtsi
+@@ -896,7 +896,7 @@
+ 		device-wakeup-gpios = <&gpg3 4 GPIO_ACTIVE_HIGH>;
+ 		interrupt-parent = <&gph2>;
+ 		interrupts = <5 IRQ_TYPE_LEVEL_HIGH>;
+-		interrupt-names = "host-wake";
++		interrupt-names = "host-wakeup";
+ 	};
+ };
+ 
+diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
+index 84e5a2dc8be53..3dd58b4ee33e5 100644
+--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
++++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
+@@ -359,13 +359,15 @@ static bool kvmppc_gfn_is_uvmem_pfn(unsigned long gfn, struct kvm *kvm,
+ static bool kvmppc_next_nontransitioned_gfn(const struct kvm_memory_slot *memslot,
+ 		struct kvm *kvm, unsigned long *gfn)
+ {
+-	struct kvmppc_uvmem_slot *p;
++	struct kvmppc_uvmem_slot *p = NULL, *iter;
+ 	bool ret = false;
+ 	unsigned long i;
+ 
+-	list_for_each_entry(p, &kvm->arch.uvmem_pfns, list)
+-		if (*gfn >= p->base_pfn && *gfn < p->base_pfn + p->nr_pfns)
++	list_for_each_entry(iter, &kvm->arch.uvmem_pfns, list)
++		if (*gfn >= iter->base_pfn && *gfn < iter->base_pfn + iter->nr_pfns) {
++			p = iter;
+ 			break;
++		}
+ 	if (!p)
+ 		return ret;
+ 	/*
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index 6c3d38b5a8add..971609fb15c59 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -188,7 +188,7 @@ void kvm_async_pf_task_wake(u32 token)
+ {
+ 	u32 key = hash_32(token, KVM_TASK_SLEEP_HASHBITS);
+ 	struct kvm_task_sleep_head *b = &async_pf_sleepers[key];
+-	struct kvm_task_sleep_node *n;
++	struct kvm_task_sleep_node *n, *dummy = NULL;
+ 
+ 	if (token == ~0) {
+ 		apf_task_wake_all();
+@@ -200,28 +200,41 @@ again:
+ 	n = _find_apf_task(b, token);
+ 	if (!n) {
+ 		/*
+-		 * async PF was not yet handled.
+-		 * Add dummy entry for the token.
++		 * Async #PF not yet handled, add a dummy entry for the token.
++		 * Allocating the token must be down outside of the raw lock
++		 * as the allocator is preemptible on PREEMPT_RT kernels.
+ 		 */
+-		n = kzalloc(sizeof(*n), GFP_ATOMIC);
+-		if (!n) {
++		if (!dummy) {
++			raw_spin_unlock(&b->lock);
++			dummy = kzalloc(sizeof(*dummy), GFP_ATOMIC);
++
+ 			/*
+-			 * Allocation failed! Busy wait while other cpu
+-			 * handles async PF.
++			 * Continue looping on allocation failure, eventually
++			 * the async #PF will be handled and allocating a new
++			 * node will be unnecessary.
++			 */
++			if (!dummy)
++				cpu_relax();
++
++			/*
++			 * Recheck for async #PF completion before enqueueing
++			 * the dummy token to avoid duplicate list entries.
+ 			 */
+-			raw_spin_unlock(&b->lock);
+-			cpu_relax();
+ 			goto again;
+ 		}
+-		n->token = token;
+-		n->cpu = smp_processor_id();
+-		init_swait_queue_head(&n->wq);
+-		hlist_add_head(&n->link, &b->list);
++		dummy->token = token;
++		dummy->cpu = smp_processor_id();
++		init_swait_queue_head(&dummy->wq);
++		hlist_add_head(&dummy->link, &b->list);
++		dummy = NULL;
+ 	} else {
+ 		apf_task_wake_one(n);
+ 	}
+ 	raw_spin_unlock(&b->lock);
+-	return;
++
++	/* A dummy token might be allocated and ultimately not used.  */
++	if (dummy)
++		kfree(dummy);
+ }
+ EXPORT_SYMBOL_GPL(kvm_async_pf_task_wake);
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index ae18062c26a66..d9cec5daa1fff 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -7295,7 +7295,7 @@ int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
+ }
+ EXPORT_SYMBOL_GPL(kvm_skip_emulated_instruction);
+ 
+-static bool kvm_vcpu_check_breakpoint(struct kvm_vcpu *vcpu, int *r)
++static bool kvm_vcpu_check_code_breakpoint(struct kvm_vcpu *vcpu, int *r)
+ {
+ 	if (unlikely(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) &&
+ 	    (vcpu->arch.guest_debug_dr7 & DR7_BP_EN_MASK)) {
+@@ -7364,25 +7364,23 @@ static bool is_vmware_backdoor_opcode(struct x86_emulate_ctxt *ctxt)
+ }
+ 
+ /*
+- * Decode to be emulated instruction. Return EMULATION_OK if success.
++ * Decode an instruction for emulation.  The caller is responsible for handling
++ * code breakpoints.  Note, manually detecting code breakpoints is unnecessary
++ * (and wrong) when emulating on an intercepted fault-like exception[*], as
++ * code breakpoints have higher priority and thus have already been done by
++ * hardware.
++ *
++ * [*] Except #MC, which is higher priority, but KVM should never emulate in
++ *     response to a machine check.
+  */
+ int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type,
+ 				    void *insn, int insn_len)
+ {
+-	int r = EMULATION_OK;
+ 	struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt;
++	int r;
+ 
+ 	init_emulate_ctxt(vcpu);
+ 
+-	/*
+-	 * We will reenter on the same instruction since we do not set
+-	 * complete_userspace_io. This does not handle watchpoints yet,
+-	 * those would be handled in the emulate_ops.
+-	 */
+-	if (!(emulation_type & EMULTYPE_SKIP) &&
+-	    kvm_vcpu_check_breakpoint(vcpu, &r))
+-		return r;
+-
+ 	ctxt->ud = emulation_type & EMULTYPE_TRAP_UD;
+ 
+ 	r = x86_decode_insn(ctxt, insn, insn_len);
+@@ -7417,6 +7415,15 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+ 	if (!(emulation_type & EMULTYPE_NO_DECODE)) {
+ 		kvm_clear_exception_queue(vcpu);
+ 
++		/*
++		 * Return immediately if RIP hits a code breakpoint, such #DBs
++		 * are fault-like and are higher priority than any faults on
++		 * the code fetch itself.
++		 */
++		if (!(emulation_type & EMULTYPE_SKIP) &&
++		    kvm_vcpu_check_code_breakpoint(vcpu, &r))
++			return r;
++
+ 		r = x86_decode_emulated_instruction(vcpu, emulation_type,
+ 						    insn, insn_len);
+ 		if (r != EMULATION_OK)  {
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 0dee9242491cb..c15bfc0e3723a 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -1941,5 +1941,3 @@ source "crypto/asymmetric_keys/Kconfig"
+ source "certs/Kconfig"
+ 
+ endif	# if CRYPTO
+-
+-source "lib/crypto/Kconfig"
+diff --git a/crypto/drbg.c b/crypto/drbg.c
+index 19ea8d6628ffb..a4b5d6dbe99d3 100644
+--- a/crypto/drbg.c
++++ b/crypto/drbg.c
+@@ -1035,17 +1035,38 @@ static const struct drbg_state_ops drbg_hash_ops = {
+  ******************************************************************/
+ 
+ static inline int __drbg_seed(struct drbg_state *drbg, struct list_head *seed,
+-			      int reseed)
++			      int reseed, enum drbg_seed_state new_seed_state)
+ {
+ 	int ret = drbg->d_ops->update(drbg, seed, reseed);
+ 
+ 	if (ret)
+ 		return ret;
+ 
+-	drbg->seeded = true;
++	drbg->seeded = new_seed_state;
+ 	/* 10.1.1.2 / 10.1.1.3 step 5 */
+ 	drbg->reseed_ctr = 1;
+ 
++	switch (drbg->seeded) {
++	case DRBG_SEED_STATE_UNSEEDED:
++		/* Impossible, but handle it to silence compiler warnings. */
++		fallthrough;
++	case DRBG_SEED_STATE_PARTIAL:
++		/*
++		 * Require frequent reseeds until the seed source is
++		 * fully initialized.
++		 */
++		drbg->reseed_threshold = 50;
++		break;
++
++	case DRBG_SEED_STATE_FULL:
++		/*
++		 * Seed source has become fully initialized, frequent
++		 * reseeds no longer required.
++		 */
++		drbg->reseed_threshold = drbg_max_requests(drbg);
++		break;
++	}
++
+ 	return ret;
+ }
+ 
+@@ -1065,12 +1086,10 @@ static inline int drbg_get_random_bytes(struct drbg_state *drbg,
+ 	return 0;
+ }
+ 
+-static void drbg_async_seed(struct work_struct *work)
++static int drbg_seed_from_random(struct drbg_state *drbg)
+ {
+ 	struct drbg_string data;
+ 	LIST_HEAD(seedlist);
+-	struct drbg_state *drbg = container_of(work, struct drbg_state,
+-					       seed_work);
+ 	unsigned int entropylen = drbg_sec_strength(drbg->core->flags);
+ 	unsigned char entropy[32];
+ 	int ret;
+@@ -1081,26 +1100,15 @@ static void drbg_async_seed(struct work_struct *work)
+ 	drbg_string_fill(&data, entropy, entropylen);
+ 	list_add_tail(&data.list, &seedlist);
+ 
+-	mutex_lock(&drbg->drbg_mutex);
+-
+ 	ret = drbg_get_random_bytes(drbg, entropy, entropylen);
+ 	if (ret)
+-		goto unlock;
+-
+-	/* Set seeded to false so that if __drbg_seed fails the
+-	 * next generate call will trigger a reseed.
+-	 */
+-	drbg->seeded = false;
+-
+-	__drbg_seed(drbg, &seedlist, true);
+-
+-	if (drbg->seeded)
+-		drbg->reseed_threshold = drbg_max_requests(drbg);
++		goto out;
+ 
+-unlock:
+-	mutex_unlock(&drbg->drbg_mutex);
++	ret = __drbg_seed(drbg, &seedlist, true, DRBG_SEED_STATE_FULL);
+ 
++out:
+ 	memzero_explicit(entropy, entropylen);
++	return ret;
+ }
+ 
+ /*
+@@ -1122,6 +1130,7 @@ static int drbg_seed(struct drbg_state *drbg, struct drbg_string *pers,
+ 	unsigned int entropylen = drbg_sec_strength(drbg->core->flags);
+ 	struct drbg_string data1;
+ 	LIST_HEAD(seedlist);
++	enum drbg_seed_state new_seed_state = DRBG_SEED_STATE_FULL;
+ 
+ 	/* 9.1 / 9.2 / 9.3.1 step 3 */
+ 	if (pers && pers->len > (drbg_max_addtl(drbg))) {
+@@ -1149,6 +1158,9 @@ static int drbg_seed(struct drbg_state *drbg, struct drbg_string *pers,
+ 		BUG_ON((entropylen * 2) > sizeof(entropy));
+ 
+ 		/* Get seed from in-kernel /dev/urandom */
++		if (!rng_is_initialized())
++			new_seed_state = DRBG_SEED_STATE_PARTIAL;
++
+ 		ret = drbg_get_random_bytes(drbg, entropy, entropylen);
+ 		if (ret)
+ 			goto out;
+@@ -1205,7 +1217,7 @@ static int drbg_seed(struct drbg_state *drbg, struct drbg_string *pers,
+ 		memset(drbg->C, 0, drbg_statelen(drbg));
+ 	}
+ 
+-	ret = __drbg_seed(drbg, &seedlist, reseed);
++	ret = __drbg_seed(drbg, &seedlist, reseed, new_seed_state);
+ 
+ out:
+ 	memzero_explicit(entropy, entropylen * 2);
+@@ -1385,19 +1397,25 @@ static int drbg_generate(struct drbg_state *drbg,
+ 	 * here. The spec is a bit convoluted here, we make it simpler.
+ 	 */
+ 	if (drbg->reseed_threshold < drbg->reseed_ctr)
+-		drbg->seeded = false;
++		drbg->seeded = DRBG_SEED_STATE_UNSEEDED;
+ 
+-	if (drbg->pr || !drbg->seeded) {
++	if (drbg->pr || drbg->seeded == DRBG_SEED_STATE_UNSEEDED) {
+ 		pr_devel("DRBG: reseeding before generation (prediction "
+ 			 "resistance: %s, state %s)\n",
+ 			 drbg->pr ? "true" : "false",
+-			 drbg->seeded ? "seeded" : "unseeded");
++			 (drbg->seeded ==  DRBG_SEED_STATE_FULL ?
++			  "seeded" : "unseeded"));
+ 		/* 9.3.1 steps 7.1 through 7.3 */
+ 		len = drbg_seed(drbg, addtl, true);
+ 		if (len)
+ 			goto err;
+ 		/* 9.3.1 step 7.4 */
+ 		addtl = NULL;
++	} else if (rng_is_initialized() &&
++		   drbg->seeded == DRBG_SEED_STATE_PARTIAL) {
++		len = drbg_seed_from_random(drbg);
++		if (len)
++			goto err;
+ 	}
+ 
+ 	if (addtl && 0 < addtl->len)
+@@ -1490,50 +1508,15 @@ static int drbg_generate_long(struct drbg_state *drbg,
+ 	return 0;
+ }
+ 
+-static int drbg_schedule_async_seed(struct notifier_block *nb, unsigned long action, void *data)
+-{
+-	struct drbg_state *drbg = container_of(nb, struct drbg_state,
+-					       random_ready);
+-
+-	schedule_work(&drbg->seed_work);
+-	return 0;
+-}
+-
+ static int drbg_prepare_hrng(struct drbg_state *drbg)
+ {
+-	int err;
+-
+ 	/* We do not need an HRNG in test mode. */
+ 	if (list_empty(&drbg->test_data.list))
+ 		return 0;
+ 
+ 	drbg->jent = crypto_alloc_rng("jitterentropy_rng", 0, 0);
+ 
+-	INIT_WORK(&drbg->seed_work, drbg_async_seed);
+-
+-	drbg->random_ready.notifier_call = drbg_schedule_async_seed;
+-	err = register_random_ready_notifier(&drbg->random_ready);
+-
+-	switch (err) {
+-	case 0:
+-		break;
+-
+-	case -EALREADY:
+-		err = 0;
+-		fallthrough;
+-
+-	default:
+-		drbg->random_ready.notifier_call = NULL;
+-		return err;
+-	}
+-
+-	/*
+-	 * Require frequent reseeds until the seed source is fully
+-	 * initialized.
+-	 */
+-	drbg->reseed_threshold = 50;
+-
+-	return err;
++	return 0;
+ }
+ 
+ /*
+@@ -1576,7 +1559,7 @@ static int drbg_instantiate(struct drbg_state *drbg, struct drbg_string *pers,
+ 	if (!drbg->core) {
+ 		drbg->core = &drbg_cores[coreref];
+ 		drbg->pr = pr;
+-		drbg->seeded = false;
++		drbg->seeded = DRBG_SEED_STATE_UNSEEDED;
+ 		drbg->reseed_threshold = drbg_max_requests(drbg);
+ 
+ 		ret = drbg_alloc_state(drbg);
+@@ -1627,11 +1610,6 @@ free_everything:
+  */
+ static int drbg_uninstantiate(struct drbg_state *drbg)
+ {
+-	if (drbg->random_ready.notifier_call) {
+-		unregister_random_ready_notifier(&drbg->random_ready);
+-		cancel_work_sync(&drbg->seed_work);
+-	}
+-
+ 	if (!IS_ERR_OR_NULL(drbg->jent))
+ 		crypto_free_rng(drbg->jent);
+ 	drbg->jent = NULL;
+diff --git a/crypto/ecrdsa.c b/crypto/ecrdsa.c
+index 6a3fd09057d0c..f7ed430206720 100644
+--- a/crypto/ecrdsa.c
++++ b/crypto/ecrdsa.c
+@@ -113,15 +113,15 @@ static int ecrdsa_verify(struct akcipher_request *req)
+ 
+ 	/* Step 1: verify that 0 < r < q, 0 < s < q */
+ 	if (vli_is_zero(r, ndigits) ||
+-	    vli_cmp(r, ctx->curve->n, ndigits) == 1 ||
++	    vli_cmp(r, ctx->curve->n, ndigits) >= 0 ||
+ 	    vli_is_zero(s, ndigits) ||
+-	    vli_cmp(s, ctx->curve->n, ndigits) == 1)
++	    vli_cmp(s, ctx->curve->n, ndigits) >= 0)
+ 		return -EKEYREJECTED;
+ 
+ 	/* Step 2: calculate hash (h) of the message (passed as input) */
+ 	/* Step 3: calculate e = h \mod q */
+ 	vli_from_le64(e, digest, ndigits);
+-	if (vli_cmp(e, ctx->curve->n, ndigits) == 1)
++	if (vli_cmp(e, ctx->curve->n, ndigits) >= 0)
+ 		vli_sub(e, e, ctx->curve->n, ndigits);
+ 	if (vli_is_zero(e, ndigits))
+ 		e[0] = 1;
+@@ -137,7 +137,7 @@ static int ecrdsa_verify(struct akcipher_request *req)
+ 	/* Step 6: calculate point C = z_1P + z_2Q, and R = x_c \mod q */
+ 	ecc_point_mult_shamir(&cc, z1, &ctx->curve->g, z2, &ctx->pub_key,
+ 			      ctx->curve);
+-	if (vli_cmp(cc.x, ctx->curve->n, ndigits) == 1)
++	if (vli_cmp(cc.x, ctx->curve->n, ndigits) >= 0)
+ 		vli_sub(cc.x, cc.x, ctx->curve->n, ndigits);
+ 
+ 	/* Step 7: if R == r signature is valid */
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index dc7ee5dd2eeca..eea18aed17f8a 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -689,9 +689,9 @@ static int qca_close(struct hci_uart *hu)
+ 	skb_queue_purge(&qca->tx_wait_q);
+ 	skb_queue_purge(&qca->txq);
+ 	skb_queue_purge(&qca->rx_memdump_q);
+-	del_timer(&qca->tx_idle_timer);
+-	del_timer(&qca->wake_retrans_timer);
+ 	destroy_workqueue(qca->workqueue);
++	del_timer_sync(&qca->tx_idle_timer);
++	del_timer_sync(&qca->wake_retrans_timer);
+ 	qca->hu = NULL;
+ 
+ 	kfree_skb(qca->rx_skb);
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 00b50ccc9fae6..c206db96f60a1 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -163,7 +163,6 @@ int __cold register_random_ready_notifier(struct notifier_block *nb)
+ 	spin_unlock_irqrestore(&random_ready_chain_lock, flags);
+ 	return ret;
+ }
+-EXPORT_SYMBOL(register_random_ready_notifier);
+ 
+ /*
+  * Delete a previously registered readiness callback function.
+@@ -178,7 +177,6 @@ int __cold unregister_random_ready_notifier(struct notifier_block *nb)
+ 	spin_unlock_irqrestore(&random_ready_chain_lock, flags);
+ 	return ret;
+ }
+-EXPORT_SYMBOL(unregister_random_ready_notifier);
+ 
+ static void __cold process_random_ready_list(void)
+ {
+diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
+index c84d239512197..d0e11d7a3c08b 100644
+--- a/drivers/char/tpm/tpm2-cmd.c
++++ b/drivers/char/tpm/tpm2-cmd.c
+@@ -400,7 +400,16 @@ ssize_t tpm2_get_tpm_pt(struct tpm_chip *chip, u32 property_id,  u32 *value,
+ 	if (!rc) {
+ 		out = (struct tpm2_get_cap_out *)
+ 			&buf.data[TPM_HEADER_SIZE];
+-		*value = be32_to_cpu(out->value);
++		/*
++		 * To prevent failing boot up of some systems, Infineon TPM2.0
++		 * returns SUCCESS on TPM2_Startup in field upgrade mode. Also
++		 * the TPM2_Getcapability command returns a zero length list
++		 * in field upgrade mode.
++		 */
++		if (be32_to_cpu(out->property_cnt) > 0)
++			*value = be32_to_cpu(out->value);
++		else
++			rc = -ENODATA;
+ 	}
+ 	tpm_buf_destroy(&buf);
+ 	return rc;
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 3ca7528322f53..a1ec722d62a74 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -683,6 +683,7 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
+ 	if (!wait_event_timeout(ibmvtpm->crq_queue.wq,
+ 				ibmvtpm->rtce_buf != NULL,
+ 				HZ)) {
++		rc = -ENODEV;
+ 		dev_err(dev, "CRQ response timed out\n");
+ 		goto init_irq_cleanup;
+ 	}
+diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
+index ca0361b2dbb07..f87aa2169e5f5 100644
+--- a/drivers/crypto/caam/ctrl.c
++++ b/drivers/crypto/caam/ctrl.c
+@@ -609,6 +609,13 @@ static bool check_version(struct fsl_mc_version *mc_version, u32 major,
+ }
+ #endif
+ 
++static bool needs_entropy_delay_adjustment(void)
++{
++	if (of_machine_is_compatible("fsl,imx6sx"))
++		return true;
++	return false;
++}
++
+ /* Probe routine for CAAM top (controller) level */
+ static int caam_probe(struct platform_device *pdev)
+ {
+@@ -855,6 +862,8 @@ static int caam_probe(struct platform_device *pdev)
+ 			 * Also, if a handle was instantiated, do not change
+ 			 * the TRNG parameters.
+ 			 */
++			if (needs_entropy_delay_adjustment())
++				ent_delay = 12000;
+ 			if (!(ctrlpriv->rng4_sh_init || inst_handles)) {
+ 				dev_info(dev,
+ 					 "Entropy delay = %u\n",
+@@ -871,6 +880,15 @@ static int caam_probe(struct platform_device *pdev)
+ 			 */
+ 			ret = instantiate_rng(dev, inst_handles,
+ 					      gen_sk);
++			/*
++			 * Entropy delay is determined via TRNG characterization.
++			 * TRNG characterization is run across different voltages
++			 * and temperatures.
++			 * If worst case value for ent_dly is identified,
++			 * the loop can be skipped for that platform.
++			 */
++			if (needs_entropy_delay_adjustment())
++				break;
+ 			if (ret == -EAGAIN)
+ 				/*
+ 				 * if here, the loop will rerun,
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index 472aaea75ef84..2f2dc029668bc 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -2846,7 +2846,7 @@ static void ilk_compute_wm_level(const struct drm_i915_private *dev_priv,
+ }
+ 
+ static void intel_read_wm_latency(struct drm_i915_private *dev_priv,
+-				  u16 wm[8])
++				  u16 wm[])
+ {
+ 	struct intel_uncore *uncore = &dev_priv->uncore;
+ 
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index d2e4f9f5507d5..3744c3db51405 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -743,6 +743,7 @@
+ #define USB_DEVICE_ID_LENOVO_X1_COVER	0x6085
+ #define USB_DEVICE_ID_LENOVO_X1_TAB	0x60a3
+ #define USB_DEVICE_ID_LENOVO_X1_TAB3	0x60b5
++#define USB_DEVICE_ID_LENOVO_X12_TAB	0x60fe
+ #define USB_DEVICE_ID_LENOVO_OPTICAL_USB_MOUSE_600E	0x600e
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D	0x608d
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019	0x6019
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index e5a3704b9fe8f..d686917cc3b1f 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1990,6 +1990,12 @@ static const struct hid_device_id mt_devices[] = {
+ 			   USB_VENDOR_ID_LENOVO,
+ 			   USB_DEVICE_ID_LENOVO_X1_TAB3) },
+ 
++	/* Lenovo X12 TAB Gen 1 */
++	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++		HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8,
++			   USB_VENDOR_ID_LENOVO,
++			   USB_DEVICE_ID_LENOVO_X12_TAB) },
++
+ 	/* MosArt panels */
+ 	{ .driver_data = MT_CLS_CONFIDENCE_MINUS_ONE,
+ 		MT_USB_DEVICE(USB_VENDOR_ID_ASUS,
+@@ -2129,6 +2135,9 @@ static const struct hid_device_id mt_devices[] = {
+ 	{ .driver_data = MT_CLS_GOOGLE,
+ 		HID_DEVICE(HID_BUS_ANY, HID_GROUP_ANY, USB_VENDOR_ID_GOOGLE,
+ 			USB_DEVICE_ID_GOOGLE_TOUCH_ROSE) },
++	{ .driver_data = MT_CLS_GOOGLE,
++		HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8, USB_VENDOR_ID_GOOGLE,
++			USB_DEVICE_ID_GOOGLE_WHISKERS) },
+ 
+ 	/* Generic MT device */
+ 	{ HID_DEVICE(HID_BUS_ANY, HID_GROUP_MULTITOUCH, HID_ANY_ID, HID_ANY_ID) },
+diff --git a/drivers/i2c/busses/i2c-ismt.c b/drivers/i2c/busses/i2c-ismt.c
+index a35a27c320e7b..3d2d92640651e 100644
+--- a/drivers/i2c/busses/i2c-ismt.c
++++ b/drivers/i2c/busses/i2c-ismt.c
+@@ -82,6 +82,7 @@
+ 
+ #define ISMT_DESC_ENTRIES	2	/* number of descriptor entries */
+ #define ISMT_MAX_RETRIES	3	/* number of SMBus retries to attempt */
++#define ISMT_LOG_ENTRIES	3	/* number of interrupt cause log entries */
+ 
+ /* Hardware Descriptor Constants - Control Field */
+ #define ISMT_DESC_CWRL	0x01	/* Command/Write Length */
+@@ -175,6 +176,8 @@ struct ismt_priv {
+ 	u8 head;				/* ring buffer head pointer */
+ 	struct completion cmp;			/* interrupt completion */
+ 	u8 buffer[I2C_SMBUS_BLOCK_MAX + 16];	/* temp R/W data buffer */
++	dma_addr_t log_dma;
++	u32 *log;
+ };
+ 
+ static const struct pci_device_id ismt_ids[] = {
+@@ -409,6 +412,9 @@ static int ismt_access(struct i2c_adapter *adap, u16 addr,
+ 	memset(desc, 0, sizeof(struct ismt_desc));
+ 	desc->tgtaddr_rw = ISMT_DESC_ADDR_RW(addr, read_write);
+ 
++	/* Always clear the log entries */
++	memset(priv->log, 0, ISMT_LOG_ENTRIES * sizeof(u32));
++
+ 	/* Initialize common control bits */
+ 	if (likely(pci_dev_msi_enabled(priv->pci_dev)))
+ 		desc->control = ISMT_DESC_INT | ISMT_DESC_FAIR;
+@@ -693,6 +699,8 @@ static void ismt_hw_init(struct ismt_priv *priv)
+ 	/* initialize the Master Descriptor Base Address (MDBA) */
+ 	writeq(priv->io_rng_dma, priv->smba + ISMT_MSTR_MDBA);
+ 
++	writeq(priv->log_dma, priv->smba + ISMT_GR_SMTICL);
++
+ 	/* initialize the Master Control Register (MCTRL) */
+ 	writel(ISMT_MCTRL_MEIE, priv->smba + ISMT_MSTR_MCTRL);
+ 
+@@ -780,6 +788,12 @@ static int ismt_dev_init(struct ismt_priv *priv)
+ 	priv->head = 0;
+ 	init_completion(&priv->cmp);
+ 
++	priv->log = dmam_alloc_coherent(&priv->pci_dev->dev,
++					ISMT_LOG_ENTRIES * sizeof(u32),
++					&priv->log_dma, GFP_KERNEL);
++	if (!priv->log)
++		return -ENOMEM;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/i2c/busses/i2c-thunderx-pcidrv.c b/drivers/i2c/busses/i2c-thunderx-pcidrv.c
+index 12c90aa0900e6..a77cd86fe75ed 100644
+--- a/drivers/i2c/busses/i2c-thunderx-pcidrv.c
++++ b/drivers/i2c/busses/i2c-thunderx-pcidrv.c
+@@ -213,6 +213,7 @@ static int thunder_i2c_probe_pci(struct pci_dev *pdev,
+ 	i2c->adap.bus_recovery_info = &octeon_i2c_recovery_info;
+ 	i2c->adap.dev.parent = dev;
+ 	i2c->adap.dev.of_node = pdev->dev.of_node;
++	i2c->adap.dev.fwnode = dev->fwnode;
+ 	snprintf(i2c->adap.name, sizeof(i2c->adap.name),
+ 		 "Cavium ThunderX i2c adapter at %s", dev_name(dev));
+ 	i2c_set_adapdata(&i2c->adap, i2c);
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index b9677f701b6a1..3d975db86434f 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -3404,6 +3404,11 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
+ 	return DM_MAPIO_SUBMITTED;
+ }
+ 
++static char hex2asc(unsigned char c)
++{
++	return c + '0' + ((unsigned)(9 - c) >> 4 & 0x27);
++}
++
+ static void crypt_status(struct dm_target *ti, status_type_t type,
+ 			 unsigned status_flags, char *result, unsigned maxlen)
+ {
+@@ -3422,9 +3427,12 @@ static void crypt_status(struct dm_target *ti, status_type_t type,
+ 		if (cc->key_size > 0) {
+ 			if (cc->key_string)
+ 				DMEMIT(":%u:%s", cc->key_size, cc->key_string);
+-			else
+-				for (i = 0; i < cc->key_size; i++)
+-					DMEMIT("%02x", cc->key[i]);
++			else {
++				for (i = 0; i < cc->key_size; i++) {
++					DMEMIT("%c%c", hex2asc(cc->key[i] >> 4),
++					       hex2asc(cc->key[i] & 0xf));
++				}
++			}
+ 		} else
+ 			DMEMIT("-");
+ 
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 6f085e96c3f33..835b1f3464d06 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -4327,8 +4327,6 @@ try_smaller_buffer:
+ 	}
+ 
+ 	if (should_write_sb) {
+-		int r;
+-
+ 		init_journal(ic, 0, ic->journal_sections, 0);
+ 		r = dm_integrity_failed(ic);
+ 		if (unlikely(r)) {
+diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c
+index 35d368c418d03..55443a6598fa6 100644
+--- a/drivers/md/dm-stats.c
++++ b/drivers/md/dm-stats.c
+@@ -224,6 +224,7 @@ void dm_stats_cleanup(struct dm_stats *stats)
+ 				       atomic_read(&shared->in_flight[READ]),
+ 				       atomic_read(&shared->in_flight[WRITE]));
+ 			}
++			cond_resched();
+ 		}
+ 		dm_stat_free(&s->rcu_head);
+ 	}
+@@ -313,6 +314,7 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end,
+ 	for (ni = 0; ni < n_entries; ni++) {
+ 		atomic_set(&s->stat_shared[ni].in_flight[READ], 0);
+ 		atomic_set(&s->stat_shared[ni].in_flight[WRITE], 0);
++		cond_resched();
+ 	}
+ 
+ 	if (s->n_histogram_entries) {
+@@ -325,6 +327,7 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end,
+ 		for (ni = 0; ni < n_entries; ni++) {
+ 			s->stat_shared[ni].tmp.histogram = hi;
+ 			hi += s->n_histogram_entries + 1;
++			cond_resched();
+ 		}
+ 	}
+ 
+@@ -345,6 +348,7 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end,
+ 			for (ni = 0; ni < n_entries; ni++) {
+ 				p[ni].histogram = hi;
+ 				hi += s->n_histogram_entries + 1;
++				cond_resched();
+ 			}
+ 		}
+ 	}
+@@ -474,6 +478,7 @@ static int dm_stats_list(struct dm_stats *stats, const char *program,
+ 			}
+ 			DMEMIT("\n");
+ 		}
++		cond_resched();
+ 	}
+ 	mutex_unlock(&stats->mutex);
+ 
+@@ -750,6 +755,7 @@ static void __dm_stat_clear(struct dm_stat *s, size_t idx_start, size_t idx_end,
+ 				local_irq_enable();
+ 			}
+ 		}
++		cond_resched();
+ 	}
+ }
+ 
+@@ -865,6 +871,8 @@ static int dm_stats_print(struct dm_stats *stats, int id,
+ 
+ 		if (unlikely(sz + 1 >= maxlen))
+ 			goto buffer_overflow;
++
++		cond_resched();
+ 	}
+ 
+ 	if (clear)
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index 808a98ef624c3..c801f6b93b7b4 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -1242,6 +1242,7 @@ bad:
+ 
+ static struct target_type verity_target = {
+ 	.name		= "verity",
++	.features	= DM_TARGET_IMMUTABLE,
+ 	.version	= {1, 7, 0},
+ 	.module		= THIS_MODULE,
+ 	.ctr		= verity_ctr,
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index c82953a3299e2..02767866b9ff6 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -686,17 +686,17 @@ int raid5_calc_degraded(struct r5conf *conf)
+ 	return degraded;
+ }
+ 
+-static int has_failed(struct r5conf *conf)
++static bool has_failed(struct r5conf *conf)
+ {
+-	int degraded;
++	int degraded = conf->mddev->degraded;
+ 
+-	if (conf->mddev->reshape_position == MaxSector)
+-		return conf->mddev->degraded > conf->max_degraded;
++	if (test_bit(MD_BROKEN, &conf->mddev->flags))
++		return true;
+ 
+-	degraded = raid5_calc_degraded(conf);
+-	if (degraded > conf->max_degraded)
+-		return 1;
+-	return 0;
++	if (conf->mddev->reshape_position != MaxSector)
++		degraded = raid5_calc_degraded(conf);
++
++	return degraded > conf->max_degraded;
+ }
+ 
+ struct stripe_head *
+@@ -2877,34 +2877,31 @@ static void raid5_error(struct mddev *mddev, struct md_rdev *rdev)
+ 	unsigned long flags;
+ 	pr_debug("raid456: error called\n");
+ 
++	pr_crit("md/raid:%s: Disk failure on %s, disabling device.\n",
++		mdname(mddev), bdevname(rdev->bdev, b));
++
+ 	spin_lock_irqsave(&conf->device_lock, flags);
++	set_bit(Faulty, &rdev->flags);
++	clear_bit(In_sync, &rdev->flags);
++	mddev->degraded = raid5_calc_degraded(conf);
+ 
+-	if (test_bit(In_sync, &rdev->flags) &&
+-	    mddev->degraded == conf->max_degraded) {
+-		/*
+-		 * Don't allow to achieve failed state
+-		 * Don't try to recover this device
+-		 */
++	if (has_failed(conf)) {
++		set_bit(MD_BROKEN, &conf->mddev->flags);
+ 		conf->recovery_disabled = mddev->recovery_disabled;
+-		spin_unlock_irqrestore(&conf->device_lock, flags);
+-		return;
++
++		pr_crit("md/raid:%s: Cannot continue operation (%d/%d failed).\n",
++			mdname(mddev), mddev->degraded, conf->raid_disks);
++	} else {
++		pr_crit("md/raid:%s: Operation continuing on %d devices.\n",
++			mdname(mddev), conf->raid_disks - mddev->degraded);
+ 	}
+ 
+-	set_bit(Faulty, &rdev->flags);
+-	clear_bit(In_sync, &rdev->flags);
+-	mddev->degraded = raid5_calc_degraded(conf);
+ 	spin_unlock_irqrestore(&conf->device_lock, flags);
+ 	set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+ 
+ 	set_bit(Blocked, &rdev->flags);
+ 	set_mask_bits(&mddev->sb_flags, 0,
+ 		      BIT(MD_SB_CHANGE_DEVS) | BIT(MD_SB_CHANGE_PENDING));
+-	pr_crit("md/raid:%s: Disk failure on %s, disabling device.\n"
+-		"md/raid:%s: Operation continuing on %d devices.\n",
+-		mdname(mddev),
+-		bdevname(rdev->bdev, b),
+-		mdname(mddev),
+-		conf->raid_disks - mddev->degraded);
+ 	r5c_update_on_rdev_error(mddev, rdev);
+ }
+ 
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index 5bc11d1bb9df8..eea4bd3116e8d 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -1893,6 +1893,11 @@ static int ftgmac100_probe(struct platform_device *pdev)
+ 	/* AST2400  doesn't have working HW checksum generation */
+ 	if (np && (of_device_is_compatible(np, "aspeed,ast2400-mac")))
+ 		netdev->hw_features &= ~NETIF_F_HW_CSUM;
++
++	/* AST2600 tx checksum with NCSI is broken */
++	if (priv->use_ncsi && of_device_is_compatible(np, "aspeed,ast2600-mac"))
++		netdev->hw_features &= ~NETIF_F_HW_CSUM;
++
+ 	if (np && of_get_property(np, "no-hw-checksum", NULL))
+ 		netdev->hw_features &= ~(NETIF_F_HW_CSUM | NETIF_F_RXCSUM);
+ 	netdev->features |= netdev->hw_features;
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index 621648ce750b7..eb25a13042ea9 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -610,12 +610,14 @@ static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
+ 
+ 	if (endpoint->data->aggregation) {
+ 		if (!endpoint->toward_ipa) {
++			u32 buffer_size;
+ 			u32 limit;
+ 
+ 			val |= u32_encode_bits(IPA_ENABLE_AGGR, AGGR_EN_FMASK);
+ 			val |= u32_encode_bits(IPA_GENERIC, AGGR_TYPE_FMASK);
+ 
+-			limit = ipa_aggr_size_kb(IPA_RX_BUFFER_SIZE);
++			buffer_size = IPA_RX_BUFFER_SIZE - NET_SKB_PAD;
++			limit = ipa_aggr_size_kb(buffer_size);
+ 			val |= u32_encode_bits(limit, AGGR_BYTE_LIMIT_FMASK);
+ 
+ 			limit = IPA_AGGR_TIME_LIMIT_DEFAULT;
+diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
+index d2c0116157759..8d7e29d953b7e 100644
+--- a/drivers/nfc/pn533/pn533.c
++++ b/drivers/nfc/pn533/pn533.c
+@@ -2844,13 +2844,14 @@ void pn53x_common_clean(struct pn533 *priv)
+ {
+ 	struct pn533_cmd *cmd, *n;
+ 
++	/* delete the timer before cleanup the worker */
++	del_timer_sync(&priv->listen_timer);
++
+ 	flush_delayed_work(&priv->poll_work);
+ 	destroy_workqueue(priv->wq);
+ 
+ 	skb_queue_purge(&priv->resp_q);
+ 
+-	del_timer(&priv->listen_timer);
+-
+ 	list_for_each_entry_safe(cmd, n, &priv->cmd_queue, queue) {
+ 		list_del(&cmd->queue);
+ 		kfree(cmd);
+diff --git a/drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c b/drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c
+index 2801ca7062732..68a5b627fb9b2 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c
++++ b/drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c
+@@ -204,7 +204,7 @@ static const struct sunxi_desc_pin suniv_f1c100s_pins[] = {
+ 		  SUNXI_FUNCTION(0x0, "gpio_in"),
+ 		  SUNXI_FUNCTION(0x1, "gpio_out"),
+ 		  SUNXI_FUNCTION(0x2, "lcd"),		/* D20 */
+-		  SUNXI_FUNCTION(0x3, "lvds1"),		/* RX */
++		  SUNXI_FUNCTION(0x3, "uart2"),		/* RX */
+ 		  SUNXI_FUNCTION_IRQ_BANK(0x6, 0, 14)),
+ 	SUNXI_PIN(SUNXI_PINCTRL_PIN(D, 15),
+ 		  SUNXI_FUNCTION(0x0, "gpio_in"),
+diff --git a/fs/exfat/balloc.c b/fs/exfat/balloc.c
+index 579c10f57c2b0..258b6bb5762a4 100644
+--- a/fs/exfat/balloc.c
++++ b/fs/exfat/balloc.c
+@@ -148,7 +148,9 @@ int exfat_set_bitmap(struct inode *inode, unsigned int clu)
+ 	struct super_block *sb = inode->i_sb;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+ 
+-	WARN_ON(clu < EXFAT_FIRST_CLUSTER);
++	if (!is_valid_cluster(sbi, clu))
++		return -EINVAL;
++
+ 	ent_idx = CLUSTER_TO_BITMAP_ENT(clu);
+ 	i = BITMAP_OFFSET_SECTOR_INDEX(sb, ent_idx);
+ 	b = BITMAP_OFFSET_BIT_IN_SECTOR(sb, ent_idx);
+@@ -166,7 +168,9 @@ void exfat_clear_bitmap(struct inode *inode, unsigned int clu)
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+ 	struct exfat_mount_options *opts = &sbi->options;
+ 
+-	WARN_ON(clu < EXFAT_FIRST_CLUSTER);
++	if (!is_valid_cluster(sbi, clu))
++		return;
++
+ 	ent_idx = CLUSTER_TO_BITMAP_ENT(clu);
+ 	i = BITMAP_OFFSET_SECTOR_INDEX(sb, ent_idx);
+ 	b = BITMAP_OFFSET_BIT_IN_SECTOR(sb, ent_idx);
+diff --git a/fs/exfat/exfat_fs.h b/fs/exfat/exfat_fs.h
+index b8f0e829ecbd2..0d139c7d150d9 100644
+--- a/fs/exfat/exfat_fs.h
++++ b/fs/exfat/exfat_fs.h
+@@ -380,6 +380,14 @@ static inline int exfat_sector_to_cluster(struct exfat_sb_info *sbi,
+ 		EXFAT_RESERVED_CLUSTERS;
+ }
+ 
++static inline bool is_valid_cluster(struct exfat_sb_info *sbi,
++		unsigned int clus)
++{
++	if (clus < EXFAT_FIRST_CLUSTER || sbi->num_clusters <= clus)
++		return false;
++	return true;
++}
++
+ /* super.c */
+ int exfat_set_volume_dirty(struct super_block *sb);
+ int exfat_clear_volume_dirty(struct super_block *sb);
+diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c
+index c3c9afee7418f..a1481e47a7616 100644
+--- a/fs/exfat/fatent.c
++++ b/fs/exfat/fatent.c
+@@ -81,14 +81,6 @@ int exfat_ent_set(struct super_block *sb, unsigned int loc,
+ 	return 0;
+ }
+ 
+-static inline bool is_valid_cluster(struct exfat_sb_info *sbi,
+-		unsigned int clus)
+-{
+-	if (clus < EXFAT_FIRST_CLUSTER || sbi->num_clusters <= clus)
+-		return false;
+-	return true;
+-}
+-
+ int exfat_ent_get(struct super_block *sb, unsigned int loc,
+ 		unsigned int *content)
+ {
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 3ecf71151fb1f..871475d3fca2c 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2579,45 +2579,6 @@ static void io_complete_rw_common(struct kiocb *kiocb, long res,
+ #ifdef CONFIG_BLOCK
+ static bool io_resubmit_prep(struct io_kiocb *req, int error)
+ {
+-	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+-	ssize_t ret = -ECANCELED;
+-	struct iov_iter iter;
+-	int rw;
+-
+-	if (error) {
+-		ret = error;
+-		goto end_req;
+-	}
+-
+-	switch (req->opcode) {
+-	case IORING_OP_READV:
+-	case IORING_OP_READ_FIXED:
+-	case IORING_OP_READ:
+-		rw = READ;
+-		break;
+-	case IORING_OP_WRITEV:
+-	case IORING_OP_WRITE_FIXED:
+-	case IORING_OP_WRITE:
+-		rw = WRITE;
+-		break;
+-	default:
+-		printk_once(KERN_WARNING "io_uring: bad opcode in resubmit %d\n",
+-				req->opcode);
+-		goto end_req;
+-	}
+-
+-	if (!req->async_data) {
+-		ret = io_import_iovec(rw, req, &iovec, &iter, false);
+-		if (ret < 0)
+-			goto end_req;
+-		ret = io_setup_async_rw(req, iovec, inline_vecs, &iter, false);
+-		if (!ret)
+-			return true;
+-		kfree(iovec);
+-	} else {
+-		return true;
+-	}
+-end_req:
+ 	req_set_fail_links(req);
+ 	return false;
+ }
+@@ -3428,6 +3389,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
+ 	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+ 	struct kiocb *kiocb = &req->rw.kiocb;
+ 	struct iov_iter __iter, *iter = &__iter;
++	struct iov_iter iter_cp;
+ 	struct io_async_rw *rw = req->async_data;
+ 	ssize_t io_size, ret, ret2;
+ 	bool no_async;
+@@ -3438,6 +3400,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
+ 	ret = io_import_iovec(READ, req, &iovec, iter, !force_nonblock);
+ 	if (ret < 0)
+ 		return ret;
++	iter_cp = *iter;
+ 	io_size = iov_iter_count(iter);
+ 	req->result = io_size;
+ 	ret = 0;
+@@ -3473,7 +3436,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
+ 		if (req->file->f_flags & O_NONBLOCK)
+ 			goto done;
+ 		/* some cases will consume bytes even on error returns */
+-		iov_iter_revert(iter, io_size - iov_iter_count(iter));
++		*iter = iter_cp;
+ 		ret = 0;
+ 		goto copy_iov;
+ 	} else if (ret < 0) {
+@@ -3556,6 +3519,7 @@ static int io_write(struct io_kiocb *req, bool force_nonblock,
+ 	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+ 	struct kiocb *kiocb = &req->rw.kiocb;
+ 	struct iov_iter __iter, *iter = &__iter;
++	struct iov_iter iter_cp;
+ 	struct io_async_rw *rw = req->async_data;
+ 	ssize_t ret, ret2, io_size;
+ 
+@@ -3565,6 +3529,7 @@ static int io_write(struct io_kiocb *req, bool force_nonblock,
+ 	ret = io_import_iovec(WRITE, req, &iovec, iter, !force_nonblock);
+ 	if (ret < 0)
+ 		return ret;
++	iter_cp = *iter;
+ 	io_size = iov_iter_count(iter);
+ 	req->result = io_size;
+ 
+@@ -3626,7 +3591,7 @@ done:
+ 	} else {
+ copy_iov:
+ 		/* some cases will consume bytes even on error returns */
+-		iov_iter_revert(iter, io_size - iov_iter_count(iter));
++		*iter = iter_cp;
+ 		ret = io_setup_async_rw(req, iovec, inline_vecs, iter, false);
+ 		if (!ret)
+ 			return -EAGAIN;
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 7009a8dddd45b..a7e0970b5bfe1 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -832,6 +832,7 @@ static inline bool nfs_error_is_fatal_on_server(int err)
+ 	case 0:
+ 	case -ERESTARTSYS:
+ 	case -EINTR:
++	case -ENOMEM:
+ 		return false;
+ 	}
+ 	return nfs_error_is_fatal(err);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 84dd68091f422..f1b503bec2221 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -7122,16 +7122,12 @@ nfsd4_release_lockowner(struct svc_rqst *rqstp,
+ 		if (sop->so_is_open_owner || !same_owner_str(sop, owner))
+ 			continue;
+ 
+-		/* see if there are still any locks associated with it */
+-		lo = lockowner(sop);
+-		list_for_each_entry(stp, &sop->so_stateids, st_perstateowner) {
+-			if (check_for_locks(stp->st_stid.sc_file, lo)) {
+-				status = nfserr_locks_held;
+-				spin_unlock(&clp->cl_lock);
+-				return status;
+-			}
++		if (atomic_read(&sop->so_count) != 1) {
++			spin_unlock(&clp->cl_lock);
++			return nfserr_locks_held;
+ 		}
+ 
++		lo = lockowner(sop);
+ 		nfs4_get_stateowner(sop);
+ 		break;
+ 	}
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 9f2ca1b1c17ac..dbb090e1b026c 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -652,7 +652,7 @@ pipe_poll(struct file *filp, poll_table *wait)
+ 	unsigned int head, tail;
+ 
+ 	/* Epoll has some historical nasty semantics, this enables them */
+-	pipe->poll_usage = 1;
++	WRITE_ONCE(pipe->poll_usage, true);
+ 
+ 	/*
+ 	 * Reading pipe state only -- no need for acquiring the semaphore.
+@@ -1244,30 +1244,33 @@ unsigned int round_pipe_size(unsigned long size)
+ 
+ /*
+  * Resize the pipe ring to a number of slots.
++ *
++ * Note the pipe can be reduced in capacity, but only if the current
++ * occupancy doesn't exceed nr_slots; if it does, EBUSY will be
++ * returned instead.
+  */
+ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots)
+ {
+ 	struct pipe_buffer *bufs;
+ 	unsigned int head, tail, mask, n;
+ 
+-	/*
+-	 * We can shrink the pipe, if arg is greater than the ring occupancy.
+-	 * Since we don't expect a lot of shrink+grow operations, just free and
+-	 * allocate again like we would do for growing.  If the pipe currently
+-	 * contains more buffers than arg, then return busy.
+-	 */
+-	mask = pipe->ring_size - 1;
+-	head = pipe->head;
+-	tail = pipe->tail;
+-	n = pipe_occupancy(pipe->head, pipe->tail);
+-	if (nr_slots < n)
+-		return -EBUSY;
+-
+ 	bufs = kcalloc(nr_slots, sizeof(*bufs),
+ 		       GFP_KERNEL_ACCOUNT | __GFP_NOWARN);
+ 	if (unlikely(!bufs))
+ 		return -ENOMEM;
+ 
++	spin_lock_irq(&pipe->rd_wait.lock);
++	mask = pipe->ring_size - 1;
++	head = pipe->head;
++	tail = pipe->tail;
++
++	n = pipe_occupancy(head, tail);
++	if (nr_slots < n) {
++		spin_unlock_irq(&pipe->rd_wait.lock);
++		kfree(bufs);
++		return -EBUSY;
++	}
++
+ 	/*
+ 	 * The pipe array wraps around, so just start the new one at zero
+ 	 * and adjust the indices.
+@@ -1299,6 +1302,8 @@ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots)
+ 	pipe->tail = tail;
+ 	pipe->head = head;
+ 
++	spin_unlock_irq(&pipe->rd_wait.lock);
++
+ 	/* This might have made more room for writers */
+ 	wake_up_interruptible(&pipe->wr_wait);
+ 	return 0;
+diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
+index d9a692484eaed..de9c27ef68d86 100644
+--- a/fs/xfs/libxfs/xfs_bmap.c
++++ b/fs/xfs/libxfs/xfs_bmap.c
+@@ -6229,6 +6229,11 @@ xfs_bmap_validate_extent(
+ 	xfs_fsblock_t		endfsb;
+ 	bool			isrt;
+ 
++	if (irec->br_startblock + irec->br_blockcount <= irec->br_startblock)
++		return __this_address;
++	if (irec->br_startoff + irec->br_blockcount <= irec->br_startoff)
++		return __this_address;
++
+ 	isrt = XFS_IS_REALTIME_INODE(ip);
+ 	endfsb = irec->br_startblock + irec->br_blockcount - 1;
+ 	if (isrt && whichfork == XFS_DATA_FORK) {
+diff --git a/fs/xfs/libxfs/xfs_dir2.h b/fs/xfs/libxfs/xfs_dir2.h
+index e55378640b056..d03e6098ded9f 100644
+--- a/fs/xfs/libxfs/xfs_dir2.h
++++ b/fs/xfs/libxfs/xfs_dir2.h
+@@ -47,8 +47,6 @@ extern int xfs_dir_lookup(struct xfs_trans *tp, struct xfs_inode *dp,
+ extern int xfs_dir_removename(struct xfs_trans *tp, struct xfs_inode *dp,
+ 				struct xfs_name *name, xfs_ino_t ino,
+ 				xfs_extlen_t tot);
+-extern bool xfs_dir2_sf_replace_needblock(struct xfs_inode *dp,
+-				xfs_ino_t inum);
+ extern int xfs_dir_replace(struct xfs_trans *tp, struct xfs_inode *dp,
+ 				struct xfs_name *name, xfs_ino_t inum,
+ 				xfs_extlen_t tot);
+diff --git a/fs/xfs/libxfs/xfs_dir2_sf.c b/fs/xfs/libxfs/xfs_dir2_sf.c
+index 2463b5d734472..8c4f76bba88be 100644
+--- a/fs/xfs/libxfs/xfs_dir2_sf.c
++++ b/fs/xfs/libxfs/xfs_dir2_sf.c
+@@ -1018,7 +1018,7 @@ xfs_dir2_sf_removename(
+ /*
+  * Check whether the sf dir replace operation need more blocks.
+  */
+-bool
++static bool
+ xfs_dir2_sf_replace_needblock(
+ 	struct xfs_inode	*dp,
+ 	xfs_ino_t		inum)
+diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c
+index 0356f2e340a10..8c6e26d62ef28 100644
+--- a/fs/xfs/xfs_buf_item.c
++++ b/fs/xfs/xfs_buf_item.c
+@@ -56,14 +56,12 @@ xfs_buf_log_format_size(
+ }
+ 
+ /*
+- * This returns the number of log iovecs needed to log the
+- * given buf log item.
++ * Return the number of log iovecs and space needed to log the given buf log
++ * item segment.
+  *
+- * It calculates this as 1 iovec for the buf log format structure
+- * and 1 for each stretch of non-contiguous chunks to be logged.
+- * Contiguous chunks are logged in a single iovec.
+- *
+- * If the XFS_BLI_STALE flag has been set, then log nothing.
++ * It calculates this as 1 iovec for the buf log format structure and 1 for each
++ * stretch of non-contiguous chunks to be logged.  Contiguous chunks are logged
++ * in a single iovec.
+  */
+ STATIC void
+ xfs_buf_item_size_segment(
+@@ -119,11 +117,8 @@ xfs_buf_item_size_segment(
+ }
+ 
+ /*
+- * This returns the number of log iovecs needed to log the given buf log item.
+- *
+- * It calculates this as 1 iovec for the buf log format structure and 1 for each
+- * stretch of non-contiguous chunks to be logged.  Contiguous chunks are logged
+- * in a single iovec.
++ * Return the number of log iovecs and space needed to log the given buf log
++ * item.
+  *
+  * Discontiguous buffers need a format structure per region that is being
+  * logged. This makes the changes in the buffer appear to log recovery as though
+@@ -133,7 +128,11 @@ xfs_buf_item_size_segment(
+  * what ends up on disk.
+  *
+  * If the XFS_BLI_STALE flag has been set, then log nothing but the buf log
+- * format structures.
++ * format structures. If the item has previously been logged and has dirty
++ * regions, we do not relog them in stale buffers. This has the effect of
++ * reducing the size of the relogged item by the amount of dirty data tracked
++ * by the log item. This can result in the committing transaction reducing the
++ * amount of space being consumed by the CIL.
+  */
+ STATIC void
+ xfs_buf_item_size(
+@@ -147,9 +146,9 @@ xfs_buf_item_size(
+ 	ASSERT(atomic_read(&bip->bli_refcount) > 0);
+ 	if (bip->bli_flags & XFS_BLI_STALE) {
+ 		/*
+-		 * The buffer is stale, so all we need to log
+-		 * is the buf log format structure with the
+-		 * cancel flag in it.
++		 * The buffer is stale, so all we need to log is the buf log
++		 * format structure with the cancel flag in it as we are never
++		 * going to replay the changes tracked in the log item.
+ 		 */
+ 		trace_xfs_buf_item_size_stale(bip);
+ 		ASSERT(bip->__bli_format.blf_flags & XFS_BLF_CANCEL);
+@@ -164,9 +163,9 @@ xfs_buf_item_size(
+ 
+ 	if (bip->bli_flags & XFS_BLI_ORDERED) {
+ 		/*
+-		 * The buffer has been logged just to order it.
+-		 * It is not being included in the transaction
+-		 * commit, so no vectors are used at all.
++		 * The buffer has been logged just to order it. It is not being
++		 * included in the transaction commit, so no vectors are used at
++		 * all.
+ 		 */
+ 		trace_xfs_buf_item_size_ordered(bip);
+ 		*nvecs = XFS_LOG_VEC_ORDERED;
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index 2bfbcf28b1bd2..e958b1c745615 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -3152,7 +3152,7 @@ xfs_rename(
+ 	struct xfs_trans	*tp;
+ 	struct xfs_inode	*wip = NULL;		/* whiteout inode */
+ 	struct xfs_inode	*inodes[__XFS_SORT_INODES];
+-	struct xfs_buf		*agibp;
++	int			i;
+ 	int			num_inodes = __XFS_SORT_INODES;
+ 	bool			new_parent = (src_dp != target_dp);
+ 	bool			src_is_directory = S_ISDIR(VFS_I(src_ip)->i_mode);
+@@ -3265,6 +3265,30 @@ xfs_rename(
+ 		}
+ 	}
+ 
++	/*
++	 * Lock the AGI buffers we need to handle bumping the nlink of the
++	 * whiteout inode off the unlinked list and to handle dropping the
++	 * nlink of the target inode.  Per locking order rules, do this in
++	 * increasing AG order and before directory block allocation tries to
++	 * grab AGFs because we grab AGIs before AGFs.
++	 *
++	 * The (vfs) caller must ensure that if src is a directory then
++	 * target_ip is either null or an empty directory.
++	 */
++	for (i = 0; i < num_inodes && inodes[i] != NULL; i++) {
++		if (inodes[i] == wip ||
++		    (inodes[i] == target_ip &&
++		     (VFS_I(target_ip)->i_nlink == 1 || src_is_directory))) {
++			struct xfs_buf	*bp;
++			xfs_agnumber_t	agno;
++
++			agno = XFS_INO_TO_AGNO(mp, inodes[i]->i_ino);
++			error = xfs_read_agi(mp, tp, agno, &bp);
++			if (error)
++				goto out_trans_cancel;
++		}
++	}
++
+ 	/*
+ 	 * Directory entry creation below may acquire the AGF. Remove
+ 	 * the whiteout from the unlinked list first to preserve correct
+@@ -3317,22 +3341,6 @@ xfs_rename(
+ 		 * In case there is already an entry with the same
+ 		 * name at the destination directory, remove it first.
+ 		 */
+-
+-		/*
+-		 * Check whether the replace operation will need to allocate
+-		 * blocks.  This happens when the shortform directory lacks
+-		 * space and we have to convert it to a block format directory.
+-		 * When more blocks are necessary, we must lock the AGI first
+-		 * to preserve locking order (AGI -> AGF).
+-		 */
+-		if (xfs_dir2_sf_replace_needblock(target_dp, src_ip->i_ino)) {
+-			error = xfs_read_agi(mp, tp,
+-					XFS_INO_TO_AGNO(mp, target_ip->i_ino),
+-					&agibp);
+-			if (error)
+-				goto out_trans_cancel;
+-		}
+-
+ 		error = xfs_dir_replace(tp, target_dp, target_name,
+ 					src_ip->i_ino, spaceres);
+ 		if (error)
+diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c
+index 17e20a6d8b4e2..6ff91e5bf3cd7 100644
+--- a/fs/xfs/xfs_inode_item.c
++++ b/fs/xfs/xfs_inode_item.c
+@@ -28,6 +28,20 @@ static inline struct xfs_inode_log_item *INODE_ITEM(struct xfs_log_item *lip)
+ 	return container_of(lip, struct xfs_inode_log_item, ili_item);
+ }
+ 
++/*
++ * The logged size of an inode fork is always the current size of the inode
++ * fork. This means that when an inode fork is relogged, the size of the logged
++ * region is determined by the current state, not the combination of the
++ * previously logged state + the current state. This is different relogging
++ * behaviour to most other log items which will retain the size of the
++ * previously logged changes when smaller regions are relogged.
++ *
++ * Hence operations that remove data from the inode fork (e.g. shortform
++ * dir/attr remove, extent form extent removal, etc), the size of the relogged
++ * inode gets -smaller- rather than stays the same size as the previously logged
++ * size and this can result in the committing transaction reducing the amount of
++ * space being consumed by the CIL.
++ */
+ STATIC void
+ xfs_inode_item_data_fork_size(
+ 	struct xfs_inode_log_item *iip,
+diff --git a/fs/xfs/xfs_iwalk.c b/fs/xfs/xfs_iwalk.c
+index 2a45138831e33..eae3aff9bc976 100644
+--- a/fs/xfs/xfs_iwalk.c
++++ b/fs/xfs/xfs_iwalk.c
+@@ -363,7 +363,7 @@ xfs_iwalk_run_callbacks(
+ 	/* Delete cursor but remember the last record we cached... */
+ 	xfs_iwalk_del_inobt(tp, curpp, agi_bpp, 0);
+ 	irec = &iwag->recs[iwag->nr_recs - 1];
+-	ASSERT(next_agino == irec->ir_startino + XFS_INODES_PER_CHUNK);
++	ASSERT(next_agino >= irec->ir_startino + XFS_INODES_PER_CHUNK);
+ 
+ 	error = xfs_iwalk_ag_recs(iwag);
+ 	if (error)
+diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c
+index b0ef071b3cb53..cd5c04dabe2e1 100644
+--- a/fs/xfs/xfs_log_cil.c
++++ b/fs/xfs/xfs_log_cil.c
+@@ -668,9 +668,14 @@ xlog_cil_push_work(
+ 	ASSERT(push_seq <= ctx->sequence);
+ 
+ 	/*
+-	 * Wake up any background push waiters now this context is being pushed.
++	 * As we are about to switch to a new, empty CIL context, we no longer
++	 * need to throttle tasks on CIL space overruns. Wake any waiters that
++	 * the hard push throttle may have caught so they can start committing
++	 * to the new context. The ctx->xc_push_lock provides the serialisation
++	 * necessary for safely using the lockless waitqueue_active() check in
++	 * this context.
+ 	 */
+-	if (ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log))
++	if (waitqueue_active(&cil->xc_push_wait))
+ 		wake_up_all(&cil->xc_push_wait);
+ 
+ 	/*
+@@ -907,7 +912,7 @@ xlog_cil_push_background(
+ 	ASSERT(!list_empty(&cil->xc_cil));
+ 
+ 	/*
+-	 * don't do a background push if we haven't used up all the
++	 * Don't do a background push if we haven't used up all the
+ 	 * space available yet.
+ 	 */
+ 	if (cil->xc_ctx->space_used < XLOG_CIL_SPACE_LIMIT(log)) {
+@@ -931,9 +936,16 @@ xlog_cil_push_background(
+ 
+ 	/*
+ 	 * If we are well over the space limit, throttle the work that is being
+-	 * done until the push work on this context has begun.
++	 * done until the push work on this context has begun. Enforce the hard
++	 * throttle on all transaction commits once it has been activated, even
++	 * if the committing transactions have resulted in the space usage
++	 * dipping back down under the hard limit.
++	 *
++	 * The ctx->xc_push_lock provides the serialisation necessary for safely
++	 * using the lockless waitqueue_active() check in this context.
+ 	 */
+-	if (cil->xc_ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log)) {
++	if (cil->xc_ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log) ||
++	    waitqueue_active(&cil->xc_push_wait)) {
+ 		trace_xfs_log_cil_wait(log, cil->xc_ctx->ticket);
+ 		ASSERT(cil->xc_ctx->space_used < log->l_logsize);
+ 		xlog_wait(&cil->xc_push_wait, &cil->xc_push_lock);
+diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
+index e3e229e52512a..5ebd6cdc44a7b 100644
+--- a/fs/xfs/xfs_super.c
++++ b/fs/xfs/xfs_super.c
+@@ -199,10 +199,12 @@ xfs_fs_show_options(
+ 		seq_printf(m, ",swidth=%d",
+ 				(int)XFS_FSB_TO_BB(mp, mp->m_swidth));
+ 
+-	if (mp->m_qflags & (XFS_UQUOTA_ACCT|XFS_UQUOTA_ENFD))
+-		seq_puts(m, ",usrquota");
+-	else if (mp->m_qflags & XFS_UQUOTA_ACCT)
+-		seq_puts(m, ",uqnoenforce");
++	if (mp->m_qflags & XFS_UQUOTA_ACCT) {
++		if (mp->m_qflags & XFS_UQUOTA_ENFD)
++			seq_puts(m, ",usrquota");
++		else
++			seq_puts(m, ",uqnoenforce");
++	}
+ 
+ 	if (mp->m_qflags & XFS_PQUOTA_ACCT) {
+ 		if (mp->m_qflags & XFS_PQUOTA_ENFD)
+diff --git a/include/crypto/drbg.h b/include/crypto/drbg.h
+index 88e4d145f7cda..a6c3b8e7deb64 100644
+--- a/include/crypto/drbg.h
++++ b/include/crypto/drbg.h
+@@ -105,6 +105,12 @@ struct drbg_test_data {
+ 	struct drbg_string *testentropy; /* TEST PARAMETER: test entropy */
+ };
+ 
++enum drbg_seed_state {
++	DRBG_SEED_STATE_UNSEEDED,
++	DRBG_SEED_STATE_PARTIAL, /* Seeded with !rng_is_initialized() */
++	DRBG_SEED_STATE_FULL,
++};
++
+ struct drbg_state {
+ 	struct mutex drbg_mutex;	/* lock around DRBG */
+ 	unsigned char *V;	/* internal state 10.1.1.1 1a) */
+@@ -127,16 +133,14 @@ struct drbg_state {
+ 	struct crypto_wait ctr_wait;		/* CTR mode async wait obj */
+ 	struct scatterlist sg_in, sg_out;	/* CTR mode SGLs */
+ 
+-	bool seeded;		/* DRBG fully seeded? */
++	enum drbg_seed_state seeded;		/* DRBG fully seeded? */
+ 	bool pr;		/* Prediction resistance enabled? */
+ 	bool fips_primed;	/* Continuous test primed? */
+ 	unsigned char *prev;	/* FIPS 140-2 continuous test value */
+-	struct work_struct seed_work;	/* asynchronous seeding support */
+ 	struct crypto_rng *jent;
+ 	const struct drbg_state_ops *d_ops;
+ 	const struct drbg_core *core;
+ 	struct drbg_string test_data;
+-	struct notifier_block random_ready;
+ };
+ 
+ static inline __u8 drbg_statelen(struct drbg_state *drbg)
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index fc5642431b923..c0b6ec6bf65b7 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -71,7 +71,7 @@ struct pipe_inode_info {
+ 	unsigned int files;
+ 	unsigned int r_counter;
+ 	unsigned int w_counter;
+-	unsigned int poll_usage;
++	bool poll_usage;
+ 	struct page *tmp_page;
+ 	struct fasync_struct *fasync_readers;
+ 	struct fasync_struct *fasync_writers;
+diff --git a/include/net/netfilter/nf_conntrack_core.h b/include/net/netfilter/nf_conntrack_core.h
+index 09f2efea0b970..5805fe4947f3c 100644
+--- a/include/net/netfilter/nf_conntrack_core.h
++++ b/include/net/netfilter/nf_conntrack_core.h
+@@ -59,8 +59,13 @@ static inline int nf_conntrack_confirm(struct sk_buff *skb)
+ 	int ret = NF_ACCEPT;
+ 
+ 	if (ct) {
+-		if (!nf_ct_is_confirmed(ct))
++		if (!nf_ct_is_confirmed(ct)) {
+ 			ret = __nf_conntrack_confirm(skb);
++
++			if (ret == NF_ACCEPT)
++				ct = (struct nf_conn *)skb_nfct(skb);
++		}
++
+ 		if (likely(ret == NF_ACCEPT))
+ 			nf_ct_deliver_cached_events(ct);
+ 	}
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index 986dabc3d11f0..87becf77cc759 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -378,7 +378,7 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
+ {
+ 	enum bpf_tramp_prog_type kind;
+ 	int err = 0;
+-	int cnt;
++	int cnt = 0, i;
+ 
+ 	kind = bpf_attach_type_to_tramp(prog);
+ 	mutex_lock(&tr->mutex);
+@@ -389,7 +389,10 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
+ 		err = -EBUSY;
+ 		goto out;
+ 	}
+-	cnt = tr->progs_cnt[BPF_TRAMP_FENTRY] + tr->progs_cnt[BPF_TRAMP_FEXIT];
++
++	for (i = 0; i < BPF_TRAMP_MAX; i++)
++		cnt += tr->progs_cnt[i];
++
+ 	if (kind == BPF_TRAMP_REPLACE) {
+ 		/* Cannot attach extension if fentry/fexit are in use. */
+ 		if (cnt) {
+@@ -467,16 +470,19 @@ out:
+ 
+ void bpf_trampoline_put(struct bpf_trampoline *tr)
+ {
++	int i;
++
+ 	if (!tr)
+ 		return;
+ 	mutex_lock(&trampoline_mutex);
+ 	if (!refcount_dec_and_test(&tr->refcnt))
+ 		goto out;
+ 	WARN_ON_ONCE(mutex_is_locked(&tr->mutex));
+-	if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FENTRY])))
+-		goto out;
+-	if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))
+-		goto out;
++
++	for (i = 0; i < BPF_TRAMP_MAX; i++)
++		if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[i])))
++			goto out;
++
+ 	/* This code will be executed even when the last bpf_tramp_image
+ 	 * is alive. All progs are detached from the trampoline and the
+ 	 * trampoline image is patched with jmp into epilogue to skip
+diff --git a/lib/Kconfig b/lib/Kconfig
+index 9216e24e51646..258e1ec7d5920 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -101,6 +101,8 @@ config INDIRECT_PIO
+ 
+ 	  When in doubt, say N.
+ 
++source "lib/crypto/Kconfig"
++
+ config CRC_CCITT
+ 	tristate "CRC-CCITT functions"
+ 	help
+diff --git a/lib/assoc_array.c b/lib/assoc_array.c
+index 6f4bcf5245547..b537a83678e11 100644
+--- a/lib/assoc_array.c
++++ b/lib/assoc_array.c
+@@ -1462,6 +1462,7 @@ int assoc_array_gc(struct assoc_array *array,
+ 	struct assoc_array_ptr *cursor, *ptr;
+ 	struct assoc_array_ptr *new_root, *new_parent, **new_ptr_pp;
+ 	unsigned long nr_leaves_on_tree;
++	bool retained;
+ 	int keylen, slot, nr_free, next_slot, i;
+ 
+ 	pr_devel("-->%s()\n", __func__);
+@@ -1538,6 +1539,7 @@ continue_node:
+ 		goto descend;
+ 	}
+ 
++retry_compress:
+ 	pr_devel("-- compress node %p --\n", new_n);
+ 
+ 	/* Count up the number of empty slots in this node and work out the
+@@ -1555,6 +1557,7 @@ continue_node:
+ 	pr_devel("free=%d, leaves=%lu\n", nr_free, new_n->nr_leaves_on_branch);
+ 
+ 	/* See what we can fold in */
++	retained = false;
+ 	next_slot = 0;
+ 	for (slot = 0; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
+ 		struct assoc_array_shortcut *s;
+@@ -1604,9 +1607,14 @@ continue_node:
+ 			pr_devel("[%d] retain node %lu/%d [nx %d]\n",
+ 				 slot, child->nr_leaves_on_branch, nr_free + 1,
+ 				 next_slot);
++			retained = true;
+ 		}
+ 	}
+ 
++	if (retained && new_n->nr_leaves_on_branch <= ASSOC_ARRAY_FAN_OUT) {
++		pr_devel("internal nodes remain despite enough space, retrying\n");
++		goto retry_compress;
++	}
+ 	pr_devel("after: %lu\n", new_n->nr_leaves_on_branch);
+ 
+ 	nr_leaves_on_tree = new_n->nr_leaves_on_branch;
+diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
+index af3da5a8bde8d..9856e291f4141 100644
+--- a/lib/crypto/Kconfig
++++ b/lib/crypto/Kconfig
+@@ -1,5 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ 
++menu "Crypto library routines"
++
+ config CRYPTO_LIB_AES
+ 	tristate
+ 
+@@ -31,7 +33,7 @@ config CRYPTO_ARCH_HAVE_LIB_CHACHA
+ 
+ config CRYPTO_LIB_CHACHA_GENERIC
+ 	tristate
+-	select CRYPTO_ALGAPI
++	select XOR_BLOCKS
+ 	help
+ 	  This symbol can be depended upon by arch implementations of the
+ 	  ChaCha library interface that require the generic code as a
+@@ -40,7 +42,8 @@ config CRYPTO_LIB_CHACHA_GENERIC
+ 	  of CRYPTO_LIB_CHACHA.
+ 
+ config CRYPTO_LIB_CHACHA
+-	tristate
++	tristate "ChaCha library interface"
++	depends on CRYPTO
+ 	depends on CRYPTO_ARCH_HAVE_LIB_CHACHA || !CRYPTO_ARCH_HAVE_LIB_CHACHA
+ 	select CRYPTO_LIB_CHACHA_GENERIC if CRYPTO_ARCH_HAVE_LIB_CHACHA=n
+ 	help
+@@ -65,7 +68,7 @@ config CRYPTO_LIB_CURVE25519_GENERIC
+ 	  of CRYPTO_LIB_CURVE25519.
+ 
+ config CRYPTO_LIB_CURVE25519
+-	tristate
++	tristate "Curve25519 scalar multiplication library"
+ 	depends on CRYPTO_ARCH_HAVE_LIB_CURVE25519 || !CRYPTO_ARCH_HAVE_LIB_CURVE25519
+ 	select CRYPTO_LIB_CURVE25519_GENERIC if CRYPTO_ARCH_HAVE_LIB_CURVE25519=n
+ 	help
+@@ -100,7 +103,7 @@ config CRYPTO_LIB_POLY1305_GENERIC
+ 	  of CRYPTO_LIB_POLY1305.
+ 
+ config CRYPTO_LIB_POLY1305
+-	tristate
++	tristate "Poly1305 library interface"
+ 	depends on CRYPTO_ARCH_HAVE_LIB_POLY1305 || !CRYPTO_ARCH_HAVE_LIB_POLY1305
+ 	select CRYPTO_LIB_POLY1305_GENERIC if CRYPTO_ARCH_HAVE_LIB_POLY1305=n
+ 	help
+@@ -109,11 +112,15 @@ config CRYPTO_LIB_POLY1305
+ 	  is available and enabled.
+ 
+ config CRYPTO_LIB_CHACHA20POLY1305
+-	tristate
++	tristate "ChaCha20-Poly1305 AEAD support (8-byte nonce library version)"
+ 	depends on CRYPTO_ARCH_HAVE_LIB_CHACHA || !CRYPTO_ARCH_HAVE_LIB_CHACHA
+ 	depends on CRYPTO_ARCH_HAVE_LIB_POLY1305 || !CRYPTO_ARCH_HAVE_LIB_POLY1305
++	depends on CRYPTO
+ 	select CRYPTO_LIB_CHACHA
+ 	select CRYPTO_LIB_POLY1305
++	select CRYPTO_ALGAPI
+ 
+ config CRYPTO_LIB_SHA256
+ 	tristate
++
++endmenu
+diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
+index e59eda07305e6..493093b97093f 100644
+--- a/lib/percpu-refcount.c
++++ b/lib/percpu-refcount.c
+@@ -75,6 +75,7 @@ int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release,
+ 	data = kzalloc(sizeof(*ref->data), gfp);
+ 	if (!data) {
+ 		free_percpu((void __percpu *)ref->percpu_count_ptr);
++		ref->percpu_count_ptr = 0;
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
+index 73cd50735df29..c18dc8e61d352 100644
+--- a/mm/zsmalloc.c
++++ b/mm/zsmalloc.c
+@@ -1748,11 +1748,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
+  */
+ static void lock_zspage(struct zspage *zspage)
+ {
+-	struct page *page = get_first_page(zspage);
++	struct page *curr_page, *page;
+ 
+-	do {
+-		lock_page(page);
+-	} while ((page = get_next_page(page)) != NULL);
++	/*
++	 * Pages we haven't locked yet can be migrated off the list while we're
++	 * trying to lock them, so we need to be careful and only attempt to
++	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
++	 * may no longer belong to the zspage. This means that we may wait for
++	 * the wrong page to unlock, so we must take a reference to the page
++	 * prior to waiting for it to unlock outside migrate_read_lock().
++	 */
++	while (1) {
++		migrate_read_lock(zspage);
++		page = get_first_page(zspage);
++		if (trylock_page(page))
++			break;
++		get_page(page);
++		migrate_read_unlock(zspage);
++		wait_on_page_locked(page);
++		put_page(page);
++	}
++
++	curr_page = page;
++	while ((page = get_next_page(curr_page))) {
++		if (trylock_page(page)) {
++			curr_page = page;
++		} else {
++			get_page(page);
++			migrate_read_unlock(zspage);
++			wait_on_page_locked(page);
++			put_page(page);
++			migrate_read_lock(zspage);
++		}
++	}
++	migrate_read_unlock(zspage);
+ }
+ 
+ static int zs_init_fs_context(struct fs_context *fc)
+diff --git a/net/core/filter.c b/net/core/filter.c
+index ddf9792c0cb2e..d348f1d3fb8fc 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -1687,7 +1687,7 @@ BPF_CALL_5(bpf_skb_store_bytes, struct sk_buff *, skb, u32, offset,
+ 
+ 	if (unlikely(flags & ~(BPF_F_RECOMPUTE_CSUM | BPF_F_INVALIDATE_HASH)))
+ 		return -EINVAL;
+-	if (unlikely(offset > 0xffff))
++	if (unlikely(offset > INT_MAX))
+ 		return -EFAULT;
+ 	if (unlikely(bpf_try_make_writable(skb, offset + len)))
+ 		return -EFAULT;
+@@ -1722,7 +1722,7 @@ BPF_CALL_4(bpf_skb_load_bytes, const struct sk_buff *, skb, u32, offset,
+ {
+ 	void *ptr;
+ 
+-	if (unlikely(offset > 0xffff))
++	if (unlikely(offset > INT_MAX))
+ 		goto err_clear;
+ 
+ 	ptr = skb_header_pointer(skb, offset, len, to);
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 61505b0df57db..6b7ed5568c090 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2904,7 +2904,7 @@ static int count_ah_combs(const struct xfrm_tmpl *t)
+ 			break;
+ 		if (!aalg->pfkey_supported)
+ 			continue;
+-		if (aalg_tmpl_set(t, aalg))
++		if (aalg_tmpl_set(t, aalg) && aalg->available)
+ 			sz += sizeof(struct sadb_comb);
+ 	}
+ 	return sz + sizeof(struct sadb_prop);
+@@ -2922,7 +2922,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
+ 		if (!ealg->pfkey_supported)
+ 			continue;
+ 
+-		if (!(ealg_tmpl_set(t, ealg)))
++		if (!(ealg_tmpl_set(t, ealg) && ealg->available))
+ 			continue;
+ 
+ 		for (k = 1; ; k++) {
+@@ -2933,7 +2933,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
+ 			if (!aalg->pfkey_supported)
+ 				continue;
+ 
+-			if (aalg_tmpl_set(t, aalg))
++			if (aalg_tmpl_set(t, aalg) && aalg->available)
+ 				sz += sizeof(struct sadb_comb);
+ 		}
+ 	}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index fdd1da9ecea9e..ea162e36e0e4b 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2679,27 +2679,31 @@ static struct nft_expr *nft_expr_init(const struct nft_ctx *ctx,
+ 
+ 	err = nf_tables_expr_parse(ctx, nla, &info);
+ 	if (err < 0)
+-		goto err1;
++		goto err_expr_parse;
++
++	err = -EOPNOTSUPP;
++	if (!(info.ops->type->flags & NFT_EXPR_STATEFUL))
++		goto err_expr_stateful;
+ 
+ 	err = -ENOMEM;
+ 	expr = kzalloc(info.ops->size, GFP_KERNEL);
+ 	if (expr == NULL)
+-		goto err2;
++		goto err_expr_stateful;
+ 
+ 	err = nf_tables_newexpr(ctx, &info, expr);
+ 	if (err < 0)
+-		goto err3;
++		goto err_expr_new;
+ 
+ 	return expr;
+-err3:
++err_expr_new:
+ 	kfree(expr);
+-err2:
++err_expr_stateful:
+ 	owner = info.ops->type->owner;
+ 	if (info.ops->type->release_ops)
+ 		info.ops->type->release_ops(info.ops);
+ 
+ 	module_put(owner);
+-err1:
++err_expr_parse:
+ 	return ERR_PTR(err);
+ }
+ 
+@@ -4047,6 +4051,9 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr,
+ 	u32 len;
+ 	int err;
+ 
++	if (desc->field_count >= ARRAY_SIZE(desc->field_len))
++		return -E2BIG;
++
+ 	err = nla_parse_nested_deprecated(tb, NFTA_SET_FIELD_MAX, attr,
+ 					  nft_concat_policy, NULL);
+ 	if (err < 0)
+@@ -4056,9 +4063,8 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr,
+ 		return -EINVAL;
+ 
+ 	len = ntohl(nla_get_be32(tb[NFTA_SET_FIELD_LEN]));
+-
+-	if (len * BITS_PER_BYTE / 32 > NFT_REG32_COUNT)
+-		return -E2BIG;
++	if (!len || len > U8_MAX)
++		return -EINVAL;
+ 
+ 	desc->field_len[desc->field_count++] = len;
+ 
+@@ -4069,7 +4075,8 @@ static int nft_set_desc_concat(struct nft_set_desc *desc,
+ 			       const struct nlattr *nla)
+ {
+ 	struct nlattr *attr;
+-	int rem, err;
++	u32 num_regs = 0;
++	int rem, err, i;
+ 
+ 	nla_for_each_nested(attr, nla, rem) {
+ 		if (nla_type(attr) != NFTA_LIST_ELEM)
+@@ -4080,6 +4087,12 @@ static int nft_set_desc_concat(struct nft_set_desc *desc,
+ 			return err;
+ 	}
+ 
++	for (i = 0; i < desc->field_count; i++)
++		num_regs += DIV_ROUND_UP(desc->field_len[i], sizeof(u32));
++
++	if (num_regs > NFT_REG32_COUNT)
++		return -E2BIG;
++
+ 	return 0;
+ }
+ 
+@@ -5055,9 +5068,6 @@ struct nft_expr *nft_set_elem_expr_alloc(const struct nft_ctx *ctx,
+ 		return expr;
+ 
+ 	err = -EOPNOTSUPP;
+-	if (!(expr->ops->type->flags & NFT_EXPR_STATEFUL))
+-		goto err_set_elem_expr;
+-
+ 	if (expr->ops->type->flags & NFT_EXPR_GC) {
+ 		if (set->flags & NFT_SET_TIMEOUT)
+ 			goto err_set_elem_expr;
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index 3f4554723761d..3b25b78896a28 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -5,7 +5,7 @@
+  * Copyright 2006-2010		Johannes Berg <johannes@sipsolutions.net>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright 2015-2017	Intel Deutschland GmbH
+- * Copyright (C) 2018-2020 Intel Corporation
++ * Copyright (C) 2018-2021 Intel Corporation
+  */
+ 
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+@@ -918,9 +918,6 @@ int wiphy_register(struct wiphy *wiphy)
+ 		return res;
+ 	}
+ 
+-	/* set up regulatory info */
+-	wiphy_regulatory_register(wiphy);
+-
+ 	list_add_rcu(&rdev->list, &cfg80211_rdev_list);
+ 	cfg80211_rdev_list_generation++;
+ 
+@@ -931,6 +928,9 @@ int wiphy_register(struct wiphy *wiphy)
+ 	cfg80211_debugfs_rdev_add(rdev);
+ 	nl80211_notify_wiphy(rdev, NL80211_CMD_NEW_WIPHY);
+ 
++	/* set up regulatory info */
++	wiphy_regulatory_register(wiphy);
++
+ 	if (wiphy->regulatory_flags & REGULATORY_CUSTOM_REG) {
+ 		struct regulatory_request request;
+ 
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index a04fdfb35f070..6b3386e1d93a5 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -4001,6 +4001,7 @@ void wiphy_regulatory_register(struct wiphy *wiphy)
+ 
+ 	wiphy_update_regulatory(wiphy, lr->initiator);
+ 	wiphy_all_share_dfs_chan_state(wiphy);
++	reg_process_self_managed_hints();
+ }
+ 
+ void wiphy_regulatory_deregister(struct wiphy *wiphy)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-06-09 11:27 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-06-09 11:27 UTC (permalink / raw
  To: gentoo-commits

commit:     a2be423064e975ee7d2874bcd57b21c949c4954f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jun  9 11:27:32 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jun  9 11:27:32 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a2be4230

Linux patch 5.10.121

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1120_linux-5.10.121.patch | 16978 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 16982 insertions(+)

diff --git a/0000_README b/0000_README
index 773deb53..89e3b935 100644
--- a/0000_README
+++ b/0000_README
@@ -523,6 +523,10 @@ Patch:  1119_linux-5.10.120.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.120
 
+Patch:  1120_linux-5.10.121.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.121
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1120_linux-5.10.121.patch b/1120_linux-5.10.121.patch
new file mode 100644
index 00000000..77711bdb
--- /dev/null
+++ b/1120_linux-5.10.121.patch
@@ -0,0 +1,16978 @@
+diff --git a/Documentation/conf.py b/Documentation/conf.py
+index ed2b43ec7754e..f47c3f1682db9 100644
+--- a/Documentation/conf.py
++++ b/Documentation/conf.py
+@@ -176,7 +176,7 @@ finally:
+ #
+ # This is also used if you do content translation via gettext catalogs.
+ # Usually you set "language" from the command line for these cases.
+-language = None
++language = 'en'
+ 
+ # There are two options for replacing |today|: either, you set today to some
+ # non-false value, then it is used:
+diff --git a/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml b/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml
+index 0cebaaefda032..419c3b2ac5a6f 100644
+--- a/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml
++++ b/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml
+@@ -72,6 +72,7 @@ examples:
+                     dc-gpios = <&gpio 43 GPIO_ACTIVE_HIGH>;
+                     reset-gpios = <&gpio 80 GPIO_ACTIVE_HIGH>;
+                     rotation = <270>;
++                    backlight = <&backlight>;
+             };
+     };
+ 
+diff --git a/Documentation/devicetree/bindings/gpio/gpio-altera.txt b/Documentation/devicetree/bindings/gpio/gpio-altera.txt
+index 146e554b3c676..2a80e272cd666 100644
+--- a/Documentation/devicetree/bindings/gpio/gpio-altera.txt
++++ b/Documentation/devicetree/bindings/gpio/gpio-altera.txt
+@@ -9,8 +9,9 @@ Required properties:
+   - The second cell is reserved and is currently unused.
+ - gpio-controller : Marks the device node as a GPIO controller.
+ - interrupt-controller: Mark the device node as an interrupt controller
+-- #interrupt-cells : Should be 1. The interrupt type is fixed in the hardware.
++- #interrupt-cells : Should be 2. The interrupt type is fixed in the hardware.
+   - The first cell is the GPIO offset number within the GPIO controller.
++  - The second cell is the interrupt trigger type and level flags.
+ - interrupts: Specify the interrupt.
+ - altr,interrupt-type: Specifies the interrupt trigger type the GPIO
+   hardware is synthesized. This field is required if the Altera GPIO controller
+@@ -38,6 +39,6 @@ gpio_altr: gpio@ff200000 {
+ 	altr,interrupt-type = <IRQ_TYPE_EDGE_RISING>;
+ 	#gpio-cells = <2>;
+ 	gpio-controller;
+-	#interrupt-cells = <1>;
++	#interrupt-cells = <2>;
+ 	interrupt-controller;
+ };
+diff --git a/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml b/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml
+index ef5698f426b2c..392204a08e96c 100644
+--- a/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml
++++ b/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml
+@@ -45,6 +45,7 @@ properties:
+     maxItems: 2
+ 
+   interconnect-names:
++    minItems: 1
+     items:
+       - const: qspi-config
+       - const: qspi-memory
+diff --git a/Makefile b/Makefile
+index fdd2ac273f420..5233d3d9a3b52 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 120
++SUBLEVEL = 121
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-b.dts b/arch/arm/boot/dts/bcm2835-rpi-b.dts
+index 1b63d6b19750b..25d87212cefd3 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-b.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-b.dts
+@@ -53,18 +53,17 @@
+ 			  "GPIO18",
+ 			  "NC", /* GPIO19 */
+ 			  "NC", /* GPIO20 */
+-			  "GPIO21",
++			  "CAM_GPIO0",
+ 			  "GPIO22",
+ 			  "GPIO23",
+ 			  "GPIO24",
+ 			  "GPIO25",
+ 			  "NC", /* GPIO26 */
+-			  "CAM_GPIO0",
+-			  /* Binary number representing build/revision */
+-			  "CONFIG0",
+-			  "CONFIG1",
+-			  "CONFIG2",
+-			  "CONFIG3",
++			  "GPIO27",
++			  "GPIO28",
++			  "GPIO29",
++			  "GPIO30",
++			  "GPIO31",
+ 			  "NC", /* GPIO32 */
+ 			  "NC", /* GPIO33 */
+ 			  "NC", /* GPIO34 */
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
+index 33b2b77aa47db..00582eb2c12e2 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
+@@ -74,16 +74,18 @@
+ 			  "GPIO27",
+ 			  "SDA0",
+ 			  "SCL0",
+-			  "NC", /* GPIO30 */
+-			  "NC", /* GPIO31 */
+-			  "NC", /* GPIO32 */
+-			  "NC", /* GPIO33 */
+-			  "NC", /* GPIO34 */
+-			  "NC", /* GPIO35 */
+-			  "NC", /* GPIO36 */
+-			  "NC", /* GPIO37 */
+-			  "NC", /* GPIO38 */
+-			  "NC", /* GPIO39 */
++			  /* Used by BT module */
++			  "CTS0",
++			  "RTS0",
++			  "TXD0",
++			  "RXD0",
++			  /* Used by Wifi */
++			  "SD1_CLK",
++			  "SD1_CMD",
++			  "SD1_DATA0",
++			  "SD1_DATA1",
++			  "SD1_DATA2",
++			  "SD1_DATA3",
+ 			  "CAM_GPIO1", /* GPIO40 */
+ 			  "WL_ON", /* GPIO41 */
+ 			  "NC", /* GPIO42 */
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
+index 61010266ca9a3..90472e76a313e 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
++++ b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
+@@ -45,7 +45,7 @@
+ 		#gpio-cells = <2>;
+ 		gpio-line-names = "BT_ON",
+ 				  "WL_ON",
+-				  "STATUS_LED_R",
++				  "PWR_LED_R",
+ 				  "LAN_RUN",
+ 				  "",
+ 				  "CAM_GPIO0",
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts b/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts
+index 588d9411ceb61..3dfce4312dfc4 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts
++++ b/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts
+@@ -63,8 +63,8 @@
+ 			  "GPIO43",
+ 			  "GPIO44",
+ 			  "GPIO45",
+-			  "GPIO46",
+-			  "GPIO47",
++			  "SMPS_SCL",
++			  "SMPS_SDA",
+ 			  /* Used by eMMC */
+ 			  "SD_CLK_R",
+ 			  "SD_CMD_R",
+diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+index 572198b6834e6..06c4e09965035 100644
+--- a/arch/arm/boot/dts/exynos5250-smdk5250.dts
++++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+@@ -129,7 +129,7 @@
+ 	samsung,i2c-max-bus-freq = <20000>;
+ 
+ 	eeprom@50 {
+-		compatible = "samsung,s524ad0xd1";
++		compatible = "samsung,s524ad0xd1", "atmel,24c128";
+ 		reg = <0x50>;
+ 	};
+ 
+@@ -289,7 +289,7 @@
+ 	samsung,i2c-max-bus-freq = <20000>;
+ 
+ 	eeprom@51 {
+-		compatible = "samsung,s524ad0xd1";
++		compatible = "samsung,s524ad0xd1", "atmel,24c128";
+ 		reg = <0x51>;
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts b/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts
+index b4a9523e325b4..864dc5018451f 100644
+--- a/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts
++++ b/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts
+@@ -297,7 +297,11 @@
+ 	phy-mode = "rmii";
+ 	phy-reset-gpios = <&gpio1 18 GPIO_ACTIVE_LOW>;
+ 	phy-handle = <&phy>;
+-	clocks = <&clks IMX6QDL_CLK_ENET>, <&clks IMX6QDL_CLK_ENET>, <&rmii_clk>;
++	clocks = <&clks IMX6QDL_CLK_ENET>,
++		 <&clks IMX6QDL_CLK_ENET>,
++		 <&rmii_clk>,
++		 <&clks IMX6QDL_CLK_ENET_REF>;
++	clock-names = "ipg", "ahb", "ptp", "enet_out";
+ 	status = "okay";
+ 
+ 	mdio {
+diff --git a/arch/arm/boot/dts/imx6qdl-colibri.dtsi b/arch/arm/boot/dts/imx6qdl-colibri.dtsi
+index 4e2a309c93fa8..1e86b38147080 100644
+--- a/arch/arm/boot/dts/imx6qdl-colibri.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-colibri.dtsi
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0+ OR MIT
+ /*
+- * Copyright 2014-2020 Toradex
++ * Copyright 2014-2022 Toradex
+  * Copyright 2012 Freescale Semiconductor, Inc.
+  * Copyright 2011 Linaro Ltd.
+  */
+@@ -132,7 +132,7 @@
+ 	clock-frequency = <100000>;
+ 	pinctrl-names = "default", "gpio";
+ 	pinctrl-0 = <&pinctrl_i2c2>;
+-	pinctrl-0 = <&pinctrl_i2c2_gpio>;
++	pinctrl-1 = <&pinctrl_i2c2_gpio>;
+ 	scl-gpios = <&gpio2 30 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 	sda-gpios = <&gpio3 16 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 	status = "okay";
+@@ -488,7 +488,7 @@
+ 		>;
+ 	};
+ 
+-	pinctrl_i2c2_gpio: i2c2grp {
++	pinctrl_i2c2_gpio: i2c2gpiogrp {
+ 		fsl,pins = <
+ 			MX6QDL_PAD_EIM_EB2__GPIO2_IO30 0x4001b8b1
+ 			MX6QDL_PAD_EIM_D16__GPIO3_IO16 0x4001b8b1
+diff --git a/arch/arm/boot/dts/ox820.dtsi b/arch/arm/boot/dts/ox820.dtsi
+index 90846a7655b49..dde4364892bf0 100644
+--- a/arch/arm/boot/dts/ox820.dtsi
++++ b/arch/arm/boot/dts/ox820.dtsi
+@@ -287,7 +287,7 @@
+ 				clocks = <&armclk>;
+ 			};
+ 
+-			gic: gic@1000 {
++			gic: interrupt-controller@1000 {
+ 				compatible = "arm,arm11mp-gic";
+ 				interrupt-controller;
+ 				#interrupt-cells = <3>;
+diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi
+index 986fa0b1a8774..9005f0a23e8f2 100644
+--- a/arch/arm/boot/dts/s5pv210-aries.dtsi
++++ b/arch/arm/boot/dts/s5pv210-aries.dtsi
+@@ -564,7 +564,6 @@
+ 			reset-gpios = <&mp05 5 GPIO_ACTIVE_LOW>;
+ 			vdd3-supply = <&ldo7_reg>;
+ 			vci-supply = <&ldo17_reg>;
+-			spi-cs-high;
+ 			spi-max-frequency = <1200000>;
+ 
+ 			pinctrl-names = "default";
+@@ -637,7 +636,7 @@
+ };
+ 
+ &i2s0 {
+-	dmas = <&pdma0 9>, <&pdma0 10>, <&pdma0 11>;
++	dmas = <&pdma0 10>, <&pdma0 9>, <&pdma0 11>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi
+index 2871351ab9074..eb7e3660ada79 100644
+--- a/arch/arm/boot/dts/s5pv210.dtsi
++++ b/arch/arm/boot/dts/s5pv210.dtsi
+@@ -240,8 +240,8 @@
+ 			reg = <0xeee30000 0x1000>;
+ 			interrupt-parent = <&vic2>;
+ 			interrupts = <16>;
+-			dma-names = "rx", "tx", "tx-sec";
+-			dmas = <&pdma1 9>, <&pdma1 10>, <&pdma1 11>;
++			dma-names = "tx", "rx", "tx-sec";
++			dmas = <&pdma1 10>, <&pdma1 9>, <&pdma1 11>;
+ 			clock-names = "iis",
+ 				      "i2s_opclk0",
+ 				      "i2s_opclk1";
+@@ -260,8 +260,8 @@
+ 			reg = <0xe2100000 0x1000>;
+ 			interrupt-parent = <&vic2>;
+ 			interrupts = <17>;
+-			dma-names = "rx", "tx";
+-			dmas = <&pdma1 12>, <&pdma1 13>;
++			dma-names = "tx", "rx";
++			dmas = <&pdma1 13>, <&pdma1 12>;
+ 			clock-names = "iis", "i2s_opclk0";
+ 			clocks = <&clocks CLK_I2S1>, <&clocks SCLK_AUDIO1>;
+ 			pinctrl-names = "default";
+@@ -275,8 +275,8 @@
+ 			reg = <0xe2a00000 0x1000>;
+ 			interrupt-parent = <&vic2>;
+ 			interrupts = <18>;
+-			dma-names = "rx", "tx";
+-			dmas = <&pdma1 14>, <&pdma1 15>;
++			dma-names = "tx", "rx";
++			dmas = <&pdma1 15>, <&pdma1 14>;
+ 			clock-names = "iis", "i2s_opclk0";
+ 			clocks = <&clocks CLK_I2S2>, <&clocks SCLK_AUDIO2>;
+ 			pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+index 944d38b85eef4..f3e0c790a4b19 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+@@ -141,6 +141,7 @@
+ 		compatible = "snps,dwmac-mdio";
+ 		reset-gpios = <&gpioz 2 GPIO_ACTIVE_LOW>;
+ 		reset-delay-us = <1000>;
++		reset-post-delay-us = <1000>;
+ 
+ 		phy0: ethernet-phy@7 {
+ 			reg = <7>;
+diff --git a/arch/arm/boot/dts/suniv-f1c100s.dtsi b/arch/arm/boot/dts/suniv-f1c100s.dtsi
+index 6100d3b75f613..def8301014487 100644
+--- a/arch/arm/boot/dts/suniv-f1c100s.dtsi
++++ b/arch/arm/boot/dts/suniv-f1c100s.dtsi
+@@ -104,8 +104,10 @@
+ 
+ 		wdt: watchdog@1c20ca0 {
+ 			compatible = "allwinner,suniv-f1c100s-wdt",
+-				     "allwinner,sun4i-a10-wdt";
++				     "allwinner,sun6i-a31-wdt";
+ 			reg = <0x01c20ca0 0x20>;
++			interrupts = <16>;
++			clocks = <&osc32k>;
+ 		};
+ 
+ 		uart0: serial@1c25000 {
+diff --git a/arch/arm/mach-hisi/platsmp.c b/arch/arm/mach-hisi/platsmp.c
+index da7a09c1dae56..1cd1d9b0aabf9 100644
+--- a/arch/arm/mach-hisi/platsmp.c
++++ b/arch/arm/mach-hisi/platsmp.c
+@@ -67,14 +67,17 @@ static void __init hi3xxx_smp_prepare_cpus(unsigned int max_cpus)
+ 		}
+ 		ctrl_base = of_iomap(np, 0);
+ 		if (!ctrl_base) {
++			of_node_put(np);
+ 			pr_err("failed to map address\n");
+ 			return;
+ 		}
+ 		if (of_property_read_u32(np, "smp-offset", &offset) < 0) {
++			of_node_put(np);
+ 			pr_err("failed to find smp-offset property\n");
+ 			return;
+ 		}
+ 		ctrl_base += offset;
++		of_node_put(np);
+ 	}
+ }
+ 
+@@ -160,6 +163,7 @@ static int hip01_boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	if (WARN_ON(!node))
+ 		return -1;
+ 	ctrl_base = of_iomap(node, 0);
++	of_node_put(node);
+ 
+ 	/* set the secondary core boot from DDR */
+ 	remap_reg_value = readl_relaxed(ctrl_base + REG_SC_CTRL);
+diff --git a/arch/arm/mach-mediatek/Kconfig b/arch/arm/mach-mediatek/Kconfig
+index 9e0f592d87d8e..35a3430c7942d 100644
+--- a/arch/arm/mach-mediatek/Kconfig
++++ b/arch/arm/mach-mediatek/Kconfig
+@@ -30,6 +30,7 @@ config MACH_MT7623
+ config MACH_MT7629
+ 	bool "MediaTek MT7629 SoCs support"
+ 	default ARCH_MEDIATEK
++	select HAVE_ARM_ARCH_TIMER
+ 
+ config MACH_MT8127
+ 	bool "MediaTek MT8127 SoCs support"
+diff --git a/arch/arm/mach-omap1/clock.c b/arch/arm/mach-omap1/clock.c
+index bd5be82101f32..d89bda12bf3cd 100644
+--- a/arch/arm/mach-omap1/clock.c
++++ b/arch/arm/mach-omap1/clock.c
+@@ -41,7 +41,7 @@ static DEFINE_SPINLOCK(clockfw_lock);
+ unsigned long omap1_uart_recalc(struct clk *clk)
+ {
+ 	unsigned int val = __raw_readl(clk->enable_reg);
+-	return val & clk->enable_bit ? 48000000 : 12000000;
++	return val & 1 << clk->enable_bit ? 48000000 : 12000000;
+ }
+ 
+ unsigned long omap1_sossi_recalc(struct clk *clk)
+diff --git a/arch/arm/mach-pxa/cm-x300.c b/arch/arm/mach-pxa/cm-x300.c
+index 2e35354b61f56..167e871f059ef 100644
+--- a/arch/arm/mach-pxa/cm-x300.c
++++ b/arch/arm/mach-pxa/cm-x300.c
+@@ -354,13 +354,13 @@ static struct platform_device cm_x300_spi_gpio = {
+ static struct gpiod_lookup_table cm_x300_spi_gpiod_table = {
+ 	.dev_id         = "spi_gpio",
+ 	.table          = {
+-		GPIO_LOOKUP("gpio-pxa", GPIO_LCD_SCL,
++		GPIO_LOOKUP("pca9555.1", GPIO_LCD_SCL - GPIO_LCD_BASE,
+ 			    "sck", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("gpio-pxa", GPIO_LCD_DIN,
++		GPIO_LOOKUP("pca9555.1", GPIO_LCD_DIN - GPIO_LCD_BASE,
+ 			    "mosi", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("gpio-pxa", GPIO_LCD_DOUT,
++		GPIO_LOOKUP("pca9555.1", GPIO_LCD_DOUT - GPIO_LCD_BASE,
+ 			    "miso", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("gpio-pxa", GPIO_LCD_CS,
++		GPIO_LOOKUP("pca9555.1", GPIO_LCD_CS - GPIO_LCD_BASE,
+ 			    "cs", GPIO_ACTIVE_HIGH),
+ 		{ },
+ 	},
+diff --git a/arch/arm/mach-pxa/magician.c b/arch/arm/mach-pxa/magician.c
+index cd9fa465b9b2a..9aee8e0f2bb1d 100644
+--- a/arch/arm/mach-pxa/magician.c
++++ b/arch/arm/mach-pxa/magician.c
+@@ -681,7 +681,7 @@ static struct platform_device bq24022 = {
+ static struct gpiod_lookup_table bq24022_gpiod_table = {
+ 	.dev_id = "gpio-regulator",
+ 	.table = {
+-		GPIO_LOOKUP("gpio-pxa", EGPIO_MAGICIAN_BQ24022_ISET2,
++		GPIO_LOOKUP("htc-egpio-0", EGPIO_MAGICIAN_BQ24022_ISET2 - MAGICIAN_EGPIO_BASE,
+ 			    NULL, GPIO_ACTIVE_HIGH),
+ 		GPIO_LOOKUP("gpio-pxa", GPIO30_MAGICIAN_BQ24022_nCHARGE_EN,
+ 			    "enable", GPIO_ACTIVE_LOW),
+diff --git a/arch/arm/mach-pxa/tosa.c b/arch/arm/mach-pxa/tosa.c
+index 431709725d02b..ded5e343e1984 100644
+--- a/arch/arm/mach-pxa/tosa.c
++++ b/arch/arm/mach-pxa/tosa.c
+@@ -296,9 +296,9 @@ static struct gpiod_lookup_table tosa_mci_gpio_table = {
+ 	.table = {
+ 		GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_nSD_DETECT,
+ 			    "cd", GPIO_ACTIVE_LOW),
+-		GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_SD_WP,
++		GPIO_LOOKUP("sharp-scoop.0", TOSA_GPIO_SD_WP - TOSA_SCOOP_GPIO_BASE,
+ 			    "wp", GPIO_ACTIVE_LOW),
+-		GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_PWR_ON,
++		GPIO_LOOKUP("sharp-scoop.0", TOSA_GPIO_PWR_ON - TOSA_SCOOP_GPIO_BASE,
+ 			    "power", GPIO_ACTIVE_HIGH),
+ 		{ },
+ 	},
+diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c
+index a0554d7d04f7c..e1adc098f89ac 100644
+--- a/arch/arm/mach-vexpress/dcscb.c
++++ b/arch/arm/mach-vexpress/dcscb.c
+@@ -144,6 +144,7 @@ static int __init dcscb_init(void)
+ 	if (!node)
+ 		return -ENODEV;
+ 	dcscb_base = of_iomap(node, 0);
++	of_node_put(node);
+ 	if (!dcscb_base)
+ 		return -EADDRNOTAVAIL;
+ 	cfg = readl_relaxed(dcscb_base + DCS_CFG_R);
+diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
+index 5c4ac1c9f4e02..889e78f40a25a 100644
+--- a/arch/arm64/Kconfig.platforms
++++ b/arch/arm64/Kconfig.platforms
+@@ -250,6 +250,7 @@ config ARCH_STRATIX10
+ 
+ config ARCH_SYNQUACER
+ 	bool "Socionext SynQuacer SoC Family"
++	select IRQ_FASTEOI_HIERARCHY_HANDLERS
+ 
+ config ARCH_TEGRA
+ 	bool "NVIDIA Tegra SoC Family"
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index 776a6b0f61a62..dca040f66f5f3 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -13,7 +13,7 @@
+ 	clocks {
+ 		sleep_clk: sleep_clk {
+ 			compatible = "fixed-clock";
+-			clock-frequency = <32000>;
++			clock-frequency = <32768>;
+ 			#clock-cells = <0>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index 45f9a44326a6d..297408b947ffb 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -316,7 +316,7 @@
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
+ 			qcom,controlled-remotely;
+-			num-channels = <18>;
++			num-channels = <24>;
+ 			qcom,num-ees = <4>;
+ 		};
+ 
+@@ -412,7 +412,7 @@
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
+ 			qcom,controlled-remotely;
+-			num-channels = <18>;
++			num-channels = <24>;
+ 			qcom,num-ees = <4>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index 52ba4d07e7712..c5f3d4f8f4d21 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -1471,6 +1471,7 @@
+ 			reg = <0xf780 0x24>;
+ 			clocks = <&sdhci>;
+ 			clock-names = "emmcclk";
++			drive-impedance-ohm = <50>;
+ 			#phy-cells = <0>;
+ 			status = "disabled";
+ 		};
+@@ -1481,7 +1482,6 @@
+ 			clock-names = "refclk";
+ 			#phy-cells = <1>;
+ 			resets = <&cru SRST_PCIEPHY>;
+-			drive-impedance-ohm = <50>;
+ 			reset-names = "phy";
+ 			status = "disabled";
+ 		};
+diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
+index 3c18c2454089b..51274bab25653 100644
+--- a/arch/arm64/kernel/sys_compat.c
++++ b/arch/arm64/kernel/sys_compat.c
+@@ -115,6 +115,6 @@ long compat_arm_syscall(struct pt_regs *regs, int scno)
+ 		(compat_thumb_mode(regs) ? 2 : 4);
+ 
+ 	arm64_notify_die("Oops - bad compat syscall(2)", regs,
+-			 SIGILL, ILL_ILLTRP, addr, scno);
++			 SIGILL, ILL_ILLTRP, addr, 0);
+ 	return 0;
+ }
+diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
+index 70a71f38b6a9e..24913271e898c 100644
+--- a/arch/arm64/mm/copypage.c
++++ b/arch/arm64/mm/copypage.c
+@@ -16,8 +16,8 @@
+ 
+ void copy_highpage(struct page *to, struct page *from)
+ {
+-	struct page *kto = page_address(to);
+-	struct page *kfrom = page_address(from);
++	void *kto = page_address(to);
++	void *kfrom = page_address(from);
+ 
+ 	copy_page(kto, kfrom);
+ 
+diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
+index 589f090f48b99..556b9ba61ec06 100644
+--- a/arch/csky/kernel/probes/kprobes.c
++++ b/arch/csky/kernel/probes/kprobes.c
+@@ -28,7 +28,7 @@ static int __kprobes patch_text_cb(void *priv)
+ 	struct csky_insn_patch *param = priv;
+ 	unsigned int addr = (unsigned int)param->addr;
+ 
+-	if (atomic_inc_return(&param->cpu_count) == 1) {
++	if (atomic_inc_return(&param->cpu_count) == num_online_cpus()) {
+ 		*(u16 *) addr = cpu_to_le16(param->opcode);
+ 		dcache_wb_range(addr, addr + 2);
+ 		atomic_inc(&param->cpu_count);
+diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu
+index c17205da47fe3..936cd9619bf01 100644
+--- a/arch/m68k/Kconfig.cpu
++++ b/arch/m68k/Kconfig.cpu
+@@ -312,7 +312,7 @@ comment "Processor Specific Options"
+ 
+ config M68KFPU_EMU
+ 	bool "Math emulation support"
+-	depends on MMU
++	depends on M68KCLASSIC && FPU
+ 	help
+ 	  At some point in the future, this will cause floating-point math
+ 	  instructions to be emulated by the kernel on machines that lack a
+diff --git a/arch/m68k/include/asm/raw_io.h b/arch/m68k/include/asm/raw_io.h
+index 80eb2396d01eb..3ba40bc1dfaa9 100644
+--- a/arch/m68k/include/asm/raw_io.h
++++ b/arch/m68k/include/asm/raw_io.h
+@@ -80,14 +80,14 @@
+ 	({ u16 __v = le16_to_cpu(*(__force volatile u16 *) (addr)); __v; })
+ 
+ #define rom_out_8(addr, b)	\
+-	({u8 __maybe_unused __w, __v = (b);  u32 _addr = ((u32) (addr)); \
++	(void)({u8 __maybe_unused __w, __v = (b);  u32 _addr = ((u32) (addr)); \
+ 	__w = ((*(__force volatile u8 *)  ((_addr | 0x10000) + (__v<<1)))); })
+ #define rom_out_be16(addr, w)	\
+-	({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \
++	(void)({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \
+ 	__w = ((*(__force volatile u16 *) ((_addr & 0xFFFF0000UL) + ((__v & 0xFF)<<1)))); \
+ 	__w = ((*(__force volatile u16 *) ((_addr | 0x10000) + ((__v >> 8)<<1)))); })
+ #define rom_out_le16(addr, w)	\
+-	({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \
++	(void)({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \
+ 	__w = ((*(__force volatile u16 *) ((_addr & 0xFFFF0000UL) + ((__v >> 8)<<1)))); \
+ 	__w = ((*(__force volatile u16 *) ((_addr | 0x10000) + ((__v & 0xFF)<<1)))); })
+ 
+diff --git a/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h b/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h
+index 58f829c9b6c70..79d6fd249583f 100644
+--- a/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h
++++ b/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h
+@@ -26,7 +26,6 @@
+ #define cpu_has_3k_cache		0
+ #define cpu_has_4k_cache		1
+ #define cpu_has_tx39_cache		0
+-#define cpu_has_fpu			1
+ #define cpu_has_nofpuex			0
+ #define cpu_has_32fpr			1
+ #define cpu_has_counter			1
+diff --git a/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h b/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h
+index 49a93e82c2528..2635b6ba1cb54 100644
+--- a/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h
++++ b/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h
+@@ -29,7 +29,6 @@
+ #define cpu_has_3k_cache		0
+ #define cpu_has_4k_cache		1
+ #define cpu_has_tx39_cache		0
+-#define cpu_has_fpu			1
+ #define cpu_has_nofpuex			0
+ #define cpu_has_32fpr			1
+ #define cpu_has_counter			1
+diff --git a/arch/openrisc/include/asm/timex.h b/arch/openrisc/include/asm/timex.h
+index d52b4e536e3f9..5487fa93dd9be 100644
+--- a/arch/openrisc/include/asm/timex.h
++++ b/arch/openrisc/include/asm/timex.h
+@@ -23,6 +23,7 @@ static inline cycles_t get_cycles(void)
+ {
+ 	return mfspr(SPR_TTCR);
+ }
++#define get_cycles get_cycles
+ 
+ /* This isn't really used any more */
+ #define CLOCK_TICK_RATE 1000
+diff --git a/arch/openrisc/kernel/head.S b/arch/openrisc/kernel/head.S
+index af355e3f4619a..459b0a1e4eb23 100644
+--- a/arch/openrisc/kernel/head.S
++++ b/arch/openrisc/kernel/head.S
+@@ -521,6 +521,15 @@ _start:
+ 	l.ori	r3,r0,0x1
+ 	l.mtspr	r0,r3,SPR_SR
+ 
++	/*
++	 * Start the TTCR as early as possible, so that the RNG can make use of
++	 * measurements of boot time from the earliest opportunity. Especially
++	 * important is that the TTCR does not return zero by the time we reach
++	 * rand_initialize().
++	 */
++	l.movhi r3,hi(SPR_TTMR_CR)
++	l.mtspr r0,r3,SPR_TTMR
++
+ 	CLEAR_GPR(r1)
+ 	CLEAR_GPR(r2)
+ 	CLEAR_GPR(r3)
+diff --git a/arch/parisc/include/asm/fb.h b/arch/parisc/include/asm/fb.h
+index c4cd6360f9964..d63a2acb91f2b 100644
+--- a/arch/parisc/include/asm/fb.h
++++ b/arch/parisc/include/asm/fb.h
+@@ -12,9 +12,13 @@ static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
+ 	pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE;
+ }
+ 
++#if defined(CONFIG_STI_CONSOLE) || defined(CONFIG_FB_STI)
++int fb_is_primary_device(struct fb_info *info);
++#else
+ static inline int fb_is_primary_device(struct fb_info *info)
+ {
+ 	return 0;
+ }
++#endif
+ 
+ #endif /* _ASM_FB_H_ */
+diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
+index f2c5c26869f1a..03ae544eb6cc4 100644
+--- a/arch/powerpc/include/asm/page.h
++++ b/arch/powerpc/include/asm/page.h
+@@ -216,6 +216,9 @@ static inline bool pfn_valid(unsigned long pfn)
+ #define __pa(x) ((phys_addr_t)(unsigned long)(x) - VIRT_PHYS_OFFSET)
+ #else
+ #ifdef CONFIG_PPC64
++
++#define VIRTUAL_WARN_ON(x)	WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && (x))
++
+ /*
+  * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
+  * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
+@@ -223,13 +226,13 @@ static inline bool pfn_valid(unsigned long pfn)
+  */
+ #define __va(x)								\
+ ({									\
+-	VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET);		\
++	VIRTUAL_WARN_ON((unsigned long)(x) >= PAGE_OFFSET);		\
+ 	(void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET);	\
+ })
+ 
+ #define __pa(x)								\
+ ({									\
+-	VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET);		\
++	VIRTUAL_WARN_ON((unsigned long)(x) < PAGE_OFFSET);		\
+ 	(unsigned long)(x) & 0x0fffffffffffffffUL;			\
+ })
+ 
+diff --git a/arch/powerpc/include/asm/vas.h b/arch/powerpc/include/asm/vas.h
+index e33f80b0ea819..47062b4570490 100644
+--- a/arch/powerpc/include/asm/vas.h
++++ b/arch/powerpc/include/asm/vas.h
+@@ -52,7 +52,7 @@ enum vas_cop_type {
+  * Receive window attributes specified by the (in-kernel) owner of window.
+  */
+ struct vas_rx_win_attr {
+-	void *rx_fifo;
++	u64 rx_fifo;
+ 	int rx_fifo_size;
+ 	int wcreds_max;
+ 
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index c3bb800dc4352..1a5ba26aab156 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -861,7 +861,6 @@ static int fadump_alloc_mem_ranges(struct fadump_mrange_info *mrange_info)
+ 				       sizeof(struct fadump_memory_range));
+ 	return 0;
+ }
+-
+ static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info,
+ 				       u64 base, u64 end)
+ {
+@@ -880,7 +879,12 @@ static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info,
+ 		start = mem_ranges[mrange_info->mem_range_cnt - 1].base;
+ 		size  = mem_ranges[mrange_info->mem_range_cnt - 1].size;
+ 
+-		if ((start + size) == base)
++		/*
++		 * Boot memory area needs separate PT_LOAD segment(s) as it
++		 * is moved to a different location at the time of crash.
++		 * So, fold only if the region is not boot memory area.
++		 */
++		if ((start + size) == base && start >= fw_dump.boot_mem_top)
+ 			is_adjacent = true;
+ 	}
+ 	if (!is_adjacent) {
+diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c
+index 1f835539fda42..f0271daa8f6a6 100644
+--- a/arch/powerpc/kernel/idle.c
++++ b/arch/powerpc/kernel/idle.c
+@@ -37,7 +37,7 @@ static int __init powersave_off(char *arg)
+ {
+ 	ppc_md.power_save = NULL;
+ 	cpuidle_disable = IDLE_POWERSAVE_OFF;
+-	return 0;
++	return 1;
+ }
+ __setup("powersave=off", powersave_off);
+ 
+diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
+index 58448f0e47213..52990becbdfc7 100644
+--- a/arch/powerpc/perf/isa207-common.c
++++ b/arch/powerpc/perf/isa207-common.c
+@@ -363,7 +363,8 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
+ 		if (event_is_threshold(event) && is_thresh_cmp_valid(event)) {
+ 			mask  |= CNST_THRESH_MASK;
+ 			value |= CNST_THRESH_VAL(event >> EVENT_THRESH_SHIFT);
+-		}
++		} else if (event_is_threshold(event))
++			return -1;
+ 	} else {
+ 		/*
+ 		 * Special case for PM_MRK_FAB_RSP_MATCH and PM_MRK_FAB_RSP_MATCH_CYC,
+diff --git a/arch/powerpc/platforms/4xx/cpm.c b/arch/powerpc/platforms/4xx/cpm.c
+index ae8b812c92029..2481e78c04234 100644
+--- a/arch/powerpc/platforms/4xx/cpm.c
++++ b/arch/powerpc/platforms/4xx/cpm.c
+@@ -327,6 +327,6 @@ late_initcall(cpm_init);
+ static int __init cpm_powersave_off(char *arg)
+ {
+ 	cpm.powersave_off = 1;
+-	return 0;
++	return 1;
+ }
+ __setup("powersave=off", cpm_powersave_off);
+diff --git a/arch/powerpc/platforms/8xx/cpm1.c b/arch/powerpc/platforms/8xx/cpm1.c
+index c58b6f1c40e35..3ef5e9fd3a9b6 100644
+--- a/arch/powerpc/platforms/8xx/cpm1.c
++++ b/arch/powerpc/platforms/8xx/cpm1.c
+@@ -280,6 +280,7 @@ cpm_setbrg(uint brg, uint rate)
+ 		out_be32(bp, (((BRG_UART_CLK_DIV16 / rate) - 1) << 1) |
+ 			      CPM_BRG_EN | CPM_BRG_DIV16);
+ }
++EXPORT_SYMBOL(cpm_setbrg);
+ 
+ struct cpm_ioport16 {
+ 	__be16 dir, par, odr_sor, dat, intr;
+diff --git a/arch/powerpc/platforms/powernv/opal-fadump.c b/arch/powerpc/platforms/powernv/opal-fadump.c
+index 9a360ced663b0..e23a51a05f99a 100644
+--- a/arch/powerpc/platforms/powernv/opal-fadump.c
++++ b/arch/powerpc/platforms/powernv/opal-fadump.c
+@@ -60,7 +60,7 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node)
+ 	addr = be64_to_cpu(addr);
+ 	pr_debug("Kernel metadata addr: %llx\n", addr);
+ 	opal_fdm_active = (void *)addr;
+-	if (opal_fdm_active->registered_regions == 0)
++	if (be16_to_cpu(opal_fdm_active->registered_regions) == 0)
+ 		return;
+ 
+ 	ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_BOOT_MEM, &addr);
+@@ -95,17 +95,17 @@ static int opal_fadump_unregister(struct fw_dump *fadump_conf);
+ static void opal_fadump_update_config(struct fw_dump *fadump_conf,
+ 				      const struct opal_fadump_mem_struct *fdm)
+ {
+-	pr_debug("Boot memory regions count: %d\n", fdm->region_cnt);
++	pr_debug("Boot memory regions count: %d\n", be16_to_cpu(fdm->region_cnt));
+ 
+ 	/*
+ 	 * The destination address of the first boot memory region is the
+ 	 * destination address of boot memory regions.
+ 	 */
+-	fadump_conf->boot_mem_dest_addr = fdm->rgn[0].dest;
++	fadump_conf->boot_mem_dest_addr = be64_to_cpu(fdm->rgn[0].dest);
+ 	pr_debug("Destination address of boot memory regions: %#016llx\n",
+ 		 fadump_conf->boot_mem_dest_addr);
+ 
+-	fadump_conf->fadumphdr_addr = fdm->fadumphdr_addr;
++	fadump_conf->fadumphdr_addr = be64_to_cpu(fdm->fadumphdr_addr);
+ }
+ 
+ /*
+@@ -126,9 +126,9 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf,
+ 	fadump_conf->boot_memory_size = 0;
+ 
+ 	pr_debug("Boot memory regions:\n");
+-	for (i = 0; i < fdm->region_cnt; i++) {
+-		base = fdm->rgn[i].src;
+-		size = fdm->rgn[i].size;
++	for (i = 0; i < be16_to_cpu(fdm->region_cnt); i++) {
++		base = be64_to_cpu(fdm->rgn[i].src);
++		size = be64_to_cpu(fdm->rgn[i].size);
+ 		pr_debug("\t[%03d] base: 0x%lx, size: 0x%lx\n", i, base, size);
+ 
+ 		fadump_conf->boot_mem_addr[i] = base;
+@@ -143,7 +143,7 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf,
+ 	 * Start address of reserve dump area (permanent reservation) for
+ 	 * re-registering FADump after dump capture.
+ 	 */
+-	fadump_conf->reserve_dump_area_start = fdm->rgn[0].dest;
++	fadump_conf->reserve_dump_area_start = be64_to_cpu(fdm->rgn[0].dest);
+ 
+ 	/*
+ 	 * Rarely, but it can so happen that system crashes before all
+@@ -155,13 +155,14 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf,
+ 	 * Hope the memory that could not be preserved only has pages
+ 	 * that are usually filtered out while saving the vmcore.
+ 	 */
+-	if (fdm->region_cnt > fdm->registered_regions) {
++	if (be16_to_cpu(fdm->region_cnt) > be16_to_cpu(fdm->registered_regions)) {
+ 		pr_warn("Not all memory regions were saved!!!\n");
+ 		pr_warn("  Unsaved memory regions:\n");
+-		i = fdm->registered_regions;
+-		while (i < fdm->region_cnt) {
++		i = be16_to_cpu(fdm->registered_regions);
++		while (i < be16_to_cpu(fdm->region_cnt)) {
+ 			pr_warn("\t[%03d] base: 0x%llx, size: 0x%llx\n",
+-				i, fdm->rgn[i].src, fdm->rgn[i].size);
++				i, be64_to_cpu(fdm->rgn[i].src),
++				be64_to_cpu(fdm->rgn[i].size));
+ 			i++;
+ 		}
+ 
+@@ -170,7 +171,7 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf,
+ 	}
+ 
+ 	fadump_conf->boot_mem_top = (fadump_conf->boot_memory_size + hole_size);
+-	fadump_conf->boot_mem_regs_cnt = fdm->region_cnt;
++	fadump_conf->boot_mem_regs_cnt = be16_to_cpu(fdm->region_cnt);
+ 	opal_fadump_update_config(fadump_conf, fdm);
+ }
+ 
+@@ -178,35 +179,38 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf,
+ static void opal_fadump_init_metadata(struct opal_fadump_mem_struct *fdm)
+ {
+ 	fdm->version = OPAL_FADUMP_VERSION;
+-	fdm->region_cnt = 0;
+-	fdm->registered_regions = 0;
+-	fdm->fadumphdr_addr = 0;
++	fdm->region_cnt = cpu_to_be16(0);
++	fdm->registered_regions = cpu_to_be16(0);
++	fdm->fadumphdr_addr = cpu_to_be64(0);
+ }
+ 
+ static u64 opal_fadump_init_mem_struct(struct fw_dump *fadump_conf)
+ {
+ 	u64 addr = fadump_conf->reserve_dump_area_start;
++	u16 reg_cnt;
+ 	int i;
+ 
+ 	opal_fdm = __va(fadump_conf->kernel_metadata);
+ 	opal_fadump_init_metadata(opal_fdm);
+ 
+ 	/* Boot memory regions */
++	reg_cnt = be16_to_cpu(opal_fdm->region_cnt);
+ 	for (i = 0; i < fadump_conf->boot_mem_regs_cnt; i++) {
+-		opal_fdm->rgn[i].src	= fadump_conf->boot_mem_addr[i];
+-		opal_fdm->rgn[i].dest	= addr;
+-		opal_fdm->rgn[i].size	= fadump_conf->boot_mem_sz[i];
++		opal_fdm->rgn[i].src	= cpu_to_be64(fadump_conf->boot_mem_addr[i]);
++		opal_fdm->rgn[i].dest	= cpu_to_be64(addr);
++		opal_fdm->rgn[i].size	= cpu_to_be64(fadump_conf->boot_mem_sz[i]);
+ 
+-		opal_fdm->region_cnt++;
++		reg_cnt++;
+ 		addr += fadump_conf->boot_mem_sz[i];
+ 	}
++	opal_fdm->region_cnt = cpu_to_be16(reg_cnt);
+ 
+ 	/*
+ 	 * Kernel metadata is passed to f/w and retrieved in capture kerenl.
+ 	 * So, use it to save fadump header address instead of calculating it.
+ 	 */
+-	opal_fdm->fadumphdr_addr = (opal_fdm->rgn[0].dest +
+-				    fadump_conf->boot_memory_size);
++	opal_fdm->fadumphdr_addr = cpu_to_be64(be64_to_cpu(opal_fdm->rgn[0].dest) +
++					       fadump_conf->boot_memory_size);
+ 
+ 	opal_fadump_update_config(fadump_conf, opal_fdm);
+ 
+@@ -269,18 +273,21 @@ static u64 opal_fadump_get_bootmem_min(void)
+ static int opal_fadump_register(struct fw_dump *fadump_conf)
+ {
+ 	s64 rc = OPAL_PARAMETER;
++	u16 registered_regs;
+ 	int i, err = -EIO;
+ 
+-	for (i = 0; i < opal_fdm->region_cnt; i++) {
++	registered_regs = be16_to_cpu(opal_fdm->registered_regions);
++	for (i = 0; i < be16_to_cpu(opal_fdm->region_cnt); i++) {
+ 		rc = opal_mpipl_update(OPAL_MPIPL_ADD_RANGE,
+-				       opal_fdm->rgn[i].src,
+-				       opal_fdm->rgn[i].dest,
+-				       opal_fdm->rgn[i].size);
++				       be64_to_cpu(opal_fdm->rgn[i].src),
++				       be64_to_cpu(opal_fdm->rgn[i].dest),
++				       be64_to_cpu(opal_fdm->rgn[i].size));
+ 		if (rc != OPAL_SUCCESS)
+ 			break;
+ 
+-		opal_fdm->registered_regions++;
++		registered_regs++;
+ 	}
++	opal_fdm->registered_regions = cpu_to_be16(registered_regs);
+ 
+ 	switch (rc) {
+ 	case OPAL_SUCCESS:
+@@ -291,7 +298,8 @@ static int opal_fadump_register(struct fw_dump *fadump_conf)
+ 	case OPAL_RESOURCE:
+ 		/* If MAX regions limit in f/w is hit, warn and proceed. */
+ 		pr_warn("%d regions could not be registered for MPIPL as MAX limit is reached!\n",
+-			(opal_fdm->region_cnt - opal_fdm->registered_regions));
++			(be16_to_cpu(opal_fdm->region_cnt) -
++			 be16_to_cpu(opal_fdm->registered_regions)));
+ 		fadump_conf->dump_registered = 1;
+ 		err = 0;
+ 		break;
+@@ -312,7 +320,7 @@ static int opal_fadump_register(struct fw_dump *fadump_conf)
+ 	 * If some regions were registered before OPAL_MPIPL_ADD_RANGE
+ 	 * OPAL call failed, unregister all regions.
+ 	 */
+-	if ((err < 0) && (opal_fdm->registered_regions > 0))
++	if ((err < 0) && (be16_to_cpu(opal_fdm->registered_regions) > 0))
+ 		opal_fadump_unregister(fadump_conf);
+ 
+ 	return err;
+@@ -328,7 +336,7 @@ static int opal_fadump_unregister(struct fw_dump *fadump_conf)
+ 		return -EIO;
+ 	}
+ 
+-	opal_fdm->registered_regions = 0;
++	opal_fdm->registered_regions = cpu_to_be16(0);
+ 	fadump_conf->dump_registered = 0;
+ 	return 0;
+ }
+@@ -563,19 +571,20 @@ static void opal_fadump_region_show(struct fw_dump *fadump_conf,
+ 	else
+ 		fdm_ptr = opal_fdm;
+ 
+-	for (i = 0; i < fdm_ptr->region_cnt; i++) {
++	for (i = 0; i < be16_to_cpu(fdm_ptr->region_cnt); i++) {
+ 		/*
+ 		 * Only regions that are registered for MPIPL
+ 		 * would have dump data.
+ 		 */
+ 		if ((fadump_conf->dump_active) &&
+-		    (i < fdm_ptr->registered_regions))
+-			dumped_bytes = fdm_ptr->rgn[i].size;
++		    (i < be16_to_cpu(fdm_ptr->registered_regions)))
++			dumped_bytes = be64_to_cpu(fdm_ptr->rgn[i].size);
+ 
+ 		seq_printf(m, "DUMP: Src: %#016llx, Dest: %#016llx, ",
+-			   fdm_ptr->rgn[i].src, fdm_ptr->rgn[i].dest);
++			   be64_to_cpu(fdm_ptr->rgn[i].src),
++			   be64_to_cpu(fdm_ptr->rgn[i].dest));
+ 		seq_printf(m, "Size: %#llx, Dumped: %#llx bytes\n",
+-			   fdm_ptr->rgn[i].size, dumped_bytes);
++			   be64_to_cpu(fdm_ptr->rgn[i].size), dumped_bytes);
+ 	}
+ 
+ 	/* Dump is active. Show reserved area start address. */
+@@ -624,6 +633,7 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node)
+ {
+ 	const __be32 *prop;
+ 	unsigned long dn;
++	__be64 be_addr;
+ 	u64 addr = 0;
+ 	int i, len;
+ 	s64 ret;
+@@ -680,13 +690,13 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node)
+ 	if (!prop)
+ 		return;
+ 
+-	ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_KERNEL, &addr);
+-	if ((ret != OPAL_SUCCESS) || !addr) {
++	ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_KERNEL, &be_addr);
++	if ((ret != OPAL_SUCCESS) || !be_addr) {
+ 		pr_err("Failed to get Kernel metadata (%lld)\n", ret);
+ 		return;
+ 	}
+ 
+-	addr = be64_to_cpu(addr);
++	addr = be64_to_cpu(be_addr);
+ 	pr_debug("Kernel metadata addr: %llx\n", addr);
+ 
+ 	opal_fdm_active = __va(addr);
+@@ -697,14 +707,14 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node)
+ 	}
+ 
+ 	/* Kernel regions not registered with f/w for MPIPL */
+-	if (opal_fdm_active->registered_regions == 0) {
++	if (be16_to_cpu(opal_fdm_active->registered_regions) == 0) {
+ 		opal_fdm_active = NULL;
+ 		return;
+ 	}
+ 
+-	ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_CPU, &addr);
+-	if (addr) {
+-		addr = be64_to_cpu(addr);
++	ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_CPU, &be_addr);
++	if (be_addr) {
++		addr = be64_to_cpu(be_addr);
+ 		pr_debug("CPU metadata addr: %llx\n", addr);
+ 		opal_cpu_metadata = __va(addr);
+ 	}
+diff --git a/arch/powerpc/platforms/powernv/opal-fadump.h b/arch/powerpc/platforms/powernv/opal-fadump.h
+index f1e9ecf548c5d..3f715efb0aa6e 100644
+--- a/arch/powerpc/platforms/powernv/opal-fadump.h
++++ b/arch/powerpc/platforms/powernv/opal-fadump.h
+@@ -31,14 +31,14 @@
+  * OPAL FADump kernel metadata
+  *
+  * The address of this structure will be registered with f/w for retrieving
+- * and processing during crash dump.
++ * in the capture kernel to process the crash dump.
+  */
+ struct opal_fadump_mem_struct {
+ 	u8	version;
+ 	u8	reserved[3];
+-	u16	region_cnt;		/* number of regions */
+-	u16	registered_regions;	/* Regions registered for MPIPL */
+-	u64	fadumphdr_addr;
++	__be16	region_cnt;		/* number of regions */
++	__be16	registered_regions;	/* Regions registered for MPIPL */
++	__be64	fadumphdr_addr;
+ 	struct opal_mpipl_region	rgn[FADUMP_MAX_MEM_REGS];
+ } __packed;
+ 
+@@ -135,7 +135,7 @@ static inline void opal_fadump_read_regs(char *bufp, unsigned int regs_cnt,
+ 	for (i = 0; i < regs_cnt; i++, bufp += reg_entry_size) {
+ 		reg_entry = (struct hdat_fadump_reg_entry *)bufp;
+ 		val = (cpu_endian ? be64_to_cpu(reg_entry->reg_val) :
+-		       reg_entry->reg_val);
++		       (u64)(reg_entry->reg_val));
+ 		opal_fadump_set_regval_regnum(regs,
+ 					      be32_to_cpu(reg_entry->reg_type),
+ 					      be32_to_cpu(reg_entry->reg_num),
+diff --git a/arch/powerpc/platforms/powernv/ultravisor.c b/arch/powerpc/platforms/powernv/ultravisor.c
+index e4a00ad06f9d3..67c8c4b2d8b17 100644
+--- a/arch/powerpc/platforms/powernv/ultravisor.c
++++ b/arch/powerpc/platforms/powernv/ultravisor.c
+@@ -55,6 +55,7 @@ static int __init uv_init(void)
+ 		return -ENODEV;
+ 
+ 	uv_memcons = memcons_init(node, "memcons");
++	of_node_put(node);
+ 	if (!uv_memcons)
+ 		return -ENOENT;
+ 
+diff --git a/arch/powerpc/platforms/powernv/vas-fault.c b/arch/powerpc/platforms/powernv/vas-fault.c
+index 3d21fce254b74..dd9c23c097816 100644
+--- a/arch/powerpc/platforms/powernv/vas-fault.c
++++ b/arch/powerpc/platforms/powernv/vas-fault.c
+@@ -352,7 +352,7 @@ int vas_setup_fault_window(struct vas_instance *vinst)
+ 	vas_init_rx_win_attr(&attr, VAS_COP_TYPE_FAULT);
+ 
+ 	attr.rx_fifo_size = vinst->fault_fifo_size;
+-	attr.rx_fifo = vinst->fault_fifo;
++	attr.rx_fifo = __pa(vinst->fault_fifo);
+ 
+ 	/*
+ 	 * Max creds is based on number of CRBs can fit in the FIFO.
+diff --git a/arch/powerpc/platforms/powernv/vas-window.c b/arch/powerpc/platforms/powernv/vas-window.c
+index 7ba0840fc3b55..3a86cdd5ae6c3 100644
+--- a/arch/powerpc/platforms/powernv/vas-window.c
++++ b/arch/powerpc/platforms/powernv/vas-window.c
+@@ -403,7 +403,7 @@ static void init_winctx_regs(struct vas_window *window,
+ 	 *
+ 	 * See also: Design note in function header.
+ 	 */
+-	val = __pa(winctx->rx_fifo);
++	val = winctx->rx_fifo;
+ 	val = SET_FIELD(VAS_PAGE_MIGRATION_SELECT, val, 0);
+ 	write_hvwc_reg(window, VREG(LFIFO_BAR), val);
+ 
+@@ -737,7 +737,7 @@ static void init_winctx_for_rxwin(struct vas_window *rxwin,
+ 		 */
+ 		winctx->fifo_disable = true;
+ 		winctx->intr_disable = true;
+-		winctx->rx_fifo = NULL;
++		winctx->rx_fifo = 0;
+ 	}
+ 
+ 	winctx->lnotify_lpid = rxattr->lnotify_lpid;
+diff --git a/arch/powerpc/platforms/powernv/vas.h b/arch/powerpc/platforms/powernv/vas.h
+index 70f793e8f6cc6..1f6e73809205e 100644
+--- a/arch/powerpc/platforms/powernv/vas.h
++++ b/arch/powerpc/platforms/powernv/vas.h
+@@ -383,7 +383,7 @@ struct vas_window {
+  * is a container for the register fields in the window context.
+  */
+ struct vas_winctx {
+-	void *rx_fifo;
++	u64 rx_fifo;
+ 	int rx_fifo_size;
+ 	int wcreds_max;
+ 	int rsvd_txbuf_count;
+diff --git a/arch/powerpc/sysdev/dart_iommu.c b/arch/powerpc/sysdev/dart_iommu.c
+index 6b4a34b36d987..8ff9bcfe4b8d4 100644
+--- a/arch/powerpc/sysdev/dart_iommu.c
++++ b/arch/powerpc/sysdev/dart_iommu.c
+@@ -403,9 +403,10 @@ void __init iommu_init_early_dart(struct pci_controller_ops *controller_ops)
+ 	}
+ 
+ 	/* Initialize the DART HW */
+-	if (dart_init(dn) != 0)
++	if (dart_init(dn) != 0) {
++		of_node_put(dn);
+ 		return;
+-
++	}
+ 	/*
+ 	 * U4 supports a DART bypass, we use it for 64-bit capable devices to
+ 	 * improve performance.  However, that only works for devices connected
+@@ -418,6 +419,7 @@ void __init iommu_init_early_dart(struct pci_controller_ops *controller_ops)
+ 
+ 	/* Setup pci_dma ops */
+ 	set_pci_dma_ops(&dma_iommu_ops);
++	of_node_put(dn);
+ }
+ 
+ #ifdef CONFIG_PM
+diff --git a/arch/powerpc/sysdev/fsl_rio.c b/arch/powerpc/sysdev/fsl_rio.c
+index 07c164f7f8cfe..3f9f78621cf3c 100644
+--- a/arch/powerpc/sysdev/fsl_rio.c
++++ b/arch/powerpc/sysdev/fsl_rio.c
+@@ -505,8 +505,10 @@ int fsl_rio_setup(struct platform_device *dev)
+ 	if (rc) {
+ 		dev_err(&dev->dev, "Can't get %pOF property 'reg'\n",
+ 				rmu_node);
++		of_node_put(rmu_node);
+ 		goto err_rmu;
+ 	}
++	of_node_put(rmu_node);
+ 	rmu_regs_win = ioremap(rmu_regs.start, resource_size(&rmu_regs));
+ 	if (!rmu_regs_win) {
+ 		dev_err(&dev->dev, "Unable to map rmu register window\n");
+diff --git a/arch/powerpc/sysdev/xics/icp-opal.c b/arch/powerpc/sysdev/xics/icp-opal.c
+index 68fd2540b0931..7fa520efcefa0 100644
+--- a/arch/powerpc/sysdev/xics/icp-opal.c
++++ b/arch/powerpc/sysdev/xics/icp-opal.c
+@@ -195,6 +195,7 @@ int icp_opal_init(void)
+ 
+ 	printk("XICS: Using OPAL ICP fallbacks\n");
+ 
++	of_node_put(np);
+ 	return 0;
+ }
+ 
+diff --git a/arch/riscv/include/asm/irq_work.h b/arch/riscv/include/asm/irq_work.h
+index d6c277992f76a..b53891964ae03 100644
+--- a/arch/riscv/include/asm/irq_work.h
++++ b/arch/riscv/include/asm/irq_work.h
+@@ -4,7 +4,7 @@
+ 
+ static inline bool arch_irq_work_has_interrupt(void)
+ {
+-	return true;
++	return IS_ENABLED(CONFIG_SMP);
+ }
+ extern void arch_irq_work_raise(void);
+ #endif /* _ASM_RISCV_IRQ_WORK_H */
+diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
+index 1a819c18bedec..47d1411db0a93 100644
+--- a/arch/riscv/kernel/head.S
++++ b/arch/riscv/kernel/head.S
+@@ -261,6 +261,7 @@ clear_bss_done:
+ 	REG_S a0, (a2)
+ 
+ 	/* Initialize page tables and relocate to virtual addresses */
++	la tp, init_task
+ 	la sp, init_thread_union + THREAD_SIZE
+ 	mv a0, s1
+ 	call setup_vm
+diff --git a/arch/s390/include/asm/kexec.h b/arch/s390/include/asm/kexec.h
+index 7f3c9ac34bd8d..63098df81c9f2 100644
+--- a/arch/s390/include/asm/kexec.h
++++ b/arch/s390/include/asm/kexec.h
+@@ -9,6 +9,8 @@
+ #ifndef _S390_KEXEC_H
+ #define _S390_KEXEC_H
+ 
++#include <linux/module.h>
++
+ #include <asm/processor.h>
+ #include <asm/page.h>
+ #include <asm/setup.h>
+@@ -83,4 +85,12 @@ struct kimage_arch {
+ extern const struct kexec_file_ops s390_kexec_image_ops;
+ extern const struct kexec_file_ops s390_kexec_elf_ops;
+ 
++#ifdef CONFIG_KEXEC_FILE
++struct purgatory_info;
++int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
++				     Elf_Shdr *section,
++				     const Elf_Shdr *relsec,
++				     const Elf_Shdr *symtab);
++#define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add
++#endif
+ #endif /*_S390_KEXEC_H */
+diff --git a/arch/s390/include/asm/preempt.h b/arch/s390/include/asm/preempt.h
+index b5f545db461a4..60e101b8460ce 100644
+--- a/arch/s390/include/asm/preempt.h
++++ b/arch/s390/include/asm/preempt.h
+@@ -46,10 +46,17 @@ static inline bool test_preempt_need_resched(void)
+ 
+ static inline void __preempt_count_add(int val)
+ {
+-	if (__builtin_constant_p(val) && (val >= -128) && (val <= 127))
+-		__atomic_add_const(val, &S390_lowcore.preempt_count);
+-	else
+-		__atomic_add(val, &S390_lowcore.preempt_count);
++	/*
++	 * With some obscure config options and CONFIG_PROFILE_ALL_BRANCHES
++	 * enabled, gcc 12 fails to handle __builtin_constant_p().
++	 */
++	if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES)) {
++		if (__builtin_constant_p(val) && (val >= -128) && (val <= 127)) {
++			__atomic_add_const(val, &S390_lowcore.preempt_count);
++			return;
++		}
++	}
++	__atomic_add(val, &S390_lowcore.preempt_count);
+ }
+ 
+ static inline void __preempt_count_sub(int val)
+diff --git a/arch/s390/kernel/perf_event.c b/arch/s390/kernel/perf_event.c
+index 1e75cc9835468..b922dc0c81309 100644
+--- a/arch/s390/kernel/perf_event.c
++++ b/arch/s390/kernel/perf_event.c
+@@ -51,7 +51,7 @@ static struct kvm_s390_sie_block *sie_block(struct pt_regs *regs)
+ 	if (!stack)
+ 		return NULL;
+ 
+-	return (struct kvm_s390_sie_block *) stack->empty1[0];
++	return (struct kvm_s390_sie_block *)stack->empty1[1];
+ }
+ 
+ static bool is_in_guest(struct pt_regs *regs)
+diff --git a/arch/um/drivers/chan_user.c b/arch/um/drivers/chan_user.c
+index 6040817c036f3..25727ed648b72 100644
+--- a/arch/um/drivers/chan_user.c
++++ b/arch/um/drivers/chan_user.c
+@@ -220,7 +220,7 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
+ 		       unsigned long *stack_out)
+ {
+ 	struct winch_data data;
+-	int fds[2], n, err;
++	int fds[2], n, err, pid;
+ 	char c;
+ 
+ 	err = os_pipe(fds, 1, 1);
+@@ -238,8 +238,9 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
+ 	 * problem with /dev/net/tun, which if held open by this
+ 	 * thread, prevents the TUN/TAP device from being reused.
+ 	 */
+-	err = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out);
+-	if (err < 0) {
++	pid = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out);
++	if (pid < 0) {
++		err = pid;
+ 		printk(UM_KERN_ERR "fork of winch_thread failed - errno = %d\n",
+ 		       -err);
+ 		goto out_close;
+@@ -263,7 +264,7 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
+ 		goto out_close;
+ 	}
+ 
+-	return err;
++	return pid;
+ 
+  out_close:
+ 	close(fds[1]);
+diff --git a/arch/um/include/asm/thread_info.h b/arch/um/include/asm/thread_info.h
+index 4c19ce4c49f18..66ab6a07330b2 100644
+--- a/arch/um/include/asm/thread_info.h
++++ b/arch/um/include/asm/thread_info.h
+@@ -63,6 +63,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_RESTORE_SIGMASK	7
+ #define TIF_NOTIFY_RESUME	8
+ #define TIF_SECCOMP		9	/* secure computing */
++#define TIF_SINGLESTEP		10	/* single stepping userspace */
+ 
+ #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+ #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+@@ -70,5 +71,6 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_MEMDIE		(1 << TIF_MEMDIE)
+ #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
+ #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
++#define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
+ 
+ #endif
+diff --git a/arch/um/kernel/exec.c b/arch/um/kernel/exec.c
+index e8fd5d540b05d..7f7a74c82abb6 100644
+--- a/arch/um/kernel/exec.c
++++ b/arch/um/kernel/exec.c
+@@ -44,7 +44,7 @@ void start_thread(struct pt_regs *regs, unsigned long eip, unsigned long esp)
+ {
+ 	PT_REGS_IP(regs) = eip;
+ 	PT_REGS_SP(regs) = esp;
+-	current->ptrace &= ~PT_DTRACE;
++	clear_thread_flag(TIF_SINGLESTEP);
+ #ifdef SUBARCH_EXECVE1
+ 	SUBARCH_EXECVE1(regs->regs);
+ #endif
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index 9505a7e87396a..8eb8b736abc18 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -341,7 +341,7 @@ int singlestepping(void * t)
+ {
+ 	struct task_struct *task = t ? t : current;
+ 
+-	if (!(task->ptrace & PT_DTRACE))
++	if (!test_thread_flag(TIF_SINGLESTEP))
+ 		return 0;
+ 
+ 	if (task->thread.singlestep_syscall)
+diff --git a/arch/um/kernel/ptrace.c b/arch/um/kernel/ptrace.c
+index b425f47bddbb3..d37802ced5636 100644
+--- a/arch/um/kernel/ptrace.c
++++ b/arch/um/kernel/ptrace.c
+@@ -12,7 +12,7 @@
+ 
+ void user_enable_single_step(struct task_struct *child)
+ {
+-	child->ptrace |= PT_DTRACE;
++	set_tsk_thread_flag(child, TIF_SINGLESTEP);
+ 	child->thread.singlestep_syscall = 0;
+ 
+ #ifdef SUBARCH_SET_SINGLESTEPPING
+@@ -22,7 +22,7 @@ void user_enable_single_step(struct task_struct *child)
+ 
+ void user_disable_single_step(struct task_struct *child)
+ {
+-	child->ptrace &= ~PT_DTRACE;
++	clear_tsk_thread_flag(child, TIF_SINGLESTEP);
+ 	child->thread.singlestep_syscall = 0;
+ 
+ #ifdef SUBARCH_SET_SINGLESTEPPING
+@@ -121,7 +121,7 @@ static void send_sigtrap(struct uml_pt_regs *regs, int error_code)
+ }
+ 
+ /*
+- * XXX Check PT_DTRACE vs TIF_SINGLESTEP for singlestepping check and
++ * XXX Check TIF_SINGLESTEP for singlestepping check and
+  * PT_PTRACED vs TIF_SYSCALL_TRACE for syscall tracing check
+  */
+ int syscall_trace_enter(struct pt_regs *regs)
+@@ -145,7 +145,7 @@ void syscall_trace_leave(struct pt_regs *regs)
+ 	audit_syscall_exit(regs);
+ 
+ 	/* Fake a debug trap */
+-	if (ptraced & PT_DTRACE)
++	if (test_thread_flag(TIF_SINGLESTEP))
+ 		send_sigtrap(&regs->regs, 0);
+ 
+ 	if (!test_thread_flag(TIF_SYSCALL_TRACE))
+diff --git a/arch/um/kernel/signal.c b/arch/um/kernel/signal.c
+index 88cd9b5c1b744..ae4658f576ab7 100644
+--- a/arch/um/kernel/signal.c
++++ b/arch/um/kernel/signal.c
+@@ -53,7 +53,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+ 	unsigned long sp;
+ 	int err;
+ 
+-	if ((current->ptrace & PT_DTRACE) && (current->ptrace & PT_PTRACED))
++	if (test_thread_flag(TIF_SINGLESTEP) && (current->ptrace & PT_PTRACED))
+ 		singlestep = 1;
+ 
+ 	/* Did we come from a system call? */
+@@ -128,7 +128,7 @@ void do_signal(struct pt_regs *regs)
+ 	 * on the host.  The tracing thread will check this flag and
+ 	 * PTRACE_SYSCALL if necessary.
+ 	 */
+-	if (current->ptrace & PT_DTRACE)
++	if (test_thread_flag(TIF_SINGLESTEP))
+ 		current->thread.singlestep_syscall =
+ 			is_syscall(PT_REGS_IP(&current->thread.regs));
+ 
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index db95ac482e0ef..ed713840d4698 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1321,7 +1321,7 @@ config MICROCODE
+ 
+ config MICROCODE_INTEL
+ 	bool "Intel microcode loading support"
+-	depends on MICROCODE
++	depends on CPU_SUP_INTEL && MICROCODE
+ 	default MICROCODE
+ 	help
+ 	  This options enables microcode patch loading support for Intel
+@@ -1333,7 +1333,7 @@ config MICROCODE_INTEL
+ 
+ config MICROCODE_AMD
+ 	bool "AMD microcode loading support"
+-	depends on MICROCODE
++	depends on CPU_SUP_AMD && MICROCODE
+ 	help
+ 	  If you select this option, microcode patch loading support for AMD
+ 	  processors will be enabled.
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index a24ce5905ab82..2f2d52729e176 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -500,6 +500,7 @@ SYM_CODE_START(\asmsym)
+ 	call	vc_switch_off_ist
+ 	movq	%rax, %rsp		/* Switch to new stack */
+ 
++	ENCODE_FRAME_POINTER
+ 	UNWIND_HINT_REGS
+ 
+ 	/* Update pt_regs */
+diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
+index 9185cb1d13b9b..5876289e48d89 100644
+--- a/arch/x86/entry/vdso/vma.c
++++ b/arch/x86/entry/vdso/vma.c
+@@ -440,7 +440,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+ static __init int vdso_setup(char *s)
+ {
+ 	vdso64_enabled = simple_strtoul(s, NULL, 0);
+-	return 0;
++	return 1;
+ }
+ __setup("vdso=", vdso_setup);
+ 
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index ccc9ee1971e89..8a85658a24cc1 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -312,6 +312,16 @@ static int perf_ibs_init(struct perf_event *event)
+ 	hwc->config_base = perf_ibs->msr;
+ 	hwc->config = config;
+ 
++	/*
++	 * rip recorded by IbsOpRip will not be consistent with rsp and rbp
++	 * recorded as part of interrupt regs. Thus we need to use rip from
++	 * interrupt regs while unwinding call stack. Setting _EARLY flag
++	 * makes sure we unwind call-stack before perf sample rip is set to
++	 * IbsOpRip.
++	 */
++	if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
++		event->attr.sample_type |= __PERF_SAMPLE_CALLCHAIN_EARLY;
++
+ 	return 0;
+ }
+ 
+@@ -692,6 +702,14 @@ fail:
+ 		data.raw = &raw;
+ 	}
+ 
++	/*
++	 * rip recorded by IbsOpRip will not be consistent with rsp and rbp
++	 * recorded as part of interrupt regs. Thus we need to use rip from
++	 * interrupt regs while unwinding call stack.
++	 */
++	if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
++		data.callchain = perf_callchain(event, iregs);
++
+ 	throttle = perf_event_overflow(event, &data, &regs);
+ out:
+ 	if (throttle) {
+@@ -764,9 +782,10 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
+ 	return ret;
+ }
+ 
+-static __init void perf_event_ibs_init(void)
++static __init int perf_event_ibs_init(void)
+ {
+ 	struct attribute **attr = ibs_op_format_attrs;
++	int ret;
+ 
+ 	/*
+ 	 * Some chips fail to reset the fetch count when it is written; instead
+@@ -778,7 +797,9 @@ static __init void perf_event_ibs_init(void)
+ 	if (boot_cpu_data.x86 == 0x19 && boot_cpu_data.x86_model < 0x10)
+ 		perf_ibs_fetch.fetch_ignore_if_zero_rip = 1;
+ 
+-	perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
++	ret = perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
++	if (ret)
++		return ret;
+ 
+ 	if (ibs_caps & IBS_CAPS_OPCNT) {
+ 		perf_ibs_op.config_mask |= IBS_OP_CNT_CTL;
+@@ -791,15 +812,35 @@ static __init void perf_event_ibs_init(void)
+ 		perf_ibs_op.cnt_mask    |= IBS_OP_MAX_CNT_EXT_MASK;
+ 	}
+ 
+-	perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
++	ret = perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
++	if (ret)
++		goto err_op;
++
++	ret = register_nmi_handler(NMI_LOCAL, perf_ibs_nmi_handler, 0, "perf_ibs");
++	if (ret)
++		goto err_nmi;
+ 
+-	register_nmi_handler(NMI_LOCAL, perf_ibs_nmi_handler, 0, "perf_ibs");
+ 	pr_info("perf: AMD IBS detected (0x%08x)\n", ibs_caps);
++	return 0;
++
++err_nmi:
++	perf_pmu_unregister(&perf_ibs_op.pmu);
++	free_percpu(perf_ibs_op.pcpu);
++	perf_ibs_op.pcpu = NULL;
++err_op:
++	perf_pmu_unregister(&perf_ibs_fetch.pmu);
++	free_percpu(perf_ibs_fetch.pcpu);
++	perf_ibs_fetch.pcpu = NULL;
++
++	return ret;
+ }
+ 
+ #else /* defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD) */
+ 
+-static __init void perf_event_ibs_init(void) { }
++static __init int perf_event_ibs_init(void)
++{
++	return 0;
++}
+ 
+ #endif
+ 
+@@ -1069,9 +1110,7 @@ static __init int amd_ibs_init(void)
+ 			  x86_pmu_amd_ibs_starting_cpu,
+ 			  x86_pmu_amd_ibs_dying_cpu);
+ 
+-	perf_event_ibs_init();
+-
+-	return 0;
++	return perf_event_ibs_init();
+ }
+ 
+ /* Since we need the pci subsystem to init ibs we can't do this earlier: */
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 5ba13b00e3a71..f6eadf9320a1a 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -254,7 +254,7 @@ static struct event_constraint intel_icl_event_constraints[] = {
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0x03, 0x0a, 0xf),
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0x1f, 0x28, 0xf),
+ 	INTEL_EVENT_CONSTRAINT(0x32, 0xf),	/* SW_PREFETCH_ACCESS.* */
+-	INTEL_EVENT_CONSTRAINT_RANGE(0x48, 0x54, 0xf),
++	INTEL_EVENT_CONSTRAINT_RANGE(0x48, 0x56, 0xf),
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0x60, 0x8b, 0xf),
+ 	INTEL_UEVENT_CONSTRAINT(0x04a3, 0xff),  /* CYCLE_ACTIVITY.STALLS_TOTAL */
+ 	INTEL_UEVENT_CONSTRAINT(0x10a3, 0xff),  /* CYCLE_ACTIVITY.CYCLES_MEM_ANY */
+diff --git a/arch/x86/include/asm/acenv.h b/arch/x86/include/asm/acenv.h
+index 9aff97f0de7fd..d937c55e717e6 100644
+--- a/arch/x86/include/asm/acenv.h
++++ b/arch/x86/include/asm/acenv.h
+@@ -13,7 +13,19 @@
+ 
+ /* Asm macros */
+ 
+-#define ACPI_FLUSH_CPU_CACHE()	wbinvd()
++/*
++ * ACPI_FLUSH_CPU_CACHE() flushes caches on entering sleep states.
++ * It is required to prevent data loss.
++ *
++ * While running inside virtual machine, the kernel can bypass cache flushing.
++ * Changing sleep state in a virtual machine doesn't affect the host system
++ * sleep state and cannot lead to data loss.
++ */
++#define ACPI_FLUSH_CPU_CACHE()					\
++do {								\
++	if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR))	\
++		wbinvd();					\
++} while (0)
+ 
+ int __acpi_acquire_global_lock(unsigned int *lock);
+ int __acpi_release_global_lock(unsigned int *lock);
+diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
+index 6802c59e82523..8e80c2655ca9c 100644
+--- a/arch/x86/include/asm/kexec.h
++++ b/arch/x86/include/asm/kexec.h
+@@ -191,6 +191,14 @@ extern int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages,
+ extern void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages);
+ #define arch_kexec_pre_free_pages arch_kexec_pre_free_pages
+ 
++#ifdef CONFIG_KEXEC_FILE
++struct purgatory_info;
++int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
++				     Elf_Shdr *section,
++				     const Elf_Shdr *relsec,
++				     const Elf_Shdr *symtab);
++#define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add
++#endif
+ #endif
+ 
+ typedef void crash_vmclear_fn(void);
+diff --git a/arch/x86/include/asm/suspend_32.h b/arch/x86/include/asm/suspend_32.h
+index fdbd9d7b7bca1..3b97aa9215430 100644
+--- a/arch/x86/include/asm/suspend_32.h
++++ b/arch/x86/include/asm/suspend_32.h
+@@ -21,7 +21,6 @@ struct saved_context {
+ #endif
+ 	unsigned long cr0, cr2, cr3, cr4;
+ 	u64 misc_enable;
+-	bool misc_enable_saved;
+ 	struct saved_msrs saved_msrs;
+ 	struct desc_ptr gdt_desc;
+ 	struct desc_ptr idt;
+@@ -30,6 +29,7 @@ struct saved_context {
+ 	unsigned long tr;
+ 	unsigned long safety;
+ 	unsigned long return_address;
++	bool misc_enable_saved;
+ } __attribute__((packed));
+ 
+ /* routines for saving/restoring kernel state */
+diff --git a/arch/x86/include/asm/suspend_64.h b/arch/x86/include/asm/suspend_64.h
+index 35bb35d28733e..54df06687d834 100644
+--- a/arch/x86/include/asm/suspend_64.h
++++ b/arch/x86/include/asm/suspend_64.h
+@@ -14,9 +14,13 @@
+  * Image of the saved processor state, used by the low level ACPI suspend to
+  * RAM code and by the low level hibernation code.
+  *
+- * If you modify it, fix arch/x86/kernel/acpi/wakeup_64.S and make sure that
+- * __save/__restore_processor_state(), defined in arch/x86/kernel/suspend_64.c,
+- * still work as required.
++ * If you modify it, check how it is used in arch/x86/kernel/acpi/wakeup_64.S
++ * and make sure that __save/__restore_processor_state(), defined in
++ * arch/x86/power/cpu.c, still work as required.
++ *
++ * Because the structure is packed, make sure to avoid unaligned members. For
++ * optimisation purposes but also because tools like kmemleak only search for
++ * pointers that are aligned.
+  */
+ struct saved_context {
+ 	struct pt_regs regs;
+@@ -36,7 +40,6 @@ struct saved_context {
+ 
+ 	unsigned long cr0, cr2, cr3, cr4;
+ 	u64 misc_enable;
+-	bool misc_enable_saved;
+ 	struct saved_msrs saved_msrs;
+ 	unsigned long efer;
+ 	u16 gdt_pad; /* Unused */
+@@ -48,6 +51,7 @@ struct saved_context {
+ 	unsigned long tr;
+ 	unsigned long safety;
+ 	unsigned long return_address;
++	bool misc_enable_saved;
+ } __attribute__((packed));
+ 
+ #define loaddebug(thread,register) \
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 24539a05c58c7..1c96f2425eafd 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -168,7 +168,7 @@ static __init int setup_apicpmtimer(char *s)
+ {
+ 	apic_calibrate_pmtmr = 1;
+ 	notsc_setup(NULL);
+-	return 0;
++	return 1;
+ }
+ __setup("apicpmtimer", setup_apicpmtimer);
+ #endif
+diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
+index 40f466de89242..9c283562dfd49 100644
+--- a/arch/x86/kernel/apic/x2apic_uv_x.c
++++ b/arch/x86/kernel/apic/x2apic_uv_x.c
+@@ -199,7 +199,13 @@ static void __init uv_tsc_check_sync(void)
+ 	int mmr_shift;
+ 	char *state;
+ 
+-	/* Different returns from different UV BIOS versions */
++	/* UV5 guarantees synced TSCs; do not zero TSC_ADJUST */
++	if (!is_uv(UV2|UV3|UV4)) {
++		mark_tsc_async_resets("UV5+");
++		return;
++	}
++
++	/* UV2,3,4, UV BIOS TSC sync state available */
+ 	mmr = uv_early_read_mmr(UVH_TSC_SYNC_MMR);
+ 	mmr_shift =
+ 		is_uv2_hub() ? UVH_TSC_SYNC_SHIFT_UV2K : UVH_TSC_SYNC_SHIFT;
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index 816fdbec795a4..c6ad53e38f653 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -88,7 +88,7 @@ static bool ring3mwait_disabled __read_mostly;
+ static int __init ring3mwait_disable(char *__unused)
+ {
+ 	ring3mwait_disabled = true;
+-	return 0;
++	return 1;
+ }
+ __setup("ring3mwait=disable", ring3mwait_disable);
+ 
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index f73f1184b1c13..09f7c652346a9 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -1457,10 +1457,23 @@ out_free:
+ 	kfree(bank);
+ }
+ 
++static void __threshold_remove_device(struct threshold_bank **bp)
++{
++	unsigned int bank, numbanks = this_cpu_read(mce_num_banks);
++
++	for (bank = 0; bank < numbanks; bank++) {
++		if (!bp[bank])
++			continue;
++
++		threshold_remove_bank(bp[bank]);
++		bp[bank] = NULL;
++	}
++	kfree(bp);
++}
++
+ int mce_threshold_remove_device(unsigned int cpu)
+ {
+ 	struct threshold_bank **bp = this_cpu_read(threshold_banks);
+-	unsigned int bank, numbanks = this_cpu_read(mce_num_banks);
+ 
+ 	if (!bp)
+ 		return 0;
+@@ -1471,13 +1484,7 @@ int mce_threshold_remove_device(unsigned int cpu)
+ 	 */
+ 	this_cpu_write(threshold_banks, NULL);
+ 
+-	for (bank = 0; bank < numbanks; bank++) {
+-		if (bp[bank]) {
+-			threshold_remove_bank(bp[bank]);
+-			bp[bank] = NULL;
+-		}
+-	}
+-	kfree(bp);
++	__threshold_remove_device(bp);
+ 	return 0;
+ }
+ 
+@@ -1514,15 +1521,14 @@ int mce_threshold_create_device(unsigned int cpu)
+ 		if (!(this_cpu_read(bank_map) & (1 << bank)))
+ 			continue;
+ 		err = threshold_create_bank(bp, cpu, bank);
+-		if (err)
+-			goto out_err;
++		if (err) {
++			__threshold_remove_device(bp);
++			return err;
++		}
+ 	}
+ 	this_cpu_write(threshold_banks, bp);
+ 
+ 	if (thresholding_irq_en)
+ 		mce_threshold_vector = amd_threshold_interrupt;
+ 	return 0;
+-out_err:
+-	mce_threshold_remove_device(cpu);
+-	return err;
+ }
+diff --git a/arch/x86/kernel/step.c b/arch/x86/kernel/step.c
+index 60d2c3798ba28..2f97d1a1032f3 100644
+--- a/arch/x86/kernel/step.c
++++ b/arch/x86/kernel/step.c
+@@ -175,8 +175,7 @@ void set_task_blockstep(struct task_struct *task, bool on)
+ 	 *
+ 	 * NOTE: this means that set/clear TIF_BLOCKSTEP is only safe if
+ 	 * task is current or it can't be running, otherwise we can race
+-	 * with __switch_to_xtra(). We rely on ptrace_freeze_traced() but
+-	 * PTRACE_KILL is not safe.
++	 * with __switch_to_xtra(). We rely on ptrace_freeze_traced().
+ 	 */
+ 	local_irq_disable();
+ 	debugctl = get_debugctlmsr();
+diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
+index 504fa5425bcec..3fd1c81eb5e35 100644
+--- a/arch/x86/kernel/sys_x86_64.c
++++ b/arch/x86/kernel/sys_x86_64.c
+@@ -68,9 +68,6 @@ static int __init control_va_addr_alignment(char *str)
+ 	if (*str == 0)
+ 		return 1;
+ 
+-	if (*str == '=')
+-		str++;
+-
+ 	if (!strcmp(str, "32"))
+ 		va_align.flags = ALIGN_VA_32;
+ 	else if (!strcmp(str, "64"))
+@@ -80,11 +77,11 @@ static int __init control_va_addr_alignment(char *str)
+ 	else if (!strcmp(str, "on"))
+ 		va_align.flags = ALIGN_VA_32 | ALIGN_VA_64;
+ 	else
+-		return 0;
++		pr_warn("invalid option value: 'align_va_addr=%s'\n", str);
+ 
+ 	return 1;
+ }
+-__setup("align_va_addr", control_va_addr_alignment);
++__setup("align_va_addr=", control_va_addr_alignment);
+ 
+ SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+ 		unsigned long, prot, unsigned long, flags,
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 0c2389d0fdafe..90881d7b42ead 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -3668,12 +3668,34 @@ vmcs12_guest_cr4(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ }
+ 
+ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
+-				      struct vmcs12 *vmcs12)
++				      struct vmcs12 *vmcs12,
++				      u32 vm_exit_reason, u32 exit_intr_info)
+ {
+ 	u32 idt_vectoring;
+ 	unsigned int nr;
+ 
+-	if (vcpu->arch.exception.injected) {
++	/*
++	 * Per the SDM, VM-Exits due to double and triple faults are never
++	 * considered to occur during event delivery, even if the double/triple
++	 * fault is the result of an escalating vectoring issue.
++	 *
++	 * Note, the SDM qualifies the double fault behavior with "The original
++	 * event results in a double-fault exception".  It's unclear why the
++	 * qualification exists since exits due to double fault can occur only
++	 * while vectoring a different exception (injected events are never
++	 * subject to interception), i.e. there's _always_ an original event.
++	 *
++	 * The SDM also uses NMI as a confusing example for the "original event
++	 * causes the VM exit directly" clause.  NMI isn't special in any way,
++	 * the same rule applies to all events that cause an exit directly.
++	 * NMI is an odd choice for the example because NMIs can only occur on
++	 * instruction boundaries, i.e. they _can't_ occur during vectoring.
++	 */
++	if ((u16)vm_exit_reason == EXIT_REASON_TRIPLE_FAULT ||
++	    ((u16)vm_exit_reason == EXIT_REASON_EXCEPTION_NMI &&
++	     is_double_fault(exit_intr_info))) {
++		vmcs12->idt_vectoring_info_field = 0;
++	} else if (vcpu->arch.exception.injected) {
+ 		nr = vcpu->arch.exception.nr;
+ 		idt_vectoring = nr | VECTORING_INFO_VALID_MASK;
+ 
+@@ -3706,6 +3728,8 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
+ 			idt_vectoring |= INTR_TYPE_EXT_INTR;
+ 
+ 		vmcs12->idt_vectoring_info_field = idt_vectoring;
++	} else {
++		vmcs12->idt_vectoring_info_field = 0;
+ 	}
+ }
+ 
+@@ -4143,12 +4167,12 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ 	/* update exit information fields: */
+ 	vmcs12->vm_exit_reason = vm_exit_reason;
+ 	vmcs12->exit_qualification = exit_qualification;
+-	vmcs12->vm_exit_intr_info = exit_intr_info;
+-
+-	vmcs12->idt_vectoring_info_field = 0;
+-	vmcs12->vm_exit_instruction_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
+-	vmcs12->vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+ 
++	/*
++	 * On VM-Exit due to a failed VM-Entry, the VMCS isn't marked launched
++	 * and only EXIT_REASON and EXIT_QUALIFICATION are updated, all other
++	 * exit info fields are unmodified.
++	 */
+ 	if (!(vmcs12->vm_exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY)) {
+ 		vmcs12->launch_state = 1;
+ 
+@@ -4160,7 +4184,12 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ 		 * Transfer the event that L0 or L1 may wanted to inject into
+ 		 * L2 to IDT_VECTORING_INFO_FIELD.
+ 		 */
+-		vmcs12_save_pending_event(vcpu, vmcs12);
++		vmcs12_save_pending_event(vcpu, vmcs12,
++					  vm_exit_reason, exit_intr_info);
++
++		vmcs12->vm_exit_intr_info = exit_intr_info;
++		vmcs12->vm_exit_instruction_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
++		vmcs12->vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+ 
+ 		/*
+ 		 * According to spec, there's no need to store the guest's
+diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h
+index 571d9ad80a59e..69c147df957fd 100644
+--- a/arch/x86/kvm/vmx/vmcs.h
++++ b/arch/x86/kvm/vmx/vmcs.h
+@@ -102,6 +102,11 @@ static inline bool is_breakpoint(u32 intr_info)
+ 	return is_exception_n(intr_info, BP_VECTOR);
+ }
+ 
++static inline bool is_double_fault(u32 intr_info)
++{
++	return is_exception_n(intr_info, DF_VECTOR);
++}
++
+ static inline bool is_page_fault(u32 intr_info)
+ {
+ 	return is_exception_n(intr_info, PF_VECTOR);
+diff --git a/arch/x86/lib/delay.c b/arch/x86/lib/delay.c
+index 65d15df6212d6..0e65d00e2339f 100644
+--- a/arch/x86/lib/delay.c
++++ b/arch/x86/lib/delay.c
+@@ -54,8 +54,8 @@ static void delay_loop(u64 __loops)
+ 		"	jnz 2b		\n"
+ 		"3:	dec %0		\n"
+ 
+-		: /* we don't need output */
+-		:"a" (loops)
++		: "+a" (loops)
++		:
+ 	);
+ }
+ 
+diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
+index 232932bda4e5e..f9c53a7107407 100644
+--- a/arch/x86/mm/pat/memtype.c
++++ b/arch/x86/mm/pat/memtype.c
+@@ -101,7 +101,7 @@ int pat_debug_enable;
+ static int __init pat_debug_setup(char *str)
+ {
+ 	pat_debug_enable = 1;
+-	return 0;
++	return 1;
+ }
+ __setup("debugpat", pat_debug_setup);
+ 
+diff --git a/arch/x86/um/ldt.c b/arch/x86/um/ldt.c
+index 3ee234b6234dd..255a44dd415a9 100644
+--- a/arch/x86/um/ldt.c
++++ b/arch/x86/um/ldt.c
+@@ -23,9 +23,11 @@ static long write_ldt_entry(struct mm_id *mm_idp, int func,
+ {
+ 	long res;
+ 	void *stub_addr;
++
++	BUILD_BUG_ON(sizeof(*desc) % sizeof(long));
++
+ 	res = syscall_stub_data(mm_idp, (unsigned long *)desc,
+-				(sizeof(*desc) + sizeof(long) - 1) &
+-				    ~(sizeof(long) - 1),
++				sizeof(*desc) / sizeof(long),
+ 				addr, &stub_addr);
+ 	if (!res) {
+ 		unsigned long args[] = { func,
+diff --git a/arch/xtensa/kernel/ptrace.c b/arch/xtensa/kernel/ptrace.c
+index bb3f4797d212b..db6cdea471d83 100644
+--- a/arch/xtensa/kernel/ptrace.c
++++ b/arch/xtensa/kernel/ptrace.c
+@@ -226,12 +226,12 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task)
+ 
+ void user_enable_single_step(struct task_struct *child)
+ {
+-	child->ptrace |= PT_SINGLESTEP;
++	set_tsk_thread_flag(child, TIF_SINGLESTEP);
+ }
+ 
+ void user_disable_single_step(struct task_struct *child)
+ {
+-	child->ptrace &= ~PT_SINGLESTEP;
++	clear_tsk_thread_flag(child, TIF_SINGLESTEP);
+ }
+ 
+ /*
+diff --git a/arch/xtensa/kernel/signal.c b/arch/xtensa/kernel/signal.c
+index 1fb1047f905ca..1cb230fafdf2b 100644
+--- a/arch/xtensa/kernel/signal.c
++++ b/arch/xtensa/kernel/signal.c
+@@ -465,7 +465,7 @@ static void do_signal(struct pt_regs *regs)
+ 		/* Set up the stack frame */
+ 		ret = setup_frame(&ksig, sigmask_to_save(), regs);
+ 		signal_setup_done(ret, &ksig, 0);
+-		if (current->ptrace & PT_SINGLESTEP)
++		if (test_thread_flag(TIF_SINGLESTEP))
+ 			task_pt_regs(current)->icountlevel = 1;
+ 
+ 		return;
+@@ -491,7 +491,7 @@ static void do_signal(struct pt_regs *regs)
+ 	/* If there's no signal to deliver, we just restore the saved mask.  */
+ 	restore_saved_sigmask();
+ 
+-	if (current->ptrace & PT_SINGLESTEP)
++	if (test_thread_flag(TIF_SINGLESTEP))
+ 		task_pt_regs(current)->icountlevel = 1;
+ 	return;
+ }
+diff --git a/arch/xtensa/platforms/iss/simdisk.c b/arch/xtensa/platforms/iss/simdisk.c
+index 3447556d276d3..2b3c829f655b3 100644
+--- a/arch/xtensa/platforms/iss/simdisk.c
++++ b/arch/xtensa/platforms/iss/simdisk.c
+@@ -213,12 +213,18 @@ static ssize_t proc_read_simdisk(struct file *file, char __user *buf,
+ 	struct simdisk *dev = PDE_DATA(file_inode(file));
+ 	const char *s = dev->filename;
+ 	if (s) {
+-		ssize_t n = simple_read_from_buffer(buf, size, ppos,
+-							s, strlen(s));
+-		if (n < 0)
+-			return n;
+-		buf += n;
+-		size -= n;
++		ssize_t len = strlen(s);
++		char *temp = kmalloc(len + 2, GFP_KERNEL);
++
++		if (!temp)
++			return -ENOMEM;
++
++		len = scnprintf(temp, len + 2, "%s\n", s);
++		len = simple_read_from_buffer(buf, size, ppos,
++					      temp, len);
++
++		kfree(temp);
++		return len;
+ 	}
+ 	return simple_read_from_buffer(buf, size, ppos, "\n", 1);
+ }
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index c2fdd6fcdaee6..be6733558b831 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -553,6 +553,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd)
+ 				   */
+ 	bfqg->bfqd = bfqd;
+ 	bfqg->active_entities = 0;
++	bfqg->online = true;
+ 	bfqg->rq_pos_tree = RB_ROOT;
+ }
+ 
+@@ -581,28 +582,11 @@ static void bfq_group_set_parent(struct bfq_group *bfqg,
+ 	entity->sched_data = &parent->sched_data;
+ }
+ 
+-static struct bfq_group *bfq_lookup_bfqg(struct bfq_data *bfqd,
+-					 struct blkcg *blkcg)
++static void bfq_link_bfqg(struct bfq_data *bfqd, struct bfq_group *bfqg)
+ {
+-	struct blkcg_gq *blkg;
+-
+-	blkg = blkg_lookup(blkcg, bfqd->queue);
+-	if (likely(blkg))
+-		return blkg_to_bfqg(blkg);
+-	return NULL;
+-}
+-
+-struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
+-				     struct blkcg *blkcg)
+-{
+-	struct bfq_group *bfqg, *parent;
++	struct bfq_group *parent;
+ 	struct bfq_entity *entity;
+ 
+-	bfqg = bfq_lookup_bfqg(bfqd, blkcg);
+-
+-	if (unlikely(!bfqg))
+-		return NULL;
+-
+ 	/*
+ 	 * Update chain of bfq_groups as we might be handling a leaf group
+ 	 * which, along with some of its relatives, has not been hooked yet
+@@ -619,8 +603,24 @@ struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
+ 			bfq_group_set_parent(curr_bfqg, parent);
+ 		}
+ 	}
++}
+ 
+-	return bfqg;
++struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio)
++{
++	struct blkcg_gq *blkg = bio->bi_blkg;
++	struct bfq_group *bfqg;
++
++	while (blkg) {
++		bfqg = blkg_to_bfqg(blkg);
++		if (bfqg->online) {
++			bio_associate_blkg_from_css(bio, &blkg->blkcg->css);
++			return bfqg;
++		}
++		blkg = blkg->parent;
++	}
++	bio_associate_blkg_from_css(bio,
++				&bfqg_to_blkg(bfqd->root_group)->blkcg->css);
++	return bfqd->root_group;
+ }
+ 
+ /**
+@@ -696,25 +696,15 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+  * Move bic to blkcg, assuming that bfqd->lock is held; which makes
+  * sure that the reference to cgroup is valid across the call (see
+  * comments in bfq_bic_update_cgroup on this issue)
+- *
+- * NOTE: an alternative approach might have been to store the current
+- * cgroup in bfqq and getting a reference to it, reducing the lookup
+- * time here, at the price of slightly more complex code.
+  */
+-static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+-						struct bfq_io_cq *bic,
+-						struct blkcg *blkcg)
++static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
++				     struct bfq_io_cq *bic,
++				     struct bfq_group *bfqg)
+ {
+ 	struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
+ 	struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
+-	struct bfq_group *bfqg;
+ 	struct bfq_entity *entity;
+ 
+-	bfqg = bfq_find_set_group(bfqd, blkcg);
+-
+-	if (unlikely(!bfqg))
+-		bfqg = bfqd->root_group;
+-
+ 	if (async_bfqq) {
+ 		entity = &async_bfqq->entity;
+ 
+@@ -725,9 +715,39 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ 	}
+ 
+ 	if (sync_bfqq) {
+-		entity = &sync_bfqq->entity;
+-		if (entity->sched_data != &bfqg->sched_data)
+-			bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
++		if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
++			/* We are the only user of this bfqq, just move it */
++			if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
++				bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
++		} else {
++			struct bfq_queue *bfqq;
++
++			/*
++			 * The queue was merged to a different queue. Check
++			 * that the merge chain still belongs to the same
++			 * cgroup.
++			 */
++			for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
++				if (bfqq->entity.sched_data !=
++				    &bfqg->sched_data)
++					break;
++			if (bfqq) {
++				/*
++				 * Some queue changed cgroup so the merge is
++				 * not valid anymore. We cannot easily just
++				 * cancel the merge (by clearing new_bfqq) as
++				 * there may be other processes using this
++				 * queue and holding refs to all queues below
++				 * sync_bfqq->new_bfqq. Similarly if the merge
++				 * already happened, we need to detach from
++				 * bfqq now so that we cannot merge bio to a
++				 * request from the old cgroup.
++				 */
++				bfq_put_cooperator(sync_bfqq);
++				bfq_release_process_ref(bfqd, sync_bfqq);
++				bic_set_bfqq(bic, NULL, 1);
++			}
++		}
+ 	}
+ 
+ 	return bfqg;
+@@ -736,20 +756,24 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
+ {
+ 	struct bfq_data *bfqd = bic_to_bfqd(bic);
+-	struct bfq_group *bfqg = NULL;
++	struct bfq_group *bfqg = bfq_bio_bfqg(bfqd, bio);
+ 	uint64_t serial_nr;
+ 
+-	rcu_read_lock();
+-	serial_nr = __bio_blkcg(bio)->css.serial_nr;
++	serial_nr = bfqg_to_blkg(bfqg)->blkcg->css.serial_nr;
+ 
+ 	/*
+ 	 * Check whether blkcg has changed.  The condition may trigger
+ 	 * spuriously on a newly created cic but there's no harm.
+ 	 */
+ 	if (unlikely(!bfqd) || likely(bic->blkcg_serial_nr == serial_nr))
+-		goto out;
++		return;
+ 
+-	bfqg = __bfq_bic_change_cgroup(bfqd, bic, __bio_blkcg(bio));
++	/*
++	 * New cgroup for this process. Make sure it is linked to bfq internal
++	 * cgroup hierarchy.
++	 */
++	bfq_link_bfqg(bfqd, bfqg);
++	__bfq_bic_change_cgroup(bfqd, bic, bfqg);
+ 	/*
+ 	 * Update blkg_path for bfq_log_* functions. We cache this
+ 	 * path, and update it here, for the following
+@@ -802,8 +826,6 @@ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
+ 	 */
+ 	blkg_path(bfqg_to_blkg(bfqg), bfqg->blkg_path, sizeof(bfqg->blkg_path));
+ 	bic->blkcg_serial_nr = serial_nr;
+-out:
+-	rcu_read_unlock();
+ }
+ 
+ /**
+@@ -931,6 +953,7 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
+ 
+ put_async_queues:
+ 	bfq_put_async_queues(bfqd, bfqg);
++	bfqg->online = false;
+ 
+ 	spin_unlock_irqrestore(&bfqd->lock, flags);
+ 	/*
+@@ -1420,7 +1443,7 @@ void bfq_end_wr_async(struct bfq_data *bfqd)
+ 	bfq_end_wr_async_queues(bfqd, bfqd->root_group);
+ }
+ 
+-struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, struct blkcg *blkcg)
++struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio)
+ {
+ 	return bfqd->root_group;
+ }
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index de2cd4bd602fd..592d32a46c4c3 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2227,10 +2227,17 @@ static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
+ 
+ 	spin_lock_irq(&bfqd->lock);
+ 
+-	if (bic)
++	if (bic) {
++		/*
++		 * Make sure cgroup info is uptodate for current process before
++		 * considering the merge.
++		 */
++		bfq_bic_update_cgroup(bic, bio);
++
+ 		bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf));
+-	else
++	} else {
+ 		bfqd->bio_bfqq = NULL;
++	}
+ 	bfqd->bio_bic = bic;
+ 
+ 	ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free);
+@@ -2260,8 +2267,6 @@ static int bfq_request_merge(struct request_queue *q, struct request **req,
+ 	return ELEVATOR_NO_MERGE;
+ }
+ 
+-static struct bfq_queue *bfq_init_rq(struct request *rq);
+-
+ static void bfq_request_merged(struct request_queue *q, struct request *req,
+ 			       enum elv_merge type)
+ {
+@@ -2270,7 +2275,7 @@ static void bfq_request_merged(struct request_queue *q, struct request *req,
+ 	    blk_rq_pos(req) <
+ 	    blk_rq_pos(container_of(rb_prev(&req->rb_node),
+ 				    struct request, rb_node))) {
+-		struct bfq_queue *bfqq = bfq_init_rq(req);
++		struct bfq_queue *bfqq = RQ_BFQQ(req);
+ 		struct bfq_data *bfqd;
+ 		struct request *prev, *next_rq;
+ 
+@@ -2322,8 +2327,8 @@ static void bfq_request_merged(struct request_queue *q, struct request *req,
+ static void bfq_requests_merged(struct request_queue *q, struct request *rq,
+ 				struct request *next)
+ {
+-	struct bfq_queue *bfqq = bfq_init_rq(rq),
+-		*next_bfqq = bfq_init_rq(next);
++	struct bfq_queue *bfqq = RQ_BFQQ(rq),
++		*next_bfqq = RQ_BFQQ(next);
+ 
+ 	if (!bfqq)
+ 		return;
+@@ -2502,6 +2507,14 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+ 	if (process_refs == 0 || new_process_refs == 0)
+ 		return NULL;
+ 
++	/*
++	 * Make sure merged queues belong to the same parent. Parents could
++	 * have changed since the time we decided the two queues are suitable
++	 * for merging.
++	 */
++	if (new_bfqq->entity.parent != bfqq->entity.parent)
++		return NULL;
++
+ 	bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
+ 		new_bfqq->pid);
+ 
+@@ -4917,7 +4930,7 @@ void bfq_put_queue(struct bfq_queue *bfqq)
+ 	bfqg_and_blkg_put(bfqg);
+ }
+ 
+-static void bfq_put_cooperator(struct bfq_queue *bfqq)
++void bfq_put_cooperator(struct bfq_queue *bfqq)
+ {
+ 	struct bfq_queue *__bfqq, *next;
+ 
+@@ -5149,14 +5162,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
+ 	struct bfq_queue *bfqq;
+ 	struct bfq_group *bfqg;
+ 
+-	rcu_read_lock();
+-
+-	bfqg = bfq_find_set_group(bfqd, __bio_blkcg(bio));
+-	if (!bfqg) {
+-		bfqq = &bfqd->oom_bfqq;
+-		goto out;
+-	}
+-
++	bfqg = bfq_bio_bfqg(bfqd, bio);
+ 	if (!is_sync) {
+ 		async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
+ 						  ioprio);
+@@ -5200,7 +5206,6 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
+ out:
+ 	bfqq->ref++; /* get a process reference to this queue */
+ 	bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq, bfqq->ref);
+-	rcu_read_unlock();
+ 	return bfqq;
+ }
+ 
+@@ -5503,6 +5508,8 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
+ 					   unsigned int cmd_flags) {}
+ #endif /* CONFIG_BFQ_CGROUP_DEBUG */
+ 
++static struct bfq_queue *bfq_init_rq(struct request *rq);
++
+ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
+ 			       bool at_head)
+ {
+@@ -5517,17 +5524,14 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
+ 		bfqg_stats_update_legacy_io(q, rq);
+ #endif
+ 	spin_lock_irq(&bfqd->lock);
++	bfqq = bfq_init_rq(rq);
+ 	if (blk_mq_sched_try_insert_merge(q, rq)) {
+ 		spin_unlock_irq(&bfqd->lock);
+ 		return;
+ 	}
+ 
+-	spin_unlock_irq(&bfqd->lock);
+-
+ 	blk_mq_sched_request_inserted(rq);
+ 
+-	spin_lock_irq(&bfqd->lock);
+-	bfqq = bfq_init_rq(rq);
+ 	if (!bfqq || at_head || blk_rq_is_passthrough(rq)) {
+ 		if (at_head)
+ 			list_add(&rq->queuelist, &bfqd->dispatch);
+diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
+index 703895224562c..2a4a6f44efffe 100644
+--- a/block/bfq-iosched.h
++++ b/block/bfq-iosched.h
+@@ -901,6 +901,8 @@ struct bfq_group {
+ 
+ 	/* reference counter (see comments in bfq_bic_update_cgroup) */
+ 	int ref;
++	/* Is bfq_group still online? */
++	bool online;
+ 
+ 	struct bfq_entity entity;
+ 	struct bfq_sched_data sched_data;
+@@ -954,6 +956,7 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
+ void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 		     bool compensate, enum bfqq_expiration reason);
+ void bfq_put_queue(struct bfq_queue *bfqq);
++void bfq_put_cooperator(struct bfq_queue *bfqq);
+ void bfq_end_wr_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
+ void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq);
+ void bfq_schedule_dispatch(struct bfq_data *bfqd);
+@@ -981,8 +984,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ void bfq_init_entity(struct bfq_entity *entity, struct bfq_group *bfqg);
+ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio);
+ void bfq_end_wr_async(struct bfq_data *bfqd);
+-struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
+-				     struct blkcg *blkcg);
++struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio);
+ struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg);
+ struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
+ struct bfq_group *bfq_create_group_hierarchy(struct bfq_data *bfqd, int node);
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 5b19665bc486a..484c6b2dd264e 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1892,12 +1892,8 @@ EXPORT_SYMBOL_GPL(bio_associate_blkg);
+  */
+ void bio_clone_blkg_association(struct bio *dst, struct bio *src)
+ {
+-	if (src->bi_blkg) {
+-		if (dst->bi_blkg)
+-			blkg_put(dst->bi_blkg);
+-		blkg_get(src->bi_blkg);
+-		dst->bi_blkg = src->bi_blkg;
+-	}
++	if (src->bi_blkg)
++		bio_associate_blkg_from_css(dst, &bio_blkcg(src)->css);
+ }
+ EXPORT_SYMBOL_GPL(bio_clone_blkg_association);
+ 
+diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
+index d8b0d8bd132bc..74511a060d598 100644
+--- a/block/blk-iolatency.c
++++ b/block/blk-iolatency.c
+@@ -86,7 +86,17 @@ struct iolatency_grp;
+ struct blk_iolatency {
+ 	struct rq_qos rqos;
+ 	struct timer_list timer;
+-	atomic_t enabled;
++
++	/*
++	 * ->enabled is the master enable switch gating the throttling logic and
++	 * inflight tracking. The number of cgroups which have iolat enabled is
++	 * tracked in ->enable_cnt, and ->enable is flipped on/off accordingly
++	 * from ->enable_work with the request_queue frozen. For details, See
++	 * blkiolatency_enable_work_fn().
++	 */
++	bool enabled;
++	atomic_t enable_cnt;
++	struct work_struct enable_work;
+ };
+ 
+ static inline struct blk_iolatency *BLKIOLATENCY(struct rq_qos *rqos)
+@@ -94,11 +104,6 @@ static inline struct blk_iolatency *BLKIOLATENCY(struct rq_qos *rqos)
+ 	return container_of(rqos, struct blk_iolatency, rqos);
+ }
+ 
+-static inline bool blk_iolatency_enabled(struct blk_iolatency *blkiolat)
+-{
+-	return atomic_read(&blkiolat->enabled) > 0;
+-}
+-
+ struct child_latency_info {
+ 	spinlock_t lock;
+ 
+@@ -463,7 +468,7 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
+ 	struct blkcg_gq *blkg = bio->bi_blkg;
+ 	bool issue_as_root = bio_issue_as_root_blkg(bio);
+ 
+-	if (!blk_iolatency_enabled(blkiolat))
++	if (!blkiolat->enabled)
+ 		return;
+ 
+ 	while (blkg && blkg->parent) {
+@@ -593,7 +598,6 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
+ 	u64 window_start;
+ 	u64 now;
+ 	bool issue_as_root = bio_issue_as_root_blkg(bio);
+-	bool enabled = false;
+ 	int inflight = 0;
+ 
+ 	blkg = bio->bi_blkg;
+@@ -604,8 +608,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
+ 	if (!iolat)
+ 		return;
+ 
+-	enabled = blk_iolatency_enabled(iolat->blkiolat);
+-	if (!enabled)
++	if (!iolat->blkiolat->enabled)
+ 		return;
+ 
+ 	now = ktime_to_ns(ktime_get());
+@@ -644,6 +647,7 @@ static void blkcg_iolatency_exit(struct rq_qos *rqos)
+ 	struct blk_iolatency *blkiolat = BLKIOLATENCY(rqos);
+ 
+ 	del_timer_sync(&blkiolat->timer);
++	flush_work(&blkiolat->enable_work);
+ 	blkcg_deactivate_policy(rqos->q, &blkcg_policy_iolatency);
+ 	kfree(blkiolat);
+ }
+@@ -715,6 +719,44 @@ next:
+ 	rcu_read_unlock();
+ }
+ 
++/**
++ * blkiolatency_enable_work_fn - Enable or disable iolatency on the device
++ * @work: enable_work of the blk_iolatency of interest
++ *
++ * iolatency needs to keep track of the number of in-flight IOs per cgroup. This
++ * is relatively expensive as it involves walking up the hierarchy twice for
++ * every IO. Thus, if iolatency is not enabled in any cgroup for the device, we
++ * want to disable the in-flight tracking.
++ *
++ * We have to make sure that the counting is balanced - we don't want to leak
++ * the in-flight counts by disabling accounting in the completion path while IOs
++ * are in flight. This is achieved by ensuring that no IO is in flight by
++ * freezing the queue while flipping ->enabled. As this requires a sleepable
++ * context, ->enabled flipping is punted to this work function.
++ */
++static void blkiolatency_enable_work_fn(struct work_struct *work)
++{
++	struct blk_iolatency *blkiolat = container_of(work, struct blk_iolatency,
++						      enable_work);
++	bool enabled;
++
++	/*
++	 * There can only be one instance of this function running for @blkiolat
++	 * and it's guaranteed to be executed at least once after the latest
++	 * ->enabled_cnt modification. Acting on the latest ->enable_cnt is
++	 * sufficient.
++	 *
++	 * Also, we know @blkiolat is safe to access as ->enable_work is flushed
++	 * in blkcg_iolatency_exit().
++	 */
++	enabled = atomic_read(&blkiolat->enable_cnt);
++	if (enabled != blkiolat->enabled) {
++		blk_mq_freeze_queue(blkiolat->rqos.q);
++		blkiolat->enabled = enabled;
++		blk_mq_unfreeze_queue(blkiolat->rqos.q);
++	}
++}
++
+ int blk_iolatency_init(struct request_queue *q)
+ {
+ 	struct blk_iolatency *blkiolat;
+@@ -740,17 +782,15 @@ int blk_iolatency_init(struct request_queue *q)
+ 	}
+ 
+ 	timer_setup(&blkiolat->timer, blkiolatency_timer_fn, 0);
++	INIT_WORK(&blkiolat->enable_work, blkiolatency_enable_work_fn);
+ 
+ 	return 0;
+ }
+ 
+-/*
+- * return 1 for enabling iolatency, return -1 for disabling iolatency, otherwise
+- * return 0.
+- */
+-static int iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val)
++static void iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val)
+ {
+ 	struct iolatency_grp *iolat = blkg_to_lat(blkg);
++	struct blk_iolatency *blkiolat = iolat->blkiolat;
+ 	u64 oldval = iolat->min_lat_nsec;
+ 
+ 	iolat->min_lat_nsec = val;
+@@ -758,13 +798,15 @@ static int iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val)
+ 	iolat->cur_win_nsec = min_t(u64, iolat->cur_win_nsec,
+ 				    BLKIOLATENCY_MAX_WIN_SIZE);
+ 
+-	if (!oldval && val)
+-		return 1;
++	if (!oldval && val) {
++		if (atomic_inc_return(&blkiolat->enable_cnt) == 1)
++			schedule_work(&blkiolat->enable_work);
++	}
+ 	if (oldval && !val) {
+ 		blkcg_clear_delay(blkg);
+-		return -1;
++		if (atomic_dec_return(&blkiolat->enable_cnt) == 0)
++			schedule_work(&blkiolat->enable_work);
+ 	}
+-	return 0;
+ }
+ 
+ static void iolatency_clear_scaling(struct blkcg_gq *blkg)
+@@ -796,7 +838,6 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
+ 	u64 lat_val = 0;
+ 	u64 oldval;
+ 	int ret;
+-	int enable = 0;
+ 
+ 	ret = blkg_conf_prep(blkcg, &blkcg_policy_iolatency, buf, &ctx);
+ 	if (ret)
+@@ -831,41 +872,12 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
+ 	blkg = ctx.blkg;
+ 	oldval = iolat->min_lat_nsec;
+ 
+-	enable = iolatency_set_min_lat_nsec(blkg, lat_val);
+-	if (enable) {
+-		if (!blk_get_queue(blkg->q)) {
+-			ret = -ENODEV;
+-			goto out;
+-		}
+-
+-		blkg_get(blkg);
+-	}
+-
+-	if (oldval != iolat->min_lat_nsec) {
++	iolatency_set_min_lat_nsec(blkg, lat_val);
++	if (oldval != iolat->min_lat_nsec)
+ 		iolatency_clear_scaling(blkg);
+-	}
+-
+ 	ret = 0;
+ out:
+ 	blkg_conf_finish(&ctx);
+-	if (ret == 0 && enable) {
+-		struct iolatency_grp *tmp = blkg_to_lat(blkg);
+-		struct blk_iolatency *blkiolat = tmp->blkiolat;
+-
+-		blk_mq_freeze_queue(blkg->q);
+-
+-		if (enable == 1)
+-			atomic_inc(&blkiolat->enabled);
+-		else if (enable == -1)
+-			atomic_dec(&blkiolat->enabled);
+-		else
+-			WARN_ON_ONCE(1);
+-
+-		blk_mq_unfreeze_queue(blkg->q);
+-
+-		blkg_put(blkg);
+-		blk_put_queue(blkg->q);
+-	}
+ 	return ret ?: nbytes;
+ }
+ 
+@@ -1006,14 +1018,8 @@ static void iolatency_pd_offline(struct blkg_policy_data *pd)
+ {
+ 	struct iolatency_grp *iolat = pd_to_lat(pd);
+ 	struct blkcg_gq *blkg = lat_to_blkg(iolat);
+-	struct blk_iolatency *blkiolat = iolat->blkiolat;
+-	int ret;
+ 
+-	ret = iolatency_set_min_lat_nsec(blkg, 0);
+-	if (ret == 1)
+-		atomic_inc(&blkiolat->enabled);
+-	if (ret == -1)
+-		atomic_dec(&blkiolat->enabled);
++	iolatency_set_min_lat_nsec(blkg, 0);
+ 	iolatency_clear_scaling(blkg);
+ }
+ 
+diff --git a/crypto/cryptd.c b/crypto/cryptd.c
+index a1bea0f4baa88..668095eca0faf 100644
+--- a/crypto/cryptd.c
++++ b/crypto/cryptd.c
+@@ -39,6 +39,10 @@ struct cryptd_cpu_queue {
+ };
+ 
+ struct cryptd_queue {
++	/*
++	 * Protected by disabling BH to allow enqueueing from softinterrupt and
++	 * dequeuing from kworker (cryptd_queue_worker()).
++	 */
+ 	struct cryptd_cpu_queue __percpu *cpu_queue;
+ };
+ 
+@@ -125,28 +129,28 @@ static void cryptd_fini_queue(struct cryptd_queue *queue)
+ static int cryptd_enqueue_request(struct cryptd_queue *queue,
+ 				  struct crypto_async_request *request)
+ {
+-	int cpu, err;
++	int err;
+ 	struct cryptd_cpu_queue *cpu_queue;
+ 	refcount_t *refcnt;
+ 
+-	cpu = get_cpu();
++	local_bh_disable();
+ 	cpu_queue = this_cpu_ptr(queue->cpu_queue);
+ 	err = crypto_enqueue_request(&cpu_queue->queue, request);
+ 
+ 	refcnt = crypto_tfm_ctx(request->tfm);
+ 
+ 	if (err == -ENOSPC)
+-		goto out_put_cpu;
++		goto out;
+ 
+-	queue_work_on(cpu, cryptd_wq, &cpu_queue->work);
++	queue_work_on(smp_processor_id(), cryptd_wq, &cpu_queue->work);
+ 
+ 	if (!refcount_read(refcnt))
+-		goto out_put_cpu;
++		goto out;
+ 
+ 	refcount_inc(refcnt);
+ 
+-out_put_cpu:
+-	put_cpu();
++out:
++	local_bh_enable();
+ 
+ 	return err;
+ }
+@@ -162,15 +166,10 @@ static void cryptd_queue_worker(struct work_struct *work)
+ 	cpu_queue = container_of(work, struct cryptd_cpu_queue, work);
+ 	/*
+ 	 * Only handle one request at a time to avoid hogging crypto workqueue.
+-	 * preempt_disable/enable is used to prevent being preempted by
+-	 * cryptd_enqueue_request(). local_bh_disable/enable is used to prevent
+-	 * cryptd_enqueue_request() being accessed from software interrupts.
+ 	 */
+ 	local_bh_disable();
+-	preempt_disable();
+ 	backlog = crypto_get_backlog(&cpu_queue->queue);
+ 	req = crypto_dequeue_request(&cpu_queue->queue);
+-	preempt_enable();
+ 	local_bh_enable();
+ 
+ 	if (!req)
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index bd16340088389..619d34d73dcfe 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -433,6 +433,16 @@ void acpi_init_properties(struct acpi_device *adev)
+ 		acpi_extract_apple_properties(adev);
+ }
+ 
++static void acpi_free_device_properties(struct list_head *list)
++{
++	struct acpi_device_properties *props, *tmp;
++
++	list_for_each_entry_safe(props, tmp, list, list) {
++		list_del(&props->list);
++		kfree(props);
++	}
++}
++
+ static void acpi_destroy_nondev_subnodes(struct list_head *list)
+ {
+ 	struct acpi_data_node *dn, *next;
+@@ -445,22 +455,18 @@ static void acpi_destroy_nondev_subnodes(struct list_head *list)
+ 		wait_for_completion(&dn->kobj_done);
+ 		list_del(&dn->sibling);
+ 		ACPI_FREE((void *)dn->data.pointer);
++		acpi_free_device_properties(&dn->data.properties);
+ 		kfree(dn);
+ 	}
+ }
+ 
+ void acpi_free_properties(struct acpi_device *adev)
+ {
+-	struct acpi_device_properties *props, *tmp;
+-
+ 	acpi_destroy_nondev_subnodes(&adev->data.subnodes);
+ 	ACPI_FREE((void *)adev->data.pointer);
+ 	adev->data.of_compatible = NULL;
+ 	adev->data.pointer = NULL;
+-	list_for_each_entry_safe(props, tmp, &adev->data.properties, list) {
+-		list_del(&props->list);
+-		kfree(props);
+-	}
++	acpi_free_device_properties(&adev->data.properties);
+ }
+ 
+ /**
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 503935b1deeb1..cfda5720de027 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -377,6 +377,18 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "20GGA00L00"),
+ 		},
+ 	},
++	/*
++	 * ASUS B1400CEAE hangs on resume from suspend (see
++	 * https://bugzilla.kernel.org/show_bug.cgi?id=215742).
++	 */
++	{
++	.callback = init_default_s3,
++	.ident = "ASUS B1400CEAE",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "ASUS EXPERTBOOK B1400CEAE"),
++		},
++	},
+ 	{},
+ };
+ 
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index de058d15b33ea..49eb14271f287 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -560,10 +560,9 @@ int register_memory(struct memory_block *memory)
+ 	}
+ 	ret = xa_err(xa_store(&memory_blocks, memory->dev.id, memory,
+ 			      GFP_KERNEL));
+-	if (ret) {
+-		put_device(&memory->dev);
++	if (ret)
+ 		device_unregister(&memory->dev);
+-	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/base/node.c b/drivers/base/node.c
+index 21965de8538be..5f745c906c330 100644
+--- a/drivers/base/node.c
++++ b/drivers/base/node.c
+@@ -655,6 +655,7 @@ static int register_node(struct node *node, int num)
+  */
+ void unregister_node(struct node *node)
+ {
++	compaction_unregister_node(node);
+ 	hugetlb_unregister_node(node);		/* no-op, if memoryless node */
+ 	node_remove_accesses(node);
+ 	node_remove_caches(node);
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 3cdbd81f983fa..407527ff6b1f6 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -3631,9 +3631,8 @@ const char *cmdname(enum drbd_packet cmd)
+ 	 * when we want to support more than
+ 	 * one PRO_VERSION */
+ 	static const char *cmdnames[] = {
++
+ 		[P_DATA]	        = "Data",
+-		[P_WSAME]	        = "WriteSame",
+-		[P_TRIM]	        = "Trim",
+ 		[P_DATA_REPLY]	        = "DataReply",
+ 		[P_RS_DATA_REPLY]	= "RSDataReply",
+ 		[P_BARRIER]	        = "Barrier",
+@@ -3644,7 +3643,6 @@ const char *cmdname(enum drbd_packet cmd)
+ 		[P_DATA_REQUEST]	= "DataRequest",
+ 		[P_RS_DATA_REQUEST]     = "RSDataRequest",
+ 		[P_SYNC_PARAM]	        = "SyncParam",
+-		[P_SYNC_PARAM89]	= "SyncParam89",
+ 		[P_PROTOCOL]            = "ReportProtocol",
+ 		[P_UUIDS]	        = "ReportUUIDs",
+ 		[P_SIZES]	        = "ReportSizes",
+@@ -3652,6 +3650,7 @@ const char *cmdname(enum drbd_packet cmd)
+ 		[P_SYNC_UUID]           = "ReportSyncUUID",
+ 		[P_AUTH_CHALLENGE]      = "AuthChallenge",
+ 		[P_AUTH_RESPONSE]	= "AuthResponse",
++		[P_STATE_CHG_REQ]       = "StateChgRequest",
+ 		[P_PING]		= "Ping",
+ 		[P_PING_ACK]	        = "PingAck",
+ 		[P_RECV_ACK]	        = "RecvAck",
+@@ -3662,24 +3661,26 @@ const char *cmdname(enum drbd_packet cmd)
+ 		[P_NEG_DREPLY]	        = "NegDReply",
+ 		[P_NEG_RS_DREPLY]	= "NegRSDReply",
+ 		[P_BARRIER_ACK]	        = "BarrierAck",
+-		[P_STATE_CHG_REQ]       = "StateChgRequest",
+ 		[P_STATE_CHG_REPLY]     = "StateChgReply",
+ 		[P_OV_REQUEST]          = "OVRequest",
+ 		[P_OV_REPLY]            = "OVReply",
+ 		[P_OV_RESULT]           = "OVResult",
+ 		[P_CSUM_RS_REQUEST]     = "CsumRSRequest",
+ 		[P_RS_IS_IN_SYNC]	= "CsumRSIsInSync",
++		[P_SYNC_PARAM89]	= "SyncParam89",
+ 		[P_COMPRESSED_BITMAP]   = "CBitmap",
+ 		[P_DELAY_PROBE]         = "DelayProbe",
+ 		[P_OUT_OF_SYNC]		= "OutOfSync",
+-		[P_RETRY_WRITE]		= "RetryWrite",
+ 		[P_RS_CANCEL]		= "RSCancel",
+ 		[P_CONN_ST_CHG_REQ]	= "conn_st_chg_req",
+ 		[P_CONN_ST_CHG_REPLY]	= "conn_st_chg_reply",
+ 		[P_RETRY_WRITE]		= "retry_write",
+ 		[P_PROTOCOL_UPDATE]	= "protocol_update",
++		[P_TRIM]	        = "Trim",
+ 		[P_RS_THIN_REQ]         = "rs_thin_req",
+ 		[P_RS_DEALLOCATED]      = "rs_deallocated",
++		[P_WSAME]	        = "WriteSame",
++		[P_ZEROES]		= "Zeroes",
+ 
+ 		/* enum drbd_packet, but not commands - obsoleted flags:
+ 		 *	P_MAY_IGNORE
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 59c452fff8352..ecde800ba2102 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -880,11 +880,15 @@ static int wait_for_reconnect(struct nbd_device *nbd)
+ 	struct nbd_config *config = nbd->config;
+ 	if (!config->dead_conn_timeout)
+ 		return 0;
+-	if (test_bit(NBD_RT_DISCONNECTED, &config->runtime_flags))
++
++	if (!wait_event_timeout(config->conn_wait,
++				test_bit(NBD_RT_DISCONNECTED,
++					 &config->runtime_flags) ||
++				atomic_read(&config->live_connections) > 0,
++				config->dead_conn_timeout))
+ 		return 0;
+-	return wait_event_timeout(config->conn_wait,
+-				  atomic_read(&config->live_connections) > 0,
+-				  config->dead_conn_timeout) > 0;
++
++	return !test_bit(NBD_RT_DISCONNECTED, &config->runtime_flags);
+ }
+ 
+ static int nbd_handle_cmd(struct nbd_cmd *cmd, int index)
+@@ -2029,6 +2033,7 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd)
+ 	mutex_lock(&nbd->config_lock);
+ 	nbd_disconnect(nbd);
+ 	sock_shutdown(nbd);
++	wake_up(&nbd->config->conn_wait);
+ 	/*
+ 	 * Make sure recv thread has finished, so it does not drop the last
+ 	 * config ref and try to destroy the workqueue from inside the work
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 02e2056780ad2..9b54eec9b17eb 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -865,11 +865,12 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 		blk_queue_io_opt(q, blk_size * opt_io_size);
+ 
+ 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) {
+-		q->limits.discard_granularity = blk_size;
+-
+ 		virtio_cread(vdev, struct virtio_blk_config,
+ 			     discard_sector_alignment, &v);
+-		q->limits.discard_alignment = v ? v << SECTOR_SHIFT : 0;
++		if (v)
++			q->limits.discard_granularity = v << SECTOR_SHIFT;
++		else
++			q->limits.discard_granularity = blk_size;
+ 
+ 		virtio_cread(vdev, struct virtio_blk_config,
+ 			     max_discard_sectors, &v);
+diff --git a/drivers/char/hw_random/omap3-rom-rng.c b/drivers/char/hw_random/omap3-rom-rng.c
+index e0d77fa048fb6..f06e4f95114f9 100644
+--- a/drivers/char/hw_random/omap3-rom-rng.c
++++ b/drivers/char/hw_random/omap3-rom-rng.c
+@@ -92,7 +92,7 @@ static int __maybe_unused omap_rom_rng_runtime_resume(struct device *dev)
+ 
+ 	r = ddata->rom_rng_call(0, 0, RNG_GEN_PRNG_HW_INIT);
+ 	if (r != 0) {
+-		clk_disable(ddata->clk);
++		clk_disable_unprepare(ddata->clk);
+ 		dev_err(dev, "HW init failed: %d\n", r);
+ 
+ 		return -EIO;
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index 8f147274f826a..05e7339752ac3 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -11,8 +11,8 @@
+  * Copyright 2002 MontaVista Software Inc.
+  */
+ 
+-#define pr_fmt(fmt) "%s" fmt, "IPMI message handler: "
+-#define dev_fmt pr_fmt
++#define pr_fmt(fmt) "IPMI message handler: " fmt
++#define dev_fmt(fmt) pr_fmt(fmt)
+ 
+ #include <linux/module.h>
+ #include <linux/errno.h>
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 3de679723648b..4771397495130 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -840,6 +840,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 		break;
+ 
+ 	case SSIF_GETTING_EVENTS:
++		if (!msg) {
++			/* Should never happen, but just in case. */
++			dev_warn(&ssif_info->client->dev,
++				 "No message set while getting events\n");
++			ipmi_ssif_unlock_cond(ssif_info, flags);
++			break;
++		}
++
+ 		if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) {
+ 			/* Error getting event, probably done. */
+ 			msg->done(msg);
+@@ -864,6 +872,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 		break;
+ 
+ 	case SSIF_GETTING_MESSAGES:
++		if (!msg) {
++			/* Should never happen, but just in case. */
++			dev_warn(&ssif_info->client->dev,
++				 "No message set while getting messages\n");
++			ipmi_ssif_unlock_cond(ssif_info, flags);
++			break;
++		}
++
+ 		if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) {
+ 			/* Error getting event, probably done. */
+ 			msg->done(msg);
+@@ -887,6 +903,13 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 			deliver_recv_msg(ssif_info, msg);
+ 		}
+ 		break;
++
++	default:
++		/* Should never happen, but just in case. */
++		dev_warn(&ssif_info->client->dev,
++			 "Invalid state in message done handling: %d\n",
++			 ssif_info->ssif_state);
++		ipmi_ssif_unlock_cond(ssif_info, flags);
+ 	}
+ 
+ 	flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index c206db96f60a1..5776dfd4a6fca 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -79,8 +79,7 @@ static enum {
+ 	CRNG_EARLY = 1, /* At least POOL_EARLY_BITS collected */
+ 	CRNG_READY = 2  /* Fully initialized with POOL_READY_BITS collected */
+ } crng_init __read_mostly = CRNG_EMPTY;
+-static DEFINE_STATIC_KEY_FALSE(crng_is_ready);
+-#define crng_ready() (static_branch_likely(&crng_is_ready) || crng_init >= CRNG_READY)
++#define crng_ready() (likely(crng_init >= CRNG_READY))
+ /* Various types of waiters for crng_init->CRNG_READY transition. */
+ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
+ static struct fasync_struct *fasync;
+@@ -110,11 +109,6 @@ bool rng_is_initialized(void)
+ }
+ EXPORT_SYMBOL(rng_is_initialized);
+ 
+-static void __cold crng_set_ready(struct work_struct *work)
+-{
+-	static_branch_enable(&crng_is_ready);
+-}
+-
+ /* Used by wait_for_random_bytes(), and considered an entropy collector, below. */
+ static void try_to_generate_entropy(void);
+ 
+@@ -268,7 +262,7 @@ static void crng_reseed(void)
+ 		++next_gen;
+ 	WRITE_ONCE(base_crng.generation, next_gen);
+ 	WRITE_ONCE(base_crng.birth, jiffies);
+-	if (!static_branch_likely(&crng_is_ready))
++	if (!crng_ready())
+ 		crng_init = CRNG_READY;
+ 	spin_unlock_irqrestore(&base_crng.lock, flags);
+ 	memzero_explicit(key, sizeof(key));
+@@ -711,7 +705,6 @@ static void extract_entropy(void *buf, size_t len)
+ 
+ static void __cold _credit_init_bits(size_t bits)
+ {
+-	static struct execute_work set_ready;
+ 	unsigned int new, orig, add;
+ 	unsigned long flags;
+ 
+@@ -727,7 +720,6 @@ static void __cold _credit_init_bits(size_t bits)
+ 
+ 	if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) {
+ 		crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */
+-		execute_in_process_context(crng_set_ready, &set_ready);
+ 		process_random_ready_list();
+ 		wake_up_interruptible(&crng_init_wait);
+ 		kill_fasync(&fasync, SIGIO, POLL_IN);
+diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
+index a310372dc53e9..82f6592bbadb0 100644
+--- a/drivers/cpufreq/mediatek-cpufreq.c
++++ b/drivers/cpufreq/mediatek-cpufreq.c
+@@ -44,6 +44,8 @@ struct mtk_cpu_dvfs_info {
+ 	bool need_voltage_tracking;
+ };
+ 
++static struct platform_device *cpufreq_pdev;
++
+ static LIST_HEAD(dvfs_info_list);
+ 
+ static struct mtk_cpu_dvfs_info *mtk_cpu_dvfs_info_lookup(int cpu)
+@@ -546,7 +548,6 @@ static int __init mtk_cpufreq_driver_init(void)
+ {
+ 	struct device_node *np;
+ 	const struct of_device_id *match;
+-	struct platform_device *pdev;
+ 	int err;
+ 
+ 	np = of_find_node_by_path("/");
+@@ -570,15 +571,23 @@ static int __init mtk_cpufreq_driver_init(void)
+ 	 * and the device registration codes are put here to handle defer
+ 	 * probing.
+ 	 */
+-	pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0);
+-	if (IS_ERR(pdev)) {
++	cpufreq_pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0);
++	if (IS_ERR(cpufreq_pdev)) {
+ 		pr_err("failed to register mtk-cpufreq platform device\n");
+-		return PTR_ERR(pdev);
++		platform_driver_unregister(&mtk_cpufreq_platdrv);
++		return PTR_ERR(cpufreq_pdev);
+ 	}
+ 
+ 	return 0;
+ }
+-device_initcall(mtk_cpufreq_driver_init);
++module_init(mtk_cpufreq_driver_init)
++
++static void __exit mtk_cpufreq_driver_exit(void)
++{
++	platform_device_unregister(cpufreq_pdev);
++	platform_driver_unregister(&mtk_cpufreq_platdrv);
++}
++module_exit(mtk_cpufreq_driver_exit)
+ 
+ MODULE_DESCRIPTION("MediaTek CPUFreq driver");
+ MODULE_AUTHOR("Pi-Cheng Chen <pi-cheng.chen@linaro.org>");
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index f783748462f94..7b3be3dc2210e 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -93,6 +93,68 @@ static int sun8i_ss_cipher_fallback(struct skcipher_request *areq)
+ 	return err;
+ }
+ 
++static int sun8i_ss_setup_ivs(struct skcipher_request *areq)
++{
++	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
++	struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
++	struct sun8i_ss_dev *ss = op->ss;
++	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
++	struct scatterlist *sg = areq->src;
++	unsigned int todo, offset;
++	unsigned int len = areq->cryptlen;
++	unsigned int ivsize = crypto_skcipher_ivsize(tfm);
++	struct sun8i_ss_flow *sf = &ss->flows[rctx->flow];
++	int i = 0;
++	u32 a;
++	int err;
++
++	rctx->ivlen = ivsize;
++	if (rctx->op_dir & SS_DECRYPTION) {
++		offset = areq->cryptlen - ivsize;
++		scatterwalk_map_and_copy(sf->biv, areq->src, offset,
++					 ivsize, 0);
++	}
++
++	/* we need to copy all IVs from source in case DMA is bi-directionnal */
++	while (sg && len) {
++		if (sg_dma_len(sg) == 0) {
++			sg = sg_next(sg);
++			continue;
++		}
++		if (i == 0)
++			memcpy(sf->iv[0], areq->iv, ivsize);
++		a = dma_map_single(ss->dev, sf->iv[i], ivsize, DMA_TO_DEVICE);
++		if (dma_mapping_error(ss->dev, a)) {
++			memzero_explicit(sf->iv[i], ivsize);
++			dev_err(ss->dev, "Cannot DMA MAP IV\n");
++			err = -EFAULT;
++			goto dma_iv_error;
++		}
++		rctx->p_iv[i] = a;
++		/* we need to setup all others IVs only in the decrypt way */
++		if (rctx->op_dir & SS_ENCRYPTION)
++			return 0;
++		todo = min(len, sg_dma_len(sg));
++		len -= todo;
++		i++;
++		if (i < MAX_SG) {
++			offset = sg->length - ivsize;
++			scatterwalk_map_and_copy(sf->iv[i], sg, offset, ivsize, 0);
++		}
++		rctx->niv = i;
++		sg = sg_next(sg);
++	}
++
++	return 0;
++dma_iv_error:
++	i--;
++	while (i >= 0) {
++		dma_unmap_single(ss->dev, rctx->p_iv[i], ivsize, DMA_TO_DEVICE);
++		memzero_explicit(sf->iv[i], ivsize);
++	}
++	return err;
++}
++
+ static int sun8i_ss_cipher(struct skcipher_request *areq)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
+@@ -101,9 +163,9 @@ static int sun8i_ss_cipher(struct skcipher_request *areq)
+ 	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
+ 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+ 	struct sun8i_ss_alg_template *algt;
++	struct sun8i_ss_flow *sf = &ss->flows[rctx->flow];
+ 	struct scatterlist *sg;
+ 	unsigned int todo, len, offset, ivsize;
+-	void *backup_iv = NULL;
+ 	int nr_sgs = 0;
+ 	int nr_sgd = 0;
+ 	int err = 0;
+@@ -134,30 +196,9 @@ static int sun8i_ss_cipher(struct skcipher_request *areq)
+ 
+ 	ivsize = crypto_skcipher_ivsize(tfm);
+ 	if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) {
+-		rctx->ivlen = ivsize;
+-		rctx->biv = kzalloc(ivsize, GFP_KERNEL | GFP_DMA);
+-		if (!rctx->biv) {
+-			err = -ENOMEM;
++		err = sun8i_ss_setup_ivs(areq);
++		if (err)
+ 			goto theend_key;
+-		}
+-		if (rctx->op_dir & SS_DECRYPTION) {
+-			backup_iv = kzalloc(ivsize, GFP_KERNEL);
+-			if (!backup_iv) {
+-				err = -ENOMEM;
+-				goto theend_key;
+-			}
+-			offset = areq->cryptlen - ivsize;
+-			scatterwalk_map_and_copy(backup_iv, areq->src, offset,
+-						 ivsize, 0);
+-		}
+-		memcpy(rctx->biv, areq->iv, ivsize);
+-		rctx->p_iv = dma_map_single(ss->dev, rctx->biv, rctx->ivlen,
+-					    DMA_TO_DEVICE);
+-		if (dma_mapping_error(ss->dev, rctx->p_iv)) {
+-			dev_err(ss->dev, "Cannot DMA MAP IV\n");
+-			err = -ENOMEM;
+-			goto theend_iv;
+-		}
+ 	}
+ 	if (areq->src == areq->dst) {
+ 		nr_sgs = dma_map_sg(ss->dev, areq->src, sg_nents(areq->src),
+@@ -240,21 +281,19 @@ theend_sgs:
+ 	}
+ 
+ theend_iv:
+-	if (rctx->p_iv)
+-		dma_unmap_single(ss->dev, rctx->p_iv, rctx->ivlen,
+-				 DMA_TO_DEVICE);
+-
+ 	if (areq->iv && ivsize > 0) {
+-		if (rctx->biv) {
+-			offset = areq->cryptlen - ivsize;
+-			if (rctx->op_dir & SS_DECRYPTION) {
+-				memcpy(areq->iv, backup_iv, ivsize);
+-				kfree_sensitive(backup_iv);
+-			} else {
+-				scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
+-							 ivsize, 0);
+-			}
+-			kfree(rctx->biv);
++		for (i = 0; i < rctx->niv; i++) {
++			dma_unmap_single(ss->dev, rctx->p_iv[i], ivsize, DMA_TO_DEVICE);
++			memzero_explicit(sf->iv[i], ivsize);
++		}
++
++		offset = areq->cryptlen - ivsize;
++		if (rctx->op_dir & SS_DECRYPTION) {
++			memcpy(areq->iv, sf->biv, ivsize);
++			memzero_explicit(sf->biv, ivsize);
++		} else {
++			scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
++					ivsize, 0);
+ 		}
+ 	}
+ 
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+index 319fe3279a716..6575305786436 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+@@ -66,6 +66,7 @@ int sun8i_ss_run_task(struct sun8i_ss_dev *ss, struct sun8i_cipher_req_ctx *rctx
+ 		      const char *name)
+ {
+ 	int flow = rctx->flow;
++	unsigned int ivlen = rctx->ivlen;
+ 	u32 v = SS_START;
+ 	int i;
+ 
+@@ -104,15 +105,14 @@ int sun8i_ss_run_task(struct sun8i_ss_dev *ss, struct sun8i_cipher_req_ctx *rctx
+ 		mutex_lock(&ss->mlock);
+ 		writel(rctx->p_key, ss->base + SS_KEY_ADR_REG);
+ 
+-		if (i == 0) {
+-			if (rctx->p_iv)
+-				writel(rctx->p_iv, ss->base + SS_IV_ADR_REG);
+-		} else {
+-			if (rctx->biv) {
+-				if (rctx->op_dir == SS_ENCRYPTION)
+-					writel(rctx->t_dst[i - 1].addr + rctx->t_dst[i - 1].len * 4 - rctx->ivlen, ss->base + SS_IV_ADR_REG);
++		if (ivlen) {
++			if (rctx->op_dir == SS_ENCRYPTION) {
++				if (i == 0)
++					writel(rctx->p_iv[0], ss->base + SS_IV_ADR_REG);
+ 				else
+-					writel(rctx->t_src[i - 1].addr + rctx->t_src[i - 1].len * 4 - rctx->ivlen, ss->base + SS_IV_ADR_REG);
++					writel(rctx->t_dst[i - 1].addr + rctx->t_dst[i - 1].len * 4 - ivlen, ss->base + SS_IV_ADR_REG);
++			} else {
++				writel(rctx->p_iv[i], ss->base + SS_IV_ADR_REG);
+ 			}
+ 		}
+ 
+@@ -464,7 +464,7 @@ static void sun8i_ss_free_flows(struct sun8i_ss_dev *ss, int i)
+  */
+ static int allocate_flows(struct sun8i_ss_dev *ss)
+ {
+-	int i, err;
++	int i, j, err;
+ 
+ 	ss->flows = devm_kcalloc(ss->dev, MAXFLOW, sizeof(struct sun8i_ss_flow),
+ 				 GFP_KERNEL);
+@@ -474,6 +474,18 @@ static int allocate_flows(struct sun8i_ss_dev *ss)
+ 	for (i = 0; i < MAXFLOW; i++) {
+ 		init_completion(&ss->flows[i].complete);
+ 
++		ss->flows[i].biv = devm_kmalloc(ss->dev, AES_BLOCK_SIZE,
++						GFP_KERNEL | GFP_DMA);
++		if (!ss->flows[i].biv)
++			goto error_engine;
++
++		for (j = 0; j < MAX_SG; j++) {
++			ss->flows[i].iv[j] = devm_kmalloc(ss->dev, AES_BLOCK_SIZE,
++							  GFP_KERNEL | GFP_DMA);
++			if (!ss->flows[i].iv[j])
++				goto error_engine;
++		}
++
+ 		ss->flows[i].engine = crypto_engine_alloc_init(ss->dev, true);
+ 		if (!ss->flows[i].engine) {
+ 			dev_err(ss->dev, "Cannot allocate engine\n");
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+index c9edecd43ef96..55d652cd468be 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+@@ -379,13 +379,21 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
+ 	}
+ 
+ 	len = areq->nbytes;
+-	for_each_sg(areq->src, sg, nr_sgs, i) {
++	sg = areq->src;
++	i = 0;
++	while (len > 0 && sg) {
++		if (sg_dma_len(sg) == 0) {
++			sg = sg_next(sg);
++			continue;
++		}
+ 		rctx->t_src[i].addr = sg_dma_address(sg);
+ 		todo = min(len, sg_dma_len(sg));
+ 		rctx->t_src[i].len = todo / 4;
+ 		len -= todo;
+ 		rctx->t_dst[i].addr = addr_res;
+ 		rctx->t_dst[i].len = digestsize / 4;
++		sg = sg_next(sg);
++		i++;
+ 	}
+ 	if (len > 0) {
+ 		dev_err(ss->dev, "remaining len %d\n", len);
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
+index 1a66457f4a205..49147195ecf6c 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
+@@ -120,11 +120,15 @@ struct sginfo {
+  * @complete:	completion for the current task on this flow
+  * @status:	set to 1 by interrupt if task is done
+  * @stat_req:	number of request done by this flow
++ * @iv:		list of IV to use for each step
++ * @biv:	buffer which contain the backuped IV
+  */
+ struct sun8i_ss_flow {
+ 	struct crypto_engine *engine;
+ 	struct completion complete;
+ 	int status;
++	u8 *iv[MAX_SG];
++	u8 *biv;
+ #ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
+ 	unsigned long stat_req;
+ #endif
+@@ -163,28 +167,28 @@ struct sun8i_ss_dev {
+  * @t_src:		list of mapped SGs with their size
+  * @t_dst:		list of mapped SGs with their size
+  * @p_key:		DMA address of the key
+- * @p_iv:		DMA address of the IV
++ * @p_iv:		DMA address of the IVs
++ * @niv:		Number of IVs DMA mapped
+  * @method:		current algorithm for this request
+  * @op_mode:		op_mode for this request
+  * @op_dir:		direction (encrypt vs decrypt) for this request
+  * @flow:		the flow to use for this request
+- * @ivlen:		size of biv
++ * @ivlen:		size of IVs
+  * @keylen:		keylen for this request
+- * @biv:		buffer which contain the IV
+  * @fallback_req:	request struct for invoking the fallback skcipher TFM
+  */
+ struct sun8i_cipher_req_ctx {
+ 	struct sginfo t_src[MAX_SG];
+ 	struct sginfo t_dst[MAX_SG];
+ 	u32 p_key;
+-	u32 p_iv;
++	u32 p_iv[MAX_SG];
++	int niv;
+ 	u32 method;
+ 	u32 op_mode;
+ 	u32 op_dir;
+ 	int flow;
+ 	unsigned int ivlen;
+ 	unsigned int keylen;
+-	void *biv;
+ 	struct skcipher_request fallback_req;   // keep at the end
+ };
+ 
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
+index 11e0278c8631d..6140e49273226 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.c
++++ b/drivers/crypto/ccree/cc_buffer_mgr.c
+@@ -356,12 +356,14 @@ void cc_unmap_cipher_request(struct device *dev, void *ctx,
+ 			      req_ctx->mlli_params.mlli_dma_addr);
+ 	}
+ 
+-	dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL);
+-	dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
+-
+ 	if (src != dst) {
+-		dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_BIDIRECTIONAL);
++		dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_TO_DEVICE);
++		dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_FROM_DEVICE);
+ 		dev_dbg(dev, "Unmapped req->dst=%pK\n", sg_virt(dst));
++		dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
++	} else {
++		dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL);
++		dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
+ 	}
+ }
+ 
+@@ -377,6 +379,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
+ 	u32 dummy = 0;
+ 	int rc = 0;
+ 	u32 mapped_nents = 0;
++	int src_direction = (src != dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL);
+ 
+ 	req_ctx->dma_buf_type = CC_DMA_BUF_DLLI;
+ 	mlli_params->curr_pool = NULL;
+@@ -399,7 +402,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
+ 	}
+ 
+ 	/* Map the src SGL */
+-	rc = cc_map_sg(dev, src, nbytes, DMA_BIDIRECTIONAL, &req_ctx->in_nents,
++	rc = cc_map_sg(dev, src, nbytes, src_direction, &req_ctx->in_nents,
+ 		       LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents);
+ 	if (rc)
+ 		goto cipher_exit;
+@@ -416,7 +419,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
+ 		}
+ 	} else {
+ 		/* Map the dst sg */
+-		rc = cc_map_sg(dev, dst, nbytes, DMA_BIDIRECTIONAL,
++		rc = cc_map_sg(dev, dst, nbytes, DMA_FROM_DEVICE,
+ 			       &req_ctx->out_nents, LLI_MAX_NUM_OF_DATA_ENTRIES,
+ 			       &dummy, &mapped_nents);
+ 		if (rc)
+@@ -456,6 +459,7 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ 	unsigned int hw_iv_size = areq_ctx->hw_iv_size;
+ 	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
++	int src_direction = (req->src != req->dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL);
+ 
+ 	if (areq_ctx->mac_buf_dma_addr) {
+ 		dma_unmap_single(dev, areq_ctx->mac_buf_dma_addr,
+@@ -514,13 +518,11 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 		sg_virt(req->src), areq_ctx->src.nents, areq_ctx->assoc.nents,
+ 		areq_ctx->assoclen, req->cryptlen);
+ 
+-	dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents,
+-		     DMA_BIDIRECTIONAL);
++	dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents, src_direction);
+ 	if (req->src != req->dst) {
+ 		dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n",
+ 			sg_virt(req->dst));
+-		dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents,
+-			     DMA_BIDIRECTIONAL);
++		dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents, DMA_FROM_DEVICE);
+ 	}
+ 	if (drvdata->coherent &&
+ 	    areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT &&
+@@ -843,7 +845,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
+ 		else
+ 			size_for_map -= authsize;
+ 
+-		rc = cc_map_sg(dev, req->dst, size_for_map, DMA_BIDIRECTIONAL,
++		rc = cc_map_sg(dev, req->dst, size_for_map, DMA_FROM_DEVICE,
+ 			       &areq_ctx->dst.mapped_nents,
+ 			       LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes,
+ 			       &dst_mapped_nents);
+@@ -1056,7 +1058,8 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
+ 		size_to_map += authsize;
+ 	}
+ 
+-	rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL,
++	rc = cc_map_sg(dev, req->src, size_to_map,
++		       (req->src != req->dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL),
+ 		       &areq_ctx->src.mapped_nents,
+ 		       (LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES +
+ 			LLI_MAX_NUM_OF_DATA_ENTRIES),
+diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c
+index b4a6ff9dd6d5e..596a8c74e40a5 100644
+--- a/drivers/crypto/marvell/cesa/cipher.c
++++ b/drivers/crypto/marvell/cesa/cipher.c
+@@ -614,7 +614,6 @@ struct skcipher_alg mv_cesa_ecb_des3_ede_alg = {
+ 	.decrypt = mv_cesa_ecb_des3_ede_decrypt,
+ 	.min_keysize = DES3_EDE_KEY_SIZE,
+ 	.max_keysize = DES3_EDE_KEY_SIZE,
+-	.ivsize = DES3_EDE_BLOCK_SIZE,
+ 	.base = {
+ 		.cra_name = "ecb(des3_ede)",
+ 		.cra_driver_name = "mv-ecb-des3-ede",
+diff --git a/drivers/crypto/nx/nx-common-powernv.c b/drivers/crypto/nx/nx-common-powernv.c
+index 13c65deda8e97..8a4f10bb3fcdd 100644
+--- a/drivers/crypto/nx/nx-common-powernv.c
++++ b/drivers/crypto/nx/nx-common-powernv.c
+@@ -827,7 +827,7 @@ static int __init vas_cfg_coproc_info(struct device_node *dn, int chip_id,
+ 		goto err_out;
+ 
+ 	vas_init_rx_win_attr(&rxattr, coproc->ct);
+-	rxattr.rx_fifo = (void *)rx_fifo;
++	rxattr.rx_fifo = rx_fifo;
+ 	rxattr.rx_fifo_size = fifo_size;
+ 	rxattr.lnotify_lpid = lpid;
+ 	rxattr.lnotify_pid = pid;
+diff --git a/drivers/devfreq/rk3399_dmc.c b/drivers/devfreq/rk3399_dmc.c
+index 2e912166a9934..7e52375d9818d 100644
+--- a/drivers/devfreq/rk3399_dmc.c
++++ b/drivers/devfreq/rk3399_dmc.c
+@@ -485,6 +485,8 @@ static int rk3399_dmcfreq_remove(struct platform_device *pdev)
+ {
+ 	struct rk3399_dmcfreq *dmcfreq = dev_get_drvdata(&pdev->dev);
+ 
++	devfreq_event_disable_edev(dmcfreq->edev);
++
+ 	/*
+ 	 * Before remove the opp table we need to unregister the opp notifier.
+ 	 */
+diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
+index 4da88578ed646..ae65eb90afaba 100644
+--- a/drivers/dma/idxd/cdev.c
++++ b/drivers/dma/idxd/cdev.c
+@@ -266,10 +266,16 @@ int idxd_cdev_register(void)
+ 		rc = alloc_chrdev_region(&ictx[i].devt, 0, MINORMASK,
+ 					 ictx[i].name);
+ 		if (rc)
+-			return rc;
++			goto err_free_chrdev_region;
+ 	}
+ 
+ 	return 0;
++
++err_free_chrdev_region:
++	for (i--; i >= 0; i--)
++		unregister_chrdev_region(ictx[i].devt, MINORMASK);
++
++	return rc;
+ }
+ 
+ void idxd_cdev_remove(void)
+diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c
+index fe36738f2dd7e..9d54746c422c6 100644
+--- a/drivers/dma/stm32-mdma.c
++++ b/drivers/dma/stm32-mdma.c
+@@ -40,7 +40,6 @@
+ 					 STM32_MDMA_SHIFT(mask))
+ 
+ #define STM32_MDMA_GISR0		0x0000 /* MDMA Int Status Reg 1 */
+-#define STM32_MDMA_GISR1		0x0004 /* MDMA Int Status Reg 2 */
+ 
+ /* MDMA Channel x interrupt/status register */
+ #define STM32_MDMA_CISR(x)		(0x40 + 0x40 * (x)) /* x = 0..62 */
+@@ -196,7 +195,7 @@
+ 
+ #define STM32_MDMA_MAX_BUF_LEN		128
+ #define STM32_MDMA_MAX_BLOCK_LEN	65536
+-#define STM32_MDMA_MAX_CHANNELS		63
++#define STM32_MDMA_MAX_CHANNELS		32
+ #define STM32_MDMA_MAX_REQUESTS		256
+ #define STM32_MDMA_MAX_BURST		128
+ #define STM32_MDMA_VERY_HIGH_PRIORITY	0x11
+@@ -1345,90 +1344,84 @@ static void stm32_mdma_xfer_end(struct stm32_mdma_chan *chan)
+ static irqreturn_t stm32_mdma_irq_handler(int irq, void *devid)
+ {
+ 	struct stm32_mdma_device *dmadev = devid;
+-	struct stm32_mdma_chan *chan = devid;
+-	u32 reg, id, ien, status, flag;
++	struct stm32_mdma_chan *chan;
++	u32 reg, id, ccr, ien, status;
+ 
+ 	/* Find out which channel generates the interrupt */
+ 	status = readl_relaxed(dmadev->base + STM32_MDMA_GISR0);
+-	if (status) {
+-		id = __ffs(status);
+-	} else {
+-		status = readl_relaxed(dmadev->base + STM32_MDMA_GISR1);
+-		if (!status) {
+-			dev_dbg(mdma2dev(dmadev), "spurious it\n");
+-			return IRQ_NONE;
+-		}
+-		id = __ffs(status);
+-		/*
+-		 * As GISR0 provides status for channel id from 0 to 31,
+-		 * so GISR1 provides status for channel id from 32 to 62
+-		 */
+-		id += 32;
++	if (!status) {
++		dev_dbg(mdma2dev(dmadev), "spurious it\n");
++		return IRQ_NONE;
+ 	}
++	id = __ffs(status);
+ 
+ 	chan = &dmadev->chan[id];
+ 	if (!chan) {
+-		dev_dbg(mdma2dev(dmadev), "MDMA channel not initialized\n");
+-		goto exit;
++		dev_warn(mdma2dev(dmadev), "MDMA channel not initialized\n");
++		return IRQ_NONE;
+ 	}
+ 
+ 	/* Handle interrupt for the channel */
+ 	spin_lock(&chan->vchan.lock);
+-	status = stm32_mdma_read(dmadev, STM32_MDMA_CISR(chan->id));
+-	ien = stm32_mdma_read(dmadev, STM32_MDMA_CCR(chan->id));
+-	ien &= STM32_MDMA_CCR_IRQ_MASK;
+-	ien >>= 1;
++	status = stm32_mdma_read(dmadev, STM32_MDMA_CISR(id));
++	/* Mask Channel ReQuest Active bit which can be set in case of MEM2MEM */
++	status &= ~STM32_MDMA_CISR_CRQA;
++	ccr = stm32_mdma_read(dmadev, STM32_MDMA_CCR(id));
++	ien = (ccr & STM32_MDMA_CCR_IRQ_MASK) >> 1;
+ 
+ 	if (!(status & ien)) {
+ 		spin_unlock(&chan->vchan.lock);
+-		dev_dbg(chan2dev(chan),
+-			"spurious it (status=0x%04x, ien=0x%04x)\n",
+-			status, ien);
++		dev_warn(chan2dev(chan),
++			 "spurious it (status=0x%04x, ien=0x%04x)\n",
++			 status, ien);
+ 		return IRQ_NONE;
+ 	}
+ 
+-	flag = __ffs(status & ien);
+-	reg = STM32_MDMA_CIFCR(chan->id);
++	reg = STM32_MDMA_CIFCR(id);
+ 
+-	switch (1 << flag) {
+-	case STM32_MDMA_CISR_TEIF:
+-		id = chan->id;
+-		status = readl_relaxed(dmadev->base + STM32_MDMA_CESR(id));
+-		dev_err(chan2dev(chan), "Transfer Err: stat=0x%08x\n", status);
++	if (status & STM32_MDMA_CISR_TEIF) {
++		dev_err(chan2dev(chan), "Transfer Err: stat=0x%08x\n",
++			readl_relaxed(dmadev->base + STM32_MDMA_CESR(id)));
+ 		stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CTEIF);
+-		break;
++		status &= ~STM32_MDMA_CISR_TEIF;
++	}
+ 
+-	case STM32_MDMA_CISR_CTCIF:
++	if (status & STM32_MDMA_CISR_CTCIF) {
+ 		stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CCTCIF);
++		status &= ~STM32_MDMA_CISR_CTCIF;
+ 		stm32_mdma_xfer_end(chan);
+-		break;
++	}
+ 
+-	case STM32_MDMA_CISR_BRTIF:
++	if (status & STM32_MDMA_CISR_BRTIF) {
+ 		stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CBRTIF);
+-		break;
++		status &= ~STM32_MDMA_CISR_BRTIF;
++	}
+ 
+-	case STM32_MDMA_CISR_BTIF:
++	if (status & STM32_MDMA_CISR_BTIF) {
+ 		stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CBTIF);
++		status &= ~STM32_MDMA_CISR_BTIF;
+ 		chan->curr_hwdesc++;
+ 		if (chan->desc && chan->desc->cyclic) {
+ 			if (chan->curr_hwdesc == chan->desc->count)
+ 				chan->curr_hwdesc = 0;
+ 			vchan_cyclic_callback(&chan->desc->vdesc);
+ 		}
+-		break;
++	}
+ 
+-	case STM32_MDMA_CISR_TCIF:
++	if (status & STM32_MDMA_CISR_TCIF) {
+ 		stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CLTCIF);
+-		break;
++		status &= ~STM32_MDMA_CISR_TCIF;
++	}
+ 
+-	default:
+-		dev_err(chan2dev(chan), "it %d unhandled (status=0x%04x)\n",
+-			1 << flag, status);
++	if (status) {
++		stm32_mdma_set_bits(dmadev, reg, status);
++		dev_err(chan2dev(chan), "DMA error: status=0x%08x\n", status);
++		if (!(ccr & STM32_MDMA_CCR_EN))
++			dev_err(chan2dev(chan), "chan disabled by HW\n");
+ 	}
+ 
+ 	spin_unlock(&chan->vchan.lock);
+ 
+-exit:
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/edac/dmc520_edac.c b/drivers/edac/dmc520_edac.c
+index b8a7d9594afd4..1fa5ca57e9ec1 100644
+--- a/drivers/edac/dmc520_edac.c
++++ b/drivers/edac/dmc520_edac.c
+@@ -489,7 +489,7 @@ static int dmc520_edac_probe(struct platform_device *pdev)
+ 	dev = &pdev->dev;
+ 
+ 	for (idx = 0; idx < NUMBER_OF_IRQS; idx++) {
+-		irq = platform_get_irq_byname(pdev, dmc520_irq_configs[idx].name);
++		irq = platform_get_irq_byname_optional(pdev, dmc520_irq_configs[idx].name);
+ 		irqs[idx] = irq;
+ 		masks[idx] = dmc520_irq_configs[idx].mask;
+ 		if (irq >= 0) {
+diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c
+index 017e5d8bd869a..e51d28ba2833d 100644
+--- a/drivers/firmware/arm_scmi/base.c
++++ b/drivers/firmware/arm_scmi/base.c
+@@ -188,7 +188,7 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
+ 			break;
+ 
+ 		loop_num_ret = le32_to_cpu(*num_ret);
+-		if (tot_num_ret + loop_num_ret > MAX_PROTOCOLS_IMP) {
++		if (loop_num_ret > MAX_PROTOCOLS_IMP - tot_num_ret) {
+ 			dev_err(dev, "No. of Protocol > MAX_PROTOCOLS_IMP");
+ 			break;
+ 		}
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 921a99578ff0e..01424af654db7 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -933,6 +933,11 @@ static int of_gpiochip_add_pin_range(struct gpio_chip *chip)
+ 	if (!np)
+ 		return 0;
+ 
++	if (!of_property_read_bool(np, "gpio-ranges") &&
++	    chip->of_gpio_ranges_fallback) {
++		return chip->of_gpio_ranges_fallback(chip, np);
++	}
++
+ 	group_names = of_find_property(np, group_names_propname, NULL);
+ 
+ 	for (;; index++) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 867fcee6b0d3b..ffd8f5601e28a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -116,7 +116,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
+ 	int ret;
+ 
+ 	if (cs->in.num_chunks == 0)
+-		return 0;
++		return -EINVAL;
+ 
+ 	chunk_array = kmalloc_array(cs->in.num_chunks, sizeof(uint64_t), GFP_KERNEL);
+ 	if (!chunk_array)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+index b313ce4c3e97d..30005ed8156f0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+@@ -625,8 +625,7 @@ int amdgpu_ucode_create_bo(struct amdgpu_device *adev)
+ 
+ void amdgpu_ucode_free_bo(struct amdgpu_device *adev)
+ {
+-	if (adev->firmware.load_type != AMDGPU_FW_LOAD_DIRECT)
+-		amdgpu_bo_free_kernel(&adev->firmware.fw_buf,
++	amdgpu_bo_free_kernel(&adev->firmware.fw_buf,
+ 		&adev->firmware.fw_buf_mc,
+ 		&adev->firmware.fw_buf_ptr);
+ }
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
+index 4b3faaccecb94..c8a5a5698edd9 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
+@@ -1609,19 +1609,7 @@ static int kv_update_samu_dpm(struct amdgpu_device *adev, bool gate)
+ 
+ static u8 kv_get_acp_boot_level(struct amdgpu_device *adev)
+ {
+-	u8 i;
+-	struct amdgpu_clock_voltage_dependency_table *table =
+-		&adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table;
+-
+-	for (i = 0; i < table->count; i++) {
+-		if (table->entries[i].clk >= 0) /* XXX */
+-			break;
+-	}
+-
+-	if (i >= table->count)
+-		i = table->count - 1;
+-
+-	return i;
++	return 0;
+ }
+ 
+ static void kv_update_acp_boot_level(struct amdgpu_device *adev)
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+index a1e7ba5995c57..d6544a6dabc7c 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+@@ -7250,17 +7250,15 @@ static int si_parse_power_table(struct amdgpu_device *adev)
+ 	if (!adev->pm.dpm.ps)
+ 		return -ENOMEM;
+ 	power_state_offset = (u8 *)state_array->states;
+-	for (i = 0; i < state_array->ucNumEntries; i++) {
++	for (adev->pm.dpm.num_ps = 0, i = 0; i < state_array->ucNumEntries; i++) {
+ 		u8 *idx;
+ 		power_state = (union pplib_power_state *)power_state_offset;
+ 		non_clock_array_index = power_state->v2.nonClockInfoIndex;
+ 		non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
+ 			&non_clock_info_array->nonClockInfo[non_clock_array_index];
+ 		ps = kzalloc(sizeof(struct  si_ps), GFP_KERNEL);
+-		if (ps == NULL) {
+-			kfree(adev->pm.dpm.ps);
++		if (ps == NULL)
+ 			return -ENOMEM;
+-		}
+ 		adev->pm.dpm.ps[i].ps_priv = ps;
+ 		si_parse_pplib_non_clock_info(adev, &adev->pm.dpm.ps[i],
+ 					      non_clock_info,
+@@ -7282,8 +7280,8 @@ static int si_parse_power_table(struct amdgpu_device *adev)
+ 			k++;
+ 		}
+ 		power_state_offset += 2 + power_state->v2.ucNumDPMLevels;
++		adev->pm.dpm.num_ps++;
+ 	}
+-	adev->pm.dpm.num_ps = state_array->ucNumEntries;
+ 
+ 	/* fill in the vce power states */
+ 	for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c
+index 98e915e325ddf..bc3f42e915e91 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c
+@@ -264,6 +264,10 @@ static int komeda_plane_add(struct komeda_kms_dev *kms,
+ 
+ 	formats = komeda_get_layer_fourcc_list(&mdev->fmt_tbl,
+ 					       layer->layer_type, &n_formats);
++	if (!formats) {
++		kfree(kplane);
++		return -ENOMEM;
++	}
+ 
+ 	err = drm_universal_plane_init(&kms->base, plane,
+ 			get_possible_crtcs(kms, c->pipeline),
+@@ -274,8 +278,10 @@ static int komeda_plane_add(struct komeda_kms_dev *kms,
+ 
+ 	komeda_put_fourcc_list(formats);
+ 
+-	if (err)
+-		goto cleanup;
++	if (err) {
++		kfree(kplane);
++		return err;
++	}
+ 
+ 	drm_plane_helper_add(plane, &komeda_plane_helper_funcs);
+ 
+diff --git a/drivers/gpu/drm/arm/malidp_crtc.c b/drivers/gpu/drm/arm/malidp_crtc.c
+index 587d94798f5c2..af729094260c4 100644
+--- a/drivers/gpu/drm/arm/malidp_crtc.c
++++ b/drivers/gpu/drm/arm/malidp_crtc.c
+@@ -483,7 +483,10 @@ static void malidp_crtc_reset(struct drm_crtc *crtc)
+ 	if (crtc->state)
+ 		malidp_crtc_destroy_state(crtc, crtc->state);
+ 
+-	__drm_atomic_helper_crtc_reset(crtc, &state->base);
++	if (state)
++		__drm_atomic_helper_crtc_reset(crtc, &state->base);
++	else
++		__drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+ 
+ static int malidp_crtc_enable_vblank(struct drm_crtc *crtc)
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index c6f059be4b897..aca2f14f04c2a 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -1306,6 +1306,7 @@ static int adv7511_probe(struct i2c_client *i2c, const struct i2c_device_id *id)
+ 	return 0;
+ 
+ err_unregister_cec:
++	cec_unregister_adapter(adv7511->cec_adap);
+ 	i2c_unregister_device(adv7511->i2c_cec);
+ 	if (adv7511->cec_clk)
+ 		clk_disable_unprepare(adv7511->cec_clk);
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+index aa1bb86293fdf..9755672caf1a5 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+@@ -1639,8 +1639,19 @@ static ssize_t analogix_dpaux_transfer(struct drm_dp_aux *aux,
+ 				       struct drm_dp_aux_msg *msg)
+ {
+ 	struct analogix_dp_device *dp = to_dp(aux);
++	int ret;
++
++	pm_runtime_get_sync(dp->dev);
++
++	ret = analogix_dp_detect_hpd(dp);
++	if (ret)
++		goto out;
+ 
+-	return analogix_dp_transfer(dp, msg);
++	ret = analogix_dp_transfer(dp, msg);
++out:
++	pm_runtime_put(dp->dev);
++
++	return ret;
+ }
+ 
+ struct analogix_dp_device *
+@@ -1705,8 +1716,10 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 
+ 	dp->reg_base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(dp->reg_base))
+-		return ERR_CAST(dp->reg_base);
++	if (IS_ERR(dp->reg_base)) {
++		ret = PTR_ERR(dp->reg_base);
++		goto err_disable_clk;
++	}
+ 
+ 	dp->force_hpd = of_property_read_bool(dev->of_node, "force-hpd");
+ 
+@@ -1718,7 +1731,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 	if (IS_ERR(dp->hpd_gpiod)) {
+ 		dev_err(dev, "error getting HDP GPIO: %ld\n",
+ 			PTR_ERR(dp->hpd_gpiod));
+-		return ERR_CAST(dp->hpd_gpiod);
++		ret = PTR_ERR(dp->hpd_gpiod);
++		goto err_disable_clk;
+ 	}
+ 
+ 	if (dp->hpd_gpiod) {
+@@ -1738,7 +1752,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 
+ 	if (dp->irq == -ENXIO) {
+ 		dev_err(&pdev->dev, "failed to get irq\n");
+-		return ERR_PTR(-ENODEV);
++		ret = -ENODEV;
++		goto err_disable_clk;
+ 	}
+ 
+ 	ret = devm_request_threaded_irq(&pdev->dev, dp->irq,
+@@ -1747,11 +1762,15 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 					irq_flags, "analogix-dp", dp);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to request irq\n");
+-		return ERR_PTR(ret);
++		goto err_disable_clk;
+ 	}
+ 	disable_irq(dp->irq);
+ 
+ 	return dp;
++
++err_disable_clk:
++	clk_disable_unprepare(dp->clock);
++	return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_probe);
+ 
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 862e173d34315..4334e466b4e05 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -1995,9 +1995,6 @@ struct edid *drm_do_get_edid(struct drm_connector *connector,
+ 
+ 		connector_bad_edid(connector, edid, edid[0x7e] + 1);
+ 
+-		edid[EDID_LENGTH-1] += edid[0x7e] - valid_extensions;
+-		edid[0x7e] = valid_extensions;
+-
+ 		new = kmalloc_array(valid_extensions + 1, EDID_LENGTH,
+ 				    GFP_KERNEL);
+ 		if (!new)
+@@ -2014,6 +2011,9 @@ struct edid *drm_do_get_edid(struct drm_connector *connector,
+ 			base += EDID_LENGTH;
+ 		}
+ 
++		new[EDID_LENGTH - 1] += new[0x7e] - valid_extensions;
++		new[0x7e] = valid_extensions;
++
+ 		kfree(edid);
+ 		edid = new;
+ 	}
+diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
+index affe1cfed0098..24f643982903a 100644
+--- a/drivers/gpu/drm/drm_plane.c
++++ b/drivers/gpu/drm/drm_plane.c
+@@ -186,6 +186,13 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane,
+ 	if (WARN_ON(config->num_total_plane >= 32))
+ 		return -EINVAL;
+ 
++	/*
++	 * First driver to need more than 64 formats needs to fix this. Each
++	 * format is encoded as a bit and the current code only supports a u64.
++	 */
++	if (WARN_ON(format_count > 64))
++		return -EINVAL;
++
+ 	WARN_ON(drm_drv_uses_atomic_modeset(dev) &&
+ 		(!funcs->atomic_destroy_state ||
+ 		 !funcs->atomic_duplicate_state));
+@@ -207,13 +214,6 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane,
+ 		return -ENOMEM;
+ 	}
+ 
+-	/*
+-	 * First driver to need more than 64 formats needs to fix this. Each
+-	 * format is encoded as a bit and the current code only supports a u64.
+-	 */
+-	if (WARN_ON(format_count > 64))
+-		return -EINVAL;
+-
+ 	if (format_modifiers) {
+ 		const uint64_t *temp_modifiers = format_modifiers;
+ 
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+index 984569a59a90a..9ba2fe48228f1 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+@@ -282,6 +282,12 @@ void etnaviv_iommu_unmap_gem(struct etnaviv_iommu_context *context,
+ 
+ 	mutex_lock(&context->lock);
+ 
++	/* Bail if the mapping has been reaped by another thread */
++	if (!mapping->context) {
++		mutex_unlock(&context->lock);
++		return;
++	}
++
+ 	/* If the vram node is on the mm, unmap and remove the node */
+ 	if (mapping->vram_node.mm == &context->mm)
+ 		etnaviv_iommu_remove_mapping(context, mapping);
+diff --git a/drivers/gpu/drm/gma500/psb_intel_display.c b/drivers/gpu/drm/gma500/psb_intel_display.c
+index 531c5485be172..bd3df5adb96f2 100644
+--- a/drivers/gpu/drm/gma500/psb_intel_display.c
++++ b/drivers/gpu/drm/gma500/psb_intel_display.c
+@@ -536,14 +536,15 @@ void psb_intel_crtc_init(struct drm_device *dev, int pipe,
+ 
+ struct drm_crtc *psb_intel_get_crtc_from_pipe(struct drm_device *dev, int pipe)
+ {
+-	struct drm_crtc *crtc = NULL;
++	struct drm_crtc *crtc;
+ 
+ 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+ 		struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
++
+ 		if (gma_crtc->pipe == pipe)
+-			break;
++			return crtc;
+ 	}
+-	return crtc;
++	return NULL;
+ }
+ 
+ int gma_connector_clones(struct drm_device *dev, int type_mask)
+diff --git a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+index eed037ec0b297..8777d35ac7a73 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
++++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+@@ -121,9 +121,25 @@ struct i2c_adapter_lookup {
+ #define  ICL_GPIO_DDPA_CTRLCLK_2	8
+ #define  ICL_GPIO_DDPA_CTRLDATA_2	9
+ 
+-static enum port intel_dsi_seq_port_to_port(u8 port)
++static enum port intel_dsi_seq_port_to_port(struct intel_dsi *intel_dsi,
++					    u8 seq_port)
+ {
+-	return port ? PORT_C : PORT_A;
++	/*
++	 * If single link DSI is being used on any port, the VBT sequence block
++	 * send packet apparently always has 0 for the port. Just use the port
++	 * we have configured, and ignore the sequence block port.
++	 */
++	if (hweight8(intel_dsi->ports) == 1)
++		return ffs(intel_dsi->ports) - 1;
++
++	if (seq_port) {
++		if (intel_dsi->ports & PORT_B)
++			return PORT_B;
++		else if (intel_dsi->ports & PORT_C)
++			return PORT_C;
++	}
++
++	return PORT_A;
+ }
+ 
+ static const u8 *mipi_exec_send_packet(struct intel_dsi *intel_dsi,
+@@ -145,15 +161,10 @@ static const u8 *mipi_exec_send_packet(struct intel_dsi *intel_dsi,
+ 
+ 	seq_port = (flags >> MIPI_PORT_SHIFT) & 3;
+ 
+-	/* For DSI single link on Port A & C, the seq_port value which is
+-	 * parsed from Sequence Block#53 of VBT has been set to 0
+-	 * Now, read/write of packets for the DSI single link on Port A and
+-	 * Port C will based on the DVO port from VBT block 2.
+-	 */
+-	if (intel_dsi->ports == (1 << PORT_C))
+-		port = PORT_C;
+-	else
+-		port = intel_dsi_seq_port_to_port(seq_port);
++	port = intel_dsi_seq_port_to_port(intel_dsi, seq_port);
++
++	if (drm_WARN_ON(&dev_priv->drm, !intel_dsi->dsi_hosts[port]))
++		goto out;
+ 
+ 	dsi_device = intel_dsi->dsi_hosts[port]->device;
+ 	if (!dsi_device) {
+diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
+index 74e66dea57086..5e670aace59f3 100644
+--- a/drivers/gpu/drm/i915/i915_perf.c
++++ b/drivers/gpu/drm/i915/i915_perf.c
+@@ -3964,8 +3964,8 @@ addr_err:
+ 	return ERR_PTR(err);
+ }
+ 
+-static ssize_t show_dynamic_id(struct device *dev,
+-			       struct device_attribute *attr,
++static ssize_t show_dynamic_id(struct kobject *kobj,
++			       struct kobj_attribute *attr,
+ 			       char *buf)
+ {
+ 	struct i915_oa_config *oa_config =
+diff --git a/drivers/gpu/drm/i915/i915_perf_types.h b/drivers/gpu/drm/i915/i915_perf_types.h
+index a36a455ae3369..534951ff38bb3 100644
+--- a/drivers/gpu/drm/i915/i915_perf_types.h
++++ b/drivers/gpu/drm/i915/i915_perf_types.h
+@@ -54,7 +54,7 @@ struct i915_oa_config {
+ 
+ 	struct attribute_group sysfs_metric;
+ 	struct attribute *attrs[2];
+-	struct device_attribute sysfs_metric_id;
++	struct kobj_attribute sysfs_metric_id;
+ 
+ 	struct kref ref;
+ 	struct rcu_head rcu;
+diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+index b6bb5fc7d183e..e34718cf5c2e9 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
++++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+@@ -10,6 +10,7 @@
+ #include <linux/clk.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/regmap.h>
+@@ -68,6 +69,21 @@ struct ingenic_drm {
+ 
+ 	bool panel_is_sharp;
+ 	bool no_vblank;
++
++	/*
++	 * clk_mutex is used to synchronize the pixel clock rate update with
++	 * the VBLANK. When the pixel clock's parent clock needs to be updated,
++	 * clock_nb's notifier function will lock the mutex, then wait until the
++	 * next VBLANK. At that point, the parent clock's rate can be updated,
++	 * and the mutex is then unlocked. If an atomic commit happens in the
++	 * meantime, it will lock on the mutex, effectively waiting until the
++	 * clock update process finishes. Finally, the pixel clock's rate will
++	 * be recomputed when the mutex has been released, in the pending atomic
++	 * commit, or a future one.
++	 */
++	struct mutex clk_mutex;
++	bool update_clk_rate;
++	struct notifier_block clock_nb;
+ };
+ 
+ static const u32 ingenic_drm_primary_formats[] = {
+@@ -111,6 +127,29 @@ static inline struct ingenic_drm *drm_crtc_get_priv(struct drm_crtc *crtc)
+ 	return container_of(crtc, struct ingenic_drm, crtc);
+ }
+ 
++static inline struct ingenic_drm *drm_nb_get_priv(struct notifier_block *nb)
++{
++	return container_of(nb, struct ingenic_drm, clock_nb);
++}
++
++static int ingenic_drm_update_pixclk(struct notifier_block *nb,
++				     unsigned long action,
++				     void *data)
++{
++	struct ingenic_drm *priv = drm_nb_get_priv(nb);
++
++	switch (action) {
++	case PRE_RATE_CHANGE:
++		mutex_lock(&priv->clk_mutex);
++		priv->update_clk_rate = true;
++		drm_crtc_wait_one_vblank(&priv->crtc);
++		return NOTIFY_OK;
++	default:
++		mutex_unlock(&priv->clk_mutex);
++		return NOTIFY_OK;
++	}
++}
++
+ static void ingenic_drm_crtc_atomic_enable(struct drm_crtc *crtc,
+ 					   struct drm_crtc_state *state)
+ {
+@@ -276,8 +315,14 @@ static void ingenic_drm_crtc_atomic_flush(struct drm_crtc *crtc,
+ 
+ 	if (drm_atomic_crtc_needs_modeset(state)) {
+ 		ingenic_drm_crtc_update_timings(priv, &state->mode);
++		priv->update_clk_rate = true;
++	}
+ 
++	if (priv->update_clk_rate) {
++		mutex_lock(&priv->clk_mutex);
+ 		clk_set_rate(priv->pix_clk, state->adjusted_mode.clock * 1000);
++		priv->update_clk_rate = false;
++		mutex_unlock(&priv->clk_mutex);
+ 	}
+ 
+ 	if (event) {
+@@ -936,16 +981,28 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
+ 	if (soc_info->has_osd)
+ 		regmap_write(priv->map, JZ_REG_LCD_OSDC, JZ_LCD_OSDC_OSDEN);
+ 
++	mutex_init(&priv->clk_mutex);
++	priv->clock_nb.notifier_call = ingenic_drm_update_pixclk;
++
++	parent_clk = clk_get_parent(priv->pix_clk);
++	ret = clk_notifier_register(parent_clk, &priv->clock_nb);
++	if (ret) {
++		dev_err(dev, "Unable to register clock notifier\n");
++		goto err_devclk_disable;
++	}
++
+ 	ret = drm_dev_register(drm, 0);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register DRM driver\n");
+-		goto err_devclk_disable;
++		goto err_clk_notifier_unregister;
+ 	}
+ 
+ 	drm_fbdev_generic_setup(drm, 32);
+ 
+ 	return 0;
+ 
++err_clk_notifier_unregister:
++	clk_notifier_unregister(parent_clk, &priv->clock_nb);
+ err_devclk_disable:
+ 	if (priv->lcd_clk)
+ 		clk_disable_unprepare(priv->lcd_clk);
+@@ -967,7 +1024,9 @@ static int compare_of(struct device *dev, void *data)
+ static void ingenic_drm_unbind(struct device *dev)
+ {
+ 	struct ingenic_drm *priv = dev_get_drvdata(dev);
++	struct clk *parent_clk = clk_get_parent(priv->pix_clk);
+ 
++	clk_notifier_unregister(parent_clk, &priv->clock_nb);
+ 	if (priv->lcd_clk)
+ 		clk_disable_unprepare(priv->lcd_clk);
+ 	clk_disable_unprepare(priv->pix_clk);
+diff --git a/drivers/gpu/drm/mediatek/mtk_cec.c b/drivers/gpu/drm/mediatek/mtk_cec.c
+index cb29b649fcdba..12bf937694977 100644
+--- a/drivers/gpu/drm/mediatek/mtk_cec.c
++++ b/drivers/gpu/drm/mediatek/mtk_cec.c
+@@ -84,7 +84,7 @@ static void mtk_cec_mask(struct mtk_cec *cec, unsigned int offset,
+ 	u32 tmp = readl(cec->regs + offset) & ~mask;
+ 
+ 	tmp |= val & mask;
+-	writel(val, cec->regs + offset);
++	writel(tmp, cec->regs + offset);
+ }
+ 
+ void mtk_cec_set_hpd_event(struct device *dev,
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 39563daff4a0b..dffc133b8b1cc 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1308,6 +1308,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
+ 	BUG_ON(!node);
+ 
+ 	ret = a6xx_gmu_init(a6xx_gpu, node);
++	of_node_put(node);
+ 	if (ret) {
+ 		a6xx_destroy(&(a6xx_gpu->base.base));
+ 		return ERR_PTR(ret);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+index 6f0f54588124b..108882bbd2b8b 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+@@ -146,6 +146,7 @@ static void dpu_hw_intf_setup_timing_engine(struct dpu_hw_intf *ctx,
+ 		active_v_end = active_v_start + (p->yres * hsync_period) - 1;
+ 
+ 		display_v_start += p->hsync_pulse_width + p->h_back_porch;
++		display_v_end   -= p->h_front_porch; 
+ 
+ 		active_hctl = (active_h_end << 16) | active_h_start;
+ 		display_hctl = active_hctl;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index 08e082d0443af..7503f093f3b64 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -678,8 +678,10 @@ static void _dpu_kms_hw_destroy(struct dpu_kms *dpu_kms)
+ 		for (i = 0; i < dpu_kms->catalog->vbif_count; i++) {
+ 			u32 vbif_idx = dpu_kms->catalog->vbif[i].id;
+ 
+-			if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx])
++			if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx]) {
+ 				dpu_hw_vbif_destroy(dpu_kms->hw_vbif[vbif_idx]);
++				dpu_kms->hw_vbif[vbif_idx] = NULL;
++			}
+ 		}
+ 	}
+ 
+@@ -937,7 +939,9 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
+ 
+ 	dpu_kms_parse_data_bus_icc_path(dpu_kms);
+ 
+-	pm_runtime_get_sync(&dpu_kms->pdev->dev);
++	rc = pm_runtime_resume_and_get(&dpu_kms->pdev->dev);
++	if (rc < 0)
++		goto error;
+ 
+ 	dpu_kms->core_rev = readl_relaxed(dpu_kms->mmio + 0x0);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+index a8fa084dfa494..ff4f207cbdeaf 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+@@ -608,9 +608,15 @@ int mdp5_crtc_setup_pipeline(struct drm_crtc *crtc,
+ 		if (ret)
+ 			return ret;
+ 
+-		mdp5_mixer_release(new_crtc_state->state, old_mixer);
++		ret = mdp5_mixer_release(new_crtc_state->state, old_mixer);
++		if (ret)
++			return ret;
++
+ 		if (old_r_mixer) {
+-			mdp5_mixer_release(new_crtc_state->state, old_r_mixer);
++			ret = mdp5_mixer_release(new_crtc_state->state, old_r_mixer);
++			if (ret)
++				return ret;
++
+ 			if (!need_right_mixer)
+ 				pipeline->r_mixer = NULL;
+ 		}
+@@ -977,8 +983,10 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc,
+ 
+ 	ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace,
+ 			&mdp5_crtc->cursor.iova);
+-	if (ret)
++	if (ret) {
++		drm_gem_object_put(cursor_bo);
+ 		return -EINVAL;
++	}
+ 
+ 	pm_runtime_get_sync(&pdev->dev);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+index e193865ce9a26..9baaaef706ab5 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+@@ -598,9 +598,9 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev)
+ 	pdev = mdp5_kms->pdev;
+ 
+ 	irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+-	if (irq < 0) {
+-		ret = irq;
+-		DRM_DEV_ERROR(&pdev->dev, "failed to get irq: %d\n", ret);
++	if (!irq) {
++		ret = -EINVAL;
++		DRM_DEV_ERROR(&pdev->dev, "failed to get irq\n");
+ 		goto fail;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c
+index 954db683ae444..2536def2a0005 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c
+@@ -116,21 +116,28 @@ int mdp5_mixer_assign(struct drm_atomic_state *s, struct drm_crtc *crtc,
+ 	return 0;
+ }
+ 
+-void mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer)
++int mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer)
+ {
+ 	struct mdp5_global_state *global_state = mdp5_get_global_state(s);
+-	struct mdp5_hw_mixer_state *new_state = &global_state->hwmixer;
++	struct mdp5_hw_mixer_state *new_state;
+ 
+ 	if (!mixer)
+-		return;
++		return 0;
++
++	if (IS_ERR(global_state))
++		return PTR_ERR(global_state);
++
++	new_state = &global_state->hwmixer;
+ 
+ 	if (WARN_ON(!new_state->hwmixer_to_crtc[mixer->idx]))
+-		return;
++		return -EINVAL;
+ 
+ 	DBG("%s: release from crtc %s", mixer->name,
+ 	    new_state->hwmixer_to_crtc[mixer->idx]->name);
+ 
+ 	new_state->hwmixer_to_crtc[mixer->idx] = NULL;
++
++	return 0;
+ }
+ 
+ void mdp5_mixer_destroy(struct mdp5_hw_mixer *mixer)
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h
+index 43c9ba43ce185..545ee223b9d74 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h
+@@ -30,7 +30,7 @@ void mdp5_mixer_destroy(struct mdp5_hw_mixer *lm);
+ int mdp5_mixer_assign(struct drm_atomic_state *s, struct drm_crtc *crtc,
+ 		      uint32_t caps, struct mdp5_hw_mixer **mixer,
+ 		      struct mdp5_hw_mixer **r_mixer);
+-void mdp5_mixer_release(struct drm_atomic_state *s,
+-			struct mdp5_hw_mixer *mixer);
++int mdp5_mixer_release(struct drm_atomic_state *s,
++		       struct mdp5_hw_mixer *mixer);
+ 
+ #endif /* __MDP5_LM_H__ */
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
+index ba6695963aa66..a4f5cb90f3e80 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
+@@ -119,18 +119,23 @@ int mdp5_pipe_assign(struct drm_atomic_state *s, struct drm_plane *plane,
+ 	return 0;
+ }
+ 
+-void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe)
++int mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe)
+ {
+ 	struct msm_drm_private *priv = s->dev->dev_private;
+ 	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms));
+ 	struct mdp5_global_state *state = mdp5_get_global_state(s);
+-	struct mdp5_hw_pipe_state *new_state = &state->hwpipe;
++	struct mdp5_hw_pipe_state *new_state;
+ 
+ 	if (!hwpipe)
+-		return;
++		return 0;
++
++	if (IS_ERR(state))
++		return PTR_ERR(state);
++
++	new_state = &state->hwpipe;
+ 
+ 	if (WARN_ON(!new_state->hwpipe_to_plane[hwpipe->idx]))
+-		return;
++		return -EINVAL;
+ 
+ 	DBG("%s: release from plane %s", hwpipe->name,
+ 		new_state->hwpipe_to_plane[hwpipe->idx]->name);
+@@ -141,6 +146,8 @@ void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe)
+ 	}
+ 
+ 	new_state->hwpipe_to_plane[hwpipe->idx] = NULL;
++
++	return 0;
+ }
+ 
+ void mdp5_pipe_destroy(struct mdp5_hw_pipe *hwpipe)
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h
+index 9b26d0761bd4f..cca67938cab21 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h
+@@ -37,7 +37,7 @@ int mdp5_pipe_assign(struct drm_atomic_state *s, struct drm_plane *plane,
+ 		     uint32_t caps, uint32_t blkcfg,
+ 		     struct mdp5_hw_pipe **hwpipe,
+ 		     struct mdp5_hw_pipe **r_hwpipe);
+-void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe);
++int mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe);
+ 
+ struct mdp5_hw_pipe *mdp5_pipe_init(enum mdp5_pipe pipe,
+ 		uint32_t reg_offset, uint32_t caps);
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+index da07993339702..0dc23c86747e8 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+@@ -393,12 +393,24 @@ static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state,
+ 				mdp5_state->r_hwpipe = NULL;
+ 
+ 
+-			mdp5_pipe_release(state->state, old_hwpipe);
+-			mdp5_pipe_release(state->state, old_right_hwpipe);
++			ret = mdp5_pipe_release(state->state, old_hwpipe);
++			if (ret)
++				return ret;
++
++			ret = mdp5_pipe_release(state->state, old_right_hwpipe);
++			if (ret)
++				return ret;
++
+ 		}
+ 	} else {
+-		mdp5_pipe_release(state->state, mdp5_state->hwpipe);
+-		mdp5_pipe_release(state->state, mdp5_state->r_hwpipe);
++		ret = mdp5_pipe_release(state->state, mdp5_state->hwpipe);
++		if (ret)
++			return ret;
++
++		ret = mdp5_pipe_release(state->state, mdp5_state->r_hwpipe);
++		if (ret)
++			return ret;
++
+ 		mdp5_state->hwpipe = mdp5_state->r_hwpipe = NULL;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 6cd6934c8c9f1..ebd05678a27ba 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -111,6 +111,7 @@ struct dp_display_private {
+ 	u32 hpd_state;
+ 	u32 event_pndx;
+ 	u32 event_gndx;
++	struct task_struct *ev_tsk;
+ 	struct dp_event event_list[DP_EVENT_Q_MAX];
+ 	spinlock_t event_lock;
+ 
+@@ -194,6 +195,8 @@ void dp_display_signal_audio_complete(struct msm_dp *dp_display)
+ 	complete_all(&dp->audio_comp);
+ }
+ 
++static int dp_hpd_event_thread_start(struct dp_display_private *dp_priv);
++
+ static int dp_display_bind(struct device *dev, struct device *master,
+ 			   void *data)
+ {
+@@ -234,9 +237,18 @@ static int dp_display_bind(struct device *dev, struct device *master,
+ 	}
+ 
+ 	rc = dp_register_audio_driver(dev, dp->audio);
+-	if (rc)
++	if (rc) {
+ 		DRM_ERROR("Audio registration Dp failed\n");
++		goto end;
++	}
+ 
++	rc = dp_hpd_event_thread_start(dp);
++	if (rc) {
++		DRM_ERROR("Event thread create failed\n");
++		goto end;
++	}
++
++	return 0;
+ end:
+ 	return rc;
+ }
+@@ -255,6 +267,11 @@ static void dp_display_unbind(struct device *dev, struct device *master,
+ 		return;
+ 	}
+ 
++	/* disable all HPD interrupts */
++	dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false);
++
++	kthread_stop(dp->ev_tsk);
++
+ 	dp_power_client_deinit(dp->power);
+ 	dp_aux_unregister(dp->aux);
+ 	priv->dp = NULL;
+@@ -984,12 +1001,17 @@ static int hpd_event_thread(void *data)
+ 	while (1) {
+ 		if (timeout_mode) {
+ 			wait_event_timeout(dp_priv->event_q,
+-				(dp_priv->event_pndx == dp_priv->event_gndx),
+-						EVENT_TIMEOUT);
++				(dp_priv->event_pndx == dp_priv->event_gndx) ||
++					kthread_should_stop(), EVENT_TIMEOUT);
+ 		} else {
+ 			wait_event_interruptible(dp_priv->event_q,
+-				(dp_priv->event_pndx != dp_priv->event_gndx));
++				(dp_priv->event_pndx != dp_priv->event_gndx) ||
++					kthread_should_stop());
+ 		}
++
++		if (kthread_should_stop())
++			break;
++
+ 		spin_lock_irqsave(&dp_priv->event_lock, flag);
+ 		todo = &dp_priv->event_list[dp_priv->event_gndx];
+ 		if (todo->delay) {
+@@ -1062,12 +1084,17 @@ static int hpd_event_thread(void *data)
+ 	return 0;
+ }
+ 
+-static void dp_hpd_event_setup(struct dp_display_private *dp_priv)
++static int dp_hpd_event_thread_start(struct dp_display_private *dp_priv)
+ {
+-	init_waitqueue_head(&dp_priv->event_q);
+-	spin_lock_init(&dp_priv->event_lock);
++	/* set event q to empty */
++	dp_priv->event_gndx = 0;
++	dp_priv->event_pndx = 0;
+ 
+-	kthread_run(hpd_event_thread, dp_priv, "dp_hpd_handler");
++	dp_priv->ev_tsk = kthread_run(hpd_event_thread, dp_priv, "dp_hpd_handler");
++	if (IS_ERR(dp_priv->ev_tsk))
++		return PTR_ERR(dp_priv->ev_tsk);
++
++	return 0;
+ }
+ 
+ static irqreturn_t dp_display_irq_handler(int irq, void *dev_id)
+@@ -1125,10 +1152,9 @@ int dp_display_request_irq(struct msm_dp *dp_display)
+ 	dp = container_of(dp_display, struct dp_display_private, dp_display);
+ 
+ 	dp->irq = irq_of_parse_and_map(dp->pdev->dev.of_node, 0);
+-	if (dp->irq < 0) {
+-		rc = dp->irq;
+-		DRM_ERROR("failed to get irq: %d\n", rc);
+-		return rc;
++	if (!dp->irq) {
++		DRM_ERROR("failed to get irq\n");
++		return -EINVAL;
+ 	}
+ 
+ 	rc = devm_request_irq(&dp->pdev->dev, dp->irq,
+@@ -1167,8 +1193,11 @@ static int dp_display_probe(struct platform_device *pdev)
+ 		return -EPROBE_DEFER;
+ 	}
+ 
++	/* setup event q */
+ 	mutex_init(&dp->event_mutex);
+ 	g_dp_display = &dp->dp_display;
++	init_waitqueue_head(&dp->event_q);
++	spin_lock_init(&dp->event_lock);
+ 
+ 	/* Store DP audio handle inside DP display */
+ 	g_dp_display->dp_audio = dp->audio;
+@@ -1308,8 +1337,6 @@ void msm_dp_irq_postinstall(struct msm_dp *dp_display)
+ 
+ 	dp = container_of(dp_display, struct dp_display_private, dp_display);
+ 
+-	dp_hpd_event_setup(dp);
+-
+ 	dp_add_event(dp, EV_HPD_INIT_SETUP, 0, 100);
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 64454a63bbacf..51e8318cc8ff4 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -1371,10 +1371,10 @@ static int dsi_cmds2buf_tx(struct msm_dsi_host *msm_host,
+ 			dsi_get_bpp(msm_host->format) / 8;
+ 
+ 	len = dsi_cmd_dma_add(msm_host, msg);
+-	if (!len) {
++	if (len < 0) {
+ 		pr_err("%s: failed to add cmd type = 0x%x\n",
+ 			__func__,  msg->type);
+-		return -EINVAL;
++		return len;
+ 	}
+ 
+ 	/* for video mode, do not send cmds more than
+@@ -1393,10 +1393,14 @@ static int dsi_cmds2buf_tx(struct msm_dsi_host *msm_host,
+ 	}
+ 
+ 	ret = dsi_cmd_dma_tx(msm_host, len);
+-	if (ret < len) {
+-		pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, len=%d\n",
+-			__func__, msg->type, (*(u8 *)(msg->tx_buf)), len);
+-		return -ECOMM;
++	if (ret < 0) {
++		pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, len=%d, ret=%d\n",
++			__func__, msg->type, (*(u8 *)(msg->tx_buf)), len, ret);
++		return ret;
++	} else if (ret < len) {
++		pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, ret=%d len=%d\n",
++			__func__, msg->type, (*(u8 *)(msg->tx_buf)), ret, len);
++		return -EIO;
+ 	}
+ 
+ 	return len;
+@@ -2139,9 +2143,12 @@ int msm_dsi_host_cmd_rx(struct mipi_dsi_host *host,
+ 		}
+ 
+ 		ret = dsi_cmds2buf_tx(msm_host, msg);
+-		if (ret < msg->tx_len) {
++		if (ret < 0) {
+ 			pr_err("%s: Read cmd Tx failed, %d\n", __func__, ret);
+ 			return ret;
++		} else if (ret < msg->tx_len) {
++			pr_err("%s: Read cmd Tx failed, too short: %d\n", __func__, ret);
++			return -ECOMM;
+ 		}
+ 
+ 		/*
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
+index 94f948ef279d1..28b33b35a30ce 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
+@@ -142,6 +142,10 @@ static struct hdmi *msm_hdmi_init(struct platform_device *pdev)
+ 	/* HDCP needs physical address of hdmi register */
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ 		config->mmio_name);
++	if (!res) {
++		ret = -EINVAL;
++		goto fail;
++	}
+ 	hdmi->mmio_phy_addr = res->start;
+ 
+ 	hdmi->qfprom_mmio = msm_ioremap(pdev,
+@@ -311,9 +315,9 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
+ 	}
+ 
+ 	hdmi->irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+-	if (hdmi->irq < 0) {
+-		ret = hdmi->irq;
+-		DRM_DEV_ERROR(dev->dev, "failed to get irq: %d\n", ret);
++	if (!hdmi->irq) {
++		ret = -EINVAL;
++		DRM_DEV_ERROR(dev->dev, "failed to get irq\n");
+ 		goto fail;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index e37e5afc680a2..087efcb1f34cf 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -10,6 +10,7 @@
+ #include <linux/uaccess.h>
+ #include <uapi/linux/sched/types.h>
+ 
++#include <drm/drm_bridge.h>
+ #include <drm/drm_drv.h>
+ #include <drm/drm_file.h>
+ #include <drm/drm_ioctl.h>
+diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
+index 515ef80816a0d..8c64ce7288f1e 100644
+--- a/drivers/gpu/drm/msm/msm_gem_prime.c
++++ b/drivers/gpu/drm/msm/msm_gem_prime.c
+@@ -17,7 +17,7 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
+ 	int npages = obj->size >> PAGE_SHIFT;
+ 
+ 	if (WARN_ON(!msm_obj->pages))  /* should have already pinned! */
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	return drm_prime_pages_to_sg(obj->dev, msm_obj->pages, npages);
+ }
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/atom.h b/drivers/gpu/drm/nouveau/dispnv50/atom.h
+index 3d82b3c67decc..93f8f4f645784 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/atom.h
++++ b/drivers/gpu/drm/nouveau/dispnv50/atom.h
+@@ -160,14 +160,14 @@ nv50_head_atom_get(struct drm_atomic_state *state, struct drm_crtc *crtc)
+ static inline struct drm_encoder *
+ nv50_head_atom_get_encoder(struct nv50_head_atom *atom)
+ {
+-	struct drm_encoder *encoder = NULL;
++	struct drm_encoder *encoder;
+ 
+ 	/* We only ever have a single encoder */
+ 	drm_for_each_encoder_mask(encoder, atom->state.crtc->dev,
+ 				  atom->state.encoder_mask)
+-		break;
++		return encoder;
+ 
+-	return encoder;
++	return NULL;
+ }
+ 
+ #define nv50_wndw_atom(p) container_of((p), struct nv50_wndw_atom, state)
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/crc.c b/drivers/gpu/drm/nouveau/dispnv50/crc.c
+index 66f32d965c723..5624a716e11c1 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/crc.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/crc.c
+@@ -411,9 +411,18 @@ void nv50_crc_atomic_check_outp(struct nv50_atom *atom)
+ 		struct nv50_head_atom *armh = nv50_head_atom(old_crtc_state);
+ 		struct nv50_head_atom *asyh = nv50_head_atom(new_crtc_state);
+ 		struct nv50_outp_atom *outp_atom;
+-		struct nouveau_encoder *outp =
+-			nv50_real_outp(nv50_head_atom_get_encoder(armh));
+-		struct drm_encoder *encoder = &outp->base.base;
++		struct nouveau_encoder *outp;
++		struct drm_encoder *encoder, *enc;
++
++		enc = nv50_head_atom_get_encoder(armh);
++		if (!enc)
++			continue;
++
++		outp = nv50_real_outp(enc);
++		if (!outp)
++			continue;
++
++		encoder = &outp->base.base;
+ 
+ 		if (!asyh->clr.crc)
+ 			continue;
+@@ -464,8 +473,16 @@ void nv50_crc_atomic_set(struct nv50_head *head,
+ 	struct drm_device *dev = crtc->dev;
+ 	struct nv50_crc *crc = &head->crc;
+ 	const struct nv50_crc_func *func = nv50_disp(dev)->core->func->crc;
+-	struct nouveau_encoder *outp =
+-		nv50_real_outp(nv50_head_atom_get_encoder(asyh));
++	struct nouveau_encoder *outp;
++	struct drm_encoder *encoder;
++
++	encoder = nv50_head_atom_get_encoder(asyh);
++	if (!encoder)
++		return;
++
++	outp = nv50_real_outp(encoder);
++	if (!outp)
++		return;
+ 
+ 	func->set_src(head, outp->or,
+ 		      nv50_crc_source_type(outp, asyh->crc.src),
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
+index dc184e857f857..5214fd0658b88 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
+@@ -135,10 +135,10 @@ nvkm_cstate_find_best(struct nvkm_clk *clk, struct nvkm_pstate *pstate,
+ 
+ 	list_for_each_entry_from_reverse(cstate, &pstate->list, head) {
+ 		if (nvkm_cstate_valid(clk, cstate, max_volt, clk->temp))
+-			break;
++			return cstate;
+ 	}
+ 
+-	return cstate;
++	return NULL;
+ }
+ 
+ static struct nvkm_cstate *
+@@ -169,6 +169,8 @@ nvkm_cstate_prog(struct nvkm_clk *clk, struct nvkm_pstate *pstate, int cstatei)
+ 	if (!list_empty(&pstate->list)) {
+ 		cstate = nvkm_cstate_get(clk, pstate, cstatei);
+ 		cstate = nvkm_cstate_find_best(clk, pstate, cstate);
++		if (!cstate)
++			return -EINVAL;
+ 	} else {
+ 		cstate = &pstate->base;
+ 	}
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 959dcbd8a29c1..bf2c845ef3a20 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -676,7 +676,7 @@ static const struct drm_display_mode ampire_am_1280800n3tzqw_t00h_mode = {
+ static const struct panel_desc ampire_am_1280800n3tzqw_t00h = {
+ 	.modes = &ampire_am_1280800n3tzqw_t00h_mode,
+ 	.num_modes = 1,
+-	.bpc = 6,
++	.bpc = 8,
+ 	.size = {
+ 		.width = 217,
+ 		.height = 136,
+@@ -2144,6 +2144,7 @@ static const struct panel_desc innolux_g070y2_l01 = {
+ 		.unprepare = 800,
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
++	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
+ 	.connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+ 
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 0f23144491e40..91568f166a8ad 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -2097,10 +2097,10 @@ static int vop_bind(struct device *dev, struct device *master, void *data)
+ 	vop_win_init(vop);
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	vop->len = resource_size(res);
+ 	vop->regs = devm_ioremap_resource(dev, res);
+ 	if (IS_ERR(vop->regs))
+ 		return PTR_ERR(vop->regs);
++	vop->len = resource_size(res);
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ 	if (res) {
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index 62488ac149238..089c00a8e7d49 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -527,8 +527,8 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
+ 	struct drm_device *ddev = crtc->dev;
+ 	struct drm_connector_list_iter iter;
+ 	struct drm_connector *connector = NULL;
+-	struct drm_encoder *encoder = NULL;
+-	struct drm_bridge *bridge = NULL;
++	struct drm_encoder *encoder = NULL, *en_iter;
++	struct drm_bridge *bridge = NULL, *br_iter;
+ 	struct drm_display_mode *mode = &crtc->state->adjusted_mode;
+ 	struct videomode vm;
+ 	u32 hsync, vsync, accum_hbp, accum_vbp, accum_act_w, accum_act_h;
+@@ -538,15 +538,19 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
+ 	int ret;
+ 
+ 	/* get encoder from crtc */
+-	drm_for_each_encoder(encoder, ddev)
+-		if (encoder->crtc == crtc)
++	drm_for_each_encoder(en_iter, ddev)
++		if (en_iter->crtc == crtc) {
++			encoder = en_iter;
+ 			break;
++		}
+ 
+ 	if (encoder) {
+ 		/* get bridge from encoder */
+-		list_for_each_entry(bridge, &encoder->bridge_chain, chain_node)
+-			if (bridge->encoder == encoder)
++		list_for_each_entry(br_iter, &encoder->bridge_chain, chain_node)
++			if (br_iter->encoder == encoder) {
++				bridge = br_iter;
+ 				break;
++			}
+ 
+ 		/* Get the connector from encoder */
+ 		drm_connector_list_iter_begin(ddev, &iter);
+diff --git a/drivers/gpu/drm/tilcdc/tilcdc_external.c b/drivers/gpu/drm/tilcdc/tilcdc_external.c
+index b177525588c14..8ece0dd0b469d 100644
+--- a/drivers/gpu/drm/tilcdc/tilcdc_external.c
++++ b/drivers/gpu/drm/tilcdc/tilcdc_external.c
+@@ -60,11 +60,13 @@ struct drm_connector *tilcdc_encoder_find_connector(struct drm_device *ddev,
+ int tilcdc_add_component_encoder(struct drm_device *ddev)
+ {
+ 	struct tilcdc_drm_private *priv = ddev->dev_private;
+-	struct drm_encoder *encoder;
++	struct drm_encoder *encoder = NULL, *iter;
+ 
+-	list_for_each_entry(encoder, &ddev->mode_config.encoder_list, head)
+-		if (encoder->possible_crtcs & (1 << priv->crtc->index))
++	list_for_each_entry(iter, &ddev->mode_config.encoder_list, head)
++		if (iter->possible_crtcs & (1 << priv->crtc->index)) {
++			encoder = iter;
+ 			break;
++		}
+ 
+ 	if (!encoder) {
+ 		dev_err(ddev->dev, "%s: No suitable encoder found\n", __func__);
+diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
+index ad691571d759f..95fa6fc052a72 100644
+--- a/drivers/gpu/drm/vc4/vc4_hvs.c
++++ b/drivers/gpu/drm/vc4/vc4_hvs.c
+@@ -564,6 +564,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ 	struct vc4_hvs *hvs = NULL;
+ 	int ret;
+ 	u32 dispctrl;
++	u32 reg;
+ 
+ 	hvs = devm_kzalloc(&pdev->dev, sizeof(*hvs), GFP_KERNEL);
+ 	if (!hvs)
+@@ -635,6 +636,26 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ 
+ 	vc4->hvs = hvs;
+ 
++	reg = HVS_READ(SCALER_DISPECTRL);
++	reg &= ~SCALER_DISPECTRL_DSP2_MUX_MASK;
++	HVS_WRITE(SCALER_DISPECTRL,
++		  reg | VC4_SET_FIELD(0, SCALER_DISPECTRL_DSP2_MUX));
++
++	reg = HVS_READ(SCALER_DISPCTRL);
++	reg &= ~SCALER_DISPCTRL_DSP3_MUX_MASK;
++	HVS_WRITE(SCALER_DISPCTRL,
++		  reg | VC4_SET_FIELD(3, SCALER_DISPCTRL_DSP3_MUX));
++
++	reg = HVS_READ(SCALER_DISPEOLN);
++	reg &= ~SCALER_DISPEOLN_DSP4_MUX_MASK;
++	HVS_WRITE(SCALER_DISPEOLN,
++		  reg | VC4_SET_FIELD(3, SCALER_DISPEOLN_DSP4_MUX));
++
++	reg = HVS_READ(SCALER_DISPDITHER);
++	reg &= ~SCALER_DISPDITHER_DSP5_MUX_MASK;
++	HVS_WRITE(SCALER_DISPDITHER,
++		  reg | VC4_SET_FIELD(3, SCALER_DISPDITHER_DSP5_MUX));
++
+ 	dispctrl = HVS_READ(SCALER_DISPCTRL);
+ 
+ 	dispctrl |= SCALER_DISPCTRL_ENABLE;
+@@ -642,10 +663,6 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ 		    SCALER_DISPCTRL_DISPEIRQ(1) |
+ 		    SCALER_DISPCTRL_DISPEIRQ(2);
+ 
+-	/* Set DSP3 (PV1) to use HVS channel 2, which would otherwise
+-	 * be unused.
+-	 */
+-	dispctrl &= ~SCALER_DISPCTRL_DSP3_MUX_MASK;
+ 	dispctrl &= ~(SCALER_DISPCTRL_DMAEIRQ |
+ 		      SCALER_DISPCTRL_SLVWREIRQ |
+ 		      SCALER_DISPCTRL_SLVRDEIRQ |
+@@ -659,7 +676,6 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ 		      SCALER_DISPCTRL_DSPEISLUR(1) |
+ 		      SCALER_DISPCTRL_DSPEISLUR(2) |
+ 		      SCALER_DISPCTRL_SCLEIRQ);
+-	dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_DSP3_MUX);
+ 
+ 	HVS_WRITE(SCALER_DISPCTRL, dispctrl);
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_txp.c b/drivers/gpu/drm/vc4/vc4_txp.c
+index d13502ae973dd..f8fa09dfea5da 100644
+--- a/drivers/gpu/drm/vc4/vc4_txp.c
++++ b/drivers/gpu/drm/vc4/vc4_txp.c
+@@ -295,12 +295,18 @@ static void vc4_txp_connector_atomic_commit(struct drm_connector *conn,
+ 	if (WARN_ON(i == ARRAY_SIZE(drm_fmts)))
+ 		return;
+ 
+-	ctrl = TXP_GO | TXP_VSTART_AT_EOF | TXP_EI |
++	ctrl = TXP_GO | TXP_EI |
+ 	       VC4_SET_FIELD(0xf, TXP_BYTE_ENABLE) |
+ 	       VC4_SET_FIELD(txp_fmts[i], TXP_FORMAT);
+ 
+ 	if (fb->format->has_alpha)
+ 		ctrl |= TXP_ALPHA_ENABLE;
++	else
++		/*
++		 * If TXP_ALPHA_ENABLE isn't set and TXP_ALPHA_INVERT is, the
++		 * hardware will force the output padding to be 0xff.
++		 */
++		ctrl |= TXP_ALPHA_INVERT;
+ 
+ 	gem = drm_fb_cma_get_gem_obj(fb, 0);
+ 	TXP_WRITE(TXP_DST_PTR, gem->paddr + fb->offsets[0]);
+diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c
+index f84b7e61311bc..9b2b99e853422 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_display.c
++++ b/drivers/gpu/drm/virtio/virtgpu_display.c
+@@ -177,6 +177,8 @@ static int virtio_gpu_conn_get_modes(struct drm_connector *connector)
+ 		DRM_DEBUG("add mode: %dx%d\n", width, height);
+ 		mode = drm_cvt_mode(connector->dev, width, height, 60,
+ 				    false, false, false);
++		if (!mode)
++			return count;
+ 		mode->type |= DRM_MODE_TYPE_PREFERRED;
+ 		drm_mode_probed_add(connector, mode);
+ 		count++;
+diff --git a/drivers/hid/hid-bigbenff.c b/drivers/hid/hid-bigbenff.c
+index 74ad8bf98bfd5..e8c5e3ac9fff1 100644
+--- a/drivers/hid/hid-bigbenff.c
++++ b/drivers/hid/hid-bigbenff.c
+@@ -347,6 +347,12 @@ static int bigben_probe(struct hid_device *hid,
+ 	bigben->report = list_entry(report_list->next,
+ 		struct hid_report, list);
+ 
++	if (list_empty(&hid->inputs)) {
++		hid_err(hid, "no inputs found\n");
++		error = -ENODEV;
++		goto error_hw_stop;
++	}
++
+ 	hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
+ 	set_bit(FF_RUMBLE, hidinput->input->ffbit);
+ 
+diff --git a/drivers/hid/hid-elan.c b/drivers/hid/hid-elan.c
+index 0e8f424025fea..838673303f77f 100644
+--- a/drivers/hid/hid-elan.c
++++ b/drivers/hid/hid-elan.c
+@@ -188,7 +188,6 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ 	ret = input_mt_init_slots(input, ELAN_MAX_FINGERS, INPUT_MT_POINTER);
+ 	if (ret) {
+ 		hid_err(hdev, "Failed to init elan MT slots: %d\n", ret);
+-		input_free_device(input);
+ 		return ret;
+ 	}
+ 
+@@ -200,7 +199,6 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ 		hid_err(hdev, "Failed to register elan input device: %d\n",
+ 			ret);
+ 		input_mt_destroy_slots(input);
+-		input_free_device(input);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/hid/hid-led.c b/drivers/hid/hid-led.c
+index c2c66ceca1327..7d82f8d426bbc 100644
+--- a/drivers/hid/hid-led.c
++++ b/drivers/hid/hid-led.c
+@@ -366,7 +366,7 @@ static const struct hidled_config hidled_configs[] = {
+ 		.type = DREAM_CHEEKY,
+ 		.name = "Dream Cheeky Webmail Notifier",
+ 		.short_name = "dream_cheeky",
+-		.max_brightness = 31,
++		.max_brightness = 63,
+ 		.num_leds = 1,
+ 		.report_size = 9,
+ 		.report_type = RAW_REQUEST,
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index 424c296845dbb..e8a7f47b8fce3 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -1337,7 +1337,7 @@ static int coresight_fixup_device_conns(struct coresight_device *csdev)
+ 			continue;
+ 		conn->child_dev =
+ 			coresight_find_csdev_by_fwnode(conn->child_fwnode);
+-		if (conn->child_dev) {
++		if (conn->child_dev && conn->child_dev->has_conns_grp) {
+ 			ret = coresight_make_links(csdev, conn,
+ 						   conn->child_dev);
+ 			if (ret)
+@@ -1486,6 +1486,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ 	int nr_refcnts = 1;
+ 	atomic_t *refcnts = NULL;
+ 	struct coresight_device *csdev;
++	bool registered = false;
+ 
+ 	csdev = kzalloc(sizeof(*csdev), GFP_KERNEL);
+ 	if (!csdev) {
+@@ -1506,7 +1507,8 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ 	refcnts = kcalloc(nr_refcnts, sizeof(*refcnts), GFP_KERNEL);
+ 	if (!refcnts) {
+ 		ret = -ENOMEM;
+-		goto err_free_csdev;
++		kfree(csdev);
++		goto err_out;
+ 	}
+ 
+ 	csdev->refcnt = refcnts;
+@@ -1530,6 +1532,13 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ 	csdev->dev.fwnode = fwnode_handle_get(dev_fwnode(desc->dev));
+ 	dev_set_name(&csdev->dev, "%s", desc->name);
+ 
++	/*
++	 * Make sure the device registration and the connection fixup
++	 * are synchronised, so that we don't see uninitialised devices
++	 * on the coresight bus while trying to resolve the connections.
++	 */
++	mutex_lock(&coresight_mutex);
++
+ 	ret = device_register(&csdev->dev);
+ 	if (ret) {
+ 		put_device(&csdev->dev);
+@@ -1537,7 +1546,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ 		 * All resources are free'd explicitly via
+ 		 * coresight_device_release(), triggered from put_device().
+ 		 */
+-		goto err_out;
++		goto out_unlock;
+ 	}
+ 
+ 	if (csdev->type == CORESIGHT_DEV_TYPE_SINK ||
+@@ -1552,11 +1561,11 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ 			 * from put_device(), which is in turn called from
+ 			 * function device_unregister().
+ 			 */
+-			goto err_out;
++			goto out_unlock;
+ 		}
+ 	}
+-
+-	mutex_lock(&coresight_mutex);
++	/* Device is now registered */
++	registered = true;
+ 
+ 	ret = coresight_create_conns_sysfs_group(csdev);
+ 	if (!ret)
+@@ -1566,16 +1575,18 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ 	if (!ret && cti_assoc_ops && cti_assoc_ops->add)
+ 		cti_assoc_ops->add(csdev);
+ 
++out_unlock:
+ 	mutex_unlock(&coresight_mutex);
+-	if (ret) {
++	/* Success */
++	if (!ret)
++		return csdev;
++
++	/* Unregister the device if needed */
++	if (registered) {
+ 		coresight_unregister(csdev);
+ 		return ERR_PTR(ret);
+ 	}
+ 
+-	return csdev;
+-
+-err_free_csdev:
+-	kfree(csdev);
+ err_out:
+ 	/* Cleanup the connection information */
+ 	coresight_release_platform_data(NULL, desc->pdata);
+diff --git a/drivers/i2c/busses/i2c-at91-master.c b/drivers/i2c/busses/i2c-at91-master.c
+index 66864f9cf7ac5..7960fa4b8c5b0 100644
+--- a/drivers/i2c/busses/i2c-at91-master.c
++++ b/drivers/i2c/busses/i2c-at91-master.c
+@@ -657,6 +657,7 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
+ 	unsigned int_addr_flag = 0;
+ 	struct i2c_msg *m_start = msg;
+ 	bool is_read;
++	u8 *dma_buf = NULL;
+ 
+ 	dev_dbg(&adap->dev, "at91_xfer: processing %d messages:\n", num);
+ 
+@@ -704,7 +705,17 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
+ 	dev->msg = m_start;
+ 	dev->recv_len_abort = false;
+ 
++	if (dev->use_dma) {
++		dma_buf = i2c_get_dma_safe_msg_buf(m_start, 1);
++		if (!dma_buf) {
++			ret = -ENOMEM;
++			goto out;
++		}
++		dev->buf = dma_buf;
++	}
++
+ 	ret = at91_do_twi_transfer(dev);
++	i2c_put_dma_safe_msg_buf(dma_buf, m_start, !ret);
+ 
+ 	ret = (ret < 0) ? ret : num;
+ out:
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index 2ad166355ec9b..20a2f903b7f6c 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -359,14 +359,14 @@ static int npcm_i2c_get_SCL(struct i2c_adapter *_adap)
+ {
+ 	struct npcm_i2c *bus = container_of(_adap, struct npcm_i2c, adap);
+ 
+-	return !!(I2CCTL3_SCL_LVL & ioread32(bus->reg + NPCM_I2CCTL3));
++	return !!(I2CCTL3_SCL_LVL & ioread8(bus->reg + NPCM_I2CCTL3));
+ }
+ 
+ static int npcm_i2c_get_SDA(struct i2c_adapter *_adap)
+ {
+ 	struct npcm_i2c *bus = container_of(_adap, struct npcm_i2c, adap);
+ 
+-	return !!(I2CCTL3_SDA_LVL & ioread32(bus->reg + NPCM_I2CCTL3));
++	return !!(I2CCTL3_SDA_LVL & ioread8(bus->reg + NPCM_I2CCTL3));
+ }
+ 
+ static inline u16 npcm_i2c_get_index(struct npcm_i2c *bus)
+@@ -563,6 +563,15 @@ static inline void npcm_i2c_nack(struct npcm_i2c *bus)
+ 	iowrite8(val, bus->reg + NPCM_I2CCTL1);
+ }
+ 
++static inline void npcm_i2c_clear_master_status(struct npcm_i2c *bus)
++{
++	u8 val;
++
++	/* Clear NEGACK, STASTR and BER bits */
++	val = NPCM_I2CST_BER | NPCM_I2CST_NEGACK | NPCM_I2CST_STASTR;
++	iowrite8(val, bus->reg + NPCM_I2CST);
++}
++
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ static void npcm_i2c_slave_int_enable(struct npcm_i2c *bus, bool enable)
+ {
+@@ -642,8 +651,8 @@ static void npcm_i2c_reset(struct npcm_i2c *bus)
+ 	iowrite8(NPCM_I2CCST_BB, bus->reg + NPCM_I2CCST);
+ 	iowrite8(0xFF, bus->reg + NPCM_I2CST);
+ 
+-	/* Clear EOB bit */
+-	iowrite8(NPCM_I2CCST3_EO_BUSY, bus->reg + NPCM_I2CCST3);
++	/* Clear and disable EOB */
++	npcm_i2c_eob_int(bus, false);
+ 
+ 	/* Clear all fifo bits: */
+ 	iowrite8(NPCM_I2CFIF_CTS_CLR_FIFO, bus->reg + NPCM_I2CFIF_CTS);
+@@ -655,6 +664,9 @@ static void npcm_i2c_reset(struct npcm_i2c *bus)
+ 	}
+ #endif
+ 
++	/* clear status bits for spurious interrupts */
++	npcm_i2c_clear_master_status(bus);
++
+ 	bus->state = I2C_IDLE;
+ }
+ 
+@@ -815,15 +827,6 @@ static void npcm_i2c_read_fifo(struct npcm_i2c *bus, u8 bytes_in_fifo)
+ 	}
+ }
+ 
+-static inline void npcm_i2c_clear_master_status(struct npcm_i2c *bus)
+-{
+-	u8 val;
+-
+-	/* Clear NEGACK, STASTR and BER bits */
+-	val = NPCM_I2CST_BER | NPCM_I2CST_NEGACK | NPCM_I2CST_STASTR;
+-	iowrite8(val, bus->reg + NPCM_I2CST);
+-}
+-
+ static void npcm_i2c_master_abort(struct npcm_i2c *bus)
+ {
+ 	/* Only current master is allowed to issue a stop condition */
+@@ -1231,7 +1234,16 @@ static irqreturn_t npcm_i2c_int_slave_handler(struct npcm_i2c *bus)
+ 		ret = IRQ_HANDLED;
+ 	} /* SDAST */
+ 
+-	return ret;
++	/*
++	 * if irq is not one of the above, make sure EOB is disabled and all
++	 * status bits are cleared.
++	 */
++	if (ret == IRQ_NONE) {
++		npcm_i2c_eob_int(bus, false);
++		npcm_i2c_clear_master_status(bus);
++	}
++
++	return IRQ_HANDLED;
+ }
+ 
+ static int npcm_i2c_reg_slave(struct i2c_client *client)
+@@ -1467,6 +1479,9 @@ static void npcm_i2c_irq_handle_nack(struct npcm_i2c *bus)
+ 		npcm_i2c_eob_int(bus, false);
+ 		npcm_i2c_master_stop(bus);
+ 
++		/* Clear SDA Status bit (by reading dummy byte) */
++		npcm_i2c_rd_byte(bus);
++
+ 		/*
+ 		 * The bus is released from stall only after the SW clears
+ 		 * NEGACK bit. Then a Stop condition is sent.
+@@ -1474,6 +1489,8 @@ static void npcm_i2c_irq_handle_nack(struct npcm_i2c *bus)
+ 		npcm_i2c_clear_master_status(bus);
+ 		readx_poll_timeout_atomic(ioread8, bus->reg + NPCM_I2CCST, val,
+ 					  !(val & NPCM_I2CCST_BUSY), 10, 200);
++		/* verify no status bits are still set after bus is released */
++		npcm_i2c_clear_master_status(bus);
+ 	}
+ 	bus->state = I2C_IDLE;
+ 
+@@ -1672,10 +1689,10 @@ static int npcm_i2c_recovery_tgclk(struct i2c_adapter *_adap)
+ 	int              iter = 27;
+ 
+ 	if ((npcm_i2c_get_SDA(_adap) == 1) && (npcm_i2c_get_SCL(_adap) == 1)) {
+-		dev_dbg(bus->dev, "bus%d recovery skipped, bus not stuck",
+-			bus->num);
++		dev_dbg(bus->dev, "bus%d-0x%x recovery skipped, bus not stuck",
++			bus->num, bus->dest_addr);
+ 		npcm_i2c_reset(bus);
+-		return status;
++		return 0;
+ 	}
+ 
+ 	npcm_i2c_int_enable(bus, false);
+@@ -1909,6 +1926,7 @@ static int npcm_i2c_init_module(struct npcm_i2c *bus, enum i2c_mode mode,
+ 	    bus_freq_hz < I2C_FREQ_MIN_HZ || bus_freq_hz > I2C_FREQ_MAX_HZ)
+ 		return -EINVAL;
+ 
++	npcm_i2c_int_enable(bus, false);
+ 	npcm_i2c_disable(bus);
+ 
+ 	/* Configure FIFO mode : */
+@@ -1937,10 +1955,17 @@ static int npcm_i2c_init_module(struct npcm_i2c *bus, enum i2c_mode mode,
+ 	val = (val | NPCM_I2CCTL1_NMINTE) & ~NPCM_I2CCTL1_RWS;
+ 	iowrite8(val, bus->reg + NPCM_I2CCTL1);
+ 
+-	npcm_i2c_int_enable(bus, true);
+-
+ 	npcm_i2c_reset(bus);
+ 
++	/* check HW is OK: SDA and SCL should be high at this point. */
++	if ((npcm_i2c_get_SDA(&bus->adap) == 0) || (npcm_i2c_get_SCL(&bus->adap) == 0)) {
++		dev_err(bus->dev, "I2C%d init fail: lines are low\n", bus->num);
++		dev_err(bus->dev, "SDA=%d SCL=%d\n", npcm_i2c_get_SDA(&bus->adap),
++			npcm_i2c_get_SCL(&bus->adap));
++		return -ENXIO;
++	}
++
++	npcm_i2c_int_enable(bus, true);
+ 	return 0;
+ }
+ 
+@@ -1988,10 +2013,14 @@ static irqreturn_t npcm_i2c_bus_irq(int irq, void *dev_id)
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ 	if (bus->slave) {
+ 		bus->master_or_slave = I2C_SLAVE;
+-		return npcm_i2c_int_slave_handler(bus);
++		if (npcm_i2c_int_slave_handler(bus))
++			return IRQ_HANDLED;
+ 	}
+ #endif
+-	return IRQ_NONE;
++	/* clear status bits for spurious interrupts */
++	npcm_i2c_clear_master_status(bus);
++
++	return IRQ_HANDLED;
+ }
+ 
+ static bool npcm_i2c_master_start_xmit(struct npcm_i2c *bus,
+@@ -2047,8 +2076,7 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 	u16 nwrite, nread;
+ 	u8 *write_data, *read_data;
+ 	u8 slave_addr;
+-	int timeout;
+-	int ret = 0;
++	unsigned long timeout;
+ 	bool read_block = false;
+ 	bool read_PEC = false;
+ 	u8 bus_busy;
+@@ -2099,13 +2127,13 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 	 * 9: bits per transaction (including the ack/nack)
+ 	 */
+ 	timeout_usec = (2 * 9 * USEC_PER_SEC / bus->bus_freq) * (2 + nread + nwrite);
+-	timeout = max(msecs_to_jiffies(35), usecs_to_jiffies(timeout_usec));
++	timeout = max_t(unsigned long, bus->adap.timeout, usecs_to_jiffies(timeout_usec));
+ 	if (nwrite >= 32 * 1024 || nread >= 32 * 1024) {
+ 		dev_err(bus->dev, "i2c%d buffer too big\n", bus->num);
+ 		return -EINVAL;
+ 	}
+ 
+-	time_left = jiffies + msecs_to_jiffies(DEFAULT_STALL_COUNT) + 1;
++	time_left = jiffies + timeout + 1;
+ 	do {
+ 		/*
+ 		 * we must clear slave address immediately when the bus is not
+@@ -2138,12 +2166,12 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 	bus->read_block_use = read_block;
+ 
+ 	reinit_completion(&bus->cmd_complete);
+-	if (!npcm_i2c_master_start_xmit(bus, slave_addr, nwrite, nread,
+-					write_data, read_data, read_PEC,
+-					read_block))
+-		ret = -EBUSY;
+ 
+-	if (ret != -EBUSY) {
++	npcm_i2c_int_enable(bus, true);
++
++	if (npcm_i2c_master_start_xmit(bus, slave_addr, nwrite, nread,
++				       write_data, read_data, read_PEC,
++				       read_block)) {
+ 		time_left = wait_for_completion_timeout(&bus->cmd_complete,
+ 							timeout);
+ 
+@@ -2157,26 +2185,31 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 			}
+ 		}
+ 	}
+-	ret = bus->cmd_err;
+ 
+ 	/* if there was BER, check if need to recover the bus: */
+ 	if (bus->cmd_err == -EAGAIN)
+-		ret = i2c_recover_bus(adap);
++		bus->cmd_err = i2c_recover_bus(adap);
+ 
+ 	/*
+ 	 * After any type of error, check if LAST bit is still set,
+ 	 * due to a HW issue.
+ 	 * It cannot be cleared without resetting the module.
+ 	 */
+-	if (bus->cmd_err &&
+-	    (NPCM_I2CRXF_CTL_LAST_PEC & ioread8(bus->reg + NPCM_I2CRXF_CTL)))
++	else if (bus->cmd_err &&
++		 (NPCM_I2CRXF_CTL_LAST_PEC & ioread8(bus->reg + NPCM_I2CRXF_CTL)))
+ 		npcm_i2c_reset(bus);
+ 
++	/* after any xfer, successful or not, stall and EOB must be disabled */
++	npcm_i2c_stall_after_start(bus, false);
++	npcm_i2c_eob_int(bus, false);
++
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ 	/* reenable slave if it was enabled */
+ 	if (bus->slave)
+ 		iowrite8((bus->slave->addr & 0x7F) | NPCM_I2CADDR_SAEN,
+ 			 bus->reg + NPCM_I2CADDR1);
++#else
++	npcm_i2c_int_enable(bus, false);
+ #endif
+ 	return bus->cmd_err;
+ }
+@@ -2269,7 +2302,7 @@ static int npcm_i2c_probe_bus(struct platform_device *pdev)
+ 	adap = &bus->adap;
+ 	adap->owner = THIS_MODULE;
+ 	adap->retries = 3;
+-	adap->timeout = HZ;
++	adap->timeout = msecs_to_jiffies(35);
+ 	adap->algo = &npcm_i2c_algo;
+ 	adap->quirks = &npcm_i2c_quirks;
+ 	adap->algo_data = bus;
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 8722ca23f889b..6a7a7a074a975 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -999,8 +999,10 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(dev);
+ 	pm_runtime_get_sync(dev);
+ 	ret = rcar_i2c_clock_calculate(priv);
+-	if (ret < 0)
+-		goto out_pm_put;
++	if (ret < 0) {
++		pm_runtime_put(dev);
++		goto out_pm_disable;
++	}
+ 
+ 	rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
+ 
+@@ -1029,19 +1031,19 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 
+ 	ret = platform_get_irq(pdev, 0);
+ 	if (ret < 0)
+-		goto out_pm_disable;
++		goto out_pm_put;
+ 	priv->irq = ret;
+ 	ret = devm_request_irq(dev, priv->irq, irqhandler, irqflags, dev_name(dev), priv);
+ 	if (ret < 0) {
+ 		dev_err(dev, "cannot get irq %d\n", priv->irq);
+-		goto out_pm_disable;
++		goto out_pm_put;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, priv);
+ 
+ 	ret = i2c_add_numbered_adapter(adap);
+ 	if (ret < 0)
+-		goto out_pm_disable;
++		goto out_pm_put;
+ 
+ 	if (priv->flags & ID_P_HOST_NOTIFY) {
+ 		priv->host_notify_client = i2c_new_slave_host_notify_device(adap);
+@@ -1058,7 +1060,8 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+  out_del_device:
+ 	i2c_del_adapter(&priv->adap);
+  out_pm_put:
+-	pm_runtime_put(dev);
++	if (priv->flags & ID_P_PM_BLOCKED)
++		pm_runtime_put(dev);
+  out_pm_disable:
+ 	pm_runtime_disable(dev);
+ 	return ret;
+diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
+index 329ee4f48d957..cfc2110fc38ab 100644
+--- a/drivers/infiniband/hw/hfi1/file_ops.c
++++ b/drivers/infiniband/hw/hfi1/file_ops.c
+@@ -306,6 +306,8 @@ static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from)
+ 	unsigned long dim = from->nr_segs;
+ 	int idx;
+ 
++	if (!HFI1_CAP_IS_KSET(SDMA))
++		return -EINVAL;
+ 	idx = srcu_read_lock(&fd->pq_srcu);
+ 	pq = srcu_dereference(fd->pq, &fd->pq_srcu);
+ 	if (!cq || !pq) {
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index fa2cd76747ff4..837293aac68fe 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -530,7 +530,7 @@ void set_link_ipg(struct hfi1_pportdata *ppd)
+ 	u16 shift, mult;
+ 	u64 src;
+ 	u32 current_egress_rate; /* Mbits /sec */
+-	u32 max_pkt_time;
++	u64 max_pkt_time;
+ 	/*
+ 	 * max_pkt_time is the maximum packet egress time in units
+ 	 * of the fabric clock period 1/(805 MHz).
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index 0b73dc7847aae..a044bee257f94 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -1330,11 +1330,13 @@ void sdma_clean(struct hfi1_devdata *dd, size_t num_engines)
+ 		kvfree(sde->tx_ring);
+ 		sde->tx_ring = NULL;
+ 	}
+-	spin_lock_irq(&dd->sde_map_lock);
+-	sdma_map_free(rcu_access_pointer(dd->sdma_map));
+-	RCU_INIT_POINTER(dd->sdma_map, NULL);
+-	spin_unlock_irq(&dd->sde_map_lock);
+-	synchronize_rcu();
++	if (rcu_access_pointer(dd->sdma_map)) {
++		spin_lock_irq(&dd->sde_map_lock);
++		sdma_map_free(rcu_access_pointer(dd->sdma_map));
++		RCU_INIT_POINTER(dd->sdma_map, NULL);
++		spin_unlock_irq(&dd->sde_map_lock);
++		synchronize_rcu();
++	}
+ 	kfree(dd->per_sdma);
+ 	dd->per_sdma = NULL;
+ 
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index d8d52a00a1be9..585a9c76e5183 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -2826,7 +2826,7 @@ void rvt_qp_iter(struct rvt_dev_info *rdi,
+ EXPORT_SYMBOL(rvt_qp_iter);
+ 
+ /*
+- * This should be called with s_lock held.
++ * This should be called with s_lock and r_lock held.
+  */
+ void rvt_send_complete(struct rvt_qp *qp, struct rvt_swqe *wqe,
+ 		       enum ib_wc_status status)
+@@ -3185,7 +3185,9 @@ send_comp:
+ 	rvp->n_loop_pkts++;
+ flush_send:
+ 	sqp->s_rnr_retry = sqp->s_rnr_retry_cnt;
++	spin_lock(&sqp->r_lock);
+ 	rvt_send_complete(sqp, wqe, send_status);
++	spin_unlock(&sqp->r_lock);
+ 	if (local_ops) {
+ 		atomic_dec(&sqp->local_ops_pending);
+ 		local_ops = 0;
+@@ -3239,7 +3241,9 @@ serr:
+ 	spin_unlock_irqrestore(&qp->r_lock, flags);
+ serr_no_r_lock:
+ 	spin_lock_irqsave(&sqp->s_lock, flags);
++	spin_lock(&sqp->r_lock);
+ 	rvt_send_complete(sqp, wqe, send_status);
++	spin_unlock(&sqp->r_lock);
+ 	if (sqp->ibqp.qp_type == IB_QPT_RC) {
+ 		int lastwqe;
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index d4917646641aa..69238f856c677 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -650,7 +650,7 @@ next_wqe:
+ 	opcode = next_opcode(qp, wqe, wqe->wr.opcode);
+ 	if (unlikely(opcode < 0)) {
+ 		wqe->status = IB_WC_LOC_QP_OP_ERR;
+-		goto exit;
++		goto err;
+ 	}
+ 
+ 	mask = rxe_opcode[opcode].mask;
+diff --git a/drivers/input/misc/sparcspkr.c b/drivers/input/misc/sparcspkr.c
+index fe43e5557ed72..cdcb7737c46aa 100644
+--- a/drivers/input/misc/sparcspkr.c
++++ b/drivers/input/misc/sparcspkr.c
+@@ -205,6 +205,7 @@ static int bbc_beep_probe(struct platform_device *op)
+ 
+ 	info = &state->u.bbc;
+ 	info->clock_freq = of_getintprop_default(dp, "clock-frequency", 0);
++	of_node_put(dp);
+ 	if (!info->clock_freq)
+ 		goto out_free;
+ 
+diff --git a/drivers/input/touchscreen/stmfts.c b/drivers/input/touchscreen/stmfts.c
+index 64b690a72d105..a05a7a66b4ed4 100644
+--- a/drivers/input/touchscreen/stmfts.c
++++ b/drivers/input/touchscreen/stmfts.c
+@@ -337,13 +337,15 @@ static int stmfts_input_open(struct input_dev *dev)
+ 	struct stmfts_data *sdata = input_get_drvdata(dev);
+ 	int err;
+ 
+-	err = pm_runtime_get_sync(&sdata->client->dev);
+-	if (err < 0)
+-		goto out;
++	err = pm_runtime_resume_and_get(&sdata->client->dev);
++	if (err)
++		return err;
+ 
+ 	err = i2c_smbus_write_byte(sdata->client, STMFTS_MS_MT_SENSE_ON);
+-	if (err)
+-		goto out;
++	if (err) {
++		pm_runtime_put_sync(&sdata->client->dev);
++		return err;
++	}
+ 
+ 	mutex_lock(&sdata->mutex);
+ 	sdata->running = true;
+@@ -366,9 +368,7 @@ static int stmfts_input_open(struct input_dev *dev)
+ 				 "failed to enable touchkey\n");
+ 	}
+ 
+-out:
+-	pm_runtime_put_noidle(&sdata->client->dev);
+-	return err;
++	return 0;
+ }
+ 
+ static void stmfts_input_close(struct input_dev *dev)
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 6eaefc9e7b3d6..e988f6f198c5c 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -84,7 +84,7 @@
+ #define ACPI_DEVFLAG_LINT1              0x80
+ #define ACPI_DEVFLAG_ATSDIS             0x10000000
+ 
+-#define LOOP_TIMEOUT	100000
++#define LOOP_TIMEOUT	2000000
+ /*
+  * ACPI table definitions
+  *
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 21749859ad459..477dde39823c7 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -6296,7 +6296,7 @@ static void quirk_igfx_skip_te_disable(struct pci_dev *dev)
+ 	ver = (dev->device >> 8) & 0xff;
+ 	if (ver != 0x45 && ver != 0x46 && ver != 0x4c &&
+ 	    ver != 0x4e && ver != 0x8a && ver != 0x98 &&
+-	    ver != 0x9a)
++	    ver != 0x9a && ver != 0xa7)
+ 		return;
+ 
+ 	if (risky_device(dev))
+diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
+index 3615cd6241c4d..149f2e53b39d4 100644
+--- a/drivers/iommu/msm_iommu.c
++++ b/drivers/iommu/msm_iommu.c
+@@ -616,16 +616,19 @@ static void insert_iommu_master(struct device *dev,
+ static int qcom_iommu_of_xlate(struct device *dev,
+ 			       struct of_phandle_args *spec)
+ {
+-	struct msm_iommu_dev *iommu;
++	struct msm_iommu_dev *iommu = NULL, *iter;
+ 	unsigned long flags;
+ 	int ret = 0;
+ 
+ 	spin_lock_irqsave(&msm_iommu_lock, flags);
+-	list_for_each_entry(iommu, &qcom_iommu_devices, dev_node)
+-		if (iommu->dev->of_node == spec->np)
++	list_for_each_entry(iter, &qcom_iommu_devices, dev_node) {
++		if (iter->dev->of_node == spec->np) {
++			iommu = iter;
+ 			break;
++		}
++	}
+ 
+-	if (!iommu || iommu->dev->of_node != spec->np) {
++	if (!iommu) {
+ 		ret = -ENODEV;
+ 		goto fail;
+ 	}
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index 19387d2bc4b4f..051815c9d2bb4 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -768,8 +768,7 @@ static int mtk_iommu_remove(struct platform_device *pdev)
+ 	iommu_device_sysfs_remove(&data->iommu);
+ 	iommu_device_unregister(&data->iommu);
+ 
+-	if (iommu_present(&platform_bus_type))
+-		bus_set_iommu(&platform_bus_type, NULL);
++	list_del(&data->list);
+ 
+ 	clk_disable_unprepare(data->bclk);
+ 	devm_free_irq(&pdev->dev, data->irq, data);
+diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
+index 84f2741aaac6a..c76fb70c70bb6 100644
+--- a/drivers/irqchip/irq-armada-370-xp.c
++++ b/drivers/irqchip/irq-armada-370-xp.c
+@@ -308,7 +308,16 @@ static inline int armada_370_xp_msi_init(struct device_node *node,
+ 
+ static void armada_xp_mpic_perf_init(void)
+ {
+-	unsigned long cpuid = cpu_logical_map(smp_processor_id());
++	unsigned long cpuid;
++
++	/*
++	 * This Performance Counter Overflow interrupt is specific for
++	 * Armada 370 and XP. It is not available on Armada 375, 38x and 39x.
++	 */
++	if (!of_machine_is_compatible("marvell,armada-370-xp"))
++		return;
++
++	cpuid = cpu_logical_map(smp_processor_id());
+ 
+ 	/* Enable Performance Counter Overflow interrupts */
+ 	writel(ARMADA_370_XP_INT_CAUSE_PERF(cpuid),
+diff --git a/drivers/irqchip/irq-aspeed-i2c-ic.c b/drivers/irqchip/irq-aspeed-i2c-ic.c
+index 8d591c179f812..3d3210828e9bf 100644
+--- a/drivers/irqchip/irq-aspeed-i2c-ic.c
++++ b/drivers/irqchip/irq-aspeed-i2c-ic.c
+@@ -79,8 +79,8 @@ static int __init aspeed_i2c_ic_of_init(struct device_node *node,
+ 	}
+ 
+ 	i2c_ic->parent_irq = irq_of_parse_and_map(node, 0);
+-	if (i2c_ic->parent_irq < 0) {
+-		ret = i2c_ic->parent_irq;
++	if (!i2c_ic->parent_irq) {
++		ret = -EINVAL;
+ 		goto err_iounmap;
+ 	}
+ 
+diff --git a/drivers/irqchip/irq-aspeed-scu-ic.c b/drivers/irqchip/irq-aspeed-scu-ic.c
+index 0f0aac7cc1140..7cb13364ecfa1 100644
+--- a/drivers/irqchip/irq-aspeed-scu-ic.c
++++ b/drivers/irqchip/irq-aspeed-scu-ic.c
+@@ -159,8 +159,8 @@ static int aspeed_scu_ic_of_init_common(struct aspeed_scu_ic *scu_ic,
+ 	}
+ 
+ 	irq = irq_of_parse_and_map(node, 0);
+-	if (irq < 0) {
+-		rc = irq;
++	if (!irq) {
++		rc = -EINVAL;
+ 		goto err;
+ 	}
+ 
+diff --git a/drivers/irqchip/irq-sni-exiu.c b/drivers/irqchip/irq-sni-exiu.c
+index abd011fcecf4a..c7db617e1a2f6 100644
+--- a/drivers/irqchip/irq-sni-exiu.c
++++ b/drivers/irqchip/irq-sni-exiu.c
+@@ -37,11 +37,26 @@ struct exiu_irq_data {
+ 	u32		spi_base;
+ };
+ 
+-static void exiu_irq_eoi(struct irq_data *d)
++static void exiu_irq_ack(struct irq_data *d)
+ {
+ 	struct exiu_irq_data *data = irq_data_get_irq_chip_data(d);
+ 
+ 	writel(BIT(d->hwirq), data->base + EIREQCLR);
++}
++
++static void exiu_irq_eoi(struct irq_data *d)
++{
++	struct exiu_irq_data *data = irq_data_get_irq_chip_data(d);
++
++	/*
++	 * Level triggered interrupts are latched and must be cleared during
++	 * EOI or the interrupt will be jammed on. Of course if a level
++	 * triggered interrupt is still asserted then the write will not clear
++	 * the interrupt.
++	 */
++	if (irqd_is_level_type(d))
++		writel(BIT(d->hwirq), data->base + EIREQCLR);
++
+ 	irq_chip_eoi_parent(d);
+ }
+ 
+@@ -91,10 +106,13 @@ static int exiu_irq_set_type(struct irq_data *d, unsigned int type)
+ 	writel_relaxed(val, data->base + EILVL);
+ 
+ 	val = readl_relaxed(data->base + EIEDG);
+-	if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH)
++	if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH) {
+ 		val &= ~BIT(d->hwirq);
+-	else
++		irq_set_handler_locked(d, handle_fasteoi_irq);
++	} else {
+ 		val |= BIT(d->hwirq);
++		irq_set_handler_locked(d, handle_fasteoi_ack_irq);
++	}
+ 	writel_relaxed(val, data->base + EIEDG);
+ 
+ 	writel_relaxed(BIT(d->hwirq), data->base + EIREQCLR);
+@@ -104,6 +122,7 @@ static int exiu_irq_set_type(struct irq_data *d, unsigned int type)
+ 
+ static struct irq_chip exiu_irq_chip = {
+ 	.name			= "EXIU",
++	.irq_ack		= exiu_irq_ack,
+ 	.irq_eoi		= exiu_irq_eoi,
+ 	.irq_enable		= exiu_irq_enable,
+ 	.irq_mask		= exiu_irq_mask,
+diff --git a/drivers/irqchip/irq-xtensa-mx.c b/drivers/irqchip/irq-xtensa-mx.c
+index 27933338f7b36..8c581c985aa7d 100644
+--- a/drivers/irqchip/irq-xtensa-mx.c
++++ b/drivers/irqchip/irq-xtensa-mx.c
+@@ -151,14 +151,25 @@ static struct irq_chip xtensa_mx_irq_chip = {
+ 	.irq_set_affinity = xtensa_mx_irq_set_affinity,
+ };
+ 
++static void __init xtensa_mx_init_common(struct irq_domain *root_domain)
++{
++	unsigned int i;
++
++	irq_set_default_host(root_domain);
++	secondary_init_irq();
++
++	/* Initialize default IRQ routing to CPU 0 */
++	for (i = 0; i < XCHAL_NUM_EXTINTERRUPTS; ++i)
++		set_er(1, MIROUT(i));
++}
++
+ int __init xtensa_mx_init_legacy(struct device_node *interrupt_parent)
+ {
+ 	struct irq_domain *root_domain =
+ 		irq_domain_add_legacy(NULL, NR_IRQS - 1, 1, 0,
+ 				&xtensa_mx_irq_domain_ops,
+ 				&xtensa_mx_irq_chip);
+-	irq_set_default_host(root_domain);
+-	secondary_init_irq();
++	xtensa_mx_init_common(root_domain);
+ 	return 0;
+ }
+ 
+@@ -168,8 +179,7 @@ static int __init xtensa_mx_init(struct device_node *np,
+ 	struct irq_domain *root_domain =
+ 		irq_domain_add_linear(np, NR_IRQS, &xtensa_mx_irq_domain_ops,
+ 				&xtensa_mx_irq_chip);
+-	irq_set_default_host(root_domain);
+-	secondary_init_irq();
++	xtensa_mx_init_common(root_domain);
+ 	return 0;
+ }
+ IRQCHIP_DECLARE(xtensa_mx_irq_chip, "cdns,xtensa-mx", xtensa_mx_init);
+diff --git a/drivers/macintosh/Kconfig b/drivers/macintosh/Kconfig
+index 5cdc361da37cb..539a2ed4e13dc 100644
+--- a/drivers/macintosh/Kconfig
++++ b/drivers/macintosh/Kconfig
+@@ -44,6 +44,7 @@ config ADB_IOP
+ config ADB_CUDA
+ 	bool "Support for Cuda/Egret based Macs and PowerMacs"
+ 	depends on (ADB || PPC_PMAC) && !PPC_PMAC64
++	select RTC_LIB
+ 	help
+ 	  This provides support for Cuda/Egret based Macintosh and
+ 	  Power Macintosh systems. This includes most m68k based Macs,
+@@ -57,6 +58,7 @@ config ADB_CUDA
+ config ADB_PMU
+ 	bool "Support for PMU based PowerMacs and PowerBooks"
+ 	depends on PPC_PMAC || MAC
++	select RTC_LIB
+ 	help
+ 	  On PowerBooks, iBooks, and recent iMacs and Power Macintoshes, the
+ 	  PMU is an embedded microprocessor whose primary function is to
+@@ -67,6 +69,10 @@ config ADB_PMU
+ 	  this device; you should do so if your machine is one of those
+ 	  mentioned above.
+ 
++config ADB_PMU_EVENT
++	def_bool y
++	depends on ADB_PMU && INPUT=y
++
+ config ADB_PMU_LED
+ 	bool "Support for the Power/iBook front LED"
+ 	depends on PPC_PMAC && ADB_PMU
+diff --git a/drivers/macintosh/Makefile b/drivers/macintosh/Makefile
+index 49819b1b6f201..712edcb3e0b08 100644
+--- a/drivers/macintosh/Makefile
++++ b/drivers/macintosh/Makefile
+@@ -12,7 +12,8 @@ obj-$(CONFIG_MAC_EMUMOUSEBTN)	+= mac_hid.o
+ obj-$(CONFIG_INPUT_ADBHID)	+= adbhid.o
+ obj-$(CONFIG_ANSLCD)		+= ans-lcd.o
+ 
+-obj-$(CONFIG_ADB_PMU)		+= via-pmu.o via-pmu-event.o
++obj-$(CONFIG_ADB_PMU)		+= via-pmu.o
++obj-$(CONFIG_ADB_PMU_EVENT)	+= via-pmu-event.o
+ obj-$(CONFIG_ADB_PMU_LED)	+= via-pmu-led.o
+ obj-$(CONFIG_PMAC_BACKLIGHT)	+= via-pmu-backlight.o
+ obj-$(CONFIG_ADB_CUDA)		+= via-cuda.o
+diff --git a/drivers/macintosh/via-pmu.c b/drivers/macintosh/via-pmu.c
+index 73e6ae88fafd4..aae6328b2429d 100644
+--- a/drivers/macintosh/via-pmu.c
++++ b/drivers/macintosh/via-pmu.c
+@@ -1460,7 +1460,7 @@ next:
+ 		pmu_pass_intr(data, len);
+ 		/* len == 6 is probably a bad check. But how do I
+ 		 * know what PMU versions send what events here? */
+-		if (len == 6) {
++		if (IS_ENABLED(CONFIG_ADB_PMU_EVENT) && len == 6) {
+ 			via_pmu_event(PMU_EVT_POWER, !!(data[1]&8));
+ 			via_pmu_event(PMU_EVT_LID, data[1]&1);
+ 		}
+diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
+index 3e7d4b20ab34f..4229b9b5da98f 100644
+--- a/drivers/mailbox/mailbox.c
++++ b/drivers/mailbox/mailbox.c
+@@ -82,11 +82,11 @@ static void msg_submit(struct mbox_chan *chan)
+ exit:
+ 	spin_unlock_irqrestore(&chan->lock, flags);
+ 
+-	/* kick start the timer immediately to avoid delays */
+ 	if (!err && (chan->txdone_method & TXDONE_BY_POLL)) {
+-		/* but only if not already active */
+-		if (!hrtimer_active(&chan->mbox->poll_hrt))
+-			hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL);
++		/* kick start the timer immediately to avoid delays */
++		spin_lock_irqsave(&chan->mbox->poll_hrt_lock, flags);
++		hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL);
++		spin_unlock_irqrestore(&chan->mbox->poll_hrt_lock, flags);
+ 	}
+ }
+ 
+@@ -120,20 +120,26 @@ static enum hrtimer_restart txdone_hrtimer(struct hrtimer *hrtimer)
+ 		container_of(hrtimer, struct mbox_controller, poll_hrt);
+ 	bool txdone, resched = false;
+ 	int i;
++	unsigned long flags;
+ 
+ 	for (i = 0; i < mbox->num_chans; i++) {
+ 		struct mbox_chan *chan = &mbox->chans[i];
+ 
+ 		if (chan->active_req && chan->cl) {
+-			resched = true;
+ 			txdone = chan->mbox->ops->last_tx_done(chan);
+ 			if (txdone)
+ 				tx_tick(chan, 0);
++			else
++				resched = true;
+ 		}
+ 	}
+ 
+ 	if (resched) {
+-		hrtimer_forward_now(hrtimer, ms_to_ktime(mbox->txpoll_period));
++		spin_lock_irqsave(&mbox->poll_hrt_lock, flags);
++		if (!hrtimer_is_queued(hrtimer))
++			hrtimer_forward_now(hrtimer, ms_to_ktime(mbox->txpoll_period));
++		spin_unlock_irqrestore(&mbox->poll_hrt_lock, flags);
++
+ 		return HRTIMER_RESTART;
+ 	}
+ 	return HRTIMER_NORESTART;
+@@ -500,6 +506,7 @@ int mbox_controller_register(struct mbox_controller *mbox)
+ 		hrtimer_init(&mbox->poll_hrt, CLOCK_MONOTONIC,
+ 			     HRTIMER_MODE_REL);
+ 		mbox->poll_hrt.function = txdone_hrtimer;
++		spin_lock_init(&mbox->poll_hrt_lock);
+ 	}
+ 
+ 	for (i = 0; i < mbox->num_chans; i++) {
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 418914373a513..f64834785c8b9 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -2006,8 +2006,7 @@ int bch_btree_check(struct cache_set *c)
+ 	int i;
+ 	struct bkey *k = NULL;
+ 	struct btree_iter iter;
+-	struct btree_check_state *check_state;
+-	char name[32];
++	struct btree_check_state check_state;
+ 
+ 	/* check and mark root node keys */
+ 	for_each_key_filter(&c->root->keys, k, &iter, bch_ptr_invalid)
+@@ -2018,63 +2017,58 @@ int bch_btree_check(struct cache_set *c)
+ 	if (c->root->level == 0)
+ 		return 0;
+ 
+-	check_state = kzalloc(sizeof(struct btree_check_state), GFP_KERNEL);
+-	if (!check_state)
+-		return -ENOMEM;
+-
+-	check_state->c = c;
+-	check_state->total_threads = bch_btree_chkthread_nr();
+-	check_state->key_idx = 0;
+-	spin_lock_init(&check_state->idx_lock);
+-	atomic_set(&check_state->started, 0);
+-	atomic_set(&check_state->enough, 0);
+-	init_waitqueue_head(&check_state->wait);
++	check_state.c = c;
++	check_state.total_threads = bch_btree_chkthread_nr();
++	check_state.key_idx = 0;
++	spin_lock_init(&check_state.idx_lock);
++	atomic_set(&check_state.started, 0);
++	atomic_set(&check_state.enough, 0);
++	init_waitqueue_head(&check_state.wait);
+ 
++	rw_lock(0, c->root, c->root->level);
+ 	/*
+ 	 * Run multiple threads to check btree nodes in parallel,
+-	 * if check_state->enough is non-zero, it means current
++	 * if check_state.enough is non-zero, it means current
+ 	 * running check threads are enough, unncessary to create
+ 	 * more.
+ 	 */
+-	for (i = 0; i < check_state->total_threads; i++) {
+-		/* fetch latest check_state->enough earlier */
++	for (i = 0; i < check_state.total_threads; i++) {
++		/* fetch latest check_state.enough earlier */
+ 		smp_mb__before_atomic();
+-		if (atomic_read(&check_state->enough))
++		if (atomic_read(&check_state.enough))
+ 			break;
+ 
+-		check_state->infos[i].result = 0;
+-		check_state->infos[i].state = check_state;
+-		snprintf(name, sizeof(name), "bch_btrchk[%u]", i);
+-		atomic_inc(&check_state->started);
++		check_state.infos[i].result = 0;
++		check_state.infos[i].state = &check_state;
+ 
+-		check_state->infos[i].thread =
++		check_state.infos[i].thread =
+ 			kthread_run(bch_btree_check_thread,
+-				    &check_state->infos[i],
+-				    name);
+-		if (IS_ERR(check_state->infos[i].thread)) {
++				    &check_state.infos[i],
++				    "bch_btrchk[%d]", i);
++		if (IS_ERR(check_state.infos[i].thread)) {
+ 			pr_err("fails to run thread bch_btrchk[%d]\n", i);
+ 			for (--i; i >= 0; i--)
+-				kthread_stop(check_state->infos[i].thread);
++				kthread_stop(check_state.infos[i].thread);
+ 			ret = -ENOMEM;
+ 			goto out;
+ 		}
++		atomic_inc(&check_state.started);
+ 	}
+ 
+ 	/*
+ 	 * Must wait for all threads to stop.
+ 	 */
+-	wait_event_interruptible(check_state->wait,
+-				 atomic_read(&check_state->started) == 0);
++	wait_event(check_state.wait, atomic_read(&check_state.started) == 0);
+ 
+-	for (i = 0; i < check_state->total_threads; i++) {
+-		if (check_state->infos[i].result) {
+-			ret = check_state->infos[i].result;
++	for (i = 0; i < check_state.total_threads; i++) {
++		if (check_state.infos[i].result) {
++			ret = check_state.infos[i].result;
+ 			goto out;
+ 		}
+ 	}
+ 
+ out:
+-	kfree(check_state);
++	rw_unlock(0, c->root);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/md/bcache/btree.h b/drivers/md/bcache/btree.h
+index 50482107134f1..1b5fdbc0d83eb 100644
+--- a/drivers/md/bcache/btree.h
++++ b/drivers/md/bcache/btree.h
+@@ -226,7 +226,7 @@ struct btree_check_info {
+ 	int				result;
+ };
+ 
+-#define BCH_BTR_CHKTHREAD_MAX	64
++#define BCH_BTR_CHKTHREAD_MAX	12
+ struct btree_check_state {
+ 	struct cache_set		*c;
+ 	int				total_threads;
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index c6613e8173337..aea4833f67fc1 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -407,6 +407,11 @@ err:
+ 	return ret;
+ }
+ 
++void bch_journal_space_reserve(struct journal *j)
++{
++	j->do_reserve = true;
++}
++
+ /* Journalling */
+ 
+ static void btree_flush_write(struct cache_set *c)
+@@ -625,12 +630,30 @@ static void do_journal_discard(struct cache *ca)
+ 	}
+ }
+ 
++static unsigned int free_journal_buckets(struct cache_set *c)
++{
++	struct journal *j = &c->journal;
++	struct cache *ca = c->cache;
++	struct journal_device *ja = &c->cache->journal;
++	unsigned int n;
++
++	/* In case njournal_buckets is not power of 2 */
++	if (ja->cur_idx >= ja->discard_idx)
++		n = ca->sb.njournal_buckets +  ja->discard_idx - ja->cur_idx;
++	else
++		n = ja->discard_idx - ja->cur_idx;
++
++	if (n > (1 + j->do_reserve))
++		return n - (1 + j->do_reserve);
++
++	return 0;
++}
++
+ static void journal_reclaim(struct cache_set *c)
+ {
+ 	struct bkey *k = &c->journal.key;
+ 	struct cache *ca = c->cache;
+ 	uint64_t last_seq;
+-	unsigned int next;
+ 	struct journal_device *ja = &ca->journal;
+ 	atomic_t p __maybe_unused;
+ 
+@@ -653,12 +676,10 @@ static void journal_reclaim(struct cache_set *c)
+ 	if (c->journal.blocks_free)
+ 		goto out;
+ 
+-	next = (ja->cur_idx + 1) % ca->sb.njournal_buckets;
+-	/* No space available on this device */
+-	if (next == ja->discard_idx)
++	if (!free_journal_buckets(c))
+ 		goto out;
+ 
+-	ja->cur_idx = next;
++	ja->cur_idx = (ja->cur_idx + 1) % ca->sb.njournal_buckets;
+ 	k->ptr[0] = MAKE_PTR(0,
+ 			     bucket_to_sector(c, ca->sb.d[ja->cur_idx]),
+ 			     ca->sb.nr_this_dev);
+diff --git a/drivers/md/bcache/journal.h b/drivers/md/bcache/journal.h
+index f2ea34d5f431b..cd316b4a1e95f 100644
+--- a/drivers/md/bcache/journal.h
++++ b/drivers/md/bcache/journal.h
+@@ -105,6 +105,7 @@ struct journal {
+ 	spinlock_t		lock;
+ 	spinlock_t		flush_write_lock;
+ 	bool			btree_flushing;
++	bool			do_reserve;
+ 	/* used when waiting because the journal was full */
+ 	struct closure_waitlist	wait;
+ 	struct closure		io;
+@@ -182,5 +183,6 @@ int bch_journal_replay(struct cache_set *c, struct list_head *list);
+ 
+ void bch_journal_free(struct cache_set *c);
+ int bch_journal_alloc(struct cache_set *c);
++void bch_journal_space_reserve(struct journal *j);
+ 
+ #endif /* _BCACHE_JOURNAL_H */
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index 2143263831456..97895262fc542 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -1109,6 +1109,12 @@ static void detached_dev_do_request(struct bcache_device *d, struct bio *bio)
+ 	 * which would call closure_get(&dc->disk.cl)
+ 	 */
+ 	ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO);
++	if (!ddip) {
++		bio->bi_status = BLK_STS_RESOURCE;
++		bio->bi_end_io(bio);
++		return;
++	}
++
+ 	ddip->d = d;
+ 	/* Count on the bcache device */
+ 	ddip->start_time = part_start_io_acct(d->disk, &ddip->part, bio);
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 81f1cc5b34999..7f5ea25096430 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -2150,6 +2150,7 @@ static int run_cache_set(struct cache_set *c)
+ 
+ 	flash_devs_run(c);
+ 
++	bch_journal_space_reserve(&c->journal);
+ 	set_bit(CACHE_SET_RUNNING, &c->flags);
+ 	return 0;
+ err:
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 952253f24175a..0145046a45f43 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -756,13 +756,11 @@ static int bch_writeback_thread(void *arg)
+ 
+ /* Init */
+ #define INIT_KEYS_EACH_TIME	500000
+-#define INIT_KEYS_SLEEP_MS	100
+ 
+ struct sectors_dirty_init {
+ 	struct btree_op	op;
+ 	unsigned int	inode;
+ 	size_t		count;
+-	struct bkey	start;
+ };
+ 
+ static int sectors_dirty_init_fn(struct btree_op *_op, struct btree *b,
+@@ -778,11 +776,8 @@ static int sectors_dirty_init_fn(struct btree_op *_op, struct btree *b,
+ 					     KEY_START(k), KEY_SIZE(k));
+ 
+ 	op->count++;
+-	if (atomic_read(&b->c->search_inflight) &&
+-	    !(op->count % INIT_KEYS_EACH_TIME)) {
+-		bkey_copy_key(&op->start, k);
+-		return -EAGAIN;
+-	}
++	if (!(op->count % INIT_KEYS_EACH_TIME))
++		cond_resched();
+ 
+ 	return MAP_CONTINUE;
+ }
+@@ -797,24 +792,16 @@ static int bch_root_node_dirty_init(struct cache_set *c,
+ 	bch_btree_op_init(&op.op, -1);
+ 	op.inode = d->id;
+ 	op.count = 0;
+-	op.start = KEY(op.inode, 0, 0);
+-
+-	do {
+-		ret = bcache_btree(map_keys_recurse,
+-				   k,
+-				   c->root,
+-				   &op.op,
+-				   &op.start,
+-				   sectors_dirty_init_fn,
+-				   0);
+-		if (ret == -EAGAIN)
+-			schedule_timeout_interruptible(
+-				msecs_to_jiffies(INIT_KEYS_SLEEP_MS));
+-		else if (ret < 0) {
+-			pr_warn("sectors dirty init failed, ret=%d!\n", ret);
+-			break;
+-		}
+-	} while (ret == -EAGAIN);
++
++	ret = bcache_btree(map_keys_recurse,
++			   k,
++			   c->root,
++			   &op.op,
++			   &KEY(op.inode, 0, 0),
++			   sectors_dirty_init_fn,
++			   0);
++	if (ret < 0)
++		pr_warn("sectors dirty init failed, ret=%d!\n", ret);
+ 
+ 	return ret;
+ }
+@@ -858,7 +845,6 @@ static int bch_dirty_init_thread(void *arg)
+ 				goto out;
+ 			}
+ 			skip_nr--;
+-			cond_resched();
+ 		}
+ 
+ 		if (p) {
+@@ -868,7 +854,6 @@ static int bch_dirty_init_thread(void *arg)
+ 
+ 		p = NULL;
+ 		prev_idx = cur_idx;
+-		cond_resched();
+ 	}
+ 
+ out:
+@@ -899,67 +884,55 @@ void bch_sectors_dirty_init(struct bcache_device *d)
+ 	struct btree_iter iter;
+ 	struct sectors_dirty_init op;
+ 	struct cache_set *c = d->c;
+-	struct bch_dirty_init_state *state;
+-	char name[32];
++	struct bch_dirty_init_state state;
+ 
+ 	/* Just count root keys if no leaf node */
++	rw_lock(0, c->root, c->root->level);
+ 	if (c->root->level == 0) {
+ 		bch_btree_op_init(&op.op, -1);
+ 		op.inode = d->id;
+ 		op.count = 0;
+-		op.start = KEY(op.inode, 0, 0);
+ 
+ 		for_each_key_filter(&c->root->keys,
+ 				    k, &iter, bch_ptr_invalid)
+ 			sectors_dirty_init_fn(&op.op, c->root, k);
+-		return;
+-	}
+ 
+-	state = kzalloc(sizeof(struct bch_dirty_init_state), GFP_KERNEL);
+-	if (!state) {
+-		pr_warn("sectors dirty init failed: cannot allocate memory\n");
++		rw_unlock(0, c->root);
+ 		return;
+ 	}
+ 
+-	state->c = c;
+-	state->d = d;
+-	state->total_threads = bch_btre_dirty_init_thread_nr();
+-	state->key_idx = 0;
+-	spin_lock_init(&state->idx_lock);
+-	atomic_set(&state->started, 0);
+-	atomic_set(&state->enough, 0);
+-	init_waitqueue_head(&state->wait);
+-
+-	for (i = 0; i < state->total_threads; i++) {
+-		/* Fetch latest state->enough earlier */
++	state.c = c;
++	state.d = d;
++	state.total_threads = bch_btre_dirty_init_thread_nr();
++	state.key_idx = 0;
++	spin_lock_init(&state.idx_lock);
++	atomic_set(&state.started, 0);
++	atomic_set(&state.enough, 0);
++	init_waitqueue_head(&state.wait);
++
++	for (i = 0; i < state.total_threads; i++) {
++		/* Fetch latest state.enough earlier */
+ 		smp_mb__before_atomic();
+-		if (atomic_read(&state->enough))
++		if (atomic_read(&state.enough))
+ 			break;
+ 
+-		state->infos[i].state = state;
+-		atomic_inc(&state->started);
+-		snprintf(name, sizeof(name), "bch_dirty_init[%d]", i);
+-
+-		state->infos[i].thread =
+-			kthread_run(bch_dirty_init_thread,
+-				    &state->infos[i],
+-				    name);
+-		if (IS_ERR(state->infos[i].thread)) {
++		state.infos[i].state = &state;
++		state.infos[i].thread =
++			kthread_run(bch_dirty_init_thread, &state.infos[i],
++				    "bch_dirtcnt[%d]", i);
++		if (IS_ERR(state.infos[i].thread)) {
+ 			pr_err("fails to run thread bch_dirty_init[%d]\n", i);
+ 			for (--i; i >= 0; i--)
+-				kthread_stop(state->infos[i].thread);
++				kthread_stop(state.infos[i].thread);
+ 			goto out;
+ 		}
++		atomic_inc(&state.started);
+ 	}
+ 
+-	/*
+-	 * Must wait for all threads to stop.
+-	 */
+-	wait_event_interruptible(state->wait,
+-		 atomic_read(&state->started) == 0);
+-
+ out:
+-	kfree(state);
++	/* Must wait for all threads to stop. */
++	wait_event(state.wait, atomic_read(&state.started) == 0);
++	rw_unlock(0, c->root);
+ }
+ 
+ void bch_cached_dev_writeback_init(struct cached_dev *dc)
+diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
+index 3f1230e22de01..0f1d96920630d 100644
+--- a/drivers/md/bcache/writeback.h
++++ b/drivers/md/bcache/writeback.h
+@@ -16,7 +16,7 @@
+ 
+ #define BCH_AUTO_GC_DIRTY_THRESHOLD	50
+ 
+-#define BCH_DIRTY_INIT_THRD_MAX	64
++#define BCH_DIRTY_INIT_THRD_MAX	12
+ /*
+  * 14 (16384ths) is chosen here as something that each backing device
+  * should be a reasonable fraction of the share, and not to blow up
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index ea3130e116801..d377ea0609255 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -639,14 +639,6 @@ re_read:
+ 	daemon_sleep = le32_to_cpu(sb->daemon_sleep) * HZ;
+ 	write_behind = le32_to_cpu(sb->write_behind);
+ 	sectors_reserved = le32_to_cpu(sb->sectors_reserved);
+-	/* Setup nodes/clustername only if bitmap version is
+-	 * cluster-compatible
+-	 */
+-	if (sb->version == cpu_to_le32(BITMAP_MAJOR_CLUSTERED)) {
+-		nodes = le32_to_cpu(sb->nodes);
+-		strlcpy(bitmap->mddev->bitmap_info.cluster_name,
+-				sb->cluster_name, 64);
+-	}
+ 
+ 	/* verify that the bitmap-specific fields are valid */
+ 	if (sb->magic != cpu_to_le32(BITMAP_MAGIC))
+@@ -668,6 +660,16 @@ re_read:
+ 		goto out;
+ 	}
+ 
++	/*
++	 * Setup nodes/clustername only if bitmap version is
++	 * cluster-compatible
++	 */
++	if (sb->version == cpu_to_le32(BITMAP_MAJOR_CLUSTERED)) {
++		nodes = le32_to_cpu(sb->nodes);
++		strlcpy(bitmap->mddev->bitmap_info.cluster_name,
++				sb->cluster_name, 64);
++	}
++
+ 	/* keep the array size field of the bitmap superblock up to date */
+ 	sb->sync_size = cpu_to_le64(bitmap->mddev->resync_max_sectors);
+ 
+@@ -700,9 +702,9 @@ re_read:
+ 
+ out:
+ 	kunmap_atomic(sb);
+-	/* Assigning chunksize is required for "re_read" */
+-	bitmap->mddev->bitmap_info.chunksize = chunksize;
+ 	if (err == 0 && nodes && (bitmap->cluster_slot < 0)) {
++		/* Assigning chunksize is required for "re_read" */
++		bitmap->mddev->bitmap_info.chunksize = chunksize;
+ 		err = md_setup_cluster(bitmap->mddev, nodes);
+ 		if (err) {
+ 			pr_warn("%s: Could not setup cluster service (%d)\n",
+@@ -713,18 +715,18 @@ out:
+ 		goto re_read;
+ 	}
+ 
+-
+ out_no_sb:
+-	if (test_bit(BITMAP_STALE, &bitmap->flags))
+-		bitmap->events_cleared = bitmap->mddev->events;
+-	bitmap->mddev->bitmap_info.chunksize = chunksize;
+-	bitmap->mddev->bitmap_info.daemon_sleep = daemon_sleep;
+-	bitmap->mddev->bitmap_info.max_write_behind = write_behind;
+-	bitmap->mddev->bitmap_info.nodes = nodes;
+-	if (bitmap->mddev->bitmap_info.space == 0 ||
+-	    bitmap->mddev->bitmap_info.space > sectors_reserved)
+-		bitmap->mddev->bitmap_info.space = sectors_reserved;
+-	if (err) {
++	if (err == 0) {
++		if (test_bit(BITMAP_STALE, &bitmap->flags))
++			bitmap->events_cleared = bitmap->mddev->events;
++		bitmap->mddev->bitmap_info.chunksize = chunksize;
++		bitmap->mddev->bitmap_info.daemon_sleep = daemon_sleep;
++		bitmap->mddev->bitmap_info.max_write_behind = write_behind;
++		bitmap->mddev->bitmap_info.nodes = nodes;
++		if (bitmap->mddev->bitmap_info.space == 0 ||
++			bitmap->mddev->bitmap_info.space > sectors_reserved)
++			bitmap->mddev->bitmap_info.space = sectors_reserved;
++	} else {
+ 		md_bitmap_print_sb(bitmap);
+ 		if (bitmap->cluster_slot < 0)
+ 			md_cluster_stop(bitmap->mddev);
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index cc3876500c4b2..7a9701adee738 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -2648,14 +2648,16 @@ static void sync_sbs(struct mddev *mddev, int nospares)
+ 
+ static bool does_sb_need_changing(struct mddev *mddev)
+ {
+-	struct md_rdev *rdev;
++	struct md_rdev *rdev = NULL, *iter;
+ 	struct mdp_superblock_1 *sb;
+ 	int role;
+ 
+ 	/* Find a good rdev */
+-	rdev_for_each(rdev, mddev)
+-		if ((rdev->raid_disk >= 0) && !test_bit(Faulty, &rdev->flags))
++	rdev_for_each(iter, mddev)
++		if ((iter->raid_disk >= 0) && !test_bit(Faulty, &iter->flags)) {
++			rdev = iter;
+ 			break;
++		}
+ 
+ 	/* No good device found. */
+ 	if (!rdev)
+@@ -9728,16 +9730,18 @@ static int read_rdev(struct mddev *mddev, struct md_rdev *rdev)
+ 
+ void md_reload_sb(struct mddev *mddev, int nr)
+ {
+-	struct md_rdev *rdev;
++	struct md_rdev *rdev = NULL, *iter;
+ 	int err;
+ 
+ 	/* Find the rdev */
+-	rdev_for_each_rcu(rdev, mddev) {
+-		if (rdev->desc_nr == nr)
++	rdev_for_each_rcu(iter, mddev) {
++		if (iter->desc_nr == nr) {
++			rdev = iter;
+ 			break;
++		}
+ 	}
+ 
+-	if (!rdev || rdev->desc_nr != nr) {
++	if (!rdev) {
+ 		pr_warn("%s: %d Could not find rdev with nr %d\n", __func__, __LINE__, nr);
+ 		return;
+ 	}
+diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c
+index 2e5698fbc3a87..e23aa608f66f6 100644
+--- a/drivers/media/cec/core/cec-adap.c
++++ b/drivers/media/cec/core/cec-adap.c
+@@ -1271,7 +1271,7 @@ static int cec_config_log_addr(struct cec_adapter *adap,
+ 		 * While trying to poll the physical address was reset
+ 		 * and the adapter was unconfigured, so bail out.
+ 		 */
+-		if (!adap->is_configuring)
++		if (adap->phys_addr == CEC_PHYS_ADDR_INVALID)
+ 			return -EINTR;
+ 
+ 		if (err)
+@@ -1328,7 +1328,6 @@ static void cec_adap_unconfigure(struct cec_adapter *adap)
+ 	    adap->phys_addr != CEC_PHYS_ADDR_INVALID)
+ 		WARN_ON(adap->ops->adap_log_addr(adap, CEC_LOG_ADDR_INVALID));
+ 	adap->log_addrs.log_addr_mask = 0;
+-	adap->is_configuring = false;
+ 	adap->is_configured = false;
+ 	cec_flush(adap);
+ 	wake_up_interruptible(&adap->kthread_waitq);
+@@ -1520,9 +1519,10 @@ unconfigure:
+ 	for (i = 0; i < las->num_log_addrs; i++)
+ 		las->log_addr[i] = CEC_LOG_ADDR_INVALID;
+ 	cec_adap_unconfigure(adap);
++	adap->is_configuring = false;
+ 	adap->kthread_config = NULL;
+-	mutex_unlock(&adap->lock);
+ 	complete(&adap->config_completion);
++	mutex_unlock(&adap->lock);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
+index b42b289faaef4..154776d0069ea 100644
+--- a/drivers/media/i2c/ov7670.c
++++ b/drivers/media/i2c/ov7670.c
+@@ -2000,7 +2000,6 @@ static int ov7670_remove(struct i2c_client *client)
+ 	v4l2_async_unregister_subdev(sd);
+ 	v4l2_ctrl_handler_free(&info->hdl);
+ 	media_entity_cleanup(&info->sd.entity);
+-	ov7670_power_off(sd);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/pci/cx23885/cx23885-core.c b/drivers/media/pci/cx23885/cx23885-core.c
+index 4e8132d4b2dfa..a23c025595a04 100644
+--- a/drivers/media/pci/cx23885/cx23885-core.c
++++ b/drivers/media/pci/cx23885/cx23885-core.c
+@@ -2154,7 +2154,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev,
+ 	err = pci_set_dma_mask(pci_dev, 0xffffffff);
+ 	if (err) {
+ 		pr_err("%s/0: Oops: no 32bit PCI DMA ???\n", dev->name);
+-		goto fail_ctrl;
++		goto fail_dma_set_mask;
+ 	}
+ 
+ 	err = request_irq(pci_dev->irq, cx23885_irq,
+@@ -2162,7 +2162,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev,
+ 	if (err < 0) {
+ 		pr_err("%s: can't get IRQ %d\n",
+ 		       dev->name, pci_dev->irq);
+-		goto fail_irq;
++		goto fail_dma_set_mask;
+ 	}
+ 
+ 	switch (dev->board) {
+@@ -2184,7 +2184,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev,
+ 
+ 	return 0;
+ 
+-fail_irq:
++fail_dma_set_mask:
+ 	cx23885_dev_unregister(dev);
+ fail_ctrl:
+ 	v4l2_ctrl_handler_free(hdl);
+diff --git a/drivers/media/pci/cx25821/cx25821-core.c b/drivers/media/pci/cx25821/cx25821-core.c
+index 285047b32c44a..a3d45287a5343 100644
+--- a/drivers/media/pci/cx25821/cx25821-core.c
++++ b/drivers/media/pci/cx25821/cx25821-core.c
+@@ -1340,11 +1340,11 @@ static void cx25821_finidev(struct pci_dev *pci_dev)
+ 	struct cx25821_dev *dev = get_cx25821(v4l2_dev);
+ 
+ 	cx25821_shutdown(dev);
+-	pci_disable_device(pci_dev);
+ 
+ 	/* unregister stuff */
+ 	if (pci_dev->irq)
+ 		free_irq(pci_dev->irq, dev);
++	pci_disable_device(pci_dev);
+ 
+ 	cx25821_dev_unregister(dev);
+ 	v4l2_device_unregister(v4l2_dev);
+diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
+index 757a58829a512..9d9124308f6ad 100644
+--- a/drivers/media/platform/aspeed-video.c
++++ b/drivers/media/platform/aspeed-video.c
+@@ -1723,6 +1723,7 @@ static int aspeed_video_probe(struct platform_device *pdev)
+ 
+ 	rc = aspeed_video_setup_video(video);
+ 	if (rc) {
++		aspeed_video_free_buf(video, &video->jpeg);
+ 		clk_unprepare(video->vclk);
+ 		clk_unprepare(video->eclk);
+ 		return rc;
+@@ -1748,8 +1749,7 @@ static int aspeed_video_remove(struct platform_device *pdev)
+ 
+ 	v4l2_device_unregister(v4l2_dev);
+ 
+-	dma_free_coherent(video->dev, VE_JPEG_HEADER_SIZE, video->jpeg.virt,
+-			  video->jpeg.dma);
++	aspeed_video_free_buf(video, &video->jpeg);
+ 
+ 	of_reserved_mem_device_release(dev);
+ 
+diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c
+index 2333079a83c71..14d4830d5db51 100644
+--- a/drivers/media/platform/coda/coda-common.c
++++ b/drivers/media/platform/coda/coda-common.c
+@@ -1318,7 +1318,8 @@ static int coda_enum_frameintervals(struct file *file, void *fh,
+ 				    struct v4l2_frmivalenum *f)
+ {
+ 	struct coda_ctx *ctx = fh_to_ctx(fh);
+-	int i;
++	struct coda_q_data *q_data;
++	const struct coda_codec *codec;
+ 
+ 	if (f->index)
+ 		return -EINVAL;
+@@ -1327,12 +1328,19 @@ static int coda_enum_frameintervals(struct file *file, void *fh,
+ 	if (!ctx->vdoa && f->pixel_format == V4L2_PIX_FMT_YUYV)
+ 		return -EINVAL;
+ 
+-	for (i = 0; i < CODA_MAX_FORMATS; i++) {
+-		if (f->pixel_format == ctx->cvd->src_formats[i] ||
+-		    f->pixel_format == ctx->cvd->dst_formats[i])
+-			break;
++	if (coda_format_normalize_yuv(f->pixel_format) == V4L2_PIX_FMT_YUV420) {
++		q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
++		codec = coda_find_codec(ctx->dev, f->pixel_format,
++					q_data->fourcc);
++	} else {
++		codec = coda_find_codec(ctx->dev, V4L2_PIX_FMT_YUV420,
++					f->pixel_format);
+ 	}
+-	if (i == CODA_MAX_FORMATS)
++	if (!codec)
++		return -EINVAL;
++
++	if (f->width < MIN_W || f->width > codec->max_w ||
++	    f->height < MIN_H || f->height > codec->max_h)
+ 		return -EINVAL;
+ 
+ 	f->type = V4L2_FRMIVAL_TYPE_CONTINUOUS;
+@@ -2335,8 +2343,8 @@ static void coda_encode_ctrls(struct coda_ctx *ctx)
+ 		V4L2_CID_MPEG_VIDEO_H264_CHROMA_QP_INDEX_OFFSET, -12, 12, 1, 0);
+ 	v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops,
+ 		V4L2_CID_MPEG_VIDEO_H264_PROFILE,
+-		V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE, 0x0,
+-		V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE);
++		V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE, 0x0,
++		V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE);
+ 	if (ctx->dev->devtype->product == CODA_HX4 ||
+ 	    ctx->dev->devtype->product == CODA_7541) {
+ 		v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops,
+@@ -2350,12 +2358,15 @@ static void coda_encode_ctrls(struct coda_ctx *ctx)
+ 	if (ctx->dev->devtype->product == CODA_960) {
+ 		v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops,
+ 			V4L2_CID_MPEG_VIDEO_H264_LEVEL,
+-			V4L2_MPEG_VIDEO_H264_LEVEL_4_0,
+-			~((1 << V4L2_MPEG_VIDEO_H264_LEVEL_2_0) |
++			V4L2_MPEG_VIDEO_H264_LEVEL_4_2,
++			~((1 << V4L2_MPEG_VIDEO_H264_LEVEL_1_0) |
++			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_2_0) |
+ 			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_0) |
+ 			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_1) |
+ 			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_2) |
+-			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_0)),
++			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_0) |
++			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_1) |
++			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_2)),
+ 			V4L2_MPEG_VIDEO_H264_LEVEL_4_0);
+ 	}
+ 	v4l2_ctrl_new_std(&ctx->ctrls, &coda_ctrl_ops,
+@@ -2417,7 +2428,7 @@ static void coda_decode_ctrls(struct coda_ctx *ctx)
+ 	ctx->h264_profile_ctrl = v4l2_ctrl_new_std_menu(&ctx->ctrls,
+ 		&coda_ctrl_ops, V4L2_CID_MPEG_VIDEO_H264_PROFILE,
+ 		V4L2_MPEG_VIDEO_H264_PROFILE_HIGH,
+-		~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE) |
++		~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE) |
+ 		  (1 << V4L2_MPEG_VIDEO_H264_PROFILE_MAIN) |
+ 		  (1 << V4L2_MPEG_VIDEO_H264_PROFILE_HIGH)),
+ 		V4L2_MPEG_VIDEO_H264_PROFILE_HIGH);
+diff --git a/drivers/media/platform/exynos4-is/fimc-is.c b/drivers/media/platform/exynos4-is/fimc-is.c
+index d26fa5967d821..dc2a144cd29b6 100644
+--- a/drivers/media/platform/exynos4-is/fimc-is.c
++++ b/drivers/media/platform/exynos4-is/fimc-is.c
+@@ -140,7 +140,7 @@ static int fimc_is_enable_clocks(struct fimc_is *is)
+ 			dev_err(&is->pdev->dev, "clock %s enable failed\n",
+ 				fimc_is_clocks[i]);
+ 			for (--i; i >= 0; i--)
+-				clk_disable(is->clocks[i]);
++				clk_disable_unprepare(is->clocks[i]);
+ 			return ret;
+ 		}
+ 		pr_debug("enabled clock: %s\n", fimc_is_clocks[i]);
+@@ -830,7 +830,7 @@ static int fimc_is_probe(struct platform_device *pdev)
+ 
+ 	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+-		goto err_irq;
++		goto err_pm_disable;
+ 
+ 	vb2_dma_contig_set_max_seg_size(dev, DMA_BIT_MASK(32));
+ 
+@@ -864,6 +864,8 @@ err_pm:
+ 	pm_runtime_put_noidle(dev);
+ 	if (!pm_runtime_enabled(dev))
+ 		fimc_is_runtime_suspend(dev);
++err_pm_disable:
++	pm_runtime_disable(dev);
+ err_irq:
+ 	free_irq(is->irq, is);
+ err_clk:
+diff --git a/drivers/media/platform/exynos4-is/fimc-isp-video.h b/drivers/media/platform/exynos4-is/fimc-isp-video.h
+index edcb3a5e3cb90..2dd4ddbc748a1 100644
+--- a/drivers/media/platform/exynos4-is/fimc-isp-video.h
++++ b/drivers/media/platform/exynos4-is/fimc-isp-video.h
+@@ -32,7 +32,7 @@ static inline int fimc_isp_video_device_register(struct fimc_isp *isp,
+ 	return 0;
+ }
+ 
+-void fimc_isp_video_device_unregister(struct fimc_isp *isp,
++static inline void fimc_isp_video_device_unregister(struct fimc_isp *isp,
+ 				enum v4l2_buf_type type)
+ {
+ }
+diff --git a/drivers/media/platform/qcom/venus/hfi.c b/drivers/media/platform/qcom/venus/hfi.c
+index a59022adb14c7..966b4d9b57a97 100644
+--- a/drivers/media/platform/qcom/venus/hfi.c
++++ b/drivers/media/platform/qcom/venus/hfi.c
+@@ -104,6 +104,9 @@ int hfi_core_deinit(struct venus_core *core, bool blocking)
+ 		mutex_lock(&core->lock);
+ 	}
+ 
++	if (!core->ops)
++		goto unlock;
++
+ 	ret = core->ops->core_deinit(core);
+ 
+ 	if (!ret)
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index d99ea8973b678..e3246344fb724 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -868,7 +868,7 @@ static int rga_probe(struct platform_device *pdev)
+ 
+ 	ret = pm_runtime_resume_and_get(rga->dev);
+ 	if (ret < 0)
+-		goto rel_vdev;
++		goto rel_m2m;
+ 
+ 	rga->version.major = (rga_read(rga, RGA_VERSION_INFO) >> 24) & 0xFF;
+ 	rga->version.minor = (rga_read(rga, RGA_VERSION_INFO) >> 20) & 0x0F;
+@@ -884,7 +884,7 @@ static int rga_probe(struct platform_device *pdev)
+ 					   DMA_ATTR_WRITE_COMBINE);
+ 	if (!rga->cmdbuf_virt) {
+ 		ret = -ENOMEM;
+-		goto rel_vdev;
++		goto rel_m2m;
+ 	}
+ 
+ 	rga->src_mmu_pages =
+@@ -921,6 +921,8 @@ free_src_pages:
+ free_dma:
+ 	dma_free_attrs(rga->dev, RGA_CMDBUF_SIZE, rga->cmdbuf_virt,
+ 		       rga->cmdbuf_phy, DMA_ATTR_WRITE_COMBINE);
++rel_m2m:
++	v4l2_m2m_release(rga->m2m_dev);
+ rel_vdev:
+ 	video_device_release(vfd);
+ unreg_v4l2_dev:
+diff --git a/drivers/media/platform/sti/delta/delta-v4l2.c b/drivers/media/platform/sti/delta/delta-v4l2.c
+index c691b3d81549d..5da49a0ab70b9 100644
+--- a/drivers/media/platform/sti/delta/delta-v4l2.c
++++ b/drivers/media/platform/sti/delta/delta-v4l2.c
+@@ -1862,7 +1862,7 @@ static int delta_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_err(delta->dev, "%s failed to initialize firmware ipc channel\n",
+ 			DELTA_PREFIX);
+-		goto err;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* register all available decoders */
+@@ -1876,7 +1876,7 @@ static int delta_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_err(delta->dev, "%s failed to register V4L2 device\n",
+ 			DELTA_PREFIX);
+-		goto err;
++		goto err_pm_disable;
+ 	}
+ 
+ 	delta->work_queue = create_workqueue(DELTA_NAME);
+@@ -1901,6 +1901,8 @@ err_work_queue:
+ 	destroy_workqueue(delta->work_queue);
+ err_v4l2:
+ 	v4l2_device_unregister(&delta->v4l2_dev);
++err_pm_disable:
++	pm_runtime_disable(dev);
+ err:
+ 	return ret;
+ }
+diff --git a/drivers/media/platform/vsp1/vsp1_rpf.c b/drivers/media/platform/vsp1/vsp1_rpf.c
+index 85587c1b6a373..75083cb234fe3 100644
+--- a/drivers/media/platform/vsp1/vsp1_rpf.c
++++ b/drivers/media/platform/vsp1/vsp1_rpf.c
+@@ -291,11 +291,11 @@ static void rpf_configure_partition(struct vsp1_entity *entity,
+ 		     + crop.left * fmtinfo->bpp[0] / 8;
+ 
+ 	if (format->num_planes > 1) {
++		unsigned int bpl = format->plane_fmt[1].bytesperline;
+ 		unsigned int offset;
+ 
+-		offset = crop.top * format->plane_fmt[1].bytesperline
+-		       + crop.left / fmtinfo->hsub
+-		       * fmtinfo->bpp[1] / 8;
++		offset = crop.top / fmtinfo->vsub * bpl
++		       + crop.left / fmtinfo->hsub * fmtinfo->bpp[1] / 8;
+ 		mem.addr[1] += offset;
+ 		mem.addr[2] += offset;
+ 	}
+diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c
+index a7962ca2ac8e3..bc9ac6002e259 100644
+--- a/drivers/media/rc/imon.c
++++ b/drivers/media/rc/imon.c
+@@ -153,6 +153,24 @@ struct imon_context {
+ 	const struct imon_usb_dev_descr *dev_descr;
+ 					/* device description with key */
+ 					/* table for front panels */
++	/*
++	 * Fields for deferring free_imon_context().
++	 *
++	 * Since reference to "struct imon_context" is stored into
++	 * "struct file"->private_data, we need to remember
++	 * how many file descriptors might access this "struct imon_context".
++	 */
++	refcount_t users;
++	/*
++	 * Use a flag for telling display_open()/vfd_write()/lcd_write() that
++	 * imon_disconnect() was already called.
++	 */
++	bool disconnected;
++	/*
++	 * We need to wait for RCU grace period in order to allow
++	 * display_open() to safely check ->disconnected and increment ->users.
++	 */
++	struct rcu_head rcu;
+ };
+ 
+ #define TOUCH_TIMEOUT	(HZ/30)
+@@ -160,18 +178,18 @@ struct imon_context {
+ /* vfd character device file operations */
+ static const struct file_operations vfd_fops = {
+ 	.owner		= THIS_MODULE,
+-	.open		= &display_open,
+-	.write		= &vfd_write,
+-	.release	= &display_close,
++	.open		= display_open,
++	.write		= vfd_write,
++	.release	= display_close,
+ 	.llseek		= noop_llseek,
+ };
+ 
+ /* lcd character device file operations */
+ static const struct file_operations lcd_fops = {
+ 	.owner		= THIS_MODULE,
+-	.open		= &display_open,
+-	.write		= &lcd_write,
+-	.release	= &display_close,
++	.open		= display_open,
++	.write		= lcd_write,
++	.release	= display_close,
+ 	.llseek		= noop_llseek,
+ };
+ 
+@@ -439,9 +457,6 @@ static struct usb_driver imon_driver = {
+ 	.id_table	= imon_usb_id_table,
+ };
+ 
+-/* to prevent races between open() and disconnect(), probing, etc */
+-static DEFINE_MUTEX(driver_lock);
+-
+ /* Module bookkeeping bits */
+ MODULE_AUTHOR(MOD_AUTHOR);
+ MODULE_DESCRIPTION(MOD_DESC);
+@@ -481,9 +496,11 @@ static void free_imon_context(struct imon_context *ictx)
+ 	struct device *dev = ictx->dev;
+ 
+ 	usb_free_urb(ictx->tx_urb);
++	WARN_ON(ictx->dev_present_intf0);
+ 	usb_free_urb(ictx->rx_urb_intf0);
++	WARN_ON(ictx->dev_present_intf1);
+ 	usb_free_urb(ictx->rx_urb_intf1);
+-	kfree(ictx);
++	kfree_rcu(ictx, rcu);
+ 
+ 	dev_dbg(dev, "%s: iMON context freed\n", __func__);
+ }
+@@ -499,9 +516,6 @@ static int display_open(struct inode *inode, struct file *file)
+ 	int subminor;
+ 	int retval = 0;
+ 
+-	/* prevent races with disconnect */
+-	mutex_lock(&driver_lock);
+-
+ 	subminor = iminor(inode);
+ 	interface = usb_find_interface(&imon_driver, subminor);
+ 	if (!interface) {
+@@ -509,13 +523,16 @@ static int display_open(struct inode *inode, struct file *file)
+ 		retval = -ENODEV;
+ 		goto exit;
+ 	}
+-	ictx = usb_get_intfdata(interface);
+ 
+-	if (!ictx) {
++	rcu_read_lock();
++	ictx = usb_get_intfdata(interface);
++	if (!ictx || ictx->disconnected || !refcount_inc_not_zero(&ictx->users)) {
++		rcu_read_unlock();
+ 		pr_err("no context found for minor %d\n", subminor);
+ 		retval = -ENODEV;
+ 		goto exit;
+ 	}
++	rcu_read_unlock();
+ 
+ 	mutex_lock(&ictx->lock);
+ 
+@@ -533,8 +550,10 @@ static int display_open(struct inode *inode, struct file *file)
+ 
+ 	mutex_unlock(&ictx->lock);
+ 
++	if (retval && refcount_dec_and_test(&ictx->users))
++		free_imon_context(ictx);
++
+ exit:
+-	mutex_unlock(&driver_lock);
+ 	return retval;
+ }
+ 
+@@ -544,16 +563,9 @@ exit:
+  */
+ static int display_close(struct inode *inode, struct file *file)
+ {
+-	struct imon_context *ictx = NULL;
++	struct imon_context *ictx = file->private_data;
+ 	int retval = 0;
+ 
+-	ictx = file->private_data;
+-
+-	if (!ictx) {
+-		pr_err("no context for device\n");
+-		return -ENODEV;
+-	}
+-
+ 	mutex_lock(&ictx->lock);
+ 
+ 	if (!ictx->display_supported) {
+@@ -568,6 +580,8 @@ static int display_close(struct inode *inode, struct file *file)
+ 	}
+ 
+ 	mutex_unlock(&ictx->lock);
++	if (refcount_dec_and_test(&ictx->users))
++		free_imon_context(ictx);
+ 	return retval;
+ }
+ 
+@@ -937,15 +951,12 @@ static ssize_t vfd_write(struct file *file, const char __user *buf,
+ 	int offset;
+ 	int seq;
+ 	int retval = 0;
+-	struct imon_context *ictx;
++	struct imon_context *ictx = file->private_data;
+ 	static const unsigned char vfd_packet6[] = {
+ 		0x01, 0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF };
+ 
+-	ictx = file->private_data;
+-	if (!ictx) {
+-		pr_err_ratelimited("no context for device\n");
++	if (ictx->disconnected)
+ 		return -ENODEV;
+-	}
+ 
+ 	mutex_lock(&ictx->lock);
+ 
+@@ -1021,13 +1032,10 @@ static ssize_t lcd_write(struct file *file, const char __user *buf,
+ 			 size_t n_bytes, loff_t *pos)
+ {
+ 	int retval = 0;
+-	struct imon_context *ictx;
++	struct imon_context *ictx = file->private_data;
+ 
+-	ictx = file->private_data;
+-	if (!ictx) {
+-		pr_err_ratelimited("no context for device\n");
++	if (ictx->disconnected)
+ 		return -ENODEV;
+-	}
+ 
+ 	mutex_lock(&ictx->lock);
+ 
+@@ -2405,7 +2413,6 @@ static int imon_probe(struct usb_interface *interface,
+ 	int ifnum, sysfs_err;
+ 	int ret = 0;
+ 	struct imon_context *ictx = NULL;
+-	struct imon_context *first_if_ctx = NULL;
+ 	u16 vendor, product;
+ 
+ 	usbdev     = usb_get_dev(interface_to_usbdev(interface));
+@@ -2417,17 +2424,12 @@ static int imon_probe(struct usb_interface *interface,
+ 	dev_dbg(dev, "%s: found iMON device (%04x:%04x, intf%d)\n",
+ 		__func__, vendor, product, ifnum);
+ 
+-	/* prevent races probing devices w/multiple interfaces */
+-	mutex_lock(&driver_lock);
+-
+ 	first_if = usb_ifnum_to_if(usbdev, 0);
+ 	if (!first_if) {
+ 		ret = -ENODEV;
+ 		goto fail;
+ 	}
+ 
+-	first_if_ctx = usb_get_intfdata(first_if);
+-
+ 	if (ifnum == 0) {
+ 		ictx = imon_init_intf0(interface, id);
+ 		if (!ictx) {
+@@ -2435,9 +2437,11 @@ static int imon_probe(struct usb_interface *interface,
+ 			ret = -ENODEV;
+ 			goto fail;
+ 		}
++		refcount_set(&ictx->users, 1);
+ 
+ 	} else {
+ 		/* this is the secondary interface on the device */
++		struct imon_context *first_if_ctx = usb_get_intfdata(first_if);
+ 
+ 		/* fail early if first intf failed to register */
+ 		if (!first_if_ctx) {
+@@ -2451,14 +2455,13 @@ static int imon_probe(struct usb_interface *interface,
+ 			ret = -ENODEV;
+ 			goto fail;
+ 		}
++		refcount_inc(&ictx->users);
+ 
+ 	}
+ 
+ 	usb_set_intfdata(interface, ictx);
+ 
+ 	if (ifnum == 0) {
+-		mutex_lock(&ictx->lock);
+-
+ 		if (product == 0xffdc && ictx->rf_device) {
+ 			sysfs_err = sysfs_create_group(&interface->dev.kobj,
+ 						       &imon_rf_attr_group);
+@@ -2469,21 +2472,17 @@ static int imon_probe(struct usb_interface *interface,
+ 
+ 		if (ictx->display_supported)
+ 			imon_init_display(ictx, interface);
+-
+-		mutex_unlock(&ictx->lock);
+ 	}
+ 
+ 	dev_info(dev, "iMON device (%04x:%04x, intf%d) on usb<%d:%d> initialized\n",
+ 		 vendor, product, ifnum,
+ 		 usbdev->bus->busnum, usbdev->devnum);
+ 
+-	mutex_unlock(&driver_lock);
+ 	usb_put_dev(usbdev);
+ 
+ 	return 0;
+ 
+ fail:
+-	mutex_unlock(&driver_lock);
+ 	usb_put_dev(usbdev);
+ 	dev_err(dev, "unable to register, err %d\n", ret);
+ 
+@@ -2499,10 +2498,8 @@ static void imon_disconnect(struct usb_interface *interface)
+ 	struct device *dev;
+ 	int ifnum;
+ 
+-	/* prevent races with multi-interface device probing and display_open */
+-	mutex_lock(&driver_lock);
+-
+ 	ictx = usb_get_intfdata(interface);
++	ictx->disconnected = true;
+ 	dev = ictx->dev;
+ 	ifnum = interface->cur_altsetting->desc.bInterfaceNumber;
+ 
+@@ -2543,11 +2540,9 @@ static void imon_disconnect(struct usb_interface *interface)
+ 		}
+ 	}
+ 
+-	if (!ictx->dev_present_intf0 && !ictx->dev_present_intf1)
++	if (refcount_dec_and_test(&ictx->users))
+ 		free_imon_context(ictx);
+ 
+-	mutex_unlock(&driver_lock);
+-
+ 	dev_dbg(dev, "%s: iMON device (intf%d) disconnected\n",
+ 		__func__, ifnum);
+ }
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+index 3915d551d59e7..fccd1798445d5 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+@@ -2569,6 +2569,11 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf,
+ 	} while (0);
+ 	mutex_unlock(&pvr2_unit_mtx);
+ 
++	INIT_WORK(&hdw->workpoll, pvr2_hdw_worker_poll);
++
++	if (hdw->unit_number == -1)
++		goto fail;
++
+ 	cnt1 = 0;
+ 	cnt2 = scnprintf(hdw->name+cnt1,sizeof(hdw->name)-cnt1,"pvrusb2");
+ 	cnt1 += cnt2;
+@@ -2580,8 +2585,6 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf,
+ 	if (cnt1 >= sizeof(hdw->name)) cnt1 = sizeof(hdw->name)-1;
+ 	hdw->name[cnt1] = 0;
+ 
+-	INIT_WORK(&hdw->workpoll,pvr2_hdw_worker_poll);
+-
+ 	pvr2_trace(PVR2_TRACE_INIT,"Driver unit number is %d, name is %s",
+ 		   hdw->unit_number,hdw->name);
+ 
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index 753b8a99e08fc..b40a2b904acef 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -863,29 +863,31 @@ static int uvc_ioctl_enum_input(struct file *file, void *fh,
+ 	struct uvc_video_chain *chain = handle->chain;
+ 	const struct uvc_entity *selector = chain->selector;
+ 	struct uvc_entity *iterm = NULL;
++	struct uvc_entity *it;
+ 	u32 index = input->index;
+-	int pin = 0;
+ 
+ 	if (selector == NULL ||
+ 	    (chain->dev->quirks & UVC_QUIRK_IGNORE_SELECTOR_UNIT)) {
+ 		if (index != 0)
+ 			return -EINVAL;
+-		list_for_each_entry(iterm, &chain->entities, chain) {
+-			if (UVC_ENTITY_IS_ITERM(iterm))
++		list_for_each_entry(it, &chain->entities, chain) {
++			if (UVC_ENTITY_IS_ITERM(it)) {
++				iterm = it;
+ 				break;
++			}
+ 		}
+-		pin = iterm->id;
+ 	} else if (index < selector->bNrInPins) {
+-		pin = selector->baSourceID[index];
+-		list_for_each_entry(iterm, &chain->entities, chain) {
+-			if (!UVC_ENTITY_IS_ITERM(iterm))
++		list_for_each_entry(it, &chain->entities, chain) {
++			if (!UVC_ENTITY_IS_ITERM(it))
+ 				continue;
+-			if (iterm->id == pin)
++			if (it->id == selector->baSourceID[index]) {
++				iterm = it;
+ 				break;
++			}
+ 		}
+ 	}
+ 
+-	if (iterm == NULL || iterm->id != pin)
++	if (iterm == NULL)
+ 		return -EINVAL;
+ 
+ 	memset(input, 0, sizeof(*input));
+diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c
+index 3d230f07eaf21..049a1356f7dd4 100644
+--- a/drivers/memory/samsung/exynos5422-dmc.c
++++ b/drivers/memory/samsung/exynos5422-dmc.c
+@@ -1327,7 +1327,6 @@ static int exynos5_dmc_init_clks(struct exynos5_dmc *dmc)
+  */
+ static int exynos5_performance_counters_init(struct exynos5_dmc *dmc)
+ {
+-	int counters_size;
+ 	int ret, i;
+ 
+ 	dmc->num_counters = devfreq_event_get_edev_count(dmc->dev,
+@@ -1337,8 +1336,8 @@ static int exynos5_performance_counters_init(struct exynos5_dmc *dmc)
+ 		return dmc->num_counters;
+ 	}
+ 
+-	counters_size = sizeof(struct devfreq_event_dev) * dmc->num_counters;
+-	dmc->counter = devm_kzalloc(dmc->dev, counters_size, GFP_KERNEL);
++	dmc->counter = devm_kcalloc(dmc->dev, dmc->num_counters,
++				    sizeof(*dmc->counter), GFP_KERNEL);
+ 	if (!dmc->counter)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/mfd/davinci_voicecodec.c b/drivers/mfd/davinci_voicecodec.c
+index e5c8bc998eb4e..965820481f1e1 100644
+--- a/drivers/mfd/davinci_voicecodec.c
++++ b/drivers/mfd/davinci_voicecodec.c
+@@ -46,14 +46,12 @@ static int __init davinci_vc_probe(struct platform_device *pdev)
+ 	}
+ 	clk_enable(davinci_vc->clk);
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-
+-	fifo_base = (dma_addr_t)res->start;
+-	davinci_vc->base = devm_ioremap_resource(&pdev->dev, res);
++	davinci_vc->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ 	if (IS_ERR(davinci_vc->base)) {
+ 		ret = PTR_ERR(davinci_vc->base);
+ 		goto fail;
+ 	}
++	fifo_base = (dma_addr_t)res->start;
+ 
+ 	davinci_vc->regmap = devm_regmap_init_mmio(&pdev->dev,
+ 						   davinci_vc->base,
+diff --git a/drivers/mfd/ipaq-micro.c b/drivers/mfd/ipaq-micro.c
+index e92eeeb67a98a..4cd5ecc722112 100644
+--- a/drivers/mfd/ipaq-micro.c
++++ b/drivers/mfd/ipaq-micro.c
+@@ -403,7 +403,7 @@ static int __init micro_probe(struct platform_device *pdev)
+ 	micro_reset_comm(micro);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (!irq)
++	if (irq < 0)
+ 		return -EINVAL;
+ 	ret = devm_request_irq(&pdev->dev, irq, micro_serial_isr,
+ 			       IRQF_SHARED, "ipaq-micro",
+diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c
+index 4d1b44de14921..c742ab02ae186 100644
+--- a/drivers/misc/ocxl/file.c
++++ b/drivers/misc/ocxl/file.c
+@@ -558,7 +558,9 @@ int ocxl_file_register_afu(struct ocxl_afu *afu)
+ 
+ err_unregister:
+ 	ocxl_sysfs_unregister_afu(info); // safe to call even if register failed
++	free_minor(info);
+ 	device_unregister(&info->dev);
++	return rc;
+ err_put:
+ 	ocxl_afu_put(afu);
+ 	free_minor(info);
+diff --git a/drivers/mmc/host/jz4740_mmc.c b/drivers/mmc/host/jz4740_mmc.c
+index a1f92fed2a55b..aa3dfb9c1071b 100644
+--- a/drivers/mmc/host/jz4740_mmc.c
++++ b/drivers/mmc/host/jz4740_mmc.c
+@@ -236,6 +236,26 @@ static int jz4740_mmc_acquire_dma_channels(struct jz4740_mmc_host *host)
+ 		return PTR_ERR(host->dma_rx);
+ 	}
+ 
++	/*
++	 * Limit the maximum segment size in any SG entry according to
++	 * the parameters of the DMA engine device.
++	 */
++	if (host->dma_tx) {
++		struct device *dev = host->dma_tx->device->dev;
++		unsigned int max_seg_size = dma_get_max_seg_size(dev);
++
++		if (max_seg_size < host->mmc->max_seg_size)
++			host->mmc->max_seg_size = max_seg_size;
++	}
++
++	if (host->dma_rx) {
++		struct device *dev = host->dma_rx->device->dev;
++		unsigned int max_seg_size = dma_get_max_seg_size(dev);
++
++		if (max_seg_size < host->mmc->max_seg_size)
++			host->mmc->max_seg_size = max_seg_size;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index a64ea143d1852..7cab9d831afb3 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -147,6 +147,9 @@ struct sdhci_am654_data {
+ 	int drv_strength;
+ 	int strb_sel;
+ 	u32 flags;
++	u32 quirks;
++
++#define SDHCI_AM654_QUIRK_FORCE_CDTEST BIT(0)
+ };
+ 
+ struct sdhci_am654_driver_data {
+@@ -369,6 +372,21 @@ static void sdhci_am654_write_b(struct sdhci_host *host, u8 val, int reg)
+ 	}
+ }
+ 
++static void sdhci_am654_reset(struct sdhci_host *host, u8 mask)
++{
++	u8 ctrl;
++	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
++	struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host);
++
++	sdhci_reset(host, mask);
++
++	if (sdhci_am654->quirks & SDHCI_AM654_QUIRK_FORCE_CDTEST) {
++		ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
++		ctrl |= SDHCI_CTRL_CDTEST_INS | SDHCI_CTRL_CDTEST_EN;
++		sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
++	}
++}
++
+ static int sdhci_am654_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ {
+ 	struct sdhci_host *host = mmc_priv(mmc);
+@@ -500,7 +518,7 @@ static struct sdhci_ops sdhci_j721e_4bit_ops = {
+ 	.set_clock = sdhci_j721e_4bit_set_clock,
+ 	.write_b = sdhci_am654_write_b,
+ 	.irq = sdhci_am654_cqhci_irq,
+-	.reset = sdhci_reset,
++	.reset = sdhci_am654_reset,
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_j721e_4bit_pdata = {
+@@ -719,6 +737,9 @@ static int sdhci_am654_get_of_property(struct platform_device *pdev,
+ 	device_property_read_u32(dev, "ti,clkbuf-sel",
+ 				 &sdhci_am654->clkbuf_sel);
+ 
++	if (device_property_read_bool(dev, "ti,fails-without-test-cd"))
++		sdhci_am654->quirks |= SDHCI_AM654_QUIRK_FORCE_CDTEST;
++
+ 	sdhci_get_of_property(pdev);
+ 
+ 	return 0;
+diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c
+index 96a27e06401fd..9bd65f3f805c0 100644
+--- a/drivers/mtd/chips/cfi_cmdset_0002.c
++++ b/drivers/mtd/chips/cfi_cmdset_0002.c
+@@ -59,6 +59,10 @@
+ #define CFI_SR_WBASB		BIT(3)
+ #define CFI_SR_SLSB		BIT(1)
+ 
++enum cfi_quirks {
++	CFI_QUIRK_DQ_TRUE_DATA = BIT(0),
++};
++
+ static int cfi_amdstd_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *);
+ static int cfi_amdstd_write_words(struct mtd_info *, loff_t, size_t, size_t *, const u_char *);
+ #if !FORCE_WORD_WRITE
+@@ -432,6 +436,15 @@ static void fixup_s29ns512p_sectors(struct mtd_info *mtd)
+ 		mtd->name);
+ }
+ 
++static void fixup_quirks(struct mtd_info *mtd)
++{
++	struct map_info *map = mtd->priv;
++	struct cfi_private *cfi = map->fldrv_priv;
++
++	if (cfi->mfr == CFI_MFR_AMD && cfi->id == 0x0c01)
++		cfi->quirks |= CFI_QUIRK_DQ_TRUE_DATA;
++}
++
+ /* Used to fix CFI-Tables of chips without Extended Query Tables */
+ static struct cfi_fixup cfi_nopri_fixup_table[] = {
+ 	{ CFI_MFR_SST, 0x234a, fixup_sst39vf }, /* SST39VF1602 */
+@@ -470,6 +483,7 @@ static struct cfi_fixup cfi_fixup_table[] = {
+ #if !FORCE_WORD_WRITE
+ 	{ CFI_MFR_ANY, CFI_ID_ANY, fixup_use_write_buffers },
+ #endif
++	{ CFI_MFR_ANY, CFI_ID_ANY, fixup_quirks },
+ 	{ 0, 0, NULL }
+ };
+ static struct cfi_fixup jedec_fixup_table[] = {
+@@ -798,21 +812,25 @@ static struct mtd_info *cfi_amdstd_setup(struct mtd_info *mtd)
+ }
+ 
+ /*
+- * Return true if the chip is ready.
++ * Return true if the chip is ready and has the correct value.
+  *
+  * Ready is one of: read mode, query mode, erase-suspend-read mode (in any
+  * non-suspended sector) and is indicated by no toggle bits toggling.
+  *
++ * Error are indicated by toggling bits or bits held with the wrong value,
++ * or with bits toggling.
++ *
+  * Note that anything more complicated than checking if no bits are toggling
+  * (including checking DQ5 for an error status) is tricky to get working
+  * correctly and is therefore not done	(particularly with interleaved chips
+  * as each chip must be checked independently of the others).
+  */
+ static int __xipram chip_ready(struct map_info *map, struct flchip *chip,
+-			       unsigned long addr)
++			       unsigned long addr, map_word *expected)
+ {
+ 	struct cfi_private *cfi = map->fldrv_priv;
+ 	map_word d, t;
++	int ret;
+ 
+ 	if (cfi_use_status_reg(cfi)) {
+ 		map_word ready = CMD(CFI_SR_DRB);
+@@ -822,57 +840,32 @@ static int __xipram chip_ready(struct map_info *map, struct flchip *chip,
+ 		 */
+ 		cfi_send_gen_cmd(0x70, cfi->addr_unlock1, chip->start, map, cfi,
+ 				 cfi->device_type, NULL);
+-		d = map_read(map, addr);
++		t = map_read(map, addr);
+ 
+-		return map_word_andequal(map, d, ready, ready);
++		return map_word_andequal(map, t, ready, ready);
+ 	}
+ 
+ 	d = map_read(map, addr);
+ 	t = map_read(map, addr);
+ 
+-	return map_word_equal(map, d, t);
++	ret = map_word_equal(map, d, t);
++
++	if (!ret || !expected)
++		return ret;
++
++	return map_word_equal(map, t, *expected);
+ }
+ 
+-/*
+- * Return true if the chip is ready and has the correct value.
+- *
+- * Ready is one of: read mode, query mode, erase-suspend-read mode (in any
+- * non-suspended sector) and it is indicated by no bits toggling.
+- *
+- * Error are indicated by toggling bits or bits held with the wrong value,
+- * or with bits toggling.
+- *
+- * Note that anything more complicated than checking if no bits are toggling
+- * (including checking DQ5 for an error status) is tricky to get working
+- * correctly and is therefore not done	(particularly with interleaved chips
+- * as each chip must be checked independently of the others).
+- *
+- */
+ static int __xipram chip_good(struct map_info *map, struct flchip *chip,
+-			      unsigned long addr, map_word expected)
++			      unsigned long addr, map_word *expected)
+ {
+ 	struct cfi_private *cfi = map->fldrv_priv;
+-	map_word oldd, curd;
+-
+-	if (cfi_use_status_reg(cfi)) {
+-		map_word ready = CMD(CFI_SR_DRB);
+-
+-		/*
+-		 * For chips that support status register, check device
+-		 * ready bit
+-		 */
+-		cfi_send_gen_cmd(0x70, cfi->addr_unlock1, chip->start, map, cfi,
+-				 cfi->device_type, NULL);
+-		curd = map_read(map, addr);
+-
+-		return map_word_andequal(map, curd, ready, ready);
+-	}
++	map_word *datum = expected;
+ 
+-	oldd = map_read(map, addr);
+-	curd = map_read(map, addr);
++	if (cfi->quirks & CFI_QUIRK_DQ_TRUE_DATA)
++		datum = NULL;
+ 
+-	return	map_word_equal(map, oldd, curd) &&
+-		map_word_equal(map, curd, expected);
++	return chip_ready(map, chip, addr, datum);
+ }
+ 
+ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr, int mode)
+@@ -889,7 +882,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr
+ 
+ 	case FL_STATUS:
+ 		for (;;) {
+-			if (chip_ready(map, chip, adr))
++			if (chip_ready(map, chip, adr, NULL))
+ 				break;
+ 
+ 			if (time_after(jiffies, timeo)) {
+@@ -927,7 +920,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr
+ 		chip->state = FL_ERASE_SUSPENDING;
+ 		chip->erase_suspended = 1;
+ 		for (;;) {
+-			if (chip_ready(map, chip, adr))
++			if (chip_ready(map, chip, adr, NULL))
+ 				break;
+ 
+ 			if (time_after(jiffies, timeo)) {
+@@ -1458,7 +1451,7 @@ static int do_otp_lock(struct map_info *map, struct flchip *chip, loff_t adr,
+ 	/* wait for chip to become ready */
+ 	timeo = jiffies + msecs_to_jiffies(2);
+ 	for (;;) {
+-		if (chip_ready(map, chip, adr))
++		if (chip_ready(map, chip, adr, NULL))
+ 			break;
+ 
+ 		if (time_after(jiffies, timeo)) {
+@@ -1694,7 +1687,7 @@ static int __xipram do_write_oneword_once(struct map_info *map,
+ 		 * "chip_good" to avoid the failure due to scheduling.
+ 		 */
+ 		if (time_after(jiffies, timeo) &&
+-		    !chip_good(map, chip, adr, datum)) {
++		    !chip_good(map, chip, adr, &datum)) {
+ 			xip_enable(map, chip, adr);
+ 			printk(KERN_WARNING "MTD %s(): software timeout\n", __func__);
+ 			xip_disable(map, chip, adr);
+@@ -1702,7 +1695,7 @@ static int __xipram do_write_oneword_once(struct map_info *map,
+ 			break;
+ 		}
+ 
+-		if (chip_good(map, chip, adr, datum)) {
++		if (chip_good(map, chip, adr, &datum)) {
+ 			if (cfi_check_err_status(map, chip, adr))
+ 				ret = -EIO;
+ 			break;
+@@ -1974,14 +1967,14 @@ static int __xipram do_write_buffer_wait(struct map_info *map,
+ 		 * "chip_good" to avoid the failure due to scheduling.
+ 		 */
+ 		if (time_after(jiffies, timeo) &&
+-		    !chip_good(map, chip, adr, datum)) {
++		    !chip_good(map, chip, adr, &datum)) {
+ 			pr_err("MTD %s(): software timeout, address:0x%.8lx.\n",
+ 			       __func__, adr);
+ 			ret = -EIO;
+ 			break;
+ 		}
+ 
+-		if (chip_good(map, chip, adr, datum)) {
++		if (chip_good(map, chip, adr, &datum)) {
+ 			if (cfi_check_err_status(map, chip, adr))
+ 				ret = -EIO;
+ 			break;
+@@ -2190,7 +2183,7 @@ static int cfi_amdstd_panic_wait(struct map_info *map, struct flchip *chip,
+ 	 * If the driver thinks the chip is idle, and no toggle bits
+ 	 * are changing, then the chip is actually idle for sure.
+ 	 */
+-	if (chip->state == FL_READY && chip_ready(map, chip, adr))
++	if (chip->state == FL_READY && chip_ready(map, chip, adr, NULL))
+ 		return 0;
+ 
+ 	/*
+@@ -2207,7 +2200,7 @@ static int cfi_amdstd_panic_wait(struct map_info *map, struct flchip *chip,
+ 
+ 		/* wait for the chip to become ready */
+ 		for (i = 0; i < jiffies_to_usecs(timeo); i++) {
+-			if (chip_ready(map, chip, adr))
++			if (chip_ready(map, chip, adr, NULL))
+ 				return 0;
+ 
+ 			udelay(1);
+@@ -2271,13 +2264,13 @@ retry:
+ 	map_write(map, datum, adr);
+ 
+ 	for (i = 0; i < jiffies_to_usecs(uWriteTimeout); i++) {
+-		if (chip_ready(map, chip, adr))
++		if (chip_ready(map, chip, adr, NULL))
+ 			break;
+ 
+ 		udelay(1);
+ 	}
+ 
+-	if (!chip_good(map, chip, adr, datum) ||
++	if (!chip_ready(map, chip, adr, &datum) ||
+ 	    cfi_check_err_status(map, chip, adr)) {
+ 		/* reset on all failures. */
+ 		map_write(map, CMD(0xF0), chip->start);
+@@ -2419,6 +2412,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip)
+ 	DECLARE_WAITQUEUE(wait, current);
+ 	int ret;
+ 	int retry_cnt = 0;
++	map_word datum = map_word_ff(map);
+ 
+ 	adr = cfi->addr_unlock1;
+ 
+@@ -2473,7 +2467,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip)
+ 			chip->erase_suspended = 0;
+ 		}
+ 
+-		if (chip_good(map, chip, adr, map_word_ff(map))) {
++		if (chip_ready(map, chip, adr, &datum)) {
+ 			if (cfi_check_err_status(map, chip, adr))
+ 				ret = -EIO;
+ 			break;
+@@ -2518,6 +2512,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip,
+ 	DECLARE_WAITQUEUE(wait, current);
+ 	int ret;
+ 	int retry_cnt = 0;
++	map_word datum = map_word_ff(map);
+ 
+ 	adr += chip->start;
+ 
+@@ -2572,7 +2567,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip,
+ 			chip->erase_suspended = 0;
+ 		}
+ 
+-		if (chip_good(map, chip, adr, map_word_ff(map))) {
++		if (chip_ready(map, chip, adr, &datum)) {
+ 			if (cfi_check_err_status(map, chip, adr))
+ 				ret = -EIO;
+ 			break;
+@@ -2766,7 +2761,7 @@ static int __maybe_unused do_ppb_xxlock(struct map_info *map,
+ 	 */
+ 	timeo = jiffies + msecs_to_jiffies(2000);	/* 2s max (un)locking */
+ 	for (;;) {
+-		if (chip_ready(map, chip, adr))
++		if (chip_ready(map, chip, adr, NULL))
+ 			break;
+ 
+ 		if (time_after(jiffies, timeo)) {
+diff --git a/drivers/mtd/nand/raw/cadence-nand-controller.c b/drivers/mtd/nand/raw/cadence-nand-controller.c
+index b46786cd53e0b..4fdb39214a124 100644
+--- a/drivers/mtd/nand/raw/cadence-nand-controller.c
++++ b/drivers/mtd/nand/raw/cadence-nand-controller.c
+@@ -2983,11 +2983,10 @@ static int cadence_nand_dt_probe(struct platform_device *ofdev)
+ 	if (IS_ERR(cdns_ctrl->reg))
+ 		return PTR_ERR(cdns_ctrl->reg);
+ 
+-	res = platform_get_resource(ofdev, IORESOURCE_MEM, 1);
+-	cdns_ctrl->io.dma = res->start;
+-	cdns_ctrl->io.virt = devm_ioremap_resource(&ofdev->dev, res);
++	cdns_ctrl->io.virt = devm_platform_get_and_ioremap_resource(ofdev, 1, &res);
+ 	if (IS_ERR(cdns_ctrl->io.virt))
+ 		return PTR_ERR(cdns_ctrl->io.virt);
++	cdns_ctrl->io.dma = res->start;
+ 
+ 	dt->clk = devm_clk_get(cdns_ctrl->dev, "nf_clk");
+ 	if (IS_ERR(dt->clk))
+diff --git a/drivers/mtd/nand/raw/denali_pci.c b/drivers/mtd/nand/raw/denali_pci.c
+index 20c085a30adcb..de7e722d38262 100644
+--- a/drivers/mtd/nand/raw/denali_pci.c
++++ b/drivers/mtd/nand/raw/denali_pci.c
+@@ -74,22 +74,21 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 		return ret;
+ 	}
+ 
+-	denali->reg = ioremap(csr_base, csr_len);
++	denali->reg = devm_ioremap(denali->dev, csr_base, csr_len);
+ 	if (!denali->reg) {
+ 		dev_err(&dev->dev, "Spectra: Unable to remap memory region\n");
+ 		return -ENOMEM;
+ 	}
+ 
+-	denali->host = ioremap(mem_base, mem_len);
++	denali->host = devm_ioremap(denali->dev, mem_base, mem_len);
+ 	if (!denali->host) {
+ 		dev_err(&dev->dev, "Spectra: ioremap failed!");
+-		ret = -ENOMEM;
+-		goto out_unmap_reg;
++		return -ENOMEM;
+ 	}
+ 
+ 	ret = denali_init(denali);
+ 	if (ret)
+-		goto out_unmap_host;
++		return ret;
+ 
+ 	nsels = denali->nbanks;
+ 
+@@ -117,10 +116,6 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 
+ out_remove_denali:
+ 	denali_remove(denali);
+-out_unmap_host:
+-	iounmap(denali->host);
+-out_unmap_reg:
+-	iounmap(denali->reg);
+ 	return ret;
+ }
+ 
+@@ -129,8 +124,6 @@ static void denali_pci_remove(struct pci_dev *dev)
+ 	struct denali_controller *denali = pci_get_drvdata(dev);
+ 
+ 	denali_remove(denali);
+-	iounmap(denali->reg);
+-	iounmap(denali->host);
+ }
+ 
+ static struct pci_driver denali_pci_driver = {
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 2b26a875a8550..e8146a47da123 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -827,6 +827,15 @@ static int spi_nor_write_16bit_sr_and_check(struct spi_nor *nor, u8 sr1)
+ 	if (ret)
+ 		return ret;
+ 
++	ret = spi_nor_read_sr(nor, sr_cr);
++	if (ret)
++		return ret;
++
++	if (sr1 != sr_cr[0]) {
++		dev_dbg(nor->dev, "SR: Read back test failed\n");
++		return -EIO;
++	}
++
+ 	if (nor->flags & SNOR_F_NO_READ_CR)
+ 		return 0;
+ 
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+index fa1246e399806..766dbd19bba61 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+@@ -426,7 +426,7 @@ struct mcp251xfd_hw_tef_obj {
+ /* The tx_obj_raw version is used in spi async, i.e. without
+  * regmap. We have to take care of endianness ourselves.
+  */
+-struct mcp251xfd_hw_tx_obj_raw {
++struct __packed mcp251xfd_hw_tx_obj_raw {
+ 	__le32 id;
+ 	__le32 flags;
+ 	u8 data[sizeof_field(struct canfd_frame, data)];
+diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
+index 375998263af7a..1c42417810fcd 100644
+--- a/drivers/net/can/xilinx_can.c
++++ b/drivers/net/can/xilinx_can.c
+@@ -239,7 +239,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd = {
+ };
+ 
+ /* AXI CANFD Data Bittiming constants as per AXI CANFD 1.0 specs */
+-static struct can_bittiming_const xcan_data_bittiming_const_canfd = {
++static const struct can_bittiming_const xcan_data_bittiming_const_canfd = {
+ 	.name = DRIVER_NAME,
+ 	.tseg1_min = 1,
+ 	.tseg1_max = 16,
+@@ -265,7 +265,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd2 = {
+ };
+ 
+ /* AXI CANFD 2.0 Data Bittiming constants as per AXI CANFD 2.0 spec */
+-static struct can_bittiming_const xcan_data_bittiming_const_canfd2 = {
++static const struct can_bittiming_const xcan_data_bittiming_const_canfd2 = {
+ 	.name = DRIVER_NAME,
+ 	.tseg1_min = 1,
+ 	.tseg1_max = 32,
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index c355824ddb814..265620a81f9f6 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1952,13 +1952,7 @@ static void mt7531_sgmii_validate(struct mt7530_priv *priv, int port,
+ 	/* Port5 supports ethier RGMII or SGMII.
+ 	 * Port6 supports SGMII only.
+ 	 */
+-	switch (port) {
+-	case 5:
+-		if (mt7531_is_rgmii_port(priv, port))
+-			break;
+-		fallthrough;
+-	case 6:
+-		phylink_set(supported, 1000baseX_Full);
++	if (port == 6) {
+ 		phylink_set(supported, 2500baseX_Full);
+ 		phylink_set(supported, 2500baseT_Full);
+ 	}
+@@ -2315,8 +2309,6 @@ static void
+ mt7530_mac_port_validate(struct dsa_switch *ds, int port,
+ 			 unsigned long *supported)
+ {
+-	if (port == 5)
+-		phylink_set(supported, 1000baseX_Full);
+ }
+ 
+ static void mt7531_mac_port_validate(struct dsa_switch *ds, int port,
+@@ -2353,8 +2345,10 @@ mt753x_phylink_validate(struct dsa_switch *ds, int port,
+ 	}
+ 
+ 	/* This switch only supports 1G full-duplex. */
+-	if (state->interface != PHY_INTERFACE_MODE_MII)
++	if (state->interface != PHY_INTERFACE_MODE_MII) {
+ 		phylink_set(mask, 1000baseT_Full);
++		phylink_set(mask, 1000baseX_Full);
++	}
+ 
+ 	priv->info->mac_port_validate(ds, port, mask);
+ 
+diff --git a/drivers/net/ethernet/broadcom/Makefile b/drivers/net/ethernet/broadcom/Makefile
+index 7046ad6d3d0e3..ac50da49ca770 100644
+--- a/drivers/net/ethernet/broadcom/Makefile
++++ b/drivers/net/ethernet/broadcom/Makefile
+@@ -16,3 +16,8 @@ obj-$(CONFIG_BGMAC_BCMA) += bgmac-bcma.o bgmac-bcma-mdio.o
+ obj-$(CONFIG_BGMAC_PLATFORM) += bgmac-platform.o
+ obj-$(CONFIG_SYSTEMPORT) += bcmsysport.o
+ obj-$(CONFIG_BNXT) += bnxt/
++
++# FIXME: temporarily silence -Warray-bounds on non W=1+ builds
++ifndef KBUILD_EXTRA_WARN
++CFLAGS_tg3.o += -Wno-array-bounds
++endif
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+index d6b6ebb3f1ec7..51e071c20e397 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+@@ -1150,7 +1150,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
+ 		fl6.daddr = ip6h->saddr;
+ 		fl6.fl6_dport = inet_rsk(oreq)->ir_rmt_port;
+ 		fl6.fl6_sport = htons(inet_rsk(oreq)->ir_num);
+-		security_req_classify_flow(oreq, flowi6_to_flowi(&fl6));
++		security_req_classify_flow(oreq, flowi6_to_flowi_common(&fl6));
+ 		dst = ip6_dst_lookup_flow(sock_net(lsk), lsk, &fl6, NULL);
+ 		if (IS_ERR(dst))
+ 			goto free_sk;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c
+index 4e4029d5c8e11..9553d280ec1b7 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c
+@@ -818,7 +818,6 @@ static int api_chain_init(struct hinic_api_cmd_chain *chain,
+ {
+ 	struct hinic_hwif *hwif = attr->hwif;
+ 	struct pci_dev *pdev = hwif->pdev;
+-	size_t cell_ctxt_size;
+ 
+ 	chain->hwif = hwif;
+ 	chain->chain_type  = attr->chain_type;
+@@ -830,8 +829,8 @@ static int api_chain_init(struct hinic_api_cmd_chain *chain,
+ 
+ 	sema_init(&chain->sem, 1);
+ 
+-	cell_ctxt_size = chain->num_cells * sizeof(*chain->cell_ctxt);
+-	chain->cell_ctxt = devm_kzalloc(&pdev->dev, cell_ctxt_size, GFP_KERNEL);
++	chain->cell_ctxt = devm_kcalloc(&pdev->dev, chain->num_cells,
++					sizeof(*chain->cell_ctxt), GFP_KERNEL);
+ 	if (!chain->cell_ctxt)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
+index 5a6bbee819cde..21b8235952d33 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
+@@ -796,11 +796,10 @@ static int init_cmdqs_ctxt(struct hinic_hwdev *hwdev,
+ 	struct hinic_cmdq_ctxt *cmdq_ctxts;
+ 	struct pci_dev *pdev = hwif->pdev;
+ 	struct hinic_pfhwdev *pfhwdev;
+-	size_t cmdq_ctxts_size;
+ 	int err;
+ 
+-	cmdq_ctxts_size = HINIC_MAX_CMDQ_TYPES * sizeof(*cmdq_ctxts);
+-	cmdq_ctxts = devm_kzalloc(&pdev->dev, cmdq_ctxts_size, GFP_KERNEL);
++	cmdq_ctxts = devm_kcalloc(&pdev->dev, HINIC_MAX_CMDQ_TYPES,
++				  sizeof(*cmdq_ctxts), GFP_KERNEL);
+ 	if (!cmdq_ctxts)
+ 		return -ENOMEM;
+ 
+@@ -884,7 +883,6 @@ int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif,
+ 	struct hinic_func_to_io *func_to_io = cmdqs_to_func_to_io(cmdqs);
+ 	struct pci_dev *pdev = hwif->pdev;
+ 	struct hinic_hwdev *hwdev;
+-	size_t saved_wqs_size;
+ 	u16 max_wqe_size;
+ 	int err;
+ 
+@@ -895,8 +893,8 @@ int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif,
+ 	if (!cmdqs->cmdq_buf_pool)
+ 		return -ENOMEM;
+ 
+-	saved_wqs_size = HINIC_MAX_CMDQ_TYPES * sizeof(struct hinic_wq);
+-	cmdqs->saved_wqs = devm_kzalloc(&pdev->dev, saved_wqs_size, GFP_KERNEL);
++	cmdqs->saved_wqs = devm_kcalloc(&pdev->dev, HINIC_MAX_CMDQ_TYPES,
++					sizeof(*cmdqs->saved_wqs), GFP_KERNEL);
+ 	if (!cmdqs->saved_wqs) {
+ 		err = -ENOMEM;
+ 		goto err_saved_wqs;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
+index 0c74f66746344..799b85c88eff8 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
+@@ -162,7 +162,6 @@ static int init_msix(struct hinic_hwdev *hwdev)
+ 	struct hinic_hwif *hwif = hwdev->hwif;
+ 	struct pci_dev *pdev = hwif->pdev;
+ 	int nr_irqs, num_aeqs, num_ceqs;
+-	size_t msix_entries_size;
+ 	int i, err;
+ 
+ 	num_aeqs = HINIC_HWIF_NUM_AEQS(hwif);
+@@ -171,8 +170,8 @@ static int init_msix(struct hinic_hwdev *hwdev)
+ 	if (nr_irqs > HINIC_HWIF_NUM_IRQS(hwif))
+ 		nr_irqs = HINIC_HWIF_NUM_IRQS(hwif);
+ 
+-	msix_entries_size = nr_irqs * sizeof(*hwdev->msix_entries);
+-	hwdev->msix_entries = devm_kzalloc(&pdev->dev, msix_entries_size,
++	hwdev->msix_entries = devm_kcalloc(&pdev->dev, nr_irqs,
++					   sizeof(*hwdev->msix_entries),
+ 					   GFP_KERNEL);
+ 	if (!hwdev->msix_entries)
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c
+index 19942fef99d97..7396158df64f7 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c
+@@ -631,16 +631,15 @@ static int alloc_eq_pages(struct hinic_eq *eq)
+ 	struct hinic_hwif *hwif = eq->hwif;
+ 	struct pci_dev *pdev = hwif->pdev;
+ 	u32 init_val, addr, val;
+-	size_t addr_size;
+ 	int err, pg;
+ 
+-	addr_size = eq->num_pages * sizeof(*eq->dma_addr);
+-	eq->dma_addr = devm_kzalloc(&pdev->dev, addr_size, GFP_KERNEL);
++	eq->dma_addr = devm_kcalloc(&pdev->dev, eq->num_pages,
++				    sizeof(*eq->dma_addr), GFP_KERNEL);
+ 	if (!eq->dma_addr)
+ 		return -ENOMEM;
+ 
+-	addr_size = eq->num_pages * sizeof(*eq->virt_addr);
+-	eq->virt_addr = devm_kzalloc(&pdev->dev, addr_size, GFP_KERNEL);
++	eq->virt_addr = devm_kcalloc(&pdev->dev, eq->num_pages,
++				     sizeof(*eq->virt_addr), GFP_KERNEL);
+ 	if (!eq->virt_addr) {
+ 		err = -ENOMEM;
+ 		goto err_virt_addr_alloc;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+index 819fa13034c05..027dcc4535065 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+@@ -647,6 +647,7 @@ int hinic_pf_to_mgmt_init(struct hinic_pf_to_mgmt *pf_to_mgmt,
+ 	err = alloc_msg_buf(pf_to_mgmt);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "Failed to allocate msg buffers\n");
++		destroy_workqueue(pf_to_mgmt->workq);
+ 		hinic_health_reporters_destroy(hwdev->devlink_dev);
+ 		return err;
+ 	}
+@@ -654,6 +655,7 @@ int hinic_pf_to_mgmt_init(struct hinic_pf_to_mgmt *pf_to_mgmt,
+ 	err = hinic_api_cmd_init(pf_to_mgmt->cmd_chain, hwif);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "Failed to initialize cmd chains\n");
++		destroy_workqueue(pf_to_mgmt->workq);
+ 		hinic_health_reporters_destroy(hwdev->devlink_dev);
+ 		return err;
+ 	}
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
+index f04ac00e3e703..f930cd6a75f71 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
+@@ -192,20 +192,20 @@ static int alloc_page_arrays(struct hinic_wqs *wqs)
+ {
+ 	struct hinic_hwif *hwif = wqs->hwif;
+ 	struct pci_dev *pdev = hwif->pdev;
+-	size_t size;
+ 
+-	size = wqs->num_pages * sizeof(*wqs->page_paddr);
+-	wqs->page_paddr = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
++	wqs->page_paddr = devm_kcalloc(&pdev->dev, wqs->num_pages,
++				       sizeof(*wqs->page_paddr), GFP_KERNEL);
+ 	if (!wqs->page_paddr)
+ 		return -ENOMEM;
+ 
+-	size = wqs->num_pages * sizeof(*wqs->page_vaddr);
+-	wqs->page_vaddr = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
++	wqs->page_vaddr = devm_kcalloc(&pdev->dev, wqs->num_pages,
++				       sizeof(*wqs->page_vaddr), GFP_KERNEL);
+ 	if (!wqs->page_vaddr)
+ 		goto err_page_vaddr;
+ 
+-	size = wqs->num_pages * sizeof(*wqs->shadow_page_vaddr);
+-	wqs->shadow_page_vaddr = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
++	wqs->shadow_page_vaddr = devm_kcalloc(&pdev->dev, wqs->num_pages,
++					      sizeof(*wqs->shadow_page_vaddr),
++					      GFP_KERNEL);
+ 	if (!wqs->shadow_page_vaddr)
+ 		goto err_page_shadow_vaddr;
+ 
+@@ -378,15 +378,14 @@ static int alloc_wqes_shadow(struct hinic_wq *wq)
+ {
+ 	struct hinic_hwif *hwif = wq->hwif;
+ 	struct pci_dev *pdev = hwif->pdev;
+-	size_t size;
+ 
+-	size = wq->num_q_pages * wq->max_wqe_size;
+-	wq->shadow_wqe = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
++	wq->shadow_wqe = devm_kcalloc(&pdev->dev, wq->num_q_pages,
++				      wq->max_wqe_size, GFP_KERNEL);
+ 	if (!wq->shadow_wqe)
+ 		return -ENOMEM;
+ 
+-	size = wq->num_q_pages * sizeof(wq->prod_idx);
+-	wq->shadow_idx = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
++	wq->shadow_idx = devm_kcalloc(&pdev->dev, wq->num_q_pages,
++				      sizeof(*wq->shadow_idx), GFP_KERNEL);
+ 	if (!wq->shadow_idx)
+ 		goto err_shadow_idx;
+ 
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+index 350225bbe0be1..ace949fe62331 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+@@ -144,13 +144,12 @@ static int create_txqs(struct hinic_dev *nic_dev)
+ {
+ 	int err, i, j, num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev);
+ 	struct net_device *netdev = nic_dev->netdev;
+-	size_t txq_size;
+ 
+ 	if (nic_dev->txqs)
+ 		return -EINVAL;
+ 
+-	txq_size = num_txqs * sizeof(*nic_dev->txqs);
+-	nic_dev->txqs = devm_kzalloc(&netdev->dev, txq_size, GFP_KERNEL);
++	nic_dev->txqs = devm_kcalloc(&netdev->dev, num_txqs,
++				     sizeof(*nic_dev->txqs), GFP_KERNEL);
+ 	if (!nic_dev->txqs)
+ 		return -ENOMEM;
+ 
+@@ -242,13 +241,12 @@ static int create_rxqs(struct hinic_dev *nic_dev)
+ {
+ 	int err, i, j, num_rxqs = hinic_hwdev_num_qps(nic_dev->hwdev);
+ 	struct net_device *netdev = nic_dev->netdev;
+-	size_t rxq_size;
+ 
+ 	if (nic_dev->rxqs)
+ 		return -EINVAL;
+ 
+-	rxq_size = num_rxqs * sizeof(*nic_dev->rxqs);
+-	nic_dev->rxqs = devm_kzalloc(&netdev->dev, rxq_size, GFP_KERNEL);
++	nic_dev->rxqs = devm_kcalloc(&netdev->dev, num_rxqs,
++				     sizeof(*nic_dev->rxqs), GFP_KERNEL);
+ 	if (!nic_dev->rxqs)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+index 8da7d46363b27..3828b09bfea3f 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+@@ -861,7 +861,6 @@ int hinic_init_txq(struct hinic_txq *txq, struct hinic_sq *sq,
+ 	struct hinic_dev *nic_dev = netdev_priv(netdev);
+ 	struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ 	int err, irqname_len;
+-	size_t sges_size;
+ 
+ 	txq->netdev = netdev;
+ 	txq->sq = sq;
+@@ -870,13 +869,13 @@ int hinic_init_txq(struct hinic_txq *txq, struct hinic_sq *sq,
+ 
+ 	txq->max_sges = HINIC_MAX_SQ_BUFDESCS;
+ 
+-	sges_size = txq->max_sges * sizeof(*txq->sges);
+-	txq->sges = devm_kzalloc(&netdev->dev, sges_size, GFP_KERNEL);
++	txq->sges = devm_kcalloc(&netdev->dev, txq->max_sges,
++				 sizeof(*txq->sges), GFP_KERNEL);
+ 	if (!txq->sges)
+ 		return -ENOMEM;
+ 
+-	sges_size = txq->max_sges * sizeof(*txq->free_sges);
+-	txq->free_sges = devm_kzalloc(&netdev->dev, sges_size, GFP_KERNEL);
++	txq->free_sges = devm_kcalloc(&netdev->dev, txq->max_sges,
++				      sizeof(*txq->free_sges), GFP_KERNEL);
+ 	if (!txq->free_sges) {
+ 		err = -ENOMEM;
+ 		goto err_alloc_free_sges;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 55772f0cbbf8f..15472fb15d7d2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -2024,16 +2024,16 @@ void mlx5_del_flow_rules(struct mlx5_flow_handle *handle)
+ 	down_write_ref_node(&fte->node, false);
+ 	for (i = handle->num_rules - 1; i >= 0; i--)
+ 		tree_remove_node(&handle->rule[i]->node, true);
+-	if (fte->dests_size) {
+-		if (fte->modify_mask)
+-			modify_fte(fte);
+-		up_write_ref_node(&fte->node, false);
+-	} else if (list_empty(&fte->node.children)) {
++	if (list_empty(&fte->node.children)) {
+ 		del_hw_fte(&fte->node);
+ 		/* Avoid double call to del_hw_fte */
+ 		fte->node.del_hw_func = NULL;
+ 		up_write_ref_node(&fte->node, false);
+ 		tree_put_node(&fte->node, false);
++	} else if (fte->dests_size) {
++		if (fte->modify_mask)
++			modify_fte(fte);
++		up_write_ref_node(&fte->node, false);
+ 	} else {
+ 		up_write_ref_node(&fte->node, false);
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+index 7ec1d0ee9beeb..ecd1856bef5e3 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+@@ -133,7 +133,7 @@ static int mlxsw_get_cooling_device_idx(struct mlxsw_thermal *thermal,
+ 	/* Allow mlxsw thermal zone binding to an external cooling device */
+ 	for (i = 0; i < ARRAY_SIZE(mlxsw_thermal_external_allowed_cdev); i++) {
+ 		if (strnstr(cdev->type, mlxsw_thermal_external_allowed_cdev[i],
+-			    sizeof(cdev->type)))
++			    strlen(cdev->type)))
+ 			return 0;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
+index 5f92b16913605..aff6d4f35cd2f 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
+@@ -168,8 +168,6 @@ static int mlxsw_sp_dcbnl_ieee_setets(struct net_device *dev,
+ static int mlxsw_sp_dcbnl_app_validate(struct net_device *dev,
+ 				       struct dcb_app *app)
+ {
+-	int prio;
+-
+ 	if (app->priority >= IEEE_8021QAZ_MAX_TCS) {
+ 		netdev_err(dev, "APP entry with priority value %u is invalid\n",
+ 			   app->priority);
+@@ -183,17 +181,6 @@ static int mlxsw_sp_dcbnl_app_validate(struct net_device *dev,
+ 				   app->protocol);
+ 			return -EINVAL;
+ 		}
+-
+-		/* Warn about any DSCP APP entries with the same PID. */
+-		prio = fls(dcb_ieee_getapp_mask(dev, app));
+-		if (prio--) {
+-			if (prio < app->priority)
+-				netdev_warn(dev, "Choosing priority %d for DSCP %d in favor of previously-active value of %d\n",
+-					    app->priority, app->protocol, prio);
+-			else if (prio > app->priority)
+-				netdev_warn(dev, "Ignoring new priority %d for DSCP %d in favor of current value of %d\n",
+-					    app->priority, app->protocol, prio);
+-		}
+ 		break;
+ 
+ 	case IEEE_8021QAZ_APP_SEL_ETHERTYPE:
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
+index 433f14ade464c..02ba6aa011055 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
+@@ -709,7 +709,7 @@ static const struct mlxsw_sp_trap_item mlxsw_sp_trap_items_arr[] = {
+ 		.trap = MLXSW_SP_TRAP_CONTROL(LLDP, LLDP, TRAP),
+ 		.listeners_arr = {
+ 			MLXSW_RXL(mlxsw_sp_rx_ptp_listener, LLDP, TRAP_TO_CPU,
+-				  false, SP_LLDP, DISCARD),
++				  true, SP_LLDP, DISCARD),
+ 		},
+ 	},
+ 	{
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index 6f950979d25e4..fa1a872c4bc83 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -2240,7 +2240,7 @@ int efx_ef10_tx_tso_desc(struct efx_tx_queue *tx_queue, struct sk_buff *skb,
+ 	 * guaranteed to satisfy the second as we only attempt TSO if
+ 	 * inner_network_header <= 208.
+ 	 */
+-	ip_tot_len = -EFX_TSO2_MAX_HDRLEN;
++	ip_tot_len = 0x10000 - EFX_TSO2_MAX_HDRLEN;
+ 	EFX_WARN_ON_ONCE_PARANOID(mss + EFX_TSO2_MAX_HDRLEN +
+ 				  (tcp->doff << 2u) > ip_tot_len);
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
+index 0462dcc93e536..dd5c4ef92ef3c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
+@@ -1084,8 +1084,9 @@ static int stmmac_test_rxp(struct stmmac_priv *priv)
+ 	unsigned char addr[ETH_ALEN] = {0xde, 0xad, 0xbe, 0xef, 0x00, 0x00};
+ 	struct tc_cls_u32_offload cls_u32 = { };
+ 	struct stmmac_packet_attrs attr = { };
+-	struct tc_action **actions, *act;
++	struct tc_action **actions;
+ 	struct tc_u32_sel *sel;
++	struct tcf_gact *gact;
+ 	struct tcf_exts *exts;
+ 	int ret, i, nk = 1;
+ 
+@@ -1104,14 +1105,14 @@ static int stmmac_test_rxp(struct stmmac_priv *priv)
+ 		goto cleanup_sel;
+ 	}
+ 
+-	actions = kzalloc(nk * sizeof(*actions), GFP_KERNEL);
++	actions = kcalloc(nk, sizeof(*actions), GFP_KERNEL);
+ 	if (!actions) {
+ 		ret = -ENOMEM;
+ 		goto cleanup_exts;
+ 	}
+ 
+-	act = kzalloc(nk * sizeof(*act), GFP_KERNEL);
+-	if (!act) {
++	gact = kcalloc(nk, sizeof(*gact), GFP_KERNEL);
++	if (!gact) {
+ 		ret = -ENOMEM;
+ 		goto cleanup_actions;
+ 	}
+@@ -1126,9 +1127,7 @@ static int stmmac_test_rxp(struct stmmac_priv *priv)
+ 	exts->nr_actions = nk;
+ 	exts->actions = actions;
+ 	for (i = 0; i < nk; i++) {
+-		struct tcf_gact *gact = to_gact(&act[i]);
+-
+-		actions[i] = &act[i];
++		actions[i] = (struct tc_action *)&gact[i];
+ 		gact->tcf_action = TC_ACT_SHOT;
+ 	}
+ 
+@@ -1152,7 +1151,7 @@ static int stmmac_test_rxp(struct stmmac_priv *priv)
+ 	stmmac_tc_setup_cls_u32(priv, priv, &cls_u32);
+ 
+ cleanup_act:
+-	kfree(act);
++	kfree(gact);
+ cleanup_actions:
+ 	kfree(actions);
+ cleanup_exts:
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index e3676386d0eeb..18484370da0d4 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2629,7 +2629,10 @@ static int netvsc_suspend(struct hv_device *dev)
+ 
+ 	/* Save the current config info */
+ 	ndev_ctx->saved_netvsc_dev_info = netvsc_devinfo_get(nvdev);
+-
++	if (!ndev_ctx->saved_netvsc_dev_info) {
++		ret = -ENOMEM;
++		goto out;
++	}
+ 	ret = netvsc_detach(net, nvdev);
+ out:
+ 	rtnl_unlock();
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index eb25a13042ea9..7e1208446e04b 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -884,7 +884,7 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
+ err_trans_free:
+ 	gsi_trans_free(trans);
+ err_free_pages:
+-	__free_pages(page, get_order(IPA_RX_BUFFER_SIZE));
++	put_page(page);
+ 
+ 	return -ENOMEM;
+ }
+@@ -1179,7 +1179,7 @@ void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint,
+ 		struct page *page = trans->data;
+ 
+ 		if (page)
+-			__free_pages(page, get_order(IPA_RX_BUFFER_SIZE));
++			put_page(page);
+ 	}
+ }
+ 
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 92e94ac94a342..bbbe198f83e88 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -283,7 +283,7 @@ static int kszphy_config_reset(struct phy_device *phydev)
+ 		}
+ 	}
+ 
+-	if (priv->led_mode >= 0)
++	if (priv->type && priv->led_mode >= 0)
+ 		kszphy_setup_led(phydev, priv->type->led_mode_reg, priv->led_mode);
+ 
+ 	return 0;
+@@ -299,10 +299,10 @@ static int kszphy_config_init(struct phy_device *phydev)
+ 
+ 	type = priv->type;
+ 
+-	if (type->has_broadcast_disable)
++	if (type && type->has_broadcast_disable)
+ 		kszphy_broadcast_disable(phydev);
+ 
+-	if (type->has_nand_tree_disable)
++	if (type && type->has_nand_tree_disable)
+ 		kszphy_nand_tree_disable(phydev);
+ 
+ 	return kszphy_config_reset(phydev);
+@@ -1112,7 +1112,7 @@ static int kszphy_probe(struct phy_device *phydev)
+ 
+ 	priv->type = type;
+ 
+-	if (type->led_mode_reg) {
++	if (type && type->led_mode_reg) {
+ 		ret = of_property_read_u32(np, "micrel,led-mode",
+ 				&priv->led_mode);
+ 		if (ret)
+@@ -1133,7 +1133,8 @@ static int kszphy_probe(struct phy_device *phydev)
+ 		unsigned long rate = clk_get_rate(clk);
+ 		bool rmii_ref_clk_sel_25_mhz;
+ 
+-		priv->rmii_ref_clk_sel = type->has_rmii_ref_clk_sel;
++		if (type)
++			priv->rmii_ref_clk_sel = type->has_rmii_ref_clk_sel;
+ 		rmii_ref_clk_sel_25_mhz = of_property_read_bool(np,
+ 				"micrel,rmii-reference-clock-select-25-mhz");
+ 
+diff --git a/drivers/net/wireguard/socket.c b/drivers/net/wireguard/socket.c
+index 473221aa22368..eef5911fa2100 100644
+--- a/drivers/net/wireguard/socket.c
++++ b/drivers/net/wireguard/socket.c
+@@ -49,7 +49,7 @@ static int send4(struct wg_device *wg, struct sk_buff *skb,
+ 		rt = dst_cache_get_ip4(cache, &fl.saddr);
+ 
+ 	if (!rt) {
+-		security_sk_classify_flow(sock, flowi4_to_flowi(&fl));
++		security_sk_classify_flow(sock, flowi4_to_flowi_common(&fl));
+ 		if (unlikely(!inet_confirm_addr(sock_net(sock), NULL, 0,
+ 						fl.saddr, RT_SCOPE_HOST))) {
+ 			endpoint->src4.s_addr = 0;
+@@ -129,7 +129,7 @@ static int send6(struct wg_device *wg, struct sk_buff *skb,
+ 		dst = dst_cache_get_ip6(cache, &fl.saddr);
+ 
+ 	if (!dst) {
+-		security_sk_classify_flow(sock, flowi6_to_flowi(&fl));
++		security_sk_classify_flow(sock, flowi6_to_flowi_common(&fl));
+ 		if (unlikely(!ipv6_addr_any(&fl.saddr) &&
+ 			     !ipv6_chk_addr(sock_net(sock), &fl.saddr, NULL, 0))) {
+ 			endpoint->src6 = fl.saddr = in6addr_any;
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index b59d482d9c23e..b61cd275fbda2 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -5170,13 +5170,29 @@ err:
+ static void ath10k_stop(struct ieee80211_hw *hw)
+ {
+ 	struct ath10k *ar = hw->priv;
++	u32 opt;
+ 
+ 	ath10k_drain_tx(ar);
+ 
+ 	mutex_lock(&ar->conf_mutex);
+ 	if (ar->state != ATH10K_STATE_OFF) {
+-		if (!ar->hw_rfkill_on)
+-			ath10k_halt(ar);
++		if (!ar->hw_rfkill_on) {
++			/* If the current driver state is RESTARTING but not yet
++			 * fully RESTARTED because of incoming suspend event,
++			 * then ath10k_halt() is already called via
++			 * ath10k_core_restart() and should not be called here.
++			 */
++			if (ar->state != ATH10K_STATE_RESTARTING) {
++				ath10k_halt(ar);
++			} else {
++				/* Suspending here, because when in RESTARTING
++				 * state, ath10k_core_stop() skips
++				 * ath10k_wait_for_suspend().
++				 */
++				opt = WMI_PDEV_SUSPEND_AND_DISABLE_INTR;
++				ath10k_wait_for_suspend(ar, opt);
++			}
++		}
+ 		ar->state = ATH10K_STATE_OFF;
+ 	}
+ 	mutex_unlock(&ar->conf_mutex);
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index cc9122f420243..44282aec069d3 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -4008,8 +4008,8 @@ static void ath11k_mgmt_over_wmi_tx_work(struct work_struct *work)
+ 		}
+ 
+ 		arvif = ath11k_vif_to_arvif(skb_cb->vif);
+-		if (ar->allocated_vdev_map & (1LL << arvif->vdev_id) &&
+-		    arvif->is_started) {
++		mutex_lock(&ar->conf_mutex);
++		if (ar->allocated_vdev_map & (1LL << arvif->vdev_id)) {
+ 			ret = ath11k_mac_mgmt_tx_wmi(ar, arvif, skb);
+ 			if (ret) {
+ 				ath11k_warn(ar->ab, "failed to tx mgmt frame, vdev_id %d :%d\n",
+@@ -4025,6 +4025,7 @@ static void ath11k_mgmt_over_wmi_tx_work(struct work_struct *work)
+ 				    arvif->is_started);
+ 			ieee80211_free_txskb(ar->hw, skb);
+ 		}
++		mutex_unlock(&ar->conf_mutex);
+ 	}
+ }
+ 
+@@ -5325,6 +5326,7 @@ ath11k_mac_op_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ 	struct ath11k *ar = hw->priv;
+ 	struct ath11k_base *ab = ar->ab;
+ 	struct ath11k_vif *arvif = (void *)vif->drv_priv;
++	struct ath11k_peer *peer;
+ 	int ret;
+ 
+ 	mutex_lock(&ar->conf_mutex);
+@@ -5336,9 +5338,13 @@ ath11k_mac_op_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ 	WARN_ON(!arvif->is_started);
+ 
+ 	if (ab->hw_params.vdev_start_delay &&
+-	    arvif->vdev_type == WMI_VDEV_TYPE_MONITOR &&
+-	    ath11k_peer_find_by_addr(ab, ar->mac_addr))
+-		ath11k_peer_delete(ar, arvif->vdev_id, ar->mac_addr);
++	    arvif->vdev_type == WMI_VDEV_TYPE_MONITOR) {
++		spin_lock_bh(&ab->base_lock);
++		peer = ath11k_peer_find_by_addr(ab, ar->mac_addr);
++		spin_unlock_bh(&ab->base_lock);
++		if (peer)
++			ath11k_peer_delete(ar, arvif->vdev_id, ar->mac_addr);
++	}
+ 
+ 	ret = ath11k_mac_vdev_stop(arvif);
+ 	if (ret)
+diff --git a/drivers/net/wireless/ath/ath11k/spectral.c b/drivers/net/wireless/ath/ath11k/spectral.c
+index ac2a8cfdc1c01..f5ab455ea1a22 100644
+--- a/drivers/net/wireless/ath/ath11k/spectral.c
++++ b/drivers/net/wireless/ath/ath11k/spectral.c
+@@ -214,7 +214,10 @@ static int ath11k_spectral_scan_config(struct ath11k *ar,
+ 		return -ENODEV;
+ 
+ 	arvif->spectral_enabled = (mode != ATH11K_SPECTRAL_DISABLED);
++
++	spin_lock_bh(&ar->spectral.lock);
+ 	ar->spectral.mode = mode;
++	spin_unlock_bh(&ar->spectral.lock);
+ 
+ 	ret = ath11k_wmi_vdev_spectral_enable(ar, arvif->vdev_id,
+ 					      ATH11K_WMI_SPECTRAL_TRIGGER_CMD_CLEAR,
+@@ -829,9 +832,6 @@ static inline void ath11k_spectral_ring_free(struct ath11k *ar)
+ {
+ 	struct ath11k_spectral *sp = &ar->spectral;
+ 
+-	if (!sp->enabled)
+-		return;
+-
+ 	ath11k_dbring_srng_cleanup(ar, &sp->rx_ring);
+ 	ath11k_dbring_buf_cleanup(ar, &sp->rx_ring);
+ }
+@@ -883,15 +883,16 @@ void ath11k_spectral_deinit(struct ath11k_base *ab)
+ 		if (!sp->enabled)
+ 			continue;
+ 
+-		ath11k_spectral_debug_unregister(ar);
+-		ath11k_spectral_ring_free(ar);
++		mutex_lock(&ar->conf_mutex);
++		ath11k_spectral_scan_config(ar, ATH11K_SPECTRAL_DISABLED);
++		mutex_unlock(&ar->conf_mutex);
+ 
+ 		spin_lock_bh(&sp->lock);
+-
+-		sp->mode = ATH11K_SPECTRAL_DISABLED;
+ 		sp->enabled = false;
+-
+ 		spin_unlock_bh(&sp->lock);
++
++		ath11k_spectral_debug_unregister(ar);
++		ath11k_spectral_ring_free(ar);
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+index b0a4ca3559fd8..abed1effd95ca 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
++++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+@@ -5615,7 +5615,7 @@ unsigned int ar9003_get_paprd_scale_factor(struct ath_hw *ah,
+ 
+ static u8 ar9003_get_eepmisc(struct ath_hw *ah)
+ {
+-	return ah->eeprom.map4k.baseEepHeader.eepMisc;
++	return ah->eeprom.ar9300_eep.baseEepHeader.opCapFlags.eepMisc;
+ }
+ 
+ const struct eeprom_ops eep_ar9300_ops = {
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.h b/drivers/net/wireless/ath/ath9k/ar9003_phy.h
+index a171dbb29fbb6..ad949eb02f3d2 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_phy.h
++++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.h
+@@ -720,7 +720,7 @@
+ #define AR_CH0_TOP2		(AR_SREV_9300(ah) ? 0x1628c : \
+ 					(AR_SREV_9462(ah) ? 0x16290 : 0x16284))
+ #define AR_CH0_TOP2_XPABIASLVL		(AR_SREV_9561(ah) ? 0x1e00 : 0xf000)
+-#define AR_CH0_TOP2_XPABIASLVL_S	12
++#define AR_CH0_TOP2_XPABIASLVL_S	(AR_SREV_9561(ah) ? 9 : 12)
+ 
+ #define AR_CH0_XTAL		(AR_SREV_9300(ah) ? 0x16294 : \
+ 				 ((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16298 : \
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+index 0bdc4dcb7b8fe..30ddf333e04dc 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+@@ -1006,6 +1006,14 @@ static bool ath9k_rx_prepare(struct ath9k_htc_priv *priv,
+ 		goto rx_next;
+ 	}
+ 
++	if (rxstatus->rs_keyix >= ATH_KEYMAX &&
++	    rxstatus->rs_keyix != ATH9K_RXKEYIX_INVALID) {
++		ath_dbg(common, ANY,
++			"Invalid keyix, dropping (keyix: %d)\n",
++			rxstatus->rs_keyix);
++		goto rx_next;
++	}
++
+ 	/* Get the RX status information */
+ 
+ 	memset(rx_status, 0, sizeof(struct ieee80211_rx_status));
+diff --git a/drivers/net/wireless/ath/carl9170/tx.c b/drivers/net/wireless/ath/carl9170/tx.c
+index 235cf77cd60c5..43050dc7b98df 100644
+--- a/drivers/net/wireless/ath/carl9170/tx.c
++++ b/drivers/net/wireless/ath/carl9170/tx.c
+@@ -1557,6 +1557,9 @@ static struct carl9170_vif_info *carl9170_pick_beaconing_vif(struct ar9170 *ar)
+ 					goto out;
+ 			}
+ 		} while (ar->beacon_enabled && i--);
++
++		/* no entry found in list */
++		return NULL;
+ 	}
+ 
+ out:
+diff --git a/drivers/net/wireless/broadcom/b43/phy_n.c b/drivers/net/wireless/broadcom/b43/phy_n.c
+index 665b737fbb0d8..39975b7d1a16b 100644
+--- a/drivers/net/wireless/broadcom/b43/phy_n.c
++++ b/drivers/net/wireless/broadcom/b43/phy_n.c
+@@ -582,7 +582,7 @@ static void b43_nphy_adjust_lna_gain_table(struct b43_wldev *dev)
+ 	u16 data[4];
+ 	s16 gain[2];
+ 	u16 minmax[2];
+-	static const u16 lna_gain[4] = { -2, 10, 19, 25 };
++	static const s16 lna_gain[4] = { -2, 10, 19, 25 };
+ 
+ 	if (nphy->hang_avoid)
+ 		b43_nphy_stay_in_carrier_search(dev, 1);
+diff --git a/drivers/net/wireless/broadcom/b43legacy/phy.c b/drivers/net/wireless/broadcom/b43legacy/phy.c
+index 05404fbd1e70b..c1395e622759e 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/phy.c
++++ b/drivers/net/wireless/broadcom/b43legacy/phy.c
+@@ -1123,7 +1123,7 @@ void b43legacy_phy_lo_b_measure(struct b43legacy_wldev *dev)
+ 	struct b43legacy_phy *phy = &dev->phy;
+ 	u16 regstack[12] = { 0 };
+ 	u16 mls;
+-	u16 fval;
++	s16 fval;
+ 	int i;
+ 	int j;
+ 
+diff --git a/drivers/net/wireless/intel/ipw2x00/libipw_tx.c b/drivers/net/wireless/intel/ipw2x00/libipw_tx.c
+index d9baa2fa603b2..e4c60caa6543c 100644
+--- a/drivers/net/wireless/intel/ipw2x00/libipw_tx.c
++++ b/drivers/net/wireless/intel/ipw2x00/libipw_tx.c
+@@ -383,7 +383,7 @@ netdev_tx_t libipw_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 		/* Each fragment may need to have room for encryption
+ 		 * pre/postfix */
+-		if (host_encrypt)
++		if (host_encrypt && crypt && crypt->ops)
+ 			bytes_per_frag -= crypt->ops->extra_mpdu_prefix_len +
+ 			    crypt->ops->extra_mpdu_postfix_len;
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/power.c b/drivers/net/wireless/intel/iwlwifi/mvm/power.c
+index c146303ec73b1..66a968de3bc26 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/power.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/power.c
+@@ -621,6 +621,9 @@ static void iwl_mvm_power_get_vifs_iterator(void *_data, u8 *mac,
+ 	struct iwl_power_vifs *power_iterator = _data;
+ 	bool active = mvmvif->phy_ctxt && mvmvif->phy_ctxt->id < NUM_PHY_CTX;
+ 
++	if (!mvmvif->uploaded)
++		return;
++
+ 	switch (ieee80211_vif_type_p2p(vif)) {
+ 	case NL80211_IFTYPE_P2P_DEVICE:
+ 		break;
+diff --git a/drivers/net/wireless/marvell/mwifiex/11h.c b/drivers/net/wireless/marvell/mwifiex/11h.c
+index d2ee6469e67bb..3fa25cd64cda0 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11h.c
++++ b/drivers/net/wireless/marvell/mwifiex/11h.c
+@@ -303,5 +303,7 @@ void mwifiex_dfs_chan_sw_work_queue(struct work_struct *work)
+ 
+ 	mwifiex_dbg(priv->adapter, MSG,
+ 		    "indicating channel switch completion to kernel\n");
++	mutex_lock(&priv->wdev.mtx);
+ 	cfg80211_ch_switch_notify(priv->netdev, &priv->dfs_chandef);
++	mutex_unlock(&priv->wdev.mtx);
+ }
+diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c b/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c
+index 2477e18c7caec..025619cd14e82 100644
+--- a/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c
++++ b/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c
+@@ -460,8 +460,10 @@ static void rtl8180_tx(struct ieee80211_hw *dev,
+ 	struct rtl8180_priv *priv = dev->priv;
+ 	struct rtl8180_tx_ring *ring;
+ 	struct rtl8180_tx_desc *entry;
++	unsigned int prio = 0;
+ 	unsigned long flags;
+-	unsigned int idx, prio, hw_prio;
++	unsigned int idx, hw_prio;
++
+ 	dma_addr_t mapping;
+ 	u32 tx_flags;
+ 	u8 rc_flags;
+@@ -470,7 +472,9 @@ static void rtl8180_tx(struct ieee80211_hw *dev,
+ 	/* do arithmetic and then convert to le16 */
+ 	u16 frame_duration = 0;
+ 
+-	prio = skb_get_queue_mapping(skb);
++	/* rtl8180/rtl8185 only has one useable tx queue */
++	if (dev->queues > IEEE80211_AC_BK)
++		prio = skb_get_queue_mapping(skb);
+ 	ring = &priv->tx_ring[prio];
+ 
+ 	mapping = dma_map_single(&priv->pdev->dev, skb->data, skb->len,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c
+index 06e073defad65..c6e4fda7e431f 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/usb.c
++++ b/drivers/net/wireless/realtek/rtlwifi/usb.c
+@@ -1015,7 +1015,7 @@ int rtl_usb_probe(struct usb_interface *intf,
+ 	hw = ieee80211_alloc_hw(sizeof(struct rtl_priv) +
+ 				sizeof(struct rtl_usb_priv), &rtl_ops);
+ 	if (!hw) {
+-		WARN_ONCE(true, "rtl_usb: ieee80211 alloc failed\n");
++		pr_warn("rtl_usb: ieee80211 alloc failed\n");
+ 		return -ENOMEM;
+ 	}
+ 	rtlpriv = hw->priv;
+diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c
+index 0841e0e370a03..0194e80193d9c 100644
+--- a/drivers/nfc/st21nfca/se.c
++++ b/drivers/nfc/st21nfca/se.c
+@@ -241,7 +241,7 @@ int st21nfca_hci_se_io(struct nfc_hci_dev *hdev, u32 se_idx,
+ }
+ EXPORT_SYMBOL(st21nfca_hci_se_io);
+ 
+-static void st21nfca_se_wt_timeout(struct timer_list *t)
++static void st21nfca_se_wt_work(struct work_struct *work)
+ {
+ 	/*
+ 	 * No answer from the secure element
+@@ -254,8 +254,9 @@ static void st21nfca_se_wt_timeout(struct timer_list *t)
+ 	 */
+ 	/* hardware reset managed through VCC_UICC_OUT power supply */
+ 	u8 param = 0x01;
+-	struct st21nfca_hci_info *info = from_timer(info, t,
+-						    se_info.bwi_timer);
++	struct st21nfca_hci_info *info = container_of(work,
++						struct st21nfca_hci_info,
++						se_info.timeout_work);
+ 
+ 	pr_debug("\n");
+ 
+@@ -273,6 +274,13 @@ static void st21nfca_se_wt_timeout(struct timer_list *t)
+ 	info->se_info.cb(info->se_info.cb_context, NULL, 0, -ETIME);
+ }
+ 
++static void st21nfca_se_wt_timeout(struct timer_list *t)
++{
++	struct st21nfca_hci_info *info = from_timer(info, t, se_info.bwi_timer);
++
++	schedule_work(&info->se_info.timeout_work);
++}
++
+ static void st21nfca_se_activation_timeout(struct timer_list *t)
+ {
+ 	struct st21nfca_hci_info *info = from_timer(info, t,
+@@ -364,6 +372,7 @@ int st21nfca_apdu_reader_event_received(struct nfc_hci_dev *hdev,
+ 	switch (event) {
+ 	case ST21NFCA_EVT_TRANSMIT_DATA:
+ 		del_timer_sync(&info->se_info.bwi_timer);
++		cancel_work_sync(&info->se_info.timeout_work);
+ 		info->se_info.bwi_active = false;
+ 		r = nfc_hci_send_event(hdev, ST21NFCA_DEVICE_MGNT_GATE,
+ 				ST21NFCA_EVT_SE_END_OF_APDU_TRANSFER, NULL, 0);
+@@ -393,6 +402,7 @@ void st21nfca_se_init(struct nfc_hci_dev *hdev)
+ 	struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev);
+ 
+ 	init_completion(&info->se_info.req_completion);
++	INIT_WORK(&info->se_info.timeout_work, st21nfca_se_wt_work);
+ 	/* initialize timers */
+ 	timer_setup(&info->se_info.bwi_timer, st21nfca_se_wt_timeout, 0);
+ 	info->se_info.bwi_active = false;
+@@ -420,6 +430,7 @@ void st21nfca_se_deinit(struct nfc_hci_dev *hdev)
+ 	if (info->se_info.se_active)
+ 		del_timer_sync(&info->se_info.se_active_timer);
+ 
++	cancel_work_sync(&info->se_info.timeout_work);
+ 	info->se_info.bwi_active = false;
+ 	info->se_info.se_active = false;
+ }
+diff --git a/drivers/nfc/st21nfca/st21nfca.h b/drivers/nfc/st21nfca/st21nfca.h
+index 5e0de0fef1d4e..0e4a93d11efb7 100644
+--- a/drivers/nfc/st21nfca/st21nfca.h
++++ b/drivers/nfc/st21nfca/st21nfca.h
+@@ -141,6 +141,7 @@ struct st21nfca_se_info {
+ 
+ 	se_io_cb_t cb;
+ 	void *cb_context;
++	struct work_struct timeout_work;
+ };
+ 
+ struct st21nfca_hci_info {
+diff --git a/drivers/nvdimm/core.c b/drivers/nvdimm/core.c
+index c21ba0602029c..1c92c883afdd8 100644
+--- a/drivers/nvdimm/core.c
++++ b/drivers/nvdimm/core.c
+@@ -400,9 +400,7 @@ static ssize_t capability_show(struct device *dev,
+ 	if (!nd_desc->fw_ops)
+ 		return -EOPNOTSUPP;
+ 
+-	nvdimm_bus_lock(dev);
+ 	cap = nd_desc->fw_ops->capability(nd_desc);
+-	nvdimm_bus_unlock(dev);
+ 
+ 	switch (cap) {
+ 	case NVDIMM_FWA_CAP_QUIESCE:
+@@ -427,10 +425,8 @@ static ssize_t activate_show(struct device *dev,
+ 	if (!nd_desc->fw_ops)
+ 		return -EOPNOTSUPP;
+ 
+-	nvdimm_bus_lock(dev);
+ 	cap = nd_desc->fw_ops->capability(nd_desc);
+ 	state = nd_desc->fw_ops->activate_state(nd_desc);
+-	nvdimm_bus_unlock(dev);
+ 
+ 	if (cap < NVDIMM_FWA_CAP_QUIESCE)
+ 		return -EOPNOTSUPP;
+@@ -475,7 +471,6 @@ static ssize_t activate_store(struct device *dev,
+ 	else
+ 		return -EINVAL;
+ 
+-	nvdimm_bus_lock(dev);
+ 	state = nd_desc->fw_ops->activate_state(nd_desc);
+ 
+ 	switch (state) {
+@@ -493,7 +488,6 @@ static ssize_t activate_store(struct device *dev,
+ 	default:
+ 		rc = -ENXIO;
+ 	}
+-	nvdimm_bus_unlock(dev);
+ 
+ 	if (rc == 0)
+ 		rc = len;
+@@ -516,10 +510,7 @@ static umode_t nvdimm_bus_firmware_visible(struct kobject *kobj, struct attribut
+ 	if (!nd_desc->fw_ops)
+ 		return 0;
+ 
+-	nvdimm_bus_lock(dev);
+ 	cap = nd_desc->fw_ops->capability(nd_desc);
+-	nvdimm_bus_unlock(dev);
+-
+ 	if (cap < NVDIMM_FWA_CAP_QUIESCE)
+ 		return 0;
+ 
+diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c
+index 4b80150e4afa7..b5aa55c614616 100644
+--- a/drivers/nvdimm/security.c
++++ b/drivers/nvdimm/security.c
+@@ -379,11 +379,6 @@ static int security_overwrite(struct nvdimm *nvdimm, unsigned int keyid)
+ 			|| !nvdimm->sec.flags)
+ 		return -EOPNOTSUPP;
+ 
+-	if (dev->driver == NULL) {
+-		dev_dbg(dev, "Unable to overwrite while DIMM active.\n");
+-		return -EINVAL;
+-	}
+-
+ 	rc = check_security_state(nvdimm);
+ 	if (rc)
+ 		return rc;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index e73a5c62a858d..d301f0280ff6a 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2024,7 +2024,7 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
+ 		blk_queue_max_segments(q, min_t(u32, max_segments, USHRT_MAX));
+ 	}
+ 	blk_queue_virt_boundary(q, NVME_CTRL_PAGE_SIZE - 1);
+-	blk_queue_dma_alignment(q, 7);
++	blk_queue_dma_alignment(q, 3);
+ 	blk_queue_write_cache(q, vwc, vwc);
+ }
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index a36db0701d178..7de24a10dd921 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1666,6 +1666,7 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
+ 		dev->ctrl.admin_q = blk_mq_init_queue(&dev->admin_tagset);
+ 		if (IS_ERR(dev->ctrl.admin_q)) {
+ 			blk_mq_free_tag_set(&dev->admin_tagset);
++			dev->ctrl.admin_q = NULL;
+ 			return -ENOMEM;
+ 		}
+ 		if (!blk_get_queue(dev->ctrl.admin_q)) {
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index 43a77d7200087..c8a0c0e9dec1c 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -170,9 +170,7 @@ static int overlay_notify(struct overlay_changeset *ovcs,
+ 
+ 		ret = blocking_notifier_call_chain(&overlay_notify_chain,
+ 						   action, &nd);
+-		if (ret == NOTIFY_OK || ret == NOTIFY_STOP)
+-			return 0;
+-		if (ret) {
++		if (notifier_to_errno(ret)) {
+ 			ret = notifier_to_errno(ret);
+ 			pr_err("overlay changeset %s notifier error %d, target: %pOF\n",
+ 			       of_overlay_action_name[action], ret, nd.target);
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 5de46aa99d243..3d7adc0de1288 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -346,11 +346,11 @@ static int _bandwidth_supported(struct device *dev, struct opp_table *opp_table)
+ 
+ 	/* Checking only first OPP is sufficient */
+ 	np = of_get_next_available_child(opp_np, NULL);
++	of_node_put(opp_np);
+ 	if (!np) {
+ 		dev_err(dev, "OPP table empty\n");
+ 		return -EINVAL;
+ 	}
+-	of_node_put(opp_np);
+ 
+ 	prop = of_find_property(np, "opp-peak-kBps", NULL);
+ 	of_node_put(np);
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+index 1af14474abcf1..4c5e6349d78ce 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+@@ -154,8 +154,7 @@ static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, phys_addr_t addr,
+ 	struct cdns_pcie *pcie = &ep->pcie;
+ 	u32 r;
+ 
+-	r = find_first_zero_bit(&ep->ob_region_map,
+-				sizeof(ep->ob_region_map) * BITS_PER_LONG);
++	r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG);
+ 	if (r >= ep->max_regions - 1) {
+ 		dev_err(&epc->dev, "no free outbound region\n");
+ 		return -EINVAL;
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 5cf1ef12fb9b0..ceb4815379cd4 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -401,6 +401,11 @@ static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie)
+ 			dev_err(dev, "failed to disable vpcie regulator: %d\n",
+ 				ret);
+ 	}
++
++	/* Some boards don't have PCIe reset GPIO. */
++	if (gpio_is_valid(imx6_pcie->reset_gpio))
++		gpio_set_value_cansleep(imx6_pcie->reset_gpio,
++					imx6_pcie->gpio_active_high);
+ }
+ 
+ static unsigned int imx6_pcie_grp_offset(const struct imx6_pcie *imx6_pcie)
+@@ -523,15 +528,6 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
+ 	/* allow the clocks to stabilize */
+ 	usleep_range(200, 500);
+ 
+-	/* Some boards don't have PCIe reset GPIO. */
+-	if (gpio_is_valid(imx6_pcie->reset_gpio)) {
+-		gpio_set_value_cansleep(imx6_pcie->reset_gpio,
+-					imx6_pcie->gpio_active_high);
+-		msleep(100);
+-		gpio_set_value_cansleep(imx6_pcie->reset_gpio,
+-					!imx6_pcie->gpio_active_high);
+-	}
+-
+ 	switch (imx6_pcie->drvdata->variant) {
+ 	case IMX8MQ:
+ 		reset_control_deassert(imx6_pcie->pciephy_reset);
+@@ -574,6 +570,15 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
+ 		break;
+ 	}
+ 
++	/* Some boards don't have PCIe reset GPIO. */
++	if (gpio_is_valid(imx6_pcie->reset_gpio)) {
++		msleep(100);
++		gpio_set_value_cansleep(imx6_pcie->reset_gpio,
++					!imx6_pcie->gpio_active_high);
++		/* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */
++		msleep(100);
++	}
++
+ 	return;
+ 
+ err_ref_clk:
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index 44c2a6572199c..42d8116a4a002 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -392,7 +392,8 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 						      sizeof(pp->msi_msg),
+ 						      DMA_FROM_DEVICE,
+ 						      DMA_ATTR_SKIP_CPU_SYNC);
+-			if (dma_mapping_error(pci->dev, pp->msi_data)) {
++			ret = dma_mapping_error(pci->dev, pp->msi_data);
++			if (ret) {
+ 				dev_err(pci->dev, "Failed to map MSI data\n");
+ 				pp->msi_data = 0;
+ 				goto err_free_msi;
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 557554f53ce9c..9c7cdc4e12f21 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1443,22 +1443,21 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	ret = phy_init(pcie->phy);
+-	if (ret) {
+-		pm_runtime_disable(&pdev->dev);
++	if (ret)
+ 		goto err_pm_runtime_put;
+-	}
+ 
+ 	platform_set_drvdata(pdev, pcie);
+ 
+ 	ret = dw_pcie_host_init(pp);
+ 	if (ret) {
+ 		dev_err(dev, "cannot initialize host\n");
+-		pm_runtime_disable(&pdev->dev);
+-		goto err_pm_runtime_put;
++		goto err_phy_exit;
+ 	}
+ 
+ 	return 0;
+ 
++err_phy_exit:
++	phy_exit(pcie->phy);
+ err_pm_runtime_put:
+ 	pm_runtime_put(dev);
+ 	pm_runtime_disable(dev);
+diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c
+index 7631dc3961c12..379cde59988cf 100644
+--- a/drivers/pci/controller/pcie-rockchip-ep.c
++++ b/drivers/pci/controller/pcie-rockchip-ep.c
+@@ -264,8 +264,7 @@ static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn,
+ 	struct rockchip_pcie *pcie = &ep->rockchip;
+ 	u32 r;
+ 
+-	r = find_first_zero_bit(&ep->ob_region_map,
+-				sizeof(ep->ob_region_map) * BITS_PER_LONG);
++	r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG);
+ 	/*
+ 	 * Region 0 is reserved for configuration space and shouldn't
+ 	 * be used elsewhere per TRM, so leave it out.
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index cda17c6151480..1162734546481 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2829,6 +2829,8 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."),
+ 			DMI_MATCH(DMI_BOARD_NAME, "X299 DESIGNARE EX-CF"),
+ 		},
++	},
++	{
+ 		/*
+ 		 * Downstream device is not accessible after putting a root port
+ 		 * into D3cold and back into D0 on Elo i2.
+@@ -4975,18 +4977,18 @@ static int pci_dev_reset_slot_function(struct pci_dev *dev, int probe)
+ 
+ static void pci_dev_lock(struct pci_dev *dev)
+ {
+-	pci_cfg_access_lock(dev);
+ 	/* block PM suspend, driver probe, etc. */
+ 	device_lock(&dev->dev);
++	pci_cfg_access_lock(dev);
+ }
+ 
+ /* Return 1 on successful lock, 0 on contention */
+ static int pci_dev_trylock(struct pci_dev *dev)
+ {
+-	if (pci_cfg_access_trylock(dev)) {
+-		if (device_trylock(&dev->dev))
++	if (device_trylock(&dev->dev)) {
++		if (pci_cfg_access_trylock(dev))
+ 			return 1;
+-		pci_cfg_access_unlock(dev);
++		device_unlock(&dev->dev);
+ 	}
+ 
+ 	return 0;
+@@ -4994,8 +4996,8 @@ static int pci_dev_trylock(struct pci_dev *dev)
+ 
+ static void pci_dev_unlock(struct pci_dev *dev)
+ {
+-	device_unlock(&dev->dev);
+ 	pci_cfg_access_unlock(dev);
++	device_unlock(&dev->dev);
+ }
+ 
+ static void pci_dev_save_and_disable(struct pci_dev *dev)
+diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
+index 65dff5f3457ac..c40546eeecb39 100644
+--- a/drivers/pci/pcie/aer.c
++++ b/drivers/pci/pcie/aer.c
+@@ -101,6 +101,11 @@ struct aer_stats {
+ #define ERR_COR_ID(d)			(d & 0xffff)
+ #define ERR_UNCOR_ID(d)			(d >> 16)
+ 
++#define AER_ERR_STATUS_MASK		(PCI_ERR_ROOT_UNCOR_RCV |	\
++					PCI_ERR_ROOT_COR_RCV |		\
++					PCI_ERR_ROOT_MULTI_COR_RCV |	\
++					PCI_ERR_ROOT_MULTI_UNCOR_RCV)
++
+ static int pcie_aer_disable;
+ static pci_ers_result_t aer_root_reset(struct pci_dev *dev);
+ 
+@@ -1187,7 +1192,7 @@ static irqreturn_t aer_irq(int irq, void *context)
+ 	struct aer_err_source e_src = {};
+ 
+ 	pci_read_config_dword(rp, aer + PCI_ERR_ROOT_STATUS, &e_src.status);
+-	if (!(e_src.status & (PCI_ERR_ROOT_UNCOR_RCV|PCI_ERR_ROOT_COR_RCV)))
++	if (!(e_src.status & AER_ERR_STATUS_MASK))
+ 		return IRQ_NONE;
+ 
+ 	pci_read_config_dword(rp, aer + PCI_ERR_ROOT_ERR_SRC, &e_src.id);
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c
+index 0cda168469625..ea46950c5d2a9 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp.c
+@@ -3717,6 +3717,11 @@ static const struct phy_ops qcom_qmp_pcie_ufs_ops = {
+ 	.owner		= THIS_MODULE,
+ };
+ 
++static void qcom_qmp_reset_control_put(void *data)
++{
++	reset_control_put(data);
++}
++
+ static
+ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
+ 			void __iomem *serdes, const struct qmp_phy_cfg *cfg)
+@@ -3789,7 +3794,7 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
+ 	 * all phys that don't need this.
+ 	 */
+ 	snprintf(prop_name, sizeof(prop_name), "pipe%d", id);
+-	qphy->pipe_clk = of_clk_get_by_name(np, prop_name);
++	qphy->pipe_clk = devm_get_clk_from_child(dev, np, prop_name);
+ 	if (IS_ERR(qphy->pipe_clk)) {
+ 		if (cfg->type == PHY_TYPE_PCIE ||
+ 		    cfg->type == PHY_TYPE_USB3) {
+@@ -3811,6 +3816,10 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
+ 			dev_err(dev, "failed to get lane%d reset\n", id);
+ 			return PTR_ERR(qphy->lane_rst);
+ 		}
++		ret = devm_add_action_or_reset(dev, qcom_qmp_reset_control_put,
++					       qphy->lane_rst);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	if (cfg->type == PHY_TYPE_UFS || cfg->type == PHY_TYPE_PCIE)
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index 6768b2f03d685..39d2024dc2ee5 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -351,6 +351,22 @@ static int bcm2835_gpio_direction_output(struct gpio_chip *chip,
+ 	return pinctrl_gpio_direction_output(chip->base + offset);
+ }
+ 
++static int bcm2835_of_gpio_ranges_fallback(struct gpio_chip *gc,
++					   struct device_node *np)
++{
++	struct pinctrl_dev *pctldev = of_pinctrl_get(np);
++
++	of_node_put(np);
++
++	if (!pctldev)
++		return 0;
++
++	gpiochip_add_pin_range(gc, pinctrl_dev_get_devname(pctldev), 0, 0,
++			       gc->ngpio);
++
++	return 0;
++}
++
+ static const struct gpio_chip bcm2835_gpio_chip = {
+ 	.label = MODULE_NAME,
+ 	.owner = THIS_MODULE,
+@@ -365,6 +381,7 @@ static const struct gpio_chip bcm2835_gpio_chip = {
+ 	.base = -1,
+ 	.ngpio = BCM2835_NUM_GPIOS,
+ 	.can_sleep = false,
++	.of_gpio_ranges_fallback = bcm2835_of_gpio_ranges_fallback,
+ };
+ 
+ static const struct gpio_chip bcm2711_gpio_chip = {
+@@ -381,6 +398,7 @@ static const struct gpio_chip bcm2711_gpio_chip = {
+ 	.base = -1,
+ 	.ngpio = BCM2711_NUM_GPIOS,
+ 	.can_sleep = false,
++	.of_gpio_ranges_fallback = bcm2835_of_gpio_ranges_fallback,
+ };
+ 
+ static void bcm2835_gpio_irq_handle_bank(struct bcm2835_pinctrl *pc,
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+index 5cb018f988003..85a0052bb0e62 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+@@ -781,7 +781,7 @@ static int armada_37xx_irqchip_register(struct platform_device *pdev,
+ 	for (i = 0; i < nr_irq_parent; i++) {
+ 		int irq = irq_of_parse_and_map(np, i);
+ 
+-		if (irq < 0)
++		if (!irq)
+ 			continue;
+ 		girq->parents[i] = irq;
+ 	}
+diff --git a/drivers/pinctrl/renesas/core.c b/drivers/pinctrl/renesas/core.c
+index 258972672eda1..54f1a7334027a 100644
+--- a/drivers/pinctrl/renesas/core.c
++++ b/drivers/pinctrl/renesas/core.c
+@@ -71,12 +71,11 @@ static int sh_pfc_map_resources(struct sh_pfc *pfc,
+ 
+ 	/* Fill them. */
+ 	for (i = 0; i < num_windows; i++) {
+-		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+-		windows->phys = res->start;
+-		windows->size = resource_size(res);
+-		windows->virt = devm_ioremap_resource(pfc->dev, res);
++		windows->virt = devm_platform_get_and_ioremap_resource(pdev, i, &res);
+ 		if (IS_ERR(windows->virt))
+ 			return -ENOMEM;
++		windows->phys = res->start;
++		windows->size = resource_size(res);
+ 		windows++;
+ 	}
+ 	for (i = 0; i < num_irqs; i++)
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzn1.c b/drivers/pinctrl/renesas/pinctrl-rzn1.c
+index ef5fb25b6016d..849d091205d4d 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzn1.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzn1.c
+@@ -865,17 +865,15 @@ static int rzn1_pinctrl_probe(struct platform_device *pdev)
+ 	ipctl->mdio_func[0] = -1;
+ 	ipctl->mdio_func[1] = -1;
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	ipctl->lev1_protect_phys = (u32)res->start + 0x400;
+-	ipctl->lev1 = devm_ioremap_resource(&pdev->dev, res);
++	ipctl->lev1 = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ 	if (IS_ERR(ipctl->lev1))
+ 		return PTR_ERR(ipctl->lev1);
++	ipctl->lev1_protect_phys = (u32)res->start + 0x400;
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+-	ipctl->lev2_protect_phys = (u32)res->start + 0x400;
+-	ipctl->lev2 = devm_ioremap_resource(&pdev->dev, res);
++	ipctl->lev2 = devm_platform_get_and_ioremap_resource(pdev, 1, &res);
+ 	if (IS_ERR(ipctl->lev2))
+ 		return PTR_ERR(ipctl->lev2);
++	ipctl->lev2_protect_phys = (u32)res->start + 0x400;
+ 
+ 	ipctl->clk = devm_clk_get(&pdev->dev, NULL);
+ 	if (IS_ERR(ipctl->clk))
+diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c
+index 3104680b74855..979f92194e81a 100644
+--- a/drivers/platform/chrome/cros_ec.c
++++ b/drivers/platform/chrome/cros_ec.c
+@@ -175,6 +175,8 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ 	ec_dev->max_request = sizeof(struct ec_params_hello);
+ 	ec_dev->max_response = sizeof(struct ec_response_get_protocol_info);
+ 	ec_dev->max_passthru = 0;
++	ec_dev->ec = NULL;
++	ec_dev->pd = NULL;
+ 
+ 	ec_dev->din = devm_kzalloc(dev, ec_dev->din_size, GFP_KERNEL);
+ 	if (!ec_dev->din)
+@@ -231,18 +233,16 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ 		if (IS_ERR(ec_dev->pd)) {
+ 			dev_err(ec_dev->dev,
+ 				"Failed to create CrOS PD platform device\n");
+-			platform_device_unregister(ec_dev->ec);
+-			return PTR_ERR(ec_dev->pd);
++			err = PTR_ERR(ec_dev->pd);
++			goto exit;
+ 		}
+ 	}
+ 
+ 	if (IS_ENABLED(CONFIG_OF) && dev->of_node) {
+ 		err = devm_of_platform_populate(dev);
+ 		if (err) {
+-			platform_device_unregister(ec_dev->pd);
+-			platform_device_unregister(ec_dev->ec);
+ 			dev_err(dev, "Failed to register sub-devices\n");
+-			return err;
++			goto exit;
+ 		}
+ 	}
+ 
+@@ -264,12 +264,16 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ 		err = blocking_notifier_chain_register(&ec_dev->event_notifier,
+ 						      &ec_dev->notifier_ready);
+ 		if (err)
+-			return err;
++			goto exit;
+ 	}
+ 
+ 	dev_info(dev, "Chrome EC device registered\n");
+ 
+ 	return 0;
++exit:
++	platform_device_unregister(ec_dev->ec);
++	platform_device_unregister(ec_dev->pd);
++	return err;
+ }
+ EXPORT_SYMBOL(cros_ec_register);
+ 
+diff --git a/drivers/platform/chrome/cros_ec_chardev.c b/drivers/platform/chrome/cros_ec_chardev.c
+index e0bce869c49a9..fd33de546aee0 100644
+--- a/drivers/platform/chrome/cros_ec_chardev.c
++++ b/drivers/platform/chrome/cros_ec_chardev.c
+@@ -301,7 +301,7 @@ static long cros_ec_chardev_ioctl_xcmd(struct cros_ec_dev *ec, void __user *arg)
+ 	}
+ 
+ 	s_cmd->command += ec->cmd_offset;
+-	ret = cros_ec_cmd_xfer_status(ec->ec_dev, s_cmd);
++	ret = cros_ec_cmd_xfer(ec->ec_dev, s_cmd);
+ 	/* Only copy data to userland if data was received. */
+ 	if (ret < 0)
+ 		goto exit;
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index 9f698a7aad129..e1fadf059e05e 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -560,22 +560,28 @@ exit:
+ EXPORT_SYMBOL(cros_ec_query_all);
+ 
+ /**
+- * cros_ec_cmd_xfer_status() - Send a command to the ChromeOS EC.
++ * cros_ec_cmd_xfer() - Send a command to the ChromeOS EC.
+  * @ec_dev: EC device.
+  * @msg: Message to write.
+  *
+- * Call this to send a command to the ChromeOS EC. This should be used instead of calling the EC's
+- * cmd_xfer() callback directly. It returns success status only if both the command was transmitted
+- * successfully and the EC replied with success status.
++ * Call this to send a command to the ChromeOS EC. This should be used instead
++ * of calling the EC's cmd_xfer() callback directly. This function does not
++ * convert EC command execution error codes to Linux error codes. Most
++ * in-kernel users will want to use cros_ec_cmd_xfer_status() instead since
++ * that function implements the conversion.
+  *
+  * Return:
+- * >=0 - The number of bytes transferred
+- * <0 - Linux error code
++ * >0 - EC command was executed successfully. The return value is the number
++ *      of bytes returned by the EC (excluding the header).
++ * =0 - EC communication was successful. EC command execution results are
++ *      reported in msg->result. The result will be EC_RES_SUCCESS if the
++ *      command was executed successfully or report an EC command execution
++ *      error.
++ * <0 - EC communication error. Return value is the Linux error code.
+  */
+-int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev,
+-			    struct cros_ec_command *msg)
++int cros_ec_cmd_xfer(struct cros_ec_device *ec_dev, struct cros_ec_command *msg)
+ {
+-	int ret, mapped;
++	int ret;
+ 
+ 	mutex_lock(&ec_dev->lock);
+ 	if (ec_dev->proto_version == EC_PROTO_VERSION_UNKNOWN) {
+@@ -616,6 +622,32 @@ int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev,
+ 	ret = send_command(ec_dev, msg);
+ 	mutex_unlock(&ec_dev->lock);
+ 
++	return ret;
++}
++EXPORT_SYMBOL(cros_ec_cmd_xfer);
++
++/**
++ * cros_ec_cmd_xfer_status() - Send a command to the ChromeOS EC.
++ * @ec_dev: EC device.
++ * @msg: Message to write.
++ *
++ * Call this to send a command to the ChromeOS EC. This should be used instead of calling the EC's
++ * cmd_xfer() callback directly. It returns success status only if both the command was transmitted
++ * successfully and the EC replied with success status.
++ *
++ * Return:
++ * >=0 - The number of bytes transferred.
++ * <0 - Linux error code
++ */
++int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev,
++			    struct cros_ec_command *msg)
++{
++	int ret, mapped;
++
++	ret = cros_ec_cmd_xfer(ec_dev, msg);
++	if (ret < 0)
++		return ret;
++
+ 	mapped = cros_ec_map_error(msg->result);
+ 	if (mapped) {
+ 		dev_dbg(ec_dev->dev, "Command result (err: %d [%d])\n",
+diff --git a/drivers/platform/mips/cpu_hwmon.c b/drivers/platform/mips/cpu_hwmon.c
+index 386389ffec419..d8c5f9195f85f 100644
+--- a/drivers/platform/mips/cpu_hwmon.c
++++ b/drivers/platform/mips/cpu_hwmon.c
+@@ -55,55 +55,6 @@ out:
+ static int nr_packages;
+ static struct device *cpu_hwmon_dev;
+ 
+-static SENSOR_DEVICE_ATTR(name, 0444, NULL, NULL, 0);
+-
+-static struct attribute *cpu_hwmon_attributes[] = {
+-	&sensor_dev_attr_name.dev_attr.attr,
+-	NULL
+-};
+-
+-/* Hwmon device attribute group */
+-static struct attribute_group cpu_hwmon_attribute_group = {
+-	.attrs = cpu_hwmon_attributes,
+-};
+-
+-static ssize_t get_cpu_temp(struct device *dev,
+-			struct device_attribute *attr, char *buf);
+-static ssize_t cpu_temp_label(struct device *dev,
+-			struct device_attribute *attr, char *buf);
+-
+-static SENSOR_DEVICE_ATTR(temp1_input, 0444, get_cpu_temp, NULL, 1);
+-static SENSOR_DEVICE_ATTR(temp1_label, 0444, cpu_temp_label, NULL, 1);
+-static SENSOR_DEVICE_ATTR(temp2_input, 0444, get_cpu_temp, NULL, 2);
+-static SENSOR_DEVICE_ATTR(temp2_label, 0444, cpu_temp_label, NULL, 2);
+-static SENSOR_DEVICE_ATTR(temp3_input, 0444, get_cpu_temp, NULL, 3);
+-static SENSOR_DEVICE_ATTR(temp3_label, 0444, cpu_temp_label, NULL, 3);
+-static SENSOR_DEVICE_ATTR(temp4_input, 0444, get_cpu_temp, NULL, 4);
+-static SENSOR_DEVICE_ATTR(temp4_label, 0444, cpu_temp_label, NULL, 4);
+-
+-static const struct attribute *hwmon_cputemp[4][3] = {
+-	{
+-		&sensor_dev_attr_temp1_input.dev_attr.attr,
+-		&sensor_dev_attr_temp1_label.dev_attr.attr,
+-		NULL
+-	},
+-	{
+-		&sensor_dev_attr_temp2_input.dev_attr.attr,
+-		&sensor_dev_attr_temp2_label.dev_attr.attr,
+-		NULL
+-	},
+-	{
+-		&sensor_dev_attr_temp3_input.dev_attr.attr,
+-		&sensor_dev_attr_temp3_label.dev_attr.attr,
+-		NULL
+-	},
+-	{
+-		&sensor_dev_attr_temp4_input.dev_attr.attr,
+-		&sensor_dev_attr_temp4_label.dev_attr.attr,
+-		NULL
+-	}
+-};
+-
+ static ssize_t cpu_temp_label(struct device *dev,
+ 			struct device_attribute *attr, char *buf)
+ {
+@@ -121,24 +72,47 @@ static ssize_t get_cpu_temp(struct device *dev,
+ 	return sprintf(buf, "%d\n", value);
+ }
+ 
+-static int create_sysfs_cputemp_files(struct kobject *kobj)
+-{
+-	int i, ret = 0;
+-
+-	for (i = 0; i < nr_packages; i++)
+-		ret = sysfs_create_files(kobj, hwmon_cputemp[i]);
++static SENSOR_DEVICE_ATTR(temp1_input, 0444, get_cpu_temp, NULL, 1);
++static SENSOR_DEVICE_ATTR(temp1_label, 0444, cpu_temp_label, NULL, 1);
++static SENSOR_DEVICE_ATTR(temp2_input, 0444, get_cpu_temp, NULL, 2);
++static SENSOR_DEVICE_ATTR(temp2_label, 0444, cpu_temp_label, NULL, 2);
++static SENSOR_DEVICE_ATTR(temp3_input, 0444, get_cpu_temp, NULL, 3);
++static SENSOR_DEVICE_ATTR(temp3_label, 0444, cpu_temp_label, NULL, 3);
++static SENSOR_DEVICE_ATTR(temp4_input, 0444, get_cpu_temp, NULL, 4);
++static SENSOR_DEVICE_ATTR(temp4_label, 0444, cpu_temp_label, NULL, 4);
+ 
+-	return ret;
+-}
++static struct attribute *cpu_hwmon_attributes[] = {
++	&sensor_dev_attr_temp1_input.dev_attr.attr,
++	&sensor_dev_attr_temp1_label.dev_attr.attr,
++	&sensor_dev_attr_temp2_input.dev_attr.attr,
++	&sensor_dev_attr_temp2_label.dev_attr.attr,
++	&sensor_dev_attr_temp3_input.dev_attr.attr,
++	&sensor_dev_attr_temp3_label.dev_attr.attr,
++	&sensor_dev_attr_temp4_input.dev_attr.attr,
++	&sensor_dev_attr_temp4_label.dev_attr.attr,
++	NULL
++};
+ 
+-static void remove_sysfs_cputemp_files(struct kobject *kobj)
++static umode_t cpu_hwmon_is_visible(struct kobject *kobj,
++				    struct attribute *attr, int i)
+ {
+-	int i;
++	int id = i / 2;
+ 
+-	for (i = 0; i < nr_packages; i++)
+-		sysfs_remove_files(kobj, hwmon_cputemp[i]);
++	if (id < nr_packages)
++		return attr->mode;
++	return 0;
+ }
+ 
++static struct attribute_group cpu_hwmon_group = {
++	.attrs = cpu_hwmon_attributes,
++	.is_visible = cpu_hwmon_is_visible,
++};
++
++static const struct attribute_group *cpu_hwmon_groups[] = {
++	&cpu_hwmon_group,
++	NULL
++};
++
+ #define CPU_THERMAL_THRESHOLD 90000
+ static struct delayed_work thermal_work;
+ 
+@@ -159,50 +133,31 @@ static void do_thermal_timer(struct work_struct *work)
+ 
+ static int __init loongson_hwmon_init(void)
+ {
+-	int ret;
+-
+ 	pr_info("Loongson Hwmon Enter...\n");
+ 
+ 	if (cpu_has_csr())
+ 		csr_temp_enable = csr_readl(LOONGSON_CSR_FEATURES) &
+ 				  LOONGSON_CSRF_TEMP;
+ 
+-	cpu_hwmon_dev = hwmon_device_register_with_info(NULL, "cpu_hwmon", NULL, NULL, NULL);
+-	if (IS_ERR(cpu_hwmon_dev)) {
+-		ret = PTR_ERR(cpu_hwmon_dev);
+-		pr_err("hwmon_device_register fail!\n");
+-		goto fail_hwmon_device_register;
+-	}
+-
+ 	nr_packages = loongson_sysconf.nr_cpus /
+ 		loongson_sysconf.cores_per_package;
+ 
+-	ret = create_sysfs_cputemp_files(&cpu_hwmon_dev->kobj);
+-	if (ret) {
+-		pr_err("fail to create cpu temperature interface!\n");
+-		goto fail_create_sysfs_cputemp_files;
++	cpu_hwmon_dev = hwmon_device_register_with_groups(NULL, "cpu_hwmon",
++							  NULL, cpu_hwmon_groups);
++	if (IS_ERR(cpu_hwmon_dev)) {
++		pr_err("hwmon_device_register fail!\n");
++		return PTR_ERR(cpu_hwmon_dev);
+ 	}
+ 
+ 	INIT_DEFERRABLE_WORK(&thermal_work, do_thermal_timer);
+ 	schedule_delayed_work(&thermal_work, msecs_to_jiffies(20000));
+ 
+-	return ret;
+-
+-fail_create_sysfs_cputemp_files:
+-	sysfs_remove_group(&cpu_hwmon_dev->kobj,
+-				&cpu_hwmon_attribute_group);
+-	hwmon_device_unregister(cpu_hwmon_dev);
+-
+-fail_hwmon_device_register:
+-	return ret;
++	return 0;
+ }
+ 
+ static void __exit loongson_hwmon_exit(void)
+ {
+ 	cancel_delayed_work_sync(&thermal_work);
+-	remove_sysfs_cputemp_files(&cpu_hwmon_dev->kobj);
+-	sysfs_remove_group(&cpu_hwmon_dev->kobj,
+-				&cpu_hwmon_attribute_group);
+ 	hwmon_device_unregister(cpu_hwmon_dev);
+ }
+ 
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 2c48e55c4104e..6e3f3511e7ddd 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -2027,10 +2027,13 @@ struct regulator *_regulator_get(struct device *dev, const char *id,
+ 		rdev->exclusive = 1;
+ 
+ 		ret = _regulator_is_enabled(rdev);
+-		if (ret > 0)
++		if (ret > 0) {
+ 			rdev->use_count = 1;
+-		else
++			regulator->enable_count = 1;
++		} else {
+ 			rdev->use_count = 0;
++			regulator->enable_count = 0;
++		}
+ 	}
+ 
+ 	link = device_link_add(dev, &rdev->dev, DL_FLAG_STATELESS);
+diff --git a/drivers/regulator/pfuze100-regulator.c b/drivers/regulator/pfuze100-regulator.c
+index 01a12cfcea7c6..0a19500d3725e 100644
+--- a/drivers/regulator/pfuze100-regulator.c
++++ b/drivers/regulator/pfuze100-regulator.c
+@@ -531,6 +531,7 @@ static int pfuze_parse_regulators_dt(struct pfuze_chip *chip)
+ 	parent = of_get_child_by_name(np, "regulators");
+ 	if (!parent) {
+ 		dev_err(dev, "regulators node not found\n");
++		of_node_put(np);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -560,6 +561,7 @@ static int pfuze_parse_regulators_dt(struct pfuze_chip *chip)
+ 	}
+ 
+ 	of_node_put(parent);
++	of_node_put(np);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Error parsing regulator init data: %d\n",
+ 			ret);
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index 8d784a2a09d86..05d227f9d2f28 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -844,32 +844,31 @@ static const struct rpm_regulator_data rpm_pm8950_regulators[] = {
+ 	{ "s2", QCOM_SMD_RPM_SMPA, 2, &pm8950_hfsmps, "vdd_s2" },
+ 	{ "s3", QCOM_SMD_RPM_SMPA, 3, &pm8950_hfsmps, "vdd_s3" },
+ 	{ "s4", QCOM_SMD_RPM_SMPA, 4, &pm8950_hfsmps, "vdd_s4" },
+-	{ "s5", QCOM_SMD_RPM_SMPA, 5, &pm8950_ftsmps2p5, "vdd_s5" },
++	/* S5 is managed via SPMI. */
+ 	{ "s6", QCOM_SMD_RPM_SMPA, 6, &pm8950_hfsmps, "vdd_s6" },
+ 
+ 	{ "l1", QCOM_SMD_RPM_LDOA, 1, &pm8950_ult_nldo, "vdd_l1_l19" },
+ 	{ "l2", QCOM_SMD_RPM_LDOA, 2, &pm8950_ult_nldo, "vdd_l2_l23" },
+ 	{ "l3", QCOM_SMD_RPM_LDOA, 3, &pm8950_ult_nldo, "vdd_l3" },
+-	{ "l4", QCOM_SMD_RPM_LDOA, 4, &pm8950_ult_pldo, "vdd_l4_l5_l6_l7_l16" },
+-	{ "l5", QCOM_SMD_RPM_LDOA, 5, &pm8950_pldo_lv, "vdd_l4_l5_l6_l7_l16" },
+-	{ "l6", QCOM_SMD_RPM_LDOA, 6, &pm8950_pldo_lv, "vdd_l4_l5_l6_l7_l16" },
+-	{ "l7", QCOM_SMD_RPM_LDOA, 7, &pm8950_pldo_lv, "vdd_l4_l5_l6_l7_l16" },
++	/* L4 seems not to exist. */
++	{ "l5", QCOM_SMD_RPM_LDOA, 5, &pm8950_pldo_lv, "vdd_l5_l6_l7_l16" },
++	{ "l6", QCOM_SMD_RPM_LDOA, 6, &pm8950_pldo_lv, "vdd_l5_l6_l7_l16" },
++	{ "l7", QCOM_SMD_RPM_LDOA, 7, &pm8950_pldo_lv, "vdd_l5_l6_l7_l16" },
+ 	{ "l8", QCOM_SMD_RPM_LDOA, 8, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" },
+ 	{ "l9", QCOM_SMD_RPM_LDOA, 9, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" },
+ 	{ "l10", QCOM_SMD_RPM_LDOA, 10, &pm8950_ult_nldo, "vdd_l9_l10_l13_l14_l15_l18"},
+-	{ "l11", QCOM_SMD_RPM_LDOA, 11, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22"},
+-	{ "l12", QCOM_SMD_RPM_LDOA, 12, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22"},
+-	{ "l13", QCOM_SMD_RPM_LDOA, 13, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"},
+-	{ "l14", QCOM_SMD_RPM_LDOA, 14, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"},
+-	{ "l15", QCOM_SMD_RPM_LDOA, 15, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"},
+-	{ "l16", QCOM_SMD_RPM_LDOA, 16, &pm8950_ult_pldo, "vdd_l4_l5_l6_l7_l16"},
+-	{ "l17", QCOM_SMD_RPM_LDOA, 17, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22"},
+-	{ "l18", QCOM_SMD_RPM_LDOA, 18, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"},
+-	{ "l19", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l1_l19"},
+-	{ "l20", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l20"},
+-	{ "l21", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l21"},
+-	{ "l22", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l8_l11_l12_l17_l22"},
+-	{ "l23", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l2_l23"},
++	{ "l11", QCOM_SMD_RPM_LDOA, 11, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" },
++	{ "l12", QCOM_SMD_RPM_LDOA, 12, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" },
++	{ "l13", QCOM_SMD_RPM_LDOA, 13, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" },
++	{ "l14", QCOM_SMD_RPM_LDOA, 14, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" },
++	{ "l15", QCOM_SMD_RPM_LDOA, 15, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" },
++	{ "l16", QCOM_SMD_RPM_LDOA, 16, &pm8950_ult_pldo, "vdd_l5_l6_l7_l16" },
++	{ "l17", QCOM_SMD_RPM_LDOA, 17, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" },
++	/* L18 seems not to exist. */
++	{ "l19", QCOM_SMD_RPM_LDOA, 19, &pm8950_pldo, "vdd_l1_l19" },
++	/* L20 & L21 seem not to exist. */
++	{ "l22", QCOM_SMD_RPM_LDOA, 22, &pm8950_pldo, "vdd_l8_l11_l12_l17_l22" },
++	{ "l23", QCOM_SMD_RPM_LDOA, 23, &pm8950_pldo, "vdd_l2_l23" },
+ 	{}
+ };
+ 
+diff --git a/drivers/scsi/dc395x.c b/drivers/scsi/dc395x.c
+index 6cb48ae8e1241..d8967bf72ef26 100644
+--- a/drivers/scsi/dc395x.c
++++ b/drivers/scsi/dc395x.c
+@@ -3631,10 +3631,19 @@ static struct DeviceCtlBlk *device_alloc(struct AdapterCtlBlk *acb,
+ #endif
+ 	if (dcb->target_lun != 0) {
+ 		/* Copy settings */
+-		struct DeviceCtlBlk *p;
+-		list_for_each_entry(p, &acb->dcb_list, list)
+-			if (p->target_id == dcb->target_id)
++		struct DeviceCtlBlk *p = NULL, *iter;
++
++		list_for_each_entry(iter, &acb->dcb_list, list)
++			if (iter->target_id == dcb->target_id) {
++				p = iter;
+ 				break;
++			}
++
++		if (!p) {
++			kfree(dcb);
++			return NULL;
++		}
++
+ 		dprintkdbg(DBG_1, 
+ 		       "device_alloc: <%02i-%i> copy from <%02i-%i>\n",
+ 		       dcb->target_id, dcb->target_lun,
+diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
+index 5ea426effa609..bbc5d6b9be737 100644
+--- a/drivers/scsi/fcoe/fcoe_ctlr.c
++++ b/drivers/scsi/fcoe/fcoe_ctlr.c
+@@ -1969,7 +1969,7 @@ EXPORT_SYMBOL(fcoe_ctlr_recv_flogi);
+  *
+  * Returns: u64 fc world wide name
+  */
+-u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN],
++u64 fcoe_wwn_from_mac(unsigned char mac[ETH_ALEN],
+ 		      unsigned int scheme, unsigned int port)
+ {
+ 	u64 wwn;
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index a50f870c5f725..755d68b981602 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -17445,7 +17445,6 @@ lpfc_fc_frame_check(struct lpfc_hba *phba, struct fc_frame_header *fc_hdr)
+ 	case FC_RCTL_ELS_REP:	/* extended link services reply */
+ 	case FC_RCTL_ELS4_REQ:	/* FC-4 ELS request */
+ 	case FC_RCTL_ELS4_REP:	/* FC-4 ELS reply */
+-	case FC_RCTL_BA_NOP:  	/* basic link service NOP */
+ 	case FC_RCTL_BA_ABTS: 	/* basic link service abort */
+ 	case FC_RCTL_BA_RMC: 	/* remove connection */
+ 	case FC_RCTL_BA_ACC:	/* basic accept */
+@@ -17466,6 +17465,7 @@ lpfc_fc_frame_check(struct lpfc_hba *phba, struct fc_frame_header *fc_hdr)
+ 		fc_vft_hdr = (struct fc_vft_header *)fc_hdr;
+ 		fc_hdr = &((struct fc_frame_header *)fc_vft_hdr)[1];
+ 		return lpfc_fc_frame_check(phba, fc_hdr);
++	case FC_RCTL_BA_NOP:	/* basic link service NOP */
+ 	default:
+ 		goto drop;
+ 	}
+@@ -18284,12 +18284,14 @@ lpfc_sli4_send_seq_to_ulp(struct lpfc_vport *vport,
+ 	if (!lpfc_complete_unsol_iocb(phba,
+ 				      phba->sli4_hba.els_wq->pring,
+ 				      iocbq, fc_hdr->fh_r_ctl,
+-				      fc_hdr->fh_type))
++				      fc_hdr->fh_type)) {
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ 				"2540 Ring %d handler: unexpected Rctl "
+ 				"x%x Type x%x received\n",
+ 				LPFC_ELS_RING,
+ 				fc_hdr->fh_r_ctl, fc_hdr->fh_type);
++		lpfc_in_buf_free(phba, &seq_dmabuf->dbuf);
++	}
+ 
+ 	/* Free iocb created in lpfc_prep_seq */
+ 	list_for_each_entry_safe(curr_iocb, next_iocb,
+diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
+index 80f546976c7e1..daffa36988aee 100644
+--- a/drivers/scsi/megaraid.c
++++ b/drivers/scsi/megaraid.c
+@@ -4634,7 +4634,7 @@ static int __init megaraid_init(void)
+ 	 * major number allocation.
+ 	 */
+ 	major = register_chrdev(0, "megadev_legacy", &megadev_fops);
+-	if (!major) {
++	if (major < 0) {
+ 		printk(KERN_WARNING
+ 				"megaraid: failed to register char device\n");
+ 	}
+diff --git a/drivers/scsi/ufs/ti-j721e-ufs.c b/drivers/scsi/ufs/ti-j721e-ufs.c
+index eafe0db98d542..122d650d08102 100644
+--- a/drivers/scsi/ufs/ti-j721e-ufs.c
++++ b/drivers/scsi/ufs/ti-j721e-ufs.c
+@@ -29,11 +29,9 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev)
+ 		return PTR_ERR(regbase);
+ 
+ 	pm_runtime_enable(dev);
+-	ret = pm_runtime_get_sync(dev);
+-	if (ret < 0) {
+-		pm_runtime_put_noidle(dev);
++	ret = pm_runtime_resume_and_get(dev);
++	if (ret < 0)
+ 		goto disable_pm;
+-	}
+ 
+ 	/* Select MPHY refclk frequency */
+ 	clk = devm_clk_get(dev, NULL);
+diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
+index 20182e39cb282..08331ecbe91fb 100644
+--- a/drivers/scsi/ufs/ufs-qcom.c
++++ b/drivers/scsi/ufs/ufs-qcom.c
+@@ -623,12 +623,7 @@ static int ufs_qcom_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
+ 			return err;
+ 	}
+ 
+-	err = ufs_qcom_ice_resume(host);
+-	if (err)
+-		return err;
+-
+-	hba->is_sys_suspended = false;
+-	return 0;
++	return ufs_qcom_ice_resume(host);
+ }
+ 
+ static void ufs_qcom_dev_ref_clk_ctrl(struct ufs_qcom_host *host, bool enable)
+@@ -669,8 +664,11 @@ static void ufs_qcom_dev_ref_clk_ctrl(struct ufs_qcom_host *host, bool enable)
+ 
+ 		writel_relaxed(temp, host->dev_ref_clk_ctrl_mmio);
+ 
+-		/* ensure that ref_clk is enabled/disabled before we return */
+-		wmb();
++		/*
++		 * Make sure the write to ref_clk reaches the destination and
++		 * not stored in a Write Buffer (WB).
++		 */
++		readl(host->dev_ref_clk_ctrl_mmio);
+ 
+ 		/*
+ 		 * If we call hibern8 exit after this, we need to make sure that
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index bf302776340ce..ea6ceab1a1b25 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -107,8 +107,13 @@ int ufshcd_dump_regs(struct ufs_hba *hba, size_t offset, size_t len,
+ 	if (!regs)
+ 		return -ENOMEM;
+ 
+-	for (pos = 0; pos < len; pos += 4)
++	for (pos = 0; pos < len; pos += 4) {
++		if (offset == 0 &&
++		    pos >= REG_UIC_ERROR_CODE_PHY_ADAPTER_LAYER &&
++		    pos <= REG_UIC_ERROR_CODE_DME)
++			continue;
+ 		regs[pos / 4] = ufshcd_readl(hba, offset + pos);
++	}
+ 
+ 	ufshcd_hex_dump(prefix, regs, len);
+ 	kfree(regs);
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index 70fbe70c6213c..2e06f48d683d1 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -496,6 +496,7 @@ static const struct of_device_id qcom_llcc_of_match[] = {
+ 	{ .compatible = "qcom,sdm845-llcc", .data = &sdm845_cfg },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, qcom_llcc_of_match);
+ 
+ static struct platform_driver qcom_llcc_driver = {
+ 	.driver = {
+diff --git a/drivers/soc/qcom/smp2p.c b/drivers/soc/qcom/smp2p.c
+index a9709aae54abb..fb76c8bc3c64b 100644
+--- a/drivers/soc/qcom/smp2p.c
++++ b/drivers/soc/qcom/smp2p.c
+@@ -420,6 +420,7 @@ static int smp2p_parse_ipc(struct qcom_smp2p *smp2p)
+ 	}
+ 
+ 	smp2p->ipc_regmap = syscon_node_to_regmap(syscon);
++	of_node_put(syscon);
+ 	if (IS_ERR(smp2p->ipc_regmap))
+ 		return PTR_ERR(smp2p->ipc_regmap);
+ 
+diff --git a/drivers/soc/qcom/smsm.c b/drivers/soc/qcom/smsm.c
+index c428d0f78816e..6564f15c53190 100644
+--- a/drivers/soc/qcom/smsm.c
++++ b/drivers/soc/qcom/smsm.c
+@@ -359,6 +359,7 @@ static int smsm_parse_ipc(struct qcom_smsm *smsm, unsigned host_id)
+ 		return 0;
+ 
+ 	host->ipc_regmap = syscon_node_to_regmap(syscon);
++	of_node_put(syscon);
+ 	if (IS_ERR(host->ipc_regmap))
+ 		return PTR_ERR(host->ipc_regmap);
+ 
+diff --git a/drivers/soc/ti/ti_sci_pm_domains.c b/drivers/soc/ti/ti_sci_pm_domains.c
+index 8afb3f45d2637..a33ec7eaf23d1 100644
+--- a/drivers/soc/ti/ti_sci_pm_domains.c
++++ b/drivers/soc/ti/ti_sci_pm_domains.c
+@@ -183,6 +183,8 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
+ 		devm_kcalloc(dev, max_id + 1,
+ 			     sizeof(*pd_provider->data.domains),
+ 			     GFP_KERNEL);
++	if (!pd_provider->data.domains)
++		return -ENOMEM;
+ 
+ 	pd_provider->data.num_domains = max_id + 1;
+ 	pd_provider->data.xlate = ti_sci_pd_xlate;
+diff --git a/drivers/spi/spi-fsl-qspi.c b/drivers/spi/spi-fsl-qspi.c
+index 9851551ebbe05..46ae46a944c5c 100644
+--- a/drivers/spi/spi-fsl-qspi.c
++++ b/drivers/spi/spi-fsl-qspi.c
+@@ -876,6 +876,10 @@ static int fsl_qspi_probe(struct platform_device *pdev)
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ 					"QuadSPI-memory");
++	if (!res) {
++		ret = -EINVAL;
++		goto err_put_ctrl;
++	}
+ 	q->memmap_phy = res->start;
+ 	/* Since there are 4 cs, map size required is 4 times ahb_buf_size */
+ 	q->ahb_addr = devm_ioremap(dev, q->memmap_phy,
+diff --git a/drivers/spi/spi-img-spfi.c b/drivers/spi/spi-img-spfi.c
+index 5f05d519fbbd0..71376b6df89db 100644
+--- a/drivers/spi/spi-img-spfi.c
++++ b/drivers/spi/spi-img-spfi.c
+@@ -731,7 +731,7 @@ static int img_spfi_resume(struct device *dev)
+ 	int ret;
+ 
+ 	ret = pm_runtime_get_sync(dev);
+-	if (ret) {
++	if (ret < 0) {
+ 		pm_runtime_put_noidle(dev);
+ 		return ret;
+ 	}
+diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c
+index e39fd38f5180e..ea03cc589e61f 100644
+--- a/drivers/spi/spi-rspi.c
++++ b/drivers/spi/spi-rspi.c
+@@ -1107,14 +1107,11 @@ static struct dma_chan *rspi_request_dma_chan(struct device *dev,
+ 	}
+ 
+ 	memset(&cfg, 0, sizeof(cfg));
++	cfg.dst_addr = port_addr + RSPI_SPDR;
++	cfg.src_addr = port_addr + RSPI_SPDR;
++	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
++	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ 	cfg.direction = dir;
+-	if (dir == DMA_MEM_TO_DEV) {
+-		cfg.dst_addr = port_addr;
+-		cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+-	} else {
+-		cfg.src_addr = port_addr;
+-		cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+-	}
+ 
+ 	ret = dmaengine_slave_config(chan, &cfg);
+ 	if (ret) {
+@@ -1145,12 +1142,12 @@ static int rspi_request_dma(struct device *dev, struct spi_controller *ctlr,
+ 	}
+ 
+ 	ctlr->dma_tx = rspi_request_dma_chan(dev, DMA_MEM_TO_DEV, dma_tx_id,
+-					     res->start + RSPI_SPDR);
++					     res->start);
+ 	if (!ctlr->dma_tx)
+ 		return -ENODEV;
+ 
+ 	ctlr->dma_rx = rspi_request_dma_chan(dev, DMA_DEV_TO_MEM, dma_rx_id,
+-					     res->start + RSPI_SPDR);
++					     res->start);
+ 	if (!ctlr->dma_rx) {
+ 		dma_release_channel(ctlr->dma_tx);
+ 		ctlr->dma_tx = NULL;
+diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c
+index 4f24f63922126..9c58dcd7b3242 100644
+--- a/drivers/spi/spi-stm32-qspi.c
++++ b/drivers/spi/spi-stm32-qspi.c
+@@ -295,7 +295,8 @@ static int stm32_qspi_wait_cmd(struct stm32_qspi *qspi,
+ 	if (!op->data.nbytes)
+ 		goto wait_nobusy;
+ 
+-	if (readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF)
++	if ((readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF) ||
++	    qspi->fmode == CCR_FMODE_APM)
+ 		goto out;
+ 
+ 	reinit_completion(&qspi->data_completion);
+diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c
+index e06aafe169e0c..081da1fd3fd7e 100644
+--- a/drivers/spi/spi-ti-qspi.c
++++ b/drivers/spi/spi-ti-qspi.c
+@@ -448,6 +448,7 @@ static int ti_qspi_dma_xfer(struct ti_qspi *qspi, dma_addr_t dma_dst,
+ 	enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
+ 	struct dma_async_tx_descriptor *tx;
+ 	int ret;
++	unsigned long time_left;
+ 
+ 	tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len, flags);
+ 	if (!tx) {
+@@ -467,9 +468,9 @@ static int ti_qspi_dma_xfer(struct ti_qspi *qspi, dma_addr_t dma_dst,
+ 	}
+ 
+ 	dma_async_issue_pending(chan);
+-	ret = wait_for_completion_timeout(&qspi->transfer_complete,
++	time_left = wait_for_completion_timeout(&qspi->transfer_complete,
+ 					  msecs_to_jiffies(len));
+-	if (ret <= 0) {
++	if (time_left == 0) {
+ 		dmaengine_terminate_sync(chan);
+ 		dev_err(qspi->dev, "DMA wait_for_completion_timeout\n");
+ 		return -ETIMEDOUT;
+diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c
+index 5c2ca61add8e8..b65f2f3ef357f 100644
+--- a/drivers/staging/media/hantro/hantro_v4l2.c
++++ b/drivers/staging/media/hantro/hantro_v4l2.c
+@@ -644,8 +644,12 @@ static int hantro_buf_prepare(struct vb2_buffer *vb)
+ 	 * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets
+ 	 * it to buffer length).
+ 	 */
+-	if (V4L2_TYPE_IS_CAPTURE(vq->type))
+-		vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
++	if (V4L2_TYPE_IS_CAPTURE(vq->type)) {
++		if (ctx->is_encoder)
++			vb2_set_plane_payload(vb, 0, 0);
++		else
++			vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/staging/media/rkvdec/rkvdec-h264.c b/drivers/staging/media/rkvdec/rkvdec-h264.c
+index 5487f6d0bcb63..7013f7ce36781 100644
+--- a/drivers/staging/media/rkvdec/rkvdec-h264.c
++++ b/drivers/staging/media/rkvdec/rkvdec-h264.c
+@@ -112,6 +112,7 @@ struct rkvdec_h264_run {
+ 	const struct v4l2_ctrl_h264_sps *sps;
+ 	const struct v4l2_ctrl_h264_pps *pps;
+ 	const struct v4l2_ctrl_h264_scaling_matrix *scaling_matrix;
++	int ref_buf_idx[V4L2_H264_NUM_DPB_ENTRIES];
+ };
+ 
+ struct rkvdec_h264_ctx {
+@@ -661,8 +662,8 @@ static void assemble_hw_pps(struct rkvdec_ctx *ctx,
+ 	WRITE_PPS(0xff, PROFILE_IDC);
+ 	WRITE_PPS(1, CONSTRAINT_SET3_FLAG);
+ 	WRITE_PPS(sps->chroma_format_idc, CHROMA_FORMAT_IDC);
+-	WRITE_PPS(sps->bit_depth_luma_minus8 + 8, BIT_DEPTH_LUMA);
+-	WRITE_PPS(sps->bit_depth_chroma_minus8 + 8, BIT_DEPTH_CHROMA);
++	WRITE_PPS(sps->bit_depth_luma_minus8, BIT_DEPTH_LUMA);
++	WRITE_PPS(sps->bit_depth_chroma_minus8, BIT_DEPTH_CHROMA);
+ 	WRITE_PPS(0, QPPRIME_Y_ZERO_TRANSFORM_BYPASS_FLAG);
+ 	WRITE_PPS(sps->log2_max_frame_num_minus4, LOG2_MAX_FRAME_NUM_MINUS4);
+ 	WRITE_PPS(sps->max_num_ref_frames, MAX_NUM_REF_FRAMES);
+@@ -725,6 +726,26 @@ static void assemble_hw_pps(struct rkvdec_ctx *ctx,
+ 	}
+ }
+ 
++static void lookup_ref_buf_idx(struct rkvdec_ctx *ctx,
++			       struct rkvdec_h264_run *run)
++{
++	const struct v4l2_ctrl_h264_decode_params *dec_params = run->decode_params;
++	u32 i;
++
++	for (i = 0; i < ARRAY_SIZE(dec_params->dpb); i++) {
++		struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
++		const struct v4l2_h264_dpb_entry *dpb = run->decode_params->dpb;
++		struct vb2_queue *cap_q = &m2m_ctx->cap_q_ctx.q;
++		int buf_idx = -1;
++
++		if (dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE)
++			buf_idx = vb2_find_timestamp(cap_q,
++						     dpb[i].reference_ts, 0);
++
++		run->ref_buf_idx[i] = buf_idx;
++	}
++}
++
+ static void assemble_hw_rps(struct rkvdec_ctx *ctx,
+ 			    struct rkvdec_h264_run *run)
+ {
+@@ -762,7 +783,7 @@ static void assemble_hw_rps(struct rkvdec_ctx *ctx,
+ 
+ 	for (j = 0; j < RKVDEC_NUM_REFLIST; j++) {
+ 		for (i = 0; i < h264_ctx->reflists.num_valid; i++) {
+-			u8 dpb_valid = 0;
++			bool dpb_valid = run->ref_buf_idx[i] >= 0;
+ 			u8 idx = 0;
+ 
+ 			switch (j) {
+@@ -779,8 +800,6 @@ static void assemble_hw_rps(struct rkvdec_ctx *ctx,
+ 
+ 			if (idx >= ARRAY_SIZE(dec_params->dpb))
+ 				continue;
+-			dpb_valid = !!(dpb[idx].flags &
+-				       V4L2_H264_DPB_ENTRY_FLAG_ACTIVE);
+ 
+ 			set_ps_field(hw_rps, DPB_INFO(i, j),
+ 				     idx | dpb_valid << 4);
+@@ -859,13 +878,8 @@ get_ref_buf(struct rkvdec_ctx *ctx, struct rkvdec_h264_run *run,
+ 	    unsigned int dpb_idx)
+ {
+ 	struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
+-	const struct v4l2_h264_dpb_entry *dpb = run->decode_params->dpb;
+ 	struct vb2_queue *cap_q = &m2m_ctx->cap_q_ctx.q;
+-	int buf_idx = -1;
+-
+-	if (dpb[dpb_idx].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE)
+-		buf_idx = vb2_find_timestamp(cap_q,
+-					     dpb[dpb_idx].reference_ts, 0);
++	int buf_idx = run->ref_buf_idx[dpb_idx];
+ 
+ 	/*
+ 	 * If a DPB entry is unused or invalid, address of current destination
+@@ -1102,6 +1116,7 @@ static int rkvdec_h264_run(struct rkvdec_ctx *ctx)
+ 
+ 	assemble_hw_scaling_list(ctx, &run);
+ 	assemble_hw_pps(ctx, &run);
++	lookup_ref_buf_idx(ctx, &run);
+ 	assemble_hw_rps(ctx, &run);
+ 	config_registers(ctx, &run);
+ 
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index a7788e7a9542a..e384ea8d72801 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -1000,7 +1000,6 @@ static const char * const rkvdec_clk_names[] = {
+ static int rkvdec_probe(struct platform_device *pdev)
+ {
+ 	struct rkvdec_dev *rkvdec;
+-	struct resource *res;
+ 	unsigned int i;
+ 	int ret, irq;
+ 
+@@ -1032,8 +1031,7 @@ static int rkvdec_probe(struct platform_device *pdev)
+ 	 */
+ 	clk_set_rate(rkvdec->clocks[0].clk, 500 * 1000 * 1000);
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	rkvdec->regs = devm_ioremap_resource(&pdev->dev, res);
++	rkvdec->regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(rkvdec->regs))
+ 		return PTR_ERR(rkvdec->regs);
+ 
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index 109f019d21480..1eded5c4ebda6 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -831,7 +831,6 @@ bool target_configure_unmap_from_queue(struct se_dev_attrib *attrib,
+ 	attrib->unmap_granularity = q->limits.discard_granularity / block_size;
+ 	attrib->unmap_granularity_alignment = q->limits.discard_alignment /
+ 								block_size;
+-	attrib->unmap_zeroes_data = !!(q->limits.max_write_zeroes_sectors);
+ 	return true;
+ }
+ EXPORT_SYMBOL(target_configure_unmap_from_queue);
+diff --git a/drivers/thermal/broadcom/bcm2711_thermal.c b/drivers/thermal/broadcom/bcm2711_thermal.c
+index 67c2a737bc9d9..7b536c8a59dca 100644
+--- a/drivers/thermal/broadcom/bcm2711_thermal.c
++++ b/drivers/thermal/broadcom/bcm2711_thermal.c
+@@ -38,7 +38,6 @@ static int bcm2711_get_temp(void *data, int *temp)
+ 	int offset = thermal_zone_get_offset(priv->thermal);
+ 	u32 val;
+ 	int ret;
+-	long t;
+ 
+ 	ret = regmap_read(priv->regmap, AVS_RO_TEMP_STATUS, &val);
+ 	if (ret)
+@@ -50,9 +49,7 @@ static int bcm2711_get_temp(void *data, int *temp)
+ 	val &= AVS_RO_TEMP_STATUS_DATA_MSK;
+ 
+ 	/* Convert a HW code to a temperature reading (millidegree celsius) */
+-	t = slope * val + offset;
+-
+-	*temp = t < 0 ? 0 : t;
++	*temp = slope * val + offset;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/thermal/broadcom/sr-thermal.c b/drivers/thermal/broadcom/sr-thermal.c
+index 475ce29007713..85ab9edd580cc 100644
+--- a/drivers/thermal/broadcom/sr-thermal.c
++++ b/drivers/thermal/broadcom/sr-thermal.c
+@@ -60,6 +60,9 @@ static int sr_thermal_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -ENOENT;
++
+ 	sr_thermal->regs = (void __iomem *)devm_memremap(&pdev->dev, res->start,
+ 							 resource_size(res),
+ 							 MEMREMAP_WB);
+diff --git a/drivers/thermal/imx_sc_thermal.c b/drivers/thermal/imx_sc_thermal.c
+index 8d76dbfde6a9f..331a241eb0ef3 100644
+--- a/drivers/thermal/imx_sc_thermal.c
++++ b/drivers/thermal/imx_sc_thermal.c
+@@ -94,8 +94,8 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ 		sensor = devm_kzalloc(&pdev->dev, sizeof(*sensor), GFP_KERNEL);
+ 		if (!sensor) {
+ 			of_node_put(child);
+-			of_node_put(sensor_np);
+-			return -ENOMEM;
++			ret = -ENOMEM;
++			goto put_node;
+ 		}
+ 
+ 		ret = thermal_zone_of_get_sensor_id(child,
+@@ -124,7 +124,9 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ 			dev_warn(&pdev->dev, "failed to add hwmon sysfs attributes\n");
+ 	}
+ 
++put_node:
+ 	of_node_put(sensor_np);
++	of_node_put(np);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index d9e34ac376626..dd449945e1e5e 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -1092,10 +1092,7 @@ __thermal_cooling_device_register(struct device_node *np,
+ {
+ 	struct thermal_cooling_device *cdev;
+ 	struct thermal_zone_device *pos = NULL;
+-	int result;
+-
+-	if (type && strlen(type) >= THERMAL_NAME_LENGTH)
+-		return ERR_PTR(-EINVAL);
++	int id, ret;
+ 
+ 	if (!ops || !ops->get_max_state || !ops->get_cur_state ||
+ 	    !ops->set_cur_state)
+@@ -1105,14 +1102,18 @@ __thermal_cooling_device_register(struct device_node *np,
+ 	if (!cdev)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	result = ida_simple_get(&thermal_cdev_ida, 0, 0, GFP_KERNEL);
+-	if (result < 0) {
+-		kfree(cdev);
+-		return ERR_PTR(result);
++	ret = ida_simple_get(&thermal_cdev_ida, 0, 0, GFP_KERNEL);
++	if (ret < 0)
++		goto out_kfree_cdev;
++	cdev->id = ret;
++	id = ret;
++
++	cdev->type = kstrdup(type ? type : "", GFP_KERNEL);
++	if (!cdev->type) {
++		ret = -ENOMEM;
++		goto out_ida_remove;
+ 	}
+ 
+-	cdev->id = result;
+-	strlcpy(cdev->type, type ? : "", sizeof(cdev->type));
+ 	mutex_init(&cdev->lock);
+ 	INIT_LIST_HEAD(&cdev->thermal_instances);
+ 	cdev->np = np;
+@@ -1122,12 +1123,9 @@ __thermal_cooling_device_register(struct device_node *np,
+ 	cdev->devdata = devdata;
+ 	thermal_cooling_device_setup_sysfs(cdev);
+ 	dev_set_name(&cdev->device, "cooling_device%d", cdev->id);
+-	result = device_register(&cdev->device);
+-	if (result) {
+-		ida_simple_remove(&thermal_cdev_ida, cdev->id);
+-		put_device(&cdev->device);
+-		return ERR_PTR(result);
+-	}
++	ret = device_register(&cdev->device);
++	if (ret)
++		goto out_kfree_type;
+ 
+ 	/* Add 'this' new cdev to the global cdev list */
+ 	mutex_lock(&thermal_list_lock);
+@@ -1145,6 +1143,17 @@ __thermal_cooling_device_register(struct device_node *np,
+ 	mutex_unlock(&thermal_list_lock);
+ 
+ 	return cdev;
++
++out_kfree_type:
++	thermal_cooling_device_destroy_sysfs(cdev);
++	kfree(cdev->type);
++	put_device(&cdev->device);
++	cdev = NULL;
++out_ida_remove:
++	ida_simple_remove(&thermal_cdev_ida, id);
++out_kfree_cdev:
++	kfree(cdev);
++	return ERR_PTR(ret);
+ }
+ 
+ /**
+@@ -1303,6 +1312,7 @@ void thermal_cooling_device_unregister(struct thermal_cooling_device *cdev)
+ 	ida_simple_remove(&thermal_cdev_ida, cdev->id);
+ 	device_del(&cdev->device);
+ 	thermal_cooling_device_destroy_sysfs(cdev);
++	kfree(cdev->type);
+ 	put_device(&cdev->device);
+ }
+ EXPORT_SYMBOL_GPL(thermal_cooling_device_unregister);
+diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c
+index a7363bc66c11a..351ad0b020291 100644
+--- a/drivers/tty/serial/pch_uart.c
++++ b/drivers/tty/serial/pch_uart.c
+@@ -628,22 +628,6 @@ static int push_rx(struct eg20t_port *priv, const unsigned char *buf,
+ 	return 0;
+ }
+ 
+-static int pop_tx_x(struct eg20t_port *priv, unsigned char *buf)
+-{
+-	int ret = 0;
+-	struct uart_port *port = &priv->port;
+-
+-	if (port->x_char) {
+-		dev_dbg(priv->port.dev, "%s:X character send %02x (%lu)\n",
+-			__func__, port->x_char, jiffies);
+-		buf[0] = port->x_char;
+-		port->x_char = 0;
+-		ret = 1;
+-	}
+-
+-	return ret;
+-}
+-
+ static int dma_push_rx(struct eg20t_port *priv, int size)
+ {
+ 	int room;
+@@ -893,9 +877,10 @@ static unsigned int handle_tx(struct eg20t_port *priv)
+ 
+ 	fifo_size = max(priv->fifo_size, 1);
+ 	tx_empty = 1;
+-	if (pop_tx_x(priv, xmit->buf)) {
+-		pch_uart_hal_write(priv, xmit->buf, 1);
++	if (port->x_char) {
++		pch_uart_hal_write(priv, &port->x_char, 1);
+ 		port->icount.tx++;
++		port->x_char = 0;
+ 		tx_empty = 0;
+ 		fifo_size--;
+ 	}
+@@ -950,9 +935,11 @@ static unsigned int dma_handle_tx(struct eg20t_port *priv)
+ 	}
+ 
+ 	fifo_size = max(priv->fifo_size, 1);
+-	if (pop_tx_x(priv, xmit->buf)) {
+-		pch_uart_hal_write(priv, xmit->buf, 1);
++
++	if (port->x_char) {
++		pch_uart_hal_write(priv, &port->x_char, 1);
+ 		port->icount.tx++;
++		port->x_char = 0;
+ 		fifo_size--;
+ 	}
+ 
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index 0fc473321d3e3..6c4a50addadd8 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -172,7 +172,8 @@ static struct tty_buffer *tty_buffer_alloc(struct tty_port *port, size_t size)
+ 	   have queued and recycle that ? */
+ 	if (atomic_read(&port->buf.mem_used) > port->buf.mem_limit)
+ 		return NULL;
+-	p = kmalloc(sizeof(struct tty_buffer) + 2 * size, GFP_ATOMIC);
++	p = kmalloc(sizeof(struct tty_buffer) + 2 * size,
++		    GFP_ATOMIC | __GFP_NOWARN);
+ 	if (p == NULL)
+ 		return NULL;
+ 
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index ddd1d3eef912b..bf5e376676977 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -2661,6 +2661,7 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ {
+ 	int retval;
+ 	struct usb_device *rhdev;
++	struct usb_hcd *shared_hcd;
+ 
+ 	if (!hcd->skip_phy_initialization && usb_hcd_is_primary_hcd(hcd)) {
+ 		hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev);
+@@ -2817,13 +2818,26 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ 		goto err_hcd_driver_start;
+ 	}
+ 
++	/* starting here, usbcore will pay attention to the shared HCD roothub */
++	shared_hcd = hcd->shared_hcd;
++	if (!usb_hcd_is_primary_hcd(hcd) && shared_hcd && HCD_DEFER_RH_REGISTER(shared_hcd)) {
++		retval = register_root_hub(shared_hcd);
++		if (retval != 0)
++			goto err_register_root_hub;
++
++		if (shared_hcd->uses_new_polling && HCD_POLL_RH(shared_hcd))
++			usb_hcd_poll_rh_status(shared_hcd);
++	}
++
+ 	/* starting here, usbcore will pay attention to this root hub */
+-	retval = register_root_hub(hcd);
+-	if (retval != 0)
+-		goto err_register_root_hub;
++	if (!HCD_DEFER_RH_REGISTER(hcd)) {
++		retval = register_root_hub(hcd);
++		if (retval != 0)
++			goto err_register_root_hub;
+ 
+-	if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
+-		usb_hcd_poll_rh_status(hcd);
++		if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
++			usb_hcd_poll_rh_status(hcd);
++	}
+ 
+ 	return retval;
+ 
+@@ -2866,6 +2880,7 @@ EXPORT_SYMBOL_GPL(usb_add_hcd);
+ void usb_remove_hcd(struct usb_hcd *hcd)
+ {
+ 	struct usb_device *rhdev = hcd->self.root_hub;
++	bool rh_registered;
+ 
+ 	dev_info(hcd->self.controller, "remove, state %x\n", hcd->state);
+ 
+@@ -2876,6 +2891,7 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ 
+ 	dev_dbg(hcd->self.controller, "roothub graceful disconnect\n");
+ 	spin_lock_irq (&hcd_root_hub_lock);
++	rh_registered = hcd->rh_registered;
+ 	hcd->rh_registered = 0;
+ 	spin_unlock_irq (&hcd_root_hub_lock);
+ 
+@@ -2885,7 +2901,8 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ 	cancel_work_sync(&hcd->died_work);
+ 
+ 	mutex_lock(&usb_bus_idr_lock);
+-	usb_disconnect(&rhdev);		/* Sets rhdev to NULL */
++	if (rh_registered)
++		usb_disconnect(&rhdev);		/* Sets rhdev to NULL */
+ 	mutex_unlock(&usb_bus_idr_lock);
+ 
+ 	/*
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 7e5fd6afd9f45..f03ee889ecc70 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -511,6 +511,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* DJI CineSSD */
+ 	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+ 
++	/* DELL USB GEN2 */
++	{ USB_DEVICE(0x413c, 0xb062), .driver_info = USB_QUIRK_NO_LPM | USB_QUIRK_RESET_RESUME },
++
+ 	/* VCOM device */
+ 	{ USB_DEVICE(0x4296, 0x7570), .driver_info = USB_QUIRK_CONFIG_INTF_STRINGS },
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 1f0503dc96eed..05fe6ded66a52 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2960,14 +2960,14 @@ static bool dwc3_gadget_endpoint_trbs_complete(struct dwc3_ep *dep,
+ 	struct dwc3		*dwc = dep->dwc;
+ 	bool			no_started_trb = true;
+ 
+-	if (!dep->endpoint.desc)
+-		return no_started_trb;
+-
+ 	dwc3_gadget_ep_cleanup_completed_requests(dep, event, status);
+ 
+ 	if (dep->flags & DWC3_EP_END_TRANSFER_PENDING)
+ 		goto out;
+ 
++	if (!dep->endpoint.desc)
++		return no_started_trb;
++
+ 	if (usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
+ 		list_empty(&dep->started_list) &&
+ 		(list_empty(&dep->pending_list) || status == -EXDEV))
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 48e2c04741891..886279755804e 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -59,6 +59,7 @@
+ #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI		0x9a13
+ #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI		0x1138
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI		0x461e
++#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI		0x464e
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI	0x51ed
+ 
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_4			0x43b9
+@@ -263,6 +264,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI))
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+ 
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 2eb4083c5b45c..a40c0f3b85c2c 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1137,6 +1137,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0xff, 0x30) },	/* EM160R-GL */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0, 0) },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, 0x0700, 0xff), /* BG95 */
++	  .driver_info = RSVD(3) | ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10),
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index f2ad450db5478..e65c0fa95d311 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -473,11 +473,14 @@ static void vdpasim_set_vq_ready(struct vdpa_device *vdpa, u16 idx, bool ready)
+ {
+ 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+ 	struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx];
++	bool old_ready;
+ 
+ 	spin_lock(&vdpasim->lock);
++	old_ready = vq->ready;
+ 	vq->ready = ready;
+-	if (vq->ready)
++	if (vq->ready && !old_ready) {
+ 		vdpasim_queue_ready(vdpasim, idx);
++	}
+ 	spin_unlock(&vdpasim->lock);
+ }
+ 
+diff --git a/drivers/video/console/sticon.c b/drivers/video/console/sticon.c
+index 40496e9e9b438..f304163e87e99 100644
+--- a/drivers/video/console/sticon.c
++++ b/drivers/video/console/sticon.c
+@@ -46,6 +46,7 @@
+ #include <linux/slab.h>
+ #include <linux/font.h>
+ #include <linux/crc32.h>
++#include <linux/fb.h>
+ 
+ #include <asm/io.h>
+ 
+@@ -392,7 +393,9 @@ static int __init sticonsole_init(void)
+     for (i = 0; i < MAX_NR_CONSOLES; i++)
+ 	font_data[i] = STI_DEF_FONT;
+ 
+-    pr_info("sticon: Initializing STI text console.\n");
++    pr_info("sticon: Initializing STI text console on %s at [%s]\n",
++	sticon_sti->sti_data->inq_outptr.dev_name,
++	sticon_sti->pa_path);
+     console_lock();
+     err = do_take_over_console(&sti_con, 0, MAX_NR_CONSOLES - 1,
+ 		PAGE0->mem_cons.cl_class != CL_DUPLEX);
+diff --git a/drivers/video/console/sticore.c b/drivers/video/console/sticore.c
+index 6a26a364f9bd7..77622ef401d8f 100644
+--- a/drivers/video/console/sticore.c
++++ b/drivers/video/console/sticore.c
+@@ -30,10 +30,11 @@
+ #include <asm/pdc.h>
+ #include <asm/cacheflush.h>
+ #include <asm/grfioctl.h>
++#include <asm/fb.h>
+ 
+ #include "../fbdev/sticore.h"
+ 
+-#define STI_DRIVERVERSION "Version 0.9b"
++#define STI_DRIVERVERSION "Version 0.9c"
+ 
+ static struct sti_struct *default_sti __read_mostly;
+ 
+@@ -502,7 +503,7 @@ sti_select_fbfont(struct sti_cooked_rom *cooked_rom, const char *fbfont_name)
+ 	if (!fbfont)
+ 		return NULL;
+ 
+-	pr_info("STI selected %dx%d framebuffer font %s for sticon\n",
++	pr_info("    using %ux%u framebuffer font %s\n",
+ 			fbfont->width, fbfont->height, fbfont->name);
+ 			
+ 	bpc = ((fbfont->width+7)/8) * fbfont->height; 
+@@ -946,6 +947,7 @@ out_err:
+ 
+ static void sticore_check_for_default_sti(struct sti_struct *sti, char *path)
+ {
++	pr_info("    located at [%s]\n", sti->pa_path);
+ 	if (strcmp (path, default_sti_path) == 0)
+ 		default_sti = sti;
+ }
+@@ -957,7 +959,6 @@ static void sticore_check_for_default_sti(struct sti_struct *sti, char *path)
+  */
+ static int __init sticore_pa_init(struct parisc_device *dev)
+ {
+-	char pa_path[21];
+ 	struct sti_struct *sti = NULL;
+ 	int hpa = dev->hpa.start;
+ 
+@@ -970,8 +971,8 @@ static int __init sticore_pa_init(struct parisc_device *dev)
+ 	if (!sti)
+ 		return 1;
+ 
+-	print_pa_hwpath(dev, pa_path);
+-	sticore_check_for_default_sti(sti, pa_path);
++	print_pa_hwpath(dev, sti->pa_path);
++	sticore_check_for_default_sti(sti, sti->pa_path);
+ 	return 0;
+ }
+ 
+@@ -1007,9 +1008,8 @@ static int sticore_pci_init(struct pci_dev *pd, const struct pci_device_id *ent)
+ 
+ 	sti = sti_try_rom_generic(rom_base, fb_base, pd);
+ 	if (sti) {
+-		char pa_path[30];
+-		print_pci_hwpath(pd, pa_path);
+-		sticore_check_for_default_sti(sti, pa_path);
++		print_pci_hwpath(pd, sti->pa_path);
++		sticore_check_for_default_sti(sti, sti->pa_path);
+ 	}
+ 	
+ 	if (!sti) {
+@@ -1127,6 +1127,22 @@ int sti_call(const struct sti_struct *sti, unsigned long func,
+ 	return ret;
+ }
+ 
++/* check if given fb_info is the primary device */
++int fb_is_primary_device(struct fb_info *info)
++{
++	struct sti_struct *sti;
++
++	sti = sti_get_rom(0);
++
++	/* if no built-in graphics card found, allow any fb driver as default */
++	if (!sti)
++		return true;
++
++	/* return true if it's the default built-in framebuffer driver */
++	return (sti->info == info);
++}
++EXPORT_SYMBOL(fb_is_primary_device);
++
+ MODULE_AUTHOR("Philipp Rumpf, Helge Deller, Thomas Bogendoerfer");
+ MODULE_DESCRIPTION("Core STI driver for HP's NGLE series graphics cards in HP PARISC machines");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/video/fbdev/amba-clcd.c b/drivers/video/fbdev/amba-clcd.c
+index 33595cc4778e9..79efefd224f40 100644
+--- a/drivers/video/fbdev/amba-clcd.c
++++ b/drivers/video/fbdev/amba-clcd.c
+@@ -771,12 +771,15 @@ static int clcdfb_of_vram_setup(struct clcd_fb *fb)
+ 		return -ENODEV;
+ 
+ 	fb->fb.screen_base = of_iomap(memory, 0);
+-	if (!fb->fb.screen_base)
++	if (!fb->fb.screen_base) {
++		of_node_put(memory);
+ 		return -ENOMEM;
++	}
+ 
+ 	fb->fb.fix.smem_start = of_translate_address(memory,
+ 			of_get_address(memory, 0, &size, NULL));
+ 	fb->fb.fix.smem_len = size;
++	of_node_put(memory);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index f102519ccefb4..13de2bebb09a5 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -3300,6 +3300,9 @@ static void fbcon_register_existing_fbs(struct work_struct *work)
+ 
+ 	console_lock();
+ 
++	deferred_takeover = false;
++	logo_shown = FBCON_LOGO_DONTSHOW;
++
+ 	for_each_registered_fb(i)
+ 		fbcon_fb_registered(registered_fb[i]);
+ 
+@@ -3317,8 +3320,6 @@ static int fbcon_output_notifier(struct notifier_block *nb,
+ 	pr_info("fbcon: Taking over console\n");
+ 
+ 	dummycon_unregister_output_notifier(&fbcon_output_nb);
+-	deferred_takeover = false;
+-	logo_shown = FBCON_LOGO_DONTSHOW;
+ 
+ 	/* We may get called in atomic context */
+ 	schedule_work(&fbcon_deferred_takeover_work);
+diff --git a/drivers/video/fbdev/sticore.h b/drivers/video/fbdev/sticore.h
+index c338f7848ae2b..0ebdd28a0b813 100644
+--- a/drivers/video/fbdev/sticore.h
++++ b/drivers/video/fbdev/sticore.h
+@@ -370,6 +370,9 @@ struct sti_struct {
+ 
+ 	/* pointer to all internal data */
+ 	struct sti_all_data *sti_data;
++
++	/* pa_path of this device */
++	char pa_path[24];
+ };
+ 
+ 
+diff --git a/drivers/video/fbdev/stifb.c b/drivers/video/fbdev/stifb.c
+index 265865610edc6..002f265d8db58 100644
+--- a/drivers/video/fbdev/stifb.c
++++ b/drivers/video/fbdev/stifb.c
+@@ -1317,11 +1317,11 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
+ 		goto out_err3;
+ 	}
+ 
++	/* save for primary gfx device detection & unregister_framebuffer() */
++	sti->info = info;
+ 	if (register_framebuffer(&fb->info) < 0)
+ 		goto out_err4;
+ 
+-	sti->info = info; /* save for unregister_framebuffer() */
+-
+ 	fb_info(&fb->info, "%s %dx%d-%d frame buffer device, %s, id: %04x, mmio: 0x%04lx\n",
+ 		fix->id,
+ 		var->xres, 
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index 8be709cb8542a..efe0fb3ad8bdc 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -572,6 +572,8 @@ static void afs_deliver_to_call(struct afs_call *call)
+ 		case -ENODATA:
+ 		case -EBADMSG:
+ 		case -EMSGSIZE:
++		case -ENOMEM:
++		case -EFAULT:
+ 			abort_code = RXGEN_CC_UNMARSHAL;
+ 			if (state != AFS_CALL_CL_AWAIT_REPLY)
+ 				abort_code = RXGEN_SS_UNMARSHAL;
+@@ -579,7 +581,7 @@ static void afs_deliver_to_call(struct afs_call *call)
+ 						abort_code, ret, "KUM");
+ 			goto local_abort;
+ 		default:
+-			abort_code = RX_USER_ABORT;
++			abort_code = RX_CALL_DEAD;
+ 			rxrpc_kernel_abort_call(call->net->socket, call->rxcall,
+ 						abort_code, ret, "KER");
+ 			goto local_abort;
+@@ -871,7 +873,7 @@ void afs_send_empty_reply(struct afs_call *call)
+ 	case -ENOMEM:
+ 		_debug("oom");
+ 		rxrpc_kernel_abort_call(net->socket, call->rxcall,
+-					RX_USER_ABORT, -ENOMEM, "KOO");
++					RXGEN_SS_MARSHAL, -ENOMEM, "KOO");
+ 		fallthrough;
+ 	default:
+ 		_leave(" [error]");
+@@ -913,7 +915,7 @@ void afs_send_simple_reply(struct afs_call *call, const void *buf, size_t len)
+ 	if (n == -ENOMEM) {
+ 		_debug("oom");
+ 		rxrpc_kernel_abort_call(net->socket, call->rxcall,
+-					RX_USER_ABORT, -ENOMEM, "KOO");
++					RXGEN_SS_MARSHAL, -ENOMEM, "KOO");
+ 	}
+ 	_leave(" [error]");
+ }
+diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c
+index b9c658e0548eb..69f4db05191a3 100644
+--- a/fs/binfmt_flat.c
++++ b/fs/binfmt_flat.c
+@@ -427,6 +427,30 @@ static void old_reloc(unsigned long rl)
+ 
+ /****************************************************************************/
+ 
++static inline u32 __user *skip_got_header(u32 __user *rp)
++{
++	if (IS_ENABLED(CONFIG_RISCV)) {
++		/*
++		 * RISC-V has a 16 byte GOT PLT header for elf64-riscv
++		 * and 8 byte GOT PLT header for elf32-riscv.
++		 * Skip the whole GOT PLT header, since it is reserved
++		 * for the dynamic linker (ld.so).
++		 */
++		u32 rp_val0, rp_val1;
++
++		if (get_user(rp_val0, rp))
++			return rp;
++		if (get_user(rp_val1, rp + 1))
++			return rp;
++
++		if (rp_val0 == 0xffffffff && rp_val1 == 0xffffffff)
++			rp += 4;
++		else if (rp_val0 == 0xffffffff)
++			rp += 2;
++	}
++	return rp;
++}
++
+ static int load_flat_file(struct linux_binprm *bprm,
+ 		struct lib_info *libinfo, int id, unsigned long *extra_stack)
+ {
+@@ -774,7 +798,8 @@ static int load_flat_file(struct linux_binprm *bprm,
+ 	 * image.
+ 	 */
+ 	if (flags & FLAT_FLAG_GOTPIC) {
+-		for (rp = (u32 __user *)datapos; ; rp++) {
++		rp = skip_got_header((u32 __user *) datapos);
++		for (; ; rp++) {
+ 			u32 addr, rp_val;
+ 			if (get_user(rp_val, rp))
+ 				return -EFAULT;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 87e55b024ac2e..35acdab56a1c9 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3061,7 +3061,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 		~BTRFS_FEATURE_INCOMPAT_SUPP;
+ 	if (features) {
+ 		btrfs_err(fs_info,
+-		    "cannot mount because of unsupported optional features (%llx)",
++		    "cannot mount because of unsupported optional features (0x%llx)",
+ 		    features);
+ 		err = -EINVAL;
+ 		goto fail_alloc;
+@@ -3099,7 +3099,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 		~BTRFS_FEATURE_COMPAT_RO_SUPP;
+ 	if (!sb_rdonly(sb) && features) {
+ 		btrfs_err(fs_info,
+-	"cannot mount read-write because of unsupported optional features (%llx)",
++	"cannot mount read-write because of unsupported optional features (0x%llx)",
+ 		       features);
+ 		err = -EINVAL;
+ 		goto fail_alloc;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 366d047638646..2fdf178aa76f6 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -7191,12 +7191,12 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ 	 * do another round of validation checks.
+ 	 */
+ 	if (total_dev != fs_info->fs_devices->total_devices) {
+-		btrfs_err(fs_info,
+-	   "super_num_devices %llu mismatch with num_devices %llu found here",
++		btrfs_warn(fs_info,
++"super block num_devices %llu mismatch with DEV_ITEM count %llu, will be repaired on next transaction commit",
+ 			  btrfs_super_num_devices(fs_info->super_copy),
+ 			  total_dev);
+-		ret = -EINVAL;
+-		goto error;
++		fs_info->fs_devices->total_devices = total_dev;
++		btrfs_set_super_num_devices(fs_info->super_copy, total_dev);
+ 	}
+ 	if (btrfs_super_total_bytes(fs_info->super_copy) <
+ 	    fs_info->fs_devices->total_rw_bytes) {
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index a718dc77e604e..97cd4df040608 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -371,8 +371,6 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 	num_rqst++;
+ 
+ 	if (cfile) {
+-		cifsFileInfo_put(cfile);
+-		cfile = NULL;
+ 		rc = compound_send_recv(xid, ses, server,
+ 					flags, num_rqst - 2,
+ 					&rqst[1], &resp_buftype[1],
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index c758ff41b6386..7fea94ebda573 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -3632,7 +3632,7 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+ 		if (rc)
+ 			goto out;
+ 
+-		if ((cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE) == 0)
++		if (cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE)
+ 			smb2_set_sparse(xid, tcon, cfile, inode, false);
+ 
+ 		eof = cpu_to_le64(off + len);
+diff --git a/fs/dax.c b/fs/dax.c
+index d5d7b9393bcaa..3e7e9a57fd28c 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -846,7 +846,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
+ 			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
+ 				goto unlock_pmd;
+ 
+-			flush_cache_page(vma, address, pfn);
++			flush_cache_range(vma, address,
++					  address + HPAGE_PMD_SIZE);
+ 			pmd = pmdp_invalidate(vma, address, pmdp);
+ 			pmd = pmd_wrprotect(pmd);
+ 			pmd = pmd_mkclean(pmd);
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index 1e9d8999b9390..2ce96a9ce63c0 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -1551,6 +1551,7 @@ static int _remove_from_waiters(struct dlm_lkb *lkb, int mstype,
+ 		lkb->lkb_wait_type = 0;
+ 		lkb->lkb_flags &= ~DLM_IFL_OVERLAP_CANCEL;
+ 		lkb->lkb_wait_count--;
++		unhold_lkb(lkb);
+ 		goto out_del;
+ 	}
+ 
+@@ -1577,6 +1578,7 @@ static int _remove_from_waiters(struct dlm_lkb *lkb, int mstype,
+ 		log_error(ls, "remwait error %x reply %d wait_type %d overlap",
+ 			  lkb->lkb_id, mstype, lkb->lkb_wait_type);
+ 		lkb->lkb_wait_count--;
++		unhold_lkb(lkb);
+ 		lkb->lkb_wait_type = 0;
+ 	}
+ 
+@@ -5312,11 +5314,16 @@ int dlm_recover_waiters_post(struct dlm_ls *ls)
+ 		lkb->lkb_flags &= ~DLM_IFL_OVERLAP_UNLOCK;
+ 		lkb->lkb_flags &= ~DLM_IFL_OVERLAP_CANCEL;
+ 		lkb->lkb_wait_type = 0;
+-		lkb->lkb_wait_count = 0;
++		/* drop all wait_count references we still
++		 * hold a reference for this iteration.
++		 */
++		while (lkb->lkb_wait_count) {
++			lkb->lkb_wait_count--;
++			unhold_lkb(lkb);
++		}
+ 		mutex_lock(&ls->ls_waiters_mutex);
+ 		list_del_init(&lkb->lkb_wait_reply);
+ 		mutex_unlock(&ls->ls_waiters_mutex);
+-		unhold_lkb(lkb); /* for waiters list */
+ 
+ 		if (oc || ou) {
+ 			/* do an unlock or cancel instead of resending */
+diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
+index c38b2b8ffd1d3..a10d2bcfe75a8 100644
+--- a/fs/dlm/plock.c
++++ b/fs/dlm/plock.c
+@@ -23,11 +23,11 @@ struct plock_op {
+ 	struct list_head list;
+ 	int done;
+ 	struct dlm_plock_info info;
++	int (*callback)(struct file_lock *fl, int result);
+ };
+ 
+ struct plock_xop {
+ 	struct plock_op xop;
+-	int (*callback)(struct file_lock *fl, int result);
+ 	void *fl;
+ 	void *file;
+ 	struct file_lock flc;
+@@ -129,19 +129,18 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
+ 		/* fl_owner is lockd which doesn't distinguish
+ 		   processes on the nfs client */
+ 		op->info.owner	= (__u64) fl->fl_pid;
+-		xop->callback	= fl->fl_lmops->lm_grant;
++		op->callback	= fl->fl_lmops->lm_grant;
+ 		locks_init_lock(&xop->flc);
+ 		locks_copy_lock(&xop->flc, fl);
+ 		xop->fl		= fl;
+ 		xop->file	= file;
+ 	} else {
+ 		op->info.owner	= (__u64)(long) fl->fl_owner;
+-		xop->callback	= NULL;
+ 	}
+ 
+ 	send_op(op);
+ 
+-	if (xop->callback == NULL) {
++	if (!op->callback) {
+ 		rv = wait_event_interruptible(recv_wq, (op->done != 0));
+ 		if (rv == -ERESTARTSYS) {
+ 			log_debug(ls, "dlm_posix_lock: wait killed %llx",
+@@ -203,7 +202,7 @@ static int dlm_plock_callback(struct plock_op *op)
+ 	file = xop->file;
+ 	flc = &xop->flc;
+ 	fl = xop->fl;
+-	notify = xop->callback;
++	notify = op->callback;
+ 
+ 	if (op->info.rv) {
+ 		notify(fl, op->info.rv);
+@@ -436,10 +435,9 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
+ 		if (op->info.fsid == info.fsid &&
+ 		    op->info.number == info.number &&
+ 		    op->info.owner == info.owner) {
+-			struct plock_xop *xop = (struct plock_xop *)op;
+ 			list_del_init(&op->list);
+ 			memcpy(&op->info, &info, sizeof(info));
+-			if (xop->callback)
++			if (op->callback)
+ 				do_callback = 1;
+ 			else
+ 				op->done = 1;
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 8329961546b58..4ad1c3ce9398a 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1419,12 +1419,6 @@ struct ext4_super_block {
+ 
+ #ifdef __KERNEL__
+ 
+-#ifdef CONFIG_FS_ENCRYPTION
+-#define DUMMY_ENCRYPTION_ENABLED(sbi) ((sbi)->s_dummy_enc_policy.policy != NULL)
+-#else
+-#define DUMMY_ENCRYPTION_ENABLED(sbi) (0)
+-#endif
+-
+ /* Number of quota types we support */
+ #define EXT4_MAXQUOTAS 3
+ 
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 80b876ab6b1fe..6641b74ad4620 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -371,7 +371,7 @@ static int ext4_valid_extent_entries(struct inode *inode,
+ {
+ 	unsigned short entries;
+ 	ext4_lblk_t lblock = 0;
+-	ext4_lblk_t prev = 0;
++	ext4_lblk_t cur = 0;
+ 
+ 	if (eh->eh_entries == 0)
+ 		return 1;
+@@ -395,11 +395,11 @@ static int ext4_valid_extent_entries(struct inode *inode,
+ 
+ 			/* Check for overlapping extents */
+ 			lblock = le32_to_cpu(ext->ee_block);
+-			if ((lblock <= prev) && prev) {
++			if (lblock < cur) {
+ 				*pblk = ext4_ext_pblock(ext);
+ 				return 0;
+ 			}
+-			prev = lblock + ext4_ext_get_actual_len(ext) - 1;
++			cur = lblock + ext4_ext_get_actual_len(ext);
+ 			ext++;
+ 			entries--;
+ 		}
+@@ -419,13 +419,13 @@ static int ext4_valid_extent_entries(struct inode *inode,
+ 
+ 			/* Check for overlapping index extents */
+ 			lblock = le32_to_cpu(ext_idx->ei_block);
+-			if ((lblock <= prev) && prev) {
++			if (lblock < cur) {
+ 				*pblk = ext4_idx_pblock(ext_idx);
+ 				return 0;
+ 			}
+ 			ext_idx++;
+ 			entries--;
+-			prev = lblock;
++			cur = lblock + 1;
+ 		}
+ 	}
+ 	return 1;
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index c9a8c7d24f89c..fbad4180514c9 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1974,6 +1974,18 @@ int ext4_convert_inline_data(struct inode *inode)
+ 	if (!ext4_has_inline_data(inode)) {
+ 		ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+ 		return 0;
++	} else if (!ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) {
++		/*
++		 * Inode has inline data but EXT4_STATE_MAY_INLINE_DATA is
++		 * cleared. This means we are in the middle of moving of
++		 * inline data to delay allocated block. Just force writeout
++		 * here to finish conversion.
++		 */
++		error = filemap_flush(inode->i_mapping);
++		if (error)
++			return error;
++		if (!ext4_has_inline_data(inode))
++			return 0;
+ 	}
+ 
+ 	needed_blocks = ext4_writepage_trans_blocks(inode);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 31ab73c4b07e7..72e3f55f1e07a 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -5444,6 +5444,7 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ 	if (attr->ia_valid & ATTR_SIZE) {
+ 		handle_t *handle;
+ 		loff_t oldsize = inode->i_size;
++		loff_t old_disksize;
+ 		int shrink = (attr->ia_size < inode->i_size);
+ 
+ 		if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) {
+@@ -5517,6 +5518,7 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ 					inode->i_sb->s_blocksize_bits);
+ 
+ 			down_write(&EXT4_I(inode)->i_data_sem);
++			old_disksize = EXT4_I(inode)->i_disksize;
+ 			EXT4_I(inode)->i_disksize = attr->ia_size;
+ 			rc = ext4_mark_inode_dirty(handle, inode);
+ 			if (!error)
+@@ -5528,6 +5530,8 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ 			 */
+ 			if (!error)
+ 				i_size_write(inode, attr->ia_size);
++			else
++				EXT4_I(inode)->i_disksize = old_disksize;
+ 			up_write(&EXT4_I(inode)->i_data_sem);
+ 			ext4_journal_stop(handle);
+ 			if (error)
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 47ea35e98ffe9..feae39f1db37c 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -280,9 +280,9 @@ static struct dx_frame *dx_probe(struct ext4_filename *fname,
+ 				 struct dx_hash_info *hinfo,
+ 				 struct dx_frame *frame);
+ static void dx_release(struct dx_frame *frames);
+-static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de,
+-		       unsigned blocksize, struct dx_hash_info *hinfo,
+-		       struct dx_map_entry map[]);
++static int dx_make_map(struct inode *dir, struct buffer_head *bh,
++		       struct dx_hash_info *hinfo,
++		       struct dx_map_entry *map_tail);
+ static void dx_sort_map(struct dx_map_entry *map, unsigned count);
+ static struct ext4_dir_entry_2 *dx_move_dirents(char *from, char *to,
+ 		struct dx_map_entry *offsets, int count, unsigned blocksize);
+@@ -756,12 +756,14 @@ static struct dx_frame *
+ dx_probe(struct ext4_filename *fname, struct inode *dir,
+ 	 struct dx_hash_info *hinfo, struct dx_frame *frame_in)
+ {
+-	unsigned count, indirect;
++	unsigned count, indirect, level, i;
+ 	struct dx_entry *at, *entries, *p, *q, *m;
+ 	struct dx_root *root;
+ 	struct dx_frame *frame = frame_in;
+ 	struct dx_frame *ret_err = ERR_PTR(ERR_BAD_DX_DIR);
+ 	u32 hash;
++	ext4_lblk_t block;
++	ext4_lblk_t blocks[EXT4_HTREE_LEVEL];
+ 
+ 	memset(frame_in, 0, EXT4_HTREE_LEVEL * sizeof(frame_in[0]));
+ 	frame->bh = ext4_read_dirblock(dir, 0, INDEX);
+@@ -817,6 +819,8 @@ dx_probe(struct ext4_filename *fname, struct inode *dir,
+ 	}
+ 
+ 	dxtrace(printk("Look up %x", hash));
++	level = 0;
++	blocks[0] = 0;
+ 	while (1) {
+ 		count = dx_get_count(entries);
+ 		if (!count || count > dx_get_limit(entries)) {
+@@ -858,15 +862,27 @@ dx_probe(struct ext4_filename *fname, struct inode *dir,
+ 			       dx_get_block(at)));
+ 		frame->entries = entries;
+ 		frame->at = at;
+-		if (!indirect--)
++
++		block = dx_get_block(at);
++		for (i = 0; i <= level; i++) {
++			if (blocks[i] == block) {
++				ext4_warning_inode(dir,
++					"dx entry: tree cycle block %u points back to block %u",
++					blocks[level], block);
++				goto fail;
++			}
++		}
++		if (++level > indirect)
+ 			return frame;
++		blocks[level] = block;
+ 		frame++;
+-		frame->bh = ext4_read_dirblock(dir, dx_get_block(at), INDEX);
++		frame->bh = ext4_read_dirblock(dir, block, INDEX);
+ 		if (IS_ERR(frame->bh)) {
+ 			ret_err = (struct dx_frame *) frame->bh;
+ 			frame->bh = NULL;
+ 			goto fail;
+ 		}
++
+ 		entries = ((struct dx_node *) frame->bh->b_data)->entries;
+ 
+ 		if (dx_get_limit(entries) != dx_node_limit(dir)) {
+@@ -1208,15 +1224,23 @@ static inline int search_dirblock(struct buffer_head *bh,
+  * Create map of hash values, offsets, and sizes, stored at end of block.
+  * Returns number of entries mapped.
+  */
+-static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de,
+-		       unsigned blocksize, struct dx_hash_info *hinfo,
++static int dx_make_map(struct inode *dir, struct buffer_head *bh,
++		       struct dx_hash_info *hinfo,
+ 		       struct dx_map_entry *map_tail)
+ {
+ 	int count = 0;
+-	char *base = (char *) de;
++	struct ext4_dir_entry_2 *de = (struct ext4_dir_entry_2 *)bh->b_data;
++	unsigned int buflen = bh->b_size;
++	char *base = bh->b_data;
+ 	struct dx_hash_info h = *hinfo;
+ 
+-	while ((char *) de < base + blocksize) {
++	if (ext4_has_metadata_csum(dir->i_sb))
++		buflen -= sizeof(struct ext4_dir_entry_tail);
++
++	while ((char *) de < base + buflen) {
++		if (ext4_check_dir_entry(dir, NULL, de, bh, base, buflen,
++					 ((char *)de) - base))
++			return -EFSCORRUPTED;
+ 		if (de->name_len && de->inode) {
+ 			ext4fs_dirhash(dir, de->name, de->name_len, &h);
+ 			map_tail--;
+@@ -1226,8 +1250,7 @@ static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de,
+ 			count++;
+ 			cond_resched();
+ 		}
+-		/* XXX: do we need to check rec_len == 0 case? -Chris */
+-		de = ext4_next_entry(de, blocksize);
++		de = ext4_next_entry(de, dir->i_sb->s_blocksize);
+ 	}
+ 	return count;
+ }
+@@ -1853,8 +1876,11 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ 
+ 	/* create map in the end of data2 block */
+ 	map = (struct dx_map_entry *) (data2 + blocksize);
+-	count = dx_make_map(dir, (struct ext4_dir_entry_2 *) data1,
+-			     blocksize, hinfo, map);
++	count = dx_make_map(dir, *bh, hinfo, map);
++	if (count < 0) {
++		err = count;
++		goto journal_error;
++	}
+ 	map -= count;
+ 	dx_sort_map(map, count);
+ 	/* Ensure that neither split block is over half full */
+@@ -3500,6 +3526,9 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle,
+ 	struct buffer_head *bh;
+ 
+ 	if (!ext4_has_inline_data(inode)) {
++		struct ext4_dir_entry_2 *de;
++		unsigned int offset;
++
+ 		/* The first directory block must not be a hole, so
+ 		 * treat it as DIRENT_HTREE
+ 		 */
+@@ -3508,9 +3537,30 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle,
+ 			*retval = PTR_ERR(bh);
+ 			return NULL;
+ 		}
+-		*parent_de = ext4_next_entry(
+-					(struct ext4_dir_entry_2 *)bh->b_data,
+-					inode->i_sb->s_blocksize);
++
++		de = (struct ext4_dir_entry_2 *) bh->b_data;
++		if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data,
++					 bh->b_size, 0) ||
++		    le32_to_cpu(de->inode) != inode->i_ino ||
++		    strcmp(".", de->name)) {
++			EXT4_ERROR_INODE(inode, "directory missing '.'");
++			brelse(bh);
++			*retval = -EFSCORRUPTED;
++			return NULL;
++		}
++		offset = ext4_rec_len_from_disk(de->rec_len,
++						inode->i_sb->s_blocksize);
++		de = ext4_next_entry(de, inode->i_sb->s_blocksize);
++		if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data,
++					 bh->b_size, offset) ||
++		    le32_to_cpu(de->inode) == 0 || strcmp("..", de->name)) {
++			EXT4_ERROR_INODE(inode, "directory missing '..'");
++			brelse(bh);
++			*retval = -EFSCORRUPTED;
++			return NULL;
++		}
++		*parent_de = de;
++
+ 		return bh;
+ 	}
+ 
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 3e26edeca8c73..a0af833f7da70 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1960,6 +1960,7 @@ static const struct mount_opts {
+ 	 MOPT_EXT4_ONLY | MOPT_CLEAR},
+ 	{Opt_warn_on_error, EXT4_MOUNT_WARN_ON_ERROR, MOPT_SET},
+ 	{Opt_nowarn_on_error, EXT4_MOUNT_WARN_ON_ERROR, MOPT_CLEAR},
++	{Opt_commit, 0, MOPT_NO_EXT2},
+ 	{Opt_nojournal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM,
+ 	 MOPT_EXT4_ONLY | MOPT_CLEAR},
+ 	{Opt_journal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM,
+@@ -2083,6 +2084,12 @@ static int ext4_set_test_dummy_encryption(struct super_block *sb,
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	int err;
+ 
++	if (!ext4_has_feature_encrypt(sb)) {
++		ext4_msg(sb, KERN_WARNING,
++			 "test_dummy_encryption requires encrypt feature");
++		return -1;
++	}
++
+ 	/*
+ 	 * This mount option is just for testing, and it's not worthwhile to
+ 	 * implement the extra complexity (e.g. RCU protection) that would be
+@@ -2110,11 +2117,13 @@ static int ext4_set_test_dummy_encryption(struct super_block *sb,
+ 		return -1;
+ 	}
+ 	ext4_msg(sb, KERN_WARNING, "Test dummy encryption mode enabled");
++	return 1;
+ #else
+ 	ext4_msg(sb, KERN_WARNING,
+-		 "Test dummy encryption mount option ignored");
++		 "test_dummy_encryption option not supported");
++	return -1;
++
+ #endif
+-	return 1;
+ }
+ 
+ static int handle_mount_opt(struct super_block *sb, char *opt, int token,
+@@ -4537,7 +4546,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 					sbi->s_inodes_per_block;
+ 	sbi->s_desc_per_block = blocksize / EXT4_DESC_SIZE(sb);
+ 	sbi->s_sbh = bh;
+-	sbi->s_mount_state = le16_to_cpu(es->s_state);
++	sbi->s_mount_state = le16_to_cpu(es->s_state) & ~EXT4_FC_REPLAY;
+ 	sbi->s_addr_per_block_bits = ilog2(EXT4_ADDR_PER_BLOCK(sb));
+ 	sbi->s_desc_per_block_bits = ilog2(EXT4_DESC_PER_BLOCK(sb));
+ 
+@@ -4928,12 +4937,6 @@ no_journal:
+ 		goto failed_mount_wq;
+ 	}
+ 
+-	if (DUMMY_ENCRYPTION_ENABLED(sbi) && !sb_rdonly(sb) &&
+-	    !ext4_has_feature_encrypt(sb)) {
+-		ext4_set_feature_encrypt(sb);
+-		ext4_commit_super(sb, 1);
+-	}
+-
+ 	/*
+ 	 * Get the # of file system overhead blocks from the
+ 	 * superblock if present.
+@@ -5997,7 +6000,8 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 				if (err)
+ 					goto restore_opts;
+ 			}
+-			sbi->s_mount_state = le16_to_cpu(es->s_state);
++			sbi->s_mount_state = (le16_to_cpu(es->s_state) &
++					      ~EXT4_FC_REPLAY);
+ 
+ 			err = ext4_setup_super(sb, es, 0);
+ 			if (err)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 6c4bf22a3e83e..1066725c3c5d5 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1021,8 +1021,8 @@ enum count_type {
+  */
+ #define PAGE_TYPE_OF_BIO(type)	((type) > META ? META : (type))
+ enum page_type {
+-	DATA,
+-	NODE,
++	DATA = 0,
++	NODE = 1,	/* should not change this */
+ 	META,
+ 	NR_PAGE_TYPE,
+ 	META_FLUSH,
+@@ -2284,11 +2284,17 @@ static inline void dec_valid_node_count(struct f2fs_sb_info *sbi,
+ {
+ 	spin_lock(&sbi->stat_lock);
+ 
+-	f2fs_bug_on(sbi, !sbi->total_valid_block_count);
+-	f2fs_bug_on(sbi, !sbi->total_valid_node_count);
++	if (unlikely(!sbi->total_valid_block_count ||
++			!sbi->total_valid_node_count)) {
++		f2fs_warn(sbi, "dec_valid_node_count: inconsistent block counts, total_valid_block:%u, total_valid_node:%u",
++			  sbi->total_valid_block_count,
++			  sbi->total_valid_node_count);
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++	} else {
++		sbi->total_valid_block_count--;
++		sbi->total_valid_node_count--;
++	}
+ 
+-	sbi->total_valid_node_count--;
+-	sbi->total_valid_block_count--;
+ 	if (sbi->reserved_blocks &&
+ 		sbi->current_reserved_blocks < sbi->reserved_blocks)
+ 		sbi->current_reserved_blocks++;
+@@ -3729,6 +3735,7 @@ extern struct kmem_cache *f2fs_inode_entry_slab;
+  * inline.c
+  */
+ bool f2fs_may_inline_data(struct inode *inode);
++bool f2fs_sanity_check_inline_data(struct inode *inode);
+ bool f2fs_may_inline_dentry(struct inode *inode);
+ void f2fs_do_read_inline_data(struct page *page, struct page *ipage);
+ void f2fs_truncate_inline_inode(struct inode *inode,
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 792f9059d897c..defa068b4c7cd 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1413,11 +1413,19 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
+ 			ret = -ENOSPC;
+ 			break;
+ 		}
+-		if (dn->data_blkaddr != NEW_ADDR) {
+-			f2fs_invalidate_blocks(sbi, dn->data_blkaddr);
+-			dn->data_blkaddr = NEW_ADDR;
+-			f2fs_set_data_blkaddr(dn);
++
++		if (dn->data_blkaddr == NEW_ADDR)
++			continue;
++
++		if (!f2fs_is_valid_blkaddr(sbi, dn->data_blkaddr,
++					DATA_GENERIC_ENHANCE)) {
++			ret = -EFSCORRUPTED;
++			break;
+ 		}
++
++		f2fs_invalidate_blocks(sbi, dn->data_blkaddr);
++		dn->data_blkaddr = NEW_ADDR;
++		f2fs_set_data_blkaddr(dn);
+ 	}
+ 
+ 	f2fs_update_extent_cache_range(dn, start, 0, index - start);
+@@ -1736,6 +1744,10 @@ static long f2fs_fallocate(struct file *file, int mode,
+ 
+ 	inode_lock(inode);
+ 
++	ret = file_modified(file);
++	if (ret)
++		goto out;
++
+ 	if (mode & FALLOC_FL_PUNCH_HOLE) {
+ 		if (offset >= inode->i_size)
+ 			goto out;
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 1d7dafdaffe30..f97c23ec93ce5 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -14,21 +14,40 @@
+ #include "node.h"
+ #include <trace/events/f2fs.h>
+ 
+-bool f2fs_may_inline_data(struct inode *inode)
++static bool support_inline_data(struct inode *inode)
+ {
+ 	if (f2fs_is_atomic_file(inode))
+ 		return false;
+-
+ 	if (!S_ISREG(inode->i_mode) && !S_ISLNK(inode->i_mode))
+ 		return false;
+-
+ 	if (i_size_read(inode) > MAX_INLINE_DATA(inode))
+ 		return false;
++	return true;
++}
+ 
+-	if (f2fs_post_read_required(inode))
++bool f2fs_may_inline_data(struct inode *inode)
++{
++	if (!support_inline_data(inode))
+ 		return false;
+ 
+-	return true;
++	return !f2fs_post_read_required(inode);
++}
++
++bool f2fs_sanity_check_inline_data(struct inode *inode)
++{
++	if (!f2fs_has_inline_data(inode))
++		return false;
++
++	if (!support_inline_data(inode))
++		return true;
++
++	/*
++	 * used by sanity_check_inode(), when disk layout fields has not
++	 * been synchronized to inmem fields.
++	 */
++	return (S_ISREG(inode->i_mode) &&
++		(file_is_encrypt(inode) || file_is_verity(inode) ||
++		(F2FS_I(inode)->i_flags & F2FS_COMPR_FL)));
+ }
+ 
+ bool f2fs_may_inline_dentry(struct inode *inode)
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 98483f50e5e92..87752550f78c8 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -272,8 +272,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
+ 		}
+ 	}
+ 
+-	if (f2fs_has_inline_data(inode) &&
+-			(!S_ISREG(inode->i_mode) && !S_ISLNK(inode->i_mode))) {
++	if (f2fs_sanity_check_inline_data(inode)) {
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 		f2fs_warn(sbi, "%s: inode (ino=%lx, mode=%u) should not have inline_data, run fsck to fix",
+ 			  __func__, inode->i_ino, inode->i_mode);
+@@ -757,8 +756,22 @@ retry:
+ 		f2fs_lock_op(sbi);
+ 		err = f2fs_remove_inode_page(inode);
+ 		f2fs_unlock_op(sbi);
+-		if (err == -ENOENT)
++		if (err == -ENOENT) {
+ 			err = 0;
++
++			/*
++			 * in fuzzed image, another node may has the same
++			 * block address as inode's, if it was truncated
++			 * previously, truncation of inode node will fail.
++			 */
++			if (is_inode_flag_set(inode, FI_DIRTY_INODE)) {
++				f2fs_warn(F2FS_I_SB(inode),
++					"f2fs_evict_inode: inconsistent node id, ino:%lu",
++					inode->i_ino);
++				f2fs_inode_synced(inode);
++				set_sbi_flag(sbi, SBI_NEED_FSCK);
++			}
++		}
+ 	}
+ 
+ 	/* give more chances, if ENOMEM case */
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 49f5cb532738d..20091f4cf84de 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -356,16 +356,19 @@ void f2fs_drop_inmem_page(struct inode *inode, struct page *page)
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct list_head *head = &fi->inmem_pages;
+ 	struct inmem_pages *cur = NULL;
++	struct inmem_pages *tmp;
+ 
+ 	f2fs_bug_on(sbi, !IS_ATOMIC_WRITTEN_PAGE(page));
+ 
+ 	mutex_lock(&fi->inmem_lock);
+-	list_for_each_entry(cur, head, list) {
+-		if (cur->page == page)
++	list_for_each_entry(tmp, head, list) {
++		if (tmp->page == page) {
++			cur = tmp;
+ 			break;
++		}
+ 	}
+ 
+-	f2fs_bug_on(sbi, list_empty(head) || cur->page != page);
++	f2fs_bug_on(sbi, !cur);
+ 	list_del(&cur->list);
+ 	mutex_unlock(&fi->inmem_lock);
+ 
+@@ -4420,7 +4423,7 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ 	unsigned int i, start, end;
+ 	unsigned int readed, start_blk = 0;
+ 	int err = 0;
+-	block_t total_node_blocks = 0;
++	block_t sit_valid_blocks[2] = {0, 0};
+ 
+ 	do {
+ 		readed = f2fs_ra_meta_pages(sbi, start_blk, BIO_MAX_PAGES,
+@@ -4445,8 +4448,8 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ 			if (err)
+ 				return err;
+ 			seg_info_from_raw_sit(se, &sit);
+-			if (IS_NODESEG(se->type))
+-				total_node_blocks += se->valid_blocks;
++
++			sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks;
+ 
+ 			/* build discard map only one time */
+ 			if (is_set_ckpt_flags(sbi, CP_TRIMMED_FLAG)) {
+@@ -4484,15 +4487,15 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ 		sit = sit_in_journal(journal, i);
+ 
+ 		old_valid_blocks = se->valid_blocks;
+-		if (IS_NODESEG(se->type))
+-			total_node_blocks -= old_valid_blocks;
++
++		sit_valid_blocks[SE_PAGETYPE(se)] -= old_valid_blocks;
+ 
+ 		err = check_block_count(sbi, start, &sit);
+ 		if (err)
+ 			break;
+ 		seg_info_from_raw_sit(se, &sit);
+-		if (IS_NODESEG(se->type))
+-			total_node_blocks += se->valid_blocks;
++
++		sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks;
+ 
+ 		if (is_set_ckpt_flags(sbi, CP_TRIMMED_FLAG)) {
+ 			memset(se->discard_map, 0xff, SIT_VBLOCK_MAP_SIZE);
+@@ -4512,13 +4515,24 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ 	}
+ 	up_read(&curseg->journal_rwsem);
+ 
+-	if (!err && total_node_blocks != valid_node_count(sbi)) {
++	if (err)
++		return err;
++
++	if (sit_valid_blocks[NODE] != valid_node_count(sbi)) {
+ 		f2fs_err(sbi, "SIT is corrupted node# %u vs %u",
+-			 total_node_blocks, valid_node_count(sbi));
+-		err = -EFSCORRUPTED;
++			 sit_valid_blocks[NODE], valid_node_count(sbi));
++		return -EFSCORRUPTED;
+ 	}
+ 
+-	return err;
++	if (sit_valid_blocks[DATA] + sit_valid_blocks[NODE] >
++				valid_user_blocks(sbi)) {
++		f2fs_err(sbi, "SIT is corrupted data# %u %u vs %u",
++			 sit_valid_blocks[DATA], sit_valid_blocks[NODE],
++			 valid_user_blocks(sbi));
++		return -EFSCORRUPTED;
++	}
++
++	return 0;
+ }
+ 
+ static void init_free_segmap(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index beef833a69604..eafd89f0c77e8 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -24,6 +24,7 @@
+ 
+ #define IS_DATASEG(t)	((t) <= CURSEG_COLD_DATA)
+ #define IS_NODESEG(t)	((t) >= CURSEG_HOT_NODE && (t) <= CURSEG_COLD_NODE)
++#define SE_PAGETYPE(se)	((IS_NODESEG((se)->type) ? NODE : DATA))
+ 
+ static inline void sanity_check_seg_type(struct f2fs_sb_info *sbi,
+ 						unsigned short seg_type)
+@@ -573,11 +574,10 @@ static inline int reserved_sections(struct f2fs_sb_info *sbi)
+ 	return GET_SEC_FROM_SEG(sbi, reserved_segments(sbi));
+ }
+ 
+-static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi)
++static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
++			unsigned int node_blocks, unsigned int dent_blocks)
+ {
+-	unsigned int node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) +
+-					get_pages(sbi, F2FS_DIRTY_DENTS);
+-	unsigned int dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS);
++
+ 	unsigned int segno, left_blocks;
+ 	int i;
+ 
+@@ -603,19 +603,28 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi)
+ static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
+ 					int freed, int needed)
+ {
+-	int node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES);
+-	int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS);
+-	int imeta_secs = get_blocktype_secs(sbi, F2FS_DIRTY_IMETA);
++	unsigned int total_node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) +
++					get_pages(sbi, F2FS_DIRTY_DENTS) +
++					get_pages(sbi, F2FS_DIRTY_IMETA);
++	unsigned int total_dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS);
++	unsigned int node_secs = total_node_blocks / BLKS_PER_SEC(sbi);
++	unsigned int dent_secs = total_dent_blocks / BLKS_PER_SEC(sbi);
++	unsigned int node_blocks = total_node_blocks % BLKS_PER_SEC(sbi);
++	unsigned int dent_blocks = total_dent_blocks % BLKS_PER_SEC(sbi);
++	unsigned int free, need_lower, need_upper;
+ 
+ 	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
+ 		return false;
+ 
+-	if (free_sections(sbi) + freed == reserved_sections(sbi) + needed &&
+-			has_curseg_enough_space(sbi))
++	free = free_sections(sbi) + freed;
++	need_lower = node_secs + dent_secs + reserved_sections(sbi) + needed;
++	need_upper = need_lower + (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0);
++
++	if (free > need_upper)
+ 		return false;
+-	return (free_sections(sbi) + freed) <=
+-		(node_secs + 2 * dent_secs + imeta_secs +
+-		reserved_sections(sbi) + needed);
++	else if (free <= need_lower)
++		return true;
++	return !has_curseg_enough_space(sbi, node_blocks, dent_blocks);
+ }
+ 
+ static inline bool f2fs_is_checkpoint_ready(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 78ee14f6e939e..ccfb6c5a8fbc0 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2292,7 +2292,8 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ 		if (!sb_has_quota_active(sb, cnt))
+ 			continue;
+ 
+-		inode_lock(dqopt->files[cnt]);
++		if (!f2fs_sb_has_quota_ino(sbi))
++			inode_lock(dqopt->files[cnt]);
+ 
+ 		/*
+ 		 * do_quotactl
+@@ -2311,7 +2312,8 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ 		up_read(&sbi->quota_sem);
+ 		f2fs_unlock_op(sbi);
+ 
+-		inode_unlock(dqopt->files[cnt]);
++		if (!f2fs_sb_has_quota_ino(sbi))
++			inode_unlock(dqopt->files[cnt]);
+ 
+ 		if (ret)
+ 			break;
+diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
+index f7e3304b78029..353735032947c 100644
+--- a/fs/fat/fatent.c
++++ b/fs/fat/fatent.c
+@@ -93,7 +93,8 @@ static int fat12_ent_bread(struct super_block *sb, struct fat_entry *fatent,
+ err_brelse:
+ 	brelse(bhs[0]);
+ err:
+-	fat_msg(sb, KERN_ERR, "FAT read failed (blocknr %llu)", (llu)blocknr);
++	fat_msg_ratelimit(sb, KERN_ERR, "FAT read failed (blocknr %llu)",
++			  (llu)blocknr);
+ 	return -EIO;
+ }
+ 
+@@ -106,8 +107,8 @@ static int fat_ent_bread(struct super_block *sb, struct fat_entry *fatent,
+ 	fatent->fat_inode = MSDOS_SB(sb)->fat_inode;
+ 	fatent->bhs[0] = sb_bread(sb, blocknr);
+ 	if (!fatent->bhs[0]) {
+-		fat_msg(sb, KERN_ERR, "FAT read failed (blocknr %llu)",
+-		       (llu)blocknr);
++		fat_msg_ratelimit(sb, KERN_ERR, "FAT read failed (blocknr %llu)",
++				  (llu)blocknr);
+ 		return -EIO;
+ 	}
+ 	fatent->nr_bhs = 1;
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index a0869194ab739..46c15dd2405c6 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -1650,11 +1650,12 @@ static long writeback_sb_inodes(struct super_block *sb,
+ 	};
+ 	unsigned long start_time = jiffies;
+ 	long write_chunk;
+-	long wrote = 0;  /* count both pages and inodes */
++	long total_wrote = 0;  /* count both pages and inodes */
+ 
+ 	while (!list_empty(&wb->b_io)) {
+ 		struct inode *inode = wb_inode(wb->b_io.prev);
+ 		struct bdi_writeback *tmp_wb;
++		long wrote;
+ 
+ 		if (inode->i_sb != sb) {
+ 			if (work->sb) {
+@@ -1730,7 +1731,9 @@ static long writeback_sb_inodes(struct super_block *sb,
+ 
+ 		wbc_detach_inode(&wbc);
+ 		work->nr_pages -= write_chunk - wbc.nr_to_write;
+-		wrote += write_chunk - wbc.nr_to_write;
++		wrote = write_chunk - wbc.nr_to_write - wbc.pages_skipped;
++		wrote = wrote < 0 ? 0 : wrote;
++		total_wrote += wrote;
+ 
+ 		if (need_resched()) {
+ 			/*
+@@ -1752,7 +1755,7 @@ static long writeback_sb_inodes(struct super_block *sb,
+ 		tmp_wb = inode_to_wb_and_lock_list(inode);
+ 		spin_lock(&inode->i_lock);
+ 		if (!(inode->i_state & I_DIRTY_ALL))
+-			wrote++;
++			total_wrote++;
+ 		requeue_inode(inode, tmp_wb, &wbc);
+ 		inode_sync_complete(inode);
+ 		spin_unlock(&inode->i_lock);
+@@ -1766,14 +1769,14 @@ static long writeback_sb_inodes(struct super_block *sb,
+ 		 * bail out to wb_writeback() often enough to check
+ 		 * background threshold and other termination conditions.
+ 		 */
+-		if (wrote) {
++		if (total_wrote) {
+ 			if (time_is_before_jiffies(start_time + HZ / 10UL))
+ 				break;
+ 			if (work->nr_pages <= 0)
+ 				break;
+ 		}
+ 	}
+-	return wrote;
++	return total_wrote;
+ }
+ 
+ static long __writeback_inodes_wb(struct bdi_writeback *wb,
+diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
+index 6e173ae378c44..ad953ecb58532 100644
+--- a/fs/gfs2/quota.c
++++ b/fs/gfs2/quota.c
+@@ -531,34 +531,42 @@ static void qdsb_put(struct gfs2_quota_data *qd)
+  */
+ int gfs2_qa_get(struct gfs2_inode *ip)
+ {
+-	int error = 0;
+ 	struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
++	struct inode *inode = &ip->i_inode;
+ 
+ 	if (sdp->sd_args.ar_quota == GFS2_QUOTA_OFF)
+ 		return 0;
+ 
+-	down_write(&ip->i_rw_mutex);
++	spin_lock(&inode->i_lock);
+ 	if (ip->i_qadata == NULL) {
+-		ip->i_qadata = kmem_cache_zalloc(gfs2_qadata_cachep, GFP_NOFS);
+-		if (!ip->i_qadata) {
+-			error = -ENOMEM;
+-			goto out;
+-		}
++		struct gfs2_qadata *tmp;
++
++		spin_unlock(&inode->i_lock);
++		tmp = kmem_cache_zalloc(gfs2_qadata_cachep, GFP_NOFS);
++		if (!tmp)
++			return -ENOMEM;
++
++		spin_lock(&inode->i_lock);
++		if (ip->i_qadata == NULL)
++			ip->i_qadata = tmp;
++		else
++			kmem_cache_free(gfs2_qadata_cachep, tmp);
+ 	}
+ 	ip->i_qadata->qa_ref++;
+-out:
+-	up_write(&ip->i_rw_mutex);
+-	return error;
++	spin_unlock(&inode->i_lock);
++	return 0;
+ }
+ 
+ void gfs2_qa_put(struct gfs2_inode *ip)
+ {
+-	down_write(&ip->i_rw_mutex);
++	struct inode *inode = &ip->i_inode;
++
++	spin_lock(&inode->i_lock);
+ 	if (ip->i_qadata && --ip->i_qadata->qa_ref == 0) {
+ 		kmem_cache_free(gfs2_qadata_cachep, ip->i_qadata);
+ 		ip->i_qadata = NULL;
+ 	}
+-	up_write(&ip->i_rw_mutex);
++	spin_unlock(&inode->i_lock);
+ }
+ 
+ int gfs2_quota_hold(struct gfs2_inode *ip, kuid_t uid, kgid_t gid)
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index cd9f7baa5bb7b..dd33b31b0a826 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -528,7 +528,8 @@ iomap_write_failed(struct inode *inode, loff_t pos, unsigned len)
+ 	 * write started inside the existing inode size.
+ 	 */
+ 	if (pos + len > i_size)
+-		truncate_pagecache_range(inode, max(pos, i_size), pos + len);
++		truncate_pagecache_range(inode, max(pos, i_size),
++					 pos + len - 1);
+ }
+ 
+ static int
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index e58ae29a223d7..0ce17ea8fa8a1 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -385,7 +385,8 @@ int dbFree(struct inode *ip, s64 blkno, s64 nblocks)
+ 	}
+ 
+ 	/* write the last buffer. */
+-	write_metapage(mp);
++	if (mp)
++		write_metapage(mp);
+ 
+ 	IREAD_UNLOCK(ipbmap);
+ 
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index 7b47f9b063f1f..ad856b7b9a46c 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -208,15 +208,16 @@ static int
+ nfs_file_fsync_commit(struct file *file, int datasync)
+ {
+ 	struct inode *inode = file_inode(file);
+-	int ret;
++	int ret, ret2;
+ 
+ 	dprintk("NFS: fsync file(%pD2) datasync %d\n", file, datasync);
+ 
+ 	nfs_inc_stats(inode, NFSIOS_VFSFSYNC);
+ 	ret = nfs_commit_inode(inode, FLUSH_SYNC);
+-	if (ret < 0)
+-		return ret;
+-	return file_check_and_advance_wb_err(file);
++	ret2 = file_check_and_advance_wb_err(file);
++	if (ret2 < 0)
++		return ret2;
++	return ret;
+ }
+ 
+ int
+@@ -389,11 +390,8 @@ static int nfs_write_end(struct file *file, struct address_space *mapping,
+ 		return status;
+ 	NFS_I(mapping->host)->write_io += copied;
+ 
+-	if (nfs_ctx_key_to_expire(ctx, mapping->host)) {
+-		status = nfs_wb_all(mapping->host);
+-		if (status < 0)
+-			return status;
+-	}
++	if (nfs_ctx_key_to_expire(ctx, mapping->host))
++		nfs_wb_all(mapping->host);
+ 
+ 	return copied;
+ }
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index b3b9eff5d5727..8c0803d980084 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -2006,6 +2006,7 @@ lookup_again:
+ 	lo = pnfs_find_alloc_layout(ino, ctx, gfp_flags);
+ 	if (lo == NULL) {
+ 		spin_unlock(&ino->i_lock);
++		lseg = ERR_PTR(-ENOMEM);
+ 		trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg,
+ 				 PNFS_UPDATE_LAYOUT_NOMEM);
+ 		goto out;
+@@ -2134,6 +2135,7 @@ lookup_again:
+ 
+ 	lgp = pnfs_alloc_init_layoutget_args(ino, ctx, &stateid, &arg, gfp_flags);
+ 	if (!lgp) {
++		lseg = ERR_PTR(-ENOMEM);
+ 		trace_pnfs_update_layout(ino, pos, count, iomode, lo, NULL,
+ 					 PNFS_UPDATE_LAYOUT_NOMEM);
+ 		nfs_layoutget_end(lo);
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 5d07799513a65..dc08a0c02f095 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -675,11 +675,7 @@ static int nfs_writepage_locked(struct page *page,
+ 	err = nfs_do_writepage(page, wbc, &pgio);
+ 	pgio.pg_error = 0;
+ 	nfs_pageio_complete(&pgio);
+-	if (err < 0)
+-		return err;
+-	if (nfs_error_is_fatal(pgio.pg_error))
+-		return pgio.pg_error;
+-	return 0;
++	return err;
+ }
+ 
+ int nfs_writepage(struct page *page, struct writeback_control *wbc)
+@@ -730,9 +726,6 @@ int nfs_writepages(struct address_space *mapping, struct writeback_control *wbc)
+ 
+ 	if (err < 0)
+ 		goto out_err;
+-	err = pgio.pg_error;
+-	if (nfs_error_is_fatal(err))
+-		goto out_err;
+ 	return 0;
+ out_err:
+ 	return err;
+@@ -1411,7 +1404,7 @@ static void nfs_async_write_error(struct list_head *head, int error)
+ 	while (!list_empty(head)) {
+ 		req = nfs_list_entry(head->next);
+ 		nfs_list_remove_request(req);
+-		if (nfs_error_is_fatal(error))
++		if (nfs_error_is_fatal_on_server(error))
+ 			nfs_write_error(req, error);
+ 		else
+ 			nfs_redirty_request(req);
+diff --git a/fs/notify/fdinfo.c b/fs/notify/fdinfo.c
+index f0d6b54be412e..765b50aeadd28 100644
+--- a/fs/notify/fdinfo.c
++++ b/fs/notify/fdinfo.c
+@@ -83,16 +83,9 @@ static void inotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark)
+ 	inode_mark = container_of(mark, struct inotify_inode_mark, fsn_mark);
+ 	inode = igrab(fsnotify_conn_inode(mark->connector));
+ 	if (inode) {
+-		/*
+-		 * IN_ALL_EVENTS represents all of the mask bits
+-		 * that we expose to userspace.  There is at
+-		 * least one bit (FS_EVENT_ON_CHILD) which is
+-		 * used only internally to the kernel.
+-		 */
+-		u32 mask = mark->mask & IN_ALL_EVENTS;
+-		seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:%x ",
++		seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:0 ",
+ 			   inode_mark->wd, inode->i_ino, inode->i_sb->s_dev,
+-			   mask, mark->ignored_mask);
++			   inotify_mark_user_mask(mark));
+ 		show_mark_fhandle(m, inode);
+ 		seq_putc(m, '\n');
+ 		iput(inode);
+diff --git a/fs/notify/inotify/inotify.h b/fs/notify/inotify/inotify.h
+index 2007e37119160..8f00151eb731f 100644
+--- a/fs/notify/inotify/inotify.h
++++ b/fs/notify/inotify/inotify.h
+@@ -22,6 +22,18 @@ static inline struct inotify_event_info *INOTIFY_E(struct fsnotify_event *fse)
+ 	return container_of(fse, struct inotify_event_info, fse);
+ }
+ 
++/*
++ * INOTIFY_USER_FLAGS represents all of the mask bits that we expose to
++ * userspace.  There is at least one bit (FS_EVENT_ON_CHILD) which is
++ * used only internally to the kernel.
++ */
++#define INOTIFY_USER_MASK (IN_ALL_EVENTS | IN_ONESHOT | IN_EXCL_UNLINK)
++
++static inline __u32 inotify_mark_user_mask(struct fsnotify_mark *fsn_mark)
++{
++	return fsn_mark->mask & INOTIFY_USER_MASK;
++}
++
+ extern void inotify_ignored_and_remove_idr(struct fsnotify_mark *fsn_mark,
+ 					   struct fsnotify_group *group);
+ extern int inotify_handle_inode_event(struct fsnotify_mark *inode_mark,
+diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
+index 5f6c6bf65909c..32b6b97021bef 100644
+--- a/fs/notify/inotify/inotify_user.c
++++ b/fs/notify/inotify/inotify_user.c
+@@ -88,7 +88,7 @@ static inline __u32 inotify_arg_to_mask(struct inode *inode, u32 arg)
+ 		mask |= FS_EVENT_ON_CHILD;
+ 
+ 	/* mask off the flags used to open the fd */
+-	mask |= (arg & (IN_ALL_EVENTS | IN_ONESHOT | IN_EXCL_UNLINK));
++	mask |= (arg & INOTIFY_USER_MASK);
+ 
+ 	return mask;
+ }
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index 8387937b9d01c..5b44be5f93dd8 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -430,7 +430,7 @@ void fsnotify_free_mark(struct fsnotify_mark *mark)
+ void fsnotify_destroy_mark(struct fsnotify_mark *mark,
+ 			   struct fsnotify_group *group)
+ {
+-	mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
++	mutex_lock(&group->mark_mutex);
+ 	fsnotify_detach_mark(mark);
+ 	mutex_unlock(&group->mark_mutex);
+ 	fsnotify_free_mark(mark);
+@@ -742,7 +742,7 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group,
+ 	 * move marks to free to to_free list in one go and then free marks in
+ 	 * to_free list one by one.
+ 	 */
+-	mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
++	mutex_lock(&group->mark_mutex);
+ 	list_for_each_entry_safe(mark, lmark, &group->marks_list, g_list) {
+ 		if ((1U << mark->connector->type) & type_mask)
+ 			list_move(&mark->g_list, &to_free);
+@@ -751,7 +751,7 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group,
+ 
+ clear:
+ 	while (1) {
+-		mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
++		mutex_lock(&group->mark_mutex);
+ 		if (list_empty(head)) {
+ 			mutex_unlock(&group->mark_mutex);
+ 			break;
+diff --git a/fs/ocfs2/dlmfs/userdlm.c b/fs/ocfs2/dlmfs/userdlm.c
+index 339f098d9592c..8fa289de39391 100644
+--- a/fs/ocfs2/dlmfs/userdlm.c
++++ b/fs/ocfs2/dlmfs/userdlm.c
+@@ -435,6 +435,11 @@ again:
+ 	}
+ 
+ 	spin_lock(&lockres->l_lock);
++	if (lockres->l_flags & USER_LOCK_IN_TEARDOWN) {
++		spin_unlock(&lockres->l_lock);
++		status = -EAGAIN;
++		goto bail;
++	}
+ 
+ 	/* We only compare against the currently granted level
+ 	 * here. If the lock is blocked waiting on a downconvert,
+@@ -597,7 +602,7 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres)
+ 	spin_lock(&lockres->l_lock);
+ 	if (lockres->l_flags & USER_LOCK_IN_TEARDOWN) {
+ 		spin_unlock(&lockres->l_lock);
+-		return 0;
++		goto bail;
+ 	}
+ 
+ 	lockres->l_flags |= USER_LOCK_IN_TEARDOWN;
+@@ -611,12 +616,17 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres)
+ 	}
+ 
+ 	if (lockres->l_ro_holders || lockres->l_ex_holders) {
++		lockres->l_flags &= ~USER_LOCK_IN_TEARDOWN;
+ 		spin_unlock(&lockres->l_lock);
+ 		goto bail;
+ 	}
+ 
+ 	status = 0;
+ 	if (!(lockres->l_flags & USER_LOCK_ATTACHED)) {
++		/*
++		 * lock is never requested, leave USER_LOCK_IN_TEARDOWN set
++		 * to avoid new lock request coming in.
++		 */
+ 		spin_unlock(&lockres->l_lock);
+ 		goto bail;
+ 	}
+@@ -627,6 +637,10 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres)
+ 
+ 	status = ocfs2_dlm_unlock(conn, &lockres->l_lksb, DLM_LKF_VALBLK);
+ 	if (status) {
++		spin_lock(&lockres->l_lock);
++		lockres->l_flags &= ~USER_LOCK_IN_TEARDOWN;
++		lockres->l_flags &= ~USER_LOCK_BUSY;
++		spin_unlock(&lockres->l_lock);
+ 		user_log_dlm_error("ocfs2_dlm_unlock", status, lockres);
+ 		goto bail;
+ 	}
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index 09e4d8a499a38..5898761698c22 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -453,6 +453,9 @@ static struct proc_dir_entry *__proc_create(struct proc_dir_entry **parent,
+ 	proc_set_user(ent, (*parent)->uid, (*parent)->gid);
+ 
+ 	ent->proc_dops = &proc_misc_dentry_ops;
++	/* Revalidate everything under /proc/${pid}/net */
++	if ((*parent)->proc_dops == &proc_net_dentry_ops)
++		pde_force_lookup(ent);
+ 
+ out:
+ 	return ent;
+diff --git a/fs/proc/proc_net.c b/fs/proc/proc_net.c
+index 1aa9236bf1af5..707477e27f831 100644
+--- a/fs/proc/proc_net.c
++++ b/fs/proc/proc_net.c
+@@ -362,6 +362,9 @@ static __net_init int proc_net_ns_init(struct net *net)
+ 
+ 	proc_set_user(netd, uid, gid);
+ 
++	/* Seed dentry revalidation for /proc/${pid}/net */
++	pde_force_lookup(netd);
++
+ 	err = -EEXIST;
+ 	net_statd = proc_net_mkdir(net, "stat", netd);
+ 	if (!net_statd)
+diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c
+index 2d25bab687647..98c82f4935e1e 100644
+--- a/fs/xfs/libxfs/xfs_btree.c
++++ b/fs/xfs/libxfs/xfs_btree.c
+@@ -353,20 +353,17 @@ xfs_btree_free_block(
+  */
+ void
+ xfs_btree_del_cursor(
+-	xfs_btree_cur_t	*cur,		/* btree cursor */
+-	int		error)		/* del because of error */
++	struct xfs_btree_cur	*cur,		/* btree cursor */
++	int			error)		/* del because of error */
+ {
+-	int		i;		/* btree level */
++	int			i;		/* btree level */
+ 
+ 	/*
+-	 * Clear the buffer pointers, and release the buffers.
+-	 * If we're doing this in the face of an error, we
+-	 * need to make sure to inspect all of the entries
+-	 * in the bc_bufs array for buffers to be unlocked.
+-	 * This is because some of the btree code works from
+-	 * level n down to 0, and if we get an error along
+-	 * the way we won't have initialized all the entries
+-	 * down to 0.
++	 * Clear the buffer pointers and release the buffers. If we're doing
++	 * this because of an error, inspect all of the entries in the bc_bufs
++	 * array for buffers to be unlocked. This is because some of the btree
++	 * code works from level n down to 0, and if we get an error along the
++	 * way we won't have initialized all the entries down to 0.
+ 	 */
+ 	for (i = 0; i < cur->bc_nlevels; i++) {
+ 		if (cur->bc_bufs[i])
+@@ -374,17 +371,17 @@ xfs_btree_del_cursor(
+ 		else if (!error)
+ 			break;
+ 	}
++
+ 	/*
+-	 * Can't free a bmap cursor without having dealt with the
+-	 * allocated indirect blocks' accounting.
+-	 */
+-	ASSERT(cur->bc_btnum != XFS_BTNUM_BMAP ||
+-	       cur->bc_ino.allocated == 0);
+-	/*
+-	 * Free the cursor.
++	 * If we are doing a BMBT update, the number of unaccounted blocks
++	 * allocated during this cursor life time should be zero. If it's not
++	 * zero, then we should be shut down or on our way to shutdown due to
++	 * cancelling a dirty transaction on error.
+ 	 */
++	ASSERT(cur->bc_btnum != XFS_BTNUM_BMAP || cur->bc_ino.allocated == 0 ||
++	       XFS_FORCED_SHUTDOWN(cur->bc_mp) || error != 0);
+ 	if (unlikely(cur->bc_flags & XFS_BTREE_STAGING))
+-		kmem_free((void *)cur->bc_ops);
++		kmem_free(cur->bc_ops);
+ 	kmem_cache_free(xfs_btree_cur_zone, cur);
+ }
+ 
+diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c
+index 1d95ed387d66d..80c4579d68357 100644
+--- a/fs/xfs/xfs_dquot.c
++++ b/fs/xfs/xfs_dquot.c
+@@ -500,6 +500,42 @@ xfs_dquot_alloc(
+ 	return dqp;
+ }
+ 
++/* Check the ondisk dquot's id and type match what the incore dquot expects. */
++static bool
++xfs_dquot_check_type(
++	struct xfs_dquot	*dqp,
++	struct xfs_disk_dquot	*ddqp)
++{
++	uint8_t			ddqp_type;
++	uint8_t			dqp_type;
++
++	ddqp_type = ddqp->d_type & XFS_DQTYPE_REC_MASK;
++	dqp_type = xfs_dquot_type(dqp);
++
++	if (be32_to_cpu(ddqp->d_id) != dqp->q_id)
++		return false;
++
++	/*
++	 * V5 filesystems always expect an exact type match.  V4 filesystems
++	 * expect an exact match for user dquots and for non-root group and
++	 * project dquots.
++	 */
++	if (xfs_sb_version_hascrc(&dqp->q_mount->m_sb) ||
++	    dqp_type == XFS_DQTYPE_USER || dqp->q_id != 0)
++		return ddqp_type == dqp_type;
++
++	/*
++	 * V4 filesystems support either group or project quotas, but not both
++	 * at the same time.  The non-user quota file can be switched between
++	 * group and project quota uses depending on the mount options, which
++	 * means that we can encounter the other type when we try to load quota
++	 * defaults.  Quotacheck will soon reset the the entire quota file
++	 * (including the root dquot) anyway, but don't log scary corruption
++	 * reports to dmesg.
++	 */
++	return ddqp_type == XFS_DQTYPE_GROUP || ddqp_type == XFS_DQTYPE_PROJ;
++}
++
+ /* Copy the in-core quota fields in from the on-disk buffer. */
+ STATIC int
+ xfs_dquot_from_disk(
+@@ -512,8 +548,7 @@ xfs_dquot_from_disk(
+ 	 * Ensure that we got the type and ID we were looking for.
+ 	 * Everything else was checked by the dquot buffer verifier.
+ 	 */
+-	if ((ddqp->d_type & XFS_DQTYPE_REC_MASK) != xfs_dquot_type(dqp) ||
+-	    be32_to_cpu(ddqp->d_id) != dqp->q_id) {
++	if (!xfs_dquot_check_type(dqp, ddqp)) {
+ 		xfs_alert_tag(bp->b_mount, XFS_PTAG_VERIFIER_ERROR,
+ 			  "Metadata corruption detected at %pS, quota %u",
+ 			  __this_address, dqp->q_id);
+diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
+index 7b9ff824e82d4..74bc2beadc237 100644
+--- a/fs/xfs/xfs_iomap.c
++++ b/fs/xfs/xfs_iomap.c
+@@ -870,6 +870,9 @@ xfs_buffered_write_iomap_begin(
+ 	int			allocfork = XFS_DATA_FORK;
+ 	int			error = 0;
+ 
++	if (XFS_FORCED_SHUTDOWN(mp))
++		return -EIO;
++
+ 	/* we can't use delayed allocations when using extent size hints */
+ 	if (xfs_get_extsz_hint(ip))
+ 		return xfs_direct_write_iomap_begin(inode, offset, count,
+diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
+index fa2d05e65ff1f..b445e63cbc3c7 100644
+--- a/fs/xfs/xfs_log.c
++++ b/fs/xfs/xfs_log.c
+@@ -347,6 +347,25 @@ xlog_tic_add_region(xlog_ticket_t *tic, uint len, uint type)
+ 	tic->t_res_num++;
+ }
+ 
++bool
++xfs_log_writable(
++	struct xfs_mount	*mp)
++{
++	/*
++	 * Never write to the log on norecovery mounts, if the block device is
++	 * read-only, or if the filesystem is shutdown. Read-only mounts still
++	 * allow internal writes for log recovery and unmount purposes, so don't
++	 * restrict that case here.
++	 */
++	if (mp->m_flags & XFS_MOUNT_NORECOVERY)
++		return false;
++	if (xfs_readonly_buftarg(mp->m_log->l_targ))
++		return false;
++	if (XFS_FORCED_SHUTDOWN(mp))
++		return false;
++	return true;
++}
++
+ /*
+  * Replenish the byte reservation required by moving the grant write head.
+  */
+@@ -886,15 +905,8 @@ xfs_log_unmount_write(
+ {
+ 	struct xlog		*log = mp->m_log;
+ 
+-	/*
+-	 * Don't write out unmount record on norecovery mounts or ro devices.
+-	 * Or, if we are doing a forced umount (typically because of IO errors).
+-	 */
+-	if (mp->m_flags & XFS_MOUNT_NORECOVERY ||
+-	    xfs_readonly_buftarg(log->l_targ)) {
+-		ASSERT(mp->m_flags & XFS_MOUNT_RDONLY);
++	if (!xfs_log_writable(mp))
+ 		return;
+-	}
+ 
+ 	xfs_log_force(mp, XFS_LOG_SYNC);
+ 
+diff --git a/fs/xfs/xfs_log.h b/fs/xfs/xfs_log.h
+index 58c3fcbec94a2..98c913da7587e 100644
+--- a/fs/xfs/xfs_log.h
++++ b/fs/xfs/xfs_log.h
+@@ -127,6 +127,7 @@ int	  xfs_log_reserve(struct xfs_mount *mp,
+ int	  xfs_log_regrant(struct xfs_mount *mp, struct xlog_ticket *tic);
+ void      xfs_log_unmount(struct xfs_mount *mp);
+ int	  xfs_log_force_umount(struct xfs_mount *mp, int logerror);
++bool	xfs_log_writable(struct xfs_mount *mp);
+ 
+ struct xlog_ticket *xfs_log_ticket_get(struct xlog_ticket *ticket);
+ void	  xfs_log_ticket_put(struct xlog_ticket *ticket);
+diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
+index 7110507a2b6bc..44b05e1d5d327 100644
+--- a/fs/xfs/xfs_mount.c
++++ b/fs/xfs/xfs_mount.c
+@@ -631,6 +631,47 @@ xfs_check_summary_counts(
+ 	return xfs_initialize_perag_data(mp, mp->m_sb.sb_agcount);
+ }
+ 
++/*
++ * Flush and reclaim dirty inodes in preparation for unmount. Inodes and
++ * internal inode structures can be sitting in the CIL and AIL at this point,
++ * so we need to unpin them, write them back and/or reclaim them before unmount
++ * can proceed.
++ *
++ * An inode cluster that has been freed can have its buffer still pinned in
++ * memory because the transaction is still sitting in a iclog. The stale inodes
++ * on that buffer will be pinned to the buffer until the transaction hits the
++ * disk and the callbacks run. Pushing the AIL will skip the stale inodes and
++ * may never see the pinned buffer, so nothing will push out the iclog and
++ * unpin the buffer.
++ *
++ * Hence we need to force the log to unpin everything first. However, log
++ * forces don't wait for the discards they issue to complete, so we have to
++ * explicitly wait for them to complete here as well.
++ *
++ * Then we can tell the world we are unmounting so that error handling knows
++ * that the filesystem is going away and we should error out anything that we
++ * have been retrying in the background.  This will prevent never-ending
++ * retries in AIL pushing from hanging the unmount.
++ *
++ * Finally, we can push the AIL to clean all the remaining dirty objects, then
++ * reclaim the remaining inodes that are still in memory at this point in time.
++ */
++static void
++xfs_unmount_flush_inodes(
++	struct xfs_mount	*mp)
++{
++	xfs_log_force(mp, XFS_LOG_SYNC);
++	xfs_extent_busy_wait_all(mp);
++	flush_workqueue(xfs_discard_wq);
++
++	mp->m_flags |= XFS_MOUNT_UNMOUNTING;
++
++	xfs_ail_push_all_sync(mp->m_ail);
++	cancel_delayed_work_sync(&mp->m_reclaim_work);
++	xfs_reclaim_inodes(mp);
++	xfs_health_unmount(mp);
++}
++
+ /*
+  * This function does the following on an initial mount of a file system:
+  *	- reads the superblock from disk and init the mount struct
+@@ -1005,7 +1046,7 @@ xfs_mountfs(
+ 	/* Clean out dquots that might be in memory after quotacheck. */
+ 	xfs_qm_unmount(mp);
+ 	/*
+-	 * Cancel all delayed reclaim work and reclaim the inodes directly.
++	 * Flush all inode reclamation work and flush the log.
+ 	 * We have to do this /after/ rtunmount and qm_unmount because those
+ 	 * two will have scheduled delayed reclaim for the rt/quota inodes.
+ 	 *
+@@ -1015,11 +1056,8 @@ xfs_mountfs(
+ 	 * qm_unmount_quotas and therefore rely on qm_unmount to release the
+ 	 * quota inodes.
+ 	 */
+-	cancel_delayed_work_sync(&mp->m_reclaim_work);
+-	xfs_reclaim_inodes(mp);
+-	xfs_health_unmount(mp);
++	xfs_unmount_flush_inodes(mp);
+  out_log_dealloc:
+-	mp->m_flags |= XFS_MOUNT_UNMOUNTING;
+ 	xfs_log_mount_cancel(mp);
+  out_fail_wait:
+ 	if (mp->m_logdev_targp && mp->m_logdev_targp != mp->m_ddev_targp)
+@@ -1060,47 +1098,7 @@ xfs_unmountfs(
+ 	xfs_rtunmount_inodes(mp);
+ 	xfs_irele(mp->m_rootip);
+ 
+-	/*
+-	 * We can potentially deadlock here if we have an inode cluster
+-	 * that has been freed has its buffer still pinned in memory because
+-	 * the transaction is still sitting in a iclog. The stale inodes
+-	 * on that buffer will be pinned to the buffer until the
+-	 * transaction hits the disk and the callbacks run. Pushing the AIL will
+-	 * skip the stale inodes and may never see the pinned buffer, so
+-	 * nothing will push out the iclog and unpin the buffer. Hence we
+-	 * need to force the log here to ensure all items are flushed into the
+-	 * AIL before we go any further.
+-	 */
+-	xfs_log_force(mp, XFS_LOG_SYNC);
+-
+-	/*
+-	 * Wait for all busy extents to be freed, including completion of
+-	 * any discard operation.
+-	 */
+-	xfs_extent_busy_wait_all(mp);
+-	flush_workqueue(xfs_discard_wq);
+-
+-	/*
+-	 * We now need to tell the world we are unmounting. This will allow
+-	 * us to detect that the filesystem is going away and we should error
+-	 * out anything that we have been retrying in the background. This will
+-	 * prevent neverending retries in AIL pushing from hanging the unmount.
+-	 */
+-	mp->m_flags |= XFS_MOUNT_UNMOUNTING;
+-
+-	/*
+-	 * Flush all pending changes from the AIL.
+-	 */
+-	xfs_ail_push_all_sync(mp->m_ail);
+-
+-	/*
+-	 * Reclaim all inodes. At this point there should be no dirty inodes and
+-	 * none should be pinned or locked. Stop background inode reclaim here
+-	 * if it is still running.
+-	 */
+-	cancel_delayed_work_sync(&mp->m_reclaim_work);
+-	xfs_reclaim_inodes(mp);
+-	xfs_health_unmount(mp);
++	xfs_unmount_flush_inodes(mp);
+ 
+ 	xfs_qm_unmount(mp);
+ 
+@@ -1176,8 +1174,7 @@ xfs_fs_writable(
+ int
+ xfs_log_sbcount(xfs_mount_t *mp)
+ {
+-	/* allow this to proceed during the freeze sequence... */
+-	if (!xfs_fs_writable(mp, SB_FREEZE_COMPLETE))
++	if (!xfs_log_writable(mp))
+ 		return 0;
+ 
+ 	/*
+diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
+index b2a9abee8b2b0..64e5da33733b9 100644
+--- a/fs/xfs/xfs_qm.c
++++ b/fs/xfs/xfs_qm.c
+@@ -1785,6 +1785,29 @@ xfs_qm_vop_chown(
+ 	xfs_trans_mod_dquot(tp, newdq, bfield, ip->i_d.di_nblocks);
+ 	xfs_trans_mod_dquot(tp, newdq, XFS_TRANS_DQ_ICOUNT, 1);
+ 
++	/*
++	 * Back when we made quota reservations for the chown, we reserved the
++	 * ondisk blocks + delalloc blocks with the new dquot.  Now that we've
++	 * switched the dquots, decrease the new dquot's block reservation
++	 * (having already bumped up the real counter) so that we don't have
++	 * any reservation to give back when we commit.
++	 */
++	xfs_trans_mod_dquot(tp, newdq, XFS_TRANS_DQ_RES_BLKS,
++			-ip->i_delayed_blks);
++
++	/*
++	 * Give the incore reservation for delalloc blocks back to the old
++	 * dquot.  We don't normally handle delalloc quota reservations
++	 * transactionally, so just lock the dquot and subtract from the
++	 * reservation.  Dirty the transaction because it's too late to turn
++	 * back now.
++	 */
++	tp->t_flags |= XFS_TRANS_DIRTY;
++	xfs_dqlock(prevdq);
++	ASSERT(prevdq->q_blk.reserved >= ip->i_delayed_blks);
++	prevdq->q_blk.reserved -= ip->i_delayed_blks;
++	xfs_dqunlock(prevdq);
++
+ 	/*
+ 	 * Take an extra reference, because the inode is going to keep
+ 	 * this dquot pointer even after the trans_commit.
+@@ -1807,84 +1830,39 @@ xfs_qm_vop_chown_reserve(
+ 	uint			flags)
+ {
+ 	struct xfs_mount	*mp = ip->i_mount;
+-	uint64_t		delblks;
+ 	unsigned int		blkflags;
+-	struct xfs_dquot	*udq_unres = NULL;
+-	struct xfs_dquot	*gdq_unres = NULL;
+-	struct xfs_dquot	*pdq_unres = NULL;
+ 	struct xfs_dquot	*udq_delblks = NULL;
+ 	struct xfs_dquot	*gdq_delblks = NULL;
+ 	struct xfs_dquot	*pdq_delblks = NULL;
+-	int			error;
+-
+ 
+ 	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_ILOCK_SHARED));
+ 	ASSERT(XFS_IS_QUOTA_RUNNING(mp));
+ 
+-	delblks = ip->i_delayed_blks;
+ 	blkflags = XFS_IS_REALTIME_INODE(ip) ?
+ 			XFS_QMOPT_RES_RTBLKS : XFS_QMOPT_RES_REGBLKS;
+ 
+ 	if (XFS_IS_UQUOTA_ON(mp) && udqp &&
+-	    i_uid_read(VFS_I(ip)) != udqp->q_id) {
++	    i_uid_read(VFS_I(ip)) != udqp->q_id)
+ 		udq_delblks = udqp;
+-		/*
+-		 * If there are delayed allocation blocks, then we have to
+-		 * unreserve those from the old dquot, and add them to the
+-		 * new dquot.
+-		 */
+-		if (delblks) {
+-			ASSERT(ip->i_udquot);
+-			udq_unres = ip->i_udquot;
+-		}
+-	}
++
+ 	if (XFS_IS_GQUOTA_ON(ip->i_mount) && gdqp &&
+-	    i_gid_read(VFS_I(ip)) != gdqp->q_id) {
++	    i_gid_read(VFS_I(ip)) != gdqp->q_id)
+ 		gdq_delblks = gdqp;
+-		if (delblks) {
+-			ASSERT(ip->i_gdquot);
+-			gdq_unres = ip->i_gdquot;
+-		}
+-	}
+ 
+ 	if (XFS_IS_PQUOTA_ON(ip->i_mount) && pdqp &&
+-	    ip->i_d.di_projid != pdqp->q_id) {
++	    ip->i_d.di_projid != pdqp->q_id)
+ 		pdq_delblks = pdqp;
+-		if (delblks) {
+-			ASSERT(ip->i_pdquot);
+-			pdq_unres = ip->i_pdquot;
+-		}
+-	}
+-
+-	error = xfs_trans_reserve_quota_bydquots(tp, ip->i_mount,
+-				udq_delblks, gdq_delblks, pdq_delblks,
+-				ip->i_d.di_nblocks, 1, flags | blkflags);
+-	if (error)
+-		return error;
+ 
+ 	/*
+-	 * Do the delayed blks reservations/unreservations now. Since, these
+-	 * are done without the help of a transaction, if a reservation fails
+-	 * its previous reservations won't be automatically undone by trans
+-	 * code. So, we have to do it manually here.
++	 * Reserve enough quota to handle blocks on disk and reserved for a
++	 * delayed allocation.  We'll actually transfer the delalloc
++	 * reservation between dquots at chown time, even though that part is
++	 * only semi-transactional.
+ 	 */
+-	if (delblks) {
+-		/*
+-		 * Do the reservations first. Unreservation can't fail.
+-		 */
+-		ASSERT(udq_delblks || gdq_delblks || pdq_delblks);
+-		ASSERT(udq_unres || gdq_unres || pdq_unres);
+-		error = xfs_trans_reserve_quota_bydquots(NULL, ip->i_mount,
+-			    udq_delblks, gdq_delblks, pdq_delblks,
+-			    (xfs_qcnt_t)delblks, 0, flags | blkflags);
+-		if (error)
+-			return error;
+-		xfs_trans_reserve_quota_bydquots(NULL, ip->i_mount,
+-				udq_unres, gdq_unres, pdq_unres,
+-				-((xfs_qcnt_t)delblks), 0, blkflags);
+-	}
+-
+-	return 0;
++	return xfs_trans_reserve_quota_bydquots(tp, ip->i_mount, udq_delblks,
++			gdq_delblks, pdq_delblks,
++			ip->i_d.di_nblocks + ip->i_delayed_blks,
++			1, blkflags | flags);
+ }
+ 
+ int
+diff --git a/fs/xfs/xfs_symlink.c b/fs/xfs/xfs_symlink.c
+index 8e88a7ca387ea..8d3abf06c54f8 100644
+--- a/fs/xfs/xfs_symlink.c
++++ b/fs/xfs/xfs_symlink.c
+@@ -300,6 +300,7 @@ xfs_symlink(
+ 		}
+ 		ASSERT(pathlen == 0);
+ 	}
++	i_size_write(VFS_I(ip), ip->i_d.di_size);
+ 
+ 	/*
+ 	 * Create the directory entry for the symlink.
+diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
+index e97daf6ffbb19..4526b6a1e5831 100644
+--- a/include/drm/drm_edid.h
++++ b/include/drm/drm_edid.h
+@@ -121,7 +121,7 @@ struct detailed_data_monitor_range {
+ 			u8 supported_scalings;
+ 			u8 preferred_refresh;
+ 		} __attribute__((packed)) cvt;
+-	} formula;
++	} __attribute__((packed)) formula;
+ } __attribute__((packed));
+ 
+ struct detailed_data_wpindex {
+@@ -154,7 +154,7 @@ struct detailed_non_pixel {
+ 		struct detailed_data_wpindex color;
+ 		struct std_timing timings[6];
+ 		struct cvt_timing cvt[4];
+-	} data;
++	} __attribute__((packed)) data;
+ } __attribute__((packed));
+ 
+ #define EDID_DETAIL_EST_TIMINGS 0xf7
+@@ -172,7 +172,7 @@ struct detailed_timing {
+ 	union {
+ 		struct detailed_pixel_timing pixel_data;
+ 		struct detailed_non_pixel other_data;
+-	} data;
++	} __attribute__((packed)) data;
+ } __attribute__((packed));
+ 
+ #define DRM_EDID_INPUT_SERRATION_VSYNC (1 << 0)
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index ea3ff499e94a3..f21bc441e3fa8 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1730,6 +1730,8 @@ void bpf_offload_dev_netdev_unregister(struct bpf_offload_dev *offdev,
+ 				       struct net_device *netdev);
+ bool bpf_offload_dev_match(struct bpf_prog *prog, struct net_device *netdev);
+ 
++void unpriv_ebpf_notify(int new_state);
++
+ #if defined(CONFIG_NET) && defined(CONFIG_BPF_SYSCALL)
+ int bpf_prog_offload_init(struct bpf_prog *prog, union bpf_attr *attr);
+ 
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index e17cd4c44f93a..3bac68fb7ff1c 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -167,6 +167,8 @@ struct capsule_info {
+ 	size_t			page_bytes_remain;
+ };
+ 
++int efi_capsule_setup_info(struct capsule_info *cap_info, void *kbuff,
++                           size_t hdr_bytes);
+ int __efi_capsule_setup_info(struct capsule_info *cap_info);
+ 
+ typedef int (*efi_freemem_callback_t) (u64 start, u64 end, void *arg);
+diff --git a/include/linux/font.h b/include/linux/font.h
+index b5b312c19e463..4f50d736ea72b 100644
+--- a/include/linux/font.h
++++ b/include/linux/font.h
+@@ -16,7 +16,7 @@
+ struct font_desc {
+     int idx;
+     const char *name;
+-    int width, height;
++    unsigned int width, height;
+     const void *data;
+     int pref;
+ };
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index b216899b4745e..0552a9859a01e 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -477,6 +477,18 @@ struct gpio_chip {
+ 	 */
+ 	int (*of_xlate)(struct gpio_chip *gc,
+ 			const struct of_phandle_args *gpiospec, u32 *flags);
++
++	/**
++	 * @of_gpio_ranges_fallback:
++	 *
++	 * Optional hook for the case that no gpio-ranges property is defined
++	 * within the device tree node "np" (usually DT before introduction
++	 * of gpio-ranges). So this callback is helpful to provide the
++	 * necessary backward compatibility for the pin ranges.
++	 */
++	int (*of_gpio_ranges_fallback)(struct gpio_chip *gc,
++				       struct device_node *np);
++
+ #endif /* CONFIG_OF_GPIO */
+ };
+ 
+diff --git a/include/linux/kexec.h b/include/linux/kexec.h
+index 5f61389f5f361..037192c3a46f7 100644
+--- a/include/linux/kexec.h
++++ b/include/linux/kexec.h
+@@ -187,14 +187,6 @@ void *kexec_purgatory_get_symbol_addr(struct kimage *image, const char *name);
+ int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
+ 				  unsigned long buf_len);
+ void *arch_kexec_kernel_image_load(struct kimage *image);
+-int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+-				     Elf_Shdr *section,
+-				     const Elf_Shdr *relsec,
+-				     const Elf_Shdr *symtab);
+-int arch_kexec_apply_relocations(struct purgatory_info *pi,
+-				 Elf_Shdr *section,
+-				 const Elf_Shdr *relsec,
+-				 const Elf_Shdr *symtab);
+ int arch_kimage_file_post_load_cleanup(struct kimage *image);
+ #ifdef CONFIG_KEXEC_SIG
+ int arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
+@@ -223,6 +215,44 @@ extern int crash_exclude_mem_range(struct crash_mem *mem,
+ 				   unsigned long long mend);
+ extern int crash_prepare_elf64_headers(struct crash_mem *mem, int kernel_map,
+ 				       void **addr, unsigned long *sz);
++
++#ifndef arch_kexec_apply_relocations_add
++/*
++ * arch_kexec_apply_relocations_add - apply relocations of type RELA
++ * @pi:		Purgatory to be relocated.
++ * @section:	Section relocations applying to.
++ * @relsec:	Section containing RELAs.
++ * @symtab:	Corresponding symtab.
++ *
++ * Return: 0 on success, negative errno on error.
++ */
++static inline int
++arch_kexec_apply_relocations_add(struct purgatory_info *pi, Elf_Shdr *section,
++				 const Elf_Shdr *relsec, const Elf_Shdr *symtab)
++{
++	pr_err("RELA relocation unsupported.\n");
++	return -ENOEXEC;
++}
++#endif
++
++#ifndef arch_kexec_apply_relocations
++/*
++ * arch_kexec_apply_relocations - apply relocations of type REL
++ * @pi:		Purgatory to be relocated.
++ * @section:	Section relocations applying to.
++ * @relsec:	Section containing RELs.
++ * @symtab:	Corresponding symtab.
++ *
++ * Return: 0 on success, negative errno on error.
++ */
++static inline int
++arch_kexec_apply_relocations(struct purgatory_info *pi, Elf_Shdr *section,
++			     const Elf_Shdr *relsec, const Elf_Shdr *symtab)
++{
++	pr_err("REL relocation unsupported.\n");
++	return -ENOEXEC;
++}
++#endif
+ #endif /* CONFIG_KEXEC_FILE */
+ 
+ #ifdef CONFIG_KEXEC_ELF
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index a6a3d4ddfc2df..d13631a5e9087 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -311,7 +311,7 @@ LSM_HOOK(int, 0, secmark_relabel_packet, u32 secid)
+ LSM_HOOK(void, LSM_RET_VOID, secmark_refcount_inc, void)
+ LSM_HOOK(void, LSM_RET_VOID, secmark_refcount_dec, void)
+ LSM_HOOK(void, LSM_RET_VOID, req_classify_flow, const struct request_sock *req,
+-	 struct flowi *fl)
++	 struct flowi_common *flic)
+ LSM_HOOK(int, 0, tun_dev_alloc_security, void **security)
+ LSM_HOOK(void, LSM_RET_VOID, tun_dev_free_security, void *security)
+ LSM_HOOK(int, 0, tun_dev_create, void)
+@@ -351,7 +351,7 @@ LSM_HOOK(int, 0, xfrm_state_delete_security, struct xfrm_state *x)
+ LSM_HOOK(int, 0, xfrm_policy_lookup, struct xfrm_sec_ctx *ctx, u32 fl_secid,
+ 	 u8 dir)
+ LSM_HOOK(int, 1, xfrm_state_pol_flow_match, struct xfrm_state *x,
+-	 struct xfrm_policy *xp, const struct flowi *fl)
++	 struct xfrm_policy *xp, const struct flowi_common *flic)
+ LSM_HOOK(int, 0, xfrm_decode_session, struct sk_buff *skb, u32 *secid,
+ 	 int ckall)
+ #endif /* CONFIG_SECURITY_NETWORK_XFRM */
+diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
+index a8531b37e6f52..64cdf4d7bfb30 100644
+--- a/include/linux/lsm_hooks.h
++++ b/include/linux/lsm_hooks.h
+@@ -1105,7 +1105,7 @@
+  * @xfrm_state_pol_flow_match:
+  *	@x contains the state to match.
+  *	@xp contains the policy to check for a match.
+- *	@fl contains the flow to check for a match.
++ *	@flic contains the flowi_common struct to check for a match.
+  *	Return 1 if there is a match.
+  * @xfrm_decode_session:
+  *	@skb points to skb to decode.
+diff --git a/include/linux/mailbox_controller.h b/include/linux/mailbox_controller.h
+index 36d6ce673503c..6fee33cb52f58 100644
+--- a/include/linux/mailbox_controller.h
++++ b/include/linux/mailbox_controller.h
+@@ -83,6 +83,7 @@ struct mbox_controller {
+ 				      const struct of_phandle_args *sp);
+ 	/* Internal to API */
+ 	struct hrtimer poll_hrt;
++	spinlock_t poll_hrt_lock;
+ 	struct list_head node;
+ };
+ 
+diff --git a/include/linux/mtd/cfi.h b/include/linux/mtd/cfi.h
+index fd1ecb8211060..d88bb56c18e2e 100644
+--- a/include/linux/mtd/cfi.h
++++ b/include/linux/mtd/cfi.h
+@@ -286,6 +286,7 @@ struct cfi_private {
+ 	map_word sector_erase_cmd;
+ 	unsigned long chipshift; /* Because they're of the same type */
+ 	const char *im_name;	 /* inter_module name for cmdset_setup */
++	unsigned long quirks;
+ 	struct flchip chips[];  /* per-chip data structure for each chip */
+ };
+ 
+diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
+index ac398e143c9a1..843678bfc364f 100644
+--- a/include/linux/nodemask.h
++++ b/include/linux/nodemask.h
+@@ -375,14 +375,13 @@ static inline void __nodes_fold(nodemask_t *dstp, const nodemask_t *origp,
+ }
+ 
+ #if MAX_NUMNODES > 1
+-#define for_each_node_mask(node, mask)			\
+-	for ((node) = first_node(mask);			\
+-		(node) < MAX_NUMNODES;			\
+-		(node) = next_node((node), (mask)))
++#define for_each_node_mask(node, mask)				    \
++	for ((node) = first_node(mask);				    \
++	     (node >= 0) && (node) < MAX_NUMNODES;		    \
++	     (node) = next_node((node), (mask)))
+ #else /* MAX_NUMNODES == 1 */
+-#define for_each_node_mask(node, mask)			\
+-	if (!nodes_empty(mask))				\
+-		for ((node) = 0; (node) < 1; (node)++)
++#define for_each_node_mask(node, mask)                                  \
++	for ((node) = 0; (node) < 1 && !nodes_empty(mask); (node)++)
+ #endif /* MAX_NUMNODES */
+ 
+ /*
+diff --git a/include/linux/platform_data/cros_ec_proto.h b/include/linux/platform_data/cros_ec_proto.h
+index 02599687770c5..7f03e02c48cd4 100644
+--- a/include/linux/platform_data/cros_ec_proto.h
++++ b/include/linux/platform_data/cros_ec_proto.h
+@@ -216,6 +216,9 @@ int cros_ec_prepare_tx(struct cros_ec_device *ec_dev,
+ int cros_ec_check_result(struct cros_ec_device *ec_dev,
+ 			 struct cros_ec_command *msg);
+ 
++int cros_ec_cmd_xfer(struct cros_ec_device *ec_dev,
++		     struct cros_ec_command *msg);
++
+ int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev,
+ 			    struct cros_ec_command *msg);
+ 
+diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
+index 2a9df80ea8876..ae7dbdfa3d832 100644
+--- a/include/linux/ptrace.h
++++ b/include/linux/ptrace.h
+@@ -30,7 +30,6 @@ extern int ptrace_access_vm(struct task_struct *tsk, unsigned long addr,
+ 
+ #define PT_SEIZED	0x00010000	/* SEIZE used, enable new behavior */
+ #define PT_PTRACED	0x00000001
+-#define PT_DTRACE	0x00000002	/* delayed trace (used on m68k, i386) */
+ 
+ #define PT_OPT_FLAG_SHIFT	3
+ /* PT_TRACE_* event enable flags */
+@@ -47,12 +46,6 @@ extern int ptrace_access_vm(struct task_struct *tsk, unsigned long addr,
+ #define PT_EXITKILL		(PTRACE_O_EXITKILL << PT_OPT_FLAG_SHIFT)
+ #define PT_SUSPEND_SECCOMP	(PTRACE_O_SUSPEND_SECCOMP << PT_OPT_FLAG_SHIFT)
+ 
+-/* single stepping state bits (used on ARM and PA-RISC) */
+-#define PT_SINGLESTEP_BIT	31
+-#define PT_SINGLESTEP		(1<<PT_SINGLESTEP_BIT)
+-#define PT_BLOCKSTEP_BIT	30
+-#define PT_BLOCKSTEP		(1<<PT_BLOCKSTEP_BIT)
+-
+ extern long arch_ptrace(struct task_struct *child, long request,
+ 			unsigned long addr, unsigned long data);
+ extern int ptrace_readdata(struct task_struct *tsk, unsigned long src, char __user *dst, int len);
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 330029ef7e894..e9b4b54106147 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -170,7 +170,7 @@ struct sk_buff;
+ struct sock;
+ struct sockaddr;
+ struct socket;
+-struct flowi;
++struct flowi_common;
+ struct dst_entry;
+ struct xfrm_selector;
+ struct xfrm_policy;
+@@ -1363,8 +1363,9 @@ int security_socket_getpeersec_dgram(struct socket *sock, struct sk_buff *skb, u
+ int security_sk_alloc(struct sock *sk, int family, gfp_t priority);
+ void security_sk_free(struct sock *sk);
+ void security_sk_clone(const struct sock *sk, struct sock *newsk);
+-void security_sk_classify_flow(struct sock *sk, struct flowi *fl);
+-void security_req_classify_flow(const struct request_sock *req, struct flowi *fl);
++void security_sk_classify_flow(struct sock *sk, struct flowi_common *flic);
++void security_req_classify_flow(const struct request_sock *req,
++				struct flowi_common *flic);
+ void security_sock_graft(struct sock*sk, struct socket *parent);
+ int security_inet_conn_request(struct sock *sk,
+ 			struct sk_buff *skb, struct request_sock *req);
+@@ -1515,11 +1516,13 @@ static inline void security_sk_clone(const struct sock *sk, struct sock *newsk)
+ {
+ }
+ 
+-static inline void security_sk_classify_flow(struct sock *sk, struct flowi *fl)
++static inline void security_sk_classify_flow(struct sock *sk,
++					     struct flowi_common *flic)
+ {
+ }
+ 
+-static inline void security_req_classify_flow(const struct request_sock *req, struct flowi *fl)
++static inline void security_req_classify_flow(const struct request_sock *req,
++					      struct flowi_common *flic)
+ {
+ }
+ 
+@@ -1646,9 +1649,9 @@ void security_xfrm_state_free(struct xfrm_state *x);
+ int security_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir);
+ int security_xfrm_state_pol_flow_match(struct xfrm_state *x,
+ 				       struct xfrm_policy *xp,
+-				       const struct flowi *fl);
++				       const struct flowi_common *flic);
+ int security_xfrm_decode_session(struct sk_buff *skb, u32 *secid);
+-void security_skb_classify_flow(struct sk_buff *skb, struct flowi *fl);
++void security_skb_classify_flow(struct sk_buff *skb, struct flowi_common *flic);
+ 
+ #else	/* CONFIG_SECURITY_NETWORK_XFRM */
+ 
+@@ -1700,7 +1703,8 @@ static inline int security_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_s
+ }
+ 
+ static inline int security_xfrm_state_pol_flow_match(struct xfrm_state *x,
+-			struct xfrm_policy *xp, const struct flowi *fl)
++						     struct xfrm_policy *xp,
++						     const struct flowi_common *flic)
+ {
+ 	return 1;
+ }
+@@ -1710,7 +1714,8 @@ static inline int security_xfrm_decode_session(struct sk_buff *skb, u32 *secid)
+ 	return 0;
+ }
+ 
+-static inline void security_skb_classify_flow(struct sk_buff *skb, struct flowi *fl)
++static inline void security_skb_classify_flow(struct sk_buff *skb,
++					      struct flowi_common *flic)
+ {
+ }
+ 
+diff --git a/include/linux/thermal.h b/include/linux/thermal.h
+index 176d9454e8f36..7097d4dcfdd07 100644
+--- a/include/linux/thermal.h
++++ b/include/linux/thermal.h
+@@ -92,7 +92,7 @@ struct thermal_cooling_device_ops {
+ 
+ struct thermal_cooling_device {
+ 	int id;
+-	char type[THERMAL_NAME_LENGTH];
++	char *type;
+ 	struct device device;
+ 	struct device_node *np;
+ 	void *devdata;
+diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
+index 3dbb42c637c14..9f05016d823f8 100644
+--- a/include/linux/usb/hcd.h
++++ b/include/linux/usb/hcd.h
+@@ -124,6 +124,7 @@ struct usb_hcd {
+ #define HCD_FLAG_RH_RUNNING		5	/* root hub is running? */
+ #define HCD_FLAG_DEAD			6	/* controller has died? */
+ #define HCD_FLAG_INTF_AUTHORIZED	7	/* authorize interfaces? */
++#define HCD_FLAG_DEFER_RH_REGISTER	8	/* Defer roothub registration */
+ 
+ 	/* The flags can be tested using these macros; they are likely to
+ 	 * be slightly faster than test_bit().
+@@ -134,6 +135,7 @@ struct usb_hcd {
+ #define HCD_WAKEUP_PENDING(hcd)	((hcd)->flags & (1U << HCD_FLAG_WAKEUP_PENDING))
+ #define HCD_RH_RUNNING(hcd)	((hcd)->flags & (1U << HCD_FLAG_RH_RUNNING))
+ #define HCD_DEAD(hcd)		((hcd)->flags & (1U << HCD_FLAG_DEAD))
++#define HCD_DEFER_RH_REGISTER(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEFER_RH_REGISTER))
+ 
+ 	/*
+ 	 * Specifies if interfaces are authorized by default
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 243de74e118e7..ede7a153c69a5 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -1503,7 +1503,7 @@ struct hci_cp_le_set_scan_enable {
+ } __packed;
+ 
+ #define HCI_LE_USE_PEER_ADDR		0x00
+-#define HCI_LE_USE_WHITELIST		0x01
++#define HCI_LE_USE_ACCEPT_LIST		0x01
+ 
+ #define HCI_OP_LE_CREATE_CONN		0x200d
+ struct hci_cp_le_create_conn {
+@@ -1523,22 +1523,22 @@ struct hci_cp_le_create_conn {
+ 
+ #define HCI_OP_LE_CREATE_CONN_CANCEL	0x200e
+ 
+-#define HCI_OP_LE_READ_WHITE_LIST_SIZE	0x200f
+-struct hci_rp_le_read_white_list_size {
++#define HCI_OP_LE_READ_ACCEPT_LIST_SIZE	0x200f
++struct hci_rp_le_read_accept_list_size {
+ 	__u8	status;
+ 	__u8	size;
+ } __packed;
+ 
+-#define HCI_OP_LE_CLEAR_WHITE_LIST	0x2010
++#define HCI_OP_LE_CLEAR_ACCEPT_LIST	0x2010
+ 
+-#define HCI_OP_LE_ADD_TO_WHITE_LIST	0x2011
+-struct hci_cp_le_add_to_white_list {
++#define HCI_OP_LE_ADD_TO_ACCEPT_LIST	0x2011
++struct hci_cp_le_add_to_accept_list {
+ 	__u8     bdaddr_type;
+ 	bdaddr_t bdaddr;
+ } __packed;
+ 
+-#define HCI_OP_LE_DEL_FROM_WHITE_LIST	0x2012
+-struct hci_cp_le_del_from_white_list {
++#define HCI_OP_LE_DEL_FROM_ACCEPT_LIST	0x2012
++struct hci_cp_le_del_from_accept_list {
+ 	__u8     bdaddr_type;
+ 	bdaddr_t bdaddr;
+ } __packed;
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 09104ca14a3e4..11a92bb4d7a9f 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -308,7 +308,7 @@ struct hci_dev {
+ 	__u8		max_page;
+ 	__u8		features[HCI_MAX_PAGES][8];
+ 	__u8		le_features[8];
+-	__u8		le_white_list_size;
++	__u8		le_accept_list_size;
+ 	__u8		le_resolv_list_size;
+ 	__u8		le_num_of_adv_sets;
+ 	__u8		le_states[8];
+@@ -364,6 +364,8 @@ struct hci_dev {
+ 	__u8		ssp_debug_mode;
+ 	__u8		hw_error_code;
+ 	__u32		clock;
++	__u16		advmon_allowlist_duration;
++	__u16		advmon_no_filter_duration;
+ 
+ 	__u16		devid_source;
+ 	__u16		devid_vendor;
+@@ -497,14 +499,14 @@ struct hci_dev {
+ 	struct hci_conn_hash	conn_hash;
+ 
+ 	struct list_head	mgmt_pending;
+-	struct list_head	blacklist;
+-	struct list_head	whitelist;
++	struct list_head	reject_list;
++	struct list_head	accept_list;
+ 	struct list_head	uuids;
+ 	struct list_head	link_keys;
+ 	struct list_head	long_term_keys;
+ 	struct list_head	identity_resolving_keys;
+ 	struct list_head	remote_oob_data;
+-	struct list_head	le_white_list;
++	struct list_head	le_accept_list;
+ 	struct list_head	le_resolv_list;
+ 	struct list_head	le_conn_params;
+ 	struct list_head	pend_le_conns;
+@@ -545,6 +547,14 @@ struct hci_dev {
+ 	struct delayed_work	rpa_expired;
+ 	bdaddr_t		rpa;
+ 
++	enum {
++		INTERLEAVE_SCAN_NONE,
++		INTERLEAVE_SCAN_NO_FILTER,
++		INTERLEAVE_SCAN_ALLOWLIST
++	} interleave_scan_state;
++
++	struct delayed_work	interleave_scan;
++
+ #if IS_ENABLED(CONFIG_BT_LEDS)
+ 	struct led_trigger	*power_led;
+ #endif
+diff --git a/include/net/flow.h b/include/net/flow.h
+index b2531df3f65f1..39d0cedcddeee 100644
+--- a/include/net/flow.h
++++ b/include/net/flow.h
+@@ -195,11 +195,21 @@ static inline struct flowi *flowi4_to_flowi(struct flowi4 *fl4)
+ 	return container_of(fl4, struct flowi, u.ip4);
+ }
+ 
++static inline struct flowi_common *flowi4_to_flowi_common(struct flowi4 *fl4)
++{
++	return &(flowi4_to_flowi(fl4)->u.__fl_common);
++}
++
+ static inline struct flowi *flowi6_to_flowi(struct flowi6 *fl6)
+ {
+ 	return container_of(fl6, struct flowi, u.ip6);
+ }
+ 
++static inline struct flowi_common *flowi6_to_flowi_common(struct flowi6 *fl6)
++{
++	return &(flowi6_to_flowi(fl6)->u.__fl_common);
++}
++
+ static inline struct flowi *flowidn_to_flowi(struct flowidn *fldn)
+ {
+ 	return container_of(fldn, struct flowi, u.dn);
+diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h
+index 8bf5906073bc7..e03ba8e80781a 100644
+--- a/include/net/if_inet6.h
++++ b/include/net/if_inet6.h
+@@ -64,6 +64,14 @@ struct inet6_ifaddr {
+ 
+ 	struct hlist_node	addr_lst;
+ 	struct list_head	if_list;
++	/*
++	 * Used to safely traverse idev->addr_list in process context
++	 * if the idev->lock needed to protect idev->addr_list cannot be held.
++	 * In that case, add the items to this list temporarily and iterate
++	 * without holding idev->lock.
++	 * See addrconf_ifdown and dev_forward_change.
++	 */
++	struct list_head	if_list_aux;
+ 
+ 	struct list_head	tmp_list;
+ 	struct inet6_ifaddr	*ifpub;
+diff --git a/include/net/route.h b/include/net/route.h
+index a07c277cd33e8..2551f3f03b37e 100644
+--- a/include/net/route.h
++++ b/include/net/route.h
+@@ -165,7 +165,7 @@ static inline struct rtable *ip_route_output_ports(struct net *net, struct flowi
+ 			   sk ? inet_sk_flowi_flags(sk) : 0,
+ 			   daddr, saddr, dport, sport, sock_net_uid(net, sk));
+ 	if (sk)
+-		security_sk_classify_flow(sk, flowi4_to_flowi(fl4));
++		security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4));
+ 	return ip_route_output_flow(net, fl4, sk);
+ }
+ 
+@@ -322,7 +322,7 @@ static inline struct rtable *ip_route_connect(struct flowi4 *fl4,
+ 		ip_rt_put(rt);
+ 		flowi4_update_output(fl4, oif, tos, fl4->daddr, fl4->saddr);
+ 	}
+-	security_sk_classify_flow(sk, flowi4_to_flowi(fl4));
++	security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4));
+ 	return ip_route_output_flow(net, fl4, sk);
+ }
+ 
+@@ -338,7 +338,7 @@ static inline struct rtable *ip_route_newports(struct flowi4 *fl4, struct rtable
+ 		flowi4_update_output(fl4, sk->sk_bound_dev_if,
+ 				     RT_CONN_FLAGS(sk), fl4->daddr,
+ 				     fl4->saddr);
+-		security_sk_classify_flow(sk, flowi4_to_flowi(fl4));
++		security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4));
+ 		return ip_route_output_flow(sock_net(sk), fl4, sk);
+ 	}
+ 	return rt;
+diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
+index fac8e89aed81d..310e0dbffda99 100644
+--- a/include/scsi/libfcoe.h
++++ b/include/scsi/libfcoe.h
+@@ -249,7 +249,8 @@ int fcoe_ctlr_recv_flogi(struct fcoe_ctlr *, struct fc_lport *,
+ 			 struct fc_frame *);
+ 
+ /* libfcoe funcs */
+-u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int);
++u64 fcoe_wwn_from_mac(unsigned char mac[ETH_ALEN], unsigned int scheme,
++		      unsigned int port);
+ int fcoe_libfc_config(struct fc_lport *, struct fcoe_ctlr *,
+ 		      const struct libfc_function_template *, int init_fcp);
+ u32 fcoe_fc_crc(struct fc_frame *fp);
+diff --git a/include/sound/jack.h b/include/sound/jack.h
+index 9eb2b5ec1ec41..78f3619f3de94 100644
+--- a/include/sound/jack.h
++++ b/include/sound/jack.h
+@@ -62,6 +62,7 @@ struct snd_jack {
+ 	const char *id;
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
+ 	struct input_dev *input_dev;
++	struct mutex input_dev_lock;
+ 	int registered;
+ 	int type;
+ 	char name[100];
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 4a3ab0ed6e062..1c714336b8635 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -1509,7 +1509,7 @@ TRACE_EVENT(rxrpc_call_reset,
+ 		    __entry->call_serial = call->rx_serial;
+ 		    __entry->conn_serial = call->conn->hi_serial;
+ 		    __entry->tx_seq = call->tx_hard_ack;
+-		    __entry->rx_seq = call->ackr_seen;
++		    __entry->rx_seq = call->rx_hard_ack;
+ 			   ),
+ 
+ 	    TP_printk("c=%08x %08x:%08x r=%08x/%08x tx=%08x rx=%08x",
+diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
+index 2070df64958ea..b4feeb4b216a0 100644
+--- a/include/trace/events/vmscan.h
++++ b/include/trace/events/vmscan.h
+@@ -283,7 +283,7 @@ TRACE_EVENT(mm_vmscan_lru_isolate,
+ 		__field(unsigned long, nr_scanned)
+ 		__field(unsigned long, nr_skipped)
+ 		__field(unsigned long, nr_taken)
+-		__field(isolate_mode_t, isolate_mode)
++		__field(unsigned int, isolate_mode)
+ 		__field(int, lru)
+ 	),
+ 
+@@ -294,7 +294,7 @@ TRACE_EVENT(mm_vmscan_lru_isolate,
+ 		__entry->nr_scanned = nr_scanned;
+ 		__entry->nr_skipped = nr_skipped;
+ 		__entry->nr_taken = nr_taken;
+-		__entry->isolate_mode = isolate_mode;
++		__entry->isolate_mode = (__force unsigned int)isolate_mode;
+ 		__entry->lru = lru;
+ 	),
+ 
+diff --git a/init/Kconfig b/init/Kconfig
+index 13685bffef370..22912631d79b4 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -68,6 +68,11 @@ config CC_HAS_ASM_GOTO_OUTPUT
+ 	depends on CC_HAS_ASM_GOTO
+ 	def_bool $(success,echo 'int foo(int x) { asm goto ("": "=r"(x) ::: bar); return x; bar: return 0; }' | $(CC) -x c - -c -o /dev/null)
+ 
++config CC_HAS_ASM_GOTO_TIED_OUTPUT
++	depends on CC_HAS_ASM_GOTO_OUTPUT
++	# Detect buggy gcc and clang, fixed in gcc-11 clang-14.
++	def_bool $(success,echo 'int foo(int *x) { asm goto (".long (%l[bar]) - .\n": "+m"(*x) ::: bar); return *x; bar: return 0; }' | $CC -x c - -c -o /dev/null)
++
+ config TOOLS_SUPPORT_RELR
+ 	def_bool $(success,env "CC=$(CC)" "LD=$(LD)" "NM=$(NM)" "OBJCOPY=$(OBJCOPY)" $(srctree)/scripts/tools-support-relr.sh)
+ 
+diff --git a/ipc/mqueue.c b/ipc/mqueue.c
+index 05d2176cc4712..86969de170843 100644
+--- a/ipc/mqueue.c
++++ b/ipc/mqueue.c
+@@ -45,6 +45,7 @@
+ 
+ struct mqueue_fs_context {
+ 	struct ipc_namespace	*ipc_ns;
++	bool			 newns;	/* Set if newly created ipc namespace */
+ };
+ 
+ #define MQUEUE_MAGIC	0x19800202
+@@ -424,6 +425,14 @@ static int mqueue_get_tree(struct fs_context *fc)
+ {
+ 	struct mqueue_fs_context *ctx = fc->fs_private;
+ 
++	/*
++	 * With a newly created ipc namespace, we don't need to do a search
++	 * for an ipc namespace match, but we still need to set s_fs_info.
++	 */
++	if (ctx->newns) {
++		fc->s_fs_info = ctx->ipc_ns;
++		return get_tree_nodev(fc, mqueue_fill_super);
++	}
+ 	return get_tree_keyed(fc, mqueue_fill_super, ctx->ipc_ns);
+ }
+ 
+@@ -451,6 +460,10 @@ static int mqueue_init_fs_context(struct fs_context *fc)
+ 	return 0;
+ }
+ 
++/*
++ * mq_init_ns() is currently the only caller of mq_create_mount().
++ * So the ns parameter is always a newly created ipc namespace.
++ */
+ static struct vfsmount *mq_create_mount(struct ipc_namespace *ns)
+ {
+ 	struct mqueue_fs_context *ctx;
+@@ -462,6 +475,7 @@ static struct vfsmount *mq_create_mount(struct ipc_namespace *ns)
+ 		return ERR_CAST(fc);
+ 
+ 	ctx = fc->fs_private;
++	ctx->newns = true;
+ 	put_ipc_ns(ctx->ipc_ns);
+ 	ctx->ipc_ns = get_ipc_ns(ns);
+ 	put_user_ns(fc->user_ns);
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index 4575d2d60cb10..c19e669afba0e 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -121,7 +121,6 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
+ 		return ERR_PTR(-E2BIG);
+ 
+ 	cost = n_buckets * sizeof(struct stack_map_bucket *) + sizeof(*smap);
+-	cost += n_buckets * (value_size + sizeof(struct stack_map_bucket));
+ 	err = bpf_map_charge_init(&mem, cost);
+ 	if (err)
+ 		return ERR_PTR(err);
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index f8ae546798651..ee7da1f2462f5 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -448,7 +448,7 @@ void debug_dma_dump_mappings(struct device *dev)
+  * other hand, consumes a single dma_debug_entry, but inserts 'nents'
+  * entries into the tree.
+  */
+-static RADIX_TREE(dma_active_cacheline, GFP_NOWAIT);
++static RADIX_TREE(dma_active_cacheline, GFP_ATOMIC);
+ static DEFINE_SPINLOCK(radix_lock);
+ #define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << RADIX_TREE_MAX_TAGS) - 1)
+ #define CACHELINE_PER_PAGE_SHIFT (PAGE_SHIFT - L1_CACHE_SHIFT)
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index aea9104265f29..2e0f0b3fb9ab0 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -108,40 +108,6 @@ int __weak arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
+ }
+ #endif
+ 
+-/*
+- * arch_kexec_apply_relocations_add - apply relocations of type RELA
+- * @pi:		Purgatory to be relocated.
+- * @section:	Section relocations applying to.
+- * @relsec:	Section containing RELAs.
+- * @symtab:	Corresponding symtab.
+- *
+- * Return: 0 on success, negative errno on error.
+- */
+-int __weak
+-arch_kexec_apply_relocations_add(struct purgatory_info *pi, Elf_Shdr *section,
+-				 const Elf_Shdr *relsec, const Elf_Shdr *symtab)
+-{
+-	pr_err("RELA relocation unsupported.\n");
+-	return -ENOEXEC;
+-}
+-
+-/*
+- * arch_kexec_apply_relocations - apply relocations of type REL
+- * @pi:		Purgatory to be relocated.
+- * @section:	Section relocations applying to.
+- * @relsec:	Section containing RELs.
+- * @symtab:	Corresponding symtab.
+- *
+- * Return: 0 on success, negative errno on error.
+- */
+-int __weak
+-arch_kexec_apply_relocations(struct purgatory_info *pi, Elf_Shdr *section,
+-			     const Elf_Shdr *relsec, const Elf_Shdr *symtab)
+-{
+-	pr_err("REL relocation unsupported.\n");
+-	return -ENOEXEC;
+-}
+-
+ /*
+  * Free up memory used by kernel, initrd, and command line. This is temporary
+  * memory allocation which is not needed any more after these buffers have
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index d99f73f83bf5f..aab480e24bd60 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -1219,9 +1219,8 @@ int ptrace_request(struct task_struct *child, long request,
+ 		return ptrace_resume(child, request, data);
+ 
+ 	case PTRACE_KILL:
+-		if (child->exit_state)	/* already dead */
+-			return 0;
+-		return ptrace_resume(child, request, SIGKILL);
++		send_sig_info(SIGKILL, SEND_SIG_NOINFO, child);
++		return 0;
+ 
+ #ifdef CONFIG_HAVE_ARCH_TRACEHOOK
+ 	case PTRACE_GETREGSET:
+diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
+index b71e21f73c403..cd6e11403f1b1 100644
+--- a/kernel/rcu/Kconfig
++++ b/kernel/rcu/Kconfig
+@@ -86,6 +86,7 @@ config TASKS_RCU
+ 
+ config TASKS_RUDE_RCU
+ 	def_bool 0
++	select IRQ_WORK
+ 	help
+ 	  This option enables a task-based RCU implementation that uses
+ 	  only context switch (including preemption) and user-mode
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 7c05c5ab78653..14af29fe13770 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -620,6 +620,9 @@ static void rcu_tasks_be_rude(struct work_struct *work)
+ // Wait for one rude RCU-tasks grace period.
+ static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
+ {
++	if (num_online_cpus() <= 1)
++		return;	// Fastpath for only one CPU.
++
+ 	rtp->n_ipis += cpumask_weight(cpu_online_mask);
+ 	schedule_on_each_cpu(rcu_tasks_be_rude);
+ }
+diff --git a/kernel/scftorture.c b/kernel/scftorture.c
+index 554a521ee235e..060ee0b1569a0 100644
+--- a/kernel/scftorture.c
++++ b/kernel/scftorture.c
+@@ -253,9 +253,10 @@ static void scf_handler(void *scfc_in)
+ 	}
+ 	this_cpu_inc(scf_invoked_count);
+ 	if (longwait <= 0) {
+-		if (!(r & 0xffc0))
++		if (!(r & 0xffc0)) {
+ 			udelay(r & 0x3f);
+-		goto out;
++			goto out;
++		}
+ 	}
+ 	if (r & 0xfff)
+ 		goto out;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 1a306ef51bbe5..bca0efc03a51d 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4758,8 +4758,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
+ 
+ 	cfs_rq->throttle_count--;
+ 	if (!cfs_rq->throttle_count) {
+-		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
+-					     cfs_rq->throttled_clock_task;
++		cfs_rq->throttled_clock_pelt_time += rq_clock_pelt(rq) -
++					     cfs_rq->throttled_clock_pelt;
+ 
+ 		/* Add cfs_rq with already running entity in the list */
+ 		if (cfs_rq->nr_running >= 1)
+@@ -4776,7 +4776,7 @@ static int tg_throttle_down(struct task_group *tg, void *data)
+ 
+ 	/* group is entering throttled state, stop time */
+ 	if (!cfs_rq->throttle_count) {
+-		cfs_rq->throttled_clock_task = rq_clock_task(rq);
++		cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq);
+ 		list_del_leaf_cfs_rq(cfs_rq);
+ 	}
+ 	cfs_rq->throttle_count++;
+@@ -5194,7 +5194,7 @@ static void sync_throttle(struct task_group *tg, int cpu)
+ 	pcfs_rq = tg->parent->cfs_rq[cpu];
+ 
+ 	cfs_rq->throttle_count = pcfs_rq->throttle_count;
+-	cfs_rq->throttled_clock_task = rq_clock_task(cpu_rq(cpu));
++	cfs_rq->throttled_clock_pelt = rq_clock_pelt(cpu_rq(cpu));
+ }
+ 
+ /* conditionally throttle active cfs_rq's from put_prev_entity() */
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index 45bf08e22207c..89150ced09cf4 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -145,9 +145,9 @@ static inline u64 rq_clock_pelt(struct rq *rq)
+ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ {
+ 	if (unlikely(cfs_rq->throttle_count))
+-		return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time;
++		return cfs_rq->throttled_clock_pelt - cfs_rq->throttled_clock_pelt_time;
+ 
+-	return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
++	return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_pelt_time;
+ }
+ #else
+ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 08db8e095e48f..8d39f5d99172a 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -599,8 +599,8 @@ struct cfs_rq {
+ 	s64			runtime_remaining;
+ 
+ 	u64			throttled_clock;
+-	u64			throttled_clock_task;
+-	u64			throttled_clock_task_time;
++	u64			throttled_clock_pelt;
++	u64			throttled_clock_pelt_time;
+ 	int			throttled;
+ 	int			throttle_count;
+ 	struct list_head	throttled_list;
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 4a5d35dc490b2..a63713dcd05d5 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -4427,7 +4427,7 @@ int ftrace_func_mapper_add_ip(struct ftrace_func_mapper *mapper,
+  * @ip: The instruction pointer address to remove the data from
+  *
+  * Returns the data if it is found, otherwise NULL.
+- * Note, if the data pointer is used as the data itself, (see 
++ * Note, if the data pointer is used as the data itself, (see
+  * ftrace_func_mapper_find_ip(), then the return value may be meaningless,
+  * if the data pointer was set to zero.
+  */
+@@ -5153,8 +5153,6 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
+ 	__add_hash_entry(direct_functions, entry);
+ 
+ 	ret = ftrace_set_filter_ip(&direct_ops, ip, 0, 0);
+-	if (ret)
+-		remove_hash_entry(direct_functions, entry);
+ 
+ 	if (!ret && !(direct_ops.flags & FTRACE_OPS_FL_ENABLED)) {
+ 		ret = register_ftrace_function(&direct_ops);
+@@ -5163,6 +5161,7 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
+ 	}
+ 
+ 	if (ret) {
++		remove_hash_entry(direct_functions, entry);
+ 		kfree(entry);
+ 		if (!direct->count) {
+ 			list_del_rcu(&direct->next);
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index eb7200699cf66..3ed1723b68d56 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1789,8 +1789,11 @@ static int init_var_ref(struct hist_field *ref_field,
+ 	return err;
+  free:
+ 	kfree(ref_field->system);
++	ref_field->system = NULL;
+ 	kfree(ref_field->event_name);
++	ref_field->event_name = NULL;
+ 	kfree(ref_field->name);
++	ref_field->name = NULL;
+ 
+ 	goto out;
+ }
+diff --git a/mm/compaction.c b/mm/compaction.c
+index dba424447473d..8dfbe86bd74f7 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -1747,6 +1747,8 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
+ 
+ 				update_fast_start_pfn(cc, free_pfn);
+ 				pfn = pageblock_start_pfn(free_pfn);
++				if (pfn < cc->zone->zone_start_pfn)
++					pfn = cc->zone->zone_start_pfn;
+ 				cc->fast_search_fail = 0;
+ 				found_block = true;
+ 				set_pageblock_skip(freepage);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index fce705fc2848a..c42c76447e103 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5465,7 +5465,14 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
+ 	pud_clear(pud);
+ 	put_page(virt_to_page(ptep));
+ 	mm_dec_nr_pmds(mm);
+-	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
++	/*
++	 * This update of passed address optimizes loops sequentially
++	 * processing addresses in increments of huge page size (PMD_SIZE
++	 * in this case).  By clearing the pud, a PUD_SIZE area is unmapped.
++	 * Update address to the 'last page' in the cleared area so that
++	 * calling loop can move to first page past this area.
++	 */
++	*addr |= PUD_SIZE - PMD_SIZE;
+ 	return 1;
+ }
+ #define want_pmd_share()	(1)
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index ecd2ffcf2ba28..140d9764c77e3 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -240,7 +240,7 @@ int hci_disconnect(struct hci_conn *conn, __u8 reason)
+ {
+ 	BT_DBG("hcon %p", conn);
+ 
+-	/* When we are master of an established connection and it enters
++	/* When we are central of an established connection and it enters
+ 	 * the disconnect timeout, then go ahead and try to read the
+ 	 * current clock offset.  Processing of the result is done
+ 	 * within the event handling and hci_clock_offset_evt function.
+@@ -1065,16 +1065,16 @@ struct hci_conn *hci_connect_le(struct hci_dev *hdev, bdaddr_t *dst,
+ 
+ 	hci_req_init(&req, hdev);
+ 
+-	/* Disable advertising if we're active. For master role
++	/* Disable advertising if we're active. For central role
+ 	 * connections most controllers will refuse to connect if
+-	 * advertising is enabled, and for slave role connections we
++	 * advertising is enabled, and for peripheral role connections we
+ 	 * anyway have to disable it in order to start directed
+ 	 * advertising.
+ 	 */
+ 	if (hci_dev_test_flag(hdev, HCI_LE_ADV))
+ 		 __hci_req_disable_advertising(&req);
+ 
+-	/* If requested to connect as slave use directed advertising */
++	/* If requested to connect as peripheral use directed advertising */
+ 	if (conn->role == HCI_ROLE_SLAVE) {
+ 		/* If we're active scanning most controllers are unable
+ 		 * to initiate advertising. Simply reject the attempt.
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index c331b4176de73..2cb0cf035476b 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -742,14 +742,14 @@ static int hci_init3_req(struct hci_request *req, unsigned long opt)
+ 		}
+ 
+ 		if (hdev->commands[26] & 0x40) {
+-			/* Read LE White List Size */
+-			hci_req_add(req, HCI_OP_LE_READ_WHITE_LIST_SIZE,
++			/* Read LE Accept List Size */
++			hci_req_add(req, HCI_OP_LE_READ_ACCEPT_LIST_SIZE,
+ 				    0, NULL);
+ 		}
+ 
+ 		if (hdev->commands[26] & 0x80) {
+-			/* Clear LE White List */
+-			hci_req_add(req, HCI_OP_LE_CLEAR_WHITE_LIST, 0, NULL);
++			/* Clear LE Accept List */
++			hci_req_add(req, HCI_OP_LE_CLEAR_ACCEPT_LIST, 0, NULL);
+ 		}
+ 
+ 		if (hdev->commands[34] & 0x40) {
+@@ -3548,13 +3548,13 @@ static int hci_suspend_notifier(struct notifier_block *nb, unsigned long action,
+ 		/* Suspend consists of two actions:
+ 		 *  - First, disconnect everything and make the controller not
+ 		 *    connectable (disabling scanning)
+-		 *  - Second, program event filter/whitelist and enable scan
++		 *  - Second, program event filter/accept list and enable scan
+ 		 */
+ 		ret = hci_change_suspend_state(hdev, BT_SUSPEND_DISCONNECT);
+ 		if (!ret)
+ 			state = BT_SUSPEND_DISCONNECT;
+ 
+-		/* Only configure whitelist if disconnect succeeded and wake
++		/* Only configure accept list if disconnect succeeded and wake
+ 		 * isn't being prevented.
+ 		 */
+ 		if (!ret && !(hdev->prevent_wake && hdev->prevent_wake(hdev))) {
+@@ -3606,6 +3606,9 @@ struct hci_dev *hci_alloc_dev(void)
+ 	hdev->cur_adv_instance = 0x00;
+ 	hdev->adv_instance_timeout = 0;
+ 
++	hdev->advmon_allowlist_duration = 300;
++	hdev->advmon_no_filter_duration = 500;
++
+ 	hdev->sniff_max_interval = 800;
+ 	hdev->sniff_min_interval = 80;
+ 
+@@ -3654,14 +3657,14 @@ struct hci_dev *hci_alloc_dev(void)
+ 	mutex_init(&hdev->req_lock);
+ 
+ 	INIT_LIST_HEAD(&hdev->mgmt_pending);
+-	INIT_LIST_HEAD(&hdev->blacklist);
+-	INIT_LIST_HEAD(&hdev->whitelist);
++	INIT_LIST_HEAD(&hdev->reject_list);
++	INIT_LIST_HEAD(&hdev->accept_list);
+ 	INIT_LIST_HEAD(&hdev->uuids);
+ 	INIT_LIST_HEAD(&hdev->link_keys);
+ 	INIT_LIST_HEAD(&hdev->long_term_keys);
+ 	INIT_LIST_HEAD(&hdev->identity_resolving_keys);
+ 	INIT_LIST_HEAD(&hdev->remote_oob_data);
+-	INIT_LIST_HEAD(&hdev->le_white_list);
++	INIT_LIST_HEAD(&hdev->le_accept_list);
+ 	INIT_LIST_HEAD(&hdev->le_resolv_list);
+ 	INIT_LIST_HEAD(&hdev->le_conn_params);
+ 	INIT_LIST_HEAD(&hdev->pend_le_conns);
+@@ -3877,8 +3880,8 @@ void hci_cleanup_dev(struct hci_dev *hdev)
+ 	destroy_workqueue(hdev->req_workqueue);
+ 
+ 	hci_dev_lock(hdev);
+-	hci_bdaddr_list_clear(&hdev->blacklist);
+-	hci_bdaddr_list_clear(&hdev->whitelist);
++	hci_bdaddr_list_clear(&hdev->reject_list);
++	hci_bdaddr_list_clear(&hdev->accept_list);
+ 	hci_uuids_clear(hdev);
+ 	hci_link_keys_clear(hdev);
+ 	hci_smp_ltks_clear(hdev);
+@@ -3886,7 +3889,7 @@ void hci_cleanup_dev(struct hci_dev *hdev)
+ 	hci_remote_oob_data_clear(hdev);
+ 	hci_adv_instances_clear(hdev);
+ 	hci_adv_monitors_clear(hdev);
+-	hci_bdaddr_list_clear(&hdev->le_white_list);
++	hci_bdaddr_list_clear(&hdev->le_accept_list);
+ 	hci_bdaddr_list_clear(&hdev->le_resolv_list);
+ 	hci_conn_params_clear_all(hdev);
+ 	hci_discovery_filter_clear(hdev);
+diff --git a/net/bluetooth/hci_debugfs.c b/net/bluetooth/hci_debugfs.c
+index 5e8af2658e44a..338833f123659 100644
+--- a/net/bluetooth/hci_debugfs.c
++++ b/net/bluetooth/hci_debugfs.c
+@@ -125,7 +125,7 @@ static int device_list_show(struct seq_file *f, void *ptr)
+ 	struct bdaddr_list *b;
+ 
+ 	hci_dev_lock(hdev);
+-	list_for_each_entry(b, &hdev->whitelist, list)
++	list_for_each_entry(b, &hdev->accept_list, list)
+ 		seq_printf(f, "%pMR (type %u)\n", &b->bdaddr, b->bdaddr_type);
+ 	list_for_each_entry(p, &hdev->le_conn_params, list) {
+ 		seq_printf(f, "%pMR (type %u) %u\n", &p->addr, p->addr_type,
+@@ -144,7 +144,7 @@ static int blacklist_show(struct seq_file *f, void *p)
+ 	struct bdaddr_list *b;
+ 
+ 	hci_dev_lock(hdev);
+-	list_for_each_entry(b, &hdev->blacklist, list)
++	list_for_each_entry(b, &hdev->reject_list, list)
+ 		seq_printf(f, "%pMR (type %u)\n", &b->bdaddr, b->bdaddr_type);
+ 	hci_dev_unlock(hdev);
+ 
+@@ -734,7 +734,7 @@ static int white_list_show(struct seq_file *f, void *ptr)
+ 	struct bdaddr_list *b;
+ 
+ 	hci_dev_lock(hdev);
+-	list_for_each_entry(b, &hdev->le_white_list, list)
++	list_for_each_entry(b, &hdev->le_accept_list, list)
+ 		seq_printf(f, "%pMR (type %u)\n", &b->bdaddr, b->bdaddr_type);
+ 	hci_dev_unlock(hdev);
+ 
+@@ -1145,7 +1145,7 @@ void hci_debugfs_create_le(struct hci_dev *hdev)
+ 				    &force_static_address_fops);
+ 
+ 	debugfs_create_u8("white_list_size", 0444, hdev->debugfs,
+-			  &hdev->le_white_list_size);
++			  &hdev->le_accept_list_size);
+ 	debugfs_create_file("white_list", 0444, hdev->debugfs, hdev,
+ 			    &white_list_fops);
+ 	debugfs_create_u8("resolv_list_size", 0444, hdev->debugfs,
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index e926e80d9731b..954b29605c942 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -236,7 +236,7 @@ static void hci_cc_reset(struct hci_dev *hdev, struct sk_buff *skb)
+ 
+ 	hdev->ssp_debug_mode = 0;
+ 
+-	hci_bdaddr_list_clear(&hdev->le_white_list);
++	hci_bdaddr_list_clear(&hdev->le_accept_list);
+ 	hci_bdaddr_list_clear(&hdev->le_resolv_list);
+ }
+ 
+@@ -1456,21 +1456,21 @@ static void hci_cc_le_read_num_adv_sets(struct hci_dev *hdev,
+ 	hdev->le_num_of_adv_sets = rp->num_of_sets;
+ }
+ 
+-static void hci_cc_le_read_white_list_size(struct hci_dev *hdev,
+-					   struct sk_buff *skb)
++static void hci_cc_le_read_accept_list_size(struct hci_dev *hdev,
++					    struct sk_buff *skb)
+ {
+-	struct hci_rp_le_read_white_list_size *rp = (void *) skb->data;
++	struct hci_rp_le_read_accept_list_size *rp = (void *)skb->data;
+ 
+ 	BT_DBG("%s status 0x%2.2x size %u", hdev->name, rp->status, rp->size);
+ 
+ 	if (rp->status)
+ 		return;
+ 
+-	hdev->le_white_list_size = rp->size;
++	hdev->le_accept_list_size = rp->size;
+ }
+ 
+-static void hci_cc_le_clear_white_list(struct hci_dev *hdev,
+-				       struct sk_buff *skb)
++static void hci_cc_le_clear_accept_list(struct hci_dev *hdev,
++					struct sk_buff *skb)
+ {
+ 	__u8 status = *((__u8 *) skb->data);
+ 
+@@ -1479,13 +1479,13 @@ static void hci_cc_le_clear_white_list(struct hci_dev *hdev,
+ 	if (status)
+ 		return;
+ 
+-	hci_bdaddr_list_clear(&hdev->le_white_list);
++	hci_bdaddr_list_clear(&hdev->le_accept_list);
+ }
+ 
+-static void hci_cc_le_add_to_white_list(struct hci_dev *hdev,
+-					struct sk_buff *skb)
++static void hci_cc_le_add_to_accept_list(struct hci_dev *hdev,
++					 struct sk_buff *skb)
+ {
+-	struct hci_cp_le_add_to_white_list *sent;
++	struct hci_cp_le_add_to_accept_list *sent;
+ 	__u8 status = *((__u8 *) skb->data);
+ 
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, status);
+@@ -1493,18 +1493,18 @@ static void hci_cc_le_add_to_white_list(struct hci_dev *hdev,
+ 	if (status)
+ 		return;
+ 
+-	sent = hci_sent_cmd_data(hdev, HCI_OP_LE_ADD_TO_WHITE_LIST);
++	sent = hci_sent_cmd_data(hdev, HCI_OP_LE_ADD_TO_ACCEPT_LIST);
+ 	if (!sent)
+ 		return;
+ 
+-	hci_bdaddr_list_add(&hdev->le_white_list, &sent->bdaddr,
+-			   sent->bdaddr_type);
++	hci_bdaddr_list_add(&hdev->le_accept_list, &sent->bdaddr,
++			    sent->bdaddr_type);
+ }
+ 
+-static void hci_cc_le_del_from_white_list(struct hci_dev *hdev,
+-					  struct sk_buff *skb)
++static void hci_cc_le_del_from_accept_list(struct hci_dev *hdev,
++					   struct sk_buff *skb)
+ {
+-	struct hci_cp_le_del_from_white_list *sent;
++	struct hci_cp_le_del_from_accept_list *sent;
+ 	__u8 status = *((__u8 *) skb->data);
+ 
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, status);
+@@ -1512,11 +1512,11 @@ static void hci_cc_le_del_from_white_list(struct hci_dev *hdev,
+ 	if (status)
+ 		return;
+ 
+-	sent = hci_sent_cmd_data(hdev, HCI_OP_LE_DEL_FROM_WHITE_LIST);
++	sent = hci_sent_cmd_data(hdev, HCI_OP_LE_DEL_FROM_ACCEPT_LIST);
+ 	if (!sent)
+ 		return;
+ 
+-	hci_bdaddr_list_del(&hdev->le_white_list, &sent->bdaddr,
++	hci_bdaddr_list_del(&hdev->le_accept_list, &sent->bdaddr,
+ 			    sent->bdaddr_type);
+ }
+ 
+@@ -2331,7 +2331,7 @@ static void cs_le_create_conn(struct hci_dev *hdev, bdaddr_t *peer_addr,
+ 	/* We don't want the connection attempt to stick around
+ 	 * indefinitely since LE doesn't have a page timeout concept
+ 	 * like BR/EDR. Set a timer for any connection that doesn't use
+-	 * the white list for connecting.
++	 * the accept list for connecting.
+ 	 */
+ 	if (filter_policy == HCI_LE_USE_PEER_ADDR)
+ 		queue_delayed_work(conn->hdev->workqueue,
+@@ -2587,7 +2587,7 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		 * only used during suspend.
+ 		 */
+ 		if (ev->link_type == ACL_LINK &&
+-		    hci_bdaddr_list_lookup_with_flags(&hdev->whitelist,
++		    hci_bdaddr_list_lookup_with_flags(&hdev->accept_list,
+ 						      &ev->bdaddr,
+ 						      BDADDR_BREDR)) {
+ 			conn = hci_conn_add(hdev, ev->link_type, &ev->bdaddr,
+@@ -2709,28 +2709,28 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		return;
+ 	}
+ 
+-	if (hci_bdaddr_list_lookup(&hdev->blacklist, &ev->bdaddr,
++	hci_dev_lock(hdev);
++
++	if (hci_bdaddr_list_lookup(&hdev->reject_list, &ev->bdaddr,
+ 				   BDADDR_BREDR)) {
+ 		hci_reject_conn(hdev, &ev->bdaddr);
+-		return;
++		goto unlock;
+ 	}
+ 
+-	/* Require HCI_CONNECTABLE or a whitelist entry to accept the
++	/* Require HCI_CONNECTABLE or an accept list entry to accept the
+ 	 * connection. These features are only touched through mgmt so
+ 	 * only do the checks if HCI_MGMT is set.
+ 	 */
+ 	if (hci_dev_test_flag(hdev, HCI_MGMT) &&
+ 	    !hci_dev_test_flag(hdev, HCI_CONNECTABLE) &&
+-	    !hci_bdaddr_list_lookup_with_flags(&hdev->whitelist, &ev->bdaddr,
++	    !hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, &ev->bdaddr,
+ 					       BDADDR_BREDR)) {
+ 		hci_reject_conn(hdev, &ev->bdaddr);
+-		return;
++		goto unlock;
+ 	}
+ 
+ 	/* Connection accepted */
+ 
+-	hci_dev_lock(hdev);
+-
+ 	ie = hci_inquiry_cache_lookup(hdev, &ev->bdaddr);
+ 	if (ie)
+ 		memcpy(ie->data.dev_class, ev->dev_class, 3);
+@@ -2742,8 +2742,7 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 				    HCI_ROLE_SLAVE);
+ 		if (!conn) {
+ 			bt_dev_err(hdev, "no memory for new connection");
+-			hci_dev_unlock(hdev);
+-			return;
++			goto unlock;
+ 		}
+ 	}
+ 
+@@ -2759,9 +2758,9 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		bacpy(&cp.bdaddr, &ev->bdaddr);
+ 
+ 		if (lmp_rswitch_capable(hdev) && (mask & HCI_LM_MASTER))
+-			cp.role = 0x00; /* Become master */
++			cp.role = 0x00; /* Become central */
+ 		else
+-			cp.role = 0x01; /* Remain slave */
++			cp.role = 0x01; /* Remain peripheral */
+ 
+ 		hci_send_cmd(hdev, HCI_OP_ACCEPT_CONN_REQ, sizeof(cp), &cp);
+ 	} else if (!(flags & HCI_PROTO_DEFER)) {
+@@ -2783,6 +2782,10 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		conn->state = BT_CONNECT2;
+ 		hci_connect_cfm(conn, 0);
+ 	}
++
++	return;
++unlock:
++	hci_dev_unlock(hdev);
+ }
+ 
+ static u8 hci_to_mgmt_reason(u8 err)
+@@ -3481,20 +3484,20 @@ static void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *skb,
+ 		hci_cc_le_set_scan_enable(hdev, skb);
+ 		break;
+ 
+-	case HCI_OP_LE_READ_WHITE_LIST_SIZE:
+-		hci_cc_le_read_white_list_size(hdev, skb);
++	case HCI_OP_LE_READ_ACCEPT_LIST_SIZE:
++		hci_cc_le_read_accept_list_size(hdev, skb);
+ 		break;
+ 
+-	case HCI_OP_LE_CLEAR_WHITE_LIST:
+-		hci_cc_le_clear_white_list(hdev, skb);
++	case HCI_OP_LE_CLEAR_ACCEPT_LIST:
++		hci_cc_le_clear_accept_list(hdev, skb);
+ 		break;
+ 
+-	case HCI_OP_LE_ADD_TO_WHITE_LIST:
+-		hci_cc_le_add_to_white_list(hdev, skb);
++	case HCI_OP_LE_ADD_TO_ACCEPT_LIST:
++		hci_cc_le_add_to_accept_list(hdev, skb);
+ 		break;
+ 
+-	case HCI_OP_LE_DEL_FROM_WHITE_LIST:
+-		hci_cc_le_del_from_white_list(hdev, skb);
++	case HCI_OP_LE_DEL_FROM_ACCEPT_LIST:
++		hci_cc_le_del_from_accept_list(hdev, skb);
+ 		break;
+ 
+ 	case HCI_OP_LE_READ_SUPPORTED_STATES:
+@@ -5153,8 +5156,8 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ 		conn->dst_type = bdaddr_type;
+ 
+ 		/* If we didn't have a hci_conn object previously
+-		 * but we're in master role this must be something
+-		 * initiated using a white list. Since white list based
++		 * but we're in central role this must be something
++		 * initiated using an accept list. Since accept list based
+ 		 * connections are not "first class citizens" we don't
+ 		 * have full tracking of them. Therefore, we go ahead
+ 		 * with a "best effort" approach of determining the
+@@ -5204,7 +5207,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ 		addr_type = BDADDR_LE_RANDOM;
+ 
+ 	/* Drop the connection if the device is blocked */
+-	if (hci_bdaddr_list_lookup(&hdev->blacklist, &conn->dst, addr_type)) {
++	if (hci_bdaddr_list_lookup(&hdev->reject_list, &conn->dst, addr_type)) {
+ 		hci_conn_drop(conn);
+ 		goto unlock;
+ 	}
+@@ -5372,7 +5375,7 @@ static struct hci_conn *check_pending_le_conn(struct hci_dev *hdev,
+ 		return NULL;
+ 
+ 	/* Ignore if the device is blocked */
+-	if (hci_bdaddr_list_lookup(&hdev->blacklist, addr, addr_type))
++	if (hci_bdaddr_list_lookup(&hdev->reject_list, addr, addr_type))
+ 		return NULL;
+ 
+ 	/* Most controller will fail if we try to create new connections
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index d965b7c66bd62..a0f980e615052 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -382,6 +382,53 @@ void __hci_req_write_fast_connectable(struct hci_request *req, bool enable)
+ 		hci_req_add(req, HCI_OP_WRITE_PAGE_SCAN_TYPE, 1, &type);
+ }
+ 
++static void start_interleave_scan(struct hci_dev *hdev)
++{
++	hdev->interleave_scan_state = INTERLEAVE_SCAN_NO_FILTER;
++	queue_delayed_work(hdev->req_workqueue,
++			   &hdev->interleave_scan, 0);
++}
++
++static bool is_interleave_scanning(struct hci_dev *hdev)
++{
++	return hdev->interleave_scan_state != INTERLEAVE_SCAN_NONE;
++}
++
++static void cancel_interleave_scan(struct hci_dev *hdev)
++{
++	bt_dev_dbg(hdev, "cancelling interleave scan");
++
++	cancel_delayed_work_sync(&hdev->interleave_scan);
++
++	hdev->interleave_scan_state = INTERLEAVE_SCAN_NONE;
++}
++
++/* Return true if interleave_scan wasn't started until exiting this function,
++ * otherwise, return false
++ */
++static bool __hci_update_interleaved_scan(struct hci_dev *hdev)
++{
++	/* If there is at least one ADV monitors and one pending LE connection
++	 * or one device to be scanned for, we should alternate between
++	 * allowlist scan and one without any filters to save power.
++	 */
++	bool use_interleaving = hci_is_adv_monitoring(hdev) &&
++				!(list_empty(&hdev->pend_le_conns) &&
++				  list_empty(&hdev->pend_le_reports));
++	bool is_interleaving = is_interleave_scanning(hdev);
++
++	if (use_interleaving && !is_interleaving) {
++		start_interleave_scan(hdev);
++		bt_dev_dbg(hdev, "starting interleave scan");
++		return true;
++	}
++
++	if (!use_interleaving && is_interleaving)
++		cancel_interleave_scan(hdev);
++
++	return false;
++}
++
+ /* This function controls the background scanning based on hdev->pend_le_conns
+  * list. If there are pending LE connection we start the background scanning,
+  * otherwise we stop it.
+@@ -454,8 +501,7 @@ static void __hci_update_background_scan(struct hci_request *req)
+ 			hci_req_add_le_scan_disable(req, false);
+ 
+ 		hci_req_add_le_passive_scan(req);
+-
+-		BT_DBG("%s starting background scanning", hdev->name);
++		bt_dev_dbg(hdev, "starting background scanning");
+ 	}
+ }
+ 
+@@ -690,17 +736,17 @@ void hci_req_add_le_scan_disable(struct hci_request *req, bool rpa_le_conn)
+ 	}
+ }
+ 
+-static void del_from_white_list(struct hci_request *req, bdaddr_t *bdaddr,
+-				u8 bdaddr_type)
++static void del_from_accept_list(struct hci_request *req, bdaddr_t *bdaddr,
++				 u8 bdaddr_type)
+ {
+-	struct hci_cp_le_del_from_white_list cp;
++	struct hci_cp_le_del_from_accept_list cp;
+ 
+ 	cp.bdaddr_type = bdaddr_type;
+ 	bacpy(&cp.bdaddr, bdaddr);
+ 
+-	bt_dev_dbg(req->hdev, "Remove %pMR (0x%x) from whitelist", &cp.bdaddr,
++	bt_dev_dbg(req->hdev, "Remove %pMR (0x%x) from accept list", &cp.bdaddr,
+ 		   cp.bdaddr_type);
+-	hci_req_add(req, HCI_OP_LE_DEL_FROM_WHITE_LIST, sizeof(cp), &cp);
++	hci_req_add(req, HCI_OP_LE_DEL_FROM_ACCEPT_LIST, sizeof(cp), &cp);
+ 
+ 	if (use_ll_privacy(req->hdev) &&
+ 	    hci_dev_test_flag(req->hdev, HCI_ENABLE_LL_PRIVACY)) {
+@@ -719,31 +765,31 @@ static void del_from_white_list(struct hci_request *req, bdaddr_t *bdaddr,
+ 	}
+ }
+ 
+-/* Adds connection to white list if needed. On error, returns -1. */
+-static int add_to_white_list(struct hci_request *req,
+-			     struct hci_conn_params *params, u8 *num_entries,
+-			     bool allow_rpa)
++/* Adds connection to accept list if needed. On error, returns -1. */
++static int add_to_accept_list(struct hci_request *req,
++			      struct hci_conn_params *params, u8 *num_entries,
++			      bool allow_rpa)
+ {
+-	struct hci_cp_le_add_to_white_list cp;
++	struct hci_cp_le_add_to_accept_list cp;
+ 	struct hci_dev *hdev = req->hdev;
+ 
+-	/* Already in white list */
+-	if (hci_bdaddr_list_lookup(&hdev->le_white_list, &params->addr,
++	/* Already in accept list */
++	if (hci_bdaddr_list_lookup(&hdev->le_accept_list, &params->addr,
+ 				   params->addr_type))
+ 		return 0;
+ 
+ 	/* Select filter policy to accept all advertising */
+-	if (*num_entries >= hdev->le_white_list_size)
++	if (*num_entries >= hdev->le_accept_list_size)
+ 		return -1;
+ 
+-	/* White list can not be used with RPAs */
++	/* Accept list can not be used with RPAs */
+ 	if (!allow_rpa &&
+ 	    !hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY) &&
+ 	    hci_find_irk_by_addr(hdev, &params->addr, params->addr_type)) {
+ 		return -1;
+ 	}
+ 
+-	/* During suspend, only wakeable devices can be in whitelist */
++	/* During suspend, only wakeable devices can be in accept list */
+ 	if (hdev->suspended && !hci_conn_test_flag(HCI_CONN_FLAG_REMOTE_WAKEUP,
+ 						   params->current_flags))
+ 		return 0;
+@@ -752,9 +798,9 @@ static int add_to_white_list(struct hci_request *req,
+ 	cp.bdaddr_type = params->addr_type;
+ 	bacpy(&cp.bdaddr, &params->addr);
+ 
+-	bt_dev_dbg(hdev, "Add %pMR (0x%x) to whitelist", &cp.bdaddr,
++	bt_dev_dbg(hdev, "Add %pMR (0x%x) to accept list", &cp.bdaddr,
+ 		   cp.bdaddr_type);
+-	hci_req_add(req, HCI_OP_LE_ADD_TO_WHITE_LIST, sizeof(cp), &cp);
++	hci_req_add(req, HCI_OP_LE_ADD_TO_ACCEPT_LIST, sizeof(cp), &cp);
+ 
+ 	if (use_ll_privacy(hdev) &&
+ 	    hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY)) {
+@@ -782,27 +828,31 @@ static int add_to_white_list(struct hci_request *req,
+ 	return 0;
+ }
+ 
+-static u8 update_white_list(struct hci_request *req)
++static u8 update_accept_list(struct hci_request *req)
+ {
+ 	struct hci_dev *hdev = req->hdev;
+ 	struct hci_conn_params *params;
+ 	struct bdaddr_list *b;
+ 	u8 num_entries = 0;
+ 	bool pend_conn, pend_report;
+-	/* We allow whitelisting even with RPAs in suspend. In the worst case,
+-	 * we won't be able to wake from devices that use the privacy1.2
++	/* We allow usage of accept list even with RPAs in suspend. In the worst
++	 * case, we won't be able to wake from devices that use the privacy1.2
+ 	 * features. Additionally, once we support privacy1.2 and IRK
+ 	 * offloading, we can update this to also check for those conditions.
+ 	 */
+ 	bool allow_rpa = hdev->suspended;
+ 
+-	/* Go through the current white list programmed into the
++	if (use_ll_privacy(hdev) &&
++	    hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY))
++		allow_rpa = true;
++
++	/* Go through the current accept list programmed into the
+ 	 * controller one by one and check if that address is still
+ 	 * in the list of pending connections or list of devices to
+ 	 * report. If not present in either list, then queue the
+ 	 * command to remove it from the controller.
+ 	 */
+-	list_for_each_entry(b, &hdev->le_white_list, list) {
++	list_for_each_entry(b, &hdev->le_accept_list, list) {
+ 		pend_conn = hci_pend_le_action_lookup(&hdev->pend_le_conns,
+ 						      &b->bdaddr,
+ 						      b->bdaddr_type);
+@@ -811,14 +861,14 @@ static u8 update_white_list(struct hci_request *req)
+ 							b->bdaddr_type);
+ 
+ 		/* If the device is not likely to connect or report,
+-		 * remove it from the whitelist.
++		 * remove it from the accept list.
+ 		 */
+ 		if (!pend_conn && !pend_report) {
+-			del_from_white_list(req, &b->bdaddr, b->bdaddr_type);
++			del_from_accept_list(req, &b->bdaddr, b->bdaddr_type);
+ 			continue;
+ 		}
+ 
+-		/* White list can not be used with RPAs */
++		/* Accept list can not be used with RPAs */
+ 		if (!allow_rpa &&
+ 		    !hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY) &&
+ 		    hci_find_irk_by_addr(hdev, &b->bdaddr, b->bdaddr_type)) {
+@@ -828,39 +878,44 @@ static u8 update_white_list(struct hci_request *req)
+ 		num_entries++;
+ 	}
+ 
+-	/* Since all no longer valid white list entries have been
++	/* Since all no longer valid accept list entries have been
+ 	 * removed, walk through the list of pending connections
+ 	 * and ensure that any new device gets programmed into
+ 	 * the controller.
+ 	 *
+ 	 * If the list of the devices is larger than the list of
+-	 * available white list entries in the controller, then
++	 * available accept list entries in the controller, then
+ 	 * just abort and return filer policy value to not use the
+-	 * white list.
++	 * accept list.
+ 	 */
+ 	list_for_each_entry(params, &hdev->pend_le_conns, action) {
+-		if (add_to_white_list(req, params, &num_entries, allow_rpa))
++		if (add_to_accept_list(req, params, &num_entries, allow_rpa))
+ 			return 0x00;
+ 	}
+ 
+ 	/* After adding all new pending connections, walk through
+ 	 * the list of pending reports and also add these to the
+-	 * white list if there is still space. Abort if space runs out.
++	 * accept list if there is still space. Abort if space runs out.
+ 	 */
+ 	list_for_each_entry(params, &hdev->pend_le_reports, action) {
+-		if (add_to_white_list(req, params, &num_entries, allow_rpa))
++		if (add_to_accept_list(req, params, &num_entries, allow_rpa))
+ 			return 0x00;
+ 	}
+ 
+-	/* Once the controller offloading of advertisement monitor is in place,
+-	 * the if condition should include the support of MSFT extension
+-	 * support. If suspend is ongoing, whitelist should be the default to
+-	 * prevent waking by random advertisements.
++	/* Use the allowlist unless the following conditions are all true:
++	 * - We are not currently suspending
++	 * - There are 1 or more ADV monitors registered
++	 * - Interleaved scanning is not currently using the allowlist
++	 *
++	 * Once the controller offloading of advertisement monitor is in place,
++	 * the above condition should include the support of MSFT extension
++	 * support.
+ 	 */
+-	if (!idr_is_empty(&hdev->adv_monitors_idr) && !hdev->suspended)
++	if (!idr_is_empty(&hdev->adv_monitors_idr) && !hdev->suspended &&
++	    hdev->interleave_scan_state != INTERLEAVE_SCAN_ALLOWLIST)
+ 		return 0x00;
+ 
+-	/* Select filter policy to use white list */
++	/* Select filter policy to use accept list */
+ 	return 0x01;
+ }
+ 
+@@ -1010,20 +1065,24 @@ void hci_req_add_le_passive_scan(struct hci_request *req)
+ 				      &own_addr_type))
+ 		return;
+ 
+-	/* Adding or removing entries from the white list must
++	if (__hci_update_interleaved_scan(hdev))
++		return;
++
++	bt_dev_dbg(hdev, "interleave state %d", hdev->interleave_scan_state);
++	/* Adding or removing entries from the accept list must
+ 	 * happen before enabling scanning. The controller does
+-	 * not allow white list modification while scanning.
++	 * not allow accept list modification while scanning.
+ 	 */
+-	filter_policy = update_white_list(req);
++	filter_policy = update_accept_list(req);
+ 
+ 	/* When the controller is using random resolvable addresses and
+ 	 * with that having LE privacy enabled, then controllers with
+ 	 * Extended Scanner Filter Policies support can now enable support
+ 	 * for handling directed advertising.
+ 	 *
+-	 * So instead of using filter polices 0x00 (no whitelist)
+-	 * and 0x01 (whitelist enabled) use the new filter policies
+-	 * 0x02 (no whitelist) and 0x03 (whitelist enabled).
++	 * So instead of using filter polices 0x00 (no accept list)
++	 * and 0x01 (accept list enabled) use the new filter policies
++	 * 0x02 (no accept list) and 0x03 (accept list enabled).
+ 	 */
+ 	if (hci_dev_test_flag(hdev, HCI_PRIVACY) &&
+ 	    (hdev->le_features[0] & HCI_LE_EXT_SCAN_POLICY))
+@@ -1043,7 +1102,8 @@ void hci_req_add_le_passive_scan(struct hci_request *req)
+ 		interval = hdev->le_scan_interval;
+ 	}
+ 
+-	bt_dev_dbg(hdev, "LE passive scan with whitelist = %d", filter_policy);
++	bt_dev_dbg(hdev, "LE passive scan with accept list = %d",
++		   filter_policy);
+ 	hci_req_start_scan(req, LE_SCAN_PASSIVE, interval, window,
+ 			   own_addr_type, filter_policy, addr_resolv);
+ }
+@@ -1091,7 +1151,7 @@ static void hci_req_set_event_filter(struct hci_request *req)
+ 	/* Always clear event filter when starting */
+ 	hci_req_clear_event_filter(req);
+ 
+-	list_for_each_entry(b, &hdev->whitelist, list) {
++	list_for_each_entry(b, &hdev->accept_list, list) {
+ 		if (!hci_conn_test_flag(HCI_CONN_FLAG_REMOTE_WAKEUP,
+ 					b->current_flags))
+ 			continue;
+@@ -1884,6 +1944,62 @@ unlock:
+ 	hci_dev_unlock(hdev);
+ }
+ 
++static int hci_req_add_le_interleaved_scan(struct hci_request *req,
++					   unsigned long opt)
++{
++	struct hci_dev *hdev = req->hdev;
++	int ret = 0;
++
++	hci_dev_lock(hdev);
++
++	if (hci_dev_test_flag(hdev, HCI_LE_SCAN))
++		hci_req_add_le_scan_disable(req, false);
++	hci_req_add_le_passive_scan(req);
++
++	switch (hdev->interleave_scan_state) {
++	case INTERLEAVE_SCAN_ALLOWLIST:
++		bt_dev_dbg(hdev, "next state: allowlist");
++		hdev->interleave_scan_state = INTERLEAVE_SCAN_NO_FILTER;
++		break;
++	case INTERLEAVE_SCAN_NO_FILTER:
++		bt_dev_dbg(hdev, "next state: no filter");
++		hdev->interleave_scan_state = INTERLEAVE_SCAN_ALLOWLIST;
++		break;
++	case INTERLEAVE_SCAN_NONE:
++		BT_ERR("unexpected error");
++		ret = -1;
++	}
++
++	hci_dev_unlock(hdev);
++
++	return ret;
++}
++
++static void interleave_scan_work(struct work_struct *work)
++{
++	struct hci_dev *hdev = container_of(work, struct hci_dev,
++					    interleave_scan.work);
++	u8 status;
++	unsigned long timeout;
++
++	if (hdev->interleave_scan_state == INTERLEAVE_SCAN_ALLOWLIST) {
++		timeout = msecs_to_jiffies(hdev->advmon_allowlist_duration);
++	} else if (hdev->interleave_scan_state == INTERLEAVE_SCAN_NO_FILTER) {
++		timeout = msecs_to_jiffies(hdev->advmon_no_filter_duration);
++	} else {
++		bt_dev_err(hdev, "unexpected error");
++		return;
++	}
++
++	hci_req_sync(hdev, hci_req_add_le_interleaved_scan, 0,
++		     HCI_CMD_TIMEOUT, &status);
++
++	/* Don't continue interleaving if it was canceled */
++	if (is_interleave_scanning(hdev))
++		queue_delayed_work(hdev->req_workqueue,
++				   &hdev->interleave_scan, timeout);
++}
++
+ int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
+ 			   bool use_rpa, struct adv_info *adv_instance,
+ 			   u8 *own_addr_type, bdaddr_t *rand_addr)
+@@ -2447,11 +2563,11 @@ int hci_update_random_address(struct hci_request *req, bool require_privacy,
+ 	return 0;
+ }
+ 
+-static bool disconnected_whitelist_entries(struct hci_dev *hdev)
++static bool disconnected_accept_list_entries(struct hci_dev *hdev)
+ {
+ 	struct bdaddr_list *b;
+ 
+-	list_for_each_entry(b, &hdev->whitelist, list) {
++	list_for_each_entry(b, &hdev->accept_list, list) {
+ 		struct hci_conn *conn;
+ 
+ 		conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &b->bdaddr);
+@@ -2483,7 +2599,7 @@ void __hci_req_update_scan(struct hci_request *req)
+ 		return;
+ 
+ 	if (hci_dev_test_flag(hdev, HCI_CONNECTABLE) ||
+-	    disconnected_whitelist_entries(hdev))
++	    disconnected_accept_list_entries(hdev))
+ 		scan = SCAN_PAGE;
+ 	else
+ 		scan = SCAN_DISABLED;
+@@ -2972,7 +3088,7 @@ static int active_scan(struct hci_request *req, unsigned long opt)
+ 	uint16_t interval = opt;
+ 	struct hci_dev *hdev = req->hdev;
+ 	u8 own_addr_type;
+-	/* White list is not used for discovery */
++	/* Accept list is not used for discovery */
+ 	u8 filter_policy = 0x00;
+ 	/* Discovery doesn't require controller address resolution */
+ 	bool addr_resolv = false;
+@@ -3311,6 +3427,7 @@ void hci_request_setup(struct hci_dev *hdev)
+ 	INIT_DELAYED_WORK(&hdev->le_scan_disable, le_scan_disable_work);
+ 	INIT_DELAYED_WORK(&hdev->le_scan_restart, le_scan_restart_work);
+ 	INIT_DELAYED_WORK(&hdev->adv_instance_expire, adv_timeout_expire);
++	INIT_DELAYED_WORK(&hdev->interleave_scan, interleave_scan_work);
+ }
+ 
+ void hci_request_cancel_all(struct hci_dev *hdev)
+@@ -3330,4 +3447,6 @@ void hci_request_cancel_all(struct hci_dev *hdev)
+ 		cancel_delayed_work_sync(&hdev->adv_instance_expire);
+ 		hdev->adv_instance_timeout = 0;
+ 	}
++
++	cancel_interleave_scan(hdev);
+ }
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 53f85d7c5f9e5..71d18d3295f50 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -897,7 +897,7 @@ static int hci_sock_release(struct socket *sock)
+ 	return 0;
+ }
+ 
+-static int hci_sock_blacklist_add(struct hci_dev *hdev, void __user *arg)
++static int hci_sock_reject_list_add(struct hci_dev *hdev, void __user *arg)
+ {
+ 	bdaddr_t bdaddr;
+ 	int err;
+@@ -907,14 +907,14 @@ static int hci_sock_blacklist_add(struct hci_dev *hdev, void __user *arg)
+ 
+ 	hci_dev_lock(hdev);
+ 
+-	err = hci_bdaddr_list_add(&hdev->blacklist, &bdaddr, BDADDR_BREDR);
++	err = hci_bdaddr_list_add(&hdev->reject_list, &bdaddr, BDADDR_BREDR);
+ 
+ 	hci_dev_unlock(hdev);
+ 
+ 	return err;
+ }
+ 
+-static int hci_sock_blacklist_del(struct hci_dev *hdev, void __user *arg)
++static int hci_sock_reject_list_del(struct hci_dev *hdev, void __user *arg)
+ {
+ 	bdaddr_t bdaddr;
+ 	int err;
+@@ -924,7 +924,7 @@ static int hci_sock_blacklist_del(struct hci_dev *hdev, void __user *arg)
+ 
+ 	hci_dev_lock(hdev);
+ 
+-	err = hci_bdaddr_list_del(&hdev->blacklist, &bdaddr, BDADDR_BREDR);
++	err = hci_bdaddr_list_del(&hdev->reject_list, &bdaddr, BDADDR_BREDR);
+ 
+ 	hci_dev_unlock(hdev);
+ 
+@@ -964,12 +964,12 @@ static int hci_sock_bound_ioctl(struct sock *sk, unsigned int cmd,
+ 	case HCIBLOCKADDR:
+ 		if (!capable(CAP_NET_ADMIN))
+ 			return -EPERM;
+-		return hci_sock_blacklist_add(hdev, (void __user *)arg);
++		return hci_sock_reject_list_add(hdev, (void __user *)arg);
+ 
+ 	case HCIUNBLOCKADDR:
+ 		if (!capable(CAP_NET_ADMIN))
+ 			return -EPERM;
+-		return hci_sock_blacklist_del(hdev, (void __user *)arg);
++		return hci_sock_reject_list_del(hdev, (void __user *)arg);
+ 	}
+ 
+ 	return -ENOIOCTLCMD;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 012c1a0abda8c..2557cd917f5ed 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1688,8 +1688,8 @@ static void l2cap_le_conn_ready(struct l2cap_conn *conn)
+ 	if (hcon->out)
+ 		smp_conn_security(hcon, hcon->pending_sec_level);
+ 
+-	/* For LE slave connections, make sure the connection interval
+-	 * is in the range of the minium and maximum interval that has
++	/* For LE peripheral connections, make sure the connection interval
++	 * is in the range of the minimum and maximum interval that has
+ 	 * been configured for this connection. If not, then trigger
+ 	 * the connection update procedure.
+ 	 */
+@@ -7540,7 +7540,7 @@ static void l2cap_data_channel(struct l2cap_conn *conn, u16 cid,
+ 	BT_DBG("chan %p, len %d", chan, skb->len);
+ 
+ 	/* If we receive data on a fixed channel before the info req/rsp
+-	 * procdure is done simply assume that the channel is supported
++	 * procedure is done simply assume that the channel is supported
+ 	 * and mark it as ready.
+ 	 */
+ 	if (chan->chan_type == L2CAP_CHAN_FIXED)
+@@ -7652,7 +7652,7 @@ static void l2cap_recv_frame(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	 * at least ensure that we ignore incoming data from them.
+ 	 */
+ 	if (hcon->type == LE_LINK &&
+-	    hci_bdaddr_list_lookup(&hcon->hdev->blacklist, &hcon->dst,
++	    hci_bdaddr_list_lookup(&hcon->hdev->reject_list, &hcon->dst,
+ 				   bdaddr_dst_type(hcon))) {
+ 		kfree_skb(skb);
+ 		return;
+@@ -8108,7 +8108,7 @@ static void l2cap_connect_cfm(struct hci_conn *hcon, u8 status)
+ 	dst_type = bdaddr_dst_type(hcon);
+ 
+ 	/* If device is blocked, do not create channels for it */
+-	if (hci_bdaddr_list_lookup(&hdev->blacklist, &hcon->dst, dst_type))
++	if (hci_bdaddr_list_lookup(&hdev->reject_list, &hcon->dst, dst_type))
+ 		return;
+ 
+ 	/* Find fixed channels and notify them of the new connection. We
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 08f67f91d427f..878bf73822449 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -4041,7 +4041,7 @@ static int get_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ 	memset(&rp, 0, sizeof(rp));
+ 
+ 	if (cp->addr.type == BDADDR_BREDR) {
+-		br_params = hci_bdaddr_list_lookup_with_flags(&hdev->whitelist,
++		br_params = hci_bdaddr_list_lookup_with_flags(&hdev->accept_list,
+ 							      &cp->addr.bdaddr,
+ 							      cp->addr.type);
+ 		if (!br_params)
+@@ -4109,7 +4109,7 @@ static int set_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ 	hci_dev_lock(hdev);
+ 
+ 	if (cp->addr.type == BDADDR_BREDR) {
+-		br_params = hci_bdaddr_list_lookup_with_flags(&hdev->whitelist,
++		br_params = hci_bdaddr_list_lookup_with_flags(&hdev->accept_list,
+ 							      &cp->addr.bdaddr,
+ 							      cp->addr.type);
+ 
+@@ -4979,7 +4979,7 @@ static int block_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
+-	err = hci_bdaddr_list_add(&hdev->blacklist, &cp->addr.bdaddr,
++	err = hci_bdaddr_list_add(&hdev->reject_list, &cp->addr.bdaddr,
+ 				  cp->addr.type);
+ 	if (err < 0) {
+ 		status = MGMT_STATUS_FAILED;
+@@ -5015,7 +5015,7 @@ static int unblock_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
+-	err = hci_bdaddr_list_del(&hdev->blacklist, &cp->addr.bdaddr,
++	err = hci_bdaddr_list_del(&hdev->reject_list, &cp->addr.bdaddr,
+ 				  cp->addr.type);
+ 	if (err < 0) {
+ 		status = MGMT_STATUS_INVALID_PARAMS;
+@@ -6506,7 +6506,7 @@ static int add_device(struct sock *sk, struct hci_dev *hdev,
+ 			goto unlock;
+ 		}
+ 
+-		err = hci_bdaddr_list_add_with_flags(&hdev->whitelist,
++		err = hci_bdaddr_list_add_with_flags(&hdev->accept_list,
+ 						     &cp->addr.bdaddr,
+ 						     cp->addr.type, 0);
+ 		if (err)
+@@ -6604,7 +6604,7 @@ static int remove_device(struct sock *sk, struct hci_dev *hdev,
+ 		}
+ 
+ 		if (cp->addr.type == BDADDR_BREDR) {
+-			err = hci_bdaddr_list_del(&hdev->whitelist,
++			err = hci_bdaddr_list_del(&hdev->accept_list,
+ 						  &cp->addr.bdaddr,
+ 						  cp->addr.type);
+ 			if (err) {
+@@ -6675,7 +6675,7 @@ static int remove_device(struct sock *sk, struct hci_dev *hdev,
+ 			goto unlock;
+ 		}
+ 
+-		list_for_each_entry_safe(b, btmp, &hdev->whitelist, list) {
++		list_for_each_entry_safe(b, btmp, &hdev->accept_list, list) {
+ 			device_removed(sk, hdev, &b->bdaddr, b->bdaddr_type);
+ 			list_del(&b->list);
+ 			kfree(b);
+diff --git a/net/bluetooth/mgmt_config.c b/net/bluetooth/mgmt_config.c
+index b30b571f8caf8..2d3ad288c78ac 100644
+--- a/net/bluetooth/mgmt_config.c
++++ b/net/bluetooth/mgmt_config.c
+@@ -67,6 +67,8 @@ int read_def_system_config(struct sock *sk, struct hci_dev *hdev, void *data,
+ 		HDEV_PARAM_U16(0x001a, le_supv_timeout),
+ 		HDEV_PARAM_U16_JIFFIES_TO_MSECS(0x001b,
+ 						def_le_autoconnect_timeout),
++		HDEV_PARAM_U16(0x001d, advmon_allowlist_duration),
++		HDEV_PARAM_U16(0x001e, advmon_no_filter_duration),
+ 	};
+ 	struct mgmt_rp_read_def_system_config *rp = (void *)params;
+ 
+@@ -138,6 +140,8 @@ int set_def_system_config(struct sock *sk, struct hci_dev *hdev, void *data,
+ 		case 0x0019:
+ 		case 0x001a:
+ 		case 0x001b:
++		case 0x001d:
++		case 0x001e:
+ 			if (len != sizeof(u16)) {
+ 				bt_dev_warn(hdev, "invalid length %d, exp %zu for type %d",
+ 					    len, sizeof(u16), type);
+@@ -251,6 +255,12 @@ int set_def_system_config(struct sock *sk, struct hci_dev *hdev, void *data,
+ 			hdev->def_le_autoconnect_timeout =
+ 					msecs_to_jiffies(TLV_GET_LE16(buffer));
+ 			break;
++		case 0x0001d:
++			hdev->advmon_allowlist_duration = TLV_GET_LE16(buffer);
++			break;
++		case 0x0001e:
++			hdev->advmon_no_filter_duration = TLV_GET_LE16(buffer);
++			break;
+ 		default:
+ 			bt_dev_warn(hdev, "unsupported parameter %u", type);
+ 			break;
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 2f2b8ddc4dd5d..df254c7de2dd6 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -575,19 +575,24 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
+ 	    addr->sa_family != AF_BLUETOOTH)
+ 		return -EINVAL;
+ 
+-	if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND)
+-		return -EBADFD;
++	lock_sock(sk);
++	if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) {
++		err = -EBADFD;
++		goto done;
++	}
+ 
+-	if (sk->sk_type != SOCK_SEQPACKET)
+-		return -EINVAL;
++	if (sk->sk_type != SOCK_SEQPACKET) {
++		err = -EINVAL;
++		goto done;
++	}
+ 
+ 	hdev = hci_get_route(&sa->sco_bdaddr, &sco_pi(sk)->src, BDADDR_BREDR);
+-	if (!hdev)
+-		return -EHOSTUNREACH;
++	if (!hdev) {
++		err = -EHOSTUNREACH;
++		goto done;
++	}
+ 	hci_dev_lock(hdev);
+ 
+-	lock_sock(sk);
+-
+ 	/* Set destination address and psm */
+ 	bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr);
+ 
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 2b7879afc333b..b7374dbee23a3 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -909,8 +909,8 @@ static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth,
+ 			hcon->pending_sec_level = BT_SECURITY_HIGH;
+ 	}
+ 
+-	/* If both devices have Keyoard-Display I/O, the master
+-	 * Confirms and the slave Enters the passkey.
++	/* If both devices have Keyboard-Display I/O, the initiator
++	 * Confirms and the responder Enters the passkey.
+ 	 */
+ 	if (smp->method == OVERLAP) {
+ 		if (hcon->role == HCI_ROLE_MASTER)
+@@ -3076,7 +3076,7 @@ static void bredr_pairing(struct l2cap_chan *chan)
+ 	if (!test_bit(HCI_CONN_ENCRYPT, &hcon->flags))
+ 		return;
+ 
+-	/* Only master may initiate SMP over BR/EDR */
++	/* Only initiator may initiate SMP over BR/EDR */
+ 	if (hcon->role != HCI_ROLE_MASTER)
+ 		return;
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 0bab2aca07fd3..af52050b0f383 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3241,11 +3241,15 @@ int skb_checksum_help(struct sk_buff *skb)
+ 	}
+ 
+ 	offset = skb_checksum_start_offset(skb);
+-	BUG_ON(offset >= skb_headlen(skb));
++	ret = -EINVAL;
++	if (WARN_ON_ONCE(offset >= skb_headlen(skb)))
++		goto out;
++
+ 	csum = skb_checksum(skb, offset, skb->len - offset, 0);
+ 
+ 	offset += skb->csum_offset;
+-	BUG_ON(offset + sizeof(__sum16) > skb_headlen(skb));
++	if (WARN_ON_ONCE(offset + sizeof(__sum16) > skb_headlen(skb)))
++		goto out;
+ 
+ 	ret = skb_ensure_writable(skb, offset + sizeof(__sum16));
+ 	if (ret)
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index b0b6e6a4784e5..2455b0c0e4866 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -464,7 +464,7 @@ static struct dst_entry* dccp_v4_route_skb(struct net *net, struct sock *sk,
+ 		.fl4_dport = dccp_hdr(skb)->dccph_sport,
+ 	};
+ 
+-	security_skb_classify_flow(skb, flowi4_to_flowi(&fl4));
++	security_skb_classify_flow(skb, flowi4_to_flowi_common(&fl4));
+ 	rt = ip_route_output_flow(net, &fl4, sk);
+ 	if (IS_ERR(rt)) {
+ 		IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 49f4034bf1263..2be5c69824f94 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -203,7 +203,7 @@ static int dccp_v6_send_response(const struct sock *sk, struct request_sock *req
+ 	fl6.flowi6_oif = ireq->ir_iif;
+ 	fl6.fl6_dport = ireq->ir_rmt_port;
+ 	fl6.fl6_sport = htons(ireq->ir_num);
+-	security_req_classify_flow(req, flowi6_to_flowi(&fl6));
++	security_req_classify_flow(req, flowi6_to_flowi_common(&fl6));
+ 
+ 
+ 	rcu_read_lock();
+@@ -279,7 +279,7 @@ static void dccp_v6_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb)
+ 	fl6.flowi6_oif = inet6_iif(rxskb);
+ 	fl6.fl6_dport = dccp_hdr(skb)->dccph_dport;
+ 	fl6.fl6_sport = dccp_hdr(skb)->dccph_sport;
+-	security_skb_classify_flow(rxskb, flowi6_to_flowi(&fl6));
++	security_skb_classify_flow(rxskb, flowi6_to_flowi_common(&fl6));
+ 
+ 	/* sk = NULL, but it is safe for now. RST socket required. */
+ 	dst = ip6_dst_lookup_flow(sock_net(ctl_sk), ctl_sk, &fl6, NULL);
+@@ -912,7 +912,7 @@ static int dccp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ 	fl6.flowi6_oif = sk->sk_bound_dev_if;
+ 	fl6.fl6_dport = usin->sin6_port;
+ 	fl6.fl6_sport = inet->inet_sport;
+-	security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
++	security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6));
+ 
+ 	opt = rcu_dereference_protected(np->opt, lockdep_sock_is_held(sk));
+ 	final_p = fl6_update_dst(&fl6, opt, &final);
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index b71b836cc7d19..cd65d3146c300 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -447,7 +447,7 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
+ 	fl4.flowi4_tos = RT_TOS(ip_hdr(skb)->tos);
+ 	fl4.flowi4_proto = IPPROTO_ICMP;
+ 	fl4.flowi4_oif = l3mdev_master_ifindex(skb->dev);
+-	security_skb_classify_flow(skb, flowi4_to_flowi(&fl4));
++	security_skb_classify_flow(skb, flowi4_to_flowi_common(&fl4));
+ 	rt = ip_route_output_key(net, &fl4);
+ 	if (IS_ERR(rt))
+ 		goto out_unlock;
+@@ -503,7 +503,7 @@ static struct rtable *icmp_route_lookup(struct net *net,
+ 	route_lookup_dev = icmp_get_route_lookup_dev(skb_in);
+ 	fl4->flowi4_oif = l3mdev_master_ifindex(route_lookup_dev);
+ 
+-	security_skb_classify_flow(skb_in, flowi4_to_flowi(fl4));
++	security_skb_classify_flow(skb_in, flowi4_to_flowi_common(fl4));
+ 	rt = ip_route_output_key_hash(net, fl4, skb_in);
+ 	if (IS_ERR(rt))
+ 		return rt;
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index addd595bb3fe6..7785a4775e58a 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -602,7 +602,7 @@ struct dst_entry *inet_csk_route_req(const struct sock *sk,
+ 			   (opt && opt->opt.srr) ? opt->opt.faddr : ireq->ir_rmt_addr,
+ 			   ireq->ir_loc_addr, ireq->ir_rmt_port,
+ 			   htons(ireq->ir_num), sk->sk_uid);
+-	security_req_classify_flow(req, flowi4_to_flowi(fl4));
++	security_req_classify_flow(req, flowi4_to_flowi_common(fl4));
+ 	rt = ip_route_output_flow(net, fl4, sk);
+ 	if (IS_ERR(rt))
+ 		goto no_route;
+@@ -640,7 +640,7 @@ struct dst_entry *inet_csk_route_child_sock(const struct sock *sk,
+ 			   (opt && opt->opt.srr) ? opt->opt.faddr : ireq->ir_rmt_addr,
+ 			   ireq->ir_loc_addr, ireq->ir_rmt_port,
+ 			   htons(ireq->ir_num), sk->sk_uid);
+-	security_req_classify_flow(req, flowi4_to_flowi(fl4));
++	security_req_classify_flow(req, flowi4_to_flowi_common(fl4));
+ 	rt = ip_route_output_flow(net, fl4, sk);
+ 	if (IS_ERR(rt))
+ 		goto no_route;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 5e48b3d3a00db..f77b0af3cb657 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1712,7 +1712,7 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
+ 			   daddr, saddr,
+ 			   tcp_hdr(skb)->source, tcp_hdr(skb)->dest,
+ 			   arg->uid);
+-	security_skb_classify_flow(skb, flowi4_to_flowi(&fl4));
++	security_skb_classify_flow(skb, flowi4_to_flowi_common(&fl4));
+ 	rt = ip_route_output_key(net, &fl4);
+ 	if (IS_ERR(rt))
+ 		return;
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 2853a3f0fc632..1bad851b3fc35 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -796,7 +796,7 @@ static int ping_v4_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	fl4.fl4_icmp_type = user_icmph.type;
+ 	fl4.fl4_icmp_code = user_icmph.code;
+ 
+-	security_sk_classify_flow(sk, flowi4_to_flowi(&fl4));
++	security_sk_classify_flow(sk, flowi4_to_flowi_common(&fl4));
+ 	rt = ip_route_output_flow(net, &fl4, sk);
+ 	if (IS_ERR(rt)) {
+ 		err = PTR_ERR(rt);
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index 5d95f80314f95..4899ebe569eb6 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -640,7 +640,7 @@ static int raw_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 			goto done;
+ 	}
+ 
+-	security_sk_classify_flow(sk, flowi4_to_flowi(&fl4));
++	security_sk_classify_flow(sk, flowi4_to_flowi_common(&fl4));
+ 	rt = ip_route_output_flow(net, &fl4, sk);
+ 	if (IS_ERR(rt)) {
+ 		err = PTR_ERR(rt);
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index 0b616094e7947..10b469aee4920 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -424,7 +424,7 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb)
+ 			   inet_sk_flowi_flags(sk),
+ 			   opt->srr ? opt->faddr : ireq->ir_rmt_addr,
+ 			   ireq->ir_loc_addr, th->source, th->dest, sk->sk_uid);
+-	security_req_classify_flow(req, flowi4_to_flowi(&fl4));
++	security_req_classify_flow(req, flowi4_to_flowi_common(&fl4));
+ 	rt = ip_route_output_key(sock_net(sk), &fl4);
+ 	if (IS_ERR(rt)) {
+ 		reqsk_free(req);
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index e97a2dd206e14..6056d5609167c 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1204,7 +1204,7 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 				   faddr, saddr, dport, inet->inet_sport,
+ 				   sk->sk_uid);
+ 
+-		security_sk_classify_flow(sk, flowi4_to_flowi(fl4));
++		security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4));
+ 		rt = ip_route_output_flow(net, fl4, sk);
+ 		if (IS_ERR(rt)) {
+ 			err = PTR_ERR(rt);
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 86bcb18256982..0562fb321959e 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -789,6 +789,7 @@ static void dev_forward_change(struct inet6_dev *idev)
+ {
+ 	struct net_device *dev;
+ 	struct inet6_ifaddr *ifa;
++	LIST_HEAD(tmp_addr_list);
+ 
+ 	if (!idev)
+ 		return;
+@@ -807,14 +808,24 @@ static void dev_forward_change(struct inet6_dev *idev)
+ 		}
+ 	}
+ 
++	read_lock_bh(&idev->lock);
+ 	list_for_each_entry(ifa, &idev->addr_list, if_list) {
+ 		if (ifa->flags&IFA_F_TENTATIVE)
+ 			continue;
++		list_add_tail(&ifa->if_list_aux, &tmp_addr_list);
++	}
++	read_unlock_bh(&idev->lock);
++
++	while (!list_empty(&tmp_addr_list)) {
++		ifa = list_first_entry(&tmp_addr_list,
++				       struct inet6_ifaddr, if_list_aux);
++		list_del(&ifa->if_list_aux);
+ 		if (idev->cnf.forwarding)
+ 			addrconf_join_anycast(ifa);
+ 		else
+ 			addrconf_leave_anycast(ifa);
+ 	}
++
+ 	inet6_netconf_notify_devconf(dev_net(dev), RTM_NEWNETCONF,
+ 				     NETCONFA_FORWARDING,
+ 				     dev->ifindex, &idev->cnf);
+@@ -3710,7 +3721,8 @@ static int addrconf_ifdown(struct net_device *dev, bool unregister)
+ 	unsigned long event = unregister ? NETDEV_UNREGISTER : NETDEV_DOWN;
+ 	struct net *net = dev_net(dev);
+ 	struct inet6_dev *idev;
+-	struct inet6_ifaddr *ifa, *tmp;
++	struct inet6_ifaddr *ifa;
++	LIST_HEAD(tmp_addr_list);
+ 	bool keep_addr = false;
+ 	bool was_ready;
+ 	int state, i;
+@@ -3802,16 +3814,23 @@ restart:
+ 		write_lock_bh(&idev->lock);
+ 	}
+ 
+-	list_for_each_entry_safe(ifa, tmp, &idev->addr_list, if_list) {
++	list_for_each_entry(ifa, &idev->addr_list, if_list)
++		list_add_tail(&ifa->if_list_aux, &tmp_addr_list);
++	write_unlock_bh(&idev->lock);
++
++	while (!list_empty(&tmp_addr_list)) {
+ 		struct fib6_info *rt = NULL;
+ 		bool keep;
+ 
++		ifa = list_first_entry(&tmp_addr_list,
++				       struct inet6_ifaddr, if_list_aux);
++		list_del(&ifa->if_list_aux);
++
+ 		addrconf_del_dad_work(ifa);
+ 
+ 		keep = keep_addr && (ifa->flags & IFA_F_PERMANENT) &&
+ 			!addr_is_local(&ifa->addr);
+ 
+-		write_unlock_bh(&idev->lock);
+ 		spin_lock_bh(&ifa->lock);
+ 
+ 		if (keep) {
+@@ -3842,15 +3861,14 @@ restart:
+ 			addrconf_leave_solict(ifa->idev, &ifa->addr);
+ 		}
+ 
+-		write_lock_bh(&idev->lock);
+ 		if (!keep) {
++			write_lock_bh(&idev->lock);
+ 			list_del_rcu(&ifa->if_list);
++			write_unlock_bh(&idev->lock);
+ 			in6_ifa_put(ifa);
+ 		}
+ 	}
+ 
+-	write_unlock_bh(&idev->lock);
+-
+ 	/* Step 5: Discard anycast and multicast list */
+ 	if (unregister) {
+ 		ipv6_ac_destroy_dev(idev);
+@@ -4182,7 +4200,8 @@ static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id,
+ 	send_rs = send_mld &&
+ 		  ipv6_accept_ra(ifp->idev) &&
+ 		  ifp->idev->cnf.rtr_solicits != 0 &&
+-		  (dev->flags&IFF_LOOPBACK) == 0;
++		  (dev->flags & IFF_LOOPBACK) == 0 &&
++		  (dev->type != ARPHRD_TUNNEL);
+ 	read_unlock_bh(&ifp->idev->lock);
+ 
+ 	/* While dad is in progress mld report's source address is in6_addrany.
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index 090575346daf6..890a9cfc6ce27 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -819,7 +819,7 @@ int inet6_sk_rebuild_header(struct sock *sk)
+ 		fl6.fl6_dport = inet->inet_dport;
+ 		fl6.fl6_sport = inet->inet_sport;
+ 		fl6.flowi6_uid = sk->sk_uid;
+-		security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
++		security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6));
+ 
+ 		rcu_read_lock();
+ 		final_p = fl6_update_dst(&fl6, rcu_dereference(np->opt),
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index cc8ad7ddecdaa..206f66310a88d 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -60,7 +60,7 @@ static void ip6_datagram_flow_key_init(struct flowi6 *fl6, struct sock *sk)
+ 	if (!fl6->flowi6_oif && ipv6_addr_is_multicast(&fl6->daddr))
+ 		fl6->flowi6_oif = np->mcast_oif;
+ 
+-	security_sk_classify_flow(sk, flowi6_to_flowi(fl6));
++	security_sk_classify_flow(sk, flowi6_to_flowi_common(fl6));
+ }
+ 
+ int ip6_datagram_dst_update(struct sock *sk, bool fix_sk_saddr)
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index cbab41d557b20..fd1f896115c1e 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -573,7 +573,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ 	fl6.fl6_icmp_code = code;
+ 	fl6.flowi6_uid = sock_net_uid(net, NULL);
+ 	fl6.mp_hash = rt6_multipath_hash(net, &fl6, skb, NULL);
+-	security_skb_classify_flow(skb, flowi6_to_flowi(&fl6));
++	security_skb_classify_flow(skb, flowi6_to_flowi_common(&fl6));
+ 
+ 	np = inet6_sk(sk);
+ 
+@@ -755,7 +755,7 @@ static void icmpv6_echo_reply(struct sk_buff *skb)
+ 	fl6.fl6_icmp_type = ICMPV6_ECHO_REPLY;
+ 	fl6.flowi6_mark = mark;
+ 	fl6.flowi6_uid = sock_net_uid(net, NULL);
+-	security_skb_classify_flow(skb, flowi6_to_flowi(&fl6));
++	security_skb_classify_flow(skb, flowi6_to_flowi_common(&fl6));
+ 
+ 	local_bh_disable();
+ 	sk = icmpv6_xmit_lock(net);
+@@ -1008,7 +1008,7 @@ void icmpv6_flow_init(struct sock *sk, struct flowi6 *fl6,
+ 	fl6->fl6_icmp_type	= type;
+ 	fl6->fl6_icmp_code	= 0;
+ 	fl6->flowi6_oif		= oif;
+-	security_sk_classify_flow(sk, flowi6_to_flowi(fl6));
++	security_sk_classify_flow(sk, flowi6_to_flowi_common(fl6));
+ }
+ 
+ static void __net_exit icmpv6_sk_exit(struct net *net)
+diff --git a/net/ipv6/inet6_connection_sock.c b/net/ipv6/inet6_connection_sock.c
+index e315526fa244a..5a9f4d722f35d 100644
+--- a/net/ipv6/inet6_connection_sock.c
++++ b/net/ipv6/inet6_connection_sock.c
+@@ -46,7 +46,7 @@ struct dst_entry *inet6_csk_route_req(const struct sock *sk,
+ 	fl6->fl6_dport = ireq->ir_rmt_port;
+ 	fl6->fl6_sport = htons(ireq->ir_num);
+ 	fl6->flowi6_uid = sk->sk_uid;
+-	security_req_classify_flow(req, flowi6_to_flowi(fl6));
++	security_req_classify_flow(req, flowi6_to_flowi_common(fl6));
+ 
+ 	dst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_p);
+ 	if (IS_ERR(dst))
+@@ -95,7 +95,7 @@ static struct dst_entry *inet6_csk_route_socket(struct sock *sk,
+ 	fl6->fl6_sport = inet->inet_sport;
+ 	fl6->fl6_dport = inet->inet_dport;
+ 	fl6->flowi6_uid = sk->sk_uid;
+-	security_sk_classify_flow(sk, flowi6_to_flowi(fl6));
++	security_sk_classify_flow(sk, flowi6_to_flowi_common(fl6));
+ 
+ 	rcu_read_lock();
+ 	final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final);
+diff --git a/net/ipv6/netfilter/nf_reject_ipv6.c b/net/ipv6/netfilter/nf_reject_ipv6.c
+index 4aef6baaa55eb..bf95513736c92 100644
+--- a/net/ipv6/netfilter/nf_reject_ipv6.c
++++ b/net/ipv6/netfilter/nf_reject_ipv6.c
+@@ -179,7 +179,7 @@ void nf_send_reset6(struct net *net, struct sk_buff *oldskb, int hook)
+ 
+ 	fl6.flowi6_oif = l3mdev_master_ifindex(skb_dst(oldskb)->dev);
+ 	fl6.flowi6_mark = IP6_REPLY_MARK(net, oldskb->mark);
+-	security_skb_classify_flow(oldskb, flowi6_to_flowi(&fl6));
++	security_skb_classify_flow(oldskb, flowi6_to_flowi_common(&fl6));
+ 	dst = ip6_route_output(net, NULL, &fl6);
+ 	if (dst->error) {
+ 		dst_release(dst);
+diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c
+index 6caa062f68e72..6ac88fe24a8e0 100644
+--- a/net/ipv6/ping.c
++++ b/net/ipv6/ping.c
+@@ -111,7 +111,7 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	fl6.flowi6_uid = sk->sk_uid;
+ 	fl6.fl6_icmp_type = user_icmph.icmp6_type;
+ 	fl6.fl6_icmp_code = user_icmph.icmp6_code;
+-	security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
++	security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6));
+ 
+ 	ipcm6_init_sk(&ipc6, np);
+ 	ipc6.sockc.mark = sk->sk_mark;
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 38349054e361e..31eb54e92b3f9 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -915,7 +915,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 		fl6.flowi6_oif = np->mcast_oif;
+ 	else if (!fl6.flowi6_oif)
+ 		fl6.flowi6_oif = np->ucast_oif;
+-	security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
++	security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6));
+ 
+ 	if (hdrincl)
+ 		fl6.flowi6_flags |= FLOWI_FLAG_KNOWN_NH;
+diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
+index 5fa791cf39ca9..ca92dd6981dea 100644
+--- a/net/ipv6/syncookies.c
++++ b/net/ipv6/syncookies.c
+@@ -234,7 +234,7 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
+ 		fl6.fl6_dport = ireq->ir_rmt_port;
+ 		fl6.fl6_sport = inet_sk(sk)->inet_sport;
+ 		fl6.flowi6_uid = sk->sk_uid;
+-		security_req_classify_flow(req, flowi6_to_flowi(&fl6));
++		security_req_classify_flow(req, flowi6_to_flowi_common(&fl6));
+ 
+ 		dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ 		if (IS_ERR(dst))
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index df33145b876c6..303b54414a6cc 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -278,7 +278,7 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ 	opt = rcu_dereference_protected(np->opt, lockdep_sock_is_held(sk));
+ 	final_p = fl6_update_dst(&fl6, opt, &final);
+ 
+-	security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
++	security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6));
+ 
+ 	dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ 	if (IS_ERR(dst)) {
+@@ -975,7 +975,7 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32
+ 	fl6.fl6_dport = t1->dest;
+ 	fl6.fl6_sport = t1->source;
+ 	fl6.flowi6_uid = sock_net_uid(net, sk && sk_fullsock(sk) ? sk : NULL);
+-	security_skb_classify_flow(skb, flowi6_to_flowi(&fl6));
++	security_skb_classify_flow(skb, flowi6_to_flowi_common(&fl6));
+ 
+ 	/* Pass a socket to ip6_dst_lookup either it is for RST
+ 	 * Underlying function will use this to retrieve the network
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 10760164a80f4..7745d8a402091 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1497,7 +1497,7 @@ do_udp_sendmsg:
+ 	} else if (!fl6.flowi6_oif)
+ 		fl6.flowi6_oif = np->ucast_oif;
+ 
+-	security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
++	security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6));
+ 
+ 	if (ipc6.tclass < 0)
+ 		ipc6.tclass = np->tclass;
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index e5e5036257b05..96f975777438f 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -606,7 +606,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	else if (!fl6.flowi6_oif)
+ 		fl6.flowi6_oif = np->ucast_oif;
+ 
+-	security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
++	security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6));
+ 
+ 	if (ipc6.tclass < 0)
+ 		ipc6.tclass = np->tclass;
+diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
+index 8f48aff74c7ba..5639a71257e41 100644
+--- a/net/mac80211/chan.c
++++ b/net/mac80211/chan.c
+@@ -1652,12 +1652,9 @@ int ieee80211_vif_use_reserved_context(struct ieee80211_sub_if_data *sdata)
+ 
+ 	if (new_ctx->replace_state == IEEE80211_CHANCTX_REPLACE_NONE) {
+ 		if (old_ctx)
+-			err = ieee80211_vif_use_reserved_reassign(sdata);
+-		else
+-			err = ieee80211_vif_use_reserved_assign(sdata);
++			return ieee80211_vif_use_reserved_reassign(sdata);
+ 
+-		if (err)
+-			return err;
++		return ieee80211_vif_use_reserved_assign(sdata);
+ 	}
+ 
+ 	/*
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index fe8f586886b41..bcc94cc1b6201 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1103,6 +1103,9 @@ struct tpt_led_trigger {
+  *	a scan complete for an aborted scan.
+  * @SCAN_HW_CANCELLED: Set for our scan work function when the scan is being
+  *	cancelled.
++ * @SCAN_BEACON_WAIT: Set whenever we're passive scanning because of radar/no-IR
++ *	and could send a probe request after receiving a beacon.
++ * @SCAN_BEACON_DONE: Beacon received, we can now send a probe request
+  */
+ enum {
+ 	SCAN_SW_SCANNING,
+@@ -1111,6 +1114,8 @@ enum {
+ 	SCAN_COMPLETED,
+ 	SCAN_ABORTED,
+ 	SCAN_HW_CANCELLED,
++	SCAN_BEACON_WAIT,
++	SCAN_BEACON_DONE,
+ };
+ 
+ /**
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index 6b50cb5e0e3cc..887f945bb12d4 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -277,6 +277,16 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
+ 	if (likely(!sdata1 && !sdata2))
+ 		return;
+ 
++	if (test_and_clear_bit(SCAN_BEACON_WAIT, &local->scanning)) {
++		/*
++		 * we were passive scanning because of radar/no-IR, but
++		 * the beacon/proberesp rx gives us an opportunity to upgrade
++		 * to active scan
++		 */
++		 set_bit(SCAN_BEACON_DONE, &local->scanning);
++		 ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);
++	}
++
+ 	if (ieee80211_is_probe_resp(mgmt->frame_control)) {
+ 		struct cfg80211_scan_request *scan_req;
+ 		struct cfg80211_sched_scan_request *sched_scan_req;
+@@ -783,6 +793,8 @@ static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata,
+ 						IEEE80211_CHAN_RADAR)) ||
+ 		    !req->n_ssids) {
+ 			next_delay = IEEE80211_PASSIVE_CHANNEL_TIME;
++			if (req->n_ssids)
++				set_bit(SCAN_BEACON_WAIT, &local->scanning);
+ 		} else {
+ 			ieee80211_scan_state_send_probe(local, &next_delay);
+ 			next_delay = IEEE80211_CHANNEL_TIME;
+@@ -994,6 +1006,8 @@ set_channel:
+ 	    !scan_req->n_ssids) {
+ 		*next_delay = IEEE80211_PASSIVE_CHANNEL_TIME;
+ 		local->next_scan_state = SCAN_DECISION;
++		if (scan_req->n_ssids)
++			set_bit(SCAN_BEACON_WAIT, &local->scanning);
+ 		return;
+ 	}
+ 
+@@ -1086,6 +1100,8 @@ void ieee80211_scan_work(struct work_struct *work)
+ 			goto out;
+ 	}
+ 
++	clear_bit(SCAN_BEACON_WAIT, &local->scanning);
++
+ 	/*
+ 	 * as long as no delay is required advance immediately
+ 	 * without scheduling a new work
+@@ -1096,6 +1112,10 @@ void ieee80211_scan_work(struct work_struct *work)
+ 			goto out_complete;
+ 		}
+ 
++		if (test_and_clear_bit(SCAN_BEACON_DONE, &local->scanning) &&
++		    local->next_scan_state == SCAN_DECISION)
++			local->next_scan_state = SCAN_SEND_PROBE;
++
+ 		switch (local->next_scan_state) {
+ 		case SCAN_DECISION:
+ 			/* if no more bands/channels left, complete scan */
+diff --git a/net/netfilter/nf_synproxy_core.c b/net/netfilter/nf_synproxy_core.c
+index 2fc4ae960769d..3d6d49420db8b 100644
+--- a/net/netfilter/nf_synproxy_core.c
++++ b/net/netfilter/nf_synproxy_core.c
+@@ -854,7 +854,7 @@ synproxy_send_tcp_ipv6(struct net *net,
+ 	fl6.fl6_sport = nth->source;
+ 	fl6.fl6_dport = nth->dest;
+ 	security_skb_classify_flow((struct sk_buff *)skb,
+-				   flowi6_to_flowi(&fl6));
++				   flowi6_to_flowi_common(&fl6));
+ 	err = nf_ip6_route(net, &dst, flowi6_to_flowi(&fl6), false);
+ 	if (err) {
+ 		goto free_nskb;
+diff --git a/net/nfc/core.c b/net/nfc/core.c
+index 3b2983813ff13..2ef56366bd5fe 100644
+--- a/net/nfc/core.c
++++ b/net/nfc/core.c
+@@ -1158,6 +1158,7 @@ void nfc_unregister_device(struct nfc_dev *dev)
+ 	if (dev->rfkill) {
+ 		rfkill_unregister(dev->rfkill);
+ 		rfkill_destroy(dev->rfkill);
++		dev->rfkill = NULL;
+ 	}
+ 	dev->shutting_down = true;
+ 	device_unlock(&dev->dev);
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 3bad9f5f91023..ccb65412b6704 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -659,13 +659,12 @@ struct rxrpc_call {
+ 
+ 	spinlock_t		input_lock;	/* Lock for packet input to this call */
+ 
+-	/* receive-phase ACK management */
++	/* Receive-phase ACK management (ACKs we send). */
+ 	u8			ackr_reason;	/* reason to ACK */
+ 	rxrpc_serial_t		ackr_serial;	/* serial of packet being ACK'd */
+-	rxrpc_serial_t		ackr_first_seq;	/* first sequence number received */
+-	rxrpc_seq_t		ackr_prev_seq;	/* previous sequence number received */
+-	rxrpc_seq_t		ackr_consumed;	/* Highest packet shown consumed */
+-	rxrpc_seq_t		ackr_seen;	/* Highest packet shown seen */
++	rxrpc_seq_t		ackr_highest_seq; /* Higest sequence number received */
++	atomic_t		ackr_nr_unacked; /* Number of unacked packets */
++	atomic_t		ackr_nr_consumed; /* Number of packets needing hard ACK */
+ 
+ 	/* RTT management */
+ 	rxrpc_serial_t		rtt_serial[4];	/* Serial number of DATA or PING sent */
+@@ -675,8 +674,10 @@ struct rxrpc_call {
+ #define RXRPC_CALL_RTT_AVAIL_MASK	0xf
+ #define RXRPC_CALL_RTT_PEND_SHIFT	8
+ 
+-	/* transmission-phase ACK management */
++	/* Transmission-phase ACK management (ACKs we've received). */
+ 	ktime_t			acks_latest_ts;	/* Timestamp of latest ACK received */
++	rxrpc_seq_t		acks_first_seq;	/* first sequence number received */
++	rxrpc_seq_t		acks_prev_seq;	/* Highest previousPacket received */
+ 	rxrpc_seq_t		acks_lowest_nak; /* Lowest NACK in the buffer (or ==tx_hard_ack) */
+ 	rxrpc_seq_t		acks_lost_top;	/* tx_top at the time lost-ack ping sent */
+ 	rxrpc_serial_t		acks_lost_ping;	/* Serial number of probe ACK */
+diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
+index 22e05de5d1ca9..f8ecad2b730e8 100644
+--- a/net/rxrpc/call_event.c
++++ b/net/rxrpc/call_event.c
+@@ -377,9 +377,9 @@ recheck_state:
+ 		if (test_bit(RXRPC_CALL_RX_HEARD, &call->flags) &&
+ 		    (int)call->conn->hi_serial - (int)call->rx_serial > 0) {
+ 			trace_rxrpc_call_reset(call);
+-			rxrpc_abort_call("EXP", call, 0, RX_USER_ABORT, -ECONNRESET);
++			rxrpc_abort_call("EXP", call, 0, RX_CALL_DEAD, -ECONNRESET);
+ 		} else {
+-			rxrpc_abort_call("EXP", call, 0, RX_USER_ABORT, -ETIME);
++			rxrpc_abort_call("EXP", call, 0, RX_CALL_TIMEOUT, -ETIME);
+ 		}
+ 		set_bit(RXRPC_CALL_EV_ABORT, &call->events);
+ 		goto recheck_state;
+@@ -406,7 +406,8 @@ recheck_state:
+ 		goto recheck_state;
+ 	}
+ 
+-	if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events)) {
++	if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events) &&
++	    call->state != RXRPC_CALL_CLIENT_RECV_REPLY) {
+ 		rxrpc_resend(call, now);
+ 		goto recheck_state;
+ 	}
+diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
+index 3bcbe0665f915..3ef05a0e90ad0 100644
+--- a/net/rxrpc/conn_object.c
++++ b/net/rxrpc/conn_object.c
+@@ -184,7 +184,7 @@ void __rxrpc_disconnect_call(struct rxrpc_connection *conn,
+ 			chan->last_type = RXRPC_PACKET_TYPE_ABORT;
+ 			break;
+ 		default:
+-			chan->last_abort = RX_USER_ABORT;
++			chan->last_abort = RX_CALL_DEAD;
+ 			chan->last_type = RXRPC_PACKET_TYPE_ABORT;
+ 			break;
+ 		}
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index dc201363f2c48..1145cb14d86f8 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -412,8 +412,8 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ {
+ 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+ 	enum rxrpc_call_state state;
+-	unsigned int j, nr_subpackets;
+-	rxrpc_serial_t serial = sp->hdr.serial, ack_serial = 0;
++	unsigned int j, nr_subpackets, nr_unacked = 0;
++	rxrpc_serial_t serial = sp->hdr.serial, ack_serial = serial;
+ 	rxrpc_seq_t seq0 = sp->hdr.seq, hard_ack;
+ 	bool immediate_ack = false, jumbo_bad = false;
+ 	u8 ack = 0;
+@@ -453,7 +453,6 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ 	    !rxrpc_receiving_reply(call))
+ 		goto unlock;
+ 
+-	call->ackr_prev_seq = seq0;
+ 	hard_ack = READ_ONCE(call->rx_hard_ack);
+ 
+ 	nr_subpackets = sp->nr_subpackets;
+@@ -534,6 +533,9 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ 			ack_serial = serial;
+ 		}
+ 
++		if (after(seq0, call->ackr_highest_seq))
++			call->ackr_highest_seq = seq0;
++
+ 		/* Queue the packet.  We use a couple of memory barriers here as need
+ 		 * to make sure that rx_top is perceived to be set after the buffer
+ 		 * pointer and that the buffer pointer is set after the annotation and
+@@ -567,6 +569,8 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ 			sp = NULL;
+ 		}
+ 
++		nr_unacked++;
++
+ 		if (last) {
+ 			set_bit(RXRPC_CALL_RX_LAST, &call->flags);
+ 			if (!ack) {
+@@ -586,9 +590,14 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ 			}
+ 			call->rx_expect_next = seq + 1;
+ 		}
++		if (!ack)
++			ack_serial = serial;
+ 	}
+ 
+ ack:
++	if (atomic_add_return(nr_unacked, &call->ackr_nr_unacked) > 2 && !ack)
++		ack = RXRPC_ACK_IDLE;
++
+ 	if (ack)
+ 		rxrpc_propose_ACK(call, ack, ack_serial,
+ 				  immediate_ack, true,
+@@ -812,7 +821,7 @@ static void rxrpc_input_soft_acks(struct rxrpc_call *call, u8 *acks,
+ static bool rxrpc_is_ack_valid(struct rxrpc_call *call,
+ 			       rxrpc_seq_t first_pkt, rxrpc_seq_t prev_pkt)
+ {
+-	rxrpc_seq_t base = READ_ONCE(call->ackr_first_seq);
++	rxrpc_seq_t base = READ_ONCE(call->acks_first_seq);
+ 
+ 	if (after(first_pkt, base))
+ 		return true; /* The window advanced */
+@@ -820,7 +829,7 @@ static bool rxrpc_is_ack_valid(struct rxrpc_call *call,
+ 	if (before(first_pkt, base))
+ 		return false; /* firstPacket regressed */
+ 
+-	if (after_eq(prev_pkt, call->ackr_prev_seq))
++	if (after_eq(prev_pkt, call->acks_prev_seq))
+ 		return true; /* previousPacket hasn't regressed. */
+ 
+ 	/* Some rx implementations put a serial number in previousPacket. */
+@@ -906,8 +915,8 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ 	/* Discard any out-of-order or duplicate ACKs (outside lock). */
+ 	if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
+ 		trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial,
+-					   first_soft_ack, call->ackr_first_seq,
+-					   prev_pkt, call->ackr_prev_seq);
++					   first_soft_ack, call->acks_first_seq,
++					   prev_pkt, call->acks_prev_seq);
+ 		return;
+ 	}
+ 
+@@ -922,14 +931,14 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ 	/* Discard any out-of-order or duplicate ACKs (inside lock). */
+ 	if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
+ 		trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial,
+-					   first_soft_ack, call->ackr_first_seq,
+-					   prev_pkt, call->ackr_prev_seq);
++					   first_soft_ack, call->acks_first_seq,
++					   prev_pkt, call->acks_prev_seq);
+ 		goto out;
+ 	}
+ 	call->acks_latest_ts = skb->tstamp;
+ 
+-	call->ackr_first_seq = first_soft_ack;
+-	call->ackr_prev_seq = prev_pkt;
++	call->acks_first_seq = first_soft_ack;
++	call->acks_prev_seq = prev_pkt;
+ 
+ 	/* Parse rwind and mtu sizes if provided. */
+ 	if (buf.info.rxMTU)
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index a45c83f22236e..9683617db7049 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -74,11 +74,18 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn,
+ 				 u8 reason)
+ {
+ 	rxrpc_serial_t serial;
++	unsigned int tmp;
+ 	rxrpc_seq_t hard_ack, top, seq;
+ 	int ix;
+ 	u32 mtu, jmax;
+ 	u8 *ackp = pkt->acks;
+ 
++	tmp = atomic_xchg(&call->ackr_nr_unacked, 0);
++	tmp |= atomic_xchg(&call->ackr_nr_consumed, 0);
++	if (!tmp && (reason == RXRPC_ACK_DELAY ||
++		     reason == RXRPC_ACK_IDLE))
++		return 0;
++
+ 	/* Barrier against rxrpc_input_data(). */
+ 	serial = call->ackr_serial;
+ 	hard_ack = READ_ONCE(call->rx_hard_ack);
+@@ -89,7 +96,7 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn,
+ 	pkt->ack.bufferSpace	= htons(8);
+ 	pkt->ack.maxSkew	= htons(0);
+ 	pkt->ack.firstPacket	= htonl(hard_ack + 1);
+-	pkt->ack.previousPacket	= htonl(call->ackr_prev_seq);
++	pkt->ack.previousPacket	= htonl(call->ackr_highest_seq);
+ 	pkt->ack.serial		= htonl(serial);
+ 	pkt->ack.reason		= reason;
+ 	pkt->ack.nAcks		= top - hard_ack;
+@@ -223,6 +230,10 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 	n = rxrpc_fill_out_ack(conn, call, pkt, &hard_ack, &top, reason);
+ 
+ 	spin_unlock_bh(&call->lock);
++	if (n == 0) {
++		kfree(pkt);
++		return 0;
++	}
+ 
+ 	iov[0].iov_base	= pkt;
+ 	iov[0].iov_len	= sizeof(pkt->whdr) + sizeof(pkt->ack) + n;
+@@ -259,13 +270,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 					  ntohl(pkt->ack.serial),
+ 					  false, true,
+ 					  rxrpc_propose_ack_retry_tx);
+-		} else {
+-			spin_lock_bh(&call->lock);
+-			if (after(hard_ack, call->ackr_consumed))
+-				call->ackr_consumed = hard_ack;
+-			if (after(top, call->ackr_seen))
+-				call->ackr_seen = top;
+-			spin_unlock_bh(&call->lock);
+ 		}
+ 
+ 		rxrpc_set_keepalive(call);
+diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
+index 2c842851d72e5..7878267739378 100644
+--- a/net/rxrpc/recvmsg.c
++++ b/net/rxrpc/recvmsg.c
+@@ -260,11 +260,9 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call)
+ 		rxrpc_end_rx_phase(call, serial);
+ 	} else {
+ 		/* Check to see if there's an ACK that needs sending. */
+-		if (after_eq(hard_ack, call->ackr_consumed + 2) ||
+-		    after_eq(top, call->ackr_seen + 2) ||
+-		    (hard_ack == top && after(hard_ack, call->ackr_consumed)))
+-			rxrpc_propose_ACK(call, RXRPC_ACK_DELAY, serial,
+-					  true, true,
++		if (atomic_inc_return(&call->ackr_nr_consumed) > 2)
++			rxrpc_propose_ACK(call, RXRPC_ACK_IDLE, serial,
++					  true, false,
+ 					  rxrpc_propose_ack_rotate_rx);
+ 		if (call->ackr_reason && call->ackr_reason != RXRPC_ACK_DELAY)
+ 			rxrpc_send_ack_packet(call, false, NULL);
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index d27140c836cce..aa23ba4e25662 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -461,6 +461,12 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
+ 
+ success:
+ 	ret = copied;
++	if (READ_ONCE(call->state) == RXRPC_CALL_COMPLETE) {
++		read_lock_bh(&call->state_lock);
++		if (call->error < 0)
++			ret = call->error;
++		read_unlock_bh(&call->state_lock);
++	}
+ out:
+ 	call->tx_pending = skb;
+ 	_leave(" = %d", ret);
+diff --git a/net/rxrpc/sysctl.c b/net/rxrpc/sysctl.c
+index 540351d6a5f47..555e0910786bc 100644
+--- a/net/rxrpc/sysctl.c
++++ b/net/rxrpc/sysctl.c
+@@ -12,7 +12,7 @@
+ 
+ static struct ctl_table_header *rxrpc_sysctl_reg_table;
+ static const unsigned int four = 4;
+-static const unsigned int thirtytwo = 32;
++static const unsigned int max_backlog = RXRPC_BACKLOG_MAX - 1;
+ static const unsigned int n_65535 = 65535;
+ static const unsigned int n_max_acks = RXRPC_RXTX_BUFF_SIZE - 1;
+ static const unsigned long one_jiffy = 1;
+@@ -89,7 +89,7 @@ static struct ctl_table rxrpc_sysctl_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= (void *)&four,
+-		.extra2		= (void *)&thirtytwo,
++		.extra2		= (void *)&max_backlog,
+ 	},
+ 	{
+ 		.procname	= "rx_window_size",
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 34494a0b28bd0..8f3aab6a4458b 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -92,6 +92,7 @@ int sctp_rcv(struct sk_buff *skb)
+ 	struct sctp_chunk *chunk;
+ 	union sctp_addr src;
+ 	union sctp_addr dest;
++	int bound_dev_if;
+ 	int family;
+ 	struct sctp_af *af;
+ 	struct net *net = dev_net(skb->dev);
+@@ -169,7 +170,8 @@ int sctp_rcv(struct sk_buff *skb)
+ 	 * If a frame arrives on an interface and the receiving socket is
+ 	 * bound to another interface, via SO_BINDTODEVICE, treat it as OOTB
+ 	 */
+-	if (sk->sk_bound_dev_if && (sk->sk_bound_dev_if != af->skb_iif(skb))) {
++	bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
++	if (bound_dev_if && (bound_dev_if != af->skb_iif(skb))) {
+ 		if (transport) {
+ 			sctp_transport_put(transport);
+ 			asoc = NULL;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 35db3260e8d56..5d7710dd95145 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1118,9 +1118,9 @@ static int smc_connect(struct socket *sock, struct sockaddr *addr,
+ 	if (rc && rc != -EINPROGRESS)
+ 		goto out;
+ 
+-	sock_hold(&smc->sk); /* sock put in passive closing */
+ 	if (smc->use_fallback)
+ 		goto out;
++	sock_hold(&smc->sk); /* sock put in passive closing */
+ 	if (flags & O_NONBLOCK) {
+ 		if (queue_work(smc_hs_wq, &smc->connect_work))
+ 			smc->connect_nonblock = 1;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index f8d5f35cfc664..8a7f0c8fba5e9 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3485,6 +3485,7 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag
+ 	wdev_lock(wdev);
+ 	switch (wdev->iftype) {
+ 	case NL80211_IFTYPE_AP:
++	case NL80211_IFTYPE_P2P_GO:
+ 		if (wdev->ssid_len &&
+ 		    nla_put(msg, NL80211_ATTR_SSID, wdev->ssid_len, wdev->ssid))
+ 			goto nla_put_failure_locked;
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 6b3386e1d93a5..fd848609e656a 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -787,6 +787,8 @@ static int __init load_builtin_regdb_keys(void)
+ 	return 0;
+ }
+ 
++MODULE_FIRMWARE("regulatory.db.p7s");
++
+ static bool regdb_has_valid_signature(const u8 *data, unsigned int size)
+ {
+ 	const struct firmware *sig;
+@@ -1058,6 +1060,8 @@ static void regdb_fw_cb(const struct firmware *fw, void *context)
+ 	release_firmware(fw);
+ }
+ 
++MODULE_FIRMWARE("regulatory.db");
++
+ static int query_regdb_file(const char *alpha2)
+ {
+ 	ASSERT_RTNL();
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 1befc6db723b0..717db5ecd0bd4 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -1020,7 +1020,8 @@ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x,
+ 		if ((x->sel.family &&
+ 		     (x->sel.family != family ||
+ 		      !xfrm_selector_match(&x->sel, fl, family))) ||
+-		    !security_xfrm_state_pol_flow_match(x, pol, fl))
++		    !security_xfrm_state_pol_flow_match(x, pol,
++							&fl->u.__fl_common))
+ 			return;
+ 
+ 		if (!*best ||
+@@ -1035,7 +1036,8 @@ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x,
+ 		if ((!x->sel.family ||
+ 		     (x->sel.family == family &&
+ 		      xfrm_selector_match(&x->sel, fl, family))) &&
+-		    security_xfrm_state_pol_flow_match(x, pol, fl))
++		    security_xfrm_state_pol_flow_match(x, pol,
++						       &fl->u.__fl_common))
+ 			*error = -ESRCH;
+ 	}
+ }
+diff --git a/scripts/faddr2line b/scripts/faddr2line
+index 6c6439f69a725..0e6268d598835 100755
+--- a/scripts/faddr2line
++++ b/scripts/faddr2line
+@@ -44,17 +44,6 @@
+ set -o errexit
+ set -o nounset
+ 
+-READELF="${CROSS_COMPILE:-}readelf"
+-ADDR2LINE="${CROSS_COMPILE:-}addr2line"
+-SIZE="${CROSS_COMPILE:-}size"
+-NM="${CROSS_COMPILE:-}nm"
+-
+-command -v awk >/dev/null 2>&1 || die "awk isn't installed"
+-command -v ${READELF} >/dev/null 2>&1 || die "readelf isn't installed"
+-command -v ${ADDR2LINE} >/dev/null 2>&1 || die "addr2line isn't installed"
+-command -v ${SIZE} >/dev/null 2>&1 || die "size isn't installed"
+-command -v ${NM} >/dev/null 2>&1 || die "nm isn't installed"
+-
+ usage() {
+ 	echo "usage: faddr2line [--list] <object file> <func+offset> <func+offset>..." >&2
+ 	exit 1
+@@ -69,6 +58,14 @@ die() {
+ 	exit 1
+ }
+ 
++READELF="${CROSS_COMPILE:-}readelf"
++ADDR2LINE="${CROSS_COMPILE:-}addr2line"
++AWK="awk"
++
++command -v ${AWK} >/dev/null 2>&1 || die "${AWK} isn't installed"
++command -v ${READELF} >/dev/null 2>&1 || die "${READELF} isn't installed"
++command -v ${ADDR2LINE} >/dev/null 2>&1 || die "${ADDR2LINE} isn't installed"
++
+ # Try to figure out the source directory prefix so we can remove it from the
+ # addr2line output.  HACK ALERT: This assumes that start_kernel() is in
+ # init/main.c!  This only works for vmlinux.  Otherwise it falls back to
+@@ -76,7 +73,7 @@ die() {
+ find_dir_prefix() {
+ 	local objfile=$1
+ 
+-	local start_kernel_addr=$(${READELF} -sW $objfile | awk '$8 == "start_kernel" {printf "0x%s", $2}')
++	local start_kernel_addr=$(${READELF} --symbols --wide $objfile | ${AWK} '$8 == "start_kernel" {printf "0x%s", $2}')
+ 	[[ -z $start_kernel_addr ]] && return
+ 
+ 	local file_line=$(${ADDR2LINE} -e $objfile $start_kernel_addr)
+@@ -97,86 +94,133 @@ __faddr2line() {
+ 	local dir_prefix=$3
+ 	local print_warnings=$4
+ 
+-	local func=${func_addr%+*}
++	local sym_name=${func_addr%+*}
+ 	local offset=${func_addr#*+}
+ 	offset=${offset%/*}
+-	local size=
+-	[[ $func_addr =~ "/" ]] && size=${func_addr#*/}
++	local user_size=
++	[[ $func_addr =~ "/" ]] && user_size=${func_addr#*/}
+ 
+-	if [[ -z $func ]] || [[ -z $offset ]] || [[ $func = $func_addr ]]; then
++	if [[ -z $sym_name ]] || [[ -z $offset ]] || [[ $sym_name = $func_addr ]]; then
+ 		warn "bad func+offset $func_addr"
+ 		DONE=1
+ 		return
+ 	fi
+ 
+ 	# Go through each of the object's symbols which match the func name.
+-	# In rare cases there might be duplicates.
+-	file_end=$(${SIZE} -Ax $objfile | awk '$1 == ".text" {print $2}')
+-	while read symbol; do
+-		local fields=($symbol)
+-		local sym_base=0x${fields[0]}
+-		local sym_type=${fields[1]}
+-		local sym_end=${fields[3]}
+-
+-		# calculate the size
+-		local sym_size=$(($sym_end - $sym_base))
++	# In rare cases there might be duplicates, in which case we print all
++	# matches.
++	while read line; do
++		local fields=($line)
++		local sym_addr=0x${fields[1]}
++		local sym_elf_size=${fields[2]}
++		local sym_sec=${fields[6]}
++
++		# Get the section size:
++		local sec_size=$(${READELF} --section-headers --wide $objfile |
++			sed 's/\[ /\[/' |
++			${AWK} -v sec=$sym_sec '$1 == "[" sec "]" { print "0x" $6; exit }')
++
++		if [[ -z $sec_size ]]; then
++			warn "bad section size: section: $sym_sec"
++			DONE=1
++			return
++		fi
++
++		# Calculate the symbol size.
++		#
++		# Unfortunately we can't use the ELF size, because kallsyms
++		# also includes the padding bytes in its size calculation.  For
++		# kallsyms, the size calculation is the distance between the
++		# symbol and the next symbol in a sorted list.
++		local sym_size
++		local cur_sym_addr
++		local found=0
++		while read line; do
++			local fields=($line)
++			cur_sym_addr=0x${fields[1]}
++			local cur_sym_elf_size=${fields[2]}
++			local cur_sym_name=${fields[7]:-}
++
++			if [[ $cur_sym_addr = $sym_addr ]] &&
++			   [[ $cur_sym_elf_size = $sym_elf_size ]] &&
++			   [[ $cur_sym_name = $sym_name ]]; then
++				found=1
++				continue
++			fi
++
++			if [[ $found = 1 ]]; then
++				sym_size=$(($cur_sym_addr - $sym_addr))
++				[[ $sym_size -lt $sym_elf_size ]] && continue;
++				found=2
++				break
++			fi
++		done < <(${READELF} --symbols --wide $objfile | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2)
++
++		if [[ $found = 0 ]]; then
++			warn "can't find symbol: sym_name: $sym_name sym_sec: $sym_sec sym_addr: $sym_addr sym_elf_size: $sym_elf_size"
++			DONE=1
++			return
++		fi
++
++		# If nothing was found after the symbol, assume it's the last
++		# symbol in the section.
++		[[ $found = 1 ]] && sym_size=$(($sec_size - $sym_addr))
++
+ 		if [[ -z $sym_size ]] || [[ $sym_size -le 0 ]]; then
+-			warn "bad symbol size: base: $sym_base end: $sym_end"
++			warn "bad symbol size: sym_addr: $sym_addr cur_sym_addr: $cur_sym_addr"
+ 			DONE=1
+ 			return
+ 		fi
++
+ 		sym_size=0x$(printf %x $sym_size)
+ 
+-		# calculate the address
+-		local addr=$(($sym_base + $offset))
++		# Calculate the section address from user-supplied offset:
++		local addr=$(($sym_addr + $offset))
+ 		if [[ -z $addr ]] || [[ $addr = 0 ]]; then
+-			warn "bad address: $sym_base + $offset"
++			warn "bad address: $sym_addr + $offset"
+ 			DONE=1
+ 			return
+ 		fi
+ 		addr=0x$(printf %x $addr)
+ 
+-		# weed out non-function symbols
+-		if [[ $sym_type != t ]] && [[ $sym_type != T ]]; then
+-			[[ $print_warnings = 1 ]] &&
+-				echo "skipping $func address at $addr due to non-function symbol of type '$sym_type'"
+-			continue
+-		fi
+-
+-		# if the user provided a size, make sure it matches the symbol's size
+-		if [[ -n $size ]] && [[ $size -ne $sym_size ]]; then
++		# If the user provided a size, make sure it matches the symbol's size:
++		if [[ -n $user_size ]] && [[ $user_size -ne $sym_size ]]; then
+ 			[[ $print_warnings = 1 ]] &&
+-				echo "skipping $func address at $addr due to size mismatch ($size != $sym_size)"
++				echo "skipping $sym_name address at $addr due to size mismatch ($user_size != $sym_size)"
+ 			continue;
+ 		fi
+ 
+-		# make sure the provided offset is within the symbol's range
++		# Make sure the provided offset is within the symbol's range:
+ 		if [[ $offset -gt $sym_size ]]; then
+ 			[[ $print_warnings = 1 ]] &&
+-				echo "skipping $func address at $addr due to size mismatch ($offset > $sym_size)"
++				echo "skipping $sym_name address at $addr due to size mismatch ($offset > $sym_size)"
+ 			continue
+ 		fi
+ 
+-		# separate multiple entries with a blank line
++		# In case of duplicates or multiple addresses specified on the
++		# cmdline, separate multiple entries with a blank line:
+ 		[[ $FIRST = 0 ]] && echo
+ 		FIRST=0
+ 
+-		# pass real address to addr2line
+-		echo "$func+$offset/$sym_size:"
+-		local file_lines=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;")
+-		[[ -z $file_lines ]] && return
++		echo "$sym_name+$offset/$sym_size:"
+ 
++		# Pass section address to addr2line and strip absolute paths
++		# from the output:
++		local output=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;")
++		[[ -z $output ]] && continue
++
++		# Default output (non --list):
+ 		if [[ $LIST = 0 ]]; then
+-			echo "$file_lines" | while read -r line
++			echo "$output" | while read -r line
+ 			do
+ 				echo $line
+ 			done
+ 			DONE=1;
+-			return
++			continue
+ 		fi
+ 
+-		# show each line with context
+-		echo "$file_lines" | while read -r line
++		# For --list, show each line with its corresponding source code:
++		echo "$output" | while read -r line
+ 		do
+ 			echo
+ 			echo $line
+@@ -184,12 +228,12 @@ __faddr2line() {
+ 			n1=$[$n-5]
+ 			n2=$[$n+5]
+ 			f=$(echo $line | sed 's/.*at \(.\+\):.*/\1/g')
+-			awk 'NR>=strtonum("'$n1'") && NR<=strtonum("'$n2'") { if (NR=='$n') printf(">%d<", NR); else printf(" %d ", NR); printf("\t%s\n", $0)}' $f
++			${AWK} 'NR>=strtonum("'$n1'") && NR<=strtonum("'$n2'") { if (NR=='$n') printf(">%d<", NR); else printf(" %d ", NR); printf("\t%s\n", $0)}' $f
+ 		done
+ 
+ 		DONE=1
+ 
+-	done < <(${NM} -n $objfile | awk -v fn=$func -v end=$file_end '$3 == fn { found=1; line=$0; start=$1; next } found == 1 { found=0; print line, "0x"$1 } END {if (found == 1) print line, end; }')
++	done < <(${READELF} --symbols --wide $objfile | ${AWK} -v fn=$sym_name '$4 == "FUNC" && $8 == fn')
+ }
+ 
+ [[ $# -lt 2 ]] && usage
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index 9e72edb8d31af..755af0b29e755 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -69,10 +69,9 @@ choice
+ 	  hash, defined as 20 bytes, and a null terminated pathname,
+ 	  limited to 255 characters.  The 'ima-ng' measurement list
+ 	  template permits both larger hash digests and longer
+-	  pathnames.
++	  pathnames. The configured default template can be replaced
++	  by specifying "ima_template=" on the boot command line.
+ 
+-	config IMA_TEMPLATE
+-		bool "ima"
+ 	config IMA_NG_TEMPLATE
+ 		bool "ima-ng (default)"
+ 	config IMA_SIG_TEMPLATE
+@@ -82,7 +81,6 @@ endchoice
+ config IMA_DEFAULT_TEMPLATE
+ 	string
+ 	depends on IMA
+-	default "ima" if IMA_TEMPLATE
+ 	default "ima-ng" if IMA_NG_TEMPLATE
+ 	default "ima-sig" if IMA_SIG_TEMPLATE
+ 
+@@ -102,19 +100,19 @@ choice
+ 
+ 	config IMA_DEFAULT_HASH_SHA256
+ 		bool "SHA256"
+-		depends on CRYPTO_SHA256=y && !IMA_TEMPLATE
++		depends on CRYPTO_SHA256=y
+ 
+ 	config IMA_DEFAULT_HASH_SHA512
+ 		bool "SHA512"
+-		depends on CRYPTO_SHA512=y && !IMA_TEMPLATE
++		depends on CRYPTO_SHA512=y
+ 
+ 	config IMA_DEFAULT_HASH_WP512
+ 		bool "WP512"
+-		depends on CRYPTO_WP512=y && !IMA_TEMPLATE
++		depends on CRYPTO_WP512=y
+ 
+ 	config IMA_DEFAULT_HASH_SM3
+ 		bool "SM3"
+-		depends on CRYPTO_SM3=y && !IMA_TEMPLATE
++		depends on CRYPTO_SM3=y
+ endchoice
+ 
+ config IMA_DEFAULT_HASH
+diff --git a/security/integrity/platform_certs/keyring_handler.h b/security/integrity/platform_certs/keyring_handler.h
+index 2462bfa08fe34..cd06bd6072be2 100644
+--- a/security/integrity/platform_certs/keyring_handler.h
++++ b/security/integrity/platform_certs/keyring_handler.h
+@@ -30,3 +30,11 @@ efi_element_handler_t get_handler_for_db(const efi_guid_t *sig_type);
+ efi_element_handler_t get_handler_for_dbx(const efi_guid_t *sig_type);
+ 
+ #endif
++
++#ifndef UEFI_QUIRK_SKIP_CERT
++#define UEFI_QUIRK_SKIP_CERT(vendor, product) \
++		 .matches = { \
++			DMI_MATCH(DMI_BOARD_VENDOR, vendor), \
++			DMI_MATCH(DMI_PRODUCT_NAME, product), \
++		},
++#endif
+diff --git a/security/integrity/platform_certs/load_uefi.c b/security/integrity/platform_certs/load_uefi.c
+index f290f78c3f301..555d2dfc0ff79 100644
+--- a/security/integrity/platform_certs/load_uefi.c
++++ b/security/integrity/platform_certs/load_uefi.c
+@@ -3,6 +3,7 @@
+ #include <linux/kernel.h>
+ #include <linux/sched.h>
+ #include <linux/cred.h>
++#include <linux/dmi.h>
+ #include <linux/err.h>
+ #include <linux/efi.h>
+ #include <linux/slab.h>
+@@ -11,6 +12,31 @@
+ #include "../integrity.h"
+ #include "keyring_handler.h"
+ 
++/*
++ * On T2 Macs reading the db and dbx efi variables to load UEFI Secure Boot
++ * certificates causes occurrence of a page fault in Apple's firmware and
++ * a crash disabling EFI runtime services. The following quirk skips reading
++ * these variables.
++ */
++static const struct dmi_system_id uefi_skip_cert[] = {
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,2") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,3") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,4") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,2") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,3") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,4") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,2") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir9,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacMini8,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacPro7,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,2") },
++	{ }
++};
++
+ /*
+  * Look to see if a UEFI variable called MokIgnoreDB exists and return true if
+  * it does.
+@@ -137,6 +163,13 @@ static int __init load_uefi_certs(void)
+ 	unsigned long dbsize = 0, dbxsize = 0, mokxsize = 0;
+ 	efi_status_t status;
+ 	int rc = 0;
++	const struct dmi_system_id *dmi_id;
++
++	dmi_id = dmi_first_match(uefi_skip_cert);
++	if (dmi_id) {
++		pr_err("Reading UEFI Secure Boot Certs is not supported on T2 Macs.\n");
++		return false;
++	}
+ 
+ 	if (!efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
+ 		return false;
+diff --git a/security/security.c b/security/security.c
+index 360706cdababc..8ea826ea6167e 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -2223,15 +2223,16 @@ void security_sk_clone(const struct sock *sk, struct sock *newsk)
+ }
+ EXPORT_SYMBOL(security_sk_clone);
+ 
+-void security_sk_classify_flow(struct sock *sk, struct flowi *fl)
++void security_sk_classify_flow(struct sock *sk, struct flowi_common *flic)
+ {
+-	call_void_hook(sk_getsecid, sk, &fl->flowi_secid);
++	call_void_hook(sk_getsecid, sk, &flic->flowic_secid);
+ }
+ EXPORT_SYMBOL(security_sk_classify_flow);
+ 
+-void security_req_classify_flow(const struct request_sock *req, struct flowi *fl)
++void security_req_classify_flow(const struct request_sock *req,
++				struct flowi_common *flic)
+ {
+-	call_void_hook(req_classify_flow, req, fl);
++	call_void_hook(req_classify_flow, req, flic);
+ }
+ EXPORT_SYMBOL(security_req_classify_flow);
+ 
+@@ -2423,7 +2424,7 @@ int security_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir)
+ 
+ int security_xfrm_state_pol_flow_match(struct xfrm_state *x,
+ 				       struct xfrm_policy *xp,
+-				       const struct flowi *fl)
++				       const struct flowi_common *flic)
+ {
+ 	struct security_hook_list *hp;
+ 	int rc = LSM_RET_DEFAULT(xfrm_state_pol_flow_match);
+@@ -2439,7 +2440,7 @@ int security_xfrm_state_pol_flow_match(struct xfrm_state *x,
+ 	 */
+ 	hlist_for_each_entry(hp, &security_hook_heads.xfrm_state_pol_flow_match,
+ 				list) {
+-		rc = hp->hook.xfrm_state_pol_flow_match(x, xp, fl);
++		rc = hp->hook.xfrm_state_pol_flow_match(x, xp, flic);
+ 		break;
+ 	}
+ 	return rc;
+@@ -2450,9 +2451,9 @@ int security_xfrm_decode_session(struct sk_buff *skb, u32 *secid)
+ 	return call_int_hook(xfrm_decode_session, 0, skb, secid, 1);
+ }
+ 
+-void security_skb_classify_flow(struct sk_buff *skb, struct flowi *fl)
++void security_skb_classify_flow(struct sk_buff *skb, struct flowi_common *flic)
+ {
+-	int rc = call_int_hook(xfrm_decode_session, 0, skb, &fl->flowi_secid,
++	int rc = call_int_hook(xfrm_decode_session, 0, skb, &flic->flowic_secid,
+ 				0);
+ 
+ 	BUG_ON(rc);
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 8c901ae05dd84..ee37ce2e2619b 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -5448,9 +5448,9 @@ static void selinux_secmark_refcount_dec(void)
+ }
+ 
+ static void selinux_req_classify_flow(const struct request_sock *req,
+-				      struct flowi *fl)
++				      struct flowi_common *flic)
+ {
+-	fl->flowi_secid = req->secid;
++	flic->flowic_secid = req->secid;
+ }
+ 
+ static int selinux_tun_dev_alloc_security(void **security)
+diff --git a/security/selinux/include/xfrm.h b/security/selinux/include/xfrm.h
+index a0b4653162922..0a6f34a7a9714 100644
+--- a/security/selinux/include/xfrm.h
++++ b/security/selinux/include/xfrm.h
+@@ -26,7 +26,7 @@ int selinux_xfrm_state_delete(struct xfrm_state *x);
+ int selinux_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir);
+ int selinux_xfrm_state_pol_flow_match(struct xfrm_state *x,
+ 				      struct xfrm_policy *xp,
+-				      const struct flowi *fl);
++				      const struct flowi_common *flic);
+ 
+ #ifdef CONFIG_SECURITY_NETWORK_XFRM
+ extern atomic_t selinux_xfrm_refcount;
+diff --git a/security/selinux/xfrm.c b/security/selinux/xfrm.c
+index 00e95f8bd7c73..114245b6f7c7b 100644
+--- a/security/selinux/xfrm.c
++++ b/security/selinux/xfrm.c
+@@ -175,9 +175,10 @@ int selinux_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir)
+  */
+ int selinux_xfrm_state_pol_flow_match(struct xfrm_state *x,
+ 				      struct xfrm_policy *xp,
+-				      const struct flowi *fl)
++				      const struct flowi_common *flic)
+ {
+ 	u32 state_sid;
++	u32 flic_sid;
+ 
+ 	if (!xp->security)
+ 		if (x->security)
+@@ -196,17 +197,17 @@ int selinux_xfrm_state_pol_flow_match(struct xfrm_state *x,
+ 				return 0;
+ 
+ 	state_sid = x->security->ctx_sid;
++	flic_sid = flic->flowic_secid;
+ 
+-	if (fl->flowi_secid != state_sid)
++	if (flic_sid != state_sid)
+ 		return 0;
+ 
+ 	/* We don't need a separate SA Vs. policy polmatch check since the SA
+ 	 * is now of the same label as the flow and a flow Vs. policy polmatch
+ 	 * check had already happened in selinux_xfrm_policy_lookup() above. */
+-	return (avc_has_perm(&selinux_state,
+-			     fl->flowi_secid, state_sid,
+-			    SECCLASS_ASSOCIATION, ASSOCIATION__SENDTO,
+-			    NULL) ? 0 : 1);
++	return (avc_has_perm(&selinux_state, flic_sid, state_sid,
++			     SECCLASS_ASSOCIATION, ASSOCIATION__SENDTO,
++			     NULL) ? 0 : 1);
+ }
+ 
+ static u32 selinux_xfrm_skb_sid_egress(struct sk_buff *skb)
+diff --git a/sound/core/jack.c b/sound/core/jack.c
+index dc2e06ae24149..45e28db6ea38d 100644
+--- a/sound/core/jack.c
++++ b/sound/core/jack.c
+@@ -34,8 +34,11 @@ static int snd_jack_dev_disconnect(struct snd_device *device)
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
+ 	struct snd_jack *jack = device->device_data;
+ 
+-	if (!jack->input_dev)
++	mutex_lock(&jack->input_dev_lock);
++	if (!jack->input_dev) {
++		mutex_unlock(&jack->input_dev_lock);
+ 		return 0;
++	}
+ 
+ 	/* If the input device is registered with the input subsystem
+ 	 * then we need to use a different deallocator. */
+@@ -44,6 +47,7 @@ static int snd_jack_dev_disconnect(struct snd_device *device)
+ 	else
+ 		input_free_device(jack->input_dev);
+ 	jack->input_dev = NULL;
++	mutex_unlock(&jack->input_dev_lock);
+ #endif /* CONFIG_SND_JACK_INPUT_DEV */
+ 	return 0;
+ }
+@@ -82,8 +86,11 @@ static int snd_jack_dev_register(struct snd_device *device)
+ 	snprintf(jack->name, sizeof(jack->name), "%s %s",
+ 		 card->shortname, jack->id);
+ 
+-	if (!jack->input_dev)
++	mutex_lock(&jack->input_dev_lock);
++	if (!jack->input_dev) {
++		mutex_unlock(&jack->input_dev_lock);
+ 		return 0;
++	}
+ 
+ 	jack->input_dev->name = jack->name;
+ 
+@@ -108,6 +115,7 @@ static int snd_jack_dev_register(struct snd_device *device)
+ 	if (err == 0)
+ 		jack->registered = 1;
+ 
++	mutex_unlock(&jack->input_dev_lock);
+ 	return err;
+ }
+ #endif /* CONFIG_SND_JACK_INPUT_DEV */
+@@ -228,9 +236,11 @@ int snd_jack_new(struct snd_card *card, const char *id, int type,
+ 		return -ENOMEM;
+ 	}
+ 
+-	/* don't creat input device for phantom jack */
+-	if (!phantom_jack) {
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
++	mutex_init(&jack->input_dev_lock);
++
++	/* don't create input device for phantom jack */
++	if (!phantom_jack) {
+ 		int i;
+ 
+ 		jack->input_dev = input_allocate_device();
+@@ -248,8 +258,8 @@ int snd_jack_new(struct snd_card *card, const char *id, int type,
+ 				input_set_capability(jack->input_dev, EV_SW,
+ 						     jack_switch_types[i]);
+ 
+-#endif /* CONFIG_SND_JACK_INPUT_DEV */
+ 	}
++#endif /* CONFIG_SND_JACK_INPUT_DEV */
+ 
+ 	err = snd_device_new(card, SNDRV_DEV_JACK, jack, &ops);
+ 	if (err < 0)
+@@ -289,10 +299,14 @@ EXPORT_SYMBOL(snd_jack_new);
+ void snd_jack_set_parent(struct snd_jack *jack, struct device *parent)
+ {
+ 	WARN_ON(jack->registered);
+-	if (!jack->input_dev)
++	mutex_lock(&jack->input_dev_lock);
++	if (!jack->input_dev) {
++		mutex_unlock(&jack->input_dev_lock);
+ 		return;
++	}
+ 
+ 	jack->input_dev->dev.parent = parent;
++	mutex_unlock(&jack->input_dev_lock);
+ }
+ EXPORT_SYMBOL(snd_jack_set_parent);
+ 
+@@ -340,6 +354,8 @@ EXPORT_SYMBOL(snd_jack_set_key);
+ 
+ /**
+  * snd_jack_report - Report the current status of a jack
++ * Note: This function uses mutexes and should be called from a
++ * context which can sleep (such as a workqueue).
+  *
+  * @jack:   The jack to report status for
+  * @status: The current status of the jack
+@@ -359,8 +375,11 @@ void snd_jack_report(struct snd_jack *jack, int status)
+ 					    status & jack_kctl->mask_bits);
+ 
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
+-	if (!jack->input_dev)
++	mutex_lock(&jack->input_dev_lock);
++	if (!jack->input_dev) {
++		mutex_unlock(&jack->input_dev_lock);
+ 		return;
++	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(jack->key); i++) {
+ 		int testbit = SND_JACK_BTN_0 >> i;
+@@ -379,6 +398,7 @@ void snd_jack_report(struct snd_jack *jack, int status)
+ 	}
+ 
+ 	input_sync(jack->input_dev);
++	mutex_unlock(&jack->input_dev_lock);
+ #endif /* CONFIG_SND_JACK_INPUT_DEV */
+ }
+ EXPORT_SYMBOL(snd_jack_report);
+diff --git a/sound/core/pcm_memory.c b/sound/core/pcm_memory.c
+index a9a0d74f31656..191883842a35d 100644
+--- a/sound/core/pcm_memory.c
++++ b/sound/core/pcm_memory.c
+@@ -434,7 +434,6 @@ EXPORT_SYMBOL(snd_pcm_lib_malloc_pages);
+  */
+ int snd_pcm_lib_free_pages(struct snd_pcm_substream *substream)
+ {
+-	struct snd_card *card = substream->pcm->card;
+ 	struct snd_pcm_runtime *runtime;
+ 
+ 	if (PCM_RUNTIME_CHECK(substream))
+@@ -443,6 +442,8 @@ int snd_pcm_lib_free_pages(struct snd_pcm_substream *substream)
+ 	if (runtime->dma_area == NULL)
+ 		return 0;
+ 	if (runtime->dma_buffer_p != &substream->dma_buffer) {
++		struct snd_card *card = substream->pcm->card;
++
+ 		/* it's a newly allocated buffer.  release it now. */
+ 		do_free_pages(card, runtime->dma_buffer_p);
+ 		kfree(runtime->dma_buffer_p);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f630f9257ad11..71a9462e8f6ec 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -1990,6 +1990,7 @@ enum {
+ 	ALC1220_FIXUP_CLEVO_PB51ED_PINS,
+ 	ALC887_FIXUP_ASUS_AUDIO,
+ 	ALC887_FIXUP_ASUS_HMIC,
++	ALCS1200A_FIXUP_MIC_VREF,
+ };
+ 
+ static void alc889_fixup_coef(struct hda_codec *codec,
+@@ -2535,6 +2536,14 @@ static const struct hda_fixup alc882_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC887_FIXUP_ASUS_AUDIO,
+ 	},
++	[ALCS1200A_FIXUP_MIC_VREF] = {
++		.type = HDA_FIXUP_PINCTLS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x18, PIN_VREF50 }, /* rear mic */
++			{ 0x19, PIN_VREF50 }, /* front mic */
++			{}
++		}
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+@@ -2572,6 +2581,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x835f, "Asus Eee 1601", ALC888_FIXUP_EEE1601),
+ 	SND_PCI_QUIRK(0x1043, 0x84bc, "ASUS ET2700", ALC887_FIXUP_ASUS_BASS),
+ 	SND_PCI_QUIRK(0x1043, 0x8691, "ASUS ROG Ranger VIII", ALC882_FIXUP_GPIO3),
++	SND_PCI_QUIRK(0x1043, 0x8797, "ASUS TUF B550M-PLUS", ALCS1200A_FIXUP_MIC_VREF),
+ 	SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
+ 	SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP),
+ 	SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT),
+@@ -8651,6 +8661,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0a62, "Dell Precision 5560", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x0a9d, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1028, 0x0b19, "Dell XPS 15 9520", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+diff --git a/sound/soc/atmel/atmel-classd.c b/sound/soc/atmel/atmel-classd.c
+index b1a28a9382fbc..f91a0e728ed5d 100644
+--- a/sound/soc/atmel/atmel-classd.c
++++ b/sound/soc/atmel/atmel-classd.c
+@@ -458,7 +458,6 @@ static const struct snd_soc_component_driver atmel_classd_cpu_dai_component = {
+ 	.num_controls		= ARRAY_SIZE(atmel_classd_snd_controls),
+ 	.idle_bias_on		= 1,
+ 	.use_pmdown_time	= 1,
+-	.endianness		= 1,
+ };
+ 
+ /* ASoC sound card */
+diff --git a/sound/soc/atmel/atmel-pdmic.c b/sound/soc/atmel/atmel-pdmic.c
+index 8e1d8230b180f..049383e5405ee 100644
+--- a/sound/soc/atmel/atmel-pdmic.c
++++ b/sound/soc/atmel/atmel-pdmic.c
+@@ -481,7 +481,6 @@ static const struct snd_soc_component_driver atmel_pdmic_cpu_dai_component = {
+ 	.num_controls		= ARRAY_SIZE(atmel_pdmic_snd_controls),
+ 	.idle_bias_on		= 1,
+ 	.use_pmdown_time	= 1,
+-	.endianness		= 1,
+ };
+ 
+ /* ASoC sound card */
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index 52c89a6f54e9a..25f331551f689 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -857,7 +857,6 @@ config SND_SOC_MAX98095
+ 
+ config SND_SOC_MAX98357A
+ 	tristate "Maxim MAX98357A CODEC"
+-	depends on GPIOLIB
+ 
+ config SND_SOC_MAX98371
+ 	tristate
+@@ -1099,7 +1098,6 @@ config SND_SOC_RT1015
+ 
+ config SND_SOC_RT1015P
+ 	tristate
+-	depends on GPIOLIB
+ 
+ config SND_SOC_RT1305
+ 	tristate
+diff --git a/sound/soc/codecs/max98090.c b/sound/soc/codecs/max98090.c
+index 5b6405392f085..0c73979cad4a4 100644
+--- a/sound/soc/codecs/max98090.c
++++ b/sound/soc/codecs/max98090.c
+@@ -393,7 +393,8 @@ static int max98090_put_enab_tlv(struct snd_kcontrol *kcontrol,
+ 	struct soc_mixer_control *mc =
+ 		(struct soc_mixer_control *)kcontrol->private_value;
+ 	unsigned int mask = (1 << fls(mc->max)) - 1;
+-	unsigned int sel = ucontrol->value.integer.value[0];
++	int sel_unchecked = ucontrol->value.integer.value[0];
++	unsigned int sel;
+ 	unsigned int val = snd_soc_component_read(component, mc->reg);
+ 	unsigned int *select;
+ 
+@@ -413,8 +414,9 @@ static int max98090_put_enab_tlv(struct snd_kcontrol *kcontrol,
+ 
+ 	val = (val >> mc->shift) & mask;
+ 
+-	if (sel < 0 || sel > mc->max)
++	if (sel_unchecked < 0 || sel_unchecked > mc->max)
+ 		return -EINVAL;
++	sel = sel_unchecked;
+ 
+ 	*select = sel;
+ 
+diff --git a/sound/soc/codecs/rk3328_codec.c b/sound/soc/codecs/rk3328_codec.c
+index aed18cbb9f68e..e40b97929f6c2 100644
+--- a/sound/soc/codecs/rk3328_codec.c
++++ b/sound/soc/codecs/rk3328_codec.c
+@@ -481,7 +481,7 @@ static int rk3328_platform_probe(struct platform_device *pdev)
+ 	ret = clk_prepare_enable(rk3328->pclk);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "failed to enable acodec pclk\n");
+-		return ret;
++		goto err_unprepare_mclk;
+ 	}
+ 
+ 	base = devm_platform_ioremap_resource(pdev, 0);
+diff --git a/sound/soc/codecs/rt5514.c b/sound/soc/codecs/rt5514.c
+index 7081142a355e1..c444a56df95ba 100644
+--- a/sound/soc/codecs/rt5514.c
++++ b/sound/soc/codecs/rt5514.c
+@@ -419,7 +419,7 @@ static int rt5514_dsp_voice_wake_up_put(struct snd_kcontrol *kcontrol,
+ 		}
+ 	}
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static const struct snd_kcontrol_new rt5514_snd_controls[] = {
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index 420003d062c7f..d1533e95a74f6 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -4095,9 +4095,14 @@ static int rt5645_i2c_remove(struct i2c_client *i2c)
+ 	if (i2c->irq)
+ 		free_irq(i2c->irq, rt5645);
+ 
++	/*
++	 * Since the rt5645_btn_check_callback() can queue jack_detect_work,
++	 * the timer need to be delted first
++	 */
++	del_timer_sync(&rt5645->btn_check_timer);
++
+ 	cancel_delayed_work_sync(&rt5645->jack_detect_work);
+ 	cancel_delayed_work_sync(&rt5645->rcclock_work);
+-	del_timer_sync(&rt5645->btn_check_timer);
+ 
+ 	regulator_bulk_disable(ARRAY_SIZE(rt5645->supplies), rt5645->supplies);
+ 
+diff --git a/sound/soc/codecs/tscs454.c b/sound/soc/codecs/tscs454.c
+index d0af16b4db2f4..a6f339bb47711 100644
+--- a/sound/soc/codecs/tscs454.c
++++ b/sound/soc/codecs/tscs454.c
+@@ -3115,18 +3115,17 @@ static int set_aif_sample_format(struct snd_soc_component *component,
+ 	unsigned int width;
+ 	int ret;
+ 
+-	switch (format) {
+-	case SNDRV_PCM_FORMAT_S16_LE:
++	switch (snd_pcm_format_width(format)) {
++	case 16:
+ 		width = FV_WL_16;
+ 		break;
+-	case SNDRV_PCM_FORMAT_S20_3LE:
++	case 20:
+ 		width = FV_WL_20;
+ 		break;
+-	case SNDRV_PCM_FORMAT_S24_3LE:
++	case 24:
+ 		width = FV_WL_24;
+ 		break;
+-	case SNDRV_PCM_FORMAT_S24_LE:
+-	case SNDRV_PCM_FORMAT_S32_LE:
++	case 32:
+ 		width = FV_WL_32;
+ 		break;
+ 	default:
+@@ -3321,6 +3320,7 @@ static const struct snd_soc_component_driver soc_component_dev_tscs454 = {
+ 	.num_dapm_routes = ARRAY_SIZE(tscs454_intercon),
+ 	.controls =	tscs454_snd_controls,
+ 	.num_controls = ARRAY_SIZE(tscs454_snd_controls),
++	.endianness = 1,
+ };
+ 
+ #define TSCS454_RATES SNDRV_PCM_RATE_8000_96000
+diff --git a/sound/soc/codecs/wm2000.c b/sound/soc/codecs/wm2000.c
+index 72e165cc64439..97ece3114b3dc 100644
+--- a/sound/soc/codecs/wm2000.c
++++ b/sound/soc/codecs/wm2000.c
+@@ -536,7 +536,7 @@ static int wm2000_anc_transition(struct wm2000_priv *wm2000,
+ {
+ 	struct i2c_client *i2c = wm2000->i2c;
+ 	int i, j;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (wm2000->anc_mode == mode)
+ 		return 0;
+@@ -566,13 +566,13 @@ static int wm2000_anc_transition(struct wm2000_priv *wm2000,
+ 		ret = anc_transitions[i].step[j](i2c,
+ 						 anc_transitions[i].analogue);
+ 		if (ret != 0)
+-			return ret;
++			break;
+ 	}
+ 
+ 	if (anc_transitions[i].dest == ANC_OFF)
+ 		clk_disable_unprepare(wm2000->mclk);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int wm2000_anc_set_mode(struct wm2000_priv *wm2000)
+diff --git a/sound/soc/fsl/imx-sgtl5000.c b/sound/soc/fsl/imx-sgtl5000.c
+index f45cb4bbb6c4d..5997bb5acb738 100644
+--- a/sound/soc/fsl/imx-sgtl5000.c
++++ b/sound/soc/fsl/imx-sgtl5000.c
+@@ -120,19 +120,19 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+ 	if (!data) {
+ 		ret = -ENOMEM;
+-		goto fail;
++		goto put_device;
+ 	}
+ 
+ 	comp = devm_kzalloc(&pdev->dev, 3 * sizeof(*comp), GFP_KERNEL);
+ 	if (!comp) {
+ 		ret = -ENOMEM;
+-		goto fail;
++		goto put_device;
+ 	}
+ 
+ 	data->codec_clk = clk_get(&codec_dev->dev, NULL);
+ 	if (IS_ERR(data->codec_clk)) {
+ 		ret = PTR_ERR(data->codec_clk);
+-		goto fail;
++		goto put_device;
+ 	}
+ 
+ 	data->clk_frequency = clk_get_rate(data->codec_clk);
+@@ -158,10 +158,10 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 	data->card.dev = &pdev->dev;
+ 	ret = snd_soc_of_parse_card_name(&data->card, "model");
+ 	if (ret)
+-		goto fail;
++		goto put_device;
+ 	ret = snd_soc_of_parse_audio_routing(&data->card, "audio-routing");
+ 	if (ret)
+-		goto fail;
++		goto put_device;
+ 	data->card.num_links = 1;
+ 	data->card.owner = THIS_MODULE;
+ 	data->card.dai_link = &data->dai;
+@@ -176,7 +176,7 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 		if (ret != -EPROBE_DEFER)
+ 			dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
+ 				ret);
+-		goto fail;
++		goto put_device;
+ 	}
+ 
+ 	of_node_put(ssi_np);
+@@ -184,6 +184,8 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++put_device:
++	put_device(&codec_dev->dev);
+ fail:
+ 	if (data && !IS_ERR(data->codec_clk))
+ 		clk_put(data->codec_clk);
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 43ee3d095a1be..3020a993f6ef5 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -615,6 +615,18 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_OVCD_SF_0P75 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{	/* HP Pro Tablet 408 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pro Tablet 408"),
++		},
++		.driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
++					BYT_RT5640_JD_SRC_JD2_IN4N |
++					BYT_RT5640_OVCD_TH_1500UA |
++					BYT_RT5640_OVCD_SF_0P75 |
++					BYT_RT5640_SSP0_AIF1 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{	/* HP Stream 7 */
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+diff --git a/sound/soc/mediatek/mt2701/mt2701-wm8960.c b/sound/soc/mediatek/mt2701/mt2701-wm8960.c
+index 414e422c0eba0..70e494fb3da87 100644
+--- a/sound/soc/mediatek/mt2701/mt2701-wm8960.c
++++ b/sound/soc/mediatek/mt2701/mt2701-wm8960.c
+@@ -129,7 +129,8 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev)
+ 	if (!codec_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_platform_node;
+ 	}
+ 	for_each_card_prelinks(card, i, dai_link) {
+ 		if (dai_link->codecs->name)
+@@ -140,7 +141,7 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev)
+ 	ret = snd_soc_of_parse_audio_routing(card, "audio-routing");
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to parse audio-routing: %d\n", ret);
+-		return ret;
++		goto put_codec_node;
+ 	}
+ 
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+@@ -148,6 +149,10 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
+ 
++put_codec_node:
++	of_node_put(codec_node);
++put_platform_node:
++	of_node_put(platform_node);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mediatek/mt8173/mt8173-max98090.c b/sound/soc/mediatek/mt8173/mt8173-max98090.c
+index 3bdd4931316cd..5f39e810e27ae 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-max98090.c
++++ b/sound/soc/mediatek/mt8173/mt8173-max98090.c
+@@ -167,7 +167,8 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev)
+ 	if (!codec_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_platform_node;
+ 	}
+ 	for_each_card_prelinks(card, i, dai_link) {
+ 		if (dai_link->codecs->name)
+@@ -182,6 +183,8 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev)
+ 			__func__, ret);
+ 
+ 	of_node_put(codec_node);
++
++put_platform_node:
+ 	of_node_put(platform_node);
+ 	return ret;
+ }
+diff --git a/sound/soc/mxs/mxs-saif.c b/sound/soc/mxs/mxs-saif.c
+index f2eda81985e27..d87ac26999cfd 100644
+--- a/sound/soc/mxs/mxs-saif.c
++++ b/sound/soc/mxs/mxs-saif.c
+@@ -764,6 +764,7 @@ static int mxs_saif_probe(struct platform_device *pdev)
+ 		saif->master_id = saif->id;
+ 	} else {
+ 		ret = of_alias_get_id(master, "saif");
++		of_node_put(master);
+ 		if (ret < 0)
+ 			return ret;
+ 		else
+diff --git a/sound/soc/samsung/aries_wm8994.c b/sound/soc/samsung/aries_wm8994.c
+index 0ac5956ba270d..18458192aff18 100644
+--- a/sound/soc/samsung/aries_wm8994.c
++++ b/sound/soc/samsung/aries_wm8994.c
+@@ -585,19 +585,16 @@ static int aries_audio_probe(struct platform_device *pdev)
+ 
+ 	extcon_np = of_parse_phandle(np, "extcon", 0);
+ 	priv->usb_extcon = extcon_find_edev_by_node(extcon_np);
+-	if (IS_ERR(priv->usb_extcon)) {
+-		if (PTR_ERR(priv->usb_extcon) != -EPROBE_DEFER)
+-			dev_err(dev, "Failed to get extcon device");
+-		return PTR_ERR(priv->usb_extcon);
+-	}
+ 	of_node_put(extcon_np);
++	if (IS_ERR(priv->usb_extcon))
++		return dev_err_probe(dev, PTR_ERR(priv->usb_extcon),
++				     "Failed to get extcon device");
+ 
+ 	priv->adc = devm_iio_channel_get(dev, "headset-detect");
+-	if (IS_ERR(priv->adc)) {
+-		if (PTR_ERR(priv->adc) != -EPROBE_DEFER)
+-			dev_err(dev, "Failed to get ADC channel");
+-		return PTR_ERR(priv->adc);
+-	}
++	if (IS_ERR(priv->adc))
++		return dev_err_probe(dev, PTR_ERR(priv->adc),
++				     "Failed to get ADC channel");
++
+ 	if (priv->adc->channel->type != IIO_VOLTAGE)
+ 		return -EINVAL;
+ 
+diff --git a/sound/soc/samsung/arndale.c b/sound/soc/samsung/arndale.c
+index 28587375813ac..35e34e534b8ba 100644
+--- a/sound/soc/samsung/arndale.c
++++ b/sound/soc/samsung/arndale.c
+@@ -174,9 +174,8 @@ static int arndale_audio_probe(struct platform_device *pdev)
+ 
+ 	ret = devm_snd_soc_register_card(card->dev, card);
+ 	if (ret) {
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(&pdev->dev,
+-				"snd_soc_register_card() failed: %d\n", ret);
++		dev_err_probe(&pdev->dev, ret,
++			      "snd_soc_register_card() failed\n");
+ 		goto err_put_of_nodes;
+ 	}
+ 	return 0;
+diff --git a/sound/soc/samsung/littlemill.c b/sound/soc/samsung/littlemill.c
+index a1ff1400857ed..e73356a660872 100644
+--- a/sound/soc/samsung/littlemill.c
++++ b/sound/soc/samsung/littlemill.c
+@@ -325,9 +325,8 @@ static int littlemill_probe(struct platform_device *pdev)
+ 	card->dev = &pdev->dev;
+ 
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+-	if (ret && ret != -EPROBE_DEFER)
+-		dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n",
+-			ret);
++	if (ret)
++		dev_err_probe(&pdev->dev, ret, "snd_soc_register_card() failed\n");
+ 
+ 	return ret;
+ }
+diff --git a/sound/soc/samsung/lowland.c b/sound/soc/samsung/lowland.c
+index 998d10cf8c947..7b12ccd2a9b22 100644
+--- a/sound/soc/samsung/lowland.c
++++ b/sound/soc/samsung/lowland.c
+@@ -183,9 +183,8 @@ static int lowland_probe(struct platform_device *pdev)
+ 	card->dev = &pdev->dev;
+ 
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+-	if (ret && ret != -EPROBE_DEFER)
+-		dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n",
+-			ret);
++	if (ret)
++		dev_err_probe(&pdev->dev, ret, "snd_soc_register_card() failed\n");
+ 
+ 	return ret;
+ }
+diff --git a/sound/soc/samsung/odroid.c b/sound/soc/samsung/odroid.c
+index ca643a488c3ca..4ff12e2e704fe 100644
+--- a/sound/soc/samsung/odroid.c
++++ b/sound/soc/samsung/odroid.c
+@@ -311,9 +311,7 @@ static int odroid_audio_probe(struct platform_device *pdev)
+ 
+ 	ret = devm_snd_soc_register_card(dev, card);
+ 	if (ret < 0) {
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "snd_soc_register_card() failed: %d\n",
+-				ret);
++		dev_err_probe(dev, ret, "snd_soc_register_card() failed\n");
+ 		goto err_put_clk_i2s;
+ 	}
+ 
+diff --git a/sound/soc/samsung/smdk_wm8994.c b/sound/soc/samsung/smdk_wm8994.c
+index 64a1a64656aba..92cd9e8a2e617 100644
+--- a/sound/soc/samsung/smdk_wm8994.c
++++ b/sound/soc/samsung/smdk_wm8994.c
+@@ -178,8 +178,8 @@ static int smdk_audio_probe(struct platform_device *pdev)
+ 
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+ 
+-	if (ret && ret != -EPROBE_DEFER)
+-		dev_err(&pdev->dev, "snd_soc_register_card() failed:%d\n", ret);
++	if (ret)
++		dev_err_probe(&pdev->dev, ret, "snd_soc_register_card() failed\n");
+ 
+ 	return ret;
+ }
+diff --git a/sound/soc/samsung/smdk_wm8994pcm.c b/sound/soc/samsung/smdk_wm8994pcm.c
+index a01640576f71d..110a51a4f870a 100644
+--- a/sound/soc/samsung/smdk_wm8994pcm.c
++++ b/sound/soc/samsung/smdk_wm8994pcm.c
+@@ -118,8 +118,8 @@ static int snd_smdk_probe(struct platform_device *pdev)
+ 
+ 	smdk_pcm.dev = &pdev->dev;
+ 	ret = devm_snd_soc_register_card(&pdev->dev, &smdk_pcm);
+-	if (ret && ret != -EPROBE_DEFER)
+-		dev_err(&pdev->dev, "snd_soc_register_card failed %d\n", ret);
++	if (ret)
++		dev_err_probe(&pdev->dev, ret, "snd_soc_register_card failed\n");
+ 
+ 	return ret;
+ }
+diff --git a/sound/soc/samsung/snow.c b/sound/soc/samsung/snow.c
+index 07163f07c6d56..6aa2c66d8e8c9 100644
+--- a/sound/soc/samsung/snow.c
++++ b/sound/soc/samsung/snow.c
+@@ -215,12 +215,9 @@ static int snow_probe(struct platform_device *pdev)
+ 	snd_soc_card_set_drvdata(card, priv);
+ 
+ 	ret = devm_snd_soc_register_card(dev, card);
+-	if (ret) {
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(&pdev->dev,
+-				"snd_soc_register_card failed (%d)\n", ret);
+-		return ret;
+-	}
++	if (ret)
++		return dev_err_probe(&pdev->dev, ret,
++				     "snd_soc_register_card failed\n");
+ 
+ 	return ret;
+ }
+diff --git a/sound/soc/samsung/speyside.c b/sound/soc/samsung/speyside.c
+index f5f6ba00d0731..37b1f4f60b210 100644
+--- a/sound/soc/samsung/speyside.c
++++ b/sound/soc/samsung/speyside.c
+@@ -330,9 +330,8 @@ static int speyside_probe(struct platform_device *pdev)
+ 	card->dev = &pdev->dev;
+ 
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+-	if (ret && ret != -EPROBE_DEFER)
+-		dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n",
+-			ret);
++	if (ret)
++		dev_err_probe(&pdev->dev, ret, "snd_soc_register_card() failed\n");
+ 
+ 	return ret;
+ }
+diff --git a/sound/soc/samsung/tm2_wm5110.c b/sound/soc/samsung/tm2_wm5110.c
+index 125e07f65d2b5..ca1be7a7cb8ab 100644
+--- a/sound/soc/samsung/tm2_wm5110.c
++++ b/sound/soc/samsung/tm2_wm5110.c
+@@ -611,8 +611,7 @@ static int tm2_probe(struct platform_device *pdev)
+ 
+ 	ret = devm_snd_soc_register_card(dev, card);
+ 	if (ret < 0) {
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "Failed to register card: %d\n", ret);
++		dev_err_probe(dev, ret, "Failed to register card\n");
+ 		goto dai_node_put;
+ 	}
+ 
+diff --git a/sound/soc/samsung/tobermory.c b/sound/soc/samsung/tobermory.c
+index c962d2c2a7f78..95c6267b0c0cb 100644
+--- a/sound/soc/samsung/tobermory.c
++++ b/sound/soc/samsung/tobermory.c
+@@ -229,9 +229,8 @@ static int tobermory_probe(struct platform_device *pdev)
+ 	card->dev = &pdev->dev;
+ 
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+-	if (ret && ret != -EPROBE_DEFER)
+-		dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n",
+-			ret);
++	if (ret)
++		dev_err_probe(&pdev->dev, ret, "snd_soc_register_card() failed\n");
+ 
+ 	return ret;
+ }
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 417732bdf2860..f2f7f2dde93cf 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -3427,7 +3427,6 @@ int snd_soc_dapm_put_volsw(struct snd_kcontrol *kcontrol,
+ 			update.val = val;
+ 			card->update = &update;
+ 		}
+-		change |= reg_change;
+ 
+ 		ret = soc_dapm_mixer_update_power(card, kcontrol, connect,
+ 						  rconnect);
+@@ -3529,7 +3528,6 @@ int snd_soc_dapm_put_enum_double(struct snd_kcontrol *kcontrol,
+ 			update.val = val;
+ 			card->update = &update;
+ 		}
+-		change |= reg_change;
+ 
+ 		ret = soc_dapm_mux_update_power(card, kcontrol, item[0], e);
+ 
+diff --git a/sound/soc/ti/j721e-evm.c b/sound/soc/ti/j721e-evm.c
+index 265bbc5a2f96a..756cd9694cbe8 100644
+--- a/sound/soc/ti/j721e-evm.c
++++ b/sound/soc/ti/j721e-evm.c
+@@ -631,17 +631,18 @@ static int j721e_soc_probe_cpb(struct j721e_priv *priv, int *link_idx,
+ 	codec_node = of_parse_phandle(node, "ti,cpb-codec", 0);
+ 	if (!codec_node) {
+ 		dev_err(priv->dev, "CPB codec node is not provided\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_dai_node;
+ 	}
+ 
+ 	domain = &priv->audio_domains[J721E_AUDIO_DOMAIN_CPB];
+ 	ret = j721e_get_clocks(priv->dev, &domain->codec, "cpb-codec-scki");
+ 	if (ret)
+-		return ret;
++		goto put_codec_node;
+ 
+ 	ret = j721e_get_clocks(priv->dev, &domain->mcasp, "cpb-mcasp-auxclk");
+ 	if (ret)
+-		return ret;
++		goto put_codec_node;
+ 
+ 	/*
+ 	 * Common Processor Board, two links
+@@ -651,8 +652,10 @@ static int j721e_soc_probe_cpb(struct j721e_priv *priv, int *link_idx,
+ 	comp_count = 6;
+ 	compnent = devm_kzalloc(priv->dev, comp_count * sizeof(*compnent),
+ 				GFP_KERNEL);
+-	if (!compnent)
+-		return -ENOMEM;
++	if (!compnent) {
++		ret = -ENOMEM;
++		goto put_codec_node;
++	}
+ 
+ 	comp_idx = 0;
+ 	priv->dai_links[*link_idx].cpus = &compnent[comp_idx++];
+@@ -703,6 +706,12 @@ static int j721e_soc_probe_cpb(struct j721e_priv *priv, int *link_idx,
+ 	(*conf_idx)++;
+ 
+ 	return 0;
++
++put_codec_node:
++	of_node_put(codec_node);
++put_dai_node:
++	of_node_put(dai_node);
++	return ret;
+ }
+ 
+ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx,
+@@ -727,23 +736,25 @@ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx,
+ 	codeca_node = of_parse_phandle(node, "ti,ivi-codec-a", 0);
+ 	if (!codeca_node) {
+ 		dev_err(priv->dev, "IVI codec-a node is not provided\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_dai_node;
+ 	}
+ 
+ 	codecb_node = of_parse_phandle(node, "ti,ivi-codec-b", 0);
+ 	if (!codecb_node) {
+ 		dev_warn(priv->dev, "IVI codec-b node is not provided\n");
+-		return 0;
++		ret = 0;
++		goto put_codeca_node;
+ 	}
+ 
+ 	domain = &priv->audio_domains[J721E_AUDIO_DOMAIN_IVI];
+ 	ret = j721e_get_clocks(priv->dev, &domain->codec, "ivi-codec-scki");
+ 	if (ret)
+-		return ret;
++		goto put_codecb_node;
+ 
+ 	ret = j721e_get_clocks(priv->dev, &domain->mcasp, "ivi-mcasp-auxclk");
+ 	if (ret)
+-		return ret;
++		goto put_codecb_node;
+ 
+ 	/*
+ 	 * IVI extension, two links
+@@ -755,8 +766,10 @@ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx,
+ 	comp_count = 8;
+ 	compnent = devm_kzalloc(priv->dev, comp_count * sizeof(*compnent),
+ 				GFP_KERNEL);
+-	if (!compnent)
+-		return -ENOMEM;
++	if (!compnent) {
++		ret = -ENOMEM;
++		goto put_codecb_node;
++	}
+ 
+ 	comp_idx = 0;
+ 	priv->dai_links[*link_idx].cpus = &compnent[comp_idx++];
+@@ -817,6 +830,15 @@ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx,
+ 	(*conf_idx)++;
+ 
+ 	return 0;
++
++
++put_codecb_node:
++	of_node_put(codecb_node);
++put_codeca_node:
++	of_node_put(codeca_node);
++put_dai_node:
++	of_node_put(dai_node);
++	return ret;
+ }
+ 
+ static int j721e_soc_probe(struct platform_device *pdev)
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 84676a8fb60dc..93fee6e365a6e 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1161,6 +1161,9 @@ static int snd_usbmidi_output_open(struct snd_rawmidi_substream *substream)
+ 
+ static int snd_usbmidi_output_close(struct snd_rawmidi_substream *substream)
+ {
++	struct usbmidi_out_port *port = substream->runtime->private_data;
++
++	cancel_work_sync(&port->ep->work);
+ 	return substream_open(substream, 0, 0);
+ }
+ 
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 61df26f048d91..8fada26529b79 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -5928,9 +5928,10 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
+ 		 */
+ 		prog = NULL;
+ 		for (i = 0; i < obj->nr_programs; i++) {
+-			prog = &obj->programs[i];
+-			if (strcmp(prog->sec_name, sec_name) == 0)
++			if (strcmp(obj->programs[i].sec_name, sec_name) == 0) {
++				prog = &obj->programs[i];
+ 				break;
++			}
+ 		}
+ 		if (!prog) {
+ 			pr_warn("sec '%s': failed to find a BPF program\n", sec_name);
+@@ -5945,10 +5946,17 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
+ 			insn_idx = rec->insn_off / BPF_INSN_SZ;
+ 			prog = find_prog_by_sec_insn(obj, sec_idx, insn_idx);
+ 			if (!prog) {
+-				pr_warn("sec '%s': failed to find program at insn #%d for CO-RE offset relocation #%d\n",
+-					sec_name, insn_idx, i);
+-				err = -EINVAL;
+-				goto out;
++				/* When __weak subprog is "overridden" by another instance
++				 * of the subprog from a different object file, linker still
++				 * appends all the .BTF.ext info that used to belong to that
++				 * eliminated subprogram.
++				 * This is similar to what x86-64 linker does for relocations.
++				 * So just ignore such relocations just like we ignore
++				 * subprog instructions when discovering subprograms.
++				 */
++				pr_debug("sec '%s': skipping CO-RE relocation #%d for insn #%d belonging to eliminated weak subprogram\n",
++					 sec_name, i, insn_idx);
++				continue;
+ 			}
+ 			/* no need to apply CO-RE relocation if the program is
+ 			 * not going to be loaded
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 41dff8d38448d..5ee3c4d1fbb2b 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -222,18 +222,33 @@ ifdef PARSER_DEBUG
+ endif
+ 
+ # Try different combinations to accommodate systems that only have
+-# python[2][-config] in weird combinations but always preferring
+-# python2 and python2-config as per pep-0394. If python2 or python
+-# aren't found, then python3 is used.
+-PYTHON_AUTO := python
+-PYTHON_AUTO := $(if $(call get-executable,python3),python3,$(PYTHON_AUTO))
+-PYTHON_AUTO := $(if $(call get-executable,python),python,$(PYTHON_AUTO))
+-PYTHON_AUTO := $(if $(call get-executable,python2),python2,$(PYTHON_AUTO))
+-override PYTHON := $(call get-executable-or-default,PYTHON,$(PYTHON_AUTO))
+-PYTHON_AUTO_CONFIG := \
+-  $(if $(call get-executable,$(PYTHON)-config),$(PYTHON)-config,python-config)
+-override PYTHON_CONFIG := \
+-  $(call get-executable-or-default,PYTHON_CONFIG,$(PYTHON_AUTO_CONFIG))
++# python[2][3]-config in weird combinations in the following order of
++# priority from lowest to highest:
++#   * python3-config
++#   * python-config
++#   * python2-config as per pep-0394.
++#   * $(PYTHON)-config (If PYTHON is user supplied but PYTHON_CONFIG isn't)
++#
++PYTHON_AUTO := python-config
++PYTHON_AUTO := $(if $(call get-executable,python3-config),python3-config,$(PYTHON_AUTO))
++PYTHON_AUTO := $(if $(call get-executable,python-config),python-config,$(PYTHON_AUTO))
++PYTHON_AUTO := $(if $(call get-executable,python2-config),python2-config,$(PYTHON_AUTO))
++
++# If PYTHON is defined but PYTHON_CONFIG isn't, then take $(PYTHON)-config as if it was the user
++# supplied value for PYTHON_CONFIG. Because it's "user supplied", error out if it doesn't exist.
++ifdef PYTHON
++  ifndef PYTHON_CONFIG
++    PYTHON_CONFIG_AUTO := $(call get-executable,$(PYTHON)-config)
++    PYTHON_CONFIG := $(if $(PYTHON_CONFIG_AUTO),$(PYTHON_CONFIG_AUTO),\
++                          $(call $(error $(PYTHON)-config not found)))
++  endif
++endif
++
++# Select either auto detected python and python-config or use user supplied values if they are
++# defined. get-executable-or-default fails with an error if the first argument is supplied but
++# doesn't exist.
++override PYTHON_CONFIG := $(call get-executable-or-default,PYTHON_CONFIG,$(PYTHON_AUTO))
++override PYTHON := $(call get-executable-or-default,PYTHON,$(subst -config,,$(PYTHON_AUTO)))
+ 
+ grep-libs  = $(filter -l%,$(1))
+ strip-libs  = $(filter-out -l%,$(1))
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index d5bea5d3cd51a..7f7111d4b3ad0 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -2694,9 +2694,7 @@ static int perf_c2c__report(int argc, const char **argv)
+ 		   "the input file to process"),
+ 	OPT_INCR('N', "node-info", &c2c.node_info,
+ 		 "show extra node info in report (repeat for more info)"),
+-#ifdef HAVE_SLANG_SUPPORT
+ 	OPT_BOOLEAN(0, "stdio", &c2c.use_stdio, "Use the stdio interface"),
+-#endif
+ 	OPT_BOOLEAN(0, "stats", &c2c.stats_only,
+ 		    "Display only statistic tables (implies --stdio)"),
+ 	OPT_BOOLEAN(0, "full-symbols", &c2c.symbol_full,
+@@ -2725,6 +2723,10 @@ static int perf_c2c__report(int argc, const char **argv)
+ 	if (argc)
+ 		usage_with_options(report_c2c_usage, options);
+ 
++#ifndef HAVE_SLANG_SUPPORT
++	c2c.use_stdio = true;
++#endif
++
+ 	if (c2c.stats_only)
+ 		c2c.use_stdio = true;
+ 
+diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
+index c679a79aef513..1f20f587e0534 100644
+--- a/tools/perf/pmu-events/jevents.c
++++ b/tools/perf/pmu-events/jevents.c
+@@ -579,7 +579,7 @@ static int json_events(const char *fn,
+ 			} else if (json_streq(map, field, "ExtSel")) {
+ 				char *code = NULL;
+ 				addfield(map, &code, "", "", val);
+-				eventcode |= strtoul(code, NULL, 0) << 21;
++				eventcode |= strtoul(code, NULL, 0) << 8;
+ 				free(code);
+ 			} else if (json_streq(map, field, "EventName")) {
+ 				addfield(map, &je.name, "", "", val);
+diff --git a/tools/perf/util/data.h b/tools/perf/util/data.h
+index 75947ef6bc170..5b52ffedf0d54 100644
+--- a/tools/perf/util/data.h
++++ b/tools/perf/util/data.h
+@@ -3,6 +3,7 @@
+ #define __PERF_DATA_H
+ 
+ #include <stdbool.h>
++#include <linux/types.h>
+ 
+ enum perf_data_mode {
+ 	PERF_DATA_MODE_WRITE,
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 424ed19a9d542..ef65f7eed1ec9 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -4189,6 +4189,7 @@ rapl_dram_energy_units_probe(int  model, double rapl_energy_units)
+ 	case INTEL_FAM6_HASWELL_X:	/* HSX */
+ 	case INTEL_FAM6_BROADWELL_X:	/* BDX */
+ 	case INTEL_FAM6_XEON_PHI_KNL:	/* KNL */
++	case INTEL_FAM6_ICELAKE_X:	/* ICX */
+ 		return (rapl_dram_energy_units = 15.3 / 1000000);
+ 	default:
+ 		return (rapl_energy_units);
+diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
+index 31975c96e2c9c..fe43556e1a611 100644
+--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
++++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
+@@ -94,7 +94,7 @@ typedef void (* (*signal_t)(int, void (*)(int)))(int);
+ 
+ typedef char * (*fn_ptr_arr1_t[10])(int **);
+ 
+-typedef char * (* const (* const fn_ptr_arr2_t[5])())(char * (*)(int));
++typedef char * (* (* const fn_ptr_arr2_t[5])())(char * (*)(int));
+ 
+ struct struct_w_typedefs {
+ 	int_t a;
+diff --git a/tools/testing/selftests/cgroup/test_stress.sh b/tools/testing/selftests/cgroup/test_stress.sh
+index 15d9d58963941..3c9c4554d5f6a 100755
+--- a/tools/testing/selftests/cgroup/test_stress.sh
++++ b/tools/testing/selftests/cgroup/test_stress.sh
+@@ -1,4 +1,4 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
+-./with_stress.sh -s subsys -s fork ./test_core
++./with_stress.sh -s subsys -s fork ${OUTPUT:-.}/test_core
+diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
+index 51e5cf22632f7..56ccbeae0638d 100644
+--- a/tools/testing/selftests/resctrl/fill_buf.c
++++ b/tools/testing/selftests/resctrl/fill_buf.c
+@@ -121,8 +121,10 @@ static int fill_cache_read(unsigned char *start_ptr, unsigned char *end_ptr,
+ 
+ 	/* Consume read result so that reading memory is not optimized out. */
+ 	fp = fopen("/dev/null", "w");
+-	if (!fp)
++	if (!fp) {
+ 		perror("Unable to write to /dev/null");
++		return -1;
++	}
+ 	fprintf(fp, "Sum: %d ", ret);
+ 	fclose(fp);
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-06-14 17:12 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-06-14 17:12 UTC (permalink / raw
  To: gentoo-commits

commit:     81f555b11988f95330780409459dfa36d845ce9d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 14 17:11:52 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 14 17:11:52 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=81f555b1

Linux patch 5.10.122

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1121_linux-5.10.122.patch | 4932 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4936 insertions(+)

diff --git a/0000_README b/0000_README
index 89e3b935..25417782 100644
--- a/0000_README
+++ b/0000_README
@@ -527,6 +527,10 @@ Patch:  1120_linux-5.10.121.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.121
 
+Patch:  1121_linux-5.10.122.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.122
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1121_linux-5.10.122.patch b/1121_linux-5.10.122.patch
new file mode 100644
index 00000000..882bbc05
--- /dev/null
+++ b/1121_linux-5.10.122.patch
@@ -0,0 +1,4932 @@
+diff --git a/Documentation/ABI/testing/sysfs-ata b/Documentation/ABI/testing/sysfs-ata
+index 9ab0ef1dd1c72..299e0d1dc1619 100644
+--- a/Documentation/ABI/testing/sysfs-ata
++++ b/Documentation/ABI/testing/sysfs-ata
+@@ -107,13 +107,14 @@ Description:
+ 				described in ATA8 7.16 and 7.17. Only valid if
+ 				the device is not a PM.
+ 
+-		pio_mode:	(RO) Transfer modes supported by the device when
+-				in PIO mode. Mostly used by PATA device.
++		pio_mode:	(RO) PIO transfer mode used by the device.
++				Mostly used by PATA devices.
+ 
+-		xfer_mode:	(RO) Current transfer mode
++		xfer_mode:	(RO) Current transfer mode. Mostly used by
++				PATA devices.
+ 
+-		dma_mode:	(RO) Transfer modes supported by the device when
+-				in DMA mode. Mostly used by PATA device.
++		dma_mode:	(RO) DMA transfer mode used by the device.
++				Mostly used by PATA devices.
+ 
+ 		class:		(RO) Device class. Can be "ata" for disk,
+ 				"atapi" for packet device, "pmp" for PM, or
+diff --git a/Makefile b/Makefile
+index 5233d3d9a3b52..3ed1da61a3c7a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 121
++SUBLEVEL = 122
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 9c6cab71ba98b..18627cbd6da4e 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -1111,6 +1111,7 @@ skip_init_ctx:
+ 			bpf_jit_binary_free(header);
+ 			prog->bpf_func = NULL;
+ 			prog->jited = 0;
++			prog->jited_len = 0;
+ 			goto out_off;
+ 		}
+ 		bpf_jit_binary_lock_ro(header);
+diff --git a/arch/m68k/Kconfig.machine b/arch/m68k/Kconfig.machine
+index 51a878803fb6d..16730561d166f 100644
+--- a/arch/m68k/Kconfig.machine
++++ b/arch/m68k/Kconfig.machine
+@@ -321,6 +321,7 @@ comment "Machine Options"
+ 
+ config UBOOT
+ 	bool "Support for U-Boot command line parameters"
++	depends on COLDFIRE
+ 	help
+ 	  If you say Y here kernel will try to collect command
+ 	  line parameters from the initial u-boot stack.
+diff --git a/arch/m68k/include/asm/pgtable_no.h b/arch/m68k/include/asm/pgtable_no.h
+index 87151d67d91e7..bce5ca56c3883 100644
+--- a/arch/m68k/include/asm/pgtable_no.h
++++ b/arch/m68k/include/asm/pgtable_no.h
+@@ -42,7 +42,8 @@ extern void paging_init(void);
+  * ZERO_PAGE is a global shared page that is always zero: used
+  * for zero-mapped memory areas etc..
+  */
+-#define ZERO_PAGE(vaddr)	(virt_to_page(0))
++extern void *empty_zero_page;
++#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))
+ 
+ /*
+  * All 32bit addresses are effectively valid for vmalloc...
+diff --git a/arch/mips/kernel/mips-cpc.c b/arch/mips/kernel/mips-cpc.c
+index 8d2535123f11c..d005be84c482b 100644
+--- a/arch/mips/kernel/mips-cpc.c
++++ b/arch/mips/kernel/mips-cpc.c
+@@ -27,6 +27,7 @@ phys_addr_t __weak mips_cpc_default_phys_base(void)
+ 	cpc_node = of_find_compatible_node(of_root, NULL, "mti,mips-cpc");
+ 	if (cpc_node) {
+ 		err = of_address_to_resource(cpc_node, 0, &res);
++		of_node_put(cpc_node);
+ 		if (!err)
+ 			return res.start;
+ 	}
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 5afa0ebd78ca5..78dd6be8b31dd 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -786,7 +786,6 @@ config THREAD_SHIFT
+ 	range 13 15
+ 	default "15" if PPC_256K_PAGES
+ 	default "14" if PPC64
+-	default "14" if KASAN
+ 	default "13"
+ 	help
+ 	  Used to define the stack size. The default is almost always what you
+diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
+index f0c0816f57270..d6a3cd1470599 100644
+--- a/arch/powerpc/include/asm/ppc-opcode.h
++++ b/arch/powerpc/include/asm/ppc-opcode.h
+@@ -212,6 +212,7 @@
+ #define PPC_INST_COPY			0x7c20060c
+ #define PPC_INST_DCBA			0x7c0005ec
+ #define PPC_INST_DCBA_MASK		0xfc0007fe
++#define PPC_INST_DSSALL			0x7e00066c
+ #define PPC_INST_ISEL			0x7c00001e
+ #define PPC_INST_ISEL_MASK		0xfc00003e
+ #define PPC_INST_LSWI			0x7c0004aa
+@@ -517,6 +518,7 @@
+ #define	PPC_DCBZL(a, b)		stringify_in_c(.long PPC_RAW_DCBZL(a, b))
+ #define	PPC_DIVDE(t, a, b)	stringify_in_c(.long PPC_RAW_DIVDE(t, a, b))
+ #define	PPC_DIVDEU(t, a, b)	stringify_in_c(.long PPC_RAW_DIVDEU(t, a, b))
++#define PPC_DSSALL		stringify_in_c(.long PPC_INST_DSSALL)
+ #define PPC_LQARX(t, a, b, eh)	stringify_in_c(.long PPC_RAW_LQARX(t, a, b, eh))
+ #define PPC_LDARX(t, a, b, eh)	stringify_in_c(.long PPC_RAW_LDARX(t, a, b, eh))
+ #define PPC_LWARX(t, a, b, eh)	stringify_in_c(.long PPC_RAW_LWARX(t, a, b, eh))
+diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
+index 46a210b03d2b8..6de3517bea94e 100644
+--- a/arch/powerpc/include/asm/thread_info.h
++++ b/arch/powerpc/include/asm/thread_info.h
+@@ -14,10 +14,16 @@
+ 
+ #ifdef __KERNEL__
+ 
+-#if defined(CONFIG_VMAP_STACK) && CONFIG_THREAD_SHIFT < PAGE_SHIFT
++#ifdef CONFIG_KASAN
++#define MIN_THREAD_SHIFT	(CONFIG_THREAD_SHIFT + 1)
++#else
++#define MIN_THREAD_SHIFT	CONFIG_THREAD_SHIFT
++#endif
++
++#if defined(CONFIG_VMAP_STACK) && MIN_THREAD_SHIFT < PAGE_SHIFT
+ #define THREAD_SHIFT		PAGE_SHIFT
+ #else
+-#define THREAD_SHIFT		CONFIG_THREAD_SHIFT
++#define THREAD_SHIFT		MIN_THREAD_SHIFT
+ #endif
+ 
+ #define THREAD_SIZE		(1 << THREAD_SHIFT)
+diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c
+index f0271daa8f6a6..77cd4c5a2d631 100644
+--- a/arch/powerpc/kernel/idle.c
++++ b/arch/powerpc/kernel/idle.c
+@@ -82,7 +82,7 @@ void power4_idle(void)
+ 		return;
+ 
+ 	if (cpu_has_feature(CPU_FTR_ALTIVEC))
+-		asm volatile("DSSALL ; sync" ::: "memory");
++		asm volatile(PPC_DSSALL " ; sync" ::: "memory");
+ 
+ 	power4_idle_nap();
+ 
+diff --git a/arch/powerpc/kernel/idle_6xx.S b/arch/powerpc/kernel/idle_6xx.S
+index 69df840f72535..315e5e2ad7031 100644
+--- a/arch/powerpc/kernel/idle_6xx.S
++++ b/arch/powerpc/kernel/idle_6xx.S
+@@ -129,7 +129,7 @@ BEGIN_FTR_SECTION
+ END_FTR_SECTION_IFCLR(CPU_FTR_NO_DPM)
+ 	mtspr	SPRN_HID0,r4
+ BEGIN_FTR_SECTION
+-	DSSALL
++	PPC_DSSALL
+ 	sync
+ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
+ 	lwz	r8,TI_LOCAL_FLAGS(r2)	/* set napping bit */
+diff --git a/arch/powerpc/kernel/l2cr_6xx.S b/arch/powerpc/kernel/l2cr_6xx.S
+index 225511d73bef5..f2e03ed423d0f 100644
+--- a/arch/powerpc/kernel/l2cr_6xx.S
++++ b/arch/powerpc/kernel/l2cr_6xx.S
+@@ -96,7 +96,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_L2CR)
+ 
+ 	/* Stop DST streams */
+ BEGIN_FTR_SECTION
+-	DSSALL
++	PPC_DSSALL
+ 	sync
+ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
+ 
+@@ -292,7 +292,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_L3CR)
+ 	isync
+ 
+ 	/* Stop DST streams */
+-	DSSALL
++	PPC_DSSALL
+ 	sync
+ 
+ 	/* Get the current enable bit of the L3CR into r4 */
+@@ -401,7 +401,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_L3CR)
+ _GLOBAL(__flush_disable_L1)
+ 	/* Stop pending alitvec streams and memory accesses */
+ BEGIN_FTR_SECTION
+-	DSSALL
++	PPC_DSSALL
+ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
+  	sync
+ 
+diff --git a/arch/powerpc/kernel/ptrace/ptrace.c b/arch/powerpc/kernel/ptrace/ptrace.c
+index f6e51be47c6e4..9ea9ee513ae1f 100644
+--- a/arch/powerpc/kernel/ptrace/ptrace.c
++++ b/arch/powerpc/kernel/ptrace/ptrace.c
+@@ -75,8 +75,13 @@ long arch_ptrace(struct task_struct *child, long request,
+ 
+ 			flush_fp_to_thread(child);
+ 			if (fpidx < (PT_FPSCR - PT_FPR0))
+-				memcpy(&tmp, &child->thread.TS_FPR(fpidx),
+-				       sizeof(long));
++				if (IS_ENABLED(CONFIG_PPC32)) {
++					// On 32-bit the index we are passed refers to 32-bit words
++					tmp = ((u32 *)child->thread.fp_state.fpr)[fpidx];
++				} else {
++					memcpy(&tmp, &child->thread.TS_FPR(fpidx),
++					       sizeof(long));
++				}
+ 			else
+ 				tmp = child->thread.fp_state.fpscr;
+ 		}
+@@ -108,8 +113,13 @@ long arch_ptrace(struct task_struct *child, long request,
+ 
+ 			flush_fp_to_thread(child);
+ 			if (fpidx < (PT_FPSCR - PT_FPR0))
+-				memcpy(&child->thread.TS_FPR(fpidx), &data,
+-				       sizeof(long));
++				if (IS_ENABLED(CONFIG_PPC32)) {
++					// On 32-bit the index we are passed refers to 32-bit words
++					((u32 *)child->thread.fp_state.fpr)[fpidx] = data;
++				} else {
++					memcpy(&child->thread.TS_FPR(fpidx), &data,
++					       sizeof(long));
++				}
+ 			else
+ 				child->thread.fp_state.fpscr = data;
+ 			ret = 0;
+@@ -478,4 +488,7 @@ void __init pt_regs_check(void)
+ 	 * real registers.
+ 	 */
+ 	BUILD_BUG_ON(PT_DSCR < sizeof(struct user_pt_regs) / sizeof(unsigned long));
++
++	// ptrace_get/put_fpr() rely on PPC32 and VSX being incompatible
++	BUILD_BUG_ON(IS_ENABLED(CONFIG_PPC32) && IS_ENABLED(CONFIG_VSX));
+ }
+diff --git a/arch/powerpc/kernel/swsusp_32.S b/arch/powerpc/kernel/swsusp_32.S
+index f73f4d72fea43..e0cbd63007f21 100644
+--- a/arch/powerpc/kernel/swsusp_32.S
++++ b/arch/powerpc/kernel/swsusp_32.S
+@@ -181,7 +181,7 @@ _GLOBAL(swsusp_arch_resume)
+ #ifdef CONFIG_ALTIVEC
+ 	/* Stop pending alitvec streams and memory accesses */
+ BEGIN_FTR_SECTION
+-	DSSALL
++	PPC_DSSALL
+ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
+ #endif
+  	sync
+diff --git a/arch/powerpc/kernel/swsusp_asm64.S b/arch/powerpc/kernel/swsusp_asm64.S
+index 6d3189830dd32..068a268a8013e 100644
+--- a/arch/powerpc/kernel/swsusp_asm64.S
++++ b/arch/powerpc/kernel/swsusp_asm64.S
+@@ -142,7 +142,7 @@ END_FW_FTR_SECTION_IFCLR(FW_FEATURE_LPAR)
+ _GLOBAL(swsusp_arch_resume)
+ 	/* Stop pending alitvec streams and memory accesses */
+ BEGIN_FTR_SECTION
+-	DSSALL
++	PPC_DSSALL
+ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
+ 	sync
+ 
+diff --git a/arch/powerpc/mm/mmu_context.c b/arch/powerpc/mm/mmu_context.c
+index 18f20da0d3483..64290d343b557 100644
+--- a/arch/powerpc/mm/mmu_context.c
++++ b/arch/powerpc/mm/mmu_context.c
+@@ -79,7 +79,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+ 	 * context
+ 	 */
+ 	if (cpu_has_feature(CPU_FTR_ALTIVEC))
+-		asm volatile ("dssall");
++		asm volatile (PPC_DSSALL);
+ 
+ 	if (new_on_cpu)
+ 		radix_kvm_prefetch_workaround(next);
+diff --git a/arch/powerpc/platforms/powermac/cache.S b/arch/powerpc/platforms/powermac/cache.S
+index ced2254154860..b8ae56e9f4146 100644
+--- a/arch/powerpc/platforms/powermac/cache.S
++++ b/arch/powerpc/platforms/powermac/cache.S
+@@ -48,7 +48,7 @@ flush_disable_75x:
+ 
+ 	/* Stop DST streams */
+ BEGIN_FTR_SECTION
+-	DSSALL
++	PPC_DSSALL
+ 	sync
+ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
+ 
+@@ -197,7 +197,7 @@ flush_disable_745x:
+ 	isync
+ 
+ 	/* Stop prefetch streams */
+-	DSSALL
++	PPC_DSSALL
+ 	sync
+ 
+ 	/* Disable L2 prefetching */
+diff --git a/arch/riscv/kernel/efi.c b/arch/riscv/kernel/efi.c
+index 0241592982314..1aa540350abd3 100644
+--- a/arch/riscv/kernel/efi.c
++++ b/arch/riscv/kernel/efi.c
+@@ -65,7 +65,7 @@ static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data)
+ 
+ 	if (md->attribute & EFI_MEMORY_RO) {
+ 		val = pte_val(pte) & ~_PAGE_WRITE;
+-		val = pte_val(pte) | _PAGE_READ;
++		val |= _PAGE_READ;
+ 		pte = __pte(val);
+ 	}
+ 	if (md->attribute & EFI_MEMORY_XP) {
+diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
+index 73044634d3427..812730e6bfffd 100644
+--- a/arch/s390/crypto/aes_s390.c
++++ b/arch/s390/crypto/aes_s390.c
+@@ -700,7 +700,7 @@ static inline void _gcm_sg_unmap_and_advance(struct gcm_sg_walk *gw,
+ 					     unsigned int nbytes)
+ {
+ 	gw->walk_bytes_remain -= nbytes;
+-	scatterwalk_unmap(&gw->walk);
++	scatterwalk_unmap(gw->walk_ptr);
+ 	scatterwalk_advance(&gw->walk, nbytes);
+ 	scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain);
+ 	gw->walk_ptr = NULL;
+@@ -775,7 +775,7 @@ static int gcm_out_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded)
+ 		goto out;
+ 	}
+ 
+-	scatterwalk_unmap(&gw->walk);
++	scatterwalk_unmap(gw->walk_ptr);
+ 	gw->walk_ptr = NULL;
+ 
+ 	gw->ptr = gw->buf;
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index f2d19d40272cf..2db097c14cec0 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -2596,6 +2596,18 @@ static int __s390_enable_skey_pte(pte_t *pte, unsigned long addr,
+ 	return 0;
+ }
+ 
++/*
++ * Give a chance to schedule after setting a key to 256 pages.
++ * We only hold the mm lock, which is a rwsem and the kvm srcu.
++ * Both can sleep.
++ */
++static int __s390_enable_skey_pmd(pmd_t *pmd, unsigned long addr,
++				  unsigned long next, struct mm_walk *walk)
++{
++	cond_resched();
++	return 0;
++}
++
+ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
+ 				      unsigned long hmask, unsigned long next,
+ 				      struct mm_walk *walk)
+@@ -2618,12 +2630,14 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
+ 	end = start + HPAGE_SIZE - 1;
+ 	__storage_key_init_range(start, end);
+ 	set_bit(PG_arch_1, &page->flags);
++	cond_resched();
+ 	return 0;
+ }
+ 
+ static const struct mm_walk_ops enable_skey_walk_ops = {
+ 	.hugetlb_entry		= __s390_enable_skey_hugetlb,
+ 	.pte_entry		= __s390_enable_skey_pte,
++	.pmd_entry		= __s390_enable_skey_pmd,
+ };
+ 
+ int s390_enable_skey(void)
+diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
+index 59bf91c57aa85..619c1f80a2abe 100644
+--- a/arch/x86/include/asm/cpufeature.h
++++ b/arch/x86/include/asm/cpufeature.h
+@@ -49,7 +49,7 @@ extern const char * const x86_power_flags[32];
+ extern const char * const x86_bug_flags[NBUGINTS*32];
+ 
+ #define test_cpu_cap(c, bit)						\
+-	 test_bit(bit, (unsigned long *)((c)->x86_capability))
++	 arch_test_bit(bit, (unsigned long *)((c)->x86_capability))
+ 
+ /*
+  * There are 32 bits/features in each mask word.  The high bits
+diff --git a/drivers/ata/libata-transport.c b/drivers/ata/libata-transport.c
+index 6a40e3c6cf492..b33772df9bc60 100644
+--- a/drivers/ata/libata-transport.c
++++ b/drivers/ata/libata-transport.c
+@@ -196,7 +196,7 @@ static struct {
+ 	{ XFER_PIO_0,			"XFER_PIO_0" },
+ 	{ XFER_PIO_SLOW,		"XFER_PIO_SLOW" }
+ };
+-ata_bitfield_name_match(xfer,ata_xfer_names)
++ata_bitfield_name_search(xfer, ata_xfer_names)
+ 
+ /*
+  * ATA Port attributes
+diff --git a/drivers/ata/pata_octeon_cf.c b/drivers/ata/pata_octeon_cf.c
+index b5a3f710d76de..4cc8a1027888a 100644
+--- a/drivers/ata/pata_octeon_cf.c
++++ b/drivers/ata/pata_octeon_cf.c
+@@ -888,12 +888,14 @@ static int octeon_cf_probe(struct platform_device *pdev)
+ 				int i;
+ 				res_dma = platform_get_resource(dma_dev, IORESOURCE_MEM, 0);
+ 				if (!res_dma) {
++					put_device(&dma_dev->dev);
+ 					of_node_put(dma_node);
+ 					return -EINVAL;
+ 				}
+ 				cf_port->dma_base = (u64)devm_ioremap(&pdev->dev, res_dma->start,
+ 									 resource_size(res_dma));
+ 				if (!cf_port->dma_base) {
++					put_device(&dma_dev->dev);
+ 					of_node_put(dma_node);
+ 					return -EINVAL;
+ 				}
+@@ -903,6 +905,7 @@ static int octeon_cf_probe(struct platform_device *pdev)
+ 					irq = i;
+ 					irq_handler = octeon_cf_interrupt;
+ 				}
++				put_device(&dma_dev->dev);
+ 			}
+ 			of_node_put(dma_node);
+ 		}
+diff --git a/drivers/base/bus.c b/drivers/base/bus.c
+index a9c23ecebc7c8..df85e928b97f2 100644
+--- a/drivers/base/bus.c
++++ b/drivers/base/bus.c
+@@ -621,7 +621,7 @@ int bus_add_driver(struct device_driver *drv)
+ 	if (drv->bus->p->drivers_autoprobe) {
+ 		error = driver_attach(drv);
+ 		if (error)
+-			goto out_unregister;
++			goto out_del_list;
+ 	}
+ 	module_add_driver(drv->owner, drv);
+ 
+@@ -648,6 +648,8 @@ int bus_add_driver(struct device_driver *drv)
+ 
+ 	return 0;
+ 
++out_del_list:
++	klist_del(&priv->knode_bus);
+ out_unregister:
+ 	kobject_put(&priv->kobj);
+ 	/* drv->p is freed in driver_release()  */
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 2728223c1fbc8..f9d9f1ad9215e 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -250,7 +250,6 @@ DEFINE_SHOW_ATTRIBUTE(deferred_devs);
+ 
+ int driver_deferred_probe_timeout;
+ EXPORT_SYMBOL_GPL(driver_deferred_probe_timeout);
+-static DECLARE_WAIT_QUEUE_HEAD(probe_timeout_waitqueue);
+ 
+ static int __init deferred_probe_timeout_setup(char *str)
+ {
+@@ -302,7 +301,6 @@ static void deferred_probe_timeout_work_func(struct work_struct *work)
+ 	list_for_each_entry(p, &deferred_probe_pending_list, deferred_probe)
+ 		dev_info(p->device, "deferred probe pending\n");
+ 	mutex_unlock(&deferred_probe_mutex);
+-	wake_up_all(&probe_timeout_waitqueue);
+ }
+ static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func);
+ 
+@@ -706,9 +704,6 @@ int driver_probe_done(void)
+  */
+ void wait_for_device_probe(void)
+ {
+-	/* wait for probe timeout */
+-	wait_event(probe_timeout_waitqueue, !driver_deferred_probe_timeout);
+-
+ 	/* wait for the deferred probe workqueue to finish */
+ 	flush_work(&deferred_probe_work);
+ 
+@@ -897,6 +892,7 @@ out_unlock:
+ static int __device_attach(struct device *dev, bool allow_async)
+ {
+ 	int ret = 0;
++	bool async = false;
+ 
+ 	device_lock(dev);
+ 	if (dev->p->dead) {
+@@ -935,7 +931,7 @@ static int __device_attach(struct device *dev, bool allow_async)
+ 			 */
+ 			dev_dbg(dev, "scheduling asynchronous probe\n");
+ 			get_device(dev);
+-			async_schedule_dev(__device_attach_async_helper, dev);
++			async = true;
+ 		} else {
+ 			pm_request_idle(dev);
+ 		}
+@@ -945,6 +941,8 @@ static int __device_attach(struct device *dev, bool allow_async)
+ 	}
+ out_unlock:
+ 	device_unlock(dev);
++	if (async)
++		async_schedule_dev(__device_attach_async_helper, dev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index ecde800ba2102..4a6b82d434eef 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1359,7 +1359,7 @@ static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *b
+ static void nbd_clear_sock_ioctl(struct nbd_device *nbd,
+ 				 struct block_device *bdev)
+ {
+-	sock_shutdown(nbd);
++	nbd_clear_sock(nbd);
+ 	__invalidate_device(bdev, true);
+ 	nbd_bdev_reset(bdev);
+ 	if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF,
+@@ -1472,15 +1472,20 @@ static struct nbd_config *nbd_alloc_config(void)
+ {
+ 	struct nbd_config *config;
+ 
++	if (!try_module_get(THIS_MODULE))
++		return ERR_PTR(-ENODEV);
++
+ 	config = kzalloc(sizeof(struct nbd_config), GFP_NOFS);
+-	if (!config)
+-		return NULL;
++	if (!config) {
++		module_put(THIS_MODULE);
++		return ERR_PTR(-ENOMEM);
++	}
++
+ 	atomic_set(&config->recv_threads, 0);
+ 	init_waitqueue_head(&config->recv_wq);
+ 	init_waitqueue_head(&config->conn_wait);
+ 	config->blksize = NBD_DEF_BLKSIZE;
+ 	atomic_set(&config->live_connections, 0);
+-	try_module_get(THIS_MODULE);
+ 	return config;
+ }
+ 
+@@ -1507,12 +1512,13 @@ static int nbd_open(struct block_device *bdev, fmode_t mode)
+ 			mutex_unlock(&nbd->config_lock);
+ 			goto out;
+ 		}
+-		config = nbd->config = nbd_alloc_config();
+-		if (!config) {
+-			ret = -ENOMEM;
++		config = nbd_alloc_config();
++		if (IS_ERR(config)) {
++			ret = PTR_ERR(config);
+ 			mutex_unlock(&nbd->config_lock);
+ 			goto out;
+ 		}
++		nbd->config = config;
+ 		refcount_set(&nbd->config_refs, 1);
+ 		refcount_inc(&nbd->refs);
+ 		mutex_unlock(&nbd->config_lock);
+@@ -1934,13 +1940,14 @@ again:
+ 		nbd_put(nbd);
+ 		return -EINVAL;
+ 	}
+-	config = nbd->config = nbd_alloc_config();
+-	if (!nbd->config) {
++	config = nbd_alloc_config();
++	if (IS_ERR(config)) {
+ 		mutex_unlock(&nbd->config_lock);
+ 		nbd_put(nbd);
+ 		printk(KERN_ERR "nbd: couldn't allocate config\n");
+-		return -ENOMEM;
++		return PTR_ERR(config);
+ 	}
++	nbd->config = config;
+ 	refcount_set(&nbd->config_refs, 1);
+ 	set_bit(NBD_RT_BOUND, &config->runtime_flags);
+ 
+@@ -2461,6 +2468,12 @@ static void __exit nbd_cleanup(void)
+ 	struct nbd_device *nbd;
+ 	LIST_HEAD(del_list);
+ 
++	/*
++	 * Unregister netlink interface prior to waiting
++	 * for the completion of netlink commands.
++	 */
++	genl_unregister_family(&nbd_genl_family);
++
+ 	nbd_dbg_close();
+ 
+ 	mutex_lock(&nbd_index_mutex);
+@@ -2470,13 +2483,15 @@ static void __exit nbd_cleanup(void)
+ 	while (!list_empty(&del_list)) {
+ 		nbd = list_first_entry(&del_list, struct nbd_device, list);
+ 		list_del_init(&nbd->list);
++		if (refcount_read(&nbd->config_refs))
++			printk(KERN_ERR "nbd: possibly leaking nbd_config (ref %d)\n",
++					refcount_read(&nbd->config_refs));
+ 		if (refcount_read(&nbd->refs) != 1)
+ 			printk(KERN_ERR "nbd: possibly leaking a device\n");
+ 		nbd_put(nbd);
+ 	}
+ 
+ 	idr_destroy(&nbd_index_idr);
+-	genl_unregister_family(&nbd_genl_family);
+ 	unregister_blkdev(NBD_MAJOR, "nbd");
+ }
+ 
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index ac559c2620335..4ee20be76508f 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -3291,7 +3291,9 @@ static int sysc_remove(struct platform_device *pdev)
+ 	struct sysc *ddata = platform_get_drvdata(pdev);
+ 	int error;
+ 
+-	cancel_delayed_work_sync(&ddata->idle_work);
++	/* Device can still be enabled, see deferred idle quirk in probe */
++	if (cancel_delayed_work_sync(&ddata->idle_work))
++		ti_sysc_idle(&ddata->idle_work.work);
+ 
+ 	error = pm_runtime_get_sync(ddata->dev);
+ 	if (error < 0) {
+diff --git a/drivers/clocksource/timer-oxnas-rps.c b/drivers/clocksource/timer-oxnas-rps.c
+index 56c0cc32d0ac6..d514b44e67dd1 100644
+--- a/drivers/clocksource/timer-oxnas-rps.c
++++ b/drivers/clocksource/timer-oxnas-rps.c
+@@ -236,7 +236,7 @@ static int __init oxnas_rps_timer_init(struct device_node *np)
+ 	}
+ 
+ 	rps->irq = irq_of_parse_and_map(np, 0);
+-	if (rps->irq < 0) {
++	if (!rps->irq) {
+ 		ret = -EINVAL;
+ 		goto err_iomap;
+ 	}
+diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
+index c51c5ed15aa75..0e7748df4be30 100644
+--- a/drivers/clocksource/timer-riscv.c
++++ b/drivers/clocksource/timer-riscv.c
+@@ -32,7 +32,7 @@ static int riscv_clock_next_event(unsigned long delta,
+ static unsigned int riscv_clock_event_irq;
+ static DEFINE_PER_CPU(struct clock_event_device, riscv_clock_event) = {
+ 	.name			= "riscv_timer_clockevent",
+-	.features		= CLOCK_EVT_FEAT_ONESHOT,
++	.features		= CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_C3STOP,
+ 	.rating			= 100,
+ 	.set_next_event		= riscv_clock_next_event,
+ };
+diff --git a/drivers/clocksource/timer-sp804.c b/drivers/clocksource/timer-sp804.c
+index 6e8ad4a4ea3c7..bedd3570474b6 100644
+--- a/drivers/clocksource/timer-sp804.c
++++ b/drivers/clocksource/timer-sp804.c
+@@ -274,6 +274,11 @@ static int __init sp804_of_init(struct device_node *np, struct sp804_timer *time
+ 	struct clk *clk1, *clk2;
+ 	const char *name = of_get_property(np, "compatible", NULL);
+ 
++	if (initialized) {
++		pr_debug("%pOF: skipping further SP804 timer device\n", np);
++		return 0;
++	}
++
+ 	base = of_iomap(np, 0);
+ 	if (!base)
+ 		return -ENXIO;
+@@ -285,11 +290,6 @@ static int __init sp804_of_init(struct device_node *np, struct sp804_timer *time
+ 	writel(0, timer1_base + timer->ctrl);
+ 	writel(0, timer2_base + timer->ctrl);
+ 
+-	if (initialized || !of_device_is_available(np)) {
+-		ret = -EINVAL;
+-		goto err;
+-	}
+-
+ 	clk1 = of_clk_get(np, 0);
+ 	if (IS_ERR(clk1))
+ 		clk1 = NULL;
+diff --git a/drivers/dma/idxd/dma.c b/drivers/dma/idxd/dma.c
+index aa7435555de95..09ad37bbd98b6 100644
+--- a/drivers/dma/idxd/dma.c
++++ b/drivers/dma/idxd/dma.c
+@@ -82,6 +82,27 @@ static inline void idxd_prep_desc_common(struct idxd_wq *wq,
+ 	hw->int_handle =  wq->vec_ptr;
+ }
+ 
++static struct dma_async_tx_descriptor *
++idxd_dma_prep_interrupt(struct dma_chan *c, unsigned long flags)
++{
++	struct idxd_wq *wq = to_idxd_wq(c);
++	u32 desc_flags;
++	struct idxd_desc *desc;
++
++	if (wq->state != IDXD_WQ_ENABLED)
++		return NULL;
++
++	op_flag_setup(flags, &desc_flags);
++	desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
++	if (IS_ERR(desc))
++		return NULL;
++
++	idxd_prep_desc_common(wq, desc->hw, DSA_OPCODE_NOOP,
++			      0, 0, 0, desc->compl_dma, desc_flags);
++	desc->txd.flags = flags;
++	return &desc->txd;
++}
++
+ static struct dma_async_tx_descriptor *
+ idxd_dma_submit_memcpy(struct dma_chan *c, dma_addr_t dma_dest,
+ 		       dma_addr_t dma_src, size_t len, unsigned long flags)
+@@ -188,10 +209,12 @@ int idxd_register_dma_device(struct idxd_device *idxd)
+ 	INIT_LIST_HEAD(&dma->channels);
+ 	dma->dev = dev;
+ 
++	dma_cap_set(DMA_INTERRUPT, dma->cap_mask);
+ 	dma_cap_set(DMA_PRIVATE, dma->cap_mask);
+ 	dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask);
+ 	dma->device_release = idxd_dma_release;
+ 
++	dma->device_prep_dma_interrupt = idxd_dma_prep_interrupt;
+ 	if (idxd->hw.opcap.bits[0] & IDXD_OPCAP_MEMMOVE) {
+ 		dma_cap_set(DMA_MEMCPY, dma->cap_mask);
+ 		dma->device_prep_dma_memcpy = idxd_dma_submit_memcpy;
+diff --git a/drivers/dma/xilinx/zynqmp_dma.c b/drivers/dma/xilinx/zynqmp_dma.c
+index 5fecf5aa6e858..7e6be076e9d34 100644
+--- a/drivers/dma/xilinx/zynqmp_dma.c
++++ b/drivers/dma/xilinx/zynqmp_dma.c
+@@ -232,7 +232,7 @@ struct zynqmp_dma_chan {
+ 	bool is_dmacoherent;
+ 	struct tasklet_struct tasklet;
+ 	bool idle;
+-	u32 desc_size;
++	size_t desc_size;
+ 	bool err;
+ 	u32 bus_width;
+ 	u32 src_burst_len;
+@@ -490,7 +490,8 @@ static int zynqmp_dma_alloc_chan_resources(struct dma_chan *dchan)
+ 	}
+ 
+ 	chan->desc_pool_v = dma_alloc_coherent(chan->dev,
+-					       (2 * chan->desc_size * ZYNQMP_DMA_NUM_DESCS),
++					       (2 * ZYNQMP_DMA_DESC_SIZE(chan) *
++					       ZYNQMP_DMA_NUM_DESCS),
+ 					       &chan->desc_pool_p, GFP_KERNEL);
+ 	if (!chan->desc_pool_v)
+ 		return -ENOMEM;
+diff --git a/drivers/extcon/extcon-ptn5150.c b/drivers/extcon/extcon-ptn5150.c
+index 5b9a3cf8df268..2a7874108df87 100644
+--- a/drivers/extcon/extcon-ptn5150.c
++++ b/drivers/extcon/extcon-ptn5150.c
+@@ -194,6 +194,13 @@ static int ptn5150_init_dev_type(struct ptn5150_info *info)
+ 	return 0;
+ }
+ 
++static void ptn5150_work_sync_and_put(void *data)
++{
++	struct ptn5150_info *info = data;
++
++	cancel_work_sync(&info->irq_work);
++}
++
+ static int ptn5150_i2c_probe(struct i2c_client *i2c)
+ {
+ 	struct device *dev = &i2c->dev;
+@@ -284,6 +291,10 @@ static int ptn5150_i2c_probe(struct i2c_client *i2c)
+ 	if (ret)
+ 		return -EINVAL;
+ 
++	ret = devm_add_action_or_reset(dev, ptn5150_work_sync_and_put, info);
++	if (ret)
++		return ret;
++
+ 	/*
+ 	 * Update current extcon state if for example OTG connection was there
+ 	 * before the probe
+diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c
+index e7a9561a826d3..356610404bb40 100644
+--- a/drivers/extcon/extcon.c
++++ b/drivers/extcon/extcon.c
+@@ -1230,19 +1230,14 @@ int extcon_dev_register(struct extcon_dev *edev)
+ 		edev->dev.type = &edev->extcon_dev_type;
+ 	}
+ 
+-	ret = device_register(&edev->dev);
+-	if (ret) {
+-		put_device(&edev->dev);
+-		goto err_dev;
+-	}
+-
+ 	spin_lock_init(&edev->lock);
+-	edev->nh = devm_kcalloc(&edev->dev, edev->max_supported,
+-				sizeof(*edev->nh), GFP_KERNEL);
+-	if (!edev->nh) {
+-		ret = -ENOMEM;
+-		device_unregister(&edev->dev);
+-		goto err_dev;
++	if (edev->max_supported) {
++		edev->nh = kcalloc(edev->max_supported, sizeof(*edev->nh),
++				GFP_KERNEL);
++		if (!edev->nh) {
++			ret = -ENOMEM;
++			goto err_alloc_nh;
++		}
+ 	}
+ 
+ 	for (index = 0; index < edev->max_supported; index++)
+@@ -1253,6 +1248,12 @@ int extcon_dev_register(struct extcon_dev *edev)
+ 	dev_set_drvdata(&edev->dev, edev);
+ 	edev->state = 0;
+ 
++	ret = device_register(&edev->dev);
++	if (ret) {
++		put_device(&edev->dev);
++		goto err_dev;
++	}
++
+ 	mutex_lock(&extcon_dev_list_lock);
+ 	list_add(&edev->entry, &extcon_dev_list);
+ 	mutex_unlock(&extcon_dev_list_lock);
+@@ -1260,6 +1261,9 @@ int extcon_dev_register(struct extcon_dev *edev)
+ 	return 0;
+ 
+ err_dev:
++	if (edev->max_supported)
++		kfree(edev->nh);
++err_alloc_nh:
+ 	if (edev->max_supported)
+ 		kfree(edev->extcon_dev_type.groups);
+ err_alloc_groups:
+@@ -1320,6 +1324,7 @@ void extcon_dev_unregister(struct extcon_dev *edev)
+ 	if (edev->max_supported) {
+ 		kfree(edev->extcon_dev_type.groups);
+ 		kfree(edev->cables);
++		kfree(edev->nh);
+ 	}
+ 
+ 	put_device(&edev->dev);
+diff --git a/drivers/firmware/dmi-sysfs.c b/drivers/firmware/dmi-sysfs.c
+index 8b8127fa89553..4a93fb490cb46 100644
+--- a/drivers/firmware/dmi-sysfs.c
++++ b/drivers/firmware/dmi-sysfs.c
+@@ -603,7 +603,7 @@ static void __init dmi_sysfs_register_handle(const struct dmi_header *dh,
+ 				    "%d-%d", dh->type, entry->instance);
+ 
+ 	if (*ret) {
+-		kfree(entry);
++		kobject_put(&entry->kobj);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index 53c7e3f8cfde2..7dd0ac1a0cfc7 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -941,17 +941,17 @@ EXPORT_SYMBOL_GPL(stratix10_svc_allocate_memory);
+ void stratix10_svc_free_memory(struct stratix10_svc_chan *chan, void *kaddr)
+ {
+ 	struct stratix10_svc_data_mem *pmem;
+-	size_t size = 0;
+ 
+ 	list_for_each_entry(pmem, &svc_data_mem, node)
+ 		if (pmem->vaddr == kaddr) {
+-			size = pmem->size;
+-			break;
++			gen_pool_free(chan->ctrl->genpool,
++				       (unsigned long)kaddr, pmem->size);
++			pmem->vaddr = NULL;
++			list_del(&pmem->node);
++			return;
+ 		}
+ 
+-	gen_pool_free(chan->ctrl->genpool, (unsigned long)kaddr, size);
+-	pmem->vaddr = NULL;
+-	list_del(&pmem->node);
++	list_del(&svc_data_mem);
+ }
+ EXPORT_SYMBOL_GPL(stratix10_svc_free_memory);
+ 
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index e936e1eb1f95c..bb4ca064447e9 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -1107,20 +1107,21 @@ static int pca953x_regcache_sync(struct device *dev)
+ {
+ 	struct pca953x_chip *chip = dev_get_drvdata(dev);
+ 	int ret;
++	u8 regaddr;
+ 
+ 	/*
+ 	 * The ordering between direction and output is important,
+ 	 * sync these registers first and only then sync the rest.
+ 	 */
+-	ret = regcache_sync_region(chip->regmap, chip->regs->direction,
+-				   chip->regs->direction + NBANK(chip));
++	regaddr = pca953x_recalc_addr(chip, chip->regs->direction, 0);
++	ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip));
+ 	if (ret) {
+ 		dev_err(dev, "Failed to sync GPIO dir registers: %d\n", ret);
+ 		return ret;
+ 	}
+ 
+-	ret = regcache_sync_region(chip->regmap, chip->regs->output,
+-				   chip->regs->output + NBANK(chip));
++	regaddr = pca953x_recalc_addr(chip, chip->regs->output, 0);
++	ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip));
+ 	if (ret) {
+ 		dev_err(dev, "Failed to sync GPIO out registers: %d\n", ret);
+ 		return ret;
+@@ -1128,16 +1129,18 @@ static int pca953x_regcache_sync(struct device *dev)
+ 
+ #ifdef CONFIG_GPIO_PCA953X_IRQ
+ 	if (chip->driver_data & PCA_PCAL) {
+-		ret = regcache_sync_region(chip->regmap, PCAL953X_IN_LATCH,
+-					   PCAL953X_IN_LATCH + NBANK(chip));
++		regaddr = pca953x_recalc_addr(chip, PCAL953X_IN_LATCH, 0);
++		ret = regcache_sync_region(chip->regmap, regaddr,
++					   regaddr + NBANK(chip));
+ 		if (ret) {
+ 			dev_err(dev, "Failed to sync INT latch registers: %d\n",
+ 				ret);
+ 			return ret;
+ 		}
+ 
+-		ret = regcache_sync_region(chip->regmap, PCAL953X_INT_MASK,
+-					   PCAL953X_INT_MASK + NBANK(chip));
++		regaddr = pca953x_recalc_addr(chip, PCAL953X_INT_MASK, 0);
++		ret = regcache_sync_region(chip->regmap, regaddr,
++					   regaddr + NBANK(chip));
+ 		if (ret) {
+ 			dev_err(dev, "Failed to sync INT mask registers: %d\n",
+ 				ret);
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+index 9755672caf1a5..a7bcb429c02b5 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+@@ -1268,6 +1268,25 @@ static int analogix_dp_bridge_attach(struct drm_bridge *bridge,
+ 	return 0;
+ }
+ 
++static
++struct drm_crtc *analogix_dp_get_old_crtc(struct analogix_dp_device *dp,
++					  struct drm_atomic_state *state)
++{
++	struct drm_encoder *encoder = dp->encoder;
++	struct drm_connector *connector;
++	struct drm_connector_state *conn_state;
++
++	connector = drm_atomic_get_old_connector_for_encoder(state, encoder);
++	if (!connector)
++		return NULL;
++
++	conn_state = drm_atomic_get_old_connector_state(state, connector);
++	if (!conn_state)
++		return NULL;
++
++	return conn_state->crtc;
++}
++
+ static
+ struct drm_crtc *analogix_dp_get_new_crtc(struct analogix_dp_device *dp,
+ 					  struct drm_atomic_state *state)
+@@ -1448,14 +1467,16 @@ analogix_dp_bridge_atomic_disable(struct drm_bridge *bridge,
+ {
+ 	struct drm_atomic_state *old_state = old_bridge_state->base.state;
+ 	struct analogix_dp_device *dp = bridge->driver_private;
+-	struct drm_crtc *crtc;
++	struct drm_crtc *old_crtc, *new_crtc;
++	struct drm_crtc_state *old_crtc_state = NULL;
+ 	struct drm_crtc_state *new_crtc_state = NULL;
++	int ret;
+ 
+-	crtc = analogix_dp_get_new_crtc(dp, old_state);
+-	if (!crtc)
++	new_crtc = analogix_dp_get_new_crtc(dp, old_state);
++	if (!new_crtc)
+ 		goto out;
+ 
+-	new_crtc_state = drm_atomic_get_new_crtc_state(old_state, crtc);
++	new_crtc_state = drm_atomic_get_new_crtc_state(old_state, new_crtc);
+ 	if (!new_crtc_state)
+ 		goto out;
+ 
+@@ -1464,6 +1485,19 @@ analogix_dp_bridge_atomic_disable(struct drm_bridge *bridge,
+ 		return;
+ 
+ out:
++	old_crtc = analogix_dp_get_old_crtc(dp, old_state);
++	if (old_crtc) {
++		old_crtc_state = drm_atomic_get_old_crtc_state(old_state,
++							       old_crtc);
++
++		/* When moving from PSR to fully disabled, exit PSR first. */
++		if (old_crtc_state && old_crtc_state->self_refresh_active) {
++			ret = analogix_dp_disable_psr(dp);
++			if (ret)
++				DRM_ERROR("Failed to disable psr (%d)\n", ret);
++		}
++	}
++
+ 	analogix_dp_bridge_disable(bridge);
+ }
+ 
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 8a871e5c3e26b..7fc8e7000046c 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -996,9 +996,19 @@ crtc_needs_disable(struct drm_crtc_state *old_state,
+ 		return drm_atomic_crtc_effectively_active(old_state);
+ 
+ 	/*
+-	 * We need to run through the crtc_funcs->disable() function if the CRTC
+-	 * is currently on, if it's transitioning to self refresh mode, or if
+-	 * it's in self refresh mode and needs to be fully disabled.
++	 * We need to disable bridge(s) and CRTC if we're transitioning out of
++	 * self-refresh and changing CRTCs at the same time, because the
++	 * bridge tracks self-refresh status via CRTC state.
++	 */
++	if (old_state->self_refresh_active &&
++	    old_state->crtc != new_state->crtc)
++		return true;
++
++	/*
++	 * We also need to run through the crtc_funcs->disable() function if
++	 * the CRTC is currently on, if it's transitioning to self refresh
++	 * mode, or if it's in self refresh mode and needs to be fully
++	 * disabled.
+ 	 */
+ 	return old_state->active ||
+ 	       (old_state->self_refresh_active && !new_state->enable) ||
+diff --git a/drivers/gpu/drm/imx/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3-crtc.c
+index d412fc265395e..fd9d8e51837fa 100644
+--- a/drivers/gpu/drm/imx/ipuv3-crtc.c
++++ b/drivers/gpu/drm/imx/ipuv3-crtc.c
+@@ -68,7 +68,7 @@ static void ipu_crtc_disable_planes(struct ipu_crtc *ipu_crtc,
+ 	drm_atomic_crtc_state_for_each_plane(plane, old_crtc_state) {
+ 		if (plane == &ipu_crtc->plane[0]->base)
+ 			disable_full = true;
+-		if (&ipu_crtc->plane[1] && plane == &ipu_crtc->plane[1]->base)
++		if (ipu_crtc->plane[1] && plane == &ipu_crtc->plane[1]->base)
+ 			disable_partial = true;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index e308344344425..ef111d460be28 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -473,6 +473,8 @@ static struct drm_display_mode *radeon_fp_native_mode(struct drm_encoder *encode
+ 	    native_mode->vdisplay != 0 &&
+ 	    native_mode->clock != 0) {
+ 		mode = drm_mode_duplicate(dev, native_mode);
++		if (!mode)
++			return NULL;
+ 		mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
+ 		drm_mode_set_name(mode);
+ 
+@@ -487,6 +489,8 @@ static struct drm_display_mode *radeon_fp_native_mode(struct drm_encoder *encode
+ 		 * simpler.
+ 		 */
+ 		mode = drm_cvt_mode(dev, native_mode->hdisplay, native_mode->vdisplay, 60, true, false, false);
++		if (!mode)
++			return NULL;
+ 		mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
+ 		DRM_DEBUG_KMS("Adding cvt approximation of native panel mode %s\n", mode->name);
+ 	}
+diff --git a/drivers/hwtracing/coresight/coresight-cpu-debug.c b/drivers/hwtracing/coresight/coresight-cpu-debug.c
+index 2dcf13de751fc..1e98562f4287c 100644
+--- a/drivers/hwtracing/coresight/coresight-cpu-debug.c
++++ b/drivers/hwtracing/coresight/coresight-cpu-debug.c
+@@ -379,9 +379,10 @@ static int debug_notifier_call(struct notifier_block *self,
+ 	int cpu;
+ 	struct debug_drvdata *drvdata;
+ 
+-	mutex_lock(&debug_lock);
++	/* Bail out if we can't acquire the mutex or the functionality is off */
++	if (!mutex_trylock(&debug_lock))
++		return NOTIFY_DONE;
+ 
+-	/* Bail out if the functionality is disabled */
+ 	if (!debug_enable)
+ 		goto skip_dump;
+ 
+@@ -400,7 +401,7 @@ static int debug_notifier_call(struct notifier_block *self,
+ 
+ skip_dump:
+ 	mutex_unlock(&debug_lock);
+-	return 0;
++	return NOTIFY_DONE;
+ }
+ 
+ static struct notifier_block debug_notifier = {
+diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c
+index c1bbc4caeb5c9..50e3ddba52ba7 100644
+--- a/drivers/i2c/busses/i2c-cadence.c
++++ b/drivers/i2c/busses/i2c-cadence.c
+@@ -724,7 +724,7 @@ static void cdns_i2c_master_reset(struct i2c_adapter *adap)
+ static int cdns_i2c_process_msg(struct cdns_i2c *id, struct i2c_msg *msg,
+ 		struct i2c_adapter *adap)
+ {
+-	unsigned long time_left;
++	unsigned long time_left, msg_timeout;
+ 	u32 reg;
+ 
+ 	id->p_msg = msg;
+@@ -749,8 +749,16 @@ static int cdns_i2c_process_msg(struct cdns_i2c *id, struct i2c_msg *msg,
+ 	else
+ 		cdns_i2c_msend(id);
+ 
++	/* Minimal time to execute this message */
++	msg_timeout = msecs_to_jiffies((1000 * msg->len * BITS_PER_BYTE) / id->i2c_clk);
++	/* Plus some wiggle room */
++	msg_timeout += msecs_to_jiffies(500);
++
++	if (msg_timeout < adap->timeout)
++		msg_timeout = adap->timeout;
++
+ 	/* Wait for the signal of completion */
+-	time_left = wait_for_completion_timeout(&id->xfer_done, adap->timeout);
++	time_left = wait_for_completion_timeout(&id->xfer_done, msg_timeout);
+ 	if (time_left == 0) {
+ 		cdns_i2c_master_reset(adap);
+ 		dev_err(id->adap.dev.parent,
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index bd35009950376..19ab7d7251bcb 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -170,7 +170,6 @@ static const struct iio_chan_spec ad7124_channel_template = {
+ 		.sign = 'u',
+ 		.realbits = 24,
+ 		.storagebits = 32,
+-		.shift = 8,
+ 		.endianness = IIO_BE,
+ 	},
+ };
+diff --git a/drivers/iio/adc/sc27xx_adc.c b/drivers/iio/adc/sc27xx_adc.c
+index aa32a1f385e22..2b463e1cf1c77 100644
+--- a/drivers/iio/adc/sc27xx_adc.c
++++ b/drivers/iio/adc/sc27xx_adc.c
+@@ -36,8 +36,8 @@
+ 
+ /* Bits and mask definition for SC27XX_ADC_CH_CFG register */
+ #define SC27XX_ADC_CHN_ID_MASK		GENMASK(4, 0)
+-#define SC27XX_ADC_SCALE_MASK		GENMASK(10, 8)
+-#define SC27XX_ADC_SCALE_SHIFT		8
++#define SC27XX_ADC_SCALE_MASK		GENMASK(10, 9)
++#define SC27XX_ADC_SCALE_SHIFT		9
+ 
+ /* Bits definitions for SC27XX_ADC_INT_EN registers */
+ #define SC27XX_ADC_IRQ_EN		BIT(0)
+@@ -103,14 +103,14 @@ static struct sc27xx_adc_linear_graph small_scale_graph = {
+ 	100, 341,
+ };
+ 
+-static const struct sc27xx_adc_linear_graph big_scale_graph_calib = {
+-	4200, 856,
+-	3600, 733,
++static const struct sc27xx_adc_linear_graph sc2731_big_scale_graph_calib = {
++	4200, 850,
++	3600, 728,
+ };
+ 
+-static const struct sc27xx_adc_linear_graph small_scale_graph_calib = {
+-	1000, 833,
+-	100, 80,
++static const struct sc27xx_adc_linear_graph sc2731_small_scale_graph_calib = {
++	1000, 838,
++	100, 84,
+ };
+ 
+ static int sc27xx_adc_get_calib_data(u32 calib_data, int calib_adc)
+@@ -130,11 +130,11 @@ static int sc27xx_adc_scale_calibration(struct sc27xx_adc_data *data,
+ 	size_t len;
+ 
+ 	if (big_scale) {
+-		calib_graph = &big_scale_graph_calib;
++		calib_graph = &sc2731_big_scale_graph_calib;
+ 		graph = &big_scale_graph;
+ 		cell_name = "big_scale_calib";
+ 	} else {
+-		calib_graph = &small_scale_graph_calib;
++		calib_graph = &sc2731_small_scale_graph_calib;
+ 		graph = &small_scale_graph;
+ 		cell_name = "small_scale_calib";
+ 	}
+diff --git a/drivers/iio/adc/stmpe-adc.c b/drivers/iio/adc/stmpe-adc.c
+index fba659bfdb40a..64305d9fa5602 100644
+--- a/drivers/iio/adc/stmpe-adc.c
++++ b/drivers/iio/adc/stmpe-adc.c
+@@ -61,7 +61,7 @@ struct stmpe_adc {
+ static int stmpe_read_voltage(struct stmpe_adc *info,
+ 		struct iio_chan_spec const *chan, int *val)
+ {
+-	long ret;
++	unsigned long ret;
+ 
+ 	mutex_lock(&info->lock);
+ 
+@@ -79,7 +79,7 @@ static int stmpe_read_voltage(struct stmpe_adc *info,
+ 
+ 	ret = wait_for_completion_timeout(&info->completion, STMPE_ADC_TIMEOUT);
+ 
+-	if (ret <= 0) {
++	if (ret == 0) {
+ 		stmpe_reg_write(info->stmpe, STMPE_REG_ADC_INT_STA,
+ 				STMPE_ADC_CH(info->channel));
+ 		mutex_unlock(&info->lock);
+@@ -96,7 +96,7 @@ static int stmpe_read_voltage(struct stmpe_adc *info,
+ static int stmpe_read_temp(struct stmpe_adc *info,
+ 		struct iio_chan_spec const *chan, int *val)
+ {
+-	long ret;
++	unsigned long ret;
+ 
+ 	mutex_lock(&info->lock);
+ 
+@@ -114,7 +114,7 @@ static int stmpe_read_temp(struct stmpe_adc *info,
+ 
+ 	ret = wait_for_completion_timeout(&info->completion, STMPE_ADC_TIMEOUT);
+ 
+-	if (ret <= 0) {
++	if (ret == 0) {
+ 		mutex_unlock(&info->lock);
+ 		return -ETIMEDOUT;
+ 	}
+diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c
+index 7a69c1be73937..56206fdbceb9d 100644
+--- a/drivers/iio/common/st_sensors/st_sensors_core.c
++++ b/drivers/iio/common/st_sensors/st_sensors_core.c
+@@ -70,16 +70,18 @@ st_sensors_match_odr_error:
+ 
+ int st_sensors_set_odr(struct iio_dev *indio_dev, unsigned int odr)
+ {
+-	int err;
++	int err = 0;
+ 	struct st_sensor_odr_avl odr_out = {0, 0};
+ 	struct st_sensor_data *sdata = iio_priv(indio_dev);
+ 
++	mutex_lock(&sdata->odr_lock);
++
+ 	if (!sdata->sensor_settings->odr.mask)
+-		return 0;
++		goto unlock_mutex;
+ 
+ 	err = st_sensors_match_odr(sdata->sensor_settings, odr, &odr_out);
+ 	if (err < 0)
+-		goto st_sensors_match_odr_error;
++		goto unlock_mutex;
+ 
+ 	if ((sdata->sensor_settings->odr.addr ==
+ 					sdata->sensor_settings->pw.addr) &&
+@@ -102,7 +104,9 @@ int st_sensors_set_odr(struct iio_dev *indio_dev, unsigned int odr)
+ 	if (err >= 0)
+ 		sdata->odr = odr_out.hz;
+ 
+-st_sensors_match_odr_error:
++unlock_mutex:
++	mutex_unlock(&sdata->odr_lock);
++
+ 	return err;
+ }
+ EXPORT_SYMBOL(st_sensors_set_odr);
+@@ -364,6 +368,8 @@ int st_sensors_init_sensor(struct iio_dev *indio_dev,
+ 	struct st_sensors_platform_data *of_pdata;
+ 	int err = 0;
+ 
++	mutex_init(&sdata->odr_lock);
++
+ 	/* If OF/DT pdata exists, it will take precedence of anything else */
+ 	of_pdata = st_sensors_dev_probe(indio_dev->dev.parent, pdata);
+ 	if (IS_ERR(of_pdata))
+@@ -557,18 +563,24 @@ int st_sensors_read_info_raw(struct iio_dev *indio_dev,
+ 		err = -EBUSY;
+ 		goto out;
+ 	} else {
++		mutex_lock(&sdata->odr_lock);
+ 		err = st_sensors_set_enable(indio_dev, true);
+-		if (err < 0)
++		if (err < 0) {
++			mutex_unlock(&sdata->odr_lock);
+ 			goto out;
++		}
+ 
+ 		msleep((sdata->sensor_settings->bootime * 1000) / sdata->odr);
+ 		err = st_sensors_read_axis_data(indio_dev, ch, val);
+-		if (err < 0)
++		if (err < 0) {
++			mutex_unlock(&sdata->odr_lock);
+ 			goto out;
++		}
+ 
+ 		*val = *val >> ch->scan_type.shift;
+ 
+ 		err = st_sensors_set_enable(indio_dev, false);
++		mutex_unlock(&sdata->odr_lock);
+ 	}
+ out:
+ 	mutex_unlock(&indio_dev->mlock);
+diff --git a/drivers/iio/dummy/iio_simple_dummy.c b/drivers/iio/dummy/iio_simple_dummy.c
+index c0b7ef9007354..c24f609c2ade6 100644
+--- a/drivers/iio/dummy/iio_simple_dummy.c
++++ b/drivers/iio/dummy/iio_simple_dummy.c
+@@ -575,10 +575,9 @@ static struct iio_sw_device *iio_dummy_probe(const char *name)
+ 	 */
+ 
+ 	swd = kzalloc(sizeof(*swd), GFP_KERNEL);
+-	if (!swd) {
+-		ret = -ENOMEM;
+-		goto error_kzalloc;
+-	}
++	if (!swd)
++		return ERR_PTR(-ENOMEM);
++
+ 	/*
+ 	 * Allocate an IIO device.
+ 	 *
+@@ -590,7 +589,7 @@ static struct iio_sw_device *iio_dummy_probe(const char *name)
+ 	indio_dev = iio_device_alloc(parent, sizeof(*st));
+ 	if (!indio_dev) {
+ 		ret = -ENOMEM;
+-		goto error_ret;
++		goto error_free_swd;
+ 	}
+ 
+ 	st = iio_priv(indio_dev);
+@@ -616,6 +615,10 @@ static struct iio_sw_device *iio_dummy_probe(const char *name)
+ 	 *    indio_dev->name = spi_get_device_id(spi)->name;
+ 	 */
+ 	indio_dev->name = kstrdup(name, GFP_KERNEL);
++	if (!indio_dev->name) {
++		ret = -ENOMEM;
++		goto error_free_device;
++	}
+ 
+ 	/* Provide description of available channels */
+ 	indio_dev->channels = iio_dummy_channels;
+@@ -632,7 +635,7 @@ static struct iio_sw_device *iio_dummy_probe(const char *name)
+ 
+ 	ret = iio_simple_dummy_events_register(indio_dev);
+ 	if (ret < 0)
+-		goto error_free_device;
++		goto error_free_name;
+ 
+ 	ret = iio_simple_dummy_configure_buffer(indio_dev);
+ 	if (ret < 0)
+@@ -649,11 +652,12 @@ error_unconfigure_buffer:
+ 	iio_simple_dummy_unconfigure_buffer(indio_dev);
+ error_unregister_events:
+ 	iio_simple_dummy_events_unregister(indio_dev);
++error_free_name:
++	kfree(indio_dev->name);
+ error_free_device:
+ 	iio_device_free(indio_dev);
+-error_ret:
++error_free_swd:
+ 	kfree(swd);
+-error_kzalloc:
+ 	return ERR_PTR(ret);
+ }
+ 
+diff --git a/drivers/iio/proximity/vl53l0x-i2c.c b/drivers/iio/proximity/vl53l0x-i2c.c
+index 235e125aeb3a2..3d3ab86423eee 100644
+--- a/drivers/iio/proximity/vl53l0x-i2c.c
++++ b/drivers/iio/proximity/vl53l0x-i2c.c
+@@ -104,6 +104,7 @@ static int vl53l0x_read_proximity(struct vl53l0x_data *data,
+ 	u16 tries = 20;
+ 	u8 buffer[12];
+ 	int ret;
++	unsigned long time_left;
+ 
+ 	ret = i2c_smbus_write_byte_data(client, VL_REG_SYSRANGE_START, 1);
+ 	if (ret < 0)
+@@ -112,10 +113,8 @@ static int vl53l0x_read_proximity(struct vl53l0x_data *data,
+ 	if (data->client->irq) {
+ 		reinit_completion(&data->completion);
+ 
+-		ret = wait_for_completion_timeout(&data->completion, HZ/10);
+-		if (ret < 0)
+-			return ret;
+-		else if (ret == 0)
++		time_left = wait_for_completion_timeout(&data->completion, HZ/10);
++		if (time_left == 0)
+ 			return -ETIMEDOUT;
+ 
+ 		vl53l0x_clear_irq(data);
+diff --git a/drivers/input/mouse/bcm5974.c b/drivers/input/mouse/bcm5974.c
+index 59a14505b9cd1..ca150618d32f1 100644
+--- a/drivers/input/mouse/bcm5974.c
++++ b/drivers/input/mouse/bcm5974.c
+@@ -942,17 +942,22 @@ static int bcm5974_probe(struct usb_interface *iface,
+ 	if (!dev->tp_data)
+ 		goto err_free_bt_buffer;
+ 
+-	if (dev->bt_urb)
++	if (dev->bt_urb) {
+ 		usb_fill_int_urb(dev->bt_urb, udev,
+ 				 usb_rcvintpipe(udev, cfg->bt_ep),
+ 				 dev->bt_data, dev->cfg.bt_datalen,
+ 				 bcm5974_irq_button, dev, 1);
+ 
++		dev->bt_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
++	}
++
+ 	usb_fill_int_urb(dev->tp_urb, udev,
+ 			 usb_rcvintpipe(udev, cfg->tp_ep),
+ 			 dev->tp_data, dev->cfg.tp_datalen,
+ 			 bcm5974_irq_trackpad, dev, 1);
+ 
++	dev->tp_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
++
+ 	/* create bcm5974 device */
+ 	usb_make_path(udev, dev->phys, sizeof(dev->phys));
+ 	strlcat(dev->phys, "/input0", sizeof(dev->phys));
+diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
+index 7887941730dbb..ceb6cdc20484e 100644
+--- a/drivers/interconnect/core.c
++++ b/drivers/interconnect/core.c
+@@ -1084,9 +1084,14 @@ static int of_count_icc_providers(struct device_node *np)
+ {
+ 	struct device_node *child;
+ 	int count = 0;
++	const struct of_device_id __maybe_unused ignore_list[] = {
++		{ .compatible = "qcom,sc7180-ipa-virt" },
++		{}
++	};
+ 
+ 	for_each_available_child_of_node(np, child) {
+-		if (of_property_read_bool(child, "#interconnect-cells"))
++		if (of_property_read_bool(child, "#interconnect-cells") &&
++		    likely(!of_match_node(ignore_list, child)))
+ 			count++;
+ 		count += of_count_icc_providers(child);
+ 	}
+diff --git a/drivers/interconnect/qcom/sc7180.c b/drivers/interconnect/qcom/sc7180.c
+index 8d9044ed18ab9..597a7ee7a9bbf 100644
+--- a/drivers/interconnect/qcom/sc7180.c
++++ b/drivers/interconnect/qcom/sc7180.c
+@@ -47,7 +47,6 @@ DEFINE_QNODE(qnm_mnoc_sf, SC7180_MASTER_MNOC_SF_MEM_NOC, 1, 32, SC7180_SLAVE_GEM
+ DEFINE_QNODE(qnm_snoc_gc, SC7180_MASTER_SNOC_GC_MEM_NOC, 1, 8, SC7180_SLAVE_LLCC);
+ DEFINE_QNODE(qnm_snoc_sf, SC7180_MASTER_SNOC_SF_MEM_NOC, 1, 16, SC7180_SLAVE_LLCC);
+ DEFINE_QNODE(qxm_gpu, SC7180_MASTER_GFX3D, 2, 32, SC7180_SLAVE_GEM_NOC_SNOC, SC7180_SLAVE_LLCC);
+-DEFINE_QNODE(ipa_core_master, SC7180_MASTER_IPA_CORE, 1, 8, SC7180_SLAVE_IPA_CORE);
+ DEFINE_QNODE(llcc_mc, SC7180_MASTER_LLCC, 2, 4, SC7180_SLAVE_EBI1);
+ DEFINE_QNODE(qhm_mnoc_cfg, SC7180_MASTER_CNOC_MNOC_CFG, 1, 4, SC7180_SLAVE_SERVICE_MNOC);
+ DEFINE_QNODE(qxm_camnoc_hf0, SC7180_MASTER_CAMNOC_HF0, 2, 32, SC7180_SLAVE_MNOC_HF_MEM_NOC);
+@@ -129,7 +128,6 @@ DEFINE_QNODE(qhs_mdsp_ms_mpu_cfg, SC7180_SLAVE_MSS_PROC_MS_MPU_CFG, 1, 4);
+ DEFINE_QNODE(qns_gem_noc_snoc, SC7180_SLAVE_GEM_NOC_SNOC, 1, 8, SC7180_MASTER_GEM_NOC_SNOC);
+ DEFINE_QNODE(qns_llcc, SC7180_SLAVE_LLCC, 1, 16, SC7180_MASTER_LLCC);
+ DEFINE_QNODE(srvc_gemnoc, SC7180_SLAVE_SERVICE_GEM_NOC, 1, 4);
+-DEFINE_QNODE(ipa_core_slave, SC7180_SLAVE_IPA_CORE, 1, 8);
+ DEFINE_QNODE(ebi, SC7180_SLAVE_EBI1, 2, 4);
+ DEFINE_QNODE(qns_mem_noc_hf, SC7180_SLAVE_MNOC_HF_MEM_NOC, 1, 32, SC7180_MASTER_MNOC_HF_MEM_NOC);
+ DEFINE_QNODE(qns_mem_noc_sf, SC7180_SLAVE_MNOC_SF_MEM_NOC, 1, 32, SC7180_MASTER_MNOC_SF_MEM_NOC);
+@@ -160,7 +158,6 @@ DEFINE_QBCM(bcm_mc0, "MC0", true, &ebi);
+ DEFINE_QBCM(bcm_sh0, "SH0", true, &qns_llcc);
+ DEFINE_QBCM(bcm_mm0, "MM0", false, &qns_mem_noc_hf);
+ DEFINE_QBCM(bcm_ce0, "CE0", false, &qxm_crypto);
+-DEFINE_QBCM(bcm_ip0, "IP0", false, &ipa_core_slave);
+ DEFINE_QBCM(bcm_cn0, "CN0", true, &qnm_snoc, &xm_qdss_dap, &qhs_a1_noc_cfg, &qhs_a2_noc_cfg, &qhs_ahb2phy0, &qhs_aop, &qhs_aoss, &qhs_boot_rom, &qhs_camera_cfg, &qhs_camera_nrt_throttle_cfg, &qhs_camera_rt_throttle_cfg, &qhs_clk_ctl, &qhs_cpr_cx, &qhs_cpr_mx, &qhs_crypto0_cfg, &qhs_dcc_cfg, &qhs_ddrss_cfg, &qhs_display_cfg, &qhs_display_rt_throttle_cfg, &qhs_display_throttle_cfg, &qhs_glm, &qhs_gpuss_cfg, &qhs_imem_cfg, &qhs_ipa, &qhs_mnoc_cfg, &qhs_mss_cfg, &qhs_npu_cfg, &qhs_npu_dma_throttle_cfg, &qhs_npu_dsp_throttle_cfg, &qhs_pimem_cfg, &qhs_prng, &qhs_qdss_cfg, &qhs_qm_cfg, &qhs_qm_mpu_cfg, &qhs_qup0, &qhs_qup1, &qhs_security, &qhs_snoc_cfg, &qhs_tcsr, &qhs_tlmm_1, &qhs_tlmm_2, &qhs_tlmm_3, &qhs_ufs_mem_cfg, &qhs_usb3, &qhs_venus_cfg, &qhs_venus_throttle_cfg, &qhs_vsense_ctrl_cfg, &srvc_cnoc);
+ DEFINE_QBCM(bcm_mm1, "MM1", false, &qxm_camnoc_hf0_uncomp, &qxm_camnoc_hf1_uncomp, &qxm_camnoc_sf_uncomp, &qhm_mnoc_cfg, &qxm_mdp0, &qxm_rot, &qxm_venus0, &qxm_venus_arm9);
+ DEFINE_QBCM(bcm_sh2, "SH2", false, &acm_sys_tcu);
+@@ -372,22 +369,6 @@ static struct qcom_icc_desc sc7180_gem_noc = {
+ 	.num_bcms = ARRAY_SIZE(gem_noc_bcms),
+ };
+ 
+-static struct qcom_icc_bcm *ipa_virt_bcms[] = {
+-	&bcm_ip0,
+-};
+-
+-static struct qcom_icc_node *ipa_virt_nodes[] = {
+-	[MASTER_IPA_CORE] = &ipa_core_master,
+-	[SLAVE_IPA_CORE] = &ipa_core_slave,
+-};
+-
+-static struct qcom_icc_desc sc7180_ipa_virt = {
+-	.nodes = ipa_virt_nodes,
+-	.num_nodes = ARRAY_SIZE(ipa_virt_nodes),
+-	.bcms = ipa_virt_bcms,
+-	.num_bcms = ARRAY_SIZE(ipa_virt_bcms),
+-};
+-
+ static struct qcom_icc_bcm *mc_virt_bcms[] = {
+ 	&bcm_acv,
+ 	&bcm_mc0,
+@@ -611,8 +592,6 @@ static const struct of_device_id qnoc_of_match[] = {
+ 	  .data = &sc7180_dc_noc},
+ 	{ .compatible = "qcom,sc7180-gem-noc",
+ 	  .data = &sc7180_gem_noc},
+-	{ .compatible = "qcom,sc7180-ipa-virt",
+-	  .data = &sc7180_ipa_virt},
+ 	{ .compatible = "qcom,sc7180-mc-virt",
+ 	  .data = &sc7180_mc_virt},
+ 	{ .compatible = "qcom,sc7180-mmss-noc",
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index 483c1362cc4aa..bc4cbc7542ce2 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -3512,6 +3512,8 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+ 
+ 	/* Base address */
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -EINVAL;
+ 	if (resource_size(res) < arm_smmu_resource_size(smmu)) {
+ 		dev_err(dev, "MMIO region too small (%pr)\n", res);
+ 		return -EINVAL;
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+index df24bbe3ea4f1..6b41fe229a053 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+@@ -2123,11 +2123,10 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+ 	if (err)
+ 		return err;
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	ioaddr = res->start;
+-	smmu->base = devm_ioremap_resource(dev, res);
++	smmu->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ 	if (IS_ERR(smmu->base))
+ 		return PTR_ERR(smmu->base);
++	ioaddr = res->start;
+ 	/*
+ 	 * The resource size should effectively match the value of SMMU_TOP;
+ 	 * stash that temporarily until we know PAGESIZE to validate it with.
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 7a9701adee738..5bd1edbb415bd 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -7970,17 +7970,22 @@ EXPORT_SYMBOL(md_register_thread);
+ 
+ void md_unregister_thread(struct md_thread **threadp)
+ {
+-	struct md_thread *thread = *threadp;
+-	if (!thread)
+-		return;
+-	pr_debug("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk));
+-	/* Locking ensures that mddev_unlock does not wake_up a
++	struct md_thread *thread;
++
++	/*
++	 * Locking ensures that mddev_unlock does not wake_up a
+ 	 * non-existent thread
+ 	 */
+ 	spin_lock(&pers_lock);
++	thread = *threadp;
++	if (!thread) {
++		spin_unlock(&pers_lock);
++		return;
++	}
+ 	*threadp = NULL;
+ 	spin_unlock(&pers_lock);
+ 
++	pr_debug("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk));
+ 	kthread_stop(thread->tsk);
+ 	kfree(thread);
+ }
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index 35843df15b5e6..a4c0cafa6010a 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -128,21 +128,6 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
+ 	pr_debug("md/raid0:%s: FINAL %d zones\n",
+ 		 mdname(mddev), conf->nr_strip_zones);
+ 
+-	if (conf->nr_strip_zones == 1) {
+-		conf->layout = RAID0_ORIG_LAYOUT;
+-	} else if (mddev->layout == RAID0_ORIG_LAYOUT ||
+-		   mddev->layout == RAID0_ALT_MULTIZONE_LAYOUT) {
+-		conf->layout = mddev->layout;
+-	} else if (default_layout == RAID0_ORIG_LAYOUT ||
+-		   default_layout == RAID0_ALT_MULTIZONE_LAYOUT) {
+-		conf->layout = default_layout;
+-	} else {
+-		pr_err("md/raid0:%s: cannot assemble multi-zone RAID0 with default_layout setting\n",
+-		       mdname(mddev));
+-		pr_err("md/raid0: please set raid0.default_layout to 1 or 2\n");
+-		err = -ENOTSUPP;
+-		goto abort;
+-	}
+ 	/*
+ 	 * now since we have the hard sector sizes, we can make sure
+ 	 * chunk size is a multiple of that sector size
+@@ -273,6 +258,22 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
+ 			 (unsigned long long)smallest->sectors);
+ 	}
+ 
++	if (conf->nr_strip_zones == 1 || conf->strip_zone[1].nb_dev == 1) {
++		conf->layout = RAID0_ORIG_LAYOUT;
++	} else if (mddev->layout == RAID0_ORIG_LAYOUT ||
++		   mddev->layout == RAID0_ALT_MULTIZONE_LAYOUT) {
++		conf->layout = mddev->layout;
++	} else if (default_layout == RAID0_ORIG_LAYOUT ||
++		   default_layout == RAID0_ALT_MULTIZONE_LAYOUT) {
++		conf->layout = default_layout;
++	} else {
++		pr_err("md/raid0:%s: cannot assemble multi-zone RAID0 with default_layout setting\n",
++		       mdname(mddev));
++		pr_err("md/raid0: please set raid0.default_layout to 1 or 2\n");
++		err = -EOPNOTSUPP;
++		goto abort;
++	}
++
+ 	pr_debug("md/raid0:%s: done.\n", mdname(mddev));
+ 	*private_conf = conf;
+ 
+diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c
+index 59eda55d92a38..1ef9b61077c44 100644
+--- a/drivers/misc/cardreader/rtsx_usb.c
++++ b/drivers/misc/cardreader/rtsx_usb.c
+@@ -667,6 +667,7 @@ static int rtsx_usb_probe(struct usb_interface *intf,
+ 	return 0;
+ 
+ out_init_fail:
++	usb_set_intfdata(ucr->pusb_intf, NULL);
+ 	usb_free_coherent(ucr->pusb_dev, IOBUF_SIZE, ucr->iobuf,
+ 			ucr->iobuf_dma);
+ 	return ret;
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index d0471fec37fbb..65f24b6150aa3 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1349,17 +1349,18 @@ static int fastrpc_req_munmap_impl(struct fastrpc_user *fl,
+ 				   struct fastrpc_req_munmap *req)
+ {
+ 	struct fastrpc_invoke_args args[1] = { [0] = { 0 } };
+-	struct fastrpc_buf *buf, *b;
++	struct fastrpc_buf *buf = NULL, *iter, *b;
+ 	struct fastrpc_munmap_req_msg req_msg;
+ 	struct device *dev = fl->sctx->dev;
+ 	int err;
+ 	u32 sc;
+ 
+ 	spin_lock(&fl->lock);
+-	list_for_each_entry_safe(buf, b, &fl->mmaps, node) {
+-		if ((buf->raddr == req->vaddrout) && (buf->size == req->size))
++	list_for_each_entry_safe(iter, b, &fl->mmaps, node) {
++		if ((iter->raddr == req->vaddrout) && (iter->size == req->size)) {
++			buf = iter;
+ 			break;
+-		buf = NULL;
++		}
+ 	}
+ 	spin_unlock(&fl->lock);
+ 
+diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
+index a337f97b30e28..d39b8139b0961 100644
+--- a/drivers/misc/lkdtm/bugs.c
++++ b/drivers/misc/lkdtm/bugs.c
+@@ -231,6 +231,11 @@ void lkdtm_ARRAY_BOUNDS(void)
+ 
+ 	not_checked = kmalloc(sizeof(*not_checked) * 2, GFP_KERNEL);
+ 	checked = kmalloc(sizeof(*checked) * 2, GFP_KERNEL);
++	if (!not_checked || !checked) {
++		kfree(not_checked);
++		kfree(checked);
++		return;
++	}
+ 
+ 	pr_info("Array access within bounds ...\n");
+ 	/* For both, touch all bytes in the actual member size. */
+diff --git a/drivers/misc/lkdtm/usercopy.c b/drivers/misc/lkdtm/usercopy.c
+index 109e8d4302c11..cde2655487ffd 100644
+--- a/drivers/misc/lkdtm/usercopy.c
++++ b/drivers/misc/lkdtm/usercopy.c
+@@ -30,12 +30,12 @@ static const unsigned char test_text[] = "This is a test.\n";
+  */
+ static noinline unsigned char *trick_compiler(unsigned char *stack)
+ {
+-	return stack + 0;
++	return stack + unconst;
+ }
+ 
+ static noinline unsigned char *do_usercopy_stack_callee(int value)
+ {
+-	unsigned char buf[32];
++	unsigned char buf[128];
+ 	int i;
+ 
+ 	/* Exercise stack to avoid everything living in registers. */
+@@ -43,7 +43,12 @@ static noinline unsigned char *do_usercopy_stack_callee(int value)
+ 		buf[i] = value & 0xff;
+ 	}
+ 
+-	return trick_compiler(buf);
++	/*
++	 * Put the target buffer in the middle of stack allocation
++	 * so that we don't step on future stack users regardless
++	 * of stack growth direction.
++	 */
++	return trick_compiler(&buf[(128/2)-32]);
+ }
+ 
+ static noinline void do_usercopy_stack(bool to_user, bool bad_frame)
+@@ -66,6 +71,12 @@ static noinline void do_usercopy_stack(bool to_user, bool bad_frame)
+ 		bad_stack -= sizeof(unsigned long);
+ 	}
+ 
++#ifdef ARCH_HAS_CURRENT_STACK_POINTER
++	pr_info("stack     : %px\n", (void *)current_stack_pointer);
++#endif
++	pr_info("good_stack: %px-%px\n", good_stack, good_stack + sizeof(good_stack));
++	pr_info("bad_stack : %px-%px\n", bad_stack, bad_stack + sizeof(good_stack));
++
+ 	user_addr = vm_mmap(NULL, 0, PAGE_SIZE,
+ 			    PROT_READ | PROT_WRITE | PROT_EXEC,
+ 			    MAP_ANONYMOUS | MAP_PRIVATE, 0);
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 99b981a05b6c0..70eb3d03937ff 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1442,8 +1442,7 @@ void mmc_blk_cqe_recovery(struct mmc_queue *mq)
+ 	err = mmc_cqe_recovery(host);
+ 	if (err)
+ 		mmc_blk_reset(mq->blkdata, host, MMC_BLK_CQE_RECOVERY);
+-	else
+-		mmc_blk_reset_success(mq->blkdata, MMC_BLK_CQE_RECOVERY);
++	mmc_blk_reset_success(mq->blkdata, MMC_BLK_CQE_RECOVERY);
+ 
+ 	pr_debug("%s: CQE recovery done\n", mmc_hostname(host));
+ }
+diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c
+index 28f55f9cf7153..053ab52668e8b 100644
+--- a/drivers/mtd/ubi/fastmap-wl.c
++++ b/drivers/mtd/ubi/fastmap-wl.c
+@@ -97,6 +97,33 @@ out:
+ 	return e;
+ }
+ 
++/*
++ * has_enough_free_count - whether ubi has enough free pebs to fill fm pools
++ * @ubi: UBI device description object
++ * @is_wl_pool: whether UBI is filling wear leveling pool
++ *
++ * This helper function checks whether there are enough free pebs (deducted
++ * by fastmap pebs) to fill fm_pool and fm_wl_pool, above rule works after
++ * there is at least one of free pebs is filled into fm_wl_pool.
++ * For wear leveling pool, UBI should also reserve free pebs for bad pebs
++ * handling, because there maybe no enough free pebs for user volumes after
++ * producing new bad pebs.
++ */
++static bool has_enough_free_count(struct ubi_device *ubi, bool is_wl_pool)
++{
++	int fm_used = 0;	// fastmap non anchor pebs.
++	int beb_rsvd_pebs;
++
++	if (!ubi->free.rb_node)
++		return false;
++
++	beb_rsvd_pebs = is_wl_pool ? ubi->beb_rsvd_pebs : 0;
++	if (ubi->fm_wl_pool.size > 0 && !(ubi->ro_mode || ubi->fm_disabled))
++		fm_used = ubi->fm_size / ubi->leb_size - 1;
++
++	return ubi->free_count - beb_rsvd_pebs > fm_used;
++}
++
+ /**
+  * ubi_refill_pools - refills all fastmap PEB pools.
+  * @ubi: UBI device description object
+@@ -120,21 +147,17 @@ void ubi_refill_pools(struct ubi_device *ubi)
+ 		wl_tree_add(ubi->fm_anchor, &ubi->free);
+ 		ubi->free_count++;
+ 	}
+-	if (ubi->fm_next_anchor) {
+-		wl_tree_add(ubi->fm_next_anchor, &ubi->free);
+-		ubi->free_count++;
+-	}
+ 
+-	/* All available PEBs are in ubi->free, now is the time to get
++	/*
++	 * All available PEBs are in ubi->free, now is the time to get
+ 	 * the best anchor PEBs.
+ 	 */
+ 	ubi->fm_anchor = ubi_wl_get_fm_peb(ubi, 1);
+-	ubi->fm_next_anchor = ubi_wl_get_fm_peb(ubi, 1);
+ 
+ 	for (;;) {
+ 		enough = 0;
+ 		if (pool->size < pool->max_size) {
+-			if (!ubi->free.rb_node)
++			if (!has_enough_free_count(ubi, false))
+ 				break;
+ 
+ 			e = wl_get_wle(ubi);
+@@ -147,8 +170,7 @@ void ubi_refill_pools(struct ubi_device *ubi)
+ 			enough++;
+ 
+ 		if (wl_pool->size < wl_pool->max_size) {
+-			if (!ubi->free.rb_node ||
+-			   (ubi->free_count - ubi->beb_rsvd_pebs < 5))
++			if (!has_enough_free_count(ubi, true))
+ 				break;
+ 
+ 			e = find_wl_entry(ubi, &ubi->free, WL_FREE_MAX_DIFF);
+@@ -286,20 +308,26 @@ static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi)
+ int ubi_ensure_anchor_pebs(struct ubi_device *ubi)
+ {
+ 	struct ubi_work *wrk;
++	struct ubi_wl_entry *anchor;
+ 
+ 	spin_lock(&ubi->wl_lock);
+ 
+-	/* Do we have a next anchor? */
+-	if (!ubi->fm_next_anchor) {
+-		ubi->fm_next_anchor = ubi_wl_get_fm_peb(ubi, 1);
+-		if (!ubi->fm_next_anchor)
+-			/* Tell wear leveling to produce a new anchor PEB */
+-			ubi->fm_do_produce_anchor = 1;
++	/* Do we already have an anchor? */
++	if (ubi->fm_anchor) {
++		spin_unlock(&ubi->wl_lock);
++		return 0;
+ 	}
+ 
+-	/* Do wear leveling to get a new anchor PEB or check the
+-	 * existing next anchor candidate.
+-	 */
++	/* See if we can find an anchor PEB on the list of free PEBs */
++	anchor = ubi_wl_get_fm_peb(ubi, 1);
++	if (anchor) {
++		ubi->fm_anchor = anchor;
++		spin_unlock(&ubi->wl_lock);
++		return 0;
++	}
++
++	ubi->fm_do_produce_anchor = 1;
++	/* No luck, trigger wear leveling to produce a new anchor PEB. */
+ 	if (ubi->wl_scheduled) {
+ 		spin_unlock(&ubi->wl_lock);
+ 		return 0;
+@@ -381,11 +409,6 @@ static void ubi_fastmap_close(struct ubi_device *ubi)
+ 		ubi->fm_anchor = NULL;
+ 	}
+ 
+-	if (ubi->fm_next_anchor) {
+-		return_unused_peb(ubi, ubi->fm_next_anchor);
+-		ubi->fm_next_anchor = NULL;
+-	}
+-
+ 	if (ubi->fm) {
+ 		for (i = 0; i < ubi->fm->used_blocks; i++)
+ 			kfree(ubi->fm->e[i]);
+diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c
+index 6b5f1ffd961b9..6e95c4b1473e6 100644
+--- a/drivers/mtd/ubi/fastmap.c
++++ b/drivers/mtd/ubi/fastmap.c
+@@ -1230,17 +1230,6 @@ static int ubi_write_fastmap(struct ubi_device *ubi,
+ 		fm_pos += sizeof(*fec);
+ 		ubi_assert(fm_pos <= ubi->fm_size);
+ 	}
+-	if (ubi->fm_next_anchor) {
+-		fec = (struct ubi_fm_ec *)(fm_raw + fm_pos);
+-
+-		fec->pnum = cpu_to_be32(ubi->fm_next_anchor->pnum);
+-		set_seen(ubi, ubi->fm_next_anchor->pnum, seen_pebs);
+-		fec->ec = cpu_to_be32(ubi->fm_next_anchor->ec);
+-
+-		free_peb_count++;
+-		fm_pos += sizeof(*fec);
+-		ubi_assert(fm_pos <= ubi->fm_size);
+-	}
+ 	fmh->free_peb_count = cpu_to_be32(free_peb_count);
+ 
+ 	ubi_for_each_used_peb(ubi, wl_e, tmp_rb) {
+diff --git a/drivers/mtd/ubi/ubi.h b/drivers/mtd/ubi/ubi.h
+index c2da77163f948..da0bee13fe7f0 100644
+--- a/drivers/mtd/ubi/ubi.h
++++ b/drivers/mtd/ubi/ubi.h
+@@ -491,8 +491,7 @@ struct ubi_debug_info {
+  * @fm_work: fastmap work queue
+  * @fm_work_scheduled: non-zero if fastmap work was scheduled
+  * @fast_attach: non-zero if UBI was attached by fastmap
+- * @fm_anchor: The new anchor PEB used during fastmap update
+- * @fm_next_anchor: An anchor PEB candidate for the next time fastmap is updated
++ * @fm_anchor: The next anchor PEB to use for fastmap
+  * @fm_do_produce_anchor: If true produce an anchor PEB in wl
+  *
+  * @used: RB-tree of used physical eraseblocks
+@@ -603,7 +602,6 @@ struct ubi_device {
+ 	int fm_work_scheduled;
+ 	int fast_attach;
+ 	struct ubi_wl_entry *fm_anchor;
+-	struct ubi_wl_entry *fm_next_anchor;
+ 	int fm_do_produce_anchor;
+ 
+ 	/* Wear-leveling sub-system's stuff */
+diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
+index 1bc7b3a056046..6ea95ade4ca6b 100644
+--- a/drivers/mtd/ubi/vmt.c
++++ b/drivers/mtd/ubi/vmt.c
+@@ -309,7 +309,6 @@ out_mapping:
+ 	ubi->volumes[vol_id] = NULL;
+ 	ubi->vol_count -= 1;
+ 	spin_unlock(&ubi->volumes_lock);
+-	ubi_eba_destroy_table(eba_tbl);
+ out_acc:
+ 	spin_lock(&ubi->volumes_lock);
+ 	ubi->rsvd_pebs -= vol->reserved_pebs;
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 7847de75a74ca..820b5c1c8e8e7 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -688,16 +688,16 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ 
+ #ifdef CONFIG_MTD_UBI_FASTMAP
+ 	e1 = find_anchor_wl_entry(&ubi->used);
+-	if (e1 && ubi->fm_next_anchor &&
+-	    (ubi->fm_next_anchor->ec - e1->ec >= UBI_WL_THRESHOLD)) {
++	if (e1 && ubi->fm_anchor &&
++	    (ubi->fm_anchor->ec - e1->ec >= UBI_WL_THRESHOLD)) {
+ 		ubi->fm_do_produce_anchor = 1;
+-		/* fm_next_anchor is no longer considered a good anchor
+-		 * candidate.
++		/*
++		 * fm_anchor is no longer considered a good anchor.
+ 		 * NULL assignment also prevents multiple wear level checks
+ 		 * of this PEB.
+ 		 */
+-		wl_tree_add(ubi->fm_next_anchor, &ubi->free);
+-		ubi->fm_next_anchor = NULL;
++		wl_tree_add(ubi->fm_anchor, &ubi->free);
++		ubi->fm_anchor = NULL;
+ 		ubi->free_count++;
+ 	}
+ 
+@@ -1086,12 +1086,13 @@ static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk)
+ 	if (!err) {
+ 		spin_lock(&ubi->wl_lock);
+ 
+-		if (!ubi->fm_disabled && !ubi->fm_next_anchor &&
++		if (!ubi->fm_disabled && !ubi->fm_anchor &&
+ 		    e->pnum < UBI_FM_MAX_START) {
+-			/* Abort anchor production, if needed it will be
++			/*
++			 * Abort anchor production, if needed it will be
+ 			 * enabled again in the wear leveling started below.
+ 			 */
+-			ubi->fm_next_anchor = e;
++			ubi->fm_anchor = e;
+ 			ubi->fm_do_produce_anchor = 0;
+ 		} else {
+ 			wl_tree_add(e, &ubi->free);
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 4abae06499a96..70895e480683d 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1981,8 +1981,10 @@ static int gswip_gphy_fw_list(struct gswip_priv *priv,
+ 	for_each_available_child_of_node(gphy_fw_list_np, gphy_fw_np) {
+ 		err = gswip_gphy_fw_probe(priv, &priv->gphy_fw[i],
+ 					  gphy_fw_np, i);
+-		if (err)
++		if (err) {
++			of_node_put(gphy_fw_np);
+ 			goto remove_gphy;
++		}
+ 		i++;
+ 	}
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index e79a808375fc8..7b7a8a74405df 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3148,6 +3148,7 @@ static int mv88e6xxx_mdios_register(struct mv88e6xxx_chip *chip,
+ 	 */
+ 	child = of_get_child_by_name(np, "mdio");
+ 	err = mv88e6xxx_mdio_register(chip, child, false);
++	of_node_put(child);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c
+index a7d8d45e0e941..b779f3adbc568 100644
+--- a/drivers/net/ethernet/altera/altera_tse_main.c
++++ b/drivers/net/ethernet/altera/altera_tse_main.c
+@@ -163,7 +163,8 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id)
+ 	mdio = mdiobus_alloc();
+ 	if (mdio == NULL) {
+ 		netdev_err(dev, "Error allocating MDIO bus\n");
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto put_node;
+ 	}
+ 
+ 	mdio->name = ALTERA_TSE_RESOURCE_NAME;
+@@ -180,6 +181,7 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id)
+ 			   mdio->id);
+ 		goto out_free_mdio;
+ 	}
++	of_node_put(mdio_node);
+ 
+ 	if (netif_msg_drv(priv))
+ 		netdev_info(dev, "MDIO bus %s: created\n", mdio->id);
+@@ -189,6 +191,8 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id)
+ out_free_mdio:
+ 	mdiobus_free(mdio);
+ 	mdio = NULL;
++put_node:
++	of_node_put(mdio_node);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index 214a38de3f415..aaebdae8b5fff 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -1157,9 +1157,9 @@ static int ixgbe_update_vf_xcast_mode(struct ixgbe_adapter *adapter,
+ 
+ 	switch (xcast_mode) {
+ 	case IXGBEVF_XCAST_MODE_NONE:
+-		disable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE |
++		disable = IXGBE_VMOLR_ROMPE |
+ 			  IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE;
+-		enable = 0;
++		enable = IXGBE_VMOLR_BAM;
+ 		break;
+ 	case IXGBEVF_XCAST_MODE_MULTI:
+ 		disable = IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE;
+@@ -1181,9 +1181,9 @@ static int ixgbe_update_vf_xcast_mode(struct ixgbe_adapter *adapter,
+ 			return -EPERM;
+ 		}
+ 
+-		disable = 0;
++		disable = IXGBE_VMOLR_VPE;
+ 		enable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE |
+-			 IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE;
++			 IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE;
+ 		break;
+ 	default:
+ 		return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 7d7dc0754a3a1..789642647cd32 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -1966,6 +1966,9 @@ static int mtk_hwlro_get_fdir_entry(struct net_device *dev,
+ 	struct ethtool_rx_flow_spec *fsp =
+ 		(struct ethtool_rx_flow_spec *)&cmd->fs;
+ 
++	if (fsp->location >= ARRAY_SIZE(mac->hwlro_ip))
++		return -EINVAL;
++
+ 	/* only tcp dst ipv4 is meaningful, others are meaningless */
+ 	fsp->flow_type = TCP_V4_FLOW;
+ 	fsp->h_u.tcp_ip4_spec.ip4dst = ntohl(mac->hwlro_ip[fsp->location]);
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+index 01275c376721c..962851000ace4 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+@@ -2099,7 +2099,7 @@ static int mlx4_en_get_module_eeprom(struct net_device *dev,
+ 			en_err(priv,
+ 			       "mlx4_get_module_info i(%d) offset(%d) bytes_to_read(%d) - FAILED (0x%x)\n",
+ 			       i, offset, ee->len - i, ret);
+-			return 0;
++			return ret;
+ 		}
+ 
+ 		i += ret;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index 857be86b4a11a..e8a4adccd2b26 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -675,6 +675,9 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work)
+ 	if (!tracer->owner)
+ 		return;
+ 
++	if (unlikely(!tracer->str_db.loaded))
++		goto arm;
++
+ 	block_count = tracer->buff.size / TRACER_BLOCK_SIZE_BYTE;
+ 	start_offset = tracer->buff.consumer_index * TRACER_BLOCK_SIZE_BYTE;
+ 
+@@ -732,6 +735,7 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work)
+ 						      &tmp_trace_block[TRACES_PER_BLOCK - 1]);
+ 	}
+ 
++arm:
+ 	mlx5_fw_tracer_arm(dev);
+ }
+ 
+@@ -1138,8 +1142,7 @@ static int fw_tracer_event(struct notifier_block *nb, unsigned long action, void
+ 			queue_work(tracer->work_queue, &tracer->ownership_change_work);
+ 		break;
+ 	case MLX5_TRACER_SUBTYPE_TRACES_AVAILABLE:
+-		if (likely(tracer->str_db.loaded))
+-			queue_work(tracer->work_queue, &tracer->handle_traces_work);
++		queue_work(tracer->work_queue, &tracer->handle_traces_work);
+ 		break;
+ 	default:
+ 		mlx5_core_dbg(dev, "FWTracer: Event with unrecognized subtype: sub_type %d\n",
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index d9cc0ed6c5f75..cfc3bfcb04a2f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -4576,6 +4576,11 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
+ 
+ unlock:
+ 	mutex_unlock(&priv->state_lock);
++
++	/* Need to fix some features. */
++	if (!err)
++		netdev_update_features(netdev);
++
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 15472fb15d7d2..4bdcceffe9d38 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1520,9 +1520,22 @@ static struct mlx5_flow_rule *find_flow_rule(struct fs_fte *fte,
+ 	return NULL;
+ }
+ 
+-static bool check_conflicting_actions(u32 action1, u32 action2)
++static bool check_conflicting_actions_vlan(const struct mlx5_fs_vlan *vlan0,
++					   const struct mlx5_fs_vlan *vlan1)
+ {
+-	u32 xored_actions = action1 ^ action2;
++	return vlan0->ethtype != vlan1->ethtype ||
++	       vlan0->vid != vlan1->vid ||
++	       vlan0->prio != vlan1->prio;
++}
++
++static bool check_conflicting_actions(const struct mlx5_flow_act *act1,
++				      const struct mlx5_flow_act *act2)
++{
++	u32 action1 = act1->action;
++	u32 action2 = act2->action;
++	u32 xored_actions;
++
++	xored_actions = action1 ^ action2;
+ 
+ 	/* if one rule only wants to count, it's ok */
+ 	if (action1 == MLX5_FLOW_CONTEXT_ACTION_COUNT ||
+@@ -1539,6 +1552,22 @@ static bool check_conflicting_actions(u32 action1, u32 action2)
+ 			     MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2))
+ 		return true;
+ 
++	if (action1 & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT &&
++	    act1->pkt_reformat != act2->pkt_reformat)
++		return true;
++
++	if (action1 & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
++	    act1->modify_hdr != act2->modify_hdr)
++		return true;
++
++	if (action1 & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH &&
++	    check_conflicting_actions_vlan(&act1->vlan[0], &act2->vlan[0]))
++		return true;
++
++	if (action1 & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2 &&
++	    check_conflicting_actions_vlan(&act1->vlan[1], &act2->vlan[1]))
++		return true;
++
+ 	return false;
+ }
+ 
+@@ -1546,7 +1575,7 @@ static int check_conflicting_ftes(struct fs_fte *fte,
+ 				  const struct mlx5_flow_context *flow_context,
+ 				  const struct mlx5_flow_act *flow_act)
+ {
+-	if (check_conflicting_actions(flow_act->action, fte->action.action)) {
++	if (check_conflicting_actions(flow_act, &fte->action)) {
+ 		mlx5_core_warn(get_dev(&fte->node),
+ 			       "Found two FTEs with conflicting actions\n");
+ 		return -EEXIST;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+index 96c39a17d0261..b227fa9ada46c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+@@ -43,11 +43,10 @@ static int set_miss_action(struct mlx5_flow_root_namespace *ns,
+ 	err = mlx5dr_table_set_miss_action(ft->fs_dr_table.dr_table, action);
+ 	if (err && action) {
+ 		err = mlx5dr_action_destroy(action);
+-		if (err) {
+-			action = NULL;
+-			mlx5_core_err(ns->dev, "Failed to destroy action (%d)\n",
+-				      err);
+-		}
++		if (err)
++			mlx5_core_err(ns->dev,
++				      "Failed to destroy action (%d)\n", err);
++		action = NULL;
+ 	}
+ 	ft->fs_dr_table.miss_action = action;
+ 	if (old_miss_action) {
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index cd0c9623f7dd2..e0b801d107396 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -286,8 +286,6 @@ nfp_net_get_link_ksettings(struct net_device *netdev,
+ 
+ 	/* Init to unknowns */
+ 	ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
+-	ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
+-	ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
+ 	cmd->base.port = PORT_OTHER;
+ 	cmd->base.speed = SPEED_UNKNOWN;
+ 	cmd->base.duplex = DUPLEX_UNKNOWN;
+@@ -295,6 +293,8 @@ nfp_net_get_link_ksettings(struct net_device *netdev,
+ 	port = nfp_port_from_netdev(netdev);
+ 	eth_port = nfp_port_get_eth_port(port);
+ 	if (eth_port) {
++		ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
++		ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
+ 		cmd->base.autoneg = eth_port->aneg != NFP_ANEG_DISABLED ?
+ 			AUTONEG_ENABLE : AUTONEG_DISABLE;
+ 		nfp_net_set_fec_link_mode(eth_port, cmd);
+diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
+index 2ab8571ef1cc0..d0f1b2dc7dff0 100644
+--- a/drivers/net/ethernet/sfc/efx_channels.c
++++ b/drivers/net/ethernet/sfc/efx_channels.c
+@@ -287,6 +287,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
+ 		efx->n_channels = 1;
+ 		efx->n_rx_channels = 1;
+ 		efx->n_tx_channels = 1;
++		efx->tx_channel_offset = 0;
+ 		efx->n_xdp_channels = 0;
+ 		efx->xdp_channel_offset = efx->n_channels;
+ 		rc = pci_enable_msi(efx->pci_dev);
+@@ -307,6 +308,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
+ 		efx->n_channels = 1 + (efx_separate_tx_channels ? 1 : 0);
+ 		efx->n_rx_channels = 1;
+ 		efx->n_tx_channels = 1;
++		efx->tx_channel_offset = 1;
+ 		efx->n_xdp_channels = 0;
+ 		efx->xdp_channel_offset = efx->n_channels;
+ 		efx->legacy_irq = efx->pci_dev->irq;
+@@ -858,10 +860,6 @@ int efx_set_channels(struct efx_nic *efx)
+ 	int xdp_queue_number;
+ 	int rc;
+ 
+-	efx->tx_channel_offset =
+-		efx_separate_tx_channels ?
+-		efx->n_channels - efx->n_tx_channels : 0;
+-
+ 	if (efx->xdp_tx_queue_count) {
+ 		EFX_WARN_ON_PARANOID(efx->xdp_tx_queues);
+ 
+diff --git a/drivers/net/ethernet/sfc/net_driver.h b/drivers/net/ethernet/sfc/net_driver.h
+index 9f7dfdf708cf9..8aecb4bd2c0d5 100644
+--- a/drivers/net/ethernet/sfc/net_driver.h
++++ b/drivers/net/ethernet/sfc/net_driver.h
+@@ -1522,7 +1522,7 @@ static inline bool efx_channel_is_xdp_tx(struct efx_channel *channel)
+ 
+ static inline bool efx_channel_has_tx_queues(struct efx_channel *channel)
+ {
+-	return true;
++	return channel && channel->channel >= channel->efx->tx_channel_offset;
+ }
+ 
+ static inline unsigned int efx_channel_num_tx_queues(struct efx_channel *channel)
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 0805edef56254..059d68d48f1e9 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -1716,6 +1716,7 @@ static int am65_cpsw_init_cpts(struct am65_cpsw_common *common)
+ 	if (IS_ERR(cpts)) {
+ 		int ret = PTR_ERR(cpts);
+ 
++		of_node_put(node);
+ 		if (ret == -EOPNOTSUPP) {
+ 			dev_info(dev, "cpts disabled\n");
+ 			return 0;
+@@ -2064,9 +2065,9 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
+ 	if (!node)
+ 		return -ENOENT;
+ 	common->port_num = of_get_child_count(node);
++	of_node_put(node);
+ 	if (common->port_num < 1 || common->port_num > AM65_CPSW_MAX_PORTS)
+ 		return -ENOENT;
+-	of_node_put(node);
+ 
+ 	if (common->port_num != 1)
+ 		return -EOPNOTSUPP;
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index c716074fdef0b..f86acad0aad44 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -137,6 +137,7 @@
+ #define DP83867_DOWNSHIFT_2_COUNT	2
+ #define DP83867_DOWNSHIFT_4_COUNT	4
+ #define DP83867_DOWNSHIFT_8_COUNT	8
++#define DP83867_SGMII_AUTONEG_EN	BIT(7)
+ 
+ /* CFG3 bits */
+ #define DP83867_CFG3_INT_OE			BIT(7)
+@@ -802,6 +803,32 @@ static int dp83867_phy_reset(struct phy_device *phydev)
+ 			 DP83867_PHYCR_FORCE_LINK_GOOD, 0);
+ }
+ 
++static void dp83867_link_change_notify(struct phy_device *phydev)
++{
++	/* There is a limitation in DP83867 PHY device where SGMII AN is
++	 * only triggered once after the device is booted up. Even after the
++	 * PHY TPI is down and up again, SGMII AN is not triggered and
++	 * hence no new in-band message from PHY to MAC side SGMII.
++	 * This could cause an issue during power up, when PHY is up prior
++	 * to MAC. At this condition, once MAC side SGMII is up, MAC side
++	 * SGMII wouldn`t receive new in-band message from TI PHY with
++	 * correct link status, speed and duplex info.
++	 * Thus, implemented a SW solution here to retrigger SGMII Auto-Neg
++	 * whenever there is a link change.
++	 */
++	if (phydev->interface == PHY_INTERFACE_MODE_SGMII) {
++		int val = 0;
++
++		val = phy_clear_bits(phydev, DP83867_CFG2,
++				     DP83867_SGMII_AUTONEG_EN);
++		if (val < 0)
++			return;
++
++		phy_set_bits(phydev, DP83867_CFG2,
++			     DP83867_SGMII_AUTONEG_EN);
++	}
++}
++
+ static struct phy_driver dp83867_driver[] = {
+ 	{
+ 		.phy_id		= DP83867_PHY_ID,
+@@ -826,6 +853,8 @@ static struct phy_driver dp83867_driver[] = {
+ 
+ 		.suspend	= genphy_suspend,
+ 		.resume		= genphy_resume,
++
++		.link_change_notify = dp83867_link_change_notify,
+ 	},
+ };
+ module_phy_driver(dp83867_driver);
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index c416ab1d2b008..c1cbdac4b376f 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -1008,7 +1008,6 @@ int __init mdio_bus_init(void)
+ 
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(mdio_bus_init);
+ 
+ #if IS_ENABLED(CONFIG_PHYLIB)
+ void mdio_bus_exit(void)
+diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c
+index 0194e80193d9c..d416365042462 100644
+--- a/drivers/nfc/st21nfca/se.c
++++ b/drivers/nfc/st21nfca/se.c
+@@ -304,6 +304,8 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
+ 	int r = 0;
+ 	struct device *dev = &hdev->ndev->dev;
+ 	struct nfc_evt_transaction *transaction;
++	u32 aid_len;
++	u8 params_len;
+ 
+ 	pr_debug("connectivity gate event: %x\n", event);
+ 
+@@ -312,43 +314,48 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
+ 		r = nfc_se_connectivity(hdev->ndev, host);
+ 	break;
+ 	case ST21NFCA_EVT_TRANSACTION:
+-		/*
+-		 * According to specification etsi 102 622
++		/* According to specification etsi 102 622
+ 		 * 11.2.2.4 EVT_TRANSACTION Table 52
+ 		 * Description	Tag	Length
+ 		 * AID		81	5 to 16
+ 		 * PARAMETERS	82	0 to 255
++		 *
++		 * The key differences are aid storage length is variably sized
++		 * in the packet, but fixed in nfc_evt_transaction, and that the aid_len
++		 * is u8 in the packet, but u32 in the structure, and the tags in
++		 * the packet are not included in nfc_evt_transaction.
++		 *
++		 * size in bytes: 1          1       5-16 1             1           0-255
++		 * offset:        0          1       2    aid_len + 2   aid_len + 3 aid_len + 4
++		 * member name:   aid_tag(M) aid_len aid  params_tag(M) params_len  params
++		 * example:       0x81       5-16    X    0x82 0-255    X
+ 		 */
+-		if (skb->len < NFC_MIN_AID_LENGTH + 2 &&
+-		    skb->data[0] != NFC_EVT_TRANSACTION_AID_TAG)
++		if (skb->len < 2 || skb->data[0] != NFC_EVT_TRANSACTION_AID_TAG)
+ 			return -EPROTO;
+ 
+-		transaction = devm_kzalloc(dev, skb->len - 2, GFP_KERNEL);
+-		if (!transaction)
+-			return -ENOMEM;
+-
+-		transaction->aid_len = skb->data[1];
++		aid_len = skb->data[1];
+ 
+-		/* Checking if the length of the AID is valid */
+-		if (transaction->aid_len > sizeof(transaction->aid))
+-			return -EINVAL;
++		if (skb->len < aid_len + 4 || aid_len > sizeof(transaction->aid))
++			return -EPROTO;
+ 
+-		memcpy(transaction->aid, &skb->data[2],
+-		       transaction->aid_len);
++		params_len = skb->data[aid_len + 3];
+ 
+-		/* Check next byte is PARAMETERS tag (82) */
+-		if (skb->data[transaction->aid_len + 2] !=
+-		    NFC_EVT_TRANSACTION_PARAMS_TAG)
++		/* Verify PARAMETERS tag is (82), and final check that there is enough
++		 * space in the packet to read everything.
++		 */
++		if ((skb->data[aid_len + 2] != NFC_EVT_TRANSACTION_PARAMS_TAG) ||
++		    (skb->len < aid_len + 4 + params_len))
+ 			return -EPROTO;
+ 
+-		transaction->params_len = skb->data[transaction->aid_len + 3];
++		transaction = devm_kzalloc(dev, sizeof(*transaction) + params_len, GFP_KERNEL);
++		if (!transaction)
++			return -ENOMEM;
+ 
+-		/* Total size is allocated (skb->len - 2) minus fixed array members */
+-		if (transaction->params_len > ((skb->len - 2) - sizeof(struct nfc_evt_transaction)))
+-			return -EINVAL;
++		transaction->aid_len = aid_len;
++		transaction->params_len = params_len;
+ 
+-		memcpy(transaction->params, skb->data +
+-		       transaction->aid_len + 4, transaction->params_len);
++		memcpy(transaction->aid, &skb->data[2], aid_len);
++		memcpy(transaction->params, &skb->data[aid_len + 4], params_len);
+ 
+ 		r = nfc_se_transaction(hdev->ndev, host, transaction);
+ 	break;
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 9c7cdc4e12f21..1b8b3c12eeced 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1192,12 +1192,6 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
+ 		goto err_disable_clocks;
+ 	}
+ 
+-	ret = clk_prepare_enable(res->pipe_clk);
+-	if (ret) {
+-		dev_err(dev, "cannot prepare/enable pipe clock\n");
+-		goto err_disable_clocks;
+-	}
+-
+ 	/* configure PCIe to RC mode */
+ 	writel(DEVICE_TYPE_RC, pcie->parf + PCIE20_PARF_DEVICE_TYPE);
+ 
+diff --git a/drivers/pcmcia/Kconfig b/drivers/pcmcia/Kconfig
+index 82d10b6661c73..73508fca520c0 100644
+--- a/drivers/pcmcia/Kconfig
++++ b/drivers/pcmcia/Kconfig
+@@ -151,7 +151,7 @@ config TCIC
+ 
+ config PCMCIA_ALCHEMY_DEVBOARD
+ 	tristate "Alchemy Db/Pb1xxx PCMCIA socket services"
+-	depends on MIPS_ALCHEMY && PCMCIA
++	depends on MIPS_DB1XXX && PCMCIA
+ 	help
+ 	  Enable this driver of you want PCMCIA support on your Alchemy
+ 	  Db1000, Db/Pb1100, Db/Pb1500, Db/Pb1550, Db/Pb1200, DB1300
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c
+index ea46950c5d2a9..afcc82ab32022 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp.c
+@@ -3141,7 +3141,7 @@ static int qcom_qmp_phy_power_on(struct phy *phy)
+ 
+ 	ret = reset_control_deassert(qmp->ufs_reset);
+ 	if (ret)
+-		goto err_lane_rst;
++		goto err_pcs_ready;
+ 
+ 	qcom_qmp_phy_configure(pcs_misc, cfg->regs, cfg->pcs_misc_tbl,
+ 			       cfg->pcs_misc_tbl_num);
+diff --git a/drivers/pwm/pwm-lp3943.c b/drivers/pwm/pwm-lp3943.c
+index bf3f14fb5f244..05e4120fd7022 100644
+--- a/drivers/pwm/pwm-lp3943.c
++++ b/drivers/pwm/pwm-lp3943.c
+@@ -125,6 +125,7 @@ static int lp3943_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	if (err)
+ 		return err;
+ 
++	duty_ns = min(duty_ns, period_ns);
+ 	val = (u8)(duty_ns * LP3943_MAX_DUTY / period_ns);
+ 
+ 	return lp3943_write_byte(lp3943, reg_duty, val);
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index 19903de6268db..a4db9f6100d2f 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -1388,9 +1388,9 @@ static int qcom_smd_parse_edge(struct device *dev,
+ 		edge->name = node->name;
+ 
+ 	irq = irq_of_parse_and_map(node, 0);
+-	if (irq < 0) {
++	if (!irq) {
+ 		dev_err(dev, "required smd interrupt missing\n");
+-		ret = irq;
++		ret = -EINVAL;
+ 		goto put_node;
+ 	}
+ 
+diff --git a/drivers/rtc/rtc-mt6397.c b/drivers/rtc/rtc-mt6397.c
+index 1894aded4c857..acfcb378767db 100644
+--- a/drivers/rtc/rtc-mt6397.c
++++ b/drivers/rtc/rtc-mt6397.c
+@@ -269,6 +269,8 @@ static int mtk_rtc_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -EINVAL;
+ 	rtc->addr_base = res->start;
+ 
+ 	rtc->data = of_device_get_match_data(&pdev->dev);
+diff --git a/drivers/scsi/myrb.c b/drivers/scsi/myrb.c
+index 5fa0f4ed6565f..ad17c2beaacad 100644
+--- a/drivers/scsi/myrb.c
++++ b/drivers/scsi/myrb.c
+@@ -1241,7 +1241,8 @@ static void myrb_cleanup(struct myrb_hba *cb)
+ 	myrb_unmap(cb);
+ 
+ 	if (cb->mmio_base) {
+-		cb->disable_intr(cb->io_base);
++		if (cb->disable_intr)
++			cb->disable_intr(cb->io_base);
+ 		iounmap(cb->mmio_base);
+ 	}
+ 	if (cb->irq)
+@@ -3515,9 +3516,13 @@ static struct myrb_hba *myrb_detect(struct pci_dev *pdev,
+ 	mutex_init(&cb->dcmd_mutex);
+ 	mutex_init(&cb->dma_mutex);
+ 	cb->pdev = pdev;
++	cb->host = shost;
+ 
+-	if (pci_enable_device(pdev))
+-		goto failure;
++	if (pci_enable_device(pdev)) {
++		dev_err(&pdev->dev, "Failed to enable PCI device\n");
++		scsi_host_put(shost);
++		return NULL;
++	}
+ 
+ 	if (privdata->hw_init == DAC960_PD_hw_init ||
+ 	    privdata->hw_init == DAC960_P_hw_init) {
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 56e2917085874..bd068d3bb455d 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3511,7 +3511,6 @@ static int sd_probe(struct device *dev)
+  out_put:
+ 	put_disk(gd);
+  out_free:
+-	sd_zbc_release_disk(sdkp);
+ 	kfree(sdkp);
+  out:
+ 	scsi_autopm_put_device(sdp);
+diff --git a/drivers/soc/rockchip/grf.c b/drivers/soc/rockchip/grf.c
+index 494cf2b5bf7b6..343ff61ccccbb 100644
+--- a/drivers/soc/rockchip/grf.c
++++ b/drivers/soc/rockchip/grf.c
+@@ -148,12 +148,14 @@ static int __init rockchip_grf_init(void)
+ 		return -ENODEV;
+ 	if (!match || !match->data) {
+ 		pr_err("%s: missing grf data\n", __func__);
++		of_node_put(np);
+ 		return -EINVAL;
+ 	}
+ 
+ 	grf_info = match->data;
+ 
+ 	grf = syscon_node_to_regmap(np);
++	of_node_put(np);
+ 	if (IS_ERR(grf)) {
+ 		pr_err("%s: could not get grf syscon\n", __func__);
+ 		return PTR_ERR(grf);
+diff --git a/drivers/staging/fieldbus/anybuss/host.c b/drivers/staging/fieldbus/anybuss/host.c
+index 549cb7d51af81..2a20a1767d778 100644
+--- a/drivers/staging/fieldbus/anybuss/host.c
++++ b/drivers/staging/fieldbus/anybuss/host.c
+@@ -1384,7 +1384,7 @@ anybuss_host_common_probe(struct device *dev,
+ 		goto err_device;
+ 	return cd;
+ err_device:
+-	device_unregister(&cd->client->dev);
++	put_device(&cd->client->dev);
+ err_kthread:
+ 	kthread_stop(cd->qthread);
+ err_reset:
+diff --git a/drivers/staging/greybus/audio_codec.c b/drivers/staging/greybus/audio_codec.c
+index 42ce6c88ea753..4ed29f852c23f 100644
+--- a/drivers/staging/greybus/audio_codec.c
++++ b/drivers/staging/greybus/audio_codec.c
+@@ -621,8 +621,8 @@ static int gbcodec_mute_stream(struct snd_soc_dai *dai, int mute, int stream)
+ 			break;
+ 	}
+ 	if (!data) {
+-		dev_err(dai->dev, "%s:%s DATA connection missing\n",
+-			dai->name, module->name);
++		dev_err(dai->dev, "%s DATA connection missing\n",
++			dai->name);
+ 		mutex_unlock(&codec->lock);
+ 		return -ENODEV;
+ 	}
+diff --git a/drivers/staging/rtl8192e/rtllib_softmac.c b/drivers/staging/rtl8192e/rtllib_softmac.c
+index e8e72f79ca007..aeb6f015fdda3 100644
+--- a/drivers/staging/rtl8192e/rtllib_softmac.c
++++ b/drivers/staging/rtl8192e/rtllib_softmac.c
+@@ -651,9 +651,9 @@ static void rtllib_beacons_stop(struct rtllib_device *ieee)
+ 	spin_lock_irqsave(&ieee->beacon_lock, flags);
+ 
+ 	ieee->beacon_txing = 0;
+-	del_timer_sync(&ieee->beacon_timer);
+ 
+ 	spin_unlock_irqrestore(&ieee->beacon_lock, flags);
++	del_timer_sync(&ieee->beacon_timer);
+ 
+ }
+ 
+diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c
+index 690b664df8fae..56a4476516440 100644
+--- a/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c
++++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c
+@@ -528,9 +528,9 @@ static void ieee80211_beacons_stop(struct ieee80211_device *ieee)
+ 	spin_lock_irqsave(&ieee->beacon_lock, flags);
+ 
+ 	ieee->beacon_txing = 0;
+-	del_timer_sync(&ieee->beacon_timer);
+ 
+ 	spin_unlock_irqrestore(&ieee->beacon_lock, flags);
++	del_timer_sync(&ieee->beacon_timer);
+ }
+ 
+ void ieee80211_stop_send_beacons(struct ieee80211_device *ieee)
+diff --git a/drivers/staging/rtl8712/os_intfs.c b/drivers/staging/rtl8712/os_intfs.c
+index 2214aca097308..daa3180dfde30 100644
+--- a/drivers/staging/rtl8712/os_intfs.c
++++ b/drivers/staging/rtl8712/os_intfs.c
+@@ -332,7 +332,6 @@ void r8712_free_drv_sw(struct _adapter *padapter)
+ 	r8712_free_evt_priv(&padapter->evtpriv);
+ 	r8712_DeInitSwLeds(padapter);
+ 	r8712_free_mlme_priv(&padapter->mlmepriv);
+-	r8712_free_io_queue(padapter);
+ 	_free_xmit_priv(&padapter->xmitpriv);
+ 	_r8712_free_sta_priv(&padapter->stapriv);
+ 	_r8712_free_recv_priv(&padapter->recvpriv);
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index fed96d4251bfa..68d66c3ce2c8f 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -266,6 +266,7 @@ static uint r8712_usb_dvobj_init(struct _adapter *padapter)
+ 
+ static void r8712_usb_dvobj_deinit(struct _adapter *padapter)
+ {
++	r8712_free_io_queue(padapter);
+ }
+ 
+ void rtl871x_intf_stop(struct _adapter *padapter)
+@@ -303,9 +304,6 @@ void r871x_dev_unload(struct _adapter *padapter)
+ 			rtl8712_hal_deinit(padapter);
+ 		}
+ 
+-		/*s6.*/
+-		if (padapter->dvobj_deinit)
+-			padapter->dvobj_deinit(padapter);
+ 		padapter->bup = false;
+ 	}
+ }
+@@ -541,13 +539,13 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf,
+ 		} else {
+ 			AutoloadFail = false;
+ 		}
+-		if (((mac[0] == 0xff) && (mac[1] == 0xff) &&
++		if ((!AutoloadFail) ||
++		    ((mac[0] == 0xff) && (mac[1] == 0xff) &&
+ 		     (mac[2] == 0xff) && (mac[3] == 0xff) &&
+ 		     (mac[4] == 0xff) && (mac[5] == 0xff)) ||
+ 		    ((mac[0] == 0x00) && (mac[1] == 0x00) &&
+ 		     (mac[2] == 0x00) && (mac[3] == 0x00) &&
+-		     (mac[4] == 0x00) && (mac[5] == 0x00)) ||
+-		     (!AutoloadFail)) {
++		     (mac[4] == 0x00) && (mac[5] == 0x00))) {
+ 			mac[0] = 0x00;
+ 			mac[1] = 0xe0;
+ 			mac[2] = 0x4c;
+@@ -610,6 +608,8 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
+ 	/* Stop driver mlme relation timer */
+ 	r8712_stop_drv_timers(padapter);
+ 	r871x_dev_unload(padapter);
++	if (padapter->dvobj_deinit)
++		padapter->dvobj_deinit(padapter);
+ 	r8712_free_drv_sw(padapter);
+ 	free_netdev(pnetdev);
+ 
+diff --git a/drivers/staging/rtl8712/usb_ops.c b/drivers/staging/rtl8712/usb_ops.c
+index e64845e6adf3d..af9966d03979c 100644
+--- a/drivers/staging/rtl8712/usb_ops.c
++++ b/drivers/staging/rtl8712/usb_ops.c
+@@ -29,7 +29,8 @@ static u8 usb_read8(struct intf_hdl *intfhdl, u32 addr)
+ 	u16 wvalue;
+ 	u16 index;
+ 	u16 len;
+-	__le32 data;
++	int status;
++	__le32 data = 0;
+ 	struct intf_priv *intfpriv = intfhdl->pintfpriv;
+ 
+ 	request = 0x05;
+@@ -37,8 +38,10 @@ static u8 usb_read8(struct intf_hdl *intfhdl, u32 addr)
+ 	index = 0;
+ 	wvalue = (u16)(addr & 0x0000ffff);
+ 	len = 1;
+-	r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, &data, len,
+-				requesttype);
++	status = r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index,
++					 &data, len, requesttype);
++	if (status < 0)
++		return 0;
+ 	return (u8)(le32_to_cpu(data) & 0x0ff);
+ }
+ 
+@@ -49,7 +52,8 @@ static u16 usb_read16(struct intf_hdl *intfhdl, u32 addr)
+ 	u16 wvalue;
+ 	u16 index;
+ 	u16 len;
+-	__le32 data;
++	int status;
++	__le32 data = 0;
+ 	struct intf_priv *intfpriv = intfhdl->pintfpriv;
+ 
+ 	request = 0x05;
+@@ -57,8 +61,10 @@ static u16 usb_read16(struct intf_hdl *intfhdl, u32 addr)
+ 	index = 0;
+ 	wvalue = (u16)(addr & 0x0000ffff);
+ 	len = 2;
+-	r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, &data, len,
+-				requesttype);
++	status = r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index,
++					 &data, len, requesttype);
++	if (status < 0)
++		return 0;
+ 	return (u16)(le32_to_cpu(data) & 0xffff);
+ }
+ 
+@@ -69,7 +75,8 @@ static u32 usb_read32(struct intf_hdl *intfhdl, u32 addr)
+ 	u16 wvalue;
+ 	u16 index;
+ 	u16 len;
+-	__le32 data;
++	int status;
++	__le32 data = 0;
+ 	struct intf_priv *intfpriv = intfhdl->pintfpriv;
+ 
+ 	request = 0x05;
+@@ -77,8 +84,10 @@ static u32 usb_read32(struct intf_hdl *intfhdl, u32 addr)
+ 	index = 0;
+ 	wvalue = (u16)(addr & 0x0000ffff);
+ 	len = 4;
+-	r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, &data, len,
+-				requesttype);
++	status = r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index,
++					 &data, len, requesttype);
++	if (status < 0)
++		return 0;
+ 	return le32_to_cpu(data);
+ }
+ 
+diff --git a/drivers/tty/goldfish.c b/drivers/tty/goldfish.c
+index c8c5cdfc5e199..abc84d84f6386 100644
+--- a/drivers/tty/goldfish.c
++++ b/drivers/tty/goldfish.c
+@@ -407,6 +407,7 @@ static int goldfish_tty_probe(struct platform_device *pdev)
+ err_tty_register_device_failed:
+ 	free_irq(irq, qtty);
+ err_dec_line_count:
++	tty_port_destroy(&qtty->port);
+ 	goldfish_tty_current_line_count--;
+ 	if (goldfish_tty_current_line_count == 0)
+ 		goldfish_tty_delete_driver();
+@@ -428,6 +429,7 @@ static int goldfish_tty_remove(struct platform_device *pdev)
+ 	iounmap(qtty->base);
+ 	qtty->base = NULL;
+ 	free_irq(qtty->irq, pdev);
++	tty_port_destroy(&qtty->port);
+ 	goldfish_tty_current_line_count--;
+ 	if (goldfish_tty_current_line_count == 0)
+ 		goldfish_tty_delete_driver();
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index 58190135efb7d..12dde01e576b5 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -2073,6 +2073,35 @@ static bool canon_copy_from_read_buf(struct tty_struct *tty,
+ 	return ldata->read_tail != canon_head;
+ }
+ 
++/*
++ * If we finished a read at the exact location of an
++ * EOF (special EOL character that's a __DISABLED_CHAR)
++ * in the stream, silently eat the EOF.
++ */
++static void canon_skip_eof(struct tty_struct *tty)
++{
++	struct n_tty_data *ldata = tty->disc_data;
++	size_t tail, canon_head;
++
++	canon_head = smp_load_acquire(&ldata->canon_head);
++	tail = ldata->read_tail;
++
++	// No data?
++	if (tail == canon_head)
++		return;
++
++	// See if the tail position is EOF in the circular buffer
++	tail &= (N_TTY_BUF_SIZE - 1);
++	if (!test_bit(tail, ldata->read_flags))
++		return;
++	if (read_buf(ldata, tail) != __DISABLED_CHAR)
++		return;
++
++	// Clear the EOL bit, skip the EOF char.
++	clear_bit(tail, ldata->read_flags);
++	smp_store_release(&ldata->read_tail, ldata->read_tail + 1);
++}
++
+ /**
+  *	job_control		-	check job control
+  *	@tty: tty
+@@ -2142,7 +2171,14 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+ 	 */
+ 	if (*cookie) {
+ 		if (ldata->icanon && !L_EXTPROC(tty)) {
+-			if (canon_copy_from_read_buf(tty, &kb, &nr))
++			/*
++			 * If we have filled the user buffer, see
++			 * if we should skip an EOF character before
++			 * releasing the lock and returning done.
++			 */
++			if (!nr)
++				canon_skip_eof(tty);
++			else if (canon_copy_from_read_buf(tty, &kb, &nr))
+ 				return kb - kbuf;
+ 		} else {
+ 			if (copy_from_read_buf(tty, &kb, &nr))
+diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c
+index 251f0018ae8ca..dba5950b8d0e2 100644
+--- a/drivers/tty/serial/8250/8250_fintek.c
++++ b/drivers/tty/serial/8250/8250_fintek.c
+@@ -200,12 +200,12 @@ static int fintek_8250_rs485_config(struct uart_port *port,
+ 	if (!pdata)
+ 		return -EINVAL;
+ 
+-	/* Hardware do not support same RTS level on send and receive */
+-	if (!(rs485->flags & SER_RS485_RTS_ON_SEND) ==
+-			!(rs485->flags & SER_RS485_RTS_AFTER_SEND))
+-		return -EINVAL;
+ 
+ 	if (rs485->flags & SER_RS485_ENABLED) {
++		/* Hardware do not support same RTS level on send and receive */
++		if (!(rs485->flags & SER_RS485_RTS_ON_SEND) ==
++		    !(rs485->flags & SER_RS485_RTS_AFTER_SEND))
++			return -EINVAL;
+ 		memset(rs485->padding, 0, sizeof(rs485->padding));
+ 		config |= RS485_URA;
+ 	} else {
+diff --git a/drivers/tty/serial/digicolor-usart.c b/drivers/tty/serial/digicolor-usart.c
+index c7f81aa1ce912..5fea9bf86e85e 100644
+--- a/drivers/tty/serial/digicolor-usart.c
++++ b/drivers/tty/serial/digicolor-usart.c
+@@ -309,6 +309,8 @@ static void digicolor_uart_set_termios(struct uart_port *port,
+ 	case CS8:
+ 	default:
+ 		config |= UA_CONFIG_CHAR_LEN;
++		termios->c_cflag &= ~CSIZE;
++		termios->c_cflag |= CS8;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index b9f8add284e33..52a603a6f9b88 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -229,8 +229,6 @@
+ /* IMX lpuart has four extra unused regs located at the beginning */
+ #define IMX_REG_OFF	0x10
+ 
+-static DEFINE_IDA(fsl_lpuart_ida);
+-
+ enum lpuart_type {
+ 	VF610_LPUART,
+ 	LS1021A_LPUART,
+@@ -265,7 +263,6 @@ struct lpuart_port {
+ 	int			rx_dma_rng_buf_len;
+ 	unsigned int		dma_tx_nents;
+ 	wait_queue_head_t	dma_wait;
+-	bool			id_allocated;
+ };
+ 
+ struct lpuart_soc_data {
+@@ -2638,23 +2635,18 @@ static int lpuart_probe(struct platform_device *pdev)
+ 
+ 	ret = of_alias_get_id(np, "serial");
+ 	if (ret < 0) {
+-		ret = ida_simple_get(&fsl_lpuart_ida, 0, UART_NR, GFP_KERNEL);
+-		if (ret < 0) {
+-			dev_err(&pdev->dev, "port line is full, add device failed\n");
+-			return ret;
+-		}
+-		sport->id_allocated = true;
++		dev_err(&pdev->dev, "failed to get alias id, errno %d\n", ret);
++		return ret;
+ 	}
+ 	if (ret >= ARRAY_SIZE(lpuart_ports)) {
+ 		dev_err(&pdev->dev, "serial%d out of range\n", ret);
+-		ret = -EINVAL;
+-		goto failed_out_of_range;
++		return -EINVAL;
+ 	}
+ 	sport->port.line = ret;
+ 
+ 	ret = lpuart_enable_clks(sport);
+ 	if (ret)
+-		goto failed_clock_enable;
++		return ret;
+ 	sport->port.uartclk = lpuart_get_baud_clk_rate(sport);
+ 
+ 	lpuart_ports[sport->port.line] = sport;
+@@ -2697,10 +2689,6 @@ failed_get_rs485:
+ failed_attach_port:
+ failed_irq_request:
+ 	lpuart_disable_clks(sport);
+-failed_clock_enable:
+-failed_out_of_range:
+-	if (sport->id_allocated)
+-		ida_simple_remove(&fsl_lpuart_ida, sport->port.line);
+ 	return ret;
+ }
+ 
+@@ -2710,9 +2698,6 @@ static int lpuart_remove(struct platform_device *pdev)
+ 
+ 	uart_remove_one_port(&lpuart_reg, &sport->port);
+ 
+-	if (sport->id_allocated)
+-		ida_simple_remove(&fsl_lpuart_ida, sport->port.line);
+-
+ 	lpuart_disable_clks(sport);
+ 
+ 	if (sport->dma_tx_chan)
+@@ -2842,7 +2827,6 @@ static int __init lpuart_serial_init(void)
+ 
+ static void __exit lpuart_serial_exit(void)
+ {
+-	ida_destroy(&fsl_lpuart_ida);
+ 	platform_driver_unregister(&lpuart_driver);
+ 	uart_unregister_driver(&lpuart_reg);
+ }
+diff --git a/drivers/tty/serial/icom.c b/drivers/tty/serial/icom.c
+index 94c8281ddb5f2..74b325c344da2 100644
+--- a/drivers/tty/serial/icom.c
++++ b/drivers/tty/serial/icom.c
+@@ -1503,7 +1503,7 @@ static int icom_probe(struct pci_dev *dev,
+ 	retval = pci_read_config_dword(dev, PCI_COMMAND, &command_reg);
+ 	if (retval) {
+ 		dev_err(&dev->dev, "PCI Config read FAILED\n");
+-		return retval;
++		goto probe_exit0;
+ 	}
+ 
+ 	pci_write_config_dword(dev, PCI_COMMAND,
+diff --git a/drivers/tty/serial/meson_uart.c b/drivers/tty/serial/meson_uart.c
+index d2c08b760f830..91b7359b79a2f 100644
+--- a/drivers/tty/serial/meson_uart.c
++++ b/drivers/tty/serial/meson_uart.c
+@@ -255,6 +255,14 @@ static const char *meson_uart_type(struct uart_port *port)
+ 	return (port->type == PORT_MESON) ? "meson_uart" : NULL;
+ }
+ 
++/*
++ * This function is called only from probe() using a temporary io mapping
++ * in order to perform a reset before setting up the device. Since the
++ * temporarily mapped region was successfully requested, there can be no
++ * console on this port at this time. Hence it is not necessary for this
++ * function to acquire the port->lock. (Since there is no console on this
++ * port at this time, the port->lock is not initialized yet.)
++ */
+ static void meson_uart_reset(struct uart_port *port)
+ {
+ 	u32 val;
+@@ -269,9 +277,12 @@ static void meson_uart_reset(struct uart_port *port)
+ 
+ static int meson_uart_startup(struct uart_port *port)
+ {
++	unsigned long flags;
+ 	u32 val;
+ 	int ret = 0;
+ 
++	spin_lock_irqsave(&port->lock, flags);
++
+ 	val = readl(port->membase + AML_UART_CONTROL);
+ 	val |= AML_UART_CLEAR_ERR;
+ 	writel(val, port->membase + AML_UART_CONTROL);
+@@ -287,6 +298,8 @@ static int meson_uart_startup(struct uart_port *port)
+ 	val = (AML_UART_RECV_IRQ(1) | AML_UART_XMIT_IRQ(port->fifosize / 2));
+ 	writel(val, port->membase + AML_UART_MISC);
+ 
++	spin_unlock_irqrestore(&port->lock, flags);
++
+ 	ret = request_irq(port->irq, meson_uart_interrupt, 0,
+ 			  port->name, port);
+ 
+diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c
+index 26bcbec5422e2..27023a56f3ac1 100644
+--- a/drivers/tty/serial/msm_serial.c
++++ b/drivers/tty/serial/msm_serial.c
+@@ -1593,6 +1593,7 @@ static inline struct uart_port *msm_get_port_from_line(unsigned int line)
+ static void __msm_console_write(struct uart_port *port, const char *s,
+ 				unsigned int count, bool is_uartdm)
+ {
++	unsigned long flags;
+ 	int i;
+ 	int num_newlines = 0;
+ 	bool replaced = false;
+@@ -1610,6 +1611,8 @@ static void __msm_console_write(struct uart_port *port, const char *s,
+ 			num_newlines++;
+ 	count += num_newlines;
+ 
++	local_irq_save(flags);
++
+ 	if (port->sysrq)
+ 		locked = 0;
+ 	else if (oops_in_progress)
+@@ -1655,6 +1658,8 @@ static void __msm_console_write(struct uart_port *port, const char *s,
+ 
+ 	if (locked)
+ 		spin_unlock(&port->lock);
++
++	local_irq_restore(flags);
+ }
+ 
+ static void msm_console_write(struct console *co, const char *s,
+diff --git a/drivers/tty/serial/owl-uart.c b/drivers/tty/serial/owl-uart.c
+index c149f8c300074..a0d4bffe70bdd 100644
+--- a/drivers/tty/serial/owl-uart.c
++++ b/drivers/tty/serial/owl-uart.c
+@@ -695,6 +695,7 @@ static int owl_uart_probe(struct platform_device *pdev)
+ 	owl_port->port.uartclk = clk_get_rate(owl_port->clk);
+ 	if (owl_port->port.uartclk == 0) {
+ 		dev_err(&pdev->dev, "clock rate is zero\n");
++		clk_disable_unprepare(owl_port->clk);
+ 		return -EINVAL;
+ 	}
+ 	owl_port->port.flags = UPF_BOOT_AUTOCONF | UPF_IOREMAP | UPF_LOW_LATENCY;
+diff --git a/drivers/tty/serial/rda-uart.c b/drivers/tty/serial/rda-uart.c
+index 85366e0592585..a45069e7ebea9 100644
+--- a/drivers/tty/serial/rda-uart.c
++++ b/drivers/tty/serial/rda-uart.c
+@@ -262,6 +262,8 @@ static void rda_uart_set_termios(struct uart_port *port,
+ 		fallthrough;
+ 	case CS7:
+ 		ctrl &= ~RDA_UART_DBITS_8;
++		termios->c_cflag &= ~CSIZE;
++		termios->c_cflag |= CS7;
+ 		break;
+ 	default:
+ 		ctrl |= RDA_UART_DBITS_8;
+diff --git a/drivers/tty/serial/sa1100.c b/drivers/tty/serial/sa1100.c
+index f5fab1dd96bcd..aa1cf2ae17a90 100644
+--- a/drivers/tty/serial/sa1100.c
++++ b/drivers/tty/serial/sa1100.c
+@@ -448,6 +448,8 @@ sa1100_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk/16); 
+ 	quot = uart_get_divisor(port, baud);
+ 
++	del_timer_sync(&sport->timer);
++
+ 	spin_lock_irqsave(&sport->port.lock, flags);
+ 
+ 	sport->port.read_status_mask &= UTSR0_TO_SM(UTSR0_TFS);
+@@ -478,8 +480,6 @@ sa1100_set_termios(struct uart_port *port, struct ktermios *termios,
+ 				UTSR1_TO_SM(UTSR1_ROR);
+ 	}
+ 
+-	del_timer_sync(&sport->timer);
+-
+ 	/*
+ 	 * Update the per-port timeout.
+ 	 */
+diff --git a/drivers/tty/serial/serial_txx9.c b/drivers/tty/serial/serial_txx9.c
+index 7a07e7272de12..7beec331010c5 100644
+--- a/drivers/tty/serial/serial_txx9.c
++++ b/drivers/tty/serial/serial_txx9.c
+@@ -644,6 +644,8 @@ serial_txx9_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	case CS6:	/* not supported */
+ 	case CS8:
+ 		cval |= TXX9_SILCR_UMODE_8BIT;
++		termios->c_cflag &= ~CSIZE;
++		termios->c_cflag |= CS8;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index f700bfaef1293..8d924727d6f0a 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -2392,8 +2392,12 @@ static void sci_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	int best_clk = -1;
+ 	unsigned long flags;
+ 
+-	if ((termios->c_cflag & CSIZE) == CS7)
++	if ((termios->c_cflag & CSIZE) == CS7) {
+ 		smr_val |= SCSMR_CHR;
++	} else {
++		termios->c_cflag &= ~CSIZE;
++		termios->c_cflag |= CS8;
++	}
+ 	if (termios->c_cflag & PARENB)
+ 		smr_val |= SCSMR_PE;
+ 	if (termios->c_cflag & PARODD)
+diff --git a/drivers/tty/serial/sifive.c b/drivers/tty/serial/sifive.c
+index 214bf3086c68a..91952be010740 100644
+--- a/drivers/tty/serial/sifive.c
++++ b/drivers/tty/serial/sifive.c
+@@ -667,12 +667,16 @@ static void sifive_serial_set_termios(struct uart_port *port,
+ 	int rate;
+ 	char nstop;
+ 
+-	if ((termios->c_cflag & CSIZE) != CS8)
++	if ((termios->c_cflag & CSIZE) != CS8) {
+ 		dev_err_once(ssp->port.dev, "only 8-bit words supported\n");
++		termios->c_cflag &= ~CSIZE;
++		termios->c_cflag |= CS8;
++	}
+ 	if (termios->c_iflag & (INPCK | PARMRK))
+ 		dev_err_once(ssp->port.dev, "parity checking not supported\n");
+ 	if (termios->c_iflag & BRKINT)
+ 		dev_err_once(ssp->port.dev, "BREAK detection not supported\n");
++	termios->c_iflag &= ~(INPCK|PARMRK|BRKINT);
+ 
+ 	/* Set number of stop bits */
+ 	nstop = (termios->c_cflag & CSTOPB) ? 2 : 1;
+@@ -999,7 +1003,7 @@ static int sifive_serial_probe(struct platform_device *pdev)
+ 	/* Set up clock divider */
+ 	ssp->clkin_rate = clk_get_rate(ssp->clk);
+ 	ssp->baud_rate = SIFIVE_DEFAULT_BAUD_RATE;
+-	ssp->port.uartclk = ssp->baud_rate * 16;
++	ssp->port.uartclk = ssp->clkin_rate;
+ 	__ssp_update_div(ssp);
+ 
+ 	platform_set_drvdata(pdev, ssp);
+diff --git a/drivers/tty/serial/st-asc.c b/drivers/tty/serial/st-asc.c
+index e7048515a79ca..97d36f870f640 100644
+--- a/drivers/tty/serial/st-asc.c
++++ b/drivers/tty/serial/st-asc.c
+@@ -535,10 +535,14 @@ static void asc_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	/* set character length */
+ 	if ((cflag & CSIZE) == CS7) {
+ 		ctrl_val |= ASC_CTL_MODE_7BIT_PAR;
++		cflag |= PARENB;
+ 	} else {
+ 		ctrl_val |= (cflag & PARENB) ?  ASC_CTL_MODE_8BIT_PAR :
+ 						ASC_CTL_MODE_8BIT;
++		cflag &= ~CSIZE;
++		cflag |= CS8;
+ 	}
++	termios->c_cflag = cflag;
+ 
+ 	/* set stop bit */
+ 	ctrl_val |= (cflag & CSTOPB) ? ASC_CTL_STOP_2BIT : ASC_CTL_STOP_1BIT;
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 6afae051ba8d1..8cd9e5b077b64 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -810,13 +810,22 @@ static void stm32_usart_set_termios(struct uart_port *port,
+ 	 * CS8 or (CS7 + parity), 8 bits word aka [M1:M0] = 0b00
+ 	 * M0 and M1 already cleared by cr1 initialization.
+ 	 */
+-	if (bits == 9)
++	if (bits == 9) {
+ 		cr1 |= USART_CR1_M0;
+-	else if ((bits == 7) && cfg->has_7bits_data)
++	} else if ((bits == 7) && cfg->has_7bits_data) {
+ 		cr1 |= USART_CR1_M1;
+-	else if (bits != 8)
++	} else if (bits != 8) {
+ 		dev_dbg(port->dev, "Unsupported data bits config: %u bits\n"
+ 			, bits);
++		cflag &= ~CSIZE;
++		cflag |= CS8;
++		termios->c_cflag = cflag;
++		bits = 8;
++		if (cflag & PARENB) {
++			bits++;
++			cr1 |= USART_CR1_M0;
++		}
++	}
+ 
+ 	if (ofs->rtor != UNDEF_REG && (stm32_port->rx_ch ||
+ 				       stm32_port->fifoen)) {
+diff --git a/drivers/tty/synclink_gt.c b/drivers/tty/synclink_gt.c
+index 1a0c7beec1019..0569d59491339 100644
+--- a/drivers/tty/synclink_gt.c
++++ b/drivers/tty/synclink_gt.c
+@@ -1749,6 +1749,8 @@ static int hdlcdev_init(struct slgt_info *info)
+  */
+ static void hdlcdev_exit(struct slgt_info *info)
+ {
++	if (!info->netdev)
++		return;
+ 	unregister_hdlc_device(info->netdev);
+ 	free_netdev(info->netdev);
+ 	info->netdev = NULL;
+diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
+index 959f9e121cc61..7ca209d4e0883 100644
+--- a/drivers/tty/sysrq.c
++++ b/drivers/tty/sysrq.c
+@@ -231,8 +231,10 @@ static void showacpu(void *dummy)
+ 	unsigned long flags;
+ 
+ 	/* Idle CPUs have no interesting backtrace. */
+-	if (idle_cpu(smp_processor_id()))
++	if (idle_cpu(smp_processor_id())) {
++		pr_info("CPU%d: backtrace skipped as idling\n", smp_processor_id());
+ 		return;
++	}
+ 
+ 	raw_spin_lock_irqsave(&show_lock, flags);
+ 	pr_info("CPU%d:\n", smp_processor_id());
+@@ -259,10 +261,13 @@ static void sysrq_handle_showallcpus(int key)
+ 
+ 		if (in_irq())
+ 			regs = get_irq_regs();
+-		if (regs) {
+-			pr_info("CPU%d:\n", smp_processor_id());
++
++		pr_info("CPU%d:\n", smp_processor_id());
++		if (regs)
+ 			show_regs(regs);
+-		}
++		else
++			show_stack(NULL, NULL, KERN_INFO);
++
+ 		schedule_work(&sysrq_showallcpus);
+ 	}
+ }
+diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c
+index ec0d6c50610ce..eee78cbfaa72f 100644
+--- a/drivers/usb/core/hcd-pci.c
++++ b/drivers/usb/core/hcd-pci.c
+@@ -614,10 +614,10 @@ const struct dev_pm_ops usb_hcd_pci_pm_ops = {
+ 	.suspend_noirq	= hcd_pci_suspend_noirq,
+ 	.resume_noirq	= hcd_pci_resume_noirq,
+ 	.resume		= hcd_pci_resume,
+-	.freeze		= check_root_hub_suspended,
++	.freeze		= hcd_pci_suspend,
+ 	.freeze_noirq	= check_root_hub_suspended,
+ 	.thaw_noirq	= NULL,
+-	.thaw		= NULL,
++	.thaw		= hcd_pci_resume,
+ 	.poweroff	= hcd_pci_suspend,
+ 	.poweroff_noirq	= hcd_pci_suspend_noirq,
+ 	.restore_noirq	= hcd_pci_resume_noirq,
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index ec54971063f8f..64485f82dc5b9 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -4518,7 +4518,6 @@ static int dwc2_hsotg_udc_start(struct usb_gadget *gadget,
+ 
+ 	WARN_ON(hsotg->driver);
+ 
+-	driver->driver.bus = NULL;
+ 	hsotg->driver = driver;
+ 	hsotg->gadget.dev.of_node = hsotg->dev->of_node;
+ 	hsotg->gadget.speed = USB_SPEED_UNKNOWN;
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 98df8d52c765c..a5a8c5712bce4 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -213,7 +213,7 @@ static void dwc3_pci_resume_work(struct work_struct *work)
+ 	int ret;
+ 
+ 	ret = pm_runtime_get_sync(&dwc3->dev);
+-	if (ret) {
++	if (ret < 0) {
+ 		pm_runtime_put_sync_autosuspend(&dwc3->dev);
+ 		return;
+ 	}
+diff --git a/drivers/usb/host/isp116x-hcd.c b/drivers/usb/host/isp116x-hcd.c
+index 3055d9abfec30..3e5c54742befe 100644
+--- a/drivers/usb/host/isp116x-hcd.c
++++ b/drivers/usb/host/isp116x-hcd.c
+@@ -1541,10 +1541,12 @@ static int isp116x_remove(struct platform_device *pdev)
+ 
+ 	iounmap(isp116x->data_reg);
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+-	release_mem_region(res->start, 2);
++	if (res)
++		release_mem_region(res->start, 2);
+ 	iounmap(isp116x->addr_reg);
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	release_mem_region(res->start, 2);
++	if (res)
++		release_mem_region(res->start, 2);
+ 
+ 	usb_put_hcd(hcd);
+ 	return 0;
+diff --git a/drivers/usb/host/oxu210hp-hcd.c b/drivers/usb/host/oxu210hp-hcd.c
+index e832909a924fa..6df2881cd7b94 100644
+--- a/drivers/usb/host/oxu210hp-hcd.c
++++ b/drivers/usb/host/oxu210hp-hcd.c
+@@ -3908,8 +3908,10 @@ static int oxu_bus_suspend(struct usb_hcd *hcd)
+ 		}
+ 	}
+ 
++	spin_unlock_irq(&oxu->lock);
+ 	/* turn off now-idle HC */
+ 	del_timer_sync(&oxu->watchdog);
++	spin_lock_irq(&oxu->lock);
+ 	ehci_halt(oxu);
+ 	hcd->state = HC_STATE_SUSPENDED;
+ 
+diff --git a/drivers/usb/musb/omap2430.c b/drivers/usb/musb/omap2430.c
+index 4232f1ce3fbfa..1d435e4ee857d 100644
+--- a/drivers/usb/musb/omap2430.c
++++ b/drivers/usb/musb/omap2430.c
+@@ -360,6 +360,7 @@ static int omap2430_probe(struct platform_device *pdev)
+ 	control_node = of_parse_phandle(np, "ctrl-module", 0);
+ 	if (control_node) {
+ 		control_pdev = of_find_device_by_node(control_node);
++		of_node_put(control_node);
+ 		if (!control_pdev) {
+ 			dev_err(&pdev->dev, "Failed to get control device\n");
+ 			ret = -EINVAL;
+diff --git a/drivers/usb/storage/karma.c b/drivers/usb/storage/karma.c
+index 05cec81dcd3f2..38ddfedef629c 100644
+--- a/drivers/usb/storage/karma.c
++++ b/drivers/usb/storage/karma.c
+@@ -174,24 +174,25 @@ static void rio_karma_destructor(void *extra)
+ 
+ static int rio_karma_init(struct us_data *us)
+ {
+-	int ret = 0;
+ 	struct karma_data *data = kzalloc(sizeof(struct karma_data), GFP_NOIO);
+ 
+ 	if (!data)
+-		goto out;
++		return -ENOMEM;
+ 
+ 	data->recv = kmalloc(RIO_RECV_LEN, GFP_NOIO);
+ 	if (!data->recv) {
+ 		kfree(data);
+-		goto out;
++		return -ENOMEM;
+ 	}
+ 
+ 	us->extra = data;
+ 	us->extra_destructor = rio_karma_destructor;
+-	ret = rio_karma_send_command(RIO_ENTER_STORAGE, us);
+-	data->in_storage = (ret == 0);
+-out:
+-	return ret;
++	if (rio_karma_send_command(RIO_ENTER_STORAGE, us))
++		return -EIO;
++
++	data->in_storage = 1;
++
++	return 0;
+ }
+ 
+ static struct scsi_host_template karma_host_template;
+diff --git a/drivers/usb/typec/mux.c b/drivers/usb/typec/mux.c
+index b9035c3407b56..a6e5028105d4e 100644
+--- a/drivers/usb/typec/mux.c
++++ b/drivers/usb/typec/mux.c
+@@ -127,8 +127,11 @@ typec_switch_register(struct device *parent,
+ 	sw->dev.class = &typec_mux_class;
+ 	sw->dev.type = &typec_switch_dev_type;
+ 	sw->dev.driver_data = desc->drvdata;
+-	dev_set_name(&sw->dev, "%s-switch",
+-		     desc->name ? desc->name : dev_name(parent));
++	ret = dev_set_name(&sw->dev, "%s-switch", desc->name ? desc->name : dev_name(parent));
++	if (ret) {
++		put_device(&sw->dev);
++		return ERR_PTR(ret);
++	}
+ 
+ 	ret = device_add(&sw->dev);
+ 	if (ret) {
+@@ -331,8 +334,11 @@ typec_mux_register(struct device *parent, const struct typec_mux_desc *desc)
+ 	mux->dev.class = &typec_mux_class;
+ 	mux->dev.type = &typec_mux_dev_type;
+ 	mux->dev.driver_data = desc->drvdata;
+-	dev_set_name(&mux->dev, "%s-mux",
+-		     desc->name ? desc->name : dev_name(parent));
++	ret = dev_set_name(&mux->dev, "%s-mux", desc->name ? desc->name : dev_name(parent));
++	if (ret) {
++		put_device(&mux->dev);
++		return ERR_PTR(ret);
++	}
+ 
+ 	ret = device_add(&mux->dev);
+ 	if (ret) {
+diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c
+index d8d3892e5a69a..3c6d452e3bf40 100644
+--- a/drivers/usb/usbip/stub_dev.c
++++ b/drivers/usb/usbip/stub_dev.c
+@@ -393,7 +393,6 @@ static int stub_probe(struct usb_device *udev)
+ 
+ err_port:
+ 	dev_set_drvdata(&udev->dev, NULL);
+-	usb_put_dev(udev);
+ 
+ 	/* we already have busid_priv, just lock busid_lock */
+ 	spin_lock(&busid_priv->busid_lock);
+@@ -408,6 +407,7 @@ call_put_busid_priv:
+ 	put_busid_priv(busid_priv);
+ 
+ sdev_free:
++	usb_put_dev(udev);
+ 	stub_device_free(sdev);
+ 
+ 	return rc;
+diff --git a/drivers/usb/usbip/stub_rx.c b/drivers/usb/usbip/stub_rx.c
+index 325c22008e536..5dd41e8215e0f 100644
+--- a/drivers/usb/usbip/stub_rx.c
++++ b/drivers/usb/usbip/stub_rx.c
+@@ -138,7 +138,9 @@ static int tweak_set_configuration_cmd(struct urb *urb)
+ 	req = (struct usb_ctrlrequest *) urb->setup_packet;
+ 	config = le16_to_cpu(req->wValue);
+ 
++	usb_lock_device(sdev->udev);
+ 	err = usb_set_configuration(sdev->udev, config);
++	usb_unlock_device(sdev->udev);
+ 	if (err && err != -ENODEV)
+ 		dev_err(&sdev->udev->dev, "can't set config #%d, error %d\n",
+ 			config, err);
+diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
+index 0bd7e64331f08..5a0340c85dc6b 100644
+--- a/drivers/vhost/vringh.c
++++ b/drivers/vhost/vringh.c
+@@ -274,7 +274,7 @@ __vringh_iov(struct vringh *vrh, u16 i,
+ 	     int (*copy)(const struct vringh *vrh,
+ 			 void *dst, const void *src, size_t len))
+ {
+-	int err, count = 0, up_next, desc_max;
++	int err, count = 0, indirect_count = 0, up_next, desc_max;
+ 	struct vring_desc desc, *descs;
+ 	struct vringh_range range = { -1ULL, 0 }, slowrange;
+ 	bool slow = false;
+@@ -331,7 +331,12 @@ __vringh_iov(struct vringh *vrh, u16 i,
+ 			continue;
+ 		}
+ 
+-		if (count++ == vrh->vring.num) {
++		if (up_next == -1)
++			count++;
++		else
++			indirect_count++;
++
++		if (count > vrh->vring.num || indirect_count > desc_max) {
+ 			vringh_bad("Descriptor loop in %p", descs);
+ 			err = -ELOOP;
+ 			goto fail;
+@@ -393,6 +398,7 @@ __vringh_iov(struct vringh *vrh, u16 i,
+ 				i = return_from_indirect(vrh, &up_next,
+ 							 &descs, &desc_max);
+ 				slow = false;
++				indirect_count = 0;
+ 			} else
+ 				break;
+ 		}
+diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
+index 3c309ab208874..40baa79f8046e 100644
+--- a/drivers/video/fbdev/hyperv_fb.c
++++ b/drivers/video/fbdev/hyperv_fb.c
+@@ -1008,7 +1008,6 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info)
+ 	struct pci_dev *pdev  = NULL;
+ 	void __iomem *fb_virt;
+ 	int gen2vm = efi_enabled(EFI_BOOT);
+-	resource_size_t pot_start, pot_end;
+ 	phys_addr_t paddr;
+ 	int ret;
+ 
+@@ -1059,23 +1058,7 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info)
+ 	dio_fb_size =
+ 		screen_width * screen_height * screen_depth / 8;
+ 
+-	if (gen2vm) {
+-		pot_start = 0;
+-		pot_end = -1;
+-	} else {
+-		if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM) ||
+-		    pci_resource_len(pdev, 0) < screen_fb_size) {
+-			pr_err("Resource not available or (0x%lx < 0x%lx)\n",
+-			       (unsigned long) pci_resource_len(pdev, 0),
+-			       (unsigned long) screen_fb_size);
+-			goto err1;
+-		}
+-
+-		pot_end = pci_resource_end(pdev, 0);
+-		pot_start = pot_end - screen_fb_size + 1;
+-	}
+-
+-	ret = vmbus_allocate_mmio(&par->mem, hdev, pot_start, pot_end,
++	ret = vmbus_allocate_mmio(&par->mem, hdev, 0, -1,
+ 				  screen_fb_size, 0x100000, true);
+ 	if (ret != 0) {
+ 		pr_err("Unable to allocate framebuffer memory\n");
+diff --git a/drivers/video/fbdev/pxa3xx-gcu.c b/drivers/video/fbdev/pxa3xx-gcu.c
+index 4279e13a3b58d..9421d14d0eb02 100644
+--- a/drivers/video/fbdev/pxa3xx-gcu.c
++++ b/drivers/video/fbdev/pxa3xx-gcu.c
+@@ -650,6 +650,7 @@ static int pxa3xx_gcu_probe(struct platform_device *pdev)
+ 	for (i = 0; i < 8; i++) {
+ 		ret = pxa3xx_gcu_add_buffer(dev, priv);
+ 		if (ret) {
++			pxa3xx_gcu_free_buffers(dev, priv);
+ 			dev_err(dev, "failed to allocate DMA memory\n");
+ 			goto err_disable_clk;
+ 		}
+@@ -666,15 +667,15 @@ static int pxa3xx_gcu_probe(struct platform_device *pdev)
+ 			SHARED_SIZE, irq);
+ 	return 0;
+ 
+-err_free_dma:
+-	dma_free_coherent(dev, SHARED_SIZE,
+-			priv->shared, priv->shared_phys);
++err_disable_clk:
++	clk_disable_unprepare(priv->clk);
+ 
+ err_misc_deregister:
+ 	misc_deregister(&priv->misc_dev);
+ 
+-err_disable_clk:
+-	clk_disable_unprepare(priv->clk);
++err_free_dma:
++	dma_free_coherent(dev, SHARED_SIZE,
++			  priv->shared, priv->shared_phys);
+ 
+ 	return ret;
+ }
+@@ -687,6 +688,7 @@ static int pxa3xx_gcu_remove(struct platform_device *pdev)
+ 	pxa3xx_gcu_wait_idle(priv);
+ 	misc_deregister(&priv->misc_dev);
+ 	dma_free_coherent(dev, SHARED_SIZE, priv->shared, priv->shared_phys);
++	clk_disable_unprepare(priv->clk);
+ 	pxa3xx_gcu_free_buffers(dev, priv);
+ 
+ 	return 0;
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index ae7f9357bb871..46c2a4bd9ebe9 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -227,7 +227,7 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(dev);
+ 	ret = pm_runtime_get_sync(dev);
+-	if (ret) {
++	if (ret < 0) {
+ 		pm_runtime_put_noidle(dev);
+ 		pm_runtime_disable(&pdev->dev);
+ 		return dev_err_probe(dev, ret, "runtime pm failed\n");
+diff --git a/drivers/watchdog/ts4800_wdt.c b/drivers/watchdog/ts4800_wdt.c
+index c137ad2bd5c31..0ea554c7cda57 100644
+--- a/drivers/watchdog/ts4800_wdt.c
++++ b/drivers/watchdog/ts4800_wdt.c
+@@ -125,13 +125,16 @@ static int ts4800_wdt_probe(struct platform_device *pdev)
+ 	ret = of_property_read_u32_index(np, "syscon", 1, &reg);
+ 	if (ret < 0) {
+ 		dev_err(dev, "no offset in syscon\n");
++		of_node_put(syscon_np);
+ 		return ret;
+ 	}
+ 
+ 	/* allocate memory for watchdog struct */
+ 	wdt = devm_kzalloc(dev, sizeof(*wdt), GFP_KERNEL);
+-	if (!wdt)
++	if (!wdt) {
++		of_node_put(syscon_np);
+ 		return -ENOMEM;
++	}
+ 
+ 	/* set regmap and offset to know where to write */
+ 	wdt->feed_offset = reg;
+diff --git a/drivers/watchdog/wdat_wdt.c b/drivers/watchdog/wdat_wdt.c
+index 3065dd670a182..c60723f5ed99d 100644
+--- a/drivers/watchdog/wdat_wdt.c
++++ b/drivers/watchdog/wdat_wdt.c
+@@ -462,6 +462,7 @@ static int wdat_wdt_probe(struct platform_device *pdev)
+ 		return ret;
+ 
+ 	watchdog_set_nowayout(&wdat->wdd, nowayout);
++	watchdog_stop_on_reboot(&wdat->wdd);
+ 	return devm_watchdog_register_device(dev, &wdat->wdd);
+ }
+ 
+diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
+index 34742c6e189e3..f17c4c03db30c 100644
+--- a/drivers/xen/xlate_mmu.c
++++ b/drivers/xen/xlate_mmu.c
+@@ -261,7 +261,6 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(xen_xlate_map_ballooned_pages);
+ 
+ struct remap_pfn {
+ 	struct mm_struct *mm;
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 262c0ae505af9..159795059547f 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -412,8 +412,11 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode,
+ 		}
+ 
+ 		/* skip if starts before the current position */
+-		if (offset < curr)
++		if (offset < curr) {
++			if (next > curr)
++				ctx->pos = blkoff + next * sizeof(union afs_xdr_dirent);
+ 			continue;
++		}
+ 
+ 		/* found the next entry */
+ 		if (!dir_emit(ctx, dire->u.name, nlen,
+diff --git a/fs/ceph/xattr.c b/fs/ceph/xattr.c
+index 197cb12343414..76322c0f6e5f3 100644
+--- a/fs/ceph/xattr.c
++++ b/fs/ceph/xattr.c
+@@ -317,6 +317,14 @@ static ssize_t ceph_vxattrcb_snap_btime(struct ceph_inode_info *ci, char *val,
+ 	}
+ #define XATTR_RSTAT_FIELD(_type, _name)			\
+ 	XATTR_NAME_CEPH(_type, _name, VXATTR_FLAG_RSTAT)
++#define XATTR_RSTAT_FIELD_UPDATABLE(_type, _name)			\
++	{								\
++		.name = CEPH_XATTR_NAME(_type, _name),			\
++		.name_size = sizeof (CEPH_XATTR_NAME(_type, _name)),	\
++		.getxattr_cb = ceph_vxattrcb_ ## _type ## _ ## _name,	\
++		.exists_cb = NULL,					\
++		.flags = VXATTR_FLAG_RSTAT,				\
++	}
+ #define XATTR_LAYOUT_FIELD(_type, _name, _field)			\
+ 	{								\
+ 		.name = CEPH_XATTR_NAME2(_type, _name, _field),	\
+@@ -354,7 +362,7 @@ static struct ceph_vxattr ceph_dir_vxattrs[] = {
+ 	XATTR_RSTAT_FIELD(dir, rfiles),
+ 	XATTR_RSTAT_FIELD(dir, rsubdirs),
+ 	XATTR_RSTAT_FIELD(dir, rbytes),
+-	XATTR_RSTAT_FIELD(dir, rctime),
++	XATTR_RSTAT_FIELD_UPDATABLE(dir, rctime),
+ 	{
+ 		.name = "ceph.dir.pin",
+ 		.name_size = sizeof("ceph.dir.pin"),
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 370188b2a55d2..bc957e6ca48b9 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -1033,7 +1033,7 @@ struct file_system_type cifs_fs_type = {
+ };
+ MODULE_ALIAS_FS("cifs");
+ 
+-static struct file_system_type smb3_fs_type = {
++struct file_system_type smb3_fs_type = {
+ 	.owner = THIS_MODULE,
+ 	.name = "smb3",
+ 	.mount = smb3_do_mount,
+diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h
+index 905d038637214..e996f0bef4145 100644
+--- a/fs/cifs/cifsfs.h
++++ b/fs/cifs/cifsfs.h
+@@ -51,7 +51,7 @@ static inline unsigned long cifs_get_time(struct dentry *dentry)
+ 	return (unsigned long) dentry->d_fsdata;
+ }
+ 
+-extern struct file_system_type cifs_fs_type;
++extern struct file_system_type cifs_fs_type, smb3_fs_type;
+ extern const struct address_space_operations cifs_addr_ops;
+ extern const struct address_space_operations cifs_addr_ops_smallbuf;
+ 
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 6599069be690e..196285b0fe46c 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1982,11 +1982,13 @@ extern mempool_t *cifs_mid_poolp;
+ 
+ /* Operations for different SMB versions */
+ #define SMB1_VERSION_STRING	"1.0"
++#define SMB20_VERSION_STRING    "2.0"
++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ extern struct smb_version_operations smb1_operations;
+ extern struct smb_version_values smb1_values;
+-#define SMB20_VERSION_STRING	"2.0"
+ extern struct smb_version_operations smb20_operations;
+ extern struct smb_version_values smb20_values;
++#endif /* CIFS_ALLOW_INSECURE_LEGACY */
+ #define SMB21_VERSION_STRING	"2.1"
+ extern struct smb_version_operations smb21_operations;
+ extern struct smb_version_values smb21_values;
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 1c14cf01dbef0..9d740916a8ee5 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -1053,18 +1053,23 @@ static struct super_block *__cifs_get_super(void (*f)(struct super_block *, void
+ 		.data = data,
+ 		.sb = NULL,
+ 	};
++	struct file_system_type **fs_type = (struct file_system_type *[]) {
++		&cifs_fs_type, &smb3_fs_type, NULL,
++	};
+ 
+-	iterate_supers_type(&cifs_fs_type, f, &sd);
+-
+-	if (!sd.sb)
+-		return ERR_PTR(-EINVAL);
+-	/*
+-	 * Grab an active reference in order to prevent automounts (DFS links)
+-	 * of expiring and then freeing up our cifs superblock pointer while
+-	 * we're doing failover.
+-	 */
+-	cifs_sb_active(sd.sb);
+-	return sd.sb;
++	for (; *fs_type; fs_type++) {
++		iterate_supers_type(*fs_type, f, &sd);
++		if (sd.sb) {
++			/*
++			 * Grab an active reference in order to prevent automounts (DFS links)
++			 * of expiring and then freeing up our cifs superblock pointer while
++			 * we're doing failover.
++			 */
++			cifs_sb_active(sd.sb);
++			return sd.sb;
++		}
++	}
++	return ERR_PTR(-EINVAL);
+ }
+ 
+ static void __cifs_put_super(struct super_block *sb)
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 7fea94ebda573..b855abfaaf87b 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -4032,11 +4032,13 @@ smb3_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+ 	}
+ }
+ 
++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ static bool
+ smb2_is_read_op(__u32 oplock)
+ {
+ 	return oplock == SMB2_OPLOCK_LEVEL_II;
+ }
++#endif /* CIFS_ALLOW_INSECURE_LEGACY */
+ 
+ static bool
+ smb21_is_read_op(__u32 oplock)
+@@ -5122,7 +5124,7 @@ out:
+ 	return rc;
+ }
+ 
+-
++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ struct smb_version_operations smb20_operations = {
+ 	.compare_fids = smb2_compare_fids,
+ 	.setup_request = smb2_setup_request,
+@@ -5220,6 +5222,7 @@ struct smb_version_operations smb20_operations = {
+ 	.llseek = smb3_llseek,
+ 	.is_status_io_timeout = smb2_is_status_io_timeout,
+ };
++#endif /* CIFS_ALLOW_INSECURE_LEGACY */
+ 
+ struct smb_version_operations smb21_operations = {
+ 	.compare_fids = smb2_compare_fids,
+@@ -5548,6 +5551,7 @@ struct smb_version_operations smb311_operations = {
+ 	.is_status_io_timeout = smb2_is_status_io_timeout,
+ };
+ 
++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ struct smb_version_values smb20_values = {
+ 	.version_string = SMB20_VERSION_STRING,
+ 	.protocol_id = SMB20_PROT_ID,
+@@ -5568,6 +5572,7 @@ struct smb_version_values smb20_values = {
+ 	.signing_required = SMB2_NEGOTIATE_SIGNING_REQUIRED,
+ 	.create_lease_size = sizeof(struct create_lease),
+ };
++#endif /* ALLOW_INSECURE_LEGACY */
+ 
+ struct smb_version_values smb21_values = {
+ 	.version_string = SMB21_VERSION_STRING,
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 88554b640b0da..24dd711fa9b95 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -281,6 +281,9 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ 			ses->binding_chan = NULL;
+ 			mutex_unlock(&tcon->ses->session_mutex);
+ 			goto failed;
++		} else if (rc) {
++			mutex_unlock(&ses->session_mutex);
++			goto out;
+ 		}
+ 	}
+ 	/*
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 77f30320f8628..1c49b9959b32a 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -148,7 +148,7 @@ static bool __is_bitmap_valid(struct f2fs_sb_info *sbi, block_t blkaddr,
+ 		f2fs_err(sbi, "Inconsistent error blkaddr:%u, sit bitmap:%d",
+ 			 blkaddr, exist);
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
+-		WARN_ON(1);
++		dump_stack();
+ 	}
+ 	return exist;
+ }
+@@ -186,7 +186,7 @@ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
+ 			f2fs_warn(sbi, "access invalid blkaddr:%u",
+ 				  blkaddr);
+ 			set_sbi_flag(sbi, SBI_NEED_FSCK);
+-			WARN_ON(1);
++			dump_stack();
+ 			return false;
+ 		} else {
+ 			return __is_bitmap_valid(sbi, blkaddr, type);
+diff --git a/fs/jffs2/fs.c b/fs/jffs2/fs.c
+index 7170de78cd260..db210989784d4 100644
+--- a/fs/jffs2/fs.c
++++ b/fs/jffs2/fs.c
+@@ -603,6 +603,7 @@ out_root:
+ 	jffs2_free_raw_node_refs(c);
+ 	kvfree(c->blocks);
+ 	jffs2_clear_xattr_subsystem(c);
++	jffs2_sum_exit(c);
+  out_inohash:
+ 	kfree(c->inocache_list);
+  out_wbuf:
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index 9aec80b9d7c6c..afb39e1bbe3bf 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -19,7 +19,15 @@
+ 
+ DEFINE_MUTEX(kernfs_mutex);
+ static DEFINE_SPINLOCK(kernfs_rename_lock);	/* kn->parent and ->name */
+-static char kernfs_pr_cont_buf[PATH_MAX];	/* protected by rename_lock */
++/*
++ * Don't use rename_lock to piggy back on pr_cont_buf. We don't want to
++ * call pr_cont() while holding rename_lock. Because sometimes pr_cont()
++ * will perform wakeups when releasing console_sem. Holding rename_lock
++ * will introduce deadlock if the scheduler reads the kernfs_name in the
++ * wakeup path.
++ */
++static DEFINE_SPINLOCK(kernfs_pr_cont_lock);
++static char kernfs_pr_cont_buf[PATH_MAX];	/* protected by pr_cont_lock */
+ static DEFINE_SPINLOCK(kernfs_idr_lock);	/* root->ino_idr */
+ 
+ #define rb_to_kn(X) rb_entry((X), struct kernfs_node, rb)
+@@ -230,12 +238,12 @@ void pr_cont_kernfs_name(struct kernfs_node *kn)
+ {
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&kernfs_rename_lock, flags);
++	spin_lock_irqsave(&kernfs_pr_cont_lock, flags);
+ 
+-	kernfs_name_locked(kn, kernfs_pr_cont_buf, sizeof(kernfs_pr_cont_buf));
++	kernfs_name(kn, kernfs_pr_cont_buf, sizeof(kernfs_pr_cont_buf));
+ 	pr_cont("%s", kernfs_pr_cont_buf);
+ 
+-	spin_unlock_irqrestore(&kernfs_rename_lock, flags);
++	spin_unlock_irqrestore(&kernfs_pr_cont_lock, flags);
+ }
+ 
+ /**
+@@ -249,10 +257,10 @@ void pr_cont_kernfs_path(struct kernfs_node *kn)
+ 	unsigned long flags;
+ 	int sz;
+ 
+-	spin_lock_irqsave(&kernfs_rename_lock, flags);
++	spin_lock_irqsave(&kernfs_pr_cont_lock, flags);
+ 
+-	sz = kernfs_path_from_node_locked(kn, NULL, kernfs_pr_cont_buf,
+-					  sizeof(kernfs_pr_cont_buf));
++	sz = kernfs_path_from_node(kn, NULL, kernfs_pr_cont_buf,
++				   sizeof(kernfs_pr_cont_buf));
+ 	if (sz < 0) {
+ 		pr_cont("(error)");
+ 		goto out;
+@@ -266,7 +274,7 @@ void pr_cont_kernfs_path(struct kernfs_node *kn)
+ 	pr_cont("%s", kernfs_pr_cont_buf);
+ 
+ out:
+-	spin_unlock_irqrestore(&kernfs_rename_lock, flags);
++	spin_unlock_irqrestore(&kernfs_pr_cont_lock, flags);
+ }
+ 
+ /**
+@@ -864,13 +872,12 @@ static struct kernfs_node *kernfs_walk_ns(struct kernfs_node *parent,
+ 
+ 	lockdep_assert_held(&kernfs_mutex);
+ 
+-	/* grab kernfs_rename_lock to piggy back on kernfs_pr_cont_buf */
+-	spin_lock_irq(&kernfs_rename_lock);
++	spin_lock_irq(&kernfs_pr_cont_lock);
+ 
+ 	len = strlcpy(kernfs_pr_cont_buf, path, sizeof(kernfs_pr_cont_buf));
+ 
+ 	if (len >= sizeof(kernfs_pr_cont_buf)) {
+-		spin_unlock_irq(&kernfs_rename_lock);
++		spin_unlock_irq(&kernfs_pr_cont_lock);
+ 		return NULL;
+ 	}
+ 
+@@ -882,7 +889,7 @@ static struct kernfs_node *kernfs_walk_ns(struct kernfs_node *parent,
+ 		parent = kernfs_find_ns(parent, name, ns);
+ 	}
+ 
+-	spin_unlock_irq(&kernfs_rename_lock);
++	spin_unlock_irq(&kernfs_pr_cont_lock);
+ 
+ 	return parent;
+ }
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index b6d60e69043ae..b22da4e3165b4 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -3086,6 +3086,10 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ 	}
+ 
+ out:
++	if (opendata->lgp) {
++		nfs4_lgopen_release(opendata->lgp);
++		opendata->lgp = NULL;
++	}
+ 	if (!opendata->cancelled)
+ 		nfs4_sequence_free_slot(&opendata->o_res.seq_res);
+ 	return ret;
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index 08ab5d1e3a3e8..8c7d01e907a31 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -1706,11 +1706,6 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent)
+ 	sbi->s_mount_opts = ZONEFS_MNTOPT_ERRORS_RO;
+ 	sbi->s_max_open_zones = bdev_max_open_zones(sb->s_bdev);
+ 	atomic_set(&sbi->s_open_zones, 0);
+-	if (!sbi->s_max_open_zones &&
+-	    sbi->s_mount_opts & ZONEFS_MNTOPT_EXPLICIT_OPEN) {
+-		zonefs_info(sb, "No open zones limit. Ignoring explicit_open mount option\n");
+-		sbi->s_mount_opts &= ~ZONEFS_MNTOPT_EXPLICIT_OPEN;
+-	}
+ 
+ 	ret = zonefs_read_super(sb);
+ 	if (ret)
+@@ -1729,6 +1724,12 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent)
+ 	zonefs_info(sb, "Mounting %u zones",
+ 		    blkdev_nr_zones(sb->s_bdev->bd_disk));
+ 
++	if (!sbi->s_max_open_zones &&
++	    sbi->s_mount_opts & ZONEFS_MNTOPT_EXPLICIT_OPEN) {
++		zonefs_info(sb, "No open zones limit. Ignoring explicit_open mount option\n");
++		sbi->s_mount_opts &= ~ZONEFS_MNTOPT_EXPLICIT_OPEN;
++	}
++
+ 	/* Create root directory inode */
+ 	ret = -ENOMEM;
+ 	inode = new_inode(sb);
+diff --git a/include/linux/iio/common/st_sensors.h b/include/linux/iio/common/st_sensors.h
+index 33e939977444b..c16a9dda3ad57 100644
+--- a/include/linux/iio/common/st_sensors.h
++++ b/include/linux/iio/common/st_sensors.h
+@@ -228,6 +228,7 @@ struct st_sensor_settings {
+  * @hw_irq_trigger: if we're using the hardware interrupt on the sensor.
+  * @hw_timestamp: Latest timestamp from the interrupt handler, when in use.
+  * @buffer_data: Data used by buffer part.
++ * @odr_lock: Local lock for preventing concurrent ODR accesses/changes
+  */
+ struct st_sensor_data {
+ 	struct device *dev;
+@@ -253,6 +254,8 @@ struct st_sensor_data {
+ 	s64 hw_timestamp;
+ 
+ 	char buffer_data[ST_SENSORS_MAX_BUFFER_SIZE] ____cacheline_aligned;
++
++	struct mutex odr_lock;
+ };
+ 
+ #ifdef CONFIG_IIO_BUFFER
+diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
+index 32809624d422e..e67ee4d7318f9 100644
+--- a/include/linux/jump_label.h
++++ b/include/linux/jump_label.h
+@@ -249,9 +249,9 @@ extern void static_key_disable_cpuslocked(struct static_key *key);
+ #include <linux/atomic.h>
+ #include <linux/bug.h>
+ 
+-static inline int static_key_count(struct static_key *key)
++static __always_inline int static_key_count(struct static_key *key)
+ {
+-	return atomic_read(&key->enabled);
++	return arch_atomic_read(&key->enabled);
+ }
+ 
+ static __always_inline void jump_label_init(void)
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index eba1f1cbc9fbd..6ca97729b54a4 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -4877,12 +4877,11 @@ struct mlx5_ifc_query_qp_out_bits {
+ 
+ 	u8         syndrome[0x20];
+ 
+-	u8         reserved_at_40[0x20];
+-	u8         ece[0x20];
++	u8         reserved_at_40[0x40];
+ 
+ 	u8         opt_param_mask[0x20];
+ 
+-	u8         reserved_at_a0[0x20];
++	u8         ece[0x20];
+ 
+ 	struct mlx5_ifc_qpc_bits qpc;
+ 
+diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
+index 843678bfc364f..2a63ef05a6cc0 100644
+--- a/include/linux/nodemask.h
++++ b/include/linux/nodemask.h
+@@ -42,11 +42,11 @@
+  * void nodes_shift_right(dst, src, n)	Shift right
+  * void nodes_shift_left(dst, src, n)	Shift left
+  *
+- * int first_node(mask)			Number lowest set bit, or MAX_NUMNODES
+- * int next_node(node, mask)		Next node past 'node', or MAX_NUMNODES
+- * int next_node_in(node, mask)		Next node past 'node', or wrap to first,
++ * unsigned int first_node(mask)	Number lowest set bit, or MAX_NUMNODES
++ * unsigend int next_node(node, mask)	Next node past 'node', or MAX_NUMNODES
++ * unsigned int next_node_in(node, mask) Next node past 'node', or wrap to first,
+  *					or MAX_NUMNODES
+- * int first_unset_node(mask)		First node not set in mask, or 
++ * unsigned int first_unset_node(mask)	First node not set in mask, or
+  *					MAX_NUMNODES
+  *
+  * nodemask_t nodemask_of_node(node)	Return nodemask with bit 'node' set
+@@ -153,7 +153,7 @@ static inline void __nodes_clear(nodemask_t *dstp, unsigned int nbits)
+ 
+ #define node_test_and_set(node, nodemask) \
+ 			__node_test_and_set((node), &(nodemask))
+-static inline int __node_test_and_set(int node, nodemask_t *addr)
++static inline bool __node_test_and_set(int node, nodemask_t *addr)
+ {
+ 	return test_and_set_bit(node, addr->bits);
+ }
+@@ -200,7 +200,7 @@ static inline void __nodes_complement(nodemask_t *dstp,
+ 
+ #define nodes_equal(src1, src2) \
+ 			__nodes_equal(&(src1), &(src2), MAX_NUMNODES)
+-static inline int __nodes_equal(const nodemask_t *src1p,
++static inline bool __nodes_equal(const nodemask_t *src1p,
+ 					const nodemask_t *src2p, unsigned int nbits)
+ {
+ 	return bitmap_equal(src1p->bits, src2p->bits, nbits);
+@@ -208,7 +208,7 @@ static inline int __nodes_equal(const nodemask_t *src1p,
+ 
+ #define nodes_intersects(src1, src2) \
+ 			__nodes_intersects(&(src1), &(src2), MAX_NUMNODES)
+-static inline int __nodes_intersects(const nodemask_t *src1p,
++static inline bool __nodes_intersects(const nodemask_t *src1p,
+ 					const nodemask_t *src2p, unsigned int nbits)
+ {
+ 	return bitmap_intersects(src1p->bits, src2p->bits, nbits);
+@@ -216,20 +216,20 @@ static inline int __nodes_intersects(const nodemask_t *src1p,
+ 
+ #define nodes_subset(src1, src2) \
+ 			__nodes_subset(&(src1), &(src2), MAX_NUMNODES)
+-static inline int __nodes_subset(const nodemask_t *src1p,
++static inline bool __nodes_subset(const nodemask_t *src1p,
+ 					const nodemask_t *src2p, unsigned int nbits)
+ {
+ 	return bitmap_subset(src1p->bits, src2p->bits, nbits);
+ }
+ 
+ #define nodes_empty(src) __nodes_empty(&(src), MAX_NUMNODES)
+-static inline int __nodes_empty(const nodemask_t *srcp, unsigned int nbits)
++static inline bool __nodes_empty(const nodemask_t *srcp, unsigned int nbits)
+ {
+ 	return bitmap_empty(srcp->bits, nbits);
+ }
+ 
+ #define nodes_full(nodemask) __nodes_full(&(nodemask), MAX_NUMNODES)
+-static inline int __nodes_full(const nodemask_t *srcp, unsigned int nbits)
++static inline bool __nodes_full(const nodemask_t *srcp, unsigned int nbits)
+ {
+ 	return bitmap_full(srcp->bits, nbits);
+ }
+@@ -260,15 +260,15 @@ static inline void __nodes_shift_left(nodemask_t *dstp,
+           > MAX_NUMNODES, then the silly min_ts could be dropped. */
+ 
+ #define first_node(src) __first_node(&(src))
+-static inline int __first_node(const nodemask_t *srcp)
++static inline unsigned int __first_node(const nodemask_t *srcp)
+ {
+-	return min_t(int, MAX_NUMNODES, find_first_bit(srcp->bits, MAX_NUMNODES));
++	return min_t(unsigned int, MAX_NUMNODES, find_first_bit(srcp->bits, MAX_NUMNODES));
+ }
+ 
+ #define next_node(n, src) __next_node((n), &(src))
+-static inline int __next_node(int n, const nodemask_t *srcp)
++static inline unsigned int __next_node(int n, const nodemask_t *srcp)
+ {
+-	return min_t(int,MAX_NUMNODES,find_next_bit(srcp->bits, MAX_NUMNODES, n+1));
++	return min_t(unsigned int, MAX_NUMNODES, find_next_bit(srcp->bits, MAX_NUMNODES, n+1));
+ }
+ 
+ /*
+@@ -276,7 +276,7 @@ static inline int __next_node(int n, const nodemask_t *srcp)
+  * the first node in src if needed.  Returns MAX_NUMNODES if src is empty.
+  */
+ #define next_node_in(n, src) __next_node_in((n), &(src))
+-int __next_node_in(int node, const nodemask_t *srcp);
++unsigned int __next_node_in(int node, const nodemask_t *srcp);
+ 
+ static inline void init_nodemask_of_node(nodemask_t *mask, int node)
+ {
+@@ -296,9 +296,9 @@ static inline void init_nodemask_of_node(nodemask_t *mask, int node)
+ })
+ 
+ #define first_unset_node(mask) __first_unset_node(&(mask))
+-static inline int __first_unset_node(const nodemask_t *maskp)
++static inline unsigned int __first_unset_node(const nodemask_t *maskp)
+ {
+-	return min_t(int,MAX_NUMNODES,
++	return min_t(unsigned int, MAX_NUMNODES,
+ 			find_first_zero_bit(maskp->bits, MAX_NUMNODES));
+ }
+ 
+@@ -435,11 +435,11 @@ static inline int num_node_state(enum node_states state)
+ 
+ #define first_online_node	first_node(node_states[N_ONLINE])
+ #define first_memory_node	first_node(node_states[N_MEMORY])
+-static inline int next_online_node(int nid)
++static inline unsigned int next_online_node(int nid)
+ {
+ 	return next_node(nid, node_states[N_ONLINE]);
+ }
+-static inline int next_memory_node(int nid)
++static inline unsigned int next_memory_node(int nid)
+ {
+ 	return next_node(nid, node_states[N_MEMORY]);
+ }
+diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
+index 010d581598873..9a58274e62173 100644
+--- a/include/net/flow_offload.h
++++ b/include/net/flow_offload.h
+@@ -568,5 +568,6 @@ int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch,
+ 				enum tc_setup_type type, void *data,
+ 				struct flow_block_offload *bo,
+ 				void (*cleanup)(struct flow_block_cb *block_cb));
++bool flow_indr_dev_exists(void);
+ 
+ #endif /* _NET_FLOW_OFFLOAD_H */
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 76bfb6cd5815d..b7907385a02ff 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1013,7 +1013,6 @@ struct nft_stats {
+ 
+ struct nft_hook {
+ 	struct list_head	list;
+-	bool			inactive;
+ 	struct nf_hook_ops	ops;
+ 	struct rcu_head		rcu;
+ };
+diff --git a/include/net/netfilter/nf_tables_offload.h b/include/net/netfilter/nf_tables_offload.h
+index 7a453a35a41dd..1058f38e2acab 100644
+--- a/include/net/netfilter/nf_tables_offload.h
++++ b/include/net/netfilter/nf_tables_offload.h
+@@ -91,7 +91,7 @@ int nft_flow_rule_offload_commit(struct net *net);
+ 	NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg)		\
+ 	memset(&(__reg)->mask, 0xff, (__reg)->len);
+ 
+-int nft_chain_offload_priority(struct nft_base_chain *basechain);
++bool nft_chain_offload_support(const struct nft_base_chain *basechain);
+ 
+ int nft_offload_init(void);
+ void nft_offload_exit(void);
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 1042c449e7db5..bed2387af456d 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -163,37 +163,17 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
+ 		if (spin_trylock(&qdisc->seqlock))
+ 			goto nolock_empty;
+ 
+-		/* Paired with smp_mb__after_atomic() to make sure
+-		 * STATE_MISSED checking is synchronized with clearing
+-		 * in pfifo_fast_dequeue().
++		/* No need to insist if the MISSED flag was already set.
++		 * Note that test_and_set_bit() also gives us memory ordering
++		 * guarantees wrt potential earlier enqueue() and below
++		 * spin_trylock(), both of which are necessary to prevent races
+ 		 */
+-		smp_mb__before_atomic();
+-
+-		/* If the MISSED flag is set, it means other thread has
+-		 * set the MISSED flag before second spin_trylock(), so
+-		 * we can return false here to avoid multi cpus doing
+-		 * the set_bit() and second spin_trylock() concurrently.
+-		 */
+-		if (test_bit(__QDISC_STATE_MISSED, &qdisc->state))
++		if (test_and_set_bit(__QDISC_STATE_MISSED, &qdisc->state))
+ 			return false;
+ 
+-		/* Set the MISSED flag before the second spin_trylock(),
+-		 * if the second spin_trylock() return false, it means
+-		 * other cpu holding the lock will do dequeuing for us
+-		 * or it will see the MISSED flag set after releasing
+-		 * lock and reschedule the net_tx_action() to do the
+-		 * dequeuing.
+-		 */
+-		set_bit(__QDISC_STATE_MISSED, &qdisc->state);
+-
+-		/* spin_trylock() only has load-acquire semantic, so use
+-		 * smp_mb__after_atomic() to ensure STATE_MISSED is set
+-		 * before doing the second spin_trylock().
+-		 */
+-		smp_mb__after_atomic();
+-
+-		/* Retry again in case other CPU may not see the new flag
+-		 * after it releases the lock at the end of qdisc_run_end().
++		/* Try to take the lock again to make sure that we will either
++		 * grab it or the CPU that still has it will see MISSED set
++		 * when testing it in qdisc_run_end()
+ 		 */
+ 		if (!spin_trylock(&qdisc->seqlock))
+ 			return false;
+@@ -217,6 +197,12 @@ static inline void qdisc_run_end(struct Qdisc *qdisc)
+ 	if (qdisc->flags & TCQ_F_NOLOCK) {
+ 		spin_unlock(&qdisc->seqlock);
+ 
++		/* spin_unlock() only has store-release semantic. The unlock
++		 * and test_bit() ordering is a store-load ordering, so a full
++		 * memory barrier is needed here.
++		 */
++		smp_mb();
++
+ 		if (unlikely(test_bit(__QDISC_STATE_MISSED,
+ 				      &qdisc->state))) {
+ 			clear_bit(__QDISC_STATE_MISSED, &qdisc->state);
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index d3a1f25f8ec2e..845a4c0524332 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -1653,6 +1653,11 @@ out:
+ 		CONT;							\
+ 	LDX_MEM_##SIZEOP:						\
+ 		DST = *(SIZE *)(unsigned long) (SRC + insn->off);	\
++		CONT;							\
++	LDX_PROBE_MEM_##SIZEOP:						\
++		bpf_probe_read_kernel(&DST, sizeof(SIZE),		\
++				      (const void *)(long) (SRC + insn->off));	\
++		DST = *((SIZE *)&DST);					\
+ 		CONT;
+ 
+ 	LDST(B,   u8)
+@@ -1660,15 +1665,6 @@ out:
+ 	LDST(W,  u32)
+ 	LDST(DW, u64)
+ #undef LDST
+-#define LDX_PROBE(SIZEOP, SIZE)							\
+-	LDX_PROBE_MEM_##SIZEOP:							\
+-		bpf_probe_read_kernel(&DST, SIZE, (const void *)(long) (SRC + insn->off));	\
+-		CONT;
+-	LDX_PROBE(B,  1)
+-	LDX_PROBE(H,  2)
+-	LDX_PROBE(W,  4)
+-	LDX_PROBE(DW, 8)
+-#undef LDX_PROBE
+ 
+ 	STX_XADD_W: /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
+ 		atomic_add((u32) SRC, (atomic_t *)(unsigned long)
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 953dd9568dd74..50200898410d5 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2784,7 +2784,7 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
+ }
+ EXPORT_SYMBOL_GPL(trace_event_buffer_lock_reserve);
+ 
+-static DEFINE_SPINLOCK(tracepoint_iter_lock);
++static DEFINE_RAW_SPINLOCK(tracepoint_iter_lock);
+ static DEFINE_MUTEX(tracepoint_printk_mutex);
+ 
+ static void output_printk(struct trace_event_buffer *fbuffer)
+@@ -2812,14 +2812,14 @@ static void output_printk(struct trace_event_buffer *fbuffer)
+ 
+ 	event = &fbuffer->trace_file->event_call->event;
+ 
+-	spin_lock_irqsave(&tracepoint_iter_lock, flags);
++	raw_spin_lock_irqsave(&tracepoint_iter_lock, flags);
+ 	trace_seq_init(&iter->seq);
+ 	iter->ent = fbuffer->entry;
+ 	event_call->event.funcs->trace(iter, 0, event);
+ 	trace_seq_putc(&iter->seq, 0);
+ 	printk("%s", iter->seq.buffer);
+ 
+-	spin_unlock_irqrestore(&tracepoint_iter_lock, flags);
++	raw_spin_unlock_irqrestore(&tracepoint_iter_lock, flags);
+ }
+ 
+ int tracepoint_printk_sysctl(struct ctl_table *table, int write,
+@@ -5907,12 +5907,18 @@ static void tracing_set_nop(struct trace_array *tr)
+ 	tr->current_trace = &nop_trace;
+ }
+ 
++static bool tracer_options_updated;
++
+ static void add_tracer_options(struct trace_array *tr, struct tracer *t)
+ {
+ 	/* Only enable if the directory has been created already. */
+ 	if (!tr->dir)
+ 		return;
+ 
++	/* Only create trace option files after update_tracer_options finish */
++	if (!tracer_options_updated)
++		return;
++
+ 	create_trace_option_files(tr, t);
+ }
+ 
+@@ -8649,6 +8655,7 @@ static void __update_tracer_options(struct trace_array *tr)
+ static void update_tracer_options(struct trace_array *tr)
+ {
+ 	mutex_lock(&trace_types_lock);
++	tracer_options_updated = true;
+ 	__update_tracer_options(tr);
+ 	mutex_unlock(&trace_types_lock);
+ }
+diff --git a/lib/Makefile b/lib/Makefile
+index d415fc7067c5b..69b8217652ed5 100644
+--- a/lib/Makefile
++++ b/lib/Makefile
+@@ -274,7 +274,7 @@ $(foreach file, $(libfdt_files), \
+ 	$(eval CFLAGS_$(file) = -I $(srctree)/scripts/dtc/libfdt))
+ lib-$(CONFIG_LIBFDT) += $(libfdt_files)
+ 
+-lib-$(CONFIG_BOOT_CONFIG) += bootconfig.o
++obj-$(CONFIG_BOOT_CONFIG) += bootconfig.o
+ 
+ obj-$(CONFIG_RBTREE_TEST) += rbtree_test.o
+ obj-$(CONFIG_INTERVAL_TREE_TEST) += interval_tree_test.o
+diff --git a/lib/nodemask.c b/lib/nodemask.c
+index 3aa454c54c0de..e22647f5181b3 100644
+--- a/lib/nodemask.c
++++ b/lib/nodemask.c
+@@ -3,9 +3,9 @@
+ #include <linux/module.h>
+ #include <linux/random.h>
+ 
+-int __next_node_in(int node, const nodemask_t *srcp)
++unsigned int __next_node_in(int node, const nodemask_t *srcp)
+ {
+-	int ret = __next_node(node, srcp);
++	unsigned int ret = __next_node(node, srcp);
+ 
+ 	if (ret == MAX_NUMNODES)
+ 		ret = __first_node(srcp);
+diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
+index e3f0d59068117..8d958290b7d22 100644
+--- a/net/core/flow_offload.c
++++ b/net/core/flow_offload.c
+@@ -566,3 +566,9 @@ int flow_indr_dev_setup_offload(struct net_device *dev,	struct Qdisc *sch,
+ 	return list_empty(&bo->cb_list) ? -EOPNOTSUPP : 0;
+ }
+ EXPORT_SYMBOL(flow_indr_dev_setup_offload);
++
++bool flow_indr_dev_exists(void)
++{
++	return !list_empty(&flow_block_indr_dev_list);
++}
++EXPORT_SYMBOL(flow_indr_dev_exists);
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 2a80038575d27..a7e32be8714f5 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -624,21 +624,20 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
+ 	}
+ 
+ 	if (dev->header_ops) {
+-		const int pull_len = tunnel->hlen + sizeof(struct iphdr);
+-
+ 		if (skb_cow_head(skb, 0))
+ 			goto free_skb;
+ 
+ 		tnl_params = (const struct iphdr *)skb->data;
+ 
+-		if (pull_len > skb_transport_offset(skb))
+-			goto free_skb;
+-
+ 		/* Pull skb since ip_tunnel_xmit() needs skb->data pointing
+ 		 * to gre header.
+ 		 */
+-		skb_pull(skb, pull_len);
++		skb_pull(skb, tunnel->hlen + sizeof(struct iphdr));
+ 		skb_reset_mac_header(skb);
++
++		if (skb->ip_summed == CHECKSUM_PARTIAL &&
++		    skb_checksum_start(skb) < skb->data)
++			goto free_skb;
+ 	} else {
+ 		if (skb_cow_head(skb, dev->needed_headroom))
+ 			goto free_skb;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 2e267b2e33e5a..54ed68e05b66a 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2667,12 +2667,15 @@ static void tcp_mtup_probe_success(struct sock *sk)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
++	u64 val;
+ 
+-	/* FIXME: breaks with very large cwnd */
+ 	tp->prior_ssthresh = tcp_current_ssthresh(sk);
+-	tp->snd_cwnd = tp->snd_cwnd *
+-		       tcp_mss_to_mtu(sk, tp->mss_cache) /
+-		       icsk->icsk_mtup.probe_size;
++
++	val = (u64)tp->snd_cwnd * tcp_mss_to_mtu(sk, tp->mss_cache);
++	do_div(val, icsk->icsk_mtup.probe_size);
++	WARN_ON_ONCE((u32)val != val);
++	tp->snd_cwnd = max_t(u32, 1U, val);
++
+ 	tp->snd_cwnd_cnt = 0;
+ 	tp->snd_cwnd_stamp = tcp_jiffies32;
+ 	tp->snd_ssthresh = tcp_current_ssthresh(sk);
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index e37ad0b3645c9..8634a5c853f51 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -4115,8 +4115,8 @@ int tcp_rtx_synack(const struct sock *sk, struct request_sock *req)
+ 	res = af_ops->send_synack(sk, NULL, &fl, req, NULL, TCP_SYNACK_NORMAL,
+ 				  NULL);
+ 	if (!res) {
+-		__TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
+-		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
++		TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
++		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
+ 		if (unlikely(tcp_passive_fastopen(sk)))
+ 			tcp_sk(sk)->total_retrans++;
+ 		trace_tcp_retransmit_synack(sk, req);
+diff --git a/net/ipv4/xfrm4_protocol.c b/net/ipv4/xfrm4_protocol.c
+index ea595c8549c77..cfd46222ef91b 100644
+--- a/net/ipv4/xfrm4_protocol.c
++++ b/net/ipv4/xfrm4_protocol.c
+@@ -307,4 +307,3 @@ void __init xfrm4_protocol_init(void)
+ {
+ 	xfrm_input_register_afinfo(&xfrm4_input_afinfo);
+ }
+-EXPORT_SYMBOL(xfrm4_protocol_init);
+diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c
+index 85dddfe3a2c6e..b9179708e3c1a 100644
+--- a/net/ipv6/seg6_hmac.c
++++ b/net/ipv6/seg6_hmac.c
+@@ -400,7 +400,6 @@ int __init seg6_hmac_init(void)
+ {
+ 	return seg6_hmac_init_algo();
+ }
+-EXPORT_SYMBOL(seg6_hmac_init);
+ 
+ int __net_init seg6_hmac_net_init(struct net *net)
+ {
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 6b7ed5568c090..2aa16a171285b 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2830,10 +2830,12 @@ static int pfkey_process(struct sock *sk, struct sk_buff *skb, const struct sadb
+ 	void *ext_hdrs[SADB_EXT_MAX];
+ 	int err;
+ 
+-	err = pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
+-			      BROADCAST_PROMISC_ONLY, NULL, sock_net(sk));
+-	if (err)
+-		return err;
++	/* Non-zero return value of pfkey_broadcast() does not always signal
++	 * an error and even on an actual error we may still want to process
++	 * the message so rather ignore the return value.
++	 */
++	pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
++			BROADCAST_PROMISC_ONLY, NULL, sock_net(sk));
+ 
+ 	memset(ext_hdrs, 0, sizeof(ext_hdrs));
+ 	err = parse_exthdrs(skb, hdr, ext_hdrs);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index ea162e36e0e4b..0c56a90c3f086 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -481,6 +481,7 @@ static int nft_trans_flowtable_add(struct nft_ctx *ctx, int msg_type,
+ 	if (msg_type == NFT_MSG_NEWFLOWTABLE)
+ 		nft_activate_next(ctx->net, flowtable);
+ 
++	INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans));
+ 	nft_trans_flowtable(trans) = flowtable;
+ 	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
+ 
+@@ -1733,7 +1734,6 @@ static struct nft_hook *nft_netdev_hook_alloc(struct net *net,
+ 		goto err_hook_dev;
+ 	}
+ 	hook->ops.dev = dev;
+-	hook->inactive = false;
+ 
+ 	return hook;
+ 
+@@ -1963,7 +1963,7 @@ static int nft_basechain_init(struct nft_base_chain *basechain, u8 family,
+ 	chain->flags |= NFT_CHAIN_BASE | flags;
+ 	basechain->policy = NF_ACCEPT;
+ 	if (chain->flags & NFT_CHAIN_HW_OFFLOAD &&
+-	    nft_chain_offload_priority(basechain) < 0)
++	    !nft_chain_offload_support(basechain))
+ 		return -EOPNOTSUPP;
+ 
+ 	flow_block_init(&basechain->flow_block);
+@@ -6694,11 +6694,15 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh,
+ 
+ 	if (nla[NFTA_FLOWTABLE_FLAGS]) {
+ 		flags = ntohl(nla_get_be32(nla[NFTA_FLOWTABLE_FLAGS]));
+-		if (flags & ~NFT_FLOWTABLE_MASK)
+-			return -EOPNOTSUPP;
++		if (flags & ~NFT_FLOWTABLE_MASK) {
++			err = -EOPNOTSUPP;
++			goto err_flowtable_update_hook;
++		}
+ 		if ((flowtable->data.flags & NFT_FLOWTABLE_HW_OFFLOAD) ^
+-		    (flags & NFT_FLOWTABLE_HW_OFFLOAD))
+-			return -EOPNOTSUPP;
++		    (flags & NFT_FLOWTABLE_HW_OFFLOAD)) {
++			err = -EOPNOTSUPP;
++			goto err_flowtable_update_hook;
++		}
+ 	} else {
+ 		flags = flowtable->data.flags;
+ 	}
+@@ -6880,6 +6884,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
+ {
+ 	const struct nlattr * const *nla = ctx->nla;
+ 	struct nft_flowtable_hook flowtable_hook;
++	LIST_HEAD(flowtable_del_list);
+ 	struct nft_hook *this, *hook;
+ 	struct nft_trans *trans;
+ 	int err;
+@@ -6895,7 +6900,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
+ 			err = -ENOENT;
+ 			goto err_flowtable_del_hook;
+ 		}
+-		hook->inactive = true;
++		list_move(&hook->list, &flowtable_del_list);
+ 	}
+ 
+ 	trans = nft_trans_alloc(ctx, NFT_MSG_DELFLOWTABLE,
+@@ -6908,6 +6913,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
+ 	nft_trans_flowtable(trans) = flowtable;
+ 	nft_trans_flowtable_update(trans) = true;
+ 	INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans));
++	list_splice(&flowtable_del_list, &nft_trans_flowtable_hooks(trans));
+ 	nft_flowtable_hook_release(&flowtable_hook);
+ 
+ 	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
+@@ -6915,13 +6921,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
+ 	return 0;
+ 
+ err_flowtable_del_hook:
+-	list_for_each_entry(this, &flowtable_hook.list, list) {
+-		hook = nft_hook_list_find(&flowtable->hook_list, this);
+-		if (!hook)
+-			break;
+-
+-		hook->inactive = false;
+-	}
++	list_splice(&flowtable_del_list, &flowtable->hook_list);
+ 	nft_flowtable_hook_release(&flowtable_hook);
+ 
+ 	return err;
+@@ -7587,6 +7587,9 @@ static void nft_commit_release(struct nft_trans *trans)
+ 		nf_tables_chain_destroy(&trans->ctx);
+ 		break;
+ 	case NFT_MSG_DELRULE:
++		if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
++			nft_flow_rule_destroy(nft_trans_flow_rule(trans));
++
+ 		nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans));
+ 		break;
+ 	case NFT_MSG_DELSET:
+@@ -7771,17 +7774,6 @@ void nft_chain_del(struct nft_chain *chain)
+ 	list_del_rcu(&chain->list);
+ }
+ 
+-static void nft_flowtable_hooks_del(struct nft_flowtable *flowtable,
+-				    struct list_head *hook_list)
+-{
+-	struct nft_hook *hook, *next;
+-
+-	list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) {
+-		if (hook->inactive)
+-			list_move(&hook->list, hook_list);
+-	}
+-}
+-
+ static void nf_tables_module_autoload_cleanup(struct net *net)
+ {
+ 	struct nft_module_request *req, *next;
+@@ -7957,6 +7949,9 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 			nf_tables_rule_notify(&trans->ctx,
+ 					      nft_trans_rule(trans),
+ 					      NFT_MSG_NEWRULE);
++			if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
++				nft_flow_rule_destroy(nft_trans_flow_rule(trans));
++
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_DELRULE:
+@@ -8045,8 +8040,6 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 			break;
+ 		case NFT_MSG_DELFLOWTABLE:
+ 			if (nft_trans_flowtable_update(trans)) {
+-				nft_flowtable_hooks_del(nft_trans_flowtable(trans),
+-							&nft_trans_flowtable_hooks(trans));
+ 				nf_tables_flowtable_notify(&trans->ctx,
+ 							   nft_trans_flowtable(trans),
+ 							   &nft_trans_flowtable_hooks(trans),
+@@ -8124,7 +8117,6 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ {
+ 	struct nft_trans *trans, *next;
+ 	struct nft_trans_elem *te;
+-	struct nft_hook *hook;
+ 
+ 	if (action == NFNL_ABORT_VALIDATE &&
+ 	    nf_tables_validate(net) < 0)
+@@ -8242,8 +8234,8 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			break;
+ 		case NFT_MSG_DELFLOWTABLE:
+ 			if (nft_trans_flowtable_update(trans)) {
+-				list_for_each_entry(hook, &nft_trans_flowtable(trans)->hook_list, list)
+-					hook->inactive = false;
++				list_splice(&nft_trans_flowtable_hooks(trans),
++					    &nft_trans_flowtable(trans)->hook_list);
+ 			} else {
+ 				trans->ctx.table->use++;
+ 				nft_clear(trans->ctx.net, nft_trans_flowtable(trans));
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index 839fd09f1bb4a..4e99b1731b3f9 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -208,7 +208,7 @@ static int nft_setup_cb_call(enum tc_setup_type type, void *type_data,
+ 	return 0;
+ }
+ 
+-int nft_chain_offload_priority(struct nft_base_chain *basechain)
++static int nft_chain_offload_priority(const struct nft_base_chain *basechain)
+ {
+ 	if (basechain->ops.priority <= 0 ||
+ 	    basechain->ops.priority > USHRT_MAX)
+@@ -217,6 +217,27 @@ int nft_chain_offload_priority(struct nft_base_chain *basechain)
+ 	return 0;
+ }
+ 
++bool nft_chain_offload_support(const struct nft_base_chain *basechain)
++{
++	struct net_device *dev;
++	struct nft_hook *hook;
++
++	if (nft_chain_offload_priority(basechain) < 0)
++		return false;
++
++	list_for_each_entry(hook, &basechain->hook_list, list) {
++		if (hook->ops.pf != NFPROTO_NETDEV ||
++		    hook->ops.hooknum != NF_NETDEV_INGRESS)
++			return false;
++
++		dev = hook->ops.dev;
++		if (!dev->netdev_ops->ndo_setup_tc && !flow_indr_dev_exists())
++			return false;
++	}
++
++	return true;
++}
++
+ static void nft_flow_cls_offload_setup(struct flow_cls_offload *cls_flow,
+ 				       const struct nft_base_chain *basechain,
+ 				       const struct nft_rule *rule,
+diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c
+index ea53fd999f465..6a4a5ac88db70 100644
+--- a/net/netfilter/nft_nat.c
++++ b/net/netfilter/nft_nat.c
+@@ -341,7 +341,8 @@ static void nft_nat_inet_eval(const struct nft_expr *expr,
+ {
+ 	const struct nft_nat *priv = nft_expr_priv(expr);
+ 
+-	if (priv->family == nft_pf(pkt))
++	if (priv->family == nft_pf(pkt) ||
++	    priv->family == NFPROTO_INET)
+ 		nft_nat_eval(expr, regs, pkt);
+ }
+ 
+diff --git a/net/smc/smc_cdc.c b/net/smc/smc_cdc.c
+index 0c490cdde6a49..94503f36b9a61 100644
+--- a/net/smc/smc_cdc.c
++++ b/net/smc/smc_cdc.c
+@@ -72,7 +72,7 @@ int smc_cdc_get_free_slot(struct smc_connection *conn,
+ 		/* abnormal termination */
+ 		if (!rc)
+ 			smc_wr_tx_put_slot(link,
+-					   (struct smc_wr_tx_pend_priv *)pend);
++					   (struct smc_wr_tx_pend_priv *)(*pend));
+ 		rc = -EPIPE;
+ 	}
+ 	return rc;
+diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
+index 71e03b930b70a..c8ed6d3d5762e 100644
+--- a/net/sunrpc/xdr.c
++++ b/net/sunrpc/xdr.c
+@@ -752,7 +752,11 @@ static __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr,
+ 	 */
+ 	xdr->p = (void *)p + frag2bytes;
+ 	space_left = xdr->buf->buflen - xdr->buf->len;
+-	xdr->end = (void *)p + min_t(int, space_left, PAGE_SIZE);
++	if (space_left - nbytes >= PAGE_SIZE)
++		xdr->end = (void *)p + PAGE_SIZE;
++	else
++		xdr->end = (void *)p + space_left - frag1bytes;
++
+ 	xdr->buf->page_len += frag2bytes;
+ 	xdr->buf->len += nbytes;
+ 	return p;
+diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
+index ca267a855a12c..b8174c77dfe17 100644
+--- a/net/sunrpc/xprtrdma/rpc_rdma.c
++++ b/net/sunrpc/xprtrdma/rpc_rdma.c
+@@ -1137,6 +1137,7 @@ static bool
+ rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep)
+ #if defined(CONFIG_SUNRPC_BACKCHANNEL)
+ {
++	struct rpc_xprt *xprt = &r_xprt->rx_xprt;
+ 	struct xdr_stream *xdr = &rep->rr_stream;
+ 	__be32 *p;
+ 
+@@ -1160,6 +1161,10 @@ rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep)
+ 	if (*p != cpu_to_be32(RPC_CALL))
+ 		return false;
+ 
++	/* No bc service. */
++	if (xprt->bc_serv == NULL)
++		return false;
++
+ 	/* Now that we are sure this is a backchannel call,
+ 	 * advance to the RPC header.
+ 	 */
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index 6911f1cab2063..72c31ef985eb3 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -249,9 +249,8 @@ static int tipc_enable_bearer(struct net *net, const char *name,
+ 	u32 i;
+ 
+ 	if (!bearer_name_validate(name, &b_names)) {
+-		errstr = "illegal name";
+ 		NL_SET_ERR_MSG(extack, "Illegal name");
+-		goto rejected;
++		return res;
+ 	}
+ 
+ 	if (prio > TIPC_MAX_LINK_PRI && prio != TIPC_MEDIA_LINK_PRI) {
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index b7edca89e0ba9..28721e9575b75 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -438,7 +438,7 @@ static int unix_dgram_peer_wake_me(struct sock *sk, struct sock *other)
+ 	 * -ECONNREFUSED. Otherwise, if we haven't queued any skbs
+ 	 * to other and its full, we will hang waiting for POLLOUT.
+ 	 */
+-	if (unix_recvq_full(other) && !sock_flag(other, SOCK_DEAD))
++	if (unix_recvq_full_lockless(other) && !sock_flag(other, SOCK_DEAD))
+ 		return 1;
+ 
+ 	if (connected)
+diff --git a/scripts/gdb/linux/config.py b/scripts/gdb/linux/config.py
+index 90e1565b19671..8843ab3cbaddc 100644
+--- a/scripts/gdb/linux/config.py
++++ b/scripts/gdb/linux/config.py
+@@ -24,9 +24,9 @@ class LxConfigDump(gdb.Command):
+             filename = arg
+ 
+         try:
+-            py_config_ptr = gdb.parse_and_eval("kernel_config_data + 8")
+-            py_config_size = gdb.parse_and_eval(
+-                    "sizeof(kernel_config_data) - 1 - 8 * 2")
++            py_config_ptr = gdb.parse_and_eval("&kernel_config_data")
++            py_config_ptr_end = gdb.parse_and_eval("&kernel_config_data_end")
++            py_config_size = py_config_ptr_end - py_config_ptr
+         except gdb.error as e:
+             raise gdb.GdbError("Can't find config, enable CONFIG_IKCONFIG?")
+ 
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index e08f75aed4293..79aef50ede170 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -1271,7 +1271,8 @@ static int secref_whitelist(const struct sectioncheck *mismatch,
+ 
+ static inline int is_arm_mapping_symbol(const char *str)
+ {
+-	return str[0] == '$' && strchr("axtd", str[1])
++	return str[0] == '$' &&
++	       (str[1] == 'a' || str[1] == 'd' || str[1] == 't' || str[1] == 'x')
+ 	       && (str[2] == '\0' || str[2] == '.');
+ }
+ 
+@@ -1982,7 +1983,7 @@ static char *remove_dot(char *s)
+ 
+ 	if (n && s[n]) {
+ 		size_t m = strspn(s + n + 1, "0123456789");
+-		if (m && (s[n + m] == '.' || s[n + m] == 0))
++		if (m && (s[n + m + 1] == '.' || s[n + m + 1] == 0))
+ 			s[n] = 0;
+ 	}
+ 	return s;
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 8098088b00568..0dd6d37db9666 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -1045,6 +1045,13 @@ static int patch_conexant_auto(struct hda_codec *codec)
+ 		snd_hda_pick_fixup(codec, cxt5051_fixup_models,
+ 				   cxt5051_fixups, cxt_fixups);
+ 		break;
++	case 0x14f15098:
++		codec->pin_amp_workaround = 1;
++		spec->gen.mixer_nid = 0x22;
++		spec->gen.add_stereo_mix_input = HDA_HINT_STEREO_MIX_AUTO;
++		snd_hda_pick_fixup(codec, cxt5066_fixup_models,
++				   cxt5066_fixups, cxt_fixups);
++		break;
+ 	case 0x14f150f2:
+ 		codec->power_save_node = 1;
+ 		fallthrough;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 71a9462e8f6ec..cf3b1133b7850 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8977,6 +8977,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
++	SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
+diff --git a/sound/soc/fsl/fsl_sai.h b/sound/soc/fsl/fsl_sai.h
+index 4bbcd0dbe8f10..8923c680f0e0f 100644
+--- a/sound/soc/fsl/fsl_sai.h
++++ b/sound/soc/fsl/fsl_sai.h
+@@ -80,8 +80,8 @@
+ #define FSL_SAI_xCR3(tx, ofs)	(tx ? FSL_SAI_TCR3(ofs) : FSL_SAI_RCR3(ofs))
+ #define FSL_SAI_xCR4(tx, ofs)	(tx ? FSL_SAI_TCR4(ofs) : FSL_SAI_RCR4(ofs))
+ #define FSL_SAI_xCR5(tx, ofs)	(tx ? FSL_SAI_TCR5(ofs) : FSL_SAI_RCR5(ofs))
+-#define FSL_SAI_xDR(tx, ofs)	(tx ? FSL_SAI_TDR(ofs) : FSL_SAI_RDR(ofs))
+-#define FSL_SAI_xFR(tx, ofs)	(tx ? FSL_SAI_TFR(ofs) : FSL_SAI_RFR(ofs))
++#define FSL_SAI_xDR0(tx)	(tx ? FSL_SAI_TDR0 : FSL_SAI_RDR0)
++#define FSL_SAI_xFR0(tx)	(tx ? FSL_SAI_TFR0 : FSL_SAI_RFR0)
+ #define FSL_SAI_xMR(tx)		(tx ? FSL_SAI_TMR : FSL_SAI_RMR)
+ 
+ /* SAI Transmit/Receive Control Register */
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index 7f7111d4b3ad0..fb7d01f3961b7 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -918,8 +918,8 @@ percent_rmt_hitm_cmp(struct perf_hpp_fmt *fmt __maybe_unused,
+ 	double per_left;
+ 	double per_right;
+ 
+-	per_left  = PERCENT(left, lcl_hitm);
+-	per_right = PERCENT(right, lcl_hitm);
++	per_left  = PERCENT(left, rmt_hitm);
++	per_right = PERCENT(right, rmt_hitm);
+ 
+ 	return per_left - per_right;
+ }
+diff --git a/tools/testing/selftests/netfilter/nft_nat.sh b/tools/testing/selftests/netfilter/nft_nat.sh
+index d7e07f4c3d7fc..4e15e81673104 100755
+--- a/tools/testing/selftests/netfilter/nft_nat.sh
++++ b/tools/testing/selftests/netfilter/nft_nat.sh
+@@ -374,6 +374,45 @@ EOF
+ 	return $lret
+ }
+ 
++test_local_dnat_portonly()
++{
++	local family=$1
++	local daddr=$2
++	local lret=0
++	local sr_s
++	local sr_r
++
++ip netns exec "$ns0" nft -f /dev/stdin <<EOF
++table $family nat {
++	chain output {
++		type nat hook output priority 0; policy accept;
++		meta l4proto tcp dnat to :2000
++
++	}
++}
++EOF
++	if [ $? -ne 0 ]; then
++		if [ $family = "inet" ];then
++			echo "SKIP: inet port test"
++			test_inet_nat=false
++			return
++		fi
++		echo "SKIP: Could not add $family dnat hook"
++		return
++	fi
++
++	echo SERVER-$family | ip netns exec "$ns1" timeout 5 socat -u STDIN TCP-LISTEN:2000 &
++	sc_s=$!
++
++	result=$(ip netns exec "$ns0" timeout 1 socat TCP:$daddr:2000 STDOUT)
++
++	if [ "$result" = "SERVER-inet" ];then
++		echo "PASS: inet port rewrite without l3 address"
++	else
++		echo "ERROR: inet port rewrite"
++		ret=1
++	fi
++}
+ 
+ test_masquerade6()
+ {
+@@ -841,6 +880,10 @@ fi
+ reset_counters
+ test_local_dnat ip
+ test_local_dnat6 ip6
++
++reset_counters
++test_local_dnat_portonly inet 10.0.1.99
++
+ reset_counters
+ $test_inet_nat && test_local_dnat inet
+ $test_inet_nat && test_local_dnat6 inet


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-06-16 11:44 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-06-16 11:44 UTC (permalink / raw
  To: gentoo-commits

commit:     ed761ade7e65de06ca2b9a78f9661eee00059c6c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jun 16 11:44:38 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jun 16 11:44:38 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ed761ade

Linux patch 5.10.123

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1122_linux-5.10.123.patch | 1112 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1116 insertions(+)

diff --git a/0000_README b/0000_README
index 25417782..6dfaf9fd 100644
--- a/0000_README
+++ b/0000_README
@@ -531,6 +531,10 @@ Patch:  1121_linux-5.10.122.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.122
 
+Patch:  1122_linux-5.10.123.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.123
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1122_linux-5.10.123.patch b/1122_linux-5.10.123.patch
new file mode 100644
index 00000000..12203e3c
--- /dev/null
+++ b/1122_linux-5.10.123.patch
@@ -0,0 +1,1112 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 1a04ca8162ad8..44c6e57303988 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -510,6 +510,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/srbds
+ 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+ 		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
++		/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
+ Date:		January 2018
+ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description:	Information about CPU vulnerabilities
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index ca4dbdd9016d5..2adec1e6520a6 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -15,3 +15,4 @@ are configurable at compile, boot or run time.
+    tsx_async_abort
+    multihit.rst
+    special-register-buffer-data-sampling.rst
++   processor_mmio_stale_data.rst
+diff --git a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+new file mode 100644
+index 0000000000000..9393c50b5afc9
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+@@ -0,0 +1,246 @@
++=========================================
++Processor MMIO Stale Data Vulnerabilities
++=========================================
++
++Processor MMIO Stale Data Vulnerabilities are a class of memory-mapped I/O
++(MMIO) vulnerabilities that can expose data. The sequences of operations for
++exposing data range from simple to very complex. Because most of the
++vulnerabilities require the attacker to have access to MMIO, many environments
++are not affected. System environments using virtualization where MMIO access is
++provided to untrusted guests may need mitigation. These vulnerabilities are
++not transient execution attacks. However, these vulnerabilities may propagate
++stale data into core fill buffers where the data can subsequently be inferred
++by an unmitigated transient execution attack. Mitigation for these
++vulnerabilities includes a combination of microcode update and software
++changes, depending on the platform and usage model. Some of these mitigations
++are similar to those used to mitigate Microarchitectural Data Sampling (MDS) or
++those used to mitigate Special Register Buffer Data Sampling (SRBDS).
++
++Data Propagators
++================
++Propagators are operations that result in stale data being copied or moved from
++one microarchitectural buffer or register to another. Processor MMIO Stale Data
++Vulnerabilities are operations that may result in stale data being directly
++read into an architectural, software-visible state or sampled from a buffer or
++register.
++
++Fill Buffer Stale Data Propagator (FBSDP)
++-----------------------------------------
++Stale data may propagate from fill buffers (FB) into the non-coherent portion
++of the uncore on some non-coherent writes. Fill buffer propagation by itself
++does not make stale data architecturally visible. Stale data must be propagated
++to a location where it is subject to reading or sampling.
++
++Sideband Stale Data Propagator (SSDP)
++-------------------------------------
++The sideband stale data propagator (SSDP) is limited to the client (including
++Intel Xeon server E3) uncore implementation. The sideband response buffer is
++shared by all client cores. For non-coherent reads that go to sideband
++destinations, the uncore logic returns 64 bytes of data to the core, including
++both requested data and unrequested stale data, from a transaction buffer and
++the sideband response buffer. As a result, stale data from the sideband
++response and transaction buffers may now reside in a core fill buffer.
++
++Primary Stale Data Propagator (PSDP)
++------------------------------------
++The primary stale data propagator (PSDP) is limited to the client (including
++Intel Xeon server E3) uncore implementation. Similar to the sideband response
++buffer, the primary response buffer is shared by all client cores. For some
++processors, MMIO primary reads will return 64 bytes of data to the core fill
++buffer including both requested data and unrequested stale data. This is
++similar to the sideband stale data propagator.
++
++Vulnerabilities
++===============
++Device Register Partial Write (DRPW) (CVE-2022-21166)
++-----------------------------------------------------
++Some endpoint MMIO registers incorrectly handle writes that are smaller than
++the register size. Instead of aborting the write or only copying the correct
++subset of bytes (for example, 2 bytes for a 2-byte write), more bytes than
++specified by the write transaction may be written to the register. On
++processors affected by FBSDP, this may expose stale data from the fill buffers
++of the core that created the write transaction.
++
++Shared Buffers Data Sampling (SBDS) (CVE-2022-21125)
++----------------------------------------------------
++After propagators may have moved data around the uncore and copied stale data
++into client core fill buffers, processors affected by MFBDS can leak data from
++the fill buffer. It is limited to the client (including Intel Xeon server E3)
++uncore implementation.
++
++Shared Buffers Data Read (SBDR) (CVE-2022-21123)
++------------------------------------------------
++It is similar to Shared Buffer Data Sampling (SBDS) except that the data is
++directly read into the architectural software-visible state. It is limited to
++the client (including Intel Xeon server E3) uncore implementation.
++
++Affected Processors
++===================
++Not all the CPUs are affected by all the variants. For instance, most
++processors for the server market (excluding Intel Xeon E3 processors) are
++impacted by only Device Register Partial Write (DRPW).
++
++Below is the list of affected Intel processors [#f1]_:
++
++   ===================  ============  =========
++   Common name          Family_Model  Steppings
++   ===================  ============  =========
++   HASWELL_X            06_3FH        2,4
++   SKYLAKE_L            06_4EH        3
++   BROADWELL_X          06_4FH        All
++   SKYLAKE_X            06_55H        3,4,6,7,11
++   BROADWELL_D          06_56H        3,4,5
++   SKYLAKE              06_5EH        3
++   ICELAKE_X            06_6AH        4,5,6
++   ICELAKE_D            06_6CH        1
++   ICELAKE_L            06_7EH        5
++   ATOM_TREMONT_D       06_86H        All
++   LAKEFIELD            06_8AH        1
++   KABYLAKE_L           06_8EH        9 to 12
++   ATOM_TREMONT         06_96H        1
++   ATOM_TREMONT_L       06_9CH        0
++   KABYLAKE             06_9EH        9 to 13
++   COMETLAKE            06_A5H        2,3,5
++   COMETLAKE_L          06_A6H        0,1
++   ROCKETLAKE           06_A7H        1
++   ===================  ============  =========
++
++If a CPU is in the affected processor list, but not affected by a variant, it
++is indicated by new bits in MSR IA32_ARCH_CAPABILITIES. As described in a later
++section, mitigation largely remains the same for all the variants, i.e. to
++clear the CPU fill buffers via VERW instruction.
++
++New bits in MSRs
++================
++Newer processors and microcode update on existing affected processors added new
++bits to IA32_ARCH_CAPABILITIES MSR. These bits can be used to enumerate
++specific variants of Processor MMIO Stale Data vulnerabilities and mitigation
++capability.
++
++MSR IA32_ARCH_CAPABILITIES
++--------------------------
++Bit 13 - SBDR_SSDP_NO - When set, processor is not affected by either the
++	 Shared Buffers Data Read (SBDR) vulnerability or the sideband stale
++	 data propagator (SSDP).
++Bit 14 - FBSDP_NO - When set, processor is not affected by the Fill Buffer
++	 Stale Data Propagator (FBSDP).
++Bit 15 - PSDP_NO - When set, processor is not affected by Primary Stale Data
++	 Propagator (PSDP).
++Bit 17 - FB_CLEAR - When set, VERW instruction will overwrite CPU fill buffer
++	 values as part of MD_CLEAR operations. Processors that do not
++	 enumerate MDS_NO (meaning they are affected by MDS) but that do
++	 enumerate support for both L1D_FLUSH and MD_CLEAR implicitly enumerate
++	 FB_CLEAR as part of their MD_CLEAR support.
++Bit 18 - FB_CLEAR_CTRL - Processor supports read and write to MSR
++	 IA32_MCU_OPT_CTRL[FB_CLEAR_DIS]. On such processors, the FB_CLEAR_DIS
++	 bit can be set to cause the VERW instruction to not perform the
++	 FB_CLEAR action. Not all processors that support FB_CLEAR will support
++	 FB_CLEAR_CTRL.
++
++MSR IA32_MCU_OPT_CTRL
++---------------------
++Bit 3 - FB_CLEAR_DIS - When set, VERW instruction does not perform the FB_CLEAR
++action. This may be useful to reduce the performance impact of FB_CLEAR in
++cases where system software deems it warranted (for example, when performance
++is more critical, or the untrusted software has no MMIO access). Note that
++FB_CLEAR_DIS has no impact on enumeration (for example, it does not change
++FB_CLEAR or MD_CLEAR enumeration) and it may not be supported on all processors
++that enumerate FB_CLEAR.
++
++Mitigation
++==========
++Like MDS, all variants of Processor MMIO Stale Data vulnerabilities  have the
++same mitigation strategy to force the CPU to clear the affected buffers before
++an attacker can extract the secrets.
++
++This is achieved by using the otherwise unused and obsolete VERW instruction in
++combination with a microcode update. The microcode clears the affected CPU
++buffers when the VERW instruction is executed.
++
++Kernel reuses the MDS function to invoke the buffer clearing:
++
++	mds_clear_cpu_buffers()
++
++On MDS affected CPUs, the kernel already invokes CPU buffer clear on
++kernel/userspace, hypervisor/guest and C-state (idle) transitions. No
++additional mitigation is needed on such CPUs.
++
++For CPUs not affected by MDS or TAA, mitigation is needed only for the attacker
++with MMIO capability. Therefore, VERW is not required for kernel/userspace. For
++virtualization case, VERW is only needed at VMENTER for a guest with MMIO
++capability.
++
++Mitigation points
++-----------------
++Return to user space
++^^^^^^^^^^^^^^^^^^^^
++Same mitigation as MDS when affected by MDS/TAA, otherwise no mitigation
++needed.
++
++C-State transition
++^^^^^^^^^^^^^^^^^^
++Control register writes by CPU during C-state transition can propagate data
++from fill buffer to uncore buffers. Execute VERW before C-state transition to
++clear CPU fill buffers.
++
++Guest entry point
++^^^^^^^^^^^^^^^^^
++Same mitigation as MDS when processor is also affected by MDS/TAA, otherwise
++execute VERW at VMENTER only for MMIO capable guests. On CPUs not affected by
++MDS/TAA, guest without MMIO access cannot extract secrets using Processor MMIO
++Stale Data vulnerabilities, so there is no need to execute VERW for such guests.
++
++Mitigation control on the kernel command line
++---------------------------------------------
++The kernel command line allows to control the Processor MMIO Stale Data
++mitigations at boot time with the option "mmio_stale_data=". The valid
++arguments for this option are:
++
++  ==========  =================================================================
++  full        If the CPU is vulnerable, enable mitigation; CPU buffer clearing
++              on exit to userspace and when entering a VM. Idle transitions are
++              protected as well. It does not automatically disable SMT.
++  full,nosmt  Same as full, with SMT disabled on vulnerable CPUs. This is the
++              complete mitigation.
++  off         Disables mitigation completely.
++  ==========  =================================================================
++
++If the CPU is affected and mmio_stale_data=off is not supplied on the kernel
++command line, then the kernel selects the appropriate mitigation.
++
++Mitigation status information
++-----------------------------
++The Linux kernel provides a sysfs interface to enumerate the current
++vulnerability status of the system: whether the system is vulnerable, and
++which mitigations are active. The relevant sysfs file is:
++
++	/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
++
++The possible values in this file are:
++
++  .. list-table::
++
++     * - 'Not affected'
++       - The processor is not vulnerable
++     * - 'Vulnerable'
++       - The processor is vulnerable, but no mitigation enabled
++     * - 'Vulnerable: Clear CPU buffers attempted, no microcode'
++       - The processor is vulnerable, but microcode is not updated. The
++         mitigation is enabled on a best effort basis.
++     * - 'Mitigation: Clear CPU buffers'
++       - The processor is vulnerable and the CPU buffer clearing mitigation is
++         enabled.
++
++If the processor is vulnerable then the following information is appended to
++the above information:
++
++  ========================  ===========================================
++  'SMT vulnerable'          SMT is enabled
++  'SMT disabled'            SMT is disabled
++  'SMT Host state unknown'  Kernel runs in a VM, Host SMT state unknown
++  ========================  ===========================================
++
++References
++----------
++.. [#f1] Affected Processors
++   https://www.intel.com/content/www/us/en/developer/topic-technology/software-security-guidance/processors-affected-consolidated-product-cpu-model.html
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 5e34deec819fa..ea8b704be7052 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -2872,6 +2872,7 @@
+ 					       kvm.nx_huge_pages=off [X86]
+ 					       no_entry_flush [PPC]
+ 					       no_uaccess_flush [PPC]
++					       mmio_stale_data=off [X86]
+ 
+ 				Exceptions:
+ 					       This does not have any effect on
+@@ -2893,6 +2894,7 @@
+ 				Equivalent to: l1tf=flush,nosmt [X86]
+ 					       mds=full,nosmt [X86]
+ 					       tsx_async_abort=full,nosmt [X86]
++					       mmio_stale_data=full,nosmt [X86]
+ 
+ 	mminit_loglevel=
+ 			[KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
+@@ -2902,6 +2904,40 @@
+ 			log everything. Information is printed at KERN_DEBUG
+ 			so loglevel=8 may also need to be specified.
+ 
++	mmio_stale_data=
++			[X86,INTEL] Control mitigation for the Processor
++			MMIO Stale Data vulnerabilities.
++
++			Processor MMIO Stale Data is a class of
++			vulnerabilities that may expose data after an MMIO
++			operation. Exposed data could originate or end in
++			the same CPU buffers as affected by MDS and TAA.
++			Therefore, similar to MDS and TAA, the mitigation
++			is to clear the affected CPU buffers.
++
++			This parameter controls the mitigation. The
++			options are:
++
++			full       - Enable mitigation on vulnerable CPUs
++
++			full,nosmt - Enable mitigation and disable SMT on
++				     vulnerable CPUs.
++
++			off        - Unconditionally disable mitigation
++
++			On MDS or TAA affected machines,
++			mmio_stale_data=off can be prevented by an active
++			MDS or TAA mitigation as these vulnerabilities are
++			mitigated with the same mechanism so in order to
++			disable this mitigation, you need to specify
++			mds=off and tsx_async_abort=off too.
++
++			Not specifying this option is equivalent to
++			mmio_stale_data=full.
++
++			For details see:
++			Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
++
+ 	module.sig_enforce
+ 			[KNL] When CONFIG_MODULE_SIG is set, this means that
+ 			modules without (valid) signatures will fail to load.
+diff --git a/Makefile b/Makefile
+index 3ed1da61a3c7a..862946040186a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 122
++SUBLEVEL = 123
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 3b407f46f1a0d..f6a6ac0b3bcd4 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -417,5 +417,6 @@
+ #define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
+ #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
+ #define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
++#define X86_BUG_MMIO_STALE_DATA		X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 972a34d935059..96973d1979723 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -114,6 +114,30 @@
+ 						 * Not susceptible to
+ 						 * TSX Async Abort (TAA) vulnerabilities.
+ 						 */
++#define ARCH_CAP_SBDR_SSDP_NO		BIT(13)	/*
++						 * Not susceptible to SBDR and SSDP
++						 * variants of Processor MMIO stale data
++						 * vulnerabilities.
++						 */
++#define ARCH_CAP_FBSDP_NO		BIT(14)	/*
++						 * Not susceptible to FBSDP variant of
++						 * Processor MMIO stale data
++						 * vulnerabilities.
++						 */
++#define ARCH_CAP_PSDP_NO		BIT(15)	/*
++						 * Not susceptible to PSDP variant of
++						 * Processor MMIO stale data
++						 * vulnerabilities.
++						 */
++#define ARCH_CAP_FB_CLEAR		BIT(17)	/*
++						 * VERW clears CPU fill buffer
++						 * even on MDS_NO CPUs.
++						 */
++#define ARCH_CAP_FB_CLEAR_CTRL		BIT(18)	/*
++						 * MSR_IA32_MCU_OPT_CTRL[FB_CLEAR_DIS]
++						 * bit available to control VERW
++						 * behavior.
++						 */
+ 
+ #define MSR_IA32_FLUSH_CMD		0x0000010b
+ #define L1D_FLUSH			BIT(0)	/*
+@@ -131,6 +155,7 @@
+ /* SRBDS support */
+ #define MSR_IA32_MCU_OPT_CTRL		0x00000123
+ #define RNGDS_MITG_DIS			BIT(0)
++#define FB_CLEAR_DIS			BIT(3)	/* CPU Fill buffer clear disable */
+ 
+ #define MSR_IA32_SYSENTER_CS		0x00000174
+ #define MSR_IA32_SYSENTER_ESP		0x00000175
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 4d0f5386e637b..e247151c3dcf2 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -255,6 +255,8 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+ DECLARE_STATIC_KEY_FALSE(mds_user_clear);
+ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
+ 
++DECLARE_STATIC_KEY_FALSE(mmio_stale_data_clear);
++
+ #include <asm/segment.h>
+ 
+ /**
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 78b9514a38440..2a21046846b6f 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -41,8 +41,10 @@ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
+ static void __init l1tf_select_mitigation(void);
+ static void __init mds_select_mitigation(void);
+-static void __init mds_print_mitigation(void);
++static void __init md_clear_update_mitigation(void);
++static void __init md_clear_select_mitigation(void);
+ static void __init taa_select_mitigation(void);
++static void __init mmio_select_mitigation(void);
+ static void __init srbds_select_mitigation(void);
+ 
+ /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
+@@ -77,6 +79,10 @@ EXPORT_SYMBOL_GPL(mds_user_clear);
+ DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
+ EXPORT_SYMBOL_GPL(mds_idle_clear);
+ 
++/* Controls CPU Fill buffer clear before KVM guest MMIO accesses */
++DEFINE_STATIC_KEY_FALSE(mmio_stale_data_clear);
++EXPORT_SYMBOL_GPL(mmio_stale_data_clear);
++
+ void __init check_bugs(void)
+ {
+ 	identify_boot_cpu();
+@@ -109,16 +115,9 @@ void __init check_bugs(void)
+ 	spectre_v2_select_mitigation();
+ 	ssb_select_mitigation();
+ 	l1tf_select_mitigation();
+-	mds_select_mitigation();
+-	taa_select_mitigation();
++	md_clear_select_mitigation();
+ 	srbds_select_mitigation();
+ 
+-	/*
+-	 * As MDS and TAA mitigations are inter-related, print MDS
+-	 * mitigation until after TAA mitigation selection is done.
+-	 */
+-	mds_print_mitigation();
+-
+ 	arch_smt_update();
+ 
+ #ifdef CONFIG_X86_32
+@@ -258,14 +257,6 @@ static void __init mds_select_mitigation(void)
+ 	}
+ }
+ 
+-static void __init mds_print_mitigation(void)
+-{
+-	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
+-		return;
+-
+-	pr_info("%s\n", mds_strings[mds_mitigation]);
+-}
+-
+ static int __init mds_cmdline(char *str)
+ {
+ 	if (!boot_cpu_has_bug(X86_BUG_MDS))
+@@ -320,7 +311,7 @@ static void __init taa_select_mitigation(void)
+ 	/* TSX previously disabled by tsx=off */
+ 	if (!boot_cpu_has(X86_FEATURE_RTM)) {
+ 		taa_mitigation = TAA_MITIGATION_TSX_DISABLED;
+-		goto out;
++		return;
+ 	}
+ 
+ 	if (cpu_mitigations_off()) {
+@@ -334,7 +325,7 @@ static void __init taa_select_mitigation(void)
+ 	 */
+ 	if (taa_mitigation == TAA_MITIGATION_OFF &&
+ 	    mds_mitigation == MDS_MITIGATION_OFF)
+-		goto out;
++		return;
+ 
+ 	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+ 		taa_mitigation = TAA_MITIGATION_VERW;
+@@ -366,18 +357,6 @@ static void __init taa_select_mitigation(void)
+ 
+ 	if (taa_nosmt || cpu_mitigations_auto_nosmt())
+ 		cpu_smt_disable(false);
+-
+-	/*
+-	 * Update MDS mitigation, if necessary, as the mds_user_clear is
+-	 * now enabled for TAA mitigation.
+-	 */
+-	if (mds_mitigation == MDS_MITIGATION_OFF &&
+-	    boot_cpu_has_bug(X86_BUG_MDS)) {
+-		mds_mitigation = MDS_MITIGATION_FULL;
+-		mds_select_mitigation();
+-	}
+-out:
+-	pr_info("%s\n", taa_strings[taa_mitigation]);
+ }
+ 
+ static int __init tsx_async_abort_parse_cmdline(char *str)
+@@ -401,6 +380,151 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
+ }
+ early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"MMIO Stale Data: " fmt
++
++enum mmio_mitigations {
++	MMIO_MITIGATION_OFF,
++	MMIO_MITIGATION_UCODE_NEEDED,
++	MMIO_MITIGATION_VERW,
++};
++
++/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
++static enum mmio_mitigations mmio_mitigation __ro_after_init = MMIO_MITIGATION_VERW;
++static bool mmio_nosmt __ro_after_init = false;
++
++static const char * const mmio_strings[] = {
++	[MMIO_MITIGATION_OFF]		= "Vulnerable",
++	[MMIO_MITIGATION_UCODE_NEEDED]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
++	[MMIO_MITIGATION_VERW]		= "Mitigation: Clear CPU buffers",
++};
++
++static void __init mmio_select_mitigation(void)
++{
++	u64 ia32_cap;
++
++	if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) ||
++	    cpu_mitigations_off()) {
++		mmio_mitigation = MMIO_MITIGATION_OFF;
++		return;
++	}
++
++	if (mmio_mitigation == MMIO_MITIGATION_OFF)
++		return;
++
++	ia32_cap = x86_read_arch_cap_msr();
++
++	/*
++	 * Enable CPU buffer clear mitigation for host and VMM, if also affected
++	 * by MDS or TAA. Otherwise, enable mitigation for VMM only.
++	 */
++	if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
++					      boot_cpu_has(X86_FEATURE_RTM)))
++		static_branch_enable(&mds_user_clear);
++	else
++		static_branch_enable(&mmio_stale_data_clear);
++
++	/*
++	 * If Processor-MMIO-Stale-Data bug is present and Fill Buffer data can
++	 * be propagated to uncore buffers, clearing the Fill buffers on idle
++	 * is required irrespective of SMT state.
++	 */
++	if (!(ia32_cap & ARCH_CAP_FBSDP_NO))
++		static_branch_enable(&mds_idle_clear);
++
++	/*
++	 * Check if the system has the right microcode.
++	 *
++	 * CPU Fill buffer clear mitigation is enumerated by either an explicit
++	 * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
++	 * affected systems.
++	 */
++	if ((ia32_cap & ARCH_CAP_FB_CLEAR) ||
++	    (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
++	     boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
++	     !(ia32_cap & ARCH_CAP_MDS_NO)))
++		mmio_mitigation = MMIO_MITIGATION_VERW;
++	else
++		mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
++
++	if (mmio_nosmt || cpu_mitigations_auto_nosmt())
++		cpu_smt_disable(false);
++}
++
++static int __init mmio_stale_data_parse_cmdline(char *str)
++{
++	if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
++		return 0;
++
++	if (!str)
++		return -EINVAL;
++
++	if (!strcmp(str, "off")) {
++		mmio_mitigation = MMIO_MITIGATION_OFF;
++	} else if (!strcmp(str, "full")) {
++		mmio_mitigation = MMIO_MITIGATION_VERW;
++	} else if (!strcmp(str, "full,nosmt")) {
++		mmio_mitigation = MMIO_MITIGATION_VERW;
++		mmio_nosmt = true;
++	}
++
++	return 0;
++}
++early_param("mmio_stale_data", mmio_stale_data_parse_cmdline);
++
++#undef pr_fmt
++#define pr_fmt(fmt)     "" fmt
++
++static void __init md_clear_update_mitigation(void)
++{
++	if (cpu_mitigations_off())
++		return;
++
++	if (!static_key_enabled(&mds_user_clear))
++		goto out;
++
++	/*
++	 * mds_user_clear is now enabled. Update MDS, TAA and MMIO Stale Data
++	 * mitigation, if necessary.
++	 */
++	if (mds_mitigation == MDS_MITIGATION_OFF &&
++	    boot_cpu_has_bug(X86_BUG_MDS)) {
++		mds_mitigation = MDS_MITIGATION_FULL;
++		mds_select_mitigation();
++	}
++	if (taa_mitigation == TAA_MITIGATION_OFF &&
++	    boot_cpu_has_bug(X86_BUG_TAA)) {
++		taa_mitigation = TAA_MITIGATION_VERW;
++		taa_select_mitigation();
++	}
++	if (mmio_mitigation == MMIO_MITIGATION_OFF &&
++	    boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
++		mmio_mitigation = MMIO_MITIGATION_VERW;
++		mmio_select_mitigation();
++	}
++out:
++	if (boot_cpu_has_bug(X86_BUG_MDS))
++		pr_info("MDS: %s\n", mds_strings[mds_mitigation]);
++	if (boot_cpu_has_bug(X86_BUG_TAA))
++		pr_info("TAA: %s\n", taa_strings[taa_mitigation]);
++	if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
++		pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]);
++}
++
++static void __init md_clear_select_mitigation(void)
++{
++	mds_select_mitigation();
++	taa_select_mitigation();
++	mmio_select_mitigation();
++
++	/*
++	 * As MDS, TAA and MMIO Stale Data mitigations are inter-related, update
++	 * and print their mitigation after MDS, TAA and MMIO Stale Data
++	 * mitigation selection is done.
++	 */
++	md_clear_update_mitigation();
++}
++
+ #undef pr_fmt
+ #define pr_fmt(fmt)	"SRBDS: " fmt
+ 
+@@ -462,11 +586,13 @@ static void __init srbds_select_mitigation(void)
+ 		return;
+ 
+ 	/*
+-	 * Check to see if this is one of the MDS_NO systems supporting
+-	 * TSX that are only exposed to SRBDS when TSX is enabled.
++	 * Check to see if this is one of the MDS_NO systems supporting TSX that
++	 * are only exposed to SRBDS when TSX is enabled or when CPU is affected
++	 * by Processor MMIO Stale Data vulnerability.
+ 	 */
+ 	ia32_cap = x86_read_arch_cap_msr();
+-	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
++	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM) &&
++	    !boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
+ 		srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
+ 	else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+ 		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
+@@ -1072,6 +1198,8 @@ static void update_indir_branch_cond(void)
+ /* Update the static key controlling the MDS CPU buffer clear in idle */
+ static void update_mds_branch_idle(void)
+ {
++	u64 ia32_cap = x86_read_arch_cap_msr();
++
+ 	/*
+ 	 * Enable the idle clearing if SMT is active on CPUs which are
+ 	 * affected only by MSBDS and not any other MDS variant.
+@@ -1083,14 +1211,17 @@ static void update_mds_branch_idle(void)
+ 	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
+ 		return;
+ 
+-	if (sched_smt_active())
++	if (sched_smt_active()) {
+ 		static_branch_enable(&mds_idle_clear);
+-	else
++	} else if (mmio_mitigation == MMIO_MITIGATION_OFF ||
++		   (ia32_cap & ARCH_CAP_FBSDP_NO)) {
+ 		static_branch_disable(&mds_idle_clear);
++	}
+ }
+ 
+ #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
+ #define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
++#define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
+ 
+ void cpu_bugs_smt_update(void)
+ {
+@@ -1135,6 +1266,16 @@ void cpu_bugs_smt_update(void)
+ 		break;
+ 	}
+ 
++	switch (mmio_mitigation) {
++	case MMIO_MITIGATION_VERW:
++	case MMIO_MITIGATION_UCODE_NEEDED:
++		if (sched_smt_active())
++			pr_warn_once(MMIO_MSG_SMT);
++		break;
++	case MMIO_MITIGATION_OFF:
++		break;
++	}
++
+ 	mutex_unlock(&spec_ctrl_mutex);
+ }
+ 
+@@ -1704,6 +1845,20 @@ static ssize_t tsx_async_abort_show_state(char *buf)
+ 		       sched_smt_active() ? "vulnerable" : "disabled");
+ }
+ 
++static ssize_t mmio_stale_data_show_state(char *buf)
++{
++	if (mmio_mitigation == MMIO_MITIGATION_OFF)
++		return sysfs_emit(buf, "%s\n", mmio_strings[mmio_mitigation]);
++
++	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
++		return sysfs_emit(buf, "%s; SMT Host state unknown\n",
++				  mmio_strings[mmio_mitigation]);
++	}
++
++	return sysfs_emit(buf, "%s; SMT %s\n", mmio_strings[mmio_mitigation],
++			  sched_smt_active() ? "vulnerable" : "disabled");
++}
++
+ static char *stibp_state(void)
+ {
+ 	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+@@ -1804,6 +1959,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 	case X86_BUG_SRBDS:
+ 		return srbds_show_state(buf);
+ 
++	case X86_BUG_MMIO_STALE_DATA:
++		return mmio_stale_data_show_state(buf);
++
+ 	default:
+ 		break;
+ 	}
+@@ -1855,4 +2013,9 @@ ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
+ }
++
++ssize_t cpu_show_mmio_stale_data(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_STALE_DATA);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 9c8fc6f513ed3..4917c2698ac1f 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1098,18 +1098,42 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ 					    X86_FEATURE_ANY, issues)
+ 
+ #define SRBDS		BIT(0)
++/* CPU is affected by X86_BUG_MMIO_STALE_DATA */
++#define MMIO		BIT(1)
++/* CPU is affected by Shared Buffers Data Sampling (SBDS), a variant of X86_BUG_MMIO_STALE_DATA */
++#define MMIO_SBDS	BIT(2)
+ 
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(HASWELL,		X86_STEPPING_ANY,		SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(HASWELL_L,	X86_STEPPING_ANY,		SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(HASWELL_G,	X86_STEPPING_ANY,		SRBDS),
++	VULNBL_INTEL_STEPPINGS(HASWELL_X,	BIT(2) | BIT(4),		MMIO),
++	VULNBL_INTEL_STEPPINGS(BROADWELL_D,	X86_STEPPINGS(0x3, 0x5),	MMIO),
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
++	VULNBL_INTEL_STEPPINGS(BROADWELL_X,	X86_STEPPING_ANY,		MMIO),
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPINGS(0x3, 0x3),	SRBDS | MMIO),
+ 	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	BIT(3) | BIT(4) | BIT(6) |
++						BIT(7) | BIT(0xB),              MMIO),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPINGS(0x3, 0x3),	SRBDS | MMIO),
+ 	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xC),	SRBDS),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPINGS(0x0, 0xD),	SRBDS),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPINGS(0x9, 0xC),	SRBDS | MMIO),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPINGS(0x0, 0x8),	SRBDS),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPINGS(0x9, 0xD),	SRBDS | MMIO),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPINGS(0x0, 0x8),	SRBDS),
++	VULNBL_INTEL_STEPPINGS(ICELAKE_L,	X86_STEPPINGS(0x5, 0x5),	MMIO | MMIO_SBDS),
++	VULNBL_INTEL_STEPPINGS(ICELAKE_D,	X86_STEPPINGS(0x1, 0x1),	MMIO),
++	VULNBL_INTEL_STEPPINGS(ICELAKE_X,	X86_STEPPINGS(0x4, 0x6),	MMIO),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE,	BIT(2) | BIT(3) | BIT(5),	MMIO | MMIO_SBDS),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPINGS(0x1, 0x1),	MMIO | MMIO_SBDS),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPINGS(0x0, 0x0),	MMIO),
++	VULNBL_INTEL_STEPPINGS(LAKEFIELD,	X86_STEPPINGS(0x1, 0x1),	MMIO | MMIO_SBDS),
++	VULNBL_INTEL_STEPPINGS(ROCKETLAKE,	X86_STEPPINGS(0x1, 0x1),	MMIO),
++	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT,	X86_STEPPINGS(0x1, 0x1),	MMIO | MMIO_SBDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D,	X86_STEPPING_ANY,		MMIO),
++	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPINGS(0x0, 0x0),	MMIO | MMIO_SBDS),
+ 	{}
+ };
+ 
+@@ -1130,6 +1154,13 @@ u64 x86_read_arch_cap_msr(void)
+ 	return ia32_cap;
+ }
+ 
++static bool arch_cap_mmio_immune(u64 ia32_cap)
++{
++	return (ia32_cap & ARCH_CAP_FBSDP_NO &&
++		ia32_cap & ARCH_CAP_PSDP_NO &&
++		ia32_cap & ARCH_CAP_SBDR_SSDP_NO);
++}
++
+ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ {
+ 	u64 ia32_cap = x86_read_arch_cap_msr();
+@@ -1183,12 +1214,27 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	/*
+ 	 * SRBDS affects CPUs which support RDRAND or RDSEED and are listed
+ 	 * in the vulnerability blacklist.
++	 *
++	 * Some of the implications and mitigation of Shared Buffers Data
++	 * Sampling (SBDS) are similar to SRBDS. Give SBDS same treatment as
++	 * SRBDS.
+ 	 */
+ 	if ((cpu_has(c, X86_FEATURE_RDRAND) ||
+ 	     cpu_has(c, X86_FEATURE_RDSEED)) &&
+-	    cpu_matches(cpu_vuln_blacklist, SRBDS))
++	    cpu_matches(cpu_vuln_blacklist, SRBDS | MMIO_SBDS))
+ 		    setup_force_cpu_bug(X86_BUG_SRBDS);
+ 
++	/*
++	 * Processor MMIO Stale Data bug enumeration
++	 *
++	 * Affected CPU list is generally enough to enumerate the vulnerability,
++	 * but for virtualization case check for ARCH_CAP MSR bits also, VMM may
++	 * not want the guest to enumerate the bug.
++	 */
++	if (cpu_matches(cpu_vuln_blacklist, MMIO) &&
++	    !arch_cap_mmio_immune(ia32_cap))
++		setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA);
++
+ 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ 		return;
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 94f5f2129e3b4..e729f65c67600 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -226,6 +226,9 @@ static const struct {
+ #define L1D_CACHE_ORDER 4
+ static void *vmx_l1d_flush_pages;
+ 
++/* Control for disabling CPU Fill buffer clear */
++static bool __read_mostly vmx_fb_clear_ctrl_available;
++
+ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
+ {
+ 	struct page *page;
+@@ -357,6 +360,60 @@ static int vmentry_l1d_flush_get(char *s, const struct kernel_param *kp)
+ 	return sprintf(s, "%s\n", vmentry_l1d_param[l1tf_vmx_mitigation].option);
+ }
+ 
++static void vmx_setup_fb_clear_ctrl(void)
++{
++	u64 msr;
++
++	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES) &&
++	    !boot_cpu_has_bug(X86_BUG_MDS) &&
++	    !boot_cpu_has_bug(X86_BUG_TAA)) {
++		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, msr);
++		if (msr & ARCH_CAP_FB_CLEAR_CTRL)
++			vmx_fb_clear_ctrl_available = true;
++	}
++}
++
++static __always_inline void vmx_disable_fb_clear(struct vcpu_vmx *vmx)
++{
++	u64 msr;
++
++	if (!vmx->disable_fb_clear)
++		return;
++
++	rdmsrl(MSR_IA32_MCU_OPT_CTRL, msr);
++	msr |= FB_CLEAR_DIS;
++	wrmsrl(MSR_IA32_MCU_OPT_CTRL, msr);
++	/* Cache the MSR value to avoid reading it later */
++	vmx->msr_ia32_mcu_opt_ctrl = msr;
++}
++
++static __always_inline void vmx_enable_fb_clear(struct vcpu_vmx *vmx)
++{
++	if (!vmx->disable_fb_clear)
++		return;
++
++	vmx->msr_ia32_mcu_opt_ctrl &= ~FB_CLEAR_DIS;
++	wrmsrl(MSR_IA32_MCU_OPT_CTRL, vmx->msr_ia32_mcu_opt_ctrl);
++}
++
++static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
++{
++	vmx->disable_fb_clear = vmx_fb_clear_ctrl_available;
++
++	/*
++	 * If guest will not execute VERW, there is no need to set FB_CLEAR_DIS
++	 * at VMEntry. Skip the MSR read/write when a guest has no use case to
++	 * execute VERW.
++	 */
++	if ((vcpu->arch.arch_capabilities & ARCH_CAP_FB_CLEAR) ||
++	   ((vcpu->arch.arch_capabilities & ARCH_CAP_MDS_NO) &&
++	    (vcpu->arch.arch_capabilities & ARCH_CAP_TAA_NO) &&
++	    (vcpu->arch.arch_capabilities & ARCH_CAP_PSDP_NO) &&
++	    (vcpu->arch.arch_capabilities & ARCH_CAP_FBSDP_NO) &&
++	    (vcpu->arch.arch_capabilities & ARCH_CAP_SBDR_SSDP_NO)))
++		vmx->disable_fb_clear = false;
++}
++
+ static const struct kernel_param_ops vmentry_l1d_flush_ops = {
+ 	.set = vmentry_l1d_flush_set,
+ 	.get = vmentry_l1d_flush_get,
+@@ -2211,6 +2268,10 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			ret = kvm_set_msr_common(vcpu, msr_info);
+ 	}
+ 
++	/* FB_CLEAR may have changed, also update the FB_CLEAR_DIS behavior */
++	if (msr_index == MSR_IA32_ARCH_CAPABILITIES)
++		vmx_update_fb_clear_dis(vcpu, vmx);
++
+ 	return ret;
+ }
+ 
+@@ -4483,6 +4544,8 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+ 	vpid_sync_context(vmx->vpid);
+ 	if (init_event)
+ 		vmx_clear_hlt(vcpu);
++
++	vmx_update_fb_clear_dis(vcpu, vmx);
+ }
+ 
+ static void enable_irq_window(struct kvm_vcpu *vcpu)
+@@ -6654,6 +6717,11 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+ 		vmx_l1d_flush(vcpu);
+ 	else if (static_branch_unlikely(&mds_user_clear))
+ 		mds_clear_cpu_buffers();
++	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
++		 kvm_arch_has_assigned_device(vcpu->kvm))
++		mds_clear_cpu_buffers();
++
++	vmx_disable_fb_clear(vmx);
+ 
+ 	if (vcpu->arch.cr2 != native_read_cr2())
+ 		native_write_cr2(vcpu->arch.cr2);
+@@ -6663,6 +6731,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+ 
+ 	vcpu->arch.cr2 = native_read_cr2();
+ 
++	vmx_enable_fb_clear(vmx);
++
+ 	/*
+ 	 * VMEXIT disables interrupts (host state), but tracing and lockdep
+ 	 * have them in state 'on' as recorded before entering guest mode.
+@@ -8047,6 +8117,8 @@ static int __init vmx_init(void)
+ 		return r;
+ 	}
+ 
++	vmx_setup_fb_clear_ctrl();
++
+ 	for_each_possible_cpu(cpu) {
+ 		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 5ff24537393e2..31317c8915e40 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -300,6 +300,8 @@ struct vcpu_vmx {
+ 	u64 msr_ia32_feature_control;
+ 	u64 msr_ia32_feature_control_valid_bits;
+ 	u64 ept_pointer;
++	u64 msr_ia32_mcu_opt_ctrl;
++	bool disable_fb_clear;
+ 
+ 	struct pt_desc pt_desc;
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index d9cec5daa1fff..da547752580a3 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1415,6 +1415,9 @@ static u64 kvm_get_arch_capabilities(void)
+ 		 */
+ 	}
+ 
++	/* Guests don't need to know "Fill buffer clear control" exists */
++	data &= ~ARCH_CAP_FB_CLEAR_CTRL;
++
+ 	return data;
+ }
+ 
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 8f1d6569564c4..8ecb9f90f467b 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -566,6 +566,12 @@ ssize_t __weak cpu_show_srbds(struct device *dev,
+ 	return sysfs_emit(buf, "Not affected\n");
+ }
+ 
++ssize_t __weak cpu_show_mmio_stale_data(struct device *dev,
++					struct device_attribute *attr, char *buf)
++{
++	return sysfs_emit(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+@@ -575,6 +581,7 @@ static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
+ static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
+ static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
+ static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
++static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+@@ -586,6 +593,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_tsx_async_abort.attr,
+ 	&dev_attr_itlb_multihit.attr,
+ 	&dev_attr_srbds.attr,
++	&dev_attr_mmio_stale_data.attr,
+ 	NULL
+ };
+ 
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index d6428aaf67e73..d63b8f70d1239 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -65,6 +65,9 @@ extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
+ extern ssize_t cpu_show_itlb_multihit(struct device *dev,
+ 				      struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_mmio_stale_data(struct device *dev,
++					struct device_attribute *attr,
++					char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
+index b58730cc12e83..a7b5c5efcf3b0 100644
+--- a/tools/arch/x86/include/asm/cpufeatures.h
++++ b/tools/arch/x86/include/asm/cpufeatures.h
+@@ -417,5 +417,6 @@
+ #define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
+ #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
+ #define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
++#define X86_BUG_MMIO_STALE_DATA		X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h
+index 972a34d935059..96973d1979723 100644
+--- a/tools/arch/x86/include/asm/msr-index.h
++++ b/tools/arch/x86/include/asm/msr-index.h
+@@ -114,6 +114,30 @@
+ 						 * Not susceptible to
+ 						 * TSX Async Abort (TAA) vulnerabilities.
+ 						 */
++#define ARCH_CAP_SBDR_SSDP_NO		BIT(13)	/*
++						 * Not susceptible to SBDR and SSDP
++						 * variants of Processor MMIO stale data
++						 * vulnerabilities.
++						 */
++#define ARCH_CAP_FBSDP_NO		BIT(14)	/*
++						 * Not susceptible to FBSDP variant of
++						 * Processor MMIO stale data
++						 * vulnerabilities.
++						 */
++#define ARCH_CAP_PSDP_NO		BIT(15)	/*
++						 * Not susceptible to PSDP variant of
++						 * Processor MMIO stale data
++						 * vulnerabilities.
++						 */
++#define ARCH_CAP_FB_CLEAR		BIT(17)	/*
++						 * VERW clears CPU fill buffer
++						 * even on MDS_NO CPUs.
++						 */
++#define ARCH_CAP_FB_CLEAR_CTRL		BIT(18)	/*
++						 * MSR_IA32_MCU_OPT_CTRL[FB_CLEAR_DIS]
++						 * bit available to control VERW
++						 * behavior.
++						 */
+ 
+ #define MSR_IA32_FLUSH_CMD		0x0000010b
+ #define L1D_FLUSH			BIT(0)	/*
+@@ -131,6 +155,7 @@
+ /* SRBDS support */
+ #define MSR_IA32_MCU_OPT_CTRL		0x00000123
+ #define RNGDS_MITG_DIS			BIT(0)
++#define FB_CLEAR_DIS			BIT(3)	/* CPU Fill buffer clear disable */
+ 
+ #define MSR_IA32_SYSENTER_CS		0x00000174
+ #define MSR_IA32_SYSENTER_ESP		0x00000175


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-06-22 12:45 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-06-22 12:45 UTC (permalink / raw
  To: gentoo-commits

commit:     ba1c4606e1b020272a68d079d59639eedeff3ec1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 22 12:45:13 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 22 12:45:13 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ba1c4606

Linux patch 5.10.124

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1123_linux-5.10.124.patch | 3088 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3092 insertions(+)

diff --git a/0000_README b/0000_README
index 6dfaf9fd..aedaaf1a 100644
--- a/0000_README
+++ b/0000_README
@@ -535,6 +535,10 @@ Patch:  1122_linux-5.10.123.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.123
 
+Patch:  1123_linux-5.10.124.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.124
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1123_linux-5.10.124.patch b/1123_linux-5.10.124.patch
new file mode 100644
index 00000000..d30399a5
--- /dev/null
+++ b/1123_linux-5.10.124.patch
@@ -0,0 +1,3088 @@
+diff --git a/Makefile b/Makefile
+index 862946040186a..9ed79a05a9725 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 123
++SUBLEVEL = 124
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi
+index d6b9dedd168f1..5667009aae13a 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi
+@@ -167,6 +167,7 @@
+ 	pinctrl-0 = <&pinctrl_uart3>;
+ 	assigned-clocks = <&clk IMX8MM_CLK_UART3>;
+ 	assigned-clock-parents = <&clk IMX8MM_SYS_PLL1_80M>;
++	uart-has-rtscts;
+ 	status = "okay";
+ };
+ 
+@@ -237,6 +238,8 @@
+ 		fsl,pins = <
+ 			MX8MM_IOMUXC_ECSPI1_SCLK_UART3_DCE_RX	0x40
+ 			MX8MM_IOMUXC_ECSPI1_MOSI_UART3_DCE_TX	0x40
++			MX8MM_IOMUXC_ECSPI1_MISO_UART3_DCE_CTS_B	0x40
++			MX8MM_IOMUXC_ECSPI1_SS0_UART3_DCE_RTS_B	0x40
+ 		>;
+ 	};
+ 
+diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
+index 86a5cf9bc19a1..3724bab278b28 100644
+--- a/arch/arm64/kernel/ftrace.c
++++ b/arch/arm64/kernel/ftrace.c
+@@ -77,47 +77,76 @@ static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr)
+ }
+ 
+ /*
+- * Turn on the call to ftrace_caller() in instrumented function
++ * Find the address the callsite must branch to in order to reach '*addr'.
++ *
++ * Due to the limited range of 'BL' instructions, modules may be placed too far
++ * away to branch directly and must use a PLT.
++ *
++ * Returns true when '*addr' contains a reachable target address, or has been
++ * modified to contain a PLT address. Returns false otherwise.
+  */
+-int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
++static bool ftrace_find_callable_addr(struct dyn_ftrace *rec,
++				      struct module *mod,
++				      unsigned long *addr)
+ {
+ 	unsigned long pc = rec->ip;
+-	u32 old, new;
+-	long offset = (long)pc - (long)addr;
++	long offset = (long)*addr - (long)pc;
++	struct plt_entry *plt;
+ 
+-	if (offset < -SZ_128M || offset >= SZ_128M) {
+-		struct module *mod;
+-		struct plt_entry *plt;
++	/*
++	 * When the target is within range of the 'BL' instruction, use 'addr'
++	 * as-is and branch to that directly.
++	 */
++	if (offset >= -SZ_128M && offset < SZ_128M)
++		return true;
+ 
+-		if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
+-			return -EINVAL;
++	/*
++	 * When the target is outside of the range of a 'BL' instruction, we
++	 * must use a PLT to reach it. We can only place PLTs for modules, and
++	 * only when module PLT support is built-in.
++	 */
++	if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
++		return false;
+ 
+-		/*
+-		 * On kernels that support module PLTs, the offset between the
+-		 * branch instruction and its target may legally exceed the
+-		 * range of an ordinary relative 'bl' opcode. In this case, we
+-		 * need to branch via a trampoline in the module.
+-		 *
+-		 * NOTE: __module_text_address() must be called with preemption
+-		 * disabled, but we can rely on ftrace_lock to ensure that 'mod'
+-		 * retains its validity throughout the remainder of this code.
+-		 */
++	/*
++	 * 'mod' is only set at module load time, but if we end up
++	 * dealing with an out-of-range condition, we can assume it
++	 * is due to a module being loaded far away from the kernel.
++	 *
++	 * NOTE: __module_text_address() must be called with preemption
++	 * disabled, but we can rely on ftrace_lock to ensure that 'mod'
++	 * retains its validity throughout the remainder of this code.
++	 */
++	if (!mod) {
+ 		preempt_disable();
+ 		mod = __module_text_address(pc);
+ 		preempt_enable();
++	}
+ 
+-		if (WARN_ON(!mod))
+-			return -EINVAL;
++	if (WARN_ON(!mod))
++		return false;
+ 
+-		plt = get_ftrace_plt(mod, addr);
+-		if (!plt) {
+-			pr_err("ftrace: no module PLT for %ps\n", (void *)addr);
+-			return -EINVAL;
+-		}
+-
+-		addr = (unsigned long)plt;
++	plt = get_ftrace_plt(mod, *addr);
++	if (!plt) {
++		pr_err("ftrace: no module PLT for %ps\n", (void *)*addr);
++		return false;
+ 	}
+ 
++	*addr = (unsigned long)plt;
++	return true;
++}
++
++/*
++ * Turn on the call to ftrace_caller() in instrumented function
++ */
++int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
++{
++	unsigned long pc = rec->ip;
++	u32 old, new;
++
++	if (!ftrace_find_callable_addr(rec, NULL, &addr))
++		return -EINVAL;
++
+ 	old = aarch64_insn_gen_nop();
+ 	new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
+ 
+@@ -131,6 +160,11 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ 	unsigned long pc = rec->ip;
+ 	u32 old, new;
+ 
++	if (!ftrace_find_callable_addr(rec, NULL, &old_addr))
++		return -EINVAL;
++	if (!ftrace_find_callable_addr(rec, NULL, &addr))
++		return -EINVAL;
++
+ 	old = aarch64_insn_gen_branch_imm(pc, old_addr,
+ 					  AARCH64_INSN_BRANCH_LINK);
+ 	new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
+@@ -180,54 +214,15 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
+ 		    unsigned long addr)
+ {
+ 	unsigned long pc = rec->ip;
+-	bool validate = true;
+ 	u32 old = 0, new;
+-	long offset = (long)pc - (long)addr;
+ 
+-	if (offset < -SZ_128M || offset >= SZ_128M) {
+-		u32 replaced;
+-
+-		if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
+-			return -EINVAL;
+-
+-		/*
+-		 * 'mod' is only set at module load time, but if we end up
+-		 * dealing with an out-of-range condition, we can assume it
+-		 * is due to a module being loaded far away from the kernel.
+-		 */
+-		if (!mod) {
+-			preempt_disable();
+-			mod = __module_text_address(pc);
+-			preempt_enable();
+-
+-			if (WARN_ON(!mod))
+-				return -EINVAL;
+-		}
+-
+-		/*
+-		 * The instruction we are about to patch may be a branch and
+-		 * link instruction that was redirected via a PLT entry. In
+-		 * this case, the normal validation will fail, but we can at
+-		 * least check that we are dealing with a branch and link
+-		 * instruction that points into the right module.
+-		 */
+-		if (aarch64_insn_read((void *)pc, &replaced))
+-			return -EFAULT;
+-
+-		if (!aarch64_insn_is_bl(replaced) ||
+-		    !within_module(pc + aarch64_get_branch_offset(replaced),
+-				   mod))
+-			return -EINVAL;
+-
+-		validate = false;
+-	} else {
+-		old = aarch64_insn_gen_branch_imm(pc, addr,
+-						  AARCH64_INSN_BRANCH_LINK);
+-	}
++	if (!ftrace_find_callable_addr(rec, mod, &addr))
++		return -EINVAL;
+ 
++	old = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
+ 	new = aarch64_insn_gen_nop();
+ 
+-	return ftrace_modify_code(pc, old, new, validate);
++	return ftrace_modify_code(pc, old, new, true);
+ }
+ 
+ void arch_ftrace_update_code(int command)
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v2.c b/arch/arm64/kvm/vgic/vgic-mmio-v2.c
+index a016f07adc281..b3cc517956507 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio-v2.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio-v2.c
+@@ -418,11 +418,11 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = {
+ 		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PENDING_SET,
+ 		vgic_mmio_read_pending, vgic_mmio_write_spending,
+-		NULL, vgic_uaccess_write_spending, 1,
++		vgic_uaccess_read_pending, vgic_uaccess_write_spending, 1,
+ 		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PENDING_CLEAR,
+ 		vgic_mmio_read_pending, vgic_mmio_write_cpending,
+-		NULL, vgic_uaccess_write_cpending, 1,
++		vgic_uaccess_read_pending, vgic_uaccess_write_cpending, 1,
+ 		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ACTIVE_SET,
+ 		vgic_mmio_read_active, vgic_mmio_write_sactive,
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio.c b/arch/arm64/kvm/vgic/vgic-mmio.c
+index 9e1459534ce54..5b441777937b4 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio.c
+@@ -226,8 +226,9 @@ int vgic_uaccess_write_cenable(struct kvm_vcpu *vcpu,
+ 	return 0;
+ }
+ 
+-unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
+-				     gpa_t addr, unsigned int len)
++static unsigned long __read_pending(struct kvm_vcpu *vcpu,
++				    gpa_t addr, unsigned int len,
++				    bool is_user)
+ {
+ 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+ 	u32 value = 0;
+@@ -248,7 +249,7 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
+ 						    IRQCHIP_STATE_PENDING,
+ 						    &val);
+ 			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+-		} else if (vgic_irq_is_mapped_level(irq)) {
++		} else if (!is_user && vgic_irq_is_mapped_level(irq)) {
+ 			val = vgic_get_phys_line_level(irq);
+ 		} else {
+ 			val = irq_is_pending(irq);
+@@ -263,6 +264,18 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
+ 	return value;
+ }
+ 
++unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
++				     gpa_t addr, unsigned int len)
++{
++	return __read_pending(vcpu, addr, len, false);
++}
++
++unsigned long vgic_uaccess_read_pending(struct kvm_vcpu *vcpu,
++					gpa_t addr, unsigned int len)
++{
++	return __read_pending(vcpu, addr, len, true);
++}
++
+ static bool is_vgic_v2_sgi(struct kvm_vcpu *vcpu, struct vgic_irq *irq)
+ {
+ 	return (vgic_irq_is_sgi(irq->intid) &&
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio.h b/arch/arm64/kvm/vgic/vgic-mmio.h
+index fefcca2b14dc7..dcea440159855 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio.h
++++ b/arch/arm64/kvm/vgic/vgic-mmio.h
+@@ -149,6 +149,9 @@ int vgic_uaccess_write_cenable(struct kvm_vcpu *vcpu,
+ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
+ 				     gpa_t addr, unsigned int len);
+ 
++unsigned long vgic_uaccess_read_pending(struct kvm_vcpu *vcpu,
++					gpa_t addr, unsigned int len);
++
+ void vgic_mmio_write_spending(struct kvm_vcpu *vcpu,
+ 			      gpa_t addr, unsigned int len,
+ 			      unsigned long val);
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 3064694afea17..cfb8fd76afb43 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -2108,12 +2108,12 @@ static unsigned long __get_wchan(struct task_struct *p)
+ 		return 0;
+ 
+ 	do {
+-		sp = *(unsigned long *)sp;
++		sp = READ_ONCE_NOCHECK(*(unsigned long *)sp);
+ 		if (!validate_sp(sp, p, STACK_FRAME_OVERHEAD) ||
+ 		    p->state == TASK_RUNNING)
+ 			return 0;
+ 		if (count > 0) {
+-			ip = ((unsigned long *)sp)[STACK_FRAME_LR_SAVE];
++			ip = READ_ONCE_NOCHECK(((unsigned long *)sp)[STACK_FRAME_LR_SAVE]);
+ 			if (!in_sched_functions(ip))
+ 				return ip;
+ 		}
+diff --git a/arch/powerpc/mm/nohash/kaslr_booke.c b/arch/powerpc/mm/nohash/kaslr_booke.c
+index 4c74e8a5482bf..c555ad9fa00b1 100644
+--- a/arch/powerpc/mm/nohash/kaslr_booke.c
++++ b/arch/powerpc/mm/nohash/kaslr_booke.c
+@@ -18,7 +18,6 @@
+ #include <asm/prom.h>
+ #include <asm/kdump.h>
+ #include <mm/mmu_decl.h>
+-#include <generated/compile.h>
+ #include <generated/utsrelease.h>
+ 
+ struct regions {
+@@ -36,10 +35,6 @@ struct regions {
+ 	int reserved_mem_size_cells;
+ };
+ 
+-/* Simplified build-specific string for starting entropy. */
+-static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
+-		LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
+-
+ struct regions __initdata regions;
+ 
+ static __init void kaslr_get_cmdline(void *fdt)
+@@ -72,7 +67,8 @@ static unsigned long __init get_boot_seed(void *fdt)
+ {
+ 	unsigned long hash = 0;
+ 
+-	hash = rotate_xor(hash, build_str, sizeof(build_str));
++	/* build-specific string for starting entropy. */
++	hash = rotate_xor(hash, linux_banner, strlen(linux_banner));
+ 	hash = rotate_xor(hash, fdt, fdt_totalsize(fdt));
+ 
+ 	return hash;
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index 23910e6a3f011..e7feaa7910ab3 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -1198,8 +1198,8 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
+ 		return -EINVAL;
+ 
+ 	ret  = -ENOMEM;
+-	ctl  = kzalloc(sizeof(*ctl),  GFP_KERNEL);
+-	save = kzalloc(sizeof(*save), GFP_KERNEL);
++	ctl  = kzalloc(sizeof(*ctl),  GFP_KERNEL_ACCOUNT);
++	save = kzalloc(sizeof(*save), GFP_KERNEL_ACCOUNT);
+ 	if (!ctl || !save)
+ 		goto out_free;
+ 
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 6c82ef22985d9..7397cc449e2fc 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -537,7 +537,7 @@ static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ 		}
+ 
+ 		ret = -ENOMEM;
+-		blob = kmalloc(params.len, GFP_KERNEL);
++		blob = kzalloc(params.len, GFP_KERNEL_ACCOUNT);
+ 		if (!blob)
+ 			goto e_free;
+ 
+@@ -676,7 +676,7 @@ static int __sev_dbg_decrypt_user(struct kvm *kvm, unsigned long paddr,
+ 	if (!IS_ALIGNED(dst_paddr, 16) ||
+ 	    !IS_ALIGNED(paddr,     16) ||
+ 	    !IS_ALIGNED(size,      16)) {
+-		tpage = (void *)alloc_page(GFP_KERNEL);
++		tpage = (void *)alloc_page(GFP_KERNEL | __GFP_ZERO);
+ 		if (!tpage)
+ 			return -ENOMEM;
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index e729f65c67600..cc647dcc228b7 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -619,7 +619,7 @@ static int hv_enable_direct_tlbflush(struct kvm_vcpu *vcpu)
+ 	 * evmcs in singe VM shares same assist page.
+ 	 */
+ 	if (!*p_hv_pa_pg)
+-		*p_hv_pa_pg = kzalloc(PAGE_SIZE, GFP_KERNEL);
++		*p_hv_pa_pg = kzalloc(PAGE_SIZE, GFP_KERNEL_ACCOUNT);
+ 
+ 	if (!*p_hv_pa_pg)
+ 		return -ENOMEM;
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 15a11a217cd03..c5d82b21a1ccb 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -466,6 +466,8 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
+ 	if (!blk_mq_hw_queue_mapped(data.hctx))
+ 		goto out_queue_exit;
+ 	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
++	if (cpu >= nr_cpu_ids)
++		goto out_queue_exit;
+ 	data.ctx = __blk_mq_get_ctx(q, cpu);
+ 
+ 	if (!q->elevator)
+diff --git a/certs/blacklist_hashes.c b/certs/blacklist_hashes.c
+index 344892337be07..d5961aa3d3380 100644
+--- a/certs/blacklist_hashes.c
++++ b/certs/blacklist_hashes.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "blacklist.h"
+ 
+-const char __initdata *const blacklist_hashes[] = {
++const char __initconst *const blacklist_hashes[] = {
+ #include CONFIG_SYSTEM_BLACKLIST_HASH_LIST
+ 	, NULL
+ };
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index c15bfc0e3723a..4a53cb98f3dfd 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -15,6 +15,7 @@ source "crypto/async_tx/Kconfig"
+ #
+ menuconfig CRYPTO
+ 	tristate "Cryptographic API"
++	select LIB_MEMNEQ
+ 	help
+ 	  This option provides the core Cryptographic API.
+ 
+diff --git a/crypto/Makefile b/crypto/Makefile
+index b279483fba50b..3d53cc1d8a867 100644
+--- a/crypto/Makefile
++++ b/crypto/Makefile
+@@ -4,7 +4,7 @@
+ #
+ 
+ obj-$(CONFIG_CRYPTO) += crypto.o
+-crypto-y := api.o cipher.o compress.o memneq.o
++crypto-y := api.o cipher.o compress.o
+ 
+ obj-$(CONFIG_CRYPTO_ENGINE) += crypto_engine.o
+ obj-$(CONFIG_CRYPTO_FIPS) += fips.o
+diff --git a/crypto/memneq.c b/crypto/memneq.c
+deleted file mode 100644
+index afed1bd16aee0..0000000000000
+--- a/crypto/memneq.c
++++ /dev/null
+@@ -1,168 +0,0 @@
+-/*
+- * Constant-time equality testing of memory regions.
+- *
+- * Authors:
+- *
+- *   James Yonan <james@openvpn.net>
+- *   Daniel Borkmann <dborkman@redhat.com>
+- *
+- * This file is provided under a dual BSD/GPLv2 license.  When using or
+- * redistributing this file, you may do so under either license.
+- *
+- * GPL LICENSE SUMMARY
+- *
+- * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of version 2 of the GNU General Public License as
+- * published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope that it will be useful, but
+- * WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+- * General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+- * The full GNU General Public License is included in this distribution
+- * in the file called LICENSE.GPL.
+- *
+- * BSD LICENSE
+- *
+- * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
+- *
+- * Redistribution and use in source and binary forms, with or without
+- * modification, are permitted provided that the following conditions
+- * are met:
+- *
+- *   * Redistributions of source code must retain the above copyright
+- *     notice, this list of conditions and the following disclaimer.
+- *   * Redistributions in binary form must reproduce the above copyright
+- *     notice, this list of conditions and the following disclaimer in
+- *     the documentation and/or other materials provided with the
+- *     distribution.
+- *   * Neither the name of OpenVPN Technologies nor the names of its
+- *     contributors may be used to endorse or promote products derived
+- *     from this software without specific prior written permission.
+- *
+- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+- */
+-
+-#include <crypto/algapi.h>
+-
+-#ifndef __HAVE_ARCH_CRYPTO_MEMNEQ
+-
+-/* Generic path for arbitrary size */
+-static inline unsigned long
+-__crypto_memneq_generic(const void *a, const void *b, size_t size)
+-{
+-	unsigned long neq = 0;
+-
+-#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
+-	while (size >= sizeof(unsigned long)) {
+-		neq |= *(unsigned long *)a ^ *(unsigned long *)b;
+-		OPTIMIZER_HIDE_VAR(neq);
+-		a += sizeof(unsigned long);
+-		b += sizeof(unsigned long);
+-		size -= sizeof(unsigned long);
+-	}
+-#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
+-	while (size > 0) {
+-		neq |= *(unsigned char *)a ^ *(unsigned char *)b;
+-		OPTIMIZER_HIDE_VAR(neq);
+-		a += 1;
+-		b += 1;
+-		size -= 1;
+-	}
+-	return neq;
+-}
+-
+-/* Loop-free fast-path for frequently used 16-byte size */
+-static inline unsigned long __crypto_memneq_16(const void *a, const void *b)
+-{
+-	unsigned long neq = 0;
+-
+-#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+-	if (sizeof(unsigned long) == 8) {
+-		neq |= *(unsigned long *)(a)   ^ *(unsigned long *)(b);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned long *)(a+8) ^ *(unsigned long *)(b+8);
+-		OPTIMIZER_HIDE_VAR(neq);
+-	} else if (sizeof(unsigned int) == 4) {
+-		neq |= *(unsigned int *)(a)    ^ *(unsigned int *)(b);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned int *)(a+4)  ^ *(unsigned int *)(b+4);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned int *)(a+8)  ^ *(unsigned int *)(b+8);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned int *)(a+12) ^ *(unsigned int *)(b+12);
+-		OPTIMIZER_HIDE_VAR(neq);
+-	} else
+-#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
+-	{
+-		neq |= *(unsigned char *)(a)    ^ *(unsigned char *)(b);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+1)  ^ *(unsigned char *)(b+1);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+2)  ^ *(unsigned char *)(b+2);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+3)  ^ *(unsigned char *)(b+3);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+4)  ^ *(unsigned char *)(b+4);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+5)  ^ *(unsigned char *)(b+5);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+6)  ^ *(unsigned char *)(b+6);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+7)  ^ *(unsigned char *)(b+7);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+8)  ^ *(unsigned char *)(b+8);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+9)  ^ *(unsigned char *)(b+9);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+10) ^ *(unsigned char *)(b+10);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+11) ^ *(unsigned char *)(b+11);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+12) ^ *(unsigned char *)(b+12);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+13) ^ *(unsigned char *)(b+13);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+14) ^ *(unsigned char *)(b+14);
+-		OPTIMIZER_HIDE_VAR(neq);
+-		neq |= *(unsigned char *)(a+15) ^ *(unsigned char *)(b+15);
+-		OPTIMIZER_HIDE_VAR(neq);
+-	}
+-
+-	return neq;
+-}
+-
+-/* Compare two areas of memory without leaking timing information,
+- * and with special optimizations for common sizes.  Users should
+- * not call this function directly, but should instead use
+- * crypto_memneq defined in crypto/algapi.h.
+- */
+-noinline unsigned long __crypto_memneq(const void *a, const void *b,
+-				       size_t size)
+-{
+-	switch (size) {
+-	case 16:
+-		return __crypto_memneq_16(a, b);
+-	default:
+-		return __crypto_memneq_generic(a, b, size);
+-	}
+-}
+-EXPORT_SYMBOL(__crypto_memneq);
+-
+-#endif /* __HAVE_ARCH_CRYPTO_MEMNEQ */
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index f963a0a7da46a..2402fa4d8aa55 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5475,7 +5475,7 @@ struct ata_host *ata_host_alloc_pinfo(struct device *dev,
+ 				      const struct ata_port_info * const * ppi,
+ 				      int n_ports)
+ {
+-	const struct ata_port_info *pi;
++	const struct ata_port_info *pi = &ata_dummy_port_info;
+ 	struct ata_host *host;
+ 	int i, j;
+ 
+@@ -5483,7 +5483,7 @@ struct ata_host *ata_host_alloc_pinfo(struct device *dev,
+ 	if (!host)
+ 		return NULL;
+ 
+-	for (i = 0, j = 0, pi = NULL; i < host->n_ports; i++) {
++	for (i = 0, j = 0; i < host->n_ports; i++) {
+ 		struct ata_port *ap = host->ports[i];
+ 
+ 		if (ppi[j])
+diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
+index 3e2703a496328..b4e65d1ede263 100644
+--- a/drivers/char/Kconfig
++++ b/drivers/char/Kconfig
+@@ -471,29 +471,41 @@ config ADI
+ 	  and SSM (Silicon Secured Memory).  Intended consumers of this
+ 	  driver include crash and makedumpfile.
+ 
+-endmenu
+-
+ config RANDOM_TRUST_CPU
+-	bool "Trust the CPU manufacturer to initialize Linux's CRNG"
++	bool "Initialize RNG using CPU RNG instructions"
++	default y
+ 	depends on ARCH_RANDOM
+-	default n
+ 	help
+-	Assume that CPU manufacturer (e.g., Intel or AMD for RDSEED or
+-	RDRAND, IBM for the S390 and Power PC architectures) is trustworthy
+-	for the purposes of initializing Linux's CRNG.  Since this is not
+-	something that can be independently audited, this amounts to trusting
+-	that CPU manufacturer (perhaps with the insistence or mandate
+-	of a Nation State's intelligence or law enforcement agencies)
+-	has not installed a hidden back door to compromise the CPU's
+-	random number generation facilities. This can also be configured
+-	at boot with "random.trust_cpu=on/off".
++	  Initialize the RNG using random numbers supplied by the CPU's
++	  RNG instructions (e.g. RDRAND), if supported and available. These
++	  random numbers are never used directly, but are rather hashed into
++	  the main input pool, and this happens regardless of whether or not
++	  this option is enabled. Instead, this option controls whether the
++	  they are credited and hence can initialize the RNG. Additionally,
++	  other sources of randomness are always used, regardless of this
++	  setting.  Enabling this implies trusting that the CPU can supply high
++	  quality and non-backdoored random numbers.
++
++	  Say Y here unless you have reason to mistrust your CPU or believe
++	  its RNG facilities may be faulty. This may also be configured at
++	  boot time with "random.trust_cpu=on/off".
+ 
+ config RANDOM_TRUST_BOOTLOADER
+-	bool "Trust the bootloader to initialize Linux's CRNG"
+-	help
+-	Some bootloaders can provide entropy to increase the kernel's initial
+-	device randomness. Say Y here to assume the entropy provided by the
+-	booloader is trustworthy so it will be added to the kernel's entropy
+-	pool. Otherwise, say N here so it will be regarded as device input that
+-	only mixes the entropy pool. This can also be configured at boot with
+-	"random.trust_bootloader=on/off".
++	bool "Initialize RNG using bootloader-supplied seed"
++	default y
++	help
++	  Initialize the RNG using a seed supplied by the bootloader or boot
++	  environment (e.g. EFI or a bootloader-generated device tree). This
++	  seed is not used directly, but is rather hashed into the main input
++	  pool, and this happens regardless of whether or not this option is
++	  enabled. Instead, this option controls whether the seed is credited
++	  and hence can initialize the RNG. Additionally, other sources of
++	  randomness are always used, regardless of this setting. Enabling
++	  this implies trusting that the bootloader can supply high quality and
++	  non-backdoored seeds.
++
++	  Say Y here unless you have reason to mistrust your bootloader or
++	  believe its RNG facilities may be faulty. This may also be configured
++	  at boot time with "random.trust_bootloader=on/off".
++
++endmenu
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index 0391f5bda5e46..36e8d619e3348 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -691,7 +691,7 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MP_CLK_UART2_ROOT] = imx_clk_hw_gate4("uart2_root_clk", "uart2", ccm_base + 0x44a0, 0);
+ 	hws[IMX8MP_CLK_UART3_ROOT] = imx_clk_hw_gate4("uart3_root_clk", "uart3", ccm_base + 0x44b0, 0);
+ 	hws[IMX8MP_CLK_UART4_ROOT] = imx_clk_hw_gate4("uart4_root_clk", "uart4", ccm_base + 0x44c0, 0);
+-	hws[IMX8MP_CLK_USB_ROOT] = imx_clk_hw_gate4("usb_root_clk", "osc_32k", ccm_base + 0x44d0, 0);
++	hws[IMX8MP_CLK_USB_ROOT] = imx_clk_hw_gate4("usb_root_clk", "hsio_axi", ccm_base + 0x44d0, 0);
+ 	hws[IMX8MP_CLK_USB_PHY_ROOT] = imx_clk_hw_gate4("usb_phy_root_clk", "usb_phy_ref", ccm_base + 0x44f0, 0);
+ 	hws[IMX8MP_CLK_USDHC1_ROOT] = imx_clk_hw_gate4("usdhc1_root_clk", "usdhc1", ccm_base + 0x4510, 0);
+ 	hws[IMX8MP_CLK_USDHC2_ROOT] = imx_clk_hw_gate4("usdhc2_root_clk", "usdhc2", ccm_base + 0x4520, 0);
+diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c
+index ba04cb381cd3f..7c617d8dff3f5 100644
+--- a/drivers/clocksource/hyperv_timer.c
++++ b/drivers/clocksource/hyperv_timer.c
+@@ -472,4 +472,3 @@ void __init hv_init_clocksource(void)
+ 	hv_sched_clock_offset = hv_read_reference_counter();
+ 	hv_setup_sched_clock(read_hv_sched_clock_msr);
+ }
+-EXPORT_SYMBOL_GPL(hv_init_clocksource);
+diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c
+index 4275c18a097ab..ea2e2618b7945 100644
+--- a/drivers/gpio/gpio-dwapb.c
++++ b/drivers/gpio/gpio-dwapb.c
+@@ -646,10 +646,9 @@ static int dwapb_get_clks(struct dwapb_gpio *gpio)
+ 	gpio->clks[1].id = "db";
+ 	err = devm_clk_bulk_get_optional(gpio->dev, DWAPB_NR_CLOCKS,
+ 					 gpio->clks);
+-	if (err) {
+-		dev_err(gpio->dev, "Cannot get APB/Debounce clocks\n");
+-		return err;
+-	}
++	if (err)
++		return dev_err_probe(gpio->dev, err,
++				     "Cannot get APB/Debounce clocks\n");
+ 
+ 	err = clk_bulk_prepare_enable(DWAPB_NR_CLOCKS, gpio->clks);
+ 	if (err) {
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 7bb151283f44b..f069d0faba64b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2141,7 +2141,7 @@ static struct drm_mode_config_helper_funcs amdgpu_dm_mode_config_helperfuncs = {
+ 
+ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
+ {
+-	u32 max_cll, min_cll, max, min, q, r;
++	u32 max_avg, min_cll, max, min, q, r;
+ 	struct amdgpu_dm_backlight_caps *caps;
+ 	struct amdgpu_display_manager *dm;
+ 	struct drm_connector *conn_base;
+@@ -2164,7 +2164,7 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
+ 	caps = &dm->backlight_caps;
+ 	caps->ext_caps = &aconnector->dc_link->dpcd_sink_ext_caps;
+ 	caps->aux_support = false;
+-	max_cll = conn_base->hdr_sink_metadata.hdmi_type1.max_cll;
++	max_avg = conn_base->hdr_sink_metadata.hdmi_type1.max_fall;
+ 	min_cll = conn_base->hdr_sink_metadata.hdmi_type1.min_cll;
+ 
+ 	if (caps->ext_caps->bits.oled == 1 /*||
+@@ -2192,8 +2192,8 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
+ 	 * The results of the above expressions can be verified at
+ 	 * pre_computed_values.
+ 	 */
+-	q = max_cll >> 5;
+-	r = max_cll % 32;
++	q = max_avg >> 5;
++	r = max_avg % 32;
+ 	max = (1 << q) * pre_computed_values[r];
+ 
+ 	// min luminance: maxLum * (CV/255)^2 / 100
+diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
+index 45d32ef427875..ac40a95374d3d 100644
+--- a/drivers/gpu/drm/i915/i915_sysfs.c
++++ b/drivers/gpu/drm/i915/i915_sysfs.c
+@@ -500,7 +500,14 @@ static ssize_t error_state_read(struct file *filp, struct kobject *kobj,
+ 	struct device *kdev = kobj_to_dev(kobj);
+ 	struct drm_i915_private *i915 = kdev_minor_to_i915(kdev);
+ 	struct i915_gpu_coredump *gpu;
+-	ssize_t ret;
++	ssize_t ret = 0;
++
++	/*
++	 * FIXME: Concurrent clients triggering resets and reading + clearing
++	 * dumps can cause inconsistent sysfs reads when a user calls in with a
++	 * non-zero offset to complete a prior partial read but the
++	 * gpu_coredump has been cleared or replaced.
++	 */
+ 
+ 	gpu = i915_first_error_state(i915);
+ 	if (IS_ERR(gpu)) {
+@@ -512,8 +519,10 @@ static ssize_t error_state_read(struct file *filp, struct kobject *kobj,
+ 		const char *str = "No error state collected\n";
+ 		size_t len = strlen(str);
+ 
+-		ret = min_t(size_t, count, len - off);
+-		memcpy(buf, str + off, ret);
++		if (off < len) {
++			ret = min_t(size_t, count, len - off);
++			memcpy(buf, str + off, ret);
++		}
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 5dbb949b1afd8..10188b1a6a089 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -606,6 +606,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
+ 		 */
+ 		if (newchannel->offermsg.offer.sub_channel_index == 0) {
+ 			mutex_unlock(&vmbus_connection.channel_mutex);
++			cpus_read_unlock();
+ 			/*
+ 			 * Don't call free_channel(), because newchannel->kobj
+ 			 * is not initialized yet.
+diff --git a/drivers/i2c/busses/i2c-designware-common.c b/drivers/i2c/busses/i2c-designware-common.c
+index 3c19aada4b30e..9468c6c89b3f5 100644
+--- a/drivers/i2c/busses/i2c-designware-common.c
++++ b/drivers/i2c/busses/i2c-designware-common.c
+@@ -474,9 +474,6 @@ int i2c_dw_prepare_clk(struct dw_i2c_dev *dev, bool prepare)
+ {
+ 	int ret;
+ 
+-	if (IS_ERR(dev->clk))
+-		return PTR_ERR(dev->clk);
+-
+ 	if (prepare) {
+ 		/* Optional interface clock */
+ 		ret = clk_prepare_enable(dev->pclk);
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index 0dfeb2d116038..ad91c7c0faa54 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -266,8 +266,17 @@ static int dw_i2c_plat_probe(struct platform_device *pdev)
+ 		goto exit_reset;
+ 	}
+ 
+-	dev->clk = devm_clk_get(&pdev->dev, NULL);
+-	if (!i2c_dw_prepare_clk(dev, true)) {
++	dev->clk = devm_clk_get_optional(&pdev->dev, NULL);
++	if (IS_ERR(dev->clk)) {
++		ret = PTR_ERR(dev->clk);
++		goto exit_reset;
++	}
++
++	ret = i2c_dw_prepare_clk(dev, true);
++	if (ret)
++		goto exit_reset;
++
++	if (dev->clk) {
+ 		u64 clk_khz;
+ 
+ 		dev->get_clk_rate_khz = i2c_dw_get_clk_rate_khz;
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index 20a2f903b7f6c..d9ac62c1ac25e 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -2369,8 +2369,7 @@ static struct platform_driver npcm_i2c_bus_driver = {
+ static int __init npcm_i2c_init(void)
+ {
+ 	npcm_i2c_debugfs_dir = debugfs_create_dir("npcm_i2c", NULL);
+-	platform_driver_register(&npcm_i2c_bus_driver);
+-	return 0;
++	return platform_driver_register(&npcm_i2c_bus_driver);
+ }
+ module_init(npcm_i2c_init);
+ 
+diff --git a/drivers/input/misc/soc_button_array.c b/drivers/input/misc/soc_button_array.c
+index cb6ec59a045d4..efffcf0ebd3b4 100644
+--- a/drivers/input/misc/soc_button_array.c
++++ b/drivers/input/misc/soc_button_array.c
+@@ -85,13 +85,13 @@ static const struct dmi_system_id dmi_use_low_level_irq[] = {
+ 	},
+ 	{
+ 		/*
+-		 * Lenovo Yoga Tab2 1051L, something messes with the home-button
++		 * Lenovo Yoga Tab2 1051F/1051L, something messes with the home-button
+ 		 * IRQ settings, leading to a non working home-button.
+ 		 */
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "60073"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "1051L"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "1051"),
+ 		},
+ 	},
+ 	{} /* Terminating entry */
+diff --git a/drivers/irqchip/irq-gic-realview.c b/drivers/irqchip/irq-gic-realview.c
+index b4c1924f02554..38fab02ffe9d0 100644
+--- a/drivers/irqchip/irq-gic-realview.c
++++ b/drivers/irqchip/irq-gic-realview.c
+@@ -57,6 +57,7 @@ realview_gic_of_init(struct device_node *node, struct device_node *parent)
+ 
+ 	/* The PB11MPCore GIC needs to be configured in the syscon */
+ 	map = syscon_node_to_regmap(np);
++	of_node_put(np);
+ 	if (!IS_ERR(map)) {
+ 		/* new irq mode with no DCC */
+ 		regmap_write(map, REALVIEW_SYS_LOCK_OFFSET,
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index e5e3fd6b95543..4c8f18f0cecf8 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -1831,7 +1831,7 @@ static void __init gic_populate_ppi_partitions(struct device_node *gic_node)
+ 
+ 	gic_data.ppi_descs = kcalloc(gic_data.ppi_nr, sizeof(*gic_data.ppi_descs), GFP_KERNEL);
+ 	if (!gic_data.ppi_descs)
+-		return;
++		goto out_put_node;
+ 
+ 	nr_parts = of_get_child_count(parts_node);
+ 
+@@ -1872,12 +1872,15 @@ static void __init gic_populate_ppi_partitions(struct device_node *gic_node)
+ 				continue;
+ 
+ 			cpu = of_cpu_node_to_id(cpu_node);
+-			if (WARN_ON(cpu < 0))
++			if (WARN_ON(cpu < 0)) {
++				of_node_put(cpu_node);
+ 				continue;
++			}
+ 
+ 			pr_cont("%pOF[%d] ", cpu_node, cpu);
+ 
+ 			cpumask_set_cpu(cpu, &part->mask);
++			of_node_put(cpu_node);
+ 		}
+ 
+ 		pr_cont("}\n");
+diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c
+index 33e71ea6cc143..8b15f53cbdd95 100644
+--- a/drivers/md/dm-log.c
++++ b/drivers/md/dm-log.c
+@@ -415,8 +415,7 @@ static int create_log_context(struct dm_dirty_log *log, struct dm_target *ti,
+ 	/*
+ 	 * Work out how many "unsigned long"s we need to hold the bitset.
+ 	 */
+-	bitset_size = dm_round_up(region_count,
+-				  sizeof(*lc->clean_bits) << BYTE_SHIFT);
++	bitset_size = dm_round_up(region_count, BITS_PER_LONG);
+ 	bitset_size >>= BYTE_SHIFT;
+ 
+ 	lc->bitset_uint32_count = bitset_size / sizeof(*lc->clean_bits);
+diff --git a/drivers/misc/atmel-ssc.c b/drivers/misc/atmel-ssc.c
+index d6cd5537126c6..69f9b0336410d 100644
+--- a/drivers/misc/atmel-ssc.c
++++ b/drivers/misc/atmel-ssc.c
+@@ -232,9 +232,9 @@ static int ssc_probe(struct platform_device *pdev)
+ 	clk_disable_unprepare(ssc->clk);
+ 
+ 	ssc->irq = platform_get_irq(pdev, 0);
+-	if (!ssc->irq) {
++	if (ssc->irq < 0) {
+ 		dev_dbg(&pdev->dev, "could not get irq\n");
+-		return -ENXIO;
++		return ssc->irq;
+ 	}
+ 
+ 	mutex_lock(&user_lock);
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index d81d75a20b8f2..afb2e78df4d60 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -109,6 +109,8 @@
+ #define MEI_DEV_ID_ADP_P      0x51E0  /* Alder Lake Point P */
+ #define MEI_DEV_ID_ADP_N      0x54E0  /* Alder Lake Point N */
+ 
++#define MEI_DEV_ID_RPL_S      0x7A68  /* Raptor Lake Point S */
++
+ /*
+  * MEI HW Section
+  */
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index a738253dbd056..5324b65d0d29a 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -115,6 +115,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_P, MEI_ME_PCH15_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_N, MEI_ME_PCH15_CFG)},
+ 
++	{MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_CFG)},
++
+ 	/* required last entry */
+ 	{0, }
+ };
+diff --git a/drivers/net/ethernet/broadcom/bgmac-bcma.c b/drivers/net/ethernet/broadcom/bgmac-bcma.c
+index a5fd161ab5ee1..26746197515fc 100644
+--- a/drivers/net/ethernet/broadcom/bgmac-bcma.c
++++ b/drivers/net/ethernet/broadcom/bgmac-bcma.c
+@@ -323,7 +323,6 @@ static void bgmac_remove(struct bcma_device *core)
+ 	bcma_mdio_mii_unregister(bgmac->mii_bus);
+ 	bgmac_enet_remove(bgmac);
+ 	bcma_set_drvdata(core, NULL);
+-	kfree(bgmac);
+ }
+ 
+ static struct bcma_driver bgmac_bcma_driver = {
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index a2bdb2906519e..63054061966e6 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -2582,15 +2582,16 @@ static void i40e_diag_test(struct net_device *netdev,
+ 
+ 		set_bit(__I40E_TESTING, pf->state);
+ 
++		if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
++		    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state)) {
++			dev_warn(&pf->pdev->dev,
++				 "Cannot start offline testing when PF is in reset state.\n");
++			goto skip_ol_tests;
++		}
++
+ 		if (i40e_active_vfs(pf) || i40e_active_vmdqs(pf)) {
+ 			dev_warn(&pf->pdev->dev,
+ 				 "Please take active VFs and Netqueues offline and restart the adapter before running NIC diagnostics\n");
+-			data[I40E_ETH_TEST_REG]		= 1;
+-			data[I40E_ETH_TEST_EEPROM]	= 1;
+-			data[I40E_ETH_TEST_INTR]	= 1;
+-			data[I40E_ETH_TEST_LINK]	= 1;
+-			eth_test->flags |= ETH_TEST_FL_FAILED;
+-			clear_bit(__I40E_TESTING, pf->state);
+ 			goto skip_ol_tests;
+ 		}
+ 
+@@ -2637,9 +2638,17 @@ static void i40e_diag_test(struct net_device *netdev,
+ 		data[I40E_ETH_TEST_INTR] = 0;
+ 	}
+ 
+-skip_ol_tests:
+-
+ 	netif_info(pf, drv, netdev, "testing finished\n");
++	return;
++
++skip_ol_tests:
++	data[I40E_ETH_TEST_REG]		= 1;
++	data[I40E_ETH_TEST_EEPROM]	= 1;
++	data[I40E_ETH_TEST_INTR]	= 1;
++	data[I40E_ETH_TEST_LINK]	= 1;
++	eth_test->flags |= ETH_TEST_FL_FAILED;
++	clear_bit(__I40E_TESTING, pf->state);
++	netif_info(pf, drv, netdev, "testing failed\n");
+ }
+ 
+ static void i40e_get_wol(struct net_device *netdev,
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 4a18a7c7dd4c2..614f3e9951009 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -8163,6 +8163,11 @@ static int i40e_configure_clsflower(struct i40e_vsi *vsi,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	if (!tc) {
++		dev_err(&pf->pdev->dev, "Unable to add filter because of invalid destination");
++		return -EINVAL;
++	}
++
+ 	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
+ 	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state))
+ 		return -EBUSY;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 9181e007e0392..1947c5a775505 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -2228,7 +2228,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 	}
+ 
+ 	if (vf->adq_enabled) {
+-		for (i = 0; i < I40E_MAX_VF_VSI; i++)
++		for (i = 0; i < vf->num_tc; i++)
+ 			num_qps_all += vf->ch[i].num_qps;
+ 		if (num_qps_all != qci->num_queue_pairs) {
+ 			aq_ret = I40E_ERR_PARAM;
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index fd9257c7059a0..53e31002ce52a 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -9,6 +9,7 @@
+ #include <linux/udp.h>
+ #include <linux/ip.h>
+ #include <linux/pm_runtime.h>
++#include <linux/pci.h>
+ #include <net/pkt_sched.h>
+ 
+ #include <net/ipv6.h>
+@@ -5041,6 +5042,10 @@ static int igc_probe(struct pci_dev *pdev,
+ 
+ 	pci_enable_pcie_error_reporting(pdev);
+ 
++	err = pci_enable_ptm(pdev, NULL);
++	if (err < 0)
++		dev_info(&pdev->dev, "PCIe PTM not supported by PCIe bus/controller\n");
++
+ 	pci_set_master(pdev);
+ 
+ 	err = -ENOMEM;
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 789642647cd32..c7aff89141e17 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -806,6 +806,17 @@ static inline void mtk_rx_get_desc(struct mtk_rx_dma *rxd,
+ 	rxd->rxd4 = READ_ONCE(dma_rxd->rxd4);
+ }
+ 
++static void *mtk_max_lro_buf_alloc(gfp_t gfp_mask)
++{
++	unsigned int size = mtk_max_frag_size(MTK_MAX_LRO_RX_LENGTH);
++	unsigned long data;
++
++	data = __get_free_pages(gfp_mask | __GFP_COMP | __GFP_NOWARN,
++				get_order(size));
++
++	return (void *)data;
++}
++
+ /* the qdma core needs scratch memory to be setup */
+ static int mtk_init_fq_dma(struct mtk_eth *eth)
+ {
+@@ -1303,7 +1314,10 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
+ 			goto release_desc;
+ 
+ 		/* alloc new buffer */
+-		new_data = napi_alloc_frag(ring->frag_size);
++		if (ring->frag_size <= PAGE_SIZE)
++			new_data = napi_alloc_frag(ring->frag_size);
++		else
++			new_data = mtk_max_lro_buf_alloc(GFP_ATOMIC);
+ 		if (unlikely(!new_data)) {
+ 			netdev->stats.rx_dropped++;
+ 			goto release_desc;
+@@ -1700,7 +1714,10 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag)
+ 		return -ENOMEM;
+ 
+ 	for (i = 0; i < rx_dma_size; i++) {
+-		ring->data[i] = netdev_alloc_frag(ring->frag_size);
++		if (ring->frag_size <= PAGE_SIZE)
++			ring->data[i] = netdev_alloc_frag(ring->frag_size);
++		else
++			ring->data[i] = mtk_max_lro_buf_alloc(GFP_KERNEL);
+ 		if (!ring->data[i])
+ 			return -ENOMEM;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+index 11cc3ea5010aa..9fb3e5ec1da69 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+@@ -274,7 +274,7 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
+ {
+ 	struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
+ 	struct mlx5_core_dev *dev1 = ldev->pf[MLX5_LAG_P2].dev;
+-	struct lag_tracker tracker;
++	struct lag_tracker tracker = { };
+ 	bool do_bond, roce_lag;
+ 	int err;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.h
+index a68d931090dd5..15c8d4de83508 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.h
+@@ -8,8 +8,8 @@
+ #include "spectrum.h"
+ 
+ enum mlxsw_sp_counter_sub_pool_id {
+-	MLXSW_SP_COUNTER_SUB_POOL_FLOW,
+ 	MLXSW_SP_COUNTER_SUB_POOL_RIF,
++	MLXSW_SP_COUNTER_SUB_POOL_FLOW,
+ };
+ 
+ int mlxsw_sp_counter_alloc(struct mlxsw_sp *mlxsw_sp,
+diff --git a/drivers/nfc/nfcmrvl/usb.c b/drivers/nfc/nfcmrvl/usb.c
+index 888e298f610b8..f26986eb53f19 100644
+--- a/drivers/nfc/nfcmrvl/usb.c
++++ b/drivers/nfc/nfcmrvl/usb.c
+@@ -401,13 +401,25 @@ static void nfcmrvl_play_deferred(struct nfcmrvl_usb_drv_data *drv_data)
+ 	int err;
+ 
+ 	while ((urb = usb_get_from_anchor(&drv_data->deferred))) {
++		usb_anchor_urb(urb, &drv_data->tx_anchor);
++
+ 		err = usb_submit_urb(urb, GFP_ATOMIC);
+-		if (err)
++		if (err) {
++			kfree(urb->setup_packet);
++			usb_unanchor_urb(urb);
++			usb_free_urb(urb);
+ 			break;
++		}
+ 
+ 		drv_data->tx_in_flight++;
++		usb_free_urb(urb);
++	}
++
++	/* Cleanup the rest deferred urbs. */
++	while ((urb = usb_get_from_anchor(&drv_data->deferred))) {
++		kfree(urb->setup_packet);
++		usb_free_urb(urb);
+ 	}
+-	usb_scuttle_anchored_urbs(&drv_data->deferred);
+ }
+ 
+ static int nfcmrvl_resume(struct usb_interface *intf)
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index d301f0280ff6a..0aa68da51ed70 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2813,8 +2813,8 @@ static ssize_t subsys_##field##_show(struct device *dev,		\
+ {									\
+ 	struct nvme_subsystem *subsys =					\
+ 		container_of(dev, struct nvme_subsystem, dev);		\
+-	return sprintf(buf, "%.*s\n",					\
+-		       (int)sizeof(subsys->field), subsys->field);	\
++	return sysfs_emit(buf, "%.*s\n",				\
++			   (int)sizeof(subsys->field), subsys->field);	\
+ }									\
+ static SUBSYS_ATTR_RO(field, S_IRUGO, subsys_##field##_show);
+ 
+@@ -3335,13 +3335,13 @@ static ssize_t wwid_show(struct device *dev, struct device_attribute *attr,
+ 	int model_len = sizeof(subsys->model);
+ 
+ 	if (!uuid_is_null(&ids->uuid))
+-		return sprintf(buf, "uuid.%pU\n", &ids->uuid);
++		return sysfs_emit(buf, "uuid.%pU\n", &ids->uuid);
+ 
+ 	if (memchr_inv(ids->nguid, 0, sizeof(ids->nguid)))
+-		return sprintf(buf, "eui.%16phN\n", ids->nguid);
++		return sysfs_emit(buf, "eui.%16phN\n", ids->nguid);
+ 
+ 	if (memchr_inv(ids->eui64, 0, sizeof(ids->eui64)))
+-		return sprintf(buf, "eui.%8phN\n", ids->eui64);
++		return sysfs_emit(buf, "eui.%8phN\n", ids->eui64);
+ 
+ 	while (serial_len > 0 && (subsys->serial[serial_len - 1] == ' ' ||
+ 				  subsys->serial[serial_len - 1] == '\0'))
+@@ -3350,7 +3350,7 @@ static ssize_t wwid_show(struct device *dev, struct device_attribute *attr,
+ 				 subsys->model[model_len - 1] == '\0'))
+ 		model_len--;
+ 
+-	return sprintf(buf, "nvme.%04x-%*phN-%*phN-%08x\n", subsys->vendor_id,
++	return sysfs_emit(buf, "nvme.%04x-%*phN-%*phN-%08x\n", subsys->vendor_id,
+ 		serial_len, subsys->serial, model_len, subsys->model,
+ 		head->ns_id);
+ }
+@@ -3359,7 +3359,7 @@ static DEVICE_ATTR_RO(wwid);
+ static ssize_t nguid_show(struct device *dev, struct device_attribute *attr,
+ 		char *buf)
+ {
+-	return sprintf(buf, "%pU\n", dev_to_ns_head(dev)->ids.nguid);
++	return sysfs_emit(buf, "%pU\n", dev_to_ns_head(dev)->ids.nguid);
+ }
+ static DEVICE_ATTR_RO(nguid);
+ 
+@@ -3372,25 +3372,25 @@ static ssize_t uuid_show(struct device *dev, struct device_attribute *attr,
+ 	 * we have no UUID set
+ 	 */
+ 	if (uuid_is_null(&ids->uuid)) {
+-		printk_ratelimited(KERN_WARNING
+-				   "No UUID available providing old NGUID\n");
+-		return sprintf(buf, "%pU\n", ids->nguid);
++		dev_warn_ratelimited(dev,
++			"No UUID available providing old NGUID\n");
++		return sysfs_emit(buf, "%pU\n", ids->nguid);
+ 	}
+-	return sprintf(buf, "%pU\n", &ids->uuid);
++	return sysfs_emit(buf, "%pU\n", &ids->uuid);
+ }
+ static DEVICE_ATTR_RO(uuid);
+ 
+ static ssize_t eui_show(struct device *dev, struct device_attribute *attr,
+ 		char *buf)
+ {
+-	return sprintf(buf, "%8ph\n", dev_to_ns_head(dev)->ids.eui64);
++	return sysfs_emit(buf, "%8ph\n", dev_to_ns_head(dev)->ids.eui64);
+ }
+ static DEVICE_ATTR_RO(eui);
+ 
+ static ssize_t nsid_show(struct device *dev, struct device_attribute *attr,
+ 		char *buf)
+ {
+-	return sprintf(buf, "%d\n", dev_to_ns_head(dev)->ns_id);
++	return sysfs_emit(buf, "%d\n", dev_to_ns_head(dev)->ns_id);
+ }
+ static DEVICE_ATTR_RO(nsid);
+ 
+@@ -3455,7 +3455,7 @@ static ssize_t  field##_show(struct device *dev,				\
+ 			    struct device_attribute *attr, char *buf)		\
+ {										\
+         struct nvme_ctrl *ctrl = dev_get_drvdata(dev);				\
+-        return sprintf(buf, "%.*s\n",						\
++        return sysfs_emit(buf, "%.*s\n",					\
+ 		(int)sizeof(ctrl->subsys->field), ctrl->subsys->field);		\
+ }										\
+ static DEVICE_ATTR(field, S_IRUGO, field##_show, NULL);
+@@ -3469,7 +3469,7 @@ static ssize_t  field##_show(struct device *dev,				\
+ 			    struct device_attribute *attr, char *buf)		\
+ {										\
+         struct nvme_ctrl *ctrl = dev_get_drvdata(dev);				\
+-        return sprintf(buf, "%d\n", ctrl->field);	\
++        return sysfs_emit(buf, "%d\n", ctrl->field);				\
+ }										\
+ static DEVICE_ATTR(field, S_IRUGO, field##_show, NULL);
+ 
+@@ -3517,9 +3517,9 @@ static ssize_t nvme_sysfs_show_state(struct device *dev,
+ 
+ 	if ((unsigned)ctrl->state < ARRAY_SIZE(state_name) &&
+ 	    state_name[ctrl->state])
+-		return sprintf(buf, "%s\n", state_name[ctrl->state]);
++		return sysfs_emit(buf, "%s\n", state_name[ctrl->state]);
+ 
+-	return sprintf(buf, "unknown state\n");
++	return sysfs_emit(buf, "unknown state\n");
+ }
+ 
+ static DEVICE_ATTR(state, S_IRUGO, nvme_sysfs_show_state, NULL);
+@@ -3571,9 +3571,9 @@ static ssize_t nvme_ctrl_loss_tmo_show(struct device *dev,
+ 	struct nvmf_ctrl_options *opts = ctrl->opts;
+ 
+ 	if (ctrl->opts->max_reconnects == -1)
+-		return sprintf(buf, "off\n");
+-	return sprintf(buf, "%d\n",
+-			opts->max_reconnects * opts->reconnect_delay);
++		return sysfs_emit(buf, "off\n");
++	return sysfs_emit(buf, "%d\n",
++			  opts->max_reconnects * opts->reconnect_delay);
+ }
+ 
+ static ssize_t nvme_ctrl_loss_tmo_store(struct device *dev,
+@@ -3603,8 +3603,8 @@ static ssize_t nvme_ctrl_reconnect_delay_show(struct device *dev,
+ 	struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+ 
+ 	if (ctrl->opts->reconnect_delay == -1)
+-		return sprintf(buf, "off\n");
+-	return sprintf(buf, "%d\n", ctrl->opts->reconnect_delay);
++		return sysfs_emit(buf, "off\n");
++	return sysfs_emit(buf, "%d\n", ctrl->opts->reconnect_delay);
+ }
+ 
+ static ssize_t nvme_ctrl_reconnect_delay_store(struct device *dev,
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index a9e15c8f907b7..379d6818a0635 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -624,8 +624,8 @@ static ssize_t nvme_subsys_iopolicy_show(struct device *dev,
+ 	struct nvme_subsystem *subsys =
+ 		container_of(dev, struct nvme_subsystem, dev);
+ 
+-	return sprintf(buf, "%s\n",
+-			nvme_iopolicy_names[READ_ONCE(subsys->iopolicy)]);
++	return sysfs_emit(buf, "%s\n",
++			  nvme_iopolicy_names[READ_ONCE(subsys->iopolicy)]);
+ }
+ 
+ static ssize_t nvme_subsys_iopolicy_store(struct device *dev,
+@@ -650,7 +650,7 @@ SUBSYS_ATTR_RW(iopolicy, S_IRUGO | S_IWUSR,
+ static ssize_t ana_grpid_show(struct device *dev, struct device_attribute *attr,
+ 		char *buf)
+ {
+-	return sprintf(buf, "%d\n", nvme_get_ns_from_dev(dev)->ana_grpid);
++	return sysfs_emit(buf, "%d\n", nvme_get_ns_from_dev(dev)->ana_grpid);
+ }
+ DEVICE_ATTR_RO(ana_grpid);
+ 
+@@ -659,7 +659,7 @@ static ssize_t ana_state_show(struct device *dev, struct device_attribute *attr,
+ {
+ 	struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
+ 
+-	return sprintf(buf, "%s\n", nvme_ana_state_names[ns->ana_state]);
++	return sysfs_emit(buf, "%s\n", nvme_ana_state_names[ns->ana_state]);
+ }
+ DEVICE_ATTR_RO(ana_state);
+ 
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index a96dc6f530760..4084764bf0b1b 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -585,11 +585,8 @@ static inline void pcie_ecrc_get_policy(char *str) { }
+ 
+ #ifdef CONFIG_PCIE_PTM
+ void pci_ptm_init(struct pci_dev *dev);
+-int pci_enable_ptm(struct pci_dev *dev, u8 *granularity);
+ #else
+ static inline void pci_ptm_init(struct pci_dev *dev) { }
+-static inline int pci_enable_ptm(struct pci_dev *dev, u8 *granularity)
+-{ return -EINVAL; }
+ #endif
+ 
+ struct pci_dev_reset_methods {
+diff --git a/drivers/platform/mips/Kconfig b/drivers/platform/mips/Kconfig
+index 8ac149173c64b..495da331ca2db 100644
+--- a/drivers/platform/mips/Kconfig
++++ b/drivers/platform/mips/Kconfig
+@@ -17,7 +17,7 @@ menuconfig MIPS_PLATFORM_DEVICES
+ if MIPS_PLATFORM_DEVICES
+ 
+ config CPU_HWMON
+-	tristate "Loongson-3 CPU HWMon Driver"
++	bool "Loongson-3 CPU HWMon Driver"
+ 	depends on MACH_LOONGSON64
+ 	select HWMON
+ 	default y
+diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
+index b0aa58d117cc9..90e8a538b078b 100644
+--- a/drivers/scsi/ipr.c
++++ b/drivers/scsi/ipr.c
+@@ -9792,7 +9792,7 @@ static int ipr_alloc_mem(struct ipr_ioa_cfg *ioa_cfg)
+ 					GFP_KERNEL);
+ 
+ 		if (!ioa_cfg->hrrq[i].host_rrq)  {
+-			while (--i > 0)
++			while (--i >= 0)
+ 				dma_free_coherent(&pdev->dev,
+ 					sizeof(u32) * ioa_cfg->hrrq[i].size,
+ 					ioa_cfg->hrrq[i].host_rrq,
+@@ -10065,7 +10065,7 @@ static int ipr_request_other_msi_irqs(struct ipr_ioa_cfg *ioa_cfg,
+ 			ioa_cfg->vectors_info[i].desc,
+ 			&ioa_cfg->hrrq[i]);
+ 		if (rc) {
+-			while (--i >= 0)
++			while (--i > 0)
+ 				free_irq(pci_irq_vector(pdev, i),
+ 					&ioa_cfg->hrrq[i]);
+ 			return rc;
+diff --git a/drivers/scsi/lpfc/lpfc_hw4.h b/drivers/scsi/lpfc/lpfc_hw4.h
+index 47e832b7f2c25..bfbc1c4fcab18 100644
+--- a/drivers/scsi/lpfc/lpfc_hw4.h
++++ b/drivers/scsi/lpfc/lpfc_hw4.h
+@@ -4281,6 +4281,9 @@ struct wqe_common {
+ #define wqe_sup_SHIFT         6
+ #define wqe_sup_MASK          0x00000001
+ #define wqe_sup_WORD          word11
++#define wqe_ffrq_SHIFT         6
++#define wqe_ffrq_MASK          0x00000001
++#define wqe_ffrq_WORD          word11
+ #define wqe_wqec_SHIFT        7
+ #define wqe_wqec_MASK         0x00000001
+ #define wqe_wqec_WORD         word11
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index e33f752318c19..1e22364a31fcf 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -857,7 +857,8 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ 		lpfc_nvmet_invalidate_host(phba, ndlp);
+ 
+ 	if (ndlp->nlp_DID == Fabric_DID) {
+-		if (vport->port_state <= LPFC_FDISC)
++		if (vport->port_state <= LPFC_FDISC ||
++		    vport->fc_flag & FC_PT2PT)
+ 			goto out;
+ 		lpfc_linkdown_port(vport);
+ 		spin_lock_irq(shost->host_lock);
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index 03c81cec6bc98..ef92e0b4b9cf9 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -1315,7 +1315,8 @@ lpfc_nvme_prep_io_cmd(struct lpfc_vport *vport,
+ {
+ 	struct lpfc_hba *phba = vport->phba;
+ 	struct nvmefc_fcp_req *nCmd = lpfc_ncmd->nvmeCmd;
+-	struct lpfc_iocbq *pwqeq = &(lpfc_ncmd->cur_iocbq);
++	struct nvme_common_command *sqe;
++	struct lpfc_iocbq *pwqeq = &lpfc_ncmd->cur_iocbq;
+ 	union lpfc_wqe128 *wqe = &pwqeq->wqe;
+ 	uint32_t req_len;
+ 
+@@ -1371,8 +1372,14 @@ lpfc_nvme_prep_io_cmd(struct lpfc_vport *vport,
+ 		cstat->control_requests++;
+ 	}
+ 
+-	if (pnode->nlp_nvme_info & NLP_NVME_NSLER)
++	if (pnode->nlp_nvme_info & NLP_NVME_NSLER) {
+ 		bf_set(wqe_erp, &wqe->generic.wqe_com, 1);
++		sqe = &((struct nvme_fc_cmd_iu *)
++			nCmd->cmdaddr)->sqe.common;
++		if (sqe->opcode == nvme_admin_async_event)
++			bf_set(wqe_ffrq, &wqe->generic.wqe_com, 1);
++	}
++
+ 	/*
+ 	 * Finish initializing those WQE fields that are independent
+ 	 * of the nvme_cmnd request_buffer
+diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c
+index cbe5fab793eb2..ce10d680c56cd 100644
+--- a/drivers/scsi/pmcraid.c
++++ b/drivers/scsi/pmcraid.c
+@@ -4528,7 +4528,7 @@ pmcraid_register_interrupt_handler(struct pmcraid_instance *pinstance)
+ 	return 0;
+ 
+ out_unwind:
+-	while (--i > 0)
++	while (--i >= 0)
+ 		free_irq(pci_irq_vector(pdev, i), &pinstance->hrrq_vector[i]);
+ 	pci_free_irq_vectors(pdev);
+ 	return rc;
+diff --git a/drivers/scsi/vmw_pvscsi.h b/drivers/scsi/vmw_pvscsi.h
+index 75966d3f326e0..d87c12324c032 100644
+--- a/drivers/scsi/vmw_pvscsi.h
++++ b/drivers/scsi/vmw_pvscsi.h
+@@ -333,8 +333,8 @@ struct PVSCSIRingReqDesc {
+ 	u8	tag;
+ 	u8	bus;
+ 	u8	target;
+-	u8	vcpuHint;
+-	u8	unused[59];
++	u16	vcpuHint;
++	u8	unused[58];
+ } __packed;
+ 
+ /*
+diff --git a/drivers/staging/comedi/drivers/vmk80xx.c b/drivers/staging/comedi/drivers/vmk80xx.c
+index 7769eadfaf61d..ccc65cfc519f5 100644
+--- a/drivers/staging/comedi/drivers/vmk80xx.c
++++ b/drivers/staging/comedi/drivers/vmk80xx.c
+@@ -685,7 +685,7 @@ static int vmk80xx_alloc_usb_buffers(struct comedi_device *dev)
+ 	if (!devpriv->usb_rx_buf)
+ 		return -ENOMEM;
+ 
+-	size = max(usb_endpoint_maxp(devpriv->ep_rx), MIN_BUF_SIZE);
++	size = max(usb_endpoint_maxp(devpriv->ep_tx), MIN_BUF_SIZE);
+ 	devpriv->usb_tx_buf = kzalloc(size, GFP_KERNEL);
+ 	if (!devpriv->usb_tx_buf)
+ 		return -ENOMEM;
+diff --git a/drivers/tty/goldfish.c b/drivers/tty/goldfish.c
+index abc84d84f6386..9180ca5e4dcd4 100644
+--- a/drivers/tty/goldfish.c
++++ b/drivers/tty/goldfish.c
+@@ -428,7 +428,7 @@ static int goldfish_tty_remove(struct platform_device *pdev)
+ 	tty_unregister_device(goldfish_tty_driver, qtty->console.index);
+ 	iounmap(qtty->base);
+ 	qtty->base = NULL;
+-	free_irq(qtty->irq, pdev);
++	free_irq(qtty->irq, qtty);
+ 	tty_port_destroy(&qtty->port);
+ 	goldfish_tty_current_line_count--;
+ 	if (goldfish_tty_current_line_count == 0)
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index e0fa24f0f732d..9cf5177815a87 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1532,6 +1532,8 @@ static inline void __stop_tx(struct uart_8250_port *p)
+ 
+ 	if (em485) {
+ 		unsigned char lsr = serial_in(p, UART_LSR);
++		p->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
++
+ 		/*
+ 		 * To provide required timeing and allow FIFO transfer,
+ 		 * __stop_tx_rs485() must be called only when both FIFO and
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 30919f741b7fd..9279d3d3698c2 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -5076,7 +5076,7 @@ int dwc2_hcd_init(struct dwc2_hsotg *hsotg)
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (!res) {
+ 		retval = -EINVAL;
+-		goto error1;
++		goto error2;
+ 	}
+ 	hcd->rsrc_start = res->start;
+ 	hcd->rsrc_len = resource_size(res);
+diff --git a/drivers/usb/gadget/udc/lpc32xx_udc.c b/drivers/usb/gadget/udc/lpc32xx_udc.c
+index 3f1c62adce4b6..314cb5ea061a2 100644
+--- a/drivers/usb/gadget/udc/lpc32xx_udc.c
++++ b/drivers/usb/gadget/udc/lpc32xx_udc.c
+@@ -3015,6 +3015,7 @@ static int lpc32xx_udc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	udc->isp1301_i2c_client = isp1301_get_client(isp1301_node);
++	of_node_put(isp1301_node);
+ 	if (!udc->isp1301_i2c_client) {
+ 		return -EPROBE_DEFER;
+ 	}
+diff --git a/drivers/usb/serial/io_ti.c b/drivers/usb/serial/io_ti.c
+index c327d4cf79285..03bcab3b9bd09 100644
+--- a/drivers/usb/serial/io_ti.c
++++ b/drivers/usb/serial/io_ti.c
+@@ -168,6 +168,7 @@ static const struct usb_device_id edgeport_2port_id_table[] = {
+ 	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_8S) },
+ 	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416) },
+ 	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416B) },
++	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_E5805A) },
+ 	{ }
+ };
+ 
+@@ -206,6 +207,7 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_8S) },
+ 	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416) },
+ 	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416B) },
++	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_E5805A) },
+ 	{ }
+ };
+ 
+diff --git a/drivers/usb/serial/io_usbvend.h b/drivers/usb/serial/io_usbvend.h
+index 52cbc353051fe..9a6f742ad3abd 100644
+--- a/drivers/usb/serial/io_usbvend.h
++++ b/drivers/usb/serial/io_usbvend.h
+@@ -212,6 +212,7 @@
+ //
+ // Definitions for other product IDs
+ #define ION_DEVICE_ID_MT4X56USB			0x1403	// OEM device
++#define ION_DEVICE_ID_E5805A			0x1A01  // OEM device (rebranded Edgeport/4)
+ 
+ 
+ #define	GENERATION_ID_FROM_USB_PRODUCT_ID(ProductId)				\
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index a40c0f3b85c2c..3744cde5146f4 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -432,6 +432,8 @@ static void option_instat_callback(struct urb *urb);
+ #define CINTERION_PRODUCT_CLS8			0x00b0
+ #define CINTERION_PRODUCT_MV31_MBIM		0x00b3
+ #define CINTERION_PRODUCT_MV31_RMNET		0x00b7
++#define CINTERION_PRODUCT_MV31_2_MBIM		0x00b8
++#define CINTERION_PRODUCT_MV31_2_RMNET		0x00b9
+ #define CINTERION_PRODUCT_MV32_WA		0x00f1
+ #define CINTERION_PRODUCT_MV32_WB		0x00f2
+ 
+@@ -1979,6 +1981,10 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(3)},
+ 	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_RMNET, 0xff),
+ 	  .driver_info = RSVD(0)},
++	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_2_MBIM, 0xff),
++	  .driver_info = RSVD(3)},
++	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_2_RMNET, 0xff),
++	  .driver_info = RSVD(0)},
+ 	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WA, 0xff),
+ 	  .driver_info = RSVD(3)},
+ 	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WB, 0xff),
+diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
+index 238383ff1064c..5c970e6f664c8 100644
+--- a/drivers/virtio/virtio_mmio.c
++++ b/drivers/virtio/virtio_mmio.c
+@@ -689,6 +689,7 @@ static int vm_cmdline_set(const char *device,
+ 	if (!vm_cmdline_parent_registered) {
+ 		err = device_register(&vm_cmdline_parent);
+ 		if (err) {
++			put_device(&vm_cmdline_parent);
+ 			pr_err("Failed to register parent device!\n");
+ 			return err;
+ 		}
+diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
+index b35bb2d57f62c..1e890ef176873 100644
+--- a/drivers/virtio/virtio_pci_common.c
++++ b/drivers/virtio/virtio_pci_common.c
+@@ -254,8 +254,7 @@ void vp_del_vqs(struct virtio_device *vdev)
+ 
+ 	if (vp_dev->msix_affinity_masks) {
+ 		for (i = 0; i < vp_dev->msix_vectors; i++)
+-			if (vp_dev->msix_affinity_masks[i])
+-				free_cpumask_var(vp_dev->msix_affinity_masks[i]);
++			free_cpumask_var(vp_dev->msix_affinity_masks[i]);
+ 	}
+ 
+ 	if (vp_dev->msix_enabled) {
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index a13ef836fe4e1..84f3a6405b558 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -657,14 +657,10 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
+ 		if (stat->st_result_mask & P9_STATS_NLINK)
+ 			set_nlink(inode, stat->st_nlink);
+ 		if (stat->st_result_mask & P9_STATS_MODE) {
+-			inode->i_mode = stat->st_mode;
+-			if ((S_ISBLK(inode->i_mode)) ||
+-						(S_ISCHR(inode->i_mode)))
+-				init_special_inode(inode, inode->i_mode,
+-								inode->i_rdev);
++			mode = stat->st_mode & S_IALLUGO;
++			mode |= inode->i_mode & ~S_IALLUGO;
++			inode->i_mode = mode;
+ 		}
+-		if (stat->st_result_mask & P9_STATS_RDEV)
+-			inode->i_rdev = new_decode_dev(stat->st_rdev);
+ 		if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE) &&
+ 		    stat->st_result_mask & P9_STATS_SIZE)
+ 			v9fs_i_size_write(inode, stat->st_size);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 15223b5a3af97..c32d0895c3a3d 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -3520,6 +3520,15 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
+ 	size = size >> bsbits;
+ 	start = start_off >> bsbits;
+ 
++	/*
++	 * For tiny groups (smaller than 8MB) the chosen allocation
++	 * alignment may be larger than group size. Make sure the
++	 * alignment does not move allocation to a different group which
++	 * makes mballoc fail assertions later.
++	 */
++	start = max(start, rounddown(ac->ac_o_ex.fe_logical,
++			(ext4_lblk_t)EXT4_BLOCKS_PER_GROUP(ac->ac_sb)));
++
+ 	/* don't cover already allocated blocks in selected range */
+ 	if (ar->pleft && start <= ar->lleft) {
+ 		size -= ar->lleft + 1 - start;
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index feae39f1db37c..2c9ae72a1f5cb 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1841,7 +1841,8 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ 			struct dx_hash_info *hinfo)
+ {
+ 	unsigned blocksize = dir->i_sb->s_blocksize;
+-	unsigned count, continued;
++	unsigned continued;
++	int count;
+ 	struct buffer_head *bh2;
+ 	ext4_lblk_t newblock;
+ 	u32 hash2;
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 6513079c728be..015028302305d 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -52,6 +52,16 @@ int ext4_resize_begin(struct super_block *sb)
+ 	if (!capable(CAP_SYS_RESOURCE))
+ 		return -EPERM;
+ 
++	/*
++	 * If the reserved GDT blocks is non-zero, the resize_inode feature
++	 * should always be set.
++	 */
++	if (EXT4_SB(sb)->s_es->s_reserved_gdt_blocks &&
++	    !ext4_has_feature_resize_inode(sb)) {
++		ext4_error(sb, "resize_inode disabled but reserved GDT blocks non-zero");
++		return -EFSCORRUPTED;
++	}
++
+ 	/*
+ 	 * If we are not using the primary superblock/GDT copy don't resize,
+          * because the user tools have no way of handling this.  Probably a
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index a5209643ac36c..bfdd212240739 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -283,6 +283,7 @@ static u32 initiate_file_draining(struct nfs_client *clp,
+ 		rv = NFS4_OK;
+ 		break;
+ 	case -ENOENT:
++		set_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags);
+ 		/* Embrace your forgetfulness! */
+ 		rv = NFS4ERR_NOMATCHING_LAYOUT;
+ 
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 8c0803d980084..21436721745b6 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -469,6 +469,7 @@ pnfs_mark_layout_stateid_invalid(struct pnfs_layout_hdr *lo,
+ 		pnfs_clear_lseg_state(lseg, lseg_list);
+ 	pnfs_clear_layoutreturn_info(lo);
+ 	pnfs_free_returned_lsegs(lo, lseg_list, &range, 0);
++	set_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags);
+ 	if (test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags) &&
+ 	    !test_and_set_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags))
+ 		pnfs_clear_layoutreturn_waitbit(lo);
+@@ -1923,8 +1924,9 @@ static void nfs_layoutget_begin(struct pnfs_layout_hdr *lo)
+ 
+ static void nfs_layoutget_end(struct pnfs_layout_hdr *lo)
+ {
+-	if (atomic_dec_and_test(&lo->plh_outstanding))
+-		wake_up_var(&lo->plh_outstanding);
++	if (atomic_dec_and_test(&lo->plh_outstanding) &&
++	    test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags))
++		wake_up_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN);
+ }
+ 
+ static bool pnfs_is_first_layoutget(struct pnfs_layout_hdr *lo)
+@@ -2031,11 +2033,11 @@ lookup_again:
+ 	 * If the layout segment list is empty, but there are outstanding
+ 	 * layoutget calls, then they might be subject to a layoutrecall.
+ 	 */
+-	if ((list_empty(&lo->plh_segs) || !pnfs_layout_is_valid(lo)) &&
++	if (test_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags) &&
+ 	    atomic_read(&lo->plh_outstanding) != 0) {
+ 		spin_unlock(&ino->i_lock);
+-		lseg = ERR_PTR(wait_var_event_killable(&lo->plh_outstanding,
+-					!atomic_read(&lo->plh_outstanding)));
++		lseg = ERR_PTR(wait_on_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN,
++					   TASK_KILLABLE));
+ 		if (IS_ERR(lseg))
+ 			goto out_put_layout_hdr;
+ 		pnfs_put_layout_hdr(lo);
+@@ -2155,6 +2157,12 @@ lookup_again:
+ 		case -ERECALLCONFLICT:
+ 		case -EAGAIN:
+ 			break;
++		case -ENODATA:
++			/* The server returned NFS4ERR_LAYOUTUNAVAILABLE */
++			pnfs_layout_set_fail_bit(
++				lo, pnfs_iomode_to_fail_bit(iomode));
++			lseg = NULL;
++			goto out_put_layout_hdr;
+ 		default:
+ 			if (!nfs_error_is_fatal(PTR_ERR(lseg))) {
+ 				pnfs_layout_clear_fail_bit(lo, pnfs_iomode_to_fail_bit(iomode));
+@@ -2408,7 +2416,8 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
+ 		goto out_forget;
+ 	}
+ 
+-	if (!pnfs_layout_is_valid(lo) && !pnfs_is_first_layoutget(lo))
++	if (test_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags) &&
++	    !pnfs_is_first_layoutget(lo))
+ 		goto out_forget;
+ 
+ 	if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index 11d9ed9addc06..a7cf84a6673bf 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -107,6 +107,7 @@ enum {
+ 	NFS_LAYOUT_FIRST_LAYOUTGET,	/* Serialize first layoutget */
+ 	NFS_LAYOUT_INODE_FREEING,	/* The inode is being freed */
+ 	NFS_LAYOUT_HASHED,		/* The layout visible */
++	NFS_LAYOUT_DRAIN,
+ };
+ 
+ enum layoutdriver_policy_flags {
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index acd0898e3866d..e30e1ddc1aceb 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -194,7 +194,6 @@ nfsd_file_alloc(struct inode *inode, unsigned int may, unsigned int hashval,
+ 				__set_bit(NFSD_FILE_BREAK_READ, &nf->nf_flags);
+ 		}
+ 		nf->nf_mark = NULL;
+-		init_rwsem(&nf->nf_rwsem);
+ 		trace_nfsd_file_alloc(nf);
+ 	}
+ 	return nf;
+diff --git a/fs/nfsd/filecache.h b/fs/nfsd/filecache.h
+index 7872df5a0fe3a..435ceab27897a 100644
+--- a/fs/nfsd/filecache.h
++++ b/fs/nfsd/filecache.h
+@@ -46,7 +46,6 @@ struct nfsd_file {
+ 	refcount_t		nf_ref;
+ 	unsigned char		nf_may;
+ 	struct nfsd_file_mark	*nf_mark;
+-	struct rw_semaphore	nf_rwsem;
+ };
+ 
+ int nfsd_file_cache_init(void);
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 7850d141c7621..735ee8a798705 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1380,6 +1380,8 @@ static void nfsd4_init_copy_res(struct nfsd4_copy *copy, bool sync)
+ 
+ static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy)
+ {
++	struct file *dst = copy->nf_dst->nf_file;
++	struct file *src = copy->nf_src->nf_file;
+ 	ssize_t bytes_copied = 0;
+ 	size_t bytes_total = copy->cp_count;
+ 	u64 src_pos = copy->cp_src_pos;
+@@ -1388,9 +1390,8 @@ static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy)
+ 	do {
+ 		if (kthread_should_stop())
+ 			break;
+-		bytes_copied = nfsd_copy_file_range(copy->nf_src->nf_file,
+-				src_pos, copy->nf_dst->nf_file, dst_pos,
+-				bytes_total);
++		bytes_copied = nfsd_copy_file_range(src, src_pos, dst, dst_pos,
++						    bytes_total);
+ 		if (bytes_copied <= 0)
+ 			break;
+ 		bytes_total -= bytes_copied;
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 011cd570b50df..548ebc913f920 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -535,10 +535,11 @@ __be32 nfsd4_clone_file_range(struct nfsd_file *nf_src, u64 src_pos,
+ {
+ 	struct file *src = nf_src->nf_file;
+ 	struct file *dst = nf_dst->nf_file;
++	errseq_t since;
+ 	loff_t cloned;
+ 	__be32 ret = 0;
+ 
+-	down_write(&nf_dst->nf_rwsem);
++	since = READ_ONCE(dst->f_wb_err);
+ 	cloned = vfs_clone_file_range(src, src_pos, dst, dst_pos, count, 0);
+ 	if (cloned < 0) {
+ 		ret = nfserrno(cloned);
+@@ -552,6 +553,8 @@ __be32 nfsd4_clone_file_range(struct nfsd_file *nf_src, u64 src_pos,
+ 		loff_t dst_end = count ? dst_pos + count - 1 : LLONG_MAX;
+ 		int status = vfs_fsync_range(dst, dst_pos, dst_end, 0);
+ 
++		if (!status)
++			status = filemap_check_wb_err(dst->f_mapping, since);
+ 		if (!status)
+ 			status = commit_inode_metadata(file_inode(src));
+ 		if (status < 0) {
+@@ -561,7 +564,6 @@ __be32 nfsd4_clone_file_range(struct nfsd_file *nf_src, u64 src_pos,
+ 		}
+ 	}
+ out_err:
+-	up_write(&nf_dst->nf_rwsem);
+ 	return ret;
+ }
+ 
+@@ -980,6 +982,7 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
+ 	struct file		*file = nf->nf_file;
+ 	struct svc_export	*exp;
+ 	struct iov_iter		iter;
++	errseq_t		since;
+ 	__be32			nfserr;
+ 	int			host_err;
+ 	int			use_wgather;
+@@ -1009,21 +1012,18 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
+ 		flags |= RWF_SYNC;
+ 
+ 	iov_iter_kvec(&iter, WRITE, vec, vlen, *cnt);
++	since = READ_ONCE(file->f_wb_err);
+ 	if (flags & RWF_SYNC) {
+-		down_write(&nf->nf_rwsem);
+ 		host_err = vfs_iter_write(file, &iter, &pos, flags);
+ 		if (host_err < 0)
+ 			nfsd_reset_boot_verifier(net_generic(SVC_NET(rqstp),
+ 						 nfsd_net_id));
+-		up_write(&nf->nf_rwsem);
+ 	} else {
+-		down_read(&nf->nf_rwsem);
+ 		if (verf)
+ 			nfsd_copy_boot_verifier(verf,
+ 					net_generic(SVC_NET(rqstp),
+ 					nfsd_net_id));
+ 		host_err = vfs_iter_write(file, &iter, &pos, flags);
+-		up_read(&nf->nf_rwsem);
+ 	}
+ 	if (host_err < 0) {
+ 		nfsd_reset_boot_verifier(net_generic(SVC_NET(rqstp),
+@@ -1033,6 +1033,9 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
+ 	*cnt = host_err;
+ 	nfsdstats.io_write += *cnt;
+ 	fsnotify_modify(file);
++	host_err = filemap_check_wb_err(file->f_mapping, since);
++	if (host_err < 0)
++		goto out_nfserr;
+ 
+ 	if (stable && use_wgather) {
+ 		host_err = wait_for_concurrent_writes(file);
+@@ -1113,19 +1116,6 @@ out:
+ }
+ 
+ #ifdef CONFIG_NFSD_V3
+-static int
+-nfsd_filemap_write_and_wait_range(struct nfsd_file *nf, loff_t offset,
+-				  loff_t end)
+-{
+-	struct address_space *mapping = nf->nf_file->f_mapping;
+-	int ret = filemap_fdatawrite_range(mapping, offset, end);
+-
+-	if (ret)
+-		return ret;
+-	filemap_fdatawait_range_keep_errors(mapping, offset, end);
+-	return 0;
+-}
+-
+ /*
+  * Commit all pending writes to stable storage.
+  *
+@@ -1156,25 +1146,25 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 	if (err)
+ 		goto out;
+ 	if (EX_ISSYNC(fhp->fh_export)) {
+-		int err2 = nfsd_filemap_write_and_wait_range(nf, offset, end);
++		errseq_t since = READ_ONCE(nf->nf_file->f_wb_err);
++		int err2;
+ 
+-		down_write(&nf->nf_rwsem);
+-		if (!err2)
+-			err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
++		err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
+ 		switch (err2) {
+ 		case 0:
+ 			nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net,
+ 						nfsd_net_id));
++			err2 = filemap_check_wb_err(nf->nf_file->f_mapping,
++						    since);
+ 			break;
+ 		case -EINVAL:
+ 			err = nfserr_notsupp;
+ 			break;
+ 		default:
+-			err = nfserrno(err2);
+ 			nfsd_reset_boot_verifier(net_generic(nf->nf_net,
+ 						 nfsd_net_id));
+ 		}
+-		up_write(&nf->nf_rwsem);
++		err = nfserrno(err2);
+ 	} else
+ 		nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net,
+ 					nfsd_net_id));
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index 09fb8459bb5ce..65f123d5809bd 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -79,6 +79,7 @@
+ #include <linux/capability.h>
+ #include <linux/quotaops.h>
+ #include <linux/blkdev.h>
++#include <linux/sched/mm.h>
+ #include "../internal.h" /* ugh */
+ 
+ #include <linux/uaccess.h>
+@@ -427,9 +428,11 @@ EXPORT_SYMBOL(mark_info_dirty);
+ int dquot_acquire(struct dquot *dquot)
+ {
+ 	int ret = 0, ret2 = 0;
++	unsigned int memalloc;
+ 	struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);
+ 
+ 	mutex_lock(&dquot->dq_lock);
++	memalloc = memalloc_nofs_save();
+ 	if (!test_bit(DQ_READ_B, &dquot->dq_flags)) {
+ 		ret = dqopt->ops[dquot->dq_id.type]->read_dqblk(dquot);
+ 		if (ret < 0)
+@@ -460,6 +463,7 @@ int dquot_acquire(struct dquot *dquot)
+ 	smp_mb__before_atomic();
+ 	set_bit(DQ_ACTIVE_B, &dquot->dq_flags);
+ out_iolock:
++	memalloc_nofs_restore(memalloc);
+ 	mutex_unlock(&dquot->dq_lock);
+ 	return ret;
+ }
+@@ -471,9 +475,11 @@ EXPORT_SYMBOL(dquot_acquire);
+ int dquot_commit(struct dquot *dquot)
+ {
+ 	int ret = 0;
++	unsigned int memalloc;
+ 	struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);
+ 
+ 	mutex_lock(&dquot->dq_lock);
++	memalloc = memalloc_nofs_save();
+ 	if (!clear_dquot_dirty(dquot))
+ 		goto out_lock;
+ 	/* Inactive dquot can be only if there was error during read/init
+@@ -483,6 +489,7 @@ int dquot_commit(struct dquot *dquot)
+ 	else
+ 		ret = -EIO;
+ out_lock:
++	memalloc_nofs_restore(memalloc);
+ 	mutex_unlock(&dquot->dq_lock);
+ 	return ret;
+ }
+@@ -494,9 +501,11 @@ EXPORT_SYMBOL(dquot_commit);
+ int dquot_release(struct dquot *dquot)
+ {
+ 	int ret = 0, ret2 = 0;
++	unsigned int memalloc;
+ 	struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);
+ 
+ 	mutex_lock(&dquot->dq_lock);
++	memalloc = memalloc_nofs_save();
+ 	/* Check whether we are not racing with some other dqget() */
+ 	if (dquot_is_busy(dquot))
+ 		goto out_dqlock;
+@@ -512,6 +521,7 @@ int dquot_release(struct dquot *dquot)
+ 	}
+ 	clear_bit(DQ_ACTIVE_B, &dquot->dq_flags);
+ out_dqlock:
++	memalloc_nofs_restore(memalloc);
+ 	mutex_unlock(&dquot->dq_lock);
+ 	return ret;
+ }
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index bc5a1150f0723..692ce678c5f1c 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -1599,6 +1599,13 @@ static inline bool pci_aer_available(void) { return false; }
+ 
+ bool pci_ats_disabled(void);
+ 
++#ifdef CONFIG_PCIE_PTM
++int pci_enable_ptm(struct pci_dev *dev, u8 *granularity);
++#else
++static inline int pci_enable_ptm(struct pci_dev *dev, u8 *granularity)
++{ return -EINVAL; }
++#endif
++
+ void pci_cfg_access_lock(struct pci_dev *dev);
+ bool pci_cfg_access_trylock(struct pci_dev *dev);
+ void pci_cfg_access_unlock(struct pci_dev *dev);
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index c19e669afba0e..0c5bf98d55767 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -121,7 +121,8 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
+ 		return ERR_PTR(-E2BIG);
+ 
+ 	cost = n_buckets * sizeof(struct stack_map_bucket *) + sizeof(*smap);
+-	err = bpf_map_charge_init(&mem, cost);
++	err = bpf_map_charge_init(&mem, cost + attr->max_entries *
++			   (sizeof(struct stack_map_bucket) + (u64)value_size));
+ 	if (err)
+ 		return ERR_PTR(err);
+ 
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index ee7da1f2462f5..ae9fc1ee6d206 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -564,7 +564,7 @@ static void add_dma_entry(struct dma_debug_entry *entry)
+ 
+ 	rc = active_cacheline_insert(entry);
+ 	if (rc == -ENOMEM) {
+-		pr_err("cacheline tracking ENOMEM, dma-debug disabled\n");
++		pr_err_once("cacheline tracking ENOMEM, dma-debug disabled\n");
+ 		global_disable = true;
+ 	}
+ 
+diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
+index 06c111544f61d..2922250f93b44 100644
+--- a/kernel/dma/direct.c
++++ b/kernel/dma/direct.c
+@@ -188,7 +188,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
+ 			goto out_free_pages;
+ 		if (force_dma_unencrypted(dev)) {
+ 			err = set_memory_decrypted((unsigned long)ret,
+-						   1 << get_order(size));
++						   PFN_UP(size));
+ 			if (err)
+ 				goto out_free_pages;
+ 		}
+@@ -210,7 +210,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
+ 	ret = page_address(page);
+ 	if (force_dma_unencrypted(dev)) {
+ 		err = set_memory_decrypted((unsigned long)ret,
+-					   1 << get_order(size));
++					   PFN_UP(size));
+ 		if (err)
+ 			goto out_free_pages;
+ 	}
+@@ -231,7 +231,7 @@ done:
+ out_encrypt_pages:
+ 	if (force_dma_unencrypted(dev)) {
+ 		err = set_memory_encrypted((unsigned long)page_address(page),
+-					   1 << get_order(size));
++					   PFN_UP(size));
+ 		/* If memory cannot be re-encrypted, it must be leaked */
+ 		if (err)
+ 			return NULL;
+@@ -244,8 +244,6 @@ out_free_pages:
+ void dma_direct_free(struct device *dev, size_t size,
+ 		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
+ {
+-	unsigned int page_order = get_order(size);
+-
+ 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
+ 	    !force_dma_unencrypted(dev)) {
+ 		/* cpu_addr is a struct page cookie, not a kernel address */
+@@ -266,7 +264,7 @@ void dma_direct_free(struct device *dev, size_t size,
+ 		return;
+ 
+ 	if (force_dma_unencrypted(dev))
+-		set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order);
++		set_memory_encrypted((unsigned long)cpu_addr, PFN_UP(size));
+ 
+ 	if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr))
+ 		vunmap(cpu_addr);
+@@ -302,8 +300,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
+ 
+ 	ret = page_address(page);
+ 	if (force_dma_unencrypted(dev)) {
+-		if (set_memory_decrypted((unsigned long)ret,
+-				1 << get_order(size)))
++		if (set_memory_decrypted((unsigned long)ret, PFN_UP(size)))
+ 			goto out_free_pages;
+ 	}
+ 	memset(ret, 0, size);
+@@ -318,7 +315,6 @@ void dma_direct_free_pages(struct device *dev, size_t size,
+ 		struct page *page, dma_addr_t dma_addr,
+ 		enum dma_data_direction dir)
+ {
+-	unsigned int page_order = get_order(size);
+ 	void *vaddr = page_address(page);
+ 
+ 	/* If cpu_addr is not from an atomic pool, dma_free_from_pool() fails */
+@@ -327,7 +323,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
+ 		return;
+ 
+ 	if (force_dma_unencrypted(dev))
+-		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
++		set_memory_encrypted((unsigned long)vaddr, PFN_UP(size));
+ 
+ 	dma_free_contiguous(dev, page, size);
+ }
+diff --git a/lib/Kconfig b/lib/Kconfig
+index 258e1ec7d5920..36326864249dd 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -103,6 +103,9 @@ config INDIRECT_PIO
+ 
+ source "lib/crypto/Kconfig"
+ 
++config LIB_MEMNEQ
++	bool
++
+ config CRC_CCITT
+ 	tristate "CRC-CCITT functions"
+ 	help
+diff --git a/lib/Makefile b/lib/Makefile
+index 69b8217652ed5..a803e1527c4b5 100644
+--- a/lib/Makefile
++++ b/lib/Makefile
+@@ -248,6 +248,7 @@ obj-$(CONFIG_DIMLIB) += dim/
+ obj-$(CONFIG_SIGNATURE) += digsig.o
+ 
+ lib-$(CONFIG_CLZ_TAB) += clz_tab.o
++lib-$(CONFIG_LIB_MEMNEQ) += memneq.o
+ 
+ obj-$(CONFIG_GENERIC_STRNCPY_FROM_USER) += strncpy_from_user.o
+ obj-$(CONFIG_GENERIC_STRNLEN_USER) += strnlen_user.o
+diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
+index 9856e291f4141..2082af43d51fb 100644
+--- a/lib/crypto/Kconfig
++++ b/lib/crypto/Kconfig
+@@ -71,6 +71,7 @@ config CRYPTO_LIB_CURVE25519
+ 	tristate "Curve25519 scalar multiplication library"
+ 	depends on CRYPTO_ARCH_HAVE_LIB_CURVE25519 || !CRYPTO_ARCH_HAVE_LIB_CURVE25519
+ 	select CRYPTO_LIB_CURVE25519_GENERIC if CRYPTO_ARCH_HAVE_LIB_CURVE25519=n
++	select LIB_MEMNEQ
+ 	help
+ 	  Enable the Curve25519 library interface. This interface may be
+ 	  fulfilled by either the generic implementation or an arch-specific
+diff --git a/lib/memneq.c b/lib/memneq.c
+new file mode 100644
+index 0000000000000..afed1bd16aee0
+--- /dev/null
++++ b/lib/memneq.c
+@@ -0,0 +1,168 @@
++/*
++ * Constant-time equality testing of memory regions.
++ *
++ * Authors:
++ *
++ *   James Yonan <james@openvpn.net>
++ *   Daniel Borkmann <dborkman@redhat.com>
++ *
++ * This file is provided under a dual BSD/GPLv2 license.  When using or
++ * redistributing this file, you may do so under either license.
++ *
++ * GPL LICENSE SUMMARY
++ *
++ * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of version 2 of the GNU General Public License as
++ * published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
++ * The full GNU General Public License is included in this distribution
++ * in the file called LICENSE.GPL.
++ *
++ * BSD LICENSE
++ *
++ * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ *
++ *   * Redistributions of source code must retain the above copyright
++ *     notice, this list of conditions and the following disclaimer.
++ *   * Redistributions in binary form must reproduce the above copyright
++ *     notice, this list of conditions and the following disclaimer in
++ *     the documentation and/or other materials provided with the
++ *     distribution.
++ *   * Neither the name of OpenVPN Technologies nor the names of its
++ *     contributors may be used to endorse or promote products derived
++ *     from this software without specific prior written permission.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
++ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
++ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
++ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
++ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
++ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
++ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
++ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
++ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#include <crypto/algapi.h>
++
++#ifndef __HAVE_ARCH_CRYPTO_MEMNEQ
++
++/* Generic path for arbitrary size */
++static inline unsigned long
++__crypto_memneq_generic(const void *a, const void *b, size_t size)
++{
++	unsigned long neq = 0;
++
++#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
++	while (size >= sizeof(unsigned long)) {
++		neq |= *(unsigned long *)a ^ *(unsigned long *)b;
++		OPTIMIZER_HIDE_VAR(neq);
++		a += sizeof(unsigned long);
++		b += sizeof(unsigned long);
++		size -= sizeof(unsigned long);
++	}
++#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
++	while (size > 0) {
++		neq |= *(unsigned char *)a ^ *(unsigned char *)b;
++		OPTIMIZER_HIDE_VAR(neq);
++		a += 1;
++		b += 1;
++		size -= 1;
++	}
++	return neq;
++}
++
++/* Loop-free fast-path for frequently used 16-byte size */
++static inline unsigned long __crypto_memneq_16(const void *a, const void *b)
++{
++	unsigned long neq = 0;
++
++#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
++	if (sizeof(unsigned long) == 8) {
++		neq |= *(unsigned long *)(a)   ^ *(unsigned long *)(b);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned long *)(a+8) ^ *(unsigned long *)(b+8);
++		OPTIMIZER_HIDE_VAR(neq);
++	} else if (sizeof(unsigned int) == 4) {
++		neq |= *(unsigned int *)(a)    ^ *(unsigned int *)(b);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned int *)(a+4)  ^ *(unsigned int *)(b+4);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned int *)(a+8)  ^ *(unsigned int *)(b+8);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned int *)(a+12) ^ *(unsigned int *)(b+12);
++		OPTIMIZER_HIDE_VAR(neq);
++	} else
++#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
++	{
++		neq |= *(unsigned char *)(a)    ^ *(unsigned char *)(b);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+1)  ^ *(unsigned char *)(b+1);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+2)  ^ *(unsigned char *)(b+2);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+3)  ^ *(unsigned char *)(b+3);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+4)  ^ *(unsigned char *)(b+4);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+5)  ^ *(unsigned char *)(b+5);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+6)  ^ *(unsigned char *)(b+6);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+7)  ^ *(unsigned char *)(b+7);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+8)  ^ *(unsigned char *)(b+8);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+9)  ^ *(unsigned char *)(b+9);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+10) ^ *(unsigned char *)(b+10);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+11) ^ *(unsigned char *)(b+11);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+12) ^ *(unsigned char *)(b+12);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+13) ^ *(unsigned char *)(b+13);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+14) ^ *(unsigned char *)(b+14);
++		OPTIMIZER_HIDE_VAR(neq);
++		neq |= *(unsigned char *)(a+15) ^ *(unsigned char *)(b+15);
++		OPTIMIZER_HIDE_VAR(neq);
++	}
++
++	return neq;
++}
++
++/* Compare two areas of memory without leaking timing information,
++ * and with special optimizations for common sizes.  Users should
++ * not call this function directly, but should instead use
++ * crypto_memneq defined in crypto/algapi.h.
++ */
++noinline unsigned long __crypto_memneq(const void *a, const void *b,
++				       size_t size)
++{
++	switch (size) {
++	case 16:
++		return __crypto_memneq_16(a, b);
++	default:
++		return __crypto_memneq_generic(a, b, size);
++	}
++}
++EXPORT_SYMBOL(__crypto_memneq);
++
++#endif /* __HAVE_ARCH_CRYPTO_MEMNEQ */
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 5fff027f25fad..a1f4cb836fcf1 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -1653,9 +1653,12 @@ static int ax25_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 			int flags)
+ {
+ 	struct sock *sk = sock->sk;
+-	struct sk_buff *skb;
++	struct sk_buff *skb, *last;
++	struct sk_buff_head *sk_queue;
+ 	int copied;
+ 	int err = 0;
++	int off = 0;
++	long timeo;
+ 
+ 	lock_sock(sk);
+ 	/*
+@@ -1667,11 +1670,29 @@ static int ax25_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 		goto out;
+ 	}
+ 
+-	/* Now we can treat all alike */
+-	skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
+-				flags & MSG_DONTWAIT, &err);
+-	if (skb == NULL)
+-		goto out;
++	/*  We need support for non-blocking reads. */
++	sk_queue = &sk->sk_receive_queue;
++	skb = __skb_try_recv_datagram(sk, sk_queue, flags, &off, &err, &last);
++	/* If no packet is available, release_sock(sk) and try again. */
++	if (!skb) {
++		if (err != -EAGAIN)
++			goto out;
++		release_sock(sk);
++		timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
++		while (timeo && !__skb_wait_for_more_packets(sk, sk_queue, &err,
++							     &timeo, last)) {
++			skb = __skb_try_recv_datagram(sk, sk_queue, flags, &off,
++						      &err, &last);
++			if (skb)
++				break;
++
++			if (err != -EAGAIN)
++				goto done;
++		}
++		if (!skb)
++			goto done;
++		lock_sock(sk);
++	}
+ 
+ 	if (!sk_to_ax25(sk)->pidincl)
+ 		skb_pull(skb, 1);		/* Remove PID */
+@@ -1718,6 +1739,7 @@ static int ax25_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ out:
+ 	release_sock(sk);
+ 
++done:
+ 	return err;
+ }
+ 
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index 96f975777438f..d54dbd01d86f1 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -502,14 +502,15 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	struct ipcm6_cookie ipc6;
+ 	int addr_len = msg->msg_namelen;
+ 	int transhdrlen = 4; /* zero session-id */
+-	int ulen = len + transhdrlen;
++	int ulen;
+ 	int err;
+ 
+ 	/* Rough check on arithmetic overflow,
+ 	 * better check is made in ip6_append_data().
+ 	 */
+-	if (len > INT_MAX)
++	if (len > INT_MAX - transhdrlen)
+ 		return -EMSGSIZE;
++	ulen = len + transhdrlen;
+ 
+ 	/* Mirror BSD error message compatibility */
+ 	if (msg->msg_flags & MSG_OOB)
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 6d8d700216662..80fee9d118eec 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -372,6 +372,7 @@ static void set_ip_addr(struct sk_buff *skb, struct iphdr *nh,
+ 	update_ip_l4_checksum(skb, nh, *addr, new_addr);
+ 	csum_replace4(&nh->check, *addr, new_addr);
+ 	skb_clear_hash(skb);
++	ovs_ct_clear(skb, NULL);
+ 	*addr = new_addr;
+ }
+ 
+@@ -419,6 +420,7 @@ static void set_ipv6_addr(struct sk_buff *skb, u8 l4_proto,
+ 		update_ipv6_checksum(skb, l4_proto, addr, new_addr);
+ 
+ 	skb_clear_hash(skb);
++	ovs_ct_clear(skb, NULL);
+ 	memcpy(addr, new_addr, sizeof(__be32[4]));
+ }
+ 
+@@ -659,6 +661,7 @@ static int set_nsh(struct sk_buff *skb, struct sw_flow_key *flow_key,
+ static void set_tp_port(struct sk_buff *skb, __be16 *port,
+ 			__be16 new_port, __sum16 *check)
+ {
++	ovs_ct_clear(skb, NULL);
+ 	inet_proto_csum_replace2(check, skb, *port, new_port, false);
+ 	*port = new_port;
+ }
+@@ -698,6 +701,7 @@ static int set_udp(struct sk_buff *skb, struct sw_flow_key *flow_key,
+ 		uh->dest = dst;
+ 		flow_key->tp.src = src;
+ 		flow_key->tp.dst = dst;
++		ovs_ct_clear(skb, NULL);
+ 	}
+ 
+ 	skb_clear_hash(skb);
+@@ -760,6 +764,8 @@ static int set_sctp(struct sk_buff *skb, struct sw_flow_key *flow_key,
+ 	sh->checksum = old_csum ^ old_correct_csum ^ new_csum;
+ 
+ 	skb_clear_hash(skb);
++	ovs_ct_clear(skb, NULL);
++
+ 	flow_key->tp.src = sh->source;
+ 	flow_key->tp.dst = sh->dest;
+ 
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index 7ff98d39ec942..41f248895a871 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -1324,7 +1324,8 @@ int ovs_ct_clear(struct sk_buff *skb, struct sw_flow_key *key)
+ 	if (skb_nfct(skb)) {
+ 		nf_conntrack_put(skb_nfct(skb));
+ 		nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
+-		ovs_ct_fill_key(skb, key);
++		if (key)
++			ovs_ct_fill_key(skb, key);
+ 	}
+ 
+ 	return 0;
+diff --git a/net/sched/act_police.c b/net/sched/act_police.c
+index 8d8452b1cdd42..3807335889590 100644
+--- a/net/sched/act_police.c
++++ b/net/sched/act_police.c
+@@ -213,6 +213,20 @@ release_idr:
+ 	return err;
+ }
+ 
++static bool tcf_police_mtu_check(struct sk_buff *skb, u32 limit)
++{
++	u32 len;
++
++	if (skb_is_gso(skb))
++		return skb_gso_validate_mac_len(skb, limit);
++
++	len = qdisc_pkt_len(skb);
++	if (skb_at_tc_ingress(skb))
++		len += skb->mac_len;
++
++	return len <= limit;
++}
++
+ static int tcf_police_act(struct sk_buff *skb, const struct tc_action *a,
+ 			  struct tcf_result *res)
+ {
+@@ -235,7 +249,7 @@ static int tcf_police_act(struct sk_buff *skb, const struct tc_action *a,
+ 			goto inc_overlimits;
+ 	}
+ 
+-	if (qdisc_pkt_len(skb) <= p->tcfp_mtu) {
++	if (tcf_police_mtu_check(skb, p->tcfp_mtu)) {
+ 		if (!p->rate_present) {
+ 			ret = p->tcfp_result;
+ 			goto end;
+diff --git a/scripts/faddr2line b/scripts/faddr2line
+index 0e6268d598835..94ed98dd899f3 100755
+--- a/scripts/faddr2line
++++ b/scripts/faddr2line
+@@ -95,17 +95,25 @@ __faddr2line() {
+ 	local print_warnings=$4
+ 
+ 	local sym_name=${func_addr%+*}
+-	local offset=${func_addr#*+}
+-	offset=${offset%/*}
++	local func_offset=${func_addr#*+}
++	func_offset=${func_offset%/*}
+ 	local user_size=
++	local file_type
++	local is_vmlinux=0
+ 	[[ $func_addr =~ "/" ]] && user_size=${func_addr#*/}
+ 
+-	if [[ -z $sym_name ]] || [[ -z $offset ]] || [[ $sym_name = $func_addr ]]; then
++	if [[ -z $sym_name ]] || [[ -z $func_offset ]] || [[ $sym_name = $func_addr ]]; then
+ 		warn "bad func+offset $func_addr"
+ 		DONE=1
+ 		return
+ 	fi
+ 
++	# vmlinux uses absolute addresses in the section table rather than
++	# section offsets.
++	local file_type=$(${READELF} --file-header $objfile |
++		${AWK} '$1 == "Type:" { print $2; exit }')
++	[[ $file_type = "EXEC" ]] && is_vmlinux=1
++
+ 	# Go through each of the object's symbols which match the func name.
+ 	# In rare cases there might be duplicates, in which case we print all
+ 	# matches.
+@@ -114,9 +122,11 @@ __faddr2line() {
+ 		local sym_addr=0x${fields[1]}
+ 		local sym_elf_size=${fields[2]}
+ 		local sym_sec=${fields[6]}
++		local sec_size
++		local sec_name
+ 
+ 		# Get the section size:
+-		local sec_size=$(${READELF} --section-headers --wide $objfile |
++		sec_size=$(${READELF} --section-headers --wide $objfile |
+ 			sed 's/\[ /\[/' |
+ 			${AWK} -v sec=$sym_sec '$1 == "[" sec "]" { print "0x" $6; exit }')
+ 
+@@ -126,6 +136,17 @@ __faddr2line() {
+ 			return
+ 		fi
+ 
++		# Get the section name:
++		sec_name=$(${READELF} --section-headers --wide $objfile |
++			sed 's/\[ /\[/' |
++			${AWK} -v sec=$sym_sec '$1 == "[" sec "]" { print $2; exit }')
++
++		if [[ -z $sec_name ]]; then
++			warn "bad section name: section: $sym_sec"
++			DONE=1
++			return
++		fi
++
+ 		# Calculate the symbol size.
+ 		#
+ 		# Unfortunately we can't use the ELF size, because kallsyms
+@@ -174,10 +195,10 @@ __faddr2line() {
+ 
+ 		sym_size=0x$(printf %x $sym_size)
+ 
+-		# Calculate the section address from user-supplied offset:
+-		local addr=$(($sym_addr + $offset))
++		# Calculate the address from user-supplied offset:
++		local addr=$(($sym_addr + $func_offset))
+ 		if [[ -z $addr ]] || [[ $addr = 0 ]]; then
+-			warn "bad address: $sym_addr + $offset"
++			warn "bad address: $sym_addr + $func_offset"
+ 			DONE=1
+ 			return
+ 		fi
+@@ -191,9 +212,9 @@ __faddr2line() {
+ 		fi
+ 
+ 		# Make sure the provided offset is within the symbol's range:
+-		if [[ $offset -gt $sym_size ]]; then
++		if [[ $func_offset -gt $sym_size ]]; then
+ 			[[ $print_warnings = 1 ]] &&
+-				echo "skipping $sym_name address at $addr due to size mismatch ($offset > $sym_size)"
++				echo "skipping $sym_name address at $addr due to size mismatch ($func_offset > $sym_size)"
+ 			continue
+ 		fi
+ 
+@@ -202,11 +223,13 @@ __faddr2line() {
+ 		[[ $FIRST = 0 ]] && echo
+ 		FIRST=0
+ 
+-		echo "$sym_name+$offset/$sym_size:"
++		echo "$sym_name+$func_offset/$sym_size:"
+ 
+ 		# Pass section address to addr2line and strip absolute paths
+ 		# from the output:
+-		local output=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;")
++		local args="--functions --pretty-print --inlines --exe=$objfile"
++		[[ $is_vmlinux = 0 ]] && args="$args --section=$sec_name"
++		local output=$(${ADDR2LINE} $args $addr | sed "s; $dir_prefix\(\./\)*; ;")
+ 		[[ -z $output ]] && continue
+ 
+ 		# Default output (non --list):
+diff --git a/sound/hda/hdac_device.c b/sound/hda/hdac_device.c
+index 3e9e9ac804f62..b7e5032b61c97 100644
+--- a/sound/hda/hdac_device.c
++++ b/sound/hda/hdac_device.c
+@@ -660,6 +660,7 @@ static const struct hda_vendor_id hda_vendor_ids[] = {
+ 	{ 0x14f1, "Conexant" },
+ 	{ 0x17e8, "Chrontel" },
+ 	{ 0x1854, "LG" },
++	{ 0x19e5, "Huawei" },
+ 	{ 0x1aec, "Wolfson Microelectronics" },
+ 	{ 0x1af4, "QEMU" },
+ 	{ 0x434d, "C-Media" },
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index cf3b1133b7850..7c720f03c1349 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -439,6 +439,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ 	case 0x10ec0245:
+ 	case 0x10ec0255:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 	case 0x10ec0257:
+ 	case 0x10ec0282:
+ 	case 0x10ec0283:
+@@ -576,6 +577,7 @@ static void alc_shutup_pins(struct hda_codec *codec)
+ 	switch (codec->core.vendor_id) {
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 	case 0x10ec0283:
+ 	case 0x10ec0286:
+ 	case 0x10ec0288:
+@@ -3252,6 +3254,7 @@ static void alc_disable_headset_jack_key(struct hda_codec *codec)
+ 	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 		alc_write_coef_idx(codec, 0x48, 0x0);
+ 		alc_update_coef_idx(codec, 0x49, 0x0045, 0x0);
+ 		break;
+@@ -3280,6 +3283,7 @@ static void alc_enable_headset_jack_key(struct hda_codec *codec)
+ 	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 		alc_write_coef_idx(codec, 0x48, 0xd011);
+ 		alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045);
+ 		break;
+@@ -4849,6 +4853,7 @@ static void alc_headset_mode_unplugged(struct hda_codec *codec)
+ 	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 		alc_process_coef_fw(codec, coef0256);
+ 		break;
+ 	case 0x10ec0234:
+@@ -4964,6 +4969,7 @@ static void alc_headset_mode_mic_in(struct hda_codec *codec, hda_nid_t hp_pin,
+ 	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 		alc_write_coef_idx(codec, 0x45, 0xc489);
+ 		snd_hda_set_pin_ctl_cache(codec, hp_pin, 0);
+ 		alc_process_coef_fw(codec, coef0256);
+@@ -5114,6 +5120,7 @@ static void alc_headset_mode_default(struct hda_codec *codec)
+ 	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 		alc_write_coef_idx(codec, 0x1b, 0x0e4b);
+ 		alc_write_coef_idx(codec, 0x45, 0xc089);
+ 		msleep(50);
+@@ -5213,6 +5220,7 @@ static void alc_headset_mode_ctia(struct hda_codec *codec)
+ 	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 		alc_process_coef_fw(codec, coef0256);
+ 		break;
+ 	case 0x10ec0234:
+@@ -5327,6 +5335,7 @@ static void alc_headset_mode_omtp(struct hda_codec *codec)
+ 	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 		alc_process_coef_fw(codec, coef0256);
+ 		break;
+ 	case 0x10ec0234:
+@@ -5428,6 +5437,7 @@ static void alc_determine_headset_type(struct hda_codec *codec)
+ 	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 		alc_write_coef_idx(codec, 0x1b, 0x0e4b);
+ 		alc_write_coef_idx(codec, 0x06, 0x6104);
+ 		alc_write_coefex_idx(codec, 0x57, 0x3, 0x09a3);
+@@ -5722,6 +5732,7 @@ static void alc255_set_default_jack_type(struct hda_codec *codec)
+ 	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 		alc_process_coef_fw(codec, alc256fw);
+ 		break;
+ 	}
+@@ -6325,6 +6336,7 @@ static void alc_combo_jack_hp_jd_restart(struct hda_codec *codec)
+ 	case 0x10ec0236:
+ 	case 0x10ec0255:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 		alc_update_coef_idx(codec, 0x1b, 0x8000, 1 << 15); /* Reset HP JD */
+ 		alc_update_coef_idx(codec, 0x1b, 0x8000, 0 << 15);
+ 		break;
+@@ -8781,6 +8793,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x888d, "HP ZBook Power 15.6 inch G8 Mobile Workstation PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x89aa, "HP EliteBook 630 G9", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -9813,6 +9826,7 @@ static int patch_alc269(struct hda_codec *codec)
+ 	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x19e58326:
+ 		spec->codec_variant = ALC269_TYPE_ALC256;
+ 		spec->shutup = alc256_shutup;
+ 		spec->init_hook = alc256_init;
+@@ -11255,6 +11269,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = {
+ 	HDA_CODEC_ENTRY(0x10ec0b00, "ALCS1200A", patch_alc882),
+ 	HDA_CODEC_ENTRY(0x10ec1168, "ALC1220", patch_alc882),
+ 	HDA_CODEC_ENTRY(0x10ec1220, "ALC1220", patch_alc882),
++	HDA_CODEC_ENTRY(0x19e58326, "HW8326", patch_alc269),
+ 	{} /* terminator */
+ };
+ MODULE_DEVICE_TABLE(hdaudio, snd_hda_id_realtek);
+diff --git a/sound/soc/codecs/cs35l36.c b/sound/soc/codecs/cs35l36.c
+index e9b5f76f27a86..aa32b8c26578a 100644
+--- a/sound/soc/codecs/cs35l36.c
++++ b/sound/soc/codecs/cs35l36.c
+@@ -444,7 +444,8 @@ static bool cs35l36_volatile_reg(struct device *dev, unsigned int reg)
+ 	}
+ }
+ 
+-static DECLARE_TLV_DB_SCALE(dig_vol_tlv, -10200, 25, 0);
++static const DECLARE_TLV_DB_RANGE(dig_vol_tlv, 0, 912,
++				  TLV_DB_MINMAX_ITEM(-10200, 1200));
+ static DECLARE_TLV_DB_SCALE(amp_gain_tlv, 0, 1, 1);
+ 
+ static const char * const cs35l36_pcm_sftramp_text[] =  {
+diff --git a/sound/soc/codecs/cs42l51.c b/sound/soc/codecs/cs42l51.c
+index c61b17dc2af87..fc6a2bc311b4f 100644
+--- a/sound/soc/codecs/cs42l51.c
++++ b/sound/soc/codecs/cs42l51.c
+@@ -146,7 +146,7 @@ static const struct snd_kcontrol_new cs42l51_snd_controls[] = {
+ 			0, 0xA0, 96, adc_att_tlv),
+ 	SOC_DOUBLE_R_SX_TLV("PGA Volume",
+ 			CS42L51_ALC_PGA_CTL, CS42L51_ALC_PGB_CTL,
+-			0, 0x1A, 30, pga_tlv),
++			0, 0x19, 30, pga_tlv),
+ 	SOC_SINGLE("Playback Deemphasis Switch", CS42L51_DAC_CTL, 3, 1, 0),
+ 	SOC_SINGLE("Auto-Mute Switch", CS42L51_DAC_CTL, 2, 1, 0),
+ 	SOC_SINGLE("Soft Ramp Switch", CS42L51_DAC_CTL, 1, 1, 0),
+diff --git a/sound/soc/codecs/cs42l52.c b/sound/soc/codecs/cs42l52.c
+index f772628f233ef..38223641bdf64 100644
+--- a/sound/soc/codecs/cs42l52.c
++++ b/sound/soc/codecs/cs42l52.c
+@@ -137,7 +137,9 @@ static DECLARE_TLV_DB_SCALE(mic_tlv, 1600, 100, 0);
+ 
+ static DECLARE_TLV_DB_SCALE(pga_tlv, -600, 50, 0);
+ 
+-static DECLARE_TLV_DB_SCALE(mix_tlv, -50, 50, 0);
++static DECLARE_TLV_DB_SCALE(pass_tlv, -6000, 50, 0);
++
++static DECLARE_TLV_DB_SCALE(mix_tlv, -5150, 50, 0);
+ 
+ static DECLARE_TLV_DB_SCALE(beep_tlv, -56, 200, 0);
+ 
+@@ -351,7 +353,7 @@ static const struct snd_kcontrol_new cs42l52_snd_controls[] = {
+ 			      CS42L52_SPKB_VOL, 0, 0x40, 0xC0, hl_tlv),
+ 
+ 	SOC_DOUBLE_R_SX_TLV("Bypass Volume", CS42L52_PASSTHRUA_VOL,
+-			      CS42L52_PASSTHRUB_VOL, 0, 0x88, 0x90, pga_tlv),
++			      CS42L52_PASSTHRUB_VOL, 0, 0x88, 0x90, pass_tlv),
+ 
+ 	SOC_DOUBLE("Bypass Mute", CS42L52_MISC_CTL, 4, 5, 1, 0),
+ 
+@@ -364,7 +366,7 @@ static const struct snd_kcontrol_new cs42l52_snd_controls[] = {
+ 			      CS42L52_ADCB_VOL, 0, 0xA0, 0x78, ipd_tlv),
+ 	SOC_DOUBLE_R_SX_TLV("ADC Mixer Volume",
+ 			     CS42L52_ADCA_MIXER_VOL, CS42L52_ADCB_MIXER_VOL,
+-				0, 0x19, 0x7F, ipd_tlv),
++				0, 0x19, 0x7F, mix_tlv),
+ 
+ 	SOC_DOUBLE("ADC Switch", CS42L52_ADC_MISC_CTL, 0, 1, 1, 0),
+ 
+diff --git a/sound/soc/codecs/cs42l56.c b/sound/soc/codecs/cs42l56.c
+index 06dcfae9dfe71..d41e031931061 100644
+--- a/sound/soc/codecs/cs42l56.c
++++ b/sound/soc/codecs/cs42l56.c
+@@ -391,9 +391,9 @@ static const struct snd_kcontrol_new cs42l56_snd_controls[] = {
+ 	SOC_DOUBLE("ADC Boost Switch", CS42L56_GAIN_BIAS_CTL, 3, 2, 1, 1),
+ 
+ 	SOC_DOUBLE_R_SX_TLV("Headphone Volume", CS42L56_HPA_VOLUME,
+-			      CS42L56_HPB_VOLUME, 0, 0x84, 0x48, hl_tlv),
++			      CS42L56_HPB_VOLUME, 0, 0x44, 0x48, hl_tlv),
+ 	SOC_DOUBLE_R_SX_TLV("LineOut Volume", CS42L56_LOA_VOLUME,
+-			      CS42L56_LOB_VOLUME, 0, 0x84, 0x48, hl_tlv),
++			      CS42L56_LOB_VOLUME, 0, 0x44, 0x48, hl_tlv),
+ 
+ 	SOC_SINGLE_TLV("Bass Shelving Volume", CS42L56_TONE_CTL,
+ 			0, 0x00, 1, tone_tlv),
+diff --git a/sound/soc/codecs/cs53l30.c b/sound/soc/codecs/cs53l30.c
+index ed22361b35c14..a5a383b923054 100644
+--- a/sound/soc/codecs/cs53l30.c
++++ b/sound/soc/codecs/cs53l30.c
+@@ -347,22 +347,22 @@ static const struct snd_kcontrol_new cs53l30_snd_controls[] = {
+ 	SOC_ENUM("ADC2 NG Delay", adc2_ng_delay_enum),
+ 
+ 	SOC_SINGLE_SX_TLV("ADC1A PGA Volume",
+-		    CS53L30_ADC1A_AFE_CTL, 0, 0x34, 0x18, pga_tlv),
++		    CS53L30_ADC1A_AFE_CTL, 0, 0x34, 0x24, pga_tlv),
+ 	SOC_SINGLE_SX_TLV("ADC1B PGA Volume",
+-		    CS53L30_ADC1B_AFE_CTL, 0, 0x34, 0x18, pga_tlv),
++		    CS53L30_ADC1B_AFE_CTL, 0, 0x34, 0x24, pga_tlv),
+ 	SOC_SINGLE_SX_TLV("ADC2A PGA Volume",
+-		    CS53L30_ADC2A_AFE_CTL, 0, 0x34, 0x18, pga_tlv),
++		    CS53L30_ADC2A_AFE_CTL, 0, 0x34, 0x24, pga_tlv),
+ 	SOC_SINGLE_SX_TLV("ADC2B PGA Volume",
+-		    CS53L30_ADC2B_AFE_CTL, 0, 0x34, 0x18, pga_tlv),
++		    CS53L30_ADC2B_AFE_CTL, 0, 0x34, 0x24, pga_tlv),
+ 
+ 	SOC_SINGLE_SX_TLV("ADC1A Digital Volume",
+-		    CS53L30_ADC1A_DIG_VOL, 0, 0xA0, 0x0C, dig_tlv),
++		    CS53L30_ADC1A_DIG_VOL, 0, 0xA0, 0x6C, dig_tlv),
+ 	SOC_SINGLE_SX_TLV("ADC1B Digital Volume",
+-		    CS53L30_ADC1B_DIG_VOL, 0, 0xA0, 0x0C, dig_tlv),
++		    CS53L30_ADC1B_DIG_VOL, 0, 0xA0, 0x6C, dig_tlv),
+ 	SOC_SINGLE_SX_TLV("ADC2A Digital Volume",
+-		    CS53L30_ADC2A_DIG_VOL, 0, 0xA0, 0x0C, dig_tlv),
++		    CS53L30_ADC2A_DIG_VOL, 0, 0xA0, 0x6C, dig_tlv),
+ 	SOC_SINGLE_SX_TLV("ADC2B Digital Volume",
+-		    CS53L30_ADC2B_DIG_VOL, 0, 0xA0, 0x0C, dig_tlv),
++		    CS53L30_ADC2B_DIG_VOL, 0, 0xA0, 0x6C, dig_tlv),
+ };
+ 
+ static const struct snd_soc_dapm_widget cs53l30_dapm_widgets[] = {
+diff --git a/sound/soc/codecs/es8328.c b/sound/soc/codecs/es8328.c
+index 7e26231a596a4..081b5f189632e 100644
+--- a/sound/soc/codecs/es8328.c
++++ b/sound/soc/codecs/es8328.c
+@@ -161,13 +161,16 @@ static int es8328_put_deemph(struct snd_kcontrol *kcontrol,
+ 	if (deemph > 1)
+ 		return -EINVAL;
+ 
++	if (es8328->deemph == deemph)
++		return 0;
++
+ 	ret = es8328_set_deemph(component);
+ 	if (ret < 0)
+ 		return ret;
+ 
+ 	es8328->deemph = deemph;
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ 
+diff --git a/sound/soc/codecs/nau8822.c b/sound/soc/codecs/nau8822.c
+index 609aeeb278189..d831959d8ff73 100644
+--- a/sound/soc/codecs/nau8822.c
++++ b/sound/soc/codecs/nau8822.c
+@@ -740,6 +740,8 @@ static int nau8822_set_pll(struct snd_soc_dai *dai, int pll_id, int source,
+ 		pll_param->pll_int, pll_param->pll_frac,
+ 		pll_param->mclk_scaler, pll_param->pre_factor);
+ 
++	snd_soc_component_update_bits(component,
++		NAU8822_REG_POWER_MANAGEMENT_1, NAU8822_PLL_EN_MASK, NAU8822_PLL_OFF);
+ 	snd_soc_component_update_bits(component,
+ 		NAU8822_REG_PLL_N, NAU8822_PLLMCLK_DIV2 | NAU8822_PLLN_MASK,
+ 		(pll_param->pre_factor ? NAU8822_PLLMCLK_DIV2 : 0) |
+@@ -757,6 +759,8 @@ static int nau8822_set_pll(struct snd_soc_dai *dai, int pll_id, int source,
+ 		pll_param->mclk_scaler << NAU8822_MCLKSEL_SFT);
+ 	snd_soc_component_update_bits(component,
+ 		NAU8822_REG_CLOCKING, NAU8822_CLKM_MASK, NAU8822_CLKM_PLL);
++	snd_soc_component_update_bits(component,
++		NAU8822_REG_POWER_MANAGEMENT_1, NAU8822_PLL_EN_MASK, NAU8822_PLL_ON);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/codecs/nau8822.h b/sound/soc/codecs/nau8822.h
+index 489191ff187ec..b45d42c15de6b 100644
+--- a/sound/soc/codecs/nau8822.h
++++ b/sound/soc/codecs/nau8822.h
+@@ -90,6 +90,9 @@
+ #define NAU8822_REFIMP_3K			0x3
+ #define NAU8822_IOBUF_EN			(0x1 << 2)
+ #define NAU8822_ABIAS_EN			(0x1 << 3)
++#define NAU8822_PLL_EN_MASK			(0x1 << 5)
++#define NAU8822_PLL_ON				(0x1 << 5)
++#define NAU8822_PLL_OFF				(0x0 << 5)
+ 
+ /* NAU8822_REG_AUDIO_INTERFACE (0x4) */
+ #define NAU8822_AIFMT_MASK			(0x3 << 3)
+diff --git a/sound/soc/codecs/wm8962.c b/sound/soc/codecs/wm8962.c
+index 0bd3bbc2aacfe..38651022e3d5f 100644
+--- a/sound/soc/codecs/wm8962.c
++++ b/sound/soc/codecs/wm8962.c
+@@ -3864,6 +3864,7 @@ static int wm8962_runtime_suspend(struct device *dev)
+ #endif
+ 
+ static const struct dev_pm_ops wm8962_pm = {
++	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ 	SET_RUNTIME_PM_OPS(wm8962_runtime_suspend, wm8962_runtime_resume, NULL)
+ };
+ 
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 51d95437e0fdf..10189f44af28f 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -800,7 +800,7 @@ int wm_adsp_fw_put(struct snd_kcontrol *kcontrol,
+ 	struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+ 	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ 	struct wm_adsp *dsp = snd_soc_component_get_drvdata(component);
+-	int ret = 0;
++	int ret = 1;
+ 
+ 	if (ucontrol->value.enumerated.item[0] == dsp[e->shift_l].fw)
+ 		return 0;
+diff --git a/tools/testing/selftests/net/forwarding/tc_police.sh b/tools/testing/selftests/net/forwarding/tc_police.sh
+index 160f9cccdfb79..eb09acdcb3ff1 100755
+--- a/tools/testing/selftests/net/forwarding/tc_police.sh
++++ b/tools/testing/selftests/net/forwarding/tc_police.sh
+@@ -35,6 +35,8 @@ ALL_TESTS="
+ 	police_shared_test
+ 	police_rx_mirror_test
+ 	police_tx_mirror_test
++	police_mtu_rx_test
++	police_mtu_tx_test
+ "
+ NUM_NETIFS=6
+ source tc_common.sh
+@@ -290,6 +292,56 @@ police_tx_mirror_test()
+ 	police_mirror_common_test $rp2 egress "police tx and mirror"
+ }
+ 
++police_mtu_common_test() {
++	RET=0
++
++	local test_name=$1; shift
++	local dev=$1; shift
++	local direction=$1; shift
++
++	tc filter add dev $dev $direction protocol ip pref 1 handle 101 flower \
++		dst_ip 198.51.100.1 ip_proto udp dst_port 54321 \
++		action police mtu 1042 conform-exceed drop/ok
++
++	# to count "conform" packets
++	tc filter add dev $h2 ingress protocol ip pref 1 handle 101 flower \
++		dst_ip 198.51.100.1 ip_proto udp dst_port 54321 \
++		action drop
++
++	mausezahn $h1 -a own -b $(mac_get $rp1) -A 192.0.2.1 -B 198.51.100.1 \
++		-t udp sp=12345,dp=54321 -p 1001 -c 10 -q
++
++	mausezahn $h1 -a own -b $(mac_get $rp1) -A 192.0.2.1 -B 198.51.100.1 \
++		-t udp sp=12345,dp=54321 -p 1000 -c 3 -q
++
++	tc_check_packets "dev $dev $direction" 101 13
++	check_err $? "wrong packet counter"
++
++	# "exceed" packets
++	local overlimits_t0=$(tc_rule_stats_get ${dev} 1 ${direction} .overlimits)
++	test ${overlimits_t0} = 10
++	check_err $? "wrong overlimits, expected 10 got ${overlimits_t0}"
++
++	# "conform" packets
++	tc_check_packets "dev $h2 ingress" 101 3
++	check_err $? "forwarding error"
++
++	tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower
++	tc filter del dev $dev $direction protocol ip pref 1 handle 101 flower
++
++	log_test "$test_name"
++}
++
++police_mtu_rx_test()
++{
++	police_mtu_common_test "police mtu (rx)" $rp1 ingress
++}
++
++police_mtu_tx_test()
++{
++	police_mtu_common_test "police mtu (tx)" $rp2 egress
++}
++
+ setup_prepare()
+ {
+ 	h1=${NETIFS[p1]}


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-06-25 19:45 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-06-25 19:45 UTC (permalink / raw
  To: gentoo-commits

commit:     6e0d0b04ebaf8be6ef1adff3d563b4c16b465af0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jun 25 19:44:50 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jun 25 19:44:50 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6e0d0b04

Linux patch 5.10.125

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 +
 1124_linux-5.10.125.patch | 493 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 497 insertions(+)

diff --git a/0000_README b/0000_README
index aedaaf1a..fb42ce16 100644
--- a/0000_README
+++ b/0000_README
@@ -539,6 +539,10 @@ Patch:  1123_linux-5.10.124.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.124
 
+Patch:  1124_linux-5.10.125.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.125
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1124_linux-5.10.125.patch b/1124_linux-5.10.125.patch
new file mode 100644
index 00000000..0af8c810
--- /dev/null
+++ b/1124_linux-5.10.125.patch
@@ -0,0 +1,493 @@
+diff --git a/Makefile b/Makefile
+index 9ed79a05a9725..da5b28931e5cb 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 124
++SUBLEVEL = 125
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
+index 2d881f34dd9d5..7b8158ae36ecc 100644
+--- a/arch/arm64/mm/cache.S
++++ b/arch/arm64/mm/cache.S
+@@ -228,8 +228,6 @@ SYM_FUNC_END_PI(__dma_flush_area)
+  *	- dir	- DMA direction
+  */
+ SYM_FUNC_START_PI(__dma_map_area)
+-	cmp	w2, #DMA_FROM_DEVICE
+-	b.eq	__dma_inv_area
+ 	b	__dma_clean_area
+ SYM_FUNC_END_PI(__dma_map_area)
+ 
+diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
+index fabaedddc90cb..1c05caf68e7d8 100644
+--- a/arch/s390/mm/pgtable.c
++++ b/arch/s390/mm/pgtable.c
+@@ -734,7 +734,7 @@ void ptep_zap_key(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ 	pgste_val(pgste) |= PGSTE_GR_BIT | PGSTE_GC_BIT;
+ 	ptev = pte_val(*ptep);
+ 	if (!(ptev & _PAGE_INVALID) && (ptev & _PAGE_WRITE))
+-		page_set_storage_key(ptev & PAGE_MASK, PAGE_DEFAULT_KEY, 1);
++		page_set_storage_key(ptev & PAGE_MASK, PAGE_DEFAULT_KEY, 0);
+ 	pgste_set_unlock(ptep, pgste);
+ 	preempt_enable();
+ }
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 19f0c5db11e33..32d09d024f6c9 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -144,6 +144,11 @@ uart_update_mctrl(struct uart_port *port, unsigned int set, unsigned int clear)
+ 	unsigned long flags;
+ 	unsigned int old;
+ 
++	if (port->rs485.flags & SER_RS485_ENABLED) {
++		set &= ~TIOCM_RTS;
++		clear &= ~TIOCM_RTS;
++	}
++
+ 	spin_lock_irqsave(&port->lock, flags);
+ 	old = port->mctrl;
+ 	port->mctrl = (old & ~clear) | set;
+@@ -157,23 +162,10 @@ uart_update_mctrl(struct uart_port *port, unsigned int set, unsigned int clear)
+ 
+ static void uart_port_dtr_rts(struct uart_port *uport, int raise)
+ {
+-	int rs485_on = uport->rs485_config &&
+-		(uport->rs485.flags & SER_RS485_ENABLED);
+-	int RTS_after_send = !!(uport->rs485.flags & SER_RS485_RTS_AFTER_SEND);
+-
+-	if (raise) {
+-		if (rs485_on && RTS_after_send) {
+-			uart_set_mctrl(uport, TIOCM_DTR);
+-			uart_clear_mctrl(uport, TIOCM_RTS);
+-		} else {
+-			uart_set_mctrl(uport, TIOCM_DTR | TIOCM_RTS);
+-		}
+-	} else {
+-		unsigned int clear = TIOCM_DTR;
+-
+-		clear |= (!rs485_on || RTS_after_send) ? TIOCM_RTS : 0;
+-		uart_clear_mctrl(uport, clear);
+-	}
++	if (raise)
++		uart_set_mctrl(uport, TIOCM_DTR | TIOCM_RTS);
++	else
++		uart_clear_mctrl(uport, TIOCM_DTR | TIOCM_RTS);
+ }
+ 
+ /*
+@@ -1116,11 +1108,6 @@ uart_tiocmset(struct tty_struct *tty, unsigned int set, unsigned int clear)
+ 		goto out;
+ 
+ 	if (!tty_io_error(tty)) {
+-		if (uport->rs485.flags & SER_RS485_ENABLED) {
+-			set &= ~TIOCM_RTS;
+-			clear &= ~TIOCM_RTS;
+-		}
+-
+ 		uart_update_mctrl(uport, set, clear);
+ 		ret = 0;
+ 	}
+@@ -2429,6 +2416,9 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
+ 		 */
+ 		spin_lock_irqsave(&port->lock, flags);
+ 		port->mctrl &= TIOCM_DTR;
++		if (port->rs485.flags & SER_RS485_ENABLED &&
++		    !(port->rs485.flags & SER_RS485_RTS_AFTER_SEND))
++			port->mctrl |= TIOCM_RTS;
+ 		port->ops->set_mctrl(port, port->mctrl);
+ 		spin_unlock_irqrestore(&port->lock, flags);
+ 
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index a40be8b448c24..64ef97ab9274a 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -772,9 +772,13 @@ struct eth_dev *gether_setup_name(struct usb_gadget *g,
+ 	dev->qmult = qmult;
+ 	snprintf(net->name, sizeof(net->name), "%s%%d", netname);
+ 
+-	if (get_ether_addr(dev_addr, net->dev_addr))
++	if (get_ether_addr(dev_addr, net->dev_addr)) {
++		net->addr_assign_type = NET_ADDR_RANDOM;
+ 		dev_warn(&g->dev,
+ 			"using random %s ethernet address\n", "self");
++	} else {
++		net->addr_assign_type = NET_ADDR_SET;
++	}
+ 	if (get_ether_addr(host_addr, dev->host_mac))
+ 		dev_warn(&g->dev,
+ 			"using random %s ethernet address\n", "host");
+@@ -831,6 +835,9 @@ struct net_device *gether_setup_name_default(const char *netname)
+ 	INIT_LIST_HEAD(&dev->tx_reqs);
+ 	INIT_LIST_HEAD(&dev->rx_reqs);
+ 
++	/* by default we always have a random MAC address */
++	net->addr_assign_type = NET_ADDR_RANDOM;
++
+ 	skb_queue_head_init(&dev->rx_frames);
+ 
+ 	/* network device setup */
+@@ -868,7 +875,6 @@ int gether_register_netdev(struct net_device *net)
+ 	g = dev->gadget;
+ 
+ 	memcpy(net->dev_addr, dev->dev_mac, ETH_ALEN);
+-	net->addr_assign_type = NET_ADDR_RANDOM;
+ 
+ 	status = register_netdev(net);
+ 	if (status < 0) {
+@@ -908,6 +914,7 @@ int gether_set_dev_addr(struct net_device *net, const char *dev_addr)
+ 	if (get_ether_addr(dev_addr, new_addr))
+ 		return -EINVAL;
+ 	memcpy(dev->dev_mac, new_addr, ETH_ALEN);
++	net->addr_assign_type = NET_ADDR_SET;
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(gether_set_dev_addr);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 871475d3fca2c..40ac37beca47d 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -773,7 +773,8 @@ static const struct io_op_def io_op_defs[] = {
+ 		.buffer_select		= 1,
+ 		.needs_async_data	= 1,
+ 		.async_size		= sizeof(struct io_async_rw),
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG,
++		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
++					  IO_WQ_WORK_FILES,
+ 	},
+ 	[IORING_OP_WRITEV] = {
+ 		.needs_file		= 1,
+@@ -783,7 +784,7 @@ static const struct io_op_def io_op_defs[] = {
+ 		.needs_async_data	= 1,
+ 		.async_size		= sizeof(struct io_async_rw),
+ 		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
+-						IO_WQ_WORK_FSIZE,
++					  IO_WQ_WORK_FSIZE | IO_WQ_WORK_FILES,
+ 	},
+ 	[IORING_OP_FSYNC] = {
+ 		.needs_file		= 1,
+@@ -794,7 +795,8 @@ static const struct io_op_def io_op_defs[] = {
+ 		.unbound_nonreg_file	= 1,
+ 		.pollin			= 1,
+ 		.async_size		= sizeof(struct io_async_rw),
+-		.work_flags		= IO_WQ_WORK_BLKCG | IO_WQ_WORK_MM,
++		.work_flags		= IO_WQ_WORK_BLKCG | IO_WQ_WORK_MM |
++					  IO_WQ_WORK_FILES,
+ 	},
+ 	[IORING_OP_WRITE_FIXED] = {
+ 		.needs_file		= 1,
+@@ -803,7 +805,7 @@ static const struct io_op_def io_op_defs[] = {
+ 		.pollout		= 1,
+ 		.async_size		= sizeof(struct io_async_rw),
+ 		.work_flags		= IO_WQ_WORK_BLKCG | IO_WQ_WORK_FSIZE |
+-						IO_WQ_WORK_MM,
++					  IO_WQ_WORK_MM | IO_WQ_WORK_FILES,
+ 	},
+ 	[IORING_OP_POLL_ADD] = {
+ 		.needs_file		= 1,
+@@ -857,7 +859,7 @@ static const struct io_op_def io_op_defs[] = {
+ 		.pollout		= 1,
+ 		.needs_async_data	= 1,
+ 		.async_size		= sizeof(struct io_async_connect),
+-		.work_flags		= IO_WQ_WORK_MM,
++		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_FS,
+ 	},
+ 	[IORING_OP_FALLOCATE] = {
+ 		.needs_file		= 1,
+@@ -885,7 +887,8 @@ static const struct io_op_def io_op_defs[] = {
+ 		.pollin			= 1,
+ 		.buffer_select		= 1,
+ 		.async_size		= sizeof(struct io_async_rw),
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG,
++		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
++					  IO_WQ_WORK_FILES,
+ 	},
+ 	[IORING_OP_WRITE] = {
+ 		.needs_file		= 1,
+@@ -894,7 +897,7 @@ static const struct io_op_def io_op_defs[] = {
+ 		.pollout		= 1,
+ 		.async_size		= sizeof(struct io_async_rw),
+ 		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
+-						IO_WQ_WORK_FSIZE,
++					  IO_WQ_WORK_FSIZE | IO_WQ_WORK_FILES,
+ 	},
+ 	[IORING_OP_FADVISE] = {
+ 		.needs_file		= 1,
+@@ -907,14 +910,16 @@ static const struct io_op_def io_op_defs[] = {
+ 		.needs_file		= 1,
+ 		.unbound_nonreg_file	= 1,
+ 		.pollout		= 1,
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG,
++		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
++					  IO_WQ_WORK_FS,
+ 	},
+ 	[IORING_OP_RECV] = {
+ 		.needs_file		= 1,
+ 		.unbound_nonreg_file	= 1,
+ 		.pollin			= 1,
+ 		.buffer_select		= 1,
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG,
++		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
++					  IO_WQ_WORK_FS,
+ 	},
+ 	[IORING_OP_OPENAT2] = {
+ 		.work_flags		= IO_WQ_WORK_FILES | IO_WQ_WORK_FS |
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index 8c7d01e907a31..bf5cb6efb8c09 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -68,15 +68,49 @@ static inline void zonefs_i_size_write(struct inode *inode, loff_t isize)
+ 		zi->i_flags &= ~ZONEFS_ZONE_OPEN;
+ }
+ 
+-static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+-			      unsigned int flags, struct iomap *iomap,
+-			      struct iomap *srcmap)
++static int zonefs_read_iomap_begin(struct inode *inode, loff_t offset,
++				   loff_t length, unsigned int flags,
++				   struct iomap *iomap, struct iomap *srcmap)
+ {
+ 	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+ 	struct super_block *sb = inode->i_sb;
+ 	loff_t isize;
+ 
+-	/* All I/Os should always be within the file maximum size */
++	/*
++	 * All blocks are always mapped below EOF. If reading past EOF,
++	 * act as if there is a hole up to the file maximum size.
++	 */
++	mutex_lock(&zi->i_truncate_mutex);
++	iomap->bdev = inode->i_sb->s_bdev;
++	iomap->offset = ALIGN_DOWN(offset, sb->s_blocksize);
++	isize = i_size_read(inode);
++	if (iomap->offset >= isize) {
++		iomap->type = IOMAP_HOLE;
++		iomap->addr = IOMAP_NULL_ADDR;
++		iomap->length = length;
++	} else {
++		iomap->type = IOMAP_MAPPED;
++		iomap->addr = (zi->i_zsector << SECTOR_SHIFT) + iomap->offset;
++		iomap->length = isize - iomap->offset;
++	}
++	mutex_unlock(&zi->i_truncate_mutex);
++
++	return 0;
++}
++
++static const struct iomap_ops zonefs_read_iomap_ops = {
++	.iomap_begin	= zonefs_read_iomap_begin,
++};
++
++static int zonefs_write_iomap_begin(struct inode *inode, loff_t offset,
++				    loff_t length, unsigned int flags,
++				    struct iomap *iomap, struct iomap *srcmap)
++{
++	struct zonefs_inode_info *zi = ZONEFS_I(inode);
++	struct super_block *sb = inode->i_sb;
++	loff_t isize;
++
++	/* All write I/Os should always be within the file maximum size */
+ 	if (WARN_ON_ONCE(offset + length > zi->i_max_size))
+ 		return -EIO;
+ 
+@@ -86,7 +120,7 @@ static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ 	 * operation.
+ 	 */
+ 	if (WARN_ON_ONCE(zi->i_ztype == ZONEFS_ZTYPE_SEQ &&
+-			 (flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)))
++			 !(flags & IOMAP_DIRECT)))
+ 		return -EIO;
+ 
+ 	/*
+@@ -95,45 +129,42 @@ static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ 	 * write pointer) and unwriten beyond.
+ 	 */
+ 	mutex_lock(&zi->i_truncate_mutex);
++	iomap->bdev = inode->i_sb->s_bdev;
++	iomap->offset = ALIGN_DOWN(offset, sb->s_blocksize);
++	iomap->addr = (zi->i_zsector << SECTOR_SHIFT) + iomap->offset;
+ 	isize = i_size_read(inode);
+-	if (offset >= isize)
++	if (iomap->offset >= isize) {
+ 		iomap->type = IOMAP_UNWRITTEN;
+-	else
++		iomap->length = zi->i_max_size - iomap->offset;
++	} else {
+ 		iomap->type = IOMAP_MAPPED;
+-	if (flags & IOMAP_WRITE)
+-		length = zi->i_max_size - offset;
+-	else
+-		length = min(length, isize - offset);
++		iomap->length = isize - iomap->offset;
++	}
+ 	mutex_unlock(&zi->i_truncate_mutex);
+ 
+-	iomap->offset = ALIGN_DOWN(offset, sb->s_blocksize);
+-	iomap->length = ALIGN(offset + length, sb->s_blocksize) - iomap->offset;
+-	iomap->bdev = inode->i_sb->s_bdev;
+-	iomap->addr = (zi->i_zsector << SECTOR_SHIFT) + iomap->offset;
+-
+ 	return 0;
+ }
+ 
+-static const struct iomap_ops zonefs_iomap_ops = {
+-	.iomap_begin	= zonefs_iomap_begin,
++static const struct iomap_ops zonefs_write_iomap_ops = {
++	.iomap_begin	= zonefs_write_iomap_begin,
+ };
+ 
+ static int zonefs_readpage(struct file *unused, struct page *page)
+ {
+-	return iomap_readpage(page, &zonefs_iomap_ops);
++	return iomap_readpage(page, &zonefs_read_iomap_ops);
+ }
+ 
+ static void zonefs_readahead(struct readahead_control *rac)
+ {
+-	iomap_readahead(rac, &zonefs_iomap_ops);
++	iomap_readahead(rac, &zonefs_read_iomap_ops);
+ }
+ 
+ /*
+  * Map blocks for page writeback. This is used only on conventional zone files,
+  * which implies that the page range can only be within the fixed inode size.
+  */
+-static int zonefs_map_blocks(struct iomap_writepage_ctx *wpc,
+-			     struct inode *inode, loff_t offset)
++static int zonefs_write_map_blocks(struct iomap_writepage_ctx *wpc,
++				   struct inode *inode, loff_t offset)
+ {
+ 	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+ 
+@@ -147,12 +178,12 @@ static int zonefs_map_blocks(struct iomap_writepage_ctx *wpc,
+ 	    offset < wpc->iomap.offset + wpc->iomap.length)
+ 		return 0;
+ 
+-	return zonefs_iomap_begin(inode, offset, zi->i_max_size - offset,
+-				  IOMAP_WRITE, &wpc->iomap, NULL);
++	return zonefs_write_iomap_begin(inode, offset, zi->i_max_size - offset,
++					IOMAP_WRITE, &wpc->iomap, NULL);
+ }
+ 
+ static const struct iomap_writeback_ops zonefs_writeback_ops = {
+-	.map_blocks		= zonefs_map_blocks,
++	.map_blocks		= zonefs_write_map_blocks,
+ };
+ 
+ static int zonefs_writepage(struct page *page, struct writeback_control *wbc)
+@@ -182,7 +213,8 @@ static int zonefs_swap_activate(struct swap_info_struct *sis,
+ 		return -EINVAL;
+ 	}
+ 
+-	return iomap_swapfile_activate(sis, swap_file, span, &zonefs_iomap_ops);
++	return iomap_swapfile_activate(sis, swap_file, span,
++				       &zonefs_read_iomap_ops);
+ }
+ 
+ static const struct address_space_operations zonefs_file_aops = {
+@@ -612,7 +644,7 @@ static vm_fault_t zonefs_filemap_page_mkwrite(struct vm_fault *vmf)
+ 
+ 	/* Serialize against truncates */
+ 	down_read(&zi->i_mmap_sem);
+-	ret = iomap_page_mkwrite(vmf, &zonefs_iomap_ops);
++	ret = iomap_page_mkwrite(vmf, &zonefs_write_iomap_ops);
+ 	up_read(&zi->i_mmap_sem);
+ 
+ 	sb_end_pagefault(inode->i_sb);
+@@ -869,7 +901,7 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from)
+ 	if (append)
+ 		ret = zonefs_file_dio_append(iocb, from);
+ 	else
+-		ret = iomap_dio_rw(iocb, from, &zonefs_iomap_ops,
++		ret = iomap_dio_rw(iocb, from, &zonefs_write_iomap_ops,
+ 				   &zonefs_write_dio_ops, sync);
+ 	if (zi->i_ztype == ZONEFS_ZTYPE_SEQ &&
+ 	    (ret > 0 || ret == -EIOCBQUEUED)) {
+@@ -911,7 +943,7 @@ static ssize_t zonefs_file_buffered_write(struct kiocb *iocb,
+ 	if (ret <= 0)
+ 		goto inode_unlock;
+ 
+-	ret = iomap_file_buffered_write(iocb, from, &zonefs_iomap_ops);
++	ret = iomap_file_buffered_write(iocb, from, &zonefs_write_iomap_ops);
+ 	if (ret > 0)
+ 		iocb->ki_pos += ret;
+ 	else if (ret == -EIO)
+@@ -1004,7 +1036,7 @@ static ssize_t zonefs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ 			goto inode_unlock;
+ 		}
+ 		file_accessed(iocb->ki_filp);
+-		ret = iomap_dio_rw(iocb, to, &zonefs_iomap_ops,
++		ret = iomap_dio_rw(iocb, to, &zonefs_read_iomap_ops,
+ 				   &zonefs_read_dio_ops, is_sync_kiocb(iocb));
+ 	} else {
+ 		ret = generic_file_read_iter(iocb, to);
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 44b524136f953..f38b71cc3edbe 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -726,12 +726,14 @@ EXPORT_SYMBOL_GPL(inet_unhash);
+  * Note that we use 32bit integers (vs RFC 'short integers')
+  * because 2^16 is not a multiple of num_ephemeral and this
+  * property might be used by clever attacker.
+- * RFC claims using TABLE_LENGTH=10 buckets gives an improvement,
+- * we use 256 instead to really give more isolation and
+- * privacy, this only consumes 1 KB of kernel memory.
++ * RFC claims using TABLE_LENGTH=10 buckets gives an improvement, though
++ * attacks were since demonstrated, thus we use 65536 instead to really
++ * give more isolation and privacy, at the expense of 256kB of kernel
++ * memory.
+  */
+-#define INET_TABLE_PERTURB_SHIFT 8
+-static u32 table_perturb[1 << INET_TABLE_PERTURB_SHIFT];
++#define INET_TABLE_PERTURB_SHIFT 16
++#define INET_TABLE_PERTURB_SIZE (1 << INET_TABLE_PERTURB_SHIFT)
++static u32 *table_perturb;
+ 
+ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+ 		struct sock *sk, u64 port_offset,
+@@ -774,10 +776,11 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+ 	if (likely(remaining > 1))
+ 		remaining &= ~1U;
+ 
+-	net_get_random_once(table_perturb, sizeof(table_perturb));
+-	index = hash_32(port_offset, INET_TABLE_PERTURB_SHIFT);
++	net_get_random_once(table_perturb,
++			    INET_TABLE_PERTURB_SIZE * sizeof(*table_perturb));
++	index = port_offset & (INET_TABLE_PERTURB_SIZE - 1);
+ 
+-	offset = READ_ONCE(table_perturb[index]) + port_offset;
++	offset = READ_ONCE(table_perturb[index]) + (port_offset >> 32);
+ 	offset %= remaining;
+ 
+ 	/* In first pass we try ports of @low parity.
+@@ -833,6 +836,12 @@ next_port:
+ 	return -EADDRNOTAVAIL;
+ 
+ ok:
++	/* Here we want to add a little bit of randomness to the next source
++	 * port that will be chosen. We use a max() with a random here so that
++	 * on low contention the randomness is maximal and on high contention
++	 * it may be inexistent.
++	 */
++	i = max_t(int, i, (prandom_u32() & 7) * 2);
+ 	WRITE_ONCE(table_perturb[index], READ_ONCE(table_perturb[index]) + i + 2);
+ 
+ 	/* Head lock still held and bh's disabled */
+@@ -906,6 +915,12 @@ void __init inet_hashinfo2_init(struct inet_hashinfo *h, const char *name,
+ 					    low_limit,
+ 					    high_limit);
+ 	init_hashinfo_lhash2(h);
++
++	/* this one is used for source ports of outgoing connections */
++	table_perturb = kmalloc_array(INET_TABLE_PERTURB_SIZE,
++				      sizeof(*table_perturb), GFP_KERNEL);
++	if (!table_perturb)
++		panic("TCP: failed to alloc table_perturb");
+ }
+ 
+ int inet_hashinfo2_init_mod(struct inet_hashinfo *h)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-06-27 11:12 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-06-27 11:12 UTC (permalink / raw
  To: gentoo-commits

commit:     efb8f6113b4a4df22556ed572fa800d945fc4300
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 27 11:11:33 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun 27 11:11:33 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=efb8f611

Linux patych 5.10.126

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 ++
 1125_linux-5.10.126.patch | 103 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 107 insertions(+)

diff --git a/0000_README b/0000_README
index fb42ce16..5378e1d4 100644
--- a/0000_README
+++ b/0000_README
@@ -543,6 +543,10 @@ Patch:  1124_linux-5.10.125.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.125
 
+Patch:  1125_linux-5.10.126.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.126
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1125_linux-5.10.126.patch b/1125_linux-5.10.126.patch
new file mode 100644
index 00000000..3948f970
--- /dev/null
+++ b/1125_linux-5.10.126.patch
@@ -0,0 +1,103 @@
+diff --git a/Makefile b/Makefile
+index da5b28931e5cb..57434487c2b4d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 125
++SUBLEVEL = 126
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 40ac37beca47d..2e12dcbc7b0fd 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -696,6 +696,8 @@ struct io_kiocb {
+ 	 */
+ 	struct list_head		inflight_entry;
+ 
++	struct list_head		iopoll_entry;
++
+ 	struct percpu_ref		*fixed_file_refs;
+ 	struct callback_head		task_work;
+ 	/* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
+@@ -2350,8 +2352,8 @@ static void io_iopoll_queue(struct list_head *again)
+ 	struct io_kiocb *req;
+ 
+ 	do {
+-		req = list_first_entry(again, struct io_kiocb, inflight_entry);
+-		list_del(&req->inflight_entry);
++		req = list_first_entry(again, struct io_kiocb, iopoll_entry);
++		list_del(&req->iopoll_entry);
+ 		__io_complete_rw(req, -EAGAIN, 0, NULL);
+ 	} while (!list_empty(again));
+ }
+@@ -2373,14 +2375,14 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ 	while (!list_empty(done)) {
+ 		int cflags = 0;
+ 
+-		req = list_first_entry(done, struct io_kiocb, inflight_entry);
++		req = list_first_entry(done, struct io_kiocb, iopoll_entry);
+ 		if (READ_ONCE(req->result) == -EAGAIN) {
+ 			req->result = 0;
+ 			req->iopoll_completed = 0;
+-			list_move_tail(&req->inflight_entry, &again);
++			list_move_tail(&req->iopoll_entry, &again);
+ 			continue;
+ 		}
+-		list_del(&req->inflight_entry);
++		list_del(&req->iopoll_entry);
+ 
+ 		if (req->flags & REQ_F_BUFFER_SELECTED)
+ 			cflags = io_put_rw_kbuf(req);
+@@ -2416,7 +2418,7 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ 	spin = !ctx->poll_multi_file && *nr_events < min;
+ 
+ 	ret = 0;
+-	list_for_each_entry_safe(req, tmp, &ctx->iopoll_list, inflight_entry) {
++	list_for_each_entry_safe(req, tmp, &ctx->iopoll_list, iopoll_entry) {
+ 		struct kiocb *kiocb = &req->rw.kiocb;
+ 
+ 		/*
+@@ -2425,7 +2427,7 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ 		 * and complete those lists first, if we have entries there.
+ 		 */
+ 		if (READ_ONCE(req->iopoll_completed)) {
+-			list_move_tail(&req->inflight_entry, &done);
++			list_move_tail(&req->iopoll_entry, &done);
+ 			continue;
+ 		}
+ 		if (!list_empty(&done))
+@@ -2437,7 +2439,7 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ 
+ 		/* iopoll may have completed current req */
+ 		if (READ_ONCE(req->iopoll_completed))
+-			list_move_tail(&req->inflight_entry, &done);
++			list_move_tail(&req->iopoll_entry, &done);
+ 
+ 		if (ret && spin)
+ 			spin = false;
+@@ -2670,7 +2672,7 @@ static void io_iopoll_req_issued(struct io_kiocb *req)
+ 		struct io_kiocb *list_req;
+ 
+ 		list_req = list_first_entry(&ctx->iopoll_list, struct io_kiocb,
+-						inflight_entry);
++						iopoll_entry);
+ 		if (list_req->file != req->file)
+ 			ctx->poll_multi_file = true;
+ 	}
+@@ -2680,9 +2682,9 @@ static void io_iopoll_req_issued(struct io_kiocb *req)
+ 	 * it to the front so we find it first.
+ 	 */
+ 	if (READ_ONCE(req->iopoll_completed))
+-		list_add(&req->inflight_entry, &ctx->iopoll_list);
++		list_add(&req->iopoll_entry, &ctx->iopoll_list);
+ 	else
+-		list_add_tail(&req->inflight_entry, &ctx->iopoll_list);
++		list_add_tail(&req->iopoll_entry, &ctx->iopoll_list);
+ 
+ 	if ((ctx->flags & IORING_SETUP_SQPOLL) &&
+ 	    wq_has_sleeper(&ctx->sq_data->wait))


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-06-29 11:08 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-06-29 11:08 UTC (permalink / raw
  To: gentoo-commits

commit:     1ebc9e2bcbeba680916eeddcf7f4e0132704b55a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 29 11:08:37 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 29 11:08:37 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1ebc9e2b

Linux patch 5.10.127

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1126_linux-5.10.127.patch | 4804 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4808 insertions(+)

diff --git a/0000_README b/0000_README
index 5378e1d4..0104ffd6 100644
--- a/0000_README
+++ b/0000_README
@@ -547,6 +547,10 @@ Patch:  1125_linux-5.10.126.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.126
 
+Patch:  1126_linux-5.10.127.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.127
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1126_linux-5.10.127.patch b/1126_linux-5.10.127.patch
new file mode 100644
index 00000000..db80f4e8
--- /dev/null
+++ b/1126_linux-5.10.127.patch
@@ -0,0 +1,4804 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-iio-vf610 b/Documentation/ABI/testing/sysfs-bus-iio-vf610
+index 308a6756d3bf3..491ead8044888 100644
+--- a/Documentation/ABI/testing/sysfs-bus-iio-vf610
++++ b/Documentation/ABI/testing/sysfs-bus-iio-vf610
+@@ -1,4 +1,4 @@
+-What:		/sys/bus/iio/devices/iio:deviceX/conversion_mode
++What:		/sys/bus/iio/devices/iio:deviceX/in_conversion_mode
+ KernelVersion:	4.2
+ Contact:	linux-iio@vger.kernel.org
+ Description:
+diff --git a/Makefile b/Makefile
+index 57434487c2b4d..e3eb9ba19f86e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 126
++SUBLEVEL = 127
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -1156,7 +1156,7 @@ KBUILD_MODULES := 1
+ 
+ autoksyms_recursive: descend modules.order
+ 	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/adjust_autoksyms.sh \
+-	  "$(MAKE) -f $(srctree)/Makefile vmlinux"
++	  "$(MAKE) -f $(srctree)/Makefile autoksyms_recursive"
+ endif
+ 
+ autoksyms_h := $(if $(CONFIG_TRIM_UNUSED_KSYMS), include/generated/autoksyms.h)
+diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
+index 7a8837cbe21bf..7858ae5d39df7 100644
+--- a/arch/arm/boot/dts/imx6qdl.dtsi
++++ b/arch/arm/boot/dts/imx6qdl.dtsi
+@@ -756,7 +756,7 @@
+ 					regulator-name = "vddpu";
+ 					regulator-min-microvolt = <725000>;
+ 					regulator-max-microvolt = <1450000>;
+-					regulator-enable-ramp-delay = <150>;
++					regulator-enable-ramp-delay = <380>;
+ 					anatop-reg-offset = <0x140>;
+ 					anatop-vol-bit-shift = <9>;
+ 					anatop-vol-bit-width = <5>;
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index 84d9cc13afb95..9e1b0af0aa43f 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -102,6 +102,7 @@
+ 		compatible = "usb-nop-xceiv";
+ 		clocks = <&clks IMX7D_USB_HSIC_ROOT_CLK>;
+ 		clock-names = "main_clk";
++		power-domains = <&pgc_hsic_phy>;
+ 		#phy-cells = <0>;
+ 	};
+ 
+@@ -1104,7 +1105,6 @@
+ 				compatible = "fsl,imx7d-usb", "fsl,imx27-usb";
+ 				reg = <0x30b30000 0x200>;
+ 				interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
+-				power-domains = <&pgc_hsic_phy>;
+ 				clocks = <&clks IMX7D_USB_CTRL_CLK>;
+ 				fsl,usbphy = <&usbphynop3>;
+ 				fsl,usbmisc = <&usbmisc3 0>;
+diff --git a/arch/arm/mach-axxia/platsmp.c b/arch/arm/mach-axxia/platsmp.c
+index 512943eae30a5..2e203626eda52 100644
+--- a/arch/arm/mach-axxia/platsmp.c
++++ b/arch/arm/mach-axxia/platsmp.c
+@@ -39,6 +39,7 @@ static int axxia_boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 		return -ENOENT;
+ 
+ 	syscon = of_iomap(syscon_np, 0);
++	of_node_put(syscon_np);
+ 	if (!syscon)
+ 		return -ENOMEM;
+ 
+diff --git a/arch/arm/mach-cns3xxx/core.c b/arch/arm/mach-cns3xxx/core.c
+index e4f4b20b83a2d..3fc4ec830e3a3 100644
+--- a/arch/arm/mach-cns3xxx/core.c
++++ b/arch/arm/mach-cns3xxx/core.c
+@@ -372,6 +372,7 @@ static void __init cns3xxx_init(void)
+ 		/* De-Asscer SATA Reset */
+ 		cns3xxx_pwr_soft_rst(CNS3XXX_PWR_SOFTWARE_RST(SATA));
+ 	}
++	of_node_put(dn);
+ 
+ 	dn = of_find_compatible_node(NULL, NULL, "cavium,cns3420-sdhci");
+ 	if (of_device_is_available(dn)) {
+@@ -385,6 +386,7 @@ static void __init cns3xxx_init(void)
+ 		cns3xxx_pwr_clk_en(CNS3XXX_PWR_CLK_EN(SDIO));
+ 		cns3xxx_pwr_soft_rst(CNS3XXX_PWR_SOFTWARE_RST(SDIO));
+ 	}
++	of_node_put(dn);
+ 
+ 	pm_power_off = cns3xxx_power_off;
+ 
+diff --git a/arch/arm/mach-exynos/exynos.c b/arch/arm/mach-exynos/exynos.c
+index 83d1d1327f96e..1276585f72c53 100644
+--- a/arch/arm/mach-exynos/exynos.c
++++ b/arch/arm/mach-exynos/exynos.c
+@@ -149,6 +149,7 @@ static void exynos_map_pmu(void)
+ 	np = of_find_matching_node(NULL, exynos_dt_pmu_match);
+ 	if (np)
+ 		pmu_base_addr = of_iomap(np, 0);
++	of_node_put(np);
+ }
+ 
+ static void __init exynos_init_irq(void)
+diff --git a/arch/mips/vr41xx/common/icu.c b/arch/mips/vr41xx/common/icu.c
+index 7b7f25b4b057e..9240bcdbe74e4 100644
+--- a/arch/mips/vr41xx/common/icu.c
++++ b/arch/mips/vr41xx/common/icu.c
+@@ -640,8 +640,6 @@ static int icu_get_irq(unsigned int irq)
+ 
+ 	printk(KERN_ERR "spurious ICU interrupt: %04x,%04x\n", pend1, pend2);
+ 
+-	atomic_inc(&irq_err_count);
+-
+ 	return -1;
+ }
+ 
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index 14f3252f2da03..2d89f79f460cb 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -11,6 +11,7 @@ config PARISC
+ 	select ARCH_WANT_FRAME_POINTERS
+ 	select ARCH_HAS_ELF_RANDOMIZE
+ 	select ARCH_HAS_STRICT_KERNEL_RWX
++	select ARCH_HAS_STRICT_MODULE_RWX
+ 	select ARCH_HAS_UBSAN_SANITIZE_ALL
+ 	select ARCH_NO_SG_CHAIN
+ 	select ARCH_SUPPORTS_MEMORY_FAILURE
+diff --git a/arch/parisc/include/asm/fb.h b/arch/parisc/include/asm/fb.h
+index d63a2acb91f2b..55d29c4f716e6 100644
+--- a/arch/parisc/include/asm/fb.h
++++ b/arch/parisc/include/asm/fb.h
+@@ -12,7 +12,7 @@ static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
+ 	pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE;
+ }
+ 
+-#if defined(CONFIG_STI_CONSOLE) || defined(CONFIG_FB_STI)
++#if defined(CONFIG_FB_STI)
+ int fb_is_primary_device(struct fb_info *info);
+ #else
+ static inline int fb_is_primary_device(struct fb_info *info)
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index cfb8fd76afb43..c43cc26bde5db 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -1800,7 +1800,7 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
+ 		tm_reclaim_current(0);
+ #endif
+ 
+-	memset(regs->gpr, 0, sizeof(regs->gpr));
++	memset(&regs->gpr[1], 0, sizeof(regs->gpr) - sizeof(regs->gpr[0]));
+ 	regs->ctr = 0;
+ 	regs->link = 0;
+ 	regs->xer = 0;
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index cf421eb7f90d4..bf962051af0a0 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -1040,7 +1040,7 @@ static struct rtas_filter rtas_filters[] __ro_after_init = {
+ 	{ "get-time-of-day", -1, -1, -1, -1, -1 },
+ 	{ "ibm,get-vpd", -1, 0, -1, 1, 2 },
+ 	{ "ibm,lpar-perftools", -1, 2, 3, -1, -1 },
+-	{ "ibm,platform-dump", -1, 4, 5, -1, -1 },
++	{ "ibm,platform-dump", -1, 4, 5, -1, -1 },		/* Special cased */
+ 	{ "ibm,read-slot-reset-state", -1, -1, -1, -1, -1 },
+ 	{ "ibm,scan-log-dump", -1, 0, 1, -1, -1 },
+ 	{ "ibm,set-dynamic-indicator", -1, 2, -1, -1, -1 },
+@@ -1087,6 +1087,15 @@ static bool block_rtas_call(int token, int nargs,
+ 				size = 1;
+ 
+ 			end = base + size - 1;
++
++			/*
++			 * Special case for ibm,platform-dump - NULL buffer
++			 * address is used to indicate end of dump processing
++			 */
++			if (!strcmp(f->name, "ibm,platform-dump") &&
++			    base == 0)
++				return false;
++
+ 			if (!in_rmo_buf(base, end))
+ 				goto err;
+ 		}
+diff --git a/arch/powerpc/platforms/powernv/powernv.h b/arch/powerpc/platforms/powernv/powernv.h
+index 11df4e16a1cc3..528946ee7a777 100644
+--- a/arch/powerpc/platforms/powernv/powernv.h
++++ b/arch/powerpc/platforms/powernv/powernv.h
+@@ -42,4 +42,6 @@ ssize_t memcons_copy(struct memcons *mc, char *to, loff_t pos, size_t count);
+ u32 memcons_get_size(struct memcons *mc);
+ struct memcons *memcons_init(struct device_node *node, const char *mc_prop_name);
+ 
++void pnv_rng_init(void);
++
+ #endif /* _POWERNV_H */
+diff --git a/arch/powerpc/platforms/powernv/rng.c b/arch/powerpc/platforms/powernv/rng.c
+index 69c344c8884f3..2b5a1a41234cc 100644
+--- a/arch/powerpc/platforms/powernv/rng.c
++++ b/arch/powerpc/platforms/powernv/rng.c
+@@ -17,6 +17,7 @@
+ #include <asm/prom.h>
+ #include <asm/machdep.h>
+ #include <asm/smp.h>
++#include "powernv.h"
+ 
+ #define DARN_ERR 0xFFFFFFFFFFFFFFFFul
+ 
+@@ -28,7 +29,6 @@ struct powernv_rng {
+ 
+ static DEFINE_PER_CPU(struct powernv_rng *, powernv_rng);
+ 
+-
+ int powernv_hwrng_present(void)
+ {
+ 	struct powernv_rng *rng;
+@@ -98,9 +98,6 @@ static int initialise_darn(void)
+ 			return 0;
+ 		}
+ 	}
+-
+-	pr_warn("Unable to use DARN for get_random_seed()\n");
+-
+ 	return -EIO;
+ }
+ 
+@@ -163,32 +160,55 @@ static __init int rng_create(struct device_node *dn)
+ 
+ 	rng_init_per_cpu(rng, dn);
+ 
+-	pr_info_once("Registering arch random hook.\n");
+-
+ 	ppc_md.get_random_seed = powernv_get_random_long;
+ 
+ 	return 0;
+ }
+ 
+-static __init int rng_init(void)
++static int __init pnv_get_random_long_early(unsigned long *v)
+ {
+ 	struct device_node *dn;
+-	int rc;
++
++	if (!slab_is_available())
++		return 0;
++
++	if (cmpxchg(&ppc_md.get_random_seed, pnv_get_random_long_early,
++		    NULL) != pnv_get_random_long_early)
++		return 0;
+ 
+ 	for_each_compatible_node(dn, NULL, "ibm,power-rng") {
+-		rc = rng_create(dn);
+-		if (rc) {
+-			pr_err("Failed creating rng for %pOF (%d).\n",
+-				dn, rc);
++		if (rng_create(dn))
+ 			continue;
+-		}
+-
+ 		/* Create devices for hwrng driver */
+ 		of_platform_device_create(dn, NULL, NULL);
+ 	}
+ 
+-	initialise_darn();
++	if (!ppc_md.get_random_seed)
++		return 0;
++	return ppc_md.get_random_seed(v);
++}
++
++void __init pnv_rng_init(void)
++{
++	struct device_node *dn;
+ 
++	/* Prefer darn over the rest. */
++	if (!initialise_darn())
++		return;
++
++	dn = of_find_compatible_node(NULL, NULL, "ibm,power-rng");
++	if (dn)
++		ppc_md.get_random_seed = pnv_get_random_long_early;
++
++	of_node_put(dn);
++}
++
++static int __init pnv_rng_late_init(void)
++{
++	unsigned long v;
++	/* In case it wasn't called during init for some other reason. */
++	if (ppc_md.get_random_seed == pnv_get_random_long_early)
++		pnv_get_random_long_early(&v);
+ 	return 0;
+ }
+-machine_subsys_initcall(powernv, rng_init);
++machine_subsys_initcall(powernv, pnv_rng_late_init);
+diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
+index 4426a109ec2f4..1a2f12dc05525 100644
+--- a/arch/powerpc/platforms/powernv/setup.c
++++ b/arch/powerpc/platforms/powernv/setup.c
+@@ -193,6 +193,8 @@ static void __init pnv_setup_arch(void)
+ 	pnv_check_guarded_cores();
+ 
+ 	/* XXX PMCS */
++
++	pnv_rng_init();
+ }
+ 
+ static void __init pnv_init(void)
+diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
+index 593840847cd3d..ada9601aaff1a 100644
+--- a/arch/powerpc/platforms/pseries/pseries.h
++++ b/arch/powerpc/platforms/pseries/pseries.h
+@@ -114,4 +114,6 @@ int dlpar_workqueue_init(void);
+ void pseries_setup_security_mitigations(void);
+ void pseries_lpar_read_hblkrm_characteristics(void);
+ 
++void pseries_rng_init(void);
++
+ #endif /* _PSERIES_PSERIES_H */
+diff --git a/arch/powerpc/platforms/pseries/rng.c b/arch/powerpc/platforms/pseries/rng.c
+index 6268545947b83..6ddfdeaace9ef 100644
+--- a/arch/powerpc/platforms/pseries/rng.c
++++ b/arch/powerpc/platforms/pseries/rng.c
+@@ -10,6 +10,7 @@
+ #include <asm/archrandom.h>
+ #include <asm/machdep.h>
+ #include <asm/plpar_wrappers.h>
++#include "pseries.h"
+ 
+ 
+ static int pseries_get_random_long(unsigned long *v)
+@@ -24,19 +25,13 @@ static int pseries_get_random_long(unsigned long *v)
+ 	return 0;
+ }
+ 
+-static __init int rng_init(void)
++void __init pseries_rng_init(void)
+ {
+ 	struct device_node *dn;
+ 
+ 	dn = of_find_compatible_node(NULL, NULL, "ibm,random");
+ 	if (!dn)
+-		return -ENODEV;
+-
+-	pr_info("Registering arch random hook.\n");
+-
++		return;
+ 	ppc_md.get_random_seed = pseries_get_random_long;
+-
+ 	of_node_put(dn);
+-	return 0;
+ }
+-machine_subsys_initcall(pseries, rng_init);
+diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
+index 47dfada140e19..0eac9ca782c21 100644
+--- a/arch/powerpc/platforms/pseries/setup.c
++++ b/arch/powerpc/platforms/pseries/setup.c
+@@ -824,6 +824,8 @@ static void __init pSeries_setup_arch(void)
+ 
+ 	if (swiotlb_force == SWIOTLB_FORCE)
+ 		ppc_swiotlb_enable = 1;
++
++	pseries_rng_init();
+ }
+ 
+ static void pseries_panic(char *str)
+diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
+index 0eb1d1cc53a88..dddb32e53db8b 100644
+--- a/arch/s390/kernel/perf_cpum_cf.c
++++ b/arch/s390/kernel/perf_cpum_cf.c
+@@ -292,6 +292,26 @@ static int __hw_perf_event_init(struct perf_event *event, unsigned int type)
+ 	return err;
+ }
+ 
++/* Events CPU_CYLCES and INSTRUCTIONS can be submitted with two different
++ * attribute::type values:
++ * - PERF_TYPE_HARDWARE:
++ * - pmu->type:
++ * Handle both type of invocations identical. They address the same hardware.
++ * The result is different when event modifiers exclude_kernel and/or
++ * exclude_user are also set.
++ */
++static int cpumf_pmu_event_type(struct perf_event *event)
++{
++	u64 ev = event->attr.config;
++
++	if (cpumf_generic_events_basic[PERF_COUNT_HW_CPU_CYCLES] == ev ||
++	    cpumf_generic_events_basic[PERF_COUNT_HW_INSTRUCTIONS] == ev ||
++	    cpumf_generic_events_user[PERF_COUNT_HW_CPU_CYCLES] == ev ||
++	    cpumf_generic_events_user[PERF_COUNT_HW_INSTRUCTIONS] == ev)
++		return PERF_TYPE_HARDWARE;
++	return PERF_TYPE_RAW;
++}
++
+ static int cpumf_pmu_event_init(struct perf_event *event)
+ {
+ 	unsigned int type = event->attr.type;
+@@ -301,7 +321,7 @@ static int cpumf_pmu_event_init(struct perf_event *event)
+ 		err = __hw_perf_event_init(event, type);
+ 	else if (event->pmu->type == type)
+ 		/* Registered as unknown PMU */
+-		err = __hw_perf_event_init(event, PERF_TYPE_RAW);
++		err = __hw_perf_event_init(event, cpumf_pmu_event_type(event));
+ 	else
+ 		return -ENOENT;
+ 
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index a0a7ead52698c..1714e85eb26d2 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -1261,8 +1261,9 @@ xadd:			if (is_imm8(insn->off))
+ 		case BPF_JMP | BPF_CALL:
+ 			func = (u8 *) __bpf_call_base + imm32;
+ 			if (tail_call_reachable) {
++				/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
+ 				EMIT3_off32(0x48, 0x8B, 0x85,
+-					    -(bpf_prog->aux->stack_depth + 8));
++					    -round_up(bpf_prog->aux->stack_depth, 8) - 8);
+ 				if (!imm32 || emit_call(&prog, func, image + addrs[i - 1] + 7))
+ 					return -EINVAL;
+ 			} else {
+diff --git a/arch/xtensa/kernel/time.c b/arch/xtensa/kernel/time.c
+index 77971fe4cc95b..8e81ba63ed389 100644
+--- a/arch/xtensa/kernel/time.c
++++ b/arch/xtensa/kernel/time.c
+@@ -154,6 +154,7 @@ static void __init calibrate_ccount(void)
+ 	cpu = of_find_compatible_node(NULL, NULL, "cdns,xtensa-cpu");
+ 	if (cpu) {
+ 		clk = of_clk_get(cpu, 0);
++		of_node_put(cpu);
+ 		if (!IS_ERR(clk)) {
+ 			ccount_freq = clk_get_rate(clk);
+ 			return;
+diff --git a/arch/xtensa/platforms/xtfpga/setup.c b/arch/xtensa/platforms/xtfpga/setup.c
+index 538e6748e85a7..c79c1d09ea863 100644
+--- a/arch/xtensa/platforms/xtfpga/setup.c
++++ b/arch/xtensa/platforms/xtfpga/setup.c
+@@ -133,6 +133,7 @@ static int __init machine_setup(void)
+ 
+ 	if ((eth = of_find_compatible_node(eth, NULL, "opencores,ethoc")))
+ 		update_local_mac(eth);
++	of_node_put(eth);
+ 	return 0;
+ }
+ arch_initcall(machine_setup);
+diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c
+index 87c5c421e0f46..4466f8bdab2e1 100644
+--- a/drivers/base/regmap/regmap-irq.c
++++ b/drivers/base/regmap/regmap-irq.c
+@@ -220,6 +220,7 @@ static void regmap_irq_enable(struct irq_data *data)
+ 	struct regmap_irq_chip_data *d = irq_data_get_irq_chip_data(data);
+ 	struct regmap *map = d->map;
+ 	const struct regmap_irq *irq_data = irq_to_regmap_irq(d, data->hwirq);
++	unsigned int reg = irq_data->reg_offset / map->reg_stride;
+ 	unsigned int mask, type;
+ 
+ 	type = irq_data->type.type_falling_val | irq_data->type.type_rising_val;
+@@ -236,14 +237,14 @@ static void regmap_irq_enable(struct irq_data *data)
+ 	 * at the corresponding offset in regmap_irq_set_type().
+ 	 */
+ 	if (d->chip->type_in_mask && type)
+-		mask = d->type_buf[irq_data->reg_offset / map->reg_stride];
++		mask = d->type_buf[reg] & irq_data->mask;
+ 	else
+ 		mask = irq_data->mask;
+ 
+ 	if (d->chip->clear_on_unmask)
+ 		d->clear_status = true;
+ 
+-	d->mask_buf[irq_data->reg_offset / map->reg_stride] &= ~mask;
++	d->mask_buf[reg] &= ~mask;
+ }
+ 
+ static void regmap_irq_disable(struct irq_data *data)
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 5776dfd4a6fca..f769d858eda73 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -88,7 +88,7 @@ static RAW_NOTIFIER_HEAD(random_ready_chain);
+ 
+ /* Control how we warn userspace. */
+ static struct ratelimit_state urandom_warning =
+-	RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3);
++	RATELIMIT_STATE_INIT_FLAGS("urandom_warning", HZ, 3, RATELIMIT_MSG_ON_RELEASE);
+ static int ratelimit_disable __read_mostly =
+ 	IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM);
+ module_param_named(ratelimit_disable, ratelimit_disable, int, 0644);
+@@ -452,7 +452,7 @@ static ssize_t get_random_bytes_user(struct iov_iter *iter)
+ 
+ 	/*
+ 	 * Immediately overwrite the ChaCha key at index 4 with random
+-	 * bytes, in case userspace causes copy_to_user() below to sleep
++	 * bytes, in case userspace causes copy_to_iter() below to sleep
+ 	 * forever, so that we still retain forward secrecy in that case.
+ 	 */
+ 	crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE);
+@@ -1001,7 +1001,7 @@ void add_interrupt_randomness(int irq)
+ 	if (new_count & MIX_INFLIGHT)
+ 		return;
+ 
+-	if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ))
++	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
+ 		return;
+ 
+ 	if (unlikely(!fast_pool->mix.func))
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index cfbf10128aaed..2e3b76519b49d 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -26,8 +26,11 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+ 	struct udmabuf *ubuf = vma->vm_private_data;
++	pgoff_t pgoff = vmf->pgoff;
+ 
+-	vmf->page = ubuf->pages[vmf->pgoff];
++	if (pgoff >= ubuf->pagecount)
++		return VM_FAULT_SIGBUS;
++	vmf->page = ubuf->pages[pgoff];
+ 	get_page(vmf->page);
+ 	return 0;
+ }
+diff --git a/drivers/gpio/gpio-vr41xx.c b/drivers/gpio/gpio-vr41xx.c
+index 98cd715ccc33c..8d09b619c1669 100644
+--- a/drivers/gpio/gpio-vr41xx.c
++++ b/drivers/gpio/gpio-vr41xx.c
+@@ -217,8 +217,6 @@ static int giu_get_irq(unsigned int irq)
+ 	printk(KERN_ERR "spurious GIU interrupt: %04x(%04x),%04x(%04x)\n",
+ 	       maskl, pendl, maskh, pendh);
+ 
+-	atomic_inc(&irq_err_count);
+-
+ 	return -EINVAL;
+ }
+ 
+diff --git a/drivers/gpio/gpio-winbond.c b/drivers/gpio/gpio-winbond.c
+index 7f8f5b02e31d5..4b61d975cc0ec 100644
+--- a/drivers/gpio/gpio-winbond.c
++++ b/drivers/gpio/gpio-winbond.c
+@@ -385,12 +385,13 @@ static int winbond_gpio_get(struct gpio_chip *gc, unsigned int offset)
+ 	unsigned long *base = gpiochip_get_data(gc);
+ 	const struct winbond_gpio_info *info;
+ 	bool val;
++	int ret;
+ 
+ 	winbond_gpio_get_info(&offset, &info);
+ 
+-	val = winbond_sio_enter(*base);
+-	if (val)
+-		return val;
++	ret = winbond_sio_enter(*base);
++	if (ret)
++		return ret;
+ 
+ 	winbond_sio_select_logical(*base, info->dev);
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index 458b5b26d3c26..de8cc25506d61 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -960,7 +960,8 @@ void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu)
+ 	for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++)
+ 		release_firmware(adreno_gpu->fw[i]);
+ 
+-	pm_runtime_disable(&priv->gpu_pdev->dev);
++	if (pm_runtime_enabled(&priv->gpu_pdev->dev))
++		pm_runtime_disable(&priv->gpu_pdev->dev);
+ 
+ 	msm_gpu_cleanup(&adreno_gpu->base);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+index 913de5938782a..b4d0bfc83d70e 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+@@ -221,6 +221,7 @@ static int mdp4_modeset_init_intf(struct mdp4_kms *mdp4_kms,
+ 		encoder = mdp4_lcdc_encoder_init(dev, panel_node);
+ 		if (IS_ERR(encoder)) {
+ 			DRM_DEV_ERROR(dev->dev, "failed to construct LCDC encoder\n");
++			of_node_put(panel_node);
+ 			return PTR_ERR(encoder);
+ 		}
+ 
+@@ -230,6 +231,7 @@ static int mdp4_modeset_init_intf(struct mdp4_kms *mdp4_kms,
+ 		connector = mdp4_lvds_connector_init(dev, panel_node, encoder);
+ 		if (IS_ERR(connector)) {
+ 			DRM_DEV_ERROR(dev->dev, "failed to initialize LVDS connector\n");
++			of_node_put(panel_node);
+ 			return PTR_ERR(connector);
+ 		}
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c
+index aeca8b2ac5c6b..2da6982efdbfc 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
+@@ -572,7 +572,7 @@ void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog)
+ 	dp_write_aux(catalog, REG_DP_DP_HPD_CTRL, DP_DP_HPD_CTRL_HPD_EN);
+ }
+ 
+-u32 dp_catalog_hpd_get_state_status(struct dp_catalog *dp_catalog)
++u32 dp_catalog_link_is_connected(struct dp_catalog *dp_catalog)
+ {
+ 	struct dp_catalog_private *catalog = container_of(dp_catalog,
+ 				struct dp_catalog_private, dp_catalog);
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.h b/drivers/gpu/drm/msm/dp/dp_catalog.h
+index 6d257dbebf294..176a9020a520c 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.h
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.h
+@@ -97,7 +97,7 @@ void dp_catalog_ctrl_enable_irq(struct dp_catalog *dp_catalog, bool enable);
+ void dp_catalog_hpd_config_intr(struct dp_catalog *dp_catalog,
+ 			u32 intr_mask, bool en);
+ void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog);
+-u32 dp_catalog_hpd_get_state_status(struct dp_catalog *dp_catalog);
++u32 dp_catalog_link_is_connected(struct dp_catalog *dp_catalog);
+ u32 dp_catalog_hpd_get_intr_status(struct dp_catalog *dp_catalog);
+ void dp_catalog_ctrl_phy_reset(struct dp_catalog *dp_catalog);
+ int dp_catalog_ctrl_update_vx_px(struct dp_catalog *dp_catalog, u8 v_level,
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index c83a1650437da..b9ca844ce2ad0 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1460,6 +1460,30 @@ static int dp_ctrl_reinitialize_mainlink(struct dp_ctrl_private *ctrl)
+ 	return ret;
+ }
+ 
++static int dp_ctrl_deinitialize_mainlink(struct dp_ctrl_private *ctrl)
++{
++	struct dp_io *dp_io;
++	struct phy *phy;
++	int ret;
++
++	dp_io = &ctrl->parser->io;
++	phy = dp_io->phy;
++
++	dp_catalog_ctrl_mainlink_ctrl(ctrl->catalog, false);
++
++	dp_catalog_ctrl_reset(ctrl->catalog);
++
++	ret = dp_power_clk_enable(ctrl->power, DP_CTRL_PM, false);
++	if (ret) {
++		DRM_ERROR("Failed to disable link clocks. ret=%d\n", ret);
++	}
++
++	phy_power_off(phy);
++	phy_exit(phy);
++
++	return 0;
++}
++
+ static int dp_ctrl_link_maintenance(struct dp_ctrl_private *ctrl)
+ {
+ 	int ret = 0;
+@@ -1640,8 +1664,7 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 	if (rc)
+ 		return rc;
+ 
+-	while (--link_train_max_retries &&
+-		!atomic_read(&ctrl->dp_ctrl.aborted)) {
++	while (--link_train_max_retries) {
+ 		rc = dp_ctrl_reinitialize_mainlink(ctrl);
+ 		if (rc) {
+ 			DRM_ERROR("Failed to reinitialize mainlink. rc=%d\n",
+@@ -1656,6 +1679,10 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 			break;
+ 		} else if (training_step == DP_TRAINING_1) {
+ 			/* link train_1 failed */
++			if (!dp_catalog_link_is_connected(ctrl->catalog)) {
++				break;
++			}
++
+ 			rc = dp_ctrl_link_rate_down_shift(ctrl);
+ 			if (rc < 0) { /* already in RBR = 1.6G */
+ 				if (cr.lane_0_1 & DP_LANE0_1_CR_DONE) {
+@@ -1675,6 +1702,10 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 			}
+ 		} else if (training_step == DP_TRAINING_2) {
+ 			/* link train_2 failed, lower lane rate */
++			if (!dp_catalog_link_is_connected(ctrl->catalog)) {
++				break;
++			}
++
+ 			rc = dp_ctrl_link_lane_down_shift(ctrl);
+ 			if (rc < 0) {
+ 				/* end with failure */
+@@ -1695,6 +1726,11 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 	 */
+ 	if (rc == 0)  /* link train successfully */
+ 		dp_ctrl_push_idle(dp_ctrl);
++	else  {
++		/* link training failed */
++		dp_ctrl_deinitialize_mainlink(ctrl);
++		rc = -ECONNRESET;
++	}
+ 
+ 	return rc;
+ }
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index ebd05678a27ba..a3de1d0523ea0 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -45,7 +45,7 @@ enum {
+ 	ST_CONNECT_PENDING,
+ 	ST_CONNECTED,
+ 	ST_DISCONNECT_PENDING,
+-	ST_SUSPEND_PENDING,
++	ST_DISPLAY_OFF,
+ 	ST_SUSPENDED,
+ };
+ 
+@@ -102,6 +102,8 @@ struct dp_display_private {
+ 	struct dp_display_mode dp_mode;
+ 	struct msm_dp dp_display;
+ 
++	bool encoder_mode_set;
++
+ 	/* wait for audio signaling */
+ 	struct completion audio_comp;
+ 
+@@ -268,7 +270,8 @@ static void dp_display_unbind(struct device *dev, struct device *master,
+ 	}
+ 
+ 	/* disable all HPD interrupts */
+-	dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false);
++	if (dp->core_initialized)
++		dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false);
+ 
+ 	kthread_stop(dp->ev_tsk);
+ 
+@@ -305,13 +308,24 @@ static void dp_display_send_hpd_event(struct msm_dp *dp_display)
+ 	drm_helper_hpd_irq_event(connector->dev);
+ }
+ 
+-static int dp_display_send_hpd_notification(struct dp_display_private *dp,
+-					    bool hpd)
++
++static void dp_display_set_encoder_mode(struct dp_display_private *dp)
+ {
+-	static bool encoder_mode_set;
+ 	struct msm_drm_private *priv = dp->dp_display.drm_dev->dev_private;
+ 	struct msm_kms *kms = priv->kms;
+ 
++	if (!dp->encoder_mode_set && dp->dp_display.encoder &&
++				kms->funcs->set_encoder_mode) {
++		kms->funcs->set_encoder_mode(kms,
++				dp->dp_display.encoder, false);
++
++		dp->encoder_mode_set = true;
++	}
++}
++
++static int dp_display_send_hpd_notification(struct dp_display_private *dp,
++					    bool hpd)
++{
+ 	if ((hpd && dp->dp_display.is_connected) ||
+ 			(!hpd && !dp->dp_display.is_connected)) {
+ 		DRM_DEBUG_DP("HPD already %s\n", (hpd ? "on" : "off"));
+@@ -324,15 +338,6 @@ static int dp_display_send_hpd_notification(struct dp_display_private *dp,
+ 
+ 	dp->dp_display.is_connected = hpd;
+ 
+-	if (dp->dp_display.is_connected && dp->dp_display.encoder
+-				&& !encoder_mode_set
+-				&& kms->funcs->set_encoder_mode) {
+-		kms->funcs->set_encoder_mode(kms,
+-				dp->dp_display.encoder, false);
+-		DRM_DEBUG_DP("set_encoder_mode() Completed\n");
+-		encoder_mode_set = true;
+-	}
+-
+ 	dp_display_send_hpd_event(&dp->dp_display);
+ 
+ 	return 0;
+@@ -368,7 +373,6 @@ static int dp_display_process_hpd_high(struct dp_display_private *dp)
+ 
+ 	dp_add_event(dp, EV_USER_NOTIFICATION, true, 0);
+ 
+-
+ end:
+ 	return rc;
+ }
+@@ -385,6 +389,8 @@ static void dp_display_host_init(struct dp_display_private *dp)
+ 	if (dp->usbpd->orientation == ORIENTATION_CC2)
+ 		flip = true;
+ 
++	dp_display_set_encoder_mode(dp);
++
+ 	dp_power_init(dp->power, flip);
+ 	dp_ctrl_host_init(dp->ctrl, flip);
+ 	dp_aux_init(dp->aux);
+@@ -468,25 +474,42 @@ static void dp_display_handle_video_request(struct dp_display_private *dp)
+ 	}
+ }
+ 
+-static int dp_display_handle_irq_hpd(struct dp_display_private *dp)
++static int dp_display_handle_port_ststus_changed(struct dp_display_private *dp)
+ {
+-	u32 sink_request;
+-
+-	sink_request = dp->link->sink_request;
++	int rc = 0;
+ 
+-	if (sink_request & DS_PORT_STATUS_CHANGED) {
+-		dp_add_event(dp, EV_USER_NOTIFICATION, false, 0);
+-		if (dp_display_is_sink_count_zero(dp)) {
+-			DRM_DEBUG_DP("sink count is zero, nothing to do\n");
+-			return 0;
++	if (dp_display_is_sink_count_zero(dp)) {
++		DRM_DEBUG_DP("sink count is zero, nothing to do\n");
++		if (dp->hpd_state != ST_DISCONNECTED) {
++			dp->hpd_state = ST_DISCONNECT_PENDING;
++			dp_add_event(dp, EV_USER_NOTIFICATION, false, 0);
+ 		}
++	} else {
++		if (dp->hpd_state == ST_DISCONNECTED) {
++			dp->hpd_state = ST_CONNECT_PENDING;
++			rc = dp_display_process_hpd_high(dp);
++			if (rc)
++				dp->hpd_state = ST_DISCONNECTED;
++		}
++	}
+ 
+-		return dp_display_process_hpd_high(dp);
++	return rc;
++}
++
++static int dp_display_handle_irq_hpd(struct dp_display_private *dp)
++{
++	u32 sink_request = dp->link->sink_request;
++
++	if (dp->hpd_state == ST_DISCONNECTED) {
++		if (sink_request & DP_LINK_STATUS_UPDATED) {
++			DRM_ERROR("Disconnected, no DP_LINK_STATUS_UPDATED\n");
++			return -EINVAL;
++		}
+ 	}
+ 
+ 	dp_ctrl_handle_sink_request(dp->ctrl);
+ 
+-	if (dp->link->sink_request & DP_TEST_LINK_VIDEO_PATTERN)
++	if (sink_request & DP_TEST_LINK_VIDEO_PATTERN)
+ 		dp_display_handle_video_request(dp);
+ 
+ 	return 0;
+@@ -495,7 +518,9 @@ static int dp_display_handle_irq_hpd(struct dp_display_private *dp)
+ static int dp_display_usbpd_attention_cb(struct device *dev)
+ {
+ 	int rc = 0;
++	u32 sink_request;
+ 	struct dp_display_private *dp;
++	struct dp_usbpd *hpd;
+ 
+ 	if (!dev) {
+ 		DRM_ERROR("invalid dev\n");
+@@ -509,10 +534,17 @@ static int dp_display_usbpd_attention_cb(struct device *dev)
+ 		return -ENODEV;
+ 	}
+ 
++	hpd = dp->usbpd;
++
+ 	/* check for any test request issued by sink */
+ 	rc = dp_link_process_request(dp->link);
+-	if (!rc)
+-		dp_display_handle_irq_hpd(dp);
++	if (!rc) {
++		sink_request = dp->link->sink_request;
++		if (sink_request & DS_PORT_STATUS_CHANGED)
++			rc = dp_display_handle_port_ststus_changed(dp);
++		else
++			rc = dp_display_handle_irq_hpd(dp);
++	}
+ 
+ 	return rc;
+ }
+@@ -530,7 +562,7 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
+ 	mutex_lock(&dp->event_mutex);
+ 
+ 	state =  dp->hpd_state;
+-	if (state == ST_SUSPEND_PENDING) {
++	if (state == ST_DISPLAY_OFF || state == ST_SUSPENDED) {
+ 		mutex_unlock(&dp->event_mutex);
+ 		return 0;
+ 	}
+@@ -552,13 +584,18 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
+ 	hpd->hpd_high = 1;
+ 
+ 	ret = dp_display_usbpd_configure_cb(&dp->pdev->dev);
+-	if (ret) {	/* failed */
++	if (ret) {	/* link train failed */
+ 		hpd->hpd_high = 0;
+ 		dp->hpd_state = ST_DISCONNECTED;
+-	}
+ 
+-	/* start sanity checking */
+-	dp_add_event(dp, EV_CONNECT_PENDING_TIMEOUT, 0, tout);
++		if (ret == -ECONNRESET) { /* cable unplugged */
++			dp->core_initialized = false;
++		}
++
++	} else {
++		/* start sentinel checking in case of missing uevent */
++		dp_add_event(dp, EV_CONNECT_PENDING_TIMEOUT, 0, tout);
++	}
+ 
+ 	mutex_unlock(&dp->event_mutex);
+ 
+@@ -611,11 +648,6 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ 	mutex_lock(&dp->event_mutex);
+ 
+ 	state = dp->hpd_state;
+-	if (state == ST_SUSPEND_PENDING) {
+-		mutex_unlock(&dp->event_mutex);
+-		return 0;
+-	}
+-
+ 	if (state == ST_DISCONNECT_PENDING || state == ST_DISCONNECTED) {
+ 		mutex_unlock(&dp->event_mutex);
+ 		return 0;
+@@ -642,7 +674,7 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ 	 */
+ 	dp_display_usbpd_disconnect_cb(&dp->pdev->dev);
+ 
+-	/* start sanity checking */
++	/* start sentinel checking in case of missing uevent */
+ 	dp_add_event(dp, EV_DISCONNECT_PENDING_TIMEOUT, 0, DP_TIMEOUT_5_SECOND);
+ 
+ 	/* signal the disconnect event early to ensure proper teardown */
+@@ -676,17 +708,21 @@ static int dp_disconnect_pending_timeout(struct dp_display_private *dp, u32 data
+ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
+ {
+ 	u32 state;
++	int ret;
+ 
+ 	mutex_lock(&dp->event_mutex);
+ 
+ 	/* irq_hpd can happen at either connected or disconnected state */
+ 	state =  dp->hpd_state;
+-	if (state == ST_SUSPEND_PENDING) {
++	if (state == ST_DISPLAY_OFF) {
+ 		mutex_unlock(&dp->event_mutex);
+ 		return 0;
+ 	}
+ 
+-	dp_display_usbpd_attention_cb(&dp->pdev->dev);
++	ret = dp_display_usbpd_attention_cb(&dp->pdev->dev);
++	if (ret == -ECONNRESET) { /* cable unplugged */
++		dp->core_initialized = false;
++	}
+ 
+ 	mutex_unlock(&dp->event_mutex);
+ 
+@@ -831,6 +867,11 @@ static int dp_display_enable(struct dp_display_private *dp, u32 data)
+ 
+ 	dp_display = g_dp_display;
+ 
++	if (dp_display->power_on) {
++		DRM_DEBUG_DP("Link already setup, return\n");
++		return 0;
++	}
++
+ 	rc = dp_ctrl_on_stream(dp->ctrl);
+ 	if (!rc)
+ 		dp_display->power_on = true;
+@@ -863,6 +904,9 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data)
+ 
+ 	dp_display = g_dp_display;
+ 
++	if (!dp_display->power_on)
++		return 0;
++
+ 	/* wait only if audio was enabled */
+ 	if (dp_display->audio_enabled) {
+ 		/* signal the disconnect event */
+@@ -1118,7 +1162,7 @@ static irqreturn_t dp_display_irq_handler(int irq, void *dev_id)
+ 		}
+ 
+ 		if (hpd_isr_status & DP_DP_IRQ_HPD_INT_MASK) {
+-			/* delete connect pending event first */
++			/* stop sentinel connect pending checking */
+ 			dp_del_event(dp, EV_CONNECT_PENDING_TIMEOUT);
+ 			dp_add_event(dp, EV_IRQ_HPD_INT, 0, 0);
+ 		}
+@@ -1249,15 +1293,12 @@ static int dp_pm_resume(struct device *dev)
+ 
+ 	dp_catalog_ctrl_hpd_config(dp->catalog);
+ 
+-	status = dp_catalog_hpd_get_state_status(dp->catalog);
++	status = dp_catalog_link_is_connected(dp->catalog);
+ 
+-	if (status) {
++	if (status)
+ 		dp->dp_display.is_connected = true;
+-	} else {
++	else
+ 		dp->dp_display.is_connected = false;
+-		/* make sure next resume host_init be called */
+-		dp->core_initialized = false;
+-	}
+ 
+ 	mutex_unlock(&dp->event_mutex);
+ 
+@@ -1279,6 +1320,9 @@ static int dp_pm_suspend(struct device *dev)
+ 
+ 	dp->hpd_state = ST_SUSPENDED;
+ 
++	/* host_init will be called at pm_resume */
++	dp->core_initialized = false;
++
+ 	mutex_unlock(&dp->event_mutex);
+ 
+ 	return 0;
+@@ -1411,6 +1455,7 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
+ 
+ 	mutex_lock(&dp_display->event_mutex);
+ 
++	/* stop sentinel checking */
+ 	dp_del_event(dp_display, EV_CONNECT_PENDING_TIMEOUT);
+ 
+ 	rc = dp_display_set_mode(dp, &dp_display->dp_mode);
+@@ -1429,7 +1474,7 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
+ 
+ 	state =  dp_display->hpd_state;
+ 
+-	if (state == ST_SUSPEND_PENDING)
++	if (state == ST_DISPLAY_OFF)
+ 		dp_display_host_init(dp_display);
+ 
+ 	dp_display_enable(dp_display, 0);
+@@ -1441,7 +1486,8 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
+ 		dp_display_unprepare(dp);
+ 	}
+ 
+-	if (state == ST_SUSPEND_PENDING)
++	/* manual kick off plug event to train link */
++	if (state == ST_DISPLAY_OFF)
+ 		dp_add_event(dp_display, EV_IRQ_HPD_INT, 0, 0);
+ 
+ 	/* completed connection */
+@@ -1473,6 +1519,7 @@ int msm_dp_display_disable(struct msm_dp *dp, struct drm_encoder *encoder)
+ 
+ 	mutex_lock(&dp_display->event_mutex);
+ 
++	/* stop sentinel checking */
+ 	dp_del_event(dp_display, EV_DISCONNECT_PENDING_TIMEOUT);
+ 
+ 	dp_display_disable(dp_display, 0);
+@@ -1486,7 +1533,7 @@ int msm_dp_display_disable(struct msm_dp *dp, struct drm_encoder *encoder)
+ 		/* completed disconnection */
+ 		dp_display->hpd_state = ST_DISCONNECTED;
+ 	} else {
+-		dp_display->hpd_state = ST_SUSPEND_PENDING;
++		dp_display->hpd_state = ST_DISPLAY_OFF;
+ 	}
+ 
+ 	mutex_unlock(&dp_display->event_mutex);
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 2768d1d306f00..4e8a19114e87d 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -196,6 +196,11 @@ int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
+ 					      &panel->aux->ddc);
+ 	if (!dp_panel->edid) {
+ 		DRM_ERROR("panel edid read failed\n");
++		/* check edid read fail is due to unplug */
++		if (!dp_catalog_link_is_connected(panel->catalog)) {
++			rc = -ETIMEDOUT;
++			goto end;
++		}
+ 
+ 		/* fail safe edid */
+ 		mutex_lock(&connector->dev->mode_config.mutex);
+diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
+index 22ac7c692a81d..ecab6287c1c39 100644
+--- a/drivers/gpu/drm/msm/msm_iommu.c
++++ b/drivers/gpu/drm/msm/msm_iommu.c
+@@ -58,7 +58,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
+ 	u64 addr = iova;
+ 	unsigned int i;
+ 
+-	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
++	for_each_sgtable_sg(sgt, sg, i) {
+ 		size_t size = sg->length;
+ 		phys_addr_t phys = sg_phys(sg);
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun4i_drv.c b/drivers/gpu/drm/sun4i/sun4i_drv.c
+index 29861fc81b35f..c5912fd537729 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_drv.c
++++ b/drivers/gpu/drm/sun4i/sun4i_drv.c
+@@ -71,7 +71,6 @@ static int sun4i_drv_bind(struct device *dev)
+ 		goto free_drm;
+ 	}
+ 
+-	dev_set_drvdata(dev, drm);
+ 	drm->dev_private = drv;
+ 	INIT_LIST_HEAD(&drv->frontend_list);
+ 	INIT_LIST_HEAD(&drv->engine_list);
+@@ -112,6 +111,8 @@ static int sun4i_drv_bind(struct device *dev)
+ 
+ 	drm_fbdev_generic_setup(drm, 32);
+ 
++	dev_set_drvdata(dev, drm);
++
+ 	return 0;
+ 
+ finish_poll:
+@@ -128,6 +129,7 @@ static void sun4i_drv_unbind(struct device *dev)
+ {
+ 	struct drm_device *drm = dev_get_drvdata(dev);
+ 
++	dev_set_drvdata(dev, NULL);
+ 	drm_dev_unregister(drm);
+ 	drm_kms_helper_poll_fini(drm);
+ 	drm_atomic_helper_shutdown(drm);
+diff --git a/drivers/iio/accel/bma180.c b/drivers/iio/accel/bma180.c
+index da56488182d07..6aa5a72c89b2b 100644
+--- a/drivers/iio/accel/bma180.c
++++ b/drivers/iio/accel/bma180.c
+@@ -1068,11 +1068,12 @@ static int bma180_probe(struct i2c_client *client,
+ 		data->trig->dev.parent = dev;
+ 		data->trig->ops = &bma180_trigger_ops;
+ 		iio_trigger_set_drvdata(data->trig, indio_dev);
+-		indio_dev->trig = iio_trigger_get(data->trig);
+ 
+ 		ret = iio_trigger_register(data->trig);
+ 		if (ret)
+ 			goto err_trigger_free;
++
++		indio_dev->trig = iio_trigger_get(data->trig);
+ 	}
+ 
+ 	ret = iio_triggered_buffer_setup(indio_dev, NULL,
+diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c
+index e7e2802827740..b12e804647063 100644
+--- a/drivers/iio/accel/mma8452.c
++++ b/drivers/iio/accel/mma8452.c
+@@ -1496,10 +1496,14 @@ static int mma8452_reset(struct i2c_client *client)
+ 	int i;
+ 	int ret;
+ 
+-	ret = i2c_smbus_write_byte_data(client,	MMA8452_CTRL_REG2,
++	/*
++	 * Find on fxls8471, after config reset bit, it reset immediately,
++	 * and will not give ACK, so here do not check the return value.
++	 * The following code will read the reset register, and check whether
++	 * this reset works.
++	 */
++	i2c_smbus_write_byte_data(client, MMA8452_CTRL_REG2,
+ 					MMA8452_CTRL_REG2_RST);
+-	if (ret < 0)
+-		return ret;
+ 
+ 	for (i = 0; i < 10; i++) {
+ 		usleep_range(100, 200);
+@@ -1542,11 +1546,13 @@ static int mma8452_probe(struct i2c_client *client,
+ 	mutex_init(&data->lock);
+ 
+ 	data->chip_info = device_get_match_data(&client->dev);
+-	if (!data->chip_info && id) {
+-		data->chip_info = &mma_chip_info_table[id->driver_data];
+-	} else {
+-		dev_err(&client->dev, "unknown device model\n");
+-		return -ENODEV;
++	if (!data->chip_info) {
++		if (id) {
++			data->chip_info = &mma_chip_info_table[id->driver_data];
++		} else {
++			dev_err(&client->dev, "unknown device model\n");
++			return -ENODEV;
++		}
+ 	}
+ 
+ 	data->vdd_reg = devm_regulator_get(&client->dev, "vdd");
+diff --git a/drivers/iio/accel/mxc4005.c b/drivers/iio/accel/mxc4005.c
+index 5a2b0ffbb145d..ecd9d8ad59288 100644
+--- a/drivers/iio/accel/mxc4005.c
++++ b/drivers/iio/accel/mxc4005.c
+@@ -461,8 +461,6 @@ static int mxc4005_probe(struct i2c_client *client,
+ 		data->dready_trig->dev.parent = &client->dev;
+ 		data->dready_trig->ops = &mxc4005_trigger_ops;
+ 		iio_trigger_set_drvdata(data->dready_trig, indio_dev);
+-		indio_dev->trig = data->dready_trig;
+-		iio_trigger_get(indio_dev->trig);
+ 		ret = devm_iio_trigger_register(&client->dev,
+ 						data->dready_trig);
+ 		if (ret) {
+@@ -470,6 +468,8 @@ static int mxc4005_probe(struct i2c_client *client,
+ 				"failed to register trigger\n");
+ 			return ret;
+ 		}
++
++		indio_dev->trig = iio_trigger_get(data->dready_trig);
+ 	}
+ 
+ 	return devm_iio_device_register(&client->dev, indio_dev);
+diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
+index 9109da2d2e15f..cbe1011a2408a 100644
+--- a/drivers/iio/adc/adi-axi-adc.c
++++ b/drivers/iio/adc/adi-axi-adc.c
+@@ -334,16 +334,19 @@ static struct adi_axi_adc_client *adi_axi_adc_attach_client(struct device *dev)
+ 
+ 		if (!try_module_get(cl->dev->driver->owner)) {
+ 			mutex_unlock(&registered_clients_lock);
++			of_node_put(cln);
+ 			return ERR_PTR(-ENODEV);
+ 		}
+ 
+ 		get_device(cl->dev);
+ 		cl->info = info;
+ 		mutex_unlock(&registered_clients_lock);
++		of_node_put(cln);
+ 		return cl;
+ 	}
+ 
+ 	mutex_unlock(&registered_clients_lock);
++	of_node_put(cln);
+ 
+ 	return ERR_PTR(-EPROBE_DEFER);
+ }
+diff --git a/drivers/iio/adc/axp288_adc.c b/drivers/iio/adc/axp288_adc.c
+index 5f5e8b39e4d22..84dbe9e2f0eff 100644
+--- a/drivers/iio/adc/axp288_adc.c
++++ b/drivers/iio/adc/axp288_adc.c
+@@ -196,6 +196,14 @@ static const struct dmi_system_id axp288_adc_ts_bias_override[] = {
+ 		},
+ 		.driver_data = (void *)(uintptr_t)AXP288_ADC_TS_BIAS_80UA,
+ 	},
++	{
++		/* Nuvision Solo 10 Draw */
++		.matches = {
++		  DMI_MATCH(DMI_SYS_VENDOR, "TMAX"),
++		  DMI_MATCH(DMI_PRODUCT_NAME, "TM101W610L"),
++		},
++		.driver_data = (void *)(uintptr_t)AXP288_ADC_TS_BIAS_80UA,
++	},
+ 	{}
+ };
+ 
+diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c
+index a83199b212a43..20fc867e39986 100644
+--- a/drivers/iio/adc/stm32-adc-core.c
++++ b/drivers/iio/adc/stm32-adc-core.c
+@@ -64,6 +64,7 @@ struct stm32_adc_priv;
+  * @max_clk_rate_hz: maximum analog clock rate (Hz, from datasheet)
+  * @has_syscfg: SYSCFG capability flags
+  * @num_irqs:	number of interrupt lines
++ * @num_adcs:   maximum number of ADC instances in the common registers
+  */
+ struct stm32_adc_priv_cfg {
+ 	const struct stm32_adc_common_regs *regs;
+@@ -71,6 +72,7 @@ struct stm32_adc_priv_cfg {
+ 	u32 max_clk_rate_hz;
+ 	unsigned int has_syscfg;
+ 	unsigned int num_irqs;
++	unsigned int num_adcs;
+ };
+ 
+ /**
+@@ -333,7 +335,7 @@ static void stm32_adc_irq_handler(struct irq_desc *desc)
+ 	 * before invoking the interrupt handler (e.g. call ISR only for
+ 	 * IRQ-enabled ADCs).
+ 	 */
+-	for (i = 0; i < priv->cfg->num_irqs; i++) {
++	for (i = 0; i < priv->cfg->num_adcs; i++) {
+ 		if ((status & priv->cfg->regs->eoc_msk[i] &&
+ 		     stm32_adc_eoc_enabled(priv, i)) ||
+ 		     (status & priv->cfg->regs->ovr_msk[i]))
+@@ -784,6 +786,7 @@ static const struct stm32_adc_priv_cfg stm32f4_adc_priv_cfg = {
+ 	.clk_sel = stm32f4_adc_clk_sel,
+ 	.max_clk_rate_hz = 36000000,
+ 	.num_irqs = 1,
++	.num_adcs = 3,
+ };
+ 
+ static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = {
+@@ -792,14 +795,16 @@ static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = {
+ 	.max_clk_rate_hz = 36000000,
+ 	.has_syscfg = HAS_VBOOSTER,
+ 	.num_irqs = 1,
++	.num_adcs = 2,
+ };
+ 
+ static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = {
+ 	.regs = &stm32h7_adc_common_regs,
+ 	.clk_sel = stm32h7_adc_clk_sel,
+-	.max_clk_rate_hz = 40000000,
++	.max_clk_rate_hz = 36000000,
+ 	.has_syscfg = HAS_VBOOSTER | HAS_ANASWVDD,
+ 	.num_irqs = 2,
++	.num_adcs = 2,
+ };
+ 
+ static const struct of_device_id stm32_adc_of_match[] = {
+diff --git a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c
+index 9939dee017433..e60ad48196ff8 100644
+--- a/drivers/iio/adc/stm32-adc.c
++++ b/drivers/iio/adc/stm32-adc.c
+@@ -1265,7 +1265,6 @@ static irqreturn_t stm32_adc_threaded_isr(int irq, void *data)
+ 	struct stm32_adc *adc = iio_priv(indio_dev);
+ 	const struct stm32_adc_regspec *regs = adc->cfg->regs;
+ 	u32 status = stm32_adc_readl(adc, regs->isr_eoc.reg);
+-	u32 mask = stm32_adc_readl(adc, regs->ier_eoc.reg);
+ 
+ 	/* Check ovr status right now, as ovr mask should be already disabled */
+ 	if (status & regs->isr_ovr.mask) {
+@@ -1280,11 +1279,6 @@ static irqreturn_t stm32_adc_threaded_isr(int irq, void *data)
+ 		return IRQ_HANDLED;
+ 	}
+ 
+-	if (!(status & mask))
+-		dev_err_ratelimited(&indio_dev->dev,
+-				    "Unexpected IRQ: IER=0x%08x, ISR=0x%08x\n",
+-				    mask, status);
+-
+ 	return IRQ_NONE;
+ }
+ 
+@@ -1294,10 +1288,6 @@ static irqreturn_t stm32_adc_isr(int irq, void *data)
+ 	struct stm32_adc *adc = iio_priv(indio_dev);
+ 	const struct stm32_adc_regspec *regs = adc->cfg->regs;
+ 	u32 status = stm32_adc_readl(adc, regs->isr_eoc.reg);
+-	u32 mask = stm32_adc_readl(adc, regs->ier_eoc.reg);
+-
+-	if (!(status & mask))
+-		return IRQ_WAKE_THREAD;
+ 
+ 	if (status & regs->isr_ovr.mask) {
+ 		/*
+diff --git a/drivers/iio/chemical/ccs811.c b/drivers/iio/chemical/ccs811.c
+index 60dd87e96f5f8..384d167b4fd65 100644
+--- a/drivers/iio/chemical/ccs811.c
++++ b/drivers/iio/chemical/ccs811.c
+@@ -500,11 +500,11 @@ static int ccs811_probe(struct i2c_client *client,
+ 		data->drdy_trig->dev.parent = &client->dev;
+ 		data->drdy_trig->ops = &ccs811_trigger_ops;
+ 		iio_trigger_set_drvdata(data->drdy_trig, indio_dev);
+-		indio_dev->trig = data->drdy_trig;
+-		iio_trigger_get(indio_dev->trig);
+ 		ret = iio_trigger_register(data->drdy_trig);
+ 		if (ret)
+ 			goto err_poweroff;
++
++		indio_dev->trig = iio_trigger_get(data->drdy_trig);
+ 	}
+ 
+ 	ret = iio_triggered_buffer_setup(indio_dev, NULL,
+diff --git a/drivers/iio/gyro/mpu3050-core.c b/drivers/iio/gyro/mpu3050-core.c
+index 39e1c4306c474..84c6ad4bcccba 100644
+--- a/drivers/iio/gyro/mpu3050-core.c
++++ b/drivers/iio/gyro/mpu3050-core.c
+@@ -872,6 +872,7 @@ static int mpu3050_power_up(struct mpu3050 *mpu3050)
+ 	ret = regmap_update_bits(mpu3050->map, MPU3050_PWR_MGM,
+ 				 MPU3050_PWR_MGM_SLEEP, 0);
+ 	if (ret) {
++		regulator_bulk_disable(ARRAY_SIZE(mpu3050->regs), mpu3050->regs);
+ 		dev_err(mpu3050->dev, "error setting power mode\n");
+ 		return ret;
+ 	}
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600.h b/drivers/iio/imu/inv_icm42600/inv_icm42600.h
+index c0f5059b13b31..995a9dc06521d 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600.h
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600.h
+@@ -17,6 +17,7 @@
+ #include "inv_icm42600_buffer.h"
+ 
+ enum inv_icm42600_chip {
++	INV_CHIP_INVALID,
+ 	INV_CHIP_ICM42600,
+ 	INV_CHIP_ICM42602,
+ 	INV_CHIP_ICM42605,
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
+index 8bd77185ccb71..dcbd4e9288519 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
+@@ -565,7 +565,7 @@ int inv_icm42600_core_probe(struct regmap *regmap, int chip, int irq,
+ 	bool open_drain;
+ 	int ret;
+ 
+-	if (chip < 0 || chip >= INV_CHIP_NB) {
++	if (chip <= INV_CHIP_INVALID || chip >= INV_CHIP_NB) {
+ 		dev_err(dev, "invalid chip = %d\n", chip);
+ 		return -ENODEV;
+ 	}
+diff --git a/drivers/iio/trigger/iio-trig-sysfs.c b/drivers/iio/trigger/iio-trig-sysfs.c
+index e09e58072872c..2277d6336ac06 100644
+--- a/drivers/iio/trigger/iio-trig-sysfs.c
++++ b/drivers/iio/trigger/iio-trig-sysfs.c
+@@ -196,6 +196,7 @@ static int iio_sysfs_trigger_remove(int id)
+ 	}
+ 
+ 	iio_trigger_unregister(t->trig);
++	irq_work_sync(&t->work);
+ 	iio_trigger_free(t->trig);
+ 
+ 	list_del(&t->l);
+diff --git a/drivers/md/dm-era-target.c b/drivers/md/dm-era-target.c
+index d9ac7372108c9..96bad057bea2f 100644
+--- a/drivers/md/dm-era-target.c
++++ b/drivers/md/dm-era-target.c
+@@ -1396,7 +1396,7 @@ static void start_worker(struct era *era)
+ static void stop_worker(struct era *era)
+ {
+ 	atomic_set(&era->suspended, 1);
+-	flush_workqueue(era->wq);
++	drain_workqueue(era->wq);
+ }
+ 
+ /*----------------------------------------------------------------
+@@ -1566,6 +1566,12 @@ static void era_postsuspend(struct dm_target *ti)
+ 	}
+ 
+ 	stop_worker(era);
++
++	r = metadata_commit(era->md);
++	if (r) {
++		DMERR("%s: metadata_commit failed", __func__);
++		/* FIXME: fail mode */
++	}
+ }
+ 
+ static int era_preresume(struct dm_target *ti)
+diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c
+index 8b15f53cbdd95..fe3a9473f3387 100644
+--- a/drivers/md/dm-log.c
++++ b/drivers/md/dm-log.c
+@@ -615,7 +615,7 @@ static int disk_resume(struct dm_dirty_log *log)
+ 			log_clear_bit(lc, lc->clean_bits, i);
+ 
+ 	/* clear any old bits -- device has shrunk */
+-	for (i = lc->region_count; i % (sizeof(*lc->clean_bits) << BYTE_SHIFT); i++)
++	for (i = lc->region_count; i % BITS_PER_LONG; i++)
+ 		log_clear_bit(lc, lc->clean_bits, i);
+ 
+ 	/* copy clean across to sync */
+diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c
+index 049a1356f7dd4..5cec30d2dc7db 100644
+--- a/drivers/memory/samsung/exynos5422-dmc.c
++++ b/drivers/memory/samsung/exynos5422-dmc.c
+@@ -1192,33 +1192,39 @@ static int of_get_dram_timings(struct exynos5_dmc *dmc)
+ 
+ 	dmc->timing_row = devm_kmalloc_array(dmc->dev, TIMING_COUNT,
+ 					     sizeof(u32), GFP_KERNEL);
+-	if (!dmc->timing_row)
+-		return -ENOMEM;
++	if (!dmc->timing_row) {
++		ret = -ENOMEM;
++		goto put_node;
++	}
+ 
+ 	dmc->timing_data = devm_kmalloc_array(dmc->dev, TIMING_COUNT,
+ 					      sizeof(u32), GFP_KERNEL);
+-	if (!dmc->timing_data)
+-		return -ENOMEM;
++	if (!dmc->timing_data) {
++		ret = -ENOMEM;
++		goto put_node;
++	}
+ 
+ 	dmc->timing_power = devm_kmalloc_array(dmc->dev, TIMING_COUNT,
+ 					       sizeof(u32), GFP_KERNEL);
+-	if (!dmc->timing_power)
+-		return -ENOMEM;
++	if (!dmc->timing_power) {
++		ret = -ENOMEM;
++		goto put_node;
++	}
+ 
+ 	dmc->timings = of_lpddr3_get_ddr_timings(np_ddr, dmc->dev,
+ 						 DDR_TYPE_LPDDR3,
+ 						 &dmc->timings_arr_size);
+ 	if (!dmc->timings) {
+-		of_node_put(np_ddr);
+ 		dev_warn(dmc->dev, "could not get timings from DT\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_node;
+ 	}
+ 
+ 	dmc->min_tck = of_lpddr3_get_min_tck(np_ddr, dmc->dev);
+ 	if (!dmc->min_tck) {
+-		of_node_put(np_ddr);
+ 		dev_warn(dmc->dev, "could not get tck from DT\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_node;
+ 	}
+ 
+ 	/* Sorted array of OPPs with frequency ascending */
+@@ -1232,13 +1238,14 @@ static int of_get_dram_timings(struct exynos5_dmc *dmc)
+ 					     clk_period_ps);
+ 	}
+ 
+-	of_node_put(np_ddr);
+ 
+ 	/* Take the highest frequency's timings as 'bypass' */
+ 	dmc->bypass_timing_row = dmc->timing_row[idx - 1];
+ 	dmc->bypass_timing_data = dmc->timing_data[idx - 1];
+ 	dmc->bypass_timing_power = dmc->timing_power[idx - 1];
+ 
++put_node:
++	of_node_put(np_ddr);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
+index 94e3f72f6405d..8c357e3b78d7c 100644
+--- a/drivers/mmc/host/sdhci-pci-o2micro.c
++++ b/drivers/mmc/host/sdhci-pci-o2micro.c
+@@ -147,6 +147,8 @@ static int sdhci_o2_get_cd(struct mmc_host *mmc)
+ 
+ 	if (!(sdhci_readw(host, O2_PLL_DLL_WDT_CONTROL1) & O2_PLL_LOCK_STATUS))
+ 		sdhci_o2_enable_internal_clock(host);
++	else
++		sdhci_o2_wait_card_detect_stable(host);
+ 
+ 	return !!(sdhci_readl(host, SDHCI_PRESENT_STATE) & SDHCI_CARD_PRESENT);
+ }
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index 92e8ca56f5665..8d096ca770b04 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -683,7 +683,7 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
+ 	hw->timing0 = BF_GPMI_TIMING0_ADDRESS_SETUP(addr_setup_cycles) |
+ 		      BF_GPMI_TIMING0_DATA_HOLD(data_hold_cycles) |
+ 		      BF_GPMI_TIMING0_DATA_SETUP(data_setup_cycles);
+-	hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(busy_timeout_cycles * 4096);
++	hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(DIV_ROUND_UP(busy_timeout_cycles, 4096));
+ 
+ 	/*
+ 	 * Derive NFC ideal delay from {3}:
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index cbeb69bca0bba..9c4b45341fd28 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -3368,9 +3368,11 @@ re_arm:
+ 		if (!rtnl_trylock())
+ 			return;
+ 
+-		if (should_notify_peers)
++		if (should_notify_peers) {
++			bond->send_peer_notif--;
+ 			call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,
+ 						 bond->dev);
++		}
+ 		if (should_notify_rtnl) {
+ 			bond_slave_state_notify(bond);
+ 			bond_slave_link_notify(bond);
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 421fc707f80af..060897eb9cabe 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -2174,6 +2174,42 @@ ice_setup_autoneg(struct ice_port_info *p, struct ethtool_link_ksettings *ks,
+ 	return err;
+ }
+ 
++/**
++ * ice_set_phy_type_from_speed - set phy_types based on speeds
++ * and advertised modes
++ * @ks: ethtool link ksettings struct
++ * @phy_type_low: pointer to the lower part of phy_type
++ * @phy_type_high: pointer to the higher part of phy_type
++ * @adv_link_speed: targeted link speeds bitmap
++ */
++static void
++ice_set_phy_type_from_speed(const struct ethtool_link_ksettings *ks,
++			    u64 *phy_type_low, u64 *phy_type_high,
++			    u16 adv_link_speed)
++{
++	/* Handle 1000M speed in a special way because ice_update_phy_type
++	 * enables all link modes, but having mixed copper and optical
++	 * standards is not supported.
++	 */
++	adv_link_speed &= ~ICE_AQ_LINK_SPEED_1000MB;
++
++	if (ethtool_link_ksettings_test_link_mode(ks, advertising,
++						  1000baseT_Full))
++		*phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_T |
++				 ICE_PHY_TYPE_LOW_1G_SGMII;
++
++	if (ethtool_link_ksettings_test_link_mode(ks, advertising,
++						  1000baseKX_Full))
++		*phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_KX;
++
++	if (ethtool_link_ksettings_test_link_mode(ks, advertising,
++						  1000baseX_Full))
++		*phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_SX |
++				 ICE_PHY_TYPE_LOW_1000BASE_LX;
++
++	ice_update_phy_type(phy_type_low, phy_type_high, adv_link_speed);
++}
++
+ /**
+  * ice_set_link_ksettings - Set Speed and Duplex
+  * @netdev: network interface device structure
+@@ -2310,7 +2346,8 @@ ice_set_link_ksettings(struct net_device *netdev,
+ 		adv_link_speed = curr_link_speed;
+ 
+ 	/* Convert the advertise link speeds to their corresponded PHY_TYPE */
+-	ice_update_phy_type(&phy_type_low, &phy_type_high, adv_link_speed);
++	ice_set_phy_type_from_speed(ks, &phy_type_low, &phy_type_high,
++				    adv_link_speed);
+ 
+ 	if (!autoneg_changed && adv_link_speed == curr_link_speed) {
+ 		netdev_info(netdev, "Nothing changed, exiting without setting anything.\n");
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 5e67c9c119d2f..4e51f4bb58ffc 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -4813,8 +4813,11 @@ static void igb_clean_tx_ring(struct igb_ring *tx_ring)
+ 	while (i != tx_ring->next_to_use) {
+ 		union e1000_adv_tx_desc *eop_desc, *tx_desc;
+ 
+-		/* Free all the Tx ring sk_buffs */
+-		dev_kfree_skb_any(tx_buffer->skb);
++		/* Free all the Tx ring sk_buffs or xdp frames */
++		if (tx_buffer->type == IGB_TYPE_SKB)
++			dev_kfree_skb_any(tx_buffer->skb);
++		else
++			xdp_return_frame(tx_buffer->xdpf);
+ 
+ 		/* unmap skb header data */
+ 		dma_unmap_single(tx_ring->dev,
+@@ -9826,11 +9829,10 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba)
+ 	struct e1000_hw *hw = &adapter->hw;
+ 	u32 dmac_thr;
+ 	u16 hwm;
++	u32 reg;
+ 
+ 	if (hw->mac.type > e1000_82580) {
+ 		if (adapter->flags & IGB_FLAG_DMAC) {
+-			u32 reg;
+-
+ 			/* force threshold to 0. */
+ 			wr32(E1000_DMCTXTH, 0);
+ 
+@@ -9863,7 +9865,6 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba)
+ 			/* Disable BMC-to-OS Watchdog Enable */
+ 			if (hw->mac.type != e1000_i354)
+ 				reg &= ~E1000_DMACR_DC_BMC2OSW_EN;
+-
+ 			wr32(E1000_DMACR, reg);
+ 
+ 			/* no lower threshold to disable
+@@ -9880,12 +9881,12 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba)
+ 			 */
+ 			wr32(E1000_DMCTXTH, (IGB_MIN_TXPBSIZE -
+ 			     (IGB_TX_BUF_4096 + adapter->max_frame_size)) >> 6);
++		}
+ 
+-			/* make low power state decision controlled
+-			 * by DMA coal
+-			 */
++		if (hw->mac.type >= e1000_i210 ||
++		    (adapter->flags & IGB_FLAG_DMAC)) {
+ 			reg = rd32(E1000_PCIEMISC);
+-			reg &= ~E1000_PCIEMISC_LX_DECISION;
++			reg |= E1000_PCIEMISC_LX_DECISION;
+ 			wr32(E1000_PCIEMISC, reg);
+ 		} /* endif adapter->dmac is not disabled */
+ 	} else if (hw->mac.type == e1000_82580) {
+diff --git a/drivers/net/phy/aquantia_main.c b/drivers/net/phy/aquantia_main.c
+index 41e7c1432497a..75a62d1cc7375 100644
+--- a/drivers/net/phy/aquantia_main.c
++++ b/drivers/net/phy/aquantia_main.c
+@@ -34,6 +34,8 @@
+ #define MDIO_AN_VEND_PROV			0xc400
+ #define MDIO_AN_VEND_PROV_1000BASET_FULL	BIT(15)
+ #define MDIO_AN_VEND_PROV_1000BASET_HALF	BIT(14)
++#define MDIO_AN_VEND_PROV_5000BASET_FULL	BIT(11)
++#define MDIO_AN_VEND_PROV_2500BASET_FULL	BIT(10)
+ #define MDIO_AN_VEND_PROV_DOWNSHIFT_EN		BIT(4)
+ #define MDIO_AN_VEND_PROV_DOWNSHIFT_MASK	GENMASK(3, 0)
+ #define MDIO_AN_VEND_PROV_DOWNSHIFT_DFLT	4
+@@ -230,9 +232,20 @@ static int aqr_config_aneg(struct phy_device *phydev)
+ 			      phydev->advertising))
+ 		reg |= MDIO_AN_VEND_PROV_1000BASET_HALF;
+ 
++	/* Handle the case when the 2.5G and 5G speeds are not advertised */
++	if (linkmode_test_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT,
++			      phydev->advertising))
++		reg |= MDIO_AN_VEND_PROV_2500BASET_FULL;
++
++	if (linkmode_test_bit(ETHTOOL_LINK_MODE_5000baseT_Full_BIT,
++			      phydev->advertising))
++		reg |= MDIO_AN_VEND_PROV_5000BASET_FULL;
++
+ 	ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_VEND_PROV,
+ 				     MDIO_AN_VEND_PROV_1000BASET_HALF |
+-				     MDIO_AN_VEND_PROV_1000BASET_FULL, reg);
++				     MDIO_AN_VEND_PROV_1000BASET_FULL |
++				     MDIO_AN_VEND_PROV_2500BASET_FULL |
++				     MDIO_AN_VEND_PROV_5000BASET_FULL, reg);
+ 	if (ret < 0)
+ 		return ret;
+ 	if (ret > 0)
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index cbe47eed7cc3c..ad9064df3debb 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -2366,7 +2366,6 @@ static const struct ethtool_ops virtnet_ethtool_ops = {
+ static void virtnet_freeze_down(struct virtio_device *vdev)
+ {
+ 	struct virtnet_info *vi = vdev->priv;
+-	int i;
+ 
+ 	/* Make sure no work handler is accessing the device */
+ 	flush_work(&vi->config_work);
+@@ -2374,14 +2373,8 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
+ 	netif_tx_lock_bh(vi->dev);
+ 	netif_device_detach(vi->dev);
+ 	netif_tx_unlock_bh(vi->dev);
+-	cancel_delayed_work_sync(&vi->refill);
+-
+-	if (netif_running(vi->dev)) {
+-		for (i = 0; i < vi->max_queue_pairs; i++) {
+-			napi_disable(&vi->rq[i].napi);
+-			virtnet_napi_tx_disable(&vi->sq[i].napi);
+-		}
+-	}
++	if (netif_running(vi->dev))
++		virtnet_close(vi->dev);
+ }
+ 
+ static int init_vqs(struct virtnet_info *vi);
+@@ -2389,7 +2382,7 @@ static int init_vqs(struct virtnet_info *vi);
+ static int virtnet_restore_up(struct virtio_device *vdev)
+ {
+ 	struct virtnet_info *vi = vdev->priv;
+-	int err, i;
++	int err;
+ 
+ 	err = init_vqs(vi);
+ 	if (err)
+@@ -2398,15 +2391,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
+ 	virtio_device_ready(vdev);
+ 
+ 	if (netif_running(vi->dev)) {
+-		for (i = 0; i < vi->curr_queue_pairs; i++)
+-			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
+-				schedule_delayed_work(&vi->refill, 0);
+-
+-		for (i = 0; i < vi->max_queue_pairs; i++) {
+-			virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
+-			virtnet_napi_tx_enable(vi, vi->sq[i].vq,
+-					       &vi->sq[i].napi);
+-		}
++		err = virtnet_open(vi->dev);
++		if (err)
++			return err;
+ 	}
+ 
+ 	netif_tx_lock_bh(vi->dev);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 0aa68da51ed70..af2902d70b196 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -531,36 +531,54 @@ EXPORT_SYMBOL_NS_GPL(nvme_put_ns, NVME_TARGET_PASSTHRU);
+ 
+ static inline void nvme_clear_nvme_request(struct request *req)
+ {
+-	if (!(req->rq_flags & RQF_DONTPREP)) {
+-		nvme_req(req)->retries = 0;
+-		nvme_req(req)->flags = 0;
+-		req->rq_flags |= RQF_DONTPREP;
+-	}
++	nvme_req(req)->retries = 0;
++	nvme_req(req)->flags = 0;
++	req->rq_flags |= RQF_DONTPREP;
+ }
+ 
+-struct request *nvme_alloc_request(struct request_queue *q,
+-		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
++static inline unsigned int nvme_req_op(struct nvme_command *cmd)
+ {
+-	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
+-	struct request *req;
++	return nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
++}
+ 
+-	if (qid == NVME_QID_ANY) {
+-		req = blk_mq_alloc_request(q, op, flags);
+-	} else {
+-		req = blk_mq_alloc_request_hctx(q, op, flags,
+-				qid ? qid - 1 : 0);
+-	}
+-	if (IS_ERR(req))
+-		return req;
++static inline void nvme_init_request(struct request *req,
++		struct nvme_command *cmd)
++{
++	if (req->q->queuedata)
++		req->timeout = NVME_IO_TIMEOUT;
++	else /* no queuedata implies admin queue */
++		req->timeout = ADMIN_TIMEOUT;
+ 
+ 	req->cmd_flags |= REQ_FAILFAST_DRIVER;
+ 	nvme_clear_nvme_request(req);
+ 	nvme_req(req)->cmd = cmd;
++}
++
++struct request *nvme_alloc_request(struct request_queue *q,
++		struct nvme_command *cmd, blk_mq_req_flags_t flags)
++{
++	struct request *req;
+ 
++	req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
++	if (!IS_ERR(req))
++		nvme_init_request(req, cmd);
+ 	return req;
+ }
+ EXPORT_SYMBOL_GPL(nvme_alloc_request);
+ 
++struct request *nvme_alloc_request_qid(struct request_queue *q,
++		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
++{
++	struct request *req;
++
++	req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
++			qid ? qid - 1 : 0);
++	if (!IS_ERR(req))
++		nvme_init_request(req, cmd);
++	return req;
++}
++EXPORT_SYMBOL_GPL(nvme_alloc_request_qid);
++
+ static int nvme_toggle_streams(struct nvme_ctrl *ctrl, bool enable)
+ {
+ 	struct nvme_command c;
+@@ -663,7 +681,7 @@ static void nvme_assign_write_stream(struct nvme_ctrl *ctrl,
+ 		req->q->write_hints[streamid] += blk_rq_bytes(req) >> 9;
+ }
+ 
+-static void nvme_setup_passthrough(struct request *req,
++static inline void nvme_setup_passthrough(struct request *req,
+ 		struct nvme_command *cmd)
+ {
+ 	memcpy(cmd, nvme_req(req)->cmd, sizeof(*cmd));
+@@ -834,7 +852,8 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
+ 	struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;
+ 	blk_status_t ret = BLK_STS_OK;
+ 
+-	nvme_clear_nvme_request(req);
++	if (!(req->rq_flags & RQF_DONTPREP))
++		nvme_clear_nvme_request(req);
+ 
+ 	memset(cmd, 0, sizeof(*cmd));
+ 	switch (req_op(req)) {
+@@ -923,11 +942,15 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
+ 	struct request *req;
+ 	int ret;
+ 
+-	req = nvme_alloc_request(q, cmd, flags, qid);
++	if (qid == NVME_QID_ANY)
++		req = nvme_alloc_request(q, cmd, flags);
++	else
++		req = nvme_alloc_request_qid(q, cmd, flags, qid);
+ 	if (IS_ERR(req))
+ 		return PTR_ERR(req);
+ 
+-	req->timeout = timeout ? timeout : ADMIN_TIMEOUT;
++	if (timeout)
++		req->timeout = timeout;
+ 
+ 	if (buffer && bufflen) {
+ 		ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL);
+@@ -1093,11 +1116,12 @@ static int nvme_submit_user_cmd(struct request_queue *q,
+ 	void *meta = NULL;
+ 	int ret;
+ 
+-	req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY);
++	req = nvme_alloc_request(q, cmd, 0);
+ 	if (IS_ERR(req))
+ 		return PTR_ERR(req);
+ 
+-	req->timeout = timeout ? timeout : ADMIN_TIMEOUT;
++	if (timeout)
++		req->timeout = timeout;
+ 	nvme_req(req)->flags |= NVME_REQ_USERCMD;
+ 
+ 	if (ubuffer && bufflen) {
+@@ -1167,8 +1191,8 @@ static int nvme_keep_alive(struct nvme_ctrl *ctrl)
+ {
+ 	struct request *rq;
+ 
+-	rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd, BLK_MQ_REQ_RESERVED,
+-			NVME_QID_ANY);
++	rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd,
++			BLK_MQ_REQ_RESERVED);
+ 	if (IS_ERR(rq))
+ 		return PTR_ERR(rq);
+ 
+@@ -2675,6 +2699,34 @@ static const struct nvme_core_quirk_entry core_quirks[] = {
+ 		.vid = 0x14a4,
+ 		.fr = "22301111",
+ 		.quirks = NVME_QUIRK_SIMPLE_SUSPEND,
++	},
++	{
++		/*
++		 * This Kioxia CD6-V Series / HPE PE8030 device times out and
++		 * aborts I/O during any load, but more easily reproducible
++		 * with discards (fstrim).
++		 *
++		 * The device is left in a state where it is also not possible
++		 * to use "nvme set-feature" to disable APST, but booting with
++		 * nvme_core.default_ps_max_latency=0 works.
++		 */
++		.vid = 0x1e0f,
++		.mn = "KCD6XVUL6T40",
++		.quirks = NVME_QUIRK_NO_APST,
++	},
++	{
++		/*
++		 * The external Samsung X5 SSD fails initialization without a
++		 * delay before checking if it is ready and has a whole set of
++		 * other problems.  To make this even more interesting, it
++		 * shares the PCI ID with internal Samsung 970 Evo Plus that
++		 * does not need or want these quirks.
++		 */
++		.vid = 0x144d,
++		.mn = "Samsung Portable SSD X5",
++		.quirks = NVME_QUIRK_DELAY_BEFORE_CHK_RDY |
++			  NVME_QUIRK_NO_DEEPEST_PS |
++			  NVME_QUIRK_IGNORE_DEV_SUBNQN,
+ 	}
+ };
+ 
+diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
+index 8e562d0f2c301..470cef3abec3d 100644
+--- a/drivers/nvme/host/lightnvm.c
++++ b/drivers/nvme/host/lightnvm.c
+@@ -653,7 +653,7 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q,
+ 
+ 	nvme_nvm_rqtocmd(rqd, ns, cmd);
+ 
+-	rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0, NVME_QID_ANY);
++	rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0);
+ 	if (IS_ERR(rq))
+ 		return rq;
+ 
+@@ -767,14 +767,14 @@ static int nvme_nvm_submit_user_cmd(struct request_queue *q,
+ 	DECLARE_COMPLETION_ONSTACK(wait);
+ 	int ret = 0;
+ 
+-	rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0,
+-			NVME_QID_ANY);
++	rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0);
+ 	if (IS_ERR(rq)) {
+ 		ret = -ENOMEM;
+ 		goto err_cmd;
+ 	}
+ 
+-	rq->timeout = timeout ? timeout : ADMIN_TIMEOUT;
++	if (timeout)
++		rq->timeout = timeout;
+ 
+ 	if (ppa_buf && ppa_len) {
+ 		ppa_list = dma_pool_alloc(dev->dma_pool, GFP_KERNEL, &ppa_dma);
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 95b9657cabaf1..8e40a6306e53d 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -662,6 +662,8 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl);
+ 
+ #define NVME_QID_ANY -1
+ struct request *nvme_alloc_request(struct request_queue *q,
++		struct nvme_command *cmd, blk_mq_req_flags_t flags);
++struct request *nvme_alloc_request_qid(struct request_queue *q,
+ 		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid);
+ void nvme_cleanup_cmd(struct request *req);
+ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 7de24a10dd921..9e633f4dcec71 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -224,6 +224,7 @@ struct nvme_queue {
+  */
+ struct nvme_iod {
+ 	struct nvme_request req;
++	struct nvme_command cmd;
+ 	struct nvme_queue *nvmeq;
+ 	bool use_sgl;
+ 	int aborted;
+@@ -917,7 +918,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	struct nvme_dev *dev = nvmeq->dev;
+ 	struct request *req = bd->rq;
+ 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+-	struct nvme_command cmnd;
++	struct nvme_command *cmnd = &iod->cmd;
+ 	blk_status_t ret;
+ 
+ 	iod->aborted = 0;
+@@ -931,24 +932,24 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+ 		return BLK_STS_IOERR;
+ 
+-	ret = nvme_setup_cmd(ns, req, &cmnd);
++	ret = nvme_setup_cmd(ns, req, cmnd);
+ 	if (ret)
+ 		return ret;
+ 
+ 	if (blk_rq_nr_phys_segments(req)) {
+-		ret = nvme_map_data(dev, req, &cmnd);
++		ret = nvme_map_data(dev, req, cmnd);
+ 		if (ret)
+ 			goto out_free_cmd;
+ 	}
+ 
+ 	if (blk_integrity_rq(req)) {
+-		ret = nvme_map_metadata(dev, req, &cmnd);
++		ret = nvme_map_metadata(dev, req, cmnd);
+ 		if (ret)
+ 			goto out_unmap_data;
+ 	}
+ 
+ 	blk_mq_start_request(req);
+-	nvme_submit_cmd(nvmeq, &cmnd, bd->last);
++	nvme_submit_cmd(nvmeq, cmnd, bd->last);
+ 	return BLK_STS_OK;
+ out_unmap_data:
+ 	nvme_unmap_data(dev, req);
+@@ -1350,13 +1351,12 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
+ 		 req->tag, nvmeq->qid);
+ 
+ 	abort_req = nvme_alloc_request(dev->ctrl.admin_q, &cmd,
+-			BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
++			BLK_MQ_REQ_NOWAIT);
+ 	if (IS_ERR(abort_req)) {
+ 		atomic_inc(&dev->ctrl.abort_limit);
+ 		return BLK_EH_RESET_TIMER;
+ 	}
+ 
+-	abort_req->timeout = ADMIN_TIMEOUT;
+ 	abort_req->end_io_data = NULL;
+ 	blk_execute_rq_nowait(abort_req->q, NULL, abort_req, 0, abort_endio);
+ 
+@@ -2279,11 +2279,10 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
+ 	cmd.delete_queue.opcode = opcode;
+ 	cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid);
+ 
+-	req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
++	req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT);
+ 	if (IS_ERR(req))
+ 		return PTR_ERR(req);
+ 
+-	req->timeout = ADMIN_TIMEOUT;
+ 	req->end_io_data = nvmeq;
+ 
+ 	init_completion(&nvmeq->delete_done);
+@@ -3266,10 +3265,6 @@ static const struct pci_device_id nvme_id_table[] = {
+ 				NVME_QUIRK_128_BYTES_SQES |
+ 				NVME_QUIRK_SHARED_TAGS |
+ 				NVME_QUIRK_SKIP_CID_GEN },
+-	{ PCI_DEVICE(0x144d, 0xa808),   /* Samsung X5 */
+-		.driver_data =  NVME_QUIRK_DELAY_BEFORE_CHK_RDY|
+-				NVME_QUIRK_NO_DEEPEST_PS |
+-				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ 	{ PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
+ 	{ 0, }
+ };
+diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
+index 8ee94f0568983..d24251ece5023 100644
+--- a/drivers/nvme/target/passthru.c
++++ b/drivers/nvme/target/passthru.c
+@@ -244,7 +244,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
+ 		q = ns->queue;
+ 	}
+ 
+-	rq = nvme_alloc_request(q, req->cmd, 0, NVME_QID_ANY);
++	rq = nvme_alloc_request(q, req->cmd, 0);
+ 	if (IS_ERR(rq)) {
+ 		status = NVME_SC_INTERNAL;
+ 		goto out_put_ns;
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 6b00de6b6f0ef..5eb959b5f7010 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -2746,6 +2746,24 @@ static void zbc_open_zone(struct sdebug_dev_info *devip,
+ 	}
+ }
+ 
++static inline void zbc_set_zone_full(struct sdebug_dev_info *devip,
++				     struct sdeb_zone_state *zsp)
++{
++	switch (zsp->z_cond) {
++	case ZC2_IMPLICIT_OPEN:
++		devip->nr_imp_open--;
++		break;
++	case ZC3_EXPLICIT_OPEN:
++		devip->nr_exp_open--;
++		break;
++	default:
++		WARN_ONCE(true, "Invalid zone %llu condition %x\n",
++			  zsp->z_start, zsp->z_cond);
++		break;
++	}
++	zsp->z_cond = ZC5_FULL;
++}
++
+ static void zbc_inc_wp(struct sdebug_dev_info *devip,
+ 		       unsigned long long lba, unsigned int num)
+ {
+@@ -2758,7 +2776,7 @@ static void zbc_inc_wp(struct sdebug_dev_info *devip,
+ 	if (zsp->z_type == ZBC_ZONE_TYPE_SWR) {
+ 		zsp->z_wp += num;
+ 		if (zsp->z_wp >= zend)
+-			zsp->z_cond = ZC5_FULL;
++			zbc_set_zone_full(devip, zsp);
+ 		return;
+ 	}
+ 
+@@ -2777,7 +2795,7 @@ static void zbc_inc_wp(struct sdebug_dev_info *devip,
+ 			n = num;
+ 		}
+ 		if (zsp->z_wp >= zend)
+-			zsp->z_cond = ZC5_FULL;
++			zbc_set_zone_full(devip, zsp);
+ 
+ 		num -= n;
+ 		lba += n;
+diff --git a/drivers/soc/bcm/brcmstb/pm/pm-arm.c b/drivers/soc/bcm/brcmstb/pm/pm-arm.c
+index b1062334e6089..c6ec7d95bcfcc 100644
+--- a/drivers/soc/bcm/brcmstb/pm/pm-arm.c
++++ b/drivers/soc/bcm/brcmstb/pm/pm-arm.c
+@@ -780,6 +780,7 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	ret = brcmstb_init_sram(dn);
++	of_node_put(dn);
+ 	if (ret) {
+ 		pr_err("error setting up SRAM for PM\n");
+ 		return ret;
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index a7ee1171eeb3e..0a6336d54a650 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -4625,16 +4625,8 @@ static int con_font_get(struct vc_data *vc, struct console_font_op *op)
+ 
+ 	if (op->data && font.charcount > op->charcount)
+ 		rc = -ENOSPC;
+-	if (!(op->flags & KD_FONT_FLAG_OLD)) {
+-		if (font.width > op->width || font.height > op->height) 
+-			rc = -ENOSPC;
+-	} else {
+-		if (font.width != 8)
+-			rc = -EIO;
+-		else if ((op->height && font.height > op->height) ||
+-			 font.height > 32)
+-			rc = -ENOSPC;
+-	}
++	if (font.width > op->width || font.height > op->height)
++		rc = -ENOSPC;
+ 	if (rc)
+ 		goto out;
+ 
+@@ -4662,7 +4654,7 @@ static int con_font_set(struct vc_data *vc, struct console_font_op *op)
+ 		return -EINVAL;
+ 	if (op->charcount > 512)
+ 		return -EINVAL;
+-	if (op->width <= 0 || op->width > 32 || op->height > 32)
++	if (op->width <= 0 || op->width > 32 || !op->height || op->height > 32)
+ 		return -EINVAL;
+ 	size = (op->width+7)/8 * 32 * op->charcount;
+ 	if (size > max_font_size)
+@@ -4672,31 +4664,6 @@ static int con_font_set(struct vc_data *vc, struct console_font_op *op)
+ 	if (IS_ERR(font.data))
+ 		return PTR_ERR(font.data);
+ 
+-	if (!op->height) {		/* Need to guess font height [compat] */
+-		int h, i;
+-		u8 *charmap = font.data;
+-
+-		/*
+-		 * If from KDFONTOP ioctl, don't allow things which can be done
+-		 * in userland,so that we can get rid of this soon
+-		 */
+-		if (!(op->flags & KD_FONT_FLAG_OLD)) {
+-			kfree(font.data);
+-			return -EINVAL;
+-		}
+-
+-		for (h = 32; h > 0; h--)
+-			for (i = 0; i < op->charcount; i++)
+-				if (charmap[32*i+h-1])
+-					goto nonzero;
+-
+-		kfree(font.data);
+-		return -EINVAL;
+-
+-	nonzero:
+-		op->height = h;
+-	}
+-
+ 	font.charcount = op->charcount;
+ 	font.width = op->width;
+ 	font.height = op->height;
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index a9c6ea8986af0..b10b86e2c17e9 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -486,70 +486,6 @@ static int vt_k_ioctl(struct tty_struct *tty, unsigned int cmd,
+ 	return 0;
+ }
+ 
+-static inline int do_fontx_ioctl(struct vc_data *vc, int cmd,
+-		struct consolefontdesc __user *user_cfd,
+-		struct console_font_op *op)
+-{
+-	struct consolefontdesc cfdarg;
+-	int i;
+-
+-	if (copy_from_user(&cfdarg, user_cfd, sizeof(struct consolefontdesc)))
+-		return -EFAULT;
+-
+-	switch (cmd) {
+-	case PIO_FONTX:
+-		op->op = KD_FONT_OP_SET;
+-		op->flags = KD_FONT_FLAG_OLD;
+-		op->width = 8;
+-		op->height = cfdarg.charheight;
+-		op->charcount = cfdarg.charcount;
+-		op->data = cfdarg.chardata;
+-		return con_font_op(vc, op);
+-
+-	case GIO_FONTX:
+-		op->op = KD_FONT_OP_GET;
+-		op->flags = KD_FONT_FLAG_OLD;
+-		op->width = 8;
+-		op->height = cfdarg.charheight;
+-		op->charcount = cfdarg.charcount;
+-		op->data = cfdarg.chardata;
+-		i = con_font_op(vc, op);
+-		if (i)
+-			return i;
+-		cfdarg.charheight = op->height;
+-		cfdarg.charcount = op->charcount;
+-		if (copy_to_user(user_cfd, &cfdarg, sizeof(struct consolefontdesc)))
+-			return -EFAULT;
+-		return 0;
+-	}
+-	return -EINVAL;
+-}
+-
+-static int vt_io_fontreset(struct vc_data *vc, struct console_font_op *op)
+-{
+-	int ret;
+-
+-	if (__is_defined(BROKEN_GRAPHICS_PROGRAMS)) {
+-		/*
+-		 * With BROKEN_GRAPHICS_PROGRAMS defined, the default font is
+-		 * not saved.
+-		 */
+-		return -ENOSYS;
+-	}
+-
+-	op->op = KD_FONT_OP_SET_DEFAULT;
+-	op->data = NULL;
+-	ret = con_font_op(vc, op);
+-	if (ret)
+-		return ret;
+-
+-	console_lock();
+-	con_set_default_unimap(vc);
+-	console_unlock();
+-
+-	return 0;
+-}
+-
+ static inline int do_unimap_ioctl(int cmd, struct unimapdesc __user *user_ud,
+ 		bool perm, struct vc_data *vc)
+ {
+@@ -574,29 +510,7 @@ static inline int do_unimap_ioctl(int cmd, struct unimapdesc __user *user_ud,
+ static int vt_io_ioctl(struct vc_data *vc, unsigned int cmd, void __user *up,
+ 		bool perm)
+ {
+-	struct console_font_op op;	/* used in multiple places here */
+-
+ 	switch (cmd) {
+-	case PIO_FONT:
+-		if (!perm)
+-			return -EPERM;
+-		op.op = KD_FONT_OP_SET;
+-		op.flags = KD_FONT_FLAG_OLD | KD_FONT_FLAG_DONT_RECALC;	/* Compatibility */
+-		op.width = 8;
+-		op.height = 0;
+-		op.charcount = 256;
+-		op.data = up;
+-		return con_font_op(vc, &op);
+-
+-	case GIO_FONT:
+-		op.op = KD_FONT_OP_GET;
+-		op.flags = KD_FONT_FLAG_OLD;
+-		op.width = 8;
+-		op.height = 32;
+-		op.charcount = 256;
+-		op.data = up;
+-		return con_font_op(vc, &op);
+-
+ 	case PIO_CMAP:
+                 if (!perm)
+ 			return -EPERM;
+@@ -605,20 +519,6 @@ static int vt_io_ioctl(struct vc_data *vc, unsigned int cmd, void __user *up,
+ 	case GIO_CMAP:
+                 return con_get_cmap(up);
+ 
+-	case PIO_FONTX:
+-		if (!perm)
+-			return -EPERM;
+-
+-		fallthrough;
+-	case GIO_FONTX:
+-		return do_fontx_ioctl(vc, cmd, up, &op);
+-
+-	case PIO_FONTRESET:
+-		if (!perm)
+-			return -EPERM;
+-
+-		return vt_io_fontreset(vc, &op);
+-
+ 	case PIO_SCRNMAP:
+ 		if (!perm)
+ 			return -EPERM;
+@@ -1099,54 +999,6 @@ void vc_SAK(struct work_struct *work)
+ 
+ #ifdef CONFIG_COMPAT
+ 
+-struct compat_consolefontdesc {
+-	unsigned short charcount;       /* characters in font (256 or 512) */
+-	unsigned short charheight;      /* scan lines per character (1-32) */
+-	compat_caddr_t chardata;	/* font data in expanded form */
+-};
+-
+-static inline int
+-compat_fontx_ioctl(struct vc_data *vc, int cmd,
+-		   struct compat_consolefontdesc __user *user_cfd,
+-		   int perm, struct console_font_op *op)
+-{
+-	struct compat_consolefontdesc cfdarg;
+-	int i;
+-
+-	if (copy_from_user(&cfdarg, user_cfd, sizeof(struct compat_consolefontdesc)))
+-		return -EFAULT;
+-
+-	switch (cmd) {
+-	case PIO_FONTX:
+-		if (!perm)
+-			return -EPERM;
+-		op->op = KD_FONT_OP_SET;
+-		op->flags = KD_FONT_FLAG_OLD;
+-		op->width = 8;
+-		op->height = cfdarg.charheight;
+-		op->charcount = cfdarg.charcount;
+-		op->data = compat_ptr(cfdarg.chardata);
+-		return con_font_op(vc, op);
+-
+-	case GIO_FONTX:
+-		op->op = KD_FONT_OP_GET;
+-		op->flags = KD_FONT_FLAG_OLD;
+-		op->width = 8;
+-		op->height = cfdarg.charheight;
+-		op->charcount = cfdarg.charcount;
+-		op->data = compat_ptr(cfdarg.chardata);
+-		i = con_font_op(vc, op);
+-		if (i)
+-			return i;
+-		cfdarg.charheight = op->height;
+-		cfdarg.charcount = op->charcount;
+-		if (copy_to_user(user_cfd, &cfdarg, sizeof(struct compat_consolefontdesc)))
+-			return -EFAULT;
+-		return 0;
+-	}
+-	return -EINVAL;
+-}
+-
+ struct compat_console_font_op {
+ 	compat_uint_t op;        /* operation code KD_FONT_OP_* */
+ 	compat_uint_t flags;     /* KD_FONT_FLAG_* */
+@@ -1223,9 +1075,6 @@ long vt_compat_ioctl(struct tty_struct *tty,
+ 	/*
+ 	 * these need special handlers for incompatible data structures
+ 	 */
+-	case PIO_FONTX:
+-	case GIO_FONTX:
+-		return compat_fontx_ioctl(vc, cmd, up, perm, &op);
+ 
+ 	case KDFONTOP:
+ 		return compat_kdfontop_ioctl(up, perm, &op, vc);
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index 5f35cdd2cf1dd..67d8da04848ec 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -1034,6 +1034,9 @@ isr_setup_status_complete(struct usb_ep *ep, struct usb_request *req)
+ 	struct ci_hdrc *ci = req->context;
+ 	unsigned long flags;
+ 
++	if (req->status < 0)
++		return;
++
+ 	if (ci->setaddr) {
+ 		hw_usb_set_address(ci, ci->address);
+ 		ci->setaddr = false;
+diff --git a/drivers/usb/gadget/legacy/raw_gadget.c b/drivers/usb/gadget/legacy/raw_gadget.c
+index 34cecd3660bfc..b496ca937deed 100644
+--- a/drivers/usb/gadget/legacy/raw_gadget.c
++++ b/drivers/usb/gadget/legacy/raw_gadget.c
+@@ -10,6 +10,7 @@
+ #include <linux/ctype.h>
+ #include <linux/debugfs.h>
+ #include <linux/delay.h>
++#include <linux/idr.h>
+ #include <linux/kref.h>
+ #include <linux/miscdevice.h>
+ #include <linux/module.h>
+@@ -35,6 +36,9 @@ MODULE_LICENSE("GPL");
+ 
+ /*----------------------------------------------------------------------*/
+ 
++static DEFINE_IDA(driver_id_numbers);
++#define DRIVER_DRIVER_NAME_LENGTH_MAX	32
++
+ #define RAW_EVENT_QUEUE_SIZE	16
+ 
+ struct raw_event_queue {
+@@ -160,6 +164,9 @@ struct raw_dev {
+ 	/* Reference to misc device: */
+ 	struct device			*dev;
+ 
++	/* Make driver names unique */
++	int				driver_id_number;
++
+ 	/* Protected by lock: */
+ 	enum dev_state			state;
+ 	bool				gadget_registered;
+@@ -188,6 +195,7 @@ static struct raw_dev *dev_new(void)
+ 	spin_lock_init(&dev->lock);
+ 	init_completion(&dev->ep0_done);
+ 	raw_event_queue_init(&dev->queue);
++	dev->driver_id_number = -1;
+ 	return dev;
+ }
+ 
+@@ -198,6 +206,9 @@ static void dev_free(struct kref *kref)
+ 
+ 	kfree(dev->udc_name);
+ 	kfree(dev->driver.udc_name);
++	kfree(dev->driver.driver.name);
++	if (dev->driver_id_number >= 0)
++		ida_free(&driver_id_numbers, dev->driver_id_number);
+ 	if (dev->req) {
+ 		if (dev->ep0_urb_queued)
+ 			usb_ep_dequeue(dev->gadget->ep0, dev->req);
+@@ -418,9 +429,11 @@ out_put:
+ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value)
+ {
+ 	int ret = 0;
++	int driver_id_number;
+ 	struct usb_raw_init arg;
+ 	char *udc_driver_name;
+ 	char *udc_device_name;
++	char *driver_driver_name;
+ 	unsigned long flags;
+ 
+ 	if (copy_from_user(&arg, (void __user *)value, sizeof(arg)))
+@@ -439,36 +452,43 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value)
+ 		return -EINVAL;
+ 	}
+ 
++	driver_id_number = ida_alloc(&driver_id_numbers, GFP_KERNEL);
++	if (driver_id_number < 0)
++		return driver_id_number;
++
++	driver_driver_name = kmalloc(DRIVER_DRIVER_NAME_LENGTH_MAX, GFP_KERNEL);
++	if (!driver_driver_name) {
++		ret = -ENOMEM;
++		goto out_free_driver_id_number;
++	}
++	snprintf(driver_driver_name, DRIVER_DRIVER_NAME_LENGTH_MAX,
++				DRIVER_NAME ".%d", driver_id_number);
++
+ 	udc_driver_name = kmalloc(UDC_NAME_LENGTH_MAX, GFP_KERNEL);
+-	if (!udc_driver_name)
+-		return -ENOMEM;
++	if (!udc_driver_name) {
++		ret = -ENOMEM;
++		goto out_free_driver_driver_name;
++	}
+ 	ret = strscpy(udc_driver_name, &arg.driver_name[0],
+ 				UDC_NAME_LENGTH_MAX);
+-	if (ret < 0) {
+-		kfree(udc_driver_name);
+-		return ret;
+-	}
++	if (ret < 0)
++		goto out_free_udc_driver_name;
+ 	ret = 0;
+ 
+ 	udc_device_name = kmalloc(UDC_NAME_LENGTH_MAX, GFP_KERNEL);
+ 	if (!udc_device_name) {
+-		kfree(udc_driver_name);
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto out_free_udc_driver_name;
+ 	}
+ 	ret = strscpy(udc_device_name, &arg.device_name[0],
+ 				UDC_NAME_LENGTH_MAX);
+-	if (ret < 0) {
+-		kfree(udc_driver_name);
+-		kfree(udc_device_name);
+-		return ret;
+-	}
++	if (ret < 0)
++		goto out_free_udc_device_name;
+ 	ret = 0;
+ 
+ 	spin_lock_irqsave(&dev->lock, flags);
+ 	if (dev->state != STATE_DEV_OPENED) {
+ 		dev_dbg(dev->dev, "fail, device is not opened\n");
+-		kfree(udc_driver_name);
+-		kfree(udc_device_name);
+ 		ret = -EINVAL;
+ 		goto out_unlock;
+ 	}
+@@ -483,14 +503,25 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value)
+ 	dev->driver.suspend = gadget_suspend;
+ 	dev->driver.resume = gadget_resume;
+ 	dev->driver.reset = gadget_reset;
+-	dev->driver.driver.name = DRIVER_NAME;
++	dev->driver.driver.name = driver_driver_name;
+ 	dev->driver.udc_name = udc_device_name;
+ 	dev->driver.match_existing_only = 1;
++	dev->driver_id_number = driver_id_number;
+ 
+ 	dev->state = STATE_DEV_INITIALIZED;
++	spin_unlock_irqrestore(&dev->lock, flags);
++	return ret;
+ 
+ out_unlock:
+ 	spin_unlock_irqrestore(&dev->lock, flags);
++out_free_udc_device_name:
++	kfree(udc_device_name);
++out_free_udc_driver_name:
++	kfree(udc_driver_name);
++out_free_driver_driver_name:
++	kfree(driver_driver_name);
++out_free_driver_id_number:
++	ida_free(&driver_id_numbers, driver_id_number);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 1eb3b5deb940e..94adae8b19f00 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -566,7 +566,7 @@ struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd)
+  * It will release and re-aquire the lock while calling ACPI
+  * method.
+  */
+-static void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd,
++void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd,
+ 				u16 index, bool on, unsigned long *flags)
+ 	__must_hold(&xhci->lock)
+ {
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 886279755804e..8952492d43be6 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -61,6 +61,8 @@
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI		0x461e
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI		0x464e
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI	0x51ed
++#define PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI		0xa71e
++#define PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI		0x7ec0
+ 
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_4			0x43b9
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3			0x43ba
+@@ -265,7 +267,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI ||
+-	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI))
++	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI))
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index a1ed5e0d06128..997de5f294f15 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -775,6 +775,8 @@ static void xhci_stop(struct usb_hcd *hcd)
+ void xhci_shutdown(struct usb_hcd *hcd)
+ {
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
++	unsigned long flags;
++	int i;
+ 
+ 	if (xhci->quirks & XHCI_SPURIOUS_REBOOT)
+ 		usb_disable_xhci_ports(to_pci_dev(hcd->self.sysdev));
+@@ -790,12 +792,21 @@ void xhci_shutdown(struct usb_hcd *hcd)
+ 		del_timer_sync(&xhci->shared_hcd->rh_timer);
+ 	}
+ 
+-	spin_lock_irq(&xhci->lock);
++	spin_lock_irqsave(&xhci->lock, flags);
+ 	xhci_halt(xhci);
++
++	/* Power off USB2 ports*/
++	for (i = 0; i < xhci->usb2_rhub.num_ports; i++)
++		xhci_set_port_power(xhci, xhci->main_hcd, i, false, &flags);
++
++	/* Power off USB3 ports*/
++	for (i = 0; i < xhci->usb3_rhub.num_ports; i++)
++		xhci_set_port_power(xhci, xhci->shared_hcd, i, false, &flags);
++
+ 	/* Workaround for spurious wakeups at shutdown with HSW */
+ 	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+ 		xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+-	spin_unlock_irq(&xhci->lock);
++	spin_unlock_irqrestore(&xhci->lock, flags);
+ 
+ 	xhci_cleanup_msix(xhci);
+ 
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index a46bbf5beffa9..0c66424b34ba9 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -2162,6 +2162,8 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, u16 wIndex,
+ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf);
+ int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1);
+ struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd);
++void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd, u16 index,
++			 bool on, unsigned long *flags);
+ 
+ void xhci_hc_died(struct xhci_hcd *xhci);
+ 
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 3744cde5146f4..44e06b95584e5 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -252,10 +252,12 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EG95			0x0195
+ #define QUECTEL_PRODUCT_BG96			0x0296
+ #define QUECTEL_PRODUCT_EP06			0x0306
++#define QUECTEL_PRODUCT_EM05G			0x030a
+ #define QUECTEL_PRODUCT_EM12			0x0512
+ #define QUECTEL_PRODUCT_RM500Q			0x0800
+ #define QUECTEL_PRODUCT_EC200S_CN		0x6002
+ #define QUECTEL_PRODUCT_EC200T			0x6026
++#define QUECTEL_PRODUCT_RM500K			0x7001
+ 
+ #define CMOTECH_VENDOR_ID			0x16d8
+ #define CMOTECH_PRODUCT_6001			0x6001
+@@ -1134,6 +1136,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
+@@ -1147,6 +1151,7 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },
+ 
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
+@@ -1279,6 +1284,7 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1231, 0xff),	/* Telit LE910Cx (RNDIS) */
+ 	  .driver_info = NCTRL(2) | RSVD(3) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x1250, 0xff, 0x00, 0x00) },	/* Telit LE910Cx (rmnet) */
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x1260),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x1261),
+diff --git a/drivers/usb/typec/tcpm/Kconfig b/drivers/usb/typec/tcpm/Kconfig
+index 557f392fe24da..073fd2ea5e0bb 100644
+--- a/drivers/usb/typec/tcpm/Kconfig
++++ b/drivers/usb/typec/tcpm/Kconfig
+@@ -56,7 +56,6 @@ config TYPEC_WCOVE
+ 	tristate "Intel WhiskeyCove PMIC USB Type-C PHY driver"
+ 	depends on ACPI
+ 	depends on MFD_INTEL_PMC_BXT
+-	depends on INTEL_SOC_PMIC
+ 	depends on BXT_WC_PMIC_OPREGION
+ 	help
+ 	  This driver adds support for USB Type-C on Intel Broxton platforms
+diff --git a/drivers/video/console/sticore.c b/drivers/video/console/sticore.c
+index 77622ef401d8f..68fb531f245a5 100644
+--- a/drivers/video/console/sticore.c
++++ b/drivers/video/console/sticore.c
+@@ -1127,6 +1127,7 @@ int sti_call(const struct sti_struct *sti, unsigned long func,
+ 	return ret;
+ }
+ 
++#if defined(CONFIG_FB_STI)
+ /* check if given fb_info is the primary device */
+ int fb_is_primary_device(struct fb_info *info)
+ {
+@@ -1142,6 +1143,7 @@ int fb_is_primary_device(struct fb_info *info)
+ 	return (sti->info == info);
+ }
+ EXPORT_SYMBOL(fb_is_primary_device);
++#endif
+ 
+ MODULE_AUTHOR("Philipp Rumpf, Helge Deller, Thomas Bogendoerfer");
+ MODULE_DESCRIPTION("Core STI driver for HP's NGLE series graphics cards in HP PARISC machines");
+diff --git a/drivers/xen/features.c b/drivers/xen/features.c
+index 25c053b096051..2c306de228db3 100644
+--- a/drivers/xen/features.c
++++ b/drivers/xen/features.c
+@@ -29,6 +29,6 @@ void xen_setup_features(void)
+ 		if (HYPERVISOR_xen_version(XENVER_get_features, &fi) < 0)
+ 			break;
+ 		for (j = 0; j < 32; j++)
+-			xen_features[i * 32 + j] = !!(fi.submap & 1<<j);
++			xen_features[i * 32 + j] = !!(fi.submap & 1U << j);
+ 	}
+ }
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 7e7a9454bcb9d..826fae22a8cc9 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -734,7 +734,8 @@ int afs_getattr(const struct path *path, struct kstat *stat,
+ 
+ 	_enter("{ ino=%lu v=%u }", inode->i_ino, inode->i_generation);
+ 
+-	if (!(query_flags & AT_STATX_DONT_SYNC) &&
++	if (vnode->volume &&
++	    !(query_flags & AT_STATX_DONT_SYNC) &&
+ 	    !test_bit(AFS_VNODE_CB_PROMISED, &vnode->flags)) {
+ 		key = afs_request_key(vnode->volume->cell);
+ 		if (IS_ERR(key))
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 2663485c17cb8..8bf8cdb62a3af 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -652,6 +652,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 				compress_force = false;
+ 				no_compress++;
+ 			} else {
++				btrfs_err(info, "unrecognized compression value %s",
++					  args[0].from);
+ 				ret = -EINVAL;
+ 				goto out;
+ 			}
+@@ -710,8 +712,11 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 		case Opt_thread_pool:
+ 			ret = match_int(&args[0], &intarg);
+ 			if (ret) {
++				btrfs_err(info, "unrecognized thread_pool value %s",
++					  args[0].from);
+ 				goto out;
+ 			} else if (intarg == 0) {
++				btrfs_err(info, "invalid value 0 for thread_pool");
+ 				ret = -EINVAL;
+ 				goto out;
+ 			}
+@@ -772,8 +777,11 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 			break;
+ 		case Opt_ratio:
+ 			ret = match_int(&args[0], &intarg);
+-			if (ret)
++			if (ret) {
++				btrfs_err(info, "unrecognized metadata_ratio value %s",
++					  args[0].from);
+ 				goto out;
++			}
+ 			info->metadata_ratio = intarg;
+ 			btrfs_info(info, "metadata ratio %u",
+ 				   info->metadata_ratio);
+@@ -790,6 +798,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 				btrfs_set_and_info(info, DISCARD_ASYNC,
+ 						   "turning on async discard");
+ 			} else {
++				btrfs_err(info, "unrecognized discard mode value %s",
++					  args[0].from);
+ 				ret = -EINVAL;
+ 				goto out;
+ 			}
+@@ -814,6 +824,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 				btrfs_set_and_info(info, FREE_SPACE_TREE,
+ 						   "enabling free space tree");
+ 			} else {
++				btrfs_err(info, "unrecognized space_cache value %s",
++					  args[0].from);
+ 				ret = -EINVAL;
+ 				goto out;
+ 			}
+@@ -889,8 +901,12 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 			break;
+ 		case Opt_check_integrity_print_mask:
+ 			ret = match_int(&args[0], &intarg);
+-			if (ret)
++			if (ret) {
++				btrfs_err(info,
++				"unrecognized check_integrity_print_mask value %s",
++					args[0].from);
+ 				goto out;
++			}
+ 			info->check_integrity_print_mask = intarg;
+ 			btrfs_info(info, "check_integrity_print_mask 0x%x",
+ 				   info->check_integrity_print_mask);
+@@ -905,13 +921,15 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 			goto out;
+ #endif
+ 		case Opt_fatal_errors:
+-			if (strcmp(args[0].from, "panic") == 0)
++			if (strcmp(args[0].from, "panic") == 0) {
+ 				btrfs_set_opt(info->mount_opt,
+ 					      PANIC_ON_FATAL_ERROR);
+-			else if (strcmp(args[0].from, "bug") == 0)
++			} else if (strcmp(args[0].from, "bug") == 0) {
+ 				btrfs_clear_opt(info->mount_opt,
+ 					      PANIC_ON_FATAL_ERROR);
+-			else {
++			} else {
++				btrfs_err(info, "unrecognized fatal_errors value %s",
++					  args[0].from);
+ 				ret = -EINVAL;
+ 				goto out;
+ 			}
+@@ -919,8 +937,12 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 		case Opt_commit_interval:
+ 			intarg = 0;
+ 			ret = match_int(&args[0], &intarg);
+-			if (ret)
++			if (ret) {
++				btrfs_err(info, "unrecognized commit_interval value %s",
++					  args[0].from);
++				ret = -EINVAL;
+ 				goto out;
++			}
+ 			if (intarg == 0) {
+ 				btrfs_info(info,
+ 					   "using default commit interval %us",
+@@ -934,8 +956,11 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ 			break;
+ 		case Opt_rescue:
+ 			ret = parse_rescue_options(info, args[0].from);
+-			if (ret < 0)
++			if (ret < 0) {
++				btrfs_err(info, "unrecognized rescue value %s",
++					  args[0].from);
+ 				goto out;
++			}
+ 			break;
+ #ifdef CONFIG_BTRFS_DEBUG
+ 		case Opt_fragment_all:
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 6ae2beabe578d..72b109685db47 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -91,8 +91,6 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
+ 	if (test_opt(sbi, INLINE_XATTR))
+ 		set_inode_flag(inode, FI_INLINE_XATTR);
+ 
+-	if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode))
+-		set_inode_flag(inode, FI_INLINE_DATA);
+ 	if (f2fs_may_inline_dentry(inode))
+ 		set_inode_flag(inode, FI_INLINE_DENTRY);
+ 
+@@ -109,10 +107,6 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
+ 
+ 	f2fs_init_extent_tree(inode, NULL);
+ 
+-	stat_inc_inline_xattr(inode);
+-	stat_inc_inline_inode(inode);
+-	stat_inc_inline_dir(inode);
+-
+ 	F2FS_I(inode)->i_flags =
+ 		f2fs_mask_flags(mode, F2FS_I(dir)->i_flags & F2FS_FL_INHERITED);
+ 
+@@ -129,6 +123,14 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
+ 			set_compress_context(inode);
+ 	}
+ 
++	/* Should enable inline_data after compression set */
++	if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode))
++		set_inode_flag(inode, FI_INLINE_DATA);
++
++	stat_inc_inline_xattr(inode);
++	stat_inc_inline_inode(inode);
++	stat_inc_inline_dir(inode);
++
+ 	f2fs_set_inode_flags(inode);
+ 
+ 	trace_f2fs_new_inode(inode, 0);
+@@ -317,6 +319,9 @@ static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode,
+ 		if (!is_extension_exist(name, ext[i], false))
+ 			continue;
+ 
++		/* Do not use inline_data with compression */
++		stat_dec_inline_inode(inode);
++		clear_inode_flag(inode, FI_INLINE_DATA);
+ 		set_compress_context(inode);
+ 		return;
+ 	}
+diff --git a/include/linux/kd.h b/include/linux/kd.h
+deleted file mode 100644
+index b130a18f860f0..0000000000000
+--- a/include/linux/kd.h
++++ /dev/null
+@@ -1,8 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _LINUX_KD_H
+-#define _LINUX_KD_H
+-
+-#include <uapi/linux/kd.h>
+-
+-#define KD_FONT_FLAG_OLD		0x80000000	/* Invoked via old interface [compat] */
+-#endif /* _LINUX_KD_H */
+diff --git a/include/linux/ratelimit_types.h b/include/linux/ratelimit_types.h
+index b676aa419eef8..f0e535f199bef 100644
+--- a/include/linux/ratelimit_types.h
++++ b/include/linux/ratelimit_types.h
+@@ -23,12 +23,16 @@ struct ratelimit_state {
+ 	unsigned long	flags;
+ };
+ 
+-#define RATELIMIT_STATE_INIT(name, interval_init, burst_init) {		\
+-		.lock		= __RAW_SPIN_LOCK_UNLOCKED(name.lock),	\
+-		.interval	= interval_init,			\
+-		.burst		= burst_init,				\
++#define RATELIMIT_STATE_INIT_FLAGS(name, interval_init, burst_init, flags_init) { \
++		.lock		= __RAW_SPIN_LOCK_UNLOCKED(name.lock),		  \
++		.interval	= interval_init,				  \
++		.burst		= burst_init,					  \
++		.flags		= flags_init,					  \
+ 	}
+ 
++#define RATELIMIT_STATE_INIT(name, interval_init, burst_init) \
++	RATELIMIT_STATE_INIT_FLAGS(name, interval_init, burst_init, 0)
++
+ #define RATELIMIT_STATE_INIT_DISABLED					\
+ 	RATELIMIT_STATE_INIT(ratelimit_state, 0, DEFAULT_RATELIMIT_BURST)
+ 
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index b7907385a02ff..b9948e7861f22 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -203,11 +203,11 @@ int nft_parse_u32_check(const struct nlattr *attr, int max, u32 *dest);
+ unsigned int nft_parse_register(const struct nlattr *attr);
+ int nft_dump_register(struct sk_buff *skb, unsigned int attr, unsigned int reg);
+ 
+-int nft_validate_register_load(enum nft_registers reg, unsigned int len);
+-int nft_validate_register_store(const struct nft_ctx *ctx,
+-				enum nft_registers reg,
+-				const struct nft_data *data,
+-				enum nft_data_types type, unsigned int len);
++int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len);
++int nft_parse_register_store(const struct nft_ctx *ctx,
++			     const struct nlattr *attr, u8 *dreg,
++			     const struct nft_data *data,
++			     enum nft_data_types type, unsigned int len);
+ 
+ /**
+  *	struct nft_userdata - user defined data associated with an object
+diff --git a/include/net/netfilter/nf_tables_core.h b/include/net/netfilter/nf_tables_core.h
+index 8657e6815b07c..fd10a7862fdc6 100644
+--- a/include/net/netfilter/nf_tables_core.h
++++ b/include/net/netfilter/nf_tables_core.h
+@@ -26,21 +26,21 @@ void nf_tables_core_module_exit(void);
+ struct nft_bitwise_fast_expr {
+ 	u32			mask;
+ 	u32			xor;
+-	enum nft_registers	sreg:8;
+-	enum nft_registers	dreg:8;
++	u8			sreg;
++	u8			dreg;
+ };
+ 
+ struct nft_cmp_fast_expr {
+ 	u32			data;
+ 	u32			mask;
+-	enum nft_registers	sreg:8;
++	u8			sreg;
+ 	u8			len;
+ 	bool			inv;
+ };
+ 
+ struct nft_immediate_expr {
+ 	struct nft_data		data;
+-	enum nft_registers	dreg:8;
++	u8			dreg;
+ 	u8			dlen;
+ };
+ 
+@@ -60,14 +60,14 @@ struct nft_payload {
+ 	enum nft_payload_bases	base:8;
+ 	u8			offset;
+ 	u8			len;
+-	enum nft_registers	dreg:8;
++	u8			dreg;
+ };
+ 
+ struct nft_payload_set {
+ 	enum nft_payload_bases	base:8;
+ 	u8			offset;
+ 	u8			len;
+-	enum nft_registers	sreg:8;
++	u8			sreg;
+ 	u8			csum_type;
+ 	u8			csum_offset;
+ 	u8			csum_flags;
+diff --git a/include/net/netfilter/nft_fib.h b/include/net/netfilter/nft_fib.h
+index 628b6fa579cd8..237f3757637e1 100644
+--- a/include/net/netfilter/nft_fib.h
++++ b/include/net/netfilter/nft_fib.h
+@@ -5,7 +5,7 @@
+ #include <net/netfilter/nf_tables.h>
+ 
+ struct nft_fib {
+-	enum nft_registers	dreg:8;
++	u8			dreg;
+ 	u8			result;
+ 	u32			flags;
+ };
+diff --git a/include/net/netfilter/nft_meta.h b/include/net/netfilter/nft_meta.h
+index 07e2fd507963a..2dce55c736f40 100644
+--- a/include/net/netfilter/nft_meta.h
++++ b/include/net/netfilter/nft_meta.h
+@@ -7,8 +7,8 @@
+ struct nft_meta {
+ 	enum nft_meta_keys	key:8;
+ 	union {
+-		enum nft_registers	dreg:8;
+-		enum nft_registers	sreg:8;
++		u8		dreg;
++		u8		sreg;
+ 	};
+ };
+ 
+diff --git a/include/trace/events/libata.h b/include/trace/events/libata.h
+index ab69434e2329e..72e785a903b65 100644
+--- a/include/trace/events/libata.h
++++ b/include/trace/events/libata.h
+@@ -249,6 +249,7 @@ DECLARE_EVENT_CLASS(ata_qc_complete_template,
+ 		__entry->hob_feature	= qc->result_tf.hob_feature;
+ 		__entry->nsect		= qc->result_tf.nsect;
+ 		__entry->hob_nsect	= qc->result_tf.hob_nsect;
++		__entry->flags		= qc->flags;
+ 	),
+ 
+ 	TP_printk("ata_port=%u ata_dev=%u tag=%d flags=%s status=%s " \
+diff --git a/net/bridge/netfilter/nft_meta_bridge.c b/net/bridge/netfilter/nft_meta_bridge.c
+index 8e8ffac037cd4..97805ec424c19 100644
+--- a/net/bridge/netfilter/nft_meta_bridge.c
++++ b/net/bridge/netfilter/nft_meta_bridge.c
+@@ -87,9 +87,8 @@ static int nft_meta_bridge_get_init(const struct nft_ctx *ctx,
+ 		return nft_meta_get_init(ctx, expr, tb);
+ 	}
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_META_DREG]);
+-	return nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, len);
++	return nft_parse_register_store(ctx, tb[NFTA_META_DREG], &priv->dreg,
++					NULL, NFT_DATA_VALUE, len);
+ }
+ 
+ static struct nft_expr_type nft_meta_bridge_type;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index d348f1d3fb8fc..246947fbc9581 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -5982,10 +5982,21 @@ __bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
+ 					   ifindex, proto, netns_id, flags);
+ 
+ 	if (sk) {
+-		sk = sk_to_full_sk(sk);
+-		if (!sk_fullsock(sk)) {
++		struct sock *sk2 = sk_to_full_sk(sk);
++
++		/* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk
++		 * sock refcnt is decremented to prevent a request_sock leak.
++		 */
++		if (!sk_fullsock(sk2))
++			sk2 = NULL;
++		if (sk2 != sk) {
+ 			sock_gen_put(sk);
+-			return NULL;
++			/* Ensure there is no need to bump sk2 refcnt */
++			if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) {
++				WARN_ONCE(1, "Found non-RCU, unreferenced socket!");
++				return NULL;
++			}
++			sk = sk2;
+ 		}
+ 	}
+ 
+@@ -6019,10 +6030,21 @@ bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
+ 					 flags);
+ 
+ 	if (sk) {
+-		sk = sk_to_full_sk(sk);
+-		if (!sk_fullsock(sk)) {
++		struct sock *sk2 = sk_to_full_sk(sk);
++
++		/* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk
++		 * sock refcnt is decremented to prevent a request_sock leak.
++		 */
++		if (!sk_fullsock(sk2))
++			sk2 = NULL;
++		if (sk2 != sk) {
+ 			sock_gen_put(sk);
+-			return NULL;
++			/* Ensure there is no need to bump sk2 refcnt */
++			if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) {
++				WARN_ONCE(1, "Found non-RCU, unreferenced socket!");
++				return NULL;
++			}
++			sk = sk2;
+ 		}
+ 	}
+ 
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index a7e32be8714f5..6ab5c50aa7a87 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -519,7 +519,6 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	int tunnel_hlen;
+ 	int version;
+ 	int nhoff;
+-	int thoff;
+ 
+ 	tun_info = skb_tunnel_info(skb);
+ 	if (unlikely(!tun_info || !(tun_info->mode & IP_TUNNEL_INFO_TX) ||
+@@ -553,10 +552,16 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	    (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff))
+ 		truncate = true;
+ 
+-	thoff = skb_transport_header(skb) - skb_mac_header(skb);
+-	if (skb->protocol == htons(ETH_P_IPV6) &&
+-	    (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff))
+-		truncate = true;
++	if (skb->protocol == htons(ETH_P_IPV6)) {
++		int thoff;
++
++		if (skb_transport_header_was_set(skb))
++			thoff = skb_transport_header(skb) - skb_mac_header(skb);
++		else
++			thoff = nhoff + sizeof(struct ipv6hdr);
++		if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)
++			truncate = true;
++	}
+ 
+ 	if (version == 1) {
+ 		erspan_build_header(skb, ntohl(tunnel_id_to_key32(key->tun_id)),
+diff --git a/net/ipv4/netfilter/nft_dup_ipv4.c b/net/ipv4/netfilter/nft_dup_ipv4.c
+index bcdb37f86a949..aeb631760eb9e 100644
+--- a/net/ipv4/netfilter/nft_dup_ipv4.c
++++ b/net/ipv4/netfilter/nft_dup_ipv4.c
+@@ -13,8 +13,8 @@
+ #include <net/netfilter/ipv4/nf_dup_ipv4.h>
+ 
+ struct nft_dup_ipv4 {
+-	enum nft_registers	sreg_addr:8;
+-	enum nft_registers	sreg_dev:8;
++	u8	sreg_addr;
++	u8	sreg_dev;
+ };
+ 
+ static void nft_dup_ipv4_eval(const struct nft_expr *expr,
+@@ -40,16 +40,16 @@ static int nft_dup_ipv4_init(const struct nft_ctx *ctx,
+ 	if (tb[NFTA_DUP_SREG_ADDR] == NULL)
+ 		return -EINVAL;
+ 
+-	priv->sreg_addr = nft_parse_register(tb[NFTA_DUP_SREG_ADDR]);
+-	err = nft_validate_register_load(priv->sreg_addr, sizeof(struct in_addr));
++	err = nft_parse_register_load(tb[NFTA_DUP_SREG_ADDR], &priv->sreg_addr,
++				      sizeof(struct in_addr));
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (tb[NFTA_DUP_SREG_DEV] != NULL) {
+-		priv->sreg_dev = nft_parse_register(tb[NFTA_DUP_SREG_DEV]);
+-		return nft_validate_register_load(priv->sreg_dev, sizeof(int));
+-	}
+-	return 0;
++	if (tb[NFTA_DUP_SREG_DEV])
++		err = nft_parse_register_load(tb[NFTA_DUP_SREG_DEV],
++					      &priv->sreg_dev, sizeof(int));
++
++	return err;
+ }
+ 
+ static int nft_dup_ipv4_dump(struct sk_buff *skb, const struct nft_expr *expr)
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 3f88ba6555ab8..9e0890738d93f 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -944,7 +944,6 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 	__be16 proto;
+ 	__u32 mtu;
+ 	int nhoff;
+-	int thoff;
+ 
+ 	if (!pskb_inet_may_pull(skb))
+ 		goto tx_err;
+@@ -965,10 +964,16 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 	    (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff))
+ 		truncate = true;
+ 
+-	thoff = skb_transport_header(skb) - skb_mac_header(skb);
+-	if (skb->protocol == htons(ETH_P_IPV6) &&
+-	    (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff))
+-		truncate = true;
++	if (skb->protocol == htons(ETH_P_IPV6)) {
++		int thoff;
++
++		if (skb_transport_header_was_set(skb))
++			thoff = skb_transport_header(skb) - skb_mac_header(skb);
++		else
++			thoff = nhoff + sizeof(struct ipv6hdr);
++		if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)
++			truncate = true;
++	}
+ 
+ 	if (skb_cow_head(skb, dev->needed_headroom ?: t->hlen))
+ 		goto tx_err;
+diff --git a/net/ipv6/netfilter/nft_dup_ipv6.c b/net/ipv6/netfilter/nft_dup_ipv6.c
+index 8b5193efb1f1b..3a00d95e964e9 100644
+--- a/net/ipv6/netfilter/nft_dup_ipv6.c
++++ b/net/ipv6/netfilter/nft_dup_ipv6.c
+@@ -13,8 +13,8 @@
+ #include <net/netfilter/ipv6/nf_dup_ipv6.h>
+ 
+ struct nft_dup_ipv6 {
+-	enum nft_registers	sreg_addr:8;
+-	enum nft_registers	sreg_dev:8;
++	u8	sreg_addr;
++	u8	sreg_dev;
+ };
+ 
+ static void nft_dup_ipv6_eval(const struct nft_expr *expr,
+@@ -38,16 +38,16 @@ static int nft_dup_ipv6_init(const struct nft_ctx *ctx,
+ 	if (tb[NFTA_DUP_SREG_ADDR] == NULL)
+ 		return -EINVAL;
+ 
+-	priv->sreg_addr = nft_parse_register(tb[NFTA_DUP_SREG_ADDR]);
+-	err = nft_validate_register_load(priv->sreg_addr, sizeof(struct in6_addr));
++	err = nft_parse_register_load(tb[NFTA_DUP_SREG_ADDR], &priv->sreg_addr,
++				      sizeof(struct in6_addr));
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (tb[NFTA_DUP_SREG_DEV] != NULL) {
+-		priv->sreg_dev = nft_parse_register(tb[NFTA_DUP_SREG_DEV]);
+-		return nft_validate_register_load(priv->sreg_dev, sizeof(int));
+-	}
+-	return 0;
++	if (tb[NFTA_DUP_SREG_DEV])
++		err = nft_parse_register_load(tb[NFTA_DUP_SREG_DEV],
++					      &priv->sreg_dev, sizeof(int));
++
++	return err;
+ }
+ 
+ static int nft_dup_ipv6_dump(struct sk_buff *skb, const struct nft_expr *expr)
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 0c56a90c3f086..3c17fadaab5fa 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4414,6 +4414,12 @@ static int nf_tables_delset(struct net *net, struct sock *nlsk,
+ 	return nft_delset(&ctx, set);
+ }
+ 
++static int nft_validate_register_store(const struct nft_ctx *ctx,
++				       enum nft_registers reg,
++				       const struct nft_data *data,
++				       enum nft_data_types type,
++				       unsigned int len);
++
+ static int nf_tables_bind_check_setelem(const struct nft_ctx *ctx,
+ 					struct nft_set *set,
+ 					const struct nft_set_iter *iter,
+@@ -8514,7 +8520,7 @@ EXPORT_SYMBOL_GPL(nft_dump_register);
+  * 	Validate that the input register is one of the general purpose
+  * 	registers and that the length of the load is within the bounds.
+  */
+-int nft_validate_register_load(enum nft_registers reg, unsigned int len)
++static int nft_validate_register_load(enum nft_registers reg, unsigned int len)
+ {
+ 	if (reg < NFT_REG_1 * NFT_REG_SIZE / NFT_REG32_SIZE)
+ 		return -EINVAL;
+@@ -8525,7 +8531,21 @@ int nft_validate_register_load(enum nft_registers reg, unsigned int len)
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(nft_validate_register_load);
++
++int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len)
++{
++	u32 reg;
++	int err;
++
++	reg = nft_parse_register(attr);
++	err = nft_validate_register_load(reg, len);
++	if (err < 0)
++		return err;
++
++	*sreg = reg;
++	return 0;
++}
++EXPORT_SYMBOL_GPL(nft_parse_register_load);
+ 
+ /**
+  *	nft_validate_register_store - validate an expressions' register store
+@@ -8541,10 +8561,11 @@ EXPORT_SYMBOL_GPL(nft_validate_register_load);
+  * 	A value of NULL for the data means that its runtime gathered
+  * 	data.
+  */
+-int nft_validate_register_store(const struct nft_ctx *ctx,
+-				enum nft_registers reg,
+-				const struct nft_data *data,
+-				enum nft_data_types type, unsigned int len)
++static int nft_validate_register_store(const struct nft_ctx *ctx,
++				       enum nft_registers reg,
++				       const struct nft_data *data,
++				       enum nft_data_types type,
++				       unsigned int len)
+ {
+ 	int err;
+ 
+@@ -8576,7 +8597,24 @@ int nft_validate_register_store(const struct nft_ctx *ctx,
+ 		return 0;
+ 	}
+ }
+-EXPORT_SYMBOL_GPL(nft_validate_register_store);
++
++int nft_parse_register_store(const struct nft_ctx *ctx,
++			     const struct nlattr *attr, u8 *dreg,
++			     const struct nft_data *data,
++			     enum nft_data_types type, unsigned int len)
++{
++	int err;
++	u32 reg;
++
++	reg = nft_parse_register(attr);
++	err = nft_validate_register_store(ctx, reg, data, type, len);
++	if (err < 0)
++		return err;
++
++	*dreg = reg;
++	return 0;
++}
++EXPORT_SYMBOL_GPL(nft_parse_register_store);
+ 
+ static const struct nla_policy nft_verdict_policy[NFTA_VERDICT_MAX + 1] = {
+ 	[NFTA_VERDICT_CODE]	= { .type = NLA_U32 },
+diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c
+index bbd773d743773..47b0dba95054f 100644
+--- a/net/netfilter/nft_bitwise.c
++++ b/net/netfilter/nft_bitwise.c
+@@ -16,8 +16,8 @@
+ #include <net/netfilter/nf_tables_offload.h>
+ 
+ struct nft_bitwise {
+-	enum nft_registers	sreg:8;
+-	enum nft_registers	dreg:8;
++	u8			sreg;
++	u8			dreg;
+ 	enum nft_bitwise_ops	op:8;
+ 	u8			len;
+ 	struct nft_data		mask;
+@@ -169,14 +169,14 @@ static int nft_bitwise_init(const struct nft_ctx *ctx,
+ 
+ 	priv->len = len;
+ 
+-	priv->sreg = nft_parse_register(tb[NFTA_BITWISE_SREG]);
+-	err = nft_validate_register_load(priv->sreg, priv->len);
++	err = nft_parse_register_load(tb[NFTA_BITWISE_SREG], &priv->sreg,
++				      priv->len);
+ 	if (err < 0)
+ 		return err;
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_BITWISE_DREG]);
+-	err = nft_validate_register_store(ctx, priv->dreg, NULL,
+-					  NFT_DATA_VALUE, priv->len);
++	err = nft_parse_register_store(ctx, tb[NFTA_BITWISE_DREG],
++				       &priv->dreg, NULL, NFT_DATA_VALUE,
++				       priv->len);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -315,14 +315,13 @@ static int nft_bitwise_fast_init(const struct nft_ctx *ctx,
+ 	struct nft_bitwise_fast_expr *priv = nft_expr_priv(expr);
+ 	int err;
+ 
+-	priv->sreg = nft_parse_register(tb[NFTA_BITWISE_SREG]);
+-	err = nft_validate_register_load(priv->sreg, sizeof(u32));
++	err = nft_parse_register_load(tb[NFTA_BITWISE_SREG], &priv->sreg,
++				      sizeof(u32));
+ 	if (err < 0)
+ 		return err;
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_BITWISE_DREG]);
+-	err = nft_validate_register_store(ctx, priv->dreg, NULL,
+-					  NFT_DATA_VALUE, sizeof(u32));
++	err = nft_parse_register_store(ctx, tb[NFTA_BITWISE_DREG], &priv->dreg,
++				       NULL, NFT_DATA_VALUE, sizeof(u32));
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c
+index 12bed3f7bbc6d..9d5947ab8d4ef 100644
+--- a/net/netfilter/nft_byteorder.c
++++ b/net/netfilter/nft_byteorder.c
+@@ -16,8 +16,8 @@
+ #include <net/netfilter/nf_tables.h>
+ 
+ struct nft_byteorder {
+-	enum nft_registers	sreg:8;
+-	enum nft_registers	dreg:8;
++	u8			sreg;
++	u8			dreg;
+ 	enum nft_byteorder_ops	op:8;
+ 	u8			len;
+ 	u8			size;
+@@ -131,20 +131,20 @@ static int nft_byteorder_init(const struct nft_ctx *ctx,
+ 		return -EINVAL;
+ 	}
+ 
+-	priv->sreg = nft_parse_register(tb[NFTA_BYTEORDER_SREG]);
+ 	err = nft_parse_u32_check(tb[NFTA_BYTEORDER_LEN], U8_MAX, &len);
+ 	if (err < 0)
+ 		return err;
+ 
+ 	priv->len = len;
+ 
+-	err = nft_validate_register_load(priv->sreg, priv->len);
++	err = nft_parse_register_load(tb[NFTA_BYTEORDER_SREG], &priv->sreg,
++				      priv->len);
+ 	if (err < 0)
+ 		return err;
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_BYTEORDER_DREG]);
+-	return nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, priv->len);
++	return nft_parse_register_store(ctx, tb[NFTA_BYTEORDER_DREG],
++					&priv->dreg, NULL, NFT_DATA_VALUE,
++					priv->len);
+ }
+ 
+ static int nft_byteorder_dump(struct sk_buff *skb, const struct nft_expr *expr)
+diff --git a/net/netfilter/nft_cmp.c b/net/netfilter/nft_cmp.c
+index 1d42d06f5b64b..b529c0e865466 100644
+--- a/net/netfilter/nft_cmp.c
++++ b/net/netfilter/nft_cmp.c
+@@ -18,7 +18,7 @@
+ 
+ struct nft_cmp_expr {
+ 	struct nft_data		data;
+-	enum nft_registers	sreg:8;
++	u8			sreg;
+ 	u8			len;
+ 	enum nft_cmp_ops	op:8;
+ };
+@@ -87,8 +87,7 @@ static int nft_cmp_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 		return err;
+ 	}
+ 
+-	priv->sreg = nft_parse_register(tb[NFTA_CMP_SREG]);
+-	err = nft_validate_register_load(priv->sreg, desc.len);
++	err = nft_parse_register_load(tb[NFTA_CMP_SREG], &priv->sreg, desc.len);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -211,8 +210,7 @@ static int nft_cmp_fast_init(const struct nft_ctx *ctx,
+ 	if (err < 0)
+ 		return err;
+ 
+-	priv->sreg = nft_parse_register(tb[NFTA_CMP_SREG]);
+-	err = nft_validate_register_load(priv->sreg, desc.len);
++	err = nft_parse_register_load(tb[NFTA_CMP_SREG], &priv->sreg, desc.len);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index 7fcb73ac2e6ed..781118465d466 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -27,8 +27,8 @@ struct nft_ct {
+ 	enum nft_ct_keys	key:8;
+ 	enum ip_conntrack_dir	dir:8;
+ 	union {
+-		enum nft_registers	dreg:8;
+-		enum nft_registers	sreg:8;
++		u8		dreg;
++		u8		sreg;
+ 	};
+ };
+ 
+@@ -499,9 +499,8 @@ static int nft_ct_get_init(const struct nft_ctx *ctx,
+ 		}
+ 	}
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_CT_DREG]);
+-	err = nft_validate_register_store(ctx, priv->dreg, NULL,
+-					  NFT_DATA_VALUE, len);
++	err = nft_parse_register_store(ctx, tb[NFTA_CT_DREG], &priv->dreg, NULL,
++				       NFT_DATA_VALUE, len);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -608,8 +607,7 @@ static int nft_ct_set_init(const struct nft_ctx *ctx,
+ 		}
+ 	}
+ 
+-	priv->sreg = nft_parse_register(tb[NFTA_CT_SREG]);
+-	err = nft_validate_register_load(priv->sreg, len);
++	err = nft_parse_register_load(tb[NFTA_CT_SREG], &priv->sreg, len);
+ 	if (err < 0)
+ 		goto err1;
+ 
+diff --git a/net/netfilter/nft_dup_netdev.c b/net/netfilter/nft_dup_netdev.c
+index 70c457476b874..5b5c607fbf83f 100644
+--- a/net/netfilter/nft_dup_netdev.c
++++ b/net/netfilter/nft_dup_netdev.c
+@@ -14,7 +14,7 @@
+ #include <net/netfilter/nf_dup_netdev.h>
+ 
+ struct nft_dup_netdev {
+-	enum nft_registers	sreg_dev:8;
++	u8	sreg_dev;
+ };
+ 
+ static void nft_dup_netdev_eval(const struct nft_expr *expr,
+@@ -40,8 +40,8 @@ static int nft_dup_netdev_init(const struct nft_ctx *ctx,
+ 	if (tb[NFTA_DUP_SREG_DEV] == NULL)
+ 		return -EINVAL;
+ 
+-	priv->sreg_dev = nft_parse_register(tb[NFTA_DUP_SREG_DEV]);
+-	return nft_validate_register_load(priv->sreg_dev, sizeof(int));
++	return nft_parse_register_load(tb[NFTA_DUP_SREG_DEV], &priv->sreg_dev,
++				       sizeof(int));
+ }
+ 
+ static int nft_dup_netdev_dump(struct sk_buff *skb, const struct nft_expr *expr)
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 58904bee1a0df..8c45e01fecdd8 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -16,8 +16,8 @@ struct nft_dynset {
+ 	struct nft_set			*set;
+ 	struct nft_set_ext_tmpl		tmpl;
+ 	enum nft_dynset_ops		op:8;
+-	enum nft_registers		sreg_key:8;
+-	enum nft_registers		sreg_data:8;
++	u8				sreg_key;
++	u8				sreg_data;
+ 	bool				invert;
+ 	u64				timeout;
+ 	struct nft_expr			*expr;
+@@ -154,8 +154,8 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ 			return err;
+ 	}
+ 
+-	priv->sreg_key = nft_parse_register(tb[NFTA_DYNSET_SREG_KEY]);
+-	err = nft_validate_register_load(priv->sreg_key, set->klen);
++	err = nft_parse_register_load(tb[NFTA_DYNSET_SREG_KEY], &priv->sreg_key,
++				      set->klen);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -165,8 +165,8 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ 		if (set->dtype == NFT_DATA_VERDICT)
+ 			return -EOPNOTSUPP;
+ 
+-		priv->sreg_data = nft_parse_register(tb[NFTA_DYNSET_SREG_DATA]);
+-		err = nft_validate_register_load(priv->sreg_data, set->dlen);
++		err = nft_parse_register_load(tb[NFTA_DYNSET_SREG_DATA],
++					      &priv->sreg_data, set->dlen);
+ 		if (err < 0)
+ 			return err;
+ 	} else if (set->flags & NFT_SET_MAP)
+diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c
+index faa0844c01fb8..670dd146fb2b1 100644
+--- a/net/netfilter/nft_exthdr.c
++++ b/net/netfilter/nft_exthdr.c
+@@ -19,8 +19,8 @@ struct nft_exthdr {
+ 	u8			offset;
+ 	u8			len;
+ 	u8			op;
+-	enum nft_registers	dreg:8;
+-	enum nft_registers	sreg:8;
++	u8			dreg;
++	u8			sreg;
+ 	u8			flags;
+ };
+ 
+@@ -353,12 +353,12 @@ static int nft_exthdr_init(const struct nft_ctx *ctx,
+ 	priv->type   = nla_get_u8(tb[NFTA_EXTHDR_TYPE]);
+ 	priv->offset = offset;
+ 	priv->len    = len;
+-	priv->dreg   = nft_parse_register(tb[NFTA_EXTHDR_DREG]);
+ 	priv->flags  = flags;
+ 	priv->op     = op;
+ 
+-	return nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, priv->len);
++	return nft_parse_register_store(ctx, tb[NFTA_EXTHDR_DREG],
++					&priv->dreg, NULL, NFT_DATA_VALUE,
++					priv->len);
+ }
+ 
+ static int nft_exthdr_tcp_set_init(const struct nft_ctx *ctx,
+@@ -403,11 +403,11 @@ static int nft_exthdr_tcp_set_init(const struct nft_ctx *ctx,
+ 	priv->type   = nla_get_u8(tb[NFTA_EXTHDR_TYPE]);
+ 	priv->offset = offset;
+ 	priv->len    = len;
+-	priv->sreg   = nft_parse_register(tb[NFTA_EXTHDR_SREG]);
+ 	priv->flags  = flags;
+ 	priv->op     = op;
+ 
+-	return nft_validate_register_load(priv->sreg, priv->len);
++	return nft_parse_register_load(tb[NFTA_EXTHDR_SREG], &priv->sreg,
++				       priv->len);
+ }
+ 
+ static int nft_exthdr_ipv4_init(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_fib.c b/net/netfilter/nft_fib.c
+index 4dfdaeaf09a5b..b10ce732b337c 100644
+--- a/net/netfilter/nft_fib.c
++++ b/net/netfilter/nft_fib.c
+@@ -86,7 +86,6 @@ int nft_fib_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 		return -EINVAL;
+ 
+ 	priv->result = ntohl(nla_get_be32(tb[NFTA_FIB_RESULT]));
+-	priv->dreg = nft_parse_register(tb[NFTA_FIB_DREG]);
+ 
+ 	switch (priv->result) {
+ 	case NFT_FIB_RESULT_OIF:
+@@ -106,8 +105,8 @@ int nft_fib_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 		return -EINVAL;
+ 	}
+ 
+-	err = nft_validate_register_store(ctx, priv->dreg, NULL,
+-					  NFT_DATA_VALUE, len);
++	err = nft_parse_register_store(ctx, tb[NFTA_FIB_DREG], &priv->dreg,
++				       NULL, NFT_DATA_VALUE, len);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/net/netfilter/nft_fwd_netdev.c b/net/netfilter/nft_fwd_netdev.c
+index 3b0dcd170551b..7730409f6f091 100644
+--- a/net/netfilter/nft_fwd_netdev.c
++++ b/net/netfilter/nft_fwd_netdev.c
+@@ -18,7 +18,7 @@
+ #include <net/ip.h>
+ 
+ struct nft_fwd_netdev {
+-	enum nft_registers	sreg_dev:8;
++	u8	sreg_dev;
+ };
+ 
+ static void nft_fwd_netdev_eval(const struct nft_expr *expr,
+@@ -50,8 +50,8 @@ static int nft_fwd_netdev_init(const struct nft_ctx *ctx,
+ 	if (tb[NFTA_FWD_SREG_DEV] == NULL)
+ 		return -EINVAL;
+ 
+-	priv->sreg_dev = nft_parse_register(tb[NFTA_FWD_SREG_DEV]);
+-	return nft_validate_register_load(priv->sreg_dev, sizeof(int));
++	return nft_parse_register_load(tb[NFTA_FWD_SREG_DEV], &priv->sreg_dev,
++				       sizeof(int));
+ }
+ 
+ static int nft_fwd_netdev_dump(struct sk_buff *skb, const struct nft_expr *expr)
+@@ -83,8 +83,8 @@ static bool nft_fwd_netdev_offload_action(const struct nft_expr *expr)
+ }
+ 
+ struct nft_fwd_neigh {
+-	enum nft_registers	sreg_dev:8;
+-	enum nft_registers	sreg_addr:8;
++	u8			sreg_dev;
++	u8			sreg_addr;
+ 	u8			nfproto;
+ };
+ 
+@@ -162,8 +162,6 @@ static int nft_fwd_neigh_init(const struct nft_ctx *ctx,
+ 	    !tb[NFTA_FWD_NFPROTO])
+ 		return -EINVAL;
+ 
+-	priv->sreg_dev = nft_parse_register(tb[NFTA_FWD_SREG_DEV]);
+-	priv->sreg_addr = nft_parse_register(tb[NFTA_FWD_SREG_ADDR]);
+ 	priv->nfproto = ntohl(nla_get_be32(tb[NFTA_FWD_NFPROTO]));
+ 
+ 	switch (priv->nfproto) {
+@@ -177,11 +175,13 @@ static int nft_fwd_neigh_init(const struct nft_ctx *ctx,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	err = nft_validate_register_load(priv->sreg_dev, sizeof(int));
++	err = nft_parse_register_load(tb[NFTA_FWD_SREG_DEV], &priv->sreg_dev,
++				      sizeof(int));
+ 	if (err < 0)
+ 		return err;
+ 
+-	return nft_validate_register_load(priv->sreg_addr, addr_len);
++	return nft_parse_register_load(tb[NFTA_FWD_SREG_ADDR], &priv->sreg_addr,
++				       addr_len);
+ }
+ 
+ static int nft_fwd_neigh_dump(struct sk_buff *skb, const struct nft_expr *expr)
+diff --git a/net/netfilter/nft_hash.c b/net/netfilter/nft_hash.c
+index 96371d878e7e5..f829f5289e162 100644
+--- a/net/netfilter/nft_hash.c
++++ b/net/netfilter/nft_hash.c
+@@ -14,8 +14,8 @@
+ #include <linux/jhash.h>
+ 
+ struct nft_jhash {
+-	enum nft_registers      sreg:8;
+-	enum nft_registers      dreg:8;
++	u8			sreg;
++	u8			dreg;
+ 	u8			len;
+ 	bool			autogen_seed:1;
+ 	u32			modulus;
+@@ -38,7 +38,7 @@ static void nft_jhash_eval(const struct nft_expr *expr,
+ }
+ 
+ struct nft_symhash {
+-	enum nft_registers      dreg:8;
++	u8			dreg;
+ 	u32			modulus;
+ 	u32			offset;
+ };
+@@ -83,9 +83,6 @@ static int nft_jhash_init(const struct nft_ctx *ctx,
+ 	if (tb[NFTA_HASH_OFFSET])
+ 		priv->offset = ntohl(nla_get_be32(tb[NFTA_HASH_OFFSET]));
+ 
+-	priv->sreg = nft_parse_register(tb[NFTA_HASH_SREG]);
+-	priv->dreg = nft_parse_register(tb[NFTA_HASH_DREG]);
+-
+ 	err = nft_parse_u32_check(tb[NFTA_HASH_LEN], U8_MAX, &len);
+ 	if (err < 0)
+ 		return err;
+@@ -94,6 +91,10 @@ static int nft_jhash_init(const struct nft_ctx *ctx,
+ 
+ 	priv->len = len;
+ 
++	err = nft_parse_register_load(tb[NFTA_HASH_SREG], &priv->sreg, len);
++	if (err < 0)
++		return err;
++
+ 	priv->modulus = ntohl(nla_get_be32(tb[NFTA_HASH_MODULUS]));
+ 	if (priv->modulus < 1)
+ 		return -ERANGE;
+@@ -108,9 +109,8 @@ static int nft_jhash_init(const struct nft_ctx *ctx,
+ 		get_random_bytes(&priv->seed, sizeof(priv->seed));
+ 	}
+ 
+-	return nft_validate_register_load(priv->sreg, len) &&
+-	       nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, sizeof(u32));
++	return nft_parse_register_store(ctx, tb[NFTA_HASH_DREG], &priv->dreg,
++					NULL, NFT_DATA_VALUE, sizeof(u32));
+ }
+ 
+ static int nft_symhash_init(const struct nft_ctx *ctx,
+@@ -126,8 +126,6 @@ static int nft_symhash_init(const struct nft_ctx *ctx,
+ 	if (tb[NFTA_HASH_OFFSET])
+ 		priv->offset = ntohl(nla_get_be32(tb[NFTA_HASH_OFFSET]));
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_HASH_DREG]);
+-
+ 	priv->modulus = ntohl(nla_get_be32(tb[NFTA_HASH_MODULUS]));
+ 	if (priv->modulus < 1)
+ 		return -ERANGE;
+@@ -135,8 +133,9 @@ static int nft_symhash_init(const struct nft_ctx *ctx,
+ 	if (priv->offset + priv->modulus - 1 < priv->offset)
+ 		return -EOVERFLOW;
+ 
+-	return nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, sizeof(u32));
++	return nft_parse_register_store(ctx, tb[NFTA_HASH_DREG],
++					&priv->dreg, NULL, NFT_DATA_VALUE,
++					sizeof(u32));
+ }
+ 
+ static int nft_jhash_dump(struct sk_buff *skb,
+diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c
+index 5c9d88560a474..d0f67d325bdfd 100644
+--- a/net/netfilter/nft_immediate.c
++++ b/net/netfilter/nft_immediate.c
+@@ -48,9 +48,9 @@ static int nft_immediate_init(const struct nft_ctx *ctx,
+ 
+ 	priv->dlen = desc.len;
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_IMMEDIATE_DREG]);
+-	err = nft_validate_register_store(ctx, priv->dreg, &priv->data,
+-					  desc.type, desc.len);
++	err = nft_parse_register_store(ctx, tb[NFTA_IMMEDIATE_DREG],
++				       &priv->dreg, &priv->data, desc.type,
++				       desc.len);
+ 	if (err < 0)
+ 		goto err1;
+ 
+diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
+index f1363b8aabba8..b0f558b4fea54 100644
+--- a/net/netfilter/nft_lookup.c
++++ b/net/netfilter/nft_lookup.c
+@@ -17,8 +17,8 @@
+ 
+ struct nft_lookup {
+ 	struct nft_set			*set;
+-	enum nft_registers		sreg:8;
+-	enum nft_registers		dreg:8;
++	u8				sreg;
++	u8				dreg;
+ 	bool				invert;
+ 	struct nft_set_binding		binding;
+ };
+@@ -76,8 +76,8 @@ static int nft_lookup_init(const struct nft_ctx *ctx,
+ 	if (IS_ERR(set))
+ 		return PTR_ERR(set);
+ 
+-	priv->sreg = nft_parse_register(tb[NFTA_LOOKUP_SREG]);
+-	err = nft_validate_register_load(priv->sreg, set->klen);
++	err = nft_parse_register_load(tb[NFTA_LOOKUP_SREG], &priv->sreg,
++				      set->klen);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -100,9 +100,9 @@ static int nft_lookup_init(const struct nft_ctx *ctx,
+ 		if (!(set->flags & NFT_SET_MAP))
+ 			return -EINVAL;
+ 
+-		priv->dreg = nft_parse_register(tb[NFTA_LOOKUP_DREG]);
+-		err = nft_validate_register_store(ctx, priv->dreg, NULL,
+-						  set->dtype, set->dlen);
++		err = nft_parse_register_store(ctx, tb[NFTA_LOOKUP_DREG],
++					       &priv->dreg, NULL, set->dtype,
++					       set->dlen);
+ 		if (err < 0)
+ 			return err;
+ 	} else if (set->flags & NFT_SET_MAP)
+diff --git a/net/netfilter/nft_masq.c b/net/netfilter/nft_masq.c
+index 71390b7270405..9953e80537536 100644
+--- a/net/netfilter/nft_masq.c
++++ b/net/netfilter/nft_masq.c
+@@ -15,8 +15,8 @@
+ 
+ struct nft_masq {
+ 	u32			flags;
+-	enum nft_registers      sreg_proto_min:8;
+-	enum nft_registers      sreg_proto_max:8;
++	u8			sreg_proto_min;
++	u8			sreg_proto_max;
+ };
+ 
+ static const struct nla_policy nft_masq_policy[NFTA_MASQ_MAX + 1] = {
+@@ -54,19 +54,15 @@ static int nft_masq_init(const struct nft_ctx *ctx,
+ 	}
+ 
+ 	if (tb[NFTA_MASQ_REG_PROTO_MIN]) {
+-		priv->sreg_proto_min =
+-			nft_parse_register(tb[NFTA_MASQ_REG_PROTO_MIN]);
+-
+-		err = nft_validate_register_load(priv->sreg_proto_min, plen);
++		err = nft_parse_register_load(tb[NFTA_MASQ_REG_PROTO_MIN],
++					      &priv->sreg_proto_min, plen);
+ 		if (err < 0)
+ 			return err;
+ 
+ 		if (tb[NFTA_MASQ_REG_PROTO_MAX]) {
+-			priv->sreg_proto_max =
+-				nft_parse_register(tb[NFTA_MASQ_REG_PROTO_MAX]);
+-
+-			err = nft_validate_register_load(priv->sreg_proto_max,
+-							 plen);
++			err = nft_parse_register_load(tb[NFTA_MASQ_REG_PROTO_MAX],
++						      &priv->sreg_proto_max,
++						      plen);
+ 			if (err < 0)
+ 				return err;
+ 		} else {
+diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c
+index bf4b3ad5314c3..44d9b38e5f90c 100644
+--- a/net/netfilter/nft_meta.c
++++ b/net/netfilter/nft_meta.c
+@@ -14,6 +14,7 @@
+ #include <linux/in.h>
+ #include <linux/ip.h>
+ #include <linux/ipv6.h>
++#include <linux/random.h>
+ #include <linux/smp.h>
+ #include <linux/static_key.h>
+ #include <net/dst.h>
+@@ -32,8 +33,6 @@
+ #define NFT_META_SECS_PER_DAY		86400
+ #define NFT_META_DAYS_PER_WEEK		7
+ 
+-static DEFINE_PER_CPU(struct rnd_state, nft_prandom_state);
+-
+ static u8 nft_meta_weekday(void)
+ {
+ 	time64_t secs = ktime_get_real_seconds();
+@@ -267,13 +266,6 @@ static bool nft_meta_get_eval_ifname(enum nft_meta_keys key, u32 *dest,
+ 	return true;
+ }
+ 
+-static noinline u32 nft_prandom_u32(void)
+-{
+-	struct rnd_state *state = this_cpu_ptr(&nft_prandom_state);
+-
+-	return prandom_u32_state(state);
+-}
+-
+ #ifdef CONFIG_IP_ROUTE_CLASSID
+ static noinline bool
+ nft_meta_get_eval_rtclassid(const struct sk_buff *skb, u32 *dest)
+@@ -385,7 +377,7 @@ void nft_meta_get_eval(const struct nft_expr *expr,
+ 		break;
+ #endif
+ 	case NFT_META_PRANDOM:
+-		*dest = nft_prandom_u32();
++		*dest = get_random_u32();
+ 		break;
+ #ifdef CONFIG_XFRM
+ 	case NFT_META_SECPATH:
+@@ -514,7 +506,6 @@ int nft_meta_get_init(const struct nft_ctx *ctx,
+ 		len = IFNAMSIZ;
+ 		break;
+ 	case NFT_META_PRANDOM:
+-		prandom_init_once(&nft_prandom_state);
+ 		len = sizeof(u32);
+ 		break;
+ #ifdef CONFIG_XFRM
+@@ -535,9 +526,8 @@ int nft_meta_get_init(const struct nft_ctx *ctx,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_META_DREG]);
+-	return nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, len);
++	return nft_parse_register_store(ctx, tb[NFTA_META_DREG], &priv->dreg,
++					NULL, NFT_DATA_VALUE, len);
+ }
+ EXPORT_SYMBOL_GPL(nft_meta_get_init);
+ 
+@@ -661,8 +651,7 @@ int nft_meta_set_init(const struct nft_ctx *ctx,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	priv->sreg = nft_parse_register(tb[NFTA_META_SREG]);
+-	err = nft_validate_register_load(priv->sreg, len);
++	err = nft_parse_register_load(tb[NFTA_META_SREG], &priv->sreg, len);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c
+index 6a4a5ac88db70..db8f9116eeb43 100644
+--- a/net/netfilter/nft_nat.c
++++ b/net/netfilter/nft_nat.c
+@@ -21,10 +21,10 @@
+ #include <net/ip.h>
+ 
+ struct nft_nat {
+-	enum nft_registers      sreg_addr_min:8;
+-	enum nft_registers      sreg_addr_max:8;
+-	enum nft_registers      sreg_proto_min:8;
+-	enum nft_registers      sreg_proto_max:8;
++	u8			sreg_addr_min;
++	u8			sreg_addr_max;
++	u8			sreg_proto_min;
++	u8			sreg_proto_max;
+ 	enum nf_nat_manip_type  type:8;
+ 	u8			family;
+ 	u16			flags;
+@@ -208,18 +208,15 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 	priv->family = family;
+ 
+ 	if (tb[NFTA_NAT_REG_ADDR_MIN]) {
+-		priv->sreg_addr_min =
+-			nft_parse_register(tb[NFTA_NAT_REG_ADDR_MIN]);
+-		err = nft_validate_register_load(priv->sreg_addr_min, alen);
++		err = nft_parse_register_load(tb[NFTA_NAT_REG_ADDR_MIN],
++					      &priv->sreg_addr_min, alen);
+ 		if (err < 0)
+ 			return err;
+ 
+ 		if (tb[NFTA_NAT_REG_ADDR_MAX]) {
+-			priv->sreg_addr_max =
+-				nft_parse_register(tb[NFTA_NAT_REG_ADDR_MAX]);
+-
+-			err = nft_validate_register_load(priv->sreg_addr_max,
+-							 alen);
++			err = nft_parse_register_load(tb[NFTA_NAT_REG_ADDR_MAX],
++						      &priv->sreg_addr_max,
++						      alen);
+ 			if (err < 0)
+ 				return err;
+ 		} else {
+@@ -231,19 +228,15 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 
+ 	plen = sizeof_field(struct nf_nat_range, min_addr.all);
+ 	if (tb[NFTA_NAT_REG_PROTO_MIN]) {
+-		priv->sreg_proto_min =
+-			nft_parse_register(tb[NFTA_NAT_REG_PROTO_MIN]);
+-
+-		err = nft_validate_register_load(priv->sreg_proto_min, plen);
++		err = nft_parse_register_load(tb[NFTA_NAT_REG_PROTO_MIN],
++					      &priv->sreg_proto_min, plen);
+ 		if (err < 0)
+ 			return err;
+ 
+ 		if (tb[NFTA_NAT_REG_PROTO_MAX]) {
+-			priv->sreg_proto_max =
+-				nft_parse_register(tb[NFTA_NAT_REG_PROTO_MAX]);
+-
+-			err = nft_validate_register_load(priv->sreg_proto_max,
+-							 plen);
++			err = nft_parse_register_load(tb[NFTA_NAT_REG_PROTO_MAX],
++						      &priv->sreg_proto_max,
++						      plen);
+ 			if (err < 0)
+ 				return err;
+ 		} else {
+diff --git a/net/netfilter/nft_numgen.c b/net/netfilter/nft_numgen.c
+index f1fc824f97370..4e43214e88def 100644
+--- a/net/netfilter/nft_numgen.c
++++ b/net/netfilter/nft_numgen.c
+@@ -9,14 +9,13 @@
+ #include <linux/netlink.h>
+ #include <linux/netfilter.h>
+ #include <linux/netfilter/nf_tables.h>
++#include <linux/random.h>
+ #include <linux/static_key.h>
+ #include <net/netfilter/nf_tables.h>
+ #include <net/netfilter/nf_tables_core.h>
+ 
+-static DEFINE_PER_CPU(struct rnd_state, nft_numgen_prandom_state);
+-
+ struct nft_ng_inc {
+-	enum nft_registers      dreg:8;
++	u8			dreg;
+ 	u32			modulus;
+ 	atomic_t		counter;
+ 	u32			offset;
+@@ -66,11 +65,10 @@ static int nft_ng_inc_init(const struct nft_ctx *ctx,
+ 	if (priv->offset + priv->modulus - 1 < priv->offset)
+ 		return -EOVERFLOW;
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_NG_DREG]);
+ 	atomic_set(&priv->counter, priv->modulus - 1);
+ 
+-	return nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, sizeof(u32));
++	return nft_parse_register_store(ctx, tb[NFTA_NG_DREG], &priv->dreg,
++					NULL, NFT_DATA_VALUE, sizeof(u32));
+ }
+ 
+ static int nft_ng_dump(struct sk_buff *skb, enum nft_registers dreg,
+@@ -100,17 +98,14 @@ static int nft_ng_inc_dump(struct sk_buff *skb, const struct nft_expr *expr)
+ }
+ 
+ struct nft_ng_random {
+-	enum nft_registers      dreg:8;
++	u8			dreg;
+ 	u32			modulus;
+ 	u32			offset;
+ };
+ 
+-static u32 nft_ng_random_gen(struct nft_ng_random *priv)
++static u32 nft_ng_random_gen(const struct nft_ng_random *priv)
+ {
+-	struct rnd_state *state = this_cpu_ptr(&nft_numgen_prandom_state);
+-
+-	return reciprocal_scale(prandom_u32_state(state), priv->modulus) +
+-	       priv->offset;
++	return reciprocal_scale(get_random_u32(), priv->modulus) + priv->offset;
+ }
+ 
+ static void nft_ng_random_eval(const struct nft_expr *expr,
+@@ -138,12 +133,8 @@ static int nft_ng_random_init(const struct nft_ctx *ctx,
+ 	if (priv->offset + priv->modulus - 1 < priv->offset)
+ 		return -EOVERFLOW;
+ 
+-	prandom_init_once(&nft_numgen_prandom_state);
+-
+-	priv->dreg = nft_parse_register(tb[NFTA_NG_DREG]);
+-
+-	return nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, sizeof(u32));
++	return nft_parse_register_store(ctx, tb[NFTA_NG_DREG], &priv->dreg,
++					NULL, NFT_DATA_VALUE, sizeof(u32));
+ }
+ 
+ static int nft_ng_random_dump(struct sk_buff *skb, const struct nft_expr *expr)
+diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c
+index 5f9207a9f4851..bc104d36d3bb2 100644
+--- a/net/netfilter/nft_objref.c
++++ b/net/netfilter/nft_objref.c
+@@ -95,7 +95,7 @@ static const struct nft_expr_ops nft_objref_ops = {
+ 
+ struct nft_objref_map {
+ 	struct nft_set		*set;
+-	enum nft_registers	sreg:8;
++	u8			sreg;
+ 	struct nft_set_binding	binding;
+ };
+ 
+@@ -137,8 +137,8 @@ static int nft_objref_map_init(const struct nft_ctx *ctx,
+ 	if (!(set->flags & NFT_SET_OBJECT))
+ 		return -EINVAL;
+ 
+-	priv->sreg = nft_parse_register(tb[NFTA_OBJREF_SET_SREG]);
+-	err = nft_validate_register_load(priv->sreg, set->klen);
++	err = nft_parse_register_load(tb[NFTA_OBJREF_SET_SREG], &priv->sreg,
++				      set->klen);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/net/netfilter/nft_osf.c b/net/netfilter/nft_osf.c
+index 2c957629ea660..d82677e83400b 100644
+--- a/net/netfilter/nft_osf.c
++++ b/net/netfilter/nft_osf.c
+@@ -6,7 +6,7 @@
+ #include <linux/netfilter/nfnetlink_osf.h>
+ 
+ struct nft_osf {
+-	enum nft_registers	dreg:8;
++	u8			dreg;
+ 	u8			ttl;
+ 	u32			flags;
+ };
+@@ -83,9 +83,9 @@ static int nft_osf_init(const struct nft_ctx *ctx,
+ 		priv->flags = flags;
+ 	}
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_OSF_DREG]);
+-	err = nft_validate_register_store(ctx, priv->dreg, NULL,
+-					  NFT_DATA_VALUE, NFT_OSF_MAXGENRELEN);
++	err = nft_parse_register_store(ctx, tb[NFTA_OSF_DREG], &priv->dreg,
++				       NULL, NFT_DATA_VALUE,
++				       NFT_OSF_MAXGENRELEN);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index 6a8495bd08bb2..01878c16418c2 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -144,10 +144,10 @@ static int nft_payload_init(const struct nft_ctx *ctx,
+ 	priv->base   = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_BASE]));
+ 	priv->offset = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_OFFSET]));
+ 	priv->len    = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_LEN]));
+-	priv->dreg   = nft_parse_register(tb[NFTA_PAYLOAD_DREG]);
+ 
+-	return nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, priv->len);
++	return nft_parse_register_store(ctx, tb[NFTA_PAYLOAD_DREG],
++					&priv->dreg, NULL, NFT_DATA_VALUE,
++					priv->len);
+ }
+ 
+ static int nft_payload_dump(struct sk_buff *skb, const struct nft_expr *expr)
+@@ -664,7 +664,6 @@ static int nft_payload_set_init(const struct nft_ctx *ctx,
+ 	priv->base        = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_BASE]));
+ 	priv->offset      = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_OFFSET]));
+ 	priv->len         = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_LEN]));
+-	priv->sreg        = nft_parse_register(tb[NFTA_PAYLOAD_SREG]);
+ 
+ 	if (tb[NFTA_PAYLOAD_CSUM_TYPE])
+ 		priv->csum_type =
+@@ -697,7 +696,8 @@ static int nft_payload_set_init(const struct nft_ctx *ctx,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	return nft_validate_register_load(priv->sreg, priv->len);
++	return nft_parse_register_load(tb[NFTA_PAYLOAD_SREG], &priv->sreg,
++				       priv->len);
+ }
+ 
+ static int nft_payload_set_dump(struct sk_buff *skb, const struct nft_expr *expr)
+diff --git a/net/netfilter/nft_queue.c b/net/netfilter/nft_queue.c
+index 23265d757acbc..9ba1de51ac070 100644
+--- a/net/netfilter/nft_queue.c
++++ b/net/netfilter/nft_queue.c
+@@ -19,10 +19,10 @@
+ static u32 jhash_initval __read_mostly;
+ 
+ struct nft_queue {
+-	enum nft_registers	sreg_qnum:8;
+-	u16			queuenum;
+-	u16			queues_total;
+-	u16			flags;
++	u8	sreg_qnum;
++	u16	queuenum;
++	u16	queues_total;
++	u16	flags;
+ };
+ 
+ static void nft_queue_eval(const struct nft_expr *expr,
+@@ -111,8 +111,8 @@ static int nft_queue_sreg_init(const struct nft_ctx *ctx,
+ 	struct nft_queue *priv = nft_expr_priv(expr);
+ 	int err;
+ 
+-	priv->sreg_qnum = nft_parse_register(tb[NFTA_QUEUE_SREG_QNUM]);
+-	err = nft_validate_register_load(priv->sreg_qnum, sizeof(u32));
++	err = nft_parse_register_load(tb[NFTA_QUEUE_SREG_QNUM],
++				      &priv->sreg_qnum, sizeof(u32));
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/net/netfilter/nft_range.c b/net/netfilter/nft_range.c
+index 89efcc5a533d2..e4a1c44d7f513 100644
+--- a/net/netfilter/nft_range.c
++++ b/net/netfilter/nft_range.c
+@@ -15,7 +15,7 @@
+ struct nft_range_expr {
+ 	struct nft_data		data_from;
+ 	struct nft_data		data_to;
+-	enum nft_registers	sreg:8;
++	u8			sreg;
+ 	u8			len;
+ 	enum nft_range_ops	op:8;
+ };
+@@ -86,8 +86,8 @@ static int nft_range_init(const struct nft_ctx *ctx, const struct nft_expr *expr
+ 		goto err2;
+ 	}
+ 
+-	priv->sreg = nft_parse_register(tb[NFTA_RANGE_SREG]);
+-	err = nft_validate_register_load(priv->sreg, desc_from.len);
++	err = nft_parse_register_load(tb[NFTA_RANGE_SREG], &priv->sreg,
++				      desc_from.len);
+ 	if (err < 0)
+ 		goto err2;
+ 
+diff --git a/net/netfilter/nft_redir.c b/net/netfilter/nft_redir.c
+index 2056051c0af0d..ba09890dddb50 100644
+--- a/net/netfilter/nft_redir.c
++++ b/net/netfilter/nft_redir.c
+@@ -14,8 +14,8 @@
+ #include <net/netfilter/nf_tables.h>
+ 
+ struct nft_redir {
+-	enum nft_registers	sreg_proto_min:8;
+-	enum nft_registers	sreg_proto_max:8;
++	u8			sreg_proto_min;
++	u8			sreg_proto_max;
+ 	u16			flags;
+ };
+ 
+@@ -50,19 +50,15 @@ static int nft_redir_init(const struct nft_ctx *ctx,
+ 
+ 	plen = sizeof_field(struct nf_nat_range, min_addr.all);
+ 	if (tb[NFTA_REDIR_REG_PROTO_MIN]) {
+-		priv->sreg_proto_min =
+-			nft_parse_register(tb[NFTA_REDIR_REG_PROTO_MIN]);
+-
+-		err = nft_validate_register_load(priv->sreg_proto_min, plen);
++		err = nft_parse_register_load(tb[NFTA_REDIR_REG_PROTO_MIN],
++					      &priv->sreg_proto_min, plen);
+ 		if (err < 0)
+ 			return err;
+ 
+ 		if (tb[NFTA_REDIR_REG_PROTO_MAX]) {
+-			priv->sreg_proto_max =
+-				nft_parse_register(tb[NFTA_REDIR_REG_PROTO_MAX]);
+-
+-			err = nft_validate_register_load(priv->sreg_proto_max,
+-							 plen);
++			err = nft_parse_register_load(tb[NFTA_REDIR_REG_PROTO_MAX],
++						      &priv->sreg_proto_max,
++						      plen);
+ 			if (err < 0)
+ 				return err;
+ 		} else {
+diff --git a/net/netfilter/nft_rt.c b/net/netfilter/nft_rt.c
+index 7cfcb0e2f7ee1..bcd01a63e38f1 100644
+--- a/net/netfilter/nft_rt.c
++++ b/net/netfilter/nft_rt.c
+@@ -15,7 +15,7 @@
+ 
+ struct nft_rt {
+ 	enum nft_rt_keys	key:8;
+-	enum nft_registers	dreg:8;
++	u8			dreg;
+ };
+ 
+ static u16 get_tcpmss(const struct nft_pktinfo *pkt, const struct dst_entry *skbdst)
+@@ -141,9 +141,8 @@ static int nft_rt_get_init(const struct nft_ctx *ctx,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_RT_DREG]);
+-	return nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, len);
++	return nft_parse_register_store(ctx, tb[NFTA_RT_DREG], &priv->dreg,
++					NULL, NFT_DATA_VALUE, len);
+ }
+ 
+ static int nft_rt_get_dump(struct sk_buff *skb,
+diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c
+index 8a0125e966c83..f6d517185d9c0 100644
+--- a/net/netfilter/nft_socket.c
++++ b/net/netfilter/nft_socket.c
+@@ -10,7 +10,7 @@
+ struct nft_socket {
+ 	enum nft_socket_keys		key:8;
+ 	union {
+-		enum nft_registers	dreg:8;
++		u8			dreg;
+ 	};
+ };
+ 
+@@ -146,9 +146,8 @@ static int nft_socket_init(const struct nft_ctx *ctx,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_SOCKET_DREG]);
+-	return nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, len);
++	return nft_parse_register_store(ctx, tb[NFTA_SOCKET_DREG], &priv->dreg,
++					NULL, NFT_DATA_VALUE, len);
+ }
+ 
+ static int nft_socket_dump(struct sk_buff *skb,
+diff --git a/net/netfilter/nft_tproxy.c b/net/netfilter/nft_tproxy.c
+index 242222dc52c3c..37c728bdad41c 100644
+--- a/net/netfilter/nft_tproxy.c
++++ b/net/netfilter/nft_tproxy.c
+@@ -13,9 +13,9 @@
+ #endif
+ 
+ struct nft_tproxy {
+-	enum nft_registers      sreg_addr:8;
+-	enum nft_registers      sreg_port:8;
+-	u8			family;
++	u8	sreg_addr;
++	u8	sreg_port;
++	u8	family;
+ };
+ 
+ static void nft_tproxy_eval_v4(const struct nft_expr *expr,
+@@ -254,15 +254,15 @@ static int nft_tproxy_init(const struct nft_ctx *ctx,
+ 	}
+ 
+ 	if (tb[NFTA_TPROXY_REG_ADDR]) {
+-		priv->sreg_addr = nft_parse_register(tb[NFTA_TPROXY_REG_ADDR]);
+-		err = nft_validate_register_load(priv->sreg_addr, alen);
++		err = nft_parse_register_load(tb[NFTA_TPROXY_REG_ADDR],
++					      &priv->sreg_addr, alen);
+ 		if (err < 0)
+ 			return err;
+ 	}
+ 
+ 	if (tb[NFTA_TPROXY_REG_PORT]) {
+-		priv->sreg_port = nft_parse_register(tb[NFTA_TPROXY_REG_PORT]);
+-		err = nft_validate_register_load(priv->sreg_port, sizeof(u16));
++		err = nft_parse_register_load(tb[NFTA_TPROXY_REG_PORT],
++					      &priv->sreg_port, sizeof(u16));
+ 		if (err < 0)
+ 			return err;
+ 	}
+diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c
+index d3eb953d0333b..3b27926d5382c 100644
+--- a/net/netfilter/nft_tunnel.c
++++ b/net/netfilter/nft_tunnel.c
+@@ -15,7 +15,7 @@
+ 
+ struct nft_tunnel {
+ 	enum nft_tunnel_keys	key:8;
+-	enum nft_registers	dreg:8;
++	u8			dreg;
+ 	enum nft_tunnel_mode	mode:8;
+ };
+ 
+@@ -93,8 +93,6 @@ static int nft_tunnel_get_init(const struct nft_ctx *ctx,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_TUNNEL_DREG]);
+-
+ 	if (tb[NFTA_TUNNEL_MODE]) {
+ 		priv->mode = ntohl(nla_get_be32(tb[NFTA_TUNNEL_MODE]));
+ 		if (priv->mode > NFT_TUNNEL_MODE_MAX)
+@@ -103,8 +101,8 @@ static int nft_tunnel_get_init(const struct nft_ctx *ctx,
+ 		priv->mode = NFT_TUNNEL_MODE_NONE;
+ 	}
+ 
+-	return nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, len);
++	return nft_parse_register_store(ctx, tb[NFTA_TUNNEL_DREG], &priv->dreg,
++					NULL, NFT_DATA_VALUE, len);
+ }
+ 
+ static int nft_tunnel_get_dump(struct sk_buff *skb,
+diff --git a/net/netfilter/nft_xfrm.c b/net/netfilter/nft_xfrm.c
+index 06d5cabf1d7c4..cbbbc4ecad3ae 100644
+--- a/net/netfilter/nft_xfrm.c
++++ b/net/netfilter/nft_xfrm.c
+@@ -24,7 +24,7 @@ static const struct nla_policy nft_xfrm_policy[NFTA_XFRM_MAX + 1] = {
+ 
+ struct nft_xfrm {
+ 	enum nft_xfrm_keys	key:8;
+-	enum nft_registers	dreg:8;
++	u8			dreg;
+ 	u8			dir;
+ 	u8			spnum;
+ };
+@@ -86,9 +86,8 @@ static int nft_xfrm_get_init(const struct nft_ctx *ctx,
+ 
+ 	priv->spnum = spnum;
+ 
+-	priv->dreg = nft_parse_register(tb[NFTA_XFRM_DREG]);
+-	return nft_validate_register_store(ctx, priv->dreg, NULL,
+-					   NFT_DATA_VALUE, len);
++	return nft_parse_register_store(ctx, tb[NFTA_XFRM_DREG], &priv->dreg,
++					NULL, NFT_DATA_VALUE, len);
+ }
+ 
+ /* Return true if key asks for daddr/saddr and current
+diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
+index b03d142ec82ef..c9ba61413c98b 100644
+--- a/net/openvswitch/flow.c
++++ b/net/openvswitch/flow.c
+@@ -265,7 +265,7 @@ static int parse_ipv6hdr(struct sk_buff *skb, struct sw_flow_key *key)
+ 	if (flags & IP6_FH_F_FRAG) {
+ 		if (frag_off) {
+ 			key->ip.frag = OVS_FRAG_TYPE_LATER;
+-			key->ip.proto = nexthdr;
++			key->ip.proto = NEXTHDR_FRAGMENT;
+ 			return 0;
+ 		}
+ 		key->ip.frag = OVS_FRAG_TYPE_FIRST;
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 0c345e43a09a3..adc5407fd5d58 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -1146,9 +1146,9 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
+ 	struct tc_netem_rate rate;
+ 	struct tc_netem_slot slot;
+ 
+-	qopt.latency = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->latency),
++	qopt.latency = min_t(psched_time_t, PSCHED_NS2TICKS(q->latency),
+ 			     UINT_MAX);
+-	qopt.jitter = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->jitter),
++	qopt.jitter = min_t(psched_time_t, PSCHED_NS2TICKS(q->jitter),
+ 			    UINT_MAX);
+ 	qopt.limit = q->limit;
+ 	qopt.loss = q->loss;
+diff --git a/net/tipc/core.c b/net/tipc/core.c
+index 40c03085c0eaf..7724499f516e9 100644
+--- a/net/tipc/core.c
++++ b/net/tipc/core.c
+@@ -60,7 +60,7 @@ static int __net_init tipc_init_net(struct net *net)
+ 	tn->trial_addr = 0;
+ 	tn->addr_trial_end = 0;
+ 	tn->capabilities = TIPC_NODE_CAPABILITIES;
+-	INIT_WORK(&tn->final_work.work, tipc_net_finalize_work);
++	INIT_WORK(&tn->work, tipc_net_finalize_work);
+ 	memset(tn->node_id, 0, sizeof(tn->node_id));
+ 	memset(tn->node_id_string, 0, sizeof(tn->node_id_string));
+ 	tn->mon_threshold = TIPC_DEF_MON_THRESHOLD;
+@@ -111,10 +111,9 @@ static void __net_exit tipc_exit_net(struct net *net)
+ 	struct tipc_net *tn = tipc_net(net);
+ 
+ 	tipc_detach_loopback(net);
+-	/* Make sure the tipc_net_finalize_work() finished */
+-	cancel_work_sync(&tn->final_work.work);
+ 	tipc_net_stop(net);
+-
++	/* Make sure the tipc_net_finalize_work() finished */
++	cancel_work_sync(&tn->work);
+ 	tipc_bcast_stop(net);
+ 	tipc_nametbl_stop(net);
+ 	tipc_sk_rht_destroy(net);
+diff --git a/net/tipc/core.h b/net/tipc/core.h
+index 992924a849be6..73a26b0b9ca19 100644
+--- a/net/tipc/core.h
++++ b/net/tipc/core.h
+@@ -90,12 +90,6 @@ extern unsigned int tipc_net_id __read_mostly;
+ extern int sysctl_tipc_rmem[3] __read_mostly;
+ extern int sysctl_tipc_named_timeout __read_mostly;
+ 
+-struct tipc_net_work {
+-	struct work_struct work;
+-	struct net *net;
+-	u32 addr;
+-};
+-
+ struct tipc_net {
+ 	u8  node_id[NODE_ID_LEN];
+ 	u32 node_addr;
+@@ -150,7 +144,7 @@ struct tipc_net {
+ 	struct tipc_crypto *crypto_tx;
+ #endif
+ 	/* Work item for net finalize */
+-	struct tipc_net_work final_work;
++	struct work_struct work;
+ 	/* The numbers of work queues in schedule */
+ 	atomic_t wq_count;
+ };
+diff --git a/net/tipc/discover.c b/net/tipc/discover.c
+index d4ecacddb40ce..14bc20604051d 100644
+--- a/net/tipc/discover.c
++++ b/net/tipc/discover.c
+@@ -167,7 +167,7 @@ static bool tipc_disc_addr_trial_msg(struct tipc_discoverer *d,
+ 
+ 	/* Apply trial address if we just left trial period */
+ 	if (!trial && !self) {
+-		tipc_sched_net_finalize(net, tn->trial_addr);
++		schedule_work(&tn->work);
+ 		msg_set_prevnode(buf_msg(d->skb), tn->trial_addr);
+ 		msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
+ 	}
+@@ -307,7 +307,7 @@ static void tipc_disc_timeout(struct timer_list *t)
+ 	if (!time_before(jiffies, tn->addr_trial_end) && !tipc_own_addr(net)) {
+ 		mod_timer(&d->timer, jiffies + TIPC_DISC_INIT);
+ 		spin_unlock_bh(&d->lock);
+-		tipc_sched_net_finalize(net, tn->trial_addr);
++		schedule_work(&tn->work);
+ 		return;
+ 	}
+ 
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index 7a353ff628448..064fdb8e50e19 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -344,6 +344,11 @@ char tipc_link_plane(struct tipc_link *l)
+ 	return l->net_plane;
+ }
+ 
++struct net *tipc_link_net(struct tipc_link *l)
++{
++	return l->net;
++}
++
+ void tipc_link_update_caps(struct tipc_link *l, u16 capabilities)
+ {
+ 	l->peer_caps = capabilities;
+diff --git a/net/tipc/link.h b/net/tipc/link.h
+index fc07232c9a127..a16f401fdabda 100644
+--- a/net/tipc/link.h
++++ b/net/tipc/link.h
+@@ -156,4 +156,5 @@ int tipc_link_bc_sync_rcv(struct tipc_link *l,   struct tipc_msg *hdr,
+ int tipc_link_bc_nack_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 			  struct sk_buff_head *xmitq);
+ bool tipc_link_too_silent(struct tipc_link *l);
++struct net *tipc_link_net(struct tipc_link *l);
+ #endif
+diff --git a/net/tipc/net.c b/net/tipc/net.c
+index 0bb2323201daa..671cb4f9d5633 100644
+--- a/net/tipc/net.c
++++ b/net/tipc/net.c
+@@ -41,6 +41,7 @@
+ #include "socket.h"
+ #include "node.h"
+ #include "bcast.h"
++#include "link.h"
+ #include "netlink.h"
+ #include "monitor.h"
+ 
+@@ -138,19 +139,9 @@ static void tipc_net_finalize(struct net *net, u32 addr)
+ 
+ void tipc_net_finalize_work(struct work_struct *work)
+ {
+-	struct tipc_net_work *fwork;
++	struct tipc_net *tn = container_of(work, struct tipc_net, work);
+ 
+-	fwork = container_of(work, struct tipc_net_work, work);
+-	tipc_net_finalize(fwork->net, fwork->addr);
+-}
+-
+-void tipc_sched_net_finalize(struct net *net, u32 addr)
+-{
+-	struct tipc_net *tn = tipc_net(net);
+-
+-	tn->final_work.net = net;
+-	tn->final_work.addr = addr;
+-	schedule_work(&tn->final_work.work);
++	tipc_net_finalize(tipc_link_net(tn->bcl), tn->trial_addr);
+ }
+ 
+ void tipc_net_stop(struct net *net)
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 79aef50ede170..e48742760fec8 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -1119,7 +1119,7 @@ static const struct sectioncheck sectioncheck[] = {
+ },
+ /* Do not export init/exit functions or data */
+ {
+-	.fromsec = { "__ksymtab*", NULL },
++	.fromsec = { "___ksymtab*", NULL },
+ 	.bad_tosec = { INIT_SECTIONS, EXIT_SECTIONS, NULL },
+ 	.mismatch = EXPORT_TO_INIT_EXIT,
+ 	.symbol_white_list = { DEFAULT_SYMBOL_WHITE_LIST, NULL },
+diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c
+index 4dc01647753c8..ec821a263036d 100644
+--- a/sound/pci/hda/hda_auto_parser.c
++++ b/sound/pci/hda/hda_auto_parser.c
+@@ -823,7 +823,7 @@ static void set_pin_targets(struct hda_codec *codec,
+ 		snd_hda_set_pin_ctl_cache(codec, cfg->nid, cfg->val);
+ }
+ 
+-static void apply_fixup(struct hda_codec *codec, int id, int action, int depth)
++void __snd_hda_apply_fixup(struct hda_codec *codec, int id, int action, int depth)
+ {
+ 	const char *modelname = codec->fixup_name;
+ 
+@@ -833,7 +833,7 @@ static void apply_fixup(struct hda_codec *codec, int id, int action, int depth)
+ 		if (++depth > 10)
+ 			break;
+ 		if (fix->chained_before)
+-			apply_fixup(codec, fix->chain_id, action, depth + 1);
++			__snd_hda_apply_fixup(codec, fix->chain_id, action, depth + 1);
+ 
+ 		switch (fix->type) {
+ 		case HDA_FIXUP_PINS:
+@@ -874,6 +874,7 @@ static void apply_fixup(struct hda_codec *codec, int id, int action, int depth)
+ 		id = fix->chain_id;
+ 	}
+ }
++EXPORT_SYMBOL_GPL(__snd_hda_apply_fixup);
+ 
+ /**
+  * snd_hda_apply_fixup - Apply the fixup chain with the given action
+@@ -883,7 +884,7 @@ static void apply_fixup(struct hda_codec *codec, int id, int action, int depth)
+ void snd_hda_apply_fixup(struct hda_codec *codec, int action)
+ {
+ 	if (codec->fixup_list)
+-		apply_fixup(codec, codec->fixup_id, action, 0);
++		__snd_hda_apply_fixup(codec, codec->fixup_id, action, 0);
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_apply_fixup);
+ 
+diff --git a/sound/pci/hda/hda_local.h b/sound/pci/hda/hda_local.h
+index 5beb8aa44ecd8..efc0c68a54427 100644
+--- a/sound/pci/hda/hda_local.h
++++ b/sound/pci/hda/hda_local.h
+@@ -357,6 +357,7 @@ void snd_hda_apply_verbs(struct hda_codec *codec);
+ void snd_hda_apply_pincfgs(struct hda_codec *codec,
+ 			   const struct hda_pintbl *cfg);
+ void snd_hda_apply_fixup(struct hda_codec *codec, int action);
++void __snd_hda_apply_fixup(struct hda_codec *codec, int id, int action, int depth);
+ void snd_hda_pick_fixup(struct hda_codec *codec,
+ 			const struct hda_model_fixup *models,
+ 			const struct snd_pci_quirk *quirk,
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 0dd6d37db9666..53b7ea86f3f84 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -1072,11 +1072,11 @@ static int patch_conexant_auto(struct hda_codec *codec)
+ 	if (err < 0)
+ 		goto error;
+ 
+-	err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg);
++	err = cx_auto_parse_beep(codec);
+ 	if (err < 0)
+ 		goto error;
+ 
+-	err = cx_auto_parse_beep(codec);
++	err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg);
+ 	if (err < 0)
+ 		goto error;
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 7c720f03c1349..604f55ec7944b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2643,6 +2643,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67f1, "Clevo PC70H[PRS]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x67f5, "Clevo PD70PN[NRT]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170SM", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x7715, "Clevo X170KM-G", ALC1220_FIXUP_CLEVO_PB51ED),
+@@ -6827,6 +6828,7 @@ enum {
+ 	ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS,
+ 	ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
+ 	ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
++	ALC298_FIXUP_LENOVO_C940_DUET7,
+ 	ALC287_FIXUP_13S_GEN2_SPEAKERS,
+ 	ALC256_FIXUP_SET_COEF_DEFAULTS,
+ 	ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+@@ -6836,6 +6838,23 @@ enum {
+ 	ALC285_FIXUP_LEGION_Y9000X_AUTOMUTE,
+ };
+ 
++/* A special fixup for Lenovo C940 and Yoga Duet 7;
++ * both have the very same PCI SSID, and we need to apply different fixups
++ * depending on the codec ID
++ */
++static void alc298_fixup_lenovo_c940_duet7(struct hda_codec *codec,
++					   const struct hda_fixup *fix,
++					   int action)
++{
++	int id;
++
++	if (codec->core.vendor_id == 0x10ec0298)
++		id = ALC298_FIXUP_LENOVO_SPK_VOLUME; /* C940 */
++	else
++		id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* Duet 7 */
++	__snd_hda_apply_fixup(codec, id, action, 0);
++}
++
+ static const struct hda_fixup alc269_fixups[] = {
+ 	[ALC269_FIXUP_GPIO2] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -8529,6 +8548,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_HEADSET_MODE,
+ 	},
++	[ALC298_FIXUP_LENOVO_C940_DUET7] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc298_fixup_lenovo_c940_duet7,
++	},
+ 	[ALC287_FIXUP_13S_GEN2_SPEAKERS] = {
+ 		.type = HDA_FIXUP_VERBS,
+ 		.v.verbs = (const struct hda_verb[]) {
+@@ -8768,6 +8791,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8783, "HP ZBook Fury 15 G7 Mobile Workstation",
+ 		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
++	SND_PCI_QUIRK(0x103c, 0x8787, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+@@ -8909,6 +8933,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x70f3, "Clevo NH77DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -8992,7 +9017,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
+ 	SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+-	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
++	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940 / Yoga Duet 7", ALC298_FIXUP_LENOVO_C940_DUET7),
+ 	SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3820, "Yoga Duet 7 13ITL6", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS),
+@@ -10446,6 +10471,7 @@ enum {
+ 	ALC668_FIXUP_MIC_DET_COEF,
+ 	ALC897_FIXUP_LENOVO_HEADSET_MIC,
+ 	ALC897_FIXUP_HEADSET_MIC_PIN,
++	ALC897_FIXUP_HP_HSMIC_VERB,
+ };
+ 
+ static const struct hda_fixup alc662_fixups[] = {
+@@ -10865,6 +10891,13 @@ static const struct hda_fixup alc662_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC897_FIXUP_LENOVO_HEADSET_MIC
+ 	},
++	[ALC897_FIXUP_HP_HSMIC_VERB] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */
++			{ }
++		},
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+@@ -10890,6 +10923,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0698, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
++	SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ 	SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
+diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
+index 773a136161f11..a188901a83bbe 100644
+--- a/sound/pci/hda/patch_via.c
++++ b/sound/pci/hda/patch_via.c
+@@ -520,11 +520,11 @@ static int via_parse_auto_config(struct hda_codec *codec)
+ 	if (err < 0)
+ 		return err;
+ 
+-	err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg);
++	err = auto_parse_beep(codec);
+ 	if (err < 0)
+ 		return err;
+ 
+-	err = auto_parse_beep(codec);
++	err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/tools/testing/selftests/netfilter/nft_concat_range.sh b/tools/testing/selftests/netfilter/nft_concat_range.sh
+index b5eef5ffb58e5..af3461cb5c409 100755
+--- a/tools/testing/selftests/netfilter/nft_concat_range.sh
++++ b/tools/testing/selftests/netfilter/nft_concat_range.sh
+@@ -31,7 +31,7 @@ BUGS="flush_remove_add reload"
+ 
+ # List of possible paths to pktgen script from kernel tree for performance tests
+ PKTGEN_SCRIPT_PATHS="
+-	../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
++	../../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
+ 	pktgen/pktgen_bench_xmit_mode_netif_receive.sh"
+ 
+ # Definition of set types:


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-07-02 16:10 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-07-02 16:10 UTC (permalink / raw
  To: gentoo-commits

commit:     ba3c411e0ad756ee82a1f10eb6f03b842f1f486a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jul  2 16:09:40 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jul  2 16:09:40 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ba3c411e

Linux patch 5.10.128

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 +
 1127_linux-5.10.128.patch | 397 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 401 insertions(+)

diff --git a/0000_README b/0000_README
index 0104ffd6..26bbe9cb 100644
--- a/0000_README
+++ b/0000_README
@@ -551,6 +551,10 @@ Patch:  1126_linux-5.10.127.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.127
 
+Patch:  1127_linux-5.10.128.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.128
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1127_linux-5.10.128.patch b/1127_linux-5.10.128.patch
new file mode 100644
index 00000000..eb112269
--- /dev/null
+++ b/1127_linux-5.10.128.patch
@@ -0,0 +1,397 @@
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 7c118b507912f..4d10e79030a9c 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -19246,7 +19246,8 @@ F:	arch/x86/xen/*swiotlb*
+ F:	drivers/xen/*swiotlb*
+ 
+ XFS FILESYSTEM
+-M:	Darrick J. Wong <darrick.wong@oracle.com>
++M:	Amir Goldstein <amir73il@gmail.com>
++M:	Darrick J. Wong <djwong@kernel.org>
+ M:	linux-xfs@vger.kernel.org
+ L:	linux-xfs@vger.kernel.org
+ S:	Supported
+diff --git a/Makefile b/Makefile
+index e3eb9ba19f86e..b89ad8a987db8 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 127
++SUBLEVEL = 128
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
+index bc76970b6ee53..e647dfcb31917 100644
+--- a/arch/powerpc/include/asm/ftrace.h
++++ b/arch/powerpc/include/asm/ftrace.h
+@@ -96,7 +96,7 @@ static inline bool arch_syscall_match_sym_name(const char *sym, const char *name
+ #endif /* PPC64_ELF_ABI_v1 */
+ #endif /* CONFIG_FTRACE_SYSCALLS */
+ 
+-#ifdef CONFIG_PPC64
++#if defined(CONFIG_PPC64) && defined(CONFIG_FUNCTION_TRACER)
+ #include <asm/paca.h>
+ 
+ static inline void this_cpu_disable_ftrace(void)
+@@ -120,11 +120,13 @@ static inline u8 this_cpu_get_ftrace_enabled(void)
+ 	return get_paca()->ftrace_enabled;
+ }
+ 
++void ftrace_free_init_tramp(void);
+ #else /* CONFIG_PPC64 */
+ static inline void this_cpu_disable_ftrace(void) { }
+ static inline void this_cpu_enable_ftrace(void) { }
+ static inline void this_cpu_set_ftrace_enabled(u8 ftrace_enabled) { }
+ static inline u8 this_cpu_get_ftrace_enabled(void) { return 1; }
++static inline void ftrace_free_init_tramp(void) { }
+ #endif /* CONFIG_PPC64 */
+ #endif /* !__ASSEMBLY__ */
+ 
+diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
+index 42761ebec9f75..d24aea4fed7a3 100644
+--- a/arch/powerpc/kernel/trace/ftrace.c
++++ b/arch/powerpc/kernel/trace/ftrace.c
+@@ -336,9 +336,7 @@ static int setup_mcount_compiler_tramp(unsigned long tramp)
+ 
+ 	/* Is this a known long jump tramp? */
+ 	for (i = 0; i < NUM_FTRACE_TRAMPS; i++)
+-		if (!ftrace_tramps[i])
+-			break;
+-		else if (ftrace_tramps[i] == tramp)
++		if (ftrace_tramps[i] == tramp)
+ 			return 0;
+ 
+ 	/* Is this a known plt tramp? */
+@@ -882,6 +880,17 @@ void arch_ftrace_update_code(int command)
+ 
+ extern unsigned int ftrace_tramp_text[], ftrace_tramp_init[];
+ 
++void ftrace_free_init_tramp(void)
++{
++	int i;
++
++	for (i = 0; i < NUM_FTRACE_TRAMPS && ftrace_tramps[i]; i++)
++		if (ftrace_tramps[i] == (unsigned long)ftrace_tramp_init) {
++			ftrace_tramps[i] = 0;
++			return;
++		}
++}
++
+ int __init ftrace_dyn_arch_init(void)
+ {
+ 	int i;
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index 22eb1c718e622..1ed276d2305fa 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -51,6 +51,7 @@
+ #include <asm/kasan.h>
+ #include <asm/svm.h>
+ #include <asm/mmzone.h>
++#include <asm/ftrace.h>
+ 
+ #include <mm/mmu_decl.h>
+ 
+@@ -347,6 +348,7 @@ void free_initmem(void)
+ 	mark_initmem_nx();
+ 	init_mem_is_free = true;
+ 	free_initmem_default(POISON_FREE_INITMEM);
++	ftrace_free_init_tramp();
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/drm_crtc_helper_internal.h b/drivers/gpu/drm/drm_crtc_helper_internal.h
+index 25ce42e799952..61e09f8a8d0ff 100644
+--- a/drivers/gpu/drm/drm_crtc_helper_internal.h
++++ b/drivers/gpu/drm/drm_crtc_helper_internal.h
+@@ -32,16 +32,6 @@
+ #include <drm/drm_encoder.h>
+ #include <drm/drm_modes.h>
+ 
+-/* drm_fb_helper.c */
+-#ifdef CONFIG_DRM_FBDEV_EMULATION
+-int drm_fb_helper_modinit(void);
+-#else
+-static inline int drm_fb_helper_modinit(void)
+-{
+-	return 0;
+-}
+-#endif
+-
+ /* drm_dp_aux_dev.c */
+ #ifdef CONFIG_DRM_DP_AUX_CHARDEV
+ int drm_dp_aux_dev_init(void);
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 8033467db4bee..ac5d61e65124e 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -2271,24 +2271,3 @@ void drm_fbdev_generic_setup(struct drm_device *dev,
+ 	drm_client_register(&fb_helper->client);
+ }
+ EXPORT_SYMBOL(drm_fbdev_generic_setup);
+-
+-/* The Kconfig DRM_KMS_HELPER selects FRAMEBUFFER_CONSOLE (if !EXPERT)
+- * but the module doesn't depend on any fb console symbols.  At least
+- * attempt to load fbcon to avoid leaving the system without a usable console.
+- */
+-int __init drm_fb_helper_modinit(void)
+-{
+-#if defined(CONFIG_FRAMEBUFFER_CONSOLE_MODULE) && !defined(CONFIG_EXPERT)
+-	const char name[] = "fbcon";
+-	struct module *fbcon;
+-
+-	mutex_lock(&module_mutex);
+-	fbcon = find_module(name);
+-	mutex_unlock(&module_mutex);
+-
+-	if (!fbcon)
+-		request_module_nowait(name);
+-#endif
+-	return 0;
+-}
+-EXPORT_SYMBOL(drm_fb_helper_modinit);
+diff --git a/drivers/gpu/drm/drm_kms_helper_common.c b/drivers/gpu/drm/drm_kms_helper_common.c
+index 221a8528c9937..f933da1656eb5 100644
+--- a/drivers/gpu/drm/drm_kms_helper_common.c
++++ b/drivers/gpu/drm/drm_kms_helper_common.c
+@@ -64,19 +64,18 @@ MODULE_PARM_DESC(edid_firmware,
+ 
+ static int __init drm_kms_helper_init(void)
+ {
+-	int ret;
+-
+-	/* Call init functions from specific kms helpers here */
+-	ret = drm_fb_helper_modinit();
+-	if (ret < 0)
+-		goto out;
+-
+-	ret = drm_dp_aux_dev_init();
+-	if (ret < 0)
+-		goto out;
+-
+-out:
+-	return ret;
++	/*
++	 * The Kconfig DRM_KMS_HELPER selects FRAMEBUFFER_CONSOLE (if !EXPERT)
++	 * but the module doesn't depend on any fb console symbols.  At least
++	 * attempt to load fbcon to avoid leaving the system without a usable
++	 * console.
++	 */
++	if (IS_ENABLED(CONFIG_DRM_FBDEV_EMULATION) &&
++	    IS_MODULE(CONFIG_FRAMEBUFFER_CONSOLE) &&
++	    !IS_ENABLED(CONFIG_EXPERT))
++		request_module_nowait("fbcon");
++
++	return drm_dp_aux_dev_init();
+ }
+ 
+ static void __exit drm_kms_helper_exit(void)
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index f64834785c8b9..b47c00dea0f20 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -2017,6 +2017,7 @@ int bch_btree_check(struct cache_set *c)
+ 	if (c->root->level == 0)
+ 		return 0;
+ 
++	memset(&check_state, 0, sizeof(struct btree_check_state));
+ 	check_state.c = c;
+ 	check_state.total_threads = bch_btree_chkthread_nr();
+ 	check_state.key_idx = 0;
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 0145046a45f43..a878b959fbcdd 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -901,6 +901,7 @@ void bch_sectors_dirty_init(struct bcache_device *d)
+ 		return;
+ 	}
+ 
++	memset(&state, 0, sizeof(struct bch_dirty_init_state));
+ 	state.c = c;
+ 	state.d = d;
+ 	state.total_threads = bch_btre_dirty_init_thread_nr();
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index a06466ecca12a..a55861ea42063 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1593,8 +1593,12 @@ int ocelot_init(struct ocelot *ocelot)
+ 	ocelot_write_rix(ocelot,
+ 			 ANA_PGID_PGID_PGID(GENMASK(ocelot->num_phys_ports, 0)),
+ 			 ANA_PGID_PGID, PGID_MC);
+-	ocelot_write_rix(ocelot, 0, ANA_PGID_PGID, PGID_MCIPV4);
+-	ocelot_write_rix(ocelot, 0, ANA_PGID_PGID, PGID_MCIPV6);
++	ocelot_write_rix(ocelot,
++			 ANA_PGID_PGID_PGID(GENMASK(ocelot->num_phys_ports, 0)),
++			 ANA_PGID_PGID, PGID_MCIPV4);
++	ocelot_write_rix(ocelot,
++			 ANA_PGID_PGID_PGID(GENMASK(ocelot->num_phys_ports, 0)),
++			 ANA_PGID_PGID, PGID_MCIPV6);
+ 
+ 	/* Allow manual injection via DEVCPU_QS registers, and byte swap these
+ 	 * registers endianness.
+diff --git a/fs/xfs/libxfs/xfs_attr.c b/fs/xfs/libxfs/xfs_attr.c
+index 96ac7e562b871..fcca36bbd9975 100644
+--- a/fs/xfs/libxfs/xfs_attr.c
++++ b/fs/xfs/libxfs/xfs_attr.c
+@@ -876,21 +876,18 @@ xfs_attr_node_hasname(
+ 
+ 	state = xfs_da_state_alloc(args);
+ 	if (statep != NULL)
+-		*statep = NULL;
++		*statep = state;
+ 
+ 	/*
+ 	 * Search to see if name exists, and get back a pointer to it.
+ 	 */
+ 	error = xfs_da3_node_lookup_int(state, &retval);
+-	if (error) {
+-		xfs_da_state_free(state);
+-		return error;
+-	}
++	if (error)
++		retval = error;
+ 
+-	if (statep != NULL)
+-		*statep = state;
+-	else
++	if (!statep)
+ 		xfs_da_state_free(state);
++
+ 	return retval;
+ }
+ 
+diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
+index 4304c6416fbbc..4b76a32d2f16d 100644
+--- a/fs/xfs/xfs_aops.c
++++ b/fs/xfs/xfs_aops.c
+@@ -145,6 +145,7 @@ xfs_end_ioend(
+ 	struct iomap_ioend	*ioend)
+ {
+ 	struct xfs_inode	*ip = XFS_I(ioend->io_inode);
++	struct xfs_mount	*mp = ip->i_mount;
+ 	xfs_off_t		offset = ioend->io_offset;
+ 	size_t			size = ioend->io_size;
+ 	unsigned int		nofs_flag;
+@@ -160,18 +161,26 @@ xfs_end_ioend(
+ 	/*
+ 	 * Just clean up the in-memory strutures if the fs has been shut down.
+ 	 */
+-	if (XFS_FORCED_SHUTDOWN(ip->i_mount)) {
++	if (XFS_FORCED_SHUTDOWN(mp)) {
+ 		error = -EIO;
+ 		goto done;
+ 	}
+ 
+ 	/*
+-	 * Clean up any COW blocks on an I/O error.
++	 * Clean up all COW blocks and underlying data fork delalloc blocks on
++	 * I/O error. The delalloc punch is required because this ioend was
++	 * mapped to blocks in the COW fork and the associated pages are no
++	 * longer dirty. If we don't remove delalloc blocks here, they become
++	 * stale and can corrupt free space accounting on unmount.
+ 	 */
+ 	error = blk_status_to_errno(ioend->io_bio->bi_status);
+ 	if (unlikely(error)) {
+-		if (ioend->io_flags & IOMAP_F_SHARED)
++		if (ioend->io_flags & IOMAP_F_SHARED) {
+ 			xfs_reflink_cancel_cow_range(ip, offset, size, true);
++			xfs_bmap_punch_delalloc_range(ip,
++						      XFS_B_TO_FSBT(mp, offset),
++						      XFS_B_TO_FSB(mp, size));
++		}
+ 		goto done;
+ 	}
+ 
+diff --git a/fs/xfs/xfs_buf_item_recover.c b/fs/xfs/xfs_buf_item_recover.c
+index d44e8b4a33919..1d649462d731a 100644
+--- a/fs/xfs/xfs_buf_item_recover.c
++++ b/fs/xfs/xfs_buf_item_recover.c
+@@ -805,7 +805,7 @@ xlog_recover_get_buf_lsn(
+ 	}
+ 
+ 	if (lsn != (xfs_lsn_t)-1) {
+-		if (!uuid_equal(&mp->m_sb.sb_uuid, uuid))
++		if (!uuid_equal(&mp->m_sb.sb_meta_uuid, uuid))
+ 			goto recover_immediately;
+ 		return lsn;
+ 	}
+diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c
+index 5c0395256bd1d..11474770d630f 100644
+--- a/fs/xfs/xfs_extfree_item.c
++++ b/fs/xfs/xfs_extfree_item.c
+@@ -482,7 +482,7 @@ xfs_extent_free_finish_item(
+ 			free->xefi_startblock,
+ 			free->xefi_blockcount,
+ 			&free->xefi_oinfo, free->xefi_skip_discard);
+-	kmem_free(free);
++	kmem_cache_free(xfs_bmap_free_item_zone, free);
+ 	return error;
+ }
+ 
+@@ -502,7 +502,7 @@ xfs_extent_free_cancel_item(
+ 	struct xfs_extent_free_item	*free;
+ 
+ 	free = container_of(item, struct xfs_extent_free_item, xefi_list);
+-	kmem_free(free);
++	kmem_cache_free(xfs_bmap_free_item_zone, free);
+ }
+ 
+ const struct xfs_defer_op_type xfs_extent_free_defer_type = {
+@@ -564,7 +564,7 @@ xfs_agfl_free_finish_item(
+ 	extp->ext_len = free->xefi_blockcount;
+ 	efdp->efd_next_extent++;
+ 
+-	kmem_free(free);
++	kmem_cache_free(xfs_bmap_free_item_zone, free);
+ 	return error;
+ }
+ 
+diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
+index 5ebd6cdc44a7b..05cea7788d494 100644
+--- a/fs/xfs/xfs_super.c
++++ b/fs/xfs/xfs_super.c
+@@ -1695,7 +1695,10 @@ static int
+ xfs_remount_ro(
+ 	struct xfs_mount	*mp)
+ {
+-	int error;
++	struct xfs_eofblocks	eofb = {
++		.eof_flags	= XFS_EOF_FLAGS_SYNC,
++	};
++	int			error;
+ 
+ 	/*
+ 	 * Cancel background eofb scanning so it cannot race with the final
+@@ -1703,8 +1706,13 @@ xfs_remount_ro(
+ 	 */
+ 	xfs_stop_block_reaping(mp);
+ 
+-	/* Get rid of any leftover CoW reservations... */
+-	error = xfs_icache_free_cowblocks(mp, NULL);
++	/*
++	 * Clear out all remaining COW staging extents and speculative post-EOF
++	 * preallocations so that we don't leave inodes requiring inactivation
++	 * cleanups during reclaim on a read-only mount.  We must process every
++	 * cached inode, so this requires a synchronous cache scan.
++	 */
++	error = xfs_icache_free_cowblocks(mp, &eofb);
+ 	if (error) {
+ 		xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
+ 		return error;
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index 9c352357fc8bc..92fb738813f39 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -425,7 +425,6 @@ void __init tick_nohz_full_setup(cpumask_var_t cpumask)
+ 	cpumask_copy(tick_nohz_full_mask, cpumask);
+ 	tick_nohz_full_running = true;
+ }
+-EXPORT_SYMBOL_GPL(tick_nohz_full_setup);
+ 
+ static int tick_nohz_cpu_down(unsigned int cpu)
+ {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-07-07 16:17 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-07-07 16:17 UTC (permalink / raw
  To: gentoo-commits

commit:     c98a3a7b4905a9a65a5438e8c3d1d11cc968c721
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jul  7 16:17:19 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jul  7 16:17:19 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c98a3a7b

Linux patch 5.10.129

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1128_linux-5.10.129.patch | 5672 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5676 insertions(+)

diff --git a/0000_README b/0000_README
index 26bbe9cb..42dc1c5f 100644
--- a/0000_README
+++ b/0000_README
@@ -555,6 +555,10 @@ Patch:  1127_linux-5.10.128.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.128
 
+Patch:  1128_linux-5.10.129.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.129
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1128_linux-5.10.129.patch b/1128_linux-5.10.129.patch
new file mode 100644
index 00000000..e0d3e800
--- /dev/null
+++ b/1128_linux-5.10.129.patch
@@ -0,0 +1,5672 @@
+diff --git a/Makefile b/Makefile
+index b89ad8a987db8..7d52cee374880 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 128
++SUBLEVEL = 129
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/xen/p2m.c b/arch/arm/xen/p2m.c
+index acb464547a54f..4a1991a103ea0 100644
+--- a/arch/arm/xen/p2m.c
++++ b/arch/arm/xen/p2m.c
+@@ -62,11 +62,12 @@ out:
+ 
+ unsigned long __pfn_to_mfn(unsigned long pfn)
+ {
+-	struct rb_node *n = phys_to_mach.rb_node;
++	struct rb_node *n;
+ 	struct xen_p2m_entry *entry;
+ 	unsigned long irqflags;
+ 
+ 	read_lock_irqsave(&p2m_lock, irqflags);
++	n = phys_to_mach.rb_node;
+ 	while (n) {
+ 		entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
+ 		if (entry->pfn <= pfn &&
+@@ -153,10 +154,11 @@ bool __set_phys_to_machine_multi(unsigned long pfn,
+ 	int rc;
+ 	unsigned long irqflags;
+ 	struct xen_p2m_entry *p2m_entry;
+-	struct rb_node *n = phys_to_mach.rb_node;
++	struct rb_node *n;
+ 
+ 	if (mfn == INVALID_P2M_ENTRY) {
+ 		write_lock_irqsave(&p2m_lock, irqflags);
++		n = phys_to_mach.rb_node;
+ 		while (n) {
+ 			p2m_entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
+ 			if (p2m_entry->pfn <= pfn &&
+diff --git a/arch/powerpc/include/asm/bpf_perf_event.h b/arch/powerpc/include/asm/bpf_perf_event.h
+new file mode 100644
+index 0000000000000..e8a7b4ffb58c2
+--- /dev/null
++++ b/arch/powerpc/include/asm/bpf_perf_event.h
+@@ -0,0 +1,9 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_POWERPC_BPF_PERF_EVENT_H
++#define _ASM_POWERPC_BPF_PERF_EVENT_H
++
++#include <asm/ptrace.h>
++
++typedef struct user_pt_regs bpf_user_pt_regs_t;
++
++#endif /* _ASM_POWERPC_BPF_PERF_EVENT_H */
+diff --git a/arch/powerpc/include/uapi/asm/bpf_perf_event.h b/arch/powerpc/include/uapi/asm/bpf_perf_event.h
+deleted file mode 100644
+index 5e1e648aeec4c..0000000000000
+--- a/arch/powerpc/include/uapi/asm/bpf_perf_event.h
++++ /dev/null
+@@ -1,9 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+-#ifndef _UAPI__ASM_BPF_PERF_EVENT_H__
+-#define _UAPI__ASM_BPF_PERF_EVENT_H__
+-
+-#include <asm/ptrace.h>
+-
+-typedef struct user_pt_regs bpf_user_pt_regs_t;
+-
+-#endif /* _UAPI__ASM_BPF_PERF_EVENT_H__ */
+diff --git a/arch/powerpc/kernel/prom_init_check.sh b/arch/powerpc/kernel/prom_init_check.sh
+index b183ab9c5107c..dfa5f729f774d 100644
+--- a/arch/powerpc/kernel/prom_init_check.sh
++++ b/arch/powerpc/kernel/prom_init_check.sh
+@@ -13,7 +13,7 @@
+ # If you really need to reference something from prom_init.o add
+ # it to the list below:
+ 
+-grep "^CONFIG_KASAN=y$" .config >/dev/null
++grep "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} >/dev/null
+ if [ $? -eq 0 ]
+ then
+ 	MEM_FUNCS="__memcpy __memset"
+diff --git a/arch/powerpc/mm/nohash/book3e_pgtable.c b/arch/powerpc/mm/nohash/book3e_pgtable.c
+index 77884e24281dd..3d845e001c874 100644
+--- a/arch/powerpc/mm/nohash/book3e_pgtable.c
++++ b/arch/powerpc/mm/nohash/book3e_pgtable.c
+@@ -95,8 +95,8 @@ int __ref map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot)
+ 		pgdp = pgd_offset_k(ea);
+ 		p4dp = p4d_offset(pgdp, ea);
+ 		if (p4d_none(*p4dp)) {
+-			pmdp = early_alloc_pgtable(PMD_TABLE_SIZE);
+-			p4d_populate(&init_mm, p4dp, pmdp);
++			pudp = early_alloc_pgtable(PUD_TABLE_SIZE);
++			p4d_populate(&init_mm, p4dp, pudp);
+ 		}
+ 		pudp = pud_offset(p4dp, ea);
+ 		if (pud_none(*pudp)) {
+@@ -105,7 +105,7 @@ int __ref map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot)
+ 		}
+ 		pmdp = pmd_offset(pudp, ea);
+ 		if (!pmd_present(*pmdp)) {
+-			ptep = early_alloc_pgtable(PAGE_SIZE);
++			ptep = early_alloc_pgtable(PTE_TABLE_SIZE);
+ 			pmd_populate_kernel(&init_mm, pmdp, ptep);
+ 		}
+ 		ptep = pte_offset_kernel(pmdp, ea);
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index 896b68e541b2e..878993982e39d 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -507,7 +507,6 @@ config KEXEC
+ config KEXEC_FILE
+ 	bool "kexec file based system call"
+ 	select KEXEC_CORE
+-	select BUILD_BIN2C
+ 	depends on CRYPTO
+ 	depends on CRYPTO_SHA256
+ 	depends on CRYPTO_SHA256_S390
+diff --git a/arch/s390/crypto/arch_random.c b/arch/s390/crypto/arch_random.c
+index 4cbb4b6d85a83..1f2d40993c4d2 100644
+--- a/arch/s390/crypto/arch_random.c
++++ b/arch/s390/crypto/arch_random.c
+@@ -2,126 +2,17 @@
+ /*
+  * s390 arch random implementation.
+  *
+- * Copyright IBM Corp. 2017, 2018
++ * Copyright IBM Corp. 2017, 2020
+  * Author(s): Harald Freudenberger
+- *
+- * The s390_arch_random_generate() function may be called from random.c
+- * in interrupt context. So this implementation does the best to be very
+- * fast. There is a buffer of random data which is asynchronously checked
+- * and filled by a workqueue thread.
+- * If there are enough bytes in the buffer the s390_arch_random_generate()
+- * just delivers these bytes. Otherwise false is returned until the
+- * worker thread refills the buffer.
+- * The worker fills the rng buffer by pulling fresh entropy from the
+- * high quality (but slow) true hardware random generator. This entropy
+- * is then spread over the buffer with an pseudo random generator PRNG.
+- * As the arch_get_random_seed_long() fetches 8 bytes and the calling
+- * function add_interrupt_randomness() counts this as 1 bit entropy the
+- * distribution needs to make sure there is in fact 1 bit entropy contained
+- * in 8 bytes of the buffer. The current values pull 32 byte entropy
+- * and scatter this into a 2048 byte buffer. So 8 byte in the buffer
+- * will contain 1 bit of entropy.
+- * The worker thread is rescheduled based on the charge level of the
+- * buffer but at least with 500 ms delay to avoid too much CPU consumption.
+- * So the max. amount of rng data delivered via arch_get_random_seed is
+- * limited to 4k bytes per second.
+  */
+ 
+ #include <linux/kernel.h>
+ #include <linux/atomic.h>
+ #include <linux/random.h>
+-#include <linux/slab.h>
+ #include <linux/static_key.h>
+-#include <linux/workqueue.h>
+ #include <asm/cpacf.h>
+ 
+ DEFINE_STATIC_KEY_FALSE(s390_arch_random_available);
+ 
+ atomic64_t s390_arch_random_counter = ATOMIC64_INIT(0);
+ EXPORT_SYMBOL(s390_arch_random_counter);
+-
+-#define ARCH_REFILL_TICKS (HZ/2)
+-#define ARCH_PRNG_SEED_SIZE 32
+-#define ARCH_RNG_BUF_SIZE 2048
+-
+-static DEFINE_SPINLOCK(arch_rng_lock);
+-static u8 *arch_rng_buf;
+-static unsigned int arch_rng_buf_idx;
+-
+-static void arch_rng_refill_buffer(struct work_struct *);
+-static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer);
+-
+-bool s390_arch_random_generate(u8 *buf, unsigned int nbytes)
+-{
+-	/* max hunk is ARCH_RNG_BUF_SIZE */
+-	if (nbytes > ARCH_RNG_BUF_SIZE)
+-		return false;
+-
+-	/* lock rng buffer */
+-	if (!spin_trylock(&arch_rng_lock))
+-		return false;
+-
+-	/* try to resolve the requested amount of bytes from the buffer */
+-	arch_rng_buf_idx -= nbytes;
+-	if (arch_rng_buf_idx < ARCH_RNG_BUF_SIZE) {
+-		memcpy(buf, arch_rng_buf + arch_rng_buf_idx, nbytes);
+-		atomic64_add(nbytes, &s390_arch_random_counter);
+-		spin_unlock(&arch_rng_lock);
+-		return true;
+-	}
+-
+-	/* not enough bytes in rng buffer, refill is done asynchronously */
+-	spin_unlock(&arch_rng_lock);
+-
+-	return false;
+-}
+-EXPORT_SYMBOL(s390_arch_random_generate);
+-
+-static void arch_rng_refill_buffer(struct work_struct *unused)
+-{
+-	unsigned int delay = ARCH_REFILL_TICKS;
+-
+-	spin_lock(&arch_rng_lock);
+-	if (arch_rng_buf_idx > ARCH_RNG_BUF_SIZE) {
+-		/* buffer is exhausted and needs refill */
+-		u8 seed[ARCH_PRNG_SEED_SIZE];
+-		u8 prng_wa[240];
+-		/* fetch ARCH_PRNG_SEED_SIZE bytes of entropy */
+-		cpacf_trng(NULL, 0, seed, sizeof(seed));
+-		/* blow this entropy up to ARCH_RNG_BUF_SIZE with PRNG */
+-		memset(prng_wa, 0, sizeof(prng_wa));
+-		cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED,
+-			   &prng_wa, NULL, 0, seed, sizeof(seed));
+-		cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN,
+-			   &prng_wa, arch_rng_buf, ARCH_RNG_BUF_SIZE, NULL, 0);
+-		arch_rng_buf_idx = ARCH_RNG_BUF_SIZE;
+-	}
+-	delay += (ARCH_REFILL_TICKS * arch_rng_buf_idx) / ARCH_RNG_BUF_SIZE;
+-	spin_unlock(&arch_rng_lock);
+-
+-	/* kick next check */
+-	queue_delayed_work(system_long_wq, &arch_rng_work, delay);
+-}
+-
+-static int __init s390_arch_random_init(void)
+-{
+-	/* all the needed PRNO subfunctions available ? */
+-	if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG) &&
+-	    cpacf_query_func(CPACF_PRNO, CPACF_PRNO_SHA512_DRNG_GEN)) {
+-
+-		/* alloc arch random working buffer */
+-		arch_rng_buf = kmalloc(ARCH_RNG_BUF_SIZE, GFP_KERNEL);
+-		if (!arch_rng_buf)
+-			return -ENOMEM;
+-
+-		/* kick worker queue job to fill the random buffer */
+-		queue_delayed_work(system_long_wq,
+-				   &arch_rng_work, ARCH_REFILL_TICKS);
+-
+-		/* enable arch random to the outside world */
+-		static_branch_enable(&s390_arch_random_available);
+-	}
+-
+-	return 0;
+-}
+-arch_initcall(s390_arch_random_init);
+diff --git a/arch/s390/include/asm/archrandom.h b/arch/s390/include/asm/archrandom.h
+index de61ce5620527..2c6e1c6ecbe78 100644
+--- a/arch/s390/include/asm/archrandom.h
++++ b/arch/s390/include/asm/archrandom.h
+@@ -2,7 +2,7 @@
+ /*
+  * Kernel interface for the s390 arch_random_* functions
+  *
+- * Copyright IBM Corp. 2017
++ * Copyright IBM Corp. 2017, 2020
+  *
+  * Author: Harald Freudenberger <freude@de.ibm.com>
+  *
+@@ -15,12 +15,11 @@
+ 
+ #include <linux/static_key.h>
+ #include <linux/atomic.h>
++#include <asm/cpacf.h>
+ 
+ DECLARE_STATIC_KEY_FALSE(s390_arch_random_available);
+ extern atomic64_t s390_arch_random_counter;
+ 
+-bool s390_arch_random_generate(u8 *buf, unsigned int nbytes);
+-
+ static inline bool __must_check arch_get_random_long(unsigned long *v)
+ {
+ 	return false;
+@@ -34,7 +33,9 @@ static inline bool __must_check arch_get_random_int(unsigned int *v)
+ static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
+ {
+ 	if (static_branch_likely(&s390_arch_random_available)) {
+-		return s390_arch_random_generate((u8 *)v, sizeof(*v));
++		cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
++		atomic64_add(sizeof(*v), &s390_arch_random_counter);
++		return true;
+ 	}
+ 	return false;
+ }
+@@ -42,7 +43,9 @@ static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
+ static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
+ {
+ 	if (static_branch_likely(&s390_arch_random_available)) {
+-		return s390_arch_random_generate((u8 *)v, sizeof(*v));
++		cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
++		atomic64_add(sizeof(*v), &s390_arch_random_counter);
++		return true;
+ 	}
+ 	return false;
+ }
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index f9f8721dc5321..520cf5a152cf4 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -1009,6 +1009,11 @@ static void __init setup_randomness(void)
+ 	if (stsi(vmms, 3, 2, 2) == 0 && vmms->count)
+ 		add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);
+ 	memblock_free((unsigned long) vmms, PAGE_SIZE);
++
++#ifdef CONFIG_ARCH_RANDOM
++	if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG))
++		static_branch_enable(&s390_arch_random_available);
++#endif
+ }
+ 
+ /*
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 47d4bb23d6f31..abbb68b6d9bd5 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -151,6 +151,10 @@ static unsigned int xen_blkif_max_ring_order;
+ module_param_named(max_ring_page_order, xen_blkif_max_ring_order, int, 0444);
+ MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the shared ring");
+ 
++static bool __read_mostly xen_blkif_trusted = true;
++module_param_named(trusted, xen_blkif_trusted, bool, 0644);
++MODULE_PARM_DESC(trusted, "Is the backend trusted");
++
+ #define BLK_RING_SIZE(info)	\
+ 	__CONST_RING_SIZE(blkif, XEN_PAGE_SIZE * (info)->nr_ring_pages)
+ 
+@@ -208,6 +212,7 @@ struct blkfront_info
+ 	unsigned int feature_discard:1;
+ 	unsigned int feature_secdiscard:1;
+ 	unsigned int feature_persistent:1;
++	unsigned int bounce:1;
+ 	unsigned int discard_granularity;
+ 	unsigned int discard_alignment;
+ 	/* Number of 4KB segments handled */
+@@ -310,8 +315,8 @@ static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num)
+ 		if (!gnt_list_entry)
+ 			goto out_of_memory;
+ 
+-		if (info->feature_persistent) {
+-			granted_page = alloc_page(GFP_NOIO);
++		if (info->bounce) {
++			granted_page = alloc_page(GFP_NOIO | __GFP_ZERO);
+ 			if (!granted_page) {
+ 				kfree(gnt_list_entry);
+ 				goto out_of_memory;
+@@ -330,7 +335,7 @@ out_of_memory:
+ 	list_for_each_entry_safe(gnt_list_entry, n,
+ 	                         &rinfo->grants, node) {
+ 		list_del(&gnt_list_entry->node);
+-		if (info->feature_persistent)
++		if (info->bounce)
+ 			__free_page(gnt_list_entry->page);
+ 		kfree(gnt_list_entry);
+ 		i--;
+@@ -376,7 +381,7 @@ static struct grant *get_grant(grant_ref_t *gref_head,
+ 	/* Assign a gref to this page */
+ 	gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
+ 	BUG_ON(gnt_list_entry->gref == -ENOSPC);
+-	if (info->feature_persistent)
++	if (info->bounce)
+ 		grant_foreign_access(gnt_list_entry, info);
+ 	else {
+ 		/* Grant access to the GFN passed by the caller */
+@@ -400,7 +405,7 @@ static struct grant *get_indirect_grant(grant_ref_t *gref_head,
+ 	/* Assign a gref to this page */
+ 	gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
+ 	BUG_ON(gnt_list_entry->gref == -ENOSPC);
+-	if (!info->feature_persistent) {
++	if (!info->bounce) {
+ 		struct page *indirect_page;
+ 
+ 		/* Fetch a pre-allocated page to use for indirect grefs */
+@@ -715,7 +720,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
+ 		.grant_idx = 0,
+ 		.segments = NULL,
+ 		.rinfo = rinfo,
+-		.need_copy = rq_data_dir(req) && info->feature_persistent,
++		.need_copy = rq_data_dir(req) && info->bounce,
+ 	};
+ 
+ 	/*
+@@ -1035,11 +1040,12 @@ static void xlvbd_flush(struct blkfront_info *info)
+ {
+ 	blk_queue_write_cache(info->rq, info->feature_flush ? true : false,
+ 			      info->feature_fua ? true : false);
+-	pr_info("blkfront: %s: %s %s %s %s %s\n",
++	pr_info("blkfront: %s: %s %s %s %s %s %s %s\n",
+ 		info->gd->disk_name, flush_info(info),
+ 		"persistent grants:", info->feature_persistent ?
+ 		"enabled;" : "disabled;", "indirect descriptors:",
+-		info->max_indirect_segments ? "enabled;" : "disabled;");
++		info->max_indirect_segments ? "enabled;" : "disabled;",
++		"bounce buffer:", info->bounce ? "enabled" : "disabled;");
+ }
+ 
+ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
+@@ -1273,7 +1279,7 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
+ 	if (!list_empty(&rinfo->indirect_pages)) {
+ 		struct page *indirect_page, *n;
+ 
+-		BUG_ON(info->feature_persistent);
++		BUG_ON(info->bounce);
+ 		list_for_each_entry_safe(indirect_page, n, &rinfo->indirect_pages, lru) {
+ 			list_del(&indirect_page->lru);
+ 			__free_page(indirect_page);
+@@ -1290,7 +1296,7 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
+ 							  0, 0UL);
+ 				rinfo->persistent_gnts_c--;
+ 			}
+-			if (info->feature_persistent)
++			if (info->bounce)
+ 				__free_page(persistent_gnt->page);
+ 			kfree(persistent_gnt);
+ 		}
+@@ -1311,7 +1317,7 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
+ 		for (j = 0; j < segs; j++) {
+ 			persistent_gnt = rinfo->shadow[i].grants_used[j];
+ 			gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
+-			if (info->feature_persistent)
++			if (info->bounce)
+ 				__free_page(persistent_gnt->page);
+ 			kfree(persistent_gnt);
+ 		}
+@@ -1501,7 +1507,7 @@ static int blkif_completion(unsigned long *id,
+ 	data.s = s;
+ 	num_sg = s->num_sg;
+ 
+-	if (bret->operation == BLKIF_OP_READ && info->feature_persistent) {
++	if (bret->operation == BLKIF_OP_READ && info->bounce) {
+ 		for_each_sg(s->sg, sg, num_sg, i) {
+ 			BUG_ON(sg->offset + sg->length > PAGE_SIZE);
+ 
+@@ -1560,7 +1566,7 @@ static int blkif_completion(unsigned long *id,
+ 				 * Add the used indirect page back to the list of
+ 				 * available pages for indirect grefs.
+ 				 */
+-				if (!info->feature_persistent) {
++				if (!info->bounce) {
+ 					indirect_page = s->indirect_grants[i]->page;
+ 					list_add(&indirect_page->lru, &rinfo->indirect_pages);
+ 				}
+@@ -1753,7 +1759,7 @@ static int setup_blkring(struct xenbus_device *dev,
+ 	for (i = 0; i < info->nr_ring_pages; i++)
+ 		rinfo->ring_ref[i] = GRANT_INVALID_REF;
+ 
+-	sring = alloc_pages_exact(ring_size, GFP_NOIO);
++	sring = alloc_pages_exact(ring_size, GFP_NOIO | __GFP_ZERO);
+ 	if (!sring) {
+ 		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
+ 		return -ENOMEM;
+@@ -1857,6 +1863,10 @@ static int talk_to_blkback(struct xenbus_device *dev,
+ 	if (!info)
+ 		return -ENODEV;
+ 
++	/* Check if backend is trusted. */
++	info->bounce = !xen_blkif_trusted ||
++		       !xenbus_read_unsigned(dev->nodename, "trusted", 1);
++
+ 	max_page_order = xenbus_read_unsigned(info->xbdev->otherend,
+ 					      "max-ring-page-order", 0);
+ 	ring_page_order = min(xen_blkif_max_ring_order, max_page_order);
+@@ -2283,17 +2293,18 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
+ 	if (err)
+ 		goto out_of_memory;
+ 
+-	if (!info->feature_persistent && info->max_indirect_segments) {
++	if (!info->bounce && info->max_indirect_segments) {
+ 		/*
+-		 * We are using indirect descriptors but not persistent
+-		 * grants, we need to allocate a set of pages that can be
++		 * We are using indirect descriptors but don't have a bounce
++		 * buffer, we need to allocate a set of pages that can be
+ 		 * used for mapping indirect grefs
+ 		 */
+ 		int num = INDIRECT_GREFS(grants) * BLK_RING_SIZE(info);
+ 
+ 		BUG_ON(!list_empty(&rinfo->indirect_pages));
+ 		for (i = 0; i < num; i++) {
+-			struct page *indirect_page = alloc_page(GFP_KERNEL);
++			struct page *indirect_page = alloc_page(GFP_KERNEL |
++			                                        __GFP_ZERO);
+ 			if (!indirect_page)
+ 				goto out_of_memory;
+ 			list_add(&indirect_page->lru, &rinfo->indirect_pages);
+@@ -2386,6 +2397,8 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
+ 		info->feature_persistent =
+ 			!!xenbus_read_unsigned(info->xbdev->otherend,
+ 					       "feature-persistent", 0);
++	if (info->feature_persistent)
++		info->bounce = true;
+ 
+ 	indirect_segments = xenbus_read_unsigned(info->xbdev->otherend,
+ 					"feature-max-indirect-segments", 0);
+@@ -2759,6 +2772,13 @@ static void blkfront_delay_work(struct work_struct *work)
+ 	struct blkfront_info *info;
+ 	bool need_schedule_work = false;
+ 
++	/*
++	 * Note that when using bounce buffers but not persistent grants
++	 * there's no need to run blkfront_delay_work because grants are
++	 * revoked in blkif_completion or else an error is reported and the
++	 * connection is closed.
++	 */
++
+ 	mutex_lock(&blkfront_mutex);
+ 
+ 	list_for_each_entry(info, &info_list, info_list) {
+diff --git a/drivers/clocksource/timer-ixp4xx.c b/drivers/clocksource/timer-ixp4xx.c
+index 9396745e1c179..ad904bbbac6fd 100644
+--- a/drivers/clocksource/timer-ixp4xx.c
++++ b/drivers/clocksource/timer-ixp4xx.c
+@@ -258,7 +258,6 @@ void __init ixp4xx_timer_setup(resource_size_t timerbase,
+ 	}
+ 	ixp4xx_timer_register(base, timer_irq, timer_freq);
+ }
+-EXPORT_SYMBOL_GPL(ixp4xx_timer_setup);
+ 
+ #ifdef CONFIG_OF
+ static __init int ixp4xx_of_timer_init(struct device_node *np)
+diff --git a/drivers/cpufreq/qoriq-cpufreq.c b/drivers/cpufreq/qoriq-cpufreq.c
+index 6b6b20da2bcfc..573b417e14833 100644
+--- a/drivers/cpufreq/qoriq-cpufreq.c
++++ b/drivers/cpufreq/qoriq-cpufreq.c
+@@ -275,6 +275,7 @@ static int qoriq_cpufreq_probe(struct platform_device *pdev)
+ 
+ 	np = of_find_matching_node(NULL, qoriq_cpufreq_blacklist);
+ 	if (np) {
++		of_node_put(np);
+ 		dev_info(&pdev->dev, "Disabling due to erratum A-008083");
+ 		return -ENODEV;
+ 	}
+diff --git a/drivers/devfreq/event/exynos-ppmu.c b/drivers/devfreq/event/exynos-ppmu.c
+index 17ed980d90998..d6da9c3e31067 100644
+--- a/drivers/devfreq/event/exynos-ppmu.c
++++ b/drivers/devfreq/event/exynos-ppmu.c
+@@ -514,15 +514,19 @@ static int of_get_devfreq_events(struct device_node *np,
+ 
+ 	count = of_get_child_count(events_np);
+ 	desc = devm_kcalloc(dev, count, sizeof(*desc), GFP_KERNEL);
+-	if (!desc)
++	if (!desc) {
++		of_node_put(events_np);
+ 		return -ENOMEM;
++	}
+ 	info->num_events = count;
+ 
+ 	of_id = of_match_device(exynos_ppmu_id_match, dev);
+ 	if (of_id)
+ 		info->ppmu_type = (enum exynos_ppmu_type)of_id->data;
+-	else
++	else {
++		of_node_put(events_np);
+ 		return -EINVAL;
++	}
+ 
+ 	j = 0;
+ 	for_each_child_of_node(events_np, node) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index fb6230c62daad..d3a974d105529 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -689,7 +689,8 @@ int amdgpu_amdkfd_flush_gpu_tlb_pasid(struct kgd_dev *kgd, uint16_t pasid)
+ 	const uint32_t flush_type = 0;
+ 	bool all_hub = false;
+ 
+-	if (adev->family == AMDGPU_FAMILY_AI)
++	if (adev->family == AMDGPU_FAMILY_AI ||
++	    adev->family == AMDGPU_FAMILY_RV)
+ 		all_hub = true;
+ 
+ 	return amdgpu_gmc_flush_gpu_tlb_pasid(adev, pasid, flush_type, all_hub);
+diff --git a/drivers/hwmon/ibmaem.c b/drivers/hwmon/ibmaem.c
+index a4ec85207782d..2e6d6a5cffa16 100644
+--- a/drivers/hwmon/ibmaem.c
++++ b/drivers/hwmon/ibmaem.c
+@@ -550,7 +550,7 @@ static int aem_init_aem1_inst(struct aem_ipmi_data *probe, u8 module_handle)
+ 
+ 	res = platform_device_add(data->pdev);
+ 	if (res)
+-		goto ipmi_err;
++		goto dev_add_err;
+ 
+ 	platform_set_drvdata(data->pdev, data);
+ 
+@@ -598,7 +598,9 @@ hwmon_reg_err:
+ 	ipmi_destroy_user(data->ipmi.user);
+ ipmi_err:
+ 	platform_set_drvdata(data->pdev, NULL);
+-	platform_device_unregister(data->pdev);
++	platform_device_del(data->pdev);
++dev_add_err:
++	platform_device_put(data->pdev);
+ dev_err:
+ 	ida_simple_remove(&aem_ida, data->id);
+ id_err:
+@@ -690,7 +692,7 @@ static int aem_init_aem2_inst(struct aem_ipmi_data *probe,
+ 
+ 	res = platform_device_add(data->pdev);
+ 	if (res)
+-		goto ipmi_err;
++		goto dev_add_err;
+ 
+ 	platform_set_drvdata(data->pdev, data);
+ 
+@@ -738,7 +740,9 @@ hwmon_reg_err:
+ 	ipmi_destroy_user(data->ipmi.user);
+ ipmi_err:
+ 	platform_set_drvdata(data->pdev, NULL);
+-	platform_device_unregister(data->pdev);
++	platform_device_del(data->pdev);
++dev_add_err:
++	platform_device_put(data->pdev);
+ dev_err:
+ 	ida_simple_remove(&aem_ida, data->id);
+ id_err:
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index ee568bdf3c788..3cc7a23fa69fe 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -1280,8 +1280,10 @@ struct ib_cm_id *ib_cm_insert_listen(struct ib_device *device,
+ 		return ERR_CAST(cm_id_priv);
+ 
+ 	err = cm_init_listen(cm_id_priv, service_id, 0);
+-	if (err)
++	if (err) {
++		ib_destroy_cm_id(&cm_id_priv->id);
+ 		return ERR_PTR(err);
++	}
+ 
+ 	spin_lock_irq(&cm_id_priv->lock);
+ 	listen_id_priv = cm_insert_listen(cm_id_priv, cm_handler);
+diff --git a/drivers/infiniband/hw/qedr/qedr.h b/drivers/infiniband/hw/qedr/qedr.h
+index 9dde70373a553..8ef6eecc42a0a 100644
+--- a/drivers/infiniband/hw/qedr/qedr.h
++++ b/drivers/infiniband/hw/qedr/qedr.h
+@@ -418,6 +418,7 @@ struct qedr_qp {
+ 	u32 sq_psn;
+ 	u32 qkey;
+ 	u32 dest_qp_num;
++	u8 timeout;
+ 
+ 	/* Relevant to qps created from kernel space only (ULPs) */
+ 	u8 prev_wqe_size;
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index eeb87f31cd252..f7b97b8e81a43 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -2622,6 +2622,8 @@ int qedr_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 					1 << max_t(int, attr->timeout - 8, 0);
+ 		else
+ 			qp_params.ack_timeout = 0;
++
++		qp->timeout = attr->timeout;
+ 	}
+ 
+ 	if (attr_mask & IB_QP_RETRY_CNT) {
+@@ -2781,7 +2783,7 @@ int qedr_query_qp(struct ib_qp *ibqp,
+ 	rdma_ah_set_dgid_raw(&qp_attr->ah_attr, &params.dgid.bytes[0]);
+ 	rdma_ah_set_port_num(&qp_attr->ah_attr, 1);
+ 	rdma_ah_set_sl(&qp_attr->ah_attr, 0);
+-	qp_attr->timeout = params.timeout;
++	qp_attr->timeout = qp->timeout;
+ 	qp_attr->rnr_retry = params.rnr_retry;
+ 	qp_attr->retry_cnt = params.retry_cnt;
+ 	qp_attr->min_rnr_timer = params.min_rnr_nak_timer;
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index f5083b4a01958..4e94200e01423 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -1002,12 +1002,13 @@ static int validate_region_size(struct raid_set *rs, unsigned long region_size)
+ static int validate_raid_redundancy(struct raid_set *rs)
+ {
+ 	unsigned int i, rebuild_cnt = 0;
+-	unsigned int rebuilds_per_group = 0, copies;
++	unsigned int rebuilds_per_group = 0, copies, raid_disks;
+ 	unsigned int group_size, last_group_start;
+ 
+-	for (i = 0; i < rs->md.raid_disks; i++)
+-		if (!test_bit(In_sync, &rs->dev[i].rdev.flags) ||
+-		    !rs->dev[i].rdev.sb_page)
++	for (i = 0; i < rs->raid_disks; i++)
++		if (!test_bit(FirstUse, &rs->dev[i].rdev.flags) &&
++		    ((!test_bit(In_sync, &rs->dev[i].rdev.flags) ||
++		      !rs->dev[i].rdev.sb_page)))
+ 			rebuild_cnt++;
+ 
+ 	switch (rs->md.level) {
+@@ -1047,8 +1048,9 @@ static int validate_raid_redundancy(struct raid_set *rs)
+ 		 *	    A	 A    B	   B	C
+ 		 *	    C	 D    D	   E	E
+ 		 */
++		raid_disks = min(rs->raid_disks, rs->md.raid_disks);
+ 		if (__is_raid10_near(rs->md.new_layout)) {
+-			for (i = 0; i < rs->md.raid_disks; i++) {
++			for (i = 0; i < raid_disks; i++) {
+ 				if (!(i % copies))
+ 					rebuilds_per_group = 0;
+ 				if ((!rs->dev[i].rdev.sb_page ||
+@@ -1071,10 +1073,10 @@ static int validate_raid_redundancy(struct raid_set *rs)
+ 		 * results in the need to treat the last (potentially larger)
+ 		 * set differently.
+ 		 */
+-		group_size = (rs->md.raid_disks / copies);
+-		last_group_start = (rs->md.raid_disks / group_size) - 1;
++		group_size = (raid_disks / copies);
++		last_group_start = (raid_disks / group_size) - 1;
+ 		last_group_start *= group_size;
+-		for (i = 0; i < rs->md.raid_disks; i++) {
++		for (i = 0; i < raid_disks; i++) {
+ 			if (!(i % copies) && !(i > last_group_start))
+ 				rebuilds_per_group = 0;
+ 			if ((!rs->dev[i].rdev.sb_page ||
+@@ -1589,7 +1591,7 @@ static sector_t __rdev_sectors(struct raid_set *rs)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < rs->md.raid_disks; i++) {
++	for (i = 0; i < rs->raid_disks; i++) {
+ 		struct md_rdev *rdev = &rs->dev[i].rdev;
+ 
+ 		if (!test_bit(Journal, &rdev->flags) &&
+@@ -3732,13 +3734,13 @@ static int raid_iterate_devices(struct dm_target *ti,
+ 	unsigned int i;
+ 	int r = 0;
+ 
+-	for (i = 0; !r && i < rs->md.raid_disks; i++)
+-		if (rs->dev[i].data_dev)
+-			r = fn(ti,
+-				 rs->dev[i].data_dev,
+-				 0, /* No offset on data devs */
+-				 rs->md.dev_sectors,
+-				 data);
++	for (i = 0; !r && i < rs->raid_disks; i++) {
++		if (rs->dev[i].data_dev) {
++			r = fn(ti, rs->dev[i].data_dev,
++			       0, /* No offset on data devs */
++			       rs->md.dev_sectors, data);
++		}
++	}
+ 
+ 	return r;
+ }
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 02767866b9ff6..c8cafdb094aaa 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -8004,6 +8004,7 @@ static int raid5_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 	 */
+ 	if (rdev->saved_raid_disk >= 0 &&
+ 	    rdev->saved_raid_disk >= first &&
++	    rdev->saved_raid_disk <= last &&
+ 	    conf->disks[rdev->saved_raid_disk].rdev == NULL)
+ 		first = rdev->saved_raid_disk;
+ 
+diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
+index c2cef7ba26719..325b20729d8ba 100644
+--- a/drivers/net/bonding/bond_3ad.c
++++ b/drivers/net/bonding/bond_3ad.c
+@@ -2209,7 +2209,8 @@ void bond_3ad_unbind_slave(struct slave *slave)
+ 				temp_aggregator->num_of_ports--;
+ 				if (__agg_active_ports(temp_aggregator) == 0) {
+ 					select_new_active_agg = temp_aggregator->is_active;
+-					ad_clear_agg(temp_aggregator);
++					if (temp_aggregator->num_of_ports == 0)
++						ad_clear_agg(temp_aggregator);
+ 					if (select_new_active_agg) {
+ 						slave_info(bond->dev, slave->dev, "Removing an active aggregator\n");
+ 						/* select new active aggregator */
+diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
+index 0436aef9c9ef5..152f76f869278 100644
+--- a/drivers/net/bonding/bond_alb.c
++++ b/drivers/net/bonding/bond_alb.c
+@@ -1279,12 +1279,12 @@ int bond_alb_initialize(struct bonding *bond, int rlb_enabled)
+ 		return res;
+ 
+ 	if (rlb_enabled) {
+-		bond->alb_info.rlb_enabled = 1;
+ 		res = rlb_initialize(bond);
+ 		if (res) {
+ 			tlb_deinitialize(bond);
+ 			return res;
+ 		}
++		bond->alb_info.rlb_enabled = 1;
+ 	} else {
+ 		bond->alb_info.rlb_enabled = 0;
+ 	}
+diff --git a/drivers/net/caif/caif_virtio.c b/drivers/net/caif/caif_virtio.c
+index 47a6d62b75111..a701932f5cc29 100644
+--- a/drivers/net/caif/caif_virtio.c
++++ b/drivers/net/caif/caif_virtio.c
+@@ -723,13 +723,21 @@ static int cfv_probe(struct virtio_device *vdev)
+ 	/* Carrier is off until netdevice is opened */
+ 	netif_carrier_off(netdev);
+ 
++	/* serialize netdev register + virtio_device_ready() with ndo_open() */
++	rtnl_lock();
++
+ 	/* register Netdev */
+-	err = register_netdev(netdev);
++	err = register_netdevice(netdev);
+ 	if (err) {
++		rtnl_unlock();
+ 		dev_err(&vdev->dev, "Unable to register netdev (%d)\n", err);
+ 		goto err;
+ 	}
+ 
++	virtio_device_ready(vdev);
++
++	rtnl_unlock();
++
+ 	debugfs_init(cfv);
+ 
+ 	return 0;
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index b712b4f27efd9..c6563d212476a 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -774,6 +774,11 @@ static void bcm_sf2_sw_mac_link_up(struct dsa_switch *ds, int port,
+ 		if (duplex == DUPLEX_FULL)
+ 			reg |= DUPLX_MODE;
+ 
++		if (tx_pause)
++			reg |= TXFLOW_CNTL;
++		if (rx_pause)
++			reg |= RXFLOW_CNTL;
++
+ 		core_writel(priv, reg, offset);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/smsc/epic100.c b/drivers/net/ethernet/smsc/epic100.c
+index 51cd7dca91cda..3a7d2baec6c16 100644
+--- a/drivers/net/ethernet/smsc/epic100.c
++++ b/drivers/net/ethernet/smsc/epic100.c
+@@ -1513,14 +1513,14 @@ static void epic_remove_one(struct pci_dev *pdev)
+ 	struct net_device *dev = pci_get_drvdata(pdev);
+ 	struct epic_private *ep = netdev_priv(dev);
+ 
++	unregister_netdev(dev);
+ 	dma_free_coherent(&pdev->dev, TX_TOTAL_SIZE, ep->tx_ring,
+ 			  ep->tx_ring_dma);
+ 	dma_free_coherent(&pdev->dev, RX_TOTAL_SIZE, ep->rx_ring,
+ 			  ep->rx_ring_dma);
+-	unregister_netdev(dev);
+ 	pci_iounmap(pdev, ep->ioaddr);
+-	pci_release_regions(pdev);
+ 	free_netdev(dev);
++	pci_release_regions(pdev);
+ 	pci_disable_device(pdev);
+ 	/* pci_power_off(pdev, -1); */
+ }
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index 3d75b98f3051d..3a8849716459a 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -243,9 +243,7 @@ static int dp83822_config_intr(struct phy_device *phydev)
+ 		if (misr_status < 0)
+ 			return misr_status;
+ 
+-		misr_status |= (DP83822_RX_ERR_HF_INT_EN |
+-				DP83822_FALSE_CARRIER_HF_INT_EN |
+-				DP83822_LINK_STAT_INT_EN |
++		misr_status |= (DP83822_LINK_STAT_INT_EN |
+ 				DP83822_ENERGY_DET_INT_EN |
+ 				DP83822_LINK_QUAL_INT_EN);
+ 
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 55ce141c93c75..be9ff6a74ecce 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -279,6 +279,12 @@ static void tun_napi_init(struct tun_struct *tun, struct tun_file *tfile,
+ 	}
+ }
+ 
++static void tun_napi_enable(struct tun_file *tfile)
++{
++	if (tfile->napi_enabled)
++		napi_enable(&tfile->napi);
++}
++
+ static void tun_napi_disable(struct tun_file *tfile)
+ {
+ 	if (tfile->napi_enabled)
+@@ -640,7 +646,8 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
+ 	tun = rtnl_dereference(tfile->tun);
+ 
+ 	if (tun && clean) {
+-		tun_napi_disable(tfile);
++		if (!tfile->detached)
++			tun_napi_disable(tfile);
+ 		tun_napi_del(tfile);
+ 	}
+ 
+@@ -659,8 +666,10 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
+ 		if (clean) {
+ 			RCU_INIT_POINTER(tfile->tun, NULL);
+ 			sock_put(&tfile->sk);
+-		} else
++		} else {
+ 			tun_disable_queue(tun, tfile);
++			tun_napi_disable(tfile);
++		}
+ 
+ 		synchronize_net();
+ 		tun_flow_delete_by_queue(tun, tun->numqueues + 1);
+@@ -733,6 +742,7 @@ static void tun_detach_all(struct net_device *dev)
+ 		sock_put(&tfile->sk);
+ 	}
+ 	list_for_each_entry_safe(tfile, tmp, &tun->disabled, next) {
++		tun_napi_del(tfile);
+ 		tun_enable_queue(tfile);
+ 		tun_queue_purge(tfile);
+ 		xdp_rxq_info_unreg(&tfile->xdp_rxq);
+@@ -813,6 +823,7 @@ static int tun_attach(struct tun_struct *tun, struct file *file,
+ 
+ 	if (tfile->detached) {
+ 		tun_enable_queue(tfile);
++		tun_napi_enable(tfile);
+ 	} else {
+ 		sock_hold(&tfile->sk);
+ 		tun_napi_init(tun, tfile, napi, napi_frags);
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index 0b0cbcee1920b..79a53fe245e5c 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -1471,6 +1471,42 @@ static int ax88179_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 	 * are bundled into this buffer and where we can find an array of
+ 	 * per-packet metadata (which contains elements encoded into u16).
+ 	 */
++
++	/* SKB contents for current firmware:
++	 *   <packet 1> <padding>
++	 *   ...
++	 *   <packet N> <padding>
++	 *   <per-packet metadata entry 1> <dummy header>
++	 *   ...
++	 *   <per-packet metadata entry N> <dummy header>
++	 *   <padding2> <rx_hdr>
++	 *
++	 * where:
++	 *   <packet N> contains pkt_len bytes:
++	 *		2 bytes of IP alignment pseudo header
++	 *		packet received
++	 *   <per-packet metadata entry N> contains 4 bytes:
++	 *		pkt_len and fields AX_RXHDR_*
++	 *   <padding>	0-7 bytes to terminate at
++	 *		8 bytes boundary (64-bit).
++	 *   <padding2> 4 bytes to make rx_hdr terminate at
++	 *		8 bytes boundary (64-bit)
++	 *   <dummy-header> contains 4 bytes:
++	 *		pkt_len=0 and AX_RXHDR_DROP_ERR
++	 *   <rx-hdr>	contains 4 bytes:
++	 *		pkt_cnt and hdr_off (offset of
++	 *		  <per-packet metadata entry 1>)
++	 *
++	 * pkt_cnt is number of entrys in the per-packet metadata.
++	 * In current firmware there is 2 entrys per packet.
++	 * The first points to the packet and the
++	 *  second is a dummy header.
++	 * This was done probably to align fields in 64-bit and
++	 *  maintain compatibility with old firmware.
++	 * This code assumes that <dummy header> and <padding2> are
++	 *  optional.
++	 */
++
+ 	if (skb->len < 4)
+ 		return 0;
+ 	skb_trim(skb, skb->len - 4);
+@@ -1484,51 +1520,66 @@ static int ax88179_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 	/* Make sure that the bounds of the metadata array are inside the SKB
+ 	 * (and in front of the counter at the end).
+ 	 */
+-	if (pkt_cnt * 2 + hdr_off > skb->len)
++	if (pkt_cnt * 4 + hdr_off > skb->len)
+ 		return 0;
+ 	pkt_hdr = (u32 *)(skb->data + hdr_off);
+ 
+ 	/* Packets must not overlap the metadata array */
+ 	skb_trim(skb, hdr_off);
+ 
+-	for (; ; pkt_cnt--, pkt_hdr++) {
++	for (; pkt_cnt > 0; pkt_cnt--, pkt_hdr++) {
++		u16 pkt_len_plus_padd;
+ 		u16 pkt_len;
+ 
+ 		le32_to_cpus(pkt_hdr);
+ 		pkt_len = (*pkt_hdr >> 16) & 0x1fff;
++		pkt_len_plus_padd = (pkt_len + 7) & 0xfff8;
+ 
+-		if (pkt_len > skb->len)
++		/* Skip dummy header used for alignment
++		 */
++		if (pkt_len == 0)
++			continue;
++
++		if (pkt_len_plus_padd > skb->len)
+ 			return 0;
+ 
+ 		/* Check CRC or runt packet */
+-		if (((*pkt_hdr & (AX_RXHDR_CRC_ERR | AX_RXHDR_DROP_ERR)) == 0) &&
+-		    pkt_len >= 2 + ETH_HLEN) {
+-			bool last = (pkt_cnt == 0);
+-
+-			if (last) {
+-				ax_skb = skb;
+-			} else {
+-				ax_skb = skb_clone(skb, GFP_ATOMIC);
+-				if (!ax_skb)
+-					return 0;
+-			}
+-			ax_skb->len = pkt_len;
+-			/* Skip IP alignment pseudo header */
+-			skb_pull(ax_skb, 2);
+-			skb_set_tail_pointer(ax_skb, ax_skb->len);
+-			ax_skb->truesize = pkt_len + sizeof(struct sk_buff);
+-			ax88179_rx_checksum(ax_skb, pkt_hdr);
++		if ((*pkt_hdr & (AX_RXHDR_CRC_ERR | AX_RXHDR_DROP_ERR)) ||
++		    pkt_len < 2 + ETH_HLEN) {
++			dev->net->stats.rx_errors++;
++			skb_pull(skb, pkt_len_plus_padd);
++			continue;
++		}
+ 
+-			if (last)
+-				return 1;
++		/* last packet */
++		if (pkt_len_plus_padd == skb->len) {
++			skb_trim(skb, pkt_len);
+ 
+-			usbnet_skb_return(dev, ax_skb);
++			/* Skip IP alignment pseudo header */
++			skb_pull(skb, 2);
++
++			skb->truesize = SKB_TRUESIZE(pkt_len_plus_padd);
++			ax88179_rx_checksum(skb, pkt_hdr);
++			return 1;
+ 		}
+ 
+-		/* Trim this packet away from the SKB */
+-		if (!skb_pull(skb, (pkt_len + 7) & 0xFFF8))
++		ax_skb = skb_clone(skb, GFP_ATOMIC);
++		if (!ax_skb)
+ 			return 0;
++		skb_trim(ax_skb, pkt_len);
++
++		/* Skip IP alignment pseudo header */
++		skb_pull(ax_skb, 2);
++
++		skb->truesize = pkt_len_plus_padd +
++				SKB_DATA_ALIGN(sizeof(struct sk_buff));
++		ax88179_rx_checksum(ax_skb, pkt_hdr);
++		usbnet_skb_return(dev, ax_skb);
++
++		skb_pull(skb, pkt_len_plus_padd);
+ 	}
++
++	return 0;
+ }
+ 
+ static struct sk_buff *
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 597766d14563e..48e8b94e4a7c5 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1293,6 +1293,8 @@ static const struct usb_device_id products[] = {
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)}, /* Telit LE910C1-EUX */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)},	/* Telit LE922A */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)},	/* Telit FN980 */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)},	/* Telit LN920 */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)},	/* Telit FN990 */
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1100, 3)},	/* Telit ME910 */
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1101, 3)},	/* Telit ME910 dual modem */
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},	/* Telit LE920 */
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 402390b1a66b5..74a833ad7aa99 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1969,7 +1969,7 @@ static int __usbnet_read_cmd(struct usbnet *dev, u8 cmd, u8 reqtype,
+ 		   cmd, reqtype, value, index, size);
+ 
+ 	if (size) {
+-		buf = kmalloc(size, GFP_KERNEL);
++		buf = kmalloc(size, GFP_NOIO);
+ 		if (!buf)
+ 			goto out;
+ 	}
+@@ -2001,7 +2001,7 @@ static int __usbnet_write_cmd(struct usbnet *dev, u8 cmd, u8 reqtype,
+ 		   cmd, reqtype, value, index, size);
+ 
+ 	if (data) {
+-		buf = kmemdup(data, size, GFP_KERNEL);
++		buf = kmemdup(data, size, GFP_NOIO);
+ 		if (!buf)
+ 			goto out;
+ 	} else {
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index ad9064df3debb..37178b078ee37 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -3171,14 +3171,20 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 		}
+ 	}
+ 
+-	err = register_netdev(dev);
++	/* serialize netdev register + virtio_device_ready() with ndo_open() */
++	rtnl_lock();
++
++	err = register_netdevice(dev);
+ 	if (err) {
+ 		pr_debug("virtio_net: registering device failed\n");
++		rtnl_unlock();
+ 		goto free_failover;
+ 	}
+ 
+ 	virtio_device_ready(vdev);
+ 
++	rtnl_unlock();
++
+ 	err = virtnet_cpu_notif_add(vi);
+ 	if (err) {
+ 		pr_debug("virtio_net: registering cpu notifier failed\n");
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 1a69b5246133b..569f3c8e7b756 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -66,6 +66,10 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
+ MODULE_PARM_DESC(max_queues,
+ 		 "Maximum number of queues per virtual interface");
+ 
++static bool __read_mostly xennet_trusted = true;
++module_param_named(trusted, xennet_trusted, bool, 0644);
++MODULE_PARM_DESC(trusted, "Is the backend trusted");
++
+ #define XENNET_TIMEOUT  (5 * HZ)
+ 
+ static const struct ethtool_ops xennet_ethtool_ops;
+@@ -175,6 +179,9 @@ struct netfront_info {
+ 	/* Is device behaving sane? */
+ 	bool broken;
+ 
++	/* Should skbs be bounced into a zeroed buffer? */
++	bool bounce;
++
+ 	atomic_t rx_gso_checksum_fixup;
+ };
+ 
+@@ -273,7 +280,8 @@ static struct sk_buff *xennet_alloc_one_rx_buffer(struct netfront_queue *queue)
+ 	if (unlikely(!skb))
+ 		return NULL;
+ 
+-	page = page_pool_dev_alloc_pages(queue->page_pool);
++	page = page_pool_alloc_pages(queue->page_pool,
++				     GFP_ATOMIC | __GFP_NOWARN | __GFP_ZERO);
+ 	if (unlikely(!page)) {
+ 		kfree_skb(skb);
+ 		return NULL;
+@@ -669,6 +677,33 @@ static int xennet_xdp_xmit(struct net_device *dev, int n,
+ 	return n - drops;
+ }
+ 
++struct sk_buff *bounce_skb(const struct sk_buff *skb)
++{
++	unsigned int headerlen = skb_headroom(skb);
++	/* Align size to allocate full pages and avoid contiguous data leaks */
++	unsigned int size = ALIGN(skb_end_offset(skb) + skb->data_len,
++				  XEN_PAGE_SIZE);
++	struct sk_buff *n = alloc_skb(size, GFP_ATOMIC | __GFP_ZERO);
++
++	if (!n)
++		return NULL;
++
++	if (!IS_ALIGNED((uintptr_t)n->head, XEN_PAGE_SIZE)) {
++		WARN_ONCE(1, "misaligned skb allocated\n");
++		kfree_skb(n);
++		return NULL;
++	}
++
++	/* Set the data pointer */
++	skb_reserve(n, headerlen);
++	/* Set the tail pointer and length */
++	skb_put(n, skb->len);
++
++	BUG_ON(skb_copy_bits(skb, -headerlen, n->head, headerlen + skb->len));
++
++	skb_copy_header(n, skb);
++	return n;
++}
+ 
+ #define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1)
+ 
+@@ -722,9 +757,13 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ 
+ 	/* The first req should be at least ETH_HLEN size or the packet will be
+ 	 * dropped by netback.
++	 *
++	 * If the backend is not trusted bounce all data to zeroed pages to
++	 * avoid exposing contiguous data on the granted page not belonging to
++	 * the skb.
+ 	 */
+-	if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
+-		nskb = skb_copy(skb, GFP_ATOMIC);
++	if (np->bounce || unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
++		nskb = bounce_skb(skb);
+ 		if (!nskb)
+ 			goto drop;
+ 		dev_consume_skb_any(skb);
+@@ -1057,8 +1096,10 @@ static int xennet_get_responses(struct netfront_queue *queue,
+ 			}
+ 		}
+ 		rcu_read_unlock();
+-next:
++
+ 		__skb_queue_tail(list, skb);
++
++next:
+ 		if (!(rx->flags & XEN_NETRXF_more_data))
+ 			break;
+ 
+@@ -2248,6 +2289,10 @@ static int talk_to_netback(struct xenbus_device *dev,
+ 
+ 	info->netdev->irq = 0;
+ 
++	/* Check if backend is trusted. */
++	info->bounce = !xennet_trusted ||
++		       !xenbus_read_unsigned(dev->nodename, "trusted", 1);
++
+ 	/* Check if backend supports multiple queues */
+ 	max_queues = xenbus_read_unsigned(info->xbdev->otherend,
+ 					  "multi-queue-max-queues", 1);
+@@ -2414,6 +2459,9 @@ static int xennet_connect(struct net_device *dev)
+ 		return err;
+ 	if (np->netback_has_xdp_headroom)
+ 		pr_info("backend supports XDP headroom\n");
++	if (np->bounce)
++		dev_info(&np->xbdev->dev,
++			 "bouncing transmitted data to zeroed pages\n");
+ 
+ 	/* talk_to_netback() sets the correct number of queues */
+ 	num_queues = dev->real_num_tx_queues;
+diff --git a/drivers/nfc/nfcmrvl/i2c.c b/drivers/nfc/nfcmrvl/i2c.c
+index 18cd96284b77a..f81f1cae93243 100644
+--- a/drivers/nfc/nfcmrvl/i2c.c
++++ b/drivers/nfc/nfcmrvl/i2c.c
+@@ -186,9 +186,9 @@ static int nfcmrvl_i2c_parse_dt(struct device_node *node,
+ 		pdata->irq_polarity = IRQF_TRIGGER_RISING;
+ 
+ 	ret = irq_of_parse_and_map(node, 0);
+-	if (ret < 0) {
+-		pr_err("Unable to get irq, error: %d\n", ret);
+-		return ret;
++	if (!ret) {
++		pr_err("Unable to get irq\n");
++		return -EINVAL;
+ 	}
+ 	pdata->irq = ret;
+ 
+diff --git a/drivers/nfc/nfcmrvl/spi.c b/drivers/nfc/nfcmrvl/spi.c
+index 8e0ddb4347704..1f4120e3314b2 100644
+--- a/drivers/nfc/nfcmrvl/spi.c
++++ b/drivers/nfc/nfcmrvl/spi.c
+@@ -129,9 +129,9 @@ static int nfcmrvl_spi_parse_dt(struct device_node *node,
+ 	}
+ 
+ 	ret = irq_of_parse_and_map(node, 0);
+-	if (ret < 0) {
+-		pr_err("Unable to get irq, error: %d\n", ret);
+-		return ret;
++	if (!ret) {
++		pr_err("Unable to get irq\n");
++		return -EINVAL;
+ 	}
+ 	pdata->irq = ret;
+ 
+diff --git a/drivers/nfc/nxp-nci/i2c.c b/drivers/nfc/nxp-nci/i2c.c
+index 9f60e4dc5a908..3943a30053b3b 100644
+--- a/drivers/nfc/nxp-nci/i2c.c
++++ b/drivers/nfc/nxp-nci/i2c.c
+@@ -162,6 +162,9 @@ static int nxp_nci_i2c_nci_read(struct nxp_nci_i2c_phy *phy,
+ 
+ 	skb_put_data(*skb, (void *)&header, NCI_CTRL_HDR_SIZE);
+ 
++	if (!header.plen)
++		return 0;
++
+ 	r = i2c_master_recv(client, skb_put(*skb, header.plen), header.plen);
+ 	if (r != header.plen) {
+ 		nfc_err(&client->dev,
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 2304c6183822e..9ec59960f2163 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -187,8 +187,8 @@ static int nvdimm_clear_badblocks_region(struct device *dev, void *data)
+ 	ndr_end = nd_region->ndr_start + nd_region->ndr_size - 1;
+ 
+ 	/* make sure we are in the region */
+-	if (ctx->phys < nd_region->ndr_start
+-			|| (ctx->phys + ctx->cleared) > ndr_end)
++	if (ctx->phys < nd_region->ndr_start ||
++	    (ctx->phys + ctx->cleared - 1) > ndr_end)
+ 		return 0;
+ 
+ 	sector = (ctx->phys - nd_region->ndr_start) / 512;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 9e633f4dcec71..3622c5c9515fa 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3245,7 +3245,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 	{ PCI_DEVICE(0x1d1d, 0x2601),	/* CNEX Granby */
+ 		.driver_data = NVME_QUIRK_LIGHTNVM, },
+ 	{ PCI_DEVICE(0x10ec, 0x5762),   /* ADATA SX6000LNP */
+-		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
++		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN |
++				NVME_QUIRK_BOGUS_NID, },
+ 	{ PCI_DEVICE(0x1cc1, 0x8201),   /* ADATA SX8200PNP 512GB */
+ 		.driver_data = NVME_QUIRK_NO_DEEPEST_PS |
+ 				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
+index 20d7d059dadb5..40ef379c28ab0 100644
+--- a/drivers/xen/gntdev-common.h
++++ b/drivers/xen/gntdev-common.h
+@@ -16,6 +16,7 @@
+ #include <linux/mmu_notifier.h>
+ #include <linux/types.h>
+ #include <xen/interface/event_channel.h>
++#include <xen/grant_table.h>
+ 
+ struct gntdev_dmabuf_priv;
+ 
+@@ -56,6 +57,7 @@ struct gntdev_grant_map {
+ 	struct gnttab_unmap_grant_ref *unmap_ops;
+ 	struct gnttab_map_grant_ref   *kmap_ops;
+ 	struct gnttab_unmap_grant_ref *kunmap_ops;
++	bool *being_removed;
+ 	struct page **pages;
+ 	unsigned long pages_vm_start;
+ 
+@@ -73,6 +75,11 @@ struct gntdev_grant_map {
+ 	/* Needed to avoid allocation in gnttab_dma_free_pages(). */
+ 	xen_pfn_t *frames;
+ #endif
++
++	/* Number of live grants */
++	atomic_t live_grants;
++	/* Needed to avoid allocation in __unmap_grant_pages */
++	struct gntab_unmap_queue_data unmap_data;
+ };
+ 
+ struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 54778aadf618d..f415c056ff8ab 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -35,6 +35,7 @@
+ #include <linux/slab.h>
+ #include <linux/highmem.h>
+ #include <linux/refcount.h>
++#include <linux/workqueue.h>
+ 
+ #include <xen/xen.h>
+ #include <xen/grant_table.h>
+@@ -60,10 +61,11 @@ module_param(limit, uint, 0644);
+ MODULE_PARM_DESC(limit,
+ 	"Maximum number of grants that may be mapped by one mapping request");
+ 
++/* True in PV mode, false otherwise */
+ static int use_ptemod;
+ 
+-static int unmap_grant_pages(struct gntdev_grant_map *map,
+-			     int offset, int pages);
++static void unmap_grant_pages(struct gntdev_grant_map *map,
++			      int offset, int pages);
+ 
+ static struct miscdevice gntdev_miscdev;
+ 
+@@ -120,6 +122,7 @@ static void gntdev_free_map(struct gntdev_grant_map *map)
+ 	kvfree(map->unmap_ops);
+ 	kvfree(map->kmap_ops);
+ 	kvfree(map->kunmap_ops);
++	kvfree(map->being_removed);
+ 	kfree(map);
+ }
+ 
+@@ -140,12 +143,15 @@ struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
+ 	add->kunmap_ops = kvcalloc(count,
+ 				   sizeof(add->kunmap_ops[0]), GFP_KERNEL);
+ 	add->pages     = kvcalloc(count, sizeof(add->pages[0]), GFP_KERNEL);
++	add->being_removed =
++		kvcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL);
+ 	if (NULL == add->grants    ||
+ 	    NULL == add->map_ops   ||
+ 	    NULL == add->unmap_ops ||
+ 	    NULL == add->kmap_ops  ||
+ 	    NULL == add->kunmap_ops ||
+-	    NULL == add->pages)
++	    NULL == add->pages     ||
++	    NULL == add->being_removed)
+ 		goto err;
+ 
+ #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+@@ -240,9 +246,36 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
+ 	if (!refcount_dec_and_test(&map->users))
+ 		return;
+ 
+-	if (map->pages && !use_ptemod)
++	if (map->pages && !use_ptemod) {
++		/*
++		 * Increment the reference count.  This ensures that the
++		 * subsequent call to unmap_grant_pages() will not wind up
++		 * re-entering itself.  It *can* wind up calling
++		 * gntdev_put_map() recursively, but such calls will be with a
++		 * reference count greater than 1, so they will return before
++		 * this code is reached.  The recursion depth is thus limited to
++		 * 1.  Do NOT use refcount_inc() here, as it will detect that
++		 * the reference count is zero and WARN().
++		 */
++		refcount_set(&map->users, 1);
++
++		/*
++		 * Unmap the grants.  This may or may not be asynchronous, so it
++		 * is possible that the reference count is 1 on return, but it
++		 * could also be greater than 1.
++		 */
+ 		unmap_grant_pages(map, 0, map->count);
+ 
++		/* Check if the memory now needs to be freed */
++		if (!refcount_dec_and_test(&map->users))
++			return;
++
++		/*
++		 * All pages have been returned to the hypervisor, so free the
++		 * map.
++		 */
++	}
++
+ 	if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
+ 		notify_remote_via_evtchn(map->notify.event);
+ 		evtchn_put(map->notify.event);
+@@ -288,6 +321,7 @@ static int set_grant_ptes_as_special(pte_t *pte, unsigned long addr, void *data)
+ 
+ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
+ {
++	size_t alloced = 0;
+ 	int i, err = 0;
+ 
+ 	if (!use_ptemod) {
+@@ -336,87 +370,109 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
+ 			map->pages, map->count);
+ 
+ 	for (i = 0; i < map->count; i++) {
+-		if (map->map_ops[i].status == GNTST_okay)
++		if (map->map_ops[i].status == GNTST_okay) {
+ 			map->unmap_ops[i].handle = map->map_ops[i].handle;
+-		else if (!err)
++			if (!use_ptemod)
++				alloced++;
++		} else if (!err)
+ 			err = -EINVAL;
+ 
+ 		if (map->flags & GNTMAP_device_map)
+ 			map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr;
+ 
+ 		if (use_ptemod) {
+-			if (map->kmap_ops[i].status == GNTST_okay)
++			if (map->kmap_ops[i].status == GNTST_okay) {
++				if (map->map_ops[i].status == GNTST_okay)
++					alloced++;
+ 				map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
+-			else if (!err)
++			} else if (!err)
+ 				err = -EINVAL;
+ 		}
+ 	}
++	atomic_add(alloced, &map->live_grants);
+ 	return err;
+ }
+ 
+-static int __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+-			       int pages)
++static void __unmap_grant_pages_done(int result,
++		struct gntab_unmap_queue_data *data)
+ {
+-	int i, err = 0;
+-	struct gntab_unmap_queue_data unmap_data;
++	unsigned int i;
++	struct gntdev_grant_map *map = data->data;
++	unsigned int offset = data->unmap_ops - map->unmap_ops;
+ 
++	for (i = 0; i < data->count; i++) {
++		WARN_ON(map->unmap_ops[offset+i].status);
++		pr_debug("unmap handle=%d st=%d\n",
++			map->unmap_ops[offset+i].handle,
++			map->unmap_ops[offset+i].status);
++		map->unmap_ops[offset+i].handle = -1;
++	}
++	/*
++	 * Decrease the live-grant counter.  This must happen after the loop to
++	 * prevent premature reuse of the grants by gnttab_mmap().
++	 */
++	atomic_sub(data->count, &map->live_grants);
++
++	/* Release reference taken by __unmap_grant_pages */
++	gntdev_put_map(NULL, map);
++}
++
++static void __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
++			       int pages)
++{
+ 	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
+ 		int pgno = (map->notify.addr >> PAGE_SHIFT);
++
+ 		if (pgno >= offset && pgno < offset + pages) {
+ 			/* No need for kmap, pages are in lowmem */
+ 			uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
++
+ 			tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
+ 			map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
+ 		}
+ 	}
+ 
+-	unmap_data.unmap_ops = map->unmap_ops + offset;
+-	unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
+-	unmap_data.pages = map->pages + offset;
+-	unmap_data.count = pages;
++	map->unmap_data.unmap_ops = map->unmap_ops + offset;
++	map->unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
++	map->unmap_data.pages = map->pages + offset;
++	map->unmap_data.count = pages;
++	map->unmap_data.done = __unmap_grant_pages_done;
++	map->unmap_data.data = map;
++	refcount_inc(&map->users); /* to keep map alive during async call below */
+ 
+-	err = gnttab_unmap_refs_sync(&unmap_data);
+-	if (err)
+-		return err;
+-
+-	for (i = 0; i < pages; i++) {
+-		if (map->unmap_ops[offset+i].status)
+-			err = -EINVAL;
+-		pr_debug("unmap handle=%d st=%d\n",
+-			map->unmap_ops[offset+i].handle,
+-			map->unmap_ops[offset+i].status);
+-		map->unmap_ops[offset+i].handle = -1;
+-	}
+-	return err;
++	gnttab_unmap_refs_async(&map->unmap_data);
+ }
+ 
+-static int unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+-			     int pages)
++static void unmap_grant_pages(struct gntdev_grant_map *map, int offset,
++			      int pages)
+ {
+-	int range, err = 0;
++	int range;
++
++	if (atomic_read(&map->live_grants) == 0)
++		return; /* Nothing to do */
+ 
+ 	pr_debug("unmap %d+%d [%d+%d]\n", map->index, map->count, offset, pages);
+ 
+ 	/* It is possible the requested range will have a "hole" where we
+ 	 * already unmapped some of the grants. Only unmap valid ranges.
+ 	 */
+-	while (pages && !err) {
+-		while (pages && map->unmap_ops[offset].handle == -1) {
++	while (pages) {
++		while (pages && map->being_removed[offset]) {
+ 			offset++;
+ 			pages--;
+ 		}
+ 		range = 0;
+ 		while (range < pages) {
+-			if (map->unmap_ops[offset+range].handle == -1)
++			if (map->being_removed[offset + range])
+ 				break;
++			map->being_removed[offset + range] = true;
+ 			range++;
+ 		}
+-		err = __unmap_grant_pages(map, offset, range);
++		if (range)
++			__unmap_grant_pages(map, offset, range);
+ 		offset += range;
+ 		pages -= range;
+ 	}
+-
+-	return err;
+ }
+ 
+ /* ------------------------------------------------------------------ */
+@@ -468,7 +524,6 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
+ 	struct gntdev_grant_map *map =
+ 		container_of(mn, struct gntdev_grant_map, notifier);
+ 	unsigned long mstart, mend;
+-	int err;
+ 
+ 	if (!mmu_notifier_range_blockable(range))
+ 		return false;
+@@ -489,10 +544,9 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
+ 			map->index, map->count,
+ 			map->vma->vm_start, map->vma->vm_end,
+ 			range->start, range->end, mstart, mend);
+-	err = unmap_grant_pages(map,
++	unmap_grant_pages(map,
+ 				(mstart - map->vma->vm_start) >> PAGE_SHIFT,
+ 				(mend - mstart) >> PAGE_SHIFT);
+-	WARN_ON(err);
+ 
+ 	return true;
+ }
+@@ -980,6 +1034,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+ 		goto unlock_out;
+ 	if (use_ptemod && map->vma)
+ 		goto unlock_out;
++	if (atomic_read(&map->live_grants)) {
++		err = -EAGAIN;
++		goto unlock_out;
++	}
+ 	refcount_inc(&map->users);
+ 
+ 	vma->vm_ops = &gntdev_vmops;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 2e12dcbc7b0fd..023c3eb34dcca 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4366,6 +4366,8 @@ static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
++	if (unlikely(sqe->addr2 || sqe->splice_fd_in || sqe->ioprio))
++		return -EINVAL;
+ 
+ 	sr->msg_flags = READ_ONCE(sqe->msg_flags);
+ 	sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
+@@ -4601,6 +4603,8 @@ static int io_recvmsg_prep(struct io_kiocb *req,
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
++	if (unlikely(sqe->addr2 || sqe->splice_fd_in || sqe->ioprio))
++		return -EINVAL;
+ 
+ 	sr->msg_flags = READ_ONCE(sqe->msg_flags);
+ 	sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index dd33b31b0a826..86297f59b43e2 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1460,13 +1460,6 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
+ 			PF_MEMALLOC))
+ 		goto redirty;
+ 
+-	/*
+-	 * Given that we do not allow direct reclaim to call us, we should
+-	 * never be called in a recursive filesystem reclaim context.
+-	 */
+-	if (WARN_ON_ONCE(current->flags & PF_MEMALLOC_NOFS))
+-		goto redirty;
+-
+ 	/*
+ 	 * Is this page beyond the end of the file?
+ 	 *
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 548ebc913f920..c852bb5ff2121 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -1156,6 +1156,7 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 						nfsd_net_id));
+ 			err2 = filemap_check_wb_err(nf->nf_file->f_mapping,
+ 						    since);
++			err = nfserrno(err2);
+ 			break;
+ 		case -EINVAL:
+ 			err = nfserr_notsupp;
+@@ -1163,8 +1164,8 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 		default:
+ 			nfsd_reset_boot_verifier(net_generic(nf->nf_net,
+ 						 nfsd_net_id));
++			err = nfserrno(err2);
+ 		}
+-		err = nfserrno(err2);
+ 	} else
+ 		nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net,
+ 					nfsd_net_id));
+diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c
+index 98c82f4935e1e..24c7d30e41dfe 100644
+--- a/fs/xfs/libxfs/xfs_btree.c
++++ b/fs/xfs/libxfs/xfs_btree.c
+@@ -2811,7 +2811,7 @@ xfs_btree_split_worker(
+ 	struct xfs_btree_split_args	*args = container_of(work,
+ 						struct xfs_btree_split_args, work);
+ 	unsigned long		pflags;
+-	unsigned long		new_pflags = PF_MEMALLOC_NOFS;
++	unsigned long		new_pflags = 0;
+ 
+ 	/*
+ 	 * we are in a transaction context here, but may also be doing work
+@@ -2823,12 +2823,20 @@ xfs_btree_split_worker(
+ 		new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD;
+ 
+ 	current_set_flags_nested(&pflags, new_pflags);
++	xfs_trans_set_context(args->cur->bc_tp);
+ 
+ 	args->result = __xfs_btree_split(args->cur, args->level, args->ptrp,
+ 					 args->key, args->curp, args->stat);
+-	complete(args->done);
+ 
++	xfs_trans_clear_context(args->cur->bc_tp);
+ 	current_restore_flags_nested(&pflags, new_pflags);
++
++	/*
++	 * Do not access args after complete() has run here. We don't own args
++	 * and the owner may run and free args before we return here.
++	 */
++	complete(args->done);
++
+ }
+ 
+ /*
+diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
+index 5aeafa59ed276..66e8353da2f3c 100644
+--- a/fs/xfs/libxfs/xfs_sb.c
++++ b/fs/xfs/libxfs/xfs_sb.c
+@@ -956,9 +956,19 @@ xfs_log_sb(
+ 	struct xfs_mount	*mp = tp->t_mountp;
+ 	struct xfs_buf		*bp = xfs_trans_getsb(tp);
+ 
+-	mp->m_sb.sb_icount = percpu_counter_sum(&mp->m_icount);
+-	mp->m_sb.sb_ifree = percpu_counter_sum(&mp->m_ifree);
+-	mp->m_sb.sb_fdblocks = percpu_counter_sum(&mp->m_fdblocks);
++	/*
++	 * Lazy sb counters don't update the in-core superblock so do that now.
++	 * If this is at unmount, the counters will be exactly correct, but at
++	 * any other time they will only be ballpark correct because of
++	 * reservations that have been taken out percpu counters. If we have an
++	 * unclean shutdown, this will be corrected by log recovery rebuilding
++	 * the counters from the AGF block counts.
++	 */
++	if (xfs_sb_version_haslazysbcount(&mp->m_sb)) {
++		mp->m_sb.sb_icount = percpu_counter_sum(&mp->m_icount);
++		mp->m_sb.sb_ifree = percpu_counter_sum(&mp->m_ifree);
++		mp->m_sb.sb_fdblocks = percpu_counter_sum(&mp->m_fdblocks);
++	}
+ 
+ 	xfs_sb_to_disk(bp->b_addr, &mp->m_sb);
+ 	xfs_trans_buf_set_type(tp, bp, XFS_BLFT_SB_BUF);
+diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
+index 4b76a32d2f16d..953de843d9c38 100644
+--- a/fs/xfs/xfs_aops.c
++++ b/fs/xfs/xfs_aops.c
+@@ -62,7 +62,7 @@ xfs_setfilesize_trans_alloc(
+ 	 * We hand off the transaction to the completion thread now, so
+ 	 * clear the flag here.
+ 	 */
+-	current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
++	xfs_trans_clear_context(tp);
+ 	return 0;
+ }
+ 
+@@ -125,7 +125,7 @@ xfs_setfilesize_ioend(
+ 	 * thus we need to mark ourselves as being in a transaction manually.
+ 	 * Similarly for freeze protection.
+ 	 */
+-	current_set_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
++	xfs_trans_set_context(tp);
+ 	__sb_writers_acquired(VFS_I(ip)->i_sb, SB_FREEZE_FS);
+ 
+ 	/* we abort the update if there was an IO error */
+@@ -577,6 +577,12 @@ xfs_vm_writepage(
+ {
+ 	struct xfs_writepage_ctx wpc = { };
+ 
++	if (WARN_ON_ONCE(current->journal_info)) {
++		redirty_page_for_writepage(wbc, page);
++		unlock_page(page);
++		return 0;
++	}
++
+ 	return iomap_writepage(page, wbc, &wpc.ctx, &xfs_writeback_ops);
+ }
+ 
+@@ -587,6 +593,13 @@ xfs_vm_writepages(
+ {
+ 	struct xfs_writepage_ctx wpc = { };
+ 
++	/*
++	 * Writing back data in a transaction context can result in recursive
++	 * transactions. This is bad, so issue a warning and get out of here.
++	 */
++	if (WARN_ON_ONCE(current->journal_info))
++		return 0;
++
+ 	xfs_iflags_clear(XFS_I(mapping->host), XFS_ITRUNCATED);
+ 	return iomap_writepages(mapping, wbc, &wpc.ctx, &xfs_writeback_ops);
+ }
+diff --git a/fs/xfs/xfs_error.c b/fs/xfs/xfs_error.c
+index 7f6e208994730..f9e2f606b5b8c 100644
+--- a/fs/xfs/xfs_error.c
++++ b/fs/xfs/xfs_error.c
+@@ -293,6 +293,8 @@ xfs_errortag_add(
+ 	struct xfs_mount	*mp,
+ 	unsigned int		error_tag)
+ {
++	BUILD_BUG_ON(ARRAY_SIZE(xfs_errortag_random_default) != XFS_ERRTAG_MAX);
++
+ 	if (error_tag >= XFS_ERRTAG_MAX)
+ 		return -EINVAL;
+ 
+diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
+index 6fa05fb78189b..aa46b75d75af5 100644
+--- a/fs/xfs/xfs_reflink.c
++++ b/fs/xfs/xfs_reflink.c
+@@ -1503,7 +1503,8 @@ xfs_reflink_unshare(
+ 	if (error)
+ 		goto out;
+ 
+-	error = filemap_write_and_wait_range(inode->i_mapping, offset, len);
++	error = filemap_write_and_wait_range(inode->i_mapping, offset,
++			offset + len - 1);
+ 	if (error)
+ 		goto out;
+ 
+diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
+index 05cea7788d494..6323974d6b3e6 100644
+--- a/fs/xfs/xfs_super.c
++++ b/fs/xfs/xfs_super.c
+@@ -1155,6 +1155,22 @@ suffix_kstrtoint(
+ 	return ret;
+ }
+ 
++static inline void
++xfs_fs_warn_deprecated(
++	struct fs_context	*fc,
++	struct fs_parameter	*param,
++	uint64_t		flag,
++	bool			value)
++{
++	/* Don't print the warning if reconfiguring and current mount point
++	 * already had the flag set
++	 */
++	if ((fc->purpose & FS_CONTEXT_FOR_RECONFIGURE) &&
++			!!(XFS_M(fc->root->d_sb)->m_flags & flag) == value)
++		return;
++	xfs_warn(fc->s_fs_info, "%s mount option is deprecated.", param->key);
++}
++
+ /*
+  * Set mount state from a mount option.
+  *
+@@ -1165,7 +1181,7 @@ xfs_fc_parse_param(
+ 	struct fs_context	*fc,
+ 	struct fs_parameter	*param)
+ {
+-	struct xfs_mount	*mp = fc->s_fs_info;
++	struct xfs_mount	*parsing_mp = fc->s_fs_info;
+ 	struct fs_parse_result	result;
+ 	int			size = 0;
+ 	int			opt;
+@@ -1176,142 +1192,142 @@ xfs_fc_parse_param(
+ 
+ 	switch (opt) {
+ 	case Opt_logbufs:
+-		mp->m_logbufs = result.uint_32;
++		parsing_mp->m_logbufs = result.uint_32;
+ 		return 0;
+ 	case Opt_logbsize:
+-		if (suffix_kstrtoint(param->string, 10, &mp->m_logbsize))
++		if (suffix_kstrtoint(param->string, 10, &parsing_mp->m_logbsize))
+ 			return -EINVAL;
+ 		return 0;
+ 	case Opt_logdev:
+-		kfree(mp->m_logname);
+-		mp->m_logname = kstrdup(param->string, GFP_KERNEL);
+-		if (!mp->m_logname)
++		kfree(parsing_mp->m_logname);
++		parsing_mp->m_logname = kstrdup(param->string, GFP_KERNEL);
++		if (!parsing_mp->m_logname)
+ 			return -ENOMEM;
+ 		return 0;
+ 	case Opt_rtdev:
+-		kfree(mp->m_rtname);
+-		mp->m_rtname = kstrdup(param->string, GFP_KERNEL);
+-		if (!mp->m_rtname)
++		kfree(parsing_mp->m_rtname);
++		parsing_mp->m_rtname = kstrdup(param->string, GFP_KERNEL);
++		if (!parsing_mp->m_rtname)
+ 			return -ENOMEM;
+ 		return 0;
+ 	case Opt_allocsize:
+ 		if (suffix_kstrtoint(param->string, 10, &size))
+ 			return -EINVAL;
+-		mp->m_allocsize_log = ffs(size) - 1;
+-		mp->m_flags |= XFS_MOUNT_ALLOCSIZE;
++		parsing_mp->m_allocsize_log = ffs(size) - 1;
++		parsing_mp->m_flags |= XFS_MOUNT_ALLOCSIZE;
+ 		return 0;
+ 	case Opt_grpid:
+ 	case Opt_bsdgroups:
+-		mp->m_flags |= XFS_MOUNT_GRPID;
++		parsing_mp->m_flags |= XFS_MOUNT_GRPID;
+ 		return 0;
+ 	case Opt_nogrpid:
+ 	case Opt_sysvgroups:
+-		mp->m_flags &= ~XFS_MOUNT_GRPID;
++		parsing_mp->m_flags &= ~XFS_MOUNT_GRPID;
+ 		return 0;
+ 	case Opt_wsync:
+-		mp->m_flags |= XFS_MOUNT_WSYNC;
++		parsing_mp->m_flags |= XFS_MOUNT_WSYNC;
+ 		return 0;
+ 	case Opt_norecovery:
+-		mp->m_flags |= XFS_MOUNT_NORECOVERY;
++		parsing_mp->m_flags |= XFS_MOUNT_NORECOVERY;
+ 		return 0;
+ 	case Opt_noalign:
+-		mp->m_flags |= XFS_MOUNT_NOALIGN;
++		parsing_mp->m_flags |= XFS_MOUNT_NOALIGN;
+ 		return 0;
+ 	case Opt_swalloc:
+-		mp->m_flags |= XFS_MOUNT_SWALLOC;
++		parsing_mp->m_flags |= XFS_MOUNT_SWALLOC;
+ 		return 0;
+ 	case Opt_sunit:
+-		mp->m_dalign = result.uint_32;
++		parsing_mp->m_dalign = result.uint_32;
+ 		return 0;
+ 	case Opt_swidth:
+-		mp->m_swidth = result.uint_32;
++		parsing_mp->m_swidth = result.uint_32;
+ 		return 0;
+ 	case Opt_inode32:
+-		mp->m_flags |= XFS_MOUNT_SMALL_INUMS;
++		parsing_mp->m_flags |= XFS_MOUNT_SMALL_INUMS;
+ 		return 0;
+ 	case Opt_inode64:
+-		mp->m_flags &= ~XFS_MOUNT_SMALL_INUMS;
++		parsing_mp->m_flags &= ~XFS_MOUNT_SMALL_INUMS;
+ 		return 0;
+ 	case Opt_nouuid:
+-		mp->m_flags |= XFS_MOUNT_NOUUID;
++		parsing_mp->m_flags |= XFS_MOUNT_NOUUID;
+ 		return 0;
+ 	case Opt_largeio:
+-		mp->m_flags |= XFS_MOUNT_LARGEIO;
++		parsing_mp->m_flags |= XFS_MOUNT_LARGEIO;
+ 		return 0;
+ 	case Opt_nolargeio:
+-		mp->m_flags &= ~XFS_MOUNT_LARGEIO;
++		parsing_mp->m_flags &= ~XFS_MOUNT_LARGEIO;
+ 		return 0;
+ 	case Opt_filestreams:
+-		mp->m_flags |= XFS_MOUNT_FILESTREAMS;
++		parsing_mp->m_flags |= XFS_MOUNT_FILESTREAMS;
+ 		return 0;
+ 	case Opt_noquota:
+-		mp->m_qflags &= ~XFS_ALL_QUOTA_ACCT;
+-		mp->m_qflags &= ~XFS_ALL_QUOTA_ENFD;
+-		mp->m_qflags &= ~XFS_ALL_QUOTA_ACTIVE;
++		parsing_mp->m_qflags &= ~XFS_ALL_QUOTA_ACCT;
++		parsing_mp->m_qflags &= ~XFS_ALL_QUOTA_ENFD;
++		parsing_mp->m_qflags &= ~XFS_ALL_QUOTA_ACTIVE;
+ 		return 0;
+ 	case Opt_quota:
+ 	case Opt_uquota:
+ 	case Opt_usrquota:
+-		mp->m_qflags |= (XFS_UQUOTA_ACCT | XFS_UQUOTA_ACTIVE |
++		parsing_mp->m_qflags |= (XFS_UQUOTA_ACCT | XFS_UQUOTA_ACTIVE |
+ 				 XFS_UQUOTA_ENFD);
+ 		return 0;
+ 	case Opt_qnoenforce:
+ 	case Opt_uqnoenforce:
+-		mp->m_qflags |= (XFS_UQUOTA_ACCT | XFS_UQUOTA_ACTIVE);
+-		mp->m_qflags &= ~XFS_UQUOTA_ENFD;
++		parsing_mp->m_qflags |= (XFS_UQUOTA_ACCT | XFS_UQUOTA_ACTIVE);
++		parsing_mp->m_qflags &= ~XFS_UQUOTA_ENFD;
+ 		return 0;
+ 	case Opt_pquota:
+ 	case Opt_prjquota:
+-		mp->m_qflags |= (XFS_PQUOTA_ACCT | XFS_PQUOTA_ACTIVE |
++		parsing_mp->m_qflags |= (XFS_PQUOTA_ACCT | XFS_PQUOTA_ACTIVE |
+ 				 XFS_PQUOTA_ENFD);
+ 		return 0;
+ 	case Opt_pqnoenforce:
+-		mp->m_qflags |= (XFS_PQUOTA_ACCT | XFS_PQUOTA_ACTIVE);
+-		mp->m_qflags &= ~XFS_PQUOTA_ENFD;
++		parsing_mp->m_qflags |= (XFS_PQUOTA_ACCT | XFS_PQUOTA_ACTIVE);
++		parsing_mp->m_qflags &= ~XFS_PQUOTA_ENFD;
+ 		return 0;
+ 	case Opt_gquota:
+ 	case Opt_grpquota:
+-		mp->m_qflags |= (XFS_GQUOTA_ACCT | XFS_GQUOTA_ACTIVE |
++		parsing_mp->m_qflags |= (XFS_GQUOTA_ACCT | XFS_GQUOTA_ACTIVE |
+ 				 XFS_GQUOTA_ENFD);
+ 		return 0;
+ 	case Opt_gqnoenforce:
+-		mp->m_qflags |= (XFS_GQUOTA_ACCT | XFS_GQUOTA_ACTIVE);
+-		mp->m_qflags &= ~XFS_GQUOTA_ENFD;
++		parsing_mp->m_qflags |= (XFS_GQUOTA_ACCT | XFS_GQUOTA_ACTIVE);
++		parsing_mp->m_qflags &= ~XFS_GQUOTA_ENFD;
+ 		return 0;
+ 	case Opt_discard:
+-		mp->m_flags |= XFS_MOUNT_DISCARD;
++		parsing_mp->m_flags |= XFS_MOUNT_DISCARD;
+ 		return 0;
+ 	case Opt_nodiscard:
+-		mp->m_flags &= ~XFS_MOUNT_DISCARD;
++		parsing_mp->m_flags &= ~XFS_MOUNT_DISCARD;
+ 		return 0;
+ #ifdef CONFIG_FS_DAX
+ 	case Opt_dax:
+-		xfs_mount_set_dax_mode(mp, XFS_DAX_ALWAYS);
++		xfs_mount_set_dax_mode(parsing_mp, XFS_DAX_ALWAYS);
+ 		return 0;
+ 	case Opt_dax_enum:
+-		xfs_mount_set_dax_mode(mp, result.uint_32);
++		xfs_mount_set_dax_mode(parsing_mp, result.uint_32);
+ 		return 0;
+ #endif
+ 	/* Following mount options will be removed in September 2025 */
+ 	case Opt_ikeep:
+-		xfs_warn(mp, "%s mount option is deprecated.", param->key);
+-		mp->m_flags |= XFS_MOUNT_IKEEP;
++		xfs_fs_warn_deprecated(fc, param, XFS_MOUNT_IKEEP, true);
++		parsing_mp->m_flags |= XFS_MOUNT_IKEEP;
+ 		return 0;
+ 	case Opt_noikeep:
+-		xfs_warn(mp, "%s mount option is deprecated.", param->key);
+-		mp->m_flags &= ~XFS_MOUNT_IKEEP;
++		xfs_fs_warn_deprecated(fc, param, XFS_MOUNT_IKEEP, false);
++		parsing_mp->m_flags &= ~XFS_MOUNT_IKEEP;
+ 		return 0;
+ 	case Opt_attr2:
+-		xfs_warn(mp, "%s mount option is deprecated.", param->key);
+-		mp->m_flags |= XFS_MOUNT_ATTR2;
++		xfs_fs_warn_deprecated(fc, param, XFS_MOUNT_ATTR2, true);
++		parsing_mp->m_flags |= XFS_MOUNT_ATTR2;
+ 		return 0;
+ 	case Opt_noattr2:
+-		xfs_warn(mp, "%s mount option is deprecated.", param->key);
+-		mp->m_flags &= ~XFS_MOUNT_ATTR2;
+-		mp->m_flags |= XFS_MOUNT_NOATTR2;
++		xfs_fs_warn_deprecated(fc, param, XFS_MOUNT_NOATTR2, true);
++		parsing_mp->m_flags &= ~XFS_MOUNT_ATTR2;
++		parsing_mp->m_flags |= XFS_MOUNT_NOATTR2;
+ 		return 0;
+ 	default:
+-		xfs_warn(mp, "unknown mount option [%s].", param->key);
++		xfs_warn(parsing_mp, "unknown mount option [%s].", param->key);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1918,7 +1934,7 @@ xfs_init_zones(void)
+ 	if (!xfs_ifork_zone)
+ 		goto out_destroy_da_state_zone;
+ 
+-	xfs_trans_zone = kmem_cache_create("xf_trans",
++	xfs_trans_zone = kmem_cache_create("xfs_trans",
+ 					   sizeof(struct xfs_trans),
+ 					   0, 0, NULL);
+ 	if (!xfs_trans_zone)
+diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
+index c94e71f741b67..36166bae24a6f 100644
+--- a/fs/xfs/xfs_trans.c
++++ b/fs/xfs/xfs_trans.c
+@@ -68,6 +68,7 @@ xfs_trans_free(
+ 	xfs_extent_busy_clear(tp->t_mountp, &tp->t_busy, false);
+ 
+ 	trace_xfs_trans_free(tp, _RET_IP_);
++	xfs_trans_clear_context(tp);
+ 	if (!(tp->t_flags & XFS_TRANS_NO_WRITECOUNT))
+ 		sb_end_intwrite(tp->t_mountp->m_super);
+ 	xfs_trans_free_dqinfo(tp);
+@@ -119,7 +120,8 @@ xfs_trans_dup(
+ 
+ 	ntp->t_rtx_res = tp->t_rtx_res - tp->t_rtx_res_used;
+ 	tp->t_rtx_res = tp->t_rtx_res_used;
+-	ntp->t_pflags = tp->t_pflags;
++
++	xfs_trans_switch_context(tp, ntp);
+ 
+ 	/* move deferred ops over to the new tp */
+ 	xfs_defer_move(ntp, tp);
+@@ -153,9 +155,6 @@ xfs_trans_reserve(
+ 	int			error = 0;
+ 	bool			rsvd = (tp->t_flags & XFS_TRANS_RESERVE) != 0;
+ 
+-	/* Mark this thread as being in a transaction */
+-	current_set_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
+-
+ 	/*
+ 	 * Attempt to reserve the needed disk blocks by decrementing
+ 	 * the number needed from the number available.  This will
+@@ -163,10 +162,8 @@ xfs_trans_reserve(
+ 	 */
+ 	if (blocks > 0) {
+ 		error = xfs_mod_fdblocks(mp, -((int64_t)blocks), rsvd);
+-		if (error != 0) {
+-			current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
++		if (error != 0)
+ 			return -ENOSPC;
+-		}
+ 		tp->t_blk_res += blocks;
+ 	}
+ 
+@@ -240,9 +237,6 @@ undo_blocks:
+ 		xfs_mod_fdblocks(mp, (int64_t)blocks, rsvd);
+ 		tp->t_blk_res = 0;
+ 	}
+-
+-	current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
+-
+ 	return error;
+ }
+ 
+@@ -266,6 +260,7 @@ xfs_trans_alloc(
+ 	tp = kmem_cache_zalloc(xfs_trans_zone, GFP_KERNEL | __GFP_NOFAIL);
+ 	if (!(flags & XFS_TRANS_NO_WRITECOUNT))
+ 		sb_start_intwrite(mp->m_super);
++	xfs_trans_set_context(tp);
+ 
+ 	/*
+ 	 * Zero-reservation ("empty") transactions can't modify anything, so
+@@ -620,6 +615,9 @@ xfs_trans_unreserve_and_mod_sb(
+ 
+ 	/* apply remaining deltas */
+ 	spin_lock(&mp->m_sb_lock);
++	mp->m_sb.sb_fdblocks += tp->t_fdblocks_delta + tp->t_res_fdblocks_delta;
++	mp->m_sb.sb_icount += idelta;
++	mp->m_sb.sb_ifree += ifreedelta;
+ 	mp->m_sb.sb_frextents += rtxdelta;
+ 	mp->m_sb.sb_dblocks += tp->t_dblocks_delta;
+ 	mp->m_sb.sb_agcount += tp->t_agcount_delta;
+@@ -878,7 +876,6 @@ __xfs_trans_commit(
+ 
+ 	xfs_log_commit_cil(mp, tp, &commit_lsn, regrant);
+ 
+-	current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
+ 	xfs_trans_free(tp);
+ 
+ 	/*
+@@ -910,7 +907,6 @@ out_unreserve:
+ 			xfs_log_ticket_ungrant(mp->m_log, tp->t_ticket);
+ 		tp->t_ticket = NULL;
+ 	}
+-	current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
+ 	xfs_trans_free_items(tp, !!error);
+ 	xfs_trans_free(tp);
+ 
+@@ -970,9 +966,6 @@ xfs_trans_cancel(
+ 		tp->t_ticket = NULL;
+ 	}
+ 
+-	/* mark this thread as no longer being in a transaction */
+-	current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
+-
+ 	xfs_trans_free_items(tp, dirty);
+ 	xfs_trans_free(tp);
+ }
+diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h
+index 084658946cc89..075eeade4f7d5 100644
+--- a/fs/xfs/xfs_trans.h
++++ b/fs/xfs/xfs_trans.h
+@@ -268,4 +268,34 @@ xfs_trans_item_relog(
+ 	return lip->li_ops->iop_relog(lip, tp);
+ }
+ 
++static inline void
++xfs_trans_set_context(
++	struct xfs_trans	*tp)
++{
++	ASSERT(current->journal_info == NULL);
++	tp->t_pflags = memalloc_nofs_save();
++	current->journal_info = tp;
++}
++
++static inline void
++xfs_trans_clear_context(
++	struct xfs_trans	*tp)
++{
++	if (current->journal_info == tp) {
++		memalloc_nofs_restore(tp->t_pflags);
++		current->journal_info = NULL;
++	}
++}
++
++static inline void
++xfs_trans_switch_context(
++	struct xfs_trans	*old_tp,
++	struct xfs_trans	*new_tp)
++{
++	ASSERT(current->journal_info == old_tp);
++	new_tp->t_pflags = old_tp->t_pflags;
++	old_tp->t_pflags = 0;
++	current->journal_info = new_tp;
++}
++
+ #endif	/* __XFS_TRANS_H__ */
+diff --git a/include/linux/dim.h b/include/linux/dim.h
+index b698266d00356..6c5733981563e 100644
+--- a/include/linux/dim.h
++++ b/include/linux/dim.h
+@@ -21,7 +21,7 @@
+  * We consider 10% difference as significant.
+  */
+ #define IS_SIGNIFICANT_DIFF(val, ref) \
+-	(((100UL * abs((val) - (ref))) / (ref)) > 10)
++	((ref) && (((100UL * abs((val) - (ref))) / (ref)) > 10))
+ 
+ /*
+  * Calculate the gap between two values.
+diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c
+index e25be2d01a7ac..4b74c67f13c9d 100644
+--- a/net/ipv4/ip_tunnel_core.c
++++ b/net/ipv4/ip_tunnel_core.c
+@@ -410,7 +410,7 @@ int skb_tunnel_check_pmtu(struct sk_buff *skb, struct dst_entry *encap_dst,
+ 	u32 mtu = dst_mtu(encap_dst) - headroom;
+ 
+ 	if ((skb_is_gso(skb) && skb_gso_validate_network_len(skb, mtu)) ||
+-	    (!skb_is_gso(skb) && (skb->len - skb_mac_header_len(skb)) <= mtu))
++	    (!skb_is_gso(skb) && (skb->len - skb_network_offset(skb)) <= mtu))
+ 		return 0;
+ 
+ 	skb_dst_update_pmtu_no_confirm(skb, mtu);
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 017cd666387f3..32c60122db9c8 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1980,7 +1980,8 @@ process:
+ 		struct sock *nsk;
+ 
+ 		sk = req->rsk_listener;
+-		if (unlikely(tcp_v4_inbound_md5_hash(sk, skb, dif, sdif))) {
++		if (unlikely(!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb) ||
++			     tcp_v4_inbound_md5_hash(sk, skb, dif, sdif))) {
+ 			sk_drops_add(sk, skb);
+ 			reqsk_put(req);
+ 			goto discard_it;
+@@ -2019,6 +2020,7 @@ process:
+ 			}
+ 			goto discard_and_relse;
+ 		}
++		nf_reset_ct(skb);
+ 		if (nsk == sk) {
+ 			reqsk_put(req);
+ 			tcp_v4_restore_cb(skb);
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 0562fb321959e..05317e6f48f8a 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -1102,10 +1102,6 @@ ipv6_add_addr(struct inet6_dev *idev, struct ifa6_config *cfg,
+ 		goto out;
+ 	}
+ 
+-	if (net->ipv6.devconf_all->disable_policy ||
+-	    idev->cnf.disable_policy)
+-		f6i->dst_nopolicy = true;
+-
+ 	neigh_parms_data_state_setall(idev->nd_parms);
+ 
+ 	ifa->addr = *cfg->pfx;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 6ace9f0ac22f3..e67505c6d8562 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -4479,8 +4479,15 @@ struct fib6_info *addrconf_f6i_alloc(struct net *net,
+ 	}
+ 
+ 	f6i = ip6_route_info_create(&cfg, gfp_flags, NULL);
+-	if (!IS_ERR(f6i))
++	if (!IS_ERR(f6i)) {
+ 		f6i->dst_nocount = true;
++
++		if (!anycast &&
++		    (net->ipv6.devconf_all->disable_policy ||
++		     idev->cnf.disable_policy))
++			f6i->dst_nopolicy = true;
++	}
++
+ 	return f6i;
+ }
+ 
+diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c
+index b9179708e3c1a..552bce1fdfb94 100644
+--- a/net/ipv6/seg6_hmac.c
++++ b/net/ipv6/seg6_hmac.c
+@@ -409,7 +409,6 @@ int __net_init seg6_hmac_net_init(struct net *net)
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL(seg6_hmac_net_init);
+ 
+ void seg6_hmac_exit(void)
+ {
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index bab0e99f6e356..3c92e8cacbbab 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -321,9 +321,7 @@ static int ipip6_tunnel_get_prl(struct net_device *dev, struct ifreq *ifr)
+ 		kcalloc(cmax, sizeof(*kp), GFP_KERNEL | __GFP_NOWARN) :
+ 		NULL;
+ 
+-	rcu_read_lock();
+-
+-	ca = t->prl_count < cmax ? t->prl_count : cmax;
++	ca = min(t->prl_count, cmax);
+ 
+ 	if (!kp) {
+ 		/* We don't try hard to allocate much memory for
+@@ -338,7 +336,7 @@ static int ipip6_tunnel_get_prl(struct net_device *dev, struct ifreq *ifr)
+ 		}
+ 	}
+ 
+-	c = 0;
++	rcu_read_lock();
+ 	for_each_prl_rcu(t->prl) {
+ 		if (c >= cmax)
+ 			break;
+@@ -350,7 +348,7 @@ static int ipip6_tunnel_get_prl(struct net_device *dev, struct ifreq *ifr)
+ 		if (kprl.addr != htonl(INADDR_ANY))
+ 			break;
+ 	}
+-out:
++
+ 	rcu_read_unlock();
+ 
+ 	len = sizeof(*kp) * c;
+@@ -359,7 +357,7 @@ out:
+ 		ret = -EFAULT;
+ 
+ 	kfree(kp);
+-
++out:
+ 	return ret;
+ }
+ 
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index 858c8d4d659a8..a5cfb321ae23a 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -142,6 +142,7 @@ static bool nft_rhash_update(struct nft_set *set, const u32 *key,
+ 	/* Another cpu may race to insert the element with the same key */
+ 	if (prev) {
+ 		nft_set_elem_destroy(set, he, true);
++		atomic_dec(&set->nelems);
+ 		he = prev;
+ 	}
+ 
+@@ -151,6 +152,7 @@ out:
+ 
+ err2:
+ 	nft_set_elem_destroy(set, he, true);
++	atomic_dec(&set->nelems);
+ err1:
+ 	return false;
+ }
+diff --git a/net/rose/rose_timer.c b/net/rose/rose_timer.c
+index b3138fc2e552e..f06ddbed3fed6 100644
+--- a/net/rose/rose_timer.c
++++ b/net/rose/rose_timer.c
+@@ -31,89 +31,89 @@ static void rose_idletimer_expiry(struct timer_list *);
+ 
+ void rose_start_heartbeat(struct sock *sk)
+ {
+-	del_timer(&sk->sk_timer);
++	sk_stop_timer(sk, &sk->sk_timer);
+ 
+ 	sk->sk_timer.function = rose_heartbeat_expiry;
+ 	sk->sk_timer.expires  = jiffies + 5 * HZ;
+ 
+-	add_timer(&sk->sk_timer);
++	sk_reset_timer(sk, &sk->sk_timer, sk->sk_timer.expires);
+ }
+ 
+ void rose_start_t1timer(struct sock *sk)
+ {
+ 	struct rose_sock *rose = rose_sk(sk);
+ 
+-	del_timer(&rose->timer);
++	sk_stop_timer(sk, &rose->timer);
+ 
+ 	rose->timer.function = rose_timer_expiry;
+ 	rose->timer.expires  = jiffies + rose->t1;
+ 
+-	add_timer(&rose->timer);
++	sk_reset_timer(sk, &rose->timer, rose->timer.expires);
+ }
+ 
+ void rose_start_t2timer(struct sock *sk)
+ {
+ 	struct rose_sock *rose = rose_sk(sk);
+ 
+-	del_timer(&rose->timer);
++	sk_stop_timer(sk, &rose->timer);
+ 
+ 	rose->timer.function = rose_timer_expiry;
+ 	rose->timer.expires  = jiffies + rose->t2;
+ 
+-	add_timer(&rose->timer);
++	sk_reset_timer(sk, &rose->timer, rose->timer.expires);
+ }
+ 
+ void rose_start_t3timer(struct sock *sk)
+ {
+ 	struct rose_sock *rose = rose_sk(sk);
+ 
+-	del_timer(&rose->timer);
++	sk_stop_timer(sk, &rose->timer);
+ 
+ 	rose->timer.function = rose_timer_expiry;
+ 	rose->timer.expires  = jiffies + rose->t3;
+ 
+-	add_timer(&rose->timer);
++	sk_reset_timer(sk, &rose->timer, rose->timer.expires);
+ }
+ 
+ void rose_start_hbtimer(struct sock *sk)
+ {
+ 	struct rose_sock *rose = rose_sk(sk);
+ 
+-	del_timer(&rose->timer);
++	sk_stop_timer(sk, &rose->timer);
+ 
+ 	rose->timer.function = rose_timer_expiry;
+ 	rose->timer.expires  = jiffies + rose->hb;
+ 
+-	add_timer(&rose->timer);
++	sk_reset_timer(sk, &rose->timer, rose->timer.expires);
+ }
+ 
+ void rose_start_idletimer(struct sock *sk)
+ {
+ 	struct rose_sock *rose = rose_sk(sk);
+ 
+-	del_timer(&rose->idletimer);
++	sk_stop_timer(sk, &rose->idletimer);
+ 
+ 	if (rose->idle > 0) {
+ 		rose->idletimer.function = rose_idletimer_expiry;
+ 		rose->idletimer.expires  = jiffies + rose->idle;
+ 
+-		add_timer(&rose->idletimer);
++		sk_reset_timer(sk, &rose->idletimer, rose->idletimer.expires);
+ 	}
+ }
+ 
+ void rose_stop_heartbeat(struct sock *sk)
+ {
+-	del_timer(&sk->sk_timer);
++	sk_stop_timer(sk, &sk->sk_timer);
+ }
+ 
+ void rose_stop_timer(struct sock *sk)
+ {
+-	del_timer(&rose_sk(sk)->timer);
++	sk_stop_timer(sk, &rose_sk(sk)->timer);
+ }
+ 
+ void rose_stop_idletimer(struct sock *sk)
+ {
+-	del_timer(&rose_sk(sk)->idletimer);
++	sk_stop_timer(sk, &rose_sk(sk)->idletimer);
+ }
+ 
+ static void rose_heartbeat_expiry(struct timer_list *t)
+@@ -130,6 +130,7 @@ static void rose_heartbeat_expiry(struct timer_list *t)
+ 		    (sk->sk_state == TCP_LISTEN && sock_flag(sk, SOCK_DEAD))) {
+ 			bh_unlock_sock(sk);
+ 			rose_destroy_socket(sk);
++			sock_put(sk);
+ 			return;
+ 		}
+ 		break;
+@@ -152,6 +153,7 @@ static void rose_heartbeat_expiry(struct timer_list *t)
+ 
+ 	rose_start_heartbeat(sk);
+ 	bh_unlock_sock(sk);
++	sock_put(sk);
+ }
+ 
+ static void rose_timer_expiry(struct timer_list *t)
+@@ -181,6 +183,7 @@ static void rose_timer_expiry(struct timer_list *t)
+ 		break;
+ 	}
+ 	bh_unlock_sock(sk);
++	sock_put(sk);
+ }
+ 
+ static void rose_idletimer_expiry(struct timer_list *t)
+@@ -205,4 +208,5 @@ static void rose_idletimer_expiry(struct timer_list *t)
+ 		sock_set_flag(sk, SOCK_DEAD);
+ 	}
+ 	bh_unlock_sock(sk);
++	sock_put(sk);
+ }
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index 7b29aa1a3ce9a..4ab9c2a6f6501 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -302,7 +302,8 @@ static int tcf_idr_release_unsafe(struct tc_action *p)
+ }
+ 
+ static int tcf_del_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb,
+-			  const struct tc_action_ops *ops)
++			  const struct tc_action_ops *ops,
++			  struct netlink_ext_ack *extack)
+ {
+ 	struct nlattr *nest;
+ 	int n_i = 0;
+@@ -318,20 +319,25 @@ static int tcf_del_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb,
+ 	if (nla_put_string(skb, TCA_KIND, ops->kind))
+ 		goto nla_put_failure;
+ 
++	ret = 0;
+ 	mutex_lock(&idrinfo->lock);
+ 	idr_for_each_entry_ul(idr, p, tmp, id) {
+ 		if (IS_ERR(p))
+ 			continue;
+ 		ret = tcf_idr_release_unsafe(p);
+-		if (ret == ACT_P_DELETED) {
++		if (ret == ACT_P_DELETED)
+ 			module_put(ops->owner);
+-			n_i++;
+-		} else if (ret < 0) {
+-			mutex_unlock(&idrinfo->lock);
+-			goto nla_put_failure;
+-		}
++		else if (ret < 0)
++			break;
++		n_i++;
+ 	}
+ 	mutex_unlock(&idrinfo->lock);
++	if (ret < 0) {
++		if (n_i)
++			NL_SET_ERR_MSG(extack, "Unable to flush all TC actions");
++		else
++			goto nla_put_failure;
++	}
+ 
+ 	ret = nla_put_u32(skb, TCA_FCNT, n_i);
+ 	if (ret)
+@@ -352,7 +358,7 @@ int tcf_generic_walker(struct tc_action_net *tn, struct sk_buff *skb,
+ 	struct tcf_idrinfo *idrinfo = tn->idrinfo;
+ 
+ 	if (type == RTM_DELACTION) {
+-		return tcf_del_walker(idrinfo, skb, ops);
++		return tcf_del_walker(idrinfo, skb, ops, extack);
+ 	} else if (type == RTM_GETACTION) {
+ 		return tcf_dump_walker(idrinfo, skb, cb);
+ 	} else {
+diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
+index c8ed6d3d5762e..d84bb5037bb5b 100644
+--- a/net/sunrpc/xdr.c
++++ b/net/sunrpc/xdr.c
+@@ -752,7 +752,7 @@ static __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr,
+ 	 */
+ 	xdr->p = (void *)p + frag2bytes;
+ 	space_left = xdr->buf->buflen - xdr->buf->len;
+-	if (space_left - nbytes >= PAGE_SIZE)
++	if (space_left - frag1bytes >= PAGE_SIZE)
+ 		xdr->end = (void *)p + PAGE_SIZE;
+ 	else
+ 		xdr->end = (void *)p + space_left - frag1bytes;
+diff --git a/net/tipc/node.c b/net/tipc/node.c
+index e4452d55851f9..60059827563ae 100644
+--- a/net/tipc/node.c
++++ b/net/tipc/node.c
+@@ -456,8 +456,8 @@ struct tipc_node *tipc_node_create(struct net *net, u32 addr, u8 *peer_id,
+ 				   bool preliminary)
+ {
+ 	struct tipc_net *tn = net_generic(net, tipc_net_id);
++	struct tipc_link *l, *snd_l = tipc_bc_sndlink(net);
+ 	struct tipc_node *n, *temp_node;
+-	struct tipc_link *l;
+ 	unsigned long intv;
+ 	int bearer_id;
+ 	int i;
+@@ -472,6 +472,16 @@ struct tipc_node *tipc_node_create(struct net *net, u32 addr, u8 *peer_id,
+ 			goto exit;
+ 		/* A preliminary node becomes "real" now, refresh its data */
+ 		tipc_node_write_lock(n);
++		if (!tipc_link_bc_create(net, tipc_own_addr(net), addr, peer_id, U16_MAX,
++					 tipc_link_min_win(snd_l), tipc_link_max_win(snd_l),
++					 n->capabilities, &n->bc_entry.inputq1,
++					 &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) {
++			pr_warn("Broadcast rcv link refresh failed, no memory\n");
++			tipc_node_write_unlock_fast(n);
++			tipc_node_put(n);
++			n = NULL;
++			goto exit;
++		}
+ 		n->preliminary = false;
+ 		n->addr = addr;
+ 		hlist_del_rcu(&n->hash);
+@@ -551,7 +561,16 @@ update:
+ 	n->signature = INVALID_NODE_SIG;
+ 	n->active_links[0] = INVALID_BEARER_ID;
+ 	n->active_links[1] = INVALID_BEARER_ID;
+-	n->bc_entry.link = NULL;
++	if (!preliminary &&
++	    !tipc_link_bc_create(net, tipc_own_addr(net), addr, peer_id, U16_MAX,
++				 tipc_link_min_win(snd_l), tipc_link_max_win(snd_l),
++				 n->capabilities, &n->bc_entry.inputq1,
++				 &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) {
++		pr_warn("Broadcast rcv link creation failed, no memory\n");
++		kfree(n);
++		n = NULL;
++		goto exit;
++	}
+ 	tipc_node_get(n);
+ 	timer_setup(&n->timer, tipc_node_timeout, 0);
+ 	/* Start a slow timer anyway, crypto needs it */
+@@ -1128,7 +1147,7 @@ void tipc_node_check_dest(struct net *net, u32 addr,
+ 			  bool *respond, bool *dupl_addr)
+ {
+ 	struct tipc_node *n;
+-	struct tipc_link *l, *snd_l;
++	struct tipc_link *l;
+ 	struct tipc_link_entry *le;
+ 	bool addr_match = false;
+ 	bool sign_match = false;
+@@ -1148,22 +1167,6 @@ void tipc_node_check_dest(struct net *net, u32 addr,
+ 		return;
+ 
+ 	tipc_node_write_lock(n);
+-	if (unlikely(!n->bc_entry.link)) {
+-		snd_l = tipc_bc_sndlink(net);
+-		if (!tipc_link_bc_create(net, tipc_own_addr(net),
+-					 addr, peer_id, U16_MAX,
+-					 tipc_link_min_win(snd_l),
+-					 tipc_link_max_win(snd_l),
+-					 n->capabilities,
+-					 &n->bc_entry.inputq1,
+-					 &n->bc_entry.namedq, snd_l,
+-					 &n->bc_entry.link)) {
+-			pr_warn("Broadcast rcv link creation failed, no mem\n");
+-			tipc_node_write_unlock_fast(n);
+-			tipc_node_put(n);
+-			return;
+-		}
+-	}
+ 
+ 	le = &n->links[b->identity];
+ 
+diff --git a/tools/testing/selftests/net/udpgso_bench.sh b/tools/testing/selftests/net/udpgso_bench.sh
+index 80b5d352702e5..dc932fd653634 100755
+--- a/tools/testing/selftests/net/udpgso_bench.sh
++++ b/tools/testing/selftests/net/udpgso_bench.sh
+@@ -120,7 +120,7 @@ run_all() {
+ 	run_udp "${ipv4_args}"
+ 
+ 	echo "ipv6"
+-	run_tcp "${ipv4_args}"
++	run_tcp "${ipv6_args}"
+ 	run_udp "${ipv6_args}"
+ }
+ 
+diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile
+index 2af9d39a97168..215e1067f0376 100644
+--- a/tools/testing/selftests/rseq/Makefile
++++ b/tools/testing/selftests/rseq/Makefile
+@@ -6,7 +6,7 @@ endif
+ 
+ CFLAGS += -O2 -Wall -g -I./ -I../../../../usr/include/ -L$(OUTPUT) -Wl,-rpath=./ \
+ 	  $(CLANG_FLAGS)
+-LDLIBS += -lpthread
++LDLIBS += -lpthread -ldl
+ 
+ # Own dependencies because we only want to build against 1st prerequisite, but
+ # still track changes to header files and depend on shared object.
+diff --git a/tools/testing/selftests/rseq/basic_percpu_ops_test.c b/tools/testing/selftests/rseq/basic_percpu_ops_test.c
+index eb3f6db36d369..517756afc2a4e 100644
+--- a/tools/testing/selftests/rseq/basic_percpu_ops_test.c
++++ b/tools/testing/selftests/rseq/basic_percpu_ops_test.c
+@@ -9,10 +9,9 @@
+ #include <string.h>
+ #include <stddef.h>
+ 
++#include "../kselftest.h"
+ #include "rseq.h"
+ 
+-#define ARRAY_SIZE(arr)	(sizeof(arr) / sizeof((arr)[0]))
+-
+ struct percpu_lock_entry {
+ 	intptr_t v;
+ } __attribute__((aligned(128)));
+@@ -168,7 +167,7 @@ struct percpu_list_node *this_cpu_list_pop(struct percpu_list *list,
+ 	for (;;) {
+ 		struct percpu_list_node *head;
+ 		intptr_t *targetptr, expectnot, *load;
+-		off_t offset;
++		long offset;
+ 		int ret, cpu;
+ 
+ 		cpu = rseq_cpu_start();
+diff --git a/tools/testing/selftests/rseq/compiler.h b/tools/testing/selftests/rseq/compiler.h
+new file mode 100644
+index 0000000000000..876eb6a7f75be
+--- /dev/null
++++ b/tools/testing/selftests/rseq/compiler.h
+@@ -0,0 +1,30 @@
++/* SPDX-License-Identifier: LGPL-2.1-only OR MIT */
++/*
++ * rseq/compiler.h
++ *
++ * Work-around asm goto compiler bugs.
++ *
++ * (C) Copyright 2021 - Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
++ */
++
++#ifndef RSEQ_COMPILER_H
++#define RSEQ_COMPILER_H
++
++/*
++ * gcc prior to 4.8.2 miscompiles asm goto.
++ * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670
++ *
++ * gcc prior to 8.1.0 miscompiles asm goto at O1.
++ * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103908
++ *
++ * clang prior to version 13.0.1 miscompiles asm goto at O2.
++ * https://github.com/llvm/llvm-project/issues/52735
++ *
++ * Work around these issues by adding a volatile inline asm with
++ * memory clobber in the fallthrough after the asm goto and at each
++ * label target.  Emit this for all compilers in case other similar
++ * issues are found in the future.
++ */
++#define rseq_after_asm_goto()	asm volatile ("" : : : "memory")
++
++#endif  /* RSEQ_COMPILER_H_ */
+diff --git a/tools/testing/selftests/rseq/param_test.c b/tools/testing/selftests/rseq/param_test.c
+index 384589095864d..e29ecc7158e8b 100644
+--- a/tools/testing/selftests/rseq/param_test.c
++++ b/tools/testing/selftests/rseq/param_test.c
+@@ -161,7 +161,7 @@ unsigned int yield_mod_cnt, nr_abort;
+ 	"	cbnz	" INJECT_ASM_REG ", 222b\n"			\
+ 	"333:\n"
+ 
+-#elif __PPC__
++#elif defined(__PPC__)
+ 
+ #define RSEQ_INJECT_INPUT \
+ 	, [loop_cnt_1]"m"(loop_cnt[1]) \
+@@ -368,9 +368,7 @@ void *test_percpu_spinlock_thread(void *arg)
+ 		abort();
+ 	reps = thread_data->reps;
+ 	for (i = 0; i < reps; i++) {
+-		int cpu = rseq_cpu_start();
+-
+-		cpu = rseq_this_cpu_lock(&data->lock);
++		int cpu = rseq_this_cpu_lock(&data->lock);
+ 		data->c[cpu].count++;
+ 		rseq_percpu_unlock(&data->lock, cpu);
+ #ifndef BENCHMARK
+@@ -551,7 +549,7 @@ struct percpu_list_node *this_cpu_list_pop(struct percpu_list *list,
+ 	for (;;) {
+ 		struct percpu_list_node *head;
+ 		intptr_t *targetptr, expectnot, *load;
+-		off_t offset;
++		long offset;
+ 		int ret;
+ 
+ 		cpu = rseq_cpu_start();
+diff --git a/tools/testing/selftests/rseq/rseq-abi.h b/tools/testing/selftests/rseq/rseq-abi.h
+new file mode 100644
+index 0000000000000..a8c44d9af71fb
+--- /dev/null
++++ b/tools/testing/selftests/rseq/rseq-abi.h
+@@ -0,0 +1,151 @@
++/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
++#ifndef _RSEQ_ABI_H
++#define _RSEQ_ABI_H
++
++/*
++ * rseq-abi.h
++ *
++ * Restartable sequences system call API
++ *
++ * Copyright (c) 2015-2022 Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
++ */
++
++#include <linux/types.h>
++#include <asm/byteorder.h>
++
++enum rseq_abi_cpu_id_state {
++	RSEQ_ABI_CPU_ID_UNINITIALIZED			= -1,
++	RSEQ_ABI_CPU_ID_REGISTRATION_FAILED		= -2,
++};
++
++enum rseq_abi_flags {
++	RSEQ_ABI_FLAG_UNREGISTER = (1 << 0),
++};
++
++enum rseq_abi_cs_flags_bit {
++	RSEQ_ABI_CS_FLAG_NO_RESTART_ON_PREEMPT_BIT	= 0,
++	RSEQ_ABI_CS_FLAG_NO_RESTART_ON_SIGNAL_BIT	= 1,
++	RSEQ_ABI_CS_FLAG_NO_RESTART_ON_MIGRATE_BIT	= 2,
++};
++
++enum rseq_abi_cs_flags {
++	RSEQ_ABI_CS_FLAG_NO_RESTART_ON_PREEMPT	=
++		(1U << RSEQ_ABI_CS_FLAG_NO_RESTART_ON_PREEMPT_BIT),
++	RSEQ_ABI_CS_FLAG_NO_RESTART_ON_SIGNAL	=
++		(1U << RSEQ_ABI_CS_FLAG_NO_RESTART_ON_SIGNAL_BIT),
++	RSEQ_ABI_CS_FLAG_NO_RESTART_ON_MIGRATE	=
++		(1U << RSEQ_ABI_CS_FLAG_NO_RESTART_ON_MIGRATE_BIT),
++};
++
++/*
++ * struct rseq_abi_cs is aligned on 4 * 8 bytes to ensure it is always
++ * contained within a single cache-line. It is usually declared as
++ * link-time constant data.
++ */
++struct rseq_abi_cs {
++	/* Version of this structure. */
++	__u32 version;
++	/* enum rseq_abi_cs_flags */
++	__u32 flags;
++	__u64 start_ip;
++	/* Offset from start_ip. */
++	__u64 post_commit_offset;
++	__u64 abort_ip;
++} __attribute__((aligned(4 * sizeof(__u64))));
++
++/*
++ * struct rseq_abi is aligned on 4 * 8 bytes to ensure it is always
++ * contained within a single cache-line.
++ *
++ * A single struct rseq_abi per thread is allowed.
++ */
++struct rseq_abi {
++	/*
++	 * Restartable sequences cpu_id_start field. Updated by the
++	 * kernel. Read by user-space with single-copy atomicity
++	 * semantics. This field should only be read by the thread which
++	 * registered this data structure. Aligned on 32-bit. Always
++	 * contains a value in the range of possible CPUs, although the
++	 * value may not be the actual current CPU (e.g. if rseq is not
++	 * initialized). This CPU number value should always be compared
++	 * against the value of the cpu_id field before performing a rseq
++	 * commit or returning a value read from a data structure indexed
++	 * using the cpu_id_start value.
++	 */
++	__u32 cpu_id_start;
++	/*
++	 * Restartable sequences cpu_id field. Updated by the kernel.
++	 * Read by user-space with single-copy atomicity semantics. This
++	 * field should only be read by the thread which registered this
++	 * data structure. Aligned on 32-bit. Values
++	 * RSEQ_CPU_ID_UNINITIALIZED and RSEQ_CPU_ID_REGISTRATION_FAILED
++	 * have a special semantic: the former means "rseq uninitialized",
++	 * and latter means "rseq initialization failed". This value is
++	 * meant to be read within rseq critical sections and compared
++	 * with the cpu_id_start value previously read, before performing
++	 * the commit instruction, or read and compared with the
++	 * cpu_id_start value before returning a value loaded from a data
++	 * structure indexed using the cpu_id_start value.
++	 */
++	__u32 cpu_id;
++	/*
++	 * Restartable sequences rseq_cs field.
++	 *
++	 * Contains NULL when no critical section is active for the current
++	 * thread, or holds a pointer to the currently active struct rseq_cs.
++	 *
++	 * Updated by user-space, which sets the address of the currently
++	 * active rseq_cs at the beginning of assembly instruction sequence
++	 * block, and set to NULL by the kernel when it restarts an assembly
++	 * instruction sequence block, as well as when the kernel detects that
++	 * it is preempting or delivering a signal outside of the range
++	 * targeted by the rseq_cs. Also needs to be set to NULL by user-space
++	 * before reclaiming memory that contains the targeted struct rseq_cs.
++	 *
++	 * Read and set by the kernel. Set by user-space with single-copy
++	 * atomicity semantics. This field should only be updated by the
++	 * thread which registered this data structure. Aligned on 64-bit.
++	 */
++	union {
++		__u64 ptr64;
++
++		/*
++		 * The "arch" field provides architecture accessor for
++		 * the ptr field based on architecture pointer size and
++		 * endianness.
++		 */
++		struct {
++#ifdef __LP64__
++			__u64 ptr;
++#elif defined(__BYTE_ORDER) ? (__BYTE_ORDER == __BIG_ENDIAN) : defined(__BIG_ENDIAN)
++			__u32 padding;		/* Initialized to zero. */
++			__u32 ptr;
++#else
++			__u32 ptr;
++			__u32 padding;		/* Initialized to zero. */
++#endif
++		} arch;
++	} rseq_cs;
++
++	/*
++	 * Restartable sequences flags field.
++	 *
++	 * This field should only be updated by the thread which
++	 * registered this data structure. Read by the kernel.
++	 * Mainly used for single-stepping through rseq critical sections
++	 * with debuggers.
++	 *
++	 * - RSEQ_ABI_CS_FLAG_NO_RESTART_ON_PREEMPT
++	 *     Inhibit instruction sequence block restart on preemption
++	 *     for this thread.
++	 * - RSEQ_ABI_CS_FLAG_NO_RESTART_ON_SIGNAL
++	 *     Inhibit instruction sequence block restart on signal
++	 *     delivery for this thread.
++	 * - RSEQ_ABI_CS_FLAG_NO_RESTART_ON_MIGRATE
++	 *     Inhibit instruction sequence block restart on migration for
++	 *     this thread.
++	 */
++	__u32 flags;
++} __attribute__((aligned(4 * sizeof(__u64))));
++
++#endif /* _RSEQ_ABI_H */
+diff --git a/tools/testing/selftests/rseq/rseq-arm.h b/tools/testing/selftests/rseq/rseq-arm.h
+index 5943c816c07ce..893a11eca9d51 100644
+--- a/tools/testing/selftests/rseq/rseq-arm.h
++++ b/tools/testing/selftests/rseq/rseq-arm.h
+@@ -147,14 +147,11 @@ do {									\
+ 		teardown						\
+ 		"b %l[" __rseq_str(cmpfail_label) "]\n\t"
+ 
+-#define rseq_workaround_gcc_asm_size_guess()	__asm__ __volatile__("")
+-
+ static inline __attribute__((always_inline))
+ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -185,8 +182,8 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		"5:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+ 		  [newv]		"r" (newv)
+@@ -198,30 +195,31 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		  , error1, error2
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+ 
+ static inline __attribute__((always_inline))
+ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+-			       off_t voffp, intptr_t *load, int cpu)
++			       long voffp, intptr_t *load, int cpu)
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -255,8 +253,8 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		"5:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expectnot]		"r" (expectnot),
+@@ -270,19 +268,21 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		  , error1, error2
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -292,7 +292,6 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ #ifdef RSEQ_COMPARE_TWICE
+@@ -316,8 +315,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		"5:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [v]			"m" (*v),
+ 		  [count]		"Ir" (count)
+ 		  RSEQ_INJECT_INPUT
+@@ -328,14 +327,15 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		  , error1
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ #endif
+ }
+@@ -347,7 +347,6 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -381,8 +380,8 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		"5:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* try store input */
+ 		  [v2]			"m" (*v2),
+ 		  [newv2]		"r" (newv2),
+@@ -398,19 +397,21 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -422,7 +423,6 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -457,8 +457,8 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ 		"5:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* try store input */
+ 		  [v2]			"m" (*v2),
+ 		  [newv2]		"r" (newv2),
+@@ -474,19 +474,21 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -498,7 +500,6 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -537,8 +538,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		"5:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* cmp2 input */
+ 		  [v2]			"m" (*v2),
+ 		  [expect2]		"r" (expect2),
+@@ -554,21 +555,24 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2, error3
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("1st expected value comparison failed");
+ error3:
++	rseq_after_asm_goto();
+ 	rseq_bug("2nd expected value comparison failed");
+ #endif
+ }
+@@ -582,7 +586,6 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -657,8 +660,8 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		"8:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+@@ -678,21 +681,21 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -706,7 +709,6 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -782,8 +784,8 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 		"8:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+@@ -803,21 +805,21 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
+-	rseq_workaround_gcc_asm_size_guess();
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+diff --git a/tools/testing/selftests/rseq/rseq-arm64.h b/tools/testing/selftests/rseq/rseq-arm64.h
+index 200dae9e4208c..cbe190a4d0056 100644
+--- a/tools/testing/selftests/rseq/rseq-arm64.h
++++ b/tools/testing/selftests/rseq/rseq-arm64.h
+@@ -230,8 +230,8 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"Qo" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"Qo" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [v]			"Qo" (*v),
+ 		  [expect]		"r" (expect),
+ 		  [newv]		"r" (newv)
+@@ -242,24 +242,28 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		  , error1, error2
+ #endif
+ 	);
+-
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+ 
+ static inline __attribute__((always_inline))
+ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+-			       off_t voffp, intptr_t *load, int cpu)
++			       long voffp, intptr_t *load, int cpu)
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+@@ -287,8 +291,8 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"Qo" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"Qo" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [v]			"Qo" (*v),
+ 		  [expectnot]		"r" (expectnot),
+ 		  [load]		"Qo" (*load),
+@@ -300,16 +304,21 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -337,8 +346,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"Qo" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"Qo" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [v]			"Qo" (*v),
+ 		  [count]		"r" (count)
+ 		  RSEQ_INJECT_INPUT
+@@ -348,12 +357,15 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		  , error1
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ #endif
+ }
+@@ -388,8 +400,8 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"Qo" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"Qo" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [expect]		"r" (expect),
+ 		  [v]			"Qo" (*v),
+ 		  [newv]		"r" (newv),
+@@ -402,17 +414,21 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
+-
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -447,8 +463,8 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"Qo" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"Qo" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [expect]		"r" (expect),
+ 		  [v]			"Qo" (*v),
+ 		  [newv]		"r" (newv),
+@@ -461,17 +477,21 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
+-
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -508,8 +528,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"Qo" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"Qo" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [v]			"Qo" (*v),
+ 		  [expect]		"r" (expect),
+ 		  [v2]			"Qo" (*v2),
+@@ -522,19 +542,24 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2, error3
+ #endif
+ 	);
+-
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ error3:
++	rseq_after_asm_goto();
+ 	rseq_bug("2nd expected value comparison failed");
+ #endif
+ }
+@@ -569,8 +594,8 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"Qo" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"Qo" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [expect]		"r" (expect),
+ 		  [v]			"Qo" (*v),
+ 		  [newv]		"r" (newv),
+@@ -584,17 +609,21 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
+-
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -629,8 +658,8 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"Qo" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"Qo" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [expect]		"r" (expect),
+ 		  [v]			"Qo" (*v),
+ 		  [newv]		"r" (newv),
+@@ -644,17 +673,21 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
+-
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+diff --git a/tools/testing/selftests/rseq/rseq-generic-thread-pointer.h b/tools/testing/selftests/rseq/rseq-generic-thread-pointer.h
+new file mode 100644
+index 0000000000000..38c5846615714
+--- /dev/null
++++ b/tools/testing/selftests/rseq/rseq-generic-thread-pointer.h
+@@ -0,0 +1,25 @@
++/* SPDX-License-Identifier: LGPL-2.1-only OR MIT */
++/*
++ * rseq-generic-thread-pointer.h
++ *
++ * (C) Copyright 2021 - Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
++ */
++
++#ifndef _RSEQ_GENERIC_THREAD_POINTER
++#define _RSEQ_GENERIC_THREAD_POINTER
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++/* Use gcc builtin thread pointer. */
++static inline void *rseq_thread_pointer(void)
++{
++	return __builtin_thread_pointer();
++}
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif
+diff --git a/tools/testing/selftests/rseq/rseq-mips.h b/tools/testing/selftests/rseq/rseq-mips.h
+index e989e7c14b097..878739fae2fde 100644
+--- a/tools/testing/selftests/rseq/rseq-mips.h
++++ b/tools/testing/selftests/rseq/rseq-mips.h
+@@ -154,14 +154,11 @@ do {									\
+ 		teardown \
+ 		"b %l[" __rseq_str(cmpfail_label) "]\n\t"
+ 
+-#define rseq_workaround_gcc_asm_size_guess()	__asm__ __volatile__("")
+-
+ static inline __attribute__((always_inline))
+ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -190,8 +187,8 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		"5:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+ 		  [newv]		"r" (newv)
+@@ -203,14 +200,11 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		  , error1, error2
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
+@@ -222,11 +216,10 @@ error2:
+ 
+ static inline __attribute__((always_inline))
+ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+-			       off_t voffp, intptr_t *load, int cpu)
++			       long voffp, intptr_t *load, int cpu)
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -258,8 +251,8 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		"5:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expectnot]		"r" (expectnot),
+@@ -273,14 +266,11 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		  , error1, error2
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
+@@ -295,7 +285,6 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ #ifdef RSEQ_COMPARE_TWICE
+@@ -319,8 +308,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		"5:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [v]			"m" (*v),
+ 		  [count]		"Ir" (count)
+ 		  RSEQ_INJECT_INPUT
+@@ -331,10 +320,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		  , error1
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ #ifdef RSEQ_COMPARE_TWICE
+@@ -350,7 +337,6 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -382,8 +368,8 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		"5:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* try store input */
+ 		  [v2]			"m" (*v2),
+ 		  [newv2]		"r" (newv2),
+@@ -399,14 +385,11 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
+@@ -423,7 +406,6 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -456,8 +438,8 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ 		"5:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* try store input */
+ 		  [v2]			"m" (*v2),
+ 		  [newv2]		"r" (newv2),
+@@ -473,14 +455,11 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
+@@ -497,7 +476,6 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -532,8 +510,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		"5:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* cmp2 input */
+ 		  [v2]			"m" (*v2),
+ 		  [expect2]		"r" (expect2),
+@@ -549,14 +527,11 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2, error3
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
+@@ -577,7 +552,6 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -649,8 +623,8 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		"8:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+@@ -670,21 +644,16 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -698,7 +667,6 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 
+ 	RSEQ_INJECT_C(9)
+ 
+-	rseq_workaround_gcc_asm_size_guess();
+ 	__asm__ __volatile__ goto (
+ 		RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+@@ -771,8 +739,8 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 		"8:\n\t"
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+@@ -792,21 +760,16 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 0;
+ abort:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
+-	rseq_workaround_gcc_asm_size_guess();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+diff --git a/tools/testing/selftests/rseq/rseq-ppc-thread-pointer.h b/tools/testing/selftests/rseq/rseq-ppc-thread-pointer.h
+new file mode 100644
+index 0000000000000..263eee84fb760
+--- /dev/null
++++ b/tools/testing/selftests/rseq/rseq-ppc-thread-pointer.h
+@@ -0,0 +1,30 @@
++/* SPDX-License-Identifier: LGPL-2.1-only OR MIT */
++/*
++ * rseq-ppc-thread-pointer.h
++ *
++ * (C) Copyright 2021 - Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
++ */
++
++#ifndef _RSEQ_PPC_THREAD_POINTER
++#define _RSEQ_PPC_THREAD_POINTER
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++static inline void *rseq_thread_pointer(void)
++{
++#ifdef __powerpc64__
++	register void *__result asm ("r13");
++#else
++	register void *__result asm ("r2");
++#endif
++	asm ("" : "=r" (__result));
++	return __result;
++}
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif
+diff --git a/tools/testing/selftests/rseq/rseq-ppc.h b/tools/testing/selftests/rseq/rseq-ppc.h
+index 76be90196fe4f..bab8e0b9fb115 100644
+--- a/tools/testing/selftests/rseq/rseq-ppc.h
++++ b/tools/testing/selftests/rseq/rseq-ppc.h
+@@ -47,10 +47,13 @@ do {									\
+ 
+ #ifdef __PPC64__
+ 
+-#define STORE_WORD	"std "
+-#define LOAD_WORD	"ld "
+-#define LOADX_WORD	"ldx "
+-#define CMP_WORD	"cmpd "
++#define RSEQ_STORE_LONG(arg)	"std%U[" __rseq_str(arg) "]%X[" __rseq_str(arg) "] "	/* To memory ("m" constraint) */
++#define RSEQ_STORE_INT(arg)	"stw%U[" __rseq_str(arg) "]%X[" __rseq_str(arg) "] "	/* To memory ("m" constraint) */
++#define RSEQ_LOAD_LONG(arg)	"ld%U[" __rseq_str(arg) "]%X[" __rseq_str(arg) "] "	/* From memory ("m" constraint) */
++#define RSEQ_LOAD_INT(arg)	"lwz%U[" __rseq_str(arg) "]%X[" __rseq_str(arg) "] "	/* From memory ("m" constraint) */
++#define RSEQ_LOADX_LONG		"ldx "							/* From base register ("b" constraint) */
++#define RSEQ_CMP_LONG		"cmpd "
++#define RSEQ_CMP_LONG_INT	"cmpdi "
+ 
+ #define __RSEQ_ASM_DEFINE_TABLE(label, version, flags,				\
+ 			start_ip, post_commit_offset, abort_ip)			\
+@@ -89,10 +92,13 @@ do {									\
+ 
+ #else /* #ifdef __PPC64__ */
+ 
+-#define STORE_WORD	"stw "
+-#define LOAD_WORD	"lwz "
+-#define LOADX_WORD	"lwzx "
+-#define CMP_WORD	"cmpw "
++#define RSEQ_STORE_LONG(arg)	"stw%U[" __rseq_str(arg) "]%X[" __rseq_str(arg) "] "	/* To memory ("m" constraint) */
++#define RSEQ_STORE_INT(arg)	RSEQ_STORE_LONG(arg)					/* To memory ("m" constraint) */
++#define RSEQ_LOAD_LONG(arg)	"lwz%U[" __rseq_str(arg) "]%X[" __rseq_str(arg) "] "	/* From memory ("m" constraint) */
++#define RSEQ_LOAD_INT(arg)	RSEQ_LOAD_LONG(arg)					/* From memory ("m" constraint) */
++#define RSEQ_LOADX_LONG		"lwzx "							/* From base register ("b" constraint) */
++#define RSEQ_CMP_LONG		"cmpw "
++#define RSEQ_CMP_LONG_INT	"cmpwi "
+ 
+ #define __RSEQ_ASM_DEFINE_TABLE(label, version, flags,				\
+ 			start_ip, post_commit_offset, abort_ip)			\
+@@ -125,7 +131,7 @@ do {									\
+ 		RSEQ_INJECT_ASM(1)						\
+ 		"lis %%r17, (" __rseq_str(cs_label) ")@ha\n\t"			\
+ 		"addi %%r17, %%r17, (" __rseq_str(cs_label) ")@l\n\t"		\
+-		"stw %%r17, %[" __rseq_str(rseq_cs) "]\n\t"			\
++		RSEQ_STORE_INT(rseq_cs) "%%r17, %[" __rseq_str(rseq_cs) "]\n\t"	\
+ 		__rseq_str(label) ":\n\t"
+ 
+ #endif /* #ifdef __PPC64__ */
+@@ -136,7 +142,7 @@ do {									\
+ 
+ #define RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, label)			\
+ 		RSEQ_INJECT_ASM(2)						\
+-		"lwz %%r17, %[" __rseq_str(current_cpu_id) "]\n\t"		\
++		RSEQ_LOAD_INT(current_cpu_id) "%%r17, %[" __rseq_str(current_cpu_id) "]\n\t" \
+ 		"cmpw cr7, %[" __rseq_str(cpu_id) "], %%r17\n\t"		\
+ 		"bne- cr7, " __rseq_str(label) "\n\t"
+ 
+@@ -153,25 +159,25 @@ do {									\
+  * 	RSEQ_ASM_OP_* (else): doesn't have hard-code registers(unless cr7)
+  */
+ #define RSEQ_ASM_OP_CMPEQ(var, expect, label)					\
+-		LOAD_WORD "%%r17, %[" __rseq_str(var) "]\n\t"			\
+-		CMP_WORD "cr7, %%r17, %[" __rseq_str(expect) "]\n\t"		\
++		RSEQ_LOAD_LONG(var) "%%r17, %[" __rseq_str(var) "]\n\t"		\
++		RSEQ_CMP_LONG "cr7, %%r17, %[" __rseq_str(expect) "]\n\t"		\
+ 		"bne- cr7, " __rseq_str(label) "\n\t"
+ 
+ #define RSEQ_ASM_OP_CMPNE(var, expectnot, label)				\
+-		LOAD_WORD "%%r17, %[" __rseq_str(var) "]\n\t"			\
+-		CMP_WORD "cr7, %%r17, %[" __rseq_str(expectnot) "]\n\t"		\
++		RSEQ_LOAD_LONG(var) "%%r17, %[" __rseq_str(var) "]\n\t"		\
++		RSEQ_CMP_LONG "cr7, %%r17, %[" __rseq_str(expectnot) "]\n\t"		\
+ 		"beq- cr7, " __rseq_str(label) "\n\t"
+ 
+ #define RSEQ_ASM_OP_STORE(value, var)						\
+-		STORE_WORD "%[" __rseq_str(value) "], %[" __rseq_str(var) "]\n\t"
++		RSEQ_STORE_LONG(var) "%[" __rseq_str(value) "], %[" __rseq_str(var) "]\n\t"
+ 
+ /* Load @var to r17 */
+ #define RSEQ_ASM_OP_R_LOAD(var)							\
+-		LOAD_WORD "%%r17, %[" __rseq_str(var) "]\n\t"
++		RSEQ_LOAD_LONG(var) "%%r17, %[" __rseq_str(var) "]\n\t"
+ 
+ /* Store r17 to @var */
+ #define RSEQ_ASM_OP_R_STORE(var)						\
+-		STORE_WORD "%%r17, %[" __rseq_str(var) "]\n\t"
++		RSEQ_STORE_LONG(var) "%%r17, %[" __rseq_str(var) "]\n\t"
+ 
+ /* Add @count to r17 */
+ #define RSEQ_ASM_OP_R_ADD(count)						\
+@@ -179,11 +185,11 @@ do {									\
+ 
+ /* Load (r17 + voffp) to r17 */
+ #define RSEQ_ASM_OP_R_LOADX(voffp)						\
+-		LOADX_WORD "%%r17, %[" __rseq_str(voffp) "], %%r17\n\t"
++		RSEQ_LOADX_LONG "%%r17, %[" __rseq_str(voffp) "], %%r17\n\t"
+ 
+ /* TODO: implement a faster memcpy. */
+ #define RSEQ_ASM_OP_R_MEMCPY() \
+-		"cmpdi %%r19, 0\n\t" \
++		RSEQ_CMP_LONG_INT "%%r19, 0\n\t" \
+ 		"beq 333f\n\t" \
+ 		"addi %%r20, %%r20, -1\n\t" \
+ 		"addi %%r21, %%r21, -1\n\t" \
+@@ -191,16 +197,16 @@ do {									\
+ 		"lbzu %%r18, 1(%%r20)\n\t" \
+ 		"stbu %%r18, 1(%%r21)\n\t" \
+ 		"addi %%r19, %%r19, -1\n\t" \
+-		"cmpdi %%r19, 0\n\t" \
++		RSEQ_CMP_LONG_INT "%%r19, 0\n\t" \
+ 		"bne 222b\n\t" \
+ 		"333:\n\t" \
+ 
+ #define RSEQ_ASM_OP_R_FINAL_STORE(var, post_commit_label)			\
+-		STORE_WORD "%%r17, %[" __rseq_str(var) "]\n\t"			\
++		RSEQ_STORE_LONG(var) "%%r17, %[" __rseq_str(var) "]\n\t"			\
+ 		__rseq_str(post_commit_label) ":\n\t"
+ 
+ #define RSEQ_ASM_OP_FINAL_STORE(value, var, post_commit_label)			\
+-		STORE_WORD "%[" __rseq_str(value) "], %[" __rseq_str(var) "]\n\t" \
++		RSEQ_STORE_LONG(var) "%[" __rseq_str(value) "], %[" __rseq_str(var) "]\n\t" \
+ 		__rseq_str(post_commit_label) ":\n\t"
+ 
+ static inline __attribute__((always_inline))
+@@ -235,8 +241,8 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+ 		  [newv]		"r" (newv)
+@@ -248,23 +254,28 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+ 
+ static inline __attribute__((always_inline))
+ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+-			       off_t voffp, intptr_t *load, int cpu)
++			       long voffp, intptr_t *load, int cpu)
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+@@ -301,8 +312,8 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expectnot]		"r" (expectnot),
+@@ -316,16 +327,21 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -359,8 +375,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [count]		"r" (count)
+@@ -372,12 +388,15 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		  , error1
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ #endif
+ }
+@@ -419,8 +438,8 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* try store input */
+ 		  [v2]			"m" (*v2),
+ 		  [newv2]		"r" (newv2),
+@@ -436,16 +455,21 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -489,8 +513,8 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* try store input */
+ 		  [v2]			"m" (*v2),
+ 		  [newv2]		"r" (newv2),
+@@ -506,16 +530,21 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -560,8 +589,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* cmp2 input */
+ 		  [v2]			"m" (*v2),
+ 		  [expect2]		"r" (expect2),
+@@ -577,18 +606,24 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2, error3
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("1st expected value comparison failed");
+ error3:
++	rseq_after_asm_goto();
+ 	rseq_bug("2nd expected value comparison failed");
+ #endif
+ }
+@@ -635,8 +670,8 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+@@ -653,16 +688,21 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -711,8 +751,8 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+@@ -729,23 +769,23 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+ 
+-#undef STORE_WORD
+-#undef LOAD_WORD
+-#undef LOADX_WORD
+-#undef CMP_WORD
+-
+ #endif /* !RSEQ_SKIP_FASTPATH */
+diff --git a/tools/testing/selftests/rseq/rseq-s390.h b/tools/testing/selftests/rseq/rseq-s390.h
+index 8ef94ad1cbb45..4e6dc5f0cb429 100644
+--- a/tools/testing/selftests/rseq/rseq-s390.h
++++ b/tools/testing/selftests/rseq/rseq-s390.h
+@@ -165,8 +165,8 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+ 		  [newv]		"r" (newv)
+@@ -178,16 +178,21 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -198,7 +203,7 @@ error2:
+  */
+ static inline __attribute__((always_inline))
+ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+-			       off_t voffp, intptr_t *load, int cpu)
++			       long voffp, intptr_t *load, int cpu)
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+@@ -233,8 +238,8 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expectnot]		"r" (expectnot),
+@@ -248,16 +253,21 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -288,8 +298,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [count]		"r" (count)
+@@ -301,12 +311,15 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		  , error1
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ #endif
+ }
+@@ -347,8 +360,8 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* try store input */
+ 		  [v2]			"m" (*v2),
+ 		  [newv2]		"r" (newv2),
+@@ -364,16 +377,21 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -426,8 +444,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* cmp2 input */
+ 		  [v2]			"m" (*v2),
+ 		  [expect2]		"r" (expect2),
+@@ -443,18 +461,24 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2, error3
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("1st expected value comparison failed");
+ error3:
++	rseq_after_asm_goto();
+ 	rseq_bug("2nd expected value comparison failed");
+ #endif
+ }
+@@ -534,8 +558,8 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ #endif
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [current_cpu_id]	"m" (__rseq_abi.cpu_id),
+-		  [rseq_cs]		"m" (__rseq_abi.rseq_cs),
++		  [current_cpu_id]	"m" (rseq_get_abi()->cpu_id),
++		  [rseq_cs]		"m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+@@ -555,16 +579,21 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+diff --git a/tools/testing/selftests/rseq/rseq-skip.h b/tools/testing/selftests/rseq/rseq-skip.h
+index 72750b5905a96..7b53dac1fcdd9 100644
+--- a/tools/testing/selftests/rseq/rseq-skip.h
++++ b/tools/testing/selftests/rseq/rseq-skip.h
+@@ -13,7 +13,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 
+ static inline __attribute__((always_inline))
+ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+-			       off_t voffp, intptr_t *load, int cpu)
++			       long voffp, intptr_t *load, int cpu)
+ {
+ 	return -1;
+ }
+diff --git a/tools/testing/selftests/rseq/rseq-thread-pointer.h b/tools/testing/selftests/rseq/rseq-thread-pointer.h
+new file mode 100644
+index 0000000000000..977c25d758b2a
+--- /dev/null
++++ b/tools/testing/selftests/rseq/rseq-thread-pointer.h
+@@ -0,0 +1,19 @@
++/* SPDX-License-Identifier: LGPL-2.1-only OR MIT */
++/*
++ * rseq-thread-pointer.h
++ *
++ * (C) Copyright 2021 - Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
++ */
++
++#ifndef _RSEQ_THREAD_POINTER
++#define _RSEQ_THREAD_POINTER
++
++#if defined(__x86_64__) || defined(__i386__)
++#include "rseq-x86-thread-pointer.h"
++#elif defined(__PPC__)
++#include "rseq-ppc-thread-pointer.h"
++#else
++#include "rseq-generic-thread-pointer.h"
++#endif
++
++#endif
+diff --git a/tools/testing/selftests/rseq/rseq-x86-thread-pointer.h b/tools/testing/selftests/rseq/rseq-x86-thread-pointer.h
+new file mode 100644
+index 0000000000000..d3133587d9968
+--- /dev/null
++++ b/tools/testing/selftests/rseq/rseq-x86-thread-pointer.h
+@@ -0,0 +1,40 @@
++/* SPDX-License-Identifier: LGPL-2.1-only OR MIT */
++/*
++ * rseq-x86-thread-pointer.h
++ *
++ * (C) Copyright 2021 - Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
++ */
++
++#ifndef _RSEQ_X86_THREAD_POINTER
++#define _RSEQ_X86_THREAD_POINTER
++
++#include <features.h>
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++#if __GNUC_PREREQ (11, 1)
++static inline void *rseq_thread_pointer(void)
++{
++	return __builtin_thread_pointer();
++}
++#else
++static inline void *rseq_thread_pointer(void)
++{
++	void *__result;
++
++# ifdef __x86_64__
++	__asm__ ("mov %%fs:0, %0" : "=r" (__result));
++# else
++	__asm__ ("mov %%gs:0, %0" : "=r" (__result));
++# endif
++	return __result;
++}
++#endif /* !GCC 11 */
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif
+diff --git a/tools/testing/selftests/rseq/rseq-x86.h b/tools/testing/selftests/rseq/rseq-x86.h
+index 640411518e466..bd01dc41ca130 100644
+--- a/tools/testing/selftests/rseq/rseq-x86.h
++++ b/tools/testing/selftests/rseq/rseq-x86.h
+@@ -28,6 +28,8 @@
+ 
+ #ifdef __x86_64__
+ 
++#define RSEQ_ASM_TP_SEGMENT	%%fs
++
+ #define rseq_smp_mb()	\
+ 	__asm__ __volatile__ ("lock; addl $0,-128(%%rsp)" ::: "memory", "cc")
+ #define rseq_smp_rmb()	rseq_barrier()
+@@ -123,14 +125,14 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ #endif
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ 		"cmpq %[v], %[expect]\n\t"
+ 		"jnz %l[cmpfail]\n\t"
+ 		RSEQ_INJECT_ASM(4)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ 		"cmpq %[v], %[expect]\n\t"
+ 		"jnz %l[error2]\n\t"
+ #endif
+@@ -141,7 +143,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+ 		  [newv]		"r" (newv)
+@@ -152,16 +154,21 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -172,7 +179,7 @@ error2:
+  */
+ static inline __attribute__((always_inline))
+ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+-			       off_t voffp, intptr_t *load, int cpu)
++			       long voffp, intptr_t *load, int cpu)
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+@@ -184,15 +191,15 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ #endif
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ 		"movq %[v], %%rbx\n\t"
+ 		"cmpq %%rbx, %[expectnot]\n\t"
+ 		"je %l[cmpfail]\n\t"
+ 		RSEQ_INJECT_ASM(4)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ 		"movq %[v], %%rbx\n\t"
+ 		"cmpq %%rbx, %[expectnot]\n\t"
+ 		"je %l[error2]\n\t"
+@@ -207,7 +214,7 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expectnot]		"r" (expectnot),
+@@ -220,16 +227,21 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -245,11 +257,11 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ #endif
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ #endif
+ 		/* final store */
+ 		"addq %[count], %[v]\n\t"
+@@ -258,7 +270,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [count]		"er" (count)
+@@ -269,12 +281,15 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		  , error1
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ #endif
+ }
+@@ -286,7 +301,7 @@ error1:
+  *  *pval += inc;
+  */
+ static inline __attribute__((always_inline))
+-int rseq_offset_deref_addv(intptr_t *ptr, off_t off, intptr_t inc, int cpu)
++int rseq_offset_deref_addv(intptr_t *ptr, long off, intptr_t inc, int cpu)
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+@@ -296,11 +311,11 @@ int rseq_offset_deref_addv(intptr_t *ptr, off_t off, intptr_t inc, int cpu)
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ #endif
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ #endif
+ 		/* get p+v */
+ 		"movq %[ptr], %%rbx\n\t"
+@@ -314,7 +329,7 @@ int rseq_offset_deref_addv(intptr_t *ptr, off_t off, intptr_t inc, int cpu)
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* final store input */
+ 		  [ptr]			"m" (*ptr),
+ 		  [off]			"er" (off),
+@@ -351,14 +366,14 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ #endif
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ 		"cmpq %[v], %[expect]\n\t"
+ 		"jnz %l[cmpfail]\n\t"
+ 		RSEQ_INJECT_ASM(4)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ 		"cmpq %[v], %[expect]\n\t"
+ 		"jnz %l[error2]\n\t"
+ #endif
+@@ -372,7 +387,7 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* try store input */
+ 		  [v2]			"m" (*v2),
+ 		  [newv2]		"r" (newv2),
+@@ -387,16 +402,21 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -426,8 +446,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
+ #endif
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ 		"cmpq %[v], %[expect]\n\t"
+ 		"jnz %l[cmpfail]\n\t"
+@@ -436,7 +456,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		"jnz %l[cmpfail]\n\t"
+ 		RSEQ_INJECT_ASM(5)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ 		"cmpq %[v], %[expect]\n\t"
+ 		"jnz %l[error2]\n\t"
+ 		"cmpq %[v2], %[expect2]\n\t"
+@@ -449,7 +469,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* cmp2 input */
+ 		  [v2]			"m" (*v2),
+ 		  [expect2]		"r" (expect2),
+@@ -464,18 +484,24 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2, error3
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("1st expected value comparison failed");
+ error3:
++	rseq_after_asm_goto();
+ 	rseq_bug("2nd expected value comparison failed");
+ #endif
+ }
+@@ -500,14 +526,14 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		"movq %[dst], %[rseq_scratch1]\n\t"
+ 		"movq %[len], %[rseq_scratch2]\n\t"
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ 		"cmpq %[v], %[expect]\n\t"
+ 		"jnz 5f\n\t"
+ 		RSEQ_INJECT_ASM(4)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 6f)
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 6f)
+ 		"cmpq %[v], %[expect]\n\t"
+ 		"jnz 7f\n\t"
+ #endif
+@@ -555,7 +581,7 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ #endif
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+@@ -574,16 +600,21 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -600,7 +631,9 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 
+ #endif /* !RSEQ_SKIP_FASTPATH */
+ 
+-#elif __i386__
++#elif defined(__i386__)
++
++#define RSEQ_ASM_TP_SEGMENT	%%gs
+ 
+ #define rseq_smp_mb()	\
+ 	__asm__ __volatile__ ("lock; addl $0,-128(%%esp)" ::: "memory", "cc")
+@@ -701,14 +734,14 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ #endif
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ 		"cmpl %[v], %[expect]\n\t"
+ 		"jnz %l[cmpfail]\n\t"
+ 		RSEQ_INJECT_ASM(4)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ 		"cmpl %[v], %[expect]\n\t"
+ 		"jnz %l[error2]\n\t"
+ #endif
+@@ -719,7 +752,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  [v]			"m" (*v),
+ 		  [expect]		"r" (expect),
+ 		  [newv]		"r" (newv)
+@@ -730,16 +763,21 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -750,7 +788,7 @@ error2:
+  */
+ static inline __attribute__((always_inline))
+ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+-			       off_t voffp, intptr_t *load, int cpu)
++			       long voffp, intptr_t *load, int cpu)
+ {
+ 	RSEQ_INJECT_C(9)
+ 
+@@ -762,15 +800,15 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ #endif
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ 		"movl %[v], %%ebx\n\t"
+ 		"cmpl %%ebx, %[expectnot]\n\t"
+ 		"je %l[cmpfail]\n\t"
+ 		RSEQ_INJECT_ASM(4)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ 		"movl %[v], %%ebx\n\t"
+ 		"cmpl %%ebx, %[expectnot]\n\t"
+ 		"je %l[error2]\n\t"
+@@ -785,7 +823,7 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expectnot]		"r" (expectnot),
+@@ -798,16 +836,21 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -823,11 +866,11 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ #endif
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ #endif
+ 		/* final store */
+ 		"addl %[count], %[v]\n\t"
+@@ -836,7 +879,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [count]		"ir" (count)
+@@ -847,12 +890,15 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
+ 		  , error1
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ #endif
+ }
+@@ -872,14 +918,14 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ #endif
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ 		"cmpl %[v], %[expect]\n\t"
+ 		"jnz %l[cmpfail]\n\t"
+ 		RSEQ_INJECT_ASM(4)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ 		"cmpl %[v], %[expect]\n\t"
+ 		"jnz %l[error2]\n\t"
+ #endif
+@@ -894,7 +940,7 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* try store input */
+ 		  [v2]			"m" (*v2),
+ 		  [newv2]		"m" (newv2),
+@@ -909,16 +955,21 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -938,15 +989,15 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ #endif
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ 		"movl %[expect], %%eax\n\t"
+ 		"cmpl %[v], %%eax\n\t"
+ 		"jnz %l[cmpfail]\n\t"
+ 		RSEQ_INJECT_ASM(4)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ 		"movl %[expect], %%eax\n\t"
+ 		"cmpl %[v], %%eax\n\t"
+ 		"jnz %l[error2]\n\t"
+@@ -962,7 +1013,7 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* try store input */
+ 		  [v2]			"m" (*v2),
+ 		  [newv2]		"r" (newv2),
+@@ -977,16 +1028,21 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ 
+@@ -1008,8 +1064,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
+ #endif
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ 		"cmpl %[v], %[expect]\n\t"
+ 		"jnz %l[cmpfail]\n\t"
+@@ -1018,7 +1074,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		"jnz %l[cmpfail]\n\t"
+ 		RSEQ_INJECT_ASM(5)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ 		"cmpl %[v], %[expect]\n\t"
+ 		"jnz %l[error2]\n\t"
+ 		"cmpl %[expect2], %[v2]\n\t"
+@@ -1032,7 +1088,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* cmp2 input */
+ 		  [v2]			"m" (*v2),
+ 		  [expect2]		"r" (expect2),
+@@ -1047,18 +1103,24 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2, error3
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("1st expected value comparison failed");
+ error3:
++	rseq_after_asm_goto();
+ 	rseq_bug("2nd expected value comparison failed");
+ #endif
+ }
+@@ -1084,15 +1146,15 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		"movl %[dst], %[rseq_scratch1]\n\t"
+ 		"movl %[len], %[rseq_scratch2]\n\t"
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ 		"movl %[expect], %%eax\n\t"
+ 		"cmpl %%eax, %[v]\n\t"
+ 		"jnz 5f\n\t"
+ 		RSEQ_INJECT_ASM(4)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 6f)
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 6f)
+ 		"movl %[expect], %%eax\n\t"
+ 		"cmpl %%eax, %[v]\n\t"
+ 		"jnz 7f\n\t"
+@@ -1142,7 +1204,7 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ #endif
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expect]		"m" (expect),
+@@ -1161,16 +1223,21 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+@@ -1196,15 +1263,15 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 		"movl %[dst], %[rseq_scratch1]\n\t"
+ 		"movl %[len], %[rseq_scratch2]\n\t"
+ 		/* Start rseq by storing table entry pointer into rseq_cs. */
+-		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
++		RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ 		RSEQ_INJECT_ASM(3)
+ 		"movl %[expect], %%eax\n\t"
+ 		"cmpl %%eax, %[v]\n\t"
+ 		"jnz 5f\n\t"
+ 		RSEQ_INJECT_ASM(4)
+ #ifdef RSEQ_COMPARE_TWICE
+-		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 6f)
++		RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 6f)
+ 		"movl %[expect], %%eax\n\t"
+ 		"cmpl %%eax, %[v]\n\t"
+ 		"jnz 7f\n\t"
+@@ -1255,7 +1322,7 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ #endif
+ 		: /* gcc asm goto does not allow outputs */
+ 		: [cpu_id]		"r" (cpu),
+-		  [rseq_abi]		"r" (&__rseq_abi),
++		  [rseq_offset]		"r" (rseq_offset),
+ 		  /* final store input */
+ 		  [v]			"m" (*v),
+ 		  [expect]		"m" (expect),
+@@ -1274,16 +1341,21 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
+ 		  , error1, error2
+ #endif
+ 	);
++	rseq_after_asm_goto();
+ 	return 0;
+ abort:
++	rseq_after_asm_goto();
+ 	RSEQ_INJECT_FAILED
+ 	return -1;
+ cmpfail:
++	rseq_after_asm_goto();
+ 	return 1;
+ #ifdef RSEQ_COMPARE_TWICE
+ error1:
++	rseq_after_asm_goto();
+ 	rseq_bug("cpu_id comparison failed");
+ error2:
++	rseq_after_asm_goto();
+ 	rseq_bug("expected value comparison failed");
+ #endif
+ }
+diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c
+index 7159eb777fd34..986b9458efb26 100644
+--- a/tools/testing/selftests/rseq/rseq.c
++++ b/tools/testing/selftests/rseq/rseq.c
+@@ -26,131 +26,124 @@
+ #include <assert.h>
+ #include <signal.h>
+ #include <limits.h>
++#include <dlfcn.h>
++#include <stddef.h>
+ 
++#include "../kselftest.h"
+ #include "rseq.h"
+ 
+-#define ARRAY_SIZE(arr)	(sizeof(arr) / sizeof((arr)[0]))
++static const ptrdiff_t *libc_rseq_offset_p;
++static const unsigned int *libc_rseq_size_p;
++static const unsigned int *libc_rseq_flags_p;
+ 
+-__thread volatile struct rseq __rseq_abi = {
+-	.cpu_id = RSEQ_CPU_ID_UNINITIALIZED,
+-};
++/* Offset from the thread pointer to the rseq area.  */
++ptrdiff_t rseq_offset;
+ 
+-/*
+- * Shared with other libraries. This library may take rseq ownership if it is
+- * still 0 when executing the library constructor. Set to 1 by library
+- * constructor when handling rseq. Set to 0 in destructor if handling rseq.
+- */
+-int __rseq_handled;
++/* Size of the registered rseq area.  0 if the registration was
++   unsuccessful.  */
++unsigned int rseq_size = -1U;
++
++/* Flags used during rseq registration.  */
++unsigned int rseq_flags;
+ 
+-/* Whether this library have ownership of rseq registration. */
+ static int rseq_ownership;
+ 
+-static __thread volatile uint32_t __rseq_refcount;
++static
++__thread struct rseq_abi __rseq_abi __attribute__((tls_model("initial-exec"))) = {
++	.cpu_id = RSEQ_ABI_CPU_ID_UNINITIALIZED,
++};
+ 
+-static void signal_off_save(sigset_t *oldset)
++static int sys_rseq(struct rseq_abi *rseq_abi, uint32_t rseq_len,
++		    int flags, uint32_t sig)
+ {
+-	sigset_t set;
+-	int ret;
+-
+-	sigfillset(&set);
+-	ret = pthread_sigmask(SIG_BLOCK, &set, oldset);
+-	if (ret)
+-		abort();
++	return syscall(__NR_rseq, rseq_abi, rseq_len, flags, sig);
+ }
+ 
+-static void signal_restore(sigset_t oldset)
++int rseq_available(void)
+ {
+-	int ret;
++	int rc;
+ 
+-	ret = pthread_sigmask(SIG_SETMASK, &oldset, NULL);
+-	if (ret)
++	rc = sys_rseq(NULL, 0, 0, 0);
++	if (rc != -1)
+ 		abort();
+-}
+-
+-static int sys_rseq(volatile struct rseq *rseq_abi, uint32_t rseq_len,
+-		    int flags, uint32_t sig)
+-{
+-	return syscall(__NR_rseq, rseq_abi, rseq_len, flags, sig);
++	switch (errno) {
++	case ENOSYS:
++		return 0;
++	case EINVAL:
++		return 1;
++	default:
++		abort();
++	}
+ }
+ 
+ int rseq_register_current_thread(void)
+ {
+-	int rc, ret = 0;
+-	sigset_t oldset;
++	int rc;
+ 
+-	if (!rseq_ownership)
++	if (!rseq_ownership) {
++		/* Treat libc's ownership as a successful registration. */
+ 		return 0;
+-	signal_off_save(&oldset);
+-	if (__rseq_refcount == UINT_MAX) {
+-		ret = -1;
+-		goto end;
+-	}
+-	if (__rseq_refcount++)
+-		goto end;
+-	rc = sys_rseq(&__rseq_abi, sizeof(struct rseq), 0, RSEQ_SIG);
+-	if (!rc) {
+-		assert(rseq_current_cpu_raw() >= 0);
+-		goto end;
+ 	}
+-	if (errno != EBUSY)
+-		__rseq_abi.cpu_id = RSEQ_CPU_ID_REGISTRATION_FAILED;
+-	ret = -1;
+-	__rseq_refcount--;
+-end:
+-	signal_restore(oldset);
+-	return ret;
++	rc = sys_rseq(&__rseq_abi, sizeof(struct rseq_abi), 0, RSEQ_SIG);
++	if (rc)
++		return -1;
++	assert(rseq_current_cpu_raw() >= 0);
++	return 0;
+ }
+ 
+ int rseq_unregister_current_thread(void)
+ {
+-	int rc, ret = 0;
+-	sigset_t oldset;
++	int rc;
+ 
+-	if (!rseq_ownership)
++	if (!rseq_ownership) {
++		/* Treat libc's ownership as a successful unregistration. */
+ 		return 0;
+-	signal_off_save(&oldset);
+-	if (!__rseq_refcount) {
+-		ret = -1;
+-		goto end;
+ 	}
+-	if (--__rseq_refcount)
+-		goto end;
+-	rc = sys_rseq(&__rseq_abi, sizeof(struct rseq),
+-		      RSEQ_FLAG_UNREGISTER, RSEQ_SIG);
+-	if (!rc)
+-		goto end;
+-	__rseq_refcount = 1;
+-	ret = -1;
+-end:
+-	signal_restore(oldset);
+-	return ret;
++	rc = sys_rseq(&__rseq_abi, sizeof(struct rseq_abi), RSEQ_ABI_FLAG_UNREGISTER, RSEQ_SIG);
++	if (rc)
++		return -1;
++	return 0;
+ }
+ 
+-int32_t rseq_fallback_current_cpu(void)
++static __attribute__((constructor))
++void rseq_init(void)
+ {
+-	int32_t cpu;
+-
+-	cpu = sched_getcpu();
+-	if (cpu < 0) {
+-		perror("sched_getcpu()");
+-		abort();
++	libc_rseq_offset_p = dlsym(RTLD_NEXT, "__rseq_offset");
++	libc_rseq_size_p = dlsym(RTLD_NEXT, "__rseq_size");
++	libc_rseq_flags_p = dlsym(RTLD_NEXT, "__rseq_flags");
++	if (libc_rseq_size_p && libc_rseq_offset_p && libc_rseq_flags_p) {
++		/* rseq registration owned by glibc */
++		rseq_offset = *libc_rseq_offset_p;
++		rseq_size = *libc_rseq_size_p;
++		rseq_flags = *libc_rseq_flags_p;
++		return;
+ 	}
+-	return cpu;
+-}
+-
+-void __attribute__((constructor)) rseq_init(void)
+-{
+-	/* Check whether rseq is handled by another library. */
+-	if (__rseq_handled)
++	if (!rseq_available())
+ 		return;
+-	__rseq_handled = 1;
+ 	rseq_ownership = 1;
++	rseq_offset = (void *)&__rseq_abi - rseq_thread_pointer();
++	rseq_size = sizeof(struct rseq_abi);
++	rseq_flags = 0;
+ }
+ 
+-void __attribute__((destructor)) rseq_fini(void)
++static __attribute__((destructor))
++void rseq_exit(void)
+ {
+ 	if (!rseq_ownership)
+ 		return;
+-	__rseq_handled = 0;
++	rseq_offset = 0;
++	rseq_size = -1U;
+ 	rseq_ownership = 0;
+ }
++
++int32_t rseq_fallback_current_cpu(void)
++{
++	int32_t cpu;
++
++	cpu = sched_getcpu();
++	if (cpu < 0) {
++		perror("sched_getcpu()");
++		abort();
++	}
++	return cpu;
++}
+diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h
+index 3f63eb362b92f..9d850b290c2e6 100644
+--- a/tools/testing/selftests/rseq/rseq.h
++++ b/tools/testing/selftests/rseq/rseq.h
+@@ -16,7 +16,9 @@
+ #include <errno.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+-#include <linux/rseq.h>
++#include <stddef.h>
++#include "rseq-abi.h"
++#include "compiler.h"
+ 
+ /*
+  * Empty code injection macros, override when testing.
+@@ -43,8 +45,20 @@
+ #define RSEQ_INJECT_FAILED
+ #endif
+ 
+-extern __thread volatile struct rseq __rseq_abi;
+-extern int __rseq_handled;
++#include "rseq-thread-pointer.h"
++
++/* Offset from the thread pointer to the rseq area.  */
++extern ptrdiff_t rseq_offset;
++/* Size of the registered rseq area.  0 if the registration was
++   unsuccessful.  */
++extern unsigned int rseq_size;
++/* Flags used during rseq registration.  */
++extern unsigned int rseq_flags;
++
++static inline struct rseq_abi *rseq_get_abi(void)
++{
++	return (struct rseq_abi *) ((uintptr_t) rseq_thread_pointer() + rseq_offset);
++}
+ 
+ #define rseq_likely(x)		__builtin_expect(!!(x), 1)
+ #define rseq_unlikely(x)	__builtin_expect(!!(x), 0)
+@@ -108,7 +122,7 @@ int32_t rseq_fallback_current_cpu(void);
+  */
+ static inline int32_t rseq_current_cpu_raw(void)
+ {
+-	return RSEQ_ACCESS_ONCE(__rseq_abi.cpu_id);
++	return RSEQ_ACCESS_ONCE(rseq_get_abi()->cpu_id);
+ }
+ 
+ /*
+@@ -124,7 +138,7 @@ static inline int32_t rseq_current_cpu_raw(void)
+  */
+ static inline uint32_t rseq_cpu_start(void)
+ {
+-	return RSEQ_ACCESS_ONCE(__rseq_abi.cpu_id_start);
++	return RSEQ_ACCESS_ONCE(rseq_get_abi()->cpu_id_start);
+ }
+ 
+ static inline uint32_t rseq_current_cpu(void)
+@@ -139,11 +153,7 @@ static inline uint32_t rseq_current_cpu(void)
+ 
+ static inline void rseq_clear_rseq_cs(void)
+ {
+-#ifdef __LP64__
+-	__rseq_abi.rseq_cs.ptr = 0;
+-#else
+-	__rseq_abi.rseq_cs.ptr.ptr32 = 0;
+-#endif
++	RSEQ_WRITE_ONCE(rseq_get_abi()->rseq_cs.arch.ptr, 0);
+ }
+ 
+ /*


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-07-12 15:59 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-07-12 15:59 UTC (permalink / raw
  To: gentoo-commits

commit:     127112fa96db1417fafbf5c6199dba8a33a4a6bc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 12 15:59:42 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jul 12 15:59:42 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=127112fa

Linux patch 5.10.130

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1129_linux-5.10.130.patch | 2378 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2382 insertions(+)

diff --git a/0000_README b/0000_README
index 42dc1c5f..5c651d7f 100644
--- a/0000_README
+++ b/0000_README
@@ -559,6 +559,10 @@ Patch:  1128_linux-5.10.129.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.129
 
+Patch:  1129_linux-5.10.130.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.130
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1129_linux-5.10.130.patch b/1129_linux-5.10.130.patch
new file mode 100644
index 00000000..da436947
--- /dev/null
+++ b/1129_linux-5.10.130.patch
@@ -0,0 +1,2378 @@
+diff --git a/Documentation/devicetree/bindings/dma/allwinner,sun50i-a64-dma.yaml b/Documentation/devicetree/bindings/dma/allwinner,sun50i-a64-dma.yaml
+index 372679dbd216f..7e250ce136ee9 100644
+--- a/Documentation/devicetree/bindings/dma/allwinner,sun50i-a64-dma.yaml
++++ b/Documentation/devicetree/bindings/dma/allwinner,sun50i-a64-dma.yaml
+@@ -61,7 +61,7 @@ if:
+ then:
+   properties:
+     clocks:
+-      maxItems: 2
++      minItems: 2
+ 
+   required:
+     - clock-names
+diff --git a/Makefile b/Makefile
+index 7d52cee374880..b0a35378803db 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 129
++SUBLEVEL = 130
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/at91-sam9x60ek.dts b/arch/arm/boot/dts/at91-sam9x60ek.dts
+index b1068cca42287..fd8dc1183b3e8 100644
+--- a/arch/arm/boot/dts/at91-sam9x60ek.dts
++++ b/arch/arm/boot/dts/at91-sam9x60ek.dts
+@@ -233,10 +233,9 @@
+ 		status = "okay";
+ 
+ 		eeprom@53 {
+-			compatible = "atmel,24c32";
++			compatible = "atmel,24c02";
+ 			reg = <0x53>;
+ 			pagesize = <16>;
+-			size = <128>;
+ 			status = "okay";
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/at91-sama5d2_icp.dts b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+index 308d472bd1044..634411d13b4aa 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_icp.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+@@ -317,21 +317,21 @@
+ 	status = "okay";
+ 
+ 	eeprom@50 {
+-		compatible = "atmel,24c32";
++		compatible = "atmel,24c02";
+ 		reg = <0x50>;
+ 		pagesize = <16>;
+ 		status = "okay";
+ 	};
+ 
+ 	eeprom@52 {
+-		compatible = "atmel,24c32";
++		compatible = "atmel,24c02";
+ 		reg = <0x52>;
+ 		pagesize = <16>;
+ 		status = "disabled";
+ 	};
+ 
+ 	eeprom@53 {
+-		compatible = "atmel,24c32";
++		compatible = "atmel,24c02";
+ 		reg = <0x53>;
+ 		pagesize = <16>;
+ 		status = "disabled";
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index 3f015cb6ec2b0..f2ce2d0949254 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -104,7 +104,7 @@ static const struct wakeup_source_info ws_info[] = {
+ 
+ static const struct of_device_id sama5d2_ws_ids[] = {
+ 	{ .compatible = "atmel,sama5d2-gem",		.data = &ws_info[0] },
+-	{ .compatible = "atmel,at91rm9200-rtc",		.data = &ws_info[1] },
++	{ .compatible = "atmel,sama5d2-rtc",		.data = &ws_info[1] },
+ 	{ .compatible = "atmel,sama5d3-udc",		.data = &ws_info[2] },
+ 	{ .compatible = "atmel,at91rm9200-ohci",	.data = &ws_info[2] },
+ 	{ .compatible = "usb-ohci",			.data = &ws_info[2] },
+@@ -115,12 +115,12 @@ static const struct of_device_id sama5d2_ws_ids[] = {
+ };
+ 
+ static const struct of_device_id sam9x60_ws_ids[] = {
+-	{ .compatible = "atmel,at91sam9x5-rtc",		.data = &ws_info[1] },
++	{ .compatible = "microchip,sam9x60-rtc",	.data = &ws_info[1] },
+ 	{ .compatible = "atmel,at91rm9200-ohci",	.data = &ws_info[2] },
+ 	{ .compatible = "usb-ohci",			.data = &ws_info[2] },
+ 	{ .compatible = "atmel,at91sam9g45-ehci",	.data = &ws_info[2] },
+ 	{ .compatible = "usb-ehci",			.data = &ws_info[2] },
+-	{ .compatible = "atmel,at91sam9260-rtt",	.data = &ws_info[4] },
++	{ .compatible = "microchip,sam9x60-rtt",	.data = &ws_info[4] },
+ 	{ .compatible = "cdns,sam9x60-macb",		.data = &ws_info[5] },
+ 	{ /* sentinel */ }
+ };
+diff --git a/arch/arm/mach-meson/platsmp.c b/arch/arm/mach-meson/platsmp.c
+index 4b8ad728bb42a..32ac60b89fdcc 100644
+--- a/arch/arm/mach-meson/platsmp.c
++++ b/arch/arm/mach-meson/platsmp.c
+@@ -71,6 +71,7 @@ static void __init meson_smp_prepare_cpus(const char *scu_compatible,
+ 	}
+ 
+ 	sram_base = of_iomap(node, 0);
++	of_node_put(node);
+ 	if (!sram_base) {
+ 		pr_err("Couldn't map SRAM registers\n");
+ 		return;
+@@ -91,6 +92,7 @@ static void __init meson_smp_prepare_cpus(const char *scu_compatible,
+ 	}
+ 
+ 	scu_base = of_iomap(node, 0);
++	of_node_put(node);
+ 	if (!scu_base) {
+ 		pr_err("Couldn't map SCU registers\n");
+ 		return;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
+index c13b4a02d12f8..c016f5b7d24a6 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
+@@ -148,27 +148,27 @@
+ 
+ 	pinctrl_gpio_led: gpioledgrp {
+ 		fsl,pins = <
+-			MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16	0x19
++			MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16	0x140
+ 		>;
+ 	};
+ 
+ 	pinctrl_i2c3: i2c3grp {
+ 		fsl,pins = <
+-			MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL		0x400001c3
+-			MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA		0x400001c3
++			MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL		0x400001c2
++			MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA		0x400001c2
+ 		>;
+ 	};
+ 
+ 	pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
+ 		fsl,pins = <
+-			MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19	0x41
++			MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19	0x40
+ 		>;
+ 	};
+ 
+ 	pinctrl_uart2: uart2grp {
+ 		fsl,pins = <
+-			MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX	0x49
+-			MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX	0x49
++			MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX	0x140
++			MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX	0x140
+ 		>;
+ 	};
+ 
+@@ -180,7 +180,7 @@
+ 			MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d0
+ 			MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d0
+ 			MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d0
+-			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc1
++			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0
+ 		>;
+ 	};
+ 
+@@ -192,7 +192,7 @@
+ 			MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d4
+ 			MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d4
+ 			MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d4
+-			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
++			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
+ 		>;
+ 	};
+ 
+@@ -204,7 +204,7 @@
+ 			MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d6
+ 			MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d6
+ 			MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d6
+-			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
++			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
+ 		>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts b/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts
+index cb82864a90ef3..42f2b235011f0 100644
+--- a/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts
++++ b/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts
+@@ -64,7 +64,7 @@
+ 		vdd_l17_29-supply = <&vreg_vph_pwr>;
+ 		vdd_l20_21-supply = <&vreg_vph_pwr>;
+ 		vdd_l25-supply = <&pm8994_s5>;
+-		vdd_lvs1_2 = <&pm8994_s4>;
++		vdd_lvs1_2-supply = <&pm8994_s4>;
+ 
+ 		pm8994_s1: s1 {
+ 			regulator-min-microvolt = <800000>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts b/arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts
+index 4f64ca3ea1efd..6ed2a9c01e8c4 100644
+--- a/arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts
++++ b/arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts
+@@ -151,7 +151,7 @@
+ 		vdd_l17_29-supply = <&vreg_vph_pwr>;
+ 		vdd_l20_21-supply = <&vreg_vph_pwr>;
+ 		vdd_l25-supply = <&pm8994_s5>;
+-		vdd_lvs1_2 = <&pm8994_s4>;
++		vdd_lvs1_2-supply = <&pm8994_s4>;
+ 
+ 		pm8994_s1: s1 {
+ 			/* unused */
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index 297408b947ffb..aeb5762566e91 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -92,7 +92,7 @@
+ 		CPU6: cpu@102 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a57";
+-			reg = <0x0 0x101>;
++			reg = <0x0 0x102>;
+ 			enable-method = "psci";
+ 			next-level-cache = <&L2_1>;
+ 		};
+@@ -100,7 +100,7 @@
+ 		CPU7: cpu@103 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a57";
+-			reg = <0x0 0x101>;
++			reg = <0x0 0x103>;
+ 			enable-method = "psci";
+ 			next-level-cache = <&L2_1>;
+ 		};
+diff --git a/arch/powerpc/platforms/powernv/rng.c b/arch/powerpc/platforms/powernv/rng.c
+index 2b5a1a41234cc..236bd2ba51b98 100644
+--- a/arch/powerpc/platforms/powernv/rng.c
++++ b/arch/powerpc/platforms/powernv/rng.c
+@@ -176,12 +176,8 @@ static int __init pnv_get_random_long_early(unsigned long *v)
+ 		    NULL) != pnv_get_random_long_early)
+ 		return 0;
+ 
+-	for_each_compatible_node(dn, NULL, "ibm,power-rng") {
+-		if (rng_create(dn))
+-			continue;
+-		/* Create devices for hwrng driver */
+-		of_platform_device_create(dn, NULL, NULL);
+-	}
++	for_each_compatible_node(dn, NULL, "ibm,power-rng")
++		rng_create(dn);
+ 
+ 	if (!ppc_md.get_random_seed)
+ 		return 0;
+@@ -205,10 +201,18 @@ void __init pnv_rng_init(void)
+ 
+ static int __init pnv_rng_late_init(void)
+ {
++	struct device_node *dn;
+ 	unsigned long v;
++
+ 	/* In case it wasn't called during init for some other reason. */
+ 	if (ppc_md.get_random_seed == pnv_get_random_long_early)
+ 		pnv_get_random_long_early(&v);
++
++	if (ppc_md.get_random_seed == powernv_get_random_long) {
++		for_each_compatible_node(dn, NULL, "ibm,power-rng")
++			of_platform_device_create(dn, NULL, NULL);
++	}
++
+ 	return 0;
+ }
+ machine_subsys_initcall(powernv, pnv_rng_late_init);
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index c0566aff53551..9a874a58d690c 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -348,7 +348,8 @@ static void device_link_release_fn(struct work_struct *work)
+ 	/* Ensure that all references to the link object have been dropped. */
+ 	device_link_synchronize_removal();
+ 
+-	pm_runtime_release_supplier(link, true);
++	pm_runtime_release_supplier(link);
++	pm_request_idle(link->supplier);
+ 
+ 	put_device(link->consumer);
+ 	put_device(link->supplier);
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 1573319404888..835a39e84c1dd 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -308,13 +308,10 @@ static int rpm_get_suppliers(struct device *dev)
+ /**
+  * pm_runtime_release_supplier - Drop references to device link's supplier.
+  * @link: Target device link.
+- * @check_idle: Whether or not to check if the supplier device is idle.
+  *
+- * Drop all runtime PM references associated with @link to its supplier device
+- * and if @check_idle is set, check if that device is idle (and so it can be
+- * suspended).
++ * Drop all runtime PM references associated with @link to its supplier device.
+  */
+-void pm_runtime_release_supplier(struct device_link *link, bool check_idle)
++void pm_runtime_release_supplier(struct device_link *link)
+ {
+ 	struct device *supplier = link->supplier;
+ 
+@@ -327,9 +324,6 @@ void pm_runtime_release_supplier(struct device_link *link, bool check_idle)
+ 	while (refcount_dec_not_one(&link->rpm_active) &&
+ 	       atomic_read(&supplier->power.usage_count) > 0)
+ 		pm_runtime_put_noidle(supplier);
+-
+-	if (check_idle)
+-		pm_request_idle(supplier);
+ }
+ 
+ static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)
+@@ -337,8 +331,11 @@ static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)
+ 	struct device_link *link;
+ 
+ 	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
+-				device_links_read_lock_held())
+-		pm_runtime_release_supplier(link, try_to_suspend);
++				device_links_read_lock_held()) {
++		pm_runtime_release_supplier(link);
++		if (try_to_suspend)
++			pm_request_idle(link->supplier);
++	}
+ }
+ 
+ static void rpm_put_suppliers(struct device *dev)
+@@ -1776,7 +1773,8 @@ void pm_runtime_drop_link(struct device_link *link)
+ 		return;
+ 
+ 	pm_runtime_drop_link_count(link->consumer);
+-	pm_runtime_release_supplier(link, true);
++	pm_runtime_release_supplier(link);
++	pm_request_idle(link->supplier);
+ }
+ 
+ static bool pm_runtime_need_not_resume(struct device *dev)
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 47552db6b8dc3..b5d691ae45dcf 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -1838,6 +1838,11 @@ static int at_xdmac_alloc_chan_resources(struct dma_chan *chan)
+ 	for (i = 0; i < init_nr_desc_per_channel; i++) {
+ 		desc = at_xdmac_alloc_desc(chan, GFP_KERNEL);
+ 		if (!desc) {
++			if (i == 0) {
++				dev_warn(chan2dev(chan),
++					 "can't allocate any descriptors\n");
++				return -EIO;
++			}
+ 			dev_warn(chan2dev(chan),
+ 				"only %d descriptors have been allocated\n", i);
+ 			break;
+diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
+index 792c91cd16080..2283dcd8bf91d 100644
+--- a/drivers/dma/imx-sdma.c
++++ b/drivers/dma/imx-sdma.c
+@@ -2212,7 +2212,7 @@ MODULE_DESCRIPTION("i.MX SDMA driver");
+ #if IS_ENABLED(CONFIG_SOC_IMX6Q)
+ MODULE_FIRMWARE("imx/sdma/sdma-imx6q.bin");
+ #endif
+-#if IS_ENABLED(CONFIG_SOC_IMX7D)
++#if IS_ENABLED(CONFIG_SOC_IMX7D) || IS_ENABLED(CONFIG_SOC_IMX8M)
+ MODULE_FIRMWARE("imx/sdma/sdma-imx7d.bin");
+ #endif
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index 6dca548f4dab1..5bbae99f2d34e 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -2591,7 +2591,7 @@ static struct dma_pl330_desc *pl330_get_desc(struct dma_pl330_chan *pch)
+ 
+ 	/* If the DMAC pool is empty, alloc new */
+ 	if (!desc) {
+-		DEFINE_SPINLOCK(lock);
++		static DEFINE_SPINLOCK(lock);
+ 		LIST_HEAD(pool);
+ 
+ 		if (!add_desc(&pool, &lock, GFP_ATOMIC, 1))
+diff --git a/drivers/dma/ti/dma-crossbar.c b/drivers/dma/ti/dma-crossbar.c
+index 4ba8fa5d9c367..86ced7f2d7718 100644
+--- a/drivers/dma/ti/dma-crossbar.c
++++ b/drivers/dma/ti/dma-crossbar.c
+@@ -245,6 +245,7 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec,
+ 	if (dma_spec->args[0] >= xbar->xbar_requests) {
+ 		dev_err(&pdev->dev, "Invalid XBAR request number: %d\n",
+ 			dma_spec->args[0]);
++		put_device(&pdev->dev);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+@@ -252,12 +253,14 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec,
+ 	dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0);
+ 	if (!dma_spec->np) {
+ 		dev_err(&pdev->dev, "Can't get DMA master\n");
++		put_device(&pdev->dev);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+ 	map = kzalloc(sizeof(*map), GFP_KERNEL);
+ 	if (!map) {
+ 		of_node_put(dma_spec->np);
++		put_device(&pdev->dev);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
+@@ -268,6 +271,8 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec,
+ 		mutex_unlock(&xbar->mutex);
+ 		dev_err(&pdev->dev, "Run out of free DMA requests\n");
+ 		kfree(map);
++		of_node_put(dma_spec->np);
++		put_device(&pdev->dev);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 	set_bit(map->xbar_out, xbar->dma_inuse);
+diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c
+index 50e3ddba52ba7..01564bd96c624 100644
+--- a/drivers/i2c/busses/i2c-cadence.c
++++ b/drivers/i2c/busses/i2c-cadence.c
+@@ -1289,6 +1289,7 @@ static int cdns_i2c_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_clk_dis:
++	clk_notifier_unregister(id->clk, &id->clk_rate_change_nb);
+ 	clk_disable_unprepare(id->clk);
+ 	pm_runtime_disable(&pdev->dev);
+ 	pm_runtime_set_suspended(&pdev->dev);
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index b8d0b56a75751..70d569b80ecf1 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -385,7 +385,7 @@ static int dmar_pci_bus_notifier(struct notifier_block *nb,
+ 
+ static struct notifier_block dmar_pci_bus_nb = {
+ 	.notifier_call = dmar_pci_bus_notifier,
+-	.priority = INT_MIN,
++	.priority = 1,
+ };
+ 
+ static struct dmar_drhd_unit *
+diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c
+index 1ef9b61077c44..f150d8769f198 100644
+--- a/drivers/misc/cardreader/rtsx_usb.c
++++ b/drivers/misc/cardreader/rtsx_usb.c
+@@ -631,16 +631,20 @@ static int rtsx_usb_probe(struct usb_interface *intf,
+ 
+ 	ucr->pusb_dev = usb_dev;
+ 
+-	ucr->iobuf = usb_alloc_coherent(ucr->pusb_dev, IOBUF_SIZE,
+-			GFP_KERNEL, &ucr->iobuf_dma);
+-	if (!ucr->iobuf)
++	ucr->cmd_buf = kmalloc(IOBUF_SIZE, GFP_KERNEL);
++	if (!ucr->cmd_buf)
+ 		return -ENOMEM;
+ 
++	ucr->rsp_buf = kmalloc(IOBUF_SIZE, GFP_KERNEL);
++	if (!ucr->rsp_buf) {
++		ret = -ENOMEM;
++		goto out_free_cmd_buf;
++	}
++
+ 	usb_set_intfdata(intf, ucr);
+ 
+ 	ucr->vendor_id = id->idVendor;
+ 	ucr->product_id = id->idProduct;
+-	ucr->cmd_buf = ucr->rsp_buf = ucr->iobuf;
+ 
+ 	mutex_init(&ucr->dev_mutex);
+ 
+@@ -668,8 +672,11 @@ static int rtsx_usb_probe(struct usb_interface *intf,
+ 
+ out_init_fail:
+ 	usb_set_intfdata(ucr->pusb_intf, NULL);
+-	usb_free_coherent(ucr->pusb_dev, IOBUF_SIZE, ucr->iobuf,
+-			ucr->iobuf_dma);
++	kfree(ucr->rsp_buf);
++	ucr->rsp_buf = NULL;
++out_free_cmd_buf:
++	kfree(ucr->cmd_buf);
++	ucr->cmd_buf = NULL;
+ 	return ret;
+ }
+ 
+@@ -682,8 +689,12 @@ static void rtsx_usb_disconnect(struct usb_interface *intf)
+ 	mfd_remove_devices(&intf->dev);
+ 
+ 	usb_set_intfdata(ucr->pusb_intf, NULL);
+-	usb_free_coherent(ucr->pusb_dev, IOBUF_SIZE, ucr->iobuf,
+-			ucr->iobuf_dma);
++
++	kfree(ucr->cmd_buf);
++	ucr->cmd_buf = NULL;
++
++	kfree(ucr->rsp_buf);
++	ucr->rsp_buf = NULL;
+ }
+ 
+ #ifdef CONFIG_PM
+diff --git a/drivers/net/can/grcan.c b/drivers/net/can/grcan.c
+index 4923f9ac6a43b..3794a7b2c64e8 100644
+--- a/drivers/net/can/grcan.c
++++ b/drivers/net/can/grcan.c
+@@ -1659,7 +1659,6 @@ static int grcan_probe(struct platform_device *ofdev)
+ 	 */
+ 	sysid_parent = of_find_node_by_path("/ambapp0");
+ 	if (sysid_parent) {
+-		of_node_get(sysid_parent);
+ 		err = of_property_read_u32(sysid_parent, "systemid", &sysid);
+ 		if (!err && ((sysid & GRLIB_VERSION_MASK) >=
+ 			     GRCAN_TXBUG_SAFE_GRLIB_VERSION))
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index e023c401f4f77..1bfc497da9ac8 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -184,6 +184,8 @@ struct gs_can {
+ 
+ 	struct usb_anchor tx_submitted;
+ 	atomic_t active_tx_urbs;
++	void *rxbuf[GS_MAX_RX_URBS];
++	dma_addr_t rxbuf_dma[GS_MAX_RX_URBS];
+ };
+ 
+ /* usb interface struct */
+@@ -592,6 +594,7 @@ static int gs_can_open(struct net_device *netdev)
+ 		for (i = 0; i < GS_MAX_RX_URBS; i++) {
+ 			struct urb *urb;
+ 			u8 *buf;
++			dma_addr_t buf_dma;
+ 
+ 			/* alloc rx urb */
+ 			urb = usb_alloc_urb(0, GFP_KERNEL);
+@@ -602,7 +605,7 @@ static int gs_can_open(struct net_device *netdev)
+ 			buf = usb_alloc_coherent(dev->udev,
+ 						 sizeof(struct gs_host_frame),
+ 						 GFP_KERNEL,
+-						 &urb->transfer_dma);
++						 &buf_dma);
+ 			if (!buf) {
+ 				netdev_err(netdev,
+ 					   "No memory left for USB buffer\n");
+@@ -610,6 +613,8 @@ static int gs_can_open(struct net_device *netdev)
+ 				return -ENOMEM;
+ 			}
+ 
++			urb->transfer_dma = buf_dma;
++
+ 			/* fill, anchor, and submit rx urb */
+ 			usb_fill_bulk_urb(urb,
+ 					  dev->udev,
+@@ -633,10 +638,17 @@ static int gs_can_open(struct net_device *netdev)
+ 					   rc);
+ 
+ 				usb_unanchor_urb(urb);
++				usb_free_coherent(dev->udev,
++						  sizeof(struct gs_host_frame),
++						  buf,
++						  buf_dma);
+ 				usb_free_urb(urb);
+ 				break;
+ 			}
+ 
++			dev->rxbuf[i] = buf;
++			dev->rxbuf_dma[i] = buf_dma;
++
+ 			/* Drop reference,
+ 			 * USB core will take care of freeing it
+ 			 */
+@@ -701,13 +713,20 @@ static int gs_can_close(struct net_device *netdev)
+ 	int rc;
+ 	struct gs_can *dev = netdev_priv(netdev);
+ 	struct gs_usb *parent = dev->parent;
++	unsigned int i;
+ 
+ 	netif_stop_queue(netdev);
+ 
+ 	/* Stop polling */
+ 	parent->active_channels--;
+-	if (!parent->active_channels)
++	if (!parent->active_channels) {
+ 		usb_kill_anchored_urbs(&parent->rx_submitted);
++		for (i = 0; i < GS_MAX_RX_URBS; i++)
++			usb_free_coherent(dev->udev,
++					  sizeof(struct gs_host_frame),
++					  dev->rxbuf[i],
++					  dev->rxbuf_dma[i]);
++	}
+ 
+ 	/* Stop sending URBs */
+ 	usb_kill_anchored_urbs(&dev->tx_submitted);
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
+index 390b6bde883c8..61e67986b625e 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
+@@ -35,9 +35,10 @@
+ #define KVASER_USB_RX_BUFFER_SIZE		3072
+ #define KVASER_USB_MAX_NET_DEVICES		5
+ 
+-/* USB devices features */
+-#define KVASER_USB_HAS_SILENT_MODE		BIT(0)
+-#define KVASER_USB_HAS_TXRX_ERRORS		BIT(1)
++/* Kvaser USB device quirks */
++#define KVASER_USB_QUIRK_HAS_SILENT_MODE	BIT(0)
++#define KVASER_USB_QUIRK_HAS_TXRX_ERRORS	BIT(1)
++#define KVASER_USB_QUIRK_IGNORE_CLK_FREQ	BIT(2)
+ 
+ /* Device capabilities */
+ #define KVASER_USB_CAP_BERR_CAP			0x01
+@@ -65,12 +66,7 @@ struct kvaser_usb_dev_card_data_hydra {
+ struct kvaser_usb_dev_card_data {
+ 	u32 ctrlmode_supported;
+ 	u32 capabilities;
+-	union {
+-		struct {
+-			enum kvaser_usb_leaf_family family;
+-		} leaf;
+-		struct kvaser_usb_dev_card_data_hydra hydra;
+-	};
++	struct kvaser_usb_dev_card_data_hydra hydra;
+ };
+ 
+ /* Context for an outstanding, not yet ACKed, transmission */
+@@ -84,7 +80,7 @@ struct kvaser_usb {
+ 	struct usb_device *udev;
+ 	struct usb_interface *intf;
+ 	struct kvaser_usb_net_priv *nets[KVASER_USB_MAX_NET_DEVICES];
+-	const struct kvaser_usb_dev_ops *ops;
++	const struct kvaser_usb_driver_info *driver_info;
+ 	const struct kvaser_usb_dev_cfg *cfg;
+ 
+ 	struct usb_endpoint_descriptor *bulk_in, *bulk_out;
+@@ -166,6 +162,12 @@ struct kvaser_usb_dev_ops {
+ 				  int *cmd_len, u16 transid);
+ };
+ 
++struct kvaser_usb_driver_info {
++	u32 quirks;
++	enum kvaser_usb_leaf_family family;
++	const struct kvaser_usb_dev_ops *ops;
++};
++
+ struct kvaser_usb_dev_cfg {
+ 	const struct can_clock clock;
+ 	const unsigned int timestamp_freq;
+@@ -185,4 +187,7 @@ int kvaser_usb_send_cmd_async(struct kvaser_usb_net_priv *priv, void *cmd,
+ 			      int len);
+ 
+ int kvaser_usb_can_rx_over_error(struct net_device *netdev);
++
++extern const struct can_bittiming_const kvaser_usb_flexc_bittiming_const;
++
+ #endif /* KVASER_USB_H */
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index 0f1d3e807d631..416763fd1f11c 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -79,104 +79,134 @@
+ #define USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID	269
+ #define USB_HYBRID_PRO_CANLIN_PRODUCT_ID	270
+ 
+-static inline bool kvaser_is_leaf(const struct usb_device_id *id)
+-{
+-	return (id->idProduct >= USB_LEAF_DEVEL_PRODUCT_ID &&
+-		id->idProduct <= USB_CAN_R_PRODUCT_ID) ||
+-		(id->idProduct >= USB_LEAF_LITE_V2_PRODUCT_ID &&
+-		 id->idProduct <= USB_MINI_PCIE_2HS_PRODUCT_ID);
+-}
++static const struct kvaser_usb_driver_info kvaser_usb_driver_info_hydra = {
++	.quirks = 0,
++	.ops = &kvaser_usb_hydra_dev_ops,
++};
+ 
+-static inline bool kvaser_is_usbcan(const struct usb_device_id *id)
+-{
+-	return id->idProduct >= USB_USBCAN_REVB_PRODUCT_ID &&
+-	       id->idProduct <= USB_MEMORATOR_PRODUCT_ID;
+-}
++static const struct kvaser_usb_driver_info kvaser_usb_driver_info_usbcan = {
++	.quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS |
++		  KVASER_USB_QUIRK_HAS_SILENT_MODE,
++	.family = KVASER_USBCAN,
++	.ops = &kvaser_usb_leaf_dev_ops,
++};
+ 
+-static inline bool kvaser_is_hydra(const struct usb_device_id *id)
+-{
+-	return id->idProduct >= USB_BLACKBIRD_V2_PRODUCT_ID &&
+-	       id->idProduct <= USB_HYBRID_PRO_CANLIN_PRODUCT_ID;
+-}
++static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf = {
++	.quirks = KVASER_USB_QUIRK_IGNORE_CLK_FREQ,
++	.family = KVASER_LEAF,
++	.ops = &kvaser_usb_leaf_dev_ops,
++};
++
++static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err = {
++	.quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS |
++		  KVASER_USB_QUIRK_IGNORE_CLK_FREQ,
++	.family = KVASER_LEAF,
++	.ops = &kvaser_usb_leaf_dev_ops,
++};
++
++static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err_listen = {
++	.quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS |
++		  KVASER_USB_QUIRK_HAS_SILENT_MODE |
++		  KVASER_USB_QUIRK_IGNORE_CLK_FREQ,
++	.family = KVASER_LEAF,
++	.ops = &kvaser_usb_leaf_dev_ops,
++};
++
++static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leafimx = {
++	.quirks = 0,
++	.ops = &kvaser_usb_leaf_dev_ops,
++};
+ 
+ static const struct usb_device_id kvaser_usb_table[] = {
+-	/* Leaf USB product IDs */
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID) },
++	/* Leaf M32C USB product IDs */
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
+-			       KVASER_USB_HAS_SILENT_MODE },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
+-			       KVASER_USB_HAS_SILENT_MODE },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LS_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
+-			       KVASER_USB_HAS_SILENT_MODE },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_SWC_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
+-			       KVASER_USB_HAS_SILENT_MODE },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LIN_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
+-			       KVASER_USB_HAS_SILENT_MODE },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_LS_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
+-			       KVASER_USB_HAS_SILENT_MODE },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_SWC_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
+-			       KVASER_USB_HAS_SILENT_MODE },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_DEVEL_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
+-			       KVASER_USB_HAS_SILENT_MODE },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSHS_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
+-			       KVASER_USB_HAS_SILENT_MODE },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_UPRO_HSHS_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_GI_PRODUCT_ID) },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_GI_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_OBDII_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
+-			       KVASER_USB_HAS_SILENT_MODE },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSLS_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_CH_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_SPRO_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_MERCURY_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_LEAF_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_CAN_R_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID) },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err },
++
++	/* Leaf i.MX28 USB product IDs */
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx },
+ 
+ 	/* USBCANII USB product IDs */
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN2_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_REVB_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMORATOR_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan },
+ 	{ USB_DEVICE(KVASER_VENDOR_ID, USB_VCI2_PRODUCT_ID),
+-		.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan },
+ 
+ 	/* Minihydra USB product IDs */
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_V2_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_5HS_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_5HS_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_4HS_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_HS_V2_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_2HS_V2_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_2HS_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_2HS_V2_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_CANLIN_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID) },
+-	{ USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_CANLIN_PRODUCT_ID) },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_V2_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_5HS_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_5HS_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_4HS_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_HS_V2_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_2HS_V2_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_2HS_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_2HS_V2_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_CANLIN_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
++	{ USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_CANLIN_PRODUCT_ID),
++		.driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(usb, kvaser_usb_table);
+@@ -267,6 +297,7 @@ int kvaser_usb_can_rx_over_error(struct net_device *netdev)
+ static void kvaser_usb_read_bulk_callback(struct urb *urb)
+ {
+ 	struct kvaser_usb *dev = urb->context;
++	const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
+ 	int err;
+ 	unsigned int i;
+ 
+@@ -283,8 +314,8 @@ static void kvaser_usb_read_bulk_callback(struct urb *urb)
+ 		goto resubmit_urb;
+ 	}
+ 
+-	dev->ops->dev_read_bulk_callback(dev, urb->transfer_buffer,
+-					 urb->actual_length);
++	ops->dev_read_bulk_callback(dev, urb->transfer_buffer,
++				    urb->actual_length);
+ 
+ resubmit_urb:
+ 	usb_fill_bulk_urb(urb, dev->udev,
+@@ -378,6 +409,7 @@ static int kvaser_usb_open(struct net_device *netdev)
+ {
+ 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
+ 	struct kvaser_usb *dev = priv->dev;
++	const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
+ 	int err;
+ 
+ 	err = open_candev(netdev);
+@@ -388,11 +420,11 @@ static int kvaser_usb_open(struct net_device *netdev)
+ 	if (err)
+ 		goto error;
+ 
+-	err = dev->ops->dev_set_opt_mode(priv);
++	err = ops->dev_set_opt_mode(priv);
+ 	if (err)
+ 		goto error;
+ 
+-	err = dev->ops->dev_start_chip(priv);
++	err = ops->dev_start_chip(priv);
+ 	if (err) {
+ 		netdev_warn(netdev, "Cannot start device, error %d\n", err);
+ 		goto error;
+@@ -449,22 +481,23 @@ static int kvaser_usb_close(struct net_device *netdev)
+ {
+ 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
+ 	struct kvaser_usb *dev = priv->dev;
++	const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
+ 	int err;
+ 
+ 	netif_stop_queue(netdev);
+ 
+-	err = dev->ops->dev_flush_queue(priv);
++	err = ops->dev_flush_queue(priv);
+ 	if (err)
+ 		netdev_warn(netdev, "Cannot flush queue, error %d\n", err);
+ 
+-	if (dev->ops->dev_reset_chip) {
+-		err = dev->ops->dev_reset_chip(dev, priv->channel);
++	if (ops->dev_reset_chip) {
++		err = ops->dev_reset_chip(dev, priv->channel);
+ 		if (err)
+ 			netdev_warn(netdev, "Cannot reset card, error %d\n",
+ 				    err);
+ 	}
+ 
+-	err = dev->ops->dev_stop_chip(priv);
++	err = ops->dev_stop_chip(priv);
+ 	if (err)
+ 		netdev_warn(netdev, "Cannot stop device, error %d\n", err);
+ 
+@@ -503,6 +536,7 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
+ {
+ 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
+ 	struct kvaser_usb *dev = priv->dev;
++	const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
+ 	struct net_device_stats *stats = &netdev->stats;
+ 	struct kvaser_usb_tx_urb_context *context = NULL;
+ 	struct urb *urb;
+@@ -545,8 +579,8 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
+ 		goto freeurb;
+ 	}
+ 
+-	buf = dev->ops->dev_frame_to_cmd(priv, skb, &context->dlc, &cmd_len,
+-					 context->echo_index);
++	buf = ops->dev_frame_to_cmd(priv, skb, &context->dlc, &cmd_len,
++				    context->echo_index);
+ 	if (!buf) {
+ 		stats->tx_dropped++;
+ 		dev_kfree_skb(skb);
+@@ -630,15 +664,16 @@ static void kvaser_usb_remove_interfaces(struct kvaser_usb *dev)
+ 	}
+ }
+ 
+-static int kvaser_usb_init_one(struct kvaser_usb *dev,
+-			       const struct usb_device_id *id, int channel)
++static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ {
+ 	struct net_device *netdev;
+ 	struct kvaser_usb_net_priv *priv;
++	const struct kvaser_usb_driver_info *driver_info = dev->driver_info;
++	const struct kvaser_usb_dev_ops *ops = driver_info->ops;
+ 	int err;
+ 
+-	if (dev->ops->dev_reset_chip) {
+-		err = dev->ops->dev_reset_chip(dev, channel);
++	if (ops->dev_reset_chip) {
++		err = ops->dev_reset_chip(dev, channel);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -667,20 +702,19 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev,
+ 	priv->can.state = CAN_STATE_STOPPED;
+ 	priv->can.clock.freq = dev->cfg->clock.freq;
+ 	priv->can.bittiming_const = dev->cfg->bittiming_const;
+-	priv->can.do_set_bittiming = dev->ops->dev_set_bittiming;
+-	priv->can.do_set_mode = dev->ops->dev_set_mode;
+-	if ((id->driver_info & KVASER_USB_HAS_TXRX_ERRORS) ||
++	priv->can.do_set_bittiming = ops->dev_set_bittiming;
++	priv->can.do_set_mode = ops->dev_set_mode;
++	if ((driver_info->quirks & KVASER_USB_QUIRK_HAS_TXRX_ERRORS) ||
+ 	    (priv->dev->card_data.capabilities & KVASER_USB_CAP_BERR_CAP))
+-		priv->can.do_get_berr_counter = dev->ops->dev_get_berr_counter;
+-	if (id->driver_info & KVASER_USB_HAS_SILENT_MODE)
++		priv->can.do_get_berr_counter = ops->dev_get_berr_counter;
++	if (driver_info->quirks & KVASER_USB_QUIRK_HAS_SILENT_MODE)
+ 		priv->can.ctrlmode_supported |= CAN_CTRLMODE_LISTENONLY;
+ 
+ 	priv->can.ctrlmode_supported |= dev->card_data.ctrlmode_supported;
+ 
+ 	if (priv->can.ctrlmode_supported & CAN_CTRLMODE_FD) {
+ 		priv->can.data_bittiming_const = dev->cfg->data_bittiming_const;
+-		priv->can.do_set_data_bittiming =
+-					dev->ops->dev_set_data_bittiming;
++		priv->can.do_set_data_bittiming = ops->dev_set_data_bittiming;
+ 	}
+ 
+ 	netdev->flags |= IFF_ECHO;
+@@ -711,29 +745,22 @@ static int kvaser_usb_probe(struct usb_interface *intf,
+ 	struct kvaser_usb *dev;
+ 	int err;
+ 	int i;
++	const struct kvaser_usb_driver_info *driver_info;
++	const struct kvaser_usb_dev_ops *ops;
++
++	driver_info = (const struct kvaser_usb_driver_info *)id->driver_info;
++	if (!driver_info)
++		return -ENODEV;
+ 
+ 	dev = devm_kzalloc(&intf->dev, sizeof(*dev), GFP_KERNEL);
+ 	if (!dev)
+ 		return -ENOMEM;
+ 
+-	if (kvaser_is_leaf(id)) {
+-		dev->card_data.leaf.family = KVASER_LEAF;
+-		dev->ops = &kvaser_usb_leaf_dev_ops;
+-	} else if (kvaser_is_usbcan(id)) {
+-		dev->card_data.leaf.family = KVASER_USBCAN;
+-		dev->ops = &kvaser_usb_leaf_dev_ops;
+-	} else if (kvaser_is_hydra(id)) {
+-		dev->ops = &kvaser_usb_hydra_dev_ops;
+-	} else {
+-		dev_err(&intf->dev,
+-			"Product ID (%d) is not a supported Kvaser USB device\n",
+-			id->idProduct);
+-		return -ENODEV;
+-	}
+-
+ 	dev->intf = intf;
++	dev->driver_info = driver_info;
++	ops = driver_info->ops;
+ 
+-	err = dev->ops->dev_setup_endpoints(dev);
++	err = ops->dev_setup_endpoints(dev);
+ 	if (err) {
+ 		dev_err(&intf->dev, "Cannot get usb endpoint(s)");
+ 		return err;
+@@ -747,22 +774,22 @@ static int kvaser_usb_probe(struct usb_interface *intf,
+ 
+ 	dev->card_data.ctrlmode_supported = 0;
+ 	dev->card_data.capabilities = 0;
+-	err = dev->ops->dev_init_card(dev);
++	err = ops->dev_init_card(dev);
+ 	if (err) {
+ 		dev_err(&intf->dev,
+ 			"Failed to initialize card, error %d\n", err);
+ 		return err;
+ 	}
+ 
+-	err = dev->ops->dev_get_software_info(dev);
++	err = ops->dev_get_software_info(dev);
+ 	if (err) {
+ 		dev_err(&intf->dev,
+ 			"Cannot get software info, error %d\n", err);
+ 		return err;
+ 	}
+ 
+-	if (dev->ops->dev_get_software_details) {
+-		err = dev->ops->dev_get_software_details(dev);
++	if (ops->dev_get_software_details) {
++		err = ops->dev_get_software_details(dev);
+ 		if (err) {
+ 			dev_err(&intf->dev,
+ 				"Cannot get software details, error %d\n", err);
+@@ -780,14 +807,14 @@ static int kvaser_usb_probe(struct usb_interface *intf,
+ 
+ 	dev_dbg(&intf->dev, "Max outstanding tx = %d URBs\n", dev->max_tx_urbs);
+ 
+-	err = dev->ops->dev_get_card_info(dev);
++	err = ops->dev_get_card_info(dev);
+ 	if (err) {
+ 		dev_err(&intf->dev, "Cannot get card info, error %d\n", err);
+ 		return err;
+ 	}
+ 
+-	if (dev->ops->dev_get_capabilities) {
+-		err = dev->ops->dev_get_capabilities(dev);
++	if (ops->dev_get_capabilities) {
++		err = ops->dev_get_capabilities(dev);
+ 		if (err) {
+ 			dev_err(&intf->dev,
+ 				"Cannot get capabilities, error %d\n", err);
+@@ -797,7 +824,7 @@ static int kvaser_usb_probe(struct usb_interface *intf,
+ 	}
+ 
+ 	for (i = 0; i < dev->nchannels; i++) {
+-		err = kvaser_usb_init_one(dev, id, i);
++		err = kvaser_usb_init_one(dev, i);
+ 		if (err) {
+ 			kvaser_usb_remove_interfaces(dev);
+ 			return err;
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+index 218fadc911558..a7c408acb0c09 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+@@ -371,7 +371,7 @@ static const struct can_bittiming_const kvaser_usb_hydra_kcan_bittiming_c = {
+ 	.brp_inc = 1,
+ };
+ 
+-static const struct can_bittiming_const kvaser_usb_hydra_flexc_bittiming_c = {
++const struct can_bittiming_const kvaser_usb_flexc_bittiming_const = {
+ 	.name = "kvaser_usb_flex",
+ 	.tseg1_min = 4,
+ 	.tseg1_max = 16,
+@@ -2024,5 +2024,5 @@ static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_flexc = {
+ 		.freq = 24000000,
+ 	},
+ 	.timestamp_freq = 1,
+-	.bittiming_const = &kvaser_usb_hydra_flexc_bittiming_c,
++	.bittiming_const = &kvaser_usb_flexc_bittiming_const,
+ };
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+index 8b5d1add899a6..0e0403dd05500 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+@@ -100,16 +100,6 @@
+ #define USBCAN_ERROR_STATE_RX_ERROR	BIT(1)
+ #define USBCAN_ERROR_STATE_BUSERROR	BIT(2)
+ 
+-/* bittiming parameters */
+-#define KVASER_USB_TSEG1_MIN		1
+-#define KVASER_USB_TSEG1_MAX		16
+-#define KVASER_USB_TSEG2_MIN		1
+-#define KVASER_USB_TSEG2_MAX		8
+-#define KVASER_USB_SJW_MAX		4
+-#define KVASER_USB_BRP_MIN		1
+-#define KVASER_USB_BRP_MAX		64
+-#define KVASER_USB_BRP_INC		1
+-
+ /* ctrl modes */
+ #define KVASER_CTRL_MODE_NORMAL		1
+ #define KVASER_CTRL_MODE_SILENT		2
+@@ -342,48 +332,68 @@ struct kvaser_usb_err_summary {
+ 	};
+ };
+ 
+-static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = {
+-	.name = "kvaser_usb",
+-	.tseg1_min = KVASER_USB_TSEG1_MIN,
+-	.tseg1_max = KVASER_USB_TSEG1_MAX,
+-	.tseg2_min = KVASER_USB_TSEG2_MIN,
+-	.tseg2_max = KVASER_USB_TSEG2_MAX,
+-	.sjw_max = KVASER_USB_SJW_MAX,
+-	.brp_min = KVASER_USB_BRP_MIN,
+-	.brp_max = KVASER_USB_BRP_MAX,
+-	.brp_inc = KVASER_USB_BRP_INC,
++static const struct can_bittiming_const kvaser_usb_leaf_m16c_bittiming_const = {
++	.name = "kvaser_usb_ucii",
++	.tseg1_min = 4,
++	.tseg1_max = 16,
++	.tseg2_min = 2,
++	.tseg2_max = 8,
++	.sjw_max = 4,
++	.brp_min = 1,
++	.brp_max = 16,
++	.brp_inc = 1,
++};
++
++static const struct can_bittiming_const kvaser_usb_leaf_m32c_bittiming_const = {
++	.name = "kvaser_usb_leaf",
++	.tseg1_min = 3,
++	.tseg1_max = 16,
++	.tseg2_min = 2,
++	.tseg2_max = 8,
++	.sjw_max = 4,
++	.brp_min = 2,
++	.brp_max = 128,
++	.brp_inc = 2,
+ };
+ 
+-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_8mhz = {
++static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_usbcan_dev_cfg = {
+ 	.clock = {
+ 		.freq = 8000000,
+ 	},
+ 	.timestamp_freq = 1,
+-	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
++	.bittiming_const = &kvaser_usb_leaf_m16c_bittiming_const,
++};
++
++static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_m32c_dev_cfg = {
++	.clock = {
++		.freq = 16000000,
++	},
++	.timestamp_freq = 1,
++	.bittiming_const = &kvaser_usb_leaf_m32c_bittiming_const,
+ };
+ 
+-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_16mhz = {
++static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_16mhz = {
+ 	.clock = {
+ 		.freq = 16000000,
+ 	},
+ 	.timestamp_freq = 1,
+-	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
++	.bittiming_const = &kvaser_usb_flexc_bittiming_const,
+ };
+ 
+-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_24mhz = {
++static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_24mhz = {
+ 	.clock = {
+ 		.freq = 24000000,
+ 	},
+ 	.timestamp_freq = 1,
+-	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
++	.bittiming_const = &kvaser_usb_flexc_bittiming_const,
+ };
+ 
+-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_32mhz = {
++static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_32mhz = {
+ 	.clock = {
+ 		.freq = 32000000,
+ 	},
+ 	.timestamp_freq = 1,
+-	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
++	.bittiming_const = &kvaser_usb_flexc_bittiming_const,
+ };
+ 
+ static void *
+@@ -405,7 +415,7 @@ kvaser_usb_leaf_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
+ 				      sizeof(struct kvaser_cmd_tx_can);
+ 		cmd->u.tx_can.channel = priv->channel;
+ 
+-		switch (dev->card_data.leaf.family) {
++		switch (dev->driver_info->family) {
+ 		case KVASER_LEAF:
+ 			cmd_tx_can_flags = &cmd->u.tx_can.leaf.flags;
+ 			break;
+@@ -525,16 +535,23 @@ static void kvaser_usb_leaf_get_software_info_leaf(struct kvaser_usb *dev,
+ 	dev->fw_version = le32_to_cpu(softinfo->fw_version);
+ 	dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx);
+ 
+-	switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) {
+-	case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK:
+-		dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz;
+-		break;
+-	case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK:
+-		dev->cfg = &kvaser_usb_leaf_dev_cfg_24mhz;
+-		break;
+-	case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK:
+-		dev->cfg = &kvaser_usb_leaf_dev_cfg_32mhz;
+-		break;
++	if (dev->driver_info->quirks & KVASER_USB_QUIRK_IGNORE_CLK_FREQ) {
++		/* Firmware expects bittiming parameters calculated for 16MHz
++		 * clock, regardless of the actual clock
++		 */
++		dev->cfg = &kvaser_usb_leaf_m32c_dev_cfg;
++	} else {
++		switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) {
++		case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK:
++			dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_16mhz;
++			break;
++		case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK:
++			dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_24mhz;
++			break;
++		case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK:
++			dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_32mhz;
++			break;
++		}
+ 	}
+ }
+ 
+@@ -551,7 +568,7 @@ static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev)
+ 	if (err)
+ 		return err;
+ 
+-	switch (dev->card_data.leaf.family) {
++	switch (dev->driver_info->family) {
+ 	case KVASER_LEAF:
+ 		kvaser_usb_leaf_get_software_info_leaf(dev, &cmd.u.leaf.softinfo);
+ 		break;
+@@ -559,7 +576,7 @@ static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev)
+ 		dev->fw_version = le32_to_cpu(cmd.u.usbcan.softinfo.fw_version);
+ 		dev->max_tx_urbs =
+ 			le16_to_cpu(cmd.u.usbcan.softinfo.max_outstanding_tx);
+-		dev->cfg = &kvaser_usb_leaf_dev_cfg_8mhz;
++		dev->cfg = &kvaser_usb_leaf_usbcan_dev_cfg;
+ 		break;
+ 	}
+ 
+@@ -598,7 +615,7 @@ static int kvaser_usb_leaf_get_card_info(struct kvaser_usb *dev)
+ 
+ 	dev->nchannels = cmd.u.cardinfo.nchannels;
+ 	if (dev->nchannels > KVASER_USB_MAX_NET_DEVICES ||
+-	    (dev->card_data.leaf.family == KVASER_USBCAN &&
++	    (dev->driver_info->family == KVASER_USBCAN &&
+ 	     dev->nchannels > MAX_USBCAN_NET_DEVICES))
+ 		return -EINVAL;
+ 
+@@ -734,7 +751,7 @@ kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
+ 	    new_state < CAN_STATE_BUS_OFF)
+ 		priv->can.can_stats.restarts++;
+ 
+-	switch (dev->card_data.leaf.family) {
++	switch (dev->driver_info->family) {
+ 	case KVASER_LEAF:
+ 		if (es->leaf.error_factor) {
+ 			priv->can.can_stats.bus_error++;
+@@ -813,7 +830,7 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
+ 		}
+ 	}
+ 
+-	switch (dev->card_data.leaf.family) {
++	switch (dev->driver_info->family) {
+ 	case KVASER_LEAF:
+ 		if (es->leaf.error_factor) {
+ 			cf->can_id |= CAN_ERR_BUSERROR | CAN_ERR_PROT;
+@@ -1005,7 +1022,7 @@ static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev,
+ 	stats = &priv->netdev->stats;
+ 
+ 	if ((cmd->u.rx_can_header.flag & MSG_FLAG_ERROR_FRAME) &&
+-	    (dev->card_data.leaf.family == KVASER_LEAF &&
++	    (dev->driver_info->family == KVASER_LEAF &&
+ 	     cmd->id == CMD_LEAF_LOG_MESSAGE)) {
+ 		kvaser_usb_leaf_leaf_rx_error(dev, cmd);
+ 		return;
+@@ -1021,7 +1038,7 @@ static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev,
+ 		return;
+ 	}
+ 
+-	switch (dev->card_data.leaf.family) {
++	switch (dev->driver_info->family) {
+ 	case KVASER_LEAF:
+ 		rx_data = cmd->u.leaf.rx_can.data;
+ 		break;
+@@ -1036,7 +1053,7 @@ static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev,
+ 		return;
+ 	}
+ 
+-	if (dev->card_data.leaf.family == KVASER_LEAF && cmd->id ==
++	if (dev->driver_info->family == KVASER_LEAF && cmd->id ==
+ 	    CMD_LEAF_LOG_MESSAGE) {
+ 		cf->can_id = le32_to_cpu(cmd->u.leaf.log_message.id);
+ 		if (cf->can_id & KVASER_EXTENDED_FRAME)
+@@ -1133,14 +1150,14 @@ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
+ 		break;
+ 
+ 	case CMD_LEAF_LOG_MESSAGE:
+-		if (dev->card_data.leaf.family != KVASER_LEAF)
++		if (dev->driver_info->family != KVASER_LEAF)
+ 			goto warn;
+ 		kvaser_usb_leaf_rx_can_msg(dev, cmd);
+ 		break;
+ 
+ 	case CMD_CHIP_STATE_EVENT:
+ 	case CMD_CAN_ERROR_EVENT:
+-		if (dev->card_data.leaf.family == KVASER_LEAF)
++		if (dev->driver_info->family == KVASER_LEAF)
+ 			kvaser_usb_leaf_leaf_rx_error(dev, cmd);
+ 		else
+ 			kvaser_usb_leaf_usbcan_rx_error(dev, cmd);
+@@ -1152,12 +1169,12 @@ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
+ 
+ 	/* Ignored commands */
+ 	case CMD_USBCAN_CLOCK_OVERFLOW_EVENT:
+-		if (dev->card_data.leaf.family != KVASER_USBCAN)
++		if (dev->driver_info->family != KVASER_USBCAN)
+ 			goto warn;
+ 		break;
+ 
+ 	case CMD_FLUSH_QUEUE_REPLY:
+-		if (dev->card_data.leaf.family != KVASER_LEAF)
++		if (dev->driver_info->family != KVASER_LEAF)
+ 			goto warn;
+ 		break;
+ 
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 61fb2a092451b..7fe2e47dc83dc 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -5228,6 +5228,15 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset)
+ 			release_sub_crqs(adapter, 0);
+ 			rc = init_sub_crqs(adapter);
+ 		} else {
++			/* no need to reinitialize completely, but we do
++			 * need to clean up transmits that were in flight
++			 * when we processed the reset.  Failure to do so
++			 * will confound the upper layer, usually TCP, by
++			 * creating the illusion of transmits that are
++			 * awaiting completion.
++			 */
++			clean_tx_pools(adapter);
++
+ 			rc = reset_sub_crq_queues(adapter);
+ 		}
+ 	} else {
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index effdc3361266f..dd630b6bc74bd 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -37,6 +37,7 @@
+ #include <net/tc_act/tc_mirred.h>
+ #include <net/udp_tunnel.h>
+ #include <net/xdp_sock.h>
++#include <linux/bitfield.h>
+ #include "i40e_type.h"
+ #include "i40e_prototype.h"
+ #include <linux/net/intel/i40e_client.h>
+@@ -991,6 +992,21 @@ static inline void i40e_write_fd_input_set(struct i40e_pf *pf,
+ 			  (u32)(val & 0xFFFFFFFFULL));
+ }
+ 
++/**
++ * i40e_get_pf_count - get PCI PF count.
++ * @hw: pointer to a hw.
++ *
++ * Reports the function number of the highest PCI physical
++ * function plus 1 as it is loaded from the NVM.
++ *
++ * Return: PCI PF count.
++ **/
++static inline u32 i40e_get_pf_count(struct i40e_hw *hw)
++{
++	return FIELD_GET(I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK,
++			 rd32(hw, I40E_GLGEN_PCIFCNCNT));
++}
++
+ /* needed by i40e_ethtool.c */
+ int i40e_up(struct i40e_vsi *vsi);
+ void i40e_down(struct i40e_vsi *vsi);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 614f3e9951009..58453f7958df9 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -548,6 +548,47 @@ void i40e_pf_reset_stats(struct i40e_pf *pf)
+ 	pf->hw_csum_rx_error = 0;
+ }
+ 
++/**
++ * i40e_compute_pci_to_hw_id - compute index form PCI function.
++ * @vsi: ptr to the VSI to read from.
++ * @hw: ptr to the hardware info.
++ **/
++static u32 i40e_compute_pci_to_hw_id(struct i40e_vsi *vsi, struct i40e_hw *hw)
++{
++	int pf_count = i40e_get_pf_count(hw);
++
++	if (vsi->type == I40E_VSI_SRIOV)
++		return (hw->port * BIT(7)) / pf_count + vsi->vf_id;
++
++	return hw->port + BIT(7);
++}
++
++/**
++ * i40e_stat_update64 - read and update a 64 bit stat from the chip.
++ * @hw: ptr to the hardware info.
++ * @hireg: the high 32 bit reg to read.
++ * @loreg: the low 32 bit reg to read.
++ * @offset_loaded: has the initial offset been loaded yet.
++ * @offset: ptr to current offset value.
++ * @stat: ptr to the stat.
++ *
++ * Since the device stats are not reset at PFReset, they will not
++ * be zeroed when the driver starts.  We'll save the first values read
++ * and use them as offsets to be subtracted from the raw values in order
++ * to report stats that count from zero.
++ **/
++static void i40e_stat_update64(struct i40e_hw *hw, u32 hireg, u32 loreg,
++			       bool offset_loaded, u64 *offset, u64 *stat)
++{
++	u64 new_data;
++
++	new_data = rd64(hw, loreg);
++
++	if (!offset_loaded || new_data < *offset)
++		*offset = new_data;
++	*stat = new_data - *offset;
++}
++
+ /**
+  * i40e_stat_update48 - read and update a 48 bit stat from the chip
+  * @hw: ptr to the hardware info
+@@ -619,6 +660,34 @@ static void i40e_stat_update_and_clear32(struct i40e_hw *hw, u32 reg, u64 *stat)
+ 	*stat += new_data;
+ }
+ 
++/**
++ * i40e_stats_update_rx_discards - update rx_discards.
++ * @vsi: ptr to the VSI to be updated.
++ * @hw: ptr to the hardware info.
++ * @stat_idx: VSI's stat_counter_idx.
++ * @offset_loaded: ptr to the VSI's stat_offsets_loaded.
++ * @stat_offset: ptr to stat_offset to store first read of specific register.
++ * @stat: ptr to VSI's stat to be updated.
++ **/
++static void
++i40e_stats_update_rx_discards(struct i40e_vsi *vsi, struct i40e_hw *hw,
++			      int stat_idx, bool offset_loaded,
++			      struct i40e_eth_stats *stat_offset,
++			      struct i40e_eth_stats *stat)
++{
++	u64 rx_rdpc, rx_rxerr;
++
++	i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx), offset_loaded,
++			   &stat_offset->rx_discards, &rx_rdpc);
++	i40e_stat_update64(hw,
++			   I40E_GL_RXERR1H(i40e_compute_pci_to_hw_id(vsi, hw)),
++			   I40E_GL_RXERR1L(i40e_compute_pci_to_hw_id(vsi, hw)),
++			   offset_loaded, &stat_offset->rx_discards_other,
++			   &rx_rxerr);
++
++	stat->rx_discards = rx_rdpc + rx_rxerr;
++}
++
+ /**
+  * i40e_update_eth_stats - Update VSI-specific ethernet statistics counters.
+  * @vsi: the VSI to be updated
+@@ -678,6 +747,10 @@ void i40e_update_eth_stats(struct i40e_vsi *vsi)
+ 			   I40E_GLV_BPTCL(stat_idx),
+ 			   vsi->stat_offsets_loaded,
+ 			   &oes->tx_broadcast, &es->tx_broadcast);
++
++	i40e_stats_update_rx_discards(vsi, hw, stat_idx,
++				      vsi->stat_offsets_loaded, oes, es);
++
+ 	vsi->stat_offsets_loaded = true;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_register.h b/drivers/net/ethernet/intel/i40e/i40e_register.h
+index 8335f151ceefc..117fd9674d7f8 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_register.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_register.h
+@@ -77,6 +77,11 @@
+ #define I40E_GLGEN_MSRWD_MDIWRDATA_SHIFT 0
+ #define I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT 16
+ #define I40E_GLGEN_MSRWD_MDIRDDATA_MASK I40E_MASK(0xFFFF, I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT)
++#define I40E_GLGEN_PCIFCNCNT                0x001C0AB4 /* Reset: PCIR */
++#define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT 0
++#define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK  I40E_MASK(0x1F, I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT)
++#define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT 16
++#define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_MASK  I40E_MASK(0xFF, I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT)
+ #define I40E_GLGEN_RSTAT 0x000B8188 /* Reset: POR */
+ #define I40E_GLGEN_RSTAT_DEVSTATE_SHIFT 0
+ #define I40E_GLGEN_RSTAT_DEVSTATE_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_DEVSTATE_SHIFT)
+@@ -461,6 +466,14 @@
+ #define I40E_VFQF_HKEY1_MAX_INDEX 12
+ #define I40E_VFQF_HLUT1(_i, _VF) (0x00220000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...15, _VF=0...127 */ /* Reset: CORER */
+ #define I40E_VFQF_HLUT1_MAX_INDEX 15
++#define I40E_GL_RXERR1H(_i)             (0x00318004 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
++#define I40E_GL_RXERR1H_MAX_INDEX       143
++#define I40E_GL_RXERR1H_RXERR1H_SHIFT   0
++#define I40E_GL_RXERR1H_RXERR1H_MASK    I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR1H_RXERR1H_SHIFT)
++#define I40E_GL_RXERR1L(_i)             (0x00318000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
++#define I40E_GL_RXERR1L_MAX_INDEX       143
++#define I40E_GL_RXERR1L_RXERR1L_SHIFT   0
++#define I40E_GL_RXERR1L_RXERR1L_MASK    I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR1L_RXERR1L_SHIFT)
+ #define I40E_GLPRT_BPRCH(_i) (0x003005E4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
+ #define I40E_GLPRT_BPRCL(_i) (0x003005E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
+ #define I40E_GLPRT_BPTCH(_i) (0x00300A04 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
+index add67f7b73e8b..446672a7e39fb 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
+@@ -1172,6 +1172,7 @@ struct i40e_eth_stats {
+ 	u64 tx_broadcast;		/* bptc */
+ 	u64 tx_discards;		/* tdpc */
+ 	u64 tx_errors;			/* tepc */
++	u64 rx_discards_other;          /* rxerr1 */
+ };
+ 
+ /* Statistics collected per VEB per TC */
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 5eac3f494d9e9..c025dadcce289 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4183,7 +4183,6 @@ static void rtl8169_tso_csum_v1(struct sk_buff *skb, u32 *opts)
+ static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp,
+ 				struct sk_buff *skb, u32 *opts)
+ {
+-	u32 transport_offset = (u32)skb_transport_offset(skb);
+ 	struct skb_shared_info *shinfo = skb_shinfo(skb);
+ 	u32 mss = shinfo->gso_size;
+ 
+@@ -4200,7 +4199,7 @@ static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp,
+ 			WARN_ON_ONCE(1);
+ 		}
+ 
+-		opts[0] |= transport_offset << GTTCPHO_SHIFT;
++		opts[0] |= skb_transport_offset(skb) << GTTCPHO_SHIFT;
+ 		opts[1] |= mss << TD1_MSS_SHIFT;
+ 	} else if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		u8 ip_protocol;
+@@ -4228,7 +4227,7 @@ static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp,
+ 		else
+ 			WARN_ON_ONCE(1);
+ 
+-		opts[1] |= transport_offset << TCPHO_SHIFT;
++		opts[1] |= skb_transport_offset(skb) << TCPHO_SHIFT;
+ 	} else {
+ 		unsigned int padto = rtl_quirk_packet_padto(tp, skb);
+ 
+@@ -4401,14 +4400,13 @@ static netdev_features_t rtl8169_features_check(struct sk_buff *skb,
+ 						struct net_device *dev,
+ 						netdev_features_t features)
+ {
+-	int transport_offset = skb_transport_offset(skb);
+ 	struct rtl8169_private *tp = netdev_priv(dev);
+ 
+ 	if (skb_is_gso(skb)) {
+ 		if (tp->mac_version == RTL_GIGA_MAC_VER_34)
+ 			features = rtl8168evl_fix_tso(skb, features);
+ 
+-		if (transport_offset > GTTCPHO_MAX &&
++		if (skb_transport_offset(skb) > GTTCPHO_MAX &&
+ 		    rtl_chip_supports_csum_v2(tp))
+ 			features &= ~NETIF_F_ALL_TSO;
+ 	} else if (skb->ip_summed == CHECKSUM_PARTIAL) {
+@@ -4419,7 +4417,7 @@ static netdev_features_t rtl8169_features_check(struct sk_buff *skb,
+ 		if (rtl_quirk_packet_padto(tp, skb))
+ 			features &= ~NETIF_F_CSUM_MASK;
+ 
+-		if (transport_offset > TCPHO_MAX &&
++		if (skb_transport_offset(skb) > TCPHO_MAX &&
+ 		    rtl_chip_supports_csum_v2(tp))
+ 			features &= ~NETIF_F_CSUM_MASK;
+ 	}
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 74a833ad7aa99..58dd77efcaade 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -2102,7 +2102,7 @@ static void usbnet_async_cmd_cb(struct urb *urb)
+ int usbnet_write_cmd_async(struct usbnet *dev, u8 cmd, u8 reqtype,
+ 			   u16 value, u16 index, const void *data, u16 size)
+ {
+-	struct usb_ctrlrequest *req = NULL;
++	struct usb_ctrlrequest *req;
+ 	struct urb *urb;
+ 	int err = -ENOMEM;
+ 	void *buf = NULL;
+@@ -2120,7 +2120,7 @@ int usbnet_write_cmd_async(struct usbnet *dev, u8 cmd, u8 reqtype,
+ 		if (!buf) {
+ 			netdev_err(dev->net, "Error allocating buffer"
+ 				   " in %s!\n", __func__);
+-			goto fail_free;
++			goto fail_free_urb;
+ 		}
+ 	}
+ 
+@@ -2144,14 +2144,21 @@ int usbnet_write_cmd_async(struct usbnet *dev, u8 cmd, u8 reqtype,
+ 	if (err < 0) {
+ 		netdev_err(dev->net, "Error submitting the control"
+ 			   " message: status=%d\n", err);
+-		goto fail_free;
++		goto fail_free_all;
+ 	}
+ 	return 0;
+ 
++fail_free_all:
++	kfree(req);
+ fail_free_buf:
+ 	kfree(buf);
+-fail_free:
+-	kfree(req);
++	/*
++	 * avoid a double free
++	 * needed because the flag can be set only
++	 * after filling the URB
++	 */
++	urb->transfer_flags = 0;
++fail_free_urb:
+ 	usb_free_urb(urb);
+ fail:
+ 	return err;
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c b/drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c
+index 4ada80317a3bd..b5c1a8f363f32 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c
+@@ -158,26 +158,26 @@ static const struct sunxi_desc_pin sun8i_a83t_pins[] = {
+ 	SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 14),
+ 		  SUNXI_FUNCTION(0x0, "gpio_in"),
+ 		  SUNXI_FUNCTION(0x1, "gpio_out"),
+-		  SUNXI_FUNCTION(0x2, "nand"),		/* DQ6 */
++		  SUNXI_FUNCTION(0x2, "nand0"),		/* DQ6 */
+ 		  SUNXI_FUNCTION(0x3, "mmc2")),		/* D6 */
+ 	SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 15),
+ 		  SUNXI_FUNCTION(0x0, "gpio_in"),
+ 		  SUNXI_FUNCTION(0x1, "gpio_out"),
+-		  SUNXI_FUNCTION(0x2, "nand"),		/* DQ7 */
++		  SUNXI_FUNCTION(0x2, "nand0"),		/* DQ7 */
+ 		  SUNXI_FUNCTION(0x3, "mmc2")),		/* D7 */
+ 	SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 16),
+ 		  SUNXI_FUNCTION(0x0, "gpio_in"),
+ 		  SUNXI_FUNCTION(0x1, "gpio_out"),
+-		  SUNXI_FUNCTION(0x2, "nand"),		/* DQS */
++		  SUNXI_FUNCTION(0x2, "nand0"),		/* DQS */
+ 		  SUNXI_FUNCTION(0x3, "mmc2")),		/* RST */
+ 	SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 17),
+ 		  SUNXI_FUNCTION(0x0, "gpio_in"),
+ 		  SUNXI_FUNCTION(0x1, "gpio_out"),
+-		  SUNXI_FUNCTION(0x2, "nand")),		/* CE2 */
++		  SUNXI_FUNCTION(0x2, "nand0")),	/* CE2 */
+ 	SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 18),
+ 		  SUNXI_FUNCTION(0x0, "gpio_in"),
+ 		  SUNXI_FUNCTION(0x1, "gpio_out"),
+-		  SUNXI_FUNCTION(0x2, "nand")),		/* CE3 */
++		  SUNXI_FUNCTION(0x2, "nand0")),	/* CE3 */
+ 	/* Hole */
+ 	SUNXI_PIN(SUNXI_PINCTRL_PIN(D, 2),
+ 		  SUNXI_FUNCTION(0x0, "gpio_in"),
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index be7f4f95f455d..24c861434bf13 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -544,6 +544,8 @@ static int sunxi_pconf_set(struct pinctrl_dev *pctldev, unsigned pin,
+ 	struct sunxi_pinctrl *pctl = pinctrl_dev_get_drvdata(pctldev);
+ 	int i;
+ 
++	pin -= pctl->desc->pin_base;
++
+ 	for (i = 0; i < num_configs; i++) {
+ 		enum pin_config_param param;
+ 		unsigned long flags;
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 13de2bebb09a5..76fedfd1b1b0a 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -2510,6 +2510,11 @@ static int fbcon_set_font(struct vc_data *vc, struct console_font *font,
+ 	if (charcount != 256 && charcount != 512)
+ 		return -EINVAL;
+ 
++	/* font bigger than screen resolution ? */
++	if (w > FBCON_SWAP(info->var.rotate, info->var.xres, info->var.yres) ||
++	    h > FBCON_SWAP(info->var.rotate, info->var.yres, info->var.xres))
++		return -EINVAL;
++
+ 	/* Make sure drawing engine can handle the font */
+ 	if (!(info->pixmap.blit_x & (1 << (font->width - 1))) ||
+ 	    !(info->pixmap.blit_y & (1 << (font->height - 1))))
+@@ -2771,6 +2776,34 @@ void fbcon_update_vcs(struct fb_info *info, bool all)
+ }
+ EXPORT_SYMBOL(fbcon_update_vcs);
+ 
++/* let fbcon check if it supports a new screen resolution */
++int fbcon_modechange_possible(struct fb_info *info, struct fb_var_screeninfo *var)
++{
++	struct fbcon_ops *ops = info->fbcon_par;
++	struct vc_data *vc;
++	unsigned int i;
++
++	WARN_CONSOLE_UNLOCKED();
++
++	if (!ops)
++		return 0;
++
++	/* prevent setting a screen size which is smaller than font size */
++	for (i = first_fb_vc; i <= last_fb_vc; i++) {
++		vc = vc_cons[i].d;
++		if (!vc || vc->vc_mode != KD_TEXT ||
++			   registered_fb[con2fb_map[i]] != info)
++			continue;
++
++		if (vc->vc_font.width  > FBCON_SWAP(var->rotate, var->xres, var->yres) ||
++		    vc->vc_font.height > FBCON_SWAP(var->rotate, var->yres, var->xres))
++			return -EINVAL;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(fbcon_modechange_possible);
++
+ int fbcon_mode_deleted(struct fb_info *info,
+ 		       struct fb_videomode *mode)
+ {
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 00939ca2065a9..d787a344b3b88 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -513,7 +513,7 @@ static int fb_show_logo_line(struct fb_info *info, int rotate,
+ 
+ 		while (n && (n * (logo->width + 8) - 8 > xres))
+ 			--n;
+-		image.dx = (xres - n * (logo->width + 8) - 8) / 2;
++		image.dx = (xres - (n * (logo->width + 8) - 8)) / 2;
+ 		image.dy = y ?: (yres - logo->height) / 2;
+ 	} else {
+ 		image.dx = 0;
+@@ -1019,6 +1019,16 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ 	if (ret)
+ 		return ret;
+ 
++	/* verify that virtual resolution >= physical resolution */
++	if (var->xres_virtual < var->xres ||
++	    var->yres_virtual < var->yres) {
++		pr_warn("WARNING: fbcon: Driver '%s' missed to adjust virtual screen size (%ux%u vs. %ux%u)\n",
++			info->fix.id,
++			var->xres_virtual, var->yres_virtual,
++			var->xres, var->yres);
++		return -EINVAL;
++	}
++
+ 	if ((var->activate & FB_ACTIVATE_MASK) != FB_ACTIVATE_NOW)
+ 		return 0;
+ 
+@@ -1109,7 +1119,9 @@ static long do_fb_ioctl(struct fb_info *info, unsigned int cmd,
+ 			return -EFAULT;
+ 		console_lock();
+ 		lock_fb_info(info);
+-		ret = fb_set_var(info, &var);
++		ret = fbcon_modechange_possible(info, &var);
++		if (!ret)
++			ret = fb_set_var(info, &var);
+ 		if (!ret)
+ 			fbcon_update_vcs(info, var.activate & FB_ACTIVATE_ALL);
+ 		unlock_fb_info(info);
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index e958b1c745615..03497741aef74 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -3170,7 +3170,6 @@ xfs_rename(
+ 	 * appropriately.
+ 	 */
+ 	if (flags & RENAME_WHITEOUT) {
+-		ASSERT(!(flags & (RENAME_NOREPLACE | RENAME_EXCHANGE)));
+ 		error = xfs_rename_alloc_whiteout(target_dp, &wip);
+ 		if (error)
+ 			return error;
+diff --git a/include/linux/fbcon.h b/include/linux/fbcon.h
+index ff5596dd30f85..2382dec6d6ab8 100644
+--- a/include/linux/fbcon.h
++++ b/include/linux/fbcon.h
+@@ -15,6 +15,8 @@ void fbcon_new_modelist(struct fb_info *info);
+ void fbcon_get_requirement(struct fb_info *info,
+ 			   struct fb_blit_caps *caps);
+ void fbcon_fb_blanked(struct fb_info *info, int blank);
++int  fbcon_modechange_possible(struct fb_info *info,
++			       struct fb_var_screeninfo *var);
+ void fbcon_update_vcs(struct fb_info *info, bool all);
+ void fbcon_remap_all(struct fb_info *info);
+ int fbcon_set_con2fb_map_ioctl(void __user *argp);
+@@ -33,6 +35,8 @@ static inline void fbcon_new_modelist(struct fb_info *info) {}
+ static inline void fbcon_get_requirement(struct fb_info *info,
+ 					 struct fb_blit_caps *caps) {}
+ static inline void fbcon_fb_blanked(struct fb_info *info, int blank) {}
++static inline int  fbcon_modechange_possible(struct fb_info *info,
++				struct fb_var_screeninfo *var) { return 0; }
+ static inline void fbcon_update_vcs(struct fb_info *info, bool all) {}
+ static inline void fbcon_remap_all(struct fb_info *info) {}
+ static inline int fbcon_set_con2fb_map_ioctl(void __user *argp) { return 0; }
+diff --git a/include/linux/memregion.h b/include/linux/memregion.h
+index e11595256cac0..c04c4fd2e2091 100644
+--- a/include/linux/memregion.h
++++ b/include/linux/memregion.h
+@@ -16,7 +16,7 @@ static inline int memregion_alloc(gfp_t gfp)
+ {
+ 	return -ENOMEM;
+ }
+-void memregion_free(int id)
++static inline void memregion_free(int id)
+ {
+ }
+ #endif
+diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
+index 30091ab5de287..718600e83020a 100644
+--- a/include/linux/pm_runtime.h
++++ b/include/linux/pm_runtime.h
+@@ -58,7 +58,7 @@ extern void pm_runtime_get_suppliers(struct device *dev);
+ extern void pm_runtime_put_suppliers(struct device *dev);
+ extern void pm_runtime_new_link(struct device *dev);
+ extern void pm_runtime_drop_link(struct device_link *link);
+-extern void pm_runtime_release_supplier(struct device_link *link, bool check_idle);
++extern void pm_runtime_release_supplier(struct device_link *link);
+ 
+ /**
+  * pm_runtime_get_if_in_use - Conditionally bump up runtime PM usage counter.
+@@ -280,8 +280,7 @@ static inline void pm_runtime_get_suppliers(struct device *dev) {}
+ static inline void pm_runtime_put_suppliers(struct device *dev) {}
+ static inline void pm_runtime_new_link(struct device *dev) {}
+ static inline void pm_runtime_drop_link(struct device_link *link) {}
+-static inline void pm_runtime_release_supplier(struct device_link *link,
+-					       bool check_idle) {}
++static inline void pm_runtime_release_supplier(struct device_link *link) {}
+ 
+ #endif /* !CONFIG_PM */
+ 
+diff --git a/include/linux/rtsx_usb.h b/include/linux/rtsx_usb.h
+index 159729cffd8e1..3247ed8e9ff0f 100644
+--- a/include/linux/rtsx_usb.h
++++ b/include/linux/rtsx_usb.h
+@@ -54,8 +54,6 @@ struct rtsx_ucr {
+ 	struct usb_device	*pusb_dev;
+ 	struct usb_interface	*pusb_intf;
+ 	struct usb_sg_request	current_sg;
+-	unsigned char		*iobuf;
+-	dma_addr_t		iobuf_dma;
+ 
+ 	struct timer_list	sg_timer;
+ 	struct mutex		dev_mutex;
+diff --git a/include/video/of_display_timing.h b/include/video/of_display_timing.h
+index e1126a74882a5..eff166fdd81b9 100644
+--- a/include/video/of_display_timing.h
++++ b/include/video/of_display_timing.h
+@@ -8,6 +8,8 @@
+ #ifndef __LINUX_OF_DISPLAY_TIMING_H
+ #define __LINUX_OF_DISPLAY_TIMING_H
+ 
++#include <linux/errno.h>
++
+ struct device_node;
+ struct display_timing;
+ struct display_timings;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 015bf2ba4a0b6..15ddc4292bc0b 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1249,6 +1249,21 @@ static void __reg_bound_offset(struct bpf_reg_state *reg)
+ 	reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);
+ }
+ 
++static void reg_bounds_sync(struct bpf_reg_state *reg)
++{
++	/* We might have learned new bounds from the var_off. */
++	__update_reg_bounds(reg);
++	/* We might have learned something about the sign bit. */
++	__reg_deduce_bounds(reg);
++	/* We might have learned some bits from the bounds. */
++	__reg_bound_offset(reg);
++	/* Intersecting with the old var_off might have improved our bounds
++	 * slightly, e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
++	 * then new var_off is (0; 0x7f...fc) which improves our umax.
++	 */
++	__update_reg_bounds(reg);
++}
++
+ static bool __reg32_bound_s64(s32 a)
+ {
+ 	return a >= 0 && a <= S32_MAX;
+@@ -1290,16 +1305,8 @@ static void __reg_combine_32_into_64(struct bpf_reg_state *reg)
+ 		 * so they do not impact tnum bounds calculation.
+ 		 */
+ 		__mark_reg64_unbounded(reg);
+-		__update_reg_bounds(reg);
+ 	}
+-
+-	/* Intersecting with the old var_off might have improved our bounds
+-	 * slightly.  e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
+-	 * then new var_off is (0; 0x7f...fc) which improves our umax.
+-	 */
+-	__reg_deduce_bounds(reg);
+-	__reg_bound_offset(reg);
+-	__update_reg_bounds(reg);
++	reg_bounds_sync(reg);
+ }
+ 
+ static bool __reg64_bound_s32(s64 a)
+@@ -1315,7 +1322,6 @@ static bool __reg64_bound_u32(u64 a)
+ static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
+ {
+ 	__mark_reg32_unbounded(reg);
+-
+ 	if (__reg64_bound_s32(reg->smin_value) && __reg64_bound_s32(reg->smax_value)) {
+ 		reg->s32_min_value = (s32)reg->smin_value;
+ 		reg->s32_max_value = (s32)reg->smax_value;
+@@ -1324,14 +1330,7 @@ static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
+ 		reg->u32_min_value = (u32)reg->umin_value;
+ 		reg->u32_max_value = (u32)reg->umax_value;
+ 	}
+-
+-	/* Intersecting with the old var_off might have improved our bounds
+-	 * slightly.  e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
+-	 * then new var_off is (0; 0x7f...fc) which improves our umax.
+-	 */
+-	__reg_deduce_bounds(reg);
+-	__reg_bound_offset(reg);
+-	__update_reg_bounds(reg);
++	reg_bounds_sync(reg);
+ }
+ 
+ /* Mark a register as having a completely unknown (scalar) value. */
+@@ -5230,9 +5229,7 @@ static void do_refine_retval_range(struct bpf_reg_state *regs, int ret_type,
+ 	ret_reg->s32_max_value = meta->msize_max_value;
+ 	ret_reg->smin_value = -MAX_ERRNO;
+ 	ret_reg->s32_min_value = -MAX_ERRNO;
+-	__reg_deduce_bounds(ret_reg);
+-	__reg_bound_offset(ret_reg);
+-	__update_reg_bounds(ret_reg);
++	reg_bounds_sync(ret_reg);
+ }
+ 
+ static int
+@@ -6197,11 +6194,7 @@ reject:
+ 
+ 	if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type))
+ 		return -EINVAL;
+-
+-	__update_reg_bounds(dst_reg);
+-	__reg_deduce_bounds(dst_reg);
+-	__reg_bound_offset(dst_reg);
+-
++	reg_bounds_sync(dst_reg);
+ 	if (sanitize_check_bounds(env, insn, dst_reg) < 0)
+ 		return -EACCES;
+ 	if (sanitize_needed(opcode)) {
+@@ -6939,10 +6932,7 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
+ 	/* ALU32 ops are zero extended into 64bit register */
+ 	if (alu32)
+ 		zext_32_to_64(dst_reg);
+-
+-	__update_reg_bounds(dst_reg);
+-	__reg_deduce_bounds(dst_reg);
+-	__reg_bound_offset(dst_reg);
++	reg_bounds_sync(dst_reg);
+ 	return 0;
+ }
+ 
+@@ -7131,10 +7121,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ 							 insn->dst_reg);
+ 				}
+ 				zext_32_to_64(dst_reg);
+-
+-				__update_reg_bounds(dst_reg);
+-				__reg_deduce_bounds(dst_reg);
+-				__reg_bound_offset(dst_reg);
++				reg_bounds_sync(dst_reg);
+ 			}
+ 		} else {
+ 			/* case: R = imm
+@@ -7512,26 +7499,33 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
+ 		return;
+ 
+ 	switch (opcode) {
++	/* JEQ/JNE comparison doesn't change the register equivalence.
++	 *
++	 * r1 = r2;
++	 * if (r1 == 42) goto label;
++	 * ...
++	 * label: // here both r1 and r2 are known to be 42.
++	 *
++	 * Hence when marking register as known preserve it's ID.
++	 */
+ 	case BPF_JEQ:
++		if (is_jmp32) {
++			__mark_reg32_known(true_reg, val32);
++			true_32off = tnum_subreg(true_reg->var_off);
++		} else {
++			___mark_reg_known(true_reg, val);
++			true_64off = true_reg->var_off;
++		}
++		break;
+ 	case BPF_JNE:
+-	{
+-		struct bpf_reg_state *reg =
+-			opcode == BPF_JEQ ? true_reg : false_reg;
+-
+-		/* JEQ/JNE comparison doesn't change the register equivalence.
+-		 * r1 = r2;
+-		 * if (r1 == 42) goto label;
+-		 * ...
+-		 * label: // here both r1 and r2 are known to be 42.
+-		 *
+-		 * Hence when marking register as known preserve it's ID.
+-		 */
+-		if (is_jmp32)
+-			__mark_reg32_known(reg, val32);
+-		else
+-			___mark_reg_known(reg, val);
++		if (is_jmp32) {
++			__mark_reg32_known(false_reg, val32);
++			false_32off = tnum_subreg(false_reg->var_off);
++		} else {
++			___mark_reg_known(false_reg, val);
++			false_64off = false_reg->var_off;
++		}
+ 		break;
+-	}
+ 	case BPF_JSET:
+ 		if (is_jmp32) {
+ 			false_32off = tnum_and(false_32off, tnum_const(~val32));
+@@ -7686,21 +7680,8 @@ static void __reg_combine_min_max(struct bpf_reg_state *src_reg,
+ 							dst_reg->smax_value);
+ 	src_reg->var_off = dst_reg->var_off = tnum_intersect(src_reg->var_off,
+ 							     dst_reg->var_off);
+-	/* We might have learned new bounds from the var_off. */
+-	__update_reg_bounds(src_reg);
+-	__update_reg_bounds(dst_reg);
+-	/* We might have learned something about the sign bit. */
+-	__reg_deduce_bounds(src_reg);
+-	__reg_deduce_bounds(dst_reg);
+-	/* We might have learned some bits from the bounds. */
+-	__reg_bound_offset(src_reg);
+-	__reg_bound_offset(dst_reg);
+-	/* Intersecting with the old var_off might have improved our bounds
+-	 * slightly.  e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
+-	 * then new var_off is (0; 0x7f...fc) which improves our umax.
+-	 */
+-	__update_reg_bounds(src_reg);
+-	__update_reg_bounds(dst_reg);
++	reg_bounds_sync(src_reg);
++	reg_bounds_sync(dst_reg);
+ }
+ 
+ static void reg_combine_min_max(struct bpf_reg_state *true_src,
+diff --git a/lib/idr.c b/lib/idr.c
+index f4ab4f4aa3c7f..7ecdfdb5309e7 100644
+--- a/lib/idr.c
++++ b/lib/idr.c
+@@ -491,7 +491,8 @@ void ida_free(struct ida *ida, unsigned int id)
+ 	struct ida_bitmap *bitmap;
+ 	unsigned long flags;
+ 
+-	BUG_ON((int)id < 0);
++	if ((int)id < 0)
++		return;
+ 
+ 	xas_lock_irqsave(&xas, flags);
+ 	bitmap = xas_load(&xas);
+diff --git a/mm/slub.c b/mm/slub.c
+index 1384dc9068337..b395ef0645444 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -2297,6 +2297,7 @@ redo:
+ 
+ 	c->page = NULL;
+ 	c->freelist = NULL;
++	c->tid = next_tid(c->tid);
+ }
+ 
+ /*
+@@ -2430,8 +2431,6 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
+ {
+ 	stat(s, CPUSLAB_FLUSH);
+ 	deactivate_slab(s, c->page, c->freelist, c);
+-
+-	c->tid = next_tid(c->tid);
+ }
+ 
+ /*
+@@ -2717,6 +2716,7 @@ redo:
+ 
+ 	if (!freelist) {
+ 		c->page = NULL;
++		c->tid = next_tid(c->tid);
+ 		stat(s, DEACTIVATE_BYPASS);
+ 		goto new_slab;
+ 	}
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index 0928a39c4423b..e918a0f3cda28 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -100,6 +100,7 @@ static inline u64 get_u64(const struct canfd_frame *cp, int offset)
+ 
+ struct bcm_op {
+ 	struct list_head list;
++	struct rcu_head rcu;
+ 	int ifindex;
+ 	canid_t can_id;
+ 	u32 flags;
+@@ -718,10 +719,9 @@ static struct bcm_op *bcm_find_op(struct list_head *ops,
+ 	return NULL;
+ }
+ 
+-static void bcm_remove_op(struct bcm_op *op)
++static void bcm_free_op_rcu(struct rcu_head *rcu_head)
+ {
+-	hrtimer_cancel(&op->timer);
+-	hrtimer_cancel(&op->thrtimer);
++	struct bcm_op *op = container_of(rcu_head, struct bcm_op, rcu);
+ 
+ 	if ((op->frames) && (op->frames != &op->sframe))
+ 		kfree(op->frames);
+@@ -732,6 +732,14 @@ static void bcm_remove_op(struct bcm_op *op)
+ 	kfree(op);
+ }
+ 
++static void bcm_remove_op(struct bcm_op *op)
++{
++	hrtimer_cancel(&op->timer);
++	hrtimer_cancel(&op->thrtimer);
++
++	call_rcu(&op->rcu, bcm_free_op_rcu);
++}
++
+ static void bcm_rx_unreg(struct net_device *dev, struct bcm_op *op)
+ {
+ 	if (op->rx_reg_dev == dev) {
+@@ -757,6 +765,9 @@ static int bcm_delete_rx_op(struct list_head *ops, struct bcm_msg_head *mh,
+ 		if ((op->can_id == mh->can_id) && (op->ifindex == ifindex) &&
+ 		    (op->flags & CAN_FD_FRAME) == (mh->flags & CAN_FD_FRAME)) {
+ 
++			/* disable automatic timer on frame reception */
++			op->flags |= RX_NO_AUTOTIMER;
++
+ 			/*
+ 			 * Don't care if we're bound or not (due to netdev
+ 			 * problems) can_rx_unregister() is always a save
+@@ -785,7 +796,6 @@ static int bcm_delete_rx_op(struct list_head *ops, struct bcm_msg_head *mh,
+ 						  bcm_rx_handler, op);
+ 
+ 			list_del(&op->list);
+-			synchronize_rcu();
+ 			bcm_remove_op(op);
+ 			return 1; /* done */
+ 		}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 3c17fadaab5fa..e5622e925ea97 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4886,13 +4886,20 @@ static int nft_setelem_parse_data(struct nft_ctx *ctx, struct nft_set *set,
+ 				  struct nft_data *data,
+ 				  struct nlattr *attr)
+ {
++	u32 dtype;
+ 	int err;
+ 
+ 	err = nft_data_init(ctx, data, NFT_DATA_VALUE_MAXLEN, desc, attr);
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (desc->type != NFT_DATA_VERDICT && desc->len != set->dlen) {
++	if (set->dtype == NFT_DATA_VERDICT)
++		dtype = NFT_DATA_VERDICT;
++	else
++		dtype = NFT_DATA_VALUE;
++
++	if (dtype != desc->type ||
++	    set->dlen != desc->len) {
+ 		nft_data_release(data, desc->type);
+ 		return -EINVAL;
+ 	}
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index f67c4436c5d31..949da87dbb063 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -2120,6 +2120,32 @@ out_scratch:
+ 	return err;
+ }
+ 
++/**
++ * nft_set_pipapo_match_destroy() - Destroy elements from key mapping array
++ * @set:	nftables API set representation
++ * @m:		matching data pointing to key mapping array
++ */
++static void nft_set_pipapo_match_destroy(const struct nft_set *set,
++					 struct nft_pipapo_match *m)
++{
++	struct nft_pipapo_field *f;
++	int i, r;
++
++	for (i = 0, f = m->f; i < m->field_count - 1; i++, f++)
++		;
++
++	for (r = 0; r < f->rules; r++) {
++		struct nft_pipapo_elem *e;
++
++		if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e)
++			continue;
++
++		e = f->mt[r].e;
++
++		nft_set_elem_destroy(set, e, true);
++	}
++}
++
+ /**
+  * nft_pipapo_destroy() - Free private data for set and all committed elements
+  * @set:	nftables API set representation
+@@ -2128,26 +2154,13 @@ static void nft_pipapo_destroy(const struct nft_set *set)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+ 	struct nft_pipapo_match *m;
+-	struct nft_pipapo_field *f;
+-	int i, r, cpu;
++	int cpu;
+ 
+ 	m = rcu_dereference_protected(priv->match, true);
+ 	if (m) {
+ 		rcu_barrier();
+ 
+-		for (i = 0, f = m->f; i < m->field_count - 1; i++, f++)
+-			;
+-
+-		for (r = 0; r < f->rules; r++) {
+-			struct nft_pipapo_elem *e;
+-
+-			if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e)
+-				continue;
+-
+-			e = f->mt[r].e;
+-
+-			nft_set_elem_destroy(set, e, true);
+-		}
++		nft_set_pipapo_match_destroy(set, m);
+ 
+ #ifdef NFT_PIPAPO_ALIGN
+ 		free_percpu(m->scratch_aligned);
+@@ -2161,6 +2174,11 @@ static void nft_pipapo_destroy(const struct nft_set *set)
+ 	}
+ 
+ 	if (priv->clone) {
++		m = priv->clone;
++
++		if (priv->dirty)
++			nft_set_pipapo_match_destroy(set, m);
++
+ #ifdef NFT_PIPAPO_ALIGN
+ 		free_percpu(priv->clone->scratch_aligned);
+ #endif
+diff --git a/net/rose/rose_route.c b/net/rose/rose_route.c
+index 6e35703ff353d..95b198f84a3af 100644
+--- a/net/rose/rose_route.c
++++ b/net/rose/rose_route.c
+@@ -227,8 +227,8 @@ static void rose_remove_neigh(struct rose_neigh *rose_neigh)
+ {
+ 	struct rose_neigh *s;
+ 
+-	rose_stop_ftimer(rose_neigh);
+-	rose_stop_t0timer(rose_neigh);
++	del_timer_sync(&rose_neigh->ftimer);
++	del_timer_sync(&rose_neigh->t0timer);
+ 
+ 	skb_queue_purge(&rose_neigh->queue);
+ 
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index 2ef6f926610ee..e63a285a98565 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -318,6 +318,7 @@ static void __xp_dma_unmap(struct xsk_dma_map *dma_map, unsigned long attrs)
+ 	for (i = 0; i < dma_map->dma_pages_cnt; i++) {
+ 		dma = &dma_map->dma_pages[i];
+ 		if (*dma) {
++			*dma &= ~XSK_NEXT_PG_CONTIG_MASK;
+ 			dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
+ 					     DMA_BIDIRECTIONAL, attrs);
+ 			*dma = 0;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 604f55ec7944b..f7645720d29c3 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8934,6 +8934,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x7718, "Clevo L140PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh
+index be6fa808d2196..54020d05a62b8 100644
+--- a/tools/testing/selftests/net/forwarding/lib.sh
++++ b/tools/testing/selftests/net/forwarding/lib.sh
+@@ -1063,6 +1063,7 @@ learning_test()
+ 	# FDB entry was installed.
+ 	bridge link set dev $br_port1 flood off
+ 
++	ip link set $host1_if promisc on
+ 	tc qdisc add dev $host1_if ingress
+ 	tc filter add dev $host1_if ingress protocol ip pref 1 handle 101 \
+ 		flower dst_mac $mac action drop
+@@ -1073,7 +1074,7 @@ learning_test()
+ 	tc -j -s filter show dev $host1_if ingress \
+ 		| jq -e ".[] | select(.options.handle == 101) \
+ 		| select(.options.actions[0].stats.packets == 1)" &> /dev/null
+-	check_fail $? "Packet reached second host when should not"
++	check_fail $? "Packet reached first host when should not"
+ 
+ 	$MZ $host1_if -c 1 -p 64 -a $mac -t ip -q
+ 	sleep 1
+@@ -1112,6 +1113,7 @@ learning_test()
+ 
+ 	tc filter del dev $host1_if ingress protocol ip pref 1 handle 101 flower
+ 	tc qdisc del dev $host1_if ingress
++	ip link set $host1_if promisc off
+ 
+ 	bridge link set dev $br_port1 flood on
+ 
+@@ -1129,6 +1131,7 @@ flood_test_do()
+ 
+ 	# Add an ACL on `host2_if` which will tell us whether the packet
+ 	# was flooded to it or not.
++	ip link set $host2_if promisc on
+ 	tc qdisc add dev $host2_if ingress
+ 	tc filter add dev $host2_if ingress protocol ip pref 1 handle 101 \
+ 		flower dst_mac $mac action drop
+@@ -1146,6 +1149,7 @@ flood_test_do()
+ 
+ 	tc filter del dev $host2_if ingress protocol ip pref 1 handle 101 flower
+ 	tc qdisc del dev $host2_if ingress
++	ip link set $host2_if promisc off
+ 
+ 	return $err
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-07-15 10:03 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-07-15 10:03 UTC (permalink / raw
  To: gentoo-commits

commit:     b75ac876bd560b3da5f50061fd2548f6150c2ed0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 15 10:03:00 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jul 15 10:03:00 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b75ac876

Linux patch 5.10.131

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |  4 ++++
 1130_linux-5.10.131.patch | 26 ++++++++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/0000_README b/0000_README
index 5c651d7f..7e7a9fd2 100644
--- a/0000_README
+++ b/0000_README
@@ -563,6 +563,10 @@ Patch:  1129_linux-5.10.130.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.130
 
+Patch:  1130_linux-5.10.131.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.131
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1130_linux-5.10.131.patch b/1130_linux-5.10.131.patch
new file mode 100644
index 00000000..0cde88b2
--- /dev/null
+++ b/1130_linux-5.10.131.patch
@@ -0,0 +1,26 @@
+diff --git a/Makefile b/Makefile
+index b0a35378803db..53f1a45ae69b0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 130
++SUBLEVEL = 131
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index 8d096ca770b04..92e8ca56f5665 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -683,7 +683,7 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
+ 	hw->timing0 = BF_GPMI_TIMING0_ADDRESS_SETUP(addr_setup_cycles) |
+ 		      BF_GPMI_TIMING0_DATA_HOLD(data_hold_cycles) |
+ 		      BF_GPMI_TIMING0_DATA_SETUP(data_setup_cycles);
+-	hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(DIV_ROUND_UP(busy_timeout_cycles, 4096));
++	hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(busy_timeout_cycles * 4096);
+ 
+ 	/*
+ 	 * Derive NFC ideal delay from {3}:


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-07-21 20:08 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-07-21 20:08 UTC (permalink / raw
  To: gentoo-commits

commit:     3cced38822768a8795123158b3e2a5d184c9be68
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jul 21 20:08:13 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jul 21 20:08:13 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3cced388

Linux patch 5.10.132

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1131_linux-5.10.132.patch | 2956 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2960 insertions(+)

diff --git a/0000_README b/0000_README
index 7e7a9fd2..04169db1 100644
--- a/0000_README
+++ b/0000_README
@@ -567,6 +567,10 @@ Patch:  1130_linux-5.10.131.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.131
 
+Patch:  1131_linux-5.10.132.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.132
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1131_linux-5.10.132.patch b/1131_linux-5.10.132.patch
new file mode 100644
index 00000000..23d80f8d
--- /dev/null
+++ b/1131_linux-5.10.132.patch
@@ -0,0 +1,2956 @@
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index 4822a058a81d7..0b1f3235aa773 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -988,7 +988,7 @@ cipso_cache_enable - BOOLEAN
+ cipso_cache_bucket_size - INTEGER
+ 	The CIPSO label cache consists of a fixed size hash table with each
+ 	hash bucket containing a number of cache entries.  This variable limits
+-	the number of entries in each hash bucket; the larger the value the
++	the number of entries in each hash bucket; the larger the value is, the
+ 	more CIPSO label mappings that can be cached.  When the number of
+ 	entries in a given hash bucket reaches this limit adding new entries
+ 	causes the oldest entry in the bucket to be removed to make room.
+@@ -1080,7 +1080,7 @@ ip_autobind_reuse - BOOLEAN
+ 	option should only be set by experts.
+ 	Default: 0
+ 
+-ip_dynaddr - BOOLEAN
++ip_dynaddr - INTEGER
+ 	If set non-zero, enables support for dynamic addresses.
+ 	If set to a non-zero value larger than 1, a kernel log
+ 	message will be printed when dynamic address rewriting
+diff --git a/Makefile b/Makefile
+index 53f1a45ae69b0..5bee8f281b061 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 131
++SUBLEVEL = 132
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-ts7970.dtsi b/arch/arm/boot/dts/imx6qdl-ts7970.dtsi
+index e6aa0c33754de..966038ecc5bfb 100644
+--- a/arch/arm/boot/dts/imx6qdl-ts7970.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-ts7970.dtsi
+@@ -226,7 +226,7 @@
+ 		reg = <0x28>;
+ 		#gpio-cells = <2>;
+ 		gpio-controller;
+-		ngpio = <32>;
++		ngpios = <62>;
+ 	};
+ 
+ 	sgtl5000: codec@a {
+diff --git a/arch/arm/boot/dts/sama5d2.dtsi b/arch/arm/boot/dts/sama5d2.dtsi
+index 12f57278ba4a5..33f76d14341ef 100644
+--- a/arch/arm/boot/dts/sama5d2.dtsi
++++ b/arch/arm/boot/dts/sama5d2.dtsi
+@@ -1125,7 +1125,7 @@
+ 				clocks = <&pmc PMC_TYPE_PERIPHERAL 55>, <&pmc PMC_TYPE_GCK 55>;
+ 				clock-names = "pclk", "gclk";
+ 				assigned-clocks = <&pmc PMC_TYPE_CORE PMC_I2S1_MUX>;
+-				assigned-parrents = <&pmc PMC_TYPE_GCK 55>;
++				assigned-clock-parents = <&pmc PMC_TYPE_GCK 55>;
+ 				status = "disabled";
+ 			};
+ 
+diff --git a/arch/arm/boot/dts/stm32mp151.dtsi b/arch/arm/boot/dts/stm32mp151.dtsi
+index 7a0ef01de969e..9919fc86bdc34 100644
+--- a/arch/arm/boot/dts/stm32mp151.dtsi
++++ b/arch/arm/boot/dts/stm32mp151.dtsi
+@@ -543,7 +543,7 @@
+ 			compatible = "st,stm32-cec";
+ 			reg = <0x40016000 0x400>;
+ 			interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
+-			clocks = <&rcc CEC_K>, <&clk_lse>;
++			clocks = <&rcc CEC_K>, <&rcc CEC>;
+ 			clock-names = "cec", "hdmi-cec";
+ 			status = "disabled";
+ 		};
+diff --git a/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts b/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
+index f19ed981da9d9..3706216ffb40b 100644
+--- a/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
++++ b/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
+@@ -169,7 +169,7 @@
+ 	flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "mxicy,mx25l1606e", "winbond,w25q128";
++		compatible = "mxicy,mx25l1606e", "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <40000000>;
+ 	};
+diff --git a/arch/arm/include/asm/mach/map.h b/arch/arm/include/asm/mach/map.h
+index 92282558caf7c..2b8970d8e5a2f 100644
+--- a/arch/arm/include/asm/mach/map.h
++++ b/arch/arm/include/asm/mach/map.h
+@@ -27,6 +27,7 @@ enum {
+ 	MT_HIGH_VECTORS,
+ 	MT_MEMORY_RWX,
+ 	MT_MEMORY_RW,
++	MT_MEMORY_RO,
+ 	MT_ROM,
+ 	MT_MEMORY_RWX_NONCACHED,
+ 	MT_MEMORY_RW_DTCM,
+diff --git a/arch/arm/include/asm/ptrace.h b/arch/arm/include/asm/ptrace.h
+index 91d6b7856be4b..73c83f4d33b3b 100644
+--- a/arch/arm/include/asm/ptrace.h
++++ b/arch/arm/include/asm/ptrace.h
+@@ -164,5 +164,31 @@ static inline unsigned long user_stack_pointer(struct pt_regs *regs)
+ 		((current_stack_pointer | (THREAD_SIZE - 1)) - 7) - 1;	\
+ })
+ 
++
++/*
++ * Update ITSTATE after normal execution of an IT block instruction.
++ *
++ * The 8 IT state bits are split into two parts in CPSR:
++ *	ITSTATE<1:0> are in CPSR<26:25>
++ *	ITSTATE<7:2> are in CPSR<15:10>
++ */
++static inline unsigned long it_advance(unsigned long cpsr)
++{
++	if ((cpsr & 0x06000400) == 0) {
++		/* ITSTATE<2:0> == 0 means end of IT block, so clear IT state */
++		cpsr &= ~PSR_IT_MASK;
++	} else {
++		/* We need to shift left ITSTATE<4:0> */
++		const unsigned long mask = 0x06001c00;  /* Mask ITSTATE<4:0> */
++		unsigned long it = cpsr & mask;
++		it <<= 1;
++		it |= it >> (27 - 10);  /* Carry ITSTATE<2> to correct place */
++		it &= mask;
++		cpsr &= ~mask;
++		cpsr |= it;
++	}
++	return cpsr;
++}
++
+ #endif /* __ASSEMBLY__ */
+ #endif
+diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c
+index ea81e89e77400..bcefe3f51744c 100644
+--- a/arch/arm/mm/alignment.c
++++ b/arch/arm/mm/alignment.c
+@@ -935,6 +935,9 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ 	if (type == TYPE_LDST)
+ 		do_alignment_finish_ldst(addr, instr, regs, offset);
+ 
++	if (thumb_mode(regs))
++		regs->ARM_cpsr = it_advance(regs->ARM_cpsr);
++
+ 	return 0;
+ 
+  bad_or_fault:
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index 3e3001998460b..86f213f1b44b8 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -296,6 +296,13 @@ static struct mem_type mem_types[] __ro_after_init = {
+ 		.prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE,
+ 		.domain    = DOMAIN_KERNEL,
+ 	},
++	[MT_MEMORY_RO] = {
++		.prot_pte  = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
++			     L_PTE_XN | L_PTE_RDONLY,
++		.prot_l1   = PMD_TYPE_TABLE,
++		.prot_sect = PMD_TYPE_SECT,
++		.domain    = DOMAIN_KERNEL,
++	},
+ 	[MT_ROM] = {
+ 		.prot_sect = PMD_TYPE_SECT,
+ 		.domain    = DOMAIN_KERNEL,
+@@ -490,6 +497,7 @@ static void __init build_mem_type_table(void)
+ 
+ 			/* Also setup NX memory mapping */
+ 			mem_types[MT_MEMORY_RW].prot_sect |= PMD_SECT_XN;
++			mem_types[MT_MEMORY_RO].prot_sect |= PMD_SECT_XN;
+ 		}
+ 		if (cpu_arch >= CPU_ARCH_ARMv7 && (cr & CR_TRE)) {
+ 			/*
+@@ -569,6 +577,7 @@ static void __init build_mem_type_table(void)
+ 		mem_types[MT_ROM].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
+ 		mem_types[MT_MINICLEAN].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
+ 		mem_types[MT_CACHECLEAN].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
++		mem_types[MT_MEMORY_RO].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
+ #endif
+ 
+ 		/*
+@@ -588,6 +597,8 @@ static void __init build_mem_type_table(void)
+ 			mem_types[MT_MEMORY_RWX].prot_pte |= L_PTE_SHARED;
+ 			mem_types[MT_MEMORY_RW].prot_sect |= PMD_SECT_S;
+ 			mem_types[MT_MEMORY_RW].prot_pte |= L_PTE_SHARED;
++			mem_types[MT_MEMORY_RO].prot_sect |= PMD_SECT_S;
++			mem_types[MT_MEMORY_RO].prot_pte |= L_PTE_SHARED;
+ 			mem_types[MT_MEMORY_DMA_READY].prot_pte |= L_PTE_SHARED;
+ 			mem_types[MT_MEMORY_RWX_NONCACHED].prot_sect |= PMD_SECT_S;
+ 			mem_types[MT_MEMORY_RWX_NONCACHED].prot_pte |= L_PTE_SHARED;
+@@ -648,6 +659,8 @@ static void __init build_mem_type_table(void)
+ 	mem_types[MT_MEMORY_RWX].prot_pte |= kern_pgprot;
+ 	mem_types[MT_MEMORY_RW].prot_sect |= ecc_mask | cp->pmd;
+ 	mem_types[MT_MEMORY_RW].prot_pte |= kern_pgprot;
++	mem_types[MT_MEMORY_RO].prot_sect |= ecc_mask | cp->pmd;
++	mem_types[MT_MEMORY_RO].prot_pte |= kern_pgprot;
+ 	mem_types[MT_MEMORY_DMA_READY].prot_pte |= kern_pgprot;
+ 	mem_types[MT_MEMORY_RWX_NONCACHED].prot_sect |= ecc_mask;
+ 	mem_types[MT_ROM].prot_sect |= cp->pmd;
+@@ -1342,7 +1355,7 @@ static void __init devicemaps_init(const struct machine_desc *mdesc)
+ 		map.pfn = __phys_to_pfn(__atags_pointer & SECTION_MASK);
+ 		map.virtual = FDT_FIXED_BASE;
+ 		map.length = FDT_FIXED_SIZE;
+-		map.type = MT_ROM;
++		map.type = MT_MEMORY_RO;
+ 		create_mapping(&map);
+ 	}
+ 
+diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
+index fb9f3eb6bf483..8bc7a2d6d6c7f 100644
+--- a/arch/arm/mm/proc-v7-bugs.c
++++ b/arch/arm/mm/proc-v7-bugs.c
+@@ -108,8 +108,7 @@ static unsigned int spectre_v2_install_workaround(unsigned int method)
+ #else
+ static unsigned int spectre_v2_install_workaround(unsigned int method)
+ {
+-	pr_info("CPU%u: Spectre V2: workarounds disabled by configuration\n",
+-		smp_processor_id());
++	pr_info_once("Spectre V2: workarounds disabled by configuration\n");
+ 
+ 	return SPECTRE_VULNERABLE;
+ }
+@@ -209,10 +208,10 @@ static int spectre_bhb_install_workaround(int method)
+ 			return SPECTRE_VULNERABLE;
+ 
+ 		spectre_bhb_method = method;
+-	}
+ 
+-	pr_info("CPU%u: Spectre BHB: using %s workaround\n",
+-		smp_processor_id(), spectre_bhb_method_name(method));
++		pr_info("CPU%u: Spectre BHB: enabling %s workaround for all CPUs\n",
++			smp_processor_id(), spectre_bhb_method_name(method));
++	}
+ 
+ 	return SPECTRE_MITIGATED;
+ }
+diff --git a/arch/arm/probes/decode.h b/arch/arm/probes/decode.h
+index 9731735989921..facc889d05eee 100644
+--- a/arch/arm/probes/decode.h
++++ b/arch/arm/probes/decode.h
+@@ -14,6 +14,7 @@
+ #include <linux/types.h>
+ #include <linux/stddef.h>
+ #include <asm/probes.h>
++#include <asm/ptrace.h>
+ #include <asm/kprobes.h>
+ 
+ void __init arm_probes_decode_init(void);
+@@ -35,31 +36,6 @@ void __init find_str_pc_offset(void);
+ #endif
+ 
+ 
+-/*
+- * Update ITSTATE after normal execution of an IT block instruction.
+- *
+- * The 8 IT state bits are split into two parts in CPSR:
+- *	ITSTATE<1:0> are in CPSR<26:25>
+- *	ITSTATE<7:2> are in CPSR<15:10>
+- */
+-static inline unsigned long it_advance(unsigned long cpsr)
+-	{
+-	if ((cpsr & 0x06000400) == 0) {
+-		/* ITSTATE<2:0> == 0 means end of IT block, so clear IT state */
+-		cpsr &= ~PSR_IT_MASK;
+-	} else {
+-		/* We need to shift left ITSTATE<4:0> */
+-		const unsigned long mask = 0x06001c00;  /* Mask ITSTATE<4:0> */
+-		unsigned long it = cpsr & mask;
+-		it <<= 1;
+-		it |= it >> (27 - 10);  /* Carry ITSTATE<2> to correct place */
+-		it &= mask;
+-		cpsr &= ~mask;
+-		cpsr |= it;
+-	}
+-	return cpsr;
+-}
+-
+ static inline void __kprobes bx_write_pc(long pcv, struct pt_regs *regs)
+ {
+ 	long cpsr = regs->ARM_cpsr;
+diff --git a/arch/sh/include/asm/io.h b/arch/sh/include/asm/io.h
+index 6d5c6463bc07e..de99a19e72d72 100644
+--- a/arch/sh/include/asm/io.h
++++ b/arch/sh/include/asm/io.h
+@@ -271,8 +271,12 @@ static inline void __iomem *ioremap_prot(phys_addr_t offset, unsigned long size,
+ #endif /* CONFIG_HAVE_IOREMAP_PROT */
+ 
+ #else /* CONFIG_MMU */
+-#define iounmap(addr)		do { } while (0)
+-#define ioremap(offset, size)	((void __iomem *)(unsigned long)(offset))
++static inline void __iomem *ioremap(phys_addr_t offset, size_t size)
++{
++	return (void __iomem *)(unsigned long)offset;
++}
++
++static inline void iounmap(volatile void __iomem *addr) { }
+ #endif /* CONFIG_MMU */
+ 
+ #define ioremap_uc	ioremap
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index 05e117137b459..efe13ab366f47 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -419,6 +419,8 @@ static void __init clear_bss(void)
+ {
+ 	memset(__bss_start, 0,
+ 	       (unsigned long) __bss_stop - (unsigned long) __bss_start);
++	memset(__brk_base, 0,
++	       (unsigned long) __brk_limit - (unsigned long) __brk_base);
+ }
+ 
+ static unsigned long get_cmd_line_ptr(void)
+diff --git a/arch/x86/kernel/ima_arch.c b/arch/x86/kernel/ima_arch.c
+index 7dfb1e8089284..bd218470d1459 100644
+--- a/arch/x86/kernel/ima_arch.c
++++ b/arch/x86/kernel/ima_arch.c
+@@ -88,6 +88,8 @@ const char * const *arch_get_ima_policy(void)
+ 	if (IS_ENABLED(CONFIG_IMA_ARCH_POLICY) && arch_ima_get_secureboot()) {
+ 		if (IS_ENABLED(CONFIG_MODULE_SIG))
+ 			set_module_sig_enforced();
++		if (IS_ENABLED(CONFIG_KEXEC_SIG))
++			set_kexec_sig_enforced();
+ 		return sb_arch_rules;
+ 	}
+ 	return NULL;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index da547752580a3..c71f702c037de 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -8142,15 +8142,17 @@ static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
+  */
+ static void kvm_pv_kick_cpu_op(struct kvm *kvm, unsigned long flags, int apicid)
+ {
+-	struct kvm_lapic_irq lapic_irq;
+-
+-	lapic_irq.shorthand = APIC_DEST_NOSHORT;
+-	lapic_irq.dest_mode = APIC_DEST_PHYSICAL;
+-	lapic_irq.level = 0;
+-	lapic_irq.dest_id = apicid;
+-	lapic_irq.msi_redir_hint = false;
++	/*
++	 * All other fields are unused for APIC_DM_REMRD, but may be consumed by
++	 * common code, e.g. for tracing. Defer initialization to the compiler.
++	 */
++	struct kvm_lapic_irq lapic_irq = {
++		.delivery_mode = APIC_DM_REMRD,
++		.dest_mode = APIC_DEST_PHYSICAL,
++		.shorthand = APIC_DEST_NOSHORT,
++		.dest_id = apicid,
++	};
+ 
+-	lapic_irq.delivery_mode = APIC_DM_REMRD;
+ 	kvm_irq_delivery_to_apic(kvm, NULL, &lapic_irq, NULL);
+ }
+ 
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index c7a47603537f2..63d8c6c7d1254 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -78,10 +78,20 @@ static uint8_t __pte2cachemode_tbl[8] = {
+ 	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC,
+ };
+ 
+-/* Check that the write-protect PAT entry is set for write-protect */
++/*
++ * Check that the write-protect PAT entry is set for write-protect.
++ * To do this without making assumptions how PAT has been set up (Xen has
++ * another layout than the kernel), translate the _PAGE_CACHE_MODE_WP cache
++ * mode via the __cachemode2pte_tbl[] into protection bits (those protection
++ * bits will select a cache mode of WP or better), and then translate the
++ * protection bits back into the cache mode using __pte2cm_idx() and the
++ * __pte2cachemode_tbl[] array. This will return the really used cache mode.
++ */
+ bool x86_has_pat_wp(void)
+ {
+-	return __pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] == _PAGE_CACHE_MODE_WP;
++	uint16_t prot = __cachemode2pte_tbl[_PAGE_CACHE_MODE_WP];
++
++	return __pte2cachemode_tbl[__pte2cm_idx(prot)] == _PAGE_CACHE_MODE_WP;
+ }
+ 
+ enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)
+diff --git a/drivers/cpufreq/pmac32-cpufreq.c b/drivers/cpufreq/pmac32-cpufreq.c
+index 73621bc119768..3704476bb83a0 100644
+--- a/drivers/cpufreq/pmac32-cpufreq.c
++++ b/drivers/cpufreq/pmac32-cpufreq.c
+@@ -471,6 +471,10 @@ static int pmac_cpufreq_init_MacRISC3(struct device_node *cpunode)
+ 	if (slew_done_gpio_np)
+ 		slew_done_gpio = read_gpio(slew_done_gpio_np);
+ 
++	of_node_put(volt_gpio_np);
++	of_node_put(freq_gpio_np);
++	of_node_put(slew_done_gpio_np);
++
+ 	/* If we use the frequency GPIOs, calculate the min/max speeds based
+ 	 * on the bus frequencies
+ 	 */
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+index ecaa538b2d357..ef78781934919 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+@@ -790,6 +790,7 @@ static struct drm_connector *intel_dp_add_mst_connector(struct drm_dp_mst_topolo
+ 	ret = drm_connector_init(dev, connector, &intel_dp_mst_connector_funcs,
+ 				 DRM_MODE_CONNECTOR_DisplayPort);
+ 	if (ret) {
++		drm_dp_mst_put_port_malloc(port);
+ 		intel_connector_free(intel_connector);
+ 		return NULL;
+ 	}
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
+index 6615eb5147e23..a33887f2464fa 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt.c
+@@ -736,6 +736,20 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
+ 	mutex_lock(&gt->tlb_invalidate_lock);
+ 	intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
+ 
++	spin_lock_irq(&uncore->lock); /* serialise invalidate with GT reset */
++
++	for_each_engine(engine, gt, id) {
++		struct reg_and_bit rb;
++
++		rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num);
++		if (!i915_mmio_reg_offset(rb.reg))
++			continue;
++
++		intel_uncore_write_fw(uncore, rb.reg, rb.bit);
++	}
++
++	spin_unlock_irq(&uncore->lock);
++
+ 	for_each_engine(engine, gt, id) {
+ 		/*
+ 		 * HW architecture suggest typical invalidation time at 40us,
+@@ -750,7 +764,6 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
+ 		if (!i915_mmio_reg_offset(rb.reg))
+ 			continue;
+ 
+-		intel_uncore_write_fw(uncore, rb.reg, rb.bit);
+ 		if (__intel_wait_for_register_fw(uncore,
+ 						 rb.reg, rb.bit, 0,
+ 						 timeout_us, timeout_ms,
+diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
+index 95d41c01d0e04..35d55f98a06f5 100644
+--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
++++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
+@@ -4788,8 +4788,8 @@ static int live_lrc_layout(void *arg)
+ 			continue;
+ 
+ 		hw = shmem_pin_map(engine->default_state);
+-		if (IS_ERR(hw)) {
+-			err = PTR_ERR(hw);
++		if (!hw) {
++			err = -ENOMEM;
+ 			break;
+ 		}
+ 		hw += LRC_STATE_OFFSET / sizeof(*hw);
+@@ -4965,8 +4965,8 @@ static int live_lrc_fixed(void *arg)
+ 			continue;
+ 
+ 		hw = shmem_pin_map(engine->default_state);
+-		if (IS_ERR(hw)) {
+-			err = PTR_ERR(hw);
++		if (!hw) {
++			err = -ENOMEM;
+ 			break;
+ 		}
+ 		hw += LRC_STATE_OFFSET / sizeof(*hw);
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index a70261809cdd2..1dfc457bbefc8 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -427,8 +427,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
+ 
+ 	if (args->retained) {
+ 		if (args->madv == PANFROST_MADV_DONTNEED)
+-			list_add_tail(&bo->base.madv_list,
+-				      &pfdev->shrinker_list);
++			list_move_tail(&bo->base.madv_list,
++				       &pfdev->shrinker_list);
+ 		else if (args->madv == PANFROST_MADV_WILLNEED)
+ 			list_del_init(&bo->base.madv_list);
+ 	}
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index 7fc45b13a52c2..13596961ae17f 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -491,7 +491,7 @@ err_map:
+ err_pages:
+ 	drm_gem_shmem_put_pages(&bo->base);
+ err_bo:
+-	drm_gem_object_put(&bo->base.base);
++	panfrost_gem_mapping_put(bomapping);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/irqchip/irq-or1k-pic.c b/drivers/irqchip/irq-or1k-pic.c
+index 03d2366118dd4..d5f1fabc45d79 100644
+--- a/drivers/irqchip/irq-or1k-pic.c
++++ b/drivers/irqchip/irq-or1k-pic.c
+@@ -66,7 +66,6 @@ static struct or1k_pic_dev or1k_pic_level = {
+ 		.name = "or1k-PIC-level",
+ 		.irq_unmask = or1k_pic_unmask,
+ 		.irq_mask = or1k_pic_mask,
+-		.irq_mask_ack = or1k_pic_mask_ack,
+ 	},
+ 	.handle = handle_level_irq,
+ 	.flags = IRQ_LEVEL | IRQ_NOPROBE,
+diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
+index 1c42417810fcd..1a3fba352cadb 100644
+--- a/drivers/net/can/xilinx_can.c
++++ b/drivers/net/can/xilinx_can.c
+@@ -259,7 +259,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd2 = {
+ 	.tseg2_min = 1,
+ 	.tseg2_max = 128,
+ 	.sjw_max = 128,
+-	.brp_min = 2,
++	.brp_min = 1,
+ 	.brp_max = 256,
+ 	.brp_inc = 1,
+ };
+@@ -272,7 +272,7 @@ static const struct can_bittiming_const xcan_data_bittiming_const_canfd2 = {
+ 	.tseg2_min = 1,
+ 	.tseg2_max = 16,
+ 	.sjw_max = 16,
+-	.brp_min = 2,
++	.brp_min = 1,
+ 	.brp_max = 256,
+ 	.brp_inc = 1,
+ };
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index fc5ea434a27c9..a0ce213c473bc 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -385,7 +385,7 @@ static void aq_pci_shutdown(struct pci_dev *pdev)
+ 	}
+ }
+ 
+-static int aq_suspend_common(struct device *dev, bool deep)
++static int aq_suspend_common(struct device *dev)
+ {
+ 	struct aq_nic_s *nic = pci_get_drvdata(to_pci_dev(dev));
+ 
+@@ -398,17 +398,15 @@ static int aq_suspend_common(struct device *dev, bool deep)
+ 	if (netif_running(nic->ndev))
+ 		aq_nic_stop(nic);
+ 
+-	if (deep) {
+-		aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol);
+-		aq_nic_set_power(nic);
+-	}
++	aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol);
++	aq_nic_set_power(nic);
+ 
+ 	rtnl_unlock();
+ 
+ 	return 0;
+ }
+ 
+-static int atl_resume_common(struct device *dev, bool deep)
++static int atl_resume_common(struct device *dev)
+ {
+ 	struct pci_dev *pdev = to_pci_dev(dev);
+ 	struct aq_nic_s *nic;
+@@ -421,11 +419,6 @@ static int atl_resume_common(struct device *dev, bool deep)
+ 	pci_set_power_state(pdev, PCI_D0);
+ 	pci_restore_state(pdev);
+ 
+-	if (deep) {
+-		/* Reinitialize Nic/Vecs objects */
+-		aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol);
+-	}
+-
+ 	if (netif_running(nic->ndev)) {
+ 		ret = aq_nic_init(nic);
+ 		if (ret)
+@@ -450,22 +443,22 @@ err_exit:
+ 
+ static int aq_pm_freeze(struct device *dev)
+ {
+-	return aq_suspend_common(dev, true);
++	return aq_suspend_common(dev);
+ }
+ 
+ static int aq_pm_suspend_poweroff(struct device *dev)
+ {
+-	return aq_suspend_common(dev, true);
++	return aq_suspend_common(dev);
+ }
+ 
+ static int aq_pm_thaw(struct device *dev)
+ {
+-	return atl_resume_common(dev, true);
++	return atl_resume_common(dev);
+ }
+ 
+ static int aq_pm_resume_restore(struct device *dev)
+ {
+-	return atl_resume_common(dev, true);
++	return atl_resume_common(dev);
+ }
+ 
+ static const struct dev_pm_ops aq_pm_ops = {
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index eea4bd3116e8d..969af4dd64055 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -1747,6 +1747,19 @@ cleanup_clk:
+ 	return rc;
+ }
+ 
++static bool ftgmac100_has_child_node(struct device_node *np, const char *name)
++{
++	struct device_node *child_np = of_get_child_by_name(np, name);
++	bool ret = false;
++
++	if (child_np) {
++		ret = true;
++		of_node_put(child_np);
++	}
++
++	return ret;
++}
++
+ static int ftgmac100_probe(struct platform_device *pdev)
+ {
+ 	struct resource *res;
+@@ -1860,7 +1873,7 @@ static int ftgmac100_probe(struct platform_device *pdev)
+ 
+ 		/* Display what we found */
+ 		phy_attached_info(phy);
+-	} else if (np && !of_get_child_by_name(np, "mdio")) {
++	} else if (np && !ftgmac100_has_child_node(np, "mdio")) {
+ 		/* Support legacy ASPEED devicetree descriptions that decribe a
+ 		 * MAC with an embedded MDIO controller but have no "mdio"
+ 		 * child node. Automatically scan the MDIO bus for available
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
+index d06532d0baa43..634777fd7db9b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
+@@ -231,8 +231,7 @@ mlx5e_set_ktls_rx_priv_ctx(struct tls_context *tls_ctx,
+ 	struct mlx5e_ktls_offload_context_rx **ctx =
+ 		__tls_driver_ctx(tls_ctx, TLS_OFFLOAD_CTX_DIR_RX);
+ 
+-	BUILD_BUG_ON(sizeof(struct mlx5e_ktls_offload_context_rx *) >
+-		     TLS_OFFLOAD_CONTEXT_SIZE_RX);
++	BUILD_BUG_ON(sizeof(priv_rx) > TLS_DRIVER_STATE_SIZE_RX);
+ 
+ 	*ctx = priv_rx;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+index b140e13fdcc88..679747db3110c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+@@ -63,8 +63,7 @@ mlx5e_set_ktls_tx_priv_ctx(struct tls_context *tls_ctx,
+ 	struct mlx5e_ktls_offload_context_tx **ctx =
+ 		__tls_driver_ctx(tls_ctx, TLS_OFFLOAD_CTX_DIR_TX);
+ 
+-	BUILD_BUG_ON(sizeof(struct mlx5e_ktls_offload_context_tx *) >
+-		     TLS_OFFLOAD_CONTEXT_SIZE_TX);
++	BUILD_BUG_ON(sizeof(priv_tx) > TLS_DRIVER_STATE_SIZE_TX);
+ 
+ 	*ctx = priv_tx;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+index 78f6a6f0a7e0a..ff4f10d0f090b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+@@ -536,7 +536,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(vnic_env)
+ 	u32 in[MLX5_ST_SZ_DW(query_vnic_env_in)] = {};
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 
+-	if (!MLX5_CAP_GEN(priv->mdev, nic_receive_steering_discard))
++	if (!mlx5e_stats_grp_vnic_env_num_stats(priv))
+ 		return;
+ 
+ 	MLX5_SET(query_vnic_env_in, in, opcode, MLX5_CMD_OP_QUERY_VNIC_ENV);
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index fa1a872c4bc83..5b7413305be63 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -1916,7 +1916,10 @@ static int efx_ef10_try_update_nic_stats_vf(struct efx_nic *efx)
+ 
+ 	efx_update_sw_stats(efx, stats);
+ out:
++	/* releasing a DMA coherent buffer with BH disabled can panic */
++	spin_unlock_bh(&efx->stats_lock);
+ 	efx_nic_free_buffer(efx, &stats_buf);
++	spin_lock_bh(&efx->stats_lock);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/net/ethernet/sfc/ef10_sriov.c b/drivers/net/ethernet/sfc/ef10_sriov.c
+index 84041cd587d78..b44acb6e3953f 100644
+--- a/drivers/net/ethernet/sfc/ef10_sriov.c
++++ b/drivers/net/ethernet/sfc/ef10_sriov.c
+@@ -411,8 +411,9 @@ fail1:
+ static int efx_ef10_pci_sriov_disable(struct efx_nic *efx, bool force)
+ {
+ 	struct pci_dev *dev = efx->pci_dev;
++	struct efx_ef10_nic_data *nic_data = efx->nic_data;
+ 	unsigned int vfs_assigned = pci_vfs_assigned(dev);
+-	int rc = 0;
++	int i, rc = 0;
+ 
+ 	if (vfs_assigned && !force) {
+ 		netif_info(efx, drv, efx->net_dev, "VFs are assigned to guests; "
+@@ -420,10 +421,13 @@ static int efx_ef10_pci_sriov_disable(struct efx_nic *efx, bool force)
+ 		return -EBUSY;
+ 	}
+ 
+-	if (!vfs_assigned)
++	if (!vfs_assigned) {
++		for (i = 0; i < efx->vf_count; i++)
++			nic_data->vf[i].pci_dev = NULL;
+ 		pci_disable_sriov(dev);
+-	else
++	} else {
+ 		rc = -EBUSY;
++	}
+ 
+ 	efx_ef10_sriov_free_vf_vswitching(efx);
+ 	efx->vf_count = 0;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c
+index 2342d497348ea..fd1b0cc6b5faf 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c
+@@ -363,6 +363,7 @@ bypass_clk_reset_gpio:
+ 	data->fix_mac_speed = tegra_eqos_fix_speed;
+ 	data->init = tegra_eqos_init;
+ 	data->bsp_priv = eqos;
++	data->sph_disable = 1;
+ 
+ 	err = tegra_eqos_init(pdev, eqos);
+ 	if (err < 0)
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 96068e0d841ae..dcbe278086dca 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -2427,7 +2427,7 @@ static int sfp_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, sfp);
+ 
+-	err = devm_add_action(sfp->dev, sfp_cleanup, sfp);
++	err = devm_add_action_or_reset(sfp->dev, sfp_cleanup, sfp);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
+index dbac4c03d21a1..a0335407be423 100644
+--- a/drivers/net/xen-netback/rx.c
++++ b/drivers/net/xen-netback/rx.c
+@@ -495,6 +495,7 @@ void xenvif_rx_action(struct xenvif_queue *queue)
+ 	queue->rx_copy.completed = &completed_skbs;
+ 
+ 	while (xenvif_rx_ring_slots_available(queue) &&
++	       !skb_queue_empty(&queue->rx_queue) &&
+ 	       work_done < RX_BATCH_SIZE) {
+ 		xenvif_rx_skb(queue);
+ 		work_done++;
+diff --git a/drivers/nfc/nxp-nci/i2c.c b/drivers/nfc/nxp-nci/i2c.c
+index 3943a30053b3b..f426dcdfcdd6a 100644
+--- a/drivers/nfc/nxp-nci/i2c.c
++++ b/drivers/nfc/nxp-nci/i2c.c
+@@ -122,7 +122,9 @@ static int nxp_nci_i2c_fw_read(struct nxp_nci_i2c_phy *phy,
+ 	skb_put_data(*skb, &header, NXP_NCI_FW_HDR_LEN);
+ 
+ 	r = i2c_master_recv(client, skb_put(*skb, frame_len), frame_len);
+-	if (r != frame_len) {
++	if (r < 0) {
++		goto fw_read_exit_free_skb;
++	} else if (r != frame_len) {
+ 		nfc_err(&client->dev,
+ 			"Invalid frame length: %u (expected %zu)\n",
+ 			r, frame_len);
+@@ -166,7 +168,9 @@ static int nxp_nci_i2c_nci_read(struct nxp_nci_i2c_phy *phy,
+ 		return 0;
+ 
+ 	r = i2c_master_recv(client, skb_put(*skb, header.plen), header.plen);
+-	if (r != header.plen) {
++	if (r < 0) {
++		goto nci_read_exit_free_skb;
++	} else if (r != header.plen) {
+ 		nfc_err(&client->dev,
+ 			"Invalid frame payload length: %u (expected %u)\n",
+ 			r, header.plen);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index af2902d70b196..ab060b4911ffd 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4460,6 +4460,8 @@ void nvme_stop_ctrl(struct nvme_ctrl *ctrl)
+ 	nvme_stop_keep_alive(ctrl);
+ 	flush_work(&ctrl->async_event_work);
+ 	cancel_work_sync(&ctrl->fw_act_work);
++	if (ctrl->ops->stop_ctrl)
++		ctrl->ops->stop_ctrl(ctrl);
+ }
+ EXPORT_SYMBOL_GPL(nvme_stop_ctrl);
+ 
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 8e40a6306e53d..58cf9e39d613e 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -478,6 +478,7 @@ struct nvme_ctrl_ops {
+ 	void (*free_ctrl)(struct nvme_ctrl *ctrl);
+ 	void (*submit_async_event)(struct nvme_ctrl *ctrl);
+ 	void (*delete_ctrl)(struct nvme_ctrl *ctrl);
++	void (*stop_ctrl)(struct nvme_ctrl *ctrl);
+ 	int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
+ };
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 3622c5c9515fa..ce129655ef0a3 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3234,7 +3234,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 				NVME_QUIRK_DISABLE_WRITE_ZEROES|
+ 				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ 	{ PCI_DEVICE(0x1987, 0x5016),	/* Phison E16 */
+-		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
++		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN |
++				NVME_QUIRK_BOGUS_NID, },
+ 	{ PCI_DEVICE(0x1b4b, 0x1092),	/* Lexar 256 GB SSD */
+ 		.driver_data = NVME_QUIRK_NO_NS_DESC_LIST |
+ 				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 8eacc9bd58f5a..b61924394032a 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1057,6 +1057,14 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
+ 	}
+ }
+ 
++static void nvme_rdma_stop_ctrl(struct nvme_ctrl *nctrl)
++{
++	struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl);
++
++	cancel_work_sync(&ctrl->err_work);
++	cancel_delayed_work_sync(&ctrl->reconnect_work);
++}
++
+ static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl)
+ {
+ 	struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl);
+@@ -2236,9 +2244,6 @@ static const struct blk_mq_ops nvme_rdma_admin_mq_ops = {
+ 
+ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
+ {
+-	cancel_work_sync(&ctrl->err_work);
+-	cancel_delayed_work_sync(&ctrl->reconnect_work);
+-
+ 	nvme_rdma_teardown_io_queues(ctrl, shutdown);
+ 	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
+ 	if (shutdown)
+@@ -2288,6 +2293,7 @@ static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = {
+ 	.submit_async_event	= nvme_rdma_submit_async_event,
+ 	.delete_ctrl		= nvme_rdma_delete_ctrl,
+ 	.get_address		= nvmf_get_address,
++	.stop_ctrl		= nvme_rdma_stop_ctrl,
+ };
+ 
+ /*
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 7e39320337072..fe8c27bbc3f20 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1149,8 +1149,7 @@ done:
+ 	} else if (ret < 0) {
+ 		dev_err(queue->ctrl->ctrl.device,
+ 			"failed to send request %d\n", ret);
+-		if (ret != -EPIPE && ret != -ECONNRESET)
+-			nvme_tcp_fail_request(queue->request);
++		nvme_tcp_fail_request(queue->request);
+ 		nvme_tcp_done_send_req(queue);
+ 	}
+ 	return ret;
+@@ -2136,9 +2135,6 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
+ 
+ static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown)
+ {
+-	cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work);
+-	cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work);
+-
+ 	nvme_tcp_teardown_io_queues(ctrl, shutdown);
+ 	blk_mq_quiesce_queue(ctrl->admin_q);
+ 	if (shutdown)
+@@ -2178,6 +2174,12 @@ out_fail:
+ 	nvme_tcp_reconnect_or_remove(ctrl);
+ }
+ 
++static void nvme_tcp_stop_ctrl(struct nvme_ctrl *ctrl)
++{
++	cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work);
++	cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work);
++}
++
+ static void nvme_tcp_free_ctrl(struct nvme_ctrl *nctrl)
+ {
+ 	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
+@@ -2500,6 +2502,7 @@ static const struct nvme_ctrl_ops nvme_tcp_ctrl_ops = {
+ 	.submit_async_event	= nvme_tcp_submit_async_event,
+ 	.delete_ctrl		= nvme_tcp_delete_ctrl,
+ 	.get_address		= nvmf_get_address,
++	.stop_ctrl		= nvme_tcp_stop_ctrl,
+ };
+ 
+ static bool
+diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed.c b/drivers/pinctrl/aspeed/pinctrl-aspeed.c
+index 9c65d560d48f7..e792318c38946 100644
+--- a/drivers/pinctrl/aspeed/pinctrl-aspeed.c
++++ b/drivers/pinctrl/aspeed/pinctrl-aspeed.c
+@@ -235,11 +235,11 @@ int aspeed_pinmux_set_mux(struct pinctrl_dev *pctldev, unsigned int function,
+ 		const struct aspeed_sig_expr **funcs;
+ 		const struct aspeed_sig_expr ***prios;
+ 
+-		pr_debug("Muxing pin %s for %s\n", pdesc->name, pfunc->name);
+-
+ 		if (!pdesc)
+ 			return -EINVAL;
+ 
++		pr_debug("Muxing pin %s for %s\n", pdesc->name, pfunc->name);
++
+ 		prios = pdesc->prios;
+ 
+ 		if (!prios)
+diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
+index e94e59283ecb9..012639f6d3354 100644
+--- a/drivers/platform/x86/hp-wmi.c
++++ b/drivers/platform/x86/hp-wmi.c
+@@ -62,6 +62,7 @@ enum hp_wmi_event_ids {
+ 	HPWMI_BACKLIT_KB_BRIGHTNESS	= 0x0D,
+ 	HPWMI_PEAKSHIFT_PERIOD		= 0x0F,
+ 	HPWMI_BATTERY_CHARGE_PERIOD	= 0x10,
++	HPWMI_SANITIZATION_MODE		= 0x17,
+ };
+ 
+ struct bios_args {
+@@ -629,6 +630,8 @@ static void hp_wmi_notify(u32 value, void *context)
+ 		break;
+ 	case HPWMI_BATTERY_CHARGE_PERIOD:
+ 		break;
++	case HPWMI_SANITIZATION_MODE:
++		break;
+ 	default:
+ 		pr_info("Unknown event_id - %d - 0x%x\n", event_id, event_data);
+ 		break;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index cd41dc061d874..dfe7e6370d84f 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2738,6 +2738,7 @@ static int slave_configure_v3_hw(struct scsi_device *sdev)
+ 	struct hisi_hba *hisi_hba = shost_priv(shost);
+ 	struct device *dev = hisi_hba->dev;
+ 	int ret = sas_slave_configure(sdev);
++	unsigned int max_sectors;
+ 
+ 	if (ret)
+ 		return ret;
+@@ -2755,6 +2756,12 @@ static int slave_configure_v3_hw(struct scsi_device *sdev)
+ 		}
+ 	}
+ 
++	/* Set according to IOMMU IOVA caching limit */
++	max_sectors = min_t(size_t, queue_max_hw_sectors(sdev->request_queue),
++			    (PAGE_SIZE * 32) >> SECTOR_SHIFT);
++
++	blk_queue_max_hw_sectors(sdev->request_queue, max_sectors);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/soc/ixp4xx/ixp4xx-npe.c b/drivers/soc/ixp4xx/ixp4xx-npe.c
+index 6065aaab67403..8482a4892b83b 100644
+--- a/drivers/soc/ixp4xx/ixp4xx-npe.c
++++ b/drivers/soc/ixp4xx/ixp4xx-npe.c
+@@ -735,7 +735,7 @@ static const struct of_device_id ixp4xx_npe_of_match[] = {
+ static struct platform_driver ixp4xx_npe_driver = {
+ 	.driver = {
+ 		.name           = "ixp4xx-npe",
+-		.of_match_table = of_match_ptr(ixp4xx_npe_of_match),
++		.of_match_table = ixp4xx_npe_of_match,
+ 	},
+ 	.probe = ixp4xx_npe_probe,
+ 	.remove = ixp4xx_npe_remove,
+diff --git a/drivers/spi/spi-amd.c b/drivers/spi/spi-amd.c
+index 7f629544060db..a027cfd49df8a 100644
+--- a/drivers/spi/spi-amd.c
++++ b/drivers/spi/spi-amd.c
+@@ -28,6 +28,7 @@
+ #define AMD_SPI_RX_COUNT_REG	0x4B
+ #define AMD_SPI_STATUS_REG	0x4C
+ 
++#define AMD_SPI_FIFO_SIZE	70
+ #define AMD_SPI_MEM_SIZE	200
+ 
+ /* M_CMD OP codes for SPI */
+@@ -245,6 +246,11 @@ static int amd_spi_master_transfer(struct spi_master *master,
+ 	return 0;
+ }
+ 
++static size_t amd_spi_max_transfer_size(struct spi_device *spi)
++{
++	return AMD_SPI_FIFO_SIZE;
++}
++
+ static int amd_spi_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -278,6 +284,8 @@ static int amd_spi_probe(struct platform_device *pdev)
+ 	master->flags = SPI_MASTER_HALF_DUPLEX;
+ 	master->setup = amd_spi_master_setup;
+ 	master->transfer_one_message = amd_spi_master_transfer;
++	master->max_transfer_size = amd_spi_max_transfer_size;
++	master->max_message_size = amd_spi_max_transfer_size;
+ 
+ 	/* Register the controller with SPI framework */
+ 	err = devm_spi_register_master(dev, master);
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index cae61d1ebec5a..98ce484f1089d 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -23,6 +23,7 @@
+ #include <linux/sysrq.h>
+ #include <linux/delay.h>
+ #include <linux/platform_device.h>
++#include <linux/pm_runtime.h>
+ #include <linux/tty.h>
+ #include <linux/ratelimit.h>
+ #include <linux/tty_flip.h>
+@@ -571,6 +572,9 @@ serial8250_register_ports(struct uart_driver *drv, struct device *dev)
+ 
+ 		up->port.dev = dev;
+ 
++		if (uart_console_enabled(&up->port))
++			pm_runtime_get_sync(up->port.dev);
++
+ 		serial8250_apply_quirks(up);
+ 		uart_add_one_port(drv, &up->port);
+ 	}
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 9cf5177815a87..43884e8b51610 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2953,8 +2953,10 @@ static int serial8250_request_std_resource(struct uart_8250_port *up)
+ 	case UPIO_MEM32BE:
+ 	case UPIO_MEM16:
+ 	case UPIO_MEM:
+-		if (!port->mapbase)
++		if (!port->mapbase) {
++			ret = -EINVAL;
+ 			break;
++		}
+ 
+ 		if (!request_mem_region(port->mapbase, size, "serial")) {
+ 			ret = -EBUSY;
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 07b19e97f850d..9900ee3f90683 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -1326,6 +1326,15 @@ static void pl011_stop_rx(struct uart_port *port)
+ 	pl011_dma_rx_stop(uap);
+ }
+ 
++static void pl011_throttle_rx(struct uart_port *port)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&port->lock, flags);
++	pl011_stop_rx(port);
++	spin_unlock_irqrestore(&port->lock, flags);
++}
++
+ static void pl011_enable_ms(struct uart_port *port)
+ {
+ 	struct uart_amba_port *uap =
+@@ -1717,9 +1726,10 @@ static int pl011_allocate_irq(struct uart_amba_port *uap)
+  */
+ static void pl011_enable_interrupts(struct uart_amba_port *uap)
+ {
++	unsigned long flags;
+ 	unsigned int i;
+ 
+-	spin_lock_irq(&uap->port.lock);
++	spin_lock_irqsave(&uap->port.lock, flags);
+ 
+ 	/* Clear out any spuriously appearing RX interrupts */
+ 	pl011_write(UART011_RTIS | UART011_RXIS, uap, REG_ICR);
+@@ -1741,7 +1751,14 @@ static void pl011_enable_interrupts(struct uart_amba_port *uap)
+ 	if (!pl011_dma_rx_running(uap))
+ 		uap->im |= UART011_RXIM;
+ 	pl011_write(uap->im, uap, REG_IMSC);
+-	spin_unlock_irq(&uap->port.lock);
++	spin_unlock_irqrestore(&uap->port.lock, flags);
++}
++
++static void pl011_unthrottle_rx(struct uart_port *port)
++{
++	struct uart_amba_port *uap = container_of(port, struct uart_amba_port, port);
++
++	pl011_enable_interrupts(uap);
+ }
+ 
+ static int pl011_startup(struct uart_port *port)
+@@ -2116,6 +2133,8 @@ static const struct uart_ops amba_pl011_pops = {
+ 	.stop_tx	= pl011_stop_tx,
+ 	.start_tx	= pl011_start_tx,
+ 	.stop_rx	= pl011_stop_rx,
++	.throttle	= pl011_throttle_rx,
++	.unthrottle	= pl011_unthrottle_rx,
+ 	.enable_ms	= pl011_enable_ms,
+ 	.break_ctl	= pl011_break_ctl,
+ 	.startup	= pl011_startup,
+diff --git a/drivers/tty/serial/samsung_tty.c b/drivers/tty/serial/samsung_tty.c
+index 81faead3c4f80..263c33260d8a8 100644
+--- a/drivers/tty/serial/samsung_tty.c
++++ b/drivers/tty/serial/samsung_tty.c
+@@ -361,8 +361,7 @@ static void enable_tx_dma(struct s3c24xx_uart_port *ourport)
+ 	/* Enable tx dma mode */
+ 	ucon = rd_regl(port, S3C2410_UCON);
+ 	ucon &= ~(S3C64XX_UCON_TXBURST_MASK | S3C64XX_UCON_TXMODE_MASK);
+-	ucon |= (dma_get_cache_alignment() >= 16) ?
+-		S3C64XX_UCON_TXBURST_16 : S3C64XX_UCON_TXBURST_1;
++	ucon |= S3C64XX_UCON_TXBURST_1;
+ 	ucon |= S3C64XX_UCON_TXMODE_DMA;
+ 	wr_regl(port,  S3C2410_UCON, ucon);
+ 
+@@ -634,7 +633,7 @@ static void enable_rx_dma(struct s3c24xx_uart_port *ourport)
+ 			S3C64XX_UCON_DMASUS_EN |
+ 			S3C64XX_UCON_TIMEOUT_EN |
+ 			S3C64XX_UCON_RXMODE_MASK);
+-	ucon |= S3C64XX_UCON_RXBURST_16 |
++	ucon |= S3C64XX_UCON_RXBURST_1 |
+ 			0xf << S3C64XX_UCON_TIMEOUT_SHIFT |
+ 			S3C64XX_UCON_EMPTYINT_EN |
+ 			S3C64XX_UCON_TIMEOUT_EN |
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 32d09d024f6c9..b578f7090b637 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1941,11 +1941,6 @@ static int uart_proc_show(struct seq_file *m, void *v)
+ }
+ #endif
+ 
+-static inline bool uart_console_enabled(struct uart_port *port)
+-{
+-	return uart_console(port) && (port->cons->flags & CON_ENABLED);
+-}
+-
+ static void uart_port_spin_lock_init(struct uart_port *port)
+ {
+ 	spin_lock_init(&port->lock);
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 8cd9e5b077b64..9377da1e97c08 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -70,6 +70,8 @@ static void stm32_usart_config_reg_rs485(u32 *cr1, u32 *cr3, u32 delay_ADE,
+ 	*cr3 |= USART_CR3_DEM;
+ 	over8 = *cr1 & USART_CR1_OVER8;
+ 
++	*cr1 &= ~(USART_CR1_DEDT_MASK | USART_CR1_DEAT_MASK);
++
+ 	if (over8)
+ 		rs485_deat_dedt = delay_ADE * baud * 8;
+ 	else
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 0a6336d54a650..f043fd7e0f924 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -855,7 +855,7 @@ static void delete_char(struct vc_data *vc, unsigned int nr)
+ 	unsigned short *p = (unsigned short *) vc->vc_pos;
+ 
+ 	vc_uniscr_delete(vc, nr);
+-	scr_memcpyw(p, p + nr, (vc->vc_cols - vc->state.x - nr) * 2);
++	scr_memmovew(p, p + nr, (vc->vc_cols - vc->state.x - nr) * 2);
+ 	scr_memsetw(p + vc->vc_cols - vc->state.x - nr, vc->vc_video_erase_char,
+ 			nr * 2);
+ 	vc->vc_need_wrap = 0;
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 05fe6ded66a52..94e9d336855bc 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3781,7 +3781,6 @@ static irqreturn_t dwc3_process_event_buf(struct dwc3_event_buffer *evt)
+ 	}
+ 
+ 	evt->count = 0;
+-	evt->flags &= ~DWC3_EVENT_PENDING;
+ 	ret = IRQ_HANDLED;
+ 
+ 	/* Unmask interrupt */
+@@ -3794,6 +3793,9 @@ static irqreturn_t dwc3_process_event_buf(struct dwc3_event_buffer *evt)
+ 		dwc3_writel(dwc->regs, DWC3_DEV_IMOD(0), dwc->imod_interval);
+ 	}
+ 
++	/* Keep the clearing of DWC3_EVENT_PENDING at the end */
++	evt->flags &= ~DWC3_EVENT_PENDING;
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index b74621dc2a658..8f980fc6efc19 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1023,6 +1023,9 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(FTDI_VID, CHETCO_SEASMART_DISPLAY_PID) },
+ 	{ USB_DEVICE(FTDI_VID, CHETCO_SEASMART_LITE_PID) },
+ 	{ USB_DEVICE(FTDI_VID, CHETCO_SEASMART_ANALOG_PID) },
++	/* Belimo Automation devices */
++	{ USB_DEVICE(FTDI_VID, BELIMO_ZTH_PID) },
++	{ USB_DEVICE(FTDI_VID, BELIMO_ZIP_PID) },
+ 	/* ICP DAS I-756xU devices */
+ 	{ USB_DEVICE(ICPDAS_VID, ICPDAS_I7560U_PID) },
+ 	{ USB_DEVICE(ICPDAS_VID, ICPDAS_I7561U_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index d1a9564697a4b..4e92c165c86bf 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -1568,6 +1568,12 @@
+ #define CHETCO_SEASMART_LITE_PID	0xA5AE /* SeaSmart Lite USB Adapter */
+ #define CHETCO_SEASMART_ANALOG_PID	0xA5AF /* SeaSmart Analog Adapter */
+ 
++/*
++ * Belimo Automation
++ */
++#define BELIMO_ZTH_PID			0x8050
++#define BELIMO_ZIP_PID			0xC811
++
+ /*
+  * Unjo AB
+  */
+diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
+index c7d44daa05c4a..9d3a35b2046d3 100644
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -1444,6 +1444,7 @@ void typec_set_pwr_opmode(struct typec_port *port,
+ 			partner->usb_pd = 1;
+ 			sysfs_notify(&partner_dev->kobj, NULL,
+ 				     "supports_usb_power_delivery");
++			kobject_uevent(&partner_dev->kobj, KOBJ_CHANGE);
+ 		}
+ 		put_device(partner_dev);
+ 	}
+diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
+index 5c970e6f664c8..e8ef0c66e558f 100644
+--- a/drivers/virtio/virtio_mmio.c
++++ b/drivers/virtio/virtio_mmio.c
+@@ -62,6 +62,7 @@
+ #include <linux/list.h>
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
++#include <linux/pm.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+ #include <linux/virtio.h>
+@@ -543,6 +544,28 @@ static const struct virtio_config_ops virtio_mmio_config_ops = {
+ 	.get_shm_region = vm_get_shm_region,
+ };
+ 
++#ifdef CONFIG_PM_SLEEP
++static int virtio_mmio_freeze(struct device *dev)
++{
++	struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev);
++
++	return virtio_device_freeze(&vm_dev->vdev);
++}
++
++static int virtio_mmio_restore(struct device *dev)
++{
++	struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev);
++
++	if (vm_dev->version == 1)
++		writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE);
++
++	return virtio_device_restore(&vm_dev->vdev);
++}
++
++static const struct dev_pm_ops virtio_mmio_pm_ops = {
++	SET_SYSTEM_SLEEP_PM_OPS(virtio_mmio_freeze, virtio_mmio_restore)
++};
++#endif
+ 
+ static void virtio_mmio_release_dev(struct device *_d)
+ {
+@@ -787,6 +810,9 @@ static struct platform_driver virtio_mmio_driver = {
+ 		.name	= "virtio-mmio",
+ 		.of_match_table	= virtio_mmio_match,
+ 		.acpi_match_table = ACPI_PTR(virtio_mmio_acpi_match),
++#ifdef CONFIG_PM_SLEEP
++		.pm	= &virtio_mmio_pm_ops,
++#endif
+ 	},
+ };
+ 
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 4a5248097d7aa..779b7745cdc48 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -7480,7 +7480,19 @@ static int btrfs_dio_iomap_begin(struct inode *inode, loff_t start,
+ 	if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) ||
+ 	    em->block_start == EXTENT_MAP_INLINE) {
+ 		free_extent_map(em);
+-		ret = -ENOTBLK;
++		/*
++		 * If we are in a NOWAIT context, return -EAGAIN in order to
++		 * fallback to buffered IO. This is not only because we can
++		 * block with buffered IO (no support for NOWAIT semantics at
++		 * the moment) but also to avoid returning short reads to user
++		 * space - this happens if we were able to read some data from
++		 * previous non-compressed extents and then when we fallback to
++		 * buffered IO, at btrfs_file_read_iter() by calling
++		 * filemap_read(), we fail to fault in pages for the read buffer,
++		 * in which case filemap_read() returns a short read (the number
++		 * of bytes previously read is > 0, so it does not return -EFAULT).
++		 */
++		ret = (flags & IOMAP_NOWAIT) ? -EAGAIN : -ENOTBLK;
+ 		goto unlock_err;
+ 	}
+ 
+diff --git a/fs/exec.c b/fs/exec.c
+index bcd86f2d176c3..d37a82206fa31 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1286,7 +1286,7 @@ int begin_new_exec(struct linux_binprm * bprm)
+ 	bprm->mm = NULL;
+ 
+ #ifdef CONFIG_POSIX_TIMERS
+-	exit_itimers(me->signal);
++	exit_itimers(me);
+ 	flush_itimer_signals();
+ #endif
+ 
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 6641b74ad4620..0f49bf547b848 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4691,16 +4691,17 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ 		return -EOPNOTSUPP;
+ 
+ 	ext4_fc_start_update(inode);
++	inode_lock(inode);
++	ret = ext4_convert_inline_data(inode);
++	inode_unlock(inode);
++	if (ret)
++		goto exit;
+ 
+ 	if (mode & FALLOC_FL_PUNCH_HOLE) {
+ 		ret = ext4_punch_hole(file, offset, len);
+ 		goto exit;
+ 	}
+ 
+-	ret = ext4_convert_inline_data(inode);
+-	if (ret)
+-		goto exit;
+-
+ 	if (mode & FALLOC_FL_COLLAPSE_RANGE) {
+ 		ret = ext4_collapse_range(file, offset, len);
+ 		goto exit;
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 72e3f55f1e07a..bd0d0a10ca429 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4042,15 +4042,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
+ 
+ 	trace_ext4_punch_hole(inode, offset, length, 0);
+ 
+-	ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+-	if (ext4_has_inline_data(inode)) {
+-		down_write(&EXT4_I(inode)->i_mmap_sem);
+-		ret = ext4_convert_inline_data(inode);
+-		up_write(&EXT4_I(inode)->i_mmap_sem);
+-		if (ret)
+-			return ret;
+-	}
+-
+ 	/*
+ 	 * Write out all dirty pages to avoid race conditions
+ 	 * Then release them.
+diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h
+index 9ca165bc97d2b..ace27a89fbb07 100644
+--- a/fs/nilfs2/nilfs.h
++++ b/fs/nilfs2/nilfs.h
+@@ -198,6 +198,9 @@ static inline int nilfs_acl_chmod(struct inode *inode)
+ 
+ static inline int nilfs_init_acl(struct inode *inode, struct inode *dir)
+ {
++	if (S_ISLNK(inode->i_mode))
++		return 0;
++
+ 	inode->i_mode &= ~current_umask();
+ 	return 0;
+ }
+diff --git a/fs/remap_range.c b/fs/remap_range.c
+index e6099beefa97d..e8e00e217d6c9 100644
+--- a/fs/remap_range.c
++++ b/fs/remap_range.c
+@@ -71,7 +71,8 @@ static int generic_remap_checks(struct file *file_in, loff_t pos_in,
+ 	 * Otherwise, make sure the count is also block-aligned, having
+ 	 * already confirmed the starting offsets' block alignment.
+ 	 */
+-	if (pos_in + count == size_in) {
++	if (pos_in + count == size_in &&
++	    (!(remap_flags & REMAP_FILE_DEDUP) || pos_out + count == size_out)) {
+ 		bcount = ALIGN(size_in, bs) - pos_in;
+ 	} else {
+ 		if (!IS_ALIGNED(count, bs))
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index fee0b5547cd0a..c9fafca1c30c5 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -260,7 +260,8 @@ struct css_set {
+ 	 * List of csets participating in the on-going migration either as
+ 	 * source or destination.  Protected by cgroup_mutex.
+ 	 */
+-	struct list_head mg_preload_node;
++	struct list_head mg_src_preload_node;
++	struct list_head mg_dst_preload_node;
+ 	struct list_head mg_node;
+ 
+ 	/*
+diff --git a/include/linux/kexec.h b/include/linux/kexec.h
+index 037192c3a46f7..a1f12e959bbad 100644
+--- a/include/linux/kexec.h
++++ b/include/linux/kexec.h
+@@ -442,6 +442,12 @@ static inline int kexec_crash_loaded(void) { return 0; }
+ #define kexec_in_progress false
+ #endif /* CONFIG_KEXEC_CORE */
+ 
++#ifdef CONFIG_KEXEC_SIG
++void set_kexec_sig_enforced(void);
++#else
++static inline void set_kexec_sig_enforced(void) {}
++#endif
++
+ #endif /* !defined(__ASSEBMLY__) */
+ 
+ #endif /* LINUX_KEXEC_H */
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index fa75f325dad53..eeacb4a16fe3f 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -82,7 +82,7 @@ static inline void exit_thread(struct task_struct *tsk)
+ extern void do_group_exit(int);
+ 
+ extern void exit_files(struct task_struct *);
+-extern void exit_itimers(struct signal_struct *);
++extern void exit_itimers(struct task_struct *);
+ 
+ extern pid_t kernel_clone(struct kernel_clone_args *kargs);
+ struct task_struct *fork_idle(int);
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index 35b26743dbb28..9c1292ea47fdc 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -394,6 +394,11 @@ static const bool earlycon_acpi_spcr_enable EARLYCON_USED_OR_UNUSED;
+ static inline int setup_earlycon(char *buf) { return 0; }
+ #endif
+ 
++static inline bool uart_console_enabled(struct uart_port *port)
++{
++	return uart_console(port) && (port->cons->flags & CON_ENABLED);
++}
++
+ struct uart_port *uart_get_console(struct uart_port *ports, int nr,
+ 				   struct console *c);
+ int uart_parse_earlycon(char *p, unsigned char *iotype, resource_size_t *addr,
+diff --git a/include/net/raw.h b/include/net/raw.h
+index 8ad8df5948536..c51a635671a73 100644
+--- a/include/net/raw.h
++++ b/include/net/raw.h
+@@ -75,7 +75,7 @@ static inline bool raw_sk_bound_dev_eq(struct net *net, int bound_dev_if,
+ 				       int dif, int sdif)
+ {
+ #if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV)
+-	return inet_bound_dev_eq(!!net->ipv4.sysctl_raw_l3mdev_accept,
++	return inet_bound_dev_eq(READ_ONCE(net->ipv4.sysctl_raw_l3mdev_accept),
+ 				 bound_dev_if, dif, sdif);
+ #else
+ 	return inet_bound_dev_eq(true, bound_dev_if, dif, sdif);
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 2c11eb4abdd24..83854cec4a471 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1445,7 +1445,7 @@ void __sk_mem_reclaim(struct sock *sk, int amount);
+ /* sysctl_mem values are in pages, we convert them in SK_MEM_QUANTUM units */
+ static inline long sk_prot_mem_limits(const struct sock *sk, int index)
+ {
+-	long val = sk->sk_prot->sysctl_mem[index];
++	long val = READ_ONCE(sk->sk_prot->sysctl_mem[index]);
+ 
+ #if PAGE_SIZE > SK_MEM_QUANTUM
+ 	val <<= PAGE_SHIFT - SK_MEM_QUANTUM_SHIFT;
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 745b3bc6ce91d..d9cb597cab46a 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -707,7 +707,7 @@ int tls_sw_fallback_init(struct sock *sk,
+ 			 struct tls_crypto_info *crypto_info);
+ 
+ #ifdef CONFIG_TLS_DEVICE
+-void tls_device_init(void);
++int tls_device_init(void);
+ void tls_device_cleanup(void);
+ void tls_device_sk_destruct(struct sock *sk);
+ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx);
+@@ -727,7 +727,7 @@ static inline bool tls_is_sk_rx_device_offloaded(struct sock *sk)
+ 	return tls_get_ctx(sk)->rx_conf == TLS_HW;
+ }
+ #else
+-static inline void tls_device_init(void) {}
++static inline int tls_device_init(void) { return 0; }
+ static inline void tls_device_cleanup(void) {}
+ 
+ static inline int
+diff --git a/include/trace/events/sock.h b/include/trace/events/sock.h
+index a966d4b5ab377..905b151bc3dd9 100644
+--- a/include/trace/events/sock.h
++++ b/include/trace/events/sock.h
+@@ -98,7 +98,7 @@ TRACE_EVENT(sock_exceed_buf_limit,
+ 
+ 	TP_STRUCT__entry(
+ 		__array(char, name, 32)
+-		__field(long *, sysctl_mem)
++		__array(long, sysctl_mem, 3)
+ 		__field(long, allocated)
+ 		__field(int, sysctl_rmem)
+ 		__field(int, rmem_alloc)
+@@ -110,7 +110,9 @@ TRACE_EVENT(sock_exceed_buf_limit,
+ 
+ 	TP_fast_assign(
+ 		strncpy(__entry->name, prot->name, 32);
+-		__entry->sysctl_mem = prot->sysctl_mem;
++		__entry->sysctl_mem[0] = READ_ONCE(prot->sysctl_mem[0]);
++		__entry->sysctl_mem[1] = READ_ONCE(prot->sysctl_mem[1]);
++		__entry->sysctl_mem[2] = READ_ONCE(prot->sysctl_mem[2]);
+ 		__entry->allocated = allocated;
+ 		__entry->sysctl_rmem = sk_get_rmem0(sk, prot);
+ 		__entry->rmem_alloc = atomic_read(&sk->sk_rmem_alloc);
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 0853289d321a5..5046c99deba86 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -736,7 +736,8 @@ struct css_set init_css_set = {
+ 	.task_iters		= LIST_HEAD_INIT(init_css_set.task_iters),
+ 	.threaded_csets		= LIST_HEAD_INIT(init_css_set.threaded_csets),
+ 	.cgrp_links		= LIST_HEAD_INIT(init_css_set.cgrp_links),
+-	.mg_preload_node	= LIST_HEAD_INIT(init_css_set.mg_preload_node),
++	.mg_src_preload_node	= LIST_HEAD_INIT(init_css_set.mg_src_preload_node),
++	.mg_dst_preload_node	= LIST_HEAD_INIT(init_css_set.mg_dst_preload_node),
+ 	.mg_node		= LIST_HEAD_INIT(init_css_set.mg_node),
+ 
+ 	/*
+@@ -1211,7 +1212,8 @@ static struct css_set *find_css_set(struct css_set *old_cset,
+ 	INIT_LIST_HEAD(&cset->threaded_csets);
+ 	INIT_HLIST_NODE(&cset->hlist);
+ 	INIT_LIST_HEAD(&cset->cgrp_links);
+-	INIT_LIST_HEAD(&cset->mg_preload_node);
++	INIT_LIST_HEAD(&cset->mg_src_preload_node);
++	INIT_LIST_HEAD(&cset->mg_dst_preload_node);
+ 	INIT_LIST_HEAD(&cset->mg_node);
+ 
+ 	/* Copy the set of subsystem state objects generated in
+@@ -2556,21 +2558,27 @@ int cgroup_migrate_vet_dst(struct cgroup *dst_cgrp)
+  */
+ void cgroup_migrate_finish(struct cgroup_mgctx *mgctx)
+ {
+-	LIST_HEAD(preloaded);
+ 	struct css_set *cset, *tmp_cset;
+ 
+ 	lockdep_assert_held(&cgroup_mutex);
+ 
+ 	spin_lock_irq(&css_set_lock);
+ 
+-	list_splice_tail_init(&mgctx->preloaded_src_csets, &preloaded);
+-	list_splice_tail_init(&mgctx->preloaded_dst_csets, &preloaded);
++	list_for_each_entry_safe(cset, tmp_cset, &mgctx->preloaded_src_csets,
++				 mg_src_preload_node) {
++		cset->mg_src_cgrp = NULL;
++		cset->mg_dst_cgrp = NULL;
++		cset->mg_dst_cset = NULL;
++		list_del_init(&cset->mg_src_preload_node);
++		put_css_set_locked(cset);
++	}
+ 
+-	list_for_each_entry_safe(cset, tmp_cset, &preloaded, mg_preload_node) {
++	list_for_each_entry_safe(cset, tmp_cset, &mgctx->preloaded_dst_csets,
++				 mg_dst_preload_node) {
+ 		cset->mg_src_cgrp = NULL;
+ 		cset->mg_dst_cgrp = NULL;
+ 		cset->mg_dst_cset = NULL;
+-		list_del_init(&cset->mg_preload_node);
++		list_del_init(&cset->mg_dst_preload_node);
+ 		put_css_set_locked(cset);
+ 	}
+ 
+@@ -2612,7 +2620,7 @@ void cgroup_migrate_add_src(struct css_set *src_cset,
+ 
+ 	src_cgrp = cset_cgroup_from_root(src_cset, dst_cgrp->root);
+ 
+-	if (!list_empty(&src_cset->mg_preload_node))
++	if (!list_empty(&src_cset->mg_src_preload_node))
+ 		return;
+ 
+ 	WARN_ON(src_cset->mg_src_cgrp);
+@@ -2623,7 +2631,7 @@ void cgroup_migrate_add_src(struct css_set *src_cset,
+ 	src_cset->mg_src_cgrp = src_cgrp;
+ 	src_cset->mg_dst_cgrp = dst_cgrp;
+ 	get_css_set(src_cset);
+-	list_add_tail(&src_cset->mg_preload_node, &mgctx->preloaded_src_csets);
++	list_add_tail(&src_cset->mg_src_preload_node, &mgctx->preloaded_src_csets);
+ }
+ 
+ /**
+@@ -2648,7 +2656,7 @@ int cgroup_migrate_prepare_dst(struct cgroup_mgctx *mgctx)
+ 
+ 	/* look up the dst cset for each src cset and link it to src */
+ 	list_for_each_entry_safe(src_cset, tmp_cset, &mgctx->preloaded_src_csets,
+-				 mg_preload_node) {
++				 mg_src_preload_node) {
+ 		struct css_set *dst_cset;
+ 		struct cgroup_subsys *ss;
+ 		int ssid;
+@@ -2667,7 +2675,7 @@ int cgroup_migrate_prepare_dst(struct cgroup_mgctx *mgctx)
+ 		if (src_cset == dst_cset) {
+ 			src_cset->mg_src_cgrp = NULL;
+ 			src_cset->mg_dst_cgrp = NULL;
+-			list_del_init(&src_cset->mg_preload_node);
++			list_del_init(&src_cset->mg_src_preload_node);
+ 			put_css_set(src_cset);
+ 			put_css_set(dst_cset);
+ 			continue;
+@@ -2675,8 +2683,8 @@ int cgroup_migrate_prepare_dst(struct cgroup_mgctx *mgctx)
+ 
+ 		src_cset->mg_dst_cset = dst_cset;
+ 
+-		if (list_empty(&dst_cset->mg_preload_node))
+-			list_add_tail(&dst_cset->mg_preload_node,
++		if (list_empty(&dst_cset->mg_dst_preload_node))
++			list_add_tail(&dst_cset->mg_dst_preload_node,
+ 				      &mgctx->preloaded_dst_csets);
+ 		else
+ 			put_css_set(dst_cset);
+@@ -2922,7 +2930,8 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
+ 		goto out_finish;
+ 
+ 	spin_lock_irq(&css_set_lock);
+-	list_for_each_entry(src_cset, &mgctx.preloaded_src_csets, mg_preload_node) {
++	list_for_each_entry(src_cset, &mgctx.preloaded_src_csets,
++			    mg_src_preload_node) {
+ 		struct task_struct *task, *ntask;
+ 
+ 		/* all tasks in src_csets need to be migrated */
+diff --git a/kernel/exit.c b/kernel/exit.c
+index d13d67fc5f4e2..ab900b661867f 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -782,7 +782,7 @@ void __noreturn do_exit(long code)
+ 
+ #ifdef CONFIG_POSIX_TIMERS
+ 		hrtimer_cancel(&tsk->signal->real_timer);
+-		exit_itimers(tsk->signal);
++		exit_itimers(tsk);
+ #endif
+ 		if (tsk->mm)
+ 			setmax_mm_hiwater_rss(&tsk->signal->maxrss, tsk->mm);
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index 2e0f0b3fb9ab0..fff11916aba33 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -29,6 +29,15 @@
+ #include <linux/vmalloc.h>
+ #include "kexec_internal.h"
+ 
++#ifdef CONFIG_KEXEC_SIG
++static bool sig_enforce = IS_ENABLED(CONFIG_KEXEC_SIG_FORCE);
++
++void set_kexec_sig_enforced(void)
++{
++	sig_enforce = true;
++}
++#endif
++
+ static int kexec_calculate_store_digests(struct kimage *image);
+ 
+ /*
+@@ -159,7 +168,7 @@ kimage_validate_signature(struct kimage *image)
+ 					   image->kernel_buf_len);
+ 	if (ret) {
+ 
+-		if (IS_ENABLED(CONFIG_KEXEC_SIG_FORCE)) {
++		if (sig_enforce) {
+ 			pr_notice("Enforced kernel signature verification failed (%d).\n", ret);
+ 			return ret;
+ 		}
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 6bb2df4f6109d..d05f783d5a5e6 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1912,12 +1912,12 @@ bool do_notify_parent(struct task_struct *tsk, int sig)
+ 	bool autoreap = false;
+ 	u64 utime, stime;
+ 
+-	BUG_ON(sig == -1);
++	WARN_ON_ONCE(sig == -1);
+ 
+- 	/* do_notify_parent_cldstop should have been called instead.  */
+- 	BUG_ON(task_is_stopped_or_traced(tsk));
++	/* do_notify_parent_cldstop should have been called instead.  */
++	WARN_ON_ONCE(task_is_stopped_or_traced(tsk));
+ 
+-	BUG_ON(!tsk->ptrace &&
++	WARN_ON_ONCE(!tsk->ptrace &&
+ 	       (tsk->group_leader != tsk || !thread_group_empty(tsk)));
+ 
+ 	/* Wake up all pidfd waiters */
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 8832440a4938e..f0dd1a3b66eb9 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -557,14 +557,14 @@ static int do_proc_dointvec_conv(bool *negp, unsigned long *lvalp,
+ 		if (*negp) {
+ 			if (*lvalp > (unsigned long) INT_MAX + 1)
+ 				return -EINVAL;
+-			*valp = -*lvalp;
++			WRITE_ONCE(*valp, -*lvalp);
+ 		} else {
+ 			if (*lvalp > (unsigned long) INT_MAX)
+ 				return -EINVAL;
+-			*valp = *lvalp;
++			WRITE_ONCE(*valp, *lvalp);
+ 		}
+ 	} else {
+-		int val = *valp;
++		int val = READ_ONCE(*valp);
+ 		if (val < 0) {
+ 			*negp = true;
+ 			*lvalp = -(unsigned long)val;
+@@ -583,9 +583,9 @@ static int do_proc_douintvec_conv(unsigned long *lvalp,
+ 	if (write) {
+ 		if (*lvalp > UINT_MAX)
+ 			return -EINVAL;
+-		*valp = *lvalp;
++		WRITE_ONCE(*valp, *lvalp);
+ 	} else {
+-		unsigned int val = *valp;
++		unsigned int val = READ_ONCE(*valp);
+ 		*lvalp = (unsigned long)val;
+ 	}
+ 	return 0;
+@@ -959,7 +959,7 @@ static int do_proc_dointvec_minmax_conv(bool *negp, unsigned long *lvalp,
+ 		if ((param->min && *param->min > tmp) ||
+ 		    (param->max && *param->max < tmp))
+ 			return -EINVAL;
+-		*valp = tmp;
++		WRITE_ONCE(*valp, tmp);
+ 	}
+ 
+ 	return 0;
+@@ -1025,7 +1025,7 @@ static int do_proc_douintvec_minmax_conv(unsigned long *lvalp,
+ 		    (param->max && *param->max < tmp))
+ 			return -ERANGE;
+ 
+-		*valp = tmp;
++		WRITE_ONCE(*valp, tmp);
+ 	}
+ 
+ 	return 0;
+@@ -1193,9 +1193,9 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table,
+ 				err = -EINVAL;
+ 				break;
+ 			}
+-			*i = val;
++			WRITE_ONCE(*i, val);
+ 		} else {
+-			val = convdiv * (*i) / convmul;
++			val = convdiv * READ_ONCE(*i) / convmul;
+ 			if (!first)
+ 				proc_put_char(&buffer, &left, '\t');
+ 			proc_put_long(&buffer, &left, val, false);
+@@ -1276,9 +1276,12 @@ static int do_proc_dointvec_jiffies_conv(bool *negp, unsigned long *lvalp,
+ 	if (write) {
+ 		if (*lvalp > INT_MAX / HZ)
+ 			return 1;
+-		*valp = *negp ? -(*lvalp*HZ) : (*lvalp*HZ);
++		if (*negp)
++			WRITE_ONCE(*valp, -*lvalp * HZ);
++		else
++			WRITE_ONCE(*valp, *lvalp * HZ);
+ 	} else {
+-		int val = *valp;
++		int val = READ_ONCE(*valp);
+ 		unsigned long lval;
+ 		if (val < 0) {
+ 			*negp = true;
+@@ -1324,9 +1327,9 @@ static int do_proc_dointvec_ms_jiffies_conv(bool *negp, unsigned long *lvalp,
+ 
+ 		if (jif > INT_MAX)
+ 			return 1;
+-		*valp = (int)jif;
++		WRITE_ONCE(*valp, (int)jif);
+ 	} else {
+-		int val = *valp;
++		int val = READ_ONCE(*valp);
+ 		unsigned long lval;
+ 		if (val < 0) {
+ 			*negp = true;
+@@ -1394,8 +1397,8 @@ int proc_dointvec_userhz_jiffies(struct ctl_table *table, int write,
+  * @ppos: the current position in the file
+  *
+  * Reads/writes up to table->maxlen/sizeof(unsigned int) integer
+- * values from/to the user buffer, treated as an ASCII string. 
+- * The values read are assumed to be in 1/1000 seconds, and 
++ * values from/to the user buffer, treated as an ASCII string.
++ * The values read are assumed to be in 1/1000 seconds, and
+  * are converted into jiffies.
+  *
+  * Returns 0 on success.
+@@ -2811,6 +2814,17 @@ static struct ctl_table vm_table[] = {
+ 		.extra1		= SYSCTL_ZERO,
+ 		.extra2		= &two_hundred,
+ 	},
++#ifdef CONFIG_NUMA
++	{
++		.procname	= "numa_stat",
++		.data		= &sysctl_vm_numa_stat,
++		.maxlen		= sizeof(int),
++		.mode		= 0644,
++		.proc_handler	= sysctl_vm_numa_stat_handler,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_ONE,
++	},
++#endif
+ #ifdef CONFIG_HUGETLB_PAGE
+ 	{
+ 		.procname	= "nr_hugepages",
+@@ -2827,15 +2841,6 @@ static struct ctl_table vm_table[] = {
+ 		.mode           = 0644,
+ 		.proc_handler   = &hugetlb_mempolicy_sysctl_handler,
+ 	},
+-	{
+-		.procname		= "numa_stat",
+-		.data			= &sysctl_vm_numa_stat,
+-		.maxlen			= sizeof(int),
+-		.mode			= 0644,
+-		.proc_handler	= sysctl_vm_numa_stat_handler,
+-		.extra1			= SYSCTL_ZERO,
+-		.extra2			= SYSCTL_ONE,
+-	},
+ #endif
+ 	 {
+ 		.procname	= "hugetlb_shm_group",
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index dd5697d7347b1..b624788023d8f 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -1051,15 +1051,24 @@ retry_delete:
+ }
+ 
+ /*
+- * This is called by do_exit or de_thread, only when there are no more
+- * references to the shared signal_struct.
++ * This is called by do_exit or de_thread, only when nobody else can
++ * modify the signal->posix_timers list. Yet we need sighand->siglock
++ * to prevent the race with /proc/pid/timers.
+  */
+-void exit_itimers(struct signal_struct *sig)
++void exit_itimers(struct task_struct *tsk)
+ {
++	struct list_head timers;
+ 	struct k_itimer *tmr;
+ 
+-	while (!list_empty(&sig->posix_timers)) {
+-		tmr = list_entry(sig->posix_timers.next, struct k_itimer, list);
++	if (list_empty(&tsk->signal->posix_timers))
++		return;
++
++	spin_lock_irq(&tsk->sighand->siglock);
++	list_replace_init(&tsk->signal->posix_timers, &timers);
++	spin_unlock_irq(&tsk->sighand->siglock);
++
++	while (!list_empty(&timers)) {
++		tmr = list_first_entry(&timers, struct k_itimer, list);
+ 		itimer_delete(tmr);
+ 	}
+ }
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 3ed1723b68d56..fd54168294456 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -3943,6 +3943,8 @@ static int parse_var_defs(struct hist_trigger_data *hist_data)
+ 
+ 			s = kstrdup(field_str, GFP_KERNEL);
+ 			if (!s) {
++				kfree(hist_data->attrs->var_defs.name[n_vars]);
++				hist_data->attrs->var_defs.name[n_vars] = NULL;
+ 				ret = -ENOMEM;
+ 				goto free;
+ 			}
+diff --git a/mm/memory.c b/mm/memory.c
+index 72236b1ce5903..cc50fa0f4590d 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -4365,6 +4365,19 @@ static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
+ 
+ static vm_fault_t create_huge_pud(struct vm_fault *vmf)
+ {
++#if defined(CONFIG_TRANSPARENT_HUGEPAGE) &&			\
++	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
++	/* No support for anonymous transparent PUD pages yet */
++	if (vma_is_anonymous(vmf->vma))
++		return VM_FAULT_FALLBACK;
++	if (vmf->vma->vm_ops->huge_fault)
++		return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PUD);
++#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
++	return VM_FAULT_FALLBACK;
++}
++
++static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
++{
+ #if defined(CONFIG_TRANSPARENT_HUGEPAGE) &&			\
+ 	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
+ 	/* No support for anonymous transparent PUD pages yet */
+@@ -4379,19 +4392,7 @@ static vm_fault_t create_huge_pud(struct vm_fault *vmf)
+ split:
+ 	/* COW or write-notify not handled on PUD level: split pud.*/
+ 	__split_huge_pud(vmf->vma, vmf->pud, vmf->address);
+-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+-	return VM_FAULT_FALLBACK;
+-}
+-
+-static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
+-{
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-	/* No support for anonymous transparent PUD pages yet */
+-	if (vma_is_anonymous(vmf->vma))
+-		return VM_FAULT_FALLBACK;
+-	if (vmf->vma->vm_ops->huge_fault)
+-		return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PUD);
+-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
++#endif /* CONFIG_TRANSPARENT_HUGEPAGE && CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+ 	return VM_FAULT_FALLBACK;
+ }
+ 
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 68c0d0f928908..10a2c7bca7199 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -1012,9 +1012,24 @@ int br_nf_hook_thresh(unsigned int hook, struct net *net,
+ 		return okfn(net, sk, skb);
+ 
+ 	ops = nf_hook_entries_get_hook_ops(e);
+-	for (i = 0; i < e->num_hook_entries &&
+-	      ops[i]->priority <= NF_BR_PRI_BRNF; i++)
+-		;
++	for (i = 0; i < e->num_hook_entries; i++) {
++		/* These hooks have already been called */
++		if (ops[i]->priority < NF_BR_PRI_BRNF)
++			continue;
++
++		/* These hooks have not been called yet, run them. */
++		if (ops[i]->priority > NF_BR_PRI_BRNF)
++			break;
++
++		/* take a closer look at NF_BR_PRI_BRNF. */
++		if (ops[i]->hook == br_nf_pre_routing) {
++			/* This hook diverted the skb to this function,
++			 * hooks after this have not been run yet.
++			 */
++			i++;
++			break;
++		}
++	}
+ 
+ 	nf_hook_state_init(&state, hook, NFPROTO_BRIDGE, indev, outdev,
+ 			   sk, net, okfn);
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 246947fbc9581..34ae30503ac4f 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -5624,7 +5624,6 @@ static int bpf_push_seg6_encap(struct sk_buff *skb, u32 type, void *hdr, u32 len
+ 	if (err)
+ 		return err;
+ 
+-	ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+ 	skb_set_transport_header(skb, sizeof(struct ipv6hdr));
+ 
+ 	return seg6_lookup_nexthop(skb, NULL, 0);
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 742218594741a..e77283069c7b7 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1245,7 +1245,7 @@ static int inet_sk_reselect_saddr(struct sock *sk)
+ 	if (new_saddr == old_saddr)
+ 		return 0;
+ 
+-	if (sock_net(sk)->ipv4.sysctl_ip_dynaddr > 1) {
++	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_ip_dynaddr) > 1) {
+ 		pr_info("%s(): shifting inet->saddr from %pI4 to %pI4\n",
+ 			__func__, &old_saddr, &new_saddr);
+ 	}
+@@ -1300,7 +1300,7 @@ int inet_sk_rebuild_header(struct sock *sk)
+ 		 * Other protocols have to map its equivalent state to TCP_SYN_SENT.
+ 		 * DCCP maps its DCCP_REQUESTING state to TCP_SYN_SENT. -acme
+ 		 */
+-		if (!sock_net(sk)->ipv4.sysctl_ip_dynaddr ||
++		if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_ip_dynaddr) ||
+ 		    sk->sk_state != TCP_SYN_SENT ||
+ 		    (sk->sk_userlocks & SOCK_BINDADDR_LOCK) ||
+ 		    (err = inet_sk_reselect_saddr(sk)) != 0)
+diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
+index ca217a6f488f6..d4a4160159a92 100644
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -240,7 +240,7 @@ static int cipso_v4_cache_check(const unsigned char *key,
+ 	struct cipso_v4_map_cache_entry *prev_entry = NULL;
+ 	u32 hash;
+ 
+-	if (!cipso_v4_cache_enabled)
++	if (!READ_ONCE(cipso_v4_cache_enabled))
+ 		return -ENOENT;
+ 
+ 	hash = cipso_v4_map_cache_hash(key, key_len);
+@@ -297,13 +297,14 @@ static int cipso_v4_cache_check(const unsigned char *key,
+ int cipso_v4_cache_add(const unsigned char *cipso_ptr,
+ 		       const struct netlbl_lsm_secattr *secattr)
+ {
++	int bkt_size = READ_ONCE(cipso_v4_cache_bucketsize);
+ 	int ret_val = -EPERM;
+ 	u32 bkt;
+ 	struct cipso_v4_map_cache_entry *entry = NULL;
+ 	struct cipso_v4_map_cache_entry *old_entry = NULL;
+ 	u32 cipso_ptr_len;
+ 
+-	if (!cipso_v4_cache_enabled || cipso_v4_cache_bucketsize <= 0)
++	if (!READ_ONCE(cipso_v4_cache_enabled) || bkt_size <= 0)
+ 		return 0;
+ 
+ 	cipso_ptr_len = cipso_ptr[1];
+@@ -323,7 +324,7 @@ int cipso_v4_cache_add(const unsigned char *cipso_ptr,
+ 
+ 	bkt = entry->hash & (CIPSO_V4_CACHE_BUCKETS - 1);
+ 	spin_lock_bh(&cipso_v4_cache[bkt].lock);
+-	if (cipso_v4_cache[bkt].size < cipso_v4_cache_bucketsize) {
++	if (cipso_v4_cache[bkt].size < bkt_size) {
+ 		list_add(&entry->list, &cipso_v4_cache[bkt].list);
+ 		cipso_v4_cache[bkt].size += 1;
+ 	} else {
+@@ -1200,7 +1201,8 @@ static int cipso_v4_gentag_rbm(const struct cipso_v4_doi *doi_def,
+ 		/* This will send packets using the "optimized" format when
+ 		 * possible as specified in  section 3.4.2.6 of the
+ 		 * CIPSO draft. */
+-		if (cipso_v4_rbm_optfmt && ret_val > 0 && ret_val <= 10)
++		if (READ_ONCE(cipso_v4_rbm_optfmt) && ret_val > 0 &&
++		    ret_val <= 10)
+ 			tag_len = 14;
+ 		else
+ 			tag_len = 4 + ret_val;
+@@ -1604,7 +1606,7 @@ int cipso_v4_validate(const struct sk_buff *skb, unsigned char **option)
+ 			 * all the CIPSO validations here but it doesn't
+ 			 * really specify _exactly_ what we need to validate
+ 			 * ... so, just make it a sysctl tunable. */
+-			if (cipso_v4_rbm_strictvalid) {
++			if (READ_ONCE(cipso_v4_rbm_strictvalid)) {
+ 				if (cipso_v4_map_lvl_valid(doi_def,
+ 							   tag[3]) < 0) {
+ 					err_offset = opt_iter + 3;
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index c8c7b76c3b2e2..70c866308abea 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -1229,7 +1229,7 @@ static int fib_check_nh_nongw(struct net *net, struct fib_nh *nh,
+ 
+ 	nh->fib_nh_dev = in_dev->dev;
+ 	dev_hold(nh->fib_nh_dev);
+-	nh->fib_nh_scope = RT_SCOPE_HOST;
++	nh->fib_nh_scope = RT_SCOPE_LINK;
+ 	if (!netif_carrier_ok(nh->fib_nh_dev))
+ 		nh->fib_nh_flags |= RTNH_F_LINKDOWN;
+ 	err = 0;
+@@ -1831,7 +1831,7 @@ int fib_dump_info(struct sk_buff *skb, u32 portid, u32 seq, int event,
+ 			goto nla_put_failure;
+ 		if (nexthop_is_blackhole(fi->nh))
+ 			rtm->rtm_type = RTN_BLACKHOLE;
+-		if (!fi->fib_net->ipv4.sysctl_nexthop_compat_mode)
++		if (!READ_ONCE(fi->fib_net->ipv4.sysctl_nexthop_compat_mode))
+ 			goto offload;
+ 	}
+ 
+diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
+index ffc5332f13906..a28f525e2c474 100644
+--- a/net/ipv4/fib_trie.c
++++ b/net/ipv4/fib_trie.c
+@@ -497,7 +497,7 @@ static void tnode_free(struct key_vector *tn)
+ 		tn = container_of(head, struct tnode, rcu)->kv;
+ 	}
+ 
+-	if (tnode_free_size >= sysctl_fib_sync_mem) {
++	if (tnode_free_size >= READ_ONCE(sysctl_fib_sync_mem)) {
+ 		tnode_free_size = 0;
+ 		synchronize_rcu();
+ 	}
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index cd65d3146c300..0fa0da1d71f57 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -261,11 +261,12 @@ bool icmp_global_allow(void)
+ 	spin_lock(&icmp_global.lock);
+ 	delta = min_t(u32, now - icmp_global.stamp, HZ);
+ 	if (delta >= HZ / 50) {
+-		incr = sysctl_icmp_msgs_per_sec * delta / HZ ;
++		incr = READ_ONCE(sysctl_icmp_msgs_per_sec) * delta / HZ;
+ 		if (incr)
+ 			WRITE_ONCE(icmp_global.stamp, now);
+ 	}
+-	credit = min_t(u32, icmp_global.credit + incr, sysctl_icmp_msgs_burst);
++	credit = min_t(u32, icmp_global.credit + incr,
++		       READ_ONCE(sysctl_icmp_msgs_burst));
+ 	if (credit) {
+ 		/* We want to use a credit of one in average, but need to randomize
+ 		 * it for security reasons.
+@@ -289,7 +290,7 @@ static bool icmpv4_mask_allow(struct net *net, int type, int code)
+ 		return true;
+ 
+ 	/* Limit if icmp type is enabled in ratemask. */
+-	if (!((1 << type) & net->ipv4.sysctl_icmp_ratemask))
++	if (!((1 << type) & READ_ONCE(net->ipv4.sysctl_icmp_ratemask)))
+ 		return true;
+ 
+ 	return false;
+@@ -327,7 +328,8 @@ static bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt,
+ 
+ 	vif = l3mdev_master_ifindex(dst->dev);
+ 	peer = inet_getpeer_v4(net->ipv4.peers, fl4->daddr, vif, 1);
+-	rc = inet_peer_xrlim_allow(peer, net->ipv4.sysctl_icmp_ratelimit);
++	rc = inet_peer_xrlim_allow(peer,
++				   READ_ONCE(net->ipv4.sysctl_icmp_ratelimit));
+ 	if (peer)
+ 		inet_putpeer(peer);
+ out:
+diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
+index ff327a62c9ce9..a18668552d33d 100644
+--- a/net/ipv4/inetpeer.c
++++ b/net/ipv4/inetpeer.c
+@@ -148,16 +148,20 @@ static void inet_peer_gc(struct inet_peer_base *base,
+ 			 struct inet_peer *gc_stack[],
+ 			 unsigned int gc_cnt)
+ {
++	int peer_threshold, peer_maxttl, peer_minttl;
+ 	struct inet_peer *p;
+ 	__u32 delta, ttl;
+ 	int i;
+ 
+-	if (base->total >= inet_peer_threshold)
++	peer_threshold = READ_ONCE(inet_peer_threshold);
++	peer_maxttl = READ_ONCE(inet_peer_maxttl);
++	peer_minttl = READ_ONCE(inet_peer_minttl);
++
++	if (base->total >= peer_threshold)
+ 		ttl = 0; /* be aggressive */
+ 	else
+-		ttl = inet_peer_maxttl
+-				- (inet_peer_maxttl - inet_peer_minttl) / HZ *
+-					base->total / inet_peer_threshold * HZ;
++		ttl = peer_maxttl - (peer_maxttl - peer_minttl) / HZ *
++			base->total / peer_threshold * HZ;
+ 	for (i = 0; i < gc_cnt; i++) {
+ 		p = gc_stack[i];
+ 
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 8bd3f5e3c0e7a..2a17dc9413ae9 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -882,7 +882,7 @@ static void __remove_nexthop_fib(struct net *net, struct nexthop *nh)
+ 		/* __ip6_del_rt does a release, so do a hold here */
+ 		fib6_info_hold(f6i);
+ 		ipv6_stub->ip6_del_rt(net, f6i,
+-				      !net->ipv4.sysctl_nexthop_compat_mode);
++				      !READ_ONCE(net->ipv4.sysctl_nexthop_compat_mode));
+ 	}
+ }
+ 
+@@ -1194,7 +1194,8 @@ out:
+ 	if (!rc) {
+ 		nh_base_seq_inc(net);
+ 		nexthop_notify(RTM_NEWNEXTHOP, new_nh, &cfg->nlinfo);
+-		if (replace_notify && net->ipv4.sysctl_nexthop_compat_mode)
++		if (replace_notify &&
++		    READ_ONCE(net->ipv4.sysctl_nexthop_compat_mode))
+ 			nexthop_replace_notify(net, new_nh, &cfg->nlinfo);
+ 	}
+ 
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index a3ec2a08027b8..19c13ad5c121b 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2490,7 +2490,8 @@ static void tcp_orphan_update(struct timer_list *unused)
+ 
+ static bool tcp_too_many_orphans(int shift)
+ {
+-	return READ_ONCE(tcp_orphan_cache) << shift > sysctl_tcp_max_orphans;
++	return READ_ONCE(tcp_orphan_cache) << shift >
++		READ_ONCE(sysctl_tcp_max_orphans);
+ }
+ 
+ bool tcp_check_oom(struct sock *sk, int shift)
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index e67505c6d8562..cdf215442d373 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -5641,7 +5641,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 		if (nexthop_is_blackhole(rt->nh))
+ 			rtm->rtm_type = RTN_BLACKHOLE;
+ 
+-		if (net->ipv4.sysctl_nexthop_compat_mode &&
++		if (READ_ONCE(net->ipv4.sysctl_nexthop_compat_mode) &&
+ 		    rt6_fill_node_nexthop(skb, rt->nh, &nh_flags) < 0)
+ 			goto nla_put_failure;
+ 
+diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c
+index 4d4399c5c5ea9..40ac23242c378 100644
+--- a/net/ipv6/seg6_iptunnel.c
++++ b/net/ipv6/seg6_iptunnel.c
+@@ -188,6 +188,8 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
+ 	}
+ #endif
+ 
++	hdr->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
++
+ 	skb_postpush_rcsum(skb, hdr, tot_len);
+ 
+ 	return 0;
+@@ -240,6 +242,8 @@ int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
+ 	}
+ #endif
+ 
++	hdr->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
++
+ 	skb_postpush_rcsum(skb, hdr, sizeof(struct ipv6hdr) + hdrlen);
+ 
+ 	return 0;
+@@ -301,7 +305,6 @@ static int seg6_do_srh(struct sk_buff *skb)
+ 		break;
+ 	}
+ 
+-	ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+ 	skb_set_transport_header(skb, sizeof(struct ipv6hdr));
+ 
+ 	return 0;
+diff --git a/net/ipv6/seg6_local.c b/net/ipv6/seg6_local.c
+index eba23279912df..11f7da4139f66 100644
+--- a/net/ipv6/seg6_local.c
++++ b/net/ipv6/seg6_local.c
+@@ -435,7 +435,6 @@ static int input_action_end_b6(struct sk_buff *skb, struct seg6_local_lwt *slwt)
+ 	if (err)
+ 		goto drop;
+ 
+-	ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+ 	skb_set_transport_header(skb, sizeof(struct ipv6hdr));
+ 
+ 	seg6_lookup_nexthop(skb, NULL, 0);
+@@ -467,7 +466,6 @@ static int input_action_end_b6_encap(struct sk_buff *skb,
+ 	if (err)
+ 		goto drop;
+ 
+-	ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+ 	skb_set_transport_header(skb, sizeof(struct ipv6hdr));
+ 
+ 	seg6_lookup_nexthop(skb, NULL, 0);
+diff --git a/net/mac80211/wme.c b/net/mac80211/wme.c
+index 2fb99325135a0..b9404b0560871 100644
+--- a/net/mac80211/wme.c
++++ b/net/mac80211/wme.c
+@@ -145,8 +145,8 @@ u16 __ieee80211_select_queue(struct ieee80211_sub_if_data *sdata,
+ 	bool qos;
+ 
+ 	/* all mesh/ocb stations are required to support WME */
+-	if (sdata->vif.type == NL80211_IFTYPE_MESH_POINT ||
+-	    sdata->vif.type == NL80211_IFTYPE_OCB)
++	if (sta && (sdata->vif.type == NL80211_IFTYPE_MESH_POINT ||
++		    sdata->vif.type == NL80211_IFTYPE_OCB))
+ 		qos = true;
+ 	else if (sta)
+ 		qos = sta->sta.wme;
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 42283dc6c5b7c..38256aabf4f1d 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -489,6 +489,7 @@ static int tipc_sk_create(struct net *net, struct socket *sock,
+ 	sock_init_data(sock, sk);
+ 	tipc_set_sk_state(sk, TIPC_OPEN);
+ 	if (tipc_sk_insert(tsk)) {
++		sk_free(sk);
+ 		pr_warn("Socket create failed; port number exhausted\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 3c82286e5bcca..6ae2ce411b4bf 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -1390,9 +1390,9 @@ static struct notifier_block tls_dev_notifier = {
+ 	.notifier_call	= tls_dev_event,
+ };
+ 
+-void __init tls_device_init(void)
++int __init tls_device_init(void)
+ {
+-	register_netdevice_notifier(&tls_dev_notifier);
++	return register_netdevice_notifier(&tls_dev_notifier);
+ }
+ 
+ void __exit tls_device_cleanup(void)
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 58d22d6b86ae6..e537085b184fe 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -905,7 +905,12 @@ static int __init tls_register(void)
+ 	if (err)
+ 		return err;
+ 
+-	tls_device_init();
++	err = tls_device_init();
++	if (err) {
++		unregister_pernet_subsys(&tls_proc_ops);
++		return err;
++	}
++
+ 	tcp_register_ulp(&tcp_tls_ulp_ops);
+ 
+ 	return 0;
+diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
+index a6dd47eb086da..168c3b78ac47b 100644
+--- a/security/integrity/evm/evm_crypto.c
++++ b/security/integrity/evm/evm_crypto.c
+@@ -73,7 +73,7 @@ static struct shash_desc *init_desc(char type, uint8_t hash_algo)
+ {
+ 	long rc;
+ 	const char *algo;
+-	struct crypto_shash **tfm, *tmp_tfm = NULL;
++	struct crypto_shash **tfm, *tmp_tfm;
+ 	struct shash_desc *desc;
+ 
+ 	if (type == EVM_XATTR_HMAC) {
+@@ -118,16 +118,13 @@ unlock:
+ alloc:
+ 	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(*tfm),
+ 			GFP_KERNEL);
+-	if (!desc) {
+-		crypto_free_shash(tmp_tfm);
++	if (!desc)
+ 		return ERR_PTR(-ENOMEM);
+-	}
+ 
+ 	desc->tfm = *tfm;
+ 
+ 	rc = crypto_shash_init(desc);
+ 	if (rc) {
+-		crypto_free_shash(tmp_tfm);
+ 		kfree(desc);
+ 		return ERR_PTR(rc);
+ 	}
+diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
+index 3dd8c2e4314ea..7122a359a268e 100644
+--- a/security/integrity/ima/ima_appraise.c
++++ b/security/integrity/ima/ima_appraise.c
+@@ -396,7 +396,8 @@ int ima_appraise_measurement(enum ima_hooks func,
+ 		goto out;
+ 	}
+ 
+-	status = evm_verifyxattr(dentry, XATTR_NAME_IMA, xattr_value, rc, iint);
++	status = evm_verifyxattr(dentry, XATTR_NAME_IMA, xattr_value,
++				 rc < 0 ? 0 : rc, iint);
+ 	switch (status) {
+ 	case INTEGRITY_PASS:
+ 	case INTEGRITY_PASS_IMMUTABLE:
+diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c
+index f6a7e9643b546..b1e5e7749e416 100644
+--- a/security/integrity/ima/ima_crypto.c
++++ b/security/integrity/ima/ima_crypto.c
+@@ -205,6 +205,7 @@ out_array:
+ 
+ 		crypto_free_shash(ima_algo_array[i].tfm);
+ 	}
++	kfree(ima_algo_array);
+ out:
+ 	crypto_free_shash(ima_shash_tfm);
+ 	return rc;
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 53b7ea86f3f84..6b5d7b4760eda 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -937,6 +937,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x828c, "HP EliteBook 840 G4", CXT_FIXUP_HP_DOCK),
+ 	SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x82b4, "HP ProDesk 600 G3", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x836e, "HP ProBook 455 G5", CXT_FIXUP_MUTE_LED_GPIO),
+ 	SND_PCI_QUIRK(0x103c, 0x837f, "HP ProBook 470 G5", CXT_FIXUP_MUTE_LED_GPIO),
+ 	SND_PCI_QUIRK(0x103c, 0x83b2, "HP EliteBook 840 G5", CXT_FIXUP_HP_DOCK),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f7645720d29c3..6155261264083 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6725,6 +6725,7 @@ enum {
+ 	ALC298_FIXUP_LENOVO_SPK_VOLUME,
+ 	ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER,
+ 	ALC269_FIXUP_ATIV_BOOK_8,
++	ALC221_FIXUP_HP_288PRO_MIC_NO_PRESENCE,
+ 	ALC221_FIXUP_HP_MIC_NO_PRESENCE,
+ 	ALC256_FIXUP_ASUS_HEADSET_MODE,
+ 	ALC256_FIXUP_ASUS_MIC,
+@@ -7651,6 +7652,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_NO_SHUTUP
+ 	},
++	[ALC221_FIXUP_HP_288PRO_MIC_NO_PRESENCE] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */
++			{ 0x1a, 0x01813030 }, /* use as headphone mic, without its own jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE
++	},
+ 	[ALC221_FIXUP_HP_MIC_NO_PRESENCE] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -8633,6 +8644,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x129c, "Acer SWIFT SF314-55", ALC256_FIXUP_ACER_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x129d, "Acer SWIFT SF313-51", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1300, "Acer SWIFT SF314-56", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
+@@ -8642,6 +8654,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
++	SND_PCI_QUIRK(0x1028, 0x053c, "Dell Latitude E5430", ALC292_FIXUP_DELL_E7X),
+ 	SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS),
+ 	SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X),
+ 	SND_PCI_QUIRK(0x1028, 0x05be, "Dell Latitude E6540", ALC292_FIXUP_DELL_E7X),
+@@ -8756,6 +8769,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x2335, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x2b5e, "HP 288 Pro G2 MT", ALC221_FIXUP_HP_288PRO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x802e, "HP Z240 SFF", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x802f, "HP Z240", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8077, "HP", ALC256_FIXUP_HP_HEADSET_MIC),
+@@ -9073,6 +9087,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+ 	SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+@@ -10926,6 +10941,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
+ 	SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ 	SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
++	SND_PCI_QUIRK(0x103c, 0x877e, "HP 288 Pro G6", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
+diff --git a/sound/soc/codecs/cs47l15.c b/sound/soc/codecs/cs47l15.c
+index 254f9d96e766d..7c20642f160ac 100644
+--- a/sound/soc/codecs/cs47l15.c
++++ b/sound/soc/codecs/cs47l15.c
+@@ -122,6 +122,9 @@ static int cs47l15_in1_adc_put(struct snd_kcontrol *kcontrol,
+ 		snd_soc_kcontrol_component(kcontrol);
+ 	struct cs47l15 *cs47l15 = snd_soc_component_get_drvdata(component);
+ 
++	if (!!ucontrol->value.integer.value[0] == cs47l15->in1_lp_mode)
++		return 0;
++
+ 	switch (ucontrol->value.integer.value[0]) {
+ 	case 0:
+ 		/* Set IN1 to normal mode */
+@@ -150,7 +153,7 @@ static int cs47l15_in1_adc_put(struct snd_kcontrol *kcontrol,
+ 		break;
+ 	}
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static const struct snd_kcontrol_new cs47l15_snd_controls[] = {
+diff --git a/sound/soc/codecs/madera.c b/sound/soc/codecs/madera.c
+index 680f31a6493a2..bbab4bc1f6b50 100644
+--- a/sound/soc/codecs/madera.c
++++ b/sound/soc/codecs/madera.c
+@@ -618,7 +618,13 @@ int madera_out1_demux_put(struct snd_kcontrol *kcontrol,
+ end:
+ 	snd_soc_dapm_mutex_unlock(dapm);
+ 
+-	return snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
++	ret = snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
++	if (ret < 0) {
++		dev_err(madera->dev, "Failed to update demux power state: %d\n", ret);
++		return ret;
++	}
++
++	return change;
+ }
+ EXPORT_SYMBOL_GPL(madera_out1_demux_put);
+ 
+@@ -893,7 +899,7 @@ static int madera_adsp_rate_put(struct snd_kcontrol *kcontrol,
+ 	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ 	const int adsp_num = e->shift_l;
+ 	const unsigned int item = ucontrol->value.enumerated.item[0];
+-	int ret;
++	int ret = 0;
+ 
+ 	if (item >= e->items)
+ 		return -EINVAL;
+@@ -910,10 +916,10 @@ static int madera_adsp_rate_put(struct snd_kcontrol *kcontrol,
+ 			 "Cannot change '%s' while in use by active audio paths\n",
+ 			 kcontrol->id.name);
+ 		ret = -EBUSY;
+-	} else {
++	} else if (priv->adsp_rate_cache[adsp_num] != e->values[item]) {
+ 		/* Volatile register so defer until the codec is powered up */
+ 		priv->adsp_rate_cache[adsp_num] = e->values[item];
+-		ret = 0;
++		ret = 1;
+ 	}
+ 
+ 	mutex_unlock(&priv->rate_lock);
+diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c
+index 4c0e87e22b97b..f066e016a874a 100644
+--- a/sound/soc/codecs/sgtl5000.c
++++ b/sound/soc/codecs/sgtl5000.c
+@@ -1797,6 +1797,9 @@ static int sgtl5000_i2c_remove(struct i2c_client *client)
+ {
+ 	struct sgtl5000_priv *sgtl5000 = i2c_get_clientdata(client);
+ 
++	regmap_write(sgtl5000->regmap, SGTL5000_CHIP_DIG_POWER, SGTL5000_DIG_POWER_DEFAULT);
++	regmap_write(sgtl5000->regmap, SGTL5000_CHIP_ANA_POWER, SGTL5000_ANA_POWER_DEFAULT);
++
+ 	clk_disable_unprepare(sgtl5000->mclk);
+ 	regulator_bulk_disable(sgtl5000->num_supplies, sgtl5000->supplies);
+ 	regulator_bulk_free(sgtl5000->num_supplies, sgtl5000->supplies);
+@@ -1804,6 +1807,11 @@ static int sgtl5000_i2c_remove(struct i2c_client *client)
+ 	return 0;
+ }
+ 
++static void sgtl5000_i2c_shutdown(struct i2c_client *client)
++{
++	sgtl5000_i2c_remove(client);
++}
++
+ static const struct i2c_device_id sgtl5000_id[] = {
+ 	{"sgtl5000", 0},
+ 	{},
+@@ -1824,6 +1832,7 @@ static struct i2c_driver sgtl5000_i2c_driver = {
+ 		   },
+ 	.probe = sgtl5000_i2c_probe,
+ 	.remove = sgtl5000_i2c_remove,
++	.shutdown = sgtl5000_i2c_shutdown,
+ 	.id_table = sgtl5000_id,
+ };
+ 
+diff --git a/sound/soc/codecs/sgtl5000.h b/sound/soc/codecs/sgtl5000.h
+index 56ec5863f2507..3a808c762299e 100644
+--- a/sound/soc/codecs/sgtl5000.h
++++ b/sound/soc/codecs/sgtl5000.h
+@@ -80,6 +80,7 @@
+ /*
+  * SGTL5000_CHIP_DIG_POWER
+  */
++#define SGTL5000_DIG_POWER_DEFAULT		0x0000
+ #define SGTL5000_ADC_EN				0x0040
+ #define SGTL5000_DAC_EN				0x0020
+ #define SGTL5000_DAP_POWERUP			0x0010
+diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c
+index 14a193e48dc76..37588804a6b5f 100644
+--- a/sound/soc/codecs/tas2764.c
++++ b/sound/soc/codecs/tas2764.c
+@@ -42,10 +42,12 @@ static void tas2764_reset(struct tas2764_priv *tas2764)
+ 		gpiod_set_value_cansleep(tas2764->reset_gpio, 0);
+ 		msleep(20);
+ 		gpiod_set_value_cansleep(tas2764->reset_gpio, 1);
++		usleep_range(1000, 2000);
+ 	}
+ 
+ 	snd_soc_component_write(tas2764->component, TAS2764_SW_RST,
+ 				TAS2764_RST);
++	usleep_range(1000, 2000);
+ }
+ 
+ static int tas2764_set_bias_level(struct snd_soc_component *component,
+@@ -107,8 +109,10 @@ static int tas2764_codec_resume(struct snd_soc_component *component)
+ 	struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component);
+ 	int ret;
+ 
+-	if (tas2764->sdz_gpio)
++	if (tas2764->sdz_gpio) {
+ 		gpiod_set_value_cansleep(tas2764->sdz_gpio, 1);
++		usleep_range(1000, 2000);
++	}
+ 
+ 	ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+ 					    TAS2764_PWR_CTRL_MASK,
+@@ -131,7 +135,8 @@ static const char * const tas2764_ASI1_src[] = {
+ };
+ 
+ static SOC_ENUM_SINGLE_DECL(
+-	tas2764_ASI1_src_enum, TAS2764_TDM_CFG2, 4, tas2764_ASI1_src);
++	tas2764_ASI1_src_enum, TAS2764_TDM_CFG2, TAS2764_TDM_CFG2_SCFG_SHIFT,
++	tas2764_ASI1_src);
+ 
+ static const struct snd_kcontrol_new tas2764_asi1_mux =
+ 	SOC_DAPM_ENUM("ASI1 Source", tas2764_ASI1_src_enum);
+@@ -329,20 +334,22 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ {
+ 	struct snd_soc_component *component = dai->component;
+ 	struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component);
+-	u8 tdm_rx_start_slot = 0, asi_cfg_1 = 0;
+-	int iface;
++	u8 tdm_rx_start_slot = 0, asi_cfg_0 = 0, asi_cfg_1 = 0;
+ 	int ret;
+ 
+ 	switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
++	case SND_SOC_DAIFMT_NB_IF:
++		asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START;
++		fallthrough;
+ 	case SND_SOC_DAIFMT_NB_NF:
+ 		asi_cfg_1 = TAS2764_TDM_CFG1_RX_RISING;
+ 		break;
++	case SND_SOC_DAIFMT_IB_IF:
++		asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START;
++		fallthrough;
+ 	case SND_SOC_DAIFMT_IB_NF:
+ 		asi_cfg_1 = TAS2764_TDM_CFG1_RX_FALLING;
+ 		break;
+-	default:
+-		dev_err(tas2764->dev, "ASI format Inverse is not found\n");
+-		return -EINVAL;
+ 	}
+ 
+ 	ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG1,
+@@ -353,13 +360,13 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 
+ 	switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ 	case SND_SOC_DAIFMT_I2S:
++		asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START;
++		fallthrough;
+ 	case SND_SOC_DAIFMT_DSP_A:
+-		iface = TAS2764_TDM_CFG2_SCFG_I2S;
+ 		tdm_rx_start_slot = 1;
+ 		break;
+ 	case SND_SOC_DAIFMT_DSP_B:
+ 	case SND_SOC_DAIFMT_LEFT_J:
+-		iface = TAS2764_TDM_CFG2_SCFG_LEFT_J;
+ 		tdm_rx_start_slot = 0;
+ 		break;
+ 	default:
+@@ -368,14 +375,15 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 		return -EINVAL;
+ 	}
+ 
+-	ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG1,
+-					    TAS2764_TDM_CFG1_MASK,
+-					    (tdm_rx_start_slot << TAS2764_TDM_CFG1_51_SHIFT));
++	ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG0,
++					    TAS2764_TDM_CFG0_FRAME_START,
++					    asi_cfg_0);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG2,
+-					    TAS2764_TDM_CFG2_SCFG_MASK, iface);
++	ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG1,
++					    TAS2764_TDM_CFG1_MASK,
++					    (tdm_rx_start_slot << TAS2764_TDM_CFG1_51_SHIFT));
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -501,8 +509,10 @@ static int tas2764_codec_probe(struct snd_soc_component *component)
+ 
+ 	tas2764->component = component;
+ 
+-	if (tas2764->sdz_gpio)
++	if (tas2764->sdz_gpio) {
+ 		gpiod_set_value_cansleep(tas2764->sdz_gpio, 1);
++		usleep_range(1000, 2000);
++	}
+ 
+ 	tas2764_reset(tas2764);
+ 
+@@ -526,12 +536,12 @@ static int tas2764_codec_probe(struct snd_soc_component *component)
+ }
+ 
+ static DECLARE_TLV_DB_SCALE(tas2764_digital_tlv, 1100, 50, 0);
+-static DECLARE_TLV_DB_SCALE(tas2764_playback_volume, -10000, 50, 0);
++static DECLARE_TLV_DB_SCALE(tas2764_playback_volume, -10050, 50, 1);
+ 
+ static const struct snd_kcontrol_new tas2764_snd_controls[] = {
+ 	SOC_SINGLE_TLV("Speaker Volume", TAS2764_DVC, 0,
+ 		       TAS2764_DVC_MAX, 1, tas2764_playback_volume),
+-	SOC_SINGLE_TLV("Amp Gain Volume", TAS2764_CHNL_0, 0, 0x14, 0,
++	SOC_SINGLE_TLV("Amp Gain Volume", TAS2764_CHNL_0, 1, 0x14, 0,
+ 		       tas2764_digital_tlv),
+ };
+ 
+@@ -556,7 +566,7 @@ static const struct reg_default tas2764_reg_defaults[] = {
+ 	{ TAS2764_SW_RST, 0x00 },
+ 	{ TAS2764_PWR_CTRL, 0x1a },
+ 	{ TAS2764_DVC, 0x00 },
+-	{ TAS2764_CHNL_0, 0x00 },
++	{ TAS2764_CHNL_0, 0x28 },
+ 	{ TAS2764_TDM_CFG0, 0x09 },
+ 	{ TAS2764_TDM_CFG1, 0x02 },
+ 	{ TAS2764_TDM_CFG2, 0x0a },
+diff --git a/sound/soc/codecs/tas2764.h b/sound/soc/codecs/tas2764.h
+index 67d6fd903c42c..f015f22a083b5 100644
+--- a/sound/soc/codecs/tas2764.h
++++ b/sound/soc/codecs/tas2764.h
+@@ -47,6 +47,7 @@
+ #define TAS2764_TDM_CFG0_MASK		GENMASK(3, 1)
+ #define TAS2764_TDM_CFG0_44_1_48KHZ	BIT(3)
+ #define TAS2764_TDM_CFG0_88_2_96KHZ	(BIT(3) | BIT(1))
++#define TAS2764_TDM_CFG0_FRAME_START	BIT(0)
+ 
+ /* TDM Configuration Reg1 */
+ #define TAS2764_TDM_CFG1		TAS2764_REG(0X0, 0x09)
+@@ -66,10 +67,7 @@
+ #define TAS2764_TDM_CFG2_RXS_16BITS	0x0
+ #define TAS2764_TDM_CFG2_RXS_24BITS	BIT(0)
+ #define TAS2764_TDM_CFG2_RXS_32BITS	BIT(1)
+-#define TAS2764_TDM_CFG2_SCFG_MASK	GENMASK(5, 4)
+-#define TAS2764_TDM_CFG2_SCFG_I2S	0x0
+-#define TAS2764_TDM_CFG2_SCFG_LEFT_J	BIT(4)
+-#define TAS2764_TDM_CFG2_SCFG_RIGHT_J	BIT(5)
++#define TAS2764_TDM_CFG2_SCFG_SHIFT	4
+ 
+ /* TDM Configuration Reg3 */
+ #define TAS2764_TDM_CFG3		TAS2764_REG(0X0, 0x0c)
+diff --git a/sound/soc/codecs/wm5110.c b/sound/soc/codecs/wm5110.c
+index 4238929b23751..d0cef982215dc 100644
+--- a/sound/soc/codecs/wm5110.c
++++ b/sound/soc/codecs/wm5110.c
+@@ -413,6 +413,7 @@ static int wm5110_put_dre(struct snd_kcontrol *kcontrol,
+ 	unsigned int rnew = (!!ucontrol->value.integer.value[1]) << mc->rshift;
+ 	unsigned int lold, rold;
+ 	unsigned int lena, rena;
++	bool change = false;
+ 	int ret;
+ 
+ 	snd_soc_dapm_mutex_lock(dapm);
+@@ -440,8 +441,8 @@ static int wm5110_put_dre(struct snd_kcontrol *kcontrol,
+ 		goto err;
+ 	}
+ 
+-	ret = regmap_update_bits(arizona->regmap, ARIZONA_DRE_ENABLE,
+-				 mask, lnew | rnew);
++	ret = regmap_update_bits_check(arizona->regmap, ARIZONA_DRE_ENABLE,
++				       mask, lnew | rnew, &change);
+ 	if (ret) {
+ 		dev_err(arizona->dev, "Failed to set DRE: %d\n", ret);
+ 		goto err;
+@@ -454,6 +455,9 @@ static int wm5110_put_dre(struct snd_kcontrol *kcontrol,
+ 	if (!rnew && rold)
+ 		wm5110_clear_pga_volume(arizona, mc->rshift);
+ 
++	if (change)
++		ret = 1;
++
+ err:
+ 	snd_soc_dapm_mutex_unlock(dapm);
+ 
+diff --git a/sound/soc/intel/skylake/skl-nhlt.c b/sound/soc/intel/skylake/skl-nhlt.c
+index 87c891c462910..3b3868df9f670 100644
+--- a/sound/soc/intel/skylake/skl-nhlt.c
++++ b/sound/soc/intel/skylake/skl-nhlt.c
+@@ -201,7 +201,6 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks,
+ 	struct nhlt_fmt_cfg *fmt_cfg;
+ 	struct wav_fmt_ext *wav_fmt;
+ 	unsigned long rate;
+-	bool present = false;
+ 	int rate_index = 0;
+ 	u16 channels, bps;
+ 	u8 clk_src;
+@@ -214,9 +213,12 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks,
+ 	if (fmt->fmt_count == 0)
+ 		return;
+ 
++	fmt_cfg = (struct nhlt_fmt_cfg *)fmt->fmt_config;
+ 	for (i = 0; i < fmt->fmt_count; i++) {
+-		fmt_cfg = &fmt->fmt_config[i];
+-		wav_fmt = &fmt_cfg->fmt_ext;
++		struct nhlt_fmt_cfg *saved_fmt_cfg = fmt_cfg;
++		bool present = false;
++
++		wav_fmt = &saved_fmt_cfg->fmt_ext;
+ 
+ 		channels = wav_fmt->fmt.channels;
+ 		bps = wav_fmt->fmt.bits_per_sample;
+@@ -234,12 +236,18 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks,
+ 		 * derive the rate.
+ 		 */
+ 		for (j = i; j < fmt->fmt_count; j++) {
+-			fmt_cfg = &fmt->fmt_config[j];
+-			wav_fmt = &fmt_cfg->fmt_ext;
++			struct nhlt_fmt_cfg *tmp_fmt_cfg = fmt_cfg;
++
++			wav_fmt = &tmp_fmt_cfg->fmt_ext;
+ 			if ((fs == wav_fmt->fmt.samples_per_sec) &&
+-			   (bps == wav_fmt->fmt.bits_per_sample))
++			   (bps == wav_fmt->fmt.bits_per_sample)) {
+ 				channels = max_t(u16, channels,
+ 						wav_fmt->fmt.channels);
++				saved_fmt_cfg = tmp_fmt_cfg;
++			}
++			/* Move to the next nhlt_fmt_cfg */
++			tmp_fmt_cfg = (struct nhlt_fmt_cfg *)(tmp_fmt_cfg->config.caps +
++							      tmp_fmt_cfg->config.size);
+ 		}
+ 
+ 		rate = channels * bps * fs;
+@@ -255,8 +263,11 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks,
+ 
+ 		/* Fill rate and parent for sclk/sclkfs */
+ 		if (!present) {
++			struct nhlt_fmt_cfg *first_fmt_cfg;
++
++			first_fmt_cfg = (struct nhlt_fmt_cfg *)fmt->fmt_config;
+ 			i2s_config_ext = (struct skl_i2s_config_blob_ext *)
+-						fmt->fmt_config[0].config.caps;
++						first_fmt_cfg->config.caps;
+ 
+ 			/* MCLK Divider Source Select */
+ 			if (is_legacy_blob(i2s_config_ext->hdr.sig)) {
+@@ -270,6 +281,9 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks,
+ 
+ 			parent = skl_get_parent_clk(clk_src);
+ 
++			/* Move to the next nhlt_fmt_cfg */
++			fmt_cfg = (struct nhlt_fmt_cfg *)(fmt_cfg->config.caps +
++							  fmt_cfg->config.size);
+ 			/*
+ 			 * Do not copy the config data if there is no parent
+ 			 * clock available for this clock source select
+@@ -278,9 +292,9 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks,
+ 				continue;
+ 
+ 			sclk[id].rate_cfg[rate_index].rate = rate;
+-			sclk[id].rate_cfg[rate_index].config = fmt_cfg;
++			sclk[id].rate_cfg[rate_index].config = saved_fmt_cfg;
+ 			sclkfs[id].rate_cfg[rate_index].rate = rate;
+-			sclkfs[id].rate_cfg[rate_index].config = fmt_cfg;
++			sclkfs[id].rate_cfg[rate_index].config = saved_fmt_cfg;
+ 			sclk[id].parent_name = parent->name;
+ 			sclkfs[id].parent_name = parent->name;
+ 
+@@ -294,13 +308,13 @@ static void skl_get_mclk(struct skl_dev *skl, struct skl_ssp_clk *mclk,
+ {
+ 	struct skl_i2s_config_blob_ext *i2s_config_ext;
+ 	struct skl_i2s_config_blob_legacy *i2s_config;
+-	struct nhlt_specific_cfg *fmt_cfg;
++	struct nhlt_fmt_cfg *fmt_cfg;
+ 	struct skl_clk_parent_src *parent;
+ 	u32 clkdiv, div_ratio;
+ 	u8 clk_src;
+ 
+-	fmt_cfg = &fmt->fmt_config[0].config;
+-	i2s_config_ext = (struct skl_i2s_config_blob_ext *)fmt_cfg->caps;
++	fmt_cfg = (struct nhlt_fmt_cfg *)fmt->fmt_config;
++	i2s_config_ext = (struct skl_i2s_config_blob_ext *)fmt_cfg->config.caps;
+ 
+ 	/* MCLK Divider Source Select and divider */
+ 	if (is_legacy_blob(i2s_config_ext->hdr.sig)) {
+@@ -329,7 +343,7 @@ static void skl_get_mclk(struct skl_dev *skl, struct skl_ssp_clk *mclk,
+ 		return;
+ 
+ 	mclk[id].rate_cfg[0].rate = parent->rate/div_ratio;
+-	mclk[id].rate_cfg[0].config = &fmt->fmt_config[0];
++	mclk[id].rate_cfg[0].config = fmt_cfg;
+ 	mclk[id].parent_name = parent->name;
+ }
+ 
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index f2f7f2dde93cf..754c1f16ee83f 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -62,6 +62,8 @@ struct snd_soc_dapm_widget *
+ snd_soc_dapm_new_control_unlocked(struct snd_soc_dapm_context *dapm,
+ 			 const struct snd_soc_dapm_widget *widget);
+ 
++static unsigned int soc_dapm_read(struct snd_soc_dapm_context *dapm, int reg);
++
+ /* dapm power sequences - make this per codec in the future */
+ static int dapm_up_seq[] = {
+ 	[snd_soc_dapm_pre] = 1,
+@@ -442,6 +444,9 @@ static int dapm_kcontrol_data_alloc(struct snd_soc_dapm_widget *widget,
+ 
+ 			snd_soc_dapm_add_path(widget->dapm, data->widget,
+ 					      widget, NULL, NULL);
++		} else if (e->reg != SND_SOC_NOPM) {
++			data->value = soc_dapm_read(widget->dapm, e->reg) &
++				      (e->mask << e->shift_l);
+ 		}
+ 		break;
+ 	default:
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 15bfcdbdfaa4e..0f26d6c31ce50 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -517,7 +517,7 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
+ 		return -EINVAL;
+ 	if (mc->platform_max && tmp > mc->platform_max)
+ 		return -EINVAL;
+-	if (tmp > mc->max - mc->min + 1)
++	if (tmp > mc->max - mc->min)
+ 		return -EINVAL;
+ 
+ 	if (invert)
+@@ -538,7 +538,7 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
+ 			return -EINVAL;
+ 		if (mc->platform_max && tmp > mc->platform_max)
+ 			return -EINVAL;
+-		if (tmp > mc->max - mc->min + 1)
++		if (tmp > mc->max - mc->min)
+ 			return -EINVAL;
+ 
+ 		if (invert)
+diff --git a/sound/soc/sof/intel/hda-loader.c b/sound/soc/sof/intel/hda-loader.c
+index 347636a80b487..4012097a9d60b 100644
+--- a/sound/soc/sof/intel/hda-loader.c
++++ b/sound/soc/sof/intel/hda-loader.c
+@@ -79,9 +79,9 @@ out_put:
+ }
+ 
+ /*
+- * first boot sequence has some extra steps. core 0 waits for power
+- * status on core 1, so power up core 1 also momentarily, keep it in
+- * reset/stall and then turn it off
++ * first boot sequence has some extra steps.
++ * power on all host managed cores and only unstall/run the boot core to boot the
++ * DSP then turn off all non boot cores (if any) is powered on.
+  */
+ static int cl_dsp_init(struct snd_sof_dev *sdev, int stream_tag)
+ {
+@@ -115,7 +115,7 @@ static int cl_dsp_init(struct snd_sof_dev *sdev, int stream_tag)
+ 			  ((stream_tag - 1) << 9)));
+ 
+ 	/* step 3: unset core 0 reset state & unstall/run core 0 */
+-	ret = hda_dsp_core_run(sdev, BIT(0));
++	ret = hda_dsp_core_run(sdev, chip->init_core_mask);
+ 	if (ret < 0) {
+ 		if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS)
+ 			dev_err(sdev->dev,


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-07-25 10:19 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2022-07-25 10:19 UTC (permalink / raw
  To: gentoo-commits

commit:     5d0fc057f0efd1a7328e0917e851b650fc47ab48
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Mon Jul 25 10:19:07 2022 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Mon Jul 25 10:19:16 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5d0fc057

Linux patch 5.10.133

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |     4 +
 1132_linux-5.10.133.patch | 13647 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 13651 insertions(+)

diff --git a/0000_README b/0000_README
index 04169db1..3fc0f934 100644
--- a/0000_README
+++ b/0000_README
@@ -571,6 +571,10 @@ Patch:  1131_linux-5.10.132.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.132
 
+Patch:  1132_linux-5.10.133.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.133
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1132_linux-5.10.133.patch b/1132_linux-5.10.133.patch
new file mode 100644
index 00000000..0cf8cb70
--- /dev/null
+++ b/1132_linux-5.10.133.patch
@@ -0,0 +1,13647 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index ea8b704be7052..1a58c580b2366 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4656,6 +4656,30 @@
+ 
+ 	retain_initrd	[RAM] Keep initrd memory after extraction
+ 
++	retbleed=	[X86] Control mitigation of RETBleed (Arbitrary
++			Speculative Code Execution with Return Instructions)
++			vulnerability.
++
++			off          - no mitigation
++			auto         - automatically select a migitation
++			auto,nosmt   - automatically select a mitigation,
++				       disabling SMT if necessary for
++				       the full mitigation (only on Zen1
++				       and older without STIBP).
++			ibpb	     - mitigate short speculation windows on
++				       basic block boundaries too. Safe, highest
++				       perf impact.
++			unret        - force enable untrained return thunks,
++				       only effective on AMD f15h-f17h
++				       based systems.
++			unret,nosmt  - like unret, will disable SMT when STIBP
++			               is not available.
++
++			Selecting 'auto' will choose a mitigation method at run
++			time according to the CPU.
++
++			Not specifying this option is equivalent to retbleed=auto.
++
+ 	rfkill.default_state=
+ 		0	"airplane mode".  All wifi, bluetooth, wimax, gps, fm,
+ 			etc. communication is blocked by default.
+@@ -5005,6 +5029,7 @@
+ 			eibrs		  - enhanced IBRS
+ 			eibrs,retpoline   - enhanced IBRS + Retpolines
+ 			eibrs,lfence      - enhanced IBRS + LFENCE
++			ibrs		  - use IBRS to protect kernel
+ 
+ 			Not specifying this option is equivalent to
+ 			spectre_v2=auto.
+diff --git a/Makefile b/Makefile
+index 5bee8f281b061..fbd330e58c3b8 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 132
++SUBLEVEL = 133
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -670,12 +670,21 @@ ifdef CONFIG_FUNCTION_TRACER
+   CC_FLAGS_FTRACE := -pg
+ endif
+ 
+-RETPOLINE_CFLAGS_GCC := -mindirect-branch=thunk-extern -mindirect-branch-register
+-RETPOLINE_VDSO_CFLAGS_GCC := -mindirect-branch=thunk-inline -mindirect-branch-register
+-RETPOLINE_CFLAGS_CLANG := -mretpoline-external-thunk
+-RETPOLINE_VDSO_CFLAGS_CLANG := -mretpoline
+-RETPOLINE_CFLAGS := $(call cc-option,$(RETPOLINE_CFLAGS_GCC),$(call cc-option,$(RETPOLINE_CFLAGS_CLANG)))
+-RETPOLINE_VDSO_CFLAGS := $(call cc-option,$(RETPOLINE_VDSO_CFLAGS_GCC),$(call cc-option,$(RETPOLINE_VDSO_CFLAGS_CLANG)))
++ifdef CONFIG_CC_IS_GCC
++RETPOLINE_CFLAGS	:= $(call cc-option,-mindirect-branch=thunk-extern -mindirect-branch-register)
++RETPOLINE_CFLAGS	+= $(call cc-option,-mindirect-branch-cs-prefix)
++RETPOLINE_VDSO_CFLAGS	:= $(call cc-option,-mindirect-branch=thunk-inline -mindirect-branch-register)
++endif
++ifdef CONFIG_CC_IS_CLANG
++RETPOLINE_CFLAGS	:= -mretpoline-external-thunk
++RETPOLINE_VDSO_CFLAGS	:= -mretpoline
++endif
++
++ifdef CONFIG_RETHUNK
++RETHUNK_CFLAGS         := -mfunction-return=thunk-extern
++RETPOLINE_CFLAGS       += $(RETHUNK_CFLAGS)
++endif
++
+ export RETPOLINE_CFLAGS
+ export RETPOLINE_VDSO_CFLAGS
+ 
+diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
+index 76b37297b7d4c..26af24b5d900a 100644
+--- a/arch/um/kernel/um_arch.c
++++ b/arch/um/kernel/um_arch.c
+@@ -358,6 +358,14 @@ void __init check_bugs(void)
+ 	os_check_bugs();
+ }
+ 
++void apply_retpolines(s32 *start, s32 *end)
++{
++}
++
++void apply_returns(s32 *start, s32 *end)
++{
++}
++
+ void apply_alternatives(struct alt_instr *start, struct alt_instr *end)
+ {
+ }
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index ed713840d4698..16c045906b2ac 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -453,15 +453,6 @@ config GOLDFISH
+ 	def_bool y
+ 	depends on X86_GOLDFISH
+ 
+-config RETPOLINE
+-	bool "Avoid speculative indirect branches in kernel"
+-	default y
+-	help
+-	  Compile kernel with the retpoline compiler options to guard against
+-	  kernel-to-user data leaks by avoiding speculative indirect
+-	  branches. Requires a compiler with -mindirect-branch=thunk-extern
+-	  support for full protection. The kernel may run slower.
+-
+ config X86_CPU_RESCTRL
+ 	bool "x86 CPU resource control support"
+ 	depends on X86 && (CPU_SUP_INTEL || CPU_SUP_AMD)
+@@ -2415,6 +2406,88 @@ source "kernel/livepatch/Kconfig"
+ 
+ endmenu
+ 
++config CC_HAS_SLS
++	def_bool $(cc-option,-mharden-sls=all)
++
++config CC_HAS_RETURN_THUNK
++	def_bool $(cc-option,-mfunction-return=thunk-extern)
++
++menuconfig SPECULATION_MITIGATIONS
++	bool "Mitigations for speculative execution vulnerabilities"
++	default y
++	help
++	  Say Y here to enable options which enable mitigations for
++	  speculative execution hardware vulnerabilities.
++
++	  If you say N, all mitigations will be disabled. You really
++	  should know what you are doing to say so.
++
++if SPECULATION_MITIGATIONS
++
++config PAGE_TABLE_ISOLATION
++	bool "Remove the kernel mapping in user mode"
++	default y
++	depends on (X86_64 || X86_PAE)
++	help
++	  This feature reduces the number of hardware side channels by
++	  ensuring that the majority of kernel addresses are not mapped
++	  into userspace.
++
++	  See Documentation/x86/pti.rst for more details.
++
++config RETPOLINE
++	bool "Avoid speculative indirect branches in kernel"
++	default y
++	help
++	  Compile kernel with the retpoline compiler options to guard against
++	  kernel-to-user data leaks by avoiding speculative indirect
++	  branches. Requires a compiler with -mindirect-branch=thunk-extern
++	  support for full protection. The kernel may run slower.
++
++config RETHUNK
++	bool "Enable return-thunks"
++	depends on RETPOLINE && CC_HAS_RETURN_THUNK
++	default y
++	help
++	  Compile the kernel with the return-thunks compiler option to guard
++	  against kernel-to-user data leaks by avoiding return speculation.
++	  Requires a compiler with -mfunction-return=thunk-extern
++	  support for full protection. The kernel may run slower.
++
++config CPU_UNRET_ENTRY
++	bool "Enable UNRET on kernel entry"
++	depends on CPU_SUP_AMD && RETHUNK
++	default y
++	help
++	  Compile the kernel with support for the retbleed=unret mitigation.
++
++config CPU_IBPB_ENTRY
++	bool "Enable IBPB on kernel entry"
++	depends on CPU_SUP_AMD
++	default y
++	help
++	  Compile the kernel with support for the retbleed=ibpb mitigation.
++
++config CPU_IBRS_ENTRY
++	bool "Enable IBRS on kernel entry"
++	depends on CPU_SUP_INTEL
++	default y
++	help
++	  Compile the kernel with support for the spectre_v2=ibrs mitigation.
++	  This mitigates both spectre_v2 and retbleed at great cost to
++	  performance.
++
++config SLS
++	bool "Mitigate Straight-Line-Speculation"
++	depends on CC_HAS_SLS && X86_64
++	default n
++	help
++	  Compile the kernel with straight-line-speculation options to guard
++	  against straight line speculation. The kernel image might be slightly
++	  larger.
++
++endif
++
+ config ARCH_HAS_ADD_PAGES
+ 	def_bool y
+ 	depends on X86_64 && ARCH_ENABLE_MEMORY_HOTPLUG
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 8ed757d06f772..1f796050c6dde 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -31,7 +31,7 @@ endif
+ CODE16GCC_CFLAGS := -m32 -Wa,$(srctree)/arch/x86/boot/code16gcc.h
+ M16_CFLAGS	 := $(call cc-option, -m16, $(CODE16GCC_CFLAGS))
+ 
+-REALMODE_CFLAGS	:= $(M16_CFLAGS) -g -Os -DDISABLE_BRANCH_PROFILING \
++REALMODE_CFLAGS	:= $(M16_CFLAGS) -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
+ 		   -Wall -Wstrict-prototypes -march=i386 -mregparm=3 \
+ 		   -fno-strict-aliasing -fomit-frame-pointer -fno-pic \
+ 		   -mno-mmx -mno-sse $(call cc-option,-fcf-protection=none)
+@@ -196,7 +196,11 @@ ifdef CONFIG_RETPOLINE
+   endif
+ endif
+ 
+-KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)
++ifdef CONFIG_SLS
++  KBUILD_CFLAGS += -mharden-sls=all
++endif
++
++KBUILD_LDFLAGS += -m elf_$(UTS_MACHINE)
+ 
+ ifdef CONFIG_X86_NEED_RELOCS
+ LDFLAGS_vmlinux := --emit-relocs --discard-none
+diff --git a/arch/x86/boot/compressed/efi_thunk_64.S b/arch/x86/boot/compressed/efi_thunk_64.S
+index c4bb0f9363f5e..c8052a141b088 100644
+--- a/arch/x86/boot/compressed/efi_thunk_64.S
++++ b/arch/x86/boot/compressed/efi_thunk_64.S
+@@ -89,7 +89,7 @@ SYM_FUNC_START(__efi64_thunk)
+ 
+ 	pop	%rbx
+ 	pop	%rbp
+-	ret
++	RET
+ SYM_FUNC_END(__efi64_thunk)
+ 
+ 	.code32
+diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
+index 72f655c238cf1..b55e2007b30c6 100644
+--- a/arch/x86/boot/compressed/head_64.S
++++ b/arch/x86/boot/compressed/head_64.S
+@@ -786,7 +786,7 @@ SYM_FUNC_START(efi32_pe_entry)
+ 2:	popl	%edi				// restore callee-save registers
+ 	popl	%ebx
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(efi32_pe_entry)
+ 
+ 	.section ".rodata"
+@@ -868,7 +868,7 @@ SYM_FUNC_START(startup32_check_sev_cbit)
+ 	popl	%ebx
+ 	popl	%eax
+ #endif
+-	ret
++	RET
+ SYM_FUNC_END(startup32_check_sev_cbit)
+ 
+ /*
+diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
+index a6dea4e8a082f..484a9c06f41d8 100644
+--- a/arch/x86/boot/compressed/mem_encrypt.S
++++ b/arch/x86/boot/compressed/mem_encrypt.S
+@@ -58,7 +58,7 @@ SYM_FUNC_START(get_sev_encryption_bit)
+ 
+ #endif	/* CONFIG_AMD_MEM_ENCRYPT */
+ 
+-	ret
++	RET
+ SYM_FUNC_END(get_sev_encryption_bit)
+ 
+ 	.code64
+@@ -99,7 +99,7 @@ SYM_FUNC_START(set_sev_encryption_mask)
+ #endif
+ 
+ 	xor	%rax, %rax
+-	ret
++	RET
+ SYM_FUNC_END(set_sev_encryption_mask)
+ 
+ 	.data
+diff --git a/arch/x86/crypto/aegis128-aesni-asm.S b/arch/x86/crypto/aegis128-aesni-asm.S
+index 51d46d93efbcc..b48ddebb47489 100644
+--- a/arch/x86/crypto/aegis128-aesni-asm.S
++++ b/arch/x86/crypto/aegis128-aesni-asm.S
+@@ -122,7 +122,7 @@ SYM_FUNC_START_LOCAL(__load_partial)
+ 	pxor T0, MSG
+ 
+ .Lld_partial_8:
+-	ret
++	RET
+ SYM_FUNC_END(__load_partial)
+ 
+ /*
+@@ -180,7 +180,7 @@ SYM_FUNC_START_LOCAL(__store_partial)
+ 	mov %r10b, (%r9)
+ 
+ .Lst_partial_1:
+-	ret
++	RET
+ SYM_FUNC_END(__store_partial)
+ 
+ /*
+@@ -225,7 +225,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_init)
+ 	movdqu STATE4, 0x40(STATEP)
+ 
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(crypto_aegis128_aesni_init)
+ 
+ /*
+@@ -337,7 +337,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_ad)
+ 	movdqu STATE3, 0x30(STATEP)
+ 	movdqu STATE4, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Lad_out_1:
+ 	movdqu STATE4, 0x00(STATEP)
+@@ -346,7 +346,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_ad)
+ 	movdqu STATE2, 0x30(STATEP)
+ 	movdqu STATE3, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Lad_out_2:
+ 	movdqu STATE3, 0x00(STATEP)
+@@ -355,7 +355,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_ad)
+ 	movdqu STATE1, 0x30(STATEP)
+ 	movdqu STATE2, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Lad_out_3:
+ 	movdqu STATE2, 0x00(STATEP)
+@@ -364,7 +364,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_ad)
+ 	movdqu STATE0, 0x30(STATEP)
+ 	movdqu STATE1, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Lad_out_4:
+ 	movdqu STATE1, 0x00(STATEP)
+@@ -373,11 +373,11 @@ SYM_FUNC_START(crypto_aegis128_aesni_ad)
+ 	movdqu STATE4, 0x30(STATEP)
+ 	movdqu STATE0, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Lad_out:
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(crypto_aegis128_aesni_ad)
+ 
+ .macro encrypt_block a s0 s1 s2 s3 s4 i
+@@ -452,7 +452,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_enc)
+ 	movdqu STATE2, 0x30(STATEP)
+ 	movdqu STATE3, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Lenc_out_1:
+ 	movdqu STATE3, 0x00(STATEP)
+@@ -461,7 +461,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_enc)
+ 	movdqu STATE1, 0x30(STATEP)
+ 	movdqu STATE2, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Lenc_out_2:
+ 	movdqu STATE2, 0x00(STATEP)
+@@ -470,7 +470,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_enc)
+ 	movdqu STATE0, 0x30(STATEP)
+ 	movdqu STATE1, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Lenc_out_3:
+ 	movdqu STATE1, 0x00(STATEP)
+@@ -479,7 +479,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_enc)
+ 	movdqu STATE4, 0x30(STATEP)
+ 	movdqu STATE0, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Lenc_out_4:
+ 	movdqu STATE0, 0x00(STATEP)
+@@ -488,11 +488,11 @@ SYM_FUNC_START(crypto_aegis128_aesni_enc)
+ 	movdqu STATE3, 0x30(STATEP)
+ 	movdqu STATE4, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Lenc_out:
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(crypto_aegis128_aesni_enc)
+ 
+ /*
+@@ -532,7 +532,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_enc_tail)
+ 	movdqu STATE3, 0x40(STATEP)
+ 
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(crypto_aegis128_aesni_enc_tail)
+ 
+ .macro decrypt_block a s0 s1 s2 s3 s4 i
+@@ -606,7 +606,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec)
+ 	movdqu STATE2, 0x30(STATEP)
+ 	movdqu STATE3, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Ldec_out_1:
+ 	movdqu STATE3, 0x00(STATEP)
+@@ -615,7 +615,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec)
+ 	movdqu STATE1, 0x30(STATEP)
+ 	movdqu STATE2, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Ldec_out_2:
+ 	movdqu STATE2, 0x00(STATEP)
+@@ -624,7 +624,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec)
+ 	movdqu STATE0, 0x30(STATEP)
+ 	movdqu STATE1, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Ldec_out_3:
+ 	movdqu STATE1, 0x00(STATEP)
+@@ -633,7 +633,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec)
+ 	movdqu STATE4, 0x30(STATEP)
+ 	movdqu STATE0, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Ldec_out_4:
+ 	movdqu STATE0, 0x00(STATEP)
+@@ -642,11 +642,11 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec)
+ 	movdqu STATE3, 0x30(STATEP)
+ 	movdqu STATE4, 0x40(STATEP)
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Ldec_out:
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(crypto_aegis128_aesni_dec)
+ 
+ /*
+@@ -696,7 +696,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec_tail)
+ 	movdqu STATE3, 0x40(STATEP)
+ 
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(crypto_aegis128_aesni_dec_tail)
+ 
+ /*
+@@ -743,5 +743,5 @@ SYM_FUNC_START(crypto_aegis128_aesni_final)
+ 	movdqu MSG, (%rsi)
+ 
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(crypto_aegis128_aesni_final)
+diff --git a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
+index 3f0fc7dd87d77..c799838242a69 100644
+--- a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
++++ b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
+@@ -525,7 +525,7 @@ ddq_add_8:
+ 	/* return updated IV */
+ 	vpshufb	xbyteswap, xcounter, xcounter
+ 	vmovdqu	xcounter, (p_iv)
+-	ret
++	RET
+ .endm
+ 
+ /*
+diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
+index 57aef3f5a81e2..69c7c0dc22eaf 100644
+--- a/arch/x86/crypto/aesni-intel_asm.S
++++ b/arch/x86/crypto/aesni-intel_asm.S
+@@ -1598,7 +1598,7 @@ SYM_FUNC_START(aesni_gcm_dec)
+ 	GCM_ENC_DEC dec
+ 	GCM_COMPLETE arg10, arg11
+ 	FUNC_RESTORE
+-	ret
++	RET
+ SYM_FUNC_END(aesni_gcm_dec)
+ 
+ 
+@@ -1687,7 +1687,7 @@ SYM_FUNC_START(aesni_gcm_enc)
+ 
+ 	GCM_COMPLETE arg10, arg11
+ 	FUNC_RESTORE
+-	ret
++	RET
+ SYM_FUNC_END(aesni_gcm_enc)
+ 
+ /*****************************************************************************
+@@ -1705,7 +1705,7 @@ SYM_FUNC_START(aesni_gcm_init)
+ 	FUNC_SAVE
+ 	GCM_INIT %arg3, %arg4,%arg5, %arg6
+ 	FUNC_RESTORE
+-	ret
++	RET
+ SYM_FUNC_END(aesni_gcm_init)
+ 
+ /*****************************************************************************
+@@ -1720,7 +1720,7 @@ SYM_FUNC_START(aesni_gcm_enc_update)
+ 	FUNC_SAVE
+ 	GCM_ENC_DEC enc
+ 	FUNC_RESTORE
+-	ret
++	RET
+ SYM_FUNC_END(aesni_gcm_enc_update)
+ 
+ /*****************************************************************************
+@@ -1735,7 +1735,7 @@ SYM_FUNC_START(aesni_gcm_dec_update)
+ 	FUNC_SAVE
+ 	GCM_ENC_DEC dec
+ 	FUNC_RESTORE
+-	ret
++	RET
+ SYM_FUNC_END(aesni_gcm_dec_update)
+ 
+ /*****************************************************************************
+@@ -1750,7 +1750,7 @@ SYM_FUNC_START(aesni_gcm_finalize)
+ 	FUNC_SAVE
+ 	GCM_COMPLETE %arg3 %arg4
+ 	FUNC_RESTORE
+-	ret
++	RET
+ SYM_FUNC_END(aesni_gcm_finalize)
+ 
+ #endif
+@@ -1766,7 +1766,7 @@ SYM_FUNC_START_LOCAL(_key_expansion_256a)
+ 	pxor %xmm1, %xmm0
+ 	movaps %xmm0, (TKEYP)
+ 	add $0x10, TKEYP
+-	ret
++	RET
+ SYM_FUNC_END(_key_expansion_256a)
+ SYM_FUNC_END_ALIAS(_key_expansion_128)
+ 
+@@ -1791,7 +1791,7 @@ SYM_FUNC_START_LOCAL(_key_expansion_192a)
+ 	shufps $0b01001110, %xmm2, %xmm1
+ 	movaps %xmm1, 0x10(TKEYP)
+ 	add $0x20, TKEYP
+-	ret
++	RET
+ SYM_FUNC_END(_key_expansion_192a)
+ 
+ SYM_FUNC_START_LOCAL(_key_expansion_192b)
+@@ -1810,7 +1810,7 @@ SYM_FUNC_START_LOCAL(_key_expansion_192b)
+ 
+ 	movaps %xmm0, (TKEYP)
+ 	add $0x10, TKEYP
+-	ret
++	RET
+ SYM_FUNC_END(_key_expansion_192b)
+ 
+ SYM_FUNC_START_LOCAL(_key_expansion_256b)
+@@ -1822,7 +1822,7 @@ SYM_FUNC_START_LOCAL(_key_expansion_256b)
+ 	pxor %xmm1, %xmm2
+ 	movaps %xmm2, (TKEYP)
+ 	add $0x10, TKEYP
+-	ret
++	RET
+ SYM_FUNC_END(_key_expansion_256b)
+ 
+ /*
+@@ -1937,7 +1937,7 @@ SYM_FUNC_START(aesni_set_key)
+ 	popl KEYP
+ #endif
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(aesni_set_key)
+ 
+ /*
+@@ -1961,7 +1961,7 @@ SYM_FUNC_START(aesni_enc)
+ 	popl KEYP
+ #endif
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(aesni_enc)
+ 
+ /*
+@@ -2018,7 +2018,7 @@ SYM_FUNC_START_LOCAL(_aesni_enc1)
+ 	aesenc KEY, STATE
+ 	movaps 0x70(TKEYP), KEY
+ 	aesenclast KEY, STATE
+-	ret
++	RET
+ SYM_FUNC_END(_aesni_enc1)
+ 
+ /*
+@@ -2126,7 +2126,7 @@ SYM_FUNC_START_LOCAL(_aesni_enc4)
+ 	aesenclast KEY, STATE2
+ 	aesenclast KEY, STATE3
+ 	aesenclast KEY, STATE4
+-	ret
++	RET
+ SYM_FUNC_END(_aesni_enc4)
+ 
+ /*
+@@ -2151,7 +2151,7 @@ SYM_FUNC_START(aesni_dec)
+ 	popl KEYP
+ #endif
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(aesni_dec)
+ 
+ /*
+@@ -2208,7 +2208,7 @@ SYM_FUNC_START_LOCAL(_aesni_dec1)
+ 	aesdec KEY, STATE
+ 	movaps 0x70(TKEYP), KEY
+ 	aesdeclast KEY, STATE
+-	ret
++	RET
+ SYM_FUNC_END(_aesni_dec1)
+ 
+ /*
+@@ -2316,7 +2316,7 @@ SYM_FUNC_START_LOCAL(_aesni_dec4)
+ 	aesdeclast KEY, STATE2
+ 	aesdeclast KEY, STATE3
+ 	aesdeclast KEY, STATE4
+-	ret
++	RET
+ SYM_FUNC_END(_aesni_dec4)
+ 
+ /*
+@@ -2376,7 +2376,7 @@ SYM_FUNC_START(aesni_ecb_enc)
+ 	popl LEN
+ #endif
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(aesni_ecb_enc)
+ 
+ /*
+@@ -2437,7 +2437,7 @@ SYM_FUNC_START(aesni_ecb_dec)
+ 	popl LEN
+ #endif
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(aesni_ecb_dec)
+ 
+ /*
+@@ -2481,7 +2481,7 @@ SYM_FUNC_START(aesni_cbc_enc)
+ 	popl IVP
+ #endif
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(aesni_cbc_enc)
+ 
+ /*
+@@ -2574,7 +2574,7 @@ SYM_FUNC_START(aesni_cbc_dec)
+ 	popl IVP
+ #endif
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(aesni_cbc_dec)
+ 
+ #ifdef __x86_64__
+@@ -2602,7 +2602,7 @@ SYM_FUNC_START_LOCAL(_aesni_inc_init)
+ 	mov $1, TCTR_LOW
+ 	movq TCTR_LOW, INC
+ 	movq CTR, TCTR_LOW
+-	ret
++	RET
+ SYM_FUNC_END(_aesni_inc_init)
+ 
+ /*
+@@ -2630,7 +2630,7 @@ SYM_FUNC_START_LOCAL(_aesni_inc)
+ .Linc_low:
+ 	movaps CTR, IV
+ 	pshufb BSWAP_MASK, IV
+-	ret
++	RET
+ SYM_FUNC_END(_aesni_inc)
+ 
+ /*
+@@ -2693,7 +2693,7 @@ SYM_FUNC_START(aesni_ctr_enc)
+ 	movups IV, (IVP)
+ .Lctr_enc_just_ret:
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(aesni_ctr_enc)
+ 
+ /*
+@@ -2778,7 +2778,7 @@ SYM_FUNC_START(aesni_xts_encrypt)
+ 	movups IV, (IVP)
+ 
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(aesni_xts_encrypt)
+ 
+ /*
+@@ -2846,7 +2846,7 @@ SYM_FUNC_START(aesni_xts_decrypt)
+ 	movups IV, (IVP)
+ 
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(aesni_xts_decrypt)
+ 
+ #endif
+diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S
+index 2cf8e94d986a5..4d9b2f887064e 100644
+--- a/arch/x86/crypto/aesni-intel_avx-x86_64.S
++++ b/arch/x86/crypto/aesni-intel_avx-x86_64.S
+@@ -1777,7 +1777,7 @@ SYM_FUNC_START(aesni_gcm_init_avx_gen2)
+         FUNC_SAVE
+         INIT GHASH_MUL_AVX, PRECOMPUTE_AVX
+         FUNC_RESTORE
+-        ret
++        RET
+ SYM_FUNC_END(aesni_gcm_init_avx_gen2)
+ 
+ ###############################################################################
+@@ -1798,15 +1798,15 @@ SYM_FUNC_START(aesni_gcm_enc_update_avx_gen2)
+         # must be 192
+         GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, ENC, 11
+         FUNC_RESTORE
+-        ret
++        RET
+ key_128_enc_update:
+         GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, ENC, 9
+         FUNC_RESTORE
+-        ret
++        RET
+ key_256_enc_update:
+         GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, ENC, 13
+         FUNC_RESTORE
+-        ret
++        RET
+ SYM_FUNC_END(aesni_gcm_enc_update_avx_gen2)
+ 
+ ###############################################################################
+@@ -1827,15 +1827,15 @@ SYM_FUNC_START(aesni_gcm_dec_update_avx_gen2)
+         # must be 192
+         GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, DEC, 11
+         FUNC_RESTORE
+-        ret
++        RET
+ key_128_dec_update:
+         GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, DEC, 9
+         FUNC_RESTORE
+-        ret
++        RET
+ key_256_dec_update:
+         GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, DEC, 13
+         FUNC_RESTORE
+-        ret
++        RET
+ SYM_FUNC_END(aesni_gcm_dec_update_avx_gen2)
+ 
+ ###############################################################################
+@@ -1856,15 +1856,15 @@ SYM_FUNC_START(aesni_gcm_finalize_avx_gen2)
+         # must be 192
+         GCM_COMPLETE GHASH_MUL_AVX, 11, arg3, arg4
+         FUNC_RESTORE
+-        ret
++        RET
+ key_128_finalize:
+         GCM_COMPLETE GHASH_MUL_AVX, 9, arg3, arg4
+         FUNC_RESTORE
+-        ret
++        RET
+ key_256_finalize:
+         GCM_COMPLETE GHASH_MUL_AVX, 13, arg3, arg4
+         FUNC_RESTORE
+-        ret
++        RET
+ SYM_FUNC_END(aesni_gcm_finalize_avx_gen2)
+ 
+ ###############################################################################
+@@ -2745,7 +2745,7 @@ SYM_FUNC_START(aesni_gcm_init_avx_gen4)
+         FUNC_SAVE
+         INIT GHASH_MUL_AVX2, PRECOMPUTE_AVX2
+         FUNC_RESTORE
+-        ret
++        RET
+ SYM_FUNC_END(aesni_gcm_init_avx_gen4)
+ 
+ ###############################################################################
+@@ -2766,15 +2766,15 @@ SYM_FUNC_START(aesni_gcm_enc_update_avx_gen4)
+         # must be 192
+         GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, ENC, 11
+         FUNC_RESTORE
+-	ret
++	RET
+ key_128_enc_update4:
+         GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, ENC, 9
+         FUNC_RESTORE
+-	ret
++	RET
+ key_256_enc_update4:
+         GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, ENC, 13
+         FUNC_RESTORE
+-	ret
++	RET
+ SYM_FUNC_END(aesni_gcm_enc_update_avx_gen4)
+ 
+ ###############################################################################
+@@ -2795,15 +2795,15 @@ SYM_FUNC_START(aesni_gcm_dec_update_avx_gen4)
+         # must be 192
+         GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, DEC, 11
+         FUNC_RESTORE
+-        ret
++        RET
+ key_128_dec_update4:
+         GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, DEC, 9
+         FUNC_RESTORE
+-        ret
++        RET
+ key_256_dec_update4:
+         GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, DEC, 13
+         FUNC_RESTORE
+-        ret
++        RET
+ SYM_FUNC_END(aesni_gcm_dec_update_avx_gen4)
+ 
+ ###############################################################################
+@@ -2824,13 +2824,13 @@ SYM_FUNC_START(aesni_gcm_finalize_avx_gen4)
+         # must be 192
+         GCM_COMPLETE GHASH_MUL_AVX2, 11, arg3, arg4
+         FUNC_RESTORE
+-        ret
++        RET
+ key_128_finalize4:
+         GCM_COMPLETE GHASH_MUL_AVX2, 9, arg3, arg4
+         FUNC_RESTORE
+-        ret
++        RET
+ key_256_finalize4:
+         GCM_COMPLETE GHASH_MUL_AVX2, 13, arg3, arg4
+         FUNC_RESTORE
+-        ret
++        RET
+ SYM_FUNC_END(aesni_gcm_finalize_avx_gen4)
+diff --git a/arch/x86/crypto/blake2s-core.S b/arch/x86/crypto/blake2s-core.S
+index 2ca79974f8198..b50b35ff1fdba 100644
+--- a/arch/x86/crypto/blake2s-core.S
++++ b/arch/x86/crypto/blake2s-core.S
+@@ -171,7 +171,7 @@ SYM_FUNC_START(blake2s_compress_ssse3)
+ 	movdqu		%xmm1,0x10(%rdi)
+ 	movdqu		%xmm14,0x20(%rdi)
+ .Lendofloop:
+-	ret
++	RET
+ SYM_FUNC_END(blake2s_compress_ssse3)
+ 
+ #ifdef CONFIG_AS_AVX512
+@@ -251,6 +251,6 @@ SYM_FUNC_START(blake2s_compress_avx512)
+ 	vmovdqu		%xmm1,0x10(%rdi)
+ 	vmovdqu		%xmm4,0x20(%rdi)
+ 	vzeroupper
+-	retq
++	RET
+ SYM_FUNC_END(blake2s_compress_avx512)
+ #endif /* CONFIG_AS_AVX512 */
+diff --git a/arch/x86/crypto/blowfish-x86_64-asm_64.S b/arch/x86/crypto/blowfish-x86_64-asm_64.S
+index 4222ac6d65848..802d715826891 100644
+--- a/arch/x86/crypto/blowfish-x86_64-asm_64.S
++++ b/arch/x86/crypto/blowfish-x86_64-asm_64.S
+@@ -135,10 +135,10 @@ SYM_FUNC_START(__blowfish_enc_blk)
+ 	jnz .L__enc_xor;
+ 
+ 	write_block();
+-	ret;
++	RET;
+ .L__enc_xor:
+ 	xor_block();
+-	ret;
++	RET;
+ SYM_FUNC_END(__blowfish_enc_blk)
+ 
+ SYM_FUNC_START(blowfish_dec_blk)
+@@ -170,7 +170,7 @@ SYM_FUNC_START(blowfish_dec_blk)
+ 
+ 	movq %r11, %r12;
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(blowfish_dec_blk)
+ 
+ /**********************************************************************
+@@ -322,14 +322,14 @@ SYM_FUNC_START(__blowfish_enc_blk_4way)
+ 
+ 	popq %rbx;
+ 	popq %r12;
+-	ret;
++	RET;
+ 
+ .L__enc_xor4:
+ 	xor_block4();
+ 
+ 	popq %rbx;
+ 	popq %r12;
+-	ret;
++	RET;
+ SYM_FUNC_END(__blowfish_enc_blk_4way)
+ 
+ SYM_FUNC_START(blowfish_dec_blk_4way)
+@@ -364,5 +364,5 @@ SYM_FUNC_START(blowfish_dec_blk_4way)
+ 	popq %rbx;
+ 	popq %r12;
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(blowfish_dec_blk_4way)
+diff --git a/arch/x86/crypto/camellia-aesni-avx-asm_64.S b/arch/x86/crypto/camellia-aesni-avx-asm_64.S
+index ecc0a9a905c48..297b1a7ed2bc7 100644
+--- a/arch/x86/crypto/camellia-aesni-avx-asm_64.S
++++ b/arch/x86/crypto/camellia-aesni-avx-asm_64.S
+@@ -193,7 +193,7 @@ SYM_FUNC_START_LOCAL(roundsm16_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_c
+ 	roundsm16(%xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
+ 		  %xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm15,
+ 		  %rcx, (%r9));
+-	ret;
++	RET;
+ SYM_FUNC_END(roundsm16_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd)
+ 
+ .align 8
+@@ -201,7 +201,7 @@ SYM_FUNC_START_LOCAL(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_a
+ 	roundsm16(%xmm4, %xmm5, %xmm6, %xmm7, %xmm0, %xmm1, %xmm2, %xmm3,
+ 		  %xmm12, %xmm13, %xmm14, %xmm15, %xmm8, %xmm9, %xmm10, %xmm11,
+ 		  %rax, (%r9));
+-	ret;
++	RET;
+ SYM_FUNC_END(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
+ 
+ /*
+@@ -787,7 +787,7 @@ SYM_FUNC_START_LOCAL(__camellia_enc_blk16)
+ 		    %xmm15, (key_table)(CTX, %r8, 8), (%rax), 1 * 16(%rax));
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ 
+ .align 8
+ .Lenc_max32:
+@@ -874,7 +874,7 @@ SYM_FUNC_START_LOCAL(__camellia_dec_blk16)
+ 		    %xmm15, (key_table)(CTX), (%rax), 1 * 16(%rax));
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ 
+ .align 8
+ .Ldec_max32:
+@@ -915,7 +915,7 @@ SYM_FUNC_START(camellia_ecb_enc_16way)
+ 		     %xmm8, %rsi);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(camellia_ecb_enc_16way)
+ 
+ SYM_FUNC_START(camellia_ecb_dec_16way)
+@@ -945,7 +945,7 @@ SYM_FUNC_START(camellia_ecb_dec_16way)
+ 		     %xmm8, %rsi);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(camellia_ecb_dec_16way)
+ 
+ SYM_FUNC_START(camellia_cbc_dec_16way)
+@@ -996,7 +996,7 @@ SYM_FUNC_START(camellia_cbc_dec_16way)
+ 		     %xmm8, %rsi);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(camellia_cbc_dec_16way)
+ 
+ #define inc_le128(x, minus_one, tmp) \
+@@ -1109,7 +1109,7 @@ SYM_FUNC_START(camellia_ctr_16way)
+ 		     %xmm8, %rsi);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(camellia_ctr_16way)
+ 
+ #define gf128mul_x_ble(iv, mask, tmp) \
+@@ -1253,7 +1253,7 @@ SYM_FUNC_START_LOCAL(camellia_xts_crypt_16way)
+ 		     %xmm8, %rsi);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(camellia_xts_crypt_16way)
+ 
+ SYM_FUNC_START(camellia_xts_enc_16way)
+diff --git a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
+index 0907243c501cd..288cd246da20f 100644
+--- a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
++++ b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
+@@ -227,7 +227,7 @@ SYM_FUNC_START_LOCAL(roundsm32_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_c
+ 	roundsm32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7,
+ 		  %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, %ymm15,
+ 		  %rcx, (%r9));
+-	ret;
++	RET;
+ SYM_FUNC_END(roundsm32_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd)
+ 
+ .align 8
+@@ -235,7 +235,7 @@ SYM_FUNC_START_LOCAL(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_a
+ 	roundsm32(%ymm4, %ymm5, %ymm6, %ymm7, %ymm0, %ymm1, %ymm2, %ymm3,
+ 		  %ymm12, %ymm13, %ymm14, %ymm15, %ymm8, %ymm9, %ymm10, %ymm11,
+ 		  %rax, (%r9));
+-	ret;
++	RET;
+ SYM_FUNC_END(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
+ 
+ /*
+@@ -825,7 +825,7 @@ SYM_FUNC_START_LOCAL(__camellia_enc_blk32)
+ 		    %ymm15, (key_table)(CTX, %r8, 8), (%rax), 1 * 32(%rax));
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ 
+ .align 8
+ .Lenc_max32:
+@@ -912,7 +912,7 @@ SYM_FUNC_START_LOCAL(__camellia_dec_blk32)
+ 		    %ymm15, (key_table)(CTX), (%rax), 1 * 32(%rax));
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ 
+ .align 8
+ .Ldec_max32:
+@@ -957,7 +957,7 @@ SYM_FUNC_START(camellia_ecb_enc_32way)
+ 	vzeroupper;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(camellia_ecb_enc_32way)
+ 
+ SYM_FUNC_START(camellia_ecb_dec_32way)
+@@ -991,7 +991,7 @@ SYM_FUNC_START(camellia_ecb_dec_32way)
+ 	vzeroupper;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(camellia_ecb_dec_32way)
+ 
+ SYM_FUNC_START(camellia_cbc_dec_32way)
+@@ -1059,7 +1059,7 @@ SYM_FUNC_START(camellia_cbc_dec_32way)
+ 	vzeroupper;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(camellia_cbc_dec_32way)
+ 
+ #define inc_le128(x, minus_one, tmp) \
+@@ -1199,7 +1199,7 @@ SYM_FUNC_START(camellia_ctr_32way)
+ 	vzeroupper;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(camellia_ctr_32way)
+ 
+ #define gf128mul_x_ble(iv, mask, tmp) \
+@@ -1366,7 +1366,7 @@ SYM_FUNC_START_LOCAL(camellia_xts_crypt_32way)
+ 	vzeroupper;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(camellia_xts_crypt_32way)
+ 
+ SYM_FUNC_START(camellia_xts_enc_32way)
+diff --git a/arch/x86/crypto/camellia-x86_64-asm_64.S b/arch/x86/crypto/camellia-x86_64-asm_64.S
+index 1372e64088507..347c059f59403 100644
+--- a/arch/x86/crypto/camellia-x86_64-asm_64.S
++++ b/arch/x86/crypto/camellia-x86_64-asm_64.S
+@@ -213,13 +213,13 @@ SYM_FUNC_START(__camellia_enc_blk)
+ 	enc_outunpack(mov, RT1);
+ 
+ 	movq RR12, %r12;
+-	ret;
++	RET;
+ 
+ .L__enc_xor:
+ 	enc_outunpack(xor, RT1);
+ 
+ 	movq RR12, %r12;
+-	ret;
++	RET;
+ SYM_FUNC_END(__camellia_enc_blk)
+ 
+ SYM_FUNC_START(camellia_dec_blk)
+@@ -257,7 +257,7 @@ SYM_FUNC_START(camellia_dec_blk)
+ 	dec_outunpack();
+ 
+ 	movq RR12, %r12;
+-	ret;
++	RET;
+ SYM_FUNC_END(camellia_dec_blk)
+ 
+ /**********************************************************************
+@@ -448,14 +448,14 @@ SYM_FUNC_START(__camellia_enc_blk_2way)
+ 
+ 	movq RR12, %r12;
+ 	popq %rbx;
+-	ret;
++	RET;
+ 
+ .L__enc2_xor:
+ 	enc_outunpack2(xor, RT2);
+ 
+ 	movq RR12, %r12;
+ 	popq %rbx;
+-	ret;
++	RET;
+ SYM_FUNC_END(__camellia_enc_blk_2way)
+ 
+ SYM_FUNC_START(camellia_dec_blk_2way)
+@@ -495,5 +495,5 @@ SYM_FUNC_START(camellia_dec_blk_2way)
+ 
+ 	movq RR12, %r12;
+ 	movq RXOR, %rbx;
+-	ret;
++	RET;
+ SYM_FUNC_END(camellia_dec_blk_2way)
+diff --git a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
+index 8a6181b08b590..b258af420c92c 100644
+--- a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
++++ b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
+@@ -279,7 +279,7 @@ SYM_FUNC_START_LOCAL(__cast5_enc_blk16)
+ 	outunpack_blocks(RR3, RL3, RTMP, RX, RKM);
+ 	outunpack_blocks(RR4, RL4, RTMP, RX, RKM);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(__cast5_enc_blk16)
+ 
+ .align 16
+@@ -352,7 +352,7 @@ SYM_FUNC_START_LOCAL(__cast5_dec_blk16)
+ 	outunpack_blocks(RR3, RL3, RTMP, RX, RKM);
+ 	outunpack_blocks(RR4, RL4, RTMP, RX, RKM);
+ 
+-	ret;
++	RET;
+ 
+ .L__skip_dec:
+ 	vpsrldq $4, RKR, RKR;
+@@ -393,7 +393,7 @@ SYM_FUNC_START(cast5_ecb_enc_16way)
+ 
+ 	popq %r15;
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(cast5_ecb_enc_16way)
+ 
+ SYM_FUNC_START(cast5_ecb_dec_16way)
+@@ -431,7 +431,7 @@ SYM_FUNC_START(cast5_ecb_dec_16way)
+ 
+ 	popq %r15;
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(cast5_ecb_dec_16way)
+ 
+ SYM_FUNC_START(cast5_cbc_dec_16way)
+@@ -483,7 +483,7 @@ SYM_FUNC_START(cast5_cbc_dec_16way)
+ 	popq %r15;
+ 	popq %r12;
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(cast5_cbc_dec_16way)
+ 
+ SYM_FUNC_START(cast5_ctr_16way)
+@@ -559,5 +559,5 @@ SYM_FUNC_START(cast5_ctr_16way)
+ 	popq %r15;
+ 	popq %r12;
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(cast5_ctr_16way)
+diff --git a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
+index 932a3ce32a88e..6eccaf1fb4c6a 100644
+--- a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
++++ b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
+@@ -291,7 +291,7 @@ SYM_FUNC_START_LOCAL(__cast6_enc_blk8)
+ 	outunpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM);
+ 	outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(__cast6_enc_blk8)
+ 
+ .align 8
+@@ -338,7 +338,7 @@ SYM_FUNC_START_LOCAL(__cast6_dec_blk8)
+ 	outunpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM);
+ 	outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(__cast6_dec_blk8)
+ 
+ SYM_FUNC_START(cast6_ecb_enc_8way)
+@@ -361,7 +361,7 @@ SYM_FUNC_START(cast6_ecb_enc_8way)
+ 
+ 	popq %r15;
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(cast6_ecb_enc_8way)
+ 
+ SYM_FUNC_START(cast6_ecb_dec_8way)
+@@ -384,7 +384,7 @@ SYM_FUNC_START(cast6_ecb_dec_8way)
+ 
+ 	popq %r15;
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(cast6_ecb_dec_8way)
+ 
+ SYM_FUNC_START(cast6_cbc_dec_8way)
+@@ -410,7 +410,7 @@ SYM_FUNC_START(cast6_cbc_dec_8way)
+ 	popq %r15;
+ 	popq %r12;
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(cast6_cbc_dec_8way)
+ 
+ SYM_FUNC_START(cast6_ctr_8way)
+@@ -438,7 +438,7 @@ SYM_FUNC_START(cast6_ctr_8way)
+ 	popq %r15;
+ 	popq %r12;
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(cast6_ctr_8way)
+ 
+ SYM_FUNC_START(cast6_xts_enc_8way)
+@@ -465,7 +465,7 @@ SYM_FUNC_START(cast6_xts_enc_8way)
+ 
+ 	popq %r15;
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(cast6_xts_enc_8way)
+ 
+ SYM_FUNC_START(cast6_xts_dec_8way)
+@@ -492,5 +492,5 @@ SYM_FUNC_START(cast6_xts_dec_8way)
+ 
+ 	popq %r15;
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(cast6_xts_dec_8way)
+diff --git a/arch/x86/crypto/chacha-avx2-x86_64.S b/arch/x86/crypto/chacha-avx2-x86_64.S
+index ee9a40ab41093..f3d8fc0182493 100644
+--- a/arch/x86/crypto/chacha-avx2-x86_64.S
++++ b/arch/x86/crypto/chacha-avx2-x86_64.S
+@@ -193,7 +193,7 @@ SYM_FUNC_START(chacha_2block_xor_avx2)
+ 
+ .Ldone2:
+ 	vzeroupper
+-	ret
++	RET
+ 
+ .Lxorpart2:
+ 	# xor remaining bytes from partial register into output
+@@ -498,7 +498,7 @@ SYM_FUNC_START(chacha_4block_xor_avx2)
+ 
+ .Ldone4:
+ 	vzeroupper
+-	ret
++	RET
+ 
+ .Lxorpart4:
+ 	# xor remaining bytes from partial register into output
+@@ -992,7 +992,7 @@ SYM_FUNC_START(chacha_8block_xor_avx2)
+ .Ldone8:
+ 	vzeroupper
+ 	lea		-8(%r10),%rsp
+-	ret
++	RET
+ 
+ .Lxorpart8:
+ 	# xor remaining bytes from partial register into output
+diff --git a/arch/x86/crypto/chacha-avx512vl-x86_64.S b/arch/x86/crypto/chacha-avx512vl-x86_64.S
+index 8713c16c2501a..259383e1ad440 100644
+--- a/arch/x86/crypto/chacha-avx512vl-x86_64.S
++++ b/arch/x86/crypto/chacha-avx512vl-x86_64.S
+@@ -166,7 +166,7 @@ SYM_FUNC_START(chacha_2block_xor_avx512vl)
+ 
+ .Ldone2:
+ 	vzeroupper
+-	ret
++	RET
+ 
+ .Lxorpart2:
+ 	# xor remaining bytes from partial register into output
+@@ -432,7 +432,7 @@ SYM_FUNC_START(chacha_4block_xor_avx512vl)
+ 
+ .Ldone4:
+ 	vzeroupper
+-	ret
++	RET
+ 
+ .Lxorpart4:
+ 	# xor remaining bytes from partial register into output
+@@ -812,7 +812,7 @@ SYM_FUNC_START(chacha_8block_xor_avx512vl)
+ 
+ .Ldone8:
+ 	vzeroupper
+-	ret
++	RET
+ 
+ .Lxorpart8:
+ 	# xor remaining bytes from partial register into output
+diff --git a/arch/x86/crypto/chacha-ssse3-x86_64.S b/arch/x86/crypto/chacha-ssse3-x86_64.S
+index ca1788bfee162..7111949cd5b99 100644
+--- a/arch/x86/crypto/chacha-ssse3-x86_64.S
++++ b/arch/x86/crypto/chacha-ssse3-x86_64.S
+@@ -108,7 +108,7 @@ SYM_FUNC_START_LOCAL(chacha_permute)
+ 	sub		$2,%r8d
+ 	jnz		.Ldoubleround
+ 
+-	ret
++	RET
+ SYM_FUNC_END(chacha_permute)
+ 
+ SYM_FUNC_START(chacha_block_xor_ssse3)
+@@ -166,7 +166,7 @@ SYM_FUNC_START(chacha_block_xor_ssse3)
+ 
+ .Ldone:
+ 	FRAME_END
+-	ret
++	RET
+ 
+ .Lxorpart:
+ 	# xor remaining bytes from partial register into output
+@@ -217,7 +217,7 @@ SYM_FUNC_START(hchacha_block_ssse3)
+ 	movdqu		%xmm3,0x10(%rsi)
+ 
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(hchacha_block_ssse3)
+ 
+ SYM_FUNC_START(chacha_4block_xor_ssse3)
+@@ -762,7 +762,7 @@ SYM_FUNC_START(chacha_4block_xor_ssse3)
+ 
+ .Ldone4:
+ 	lea		-8(%r10),%rsp
+-	ret
++	RET
+ 
+ .Lxorpart4:
+ 	# xor remaining bytes from partial register into output
+diff --git a/arch/x86/crypto/crc32-pclmul_asm.S b/arch/x86/crypto/crc32-pclmul_asm.S
+index 6e7d4c4d32081..c392a6edbfff6 100644
+--- a/arch/x86/crypto/crc32-pclmul_asm.S
++++ b/arch/x86/crypto/crc32-pclmul_asm.S
+@@ -236,5 +236,5 @@ fold_64:
+ 	pxor    %xmm2, %xmm1
+ 	pextrd  $0x01, %xmm1, %eax
+ 
+-	ret
++	RET
+ SYM_FUNC_END(crc32_pclmul_le_16)
+diff --git a/arch/x86/crypto/crc32c-pcl-intel-asm_64.S b/arch/x86/crypto/crc32c-pcl-intel-asm_64.S
+index 884dc767b0514..f6e3568fbd635 100644
+--- a/arch/x86/crypto/crc32c-pcl-intel-asm_64.S
++++ b/arch/x86/crypto/crc32c-pcl-intel-asm_64.S
+@@ -309,7 +309,7 @@ do_return:
+ 	popq    %rsi
+ 	popq    %rdi
+ 	popq    %rbx
+-        ret
++        RET
+ SYM_FUNC_END(crc_pcl)
+ 
+ .section	.rodata, "a", @progbits
+diff --git a/arch/x86/crypto/crct10dif-pcl-asm_64.S b/arch/x86/crypto/crct10dif-pcl-asm_64.S
+index b2533d63030e5..721474abfb719 100644
+--- a/arch/x86/crypto/crct10dif-pcl-asm_64.S
++++ b/arch/x86/crypto/crct10dif-pcl-asm_64.S
+@@ -257,7 +257,7 @@ SYM_FUNC_START(crc_t10dif_pcl)
+ 	# Final CRC value (x^16 * M(x)) mod G(x) is in low 16 bits of xmm0.
+ 
+ 	pextrw	$0, %xmm0, %eax
+-	ret
++	RET
+ 
+ .align 16
+ .Lless_than_256_bytes:
+diff --git a/arch/x86/crypto/des3_ede-asm_64.S b/arch/x86/crypto/des3_ede-asm_64.S
+index fac0fdc3f25da..f4c760f4cade6 100644
+--- a/arch/x86/crypto/des3_ede-asm_64.S
++++ b/arch/x86/crypto/des3_ede-asm_64.S
+@@ -243,7 +243,7 @@ SYM_FUNC_START(des3_ede_x86_64_crypt_blk)
+ 	popq %r12;
+ 	popq %rbx;
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(des3_ede_x86_64_crypt_blk)
+ 
+ /***********************************************************************
+@@ -528,7 +528,7 @@ SYM_FUNC_START(des3_ede_x86_64_crypt_blk_3way)
+ 	popq %r12;
+ 	popq %rbx;
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(des3_ede_x86_64_crypt_blk_3way)
+ 
+ .section	.rodata, "a", @progbits
+diff --git a/arch/x86/crypto/ghash-clmulni-intel_asm.S b/arch/x86/crypto/ghash-clmulni-intel_asm.S
+index 99ac25e18e098..2bf8718999209 100644
+--- a/arch/x86/crypto/ghash-clmulni-intel_asm.S
++++ b/arch/x86/crypto/ghash-clmulni-intel_asm.S
+@@ -85,7 +85,7 @@ SYM_FUNC_START_LOCAL(__clmul_gf128mul_ble)
+ 	psrlq $1, T2
+ 	pxor T2, T1
+ 	pxor T1, DATA
+-	ret
++	RET
+ SYM_FUNC_END(__clmul_gf128mul_ble)
+ 
+ /* void clmul_ghash_mul(char *dst, const u128 *shash) */
+@@ -99,7 +99,7 @@ SYM_FUNC_START(clmul_ghash_mul)
+ 	pshufb BSWAP, DATA
+ 	movups DATA, (%rdi)
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(clmul_ghash_mul)
+ 
+ /*
+@@ -128,5 +128,5 @@ SYM_FUNC_START(clmul_ghash_update)
+ 	movups DATA, (%rdi)
+ .Lupdate_just_ret:
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(clmul_ghash_update)
+diff --git a/arch/x86/crypto/nh-avx2-x86_64.S b/arch/x86/crypto/nh-avx2-x86_64.S
+index b22c7b9362726..6a0b15e7196a8 100644
+--- a/arch/x86/crypto/nh-avx2-x86_64.S
++++ b/arch/x86/crypto/nh-avx2-x86_64.S
+@@ -153,5 +153,5 @@ SYM_FUNC_START(nh_avx2)
+ 	vpaddq		T1, T0, T0
+ 	vpaddq		T4, T0, T0
+ 	vmovdqu		T0, (HASH)
+-	ret
++	RET
+ SYM_FUNC_END(nh_avx2)
+diff --git a/arch/x86/crypto/nh-sse2-x86_64.S b/arch/x86/crypto/nh-sse2-x86_64.S
+index d7ae22dd66839..34c567bbcb4fa 100644
+--- a/arch/x86/crypto/nh-sse2-x86_64.S
++++ b/arch/x86/crypto/nh-sse2-x86_64.S
+@@ -119,5 +119,5 @@ SYM_FUNC_START(nh_sse2)
+ 	paddq		PASS2_SUMS, T1
+ 	movdqu		T0, 0x00(HASH)
+ 	movdqu		T1, 0x10(HASH)
+-	ret
++	RET
+ SYM_FUNC_END(nh_sse2)
+diff --git a/arch/x86/crypto/poly1305-x86_64-cryptogams.pl b/arch/x86/crypto/poly1305-x86_64-cryptogams.pl
+index 7d568012cc15b..58eaec958c866 100644
+--- a/arch/x86/crypto/poly1305-x86_64-cryptogams.pl
++++ b/arch/x86/crypto/poly1305-x86_64-cryptogams.pl
+@@ -297,7 +297,7 @@ ___
+ $code.=<<___;
+ 	mov	\$1,%eax
+ .Lno_key:
+-	ret
++	RET
+ ___
+ &end_function("poly1305_init_x86_64");
+ 
+@@ -373,7 +373,7 @@ $code.=<<___;
+ .cfi_adjust_cfa_offset	-48
+ .Lno_data:
+ .Lblocks_epilogue:
+-	ret
++	RET
+ .cfi_endproc
+ ___
+ &end_function("poly1305_blocks_x86_64");
+@@ -399,7 +399,7 @@ $code.=<<___;
+ 	mov	%rax,0($mac)	# write result
+ 	mov	%rcx,8($mac)
+ 
+-	ret
++	RET
+ ___
+ &end_function("poly1305_emit_x86_64");
+ if ($avx) {
+@@ -429,7 +429,7 @@ ___
+ 	&poly1305_iteration();
+ $code.=<<___;
+ 	pop $ctx
+-	ret
++	RET
+ .size	__poly1305_block,.-__poly1305_block
+ 
+ .type	__poly1305_init_avx,\@abi-omnipotent
+@@ -594,7 +594,7 @@ __poly1305_init_avx:
+ 
+ 	lea	-48-64($ctx),$ctx	# size [de-]optimization
+ 	pop %rbp
+-	ret
++	RET
+ .size	__poly1305_init_avx,.-__poly1305_init_avx
+ ___
+ 
+@@ -747,7 +747,7 @@ $code.=<<___;
+ .cfi_restore	%rbp
+ .Lno_data_avx:
+ .Lblocks_avx_epilogue:
+-	ret
++	RET
+ .cfi_endproc
+ 
+ .align	32
+@@ -1452,7 +1452,7 @@ $code.=<<___	if (!$win64);
+ ___
+ $code.=<<___;
+ 	vzeroupper
+-	ret
++	RET
+ .cfi_endproc
+ ___
+ &end_function("poly1305_blocks_avx");
+@@ -1508,7 +1508,7 @@ $code.=<<___;
+ 	mov	%rax,0($mac)	# write result
+ 	mov	%rcx,8($mac)
+ 
+-	ret
++	RET
+ ___
+ &end_function("poly1305_emit_avx");
+ 
+@@ -1675,7 +1675,7 @@ $code.=<<___;
+ .cfi_restore 	%rbp
+ .Lno_data_avx2$suffix:
+ .Lblocks_avx2_epilogue$suffix:
+-	ret
++	RET
+ .cfi_endproc
+ 
+ .align	32
+@@ -2201,7 +2201,7 @@ $code.=<<___	if (!$win64);
+ ___
+ $code.=<<___;
+ 	vzeroupper
+-	ret
++	RET
+ .cfi_endproc
+ ___
+ if($avx > 2 && $avx512) {
+@@ -2792,7 +2792,7 @@ $code.=<<___	if (!$win64);
+ .cfi_def_cfa_register	%rsp
+ ___
+ $code.=<<___;
+-	ret
++	RET
+ .cfi_endproc
+ ___
+ 
+@@ -2893,7 +2893,7 @@ $code.=<<___	if ($flavour =~ /elf32/);
+ ___
+ $code.=<<___;
+ 	mov	\$1,%eax
+-	ret
++	RET
+ .size	poly1305_init_base2_44,.-poly1305_init_base2_44
+ ___
+ {
+@@ -3010,7 +3010,7 @@ poly1305_blocks_vpmadd52:
+ 	jnz		.Lblocks_vpmadd52_4x
+ 
+ .Lno_data_vpmadd52:
+-	ret
++	RET
+ .size	poly1305_blocks_vpmadd52,.-poly1305_blocks_vpmadd52
+ ___
+ }
+@@ -3451,7 +3451,7 @@ poly1305_blocks_vpmadd52_4x:
+ 	vzeroall
+ 
+ .Lno_data_vpmadd52_4x:
+-	ret
++	RET
+ .size	poly1305_blocks_vpmadd52_4x,.-poly1305_blocks_vpmadd52_4x
+ ___
+ }
+@@ -3824,7 +3824,7 @@ $code.=<<___;
+ 	vzeroall
+ 
+ .Lno_data_vpmadd52_8x:
+-	ret
++	RET
+ .size	poly1305_blocks_vpmadd52_8x,.-poly1305_blocks_vpmadd52_8x
+ ___
+ }
+@@ -3861,7 +3861,7 @@ poly1305_emit_base2_44:
+ 	mov	%rax,0($mac)	# write result
+ 	mov	%rcx,8($mac)
+ 
+-	ret
++	RET
+ .size	poly1305_emit_base2_44,.-poly1305_emit_base2_44
+ ___
+ }	}	}
+@@ -3916,7 +3916,7 @@ xor128_encrypt_n_pad:
+ 
+ .Ldone_enc:
+ 	mov	$otp,%rax
+-	ret
++	RET
+ .size	xor128_encrypt_n_pad,.-xor128_encrypt_n_pad
+ 
+ .globl	xor128_decrypt_n_pad
+@@ -3967,7 +3967,7 @@ xor128_decrypt_n_pad:
+ 
+ .Ldone_dec:
+ 	mov	$otp,%rax
+-	ret
++	RET
+ .size	xor128_decrypt_n_pad,.-xor128_decrypt_n_pad
+ ___
+ }
+@@ -4109,7 +4109,7 @@ avx_handler:
+ 	pop	%rbx
+ 	pop	%rdi
+ 	pop	%rsi
+-	ret
++	RET
+ .size	avx_handler,.-avx_handler
+ 
+ .section	.pdata
+diff --git a/arch/x86/crypto/serpent-avx-x86_64-asm_64.S b/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
+index ba9e4c1e7f5c7..c985bc15304b2 100644
+--- a/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
++++ b/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
+@@ -605,7 +605,7 @@ SYM_FUNC_START_LOCAL(__serpent_enc_blk8_avx)
+ 	write_blocks(RA1, RB1, RC1, RD1, RK0, RK1, RK2);
+ 	write_blocks(RA2, RB2, RC2, RD2, RK0, RK1, RK2);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(__serpent_enc_blk8_avx)
+ 
+ .align 8
+@@ -659,7 +659,7 @@ SYM_FUNC_START_LOCAL(__serpent_dec_blk8_avx)
+ 	write_blocks(RC1, RD1, RB1, RE1, RK0, RK1, RK2);
+ 	write_blocks(RC2, RD2, RB2, RE2, RK0, RK1, RK2);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(__serpent_dec_blk8_avx)
+ 
+ SYM_FUNC_START(serpent_ecb_enc_8way_avx)
+@@ -677,7 +677,7 @@ SYM_FUNC_START(serpent_ecb_enc_8way_avx)
+ 	store_8way(%rsi, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_ecb_enc_8way_avx)
+ 
+ SYM_FUNC_START(serpent_ecb_dec_8way_avx)
+@@ -695,7 +695,7 @@ SYM_FUNC_START(serpent_ecb_dec_8way_avx)
+ 	store_8way(%rsi, RC1, RD1, RB1, RE1, RC2, RD2, RB2, RE2);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_ecb_dec_8way_avx)
+ 
+ SYM_FUNC_START(serpent_cbc_dec_8way_avx)
+@@ -713,7 +713,7 @@ SYM_FUNC_START(serpent_cbc_dec_8way_avx)
+ 	store_cbc_8way(%rdx, %rsi, RC1, RD1, RB1, RE1, RC2, RD2, RB2, RE2);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_cbc_dec_8way_avx)
+ 
+ SYM_FUNC_START(serpent_ctr_8way_avx)
+@@ -733,7 +733,7 @@ SYM_FUNC_START(serpent_ctr_8way_avx)
+ 	store_ctr_8way(%rdx, %rsi, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_ctr_8way_avx)
+ 
+ SYM_FUNC_START(serpent_xts_enc_8way_avx)
+@@ -755,7 +755,7 @@ SYM_FUNC_START(serpent_xts_enc_8way_avx)
+ 	store_xts_8way(%rsi, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_xts_enc_8way_avx)
+ 
+ SYM_FUNC_START(serpent_xts_dec_8way_avx)
+@@ -777,5 +777,5 @@ SYM_FUNC_START(serpent_xts_dec_8way_avx)
+ 	store_xts_8way(%rsi, RC1, RD1, RB1, RE1, RC2, RD2, RB2, RE2);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_xts_dec_8way_avx)
+diff --git a/arch/x86/crypto/serpent-avx2-asm_64.S b/arch/x86/crypto/serpent-avx2-asm_64.S
+index c9648aeae7053..ca18948964f47 100644
+--- a/arch/x86/crypto/serpent-avx2-asm_64.S
++++ b/arch/x86/crypto/serpent-avx2-asm_64.S
+@@ -611,7 +611,7 @@ SYM_FUNC_START_LOCAL(__serpent_enc_blk16)
+ 	write_blocks(RA1, RB1, RC1, RD1, RK0, RK1, RK2);
+ 	write_blocks(RA2, RB2, RC2, RD2, RK0, RK1, RK2);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(__serpent_enc_blk16)
+ 
+ .align 8
+@@ -665,7 +665,7 @@ SYM_FUNC_START_LOCAL(__serpent_dec_blk16)
+ 	write_blocks(RC1, RD1, RB1, RE1, RK0, RK1, RK2);
+ 	write_blocks(RC2, RD2, RB2, RE2, RK0, RK1, RK2);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(__serpent_dec_blk16)
+ 
+ SYM_FUNC_START(serpent_ecb_enc_16way)
+@@ -687,7 +687,7 @@ SYM_FUNC_START(serpent_ecb_enc_16way)
+ 	vzeroupper;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_ecb_enc_16way)
+ 
+ SYM_FUNC_START(serpent_ecb_dec_16way)
+@@ -709,7 +709,7 @@ SYM_FUNC_START(serpent_ecb_dec_16way)
+ 	vzeroupper;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_ecb_dec_16way)
+ 
+ SYM_FUNC_START(serpent_cbc_dec_16way)
+@@ -732,7 +732,7 @@ SYM_FUNC_START(serpent_cbc_dec_16way)
+ 	vzeroupper;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_cbc_dec_16way)
+ 
+ SYM_FUNC_START(serpent_ctr_16way)
+@@ -757,7 +757,7 @@ SYM_FUNC_START(serpent_ctr_16way)
+ 	vzeroupper;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_ctr_16way)
+ 
+ SYM_FUNC_START(serpent_xts_enc_16way)
+@@ -783,7 +783,7 @@ SYM_FUNC_START(serpent_xts_enc_16way)
+ 	vzeroupper;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_xts_enc_16way)
+ 
+ SYM_FUNC_START(serpent_xts_dec_16way)
+@@ -809,5 +809,5 @@ SYM_FUNC_START(serpent_xts_dec_16way)
+ 	vzeroupper;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_xts_dec_16way)
+diff --git a/arch/x86/crypto/serpent-sse2-i586-asm_32.S b/arch/x86/crypto/serpent-sse2-i586-asm_32.S
+index 6379b99cb722e..8ccb03ad7cef5 100644
+--- a/arch/x86/crypto/serpent-sse2-i586-asm_32.S
++++ b/arch/x86/crypto/serpent-sse2-i586-asm_32.S
+@@ -553,12 +553,12 @@ SYM_FUNC_START(__serpent_enc_blk_4way)
+ 
+ 	write_blocks(%eax, RA, RB, RC, RD, RT0, RT1, RE);
+ 
+-	ret;
++	RET;
+ 
+ .L__enc_xor4:
+ 	xor_blocks(%eax, RA, RB, RC, RD, RT0, RT1, RE);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(__serpent_enc_blk_4way)
+ 
+ SYM_FUNC_START(serpent_dec_blk_4way)
+@@ -612,5 +612,5 @@ SYM_FUNC_START(serpent_dec_blk_4way)
+ 	movl arg_dst(%esp), %eax;
+ 	write_blocks(%eax, RC, RD, RB, RE, RT0, RT1, RA);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_dec_blk_4way)
+diff --git a/arch/x86/crypto/serpent-sse2-x86_64-asm_64.S b/arch/x86/crypto/serpent-sse2-x86_64-asm_64.S
+index efb6dc17dc907..e0998a011d1dd 100644
+--- a/arch/x86/crypto/serpent-sse2-x86_64-asm_64.S
++++ b/arch/x86/crypto/serpent-sse2-x86_64-asm_64.S
+@@ -675,13 +675,13 @@ SYM_FUNC_START(__serpent_enc_blk_8way)
+ 	write_blocks(%rsi, RA1, RB1, RC1, RD1, RK0, RK1, RK2);
+ 	write_blocks(%rax, RA2, RB2, RC2, RD2, RK0, RK1, RK2);
+ 
+-	ret;
++	RET;
+ 
+ .L__enc_xor8:
+ 	xor_blocks(%rsi, RA1, RB1, RC1, RD1, RK0, RK1, RK2);
+ 	xor_blocks(%rax, RA2, RB2, RC2, RD2, RK0, RK1, RK2);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(__serpent_enc_blk_8way)
+ 
+ SYM_FUNC_START(serpent_dec_blk_8way)
+@@ -735,5 +735,5 @@ SYM_FUNC_START(serpent_dec_blk_8way)
+ 	write_blocks(%rsi, RC1, RD1, RB1, RE1, RK0, RK1, RK2);
+ 	write_blocks(%rax, RC2, RD2, RB2, RE2, RK0, RK1, RK2);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(serpent_dec_blk_8way)
+diff --git a/arch/x86/crypto/sha1_avx2_x86_64_asm.S b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
+index 1e594d60afa56..6fa622652bfc7 100644
+--- a/arch/x86/crypto/sha1_avx2_x86_64_asm.S
++++ b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
+@@ -674,7 +674,7 @@ _loop3:
+ 	pop	%r12
+ 	pop	%rbx
+ 
+-	ret
++	RET
+ 
+ 	SYM_FUNC_END(\name)
+ .endm
+diff --git a/arch/x86/crypto/sha1_ni_asm.S b/arch/x86/crypto/sha1_ni_asm.S
+index 11efe3a45a1fd..b59f3ca628377 100644
+--- a/arch/x86/crypto/sha1_ni_asm.S
++++ b/arch/x86/crypto/sha1_ni_asm.S
+@@ -290,7 +290,7 @@ SYM_FUNC_START(sha1_ni_transform)
+ .Ldone_hash:
+ 	mov		RSPSAVE, %rsp
+ 
+-	ret
++	RET
+ SYM_FUNC_END(sha1_ni_transform)
+ 
+ .section	.rodata.cst16.PSHUFFLE_BYTE_FLIP_MASK, "aM", @progbits, 16
+diff --git a/arch/x86/crypto/sha1_ssse3_asm.S b/arch/x86/crypto/sha1_ssse3_asm.S
+index d25668d2a1e92..263f916362e02 100644
+--- a/arch/x86/crypto/sha1_ssse3_asm.S
++++ b/arch/x86/crypto/sha1_ssse3_asm.S
+@@ -99,7 +99,7 @@
+ 	pop	%rbp
+ 	pop	%r12
+ 	pop	%rbx
+-	ret
++	RET
+ 
+ 	SYM_FUNC_END(\name)
+ .endm
+diff --git a/arch/x86/crypto/sha256-avx-asm.S b/arch/x86/crypto/sha256-avx-asm.S
+index 4739cd31b9db1..3baa1ec390974 100644
+--- a/arch/x86/crypto/sha256-avx-asm.S
++++ b/arch/x86/crypto/sha256-avx-asm.S
+@@ -458,7 +458,7 @@ done_hash:
+ 	popq    %r13
+ 	popq	%r12
+ 	popq    %rbx
+-	ret
++	RET
+ SYM_FUNC_END(sha256_transform_avx)
+ 
+ .section	.rodata.cst256.K256, "aM", @progbits, 256
+diff --git a/arch/x86/crypto/sha256-avx2-asm.S b/arch/x86/crypto/sha256-avx2-asm.S
+index 11ff60c29c8bb..3439aaf4295d2 100644
+--- a/arch/x86/crypto/sha256-avx2-asm.S
++++ b/arch/x86/crypto/sha256-avx2-asm.S
+@@ -711,7 +711,7 @@ done_hash:
+ 	popq	%r13
+ 	popq	%r12
+ 	popq	%rbx
+-	ret
++	RET
+ SYM_FUNC_END(sha256_transform_rorx)
+ 
+ .section	.rodata.cst512.K256, "aM", @progbits, 512
+diff --git a/arch/x86/crypto/sha256-ssse3-asm.S b/arch/x86/crypto/sha256-ssse3-asm.S
+index ddfa863b4ee33..c4a5db612c327 100644
+--- a/arch/x86/crypto/sha256-ssse3-asm.S
++++ b/arch/x86/crypto/sha256-ssse3-asm.S
+@@ -472,7 +472,7 @@ done_hash:
+ 	popq    %r12
+ 	popq    %rbx
+ 
+-	ret
++	RET
+ SYM_FUNC_END(sha256_transform_ssse3)
+ 
+ .section	.rodata.cst256.K256, "aM", @progbits, 256
+diff --git a/arch/x86/crypto/sha256_ni_asm.S b/arch/x86/crypto/sha256_ni_asm.S
+index 7abade04a3a38..94d50dd27cb53 100644
+--- a/arch/x86/crypto/sha256_ni_asm.S
++++ b/arch/x86/crypto/sha256_ni_asm.S
+@@ -326,7 +326,7 @@ SYM_FUNC_START(sha256_ni_transform)
+ 
+ .Ldone_hash:
+ 
+-	ret
++	RET
+ SYM_FUNC_END(sha256_ni_transform)
+ 
+ .section	.rodata.cst256.K256, "aM", @progbits, 256
+diff --git a/arch/x86/crypto/sha512-avx-asm.S b/arch/x86/crypto/sha512-avx-asm.S
+index 63470fd6ae320..34fc71c3524eb 100644
+--- a/arch/x86/crypto/sha512-avx-asm.S
++++ b/arch/x86/crypto/sha512-avx-asm.S
+@@ -364,7 +364,7 @@ updateblock:
+ 	mov	frame_RSPSAVE(%rsp), %rsp
+ 
+ nowork:
+-	ret
++	RET
+ SYM_FUNC_END(sha512_transform_avx)
+ 
+ ########################################################################
+diff --git a/arch/x86/crypto/sha512-avx2-asm.S b/arch/x86/crypto/sha512-avx2-asm.S
+index 3a44bdcfd5837..399fa74ab3d37 100644
+--- a/arch/x86/crypto/sha512-avx2-asm.S
++++ b/arch/x86/crypto/sha512-avx2-asm.S
+@@ -681,7 +681,7 @@ done_hash:
+ 
+ 	# Restore Stack Pointer
+ 	mov	frame_RSPSAVE(%rsp), %rsp
+-	ret
++	RET
+ SYM_FUNC_END(sha512_transform_rorx)
+ 
+ ########################################################################
+diff --git a/arch/x86/crypto/sha512-ssse3-asm.S b/arch/x86/crypto/sha512-ssse3-asm.S
+index 7946a1bee85b2..e9b460a54503e 100644
+--- a/arch/x86/crypto/sha512-ssse3-asm.S
++++ b/arch/x86/crypto/sha512-ssse3-asm.S
+@@ -366,7 +366,7 @@ updateblock:
+ 	mov	frame_RSPSAVE(%rsp), %rsp
+ 
+ nowork:
+-	ret
++	RET
+ SYM_FUNC_END(sha512_transform_ssse3)
+ 
+ ########################################################################
+diff --git a/arch/x86/crypto/twofish-avx-x86_64-asm_64.S b/arch/x86/crypto/twofish-avx-x86_64-asm_64.S
+index a5151393bb2f3..c3707a71ef726 100644
+--- a/arch/x86/crypto/twofish-avx-x86_64-asm_64.S
++++ b/arch/x86/crypto/twofish-avx-x86_64-asm_64.S
+@@ -272,7 +272,7 @@ SYM_FUNC_START_LOCAL(__twofish_enc_blk8)
+ 	outunpack_blocks(RC1, RD1, RA1, RB1, RK1, RX0, RY0, RK2);
+ 	outunpack_blocks(RC2, RD2, RA2, RB2, RK1, RX0, RY0, RK2);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(__twofish_enc_blk8)
+ 
+ .align 8
+@@ -312,7 +312,7 @@ SYM_FUNC_START_LOCAL(__twofish_dec_blk8)
+ 	outunpack_blocks(RA1, RB1, RC1, RD1, RK1, RX0, RY0, RK2);
+ 	outunpack_blocks(RA2, RB2, RC2, RD2, RK1, RX0, RY0, RK2);
+ 
+-	ret;
++	RET;
+ SYM_FUNC_END(__twofish_dec_blk8)
+ 
+ SYM_FUNC_START(twofish_ecb_enc_8way)
+@@ -332,7 +332,7 @@ SYM_FUNC_START(twofish_ecb_enc_8way)
+ 	store_8way(%r11, RC1, RD1, RA1, RB1, RC2, RD2, RA2, RB2);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(twofish_ecb_enc_8way)
+ 
+ SYM_FUNC_START(twofish_ecb_dec_8way)
+@@ -352,7 +352,7 @@ SYM_FUNC_START(twofish_ecb_dec_8way)
+ 	store_8way(%r11, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(twofish_ecb_dec_8way)
+ 
+ SYM_FUNC_START(twofish_cbc_dec_8way)
+@@ -377,7 +377,7 @@ SYM_FUNC_START(twofish_cbc_dec_8way)
+ 	popq %r12;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(twofish_cbc_dec_8way)
+ 
+ SYM_FUNC_START(twofish_ctr_8way)
+@@ -404,7 +404,7 @@ SYM_FUNC_START(twofish_ctr_8way)
+ 	popq %r12;
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(twofish_ctr_8way)
+ 
+ SYM_FUNC_START(twofish_xts_enc_8way)
+@@ -428,7 +428,7 @@ SYM_FUNC_START(twofish_xts_enc_8way)
+ 	store_xts_8way(%r11, RC1, RD1, RA1, RB1, RC2, RD2, RA2, RB2);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(twofish_xts_enc_8way)
+ 
+ SYM_FUNC_START(twofish_xts_dec_8way)
+@@ -452,5 +452,5 @@ SYM_FUNC_START(twofish_xts_dec_8way)
+ 	store_xts_8way(%r11, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
+ 
+ 	FRAME_END
+-	ret;
++	RET;
+ SYM_FUNC_END(twofish_xts_dec_8way)
+diff --git a/arch/x86/crypto/twofish-i586-asm_32.S b/arch/x86/crypto/twofish-i586-asm_32.S
+index a6f09e4f2e463..3abcad6618840 100644
+--- a/arch/x86/crypto/twofish-i586-asm_32.S
++++ b/arch/x86/crypto/twofish-i586-asm_32.S
+@@ -260,7 +260,7 @@ SYM_FUNC_START(twofish_enc_blk)
+ 	pop	%ebx
+ 	pop	%ebp
+ 	mov	$1,	%eax
+-	ret
++	RET
+ SYM_FUNC_END(twofish_enc_blk)
+ 
+ SYM_FUNC_START(twofish_dec_blk)
+@@ -317,5 +317,5 @@ SYM_FUNC_START(twofish_dec_blk)
+ 	pop	%ebx
+ 	pop	%ebp
+ 	mov	$1,	%eax
+-	ret
++	RET
+ SYM_FUNC_END(twofish_dec_blk)
+diff --git a/arch/x86/crypto/twofish-x86_64-asm_64-3way.S b/arch/x86/crypto/twofish-x86_64-asm_64-3way.S
+index fc23552afe37d..b12d916f08d69 100644
+--- a/arch/x86/crypto/twofish-x86_64-asm_64-3way.S
++++ b/arch/x86/crypto/twofish-x86_64-asm_64-3way.S
+@@ -258,7 +258,7 @@ SYM_FUNC_START(__twofish_enc_blk_3way)
+ 	popq %rbx;
+ 	popq %r12;
+ 	popq %r13;
+-	ret;
++	RET;
+ 
+ .L__enc_xor3:
+ 	outunpack_enc3(xor);
+@@ -266,7 +266,7 @@ SYM_FUNC_START(__twofish_enc_blk_3way)
+ 	popq %rbx;
+ 	popq %r12;
+ 	popq %r13;
+-	ret;
++	RET;
+ SYM_FUNC_END(__twofish_enc_blk_3way)
+ 
+ SYM_FUNC_START(twofish_dec_blk_3way)
+@@ -301,5 +301,5 @@ SYM_FUNC_START(twofish_dec_blk_3way)
+ 	popq %rbx;
+ 	popq %r12;
+ 	popq %r13;
+-	ret;
++	RET;
+ SYM_FUNC_END(twofish_dec_blk_3way)
+diff --git a/arch/x86/crypto/twofish-x86_64-asm_64.S b/arch/x86/crypto/twofish-x86_64-asm_64.S
+index d2e56232494a8..775af290cd196 100644
+--- a/arch/x86/crypto/twofish-x86_64-asm_64.S
++++ b/arch/x86/crypto/twofish-x86_64-asm_64.S
+@@ -252,7 +252,7 @@ SYM_FUNC_START(twofish_enc_blk)
+ 
+ 	popq	R1
+ 	movl	$1,%eax
+-	ret
++	RET
+ SYM_FUNC_END(twofish_enc_blk)
+ 
+ SYM_FUNC_START(twofish_dec_blk)
+@@ -304,5 +304,5 @@ SYM_FUNC_START(twofish_dec_blk)
+ 
+ 	popq	R1
+ 	movl	$1,%eax
+-	ret
++	RET
+ SYM_FUNC_END(twofish_dec_blk)
+diff --git a/arch/x86/entry/Makefile b/arch/x86/entry/Makefile
+index 08bf95dbc9112..58533752efab2 100644
+--- a/arch/x86/entry/Makefile
++++ b/arch/x86/entry/Makefile
+@@ -21,7 +21,7 @@ CFLAGS_syscall_64.o		+= $(call cc-option,-Wno-override-init,)
+ CFLAGS_syscall_32.o		+= $(call cc-option,-Wno-override-init,)
+ CFLAGS_syscall_x32.o		+= $(call cc-option,-Wno-override-init,)
+ 
+-obj-y				:= entry_$(BITS).o thunk_$(BITS).o syscall_$(BITS).o
++obj-y				:= entry.o entry_$(BITS).o thunk_$(BITS).o syscall_$(BITS).o
+ obj-y				+= common.o
+ 
+ obj-y				+= vdso/
+diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
+index 07a9331d55e73..a4b357e5bbfe9 100644
+--- a/arch/x86/entry/calling.h
++++ b/arch/x86/entry/calling.h
+@@ -6,6 +6,8 @@
+ #include <asm/percpu.h>
+ #include <asm/asm-offsets.h>
+ #include <asm/processor-flags.h>
++#include <asm/msr.h>
++#include <asm/nospec-branch.h>
+ 
+ /*
+ 
+@@ -146,27 +148,19 @@ For 32-bit we have the following conventions - kernel is built with
+ 
+ .endm
+ 
+-.macro POP_REGS pop_rdi=1 skip_r11rcx=0
++.macro POP_REGS pop_rdi=1
+ 	popq %r15
+ 	popq %r14
+ 	popq %r13
+ 	popq %r12
+ 	popq %rbp
+ 	popq %rbx
+-	.if \skip_r11rcx
+-	popq %rsi
+-	.else
+ 	popq %r11
+-	.endif
+ 	popq %r10
+ 	popq %r9
+ 	popq %r8
+ 	popq %rax
+-	.if \skip_r11rcx
+-	popq %rsi
+-	.else
+ 	popq %rcx
+-	.endif
+ 	popq %rdx
+ 	popq %rsi
+ 	.if \pop_rdi
+@@ -316,6 +310,66 @@ For 32-bit we have the following conventions - kernel is built with
+ 
+ #endif
+ 
++/*
++ * IBRS kernel mitigation for Spectre_v2.
++ *
++ * Assumes full context is established (PUSH_REGS, CR3 and GS) and it clobbers
++ * the regs it uses (AX, CX, DX). Must be called before the first RET
++ * instruction (NOTE! UNTRAIN_RET includes a RET instruction)
++ *
++ * The optional argument is used to save/restore the current value,
++ * which is used on the paranoid paths.
++ *
++ * Assumes x86_spec_ctrl_{base,current} to have SPEC_CTRL_IBRS set.
++ */
++.macro IBRS_ENTER save_reg
++#ifdef CONFIG_CPU_IBRS_ENTRY
++	ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS
++	movl	$MSR_IA32_SPEC_CTRL, %ecx
++
++.ifnb \save_reg
++	rdmsr
++	shl	$32, %rdx
++	or	%rdx, %rax
++	mov	%rax, \save_reg
++	test	$SPEC_CTRL_IBRS, %eax
++	jz	.Ldo_wrmsr_\@
++	lfence
++	jmp	.Lend_\@
++.Ldo_wrmsr_\@:
++.endif
++
++	movq	PER_CPU_VAR(x86_spec_ctrl_current), %rdx
++	movl	%edx, %eax
++	shr	$32, %rdx
++	wrmsr
++.Lend_\@:
++#endif
++.endm
++
++/*
++ * Similar to IBRS_ENTER, requires KERNEL GS,CR3 and clobbers (AX, CX, DX)
++ * regs. Must be called after the last RET.
++ */
++.macro IBRS_EXIT save_reg
++#ifdef CONFIG_CPU_IBRS_ENTRY
++	ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS
++	movl	$MSR_IA32_SPEC_CTRL, %ecx
++
++.ifnb \save_reg
++	mov	\save_reg, %rdx
++.else
++	movq	PER_CPU_VAR(x86_spec_ctrl_current), %rdx
++	andl	$(~SPEC_CTRL_IBRS), %edx
++.endif
++
++	movl	%edx, %eax
++	shr	$32, %rdx
++	wrmsr
++.Lend_\@:
++#endif
++.endm
++
+ /*
+  * Mitigate Spectre v1 for conditional swapgs code paths.
+  *
+diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
+new file mode 100644
+index 0000000000000..bfb7bcb362bcf
+--- /dev/null
++++ b/arch/x86/entry/entry.S
+@@ -0,0 +1,22 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Common place for both 32- and 64-bit entry routines.
++ */
++
++#include <linux/linkage.h>
++#include <asm/export.h>
++#include <asm/msr-index.h>
++
++.pushsection .noinstr.text, "ax"
++
++SYM_FUNC_START(entry_ibpb)
++	movl	$MSR_IA32_PRED_CMD, %ecx
++	movl	$PRED_CMD_IBPB, %eax
++	xorl	%edx, %edx
++	wrmsr
++	RET
++SYM_FUNC_END(entry_ibpb)
++/* For KVM */
++EXPORT_SYMBOL_GPL(entry_ibpb);
++
++.popsection
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index df8c017e61611..8fcd6a42b3a18 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -40,7 +40,7 @@
+ #include <asm/processor-flags.h>
+ #include <asm/irq_vectors.h>
+ #include <asm/cpufeatures.h>
+-#include <asm/alternative-asm.h>
++#include <asm/alternative.h>
+ #include <asm/asm.h>
+ #include <asm/smap.h>
+ #include <asm/frame.h>
+@@ -782,7 +782,6 @@ SYM_CODE_START(__switch_to_asm)
+ 	movl	%ebx, PER_CPU_VAR(stack_canary)+stack_canary_offset
+ #endif
+ 
+-#ifdef CONFIG_RETPOLINE
+ 	/*
+ 	 * When switching from a shallower to a deeper call stack
+ 	 * the RSB may either underflow or use entries populated
+@@ -791,7 +790,6 @@ SYM_CODE_START(__switch_to_asm)
+ 	 * speculative execution to prevent attack.
+ 	 */
+ 	FILL_RETURN_BUFFER %ebx, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW
+-#endif
+ 
+ 	/* Restore flags or the incoming task to restore AC state. */
+ 	popfl
+@@ -821,7 +819,7 @@ SYM_FUNC_START(schedule_tail_wrapper)
+ 	popl	%eax
+ 
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(schedule_tail_wrapper)
+ .popsection
+ 
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 2f2d52729e176..559c82b834757 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -93,7 +93,7 @@ SYM_CODE_END(native_usergs_sysret64)
+  */
+ 
+ SYM_CODE_START(entry_SYSCALL_64)
+-	UNWIND_HINT_EMPTY
++	UNWIND_HINT_ENTRY
+ 
+ 	swapgs
+ 	/* tss.sp2 is scratch space. */
+@@ -117,6 +117,11 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
+ 	/* IRQs are off. */
+ 	movq	%rax, %rdi
+ 	movq	%rsp, %rsi
++
++	/* clobbers %rax, make sure it is after saving the syscall nr */
++	IBRS_ENTER
++	UNTRAIN_RET
++
+ 	call	do_syscall_64		/* returns with IRQs disabled */
+ 
+ 	/*
+@@ -191,8 +196,8 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
+ 	 * perf profiles. Nothing jumps here.
+ 	 */
+ syscall_return_via_sysret:
+-	/* rcx and r11 are already restored (see code above) */
+-	POP_REGS pop_rdi=0 skip_r11rcx=1
++	IBRS_EXIT
++	POP_REGS pop_rdi=0
+ 
+ 	/*
+ 	 * Now all regs are restored except RSP and RDI.
+@@ -244,7 +249,6 @@ SYM_FUNC_START(__switch_to_asm)
+ 	movq	%rbx, PER_CPU_VAR(fixed_percpu_data) + stack_canary_offset
+ #endif
+ 
+-#ifdef CONFIG_RETPOLINE
+ 	/*
+ 	 * When switching from a shallower to a deeper call stack
+ 	 * the RSB may either underflow or use entries populated
+@@ -253,7 +257,6 @@ SYM_FUNC_START(__switch_to_asm)
+ 	 * speculative execution to prevent attack.
+ 	 */
+ 	FILL_RETURN_BUFFER %r12, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW
+-#endif
+ 
+ 	/* restore callee-saved registers */
+ 	popq	%r15
+@@ -569,6 +572,7 @@ __irqentry_text_end:
+ 
+ SYM_CODE_START_LOCAL(common_interrupt_return)
+ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
++	IBRS_EXIT
+ #ifdef CONFIG_DEBUG_ENTRY
+ 	/* Assert that pt_regs indicates user mode. */
+ 	testb	$3, CS(%rsp)
+@@ -676,6 +680,7 @@ native_irq_return_ldt:
+ 	pushq	%rdi				/* Stash user RDI */
+ 	swapgs					/* to kernel GS */
+ 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi	/* to kernel CR3 */
++	UNTRAIN_RET
+ 
+ 	movq	PER_CPU_VAR(espfix_waddr), %rdi
+ 	movq	%rax, (0*8)(%rdi)		/* user RAX */
+@@ -740,7 +745,7 @@ SYM_FUNC_START(asm_load_gs_index)
+ 2:	ALTERNATIVE "", "mfence", X86_BUG_SWAPGS_FENCE
+ 	swapgs
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(asm_load_gs_index)
+ EXPORT_SYMBOL(asm_load_gs_index)
+ 
+@@ -799,7 +804,7 @@ SYM_INNER_LABEL(asm_call_irq_on_stack, SYM_L_GLOBAL)
+ 
+ 	/* Restore the previous stack pointer from RBP. */
+ 	leaveq
+-	ret
++	RET
+ SYM_FUNC_END(asm_call_on_stack)
+ 
+ #ifdef CONFIG_XEN_PV
+@@ -888,6 +893,9 @@ SYM_CODE_END(xen_failsafe_callback)
+  *              1 -> no SWAPGS on exit
+  *
+  *     Y        GSBASE value at entry, must be restored in paranoid_exit
++ *
++ * R14 - old CR3
++ * R15 - old SPEC_CTRL
+  */
+ SYM_CODE_START_LOCAL(paranoid_entry)
+ 	UNWIND_HINT_FUNC
+@@ -932,7 +940,7 @@ SYM_CODE_START_LOCAL(paranoid_entry)
+ 	 * is needed here.
+ 	 */
+ 	SAVE_AND_SET_GSBASE scratch_reg=%rax save_reg=%rbx
+-	ret
++	jmp .Lparanoid_gsbase_done
+ 
+ .Lparanoid_entry_checkgs:
+ 	/* EBX = 1 -> kernel GSBASE active, no restore required */
+@@ -951,9 +959,17 @@ SYM_CODE_START_LOCAL(paranoid_entry)
+ 	xorl	%ebx, %ebx
+ 	swapgs
+ .Lparanoid_kernel_gsbase:
+-
+ 	FENCE_SWAPGS_KERNEL_ENTRY
+-	ret
++.Lparanoid_gsbase_done:
++
++	/*
++	 * Once we have CR3 and %GS setup save and set SPEC_CTRL. Just like
++	 * CR3 above, keep the old value in a callee saved register.
++	 */
++	IBRS_ENTER save_reg=%r15
++	UNTRAIN_RET
++
++	RET
+ SYM_CODE_END(paranoid_entry)
+ 
+ /*
+@@ -974,9 +990,19 @@ SYM_CODE_END(paranoid_entry)
+  *              1 -> no SWAPGS on exit
+  *
+  *     Y        User space GSBASE, must be restored unconditionally
++ *
++ * R14 - old CR3
++ * R15 - old SPEC_CTRL
+  */
+ SYM_CODE_START_LOCAL(paranoid_exit)
+ 	UNWIND_HINT_REGS
++
++	/*
++	 * Must restore IBRS state before both CR3 and %GS since we need access
++	 * to the per-CPU x86_spec_ctrl_shadow variable.
++	 */
++	IBRS_EXIT save_reg=%r15
++
+ 	/*
+ 	 * The order of operations is important. RESTORE_CR3 requires
+ 	 * kernel GSBASE.
+@@ -1023,8 +1049,11 @@ SYM_CODE_START_LOCAL(error_entry)
+ 	FENCE_SWAPGS_USER_ENTRY
+ 	/* We have user CR3.  Change to kernel CR3. */
+ 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
++	IBRS_ENTER
++	UNTRAIN_RET
+ 
+ .Lerror_entry_from_usermode_after_swapgs:
++
+ 	/* Put us onto the real thread stack. */
+ 	popq	%r12				/* save return addr in %12 */
+ 	movq	%rsp, %rdi			/* arg0 = pt_regs pointer */
+@@ -1032,7 +1061,7 @@ SYM_CODE_START_LOCAL(error_entry)
+ 	movq	%rax, %rsp			/* switch stack */
+ 	ENCODE_FRAME_POINTER
+ 	pushq	%r12
+-	ret
++	RET
+ 
+ 	/*
+ 	 * There are two places in the kernel that can potentially fault with
+@@ -1063,7 +1092,8 @@ SYM_CODE_START_LOCAL(error_entry)
+ 	 */
+ .Lerror_entry_done_lfence:
+ 	FENCE_SWAPGS_KERNEL_ENTRY
+-	ret
++	ANNOTATE_UNRET_END
++	RET
+ 
+ .Lbstep_iret:
+ 	/* Fix truncated RIP */
+@@ -1078,6 +1108,8 @@ SYM_CODE_START_LOCAL(error_entry)
+ 	SWAPGS
+ 	FENCE_SWAPGS_USER_ENTRY
+ 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
++	IBRS_ENTER
++	UNTRAIN_RET
+ 
+ 	/*
+ 	 * Pretend that the exception came from user mode: set up pt_regs
+@@ -1182,6 +1214,9 @@ SYM_CODE_START(asm_exc_nmi)
+ 	PUSH_AND_CLEAR_REGS rdx=(%rdx)
+ 	ENCODE_FRAME_POINTER
+ 
++	IBRS_ENTER
++	UNTRAIN_RET
++
+ 	/*
+ 	 * At this point we no longer need to worry about stack damage
+ 	 * due to nesting -- we're on the normal thread stack and we're
+@@ -1404,6 +1439,9 @@ end_repeat_nmi:
+ 	movq	$-1, %rsi
+ 	call	exc_nmi
+ 
++	/* Always restore stashed SPEC_CTRL value (see paranoid_entry) */
++	IBRS_EXIT save_reg=%r15
++
+ 	/* Always restore stashed CR3 value (see paranoid_entry) */
+ 	RESTORE_CR3 scratch_reg=%r15 save_reg=%r14
+ 
+diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
+index 0051cf5c792d1..4d637a965efbe 100644
+--- a/arch/x86/entry/entry_64_compat.S
++++ b/arch/x86/entry/entry_64_compat.S
+@@ -4,7 +4,6 @@
+  *
+  * Copyright 2000-2002 Andi Kleen, SuSE Labs.
+  */
+-#include "calling.h"
+ #include <asm/asm-offsets.h>
+ #include <asm/current.h>
+ #include <asm/errno.h>
+@@ -14,9 +13,12 @@
+ #include <asm/irqflags.h>
+ #include <asm/asm.h>
+ #include <asm/smap.h>
++#include <asm/nospec-branch.h>
+ #include <linux/linkage.h>
+ #include <linux/err.h>
+ 
++#include "calling.h"
++
+ 	.section .entry.text, "ax"
+ 
+ /*
+@@ -47,7 +49,7 @@
+  * 0(%ebp) arg6
+  */
+ SYM_CODE_START(entry_SYSENTER_compat)
+-	UNWIND_HINT_EMPTY
++	UNWIND_HINT_ENTRY
+ 	/* Interrupts are off on entry. */
+ 	SWAPGS
+ 
+@@ -112,6 +114,9 @@ SYM_INNER_LABEL(entry_SYSENTER_compat_after_hwframe, SYM_L_GLOBAL)
+ 
+ 	cld
+ 
++	IBRS_ENTER
++	UNTRAIN_RET
++
+ 	/*
+ 	 * SYSENTER doesn't filter flags, so we need to clear NT and AC
+ 	 * ourselves.  To save a few cycles, we can check whether
+@@ -197,7 +202,7 @@ SYM_CODE_END(entry_SYSENTER_compat)
+  * 0(%esp) arg6
+  */
+ SYM_CODE_START(entry_SYSCALL_compat)
+-	UNWIND_HINT_EMPTY
++	UNWIND_HINT_ENTRY
+ 	/* Interrupts are off on entry. */
+ 	swapgs
+ 
+@@ -252,6 +257,9 @@ SYM_INNER_LABEL(entry_SYSCALL_compat_after_hwframe, SYM_L_GLOBAL)
+ 
+ 	UNWIND_HINT_REGS
+ 
++	IBRS_ENTER
++	UNTRAIN_RET
++
+ 	movq	%rsp, %rdi
+ 	call	do_fast_syscall_32
+ 	/* XEN PV guests always use IRET path */
+@@ -266,6 +274,8 @@ sysret32_from_system_call:
+ 	 */
+ 	STACKLEAK_ERASE
+ 
++	IBRS_EXIT
++
+ 	movq	RBX(%rsp), %rbx		/* pt_regs->rbx */
+ 	movq	RBP(%rsp), %rbp		/* pt_regs->rbp */
+ 	movq	EFLAGS(%rsp), %r11	/* pt_regs->flags (in r11) */
+@@ -339,7 +349,7 @@ SYM_CODE_END(entry_SYSCALL_compat)
+  * ebp  arg6
+  */
+ SYM_CODE_START(entry_INT80_compat)
+-	UNWIND_HINT_EMPTY
++	UNWIND_HINT_ENTRY
+ 	/*
+ 	 * Interrupts are off on entry.
+ 	 */
+@@ -409,6 +419,9 @@ SYM_CODE_START(entry_INT80_compat)
+ 
+ 	cld
+ 
++	IBRS_ENTER
++	UNTRAIN_RET
++
+ 	movq	%rsp, %rdi
+ 	call	do_int80_syscall_32
+ 	jmp	swapgs_restore_regs_and_return_to_usermode
+diff --git a/arch/x86/entry/thunk_32.S b/arch/x86/entry/thunk_32.S
+index f1f96d4d8cd60..7591bab060f70 100644
+--- a/arch/x86/entry/thunk_32.S
++++ b/arch/x86/entry/thunk_32.S
+@@ -24,7 +24,7 @@ SYM_CODE_START_NOALIGN(\name)
+ 	popl %edx
+ 	popl %ecx
+ 	popl %eax
+-	ret
++	RET
+ 	_ASM_NOKPROBE(\name)
+ SYM_CODE_END(\name)
+ 	.endm
+diff --git a/arch/x86/entry/thunk_64.S b/arch/x86/entry/thunk_64.S
+index c9a9fbf1655f3..1b5044ad8cd0d 100644
+--- a/arch/x86/entry/thunk_64.S
++++ b/arch/x86/entry/thunk_64.S
+@@ -55,7 +55,7 @@ SYM_CODE_START_LOCAL_NOALIGN(__thunk_restore)
+ 	popq %rsi
+ 	popq %rdi
+ 	popq %rbp
+-	ret
++	RET
+ 	_ASM_NOKPROBE(__thunk_restore)
+ SYM_CODE_END(__thunk_restore)
+ #endif
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index 21243747965d3..f181220f1b5dc 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -91,6 +91,7 @@ endif
+ endif
+ 
+ $(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
++$(vobjs): KBUILD_AFLAGS += -DBUILD_VDSO
+ 
+ #
+ # vDSO code runs in userspace and -pg doesn't help with profiling anyway.
+diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S
+index de1fff7188aad..c92a8fef2ed2c 100644
+--- a/arch/x86/entry/vdso/vdso32/system_call.S
++++ b/arch/x86/entry/vdso/vdso32/system_call.S
+@@ -6,7 +6,7 @@
+ #include <linux/linkage.h>
+ #include <asm/dwarf2.h>
+ #include <asm/cpufeatures.h>
+-#include <asm/alternative-asm.h>
++#include <asm/alternative.h>
+ 
+ 	.text
+ 	.globl __kernel_vsyscall
+@@ -78,7 +78,7 @@ SYM_INNER_LABEL(int80_landing_pad, SYM_L_GLOBAL)
+ 	popl	%ecx
+ 	CFI_RESTORE		ecx
+ 	CFI_ADJUST_CFA_OFFSET	-4
+-	ret
++	RET
+ 	CFI_ENDPROC
+ 
+ 	.size __kernel_vsyscall,.-__kernel_vsyscall
+diff --git a/arch/x86/entry/vsyscall/vsyscall_emu_64.S b/arch/x86/entry/vsyscall/vsyscall_emu_64.S
+index 2e203f3a25a7b..ef2dd18272431 100644
+--- a/arch/x86/entry/vsyscall/vsyscall_emu_64.S
++++ b/arch/x86/entry/vsyscall/vsyscall_emu_64.S
+@@ -20,16 +20,19 @@ __vsyscall_page:
+ 	mov $__NR_gettimeofday, %rax
+ 	syscall
+ 	ret
++	int3
+ 
+ 	.balign 1024, 0xcc
+ 	mov $__NR_time, %rax
+ 	syscall
+ 	ret
++	int3
+ 
+ 	.balign 1024, 0xcc
+ 	mov $__NR_getcpu, %rax
+ 	syscall
+ 	ret
++	int3
+ 
+ 	.balign 4096, 0xcc
+ 
+diff --git a/arch/x86/include/asm/GEN-for-each-reg.h b/arch/x86/include/asm/GEN-for-each-reg.h
+index 1b07fb102c4ed..07949102a08d0 100644
+--- a/arch/x86/include/asm/GEN-for-each-reg.h
++++ b/arch/x86/include/asm/GEN-for-each-reg.h
+@@ -1,11 +1,16 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * These are in machine order; things rely on that.
++ */
+ #ifdef CONFIG_64BIT
+ GEN(rax)
+-GEN(rbx)
+ GEN(rcx)
+ GEN(rdx)
++GEN(rbx)
++GEN(rsp)
++GEN(rbp)
+ GEN(rsi)
+ GEN(rdi)
+-GEN(rbp)
+ GEN(r8)
+ GEN(r9)
+ GEN(r10)
+@@ -16,10 +21,11 @@ GEN(r14)
+ GEN(r15)
+ #else
+ GEN(eax)
+-GEN(ebx)
+ GEN(ecx)
+ GEN(edx)
++GEN(ebx)
++GEN(esp)
++GEN(ebp)
+ GEN(esi)
+ GEN(edi)
+-GEN(ebp)
+ #endif
+diff --git a/arch/x86/include/asm/alternative-asm.h b/arch/x86/include/asm/alternative-asm.h
+deleted file mode 100644
+index 464034db299f7..0000000000000
+--- a/arch/x86/include/asm/alternative-asm.h
++++ /dev/null
+@@ -1,114 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _ASM_X86_ALTERNATIVE_ASM_H
+-#define _ASM_X86_ALTERNATIVE_ASM_H
+-
+-#ifdef __ASSEMBLY__
+-
+-#include <asm/asm.h>
+-
+-#ifdef CONFIG_SMP
+-	.macro LOCK_PREFIX
+-672:	lock
+-	.pushsection .smp_locks,"a"
+-	.balign 4
+-	.long 672b - .
+-	.popsection
+-	.endm
+-#else
+-	.macro LOCK_PREFIX
+-	.endm
+-#endif
+-
+-/*
+- * objtool annotation to ignore the alternatives and only consider the original
+- * instruction(s).
+- */
+-.macro ANNOTATE_IGNORE_ALTERNATIVE
+-	.Lannotate_\@:
+-	.pushsection .discard.ignore_alts
+-	.long .Lannotate_\@ - .
+-	.popsection
+-.endm
+-
+-/*
+- * Issue one struct alt_instr descriptor entry (need to put it into
+- * the section .altinstructions, see below). This entry contains
+- * enough information for the alternatives patching code to patch an
+- * instruction. See apply_alternatives().
+- */
+-.macro altinstruction_entry orig alt feature orig_len alt_len pad_len
+-	.long \orig - .
+-	.long \alt - .
+-	.word \feature
+-	.byte \orig_len
+-	.byte \alt_len
+-	.byte \pad_len
+-.endm
+-
+-/*
+- * Define an alternative between two instructions. If @feature is
+- * present, early code in apply_alternatives() replaces @oldinstr with
+- * @newinstr. ".skip" directive takes care of proper instruction padding
+- * in case @newinstr is longer than @oldinstr.
+- */
+-.macro ALTERNATIVE oldinstr, newinstr, feature
+-140:
+-	\oldinstr
+-141:
+-	.skip -(((144f-143f)-(141b-140b)) > 0) * ((144f-143f)-(141b-140b)),0x90
+-142:
+-
+-	.pushsection .altinstructions,"a"
+-	altinstruction_entry 140b,143f,\feature,142b-140b,144f-143f,142b-141b
+-	.popsection
+-
+-	.pushsection .altinstr_replacement,"ax"
+-143:
+-	\newinstr
+-144:
+-	.popsection
+-.endm
+-
+-#define old_len			141b-140b
+-#define new_len1		144f-143f
+-#define new_len2		145f-144f
+-
+-/*
+- * gas compatible max based on the idea from:
+- * http://graphics.stanford.edu/~seander/bithacks.html#IntegerMinOrMax
+- *
+- * The additional "-" is needed because gas uses a "true" value of -1.
+- */
+-#define alt_max_short(a, b)	((a) ^ (((a) ^ (b)) & -(-((a) < (b)))))
+-
+-
+-/*
+- * Same as ALTERNATIVE macro above but for two alternatives. If CPU
+- * has @feature1, it replaces @oldinstr with @newinstr1. If CPU has
+- * @feature2, it replaces @oldinstr with @feature2.
+- */
+-.macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2
+-140:
+-	\oldinstr
+-141:
+-	.skip -((alt_max_short(new_len1, new_len2) - (old_len)) > 0) * \
+-		(alt_max_short(new_len1, new_len2) - (old_len)),0x90
+-142:
+-
+-	.pushsection .altinstructions,"a"
+-	altinstruction_entry 140b,143f,\feature1,142b-140b,144f-143f,142b-141b
+-	altinstruction_entry 140b,144f,\feature2,142b-140b,145f-144f,142b-141b
+-	.popsection
+-
+-	.pushsection .altinstr_replacement,"ax"
+-143:
+-	\newinstr1
+-144:
+-	\newinstr2
+-145:
+-	.popsection
+-.endm
+-
+-#endif  /*  __ASSEMBLY__  */
+-
+-#endif /* _ASM_X86_ALTERNATIVE_ASM_H */
+diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
+index 13adca37c99a3..0e777b27972be 100644
+--- a/arch/x86/include/asm/alternative.h
++++ b/arch/x86/include/asm/alternative.h
+@@ -2,13 +2,17 @@
+ #ifndef _ASM_X86_ALTERNATIVE_H
+ #define _ASM_X86_ALTERNATIVE_H
+ 
+-#ifndef __ASSEMBLY__
+-
+ #include <linux/types.h>
+-#include <linux/stddef.h>
+ #include <linux/stringify.h>
+ #include <asm/asm.h>
+ 
++#define ALTINSTR_FLAG_INV	(1 << 15)
++#define ALT_NOT(feat)		((feat) | ALTINSTR_FLAG_INV)
++
++#ifndef __ASSEMBLY__
++
++#include <linux/stddef.h>
++
+ /*
+  * Alternative inline assembly for SMP.
+  *
+@@ -61,7 +65,6 @@ struct alt_instr {
+ 	u16 cpuid;		/* cpuid bit set for replacement */
+ 	u8  instrlen;		/* length of original instruction */
+ 	u8  replacementlen;	/* length of new instruction */
+-	u8  padlen;		/* length of build-time padding */
+ } __packed;
+ 
+ /*
+@@ -72,6 +75,8 @@ extern int alternatives_patched;
+ 
+ extern void alternative_instructions(void);
+ extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end);
++extern void apply_retpolines(s32 *start, s32 *end);
++extern void apply_returns(s32 *start, s32 *end);
+ 
+ struct module;
+ 
+@@ -100,7 +105,6 @@ static inline int alternatives_text_reserved(void *start, void *end)
+ 
+ #define alt_end_marker		"663"
+ #define alt_slen		"662b-661b"
+-#define alt_pad_len		alt_end_marker"b-662b"
+ #define alt_total_slen		alt_end_marker"b-661b"
+ #define alt_rlen(num)		e_replacement(num)"f-"b_replacement(num)"f"
+ 
+@@ -147,8 +151,7 @@ static inline int alternatives_text_reserved(void *start, void *end)
+ 	" .long " b_replacement(num)"f - .\n"		/* new instruction */ \
+ 	" .word " __stringify(feature) "\n"		/* feature bit     */ \
+ 	" .byte " alt_total_slen "\n"			/* source len      */ \
+-	" .byte " alt_rlen(num) "\n"			/* replacement len */ \
+-	" .byte " alt_pad_len "\n"			/* pad len */
++	" .byte " alt_rlen(num) "\n"			/* replacement len */
+ 
+ #define ALTINSTR_REPLACEMENT(newinstr, feature, num)	/* replacement */	\
+ 	"# ALT: replacement " #num "\n"						\
+@@ -175,6 +178,11 @@ static inline int alternatives_text_reserved(void *start, void *end)
+ 	ALTINSTR_REPLACEMENT(newinstr2, feature2, 2)			\
+ 	".popsection\n"
+ 
++/* If @feature is set, patch in @newinstr_yes, otherwise @newinstr_no. */
++#define ALTERNATIVE_TERNARY(oldinstr, feature, newinstr_yes, newinstr_no) \
++	ALTERNATIVE_2(oldinstr, newinstr_no, X86_FEATURE_ALWAYS,	\
++		      newinstr_yes, feature)
++
+ #define ALTERNATIVE_3(oldinsn, newinsn1, feat1, newinsn2, feat2, newinsn3, feat3) \
+ 	OLDINSTR_3(oldinsn, 1, 2, 3)						\
+ 	".pushsection .altinstructions,\"a\"\n"					\
+@@ -206,15 +214,15 @@ static inline int alternatives_text_reserved(void *start, void *end)
+ #define alternative_2(oldinstr, newinstr1, feature1, newinstr2, feature2) \
+ 	asm_inline volatile(ALTERNATIVE_2(oldinstr, newinstr1, feature1, newinstr2, feature2) ::: "memory")
+ 
++#define alternative_ternary(oldinstr, feature, newinstr_yes, newinstr_no) \
++	asm_inline volatile(ALTERNATIVE_TERNARY(oldinstr, feature, newinstr_yes, newinstr_no) ::: "memory")
++
+ /*
+  * Alternative inline assembly with input.
+  *
+  * Peculiarities:
+  * No memory clobber here.
+  * Argument numbers start with 1.
+- * Best is to use constraints that are fixed size (like (%1) ... "r")
+- * If you use variable sized constraints like "m" or "g" in the
+- * replacement make sure to pad to the worst case length.
+  * Leaving an unused argument 0 to keep API compatibility.
+  */
+ #define alternative_input(oldinstr, newinstr, feature, input...)	\
+@@ -271,6 +279,115 @@ static inline int alternatives_text_reserved(void *start, void *end)
+  */
+ #define ASM_NO_INPUT_CLOBBER(clbr...) "i" (0) : clbr
+ 
++#else /* __ASSEMBLY__ */
++
++#ifdef CONFIG_SMP
++	.macro LOCK_PREFIX
++672:	lock
++	.pushsection .smp_locks,"a"
++	.balign 4
++	.long 672b - .
++	.popsection
++	.endm
++#else
++	.macro LOCK_PREFIX
++	.endm
++#endif
++
++/*
++ * objtool annotation to ignore the alternatives and only consider the original
++ * instruction(s).
++ */
++.macro ANNOTATE_IGNORE_ALTERNATIVE
++	.Lannotate_\@:
++	.pushsection .discard.ignore_alts
++	.long .Lannotate_\@ - .
++	.popsection
++.endm
++
++/*
++ * Issue one struct alt_instr descriptor entry (need to put it into
++ * the section .altinstructions, see below). This entry contains
++ * enough information for the alternatives patching code to patch an
++ * instruction. See apply_alternatives().
++ */
++.macro altinstruction_entry orig alt feature orig_len alt_len
++	.long \orig - .
++	.long \alt - .
++	.word \feature
++	.byte \orig_len
++	.byte \alt_len
++.endm
++
++/*
++ * Define an alternative between two instructions. If @feature is
++ * present, early code in apply_alternatives() replaces @oldinstr with
++ * @newinstr. ".skip" directive takes care of proper instruction padding
++ * in case @newinstr is longer than @oldinstr.
++ */
++.macro ALTERNATIVE oldinstr, newinstr, feature
++140:
++	\oldinstr
++141:
++	.skip -(((144f-143f)-(141b-140b)) > 0) * ((144f-143f)-(141b-140b)),0x90
++142:
++
++	.pushsection .altinstructions,"a"
++	altinstruction_entry 140b,143f,\feature,142b-140b,144f-143f
++	.popsection
++
++	.pushsection .altinstr_replacement,"ax"
++143:
++	\newinstr
++144:
++	.popsection
++.endm
++
++#define old_len			141b-140b
++#define new_len1		144f-143f
++#define new_len2		145f-144f
++
++/*
++ * gas compatible max based on the idea from:
++ * http://graphics.stanford.edu/~seander/bithacks.html#IntegerMinOrMax
++ *
++ * The additional "-" is needed because gas uses a "true" value of -1.
++ */
++#define alt_max_short(a, b)	((a) ^ (((a) ^ (b)) & -(-((a) < (b)))))
++
++
++/*
++ * Same as ALTERNATIVE macro above but for two alternatives. If CPU
++ * has @feature1, it replaces @oldinstr with @newinstr1. If CPU has
++ * @feature2, it replaces @oldinstr with @feature2.
++ */
++.macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2
++140:
++	\oldinstr
++141:
++	.skip -((alt_max_short(new_len1, new_len2) - (old_len)) > 0) * \
++		(alt_max_short(new_len1, new_len2) - (old_len)),0x90
++142:
++
++	.pushsection .altinstructions,"a"
++	altinstruction_entry 140b,143f,\feature1,142b-140b,144f-143f
++	altinstruction_entry 140b,144f,\feature2,142b-140b,145f-144f
++	.popsection
++
++	.pushsection .altinstr_replacement,"ax"
++143:
++	\newinstr1
++144:
++	\newinstr2
++145:
++	.popsection
++.endm
++
++/* If @feature is set, patch in @newinstr_yes, otherwise @newinstr_no. */
++#define ALTERNATIVE_TERNARY(oldinstr, feature, newinstr_yes, newinstr_no) \
++	ALTERNATIVE_2 oldinstr, newinstr_no, X86_FEATURE_ALWAYS,	\
++	newinstr_yes, feature
++
+ #endif /* __ASSEMBLY__ */
+ 
+ #endif /* _ASM_X86_ALTERNATIVE_H */
+diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h
+index 51e2bf27cc9b0..8f80de627c60a 100644
+--- a/arch/x86/include/asm/asm-prototypes.h
++++ b/arch/x86/include/asm/asm-prototypes.h
+@@ -17,20 +17,3 @@
+ extern void cmpxchg8b_emu(void);
+ #endif
+ 
+-#ifdef CONFIG_RETPOLINE
+-
+-#define DECL_INDIRECT_THUNK(reg) \
+-	extern asmlinkage void __x86_indirect_thunk_ ## reg (void);
+-
+-#define DECL_RETPOLINE(reg) \
+-	extern asmlinkage void __x86_retpoline_ ## reg (void);
+-
+-#undef GEN
+-#define GEN(reg) DECL_INDIRECT_THUNK(reg)
+-#include <asm/GEN-for-each-reg.h>
+-
+-#undef GEN
+-#define GEN(reg) DECL_RETPOLINE(reg)
+-#include <asm/GEN-for-each-reg.h>
+-
+-#endif /* CONFIG_RETPOLINE */
+diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
+index 619c1f80a2abe..f4cbc01c0bc46 100644
+--- a/arch/x86/include/asm/cpufeature.h
++++ b/arch/x86/include/asm/cpufeature.h
+@@ -8,6 +8,7 @@
+ 
+ #include <asm/asm.h>
+ #include <linux/bitops.h>
++#include <asm/alternative.h>
+ 
+ enum cpuid_leafs
+ {
+@@ -172,39 +173,15 @@ extern void clear_cpu_cap(struct cpuinfo_x86 *c, unsigned int bit);
+  */
+ static __always_inline bool _static_cpu_has(u16 bit)
+ {
+-	asm_volatile_goto("1: jmp 6f\n"
+-		 "2:\n"
+-		 ".skip -(((5f-4f) - (2b-1b)) > 0) * "
+-			 "((5f-4f) - (2b-1b)),0x90\n"
+-		 "3:\n"
+-		 ".section .altinstructions,\"a\"\n"
+-		 " .long 1b - .\n"		/* src offset */
+-		 " .long 4f - .\n"		/* repl offset */
+-		 " .word %P[always]\n"		/* always replace */
+-		 " .byte 3b - 1b\n"		/* src len */
+-		 " .byte 5f - 4f\n"		/* repl len */
+-		 " .byte 3b - 2b\n"		/* pad len */
+-		 ".previous\n"
+-		 ".section .altinstr_replacement,\"ax\"\n"
+-		 "4: jmp %l[t_no]\n"
+-		 "5:\n"
+-		 ".previous\n"
+-		 ".section .altinstructions,\"a\"\n"
+-		 " .long 1b - .\n"		/* src offset */
+-		 " .long 0\n"			/* no replacement */
+-		 " .word %P[feature]\n"		/* feature bit */
+-		 " .byte 3b - 1b\n"		/* src len */
+-		 " .byte 0\n"			/* repl len */
+-		 " .byte 0\n"			/* pad len */
+-		 ".previous\n"
+-		 ".section .altinstr_aux,\"ax\"\n"
+-		 "6:\n"
+-		 " testb %[bitnum],%[cap_byte]\n"
+-		 " jnz %l[t_yes]\n"
+-		 " jmp %l[t_no]\n"
+-		 ".previous\n"
++	asm_volatile_goto(
++		ALTERNATIVE_TERNARY("jmp 6f", %P[feature], "", "jmp %l[t_no]")
++		".section .altinstr_aux,\"ax\"\n"
++		"6:\n"
++		" testb %[bitnum],%[cap_byte]\n"
++		" jnz %l[t_yes]\n"
++		" jmp %l[t_no]\n"
++		".previous\n"
+ 		 : : [feature]  "i" (bit),
+-		     [always]   "i" (X86_FEATURE_ALWAYS),
+ 		     [bitnum]   "i" (1 << (bit & 7)),
+ 		     [cap_byte] "m" (((const char *)boot_cpu_data.x86_capability)[bit >> 3])
+ 		 : : t_yes, t_no);
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index f6a6ac0b3bcd4..e077f4fe7c6d4 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -203,8 +203,8 @@
+ #define X86_FEATURE_PROC_FEEDBACK	( 7*32+ 9) /* AMD ProcFeedbackInterface */
+ #define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
+ #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
+-#define X86_FEATURE_RETPOLINE		( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
+-#define X86_FEATURE_RETPOLINE_LFENCE	( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */
++#define X86_FEATURE_KERNEL_IBRS		( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */
++#define X86_FEATURE_RSB_VMEXIT		( 7*32+13) /* "" Fill RSB on VM-Exit */
+ #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
+ #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
+ #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
+@@ -290,6 +290,14 @@
+ #define X86_FEATURE_FENCE_SWAPGS_KERNEL	(11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */
+ #define X86_FEATURE_SPLIT_LOCK_DETECT	(11*32+ 6) /* #AC for split lock */
+ #define X86_FEATURE_PER_THREAD_MBA	(11*32+ 7) /* "" Per-thread Memory Bandwidth Allocation */
++/* FREE!				(11*32+ 8) */
++/* FREE!				(11*32+ 9) */
++#define X86_FEATURE_ENTRY_IBPB		(11*32+10) /* "" Issue an IBPB on kernel entry */
++#define X86_FEATURE_RRSBA_CTRL		(11*32+11) /* "" RET prediction control */
++#define X86_FEATURE_RETPOLINE		(11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
++#define X86_FEATURE_RETPOLINE_LFENCE	(11*32+13) /* "" Use LFENCE for Spectre variant 2 */
++#define X86_FEATURE_RETHUNK		(11*32+14) /* "" Use REturn THUNK */
++#define X86_FEATURE_UNRET		(11*32+15) /* "" AMD BTB untrain return */
+ 
+ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
+ #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
+@@ -308,6 +316,7 @@
+ #define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
+ #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
+ #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
++#define X86_FEATURE_BTC_NO		(13*32+29) /* "" Not vulnerable to Branch Type Confusion */
+ 
+ /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
+ #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
+@@ -418,5 +427,6 @@
+ #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
+ #define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
+ #define X86_BUG_MMIO_STALE_DATA		X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
++#define X86_BUG_RETBLEED		X86_BUG(26) /* CPU is affected by RETBleed */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
+index 09db5b8f1444a..16c24915210d3 100644
+--- a/arch/x86/include/asm/disabled-features.h
++++ b/arch/x86/include/asm/disabled-features.h
+@@ -56,6 +56,25 @@
+ # define DISABLE_PTI		(1 << (X86_FEATURE_PTI & 31))
+ #endif
+ 
++#ifdef CONFIG_RETPOLINE
++# define DISABLE_RETPOLINE	0
++#else
++# define DISABLE_RETPOLINE	((1 << (X86_FEATURE_RETPOLINE & 31)) | \
++				 (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)))
++#endif
++
++#ifdef CONFIG_RETHUNK
++# define DISABLE_RETHUNK	0
++#else
++# define DISABLE_RETHUNK	(1 << (X86_FEATURE_RETHUNK & 31))
++#endif
++
++#ifdef CONFIG_CPU_UNRET_ENTRY
++# define DISABLE_UNRET		0
++#else
++# define DISABLE_UNRET		(1 << (X86_FEATURE_UNRET & 31))
++#endif
++
+ /* Force disable because it's broken beyond repair */
+ #define DISABLE_ENQCMD		(1 << (X86_FEATURE_ENQCMD & 31))
+ 
+@@ -73,7 +92,7 @@
+ #define DISABLED_MASK8	0
+ #define DISABLED_MASK9	(DISABLE_SMAP)
+ #define DISABLED_MASK10	0
+-#define DISABLED_MASK11	0
++#define DISABLED_MASK11	(DISABLE_RETPOLINE|DISABLE_RETHUNK|DISABLE_UNRET)
+ #define DISABLED_MASK12	0
+ #define DISABLED_MASK13	0
+ #define DISABLED_MASK14	0
+diff --git a/arch/x86/include/asm/inat.h b/arch/x86/include/asm/inat.h
+index 4cf2ad521f656..b56c5741581a7 100644
+--- a/arch/x86/include/asm/inat.h
++++ b/arch/x86/include/asm/inat.h
+@@ -6,7 +6,7 @@
+  *
+  * Written by Masami Hiramatsu <mhiramat@redhat.com>
+  */
+-#include <asm/inat_types.h>
++#include <asm/inat_types.h> /* __ignore_sync_check__ */
+ 
+ /*
+  * Internal bits. Don't use bitmasks directly, because these bits are
+diff --git a/arch/x86/include/asm/insn-eval.h b/arch/x86/include/asm/insn-eval.h
+index c1438f9239e46..594ad2f57d232 100644
+--- a/arch/x86/include/asm/insn-eval.h
++++ b/arch/x86/include/asm/insn-eval.h
+@@ -26,7 +26,7 @@ int insn_fetch_from_user(struct pt_regs *regs,
+ 			 unsigned char buf[MAX_INSN_SIZE]);
+ int insn_fetch_from_user_inatomic(struct pt_regs *regs,
+ 				  unsigned char buf[MAX_INSN_SIZE]);
+-bool insn_decode(struct insn *insn, struct pt_regs *regs,
+-		 unsigned char buf[MAX_INSN_SIZE], int buf_size);
++bool insn_decode_from_regs(struct insn *insn, struct pt_regs *regs,
++			   unsigned char buf[MAX_INSN_SIZE], int buf_size);
+ 
+ #endif /* _ASM_X86_INSN_EVAL_H */
+diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h
+index a8c3d284fa46c..0da37756917a9 100644
+--- a/arch/x86/include/asm/insn.h
++++ b/arch/x86/include/asm/insn.h
+@@ -8,7 +8,7 @@
+  */
+ 
+ /* insn_attr_t is defined in inat.h */
+-#include <asm/inat.h>
++#include <asm/inat.h> /* __ignore_sync_check__ */
+ 
+ struct insn_field {
+ 	union {
+@@ -87,13 +87,25 @@ struct insn {
+ #define X86_VEX_M_MAX	0x1f			/* VEX3.M Maximum value */
+ 
+ extern void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64);
+-extern void insn_get_prefixes(struct insn *insn);
+-extern void insn_get_opcode(struct insn *insn);
+-extern void insn_get_modrm(struct insn *insn);
+-extern void insn_get_sib(struct insn *insn);
+-extern void insn_get_displacement(struct insn *insn);
+-extern void insn_get_immediate(struct insn *insn);
+-extern void insn_get_length(struct insn *insn);
++extern int insn_get_prefixes(struct insn *insn);
++extern int insn_get_opcode(struct insn *insn);
++extern int insn_get_modrm(struct insn *insn);
++extern int insn_get_sib(struct insn *insn);
++extern int insn_get_displacement(struct insn *insn);
++extern int insn_get_immediate(struct insn *insn);
++extern int insn_get_length(struct insn *insn);
++
++enum insn_mode {
++	INSN_MODE_32,
++	INSN_MODE_64,
++	/* Mode is determined by the current kernel build. */
++	INSN_MODE_KERN,
++	INSN_NUM_MODES,
++};
++
++extern int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m);
++
++#define insn_decode_kernel(_insn, _ptr) insn_decode((_insn), (_ptr), MAX_INSN_SIZE, INSN_MODE_KERN)
+ 
+ /* Attribute will be determined after getting ModRM (for opcode groups) */
+ static inline void insn_get_attribute(struct insn *insn)
+diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
+index 365111789cc68..5000cf59bdf5b 100644
+--- a/arch/x86/include/asm/linkage.h
++++ b/arch/x86/include/asm/linkage.h
+@@ -18,6 +18,28 @@
+ #define __ALIGN_STR	__stringify(__ALIGN)
+ #endif
+ 
++#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
++#define RET	jmp __x86_return_thunk
++#else /* CONFIG_RETPOLINE */
++#ifdef CONFIG_SLS
++#define RET	ret; int3
++#else
++#define RET	ret
++#endif
++#endif /* CONFIG_RETPOLINE */
++
++#else /* __ASSEMBLY__ */
++
++#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
++#define ASM_RET	"jmp __x86_return_thunk\n\t"
++#else /* CONFIG_RETPOLINE */
++#ifdef CONFIG_SLS
++#define ASM_RET	"ret; int3\n\t"
++#else
++#define ASM_RET	"ret\n\t"
++#endif
++#endif /* CONFIG_RETPOLINE */
++
+ #endif /* __ASSEMBLY__ */
+ 
+ #endif /* _ASM_X86_LINKAGE_H */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 96973d1979723..407de670cd609 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -51,6 +51,8 @@
+ #define SPEC_CTRL_STIBP			BIT(SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
+ #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
+ #define SPEC_CTRL_SSBD			BIT(SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
++#define SPEC_CTRL_RRSBA_DIS_S_SHIFT	6	   /* Disable RRSBA behavior */
++#define SPEC_CTRL_RRSBA_DIS_S		BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT)
+ 
+ #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
+ #define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
+@@ -91,6 +93,7 @@
+ #define MSR_IA32_ARCH_CAPABILITIES	0x0000010a
+ #define ARCH_CAP_RDCL_NO		BIT(0)	/* Not susceptible to Meltdown */
+ #define ARCH_CAP_IBRS_ALL		BIT(1)	/* Enhanced IBRS support */
++#define ARCH_CAP_RSBA			BIT(2)	/* RET may use alternative branch predictors */
+ #define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH	BIT(3)	/* Skip L1D flush on vmentry */
+ #define ARCH_CAP_SSB_NO			BIT(4)	/*
+ 						 * Not susceptible to Speculative Store Bypass
+@@ -138,6 +141,13 @@
+ 						 * bit available to control VERW
+ 						 * behavior.
+ 						 */
++#define ARCH_CAP_RRSBA			BIT(19)	/*
++						 * Indicates RET may use predictors
++						 * other than the RSB. With eIBRS
++						 * enabled predictions in kernel mode
++						 * are restricted to targets in
++						 * kernel.
++						 */
+ 
+ #define MSR_IA32_FLUSH_CMD		0x0000010b
+ #define L1D_FLUSH			BIT(0)	/*
+@@ -507,6 +517,9 @@
+ /* Fam 17h MSRs */
+ #define MSR_F17H_IRPERF			0xc00000e9
+ 
++#define MSR_ZEN2_SPECTRAL_CHICKEN	0xc00110e3
++#define MSR_ZEN2_SPECTRAL_CHICKEN_BIT	BIT_ULL(1)
++
+ /* Fam 16h MSRs */
+ #define MSR_F16H_L2I_PERF_CTL		0xc0010230
+ #define MSR_F16H_L2I_PERF_CTR		0xc0010231
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index e247151c3dcf2..6f2adaf53f467 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -5,12 +5,15 @@
+ 
+ #include <linux/static_key.h>
+ #include <linux/objtool.h>
++#include <linux/linkage.h>
+ 
+ #include <asm/alternative.h>
+-#include <asm/alternative-asm.h>
+ #include <asm/cpufeatures.h>
+ #include <asm/msr-index.h>
+ #include <asm/unwind_hints.h>
++#include <asm/percpu.h>
++
++#define RETPOLINE_THUNK_SIZE	32
+ 
+ /*
+  * Fill the CPU return stack buffer.
+@@ -73,6 +76,23 @@
+ 	.popsection
+ .endm
+ 
++/*
++ * (ab)use RETPOLINE_SAFE on RET to annotate away 'bare' RET instructions
++ * vs RETBleed validation.
++ */
++#define ANNOTATE_UNRET_SAFE ANNOTATE_RETPOLINE_SAFE
++
++/*
++ * Abuse ANNOTATE_RETPOLINE_SAFE on a NOP to indicate UNRET_END, should
++ * eventually turn into it's own annotation.
++ */
++.macro ANNOTATE_UNRET_END
++#ifdef CONFIG_DEBUG_ENTRY
++	ANNOTATE_RETPOLINE_SAFE
++	nop
++#endif
++.endm
++
+ /*
+  * JMP_NOSPEC and CALL_NOSPEC macros can be used instead of a simple
+  * indirect jmp/call which may be susceptible to the Spectre variant 2
+@@ -81,7 +101,7 @@
+ .macro JMP_NOSPEC reg:req
+ #ifdef CONFIG_RETPOLINE
+ 	ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \
+-		      __stringify(jmp __x86_retpoline_\reg), X86_FEATURE_RETPOLINE, \
++		      __stringify(jmp __x86_indirect_thunk_\reg), X86_FEATURE_RETPOLINE, \
+ 		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_LFENCE
+ #else
+ 	jmp	*%\reg
+@@ -91,7 +111,7 @@
+ .macro CALL_NOSPEC reg:req
+ #ifdef CONFIG_RETPOLINE
+ 	ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; call *%\reg), \
+-		      __stringify(call __x86_retpoline_\reg), X86_FEATURE_RETPOLINE, \
++		      __stringify(call __x86_indirect_thunk_\reg), X86_FEATURE_RETPOLINE, \
+ 		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *%\reg), X86_FEATURE_RETPOLINE_LFENCE
+ #else
+ 	call	*%\reg
+@@ -103,10 +123,34 @@
+   * monstrosity above, manually.
+   */
+ .macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
+-#ifdef CONFIG_RETPOLINE
+ 	ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr
+ 	__FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP)
+ .Lskip_rsb_\@:
++.endm
++
++#ifdef CONFIG_CPU_UNRET_ENTRY
++#define CALL_ZEN_UNTRAIN_RET	"call zen_untrain_ret"
++#else
++#define CALL_ZEN_UNTRAIN_RET	""
++#endif
++
++/*
++ * Mitigate RETBleed for AMD/Hygon Zen uarch. Requires KERNEL CR3 because the
++ * return thunk isn't mapped into the userspace tables (then again, AMD
++ * typically has NO_MELTDOWN).
++ *
++ * While zen_untrain_ret() doesn't clobber anything but requires stack,
++ * entry_ibpb() will clobber AX, CX, DX.
++ *
++ * As such, this must be placed after every *SWITCH_TO_KERNEL_CR3 at a point
++ * where we have a stack but before any RET instruction.
++ */
++.macro UNTRAIN_RET
++#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY)
++	ANNOTATE_UNRET_END
++	ALTERNATIVE_2 "",						\
++	              CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET,		\
++		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB
+ #endif
+ .endm
+ 
+@@ -118,7 +162,21 @@
+ 	_ASM_PTR " 999b\n\t"					\
+ 	".popsection\n\t"
+ 
++extern void __x86_return_thunk(void);
++extern void zen_untrain_ret(void);
++extern void entry_ibpb(void);
++
+ #ifdef CONFIG_RETPOLINE
++
++typedef u8 retpoline_thunk_t[RETPOLINE_THUNK_SIZE];
++
++#define GEN(reg) \
++	extern retpoline_thunk_t __x86_indirect_thunk_ ## reg;
++#include <asm/GEN-for-each-reg.h>
++#undef GEN
++
++extern retpoline_thunk_t __x86_indirect_thunk_array[];
++
+ #ifdef CONFIG_X86_64
+ 
+ /*
+@@ -129,7 +187,7 @@
+ 	ALTERNATIVE_2(						\
+ 	ANNOTATE_RETPOLINE_SAFE					\
+ 	"call *%[thunk_target]\n",				\
+-	"call __x86_retpoline_%V[thunk_target]\n",		\
++	"call __x86_indirect_thunk_%V[thunk_target]\n",		\
+ 	X86_FEATURE_RETPOLINE,					\
+ 	"lfence;\n"						\
+ 	ANNOTATE_RETPOLINE_SAFE					\
+@@ -181,6 +239,7 @@ enum spectre_v2_mitigation {
+ 	SPECTRE_V2_EIBRS,
+ 	SPECTRE_V2_EIBRS_RETPOLINE,
+ 	SPECTRE_V2_EIBRS_LFENCE,
++	SPECTRE_V2_IBRS,
+ };
+ 
+ /* The indirect branch speculation control variants */
+@@ -223,6 +282,9 @@ static inline void indirect_branch_prediction_barrier(void)
+ 
+ /* The Intel SPEC CTRL MSR base value cache */
+ extern u64 x86_spec_ctrl_base;
++DECLARE_PER_CPU(u64, x86_spec_ctrl_current);
++extern void write_spec_ctrl_current(u64 val, bool force);
++extern u64 spec_ctrl_current(void);
+ 
+ /*
+  * With retpoline, we must use IBRS to restrict branch prediction
+@@ -232,18 +294,16 @@ extern u64 x86_spec_ctrl_base;
+  */
+ #define firmware_restrict_branch_speculation_start()			\
+ do {									\
+-	u64 val = x86_spec_ctrl_base | SPEC_CTRL_IBRS;			\
+-									\
+ 	preempt_disable();						\
+-	alternative_msr_write(MSR_IA32_SPEC_CTRL, val,			\
++	alternative_msr_write(MSR_IA32_SPEC_CTRL,			\
++			      spec_ctrl_current() | SPEC_CTRL_IBRS,	\
+ 			      X86_FEATURE_USE_IBRS_FW);			\
+ } while (0)
+ 
+ #define firmware_restrict_branch_speculation_end()			\
+ do {									\
+-	u64 val = x86_spec_ctrl_base;					\
+-									\
+-	alternative_msr_write(MSR_IA32_SPEC_CTRL, val,			\
++	alternative_msr_write(MSR_IA32_SPEC_CTRL,			\
++			      spec_ctrl_current(),			\
+ 			      X86_FEATURE_USE_IBRS_FW);			\
+ 	preempt_enable();						\
+ } while (0)
+@@ -306,63 +366,4 @@ static inline void mds_idle_clear_cpu_buffers(void)
+ 
+ #endif /* __ASSEMBLY__ */
+ 
+-/*
+- * Below is used in the eBPF JIT compiler and emits the byte sequence
+- * for the following assembly:
+- *
+- * With retpolines configured:
+- *
+- *    callq do_rop
+- *  spec_trap:
+- *    pause
+- *    lfence
+- *    jmp spec_trap
+- *  do_rop:
+- *    mov %rcx,(%rsp) for x86_64
+- *    mov %edx,(%esp) for x86_32
+- *    retq
+- *
+- * Without retpolines configured:
+- *
+- *    jmp *%rcx for x86_64
+- *    jmp *%edx for x86_32
+- */
+-#ifdef CONFIG_RETPOLINE
+-# ifdef CONFIG_X86_64
+-#  define RETPOLINE_RCX_BPF_JIT_SIZE	17
+-#  define RETPOLINE_RCX_BPF_JIT()				\
+-do {								\
+-	EMIT1_off32(0xE8, 7);	 /* callq do_rop */		\
+-	/* spec_trap: */					\
+-	EMIT2(0xF3, 0x90);       /* pause */			\
+-	EMIT3(0x0F, 0xAE, 0xE8); /* lfence */			\
+-	EMIT2(0xEB, 0xF9);       /* jmp spec_trap */		\
+-	/* do_rop: */						\
+-	EMIT4(0x48, 0x89, 0x0C, 0x24); /* mov %rcx,(%rsp) */	\
+-	EMIT1(0xC3);             /* retq */			\
+-} while (0)
+-# else /* !CONFIG_X86_64 */
+-#  define RETPOLINE_EDX_BPF_JIT()				\
+-do {								\
+-	EMIT1_off32(0xE8, 7);	 /* call do_rop */		\
+-	/* spec_trap: */					\
+-	EMIT2(0xF3, 0x90);       /* pause */			\
+-	EMIT3(0x0F, 0xAE, 0xE8); /* lfence */			\
+-	EMIT2(0xEB, 0xF9);       /* jmp spec_trap */		\
+-	/* do_rop: */						\
+-	EMIT3(0x89, 0x14, 0x24); /* mov %edx,(%esp) */		\
+-	EMIT1(0xC3);             /* ret */			\
+-} while (0)
+-# endif
+-#else /* !CONFIG_RETPOLINE */
+-# ifdef CONFIG_X86_64
+-#  define RETPOLINE_RCX_BPF_JIT_SIZE	2
+-#  define RETPOLINE_RCX_BPF_JIT()				\
+-	EMIT2(0xFF, 0xE1);       /* jmp *%rcx */
+-# else /* !CONFIG_X86_64 */
+-#  define RETPOLINE_EDX_BPF_JIT()				\
+-	EMIT2(0xFF, 0xE2)        /* jmp *%edx */
+-# endif
+-#endif
+-
+ #endif /* _ASM_X86_NOSPEC_BRANCH_H_ */
+diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
+index 5647bcdba776e..4a32b0d343762 100644
+--- a/arch/x86/include/asm/paravirt.h
++++ b/arch/x86/include/asm/paravirt.h
+@@ -630,7 +630,7 @@ bool __raw_callee_save___native_vcpu_is_preempted(long cpu);
+ 	    "call " #func ";"						\
+ 	    PV_RESTORE_ALL_CALLER_REGS					\
+ 	    FRAME_END							\
+-	    "ret;"							\
++	    ASM_RET							\
+ 	    ".size " PV_THUNK_NAME(func) ", .-" PV_THUNK_NAME(func) ";"	\
+ 	    ".popsection")
+ 
+diff --git a/arch/x86/include/asm/qspinlock_paravirt.h b/arch/x86/include/asm/qspinlock_paravirt.h
+index 159622ee06748..1474cf96251dd 100644
+--- a/arch/x86/include/asm/qspinlock_paravirt.h
++++ b/arch/x86/include/asm/qspinlock_paravirt.h
+@@ -48,7 +48,7 @@ asm    (".pushsection .text;"
+ 	"jne   .slowpath;"
+ 	"pop   %rdx;"
+ 	FRAME_END
+-	"ret;"
++	ASM_RET
+ 	".slowpath: "
+ 	"push   %rsi;"
+ 	"movzbl %al,%esi;"
+@@ -56,7 +56,7 @@ asm    (".pushsection .text;"
+ 	"pop    %rsi;"
+ 	"pop    %rdx;"
+ 	FRAME_END
+-	"ret;"
++	ASM_RET
+ 	".size " PV_UNLOCK ", .-" PV_UNLOCK ";"
+ 	".popsection");
+ 
+diff --git a/arch/x86/include/asm/smap.h b/arch/x86/include/asm/smap.h
+index 8b58d6975d5d4..ea1d8eb644cb7 100644
+--- a/arch/x86/include/asm/smap.h
++++ b/arch/x86/include/asm/smap.h
+@@ -11,6 +11,7 @@
+ 
+ #include <asm/nops.h>
+ #include <asm/cpufeatures.h>
++#include <asm/alternative.h>
+ 
+ /* "Raw" instruction opcodes */
+ #define __ASM_CLAC	".byte 0x0f,0x01,0xca"
+@@ -18,8 +19,6 @@
+ 
+ #ifdef __ASSEMBLY__
+ 
+-#include <asm/alternative-asm.h>
+-
+ #ifdef CONFIG_X86_SMAP
+ 
+ #define ASM_CLAC \
+@@ -37,8 +36,6 @@
+ 
+ #else /* __ASSEMBLY__ */
+ 
+-#include <asm/alternative.h>
+-
+ #ifdef CONFIG_X86_SMAP
+ 
+ static __always_inline void clac(void)
+diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
+index cbb67b6030f97..491aadfac6117 100644
+--- a/arch/x86/include/asm/static_call.h
++++ b/arch/x86/include/asm/static_call.h
+@@ -21,6 +21,16 @@
+  * relative displacement across sections.
+  */
+ 
++/*
++ * The trampoline is 8 bytes and of the general form:
++ *
++ *   jmp.d32 \func
++ *   ud1 %esp, %ecx
++ *
++ * That trailing #UD provides both a speculation stop and serves as a unique
++ * 3 byte signature identifying static call trampolines. Also see tramp_ud[]
++ * and __static_call_fixup().
++ */
+ #define __ARCH_DEFINE_STATIC_CALL_TRAMP(name, insns)			\
+ 	asm(".pushsection .static_call.text, \"ax\"		\n"	\
+ 	    ".align 4						\n"	\
+@@ -34,8 +44,13 @@
+ #define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func)			\
+ 	__ARCH_DEFINE_STATIC_CALL_TRAMP(name, ".byte 0xe9; .long " #func " - (. + 4)")
+ 
++#ifdef CONFIG_RETHUNK
++#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)			\
++	__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "jmp __x86_return_thunk")
++#else
+ #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)			\
+-	__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; nop; nop; nop; nop")
++	__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; int3; nop; nop; nop")
++#endif
+ 
+ 
+ #define ARCH_ADD_TRAMP_KEY(name)					\
+@@ -44,4 +59,6 @@
+ 	    ".long " STATIC_CALL_KEY_STR(name) " - .		\n"	\
+ 	    ".popsection					\n")
+ 
++extern bool __static_call_fixup(void *tramp, u8 op, void *dest);
++
+ #endif /* _ASM_STATIC_CALL_H */
+diff --git a/arch/x86/include/asm/unwind_hints.h b/arch/x86/include/asm/unwind_hints.h
+index 664d4610d700e..56664b31b6dad 100644
+--- a/arch/x86/include/asm/unwind_hints.h
++++ b/arch/x86/include/asm/unwind_hints.h
+@@ -8,7 +8,11 @@
+ #ifdef __ASSEMBLY__
+ 
+ .macro UNWIND_HINT_EMPTY
+-	UNWIND_HINT sp_reg=ORC_REG_UNDEFINED type=UNWIND_HINT_TYPE_CALL end=1
++	UNWIND_HINT type=UNWIND_HINT_TYPE_CALL end=1
++.endm
++
++.macro UNWIND_HINT_ENTRY
++	UNWIND_HINT type=UNWIND_HINT_TYPE_ENTRY end=1
+ .endm
+ 
+ .macro UNWIND_HINT_REGS base=%rsp offset=0 indirect=0 extra=1 partial=0
+@@ -48,17 +52,16 @@
+ 	UNWIND_HINT_REGS base=\base offset=\offset partial=1
+ .endm
+ 
+-.macro UNWIND_HINT_FUNC sp_offset=8
+-	UNWIND_HINT sp_reg=ORC_REG_SP sp_offset=\sp_offset type=UNWIND_HINT_TYPE_CALL
++.macro UNWIND_HINT_FUNC
++	UNWIND_HINT sp_reg=ORC_REG_SP sp_offset=8 type=UNWIND_HINT_TYPE_FUNC
++.endm
++
++.macro UNWIND_HINT_SAVE
++	UNWIND_HINT type=UNWIND_HINT_TYPE_SAVE
+ .endm
+ 
+-/*
+- * RET_OFFSET: Used on instructions that terminate a function; mostly RETURN
+- * and sibling calls. On these, sp_offset denotes the expected offset from
+- * initial_func_cfi.
+- */
+-.macro UNWIND_HINT_RET_OFFSET sp_offset=8
+-	UNWIND_HINT sp_reg=ORC_REG_SP type=UNWIND_HINT_TYPE_RET_OFFSET sp_offset=\sp_offset
++.macro UNWIND_HINT_RESTORE
++	UNWIND_HINT type=UNWIND_HINT_TYPE_RESTORE
+ .endm
+ 
+ #endif /* __ASSEMBLY__ */
+diff --git a/arch/x86/kernel/acpi/wakeup_32.S b/arch/x86/kernel/acpi/wakeup_32.S
+index daf88f8143c5f..cf69081073b54 100644
+--- a/arch/x86/kernel/acpi/wakeup_32.S
++++ b/arch/x86/kernel/acpi/wakeup_32.S
+@@ -60,7 +60,7 @@ save_registers:
+ 	popl	saved_context_eflags
+ 
+ 	movl	$ret_point, saved_eip
+-	ret
++	RET
+ 
+ 
+ restore_registers:
+@@ -70,7 +70,7 @@ restore_registers:
+ 	movl	saved_context_edi, %edi
+ 	pushl	saved_context_eflags
+ 	popfl
+-	ret
++	RET
+ 
+ SYM_CODE_START(do_suspend_lowlevel)
+ 	call	save_processor_state
+@@ -86,7 +86,7 @@ SYM_CODE_START(do_suspend_lowlevel)
+ ret_point:
+ 	call	restore_registers
+ 	call	restore_processor_state
+-	ret
++	RET
+ SYM_CODE_END(do_suspend_lowlevel)
+ 
+ .data
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index 2400ad62f330b..77eefa0b0f324 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -28,6 +28,7 @@
+ #include <asm/insn.h>
+ #include <asm/io.h>
+ #include <asm/fixmap.h>
++#include <asm/asm-prototypes.h>
+ 
+ int __read_mostly alternatives_patched;
+ 
+@@ -268,6 +269,8 @@ static void __init_or_module add_nops(void *insns, unsigned int len)
+ 	}
+ }
+ 
++extern s32 __retpoline_sites[], __retpoline_sites_end[];
++extern s32 __return_sites[], __return_sites_end[];
+ extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
+ extern s32 __smp_locks[], __smp_locks_end[];
+ void text_poke_early(void *addr, const void *opcode, size_t len);
+@@ -338,25 +341,69 @@ done:
+ }
+ 
+ /*
+- * "noinline" to cause control flow change and thus invalidate I$ and
+- * cause refetch after modification.
++ * optimize_nops_range() - Optimize a sequence of single byte NOPs (0x90)
++ *
++ * @instr: instruction byte stream
++ * @instrlen: length of the above
++ * @off: offset within @instr where the first NOP has been detected
++ *
++ * Return: number of NOPs found (and replaced).
+  */
+-static void __init_or_module noinline optimize_nops(struct alt_instr *a, u8 *instr)
++static __always_inline int optimize_nops_range(u8 *instr, u8 instrlen, int off)
+ {
+ 	unsigned long flags;
+-	int i;
++	int i = off, nnops;
+ 
+-	for (i = 0; i < a->padlen; i++) {
++	while (i < instrlen) {
+ 		if (instr[i] != 0x90)
+-			return;
++			break;
++
++		i++;
+ 	}
+ 
++	nnops = i - off;
++
++	if (nnops <= 1)
++		return nnops;
++
+ 	local_irq_save(flags);
+-	add_nops(instr + (a->instrlen - a->padlen), a->padlen);
++	add_nops(instr + off, nnops);
+ 	local_irq_restore(flags);
+ 
+-	DUMP_BYTES(instr, a->instrlen, "%px: [%d:%d) optimized NOPs: ",
+-		   instr, a->instrlen - a->padlen, a->padlen);
++	DUMP_BYTES(instr, instrlen, "%px: [%d:%d) optimized NOPs: ", instr, off, i);
++
++	return nnops;
++}
++
++/*
++ * "noinline" to cause control flow change and thus invalidate I$ and
++ * cause refetch after modification.
++ */
++static void __init_or_module noinline optimize_nops(u8 *instr, size_t len)
++{
++	struct insn insn;
++	int i = 0;
++
++	/*
++	 * Jump over the non-NOP insns and optimize single-byte NOPs into bigger
++	 * ones.
++	 */
++	for (;;) {
++		if (insn_decode_kernel(&insn, &instr[i]))
++			return;
++
++		/*
++		 * See if this and any potentially following NOPs can be
++		 * optimized.
++		 */
++		if (insn.length == 1 && insn.opcode.bytes[0] == 0x90)
++			i += optimize_nops_range(instr, len, i);
++		else
++			i += insn.length;
++
++		if (i >= len)
++			return;
++	}
+ }
+ 
+ /*
+@@ -388,23 +435,29 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
+ 	 */
+ 	for (a = start; a < end; a++) {
+ 		int insn_buff_sz = 0;
++		/* Mask away "NOT" flag bit for feature to test. */
++		u16 feature = a->cpuid & ~ALTINSTR_FLAG_INV;
+ 
+ 		instr = (u8 *)&a->instr_offset + a->instr_offset;
+ 		replacement = (u8 *)&a->repl_offset + a->repl_offset;
+ 		BUG_ON(a->instrlen > sizeof(insn_buff));
+-		BUG_ON(a->cpuid >= (NCAPINTS + NBUGINTS) * 32);
+-		if (!boot_cpu_has(a->cpuid)) {
+-			if (a->padlen > 1)
+-				optimize_nops(a, instr);
++		BUG_ON(feature >= (NCAPINTS + NBUGINTS) * 32);
+ 
+-			continue;
+-		}
++		/*
++		 * Patch if either:
++		 * - feature is present
++		 * - feature not present but ALTINSTR_FLAG_INV is set to mean,
++		 *   patch if feature is *NOT* present.
++		 */
++		if (!boot_cpu_has(feature) == !(a->cpuid & ALTINSTR_FLAG_INV))
++			goto next;
+ 
+-		DPRINTK("feat: %d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d), pad: %d",
+-			a->cpuid >> 5,
+-			a->cpuid & 0x1f,
++		DPRINTK("feat: %s%d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d)",
++			(a->cpuid & ALTINSTR_FLAG_INV) ? "!" : "",
++			feature >> 5,
++			feature & 0x1f,
+ 			instr, instr, a->instrlen,
+-			replacement, a->replacementlen, a->padlen);
++			replacement, a->replacementlen);
+ 
+ 		DUMP_BYTES(instr, a->instrlen, "%px: old_insn: ", instr);
+ 		DUMP_BYTES(replacement, a->replacementlen, "%px: rpl_insn: ", replacement);
+@@ -428,16 +481,259 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
+ 		if (a->replacementlen && is_jmp(replacement[0]))
+ 			recompute_jump(a, instr, replacement, insn_buff);
+ 
+-		if (a->instrlen > a->replacementlen) {
+-			add_nops(insn_buff + a->replacementlen,
+-				 a->instrlen - a->replacementlen);
+-			insn_buff_sz += a->instrlen - a->replacementlen;
+-		}
++		for (; insn_buff_sz < a->instrlen; insn_buff_sz++)
++			insn_buff[insn_buff_sz] = 0x90;
++
+ 		DUMP_BYTES(insn_buff, insn_buff_sz, "%px: final_insn: ", instr);
+ 
+ 		text_poke_early(instr, insn_buff, insn_buff_sz);
++
++next:
++		optimize_nops(instr, a->instrlen);
++	}
++}
++
++#if defined(CONFIG_RETPOLINE) && defined(CONFIG_STACK_VALIDATION)
++
++/*
++ * CALL/JMP *%\reg
++ */
++static int emit_indirect(int op, int reg, u8 *bytes)
++{
++	int i = 0;
++	u8 modrm;
++
++	switch (op) {
++	case CALL_INSN_OPCODE:
++		modrm = 0x10; /* Reg = 2; CALL r/m */
++		break;
++
++	case JMP32_INSN_OPCODE:
++		modrm = 0x20; /* Reg = 4; JMP r/m */
++		break;
++
++	default:
++		WARN_ON_ONCE(1);
++		return -1;
++	}
++
++	if (reg >= 8) {
++		bytes[i++] = 0x41; /* REX.B prefix */
++		reg -= 8;
++	}
++
++	modrm |= 0xc0; /* Mod = 3 */
++	modrm += reg;
++
++	bytes[i++] = 0xff; /* opcode */
++	bytes[i++] = modrm;
++
++	return i;
++}
++
++/*
++ * Rewrite the compiler generated retpoline thunk calls.
++ *
++ * For spectre_v2=off (!X86_FEATURE_RETPOLINE), rewrite them into immediate
++ * indirect instructions, avoiding the extra indirection.
++ *
++ * For example, convert:
++ *
++ *   CALL __x86_indirect_thunk_\reg
++ *
++ * into:
++ *
++ *   CALL *%\reg
++ *
++ * It also tries to inline spectre_v2=retpoline,amd when size permits.
++ */
++static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
++{
++	retpoline_thunk_t *target;
++	int reg, ret, i = 0;
++	u8 op, cc;
++
++	target = addr + insn->length + insn->immediate.value;
++	reg = target - __x86_indirect_thunk_array;
++
++	if (WARN_ON_ONCE(reg & ~0xf))
++		return -1;
++
++	/* If anyone ever does: CALL/JMP *%rsp, we're in deep trouble. */
++	BUG_ON(reg == 4);
++
++	if (cpu_feature_enabled(X86_FEATURE_RETPOLINE) &&
++	    !cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE))
++		return -1;
++
++	op = insn->opcode.bytes[0];
++
++	/*
++	 * Convert:
++	 *
++	 *   Jcc.d32 __x86_indirect_thunk_\reg
++	 *
++	 * into:
++	 *
++	 *   Jncc.d8 1f
++	 *   [ LFENCE ]
++	 *   JMP *%\reg
++	 *   [ NOP ]
++	 * 1:
++	 */
++	/* Jcc.d32 second opcode byte is in the range: 0x80-0x8f */
++	if (op == 0x0f && (insn->opcode.bytes[1] & 0xf0) == 0x80) {
++		cc = insn->opcode.bytes[1] & 0xf;
++		cc ^= 1; /* invert condition */
++
++		bytes[i++] = 0x70 + cc;        /* Jcc.d8 */
++		bytes[i++] = insn->length - 2; /* sizeof(Jcc.d8) == 2 */
++
++		/* Continue as if: JMP.d32 __x86_indirect_thunk_\reg */
++		op = JMP32_INSN_OPCODE;
++	}
++
++	/*
++	 * For RETPOLINE_AMD: prepend the indirect CALL/JMP with an LFENCE.
++	 */
++	if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) {
++		bytes[i++] = 0x0f;
++		bytes[i++] = 0xae;
++		bytes[i++] = 0xe8; /* LFENCE */
++	}
++
++	ret = emit_indirect(op, reg, bytes + i);
++	if (ret < 0)
++		return ret;
++	i += ret;
++
++	for (; i < insn->length;)
++		bytes[i++] = 0x90;
++
++	return i;
++}
++
++/*
++ * Generated by 'objtool --retpoline'.
++ */
++void __init_or_module noinline apply_retpolines(s32 *start, s32 *end)
++{
++	s32 *s;
++
++	for (s = start; s < end; s++) {
++		void *addr = (void *)s + *s;
++		struct insn insn;
++		int len, ret;
++		u8 bytes[16];
++		u8 op1, op2;
++
++		ret = insn_decode_kernel(&insn, addr);
++		if (WARN_ON_ONCE(ret < 0))
++			continue;
++
++		op1 = insn.opcode.bytes[0];
++		op2 = insn.opcode.bytes[1];
++
++		switch (op1) {
++		case CALL_INSN_OPCODE:
++		case JMP32_INSN_OPCODE:
++			break;
++
++		case 0x0f: /* escape */
++			if (op2 >= 0x80 && op2 <= 0x8f)
++				break;
++			fallthrough;
++		default:
++			WARN_ON_ONCE(1);
++			continue;
++		}
++
++		DPRINTK("retpoline at: %pS (%px) len: %d to: %pS",
++			addr, addr, insn.length,
++			addr + insn.length + insn.immediate.value);
++
++		len = patch_retpoline(addr, &insn, bytes);
++		if (len == insn.length) {
++			optimize_nops(bytes, len);
++			DUMP_BYTES(((u8*)addr),  len, "%px: orig: ", addr);
++			DUMP_BYTES(((u8*)bytes), len, "%px: repl: ", addr);
++			text_poke_early(addr, bytes, len);
++		}
++	}
++}
++
++#ifdef CONFIG_RETHUNK
++/*
++ * Rewrite the compiler generated return thunk tail-calls.
++ *
++ * For example, convert:
++ *
++ *   JMP __x86_return_thunk
++ *
++ * into:
++ *
++ *   RET
++ */
++static int patch_return(void *addr, struct insn *insn, u8 *bytes)
++{
++	int i = 0;
++
++	if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
++		return -1;
++
++	bytes[i++] = RET_INSN_OPCODE;
++
++	for (; i < insn->length;)
++		bytes[i++] = INT3_INSN_OPCODE;
++
++	return i;
++}
++
++void __init_or_module noinline apply_returns(s32 *start, s32 *end)
++{
++	s32 *s;
++
++	for (s = start; s < end; s++) {
++		void *dest = NULL, *addr = (void *)s + *s;
++		struct insn insn;
++		int len, ret;
++		u8 bytes[16];
++		u8 op;
++
++		ret = insn_decode_kernel(&insn, addr);
++		if (WARN_ON_ONCE(ret < 0))
++			continue;
++
++		op = insn.opcode.bytes[0];
++		if (op == JMP32_INSN_OPCODE)
++			dest = addr + insn.length + insn.immediate.value;
++
++		if (__static_call_fixup(addr, op, dest) ||
++		    WARN_ON_ONCE(dest != &__x86_return_thunk))
++			continue;
++
++		DPRINTK("return thunk at: %pS (%px) len: %d to: %pS",
++			addr, addr, insn.length,
++			addr + insn.length + insn.immediate.value);
++
++		len = patch_return(addr, &insn, bytes);
++		if (len == insn.length) {
++			DUMP_BYTES(((u8*)addr),  len, "%px: orig: ", addr);
++			DUMP_BYTES(((u8*)bytes), len, "%px: repl: ", addr);
++			text_poke_early(addr, bytes, len);
++		}
+ 	}
+ }
++#else
++void __init_or_module noinline apply_returns(s32 *start, s32 *end) { }
++#endif /* CONFIG_RETHUNK */
++
++#else /* !RETPOLINES || !CONFIG_STACK_VALIDATION */
++
++void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) { }
++void __init_or_module noinline apply_returns(s32 *start, s32 *end) { }
++
++#endif /* CONFIG_RETPOLINE && CONFIG_STACK_VALIDATION */
+ 
+ #ifdef CONFIG_SMP
+ static void alternatives_smp_lock(const s32 *start, const s32 *end,
+@@ -641,7 +937,7 @@ asm (
+ "	.type		int3_magic, @function\n"
+ "int3_magic:\n"
+ "	movl	$1, (%" _ASM_ARG1 ")\n"
+-"	ret\n"
++	ASM_RET
+ "	.size		int3_magic, .-int3_magic\n"
+ "	.popsection\n"
+ );
+@@ -723,6 +1019,13 @@ void __init alternative_instructions(void)
+ 	 * patching.
+ 	 */
+ 
++	/*
++	 * Rewrite the retpolines, must be done before alternatives since
++	 * those can rewrite the retpoline thunks.
++	 */
++	apply_retpolines(__retpoline_sites, __retpoline_sites_end);
++	apply_returns(__return_sites, __return_sites_end);
++
+ 	apply_alternatives(__alt_instructions, __alt_instructions_end);
+ 
+ #ifdef CONFIG_SMP
+@@ -1009,10 +1312,13 @@ void text_poke_sync(void)
+ }
+ 
+ struct text_poke_loc {
+-	s32 rel_addr; /* addr := _stext + rel_addr */
+-	s32 rel32;
++	/* addr := _stext + rel_addr */
++	s32 rel_addr;
++	s32 disp;
++	u8 len;
+ 	u8 opcode;
+ 	const u8 text[POKE_MAX_OPCODE_SIZE];
++	/* see text_poke_bp_batch() */
+ 	u8 old;
+ };
+ 
+@@ -1027,7 +1333,8 @@ static struct bp_patching_desc *bp_desc;
+ static __always_inline
+ struct bp_patching_desc *try_get_desc(struct bp_patching_desc **descp)
+ {
+-	struct bp_patching_desc *desc = __READ_ONCE(*descp); /* rcu_dereference */
++	/* rcu_dereference */
++	struct bp_patching_desc *desc = __READ_ONCE(*descp);
+ 
+ 	if (!desc || !arch_atomic_inc_not_zero(&desc->refs))
+ 		return NULL;
+@@ -1061,7 +1368,7 @@ noinstr int poke_int3_handler(struct pt_regs *regs)
+ {
+ 	struct bp_patching_desc *desc;
+ 	struct text_poke_loc *tp;
+-	int len, ret = 0;
++	int ret = 0;
+ 	void *ip;
+ 
+ 	if (user_mode(regs))
+@@ -1101,8 +1408,7 @@ noinstr int poke_int3_handler(struct pt_regs *regs)
+ 			goto out_put;
+ 	}
+ 
+-	len = text_opcode_size(tp->opcode);
+-	ip += len;
++	ip += tp->len;
+ 
+ 	switch (tp->opcode) {
+ 	case INT3_INSN_OPCODE:
+@@ -1117,12 +1423,12 @@ noinstr int poke_int3_handler(struct pt_regs *regs)
+ 		break;
+ 
+ 	case CALL_INSN_OPCODE:
+-		int3_emulate_call(regs, (long)ip + tp->rel32);
++		int3_emulate_call(regs, (long)ip + tp->disp);
+ 		break;
+ 
+ 	case JMP32_INSN_OPCODE:
+ 	case JMP8_INSN_OPCODE:
+-		int3_emulate_jmp(regs, (long)ip + tp->rel32);
++		int3_emulate_jmp(regs, (long)ip + tp->disp);
+ 		break;
+ 
+ 	default:
+@@ -1197,7 +1503,7 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
+ 	 */
+ 	for (do_sync = 0, i = 0; i < nr_entries; i++) {
+ 		u8 old[POKE_MAX_OPCODE_SIZE] = { tp[i].old, };
+-		int len = text_opcode_size(tp[i].opcode);
++		int len = tp[i].len;
+ 
+ 		if (len - INT3_INSN_SIZE > 0) {
+ 			memcpy(old + INT3_INSN_SIZE,
+@@ -1274,20 +1580,36 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr,
+ 			       const void *opcode, size_t len, const void *emulate)
+ {
+ 	struct insn insn;
++	int ret, i;
+ 
+ 	memcpy((void *)tp->text, opcode, len);
+ 	if (!emulate)
+ 		emulate = opcode;
+ 
+-	kernel_insn_init(&insn, emulate, MAX_INSN_SIZE);
+-	insn_get_length(&insn);
+-
+-	BUG_ON(!insn_complete(&insn));
+-	BUG_ON(len != insn.length);
++	ret = insn_decode_kernel(&insn, emulate);
++	BUG_ON(ret < 0);
+ 
+ 	tp->rel_addr = addr - (void *)_stext;
++	tp->len = len;
+ 	tp->opcode = insn.opcode.bytes[0];
+ 
++	switch (tp->opcode) {
++	case RET_INSN_OPCODE:
++	case JMP32_INSN_OPCODE:
++	case JMP8_INSN_OPCODE:
++		/*
++		 * Control flow instructions without implied execution of the
++		 * next instruction can be padded with INT3.
++		 */
++		for (i = insn.length; i < len; i++)
++			BUG_ON(tp->text[i] != INT3_INSN_OPCODE);
++		break;
++
++	default:
++		BUG_ON(len != insn.length);
++	};
++
++
+ 	switch (tp->opcode) {
+ 	case INT3_INSN_OPCODE:
+ 	case RET_INSN_OPCODE:
+@@ -1296,7 +1618,7 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr,
+ 	case CALL_INSN_OPCODE:
+ 	case JMP32_INSN_OPCODE:
+ 	case JMP8_INSN_OPCODE:
+-		tp->rel32 = insn.immediate.value;
++		tp->disp = insn.immediate.value;
+ 		break;
+ 
+ 	default: /* assume NOP */
+@@ -1304,13 +1626,13 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr,
+ 		case 2: /* NOP2 -- emulate as JMP8+0 */
+ 			BUG_ON(memcmp(emulate, ideal_nops[len], len));
+ 			tp->opcode = JMP8_INSN_OPCODE;
+-			tp->rel32 = 0;
++			tp->disp = 0;
+ 			break;
+ 
+ 		case 5: /* NOP5 -- emulate as JMP32+0 */
+ 			BUG_ON(memcmp(emulate, ideal_nops[NOP_ATOMIC5], len));
+ 			tp->opcode = JMP32_INSN_OPCODE;
+-			tp->rel32 = 0;
++			tp->disp = 0;
+ 			break;
+ 
+ 		default: /* unknown instruction */
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index acea05eed27d4..8b9e3277a6ceb 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -914,6 +914,28 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
+ 	clear_rdrand_cpuid_bit(c);
+ }
+ 
++void init_spectral_chicken(struct cpuinfo_x86 *c)
++{
++#ifdef CONFIG_CPU_UNRET_ENTRY
++	u64 value;
++
++	/*
++	 * On Zen2 we offer this chicken (bit) on the altar of Speculation.
++	 *
++	 * This suppresses speculation from the middle of a basic block, i.e. it
++	 * suppresses non-branch predictions.
++	 *
++	 * We use STIBP as a heuristic to filter out Zen2 from the rest of F17H
++	 */
++	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && cpu_has(c, X86_FEATURE_AMD_STIBP)) {
++		if (!rdmsrl_safe(MSR_ZEN2_SPECTRAL_CHICKEN, &value)) {
++			value |= MSR_ZEN2_SPECTRAL_CHICKEN_BIT;
++			wrmsrl_safe(MSR_ZEN2_SPECTRAL_CHICKEN, value);
++		}
++	}
++#endif
++}
++
+ static void init_amd_zn(struct cpuinfo_x86 *c)
+ {
+ 	set_cpu_cap(c, X86_FEATURE_ZEN);
+@@ -922,12 +944,21 @@ static void init_amd_zn(struct cpuinfo_x86 *c)
+ 	node_reclaim_distance = 32;
+ #endif
+ 
+-	/*
+-	 * Fix erratum 1076: CPB feature bit not being set in CPUID.
+-	 * Always set it, except when running under a hypervisor.
+-	 */
+-	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_CPB))
+-		set_cpu_cap(c, X86_FEATURE_CPB);
++	/* Fix up CPUID bits, but only if not virtualised. */
++	if (!cpu_has(c, X86_FEATURE_HYPERVISOR)) {
++
++		/* Erratum 1076: CPB feature bit not being set in CPUID. */
++		if (!cpu_has(c, X86_FEATURE_CPB))
++			set_cpu_cap(c, X86_FEATURE_CPB);
++
++		/*
++		 * Zen3 (Fam19 model < 0x10) parts are not susceptible to
++		 * Branch Type Confusion, but predate the allocation of the
++		 * BTC_NO bit.
++		 */
++		if (c->x86 == 0x19 && !cpu_has(c, X86_FEATURE_BTC_NO))
++			set_cpu_cap(c, X86_FEATURE_BTC_NO);
++	}
+ }
+ 
+ static void init_amd(struct cpuinfo_x86 *c)
+@@ -959,7 +990,8 @@ static void init_amd(struct cpuinfo_x86 *c)
+ 	case 0x12: init_amd_ln(c); break;
+ 	case 0x15: init_amd_bd(c); break;
+ 	case 0x16: init_amd_jg(c); break;
+-	case 0x17: fallthrough;
++	case 0x17: init_spectral_chicken(c);
++		   fallthrough;
+ 	case 0x19: init_amd_zn(c); break;
+ 	}
+ 
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 2a21046846b6f..bc6382f5ec27b 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -38,6 +38,8 @@
+ 
+ static void __init spectre_v1_select_mitigation(void);
+ static void __init spectre_v2_select_mitigation(void);
++static void __init retbleed_select_mitigation(void);
++static void __init spectre_v2_user_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
+ static void __init l1tf_select_mitigation(void);
+ static void __init mds_select_mitigation(void);
+@@ -47,16 +49,40 @@ static void __init taa_select_mitigation(void);
+ static void __init mmio_select_mitigation(void);
+ static void __init srbds_select_mitigation(void);
+ 
+-/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
++/* The base value of the SPEC_CTRL MSR without task-specific bits set */
+ u64 x86_spec_ctrl_base;
+ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
++
++/* The current value of the SPEC_CTRL MSR with task-specific bits set */
++DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
++EXPORT_SYMBOL_GPL(x86_spec_ctrl_current);
++
+ static DEFINE_MUTEX(spec_ctrl_mutex);
+ 
+ /*
+- * The vendor and possibly platform specific bits which can be modified in
+- * x86_spec_ctrl_base.
++ * Keep track of the SPEC_CTRL MSR value for the current task, which may differ
++ * from x86_spec_ctrl_base due to STIBP/SSB in __speculation_ctrl_update().
+  */
+-static u64 __ro_after_init x86_spec_ctrl_mask = SPEC_CTRL_IBRS;
++void write_spec_ctrl_current(u64 val, bool force)
++{
++	if (this_cpu_read(x86_spec_ctrl_current) == val)
++		return;
++
++	this_cpu_write(x86_spec_ctrl_current, val);
++
++	/*
++	 * When KERNEL_IBRS this MSR is written on return-to-user, unless
++	 * forced the update can be delayed until that time.
++	 */
++	if (force || !cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS))
++		wrmsrl(MSR_IA32_SPEC_CTRL, val);
++}
++
++u64 spec_ctrl_current(void)
++{
++	return this_cpu_read(x86_spec_ctrl_current);
++}
++EXPORT_SYMBOL_GPL(spec_ctrl_current);
+ 
+ /*
+  * AMD specific MSR info for Speculative Store Bypass control.
+@@ -106,13 +132,21 @@ void __init check_bugs(void)
+ 	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
+ 		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+ 
+-	/* Allow STIBP in MSR_SPEC_CTRL if supported */
+-	if (boot_cpu_has(X86_FEATURE_STIBP))
+-		x86_spec_ctrl_mask |= SPEC_CTRL_STIBP;
+-
+ 	/* Select the proper CPU mitigations before patching alternatives: */
+ 	spectre_v1_select_mitigation();
+ 	spectre_v2_select_mitigation();
++	/*
++	 * retbleed_select_mitigation() relies on the state set by
++	 * spectre_v2_select_mitigation(); specifically it wants to know about
++	 * spectre_v2=ibrs.
++	 */
++	retbleed_select_mitigation();
++	/*
++	 * spectre_v2_user_select_mitigation() relies on the state set by
++	 * retbleed_select_mitigation(); specifically the STIBP selection is
++	 * forced for UNRET.
++	 */
++	spectre_v2_user_select_mitigation();
+ 	ssb_select_mitigation();
+ 	l1tf_select_mitigation();
+ 	md_clear_select_mitigation();
+@@ -152,31 +186,17 @@ void __init check_bugs(void)
+ #endif
+ }
+ 
++/*
++ * NOTE: For VMX, this function is not called in the vmexit path.
++ * It uses vmx_spec_ctrl_restore_host() instead.
++ */
+ void
+ x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
+ {
+-	u64 msrval, guestval, hostval = x86_spec_ctrl_base;
++	u64 msrval, guestval = guest_spec_ctrl, hostval = spec_ctrl_current();
+ 	struct thread_info *ti = current_thread_info();
+ 
+-	/* Is MSR_SPEC_CTRL implemented ? */
+ 	if (static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
+-		/*
+-		 * Restrict guest_spec_ctrl to supported values. Clear the
+-		 * modifiable bits in the host base value and or the
+-		 * modifiable bits from the guest value.
+-		 */
+-		guestval = hostval & ~x86_spec_ctrl_mask;
+-		guestval |= guest_spec_ctrl & x86_spec_ctrl_mask;
+-
+-		/* SSBD controlled in MSR_SPEC_CTRL */
+-		if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+-		    static_cpu_has(X86_FEATURE_AMD_SSBD))
+-			hostval |= ssbd_tif_to_spec_ctrl(ti->flags);
+-
+-		/* Conditional STIBP enabled? */
+-		if (static_branch_unlikely(&switch_to_cond_stibp))
+-			hostval |= stibp_tif_to_spec_ctrl(ti->flags);
+-
+ 		if (hostval != guestval) {
+ 			msrval = setguest ? guestval : hostval;
+ 			wrmsrl(MSR_IA32_SPEC_CTRL, msrval);
+@@ -708,12 +728,180 @@ static int __init nospectre_v1_cmdline(char *str)
+ }
+ early_param("nospectre_v1", nospectre_v1_cmdline);
+ 
+-#undef pr_fmt
+-#define pr_fmt(fmt)     "Spectre V2 : " fmt
+-
+ static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
+ 	SPECTRE_V2_NONE;
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)     "RETBleed: " fmt
++
++enum retbleed_mitigation {
++	RETBLEED_MITIGATION_NONE,
++	RETBLEED_MITIGATION_UNRET,
++	RETBLEED_MITIGATION_IBPB,
++	RETBLEED_MITIGATION_IBRS,
++	RETBLEED_MITIGATION_EIBRS,
++};
++
++enum retbleed_mitigation_cmd {
++	RETBLEED_CMD_OFF,
++	RETBLEED_CMD_AUTO,
++	RETBLEED_CMD_UNRET,
++	RETBLEED_CMD_IBPB,
++};
++
++const char * const retbleed_strings[] = {
++	[RETBLEED_MITIGATION_NONE]	= "Vulnerable",
++	[RETBLEED_MITIGATION_UNRET]	= "Mitigation: untrained return thunk",
++	[RETBLEED_MITIGATION_IBPB]	= "Mitigation: IBPB",
++	[RETBLEED_MITIGATION_IBRS]	= "Mitigation: IBRS",
++	[RETBLEED_MITIGATION_EIBRS]	= "Mitigation: Enhanced IBRS",
++};
++
++static enum retbleed_mitigation retbleed_mitigation __ro_after_init =
++	RETBLEED_MITIGATION_NONE;
++static enum retbleed_mitigation_cmd retbleed_cmd __ro_after_init =
++	RETBLEED_CMD_AUTO;
++
++static int __ro_after_init retbleed_nosmt = false;
++
++static int __init retbleed_parse_cmdline(char *str)
++{
++	if (!str)
++		return -EINVAL;
++
++	while (str) {
++		char *next = strchr(str, ',');
++		if (next) {
++			*next = 0;
++			next++;
++		}
++
++		if (!strcmp(str, "off")) {
++			retbleed_cmd = RETBLEED_CMD_OFF;
++		} else if (!strcmp(str, "auto")) {
++			retbleed_cmd = RETBLEED_CMD_AUTO;
++		} else if (!strcmp(str, "unret")) {
++			retbleed_cmd = RETBLEED_CMD_UNRET;
++		} else if (!strcmp(str, "ibpb")) {
++			retbleed_cmd = RETBLEED_CMD_IBPB;
++		} else if (!strcmp(str, "nosmt")) {
++			retbleed_nosmt = true;
++		} else {
++			pr_err("Ignoring unknown retbleed option (%s).", str);
++		}
++
++		str = next;
++	}
++
++	return 0;
++}
++early_param("retbleed", retbleed_parse_cmdline);
++
++#define RETBLEED_UNTRAIN_MSG "WARNING: BTB untrained return thunk mitigation is only effective on AMD/Hygon!\n"
++#define RETBLEED_INTEL_MSG "WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!\n"
++
++static void __init retbleed_select_mitigation(void)
++{
++	bool mitigate_smt = false;
++
++	if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
++		return;
++
++	switch (retbleed_cmd) {
++	case RETBLEED_CMD_OFF:
++		return;
++
++	case RETBLEED_CMD_UNRET:
++		if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY)) {
++			retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
++		} else {
++			pr_err("WARNING: kernel not compiled with CPU_UNRET_ENTRY.\n");
++			goto do_cmd_auto;
++		}
++		break;
++
++	case RETBLEED_CMD_IBPB:
++		if (!boot_cpu_has(X86_FEATURE_IBPB)) {
++			pr_err("WARNING: CPU does not support IBPB.\n");
++			goto do_cmd_auto;
++		} else if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) {
++			retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
++		} else {
++			pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n");
++			goto do_cmd_auto;
++		}
++		break;
++
++do_cmd_auto:
++	case RETBLEED_CMD_AUTO:
++	default:
++		if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
++		    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
++			if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY))
++				retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
++			else if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY) && boot_cpu_has(X86_FEATURE_IBPB))
++				retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
++		}
++
++		/*
++		 * The Intel mitigation (IBRS or eIBRS) was already selected in
++		 * spectre_v2_select_mitigation().  'retbleed_mitigation' will
++		 * be set accordingly below.
++		 */
++
++		break;
++	}
++
++	switch (retbleed_mitigation) {
++	case RETBLEED_MITIGATION_UNRET:
++		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
++		setup_force_cpu_cap(X86_FEATURE_UNRET);
++
++		if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
++		    boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
++			pr_err(RETBLEED_UNTRAIN_MSG);
++
++		mitigate_smt = true;
++		break;
++
++	case RETBLEED_MITIGATION_IBPB:
++		setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
++		mitigate_smt = true;
++		break;
++
++	default:
++		break;
++	}
++
++	if (mitigate_smt && !boot_cpu_has(X86_FEATURE_STIBP) &&
++	    (retbleed_nosmt || cpu_mitigations_auto_nosmt()))
++		cpu_smt_disable(false);
++
++	/*
++	 * Let IBRS trump all on Intel without affecting the effects of the
++	 * retbleed= cmdline option.
++	 */
++	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
++		switch (spectre_v2_enabled) {
++		case SPECTRE_V2_IBRS:
++			retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
++			break;
++		case SPECTRE_V2_EIBRS:
++		case SPECTRE_V2_EIBRS_RETPOLINE:
++		case SPECTRE_V2_EIBRS_LFENCE:
++			retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
++			break;
++		default:
++			pr_err(RETBLEED_INTEL_MSG);
++		}
++	}
++
++	pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
++}
++
++#undef pr_fmt
++#define pr_fmt(fmt)     "Spectre V2 : " fmt
++
+ static enum spectre_v2_user_mitigation spectre_v2_user_stibp __ro_after_init =
+ 	SPECTRE_V2_USER_NONE;
+ static enum spectre_v2_user_mitigation spectre_v2_user_ibpb __ro_after_init =
+@@ -784,6 +972,7 @@ enum spectre_v2_mitigation_cmd {
+ 	SPECTRE_V2_CMD_EIBRS,
+ 	SPECTRE_V2_CMD_EIBRS_RETPOLINE,
+ 	SPECTRE_V2_CMD_EIBRS_LFENCE,
++	SPECTRE_V2_CMD_IBRS,
+ };
+ 
+ enum spectre_v2_user_cmd {
+@@ -824,13 +1013,15 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure)
+ 		pr_info("spectre_v2_user=%s forced on command line.\n", reason);
+ }
+ 
++static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd;
++
+ static enum spectre_v2_user_cmd __init
+-spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
++spectre_v2_parse_user_cmdline(void)
+ {
+ 	char arg[20];
+ 	int ret, i;
+ 
+-	switch (v2_cmd) {
++	switch (spectre_v2_cmd) {
+ 	case SPECTRE_V2_CMD_NONE:
+ 		return SPECTRE_V2_USER_CMD_NONE;
+ 	case SPECTRE_V2_CMD_FORCE:
+@@ -856,15 +1047,16 @@ spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
+ 	return SPECTRE_V2_USER_CMD_AUTO;
+ }
+ 
+-static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
++static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
+ {
+-	return (mode == SPECTRE_V2_EIBRS ||
+-		mode == SPECTRE_V2_EIBRS_RETPOLINE ||
+-		mode == SPECTRE_V2_EIBRS_LFENCE);
++	return mode == SPECTRE_V2_IBRS ||
++	       mode == SPECTRE_V2_EIBRS ||
++	       mode == SPECTRE_V2_EIBRS_RETPOLINE ||
++	       mode == SPECTRE_V2_EIBRS_LFENCE;
+ }
+ 
+ static void __init
+-spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
++spectre_v2_user_select_mitigation(void)
+ {
+ 	enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
+ 	bool smt_possible = IS_ENABLED(CONFIG_SMP);
+@@ -877,7 +1069,7 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+ 	    cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
+ 		smt_possible = false;
+ 
+-	cmd = spectre_v2_parse_user_cmdline(v2_cmd);
++	cmd = spectre_v2_parse_user_cmdline();
+ 	switch (cmd) {
+ 	case SPECTRE_V2_USER_CMD_NONE:
+ 		goto set_mode;
+@@ -925,12 +1117,12 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+ 	}
+ 
+ 	/*
+-	 * If no STIBP, enhanced IBRS is enabled or SMT impossible, STIBP is not
+-	 * required.
++	 * If no STIBP, IBRS or enhanced IBRS is enabled, or SMT impossible,
++	 * STIBP is not required.
+ 	 */
+ 	if (!boot_cpu_has(X86_FEATURE_STIBP) ||
+ 	    !smt_possible ||
+-	    spectre_v2_in_eibrs_mode(spectre_v2_enabled))
++	    spectre_v2_in_ibrs_mode(spectre_v2_enabled))
+ 		return;
+ 
+ 	/*
+@@ -942,6 +1134,13 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+ 	    boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
+ 		mode = SPECTRE_V2_USER_STRICT_PREFERRED;
+ 
++	if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET) {
++		if (mode != SPECTRE_V2_USER_STRICT &&
++		    mode != SPECTRE_V2_USER_STRICT_PREFERRED)
++			pr_info("Selecting STIBP always-on mode to complement retbleed mitigation\n");
++		mode = SPECTRE_V2_USER_STRICT_PREFERRED;
++	}
++
+ 	spectre_v2_user_stibp = mode;
+ 
+ set_mode:
+@@ -955,6 +1154,7 @@ static const char * const spectre_v2_strings[] = {
+ 	[SPECTRE_V2_EIBRS]			= "Mitigation: Enhanced IBRS",
+ 	[SPECTRE_V2_EIBRS_LFENCE]		= "Mitigation: Enhanced IBRS + LFENCE",
+ 	[SPECTRE_V2_EIBRS_RETPOLINE]		= "Mitigation: Enhanced IBRS + Retpolines",
++	[SPECTRE_V2_IBRS]			= "Mitigation: IBRS",
+ };
+ 
+ static const struct {
+@@ -972,6 +1172,7 @@ static const struct {
+ 	{ "eibrs,lfence",	SPECTRE_V2_CMD_EIBRS_LFENCE,	  false },
+ 	{ "eibrs,retpoline",	SPECTRE_V2_CMD_EIBRS_RETPOLINE,	  false },
+ 	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
++	{ "ibrs",		SPECTRE_V2_CMD_IBRS,              false },
+ };
+ 
+ static void __init spec_v2_print_cond(const char *reason, bool secure)
+@@ -1034,6 +1235,30 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 		return SPECTRE_V2_CMD_AUTO;
+ 	}
+ 
++	if (cmd == SPECTRE_V2_CMD_IBRS && !IS_ENABLED(CONFIG_CPU_IBRS_ENTRY)) {
++		pr_err("%s selected but not compiled in. Switching to AUTO select\n",
++		       mitigation_options[i].option);
++		return SPECTRE_V2_CMD_AUTO;
++	}
++
++	if (cmd == SPECTRE_V2_CMD_IBRS && boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
++		pr_err("%s selected but not Intel CPU. Switching to AUTO select\n",
++		       mitigation_options[i].option);
++		return SPECTRE_V2_CMD_AUTO;
++	}
++
++	if (cmd == SPECTRE_V2_CMD_IBRS && !boot_cpu_has(X86_FEATURE_IBRS)) {
++		pr_err("%s selected but CPU doesn't have IBRS. Switching to AUTO select\n",
++		       mitigation_options[i].option);
++		return SPECTRE_V2_CMD_AUTO;
++	}
++
++	if (cmd == SPECTRE_V2_CMD_IBRS && boot_cpu_has(X86_FEATURE_XENPV)) {
++		pr_err("%s selected but running as XenPV guest. Switching to AUTO select\n",
++		       mitigation_options[i].option);
++		return SPECTRE_V2_CMD_AUTO;
++	}
++
+ 	spec_v2_print_cond(mitigation_options[i].option,
+ 			   mitigation_options[i].secure);
+ 	return cmd;
+@@ -1049,6 +1274,22 @@ static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void)
+ 	return SPECTRE_V2_RETPOLINE;
+ }
+ 
++/* Disable in-kernel use of non-RSB RET predictors */
++static void __init spec_ctrl_disable_kernel_rrsba(void)
++{
++	u64 ia32_cap;
++
++	if (!boot_cpu_has(X86_FEATURE_RRSBA_CTRL))
++		return;
++
++	ia32_cap = x86_read_arch_cap_msr();
++
++	if (ia32_cap & ARCH_CAP_RRSBA) {
++		x86_spec_ctrl_base |= SPEC_CTRL_RRSBA_DIS_S;
++		write_spec_ctrl_current(x86_spec_ctrl_base, true);
++	}
++}
++
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -1073,6 +1314,15 @@ static void __init spectre_v2_select_mitigation(void)
+ 			break;
+ 		}
+ 
++		if (IS_ENABLED(CONFIG_CPU_IBRS_ENTRY) &&
++		    boot_cpu_has_bug(X86_BUG_RETBLEED) &&
++		    retbleed_cmd != RETBLEED_CMD_OFF &&
++		    boot_cpu_has(X86_FEATURE_IBRS) &&
++		    boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
++			mode = SPECTRE_V2_IBRS;
++			break;
++		}
++
+ 		mode = spectre_v2_select_retpoline();
+ 		break;
+ 
+@@ -1089,6 +1339,10 @@ static void __init spectre_v2_select_mitigation(void)
+ 		mode = spectre_v2_select_retpoline();
+ 		break;
+ 
++	case SPECTRE_V2_CMD_IBRS:
++		mode = SPECTRE_V2_IBRS;
++		break;
++
+ 	case SPECTRE_V2_CMD_EIBRS:
+ 		mode = SPECTRE_V2_EIBRS;
+ 		break;
+@@ -1105,10 +1359,9 @@ static void __init spectre_v2_select_mitigation(void)
+ 	if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
+ 		pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
+ 
+-	if (spectre_v2_in_eibrs_mode(mode)) {
+-		/* Force it so VMEXIT will restore correctly */
++	if (spectre_v2_in_ibrs_mode(mode)) {
+ 		x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+-		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++		write_spec_ctrl_current(x86_spec_ctrl_base, true);
+ 	}
+ 
+ 	switch (mode) {
+@@ -1116,6 +1369,10 @@ static void __init spectre_v2_select_mitigation(void)
+ 	case SPECTRE_V2_EIBRS:
+ 		break;
+ 
++	case SPECTRE_V2_IBRS:
++		setup_force_cpu_cap(X86_FEATURE_KERNEL_IBRS);
++		break;
++
+ 	case SPECTRE_V2_LFENCE:
+ 	case SPECTRE_V2_EIBRS_LFENCE:
+ 		setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE);
+@@ -1127,43 +1384,107 @@ static void __init spectre_v2_select_mitigation(void)
+ 		break;
+ 	}
+ 
++	/*
++	 * Disable alternate RSB predictions in kernel when indirect CALLs and
++	 * JMPs gets protection against BHI and Intramode-BTI, but RET
++	 * prediction from a non-RSB predictor is still a risk.
++	 */
++	if (mode == SPECTRE_V2_EIBRS_LFENCE ||
++	    mode == SPECTRE_V2_EIBRS_RETPOLINE ||
++	    mode == SPECTRE_V2_RETPOLINE)
++		spec_ctrl_disable_kernel_rrsba();
++
+ 	spectre_v2_enabled = mode;
+ 	pr_info("%s\n", spectre_v2_strings[mode]);
+ 
+ 	/*
+-	 * If spectre v2 protection has been enabled, unconditionally fill
+-	 * RSB during a context switch; this protects against two independent
+-	 * issues:
++	 * If Spectre v2 protection has been enabled, fill the RSB during a
++	 * context switch.  In general there are two types of RSB attacks
++	 * across context switches, for which the CALLs/RETs may be unbalanced.
+ 	 *
+-	 *	- RSB underflow (and switch to BTB) on Skylake+
+-	 *	- SpectreRSB variant of spectre v2 on X86_BUG_SPECTRE_V2 CPUs
++	 * 1) RSB underflow
++	 *
++	 *    Some Intel parts have "bottomless RSB".  When the RSB is empty,
++	 *    speculated return targets may come from the branch predictor,
++	 *    which could have a user-poisoned BTB or BHB entry.
++	 *
++	 *    AMD has it even worse: *all* returns are speculated from the BTB,
++	 *    regardless of the state of the RSB.
++	 *
++	 *    When IBRS or eIBRS is enabled, the "user -> kernel" attack
++	 *    scenario is mitigated by the IBRS branch prediction isolation
++	 *    properties, so the RSB buffer filling wouldn't be necessary to
++	 *    protect against this type of attack.
++	 *
++	 *    The "user -> user" attack scenario is mitigated by RSB filling.
++	 *
++	 * 2) Poisoned RSB entry
++	 *
++	 *    If the 'next' in-kernel return stack is shorter than 'prev',
++	 *    'next' could be tricked into speculating with a user-poisoned RSB
++	 *    entry.
++	 *
++	 *    The "user -> kernel" attack scenario is mitigated by SMEP and
++	 *    eIBRS.
++	 *
++	 *    The "user -> user" scenario, also known as SpectreBHB, requires
++	 *    RSB clearing.
++	 *
++	 * So to mitigate all cases, unconditionally fill RSB on context
++	 * switches.
++	 *
++	 * FIXME: Is this pointless for retbleed-affected AMD?
+ 	 */
+ 	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
+ 	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
+ 
+ 	/*
+-	 * Retpoline means the kernel is safe because it has no indirect
+-	 * branches. Enhanced IBRS protects firmware too, so, enable restricted
+-	 * speculation around firmware calls only when Enhanced IBRS isn't
+-	 * supported.
++	 * Similar to context switches, there are two types of RSB attacks
++	 * after vmexit:
++	 *
++	 * 1) RSB underflow
++	 *
++	 * 2) Poisoned RSB entry
++	 *
++	 * When retpoline is enabled, both are mitigated by filling/clearing
++	 * the RSB.
++	 *
++	 * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
++	 * prediction isolation protections, RSB still needs to be cleared
++	 * because of #2.  Note that SMEP provides no protection here, unlike
++	 * user-space-poisoned RSB entries.
++	 *
++	 * eIBRS, on the other hand, has RSB-poisoning protections, so it
++	 * doesn't need RSB clearing after vmexit.
++	 */
++	if (boot_cpu_has(X86_FEATURE_RETPOLINE) ||
++	    boot_cpu_has(X86_FEATURE_KERNEL_IBRS))
++		setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
++
++	/*
++	 * Retpoline protects the kernel, but doesn't protect firmware.  IBRS
++	 * and Enhanced IBRS protect firmware too, so enable IBRS around
++	 * firmware calls only when IBRS / Enhanced IBRS aren't otherwise
++	 * enabled.
+ 	 *
+ 	 * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
+ 	 * the user might select retpoline on the kernel command line and if
+ 	 * the CPU supports Enhanced IBRS, kernel might un-intentionally not
+ 	 * enable IBRS around firmware calls.
+ 	 */
+-	if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_eibrs_mode(mode)) {
++	if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) {
+ 		setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
+ 		pr_info("Enabling Restricted Speculation for firmware calls\n");
+ 	}
+ 
+ 	/* Set up IBPB and STIBP depending on the general spectre V2 command */
+-	spectre_v2_user_select_mitigation(cmd);
++	spectre_v2_cmd = cmd;
+ }
+ 
+ static void update_stibp_msr(void * __unused)
+ {
+-	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++	u64 val = spec_ctrl_current() | (x86_spec_ctrl_base & SPEC_CTRL_STIBP);
++	write_spec_ctrl_current(val, true);
+ }
+ 
+ /* Update x86_spec_ctrl_base in case SMT state changed. */
+@@ -1379,16 +1700,6 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
+ 		break;
+ 	}
+ 
+-	/*
+-	 * If SSBD is controlled by the SPEC_CTRL MSR, then set the proper
+-	 * bit in the mask to allow guests to use the mitigation even in the
+-	 * case where the host does not enable it.
+-	 */
+-	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+-	    static_cpu_has(X86_FEATURE_AMD_SSBD)) {
+-		x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
+-	}
+-
+ 	/*
+ 	 * We have three CPU feature flags that are in play here:
+ 	 *  - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible.
+@@ -1406,7 +1717,7 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
+ 			x86_amd_ssb_disable();
+ 		} else {
+ 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
+-			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++			write_spec_ctrl_current(x86_spec_ctrl_base, true);
+ 		}
+ 	}
+ 
+@@ -1624,7 +1935,7 @@ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
+ void x86_spec_ctrl_setup_ap(void)
+ {
+ 	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
+-		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++		write_spec_ctrl_current(x86_spec_ctrl_base, true);
+ 
+ 	if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
+ 		x86_amd_ssb_disable();
+@@ -1861,7 +2172,7 @@ static ssize_t mmio_stale_data_show_state(char *buf)
+ 
+ static char *stibp_state(void)
+ {
+-	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
++	if (spectre_v2_in_ibrs_mode(spectre_v2_enabled))
+ 		return "";
+ 
+ 	switch (spectre_v2_user_stibp) {
+@@ -1917,6 +2228,24 @@ static ssize_t srbds_show_state(char *buf)
+ 	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
+ }
+ 
++static ssize_t retbleed_show_state(char *buf)
++{
++	if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET) {
++	    if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
++		boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
++		    return sprintf(buf, "Vulnerable: untrained return thunk on non-Zen uarch\n");
++
++	    return sprintf(buf, "%s; SMT %s\n",
++			   retbleed_strings[retbleed_mitigation],
++			   !sched_smt_active() ? "disabled" :
++			   spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
++			   spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ?
++			   "enabled with STIBP protection" : "vulnerable");
++	}
++
++	return sprintf(buf, "%s\n", retbleed_strings[retbleed_mitigation]);
++}
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
+@@ -1962,6 +2291,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 	case X86_BUG_MMIO_STALE_DATA:
+ 		return mmio_stale_data_show_state(buf);
+ 
++	case X86_BUG_RETBLEED:
++		return retbleed_show_state(buf);
++
+ 	default:
+ 		break;
+ 	}
+@@ -2018,4 +2350,9 @@ ssize_t cpu_show_mmio_stale_data(struct device *dev, struct device_attribute *at
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_STALE_DATA);
+ }
++
++ssize_t cpu_show_retbleed(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_RETBLEED);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 4917c2698ac1f..901352bd3b426 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1092,48 +1092,60 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ 	{}
+ };
+ 
++#define VULNBL(vendor, family, model, blacklist)	\
++	X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, blacklist)
++
+ #define VULNBL_INTEL_STEPPINGS(model, steppings, issues)		   \
+ 	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,		   \
+ 					    INTEL_FAM6_##model, steppings, \
+ 					    X86_FEATURE_ANY, issues)
+ 
++#define VULNBL_AMD(family, blacklist)		\
++	VULNBL(AMD, family, X86_MODEL_ANY, blacklist)
++
++#define VULNBL_HYGON(family, blacklist)		\
++	VULNBL(HYGON, family, X86_MODEL_ANY, blacklist)
++
+ #define SRBDS		BIT(0)
+ /* CPU is affected by X86_BUG_MMIO_STALE_DATA */
+ #define MMIO		BIT(1)
+ /* CPU is affected by Shared Buffers Data Sampling (SBDS), a variant of X86_BUG_MMIO_STALE_DATA */
+ #define MMIO_SBDS	BIT(2)
++/* CPU is affected by RETbleed, speculating where you would not expect it */
++#define RETBLEED	BIT(3)
+ 
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(HASWELL,		X86_STEPPING_ANY,		SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(HASWELL_L,	X86_STEPPING_ANY,		SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(HASWELL_G,	X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(HASWELL_X,	BIT(2) | BIT(4),		MMIO),
+-	VULNBL_INTEL_STEPPINGS(BROADWELL_D,	X86_STEPPINGS(0x3, 0x5),	MMIO),
++	VULNBL_INTEL_STEPPINGS(HASWELL_X,	X86_STEPPING_ANY,		MMIO),
++	VULNBL_INTEL_STEPPINGS(BROADWELL_D,	X86_STEPPING_ANY,		MMIO),
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL_X,	X86_STEPPING_ANY,		MMIO),
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPINGS(0x3, 0x3),	SRBDS | MMIO),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	BIT(3) | BIT(4) | BIT(6) |
+-						BIT(7) | BIT(0xB),              MMIO),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPINGS(0x3, 0x3),	SRBDS | MMIO),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPINGS(0x9, 0xC),	SRBDS | MMIO),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPINGS(0x0, 0x8),	SRBDS),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPINGS(0x9, 0xD),	SRBDS | MMIO),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPINGS(0x0, 0x8),	SRBDS),
+-	VULNBL_INTEL_STEPPINGS(ICELAKE_L,	X86_STEPPINGS(0x5, 0x5),	MMIO | MMIO_SBDS),
+-	VULNBL_INTEL_STEPPINGS(ICELAKE_D,	X86_STEPPINGS(0x1, 0x1),	MMIO),
+-	VULNBL_INTEL_STEPPINGS(ICELAKE_X,	X86_STEPPINGS(0x4, 0x6),	MMIO),
+-	VULNBL_INTEL_STEPPINGS(COMETLAKE,	BIT(2) | BIT(3) | BIT(5),	MMIO | MMIO_SBDS),
+-	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPINGS(0x1, 0x1),	MMIO | MMIO_SBDS),
+-	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPINGS(0x0, 0x0),	MMIO),
+-	VULNBL_INTEL_STEPPINGS(LAKEFIELD,	X86_STEPPINGS(0x1, 0x1),	MMIO | MMIO_SBDS),
+-	VULNBL_INTEL_STEPPINGS(ROCKETLAKE,	X86_STEPPINGS(0x1, 0x1),	MMIO),
+-	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT,	X86_STEPPINGS(0x1, 0x1),	MMIO | MMIO_SBDS),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	X86_STEPPING_ANY,		MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(CANNONLAKE_L,	X86_STEPPING_ANY,		RETBLEED),
++	VULNBL_INTEL_STEPPINGS(ICELAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(ICELAKE_D,	X86_STEPPING_ANY,		MMIO),
++	VULNBL_INTEL_STEPPINGS(ICELAKE_X,	X86_STEPPING_ANY,		MMIO),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPINGS(0x0, 0x0),	MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(LAKEFIELD,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(ROCKETLAKE,	X86_STEPPING_ANY,		MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS),
+ 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D,	X86_STEPPING_ANY,		MMIO),
+-	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPINGS(0x0, 0x0),	MMIO | MMIO_SBDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS),
++
++	VULNBL_AMD(0x15, RETBLEED),
++	VULNBL_AMD(0x16, RETBLEED),
++	VULNBL_AMD(0x17, RETBLEED),
++	VULNBL_HYGON(0x18, RETBLEED),
+ 	{}
+ };
+ 
+@@ -1235,6 +1247,11 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	    !arch_cap_mmio_immune(ia32_cap))
+ 		setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA);
+ 
++	if (!cpu_has(c, X86_FEATURE_BTC_NO)) {
++		if (cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RSBA))
++			setup_force_cpu_bug(X86_BUG_RETBLEED);
++	}
++
+ 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ 		return;
+ 
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 093f5fc860e3f..91df90abc1d9c 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -60,6 +60,8 @@ extern void tsx_disable(void);
+ static inline void tsx_init(void) { }
+ #endif /* CONFIG_CPU_SUP_INTEL */
+ 
++extern void init_spectral_chicken(struct cpuinfo_x86 *c);
++
+ extern void get_cpu_cap(struct cpuinfo_x86 *c);
+ extern void get_cpu_address_sizes(struct cpuinfo_x86 *c);
+ extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
+diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
+index b78c471ec344b..774ca6bfda9f4 100644
+--- a/arch/x86/kernel/cpu/hygon.c
++++ b/arch/x86/kernel/cpu/hygon.c
+@@ -318,6 +318,12 @@ static void init_hygon(struct cpuinfo_x86 *c)
+ 	/* get apicid instead of initial apic id from cpuid */
+ 	c->apicid = hard_smp_processor_id();
+ 
++	/*
++	 * XXX someone from Hygon needs to confirm this DTRT
++	 *
++	init_spectral_chicken(c);
++	 */
++
+ 	set_cpu_cap(c, X86_FEATURE_ZEN);
+ 	set_cpu_cap(c, X86_FEATURE_CPB);
+ 
+diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
+index 866c9a9bcdee7..82fe492121bb3 100644
+--- a/arch/x86/kernel/cpu/scattered.c
++++ b/arch/x86/kernel/cpu/scattered.c
+@@ -26,6 +26,7 @@ struct cpuid_bit {
+ static const struct cpuid_bit cpuid_bits[] = {
+ 	{ X86_FEATURE_APERFMPERF,       CPUID_ECX,  0, 0x00000006, 0 },
+ 	{ X86_FEATURE_EPB,		CPUID_ECX,  3, 0x00000006, 0 },
++	{ X86_FEATURE_RRSBA_CTRL,	CPUID_EDX,  2, 0x00000007, 2 },
+ 	{ X86_FEATURE_CQM_LLC,		CPUID_EDX,  1, 0x0000000f, 0 },
+ 	{ X86_FEATURE_CQM_OCCUP_LLC,	CPUID_EDX,  0, 0x0000000f, 1 },
+ 	{ X86_FEATURE_CQM_MBM_TOTAL,	CPUID_EDX,  1, 0x0000000f, 1 },
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 7edbd5ee5ed43..dca5cf82144c0 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -308,7 +308,7 @@ union ftrace_op_code_union {
+ 	} __attribute__((packed));
+ };
+ 
+-#define RET_SIZE		1
++#define RET_SIZE		(IS_ENABLED(CONFIG_RETPOLINE) ? 5 : 1 + IS_ENABLED(CONFIG_SLS))
+ 
+ static unsigned long
+ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+@@ -367,7 +367,10 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ 
+ 	/* The trampoline ends with ret(q) */
+ 	retq = (unsigned long)ftrace_stub;
+-	ret = copy_from_kernel_nofault(ip, (void *)retq, RET_SIZE);
++	if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
++		memcpy(ip, text_gen_insn(JMP32_INSN_OPCODE, ip, &__x86_return_thunk), JMP32_INSN_SIZE);
++	else
++		ret = copy_from_kernel_nofault(ip, (void *)retq, RET_SIZE);
+ 	if (WARN_ON(ret < 0))
+ 		goto fail;
+ 
+diff --git a/arch/x86/kernel/ftrace_32.S b/arch/x86/kernel/ftrace_32.S
+index e405fe1a8bf41..a0ed0e4a2c0cd 100644
+--- a/arch/x86/kernel/ftrace_32.S
++++ b/arch/x86/kernel/ftrace_32.S
+@@ -19,7 +19,7 @@
+ #endif
+ 
+ SYM_FUNC_START(__fentry__)
+-	ret
++	RET
+ SYM_FUNC_END(__fentry__)
+ EXPORT_SYMBOL(__fentry__)
+ 
+@@ -84,7 +84,7 @@ ftrace_graph_call:
+ 
+ /* This is weak to keep gas from relaxing the jumps */
+ SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK)
+-	ret
++	RET
+ SYM_CODE_END(ftrace_caller)
+ 
+ SYM_CODE_START(ftrace_regs_caller)
+@@ -177,7 +177,7 @@ SYM_CODE_START(ftrace_graph_caller)
+ 	popl	%edx
+ 	popl	%ecx
+ 	popl	%eax
+-	ret
++	RET
+ SYM_CODE_END(ftrace_graph_caller)
+ 
+ .globl return_to_handler
+diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
+index ac3d5f22fe64b..e3a375185a1b4 100644
+--- a/arch/x86/kernel/ftrace_64.S
++++ b/arch/x86/kernel/ftrace_64.S
+@@ -132,7 +132,7 @@
+ #ifdef CONFIG_DYNAMIC_FTRACE
+ 
+ SYM_FUNC_START(__fentry__)
+-	retq
++	RET
+ SYM_FUNC_END(__fentry__)
+ EXPORT_SYMBOL(__fentry__)
+ 
+@@ -170,10 +170,11 @@ SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)
+ 
+ /*
+  * This is weak to keep gas from relaxing the jumps.
+- * It is also used to copy the retq for trampolines.
++ * It is also used to copy the RET for trampolines.
+  */
+ SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK)
+-	retq
++	UNWIND_HINT_FUNC
++	RET
+ SYM_FUNC_END(ftrace_epilogue)
+ 
+ SYM_FUNC_START(ftrace_regs_caller)
+@@ -265,7 +266,7 @@ SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
+ 	restore_mcount_regs 8
+ 	/* Restore flags */
+ 	popfq
+-	UNWIND_HINT_RET_OFFSET
++	UNWIND_HINT_FUNC
+ 	jmp	ftrace_epilogue
+ 
+ SYM_FUNC_END(ftrace_regs_caller)
+@@ -287,7 +288,7 @@ fgraph_trace:
+ #endif
+ 
+ SYM_INNER_LABEL(ftrace_stub, SYM_L_GLOBAL)
+-	retq
++	RET
+ 
+ trace:
+ 	/* save_mcount_regs fills in first two parameters */
+@@ -319,7 +320,7 @@ SYM_FUNC_START(ftrace_graph_caller)
+ 
+ 	restore_mcount_regs
+ 
+-	retq
++	RET
+ SYM_FUNC_END(ftrace_graph_caller)
+ 
+ SYM_CODE_START(return_to_handler)
+diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
+index 7ed84c2822332..3f1691b89231f 100644
+--- a/arch/x86/kernel/head_32.S
++++ b/arch/x86/kernel/head_32.S
+@@ -23,6 +23,7 @@
+ #include <asm/cpufeatures.h>
+ #include <asm/percpu.h>
+ #include <asm/nops.h>
++#include <asm/nospec-branch.h>
+ #include <asm/bootparam.h>
+ #include <asm/export.h>
+ #include <asm/pgtable_32.h>
+@@ -354,7 +355,7 @@ setup_once:
+ #endif
+ 
+ 	andl $0,setup_once_ref	/* Once is enough, thanks */
+-	ret
++	RET
+ 
+ SYM_FUNC_START(early_idt_handler_array)
+ 	# 36(%esp) %eflags
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index 3c417734790f0..0424c2a6c15b8 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -321,6 +321,8 @@ SYM_CODE_END(start_cpu0)
+ SYM_CODE_START_NOALIGN(vc_boot_ghcb)
+ 	UNWIND_HINT_IRET_REGS offset=8
+ 
++	ANNOTATE_UNRET_END
++
+ 	/* Build pt_regs */
+ 	PUSH_AND_CLEAR_REGS
+ 
+@@ -378,6 +380,7 @@ SYM_CODE_START(early_idt_handler_array)
+ SYM_CODE_END(early_idt_handler_array)
+ 
+ SYM_CODE_START_LOCAL(early_idt_handler_common)
++	ANNOTATE_UNRET_END
+ 	/*
+ 	 * The stack is the hardware frame, an error code or zero, and the
+ 	 * vector number.
+@@ -424,6 +427,8 @@ SYM_CODE_END(early_idt_handler_common)
+ SYM_CODE_START_NOALIGN(vc_no_ghcb)
+ 	UNWIND_HINT_IRET_REGS offset=8
+ 
++	ANNOTATE_UNRET_END
++
+ 	/* Build pt_regs */
+ 	PUSH_AND_CLEAR_REGS
+ 
+diff --git a/arch/x86/kernel/irqflags.S b/arch/x86/kernel/irqflags.S
+index 0db0375235b48..a9c36400bdf8f 100644
+--- a/arch/x86/kernel/irqflags.S
++++ b/arch/x86/kernel/irqflags.S
+@@ -10,7 +10,7 @@
+ SYM_FUNC_START(native_save_fl)
+ 	pushf
+ 	pop %_ASM_AX
+-	ret
++	RET
+ SYM_FUNC_END(native_save_fl)
+ EXPORT_SYMBOL(native_save_fl)
+ 
+@@ -21,6 +21,6 @@ EXPORT_SYMBOL(native_save_fl)
+ SYM_FUNC_START(native_restore_fl)
+ 	push %_ASM_ARG1
+ 	popf
+-	ret
++	RET
+ SYM_FUNC_END(native_restore_fl)
+ EXPORT_SYMBOL(native_restore_fl)
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 535da74c124e4..ee85f1b258d0a 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -768,7 +768,7 @@ asm(
+ 	RESTORE_REGS_STRING
+ 	"	popfl\n"
+ #endif
+-	"	ret\n"
++	ASM_RET
+ 	".size kretprobe_trampoline, .-kretprobe_trampoline\n"
+ );
+ NOKPROBE_SYMBOL(kretprobe_trampoline);
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index 971609fb15c59..fe9babe94861f 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -953,7 +953,7 @@ asm(
+ "movq	__per_cpu_offset(,%rdi,8), %rax;"
+ "cmpb	$0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
+ "setne	%al;"
+-"ret;"
++ASM_RET
+ ".size __raw_callee_save___kvm_vcpu_is_preempted, .-__raw_callee_save___kvm_vcpu_is_preempted;"
+ ".popsection");
+ 
+diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
+index 5e9a34b5bd741..455e195847f9e 100644
+--- a/arch/x86/kernel/module.c
++++ b/arch/x86/kernel/module.c
+@@ -251,7 +251,8 @@ int module_finalize(const Elf_Ehdr *hdr,
+ 		    struct module *me)
+ {
+ 	const Elf_Shdr *s, *text = NULL, *alt = NULL, *locks = NULL,
+-		*para = NULL, *orc = NULL, *orc_ip = NULL;
++		*para = NULL, *orc = NULL, *orc_ip = NULL,
++		*retpolines = NULL, *returns = NULL;
+ 	char *secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
+ 
+ 	for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) {
+@@ -267,8 +268,20 @@ int module_finalize(const Elf_Ehdr *hdr,
+ 			orc = s;
+ 		if (!strcmp(".orc_unwind_ip", secstrings + s->sh_name))
+ 			orc_ip = s;
++		if (!strcmp(".retpoline_sites", secstrings + s->sh_name))
++			retpolines = s;
++		if (!strcmp(".return_sites", secstrings + s->sh_name))
++			returns = s;
+ 	}
+ 
++	if (retpolines) {
++		void *rseg = (void *)retpolines->sh_addr;
++		apply_retpolines(rseg, rseg + retpolines->sh_size);
++	}
++	if (returns) {
++		void *rseg = (void *)returns->sh_addr;
++		apply_returns(rseg, rseg + returns->sh_size);
++	}
+ 	if (alt) {
+ 		/* patch .altinstructions */
+ 		void *aseg = (void *)alt->sh_addr;
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 5e5fcf5c376de..e21937680d1f2 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -40,7 +40,7 @@ extern void _paravirt_nop(void);
+ asm (".pushsection .entry.text, \"ax\"\n"
+      ".global _paravirt_nop\n"
+      "_paravirt_nop:\n\t"
+-     "ret\n\t"
++     ASM_RET
+      ".size _paravirt_nop, . - _paravirt_nop\n\t"
+      ".type _paravirt_nop, @function\n\t"
+      ".popsection");
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 0aa1baf9a3afc..a2823682d64e7 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -556,7 +556,7 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
+ 	}
+ 
+ 	if (updmsr)
+-		wrmsrl(MSR_IA32_SPEC_CTRL, msr);
++		write_spec_ctrl_current(msr, false);
+ }
+ 
+ static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk)
+diff --git a/arch/x86/kernel/relocate_kernel_32.S b/arch/x86/kernel/relocate_kernel_32.S
+index 94b33885f8d20..82a839885a3c5 100644
+--- a/arch/x86/kernel/relocate_kernel_32.S
++++ b/arch/x86/kernel/relocate_kernel_32.S
+@@ -7,10 +7,12 @@
+ #include <linux/linkage.h>
+ #include <asm/page_types.h>
+ #include <asm/kexec.h>
++#include <asm/nospec-branch.h>
+ #include <asm/processor-flags.h>
+ 
+ /*
+- * Must be relocatable PIC code callable as a C function
++ * Must be relocatable PIC code callable as a C function, in particular
++ * there must be a plain RET and not jump to return thunk.
+  */
+ 
+ #define PTR(x) (x << 2)
+@@ -91,7 +93,9 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
+ 	movl    %edi, %eax
+ 	addl    $(identity_mapped - relocate_kernel), %eax
+ 	pushl   %eax
++	ANNOTATE_UNRET_SAFE
+ 	ret
++	int3
+ SYM_CODE_END(relocate_kernel)
+ 
+ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
+@@ -159,12 +163,15 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
+ 	xorl    %edx, %edx
+ 	xorl    %esi, %esi
+ 	xorl    %ebp, %ebp
++	ANNOTATE_UNRET_SAFE
+ 	ret
++	int3
+ 1:
+ 	popl	%edx
+ 	movl	CP_PA_SWAP_PAGE(%edi), %esp
+ 	addl	$PAGE_SIZE, %esp
+ 2:
++	ANNOTATE_RETPOLINE_SAFE
+ 	call	*%edx
+ 
+ 	/* get the re-entry point of the peer system */
+@@ -190,7 +197,9 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
+ 	movl	%edi, %eax
+ 	addl	$(virtual_mapped - relocate_kernel), %eax
+ 	pushl	%eax
++	ANNOTATE_UNRET_SAFE
+ 	ret
++	int3
+ SYM_CODE_END(identity_mapped)
+ 
+ SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
+@@ -208,7 +217,9 @@ SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
+ 	popl	%edi
+ 	popl	%esi
+ 	popl	%ebx
++	ANNOTATE_UNRET_SAFE
+ 	ret
++	int3
+ SYM_CODE_END(virtual_mapped)
+ 
+ 	/* Do the copies */
+@@ -271,7 +282,9 @@ SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
+ 	popl	%edi
+ 	popl	%ebx
+ 	popl	%ebp
++	ANNOTATE_UNRET_SAFE
+ 	ret
++	int3
+ SYM_CODE_END(swap_pages)
+ 
+ 	.globl kexec_control_code_size
+diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
+index a4d9a261425b0..d2b7d212caa42 100644
+--- a/arch/x86/kernel/relocate_kernel_64.S
++++ b/arch/x86/kernel/relocate_kernel_64.S
+@@ -13,7 +13,8 @@
+ #include <asm/unwind_hints.h>
+ 
+ /*
+- * Must be relocatable PIC code callable as a C function
++ * Must be relocatable PIC code callable as a C function, in particular
++ * there must be a plain RET and not jump to return thunk.
+  */
+ 
+ #define PTR(x) (x << 3)
+@@ -104,7 +105,9 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
+ 	/* jump to identity mapped page */
+ 	addq	$(identity_mapped - relocate_kernel), %r8
+ 	pushq	%r8
++	ANNOTATE_UNRET_SAFE
+ 	ret
++	int3
+ SYM_CODE_END(relocate_kernel)
+ 
+ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
+@@ -191,7 +194,9 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
+ 	xorl	%r14d, %r14d
+ 	xorl	%r15d, %r15d
+ 
++	ANNOTATE_UNRET_SAFE
+ 	ret
++	int3
+ 
+ 1:
+ 	popq	%rdx
+@@ -210,7 +215,9 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
+ 	call	swap_pages
+ 	movq	$virtual_mapped, %rax
+ 	pushq	%rax
++	ANNOTATE_UNRET_SAFE
+ 	ret
++	int3
+ SYM_CODE_END(identity_mapped)
+ 
+ SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
+@@ -231,7 +238,9 @@ SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
+ 	popq	%r12
+ 	popq	%rbp
+ 	popq	%rbx
++	ANNOTATE_UNRET_SAFE
+ 	ret
++	int3
+ SYM_CODE_END(virtual_mapped)
+ 
+ 	/* Do the copies */
+@@ -288,7 +297,9 @@ SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
+ 	lea	PAGE_SIZE(%rax), %rsi
+ 	jmp	0b
+ 3:
++	ANNOTATE_UNRET_SAFE
+ 	ret
++	int3
+ SYM_CODE_END(swap_pages)
+ 
+ 	.globl kexec_control_code_size
+diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
+index c222fab112cbd..f441002c23276 100644
+--- a/arch/x86/kernel/sev-es.c
++++ b/arch/x86/kernel/sev-es.c
+@@ -236,7 +236,7 @@ static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt)
+ 			return ES_EXCEPTION;
+ 		}
+ 
+-		if (!insn_decode(&ctxt->insn, ctxt->regs, buffer, res))
++		if (!insn_decode_from_regs(&ctxt->insn, ctxt->regs, buffer, res))
+ 			return ES_DECODE_FAILED;
+ 	} else {
+ 		res = vc_fetch_insn_kernel(ctxt, buffer);
+diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
+index ee04941a6546a..3355e27c69ebf 100644
+--- a/arch/x86/kernel/sev_verify_cbit.S
++++ b/arch/x86/kernel/sev_verify_cbit.S
+@@ -85,5 +85,5 @@ SYM_FUNC_START(sev_verify_cbit)
+ #endif
+ 	/* Return page-table pointer */
+ 	movq	%rdi, %rax
+-	ret
++	RET
+ SYM_FUNC_END(sev_verify_cbit)
+diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
+index ca9a380d9c0b3..2973b3fb0ec1a 100644
+--- a/arch/x86/kernel/static_call.c
++++ b/arch/x86/kernel/static_call.c
+@@ -11,7 +11,17 @@ enum insn_type {
+ 	RET = 3,  /* tramp / site cond-tail-call */
+ };
+ 
+-static void __ref __static_call_transform(void *insn, enum insn_type type, void *func)
++/*
++ * ud1 %esp, %ecx - a 3 byte #UD that is unique to trampolines, chosen such
++ * that there is no false-positive trampoline identification while also being a
++ * speculation stop.
++ */
++static const u8 tramp_ud[] = { 0x0f, 0xb9, 0xcc };
++
++static const u8 retinsn[] = { RET_INSN_OPCODE, 0xcc, 0xcc, 0xcc, 0xcc };
++
++static void __ref __static_call_transform(void *insn, enum insn_type type,
++					  void *func, bool modinit)
+ {
+ 	int size = CALL_INSN_SIZE;
+ 	const void *code;
+@@ -30,15 +40,17 @@ static void __ref __static_call_transform(void *insn, enum insn_type type, void
+ 		break;
+ 
+ 	case RET:
+-		code = text_gen_insn(RET_INSN_OPCODE, insn, func);
+-		size = RET_INSN_SIZE;
++		if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
++			code = text_gen_insn(JMP32_INSN_OPCODE, insn, &__x86_return_thunk);
++		else
++			code = &retinsn;
+ 		break;
+ 	}
+ 
+ 	if (memcmp(insn, code, size) == 0)
+ 		return;
+ 
+-	if (unlikely(system_state == SYSTEM_BOOTING))
++	if (system_state == SYSTEM_BOOTING || modinit)
+ 		return text_poke_early(insn, code, size);
+ 
+ 	text_poke_bp(insn, code, size, NULL);
+@@ -85,14 +97,42 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail)
+ 
+ 	if (tramp) {
+ 		__static_call_validate(tramp, true);
+-		__static_call_transform(tramp, __sc_insn(!func, true), func);
++		__static_call_transform(tramp, __sc_insn(!func, true), func, false);
+ 	}
+ 
+ 	if (IS_ENABLED(CONFIG_HAVE_STATIC_CALL_INLINE) && site) {
+ 		__static_call_validate(site, tail);
+-		__static_call_transform(site, __sc_insn(!func, tail), func);
++		__static_call_transform(site, __sc_insn(!func, tail), func, false);
+ 	}
+ 
+ 	mutex_unlock(&text_mutex);
+ }
+ EXPORT_SYMBOL_GPL(arch_static_call_transform);
++
++#ifdef CONFIG_RETHUNK
++/*
++ * This is called by apply_returns() to fix up static call trampolines,
++ * specifically ARCH_DEFINE_STATIC_CALL_NULL_TRAMP which is recorded as
++ * having a return trampoline.
++ *
++ * The problem is that static_call() is available before determining
++ * X86_FEATURE_RETHUNK and, by implication, running alternatives.
++ *
++ * This means that __static_call_transform() above can have overwritten the
++ * return trampoline and we now need to fix things up to be consistent.
++ */
++bool __static_call_fixup(void *tramp, u8 op, void *dest)
++{
++	if (memcmp(tramp+5, tramp_ud, 3)) {
++		/* Not a trampoline site, not our problem. */
++		return false;
++	}
++
++	mutex_lock(&text_mutex);
++	if (op == RET_INSN_OPCODE || dest == &__x86_return_thunk)
++		__static_call_transform(tramp, RET, NULL, true);
++	mutex_unlock(&text_mutex);
++
++	return true;
++}
++#endif
+diff --git a/arch/x86/kernel/umip.c b/arch/x86/kernel/umip.c
+index f6225bf22c02f..8032f5f7eef94 100644
+--- a/arch/x86/kernel/umip.c
++++ b/arch/x86/kernel/umip.c
+@@ -356,7 +356,7 @@ bool fixup_umip_exception(struct pt_regs *regs)
+ 	if (!nr_copied)
+ 		return false;
+ 
+-	if (!insn_decode(&insn, regs, buf, nr_copied))
++	if (!insn_decode_from_regs(&insn, regs, buf, nr_copied))
+ 		return false;
+ 
+ 	umip_inst = identify_insn(&insn);
+diff --git a/arch/x86/kernel/verify_cpu.S b/arch/x86/kernel/verify_cpu.S
+index 641f0fe1e5b4a..1258a5872d128 100644
+--- a/arch/x86/kernel/verify_cpu.S
++++ b/arch/x86/kernel/verify_cpu.S
+@@ -132,9 +132,9 @@ SYM_FUNC_START_LOCAL(verify_cpu)
+ .Lverify_cpu_no_longmode:
+ 	popf				# Restore caller passed flags
+ 	movl $1,%eax
+-	ret
++	RET
+ .Lverify_cpu_sse_ok:
+ 	popf				# Restore caller passed flags
+ 	xorl %eax, %eax
+-	ret
++	RET
+ SYM_FUNC_END(verify_cpu)
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index bf9e0adb5b7ec..a21cd2381fa8f 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -142,7 +142,7 @@ SECTIONS
+ 
+ #ifdef CONFIG_RETPOLINE
+ 		__indirect_thunk_start = .;
+-		*(.text.__x86.indirect_thunk)
++		*(.text.__x86.*)
+ 		__indirect_thunk_end = .;
+ #endif
+ 	} :text =0xcccc
+@@ -272,6 +272,27 @@ SECTIONS
+ 		__parainstructions_end = .;
+ 	}
+ 
++#ifdef CONFIG_RETPOLINE
++	/*
++	 * List of instructions that call/jmp/jcc to retpoline thunks
++	 * __x86_indirect_thunk_*(). These instructions can be patched along
++	 * with alternatives, after which the section can be freed.
++	 */
++	. = ALIGN(8);
++	.retpoline_sites : AT(ADDR(.retpoline_sites) - LOAD_OFFSET) {
++		__retpoline_sites = .;
++		*(.retpoline_sites)
++		__retpoline_sites_end = .;
++	}
++
++	. = ALIGN(8);
++	.return_sites : AT(ADDR(.return_sites) - LOAD_OFFSET) {
++		__return_sites = .;
++		*(.return_sites)
++		__return_sites_end = .;
++	}
++#endif
++
+ 	/*
+ 	 * struct alt_inst entries. From the header (alternative.h):
+ 	 * "Alternative instructions for different CPU types or capabilities"
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 71e1a2d39f218..737035f16a9e6 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -188,9 +188,6 @@
+ #define X8(x...) X4(x), X4(x)
+ #define X16(x...) X8(x), X8(x)
+ 
+-#define NR_FASTOP (ilog2(sizeof(ulong)) + 1)
+-#define FASTOP_SIZE 8
+-
+ struct opcode {
+ 	u64 flags : 56;
+ 	u64 intercept : 8;
+@@ -304,9 +301,15 @@ static void invalidate_registers(struct x86_emulate_ctxt *ctxt)
+  * Moreover, they are all exactly FASTOP_SIZE bytes long, so functions for
+  * different operand sizes can be reached by calculation, rather than a jump
+  * table (which would be bigger than the code).
++ *
++ * The 16 byte alignment, considering 5 bytes for the RET thunk, 3 for ENDBR
++ * and 1 for the straight line speculation INT3, leaves 7 bytes for the
++ * body of the function.  Currently none is larger than 4.
+  */
+ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);
+ 
++#define FASTOP_SIZE	16
++
+ #define __FOP_FUNC(name) \
+ 	".align " __stringify(FASTOP_SIZE) " \n\t" \
+ 	".type " name ", @function \n\t" \
+@@ -316,19 +319,21 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);
+ 	__FOP_FUNC(#name)
+ 
+ #define __FOP_RET(name) \
+-	"ret \n\t" \
++	ASM_RET \
+ 	".size " name ", .-" name "\n\t"
+ 
+ #define FOP_RET(name) \
+ 	__FOP_RET(#name)
+ 
+-#define FOP_START(op) \
++#define __FOP_START(op, align) \
+ 	extern void em_##op(struct fastop *fake); \
+ 	asm(".pushsection .text, \"ax\" \n\t" \
+ 	    ".global em_" #op " \n\t" \
+-	    ".align " __stringify(FASTOP_SIZE) " \n\t" \
++	    ".align " __stringify(align) " \n\t" \
+ 	    "em_" #op ":\n\t"
+ 
++#define FOP_START(op) __FOP_START(op, FASTOP_SIZE)
++
+ #define FOP_END \
+ 	    ".popsection")
+ 
+@@ -428,19 +433,29 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);
+ 	FOP_END
+ 
+ /* Special case for SETcc - 1 instruction per cc */
++
++/*
++ * Depending on .config the SETcc functions look like:
++ *
++ * SETcc %al			[3 bytes]
++ * RET | JMP __x86_return_thunk	[1,5 bytes; CONFIG_RETHUNK]
++ * INT3				[1 byte; CONFIG_SLS]
++ */
++#define SETCC_ALIGN	16
++
+ #define FOP_SETCC(op) \
+-	".align 4 \n\t" \
++	".align " __stringify(SETCC_ALIGN) " \n\t" \
+ 	".type " #op ", @function \n\t" \
+ 	#op ": \n\t" \
+ 	#op " %al \n\t" \
+-	__FOP_RET(#op)
++	__FOP_RET(#op) \
++	".skip " __stringify(SETCC_ALIGN) " - (.-" #op "), 0xcc \n\t"
+ 
+ asm(".pushsection .fixup, \"ax\"\n"
+-    ".global kvm_fastop_exception \n"
+-    "kvm_fastop_exception: xor %esi, %esi; ret\n"
++    "kvm_fastop_exception: xor %esi, %esi; " ASM_RET
+     ".popsection");
+ 
+-FOP_START(setcc)
++__FOP_START(setcc, SETCC_ALIGN)
+ FOP_SETCC(seto)
+ FOP_SETCC(setno)
+ FOP_SETCC(setc)
+@@ -1055,7 +1070,7 @@ static int em_bsr_c(struct x86_emulate_ctxt *ctxt)
+ static __always_inline u8 test_cc(unsigned int condition, unsigned long flags)
+ {
+ 	u8 rc;
+-	void (*fop)(void) = (void *)em_setcc + 4 * (condition & 0xf);
++	void (*fop)(void) = (void *)em_setcc + SETCC_ALIGN * (condition & 0xf);
+ 
+ 	flags = (flags & EFLAGS_MASK) | X86_EFLAGS_IF;
+ 	asm("push %[flags]; popf; " CALL_NOSPEC
+diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
+index 1ec1ac40e3280..c18d812d00cd7 100644
+--- a/arch/x86/kvm/svm/vmenter.S
++++ b/arch/x86/kvm/svm/vmenter.S
+@@ -128,6 +128,15 @@ SYM_FUNC_START(__svm_vcpu_run)
+ 	mov %r15, VCPU_R15(%_ASM_AX)
+ #endif
+ 
++	/*
++	 * Mitigate RETBleed for AMD/Hygon Zen uarch. RET should be
++	 * untrained as soon as we exit the VM and are back to the
++	 * kernel. This should be done before re-enabling interrupts
++	 * because interrupt handlers won't sanitize 'ret' if the return is
++	 * from the kernel.
++	 */
++	UNTRAIN_RET
++
+ 	/*
+ 	 * Clear all general purpose registers except RSP and RAX to prevent
+ 	 * speculative use of the guest's values, even those that are reloaded
+@@ -166,5 +175,5 @@ SYM_FUNC_START(__svm_vcpu_run)
+ 	pop %edi
+ #endif
+ 	pop %_ASM_BP
+-	ret
++	RET
+ SYM_FUNC_END(__svm_vcpu_run)
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 90881d7b42ead..09804cad6e2db 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -12,6 +12,7 @@
+ #include "nested.h"
+ #include "pmu.h"
+ #include "trace.h"
++#include "vmx.h"
+ #include "x86.h"
+ 
+ static bool __read_mostly enable_shadow_vmcs = 1;
+@@ -3075,35 +3076,8 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
+ 		vmx->loaded_vmcs->host_state.cr4 = cr4;
+ 	}
+ 
+-	asm(
+-		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
+-		"cmp %%" _ASM_SP ", %c[host_state_rsp](%[loaded_vmcs]) \n\t"
+-		"je 1f \n\t"
+-		__ex("vmwrite %%" _ASM_SP ", %[HOST_RSP]") "\n\t"
+-		"mov %%" _ASM_SP ", %c[host_state_rsp](%[loaded_vmcs]) \n\t"
+-		"1: \n\t"
+-		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
+-
+-		/* Check if vmlaunch or vmresume is needed */
+-		"cmpb $0, %c[launched](%[loaded_vmcs])\n\t"
+-
+-		/*
+-		 * VMLAUNCH and VMRESUME clear RFLAGS.{CF,ZF} on VM-Exit, set
+-		 * RFLAGS.CF on VM-Fail Invalid and set RFLAGS.ZF on VM-Fail
+-		 * Valid.  vmx_vmenter() directly "returns" RFLAGS, and so the
+-		 * results of VM-Enter is captured via CC_{SET,OUT} to vm_fail.
+-		 */
+-		"call vmx_vmenter\n\t"
+-
+-		CC_SET(be)
+-	      : ASM_CALL_CONSTRAINT, CC_OUT(be) (vm_fail)
+-	      :	[HOST_RSP]"r"((unsigned long)HOST_RSP),
+-		[loaded_vmcs]"r"(vmx->loaded_vmcs),
+-		[launched]"i"(offsetof(struct loaded_vmcs, launched)),
+-		[host_state_rsp]"i"(offsetof(struct loaded_vmcs, host_state.rsp)),
+-		[wordsize]"i"(sizeof(ulong))
+-	      : "memory"
+-	);
++	vm_fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
++				 __vmx_vcpu_run_flags(vmx));
+ 
+ 	if (vmx->msr_autoload.host.nr)
+ 		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
+diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
+new file mode 100644
+index 0000000000000..edc3f16cc1896
+--- /dev/null
++++ b/arch/x86/kvm/vmx/run_flags.h
+@@ -0,0 +1,8 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __KVM_X86_VMX_RUN_FLAGS_H
++#define __KVM_X86_VMX_RUN_FLAGS_H
++
++#define VMX_RUN_VMRESUME	(1 << 0)
++#define VMX_RUN_SAVE_SPEC_CTRL	(1 << 1)
++
++#endif /* __KVM_X86_VMX_RUN_FLAGS_H */
+diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
+index 90ad7a6246e36..857fa0fc49faf 100644
+--- a/arch/x86/kvm/vmx/vmenter.S
++++ b/arch/x86/kvm/vmx/vmenter.S
+@@ -5,6 +5,7 @@
+ #include <asm/kvm_vcpu_regs.h>
+ #include <asm/nospec-branch.h>
+ #include <asm/segment.h>
++#include "run_flags.h"
+ 
+ #define WORD_SIZE (BITS_PER_LONG / 8)
+ 
+@@ -30,73 +31,12 @@
+ 
+ .section .noinstr.text, "ax"
+ 
+-/**
+- * vmx_vmenter - VM-Enter the current loaded VMCS
+- *
+- * %RFLAGS.ZF:	!VMCS.LAUNCHED, i.e. controls VMLAUNCH vs. VMRESUME
+- *
+- * Returns:
+- *	%RFLAGS.CF is set on VM-Fail Invalid
+- *	%RFLAGS.ZF is set on VM-Fail Valid
+- *	%RFLAGS.{CF,ZF} are cleared on VM-Success, i.e. VM-Exit
+- *
+- * Note that VMRESUME/VMLAUNCH fall-through and return directly if
+- * they VM-Fail, whereas a successful VM-Enter + VM-Exit will jump
+- * to vmx_vmexit.
+- */
+-SYM_FUNC_START(vmx_vmenter)
+-	/* EFLAGS.ZF is set if VMCS.LAUNCHED == 0 */
+-	je 2f
+-
+-1:	vmresume
+-	ret
+-
+-2:	vmlaunch
+-	ret
+-
+-3:	cmpb $0, kvm_rebooting
+-	je 4f
+-	ret
+-4:	ud2
+-
+-	_ASM_EXTABLE(1b, 3b)
+-	_ASM_EXTABLE(2b, 3b)
+-
+-SYM_FUNC_END(vmx_vmenter)
+-
+-/**
+- * vmx_vmexit - Handle a VMX VM-Exit
+- *
+- * Returns:
+- *	%RFLAGS.{CF,ZF} are cleared on VM-Success, i.e. VM-Exit
+- *
+- * This is vmx_vmenter's partner in crime.  On a VM-Exit, control will jump
+- * here after hardware loads the host's state, i.e. this is the destination
+- * referred to by VMCS.HOST_RIP.
+- */
+-SYM_FUNC_START(vmx_vmexit)
+-#ifdef CONFIG_RETPOLINE
+-	ALTERNATIVE "jmp .Lvmexit_skip_rsb", "", X86_FEATURE_RETPOLINE
+-	/* Preserve guest's RAX, it's used to stuff the RSB. */
+-	push %_ASM_AX
+-
+-	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
+-	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
+-
+-	/* Clear RFLAGS.CF and RFLAGS.ZF to preserve VM-Exit, i.e. !VM-Fail. */
+-	or $1, %_ASM_AX
+-
+-	pop %_ASM_AX
+-.Lvmexit_skip_rsb:
+-#endif
+-	ret
+-SYM_FUNC_END(vmx_vmexit)
+-
+ /**
+  * __vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode
+- * @vmx:	struct vcpu_vmx * (forwarded to vmx_update_host_rsp)
++ * @vmx:	struct vcpu_vmx *
+  * @regs:	unsigned long * (to guest registers)
+- * @launched:	%true if the VMCS has been launched
++ * @flags:	VMX_RUN_VMRESUME:	use VMRESUME instead of VMLAUNCH
++ *		VMX_RUN_SAVE_SPEC_CTRL: save guest SPEC_CTRL into vmx->spec_ctrl
+  *
+  * Returns:
+  *	0 on VM-Exit, 1 on VM-Fail
+@@ -115,24 +55,29 @@ SYM_FUNC_START(__vmx_vcpu_run)
+ #endif
+ 	push %_ASM_BX
+ 
++	/* Save @vmx for SPEC_CTRL handling */
++	push %_ASM_ARG1
++
++	/* Save @flags for SPEC_CTRL handling */
++	push %_ASM_ARG3
++
+ 	/*
+ 	 * Save @regs, _ASM_ARG2 may be modified by vmx_update_host_rsp() and
+ 	 * @regs is needed after VM-Exit to save the guest's register values.
+ 	 */
+ 	push %_ASM_ARG2
+ 
+-	/* Copy @launched to BL, _ASM_ARG3 is volatile. */
++	/* Copy @flags to BL, _ASM_ARG3 is volatile. */
+ 	mov %_ASM_ARG3B, %bl
+ 
+-	/* Adjust RSP to account for the CALL to vmx_vmenter(). */
+-	lea -WORD_SIZE(%_ASM_SP), %_ASM_ARG2
++	lea (%_ASM_SP), %_ASM_ARG2
+ 	call vmx_update_host_rsp
+ 
+ 	/* Load @regs to RAX. */
+ 	mov (%_ASM_SP), %_ASM_AX
+ 
+ 	/* Check if vmlaunch or vmresume is needed */
+-	cmpb $0, %bl
++	testb $VMX_RUN_VMRESUME, %bl
+ 
+ 	/* Load guest registers.  Don't clobber flags. */
+ 	mov VCPU_RCX(%_ASM_AX), %_ASM_CX
+@@ -154,11 +99,36 @@ SYM_FUNC_START(__vmx_vcpu_run)
+ 	/* Load guest RAX.  This kills the @regs pointer! */
+ 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
+ 
+-	/* Enter guest mode */
+-	call vmx_vmenter
++	/* Check EFLAGS.ZF from 'testb' above */
++	jz .Lvmlaunch
++
++	/*
++	 * After a successful VMRESUME/VMLAUNCH, control flow "magically"
++	 * resumes below at 'vmx_vmexit' due to the VMCS HOST_RIP setting.
++	 * So this isn't a typical function and objtool needs to be told to
++	 * save the unwind state here and restore it below.
++	 */
++	UNWIND_HINT_SAVE
++
++/*
++ * If VMRESUME/VMLAUNCH and corresponding vmexit succeed, execution resumes at
++ * the 'vmx_vmexit' label below.
++ */
++.Lvmresume:
++	vmresume
++	jmp .Lvmfail
++
++.Lvmlaunch:
++	vmlaunch
++	jmp .Lvmfail
++
++	_ASM_EXTABLE(.Lvmresume, .Lfixup)
++	_ASM_EXTABLE(.Lvmlaunch, .Lfixup)
+ 
+-	/* Jump on VM-Fail. */
+-	jbe 2f
++SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL)
++
++	/* Restore unwind state from before the VMRESUME/VMLAUNCH. */
++	UNWIND_HINT_RESTORE
+ 
+ 	/* Temporarily save guest's RAX. */
+ 	push %_ASM_AX
+@@ -185,21 +155,23 @@ SYM_FUNC_START(__vmx_vcpu_run)
+ 	mov %r15, VCPU_R15(%_ASM_AX)
+ #endif
+ 
+-	/* Clear RAX to indicate VM-Exit (as opposed to VM-Fail). */
+-	xor %eax, %eax
++	/* Clear return value to indicate VM-Exit (as opposed to VM-Fail). */
++	xor %ebx, %ebx
+ 
++.Lclear_regs:
+ 	/*
+-	 * Clear all general purpose registers except RSP and RAX to prevent
++	 * Clear all general purpose registers except RSP and RBX to prevent
+ 	 * speculative use of the guest's values, even those that are reloaded
+ 	 * via the stack.  In theory, an L1 cache miss when restoring registers
+ 	 * could lead to speculative execution with the guest's values.
+ 	 * Zeroing XORs are dirt cheap, i.e. the extra paranoia is essentially
+ 	 * free.  RSP and RAX are exempt as RSP is restored by hardware during
+-	 * VM-Exit and RAX is explicitly loaded with 0 or 1 to return VM-Fail.
++	 * VM-Exit and RBX is explicitly loaded with 0 or 1 to hold the return
++	 * value.
+ 	 */
+-1:	xor %ecx, %ecx
++	xor %eax, %eax
++	xor %ecx, %ecx
+ 	xor %edx, %edx
+-	xor %ebx, %ebx
+ 	xor %ebp, %ebp
+ 	xor %esi, %esi
+ 	xor %edi, %edi
+@@ -216,8 +188,30 @@ SYM_FUNC_START(__vmx_vcpu_run)
+ 
+ 	/* "POP" @regs. */
+ 	add $WORD_SIZE, %_ASM_SP
+-	pop %_ASM_BX
+ 
++	/*
++	 * IMPORTANT: RSB filling and SPEC_CTRL handling must be done before
++	 * the first unbalanced RET after vmexit!
++	 *
++	 * For retpoline or IBRS, RSB filling is needed to prevent poisoned RSB
++	 * entries and (in some cases) RSB underflow.
++	 *
++	 * eIBRS has its own protection against poisoned RSB, so it doesn't
++	 * need the RSB filling sequence.  But it does need to be enabled
++	 * before the first unbalanced RET.
++         */
++
++	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
++
++	pop %_ASM_ARG2	/* @flags */
++	pop %_ASM_ARG1	/* @vmx */
++
++	call vmx_spec_ctrl_restore_host
++
++	/* Put return value in AX */
++	mov %_ASM_BX, %_ASM_AX
++
++	pop %_ASM_BX
+ #ifdef CONFIG_X86_64
+ 	pop %r12
+ 	pop %r13
+@@ -228,11 +222,17 @@ SYM_FUNC_START(__vmx_vcpu_run)
+ 	pop %edi
+ #endif
+ 	pop %_ASM_BP
+-	ret
++	RET
++
++.Lfixup:
++	cmpb $0, kvm_rebooting
++	jne .Lvmfail
++	ud2
++.Lvmfail:
++	/* VM-Fail: set return value to 1 */
++	mov $1, %_ASM_BX
++	jmp .Lclear_regs
+ 
+-	/* VM-Fail.  Out-of-line to avoid a taken Jcc after VM-Exit. */
+-2:	mov $1, %eax
+-	jmp 1b
+ SYM_FUNC_END(__vmx_vcpu_run)
+ 
+ 
+@@ -293,7 +293,7 @@ SYM_FUNC_START(vmread_error_trampoline)
+ 	pop %_ASM_AX
+ 	pop %_ASM_BP
+ 
+-	ret
++	RET
+ SYM_FUNC_END(vmread_error_trampoline)
+ 
+ SYM_FUNC_START(vmx_do_interrupt_nmi_irqoff)
+@@ -326,5 +326,5 @@ SYM_FUNC_START(vmx_do_interrupt_nmi_irqoff)
+ 	 */
+ 	mov %_ASM_BP, %_ASM_SP
+ 	pop %_ASM_BP
+-	ret
++	RET
+ SYM_FUNC_END(vmx_do_interrupt_nmi_irqoff)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index cc647dcc228b7..9b520da3f7488 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -380,9 +380,9 @@ static __always_inline void vmx_disable_fb_clear(struct vcpu_vmx *vmx)
+ 	if (!vmx->disable_fb_clear)
+ 		return;
+ 
+-	rdmsrl(MSR_IA32_MCU_OPT_CTRL, msr);
++	msr = __rdmsr(MSR_IA32_MCU_OPT_CTRL);
+ 	msr |= FB_CLEAR_DIS;
+-	wrmsrl(MSR_IA32_MCU_OPT_CTRL, msr);
++	native_wrmsrl(MSR_IA32_MCU_OPT_CTRL, msr);
+ 	/* Cache the MSR value to avoid reading it later */
+ 	vmx->msr_ia32_mcu_opt_ctrl = msr;
+ }
+@@ -393,7 +393,7 @@ static __always_inline void vmx_enable_fb_clear(struct vcpu_vmx *vmx)
+ 		return;
+ 
+ 	vmx->msr_ia32_mcu_opt_ctrl &= ~FB_CLEAR_DIS;
+-	wrmsrl(MSR_IA32_MCU_OPT_CTRL, vmx->msr_ia32_mcu_opt_ctrl);
++	native_wrmsrl(MSR_IA32_MCU_OPT_CTRL, vmx->msr_ia32_mcu_opt_ctrl);
+ }
+ 
+ static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+@@ -936,6 +936,24 @@ static bool msr_write_intercepted(struct vcpu_vmx *vmx, u32 msr)
+ 	return true;
+ }
+ 
++unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
++{
++	unsigned int flags = 0;
++
++	if (vmx->loaded_vmcs->launched)
++		flags |= VMX_RUN_VMRESUME;
++
++	/*
++	 * If writes to the SPEC_CTRL MSR aren't intercepted, the guest is free
++	 * to change it directly without causing a vmexit.  In that case read
++	 * it after vmexit and store it in vmx->spec_ctrl.
++	 */
++	if (unlikely(!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL)))
++		flags |= VMX_RUN_SAVE_SPEC_CTRL;
++
++	return flags;
++}
++
+ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+ 		unsigned long entry, unsigned long exit)
+ {
+@@ -6675,6 +6693,31 @@ void noinstr vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
+ 	}
+ }
+ 
++void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx,
++					unsigned int flags)
++{
++	u64 hostval = this_cpu_read(x86_spec_ctrl_current);
++
++	if (!cpu_feature_enabled(X86_FEATURE_MSR_SPEC_CTRL))
++		return;
++
++	if (flags & VMX_RUN_SAVE_SPEC_CTRL)
++		vmx->spec_ctrl = __rdmsr(MSR_IA32_SPEC_CTRL);
++
++	/*
++	 * If the guest/host SPEC_CTRL values differ, restore the host value.
++	 *
++	 * For legacy IBRS, the IBRS bit always needs to be written after
++	 * transitioning from a less privileged predictor mode, regardless of
++	 * whether the guest/host values differ.
++	 */
++	if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) ||
++	    vmx->spec_ctrl != hostval)
++		native_wrmsrl(MSR_IA32_SPEC_CTRL, hostval);
++
++	barrier_nospec();
++}
++
+ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
+ {
+ 	switch (to_vmx(vcpu)->exit_reason.basic) {
+@@ -6687,10 +6730,9 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
+ 	}
+ }
+ 
+-bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
+-
+ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+-					struct vcpu_vmx *vmx)
++					struct vcpu_vmx *vmx,
++					unsigned long flags)
+ {
+ 	/*
+ 	 * VMENTER enables interrupts (host state), but the kernel state is
+@@ -6727,7 +6769,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+ 		native_write_cr2(vcpu->arch.cr2);
+ 
+ 	vmx->fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
+-				   vmx->loaded_vmcs->launched);
++				   flags);
+ 
+ 	vcpu->arch.cr2 = native_read_cr2();
+ 
+@@ -6826,27 +6868,7 @@ reenter_guest:
+ 	x86_spec_ctrl_set_guest(vmx->spec_ctrl, 0);
+ 
+ 	/* The actual VMENTER/EXIT is in the .noinstr.text section. */
+-	vmx_vcpu_enter_exit(vcpu, vmx);
+-
+-	/*
+-	 * We do not use IBRS in the kernel. If this vCPU has used the
+-	 * SPEC_CTRL MSR it may have left it on; save the value and
+-	 * turn it off. This is much more efficient than blindly adding
+-	 * it to the atomic save/restore list. Especially as the former
+-	 * (Saving guest MSRs on vmexit) doesn't even exist in KVM.
+-	 *
+-	 * For non-nested case:
+-	 * If the L01 MSR bitmap does not intercept the MSR, then we need to
+-	 * save it.
+-	 *
+-	 * For nested case:
+-	 * If the L02 MSR bitmap does not intercept the MSR, then we need to
+-	 * save it.
+-	 */
+-	if (unlikely(!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL)))
+-		vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
+-
+-	x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0);
++	vmx_vcpu_enter_exit(vcpu, vmx, __vmx_vcpu_run_flags(vmx));
+ 
+ 	/* All fields are clean at this point */
+ 	if (static_branch_unlikely(&enable_evmcs))
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 31317c8915e40..a6b52d3a39c9d 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -13,6 +13,7 @@
+ #include "vmcs.h"
+ #include "vmx_ops.h"
+ #include "cpuid.h"
++#include "run_flags.h"
+ 
+ extern const u32 vmx_msr_index[];
+ 
+@@ -365,6 +366,10 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
+ struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr);
+ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu);
+ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
++void vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx, unsigned int flags);
++unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx);
++bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs,
++		    unsigned int flags);
+ int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr);
+ void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu);
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index c71f702c037de..29a8ca95c581d 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -11173,9 +11173,9 @@ void kvm_arch_end_assignment(struct kvm *kvm)
+ }
+ EXPORT_SYMBOL_GPL(kvm_arch_end_assignment);
+ 
+-bool kvm_arch_has_assigned_device(struct kvm *kvm)
++bool noinstr kvm_arch_has_assigned_device(struct kvm *kvm)
+ {
+-	return atomic_read(&kvm->arch.assigned_device_count);
++	return arch_atomic_read(&kvm->arch.assigned_device_count);
+ }
+ EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device);
+ 
+diff --git a/arch/x86/lib/atomic64_386_32.S b/arch/x86/lib/atomic64_386_32.S
+index 3b6544111ac92..e768815e58ae4 100644
+--- a/arch/x86/lib/atomic64_386_32.S
++++ b/arch/x86/lib/atomic64_386_32.S
+@@ -6,84 +6,86 @@
+  */
+ 
+ #include <linux/linkage.h>
+-#include <asm/alternative-asm.h>
++#include <asm/alternative.h>
+ 
+ /* if you want SMP support, implement these with real spinlocks */
+-.macro LOCK reg
++.macro IRQ_SAVE reg
+ 	pushfl
+ 	cli
+ .endm
+ 
+-.macro UNLOCK reg
++.macro IRQ_RESTORE reg
+ 	popfl
+ .endm
+ 
+-#define BEGIN(op) \
++#define BEGIN_IRQ_SAVE(op) \
+ .macro endp; \
+ SYM_FUNC_END(atomic64_##op##_386); \
+ .purgem endp; \
+ .endm; \
+ SYM_FUNC_START(atomic64_##op##_386); \
+-	LOCK v;
++	IRQ_SAVE v;
+ 
+ #define ENDP endp
+ 
+-#define RET \
+-	UNLOCK v; \
+-	ret
+-
+-#define RET_ENDP \
+-	RET; \
+-	ENDP
++#define RET_IRQ_RESTORE \
++	IRQ_RESTORE v; \
++	RET
+ 
+ #define v %ecx
+-BEGIN(read)
++BEGIN_IRQ_SAVE(read)
+ 	movl  (v), %eax
+ 	movl 4(v), %edx
+-RET_ENDP
++	RET_IRQ_RESTORE
++ENDP
+ #undef v
+ 
+ #define v %esi
+-BEGIN(set)
++BEGIN_IRQ_SAVE(set)
+ 	movl %ebx,  (v)
+ 	movl %ecx, 4(v)
+-RET_ENDP
++	RET_IRQ_RESTORE
++ENDP
+ #undef v
+ 
+ #define v  %esi
+-BEGIN(xchg)
++BEGIN_IRQ_SAVE(xchg)
+ 	movl  (v), %eax
+ 	movl 4(v), %edx
+ 	movl %ebx,  (v)
+ 	movl %ecx, 4(v)
+-RET_ENDP
++	RET_IRQ_RESTORE
++ENDP
+ #undef v
+ 
+ #define v %ecx
+-BEGIN(add)
++BEGIN_IRQ_SAVE(add)
+ 	addl %eax,  (v)
+ 	adcl %edx, 4(v)
+-RET_ENDP
++	RET_IRQ_RESTORE
++ENDP
+ #undef v
+ 
+ #define v %ecx
+-BEGIN(add_return)
++BEGIN_IRQ_SAVE(add_return)
+ 	addl  (v), %eax
+ 	adcl 4(v), %edx
+ 	movl %eax,  (v)
+ 	movl %edx, 4(v)
+-RET_ENDP
++	RET_IRQ_RESTORE
++ENDP
+ #undef v
+ 
+ #define v %ecx
+-BEGIN(sub)
++BEGIN_IRQ_SAVE(sub)
+ 	subl %eax,  (v)
+ 	sbbl %edx, 4(v)
+-RET_ENDP
++	RET_IRQ_RESTORE
++ENDP
+ #undef v
+ 
+ #define v %ecx
+-BEGIN(sub_return)
++BEGIN_IRQ_SAVE(sub_return)
+ 	negl %edx
+ 	negl %eax
+ 	sbbl $0, %edx
+@@ -91,47 +93,52 @@ BEGIN(sub_return)
+ 	adcl 4(v), %edx
+ 	movl %eax,  (v)
+ 	movl %edx, 4(v)
+-RET_ENDP
++	RET_IRQ_RESTORE
++ENDP
+ #undef v
+ 
+ #define v %esi
+-BEGIN(inc)
++BEGIN_IRQ_SAVE(inc)
+ 	addl $1,  (v)
+ 	adcl $0, 4(v)
+-RET_ENDP
++	RET_IRQ_RESTORE
++ENDP
+ #undef v
+ 
+ #define v %esi
+-BEGIN(inc_return)
++BEGIN_IRQ_SAVE(inc_return)
+ 	movl  (v), %eax
+ 	movl 4(v), %edx
+ 	addl $1, %eax
+ 	adcl $0, %edx
+ 	movl %eax,  (v)
+ 	movl %edx, 4(v)
+-RET_ENDP
++	RET_IRQ_RESTORE
++ENDP
+ #undef v
+ 
+ #define v %esi
+-BEGIN(dec)
++BEGIN_IRQ_SAVE(dec)
+ 	subl $1,  (v)
+ 	sbbl $0, 4(v)
+-RET_ENDP
++	RET_IRQ_RESTORE
++ENDP
+ #undef v
+ 
+ #define v %esi
+-BEGIN(dec_return)
++BEGIN_IRQ_SAVE(dec_return)
+ 	movl  (v), %eax
+ 	movl 4(v), %edx
+ 	subl $1, %eax
+ 	sbbl $0, %edx
+ 	movl %eax,  (v)
+ 	movl %edx, 4(v)
+-RET_ENDP
++	RET_IRQ_RESTORE
++ENDP
+ #undef v
+ 
+ #define v %esi
+-BEGIN(add_unless)
++BEGIN_IRQ_SAVE(add_unless)
+ 	addl %eax, %ecx
+ 	adcl %edx, %edi
+ 	addl  (v), %eax
+@@ -143,7 +150,7 @@ BEGIN(add_unless)
+ 	movl %edx, 4(v)
+ 	movl $1, %eax
+ 2:
+-	RET
++	RET_IRQ_RESTORE
+ 3:
+ 	cmpl %edx, %edi
+ 	jne 1b
+@@ -153,7 +160,7 @@ ENDP
+ #undef v
+ 
+ #define v %esi
+-BEGIN(inc_not_zero)
++BEGIN_IRQ_SAVE(inc_not_zero)
+ 	movl  (v), %eax
+ 	movl 4(v), %edx
+ 	testl %eax, %eax
+@@ -165,7 +172,7 @@ BEGIN(inc_not_zero)
+ 	movl %edx, 4(v)
+ 	movl $1, %eax
+ 2:
+-	RET
++	RET_IRQ_RESTORE
+ 3:
+ 	testl %edx, %edx
+ 	jne 1b
+@@ -174,7 +181,7 @@ ENDP
+ #undef v
+ 
+ #define v %esi
+-BEGIN(dec_if_positive)
++BEGIN_IRQ_SAVE(dec_if_positive)
+ 	movl  (v), %eax
+ 	movl 4(v), %edx
+ 	subl $1, %eax
+@@ -183,5 +190,6 @@ BEGIN(dec_if_positive)
+ 	movl %eax,  (v)
+ 	movl %edx, 4(v)
+ 1:
+-RET_ENDP
++	RET_IRQ_RESTORE
++ENDP
+ #undef v
+diff --git a/arch/x86/lib/atomic64_cx8_32.S b/arch/x86/lib/atomic64_cx8_32.S
+index 1c5c81c16b066..90afb488b396a 100644
+--- a/arch/x86/lib/atomic64_cx8_32.S
++++ b/arch/x86/lib/atomic64_cx8_32.S
+@@ -6,7 +6,7 @@
+  */
+ 
+ #include <linux/linkage.h>
+-#include <asm/alternative-asm.h>
++#include <asm/alternative.h>
+ 
+ .macro read64 reg
+ 	movl %ebx, %eax
+@@ -18,7 +18,7 @@
+ 
+ SYM_FUNC_START(atomic64_read_cx8)
+ 	read64 %ecx
+-	ret
++	RET
+ SYM_FUNC_END(atomic64_read_cx8)
+ 
+ SYM_FUNC_START(atomic64_set_cx8)
+@@ -28,7 +28,7 @@ SYM_FUNC_START(atomic64_set_cx8)
+ 	cmpxchg8b (%esi)
+ 	jne 1b
+ 
+-	ret
++	RET
+ SYM_FUNC_END(atomic64_set_cx8)
+ 
+ SYM_FUNC_START(atomic64_xchg_cx8)
+@@ -37,7 +37,7 @@ SYM_FUNC_START(atomic64_xchg_cx8)
+ 	cmpxchg8b (%esi)
+ 	jne 1b
+ 
+-	ret
++	RET
+ SYM_FUNC_END(atomic64_xchg_cx8)
+ 
+ .macro addsub_return func ins insc
+@@ -68,7 +68,7 @@ SYM_FUNC_START(atomic64_\func\()_return_cx8)
+ 	popl %esi
+ 	popl %ebx
+ 	popl %ebp
+-	ret
++	RET
+ SYM_FUNC_END(atomic64_\func\()_return_cx8)
+ .endm
+ 
+@@ -93,7 +93,7 @@ SYM_FUNC_START(atomic64_\func\()_return_cx8)
+ 	movl %ebx, %eax
+ 	movl %ecx, %edx
+ 	popl %ebx
+-	ret
++	RET
+ SYM_FUNC_END(atomic64_\func\()_return_cx8)
+ .endm
+ 
+@@ -118,7 +118,7 @@ SYM_FUNC_START(atomic64_dec_if_positive_cx8)
+ 	movl %ebx, %eax
+ 	movl %ecx, %edx
+ 	popl %ebx
+-	ret
++	RET
+ SYM_FUNC_END(atomic64_dec_if_positive_cx8)
+ 
+ SYM_FUNC_START(atomic64_add_unless_cx8)
+@@ -149,7 +149,7 @@ SYM_FUNC_START(atomic64_add_unless_cx8)
+ 	addl $8, %esp
+ 	popl %ebx
+ 	popl %ebp
+-	ret
++	RET
+ 4:
+ 	cmpl %edx, 4(%esp)
+ 	jne 2b
+@@ -176,5 +176,5 @@ SYM_FUNC_START(atomic64_inc_not_zero_cx8)
+ 	movl $1, %eax
+ 3:
+ 	popl %ebx
+-	ret
++	RET
+ SYM_FUNC_END(atomic64_inc_not_zero_cx8)
+diff --git a/arch/x86/lib/checksum_32.S b/arch/x86/lib/checksum_32.S
+index 4304320e51f4d..929ad1747dea0 100644
+--- a/arch/x86/lib/checksum_32.S
++++ b/arch/x86/lib/checksum_32.S
+@@ -127,7 +127,7 @@ SYM_FUNC_START(csum_partial)
+ 8:
+ 	popl %ebx
+ 	popl %esi
+-	ret
++	RET
+ SYM_FUNC_END(csum_partial)
+ 
+ #else
+@@ -245,7 +245,7 @@ SYM_FUNC_START(csum_partial)
+ 90: 
+ 	popl %ebx
+ 	popl %esi
+-	ret
++	RET
+ SYM_FUNC_END(csum_partial)
+ 				
+ #endif
+@@ -371,7 +371,7 @@ EXC(	movb %cl, (%edi)	)
+ 	popl %esi
+ 	popl %edi
+ 	popl %ecx			# equivalent to addl $4,%esp
+-	ret	
++	RET
+ SYM_FUNC_END(csum_partial_copy_generic)
+ 
+ #else
+@@ -447,7 +447,7 @@ EXC(	movb %dl, (%edi)         )
+ 	popl %esi
+ 	popl %edi
+ 	popl %ebx
+-	ret
++	RET
+ SYM_FUNC_END(csum_partial_copy_generic)
+ 				
+ #undef ROUND
+diff --git a/arch/x86/lib/clear_page_64.S b/arch/x86/lib/clear_page_64.S
+index c4c7dd115953c..fe59b8ac4fccd 100644
+--- a/arch/x86/lib/clear_page_64.S
++++ b/arch/x86/lib/clear_page_64.S
+@@ -17,7 +17,7 @@ SYM_FUNC_START(clear_page_rep)
+ 	movl $4096/8,%ecx
+ 	xorl %eax,%eax
+ 	rep stosq
+-	ret
++	RET
+ SYM_FUNC_END(clear_page_rep)
+ EXPORT_SYMBOL_GPL(clear_page_rep)
+ 
+@@ -39,7 +39,7 @@ SYM_FUNC_START(clear_page_orig)
+ 	leaq	64(%rdi),%rdi
+ 	jnz	.Lloop
+ 	nop
+-	ret
++	RET
+ SYM_FUNC_END(clear_page_orig)
+ EXPORT_SYMBOL_GPL(clear_page_orig)
+ 
+@@ -47,6 +47,6 @@ SYM_FUNC_START(clear_page_erms)
+ 	movl $4096,%ecx
+ 	xorl %eax,%eax
+ 	rep stosb
+-	ret
++	RET
+ SYM_FUNC_END(clear_page_erms)
+ EXPORT_SYMBOL_GPL(clear_page_erms)
+diff --git a/arch/x86/lib/cmpxchg16b_emu.S b/arch/x86/lib/cmpxchg16b_emu.S
+index 3542502faa3b7..33c70c0160ea0 100644
+--- a/arch/x86/lib/cmpxchg16b_emu.S
++++ b/arch/x86/lib/cmpxchg16b_emu.S
+@@ -37,11 +37,11 @@ SYM_FUNC_START(this_cpu_cmpxchg16b_emu)
+ 
+ 	popfq
+ 	mov $1, %al
+-	ret
++	RET
+ 
+ .Lnot_same:
+ 	popfq
+ 	xor %al,%al
+-	ret
++	RET
+ 
+ SYM_FUNC_END(this_cpu_cmpxchg16b_emu)
+diff --git a/arch/x86/lib/cmpxchg8b_emu.S b/arch/x86/lib/cmpxchg8b_emu.S
+index ca01ed6029f4f..6a912d58fecc3 100644
+--- a/arch/x86/lib/cmpxchg8b_emu.S
++++ b/arch/x86/lib/cmpxchg8b_emu.S
+@@ -32,7 +32,7 @@ SYM_FUNC_START(cmpxchg8b_emu)
+ 	movl %ecx, 4(%esi)
+ 
+ 	popfl
+-	ret
++	RET
+ 
+ .Lnot_same:
+ 	movl  (%esi), %eax
+@@ -40,7 +40,7 @@ SYM_FUNC_START(cmpxchg8b_emu)
+ 	movl 4(%esi), %edx
+ 
+ 	popfl
+-	ret
++	RET
+ 
+ SYM_FUNC_END(cmpxchg8b_emu)
+ EXPORT_SYMBOL(cmpxchg8b_emu)
+diff --git a/arch/x86/lib/copy_mc_64.S b/arch/x86/lib/copy_mc_64.S
+index 892d8915f609e..fdd1929b1f9d5 100644
+--- a/arch/x86/lib/copy_mc_64.S
++++ b/arch/x86/lib/copy_mc_64.S
+@@ -86,7 +86,7 @@ SYM_FUNC_START(copy_mc_fragile)
+ .L_done_memcpy_trap:
+ 	xorl %eax, %eax
+ .L_done:
+-	ret
++	RET
+ SYM_FUNC_END(copy_mc_fragile)
+ EXPORT_SYMBOL_GPL(copy_mc_fragile)
+ 
+@@ -142,7 +142,7 @@ SYM_FUNC_START(copy_mc_enhanced_fast_string)
+ 	rep movsb
+ 	/* Copy successful. Return zero */
+ 	xorl %eax, %eax
+-	ret
++	RET
+ SYM_FUNC_END(copy_mc_enhanced_fast_string)
+ 
+ 	.section .fixup, "ax"
+@@ -155,7 +155,7 @@ SYM_FUNC_END(copy_mc_enhanced_fast_string)
+ 	 * user-copy routines.
+ 	 */
+ 	movq %rcx, %rax
+-	ret
++	RET
+ 
+ 	.previous
+ 
+diff --git a/arch/x86/lib/copy_page_64.S b/arch/x86/lib/copy_page_64.S
+index 2402d4c489d29..30ea644bf446d 100644
+--- a/arch/x86/lib/copy_page_64.S
++++ b/arch/x86/lib/copy_page_64.S
+@@ -3,7 +3,7 @@
+ 
+ #include <linux/linkage.h>
+ #include <asm/cpufeatures.h>
+-#include <asm/alternative-asm.h>
++#include <asm/alternative.h>
+ #include <asm/export.h>
+ 
+ /*
+@@ -17,7 +17,7 @@ SYM_FUNC_START(copy_page)
+ 	ALTERNATIVE "jmp copy_page_regs", "", X86_FEATURE_REP_GOOD
+ 	movl	$4096/8, %ecx
+ 	rep	movsq
+-	ret
++	RET
+ SYM_FUNC_END(copy_page)
+ EXPORT_SYMBOL(copy_page)
+ 
+@@ -85,5 +85,5 @@ SYM_FUNC_START_LOCAL(copy_page_regs)
+ 	movq	(%rsp), %rbx
+ 	movq	1*8(%rsp), %r12
+ 	addq	$2*8, %rsp
+-	ret
++	RET
+ SYM_FUNC_END(copy_page_regs)
+diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S
+index 77b9b2a3b5c84..84cee84fc658a 100644
+--- a/arch/x86/lib/copy_user_64.S
++++ b/arch/x86/lib/copy_user_64.S
+@@ -11,7 +11,7 @@
+ #include <asm/asm-offsets.h>
+ #include <asm/thread_info.h>
+ #include <asm/cpufeatures.h>
+-#include <asm/alternative-asm.h>
++#include <asm/alternative.h>
+ #include <asm/asm.h>
+ #include <asm/smap.h>
+ #include <asm/export.h>
+@@ -105,7 +105,7 @@ SYM_FUNC_START(copy_user_generic_unrolled)
+ 	jnz 21b
+ 23:	xor %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ 
+ 	.section .fixup,"ax"
+ 30:	shll $6,%ecx
+@@ -173,7 +173,7 @@ SYM_FUNC_START(copy_user_generic_string)
+ 	movsb
+ 	xorl %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ 
+ 	.section .fixup,"ax"
+ 11:	leal (%rdx,%rcx,8),%ecx
+@@ -207,7 +207,7 @@ SYM_FUNC_START(copy_user_enhanced_fast_string)
+ 	movsb
+ 	xorl %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ 
+ 	.section .fixup,"ax"
+ 12:	movl %ecx,%edx		/* ecx is zerorest also */
+@@ -239,7 +239,7 @@ SYM_CODE_START_LOCAL(.Lcopy_user_handle_tail)
+ 1:	rep movsb
+ 2:	mov %ecx,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ 
+ 	/*
+ 	 * Return zero to pretend that this copy succeeded. This
+@@ -250,7 +250,7 @@ SYM_CODE_START_LOCAL(.Lcopy_user_handle_tail)
+ 	 */
+ 3:	xorl %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ 
+ 	_ASM_EXTABLE_CPY(1b, 2b)
+ SYM_CODE_END(.Lcopy_user_handle_tail)
+@@ -361,7 +361,7 @@ SYM_FUNC_START(__copy_user_nocache)
+ 	xorl %eax,%eax
+ 	ASM_CLAC
+ 	sfence
+-	ret
++	RET
+ 
+ 	.section .fixup,"ax"
+ .L_fixup_4x8b_copy:
+diff --git a/arch/x86/lib/csum-copy_64.S b/arch/x86/lib/csum-copy_64.S
+index 1fbd8ee9642d1..d9e16a2cf2856 100644
+--- a/arch/x86/lib/csum-copy_64.S
++++ b/arch/x86/lib/csum-copy_64.S
+@@ -201,7 +201,7 @@ SYM_FUNC_START(csum_partial_copy_generic)
+ 	movq 3*8(%rsp), %r13
+ 	movq 4*8(%rsp), %r15
+ 	addq $5*8, %rsp
+-	ret
++	RET
+ .Lshort:
+ 	movl %ecx, %r10d
+ 	jmp  .L1
+diff --git a/arch/x86/lib/error-inject.c b/arch/x86/lib/error-inject.c
+index be5b5fb1598bd..520897061ee09 100644
+--- a/arch/x86/lib/error-inject.c
++++ b/arch/x86/lib/error-inject.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ 
++#include <linux/linkage.h>
+ #include <linux/error-injection.h>
+ #include <linux/kprobes.h>
+ 
+@@ -10,7 +11,7 @@ asm(
+ 	".type just_return_func, @function\n"
+ 	".globl just_return_func\n"
+ 	"just_return_func:\n"
+-	"	ret\n"
++		ASM_RET
+ 	".size just_return_func, .-just_return_func\n"
+ );
+ 
+diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
+index fa1bc2104b326..b70d98d79a9da 100644
+--- a/arch/x86/lib/getuser.S
++++ b/arch/x86/lib/getuser.S
+@@ -57,7 +57,7 @@ SYM_FUNC_START(__get_user_1)
+ 1:	movzbl (%_ASM_AX),%edx
+ 	xor %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ SYM_FUNC_END(__get_user_1)
+ EXPORT_SYMBOL(__get_user_1)
+ 
+@@ -71,7 +71,7 @@ SYM_FUNC_START(__get_user_2)
+ 2:	movzwl (%_ASM_AX),%edx
+ 	xor %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ SYM_FUNC_END(__get_user_2)
+ EXPORT_SYMBOL(__get_user_2)
+ 
+@@ -85,7 +85,7 @@ SYM_FUNC_START(__get_user_4)
+ 3:	movl (%_ASM_AX),%edx
+ 	xor %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ SYM_FUNC_END(__get_user_4)
+ EXPORT_SYMBOL(__get_user_4)
+ 
+@@ -100,7 +100,7 @@ SYM_FUNC_START(__get_user_8)
+ 4:	movq (%_ASM_AX),%rdx
+ 	xor %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ #else
+ 	LOAD_TASK_SIZE_MINUS_N(7)
+ 	cmp %_ASM_DX,%_ASM_AX
+@@ -112,7 +112,7 @@ SYM_FUNC_START(__get_user_8)
+ 5:	movl 4(%_ASM_AX),%ecx
+ 	xor %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ #endif
+ SYM_FUNC_END(__get_user_8)
+ EXPORT_SYMBOL(__get_user_8)
+@@ -124,7 +124,7 @@ SYM_FUNC_START(__get_user_nocheck_1)
+ 6:	movzbl (%_ASM_AX),%edx
+ 	xor %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ SYM_FUNC_END(__get_user_nocheck_1)
+ EXPORT_SYMBOL(__get_user_nocheck_1)
+ 
+@@ -134,7 +134,7 @@ SYM_FUNC_START(__get_user_nocheck_2)
+ 7:	movzwl (%_ASM_AX),%edx
+ 	xor %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ SYM_FUNC_END(__get_user_nocheck_2)
+ EXPORT_SYMBOL(__get_user_nocheck_2)
+ 
+@@ -144,7 +144,7 @@ SYM_FUNC_START(__get_user_nocheck_4)
+ 8:	movl (%_ASM_AX),%edx
+ 	xor %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ SYM_FUNC_END(__get_user_nocheck_4)
+ EXPORT_SYMBOL(__get_user_nocheck_4)
+ 
+@@ -159,7 +159,7 @@ SYM_FUNC_START(__get_user_nocheck_8)
+ #endif
+ 	xor %eax,%eax
+ 	ASM_CLAC
+-	ret
++	RET
+ SYM_FUNC_END(__get_user_nocheck_8)
+ EXPORT_SYMBOL(__get_user_nocheck_8)
+ 
+@@ -169,7 +169,7 @@ SYM_CODE_START_LOCAL(.Lbad_get_user_clac)
+ bad_get_user:
+ 	xor %edx,%edx
+ 	mov $(-EFAULT),%_ASM_AX
+-	ret
++	RET
+ SYM_CODE_END(.Lbad_get_user_clac)
+ 
+ #ifdef CONFIG_X86_32
+@@ -179,7 +179,7 @@ bad_get_user_8:
+ 	xor %edx,%edx
+ 	xor %ecx,%ecx
+ 	mov $(-EFAULT),%_ASM_AX
+-	ret
++	RET
+ SYM_CODE_END(.Lbad_get_user_8_clac)
+ #endif
+ 
+diff --git a/arch/x86/lib/hweight.S b/arch/x86/lib/hweight.S
+index dbf8cc97b7f53..12c16c6aa44a3 100644
+--- a/arch/x86/lib/hweight.S
++++ b/arch/x86/lib/hweight.S
+@@ -32,7 +32,7 @@ SYM_FUNC_START(__sw_hweight32)
+ 	imull $0x01010101, %eax, %eax		# w_tmp *= 0x01010101
+ 	shrl $24, %eax				# w = w_tmp >> 24
+ 	__ASM_SIZE(pop,) %__ASM_REG(dx)
+-	ret
++	RET
+ SYM_FUNC_END(__sw_hweight32)
+ EXPORT_SYMBOL(__sw_hweight32)
+ 
+@@ -65,7 +65,7 @@ SYM_FUNC_START(__sw_hweight64)
+ 
+ 	popq    %rdx
+ 	popq    %rdi
+-	ret
++	RET
+ #else /* CONFIG_X86_32 */
+ 	/* We're getting an u64 arg in (%eax,%edx): unsigned long hweight64(__u64 w) */
+ 	pushl   %ecx
+@@ -77,7 +77,7 @@ SYM_FUNC_START(__sw_hweight64)
+ 	addl    %ecx, %eax                      # result
+ 
+ 	popl    %ecx
+-	ret
++	RET
+ #endif
+ SYM_FUNC_END(__sw_hweight64)
+ EXPORT_SYMBOL(__sw_hweight64)
+diff --git a/arch/x86/lib/inat.c b/arch/x86/lib/inat.c
+index 12539fca75c4a..b0f3b2a62ae27 100644
+--- a/arch/x86/lib/inat.c
++++ b/arch/x86/lib/inat.c
+@@ -4,7 +4,7 @@
+  *
+  * Written by Masami Hiramatsu <mhiramat@redhat.com>
+  */
+-#include <asm/insn.h>
++#include <asm/insn.h> /* __ignore_sync_check__ */
+ 
+ /* Attribute tables are generated from opcode map */
+ #include "inat-tables.c"
+diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
+index c6a19c88af547..ffc8b7dcf1feb 100644
+--- a/arch/x86/lib/insn-eval.c
++++ b/arch/x86/lib/insn-eval.c
+@@ -928,10 +928,11 @@ static int get_seg_base_limit(struct insn *insn, struct pt_regs *regs,
+ static int get_eff_addr_reg(struct insn *insn, struct pt_regs *regs,
+ 			    int *regoff, long *eff_addr)
+ {
+-	insn_get_modrm(insn);
++	int ret;
+ 
+-	if (!insn->modrm.nbytes)
+-		return -EINVAL;
++	ret = insn_get_modrm(insn);
++	if (ret)
++		return ret;
+ 
+ 	if (X86_MODRM_MOD(insn->modrm.value) != 3)
+ 		return -EINVAL;
+@@ -977,14 +978,14 @@ static int get_eff_addr_modrm(struct insn *insn, struct pt_regs *regs,
+ 			      int *regoff, long *eff_addr)
+ {
+ 	long tmp;
++	int ret;
+ 
+ 	if (insn->addr_bytes != 8 && insn->addr_bytes != 4)
+ 		return -EINVAL;
+ 
+-	insn_get_modrm(insn);
+-
+-	if (!insn->modrm.nbytes)
+-		return -EINVAL;
++	ret = insn_get_modrm(insn);
++	if (ret)
++		return ret;
+ 
+ 	if (X86_MODRM_MOD(insn->modrm.value) > 2)
+ 		return -EINVAL;
+@@ -1106,18 +1107,21 @@ static int get_eff_addr_modrm_16(struct insn *insn, struct pt_regs *regs,
+  * @base_offset will have a register, as an offset from the base of pt_regs,
+  * that can be used to resolve the associated segment.
+  *
+- * -EINVAL on error.
++ * Negative value on error.
+  */
+ static int get_eff_addr_sib(struct insn *insn, struct pt_regs *regs,
+ 			    int *base_offset, long *eff_addr)
+ {
+ 	long base, indx;
+ 	int indx_offset;
++	int ret;
+ 
+ 	if (insn->addr_bytes != 8 && insn->addr_bytes != 4)
+ 		return -EINVAL;
+ 
+-	insn_get_modrm(insn);
++	ret = insn_get_modrm(insn);
++	if (ret)
++		return ret;
+ 
+ 	if (!insn->modrm.nbytes)
+ 		return -EINVAL;
+@@ -1125,7 +1129,9 @@ static int get_eff_addr_sib(struct insn *insn, struct pt_regs *regs,
+ 	if (X86_MODRM_MOD(insn->modrm.value) > 2)
+ 		return -EINVAL;
+ 
+-	insn_get_sib(insn);
++	ret = insn_get_sib(insn);
++	if (ret)
++		return ret;
+ 
+ 	if (!insn->sib.nbytes)
+ 		return -EINVAL;
+@@ -1194,8 +1200,8 @@ static void __user *get_addr_ref_16(struct insn *insn, struct pt_regs *regs)
+ 	short eff_addr;
+ 	long tmp;
+ 
+-	insn_get_modrm(insn);
+-	insn_get_displacement(insn);
++	if (insn_get_displacement(insn))
++		goto out;
+ 
+ 	if (insn->addr_bytes != 2)
+ 		goto out;
+@@ -1492,7 +1498,7 @@ int insn_fetch_from_user_inatomic(struct pt_regs *regs, unsigned char buf[MAX_IN
+ }
+ 
+ /**
+- * insn_decode() - Decode an instruction
++ * insn_decode_from_regs() - Decode an instruction
+  * @insn:	Structure to store decoded instruction
+  * @regs:	Structure with register values as seen when entering kernel mode
+  * @buf:	Buffer containing the instruction bytes
+@@ -1505,8 +1511,8 @@ int insn_fetch_from_user_inatomic(struct pt_regs *regs, unsigned char buf[MAX_IN
+  *
+  * True if instruction was decoded, False otherwise.
+  */
+-bool insn_decode(struct insn *insn, struct pt_regs *regs,
+-		 unsigned char buf[MAX_INSN_SIZE], int buf_size)
++bool insn_decode_from_regs(struct insn *insn, struct pt_regs *regs,
++			   unsigned char buf[MAX_INSN_SIZE], int buf_size)
+ {
+ 	int seg_defs;
+ 
+@@ -1529,7 +1535,9 @@ bool insn_decode(struct insn *insn, struct pt_regs *regs,
+ 	insn->addr_bytes = INSN_CODE_SEG_ADDR_SZ(seg_defs);
+ 	insn->opnd_bytes = INSN_CODE_SEG_OPND_SZ(seg_defs);
+ 
+-	insn_get_length(insn);
++	if (insn_get_length(insn))
++		return false;
++
+ 	if (buf_size < insn->length)
+ 		return false;
+ 
+diff --git a/arch/x86/lib/insn.c b/arch/x86/lib/insn.c
+index 404279563891a..24e89239dcca9 100644
+--- a/arch/x86/lib/insn.c
++++ b/arch/x86/lib/insn.c
+@@ -10,10 +10,13 @@
+ #else
+ #include <string.h>
+ #endif
+-#include <asm/inat.h>
+-#include <asm/insn.h>
++#include <asm/inat.h> /*__ignore_sync_check__ */
++#include <asm/insn.h> /* __ignore_sync_check__ */
+ 
+-#include <asm/emulate_prefix.h>
++#include <linux/errno.h>
++#include <linux/kconfig.h>
++
++#include <asm/emulate_prefix.h> /* __ignore_sync_check__ */
+ 
+ /* Verify next sizeof(t) bytes can be on the same instruction */
+ #define validate_next(t, insn, n)	\
+@@ -97,8 +100,12 @@ static void insn_get_emulate_prefix(struct insn *insn)
+  * Populates the @insn->prefixes bitmap, and updates @insn->next_byte
+  * to point to the (first) opcode.  No effect if @insn->prefixes.got
+  * is already set.
++ *
++ * * Returns:
++ * 0:  on success
++ * < 0: on error
+  */
+-void insn_get_prefixes(struct insn *insn)
++int insn_get_prefixes(struct insn *insn)
+ {
+ 	struct insn_field *prefixes = &insn->prefixes;
+ 	insn_attr_t attr;
+@@ -106,7 +113,7 @@ void insn_get_prefixes(struct insn *insn)
+ 	int i, nb;
+ 
+ 	if (prefixes->got)
+-		return;
++		return 0;
+ 
+ 	insn_get_emulate_prefix(insn);
+ 
+@@ -217,8 +224,10 @@ vex_end:
+ 
+ 	prefixes->got = 1;
+ 
++	return 0;
++
+ err_out:
+-	return;
++	return -ENODATA;
+ }
+ 
+ /**
+@@ -230,16 +239,25 @@ err_out:
+  * If necessary, first collects any preceding (prefix) bytes.
+  * Sets @insn->opcode.value = opcode1.  No effect if @insn->opcode.got
+  * is already 1.
++ *
++ * Returns:
++ * 0:  on success
++ * < 0: on error
+  */
+-void insn_get_opcode(struct insn *insn)
++int insn_get_opcode(struct insn *insn)
+ {
+ 	struct insn_field *opcode = &insn->opcode;
++	int pfx_id, ret;
+ 	insn_byte_t op;
+-	int pfx_id;
++
+ 	if (opcode->got)
+-		return;
+-	if (!insn->prefixes.got)
+-		insn_get_prefixes(insn);
++		return 0;
++
++	if (!insn->prefixes.got) {
++		ret = insn_get_prefixes(insn);
++		if (ret)
++			return ret;
++	}
+ 
+ 	/* Get first opcode */
+ 	op = get_next(insn_byte_t, insn);
+@@ -254,9 +272,13 @@ void insn_get_opcode(struct insn *insn)
+ 		insn->attr = inat_get_avx_attribute(op, m, p);
+ 		if ((inat_must_evex(insn->attr) && !insn_is_evex(insn)) ||
+ 		    (!inat_accept_vex(insn->attr) &&
+-		     !inat_is_group(insn->attr)))
+-			insn->attr = 0;	/* This instruction is bad */
+-		goto end;	/* VEX has only 1 byte for opcode */
++		     !inat_is_group(insn->attr))) {
++			/* This instruction is bad */
++			insn->attr = 0;
++			return -EINVAL;
++		}
++		/* VEX has only 1 byte for opcode */
++		goto end;
+ 	}
+ 
+ 	insn->attr = inat_get_opcode_attribute(op);
+@@ -267,13 +289,18 @@ void insn_get_opcode(struct insn *insn)
+ 		pfx_id = insn_last_prefix_id(insn);
+ 		insn->attr = inat_get_escape_attribute(op, pfx_id, insn->attr);
+ 	}
+-	if (inat_must_vex(insn->attr))
+-		insn->attr = 0;	/* This instruction is bad */
++
++	if (inat_must_vex(insn->attr)) {
++		/* This instruction is bad */
++		insn->attr = 0;
++		return -EINVAL;
++	}
+ end:
+ 	opcode->got = 1;
++	return 0;
+ 
+ err_out:
+-	return;
++	return -ENODATA;
+ }
+ 
+ /**
+@@ -283,15 +310,25 @@ err_out:
+  * Populates @insn->modrm and updates @insn->next_byte to point past the
+  * ModRM byte, if any.  If necessary, first collects the preceding bytes
+  * (prefixes and opcode(s)).  No effect if @insn->modrm.got is already 1.
++ *
++ * Returns:
++ * 0:  on success
++ * < 0: on error
+  */
+-void insn_get_modrm(struct insn *insn)
++int insn_get_modrm(struct insn *insn)
+ {
+ 	struct insn_field *modrm = &insn->modrm;
+ 	insn_byte_t pfx_id, mod;
++	int ret;
++
+ 	if (modrm->got)
+-		return;
+-	if (!insn->opcode.got)
+-		insn_get_opcode(insn);
++		return 0;
++
++	if (!insn->opcode.got) {
++		ret = insn_get_opcode(insn);
++		if (ret)
++			return ret;
++	}
+ 
+ 	if (inat_has_modrm(insn->attr)) {
+ 		mod = get_next(insn_byte_t, insn);
+@@ -301,17 +338,22 @@ void insn_get_modrm(struct insn *insn)
+ 			pfx_id = insn_last_prefix_id(insn);
+ 			insn->attr = inat_get_group_attribute(mod, pfx_id,
+ 							      insn->attr);
+-			if (insn_is_avx(insn) && !inat_accept_vex(insn->attr))
+-				insn->attr = 0;	/* This is bad */
++			if (insn_is_avx(insn) && !inat_accept_vex(insn->attr)) {
++				/* Bad insn */
++				insn->attr = 0;
++				return -EINVAL;
++			}
+ 		}
+ 	}
+ 
+ 	if (insn->x86_64 && inat_is_force64(insn->attr))
+ 		insn->opnd_bytes = 8;
++
+ 	modrm->got = 1;
++	return 0;
+ 
+ err_out:
+-	return;
++	return -ENODATA;
+ }
+ 
+ 
+@@ -325,11 +367,16 @@ err_out:
+ int insn_rip_relative(struct insn *insn)
+ {
+ 	struct insn_field *modrm = &insn->modrm;
++	int ret;
+ 
+ 	if (!insn->x86_64)
+ 		return 0;
+-	if (!modrm->got)
+-		insn_get_modrm(insn);
++
++	if (!modrm->got) {
++		ret = insn_get_modrm(insn);
++		if (ret)
++			return 0;
++	}
+ 	/*
+ 	 * For rip-relative instructions, the mod field (top 2 bits)
+ 	 * is zero and the r/m field (bottom 3 bits) is 0x5.
+@@ -343,15 +390,25 @@ int insn_rip_relative(struct insn *insn)
+  *
+  * If necessary, first collects the instruction up to and including the
+  * ModRM byte.
++ *
++ * Returns:
++ * 0: if decoding succeeded
++ * < 0: otherwise.
+  */
+-void insn_get_sib(struct insn *insn)
++int insn_get_sib(struct insn *insn)
+ {
+ 	insn_byte_t modrm;
++	int ret;
+ 
+ 	if (insn->sib.got)
+-		return;
+-	if (!insn->modrm.got)
+-		insn_get_modrm(insn);
++		return 0;
++
++	if (!insn->modrm.got) {
++		ret = insn_get_modrm(insn);
++		if (ret)
++			return ret;
++	}
++
+ 	if (insn->modrm.nbytes) {
+ 		modrm = (insn_byte_t)insn->modrm.value;
+ 		if (insn->addr_bytes != 2 &&
+@@ -362,8 +419,10 @@ void insn_get_sib(struct insn *insn)
+ 	}
+ 	insn->sib.got = 1;
+ 
++	return 0;
++
+ err_out:
+-	return;
++	return -ENODATA;
+ }
+ 
+ 
+@@ -374,15 +433,25 @@ err_out:
+  * If necessary, first collects the instruction up to and including the
+  * SIB byte.
+  * Displacement value is sign-expanded.
++ *
++ * * Returns:
++ * 0: if decoding succeeded
++ * < 0: otherwise.
+  */
+-void insn_get_displacement(struct insn *insn)
++int insn_get_displacement(struct insn *insn)
+ {
+ 	insn_byte_t mod, rm, base;
++	int ret;
+ 
+ 	if (insn->displacement.got)
+-		return;
+-	if (!insn->sib.got)
+-		insn_get_sib(insn);
++		return 0;
++
++	if (!insn->sib.got) {
++		ret = insn_get_sib(insn);
++		if (ret)
++			return ret;
++	}
++
+ 	if (insn->modrm.nbytes) {
+ 		/*
+ 		 * Interpreting the modrm byte:
+@@ -425,9 +494,10 @@ void insn_get_displacement(struct insn *insn)
+ 	}
+ out:
+ 	insn->displacement.got = 1;
++	return 0;
+ 
+ err_out:
+-	return;
++	return -ENODATA;
+ }
+ 
+ /* Decode moffset16/32/64. Return 0 if failed */
+@@ -538,20 +608,30 @@ err_out:
+ }
+ 
+ /**
+- * insn_get_immediate() - Get the immediates of instruction
++ * insn_get_immediate() - Get the immediate in an instruction
+  * @insn:	&struct insn containing instruction
+  *
+  * If necessary, first collects the instruction up to and including the
+  * displacement bytes.
+  * Basically, most of immediates are sign-expanded. Unsigned-value can be
+- * get by bit masking with ((1 << (nbytes * 8)) - 1)
++ * computed by bit masking with ((1 << (nbytes * 8)) - 1)
++ *
++ * Returns:
++ * 0:  on success
++ * < 0: on error
+  */
+-void insn_get_immediate(struct insn *insn)
++int insn_get_immediate(struct insn *insn)
+ {
++	int ret;
++
+ 	if (insn->immediate.got)
+-		return;
+-	if (!insn->displacement.got)
+-		insn_get_displacement(insn);
++		return 0;
++
++	if (!insn->displacement.got) {
++		ret = insn_get_displacement(insn);
++		if (ret)
++			return ret;
++	}
+ 
+ 	if (inat_has_moffset(insn->attr)) {
+ 		if (!__get_moffset(insn))
+@@ -604,9 +684,10 @@ void insn_get_immediate(struct insn *insn)
+ 	}
+ done:
+ 	insn->immediate.got = 1;
++	return 0;
+ 
+ err_out:
+-	return;
++	return -ENODATA;
+ }
+ 
+ /**
+@@ -615,13 +696,58 @@ err_out:
+  *
+  * If necessary, first collects the instruction up to and including the
+  * immediates bytes.
+- */
+-void insn_get_length(struct insn *insn)
++ *
++ * Returns:
++ *  - 0 on success
++ *  - < 0 on error
++*/
++int insn_get_length(struct insn *insn)
+ {
++	int ret;
++
+ 	if (insn->length)
+-		return;
+-	if (!insn->immediate.got)
+-		insn_get_immediate(insn);
++		return 0;
++
++	if (!insn->immediate.got) {
++		ret = insn_get_immediate(insn);
++		if (ret)
++			return ret;
++	}
++
+ 	insn->length = (unsigned char)((unsigned long)insn->next_byte
+ 				     - (unsigned long)insn->kaddr);
++
++	return 0;
++}
++
++/**
++ * insn_decode() - Decode an x86 instruction
++ * @insn:	&struct insn to be initialized
++ * @kaddr:	address (in kernel memory) of instruction (or copy thereof)
++ * @buf_len:	length of the insn buffer at @kaddr
++ * @m:		insn mode, see enum insn_mode
++ *
++ * Returns:
++ * 0: if decoding succeeded
++ * < 0: otherwise.
++ */
++int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m)
++{
++	int ret;
++
++/* #define INSN_MODE_KERN	-1 __ignore_sync_check__ mode is only valid in the kernel */
++
++	if (m == INSN_MODE_KERN)
++		insn_init(insn, kaddr, buf_len, IS_ENABLED(CONFIG_X86_64));
++	else
++		insn_init(insn, kaddr, buf_len, m == INSN_MODE_64);
++
++	ret = insn_get_length(insn);
++	if (ret)
++		return ret;
++
++	if (insn_complete(insn))
++		return 0;
++
++	return -EINVAL;
+ }
+diff --git a/arch/x86/lib/iomap_copy_64.S b/arch/x86/lib/iomap_copy_64.S
+index cb5a1964506b1..a1f9416bf67a5 100644
+--- a/arch/x86/lib/iomap_copy_64.S
++++ b/arch/x86/lib/iomap_copy_64.S
+@@ -11,5 +11,5 @@
+ SYM_FUNC_START(__iowrite32_copy)
+ 	movl %edx,%ecx
+ 	rep movsd
+-	ret
++	RET
+ SYM_FUNC_END(__iowrite32_copy)
+diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
+index 1e299ac73c869..59cf2343f3d90 100644
+--- a/arch/x86/lib/memcpy_64.S
++++ b/arch/x86/lib/memcpy_64.S
+@@ -4,7 +4,7 @@
+ #include <linux/linkage.h>
+ #include <asm/errno.h>
+ #include <asm/cpufeatures.h>
+-#include <asm/alternative-asm.h>
++#include <asm/alternative.h>
+ #include <asm/export.h>
+ 
+ .pushsection .noinstr.text, "ax"
+@@ -39,7 +39,7 @@ SYM_FUNC_START_WEAK(memcpy)
+ 	rep movsq
+ 	movl %edx, %ecx
+ 	rep movsb
+-	ret
++	RET
+ SYM_FUNC_END(memcpy)
+ SYM_FUNC_END_ALIAS(__memcpy)
+ EXPORT_SYMBOL(memcpy)
+@@ -53,7 +53,7 @@ SYM_FUNC_START_LOCAL(memcpy_erms)
+ 	movq %rdi, %rax
+ 	movq %rdx, %rcx
+ 	rep movsb
+-	ret
++	RET
+ SYM_FUNC_END(memcpy_erms)
+ 
+ SYM_FUNC_START_LOCAL(memcpy_orig)
+@@ -137,7 +137,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig)
+ 	movq %r9,	1*8(%rdi)
+ 	movq %r10,	-2*8(%rdi, %rdx)
+ 	movq %r11,	-1*8(%rdi, %rdx)
+-	retq
++	RET
+ 	.p2align 4
+ .Lless_16bytes:
+ 	cmpl $8,	%edx
+@@ -149,7 +149,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig)
+ 	movq -1*8(%rsi, %rdx),	%r9
+ 	movq %r8,	0*8(%rdi)
+ 	movq %r9,	-1*8(%rdi, %rdx)
+-	retq
++	RET
+ 	.p2align 4
+ .Lless_8bytes:
+ 	cmpl $4,	%edx
+@@ -162,7 +162,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig)
+ 	movl -4(%rsi, %rdx), %r8d
+ 	movl %ecx, (%rdi)
+ 	movl %r8d, -4(%rdi, %rdx)
+-	retq
++	RET
+ 	.p2align 4
+ .Lless_3bytes:
+ 	subl $1, %edx
+@@ -180,7 +180,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig)
+ 	movb %cl, (%rdi)
+ 
+ .Lend:
+-	retq
++	RET
+ SYM_FUNC_END(memcpy_orig)
+ 
+ .popsection
+diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
+index 41902fe8b8598..4b8ee3a2fcc37 100644
+--- a/arch/x86/lib/memmove_64.S
++++ b/arch/x86/lib/memmove_64.S
+@@ -8,7 +8,7 @@
+  */
+ #include <linux/linkage.h>
+ #include <asm/cpufeatures.h>
+-#include <asm/alternative-asm.h>
++#include <asm/alternative.h>
+ #include <asm/export.h>
+ 
+ #undef memmove
+@@ -40,7 +40,7 @@ SYM_FUNC_START(__memmove)
+ 	/* FSRM implies ERMS => no length checks, do the copy directly */
+ .Lmemmove_begin_forward:
+ 	ALTERNATIVE "cmp $0x20, %rdx; jb 1f", "", X86_FEATURE_FSRM
+-	ALTERNATIVE "", "movq %rdx, %rcx; rep movsb; retq", X86_FEATURE_ERMS
++	ALTERNATIVE "", "jmp .Lmemmove_erms", X86_FEATURE_ERMS
+ 
+ 	/*
+ 	 * movsq instruction have many startup latency
+@@ -205,7 +205,12 @@ SYM_FUNC_START(__memmove)
+ 	movb (%rsi), %r11b
+ 	movb %r11b, (%rdi)
+ 13:
+-	retq
++	RET
++
++.Lmemmove_erms:
++	movq %rdx, %rcx
++	rep movsb
++	RET
+ SYM_FUNC_END(__memmove)
+ SYM_FUNC_END_ALIAS(memmove)
+ EXPORT_SYMBOL(__memmove)
+diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
+index 0bfd26e4ca9e9..d624f2bc42f16 100644
+--- a/arch/x86/lib/memset_64.S
++++ b/arch/x86/lib/memset_64.S
+@@ -3,7 +3,7 @@
+ 
+ #include <linux/linkage.h>
+ #include <asm/cpufeatures.h>
+-#include <asm/alternative-asm.h>
++#include <asm/alternative.h>
+ #include <asm/export.h>
+ 
+ /*
+@@ -40,7 +40,7 @@ SYM_FUNC_START(__memset)
+ 	movl %edx,%ecx
+ 	rep stosb
+ 	movq %r9,%rax
+-	ret
++	RET
+ SYM_FUNC_END(__memset)
+ SYM_FUNC_END_ALIAS(memset)
+ EXPORT_SYMBOL(memset)
+@@ -63,7 +63,7 @@ SYM_FUNC_START_LOCAL(memset_erms)
+ 	movq %rdx,%rcx
+ 	rep stosb
+ 	movq %r9,%rax
+-	ret
++	RET
+ SYM_FUNC_END(memset_erms)
+ 
+ SYM_FUNC_START_LOCAL(memset_orig)
+@@ -125,7 +125,7 @@ SYM_FUNC_START_LOCAL(memset_orig)
+ 
+ .Lende:
+ 	movq	%r10,%rax
+-	ret
++	RET
+ 
+ .Lbad_alignment:
+ 	cmpq $7,%rdx
+diff --git a/arch/x86/lib/msr-reg.S b/arch/x86/lib/msr-reg.S
+index a2b9caa5274c8..ebd259f314963 100644
+--- a/arch/x86/lib/msr-reg.S
++++ b/arch/x86/lib/msr-reg.S
+@@ -35,7 +35,7 @@ SYM_FUNC_START(\op\()_safe_regs)
+ 	movl    %edi, 28(%r10)
+ 	popq %r12
+ 	popq %rbx
+-	ret
++	RET
+ 3:
+ 	movl    $-EIO, %r11d
+ 	jmp     2b
+@@ -77,7 +77,7 @@ SYM_FUNC_START(\op\()_safe_regs)
+ 	popl %esi
+ 	popl %ebp
+ 	popl %ebx
+-	ret
++	RET
+ 3:
+ 	movl    $-EIO, 4(%esp)
+ 	jmp     2b
+diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S
+index 0ea344c5ea439..ecb2049c1273f 100644
+--- a/arch/x86/lib/putuser.S
++++ b/arch/x86/lib/putuser.S
+@@ -52,7 +52,7 @@ SYM_INNER_LABEL(__put_user_nocheck_1, SYM_L_GLOBAL)
+ 1:	movb %al,(%_ASM_CX)
+ 	xor %ecx,%ecx
+ 	ASM_CLAC
+-	ret
++	RET
+ SYM_FUNC_END(__put_user_1)
+ EXPORT_SYMBOL(__put_user_1)
+ EXPORT_SYMBOL(__put_user_nocheck_1)
+@@ -66,7 +66,7 @@ SYM_INNER_LABEL(__put_user_nocheck_2, SYM_L_GLOBAL)
+ 2:	movw %ax,(%_ASM_CX)
+ 	xor %ecx,%ecx
+ 	ASM_CLAC
+-	ret
++	RET
+ SYM_FUNC_END(__put_user_2)
+ EXPORT_SYMBOL(__put_user_2)
+ EXPORT_SYMBOL(__put_user_nocheck_2)
+@@ -80,7 +80,7 @@ SYM_INNER_LABEL(__put_user_nocheck_4, SYM_L_GLOBAL)
+ 3:	movl %eax,(%_ASM_CX)
+ 	xor %ecx,%ecx
+ 	ASM_CLAC
+-	ret
++	RET
+ SYM_FUNC_END(__put_user_4)
+ EXPORT_SYMBOL(__put_user_4)
+ EXPORT_SYMBOL(__put_user_nocheck_4)
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index b4c43a9b14836..1221bb099afb4 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -4,33 +4,37 @@
+ #include <linux/linkage.h>
+ #include <asm/dwarf2.h>
+ #include <asm/cpufeatures.h>
+-#include <asm/alternative-asm.h>
++#include <asm/alternative.h>
+ #include <asm/export.h>
+ #include <asm/nospec-branch.h>
+ #include <asm/unwind_hints.h>
+ #include <asm/frame.h>
+ 
+-.macro THUNK reg
+ 	.section .text.__x86.indirect_thunk
+ 
+-	.align 32
+-SYM_FUNC_START(__x86_indirect_thunk_\reg)
+-	JMP_NOSPEC \reg
+-SYM_FUNC_END(__x86_indirect_thunk_\reg)
+-
+-SYM_FUNC_START_NOALIGN(__x86_retpoline_\reg)
++.macro RETPOLINE reg
+ 	ANNOTATE_INTRA_FUNCTION_CALL
+-	call	.Ldo_rop_\@
++	call    .Ldo_rop_\@
+ .Lspec_trap_\@:
+ 	UNWIND_HINT_EMPTY
+ 	pause
+ 	lfence
+-	jmp	.Lspec_trap_\@
++	jmp .Lspec_trap_\@
+ .Ldo_rop_\@:
+-	mov	%\reg, (%_ASM_SP)
+-	UNWIND_HINT_RET_OFFSET
+-	ret
+-SYM_FUNC_END(__x86_retpoline_\reg)
++	mov     %\reg, (%_ASM_SP)
++	UNWIND_HINT_FUNC
++	RET
++.endm
++
++.macro THUNK reg
++
++	.align RETPOLINE_THUNK_SIZE
++SYM_INNER_LABEL(__x86_indirect_thunk_\reg, SYM_L_GLOBAL)
++	UNWIND_HINT_EMPTY
++
++	ALTERNATIVE_2 __stringify(RETPOLINE \reg), \
++		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg; int3), X86_FEATURE_RETPOLINE_LFENCE, \
++		      __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), ALT_NOT(X86_FEATURE_RETPOLINE)
+ 
+ .endm
+ 
+@@ -48,16 +52,90 @@ SYM_FUNC_END(__x86_retpoline_\reg)
+ 
+ #define __EXPORT_THUNK(sym)	_ASM_NOKPROBE(sym); EXPORT_SYMBOL(sym)
+ #define EXPORT_THUNK(reg)	__EXPORT_THUNK(__x86_indirect_thunk_ ## reg)
+-#define EXPORT_RETPOLINE(reg)  __EXPORT_THUNK(__x86_retpoline_ ## reg)
+ 
+-#undef GEN
++	.align RETPOLINE_THUNK_SIZE
++SYM_CODE_START(__x86_indirect_thunk_array)
++
+ #define GEN(reg) THUNK reg
+ #include <asm/GEN-for-each-reg.h>
+-
+ #undef GEN
++
++	.align RETPOLINE_THUNK_SIZE
++SYM_CODE_END(__x86_indirect_thunk_array)
++
+ #define GEN(reg) EXPORT_THUNK(reg)
+ #include <asm/GEN-for-each-reg.h>
+-
+ #undef GEN
+-#define GEN(reg) EXPORT_RETPOLINE(reg)
+-#include <asm/GEN-for-each-reg.h>
++
++/*
++ * This function name is magical and is used by -mfunction-return=thunk-extern
++ * for the compiler to generate JMPs to it.
++ */
++#ifdef CONFIG_RETHUNK
++
++	.section .text.__x86.return_thunk
++
++/*
++ * Safety details here pertain to the AMD Zen{1,2} microarchitecture:
++ * 1) The RET at __x86_return_thunk must be on a 64 byte boundary, for
++ *    alignment within the BTB.
++ * 2) The instruction at zen_untrain_ret must contain, and not
++ *    end with, the 0xc3 byte of the RET.
++ * 3) STIBP must be enabled, or SMT disabled, to prevent the sibling thread
++ *    from re-poisioning the BTB prediction.
++ */
++	.align 64
++	.skip 63, 0xcc
++SYM_FUNC_START_NOALIGN(zen_untrain_ret);
++
++	/*
++	 * As executed from zen_untrain_ret, this is:
++	 *
++	 *   TEST $0xcc, %bl
++	 *   LFENCE
++	 *   JMP __x86_return_thunk
++	 *
++	 * Executing the TEST instruction has a side effect of evicting any BTB
++	 * prediction (potentially attacker controlled) attached to the RET, as
++	 * __x86_return_thunk + 1 isn't an instruction boundary at the moment.
++	 */
++	.byte	0xf6
++
++	/*
++	 * As executed from __x86_return_thunk, this is a plain RET.
++	 *
++	 * As part of the TEST above, RET is the ModRM byte, and INT3 the imm8.
++	 *
++	 * We subsequently jump backwards and architecturally execute the RET.
++	 * This creates a correct BTB prediction (type=ret), but in the
++	 * meantime we suffer Straight Line Speculation (because the type was
++	 * no branch) which is halted by the INT3.
++	 *
++	 * With SMT enabled and STIBP active, a sibling thread cannot poison
++	 * RET's prediction to a type of its choice, but can evict the
++	 * prediction due to competitive sharing. If the prediction is
++	 * evicted, __x86_return_thunk will suffer Straight Line Speculation
++	 * which will be contained safely by the INT3.
++	 */
++SYM_INNER_LABEL(__x86_return_thunk, SYM_L_GLOBAL)
++	ret
++	int3
++SYM_CODE_END(__x86_return_thunk)
++
++	/*
++	 * Ensure the TEST decoding / BTB invalidation is complete.
++	 */
++	lfence
++
++	/*
++	 * Jump back and execute the RET in the middle of the TEST instruction.
++	 * INT3 is for SLS protection.
++	 */
++	jmp __x86_return_thunk
++	int3
++SYM_FUNC_END(zen_untrain_ret)
++__EXPORT_THUNK(zen_untrain_ret)
++
++EXPORT_SYMBOL(__x86_return_thunk)
++
++#endif /* CONFIG_RETHUNK */
+diff --git a/arch/x86/math-emu/div_Xsig.S b/arch/x86/math-emu/div_Xsig.S
+index 951da2ad54bbf..8c270ab415bee 100644
+--- a/arch/x86/math-emu/div_Xsig.S
++++ b/arch/x86/math-emu/div_Xsig.S
+@@ -341,7 +341,7 @@ L_exit:
+ 	popl	%esi
+ 
+ 	leave
+-	ret
++	RET
+ 
+ 
+ #ifdef PARANOID
+diff --git a/arch/x86/math-emu/div_small.S b/arch/x86/math-emu/div_small.S
+index d047d1816abe9..637439bfefa47 100644
+--- a/arch/x86/math-emu/div_small.S
++++ b/arch/x86/math-emu/div_small.S
+@@ -44,5 +44,5 @@ SYM_FUNC_START(FPU_div_small)
+ 	popl	%esi
+ 
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(FPU_div_small)
+diff --git a/arch/x86/math-emu/mul_Xsig.S b/arch/x86/math-emu/mul_Xsig.S
+index 4afc7b1fa6e95..54a031b661421 100644
+--- a/arch/x86/math-emu/mul_Xsig.S
++++ b/arch/x86/math-emu/mul_Xsig.S
+@@ -62,7 +62,7 @@ SYM_FUNC_START(mul32_Xsig)
+ 
+ 	popl %esi
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(mul32_Xsig)
+ 
+ 
+@@ -115,7 +115,7 @@ SYM_FUNC_START(mul64_Xsig)
+ 
+ 	popl %esi
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(mul64_Xsig)
+ 
+ 
+@@ -175,5 +175,5 @@ SYM_FUNC_START(mul_Xsig_Xsig)
+ 
+ 	popl %esi
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(mul_Xsig_Xsig)
+diff --git a/arch/x86/math-emu/polynom_Xsig.S b/arch/x86/math-emu/polynom_Xsig.S
+index 702315eecb860..35fd723fc0df8 100644
+--- a/arch/x86/math-emu/polynom_Xsig.S
++++ b/arch/x86/math-emu/polynom_Xsig.S
+@@ -133,5 +133,5 @@ L_accum_done:
+ 	popl	%edi
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(polynomial_Xsig)
+diff --git a/arch/x86/math-emu/reg_norm.S b/arch/x86/math-emu/reg_norm.S
+index cad1d60b1e844..594936eeed67a 100644
+--- a/arch/x86/math-emu/reg_norm.S
++++ b/arch/x86/math-emu/reg_norm.S
+@@ -72,7 +72,7 @@ L_exit_valid:
+ L_exit:
+ 	popl	%ebx
+ 	leave
+-	ret
++	RET
+ 
+ 
+ L_zero:
+@@ -138,7 +138,7 @@ L_exit_nuo_valid:
+ 
+ 	popl	%ebx
+ 	leave
+-	ret
++	RET
+ 
+ L_exit_nuo_zero:
+ 	movl	TAG_Zero,%eax
+@@ -146,5 +146,5 @@ L_exit_nuo_zero:
+ 
+ 	popl	%ebx
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(FPU_normalize_nuo)
+diff --git a/arch/x86/math-emu/reg_round.S b/arch/x86/math-emu/reg_round.S
+index 11a1f798451bd..8bdbdcee74560 100644
+--- a/arch/x86/math-emu/reg_round.S
++++ b/arch/x86/math-emu/reg_round.S
+@@ -437,7 +437,7 @@ fpu_Arith_exit:
+ 	popl	%edi
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ 
+ 
+ /*
+diff --git a/arch/x86/math-emu/reg_u_add.S b/arch/x86/math-emu/reg_u_add.S
+index 9c9e2c810afe8..07247287a3af7 100644
+--- a/arch/x86/math-emu/reg_u_add.S
++++ b/arch/x86/math-emu/reg_u_add.S
+@@ -164,6 +164,6 @@ L_exit:
+ 	popl	%edi
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ #endif /* PARANOID */
+ SYM_FUNC_END(FPU_u_add)
+diff --git a/arch/x86/math-emu/reg_u_div.S b/arch/x86/math-emu/reg_u_div.S
+index e2fb5c2644c55..b5a41e2fc484c 100644
+--- a/arch/x86/math-emu/reg_u_div.S
++++ b/arch/x86/math-emu/reg_u_div.S
+@@ -468,7 +468,7 @@ L_exit:
+ 	popl	%esi
+ 
+ 	leave
+-	ret
++	RET
+ #endif /* PARANOID */ 
+ 
+ SYM_FUNC_END(FPU_u_div)
+diff --git a/arch/x86/math-emu/reg_u_mul.S b/arch/x86/math-emu/reg_u_mul.S
+index 0c779c87ac5b3..e2588b24b8c2c 100644
+--- a/arch/x86/math-emu/reg_u_mul.S
++++ b/arch/x86/math-emu/reg_u_mul.S
+@@ -144,7 +144,7 @@ L_exit:
+ 	popl	%edi
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ #endif /* PARANOID */ 
+ 
+ SYM_FUNC_END(FPU_u_mul)
+diff --git a/arch/x86/math-emu/reg_u_sub.S b/arch/x86/math-emu/reg_u_sub.S
+index e9bb7c248649f..4c900c29e4ff2 100644
+--- a/arch/x86/math-emu/reg_u_sub.S
++++ b/arch/x86/math-emu/reg_u_sub.S
+@@ -270,5 +270,5 @@ L_exit:
+ 	popl	%edi
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(FPU_u_sub)
+diff --git a/arch/x86/math-emu/round_Xsig.S b/arch/x86/math-emu/round_Xsig.S
+index d9d7de8dbd7b6..126c40473badb 100644
+--- a/arch/x86/math-emu/round_Xsig.S
++++ b/arch/x86/math-emu/round_Xsig.S
+@@ -78,7 +78,7 @@ L_exit:
+ 	popl	%esi
+ 	popl	%ebx
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(round_Xsig)
+ 
+ 
+@@ -138,5 +138,5 @@ L_n_exit:
+ 	popl	%esi
+ 	popl	%ebx
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(norm_Xsig)
+diff --git a/arch/x86/math-emu/shr_Xsig.S b/arch/x86/math-emu/shr_Xsig.S
+index 726af985f7582..f726bf6f6396e 100644
+--- a/arch/x86/math-emu/shr_Xsig.S
++++ b/arch/x86/math-emu/shr_Xsig.S
+@@ -45,7 +45,7 @@ SYM_FUNC_START(shr_Xsig)
+ 	popl	%ebx
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ 
+ L_more_than_31:
+ 	cmpl	$64,%ecx
+@@ -61,7 +61,7 @@ L_more_than_31:
+ 	movl	$0,8(%esi)
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ 
+ L_more_than_63:
+ 	cmpl	$96,%ecx
+@@ -76,7 +76,7 @@ L_more_than_63:
+ 	movl	%edx,8(%esi)
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ 
+ L_more_than_95:
+ 	xorl	%eax,%eax
+@@ -85,5 +85,5 @@ L_more_than_95:
+ 	movl	%eax,8(%esi)
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(shr_Xsig)
+diff --git a/arch/x86/math-emu/wm_shrx.S b/arch/x86/math-emu/wm_shrx.S
+index 4fc89174caf0c..f608a28a4c43a 100644
+--- a/arch/x86/math-emu/wm_shrx.S
++++ b/arch/x86/math-emu/wm_shrx.S
+@@ -55,7 +55,7 @@ SYM_FUNC_START(FPU_shrx)
+ 	popl	%ebx
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ 
+ L_more_than_31:
+ 	cmpl	$64,%ecx
+@@ -70,7 +70,7 @@ L_more_than_31:
+ 	movl	$0,4(%esi)
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ 
+ L_more_than_63:
+ 	cmpl	$96,%ecx
+@@ -84,7 +84,7 @@ L_more_than_63:
+ 	movl	%edx,4(%esi)
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ 
+ L_more_than_95:
+ 	xorl	%eax,%eax
+@@ -92,7 +92,7 @@ L_more_than_95:
+ 	movl	%eax,4(%esi)
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(FPU_shrx)
+ 
+ 
+@@ -146,7 +146,7 @@ SYM_FUNC_START(FPU_shrxs)
+ 	popl	%ebx
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ 
+ /* Shift by [0..31] bits */
+ Ls_less_than_32:
+@@ -163,7 +163,7 @@ Ls_less_than_32:
+ 	popl	%ebx
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ 
+ /* Shift by [64..95] bits */
+ Ls_more_than_63:
+@@ -189,7 +189,7 @@ Ls_more_than_63:
+ 	popl	%ebx
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ 
+ Ls_more_than_95:
+ /* Shift by [96..inf) bits */
+@@ -203,5 +203,5 @@ Ls_more_than_95:
+ 	popl	%ebx
+ 	popl	%esi
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(FPU_shrxs)
+diff --git a/arch/x86/mm/mem_encrypt_boot.S b/arch/x86/mm/mem_encrypt_boot.S
+index 7a84fc8bc5c36..145b67299ab67 100644
+--- a/arch/x86/mm/mem_encrypt_boot.S
++++ b/arch/x86/mm/mem_encrypt_boot.S
+@@ -65,7 +65,10 @@ SYM_FUNC_START(sme_encrypt_execute)
+ 	movq	%rbp, %rsp		/* Restore original stack pointer */
+ 	pop	%rbp
+ 
++	/* Offset to __x86_return_thunk would be wrong here */
++	ANNOTATE_UNRET_SAFE
+ 	ret
++	int3
+ SYM_FUNC_END(sme_encrypt_execute)
+ 
+ SYM_FUNC_START(__enc_copy)
+@@ -151,6 +154,9 @@ SYM_FUNC_START(__enc_copy)
+ 	pop	%r12
+ 	pop	%r15
+ 
++	/* Offset to __x86_return_thunk would be wrong here */
++	ANNOTATE_UNRET_SAFE
+ 	ret
++	int3
+ .L__enc_copy_end:
+ SYM_FUNC_END(__enc_copy)
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 1714e85eb26d2..8e3c3d8916dd2 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -15,7 +15,6 @@
+ #include <asm/set_memory.h>
+ #include <asm/nospec-branch.h>
+ #include <asm/text-patching.h>
+-#include <asm/asm-prototypes.h>
+ 
+ static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len)
+ {
+@@ -213,6 +212,14 @@ static void jit_fill_hole(void *area, unsigned int size)
+ 
+ struct jit_context {
+ 	int cleanup_addr; /* Epilogue code offset */
++
++	/*
++	 * Program specific offsets of labels in the code; these rely on the
++	 * JIT doing at least 2 passes, recording the position on the first
++	 * pass, only to generate the correct offset on the second pass.
++	 */
++	int tail_call_direct_label;
++	int tail_call_indirect_label;
+ };
+ 
+ /* Maximum number of bytes emitted while JITing one eBPF insn */
+@@ -372,20 +379,40 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
+ 	return __bpf_arch_text_poke(ip, t, old_addr, new_addr, true);
+ }
+ 
+-static int get_pop_bytes(bool *callee_regs_used)
++#define EMIT_LFENCE()	EMIT3(0x0F, 0xAE, 0xE8)
++
++static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
+ {
+-	int bytes = 0;
++	u8 *prog = *pprog;
++	int cnt = 0;
+ 
+-	if (callee_regs_used[3])
+-		bytes += 2;
+-	if (callee_regs_used[2])
+-		bytes += 2;
+-	if (callee_regs_used[1])
+-		bytes += 2;
+-	if (callee_regs_used[0])
+-		bytes += 1;
++#ifdef CONFIG_RETPOLINE
++	if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) {
++		EMIT_LFENCE();
++		EMIT2(0xFF, 0xE0 + reg);
++	} else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
++		emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip);
++	} else
++#endif
++	EMIT2(0xFF, 0xE0 + reg);
++
++	*pprog = prog;
++}
++
++static void emit_return(u8 **pprog, u8 *ip)
++{
++	u8 *prog = *pprog;
++	int cnt = 0;
++
++	if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
++		emit_jump(&prog, &__x86_return_thunk, ip);
++	} else {
++		EMIT1(0xC3);		/* ret */
++		if (IS_ENABLED(CONFIG_SLS))
++			EMIT1(0xCC);	/* int3 */
++	}
+ 
+-	return bytes;
++	*pprog = prog;
+ }
+ 
+ /*
+@@ -403,30 +430,12 @@ static int get_pop_bytes(bool *callee_regs_used)
+  * out:
+  */
+ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used,
+-					u32 stack_depth)
++					u32 stack_depth, u8 *ip,
++					struct jit_context *ctx)
+ {
+ 	int tcc_off = -4 - round_up(stack_depth, 8);
+-	u8 *prog = *pprog;
+-	int pop_bytes = 0;
+-	int off1 = 42;
+-	int off2 = 31;
+-	int off3 = 9;
+-	int cnt = 0;
+-
+-	/* count the additional bytes used for popping callee regs from stack
+-	 * that need to be taken into account for each of the offsets that
+-	 * are used for bailing out of the tail call
+-	 */
+-	pop_bytes = get_pop_bytes(callee_regs_used);
+-	off1 += pop_bytes;
+-	off2 += pop_bytes;
+-	off3 += pop_bytes;
+-
+-	if (stack_depth) {
+-		off1 += 7;
+-		off2 += 7;
+-		off3 += 7;
+-	}
++	u8 *prog = *pprog, *start = *pprog;
++	int cnt = 0, offset;
+ 
+ 	/*
+ 	 * rdi - pointer to ctx
+@@ -441,8 +450,9 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used,
+ 	EMIT2(0x89, 0xD2);                        /* mov edx, edx */
+ 	EMIT3(0x39, 0x56,                         /* cmp dword ptr [rsi + 16], edx */
+ 	      offsetof(struct bpf_array, map.max_entries));
+-#define OFFSET1 (off1 + RETPOLINE_RCX_BPF_JIT_SIZE) /* Number of bytes to jump */
+-	EMIT2(X86_JBE, OFFSET1);                  /* jbe out */
++
++	offset = ctx->tail_call_indirect_label - (prog + 2 - start);
++	EMIT2(X86_JBE, offset);                   /* jbe out */
+ 
+ 	/*
+ 	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
+@@ -450,8 +460,9 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used,
+ 	 */
+ 	EMIT2_off32(0x8B, 0x85, tcc_off);         /* mov eax, dword ptr [rbp - tcc_off] */
+ 	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);     /* cmp eax, MAX_TAIL_CALL_CNT */
+-#define OFFSET2 (off2 + RETPOLINE_RCX_BPF_JIT_SIZE)
+-	EMIT2(X86_JA, OFFSET2);                   /* ja out */
++
++	offset = ctx->tail_call_indirect_label - (prog + 2 - start);
++	EMIT2(X86_JA, offset);                    /* ja out */
+ 	EMIT3(0x83, 0xC0, 0x01);                  /* add eax, 1 */
+ 	EMIT2_off32(0x89, 0x85, tcc_off);         /* mov dword ptr [rbp - tcc_off], eax */
+ 
+@@ -464,12 +475,11 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used,
+ 	 *	goto out;
+ 	 */
+ 	EMIT3(0x48, 0x85, 0xC9);                  /* test rcx,rcx */
+-#define OFFSET3 (off3 + RETPOLINE_RCX_BPF_JIT_SIZE)
+-	EMIT2(X86_JE, OFFSET3);                   /* je out */
+ 
+-	*pprog = prog;
+-	pop_callee_regs(pprog, callee_regs_used);
+-	prog = *pprog;
++	offset = ctx->tail_call_indirect_label - (prog + 2 - start);
++	EMIT2(X86_JE, offset);                    /* je out */
++
++	pop_callee_regs(&prog, callee_regs_used);
+ 
+ 	EMIT1(0x58);                              /* pop rax */
+ 	if (stack_depth)
+@@ -486,42 +496,21 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used,
+ 	 * rdi == ctx (1st arg)
+ 	 * rcx == prog->bpf_func + X86_TAIL_CALL_OFFSET
+ 	 */
+-	RETPOLINE_RCX_BPF_JIT();
++	emit_indirect_jump(&prog, 1 /* rcx */, ip + (prog - start));
+ 
+ 	/* out: */
++	ctx->tail_call_indirect_label = prog - start;
+ 	*pprog = prog;
+ }
+ 
+ static void emit_bpf_tail_call_direct(struct bpf_jit_poke_descriptor *poke,
+-				      u8 **pprog, int addr, u8 *image,
+-				      bool *callee_regs_used, u32 stack_depth)
++				      u8 **pprog, u8 *ip,
++				      bool *callee_regs_used, u32 stack_depth,
++				      struct jit_context *ctx)
+ {
+ 	int tcc_off = -4 - round_up(stack_depth, 8);
+-	u8 *prog = *pprog;
+-	int pop_bytes = 0;
+-	int off1 = 20;
+-	int poke_off;
+-	int cnt = 0;
+-
+-	/* count the additional bytes used for popping callee regs to stack
+-	 * that need to be taken into account for jump offset that is used for
+-	 * bailing out from of the tail call when limit is reached
+-	 */
+-	pop_bytes = get_pop_bytes(callee_regs_used);
+-	off1 += pop_bytes;
+-
+-	/*
+-	 * total bytes for:
+-	 * - nop5/ jmpq $off
+-	 * - pop callee regs
+-	 * - sub rsp, $val if depth > 0
+-	 * - pop rax
+-	 */
+-	poke_off = X86_PATCH_SIZE + pop_bytes + 1;
+-	if (stack_depth) {
+-		poke_off += 7;
+-		off1 += 7;
+-	}
++	u8 *prog = *pprog, *start = *pprog;
++	int cnt = 0, offset;
+ 
+ 	/*
+ 	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
+@@ -529,28 +518,30 @@ static void emit_bpf_tail_call_direct(struct bpf_jit_poke_descriptor *poke,
+ 	 */
+ 	EMIT2_off32(0x8B, 0x85, tcc_off);             /* mov eax, dword ptr [rbp - tcc_off] */
+ 	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);         /* cmp eax, MAX_TAIL_CALL_CNT */
+-	EMIT2(X86_JA, off1);                          /* ja out */
++
++	offset = ctx->tail_call_direct_label - (prog + 2 - start);
++	EMIT2(X86_JA, offset);                        /* ja out */
+ 	EMIT3(0x83, 0xC0, 0x01);                      /* add eax, 1 */
+ 	EMIT2_off32(0x89, 0x85, tcc_off);             /* mov dword ptr [rbp - tcc_off], eax */
+ 
+-	poke->tailcall_bypass = image + (addr - poke_off - X86_PATCH_SIZE);
++	poke->tailcall_bypass = ip + (prog - start);
+ 	poke->adj_off = X86_TAIL_CALL_OFFSET;
+-	poke->tailcall_target = image + (addr - X86_PATCH_SIZE);
++	poke->tailcall_target = ip + ctx->tail_call_direct_label - X86_PATCH_SIZE;
+ 	poke->bypass_addr = (u8 *)poke->tailcall_target + X86_PATCH_SIZE;
+ 
+ 	emit_jump(&prog, (u8 *)poke->tailcall_target + X86_PATCH_SIZE,
+ 		  poke->tailcall_bypass);
+ 
+-	*pprog = prog;
+-	pop_callee_regs(pprog, callee_regs_used);
+-	prog = *pprog;
++	pop_callee_regs(&prog, callee_regs_used);
+ 	EMIT1(0x58);                                  /* pop rax */
+ 	if (stack_depth)
+ 		EMIT3_off32(0x48, 0x81, 0xC4, round_up(stack_depth, 8));
+ 
+ 	memcpy(prog, ideal_nops[NOP_ATOMIC5], X86_PATCH_SIZE);
+ 	prog += X86_PATCH_SIZE;
++
+ 	/* out: */
++	ctx->tail_call_direct_label = prog - start;
+ 
+ 	*pprog = prog;
+ }
+@@ -1144,8 +1135,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 			/* speculation barrier */
+ 		case BPF_ST | BPF_NOSPEC:
+ 			if (boot_cpu_has(X86_FEATURE_XMM2))
+-				/* Emit 'lfence' */
+-				EMIT3(0x0F, 0xAE, 0xE8);
++				EMIT_LFENCE();
+ 			break;
+ 
+ 			/* ST: *(u8*)(dst_reg + off) = imm */
+@@ -1275,13 +1265,16 @@ xadd:			if (is_imm8(insn->off))
+ 		case BPF_JMP | BPF_TAIL_CALL:
+ 			if (imm32)
+ 				emit_bpf_tail_call_direct(&bpf_prog->aux->poke_tab[imm32 - 1],
+-							  &prog, addrs[i], image,
++							  &prog, image + addrs[i - 1],
+ 							  callee_regs_used,
+-							  bpf_prog->aux->stack_depth);
++							  bpf_prog->aux->stack_depth,
++							  ctx);
+ 			else
+ 				emit_bpf_tail_call_indirect(&prog,
+ 							    callee_regs_used,
+-							    bpf_prog->aux->stack_depth);
++							    bpf_prog->aux->stack_depth,
++							    image + addrs[i - 1],
++							    ctx);
+ 			break;
+ 
+ 			/* cond jump */
+@@ -1466,7 +1459,7 @@ emit_jmp:
+ 			ctx->cleanup_addr = proglen;
+ 			pop_callee_regs(&prog, callee_regs_used);
+ 			EMIT1(0xC9);         /* leave */
+-			EMIT1(0xC3);         /* ret */
++			emit_return(&prog, image + addrs[i - 1] + (prog - temp));
+ 			break;
+ 
+ 		default:
+@@ -1907,7 +1900,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
+ 	if (flags & BPF_TRAMP_F_SKIP_FRAME)
+ 		/* skip our return address and return to parent */
+ 		EMIT4(0x48, 0x83, 0xC4, 8); /* add rsp, 8 */
+-	EMIT1(0xC3); /* ret */
++	emit_return(&prog, prog);
+ 	/* Make sure the trampoline generation logic doesn't overflow */
+ 	if (WARN_ON_ONCE(prog > (u8 *)image_end - BPF_INSN_SAFETY)) {
+ 		ret = -EFAULT;
+@@ -1920,26 +1913,6 @@ cleanup:
+ 	return ret;
+ }
+ 
+-static int emit_fallback_jump(u8 **pprog)
+-{
+-	u8 *prog = *pprog;
+-	int err = 0;
+-
+-#ifdef CONFIG_RETPOLINE
+-	/* Note that this assumes the the compiler uses external
+-	 * thunks for indirect calls. Both clang and GCC use the same
+-	 * naming convention for external thunks.
+-	 */
+-	err = emit_jump(&prog, __x86_indirect_thunk_rdx, prog);
+-#else
+-	int cnt = 0;
+-
+-	EMIT2(0xFF, 0xE2);	/* jmp rdx */
+-#endif
+-	*pprog = prog;
+-	return err;
+-}
+-
+ static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs)
+ {
+ 	u8 *jg_reloc, *prog = *pprog;
+@@ -1961,9 +1934,7 @@ static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs)
+ 		if (err)
+ 			return err;
+ 
+-		err = emit_fallback_jump(&prog);	/* jmp thunk/indirect */
+-		if (err)
+-			return err;
++		emit_indirect_jump(&prog, 2 /* rdx */, prog);
+ 
+ 		*pprog = prog;
+ 		return 0;
+diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
+index 4bd0f98df700c..622af951220c0 100644
+--- a/arch/x86/net/bpf_jit_comp32.c
++++ b/arch/x86/net/bpf_jit_comp32.c
+@@ -15,6 +15,7 @@
+ #include <asm/cacheflush.h>
+ #include <asm/set_memory.h>
+ #include <asm/nospec-branch.h>
++#include <asm/asm-prototypes.h>
+ #include <linux/bpf.h>
+ 
+ /*
+@@ -1267,6 +1268,21 @@ static void emit_epilogue(u8 **pprog, u32 stack_depth)
+ 	*pprog = prog;
+ }
+ 
++static int emit_jmp_edx(u8 **pprog, u8 *ip)
++{
++	u8 *prog = *pprog;
++	int cnt = 0;
++
++#ifdef CONFIG_RETPOLINE
++	EMIT1_off32(0xE9, (u8 *)__x86_indirect_thunk_edx - (ip + 5));
++#else
++	EMIT2(0xFF, 0xE2);
++#endif
++	*pprog = prog;
++
++	return cnt;
++}
++
+ /*
+  * Generate the following code:
+  * ... bpf_tail_call(void *ctx, struct bpf_array *array, u64 index) ...
+@@ -1280,7 +1296,7 @@ static void emit_epilogue(u8 **pprog, u32 stack_depth)
+  *   goto *(prog->bpf_func + prologue_size);
+  * out:
+  */
+-static void emit_bpf_tail_call(u8 **pprog)
++static void emit_bpf_tail_call(u8 **pprog, u8 *ip)
+ {
+ 	u8 *prog = *pprog;
+ 	int cnt = 0;
+@@ -1362,7 +1378,7 @@ static void emit_bpf_tail_call(u8 **pprog)
+ 	 * eax == ctx (1st arg)
+ 	 * edx == prog->bpf_func + prologue_size
+ 	 */
+-	RETPOLINE_EDX_BPF_JIT();
++	cnt += emit_jmp_edx(&prog, ip + cnt);
+ 
+ 	if (jmp_label1 == -1)
+ 		jmp_label1 = cnt;
+@@ -1929,7 +1945,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 			break;
+ 		}
+ 		case BPF_JMP | BPF_TAIL_CALL:
+-			emit_bpf_tail_call(&prog);
++			emit_bpf_tail_call(&prog, image + addrs[i - 1]);
+ 			break;
+ 
+ 		/* cond jump */
+diff --git a/arch/x86/platform/efi/efi_stub_32.S b/arch/x86/platform/efi/efi_stub_32.S
+index 09ec84f6ef517..f3cfdb1c9a359 100644
+--- a/arch/x86/platform/efi/efi_stub_32.S
++++ b/arch/x86/platform/efi/efi_stub_32.S
+@@ -56,5 +56,5 @@ SYM_FUNC_START(efi_call_svam)
+ 
+ 	movl	16(%esp), %ebx
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(efi_call_svam)
+diff --git a/arch/x86/platform/efi/efi_stub_64.S b/arch/x86/platform/efi/efi_stub_64.S
+index 90380a17ab238..2206b8bc47b8a 100644
+--- a/arch/x86/platform/efi/efi_stub_64.S
++++ b/arch/x86/platform/efi/efi_stub_64.S
+@@ -23,5 +23,5 @@ SYM_FUNC_START(__efi_call)
+ 	mov %rsi, %rcx
+ 	CALL_NOSPEC rdi
+ 	leave
+-	ret
++	RET
+ SYM_FUNC_END(__efi_call)
+diff --git a/arch/x86/platform/efi/efi_thunk_64.S b/arch/x86/platform/efi/efi_thunk_64.S
+index 26f0da238c1ca..9eac088ef4def 100644
+--- a/arch/x86/platform/efi/efi_thunk_64.S
++++ b/arch/x86/platform/efi/efi_thunk_64.S
+@@ -22,6 +22,7 @@
+ #include <linux/linkage.h>
+ #include <asm/page_types.h>
+ #include <asm/segment.h>
++#include <asm/nospec-branch.h>
+ 
+ 	.text
+ 	.code64
+@@ -63,7 +64,9 @@ SYM_CODE_START(__efi64_thunk)
+ 1:	movq	24(%rsp), %rsp
+ 	pop	%rbx
+ 	pop	%rbp
+-	retq
++	ANNOTATE_UNRET_SAFE
++	ret
++	int3
+ 
+ 	.code32
+ 2:	pushl	$__KERNEL_CS
+diff --git a/arch/x86/platform/olpc/xo1-wakeup.S b/arch/x86/platform/olpc/xo1-wakeup.S
+index 75f4faff84682..3a5abffe5660d 100644
+--- a/arch/x86/platform/olpc/xo1-wakeup.S
++++ b/arch/x86/platform/olpc/xo1-wakeup.S
+@@ -77,7 +77,7 @@ save_registers:
+ 	pushfl
+ 	popl saved_context_eflags
+ 
+-	ret
++	RET
+ 
+ restore_registers:
+ 	movl saved_context_ebp, %ebp
+@@ -88,7 +88,7 @@ restore_registers:
+ 	pushl saved_context_eflags
+ 	popfl
+ 
+-	ret
++	RET
+ 
+ SYM_CODE_START(do_olpc_suspend_lowlevel)
+ 	call	save_processor_state
+@@ -109,7 +109,7 @@ ret_point:
+ 
+ 	call	restore_registers
+ 	call	restore_processor_state
+-	ret
++	RET
+ SYM_CODE_END(do_olpc_suspend_lowlevel)
+ 
+ .data
+diff --git a/arch/x86/power/hibernate_asm_32.S b/arch/x86/power/hibernate_asm_32.S
+index 8786653ad3c06..5606a15cf9a17 100644
+--- a/arch/x86/power/hibernate_asm_32.S
++++ b/arch/x86/power/hibernate_asm_32.S
+@@ -32,7 +32,7 @@ SYM_FUNC_START(swsusp_arch_suspend)
+ 	FRAME_BEGIN
+ 	call swsusp_save
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(swsusp_arch_suspend)
+ 
+ SYM_CODE_START(restore_image)
+@@ -108,5 +108,5 @@ SYM_FUNC_START(restore_registers)
+ 	/* tell the hibernation core that we've just restored the memory */
+ 	movl	%eax, in_suspend
+ 
+-	ret
++	RET
+ SYM_FUNC_END(restore_registers)
+diff --git a/arch/x86/power/hibernate_asm_64.S b/arch/x86/power/hibernate_asm_64.S
+index 7918b8415f132..3ae7a3d7d61e5 100644
+--- a/arch/x86/power/hibernate_asm_64.S
++++ b/arch/x86/power/hibernate_asm_64.S
+@@ -49,7 +49,7 @@ SYM_FUNC_START(swsusp_arch_suspend)
+ 	FRAME_BEGIN
+ 	call swsusp_save
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(swsusp_arch_suspend)
+ 
+ SYM_CODE_START(restore_image)
+@@ -143,5 +143,5 @@ SYM_FUNC_START(restore_registers)
+ 	/* tell the hibernation core that we've just restored the memory */
+ 	movq	%rax, in_suspend(%rip)
+ 
+-	ret
++	RET
+ SYM_FUNC_END(restore_registers)
+diff --git a/arch/x86/um/checksum_32.S b/arch/x86/um/checksum_32.S
+index 13f118dec74f8..aed782ab77213 100644
+--- a/arch/x86/um/checksum_32.S
++++ b/arch/x86/um/checksum_32.S
+@@ -110,7 +110,7 @@ csum_partial:
+ 7:	
+ 	popl %ebx
+ 	popl %esi
+-	ret
++	RET
+ 
+ #else
+ 
+@@ -208,7 +208,7 @@ csum_partial:
+ 80: 
+ 	popl %ebx
+ 	popl %esi
+-	ret
++	RET
+ 				
+ #endif
+ 	EXPORT_SYMBOL(csum_partial)
+diff --git a/arch/x86/um/setjmp_32.S b/arch/x86/um/setjmp_32.S
+index 62eaf8c80e041..2d991ddbcca57 100644
+--- a/arch/x86/um/setjmp_32.S
++++ b/arch/x86/um/setjmp_32.S
+@@ -34,7 +34,7 @@ kernel_setjmp:
+ 	movl %esi,12(%edx)
+ 	movl %edi,16(%edx)
+ 	movl %ecx,20(%edx)		# Return address
+-	ret
++	RET
+ 
+ 	.size kernel_setjmp,.-kernel_setjmp
+ 
+diff --git a/arch/x86/um/setjmp_64.S b/arch/x86/um/setjmp_64.S
+index 1b5d40d4ff46d..b46acb6a8ebd8 100644
+--- a/arch/x86/um/setjmp_64.S
++++ b/arch/x86/um/setjmp_64.S
+@@ -33,7 +33,7 @@ kernel_setjmp:
+ 	movq %r14,40(%rdi)
+ 	movq %r15,48(%rdi)
+ 	movq %rsi,56(%rdi)		# Return address
+-	ret
++	RET
+ 
+ 	.size kernel_setjmp,.-kernel_setjmp
+ 
+diff --git a/arch/x86/xen/Makefile b/arch/x86/xen/Makefile
+index fc5c5ba4aacba..40b5779fce21c 100644
+--- a/arch/x86/xen/Makefile
++++ b/arch/x86/xen/Makefile
+@@ -1,5 +1,4 @@
+ # SPDX-License-Identifier: GPL-2.0
+-OBJECT_FILES_NON_STANDARD_xen-asm.o := y
+ 
+ ifdef CONFIG_FUNCTION_TRACER
+ # Do not profile debug and lowlevel utilities
+diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
+index 8bfc103301077..1f80dd3a2dd4a 100644
+--- a/arch/x86/xen/setup.c
++++ b/arch/x86/xen/setup.c
+@@ -922,7 +922,7 @@ void xen_enable_sysenter(void)
+ 	if (!boot_cpu_has(sysenter_feature))
+ 		return;
+ 
+-	ret = register_callback(CALLBACKTYPE_sysenter, xen_sysenter_target);
++	ret = register_callback(CALLBACKTYPE_sysenter, xen_entry_SYSENTER_compat);
+ 	if(ret != 0)
+ 		setup_clear_cpu_cap(sysenter_feature);
+ }
+@@ -931,7 +931,7 @@ void xen_enable_syscall(void)
+ {
+ 	int ret;
+ 
+-	ret = register_callback(CALLBACKTYPE_syscall, xen_syscall_target);
++	ret = register_callback(CALLBACKTYPE_syscall, xen_entry_SYSCALL_64);
+ 	if (ret != 0) {
+ 		printk(KERN_ERR "Failed to set syscall callback: %d\n", ret);
+ 		/* Pretty fatal; 64-bit userspace has no other
+@@ -940,7 +940,7 @@ void xen_enable_syscall(void)
+ 
+ 	if (boot_cpu_has(X86_FEATURE_SYSCALL32)) {
+ 		ret = register_callback(CALLBACKTYPE_syscall32,
+-					xen_syscall32_target);
++					xen_entry_SYSCALL_compat);
+ 		if (ret != 0)
+ 			setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
+ 	}
+diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
+index 011ec649f3886..e3031afcb1039 100644
+--- a/arch/x86/xen/xen-asm.S
++++ b/arch/x86/xen/xen-asm.S
+@@ -14,6 +14,7 @@
+ #include <asm/thread_info.h>
+ #include <asm/asm.h>
+ #include <asm/frame.h>
++#include <asm/unwind_hints.h>
+ 
+ #include <xen/interface/xen.h>
+ 
+@@ -44,7 +45,7 @@ SYM_FUNC_START(xen_irq_enable_direct)
+ 	call check_events
+ 1:
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(xen_irq_enable_direct)
+ 
+ 
+@@ -54,7 +55,7 @@ SYM_FUNC_END(xen_irq_enable_direct)
+  */
+ SYM_FUNC_START(xen_irq_disable_direct)
+ 	movb $1, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
+-	ret
++	RET
+ SYM_FUNC_END(xen_irq_disable_direct)
+ 
+ /*
+@@ -70,7 +71,7 @@ SYM_FUNC_START(xen_save_fl_direct)
+ 	testb $0xff, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
+ 	setz %ah
+ 	addb %ah, %ah
+-	ret
++	RET
+ SYM_FUNC_END(xen_save_fl_direct)
+ 
+ 
+@@ -97,7 +98,7 @@ SYM_FUNC_START(xen_restore_fl_direct)
+ 	call check_events
+ 1:
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(xen_restore_fl_direct)
+ 
+ 
+@@ -127,7 +128,7 @@ SYM_FUNC_START(check_events)
+ 	pop %rcx
+ 	pop %rax
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(check_events)
+ 
+ SYM_FUNC_START(xen_read_cr2)
+@@ -135,18 +136,19 @@ SYM_FUNC_START(xen_read_cr2)
+ 	_ASM_MOV PER_CPU_VAR(xen_vcpu), %_ASM_AX
+ 	_ASM_MOV XEN_vcpu_info_arch_cr2(%_ASM_AX), %_ASM_AX
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(xen_read_cr2);
+ 
+ SYM_FUNC_START(xen_read_cr2_direct)
+ 	FRAME_BEGIN
+ 	_ASM_MOV PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_arch_cr2, %_ASM_AX
+ 	FRAME_END
+-	ret
++	RET
+ SYM_FUNC_END(xen_read_cr2_direct);
+ 
+ .macro xen_pv_trap name
+ SYM_CODE_START(xen_\name)
++	UNWIND_HINT_ENTRY
+ 	pop %rcx
+ 	pop %r11
+ 	jmp  \name
+@@ -186,6 +188,7 @@ xen_pv_trap asm_exc_xen_hypervisor_callback
+ SYM_CODE_START(xen_early_idt_handler_array)
+ 	i = 0
+ 	.rept NUM_EXCEPTION_VECTORS
++	UNWIND_HINT_EMPTY
+ 	pop %rcx
+ 	pop %r11
+ 	jmp early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE
+@@ -212,11 +215,13 @@ hypercall_iret = hypercall_page + __HYPERVISOR_iret * 32
+  * rsp->rax		}
+  */
+ SYM_CODE_START(xen_iret)
++	UNWIND_HINT_EMPTY
+ 	pushq $0
+ 	jmp hypercall_iret
+ SYM_CODE_END(xen_iret)
+ 
+ SYM_CODE_START(xen_sysret64)
++	UNWIND_HINT_EMPTY
+ 	/*
+ 	 * We're already on the usermode stack at this point, but
+ 	 * still with the kernel gs, so we can easily switch back.
+@@ -271,7 +276,8 @@ SYM_CODE_END(xenpv_restore_regs_and_return_to_usermode)
+  */
+ 
+ /* Normal 64-bit system call target */
+-SYM_FUNC_START(xen_syscall_target)
++SYM_CODE_START(xen_entry_SYSCALL_64)
++	UNWIND_HINT_ENTRY
+ 	popq %rcx
+ 	popq %r11
+ 
+@@ -284,12 +290,13 @@ SYM_FUNC_START(xen_syscall_target)
+ 	movq $__USER_CS, 1*8(%rsp)
+ 
+ 	jmp entry_SYSCALL_64_after_hwframe
+-SYM_FUNC_END(xen_syscall_target)
++SYM_CODE_END(xen_entry_SYSCALL_64)
+ 
+ #ifdef CONFIG_IA32_EMULATION
+ 
+ /* 32-bit compat syscall target */
+-SYM_FUNC_START(xen_syscall32_target)
++SYM_CODE_START(xen_entry_SYSCALL_compat)
++	UNWIND_HINT_ENTRY
+ 	popq %rcx
+ 	popq %r11
+ 
+@@ -302,10 +309,11 @@ SYM_FUNC_START(xen_syscall32_target)
+ 	movq $__USER32_CS, 1*8(%rsp)
+ 
+ 	jmp entry_SYSCALL_compat_after_hwframe
+-SYM_FUNC_END(xen_syscall32_target)
++SYM_CODE_END(xen_entry_SYSCALL_compat)
+ 
+ /* 32-bit compat sysenter target */
+-SYM_FUNC_START(xen_sysenter_target)
++SYM_CODE_START(xen_entry_SYSENTER_compat)
++	UNWIND_HINT_ENTRY
+ 	/*
+ 	 * NB: Xen is polite and clears TF from EFLAGS for us.  This means
+ 	 * that we don't need to guard against single step exceptions here.
+@@ -322,17 +330,18 @@ SYM_FUNC_START(xen_sysenter_target)
+ 	movq $__USER32_CS, 1*8(%rsp)
+ 
+ 	jmp entry_SYSENTER_compat_after_hwframe
+-SYM_FUNC_END(xen_sysenter_target)
++SYM_CODE_END(xen_entry_SYSENTER_compat)
+ 
+ #else /* !CONFIG_IA32_EMULATION */
+ 
+-SYM_FUNC_START_ALIAS(xen_syscall32_target)
+-SYM_FUNC_START(xen_sysenter_target)
++SYM_CODE_START(xen_entry_SYSCALL_compat)
++SYM_CODE_START(xen_entry_SYSENTER_compat)
++	UNWIND_HINT_ENTRY
+ 	lea 16(%rsp), %rsp	/* strip %rcx, %r11 */
+ 	mov $-ENOSYS, %rax
+ 	pushq $0
+ 	jmp hypercall_iret
+-SYM_FUNC_END(xen_sysenter_target)
+-SYM_FUNC_END_ALIAS(xen_syscall32_target)
++SYM_CODE_END(xen_entry_SYSENTER_compat)
++SYM_CODE_END(xen_entry_SYSCALL_compat)
+ 
+ #endif	/* CONFIG_IA32_EMULATION */
+diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
+index 2d7c8f34f56c7..2a3ef5fcba34b 100644
+--- a/arch/x86/xen/xen-head.S
++++ b/arch/x86/xen/xen-head.S
+@@ -68,8 +68,10 @@ SYM_CODE_END(asm_cpu_bringup_and_idle)
+ 	.balign PAGE_SIZE
+ SYM_CODE_START(hypercall_page)
+ 	.rept (PAGE_SIZE / 32)
+-		UNWIND_HINT_EMPTY
+-		.skip 32
++		UNWIND_HINT_FUNC
++		ANNOTATE_UNRET_SAFE
++		ret
++		.skip 31, 0xcc
+ 	.endr
+ 
+ #define HYPERCALL(n) \
+diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
+index 9546c3384c759..8695809b88f08 100644
+--- a/arch/x86/xen/xen-ops.h
++++ b/arch/x86/xen/xen-ops.h
+@@ -10,10 +10,10 @@
+ /* These are code, but not functions.  Defined in entry.S */
+ extern const char xen_failsafe_callback[];
+ 
+-void xen_sysenter_target(void);
++void xen_entry_SYSENTER_compat(void);
+ #ifdef CONFIG_X86_64
+-void xen_syscall_target(void);
+-void xen_syscall32_target(void);
++void xen_entry_SYSCALL_64(void);
++void xen_entry_SYSCALL_compat(void);
+ #endif
+ 
+ extern void *xen_initial_gdt;
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 8ecb9f90f467b..24dd532c8e5ed 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -572,6 +572,12 @@ ssize_t __weak cpu_show_mmio_stale_data(struct device *dev,
+ 	return sysfs_emit(buf, "Not affected\n");
+ }
+ 
++ssize_t __weak cpu_show_retbleed(struct device *dev,
++				 struct device_attribute *attr, char *buf)
++{
++	return sysfs_emit(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+@@ -582,6 +588,7 @@ static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
+ static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
+ static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
+ static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL);
++static DEVICE_ATTR(retbleed, 0444, cpu_show_retbleed, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+@@ -594,6 +601,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_itlb_multihit.attr,
+ 	&dev_attr_srbds.attr,
+ 	&dev_attr_mmio_stale_data.attr,
++	&dev_attr_retbleed.attr,
+ 	NULL
+ };
+ 
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index d79335506ecd3..b92b032fb6d13 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -47,11 +47,13 @@
+ #include <linux/tick.h>
+ #include <trace/events/power.h>
+ #include <linux/sched.h>
++#include <linux/sched/smt.h>
+ #include <linux/notifier.h>
+ #include <linux/cpu.h>
+ #include <linux/moduleparam.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
++#include <asm/nospec-branch.h>
+ #include <asm/mwait.h>
+ #include <asm/msr.h>
+ 
+@@ -93,6 +95,12 @@ static unsigned int mwait_substates __initdata;
+  */
+ #define CPUIDLE_FLAG_ALWAYS_ENABLE	BIT(15)
+ 
++/*
++ * Disable IBRS across idle (when KERNEL_IBRS), is exclusive vs IRQ_ENABLE
++ * above.
++ */
++#define CPUIDLE_FLAG_IBRS		BIT(16)
++
+ /*
+  * MWAIT takes an 8-bit "hint" in EAX "suggesting"
+  * the C-state (top nibble) and sub-state (bottom nibble)
+@@ -132,6 +140,24 @@ static __cpuidle int intel_idle(struct cpuidle_device *dev,
+ 	return index;
+ }
+ 
++static __cpuidle int intel_idle_ibrs(struct cpuidle_device *dev,
++				     struct cpuidle_driver *drv, int index)
++{
++	bool smt_active = sched_smt_active();
++	u64 spec_ctrl = spec_ctrl_current();
++	int ret;
++
++	if (smt_active)
++		wrmsrl(MSR_IA32_SPEC_CTRL, 0);
++
++	ret = intel_idle(dev, drv, index);
++
++	if (smt_active)
++		wrmsrl(MSR_IA32_SPEC_CTRL, spec_ctrl);
++
++	return ret;
++}
++
+ /**
+  * intel_idle_s2idle - Ask the processor to enter the given idle state.
+  * @dev: cpuidle device of the target CPU.
+@@ -653,7 +679,7 @@ static struct cpuidle_state skl_cstates[] __initdata = {
+ 	{
+ 		.name = "C6",
+ 		.desc = "MWAIT 0x20",
+-		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
++		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
+ 		.exit_latency = 85,
+ 		.target_residency = 200,
+ 		.enter = &intel_idle,
+@@ -661,7 +687,7 @@ static struct cpuidle_state skl_cstates[] __initdata = {
+ 	{
+ 		.name = "C7s",
+ 		.desc = "MWAIT 0x33",
+-		.flags = MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED,
++		.flags = MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
+ 		.exit_latency = 124,
+ 		.target_residency = 800,
+ 		.enter = &intel_idle,
+@@ -669,7 +695,7 @@ static struct cpuidle_state skl_cstates[] __initdata = {
+ 	{
+ 		.name = "C8",
+ 		.desc = "MWAIT 0x40",
+-		.flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED,
++		.flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
+ 		.exit_latency = 200,
+ 		.target_residency = 800,
+ 		.enter = &intel_idle,
+@@ -677,7 +703,7 @@ static struct cpuidle_state skl_cstates[] __initdata = {
+ 	{
+ 		.name = "C9",
+ 		.desc = "MWAIT 0x50",
+-		.flags = MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED,
++		.flags = MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
+ 		.exit_latency = 480,
+ 		.target_residency = 5000,
+ 		.enter = &intel_idle,
+@@ -685,7 +711,7 @@ static struct cpuidle_state skl_cstates[] __initdata = {
+ 	{
+ 		.name = "C10",
+ 		.desc = "MWAIT 0x60",
+-		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED,
++		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
+ 		.exit_latency = 890,
+ 		.target_residency = 5000,
+ 		.enter = &intel_idle,
+@@ -714,7 +740,7 @@ static struct cpuidle_state skx_cstates[] __initdata = {
+ 	{
+ 		.name = "C6",
+ 		.desc = "MWAIT 0x20",
+-		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
++		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
+ 		.exit_latency = 133,
+ 		.target_residency = 600,
+ 		.enter = &intel_idle,
+@@ -1501,6 +1527,11 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
+ 		/* Structure copy. */
+ 		drv->states[drv->state_count] = cpuidle_state_table[cstate];
+ 
++		if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) &&
++		    cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IBRS) {
++			drv->states[drv->state_count].enter = intel_idle_ibrs;
++		}
++
+ 		if ((disabled_states_mask & BIT(drv->state_count)) ||
+ 		    ((icpu->use_acpi || force_use_acpi) &&
+ 		     intel_idle_off_by_default(mwait_hint) &&
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index d63b8f70d1239..24e5f237244d8 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -68,6 +68,8 @@ extern ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr,
+ extern ssize_t cpu_show_mmio_stale_data(struct device *dev,
+ 					struct device_attribute *attr,
+ 					char *buf);
++extern ssize_t cpu_show_retbleed(struct device *dev,
++				 struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index c66c702a4f079..439fbe0ee0c74 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -988,7 +988,7 @@ static inline void kvm_arch_end_assignment(struct kvm *kvm)
+ {
+ }
+ 
+-static inline bool kvm_arch_has_assigned_device(struct kvm *kvm)
++static __always_inline bool kvm_arch_has_assigned_device(struct kvm *kvm)
+ {
+ 	return false;
+ }
+diff --git a/include/linux/objtool.h b/include/linux/objtool.h
+index 577f51436cf92..662f19374bd98 100644
+--- a/include/linux/objtool.h
++++ b/include/linux/objtool.h
+@@ -29,11 +29,19 @@ struct unwind_hint {
+  *
+  * UNWIND_HINT_TYPE_REGS_PARTIAL: Used in entry code to indicate that
+  * sp_reg+sp_offset points to the iret return frame.
++ *
++ * UNWIND_HINT_FUNC: Generate the unwind metadata of a callable function.
++ * Useful for code which doesn't have an ELF function annotation.
++ *
++ * UNWIND_HINT_ENTRY: machine entry without stack, SYSCALL/SYSENTER etc.
+  */
+ #define UNWIND_HINT_TYPE_CALL		0
+ #define UNWIND_HINT_TYPE_REGS		1
+ #define UNWIND_HINT_TYPE_REGS_PARTIAL	2
+-#define UNWIND_HINT_TYPE_RET_OFFSET	3
++#define UNWIND_HINT_TYPE_FUNC		3
++#define UNWIND_HINT_TYPE_ENTRY		4
++#define UNWIND_HINT_TYPE_SAVE		5
++#define UNWIND_HINT_TYPE_RESTORE	6
+ 
+ #ifdef CONFIG_STACK_VALIDATION
+ 
+@@ -96,7 +104,7 @@ struct unwind_hint {
+  * the debuginfo as necessary.  It will also warn if it sees any
+  * inconsistencies.
+  */
+-.macro UNWIND_HINT sp_reg:req sp_offset=0 type:req end=0
++.macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0
+ .Lunwind_hint_ip_\@:
+ 	.pushsection .discard.unwind_hints
+ 		/* struct unwind_hint */
+@@ -120,7 +128,7 @@ struct unwind_hint {
+ #define STACK_FRAME_NON_STANDARD(func)
+ #else
+ #define ANNOTATE_INTRA_FUNCTION_CALL
+-.macro UNWIND_HINT sp_reg:req sp_offset=0 type:req end=0
++.macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0
+ .endm
+ #endif
+ 
+diff --git a/samples/ftrace/ftrace-direct-modify.c b/samples/ftrace/ftrace-direct-modify.c
+index 89e6bf27cd9f6..d620f3da086f7 100644
+--- a/samples/ftrace/ftrace-direct-modify.c
++++ b/samples/ftrace/ftrace-direct-modify.c
+@@ -31,7 +31,7 @@ asm (
+ "	call my_direct_func1\n"
+ "	leave\n"
+ "	.size		my_tramp1, .-my_tramp1\n"
+-"	ret\n"
++	ASM_RET
+ "	.type		my_tramp2, @function\n"
+ "	.globl		my_tramp2\n"
+ "   my_tramp2:"
+@@ -39,7 +39,7 @@ asm (
+ "	movq %rsp, %rbp\n"
+ "	call my_direct_func2\n"
+ "	leave\n"
+-"	ret\n"
++	ASM_RET
+ "	.size		my_tramp2, .-my_tramp2\n"
+ "	.popsection\n"
+ );
+diff --git a/samples/ftrace/ftrace-direct-too.c b/samples/ftrace/ftrace-direct-too.c
+index 11b99325f3dbf..3927cb880d1ab 100644
+--- a/samples/ftrace/ftrace-direct-too.c
++++ b/samples/ftrace/ftrace-direct-too.c
+@@ -31,7 +31,7 @@ asm (
+ "	popq %rsi\n"
+ "	popq %rdi\n"
+ "	leave\n"
+-"	ret\n"
++	ASM_RET
+ "	.size		my_tramp, .-my_tramp\n"
+ "	.popsection\n"
+ );
+diff --git a/samples/ftrace/ftrace-direct.c b/samples/ftrace/ftrace-direct.c
+index 642c50b5f7166..1e901bb8d7293 100644
+--- a/samples/ftrace/ftrace-direct.c
++++ b/samples/ftrace/ftrace-direct.c
+@@ -24,7 +24,7 @@ asm (
+ "	call my_direct_func\n"
+ "	popq %rdi\n"
+ "	leave\n"
+-"	ret\n"
++	ASM_RET
+ "	.size		my_tramp, .-my_tramp\n"
+ "	.popsection\n"
+ );
+diff --git a/scripts/Makefile.build b/scripts/Makefile.build
+index 8bd4e673383f3..17e8b20659001 100644
+--- a/scripts/Makefile.build
++++ b/scripts/Makefile.build
+@@ -227,9 +227,15 @@ endif
+ ifdef CONFIG_RETPOLINE
+   objtool_args += --retpoline
+ endif
++ifdef CONFIG_RETHUNK
++  objtool_args += --rethunk
++endif
+ ifdef CONFIG_X86_SMAP
+   objtool_args += --uaccess
+ endif
++ifdef CONFIG_SLS
++  objtool_args += --sls
++endif
+ 
+ # 'OBJECT_FILES_NON_STANDARD := y': skip objtool checking for a directory
+ # 'OBJECT_FILES_NON_STANDARD_foo.o := 'y': skip objtool checking for a file
+diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
+index 6eded325c8378..d0b44bee9286e 100755
+--- a/scripts/link-vmlinux.sh
++++ b/scripts/link-vmlinux.sh
+@@ -65,6 +65,9 @@ objtool_link()
+ 
+ 	if [ -n "${CONFIG_VMLINUX_VALIDATION}" ]; then
+ 		objtoolopt="check"
++		if [ -n "${CONFIG_CPU_UNRET_ENTRY}" ]; then
++			objtoolopt="${objtoolopt} --unret"
++		fi
+ 		if [ -z "${CONFIG_FRAME_POINTER}" ]; then
+ 			objtoolopt="${objtoolopt} --no-fp"
+ 		fi
+@@ -77,6 +80,9 @@ objtool_link()
+ 		if [ -n "${CONFIG_X86_SMAP}" ]; then
+ 			objtoolopt="${objtoolopt} --uaccess"
+ 		fi
++		if [ -n "${CONFIG_SLS}" ]; then
++			objtoolopt="${objtoolopt} --sls"
++		fi
+ 		info OBJTOOL ${1}
+ 		tools/objtool/objtool ${objtoolopt} ${1}
+ 	fi
+diff --git a/security/Kconfig b/security/Kconfig
+index 0548db16c49dc..9893c316da897 100644
+--- a/security/Kconfig
++++ b/security/Kconfig
+@@ -54,17 +54,6 @@ config SECURITY_NETWORK
+ 	  implement socket and networking access controls.
+ 	  If you are unsure how to answer this question, answer N.
+ 
+-config PAGE_TABLE_ISOLATION
+-	bool "Remove the kernel mapping in user mode"
+-	default y
+-	depends on (X86_64 || X86_PAE) && !UML
+-	help
+-	  This feature reduces the number of hardware side channels by
+-	  ensuring that the majority of kernel addresses are not mapped
+-	  into userspace.
+-
+-	  See Documentation/x86/pti.rst for more details.
+-
+ config SECURITY_INFINIBAND
+ 	bool "Infiniband Security Hooks"
+ 	depends on SECURITY && INFINIBAND
+diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
+index a7b5c5efcf3b0..54ba20492ad11 100644
+--- a/tools/arch/x86/include/asm/cpufeatures.h
++++ b/tools/arch/x86/include/asm/cpufeatures.h
+@@ -203,8 +203,8 @@
+ #define X86_FEATURE_PROC_FEEDBACK	( 7*32+ 9) /* AMD ProcFeedbackInterface */
+ #define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
+ #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
+-#define X86_FEATURE_RETPOLINE		( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
+-#define X86_FEATURE_RETPOLINE_LFENCE	( 7*32+13) /* "" Use LFENCEs for Spectre variant 2 */
++#define X86_FEATURE_KERNEL_IBRS		( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */
++#define X86_FEATURE_RSB_VMEXIT		( 7*32+13) /* "" Fill RSB on VM-Exit */
+ #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
+ #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
+ #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
+@@ -290,6 +290,12 @@
+ #define X86_FEATURE_FENCE_SWAPGS_KERNEL	(11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */
+ #define X86_FEATURE_SPLIT_LOCK_DETECT	(11*32+ 6) /* #AC for split lock */
+ #define X86_FEATURE_PER_THREAD_MBA	(11*32+ 7) /* "" Per-thread Memory Bandwidth Allocation */
++#define X86_FEATURE_ENTRY_IBPB		(11*32+10) /* "" Issue an IBPB on kernel entry */
++#define X86_FEATURE_RRSBA_CTRL		(11*32+11) /* "" RET prediction control */
++#define X86_FEATURE_RETPOLINE		(11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
++#define X86_FEATURE_RETPOLINE_LFENCE	(11*32+13) /* "" Use LFENCE for Spectre variant 2 */
++#define X86_FEATURE_RETHUNK		(11*32+14) /* "" Use REturn THUNK */
++#define X86_FEATURE_UNRET		(11*32+15) /* "" AMD BTB untrain return */
+ 
+ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
+ #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
+@@ -308,6 +314,7 @@
+ #define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
+ #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
+ #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
++#define X86_FEATURE_BTC_NO		(13*32+29) /* "" Not vulnerable to Branch Type Confusion */
+ 
+ /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
+ #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
+@@ -418,5 +425,6 @@
+ #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
+ #define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
+ #define X86_BUG_MMIO_STALE_DATA		X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
++#define X86_BUG_RETBLEED		X86_BUG(26) /* CPU is affected by RETBleed */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/tools/arch/x86/include/asm/disabled-features.h b/tools/arch/x86/include/asm/disabled-features.h
+index 5861d34f97718..d109c5ec967ce 100644
+--- a/tools/arch/x86/include/asm/disabled-features.h
++++ b/tools/arch/x86/include/asm/disabled-features.h
+@@ -56,6 +56,25 @@
+ # define DISABLE_PTI		(1 << (X86_FEATURE_PTI & 31))
+ #endif
+ 
++#ifdef CONFIG_RETPOLINE
++# define DISABLE_RETPOLINE	0
++#else
++# define DISABLE_RETPOLINE	((1 << (X86_FEATURE_RETPOLINE & 31)) | \
++				 (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)))
++#endif
++
++#ifdef CONFIG_RETHUNK
++# define DISABLE_RETHUNK	0
++#else
++# define DISABLE_RETHUNK	(1 << (X86_FEATURE_RETHUNK & 31))
++#endif
++
++#ifdef CONFIG_CPU_UNRET_ENTRY
++# define DISABLE_UNRET		0
++#else
++# define DISABLE_UNRET		(1 << (X86_FEATURE_UNRET & 31))
++#endif
++
+ #ifdef CONFIG_IOMMU_SUPPORT
+ # define DISABLE_ENQCMD	0
+ #else
+@@ -76,7 +95,7 @@
+ #define DISABLED_MASK8	0
+ #define DISABLED_MASK9	(DISABLE_SMAP)
+ #define DISABLED_MASK10	0
+-#define DISABLED_MASK11	0
++#define DISABLED_MASK11	(DISABLE_RETPOLINE|DISABLE_RETHUNK|DISABLE_UNRET)
+ #define DISABLED_MASK12	0
+ #define DISABLED_MASK13	0
+ #define DISABLED_MASK14	0
+diff --git a/tools/arch/x86/include/asm/inat.h b/tools/arch/x86/include/asm/inat.h
+index 877827b7c2c33..a610514003115 100644
+--- a/tools/arch/x86/include/asm/inat.h
++++ b/tools/arch/x86/include/asm/inat.h
+@@ -6,7 +6,7 @@
+  *
+  * Written by Masami Hiramatsu <mhiramat@redhat.com>
+  */
+-#include "inat_types.h"
++#include "inat_types.h" /* __ignore_sync_check__ */
+ 
+ /*
+  * Internal bits. Don't use bitmasks directly, because these bits are
+diff --git a/tools/arch/x86/include/asm/insn.h b/tools/arch/x86/include/asm/insn.h
+index 52c6262e6bfd1..636ec02793a78 100644
+--- a/tools/arch/x86/include/asm/insn.h
++++ b/tools/arch/x86/include/asm/insn.h
+@@ -8,7 +8,7 @@
+  */
+ 
+ /* insn_attr_t is defined in inat.h */
+-#include "inat.h"
++#include "inat.h" /* __ignore_sync_check__ */
+ 
+ struct insn_field {
+ 	union {
+@@ -87,13 +87,25 @@ struct insn {
+ #define X86_VEX_M_MAX	0x1f			/* VEX3.M Maximum value */
+ 
+ extern void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64);
+-extern void insn_get_prefixes(struct insn *insn);
+-extern void insn_get_opcode(struct insn *insn);
+-extern void insn_get_modrm(struct insn *insn);
+-extern void insn_get_sib(struct insn *insn);
+-extern void insn_get_displacement(struct insn *insn);
+-extern void insn_get_immediate(struct insn *insn);
+-extern void insn_get_length(struct insn *insn);
++extern int insn_get_prefixes(struct insn *insn);
++extern int insn_get_opcode(struct insn *insn);
++extern int insn_get_modrm(struct insn *insn);
++extern int insn_get_sib(struct insn *insn);
++extern int insn_get_displacement(struct insn *insn);
++extern int insn_get_immediate(struct insn *insn);
++extern int insn_get_length(struct insn *insn);
++
++enum insn_mode {
++	INSN_MODE_32,
++	INSN_MODE_64,
++	/* Mode is determined by the current kernel build. */
++	INSN_MODE_KERN,
++	INSN_NUM_MODES,
++};
++
++extern int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m);
++
++#define insn_decode_kernel(_insn, _ptr) insn_decode((_insn), (_ptr), MAX_INSN_SIZE, INSN_MODE_KERN)
+ 
+ /* Attribute will be determined after getting ModRM (for opcode groups) */
+ static inline void insn_get_attribute(struct insn *insn)
+diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h
+index 96973d1979723..407de670cd609 100644
+--- a/tools/arch/x86/include/asm/msr-index.h
++++ b/tools/arch/x86/include/asm/msr-index.h
+@@ -51,6 +51,8 @@
+ #define SPEC_CTRL_STIBP			BIT(SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
+ #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
+ #define SPEC_CTRL_SSBD			BIT(SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
++#define SPEC_CTRL_RRSBA_DIS_S_SHIFT	6	   /* Disable RRSBA behavior */
++#define SPEC_CTRL_RRSBA_DIS_S		BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT)
+ 
+ #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
+ #define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
+@@ -91,6 +93,7 @@
+ #define MSR_IA32_ARCH_CAPABILITIES	0x0000010a
+ #define ARCH_CAP_RDCL_NO		BIT(0)	/* Not susceptible to Meltdown */
+ #define ARCH_CAP_IBRS_ALL		BIT(1)	/* Enhanced IBRS support */
++#define ARCH_CAP_RSBA			BIT(2)	/* RET may use alternative branch predictors */
+ #define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH	BIT(3)	/* Skip L1D flush on vmentry */
+ #define ARCH_CAP_SSB_NO			BIT(4)	/*
+ 						 * Not susceptible to Speculative Store Bypass
+@@ -138,6 +141,13 @@
+ 						 * bit available to control VERW
+ 						 * behavior.
+ 						 */
++#define ARCH_CAP_RRSBA			BIT(19)	/*
++						 * Indicates RET may use predictors
++						 * other than the RSB. With eIBRS
++						 * enabled predictions in kernel mode
++						 * are restricted to targets in
++						 * kernel.
++						 */
+ 
+ #define MSR_IA32_FLUSH_CMD		0x0000010b
+ #define L1D_FLUSH			BIT(0)	/*
+@@ -507,6 +517,9 @@
+ /* Fam 17h MSRs */
+ #define MSR_F17H_IRPERF			0xc00000e9
+ 
++#define MSR_ZEN2_SPECTRAL_CHICKEN	0xc00110e3
++#define MSR_ZEN2_SPECTRAL_CHICKEN_BIT	BIT_ULL(1)
++
+ /* Fam 16h MSRs */
+ #define MSR_F16H_L2I_PERF_CTL		0xc0010230
+ #define MSR_F16H_L2I_PERF_CTR		0xc0010231
+diff --git a/tools/arch/x86/lib/inat.c b/tools/arch/x86/lib/inat.c
+index 4f5ed49e1b4ee..dfbcc64059412 100644
+--- a/tools/arch/x86/lib/inat.c
++++ b/tools/arch/x86/lib/inat.c
+@@ -4,7 +4,7 @@
+  *
+  * Written by Masami Hiramatsu <mhiramat@redhat.com>
+  */
+-#include "../include/asm/insn.h"
++#include "../include/asm/insn.h" /* __ignore_sync_check__ */
+ 
+ /* Attribute tables are generated from opcode map */
+ #include "inat-tables.c"
+diff --git a/tools/arch/x86/lib/insn.c b/tools/arch/x86/lib/insn.c
+index 0151dfc6da616..f24cc0f618e45 100644
+--- a/tools/arch/x86/lib/insn.c
++++ b/tools/arch/x86/lib/insn.c
+@@ -10,10 +10,13 @@
+ #else
+ #include <string.h>
+ #endif
+-#include "../include/asm/inat.h"
+-#include "../include/asm/insn.h"
++#include "../include/asm/inat.h" /* __ignore_sync_check__ */
++#include "../include/asm/insn.h" /* __ignore_sync_check__ */
+ 
+-#include "../include/asm/emulate_prefix.h"
++#include <linux/errno.h>
++#include <linux/kconfig.h>
++
++#include "../include/asm/emulate_prefix.h" /* __ignore_sync_check__ */
+ 
+ /* Verify next sizeof(t) bytes can be on the same instruction */
+ #define validate_next(t, insn, n)	\
+@@ -97,8 +100,12 @@ static void insn_get_emulate_prefix(struct insn *insn)
+  * Populates the @insn->prefixes bitmap, and updates @insn->next_byte
+  * to point to the (first) opcode.  No effect if @insn->prefixes.got
+  * is already set.
++ *
++ * * Returns:
++ * 0:  on success
++ * < 0: on error
+  */
+-void insn_get_prefixes(struct insn *insn)
++int insn_get_prefixes(struct insn *insn)
+ {
+ 	struct insn_field *prefixes = &insn->prefixes;
+ 	insn_attr_t attr;
+@@ -106,7 +113,7 @@ void insn_get_prefixes(struct insn *insn)
+ 	int i, nb;
+ 
+ 	if (prefixes->got)
+-		return;
++		return 0;
+ 
+ 	insn_get_emulate_prefix(insn);
+ 
+@@ -217,8 +224,10 @@ vex_end:
+ 
+ 	prefixes->got = 1;
+ 
++	return 0;
++
+ err_out:
+-	return;
++	return -ENODATA;
+ }
+ 
+ /**
+@@ -230,16 +239,25 @@ err_out:
+  * If necessary, first collects any preceding (prefix) bytes.
+  * Sets @insn->opcode.value = opcode1.  No effect if @insn->opcode.got
+  * is already 1.
++ *
++ * Returns:
++ * 0:  on success
++ * < 0: on error
+  */
+-void insn_get_opcode(struct insn *insn)
++int insn_get_opcode(struct insn *insn)
+ {
+ 	struct insn_field *opcode = &insn->opcode;
++	int pfx_id, ret;
+ 	insn_byte_t op;
+-	int pfx_id;
++
+ 	if (opcode->got)
+-		return;
+-	if (!insn->prefixes.got)
+-		insn_get_prefixes(insn);
++		return 0;
++
++	if (!insn->prefixes.got) {
++		ret = insn_get_prefixes(insn);
++		if (ret)
++			return ret;
++	}
+ 
+ 	/* Get first opcode */
+ 	op = get_next(insn_byte_t, insn);
+@@ -254,9 +272,13 @@ void insn_get_opcode(struct insn *insn)
+ 		insn->attr = inat_get_avx_attribute(op, m, p);
+ 		if ((inat_must_evex(insn->attr) && !insn_is_evex(insn)) ||
+ 		    (!inat_accept_vex(insn->attr) &&
+-		     !inat_is_group(insn->attr)))
+-			insn->attr = 0;	/* This instruction is bad */
+-		goto end;	/* VEX has only 1 byte for opcode */
++		     !inat_is_group(insn->attr))) {
++			/* This instruction is bad */
++			insn->attr = 0;
++			return -EINVAL;
++		}
++		/* VEX has only 1 byte for opcode */
++		goto end;
+ 	}
+ 
+ 	insn->attr = inat_get_opcode_attribute(op);
+@@ -267,13 +289,18 @@ void insn_get_opcode(struct insn *insn)
+ 		pfx_id = insn_last_prefix_id(insn);
+ 		insn->attr = inat_get_escape_attribute(op, pfx_id, insn->attr);
+ 	}
+-	if (inat_must_vex(insn->attr))
+-		insn->attr = 0;	/* This instruction is bad */
++
++	if (inat_must_vex(insn->attr)) {
++		/* This instruction is bad */
++		insn->attr = 0;
++		return -EINVAL;
++	}
+ end:
+ 	opcode->got = 1;
++	return 0;
+ 
+ err_out:
+-	return;
++	return -ENODATA;
+ }
+ 
+ /**
+@@ -283,15 +310,25 @@ err_out:
+  * Populates @insn->modrm and updates @insn->next_byte to point past the
+  * ModRM byte, if any.  If necessary, first collects the preceding bytes
+  * (prefixes and opcode(s)).  No effect if @insn->modrm.got is already 1.
++ *
++ * Returns:
++ * 0:  on success
++ * < 0: on error
+  */
+-void insn_get_modrm(struct insn *insn)
++int insn_get_modrm(struct insn *insn)
+ {
+ 	struct insn_field *modrm = &insn->modrm;
+ 	insn_byte_t pfx_id, mod;
++	int ret;
++
+ 	if (modrm->got)
+-		return;
+-	if (!insn->opcode.got)
+-		insn_get_opcode(insn);
++		return 0;
++
++	if (!insn->opcode.got) {
++		ret = insn_get_opcode(insn);
++		if (ret)
++			return ret;
++	}
+ 
+ 	if (inat_has_modrm(insn->attr)) {
+ 		mod = get_next(insn_byte_t, insn);
+@@ -301,17 +338,22 @@ void insn_get_modrm(struct insn *insn)
+ 			pfx_id = insn_last_prefix_id(insn);
+ 			insn->attr = inat_get_group_attribute(mod, pfx_id,
+ 							      insn->attr);
+-			if (insn_is_avx(insn) && !inat_accept_vex(insn->attr))
+-				insn->attr = 0;	/* This is bad */
++			if (insn_is_avx(insn) && !inat_accept_vex(insn->attr)) {
++				/* Bad insn */
++				insn->attr = 0;
++				return -EINVAL;
++			}
+ 		}
+ 	}
+ 
+ 	if (insn->x86_64 && inat_is_force64(insn->attr))
+ 		insn->opnd_bytes = 8;
++
+ 	modrm->got = 1;
++	return 0;
+ 
+ err_out:
+-	return;
++	return -ENODATA;
+ }
+ 
+ 
+@@ -325,11 +367,16 @@ err_out:
+ int insn_rip_relative(struct insn *insn)
+ {
+ 	struct insn_field *modrm = &insn->modrm;
++	int ret;
+ 
+ 	if (!insn->x86_64)
+ 		return 0;
+-	if (!modrm->got)
+-		insn_get_modrm(insn);
++
++	if (!modrm->got) {
++		ret = insn_get_modrm(insn);
++		if (ret)
++			return 0;
++	}
+ 	/*
+ 	 * For rip-relative instructions, the mod field (top 2 bits)
+ 	 * is zero and the r/m field (bottom 3 bits) is 0x5.
+@@ -343,15 +390,25 @@ int insn_rip_relative(struct insn *insn)
+  *
+  * If necessary, first collects the instruction up to and including the
+  * ModRM byte.
++ *
++ * Returns:
++ * 0: if decoding succeeded
++ * < 0: otherwise.
+  */
+-void insn_get_sib(struct insn *insn)
++int insn_get_sib(struct insn *insn)
+ {
+ 	insn_byte_t modrm;
++	int ret;
+ 
+ 	if (insn->sib.got)
+-		return;
+-	if (!insn->modrm.got)
+-		insn_get_modrm(insn);
++		return 0;
++
++	if (!insn->modrm.got) {
++		ret = insn_get_modrm(insn);
++		if (ret)
++			return ret;
++	}
++
+ 	if (insn->modrm.nbytes) {
+ 		modrm = (insn_byte_t)insn->modrm.value;
+ 		if (insn->addr_bytes != 2 &&
+@@ -362,8 +419,10 @@ void insn_get_sib(struct insn *insn)
+ 	}
+ 	insn->sib.got = 1;
+ 
++	return 0;
++
+ err_out:
+-	return;
++	return -ENODATA;
+ }
+ 
+ 
+@@ -374,15 +433,25 @@ err_out:
+  * If necessary, first collects the instruction up to and including the
+  * SIB byte.
+  * Displacement value is sign-expanded.
++ *
++ * * Returns:
++ * 0: if decoding succeeded
++ * < 0: otherwise.
+  */
+-void insn_get_displacement(struct insn *insn)
++int insn_get_displacement(struct insn *insn)
+ {
+ 	insn_byte_t mod, rm, base;
++	int ret;
+ 
+ 	if (insn->displacement.got)
+-		return;
+-	if (!insn->sib.got)
+-		insn_get_sib(insn);
++		return 0;
++
++	if (!insn->sib.got) {
++		ret = insn_get_sib(insn);
++		if (ret)
++			return ret;
++	}
++
+ 	if (insn->modrm.nbytes) {
+ 		/*
+ 		 * Interpreting the modrm byte:
+@@ -425,9 +494,10 @@ void insn_get_displacement(struct insn *insn)
+ 	}
+ out:
+ 	insn->displacement.got = 1;
++	return 0;
+ 
+ err_out:
+-	return;
++	return -ENODATA;
+ }
+ 
+ /* Decode moffset16/32/64. Return 0 if failed */
+@@ -538,20 +608,30 @@ err_out:
+ }
+ 
+ /**
+- * insn_get_immediate() - Get the immediates of instruction
++ * insn_get_immediate() - Get the immediate in an instruction
+  * @insn:	&struct insn containing instruction
+  *
+  * If necessary, first collects the instruction up to and including the
+  * displacement bytes.
+  * Basically, most of immediates are sign-expanded. Unsigned-value can be
+- * get by bit masking with ((1 << (nbytes * 8)) - 1)
++ * computed by bit masking with ((1 << (nbytes * 8)) - 1)
++ *
++ * Returns:
++ * 0:  on success
++ * < 0: on error
+  */
+-void insn_get_immediate(struct insn *insn)
++int insn_get_immediate(struct insn *insn)
+ {
++	int ret;
++
+ 	if (insn->immediate.got)
+-		return;
+-	if (!insn->displacement.got)
+-		insn_get_displacement(insn);
++		return 0;
++
++	if (!insn->displacement.got) {
++		ret = insn_get_displacement(insn);
++		if (ret)
++			return ret;
++	}
+ 
+ 	if (inat_has_moffset(insn->attr)) {
+ 		if (!__get_moffset(insn))
+@@ -604,9 +684,10 @@ void insn_get_immediate(struct insn *insn)
+ 	}
+ done:
+ 	insn->immediate.got = 1;
++	return 0;
+ 
+ err_out:
+-	return;
++	return -ENODATA;
+ }
+ 
+ /**
+@@ -615,13 +696,58 @@ err_out:
+  *
+  * If necessary, first collects the instruction up to and including the
+  * immediates bytes.
+- */
+-void insn_get_length(struct insn *insn)
++ *
++ * Returns:
++ *  - 0 on success
++ *  - < 0 on error
++*/
++int insn_get_length(struct insn *insn)
+ {
++	int ret;
++
+ 	if (insn->length)
+-		return;
+-	if (!insn->immediate.got)
+-		insn_get_immediate(insn);
++		return 0;
++
++	if (!insn->immediate.got) {
++		ret = insn_get_immediate(insn);
++		if (ret)
++			return ret;
++	}
++
+ 	insn->length = (unsigned char)((unsigned long)insn->next_byte
+ 				     - (unsigned long)insn->kaddr);
++
++	return 0;
++}
++
++/**
++ * insn_decode() - Decode an x86 instruction
++ * @insn:	&struct insn to be initialized
++ * @kaddr:	address (in kernel memory) of instruction (or copy thereof)
++ * @buf_len:	length of the insn buffer at @kaddr
++ * @m:		insn mode, see enum insn_mode
++ *
++ * Returns:
++ * 0: if decoding succeeded
++ * < 0: otherwise.
++ */
++int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m)
++{
++	int ret;
++
++#define INSN_MODE_KERN (enum insn_mode)-1 /* __ignore_sync_check__ mode is only valid in the kernel */
++
++	if (m == INSN_MODE_KERN)
++		insn_init(insn, kaddr, buf_len, IS_ENABLED(CONFIG_X86_64));
++	else
++		insn_init(insn, kaddr, buf_len, m == INSN_MODE_64);
++
++	ret = insn_get_length(insn);
++	if (ret)
++		return ret;
++
++	if (insn_complete(insn))
++		return 0;
++
++	return -EINVAL;
+ }
+diff --git a/tools/arch/x86/lib/memcpy_64.S b/tools/arch/x86/lib/memcpy_64.S
+index 1e299ac73c869..59cf2343f3d90 100644
+--- a/tools/arch/x86/lib/memcpy_64.S
++++ b/tools/arch/x86/lib/memcpy_64.S
+@@ -4,7 +4,7 @@
+ #include <linux/linkage.h>
+ #include <asm/errno.h>
+ #include <asm/cpufeatures.h>
+-#include <asm/alternative-asm.h>
++#include <asm/alternative.h>
+ #include <asm/export.h>
+ 
+ .pushsection .noinstr.text, "ax"
+@@ -39,7 +39,7 @@ SYM_FUNC_START_WEAK(memcpy)
+ 	rep movsq
+ 	movl %edx, %ecx
+ 	rep movsb
+-	ret
++	RET
+ SYM_FUNC_END(memcpy)
+ SYM_FUNC_END_ALIAS(__memcpy)
+ EXPORT_SYMBOL(memcpy)
+@@ -53,7 +53,7 @@ SYM_FUNC_START_LOCAL(memcpy_erms)
+ 	movq %rdi, %rax
+ 	movq %rdx, %rcx
+ 	rep movsb
+-	ret
++	RET
+ SYM_FUNC_END(memcpy_erms)
+ 
+ SYM_FUNC_START_LOCAL(memcpy_orig)
+@@ -137,7 +137,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig)
+ 	movq %r9,	1*8(%rdi)
+ 	movq %r10,	-2*8(%rdi, %rdx)
+ 	movq %r11,	-1*8(%rdi, %rdx)
+-	retq
++	RET
+ 	.p2align 4
+ .Lless_16bytes:
+ 	cmpl $8,	%edx
+@@ -149,7 +149,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig)
+ 	movq -1*8(%rsi, %rdx),	%r9
+ 	movq %r8,	0*8(%rdi)
+ 	movq %r9,	-1*8(%rdi, %rdx)
+-	retq
++	RET
+ 	.p2align 4
+ .Lless_8bytes:
+ 	cmpl $4,	%edx
+@@ -162,7 +162,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig)
+ 	movl -4(%rsi, %rdx), %r8d
+ 	movl %ecx, (%rdi)
+ 	movl %r8d, -4(%rdi, %rdx)
+-	retq
++	RET
+ 	.p2align 4
+ .Lless_3bytes:
+ 	subl $1, %edx
+@@ -180,7 +180,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig)
+ 	movb %cl, (%rdi)
+ 
+ .Lend:
+-	retq
++	RET
+ SYM_FUNC_END(memcpy_orig)
+ 
+ .popsection
+diff --git a/tools/arch/x86/lib/memset_64.S b/tools/arch/x86/lib/memset_64.S
+index 0bfd26e4ca9e9..d624f2bc42f16 100644
+--- a/tools/arch/x86/lib/memset_64.S
++++ b/tools/arch/x86/lib/memset_64.S
+@@ -3,7 +3,7 @@
+ 
+ #include <linux/linkage.h>
+ #include <asm/cpufeatures.h>
+-#include <asm/alternative-asm.h>
++#include <asm/alternative.h>
+ #include <asm/export.h>
+ 
+ /*
+@@ -40,7 +40,7 @@ SYM_FUNC_START(__memset)
+ 	movl %edx,%ecx
+ 	rep stosb
+ 	movq %r9,%rax
+-	ret
++	RET
+ SYM_FUNC_END(__memset)
+ SYM_FUNC_END_ALIAS(memset)
+ EXPORT_SYMBOL(memset)
+@@ -63,7 +63,7 @@ SYM_FUNC_START_LOCAL(memset_erms)
+ 	movq %rdx,%rcx
+ 	rep stosb
+ 	movq %r9,%rax
+-	ret
++	RET
+ SYM_FUNC_END(memset_erms)
+ 
+ SYM_FUNC_START_LOCAL(memset_orig)
+@@ -125,7 +125,7 @@ SYM_FUNC_START_LOCAL(memset_orig)
+ 
+ .Lende:
+ 	movq	%r10,%rax
+-	ret
++	RET
+ 
+ .Lbad_alignment:
+ 	cmpq $7,%rdx
+diff --git a/tools/include/asm/alternative-asm.h b/tools/include/asm/alternative-asm.h
+deleted file mode 100644
+index b54bd860dff6f..0000000000000
+--- a/tools/include/asm/alternative-asm.h
++++ /dev/null
+@@ -1,10 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _TOOLS_ASM_ALTERNATIVE_ASM_H
+-#define _TOOLS_ASM_ALTERNATIVE_ASM_H
+-
+-/* Just disable it so we can build arch/x86/lib/memcpy_64.S for perf bench: */
+-
+-#define altinstruction_entry #
+-#define ALTERNATIVE_2 #
+-
+-#endif
+diff --git a/tools/include/asm/alternative.h b/tools/include/asm/alternative.h
+new file mode 100644
+index 0000000000000..b54bd860dff6f
+--- /dev/null
++++ b/tools/include/asm/alternative.h
+@@ -0,0 +1,10 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _TOOLS_ASM_ALTERNATIVE_ASM_H
++#define _TOOLS_ASM_ALTERNATIVE_ASM_H
++
++/* Just disable it so we can build arch/x86/lib/memcpy_64.S for perf bench: */
++
++#define altinstruction_entry #
++#define ALTERNATIVE_2 #
++
++#endif
+diff --git a/tools/include/linux/kconfig.h b/tools/include/linux/kconfig.h
+new file mode 100644
+index 0000000000000..13b86bd3b7461
+--- /dev/null
++++ b/tools/include/linux/kconfig.h
+@@ -0,0 +1,67 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _TOOLS_LINUX_KCONFIG_H
++#define _TOOLS_LINUX_KCONFIG_H
++
++/* CONFIG_CC_VERSION_TEXT (Do not delete this comment. See help in Kconfig) */
++
++#define __ARG_PLACEHOLDER_1 0,
++#define __take_second_arg(__ignored, val, ...) val
++
++/*
++ * The use of "&&" / "||" is limited in certain expressions.
++ * The following enable to calculate "and" / "or" with macro expansion only.
++ */
++#define __and(x, y)			___and(x, y)
++#define ___and(x, y)			____and(__ARG_PLACEHOLDER_##x, y)
++#define ____and(arg1_or_junk, y)	__take_second_arg(arg1_or_junk y, 0)
++
++#define __or(x, y)			___or(x, y)
++#define ___or(x, y)			____or(__ARG_PLACEHOLDER_##x, y)
++#define ____or(arg1_or_junk, y)		__take_second_arg(arg1_or_junk 1, y)
++
++/*
++ * Helper macros to use CONFIG_ options in C/CPP expressions. Note that
++ * these only work with boolean and tristate options.
++ */
++
++/*
++ * Getting something that works in C and CPP for an arg that may or may
++ * not be defined is tricky.  Here, if we have "#define CONFIG_BOOGER 1"
++ * we match on the placeholder define, insert the "0," for arg1 and generate
++ * the triplet (0, 1, 0).  Then the last step cherry picks the 2nd arg (a one).
++ * When CONFIG_BOOGER is not defined, we generate a (... 1, 0) pair, and when
++ * the last step cherry picks the 2nd arg, we get a zero.
++ */
++#define __is_defined(x)			___is_defined(x)
++#define ___is_defined(val)		____is_defined(__ARG_PLACEHOLDER_##val)
++#define ____is_defined(arg1_or_junk)	__take_second_arg(arg1_or_junk 1, 0)
++
++/*
++ * IS_BUILTIN(CONFIG_FOO) evaluates to 1 if CONFIG_FOO is set to 'y', 0
++ * otherwise. For boolean options, this is equivalent to
++ * IS_ENABLED(CONFIG_FOO).
++ */
++#define IS_BUILTIN(option) __is_defined(option)
++
++/*
++ * IS_MODULE(CONFIG_FOO) evaluates to 1 if CONFIG_FOO is set to 'm', 0
++ * otherwise.
++ */
++#define IS_MODULE(option) __is_defined(option##_MODULE)
++
++/*
++ * IS_REACHABLE(CONFIG_FOO) evaluates to 1 if the currently compiled
++ * code can call a function defined in code compiled based on CONFIG_FOO.
++ * This is similar to IS_ENABLED(), but returns false when invoked from
++ * built-in code when CONFIG_FOO is set to 'm'.
++ */
++#define IS_REACHABLE(option) __or(IS_BUILTIN(option), \
++				__and(IS_MODULE(option), __is_defined(MODULE)))
++
++/*
++ * IS_ENABLED(CONFIG_FOO) evaluates to 1 if CONFIG_FOO is set to 'y' or 'm',
++ * 0 otherwise.
++ */
++#define IS_ENABLED(option) __or(IS_BUILTIN(option), IS_MODULE(option))
++
++#endif /* _TOOLS_LINUX_KCONFIG_H */
+diff --git a/tools/include/linux/objtool.h b/tools/include/linux/objtool.h
+index 577f51436cf92..662f19374bd98 100644
+--- a/tools/include/linux/objtool.h
++++ b/tools/include/linux/objtool.h
+@@ -29,11 +29,19 @@ struct unwind_hint {
+  *
+  * UNWIND_HINT_TYPE_REGS_PARTIAL: Used in entry code to indicate that
+  * sp_reg+sp_offset points to the iret return frame.
++ *
++ * UNWIND_HINT_FUNC: Generate the unwind metadata of a callable function.
++ * Useful for code which doesn't have an ELF function annotation.
++ *
++ * UNWIND_HINT_ENTRY: machine entry without stack, SYSCALL/SYSENTER etc.
+  */
+ #define UNWIND_HINT_TYPE_CALL		0
+ #define UNWIND_HINT_TYPE_REGS		1
+ #define UNWIND_HINT_TYPE_REGS_PARTIAL	2
+-#define UNWIND_HINT_TYPE_RET_OFFSET	3
++#define UNWIND_HINT_TYPE_FUNC		3
++#define UNWIND_HINT_TYPE_ENTRY		4
++#define UNWIND_HINT_TYPE_SAVE		5
++#define UNWIND_HINT_TYPE_RESTORE	6
+ 
+ #ifdef CONFIG_STACK_VALIDATION
+ 
+@@ -96,7 +104,7 @@ struct unwind_hint {
+  * the debuginfo as necessary.  It will also warn if it sees any
+  * inconsistencies.
+  */
+-.macro UNWIND_HINT sp_reg:req sp_offset=0 type:req end=0
++.macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0
+ .Lunwind_hint_ip_\@:
+ 	.pushsection .discard.unwind_hints
+ 		/* struct unwind_hint */
+@@ -120,7 +128,7 @@ struct unwind_hint {
+ #define STACK_FRAME_NON_STANDARD(func)
+ #else
+ #define ANNOTATE_INTRA_FUNCTION_CALL
+-.macro UNWIND_HINT sp_reg:req sp_offset=0 type:req end=0
++.macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0
+ .endm
+ #endif
+ 
+diff --git a/tools/objtool/Documentation/stack-validation.txt b/tools/objtool/Documentation/stack-validation.txt
+index 0542e46c75528..30f38fdc0d56c 100644
+--- a/tools/objtool/Documentation/stack-validation.txt
++++ b/tools/objtool/Documentation/stack-validation.txt
+@@ -315,13 +315,15 @@ they mean, and suggestions for how to fix them.
+       function tracing inserts additional calls, which is not obvious from the
+       sources).
+ 
+-10. file.o: warning: func()+0x5c: alternative modifies stack
+-
+-    This means that an alternative includes instructions that modify the
+-    stack. The problem is that there is only one ORC unwind table, this means
+-    that the ORC unwind entries must be valid for each of the alternatives.
+-    The easiest way to enforce this is to ensure alternatives do not contain
+-    any ORC entries, which in turn implies the above constraint.
++10. file.o: warning: func()+0x5c: stack layout conflict in alternatives
++
++    This means that in the use of the alternative() or ALTERNATIVE()
++    macro, the code paths have conflicting modifications to the stack.
++    The problem is that there is only one ORC unwind table, which means
++    that the ORC unwind entries must be consistent for all possible
++    instruction boundaries regardless of which code has been patched.
++    This limitation can be overcome by massaging the alternatives with
++    NOPs to shift the stack changes around so they no longer conflict.
+ 
+ 11. file.o: warning: unannotated intra-function call
+ 
+diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
+index 5cdb19036d7f7..a43096f713c7b 100644
+--- a/tools/objtool/Makefile
++++ b/tools/objtool/Makefile
+@@ -46,10 +46,6 @@ ifeq ($(SRCARCH),x86)
+ 	SUBCMD_ORC := y
+ endif
+ 
+-ifeq ($(SUBCMD_ORC),y)
+-	CFLAGS += -DINSN_USE_ORC
+-endif
+-
+ export SUBCMD_CHECK SUBCMD_ORC
+ export srctree OUTPUT CFLAGS SRCARCH AWK
+ include $(srctree)/tools/build/Makefile.include
+diff --git a/tools/objtool/arch.h b/tools/objtool/arch.h
+index 4a84c3081b8e1..580ce18575857 100644
+--- a/tools/objtool/arch.h
++++ b/tools/objtool/arch.h
+@@ -11,10 +11,6 @@
+ #include "objtool.h"
+ #include "cfi.h"
+ 
+-#ifdef INSN_USE_ORC
+-#include <asm/orc_types.h>
+-#endif
+-
+ enum insn_type {
+ 	INSN_JUMP_CONDITIONAL,
+ 	INSN_JUMP_UNCONDITIONAL,
+@@ -30,6 +26,7 @@ enum insn_type {
+ 	INSN_CLAC,
+ 	INSN_STD,
+ 	INSN_CLD,
++	INSN_TRAP,
+ 	INSN_OTHER,
+ };
+ 
+@@ -87,7 +84,13 @@ unsigned long arch_jump_destination(struct instruction *insn);
+ unsigned long arch_dest_reloc_offset(int addend);
+ 
+ const char *arch_nop_insn(int len);
++const char *arch_ret_insn(int len);
++
++int arch_decode_hint_reg(u8 sp_reg, int *base);
++
++bool arch_is_retpoline(struct symbol *sym);
++bool arch_is_rethunk(struct symbol *sym);
+ 
+-int arch_decode_hint_reg(struct instruction *insn, u8 sp_reg);
++int arch_rewrite_retpolines(struct objtool_file *file);
+ 
+ #endif /* _ARCH_H */
+diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c
+index cde9c36e40ae0..d8f47704fd85f 100644
+--- a/tools/objtool/arch/x86/decode.c
++++ b/tools/objtool/arch/x86/decode.c
+@@ -16,6 +16,7 @@
+ #include "../../arch.h"
+ #include "../../warn.h"
+ #include <asm/orc_types.h>
++#include "arch_elf.h"
+ 
+ static unsigned char op_to_cfi_reg[][2] = {
+ 	{CFI_AX, CFI_R8},
+@@ -455,6 +456,11 @@ int arch_decode_instruction(const struct elf *elf, const struct section *sec,
+ 
+ 		break;
+ 
++	case 0xcc:
++		/* int3 */
++		*type = INSN_TRAP;
++		break;
++
+ 	case 0xe3:
+ 		/* jecxz/jrcxz */
+ 		*type = INSN_JUMP_CONDITIONAL;
+@@ -563,8 +569,8 @@ void arch_initial_func_cfi_state(struct cfi_init_state *state)
+ 	state->cfa.offset = 8;
+ 
+ 	/* initial RA (return address) */
+-	state->regs[16].base = CFI_CFA;
+-	state->regs[16].offset = -8;
++	state->regs[CFI_RA].base = CFI_CFA;
++	state->regs[CFI_RA].offset = -8;
+ }
+ 
+ const char *arch_nop_insn(int len)
+@@ -585,34 +591,52 @@ const char *arch_nop_insn(int len)
+ 	return nops[len-1];
+ }
+ 
+-int arch_decode_hint_reg(struct instruction *insn, u8 sp_reg)
++#define BYTE_RET	0xC3
++
++const char *arch_ret_insn(int len)
+ {
+-	struct cfi_reg *cfa = &insn->cfi.cfa;
++	static const char ret[5][5] = {
++		{ BYTE_RET },
++		{ BYTE_RET, 0xcc },
++		{ BYTE_RET, 0xcc, 0x90 },
++		{ BYTE_RET, 0xcc, 0x66, 0x90 },
++		{ BYTE_RET, 0xcc, 0x0f, 0x1f, 0x00 },
++	};
+ 
++	if (len < 1 || len > 5) {
++		WARN("invalid RET size: %d\n", len);
++		return NULL;
++	}
++
++	return ret[len-1];
++}
++
++int arch_decode_hint_reg(u8 sp_reg, int *base)
++{
+ 	switch (sp_reg) {
+ 	case ORC_REG_UNDEFINED:
+-		cfa->base = CFI_UNDEFINED;
++		*base = CFI_UNDEFINED;
+ 		break;
+ 	case ORC_REG_SP:
+-		cfa->base = CFI_SP;
++		*base = CFI_SP;
+ 		break;
+ 	case ORC_REG_BP:
+-		cfa->base = CFI_BP;
++		*base = CFI_BP;
+ 		break;
+ 	case ORC_REG_SP_INDIRECT:
+-		cfa->base = CFI_SP_INDIRECT;
++		*base = CFI_SP_INDIRECT;
+ 		break;
+ 	case ORC_REG_R10:
+-		cfa->base = CFI_R10;
++		*base = CFI_R10;
+ 		break;
+ 	case ORC_REG_R13:
+-		cfa->base = CFI_R13;
++		*base = CFI_R13;
+ 		break;
+ 	case ORC_REG_DI:
+-		cfa->base = CFI_DI;
++		*base = CFI_DI;
+ 		break;
+ 	case ORC_REG_DX:
+-		cfa->base = CFI_DX;
++		*base = CFI_DX;
+ 		break;
+ 	default:
+ 		return -1;
+@@ -620,3 +644,13 @@ int arch_decode_hint_reg(struct instruction *insn, u8 sp_reg)
+ 
+ 	return 0;
+ }
++
++bool arch_is_retpoline(struct symbol *sym)
++{
++	return !strncmp(sym->name, "__x86_indirect_", 15);
++}
++
++bool arch_is_rethunk(struct symbol *sym)
++{
++	return !strcmp(sym->name, "__x86_return_thunk");
++}
+diff --git a/tools/objtool/arch/x86/include/arch_special.h b/tools/objtool/arch/x86/include/arch_special.h
+index d818b2bffa02c..14271cca0c740 100644
+--- a/tools/objtool/arch/x86/include/arch_special.h
++++ b/tools/objtool/arch/x86/include/arch_special.h
+@@ -10,7 +10,7 @@
+ #define JUMP_ORIG_OFFSET	0
+ #define JUMP_NEW_OFFSET		4
+ 
+-#define ALT_ENTRY_SIZE		13
++#define ALT_ENTRY_SIZE		12
+ #define ALT_ORIG_OFFSET		0
+ #define ALT_NEW_OFFSET		4
+ #define ALT_FEATURE_OFFSET	8
+diff --git a/tools/objtool/builtin-check.c b/tools/objtool/builtin-check.c
+index c6d199bfd0ae2..447a49c03abb3 100644
+--- a/tools/objtool/builtin-check.c
++++ b/tools/objtool/builtin-check.c
+@@ -18,7 +18,8 @@
+ #include "builtin.h"
+ #include "objtool.h"
+ 
+-bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, validate_dup, vmlinux;
++bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats,
++     validate_dup, vmlinux, sls, unret, rethunk;
+ 
+ static const char * const check_usage[] = {
+ 	"objtool check [<options>] file.o",
+@@ -29,12 +30,15 @@ const struct option check_options[] = {
+ 	OPT_BOOLEAN('f', "no-fp", &no_fp, "Skip frame pointer validation"),
+ 	OPT_BOOLEAN('u', "no-unreachable", &no_unreachable, "Skip 'unreachable instruction' warnings"),
+ 	OPT_BOOLEAN('r', "retpoline", &retpoline, "Validate retpoline assumptions"),
++	OPT_BOOLEAN(0,   "rethunk", &rethunk, "validate and annotate rethunk usage"),
++	OPT_BOOLEAN(0,   "unret", &unret, "validate entry unret placement"),
+ 	OPT_BOOLEAN('m', "module", &module, "Indicates the object will be part of a kernel module"),
+ 	OPT_BOOLEAN('b', "backtrace", &backtrace, "unwind on error"),
+ 	OPT_BOOLEAN('a', "uaccess", &uaccess, "enable uaccess checking"),
+ 	OPT_BOOLEAN('s', "stats", &stats, "print statistics"),
+ 	OPT_BOOLEAN('d', "duplicate", &validate_dup, "duplicate validation for vmlinux.o"),
+ 	OPT_BOOLEAN('l', "vmlinux", &vmlinux, "vmlinux.o validation"),
++	OPT_BOOLEAN('S', "sls", &sls, "validate straight-line-speculation"),
+ 	OPT_END(),
+ };
+ 
+diff --git a/tools/objtool/builtin-orc.c b/tools/objtool/builtin-orc.c
+index 7b31121fa60b2..508bdf6ae8dc6 100644
+--- a/tools/objtool/builtin-orc.c
++++ b/tools/objtool/builtin-orc.c
+@@ -51,11 +51,7 @@ int cmd_orc(int argc, const char **argv)
+ 		if (list_empty(&file->insn_list))
+ 			return 0;
+ 
+-		ret = create_orc(file);
+-		if (ret)
+-			return ret;
+-
+-		ret = create_orc_sections(file);
++		ret = orc_create(file);
+ 		if (ret)
+ 			return ret;
+ 
+diff --git a/tools/objtool/builtin.h b/tools/objtool/builtin.h
+index 85c979caa3677..61d8d49dbc657 100644
+--- a/tools/objtool/builtin.h
++++ b/tools/objtool/builtin.h
+@@ -8,7 +8,8 @@
+ #include <subcmd/parse-options.h>
+ 
+ extern const struct option check_options[];
+-extern bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, validate_dup, vmlinux;
++extern bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats,
++            validate_dup, vmlinux, sls, unret, rethunk;
+ 
+ extern int cmd_check(int argc, const char **argv);
+ extern int cmd_orc(int argc, const char **argv);
+diff --git a/tools/objtool/cfi.h b/tools/objtool/cfi.h
+index c7c59c6a44eea..f579802d7ec24 100644
+--- a/tools/objtool/cfi.h
++++ b/tools/objtool/cfi.h
+@@ -7,6 +7,7 @@
+ #define _OBJTOOL_CFI_H
+ 
+ #include "cfi_regs.h"
++#include <linux/list.h>
+ 
+ #define CFI_UNDEFINED		-1
+ #define CFI_CFA			-2
+@@ -24,6 +25,7 @@ struct cfi_init_state {
+ };
+ 
+ struct cfi_state {
++	struct hlist_node hash; /* must be first, cficmp() */
+ 	struct cfi_reg regs[CFI_NUM_REGS];
+ 	struct cfi_reg vals[CFI_NUM_REGS];
+ 	struct cfi_reg cfa;
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 8932f41c387ff..ea80b29b99134 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -5,6 +5,8 @@
+ 
+ #include <string.h>
+ #include <stdlib.h>
++#include <inttypes.h>
++#include <sys/mman.h>
+ 
+ #include "builtin.h"
+ #include "cfi.h"
+@@ -19,15 +21,17 @@
+ #include <linux/kernel.h>
+ #include <linux/static_call_types.h>
+ 
+-#define FAKE_JUMP_OFFSET -1
+-
+ struct alternative {
+ 	struct list_head list;
+ 	struct instruction *insn;
+ 	bool skip_orig;
+ };
+ 
+-struct cfi_init_state initial_func_cfi;
++static unsigned long nr_cfi, nr_cfi_reused, nr_cfi_cache;
++
++static struct cfi_init_state initial_func_cfi;
++static struct cfi_state init_cfi;
++static struct cfi_state func_cfi;
+ 
+ struct instruction *find_insn(struct objtool_file *file,
+ 			      struct section *sec, unsigned long offset)
+@@ -109,17 +113,34 @@ static struct instruction *prev_insn_same_sym(struct objtool_file *file,
+ 	for (insn = next_insn_same_sec(file, insn); insn;		\
+ 	     insn = next_insn_same_sec(file, insn))
+ 
++static bool is_jump_table_jump(struct instruction *insn)
++{
++	struct alt_group *alt_group = insn->alt_group;
++
++	if (insn->jump_table)
++		return true;
++
++	/* Retpoline alternative for a jump table? */
++	return alt_group && alt_group->orig_group &&
++	       alt_group->orig_group->first_insn->jump_table;
++}
++
+ static bool is_sibling_call(struct instruction *insn)
+ {
++	/*
++	 * Assume only ELF functions can make sibling calls.  This ensures
++	 * sibling call detection consistency between vmlinux.o and individual
++	 * objects.
++	 */
++	if (!insn->func)
++		return false;
++
+ 	/* An indirect jump is either a sibling call or a jump to a table. */
+ 	if (insn->type == INSN_JUMP_DYNAMIC)
+-		return list_empty(&insn->alts);
+-
+-	if (!is_static_jump(insn))
+-		return false;
++		return !is_jump_table_jump(insn);
+ 
+ 	/* add_jump_destinations() sets insn->call_dest for sibling calls. */
+-	return !!insn->call_dest;
++	return (is_static_jump(insn) && insn->call_dest);
+ }
+ 
+ /*
+@@ -250,6 +271,78 @@ static void init_insn_state(struct insn_state *state, struct section *sec)
+ 		state->noinstr = sec->noinstr;
+ }
+ 
++static struct cfi_state *cfi_alloc(void)
++{
++	struct cfi_state *cfi = calloc(sizeof(struct cfi_state), 1);
++	if (!cfi) {
++		WARN("calloc failed");
++		exit(1);
++	}
++	nr_cfi++;
++	return cfi;
++}
++
++static int cfi_bits;
++static struct hlist_head *cfi_hash;
++
++static inline bool cficmp(struct cfi_state *cfi1, struct cfi_state *cfi2)
++{
++	return memcmp((void *)cfi1 + sizeof(cfi1->hash),
++		      (void *)cfi2 + sizeof(cfi2->hash),
++		      sizeof(struct cfi_state) - sizeof(struct hlist_node));
++}
++
++static inline u32 cfi_key(struct cfi_state *cfi)
++{
++	return jhash((void *)cfi + sizeof(cfi->hash),
++		     sizeof(*cfi) - sizeof(cfi->hash), 0);
++}
++
++static struct cfi_state *cfi_hash_find_or_add(struct cfi_state *cfi)
++{
++	struct hlist_head *head = &cfi_hash[hash_min(cfi_key(cfi), cfi_bits)];
++	struct cfi_state *obj;
++
++	hlist_for_each_entry(obj, head, hash) {
++		if (!cficmp(cfi, obj)) {
++			nr_cfi_cache++;
++			return obj;
++		}
++	}
++
++	obj = cfi_alloc();
++	*obj = *cfi;
++	hlist_add_head(&obj->hash, head);
++
++	return obj;
++}
++
++static void cfi_hash_add(struct cfi_state *cfi)
++{
++	struct hlist_head *head = &cfi_hash[hash_min(cfi_key(cfi), cfi_bits)];
++
++	hlist_add_head(&cfi->hash, head);
++}
++
++static void *cfi_hash_alloc(void)
++{
++	cfi_bits = vmlinux ? ELF_HASH_BITS - 3 : 13;
++	cfi_hash = mmap(NULL, sizeof(struct hlist_head) << cfi_bits,
++			PROT_READ|PROT_WRITE,
++			MAP_PRIVATE|MAP_ANON, -1, 0);
++	if (cfi_hash == (void *)-1L) {
++		WARN("mmap fail cfi_hash");
++		cfi_hash = NULL;
++	}  else if (stats) {
++		printf("cfi_bits: %d\n", cfi_bits);
++	}
++
++	return cfi_hash;
++}
++
++static unsigned long nr_insns;
++static unsigned long nr_insns_visited;
++
+ /*
+  * Call the arch-specific instruction decoder for all the instructions and add
+  * them to the global instruction list.
+@@ -260,7 +353,6 @@ static int decode_instructions(struct objtool_file *file)
+ 	struct symbol *func;
+ 	unsigned long offset;
+ 	struct instruction *insn;
+-	unsigned long nr_insns = 0;
+ 	int ret;
+ 
+ 	for_each_sec(file, sec) {
+@@ -274,7 +366,8 @@ static int decode_instructions(struct objtool_file *file)
+ 			sec->text = true;
+ 
+ 		if (!strcmp(sec->name, ".noinstr.text") ||
+-		    !strcmp(sec->name, ".entry.text"))
++		    !strcmp(sec->name, ".entry.text") ||
++		    !strncmp(sec->name, ".text.__x86.", 12))
+ 			sec->noinstr = true;
+ 
+ 		for (offset = 0; offset < sec->len; offset += insn->len) {
+@@ -286,7 +379,6 @@ static int decode_instructions(struct objtool_file *file)
+ 			memset(insn, 0, sizeof(*insn));
+ 			INIT_LIST_HEAD(&insn->alts);
+ 			INIT_LIST_HEAD(&insn->stack_ops);
+-			init_cfi_state(&insn->cfi);
+ 
+ 			insn->sec = sec;
+ 			insn->offset = offset;
+@@ -377,12 +469,12 @@ static int add_dead_ends(struct objtool_file *file)
+ 		else if (reloc->addend == reloc->sym->sec->len) {
+ 			insn = find_last_insn(file, reloc->sym->sec);
+ 			if (!insn) {
+-				WARN("can't find unreachable insn at %s+0x%x",
++				WARN("can't find unreachable insn at %s+0x%" PRIx64,
+ 				     reloc->sym->sec->name, reloc->addend);
+ 				return -1;
+ 			}
+ 		} else {
+-			WARN("can't find unreachable insn at %s+0x%x",
++			WARN("can't find unreachable insn at %s+0x%" PRIx64,
+ 			     reloc->sym->sec->name, reloc->addend);
+ 			return -1;
+ 		}
+@@ -412,12 +504,12 @@ reachable:
+ 		else if (reloc->addend == reloc->sym->sec->len) {
+ 			insn = find_last_insn(file, reloc->sym->sec);
+ 			if (!insn) {
+-				WARN("can't find reachable insn at %s+0x%x",
++				WARN("can't find reachable insn at %s+0x%" PRIx64,
+ 				     reloc->sym->sec->name, reloc->addend);
+ 				return -1;
+ 			}
+ 		} else {
+-			WARN("can't find reachable insn at %s+0x%x",
++			WARN("can't find reachable insn at %s+0x%" PRIx64,
+ 			     reloc->sym->sec->name, reloc->addend);
+ 			return -1;
+ 		}
+@@ -430,8 +522,7 @@ reachable:
+ 
+ static int create_static_call_sections(struct objtool_file *file)
+ {
+-	struct section *sec, *reloc_sec;
+-	struct reloc *reloc;
++	struct section *sec;
+ 	struct static_call_site *site;
+ 	struct instruction *insn;
+ 	struct symbol *key_sym;
+@@ -449,7 +540,7 @@ static int create_static_call_sections(struct objtool_file *file)
+ 		return 0;
+ 
+ 	idx = 0;
+-	list_for_each_entry(insn, &file->static_call_list, static_call_node)
++	list_for_each_entry(insn, &file->static_call_list, call_node)
+ 		idx++;
+ 
+ 	sec = elf_create_section(file->elf, ".static_call_sites", SHF_WRITE,
+@@ -457,36 +548,18 @@ static int create_static_call_sections(struct objtool_file *file)
+ 	if (!sec)
+ 		return -1;
+ 
+-	reloc_sec = elf_create_reloc_section(file->elf, sec, SHT_RELA);
+-	if (!reloc_sec)
+-		return -1;
+-
+ 	idx = 0;
+-	list_for_each_entry(insn, &file->static_call_list, static_call_node) {
++	list_for_each_entry(insn, &file->static_call_list, call_node) {
+ 
+ 		site = (struct static_call_site *)sec->data->d_buf + idx;
+ 		memset(site, 0, sizeof(struct static_call_site));
+ 
+ 		/* populate reloc for 'addr' */
+-		reloc = malloc(sizeof(*reloc));
+-
+-		if (!reloc) {
+-			perror("malloc");
+-			return -1;
+-		}
+-		memset(reloc, 0, sizeof(*reloc));
+-
+-		insn_to_reloc_sym_addend(insn->sec, insn->offset, reloc);
+-		if (!reloc->sym) {
+-			WARN_FUNC("static call tramp: missing containing symbol",
+-				  insn->sec, insn->offset);
++		if (elf_add_reloc_to_insn(file->elf, sec,
++					  idx * sizeof(struct static_call_site),
++					  R_X86_64_PC32,
++					  insn->sec, insn->offset))
+ 			return -1;
+-		}
+-
+-		reloc->type = R_X86_64_PC32;
+-		reloc->offset = idx * sizeof(struct static_call_site);
+-		reloc->sec = reloc_sec;
+-		elf_add_reloc(file->elf, reloc);
+ 
+ 		/* find key symbol */
+ 		key_name = strdup(insn->call_dest->name);
+@@ -523,24 +596,106 @@ static int create_static_call_sections(struct objtool_file *file)
+ 		free(key_name);
+ 
+ 		/* populate reloc for 'key' */
+-		reloc = malloc(sizeof(*reloc));
+-		if (!reloc) {
+-			perror("malloc");
++		if (elf_add_reloc(file->elf, sec,
++				  idx * sizeof(struct static_call_site) + 4,
++				  R_X86_64_PC32, key_sym,
++				  is_sibling_call(insn) * STATIC_CALL_SITE_TAIL))
++			return -1;
++
++		idx++;
++	}
++
++	return 0;
++}
++
++static int create_retpoline_sites_sections(struct objtool_file *file)
++{
++	struct instruction *insn;
++	struct section *sec;
++	int idx;
++
++	sec = find_section_by_name(file->elf, ".retpoline_sites");
++	if (sec) {
++		WARN("file already has .retpoline_sites, skipping");
++		return 0;
++	}
++
++	idx = 0;
++	list_for_each_entry(insn, &file->retpoline_call_list, call_node)
++		idx++;
++
++	if (!idx)
++		return 0;
++
++	sec = elf_create_section(file->elf, ".retpoline_sites", 0,
++				 sizeof(int), idx);
++	if (!sec) {
++		WARN("elf_create_section: .retpoline_sites");
++		return -1;
++	}
++
++	idx = 0;
++	list_for_each_entry(insn, &file->retpoline_call_list, call_node) {
++
++		int *site = (int *)sec->data->d_buf + idx;
++		*site = 0;
++
++		if (elf_add_reloc_to_insn(file->elf, sec,
++					  idx * sizeof(int),
++					  R_X86_64_PC32,
++					  insn->sec, insn->offset)) {
++			WARN("elf_add_reloc_to_insn: .retpoline_sites");
+ 			return -1;
+ 		}
+-		memset(reloc, 0, sizeof(*reloc));
+-		reloc->sym = key_sym;
+-		reloc->addend = is_sibling_call(insn) ? STATIC_CALL_SITE_TAIL : 0;
+-		reloc->type = R_X86_64_PC32;
+-		reloc->offset = idx * sizeof(struct static_call_site) + 4;
+-		reloc->sec = reloc_sec;
+-		elf_add_reloc(file->elf, reloc);
+ 
+ 		idx++;
+ 	}
+ 
+-	if (elf_rebuild_reloc_section(file->elf, reloc_sec))
++	return 0;
++}
++
++static int create_return_sites_sections(struct objtool_file *file)
++{
++	struct instruction *insn;
++	struct section *sec;
++	int idx;
++
++	sec = find_section_by_name(file->elf, ".return_sites");
++	if (sec) {
++		WARN("file already has .return_sites, skipping");
++		return 0;
++	}
++
++	idx = 0;
++	list_for_each_entry(insn, &file->return_thunk_list, call_node)
++		idx++;
++
++	if (!idx)
++		return 0;
++
++	sec = elf_create_section(file->elf, ".return_sites", 0,
++				 sizeof(int), idx);
++	if (!sec) {
++		WARN("elf_create_section: .return_sites");
+ 		return -1;
++	}
++
++	idx = 0;
++	list_for_each_entry(insn, &file->return_thunk_list, call_node) {
++
++		int *site = (int *)sec->data->d_buf + idx;
++		*site = 0;
++
++		if (elf_add_reloc_to_insn(file->elf, sec,
++					  idx * sizeof(int),
++					  R_X86_64_PC32,
++					  insn->sec, insn->offset)) {
++			WARN("elf_add_reloc_to_insn: .return_sites");
++			return -1;
++		}
++
++		idx++;
++	}
+ 
+ 	return 0;
+ }
+@@ -775,6 +930,172 @@ static int add_ignore_alternatives(struct objtool_file *file)
+ 	return 0;
+ }
+ 
++__weak bool arch_is_retpoline(struct symbol *sym)
++{
++	return false;
++}
++
++__weak bool arch_is_rethunk(struct symbol *sym)
++{
++	return false;
++}
++
++#define NEGATIVE_RELOC	((void *)-1L)
++
++static struct reloc *insn_reloc(struct objtool_file *file, struct instruction *insn)
++{
++	if (insn->reloc == NEGATIVE_RELOC)
++		return NULL;
++
++	if (!insn->reloc) {
++		insn->reloc = find_reloc_by_dest_range(file->elf, insn->sec,
++						       insn->offset, insn->len);
++		if (!insn->reloc) {
++			insn->reloc = NEGATIVE_RELOC;
++			return NULL;
++		}
++	}
++
++	return insn->reloc;
++}
++
++static void remove_insn_ops(struct instruction *insn)
++{
++	struct stack_op *op, *tmp;
++
++	list_for_each_entry_safe(op, tmp, &insn->stack_ops, list) {
++		list_del(&op->list);
++		free(op);
++	}
++}
++
++static void annotate_call_site(struct objtool_file *file,
++			       struct instruction *insn, bool sibling)
++{
++	struct reloc *reloc = insn_reloc(file, insn);
++	struct symbol *sym = insn->call_dest;
++
++	if (!sym)
++		sym = reloc->sym;
++
++	/*
++	 * Alternative replacement code is just template code which is
++	 * sometimes copied to the original instruction. For now, don't
++	 * annotate it. (In the future we might consider annotating the
++	 * original instruction if/when it ever makes sense to do so.)
++	 */
++	if (!strcmp(insn->sec->name, ".altinstr_replacement"))
++		return;
++
++	if (sym->static_call_tramp) {
++		list_add_tail(&insn->call_node, &file->static_call_list);
++		return;
++	}
++
++	if (sym->retpoline_thunk) {
++		list_add_tail(&insn->call_node, &file->retpoline_call_list);
++		return;
++	}
++
++	/*
++	 * Many compilers cannot disable KCOV with a function attribute
++	 * so they need a little help, NOP out any KCOV calls from noinstr
++	 * text.
++	 */
++	if (insn->sec->noinstr && sym->kcov) {
++		if (reloc) {
++			reloc->type = R_NONE;
++			elf_write_reloc(file->elf, reloc);
++		}
++
++		elf_write_insn(file->elf, insn->sec,
++			       insn->offset, insn->len,
++			       sibling ? arch_ret_insn(insn->len)
++			               : arch_nop_insn(insn->len));
++
++		insn->type = sibling ? INSN_RETURN : INSN_NOP;
++
++		if (sibling) {
++			/*
++			 * We've replaced the tail-call JMP insn by two new
++			 * insn: RET; INT3, except we only have a single struct
++			 * insn here. Mark it retpoline_safe to avoid the SLS
++			 * warning, instead of adding another insn.
++			 */
++			insn->retpoline_safe = true;
++		}
++
++		return;
++	}
++}
++
++static void add_call_dest(struct objtool_file *file, struct instruction *insn,
++			  struct symbol *dest, bool sibling)
++{
++	insn->call_dest = dest;
++	if (!dest)
++		return;
++
++	/*
++	 * Whatever stack impact regular CALLs have, should be undone
++	 * by the RETURN of the called function.
++	 *
++	 * Annotated intra-function calls retain the stack_ops but
++	 * are converted to JUMP, see read_intra_function_calls().
++	 */
++	remove_insn_ops(insn);
++
++	annotate_call_site(file, insn, sibling);
++}
++
++static void add_retpoline_call(struct objtool_file *file, struct instruction *insn)
++{
++	/*
++	 * Retpoline calls/jumps are really dynamic calls/jumps in disguise,
++	 * so convert them accordingly.
++	 */
++	switch (insn->type) {
++	case INSN_CALL:
++		insn->type = INSN_CALL_DYNAMIC;
++		break;
++	case INSN_JUMP_UNCONDITIONAL:
++		insn->type = INSN_JUMP_DYNAMIC;
++		break;
++	case INSN_JUMP_CONDITIONAL:
++		insn->type = INSN_JUMP_DYNAMIC_CONDITIONAL;
++		break;
++	default:
++		return;
++	}
++
++	insn->retpoline_safe = true;
++
++	/*
++	 * Whatever stack impact regular CALLs have, should be undone
++	 * by the RETURN of the called function.
++	 *
++	 * Annotated intra-function calls retain the stack_ops but
++	 * are converted to JUMP, see read_intra_function_calls().
++	 */
++	remove_insn_ops(insn);
++
++	annotate_call_site(file, insn, false);
++}
++
++static void add_return_call(struct objtool_file *file, struct instruction *insn, bool add)
++{
++	/*
++	 * Return thunk tail calls are really just returns in disguise,
++	 * so convert them accordingly.
++	 */
++	insn->type = INSN_RETURN;
++	insn->retpoline_safe = true;
++
++	/* Skip the non-text sections, specially .discard ones */
++	if (add && insn->sec->text)
++		list_add_tail(&insn->call_node, &file->return_thunk_list);
++}
++
+ /*
+  * Find the destination instructions for all jumps.
+  */
+@@ -789,46 +1110,35 @@ static int add_jump_destinations(struct objtool_file *file)
+ 		if (!is_static_jump(insn))
+ 			continue;
+ 
+-		if (insn->offset == FAKE_JUMP_OFFSET)
+-			continue;
+-
+-		reloc = find_reloc_by_dest_range(file->elf, insn->sec,
+-					       insn->offset, insn->len);
++		reloc = insn_reloc(file, insn);
+ 		if (!reloc) {
+ 			dest_sec = insn->sec;
+ 			dest_off = arch_jump_destination(insn);
+ 		} else if (reloc->sym->type == STT_SECTION) {
+ 			dest_sec = reloc->sym->sec;
+ 			dest_off = arch_dest_reloc_offset(reloc->addend);
++		} else if (reloc->sym->retpoline_thunk) {
++			add_retpoline_call(file, insn);
++			continue;
++		} else if (reloc->sym->return_thunk) {
++			add_return_call(file, insn, true);
++			continue;
++		} else if (insn->func) {
++			/* internal or external sibling call (with reloc) */
++			add_call_dest(file, insn, reloc->sym, true);
++			continue;
+ 		} else if (reloc->sym->sec->idx) {
+ 			dest_sec = reloc->sym->sec;
+ 			dest_off = reloc->sym->sym.st_value +
+ 				   arch_dest_reloc_offset(reloc->addend);
+-		} else if (!strncmp(reloc->sym->name, "__x86_indirect_thunk_", 21) ||
+-			   !strncmp(reloc->sym->name, "__x86_retpoline_", 16)) {
+-			/*
+-			 * Retpoline jumps are really dynamic jumps in
+-			 * disguise, so convert them accordingly.
+-			 */
+-			if (insn->type == INSN_JUMP_UNCONDITIONAL)
+-				insn->type = INSN_JUMP_DYNAMIC;
+-			else
+-				insn->type = INSN_JUMP_DYNAMIC_CONDITIONAL;
+-
+-			insn->retpoline_safe = true;
+-			continue;
+ 		} else {
+-			/* external sibling call */
+-			insn->call_dest = reloc->sym;
+-			if (insn->call_dest->static_call_tramp) {
+-				list_add_tail(&insn->static_call_node,
+-					      &file->static_call_list);
+-			}
++			/* non-func asm code jumping to another file */
+ 			continue;
+ 		}
+ 
+ 		insn->jump_dest = find_insn(file, dest_sec, dest_off);
+ 		if (!insn->jump_dest) {
++			struct symbol *sym = find_symbol_by_offset(dest_sec, dest_off);
+ 
+ 			/*
+ 			 * This is a special case where an alt instruction
+@@ -838,6 +1148,19 @@ static int add_jump_destinations(struct objtool_file *file)
+ 			if (!strcmp(insn->sec->name, ".altinstr_replacement"))
+ 				continue;
+ 
++			/*
++			 * This is a special case for zen_untrain_ret().
++			 * It jumps to __x86_return_thunk(), but objtool
++			 * can't find the thunk's starting RET
++			 * instruction, because the RET is also in the
++			 * middle of another instruction.  Objtool only
++			 * knows about the outer instruction.
++			 */
++			if (sym && sym->return_thunk) {
++				add_return_call(file, insn, false);
++				continue;
++			}
++
+ 			WARN_FUNC("can't find jump dest instruction at %s+0x%lx",
+ 				  insn->sec, insn->offset, dest_sec->name,
+ 				  dest_off);
+@@ -872,13 +1195,8 @@ static int add_jump_destinations(struct objtool_file *file)
+ 
+ 			} else if (insn->jump_dest->func->pfunc != insn->func->pfunc &&
+ 				   insn->jump_dest->offset == insn->jump_dest->func->offset) {
+-
+-				/* internal sibling call */
+-				insn->call_dest = insn->jump_dest->func;
+-				if (insn->call_dest->static_call_tramp) {
+-					list_add_tail(&insn->static_call_node,
+-						      &file->static_call_list);
+-				}
++				/* internal sibling call (without reloc) */
++				add_call_dest(file, insn, insn->jump_dest->func, true);
+ 			}
+ 		}
+ 	}
+@@ -886,16 +1204,6 @@ static int add_jump_destinations(struct objtool_file *file)
+ 	return 0;
+ }
+ 
+-static void remove_insn_ops(struct instruction *insn)
+-{
+-	struct stack_op *op, *tmp;
+-
+-	list_for_each_entry_safe(op, tmp, &insn->stack_ops, list) {
+-		list_del(&op->list);
+-		free(op);
+-	}
+-}
+-
+ static struct symbol *find_call_destination(struct section *sec, unsigned long offset)
+ {
+ 	struct symbol *call_dest;
+@@ -914,17 +1222,19 @@ static int add_call_destinations(struct objtool_file *file)
+ {
+ 	struct instruction *insn;
+ 	unsigned long dest_off;
++	struct symbol *dest;
+ 	struct reloc *reloc;
+ 
+ 	for_each_insn(file, insn) {
+ 		if (insn->type != INSN_CALL)
+ 			continue;
+ 
+-		reloc = find_reloc_by_dest_range(file->elf, insn->sec,
+-					       insn->offset, insn->len);
++		reloc = insn_reloc(file, insn);
+ 		if (!reloc) {
+ 			dest_off = arch_jump_destination(insn);
+-			insn->call_dest = find_call_destination(insn->sec, dest_off);
++			dest = find_call_destination(insn->sec, dest_off);
++
++			add_call_dest(file, insn, dest, false);
+ 
+ 			if (insn->ignore)
+ 				continue;
+@@ -942,122 +1252,104 @@ static int add_call_destinations(struct objtool_file *file)
+ 
+ 		} else if (reloc->sym->type == STT_SECTION) {
+ 			dest_off = arch_dest_reloc_offset(reloc->addend);
+-			insn->call_dest = find_call_destination(reloc->sym->sec,
+-								dest_off);
+-			if (!insn->call_dest) {
++			dest = find_call_destination(reloc->sym->sec, dest_off);
++			if (!dest) {
+ 				WARN_FUNC("can't find call dest symbol at %s+0x%lx",
+ 					  insn->sec, insn->offset,
+ 					  reloc->sym->sec->name,
+ 					  dest_off);
+ 				return -1;
+ 			}
+-		} else
+-			insn->call_dest = reloc->sym;
+-
+-		if (insn->call_dest && insn->call_dest->static_call_tramp) {
+-			list_add_tail(&insn->static_call_node,
+-				      &file->static_call_list);
+-		}
+ 
+-		/*
+-		 * Many compilers cannot disable KCOV with a function attribute
+-		 * so they need a little help, NOP out any KCOV calls from noinstr
+-		 * text.
+-		 */
+-		if (insn->sec->noinstr &&
+-		    !strncmp(insn->call_dest->name, "__sanitizer_cov_", 16)) {
+-			if (reloc) {
+-				reloc->type = R_NONE;
+-				elf_write_reloc(file->elf, reloc);
+-			}
++			add_call_dest(file, insn, dest, false);
+ 
+-			elf_write_insn(file->elf, insn->sec,
+-				       insn->offset, insn->len,
+-				       arch_nop_insn(insn->len));
+-			insn->type = INSN_NOP;
+-		}
++		} else if (reloc->sym->retpoline_thunk) {
++			add_retpoline_call(file, insn);
+ 
+-		/*
+-		 * Whatever stack impact regular CALLs have, should be undone
+-		 * by the RETURN of the called function.
+-		 *
+-		 * Annotated intra-function calls retain the stack_ops but
+-		 * are converted to JUMP, see read_intra_function_calls().
+-		 */
+-		remove_insn_ops(insn);
++		} else
++			add_call_dest(file, insn, reloc->sym, false);
+ 	}
+ 
+ 	return 0;
+ }
+ 
+ /*
+- * The .alternatives section requires some extra special care, over and above
+- * what other special sections require:
+- *
+- * 1. Because alternatives are patched in-place, we need to insert a fake jump
+- *    instruction at the end so that validate_branch() skips all the original
+- *    replaced instructions when validating the new instruction path.
+- *
+- * 2. An added wrinkle is that the new instruction length might be zero.  In
+- *    that case the old instructions are replaced with noops.  We simulate that
+- *    by creating a fake jump as the only new instruction.
+- *
+- * 3. In some cases, the alternative section includes an instruction which
+- *    conditionally jumps to the _end_ of the entry.  We have to modify these
+- *    jumps' destinations to point back to .text rather than the end of the
+- *    entry in .altinstr_replacement.
++ * The .alternatives section requires some extra special care over and above
++ * other special sections because alternatives are patched in place.
+  */
+ static int handle_group_alt(struct objtool_file *file,
+ 			    struct special_alt *special_alt,
+ 			    struct instruction *orig_insn,
+ 			    struct instruction **new_insn)
+ {
+-	static unsigned int alt_group_next_index = 1;
+-	struct instruction *last_orig_insn, *last_new_insn, *insn, *fake_jump = NULL;
+-	unsigned int alt_group = alt_group_next_index++;
++	struct instruction *last_orig_insn, *last_new_insn = NULL, *insn, *nop = NULL;
++	struct alt_group *orig_alt_group, *new_alt_group;
+ 	unsigned long dest_off;
+ 
++
++	orig_alt_group = malloc(sizeof(*orig_alt_group));
++	if (!orig_alt_group) {
++		WARN("malloc failed");
++		return -1;
++	}
++	orig_alt_group->cfi = calloc(special_alt->orig_len,
++				     sizeof(struct cfi_state *));
++	if (!orig_alt_group->cfi) {
++		WARN("calloc failed");
++		return -1;
++	}
++
+ 	last_orig_insn = NULL;
+ 	insn = orig_insn;
+ 	sec_for_each_insn_from(file, insn) {
+ 		if (insn->offset >= special_alt->orig_off + special_alt->orig_len)
+ 			break;
+ 
+-		insn->alt_group = alt_group;
++		insn->alt_group = orig_alt_group;
+ 		last_orig_insn = insn;
+ 	}
++	orig_alt_group->orig_group = NULL;
++	orig_alt_group->first_insn = orig_insn;
++	orig_alt_group->last_insn = last_orig_insn;
+ 
+-	if (next_insn_same_sec(file, last_orig_insn)) {
+-		fake_jump = malloc(sizeof(*fake_jump));
+-		if (!fake_jump) {
++
++	new_alt_group = malloc(sizeof(*new_alt_group));
++	if (!new_alt_group) {
++		WARN("malloc failed");
++		return -1;
++	}
++
++	if (special_alt->new_len < special_alt->orig_len) {
++		/*
++		 * Insert a fake nop at the end to make the replacement
++		 * alt_group the same size as the original.  This is needed to
++		 * allow propagate_alt_cfi() to do its magic.  When the last
++		 * instruction affects the stack, the instruction after it (the
++		 * nop) will propagate the new state to the shared CFI array.
++		 */
++		nop = malloc(sizeof(*nop));
++		if (!nop) {
+ 			WARN("malloc failed");
+ 			return -1;
+ 		}
+-		memset(fake_jump, 0, sizeof(*fake_jump));
+-		INIT_LIST_HEAD(&fake_jump->alts);
+-		INIT_LIST_HEAD(&fake_jump->stack_ops);
+-		init_cfi_state(&fake_jump->cfi);
++		memset(nop, 0, sizeof(*nop));
++		INIT_LIST_HEAD(&nop->alts);
++		INIT_LIST_HEAD(&nop->stack_ops);
+ 
+-		fake_jump->sec = special_alt->new_sec;
+-		fake_jump->offset = FAKE_JUMP_OFFSET;
+-		fake_jump->type = INSN_JUMP_UNCONDITIONAL;
+-		fake_jump->jump_dest = list_next_entry(last_orig_insn, list);
+-		fake_jump->func = orig_insn->func;
++		nop->sec = special_alt->new_sec;
++		nop->offset = special_alt->new_off + special_alt->new_len;
++		nop->len = special_alt->orig_len - special_alt->new_len;
++		nop->type = INSN_NOP;
++		nop->func = orig_insn->func;
++		nop->alt_group = new_alt_group;
++		nop->ignore = orig_insn->ignore_alts;
+ 	}
+ 
+ 	if (!special_alt->new_len) {
+-		if (!fake_jump) {
+-			WARN("%s: empty alternative at end of section",
+-			     special_alt->orig_sec->name);
+-			return -1;
+-		}
+-
+-		*new_insn = fake_jump;
+-		return 0;
++		*new_insn = nop;
++		goto end;
+ 	}
+ 
+-	last_new_insn = NULL;
+-	alt_group = alt_group_next_index++;
+ 	insn = *new_insn;
+ 	sec_for_each_insn_from(file, insn) {
+ 		struct reloc *alt_reloc;
+@@ -1069,7 +1361,7 @@ static int handle_group_alt(struct objtool_file *file,
+ 
+ 		insn->ignore = orig_insn->ignore_alts;
+ 		insn->func = orig_insn->func;
+-		insn->alt_group = alt_group;
++		insn->alt_group = new_alt_group;
+ 
+ 		/*
+ 		 * Since alternative replacement code is copy/pasted by the
+@@ -1079,8 +1371,7 @@ static int handle_group_alt(struct objtool_file *file,
+ 		 * alternatives code can adjust the relative offsets
+ 		 * accordingly.
+ 		 */
+-		alt_reloc = find_reloc_by_dest_range(file->elf, insn->sec,
+-						   insn->offset, insn->len);
++		alt_reloc = insn_reloc(file, insn);
+ 		if (alt_reloc &&
+ 		    !arch_support_alt_relocation(special_alt, insn, alt_reloc)) {
+ 
+@@ -1096,14 +1387,8 @@ static int handle_group_alt(struct objtool_file *file,
+ 			continue;
+ 
+ 		dest_off = arch_jump_destination(insn);
+-		if (dest_off == special_alt->new_off + special_alt->new_len) {
+-			if (!fake_jump) {
+-				WARN("%s: alternative jump to end of section",
+-				     special_alt->orig_sec->name);
+-				return -1;
+-			}
+-			insn->jump_dest = fake_jump;
+-		}
++		if (dest_off == special_alt->new_off + special_alt->new_len)
++			insn->jump_dest = next_insn_same_sec(file, last_orig_insn);
+ 
+ 		if (!insn->jump_dest) {
+ 			WARN_FUNC("can't find alternative jump destination",
+@@ -1118,9 +1403,13 @@ static int handle_group_alt(struct objtool_file *file,
+ 		return -1;
+ 	}
+ 
+-	if (fake_jump)
+-		list_add(&fake_jump->list, &last_new_insn->list);
+-
++	if (nop)
++		list_add(&nop->list, &last_new_insn->list);
++end:
++	new_alt_group->orig_group = orig_alt_group;
++	new_alt_group->first_insn = *new_insn;
++	new_alt_group->last_insn = nop ? : last_new_insn;
++	new_alt_group->cfi = orig_alt_group->cfi;
+ 	return 0;
+ }
+ 
+@@ -1412,13 +1701,21 @@ static int add_jump_table_alts(struct objtool_file *file)
+ 	return 0;
+ }
+ 
++static void set_func_state(struct cfi_state *state)
++{
++	state->cfa = initial_func_cfi.cfa;
++	memcpy(&state->regs, &initial_func_cfi.regs,
++	       CFI_NUM_REGS * sizeof(struct cfi_reg));
++	state->stack_size = initial_func_cfi.cfa.offset;
++}
++
+ static int read_unwind_hints(struct objtool_file *file)
+ {
++	struct cfi_state cfi = init_cfi;
+ 	struct section *sec, *relocsec;
+-	struct reloc *reloc;
+ 	struct unwind_hint *hint;
+ 	struct instruction *insn;
+-	struct cfi_reg *cfa;
++	struct reloc *reloc;
+ 	int i;
+ 
+ 	sec = find_section_by_name(file->elf, ".discard.unwind_hints");
+@@ -1453,24 +1750,51 @@ static int read_unwind_hints(struct objtool_file *file)
+ 			return -1;
+ 		}
+ 
+-		cfa = &insn->cfi.cfa;
++		insn->hint = true;
++
++		if (hint->type == UNWIND_HINT_TYPE_SAVE) {
++			insn->hint = false;
++			insn->save = true;
++			continue;
++		}
++
++		if (hint->type == UNWIND_HINT_TYPE_RESTORE) {
++			insn->restore = true;
++			continue;
++		}
+ 
+-		if (hint->type == UNWIND_HINT_TYPE_RET_OFFSET) {
+-			insn->ret_offset = hint->sp_offset;
++		if (hint->type == UNWIND_HINT_TYPE_REGS_PARTIAL) {
++			struct symbol *sym = find_symbol_by_offset(insn->sec, insn->offset);
++
++			if (sym && sym->bind == STB_GLOBAL) {
++				insn->entry = 1;
++			}
++		}
++
++		if (hint->type == UNWIND_HINT_TYPE_ENTRY) {
++			hint->type = UNWIND_HINT_TYPE_CALL;
++			insn->entry = 1;
++		}
++
++		if (hint->type == UNWIND_HINT_TYPE_FUNC) {
++			insn->cfi = &func_cfi;
+ 			continue;
+ 		}
+ 
+-		insn->hint = true;
++		if (insn->cfi)
++			cfi = *(insn->cfi);
+ 
+-		if (arch_decode_hint_reg(insn, hint->sp_reg)) {
++		if (arch_decode_hint_reg(hint->sp_reg, &cfi.cfa.base)) {
+ 			WARN_FUNC("unsupported unwind_hint sp base reg %d",
+ 				  insn->sec, insn->offset, hint->sp_reg);
+ 			return -1;
+ 		}
+ 
+-		cfa->offset = hint->sp_offset;
+-		insn->cfi.type = hint->type;
+-		insn->cfi.end = hint->end;
++		cfi.cfa.offset = hint->sp_offset;
++		cfi.type = hint->type;
++		cfi.end = hint->end;
++
++		insn->cfi = cfi_hash_find_or_add(&cfi);
+ 	}
+ 
+ 	return 0;
+@@ -1499,8 +1823,10 @@ static int read_retpoline_hints(struct objtool_file *file)
+ 		}
+ 
+ 		if (insn->type != INSN_JUMP_DYNAMIC &&
+-		    insn->type != INSN_CALL_DYNAMIC) {
+-			WARN_FUNC("retpoline_safe hint not an indirect jump/call",
++		    insn->type != INSN_CALL_DYNAMIC &&
++		    insn->type != INSN_RETURN &&
++		    insn->type != INSN_NOP) {
++			WARN_FUNC("retpoline_safe hint not an indirect jump/call/ret/nop",
+ 				  insn->sec, insn->offset);
+ 			return -1;
+ 		}
+@@ -1609,17 +1935,31 @@ static int read_intra_function_calls(struct objtool_file *file)
+ 	return 0;
+ }
+ 
+-static int read_static_call_tramps(struct objtool_file *file)
++static int classify_symbols(struct objtool_file *file)
+ {
+ 	struct section *sec;
+ 	struct symbol *func;
+ 
+ 	for_each_sec(file, sec) {
+ 		list_for_each_entry(func, &sec->symbol_list, list) {
+-			if (func->bind == STB_GLOBAL &&
+-			    !strncmp(func->name, STATIC_CALL_TRAMP_PREFIX_STR,
++			if (func->bind != STB_GLOBAL)
++				continue;
++
++			if (!strncmp(func->name, STATIC_CALL_TRAMP_PREFIX_STR,
+ 				     strlen(STATIC_CALL_TRAMP_PREFIX_STR)))
+ 				func->static_call_tramp = true;
++
++			if (arch_is_retpoline(func))
++				func->retpoline_thunk = true;
++
++			if (arch_is_rethunk(func))
++				func->return_thunk = true;
++
++			if (!strcmp(func->name, "__fentry__"))
++				func->fentry = true;
++
++			if (!strncmp(func->name, "__sanitizer_cov_", 16))
++				func->kcov = true;
+ 		}
+ 	}
+ 
+@@ -1676,10 +2016,14 @@ static int decode_sections(struct objtool_file *file)
+ 	/*
+ 	 * Must be before add_{jump_call}_destination.
+ 	 */
+-	ret = read_static_call_tramps(file);
++	ret = classify_symbols(file);
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * Must be before add_special_section_alts() as that depends on
++	 * jump_dest being set.
++	 */
+ 	ret = add_jump_destinations(file);
+ 	if (ret)
+ 		return ret;
+@@ -1721,9 +2065,9 @@ static int decode_sections(struct objtool_file *file)
+ 
+ static bool is_fentry_call(struct instruction *insn)
+ {
+-	if (insn->type == INSN_CALL && insn->call_dest &&
+-	    insn->call_dest->type == STT_NOTYPE &&
+-	    !strcmp(insn->call_dest->name, "__fentry__"))
++	if (insn->type == INSN_CALL &&
++	    insn->call_dest &&
++	    insn->call_dest->fentry)
+ 		return true;
+ 
+ 	return false;
+@@ -1731,27 +2075,18 @@ static bool is_fentry_call(struct instruction *insn)
+ 
+ static bool has_modified_stack_frame(struct instruction *insn, struct insn_state *state)
+ {
+-	u8 ret_offset = insn->ret_offset;
+ 	struct cfi_state *cfi = &state->cfi;
+ 	int i;
+ 
+ 	if (cfi->cfa.base != initial_func_cfi.cfa.base || cfi->drap)
+ 		return true;
+ 
+-	if (cfi->cfa.offset != initial_func_cfi.cfa.offset + ret_offset)
++	if (cfi->cfa.offset != initial_func_cfi.cfa.offset)
+ 		return true;
+ 
+-	if (cfi->stack_size != initial_func_cfi.cfa.offset + ret_offset)
++	if (cfi->stack_size != initial_func_cfi.cfa.offset)
+ 		return true;
+ 
+-	/*
+-	 * If there is a ret offset hint then don't check registers
+-	 * because a callee-saved register might have been pushed on
+-	 * the stack.
+-	 */
+-	if (ret_offset)
+-		return false;
+-
+ 	for (i = 0; i < CFI_NUM_REGS; i++) {
+ 		if (cfi->regs[i].base != initial_func_cfi.regs[i].base ||
+ 		    cfi->regs[i].offset != initial_func_cfi.regs[i].offset)
+@@ -2220,22 +2555,52 @@ static int update_cfi_state(struct instruction *insn, struct cfi_state *cfi,
+ 	return 0;
+ }
+ 
+-static int handle_insn_ops(struct instruction *insn, struct insn_state *state)
++/*
++ * The stack layouts of alternatives instructions can sometimes diverge when
++ * they have stack modifications.  That's fine as long as the potential stack
++ * layouts don't conflict at any given potential instruction boundary.
++ *
++ * Flatten the CFIs of the different alternative code streams (both original
++ * and replacement) into a single shared CFI array which can be used to detect
++ * conflicts and nicely feed a linear array of ORC entries to the unwinder.
++ */
++static int propagate_alt_cfi(struct objtool_file *file, struct instruction *insn)
+ {
+-	struct stack_op *op;
++	struct cfi_state **alt_cfi;
++	int group_off;
+ 
+-	list_for_each_entry(op, &insn->stack_ops, list) {
+-		struct cfi_state old_cfi = state->cfi;
+-		int res;
++	if (!insn->alt_group)
++		return 0;
++
++	if (!insn->cfi) {
++		WARN("CFI missing");
++		return -1;
++	}
+ 
+-		res = update_cfi_state(insn, &state->cfi, op);
+-		if (res)
+-			return res;
++	alt_cfi = insn->alt_group->cfi;
++	group_off = insn->offset - insn->alt_group->first_insn->offset;
+ 
+-		if (insn->alt_group && memcmp(&state->cfi, &old_cfi, sizeof(struct cfi_state))) {
+-			WARN_FUNC("alternative modifies stack", insn->sec, insn->offset);
++	if (!alt_cfi[group_off]) {
++		alt_cfi[group_off] = insn->cfi;
++	} else {
++		if (cficmp(alt_cfi[group_off], insn->cfi)) {
++			WARN_FUNC("stack layout conflict in alternatives",
++				  insn->sec, insn->offset);
+ 			return -1;
+ 		}
++	}
++
++	return 0;
++}
++
++static int handle_insn_ops(struct instruction *insn, struct insn_state *state)
++{
++	struct stack_op *op;
++
++	list_for_each_entry(op, &insn->stack_ops, list) {
++
++		if (update_cfi_state(insn, &state->cfi, op))
++			return 1;
+ 
+ 		if (op->dest.type == OP_DEST_PUSHF) {
+ 			if (!state->uaccess_stack) {
+@@ -2264,9 +2629,14 @@ static int handle_insn_ops(struct instruction *insn, struct insn_state *state)
+ 
+ static bool insn_cfi_match(struct instruction *insn, struct cfi_state *cfi2)
+ {
+-	struct cfi_state *cfi1 = &insn->cfi;
++	struct cfi_state *cfi1 = insn->cfi;
+ 	int i;
+ 
++	if (!cfi1) {
++		WARN("CFI missing");
++		return false;
++	}
++
+ 	if (memcmp(&cfi1->cfa, &cfi2->cfa, sizeof(cfi1->cfa))) {
+ 
+ 		WARN_FUNC("stack state mismatch: cfa1=%d%+d cfa2=%d%+d",
+@@ -2425,28 +2795,20 @@ static int validate_return(struct symbol *func, struct instruction *insn, struct
+ 	return 0;
+ }
+ 
+-/*
+- * Alternatives should not contain any ORC entries, this in turn means they
+- * should not contain any CFI ops, which implies all instructions should have
+- * the same same CFI state.
+- *
+- * It is possible to constuct alternatives that have unreachable holes that go
+- * unreported (because they're NOPs), such holes would result in CFI_UNDEFINED
+- * states which then results in ORC entries, which we just said we didn't want.
+- *
+- * Avoid them by copying the CFI entry of the first instruction into the whole
+- * alternative.
+- */
+-static void fill_alternative_cfi(struct objtool_file *file, struct instruction *insn)
++static struct instruction *next_insn_to_validate(struct objtool_file *file,
++						 struct instruction *insn)
+ {
+-	struct instruction *first_insn = insn;
+-	int alt_group = insn->alt_group;
++	struct alt_group *alt_group = insn->alt_group;
+ 
+-	sec_for_each_insn_continue(file, insn) {
+-		if (insn->alt_group != alt_group)
+-			break;
+-		insn->cfi = first_insn->cfi;
+-	}
++	/*
++	 * Simulate the fact that alternatives are patched in-place.  When the
++	 * end of a replacement alt_group is reached, redirect objtool flow to
++	 * the end of the original alt_group.
++	 */
++	if (alt_group && insn == alt_group->last_insn && alt_group->orig_group)
++		return next_insn_same_sec(file, alt_group->orig_group->last_insn);
++
++	return next_insn_same_sec(file, insn);
+ }
+ 
+ /*
+@@ -2459,7 +2821,7 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ 			   struct instruction *insn, struct insn_state state)
+ {
+ 	struct alternative *alt;
+-	struct instruction *next_insn;
++	struct instruction *next_insn, *prev_insn = NULL;
+ 	struct section *sec;
+ 	u8 visited;
+ 	int ret;
+@@ -2467,7 +2829,7 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ 	sec = insn->sec;
+ 
+ 	while (1) {
+-		next_insn = next_insn_same_sec(file, insn);
++		next_insn = next_insn_to_validate(file, insn);
+ 
+ 		if (file->c_file && func && insn->func && func != insn->func->pfunc) {
+ 			WARN("%s() falls through to next function %s()",
+@@ -2481,25 +2843,67 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ 			return 1;
+ 		}
+ 
+-		visited = 1 << state.uaccess;
+-		if (insn->visited) {
++		visited = VISITED_BRANCH << state.uaccess;
++		if (insn->visited & VISITED_BRANCH_MASK) {
+ 			if (!insn->hint && !insn_cfi_match(insn, &state.cfi))
+ 				return 1;
+ 
+ 			if (insn->visited & visited)
+ 				return 0;
++		} else {
++			nr_insns_visited++;
+ 		}
+ 
+ 		if (state.noinstr)
+ 			state.instr += insn->instr;
+ 
+-		if (insn->hint)
+-			state.cfi = insn->cfi;
+-		else
+-			insn->cfi = state.cfi;
++		if (insn->hint) {
++			if (insn->restore) {
++				struct instruction *save_insn, *i;
++
++				i = insn;
++				save_insn = NULL;
++
++				sym_for_each_insn_continue_reverse(file, func, i) {
++					if (i->save) {
++						save_insn = i;
++						break;
++					}
++				}
++
++				if (!save_insn) {
++					WARN_FUNC("no corresponding CFI save for CFI restore",
++						  sec, insn->offset);
++					return 1;
++				}
++
++				if (!save_insn->visited) {
++					WARN_FUNC("objtool isn't smart enough to handle this CFI save/restore combo",
++						  sec, insn->offset);
++					return 1;
++				}
++
++				insn->cfi = save_insn->cfi;
++				nr_cfi_reused++;
++			}
++
++			state.cfi = *insn->cfi;
++		} else {
++			/* XXX track if we actually changed state.cfi */
++
++			if (prev_insn && !cficmp(prev_insn->cfi, &state.cfi)) {
++				insn->cfi = prev_insn->cfi;
++				nr_cfi_reused++;
++			} else {
++				insn->cfi = cfi_hash_find_or_add(&state.cfi);
++			}
++		}
+ 
+ 		insn->visited |= visited;
+ 
++		if (propagate_alt_cfi(file, insn))
++			return 1;
++
+ 		if (!insn->ignore_alts && !list_empty(&insn->alts)) {
+ 			bool skip_orig = false;
+ 
+@@ -2515,9 +2919,6 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ 				}
+ 			}
+ 
+-			if (insn->alt_group)
+-				fill_alternative_cfi(file, insn);
+-
+ 			if (skip_orig)
+ 				return 0;
+ 		}
+@@ -2528,6 +2929,11 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ 		switch (insn->type) {
+ 
+ 		case INSN_RETURN:
++			if (sls && !insn->retpoline_safe &&
++			    next_insn && next_insn->type != INSN_TRAP) {
++				WARN_FUNC("missing int3 after ret",
++					  insn->sec, insn->offset);
++			}
+ 			return validate_return(func, insn, &state);
+ 
+ 		case INSN_CALL:
+@@ -2550,7 +2956,7 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ 
+ 		case INSN_JUMP_CONDITIONAL:
+ 		case INSN_JUMP_UNCONDITIONAL:
+-			if (func && is_sibling_call(insn)) {
++			if (is_sibling_call(insn)) {
+ 				ret = validate_sibling_call(insn, &state);
+ 				if (ret)
+ 					return ret;
+@@ -2571,8 +2977,15 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ 			break;
+ 
+ 		case INSN_JUMP_DYNAMIC:
++			if (sls && !insn->retpoline_safe &&
++			    next_insn && next_insn->type != INSN_TRAP) {
++				WARN_FUNC("missing int3 after indirect jump",
++					  insn->sec, insn->offset);
++			}
++
++			/* fallthrough */
+ 		case INSN_JUMP_DYNAMIC_CONDITIONAL:
+-			if (func && is_sibling_call(insn)) {
++			if (is_sibling_call(insn)) {
+ 				ret = validate_sibling_call(insn, &state);
+ 				if (ret)
+ 					return ret;
+@@ -2646,6 +3059,7 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ 			return 1;
+ 		}
+ 
++		prev_insn = insn;
+ 		insn = next_insn;
+ 	}
+ 
+@@ -2685,6 +3099,145 @@ static int validate_unwind_hints(struct objtool_file *file, struct section *sec)
+ 	return warnings;
+ }
+ 
++/*
++ * Validate rethunk entry constraint: must untrain RET before the first RET.
++ *
++ * Follow every branch (intra-function) and ensure ANNOTATE_UNRET_END comes
++ * before an actual RET instruction.
++ */
++static int validate_entry(struct objtool_file *file, struct instruction *insn)
++{
++	struct instruction *next, *dest;
++	int ret, warnings = 0;
++
++	for (;;) {
++		next = next_insn_to_validate(file, insn);
++
++		if (insn->visited & VISITED_ENTRY)
++			return 0;
++
++		insn->visited |= VISITED_ENTRY;
++
++		if (!insn->ignore_alts && !list_empty(&insn->alts)) {
++			struct alternative *alt;
++			bool skip_orig = false;
++
++			list_for_each_entry(alt, &insn->alts, list) {
++				if (alt->skip_orig)
++					skip_orig = true;
++
++				ret = validate_entry(file, alt->insn);
++				if (ret) {
++				        if (backtrace)
++						BT_FUNC("(alt)", insn);
++					return ret;
++				}
++			}
++
++			if (skip_orig)
++				return 0;
++		}
++
++		switch (insn->type) {
++
++		case INSN_CALL_DYNAMIC:
++		case INSN_JUMP_DYNAMIC:
++		case INSN_JUMP_DYNAMIC_CONDITIONAL:
++			WARN_FUNC("early indirect call", insn->sec, insn->offset);
++			return 1;
++
++		case INSN_JUMP_UNCONDITIONAL:
++		case INSN_JUMP_CONDITIONAL:
++			if (!is_sibling_call(insn)) {
++				if (!insn->jump_dest) {
++					WARN_FUNC("unresolved jump target after linking?!?",
++						  insn->sec, insn->offset);
++					return -1;
++				}
++				ret = validate_entry(file, insn->jump_dest);
++				if (ret) {
++					if (backtrace) {
++						BT_FUNC("(branch%s)", insn,
++							insn->type == INSN_JUMP_CONDITIONAL ? "-cond" : "");
++					}
++					return ret;
++				}
++
++				if (insn->type == INSN_JUMP_UNCONDITIONAL)
++					return 0;
++
++				break;
++			}
++
++			/* fallthrough */
++		case INSN_CALL:
++			dest = find_insn(file, insn->call_dest->sec,
++					 insn->call_dest->offset);
++			if (!dest) {
++				WARN("Unresolved function after linking!?: %s",
++				     insn->call_dest->name);
++				return -1;
++			}
++
++			ret = validate_entry(file, dest);
++			if (ret) {
++				if (backtrace)
++					BT_FUNC("(call)", insn);
++				return ret;
++			}
++			/*
++			 * If a call returns without error, it must have seen UNTRAIN_RET.
++			 * Therefore any non-error return is a success.
++			 */
++			return 0;
++
++		case INSN_RETURN:
++			WARN_FUNC("RET before UNTRAIN", insn->sec, insn->offset);
++			return 1;
++
++		case INSN_NOP:
++			if (insn->retpoline_safe)
++				return 0;
++			break;
++
++		default:
++			break;
++		}
++
++		if (!next) {
++			WARN_FUNC("teh end!", insn->sec, insn->offset);
++			return -1;
++		}
++		insn = next;
++	}
++
++	return warnings;
++}
++
++/*
++ * Validate that all branches starting at 'insn->entry' encounter UNRET_END
++ * before RET.
++ */
++static int validate_unret(struct objtool_file *file)
++{
++	struct instruction *insn;
++	int ret, warnings = 0;
++
++	for_each_insn(file, insn) {
++		if (!insn->entry)
++			continue;
++
++		ret = validate_entry(file, insn);
++		if (ret < 0) {
++			WARN_FUNC("Failed UNRET validation", insn->sec, insn->offset);
++			return ret;
++		}
++		warnings += ret;
++	}
++
++	return warnings;
++}
++
+ static int validate_retpoline(struct objtool_file *file)
+ {
+ 	struct instruction *insn;
+@@ -2692,7 +3245,8 @@ static int validate_retpoline(struct objtool_file *file)
+ 
+ 	for_each_insn(file, insn) {
+ 		if (insn->type != INSN_JUMP_DYNAMIC &&
+-		    insn->type != INSN_CALL_DYNAMIC)
++		    insn->type != INSN_CALL_DYNAMIC &&
++		    insn->type != INSN_RETURN)
+ 			continue;
+ 
+ 		if (insn->retpoline_safe)
+@@ -2707,9 +3261,17 @@ static int validate_retpoline(struct objtool_file *file)
+ 		if (!strcmp(insn->sec->name, ".init.text") && !module)
+ 			continue;
+ 
+-		WARN_FUNC("indirect %s found in RETPOLINE build",
+-			  insn->sec, insn->offset,
+-			  insn->type == INSN_JUMP_DYNAMIC ? "jump" : "call");
++		if (insn->type == INSN_RETURN) {
++			if (rethunk) {
++				WARN_FUNC("'naked' return found in RETHUNK build",
++					  insn->sec, insn->offset);
++			} else
++				continue;
++		} else {
++			WARN_FUNC("indirect %s found in RETPOLINE build",
++				  insn->sec, insn->offset,
++				  insn->type == INSN_JUMP_DYNAMIC ? "jump" : "call");
++		}
+ 
+ 		warnings++;
+ 	}
+@@ -2735,7 +3297,7 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio
+ 	int i;
+ 	struct instruction *prev_insn;
+ 
+-	if (insn->ignore || insn->type == INSN_NOP)
++	if (insn->ignore || insn->type == INSN_NOP || insn->type == INSN_TRAP)
+ 		return true;
+ 
+ 	/*
+@@ -2750,9 +3312,6 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio
+ 	    !strcmp(insn->sec->name, ".altinstr_aux"))
+ 		return true;
+ 
+-	if (insn->type == INSN_JUMP_UNCONDITIONAL && insn->offset == FAKE_JUMP_OFFSET)
+-		return true;
+-
+ 	if (!insn->func)
+ 		return false;
+ 
+@@ -2838,10 +3397,7 @@ static int validate_section(struct objtool_file *file, struct section *sec)
+ 			continue;
+ 
+ 		init_insn_state(&state, sec);
+-		state.cfi.cfa = initial_func_cfi.cfa;
+-		memcpy(&state.cfi.regs, &initial_func_cfi.regs,
+-		       CFI_NUM_REGS * sizeof(struct cfi_reg));
+-		state.cfi.stack_size = initial_func_cfi.cfa.offset;
++		set_func_state(&state.cfi);
+ 
+ 		warnings += validate_symbol(file, sec, func, &state);
+ 	}
+@@ -2907,10 +3463,20 @@ int check(struct objtool_file *file)
+ 	int ret, warnings = 0;
+ 
+ 	arch_initial_func_cfi_state(&initial_func_cfi);
++	init_cfi_state(&init_cfi);
++	init_cfi_state(&func_cfi);
++	set_func_state(&func_cfi);
++
++	if (!cfi_hash_alloc())
++		goto out;
++
++	cfi_hash_add(&init_cfi);
++	cfi_hash_add(&func_cfi);
+ 
+ 	ret = decode_sections(file);
+ 	if (ret < 0)
+ 		goto out;
++
+ 	warnings += ret;
+ 
+ 	if (list_empty(&file->insn_list))
+@@ -2942,6 +3508,17 @@ int check(struct objtool_file *file)
+ 		goto out;
+ 	warnings += ret;
+ 
++	if (unret) {
++		/*
++		 * Must be after validate_branch() and friends, it plays
++		 * further games with insn->visited.
++		 */
++		ret = validate_unret(file);
++		if (ret < 0)
++			return ret;
++		warnings += ret;
++	}
++
+ 	if (!warnings) {
+ 		ret = validate_reachable_instructions(file);
+ 		if (ret < 0)
+@@ -2954,6 +3531,27 @@ int check(struct objtool_file *file)
+ 		goto out;
+ 	warnings += ret;
+ 
++	if (retpoline) {
++		ret = create_retpoline_sites_sections(file);
++		if (ret < 0)
++			goto out;
++		warnings += ret;
++	}
++
++	if (rethunk) {
++		ret = create_return_sites_sections(file);
++		if (ret < 0)
++			goto out;
++		warnings += ret;
++	}
++
++	if (stats) {
++		printf("nr_insns_visited: %ld\n", nr_insns_visited);
++		printf("nr_cfi: %ld\n", nr_cfi);
++		printf("nr_cfi_reused: %ld\n", nr_cfi_reused);
++		printf("nr_cfi_cache: %ld\n", nr_cfi_cache);
++	}
++
+ out:
+ 	/*
+ 	 *  For now, don't fail the kernel build on fatal warnings.  These
+diff --git a/tools/objtool/check.h b/tools/objtool/check.h
+index 2804848e628e3..7f34a7f9ca523 100644
+--- a/tools/objtool/check.h
++++ b/tools/objtool/check.h
+@@ -19,10 +19,27 @@ struct insn_state {
+ 	s8 instr;
+ };
+ 
++struct alt_group {
++	/*
++	 * Pointer from a replacement group to the original group.  NULL if it
++	 * *is* the original group.
++	 */
++	struct alt_group *orig_group;
++
++	/* First and last instructions in the group */
++	struct instruction *first_insn, *last_insn;
++
++	/*
++	 * Byte-offset-addressed len-sized array of pointers to CFI structs.
++	 * This is shared with the other alt_groups in the same alternative.
++	 */
++	struct cfi_state **cfi;
++};
++
+ struct instruction {
+ 	struct list_head list;
+ 	struct hlist_node hash;
+-	struct list_head static_call_node;
++	struct list_head call_node;
+ 	struct section *sec;
+ 	unsigned long offset;
+ 	unsigned int len;
+@@ -30,24 +47,28 @@ struct instruction {
+ 	unsigned long immediate;
+ 	bool dead_end, ignore, ignore_alts;
+ 	bool hint;
++	bool save, restore;
+ 	bool retpoline_safe;
++	bool entry;
+ 	s8 instr;
+ 	u8 visited;
+-	u8 ret_offset;
+-	int alt_group;
++	struct alt_group *alt_group;
+ 	struct symbol *call_dest;
+ 	struct instruction *jump_dest;
+ 	struct instruction *first_jump_src;
+ 	struct reloc *jump_table;
++	struct reloc *reloc;
+ 	struct list_head alts;
+ 	struct symbol *func;
+ 	struct list_head stack_ops;
+-	struct cfi_state cfi;
+-#ifdef INSN_USE_ORC
+-	struct orc_entry orc;
+-#endif
++	struct cfi_state *cfi;
+ };
+ 
++#define VISITED_BRANCH		0x01
++#define VISITED_BRANCH_UACCESS	0x02
++#define VISITED_BRANCH_MASK	0x03
++#define VISITED_ENTRY		0x04
++
+ static inline bool is_static_jump(struct instruction *insn)
+ {
+ 	return insn->type == INSN_JUMP_CONDITIONAL ||
+diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
+index d8421e1d06bed..5aa3b4e76479e 100644
+--- a/tools/objtool/elf.c
++++ b/tools/objtool/elf.c
+@@ -262,32 +262,6 @@ struct reloc *find_reloc_by_dest(const struct elf *elf, struct section *sec, uns
+ 	return find_reloc_by_dest_range(elf, sec, offset, 1);
+ }
+ 
+-void insn_to_reloc_sym_addend(struct section *sec, unsigned long offset,
+-			      struct reloc *reloc)
+-{
+-	if (sec->sym) {
+-		reloc->sym = sec->sym;
+-		reloc->addend = offset;
+-		return;
+-	}
+-
+-	/*
+-	 * The Clang assembler strips section symbols, so we have to reference
+-	 * the function symbol instead:
+-	 */
+-	reloc->sym = find_symbol_containing(sec, offset);
+-	if (!reloc->sym) {
+-		/*
+-		 * Hack alert.  This happens when we need to reference the NOP
+-		 * pad insn immediately after the function.
+-		 */
+-		reloc->sym = find_symbol_containing(sec, offset - 1);
+-	}
+-
+-	if (reloc->sym)
+-		reloc->addend = offset - reloc->sym->offset;
+-}
+-
+ static int read_sections(struct elf *elf)
+ {
+ 	Elf_Scn *s = NULL;
+@@ -367,12 +341,41 @@ static int read_sections(struct elf *elf)
+ 	return 0;
+ }
+ 
++static void elf_add_symbol(struct elf *elf, struct symbol *sym)
++{
++	struct list_head *entry;
++	struct rb_node *pnode;
++
++	sym->alias = sym;
++
++	sym->type = GELF_ST_TYPE(sym->sym.st_info);
++	sym->bind = GELF_ST_BIND(sym->sym.st_info);
++
++	sym->offset = sym->sym.st_value;
++	sym->len = sym->sym.st_size;
++
++	rb_add(&sym->sec->symbol_tree, &sym->node, symbol_to_offset);
++	pnode = rb_prev(&sym->node);
++	if (pnode)
++		entry = &rb_entry(pnode, struct symbol, node)->list;
++	else
++		entry = &sym->sec->symbol_list;
++	list_add(&sym->list, entry);
++	elf_hash_add(elf->symbol_hash, &sym->hash, sym->idx);
++	elf_hash_add(elf->symbol_name_hash, &sym->name_hash, str_hash(sym->name));
++
++	/*
++	 * Don't store empty STT_NOTYPE symbols in the rbtree.  They
++	 * can exist within a function, confusing the sorting.
++	 */
++	if (!sym->len)
++		rb_erase(&sym->node, &sym->sec->symbol_tree);
++}
++
+ static int read_symbols(struct elf *elf)
+ {
+ 	struct section *symtab, *symtab_shndx, *sec;
+ 	struct symbol *sym, *pfunc;
+-	struct list_head *entry;
+-	struct rb_node *pnode;
+ 	int symbols_nr, i;
+ 	char *coldstr;
+ 	Elf_Data *shndx_data = NULL;
+@@ -400,7 +403,6 @@ static int read_symbols(struct elf *elf)
+ 			return -1;
+ 		}
+ 		memset(sym, 0, sizeof(*sym));
+-		sym->alias = sym;
+ 
+ 		sym->idx = i;
+ 
+@@ -417,9 +419,6 @@ static int read_symbols(struct elf *elf)
+ 			goto err;
+ 		}
+ 
+-		sym->type = GELF_ST_TYPE(sym->sym.st_info);
+-		sym->bind = GELF_ST_BIND(sym->sym.st_info);
+-
+ 		if ((sym->sym.st_shndx > SHN_UNDEF &&
+ 		     sym->sym.st_shndx < SHN_LORESERVE) ||
+ 		    (shndx_data && sym->sym.st_shndx == SHN_XINDEX)) {
+@@ -432,32 +431,14 @@ static int read_symbols(struct elf *elf)
+ 				     sym->name);
+ 				goto err;
+ 			}
+-			if (sym->type == STT_SECTION) {
++			if (GELF_ST_TYPE(sym->sym.st_info) == STT_SECTION) {
+ 				sym->name = sym->sec->name;
+ 				sym->sec->sym = sym;
+ 			}
+ 		} else
+ 			sym->sec = find_section_by_index(elf, 0);
+ 
+-		sym->offset = sym->sym.st_value;
+-		sym->len = sym->sym.st_size;
+-
+-		rb_add(&sym->sec->symbol_tree, &sym->node, symbol_to_offset);
+-		pnode = rb_prev(&sym->node);
+-		if (pnode)
+-			entry = &rb_entry(pnode, struct symbol, node)->list;
+-		else
+-			entry = &sym->sec->symbol_list;
+-		list_add(&sym->list, entry);
+-		elf_hash_add(elf->symbol_hash, &sym->hash, sym->idx);
+-		elf_hash_add(elf->symbol_name_hash, &sym->name_hash, str_hash(sym->name));
+-
+-		/*
+-		 * Don't store empty STT_NOTYPE symbols in the rbtree.  They
+-		 * can exist within a function, confusing the sorting.
+-		 */
+-		if (!sym->len)
+-			rb_erase(&sym->node, &sym->sec->symbol_tree);
++		elf_add_symbol(elf, sym);
+ 	}
+ 
+ 	if (stats)
+@@ -524,12 +505,275 @@ err:
+ 	return -1;
+ }
+ 
+-void elf_add_reloc(struct elf *elf, struct reloc *reloc)
++static struct section *elf_create_reloc_section(struct elf *elf,
++						struct section *base,
++						int reltype);
++
++int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset,
++		  unsigned int type, struct symbol *sym, s64 addend)
+ {
+-	struct section *sec = reloc->sec;
++	struct reloc *reloc;
++
++	if (!sec->reloc && !elf_create_reloc_section(elf, sec, SHT_RELA))
++		return -1;
+ 
+-	list_add_tail(&reloc->list, &sec->reloc_list);
++	reloc = malloc(sizeof(*reloc));
++	if (!reloc) {
++		perror("malloc");
++		return -1;
++	}
++	memset(reloc, 0, sizeof(*reloc));
++
++	reloc->sec = sec->reloc;
++	reloc->offset = offset;
++	reloc->type = type;
++	reloc->sym = sym;
++	reloc->addend = addend;
++
++	list_add_tail(&reloc->list, &sec->reloc->reloc_list);
+ 	elf_hash_add(elf->reloc_hash, &reloc->hash, reloc_hash(reloc));
++
++	sec->reloc->changed = true;
++
++	return 0;
++}
++
++/*
++ * Ensure that any reloc section containing references to @sym is marked
++ * changed such that it will get re-generated in elf_rebuild_reloc_sections()
++ * with the new symbol index.
++ */
++static void elf_dirty_reloc_sym(struct elf *elf, struct symbol *sym)
++{
++	struct section *sec;
++
++	list_for_each_entry(sec, &elf->sections, list) {
++		struct reloc *reloc;
++
++		if (sec->changed)
++			continue;
++
++		list_for_each_entry(reloc, &sec->reloc_list, list) {
++			if (reloc->sym == sym) {
++				sec->changed = true;
++				break;
++			}
++		}
++	}
++}
++
++/*
++ * The libelf API is terrible; gelf_update_sym*() takes a data block relative
++ * index value, *NOT* the symbol index. As such, iterate the data blocks and
++ * adjust index until it fits.
++ *
++ * If no data block is found, allow adding a new data block provided the index
++ * is only one past the end.
++ */
++static int elf_update_symbol(struct elf *elf, struct section *symtab,
++			     struct section *symtab_shndx, struct symbol *sym)
++{
++	Elf32_Word shndx = sym->sec ? sym->sec->idx : SHN_UNDEF;
++	Elf_Data *symtab_data = NULL, *shndx_data = NULL;
++	Elf64_Xword entsize = symtab->sh.sh_entsize;
++	int max_idx, idx = sym->idx;
++	Elf_Scn *s, *t = NULL;
++
++	s = elf_getscn(elf->elf, symtab->idx);
++	if (!s) {
++		WARN_ELF("elf_getscn");
++		return -1;
++	}
++
++	if (symtab_shndx) {
++		t = elf_getscn(elf->elf, symtab_shndx->idx);
++		if (!t) {
++			WARN_ELF("elf_getscn");
++			return -1;
++		}
++	}
++
++	for (;;) {
++		/* get next data descriptor for the relevant sections */
++		symtab_data = elf_getdata(s, symtab_data);
++		if (t)
++			shndx_data = elf_getdata(t, shndx_data);
++
++		/* end-of-list */
++		if (!symtab_data) {
++			void *buf;
++
++			if (idx) {
++				/* we don't do holes in symbol tables */
++				WARN("index out of range");
++				return -1;
++			}
++
++			/* if @idx == 0, it's the next contiguous entry, create it */
++			symtab_data = elf_newdata(s);
++			if (t)
++				shndx_data = elf_newdata(t);
++
++			buf = calloc(1, entsize);
++			if (!buf) {
++				WARN("malloc");
++				return -1;
++			}
++
++			symtab_data->d_buf = buf;
++			symtab_data->d_size = entsize;
++			symtab_data->d_align = 1;
++			symtab_data->d_type = ELF_T_SYM;
++
++			symtab->sh.sh_size += entsize;
++			symtab->changed = true;
++
++			if (t) {
++				shndx_data->d_buf = &sym->sec->idx;
++				shndx_data->d_size = sizeof(Elf32_Word);
++				shndx_data->d_align = sizeof(Elf32_Word);
++				shndx_data->d_type = ELF_T_WORD;
++
++				symtab_shndx->sh.sh_size += sizeof(Elf32_Word);
++				symtab_shndx->changed = true;
++			}
++
++			break;
++		}
++
++		/* empty blocks should not happen */
++		if (!symtab_data->d_size) {
++			WARN("zero size data");
++			return -1;
++		}
++
++		/* is this the right block? */
++		max_idx = symtab_data->d_size / entsize;
++		if (idx < max_idx)
++			break;
++
++		/* adjust index and try again */
++		idx -= max_idx;
++	}
++
++	/* something went side-ways */
++	if (idx < 0) {
++		WARN("negative index");
++		return -1;
++	}
++
++	/* setup extended section index magic and write the symbol */
++	if (shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) {
++		sym->sym.st_shndx = shndx;
++		if (!shndx_data)
++			shndx = 0;
++	} else {
++		sym->sym.st_shndx = SHN_XINDEX;
++		if (!shndx_data) {
++			WARN("no .symtab_shndx");
++			return -1;
++		}
++	}
++
++	if (!gelf_update_symshndx(symtab_data, shndx_data, idx, &sym->sym, shndx)) {
++		WARN_ELF("gelf_update_symshndx");
++		return -1;
++	}
++
++	return 0;
++}
++
++static struct symbol *
++elf_create_section_symbol(struct elf *elf, struct section *sec)
++{
++	struct section *symtab, *symtab_shndx;
++	Elf32_Word first_non_local, new_idx;
++	struct symbol *sym, *old;
++
++	symtab = find_section_by_name(elf, ".symtab");
++	if (symtab) {
++		symtab_shndx = find_section_by_name(elf, ".symtab_shndx");
++	} else {
++		WARN("no .symtab");
++		return NULL;
++	}
++
++	sym = calloc(1, sizeof(*sym));
++	if (!sym) {
++		perror("malloc");
++		return NULL;
++	}
++
++	sym->name = sec->name;
++	sym->sec = sec;
++
++	// st_name 0
++	sym->sym.st_info = GELF_ST_INFO(STB_LOCAL, STT_SECTION);
++	// st_other 0
++	// st_value 0
++	// st_size 0
++
++	/*
++	 * Move the first global symbol, as per sh_info, into a new, higher
++	 * symbol index. This fees up a spot for a new local symbol.
++	 */
++	first_non_local = symtab->sh.sh_info;
++	new_idx = symtab->sh.sh_size / symtab->sh.sh_entsize;
++	old = find_symbol_by_index(elf, first_non_local);
++	if (old) {
++		old->idx = new_idx;
++
++		hlist_del(&old->hash);
++		elf_hash_add(elf->symbol_hash, &old->hash, old->idx);
++
++		elf_dirty_reloc_sym(elf, old);
++
++		if (elf_update_symbol(elf, symtab, symtab_shndx, old)) {
++			WARN("elf_update_symbol move");
++			return NULL;
++		}
++
++		new_idx = first_non_local;
++	}
++
++	sym->idx = new_idx;
++	if (elf_update_symbol(elf, symtab, symtab_shndx, sym)) {
++		WARN("elf_update_symbol");
++		return NULL;
++	}
++
++	/*
++	 * Either way, we added a LOCAL symbol.
++	 */
++	symtab->sh.sh_info += 1;
++
++	elf_add_symbol(elf, sym);
++
++	return sym;
++}
++
++int elf_add_reloc_to_insn(struct elf *elf, struct section *sec,
++			  unsigned long offset, unsigned int type,
++			  struct section *insn_sec, unsigned long insn_off)
++{
++	struct symbol *sym = insn_sec->sym;
++	int addend = insn_off;
++
++	if (!sym) {
++		/*
++		 * Due to how weak functions work, we must use section based
++		 * relocations. Symbol based relocations would result in the
++		 * weak and non-weak function annotations being overlaid on the
++		 * non-weak function after linking.
++		 */
++		sym = elf_create_section_symbol(elf, insn_sec);
++		if (!sym)
++			return -1;
++
++		insn_sec->sym = sym;
++	}
++
++	return elf_add_reloc(elf, sec, offset, type, sym, addend);
+ }
+ 
+ static int read_rel_reloc(struct section *sec, int i, struct reloc *reloc, unsigned int *symndx)
+@@ -609,7 +853,9 @@ static int read_relocs(struct elf *elf)
+ 				return -1;
+ 			}
+ 
+-			elf_add_reloc(elf, reloc);
++			list_add_tail(&reloc->list, &sec->reloc_list);
++			elf_hash_add(elf->reloc_hash, &reloc->hash, reloc_hash(reloc));
++
+ 			nr_reloc++;
+ 		}
+ 		max_reloc = max(max_reloc, nr_reloc);
+@@ -687,13 +933,49 @@ err:
+ 	return NULL;
+ }
+ 
++static int elf_add_string(struct elf *elf, struct section *strtab, char *str)
++{
++	Elf_Data *data;
++	Elf_Scn *s;
++	int len;
++
++	if (!strtab)
++		strtab = find_section_by_name(elf, ".strtab");
++	if (!strtab) {
++		WARN("can't find .strtab section");
++		return -1;
++	}
++
++	s = elf_getscn(elf->elf, strtab->idx);
++	if (!s) {
++		WARN_ELF("elf_getscn");
++		return -1;
++	}
++
++	data = elf_newdata(s);
++	if (!data) {
++		WARN_ELF("elf_newdata");
++		return -1;
++	}
++
++	data->d_buf = str;
++	data->d_size = strlen(str) + 1;
++	data->d_align = 1;
++	data->d_type = ELF_T_SYM;
++
++	len = strtab->len;
++	strtab->len += data->d_size;
++	strtab->changed = true;
++
++	return len;
++}
++
+ struct section *elf_create_section(struct elf *elf, const char *name,
+ 				   unsigned int sh_flags, size_t entsize, int nr)
+ {
+ 	struct section *sec, *shstrtab;
+ 	size_t size = entsize * nr;
+ 	Elf_Scn *s;
+-	Elf_Data *data;
+ 
+ 	sec = malloc(sizeof(*sec));
+ 	if (!sec) {
+@@ -750,7 +1032,6 @@ struct section *elf_create_section(struct elf *elf, const char *name,
+ 	sec->sh.sh_addralign = 1;
+ 	sec->sh.sh_flags = SHF_ALLOC | sh_flags;
+ 
+-
+ 	/* Add section name to .shstrtab (or .strtab for Clang) */
+ 	shstrtab = find_section_by_name(elf, ".shstrtab");
+ 	if (!shstrtab)
+@@ -759,27 +1040,9 @@ struct section *elf_create_section(struct elf *elf, const char *name,
+ 		WARN("can't find .shstrtab or .strtab section");
+ 		return NULL;
+ 	}
+-
+-	s = elf_getscn(elf->elf, shstrtab->idx);
+-	if (!s) {
+-		WARN_ELF("elf_getscn");
+-		return NULL;
+-	}
+-
+-	data = elf_newdata(s);
+-	if (!data) {
+-		WARN_ELF("elf_newdata");
++	sec->sh.sh_name = elf_add_string(elf, shstrtab, sec->name);
++	if (sec->sh.sh_name == -1)
+ 		return NULL;
+-	}
+-
+-	data->d_buf = sec->name;
+-	data->d_size = strlen(name) + 1;
+-	data->d_align = 1;
+-
+-	sec->sh.sh_name = shstrtab->len;
+-
+-	shstrtab->len += strlen(name) + 1;
+-	shstrtab->changed = true;
+ 
+ 	list_add_tail(&sec->list, &elf->sections);
+ 	elf_hash_add(elf->section_hash, &sec->hash, sec->idx);
+@@ -850,7 +1113,7 @@ static struct section *elf_create_rela_reloc_section(struct elf *elf, struct sec
+ 	return sec;
+ }
+ 
+-struct section *elf_create_reloc_section(struct elf *elf,
++static struct section *elf_create_reloc_section(struct elf *elf,
+ 					 struct section *base,
+ 					 int reltype)
+ {
+@@ -920,14 +1183,11 @@ static int elf_rebuild_rela_reloc_section(struct section *sec, int nr)
+ 	return 0;
+ }
+ 
+-int elf_rebuild_reloc_section(struct elf *elf, struct section *sec)
++static int elf_rebuild_reloc_section(struct elf *elf, struct section *sec)
+ {
+ 	struct reloc *reloc;
+ 	int nr;
+ 
+-	sec->changed = true;
+-	elf->changed = true;
+-
+ 	nr = 0;
+ 	list_for_each_entry(reloc, &sec->reloc_list, list)
+ 		nr++;
+@@ -991,9 +1251,15 @@ int elf_write(struct elf *elf)
+ 	struct section *sec;
+ 	Elf_Scn *s;
+ 
+-	/* Update section headers for changed sections: */
++	/* Update changed relocation sections and section headers: */
+ 	list_for_each_entry(sec, &elf->sections, list) {
+ 		if (sec->changed) {
++			if (sec->base &&
++			    elf_rebuild_reloc_section(elf, sec)) {
++				WARN("elf_rebuild_reloc_section");
++				return -1;
++			}
++
+ 			s = elf_getscn(elf->elf, sec->idx);
+ 			if (!s) {
+ 				WARN_ELF("elf_getscn");
+@@ -1005,6 +1271,7 @@ int elf_write(struct elf *elf)
+ 			}
+ 
+ 			sec->changed = false;
++			elf->changed = true;
+ 		}
+ 	}
+ 
+diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h
+index e6890cc70a25b..a1863eb35fbbc 100644
+--- a/tools/objtool/elf.h
++++ b/tools/objtool/elf.h
+@@ -55,8 +55,12 @@ struct symbol {
+ 	unsigned long offset;
+ 	unsigned int len;
+ 	struct symbol *pfunc, *cfunc, *alias;
+-	bool uaccess_safe;
+-	bool static_call_tramp;
++	u8 uaccess_safe      : 1;
++	u8 static_call_tramp : 1;
++	u8 retpoline_thunk   : 1;
++	u8 return_thunk      : 1;
++	u8 fentry            : 1;
++	u8 kcov              : 1;
+ };
+ 
+ struct reloc {
+@@ -70,7 +74,7 @@ struct reloc {
+ 	struct symbol *sym;
+ 	unsigned long offset;
+ 	unsigned int type;
+-	int addend;
++	s64 addend;
+ 	int idx;
+ 	bool jump_table_start;
+ };
+@@ -122,8 +126,13 @@ static inline u32 reloc_hash(struct reloc *reloc)
+ 
+ struct elf *elf_open_read(const char *name, int flags);
+ struct section *elf_create_section(struct elf *elf, const char *name, unsigned int sh_flags, size_t entsize, int nr);
+-struct section *elf_create_reloc_section(struct elf *elf, struct section *base, int reltype);
+-void elf_add_reloc(struct elf *elf, struct reloc *reloc);
++
++int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset,
++		  unsigned int type, struct symbol *sym, s64 addend);
++int elf_add_reloc_to_insn(struct elf *elf, struct section *sec,
++			  unsigned long offset, unsigned int type,
++			  struct section *insn_sec, unsigned long insn_off);
++
+ int elf_write_insn(struct elf *elf, struct section *sec,
+ 		   unsigned long offset, unsigned int len,
+ 		   const char *insn);
+@@ -140,9 +149,6 @@ struct reloc *find_reloc_by_dest(const struct elf *elf, struct section *sec, uns
+ struct reloc *find_reloc_by_dest_range(const struct elf *elf, struct section *sec,
+ 				     unsigned long offset, unsigned int len);
+ struct symbol *find_func_containing(struct section *sec, unsigned long offset);
+-void insn_to_reloc_sym_addend(struct section *sec, unsigned long offset,
+-			      struct reloc *reloc);
+-int elf_rebuild_reloc_section(struct elf *elf, struct section *sec);
+ 
+ #define for_each_sec(file, sec)						\
+ 	list_for_each_entry(sec, &file->elf->sections, list)
+diff --git a/tools/objtool/objtool.c b/tools/objtool/objtool.c
+index 9df0cd86d310d..cb2c6acd9667f 100644
+--- a/tools/objtool/objtool.c
++++ b/tools/objtool/objtool.c
+@@ -61,6 +61,8 @@ struct objtool_file *objtool_open_read(const char *_objname)
+ 
+ 	INIT_LIST_HEAD(&file.insn_list);
+ 	hash_init(file.insn_hash);
++	INIT_LIST_HEAD(&file.retpoline_call_list);
++	INIT_LIST_HEAD(&file.return_thunk_list);
+ 	INIT_LIST_HEAD(&file.static_call_list);
+ 	file.c_file = !vmlinux && find_section_by_name(file.elf, ".comment");
+ 	file.ignore_unreachables = no_unreachable;
+diff --git a/tools/objtool/objtool.h b/tools/objtool/objtool.h
+index 4125d4578b23b..bf64946e749bc 100644
+--- a/tools/objtool/objtool.h
++++ b/tools/objtool/objtool.h
+@@ -18,6 +18,8 @@ struct objtool_file {
+ 	struct elf *elf;
+ 	struct list_head insn_list;
+ 	DECLARE_HASHTABLE(insn_hash, 20);
++	struct list_head retpoline_call_list;
++	struct list_head return_thunk_list;
+ 	struct list_head static_call_list;
+ 	bool ignore_unreachables, c_file, hints, rodata;
+ };
+@@ -26,7 +28,6 @@ struct objtool_file *objtool_open_read(const char *_objname);
+ 
+ int check(struct objtool_file *file);
+ int orc_dump(const char *objname);
+-int create_orc(struct objtool_file *file);
+-int create_orc_sections(struct objtool_file *file);
++int orc_create(struct objtool_file *file);
+ 
+ #endif /* _OBJTOOL_H */
+diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c
+index 9ce68b385a1b8..812b33ed9f652 100644
+--- a/tools/objtool/orc_gen.c
++++ b/tools/objtool/orc_gen.c
+@@ -12,205 +12,231 @@
+ #include "check.h"
+ #include "warn.h"
+ 
+-int create_orc(struct objtool_file *file)
++static int init_orc_entry(struct orc_entry *orc, struct cfi_state *cfi,
++			  struct instruction *insn)
+ {
+-	struct instruction *insn;
++	struct cfi_reg *bp = &cfi->regs[CFI_BP];
+ 
+-	for_each_insn(file, insn) {
+-		struct orc_entry *orc = &insn->orc;
+-		struct cfi_reg *cfa = &insn->cfi.cfa;
+-		struct cfi_reg *bp = &insn->cfi.regs[CFI_BP];
++	memset(orc, 0, sizeof(*orc));
+ 
+-		if (!insn->sec->text)
+-			continue;
+-
+-		orc->end = insn->cfi.end;
++	if (!cfi) {
++		orc->end = 0;
++		orc->sp_reg = ORC_REG_UNDEFINED;
++		return 0;
++	}
+ 
+-		if (cfa->base == CFI_UNDEFINED) {
+-			orc->sp_reg = ORC_REG_UNDEFINED;
+-			continue;
+-		}
++	orc->end = cfi->end;
+ 
+-		switch (cfa->base) {
+-		case CFI_SP:
+-			orc->sp_reg = ORC_REG_SP;
+-			break;
+-		case CFI_SP_INDIRECT:
+-			orc->sp_reg = ORC_REG_SP_INDIRECT;
+-			break;
+-		case CFI_BP:
+-			orc->sp_reg = ORC_REG_BP;
+-			break;
+-		case CFI_BP_INDIRECT:
+-			orc->sp_reg = ORC_REG_BP_INDIRECT;
+-			break;
+-		case CFI_R10:
+-			orc->sp_reg = ORC_REG_R10;
+-			break;
+-		case CFI_R13:
+-			orc->sp_reg = ORC_REG_R13;
+-			break;
+-		case CFI_DI:
+-			orc->sp_reg = ORC_REG_DI;
+-			break;
+-		case CFI_DX:
+-			orc->sp_reg = ORC_REG_DX;
+-			break;
+-		default:
+-			WARN_FUNC("unknown CFA base reg %d",
+-				  insn->sec, insn->offset, cfa->base);
+-			return -1;
+-		}
++	if (cfi->cfa.base == CFI_UNDEFINED) {
++		orc->sp_reg = ORC_REG_UNDEFINED;
++		return 0;
++	}
+ 
+-		switch(bp->base) {
+-		case CFI_UNDEFINED:
+-			orc->bp_reg = ORC_REG_UNDEFINED;
+-			break;
+-		case CFI_CFA:
+-			orc->bp_reg = ORC_REG_PREV_SP;
+-			break;
+-		case CFI_BP:
+-			orc->bp_reg = ORC_REG_BP;
+-			break;
+-		default:
+-			WARN_FUNC("unknown BP base reg %d",
+-				  insn->sec, insn->offset, bp->base);
+-			return -1;
+-		}
++	switch (cfi->cfa.base) {
++	case CFI_SP:
++		orc->sp_reg = ORC_REG_SP;
++		break;
++	case CFI_SP_INDIRECT:
++		orc->sp_reg = ORC_REG_SP_INDIRECT;
++		break;
++	case CFI_BP:
++		orc->sp_reg = ORC_REG_BP;
++		break;
++	case CFI_BP_INDIRECT:
++		orc->sp_reg = ORC_REG_BP_INDIRECT;
++		break;
++	case CFI_R10:
++		orc->sp_reg = ORC_REG_R10;
++		break;
++	case CFI_R13:
++		orc->sp_reg = ORC_REG_R13;
++		break;
++	case CFI_DI:
++		orc->sp_reg = ORC_REG_DI;
++		break;
++	case CFI_DX:
++		orc->sp_reg = ORC_REG_DX;
++		break;
++	default:
++		WARN_FUNC("unknown CFA base reg %d",
++			  insn->sec, insn->offset, cfi->cfa.base);
++		return -1;
++	}
+ 
+-		orc->sp_offset = cfa->offset;
+-		orc->bp_offset = bp->offset;
+-		orc->type = insn->cfi.type;
++	switch (bp->base) {
++	case CFI_UNDEFINED:
++		orc->bp_reg = ORC_REG_UNDEFINED;
++		break;
++	case CFI_CFA:
++		orc->bp_reg = ORC_REG_PREV_SP;
++		break;
++	case CFI_BP:
++		orc->bp_reg = ORC_REG_BP;
++		break;
++	default:
++		WARN_FUNC("unknown BP base reg %d",
++			  insn->sec, insn->offset, bp->base);
++		return -1;
+ 	}
+ 
++	orc->sp_offset = cfi->cfa.offset;
++	orc->bp_offset = bp->offset;
++	orc->type = cfi->type;
++
+ 	return 0;
+ }
+ 
+-static int create_orc_entry(struct elf *elf, struct section *u_sec, struct section *ip_relocsec,
+-				unsigned int idx, struct section *insn_sec,
+-				unsigned long insn_off, struct orc_entry *o)
++static int write_orc_entry(struct elf *elf, struct section *orc_sec,
++			   struct section *ip_sec, unsigned int idx,
++			   struct section *insn_sec, unsigned long insn_off,
++			   struct orc_entry *o)
+ {
+ 	struct orc_entry *orc;
+-	struct reloc *reloc;
+ 
+ 	/* populate ORC data */
+-	orc = (struct orc_entry *)u_sec->data->d_buf + idx;
++	orc = (struct orc_entry *)orc_sec->data->d_buf + idx;
+ 	memcpy(orc, o, sizeof(*orc));
+ 
+ 	/* populate reloc for ip */
+-	reloc = malloc(sizeof(*reloc));
+-	if (!reloc) {
+-		perror("malloc");
++	if (elf_add_reloc_to_insn(elf, ip_sec, idx * sizeof(int), R_X86_64_PC32,
++				  insn_sec, insn_off))
+ 		return -1;
+-	}
+-	memset(reloc, 0, sizeof(*reloc));
+ 
+-	insn_to_reloc_sym_addend(insn_sec, insn_off, reloc);
+-	if (!reloc->sym) {
+-		WARN("missing symbol for insn at offset 0x%lx",
+-		     insn_off);
++	return 0;
++}
++
++struct orc_list_entry {
++	struct list_head list;
++	struct orc_entry orc;
++	struct section *insn_sec;
++	unsigned long insn_off;
++};
++
++static int orc_list_add(struct list_head *orc_list, struct orc_entry *orc,
++			struct section *sec, unsigned long offset)
++{
++	struct orc_list_entry *entry = malloc(sizeof(*entry));
++
++	if (!entry) {
++		WARN("malloc failed");
+ 		return -1;
+ 	}
+ 
+-	reloc->type = R_X86_64_PC32;
+-	reloc->offset = idx * sizeof(int);
+-	reloc->sec = ip_relocsec;
+-
+-	elf_add_reloc(elf, reloc);
++	entry->orc	= *orc;
++	entry->insn_sec = sec;
++	entry->insn_off = offset;
+ 
++	list_add_tail(&entry->list, orc_list);
+ 	return 0;
+ }
+ 
+-int create_orc_sections(struct objtool_file *file)
++static unsigned long alt_group_len(struct alt_group *alt_group)
++{
++	return alt_group->last_insn->offset +
++	       alt_group->last_insn->len -
++	       alt_group->first_insn->offset;
++}
++
++int orc_create(struct objtool_file *file)
+ {
+-	struct instruction *insn, *prev_insn;
+-	struct section *sec, *u_sec, *ip_relocsec;
+-	unsigned int idx;
++	struct section *sec, *orc_sec;
++	unsigned int nr = 0, idx = 0;
++	struct orc_list_entry *entry;
++	struct list_head orc_list;
+ 
+-	struct orc_entry empty = {
+-		.sp_reg = ORC_REG_UNDEFINED,
++	struct orc_entry null = {
++		.sp_reg  = ORC_REG_UNDEFINED,
+ 		.bp_reg  = ORC_REG_UNDEFINED,
+ 		.type    = UNWIND_HINT_TYPE_CALL,
+ 	};
+ 
+-	sec = find_section_by_name(file->elf, ".orc_unwind");
+-	if (sec) {
+-		WARN("file already has .orc_unwind section, skipping");
+-		return -1;
+-	}
+-
+-	/* count the number of needed orcs */
+-	idx = 0;
++	/* Build a deduplicated list of ORC entries: */
++	INIT_LIST_HEAD(&orc_list);
+ 	for_each_sec(file, sec) {
+-		if (!sec->text)
+-			continue;
++		struct orc_entry orc, prev_orc = {0};
++		struct instruction *insn;
++		bool empty = true;
+ 
+-		prev_insn = NULL;
+-		sec_for_each_insn(file, sec, insn) {
+-			if (!prev_insn ||
+-			    memcmp(&insn->orc, &prev_insn->orc,
+-				   sizeof(struct orc_entry))) {
+-				idx++;
+-			}
+-			prev_insn = insn;
+-		}
+-
+-		/* section terminator */
+-		if (prev_insn)
+-			idx++;
+-	}
+-	if (!idx)
+-		return -1;
+-
+-
+-	/* create .orc_unwind_ip and .rela.orc_unwind_ip sections */
+-	sec = elf_create_section(file->elf, ".orc_unwind_ip", 0, sizeof(int), idx);
+-	if (!sec)
+-		return -1;
+-
+-	ip_relocsec = elf_create_reloc_section(file->elf, sec, SHT_RELA);
+-	if (!ip_relocsec)
+-		return -1;
+-
+-	/* create .orc_unwind section */
+-	u_sec = elf_create_section(file->elf, ".orc_unwind", 0,
+-				   sizeof(struct orc_entry), idx);
+-
+-	/* populate sections */
+-	idx = 0;
+-	for_each_sec(file, sec) {
+ 		if (!sec->text)
+ 			continue;
+ 
+-		prev_insn = NULL;
+ 		sec_for_each_insn(file, sec, insn) {
+-			if (!prev_insn || memcmp(&insn->orc, &prev_insn->orc,
+-						 sizeof(struct orc_entry))) {
++			struct alt_group *alt_group = insn->alt_group;
++			int i;
+ 
+-				if (create_orc_entry(file->elf, u_sec, ip_relocsec, idx,
+-						     insn->sec, insn->offset,
+-						     &insn->orc))
++			if (!alt_group) {
++				if (init_orc_entry(&orc, insn->cfi, insn))
++					return -1;
++				if (!memcmp(&prev_orc, &orc, sizeof(orc)))
++					continue;
++				if (orc_list_add(&orc_list, &orc, sec,
++						 insn->offset))
+ 					return -1;
++				nr++;
++				prev_orc = orc;
++				empty = false;
++				continue;
++			}
+ 
+-				idx++;
++			/*
++			 * Alternatives can have different stack layout
++			 * possibilities (but they shouldn't conflict).
++			 * Instead of traversing the instructions, use the
++			 * alt_group's flattened byte-offset-addressed CFI
++			 * array.
++			 */
++			for (i = 0; i < alt_group_len(alt_group); i++) {
++				struct cfi_state *cfi = alt_group->cfi[i];
++				if (!cfi)
++					continue;
++				/* errors are reported on the original insn */
++				if (init_orc_entry(&orc, cfi, insn))
++					return -1;
++				if (!memcmp(&prev_orc, &orc, sizeof(orc)))
++					continue;
++				if (orc_list_add(&orc_list, &orc, insn->sec,
++						 insn->offset + i))
++					return -1;
++				nr++;
++				prev_orc = orc;
++				empty = false;
+ 			}
+-			prev_insn = insn;
+-		}
+ 
+-		/* section terminator */
+-		if (prev_insn) {
+-			if (create_orc_entry(file->elf, u_sec, ip_relocsec, idx,
+-					     prev_insn->sec,
+-					     prev_insn->offset + prev_insn->len,
+-					     &empty))
+-				return -1;
++			/* Skip to the end of the alt_group */
++			insn = alt_group->last_insn;
++		}
+ 
+-			idx++;
++		/* Add a section terminator */
++		if (!empty) {
++			orc_list_add(&orc_list, &null, sec, sec->len);
++			nr++;
+ 		}
+ 	}
++	if (!nr)
++		return 0;
++
++	/* Create .orc_unwind, .orc_unwind_ip and .rela.orc_unwind_ip sections: */
++	sec = find_section_by_name(file->elf, ".orc_unwind");
++	if (sec) {
++		WARN("file already has .orc_unwind section, skipping");
++		return -1;
++	}
++	orc_sec = elf_create_section(file->elf, ".orc_unwind", 0,
++				     sizeof(struct orc_entry), nr);
++	if (!orc_sec)
++		return -1;
+ 
+-	if (elf_rebuild_reloc_section(file->elf, ip_relocsec))
++	sec = elf_create_section(file->elf, ".orc_unwind_ip", 0, sizeof(int), nr);
++	if (!sec)
+ 		return -1;
+ 
++	/* Write ORC entries to sections: */
++	list_for_each_entry(entry, &orc_list, list) {
++		if (write_orc_entry(file->elf, orc_sec, sec, idx++,
++				    entry->insn_sec, entry->insn_off,
++				    &entry->orc))
++			return -1;
++	}
++
+ 	return 0;
+ }
+diff --git a/tools/objtool/special.c b/tools/objtool/special.c
+index 1a2420febd08a..aff0cee7bac17 100644
+--- a/tools/objtool/special.c
++++ b/tools/objtool/special.c
+@@ -55,6 +55,13 @@ void __weak arch_handle_alternative(unsigned short feature, struct special_alt *
+ {
+ }
+ 
++static void reloc_to_sec_off(struct reloc *reloc, struct section **sec,
++			     unsigned long *off)
++{
++	*sec = reloc->sym->sec;
++	*off = reloc->sym->offset + reloc->addend;
++}
++
+ static int get_alt_entry(struct elf *elf, struct special_entry *entry,
+ 			 struct section *sec, int idx,
+ 			 struct special_alt *alt)
+@@ -87,14 +94,8 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry,
+ 		WARN_FUNC("can't find orig reloc", sec, offset + entry->orig);
+ 		return -1;
+ 	}
+-	if (orig_reloc->sym->type != STT_SECTION) {
+-		WARN_FUNC("don't know how to handle non-section reloc symbol %s",
+-			   sec, offset + entry->orig, orig_reloc->sym->name);
+-		return -1;
+-	}
+ 
+-	alt->orig_sec = orig_reloc->sym->sec;
+-	alt->orig_off = orig_reloc->addend;
++	reloc_to_sec_off(orig_reloc, &alt->orig_sec, &alt->orig_off);
+ 
+ 	if (!entry->group || alt->new_len) {
+ 		new_reloc = find_reloc_by_dest(elf, sec, offset + entry->new);
+@@ -104,8 +105,7 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry,
+ 			return -1;
+ 		}
+ 
+-		alt->new_sec = new_reloc->sym->sec;
+-		alt->new_off = (unsigned int)new_reloc->addend;
++		reloc_to_sec_off(new_reloc, &alt->new_sec, &alt->new_off);
+ 
+ 		/* _ASM_EXTABLE_EX hack */
+ 		if (alt->new_off >= 0x7ffffff0)
+@@ -152,7 +152,9 @@ int special_get_alts(struct elf *elf, struct list_head *alts)
+ 			memset(alt, 0, sizeof(*alt));
+ 
+ 			ret = get_alt_entry(elf, entry, sec, idx, alt);
+-			if (ret)
++			if (ret > 0)
++				continue;
++			if (ret < 0)
+ 				return ret;
+ 
+ 			list_add_tail(&alt->list, alts);
+diff --git a/tools/objtool/sync-check.sh b/tools/objtool/sync-check.sh
+index 606a4b5e929f2..4bbabaecab14e 100755
+--- a/tools/objtool/sync-check.sh
++++ b/tools/objtool/sync-check.sh
+@@ -16,11 +16,14 @@ arch/x86/include/asm/emulate_prefix.h
+ arch/x86/lib/x86-opcode-map.txt
+ arch/x86/tools/gen-insn-attr-x86.awk
+ include/linux/static_call_types.h
+-arch/x86/include/asm/inat.h     -I '^#include [\"<]\(asm/\)*inat_types.h[\">]'
+-arch/x86/include/asm/insn.h     -I '^#include [\"<]\(asm/\)*inat.h[\">]'
+-arch/x86/lib/inat.c             -I '^#include [\"<]\(../include/\)*asm/insn.h[\">]'
+-arch/x86/lib/insn.c             -I '^#include [\"<]\(../include/\)*asm/in\(at\|sn\).h[\">]' -I '^#include [\"<]\(../include/\)*asm/emulate_prefix.h[\">]'
+ "
++
++SYNC_CHECK_FILES='
++arch/x86/include/asm/inat.h
++arch/x86/include/asm/insn.h
++arch/x86/lib/inat.c
++arch/x86/lib/insn.c
++'
+ fi
+ 
+ check_2 () {
+@@ -63,3 +66,9 @@ while read -r file_entry; do
+ done <<EOF
+ $FILES
+ EOF
++
++if [ "$SRCARCH" = "x86" ]; then
++	for i in $SYNC_CHECK_FILES; do
++		check $i '-I "^.*\/\*.*__ignore_sync_check__.*\*\/.*$"'
++	done
++fi
+diff --git a/tools/objtool/weak.c b/tools/objtool/weak.c
+index 7843e9a7a72f4..553ec9ce51ba8 100644
+--- a/tools/objtool/weak.c
++++ b/tools/objtool/weak.c
+@@ -25,12 +25,7 @@ int __weak orc_dump(const char *_objname)
+ 	UNSUPPORTED("orc");
+ }
+ 
+-int __weak create_orc(struct objtool_file *file)
+-{
+-	UNSUPPORTED("orc");
+-}
+-
+-int __weak create_orc_sections(struct objtool_file *file)
++int __weak orc_create(struct objtool_file *file)
+ {
+ 	UNSUPPORTED("orc");
+ }
+diff --git a/tools/perf/check-headers.sh b/tools/perf/check-headers.sh
+index 15ecb1803fb99..9f085aa09c97a 100755
+--- a/tools/perf/check-headers.sh
++++ b/tools/perf/check-headers.sh
+@@ -75,6 +75,13 @@ include/uapi/asm-generic/mman-common.h
+ include/uapi/asm-generic/unistd.h
+ '
+ 
++SYNC_CHECK_FILES='
++arch/x86/include/asm/inat.h
++arch/x86/include/asm/insn.h
++arch/x86/lib/inat.c
++arch/x86/lib/insn.c
++'
++
+ # These copies are under tools/perf/trace/beauty/ as they are not used to in
+ # building object files only by scripts in tools/perf/trace/beauty/ to generate
+ # tables that then gets included in .c files for things like id->string syscall
+@@ -129,6 +136,10 @@ for i in $FILES; do
+   check $i -B
+ done
+ 
++for i in $SYNC_CHECK_FILES; do
++  check $i '-I "^.*\/\*.*__ignore_sync_check__.*\*\/.*$"'
++done
++
+ # diff with extra ignore lines
+ check arch/x86/lib/memcpy_64.S        '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memcpy_\(erms\|orig\))"'
+ check arch/x86/lib/memset_64.S        '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memset_\(erms\|orig\))"'
+@@ -137,10 +148,6 @@ check include/uapi/linux/mman.h       '-I "^#include <\(uapi/\)*asm/mman.h>"'
+ check include/linux/build_bug.h       '-I "^#\(ifndef\|endif\)\( \/\/\)* static_assert$"'
+ check include/linux/ctype.h	      '-I "isdigit("'
+ check lib/ctype.c		      '-I "^EXPORT_SYMBOL" -I "^#include <linux/export.h>" -B'
+-check arch/x86/include/asm/inat.h     '-I "^#include [\"<]\(asm/\)*inat_types.h[\">]"'
+-check arch/x86/include/asm/insn.h     '-I "^#include [\"<]\(asm/\)*inat.h[\">]"'
+-check arch/x86/lib/inat.c	      '-I "^#include [\"<]\(../include/\)*asm/insn.h[\">]"'
+-check arch/x86/lib/insn.c             '-I "^#include [\"<]\(../include/\)*asm/in\(at\|sn\).h[\">]" -I "^#include [\"<]\(../include/\)*asm/emulate_prefix.h[\">]"'
+ 
+ # diff non-symmetric files
+ check_2 tools/perf/arch/x86/entry/syscalls/syscall_64.tbl arch/x86/entry/syscalls/syscall_64.tbl


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-07-29 16:37 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-07-29 16:37 UTC (permalink / raw
  To: gentoo-commits

commit:     7f19ef6618415f799d86e0a39707a0ca2ae60917
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 29 16:37:14 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jul 29 16:37:14 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7f19ef66

Linux patch 5.10.134

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1133_linux-5.10.134.patch | 3830 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3834 insertions(+)

diff --git a/0000_README b/0000_README
index 3fc0f934..7292c57d 100644
--- a/0000_README
+++ b/0000_README
@@ -575,6 +575,10 @@ Patch:  1132_linux-5.10.133.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.133
 
+Patch:  1133_linux-5.10.134.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.134
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1133_linux-5.10.134.patch b/1133_linux-5.10.134.patch
new file mode 100644
index 00000000..215d4755
--- /dev/null
+++ b/1133_linux-5.10.134.patch
@@ -0,0 +1,3830 @@
+diff --git a/Documentation/networking/netdevices.rst b/Documentation/networking/netdevices.rst
+index 5a85fcc80c765..557b974834371 100644
+--- a/Documentation/networking/netdevices.rst
++++ b/Documentation/networking/netdevices.rst
+@@ -10,18 +10,177 @@ Introduction
+ The following is a random collection of documentation regarding
+ network devices.
+ 
+-struct net_device allocation rules
+-==================================
++struct net_device lifetime rules
++================================
+ Network device structures need to persist even after module is unloaded and
+ must be allocated with alloc_netdev_mqs() and friends.
+ If device has registered successfully, it will be freed on last use
+-by free_netdev(). This is required to handle the pathologic case cleanly
+-(example: rmmod mydriver </sys/class/net/myeth/mtu )
++by free_netdev(). This is required to handle the pathological case cleanly
++(example: ``rmmod mydriver </sys/class/net/myeth/mtu``)
+ 
+-alloc_netdev_mqs()/alloc_netdev() reserve extra space for driver
++alloc_netdev_mqs() / alloc_netdev() reserve extra space for driver
+ private data which gets freed when the network device is freed. If
+ separately allocated data is attached to the network device
+-(netdev_priv(dev)) then it is up to the module exit handler to free that.
++(netdev_priv()) then it is up to the module exit handler to free that.
++
++There are two groups of APIs for registering struct net_device.
++First group can be used in normal contexts where ``rtnl_lock`` is not already
++held: register_netdev(), unregister_netdev().
++Second group can be used when ``rtnl_lock`` is already held:
++register_netdevice(), unregister_netdevice(), free_netdevice().
++
++Simple drivers
++--------------
++
++Most drivers (especially device drivers) handle lifetime of struct net_device
++in context where ``rtnl_lock`` is not held (e.g. driver probe and remove paths).
++
++In that case the struct net_device registration is done using
++the register_netdev(), and unregister_netdev() functions:
++
++.. code-block:: c
++
++  int probe()
++  {
++    struct my_device_priv *priv;
++    int err;
++
++    dev = alloc_netdev_mqs(...);
++    if (!dev)
++      return -ENOMEM;
++    priv = netdev_priv(dev);
++
++    /* ... do all device setup before calling register_netdev() ...
++     */
++
++    err = register_netdev(dev);
++    if (err)
++      goto err_undo;
++
++    /* net_device is visible to the user! */
++
++  err_undo:
++    /* ... undo the device setup ... */
++    free_netdev(dev);
++    return err;
++  }
++
++  void remove()
++  {
++    unregister_netdev(dev);
++    free_netdev(dev);
++  }
++
++Note that after calling register_netdev() the device is visible in the system.
++Users can open it and start sending / receiving traffic immediately,
++or run any other callback, so all initialization must be done prior to
++registration.
++
++unregister_netdev() closes the device and waits for all users to be done
++with it. The memory of struct net_device itself may still be referenced
++by sysfs but all operations on that device will fail.
++
++free_netdev() can be called after unregister_netdev() returns on when
++register_netdev() failed.
++
++Device management under RTNL
++----------------------------
++
++Registering struct net_device while in context which already holds
++the ``rtnl_lock`` requires extra care. In those scenarios most drivers
++will want to make use of struct net_device's ``needs_free_netdev``
++and ``priv_destructor`` members for freeing of state.
++
++Example flow of netdev handling under ``rtnl_lock``:
++
++.. code-block:: c
++
++  static void my_setup(struct net_device *dev)
++  {
++    dev->needs_free_netdev = true;
++  }
++
++  static void my_destructor(struct net_device *dev)
++  {
++    some_obj_destroy(priv->obj);
++    some_uninit(priv);
++  }
++
++  int create_link()
++  {
++    struct my_device_priv *priv;
++    int err;
++
++    ASSERT_RTNL();
++
++    dev = alloc_netdev(sizeof(*priv), "net%d", NET_NAME_UNKNOWN, my_setup);
++    if (!dev)
++      return -ENOMEM;
++    priv = netdev_priv(dev);
++
++    /* Implicit constructor */
++    err = some_init(priv);
++    if (err)
++      goto err_free_dev;
++
++    priv->obj = some_obj_create();
++    if (!priv->obj) {
++      err = -ENOMEM;
++      goto err_some_uninit;
++    }
++    /* End of constructor, set the destructor: */
++    dev->priv_destructor = my_destructor;
++
++    err = register_netdevice(dev);
++    if (err)
++      /* register_netdevice() calls destructor on failure */
++      goto err_free_dev;
++
++    /* If anything fails now unregister_netdevice() (or unregister_netdev())
++     * will take care of calling my_destructor and free_netdev().
++     */
++
++    return 0;
++
++  err_some_uninit:
++    some_uninit(priv);
++  err_free_dev:
++    free_netdev(dev);
++    return err;
++  }
++
++If struct net_device.priv_destructor is set it will be called by the core
++some time after unregister_netdevice(), it will also be called if
++register_netdevice() fails. The callback may be invoked with or without
++``rtnl_lock`` held.
++
++There is no explicit constructor callback, driver "constructs" the private
++netdev state after allocating it and before registration.
++
++Setting struct net_device.needs_free_netdev makes core call free_netdevice()
++automatically after unregister_netdevice() when all references to the device
++are gone. It only takes effect after a successful call to register_netdevice()
++so if register_netdevice() fails driver is responsible for calling
++free_netdev().
++
++free_netdev() is safe to call on error paths right after unregister_netdevice()
++or when register_netdevice() fails. Parts of netdev (de)registration process
++happen after ``rtnl_lock`` is released, therefore in those cases free_netdev()
++will defer some of the processing until ``rtnl_lock`` is released.
++
++Devices spawned from struct rtnl_link_ops should never free the
++struct net_device directly.
++
++.ndo_init and .ndo_uninit
++~~~~~~~~~~~~~~~~~~~~~~~~~
++
++``.ndo_init`` and ``.ndo_uninit`` callbacks are called during net_device
++registration and de-registration, under ``rtnl_lock``. Drivers can use
++those e.g. when parts of their init process need to run under ``rtnl_lock``.
++
++``.ndo_init`` runs before device is visible in the system, ``.ndo_uninit``
++runs during de-registering after device is closed but other subsystems
++may still have outstanding references to the netdevice.
+ 
+ MTU
+ ===
+diff --git a/Makefile b/Makefile
+index fbd330e58c3b8..00dddc2ac804a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 133
++SUBLEVEL = 134
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/alpha/kernel/srmcons.c b/arch/alpha/kernel/srmcons.c
+index 438b10c44d732..2b7a314b84522 100644
+--- a/arch/alpha/kernel/srmcons.c
++++ b/arch/alpha/kernel/srmcons.c
+@@ -59,7 +59,7 @@ srmcons_do_receive_chars(struct tty_port *port)
+ 	} while((result.bits.status & 1) && (++loops < 10));
+ 
+ 	if (count)
+-		tty_schedule_flip(port);
++		tty_flip_buffer_push(port);
+ 
+ 	return count;
+ }
+diff --git a/arch/m68k/Kconfig.bus b/arch/m68k/Kconfig.bus
+index d1e93a39cd3bc..f1be832e2b746 100644
+--- a/arch/m68k/Kconfig.bus
++++ b/arch/m68k/Kconfig.bus
+@@ -63,7 +63,7 @@ source "drivers/zorro/Kconfig"
+ 
+ endif
+ 
+-if COLDFIRE
++if !MMU
+ 
+ config ISA_DMA_API
+ 	def_bool !M5272
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index db9505c658eab..3d3016092b31c 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -73,6 +73,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
+ endif
+ 
+ KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax)
++KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)
+ 
+ # GCC versions that support the "-mstrict-align" option default to allowing
+ # unaligned accesses.  While unaligned accesses are explicitly allowed in the
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index e077f4fe7c6d4..2a51ee2f5a0f0 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -298,6 +298,7 @@
+ #define X86_FEATURE_RETPOLINE_LFENCE	(11*32+13) /* "" Use LFENCE for Spectre variant 2 */
+ #define X86_FEATURE_RETHUNK		(11*32+14) /* "" Use REturn THUNK */
+ #define X86_FEATURE_UNRET		(11*32+15) /* "" AMD BTB untrain return */
++#define X86_FEATURE_USE_IBPB_FW		(11*32+16) /* "" Use IBPB during runtime firmware calls */
+ 
+ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
+ #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
+diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
+index 30f76b9668579..871a8b06d4304 100644
+--- a/arch/x86/include/asm/mshyperv.h
++++ b/arch/x86/include/asm/mshyperv.h
+@@ -247,13 +247,6 @@ bool hv_vcpu_is_preempted(int vcpu);
+ static inline void hv_apic_init(void) {}
+ #endif
+ 
+-static inline void hv_set_msi_entry_from_desc(union hv_msi_entry *msi_entry,
+-					      struct msi_desc *msi_desc)
+-{
+-	msi_entry->address = msi_desc->msg.address_lo;
+-	msi_entry->data = msi_desc->msg.data;
+-}
+-
+ #else /* CONFIG_HYPERV */
+ static inline void hyperv_init(void) {}
+ static inline void hyperv_setup_mmu_ops(void) {}
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 6f2adaf53f467..c3e8e50633ea2 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -298,6 +298,8 @@ do {									\
+ 	alternative_msr_write(MSR_IA32_SPEC_CTRL,			\
+ 			      spec_ctrl_current() | SPEC_CTRL_IBRS,	\
+ 			      X86_FEATURE_USE_IBRS_FW);			\
++	alternative_msr_write(MSR_IA32_PRED_CMD, PRED_CMD_IBPB,		\
++			      X86_FEATURE_USE_IBPB_FW);			\
+ } while (0)
+ 
+ #define firmware_restrict_branch_speculation_end()			\
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index 77eefa0b0f324..a85fb17f11804 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -709,7 +709,9 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end)
+ 			dest = addr + insn.length + insn.immediate.value;
+ 
+ 		if (__static_call_fixup(addr, op, dest) ||
+-		    WARN_ON_ONCE(dest != &__x86_return_thunk))
++		    WARN_ONCE(dest != &__x86_return_thunk,
++			      "missing return thunk: %pS-%pS: %*ph",
++			      addr, dest, 5, addr))
+ 			continue;
+ 
+ 		DPRINTK("return thunk at: %pS (%px) len: %d to: %pS",
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index bc6382f5ec27b..7896b67dda420 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -931,6 +931,7 @@ static inline const char *spectre_v2_module_string(void) { return ""; }
+ #define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n"
+ #define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n"
+ #define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n"
++#define SPECTRE_V2_IBRS_PERF_MSG "WARNING: IBRS mitigation selected on Enhanced IBRS CPU, this may cause unnecessary performance loss\n"
+ 
+ #ifdef CONFIG_BPF_SYSCALL
+ void unpriv_ebpf_notify(int new_state)
+@@ -1371,6 +1372,8 @@ static void __init spectre_v2_select_mitigation(void)
+ 
+ 	case SPECTRE_V2_IBRS:
+ 		setup_force_cpu_cap(X86_FEATURE_KERNEL_IBRS);
++		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED))
++			pr_warn(SPECTRE_V2_IBRS_PERF_MSG);
+ 		break;
+ 
+ 	case SPECTRE_V2_LFENCE:
+@@ -1472,7 +1475,16 @@ static void __init spectre_v2_select_mitigation(void)
+ 	 * the CPU supports Enhanced IBRS, kernel might un-intentionally not
+ 	 * enable IBRS around firmware calls.
+ 	 */
+-	if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) {
++	if (boot_cpu_has_bug(X86_BUG_RETBLEED) &&
++	    (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
++	     boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)) {
++
++		if (retbleed_cmd != RETBLEED_CMD_IBPB) {
++			setup_force_cpu_cap(X86_FEATURE_USE_IBPB_FW);
++			pr_info("Enabling Speculation Barrier for firmware calls\n");
++		}
++
++	} else if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) {
+ 		setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
+ 		pr_info("Enabling Restricted Speculation for firmware calls\n");
+ 	}
+diff --git a/drivers/accessibility/speakup/spk_ttyio.c b/drivers/accessibility/speakup/spk_ttyio.c
+index 6284aff434a1a..da3a24557a031 100644
+--- a/drivers/accessibility/speakup/spk_ttyio.c
++++ b/drivers/accessibility/speakup/spk_ttyio.c
+@@ -88,7 +88,7 @@ static int spk_ttyio_receive_buf2(struct tty_struct *tty,
+ 	}
+ 
+ 	if (!ldisc_data->buf_free)
+-		/* ttyio_in will tty_schedule_flip */
++		/* ttyio_in will tty_flip_buffer_push */
+ 		return 0;
+ 
+ 	/* Make sure the consumer has read buf before we have seen
+@@ -334,7 +334,7 @@ static unsigned char ttyio_in(int timeout)
+ 	mb();
+ 	ldisc_data->buf_free = true;
+ 	/* Let TTY push more characters */
+-	tty_schedule_flip(speakup_tty->port);
++	tty_flip_buffer_push(speakup_tty->port);
+ 
+ 	return rv;
+ }
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index bb4ca064447e9..957be5f69406a 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -350,6 +350,9 @@ static const struct regmap_config pca953x_i2c_regmap = {
+ 	.reg_bits = 8,
+ 	.val_bits = 8,
+ 
++	.use_single_read = true,
++	.use_single_write = true,
++
+ 	.readable_reg = pca953x_readable_register,
+ 	.writeable_reg = pca953x_writeable_register,
+ 	.volatile_reg = pca953x_volatile_register,
+@@ -893,15 +896,18 @@ static int pca953x_irq_setup(struct pca953x_chip *chip,
+ static int device_pca95xx_init(struct pca953x_chip *chip, u32 invert)
+ {
+ 	DECLARE_BITMAP(val, MAX_LINE);
++	u8 regaddr;
+ 	int ret;
+ 
+-	ret = regcache_sync_region(chip->regmap, chip->regs->output,
+-				   chip->regs->output + NBANK(chip));
++	regaddr = pca953x_recalc_addr(chip, chip->regs->output, 0);
++	ret = regcache_sync_region(chip->regmap, regaddr,
++				   regaddr + NBANK(chip) - 1);
+ 	if (ret)
+ 		goto out;
+ 
+-	ret = regcache_sync_region(chip->regmap, chip->regs->direction,
+-				   chip->regs->direction + NBANK(chip));
++	regaddr = pca953x_recalc_addr(chip, chip->regs->direction, 0);
++	ret = regcache_sync_region(chip->regmap, regaddr,
++				   regaddr + NBANK(chip) - 1);
+ 	if (ret)
+ 		goto out;
+ 
+@@ -1114,14 +1120,14 @@ static int pca953x_regcache_sync(struct device *dev)
+ 	 * sync these registers first and only then sync the rest.
+ 	 */
+ 	regaddr = pca953x_recalc_addr(chip, chip->regs->direction, 0);
+-	ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip));
++	ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip) - 1);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to sync GPIO dir registers: %d\n", ret);
+ 		return ret;
+ 	}
+ 
+ 	regaddr = pca953x_recalc_addr(chip, chip->regs->output, 0);
+-	ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip));
++	ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip) - 1);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to sync GPIO out registers: %d\n", ret);
+ 		return ret;
+@@ -1131,7 +1137,7 @@ static int pca953x_regcache_sync(struct device *dev)
+ 	if (chip->driver_data & PCA_PCAL) {
+ 		regaddr = pca953x_recalc_addr(chip, PCAL953X_IN_LATCH, 0);
+ 		ret = regcache_sync_region(chip->regmap, regaddr,
+-					   regaddr + NBANK(chip));
++					   regaddr + NBANK(chip) - 1);
+ 		if (ret) {
+ 			dev_err(dev, "Failed to sync INT latch registers: %d\n",
+ 				ret);
+@@ -1140,7 +1146,7 @@ static int pca953x_regcache_sync(struct device *dev)
+ 
+ 		regaddr = pca953x_recalc_addr(chip, PCAL953X_INT_MASK, 0);
+ 		ret = regcache_sync_region(chip->regmap, regaddr,
+-					   regaddr + NBANK(chip));
++					   regaddr + NBANK(chip) - 1);
+ 		if (ret) {
+ 			dev_err(dev, "Failed to sync INT mask registers: %d\n",
+ 				ret);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index f069d0faba64b..55ecc67592ebc 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -922,6 +922,37 @@ static void amdgpu_check_debugfs_connector_property_change(struct amdgpu_device
+ 	}
+ }
+ 
++struct amdgpu_stutter_quirk {
++	u16 chip_vendor;
++	u16 chip_device;
++	u16 subsys_vendor;
++	u16 subsys_device;
++	u8 revision;
++};
++
++static const struct amdgpu_stutter_quirk amdgpu_stutter_quirk_list[] = {
++	/* https://bugzilla.kernel.org/show_bug.cgi?id=214417 */
++	{ 0x1002, 0x15dd, 0x1002, 0x15dd, 0xc8 },
++	{ 0, 0, 0, 0, 0 },
++};
++
++static bool dm_should_disable_stutter(struct pci_dev *pdev)
++{
++	const struct amdgpu_stutter_quirk *p = amdgpu_stutter_quirk_list;
++
++	while (p && p->chip_device != 0) {
++		if (pdev->vendor == p->chip_vendor &&
++		    pdev->device == p->chip_device &&
++		    pdev->subsystem_vendor == p->subsys_vendor &&
++		    pdev->subsystem_device == p->subsys_device &&
++		    pdev->revision == p->revision) {
++			return true;
++		}
++		++p;
++	}
++	return false;
++}
++
+ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ {
+ 	struct dc_init_data init_data;
+@@ -1014,6 +1045,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 
+ 	if (adev->asic_type != CHIP_CARRIZO && adev->asic_type != CHIP_STONEY)
+ 		adev->dm.dc->debug.disable_stutter = amdgpu_pp_feature_mask & PP_STUTTER_MODE ? false : true;
++	if (dm_should_disable_stutter(adev->pdev))
++		adev->dm.dc->debug.disable_stutter = true;
+ 
+ 	if (amdgpu_dc_debug_mask & DC_DISABLE_STUTTER)
+ 		adev->dm.dc->debug.disable_stutter = true;
+diff --git a/drivers/gpu/drm/imx/dcss/dcss-dev.c b/drivers/gpu/drm/imx/dcss/dcss-dev.c
+index c849533ca83e3..3f5750cc2673e 100644
+--- a/drivers/gpu/drm/imx/dcss/dcss-dev.c
++++ b/drivers/gpu/drm/imx/dcss/dcss-dev.c
+@@ -207,6 +207,7 @@ struct dcss_dev *dcss_dev_create(struct device *dev, bool hdmi_output)
+ 
+ 	ret = dcss_submodules_init(dcss);
+ 	if (ret) {
++		of_node_put(dcss->of_port);
+ 		dev_err(dev, "submodules initialization failed\n");
+ 		goto clks_err;
+ 	}
+@@ -237,6 +238,8 @@ void dcss_dev_destroy(struct dcss_dev *dcss)
+ 		dcss_clocks_disable(dcss);
+ 	}
+ 
++	of_node_put(dcss->of_port);
++
+ 	pm_runtime_disable(dcss->dev);
+ 
+ 	dcss_submodules_stop(dcss);
+diff --git a/drivers/gpu/drm/imx/dcss/dcss-plane.c b/drivers/gpu/drm/imx/dcss/dcss-plane.c
+index f54087ac44d35..46a188dd02adc 100644
+--- a/drivers/gpu/drm/imx/dcss/dcss-plane.c
++++ b/drivers/gpu/drm/imx/dcss/dcss-plane.c
+@@ -268,7 +268,6 @@ static void dcss_plane_atomic_update(struct drm_plane *plane,
+ 	struct dcss_plane *dcss_plane = to_dcss_plane(plane);
+ 	struct dcss_dev *dcss = plane->dev->dev_private;
+ 	struct drm_framebuffer *fb = state->fb;
+-	u32 pixel_format;
+ 	struct drm_crtc_state *crtc_state;
+ 	bool modifiers_present;
+ 	u32 src_w, src_h, dst_w, dst_h;
+@@ -279,7 +278,6 @@ static void dcss_plane_atomic_update(struct drm_plane *plane,
+ 	if (!fb || !state->crtc || !state->visible)
+ 		return;
+ 
+-	pixel_format = state->fb->format->format;
+ 	crtc_state = state->crtc->state;
+ 	modifiers_present = !!(fb->flags & DRM_MODE_FB_MODIFIERS);
+ 
+diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c
+index 01564bd96c624..0abce487ead72 100644
+--- a/drivers/i2c/busses/i2c-cadence.c
++++ b/drivers/i2c/busses/i2c-cadence.c
+@@ -386,9 +386,9 @@ static irqreturn_t cdns_i2c_slave_isr(void *ptr)
+  */
+ static irqreturn_t cdns_i2c_master_isr(void *ptr)
+ {
+-	unsigned int isr_status, avail_bytes, updatetx;
++	unsigned int isr_status, avail_bytes;
+ 	unsigned int bytes_to_send;
+-	bool hold_quirk;
++	bool updatetx;
+ 	struct cdns_i2c *id = ptr;
+ 	/* Signal completion only after everything is updated */
+ 	int done_flag = 0;
+@@ -408,11 +408,7 @@ static irqreturn_t cdns_i2c_master_isr(void *ptr)
+ 	 * Check if transfer size register needs to be updated again for a
+ 	 * large data receive operation.
+ 	 */
+-	updatetx = 0;
+-	if (id->recv_count > id->curr_recv_count)
+-		updatetx = 1;
+-
+-	hold_quirk = (id->quirks & CDNS_I2C_BROKEN_HOLD_BIT) && updatetx;
++	updatetx = id->recv_count > id->curr_recv_count;
+ 
+ 	/* When receiving, handle data interrupt and completion interrupt */
+ 	if (id->p_recv_buf &&
+@@ -443,7 +439,7 @@ static irqreturn_t cdns_i2c_master_isr(void *ptr)
+ 				break;
+ 			}
+ 
+-			if (cdns_is_holdquirk(id, hold_quirk))
++			if (cdns_is_holdquirk(id, updatetx))
+ 				break;
+ 		}
+ 
+@@ -454,7 +450,7 @@ static irqreturn_t cdns_i2c_master_isr(void *ptr)
+ 		 * maintain transfer size non-zero while performing a large
+ 		 * receive operation.
+ 		 */
+-		if (cdns_is_holdquirk(id, hold_quirk)) {
++		if (cdns_is_holdquirk(id, updatetx)) {
+ 			/* wait while fifo is full */
+ 			while (cdns_i2c_readreg(CDNS_I2C_XFER_SIZE_OFFSET) !=
+ 			       (id->curr_recv_count - CDNS_I2C_FIFO_DEPTH))
+@@ -476,22 +472,6 @@ static irqreturn_t cdns_i2c_master_isr(void *ptr)
+ 						  CDNS_I2C_XFER_SIZE_OFFSET);
+ 				id->curr_recv_count = id->recv_count;
+ 			}
+-		} else if (id->recv_count && !hold_quirk &&
+-						!id->curr_recv_count) {
+-
+-			/* Set the slave address in address register*/
+-			cdns_i2c_writereg(id->p_msg->addr & CDNS_I2C_ADDR_MASK,
+-						CDNS_I2C_ADDR_OFFSET);
+-
+-			if (id->recv_count > CDNS_I2C_TRANSFER_SIZE) {
+-				cdns_i2c_writereg(CDNS_I2C_TRANSFER_SIZE,
+-						CDNS_I2C_XFER_SIZE_OFFSET);
+-				id->curr_recv_count = CDNS_I2C_TRANSFER_SIZE;
+-			} else {
+-				cdns_i2c_writereg(id->recv_count,
+-						CDNS_I2C_XFER_SIZE_OFFSET);
+-				id->curr_recv_count = id->recv_count;
+-			}
+ 		}
+ 
+ 		/* Clear hold (if not repeated start) and signal completion */
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+index 51e071c20e397..cd6e016e62103 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+@@ -1235,8 +1235,8 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
+ 	csk->sndbuf = newsk->sk_sndbuf;
+ 	csk->smac_idx = ((struct port_info *)netdev_priv(ndev))->smt_idx;
+ 	RCV_WSCALE(tp) = select_rcv_wscale(tcp_full_space(newsk),
+-					   sock_net(newsk)->
+-						ipv4.sysctl_tcp_window_scaling,
++					   READ_ONCE(sock_net(newsk)->
++						     ipv4.sysctl_tcp_window_scaling),
+ 					   tp->window_clamp);
+ 	neigh_release(n);
+ 	inet_inherit_port(&tcp_hashinfo, lsk, newsk);
+@@ -1383,7 +1383,7 @@ static void chtls_pass_accept_request(struct sock *sk,
+ #endif
+ 	}
+ 	if (req->tcpopt.wsf <= 14 &&
+-	    sock_net(sk)->ipv4.sysctl_tcp_window_scaling) {
++	    READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_window_scaling)) {
+ 		inet_rsk(oreq)->wscale_ok = 1;
+ 		inet_rsk(oreq)->snd_wscale = req->tcpopt.wsf;
+ 	}
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index 649c5c429bd7c..1288b5e3d2201 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -2287,7 +2287,7 @@ err:
+ 
+ /* Uses sync mcc */
+ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+-				      u8 page_num, u8 *data)
++				      u8 page_num, u32 off, u32 len, u8 *data)
+ {
+ 	struct be_dma_mem cmd;
+ 	struct be_mcc_wrb *wrb;
+@@ -2321,10 +2321,10 @@ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+ 	req->port = cpu_to_le32(adapter->hba_port_num);
+ 	req->page_num = cpu_to_le32(page_num);
+ 	status = be_mcc_notify_wait(adapter);
+-	if (!status) {
++	if (!status && len > 0) {
+ 		struct be_cmd_resp_port_type *resp = cmd.va;
+ 
+-		memcpy(data, resp->page_data, PAGE_DATA_LEN);
++		memcpy(data, resp->page_data + off, len);
+ 	}
+ err:
+ 	mutex_unlock(&adapter->mcc_lock);
+@@ -2415,7 +2415,7 @@ int be_cmd_query_cable_type(struct be_adapter *adapter)
+ 	int status;
+ 
+ 	status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
+-						   page_data);
++						   0, PAGE_DATA_LEN, page_data);
+ 	if (!status) {
+ 		switch (adapter->phy.interface_type) {
+ 		case PHY_TYPE_QSFP:
+@@ -2440,7 +2440,7 @@ int be_cmd_query_sfp_info(struct be_adapter *adapter)
+ 	int status;
+ 
+ 	status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
+-						   page_data);
++						   0, PAGE_DATA_LEN, page_data);
+ 	if (!status) {
+ 		strlcpy(adapter->phy.vendor_name, page_data +
+ 			SFP_VENDOR_NAME_OFFSET, SFP_VENDOR_NAME_LEN - 1);
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.h b/drivers/net/ethernet/emulex/benet/be_cmds.h
+index c30d6d6f0f3a0..9e17d6a7ab8cd 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.h
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.h
+@@ -2427,7 +2427,7 @@ int be_cmd_set_beacon_state(struct be_adapter *adapter, u8 port_num, u8 beacon,
+ int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num,
+ 			    u32 *state);
+ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+-				      u8 page_num, u8 *data);
++				      u8 page_num, u32 off, u32 len, u8 *data);
+ int be_cmd_query_cable_type(struct be_adapter *adapter);
+ int be_cmd_query_sfp_info(struct be_adapter *adapter);
+ int lancer_cmd_read_object(struct be_adapter *adapter, struct be_dma_mem *cmd,
+diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c
+index 99cc1c46fb301..d90bf457e49cc 100644
+--- a/drivers/net/ethernet/emulex/benet/be_ethtool.c
++++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c
+@@ -1338,7 +1338,7 @@ static int be_get_module_info(struct net_device *netdev,
+ 		return -EOPNOTSUPP;
+ 
+ 	status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
+-						   page_data);
++						   0, PAGE_DATA_LEN, page_data);
+ 	if (!status) {
+ 		if (!page_data[SFP_PLUS_SFF_8472_COMP]) {
+ 			modinfo->type = ETH_MODULE_SFF_8079;
+@@ -1356,25 +1356,32 @@ static int be_get_module_eeprom(struct net_device *netdev,
+ {
+ 	struct be_adapter *adapter = netdev_priv(netdev);
+ 	int status;
++	u32 begin, end;
+ 
+ 	if (!check_privilege(adapter, MAX_PRIVILEGES))
+ 		return -EOPNOTSUPP;
+ 
+-	status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
+-						   data);
+-	if (status)
+-		goto err;
++	begin = eeprom->offset;
++	end = eeprom->offset + eeprom->len;
++
++	if (begin < PAGE_DATA_LEN) {
++		status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, begin,
++							   min_t(u32, end, PAGE_DATA_LEN) - begin,
++							   data);
++		if (status)
++			goto err;
++
++		data += PAGE_DATA_LEN - begin;
++		begin = PAGE_DATA_LEN;
++	}
+ 
+-	if (eeprom->offset + eeprom->len > PAGE_DATA_LEN) {
+-		status = be_cmd_read_port_transceiver_data(adapter,
+-							   TR_PAGE_A2,
+-							   data +
+-							   PAGE_DATA_LEN);
++	if (end > PAGE_DATA_LEN) {
++		status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A2,
++							   begin - PAGE_DATA_LEN,
++							   end - begin, data);
+ 		if (status)
+ 			goto err;
+ 	}
+-	if (eeprom->offset)
+-		memcpy(data, data + eeprom->offset, eeprom->len);
+ err:
+ 	return be_cmd_status(status);
+ }
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 58453f7958df9..11d4e3ba9af4c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -10187,7 +10187,7 @@ static int i40e_reset(struct i40e_pf *pf)
+  **/
+ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ {
+-	int old_recovery_mode_bit = test_bit(__I40E_RECOVERY_MODE, pf->state);
++	const bool is_recovery_mode_reported = i40e_check_recovery_mode(pf);
+ 	struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
+ 	struct i40e_hw *hw = &pf->hw;
+ 	i40e_status ret;
+@@ -10195,13 +10195,11 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ 	int v;
+ 
+ 	if (test_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state) &&
+-	    i40e_check_recovery_mode(pf)) {
++	    is_recovery_mode_reported)
+ 		i40e_set_ethtool_ops(pf->vsi[pf->lan_vsi]->netdev);
+-	}
+ 
+ 	if (test_bit(__I40E_DOWN, pf->state) &&
+-	    !test_bit(__I40E_RECOVERY_MODE, pf->state) &&
+-	    !old_recovery_mode_bit)
++	    !test_bit(__I40E_RECOVERY_MODE, pf->state))
+ 		goto clear_recovery;
+ 	dev_dbg(&pf->pdev->dev, "Rebuilding internal switch\n");
+ 
+@@ -10228,13 +10226,12 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ 	 * accordingly with regard to resources initialization
+ 	 * and deinitialization
+ 	 */
+-	if (test_bit(__I40E_RECOVERY_MODE, pf->state) ||
+-	    old_recovery_mode_bit) {
++	if (test_bit(__I40E_RECOVERY_MODE, pf->state)) {
+ 		if (i40e_get_capabilities(pf,
+ 					  i40e_aqc_opc_list_func_capabilities))
+ 			goto end_unlock;
+ 
+-		if (test_bit(__I40E_RECOVERY_MODE, pf->state)) {
++		if (is_recovery_mode_reported) {
+ 			/* we're staying in recovery mode so we'll reinitialize
+ 			 * misc vector here
+ 			 */
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+index 256fa07d54d5d..99983f7a0ce0b 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+@@ -1263,11 +1263,10 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring,
+ {
+ 	struct iavf_rx_buffer *rx_buffer;
+ 
+-	if (!size)
+-		return NULL;
+-
+ 	rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
+ 	prefetchw(rx_buffer->page);
++	if (!size)
++		return rx_buffer;
+ 
+ 	/* we are reusing so sync this buffer for CPU use */
+ 	dma_sync_single_range_for_cpu(rx_ring->dev,
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 53e31002ce52a..e7ffe63925fd2 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -4933,6 +4933,9 @@ u32 igc_rd32(struct igc_hw *hw, u32 reg)
+ 	u8 __iomem *hw_addr = READ_ONCE(hw->hw_addr);
+ 	u32 value = 0;
+ 
++	if (IGC_REMOVED(hw_addr))
++		return ~value;
++
+ 	value = readl(&hw_addr[reg]);
+ 
+ 	/* reads should not return all F's */
+diff --git a/drivers/net/ethernet/intel/igc/igc_regs.h b/drivers/net/ethernet/intel/igc/igc_regs.h
+index b52dd9d737e87..a273e1c33b3fd 100644
+--- a/drivers/net/ethernet/intel/igc/igc_regs.h
++++ b/drivers/net/ethernet/intel/igc/igc_regs.h
+@@ -252,7 +252,8 @@ u32 igc_rd32(struct igc_hw *hw, u32 reg);
+ #define wr32(reg, val) \
+ do { \
+ 	u8 __iomem *hw_addr = READ_ONCE((hw)->hw_addr); \
+-	writel((val), &hw_addr[(reg)]); \
++	if (!IGC_REMOVED(hw_addr)) \
++		writel((val), &hw_addr[(reg)]); \
+ } while (0)
+ 
+ #define rd32(reg) (igc_rd32(hw, reg))
+@@ -264,4 +265,6 @@ do { \
+ 
+ #define array_rd32(reg, offset) (igc_rd32(hw, (reg) + ((offset) << 2)))
+ 
++#define IGC_REMOVED(h) unlikely(!(h))
++
+ #endif
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+index de0fc6ecf491f..27c6f911737bf 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+@@ -769,6 +769,7 @@ struct ixgbe_adapter {
+ #ifdef CONFIG_IXGBE_IPSEC
+ 	struct ixgbe_ipsec *ipsec;
+ #endif /* CONFIG_IXGBE_IPSEC */
++	spinlock_t vfs_lock;
+ };
+ 
+ static inline u8 ixgbe_max_rss_indices(struct ixgbe_adapter *adapter)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index a3a02e2f92f64..b5b8be4672aa4 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -6403,6 +6403,9 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter,
+ 	/* n-tuple support exists, always init our spinlock */
+ 	spin_lock_init(&adapter->fdir_perfect_lock);
+ 
++	/* init spinlock to avoid concurrency of VF resources */
++	spin_lock_init(&adapter->vfs_lock);
++
+ #ifdef CONFIG_IXGBE_DCB
+ 	ixgbe_init_dcb(adapter);
+ #endif
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index aaebdae8b5fff..0078ae5926164 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -204,10 +204,13 @@ void ixgbe_enable_sriov(struct ixgbe_adapter *adapter, unsigned int max_vfs)
+ int ixgbe_disable_sriov(struct ixgbe_adapter *adapter)
+ {
+ 	unsigned int num_vfs = adapter->num_vfs, vf;
++	unsigned long flags;
+ 	int rss;
+ 
++	spin_lock_irqsave(&adapter->vfs_lock, flags);
+ 	/* set num VFs to 0 to prevent access to vfinfo */
+ 	adapter->num_vfs = 0;
++	spin_unlock_irqrestore(&adapter->vfs_lock, flags);
+ 
+ 	/* put the reference to all of the vf devices */
+ 	for (vf = 0; vf < num_vfs; ++vf) {
+@@ -1305,8 +1308,10 @@ static void ixgbe_rcv_ack_from_vf(struct ixgbe_adapter *adapter, u32 vf)
+ void ixgbe_msg_task(struct ixgbe_adapter *adapter)
+ {
+ 	struct ixgbe_hw *hw = &adapter->hw;
++	unsigned long flags;
+ 	u32 vf;
+ 
++	spin_lock_irqsave(&adapter->vfs_lock, flags);
+ 	for (vf = 0; vf < adapter->num_vfs; vf++) {
+ 		/* process any reset requests */
+ 		if (!ixgbe_check_for_rst(hw, vf))
+@@ -1320,6 +1325,7 @@ void ixgbe_msg_task(struct ixgbe_adapter *adapter)
+ 		if (!ixgbe_check_for_ack(hw, vf))
+ 			ixgbe_rcv_ack_from_vf(adapter, vf);
+ 	}
++	spin_unlock_irqrestore(&adapter->vfs_lock, flags);
+ }
+ 
+ void ixgbe_disable_tx_rx(struct ixgbe_adapter *adapter)
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 53128382fc2e0..d2887ae508bb8 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -4003,7 +4003,7 @@ static bool mlxsw_sp_fi_is_gateway(const struct mlxsw_sp *mlxsw_sp,
+ {
+ 	const struct fib_nh *nh = fib_info_nh(fi, 0);
+ 
+-	return nh->fib_nh_scope == RT_SCOPE_LINK ||
++	return nh->fib_nh_gw_family ||
+ 	       mlxsw_sp_nexthop4_ipip_type(mlxsw_sp, nh, NULL);
+ }
+ 
+@@ -8038,13 +8038,14 @@ static int mlxsw_sp_dscp_init(struct mlxsw_sp *mlxsw_sp)
+ static int __mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp)
+ {
+ 	struct net *net = mlxsw_sp_net(mlxsw_sp);
+-	bool usp = net->ipv4.sysctl_ip_fwd_update_priority;
+ 	char rgcr_pl[MLXSW_REG_RGCR_LEN];
+ 	u64 max_rifs;
++	bool usp;
+ 
+ 	if (!MLXSW_CORE_RES_VALID(mlxsw_sp->core, MAX_RIFS))
+ 		return -EIO;
+ 	max_rifs = MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_RIFS);
++	usp = READ_ONCE(net->ipv4.sysctl_ip_fwd_update_priority);
+ 
+ 	mlxsw_reg_rgcr_pack(rgcr_pl, true, true);
+ 	mlxsw_reg_rgcr_max_router_interfaces_set(rgcr_pl, max_rifs);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index 16c538cfaf59d..2e71e510e127d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -215,6 +215,9 @@ static void dwmac4_map_mtl_dma(struct mac_device_info *hw, u32 queue, u32 chan)
+ 	if (queue == 0 || queue == 4) {
+ 		value &= ~MTL_RXQ_DMA_Q04MDMACH_MASK;
+ 		value |= MTL_RXQ_DMA_Q04MDMACH(chan);
++	} else if (queue > 4) {
++		value &= ~MTL_RXQ_DMA_QXMDMACH_MASK(queue - 4);
++		value |= MTL_RXQ_DMA_QXMDMACH(chan, queue - 4);
+ 	} else {
+ 		value &= ~MTL_RXQ_DMA_QXMDMACH_MASK(queue);
+ 		value |= MTL_RXQ_DMA_QXMDMACH(chan, queue);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index e9aa9a5eba6be..27b7bb64a0281 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -738,19 +738,10 @@ int stmmac_init_tstamp_counter(struct stmmac_priv *priv, u32 systime_flags)
+ 	struct timespec64 now;
+ 	u32 sec_inc = 0;
+ 	u64 temp = 0;
+-	int ret;
+ 
+ 	if (!(priv->dma_cap.time_stamp || priv->dma_cap.atime_stamp))
+ 		return -EOPNOTSUPP;
+ 
+-	ret = clk_prepare_enable(priv->plat->clk_ptp_ref);
+-	if (ret < 0) {
+-		netdev_warn(priv->dev,
+-			    "failed to enable PTP reference clock: %pe\n",
+-			    ERR_PTR(ret));
+-		return ret;
+-	}
+-
+ 	stmmac_config_hw_tstamping(priv, priv->ptpaddr, systime_flags);
+ 	priv->systime_flags = systime_flags;
+ 
+@@ -2755,6 +2746,14 @@ static int stmmac_hw_setup(struct net_device *dev, bool ptp_register)
+ 
+ 	stmmac_mmc_setup(priv);
+ 
++	if (ptp_register) {
++		ret = clk_prepare_enable(priv->plat->clk_ptp_ref);
++		if (ret < 0)
++			netdev_warn(priv->dev,
++				    "failed to enable PTP reference clock: %pe\n",
++				    ERR_PTR(ret));
++	}
++
+ 	ret = stmmac_init_ptp(priv);
+ 	if (ret == -EOPNOTSUPP)
+ 		netdev_warn(priv->dev, "PTP not supported by HW\n");
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index b40b962055fa5..f70d8d1ce3298 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -814,7 +814,13 @@ static int __maybe_unused stmmac_pltfr_noirq_resume(struct device *dev)
+ 		if (ret)
+ 			return ret;
+ 
+-		stmmac_init_tstamp_counter(priv, priv->systime_flags);
++		ret = clk_prepare_enable(priv->plat->clk_ptp_ref);
++		if (ret < 0) {
++			netdev_warn(priv->dev,
++				    "failed to enable PTP reference clock: %pe\n",
++				    ERR_PTR(ret));
++			return ret;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index 79a53fe245e5c..0ac4f59e3f186 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -1796,7 +1796,7 @@ static const struct driver_info ax88179_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1809,7 +1809,7 @@ static const struct driver_info ax88178a_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1822,7 +1822,7 @@ static const struct driver_info cypress_GX3_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1835,7 +1835,7 @@ static const struct driver_info dlink_dub1312_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1848,7 +1848,7 @@ static const struct driver_info sitecom_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1861,7 +1861,7 @@ static const struct driver_info samsung_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1874,7 +1874,7 @@ static const struct driver_info lenovo_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1887,7 +1887,7 @@ static const struct driver_info belkin_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset	= ax88179_reset,
+ 	.stop	= ax88179_stop,
+-	.flags	= FLAG_ETHER | FLAG_FRAMING_AX,
++	.flags	= FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1900,7 +1900,7 @@ static const struct driver_info toshiba_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset	= ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags	= FLAG_ETHER | FLAG_FRAMING_AX,
++	.flags	= FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1913,7 +1913,7 @@ static const struct driver_info mct_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset	= ax88179_reset,
+ 	.stop	= ax88179_stop,
+-	.flags	= FLAG_ETHER | FLAG_FRAMING_AX,
++	.flags	= FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index a070e69bb49cd..4353443b89d81 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -1118,6 +1118,10 @@ static void hv_int_desc_free(struct hv_pci_dev *hpdev,
+ 		u8 buffer[sizeof(struct pci_delete_interrupt)];
+ 	} ctxt;
+ 
++	if (!int_desc->vector_count) {
++		kfree(int_desc);
++		return;
++	}
+ 	memset(&ctxt, 0, sizeof(ctxt));
+ 	int_pkt = (struct pci_delete_interrupt *)&ctxt.pkt.message;
+ 	int_pkt->message_type.type =
+@@ -1180,6 +1184,28 @@ static void hv_irq_mask(struct irq_data *data)
+ 	pci_msi_mask_irq(data);
+ }
+ 
++static unsigned int hv_msi_get_int_vector(struct irq_data *data)
++{
++	struct irq_cfg *cfg = irqd_cfg(data);
++
++	return cfg->vector;
++}
++
++static int hv_msi_prepare(struct irq_domain *domain, struct device *dev,
++			  int nvec, msi_alloc_info_t *info)
++{
++	int ret = pci_msi_prepare(domain, dev, nvec, info);
++
++	/*
++	 * By using the interrupt remapper in the hypervisor IOMMU, contiguous
++	 * CPU vectors is not needed for multi-MSI
++	 */
++	if (info->type == X86_IRQ_ALLOC_TYPE_PCI_MSI)
++		info->flags &= ~X86_IRQ_ALLOC_CONTIGUOUS_VECTORS;
++
++	return ret;
++}
++
+ /**
+  * hv_irq_unmask() - "Unmask" the IRQ by setting its current
+  * affinity.
+@@ -1195,6 +1221,7 @@ static void hv_irq_unmask(struct irq_data *data)
+ 	struct msi_desc *msi_desc = irq_data_get_msi_desc(data);
+ 	struct irq_cfg *cfg = irqd_cfg(data);
+ 	struct hv_retarget_device_interrupt *params;
++	struct tran_int_desc *int_desc;
+ 	struct hv_pcibus_device *hbus;
+ 	struct cpumask *dest;
+ 	cpumask_var_t tmp;
+@@ -1209,6 +1236,7 @@ static void hv_irq_unmask(struct irq_data *data)
+ 	pdev = msi_desc_to_pci_dev(msi_desc);
+ 	pbus = pdev->bus;
+ 	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
++	int_desc = data->chip_data;
+ 
+ 	spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags);
+ 
+@@ -1216,7 +1244,8 @@ static void hv_irq_unmask(struct irq_data *data)
+ 	memset(params, 0, sizeof(*params));
+ 	params->partition_id = HV_PARTITION_ID_SELF;
+ 	params->int_entry.source = 1; /* MSI(-X) */
+-	hv_set_msi_entry_from_desc(&params->int_entry.msi_entry, msi_desc);
++	params->int_entry.msi_entry.address = int_desc->address & 0xffffffff;
++	params->int_entry.msi_entry.data = int_desc->data;
+ 	params->device_id = (hbus->hdev->dev_instance.b[5] << 24) |
+ 			   (hbus->hdev->dev_instance.b[4] << 16) |
+ 			   (hbus->hdev->dev_instance.b[7] << 8) |
+@@ -1317,12 +1346,12 @@ static void hv_pci_compose_compl(void *context, struct pci_response *resp,
+ 
+ static u32 hv_compose_msi_req_v1(
+ 	struct pci_create_interrupt *int_pkt, struct cpumask *affinity,
+-	u32 slot, u8 vector)
++	u32 slot, u8 vector, u8 vector_count)
+ {
+ 	int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE;
+ 	int_pkt->wslot.slot = slot;
+ 	int_pkt->int_desc.vector = vector;
+-	int_pkt->int_desc.vector_count = 1;
++	int_pkt->int_desc.vector_count = vector_count;
+ 	int_pkt->int_desc.delivery_mode = dest_Fixed;
+ 
+ 	/*
+@@ -1336,14 +1365,14 @@ static u32 hv_compose_msi_req_v1(
+ 
+ static u32 hv_compose_msi_req_v2(
+ 	struct pci_create_interrupt2 *int_pkt, struct cpumask *affinity,
+-	u32 slot, u8 vector)
++	u32 slot, u8 vector, u8 vector_count)
+ {
+ 	int cpu;
+ 
+ 	int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE2;
+ 	int_pkt->wslot.slot = slot;
+ 	int_pkt->int_desc.vector = vector;
+-	int_pkt->int_desc.vector_count = 1;
++	int_pkt->int_desc.vector_count = vector_count;
+ 	int_pkt->int_desc.delivery_mode = dest_Fixed;
+ 
+ 	/*
+@@ -1371,7 +1400,6 @@ static u32 hv_compose_msi_req_v2(
+  */
+ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ {
+-	struct irq_cfg *cfg = irqd_cfg(data);
+ 	struct hv_pcibus_device *hbus;
+ 	struct vmbus_channel *channel;
+ 	struct hv_pci_dev *hpdev;
+@@ -1380,6 +1408,8 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 	struct cpumask *dest;
+ 	struct compose_comp_ctxt comp;
+ 	struct tran_int_desc *int_desc;
++	struct msi_desc *msi_desc;
++	u8 vector, vector_count;
+ 	struct {
+ 		struct pci_packet pci_pkt;
+ 		union {
+@@ -1391,7 +1421,17 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 	u32 size;
+ 	int ret;
+ 
+-	pdev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data));
++	/* Reuse the previous allocation */
++	if (data->chip_data) {
++		int_desc = data->chip_data;
++		msg->address_hi = int_desc->address >> 32;
++		msg->address_lo = int_desc->address & 0xffffffff;
++		msg->data = int_desc->data;
++		return;
++	}
++
++	msi_desc  = irq_data_get_msi_desc(data);
++	pdev = msi_desc_to_pci_dev(msi_desc);
+ 	dest = irq_data_get_effective_affinity_mask(data);
+ 	pbus = pdev->bus;
+ 	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
+@@ -1400,17 +1440,40 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 	if (!hpdev)
+ 		goto return_null_message;
+ 
+-	/* Free any previous message that might have already been composed. */
+-	if (data->chip_data) {
+-		int_desc = data->chip_data;
+-		data->chip_data = NULL;
+-		hv_int_desc_free(hpdev, int_desc);
+-	}
+-
+ 	int_desc = kzalloc(sizeof(*int_desc), GFP_ATOMIC);
+ 	if (!int_desc)
+ 		goto drop_reference;
+ 
++	if (!msi_desc->msi_attrib.is_msix && msi_desc->nvec_used > 1) {
++		/*
++		 * If this is not the first MSI of Multi MSI, we already have
++		 * a mapping.  Can exit early.
++		 */
++		if (msi_desc->irq != data->irq) {
++			data->chip_data = int_desc;
++			int_desc->address = msi_desc->msg.address_lo |
++					    (u64)msi_desc->msg.address_hi << 32;
++			int_desc->data = msi_desc->msg.data +
++					 (data->irq - msi_desc->irq);
++			msg->address_hi = msi_desc->msg.address_hi;
++			msg->address_lo = msi_desc->msg.address_lo;
++			msg->data = int_desc->data;
++			put_pcichild(hpdev);
++			return;
++		}
++		/*
++		 * The vector we select here is a dummy value.  The correct
++		 * value gets sent to the hypervisor in unmask().  This needs
++		 * to be aligned with the count, and also not zero.  Multi-msi
++		 * is powers of 2 up to 32, so 32 will always work here.
++		 */
++		vector = 32;
++		vector_count = msi_desc->nvec_used;
++	} else {
++		vector = hv_msi_get_int_vector(data);
++		vector_count = 1;
++	}
++
+ 	memset(&ctxt, 0, sizeof(ctxt));
+ 	init_completion(&comp.comp_pkt.host_event);
+ 	ctxt.pci_pkt.completion_func = hv_pci_compose_compl;
+@@ -1421,7 +1484,8 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 		size = hv_compose_msi_req_v1(&ctxt.int_pkts.v1,
+ 					dest,
+ 					hpdev->desc.win_slot.slot,
+-					cfg->vector);
++					vector,
++					vector_count);
+ 		break;
+ 
+ 	case PCI_PROTOCOL_VERSION_1_2:
+@@ -1429,7 +1493,8 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 		size = hv_compose_msi_req_v2(&ctxt.int_pkts.v2,
+ 					dest,
+ 					hpdev->desc.win_slot.slot,
+-					cfg->vector);
++					vector,
++					vector_count);
+ 		break;
+ 
+ 	default:
+@@ -1545,7 +1610,7 @@ static struct irq_chip hv_msi_irq_chip = {
+ };
+ 
+ static struct msi_domain_ops hv_msi_ops = {
+-	.msi_prepare	= pci_msi_prepare,
++	.msi_prepare	= hv_msi_prepare,
+ 	.msi_free	= hv_msi_free,
+ };
+ 
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index b017dd400c460..60406f1f8337e 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -1303,15 +1303,17 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl,
+ 	bank->bank_ioport_nr = bank_ioport_nr;
+ 	spin_lock_init(&bank->lock);
+ 
+-	/* create irq hierarchical domain */
+-	bank->fwnode = of_node_to_fwnode(np);
++	if (pctl->domain) {
++		/* create irq hierarchical domain */
++		bank->fwnode = of_node_to_fwnode(np);
+ 
+-	bank->domain = irq_domain_create_hierarchy(pctl->domain, 0,
+-					STM32_GPIO_IRQ_LINE, bank->fwnode,
+-					&stm32_gpio_domain_ops, bank);
++		bank->domain = irq_domain_create_hierarchy(pctl->domain, 0, STM32_GPIO_IRQ_LINE,
++							   bank->fwnode, &stm32_gpio_domain_ops,
++							   bank);
+ 
+-	if (!bank->domain)
+-		return -ENODEV;
++		if (!bank->domain)
++			return -ENODEV;
++	}
+ 
+ 	err = gpiochip_add_data(&bank->gpio_chip, bank);
+ 	if (err) {
+@@ -1481,6 +1483,8 @@ int stm32_pctl_probe(struct platform_device *pdev)
+ 	pctl->domain = stm32_pctrl_get_irq_domain(np);
+ 	if (IS_ERR(pctl->domain))
+ 		return PTR_ERR(pctl->domain);
++	if (!pctl->domain)
++		dev_warn(dev, "pinctrl without interrupt support\n");
+ 
+ 	/* hwspinlock is optional */
+ 	hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
+diff --git a/drivers/power/reset/arm-versatile-reboot.c b/drivers/power/reset/arm-versatile-reboot.c
+index 08d0a07b58ef2..c7624d7611a7e 100644
+--- a/drivers/power/reset/arm-versatile-reboot.c
++++ b/drivers/power/reset/arm-versatile-reboot.c
+@@ -146,6 +146,7 @@ static int __init versatile_reboot_probe(void)
+ 	versatile_reboot_type = (enum versatile_reboot)reboot_id->data;
+ 
+ 	syscon_regmap = syscon_node_to_regmap(np);
++	of_node_put(np);
+ 	if (IS_ERR(syscon_regmap))
+ 		return PTR_ERR(syscon_regmap);
+ 
+diff --git a/drivers/s390/char/keyboard.h b/drivers/s390/char/keyboard.h
+index c467589c7f452..c06d399b9b1f1 100644
+--- a/drivers/s390/char/keyboard.h
++++ b/drivers/s390/char/keyboard.h
+@@ -56,7 +56,7 @@ static inline void
+ kbd_put_queue(struct tty_port *port, int ch)
+ {
+ 	tty_insert_flip_char(port, ch, 0);
+-	tty_schedule_flip(port);
++	tty_flip_buffer_push(port);
+ }
+ 
+ static inline void
+@@ -64,5 +64,5 @@ kbd_puts_queue(struct tty_port *port, char *cp)
+ {
+ 	while (*cp)
+ 		tty_insert_flip_char(port, *cp++, 0);
+-	tty_schedule_flip(port);
++	tty_flip_buffer_push(port);
+ }
+diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c
+index 33c32e9317675..bb9d8386ba3b5 100644
+--- a/drivers/spi/spi-bcm2835.c
++++ b/drivers/spi/spi-bcm2835.c
+@@ -1174,10 +1174,14 @@ static void bcm2835_spi_handle_err(struct spi_controller *ctlr,
+ 	struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr);
+ 
+ 	/* if an error occurred and we have an active dma, then terminate */
+-	dmaengine_terminate_sync(ctlr->dma_tx);
+-	bs->tx_dma_active = false;
+-	dmaengine_terminate_sync(ctlr->dma_rx);
+-	bs->rx_dma_active = false;
++	if (ctlr->dma_tx) {
++		dmaengine_terminate_sync(ctlr->dma_tx);
++		bs->tx_dma_active = false;
++	}
++	if (ctlr->dma_rx) {
++		dmaengine_terminate_sync(ctlr->dma_rx);
++		bs->rx_dma_active = false;
++	}
+ 	bcm2835_spi_undo_prologue(bs);
+ 
+ 	/* and reset */
+diff --git a/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c b/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c
+index 09b0b8a16e994..2e971cbe2d7a7 100644
+--- a/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c
++++ b/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c
+@@ -267,6 +267,8 @@ static int rt2880_pinmux_pins(struct rt2880_priv *p)
+ 						p->func[i]->pin_count,
+ 						sizeof(int),
+ 						GFP_KERNEL);
++		if (!p->func[i]->pins)
++			return -ENOMEM;
+ 		for (j = 0; j < p->func[i]->pin_count; j++)
+ 			p->func[i]->pins[j] = p->func[i]->pin_first + j;
+ 
+diff --git a/drivers/tty/cyclades.c b/drivers/tty/cyclades.c
+index 097266342e5e3..4bcaab250676f 100644
+--- a/drivers/tty/cyclades.c
++++ b/drivers/tty/cyclades.c
+@@ -556,7 +556,7 @@ static void cyy_chip_rx(struct cyclades_card *cinfo, int chip,
+ 		}
+ 		info->idle_stats.recv_idle = jiffies;
+ 	}
+-	tty_schedule_flip(port);
++	tty_flip_buffer_push(port);
+ 
+ 	/* end of service */
+ 	cyy_writeb(info, CyRIR, save_xir & 0x3f);
+@@ -996,7 +996,7 @@ static void cyz_handle_rx(struct cyclades_port *info)
+ 		mod_timer(&info->rx_full_timer, jiffies + 1);
+ #endif
+ 	info->idle_stats.recv_idle = jiffies;
+-	tty_schedule_flip(&info->port);
++	tty_flip_buffer_push(&info->port);
+ 
+ 	/* Update rx_get */
+ 	cy_writel(&buf_ctrl->rx_get, new_rx_get);
+@@ -1172,7 +1172,7 @@ static void cyz_handle_cmd(struct cyclades_card *cinfo)
+ 		if (delta_count)
+ 			wake_up_interruptible(&info->port.delta_msr_wait);
+ 		if (special_count)
+-			tty_schedule_flip(&info->port);
++			tty_flip_buffer_push(&info->port);
+ 	}
+ }
+ 
+diff --git a/drivers/tty/goldfish.c b/drivers/tty/goldfish.c
+index 9180ca5e4dcd4..d6e82eb61fc2d 100644
+--- a/drivers/tty/goldfish.c
++++ b/drivers/tty/goldfish.c
+@@ -151,7 +151,7 @@ static irqreturn_t goldfish_tty_interrupt(int irq, void *dev_id)
+ 	address = (unsigned long)(void *)buf;
+ 	goldfish_tty_rw(qtty, address, count, 0);
+ 
+-	tty_schedule_flip(&qtty->port);
++	tty_flip_buffer_push(&qtty->port);
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/tty/moxa.c b/drivers/tty/moxa.c
+index f9f14104bd2c0..d6528f3bb9b99 100644
+--- a/drivers/tty/moxa.c
++++ b/drivers/tty/moxa.c
+@@ -1385,7 +1385,7 @@ static int moxa_poll_port(struct moxa_port *p, unsigned int handle,
+ 		if (inited && !tty_throttled(tty) &&
+ 				MoxaPortRxQueue(p) > 0) { /* RX */
+ 			MoxaPortReadData(p);
+-			tty_schedule_flip(&p->port);
++			tty_flip_buffer_push(&p->port);
+ 		}
+ 	} else {
+ 		clear_bit(EMPTYWAIT, &p->statusflags);
+@@ -1410,7 +1410,7 @@ static int moxa_poll_port(struct moxa_port *p, unsigned int handle,
+ 
+ 	if (tty && (intr & IntrBreak) && !I_IGNBRK(tty)) { /* BREAK */
+ 		tty_insert_flip_char(&p->port, 0, TTY_BREAK);
+-		tty_schedule_flip(&p->port);
++		tty_flip_buffer_push(&p->port);
+ 	}
+ 
+ 	if (intr & IntrLine)
+diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
+index 23368cec7ee84..16498f5fba64d 100644
+--- a/drivers/tty/pty.c
++++ b/drivers/tty/pty.c
+@@ -111,21 +111,11 @@ static void pty_unthrottle(struct tty_struct *tty)
+ static int pty_write(struct tty_struct *tty, const unsigned char *buf, int c)
+ {
+ 	struct tty_struct *to = tty->link;
+-	unsigned long flags;
+ 
+-	if (tty->stopped)
++	if (tty->stopped || !c)
+ 		return 0;
+ 
+-	if (c > 0) {
+-		spin_lock_irqsave(&to->port->lock, flags);
+-		/* Stuff the data into the input queue of the other end */
+-		c = tty_insert_flip_string(to->port, buf, c);
+-		spin_unlock_irqrestore(&to->port->lock, flags);
+-		/* And shovel */
+-		if (c)
+-			tty_flip_buffer_push(to->port);
+-	}
+-	return c;
++	return tty_insert_flip_string_and_push_buffer(to->port, buf, c);
+ }
+ 
+ /**
+diff --git a/drivers/tty/serial/lpc32xx_hs.c b/drivers/tty/serial/lpc32xx_hs.c
+index b5898c9320361..a9802308ff608 100644
+--- a/drivers/tty/serial/lpc32xx_hs.c
++++ b/drivers/tty/serial/lpc32xx_hs.c
+@@ -344,7 +344,7 @@ static irqreturn_t serial_lpc32xx_interrupt(int irq, void *dev_id)
+ 		       LPC32XX_HSUART_IIR(port->membase));
+ 		port->icount.overrun++;
+ 		tty_insert_flip_char(tport, 0, TTY_OVERRUN);
+-		tty_schedule_flip(tport);
++		tty_flip_buffer_push(tport);
+ 	}
+ 
+ 	/* Data received? */
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index 34ff2181afd1a..e941f57de9533 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -443,13 +443,13 @@ static void mvebu_uart_shutdown(struct uart_port *port)
+ 	}
+ }
+ 
+-static int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud)
++static unsigned int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud)
+ {
+ 	unsigned int d_divisor, m_divisor;
+ 	u32 brdv, osamp;
+ 
+ 	if (!port->uartclk)
+-		return -EOPNOTSUPP;
++		return 0;
+ 
+ 	/*
+ 	 * The baudrate is derived from the UART clock thanks to two divisors:
+@@ -473,7 +473,7 @@ static int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud)
+ 	osamp &= ~OSAMP_DIVISORS_MASK;
+ 	writel(osamp, port->membase + UART_OSAMP);
+ 
+-	return 0;
++	return DIV_ROUND_CLOSEST(port->uartclk, d_divisor * m_divisor);
+ }
+ 
+ static void mvebu_uart_set_termios(struct uart_port *port,
+@@ -510,15 +510,11 @@ static void mvebu_uart_set_termios(struct uart_port *port,
+ 	max_baud = 230400;
+ 
+ 	baud = uart_get_baud_rate(port, termios, old, min_baud, max_baud);
+-	if (mvebu_uart_baud_rate_set(port, baud)) {
+-		/* No clock available, baudrate cannot be changed */
+-		if (old)
+-			baud = uart_get_baud_rate(port, old, NULL,
+-						  min_baud, max_baud);
+-	} else {
+-		tty_termios_encode_baud_rate(termios, baud, baud);
+-		uart_update_timeout(port, termios->c_cflag, baud);
+-	}
++	baud = mvebu_uart_baud_rate_set(port, baud);
++
++	/* In case baudrate cannot be changed, report previous old value */
++	if (baud == 0 && old)
++		baud = tty_termios_baud_rate(old);
+ 
+ 	/* Only the following flag changes are supported */
+ 	if (old) {
+@@ -529,6 +525,11 @@ static void mvebu_uart_set_termios(struct uart_port *port,
+ 		termios->c_cflag |= CS8;
+ 	}
+ 
++	if (baud != 0) {
++		tty_termios_encode_baud_rate(termios, baud, baud);
++		uart_update_timeout(port, termios->c_cflag, baud);
++	}
++
+ 	spin_unlock_irqrestore(&port->lock, flags);
+ }
+ 
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index 6c4a50addadd8..5bbc2e010b483 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -394,27 +394,6 @@ int __tty_insert_flip_char(struct tty_port *port, unsigned char ch, char flag)
+ }
+ EXPORT_SYMBOL(__tty_insert_flip_char);
+ 
+-/**
+- *	tty_schedule_flip	-	push characters to ldisc
+- *	@port: tty port to push from
+- *
+- *	Takes any pending buffers and transfers their ownership to the
+- *	ldisc side of the queue. It then schedules those characters for
+- *	processing by the line discipline.
+- */
+-
+-void tty_schedule_flip(struct tty_port *port)
+-{
+-	struct tty_bufhead *buf = &port->buf;
+-
+-	/* paired w/ acquire in flush_to_ldisc(); ensures
+-	 * flush_to_ldisc() sees buffer data.
+-	 */
+-	smp_store_release(&buf->tail->commit, buf->tail->used);
+-	queue_work(system_unbound_wq, &buf->work);
+-}
+-EXPORT_SYMBOL(tty_schedule_flip);
+-
+ /**
+  *	tty_prepare_flip_string		-	make room for characters
+  *	@port: tty port
+@@ -544,6 +523,15 @@ static void flush_to_ldisc(struct work_struct *work)
+ 
+ }
+ 
++static inline void tty_flip_buffer_commit(struct tty_buffer *tail)
++{
++	/*
++	 * Paired w/ acquire in flush_to_ldisc(); ensures flush_to_ldisc() sees
++	 * buffer data.
++	 */
++	smp_store_release(&tail->commit, tail->used);
++}
++
+ /**
+  *	tty_flip_buffer_push	-	terminal
+  *	@port: tty port to push
+@@ -557,10 +545,44 @@ static void flush_to_ldisc(struct work_struct *work)
+ 
+ void tty_flip_buffer_push(struct tty_port *port)
+ {
+-	tty_schedule_flip(port);
++	struct tty_bufhead *buf = &port->buf;
++
++	tty_flip_buffer_commit(buf->tail);
++	queue_work(system_unbound_wq, &buf->work);
+ }
+ EXPORT_SYMBOL(tty_flip_buffer_push);
+ 
++/**
++ * tty_insert_flip_string_and_push_buffer - add characters to the tty buffer and
++ *	push
++ * @port: tty port
++ * @chars: characters
++ * @size: size
++ *
++ * The function combines tty_insert_flip_string() and tty_flip_buffer_push()
++ * with the exception of properly holding the @port->lock.
++ *
++ * To be used only internally (by pty currently).
++ *
++ * Returns: the number added.
++ */
++int tty_insert_flip_string_and_push_buffer(struct tty_port *port,
++		const unsigned char *chars, size_t size)
++{
++	struct tty_bufhead *buf = &port->buf;
++	unsigned long flags;
++
++	spin_lock_irqsave(&port->lock, flags);
++	size = tty_insert_flip_string(port, chars, size);
++	if (size)
++		tty_flip_buffer_commit(buf->tail);
++	spin_unlock_irqrestore(&port->lock, flags);
++
++	queue_work(system_unbound_wq, &buf->work);
++
++	return size;
++}
++
+ /**
+  *	tty_buffer_init		-	prepare a tty buffer structure
+  *	@port: tty port to initialise
+diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
+index 78acc270e39ac..aa0026a9839c8 100644
+--- a/drivers/tty/vt/keyboard.c
++++ b/drivers/tty/vt/keyboard.c
+@@ -311,7 +311,7 @@ int kbd_rate(struct kbd_repeat *rpt)
+ static void put_queue(struct vc_data *vc, int ch)
+ {
+ 	tty_insert_flip_char(&vc->port, ch, 0);
+-	tty_schedule_flip(&vc->port);
++	tty_flip_buffer_push(&vc->port);
+ }
+ 
+ static void puts_queue(struct vc_data *vc, char *cp)
+@@ -320,7 +320,7 @@ static void puts_queue(struct vc_data *vc, char *cp)
+ 		tty_insert_flip_char(&vc->port, *cp, 0);
+ 		cp++;
+ 	}
+-	tty_schedule_flip(&vc->port);
++	tty_flip_buffer_push(&vc->port);
+ }
+ 
+ static void applkey(struct vc_data *vc, int key, char mode)
+@@ -565,7 +565,7 @@ static void fn_inc_console(struct vc_data *vc)
+ static void fn_send_intr(struct vc_data *vc)
+ {
+ 	tty_insert_flip_char(&vc->port, 0, TTY_BREAK);
+-	tty_schedule_flip(&vc->port);
++	tty_flip_buffer_push(&vc->port);
+ }
+ 
+ static void fn_scroll_forw(struct vc_data *vc)
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index f043fd7e0f924..2ebe73b116dc7 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -1834,7 +1834,7 @@ static void csi_m(struct vc_data *vc)
+ static void respond_string(const char *p, size_t len, struct tty_port *port)
+ {
+ 	tty_insert_flip_string(port, p, len);
+-	tty_schedule_flip(port);
++	tty_flip_buffer_push(port);
+ }
+ 
+ static void cursor_report(struct vc_data *vc, struct tty_struct *tty)
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index f415c056ff8ab..54fee4087bf10 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -401,7 +401,8 @@ static void __unmap_grant_pages_done(int result,
+ 	unsigned int offset = data->unmap_ops - map->unmap_ops;
+ 
+ 	for (i = 0; i < data->count; i++) {
+-		WARN_ON(map->unmap_ops[offset+i].status);
++		WARN_ON(map->unmap_ops[offset+i].status &&
++			map->unmap_ops[offset+i].handle != -1);
+ 		pr_debug("unmap handle=%d st=%d\n",
+ 			map->unmap_ops[offset+i].handle,
+ 			map->unmap_ops[offset+i].status);
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index 2ce96a9ce63c0..eaa28d654e9f0 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -4067,13 +4067,14 @@ static void send_repeat_remove(struct dlm_ls *ls, char *ms_name, int len)
+ 	rv = _create_message(ls, sizeof(struct dlm_message) + len,
+ 			     dir_nodeid, DLM_MSG_REMOVE, &ms, &mh);
+ 	if (rv)
+-		return;
++		goto out;
+ 
+ 	memcpy(ms->m_extra, name, len);
+ 	ms->m_hash = hash;
+ 
+ 	send_message(mh, ms);
+ 
++out:
+ 	spin_lock(&ls->ls_remove_spin);
+ 	ls->ls_remove_len = 0;
+ 	memset(ls->ls_remove_name, 0, DLM_RESNAME_MAXLEN);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 023c3eb34dcca..a952288b2ab8e 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1325,7 +1325,7 @@ static void io_req_clean_work(struct io_kiocb *req)
+  */
+ static bool io_identity_cow(struct io_kiocb *req)
+ {
+-	struct io_uring_task *tctx = current->io_uring;
++	struct io_uring_task *tctx = req->task->io_uring;
+ 	const struct cred *creds = NULL;
+ 	struct io_identity *id;
+ 
+diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
+index 4e035aca6f7e6..6093fa6db2600 100644
+--- a/include/linux/bitfield.h
++++ b/include/linux/bitfield.h
+@@ -41,6 +41,22 @@
+ 
+ #define __bf_shf(x) (__builtin_ffsll(x) - 1)
+ 
++#define __scalar_type_to_unsigned_cases(type)				\
++		unsigned type:	(unsigned type)0,			\
++		signed type:	(unsigned type)0
++
++#define __unsigned_scalar_typeof(x) typeof(				\
++		_Generic((x),						\
++			char:	(unsigned char)0,			\
++			__scalar_type_to_unsigned_cases(char),		\
++			__scalar_type_to_unsigned_cases(short),		\
++			__scalar_type_to_unsigned_cases(int),		\
++			__scalar_type_to_unsigned_cases(long),		\
++			__scalar_type_to_unsigned_cases(long long),	\
++			default: (x)))
++
++#define __bf_cast_unsigned(type, x)	((__unsigned_scalar_typeof(type))(x))
++
+ #define __BF_FIELD_CHECK(_mask, _reg, _val, _pfx)			\
+ 	({								\
+ 		BUILD_BUG_ON_MSG(!__builtin_constant_p(_mask),		\
+@@ -49,7 +65,8 @@
+ 		BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ?		\
+ 				 ~((_mask) >> __bf_shf(_mask)) & (_val) : 0, \
+ 				 _pfx "value too large for the field"); \
+-		BUILD_BUG_ON_MSG((_mask) > (typeof(_reg))~0ull,		\
++		BUILD_BUG_ON_MSG(__bf_cast_unsigned(_mask, _mask) >	\
++				 __bf_cast_unsigned(_reg, ~0ull),	\
+ 				 _pfx "type of reg too small for mask"); \
+ 		__BUILD_BUG_ON_NOT_POWER_OF_2((_mask) +			\
+ 					      (1ULL << __bf_shf(_mask))); \
+diff --git a/include/linux/tty_flip.h b/include/linux/tty_flip.h
+index 767f62086bd9b..c326bfdb5ec2c 100644
+--- a/include/linux/tty_flip.h
++++ b/include/linux/tty_flip.h
+@@ -12,7 +12,6 @@ extern int tty_insert_flip_string_fixed_flag(struct tty_port *port,
+ extern int tty_prepare_flip_string(struct tty_port *port,
+ 		unsigned char **chars, size_t size);
+ extern void tty_flip_buffer_push(struct tty_port *port);
+-void tty_schedule_flip(struct tty_port *port);
+ int __tty_insert_flip_char(struct tty_port *port, unsigned char ch, char flag);
+ 
+ static inline int tty_insert_flip_char(struct tty_port *port,
+@@ -40,4 +39,7 @@ static inline int tty_insert_flip_string(struct tty_port *port,
+ extern void tty_buffer_lock_exclusive(struct tty_port *port);
+ extern void tty_buffer_unlock_exclusive(struct tty_port *port);
+ 
++int tty_insert_flip_string_and_push_buffer(struct tty_port *port,
++		const unsigned char *chars, size_t cnt);
++
+ #endif /* _LINUX_TTY_FLIP_H */
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index 3fecc4a411a13..355835639ae58 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -422,6 +422,71 @@ out:
+ 	return NULL;
+ }
+ 
++/* Shall not be called with lock_sock held */
++static inline struct sk_buff *bt_skb_sendmsg(struct sock *sk,
++					     struct msghdr *msg,
++					     size_t len, size_t mtu,
++					     size_t headroom, size_t tailroom)
++{
++	struct sk_buff *skb;
++	size_t size = min_t(size_t, len, mtu);
++	int err;
++
++	skb = bt_skb_send_alloc(sk, size + headroom + tailroom,
++				msg->msg_flags & MSG_DONTWAIT, &err);
++	if (!skb)
++		return ERR_PTR(err);
++
++	skb_reserve(skb, headroom);
++	skb_tailroom_reserve(skb, mtu, tailroom);
++
++	if (!copy_from_iter_full(skb_put(skb, size), size, &msg->msg_iter)) {
++		kfree_skb(skb);
++		return ERR_PTR(-EFAULT);
++	}
++
++	skb->priority = sk->sk_priority;
++
++	return skb;
++}
++
++/* Similar to bt_skb_sendmsg but can split the msg into multiple fragments
++ * accourding to the MTU.
++ */
++static inline struct sk_buff *bt_skb_sendmmsg(struct sock *sk,
++					      struct msghdr *msg,
++					      size_t len, size_t mtu,
++					      size_t headroom, size_t tailroom)
++{
++	struct sk_buff *skb, **frag;
++
++	skb = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom);
++	if (IS_ERR_OR_NULL(skb))
++		return skb;
++
++	len -= skb->len;
++	if (!len)
++		return skb;
++
++	/* Add remaining data over MTU as continuation fragments */
++	frag = &skb_shinfo(skb)->frag_list;
++	while (len) {
++		struct sk_buff *tmp;
++
++		tmp = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom);
++		if (IS_ERR(tmp)) {
++			return skb;
++		}
++
++		len -= tmp->len;
++
++		*frag = tmp;
++		frag = &(*frag)->next;
++	}
++
++	return skb;
++}
++
+ int bt_to_errno(u16 code);
+ 
+ void hci_sock_set_flag(struct sock *sk, int nr);
+diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
+index 89163ef8cf4be..3c039d4b0e480 100644
+--- a/include/net/inet_sock.h
++++ b/include/net/inet_sock.h
+@@ -107,7 +107,8 @@ static inline struct inet_request_sock *inet_rsk(const struct request_sock *sk)
+ 
+ static inline u32 inet_request_mark(const struct sock *sk, struct sk_buff *skb)
+ {
+-	if (!sk->sk_mark && sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept)
++	if (!sk->sk_mark &&
++	    READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept))
+ 		return skb->mark;
+ 
+ 	return sk->sk_mark;
+@@ -369,7 +370,7 @@ static inline bool inet_get_convert_csum(struct sock *sk)
+ static inline bool inet_can_nonlocal_bind(struct net *net,
+ 					  struct inet_sock *inet)
+ {
+-	return net->ipv4.sysctl_ip_nonlocal_bind ||
++	return READ_ONCE(net->ipv4.sysctl_ip_nonlocal_bind) ||
+ 		inet->freebind || inet->transparent;
+ }
+ 
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 76aaa7eb5b823..c5822d7824cd0 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -352,7 +352,7 @@ static inline bool sysctl_dev_name_is_allowed(const char *name)
+ 
+ static inline bool inet_port_requires_bind_service(struct net *net, unsigned short port)
+ {
+-	return port < net->ipv4.sysctl_ip_prot_sock;
++	return port < READ_ONCE(net->ipv4.sysctl_ip_prot_sock);
+ }
+ 
+ #else
+@@ -379,7 +379,7 @@ void ipfrag_init(void);
+ void ip_static_sysctl_init(void);
+ 
+ #define IP4_REPLY_MARK(net, mark) \
+-	((net)->ipv4.sysctl_fwmark_reflect ? (mark) : 0)
++	(READ_ONCE((net)->ipv4.sysctl_fwmark_reflect) ? (mark) : 0)
+ 
+ static inline bool ip_is_fragment(const struct iphdr *iph)
+ {
+@@ -440,7 +440,7 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
+ 	struct net *net = dev_net(dst->dev);
+ 	unsigned int mtu;
+ 
+-	if (net->ipv4.sysctl_ip_fwd_use_pmtu ||
++	if (READ_ONCE(net->ipv4.sysctl_ip_fwd_use_pmtu) ||
+ 	    ip_mtu_locked(dst) ||
+ 	    !forwarding)
+ 		return dst_mtu(dst);
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 2a28e09255735..44bfb22069c1f 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1380,8 +1380,8 @@ static inline void tcp_slow_start_after_idle_check(struct sock *sk)
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	s32 delta;
+ 
+-	if (!sock_net(sk)->ipv4.sysctl_tcp_slow_start_after_idle || tp->packets_out ||
+-	    ca_ops->cong_control)
++	if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_slow_start_after_idle) ||
++	    tp->packets_out || ca_ops->cong_control)
+ 		return;
+ 	delta = tcp_jiffies32 - tp->lsndtime;
+ 	if (delta > inet_csk(sk)->icsk_rto)
+@@ -1447,21 +1447,24 @@ static inline int keepalive_intvl_when(const struct tcp_sock *tp)
+ {
+ 	struct net *net = sock_net((struct sock *)tp);
+ 
+-	return tp->keepalive_intvl ? : net->ipv4.sysctl_tcp_keepalive_intvl;
++	return tp->keepalive_intvl ? :
++		READ_ONCE(net->ipv4.sysctl_tcp_keepalive_intvl);
+ }
+ 
+ static inline int keepalive_time_when(const struct tcp_sock *tp)
+ {
+ 	struct net *net = sock_net((struct sock *)tp);
+ 
+-	return tp->keepalive_time ? : net->ipv4.sysctl_tcp_keepalive_time;
++	return tp->keepalive_time ? :
++		READ_ONCE(net->ipv4.sysctl_tcp_keepalive_time);
+ }
+ 
+ static inline int keepalive_probes(const struct tcp_sock *tp)
+ {
+ 	struct net *net = sock_net((struct sock *)tp);
+ 
+-	return tp->keepalive_probes ? : net->ipv4.sysctl_tcp_keepalive_probes;
++	return tp->keepalive_probes ? :
++		READ_ONCE(net->ipv4.sysctl_tcp_keepalive_probes);
+ }
+ 
+ static inline u32 keepalive_time_elapsed(const struct tcp_sock *tp)
+@@ -1474,7 +1477,8 @@ static inline u32 keepalive_time_elapsed(const struct tcp_sock *tp)
+ 
+ static inline int tcp_fin_time(const struct sock *sk)
+ {
+-	int fin_timeout = tcp_sk(sk)->linger2 ? : sock_net(sk)->ipv4.sysctl_tcp_fin_timeout;
++	int fin_timeout = tcp_sk(sk)->linger2 ? :
++		READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fin_timeout);
+ 	const int rto = inet_csk(sk)->icsk_rto;
+ 
+ 	if (fin_timeout < (rto << 2) - (rto >> 1))
+@@ -1969,7 +1973,7 @@ void __tcp_v4_send_check(struct sk_buff *skb, __be32 saddr, __be32 daddr);
+ static inline u32 tcp_notsent_lowat(const struct tcp_sock *tp)
+ {
+ 	struct net *net = sock_net((struct sock *)tp);
+-	return tp->notsent_lowat ?: net->ipv4.sysctl_tcp_notsent_lowat;
++	return tp->notsent_lowat ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat);
+ }
+ 
+ /* @wake is one when sk_stream_write_space() calls us.
+diff --git a/include/net/udp.h b/include/net/udp.h
+index 4017f257628f3..010bc324f860b 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -259,7 +259,7 @@ static inline bool udp_sk_bound_dev_eq(struct net *net, int bound_dev_if,
+ 				       int dif, int sdif)
+ {
+ #if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV)
+-	return inet_bound_dev_eq(!!net->ipv4.sysctl_udp_l3mdev_accept,
++	return inet_bound_dev_eq(!!READ_ONCE(net->ipv4.sysctl_udp_l3mdev_accept),
+ 				 bound_dev_if, dif, sdif);
+ #else
+ 	return inet_bound_dev_eq(true, bound_dev_if, dif, sdif);
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 845a4c0524332..fd2aa6b9909ec 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -66,11 +66,13 @@ void *bpf_internal_load_pointer_neg_helper(const struct sk_buff *skb, int k, uns
+ {
+ 	u8 *ptr = NULL;
+ 
+-	if (k >= SKF_NET_OFF)
++	if (k >= SKF_NET_OFF) {
+ 		ptr = skb_network_header(skb) + k - SKF_NET_OFF;
+-	else if (k >= SKF_LL_OFF)
++	} else if (k >= SKF_LL_OFF) {
++		if (unlikely(!skb_mac_header_was_set(skb)))
++			return NULL;
+ 		ptr = skb_mac_header(skb) + k - SKF_LL_OFF;
+-
++	}
+ 	if (ptr >= skb->head && ptr + size <= skb_tail_pointer(skb))
+ 		return ptr;
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 8ba155a7b59ed..0e01216f4e5af 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6228,10 +6228,10 @@ again:
+ 
+ 		if (!atomic_inc_not_zero(&event->rb->mmap_count)) {
+ 			/*
+-			 * Raced against perf_mmap_close() through
+-			 * perf_event_set_output(). Try again, hope for better
+-			 * luck.
++			 * Raced against perf_mmap_close(); remove the
++			 * event and try again.
+ 			 */
++			ring_buffer_attach(event, NULL);
+ 			mutex_unlock(&event->mmap_mutex);
+ 			goto again;
+ 		}
+@@ -11587,14 +11587,25 @@ err_size:
+ 	goto out;
+ }
+ 
++static void mutex_lock_double(struct mutex *a, struct mutex *b)
++{
++	if (b < a)
++		swap(a, b);
++
++	mutex_lock(a);
++	mutex_lock_nested(b, SINGLE_DEPTH_NESTING);
++}
++
+ static int
+ perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
+ {
+ 	struct perf_buffer *rb = NULL;
+ 	int ret = -EINVAL;
+ 
+-	if (!output_event)
++	if (!output_event) {
++		mutex_lock(&event->mmap_mutex);
+ 		goto set;
++	}
+ 
+ 	/* don't allow circular references */
+ 	if (event == output_event)
+@@ -11632,8 +11643,15 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
+ 	    event->pmu != output_event->pmu)
+ 		goto out;
+ 
++	/*
++	 * Hold both mmap_mutex to serialize against perf_mmap_close().  Since
++	 * output_event is already on rb->event_list, and the list iteration
++	 * restarts after every removal, it is guaranteed this new event is
++	 * observed *OR* if output_event is already removed, it's guaranteed we
++	 * observe !rb->mmap_count.
++	 */
++	mutex_lock_double(&event->mmap_mutex, &output_event->mmap_mutex);
+ set:
+-	mutex_lock(&event->mmap_mutex);
+ 	/* Can't redirect output if we've got an active mmap() */
+ 	if (atomic_read(&event->mmap_count))
+ 		goto unlock;
+@@ -11643,6 +11661,12 @@ set:
+ 		rb = ring_buffer_get(output_event);
+ 		if (!rb)
+ 			goto unlock;
++
++		/* did we race against perf_mmap_close() */
++		if (!atomic_read(&rb->mmap_count)) {
++			ring_buffer_put(rb);
++			goto unlock;
++		}
+ 	}
+ 
+ 	ring_buffer_attach(event, rb);
+@@ -11650,20 +11674,13 @@ set:
+ 	ret = 0;
+ unlock:
+ 	mutex_unlock(&event->mmap_mutex);
++	if (output_event)
++		mutex_unlock(&output_event->mmap_mutex);
+ 
+ out:
+ 	return ret;
+ }
+ 
+-static void mutex_lock_double(struct mutex *a, struct mutex *b)
+-{
+-	if (b < a)
+-		swap(a, b);
+-
+-	mutex_lock(a);
+-	mutex_lock_nested(b, SINGLE_DEPTH_NESTING);
+-}
+-
+ static int perf_event_set_clock(struct perf_event *event, clockid_t clk_id)
+ {
+ 	bool nmi_safe = false;
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index a3ae00c348a8b..933706106b983 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1563,7 +1563,10 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
+ 		 * the throttle.
+ 		 */
+ 		p->dl.dl_throttled = 0;
+-		BUG_ON(!is_dl_boosted(&p->dl) || flags != ENQUEUE_REPLENISH);
++		if (!(flags & ENQUEUE_REPLENISH))
++			printk_deferred_once("sched: DL de-boosted task PID %d: REPLENISH flag missing\n",
++					     task_pid_nr(p));
++
+ 		return;
+ 	}
+ 
+diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
+index 249ed32591449..e5d22af43fa0b 100644
+--- a/kernel/watch_queue.c
++++ b/kernel/watch_queue.c
+@@ -34,6 +34,27 @@ MODULE_LICENSE("GPL");
+ #define WATCH_QUEUE_NOTE_SIZE 128
+ #define WATCH_QUEUE_NOTES_PER_PAGE (PAGE_SIZE / WATCH_QUEUE_NOTE_SIZE)
+ 
++/*
++ * This must be called under the RCU read-lock, which makes
++ * sure that the wqueue still exists. It can then take the lock,
++ * and check that the wqueue hasn't been destroyed, which in
++ * turn makes sure that the notification pipe still exists.
++ */
++static inline bool lock_wqueue(struct watch_queue *wqueue)
++{
++	spin_lock_bh(&wqueue->lock);
++	if (unlikely(wqueue->defunct)) {
++		spin_unlock_bh(&wqueue->lock);
++		return false;
++	}
++	return true;
++}
++
++static inline void unlock_wqueue(struct watch_queue *wqueue)
++{
++	spin_unlock_bh(&wqueue->lock);
++}
++
+ static void watch_queue_pipe_buf_release(struct pipe_inode_info *pipe,
+ 					 struct pipe_buffer *buf)
+ {
+@@ -69,6 +90,10 @@ static const struct pipe_buf_operations watch_queue_pipe_buf_ops = {
+ 
+ /*
+  * Post a notification to a watch queue.
++ *
++ * Must be called with the RCU lock for reading, and the
++ * watch_queue lock held, which guarantees that the pipe
++ * hasn't been released.
+  */
+ static bool post_one_notification(struct watch_queue *wqueue,
+ 				  struct watch_notification *n)
+@@ -85,9 +110,6 @@ static bool post_one_notification(struct watch_queue *wqueue,
+ 
+ 	spin_lock_irq(&pipe->rd_wait.lock);
+ 
+-	if (wqueue->defunct)
+-		goto out;
+-
+ 	mask = pipe->ring_size - 1;
+ 	head = pipe->head;
+ 	tail = pipe->tail;
+@@ -203,7 +225,10 @@ void __post_watch_notification(struct watch_list *wlist,
+ 		if (security_post_notification(watch->cred, cred, n) < 0)
+ 			continue;
+ 
+-		post_one_notification(wqueue, n);
++		if (lock_wqueue(wqueue)) {
++			post_one_notification(wqueue, n);
++			unlock_wqueue(wqueue);
++		}
+ 	}
+ 
+ 	rcu_read_unlock();
+@@ -465,11 +490,12 @@ int add_watch_to_object(struct watch *watch, struct watch_list *wlist)
+ 		return -EAGAIN;
+ 	}
+ 
+-	spin_lock_bh(&wqueue->lock);
+-	kref_get(&wqueue->usage);
+-	kref_get(&watch->usage);
+-	hlist_add_head(&watch->queue_node, &wqueue->watches);
+-	spin_unlock_bh(&wqueue->lock);
++	if (lock_wqueue(wqueue)) {
++		kref_get(&wqueue->usage);
++		kref_get(&watch->usage);
++		hlist_add_head(&watch->queue_node, &wqueue->watches);
++		unlock_wqueue(wqueue);
++	}
+ 
+ 	hlist_add_head(&watch->list_node, &wlist->watchers);
+ 	return 0;
+@@ -523,20 +549,15 @@ found:
+ 
+ 	wqueue = rcu_dereference(watch->queue);
+ 
+-	/* We don't need the watch list lock for the next bit as RCU is
+-	 * protecting *wqueue from deallocation.
+-	 */
+-	if (wqueue) {
++	if (lock_wqueue(wqueue)) {
+ 		post_one_notification(wqueue, &n.watch);
+ 
+-		spin_lock_bh(&wqueue->lock);
+-
+ 		if (!hlist_unhashed(&watch->queue_node)) {
+ 			hlist_del_init_rcu(&watch->queue_node);
+ 			put_watch(watch);
+ 		}
+ 
+-		spin_unlock_bh(&wqueue->lock);
++		unlock_wqueue(wqueue);
+ 	}
+ 
+ 	if (wlist->release_watch) {
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 7315b978e834b..f9f47449e8dda 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -374,7 +374,7 @@ static void mpol_rebind_preferred(struct mempolicy *pol,
+  */
+ static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask)
+ {
+-	if (!pol)
++	if (!pol || pol->mode == MPOL_LOCAL)
+ 		return;
+ 	if (!mpol_store_user_nodemask(pol) && !(pol->flags & MPOL_F_LOCAL) &&
+ 	    nodes_equal(pol->w.cpuset_mems_allowed, *newmask))
+diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
+index d12c9a8a9953e..64a94c9812da1 100644
+--- a/net/8021q/vlan.c
++++ b/net/8021q/vlan.c
+@@ -278,9 +278,7 @@ static int register_vlan_device(struct net_device *real_dev, u16 vlan_id)
+ 	return 0;
+ 
+ out_free_newdev:
+-	if (new_dev->reg_state == NETREG_UNINITIALIZED ||
+-	    new_dev->reg_state == NETREG_UNREGISTERED)
+-		free_netdev(new_dev);
++	free_netdev(new_dev);
+ 	return err;
+ }
+ 
+diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c
+index f2bacb464ccf3..7324764384b67 100644
+--- a/net/bluetooth/rfcomm/core.c
++++ b/net/bluetooth/rfcomm/core.c
+@@ -549,22 +549,58 @@ struct rfcomm_dlc *rfcomm_dlc_exists(bdaddr_t *src, bdaddr_t *dst, u8 channel)
+ 	return dlc;
+ }
+ 
++static int rfcomm_dlc_send_frag(struct rfcomm_dlc *d, struct sk_buff *frag)
++{
++	int len = frag->len;
++
++	BT_DBG("dlc %p mtu %d len %d", d, d->mtu, len);
++
++	if (len > d->mtu)
++		return -EINVAL;
++
++	rfcomm_make_uih(frag, d->addr);
++	__skb_queue_tail(&d->tx_queue, frag);
++
++	return len;
++}
++
+ int rfcomm_dlc_send(struct rfcomm_dlc *d, struct sk_buff *skb)
+ {
+-	int len = skb->len;
++	unsigned long flags;
++	struct sk_buff *frag, *next;
++	int len;
+ 
+ 	if (d->state != BT_CONNECTED)
+ 		return -ENOTCONN;
+ 
+-	BT_DBG("dlc %p mtu %d len %d", d, d->mtu, len);
++	frag = skb_shinfo(skb)->frag_list;
++	skb_shinfo(skb)->frag_list = NULL;
+ 
+-	if (len > d->mtu)
+-		return -EINVAL;
++	/* Queue all fragments atomically. */
++	spin_lock_irqsave(&d->tx_queue.lock, flags);
+ 
+-	rfcomm_make_uih(skb, d->addr);
+-	skb_queue_tail(&d->tx_queue, skb);
++	len = rfcomm_dlc_send_frag(d, skb);
++	if (len < 0 || !frag)
++		goto unlock;
++
++	for (; frag; frag = next) {
++		int ret;
++
++		next = frag->next;
++
++		ret = rfcomm_dlc_send_frag(d, frag);
++		if (ret < 0) {
++			kfree_skb(frag);
++			goto unlock;
++		}
++
++		len += ret;
++	}
++
++unlock:
++	spin_unlock_irqrestore(&d->tx_queue.lock, flags);
+ 
+-	if (!test_bit(RFCOMM_TX_THROTTLED, &d->flags))
++	if (len > 0 && !test_bit(RFCOMM_TX_THROTTLED, &d->flags))
+ 		rfcomm_schedule();
+ 	return len;
+ }
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index ae6f807305617..4cf1fa9900cae 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -575,46 +575,20 @@ static int rfcomm_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 	lock_sock(sk);
+ 
+ 	sent = bt_sock_wait_ready(sk, msg->msg_flags);
+-	if (sent)
+-		goto done;
+-
+-	while (len) {
+-		size_t size = min_t(size_t, len, d->mtu);
+-		int err;
+-
+-		skb = sock_alloc_send_skb(sk, size + RFCOMM_SKB_RESERVE,
+-				msg->msg_flags & MSG_DONTWAIT, &err);
+-		if (!skb) {
+-			if (sent == 0)
+-				sent = err;
+-			break;
+-		}
+-		skb_reserve(skb, RFCOMM_SKB_HEAD_RESERVE);
+-
+-		err = memcpy_from_msg(skb_put(skb, size), msg, size);
+-		if (err) {
+-			kfree_skb(skb);
+-			if (sent == 0)
+-				sent = err;
+-			break;
+-		}
+ 
+-		skb->priority = sk->sk_priority;
++	release_sock(sk);
+ 
+-		err = rfcomm_dlc_send(d, skb);
+-		if (err < 0) {
+-			kfree_skb(skb);
+-			if (sent == 0)
+-				sent = err;
+-			break;
+-		}
++	if (sent)
++		return sent;
+ 
+-		sent += size;
+-		len  -= size;
+-	}
++	skb = bt_skb_sendmmsg(sk, msg, len, d->mtu, RFCOMM_SKB_HEAD_RESERVE,
++			      RFCOMM_SKB_TAIL_RESERVE);
++	if (IS_ERR(skb))
++		return PTR_ERR(skb);
+ 
+-done:
+-	release_sock(sk);
++	sent = rfcomm_dlc_send(d, skb);
++	if (sent < 0)
++		kfree_skb(skb);
+ 
+ 	return sent;
+ }
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index df254c7de2dd6..8244d3ae185bf 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -280,12 +280,10 @@ static int sco_connect(struct hci_dev *hdev, struct sock *sk)
+ 	return err;
+ }
+ 
+-static int sco_send_frame(struct sock *sk, void *buf, int len,
+-			  unsigned int msg_flags)
++static int sco_send_frame(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct sco_conn *conn = sco_pi(sk)->conn;
+-	struct sk_buff *skb;
+-	int err;
++	int len = skb->len;
+ 
+ 	/* Check outgoing MTU */
+ 	if (len > conn->mtu)
+@@ -293,11 +291,6 @@ static int sco_send_frame(struct sock *sk, void *buf, int len,
+ 
+ 	BT_DBG("sk %p len %d", sk, len);
+ 
+-	skb = bt_skb_send_alloc(sk, len, msg_flags & MSG_DONTWAIT, &err);
+-	if (!skb)
+-		return err;
+-
+-	memcpy(skb_put(skb, len), buf, len);
+ 	hci_send_sco(conn->hcon, skb);
+ 
+ 	return len;
+@@ -727,7 +720,7 @@ static int sco_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 			    size_t len)
+ {
+ 	struct sock *sk = sock->sk;
+-	void *buf;
++	struct sk_buff *skb;
+ 	int err;
+ 
+ 	BT_DBG("sock %p, sk %p", sock, sk);
+@@ -739,24 +732,21 @@ static int sco_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 	if (msg->msg_flags & MSG_OOB)
+ 		return -EOPNOTSUPP;
+ 
+-	buf = kmalloc(len, GFP_KERNEL);
+-	if (!buf)
+-		return -ENOMEM;
+-
+-	if (memcpy_from_msg(buf, msg, len)) {
+-		kfree(buf);
+-		return -EFAULT;
+-	}
++	skb = bt_skb_sendmsg(sk, msg, len, len, 0, 0);
++	if (IS_ERR(skb))
++		return PTR_ERR(skb);
+ 
+ 	lock_sock(sk);
+ 
+ 	if (sk->sk_state == BT_CONNECTED)
+-		err = sco_send_frame(sk, buf, len, msg->msg_flags);
++		err = sco_send_frame(sk, skb);
+ 	else
+ 		err = -ENOTCONN;
+ 
+ 	release_sock(sk);
+-	kfree(buf);
++
++	if (err < 0)
++		kfree_skb(skb);
+ 	return err;
+ }
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index af52050b0f383..637bc576fbd26 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5750,7 +5750,7 @@ static void flush_all_backlogs(void)
+ 	}
+ 
+ 	/* we can have in flight packet[s] on the cpus we are not flushing,
+-	 * synchronize_net() in rollback_registered_many() will take care of
++	 * synchronize_net() in unregister_netdevice_many() will take care of
+ 	 * them
+ 	 */
+ 	for_each_cpu(cpu, &flush_cpus)
+@@ -9508,106 +9508,6 @@ static void net_set_todo(struct net_device *dev)
+ 	dev_net(dev)->dev_unreg_count++;
+ }
+ 
+-static void rollback_registered_many(struct list_head *head)
+-{
+-	struct net_device *dev, *tmp;
+-	LIST_HEAD(close_head);
+-
+-	BUG_ON(dev_boot_phase);
+-	ASSERT_RTNL();
+-
+-	list_for_each_entry_safe(dev, tmp, head, unreg_list) {
+-		/* Some devices call without registering
+-		 * for initialization unwind. Remove those
+-		 * devices and proceed with the remaining.
+-		 */
+-		if (dev->reg_state == NETREG_UNINITIALIZED) {
+-			pr_debug("unregister_netdevice: device %s/%p never was registered\n",
+-				 dev->name, dev);
+-
+-			WARN_ON(1);
+-			list_del(&dev->unreg_list);
+-			continue;
+-		}
+-		dev->dismantle = true;
+-		BUG_ON(dev->reg_state != NETREG_REGISTERED);
+-	}
+-
+-	/* If device is running, close it first. */
+-	list_for_each_entry(dev, head, unreg_list)
+-		list_add_tail(&dev->close_list, &close_head);
+-	dev_close_many(&close_head, true);
+-
+-	list_for_each_entry(dev, head, unreg_list) {
+-		/* And unlink it from device chain. */
+-		unlist_netdevice(dev);
+-
+-		dev->reg_state = NETREG_UNREGISTERING;
+-	}
+-	flush_all_backlogs();
+-
+-	synchronize_net();
+-
+-	list_for_each_entry(dev, head, unreg_list) {
+-		struct sk_buff *skb = NULL;
+-
+-		/* Shutdown queueing discipline. */
+-		dev_shutdown(dev);
+-
+-		dev_xdp_uninstall(dev);
+-
+-		/* Notify protocols, that we are about to destroy
+-		 * this device. They should clean all the things.
+-		 */
+-		call_netdevice_notifiers(NETDEV_UNREGISTER, dev);
+-
+-		if (!dev->rtnl_link_ops ||
+-		    dev->rtnl_link_state == RTNL_LINK_INITIALIZED)
+-			skb = rtmsg_ifinfo_build_skb(RTM_DELLINK, dev, ~0U, 0,
+-						     GFP_KERNEL, NULL, 0);
+-
+-		/*
+-		 *	Flush the unicast and multicast chains
+-		 */
+-		dev_uc_flush(dev);
+-		dev_mc_flush(dev);
+-
+-		netdev_name_node_alt_flush(dev);
+-		netdev_name_node_free(dev->name_node);
+-
+-		if (dev->netdev_ops->ndo_uninit)
+-			dev->netdev_ops->ndo_uninit(dev);
+-
+-		if (skb)
+-			rtmsg_ifinfo_send(skb, dev, GFP_KERNEL);
+-
+-		/* Notifier chain MUST detach us all upper devices. */
+-		WARN_ON(netdev_has_any_upper_dev(dev));
+-		WARN_ON(netdev_has_any_lower_dev(dev));
+-
+-		/* Remove entries from kobject tree */
+-		netdev_unregister_kobject(dev);
+-#ifdef CONFIG_XPS
+-		/* Remove XPS queueing entries */
+-		netif_reset_xps_queues_gt(dev, 0);
+-#endif
+-	}
+-
+-	synchronize_net();
+-
+-	list_for_each_entry(dev, head, unreg_list)
+-		dev_put(dev);
+-}
+-
+-static void rollback_registered(struct net_device *dev)
+-{
+-	LIST_HEAD(single);
+-
+-	list_add(&dev->unreg_list, &single);
+-	rollback_registered_many(&single);
+-	list_del(&single);
+-}
+-
+ static netdev_features_t netdev_sync_upper_features(struct net_device *lower,
+ 	struct net_device *upper, netdev_features_t features)
+ {
+@@ -10144,17 +10044,10 @@ int register_netdevice(struct net_device *dev)
+ 	ret = call_netdevice_notifiers(NETDEV_REGISTER, dev);
+ 	ret = notifier_to_errno(ret);
+ 	if (ret) {
+-		rollback_registered(dev);
+-		rcu_barrier();
+-
+-		dev->reg_state = NETREG_UNREGISTERED;
+-		/* We should put the kobject that hold in
+-		 * netdev_unregister_kobject(), otherwise
+-		 * the net device cannot be freed when
+-		 * driver calls free_netdev(), because the
+-		 * kobject is being hold.
+-		 */
+-		kobject_put(&dev->dev.kobj);
++		/* Expect explicit free_netdev() on failure */
++		dev->needs_free_netdev = false;
++		unregister_netdevice_queue(dev, NULL);
++		goto out;
+ 	}
+ 	/*
+ 	 *	Prevent userspace races by waiting until the network
+@@ -10683,6 +10576,17 @@ void free_netdev(struct net_device *dev)
+ 	struct napi_struct *p, *n;
+ 
+ 	might_sleep();
++
++	/* When called immediately after register_netdevice() failed the unwind
++	 * handling may still be dismantling the device. Handle that case by
++	 * deferring the free.
++	 */
++	if (dev->reg_state == NETREG_UNREGISTERING) {
++		ASSERT_RTNL();
++		dev->needs_free_netdev = true;
++		return;
++	}
++
+ 	netif_free_tx_queues(dev);
+ 	netif_free_rx_queues(dev);
+ 
+@@ -10749,9 +10653,10 @@ void unregister_netdevice_queue(struct net_device *dev, struct list_head *head)
+ 	if (head) {
+ 		list_move_tail(&dev->unreg_list, head);
+ 	} else {
+-		rollback_registered(dev);
+-		/* Finish processing unregister after unlock */
+-		net_set_todo(dev);
++		LIST_HEAD(single);
++
++		list_add(&dev->unreg_list, &single);
++		unregister_netdevice_many(&single);
+ 	}
+ }
+ EXPORT_SYMBOL(unregister_netdevice_queue);
+@@ -10765,14 +10670,100 @@ EXPORT_SYMBOL(unregister_netdevice_queue);
+  */
+ void unregister_netdevice_many(struct list_head *head)
+ {
+-	struct net_device *dev;
++	struct net_device *dev, *tmp;
++	LIST_HEAD(close_head);
++
++	BUG_ON(dev_boot_phase);
++	ASSERT_RTNL();
+ 
+-	if (!list_empty(head)) {
+-		rollback_registered_many(head);
+-		list_for_each_entry(dev, head, unreg_list)
+-			net_set_todo(dev);
+-		list_del(head);
++	if (list_empty(head))
++		return;
++
++	list_for_each_entry_safe(dev, tmp, head, unreg_list) {
++		/* Some devices call without registering
++		 * for initialization unwind. Remove those
++		 * devices and proceed with the remaining.
++		 */
++		if (dev->reg_state == NETREG_UNINITIALIZED) {
++			pr_debug("unregister_netdevice: device %s/%p never was registered\n",
++				 dev->name, dev);
++
++			WARN_ON(1);
++			list_del(&dev->unreg_list);
++			continue;
++		}
++		dev->dismantle = true;
++		BUG_ON(dev->reg_state != NETREG_REGISTERED);
+ 	}
++
++	/* If device is running, close it first. */
++	list_for_each_entry(dev, head, unreg_list)
++		list_add_tail(&dev->close_list, &close_head);
++	dev_close_many(&close_head, true);
++
++	list_for_each_entry(dev, head, unreg_list) {
++		/* And unlink it from device chain. */
++		unlist_netdevice(dev);
++
++		dev->reg_state = NETREG_UNREGISTERING;
++	}
++	flush_all_backlogs();
++
++	synchronize_net();
++
++	list_for_each_entry(dev, head, unreg_list) {
++		struct sk_buff *skb = NULL;
++
++		/* Shutdown queueing discipline. */
++		dev_shutdown(dev);
++
++		dev_xdp_uninstall(dev);
++
++		/* Notify protocols, that we are about to destroy
++		 * this device. They should clean all the things.
++		 */
++		call_netdevice_notifiers(NETDEV_UNREGISTER, dev);
++
++		if (!dev->rtnl_link_ops ||
++		    dev->rtnl_link_state == RTNL_LINK_INITIALIZED)
++			skb = rtmsg_ifinfo_build_skb(RTM_DELLINK, dev, ~0U, 0,
++						     GFP_KERNEL, NULL, 0);
++
++		/*
++		 *	Flush the unicast and multicast chains
++		 */
++		dev_uc_flush(dev);
++		dev_mc_flush(dev);
++
++		netdev_name_node_alt_flush(dev);
++		netdev_name_node_free(dev->name_node);
++
++		if (dev->netdev_ops->ndo_uninit)
++			dev->netdev_ops->ndo_uninit(dev);
++
++		if (skb)
++			rtmsg_ifinfo_send(skb, dev, GFP_KERNEL);
++
++		/* Notifier chain MUST detach us all upper devices. */
++		WARN_ON(netdev_has_any_upper_dev(dev));
++		WARN_ON(netdev_has_any_lower_dev(dev));
++
++		/* Remove entries from kobject tree */
++		netdev_unregister_kobject(dev);
++#ifdef CONFIG_XPS
++		/* Remove XPS queueing entries */
++		netif_reset_xps_queues_gt(dev, 0);
++#endif
++	}
++
++	synchronize_net();
++
++	list_for_each_entry(dev, head, unreg_list) {
++		dev_put(dev);
++		net_set_todo(dev);
++	}
++
++	list_del(head);
+ }
+ EXPORT_SYMBOL(unregister_netdevice_many);
+ 
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 34ae30503ac4f..e2b491665775f 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -6507,7 +6507,7 @@ BPF_CALL_5(bpf_tcp_check_syncookie, struct sock *, sk, void *, iph, u32, iph_len
+ 	if (sk->sk_protocol != IPPROTO_TCP || sk->sk_state != TCP_LISTEN)
+ 		return -EINVAL;
+ 
+-	if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies)
++	if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies))
+ 		return -EINVAL;
+ 
+ 	if (!th->ack || th->rst || th->syn)
+@@ -6582,7 +6582,7 @@ BPF_CALL_5(bpf_tcp_gen_syncookie, struct sock *, sk, void *, iph, u32, iph_len,
+ 	if (sk->sk_protocol != IPPROTO_TCP || sk->sk_state != TCP_LISTEN)
+ 		return -EINVAL;
+ 
+-	if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies)
++	if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies))
+ 		return -ENOENT;
+ 
+ 	if (!th->syn || th->ack || th->fin || th->rst)
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 873081cda9507..3c9c2d6e3b92e 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3442,26 +3442,15 @@ replay:
+ 
+ 	dev->ifindex = ifm->ifi_index;
+ 
+-	if (ops->newlink) {
++	if (ops->newlink)
+ 		err = ops->newlink(link_net ? : net, dev, tb, data, extack);
+-		/* Drivers should call free_netdev() in ->destructor
+-		 * and unregister it on failure after registration
+-		 * so that device could be finally freed in rtnl_unlock.
+-		 */
+-		if (err < 0) {
+-			/* If device is not registered at all, free it now */
+-			if (dev->reg_state == NETREG_UNINITIALIZED ||
+-			    dev->reg_state == NETREG_UNREGISTERED)
+-				free_netdev(dev);
+-			goto out;
+-		}
+-	} else {
++	else
+ 		err = register_netdevice(dev);
+-		if (err < 0) {
+-			free_netdev(dev);
+-			goto out;
+-		}
++	if (err < 0) {
++		free_netdev(dev);
++		goto out;
+ 	}
++
+ 	err = rtnl_configure_link(dev, ifm);
+ 	if (err < 0)
+ 		goto out_unregister;
+diff --git a/net/core/secure_seq.c b/net/core/secure_seq.c
+index 7131cd1fb2ad5..189eea1372d5d 100644
+--- a/net/core/secure_seq.c
++++ b/net/core/secure_seq.c
+@@ -64,7 +64,7 @@ u32 secure_tcpv6_ts_off(const struct net *net,
+ 		.daddr = *(struct in6_addr *)daddr,
+ 	};
+ 
+-	if (net->ipv4.sysctl_tcp_timestamps != 1)
++	if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1)
+ 		return 0;
+ 
+ 	ts_secret_init();
+@@ -120,7 +120,7 @@ EXPORT_SYMBOL(secure_ipv6_port_ephemeral);
+ #ifdef CONFIG_INET
+ u32 secure_tcp_ts_off(const struct net *net, __be32 saddr, __be32 daddr)
+ {
+-	if (net->ipv4.sysctl_tcp_timestamps != 1)
++	if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1)
+ 		return 0;
+ 
+ 	ts_secret_init();
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index e77283069c7b7..a733ce1a3f8f4 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -220,7 +220,7 @@ int inet_listen(struct socket *sock, int backlog)
+ 		 * because the socket was in TCP_LISTEN state previously but
+ 		 * was shutdown() rather than close().
+ 		 */
+-		tcp_fastopen = sock_net(sk)->ipv4.sysctl_tcp_fastopen;
++		tcp_fastopen = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen);
+ 		if ((tcp_fastopen & TFO_SERVER_WO_SOCKOPT1) &&
+ 		    (tcp_fastopen & TFO_SERVER_ENABLE) &&
+ 		    !inet_csk(sk)->icsk_accept_queue.fastopenq.max_qlen) {
+@@ -338,7 +338,7 @@ lookup_protocol:
+ 			inet->hdrincl = 1;
+ 	}
+ 
+-	if (net->ipv4.sysctl_ip_no_pmtu_disc)
++	if (READ_ONCE(net->ipv4.sysctl_ip_no_pmtu_disc))
+ 		inet->pmtudisc = IP_PMTUDISC_DONT;
+ 	else
+ 		inet->pmtudisc = IP_PMTUDISC_WANT;
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 70c866308abea..3824b7abecf7e 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -2232,7 +2232,7 @@ void fib_select_multipath(struct fib_result *res, int hash)
+ 	}
+ 
+ 	change_nexthops(fi) {
+-		if (net->ipv4.sysctl_fib_multipath_use_neigh) {
++		if (READ_ONCE(net->ipv4.sysctl_fib_multipath_use_neigh)) {
+ 			if (!fib_good_nh(nexthop_nh))
+ 				continue;
+ 			if (!first) {
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index 0fa0da1d71f57..a1aacf5e75a6a 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -887,7 +887,7 @@ static bool icmp_unreach(struct sk_buff *skb)
+ 			 * values please see
+ 			 * Documentation/networking/ip-sysctl.rst
+ 			 */
+-			switch (net->ipv4.sysctl_ip_no_pmtu_disc) {
++			switch (READ_ONCE(net->ipv4.sysctl_ip_no_pmtu_disc)) {
+ 			default:
+ 				net_dbg_ratelimited("%pI4: fragmentation needed and DF set\n",
+ 						    &iph->daddr);
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 3817988a5a1d4..428cc3a4c36f1 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -467,7 +467,8 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ip_mc_list *pmc,
+ 
+ 	if (pmc->multiaddr == IGMP_ALL_HOSTS)
+ 		return skb;
+-	if (ipv4_is_local_multicast(pmc->multiaddr) && !net->ipv4.sysctl_igmp_llm_reports)
++	if (ipv4_is_local_multicast(pmc->multiaddr) &&
++	    !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports))
+ 		return skb;
+ 
+ 	mtu = READ_ONCE(dev->mtu);
+@@ -593,7 +594,7 @@ static int igmpv3_send_report(struct in_device *in_dev, struct ip_mc_list *pmc)
+ 			if (pmc->multiaddr == IGMP_ALL_HOSTS)
+ 				continue;
+ 			if (ipv4_is_local_multicast(pmc->multiaddr) &&
+-			     !net->ipv4.sysctl_igmp_llm_reports)
++			    !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports))
+ 				continue;
+ 			spin_lock_bh(&pmc->lock);
+ 			if (pmc->sfcount[MCAST_EXCLUDE])
+@@ -736,7 +737,8 @@ static int igmp_send_report(struct in_device *in_dev, struct ip_mc_list *pmc,
+ 	if (type == IGMPV3_HOST_MEMBERSHIP_REPORT)
+ 		return igmpv3_send_report(in_dev, pmc);
+ 
+-	if (ipv4_is_local_multicast(group) && !net->ipv4.sysctl_igmp_llm_reports)
++	if (ipv4_is_local_multicast(group) &&
++	    !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports))
+ 		return 0;
+ 
+ 	if (type == IGMP_HOST_LEAVE_MESSAGE)
+@@ -920,7 +922,8 @@ static bool igmp_heard_report(struct in_device *in_dev, __be32 group)
+ 
+ 	if (group == IGMP_ALL_HOSTS)
+ 		return false;
+-	if (ipv4_is_local_multicast(group) && !net->ipv4.sysctl_igmp_llm_reports)
++	if (ipv4_is_local_multicast(group) &&
++	    !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports))
+ 		return false;
+ 
+ 	rcu_read_lock();
+@@ -1045,7 +1048,7 @@ static bool igmp_heard_query(struct in_device *in_dev, struct sk_buff *skb,
+ 		if (im->multiaddr == IGMP_ALL_HOSTS)
+ 			continue;
+ 		if (ipv4_is_local_multicast(im->multiaddr) &&
+-		    !net->ipv4.sysctl_igmp_llm_reports)
++		    !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports))
+ 			continue;
+ 		spin_lock_bh(&im->lock);
+ 		if (im->tm_running)
+@@ -1296,7 +1299,8 @@ static void __igmp_group_dropped(struct ip_mc_list *im, gfp_t gfp)
+ #ifdef CONFIG_IP_MULTICAST
+ 	if (im->multiaddr == IGMP_ALL_HOSTS)
+ 		return;
+-	if (ipv4_is_local_multicast(im->multiaddr) && !net->ipv4.sysctl_igmp_llm_reports)
++	if (ipv4_is_local_multicast(im->multiaddr) &&
++	    !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports))
+ 		return;
+ 
+ 	reporter = im->reporter;
+@@ -1338,7 +1342,8 @@ static void igmp_group_added(struct ip_mc_list *im)
+ #ifdef CONFIG_IP_MULTICAST
+ 	if (im->multiaddr == IGMP_ALL_HOSTS)
+ 		return;
+-	if (ipv4_is_local_multicast(im->multiaddr) && !net->ipv4.sysctl_igmp_llm_reports)
++	if (ipv4_is_local_multicast(im->multiaddr) &&
++	    !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports))
+ 		return;
+ 
+ 	if (in_dev->dead)
+@@ -1642,7 +1647,7 @@ static void ip_mc_rejoin_groups(struct in_device *in_dev)
+ 		if (im->multiaddr == IGMP_ALL_HOSTS)
+ 			continue;
+ 		if (ipv4_is_local_multicast(im->multiaddr) &&
+-		    !net->ipv4.sysctl_igmp_llm_reports)
++		    !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports))
+ 			continue;
+ 
+ 		/* a failover is happening and switches
+@@ -2192,7 +2197,7 @@ static int __ip_mc_join_group(struct sock *sk, struct ip_mreqn *imr,
+ 		count++;
+ 	}
+ 	err = -ENOBUFS;
+-	if (count >= net->ipv4.sysctl_igmp_max_memberships)
++	if (count >= READ_ONCE(net->ipv4.sysctl_igmp_max_memberships))
+ 		goto done;
+ 	iml = sock_kmalloc(sk, sizeof(*iml), GFP_KERNEL);
+ 	if (!iml)
+@@ -2379,7 +2384,7 @@ int ip_mc_source(int add, int omode, struct sock *sk, struct
+ 	}
+ 	/* else, add a new source to the filter */
+ 
+-	if (psl && psl->sl_count >= net->ipv4.sysctl_igmp_max_msf) {
++	if (psl && psl->sl_count >= READ_ONCE(net->ipv4.sysctl_igmp_max_msf)) {
+ 		err = -ENOBUFS;
+ 		goto done;
+ 	}
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 7785a4775e58a..4d97133240036 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -251,7 +251,7 @@ next_port:
+ 		goto other_half_scan;
+ 	}
+ 
+-	if (net->ipv4.sysctl_ip_autobind_reuse && !relax) {
++	if (READ_ONCE(net->ipv4.sysctl_ip_autobind_reuse) && !relax) {
+ 		/* We still have a chance to connect to different destinations */
+ 		relax = true;
+ 		goto ports_exhausted;
+diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
+index 00ec819f949b5..29730edda220a 100644
+--- a/net/ipv4/ip_forward.c
++++ b/net/ipv4/ip_forward.c
+@@ -151,7 +151,7 @@ int ip_forward(struct sk_buff *skb)
+ 	    !skb_sec_path(skb))
+ 		ip_rt_send_redirect(skb);
+ 
+-	if (net->ipv4.sysctl_ip_fwd_update_priority)
++	if (READ_ONCE(net->ipv4.sysctl_ip_fwd_update_priority))
+ 		skb->priority = rt_tos2priority(iph->tos);
+ 
+ 	return NF_HOOK(NFPROTO_IPV4, NF_INET_FORWARD,
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index ec6036713e2c2..22507a6a3f71c 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -783,7 +783,7 @@ static int ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval, int optlen)
+ 	/* numsrc >= (4G-140)/128 overflow in 32 bits */
+ 	err = -ENOBUFS;
+ 	if (gsf->gf_numsrc >= 0x1ffffff ||
+-	    gsf->gf_numsrc > sock_net(sk)->ipv4.sysctl_igmp_max_msf)
++	    gsf->gf_numsrc > READ_ONCE(sock_net(sk)->ipv4.sysctl_igmp_max_msf))
+ 		goto out_free_gsf;
+ 
+ 	err = -EINVAL;
+@@ -832,7 +832,7 @@ static int compat_ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ 
+ 	/* numsrc >= (4G-140)/128 overflow in 32 bits */
+ 	err = -ENOBUFS;
+-	if (n > sock_net(sk)->ipv4.sysctl_igmp_max_msf)
++	if (n > READ_ONCE(sock_net(sk)->ipv4.sysctl_igmp_max_msf))
+ 		goto out_free_gsf;
+ 	err = set_mcast_msfilter(sk, gf32->gf_interface, n, gf32->gf_fmode,
+ 				 &gf32->gf_group, gf32->gf_slist);
+@@ -1242,7 +1242,7 @@ static int do_ip_setsockopt(struct sock *sk, int level, int optname,
+ 		}
+ 		/* numsrc >= (1G-4) overflow in 32 bits */
+ 		if (msf->imsf_numsrc >= 0x3ffffffcU ||
+-		    msf->imsf_numsrc > net->ipv4.sysctl_igmp_max_msf) {
++		    msf->imsf_numsrc > READ_ONCE(net->ipv4.sysctl_igmp_max_msf)) {
+ 			kfree(msf);
+ 			err = -ENOBUFS;
+ 			break;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index aab8ac383d5d1..374647693d7ac 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1442,7 +1442,7 @@ u32 ip_mtu_from_fib_result(struct fib_result *res, __be32 daddr)
+ 	struct fib_info *fi = res->fi;
+ 	u32 mtu = 0;
+ 
+-	if (dev_net(dev)->ipv4.sysctl_ip_fwd_use_pmtu ||
++	if (READ_ONCE(dev_net(dev)->ipv4.sysctl_ip_fwd_use_pmtu) ||
+ 	    fi->fib_metrics->metrics[RTAX_LOCK - 1] & (1 << RTAX_MTU))
+ 		mtu = fi->fib_mtu;
+ 
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index 10b469aee4920..41afc9155f311 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -249,12 +249,12 @@ bool cookie_timestamp_decode(const struct net *net,
+ 		return true;
+ 	}
+ 
+-	if (!net->ipv4.sysctl_tcp_timestamps)
++	if (!READ_ONCE(net->ipv4.sysctl_tcp_timestamps))
+ 		return false;
+ 
+ 	tcp_opt->sack_ok = (options & TS_OPT_SACK) ? TCP_SACK_SEEN : 0;
+ 
+-	if (tcp_opt->sack_ok && !net->ipv4.sysctl_tcp_sack)
++	if (tcp_opt->sack_ok && !READ_ONCE(net->ipv4.sysctl_tcp_sack))
+ 		return false;
+ 
+ 	if ((options & TS_OPT_WSCALE_MASK) == TS_OPT_WSCALE_MASK)
+@@ -263,7 +263,7 @@ bool cookie_timestamp_decode(const struct net *net,
+ 	tcp_opt->wscale_ok = 1;
+ 	tcp_opt->snd_wscale = options & TS_OPT_WSCALE_MASK;
+ 
+-	return net->ipv4.sysctl_tcp_window_scaling != 0;
++	return READ_ONCE(net->ipv4.sysctl_tcp_window_scaling) != 0;
+ }
+ EXPORT_SYMBOL(cookie_timestamp_decode);
+ 
+@@ -342,7 +342,8 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb)
+ 	struct flowi4 fl4;
+ 	u32 tsoff = 0;
+ 
+-	if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies || !th->ack || th->rst)
++	if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies) ||
++	    !th->ack || th->rst)
+ 		goto out;
+ 
+ 	if (tcp_synq_no_recent_overflow(sk))
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 08829809e88b7..86f553864f98f 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -95,7 +95,7 @@ static int ipv4_local_port_range(struct ctl_table *table, int write,
+ 		 * port limit.
+ 		 */
+ 		if ((range[1] < range[0]) ||
+-		    (range[0] < net->ipv4.sysctl_ip_prot_sock))
++		    (range[0] < READ_ONCE(net->ipv4.sysctl_ip_prot_sock)))
+ 			ret = -EINVAL;
+ 		else
+ 			set_local_port_range(net, range);
+@@ -121,7 +121,7 @@ static int ipv4_privileged_ports(struct ctl_table *table, int write,
+ 		.extra2 = &ip_privileged_port_max,
+ 	};
+ 
+-	pports = net->ipv4.sysctl_ip_prot_sock;
++	pports = READ_ONCE(net->ipv4.sysctl_ip_prot_sock);
+ 
+ 	ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos);
+ 
+@@ -133,7 +133,7 @@ static int ipv4_privileged_ports(struct ctl_table *table, int write,
+ 		if (range[0] < pports)
+ 			ret = -EINVAL;
+ 		else
+-			net->ipv4.sysctl_ip_prot_sock = pports;
++			WRITE_ONCE(net->ipv4.sysctl_ip_prot_sock, pports);
+ 	}
+ 
+ 	return ret;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 19c13ad5c121b..f1fd26bb199ce 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -440,7 +440,7 @@ void tcp_init_sock(struct sock *sk)
+ 	tp->snd_cwnd_clamp = ~0;
+ 	tp->mss_cache = TCP_MSS_DEFAULT;
+ 
+-	tp->reordering = sock_net(sk)->ipv4.sysctl_tcp_reordering;
++	tp->reordering = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reordering);
+ 	tcp_assign_congestion_control(sk);
+ 
+ 	tp->tsoffset = 0;
+@@ -1148,7 +1148,8 @@ static int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg,
+ 	struct sockaddr *uaddr = msg->msg_name;
+ 	int err, flags;
+ 
+-	if (!(sock_net(sk)->ipv4.sysctl_tcp_fastopen & TFO_CLIENT_ENABLE) ||
++	if (!(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen) &
++	      TFO_CLIENT_ENABLE) ||
+ 	    (uaddr && msg->msg_namelen >= sizeof(uaddr->sa_family) &&
+ 	     uaddr->sa_family == AF_UNSPEC))
+ 		return -EOPNOTSUPP;
+@@ -3390,7 +3391,8 @@ static int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 	case TCP_FASTOPEN_CONNECT:
+ 		if (val > 1 || val < 0) {
+ 			err = -EINVAL;
+-		} else if (net->ipv4.sysctl_tcp_fastopen & TFO_CLIENT_ENABLE) {
++		} else if (READ_ONCE(net->ipv4.sysctl_tcp_fastopen) &
++			   TFO_CLIENT_ENABLE) {
+ 			if (sk->sk_state == TCP_CLOSE)
+ 				tp->fastopen_connect = val;
+ 			else
+@@ -3727,7 +3729,7 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
+ 	case TCP_LINGER2:
+ 		val = tp->linger2;
+ 		if (val >= 0)
+-			val = (val ? : net->ipv4.sysctl_tcp_fin_timeout) / HZ;
++			val = (val ? : READ_ONCE(net->ipv4.sysctl_tcp_fin_timeout)) / HZ;
+ 		break;
+ 	case TCP_DEFER_ACCEPT:
+ 		val = retrans_to_secs(icsk->icsk_accept_queue.rskq_defer_accept,
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index 1071119843843..39fb037ce5f3f 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -349,7 +349,7 @@ static bool tcp_fastopen_no_cookie(const struct sock *sk,
+ 				   const struct dst_entry *dst,
+ 				   int flag)
+ {
+-	return (sock_net(sk)->ipv4.sysctl_tcp_fastopen & flag) ||
++	return (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen) & flag) ||
+ 	       tcp_sk(sk)->fastopen_no_cookie ||
+ 	       (dst && dst_metric(dst, RTAX_FASTOPEN_NO_COOKIE));
+ }
+@@ -364,7 +364,7 @@ struct sock *tcp_try_fastopen(struct sock *sk, struct sk_buff *skb,
+ 			      const struct dst_entry *dst)
+ {
+ 	bool syn_data = TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq + 1;
+-	int tcp_fastopen = sock_net(sk)->ipv4.sysctl_tcp_fastopen;
++	int tcp_fastopen = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen);
+ 	struct tcp_fastopen_cookie valid_foc = { .len = -1 };
+ 	struct sock *child;
+ 	int ret = 0;
+@@ -506,7 +506,7 @@ void tcp_fastopen_active_disable(struct sock *sk)
+ {
+ 	struct net *net = sock_net(sk);
+ 
+-	if (!sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout)
++	if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout))
+ 		return;
+ 
+ 	/* Paired with READ_ONCE() in tcp_fastopen_active_should_disable() */
+@@ -527,7 +527,8 @@ void tcp_fastopen_active_disable(struct sock *sk)
+  */
+ bool tcp_fastopen_active_should_disable(struct sock *sk)
+ {
+-	unsigned int tfo_bh_timeout = sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout;
++	unsigned int tfo_bh_timeout =
++		READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout);
+ 	unsigned long timeout;
+ 	int tfo_da_times;
+ 	int multiplier;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 54ed68e05b66a..d817f8c31c9ce 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -1011,7 +1011,7 @@ static void tcp_check_sack_reordering(struct sock *sk, const u32 low_seq,
+ 			 tp->undo_marker ? tp->undo_retrans : 0);
+ #endif
+ 		tp->reordering = min_t(u32, (metric + mss - 1) / mss,
+-				       sock_net(sk)->ipv4.sysctl_tcp_max_reordering);
++				       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_max_reordering));
+ 	}
+ 
+ 	/* This exciting event is worth to be remembered. 8) */
+@@ -1990,7 +1990,7 @@ static void tcp_check_reno_reordering(struct sock *sk, const int addend)
+ 		return;
+ 
+ 	tp->reordering = min_t(u32, tp->packets_out + addend,
+-			       sock_net(sk)->ipv4.sysctl_tcp_max_reordering);
++			       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_max_reordering));
+ 	tp->reord_seen++;
+ 	NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRENOREORDER);
+ }
+@@ -2055,7 +2055,8 @@ static inline void tcp_init_undo(struct tcp_sock *tp)
+ 
+ static bool tcp_is_rack(const struct sock *sk)
+ {
+-	return sock_net(sk)->ipv4.sysctl_tcp_recovery & TCP_RACK_LOSS_DETECTION;
++	return READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_recovery) &
++		TCP_RACK_LOSS_DETECTION;
+ }
+ 
+ /* If we detect SACK reneging, forget all SACK information
+@@ -2099,6 +2100,7 @@ void tcp_enter_loss(struct sock *sk)
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	struct net *net = sock_net(sk);
+ 	bool new_recovery = icsk->icsk_ca_state < TCP_CA_Recovery;
++	u8 reordering;
+ 
+ 	tcp_timeout_mark_lost(sk);
+ 
+@@ -2119,10 +2121,12 @@ void tcp_enter_loss(struct sock *sk)
+ 	/* Timeout in disordered state after receiving substantial DUPACKs
+ 	 * suggests that the degree of reordering is over-estimated.
+ 	 */
++	reordering = READ_ONCE(net->ipv4.sysctl_tcp_reordering);
+ 	if (icsk->icsk_ca_state <= TCP_CA_Disorder &&
+-	    tp->sacked_out >= net->ipv4.sysctl_tcp_reordering)
++	    tp->sacked_out >= reordering)
+ 		tp->reordering = min_t(unsigned int, tp->reordering,
+-				       net->ipv4.sysctl_tcp_reordering);
++				       reordering);
++
+ 	tcp_set_ca_state(sk, TCP_CA_Loss);
+ 	tp->high_seq = tp->snd_nxt;
+ 	tcp_ecn_queue_cwr(tp);
+@@ -3411,7 +3415,8 @@ static inline bool tcp_may_raise_cwnd(const struct sock *sk, const int flag)
+ 	 * new SACK or ECE mark may first advance cwnd here and later reduce
+ 	 * cwnd in tcp_fastretrans_alert() based on more states.
+ 	 */
+-	if (tcp_sk(sk)->reordering > sock_net(sk)->ipv4.sysctl_tcp_reordering)
++	if (tcp_sk(sk)->reordering >
++	    READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reordering))
+ 		return flag & FLAG_FORWARD_PROGRESS;
+ 
+ 	return flag & FLAG_DATA_ACKED;
+@@ -4003,7 +4008,7 @@ void tcp_parse_options(const struct net *net,
+ 				break;
+ 			case TCPOPT_WINDOW:
+ 				if (opsize == TCPOLEN_WINDOW && th->syn &&
+-				    !estab && net->ipv4.sysctl_tcp_window_scaling) {
++				    !estab && READ_ONCE(net->ipv4.sysctl_tcp_window_scaling)) {
+ 					__u8 snd_wscale = *(__u8 *)ptr;
+ 					opt_rx->wscale_ok = 1;
+ 					if (snd_wscale > TCP_MAX_WSCALE) {
+@@ -4019,7 +4024,7 @@ void tcp_parse_options(const struct net *net,
+ 			case TCPOPT_TIMESTAMP:
+ 				if ((opsize == TCPOLEN_TIMESTAMP) &&
+ 				    ((estab && opt_rx->tstamp_ok) ||
+-				     (!estab && net->ipv4.sysctl_tcp_timestamps))) {
++				     (!estab && READ_ONCE(net->ipv4.sysctl_tcp_timestamps)))) {
+ 					opt_rx->saw_tstamp = 1;
+ 					opt_rx->rcv_tsval = get_unaligned_be32(ptr);
+ 					opt_rx->rcv_tsecr = get_unaligned_be32(ptr + 4);
+@@ -4027,7 +4032,7 @@ void tcp_parse_options(const struct net *net,
+ 				break;
+ 			case TCPOPT_SACK_PERM:
+ 				if (opsize == TCPOLEN_SACK_PERM && th->syn &&
+-				    !estab && net->ipv4.sysctl_tcp_sack) {
++				    !estab && READ_ONCE(net->ipv4.sysctl_tcp_sack)) {
+ 					opt_rx->sack_ok = TCP_SACK_SEEN;
+ 					tcp_sack_reset(opt_rx);
+ 				}
+@@ -5487,7 +5492,7 @@ static void tcp_check_urg(struct sock *sk, const struct tcphdr *th)
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	u32 ptr = ntohs(th->urg_ptr);
+ 
+-	if (ptr && !sock_net(sk)->ipv4.sysctl_tcp_stdurg)
++	if (ptr && !READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_stdurg))
+ 		ptr--;
+ 	ptr += ntohl(th->seq);
+ 
+@@ -6683,11 +6688,14 @@ static bool tcp_syn_flood_action(const struct sock *sk, const char *proto)
+ {
+ 	struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue;
+ 	const char *msg = "Dropping request";
+-	bool want_cookie = false;
+ 	struct net *net = sock_net(sk);
++	bool want_cookie = false;
++	u8 syncookies;
++
++	syncookies = READ_ONCE(net->ipv4.sysctl_tcp_syncookies);
+ 
+ #ifdef CONFIG_SYN_COOKIES
+-	if (net->ipv4.sysctl_tcp_syncookies) {
++	if (syncookies) {
+ 		msg = "Sending cookies";
+ 		want_cookie = true;
+ 		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPREQQFULLDOCOOKIES);
+@@ -6695,8 +6703,7 @@ static bool tcp_syn_flood_action(const struct sock *sk, const char *proto)
+ #endif
+ 		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPREQQFULLDROP);
+ 
+-	if (!queue->synflood_warned &&
+-	    net->ipv4.sysctl_tcp_syncookies != 2 &&
++	if (!queue->synflood_warned && syncookies != 2 &&
+ 	    xchg(&queue->synflood_warned, 1) == 0)
+ 		net_info_ratelimited("%s: Possible SYN flooding on port %d. %s.  Check SNMP counters.\n",
+ 				     proto, sk->sk_num, msg);
+@@ -6745,7 +6752,7 @@ u16 tcp_get_syncookie_mss(struct request_sock_ops *rsk_ops,
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	u16 mss;
+ 
+-	if (sock_net(sk)->ipv4.sysctl_tcp_syncookies != 2 &&
++	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies) != 2 &&
+ 	    !inet_csk_reqsk_queue_is_full(sk))
+ 		return 0;
+ 
+@@ -6779,13 +6786,15 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
+ 	bool want_cookie = false;
+ 	struct dst_entry *dst;
+ 	struct flowi fl;
++	u8 syncookies;
++
++	syncookies = READ_ONCE(net->ipv4.sysctl_tcp_syncookies);
+ 
+ 	/* TW buckets are converted to open requests without
+ 	 * limitations, they conserve resources and peer is
+ 	 * evidently real one.
+ 	 */
+-	if ((net->ipv4.sysctl_tcp_syncookies == 2 ||
+-	     inet_csk_reqsk_queue_is_full(sk)) && !isn) {
++	if ((syncookies == 2 || inet_csk_reqsk_queue_is_full(sk)) && !isn) {
+ 		want_cookie = tcp_syn_flood_action(sk, rsk_ops->slab_name);
+ 		if (!want_cookie)
+ 			goto drop;
+@@ -6839,10 +6848,12 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
+ 		goto drop_and_free;
+ 
+ 	if (!want_cookie && !isn) {
++		int max_syn_backlog = READ_ONCE(net->ipv4.sysctl_max_syn_backlog);
++
+ 		/* Kill the following clause, if you dislike this way. */
+-		if (!net->ipv4.sysctl_tcp_syncookies &&
+-		    (net->ipv4.sysctl_max_syn_backlog - inet_csk_reqsk_queue_len(sk) <
+-		     (net->ipv4.sysctl_max_syn_backlog >> 2)) &&
++		if (!syncookies &&
++		    (max_syn_backlog - inet_csk_reqsk_queue_len(sk) <
++		     (max_syn_backlog >> 2)) &&
+ 		    !tcp_peer_is_proven(req, dst)) {
+ 			/* Without syncookies last quarter of
+ 			 * backlog is filled with destinations,
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 32c60122db9c8..d5f13ff7d9004 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -106,10 +106,10 @@ static u32 tcp_v4_init_ts_off(const struct net *net, const struct sk_buff *skb)
+ 
+ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp)
+ {
++	int reuse = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_tw_reuse);
+ 	const struct inet_timewait_sock *tw = inet_twsk(sktw);
+ 	const struct tcp_timewait_sock *tcptw = tcp_twsk(sktw);
+ 	struct tcp_sock *tp = tcp_sk(sk);
+-	int reuse = sock_net(sk)->ipv4.sysctl_tcp_tw_reuse;
+ 
+ 	if (reuse == 2) {
+ 		/* Still does not detect *everything* that goes through
+diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
+index 6b27c481fe18b..8d7e32f4abf67 100644
+--- a/net/ipv4/tcp_metrics.c
++++ b/net/ipv4/tcp_metrics.c
+@@ -428,7 +428,8 @@ void tcp_update_metrics(struct sock *sk)
+ 		if (!tcp_metric_locked(tm, TCP_METRIC_REORDERING)) {
+ 			val = tcp_metric_get(tm, TCP_METRIC_REORDERING);
+ 			if (val < tp->reordering &&
+-			    tp->reordering != net->ipv4.sysctl_tcp_reordering)
++			    tp->reordering !=
++			    READ_ONCE(net->ipv4.sysctl_tcp_reordering))
+ 				tcp_metric_set(tm, TCP_METRIC_REORDERING,
+ 					       tp->reordering);
+ 		}
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 62f5ef9e6f938..e42312321191b 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -180,7 +180,7 @@ tcp_timewait_state_process(struct inet_timewait_sock *tw, struct sk_buff *skb,
+ 			 * Oh well... nobody has a sufficient solution to this
+ 			 * protocol bug yet.
+ 			 */
+-			if (twsk_net(tw)->ipv4.sysctl_tcp_rfc1337 == 0) {
++			if (!READ_ONCE(twsk_net(tw)->ipv4.sysctl_tcp_rfc1337)) {
+ kill:
+ 				inet_twsk_deschedule_put(tw);
+ 				return TCP_TW_SUCCESS;
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 8634a5c853f51..9b67c61576e4c 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -789,18 +789,18 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb,
+ 	opts->mss = tcp_advertise_mss(sk);
+ 	remaining -= TCPOLEN_MSS_ALIGNED;
+ 
+-	if (likely(sock_net(sk)->ipv4.sysctl_tcp_timestamps && !*md5)) {
++	if (likely(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_timestamps) && !*md5)) {
+ 		opts->options |= OPTION_TS;
+ 		opts->tsval = tcp_skb_timestamp(skb) + tp->tsoffset;
+ 		opts->tsecr = tp->rx_opt.ts_recent;
+ 		remaining -= TCPOLEN_TSTAMP_ALIGNED;
+ 	}
+-	if (likely(sock_net(sk)->ipv4.sysctl_tcp_window_scaling)) {
++	if (likely(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_window_scaling))) {
+ 		opts->ws = tp->rx_opt.rcv_wscale;
+ 		opts->options |= OPTION_WSCALE;
+ 		remaining -= TCPOLEN_WSCALE_ALIGNED;
+ 	}
+-	if (likely(sock_net(sk)->ipv4.sysctl_tcp_sack)) {
++	if (likely(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_sack))) {
+ 		opts->options |= OPTION_SACK_ADVERTISE;
+ 		if (unlikely(!(OPTION_TS & opts->options)))
+ 			remaining -= TCPOLEN_SACKPERM_ALIGNED;
+@@ -1720,7 +1720,8 @@ static inline int __tcp_mtu_to_mss(struct sock *sk, int pmtu)
+ 	mss_now -= icsk->icsk_ext_hdr_len;
+ 
+ 	/* Then reserve room for full set of TCP options and 8 bytes of data */
+-	mss_now = max(mss_now, sock_net(sk)->ipv4.sysctl_tcp_min_snd_mss);
++	mss_now = max(mss_now,
++		      READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_min_snd_mss));
+ 	return mss_now;
+ }
+ 
+@@ -1763,10 +1764,10 @@ void tcp_mtup_init(struct sock *sk)
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+ 	struct net *net = sock_net(sk);
+ 
+-	icsk->icsk_mtup.enabled = net->ipv4.sysctl_tcp_mtu_probing > 1;
++	icsk->icsk_mtup.enabled = READ_ONCE(net->ipv4.sysctl_tcp_mtu_probing) > 1;
+ 	icsk->icsk_mtup.search_high = tp->rx_opt.mss_clamp + sizeof(struct tcphdr) +
+ 			       icsk->icsk_af_ops->net_header_len;
+-	icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, net->ipv4.sysctl_tcp_base_mss);
++	icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, READ_ONCE(net->ipv4.sysctl_tcp_base_mss));
+ 	icsk->icsk_mtup.probe_size = 0;
+ 	if (icsk->icsk_mtup.enabled)
+ 		icsk->icsk_mtup.probe_timestamp = tcp_jiffies32;
+@@ -1898,7 +1899,7 @@ static void tcp_cwnd_validate(struct sock *sk, bool is_cwnd_limited)
+ 		if (tp->packets_out > tp->snd_cwnd_used)
+ 			tp->snd_cwnd_used = tp->packets_out;
+ 
+-		if (sock_net(sk)->ipv4.sysctl_tcp_slow_start_after_idle &&
++		if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_slow_start_after_idle) &&
+ 		    (s32)(tcp_jiffies32 - tp->snd_cwnd_stamp) >= inet_csk(sk)->icsk_rto &&
+ 		    !ca_ops->cong_control)
+ 			tcp_cwnd_application_limited(sk);
+@@ -2277,7 +2278,7 @@ static inline void tcp_mtu_check_reprobe(struct sock *sk)
+ 	u32 interval;
+ 	s32 delta;
+ 
+-	interval = net->ipv4.sysctl_tcp_probe_interval;
++	interval = READ_ONCE(net->ipv4.sysctl_tcp_probe_interval);
+ 	delta = tcp_jiffies32 - icsk->icsk_mtup.probe_timestamp;
+ 	if (unlikely(delta >= interval * HZ)) {
+ 		int mss = tcp_current_mss(sk);
+@@ -2359,7 +2360,7 @@ static int tcp_mtu_probe(struct sock *sk)
+ 	 * probing process by not resetting search range to its orignal.
+ 	 */
+ 	if (probe_size > tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_high) ||
+-		interval < net->ipv4.sysctl_tcp_probe_threshold) {
++	    interval < READ_ONCE(net->ipv4.sysctl_tcp_probe_threshold)) {
+ 		/* Check whether enough time has elaplased for
+ 		 * another round of probing.
+ 		 */
+@@ -2734,7 +2735,7 @@ bool tcp_schedule_loss_probe(struct sock *sk, bool advancing_rto)
+ 	if (rcu_access_pointer(tp->fastopen_rsk))
+ 		return false;
+ 
+-	early_retrans = sock_net(sk)->ipv4.sysctl_tcp_early_retrans;
++	early_retrans = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_early_retrans);
+ 	/* Schedule a loss probe in 2*RTT for SACK capable connections
+ 	 * not in loss recovery, that are either limited by cwnd or application.
+ 	 */
+@@ -3099,7 +3100,7 @@ static void tcp_retrans_try_collapse(struct sock *sk, struct sk_buff *to,
+ 	struct sk_buff *skb = to, *tmp;
+ 	bool first = true;
+ 
+-	if (!sock_net(sk)->ipv4.sysctl_tcp_retrans_collapse)
++	if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_retrans_collapse))
+ 		return;
+ 	if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN)
+ 		return;
+@@ -3647,7 +3648,7 @@ static void tcp_connect_init(struct sock *sk)
+ 	 * See tcp_input.c:tcp_rcv_state_process case TCP_SYN_SENT.
+ 	 */
+ 	tp->tcp_header_len = sizeof(struct tcphdr);
+-	if (sock_net(sk)->ipv4.sysctl_tcp_timestamps)
++	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_timestamps))
+ 		tp->tcp_header_len += TCPOLEN_TSTAMP_ALIGNED;
+ 
+ #ifdef CONFIG_TCP_MD5SIG
+@@ -3683,7 +3684,7 @@ static void tcp_connect_init(struct sock *sk)
+ 				  tp->advmss - (tp->rx_opt.ts_recent_stamp ? tp->tcp_header_len - sizeof(struct tcphdr) : 0),
+ 				  &tp->rcv_wnd,
+ 				  &tp->window_clamp,
+-				  sock_net(sk)->ipv4.sysctl_tcp_window_scaling,
++				  READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_window_scaling),
+ 				  &rcv_wscale,
+ 				  rcv_wnd);
+ 
+@@ -4091,7 +4092,7 @@ void tcp_send_probe0(struct sock *sk)
+ 
+ 	icsk->icsk_probes_out++;
+ 	if (err <= 0) {
+-		if (icsk->icsk_backoff < net->ipv4.sysctl_tcp_retries2)
++		if (icsk->icsk_backoff < READ_ONCE(net->ipv4.sysctl_tcp_retries2))
+ 			icsk->icsk_backoff++;
+ 		timeout = tcp_probe0_when(sk, TCP_RTO_MAX);
+ 	} else {
+diff --git a/net/ipv4/tcp_recovery.c b/net/ipv4/tcp_recovery.c
+index 31fc178f42c02..21fc9859d421e 100644
+--- a/net/ipv4/tcp_recovery.c
++++ b/net/ipv4/tcp_recovery.c
+@@ -19,7 +19,8 @@ static u32 tcp_rack_reo_wnd(const struct sock *sk)
+ 			return 0;
+ 
+ 		if (tp->sacked_out >= tp->reordering &&
+-		    !(sock_net(sk)->ipv4.sysctl_tcp_recovery & TCP_RACK_NO_DUPTHRESH))
++		    !(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_recovery) &
++		      TCP_RACK_NO_DUPTHRESH))
+ 			return 0;
+ 	}
+ 
+@@ -190,7 +191,8 @@ void tcp_rack_update_reo_wnd(struct sock *sk, struct rate_sample *rs)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 
+-	if (sock_net(sk)->ipv4.sysctl_tcp_recovery & TCP_RACK_STATIC_REO_WND ||
++	if ((READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_recovery) &
++	     TCP_RACK_STATIC_REO_WND) ||
+ 	    !rs->prior_delivered)
+ 		return;
+ 
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index 4ef08079ccfa9..888683f2ff3ee 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -143,7 +143,7 @@ static int tcp_out_of_resources(struct sock *sk, bool do_reset)
+  */
+ static int tcp_orphan_retries(struct sock *sk, bool alive)
+ {
+-	int retries = sock_net(sk)->ipv4.sysctl_tcp_orphan_retries; /* May be zero. */
++	int retries = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_orphan_retries); /* May be zero. */
+ 
+ 	/* We know from an ICMP that something is wrong. */
+ 	if (sk->sk_err_soft && !alive)
+@@ -163,7 +163,7 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk)
+ 	int mss;
+ 
+ 	/* Black hole detection */
+-	if (!net->ipv4.sysctl_tcp_mtu_probing)
++	if (!READ_ONCE(net->ipv4.sysctl_tcp_mtu_probing))
+ 		return;
+ 
+ 	if (!icsk->icsk_mtup.enabled) {
+@@ -171,9 +171,9 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk)
+ 		icsk->icsk_mtup.probe_timestamp = tcp_jiffies32;
+ 	} else {
+ 		mss = tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_low) >> 1;
+-		mss = min(net->ipv4.sysctl_tcp_base_mss, mss);
+-		mss = max(mss, net->ipv4.sysctl_tcp_mtu_probe_floor);
+-		mss = max(mss, net->ipv4.sysctl_tcp_min_snd_mss);
++		mss = min(READ_ONCE(net->ipv4.sysctl_tcp_base_mss), mss);
++		mss = max(mss, READ_ONCE(net->ipv4.sysctl_tcp_mtu_probe_floor));
++		mss = max(mss, READ_ONCE(net->ipv4.sysctl_tcp_min_snd_mss));
+ 		icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, mss);
+ 	}
+ 	tcp_sync_mss(sk, icsk->icsk_pmtu_cookie);
+@@ -242,14 +242,14 @@ static int tcp_write_timeout(struct sock *sk)
+ 		retry_until = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_syn_retries;
+ 		expired = icsk->icsk_retransmits >= retry_until;
+ 	} else {
+-		if (retransmits_timed_out(sk, net->ipv4.sysctl_tcp_retries1, 0)) {
++		if (retransmits_timed_out(sk, READ_ONCE(net->ipv4.sysctl_tcp_retries1), 0)) {
+ 			/* Black hole detection */
+ 			tcp_mtu_probing(icsk, sk);
+ 
+ 			__dst_negative_advice(sk);
+ 		}
+ 
+-		retry_until = net->ipv4.sysctl_tcp_retries2;
++		retry_until = READ_ONCE(net->ipv4.sysctl_tcp_retries2);
+ 		if (sock_flag(sk, SOCK_DEAD)) {
+ 			const bool alive = icsk->icsk_rto < TCP_RTO_MAX;
+ 
+@@ -380,7 +380,7 @@ static void tcp_probe_timer(struct sock *sk)
+ 		 msecs_to_jiffies(icsk->icsk_user_timeout))
+ 		goto abort;
+ 
+-	max_probes = sock_net(sk)->ipv4.sysctl_tcp_retries2;
++	max_probes = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_retries2);
+ 	if (sock_flag(sk, SOCK_DEAD)) {
+ 		const bool alive = inet_csk_rto_backoff(icsk, TCP_RTO_MAX) < TCP_RTO_MAX;
+ 
+@@ -574,7 +574,7 @@ out_reset_timer:
+ 	 * linear-timeout retransmissions into a black hole
+ 	 */
+ 	if (sk->sk_state == TCP_ESTABLISHED &&
+-	    (tp->thin_lto || net->ipv4.sysctl_tcp_thin_linear_timeouts) &&
++	    (tp->thin_lto || READ_ONCE(net->ipv4.sysctl_tcp_thin_linear_timeouts)) &&
+ 	    tcp_stream_is_thin(tp) &&
+ 	    icsk->icsk_retransmits <= TCP_THIN_LINEAR_RETRIES) {
+ 		icsk->icsk_backoff = 0;
+@@ -585,7 +585,7 @@ out_reset_timer:
+ 	}
+ 	inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS,
+ 				  tcp_clamp_rto_to_user_timeout(sk), TCP_RTO_MAX);
+-	if (retransmits_timed_out(sk, net->ipv4.sysctl_tcp_retries1 + 1, 0))
++	if (retransmits_timed_out(sk, READ_ONCE(net->ipv4.sysctl_tcp_retries1) + 1, 0))
+ 		__sk_dst_reset(sk);
+ 
+ out:;
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index 890a9cfc6ce27..d30c9d949c1b5 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -225,7 +225,7 @@ lookup_protocol:
+ 	inet->mc_list	= NULL;
+ 	inet->rcv_tos	= 0;
+ 
+-	if (net->ipv4.sysctl_ip_no_pmtu_disc)
++	if (READ_ONCE(net->ipv4.sysctl_ip_no_pmtu_disc))
+ 		inet->pmtudisc = IP_PMTUDISC_DONT;
+ 	else
+ 		inet->pmtudisc = IP_PMTUDISC_WANT;
+diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
+index ca92dd6981dea..12ae817aaf2ec 100644
+--- a/net/ipv6/syncookies.c
++++ b/net/ipv6/syncookies.c
+@@ -141,7 +141,8 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
+ 	__u8 rcv_wscale;
+ 	u32 tsoff = 0;
+ 
+-	if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies || !th->ack || th->rst)
++	if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies) ||
++	    !th->ack || th->rst)
+ 		goto out;
+ 
+ 	if (tcp_synq_no_recent_overflow(sk))
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 940f1e257a90a..6e4ca837e91dd 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -358,7 +358,7 @@ static int sctp_v4_available(union sctp_addr *addr, struct sctp_sock *sp)
+ 	if (addr->v4.sin_addr.s_addr != htonl(INADDR_ANY) &&
+ 	   ret != RTN_LOCAL &&
+ 	   !sp->inet.freebind &&
+-	   !net->ipv4.sysctl_ip_nonlocal_bind)
++	    !READ_ONCE(net->ipv4.sysctl_ip_nonlocal_bind))
+ 		return 0;
+ 
+ 	if (ipv6_only_sock(sctp_opt2sk(sp)))
+diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
+index ee1f0fdba0855..0ef15f8fba902 100644
+--- a/net/smc/smc_llc.c
++++ b/net/smc/smc_llc.c
+@@ -1787,7 +1787,7 @@ void smc_llc_lgr_init(struct smc_link_group *lgr, struct smc_sock *smc)
+ 	init_waitqueue_head(&lgr->llc_flow_waiter);
+ 	init_waitqueue_head(&lgr->llc_msg_waiter);
+ 	mutex_init(&lgr->llc_conf_mutex);
+-	lgr->llc_testlink_time = net->ipv4.sysctl_tcp_keepalive_time;
++	lgr->llc_testlink_time = READ_ONCE(net->ipv4.sysctl_tcp_keepalive_time);
+ }
+ 
+ /* called after lgr was removed from lgr_list */
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 6ae2ce411b4bf..23eab7ac43ee5 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -97,13 +97,16 @@ static void tls_device_queue_ctx_destruction(struct tls_context *ctx)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&tls_device_lock, flags);
++	if (unlikely(!refcount_dec_and_test(&ctx->refcount)))
++		goto unlock;
++
+ 	list_move_tail(&ctx->list, &tls_device_gc_list);
+ 
+ 	/* schedule_work inside the spinlock
+ 	 * to make sure tls_device_down waits for that work.
+ 	 */
+ 	schedule_work(&tls_device_gc_work);
+-
++unlock:
+ 	spin_unlock_irqrestore(&tls_device_lock, flags);
+ }
+ 
+@@ -194,8 +197,7 @@ void tls_device_sk_destruct(struct sock *sk)
+ 		clean_acked_data_disable(inet_csk(sk));
+ 	}
+ 
+-	if (refcount_dec_and_test(&tls_ctx->refcount))
+-		tls_device_queue_ctx_destruction(tls_ctx);
++	tls_device_queue_ctx_destruction(tls_ctx);
+ }
+ EXPORT_SYMBOL_GPL(tls_device_sk_destruct);
+ 
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 93cbcc8f9b39e..603b05ed7eb4c 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -2680,8 +2680,10 @@ static int xfrm_expand_policies(const struct flowi *fl, u16 family,
+ 		*num_xfrms = 0;
+ 		return 0;
+ 	}
+-	if (IS_ERR(pols[0]))
++	if (IS_ERR(pols[0])) {
++		*num_pols = 0;
+ 		return PTR_ERR(pols[0]);
++	}
+ 
+ 	*num_xfrms = pols[0]->xfrm_nr;
+ 
+@@ -2696,6 +2698,7 @@ static int xfrm_expand_policies(const struct flowi *fl, u16 family,
+ 		if (pols[1]) {
+ 			if (IS_ERR(pols[1])) {
+ 				xfrm_pols_put(pols, *num_pols);
++				*num_pols = 0;
+ 				return PTR_ERR(pols[1]);
+ 			}
+ 			(*num_pols)++;
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 717db5ecd0bd4..bc0bbb1571cef 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -2587,7 +2587,7 @@ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload)
+ 	int err;
+ 
+ 	if (family == AF_INET &&
+-	    xs_net(x)->ipv4.sysctl_ip_no_pmtu_disc)
++	    READ_ONCE(xs_net(x)->ipv4.sysctl_ip_no_pmtu_disc))
+ 		x->props.flags |= XFRM_STATE_NOPMTUDISC;
+ 
+ 	err = -EPROTONOSUPPORT;
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index e737c216efc49..18569adcb4fe7 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -1805,6 +1805,10 @@ bool ima_appraise_signature(enum kernel_read_file_id id)
+ 	if (id >= READING_MAX_ID)
+ 		return false;
+ 
++	if (id == READING_KEXEC_IMAGE && !(ima_appraise & IMA_APPRAISE_ENFORCE)
++	    && security_locked_down(LOCKDOWN_KEXEC))
++		return false;
++
+ 	func = read_idmap[id] ?: FILE_CHECK;
+ 
+ 	rcu_read_lock();
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 0f335162f87c7..966bef5acc750 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -133,6 +133,7 @@ int snd_dma_alloc_pages(int type, struct device *device, size_t size,
+ 	if (WARN_ON(!dmab))
+ 		return -ENXIO;
+ 
++	size = PAGE_ALIGN(size);
+ 	dmab->dev.type = type;
+ 	dmab->dev.dev = device;
+ 	dmab->bytes = 0;
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 9cd8ca2d8bc16..c5dbac10c3720 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -3644,8 +3644,11 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
+ 		kvm_put_kvm_no_destroy(kvm);
+ 		mutex_lock(&kvm->lock);
+ 		list_del(&dev->vm_node);
++		if (ops->release)
++			ops->release(dev);
+ 		mutex_unlock(&kvm->lock);
+-		ops->destroy(dev);
++		if (ops->destroy)
++			ops->destroy(dev);
+ 		return ret;
+ 	}
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-08-03 14:24 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2022-08-03 14:24 UTC (permalink / raw
  To: gentoo-commits

commit:     206a5e2746ef7fe6e5960e2af948e1eedef7e208
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Aug  3 14:12:37 2022 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Aug  3 14:12:44 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=206a5e27

Linux patch 5.10.135

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |    4 +
 1134_linux-5.10.135.patch | 2841 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2845 insertions(+)

diff --git a/0000_README b/0000_README
index 7292c57d..19bd6321 100644
--- a/0000_README
+++ b/0000_README
@@ -579,6 +579,10 @@ Patch:  1133_linux-5.10.134.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.134
 
+Patch:  1134_linux-5.10.135.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.135
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1134_linux-5.10.135.patch b/1134_linux-5.10.135.patch
new file mode 100644
index 00000000..435afe17
--- /dev/null
+++ b/1134_linux-5.10.135.patch
@@ -0,0 +1,2841 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 1a58c580b2366..8b7c26d090459 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -2873,6 +2873,7 @@
+ 					       no_entry_flush [PPC]
+ 					       no_uaccess_flush [PPC]
+ 					       mmio_stale_data=off [X86]
++					       retbleed=off [X86]
+ 
+ 				Exceptions:
+ 					       This does not have any effect on
+@@ -2895,6 +2896,7 @@
+ 					       mds=full,nosmt [X86]
+ 					       tsx_async_abort=full,nosmt [X86]
+ 					       mmio_stale_data=full,nosmt [X86]
++					       retbleed=auto,nosmt [X86]
+ 
+ 	mminit_loglevel=
+ 			[KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index 0b1f3235aa773..0158dff638873 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -2629,7 +2629,14 @@ sctp_rmem - vector of 3 INTEGERs: min, default, max
+ 	Default: 4K
+ 
+ sctp_wmem  - vector of 3 INTEGERs: min, default, max
+-	Currently this tunable has no effect.
++	Only the first value ("min") is used, "default" and "max" are
++	ignored.
++
++	min: Minimum size of send buffer that can be used by SCTP sockets.
++	It is guaranteed to each SCTP socket (but not association) even
++	under moderate memory pressure.
++
++	Default: 4K
+ 
+ addr_scope_policy - INTEGER
+ 	Control IPv4 address scoping - draft-stewart-tsvwg-sctp-ipv4-00
+diff --git a/Makefile b/Makefile
+index 00dddc2ac804a..5f4dbcb433075 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 134
++SUBLEVEL = 135
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/include/asm/dma.h b/arch/arm/include/asm/dma.h
+index a81dda65c5762..45180a2cc47cb 100644
+--- a/arch/arm/include/asm/dma.h
++++ b/arch/arm/include/asm/dma.h
+@@ -10,7 +10,7 @@
+ #else
+ #define MAX_DMA_ADDRESS	({ \
+ 	extern phys_addr_t arm_dma_zone_size; \
+-	arm_dma_zone_size && arm_dma_zone_size < (0x10000000 - PAGE_OFFSET) ? \
++	arm_dma_zone_size && arm_dma_zone_size < (0x100000000ULL - PAGE_OFFSET) ? \
+ 		(PAGE_OFFSET + arm_dma_zone_size) : 0xffffffffUL; })
+ #endif
+ 
+diff --git a/arch/arm/lib/xor-neon.c b/arch/arm/lib/xor-neon.c
+index b99dd8e1c93f1..7ba6cf8261626 100644
+--- a/arch/arm/lib/xor-neon.c
++++ b/arch/arm/lib/xor-neon.c
+@@ -26,8 +26,9 @@ MODULE_LICENSE("GPL");
+  * While older versions of GCC do not generate incorrect code, they fail to
+  * recognize the parallel nature of these functions, and emit plain ARM code,
+  * which is known to be slower than the optimized ARM code in asm-arm/xor.h.
++ *
++ * #warning This code requires at least version 4.6 of GCC
+  */
+-#warning This code requires at least version 4.6 of GCC
+ #endif
+ 
+ #pragma GCC diagnostic ignored "-Wunused-variable"
+diff --git a/arch/s390/include/asm/archrandom.h b/arch/s390/include/asm/archrandom.h
+index 2c6e1c6ecbe78..4120c428dc378 100644
+--- a/arch/s390/include/asm/archrandom.h
++++ b/arch/s390/include/asm/archrandom.h
+@@ -2,7 +2,7 @@
+ /*
+  * Kernel interface for the s390 arch_random_* functions
+  *
+- * Copyright IBM Corp. 2017, 2020
++ * Copyright IBM Corp. 2017, 2022
+  *
+  * Author: Harald Freudenberger <freude@de.ibm.com>
+  *
+@@ -14,6 +14,7 @@
+ #ifdef CONFIG_ARCH_RANDOM
+ 
+ #include <linux/static_key.h>
++#include <linux/preempt.h>
+ #include <linux/atomic.h>
+ #include <asm/cpacf.h>
+ 
+@@ -32,7 +33,8 @@ static inline bool __must_check arch_get_random_int(unsigned int *v)
+ 
+ static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
+ {
+-	if (static_branch_likely(&s390_arch_random_available)) {
++	if (static_branch_likely(&s390_arch_random_available) &&
++	    in_task()) {
+ 		cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
+ 		atomic64_add(sizeof(*v), &s390_arch_random_counter);
+ 		return true;
+@@ -42,7 +44,8 @@ static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
+ 
+ static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
+ {
+-	if (static_branch_likely(&s390_arch_random_available)) {
++	if (static_branch_likely(&s390_arch_random_available) &&
++	    in_task()) {
+ 		cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
+ 		atomic64_add(sizeof(*v), &s390_arch_random_counter);
+ 		return true;
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 7896b67dda420..2e5762faf7740 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1476,6 +1476,7 @@ static void __init spectre_v2_select_mitigation(void)
+ 	 * enable IBRS around firmware calls.
+ 	 */
+ 	if (boot_cpu_has_bug(X86_BUG_RETBLEED) &&
++	    boot_cpu_has(X86_FEATURE_IBPB) &&
+ 	    (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+ 	     boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)) {
+ 
+diff --git a/drivers/edac/ghes_edac.c b/drivers/edac/ghes_edac.c
+index a918ca93e4f7d..df5897c90becc 100644
+--- a/drivers/edac/ghes_edac.c
++++ b/drivers/edac/ghes_edac.c
+@@ -101,9 +101,14 @@ static void dimm_setup_label(struct dimm_info *dimm, u16 handle)
+ 
+ 	dmi_memdev_name(handle, &bank, &device);
+ 
+-	/* both strings must be non-zero */
+-	if (bank && *bank && device && *device)
+-		snprintf(dimm->label, sizeof(dimm->label), "%s %s", bank, device);
++	/*
++	 * Set to a NULL string when both bank and device are zero. In this case,
++	 * the label assigned by default will be preserved.
++	 */
++	snprintf(dimm->label, sizeof(dimm->label), "%s%s%s",
++		 (bank && *bank) ? bank : "",
++		 (bank && *bank && device && *device) ? " " : "",
++		 (device && *device) ? device : "");
+ }
+ 
+ static void assign_dmi_dimm_info(struct dimm_info *dimm, struct memdev_dmi_entry *entry)
+diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
+index 92987daa5e17d..5e72e6cb2f840 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
+@@ -679,7 +679,11 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
+ 		goto out_free_dma;
+ 
+ 	for (i = 0; i < npages; i += max) {
+-		args.end = start + (max << PAGE_SHIFT);
++		if (args.start + (max << PAGE_SHIFT) > end)
++			args.end = end;
++		else
++			args.end = args.start + (max << PAGE_SHIFT);
++
+ 		ret = migrate_vma_setup(&args);
+ 		if (ret)
+ 			goto out_free_pfns;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 11d4e3ba9af4c..1dad62ecb8a3a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -1907,11 +1907,15 @@ static void i40e_vsi_setup_queue_map(struct i40e_vsi *vsi,
+ 		 * non-zero req_queue_pairs says that user requested a new
+ 		 * queue count via ethtool's set_channels, so use this
+ 		 * value for queues distribution across traffic classes
++		 * We need at least one queue pair for the interface
++		 * to be usable as we see in else statement.
+ 		 */
+ 		if (vsi->req_queue_pairs > 0)
+ 			vsi->num_queue_pairs = vsi->req_queue_pairs;
+ 		else if (pf->flags & I40E_FLAG_MSIX_ENABLED)
+ 			vsi->num_queue_pairs = pf->num_lan_msix;
++		else
++			vsi->num_queue_pairs = 1;
+ 	}
+ 
+ 	/* Number of queues per enabled TC */
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 060897eb9cabe..7f1bf71844bce 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -652,7 +652,8 @@ static int ice_lbtest_receive_frames(struct ice_ring *rx_ring)
+ 		rx_desc = ICE_RX_DESC(rx_ring, i);
+ 
+ 		if (!(rx_desc->wb.status_error0 &
+-		    cpu_to_le16(ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS)))
++		    (cpu_to_le16(BIT(ICE_RX_FLEX_DESC_STATUS0_DD_S)) |
++		     cpu_to_le16(BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S)))))
+ 			continue;
+ 
+ 		rx_buf = &rx_ring->rx_buf[i];
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index aae79fdd51727..810f2bdb91645 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -5203,10 +5203,12 @@ int ice_vsi_cfg(struct ice_vsi *vsi)
+ 	if (vsi->netdev) {
+ 		ice_set_rx_mode(vsi->netdev);
+ 
+-		err = ice_vsi_vlan_setup(vsi);
++		if (vsi->type != ICE_VSI_LB) {
++			err = ice_vsi_vlan_setup(vsi);
+ 
+-		if (err)
+-			return err;
++			if (err)
++				return err;
++		}
+ 	}
+ 	ice_vsi_cfg_dcb_rings(vsi);
+ 
+diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c
+index 725b0f38813a9..a2b4e3befa591 100644
+--- a/drivers/net/ethernet/sfc/ptp.c
++++ b/drivers/net/ethernet/sfc/ptp.c
+@@ -1100,7 +1100,29 @@ static void efx_ptp_xmit_skb_queue(struct efx_nic *efx, struct sk_buff *skb)
+ 
+ 	tx_queue = efx_channel_get_tx_queue(ptp_data->channel, type);
+ 	if (tx_queue && tx_queue->timestamping) {
++		/* This code invokes normal driver TX code which is always
++		 * protected from softirqs when called from generic TX code,
++		 * which in turn disables preemption. Look at __dev_queue_xmit
++		 * which uses rcu_read_lock_bh disabling preemption for RCU
++		 * plus disabling softirqs. We do not need RCU reader
++		 * protection here.
++		 *
++		 * Although it is theoretically safe for current PTP TX/RX code
++		 * running without disabling softirqs, there are three good
++		 * reasond for doing so:
++		 *
++		 *      1) The code invoked is mainly implemented for non-PTP
++		 *         packets and it is always executed with softirqs
++		 *         disabled.
++		 *      2) This being a single PTP packet, better to not
++		 *         interrupt its processing by softirqs which can lead
++		 *         to high latencies.
++		 *      3) netdev_xmit_more checks preemption is disabled and
++		 *         triggers a BUG_ON if not.
++		 */
++		local_bh_disable();
+ 		efx_enqueue_skb(tx_queue, skb);
++		local_bh_enable();
+ 	} else {
+ 		WARN_ONCE(1, "PTP channel has no timestamped tx queue\n");
+ 		dev_kfree_skb_any(skb);
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 789a124809e3c..70c5905a916b9 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -240,6 +240,7 @@ static struct macsec_cb *macsec_skb_cb(struct sk_buff *skb)
+ #define DEFAULT_SEND_SCI true
+ #define DEFAULT_ENCRYPT false
+ #define DEFAULT_ENCODING_SA 0
++#define MACSEC_XPN_MAX_REPLAY_WINDOW (((1 << 30) - 1))
+ 
+ static bool send_sci(const struct macsec_secy *secy)
+ {
+@@ -1694,7 +1695,7 @@ static bool validate_add_rxsa(struct nlattr **attrs)
+ 		return false;
+ 
+ 	if (attrs[MACSEC_SA_ATTR_PN] &&
+-	    *(u64 *)nla_data(attrs[MACSEC_SA_ATTR_PN]) == 0)
++	    nla_get_u64(attrs[MACSEC_SA_ATTR_PN]) == 0)
+ 		return false;
+ 
+ 	if (attrs[MACSEC_SA_ATTR_ACTIVE]) {
+@@ -1750,7 +1751,8 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
+ 	}
+ 
+ 	pn_len = secy->xpn ? MACSEC_XPN_PN_LEN : MACSEC_DEFAULT_PN_LEN;
+-	if (nla_len(tb_sa[MACSEC_SA_ATTR_PN]) != pn_len) {
++	if (tb_sa[MACSEC_SA_ATTR_PN] &&
++	    nla_len(tb_sa[MACSEC_SA_ATTR_PN]) != pn_len) {
+ 		pr_notice("macsec: nl: add_rxsa: bad pn length: %d != %d\n",
+ 			  nla_len(tb_sa[MACSEC_SA_ATTR_PN]), pn_len);
+ 		rtnl_unlock();
+@@ -1766,7 +1768,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
+ 		if (nla_len(tb_sa[MACSEC_SA_ATTR_SALT]) != MACSEC_SALT_LEN) {
+ 			pr_notice("macsec: nl: add_rxsa: bad salt length: %d != %d\n",
+ 				  nla_len(tb_sa[MACSEC_SA_ATTR_SALT]),
+-				  MACSEC_SA_ATTR_SALT);
++				  MACSEC_SALT_LEN);
+ 			rtnl_unlock();
+ 			return -EINVAL;
+ 		}
+@@ -1839,7 +1841,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
+ 	return 0;
+ 
+ cleanup:
+-	kfree(rx_sa);
++	macsec_rxsa_put(rx_sa);
+ 	rtnl_unlock();
+ 	return err;
+ }
+@@ -1936,7 +1938,7 @@ static bool validate_add_txsa(struct nlattr **attrs)
+ 	if (nla_get_u8(attrs[MACSEC_SA_ATTR_AN]) >= MACSEC_NUM_AN)
+ 		return false;
+ 
+-	if (nla_get_u32(attrs[MACSEC_SA_ATTR_PN]) == 0)
++	if (nla_get_u64(attrs[MACSEC_SA_ATTR_PN]) == 0)
+ 		return false;
+ 
+ 	if (attrs[MACSEC_SA_ATTR_ACTIVE]) {
+@@ -2008,7 +2010,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
+ 		if (nla_len(tb_sa[MACSEC_SA_ATTR_SALT]) != MACSEC_SALT_LEN) {
+ 			pr_notice("macsec: nl: add_txsa: bad salt length: %d != %d\n",
+ 				  nla_len(tb_sa[MACSEC_SA_ATTR_SALT]),
+-				  MACSEC_SA_ATTR_SALT);
++				  MACSEC_SALT_LEN);
+ 			rtnl_unlock();
+ 			return -EINVAL;
+ 		}
+@@ -2082,7 +2084,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
+ 
+ cleanup:
+ 	secy->operational = was_operational;
+-	kfree(tx_sa);
++	macsec_txsa_put(tx_sa);
+ 	rtnl_unlock();
+ 	return err;
+ }
+@@ -2290,7 +2292,7 @@ static bool validate_upd_sa(struct nlattr **attrs)
+ 	if (nla_get_u8(attrs[MACSEC_SA_ATTR_AN]) >= MACSEC_NUM_AN)
+ 		return false;
+ 
+-	if (attrs[MACSEC_SA_ATTR_PN] && nla_get_u32(attrs[MACSEC_SA_ATTR_PN]) == 0)
++	if (attrs[MACSEC_SA_ATTR_PN] && nla_get_u64(attrs[MACSEC_SA_ATTR_PN]) == 0)
+ 		return false;
+ 
+ 	if (attrs[MACSEC_SA_ATTR_ACTIVE]) {
+@@ -3737,9 +3739,6 @@ static int macsec_changelink_common(struct net_device *dev,
+ 		secy->operational = tx_sa && tx_sa->active;
+ 	}
+ 
+-	if (data[IFLA_MACSEC_WINDOW])
+-		secy->replay_window = nla_get_u32(data[IFLA_MACSEC_WINDOW]);
+-
+ 	if (data[IFLA_MACSEC_ENCRYPT])
+ 		tx_sc->encrypt = !!nla_get_u8(data[IFLA_MACSEC_ENCRYPT]);
+ 
+@@ -3785,6 +3784,16 @@ static int macsec_changelink_common(struct net_device *dev,
+ 		}
+ 	}
+ 
++	if (data[IFLA_MACSEC_WINDOW]) {
++		secy->replay_window = nla_get_u32(data[IFLA_MACSEC_WINDOW]);
++
++		/* IEEE 802.1AEbw-2013 10.7.8 - maximum replay window
++		 * for XPN cipher suites */
++		if (secy->xpn &&
++		    secy->replay_window > MACSEC_XPN_MAX_REPLAY_WINDOW)
++			return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -3814,7 +3823,7 @@ static int macsec_changelink(struct net_device *dev, struct nlattr *tb[],
+ 
+ 	ret = macsec_changelink_common(dev, data);
+ 	if (ret)
+-		return ret;
++		goto cleanup;
+ 
+ 	/* If h/w offloading is available, propagate to the device */
+ 	if (macsec_is_offloaded(macsec)) {
+diff --git a/drivers/net/sungem_phy.c b/drivers/net/sungem_phy.c
+index 291fa449993fb..45f295403cb55 100644
+--- a/drivers/net/sungem_phy.c
++++ b/drivers/net/sungem_phy.c
+@@ -454,6 +454,7 @@ static int bcm5421_init(struct mii_phy* phy)
+ 		int can_low_power = 1;
+ 		if (np == NULL || of_get_property(np, "no-autolowpower", NULL))
+ 			can_low_power = 0;
++		of_node_put(np);
+ 		if (can_low_power) {
+ 			/* Enable automatic low-power */
+ 			sungem_phy_write(phy, 0x1c, 0x9002);
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 37178b078ee37..0a07c05a610d1 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -213,9 +213,15 @@ struct virtnet_info {
+ 	/* Packet virtio header size */
+ 	u8 hdr_len;
+ 
+-	/* Work struct for refilling if we run low on memory. */
++	/* Work struct for delayed refilling if we run low on memory. */
+ 	struct delayed_work refill;
+ 
++	/* Is delayed refill enabled? */
++	bool refill_enabled;
++
++	/* The lock to synchronize the access to refill_enabled */
++	spinlock_t refill_lock;
++
+ 	/* Work struct for config space updates */
+ 	struct work_struct config_work;
+ 
+@@ -319,6 +325,20 @@ static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask)
+ 	return p;
+ }
+ 
++static void enable_delayed_refill(struct virtnet_info *vi)
++{
++	spin_lock_bh(&vi->refill_lock);
++	vi->refill_enabled = true;
++	spin_unlock_bh(&vi->refill_lock);
++}
++
++static void disable_delayed_refill(struct virtnet_info *vi)
++{
++	spin_lock_bh(&vi->refill_lock);
++	vi->refill_enabled = false;
++	spin_unlock_bh(&vi->refill_lock);
++}
++
+ static void virtqueue_napi_schedule(struct napi_struct *napi,
+ 				    struct virtqueue *vq)
+ {
+@@ -1403,8 +1423,12 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
+ 	}
+ 
+ 	if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) {
+-		if (!try_fill_recv(vi, rq, GFP_ATOMIC))
+-			schedule_delayed_work(&vi->refill, 0);
++		if (!try_fill_recv(vi, rq, GFP_ATOMIC)) {
++			spin_lock(&vi->refill_lock);
++			if (vi->refill_enabled)
++				schedule_delayed_work(&vi->refill, 0);
++			spin_unlock(&vi->refill_lock);
++		}
+ 	}
+ 
+ 	u64_stats_update_begin(&rq->stats.syncp);
+@@ -1523,6 +1547,8 @@ static int virtnet_open(struct net_device *dev)
+ 	struct virtnet_info *vi = netdev_priv(dev);
+ 	int i, err;
+ 
++	enable_delayed_refill(vi);
++
+ 	for (i = 0; i < vi->max_queue_pairs; i++) {
+ 		if (i < vi->curr_queue_pairs)
+ 			/* Make sure we have some buffers: if oom use wq. */
+@@ -1893,6 +1919,8 @@ static int virtnet_close(struct net_device *dev)
+ 	struct virtnet_info *vi = netdev_priv(dev);
+ 	int i;
+ 
++	/* Make sure NAPI doesn't schedule refill work */
++	disable_delayed_refill(vi);
+ 	/* Make sure refill_work doesn't re-enable napi! */
+ 	cancel_delayed_work_sync(&vi->refill);
+ 
+@@ -2390,6 +2418,8 @@ static int virtnet_restore_up(struct virtio_device *vdev)
+ 
+ 	virtio_device_ready(vdev);
+ 
++	enable_delayed_refill(vi);
++
+ 	if (netif_running(vi->dev)) {
+ 		err = virtnet_open(vi->dev);
+ 		if (err)
+@@ -3092,6 +3122,7 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 	vdev->priv = vi;
+ 
+ 	INIT_WORK(&vi->config_work, virtnet_config_changed_work);
++	spin_lock_init(&vi->refill_lock);
+ 
+ 	/* If we can receive ANY GSO packets, we must allocate large ones. */
+ 	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) ||
+diff --git a/drivers/net/wireless/mediatek/mt7601u/usb.c b/drivers/net/wireless/mediatek/mt7601u/usb.c
+index 6bcc4a13ae6c7..cc772045d526f 100644
+--- a/drivers/net/wireless/mediatek/mt7601u/usb.c
++++ b/drivers/net/wireless/mediatek/mt7601u/usb.c
+@@ -26,6 +26,7 @@ static const struct usb_device_id mt7601u_device_table[] = {
+ 	{ USB_DEVICE(0x2717, 0x4106) },
+ 	{ USB_DEVICE(0x2955, 0x0001) },
+ 	{ USB_DEVICE(0x2955, 0x1001) },
++	{ USB_DEVICE(0x2955, 0x1003) },
+ 	{ USB_DEVICE(0x2a5f, 0x1000) },
+ 	{ USB_DEVICE(0x7392, 0x7710) },
+ 	{ 0, }
+diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
+index 0f2430fb398db..576cc39077f32 100644
+--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
++++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
+@@ -107,9 +107,20 @@ out:
+ 	return ret;
+ }
+ 
++static bool phandle_exists(const struct device_node *np,
++			   const char *phandle_name, int index)
++{
++	struct device_node *parse_np = of_parse_phandle(np, phandle_name, index);
++
++	if (parse_np)
++		of_node_put(parse_np);
++
++	return parse_np != NULL;
++}
++
+ #define MAX_PROP_SIZE 32
+ static int ufshcd_populate_vreg(struct device *dev, const char *name,
+-		struct ufs_vreg **out_vreg)
++				struct ufs_vreg **out_vreg)
+ {
+ 	int ret = 0;
+ 	char prop_name[MAX_PROP_SIZE];
+@@ -122,7 +133,7 @@ static int ufshcd_populate_vreg(struct device *dev, const char *name,
+ 	}
+ 
+ 	snprintf(prop_name, MAX_PROP_SIZE, "%s-supply", name);
+-	if (!of_parse_phandle(np, prop_name, 0)) {
++	if (!phandle_exists(np, prop_name, 0)) {
+ 		dev_info(dev, "%s: Unable to find %s regulator, assuming enabled\n",
+ 				__func__, prop_name);
+ 		goto out;
+diff --git a/fs/ntfs/attrib.c b/fs/ntfs/attrib.c
+index d563abc3e1364..914e991731300 100644
+--- a/fs/ntfs/attrib.c
++++ b/fs/ntfs/attrib.c
+@@ -592,8 +592,12 @@ static int ntfs_attr_find(const ATTR_TYPE type, const ntfschar *name,
+ 		a = (ATTR_RECORD*)((u8*)ctx->attr +
+ 				le32_to_cpu(ctx->attr->length));
+ 	for (;;	a = (ATTR_RECORD*)((u8*)a + le32_to_cpu(a->length))) {
+-		if ((u8*)a < (u8*)ctx->mrec || (u8*)a > (u8*)ctx->mrec +
+-				le32_to_cpu(ctx->mrec->bytes_allocated))
++		u8 *mrec_end = (u8 *)ctx->mrec +
++		               le32_to_cpu(ctx->mrec->bytes_allocated);
++		u8 *name_end = (u8 *)a + le16_to_cpu(a->name_offset) +
++			       a->name_length * sizeof(ntfschar);
++		if ((u8*)a < (u8*)ctx->mrec || (u8*)a > mrec_end ||
++		    name_end > mrec_end)
+ 			break;
+ 		ctx->attr = a;
+ 		if (unlikely(le32_to_cpu(a->type) > le32_to_cpu(type) ||
+diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h
+index 7993d527edae9..0a8cd8e59a92c 100644
+--- a/fs/ocfs2/ocfs2.h
++++ b/fs/ocfs2/ocfs2.h
+@@ -279,7 +279,6 @@ enum ocfs2_mount_options
+ 	OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT = 1 << 15,  /* Journal Async Commit */
+ 	OCFS2_MOUNT_ERRORS_CONT = 1 << 16, /* Return EIO to the calling process on error */
+ 	OCFS2_MOUNT_ERRORS_ROFS = 1 << 17, /* Change filesystem to read-only on error */
+-	OCFS2_MOUNT_NOCLUSTER = 1 << 18, /* No cluster aware filesystem mount */
+ };
+ 
+ #define OCFS2_OSB_SOFT_RO	0x0001
+@@ -675,8 +674,7 @@ static inline int ocfs2_cluster_o2cb_global_heartbeat(struct ocfs2_super *osb)
+ 
+ static inline int ocfs2_mount_local(struct ocfs2_super *osb)
+ {
+-	return ((osb->s_feature_incompat & OCFS2_FEATURE_INCOMPAT_LOCAL_MOUNT)
+-		|| (osb->s_mount_opt & OCFS2_MOUNT_NOCLUSTER));
++	return (osb->s_feature_incompat & OCFS2_FEATURE_INCOMPAT_LOCAL_MOUNT);
+ }
+ 
+ static inline int ocfs2_uses_extended_slot_map(struct ocfs2_super *osb)
+diff --git a/fs/ocfs2/slot_map.c b/fs/ocfs2/slot_map.c
+index 4da0e4b1e79bf..8caeceeaeda7c 100644
+--- a/fs/ocfs2/slot_map.c
++++ b/fs/ocfs2/slot_map.c
+@@ -254,16 +254,14 @@ static int __ocfs2_find_empty_slot(struct ocfs2_slot_info *si,
+ 	int i, ret = -ENOSPC;
+ 
+ 	if ((preferred >= 0) && (preferred < si->si_num_slots)) {
+-		if (!si->si_slots[preferred].sl_valid ||
+-		    !si->si_slots[preferred].sl_node_num) {
++		if (!si->si_slots[preferred].sl_valid) {
+ 			ret = preferred;
+ 			goto out;
+ 		}
+ 	}
+ 
+ 	for(i = 0; i < si->si_num_slots; i++) {
+-		if (!si->si_slots[i].sl_valid ||
+-		    !si->si_slots[i].sl_node_num) {
++		if (!si->si_slots[i].sl_valid) {
+ 			ret = i;
+ 			break;
+ 		}
+@@ -458,30 +456,24 @@ int ocfs2_find_slot(struct ocfs2_super *osb)
+ 	spin_lock(&osb->osb_lock);
+ 	ocfs2_update_slot_info(si);
+ 
+-	if (ocfs2_mount_local(osb))
+-		/* use slot 0 directly in local mode */
+-		slot = 0;
+-	else {
+-		/* search for ourselves first and take the slot if it already
+-		 * exists. Perhaps we need to mark this in a variable for our
+-		 * own journal recovery? Possibly not, though we certainly
+-		 * need to warn to the user */
+-		slot = __ocfs2_node_num_to_slot(si, osb->node_num);
++	/* search for ourselves first and take the slot if it already
++	 * exists. Perhaps we need to mark this in a variable for our
++	 * own journal recovery? Possibly not, though we certainly
++	 * need to warn to the user */
++	slot = __ocfs2_node_num_to_slot(si, osb->node_num);
++	if (slot < 0) {
++		/* if no slot yet, then just take 1st available
++		 * one. */
++		slot = __ocfs2_find_empty_slot(si, osb->preferred_slot);
+ 		if (slot < 0) {
+-			/* if no slot yet, then just take 1st available
+-			 * one. */
+-			slot = __ocfs2_find_empty_slot(si, osb->preferred_slot);
+-			if (slot < 0) {
+-				spin_unlock(&osb->osb_lock);
+-				mlog(ML_ERROR, "no free slots available!\n");
+-				status = -EINVAL;
+-				goto bail;
+-			}
+-		} else
+-			printk(KERN_INFO "ocfs2: Slot %d on device (%s) was "
+-			       "already allocated to this node!\n",
+-			       slot, osb->dev_str);
+-	}
++			spin_unlock(&osb->osb_lock);
++			mlog(ML_ERROR, "no free slots available!\n");
++			status = -EINVAL;
++			goto bail;
++		}
++	} else
++		printk(KERN_INFO "ocfs2: Slot %d on device (%s) was already "
++		       "allocated to this node!\n", slot, osb->dev_str);
+ 
+ 	ocfs2_set_slot(si, slot, osb->node_num);
+ 	osb->slot_num = slot;
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 477ad05a34ea2..c0e5f1bad499f 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -175,7 +175,6 @@ enum {
+ 	Opt_dir_resv_level,
+ 	Opt_journal_async_commit,
+ 	Opt_err_cont,
+-	Opt_nocluster,
+ 	Opt_err,
+ };
+ 
+@@ -209,7 +208,6 @@ static const match_table_t tokens = {
+ 	{Opt_dir_resv_level, "dir_resv_level=%u"},
+ 	{Opt_journal_async_commit, "journal_async_commit"},
+ 	{Opt_err_cont, "errors=continue"},
+-	{Opt_nocluster, "nocluster"},
+ 	{Opt_err, NULL}
+ };
+ 
+@@ -621,13 +619,6 @@ static int ocfs2_remount(struct super_block *sb, int *flags, char *data)
+ 		goto out;
+ 	}
+ 
+-	tmp = OCFS2_MOUNT_NOCLUSTER;
+-	if ((osb->s_mount_opt & tmp) != (parsed_options.mount_opt & tmp)) {
+-		ret = -EINVAL;
+-		mlog(ML_ERROR, "Cannot change nocluster option on remount\n");
+-		goto out;
+-	}
+-
+ 	tmp = OCFS2_MOUNT_HB_LOCAL | OCFS2_MOUNT_HB_GLOBAL |
+ 		OCFS2_MOUNT_HB_NONE;
+ 	if ((osb->s_mount_opt & tmp) != (parsed_options.mount_opt & tmp)) {
+@@ -868,7 +859,6 @@ static int ocfs2_verify_userspace_stack(struct ocfs2_super *osb,
+ 	}
+ 
+ 	if (ocfs2_userspace_stack(osb) &&
+-	    !(osb->s_mount_opt & OCFS2_MOUNT_NOCLUSTER) &&
+ 	    strncmp(osb->osb_cluster_stack, mopt->cluster_stack,
+ 		    OCFS2_STACK_LABEL_LEN)) {
+ 		mlog(ML_ERROR,
+@@ -1149,11 +1139,6 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 	       osb->s_mount_opt & OCFS2_MOUNT_DATA_WRITEBACK ? "writeback" :
+ 	       "ordered");
+ 
+-	if ((osb->s_mount_opt & OCFS2_MOUNT_NOCLUSTER) &&
+-	   !(osb->s_feature_incompat & OCFS2_FEATURE_INCOMPAT_LOCAL_MOUNT))
+-		printk(KERN_NOTICE "ocfs2: The shared device (%s) is mounted "
+-		       "without cluster aware mode.\n", osb->dev_str);
+-
+ 	atomic_set(&osb->vol_state, VOLUME_MOUNTED);
+ 	wake_up(&osb->osb_mount_event);
+ 
+@@ -1460,9 +1445,6 @@ static int ocfs2_parse_options(struct super_block *sb,
+ 		case Opt_journal_async_commit:
+ 			mopt->mount_opt |= OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT;
+ 			break;
+-		case Opt_nocluster:
+-			mopt->mount_opt |= OCFS2_MOUNT_NOCLUSTER;
+-			break;
+ 		default:
+ 			mlog(ML_ERROR,
+ 			     "Unrecognized mount option \"%s\" "
+@@ -1574,9 +1556,6 @@ static int ocfs2_show_options(struct seq_file *s, struct dentry *root)
+ 	if (opts & OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT)
+ 		seq_printf(s, ",journal_async_commit");
+ 
+-	if (opts & OCFS2_MOUNT_NOCLUSTER)
+-		seq_printf(s, ",nocluster");
+-
+ 	return 0;
+ }
+ 
+diff --git a/fs/xfs/libxfs/xfs_log_format.h b/fs/xfs/libxfs/xfs_log_format.h
+index 8bd00da6d2a40..2f46ef3800aa2 100644
+--- a/fs/xfs/libxfs/xfs_log_format.h
++++ b/fs/xfs/libxfs/xfs_log_format.h
+@@ -414,7 +414,16 @@ struct xfs_log_dinode {
+ 	/* start of the extended dinode, writable fields */
+ 	uint32_t	di_crc;		/* CRC of the inode */
+ 	uint64_t	di_changecount;	/* number of attribute changes */
+-	xfs_lsn_t	di_lsn;		/* flush sequence */
++
++	/*
++	 * The LSN we write to this field during formatting is not a reflection
++	 * of the current on-disk LSN. It should never be used for recovery
++	 * sequencing, nor should it be recovered into the on-disk inode at all.
++	 * See xlog_recover_inode_commit_pass2() and xfs_log_dinode_to_disk()
++	 * for details.
++	 */
++	xfs_lsn_t	di_lsn;
++
+ 	uint64_t	di_flags2;	/* more random flags */
+ 	uint32_t	di_cowextsize;	/* basic cow extent size for file */
+ 	uint8_t		di_pad2[12];	/* more padding for future expansion */
+diff --git a/fs/xfs/libxfs/xfs_types.h b/fs/xfs/libxfs/xfs_types.h
+index 397d94775440d..1ce06173c2f55 100644
+--- a/fs/xfs/libxfs/xfs_types.h
++++ b/fs/xfs/libxfs/xfs_types.h
+@@ -21,6 +21,7 @@ typedef int32_t		xfs_suminfo_t;	/* type of bitmap summary info */
+ typedef uint32_t	xfs_rtword_t;	/* word type for bitmap manipulations */
+ 
+ typedef int64_t		xfs_lsn_t;	/* log sequence number */
++typedef int64_t		xfs_csn_t;	/* CIL sequence number */
+ 
+ typedef uint32_t	xfs_dablk_t;	/* dir/attr block number (in file) */
+ typedef uint32_t	xfs_dahash_t;	/* dir/attr hash value */
+diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c
+index 8c6e26d62ef28..a3d5ecccfc2cc 100644
+--- a/fs/xfs/xfs_buf_item.c
++++ b/fs/xfs/xfs_buf_item.c
+@@ -393,17 +393,8 @@ xfs_buf_item_pin(
+ }
+ 
+ /*
+- * This is called to unpin the buffer associated with the buf log
+- * item which was previously pinned with a call to xfs_buf_item_pin().
+- *
+- * Also drop the reference to the buf item for the current transaction.
+- * If the XFS_BLI_STALE flag is set and we are the last reference,
+- * then free up the buf log item and unlock the buffer.
+- *
+- * If the remove flag is set we are called from uncommit in the
+- * forced-shutdown path.  If that is true and the reference count on
+- * the log item is going to drop to zero we need to free the item's
+- * descriptor in the transaction.
++ * This is called to unpin the buffer associated with the buf log item which
++ * was previously pinned with a call to xfs_buf_item_pin().
+  */
+ STATIC void
+ xfs_buf_item_unpin(
+@@ -420,38 +411,35 @@ xfs_buf_item_unpin(
+ 
+ 	trace_xfs_buf_item_unpin(bip);
+ 
++	/*
++	 * Drop the bli ref associated with the pin and grab the hold required
++	 * for the I/O simulation failure in the abort case. We have to do this
++	 * before the pin count drops because the AIL doesn't acquire a bli
++	 * reference. Therefore if the refcount drops to zero, the bli could
++	 * still be AIL resident and the buffer submitted for I/O (and freed on
++	 * completion) at any point before we return. This can be removed once
++	 * the AIL properly holds a reference on the bli.
++	 */
+ 	freed = atomic_dec_and_test(&bip->bli_refcount);
+-
++	if (freed && !stale && remove)
++		xfs_buf_hold(bp);
+ 	if (atomic_dec_and_test(&bp->b_pin_count))
+ 		wake_up_all(&bp->b_waiters);
+ 
+-	if (freed && stale) {
++	 /* nothing to do but drop the pin count if the bli is active */
++	if (!freed)
++		return;
++
++	if (stale) {
+ 		ASSERT(bip->bli_flags & XFS_BLI_STALE);
+ 		ASSERT(xfs_buf_islocked(bp));
+ 		ASSERT(bp->b_flags & XBF_STALE);
+ 		ASSERT(bip->__bli_format.blf_flags & XFS_BLF_CANCEL);
++		ASSERT(list_empty(&lip->li_trans));
++		ASSERT(!bp->b_transp);
+ 
+ 		trace_xfs_buf_item_unpin_stale(bip);
+ 
+-		if (remove) {
+-			/*
+-			 * If we are in a transaction context, we have to
+-			 * remove the log item from the transaction as we are
+-			 * about to release our reference to the buffer.  If we
+-			 * don't, the unlock that occurs later in
+-			 * xfs_trans_uncommit() will try to reference the
+-			 * buffer which we no longer have a hold on.
+-			 */
+-			if (!list_empty(&lip->li_trans))
+-				xfs_trans_del_item(lip);
+-
+-			/*
+-			 * Since the transaction no longer refers to the buffer,
+-			 * the buffer should no longer refer to the transaction.
+-			 */
+-			bp->b_transp = NULL;
+-		}
+-
+ 		/*
+ 		 * If we get called here because of an IO error, we may or may
+ 		 * not have the item on the AIL. xfs_trans_ail_delete() will
+@@ -468,13 +456,13 @@ xfs_buf_item_unpin(
+ 			ASSERT(bp->b_log_item == NULL);
+ 		}
+ 		xfs_buf_relse(bp);
+-	} else if (freed && remove) {
++	} else if (remove) {
+ 		/*
+ 		 * The buffer must be locked and held by the caller to simulate
+-		 * an async I/O failure.
++		 * an async I/O failure. We acquired the hold for this case
++		 * before the buffer was unpinned.
+ 		 */
+ 		xfs_buf_lock(bp);
+-		xfs_buf_hold(bp);
+ 		bp->b_flags |= XBF_ASYNC;
+ 		xfs_buf_ioend_fail(bp);
+ 	}
+@@ -632,7 +620,7 @@ xfs_buf_item_release(
+ STATIC void
+ xfs_buf_item_committing(
+ 	struct xfs_log_item	*lip,
+-	xfs_lsn_t		commit_lsn)
++	xfs_csn_t		seq)
+ {
+ 	return xfs_buf_item_release(lip);
+ }
+diff --git a/fs/xfs/xfs_buf_item_recover.c b/fs/xfs/xfs_buf_item_recover.c
+index 1d649462d731a..b374c9cee1177 100644
+--- a/fs/xfs/xfs_buf_item_recover.c
++++ b/fs/xfs/xfs_buf_item_recover.c
+@@ -796,6 +796,7 @@ xlog_recover_get_buf_lsn(
+ 	switch (magicda) {
+ 	case XFS_DIR3_LEAF1_MAGIC:
+ 	case XFS_DIR3_LEAFN_MAGIC:
++	case XFS_ATTR3_LEAF_MAGIC:
+ 	case XFS_DA3_NODE_MAGIC:
+ 		lsn = be64_to_cpu(((struct xfs_da3_blkinfo *)blk)->lsn);
+ 		uuid = &((struct xfs_da3_blkinfo *)blk)->uuid;
+diff --git a/fs/xfs/xfs_dquot_item.c b/fs/xfs/xfs_dquot_item.c
+index 8c1fdf37ee8f0..8ed47b739b6cc 100644
+--- a/fs/xfs/xfs_dquot_item.c
++++ b/fs/xfs/xfs_dquot_item.c
+@@ -188,7 +188,7 @@ xfs_qm_dquot_logitem_release(
+ STATIC void
+ xfs_qm_dquot_logitem_committing(
+ 	struct xfs_log_item	*lip,
+-	xfs_lsn_t		commit_lsn)
++	xfs_csn_t		seq)
+ {
+ 	return xfs_qm_dquot_logitem_release(lip);
+ }
+diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
+index 5b0f93f738372..4d6bf8d4974fe 100644
+--- a/fs/xfs/xfs_file.c
++++ b/fs/xfs/xfs_file.c
+@@ -118,6 +118,54 @@ xfs_dir_fsync(
+ 	return xfs_log_force_inode(ip);
+ }
+ 
++static xfs_csn_t
++xfs_fsync_seq(
++	struct xfs_inode	*ip,
++	bool			datasync)
++{
++	if (!xfs_ipincount(ip))
++		return 0;
++	if (datasync && !(ip->i_itemp->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP))
++		return 0;
++	return ip->i_itemp->ili_commit_seq;
++}
++
++/*
++ * All metadata updates are logged, which means that we just have to flush the
++ * log up to the latest LSN that touched the inode.
++ *
++ * If we have concurrent fsync/fdatasync() calls, we need them to all block on
++ * the log force before we clear the ili_fsync_fields field. This ensures that
++ * we don't get a racing sync operation that does not wait for the metadata to
++ * hit the journal before returning.  If we race with clearing ili_fsync_fields,
++ * then all that will happen is the log force will do nothing as the lsn will
++ * already be on disk.  We can't race with setting ili_fsync_fields because that
++ * is done under XFS_ILOCK_EXCL, and that can't happen because we hold the lock
++ * shared until after the ili_fsync_fields is cleared.
++ */
++static  int
++xfs_fsync_flush_log(
++	struct xfs_inode	*ip,
++	bool			datasync,
++	int			*log_flushed)
++{
++	int			error = 0;
++	xfs_csn_t		seq;
++
++	xfs_ilock(ip, XFS_ILOCK_SHARED);
++	seq = xfs_fsync_seq(ip, datasync);
++	if (seq) {
++		error = xfs_log_force_seq(ip->i_mount, seq, XFS_LOG_SYNC,
++					  log_flushed);
++
++		spin_lock(&ip->i_itemp->ili_lock);
++		ip->i_itemp->ili_fsync_fields = 0;
++		spin_unlock(&ip->i_itemp->ili_lock);
++	}
++	xfs_iunlock(ip, XFS_ILOCK_SHARED);
++	return error;
++}
++
+ STATIC int
+ xfs_file_fsync(
+ 	struct file		*file,
+@@ -125,13 +173,10 @@ xfs_file_fsync(
+ 	loff_t			end,
+ 	int			datasync)
+ {
+-	struct inode		*inode = file->f_mapping->host;
+-	struct xfs_inode	*ip = XFS_I(inode);
+-	struct xfs_inode_log_item *iip = ip->i_itemp;
++	struct xfs_inode	*ip = XFS_I(file->f_mapping->host);
+ 	struct xfs_mount	*mp = ip->i_mount;
+ 	int			error = 0;
+ 	int			log_flushed = 0;
+-	xfs_lsn_t		lsn = 0;
+ 
+ 	trace_xfs_file_fsync(ip);
+ 
+@@ -155,33 +200,7 @@ xfs_file_fsync(
+ 	else if (mp->m_logdev_targp != mp->m_ddev_targp)
+ 		xfs_blkdev_issue_flush(mp->m_ddev_targp);
+ 
+-	/*
+-	 * All metadata updates are logged, which means that we just have to
+-	 * flush the log up to the latest LSN that touched the inode. If we have
+-	 * concurrent fsync/fdatasync() calls, we need them to all block on the
+-	 * log force before we clear the ili_fsync_fields field. This ensures
+-	 * that we don't get a racing sync operation that does not wait for the
+-	 * metadata to hit the journal before returning. If we race with
+-	 * clearing the ili_fsync_fields, then all that will happen is the log
+-	 * force will do nothing as the lsn will already be on disk. We can't
+-	 * race with setting ili_fsync_fields because that is done under
+-	 * XFS_ILOCK_EXCL, and that can't happen because we hold the lock shared
+-	 * until after the ili_fsync_fields is cleared.
+-	 */
+-	xfs_ilock(ip, XFS_ILOCK_SHARED);
+-	if (xfs_ipincount(ip)) {
+-		if (!datasync ||
+-		    (iip->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP))
+-			lsn = iip->ili_last_lsn;
+-	}
+-
+-	if (lsn) {
+-		error = xfs_log_force_lsn(mp, lsn, XFS_LOG_SYNC, &log_flushed);
+-		spin_lock(&iip->ili_lock);
+-		iip->ili_fsync_fields = 0;
+-		spin_unlock(&iip->ili_lock);
+-	}
+-	xfs_iunlock(ip, XFS_ILOCK_SHARED);
++	error = xfs_fsync_flush_log(ip, datasync, &log_flushed);
+ 
+ 	/*
+ 	 * If we only have a single device, and the log force about was
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index 03497741aef74..1f61e085676b3 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -2754,7 +2754,7 @@ xfs_iunpin(
+ 	trace_xfs_inode_unpin_nowait(ip, _RET_IP_);
+ 
+ 	/* Give the log a push to start the unpinning I/O */
+-	xfs_log_force_lsn(ip->i_mount, ip->i_itemp->ili_last_lsn, 0, NULL);
++	xfs_log_force_seq(ip->i_mount, ip->i_itemp->ili_commit_seq, 0, NULL);
+ 
+ }
+ 
+@@ -3716,16 +3716,16 @@ int
+ xfs_log_force_inode(
+ 	struct xfs_inode	*ip)
+ {
+-	xfs_lsn_t		lsn = 0;
++	xfs_csn_t		seq = 0;
+ 
+ 	xfs_ilock(ip, XFS_ILOCK_SHARED);
+ 	if (xfs_ipincount(ip))
+-		lsn = ip->i_itemp->ili_last_lsn;
++		seq = ip->i_itemp->ili_commit_seq;
+ 	xfs_iunlock(ip, XFS_ILOCK_SHARED);
+ 
+-	if (!lsn)
++	if (!seq)
+ 		return 0;
+-	return xfs_log_force_lsn(ip->i_mount, lsn, XFS_LOG_SYNC, NULL);
++	return xfs_log_force_seq(ip->i_mount, seq, XFS_LOG_SYNC, NULL);
+ }
+ 
+ /*
+diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c
+index 6ff91e5bf3cd7..3aba4559469f1 100644
+--- a/fs/xfs/xfs_inode_item.c
++++ b/fs/xfs/xfs_inode_item.c
+@@ -617,9 +617,9 @@ xfs_inode_item_committed(
+ STATIC void
+ xfs_inode_item_committing(
+ 	struct xfs_log_item	*lip,
+-	xfs_lsn_t		commit_lsn)
++	xfs_csn_t		seq)
+ {
+-	INODE_ITEM(lip)->ili_last_lsn = commit_lsn;
++	INODE_ITEM(lip)->ili_commit_seq = seq;
+ 	return xfs_inode_item_release(lip);
+ }
+ 
+diff --git a/fs/xfs/xfs_inode_item.h b/fs/xfs/xfs_inode_item.h
+index 4b926e32831c0..403b45ab9aa28 100644
+--- a/fs/xfs/xfs_inode_item.h
++++ b/fs/xfs/xfs_inode_item.h
+@@ -33,7 +33,7 @@ struct xfs_inode_log_item {
+ 	unsigned int		ili_fields;	   /* fields to be logged */
+ 	unsigned int		ili_fsync_fields;  /* logged since last fsync */
+ 	xfs_lsn_t		ili_flush_lsn;	   /* lsn at last flush */
+-	xfs_lsn_t		ili_last_lsn;	   /* lsn at last transaction */
++	xfs_csn_t		ili_commit_seq;	   /* last transaction commit */
+ };
+ 
+ static inline int xfs_inode_clean(struct xfs_inode *ip)
+diff --git a/fs/xfs/xfs_inode_item_recover.c b/fs/xfs/xfs_inode_item_recover.c
+index cb44f7653f03b..538724f9f85ca 100644
+--- a/fs/xfs/xfs_inode_item_recover.c
++++ b/fs/xfs/xfs_inode_item_recover.c
+@@ -145,7 +145,8 @@ xfs_log_dinode_to_disk_ts(
+ STATIC void
+ xfs_log_dinode_to_disk(
+ 	struct xfs_log_dinode	*from,
+-	struct xfs_dinode	*to)
++	struct xfs_dinode	*to,
++	xfs_lsn_t		lsn)
+ {
+ 	to->di_magic = cpu_to_be16(from->di_magic);
+ 	to->di_mode = cpu_to_be16(from->di_mode);
+@@ -182,7 +183,7 @@ xfs_log_dinode_to_disk(
+ 		to->di_flags2 = cpu_to_be64(from->di_flags2);
+ 		to->di_cowextsize = cpu_to_be32(from->di_cowextsize);
+ 		to->di_ino = cpu_to_be64(from->di_ino);
+-		to->di_lsn = cpu_to_be64(from->di_lsn);
++		to->di_lsn = cpu_to_be64(lsn);
+ 		memcpy(to->di_pad2, from->di_pad2, sizeof(to->di_pad2));
+ 		uuid_copy(&to->di_uuid, &from->di_uuid);
+ 		to->di_flushiter = 0;
+@@ -261,16 +262,25 @@ xlog_recover_inode_commit_pass2(
+ 	}
+ 
+ 	/*
+-	 * If the inode has an LSN in it, recover the inode only if it's less
+-	 * than the lsn of the transaction we are replaying. Note: we still
+-	 * need to replay an owner change even though the inode is more recent
+-	 * than the transaction as there is no guarantee that all the btree
+-	 * blocks are more recent than this transaction, too.
++	 * If the inode has an LSN in it, recover the inode only if the on-disk
++	 * inode's LSN is older than the lsn of the transaction we are
++	 * replaying. We can have multiple checkpoints with the same start LSN,
++	 * so the current LSN being equal to the on-disk LSN doesn't necessarily
++	 * mean that the on-disk inode is more recent than the change being
++	 * replayed.
++	 *
++	 * We must check the current_lsn against the on-disk inode
++	 * here because the we can't trust the log dinode to contain a valid LSN
++	 * (see comment below before replaying the log dinode for details).
++	 *
++	 * Note: we still need to replay an owner change even though the inode
++	 * is more recent than the transaction as there is no guarantee that all
++	 * the btree blocks are more recent than this transaction, too.
+ 	 */
+ 	if (dip->di_version >= 3) {
+ 		xfs_lsn_t	lsn = be64_to_cpu(dip->di_lsn);
+ 
+-		if (lsn && lsn != -1 && XFS_LSN_CMP(lsn, current_lsn) >= 0) {
++		if (lsn && lsn != -1 && XFS_LSN_CMP(lsn, current_lsn) > 0) {
+ 			trace_xfs_log_recover_inode_skip(log, in_f);
+ 			error = 0;
+ 			goto out_owner_change;
+@@ -368,8 +378,17 @@ xlog_recover_inode_commit_pass2(
+ 		goto out_release;
+ 	}
+ 
+-	/* recover the log dinode inode into the on disk inode */
+-	xfs_log_dinode_to_disk(ldip, dip);
++	/*
++	 * Recover the log dinode inode into the on disk inode.
++	 *
++	 * The LSN in the log dinode is garbage - it can be zero or reflect
++	 * stale in-memory runtime state that isn't coherent with the changes
++	 * logged in this transaction or the changes written to the on-disk
++	 * inode.  Hence we write the current lSN into the inode because that
++	 * matches what xfs_iflush() would write inode the inode when flushing
++	 * the changes in this transaction.
++	 */
++	xfs_log_dinode_to_disk(ldip, dip, current_lsn);
+ 
+ 	fields = in_f->ilf_fields;
+ 	if (fields & XFS_ILOG_DEV)
+diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
+index b445e63cbc3c7..22d7d74231d42 100644
+--- a/fs/xfs/xfs_log.c
++++ b/fs/xfs/xfs_log.c
+@@ -765,6 +765,9 @@ xfs_log_mount_finish(
+ 	if (readonly)
+ 		mp->m_flags |= XFS_MOUNT_RDONLY;
+ 
++	/* Make sure the log is dead if we're returning failure. */
++	ASSERT(!error || (mp->m_log->l_flags & XLOG_IO_ERROR));
++
+ 	return error;
+ }
+ 
+@@ -3210,14 +3213,13 @@ out_error:
+ }
+ 
+ static int
+-__xfs_log_force_lsn(
+-	struct xfs_mount	*mp,
++xlog_force_lsn(
++	struct xlog		*log,
+ 	xfs_lsn_t		lsn,
+ 	uint			flags,
+ 	int			*log_flushed,
+ 	bool			already_slept)
+ {
+-	struct xlog		*log = mp->m_log;
+ 	struct xlog_in_core	*iclog;
+ 
+ 	spin_lock(&log->l_icloglock);
+@@ -3250,8 +3252,6 @@ __xfs_log_force_lsn(
+ 		if (!already_slept &&
+ 		    (iclog->ic_prev->ic_state == XLOG_STATE_WANT_SYNC ||
+ 		     iclog->ic_prev->ic_state == XLOG_STATE_SYNCING)) {
+-			XFS_STATS_INC(mp, xs_log_force_sleep);
+-
+ 			xlog_wait(&iclog->ic_prev->ic_write_wait,
+ 					&log->l_icloglock);
+ 			return -EAGAIN;
+@@ -3289,25 +3289,29 @@ out_error:
+  * to disk, that thread will wake up all threads waiting on the queue.
+  */
+ int
+-xfs_log_force_lsn(
++xfs_log_force_seq(
+ 	struct xfs_mount	*mp,
+-	xfs_lsn_t		lsn,
++	xfs_csn_t		seq,
+ 	uint			flags,
+ 	int			*log_flushed)
+ {
++	struct xlog		*log = mp->m_log;
++	xfs_lsn_t		lsn;
+ 	int			ret;
+-	ASSERT(lsn != 0);
++	ASSERT(seq != 0);
+ 
+ 	XFS_STATS_INC(mp, xs_log_force);
+-	trace_xfs_log_force(mp, lsn, _RET_IP_);
++	trace_xfs_log_force(mp, seq, _RET_IP_);
+ 
+-	lsn = xlog_cil_force_lsn(mp->m_log, lsn);
++	lsn = xlog_cil_force_seq(log, seq);
+ 	if (lsn == NULLCOMMITLSN)
+ 		return 0;
+ 
+-	ret = __xfs_log_force_lsn(mp, lsn, flags, log_flushed, false);
+-	if (ret == -EAGAIN)
+-		ret = __xfs_log_force_lsn(mp, lsn, flags, log_flushed, true);
++	ret = xlog_force_lsn(log, lsn, flags, log_flushed, false);
++	if (ret == -EAGAIN) {
++		XFS_STATS_INC(mp, xs_log_force_sleep);
++		ret = xlog_force_lsn(log, lsn, flags, log_flushed, true);
++	}
+ 	return ret;
+ }
+ 
+diff --git a/fs/xfs/xfs_log.h b/fs/xfs/xfs_log.h
+index 98c913da7587e..a1089f8b7169b 100644
+--- a/fs/xfs/xfs_log.h
++++ b/fs/xfs/xfs_log.h
+@@ -106,7 +106,7 @@ struct xfs_item_ops;
+ struct xfs_trans;
+ 
+ int	  xfs_log_force(struct xfs_mount *mp, uint flags);
+-int	  xfs_log_force_lsn(struct xfs_mount *mp, xfs_lsn_t lsn, uint flags,
++int	  xfs_log_force_seq(struct xfs_mount *mp, xfs_csn_t seq, uint flags,
+ 		int *log_forced);
+ int	  xfs_log_mount(struct xfs_mount	*mp,
+ 			struct xfs_buftarg	*log_target,
+@@ -132,8 +132,6 @@ bool	xfs_log_writable(struct xfs_mount *mp);
+ struct xlog_ticket *xfs_log_ticket_get(struct xlog_ticket *ticket);
+ void	  xfs_log_ticket_put(struct xlog_ticket *ticket);
+ 
+-void	xfs_log_commit_cil(struct xfs_mount *mp, struct xfs_trans *tp,
+-				xfs_lsn_t *commit_lsn, bool regrant);
+ void	xlog_cil_process_committed(struct list_head *list);
+ bool	xfs_log_item_in_current_chkpt(struct xfs_log_item *lip);
+ 
+diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c
+index cd5c04dabe2e1..fbe160d5e9b96 100644
+--- a/fs/xfs/xfs_log_cil.c
++++ b/fs/xfs/xfs_log_cil.c
+@@ -777,7 +777,7 @@ xlog_cil_push_work(
+ 	 * that higher sequences will wait for us to write out a commit record
+ 	 * before they do.
+ 	 *
+-	 * xfs_log_force_lsn requires us to mirror the new sequence into the cil
++	 * xfs_log_force_seq requires us to mirror the new sequence into the cil
+ 	 * structure atomically with the addition of this sequence to the
+ 	 * committing list. This also ensures that we can do unlocked checks
+ 	 * against the current sequence in log forces without risking
+@@ -1020,16 +1020,14 @@ xlog_cil_empty(
+  * allowed again.
+  */
+ void
+-xfs_log_commit_cil(
+-	struct xfs_mount	*mp,
++xlog_cil_commit(
++	struct xlog		*log,
+ 	struct xfs_trans	*tp,
+-	xfs_lsn_t		*commit_lsn,
++	xfs_csn_t		*commit_seq,
+ 	bool			regrant)
+ {
+-	struct xlog		*log = mp->m_log;
+ 	struct xfs_cil		*cil = log->l_cilp;
+ 	struct xfs_log_item	*lip, *next;
+-	xfs_lsn_t		xc_commit_lsn;
+ 
+ 	/*
+ 	 * Do all necessary memory allocation before we lock the CIL.
+@@ -1043,10 +1041,6 @@ xfs_log_commit_cil(
+ 
+ 	xlog_cil_insert_items(log, tp);
+ 
+-	xc_commit_lsn = cil->xc_ctx->sequence;
+-	if (commit_lsn)
+-		*commit_lsn = xc_commit_lsn;
+-
+ 	if (regrant && !XLOG_FORCED_SHUTDOWN(log))
+ 		xfs_log_ticket_regrant(log, tp->t_ticket);
+ 	else
+@@ -1069,8 +1063,10 @@ xfs_log_commit_cil(
+ 	list_for_each_entry_safe(lip, next, &tp->t_items, li_trans) {
+ 		xfs_trans_del_item(lip);
+ 		if (lip->li_ops->iop_committing)
+-			lip->li_ops->iop_committing(lip, xc_commit_lsn);
++			lip->li_ops->iop_committing(lip, cil->xc_ctx->sequence);
+ 	}
++	if (commit_seq)
++		*commit_seq = cil->xc_ctx->sequence;
+ 
+ 	/* xlog_cil_push_background() releases cil->xc_ctx_lock */
+ 	xlog_cil_push_background(log);
+@@ -1087,9 +1083,9 @@ xfs_log_commit_cil(
+  * iclog flush is necessary following this call.
+  */
+ xfs_lsn_t
+-xlog_cil_force_lsn(
++xlog_cil_force_seq(
+ 	struct xlog	*log,
+-	xfs_lsn_t	sequence)
++	xfs_csn_t	sequence)
+ {
+ 	struct xfs_cil		*cil = log->l_cilp;
+ 	struct xfs_cil_ctx	*ctx;
+@@ -1183,23 +1179,19 @@ out_shutdown:
+  */
+ bool
+ xfs_log_item_in_current_chkpt(
+-	struct xfs_log_item *lip)
++	struct xfs_log_item	*lip)
+ {
+-	struct xfs_cil_ctx *ctx;
++	struct xfs_cil		*cil = lip->li_mountp->m_log->l_cilp;
+ 
+ 	if (list_empty(&lip->li_cil))
+ 		return false;
+ 
+-	ctx = lip->li_mountp->m_log->l_cilp->xc_ctx;
+-
+ 	/*
+ 	 * li_seq is written on the first commit of a log item to record the
+ 	 * first checkpoint it is written to. Hence if it is different to the
+ 	 * current sequence, we're in a new checkpoint.
+ 	 */
+-	if (XFS_LSN_CMP(lip->li_seq, ctx->sequence) != 0)
+-		return false;
+-	return true;
++	return lip->li_seq == READ_ONCE(cil->xc_current_sequence);
+ }
+ 
+ /*
+diff --git a/fs/xfs/xfs_log_priv.h b/fs/xfs/xfs_log_priv.h
+index 1c6fdbf3d5066..42cd1602ac256 100644
+--- a/fs/xfs/xfs_log_priv.h
++++ b/fs/xfs/xfs_log_priv.h
+@@ -230,7 +230,7 @@ struct xfs_cil;
+ 
+ struct xfs_cil_ctx {
+ 	struct xfs_cil		*cil;
+-	xfs_lsn_t		sequence;	/* chkpt sequence # */
++	xfs_csn_t		sequence;	/* chkpt sequence # */
+ 	xfs_lsn_t		start_lsn;	/* first LSN of chkpt commit */
+ 	xfs_lsn_t		commit_lsn;	/* chkpt commit record lsn */
+ 	struct xlog_ticket	*ticket;	/* chkpt ticket */
+@@ -268,10 +268,10 @@ struct xfs_cil {
+ 	struct xfs_cil_ctx	*xc_ctx;
+ 
+ 	spinlock_t		xc_push_lock ____cacheline_aligned_in_smp;
+-	xfs_lsn_t		xc_push_seq;
++	xfs_csn_t		xc_push_seq;
+ 	struct list_head	xc_committing;
+ 	wait_queue_head_t	xc_commit_wait;
+-	xfs_lsn_t		xc_current_sequence;
++	xfs_csn_t		xc_current_sequence;
+ 	struct work_struct	xc_push_work;
+ 	wait_queue_head_t	xc_push_wait;	/* background push throttle */
+ } ____cacheline_aligned_in_smp;
+@@ -547,19 +547,18 @@ int	xlog_cil_init(struct xlog *log);
+ void	xlog_cil_init_post_recovery(struct xlog *log);
+ void	xlog_cil_destroy(struct xlog *log);
+ bool	xlog_cil_empty(struct xlog *log);
++void	xlog_cil_commit(struct xlog *log, struct xfs_trans *tp,
++			xfs_csn_t *commit_seq, bool regrant);
+ 
+ /*
+  * CIL force routines
+  */
+-xfs_lsn_t
+-xlog_cil_force_lsn(
+-	struct xlog *log,
+-	xfs_lsn_t sequence);
++xfs_lsn_t xlog_cil_force_seq(struct xlog *log, xfs_csn_t sequence);
+ 
+ static inline void
+ xlog_cil_force(struct xlog *log)
+ {
+-	xlog_cil_force_lsn(log, log->l_cilp->xc_current_sequence);
++	xlog_cil_force_seq(log, log->l_cilp->xc_current_sequence);
+ }
+ 
+ /*
+diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
+index 87886b7f77dad..69408782019eb 100644
+--- a/fs/xfs/xfs_log_recover.c
++++ b/fs/xfs/xfs_log_recover.c
+@@ -2457,8 +2457,10 @@ xlog_finish_defer_ops(
+ 
+ 		error = xfs_trans_alloc(mp, &resv, dfc->dfc_blkres,
+ 				dfc->dfc_rtxres, XFS_TRANS_RESERVE, &tp);
+-		if (error)
++		if (error) {
++			xfs_force_shutdown(mp, SHUTDOWN_LOG_IO_ERROR);
+ 			return error;
++		}
+ 
+ 		/*
+ 		 * Transfer to this new transaction all the dfops we captured
+@@ -3454,6 +3456,7 @@ xlog_recover_finish(
+ 			 * this) before we get around to xfs_log_mount_cancel.
+ 			 */
+ 			xlog_recover_cancel_intents(log);
++			xfs_force_shutdown(log->l_mp, SHUTDOWN_LOG_IO_ERROR);
+ 			xfs_alert(log->l_mp, "Failed to recover intents");
+ 			return error;
+ 		}
+diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
+index 44b05e1d5d327..a2a5a0fd92334 100644
+--- a/fs/xfs/xfs_mount.c
++++ b/fs/xfs/xfs_mount.c
+@@ -968,9 +968,17 @@ xfs_mountfs(
+ 	/*
+ 	 * Finish recovering the file system.  This part needed to be delayed
+ 	 * until after the root and real-time bitmap inodes were consistently
+-	 * read in.
++	 * read in.  Temporarily create per-AG space reservations for metadata
++	 * btree shape changes because space freeing transactions (for inode
++	 * inactivation) require the per-AG reservation in lieu of reserving
++	 * blocks.
+ 	 */
++	error = xfs_fs_reserve_ag_blocks(mp);
++	if (error && error == -ENOSPC)
++		xfs_warn(mp,
++	"ENOSPC reserving per-AG metadata pool, log recovery may fail.");
+ 	error = xfs_log_mount_finish(mp);
++	xfs_fs_unreserve_ag_blocks(mp);
+ 	if (error) {
+ 		xfs_warn(mp, "log mount finish failed");
+ 		goto out_rtunmount;
+diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
+index 36166bae24a6f..73a1de7ceefc9 100644
+--- a/fs/xfs/xfs_trans.c
++++ b/fs/xfs/xfs_trans.c
+@@ -832,7 +832,7 @@ __xfs_trans_commit(
+ 	bool			regrant)
+ {
+ 	struct xfs_mount	*mp = tp->t_mountp;
+-	xfs_lsn_t		commit_lsn = -1;
++	xfs_csn_t		commit_seq = 0;
+ 	int			error = 0;
+ 	int			sync = tp->t_flags & XFS_TRANS_SYNC;
+ 
+@@ -874,7 +874,7 @@ __xfs_trans_commit(
+ 		xfs_trans_apply_sb_deltas(tp);
+ 	xfs_trans_apply_dquot_deltas(tp);
+ 
+-	xfs_log_commit_cil(mp, tp, &commit_lsn, regrant);
++	xlog_cil_commit(mp->m_log, tp, &commit_seq, regrant);
+ 
+ 	xfs_trans_free(tp);
+ 
+@@ -883,7 +883,7 @@ __xfs_trans_commit(
+ 	 * log out now and wait for it.
+ 	 */
+ 	if (sync) {
+-		error = xfs_log_force_lsn(mp, commit_lsn, XFS_LOG_SYNC, NULL);
++		error = xfs_log_force_seq(mp, commit_seq, XFS_LOG_SYNC, NULL);
+ 		XFS_STATS_INC(mp, xs_trans_sync);
+ 	} else {
+ 		XFS_STATS_INC(mp, xs_trans_async);
+diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h
+index 075eeade4f7d5..97485559008bb 100644
+--- a/fs/xfs/xfs_trans.h
++++ b/fs/xfs/xfs_trans.h
+@@ -43,7 +43,7 @@ struct xfs_log_item {
+ 	struct list_head		li_cil;		/* CIL pointers */
+ 	struct xfs_log_vec		*li_lv;		/* active log vector */
+ 	struct xfs_log_vec		*li_lv_shadow;	/* standby vector */
+-	xfs_lsn_t			li_seq;		/* CIL commit seq */
++	xfs_csn_t			li_seq;		/* CIL commit seq */
+ };
+ 
+ /*
+@@ -69,7 +69,7 @@ struct xfs_item_ops {
+ 	void (*iop_pin)(struct xfs_log_item *);
+ 	void (*iop_unpin)(struct xfs_log_item *, int remove);
+ 	uint (*iop_push)(struct xfs_log_item *, struct list_head *);
+-	void (*iop_committing)(struct xfs_log_item *, xfs_lsn_t commit_lsn);
++	void (*iop_committing)(struct xfs_log_item *lip, xfs_csn_t seq);
+ 	void (*iop_release)(struct xfs_log_item *);
+ 	xfs_lsn_t (*iop_committed)(struct xfs_log_item *, xfs_lsn_t);
+ 	int (*iop_recover)(struct xfs_log_item *lip,
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index f21bc441e3fa8..b010d45a1ecd5 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1457,6 +1457,9 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
+ int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
+ 			     const union bpf_attr *kattr,
+ 			     union bpf_attr __user *uattr);
++int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
++				const union bpf_attr *kattr,
++				union bpf_attr __user *uattr);
+ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ 		    const struct bpf_prog *prog,
+ 		    struct bpf_insn_access_aux *info);
+@@ -1671,6 +1674,13 @@ static inline int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
+ 	return -ENOTSUPP;
+ }
+ 
++static inline int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
++					      const union bpf_attr *kattr,
++					      union bpf_attr __user *uattr)
++{
++	return -ENOTSUPP;
++}
++
+ static inline void bpf_map_put(struct bpf_map *map)
+ {
+ }
+diff --git a/include/net/addrconf.h b/include/net/addrconf.h
+index e7ce719838b5e..edba74a536839 100644
+--- a/include/net/addrconf.h
++++ b/include/net/addrconf.h
+@@ -405,6 +405,9 @@ static inline bool ip6_ignore_linkdown(const struct net_device *dev)
+ {
+ 	const struct inet6_dev *idev = __in6_dev_get(dev);
+ 
++	if (unlikely(!idev))
++		return true;
++
+ 	return !!idev->cnf.ignore_routes_with_linkdown;
+ }
+ 
+diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h
+index 1d1232917de72..9b8000869b078 100644
+--- a/include/net/bluetooth/l2cap.h
++++ b/include/net/bluetooth/l2cap.h
+@@ -845,6 +845,7 @@ enum {
+ };
+ 
+ void l2cap_chan_hold(struct l2cap_chan *c);
++struct l2cap_chan *l2cap_chan_hold_unless_zero(struct l2cap_chan *c);
+ void l2cap_chan_put(struct l2cap_chan *c);
+ 
+ static inline void l2cap_chan_lock(struct l2cap_chan *chan)
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index 0b1864a82d4ad..ff901aade442f 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -317,7 +317,7 @@ void inet_csk_update_fastreuse(struct inet_bind_bucket *tb,
+ 
+ struct dst_entry *inet_csk_update_pmtu(struct sock *sk, u32 mtu);
+ 
+-#define TCP_PINGPONG_THRESH	3
++#define TCP_PINGPONG_THRESH	1
+ 
+ static inline void inet_csk_enter_pingpong_mode(struct sock *sk)
+ {
+@@ -334,14 +334,6 @@ static inline bool inet_csk_in_pingpong_mode(struct sock *sk)
+ 	return inet_csk(sk)->icsk_ack.pingpong >= TCP_PINGPONG_THRESH;
+ }
+ 
+-static inline void inet_csk_inc_pingpong_cnt(struct sock *sk)
+-{
+-	struct inet_connection_sock *icsk = inet_csk(sk);
+-
+-	if (icsk->icsk_ack.pingpong < U8_MAX)
+-		icsk->icsk_ack.pingpong++;
+-}
+-
+ static inline bool inet_csk_has_ulp(struct sock *sk)
+ {
+ 	return inet_sk(sk)->is_icsk && !!inet_csk(sk)->icsk_ulp_ops;
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 44bfb22069c1f..8129ce9a07719 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1396,7 +1396,7 @@ void tcp_select_initial_window(const struct sock *sk, int __space,
+ 
+ static inline int tcp_win_from_space(const struct sock *sk, int space)
+ {
+-	int tcp_adv_win_scale = sock_net(sk)->ipv4.sysctl_tcp_adv_win_scale;
++	int tcp_adv_win_scale = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_adv_win_scale);
+ 
+ 	return tcp_adv_win_scale <= 0 ?
+ 		(space>>(-tcp_adv_win_scale)) :
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 0f39fdcb2273c..2a234023821e3 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -5007,7 +5007,10 @@ struct bpf_pidns_info {
+ 
+ /* User accessible data for SK_LOOKUP programs. Add new fields at the end. */
+ struct bpf_sk_lookup {
+-	__bpf_md_ptr(struct bpf_sock *, sk); /* Selected socket */
++	union {
++		__bpf_md_ptr(struct bpf_sock *, sk); /* Selected socket */
++		__u64 cookie; /* Non-zero if socket was selected in PROG_TEST_RUN */
++	};
+ 
+ 	__u32 family;		/* Protocol family (AF_INET, AF_INET6) */
+ 	__u32 protocol;		/* IP protocol (IPPROTO_TCP, IPPROTO_UDP) */
+diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
+index e5d22af43fa0b..d29731a30b8e1 100644
+--- a/kernel/watch_queue.c
++++ b/kernel/watch_queue.c
+@@ -457,6 +457,33 @@ void init_watch(struct watch *watch, struct watch_queue *wqueue)
+ 	rcu_assign_pointer(watch->queue, wqueue);
+ }
+ 
++static int add_one_watch(struct watch *watch, struct watch_list *wlist, struct watch_queue *wqueue)
++{
++	const struct cred *cred;
++	struct watch *w;
++
++	hlist_for_each_entry(w, &wlist->watchers, list_node) {
++		struct watch_queue *wq = rcu_access_pointer(w->queue);
++		if (wqueue == wq && watch->id == w->id)
++			return -EBUSY;
++	}
++
++	cred = current_cred();
++	if (atomic_inc_return(&cred->user->nr_watches) > task_rlimit(current, RLIMIT_NOFILE)) {
++		atomic_dec(&cred->user->nr_watches);
++		return -EAGAIN;
++	}
++
++	watch->cred = get_cred(cred);
++	rcu_assign_pointer(watch->watch_list, wlist);
++
++	kref_get(&wqueue->usage);
++	kref_get(&watch->usage);
++	hlist_add_head(&watch->queue_node, &wqueue->watches);
++	hlist_add_head_rcu(&watch->list_node, &wlist->watchers);
++	return 0;
++}
++
+ /**
+  * add_watch_to_object - Add a watch on an object to a watch list
+  * @watch: The watch to add
+@@ -471,34 +498,21 @@ void init_watch(struct watch *watch, struct watch_queue *wqueue)
+  */
+ int add_watch_to_object(struct watch *watch, struct watch_list *wlist)
+ {
+-	struct watch_queue *wqueue = rcu_access_pointer(watch->queue);
+-	struct watch *w;
+-
+-	hlist_for_each_entry(w, &wlist->watchers, list_node) {
+-		struct watch_queue *wq = rcu_access_pointer(w->queue);
+-		if (wqueue == wq && watch->id == w->id)
+-			return -EBUSY;
+-	}
+-
+-	watch->cred = get_current_cred();
+-	rcu_assign_pointer(watch->watch_list, wlist);
++	struct watch_queue *wqueue;
++	int ret = -ENOENT;
+ 
+-	if (atomic_inc_return(&watch->cred->user->nr_watches) >
+-	    task_rlimit(current, RLIMIT_NOFILE)) {
+-		atomic_dec(&watch->cred->user->nr_watches);
+-		put_cred(watch->cred);
+-		return -EAGAIN;
+-	}
++	rcu_read_lock();
+ 
++	wqueue = rcu_access_pointer(watch->queue);
+ 	if (lock_wqueue(wqueue)) {
+-		kref_get(&wqueue->usage);
+-		kref_get(&watch->usage);
+-		hlist_add_head(&watch->queue_node, &wqueue->watches);
++		spin_lock(&wlist->lock);
++		ret = add_one_watch(watch, wlist, wqueue);
++		spin_unlock(&wlist->lock);
+ 		unlock_wqueue(wqueue);
+ 	}
+ 
+-	hlist_add_head(&watch->list_node, &wlist->watchers);
+-	return 0;
++	rcu_read_unlock();
++	return ret;
+ }
+ EXPORT_SYMBOL(add_watch_to_object);
+ 
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index f3418edb136be..43ff22ce76324 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -3679,11 +3679,15 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
+ 	 * need to be calculated.
+ 	 */
+ 	if (!order) {
+-		long fast_free;
++		long usable_free;
++		long reserved;
+ 
+-		fast_free = free_pages;
+-		fast_free -= __zone_watermark_unusable_free(z, 0, alloc_flags);
+-		if (fast_free > mark + z->lowmem_reserve[highest_zoneidx])
++		usable_free = free_pages;
++		reserved = __zone_watermark_unusable_free(z, 0, alloc_flags);
++
++		/* reserved may over estimate high-atomic reserves. */
++		usable_free -= min(usable_free, reserved);
++		if (usable_free > mark + z->lowmem_reserve[highest_zoneidx])
+ 			return true;
+ 	}
+ 
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 2557cd917f5ed..6a5ff5dcc09a9 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -111,7 +111,8 @@ static struct l2cap_chan *__l2cap_get_chan_by_scid(struct l2cap_conn *conn,
+ }
+ 
+ /* Find channel with given SCID.
+- * Returns locked channel. */
++ * Returns a reference locked channel.
++ */
+ static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn,
+ 						 u16 cid)
+ {
+@@ -119,15 +120,19 @@ static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn,
+ 
+ 	mutex_lock(&conn->chan_lock);
+ 	c = __l2cap_get_chan_by_scid(conn, cid);
+-	if (c)
+-		l2cap_chan_lock(c);
++	if (c) {
++		/* Only lock if chan reference is not 0 */
++		c = l2cap_chan_hold_unless_zero(c);
++		if (c)
++			l2cap_chan_lock(c);
++	}
+ 	mutex_unlock(&conn->chan_lock);
+ 
+ 	return c;
+ }
+ 
+ /* Find channel with given DCID.
+- * Returns locked channel.
++ * Returns a reference locked channel.
+  */
+ static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn,
+ 						 u16 cid)
+@@ -136,8 +141,12 @@ static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn,
+ 
+ 	mutex_lock(&conn->chan_lock);
+ 	c = __l2cap_get_chan_by_dcid(conn, cid);
+-	if (c)
+-		l2cap_chan_lock(c);
++	if (c) {
++		/* Only lock if chan reference is not 0 */
++		c = l2cap_chan_hold_unless_zero(c);
++		if (c)
++			l2cap_chan_lock(c);
++	}
+ 	mutex_unlock(&conn->chan_lock);
+ 
+ 	return c;
+@@ -162,8 +171,12 @@ static struct l2cap_chan *l2cap_get_chan_by_ident(struct l2cap_conn *conn,
+ 
+ 	mutex_lock(&conn->chan_lock);
+ 	c = __l2cap_get_chan_by_ident(conn, ident);
+-	if (c)
+-		l2cap_chan_lock(c);
++	if (c) {
++		/* Only lock if chan reference is not 0 */
++		c = l2cap_chan_hold_unless_zero(c);
++		if (c)
++			l2cap_chan_lock(c);
++	}
+ 	mutex_unlock(&conn->chan_lock);
+ 
+ 	return c;
+@@ -497,6 +510,16 @@ void l2cap_chan_hold(struct l2cap_chan *c)
+ 	kref_get(&c->kref);
+ }
+ 
++struct l2cap_chan *l2cap_chan_hold_unless_zero(struct l2cap_chan *c)
++{
++	BT_DBG("chan %p orig refcnt %u", c, kref_read(&c->kref));
++
++	if (!kref_get_unless_zero(&c->kref))
++		return NULL;
++
++	return c;
++}
++
+ void l2cap_chan_put(struct l2cap_chan *c)
+ {
+ 	BT_DBG("chan %p orig refcnt %d", c, kref_read(&c->kref));
+@@ -1965,7 +1988,10 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
+ 			src_match = !bacmp(&c->src, src);
+ 			dst_match = !bacmp(&c->dst, dst);
+ 			if (src_match && dst_match) {
+-				l2cap_chan_hold(c);
++				c = l2cap_chan_hold_unless_zero(c);
++				if (!c)
++					continue;
++
+ 				read_unlock(&chan_list_lock);
+ 				return c;
+ 			}
+@@ -1980,7 +2006,7 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
+ 	}
+ 
+ 	if (c1)
+-		l2cap_chan_hold(c1);
++		c1 = l2cap_chan_hold_unless_zero(c1);
+ 
+ 	read_unlock(&chan_list_lock);
+ 
+@@ -4460,6 +4486,7 @@ static inline int l2cap_config_req(struct l2cap_conn *conn,
+ 
+ unlock:
+ 	l2cap_chan_unlock(chan);
++	l2cap_chan_put(chan);
+ 	return err;
+ }
+ 
+@@ -4573,6 +4600,7 @@ static inline int l2cap_config_rsp(struct l2cap_conn *conn,
+ 
+ done:
+ 	l2cap_chan_unlock(chan);
++	l2cap_chan_put(chan);
+ 	return err;
+ }
+ 
+@@ -5300,6 +5328,7 @@ send_move_response:
+ 	l2cap_send_move_chan_rsp(chan, result);
+ 
+ 	l2cap_chan_unlock(chan);
++	l2cap_chan_put(chan);
+ 
+ 	return 0;
+ }
+@@ -5392,6 +5421,7 @@ static void l2cap_move_continue(struct l2cap_conn *conn, u16 icid, u16 result)
+ 	}
+ 
+ 	l2cap_chan_unlock(chan);
++	l2cap_chan_put(chan);
+ }
+ 
+ static void l2cap_move_fail(struct l2cap_conn *conn, u8 ident, u16 icid,
+@@ -5421,6 +5451,7 @@ static void l2cap_move_fail(struct l2cap_conn *conn, u8 ident, u16 icid,
+ 	l2cap_send_move_chan_cfm(chan, L2CAP_MC_UNCONFIRMED);
+ 
+ 	l2cap_chan_unlock(chan);
++	l2cap_chan_put(chan);
+ }
+ 
+ static int l2cap_move_channel_rsp(struct l2cap_conn *conn,
+@@ -5484,6 +5515,7 @@ static int l2cap_move_channel_confirm(struct l2cap_conn *conn,
+ 	l2cap_send_move_chan_cfm_rsp(conn, cmd->ident, icid);
+ 
+ 	l2cap_chan_unlock(chan);
++	l2cap_chan_put(chan);
+ 
+ 	return 0;
+ }
+@@ -5519,6 +5551,7 @@ static inline int l2cap_move_channel_confirm_rsp(struct l2cap_conn *conn,
+ 	}
+ 
+ 	l2cap_chan_unlock(chan);
++	l2cap_chan_put(chan);
+ 
+ 	return 0;
+ }
+@@ -5891,12 +5924,11 @@ static inline int l2cap_le_credits(struct l2cap_conn *conn,
+ 	if (credits > max_credits) {
+ 		BT_ERR("LE credits overflow");
+ 		l2cap_send_disconn_req(chan, ECONNRESET);
+-		l2cap_chan_unlock(chan);
+ 
+ 		/* Return 0 so that we don't trigger an unnecessary
+ 		 * command reject packet.
+ 		 */
+-		return 0;
++		goto unlock;
+ 	}
+ 
+ 	chan->tx_credits += credits;
+@@ -5907,7 +5939,9 @@ static inline int l2cap_le_credits(struct l2cap_conn *conn,
+ 	if (chan->tx_credits)
+ 		chan->ops->resume(chan);
+ 
++unlock:
+ 	l2cap_chan_unlock(chan);
++	l2cap_chan_put(chan);
+ 
+ 	return 0;
+ }
+@@ -7587,6 +7621,7 @@ drop:
+ 
+ done:
+ 	l2cap_chan_unlock(chan);
++	l2cap_chan_put(chan);
+ }
+ 
+ static void l2cap_conless_channel(struct l2cap_conn *conn, __le16 psm,
+@@ -8074,7 +8109,7 @@ static struct l2cap_chan *l2cap_global_fixed_chan(struct l2cap_chan *c,
+ 		if (src_type != c->src_type)
+ 			continue;
+ 
+-		l2cap_chan_hold(c);
++		c = l2cap_chan_hold_unless_zero(c);
+ 		read_unlock(&chan_list_lock);
+ 		return c;
+ 	}
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index eb684f31fd698..f8b231bbbe381 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -10,20 +10,86 @@
+ #include <net/bpf_sk_storage.h>
+ #include <net/sock.h>
+ #include <net/tcp.h>
++#include <net/net_namespace.h>
+ #include <linux/error-injection.h>
+ #include <linux/smp.h>
++#include <linux/sock_diag.h>
+ 
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/bpf_test_run.h>
+ 
++struct bpf_test_timer {
++	enum { NO_PREEMPT, NO_MIGRATE } mode;
++	u32 i;
++	u64 time_start, time_spent;
++};
++
++static void bpf_test_timer_enter(struct bpf_test_timer *t)
++	__acquires(rcu)
++{
++	rcu_read_lock();
++	if (t->mode == NO_PREEMPT)
++		preempt_disable();
++	else
++		migrate_disable();
++
++	t->time_start = ktime_get_ns();
++}
++
++static void bpf_test_timer_leave(struct bpf_test_timer *t)
++	__releases(rcu)
++{
++	t->time_start = 0;
++
++	if (t->mode == NO_PREEMPT)
++		preempt_enable();
++	else
++		migrate_enable();
++	rcu_read_unlock();
++}
++
++static bool bpf_test_timer_continue(struct bpf_test_timer *t, u32 repeat, int *err, u32 *duration)
++	__must_hold(rcu)
++{
++	t->i++;
++	if (t->i >= repeat) {
++		/* We're done. */
++		t->time_spent += ktime_get_ns() - t->time_start;
++		do_div(t->time_spent, t->i);
++		*duration = t->time_spent > U32_MAX ? U32_MAX : (u32)t->time_spent;
++		*err = 0;
++		goto reset;
++	}
++
++	if (signal_pending(current)) {
++		/* During iteration: we've been cancelled, abort. */
++		*err = -EINTR;
++		goto reset;
++	}
++
++	if (need_resched()) {
++		/* During iteration: we need to reschedule between runs. */
++		t->time_spent += ktime_get_ns() - t->time_start;
++		bpf_test_timer_leave(t);
++		cond_resched();
++		bpf_test_timer_enter(t);
++	}
++
++	/* Do another round. */
++	return true;
++
++reset:
++	t->i = 0;
++	return false;
++}
++
+ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
+ 			u32 *retval, u32 *time, bool xdp)
+ {
+ 	struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE] = { NULL };
++	struct bpf_test_timer t = { NO_MIGRATE };
+ 	enum bpf_cgroup_storage_type stype;
+-	u64 time_start, time_spent = 0;
+-	int ret = 0;
+-	u32 i;
++	int ret;
+ 
+ 	for_each_cgroup_storage_type(stype) {
+ 		storage[stype] = bpf_cgroup_storage_alloc(prog, stype);
+@@ -38,10 +104,8 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
+ 	if (!repeat)
+ 		repeat = 1;
+ 
+-	rcu_read_lock();
+-	migrate_disable();
+-	time_start = ktime_get_ns();
+-	for (i = 0; i < repeat; i++) {
++	bpf_test_timer_enter(&t);
++	do {
+ 		ret = bpf_cgroup_storage_set(storage);
+ 		if (ret)
+ 			break;
+@@ -53,29 +117,8 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
+ 
+ 		bpf_cgroup_storage_unset();
+ 
+-		if (signal_pending(current)) {
+-			ret = -EINTR;
+-			break;
+-		}
+-
+-		if (need_resched()) {
+-			time_spent += ktime_get_ns() - time_start;
+-			migrate_enable();
+-			rcu_read_unlock();
+-
+-			cond_resched();
+-
+-			rcu_read_lock();
+-			migrate_disable();
+-			time_start = ktime_get_ns();
+-		}
+-	}
+-	time_spent += ktime_get_ns() - time_start;
+-	migrate_enable();
+-	rcu_read_unlock();
+-
+-	do_div(time_spent, repeat);
+-	*time = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
++	} while (bpf_test_timer_continue(&t, repeat, &ret, time));
++	bpf_test_timer_leave(&t);
+ 
+ 	for_each_cgroup_storage_type(stype)
+ 		bpf_cgroup_storage_free(storage[stype]);
+@@ -688,18 +731,17 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
+ 				     const union bpf_attr *kattr,
+ 				     union bpf_attr __user *uattr)
+ {
++	struct bpf_test_timer t = { NO_PREEMPT };
+ 	u32 size = kattr->test.data_size_in;
+ 	struct bpf_flow_dissector ctx = {};
+ 	u32 repeat = kattr->test.repeat;
+ 	struct bpf_flow_keys *user_ctx;
+ 	struct bpf_flow_keys flow_keys;
+-	u64 time_start, time_spent = 0;
+ 	const struct ethhdr *eth;
+ 	unsigned int flags = 0;
+ 	u32 retval, duration;
+ 	void *data;
+ 	int ret;
+-	u32 i;
+ 
+ 	if (prog->type != BPF_PROG_TYPE_FLOW_DISSECTOR)
+ 		return -EINVAL;
+@@ -735,48 +777,127 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
+ 	ctx.data = data;
+ 	ctx.data_end = (__u8 *)data + size;
+ 
+-	rcu_read_lock();
+-	preempt_disable();
+-	time_start = ktime_get_ns();
+-	for (i = 0; i < repeat; i++) {
++	bpf_test_timer_enter(&t);
++	do {
+ 		retval = bpf_flow_dissect(prog, &ctx, eth->h_proto, ETH_HLEN,
+ 					  size, flags);
++	} while (bpf_test_timer_continue(&t, repeat, &ret, &duration));
++	bpf_test_timer_leave(&t);
+ 
+-		if (signal_pending(current)) {
+-			preempt_enable();
+-			rcu_read_unlock();
++	if (ret < 0)
++		goto out;
+ 
+-			ret = -EINTR;
+-			goto out;
+-		}
++	ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys),
++			      retval, duration);
++	if (!ret)
++		ret = bpf_ctx_finish(kattr, uattr, user_ctx,
++				     sizeof(struct bpf_flow_keys));
+ 
+-		if (need_resched()) {
+-			time_spent += ktime_get_ns() - time_start;
+-			preempt_enable();
+-			rcu_read_unlock();
++out:
++	kfree(user_ctx);
++	kfree(data);
++	return ret;
++}
+ 
+-			cond_resched();
++int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr,
++				union bpf_attr __user *uattr)
++{
++	struct bpf_test_timer t = { NO_PREEMPT };
++	struct bpf_prog_array *progs = NULL;
++	struct bpf_sk_lookup_kern ctx = {};
++	u32 repeat = kattr->test.repeat;
++	struct bpf_sk_lookup *user_ctx;
++	u32 retval, duration;
++	int ret = -EINVAL;
+ 
+-			rcu_read_lock();
+-			preempt_disable();
+-			time_start = ktime_get_ns();
+-		}
++	if (prog->type != BPF_PROG_TYPE_SK_LOOKUP)
++		return -EINVAL;
++
++	if (kattr->test.flags || kattr->test.cpu)
++		return -EINVAL;
++
++	if (kattr->test.data_in || kattr->test.data_size_in || kattr->test.data_out ||
++	    kattr->test.data_size_out)
++		return -EINVAL;
++
++	if (!repeat)
++		repeat = 1;
++
++	user_ctx = bpf_ctx_init(kattr, sizeof(*user_ctx));
++	if (IS_ERR(user_ctx))
++		return PTR_ERR(user_ctx);
++
++	if (!user_ctx)
++		return -EINVAL;
++
++	if (user_ctx->sk)
++		goto out;
++
++	if (!range_is_zero(user_ctx, offsetofend(typeof(*user_ctx), local_port), sizeof(*user_ctx)))
++		goto out;
++
++	if (user_ctx->local_port > U16_MAX || user_ctx->remote_port > U16_MAX) {
++		ret = -ERANGE;
++		goto out;
+ 	}
+-	time_spent += ktime_get_ns() - time_start;
+-	preempt_enable();
+-	rcu_read_unlock();
+ 
+-	do_div(time_spent, repeat);
+-	duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
++	ctx.family = (u16)user_ctx->family;
++	ctx.protocol = (u16)user_ctx->protocol;
++	ctx.dport = (u16)user_ctx->local_port;
++	ctx.sport = (__force __be16)user_ctx->remote_port;
+ 
+-	ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys),
+-			      retval, duration);
++	switch (ctx.family) {
++	case AF_INET:
++		ctx.v4.daddr = (__force __be32)user_ctx->local_ip4;
++		ctx.v4.saddr = (__force __be32)user_ctx->remote_ip4;
++		break;
++
++#if IS_ENABLED(CONFIG_IPV6)
++	case AF_INET6:
++		ctx.v6.daddr = (struct in6_addr *)user_ctx->local_ip6;
++		ctx.v6.saddr = (struct in6_addr *)user_ctx->remote_ip6;
++		break;
++#endif
++
++	default:
++		ret = -EAFNOSUPPORT;
++		goto out;
++	}
++
++	progs = bpf_prog_array_alloc(1, GFP_KERNEL);
++	if (!progs) {
++		ret = -ENOMEM;
++		goto out;
++	}
++
++	progs->items[0].prog = prog;
++
++	bpf_test_timer_enter(&t);
++	do {
++		ctx.selected_sk = NULL;
++		retval = BPF_PROG_SK_LOOKUP_RUN_ARRAY(progs, ctx, BPF_PROG_RUN);
++	} while (bpf_test_timer_continue(&t, repeat, &ret, &duration));
++	bpf_test_timer_leave(&t);
++
++	if (ret < 0)
++		goto out;
++
++	user_ctx->cookie = 0;
++	if (ctx.selected_sk) {
++		if (ctx.selected_sk->sk_reuseport && !ctx.no_reuseport) {
++			ret = -EOPNOTSUPP;
++			goto out;
++		}
++
++		user_ctx->cookie = sock_gen_cookie(ctx.selected_sk);
++	}
++
++	ret = bpf_test_finish(kattr, uattr, NULL, 0, retval, duration);
+ 	if (!ret)
+-		ret = bpf_ctx_finish(kattr, uattr, user_ctx,
+-				     sizeof(struct bpf_flow_keys));
++		ret = bpf_ctx_finish(kattr, uattr, user_ctx, sizeof(*user_ctx));
+ 
+ out:
++	bpf_prog_array_free(progs);
+ 	kfree(user_ctx);
+-	kfree(data);
+ 	return ret;
+ }
+diff --git a/net/core/filter.c b/net/core/filter.c
+index e2b491665775f..815edf7bc4390 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -10334,6 +10334,7 @@ static u32 sk_lookup_convert_ctx_access(enum bpf_access_type type,
+ }
+ 
+ const struct bpf_prog_ops sk_lookup_prog_ops = {
++	.test_run = bpf_prog_test_run_sk_lookup,
+ };
+ 
+ const struct bpf_verifier_ops sk_lookup_verifier_ops = {
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 428cc3a4c36f1..c71b863093ace 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -827,7 +827,7 @@ static void igmp_ifc_event(struct in_device *in_dev)
+ 	struct net *net = dev_net(in_dev->dev);
+ 	if (IGMP_V1_SEEN(in_dev) || IGMP_V2_SEEN(in_dev))
+ 		return;
+-	WRITE_ONCE(in_dev->mr_ifc_count, in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv);
++	WRITE_ONCE(in_dev->mr_ifc_count, in_dev->mr_qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv));
+ 	igmp_ifc_start_timer(in_dev, 1);
+ }
+ 
+@@ -1009,7 +1009,7 @@ static bool igmp_heard_query(struct in_device *in_dev, struct sk_buff *skb,
+ 		 * received value was zero, use the default or statically
+ 		 * configured value.
+ 		 */
+-		in_dev->mr_qrv = ih3->qrv ?: net->ipv4.sysctl_igmp_qrv;
++		in_dev->mr_qrv = ih3->qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv);
+ 		in_dev->mr_qi = IGMPV3_QQIC(ih3->qqic)*HZ ?: IGMP_QUERY_INTERVAL;
+ 
+ 		/* RFC3376, 8.3. Query Response Interval:
+@@ -1189,7 +1189,7 @@ static void igmpv3_add_delrec(struct in_device *in_dev, struct ip_mc_list *im,
+ 	pmc->interface = im->interface;
+ 	in_dev_hold(in_dev);
+ 	pmc->multiaddr = im->multiaddr;
+-	pmc->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
++	pmc->crcount = in_dev->mr_qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv);
+ 	pmc->sfmode = im->sfmode;
+ 	if (pmc->sfmode == MCAST_INCLUDE) {
+ 		struct ip_sf_list *psf;
+@@ -1240,9 +1240,11 @@ static void igmpv3_del_delrec(struct in_device *in_dev, struct ip_mc_list *im)
+ 			swap(im->tomb, pmc->tomb);
+ 			swap(im->sources, pmc->sources);
+ 			for (psf = im->sources; psf; psf = psf->sf_next)
+-				psf->sf_crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
++				psf->sf_crcount = in_dev->mr_qrv ?:
++					READ_ONCE(net->ipv4.sysctl_igmp_qrv);
+ 		} else {
+-			im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
++			im->crcount = in_dev->mr_qrv ?:
++				READ_ONCE(net->ipv4.sysctl_igmp_qrv);
+ 		}
+ 		in_dev_put(pmc->interface);
+ 		kfree_pmc(pmc);
+@@ -1349,7 +1351,7 @@ static void igmp_group_added(struct ip_mc_list *im)
+ 	if (in_dev->dead)
+ 		return;
+ 
+-	im->unsolicit_count = net->ipv4.sysctl_igmp_qrv;
++	im->unsolicit_count = READ_ONCE(net->ipv4.sysctl_igmp_qrv);
+ 	if (IGMP_V1_SEEN(in_dev) || IGMP_V2_SEEN(in_dev)) {
+ 		spin_lock_bh(&im->lock);
+ 		igmp_start_timer(im, IGMP_INITIAL_REPORT_DELAY);
+@@ -1363,7 +1365,7 @@ static void igmp_group_added(struct ip_mc_list *im)
+ 	 * IN() to IN(A).
+ 	 */
+ 	if (im->sfmode == MCAST_EXCLUDE)
+-		im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
++		im->crcount = in_dev->mr_qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv);
+ 
+ 	igmp_ifc_event(in_dev);
+ #endif
+@@ -1754,7 +1756,7 @@ static void ip_mc_reset(struct in_device *in_dev)
+ 
+ 	in_dev->mr_qi = IGMP_QUERY_INTERVAL;
+ 	in_dev->mr_qri = IGMP_QUERY_RESPONSE_INTERVAL;
+-	in_dev->mr_qrv = net->ipv4.sysctl_igmp_qrv;
++	in_dev->mr_qrv = READ_ONCE(net->ipv4.sysctl_igmp_qrv);
+ }
+ #else
+ static void ip_mc_reset(struct in_device *in_dev)
+@@ -1888,7 +1890,7 @@ static int ip_mc_del1_src(struct ip_mc_list *pmc, int sfmode,
+ #ifdef CONFIG_IP_MULTICAST
+ 		if (psf->sf_oldin &&
+ 		    !IGMP_V1_SEEN(in_dev) && !IGMP_V2_SEEN(in_dev)) {
+-			psf->sf_crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
++			psf->sf_crcount = in_dev->mr_qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv);
+ 			psf->sf_next = pmc->tomb;
+ 			pmc->tomb = psf;
+ 			rv = 1;
+@@ -1952,7 +1954,7 @@ static int ip_mc_del_src(struct in_device *in_dev, __be32 *pmca, int sfmode,
+ 		/* filter mode change */
+ 		pmc->sfmode = MCAST_INCLUDE;
+ #ifdef CONFIG_IP_MULTICAST
+-		pmc->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
++		pmc->crcount = in_dev->mr_qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv);
+ 		WRITE_ONCE(in_dev->mr_ifc_count, pmc->crcount);
+ 		for (psf = pmc->sources; psf; psf = psf->sf_next)
+ 			psf->sf_crcount = 0;
+@@ -2131,7 +2133,7 @@ static int ip_mc_add_src(struct in_device *in_dev, __be32 *pmca, int sfmode,
+ #ifdef CONFIG_IP_MULTICAST
+ 		/* else no filters; keep old mode for reports */
+ 
+-		pmc->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
++		pmc->crcount = in_dev->mr_qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv);
+ 		WRITE_ONCE(in_dev->mr_ifc_count, pmc->crcount);
+ 		for (psf = pmc->sources; psf; psf = psf->sf_next)
+ 			psf->sf_crcount = 0;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index f1fd26bb199ce..78460eb39b3af 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -698,7 +698,7 @@ static bool tcp_should_autocork(struct sock *sk, struct sk_buff *skb,
+ 				int size_goal)
+ {
+ 	return skb->len < size_goal &&
+-	       sock_net(sk)->ipv4.sysctl_tcp_autocorking &&
++	       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_autocorking) &&
+ 	       !tcp_rtx_queue_empty(sk) &&
+ 	       refcount_read(&sk->sk_wmem_alloc) > skb->truesize;
+ }
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index d817f8c31c9ce..d35e88b5ffcbe 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -503,7 +503,7 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
+  */
+ static void tcp_init_buffer_space(struct sock *sk)
+ {
+-	int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
++	int tcp_app_win = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_app_win);
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	int maxwin;
+ 
+@@ -693,7 +693,7 @@ void tcp_rcv_space_adjust(struct sock *sk)
+ 	 * <prev RTT . ><current RTT .. ><next RTT .... >
+ 	 */
+ 
+-	if (sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf &&
++	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf) &&
+ 	    !(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
+ 		int rcvmem, rcvbuf;
+ 		u64 rcvwin, grow;
+@@ -2135,7 +2135,7 @@ void tcp_enter_loss(struct sock *sk)
+ 	 * loss recovery is underway except recurring timeout(s) on
+ 	 * the same SND.UNA (sec 3.2). Disable F-RTO on path MTU probing
+ 	 */
+-	tp->frto = net->ipv4.sysctl_tcp_frto &&
++	tp->frto = READ_ONCE(net->ipv4.sysctl_tcp_frto) &&
+ 		   (new_recovery || icsk->icsk_retransmits) &&
+ 		   !inet_csk(sk)->icsk_mtup.probe_size;
+ }
+@@ -3004,7 +3004,7 @@ static void tcp_fastretrans_alert(struct sock *sk, const u32 prior_snd_una,
+ 
+ static void tcp_update_rtt_min(struct sock *sk, u32 rtt_us, const int flag)
+ {
+-	u32 wlen = sock_net(sk)->ipv4.sysctl_tcp_min_rtt_wlen * HZ;
++	u32 wlen = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_min_rtt_wlen) * HZ;
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 
+ 	if ((flag & FLAG_ACK_MAYBE_DELAYED) && rtt_us > tcp_min_rtt(tp)) {
+@@ -3528,7 +3528,8 @@ static bool __tcp_oow_rate_limited(struct net *net, int mib_idx,
+ 	if (*last_oow_ack_time) {
+ 		s32 elapsed = (s32)(tcp_jiffies32 - *last_oow_ack_time);
+ 
+-		if (0 <= elapsed && elapsed < net->ipv4.sysctl_tcp_invalid_ratelimit) {
++		if (0 <= elapsed &&
++		    elapsed < READ_ONCE(net->ipv4.sysctl_tcp_invalid_ratelimit)) {
+ 			NET_INC_STATS(net, mib_idx);
+ 			return true;	/* rate-limited: don't send yet! */
+ 		}
+@@ -3576,7 +3577,7 @@ static void tcp_send_challenge_ack(struct sock *sk, const struct sk_buff *skb)
+ 	/* Then check host-wide RFC 5961 rate limit. */
+ 	now = jiffies / HZ;
+ 	if (now != challenge_timestamp) {
+-		u32 ack_limit = net->ipv4.sysctl_tcp_challenge_ack_limit;
++		u32 ack_limit = READ_ONCE(net->ipv4.sysctl_tcp_challenge_ack_limit);
+ 		u32 half = (ack_limit + 1) >> 1;
+ 
+ 		challenge_timestamp = now;
+@@ -4367,7 +4368,7 @@ static void tcp_dsack_set(struct sock *sk, u32 seq, u32 end_seq)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 
+-	if (tcp_is_sack(tp) && sock_net(sk)->ipv4.sysctl_tcp_dsack) {
++	if (tcp_is_sack(tp) && READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_dsack)) {
+ 		int mib_idx;
+ 
+ 		if (before(seq, tp->rcv_nxt))
+@@ -4414,7 +4415,7 @@ static void tcp_send_dupack(struct sock *sk, const struct sk_buff *skb)
+ 		NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKLOST);
+ 		tcp_enter_quickack_mode(sk, TCP_MAX_QUICKACKS);
+ 
+-		if (tcp_is_sack(tp) && sock_net(sk)->ipv4.sysctl_tcp_dsack) {
++		if (tcp_is_sack(tp) && READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_dsack)) {
+ 			u32 end_seq = TCP_SKB_CB(skb)->end_seq;
+ 
+ 			tcp_rcv_spurious_retrans(sk, skb);
+@@ -5439,7 +5440,7 @@ send_now:
+ 	}
+ 
+ 	if (!tcp_is_sack(tp) ||
+-	    tp->compressed_ack >= sock_net(sk)->ipv4.sysctl_tcp_comp_sack_nr)
++	    tp->compressed_ack >= READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_comp_sack_nr))
+ 		goto send_now;
+ 
+ 	if (tp->compressed_ack_rcv_nxt != tp->rcv_nxt) {
+@@ -5460,11 +5461,12 @@ send_now:
+ 	if (tp->srtt_us && tp->srtt_us < rtt)
+ 		rtt = tp->srtt_us;
+ 
+-	delay = min_t(unsigned long, sock_net(sk)->ipv4.sysctl_tcp_comp_sack_delay_ns,
++	delay = min_t(unsigned long,
++		      READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_comp_sack_delay_ns),
+ 		      rtt * (NSEC_PER_USEC >> 3)/20);
+ 	sock_hold(sk);
+ 	hrtimer_start_range_ns(&tp->compressed_ack_timer, ns_to_ktime(delay),
+-			       sock_net(sk)->ipv4.sysctl_tcp_comp_sack_slack_ns,
++			       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_comp_sack_slack_ns),
+ 			       HRTIMER_MODE_REL_PINNED_SOFT);
+ }
+ 
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index d5f13ff7d9004..0d165ce2d80a7 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -983,7 +983,7 @@ static int tcp_v4_send_synack(const struct sock *sk, struct dst_entry *dst,
+ 	if (skb) {
+ 		__tcp_v4_send_check(skb, ireq->ir_loc_addr, ireq->ir_rmt_addr);
+ 
+-		tos = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ?
++		tos = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos) ?
+ 				(tcp_rsk(req)->syn_tos & ~INET_ECN_MASK) |
+ 				(inet_sk(sk)->tos & INET_ECN_MASK) :
+ 				inet_sk(sk)->tos;
+@@ -1558,7 +1558,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
+ 	/* Set ToS of the new socket based upon the value of incoming SYN.
+ 	 * ECT bits are set later in tcp_init_transfer().
+ 	 */
+-	if (sock_net(sk)->ipv4.sysctl_tcp_reflect_tos)
++	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos))
+ 		newinet->tos = tcp_rsk(req)->syn_tos & ~INET_ECN_MASK;
+ 
+ 	if (!dst) {
+diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
+index 8d7e32f4abf67..f3ca6eea2ca39 100644
+--- a/net/ipv4/tcp_metrics.c
++++ b/net/ipv4/tcp_metrics.c
+@@ -329,7 +329,7 @@ void tcp_update_metrics(struct sock *sk)
+ 	int m;
+ 
+ 	sk_dst_confirm(sk);
+-	if (net->ipv4.sysctl_tcp_nometrics_save || !dst)
++	if (READ_ONCE(net->ipv4.sysctl_tcp_nometrics_save) || !dst)
+ 		return;
+ 
+ 	rcu_read_lock();
+@@ -385,7 +385,7 @@ void tcp_update_metrics(struct sock *sk)
+ 
+ 	if (tcp_in_initial_slowstart(tp)) {
+ 		/* Slow start still did not finish. */
+-		if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save &&
++		if (!READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) &&
+ 		    !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) {
+ 			val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH);
+ 			if (val && (tp->snd_cwnd >> 1) > val)
+@@ -401,7 +401,7 @@ void tcp_update_metrics(struct sock *sk)
+ 	} else if (!tcp_in_slow_start(tp) &&
+ 		   icsk->icsk_ca_state == TCP_CA_Open) {
+ 		/* Cong. avoidance phase, cwnd is reliable. */
+-		if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save &&
++		if (!READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) &&
+ 		    !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH))
+ 			tcp_metric_set(tm, TCP_METRIC_SSTHRESH,
+ 				       max(tp->snd_cwnd >> 1, tp->snd_ssthresh));
+@@ -418,7 +418,7 @@ void tcp_update_metrics(struct sock *sk)
+ 			tcp_metric_set(tm, TCP_METRIC_CWND,
+ 				       (val + tp->snd_ssthresh) >> 1);
+ 		}
+-		if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save &&
++		if (!READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) &&
+ 		    !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) {
+ 			val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH);
+ 			if (val && tp->snd_ssthresh > val)
+@@ -463,7 +463,7 @@ void tcp_init_metrics(struct sock *sk)
+ 	if (tcp_metric_locked(tm, TCP_METRIC_CWND))
+ 		tp->snd_cwnd_clamp = tcp_metric_get(tm, TCP_METRIC_CWND);
+ 
+-	val = net->ipv4.sysctl_tcp_no_ssthresh_metrics_save ?
++	val = READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) ?
+ 	      0 : tcp_metric_get(tm, TCP_METRIC_SSTHRESH);
+ 	if (val) {
+ 		tp->snd_ssthresh = val;
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 9b67c61576e4c..657b0a4d93599 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -167,16 +167,13 @@ static void tcp_event_data_sent(struct tcp_sock *tp,
+ 	if (tcp_packets_in_flight(tp) == 0)
+ 		tcp_ca_event(sk, CA_EVENT_TX_START);
+ 
+-	/* If this is the first data packet sent in response to the
+-	 * previous received data,
+-	 * and it is a reply for ato after last received packet,
+-	 * increase pingpong count.
+-	 */
+-	if (before(tp->lsndtime, icsk->icsk_ack.lrcvtime) &&
+-	    (u32)(now - icsk->icsk_ack.lrcvtime) < icsk->icsk_ack.ato)
+-		inet_csk_inc_pingpong_cnt(sk);
+-
+ 	tp->lsndtime = now;
++
++	/* If it is a reply for ato after last received
++	 * packet, enter pingpong mode.
++	 */
++	if ((u32)(now - icsk->icsk_ack.lrcvtime) < icsk->icsk_ack.ato)
++		inet_csk_enter_pingpong_mode(sk);
+ }
+ 
+ /* Account for an ACK we sent. */
+@@ -1987,7 +1984,7 @@ static u32 tcp_tso_segs(struct sock *sk, unsigned int mss_now)
+ 
+ 	min_tso = ca_ops->min_tso_segs ?
+ 			ca_ops->min_tso_segs(sk) :
+-			sock_net(sk)->ipv4.sysctl_tcp_min_tso_segs;
++			READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_min_tso_segs);
+ 
+ 	tso_segs = tcp_tso_autosize(sk, mss_now, min_tso);
+ 	return min_t(u32, tso_segs, sk->sk_gso_max_segs);
+@@ -2502,7 +2499,7 @@ static bool tcp_small_queue_check(struct sock *sk, const struct sk_buff *skb,
+ 		      sk->sk_pacing_rate >> READ_ONCE(sk->sk_pacing_shift));
+ 	if (sk->sk_pacing_status == SK_PACING_NONE)
+ 		limit = min_t(unsigned long, limit,
+-			      sock_net(sk)->ipv4.sysctl_tcp_limit_output_bytes);
++			      READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_limit_output_bytes));
+ 	limit <<= factor;
+ 
+ 	if (static_branch_unlikely(&tcp_tx_delay_enabled) &&
+diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c
+index 6ac88fe24a8e0..135e3a060caa8 100644
+--- a/net/ipv6/ping.c
++++ b/net/ipv6/ping.c
+@@ -22,6 +22,11 @@
+ #include <linux/proc_fs.h>
+ #include <net/ping.h>
+ 
++static void ping_v6_destroy(struct sock *sk)
++{
++	inet6_destroy_sock(sk);
++}
++
+ /* Compatibility glue so we can support IPv6 when it's compiled as a module */
+ static int dummy_ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len,
+ 				 int *addr_len)
+@@ -166,6 +171,7 @@ struct proto pingv6_prot = {
+ 	.owner =	THIS_MODULE,
+ 	.init =		ping_init_sock,
+ 	.close =	ping_close,
++	.destroy =	ping_v6_destroy,
+ 	.connect =	ip6_datagram_connect_v6_only,
+ 	.disconnect =	__udp_disconnect,
+ 	.setsockopt =	ipv6_setsockopt,
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 303b54414a6cc..8d91f36cb11bc 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -542,7 +542,7 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst,
+ 		if (np->repflow && ireq->pktopts)
+ 			fl6->flowlabel = ip6_flowlabel(ipv6_hdr(ireq->pktopts));
+ 
+-		tclass = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ?
++		tclass = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos) ?
+ 				(tcp_rsk(req)->syn_tos & ~INET_ECN_MASK) |
+ 				(np->tclass & INET_ECN_MASK) :
+ 				np->tclass;
+@@ -1344,7 +1344,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
+ 	/* Set ToS of the new socket based upon the value of incoming SYN.
+ 	 * ECT bits are set later in tcp_init_transfer().
+ 	 */
+-	if (sock_net(sk)->ipv4.sysctl_tcp_reflect_tos)
++	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos))
+ 		newnp->tclass = tcp_rsk(req)->syn_tos & ~INET_ECN_MASK;
+ 
+ 	/* Clone native IPv6 options from listening socket (if any)
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 8123c79e27913..d0e91aa7b30e5 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1421,7 +1421,7 @@ static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied)
+ 	if (msk->rcvq_space.copied <= msk->rcvq_space.space)
+ 		goto new_measure;
+ 
+-	if (sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf &&
++	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf) &&
+ 	    !(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
+ 		int rcvmem, rcvbuf;
+ 		u64 rcvwin, grow;
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index 1640da5c50776..72d30922ed290 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -838,11 +838,16 @@ nfqnl_enqueue_packet(struct nf_queue_entry *entry, unsigned int queuenum)
+ }
+ 
+ static int
+-nfqnl_mangle(void *data, int data_len, struct nf_queue_entry *e, int diff)
++nfqnl_mangle(void *data, unsigned int data_len, struct nf_queue_entry *e, int diff)
+ {
+ 	struct sk_buff *nskb;
+ 
+ 	if (diff < 0) {
++		unsigned int min_len = skb_transport_offset(e->skb);
++
++		if (data_len < min_len)
++			return -EINVAL;
++
+ 		if (pskb_trim(e->skb, data_len))
+ 			return -ENOMEM;
+ 	} else if (diff > 0) {
+diff --git a/net/sctp/associola.c b/net/sctp/associola.c
+index fdb69d46276d6..2d4ec61877553 100644
+--- a/net/sctp/associola.c
++++ b/net/sctp/associola.c
+@@ -226,9 +226,8 @@ static struct sctp_association *sctp_association_init(
+ 	if (!sctp_ulpq_init(&asoc->ulpq, asoc))
+ 		goto fail_init;
+ 
+-	if (sctp_stream_init(&asoc->stream, asoc->c.sinit_num_ostreams,
+-			     0, gfp))
+-		goto fail_init;
++	if (sctp_stream_init(&asoc->stream, asoc->c.sinit_num_ostreams, 0, gfp))
++		goto stream_free;
+ 
+ 	/* Initialize default path MTU. */
+ 	asoc->pathmtu = sp->pathmtu;
+diff --git a/net/sctp/stream.c b/net/sctp/stream.c
+index 6dc95dcc0ff4f..ef9fceadef8d5 100644
+--- a/net/sctp/stream.c
++++ b/net/sctp/stream.c
+@@ -137,7 +137,7 @@ int sctp_stream_init(struct sctp_stream *stream, __u16 outcnt, __u16 incnt,
+ 
+ 	ret = sctp_stream_alloc_out(stream, outcnt, gfp);
+ 	if (ret)
+-		goto out_err;
++		return ret;
+ 
+ 	for (i = 0; i < stream->outcnt; i++)
+ 		SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
+@@ -145,22 +145,9 @@ int sctp_stream_init(struct sctp_stream *stream, __u16 outcnt, __u16 incnt,
+ handle_in:
+ 	sctp_stream_interleave_init(stream);
+ 	if (!incnt)
+-		goto out;
+-
+-	ret = sctp_stream_alloc_in(stream, incnt, gfp);
+-	if (ret)
+-		goto in_err;
+-
+-	goto out;
++		return 0;
+ 
+-in_err:
+-	sched->free(stream);
+-	genradix_free(&stream->in);
+-out_err:
+-	genradix_free(&stream->out);
+-	stream->outcnt = 0;
+-out:
+-	return ret;
++	return sctp_stream_alloc_in(stream, incnt, gfp);
+ }
+ 
+ int sctp_stream_init_ext(struct sctp_stream *stream, __u16 sid)
+diff --git a/net/sctp/stream_sched.c b/net/sctp/stream_sched.c
+index 99e5f69fbb742..a2e1d34f52c5b 100644
+--- a/net/sctp/stream_sched.c
++++ b/net/sctp/stream_sched.c
+@@ -163,7 +163,7 @@ int sctp_sched_set_sched(struct sctp_association *asoc,
+ 		if (!SCTP_SO(&asoc->stream, i)->ext)
+ 			continue;
+ 
+-		ret = n->init_sid(&asoc->stream, i, GFP_KERNEL);
++		ret = n->init_sid(&asoc->stream, i, GFP_ATOMIC);
+ 		if (ret)
+ 			goto err;
+ 	}
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 23eab7ac43ee5..5cb6846544cc7 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -1349,8 +1349,13 @@ static int tls_device_down(struct net_device *netdev)
+ 		 * by tls_device_free_ctx. rx_conf and tx_conf stay in TLS_HW.
+ 		 * Now release the ref taken above.
+ 		 */
+-		if (refcount_dec_and_test(&ctx->refcount))
++		if (refcount_dec_and_test(&ctx->refcount)) {
++			/* sk_destruct ran after tls_device_down took a ref, and
++			 * it returned early. Complete the destruction here.
++			 */
++			list_del(&ctx->list);
+ 			tls_device_free_ctx(ctx);
++		}
+ 	}
+ 
+ 	up_write(&device_offload_lock);
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index e440cd7f32a6f..b9ee2ded381ab 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -5006,7 +5006,10 @@ struct bpf_pidns_info {
+ 
+ /* User accessible data for SK_LOOKUP programs. Add new fields at the end. */
+ struct bpf_sk_lookup {
+-	__bpf_md_ptr(struct bpf_sock *, sk); /* Selected socket */
++	union {
++		__bpf_md_ptr(struct bpf_sock *, sk); /* Selected socket */
++		__u64 cookie; /* Non-zero if socket was selected in PROG_TEST_RUN */
++	};
+ 
+ 	__u32 family;		/* Protocol family (AF_INET, AF_INET6) */
+ 	__u32 protocol;		/* IP protocol (IPPROTO_TCP, IPPROTO_UDP) */
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 94809aed8b447..1cab29d45bfb3 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -232,6 +232,33 @@ Elf_Scn *elf_section_by_name(Elf *elf, GElf_Ehdr *ep,
+ 	return NULL;
+ }
+ 
++static int elf_read_program_header(Elf *elf, u64 vaddr, GElf_Phdr *phdr)
++{
++	size_t i, phdrnum;
++	u64 sz;
++
++	if (elf_getphdrnum(elf, &phdrnum))
++		return -1;
++
++	for (i = 0; i < phdrnum; i++) {
++		if (gelf_getphdr(elf, i, phdr) == NULL)
++			return -1;
++
++		if (phdr->p_type != PT_LOAD)
++			continue;
++
++		sz = max(phdr->p_memsz, phdr->p_filesz);
++		if (!sz)
++			continue;
++
++		if (vaddr >= phdr->p_vaddr && (vaddr < phdr->p_vaddr + sz))
++			return 0;
++	}
++
++	/* Not found any valid program header */
++	return -1;
++}
++
+ static bool want_demangle(bool is_kernel_sym)
+ {
+ 	return is_kernel_sym ? symbol_conf.demangle_kernel : symbol_conf.demangle;
+@@ -1181,6 +1208,7 @@ int dso__load_sym(struct dso *dso, struct map *map, struct symsrc *syms_ss,
+ 					sym.st_value);
+ 			used_opd = true;
+ 		}
++
+ 		/*
+ 		 * When loading symbols in a data mapping, ABS symbols (which
+ 		 * has a value of SHN_ABS in its st_shndx) failed at
+@@ -1217,11 +1245,20 @@ int dso__load_sym(struct dso *dso, struct map *map, struct symsrc *syms_ss,
+ 				goto out_elf_end;
+ 		} else if ((used_opd && runtime_ss->adjust_symbols) ||
+ 			   (!used_opd && syms_ss->adjust_symbols)) {
++			GElf_Phdr phdr;
++
++			if (elf_read_program_header(syms_ss->elf,
++						    (u64)sym.st_value, &phdr)) {
++				pr_warning("%s: failed to find program header for "
++					   "symbol: %s st_value: %#" PRIx64 "\n",
++					   __func__, elf_name, (u64)sym.st_value);
++				continue;
++			}
+ 			pr_debug4("%s: adjusting symbol: st_value: %#" PRIx64 " "
+-				  "sh_addr: %#" PRIx64 " sh_offset: %#" PRIx64 "\n", __func__,
+-				  (u64)sym.st_value, (u64)shdr.sh_addr,
+-				  (u64)shdr.sh_offset);
+-			sym.st_value -= shdr.sh_addr - shdr.sh_offset;
++				  "p_vaddr: %#" PRIx64 " p_offset: %#" PRIx64 "\n",
++				  __func__, (u64)sym.st_value, (u64)phdr.p_vaddr,
++				  (u64)phdr.p_offset);
++			sym.st_value -= phdr.p_vaddr - phdr.p_offset;
+ 		}
+ 
+ 		demangled = demangle_sym(dso, kmodule, elf_name);
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index a4c55fcb0e7b1..0fb92d9a319b7 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -100,7 +100,7 @@ struct bpf_test {
+ 	enum bpf_prog_type prog_type;
+ 	uint8_t flags;
+ 	void (*fill_helper)(struct bpf_test *self);
+-	uint8_t runs;
++	int runs;
+ #define bpf_testdata_struct_t					\
+ 	struct {						\
+ 		uint32_t retval, retval_unpriv;			\
+@@ -1054,7 +1054,7 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
+ 
+ 	run_errs = 0;
+ 	run_successes = 0;
+-	if (!alignment_prevented_execution && fd_prog >= 0) {
++	if (!alignment_prevented_execution && fd_prog >= 0 && test->runs >= 0) {
+ 		uint32_t expected_val;
+ 		int i;
+ 
+diff --git a/tools/testing/selftests/bpf/verifier/ctx_sk_lookup.c b/tools/testing/selftests/bpf/verifier/ctx_sk_lookup.c
+index 2ad5f974451c3..fd3b62a084b9f 100644
+--- a/tools/testing/selftests/bpf/verifier/ctx_sk_lookup.c
++++ b/tools/testing/selftests/bpf/verifier/ctx_sk_lookup.c
+@@ -239,6 +239,7 @@
+ 	.result = ACCEPT,
+ 	.prog_type = BPF_PROG_TYPE_SK_LOOKUP,
+ 	.expected_attach_type = BPF_SK_LOOKUP,
++	.runs = -1,
+ },
+ /* invalid 8-byte reads from a 4-byte fields in bpf_sk_lookup */
+ {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-08-11 12:34 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-08-11 12:34 UTC (permalink / raw
  To: gentoo-commits

commit:     68fe36a61d8beb3a44014a32a3ca8c8d98f23a6a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 11 12:34:28 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 11 12:34:28 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=68fe36a6

Linux patch 5.10.136

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1135_linux-5.10.136.patch | 1250 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1254 insertions(+)

diff --git a/0000_README b/0000_README
index 19bd6321..f9d23a2c 100644
--- a/0000_README
+++ b/0000_README
@@ -583,6 +583,10 @@ Patch:  1134_linux-5.10.135.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.135
 
+Patch:  1135_linux-5.10.136.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.136
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1135_linux-5.10.136.patch b/1135_linux-5.10.136.patch
new file mode 100644
index 00000000..f6266a42
--- /dev/null
+++ b/1135_linux-5.10.136.patch
@@ -0,0 +1,1250 @@
+diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
+index 6bd97cd50d625..7e061ed449aaa 100644
+--- a/Documentation/admin-guide/hw-vuln/spectre.rst
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -422,6 +422,14 @@ The possible values in this file are:
+   'RSB filling'   Protection of RSB on context switch enabled
+   =============   ===========================================
+ 
++  - EIBRS Post-barrier Return Stack Buffer (PBRSB) protection status:
++
++  ===========================  =======================================================
++  'PBRSB-eIBRS: SW sequence'   CPU is affected and protection of RSB on VMEXIT enabled
++  'PBRSB-eIBRS: Vulnerable'    CPU is vulnerable
++  'PBRSB-eIBRS: Not affected'  CPU is not affected by PBRSB
++  ===========================  =======================================================
++
+ Full mitigation might require a microcode update from the CPU
+ vendor. When the necessary microcode is not available, the kernel will
+ report vulnerability.
+diff --git a/Makefile b/Makefile
+index 5f4dbcb433075..1730698124c7b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 135
++SUBLEVEL = 136
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/crypto/poly1305-glue.c b/arch/arm64/crypto/poly1305-glue.c
+index 01e22fe408235..9f4599014854d 100644
+--- a/arch/arm64/crypto/poly1305-glue.c
++++ b/arch/arm64/crypto/poly1305-glue.c
+@@ -52,7 +52,7 @@ static void neon_poly1305_blocks(struct poly1305_desc_ctx *dctx, const u8 *src,
+ {
+ 	if (unlikely(!dctx->sset)) {
+ 		if (!dctx->rset) {
+-			poly1305_init_arch(dctx, src);
++			poly1305_init_arm64(&dctx->h, src);
+ 			src += POLY1305_BLOCK_SIZE;
+ 			len -= POLY1305_BLOCK_SIZE;
+ 			dctx->rset = 1;
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 16c045906b2ac..159646da3c6bc 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2447,7 +2447,7 @@ config RETPOLINE
+ config RETHUNK
+ 	bool "Enable return-thunks"
+ 	depends on RETPOLINE && CC_HAS_RETURN_THUNK
+-	default y
++	default y if X86_64
+ 	help
+ 	  Compile the kernel with the return-thunks compiler option to guard
+ 	  against kernel-to-user data leaks by avoiding return speculation.
+@@ -2456,21 +2456,21 @@ config RETHUNK
+ 
+ config CPU_UNRET_ENTRY
+ 	bool "Enable UNRET on kernel entry"
+-	depends on CPU_SUP_AMD && RETHUNK
++	depends on CPU_SUP_AMD && RETHUNK && X86_64
+ 	default y
+ 	help
+ 	  Compile the kernel with support for the retbleed=unret mitigation.
+ 
+ config CPU_IBPB_ENTRY
+ 	bool "Enable IBPB on kernel entry"
+-	depends on CPU_SUP_AMD
++	depends on CPU_SUP_AMD && X86_64
+ 	default y
+ 	help
+ 	  Compile the kernel with support for the retbleed=ibpb mitigation.
+ 
+ config CPU_IBRS_ENTRY
+ 	bool "Enable IBRS on kernel entry"
+-	depends on CPU_SUP_INTEL
++	depends on CPU_SUP_INTEL && X86_64
+ 	default y
+ 	help
+ 	  Compile the kernel with support for the spectre_v2=ibrs mitigation.
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 2a51ee2f5a0f0..37ba0cdf99aa8 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -299,6 +299,7 @@
+ #define X86_FEATURE_RETHUNK		(11*32+14) /* "" Use REturn THUNK */
+ #define X86_FEATURE_UNRET		(11*32+15) /* "" AMD BTB untrain return */
+ #define X86_FEATURE_USE_IBPB_FW		(11*32+16) /* "" Use IBPB during runtime firmware calls */
++#define X86_FEATURE_RSB_VMEXIT_LITE	(11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */
+ 
+ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
+ #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
+@@ -429,5 +430,6 @@
+ #define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
+ #define X86_BUG_MMIO_STALE_DATA		X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
+ #define X86_BUG_RETBLEED		X86_BUG(26) /* CPU is affected by RETBleed */
++#define X86_BUG_EIBRS_PBRSB		X86_BUG(27) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 407de670cd609..144dc164b7596 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -148,6 +148,10 @@
+ 						 * are restricted to targets in
+ 						 * kernel.
+ 						 */
++#define ARCH_CAP_PBRSB_NO		BIT(24)	/*
++						 * Not susceptible to Post-Barrier
++						 * Return Stack Buffer Predictions.
++						 */
+ 
+ #define MSR_IA32_FLUSH_CMD		0x0000010b
+ #define L1D_FLUSH			BIT(0)	/*
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index c3e8e50633ea2..0acd99329923c 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -60,7 +60,9 @@
+ 774:						\
+ 	add	$(BITS_PER_LONG/8) * 2, sp;	\
+ 	dec	reg;				\
+-	jnz	771b;
++	jnz	771b;				\
++	/* barrier for jnz misprediction */	\
++	lfence;
+ 
+ #ifdef __ASSEMBLY__
+ 
+@@ -118,13 +120,28 @@
+ #endif
+ .endm
+ 
++.macro ISSUE_UNBALANCED_RET_GUARD
++	ANNOTATE_INTRA_FUNCTION_CALL
++	call .Lunbalanced_ret_guard_\@
++	int3
++.Lunbalanced_ret_guard_\@:
++	add $(BITS_PER_LONG/8), %_ASM_SP
++	lfence
++.endm
++
+  /*
+   * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
+   * monstrosity above, manually.
+   */
+-.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
++.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req ftr2
++.ifb \ftr2
+ 	ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr
++.else
++	ALTERNATIVE_2 "jmp .Lskip_rsb_\@", "", \ftr, "jmp .Lunbalanced_\@", \ftr2
++.endif
+ 	__FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP)
++.Lunbalanced_\@:
++	ISSUE_UNBALANCED_RET_GUARD
+ .Lskip_rsb_\@:
+ .endm
+ 
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 2e5762faf7740..859a3f59526c7 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1291,6 +1291,53 @@ static void __init spec_ctrl_disable_kernel_rrsba(void)
+ 	}
+ }
+ 
++static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode)
++{
++	/*
++	 * Similar to context switches, there are two types of RSB attacks
++	 * after VM exit:
++	 *
++	 * 1) RSB underflow
++	 *
++	 * 2) Poisoned RSB entry
++	 *
++	 * When retpoline is enabled, both are mitigated by filling/clearing
++	 * the RSB.
++	 *
++	 * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
++	 * prediction isolation protections, RSB still needs to be cleared
++	 * because of #2.  Note that SMEP provides no protection here, unlike
++	 * user-space-poisoned RSB entries.
++	 *
++	 * eIBRS should protect against RSB poisoning, but if the EIBRS_PBRSB
++	 * bug is present then a LITE version of RSB protection is required,
++	 * just a single call needs to retire before a RET is executed.
++	 */
++	switch (mode) {
++	case SPECTRE_V2_NONE:
++		return;
++
++	case SPECTRE_V2_EIBRS_LFENCE:
++	case SPECTRE_V2_EIBRS:
++		if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
++			setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
++			pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n");
++		}
++		return;
++
++	case SPECTRE_V2_EIBRS_RETPOLINE:
++	case SPECTRE_V2_RETPOLINE:
++	case SPECTRE_V2_LFENCE:
++	case SPECTRE_V2_IBRS:
++		setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
++		pr_info("Spectre v2 / SpectreRSB : Filling RSB on VMEXIT\n");
++		return;
++	}
++
++	pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation at VM exit");
++	dump_stack();
++}
++
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -1441,28 +1488,7 @@ static void __init spectre_v2_select_mitigation(void)
+ 	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
+ 	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
+ 
+-	/*
+-	 * Similar to context switches, there are two types of RSB attacks
+-	 * after vmexit:
+-	 *
+-	 * 1) RSB underflow
+-	 *
+-	 * 2) Poisoned RSB entry
+-	 *
+-	 * When retpoline is enabled, both are mitigated by filling/clearing
+-	 * the RSB.
+-	 *
+-	 * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
+-	 * prediction isolation protections, RSB still needs to be cleared
+-	 * because of #2.  Note that SMEP provides no protection here, unlike
+-	 * user-space-poisoned RSB entries.
+-	 *
+-	 * eIBRS, on the other hand, has RSB-poisoning protections, so it
+-	 * doesn't need RSB clearing after vmexit.
+-	 */
+-	if (boot_cpu_has(X86_FEATURE_RETPOLINE) ||
+-	    boot_cpu_has(X86_FEATURE_KERNEL_IBRS))
+-		setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
++	spectre_v2_determine_rsb_fill_type_at_vmexit(mode);
+ 
+ 	/*
+ 	 * Retpoline protects the kernel, but doesn't protect firmware.  IBRS
+@@ -2215,6 +2241,19 @@ static char *ibpb_state(void)
+ 	return "";
+ }
+ 
++static char *pbrsb_eibrs_state(void)
++{
++	if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
++		if (boot_cpu_has(X86_FEATURE_RSB_VMEXIT_LITE) ||
++		    boot_cpu_has(X86_FEATURE_RSB_VMEXIT))
++			return ", PBRSB-eIBRS: SW sequence";
++		else
++			return ", PBRSB-eIBRS: Vulnerable";
++	} else {
++		return ", PBRSB-eIBRS: Not affected";
++	}
++}
++
+ static ssize_t spectre_v2_show_state(char *buf)
+ {
+ 	if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
+@@ -2227,12 +2266,13 @@ static ssize_t spectre_v2_show_state(char *buf)
+ 	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
+ 		return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
+ 
+-	return sprintf(buf, "%s%s%s%s%s%s\n",
++	return sprintf(buf, "%s%s%s%s%s%s%s\n",
+ 		       spectre_v2_strings[spectre_v2_enabled],
+ 		       ibpb_state(),
+ 		       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
+ 		       stibp_state(),
+ 		       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
++		       pbrsb_eibrs_state(),
+ 		       spectre_v2_module_string());
+ }
+ 
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 901352bd3b426..9fc91482e85e3 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1024,6 +1024,7 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
+ #define NO_SWAPGS		BIT(6)
+ #define NO_ITLB_MULTIHIT	BIT(7)
+ #define NO_SPECTRE_V2		BIT(8)
++#define NO_EIBRS_PBRSB		BIT(9)
+ 
+ #define VULNWL(vendor, family, model, whitelist)	\
+ 	X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, whitelist)
+@@ -1064,7 +1065,7 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ 
+ 	VULNWL_INTEL(ATOM_GOLDMONT,		NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+ 	VULNWL_INTEL(ATOM_GOLDMONT_D,		NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+-	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
++	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB),
+ 
+ 	/*
+ 	 * Technically, swapgs isn't serializing on AMD (despite it previously
+@@ -1074,7 +1075,9 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ 	 * good enough for our purposes.
+ 	 */
+ 
+-	VULNWL_INTEL(ATOM_TREMONT_D,		NO_ITLB_MULTIHIT),
++	VULNWL_INTEL(ATOM_TREMONT,		NO_EIBRS_PBRSB),
++	VULNWL_INTEL(ATOM_TREMONT_L,		NO_EIBRS_PBRSB),
++	VULNWL_INTEL(ATOM_TREMONT_D,		NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB),
+ 
+ 	/* AMD Family 0xf - 0x12 */
+ 	VULNWL_AMD(0x0f,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+@@ -1252,6 +1255,11 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 			setup_force_cpu_bug(X86_BUG_RETBLEED);
+ 	}
+ 
++	if (cpu_has(c, X86_FEATURE_IBRS_ENHANCED) &&
++	    !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
++	    !(ia32_cap & ARCH_CAP_PBRSB_NO))
++		setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
++
+ 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ 		return;
+ 
+diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
+index 857fa0fc49faf..982138bebb70f 100644
+--- a/arch/x86/kvm/vmx/vmenter.S
++++ b/arch/x86/kvm/vmx/vmenter.S
+@@ -197,11 +197,13 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL)
+ 	 * entries and (in some cases) RSB underflow.
+ 	 *
+ 	 * eIBRS has its own protection against poisoned RSB, so it doesn't
+-	 * need the RSB filling sequence.  But it does need to be enabled
+-	 * before the first unbalanced RET.
++	 * need the RSB filling sequence.  But it does need to be enabled, and a
++	 * single call to retire, before the first unbalanced RET.
+          */
+ 
+-	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
++	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT,\
++			   X86_FEATURE_RSB_VMEXIT_LITE
++
+ 
+ 	pop %_ASM_ARG2	/* @flags */
+ 	pop %_ASM_ARG1	/* @vmx */
+diff --git a/drivers/acpi/apei/bert.c b/drivers/acpi/apei/bert.c
+index 598fd19b65fa4..45973aa6e06d4 100644
+--- a/drivers/acpi/apei/bert.c
++++ b/drivers/acpi/apei/bert.c
+@@ -29,16 +29,26 @@
+ 
+ #undef pr_fmt
+ #define pr_fmt(fmt) "BERT: " fmt
++
++#define ACPI_BERT_PRINT_MAX_RECORDS 5
+ #define ACPI_BERT_PRINT_MAX_LEN 1024
+ 
+ static int bert_disable;
+ 
++/*
++ * Print "all" the error records in the BERT table, but avoid huge spam to
++ * the console if the BIOS included oversize records, or too many records.
++ * Skipping some records here does not lose anything because the full
++ * data is available to user tools in:
++ *	/sys/firmware/acpi/tables/data/BERT
++ */
+ static void __init bert_print_all(struct acpi_bert_region *region,
+ 				  unsigned int region_len)
+ {
+ 	struct acpi_hest_generic_status *estatus =
+ 		(struct acpi_hest_generic_status *)region;
+ 	int remain = region_len;
++	int printed = 0, skipped = 0;
+ 	u32 estatus_len;
+ 
+ 	while (remain >= sizeof(struct acpi_bert_region)) {
+@@ -46,24 +56,26 @@ static void __init bert_print_all(struct acpi_bert_region *region,
+ 		if (remain < estatus_len) {
+ 			pr_err(FW_BUG "Truncated status block (length: %u).\n",
+ 			       estatus_len);
+-			return;
++			break;
+ 		}
+ 
+ 		/* No more error records. */
+ 		if (!estatus->block_status)
+-			return;
++			break;
+ 
+ 		if (cper_estatus_check(estatus)) {
+ 			pr_err(FW_BUG "Invalid error record.\n");
+-			return;
++			break;
+ 		}
+ 
+-		pr_info_once("Error records from previous boot:\n");
+-		if (region_len < ACPI_BERT_PRINT_MAX_LEN)
++		if (estatus_len < ACPI_BERT_PRINT_MAX_LEN &&
++		    printed < ACPI_BERT_PRINT_MAX_RECORDS) {
++			pr_info_once("Error records from previous boot:\n");
+ 			cper_estatus_print(KERN_INFO HW_ERR, estatus);
+-		else
+-			pr_info_once("Max print length exceeded, table data is available at:\n"
+-				     "/sys/firmware/acpi/tables/data/BERT");
++			printed++;
++		} else {
++			skipped++;
++		}
+ 
+ 		/*
+ 		 * Because the boot error source is "one-time polled" type,
+@@ -75,6 +87,9 @@ static void __init bert_print_all(struct acpi_bert_region *region,
+ 		estatus = (void *)estatus + estatus_len;
+ 		remain -= estatus_len;
+ 	}
++
++	if (skipped)
++		pr_info(HW_ERR "Skipped %d error records\n", skipped);
+ }
+ 
+ static int __init setup_bert_disable(char *str)
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 7b9793cb55c50..e39d59ad64964 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -424,7 +424,6 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 	.callback = video_detect_force_native,
+ 	.ident = "Clevo NL5xRU",
+ 	.matches = {
+-		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+ 		DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
+ 		},
+ 	},
+@@ -432,59 +431,75 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 	.callback = video_detect_force_native,
+ 	.ident = "Clevo NL5xRU",
+ 	.matches = {
+-		DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
+-		DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "AURA1501"),
+ 		},
+ 	},
+ 	{
+ 	.callback = video_detect_force_native,
+ 	.ident = "Clevo NL5xRU",
+ 	.matches = {
+-		DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
+-		DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"),
+ 		},
+ 	},
+ 	{
+ 	.callback = video_detect_force_native,
+-	.ident = "Clevo NL5xRU",
++	.ident = "Clevo NL5xNU",
+ 	.matches = {
+-		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+-		DMI_MATCH(DMI_BOARD_NAME, "AURA1501"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
+ 		},
+ 	},
++	/*
++	 * The TongFang PF5PU1G, PF4NU1F, PF5NU1G, and PF5LUXG/TUXEDO BA15 Gen10,
++	 * Pulse 14/15 Gen1, and Pulse 15 Gen2 have the same problem as the Clevo
++	 * NL5xRU and NL5xNU/TUXEDO Aura 15 Gen1 and Gen2. See the description
++	 * above.
++	 */
+ 	{
+ 	.callback = video_detect_force_native,
+-	.ident = "Clevo NL5xRU",
++	.ident = "TongFang PF5PU1G",
+ 	.matches = {
+-		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+-		DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"),
++		DMI_MATCH(DMI_BOARD_NAME, "PF5PU1G"),
+ 		},
+ 	},
+ 	{
+ 	.callback = video_detect_force_native,
+-	.ident = "Clevo NL5xNU",
++	.ident = "TongFang PF4NU1F",
++	.matches = {
++		DMI_MATCH(DMI_BOARD_NAME, "PF4NU1F"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "TongFang PF4NU1F",
+ 	.matches = {
+ 		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+-		DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++		DMI_MATCH(DMI_BOARD_NAME, "PULSE1401"),
+ 		},
+ 	},
+ 	{
+ 	.callback = video_detect_force_native,
+-	.ident = "Clevo NL5xNU",
++	.ident = "TongFang PF5NU1G",
+ 	.matches = {
+-		DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
+-		DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++		DMI_MATCH(DMI_BOARD_NAME, "PF5NU1G"),
+ 		},
+ 	},
+ 	{
+ 	.callback = video_detect_force_native,
+-	.ident = "Clevo NL5xNU",
++	.ident = "TongFang PF5NU1G",
+ 	.matches = {
+-		DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
+-		DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "PULSE1501"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "TongFang PF5LUXG",
++	.matches = {
++		DMI_MATCH(DMI_BOARD_NAME, "PF5LUXG"),
+ 		},
+ 	},
+-
+ 	/*
+ 	 * Desktops which falsely report a backlight and which our heuristics
+ 	 * for this do not catch.
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index 1b9743b7f2ef9..d263eac784daa 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -401,6 +401,8 @@ static const struct bcm_subver_table bcm_uart_subver_table[] = {
+ 	{ 0x6606, "BCM4345C5"	},	/* 003.006.006 */
+ 	{ 0x230f, "BCM4356A2"	},	/* 001.003.015 */
+ 	{ 0x220e, "BCM20702A1"  },	/* 001.002.014 */
++	{ 0x420d, "BCM4349B1"	},	/* 002.002.013 */
++	{ 0x420e, "BCM4349B1"	},	/* 002.002.014 */
+ 	{ 0x4217, "BCM4329B1"   },	/* 002.002.023 */
+ 	{ 0x6106, "BCM4359C0"	},	/* 003.001.006 */
+ 	{ 0x4106, "BCM4335A0"	},	/* 002.001.006 */
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 538232b4c42ac..a699e6166aefe 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -399,6 +399,18 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x0bda, 0xc822), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 
++	/* Realtek 8852CE Bluetooth devices */
++	{ USB_DEVICE(0x04ca, 0x4007), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x04c5, 0x1675), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0cb8, 0xc558), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x13d3, 0x3587), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x13d3, 0x3586), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++
+ 	/* Realtek Bluetooth devices */
+ 	{ USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01),
+ 	  .driver_info = BTUSB_REALTEK },
+@@ -416,6 +428,9 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x0489, 0xe0d9), .driver_info = BTUSB_MEDIATEK |
+ 						     BTUSB_WIDEBAND_SPEECH |
+ 						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x13d3, 0x3568), .driver_info = BTUSB_MEDIATEK |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
+ 
+ 	/* Additional Realtek 8723AE Bluetooth devices */
+ 	{ USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
+diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
+index 259a643377c24..3f6e96a4e1147 100644
+--- a/drivers/bluetooth/hci_bcm.c
++++ b/drivers/bluetooth/hci_bcm.c
+@@ -1489,8 +1489,10 @@ static const struct of_device_id bcm_bluetooth_of_match[] = {
+ 	{ .compatible = "brcm,bcm4345c5" },
+ 	{ .compatible = "brcm,bcm4330-bt" },
+ 	{ .compatible = "brcm,bcm43438-bt", .data = &bcm43438_device_data },
++	{ .compatible = "brcm,bcm4349-bt", .data = &bcm43438_device_data },
+ 	{ .compatible = "brcm,bcm43540-bt", .data = &bcm4354_device_data },
+ 	{ .compatible = "brcm,bcm4335a0" },
++	{ .compatible = "infineon,cyw55572-bt" },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(of, bcm_bluetooth_of_match);
+diff --git a/drivers/macintosh/adb.c b/drivers/macintosh/adb.c
+index 73b3961890397..afb0942ccc293 100644
+--- a/drivers/macintosh/adb.c
++++ b/drivers/macintosh/adb.c
+@@ -647,7 +647,7 @@ do_adb_query(struct adb_request *req)
+ 
+ 	switch(req->data[1]) {
+ 	case ADB_QUERY_GETDEVINFO:
+-		if (req->nbytes < 3)
++		if (req->nbytes < 3 || req->data[2] >= 16)
+ 			break;
+ 		mutex_lock(&adb_handler_mutex);
+ 		req->reply[0] = adb_handler[req->data[2]].original_address;
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index be9ff6a74ecce..a643b2f2f4de9 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -220,6 +220,9 @@ struct tun_struct {
+ 	struct tun_prog __rcu *steering_prog;
+ 	struct tun_prog __rcu *filter_prog;
+ 	struct ethtool_link_ksettings link_ksettings;
++	/* init args */
++	struct file *file;
++	struct ifreq *ifr;
+ };
+ 
+ struct veth {
+@@ -227,6 +230,9 @@ struct veth {
+ 	__be16 h_vlan_TCI;
+ };
+ 
++static void tun_flow_init(struct tun_struct *tun);
++static void tun_flow_uninit(struct tun_struct *tun);
++
+ static int tun_napi_receive(struct napi_struct *napi, int budget)
+ {
+ 	struct tun_file *tfile = container_of(napi, struct tun_file, napi);
+@@ -975,6 +981,49 @@ static int check_filter(struct tap_filter *filter, const struct sk_buff *skb)
+ 
+ static const struct ethtool_ops tun_ethtool_ops;
+ 
++static int tun_net_init(struct net_device *dev)
++{
++	struct tun_struct *tun = netdev_priv(dev);
++	struct ifreq *ifr = tun->ifr;
++	int err;
++
++	tun->pcpu_stats = netdev_alloc_pcpu_stats(struct tun_pcpu_stats);
++	if (!tun->pcpu_stats)
++		return -ENOMEM;
++
++	spin_lock_init(&tun->lock);
++
++	err = security_tun_dev_alloc_security(&tun->security);
++	if (err < 0) {
++		free_percpu(tun->pcpu_stats);
++		return err;
++	}
++
++	tun_flow_init(tun);
++
++	dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST |
++			   TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX |
++			   NETIF_F_HW_VLAN_STAG_TX;
++	dev->features = dev->hw_features | NETIF_F_LLTX;
++	dev->vlan_features = dev->features &
++			     ~(NETIF_F_HW_VLAN_CTAG_TX |
++			       NETIF_F_HW_VLAN_STAG_TX);
++
++	tun->flags = (tun->flags & ~TUN_FEATURES) |
++		      (ifr->ifr_flags & TUN_FEATURES);
++
++	INIT_LIST_HEAD(&tun->disabled);
++	err = tun_attach(tun, tun->file, false, ifr->ifr_flags & IFF_NAPI,
++			 ifr->ifr_flags & IFF_NAPI_FRAGS, false);
++	if (err < 0) {
++		tun_flow_uninit(tun);
++		security_tun_dev_free_security(tun->security);
++		free_percpu(tun->pcpu_stats);
++		return err;
++	}
++	return 0;
++}
++
+ /* Net device detach from fd. */
+ static void tun_net_uninit(struct net_device *dev)
+ {
+@@ -1216,6 +1265,7 @@ static int tun_net_change_carrier(struct net_device *dev, bool new_carrier)
+ }
+ 
+ static const struct net_device_ops tun_netdev_ops = {
++	.ndo_init		= tun_net_init,
+ 	.ndo_uninit		= tun_net_uninit,
+ 	.ndo_open		= tun_net_open,
+ 	.ndo_stop		= tun_net_close,
+@@ -1296,6 +1346,7 @@ static int tun_xdp_tx(struct net_device *dev, struct xdp_buff *xdp)
+ }
+ 
+ static const struct net_device_ops tap_netdev_ops = {
++	.ndo_init		= tun_net_init,
+ 	.ndo_uninit		= tun_net_uninit,
+ 	.ndo_open		= tun_net_open,
+ 	.ndo_stop		= tun_net_close,
+@@ -1336,7 +1387,7 @@ static void tun_flow_uninit(struct tun_struct *tun)
+ #define MAX_MTU 65535
+ 
+ /* Initialize net device. */
+-static void tun_net_init(struct net_device *dev)
++static void tun_net_initialize(struct net_device *dev)
+ {
+ 	struct tun_struct *tun = netdev_priv(dev);
+ 
+@@ -2268,10 +2319,6 @@ static void tun_free_netdev(struct net_device *dev)
+ 	BUG_ON(!(list_empty(&tun->disabled)));
+ 
+ 	free_percpu(tun->pcpu_stats);
+-	/* We clear pcpu_stats so that tun_set_iff() can tell if
+-	 * tun_free_netdev() has been called from register_netdevice().
+-	 */
+-	tun->pcpu_stats = NULL;
+ 
+ 	tun_flow_uninit(tun);
+ 	security_tun_dev_free_security(tun->security);
+@@ -2784,41 +2831,16 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
+ 		tun->rx_batched = 0;
+ 		RCU_INIT_POINTER(tun->steering_prog, NULL);
+ 
+-		tun->pcpu_stats = netdev_alloc_pcpu_stats(struct tun_pcpu_stats);
+-		if (!tun->pcpu_stats) {
+-			err = -ENOMEM;
+-			goto err_free_dev;
+-		}
++		tun->ifr = ifr;
++		tun->file = file;
+ 
+-		spin_lock_init(&tun->lock);
+-
+-		err = security_tun_dev_alloc_security(&tun->security);
+-		if (err < 0)
+-			goto err_free_stat;
+-
+-		tun_net_init(dev);
+-		tun_flow_init(tun);
+-
+-		dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST |
+-				   TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX |
+-				   NETIF_F_HW_VLAN_STAG_TX;
+-		dev->features = dev->hw_features | NETIF_F_LLTX;
+-		dev->vlan_features = dev->features &
+-				     ~(NETIF_F_HW_VLAN_CTAG_TX |
+-				       NETIF_F_HW_VLAN_STAG_TX);
+-
+-		tun->flags = (tun->flags & ~TUN_FEATURES) |
+-			      (ifr->ifr_flags & TUN_FEATURES);
+-
+-		INIT_LIST_HEAD(&tun->disabled);
+-		err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI,
+-				 ifr->ifr_flags & IFF_NAPI_FRAGS, false);
+-		if (err < 0)
+-			goto err_free_flow;
++		tun_net_initialize(dev);
+ 
+ 		err = register_netdevice(tun->dev);
+-		if (err < 0)
+-			goto err_detach;
++		if (err < 0) {
++			free_netdev(dev);
++			return err;
++		}
+ 		/* free_netdev() won't check refcnt, to aovid race
+ 		 * with dev_put() we need publish tun after registration.
+ 		 */
+@@ -2835,24 +2857,6 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
+ 
+ 	strcpy(ifr->ifr_name, tun->dev->name);
+ 	return 0;
+-
+-err_detach:
+-	tun_detach_all(dev);
+-	/* We are here because register_netdevice() has failed.
+-	 * If register_netdevice() already called tun_free_netdev()
+-	 * while dealing with the error, tun->pcpu_stats has been cleared.
+-	 */
+-	if (!tun->pcpu_stats)
+-		goto err_free_dev;
+-
+-err_free_flow:
+-	tun_flow_uninit(tun);
+-	security_tun_dev_free_security(tun->security);
+-err_free_stat:
+-	free_percpu(tun->pcpu_stats);
+-err_free_dev:
+-	free_netdev(dev);
+-	return err;
+ }
+ 
+ static void tun_get_iff(struct tun_struct *tun, struct ifreq *ifr)
+diff --git a/drivers/net/wireless/ath/ath9k/htc.h b/drivers/net/wireless/ath/ath9k/htc.h
+index 0a1634238e673..6b45e63fae4ba 100644
+--- a/drivers/net/wireless/ath/ath9k/htc.h
++++ b/drivers/net/wireless/ath/ath9k/htc.h
+@@ -281,6 +281,7 @@ struct ath9k_htc_rxbuf {
+ struct ath9k_htc_rx {
+ 	struct list_head rxbuf;
+ 	spinlock_t rxbuflock;
++	bool initialized;
+ };
+ 
+ #define ATH9K_HTC_TX_CLEANUP_INTERVAL 50 /* ms */
+@@ -305,6 +306,7 @@ struct ath9k_htc_tx {
+ 	DECLARE_BITMAP(tx_slot, MAX_TX_BUF_NUM);
+ 	struct timer_list cleanup_timer;
+ 	spinlock_t tx_lock;
++	bool initialized;
+ };
+ 
+ struct ath9k_htc_tx_ctl {
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+index 30ddf333e04dc..43a743ec9d9e0 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+@@ -808,6 +808,11 @@ int ath9k_tx_init(struct ath9k_htc_priv *priv)
+ 	skb_queue_head_init(&priv->tx.data_vi_queue);
+ 	skb_queue_head_init(&priv->tx.data_vo_queue);
+ 	skb_queue_head_init(&priv->tx.tx_failed);
++
++	/* Allow ath9k_wmi_event_tasklet(WMI_TXSTATUS_EVENTID) to operate. */
++	smp_wmb();
++	priv->tx.initialized = true;
++
+ 	return 0;
+ }
+ 
+@@ -1133,6 +1138,10 @@ void ath9k_htc_rxep(void *drv_priv, struct sk_buff *skb,
+ 	struct ath9k_htc_rxbuf *rxbuf = NULL, *tmp_buf = NULL;
+ 	unsigned long flags;
+ 
++	/* Check if ath9k_rx_init() completed. */
++	if (!data_race(priv->rx.initialized))
++		goto err;
++
+ 	spin_lock_irqsave(&priv->rx.rxbuflock, flags);
+ 	list_for_each_entry(tmp_buf, &priv->rx.rxbuf, list) {
+ 		if (!tmp_buf->in_process) {
+@@ -1188,6 +1197,10 @@ int ath9k_rx_init(struct ath9k_htc_priv *priv)
+ 		list_add_tail(&rxbuf->list, &priv->rx.rxbuf);
+ 	}
+ 
++	/* Allow ath9k_htc_rxep() to operate. */
++	smp_wmb();
++	priv->rx.initialized = true;
++
+ 	return 0;
+ 
+ err:
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index fe29ad4b9023c..f315c54bd3ac0 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -169,6 +169,10 @@ void ath9k_wmi_event_tasklet(struct tasklet_struct *t)
+ 					     &wmi->drv_priv->fatal_work);
+ 			break;
+ 		case WMI_TXSTATUS_EVENTID:
++			/* Check if ath9k_tx_init() completed. */
++			if (!data_race(priv->tx.initialized))
++				break;
++
+ 			spin_lock_bh(&priv->tx.tx_lock);
+ 			if (priv->tx.flags & ATH9K_HTC_OP_TX_DRAIN) {
+ 				spin_unlock_bh(&priv->tx.tx_lock);
+diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
+index 54ba20492ad11..ec53f52a06a58 100644
+--- a/tools/arch/x86/include/asm/cpufeatures.h
++++ b/tools/arch/x86/include/asm/cpufeatures.h
+@@ -296,6 +296,7 @@
+ #define X86_FEATURE_RETPOLINE_LFENCE	(11*32+13) /* "" Use LFENCE for Spectre variant 2 */
+ #define X86_FEATURE_RETHUNK		(11*32+14) /* "" Use REturn THUNK */
+ #define X86_FEATURE_UNRET		(11*32+15) /* "" AMD BTB untrain return */
++#define X86_FEATURE_RSB_VMEXIT_LITE	(11*32+17) /* "" Fill RSB on VM-Exit when EIBRS is enabled */
+ 
+ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
+ #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
+diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h
+index 407de670cd609..144dc164b7596 100644
+--- a/tools/arch/x86/include/asm/msr-index.h
++++ b/tools/arch/x86/include/asm/msr-index.h
+@@ -148,6 +148,10 @@
+ 						 * are restricted to targets in
+ 						 * kernel.
+ 						 */
++#define ARCH_CAP_PBRSB_NO		BIT(24)	/*
++						 * Not susceptible to Post-Barrier
++						 * Return Stack Buffer Predictions.
++						 */
+ 
+ #define MSR_IA32_FLUSH_CMD		0x0000010b
+ #define L1D_FLUSH			BIT(0)	/*
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index b9ee2ded381ab..7943e748916d4 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -4180,7 +4180,8 @@ struct bpf_sock {
+ 	__u32 src_ip4;
+ 	__u32 src_ip6[4];
+ 	__u32 src_port;		/* host byte order */
+-	__u32 dst_port;		/* network byte order */
++	__be16 dst_port;	/* network byte order */
++	__u16 :16;		/* zero padding */
+ 	__u32 dst_ip4;
+ 	__u32 dst_ip6[4];
+ 	__u32 state;
+diff --git a/tools/kvm/kvm_stat/kvm_stat b/tools/kvm/kvm_stat/kvm_stat
+index b0bf56c5f1202..a1efcfbd8b9e6 100755
+--- a/tools/kvm/kvm_stat/kvm_stat
++++ b/tools/kvm/kvm_stat/kvm_stat
+@@ -1646,7 +1646,8 @@ Press any other key to refresh statistics immediately.
+                          .format(values))
+             if len(pids) > 1:
+                 sys.exit('Error: Multiple processes found (pids: {}). Use "-p"'
+-                         ' to specify the desired pid'.format(" ".join(pids)))
++                         ' to specify the desired pid'
++                         .format(" ".join(map(str, pids))))
+             namespace.pid = pids[0]
+ 
+     argparser = argparse.ArgumentParser(description=description_text,
+diff --git a/tools/testing/selftests/bpf/prog_tests/sock_fields.c b/tools/testing/selftests/bpf/prog_tests/sock_fields.c
+index af87118e748e5..e8b5bf707cd45 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sock_fields.c
++++ b/tools/testing/selftests/bpf/prog_tests/sock_fields.c
+@@ -1,9 +1,11 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright (c) 2019 Facebook */
+ 
++#define _GNU_SOURCE
+ #include <netinet/in.h>
+ #include <arpa/inet.h>
+ #include <unistd.h>
++#include <sched.h>
+ #include <stdlib.h>
+ #include <string.h>
+ #include <errno.h>
+@@ -21,6 +23,7 @@
+ enum bpf_linum_array_idx {
+ 	EGRESS_LINUM_IDX,
+ 	INGRESS_LINUM_IDX,
++	READ_SK_DST_PORT_LINUM_IDX,
+ 	__NR_BPF_LINUM_ARRAY_IDX,
+ };
+ 
+@@ -43,8 +46,16 @@ static __u64 child_cg_id;
+ static int linum_map_fd;
+ static __u32 duration;
+ 
+-static __u32 egress_linum_idx = EGRESS_LINUM_IDX;
+-static __u32 ingress_linum_idx = INGRESS_LINUM_IDX;
++static bool create_netns(void)
++{
++	if (!ASSERT_OK(unshare(CLONE_NEWNET), "create netns"))
++		return false;
++
++	if (!ASSERT_OK(system("ip link set dev lo up"), "bring up lo"))
++		return false;
++
++	return true;
++}
+ 
+ static void print_sk(const struct bpf_sock *sk, const char *prefix)
+ {
+@@ -92,19 +103,24 @@ static void check_result(void)
+ {
+ 	struct bpf_tcp_sock srv_tp, cli_tp, listen_tp;
+ 	struct bpf_sock srv_sk, cli_sk, listen_sk;
+-	__u32 ingress_linum, egress_linum;
++	__u32 idx, ingress_linum, egress_linum, linum;
+ 	int err;
+ 
+-	err = bpf_map_lookup_elem(linum_map_fd, &egress_linum_idx,
+-				  &egress_linum);
++	idx = EGRESS_LINUM_IDX;
++	err = bpf_map_lookup_elem(linum_map_fd, &idx, &egress_linum);
+ 	CHECK(err == -1, "bpf_map_lookup_elem(linum_map_fd)",
+ 	      "err:%d errno:%d\n", err, errno);
+ 
+-	err = bpf_map_lookup_elem(linum_map_fd, &ingress_linum_idx,
+-				  &ingress_linum);
++	idx = INGRESS_LINUM_IDX;
++	err = bpf_map_lookup_elem(linum_map_fd, &idx, &ingress_linum);
+ 	CHECK(err == -1, "bpf_map_lookup_elem(linum_map_fd)",
+ 	      "err:%d errno:%d\n", err, errno);
+ 
++	idx = READ_SK_DST_PORT_LINUM_IDX;
++	err = bpf_map_lookup_elem(linum_map_fd, &idx, &linum);
++	ASSERT_OK(err, "bpf_map_lookup_elem(linum_map_fd, READ_SK_DST_PORT_IDX)");
++	ASSERT_EQ(linum, 0, "failure in read_sk_dst_port on line");
++
+ 	memcpy(&srv_sk, &skel->bss->srv_sk, sizeof(srv_sk));
+ 	memcpy(&srv_tp, &skel->bss->srv_tp, sizeof(srv_tp));
+ 	memcpy(&cli_sk, &skel->bss->cli_sk, sizeof(cli_sk));
+@@ -263,7 +279,7 @@ static void test(void)
+ 	char buf[DATA_LEN];
+ 
+ 	/* Prepare listen_fd */
+-	listen_fd = start_server(AF_INET6, SOCK_STREAM, "::1", 0, 0);
++	listen_fd = start_server(AF_INET6, SOCK_STREAM, "::1", 0xcafe, 0);
+ 	/* start_server() has logged the error details */
+ 	if (CHECK_FAIL(listen_fd == -1))
+ 		goto done;
+@@ -331,8 +347,12 @@ done:
+ 
+ void test_sock_fields(void)
+ {
+-	struct bpf_link *egress_link = NULL, *ingress_link = NULL;
+ 	int parent_cg_fd = -1, child_cg_fd = -1;
++	struct bpf_link *link;
++
++	/* Use a dedicated netns to have a fixed listen port */
++	if (!create_netns())
++		return;
+ 
+ 	/* Create a cgroup, get fd, and join it */
+ 	parent_cg_fd = test__join_cgroup(PARENT_CGROUP);
+@@ -353,17 +373,20 @@ void test_sock_fields(void)
+ 	if (CHECK(!skel, "test_sock_fields__open_and_load", "failed\n"))
+ 		goto done;
+ 
+-	egress_link = bpf_program__attach_cgroup(skel->progs.egress_read_sock_fields,
+-						 child_cg_fd);
+-	if (CHECK(IS_ERR(egress_link), "attach_cgroup(egress)", "err:%ld\n",
+-		  PTR_ERR(egress_link)))
++	link = bpf_program__attach_cgroup(skel->progs.egress_read_sock_fields, child_cg_fd);
++	if (!ASSERT_OK_PTR(link, "attach_cgroup(egress_read_sock_fields)"))
++		goto done;
++	skel->links.egress_read_sock_fields = link;
++
++	link = bpf_program__attach_cgroup(skel->progs.ingress_read_sock_fields, child_cg_fd);
++	if (!ASSERT_OK_PTR(link, "attach_cgroup(ingress_read_sock_fields)"))
+ 		goto done;
++	skel->links.ingress_read_sock_fields = link;
+ 
+-	ingress_link = bpf_program__attach_cgroup(skel->progs.ingress_read_sock_fields,
+-						  child_cg_fd);
+-	if (CHECK(IS_ERR(ingress_link), "attach_cgroup(ingress)", "err:%ld\n",
+-		  PTR_ERR(ingress_link)))
++	link = bpf_program__attach_cgroup(skel->progs.read_sk_dst_port, child_cg_fd);
++	if (!ASSERT_OK_PTR(link, "attach_cgroup(read_sk_dst_port"))
+ 		goto done;
++	skel->links.read_sk_dst_port = link;
+ 
+ 	linum_map_fd = bpf_map__fd(skel->maps.linum_map);
+ 	sk_pkt_out_cnt_fd = bpf_map__fd(skel->maps.sk_pkt_out_cnt);
+@@ -372,8 +395,7 @@ void test_sock_fields(void)
+ 	test();
+ 
+ done:
+-	bpf_link__destroy(egress_link);
+-	bpf_link__destroy(ingress_link);
++	test_sock_fields__detach(skel);
+ 	test_sock_fields__destroy(skel);
+ 	if (child_cg_fd != -1)
+ 		close(child_cg_fd);
+diff --git a/tools/testing/selftests/bpf/progs/test_sock_fields.c b/tools/testing/selftests/bpf/progs/test_sock_fields.c
+index 7967348b11af6..43b31aa1fcf72 100644
+--- a/tools/testing/selftests/bpf/progs/test_sock_fields.c
++++ b/tools/testing/selftests/bpf/progs/test_sock_fields.c
+@@ -12,6 +12,7 @@
+ enum bpf_linum_array_idx {
+ 	EGRESS_LINUM_IDX,
+ 	INGRESS_LINUM_IDX,
++	READ_SK_DST_PORT_LINUM_IDX,
+ 	__NR_BPF_LINUM_ARRAY_IDX,
+ };
+ 
+@@ -250,4 +251,48 @@ int ingress_read_sock_fields(struct __sk_buff *skb)
+ 	return CG_OK;
+ }
+ 
++static __noinline bool sk_dst_port__load_word(struct bpf_sock *sk)
++{
++	__u32 *word = (__u32 *)&sk->dst_port;
++	return word[0] == bpf_htonl(0xcafe0000);
++}
++
++static __noinline bool sk_dst_port__load_half(struct bpf_sock *sk)
++{
++	__u16 *half = (__u16 *)&sk->dst_port;
++	return half[0] == bpf_htons(0xcafe);
++}
++
++static __noinline bool sk_dst_port__load_byte(struct bpf_sock *sk)
++{
++	__u8 *byte = (__u8 *)&sk->dst_port;
++	return byte[0] == 0xca && byte[1] == 0xfe;
++}
++
++SEC("cgroup_skb/egress")
++int read_sk_dst_port(struct __sk_buff *skb)
++{
++	__u32 linum, linum_idx;
++	struct bpf_sock *sk;
++
++	linum_idx = READ_SK_DST_PORT_LINUM_IDX;
++
++	sk = skb->sk;
++	if (!sk)
++		RET_LOG();
++
++	/* Ignore everything but the SYN from the client socket */
++	if (sk->state != BPF_TCP_SYN_SENT)
++		return CG_OK;
++
++	if (!sk_dst_port__load_word(sk))
++		RET_LOG();
++	if (!sk_dst_port__load_half(sk))
++		RET_LOG();
++	if (!sk_dst_port__load_byte(sk))
++		RET_LOG();
++
++	return CG_OK;
++}
++
+ char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/bpf/verifier/sock.c b/tools/testing/selftests/bpf/verifier/sock.c
+index ce13ece08d51c..8c224eac93df7 100644
+--- a/tools/testing/selftests/bpf/verifier/sock.c
++++ b/tools/testing/selftests/bpf/verifier/sock.c
+@@ -121,7 +121,25 @@
+ 	.result = ACCEPT,
+ },
+ {
+-	"sk_fullsock(skb->sk): sk->dst_port [narrow load]",
++	"sk_fullsock(skb->sk): sk->dst_port [word load] (backward compatibility)",
++	.insns = {
++	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
++	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
++	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port)),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
++	.result = ACCEPT,
++},
++{
++	"sk_fullsock(skb->sk): sk->dst_port [half load]",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+ 	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+@@ -139,7 +157,64 @@
+ 	.result = ACCEPT,
+ },
+ {
+-	"sk_fullsock(skb->sk): sk->dst_port [load 2nd byte]",
++	"sk_fullsock(skb->sk): sk->dst_port [half load] (invalid)",
++	.insns = {
++	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
++	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
++	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
++	.result = REJECT,
++	.errstr = "invalid sock access",
++},
++{
++	"sk_fullsock(skb->sk): sk->dst_port [byte load]",
++	.insns = {
++	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
++	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
++	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port)),
++	BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 1),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
++	.result = ACCEPT,
++},
++{
++	"sk_fullsock(skb->sk): sk->dst_port [byte load] (invalid)",
++	.insns = {
++	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
++	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
++	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
++	.result = REJECT,
++	.errstr = "invalid sock access",
++},
++{
++	"sk_fullsock(skb->sk): past sk->dst_port [half load] (invalid)",
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+ 	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+@@ -149,7 +224,7 @@
+ 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 1),
++	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_sock, dst_port)),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+diff --git a/tools/testing/selftests/kvm/lib/aarch64/ucall.c b/tools/testing/selftests/kvm/lib/aarch64/ucall.c
+index 2f37b90ee1a94..f600311fdc6ae 100644
+--- a/tools/testing/selftests/kvm/lib/aarch64/ucall.c
++++ b/tools/testing/selftests/kvm/lib/aarch64/ucall.c
+@@ -73,20 +73,19 @@ void ucall_uninit(struct kvm_vm *vm)
+ 
+ void ucall(uint64_t cmd, int nargs, ...)
+ {
+-	struct ucall uc = {
+-		.cmd = cmd,
+-	};
++	struct ucall uc = {};
+ 	va_list va;
+ 	int i;
+ 
++	WRITE_ONCE(uc.cmd, cmd);
+ 	nargs = nargs <= UCALL_MAX_ARGS ? nargs : UCALL_MAX_ARGS;
+ 
+ 	va_start(va, nargs);
+ 	for (i = 0; i < nargs; ++i)
+-		uc.args[i] = va_arg(va, uint64_t);
++		WRITE_ONCE(uc.args[i], va_arg(va, uint64_t));
+ 	va_end(va);
+ 
+-	*ucall_exit_mmio_addr = (vm_vaddr_t)&uc;
++	WRITE_ONCE(*ucall_exit_mmio_addr, (vm_vaddr_t)&uc);
+ }
+ 
+ uint64_t get_ucall(struct kvm_vm *vm, uint32_t vcpu_id, struct ucall *uc)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-08-21 16:52 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-08-21 16:52 UTC (permalink / raw
  To: gentoo-commits

commit:     d2b36d48c5c3f8accdf42ad34b5de409bd15958a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 21 16:52:12 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Aug 21 16:52:12 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d2b36d48

Linux patch 5.10.137

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1136_linux-5.10.137.patch | 18472 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 18476 insertions(+)

diff --git a/0000_README b/0000_README
index f9d23a2c..4c11c230 100644
--- a/0000_README
+++ b/0000_README
@@ -587,6 +587,10 @@ Patch:  1135_linux-5.10.136.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.136
 
+Patch:  1136_linux-5.10.137.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.137
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1136_linux-5.10.137.patch b/1136_linux-5.10.137.patch
new file mode 100644
index 00000000..c627ef31
--- /dev/null
+++ b/1136_linux-5.10.137.patch
@@ -0,0 +1,18472 @@
+diff --git a/Documentation/ABI/testing/sysfs-driver-xen-blkback b/Documentation/ABI/testing/sysfs-driver-xen-blkback
+index ac2947b989504..3d5de44cbbee9 100644
+--- a/Documentation/ABI/testing/sysfs-driver-xen-blkback
++++ b/Documentation/ABI/testing/sysfs-driver-xen-blkback
+@@ -42,5 +42,5 @@ KernelVersion:  5.10
+ Contact:        SeongJae Park <sjpark@amazon.de>
+ Description:
+                 Whether to enable the persistent grants feature or not.  Note
+-                that this option only takes effect on newly created backends.
++                that this option only takes effect on newly connected backends.
+                 The default is Y (enable).
+diff --git a/Documentation/ABI/testing/sysfs-driver-xen-blkfront b/Documentation/ABI/testing/sysfs-driver-xen-blkfront
+index 28008905615f0..1f7659aa085c2 100644
+--- a/Documentation/ABI/testing/sysfs-driver-xen-blkfront
++++ b/Documentation/ABI/testing/sysfs-driver-xen-blkfront
+@@ -15,5 +15,5 @@ KernelVersion:  5.10
+ Contact:        SeongJae Park <sjpark@amazon.de>
+ Description:
+                 Whether to enable the persistent grants feature or not.  Note
+-                that this option only takes effect on newly created frontends.
++                that this option only takes effect on newly connected frontends.
+                 The default is Y (enable).
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 8b7c26d090459..f577c29f20930 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4662,20 +4662,33 @@
+ 			Speculative Code Execution with Return Instructions)
+ 			vulnerability.
+ 
++			AMD-based UNRET and IBPB mitigations alone do not stop
++			sibling threads from influencing the predictions of other
++			sibling threads. For that reason, STIBP is used on pro-
++			cessors that support it, and mitigate SMT on processors
++			that don't.
++
+ 			off          - no mitigation
+ 			auto         - automatically select a migitation
+ 			auto,nosmt   - automatically select a mitigation,
+ 				       disabling SMT if necessary for
+ 				       the full mitigation (only on Zen1
+ 				       and older without STIBP).
+-			ibpb	     - mitigate short speculation windows on
+-				       basic block boundaries too. Safe, highest
+-				       perf impact.
+-			unret        - force enable untrained return thunks,
+-				       only effective on AMD f15h-f17h
+-				       based systems.
+-			unret,nosmt  - like unret, will disable SMT when STIBP
+-			               is not available.
++			ibpb         - On AMD, mitigate short speculation
++				       windows on basic block boundaries too.
++				       Safe, highest perf impact. It also
++				       enables STIBP if present. Not suitable
++				       on Intel.
++			ibpb,nosmt   - Like "ibpb" above but will disable SMT
++				       when STIBP is not available. This is
++				       the alternative for systems which do not
++				       have STIBP.
++			unret        - Force enable untrained return thunks,
++				       only effective on AMD f15h-f17h based
++				       systems.
++			unret,nosmt  - Like unret, but will disable SMT when STIBP
++				       is not available. This is the alternative for
++				       systems which do not have STIBP.
+ 
+ 			Selecting 'auto' will choose a mitigation method at run
+ 			time according to the CPU.
+diff --git a/Documentation/admin-guide/pm/cpuidle.rst b/Documentation/admin-guide/pm/cpuidle.rst
+index 10fde58d08697..3596e3714ec18 100644
+--- a/Documentation/admin-guide/pm/cpuidle.rst
++++ b/Documentation/admin-guide/pm/cpuidle.rst
+@@ -685,8 +685,8 @@ the ``menu`` governor to be used on the systems that use the ``ladder`` governor
+ by default this way, for example.
+ 
+ The other kernel command line parameters controlling CPU idle time management
+-described below are only relevant for the *x86* architecture and some of
+-them affect Intel processors only.
++described below are only relevant for the *x86* architecture and references
++to ``intel_idle`` affect Intel processors only.
+ 
+ The *x86* architecture support code recognizes three kernel command line
+ options related to CPU idle time management: ``idle=poll``, ``idle=halt``,
+@@ -708,10 +708,13 @@ idle, so it very well may hurt single-thread computations performance as well as
+ energy-efficiency.  Thus using it for performance reasons may not be a good idea
+ at all.]
+ 
+-The ``idle=nomwait`` option disables the ``intel_idle`` driver and causes
+-``acpi_idle`` to be used (as long as all of the information needed by it is
+-there in the system's ACPI tables), but it is not allowed to use the
+-``MWAIT`` instruction of the CPUs to ask the hardware to enter idle states.
++The ``idle=nomwait`` option prevents the use of ``MWAIT`` instruction of
++the CPU to enter idle states. When this option is used, the ``acpi_idle``
++driver will use the ``HLT`` instruction instead of ``MWAIT``. On systems
++running Intel processors, this option disables the ``intel_idle`` driver
++and forces the use of the ``acpi_idle`` driver instead. Note that in either
++case, ``acpi_idle`` driver will function only if all the information needed
++by it is in the system's ACPI tables.
+ 
+ In addition to the architecture-level kernel command line options affecting CPU
+ idle time management, there are parameters affecting individual ``CPUIdle``
+diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
+index f1a4d3c3ba0bb..d3a02300913a7 100644
+--- a/Documentation/driver-api/vfio.rst
++++ b/Documentation/driver-api/vfio.rst
+@@ -249,18 +249,23 @@ VFIO bus driver API
+ 
+ VFIO bus drivers, such as vfio-pci make use of only a few interfaces
+ into VFIO core.  When devices are bound and unbound to the driver,
+-the driver should call vfio_add_group_dev() and vfio_del_group_dev()
+-respectively::
+-
+-	extern int vfio_add_group_dev(struct device *dev,
+-				      const struct vfio_device_ops *ops,
+-				      void *device_data);
+-
+-	extern void *vfio_del_group_dev(struct device *dev);
+-
+-vfio_add_group_dev() indicates to the core to begin tracking the
+-iommu_group of the specified dev and register the dev as owned by
+-a VFIO bus driver.  The driver provides an ops structure for callbacks
++the driver should call vfio_register_group_dev() and
++vfio_unregister_group_dev() respectively::
++
++	void vfio_init_group_dev(struct vfio_device *device,
++				struct device *dev,
++				const struct vfio_device_ops *ops,
++				void *device_data);
++	int vfio_register_group_dev(struct vfio_device *device);
++	void vfio_unregister_group_dev(struct vfio_device *device);
++
++The driver should embed the vfio_device in its own structure and call
++vfio_init_group_dev() to pre-configure it before going to registration.
++vfio_register_group_dev() indicates to the core to begin tracking the
++iommu_group of the specified dev and register the dev as owned by a VFIO bus
++driver. Once vfio_register_group_dev() returns it is possible for userspace to
++start accessing the driver, thus the driver should ensure it is completely
++ready before calling it. The driver provides an ops structure for callbacks
+ similar to a file operations structure::
+ 
+ 	struct vfio_device_ops {
+@@ -276,7 +281,7 @@ similar to a file operations structure::
+ 	};
+ 
+ Each function is passed the device_data that was originally registered
+-in the vfio_add_group_dev() call above.  This allows the bus driver
++in the vfio_register_group_dev() call above.  This allows the bus driver
+ an easy place to store its opaque, private data.  The open/release
+ callbacks are issued when a new file descriptor is created for a
+ device (via VFIO_GROUP_GET_DEVICE_FD).  The ioctl interface provides
+diff --git a/Makefile b/Makefile
+index 1730698124c7b..b3bfdf51232f3 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 136
++SUBLEVEL = 137
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -983,6 +983,9 @@ KBUILD_CFLAGS   += $(KCFLAGS)
+ KBUILD_LDFLAGS_MODULE += --build-id=sha1
+ LDFLAGS_vmlinux += --build-id=sha1
+ 
++KBUILD_LDFLAGS	+= -z noexecstack
++KBUILD_LDFLAGS	+= $(call ld-option,--no-warn-rwx-segments)
++
+ ifeq ($(CONFIG_STRIP_ASM_SYMS),y)
+ LDFLAGS_vmlinux	+= $(call ld-option, -X,)
+ endif
+diff --git a/arch/arm/boot/dts/Makefile b/arch/arm/boot/dts/Makefile
+index 7e8151681597c..d93f01dddc3f9 100644
+--- a/arch/arm/boot/dts/Makefile
++++ b/arch/arm/boot/dts/Makefile
+@@ -128,6 +128,7 @@ dtb-$(CONFIG_ARCH_BCM_5301X) += \
+ 	bcm47094-luxul-xwr-3150-v1.dtb \
+ 	bcm47094-netgear-r8500.dtb \
+ 	bcm47094-phicomm-k3.dtb \
++	bcm53015-meraki-mr26.dtb \
+ 	bcm53016-meraki-mr32.dtb \
+ 	bcm94708.dtb \
+ 	bcm94709.dtb \
+diff --git a/arch/arm/boot/dts/aspeed-ast2500-evb.dts b/arch/arm/boot/dts/aspeed-ast2500-evb.dts
+index 8bec21ed0de53..7a874debb7d53 100644
+--- a/arch/arm/boot/dts/aspeed-ast2500-evb.dts
++++ b/arch/arm/boot/dts/aspeed-ast2500-evb.dts
+@@ -5,7 +5,7 @@
+ 
+ / {
+ 	model = "AST2500 EVB";
+-	compatible = "aspeed,ast2500";
++	compatible = "aspeed,ast2500-evb", "aspeed,ast2500";
+ 
+ 	aliases {
+ 		serial4 = &uart5;
+diff --git a/arch/arm/boot/dts/aspeed-ast2600-evb.dts b/arch/arm/boot/dts/aspeed-ast2600-evb.dts
+index 8d0f4656aa05c..892814c02aa92 100644
+--- a/arch/arm/boot/dts/aspeed-ast2600-evb.dts
++++ b/arch/arm/boot/dts/aspeed-ast2600-evb.dts
+@@ -7,7 +7,7 @@
+ 
+ / {
+ 	model = "AST2600 EVB";
+-	compatible = "aspeed,ast2600";
++	compatible = "aspeed,ast2600-evb-a1", "aspeed,ast2600";
+ 
+ 	aliases {
+ 		serial4 = &uart5;
+diff --git a/arch/arm/boot/dts/bcm53015-meraki-mr26.dts b/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
+new file mode 100644
+index 0000000000000..14f58033efeb9
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
+@@ -0,0 +1,166 @@
++// SPDX-License-Identifier: GPL-2.0-or-later OR MIT
++/*
++ * Broadcom BCM470X / BCM5301X ARM platform code.
++ * DTS for Meraki MR26 / Codename: Venom
++ *
++ * Copyright (C) 2022 Christian Lamparter <chunkeey@gmail.com>
++ */
++
++/dts-v1/;
++
++#include "bcm4708.dtsi"
++#include "bcm5301x-nand-cs0-bch8.dtsi"
++#include <dt-bindings/leds/common.h>
++
++/ {
++	compatible = "meraki,mr26", "brcm,bcm53015", "brcm,bcm4708";
++	model = "Meraki MR26";
++
++	memory@0 {
++		reg = <0x00000000 0x08000000>;
++		device_type = "memory";
++	};
++
++	leds {
++		compatible = "gpio-leds";
++
++		led-0 {
++			function = LED_FUNCTION_FAULT;
++			color = <LED_COLOR_ID_AMBER>;
++			gpios = <&chipcommon 13 GPIO_ACTIVE_HIGH>;
++			panic-indicator;
++		};
++		led-1 {
++			function = LED_FUNCTION_INDICATOR;
++			color = <LED_COLOR_ID_WHITE>;
++			gpios = <&chipcommon 12 GPIO_ACTIVE_HIGH>;
++		};
++	};
++
++	keys {
++		compatible = "gpio-keys";
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		key-restart {
++			label = "Reset";
++			linux,code = <KEY_RESTART>;
++			gpios = <&chipcommon 11 GPIO_ACTIVE_LOW>;
++		};
++	};
++};
++
++&uart0 {
++	clock-frequency = <50000000>;
++	/delete-property/ clocks;
++};
++
++&uart1 {
++	status = "disabled";
++};
++
++&gmac0 {
++	status = "okay";
++};
++
++&gmac1 {
++	status = "disabled";
++};
++&gmac2 {
++	status = "disabled";
++};
++&gmac3 {
++	status = "disabled";
++};
++
++&nandcs {
++	nand-ecc-algo = "hw";
++
++	partitions {
++		compatible = "fixed-partitions";
++		#address-cells = <0x1>;
++		#size-cells = <0x1>;
++
++		partition@0 {
++			label = "u-boot";
++			reg = <0x0 0x200000>;
++			read-only;
++		};
++
++		partition@200000 {
++			label = "u-boot-env";
++			reg = <0x200000 0x200000>;
++			/* empty */
++		};
++
++		partition@400000 {
++			label = "u-boot-backup";
++			reg = <0x400000 0x200000>;
++			/* empty */
++		};
++
++		partition@600000 {
++			label = "u-boot-env-backup";
++			reg = <0x600000 0x200000>;
++			/* empty */
++		};
++
++		partition@800000 {
++			label = "ubi";
++			reg = <0x800000 0x7780000>;
++		};
++	};
++};
++
++&srab {
++	status = "okay";
++
++	ports {
++		port@0 {
++			reg = <0>;
++			label = "poe";
++		};
++
++		port@5 {
++			reg = <5>;
++			label = "cpu";
++			ethernet = <&gmac0>;
++
++			fixed-link {
++				speed = <1000>;
++				duplex-full;
++			};
++		};
++	};
++};
++
++&i2c0 {
++	status = "okay";
++
++	pinctrl-names = "default";
++	pinctrl-0 = <&pinmux_i2c>;
++
++	clock-frequency = <100000>;
++
++	ina219@40 {
++		compatible = "ti,ina219"; /* PoE power */
++		reg = <0x40>;
++		shunt-resistor = <60000>; /* = 60 mOhms */
++	};
++
++	eeprom@56 {
++		compatible = "atmel,24c64";
++		reg = <0x56>;
++		pagesize = <32>;
++		read-only;
++		#address-cells = <1>;
++		#size-cells = <1>;
++
++		/* it's empty */
++	};
++};
++
++&thermal {
++	status = "disabled";
++	/* does not work, reads 418 degree Celsius */
++};
+diff --git a/arch/arm/boot/dts/imx53-ppd.dts b/arch/arm/boot/dts/imx53-ppd.dts
+index 6d9a5ede94aaf..006fbd7f54322 100644
+--- a/arch/arm/boot/dts/imx53-ppd.dts
++++ b/arch/arm/boot/dts/imx53-ppd.dts
+@@ -592,7 +592,7 @@
+ 
+ 	touchscreen@4b {
+ 		compatible = "atmel,maxtouch";
+-		reset-gpio = <&gpio5 19 GPIO_ACTIVE_HIGH>;
++		reset-gpio = <&gpio5 19 GPIO_ACTIVE_LOW>;
+ 		reg = <0x4b>;
+ 		interrupt-parent = <&gpio5>;
+ 		interrupts = <4 IRQ_TYPE_LEVEL_LOW>;
+diff --git a/arch/arm/boot/dts/imx6dl-colibri-eval-v3.dts b/arch/arm/boot/dts/imx6dl-colibri-eval-v3.dts
+index 65359aece950d..7da74e6f46d9b 100644
+--- a/arch/arm/boot/dts/imx6dl-colibri-eval-v3.dts
++++ b/arch/arm/boot/dts/imx6dl-colibri-eval-v3.dts
+@@ -143,7 +143,7 @@
+ 		reg = <0x4a>;
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <9 IRQ_TYPE_EDGE_FALLING>;		/* SODIMM 28 */
+-		reset-gpios = <&gpio2 10 GPIO_ACTIVE_HIGH>;	/* SODIMM 30 */
++		reset-gpios = <&gpio2 10 GPIO_ACTIVE_LOW>;	/* SODIMM 30 */
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/imx6q-apalis-eval.dts b/arch/arm/boot/dts/imx6q-apalis-eval.dts
+index fab83abb64660..a0683b4aeca15 100644
+--- a/arch/arm/boot/dts/imx6q-apalis-eval.dts
++++ b/arch/arm/boot/dts/imx6q-apalis-eval.dts
+@@ -140,7 +140,7 @@
+ 		reg = <0x4a>;
+ 		interrupt-parent = <&gpio6>;
+ 		interrupts = <10 IRQ_TYPE_EDGE_FALLING>;
+-		reset-gpios = <&gpio6 9 GPIO_ACTIVE_HIGH>; /* SODIMM 13 */
++		reset-gpios = <&gpio6 9 GPIO_ACTIVE_LOW>; /* SODIMM 13 */
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/imx6q-apalis-ixora-v1.1.dts b/arch/arm/boot/dts/imx6q-apalis-ixora-v1.1.dts
+index 1614b1ae501d9..86e84781cf5d8 100644
+--- a/arch/arm/boot/dts/imx6q-apalis-ixora-v1.1.dts
++++ b/arch/arm/boot/dts/imx6q-apalis-ixora-v1.1.dts
+@@ -145,7 +145,7 @@
+ 		reg = <0x4a>;
+ 		interrupt-parent = <&gpio6>;
+ 		interrupts = <10 IRQ_TYPE_EDGE_FALLING>;
+-		reset-gpios = <&gpio6 9 GPIO_ACTIVE_HIGH>; /* SODIMM 13 */
++		reset-gpios = <&gpio6 9 GPIO_ACTIVE_LOW>; /* SODIMM 13 */
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/imx6q-apalis-ixora.dts b/arch/arm/boot/dts/imx6q-apalis-ixora.dts
+index fa9f98dd15ac6..62e72773e53b3 100644
+--- a/arch/arm/boot/dts/imx6q-apalis-ixora.dts
++++ b/arch/arm/boot/dts/imx6q-apalis-ixora.dts
+@@ -144,7 +144,7 @@
+ 		reg = <0x4a>;
+ 		interrupt-parent = <&gpio6>;
+ 		interrupts = <10 IRQ_TYPE_EDGE_FALLING>;
+-		reset-gpios = <&gpio6 9 GPIO_ACTIVE_HIGH>; /* SODIMM 13 */
++		reset-gpios = <&gpio6 9 GPIO_ACTIVE_LOW>; /* SODIMM 13 */
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/imx6ul.dtsi b/arch/arm/boot/dts/imx6ul.dtsi
+index d7d9f3e46b92a..c40684ad11b8e 100644
+--- a/arch/arm/boot/dts/imx6ul.dtsi
++++ b/arch/arm/boot/dts/imx6ul.dtsi
+@@ -62,20 +62,18 @@
+ 			clock-frequency = <696000000>;
+ 			clock-latency = <61036>; /* two CLK32 periods */
+ 			#cooling-cells = <2>;
+-			operating-points = <
++			operating-points =
+ 				/* kHz	uV */
+-				696000	1275000
+-				528000	1175000
+-				396000	1025000
+-				198000	950000
+-			>;
+-			fsl,soc-operating-points = <
++				<696000	1275000>,
++				<528000	1175000>,
++				<396000	1025000>,
++				<198000	950000>;
++			fsl,soc-operating-points =
+ 				/* KHz	uV */
+-				696000	1275000
+-				528000	1175000
+-				396000	1175000
+-				198000	1175000
+-			>;
++				<696000	1275000>,
++				<528000	1175000>,
++				<396000	1175000>,
++				<198000	1175000>;
+ 			clocks = <&clks IMX6UL_CLK_ARM>,
+ 				 <&clks IMX6UL_CLK_PLL2_BUS>,
+ 				 <&clks IMX6UL_CLK_PLL2_PFD2>,
+@@ -147,6 +145,9 @@
+ 		ocram: sram@900000 {
+ 			compatible = "mmio-sram";
+ 			reg = <0x00900000 0x20000>;
++			ranges = <0 0x00900000 0x20000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 		};
+ 
+ 		intc: interrupt-controller@a01000 {
+@@ -540,7 +541,7 @@
+ 			};
+ 
+ 			kpp: keypad@20b8000 {
+-				compatible = "fsl,imx6ul-kpp", "fsl,imx6q-kpp", "fsl,imx21-kpp";
++				compatible = "fsl,imx6ul-kpp", "fsl,imx21-kpp";
+ 				reg = <0x020b8000 0x4000>;
+ 				interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX6UL_CLK_KPP>;
+@@ -994,7 +995,7 @@
+ 			};
+ 
+ 			csi: csi@21c4000 {
+-				compatible = "fsl,imx6ul-csi", "fsl,imx7-csi";
++				compatible = "fsl,imx6ul-csi";
+ 				reg = <0x021c4000 0x4000>;
+ 				interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX6UL_CLK_CSI>;
+@@ -1003,7 +1004,7 @@
+ 			};
+ 
+ 			lcdif: lcdif@21c8000 {
+-				compatible = "fsl,imx6ul-lcdif", "fsl,imx28-lcdif";
++				compatible = "fsl,imx6ul-lcdif", "fsl,imx6sx-lcdif";
+ 				reg = <0x021c8000 0x4000>;
+ 				interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX6UL_CLK_LCDIF_PIX>,
+@@ -1024,7 +1025,7 @@
+ 			qspi: spi@21e0000 {
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				compatible = "fsl,imx6ul-qspi", "fsl,imx6sx-qspi";
++				compatible = "fsl,imx6ul-qspi";
+ 				reg = <0x021e0000 0x4000>, <0x60000000 0x10000000>;
+ 				reg-names = "QuadSPI", "QuadSPI-memory";
+ 				interrupts = <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/imx7-colibri-aster.dtsi b/arch/arm/boot/dts/imx7-colibri-aster.dtsi
+index 9fa701bec2eca..139188eb9f409 100644
+--- a/arch/arm/boot/dts/imx7-colibri-aster.dtsi
++++ b/arch/arm/boot/dts/imx7-colibri-aster.dtsi
+@@ -99,7 +99,7 @@
+ 		reg = <0x4a>;
+ 		interrupt-parent = <&gpio2>;
+ 		interrupts = <15 IRQ_TYPE_EDGE_FALLING>;	/* SODIMM 107 */
+-		reset-gpios = <&gpio2 28 GPIO_ACTIVE_HIGH>;	/* SODIMM 106 */
++		reset-gpios = <&gpio2 28 GPIO_ACTIVE_LOW>;	/* SODIMM 106 */
+ 	};
+ 
+ 	/* M41T0M6 real time clock on carrier board */
+diff --git a/arch/arm/boot/dts/imx7-colibri-eval-v3.dtsi b/arch/arm/boot/dts/imx7-colibri-eval-v3.dtsi
+index 97601375f2640..3caf450735d7e 100644
+--- a/arch/arm/boot/dts/imx7-colibri-eval-v3.dtsi
++++ b/arch/arm/boot/dts/imx7-colibri-eval-v3.dtsi
+@@ -124,7 +124,7 @@
+ 		reg = <0x4a>;
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <9 IRQ_TYPE_EDGE_FALLING>;		/* SODIMM 28 */
+-		reset-gpios = <&gpio1 10 GPIO_ACTIVE_HIGH>;	/* SODIMM 30 */
++		reset-gpios = <&gpio1 10 GPIO_ACTIVE_LOW>;	/* SODIMM 30 */
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/imx7d-colibri-emmc.dtsi b/arch/arm/boot/dts/imx7d-colibri-emmc.dtsi
+index af39e5370fa12..045e4413d3390 100644
+--- a/arch/arm/boot/dts/imx7d-colibri-emmc.dtsi
++++ b/arch/arm/boot/dts/imx7d-colibri-emmc.dtsi
+@@ -13,6 +13,10 @@
+ 	};
+ };
+ 
++&cpu1 {
++	cpu-supply = <&reg_DCDC2>;
++};
++
+ &gpio6 {
+ 	gpio-line-names = "",
+ 			  "",
+diff --git a/arch/arm/boot/dts/motorola-mapphone-common.dtsi b/arch/arm/boot/dts/motorola-mapphone-common.dtsi
+index d5ded4f794df4..5f8f77cfbe59f 100644
+--- a/arch/arm/boot/dts/motorola-mapphone-common.dtsi
++++ b/arch/arm/boot/dts/motorola-mapphone-common.dtsi
+@@ -430,7 +430,7 @@
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&touchscreen_pins>;
+ 
+-		reset-gpios = <&gpio6 13 GPIO_ACTIVE_HIGH>; /* gpio173 */
++		reset-gpios = <&gpio6 13 GPIO_ACTIVE_LOW>; /* gpio173 */
+ 
+ 		/* gpio_183 with sys_nirq2 pad as wakeup */
+ 		interrupts-extended = <&gpio6 23 IRQ_TYPE_LEVEL_LOW>,
+diff --git a/arch/arm/boot/dts/qcom-mdm9615.dtsi b/arch/arm/boot/dts/qcom-mdm9615.dtsi
+index dda2ceec6591a..ad9b52d53ef9b 100644
+--- a/arch/arm/boot/dts/qcom-mdm9615.dtsi
++++ b/arch/arm/boot/dts/qcom-mdm9615.dtsi
+@@ -324,6 +324,7 @@
+ 
+ 				pmicgpio: gpio@150 {
+ 					compatible = "qcom,pm8018-gpio", "qcom,ssbi-gpio";
++					reg = <0x150>;
+ 					interrupt-controller;
+ 					#interrupt-cells = <2>;
+ 					gpio-controller;
+diff --git a/arch/arm/boot/dts/qcom-pm8841.dtsi b/arch/arm/boot/dts/qcom-pm8841.dtsi
+index 2fd59c440903d..c73e5b149ac5e 100644
+--- a/arch/arm/boot/dts/qcom-pm8841.dtsi
++++ b/arch/arm/boot/dts/qcom-pm8841.dtsi
+@@ -25,6 +25,7 @@
+ 			compatible = "qcom,spmi-temp-alarm";
+ 			reg = <0x2400>;
+ 			interrupts = <4 0x24 0 IRQ_TYPE_EDGE_RISING>;
++			#thermal-sensor-cells = <0>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi
+index 9005f0a23e8f2..984bc8dc5e4bd 100644
+--- a/arch/arm/boot/dts/s5pv210-aries.dtsi
++++ b/arch/arm/boot/dts/s5pv210-aries.dtsi
+@@ -631,7 +631,7 @@
+ 		interrupts = <5 IRQ_TYPE_EDGE_FALLING>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&ts_irq>;
+-		reset-gpios = <&gpj1 3 GPIO_ACTIVE_HIGH>;
++		reset-gpios = <&gpj1 3 GPIO_ACTIVE_LOW>;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+index 5dbfb83c1b06b..ce87e1ec10dcd 100644
+--- a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
++++ b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+@@ -446,7 +446,7 @@
+ 			interrupt-parent = <&gpio>;
+ 			interrupts = <TEGRA_GPIO(V, 6) IRQ_TYPE_LEVEL_LOW>;
+ 
+-			reset-gpios = <&gpio TEGRA_GPIO(Q, 7) GPIO_ACTIVE_HIGH>;
++			reset-gpios = <&gpio TEGRA_GPIO(Q, 7) GPIO_ACTIVE_LOW>;
+ 
+ 			vdda-supply = <&vdd_3v3_sys>;
+ 			vdd-supply  = <&vdd_3v3_sys>;
+diff --git a/arch/arm/boot/dts/uniphier-pxs2.dtsi b/arch/arm/boot/dts/uniphier-pxs2.dtsi
+index e81e5937a60ae..03301ddb3403a 100644
+--- a/arch/arm/boot/dts/uniphier-pxs2.dtsi
++++ b/arch/arm/boot/dts/uniphier-pxs2.dtsi
+@@ -597,8 +597,8 @@
+ 			compatible = "socionext,uniphier-dwc3", "snps,dwc3";
+ 			status = "disabled";
+ 			reg = <0x65a00000 0xcd00>;
+-			interrupt-names = "host", "peripheral";
+-			interrupts = <0 134 4>, <0 135 4>;
++			interrupt-names = "dwc_usb3";
++			interrupts = <0 134 4>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pinctrl_usb0>, <&pinctrl_usb2>;
+ 			clock-names = "ref", "bus_early", "suspend";
+@@ -693,8 +693,8 @@
+ 			compatible = "socionext,uniphier-dwc3", "snps,dwc3";
+ 			status = "disabled";
+ 			reg = <0x65c00000 0xcd00>;
+-			interrupt-names = "host", "peripheral";
+-			interrupts = <0 137 4>, <0 138 4>;
++			interrupt-names = "dwc_usb3";
++			interrupts = <0 137 4>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pinctrl_usb1>, <&pinctrl_usb3>;
+ 			clock-names = "ref", "bus_early", "suspend";
+diff --git a/arch/arm/lib/findbit.S b/arch/arm/lib/findbit.S
+index b5e8b9ae4c7d4..7fd3600db8efd 100644
+--- a/arch/arm/lib/findbit.S
++++ b/arch/arm/lib/findbit.S
+@@ -40,8 +40,8 @@ ENDPROC(_find_first_zero_bit_le)
+  * Prototype: int find_next_zero_bit(void *addr, unsigned int maxbit, int offset)
+  */
+ ENTRY(_find_next_zero_bit_le)
+-		teq	r1, #0
+-		beq	3b
++		cmp	r2, r1
++		bhs	3b
+ 		ands	ip, r2, #7
+ 		beq	1b			@ If new byte, goto old routine
+  ARM(		ldrb	r3, [r0, r2, lsr #3]	)
+@@ -81,8 +81,8 @@ ENDPROC(_find_first_bit_le)
+  * Prototype: int find_next_zero_bit(void *addr, unsigned int maxbit, int offset)
+  */
+ ENTRY(_find_next_bit_le)
+-		teq	r1, #0
+-		beq	3b
++		cmp	r2, r1
++		bhs	3b
+ 		ands	ip, r2, #7
+ 		beq	1b			@ If new byte, goto old routine
+  ARM(		ldrb	r3, [r0, r2, lsr #3]	)
+@@ -115,8 +115,8 @@ ENTRY(_find_first_zero_bit_be)
+ ENDPROC(_find_first_zero_bit_be)
+ 
+ ENTRY(_find_next_zero_bit_be)
+-		teq	r1, #0
+-		beq	3b
++		cmp	r2, r1
++		bhs	3b
+ 		ands	ip, r2, #7
+ 		beq	1b			@ If new byte, goto old routine
+ 		eor	r3, r2, #0x18		@ big endian byte ordering
+@@ -149,8 +149,8 @@ ENTRY(_find_first_bit_be)
+ ENDPROC(_find_first_bit_be)
+ 
+ ENTRY(_find_next_bit_be)
+-		teq	r1, #0
+-		beq	3b
++		cmp	r2, r1
++		bhs	3b
+ 		ands	ip, r2, #7
+ 		beq	1b			@ If new byte, goto old routine
+ 		eor	r3, r2, #0x18		@ big endian byte ordering
+diff --git a/arch/arm/mach-bcm/bcm_kona_smc.c b/arch/arm/mach-bcm/bcm_kona_smc.c
+index 43a16f922b53b..513efea655baf 100644
+--- a/arch/arm/mach-bcm/bcm_kona_smc.c
++++ b/arch/arm/mach-bcm/bcm_kona_smc.c
+@@ -54,6 +54,7 @@ int __init bcm_kona_smc_init(void)
+ 		return -ENODEV;
+ 
+ 	prop_val = of_get_address(node, 0, &prop_size, NULL);
++	of_node_put(node);
+ 	if (!prop_val)
+ 		return -EINVAL;
+ 
+diff --git a/arch/arm/mach-omap2/display.c b/arch/arm/mach-omap2/display.c
+index 6098666e928d0..f24d4e56ddfc2 100644
+--- a/arch/arm/mach-omap2/display.c
++++ b/arch/arm/mach-omap2/display.c
+@@ -211,6 +211,7 @@ static int __init omapdss_init_fbdev(void)
+ 	node = of_find_node_by_name(NULL, "omap4_padconf_global");
+ 	if (node)
+ 		omap4_dsi_mux_syscon = syscon_node_to_regmap(node);
++	of_node_put(node);
+ 
+ 	return 0;
+ }
+@@ -259,11 +260,13 @@ static int __init omapdss_init_of(void)
+ 
+ 	if (!pdev) {
+ 		pr_err("Unable to find DSS platform device\n");
++		of_node_put(node);
+ 		return -ENODEV;
+ 	}
+ 
+ 	r = of_platform_populate(node, NULL, NULL, &pdev->dev);
+ 	put_device(&pdev->dev);
++	of_node_put(node);
+ 	if (r) {
+ 		pr_err("Unable to populate DSS submodule devices\n");
+ 		return r;
+diff --git a/arch/arm/mach-omap2/prm3xxx.c b/arch/arm/mach-omap2/prm3xxx.c
+index 1b442b1285693..63e73e9b82bc6 100644
+--- a/arch/arm/mach-omap2/prm3xxx.c
++++ b/arch/arm/mach-omap2/prm3xxx.c
+@@ -708,6 +708,7 @@ static int omap3xxx_prm_late_init(void)
+ 	}
+ 
+ 	irq_num = of_irq_get(np, 0);
++	of_node_put(np);
+ 	if (irq_num == -EPROBE_DEFER)
+ 		return irq_num;
+ 
+diff --git a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+index 09ef73b99dd86..ba44cec5e59ac 100644
+--- a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
++++ b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+@@ -125,6 +125,7 @@ remove:
+ 
+ 	list_for_each_entry_safe(pos, tmp, &quirk_list, list) {
+ 		list_del(&pos->list);
++		of_node_put(pos->np);
+ 		kfree(pos);
+ 	}
+ 
+@@ -174,11 +175,12 @@ static int __init rcar_gen2_regulator_quirk(void)
+ 		memcpy(&quirk->i2c_msg, id->data, sizeof(quirk->i2c_msg));
+ 
+ 		quirk->id = id;
+-		quirk->np = np;
++		quirk->np = of_node_get(np);
+ 		quirk->i2c_msg.addr = addr;
+ 
+ 		ret = of_irq_parse_one(np, 0, argsa);
+ 		if (ret) {	/* Skip invalid entry and continue */
++			of_node_put(np);
+ 			kfree(quirk);
+ 			continue;
+ 		}
+@@ -225,6 +227,7 @@ err_free:
+ err_mem:
+ 	list_for_each_entry_safe(pos, tmp, &quirk_list, list) {
+ 		list_del(&pos->list);
++		of_node_put(pos->np);
+ 		kfree(pos);
+ 	}
+ 
+diff --git a/arch/arm/mach-zynq/common.c b/arch/arm/mach-zynq/common.c
+index e1ca6a5732d27..15e8a321a713b 100644
+--- a/arch/arm/mach-zynq/common.c
++++ b/arch/arm/mach-zynq/common.c
+@@ -77,6 +77,7 @@ static int __init zynq_get_revision(void)
+ 	}
+ 
+ 	zynq_devcfg_base = of_iomap(np, 0);
++	of_node_put(np);
+ 	if (!zynq_devcfg_base) {
+ 		pr_err("%s: Unable to map I/O memory\n", __func__);
+ 		return -1;
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-orangepi-win.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-orangepi-win.dts
+index 70e31743f0bac..3c08497568eac 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-orangepi-win.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-orangepi-win.dts
+@@ -40,7 +40,7 @@
+ 	leds {
+ 		compatible = "gpio-leds";
+ 
+-		status {
++		led-0 {
+ 			label = "orangepi:green:status";
+ 			gpios = <&pio 7 11 GPIO_ACTIVE_HIGH>; /* PH11 */
+ 		};
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+index 9a11e5c60c269..3053f484c8cc1 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+@@ -49,7 +49,7 @@
+ 		wps {
+ 			label = "wps";
+ 			linux,code = <KEY_WPS_BUTTON>;
+-			gpios = <&pio 102 GPIO_ACTIVE_HIGH>;
++			gpios = <&pio 102 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi b/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
+index d71b7a1140fe2..216dc30fa26ce 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
+@@ -75,7 +75,7 @@
+ 
+ 		/* SDMMC1 (SD/MMC) */
+ 		mmc@3400000 {
+-			cd-gpios = <&gpio TEGRA194_MAIN_GPIO(A, 0) GPIO_ACTIVE_LOW>;
++			cd-gpios = <&gpio TEGRA194_MAIN_GPIO(G, 7) GPIO_ACTIVE_LOW>;
+ 		};
+ 
+ 		/* SDMMC4 (eMMC) */
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index dca040f66f5f3..99e2488b92dc3 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -383,7 +383,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		qpic_nand: nand@79b0000 {
++		qpic_nand: nand-controller@79b0000 {
+ 			compatible = "qcom,ipq8074-nand";
+ 			reg = <0x079b0000 0x10000>;
+ 			#address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/qcs404.dtsi b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+index b654b802e95c6..7bddc5ebc6aa2 100644
+--- a/arch/arm64/boot/dts/qcom/qcs404.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+@@ -548,7 +548,7 @@
+ 				compatible = "snps,dwc3";
+ 				reg = <0x07580000 0xcd00>;
+ 				interrupts = <GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>;
+-				phys = <&usb2_phy_sec>, <&usb3_phy>;
++				phys = <&usb2_phy_prim>, <&usb3_phy>;
+ 				phy-names = "usb2-phy", "usb3-phy";
+ 				snps,has-lpm-erratum;
+ 				snps,hird-threshold = /bits/ 8 <0x10>;
+@@ -577,7 +577,7 @@
+ 				compatible = "snps,dwc3";
+ 				reg = <0x078c0000 0xcc00>;
+ 				interrupts = <GIC_SPI 44 IRQ_TYPE_LEVEL_HIGH>;
+-				phys = <&usb2_phy_prim>;
++				phys = <&usb2_phy_sec>;
+ 				phy-names = "usb2-phy";
+ 				snps,has-lpm-erratum;
+ 				snps,hird-threshold = /bits/ 8 <0x10>;
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+index bc4bb5dd8bae9..53e1d43cbecf8 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+@@ -145,7 +145,7 @@
+ 		};
+ 	};
+ 
+-	reg_audio: regulator_audio {
++	reg_audio: regulator-audio {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "audio-1.8V";
+ 		regulator-min-microvolt = <1800000>;
+@@ -173,7 +173,7 @@
+ 		vin-supply = <&reg_lcd>;
+ 	};
+ 
+-	reg_cam0: regulator_camera {
++	reg_cam0: regulator-cam0 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "reg_cam0";
+ 		regulator-min-microvolt = <1800000>;
+@@ -182,7 +182,7 @@
+ 		enable-active-high;
+ 	};
+ 
+-	reg_cam1: regulator_camera {
++	reg_cam1: regulator-cam1 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "reg_cam1";
+ 		regulator-min-microvolt = <1800000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+index e0e54342cd4c7..4c7d7e8f8e289 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+@@ -1929,7 +1929,7 @@
+ 		cpu-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <0>;
+-			thermal-sensors = <&thermal 0>;
++			thermal-sensors = <&thermal>;
+ 			sustainable-power = <717>;
+ 
+ 			cooling-maps {
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990.dtsi b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+index 33d7e657bd9cf..37159b9408e8a 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+@@ -2029,7 +2029,7 @@
+ 		cpu-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <0>;
+-			thermal-sensors = <&thermal 0>;
++			thermal-sensors = <&thermal>;
+ 			sustainable-power = <717>;
+ 
+ 			cooling-maps {
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi b/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
+index be97da1322580..ba75adedbf79b 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
++++ b/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
+@@ -599,8 +599,8 @@
+ 			compatible = "socionext,uniphier-dwc3", "snps,dwc3";
+ 			status = "disabled";
+ 			reg = <0x65a00000 0xcd00>;
+-			interrupt-names = "host", "peripheral";
+-			interrupts = <0 134 4>, <0 135 4>;
++			interrupt-names = "dwc_usb3";
++			interrupts = <0 134 4>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pinctrl_usb0>, <&pinctrl_usb2>;
+ 			clock-names = "ref", "bus_early", "suspend";
+@@ -701,8 +701,8 @@
+ 			compatible = "socionext,uniphier-dwc3", "snps,dwc3";
+ 			status = "disabled";
+ 			reg = <0x65c00000 0xcd00>;
+-			interrupt-names = "host", "peripheral";
+-			interrupts = <0 137 4>, <0 138 4>;
++			interrupt-names = "dwc_usb3";
++			interrupts = <0 137 4>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pinctrl_usb1>, <&pinctrl_usb3>;
+ 			clock-names = "ref", "bus_early", "suspend";
+diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
+index b8eb0453123d1..6bd4e749a946e 100644
+--- a/arch/arm64/crypto/Kconfig
++++ b/arch/arm64/crypto/Kconfig
+@@ -59,6 +59,7 @@ config CRYPTO_GHASH_ARM64_CE
+ 	select CRYPTO_HASH
+ 	select CRYPTO_GF128MUL
+ 	select CRYPTO_LIB_AES
++	select CRYPTO_AEAD
+ 
+ config CRYPTO_CRCT10DIF_ARM64_CE
+ 	tristate "CRCT10DIF digest algorithm using PMULL instructions"
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index fce8cbecd6bc7..7c546c3487c9f 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -194,8 +194,9 @@ void tls_preserve_current_state(void);
+ 
+ static inline void start_thread_common(struct pt_regs *regs, unsigned long pc)
+ {
++	s32 previous_syscall = regs->syscallno;
+ 	memset(regs, 0, sizeof(*regs));
+-	forget_syscall(regs);
++	regs->syscallno = previous_syscall;
+ 	regs->pc = pc;
+ 
+ 	if (system_uses_irq_prio_masking())
+diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
+index 7364de008bab3..91b8a8378ba39 100644
+--- a/arch/arm64/kernel/armv8_deprecated.c
++++ b/arch/arm64/kernel/armv8_deprecated.c
+@@ -59,6 +59,7 @@ struct insn_emulation {
+ static LIST_HEAD(insn_emulation);
+ static int nr_insn_emulated __initdata;
+ static DEFINE_RAW_SPINLOCK(insn_emulation_lock);
++static DEFINE_MUTEX(insn_emulation_mutex);
+ 
+ static void register_emulation_hooks(struct insn_emulation_ops *ops)
+ {
+@@ -207,10 +208,10 @@ static int emulation_proc_handler(struct ctl_table *table, int write,
+ 				  loff_t *ppos)
+ {
+ 	int ret = 0;
+-	struct insn_emulation *insn = (struct insn_emulation *) table->data;
++	struct insn_emulation *insn = container_of(table->data, struct insn_emulation, current_mode);
+ 	enum insn_emulation_mode prev_mode = insn->current_mode;
+ 
+-	table->data = &insn->current_mode;
++	mutex_lock(&insn_emulation_mutex);
+ 	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ 
+ 	if (ret || !write || prev_mode == insn->current_mode)
+@@ -223,7 +224,7 @@ static int emulation_proc_handler(struct ctl_table *table, int write,
+ 		update_insn_emulation_mode(insn, INSN_UNDEF);
+ 	}
+ ret:
+-	table->data = insn;
++	mutex_unlock(&insn_emulation_mutex);
+ 	return ret;
+ }
+ 
+@@ -247,7 +248,7 @@ static void __init register_insn_emulation_sysctl(void)
+ 		sysctl->maxlen = sizeof(int);
+ 
+ 		sysctl->procname = insn->ops->name;
+-		sysctl->data = insn;
++		sysctl->data = &insn->current_mode;
+ 		sysctl->extra1 = &insn->min;
+ 		sysctl->extra2 = &insn->max;
+ 		sysctl->proc_handler = emulation_proc_handler;
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index c9108ed406458..4087e2d1f39e2 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -508,7 +508,7 @@ static const struct arm64_ftr_bits ftr_id_pfr2[] = {
+ 
+ static const struct arm64_ftr_bits ftr_id_dfr0[] = {
+ 	/* [31:28] TraceFilt */
+-	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_PERFMON_SHIFT, 4, 0xf),
++	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_DFR0_PERFMON_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MPROFDBG_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MMAPTRC_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPTRC_SHIFT, 4, 0),
+diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
+index 6624596846d3d..2401164c5f861 100644
+--- a/arch/arm64/kvm/hyp/nvhe/switch.c
++++ b/arch/arm64/kvm/hyp/nvhe/switch.c
+@@ -279,5 +279,5 @@ void __noreturn hyp_panic(void)
+ 
+ asmlinkage void kvm_unexpected_el2_exception(void)
+ {
+-	return __kvm_unexpected_el2_exception();
++	__kvm_unexpected_el2_exception();
+ }
+diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
+index 532e687f69366..99e2581e9806f 100644
+--- a/arch/arm64/kvm/hyp/vhe/switch.c
++++ b/arch/arm64/kvm/hyp/vhe/switch.c
+@@ -228,5 +228,5 @@ void __noreturn hyp_panic(void)
+ 
+ asmlinkage void kvm_unexpected_el2_exception(void)
+ {
+-	return __kvm_unexpected_el2_exception();
++	__kvm_unexpected_el2_exception();
+ }
+diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
+index f2afabbadd430..cc2c1ae48e62d 100644
+--- a/arch/hexagon/Kconfig
++++ b/arch/hexagon/Kconfig
+@@ -32,6 +32,7 @@ config HEXAGON
+ 	select MODULES_USE_ELF_RELA
+ 	select GENERIC_CPU_DEVICES
+ 	select SET_FS
++	select ARCH_WANT_LD_ORPHAN_WARN
+ 	help
+ 	  Qualcomm Hexagon is a processor architecture designed for high
+ 	  performance and low power across a wide variety of applications.
+diff --git a/arch/ia64/include/asm/processor.h b/arch/ia64/include/asm/processor.h
+index 2d8bcdc27d7f8..05e7c9ad1a965 100644
+--- a/arch/ia64/include/asm/processor.h
++++ b/arch/ia64/include/asm/processor.h
+@@ -542,7 +542,7 @@ ia64_get_irr(unsigned int vector)
+ {
+ 	unsigned int reg = vector / 64;
+ 	unsigned int bit = vector % 64;
+-	u64 irr;
++	unsigned long irr;
+ 
+ 	switch (reg) {
+ 	case 0: irr = ia64_getreg(_IA64_REG_CR_IRR0); break;
+diff --git a/arch/mips/kernel/proc.c b/arch/mips/kernel/proc.c
+index 4184d641f05e0..33a02f3814f58 100644
+--- a/arch/mips/kernel/proc.c
++++ b/arch/mips/kernel/proc.c
+@@ -172,7 +172,7 @@ static void *c_start(struct seq_file *m, loff_t *pos)
+ {
+ 	unsigned long i = *pos;
+ 
+-	return i < NR_CPUS ? (void *) (i + 1) : NULL;
++	return i < nr_cpu_ids ? (void *) (i + 1) : NULL;
+ }
+ 
+ static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+diff --git a/arch/parisc/kernel/drivers.c b/arch/parisc/kernel/drivers.c
+index 80fa0650736ba..f5a25ed0930d2 100644
+--- a/arch/parisc/kernel/drivers.c
++++ b/arch/parisc/kernel/drivers.c
+@@ -521,7 +521,6 @@ alloc_pa_dev(unsigned long hpa, struct hardware_path *mod_path)
+ 	dev->id.hversion_rev = iodc_data[1] & 0x0f;
+ 	dev->id.sversion = ((iodc_data[4] & 0x0f) << 16) |
+ 			(iodc_data[5] << 8) | iodc_data[6];
+-	dev->hpa.name = parisc_pathname(dev);
+ 	dev->hpa.start = hpa;
+ 	/* This is awkward.  The STI spec says that gfx devices may occupy
+ 	 * 32MB or 64MB.  Unfortunately, we don't know how to tell whether
+@@ -535,10 +534,10 @@ alloc_pa_dev(unsigned long hpa, struct hardware_path *mod_path)
+ 		dev->hpa.end = hpa + 0xfff;
+ 	}
+ 	dev->hpa.flags = IORESOURCE_MEM;
+-	name = parisc_hardware_description(&dev->id);
+-	if (name) {
+-		strlcpy(dev->name, name, sizeof(dev->name));
+-	}
++	dev->hpa.name = dev->name;
++	name = parisc_hardware_description(&dev->id) ? : "unknown";
++	snprintf(dev->name, sizeof(dev->name), "%s [%s]",
++		name, parisc_pathname(dev));
+ 
+ 	/* Silently fail things like mouse ports which are subsumed within
+ 	 * the keyboard controller
+diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
+index f375ea528e59c..d526ebfa58e50 100644
+--- a/arch/parisc/kernel/syscalls/syscall.tbl
++++ b/arch/parisc/kernel/syscalls/syscall.tbl
+@@ -413,7 +413,7 @@
+ 412	32	utimensat_time64		sys_utimensat			sys_utimensat
+ 413	32	pselect6_time64			sys_pselect6			compat_sys_pselect6_time64
+ 414	32	ppoll_time64			sys_ppoll			compat_sys_ppoll_time64
+-416	32	io_pgetevents_time64		sys_io_pgetevents		sys_io_pgetevents
++416	32	io_pgetevents_time64		sys_io_pgetevents		compat_sys_io_pgetevents_time64
+ 417	32	recvmmsg_time64			sys_recvmmsg			compat_sys_recvmmsg_time64
+ 418	32	mq_timedsend_time64		sys_mq_timedsend		sys_mq_timedsend
+ 419	32	mq_timedreceive_time64		sys_mq_timedreceive		sys_mq_timedreceive
+diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
+index 376104c166fcf..db2bdc4cec64e 100644
+--- a/arch/powerpc/kernel/Makefile
++++ b/arch/powerpc/kernel/Makefile
+@@ -20,6 +20,7 @@ CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+ CFLAGS_prom_init.o += -fno-stack-protector
+ CFLAGS_prom_init.o += -DDISABLE_BRANCH_PROFILING
+ CFLAGS_prom_init.o += -ffreestanding
++CFLAGS_prom_init.o += $(call cc-option, -ftrivial-auto-var-init=uninitialized)
+ 
+ ifdef CONFIG_FUNCTION_TRACER
+ # Do not trace early boot code
+diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
+index 7920559a1ca81..f9d35c9ea4aed 100644
+--- a/arch/powerpc/kernel/pci-common.c
++++ b/arch/powerpc/kernel/pci-common.c
+@@ -73,16 +73,32 @@ void set_pci_dma_ops(const struct dma_map_ops *dma_ops)
+ static int get_phb_number(struct device_node *dn)
+ {
+ 	int ret, phb_id = -1;
+-	u32 prop_32;
+ 	u64 prop;
+ 
+ 	/*
+ 	 * Try fixed PHB numbering first, by checking archs and reading
+-	 * the respective device-tree properties. Firstly, try powernv by
+-	 * reading "ibm,opal-phbid", only present in OPAL environment.
++	 * the respective device-tree properties. Firstly, try reading
++	 * standard "linux,pci-domain", then try reading "ibm,opal-phbid"
++	 * (only present in powernv OPAL environment), then try device-tree
++	 * alias and as the last try to use lower bits of "reg" property.
+ 	 */
+-	ret = of_property_read_u64(dn, "ibm,opal-phbid", &prop);
++	ret = of_get_pci_domain_nr(dn);
++	if (ret >= 0) {
++		prop = ret;
++		ret = 0;
++	}
++	if (ret)
++		ret = of_property_read_u64(dn, "ibm,opal-phbid", &prop);
++
+ 	if (ret) {
++		ret = of_alias_get_id(dn, "pci");
++		if (ret >= 0) {
++			prop = ret;
++			ret = 0;
++		}
++	}
++	if (ret) {
++		u32 prop_32;
+ 		ret = of_property_read_u32_index(dn, "reg", 1, &prop_32);
+ 		prop = prop_32;
+ 	}
+@@ -94,10 +110,7 @@ static int get_phb_number(struct device_node *dn)
+ 	if ((phb_id >= 0) && !test_and_set_bit(phb_id, phb_bitmap))
+ 		return phb_id;
+ 
+-	/*
+-	 * If not pseries nor powernv, or if fixed PHB numbering tried to add
+-	 * the same PHB number twice, then fallback to dynamic PHB numbering.
+-	 */
++	/* If everything fails then fallback to dynamic PHB numbering. */
+ 	phb_id = find_first_zero_bit(phb_bitmap, MAX_PHBS);
+ 	BUG_ON(phb_id >= MAX_PHBS);
+ 	set_bit(phb_id, phb_bitmap);
+diff --git a/arch/powerpc/mm/ptdump/shared.c b/arch/powerpc/mm/ptdump/shared.c
+index c005fe041c187..ae97b82966a47 100644
+--- a/arch/powerpc/mm/ptdump/shared.c
++++ b/arch/powerpc/mm/ptdump/shared.c
+@@ -17,9 +17,9 @@ static const struct flag_info flag_array[] = {
+ 		.clear	= "    ",
+ 	}, {
+ 		.mask	= _PAGE_RW,
+-		.val	= _PAGE_RW,
+-		.set	= "rw",
+-		.clear	= "r ",
++		.val	= 0,
++		.set	= "r ",
++		.clear	= "rw",
+ 	}, {
+ 		.mask	= _PAGE_EXEC,
+ 		.val	= _PAGE_EXEC,
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index e49aa8fc6a491..6e3e50614353b 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -1267,27 +1267,22 @@ static void power_pmu_disable(struct pmu *pmu)
+ 		 * a PMI happens during interrupt replay and perf counter
+ 		 * values are cleared by PMU callbacks before replay.
+ 		 *
+-		 * If any PMC corresponding to the active PMU events are
+-		 * overflown, disable the interrupt by clearing the paca
+-		 * bit for PMI since we are disabling the PMU now.
+-		 * Otherwise provide a warning if there is PMI pending, but
+-		 * no counter is found overflown.
++		 * Disable the interrupt by clearing the paca bit for PMI
++		 * since we are disabling the PMU now. Otherwise provide a
++		 * warning if there is PMI pending, but no counter is found
++		 * overflown.
++		 *
++		 * Since power_pmu_disable runs under local_irq_save, it
++		 * could happen that code hits a PMC overflow without PMI
++		 * pending in paca. Hence only clear PMI pending if it was
++		 * set.
++		 *
++		 * If a PMI is pending, then MSR[EE] must be disabled (because
++		 * the masked PMI handler disabling EE). So it is safe to
++		 * call clear_pmi_irq_pending().
+ 		 */
+-		if (any_pmc_overflown(cpuhw)) {
+-			/*
+-			 * Since power_pmu_disable runs under local_irq_save, it
+-			 * could happen that code hits a PMC overflow without PMI
+-			 * pending in paca. Hence only clear PMI pending if it was
+-			 * set.
+-			 *
+-			 * If a PMI is pending, then MSR[EE] must be disabled (because
+-			 * the masked PMI handler disabling EE). So it is safe to
+-			 * call clear_pmi_irq_pending().
+-			 */
+-			if (pmi_irq_pending())
+-				clear_pmi_irq_pending();
+-		} else
+-			WARN_ON(pmi_irq_pending());
++		if (pmi_irq_pending())
++			clear_pmi_irq_pending();
+ 
+ 		val = mmcra = cpuhw->mmcr.mmcra;
+ 
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index 32a9c4c09b989..75ebfbff4debc 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -152,11 +152,11 @@ config POWER9_CPU
+ 
+ config E5500_CPU
+ 	bool "Freescale e5500"
+-	depends on E500
++	depends on PPC64 && E500
+ 
+ config E6500_CPU
+ 	bool "Freescale e6500"
+-	depends on E500
++	depends on PPC64 && E500
+ 
+ config 860_CPU
+ 	bool "8xx family"
+diff --git a/arch/powerpc/platforms/cell/axon_msi.c b/arch/powerpc/platforms/cell/axon_msi.c
+index ca2555b8a0c2a..ffbc7d2e94640 100644
+--- a/arch/powerpc/platforms/cell/axon_msi.c
++++ b/arch/powerpc/platforms/cell/axon_msi.c
+@@ -226,6 +226,7 @@ static int setup_msi_msg_address(struct pci_dev *dev, struct msi_msg *msg)
+ 	if (!prop) {
+ 		dev_dbg(&dev->dev,
+ 			"axon_msi: no msi-address-(32|64) properties found\n");
++		of_node_put(dn);
+ 		return -ENOENT;
+ 	}
+ 
+diff --git a/arch/powerpc/platforms/cell/spufs/inode.c b/arch/powerpc/platforms/cell/spufs/inode.c
+index 25390569e24cd..908e9b8e79fe6 100644
+--- a/arch/powerpc/platforms/cell/spufs/inode.c
++++ b/arch/powerpc/platforms/cell/spufs/inode.c
+@@ -664,6 +664,7 @@ spufs_init_isolated_loader(void)
+ 		return;
+ 
+ 	loader = of_get_property(dn, "loader", &size);
++	of_node_put(dn);
+ 	if (!loader)
+ 		return;
+ 
+diff --git a/arch/powerpc/platforms/powernv/rng.c b/arch/powerpc/platforms/powernv/rng.c
+index 236bd2ba51b98..a99033c3dce71 100644
+--- a/arch/powerpc/platforms/powernv/rng.c
++++ b/arch/powerpc/platforms/powernv/rng.c
+@@ -63,6 +63,8 @@ int powernv_get_random_real_mode(unsigned long *v)
+ 	struct powernv_rng *rng;
+ 
+ 	rng = raw_cpu_read(powernv_rng);
++	if (!rng)
++		return 0;
+ 
+ 	*v = rng_whiten(rng, __raw_rm_readq(rng->regs_real));
+ 
+diff --git a/arch/powerpc/sysdev/fsl_pci.c b/arch/powerpc/sysdev/fsl_pci.c
+index 040b9d01c0798..4dd152450e784 100644
+--- a/arch/powerpc/sysdev/fsl_pci.c
++++ b/arch/powerpc/sysdev/fsl_pci.c
+@@ -520,6 +520,7 @@ int fsl_add_bridge(struct platform_device *pdev, int is_primary)
+ 	struct resource rsrc;
+ 	const int *bus_range;
+ 	u8 hdr_type, progif;
++	u32 class_code;
+ 	struct device_node *dev;
+ 	struct ccsr_pci __iomem *pci;
+ 	u16 temp;
+@@ -593,6 +594,13 @@ int fsl_add_bridge(struct platform_device *pdev, int is_primary)
+ 			PPC_INDIRECT_TYPE_SURPRESS_PRIMARY_BUS;
+ 		if (fsl_pcie_check_link(hose))
+ 			hose->indirect_type |= PPC_INDIRECT_TYPE_NO_PCIE_LINK;
++		/* Fix Class Code to PCI_CLASS_BRIDGE_PCI_NORMAL for pre-3.0 controller */
++		if (in_be32(&pci->block_rev1) < PCIE_IP_REV_3_0) {
++			early_read_config_dword(hose, 0, 0, PCIE_FSL_CSR_CLASSCODE, &class_code);
++			class_code &= 0xff;
++			class_code |= PCI_CLASS_BRIDGE_PCI_NORMAL << 8;
++			early_write_config_dword(hose, 0, 0, PCIE_FSL_CSR_CLASSCODE, class_code);
++		}
+ 	} else {
+ 		/*
+ 		 * Set PBFR(PCI Bus Function Register)[10] = 1 to
+diff --git a/arch/powerpc/sysdev/fsl_pci.h b/arch/powerpc/sysdev/fsl_pci.h
+index 1d7a412056959..5ffaa60f1fa09 100644
+--- a/arch/powerpc/sysdev/fsl_pci.h
++++ b/arch/powerpc/sysdev/fsl_pci.h
+@@ -18,6 +18,7 @@ struct platform_device;
+ 
+ #define PCIE_LTSSM	0x0404		/* PCIE Link Training and Status */
+ #define PCIE_LTSSM_L0	0x16		/* L0 state */
++#define PCIE_FSL_CSR_CLASSCODE	0x474	/* FSL GPEX CSR */
+ #define PCIE_IP_REV_2_2		0x02080202 /* PCIE IP block version Rev2.2 */
+ #define PCIE_IP_REV_3_0		0x02080300 /* PCIE IP block version Rev3.0 */
+ #define PIWAR_EN		0x80000000	/* Enable */
+diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
+index b57eeaff7bb33..38e8b98961744 100644
+--- a/arch/powerpc/sysdev/xive/spapr.c
++++ b/arch/powerpc/sysdev/xive/spapr.c
+@@ -710,6 +710,7 @@ static bool xive_get_max_prio(u8 *max_prio)
+ 	}
+ 
+ 	reg = of_get_property(rootdn, "ibm,plat-res-int-priorities", &len);
++	of_node_put(rootdn);
+ 	if (!reg) {
+ 		pr_err("Failed to read 'ibm,plat-res-int-priorities' property\n");
+ 		return false;
+diff --git a/arch/riscv/kernel/reset.c b/arch/riscv/kernel/reset.c
+index ee5878d968cc1..9c842c41684ac 100644
+--- a/arch/riscv/kernel/reset.c
++++ b/arch/riscv/kernel/reset.c
+@@ -12,7 +12,7 @@ static void default_power_off(void)
+ 		wait_for_interrupt();
+ }
+ 
+-void (*pm_power_off)(void) = default_power_off;
++void (*pm_power_off)(void) = NULL;
+ EXPORT_SYMBOL(pm_power_off);
+ 
+ void machine_restart(char *cmd)
+@@ -23,10 +23,16 @@ void machine_restart(char *cmd)
+ 
+ void machine_halt(void)
+ {
+-	pm_power_off();
++	if (pm_power_off != NULL)
++		pm_power_off();
++	else
++		default_power_off();
+ }
+ 
+ void machine_power_off(void)
+ {
+-	pm_power_off();
++	if (pm_power_off != NULL)
++		pm_power_off();
++	else
++		default_power_off();
+ }
+diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
+index 40264f60b0da9..f4073106e1f39 100644
+--- a/arch/s390/include/asm/gmap.h
++++ b/arch/s390/include/asm/gmap.h
+@@ -148,4 +148,6 @@ void gmap_sync_dirty_log_pmd(struct gmap *gmap, unsigned long dirty_bitmap[4],
+ 			     unsigned long gaddr, unsigned long vmaddr);
+ int gmap_mark_unmergeable(void);
+ void s390_reset_acc(struct mm_struct *mm);
++void s390_unlist_old_asce(struct gmap *gmap);
++int s390_replace_asce(struct gmap *gmap);
+ #endif /* _ASM_S390_GMAP_H */
+diff --git a/arch/s390/kernel/asm-offsets.c b/arch/s390/kernel/asm-offsets.c
+index 483051e10db38..e070073930a9a 100644
+--- a/arch/s390/kernel/asm-offsets.c
++++ b/arch/s390/kernel/asm-offsets.c
+@@ -150,6 +150,8 @@ int main(void)
+ 	OFFSET(__LC_BR_R1, lowcore, br_r1_trampoline);
+ 	/* software defined ABI-relevant lowcore locations 0xe00 - 0xe20 */
+ 	OFFSET(__LC_DUMP_REIPL, lowcore, ipib);
++	OFFSET(__LC_VMCORE_INFO, lowcore, vmcore_info);
++	OFFSET(__LC_OS_INFO, lowcore, os_info);
+ 	/* hardware defined lowcore locations 0x1000 - 0x18ff */
+ 	OFFSET(__LC_MCESAD, lowcore, mcesad);
+ 	OFFSET(__LC_EXT_PARAMS2, lowcore, ext_params2);
+diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
+index 205b2e2648aae..76762dc67ca90 100644
+--- a/arch/s390/kernel/crash_dump.c
++++ b/arch/s390/kernel/crash_dump.c
+@@ -432,7 +432,7 @@ static void *get_vmcoreinfo_old(unsigned long *size)
+ 	Elf64_Nhdr note;
+ 	void *addr;
+ 
+-	if (copy_oldmem_kernel(&addr, &S390_lowcore.vmcore_info, sizeof(addr)))
++	if (copy_oldmem_kernel(&addr, (void *)__LC_VMCORE_INFO, sizeof(addr)))
+ 		return NULL;
+ 	memset(nt_name, 0, sizeof(nt_name));
+ 	if (copy_oldmem_kernel(&note, addr, sizeof(note)))
+diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c
+index 76cd09879eaf4..53da174754d97 100644
+--- a/arch/s390/kernel/machine_kexec_file.c
++++ b/arch/s390/kernel/machine_kexec_file.c
+@@ -29,6 +29,7 @@ int s390_verify_sig(const char *kernel, unsigned long kernel_len)
+ 	const unsigned long marker_len = sizeof(MODULE_SIG_STRING) - 1;
+ 	struct module_signature *ms;
+ 	unsigned long sig_len;
++	int ret;
+ 
+ 	/* Skip signature verification when not secure IPLed. */
+ 	if (!ipl_secure_flag)
+@@ -63,11 +64,18 @@ int s390_verify_sig(const char *kernel, unsigned long kernel_len)
+ 		return -EBADMSG;
+ 	}
+ 
+-	return verify_pkcs7_signature(kernel, kernel_len,
+-				      kernel + kernel_len, sig_len,
+-				      VERIFY_USE_PLATFORM_KEYRING,
+-				      VERIFYING_MODULE_SIGNATURE,
+-				      NULL, NULL);
++	ret = verify_pkcs7_signature(kernel, kernel_len,
++				     kernel + kernel_len, sig_len,
++				     VERIFY_USE_SECONDARY_KEYRING,
++				     VERIFYING_MODULE_SIGNATURE,
++				     NULL, NULL);
++	if (ret == -ENOKEY && IS_ENABLED(CONFIG_INTEGRITY_PLATFORM_KEYRING))
++		ret = verify_pkcs7_signature(kernel, kernel_len,
++					     kernel + kernel_len, sig_len,
++					     VERIFY_USE_PLATFORM_KEYRING,
++					     VERIFYING_MODULE_SIGNATURE,
++					     NULL, NULL);
++	return ret;
+ }
+ #endif /* CONFIG_KEXEC_SIG */
+ 
+diff --git a/arch/s390/kernel/os_info.c b/arch/s390/kernel/os_info.c
+index 0a5e4bafb6ad1..1b8e2aff20e34 100644
+--- a/arch/s390/kernel/os_info.c
++++ b/arch/s390/kernel/os_info.c
+@@ -15,6 +15,7 @@
+ #include <asm/checksum.h>
+ #include <asm/lowcore.h>
+ #include <asm/os_info.h>
++#include <asm/asm-offsets.h>
+ 
+ /*
+  * OS info structure has to be page aligned
+@@ -123,7 +124,7 @@ static void os_info_old_init(void)
+ 		return;
+ 	if (!OLDMEM_BASE)
+ 		goto fail;
+-	if (copy_oldmem_kernel(&addr, &S390_lowcore.os_info, sizeof(addr)))
++	if (copy_oldmem_kernel(&addr, (void *)__LC_OS_INFO, sizeof(addr)))
+ 		goto fail;
+ 	if (addr == 0 || addr % PAGE_SIZE)
+ 		goto fail;
+diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
+index e7a7c499a73f4..77909d362b78f 100644
+--- a/arch/s390/kvm/intercept.c
++++ b/arch/s390/kvm/intercept.c
+@@ -521,12 +521,27 @@ static int handle_pv_uvc(struct kvm_vcpu *vcpu)
+ 
+ static int handle_pv_notification(struct kvm_vcpu *vcpu)
+ {
++	int ret;
++
+ 	if (vcpu->arch.sie_block->ipa == 0xb210)
+ 		return handle_pv_spx(vcpu);
+ 	if (vcpu->arch.sie_block->ipa == 0xb220)
+ 		return handle_pv_sclp(vcpu);
+ 	if (vcpu->arch.sie_block->ipa == 0xb9a4)
+ 		return handle_pv_uvc(vcpu);
++	if (vcpu->arch.sie_block->ipa >> 8 == 0xae) {
++		/*
++		 * Besides external call, other SIGP orders also cause a
++		 * 108 (pv notify) intercept. In contrast to external call,
++		 * these orders need to be emulated and hence the appropriate
++		 * place to handle them is in handle_instruction().
++		 * So first try kvm_s390_handle_sigp_pei() and if that isn't
++		 * successful, go on with handle_instruction().
++		 */
++		ret = kvm_s390_handle_sigp_pei(vcpu);
++		if (!ret)
++			return ret;
++	}
+ 
+ 	return handle_instruction(vcpu);
+ }
+diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
+index 8228878872228..c0e00e94ee22d 100644
+--- a/arch/s390/kvm/pv.c
++++ b/arch/s390/kvm/pv.c
+@@ -163,10 +163,13 @@ int kvm_s390_pv_deinit_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
+ 	atomic_set(&kvm->mm->context.is_protected, 0);
+ 	KVM_UV_EVENT(kvm, 3, "PROTVIRT DESTROY VM: rc %x rrc %x", *rc, *rrc);
+ 	WARN_ONCE(cc, "protvirt destroy vm failed rc %x rrc %x", *rc, *rrc);
+-	/* Inteded memory leak on "impossible" error */
+-	if (!cc)
++	/* Intended memory leak on "impossible" error */
++	if (!cc) {
+ 		kvm_s390_pv_dealloc_vm(kvm);
+-	return cc ? -EIO : 0;
++		return 0;
++	}
++	s390_replace_asce(kvm->arch.gmap);
++	return -EIO;
+ }
+ 
+ int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
+diff --git a/arch/s390/kvm/sigp.c b/arch/s390/kvm/sigp.c
+index 3dc921e853b6e..52800279686c0 100644
+--- a/arch/s390/kvm/sigp.c
++++ b/arch/s390/kvm/sigp.c
+@@ -492,9 +492,9 @@ int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu)
+ 	struct kvm_vcpu *dest_vcpu;
+ 	u8 order_code = kvm_s390_get_base_disp_rs(vcpu, NULL);
+ 
+-	trace_kvm_s390_handle_sigp_pei(vcpu, order_code, cpu_addr);
+-
+ 	if (order_code == SIGP_EXTERNAL_CALL) {
++		trace_kvm_s390_handle_sigp_pei(vcpu, order_code, cpu_addr);
++
+ 		dest_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, cpu_addr);
+ 		BUG_ON(dest_vcpu == NULL);
+ 
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index 2db097c14cec0..03e561608eed4 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -2721,3 +2721,89 @@ void s390_reset_acc(struct mm_struct *mm)
+ 	mmput(mm);
+ }
+ EXPORT_SYMBOL_GPL(s390_reset_acc);
++
++/**
++ * s390_unlist_old_asce - Remove the topmost level of page tables from the
++ * list of page tables of the gmap.
++ * @gmap: the gmap whose table is to be removed
++ *
++ * On s390x, KVM keeps a list of all pages containing the page tables of the
++ * gmap (the CRST list). This list is used at tear down time to free all
++ * pages that are now not needed anymore.
++ *
++ * This function removes the topmost page of the tree (the one pointed to by
++ * the ASCE) from the CRST list.
++ *
++ * This means that it will not be freed when the VM is torn down, and needs
++ * to be handled separately by the caller, unless a leak is actually
++ * intended. Notice that this function will only remove the page from the
++ * list, the page will still be used as a top level page table (and ASCE).
++ */
++void s390_unlist_old_asce(struct gmap *gmap)
++{
++	struct page *old;
++
++	old = virt_to_page(gmap->table);
++	spin_lock(&gmap->guest_table_lock);
++	list_del(&old->lru);
++	/*
++	 * Sometimes the topmost page might need to be "removed" multiple
++	 * times, for example if the VM is rebooted into secure mode several
++	 * times concurrently, or if s390_replace_asce fails after calling
++	 * s390_remove_old_asce and is attempted again later. In that case
++	 * the old asce has been removed from the list, and therefore it
++	 * will not be freed when the VM terminates, but the ASCE is still
++	 * in use and still pointed to.
++	 * A subsequent call to replace_asce will follow the pointer and try
++	 * to remove the same page from the list again.
++	 * Therefore it's necessary that the page of the ASCE has valid
++	 * pointers, so list_del can work (and do nothing) without
++	 * dereferencing stale or invalid pointers.
++	 */
++	INIT_LIST_HEAD(&old->lru);
++	spin_unlock(&gmap->guest_table_lock);
++}
++EXPORT_SYMBOL_GPL(s390_unlist_old_asce);
++
++/**
++ * s390_replace_asce - Try to replace the current ASCE of a gmap with a copy
++ * @gmap: the gmap whose ASCE needs to be replaced
++ *
++ * If the allocation of the new top level page table fails, the ASCE is not
++ * replaced.
++ * In any case, the old ASCE is always removed from the gmap CRST list.
++ * Therefore the caller has to make sure to save a pointer to it
++ * beforehand, unless a leak is actually intended.
++ */
++int s390_replace_asce(struct gmap *gmap)
++{
++	unsigned long asce;
++	struct page *page;
++	void *table;
++
++	s390_unlist_old_asce(gmap);
++
++	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
++	if (!page)
++		return -ENOMEM;
++	table = page_to_virt(page);
++	memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT));
++
++	/*
++	 * The caller has to deal with the old ASCE, but here we make sure
++	 * the new one is properly added to the CRST list, so that
++	 * it will be freed when the VM is torn down.
++	 */
++	spin_lock(&gmap->guest_table_lock);
++	list_add(&page->lru, &gmap->crst_list);
++	spin_unlock(&gmap->guest_table_lock);
++
++	/* Set new table origin while preserving existing ASCE control bits */
++	asce = (gmap->asce & ~_ASCE_ORIGIN) | __pa(table);
++	WRITE_ONCE(gmap->asce, asce);
++	WRITE_ONCE(gmap->mm->context.gmap_asce, asce);
++	WRITE_ONCE(gmap->table, table);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(s390_replace_asce);
+diff --git a/arch/um/Kconfig b/arch/um/Kconfig
+index 4b799fad8b483..1c57599b82fa7 100644
+--- a/arch/um/Kconfig
++++ b/arch/um/Kconfig
+@@ -192,3 +192,8 @@ config UML_TIME_TRAVEL_SUPPORT
+ endmenu
+ 
+ source "arch/um/drivers/Kconfig"
++
++config ARCH_SUSPEND_POSSIBLE
++	def_bool y
++
++source "kernel/power/Kconfig"
+diff --git a/arch/um/drivers/random.c b/arch/um/drivers/random.c
+index e4b9b2ce9abf4..4b712395763ec 100644
+--- a/arch/um/drivers/random.c
++++ b/arch/um/drivers/random.c
+@@ -28,7 +28,7 @@
+  * protects against a module being loaded twice at the same time.
+  */
+ static int random_fd = -1;
+-static struct hwrng hwrng = { 0, };
++static struct hwrng hwrng;
+ static DECLARE_COMPLETION(have_data);
+ 
+ static int rng_dev_read(struct hwrng *rng, void *buf, size_t max, bool block)
+diff --git a/arch/um/include/shared/kern_util.h b/arch/um/include/shared/kern_util.h
+index ccafb62e8ccec..9c08e728a675e 100644
+--- a/arch/um/include/shared/kern_util.h
++++ b/arch/um/include/shared/kern_util.h
+@@ -39,6 +39,8 @@ extern int is_syscall(unsigned long addr);
+ 
+ extern void timer_handler(int sig, struct siginfo *unused_si, struct uml_pt_regs *regs);
+ 
++extern void uml_pm_wake(void);
++
+ extern int start_uml(void);
+ extern void paging_init(void);
+ 
+diff --git a/arch/um/include/shared/os.h b/arch/um/include/shared/os.h
+index f467d28fc0b49..2f31d44d892e0 100644
+--- a/arch/um/include/shared/os.h
++++ b/arch/um/include/shared/os.h
+@@ -241,6 +241,7 @@ extern int set_signals(int enable);
+ extern int set_signals_trace(int enable);
+ extern int os_is_signal_stack(void);
+ extern void deliver_alarm(void);
++extern void register_pm_wake_signal(void);
+ 
+ /* util.c */
+ extern void stack_protections(unsigned long address);
+diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
+index 26af24b5d900a..52e2e2a3e4aef 100644
+--- a/arch/um/kernel/um_arch.c
++++ b/arch/um/kernel/um_arch.c
+@@ -13,6 +13,7 @@
+ #include <linux/sched.h>
+ #include <linux/sched/task.h>
+ #include <linux/kmsg_dump.h>
++#include <linux/suspend.h>
+ 
+ #include <asm/processor.h>
+ #include <asm/sections.h>
+@@ -385,3 +386,27 @@ void *text_poke(void *addr, const void *opcode, size_t len)
+ void text_poke_sync(void)
+ {
+ }
++
++#ifdef CONFIG_PM_SLEEP
++void uml_pm_wake(void)
++{
++	pm_system_wakeup();
++}
++
++static int init_pm_wake_signal(void)
++{
++	/*
++	 * In external time-travel mode we can't use signals to wake up
++	 * since that would mess with the scheduling. We'll have to do
++	 * some additional work to support wakeup on virtio devices or
++	 * similar, perhaps implementing a fake RTC controller that can
++	 * trigger wakeup (and request the appropriate scheduling from
++	 * the external scheduler when going to suspend.)
++	 */
++	if (time_travel_mode != TT_MODE_EXTERNAL)
++		register_pm_wake_signal();
++	return 0;
++}
++
++late_initcall(init_pm_wake_signal);
++#endif
+diff --git a/arch/um/os-Linux/signal.c b/arch/um/os-Linux/signal.c
+index b58bc68cbe649..0a2ea84033b4a 100644
+--- a/arch/um/os-Linux/signal.c
++++ b/arch/um/os-Linux/signal.c
+@@ -136,6 +136,16 @@ void set_sigstack(void *sig_stack, int size)
+ 		panic("enabling signal stack failed, errno = %d\n", errno);
+ }
+ 
++static void sigusr1_handler(int sig, struct siginfo *unused_si, mcontext_t *mc)
++{
++	uml_pm_wake();
++}
++
++void register_pm_wake_signal(void)
++{
++	set_handler(SIGUSR1);
++}
++
+ static void (*handlers[_NSIG])(int sig, struct siginfo *si, mcontext_t *mc) = {
+ 	[SIGSEGV] = sig_handler,
+ 	[SIGBUS] = sig_handler,
+@@ -145,7 +155,9 @@ static void (*handlers[_NSIG])(int sig, struct siginfo *si, mcontext_t *mc) = {
+ 
+ 	[SIGIO] = sig_handler,
+ 	[SIGWINCH] = sig_handler,
+-	[SIGALRM] = timer_alarm_handler
++	[SIGALRM] = timer_alarm_handler,
++
++	[SIGUSR1] = sigusr1_handler,
+ };
+ 
+ static void hard_handler(int sig, siginfo_t *si, void *p)
+diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
+index fe605205b4ce2..59a42342b5559 100644
+--- a/arch/x86/boot/Makefile
++++ b/arch/x86/boot/Makefile
+@@ -103,7 +103,7 @@ $(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
+ AFLAGS_header.o += -I$(objtree)/$(obj)
+ $(obj)/header.o: $(obj)/zoffset.h
+ 
+-LDFLAGS_setup.elf	:= -m elf_i386 -T
++LDFLAGS_setup.elf	:= -m elf_i386 -z noexecstack -T
+ $(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
+ 	$(call if_changed,ld)
+ 
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index bf91e0a36d77f..ad268a15bc7bb 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -68,6 +68,8 @@ LDFLAGS_vmlinux := -pie $(call ld-option, --no-dynamic-linker)
+ ifdef CONFIG_LD_ORPHAN_WARN
+ LDFLAGS_vmlinux += --orphan-handling=warn
+ endif
++LDFLAGS_vmlinux += -z noexecstack
++LDFLAGS_vmlinux += $(call ld-option,--no-warn-rwx-segments)
+ LDFLAGS_vmlinux += -T
+ 
+ hostprogs	:= mkpiggy
+diff --git a/arch/x86/entry/Makefile b/arch/x86/entry/Makefile
+index 58533752efab2..63dc4b1dfc924 100644
+--- a/arch/x86/entry/Makefile
++++ b/arch/x86/entry/Makefile
+@@ -21,12 +21,13 @@ CFLAGS_syscall_64.o		+= $(call cc-option,-Wno-override-init,)
+ CFLAGS_syscall_32.o		+= $(call cc-option,-Wno-override-init,)
+ CFLAGS_syscall_x32.o		+= $(call cc-option,-Wno-override-init,)
+ 
+-obj-y				:= entry.o entry_$(BITS).o thunk_$(BITS).o syscall_$(BITS).o
++obj-y				:= entry.o entry_$(BITS).o syscall_$(BITS).o
+ obj-y				+= common.o
+ 
+ obj-y				+= vdso/
+ obj-y				+= vsyscall/
+ 
++obj-$(CONFIG_PREEMPTION)	+= thunk_$(BITS).o
+ obj-$(CONFIG_IA32_EMULATION)	+= entry_64_compat.o syscall_32.o
+ obj-$(CONFIG_X86_X32_ABI)	+= syscall_x32.o
+ 
+diff --git a/arch/x86/entry/thunk_32.S b/arch/x86/entry/thunk_32.S
+index 7591bab060f70..ff6e7003da974 100644
+--- a/arch/x86/entry/thunk_32.S
++++ b/arch/x86/entry/thunk_32.S
+@@ -29,10 +29,8 @@ SYM_CODE_START_NOALIGN(\name)
+ SYM_CODE_END(\name)
+ 	.endm
+ 
+-#ifdef CONFIG_PREEMPTION
+ 	THUNK preempt_schedule_thunk, preempt_schedule
+ 	THUNK preempt_schedule_notrace_thunk, preempt_schedule_notrace
+ 	EXPORT_SYMBOL(preempt_schedule_thunk)
+ 	EXPORT_SYMBOL(preempt_schedule_notrace_thunk)
+-#endif
+ 
+diff --git a/arch/x86/entry/thunk_64.S b/arch/x86/entry/thunk_64.S
+index 1b5044ad8cd0d..14776163fbffc 100644
+--- a/arch/x86/entry/thunk_64.S
++++ b/arch/x86/entry/thunk_64.S
+@@ -36,14 +36,11 @@ SYM_FUNC_END(\name)
+ 	_ASM_NOKPROBE(\name)
+ 	.endm
+ 
+-#ifdef CONFIG_PREEMPTION
+ 	THUNK preempt_schedule_thunk, preempt_schedule
+ 	THUNK preempt_schedule_notrace_thunk, preempt_schedule_notrace
+ 	EXPORT_SYMBOL(preempt_schedule_thunk)
+ 	EXPORT_SYMBOL(preempt_schedule_notrace_thunk)
+-#endif
+ 
+-#ifdef CONFIG_PREEMPTION
+ SYM_CODE_START_LOCAL_NOALIGN(__thunk_restore)
+ 	popq %r11
+ 	popq %r10
+@@ -58,4 +55,3 @@ SYM_CODE_START_LOCAL_NOALIGN(__thunk_restore)
+ 	RET
+ 	_ASM_NOKPROBE(__thunk_restore)
+ SYM_CODE_END(__thunk_restore)
+-#endif
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index f181220f1b5dc..14409755a8ea3 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -176,7 +176,7 @@ quiet_cmd_vdso = VDSO    $@
+ 		 sh $(srctree)/$(src)/checkundef.sh '$(NM)' '$@'
+ 
+ VDSO_LDFLAGS = -shared --hash-style=both --build-id=sha1 \
+-	$(call ld-option, --eh-frame-hdr) -Bsymbolic
++	$(call ld-option, --eh-frame-hdr) -Bsymbolic -z noexecstack
+ GCOV_PROFILE := n
+ 
+ quiet_cmd_vdso_and_check = VDSO    $@
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index b0e4001efb50f..38c63a78aba6f 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -432,6 +432,7 @@ struct kvm_pmu {
+ 	unsigned nr_arch_fixed_counters;
+ 	unsigned available_event_types;
+ 	u64 fixed_ctr_ctrl;
++	u64 fixed_ctr_ctrl_mask;
+ 	u64 global_ctrl;
+ 	u64 global_status;
+ 	u64 global_ovf_ctrl;
+@@ -439,6 +440,7 @@ struct kvm_pmu {
+ 	u64 global_ctrl_mask;
+ 	u64 global_ovf_ctrl_mask;
+ 	u64 reserved_bits;
++	u64 raw_event_mask;
+ 	u8 version;
+ 	struct kvm_pmc gp_counters[INTEL_PMC_MAX_GENERIC];
+ 	struct kvm_pmc fixed_counters[INTEL_PMC_MAX_FIXED];
+@@ -1117,7 +1119,8 @@ struct kvm_x86_ops {
+ 			    struct kvm_segment *var, int seg);
+ 	void (*get_cs_db_l_bits)(struct kvm_vcpu *vcpu, int *db, int *l);
+ 	void (*set_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0);
+-	int (*set_cr4)(struct kvm_vcpu *vcpu, unsigned long cr4);
++	bool (*is_valid_cr4)(struct kvm_vcpu *vcpu, unsigned long cr0);
++	void (*set_cr4)(struct kvm_vcpu *vcpu, unsigned long cr4);
+ 	int (*set_efer)(struct kvm_vcpu *vcpu, u64 efer);
+ 	void (*get_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ 	void (*set_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+@@ -1340,7 +1343,7 @@ static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+ 		return -ENOTSUPP;
+ }
+ 
+-void kvm_mmu_x86_module_init(void);
++void __init kvm_mmu_x86_module_init(void);
+ int kvm_mmu_vendor_module_init(void);
+ void kvm_mmu_vendor_module_exit(void);
+ 
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 859a3f59526c7..aa4ee46f00ce5 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -144,7 +144,7 @@ void __init check_bugs(void)
+ 	/*
+ 	 * spectre_v2_user_select_mitigation() relies on the state set by
+ 	 * retbleed_select_mitigation(); specifically the STIBP selection is
+-	 * forced for UNRET.
++	 * forced for UNRET or IBPB.
+ 	 */
+ 	spectre_v2_user_select_mitigation();
+ 	ssb_select_mitigation();
+@@ -1135,7 +1135,8 @@ spectre_v2_user_select_mitigation(void)
+ 	    boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
+ 		mode = SPECTRE_V2_USER_STRICT_PREFERRED;
+ 
+-	if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET) {
++	if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
++	    retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
+ 		if (mode != SPECTRE_V2_USER_STRICT &&
+ 		    mode != SPECTRE_V2_USER_STRICT_PREFERRED)
+ 			pr_info("Selecting STIBP always-on mode to complement retbleed mitigation\n");
+@@ -2283,10 +2284,11 @@ static ssize_t srbds_show_state(char *buf)
+ 
+ static ssize_t retbleed_show_state(char *buf)
+ {
+-	if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET) {
++	if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
++	    retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
+ 	    if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
+ 		boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
+-		    return sprintf(buf, "Vulnerable: untrained return thunk on non-Zen uarch\n");
++		    return sprintf(buf, "Vulnerable: untrained return thunk / IBPB on non-AMD based uarch\n");
+ 
+ 	    return sprintf(buf, "%s; SMT %s\n",
+ 			   retbleed_strings[retbleed_mitigation],
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index dca5cf82144c0..9a8633a6506ca 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -93,6 +93,7 @@ static int ftrace_verify_code(unsigned long ip, const char *old_code)
+ 
+ 	/* Make sure it is what we expect it to be */
+ 	if (memcmp(cur_code, old_code, MCOUNT_INSN_SIZE) != 0) {
++		ftrace_expected = old_code;
+ 		WARN_ON(1);
+ 		return -EINVAL;
+ 	}
+diff --git a/arch/x86/kernel/pmem.c b/arch/x86/kernel/pmem.c
+index 6b07faaa15798..23154d24b1173 100644
+--- a/arch/x86/kernel/pmem.c
++++ b/arch/x86/kernel/pmem.c
+@@ -27,6 +27,11 @@ static __init int register_e820_pmem(void)
+ 	 * simply here to trigger the module to load on demand.
+ 	 */
+ 	pdev = platform_device_alloc("e820_pmem", -1);
+-	return platform_device_add(pdev);
++
++	rc = platform_device_add(pdev);
++	if (rc)
++		platform_device_put(pdev);
++
++	return rc;
+ }
+ device_initcall(register_e820_pmem);
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index a2823682d64e7..4505d845daba6 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -777,6 +777,10 @@ static void amd_e400_idle(void)
+  */
+ static int prefer_mwait_c1_over_halt(const struct cpuinfo_x86 *c)
+ {
++	/* User has disallowed the use of MWAIT. Fallback to HALT */
++	if (boot_option_idle_override == IDLE_NOMWAIT)
++		return 0;
++
+ 	if (c->x86_vendor != X86_VENDOR_INTEL)
+ 		return 0;
+ 
+@@ -885,9 +889,8 @@ static int __init idle_setup(char *str)
+ 	} else if (!strcmp(str, "nomwait")) {
+ 		/*
+ 		 * If the boot option of "idle=nomwait" is added,
+-		 * it means that mwait will be disabled for CPU C2/C3
+-		 * states. In such case it won't touch the variable
+-		 * of boot_option_idle_override.
++		 * it means that mwait will be disabled for CPU C1/C2/C3
++		 * states.
+ 		 */
+ 		boot_option_idle_override = IDLE_NOMWAIT;
+ 	} else
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 737035f16a9e6..2aa41d682bb2c 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -1772,16 +1772,6 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ 	case VCPU_SREG_TR:
+ 		if (seg_desc.s || (seg_desc.type != 1 && seg_desc.type != 9))
+ 			goto exception;
+-		if (!seg_desc.p) {
+-			err_vec = NP_VECTOR;
+-			goto exception;
+-		}
+-		old_desc = seg_desc;
+-		seg_desc.type |= 2; /* busy */
+-		ret = ctxt->ops->cmpxchg_emulated(ctxt, desc_addr, &old_desc, &seg_desc,
+-						  sizeof(seg_desc), &ctxt->exception);
+-		if (ret != X86EMUL_CONTINUE)
+-			return ret;
+ 		break;
+ 	case VCPU_SREG_LDTR:
+ 		if (seg_desc.s || seg_desc.type != 2)
+@@ -1819,8 +1809,17 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ 		if (ret != X86EMUL_CONTINUE)
+ 			return ret;
+ 		if (emul_is_noncanonical_address(get_desc_base(&seg_desc) |
+-				((u64)base3 << 32), ctxt))
+-			return emulate_gp(ctxt, 0);
++						 ((u64)base3 << 32), ctxt))
++			return emulate_gp(ctxt, err_code);
++	}
++
++	if (seg == VCPU_SREG_TR) {
++		old_desc = seg_desc;
++		seg_desc.type |= 2; /* busy */
++		ret = ctxt->ops->cmpxchg_emulated(ctxt, desc_addr, &old_desc, &seg_desc,
++						  sizeof(seg_desc), &ctxt->exception);
++		if (ret != X86EMUL_CONTINUE)
++			return ret;
+ 	}
+ load:
+ 	ctxt->ops->set_segment(ctxt, selector, &seg_desc, base3, seg);
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index d806139377bc6..09ec1cda2d687 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -428,6 +428,9 @@ static int synic_set_irq(struct kvm_vcpu_hv_synic *synic, u32 sint)
+ 	struct kvm_lapic_irq irq;
+ 	int ret, vector;
+ 
++	if (KVM_BUG_ON(!lapic_in_kernel(vcpu), vcpu->kvm))
++		return -EINVAL;
++
+ 	if (sint >= ARRAY_SIZE(synic->sint))
+ 		return -EINVAL;
+ 
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 6ed6b090be941..260727eaa6b96 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -991,6 +991,10 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src,
+ 	*r = -1;
+ 
+ 	if (irq->shorthand == APIC_DEST_SELF) {
++		if (KVM_BUG_ON(!src, kvm)) {
++			*r = 0;
++			return true;
++		}
+ 		*r = kvm_apic_set_irq(src->vcpu, irq, dest_map);
+ 		return true;
+ 	}
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 6096d0f1a62af..13bf3198d0cee 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5886,7 +5886,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
+  * nx_huge_pages needs to be resolved to true/false when kvm.ko is loaded, as
+  * its default value of -1 is technically undefined behavior for a boolean.
+  */
+-void kvm_mmu_x86_module_init(void)
++void __init kvm_mmu_x86_module_init(void)
+ {
+ 	if (nx_huge_pages == -1)
+ 		__set_nx_huge_pages(get_nx_auto_mode());
+diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
+index 2f83b5d948b33..e5322a0dc5bb0 100644
+--- a/arch/x86/kvm/pmu.c
++++ b/arch/x86/kvm/pmu.c
+@@ -13,6 +13,8 @@
+ #include <linux/types.h>
+ #include <linux/kvm_host.h>
+ #include <linux/perf_event.h>
++#include <linux/bsearch.h>
++#include <linux/sort.h>
+ #include <asm/perf_event.h>
+ #include "x86.h"
+ #include "cpuid.h"
+@@ -168,13 +170,21 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
+ 	return true;
+ }
+ 
++static int cmp_u64(const void *pa, const void *pb)
++{
++	u64 a = *(u64 *)pa;
++	u64 b = *(u64 *)pb;
++
++	return (a > b) - (a < b);
++}
++
+ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ {
+ 	u64 config;
+ 	u32 type = PERF_TYPE_RAW;
+ 	struct kvm *kvm = pmc->vcpu->kvm;
+ 	struct kvm_pmu_event_filter *filter;
+-	int i;
++	struct kvm_pmu *pmu = vcpu_to_pmu(pmc->vcpu);
+ 	bool allow_event = true;
+ 
+ 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
+@@ -189,16 +199,13 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ 
+ 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
+ 	if (filter) {
+-		for (i = 0; i < filter->nevents; i++)
+-			if (filter->events[i] ==
+-			    (eventsel & AMD64_RAW_EVENT_MASK_NB))
+-				break;
+-		if (filter->action == KVM_PMU_EVENT_ALLOW &&
+-		    i == filter->nevents)
+-			allow_event = false;
+-		if (filter->action == KVM_PMU_EVENT_DENY &&
+-		    i < filter->nevents)
+-			allow_event = false;
++		__u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
++
++		if (bsearch(&key, filter->events, filter->nevents,
++			    sizeof(__u64), cmp_u64))
++			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
++		else
++			allow_event = filter->action == KVM_PMU_EVENT_DENY;
+ 	}
+ 	if (!allow_event)
+ 		return;
+@@ -214,7 +221,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ 	}
+ 
+ 	if (type == PERF_TYPE_RAW)
+-		config = eventsel & AMD64_RAW_EVENT_MASK;
++		config = eventsel & pmu->raw_event_mask;
+ 
+ 	if (pmc->current_config == eventsel && pmc_resume_counter(pmc))
+ 		return;
+@@ -507,6 +514,11 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
+ 	/* Ensure nevents can't be changed between the user copies. */
+ 	*filter = tmp;
+ 
++	/*
++	 * Sort the in-kernel list so that we can search it with bsearch.
++	 */
++	sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL);
++
+ 	mutex_lock(&kvm->lock);
+ 	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
+ 				     mutex_is_locked(&kvm->lock));
+diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
+index 49e5be735f147..35da84f63b202 100644
+--- a/arch/x86/kvm/svm/pmu.c
++++ b/arch/x86/kvm/svm/pmu.c
+@@ -295,6 +295,7 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
+ 
+ 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
+ 	pmu->reserved_bits = 0xfffffff000280000ull;
++	pmu->raw_event_mask = AMD64_RAW_EVENT_MASK;
+ 	pmu->version = 1;
+ 	/* not applicable to AMD; but clean them to prevent any fall out */
+ 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 7773a765f5489..442705517caf4 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1692,14 +1692,16 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+ 	update_cr0_intercept(svm);
+ }
+ 
+-int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
++static bool svm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
++{
++	return true;
++}
++
++void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ {
+ 	unsigned long host_cr4_mce = cr4_read_shadow() & X86_CR4_MCE;
+ 	unsigned long old_cr4 = to_svm(vcpu)->vmcb->save.cr4;
+ 
+-	if (cr4 & X86_CR4_VMXE)
+-		return 1;
+-
+ 	if (npt_enabled && ((old_cr4 ^ cr4) & X86_CR4_PGE))
+ 		svm_flush_tlb(vcpu);
+ 
+@@ -1709,7 +1711,6 @@ int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ 	cr4 |= host_cr4_mce;
+ 	to_svm(vcpu)->vmcb->save.cr4 = cr4;
+ 	vmcb_mark_dirty(to_svm(vcpu)->vmcb, VMCB_CR);
+-	return 0;
+ }
+ 
+ static void svm_set_segment(struct kvm_vcpu *vcpu,
+@@ -3188,8 +3189,6 @@ static void svm_set_irq(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+ 
+-	BUG_ON(!(gif_set(svm)));
+-
+ 	trace_kvm_inj_virq(vcpu->arch.interrupt.nr);
+ 	++vcpu->stat.irq_injections;
+ 
+@@ -4243,6 +4242,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
+ 	.get_cpl = svm_get_cpl,
+ 	.get_cs_db_l_bits = kvm_get_cs_db_l_bits,
+ 	.set_cr0 = svm_set_cr0,
++	.is_valid_cr4 = svm_is_valid_cr4,
+ 	.set_cr4 = svm_set_cr4,
+ 	.set_efer = svm_set_efer,
+ 	.get_idt = svm_get_idt,
+diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
+index 2c007241fbf53..10aba1dd264ed 100644
+--- a/arch/x86/kvm/svm/svm.h
++++ b/arch/x86/kvm/svm/svm.h
+@@ -355,7 +355,7 @@ void svm_vcpu_free_msrpm(u32 *msrpm);
+ 
+ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
+ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
+-int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
++void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
+ void svm_flush_tlb(struct kvm_vcpu *vcpu);
+ void disable_nmi_singlestep(struct vcpu_svm *svm);
+ bool svm_smi_blocked(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 09804cad6e2db..6c4277e99d586 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -1245,7 +1245,7 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
+ 		BIT_ULL(49) | BIT_ULL(54) | BIT_ULL(55) |
+ 		/* reserved */
+ 		BIT_ULL(31) | GENMASK_ULL(47, 45) | GENMASK_ULL(63, 56);
+-	u64 vmx_basic = vmx->nested.msrs.basic;
++	u64 vmx_basic = vmcs_config.nested.basic;
+ 
+ 	if (!is_bitwise_subset(vmx_basic, data, feature_and_reserved))
+ 		return -EINVAL;
+@@ -1268,36 +1268,42 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
+ 	return 0;
+ }
+ 
+-static int
+-vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
++static void vmx_get_control_msr(struct nested_vmx_msrs *msrs, u32 msr_index,
++				u32 **low, u32 **high)
+ {
+-	u64 supported;
+-	u32 *lowp, *highp;
+-
+ 	switch (msr_index) {
+ 	case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
+-		lowp = &vmx->nested.msrs.pinbased_ctls_low;
+-		highp = &vmx->nested.msrs.pinbased_ctls_high;
++		*low = &msrs->pinbased_ctls_low;
++		*high = &msrs->pinbased_ctls_high;
+ 		break;
+ 	case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
+-		lowp = &vmx->nested.msrs.procbased_ctls_low;
+-		highp = &vmx->nested.msrs.procbased_ctls_high;
++		*low = &msrs->procbased_ctls_low;
++		*high = &msrs->procbased_ctls_high;
+ 		break;
+ 	case MSR_IA32_VMX_TRUE_EXIT_CTLS:
+-		lowp = &vmx->nested.msrs.exit_ctls_low;
+-		highp = &vmx->nested.msrs.exit_ctls_high;
++		*low = &msrs->exit_ctls_low;
++		*high = &msrs->exit_ctls_high;
+ 		break;
+ 	case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
+-		lowp = &vmx->nested.msrs.entry_ctls_low;
+-		highp = &vmx->nested.msrs.entry_ctls_high;
++		*low = &msrs->entry_ctls_low;
++		*high = &msrs->entry_ctls_high;
+ 		break;
+ 	case MSR_IA32_VMX_PROCBASED_CTLS2:
+-		lowp = &vmx->nested.msrs.secondary_ctls_low;
+-		highp = &vmx->nested.msrs.secondary_ctls_high;
++		*low = &msrs->secondary_ctls_low;
++		*high = &msrs->secondary_ctls_high;
+ 		break;
+ 	default:
+ 		BUG();
+ 	}
++}
++
++static int
++vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
++{
++	u32 *lowp, *highp;
++	u64 supported;
++
++	vmx_get_control_msr(&vmcs_config.nested, msr_index, &lowp, &highp);
+ 
+ 	supported = vmx_control_msr(*lowp, *highp);
+ 
+@@ -1309,6 +1315,7 @@ vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
+ 	if (!is_bitwise_subset(supported, data, GENMASK_ULL(63, 32)))
+ 		return -EINVAL;
+ 
++	vmx_get_control_msr(&vmx->nested.msrs, msr_index, &lowp, &highp);
+ 	*lowp = data;
+ 	*highp = data >> 32;
+ 	return 0;
+@@ -1322,10 +1329,8 @@ static int vmx_restore_vmx_misc(struct vcpu_vmx *vmx, u64 data)
+ 		BIT_ULL(28) | BIT_ULL(29) | BIT_ULL(30) |
+ 		/* reserved */
+ 		GENMASK_ULL(13, 9) | BIT_ULL(31);
+-	u64 vmx_misc;
+-
+-	vmx_misc = vmx_control_msr(vmx->nested.msrs.misc_low,
+-				   vmx->nested.msrs.misc_high);
++	u64 vmx_misc = vmx_control_msr(vmcs_config.nested.misc_low,
++				       vmcs_config.nested.misc_high);
+ 
+ 	if (!is_bitwise_subset(vmx_misc, data, feature_and_reserved_bits))
+ 		return -EINVAL;
+@@ -1353,10 +1358,8 @@ static int vmx_restore_vmx_misc(struct vcpu_vmx *vmx, u64 data)
+ 
+ static int vmx_restore_vmx_ept_vpid_cap(struct vcpu_vmx *vmx, u64 data)
+ {
+-	u64 vmx_ept_vpid_cap;
+-
+-	vmx_ept_vpid_cap = vmx_control_msr(vmx->nested.msrs.ept_caps,
+-					   vmx->nested.msrs.vpid_caps);
++	u64 vmx_ept_vpid_cap = vmx_control_msr(vmcs_config.nested.ept_caps,
++					       vmcs_config.nested.vpid_caps);
+ 
+ 	/* Every bit is either reserved or a feature bit. */
+ 	if (!is_bitwise_subset(vmx_ept_vpid_cap, data, -1ULL))
+@@ -1367,20 +1370,21 @@ static int vmx_restore_vmx_ept_vpid_cap(struct vcpu_vmx *vmx, u64 data)
+ 	return 0;
+ }
+ 
+-static int vmx_restore_fixed0_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
++static u64 *vmx_get_fixed0_msr(struct nested_vmx_msrs *msrs, u32 msr_index)
+ {
+-	u64 *msr;
+-
+ 	switch (msr_index) {
+ 	case MSR_IA32_VMX_CR0_FIXED0:
+-		msr = &vmx->nested.msrs.cr0_fixed0;
+-		break;
++		return &msrs->cr0_fixed0;
+ 	case MSR_IA32_VMX_CR4_FIXED0:
+-		msr = &vmx->nested.msrs.cr4_fixed0;
+-		break;
++		return &msrs->cr4_fixed0;
+ 	default:
+ 		BUG();
+ 	}
++}
++
++static int vmx_restore_fixed0_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
++{
++	const u64 *msr = vmx_get_fixed0_msr(&vmcs_config.nested, msr_index);
+ 
+ 	/*
+ 	 * 1 bits (which indicates bits which "must-be-1" during VMX operation)
+@@ -1389,7 +1393,7 @@ static int vmx_restore_fixed0_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
+ 	if (!is_bitwise_subset(data, *msr, -1ULL))
+ 		return -EINVAL;
+ 
+-	*msr = data;
++	*vmx_get_fixed0_msr(&vmx->nested.msrs, msr_index) = data;
+ 	return 0;
+ }
+ 
+@@ -1450,7 +1454,7 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
+ 		vmx->nested.msrs.vmcs_enum = data;
+ 		return 0;
+ 	case MSR_IA32_VMX_VMFUNC:
+-		if (data & ~vmx->nested.msrs.vmfunc_controls)
++		if (data & ~vmcs_config.nested.vmfunc_controls)
+ 			return -EINVAL;
+ 		vmx->nested.msrs.vmfunc_controls = data;
+ 		return 0;
+@@ -3337,10 +3341,12 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
+ 	if (likely(!evaluate_pending_interrupts) && kvm_vcpu_apicv_active(vcpu))
+ 		evaluate_pending_interrupts |= vmx_has_apicv_interrupt(vcpu);
+ 
+-	if (!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
++	if (!vmx->nested.nested_run_pending ||
++	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
+ 		vmx->nested.vmcs01_debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);
+ 	if (kvm_mpx_supported() &&
+-		!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
++	    (!vmx->nested.nested_run_pending ||
++	     !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
+ 		vmx->nested.vmcs01_guest_bndcfgs = vmcs_read64(GUEST_BNDCFGS);
+ 
+ 	/*
+@@ -4871,20 +4877,25 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
+ 		| FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX;
+ 
+ 	/*
+-	 * The Intel VMX Instruction Reference lists a bunch of bits that are
+-	 * prerequisite to running VMXON, most notably cr4.VMXE must be set to
+-	 * 1 (see vmx_set_cr4() for when we allow the guest to set this).
+-	 * Otherwise, we should fail with #UD.  But most faulting conditions
+-	 * have already been checked by hardware, prior to the VM-exit for
+-	 * VMXON.  We do test guest cr4.VMXE because processor CR4 always has
+-	 * that bit set to 1 in non-root mode.
++	 * Note, KVM cannot rely on hardware to perform the CR0/CR4 #UD checks
++	 * that have higher priority than VM-Exit (see Intel SDM's pseudocode
++	 * for VMXON), as KVM must load valid CR0/CR4 values into hardware while
++	 * running the guest, i.e. KVM needs to check the _guest_ values.
++	 *
++	 * Rely on hardware for the other two pre-VM-Exit checks, !VM86 and
++	 * !COMPATIBILITY modes.  KVM may run the guest in VM86 to emulate Real
++	 * Mode, but KVM will never take the guest out of those modes.
+ 	 */
+-	if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE)) {
++	if (!nested_host_cr0_valid(vcpu, kvm_read_cr0(vcpu)) ||
++	    !nested_host_cr4_valid(vcpu, kvm_read_cr4(vcpu))) {
+ 		kvm_queue_exception(vcpu, UD_VECTOR);
+ 		return 1;
+ 	}
+ 
+-	/* CPL=0 must be checked manually. */
++	/*
++	 * CPL=0 and all other checks that are lower priority than VM-Exit must
++	 * be checked manually.
++	 */
+ 	if (vmx_get_cpl(vcpu)) {
+ 		kvm_inject_gp(vcpu, 0);
+ 		return 1;
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index bd70c1d7f3458..f938fc997766c 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -104,6 +104,9 @@ static bool intel_pmc_is_enabled(struct kvm_pmc *pmc)
+ {
+ 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+ 
++	if (pmu->version < 2)
++		return true;
++
+ 	return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl);
+ }
+ 
+@@ -153,12 +156,17 @@ static struct kvm_pmc *intel_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
+ 	return &counters[array_index_nospec(idx, num_counters)];
+ }
+ 
+-static inline bool fw_writes_is_enabled(struct kvm_vcpu *vcpu)
++static inline u64 vcpu_get_perf_capabilities(struct kvm_vcpu *vcpu)
+ {
+ 	if (!guest_cpuid_has(vcpu, X86_FEATURE_PDCM))
+-		return false;
++		return 0;
+ 
+-	return vcpu->arch.perf_capabilities & PMU_CAP_FW_WRITES;
++	return vcpu->arch.perf_capabilities;
++}
++
++static inline bool fw_writes_is_enabled(struct kvm_vcpu *vcpu)
++{
++	return (vcpu_get_perf_capabilities(vcpu) & PMU_CAP_FW_WRITES) != 0;
+ }
+ 
+ static inline struct kvm_pmc *get_fw_gp_pmc(struct kvm_pmu *pmu, u32 msr)
+@@ -254,7 +262,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_CORE_PERF_FIXED_CTR_CTRL:
+ 		if (pmu->fixed_ctr_ctrl == data)
+ 			return 0;
+-		if (!(data & 0xfffffffffffff444ull)) {
++		if (!(data & pmu->fixed_ctr_ctrl_mask)) {
+ 			reprogram_fixed_counters(pmu, data);
+ 			return 0;
+ 		}
+@@ -321,6 +329,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ 	struct kvm_cpuid_entry2 *entry;
+ 	union cpuid10_eax eax;
+ 	union cpuid10_edx edx;
++	int i;
+ 
+ 	pmu->nr_arch_gp_counters = 0;
+ 	pmu->nr_arch_fixed_counters = 0;
+@@ -328,7 +337,10 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+ 	pmu->version = 0;
+ 	pmu->reserved_bits = 0xffffffff00200000ull;
+-	vcpu->arch.perf_capabilities = 0;
++	pmu->raw_event_mask = X86_RAW_EVENT_MASK;
++	pmu->global_ctrl_mask = ~0ull;
++	pmu->global_ovf_ctrl_mask = ~0ull;
++	pmu->fixed_ctr_ctrl_mask = ~0ull;
+ 
+ 	entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
+ 	if (!entry)
+@@ -341,8 +353,6 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ 		return;
+ 
+ 	perf_get_x86_pmu_capability(&x86_pmu);
+-	if (guest_cpuid_has(vcpu, X86_FEATURE_PDCM))
+-		vcpu->arch.perf_capabilities = vmx_get_perf_capabilities();
+ 
+ 	pmu->nr_arch_gp_counters = min_t(int, eax.split.num_counters,
+ 					 x86_pmu.num_counters_gp);
+@@ -364,6 +374,8 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ 			((u64)1 << edx.split.bit_width_fixed) - 1;
+ 	}
+ 
++	for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
++		pmu->fixed_ctr_ctrl_mask &= ~(0xbull << (i * 4));
+ 	pmu->global_ctrl = ((1ull << pmu->nr_arch_gp_counters) - 1) |
+ 		(((1ull << pmu->nr_arch_fixed_counters) - 1) << INTEL_PMC_IDX_FIXED);
+ 	pmu->global_ctrl_mask = ~pmu->global_ctrl;
+@@ -406,6 +418,8 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
+ 		pmu->fixed_counters[i].idx = i + INTEL_PMC_IDX_FIXED;
+ 		pmu->fixed_counters[i].current_config = 0;
+ 	}
++
++	vcpu->arch.perf_capabilities = vmx_get_perf_capabilities();
+ }
+ 
+ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 9b520da3f7488..b33d0f283d4f8 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -3183,7 +3183,23 @@ static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long pgd,
+ 		vmcs_writel(GUEST_CR3, guest_cr3);
+ }
+ 
+-int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
++static bool vmx_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
++{
++	/*
++	 * We operate under the default treatment of SMM, so VMX cannot be
++	 * enabled under SMM.  Note, whether or not VMXE is allowed at all is
++	 * handled by kvm_valid_cr4().
++	 */
++	if ((cr4 & X86_CR4_VMXE) && is_smm(vcpu))
++		return false;
++
++	if (to_vmx(vcpu)->nested.vmxon && !nested_cr4_valid(vcpu, cr4))
++		return false;
++
++	return true;
++}
++
++void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 	/*
+@@ -3211,21 +3227,6 @@ int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ 		}
+ 	}
+ 
+-	if (cr4 & X86_CR4_VMXE) {
+-		/*
+-		 * To use VMXON (and later other VMX instructions), a guest
+-		 * must first be able to turn on cr4.VMXE (see handle_vmon()).
+-		 * So basically the check on whether to allow nested VMX
+-		 * is here.  We operate under the default treatment of SMM,
+-		 * so VMX cannot be enabled under SMM.
+-		 */
+-		if (!nested_vmx_allowed(vcpu) || is_smm(vcpu))
+-			return 1;
+-	}
+-
+-	if (vmx->nested.vmxon && !nested_cr4_valid(vcpu, cr4))
+-		return 1;
+-
+ 	vcpu->arch.cr4 = cr4;
+ 	kvm_register_mark_available(vcpu, VCPU_EXREG_CR4);
+ 
+@@ -3256,7 +3257,6 @@ int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ 
+ 	vmcs_writel(CR4_READ_SHADOW, cr4);
+ 	vmcs_writel(GUEST_CR4, hw_cr4);
+-	return 0;
+ }
+ 
+ void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
+@@ -7752,6 +7752,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
+ 	.get_cpl = vmx_get_cpl,
+ 	.get_cs_db_l_bits = vmx_get_cs_db_l_bits,
+ 	.set_cr0 = vmx_set_cr0,
++	.is_valid_cr4 = vmx_is_valid_cr4,
+ 	.set_cr4 = vmx_set_cr4,
+ 	.set_efer = vmx_set_efer,
+ 	.get_idt = vmx_get_idt,
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index a6b52d3a39c9d..24903f05c204b 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -347,7 +347,7 @@ u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu);
+ void vmx_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask);
+ int vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer);
+ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
+-int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
++void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
+ void set_cr4_guest_host_mask(struct vcpu_vmx *vmx);
+ void ept_save_pdptrs(struct kvm_vcpu *vcpu);
+ void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 29a8ca95c581d..5f4f855bb3b10 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -986,6 +986,9 @@ int kvm_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ 	if (cr4 & vcpu->arch.cr4_guest_rsvd_bits)
+ 		return -EINVAL;
+ 
++	if (!kvm_x86_ops.is_valid_cr4(vcpu, cr4))
++		return -EINVAL;
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(kvm_valid_cr4);
+@@ -1020,8 +1023,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ 			return 1;
+ 	}
+ 
+-	if (kvm_x86_ops.set_cr4(vcpu, cr4))
+-		return 1;
++	kvm_x86_ops.set_cr4(vcpu, cr4);
+ 
+ 	if (((cr4 ^ old_cr4) & mmu_role_bits) ||
+ 	    (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
+@@ -2862,17 +2864,20 @@ static int set_msr_mce(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			/* only 0 or all 1s can be written to IA32_MCi_CTL
+ 			 * some Linux kernels though clear bit 10 in bank 4 to
+ 			 * workaround a BIOS/GART TBL issue on AMD K8s, ignore
+-			 * this to avoid an uncatched #GP in the guest
++			 * this to avoid an uncatched #GP in the guest.
++			 *
++			 * UNIXWARE clears bit 0 of MC1_CTL to ignore
++			 * correctable, single-bit ECC data errors.
+ 			 */
+ 			if ((offset & 0x3) == 0 &&
+-			    data != 0 && (data | (1 << 10)) != ~(u64)0)
+-				return -1;
++			    data != 0 && (data | (1 << 10) | 1) != ~(u64)0)
++				return 1;
+ 
+ 			/* MCi_STATUS */
+ 			if (!msr_info->host_initiated &&
+ 			    (offset & 0x3) == 1 && data != 0) {
+ 				if (!can_set_mci_status(vcpu))
+-					return -1;
++					return 1;
+ 			}
+ 
+ 			vcpu->arch.mce_banks[offset] = data;
+diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
+index e94da744386f3..9dc31996c7edb 100644
+--- a/arch/x86/mm/numa.c
++++ b/arch/x86/mm/numa.c
+@@ -861,7 +861,7 @@ void debug_cpumask_set_cpu(int cpu, int node, bool enable)
+ 		return;
+ 	}
+ 	mask = node_to_cpumask_map[node];
+-	if (!mask) {
++	if (!cpumask_available(mask)) {
+ 		pr_err("node_to_cpumask_map[%i] NULL\n", node);
+ 		dump_stack();
+ 		return;
+@@ -907,7 +907,7 @@ const struct cpumask *cpumask_of_node(int node)
+ 		dump_stack();
+ 		return cpu_none_mask;
+ 	}
+-	if (node_to_cpumask_map[node] == NULL) {
++	if (!cpumask_available(node_to_cpumask_map[node])) {
+ 		printk(KERN_WARNING
+ 			"cpumask_of_node(%d): no node_to_cpumask_map!\n",
+ 			node);
+diff --git a/arch/x86/platform/olpc/olpc-xo1-sci.c b/arch/x86/platform/olpc/olpc-xo1-sci.c
+index f03a6883dcc6d..89f25af4b3c33 100644
+--- a/arch/x86/platform/olpc/olpc-xo1-sci.c
++++ b/arch/x86/platform/olpc/olpc-xo1-sci.c
+@@ -80,7 +80,7 @@ static void send_ebook_state(void)
+ 		return;
+ 	}
+ 
+-	if (!!test_bit(SW_TABLET_MODE, ebook_switch_idev->sw) == state)
++	if (test_bit(SW_TABLET_MODE, ebook_switch_idev->sw) == !!state)
+ 		return; /* Nothing new to report. */
+ 
+ 	input_report_switch(ebook_switch_idev, SW_TABLET_MODE, state);
+diff --git a/arch/x86/um/Makefile b/arch/x86/um/Makefile
+index 77f70b969d143..3113800da63ad 100644
+--- a/arch/x86/um/Makefile
++++ b/arch/x86/um/Makefile
+@@ -27,7 +27,8 @@ else
+ 
+ obj-y += syscalls_64.o vdso/
+ 
+-subarch-y = ../lib/csum-partial_64.o ../lib/memcpy_64.o ../entry/thunk_64.o
++subarch-y = ../lib/csum-partial_64.o ../lib/memcpy_64.o
++subarch-$(CONFIG_PREEMPTION) += ../entry/thunk_64.o
+ 
+ endif
+ 
+diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
+index 4986226a5ab26..08d70c868c130 100644
+--- a/arch/xtensa/platforms/iss/network.c
++++ b/arch/xtensa/platforms/iss/network.c
+@@ -502,16 +502,24 @@ static const struct net_device_ops iss_netdev_ops = {
+ 	.ndo_set_rx_mode	= iss_net_set_multicast_list,
+ };
+ 
+-static int iss_net_configure(int index, char *init)
++static void iss_net_pdev_release(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct iss_net_private *lp =
++		container_of(pdev, struct iss_net_private, pdev);
++
++	free_netdev(lp->dev);
++}
++
++static void iss_net_configure(int index, char *init)
+ {
+ 	struct net_device *dev;
+ 	struct iss_net_private *lp;
+-	int err;
+ 
+ 	dev = alloc_etherdev(sizeof(*lp));
+ 	if (dev == NULL) {
+ 		pr_err("eth_configure: failed to allocate device\n");
+-		return 1;
++		return;
+ 	}
+ 
+ 	/* Initialize private element. */
+@@ -540,7 +548,7 @@ static int iss_net_configure(int index, char *init)
+ 	if (!tuntap_probe(lp, index, init)) {
+ 		pr_err("%s: invalid arguments. Skipping device!\n",
+ 		       dev->name);
+-		goto errout;
++		goto err_free_netdev;
+ 	}
+ 
+ 	pr_info("Netdevice %d (%pM)\n", index, dev->dev_addr);
+@@ -548,7 +556,8 @@ static int iss_net_configure(int index, char *init)
+ 	/* sysfs register */
+ 
+ 	if (!driver_registered) {
+-		platform_driver_register(&iss_net_driver);
++		if (platform_driver_register(&iss_net_driver))
++			goto err_free_netdev;
+ 		driver_registered = 1;
+ 	}
+ 
+@@ -558,7 +567,9 @@ static int iss_net_configure(int index, char *init)
+ 
+ 	lp->pdev.id = index;
+ 	lp->pdev.name = DRIVER_NAME;
+-	platform_device_register(&lp->pdev);
++	lp->pdev.dev.release = iss_net_pdev_release;
++	if (platform_device_register(&lp->pdev))
++		goto err_free_netdev;
+ 	SET_NETDEV_DEV(dev, &lp->pdev.dev);
+ 
+ 	dev->netdev_ops = &iss_netdev_ops;
+@@ -567,23 +578,20 @@ static int iss_net_configure(int index, char *init)
+ 	dev->irq = -1;
+ 
+ 	rtnl_lock();
+-	err = register_netdevice(dev);
+-	rtnl_unlock();
+-
+-	if (err) {
++	if (register_netdevice(dev)) {
++		rtnl_unlock();
+ 		pr_err("%s: error registering net device!\n", dev->name);
+-		/* XXX: should we call ->remove() here? */
+-		free_netdev(dev);
+-		return 1;
++		platform_device_unregister(&lp->pdev);
++		return;
+ 	}
++	rtnl_unlock();
+ 
+ 	timer_setup(&lp->tl, iss_net_user_timer_expire, 0);
+ 
+-	return 0;
++	return;
+ 
+-errout:
+-	/* FIXME: unregister; free, etc.. */
+-	return -EIO;
++err_free_netdev:
++	free_netdev(dev);
+ }
+ 
+ /* ------------------------------------------------------------------------- */
+diff --git a/block/bio.c b/block/bio.c
+index f8d26ce7b61b0..6d6e7b96b0021 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1057,9 +1057,6 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
+ 	size_t offset;
+ 	int ret = 0;
+ 
+-	if (WARN_ON_ONCE(!max_append_sectors))
+-		return 0;
+-
+ 	/*
+ 	 * Move page array up in the allocated memory for the bio vecs as far as
+ 	 * possible so that we can start filling biovecs from the beginning
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index 006b1f0a59bc5..fbba277364f01 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -806,7 +806,7 @@ static struct request *attempt_merge(struct request_queue *q,
+ 	 */
+ 	blk_account_io_merge_request(next);
+ 
+-	trace_block_rq_merge(q, next);
++	trace_block_rq_merge(next);
+ 
+ 	/*
+ 	 * ownership of bio passed from next to req, return 'next' for
+diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
+index b5f26082b9594..212e1e7954696 100644
+--- a/block/blk-mq-debugfs.c
++++ b/block/blk-mq-debugfs.c
+@@ -881,6 +881,9 @@ void blk_mq_debugfs_register_hctx(struct request_queue *q,
+ 	char name[20];
+ 	int i;
+ 
++	if (!q->debugfs_dir)
++		return;
++
+ 	snprintf(name, sizeof(name), "hctx%u", hctx->queue_num);
+ 	hctx->debugfs_dir = debugfs_create_dir(name, q->debugfs_dir);
+ 
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index e0117f5f969de..72e64ba661fc7 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -396,7 +396,7 @@ EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge);
+ 
+ void blk_mq_sched_request_inserted(struct request *rq)
+ {
+-	trace_block_rq_insert(rq->q, rq);
++	trace_block_rq_insert(rq);
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_sched_request_inserted);
+ 
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index c5d82b21a1ccb..90f64bb42fbd1 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -733,7 +733,7 @@ void blk_mq_start_request(struct request *rq)
+ {
+ 	struct request_queue *q = rq->q;
+ 
+-	trace_block_rq_issue(q, rq);
++	trace_block_rq_issue(rq);
+ 
+ 	if (test_bit(QUEUE_FLAG_STATS, &q->queue_flags)) {
+ 		rq->io_start_time_ns = ktime_get_ns();
+@@ -760,7 +760,7 @@ static void __blk_mq_requeue_request(struct request *rq)
+ 
+ 	blk_mq_put_driver_tag(rq);
+ 
+-	trace_block_rq_requeue(q, rq);
++	trace_block_rq_requeue(rq);
+ 	rq_qos_requeue(q, rq);
+ 
+ 	if (blk_mq_request_started(rq)) {
+@@ -1806,7 +1806,7 @@ static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
+ 
+ 	lockdep_assert_held(&ctx->lock);
+ 
+-	trace_block_rq_insert(hctx->queue, rq);
++	trace_block_rq_insert(rq);
+ 
+ 	if (at_head)
+ 		list_add(&rq->queuelist, &ctx->rq_lists[type]);
+@@ -1863,7 +1863,7 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
+ 	 */
+ 	list_for_each_entry(rq, list, queuelist) {
+ 		BUG_ON(rq->mq_ctx != ctx);
+-		trace_block_rq_insert(hctx->queue, rq);
++		trace_block_rq_insert(rq);
+ 	}
+ 
+ 	spin_lock(&ctx->lock);
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index 788a4ba1e2e74..cf9b7ac362025 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -260,6 +260,10 @@ static int cert_sig_digest_update(const struct public_key_signature *sig,
+ 
+ 	BUG_ON(!sig->data);
+ 
++	/* SM2 signatures always use the SM3 hash algorithm */
++	if (!sig->hash_algo || strcmp(sig->hash_algo, "sm3") != 0)
++		return -EINVAL;
++
+ 	ret = sm2_compute_z_digest(tfm_pkey, SM2_DEFAULT_USERID,
+ 					SM2_DEFAULT_USERID_LEN, dgst);
+ 	if (ret)
+@@ -356,8 +360,7 @@ int public_key_verify_signature(const struct public_key *pkey,
+ 	if (ret)
+ 		goto error_free_key;
+ 
+-	if (sig->pkey_algo && strcmp(sig->pkey_algo, "sm2") == 0 &&
+-	    sig->data_size) {
++	if (strcmp(pkey->pkey_algo, "sm2") == 0 && sig->data_size) {
+ 		ret = cert_sig_digest_update(sig, tfm);
+ 		if (ret)
+ 			goto error_free_key;
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index be73974ce449a..6ff81027c69dd 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -401,6 +401,9 @@ static int register_device_clock(struct acpi_device *adev,
+ 	if (!lpss_clk_dev)
+ 		lpt_register_clock_device();
+ 
++	if (IS_ERR(lpss_clk_dev))
++		return PTR_ERR(lpss_clk_dev);
++
+ 	clk_data = platform_get_drvdata(lpss_clk_dev);
+ 	if (!clk_data)
+ 		return -ENODEV;
+diff --git a/drivers/acpi/apei/einj.c b/drivers/acpi/apei/einj.c
+index 1331567595512..c281d5b339d3f 100644
+--- a/drivers/acpi/apei/einj.c
++++ b/drivers/acpi/apei/einj.c
+@@ -544,6 +544,8 @@ static int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
+ 	    ((region_intersects(base_addr, size, IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE)
+ 				!= REGION_INTERSECTS) &&
+ 	     (region_intersects(base_addr, size, IORESOURCE_MEM, IORES_DESC_PERSISTENT_MEMORY)
++				!= REGION_INTERSECTS) &&
++	     (region_intersects(base_addr, size, IORESOURCE_MEM, IORES_DESC_SOFT_RESERVED)
+ 				!= REGION_INTERSECTS)))
+ 		return -EINVAL;
+ 
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 2ac0773326e9a..b62348a7e4d98 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -607,33 +607,6 @@ static int pcc_data_alloc(int pcc_ss_id)
+ 	return 0;
+ }
+ 
+-/* Check if CPPC revision + num_ent combination is supported */
+-static bool is_cppc_supported(int revision, int num_ent)
+-{
+-	int expected_num_ent;
+-
+-	switch (revision) {
+-	case CPPC_V2_REV:
+-		expected_num_ent = CPPC_V2_NUM_ENT;
+-		break;
+-	case CPPC_V3_REV:
+-		expected_num_ent = CPPC_V3_NUM_ENT;
+-		break;
+-	default:
+-		pr_debug("Firmware exports unsupported CPPC revision: %d\n",
+-			revision);
+-		return false;
+-	}
+-
+-	if (expected_num_ent != num_ent) {
+-		pr_debug("Firmware exports %d entries. Expected: %d for CPPC rev:%d\n",
+-			num_ent, expected_num_ent, revision);
+-		return false;
+-	}
+-
+-	return true;
+-}
+-
+ /*
+  * An example CPC table looks like the following.
+  *
+@@ -729,7 +702,6 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
+ 				cpc_obj->type);
+ 		goto out_free;
+ 	}
+-	cpc_ptr->num_entries = num_ent;
+ 
+ 	/* Second entry should be revision. */
+ 	cpc_obj = &out_obj->package.elements[1];
+@@ -740,10 +712,32 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
+ 				cpc_obj->type);
+ 		goto out_free;
+ 	}
+-	cpc_ptr->version = cpc_rev;
+ 
+-	if (!is_cppc_supported(cpc_rev, num_ent))
++	if (cpc_rev < CPPC_V2_REV) {
++		pr_debug("Unsupported _CPC Revision (%d) for CPU:%d\n", cpc_rev,
++			 pr->id);
+ 		goto out_free;
++	}
++
++	/*
++	 * Disregard _CPC if the number of entries in the return pachage is not
++	 * as expected, but support future revisions being proper supersets of
++	 * the v3 and only causing more entries to be returned by _CPC.
++	 */
++	if ((cpc_rev == CPPC_V2_REV && num_ent != CPPC_V2_NUM_ENT) ||
++	    (cpc_rev == CPPC_V3_REV && num_ent != CPPC_V3_NUM_ENT) ||
++	    (cpc_rev > CPPC_V3_REV && num_ent <= CPPC_V3_NUM_ENT)) {
++		pr_debug("Unexpected number of _CPC return package entries (%d) for CPU:%d\n",
++			 num_ent, pr->id);
++		goto out_free;
++	}
++	if (cpc_rev > CPPC_V3_REV) {
++		num_ent = CPPC_V3_NUM_ENT;
++		cpc_rev = CPPC_V3_REV;
++	}
++
++	cpc_ptr->num_entries = num_ent;
++	cpc_ptr->version = cpc_rev;
+ 
+ 	/* Iterate through remaining entries in _CPC */
+ 	for (i = 2; i < num_ent; i++) {
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 3f2e5ea9ab6b7..4707d1808ca54 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -183,7 +183,6 @@ static struct workqueue_struct *ec_wq;
+ static struct workqueue_struct *ec_query_wq;
+ 
+ static int EC_FLAGS_CORRECT_ECDT; /* Needs ECDT port address correction */
+-static int EC_FLAGS_IGNORE_DSDT_GPE; /* Needs ECDT GPE as correction setting */
+ static int EC_FLAGS_TRUST_DSDT_GPE; /* Needs DSDT GPE as correction setting */
+ static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
+ 
+@@ -1405,24 +1404,16 @@ ec_parse_device(acpi_handle handle, u32 Level, void *context, void **retval)
+ 	if (ec->data_addr == 0 || ec->command_addr == 0)
+ 		return AE_OK;
+ 
+-	if (boot_ec && boot_ec_is_ecdt && EC_FLAGS_IGNORE_DSDT_GPE) {
+-		/*
+-		 * Always inherit the GPE number setting from the ECDT
+-		 * EC.
+-		 */
+-		ec->gpe = boot_ec->gpe;
+-	} else {
+-		/* Get GPE bit assignment (EC events). */
+-		/* TODO: Add support for _GPE returning a package */
+-		status = acpi_evaluate_integer(handle, "_GPE", NULL, &tmp);
+-		if (ACPI_SUCCESS(status))
+-			ec->gpe = tmp;
++	/* Get GPE bit assignment (EC events). */
++	/* TODO: Add support for _GPE returning a package */
++	status = acpi_evaluate_integer(handle, "_GPE", NULL, &tmp);
++	if (ACPI_SUCCESS(status))
++		ec->gpe = tmp;
++	/*
++	 * Errors are non-fatal, allowing for ACPI Reduced Hardware
++	 * platforms which use GpioInt instead of GPE.
++	 */
+ 
+-		/*
+-		 * Errors are non-fatal, allowing for ACPI Reduced Hardware
+-		 * platforms which use GpioInt instead of GPE.
+-		 */
+-	}
+ 	/* Use the global lock for all EC transactions? */
+ 	tmp = 0;
+ 	acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
+@@ -1860,60 +1851,12 @@ static int ec_honor_dsdt_gpe(const struct dmi_system_id *id)
+ 	return 0;
+ }
+ 
+-/*
+- * Some DSDTs contain wrong GPE setting.
+- * Asus FX502VD/VE, GL702VMK, X550VXK, X580VD
+- * https://bugzilla.kernel.org/show_bug.cgi?id=195651
+- */
+-static int ec_honor_ecdt_gpe(const struct dmi_system_id *id)
+-{
+-	pr_debug("Detected system needing ignore DSDT GPE setting.\n");
+-	EC_FLAGS_IGNORE_DSDT_GPE = 1;
+-	return 0;
+-}
+-
+ static const struct dmi_system_id ec_dmi_table[] __initconst = {
+ 	{
+ 	ec_correct_ecdt, "MSI MS-171F", {
+ 	DMI_MATCH(DMI_SYS_VENDOR, "Micro-Star"),
+ 	DMI_MATCH(DMI_PRODUCT_NAME, "MS-171F"),}, NULL},
+ 	{
+-	ec_honor_ecdt_gpe, "ASUS FX502VD", {
+-	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-	DMI_MATCH(DMI_PRODUCT_NAME, "FX502VD"),}, NULL},
+-	{
+-	ec_honor_ecdt_gpe, "ASUS FX502VE", {
+-	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-	DMI_MATCH(DMI_PRODUCT_NAME, "FX502VE"),}, NULL},
+-	{
+-	ec_honor_ecdt_gpe, "ASUS GL702VMK", {
+-	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-	DMI_MATCH(DMI_PRODUCT_NAME, "GL702VMK"),}, NULL},
+-	{
+-	ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X505BA", {
+-	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-	DMI_MATCH(DMI_PRODUCT_NAME, "X505BA"),}, NULL},
+-	{
+-	ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X505BP", {
+-	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-	DMI_MATCH(DMI_PRODUCT_NAME, "X505BP"),}, NULL},
+-	{
+-	ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X542BA", {
+-	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-	DMI_MATCH(DMI_PRODUCT_NAME, "X542BA"),}, NULL},
+-	{
+-	ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X542BP", {
+-	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-	DMI_MATCH(DMI_PRODUCT_NAME, "X542BP"),}, NULL},
+-	{
+-	ec_honor_ecdt_gpe, "ASUS X550VXK", {
+-	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-	DMI_MATCH(DMI_PRODUCT_NAME, "X550VXK"),}, NULL},
+-	{
+-	ec_honor_ecdt_gpe, "ASUS X580VD", {
+-	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-	DMI_MATCH(DMI_PRODUCT_NAME, "X580VD"),}, NULL},
+-	{
+ 	/* https://bugzilla.kernel.org/show_bug.cgi?id=209989 */
+ 	ec_honor_dsdt_gpe, "HP Pavilion Gaming Laptop 15-cx0xxx", {
+ 	DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+@@ -2180,13 +2123,6 @@ static const struct dmi_system_id acpi_ec_no_wakeup[] = {
+ 			DMI_MATCH(DMI_PRODUCT_FAMILY, "Thinkpad X1 Carbon 6th"),
+ 		},
+ 	},
+-	{
+-		.ident = "ThinkPad X1 Carbon 6th",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_FAMILY, "ThinkPad X1 Carbon 6th"),
+-		},
+-	},
+ 	{
+ 		.ident = "ThinkPad X1 Yoga 3rd",
+ 		.matches = {
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 9921b481c7ee1..e5dd87ddc6b34 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -609,7 +609,7 @@ static DEFINE_RAW_SPINLOCK(c3_lock);
+  * @cx: Target state context
+  * @index: index of target state
+  */
+-static int acpi_idle_enter_bm(struct cpuidle_driver *drv,
++static int __cpuidle acpi_idle_enter_bm(struct cpuidle_driver *drv,
+ 			       struct acpi_processor *pr,
+ 			       struct acpi_processor_cx *cx,
+ 			       int index)
+@@ -666,7 +666,7 @@ static int acpi_idle_enter_bm(struct cpuidle_driver *drv,
+ 	return index;
+ }
+ 
+-static int acpi_idle_enter(struct cpuidle_device *dev,
++static int __cpuidle acpi_idle_enter(struct cpuidle_device *dev,
+ 			   struct cpuidle_driver *drv, int index)
+ {
+ 	struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
+@@ -695,7 +695,7 @@ static int acpi_idle_enter(struct cpuidle_device *dev,
+ 	return index;
+ }
+ 
+-static int acpi_idle_enter_s2idle(struct cpuidle_device *dev,
++static int __cpuidle acpi_idle_enter_s2idle(struct cpuidle_device *dev,
+ 				  struct cpuidle_driver *drv, int index)
+ {
+ 	struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index cfda5720de027..097a5b5f46ab0 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -364,6 +364,14 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "80E3"),
+ 		},
+ 	},
++	{
++	.callback = init_nvs_save_s3,
++	.ident = "Lenovo G40-45",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "80E1"),
++		},
++	},
+ 	/*
+ 	 * ThinkPad X1 Tablet(2016) cannot do suspend-to-idle using
+ 	 * the Low Power S0 Idle firmware interface (see
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index f9d9f1ad9215e..b5441741274bb 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -1056,6 +1056,7 @@ static void __driver_attach_async_helper(void *_dev, async_cookie_t cookie)
+ static int __driver_attach(struct device *dev, void *data)
+ {
+ 	struct device_driver *drv = data;
++	bool async = false;
+ 	int ret;
+ 
+ 	/*
+@@ -1093,9 +1094,11 @@ static int __driver_attach(struct device *dev, void *data)
+ 		if (!dev->driver) {
+ 			get_device(dev);
+ 			dev->p->async_driver = drv;
+-			async_schedule_dev(__driver_attach_async_helper, dev);
++			async = true;
+ 		}
+ 		device_unlock(dev);
++		if (async)
++			async_schedule_dev(__driver_attach_async_helper, dev);
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
+index bb3686c3869de..c6ba8f9f3f311 100644
+--- a/drivers/block/null_blk_main.c
++++ b/drivers/block/null_blk_main.c
+@@ -1876,8 +1876,13 @@ static int null_add_dev(struct nullb_device *dev)
+ 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, nullb->q);
+ 
+ 	mutex_lock(&lock);
+-	nullb->index = ida_simple_get(&nullb_indexes, 0, 0, GFP_KERNEL);
+-	dev->index = nullb->index;
++	rv = ida_simple_get(&nullb_indexes, 0, 0, GFP_KERNEL);
++	if (rv < 0) {
++		mutex_unlock(&lock);
++		goto out_cleanup_zone;
++	}
++	nullb->index = rv;
++	dev->index = rv;
+ 	mutex_unlock(&lock);
+ 
+ 	blk_queue_logical_block_size(nullb->q, dev->blocksize);
+@@ -1889,13 +1894,16 @@ static int null_add_dev(struct nullb_device *dev)
+ 
+ 	rv = null_gendisk_register(nullb);
+ 	if (rv)
+-		goto out_cleanup_zone;
++		goto out_ida_free;
+ 
+ 	mutex_lock(&lock);
+ 	list_add_tail(&nullb->list, &nullb_list);
+ 	mutex_unlock(&lock);
+ 
+ 	return 0;
++
++out_ida_free:
++	ida_free(&nullb_indexes, nullb->index);
+ out_cleanup_zone:
+ 	null_free_zoned_dev(dev);
+ out_cleanup_blk_queue:
+diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
+index 6c5e9373e91c3..44782b15b9fdb 100644
+--- a/drivers/block/xen-blkback/xenbus.c
++++ b/drivers/block/xen-blkback/xenbus.c
+@@ -157,6 +157,11 @@ static int xen_blkif_alloc_rings(struct xen_blkif *blkif)
+ 	return 0;
+ }
+ 
++/* Enable the persistent grants feature. */
++static bool feature_persistent = true;
++module_param(feature_persistent, bool, 0644);
++MODULE_PARM_DESC(feature_persistent, "Enables the persistent grants feature");
++
+ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
+ {
+ 	struct xen_blkif *blkif;
+@@ -472,12 +477,6 @@ static void xen_vbd_free(struct xen_vbd *vbd)
+ 	vbd->bdev = NULL;
+ }
+ 
+-/* Enable the persistent grants feature. */
+-static bool feature_persistent = true;
+-module_param(feature_persistent, bool, 0644);
+-MODULE_PARM_DESC(feature_persistent,
+-		"Enables the persistent grants feature");
+-
+ static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
+ 			  unsigned major, unsigned minor, int readonly,
+ 			  int cdrom)
+@@ -523,8 +522,6 @@ static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
+ 	if (q && blk_queue_secure_erase(q))
+ 		vbd->discard_secure = true;
+ 
+-	vbd->feature_gnt_persistent = feature_persistent;
+-
+ 	pr_debug("Successful creation of handle=%04x (dom=%u)\n",
+ 		handle, blkif->domid);
+ 	return 0;
+@@ -1091,10 +1088,9 @@ static int connect_ring(struct backend_info *be)
+ 		xenbus_dev_fatal(dev, err, "unknown fe protocol %s", protocol);
+ 		return -ENOSYS;
+ 	}
+-	if (blkif->vbd.feature_gnt_persistent)
+-		blkif->vbd.feature_gnt_persistent =
+-			xenbus_read_unsigned(dev->otherend,
+-					"feature-persistent", 0);
++
++	blkif->vbd.feature_gnt_persistent = feature_persistent &&
++		xenbus_read_unsigned(dev->otherend, "feature-persistent", 0);
+ 
+ 	blkif->vbd.overflow_max_grants = 0;
+ 
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index abbb68b6d9bd5..03e079a6f0721 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -2088,8 +2088,6 @@ static int blkfront_probe(struct xenbus_device *dev,
+ 	info->vdevice = vdevice;
+ 	info->connected = BLKIF_STATE_DISCONNECTED;
+ 
+-	info->feature_persistent = feature_persistent;
+-
+ 	/* Front end dir is a number, which is used as the id. */
+ 	info->handle = simple_strtoul(strrchr(dev->nodename, '/')+1, NULL, 0);
+ 	dev_set_drvdata(&dev->dev, info);
+@@ -2393,7 +2391,7 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
+ 	if (xenbus_read_unsigned(info->xbdev->otherend, "feature-discard", 0))
+ 		blkfront_setup_discard(info);
+ 
+-	if (info->feature_persistent)
++	if (feature_persistent)
+ 		info->feature_persistent =
+ 			!!xenbus_read_unsigned(info->xbdev->otherend,
+ 					       "feature-persistent", 0);
+diff --git a/drivers/bluetooth/hci_intel.c b/drivers/bluetooth/hci_intel.c
+index b20a40fab83e5..d5d2feef6c521 100644
+--- a/drivers/bluetooth/hci_intel.c
++++ b/drivers/bluetooth/hci_intel.c
+@@ -1214,7 +1214,11 @@ static struct platform_driver intel_driver = {
+ 
+ int __init intel_init(void)
+ {
+-	platform_driver_register(&intel_driver);
++	int err;
++
++	err = platform_driver_register(&intel_driver);
++	if (err)
++		return err;
+ 
+ 	return hci_uart_register_proto(&intel_proto);
+ }
+diff --git a/drivers/bus/hisi_lpc.c b/drivers/bus/hisi_lpc.c
+index 378f5d62a9912..e7eaa8784fee0 100644
+--- a/drivers/bus/hisi_lpc.c
++++ b/drivers/bus/hisi_lpc.c
+@@ -503,13 +503,13 @@ static int hisi_lpc_acpi_probe(struct device *hostdev)
+ {
+ 	struct acpi_device *adev = ACPI_COMPANION(hostdev);
+ 	struct acpi_device *child;
++	struct platform_device *pdev;
+ 	int ret;
+ 
+ 	/* Only consider the children of the host */
+ 	list_for_each_entry(child, &adev->children, node) {
+ 		const char *hid = acpi_device_hid(child);
+ 		const struct hisi_lpc_acpi_cell *cell;
+-		struct platform_device *pdev;
+ 		const struct resource *res;
+ 		bool found = false;
+ 		int num_res;
+@@ -571,22 +571,24 @@ static int hisi_lpc_acpi_probe(struct device *hostdev)
+ 
+ 		ret = platform_device_add_resources(pdev, res, num_res);
+ 		if (ret)
+-			goto fail;
++			goto fail_put_device;
+ 
+ 		ret = platform_device_add_data(pdev, cell->pdata,
+ 					       cell->pdata_size);
+ 		if (ret)
+-			goto fail;
++			goto fail_put_device;
+ 
+ 		ret = platform_device_add(pdev);
+ 		if (ret)
+-			goto fail;
++			goto fail_put_device;
+ 
+ 		acpi_device_set_enumerated(child);
+ 	}
+ 
+ 	return 0;
+ 
++fail_put_device:
++	platform_device_put(pdev);
+ fail:
+ 	hisi_lpc_acpi_remove(hostdev);
+ 	return ret;
+diff --git a/drivers/clk/mediatek/reset.c b/drivers/clk/mediatek/reset.c
+index cb939c071b0cc..89916acf0bc32 100644
+--- a/drivers/clk/mediatek/reset.c
++++ b/drivers/clk/mediatek/reset.c
+@@ -25,7 +25,7 @@ static int mtk_reset_assert_set_clr(struct reset_controller_dev *rcdev,
+ 	struct mtk_reset *data = container_of(rcdev, struct mtk_reset, rcdev);
+ 	unsigned int reg = data->regofs + ((id / 32) << 4);
+ 
+-	return regmap_write(data->regmap, reg, 1);
++	return regmap_write(data->regmap, reg, BIT(id % 32));
+ }
+ 
+ static int mtk_reset_deassert_set_clr(struct reset_controller_dev *rcdev,
+@@ -34,7 +34,7 @@ static int mtk_reset_deassert_set_clr(struct reset_controller_dev *rcdev,
+ 	struct mtk_reset *data = container_of(rcdev, struct mtk_reset, rcdev);
+ 	unsigned int reg = data->regofs + ((id / 32) << 4) + 0x4;
+ 
+-	return regmap_write(data->regmap, reg, 1);
++	return regmap_write(data->regmap, reg, BIT(id % 32));
+ }
+ 
+ static int mtk_reset_assert(struct reset_controller_dev *rcdev,
+diff --git a/drivers/clk/qcom/camcc-sdm845.c b/drivers/clk/qcom/camcc-sdm845.c
+index 1b2cefef7431d..a8a2cfa83290a 100644
+--- a/drivers/clk/qcom/camcc-sdm845.c
++++ b/drivers/clk/qcom/camcc-sdm845.c
+@@ -1521,6 +1521,8 @@ static struct clk_branch cam_cc_sys_tmr_clk = {
+ 	},
+ };
+ 
++static struct gdsc titan_top_gdsc;
++
+ static struct gdsc bps_gdsc = {
+ 	.gdscr = 0x6004,
+ 	.pd = {
+@@ -1554,6 +1556,7 @@ static struct gdsc ife_0_gdsc = {
+ 		.name = "ife_0_gdsc",
+ 	},
+ 	.flags = POLL_CFG_GDSCR,
++	.parent = &titan_top_gdsc.pd,
+ 	.pwrsts = PWRSTS_OFF_ON,
+ };
+ 
+@@ -1563,6 +1566,7 @@ static struct gdsc ife_1_gdsc = {
+ 		.name = "ife_1_gdsc",
+ 	},
+ 	.flags = POLL_CFG_GDSCR,
++	.parent = &titan_top_gdsc.pd,
+ 	.pwrsts = PWRSTS_OFF_ON,
+ };
+ 
+diff --git a/drivers/clk/qcom/clk-krait.c b/drivers/clk/qcom/clk-krait.c
+index 59f1af415b580..90046428693c2 100644
+--- a/drivers/clk/qcom/clk-krait.c
++++ b/drivers/clk/qcom/clk-krait.c
+@@ -32,11 +32,16 @@ static void __krait_mux_set_sel(struct krait_mux_clk *mux, int sel)
+ 		regval |= (sel & mux->mask) << (mux->shift + LPL_SHIFT);
+ 	}
+ 	krait_set_l2_indirect_reg(mux->offset, regval);
+-	spin_unlock_irqrestore(&krait_clock_reg_lock, flags);
+ 
+ 	/* Wait for switch to complete. */
+ 	mb();
+ 	udelay(1);
++
++	/*
++	 * Unlock now to make sure the mux register is not
++	 * modified while switching to the new parent.
++	 */
++	spin_unlock_irqrestore(&krait_clock_reg_lock, flags);
+ }
+ 
+ static int krait_mux_set_parent(struct clk_hw *hw, u8 index)
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index 541016db3c4bb..2c2ecfc5e61f5 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -1788,8 +1788,10 @@ static struct clk_regmap_div nss_port4_tx_div_clk_src = {
+ static const struct freq_tbl ftbl_nss_port5_rx_clk_src[] = {
+ 	F(19200000, P_XO, 1, 0, 0),
+ 	F(25000000, P_UNIPHY1_RX, 12.5, 0, 0),
++	F(25000000, P_UNIPHY0_RX, 5, 0, 0),
+ 	F(78125000, P_UNIPHY1_RX, 4, 0, 0),
+ 	F(125000000, P_UNIPHY1_RX, 2.5, 0, 0),
++	F(125000000, P_UNIPHY0_RX, 1, 0, 0),
+ 	F(156250000, P_UNIPHY1_RX, 2, 0, 0),
+ 	F(312500000, P_UNIPHY1_RX, 1, 0, 0),
+ 	{ }
+@@ -1828,8 +1830,10 @@ static struct clk_regmap_div nss_port5_rx_div_clk_src = {
+ static const struct freq_tbl ftbl_nss_port5_tx_clk_src[] = {
+ 	F(19200000, P_XO, 1, 0, 0),
+ 	F(25000000, P_UNIPHY1_TX, 12.5, 0, 0),
++	F(25000000, P_UNIPHY0_TX, 5, 0, 0),
+ 	F(78125000, P_UNIPHY1_TX, 4, 0, 0),
+ 	F(125000000, P_UNIPHY1_TX, 2.5, 0, 0),
++	F(125000000, P_UNIPHY0_TX, 1, 0, 0),
+ 	F(156250000, P_UNIPHY1_TX, 2, 0, 0),
+ 	F(312500000, P_UNIPHY1_TX, 1, 0, 0),
+ 	{ }
+@@ -1867,8 +1871,10 @@ static struct clk_regmap_div nss_port5_tx_div_clk_src = {
+ 
+ static const struct freq_tbl ftbl_nss_port6_rx_clk_src[] = {
+ 	F(19200000, P_XO, 1, 0, 0),
++	F(25000000, P_UNIPHY2_RX, 5, 0, 0),
+ 	F(25000000, P_UNIPHY2_RX, 12.5, 0, 0),
+ 	F(78125000, P_UNIPHY2_RX, 4, 0, 0),
++	F(125000000, P_UNIPHY2_RX, 1, 0, 0),
+ 	F(125000000, P_UNIPHY2_RX, 2.5, 0, 0),
+ 	F(156250000, P_UNIPHY2_RX, 2, 0, 0),
+ 	F(312500000, P_UNIPHY2_RX, 1, 0, 0),
+@@ -1907,8 +1913,10 @@ static struct clk_regmap_div nss_port6_rx_div_clk_src = {
+ 
+ static const struct freq_tbl ftbl_nss_port6_tx_clk_src[] = {
+ 	F(19200000, P_XO, 1, 0, 0),
++	F(25000000, P_UNIPHY2_TX, 5, 0, 0),
+ 	F(25000000, P_UNIPHY2_TX, 12.5, 0, 0),
+ 	F(78125000, P_UNIPHY2_TX, 4, 0, 0),
++	F(125000000, P_UNIPHY2_TX, 1, 0, 0),
+ 	F(125000000, P_UNIPHY2_TX, 2.5, 0, 0),
+ 	F(156250000, P_UNIPHY2_TX, 2, 0, 0),
+ 	F(312500000, P_UNIPHY2_TX, 1, 0, 0),
+@@ -3346,6 +3354,7 @@ static struct clk_branch gcc_nssnoc_ubi1_ahb_clk = {
+ 
+ static struct clk_branch gcc_ubi0_ahb_clk = {
+ 	.halt_reg = 0x6820c,
++	.halt_check = BRANCH_HALT_DELAY,
+ 	.clkr = {
+ 		.enable_reg = 0x6820c,
+ 		.enable_mask = BIT(0),
+@@ -3363,6 +3372,7 @@ static struct clk_branch gcc_ubi0_ahb_clk = {
+ 
+ static struct clk_branch gcc_ubi0_axi_clk = {
+ 	.halt_reg = 0x68200,
++	.halt_check = BRANCH_HALT_DELAY,
+ 	.clkr = {
+ 		.enable_reg = 0x68200,
+ 		.enable_mask = BIT(0),
+@@ -3380,6 +3390,7 @@ static struct clk_branch gcc_ubi0_axi_clk = {
+ 
+ static struct clk_branch gcc_ubi0_nc_axi_clk = {
+ 	.halt_reg = 0x68204,
++	.halt_check = BRANCH_HALT_DELAY,
+ 	.clkr = {
+ 		.enable_reg = 0x68204,
+ 		.enable_mask = BIT(0),
+@@ -3397,6 +3408,7 @@ static struct clk_branch gcc_ubi0_nc_axi_clk = {
+ 
+ static struct clk_branch gcc_ubi0_core_clk = {
+ 	.halt_reg = 0x68210,
++	.halt_check = BRANCH_HALT_DELAY,
+ 	.clkr = {
+ 		.enable_reg = 0x68210,
+ 		.enable_mask = BIT(0),
+@@ -3414,6 +3426,7 @@ static struct clk_branch gcc_ubi0_core_clk = {
+ 
+ static struct clk_branch gcc_ubi0_mpt_clk = {
+ 	.halt_reg = 0x68208,
++	.halt_check = BRANCH_HALT_DELAY,
+ 	.clkr = {
+ 		.enable_reg = 0x68208,
+ 		.enable_mask = BIT(0),
+@@ -3431,6 +3444,7 @@ static struct clk_branch gcc_ubi0_mpt_clk = {
+ 
+ static struct clk_branch gcc_ubi1_ahb_clk = {
+ 	.halt_reg = 0x6822c,
++	.halt_check = BRANCH_HALT_DELAY,
+ 	.clkr = {
+ 		.enable_reg = 0x6822c,
+ 		.enable_mask = BIT(0),
+@@ -3448,6 +3462,7 @@ static struct clk_branch gcc_ubi1_ahb_clk = {
+ 
+ static struct clk_branch gcc_ubi1_axi_clk = {
+ 	.halt_reg = 0x68220,
++	.halt_check = BRANCH_HALT_DELAY,
+ 	.clkr = {
+ 		.enable_reg = 0x68220,
+ 		.enable_mask = BIT(0),
+@@ -3465,6 +3480,7 @@ static struct clk_branch gcc_ubi1_axi_clk = {
+ 
+ static struct clk_branch gcc_ubi1_nc_axi_clk = {
+ 	.halt_reg = 0x68224,
++	.halt_check = BRANCH_HALT_DELAY,
+ 	.clkr = {
+ 		.enable_reg = 0x68224,
+ 		.enable_mask = BIT(0),
+@@ -3482,6 +3498,7 @@ static struct clk_branch gcc_ubi1_nc_axi_clk = {
+ 
+ static struct clk_branch gcc_ubi1_core_clk = {
+ 	.halt_reg = 0x68230,
++	.halt_check = BRANCH_HALT_DELAY,
+ 	.clkr = {
+ 		.enable_reg = 0x68230,
+ 		.enable_mask = BIT(0),
+@@ -3499,6 +3516,7 @@ static struct clk_branch gcc_ubi1_core_clk = {
+ 
+ static struct clk_branch gcc_ubi1_mpt_clk = {
+ 	.halt_reg = 0x68228,
++	.halt_check = BRANCH_HALT_DELAY,
+ 	.clkr = {
+ 		.enable_reg = 0x68228,
+ 		.enable_mask = BIT(0),
+@@ -4371,6 +4389,33 @@ static struct clk_branch gcc_pcie0_axi_s_bridge_clk = {
+ 	},
+ };
+ 
++static const struct alpha_pll_config ubi32_pll_config = {
++	.l = 0x4e,
++	.config_ctl_val = 0x200d4aa8,
++	.config_ctl_hi_val = 0x3c2,
++	.main_output_mask = BIT(0),
++	.aux_output_mask = BIT(1),
++	.pre_div_val = 0x0,
++	.pre_div_mask = BIT(12),
++	.post_div_val = 0x0,
++	.post_div_mask = GENMASK(9, 8),
++};
++
++static const struct alpha_pll_config nss_crypto_pll_config = {
++	.l = 0x3e,
++	.alpha = 0x0,
++	.alpha_hi = 0x80,
++	.config_ctl_val = 0x4001055b,
++	.main_output_mask = BIT(0),
++	.pre_div_val = 0x0,
++	.pre_div_mask = GENMASK(14, 12),
++	.post_div_val = 0x1 << 8,
++	.post_div_mask = GENMASK(11, 8),
++	.vco_mask = GENMASK(21, 20),
++	.vco_val = 0x0,
++	.alpha_en_mask = BIT(24),
++};
++
+ static struct clk_hw *gcc_ipq8074_hws[] = {
+ 	&gpll0_out_main_div2.hw,
+ 	&gpll6_out_main_div2.hw,
+@@ -4772,7 +4817,20 @@ static const struct qcom_cc_desc gcc_ipq8074_desc = {
+ 
+ static int gcc_ipq8074_probe(struct platform_device *pdev)
+ {
+-	return qcom_cc_probe(pdev, &gcc_ipq8074_desc);
++	struct regmap *regmap;
++
++	regmap = qcom_cc_map(pdev, &gcc_ipq8074_desc);
++	if (IS_ERR(regmap))
++		return PTR_ERR(regmap);
++
++	/* SW Workaround for UBI32 Huayra PLL */
++	regmap_update_bits(regmap, 0x2501c, BIT(26), BIT(26));
++
++	clk_alpha_pll_configure(&ubi32_pll_main, regmap, &ubi32_pll_config);
++	clk_alpha_pll_configure(&nss_crypto_pll_main, regmap,
++				&nss_crypto_pll_config);
++
++	return qcom_cc_really_probe(pdev, &gcc_ipq8074_desc, regmap);
+ }
+ 
+ static struct platform_driver gcc_ipq8074_driver = {
+diff --git a/drivers/clk/renesas/r9a06g032-clocks.c b/drivers/clk/renesas/r9a06g032-clocks.c
+index 892e91b92f2c8..245150a5484a2 100644
+--- a/drivers/clk/renesas/r9a06g032-clocks.c
++++ b/drivers/clk/renesas/r9a06g032-clocks.c
+@@ -286,8 +286,8 @@ static const struct r9a06g032_clkdesc r9a06g032_clocks[] = {
+ 		.name = "uart_group_012",
+ 		.type = K_BITSEL,
+ 		.source = 1 + R9A06G032_DIV_UART,
+-		/* R9A06G032_SYSCTRL_REG_PWRCTRL_PG1_PR2 */
+-		.dual.sel = ((0xec / 4) << 5) | 24,
++		/* R9A06G032_SYSCTRL_REG_PWRCTRL_PG0_0 */
++		.dual.sel = ((0x34 / 4) << 5) | 30,
+ 		.dual.group = 0,
+ 	},
+ 	{
+@@ -295,8 +295,8 @@ static const struct r9a06g032_clkdesc r9a06g032_clocks[] = {
+ 		.name = "uart_group_34567",
+ 		.type = K_BITSEL,
+ 		.source = 1 + R9A06G032_DIV_P2_PG,
+-		/* R9A06G032_SYSCTRL_REG_PWRCTRL_PG0_0 */
+-		.dual.sel = ((0x34 / 4) << 5) | 30,
++		/* R9A06G032_SYSCTRL_REG_PWRCTRL_PG1_PR2 */
++		.dual.sel = ((0xec / 4) << 5) | 24,
+ 		.dual.group = 1,
+ 	},
+ 	D_UGATE(CLK_UART0, "clk_uart0", UART_GROUP_012, 0, 0, 0x1b2, 0x1b3, 0x1b4, 0x1b5),
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index 7b3be3dc2210e..d0954993e2e37 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -151,6 +151,7 @@ dma_iv_error:
+ 	while (i >= 0) {
+ 		dma_unmap_single(ss->dev, rctx->p_iv[i], ivsize, DMA_TO_DEVICE);
+ 		memzero_explicit(sf->iv[i], ivsize);
++		i--;
+ 	}
+ 	return err;
+ }
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+index 6575305786436..47b5828e35c34 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+@@ -476,14 +476,32 @@ static int allocate_flows(struct sun8i_ss_dev *ss)
+ 
+ 		ss->flows[i].biv = devm_kmalloc(ss->dev, AES_BLOCK_SIZE,
+ 						GFP_KERNEL | GFP_DMA);
+-		if (!ss->flows[i].biv)
++		if (!ss->flows[i].biv) {
++			err = -ENOMEM;
+ 			goto error_engine;
++		}
+ 
+ 		for (j = 0; j < MAX_SG; j++) {
+ 			ss->flows[i].iv[j] = devm_kmalloc(ss->dev, AES_BLOCK_SIZE,
+ 							  GFP_KERNEL | GFP_DMA);
+-			if (!ss->flows[i].iv[j])
++			if (!ss->flows[i].iv[j]) {
++				err = -ENOMEM;
+ 				goto error_engine;
++			}
++		}
++
++		/* the padding could be up to two block. */
++		ss->flows[i].pad = devm_kmalloc(ss->dev, SHA256_BLOCK_SIZE * 2,
++						GFP_KERNEL | GFP_DMA);
++		if (!ss->flows[i].pad) {
++			err = -ENOMEM;
++			goto error_engine;
++		}
++		ss->flows[i].result = devm_kmalloc(ss->dev, SHA256_DIGEST_SIZE,
++						   GFP_KERNEL | GFP_DMA);
++		if (!ss->flows[i].result) {
++			err = -ENOMEM;
++			goto error_engine;
+ 		}
+ 
+ 		ss->flows[i].engine = crypto_engine_alloc_init(ss->dev, true);
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+index 55d652cd468be..98040794acdc9 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+@@ -341,18 +341,11 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
+ 	if (digestsize == SHA224_DIGEST_SIZE)
+ 		digestsize = SHA256_DIGEST_SIZE;
+ 
+-	/* the padding could be up to two block. */
+-	pad = kzalloc(algt->alg.hash.halg.base.cra_blocksize * 2, GFP_KERNEL | GFP_DMA);
+-	if (!pad)
+-		return -ENOMEM;
++	result = ss->flows[rctx->flow].result;
++	pad = ss->flows[rctx->flow].pad;
++	memset(pad, 0, algt->alg.hash.halg.base.cra_blocksize * 2);
+ 	bf = (__le32 *)pad;
+ 
+-	result = kzalloc(digestsize, GFP_KERNEL | GFP_DMA);
+-	if (!result) {
+-		kfree(pad);
+-		return -ENOMEM;
+-	}
+-
+ 	for (i = 0; i < MAX_SG; i++) {
+ 		rctx->t_dst[i].addr = 0;
+ 		rctx->t_dst[i].len = 0;
+@@ -447,8 +440,6 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
+ 
+ 	memcpy(areq->result, result, algt->alg.hash.halg.digestsize);
+ theend:
+-	kfree(pad);
+-	kfree(result);
+ 	local_bh_disable();
+ 	crypto_finalize_hash_request(engine, breq, err);
+ 	local_bh_enable();
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
+index 49147195ecf6c..a97a790ae451e 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
+@@ -122,6 +122,8 @@ struct sginfo {
+  * @stat_req:	number of request done by this flow
+  * @iv:		list of IV to use for each step
+  * @biv:	buffer which contain the backuped IV
++ * @pad:	padding buffer for hash operations
++ * @result:	buffer for storing the result of hash operations
+  */
+ struct sun8i_ss_flow {
+ 	struct crypto_engine *engine;
+@@ -129,6 +131,8 @@ struct sun8i_ss_flow {
+ 	int status;
+ 	u8 *iv[MAX_SG];
+ 	u8 *biv;
++	void *pad;
++	void *result;
+ #ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
+ 	unsigned long stat_req;
+ #endif
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 57b57d4db500c..ed39a22e1b2b9 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -278,7 +278,7 @@ static int __sev_platform_shutdown_locked(int *error)
+ 	struct sev_device *sev = psp_master->sev_data;
+ 	int ret;
+ 
+-	if (sev->state == SEV_STATE_UNINIT)
++	if (!sev || sev->state == SEV_STATE_UNINIT)
+ 		return 0;
+ 
+ 	ret = __sev_do_cmd_locked(SEV_CMD_SHUTDOWN, NULL, error);
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+index a87f9904087aa..90c13ebe7e83a 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+@@ -210,7 +210,7 @@ static int hpre_prepare_dma_buf(struct hpre_asym_request *hpre_req,
+ 	if (unlikely(shift < 0))
+ 		return -EINVAL;
+ 
+-	ptr = dma_alloc_coherent(dev, ctx->key_sz, tmp, GFP_KERNEL);
++	ptr = dma_alloc_coherent(dev, ctx->key_sz, tmp, GFP_ATOMIC);
+ 	if (unlikely(!ptr))
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/crypto/hisilicon/sec/sec_algs.c b/drivers/crypto/hisilicon/sec/sec_algs.c
+index 8ca945ac297ef..2066f8d40c5aa 100644
+--- a/drivers/crypto/hisilicon/sec/sec_algs.c
++++ b/drivers/crypto/hisilicon/sec/sec_algs.c
+@@ -449,7 +449,7 @@ static void sec_skcipher_alg_callback(struct sec_bd_info *sec_resp,
+ 		 */
+ 	}
+ 
+-	mutex_lock(&ctx->queue->queuelock);
++	spin_lock_bh(&ctx->queue->queuelock);
+ 	/* Put the IV in place for chained cases */
+ 	switch (ctx->cipher_alg) {
+ 	case SEC_C_AES_CBC_128:
+@@ -509,7 +509,7 @@ static void sec_skcipher_alg_callback(struct sec_bd_info *sec_resp,
+ 			list_del(&backlog_req->backlog_head);
+ 		}
+ 	}
+-	mutex_unlock(&ctx->queue->queuelock);
++	spin_unlock_bh(&ctx->queue->queuelock);
+ 
+ 	mutex_lock(&sec_req->lock);
+ 	list_del(&sec_req_el->head);
+@@ -798,7 +798,7 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
+ 	 */
+ 
+ 	/* Grab a big lock for a long time to avoid concurrency issues */
+-	mutex_lock(&queue->queuelock);
++	spin_lock_bh(&queue->queuelock);
+ 
+ 	/*
+ 	 * Can go on to queue if we have space in either:
+@@ -814,15 +814,15 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
+ 		ret = -EBUSY;
+ 		if ((skreq->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) {
+ 			list_add_tail(&sec_req->backlog_head, &ctx->backlog);
+-			mutex_unlock(&queue->queuelock);
++			spin_unlock_bh(&queue->queuelock);
+ 			goto out;
+ 		}
+ 
+-		mutex_unlock(&queue->queuelock);
++		spin_unlock_bh(&queue->queuelock);
+ 		goto err_free_elements;
+ 	}
+ 	ret = sec_send_request(sec_req, queue);
+-	mutex_unlock(&queue->queuelock);
++	spin_unlock_bh(&queue->queuelock);
+ 	if (ret)
+ 		goto err_free_elements;
+ 
+@@ -881,7 +881,7 @@ static int sec_alg_skcipher_init(struct crypto_skcipher *tfm)
+ 	if (IS_ERR(ctx->queue))
+ 		return PTR_ERR(ctx->queue);
+ 
+-	mutex_init(&ctx->queue->queuelock);
++	spin_lock_init(&ctx->queue->queuelock);
+ 	ctx->queue->havesoftqueue = false;
+ 
+ 	return 0;
+diff --git a/drivers/crypto/hisilicon/sec/sec_drv.h b/drivers/crypto/hisilicon/sec/sec_drv.h
+index 4d9063a8b10b1..0bf4d7c3856ca 100644
+--- a/drivers/crypto/hisilicon/sec/sec_drv.h
++++ b/drivers/crypto/hisilicon/sec/sec_drv.h
+@@ -347,7 +347,7 @@ struct sec_queue {
+ 	DECLARE_BITMAP(unprocessed, SEC_QUEUE_LEN);
+ 	DECLARE_KFIFO_PTR(softqueue, typeof(struct sec_request_el *));
+ 	bool havesoftqueue;
+-	struct mutex queuelock;
++	spinlock_t queuelock;
+ 	void *shadow[SEC_QUEUE_LEN];
+ };
+ 
+diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h
+index 037762b531e27..249735b7ceca3 100644
+--- a/drivers/crypto/hisilicon/sec2/sec.h
++++ b/drivers/crypto/hisilicon/sec2/sec.h
+@@ -4,8 +4,6 @@
+ #ifndef __HISI_SEC_V2_H
+ #define __HISI_SEC_V2_H
+ 
+-#include <linux/list.h>
+-
+ #include "../qm.h"
+ #include "sec_crypto.h"
+ 
+@@ -50,7 +48,7 @@ struct sec_req {
+ 
+ 	int err_type;
+ 	int req_id;
+-	int flag;
++	u32 flag;
+ 
+ 	/* Status of the SEC request */
+ 	bool fake_busy;
+@@ -105,7 +103,7 @@ struct sec_qp_ctx {
+ 	struct idr req_idr;
+ 	struct sec_alg_res res[QM_Q_DEPTH];
+ 	struct sec_ctx *ctx;
+-	struct mutex req_lock;
++	spinlock_t req_lock;
+ 	struct list_head backlog;
+ 	struct hisi_acc_sgl_pool *c_in_pool;
+ 	struct hisi_acc_sgl_pool *c_out_pool;
+@@ -140,6 +138,7 @@ struct sec_ctx {
+ 	bool pbuf_supported;
+ 	struct sec_cipher_ctx c_ctx;
+ 	struct sec_auth_ctx a_ctx;
++	struct device *dev;
+ };
+ 
+ enum sec_endian {
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index 630dcb59ad569..2dbec638cca83 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -42,7 +42,6 @@
+ 
+ #define SEC_TOTAL_IV_SZ		(SEC_IV_SIZE * QM_Q_DEPTH)
+ #define SEC_SGL_SGE_NR		128
+-#define SEC_CTX_DEV(ctx)	(&(ctx)->sec->qm.pdev->dev)
+ #define SEC_CIPHER_AUTH		0xfe
+ #define SEC_AUTH_CIPHER		0x1
+ #define SEC_MAX_MAC_LEN		64
+@@ -89,13 +88,13 @@ static int sec_alloc_req_id(struct sec_req *req, struct sec_qp_ctx *qp_ctx)
+ {
+ 	int req_id;
+ 
+-	mutex_lock(&qp_ctx->req_lock);
++	spin_lock_bh(&qp_ctx->req_lock);
+ 
+ 	req_id = idr_alloc_cyclic(&qp_ctx->req_idr, NULL,
+ 				  0, QM_Q_DEPTH, GFP_ATOMIC);
+-	mutex_unlock(&qp_ctx->req_lock);
++	spin_unlock_bh(&qp_ctx->req_lock);
+ 	if (unlikely(req_id < 0)) {
+-		dev_err(SEC_CTX_DEV(req->ctx), "alloc req id fail!\n");
++		dev_err(req->ctx->dev, "alloc req id fail!\n");
+ 		return req_id;
+ 	}
+ 
+@@ -110,16 +109,16 @@ static void sec_free_req_id(struct sec_req *req)
+ 	int req_id = req->req_id;
+ 
+ 	if (unlikely(req_id < 0 || req_id >= QM_Q_DEPTH)) {
+-		dev_err(SEC_CTX_DEV(req->ctx), "free request id invalid!\n");
++		dev_err(req->ctx->dev, "free request id invalid!\n");
+ 		return;
+ 	}
+ 
+ 	qp_ctx->req_list[req_id] = NULL;
+ 	req->qp_ctx = NULL;
+ 
+-	mutex_lock(&qp_ctx->req_lock);
++	spin_lock_bh(&qp_ctx->req_lock);
+ 	idr_remove(&qp_ctx->req_idr, req_id);
+-	mutex_unlock(&qp_ctx->req_lock);
++	spin_unlock_bh(&qp_ctx->req_lock);
+ }
+ 
+ static int sec_aead_verify(struct sec_req *req)
+@@ -136,7 +135,7 @@ static int sec_aead_verify(struct sec_req *req)
+ 				aead_req->cryptlen + aead_req->assoclen -
+ 				authsize);
+ 	if (unlikely(sz != authsize || memcmp(mac_out, mac, sz))) {
+-		dev_err(SEC_CTX_DEV(req->ctx), "aead verify failure!\n");
++		dev_err(req->ctx->dev, "aead verify failure!\n");
+ 		return -EBADMSG;
+ 	}
+ 
+@@ -175,7 +174,7 @@ static void sec_req_cb(struct hisi_qp *qp, void *resp)
+ 	if (unlikely(req->err_type || done != SEC_SQE_DONE ||
+ 	    (ctx->alg_type == SEC_SKCIPHER && flag != SEC_SQE_CFLAG) ||
+ 	    (ctx->alg_type == SEC_AEAD && flag != SEC_SQE_AEAD_FLAG))) {
+-		dev_err(SEC_CTX_DEV(ctx),
++		dev_err_ratelimited(ctx->dev,
+ 			"err_type[%d],done[%d],flag[%d]\n",
+ 			req->err_type, done, flag);
+ 		err = -EIO;
+@@ -202,7 +201,7 @@ static int sec_bd_send(struct sec_ctx *ctx, struct sec_req *req)
+ 	    !(req->flag & CRYPTO_TFM_REQ_MAY_BACKLOG))
+ 		return -EBUSY;
+ 
+-	mutex_lock(&qp_ctx->req_lock);
++	spin_lock_bh(&qp_ctx->req_lock);
+ 	ret = hisi_qp_send(qp_ctx->qp, &req->sec_sqe);
+ 
+ 	if (ctx->fake_req_limit <=
+@@ -210,10 +209,10 @@ static int sec_bd_send(struct sec_ctx *ctx, struct sec_req *req)
+ 		list_add_tail(&req->backlog_head, &qp_ctx->backlog);
+ 		atomic64_inc(&ctx->sec->debug.dfx.send_cnt);
+ 		atomic64_inc(&ctx->sec->debug.dfx.send_busy_cnt);
+-		mutex_unlock(&qp_ctx->req_lock);
++		spin_unlock_bh(&qp_ctx->req_lock);
+ 		return -EBUSY;
+ 	}
+-	mutex_unlock(&qp_ctx->req_lock);
++	spin_unlock_bh(&qp_ctx->req_lock);
+ 
+ 	if (unlikely(ret == -EBUSY))
+ 		return -ENOBUFS;
+@@ -323,8 +322,8 @@ static int sec_alloc_pbuf_resource(struct device *dev, struct sec_alg_res *res)
+ static int sec_alg_resource_alloc(struct sec_ctx *ctx,
+ 				  struct sec_qp_ctx *qp_ctx)
+ {
+-	struct device *dev = SEC_CTX_DEV(ctx);
+ 	struct sec_alg_res *res = qp_ctx->res;
++	struct device *dev = ctx->dev;
+ 	int ret;
+ 
+ 	ret = sec_alloc_civ_resource(dev, res);
+@@ -357,7 +356,7 @@ alloc_fail:
+ static void sec_alg_resource_free(struct sec_ctx *ctx,
+ 				  struct sec_qp_ctx *qp_ctx)
+ {
+-	struct device *dev = SEC_CTX_DEV(ctx);
++	struct device *dev = ctx->dev;
+ 
+ 	sec_free_civ_resource(dev, qp_ctx->res);
+ 
+@@ -370,7 +369,7 @@ static void sec_alg_resource_free(struct sec_ctx *ctx,
+ static int sec_create_qp_ctx(struct hisi_qm *qm, struct sec_ctx *ctx,
+ 			     int qp_ctx_id, int alg_type)
+ {
+-	struct device *dev = SEC_CTX_DEV(ctx);
++	struct device *dev = ctx->dev;
+ 	struct sec_qp_ctx *qp_ctx;
+ 	struct hisi_qp *qp;
+ 	int ret = -ENOMEM;
+@@ -383,7 +382,7 @@ static int sec_create_qp_ctx(struct hisi_qm *qm, struct sec_ctx *ctx,
+ 	qp_ctx->qp = qp;
+ 	qp_ctx->ctx = ctx;
+ 
+-	mutex_init(&qp_ctx->req_lock);
++	spin_lock_init(&qp_ctx->req_lock);
+ 	idr_init(&qp_ctx->req_idr);
+ 	INIT_LIST_HEAD(&qp_ctx->backlog);
+ 
+@@ -426,7 +425,7 @@ err_destroy_idr:
+ static void sec_release_qp_ctx(struct sec_ctx *ctx,
+ 			       struct sec_qp_ctx *qp_ctx)
+ {
+-	struct device *dev = SEC_CTX_DEV(ctx);
++	struct device *dev = ctx->dev;
+ 
+ 	hisi_qm_stop_qp(qp_ctx->qp);
+ 	sec_alg_resource_free(ctx, qp_ctx);
+@@ -450,6 +449,7 @@ static int sec_ctx_base_init(struct sec_ctx *ctx)
+ 
+ 	sec = container_of(ctx->qps[0]->qm, struct sec_dev, qm);
+ 	ctx->sec = sec;
++	ctx->dev = &sec->qm.pdev->dev;
+ 	ctx->hlf_q_num = sec->ctx_q_num >> 1;
+ 
+ 	ctx->pbuf_supported = ctx->sec->iommu_used;
+@@ -474,11 +474,9 @@ static int sec_ctx_base_init(struct sec_ctx *ctx)
+ err_sec_release_qp_ctx:
+ 	for (i = i - 1; i >= 0; i--)
+ 		sec_release_qp_ctx(ctx, &ctx->qp_ctx[i]);
+-
+ 	kfree(ctx->qp_ctx);
+ err_destroy_qps:
+ 	sec_destroy_qps(ctx->qps, sec->ctx_q_num);
+-
+ 	return ret;
+ }
+ 
+@@ -497,7 +495,7 @@ static int sec_cipher_init(struct sec_ctx *ctx)
+ {
+ 	struct sec_cipher_ctx *c_ctx = &ctx->c_ctx;
+ 
+-	c_ctx->c_key = dma_alloc_coherent(SEC_CTX_DEV(ctx), SEC_MAX_KEY_SIZE,
++	c_ctx->c_key = dma_alloc_coherent(ctx->dev, SEC_MAX_KEY_SIZE,
+ 					  &c_ctx->c_key_dma, GFP_KERNEL);
+ 	if (!c_ctx->c_key)
+ 		return -ENOMEM;
+@@ -510,7 +508,7 @@ static void sec_cipher_uninit(struct sec_ctx *ctx)
+ 	struct sec_cipher_ctx *c_ctx = &ctx->c_ctx;
+ 
+ 	memzero_explicit(c_ctx->c_key, SEC_MAX_KEY_SIZE);
+-	dma_free_coherent(SEC_CTX_DEV(ctx), SEC_MAX_KEY_SIZE,
++	dma_free_coherent(ctx->dev, SEC_MAX_KEY_SIZE,
+ 			  c_ctx->c_key, c_ctx->c_key_dma);
+ }
+ 
+@@ -518,7 +516,7 @@ static int sec_auth_init(struct sec_ctx *ctx)
+ {
+ 	struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
+ 
+-	a_ctx->a_key = dma_alloc_coherent(SEC_CTX_DEV(ctx), SEC_MAX_KEY_SIZE,
++	a_ctx->a_key = dma_alloc_coherent(ctx->dev, SEC_MAX_AKEY_SIZE,
+ 					  &a_ctx->a_key_dma, GFP_KERNEL);
+ 	if (!a_ctx->a_key)
+ 		return -ENOMEM;
+@@ -530,8 +528,8 @@ static void sec_auth_uninit(struct sec_ctx *ctx)
+ {
+ 	struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
+ 
+-	memzero_explicit(a_ctx->a_key, SEC_MAX_KEY_SIZE);
+-	dma_free_coherent(SEC_CTX_DEV(ctx), SEC_MAX_KEY_SIZE,
++	memzero_explicit(a_ctx->a_key, SEC_MAX_AKEY_SIZE);
++	dma_free_coherent(ctx->dev, SEC_MAX_AKEY_SIZE,
+ 			  a_ctx->a_key, a_ctx->a_key_dma);
+ }
+ 
+@@ -631,12 +629,13 @@ static int sec_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ {
+ 	struct sec_ctx *ctx = crypto_skcipher_ctx(tfm);
+ 	struct sec_cipher_ctx *c_ctx = &ctx->c_ctx;
++	struct device *dev = ctx->dev;
+ 	int ret;
+ 
+ 	if (c_mode == SEC_CMODE_XTS) {
+ 		ret = xts_verify_key(tfm, key, keylen);
+ 		if (ret) {
+-			dev_err(SEC_CTX_DEV(ctx), "xts mode key err!\n");
++			dev_err(dev, "xts mode key err!\n");
+ 			return ret;
+ 		}
+ 	}
+@@ -657,7 +656,7 @@ static int sec_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ 	}
+ 
+ 	if (ret) {
+-		dev_err(SEC_CTX_DEV(ctx), "set sec key err!\n");
++		dev_err(dev, "set sec key err!\n");
+ 		return ret;
+ 	}
+ 
+@@ -689,7 +688,7 @@ static int sec_cipher_pbuf_map(struct sec_ctx *ctx, struct sec_req *req,
+ 	struct aead_request *aead_req = req->aead_req.aead_req;
+ 	struct sec_cipher_req *c_req = &req->c_req;
+ 	struct sec_qp_ctx *qp_ctx = req->qp_ctx;
+-	struct device *dev = SEC_CTX_DEV(ctx);
++	struct device *dev = ctx->dev;
+ 	int copy_size, pbuf_length;
+ 	int req_id = req->req_id;
+ 
+@@ -699,9 +698,8 @@ static int sec_cipher_pbuf_map(struct sec_ctx *ctx, struct sec_req *req,
+ 		copy_size = c_req->c_len;
+ 
+ 	pbuf_length = sg_copy_to_buffer(src, sg_nents(src),
+-				qp_ctx->res[req_id].pbuf,
+-				copy_size);
+-
++							qp_ctx->res[req_id].pbuf,
++							copy_size);
+ 	if (unlikely(pbuf_length != copy_size)) {
+ 		dev_err(dev, "copy src data to pbuf error!\n");
+ 		return -EINVAL;
+@@ -725,7 +723,7 @@ static void sec_cipher_pbuf_unmap(struct sec_ctx *ctx, struct sec_req *req,
+ 	struct aead_request *aead_req = req->aead_req.aead_req;
+ 	struct sec_cipher_req *c_req = &req->c_req;
+ 	struct sec_qp_ctx *qp_ctx = req->qp_ctx;
+-	struct device *dev = SEC_CTX_DEV(ctx);
++	struct device *dev = ctx->dev;
+ 	int copy_size, pbuf_length;
+ 	int req_id = req->req_id;
+ 
+@@ -737,7 +735,6 @@ static void sec_cipher_pbuf_unmap(struct sec_ctx *ctx, struct sec_req *req,
+ 	pbuf_length = sg_copy_from_buffer(dst, sg_nents(dst),
+ 				qp_ctx->res[req_id].pbuf,
+ 				copy_size);
+-
+ 	if (unlikely(pbuf_length != copy_size))
+ 		dev_err(dev, "copy pbuf data to dst error!\n");
+ 
+@@ -750,7 +747,7 @@ static int sec_cipher_map(struct sec_ctx *ctx, struct sec_req *req,
+ 	struct sec_aead_req *a_req = &req->aead_req;
+ 	struct sec_qp_ctx *qp_ctx = req->qp_ctx;
+ 	struct sec_alg_res *res = &qp_ctx->res[req->req_id];
+-	struct device *dev = SEC_CTX_DEV(ctx);
++	struct device *dev = ctx->dev;
+ 	int ret;
+ 
+ 	if (req->use_pbuf) {
+@@ -805,7 +802,7 @@ static void sec_cipher_unmap(struct sec_ctx *ctx, struct sec_req *req,
+ 			     struct scatterlist *src, struct scatterlist *dst)
+ {
+ 	struct sec_cipher_req *c_req = &req->c_req;
+-	struct device *dev = SEC_CTX_DEV(ctx);
++	struct device *dev = ctx->dev;
+ 
+ 	if (req->use_pbuf) {
+ 		sec_cipher_pbuf_unmap(ctx, req, dst);
+@@ -889,6 +886,7 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ {
+ 	struct sec_ctx *ctx = crypto_aead_ctx(tfm);
+ 	struct sec_cipher_ctx *c_ctx = &ctx->c_ctx;
++	struct device *dev = ctx->dev;
+ 	struct crypto_authenc_keys keys;
+ 	int ret;
+ 
+@@ -902,13 +900,13 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ 
+ 	ret = sec_aead_aes_set_key(c_ctx, &keys);
+ 	if (ret) {
+-		dev_err(SEC_CTX_DEV(ctx), "set sec cipher key err!\n");
++		dev_err(dev, "set sec cipher key err!\n");
+ 		goto bad_key;
+ 	}
+ 
+ 	ret = sec_aead_auth_set_key(&ctx->a_ctx, &keys);
+ 	if (ret) {
+-		dev_err(SEC_CTX_DEV(ctx), "set sec auth key err!\n");
++		dev_err(dev, "set sec auth key err!\n");
+ 		goto bad_key;
+ 	}
+ 
+@@ -1061,7 +1059,7 @@ static void sec_update_iv(struct sec_req *req, enum sec_alg_type alg_type)
+ 	sz = sg_pcopy_to_buffer(sgl, sg_nents(sgl), iv, iv_size,
+ 				cryptlen - iv_size);
+ 	if (unlikely(sz != iv_size))
+-		dev_err(SEC_CTX_DEV(req->ctx), "copy output iv error!\n");
++		dev_err(req->ctx->dev, "copy output iv error!\n");
+ }
+ 
+ static struct sec_req *sec_back_req_clear(struct sec_ctx *ctx,
+@@ -1069,7 +1067,7 @@ static struct sec_req *sec_back_req_clear(struct sec_ctx *ctx,
+ {
+ 	struct sec_req *backlog_req = NULL;
+ 
+-	mutex_lock(&qp_ctx->req_lock);
++	spin_lock_bh(&qp_ctx->req_lock);
+ 	if (ctx->fake_req_limit >=
+ 	    atomic_read(&qp_ctx->qp->qp_status.used) &&
+ 	    !list_empty(&qp_ctx->backlog)) {
+@@ -1077,7 +1075,7 @@ static struct sec_req *sec_back_req_clear(struct sec_ctx *ctx,
+ 				typeof(*backlog_req), backlog_head);
+ 		list_del(&backlog_req->backlog_head);
+ 	}
+-	mutex_unlock(&qp_ctx->req_lock);
++	spin_unlock_bh(&qp_ctx->req_lock);
+ 
+ 	return backlog_req;
+ }
+@@ -1160,7 +1158,7 @@ static int sec_aead_bd_fill(struct sec_ctx *ctx, struct sec_req *req)
+ 
+ 	ret = sec_skcipher_bd_fill(ctx, req);
+ 	if (unlikely(ret)) {
+-		dev_err(SEC_CTX_DEV(ctx), "skcipher bd fill is error!\n");
++		dev_err(ctx->dev, "skcipher bd fill is error!\n");
+ 		return ret;
+ 	}
+ 
+@@ -1194,7 +1192,7 @@ static void sec_aead_callback(struct sec_ctx *c, struct sec_req *req, int err)
+ 					  a_req->assoclen);
+ 
+ 		if (unlikely(sz != authsize)) {
+-			dev_err(SEC_CTX_DEV(req->ctx), "copy out mac err!\n");
++			dev_err(c->dev, "copy out mac err!\n");
+ 			err = -EINVAL;
+ 		}
+ 	}
+@@ -1259,7 +1257,7 @@ static int sec_process(struct sec_ctx *ctx, struct sec_req *req)
+ 	ret = ctx->req_op->bd_send(ctx, req);
+ 	if (unlikely((ret != -EBUSY && ret != -EINPROGRESS) ||
+ 		(ret == -EBUSY && !(req->flag & CRYPTO_TFM_REQ_MAY_BACKLOG)))) {
+-		dev_err_ratelimited(SEC_CTX_DEV(ctx), "send sec request failed!\n");
++		dev_err_ratelimited(ctx->dev, "send sec request failed!\n");
+ 		goto err_send_req;
+ 	}
+ 
+@@ -1326,7 +1324,7 @@ static int sec_aead_init(struct crypto_aead *tfm)
+ 	ctx->alg_type = SEC_AEAD;
+ 	ctx->c_ctx.ivsize = crypto_aead_ivsize(tfm);
+ 	if (ctx->c_ctx.ivsize > SEC_IV_SIZE) {
+-		dev_err(SEC_CTX_DEV(ctx), "get error aead iv size!\n");
++		dev_err(ctx->dev, "get error aead iv size!\n");
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1376,7 +1374,7 @@ static int sec_aead_ctx_init(struct crypto_aead *tfm, const char *hash_name)
+ 
+ 	auth_ctx->hash_tfm = crypto_alloc_shash(hash_name, 0, 0);
+ 	if (IS_ERR(auth_ctx->hash_tfm)) {
+-		dev_err(SEC_CTX_DEV(ctx), "aead alloc shash error!\n");
++		dev_err(ctx->dev, "aead alloc shash error!\n");
+ 		sec_aead_exit(tfm);
+ 		return PTR_ERR(auth_ctx->hash_tfm);
+ 	}
+@@ -1410,7 +1408,7 @@ static int sec_aead_sha512_ctx_init(struct crypto_aead *tfm)
+ static int sec_skcipher_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ {
+ 	struct skcipher_request *sk_req = sreq->c_req.sk_req;
+-	struct device *dev = SEC_CTX_DEV(ctx);
++	struct device *dev = ctx->dev;
+ 	u8 c_alg = ctx->c_ctx.c_alg;
+ 
+ 	if (unlikely(!sk_req->src || !sk_req->dst)) {
+@@ -1533,14 +1531,15 @@ static struct skcipher_alg sec_skciphers[] = {
+ 
+ static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ {
+-	u8 c_alg = ctx->c_ctx.c_alg;
+ 	struct aead_request *req = sreq->aead_req.aead_req;
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	size_t authsize = crypto_aead_authsize(tfm);
++	struct device *dev = ctx->dev;
++	u8 c_alg = ctx->c_ctx.c_alg;
+ 
+ 	if (unlikely(!req->src || !req->dst || !req->cryptlen ||
+ 		req->assoclen > SEC_MAX_AAD_LEN)) {
+-		dev_err(SEC_CTX_DEV(ctx), "aead input param error!\n");
++		dev_err(dev, "aead input param error!\n");
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1552,7 +1551,7 @@ static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ 
+ 	/* Support AES only */
+ 	if (unlikely(c_alg != SEC_CALG_AES)) {
+-		dev_err(SEC_CTX_DEV(ctx), "aead crypto alg error!\n");
++		dev_err(dev, "aead crypto alg error!\n");
+ 		return -EINVAL;
+ 
+ 	}
+@@ -1562,7 +1561,7 @@ static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ 		sreq->c_req.c_len = req->cryptlen - authsize;
+ 
+ 	if (unlikely(sreq->c_req.c_len & (AES_BLOCK_SIZE - 1))) {
+-		dev_err(SEC_CTX_DEV(ctx), "aead crypto length error!\n");
++		dev_err(dev, "aead crypto length error!\n");
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.h b/drivers/crypto/hisilicon/sec2/sec_crypto.h
+index b2786e17d8fe2..20f11e5bbf1d5 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.h
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.h
+@@ -6,6 +6,7 @@
+ 
+ #define SEC_IV_SIZE		24
+ #define SEC_MAX_KEY_SIZE	64
++#define SEC_MAX_AKEY_SIZE	128
+ #define SEC_COMM_SCENE		0
+ 
+ enum sec_calg {
+@@ -64,7 +65,6 @@ enum sec_addr_type {
+ };
+ 
+ struct sec_sqe_type2 {
+-
+ 	/*
+ 	 * mac_len: 0~4 bits
+ 	 * a_key_len: 5~10 bits
+@@ -120,7 +120,6 @@ struct sec_sqe_type2 {
+ 	/* c_pad_len_field: 0~1 bits */
+ 	__le16 c_pad_len_field;
+ 
+-
+ 	__le64 long_a_data_len;
+ 	__le64 a_ivin_addr;
+ 	__le64 a_key_addr;
+diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
+index 2e1562108a858..fbcf52e46d179 100644
+--- a/drivers/crypto/inside-secure/safexcel.c
++++ b/drivers/crypto/inside-secure/safexcel.c
+@@ -1834,6 +1834,8 @@ static const struct of_device_id safexcel_of_match_table[] = {
+ 	{},
+ };
+ 
++MODULE_DEVICE_TABLE(of, safexcel_of_match_table);
++
+ static struct platform_driver  crypto_safexcel = {
+ 	.probe		= safexcel_probe,
+ 	.remove		= safexcel_remove,
+diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
+index 58c8cc8fe0e11..d7ed50f8b9294 100644
+--- a/drivers/dma/dw-edma/dw-edma-core.c
++++ b/drivers/dma/dw-edma/dw-edma-core.c
+@@ -400,7 +400,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
+ 		chunk->ll_region.sz += burst->sz;
+ 		desc->alloc_sz += burst->sz;
+ 
+-		if (chan->dir == EDMA_DIR_WRITE) {
++		if (dir == DMA_DEV_TO_MEM) {
+ 			burst->sar = src_addr;
+ 			if (xfer->cyclic) {
+ 				burst->dar = xfer->xfer.cyclic.paddr;
+diff --git a/drivers/dma/sf-pdma/sf-pdma.c b/drivers/dma/sf-pdma/sf-pdma.c
+index 528deb5d9f314..5c615a8b514bf 100644
+--- a/drivers/dma/sf-pdma/sf-pdma.c
++++ b/drivers/dma/sf-pdma/sf-pdma.c
+@@ -52,16 +52,6 @@ static inline struct sf_pdma_desc *to_sf_pdma_desc(struct virt_dma_desc *vd)
+ static struct sf_pdma_desc *sf_pdma_alloc_desc(struct sf_pdma_chan *chan)
+ {
+ 	struct sf_pdma_desc *desc;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&chan->lock, flags);
+-
+-	if (chan->desc && !chan->desc->in_use) {
+-		spin_unlock_irqrestore(&chan->lock, flags);
+-		return chan->desc;
+-	}
+-
+-	spin_unlock_irqrestore(&chan->lock, flags);
+ 
+ 	desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
+ 	if (!desc)
+@@ -94,6 +84,7 @@ sf_pdma_prep_dma_memcpy(struct dma_chan *dchan,	dma_addr_t dest, dma_addr_t src,
+ {
+ 	struct sf_pdma_chan *chan = to_sf_pdma_chan(dchan);
+ 	struct sf_pdma_desc *desc;
++	unsigned long iflags;
+ 
+ 	if (chan && (!len || !dest || !src)) {
+ 		dev_err(chan->pdma->dma_dev.dev,
+@@ -109,10 +100,9 @@ sf_pdma_prep_dma_memcpy(struct dma_chan *dchan,	dma_addr_t dest, dma_addr_t src,
+ 	desc->dirn = DMA_MEM_TO_MEM;
+ 	desc->async_tx = vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
+ 
+-	spin_lock_irqsave(&chan->vchan.lock, flags);
+-	chan->desc = desc;
++	spin_lock_irqsave(&chan->vchan.lock, iflags);
+ 	sf_pdma_fill_desc(desc, dest, src, len);
+-	spin_unlock_irqrestore(&chan->vchan.lock, flags);
++	spin_unlock_irqrestore(&chan->vchan.lock, iflags);
+ 
+ 	return desc->async_tx;
+ }
+@@ -169,11 +159,17 @@ static size_t sf_pdma_desc_residue(struct sf_pdma_chan *chan,
+ 	unsigned long flags;
+ 	u64 residue = 0;
+ 	struct sf_pdma_desc *desc;
+-	struct dma_async_tx_descriptor *tx;
++	struct dma_async_tx_descriptor *tx = NULL;
+ 
+ 	spin_lock_irqsave(&chan->vchan.lock, flags);
+ 
+-	tx = &chan->desc->vdesc.tx;
++	list_for_each_entry(vd, &chan->vchan.desc_submitted, node)
++		if (vd->tx.cookie == cookie)
++			tx = &vd->tx;
++
++	if (!tx)
++		goto out;
++
+ 	if (cookie == tx->chan->completed_cookie)
+ 		goto out;
+ 
+@@ -240,6 +236,19 @@ static void sf_pdma_enable_request(struct sf_pdma_chan *chan)
+ 	writel(v, regs->ctrl);
+ }
+ 
++static struct sf_pdma_desc *sf_pdma_get_first_pending_desc(struct sf_pdma_chan *chan)
++{
++	struct virt_dma_chan *vchan = &chan->vchan;
++	struct virt_dma_desc *vdesc;
++
++	if (list_empty(&vchan->desc_issued))
++		return NULL;
++
++	vdesc = list_first_entry(&vchan->desc_issued, struct virt_dma_desc, node);
++
++	return container_of(vdesc, struct sf_pdma_desc, vdesc);
++}
++
+ static void sf_pdma_xfer_desc(struct sf_pdma_chan *chan)
+ {
+ 	struct sf_pdma_desc *desc = chan->desc;
+@@ -267,8 +276,11 @@ static void sf_pdma_issue_pending(struct dma_chan *dchan)
+ 
+ 	spin_lock_irqsave(&chan->vchan.lock, flags);
+ 
+-	if (vchan_issue_pending(&chan->vchan) && chan->desc)
++	if (!chan->desc && vchan_issue_pending(&chan->vchan)) {
++		/* vchan_issue_pending has made a check that desc in not NULL */
++		chan->desc = sf_pdma_get_first_pending_desc(chan);
+ 		sf_pdma_xfer_desc(chan);
++	}
+ 
+ 	spin_unlock_irqrestore(&chan->vchan.lock, flags);
+ }
+@@ -297,6 +309,11 @@ static void sf_pdma_donebh_tasklet(struct tasklet_struct *t)
+ 	spin_lock_irqsave(&chan->vchan.lock, flags);
+ 	list_del(&chan->desc->vdesc.node);
+ 	vchan_cookie_complete(&chan->desc->vdesc);
++
++	chan->desc = sf_pdma_get_first_pending_desc(chan);
++	if (chan->desc)
++		sf_pdma_xfer_desc(chan);
++
+ 	spin_unlock_irqrestore(&chan->vchan.lock, flags);
+ }
+ 
+diff --git a/drivers/firmware/arm_scpi.c b/drivers/firmware/arm_scpi.c
+index 4ceba5ef78958..36391cb5130e2 100644
+--- a/drivers/firmware/arm_scpi.c
++++ b/drivers/firmware/arm_scpi.c
+@@ -815,7 +815,7 @@ static int scpi_init_versions(struct scpi_drvinfo *info)
+ 		info->firmware_version = le32_to_cpu(caps.platform_version);
+ 	}
+ 	/* Ignore error if not implemented */
+-	if (scpi_info->is_legacy && ret == -EOPNOTSUPP)
++	if (info->is_legacy && ret == -EOPNOTSUPP)
+ 		return 0;
+ 
+ 	return ret;
+@@ -905,13 +905,14 @@ static int scpi_probe(struct platform_device *pdev)
+ 	struct resource res;
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np = dev->of_node;
++	struct scpi_drvinfo *scpi_drvinfo;
+ 
+-	scpi_info = devm_kzalloc(dev, sizeof(*scpi_info), GFP_KERNEL);
+-	if (!scpi_info)
++	scpi_drvinfo = devm_kzalloc(dev, sizeof(*scpi_drvinfo), GFP_KERNEL);
++	if (!scpi_drvinfo)
+ 		return -ENOMEM;
+ 
+ 	if (of_match_device(legacy_scpi_of_match, &pdev->dev))
+-		scpi_info->is_legacy = true;
++		scpi_drvinfo->is_legacy = true;
+ 
+ 	count = of_count_phandle_with_args(np, "mboxes", "#mbox-cells");
+ 	if (count < 0) {
+@@ -919,19 +920,19 @@ static int scpi_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	scpi_info->channels = devm_kcalloc(dev, count, sizeof(struct scpi_chan),
+-					   GFP_KERNEL);
+-	if (!scpi_info->channels)
++	scpi_drvinfo->channels =
++		devm_kcalloc(dev, count, sizeof(struct scpi_chan), GFP_KERNEL);
++	if (!scpi_drvinfo->channels)
+ 		return -ENOMEM;
+ 
+-	ret = devm_add_action(dev, scpi_free_channels, scpi_info);
++	ret = devm_add_action(dev, scpi_free_channels, scpi_drvinfo);
+ 	if (ret)
+ 		return ret;
+ 
+-	for (; scpi_info->num_chans < count; scpi_info->num_chans++) {
++	for (; scpi_drvinfo->num_chans < count; scpi_drvinfo->num_chans++) {
+ 		resource_size_t size;
+-		int idx = scpi_info->num_chans;
+-		struct scpi_chan *pchan = scpi_info->channels + idx;
++		int idx = scpi_drvinfo->num_chans;
++		struct scpi_chan *pchan = scpi_drvinfo->channels + idx;
+ 		struct mbox_client *cl = &pchan->cl;
+ 		struct device_node *shmem = of_parse_phandle(np, "shmem", idx);
+ 
+@@ -975,45 +976,53 @@ static int scpi_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	scpi_info->commands = scpi_std_commands;
++	scpi_drvinfo->commands = scpi_std_commands;
+ 
+-	platform_set_drvdata(pdev, scpi_info);
++	platform_set_drvdata(pdev, scpi_drvinfo);
+ 
+-	if (scpi_info->is_legacy) {
++	if (scpi_drvinfo->is_legacy) {
+ 		/* Replace with legacy variants */
+ 		scpi_ops.clk_set_val = legacy_scpi_clk_set_val;
+-		scpi_info->commands = scpi_legacy_commands;
++		scpi_drvinfo->commands = scpi_legacy_commands;
+ 
+ 		/* Fill priority bitmap */
+ 		for (idx = 0; idx < ARRAY_SIZE(legacy_hpriority_cmds); idx++)
+ 			set_bit(legacy_hpriority_cmds[idx],
+-				scpi_info->cmd_priority);
++				scpi_drvinfo->cmd_priority);
+ 	}
+ 
+-	ret = scpi_init_versions(scpi_info);
++	scpi_info = scpi_drvinfo;
++
++	ret = scpi_init_versions(scpi_drvinfo);
+ 	if (ret) {
+ 		dev_err(dev, "incorrect or no SCP firmware found\n");
++		scpi_info = NULL;
+ 		return ret;
+ 	}
+ 
+-	if (scpi_info->is_legacy && !scpi_info->protocol_version &&
+-	    !scpi_info->firmware_version)
++	if (scpi_drvinfo->is_legacy && !scpi_drvinfo->protocol_version &&
++	    !scpi_drvinfo->firmware_version)
+ 		dev_info(dev, "SCP Protocol legacy pre-1.0 firmware\n");
+ 	else
+ 		dev_info(dev, "SCP Protocol %lu.%lu Firmware %lu.%lu.%lu version\n",
+ 			 FIELD_GET(PROTO_REV_MAJOR_MASK,
+-				   scpi_info->protocol_version),
++				   scpi_drvinfo->protocol_version),
+ 			 FIELD_GET(PROTO_REV_MINOR_MASK,
+-				   scpi_info->protocol_version),
++				   scpi_drvinfo->protocol_version),
+ 			 FIELD_GET(FW_REV_MAJOR_MASK,
+-				   scpi_info->firmware_version),
++				   scpi_drvinfo->firmware_version),
+ 			 FIELD_GET(FW_REV_MINOR_MASK,
+-				   scpi_info->firmware_version),
++				   scpi_drvinfo->firmware_version),
+ 			 FIELD_GET(FW_REV_PATCH_MASK,
+-				   scpi_info->firmware_version));
+-	scpi_info->scpi_ops = &scpi_ops;
++				   scpi_drvinfo->firmware_version));
++
++	scpi_drvinfo->scpi_ops = &scpi_ops;
+ 
+-	return devm_of_platform_populate(dev);
++	ret = devm_of_platform_populate(dev);
++	if (ret)
++		scpi_info = NULL;
++
++	return ret;
+ }
+ 
+ static const struct of_device_id scpi_of_match[] = {
+diff --git a/drivers/firmware/tegra/bpmp-debugfs.c b/drivers/firmware/tegra/bpmp-debugfs.c
+index 440d99c63638b..fad97ec8e81f2 100644
+--- a/drivers/firmware/tegra/bpmp-debugfs.c
++++ b/drivers/firmware/tegra/bpmp-debugfs.c
+@@ -429,7 +429,7 @@ static int bpmp_populate_debugfs_inband(struct tegra_bpmp *bpmp,
+ 			mode |= attrs & DEBUGFS_S_IWUSR ? 0200 : 0;
+ 			dentry = debugfs_create_file(name, mode, parent, bpmp,
+ 						     &bpmp_debug_fops);
+-			if (!dentry) {
++			if (IS_ERR(dentry)) {
+ 				err = -ENOMEM;
+ 				goto out;
+ 			}
+@@ -680,7 +680,7 @@ static int bpmp_populate_dir(struct tegra_bpmp *bpmp, struct seqbuf *seqbuf,
+ 
+ 		if (t & DEBUGFS_S_ISDIR) {
+ 			dentry = debugfs_create_dir(name, parent);
+-			if (!dentry)
++			if (IS_ERR(dentry))
+ 				return -ENOMEM;
+ 			err = bpmp_populate_dir(bpmp, seqbuf, dentry, depth+1);
+ 			if (err < 0)
+@@ -693,7 +693,7 @@ static int bpmp_populate_dir(struct tegra_bpmp *bpmp, struct seqbuf *seqbuf,
+ 			dentry = debugfs_create_file(name, mode,
+ 						     parent, bpmp,
+ 						     &debugfs_fops);
+-			if (!dentry)
++			if (IS_ERR(dentry))
+ 				return -ENOMEM;
+ 		}
+ 	}
+@@ -743,11 +743,11 @@ int tegra_bpmp_init_debugfs(struct tegra_bpmp *bpmp)
+ 		return 0;
+ 
+ 	root = debugfs_create_dir("bpmp", NULL);
+-	if (!root)
++	if (IS_ERR(root))
+ 		return -ENOMEM;
+ 
+ 	bpmp->debugfs_mirror = debugfs_create_dir("debug", root);
+-	if (!bpmp->debugfs_mirror) {
++	if (IS_ERR(bpmp->debugfs_mirror)) {
+ 		err = -ENOMEM;
+ 		goto out;
+ 	}
+diff --git a/drivers/fpga/altera-pr-ip-core.c b/drivers/fpga/altera-pr-ip-core.c
+index 2cf25fd5e8979..75b4b3ec933a5 100644
+--- a/drivers/fpga/altera-pr-ip-core.c
++++ b/drivers/fpga/altera-pr-ip-core.c
+@@ -108,7 +108,7 @@ static int alt_pr_fpga_write(struct fpga_manager *mgr, const char *buf,
+ 	u32 *buffer_32 = (u32 *)buf;
+ 	size_t i = 0;
+ 
+-	if (count <= 0)
++	if (!count)
+ 		return -EINVAL;
+ 
+ 	/* Write out the complete 32-bit chunks */
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 01424af654db7..2e63274a4c2c9 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -863,7 +863,8 @@ int of_mm_gpiochip_add_data(struct device_node *np,
+ 	if (mm_gc->save_regs)
+ 		mm_gc->save_regs(mm_gc);
+ 
+-	mm_gc->gc.of_node = np;
++	of_node_put(mm_gc->gc.of_node);
++	mm_gc->gc.of_node = of_node_get(np);
+ 
+ 	ret = gpiochip_add_data(gc, data);
+ 	if (ret)
+@@ -871,6 +872,7 @@ int of_mm_gpiochip_add_data(struct device_node *np,
+ 
+ 	return 0;
+ err2:
++	of_node_put(np);
+ 	iounmap(mm_gc->regs);
+ err1:
+ 	kfree(gc->label);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index f615ecc06a223..6937f81340084 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -905,6 +905,10 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
+ 	if (WARN_ON_ONCE(min_offset > max_offset))
+ 		return -EINVAL;
+ 
++	/* Check domain to be pinned to against preferred domains */
++	if (bo->preferred_domains & domain)
++		domain = bo->preferred_domains & domain;
++
+ 	/* A shared bo cannot be migrated to VRAM */
+ 	if (bo->prime_shared_count) {
+ 		if (domain & AMDGPU_GEM_DOMAIN_GTT)
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index aca2f14f04c2a..430c5e8f0388e 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -1063,6 +1063,10 @@ static int adv7511_init_cec_regmap(struct adv7511 *adv)
+ 						ADV7511_CEC_I2C_ADDR_DEFAULT);
+ 	if (IS_ERR(adv->i2c_cec))
+ 		return PTR_ERR(adv->i2c_cec);
++
++	regmap_write(adv->regmap, ADV7511_REG_CEC_I2C_ADDR,
++		     adv->i2c_cec->addr << 1);
++
+ 	i2c_set_clientdata(adv->i2c_cec, adv);
+ 
+ 	adv->regmap_cec = devm_regmap_init_i2c(adv->i2c_cec,
+@@ -1267,9 +1271,6 @@ static int adv7511_probe(struct i2c_client *i2c, const struct i2c_device_id *id)
+ 	if (ret)
+ 		goto err_i2c_unregister_packet;
+ 
+-	regmap_write(adv7511->regmap, ADV7511_REG_CEC_I2C_ADDR,
+-		     adv7511->i2c_cec->addr << 1);
+-
+ 	INIT_WORK(&adv7511->hpd_work, adv7511_hpd_work);
+ 
+ 	if (i2c->irq) {
+@@ -1380,10 +1381,21 @@ static struct i2c_driver adv7511_driver = {
+ 
+ static int __init adv7511_init(void)
+ {
+-	if (IS_ENABLED(CONFIG_DRM_MIPI_DSI))
+-		mipi_dsi_driver_register(&adv7533_dsi_driver);
++	int ret;
+ 
+-	return i2c_add_driver(&adv7511_driver);
++	if (IS_ENABLED(CONFIG_DRM_MIPI_DSI)) {
++		ret = mipi_dsi_driver_register(&adv7533_dsi_driver);
++		if (ret)
++			return ret;
++	}
++
++	ret = i2c_add_driver(&adv7511_driver);
++	if (ret) {
++		if (IS_ENABLED(CONFIG_DRM_MIPI_DSI))
++			mipi_dsi_driver_unregister(&adv7533_dsi_driver);
++	}
++
++	return ret;
+ }
+ module_init(adv7511_init);
+ 
+diff --git a/drivers/gpu/drm/bridge/sil-sii8620.c b/drivers/gpu/drm/bridge/sil-sii8620.c
+index ec7745c31da07..ab0bce4a988c5 100644
+--- a/drivers/gpu/drm/bridge/sil-sii8620.c
++++ b/drivers/gpu/drm/bridge/sil-sii8620.c
+@@ -605,7 +605,7 @@ static void *sii8620_burst_get_tx_buf(struct sii8620 *ctx, int len)
+ 	u8 *buf = &ctx->burst.tx_buf[ctx->burst.tx_count];
+ 	int size = len + 2;
+ 
+-	if (ctx->burst.tx_count + size > ARRAY_SIZE(ctx->burst.tx_buf)) {
++	if (ctx->burst.tx_count + size >= ARRAY_SIZE(ctx->burst.tx_buf)) {
+ 		dev_err(ctx->dev, "TX-BLK buffer exhausted\n");
+ 		ctx->error = -EINVAL;
+ 		return NULL;
+@@ -622,7 +622,7 @@ static u8 *sii8620_burst_get_rx_buf(struct sii8620 *ctx, int len)
+ 	u8 *buf = &ctx->burst.rx_buf[ctx->burst.rx_count];
+ 	int size = len + 1;
+ 
+-	if (ctx->burst.tx_count + size > ARRAY_SIZE(ctx->burst.tx_buf)) {
++	if (ctx->burst.rx_count + size >= ARRAY_SIZE(ctx->burst.rx_buf)) {
+ 		dev_err(ctx->dev, "RX-BLK buffer exhausted\n");
+ 		ctx->error = -EINVAL;
+ 		return NULL;
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index 34a3e4e9f7175..b4f7e7a7f7c51 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -1535,19 +1535,12 @@ static irqreturn_t tc_irq_handler(int irq, void *arg)
+ 	return IRQ_HANDLED;
+ }
+ 
+-static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id)
++static int tc_probe_edp_bridge_endpoint(struct tc_data *tc)
+ {
+-	struct device *dev = &client->dev;
++	struct device *dev = tc->dev;
+ 	struct drm_panel *panel;
+-	struct tc_data *tc;
+ 	int ret;
+ 
+-	tc = devm_kzalloc(dev, sizeof(*tc), GFP_KERNEL);
+-	if (!tc)
+-		return -ENOMEM;
+-
+-	tc->dev = dev;
+-
+ 	/* port@2 is the output port */
+ 	ret = drm_of_find_panel_or_bridge(dev->of_node, 2, 0, &panel, NULL);
+ 	if (ret && ret != -ENODEV)
+@@ -1566,6 +1559,50 @@ static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 		tc->bridge.type = DRM_MODE_CONNECTOR_DisplayPort;
+ 	}
+ 
++	return 0;
++}
++
++static void tc_clk_disable(void *data)
++{
++	struct clk *refclk = data;
++
++	clk_disable_unprepare(refclk);
++}
++
++static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id)
++{
++	struct device *dev = &client->dev;
++	struct tc_data *tc;
++	int ret;
++
++	tc = devm_kzalloc(dev, sizeof(*tc), GFP_KERNEL);
++	if (!tc)
++		return -ENOMEM;
++
++	tc->dev = dev;
++
++	ret = tc_probe_edp_bridge_endpoint(tc);
++	if (ret)
++		return ret;
++
++	tc->refclk = devm_clk_get(dev, "ref");
++	if (IS_ERR(tc->refclk)) {
++		ret = PTR_ERR(tc->refclk);
++		dev_err(dev, "Failed to get refclk: %d\n", ret);
++		return ret;
++	}
++
++	ret = clk_prepare_enable(tc->refclk);
++	if (ret)
++		return ret;
++
++	ret = devm_add_action_or_reset(dev, tc_clk_disable, tc->refclk);
++	if (ret)
++		return ret;
++
++	/* tRSTW = 100 cycles , at 13 MHz that is ~7.69 us */
++	usleep_range(10, 15);
++
+ 	/* Shut down GPIO is optional */
+ 	tc->sd_gpio = devm_gpiod_get_optional(dev, "shutdown", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(tc->sd_gpio))
+@@ -1586,13 +1623,6 @@ static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 		usleep_range(5000, 10000);
+ 	}
+ 
+-	tc->refclk = devm_clk_get(dev, "ref");
+-	if (IS_ERR(tc->refclk)) {
+-		ret = PTR_ERR(tc->refclk);
+-		dev_err(dev, "Failed to get refclk: %d\n", ret);
+-		return ret;
+-	}
+-
+ 	tc->regmap = devm_regmap_init_i2c(client, &tc_regmap_config);
+ 	if (IS_ERR(tc->regmap)) {
+ 		ret = PTR_ERR(tc->regmap);
+diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
+index 69c2c079d8036..5979af230eda0 100644
+--- a/drivers/gpu/drm/drm_gem.c
++++ b/drivers/gpu/drm/drm_gem.c
+@@ -1277,7 +1277,7 @@ retry:
+ 		ret = dma_resv_lock_slow_interruptible(obj->resv,
+ 								 acquire_ctx);
+ 		if (ret) {
+-			ww_acquire_done(acquire_ctx);
++			ww_acquire_fini(acquire_ctx);
+ 			return ret;
+ 		}
+ 	}
+@@ -1302,7 +1302,7 @@ retry:
+ 				goto retry;
+ 			}
+ 
+-			ww_acquire_done(acquire_ctx);
++			ww_acquire_fini(acquire_ctx);
+ 			return ret;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/drm_mipi_dbi.c b/drivers/gpu/drm/drm_mipi_dbi.c
+index 230c4fd7131c4..9f132229aed1c 100644
+--- a/drivers/gpu/drm/drm_mipi_dbi.c
++++ b/drivers/gpu/drm/drm_mipi_dbi.c
+@@ -1137,6 +1137,13 @@ int mipi_dbi_spi_transfer(struct spi_device *spi, u32 speed_hz,
+ 	size_t chunk;
+ 	int ret;
+ 
++	/* In __spi_validate, there's a validation that no partial transfers
++	 * are accepted (xfer->len % w_size must be zero).
++	 * Here we align max_chunk to multiple of 2 (16bits),
++	 * to prevent transfers from being rejected.
++	 */
++	max_chunk = ALIGN_DOWN(max_chunk, 2);
++
+ 	spi_message_init_with_transfers(&m, &tr, 1);
+ 
+ 	while (len) {
+diff --git a/drivers/gpu/drm/exynos/exynos7_drm_decon.c b/drivers/gpu/drm/exynos/exynos7_drm_decon.c
+index f2d87a7445c73..1c04c232dce15 100644
+--- a/drivers/gpu/drm/exynos/exynos7_drm_decon.c
++++ b/drivers/gpu/drm/exynos/exynos7_drm_decon.c
+@@ -800,31 +800,40 @@ static int exynos7_decon_resume(struct device *dev)
+ 	if (ret < 0) {
+ 		DRM_DEV_ERROR(dev, "Failed to prepare_enable the pclk [%d]\n",
+ 			      ret);
+-		return ret;
++		goto err_pclk_enable;
+ 	}
+ 
+ 	ret = clk_prepare_enable(ctx->aclk);
+ 	if (ret < 0) {
+ 		DRM_DEV_ERROR(dev, "Failed to prepare_enable the aclk [%d]\n",
+ 			      ret);
+-		return ret;
++		goto err_aclk_enable;
+ 	}
+ 
+ 	ret = clk_prepare_enable(ctx->eclk);
+ 	if  (ret < 0) {
+ 		DRM_DEV_ERROR(dev, "Failed to prepare_enable the eclk [%d]\n",
+ 			      ret);
+-		return ret;
++		goto err_eclk_enable;
+ 	}
+ 
+ 	ret = clk_prepare_enable(ctx->vclk);
+ 	if  (ret < 0) {
+ 		DRM_DEV_ERROR(dev, "Failed to prepare_enable the vclk [%d]\n",
+ 			      ret);
+-		return ret;
++		goto err_vclk_enable;
+ 	}
+ 
+ 	return 0;
++
++err_vclk_enable:
++	clk_disable_unprepare(ctx->eclk);
++err_eclk_enable:
++	clk_disable_unprepare(ctx->aclk);
++err_aclk_enable:
++	clk_disable_unprepare(ctx->pclk);
++err_pclk_enable:
++	return ret;
+ }
+ #endif
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
+index 0bf31f9a8af56..e6780fcc5006f 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
++++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
+@@ -526,8 +526,8 @@ static int i915_dmc_info(struct seq_file *m, void *unused)
+ 		 * reg for DC3CO debugging and validation,
+ 		 * but TGL DMC f/w is using DMC_DEBUG3 reg for DC3CO counter.
+ 		 */
+-		seq_printf(m, "DC3CO count: %d\n",
+-			   intel_de_read(dev_priv, DMC_DEBUG3));
++		seq_printf(m, "DC3CO count: %d\n", intel_de_read(dev_priv, IS_DGFX(dev_priv) ?
++					DG1_DMC_DEBUG3 : TGL_DMC_DEBUG3));
+ 	} else {
+ 		dc5_reg = IS_BROXTON(dev_priv) ? BXT_CSR_DC3_DC5_COUNT :
+ 						 SKL_CSR_DC3_DC5_COUNT;
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index f1ab26307db6f..04157d8ced320 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -7546,7 +7546,8 @@ enum {
+ #define TGL_DMC_DEBUG_DC5_COUNT	_MMIO(0x101084)
+ #define TGL_DMC_DEBUG_DC6_COUNT	_MMIO(0x101088)
+ 
+-#define DMC_DEBUG3		_MMIO(0x101090)
++#define TGL_DMC_DEBUG3		_MMIO(0x101090)
++#define DG1_DMC_DEBUG3		_MMIO(0x13415c)
+ 
+ /* Display Internal Timeout Register */
+ #define RM_TIMEOUT		_MMIO(0x42060)
+diff --git a/drivers/gpu/drm/mcde/mcde_dsi.c b/drivers/gpu/drm/mcde/mcde_dsi.c
+index 5275b2723293b..64e6fb8062908 100644
+--- a/drivers/gpu/drm/mcde/mcde_dsi.c
++++ b/drivers/gpu/drm/mcde/mcde_dsi.c
+@@ -1118,6 +1118,7 @@ static int mcde_dsi_bind(struct device *dev, struct device *master,
+ 			bridge = of_drm_find_bridge(child);
+ 			if (!bridge) {
+ 				dev_err(dev, "failed to find bridge\n");
++				of_node_put(child);
+ 				return -EINVAL;
+ 			}
+ 		}
+diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
+index 52f11a63a3304..c1ae336df6833 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
+@@ -52,13 +52,7 @@ enum mtk_dpi_out_channel_swap {
+ };
+ 
+ enum mtk_dpi_out_color_format {
+-	MTK_DPI_COLOR_FORMAT_RGB,
+-	MTK_DPI_COLOR_FORMAT_RGB_FULL,
+-	MTK_DPI_COLOR_FORMAT_YCBCR_444,
+-	MTK_DPI_COLOR_FORMAT_YCBCR_422,
+-	MTK_DPI_COLOR_FORMAT_XV_YCC,
+-	MTK_DPI_COLOR_FORMAT_YCBCR_444_FULL,
+-	MTK_DPI_COLOR_FORMAT_YCBCR_422_FULL
++	MTK_DPI_COLOR_FORMAT_RGB
+ };
+ 
+ struct mtk_dpi {
+@@ -358,24 +352,11 @@ static void mtk_dpi_config_disable_edge(struct mtk_dpi *dpi)
+ static void mtk_dpi_config_color_format(struct mtk_dpi *dpi,
+ 					enum mtk_dpi_out_color_format format)
+ {
+-	if ((format == MTK_DPI_COLOR_FORMAT_YCBCR_444) ||
+-	    (format == MTK_DPI_COLOR_FORMAT_YCBCR_444_FULL)) {
+-		mtk_dpi_config_yuv422_enable(dpi, false);
+-		mtk_dpi_config_csc_enable(dpi, true);
+-		mtk_dpi_config_swap_input(dpi, false);
+-		mtk_dpi_config_channel_swap(dpi, MTK_DPI_OUT_CHANNEL_SWAP_BGR);
+-	} else if ((format == MTK_DPI_COLOR_FORMAT_YCBCR_422) ||
+-		   (format == MTK_DPI_COLOR_FORMAT_YCBCR_422_FULL)) {
+-		mtk_dpi_config_yuv422_enable(dpi, true);
+-		mtk_dpi_config_csc_enable(dpi, true);
+-		mtk_dpi_config_swap_input(dpi, true);
+-		mtk_dpi_config_channel_swap(dpi, MTK_DPI_OUT_CHANNEL_SWAP_RGB);
+-	} else {
+-		mtk_dpi_config_yuv422_enable(dpi, false);
+-		mtk_dpi_config_csc_enable(dpi, false);
+-		mtk_dpi_config_swap_input(dpi, false);
+-		mtk_dpi_config_channel_swap(dpi, MTK_DPI_OUT_CHANNEL_SWAP_RGB);
+-	}
++	/* only support RGB888 */
++	mtk_dpi_config_yuv422_enable(dpi, false);
++	mtk_dpi_config_csc_enable(dpi, false);
++	mtk_dpi_config_swap_input(dpi, false);
++	mtk_dpi_config_channel_swap(dpi, MTK_DPI_OUT_CHANNEL_SWAP_RGB);
+ }
+ 
+ static void mtk_dpi_power_off(struct mtk_dpi *dpi)
+@@ -416,7 +397,6 @@ static int mtk_dpi_power_on(struct mtk_dpi *dpi)
+ 	if (dpi->pinctrl && dpi->pins_dpi)
+ 		pinctrl_select_state(dpi->pinctrl, dpi->pins_dpi);
+ 
+-	mtk_dpi_enable(dpi);
+ 	return 0;
+ 
+ err_pixel:
+@@ -553,6 +533,7 @@ static void mtk_dpi_bridge_enable(struct drm_bridge *bridge)
+ 
+ 	mtk_dpi_power_on(dpi);
+ 	mtk_dpi_set_display_mode(dpi, &dpi->mode);
++	mtk_dpi_enable(dpi);
+ }
+ 
+ static const struct drm_bridge_funcs mtk_dpi_bridge_funcs = {
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index 65fd99c528af2..7d37d2a01e3cf 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -202,6 +202,7 @@ struct mtk_dsi {
+ 	struct mtk_phy_timing phy_timing;
+ 	int refcount;
+ 	bool enabled;
++	bool lanes_ready;
+ 	u32 irq_data;
+ 	wait_queue_head_t irq_wait_queue;
+ 	const struct mtk_dsi_driver_data *driver_data;
+@@ -644,18 +645,11 @@ static int mtk_dsi_poweron(struct mtk_dsi *dsi)
+ 	mtk_dsi_reset_engine(dsi);
+ 	mtk_dsi_phy_timconfig(dsi);
+ 
+-	mtk_dsi_rxtx_control(dsi);
+-	usleep_range(30, 100);
+-	mtk_dsi_reset_dphy(dsi);
+ 	mtk_dsi_ps_control_vact(dsi);
+ 	mtk_dsi_set_vm_cmd(dsi);
+ 	mtk_dsi_config_vdo_timing(dsi);
+ 	mtk_dsi_set_interrupt_enable(dsi);
+ 
+-	mtk_dsi_clk_ulp_mode_leave(dsi);
+-	mtk_dsi_lane0_ulp_mode_leave(dsi);
+-	mtk_dsi_clk_hs_mode(dsi, 0);
+-
+ 	return 0;
+ err_disable_engine_clk:
+ 	clk_disable_unprepare(dsi->engine_clk);
+@@ -674,19 +668,11 @@ static void mtk_dsi_poweroff(struct mtk_dsi *dsi)
+ 	if (--dsi->refcount != 0)
+ 		return;
+ 
+-	/*
+-	 * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since
+-	 * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),
+-	 * which needs irq for vblank, and mtk_dsi_stop() will disable irq.
+-	 * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),
+-	 * after dsi is fully set.
+-	 */
+-	mtk_dsi_stop(dsi);
+-
+-	mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);
+ 	mtk_dsi_reset_engine(dsi);
+ 	mtk_dsi_lane0_ulp_mode_enter(dsi);
+ 	mtk_dsi_clk_ulp_mode_enter(dsi);
++	/* set the lane number as 0 to pull down mipi */
++	writel(0, dsi->regs + DSI_TXRX_CTRL);
+ 
+ 	mtk_dsi_disable(dsi);
+ 
+@@ -694,21 +680,31 @@ static void mtk_dsi_poweroff(struct mtk_dsi *dsi)
+ 	clk_disable_unprepare(dsi->digital_clk);
+ 
+ 	phy_power_off(dsi->phy);
++
++	dsi->lanes_ready = false;
+ }
+ 
+-static void mtk_output_dsi_enable(struct mtk_dsi *dsi)
++static void mtk_dsi_lane_ready(struct mtk_dsi *dsi)
+ {
+-	int ret;
++	if (!dsi->lanes_ready) {
++		dsi->lanes_ready = true;
++		mtk_dsi_rxtx_control(dsi);
++		usleep_range(30, 100);
++		mtk_dsi_reset_dphy(dsi);
++		mtk_dsi_clk_ulp_mode_leave(dsi);
++		mtk_dsi_lane0_ulp_mode_leave(dsi);
++		mtk_dsi_clk_hs_mode(dsi, 0);
++		msleep(20);
++		/* The reaction time after pulling up the mipi signal for dsi_rx */
++	}
++}
+ 
++static void mtk_output_dsi_enable(struct mtk_dsi *dsi)
++{
+ 	if (dsi->enabled)
+ 		return;
+ 
+-	ret = mtk_dsi_poweron(dsi);
+-	if (ret < 0) {
+-		DRM_ERROR("failed to power on dsi\n");
+-		return;
+-	}
+-
++	mtk_dsi_lane_ready(dsi);
+ 	mtk_dsi_set_mode(dsi);
+ 	mtk_dsi_clk_hs_mode(dsi, 1);
+ 
+@@ -722,7 +718,16 @@ static void mtk_output_dsi_disable(struct mtk_dsi *dsi)
+ 	if (!dsi->enabled)
+ 		return;
+ 
+-	mtk_dsi_poweroff(dsi);
++	/*
++	 * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since
++	 * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),
++	 * which needs irq for vblank, and mtk_dsi_stop() will disable irq.
++	 * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),
++	 * after dsi is fully set.
++	 */
++	mtk_dsi_stop(dsi);
++
++	mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);
+ 
+ 	dsi->enabled = false;
+ }
+@@ -746,24 +751,50 @@ static void mtk_dsi_bridge_mode_set(struct drm_bridge *bridge,
+ 	drm_display_mode_to_videomode(adjusted, &dsi->vm);
+ }
+ 
+-static void mtk_dsi_bridge_disable(struct drm_bridge *bridge)
++static void mtk_dsi_bridge_atomic_disable(struct drm_bridge *bridge,
++					  struct drm_bridge_state *old_bridge_state)
+ {
+ 	struct mtk_dsi *dsi = bridge_to_dsi(bridge);
+ 
+ 	mtk_output_dsi_disable(dsi);
+ }
+ 
+-static void mtk_dsi_bridge_enable(struct drm_bridge *bridge)
++static void mtk_dsi_bridge_atomic_enable(struct drm_bridge *bridge,
++					 struct drm_bridge_state *old_bridge_state)
+ {
+ 	struct mtk_dsi *dsi = bridge_to_dsi(bridge);
+ 
++	if (dsi->refcount == 0)
++		return;
++
+ 	mtk_output_dsi_enable(dsi);
+ }
+ 
++static void mtk_dsi_bridge_atomic_pre_enable(struct drm_bridge *bridge,
++					     struct drm_bridge_state *old_bridge_state)
++{
++	struct mtk_dsi *dsi = bridge_to_dsi(bridge);
++	int ret;
++
++	ret = mtk_dsi_poweron(dsi);
++	if (ret < 0)
++		DRM_ERROR("failed to power on dsi\n");
++}
++
++static void mtk_dsi_bridge_atomic_post_disable(struct drm_bridge *bridge,
++					       struct drm_bridge_state *old_bridge_state)
++{
++	struct mtk_dsi *dsi = bridge_to_dsi(bridge);
++
++	mtk_dsi_poweroff(dsi);
++}
++
+ static const struct drm_bridge_funcs mtk_dsi_bridge_funcs = {
+ 	.attach = mtk_dsi_bridge_attach,
+-	.disable = mtk_dsi_bridge_disable,
+-	.enable = mtk_dsi_bridge_enable,
++	.atomic_disable = mtk_dsi_bridge_atomic_disable,
++	.atomic_enable = mtk_dsi_bridge_atomic_enable,
++	.atomic_pre_enable = mtk_dsi_bridge_atomic_pre_enable,
++	.atomic_post_disable = mtk_dsi_bridge_atomic_post_disable,
+ 	.mode_set = mtk_dsi_bridge_mode_set,
+ };
+ 
+@@ -891,24 +922,35 @@ static ssize_t mtk_dsi_host_transfer(struct mipi_dsi_host *host,
+ 	u8 read_data[16];
+ 	void *src_addr;
+ 	u8 irq_flag = CMD_DONE_INT_FLAG;
++	u32 dsi_mode;
++	int ret;
+ 
+-	if (readl(dsi->regs + DSI_MODE_CTRL) & MODE) {
+-		DRM_ERROR("dsi engine is not command mode\n");
+-		return -EINVAL;
++	dsi_mode = readl(dsi->regs + DSI_MODE_CTRL);
++	if (dsi_mode & MODE) {
++		mtk_dsi_stop(dsi);
++		ret = mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);
++		if (ret)
++			goto restore_dsi_mode;
+ 	}
+ 
+ 	if (MTK_DSI_HOST_IS_READ(msg->type))
+ 		irq_flag |= LPRX_RD_RDY_INT_FLAG;
+ 
+-	if (mtk_dsi_host_send_cmd(dsi, msg, irq_flag) < 0)
+-		return -ETIME;
++	mtk_dsi_lane_ready(dsi);
+ 
+-	if (!MTK_DSI_HOST_IS_READ(msg->type))
+-		return 0;
++	ret = mtk_dsi_host_send_cmd(dsi, msg, irq_flag);
++	if (ret)
++		goto restore_dsi_mode;
++
++	if (!MTK_DSI_HOST_IS_READ(msg->type)) {
++		recv_cnt = 0;
++		goto restore_dsi_mode;
++	}
+ 
+ 	if (!msg->rx_buf) {
+ 		DRM_ERROR("dsi receive buffer size may be NULL\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto restore_dsi_mode;
+ 	}
+ 
+ 	for (i = 0; i < 16; i++)
+@@ -933,7 +975,13 @@ static ssize_t mtk_dsi_host_transfer(struct mipi_dsi_host *host,
+ 	DRM_INFO("dsi get %d byte data from the panel address(0x%x)\n",
+ 		 recv_cnt, *((u8 *)(msg->tx_buf)));
+ 
+-	return recv_cnt;
++restore_dsi_mode:
++	if (dsi_mode & MODE) {
++		mtk_dsi_set_mode(dsi);
++		mtk_dsi_start(dsi);
++	}
++
++	return ret < 0 ? ret : recv_cnt;
+ }
+ 
+ static const struct mipi_dsi_host_ops mtk_dsi_ops = {
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
+index a4f5cb90f3e80..e4b8a789835a4 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
+@@ -123,12 +123,13 @@ int mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe)
+ {
+ 	struct msm_drm_private *priv = s->dev->dev_private;
+ 	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms));
+-	struct mdp5_global_state *state = mdp5_get_global_state(s);
++	struct mdp5_global_state *state;
+ 	struct mdp5_hw_pipe_state *new_state;
+ 
+ 	if (!hwpipe)
+ 		return 0;
+ 
++	state = mdp5_get_global_state(s);
+ 	if (IS_ERR(state))
+ 		return PTR_ERR(state);
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
+index f2ad6f49fb72e..00128756dedb0 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_display.c
++++ b/drivers/gpu/drm/nouveau/nouveau_display.c
+@@ -521,7 +521,7 @@ nouveau_display_hpd_work(struct work_struct *work)
+ 
+ 	pm_runtime_mark_last_busy(drm->dev->dev);
+ noop:
+-	pm_runtime_put_sync(drm->dev->dev);
++	pm_runtime_put_autosuspend(dev->dev);
+ }
+ 
+ #ifdef CONFIG_ACPI
+@@ -543,7 +543,7 @@ nouveau_display_acpi_ntfy(struct notifier_block *nb, unsigned long val,
+ 				 * it's own hotplug events.
+ 				 */
+ 				pm_runtime_put_autosuspend(drm->dev->dev);
+-			} else if (ret == 0) {
++			} else if (ret == 0 || ret == -EINPROGRESS) {
+ 				/* We've started resuming the GPU already, so
+ 				 * it will handle scheduling a full reprobe
+ 				 * itself
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+index 24ec5339efb46..a3c86499ff77c 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+@@ -464,7 +464,7 @@ nouveau_fbcon_set_suspend_work(struct work_struct *work)
+ 	if (state == FBINFO_STATE_RUNNING) {
+ 		nouveau_fbcon_hotplug_resume(drm->fbcon);
+ 		pm_runtime_mark_last_busy(drm->dev->dev);
+-		pm_runtime_put_sync(drm->dev->dev);
++		pm_runtime_put_autosuspend(drm->dev->dev);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
+index 8bff14ae16b0e..f0368d9a0154d 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
+@@ -33,7 +33,7 @@ nvbios_addr(struct nvkm_bios *bios, u32 *addr, u8 size)
+ {
+ 	u32 p = *addr;
+ 
+-	if (*addr > bios->image0_size && bios->imaged_addr) {
++	if (*addr >= bios->image0_size && bios->imaged_addr) {
+ 		*addr -= bios->image0_size;
+ 		*addr += bios->imaged_addr;
+ 	}
+diff --git a/drivers/gpu/drm/radeon/.gitignore b/drivers/gpu/drm/radeon/.gitignore
+index 9c1a941539836..d8777383a64aa 100644
+--- a/drivers/gpu/drm/radeon/.gitignore
++++ b/drivers/gpu/drm/radeon/.gitignore
+@@ -1,4 +1,4 @@
+-# SPDX-License-Identifier: GPL-2.0-only
++# SPDX-License-Identifier: MIT
+ mkregtable
+ *_reg_safe.h
+ 
+diff --git a/drivers/gpu/drm/radeon/Kconfig b/drivers/gpu/drm/radeon/Kconfig
+index 6f60f4840cc58..52819e7f1fca1 100644
+--- a/drivers/gpu/drm/radeon/Kconfig
++++ b/drivers/gpu/drm/radeon/Kconfig
+@@ -1,4 +1,4 @@
+-# SPDX-License-Identifier: GPL-2.0-only
++# SPDX-License-Identifier: MIT
+ config DRM_RADEON_USERPTR
+ 	bool "Always enable userptr support"
+ 	depends on DRM_RADEON
+diff --git a/drivers/gpu/drm/radeon/Makefile b/drivers/gpu/drm/radeon/Makefile
+index 11c97edde54dd..3d502f1bbfcbe 100644
+--- a/drivers/gpu/drm/radeon/Makefile
++++ b/drivers/gpu/drm/radeon/Makefile
+@@ -1,4 +1,4 @@
+-# SPDX-License-Identifier: GPL-2.0
++# SPDX-License-Identifier: MIT
+ #
+ # Makefile for the drm device driver.  This driver provides support for the
+ # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
+diff --git a/drivers/gpu/drm/radeon/ni_dpm.c b/drivers/gpu/drm/radeon/ni_dpm.c
+index 59cdadcece159..a5218747742ba 100644
+--- a/drivers/gpu/drm/radeon/ni_dpm.c
++++ b/drivers/gpu/drm/radeon/ni_dpm.c
+@@ -2740,10 +2740,10 @@ static int ni_set_mc_special_registers(struct radeon_device *rdev,
+ 					table->mc_reg_table_entry[k].mc_data[j] |= 0x100;
+ 			}
+ 			j++;
+-			if (j > SMC_NISLANDS_MC_REGISTER_ARRAY_SIZE)
+-				return -EINVAL;
+ 			break;
+ 		case MC_SEQ_RESERVE_M >> 2:
++			if (j >= SMC_NISLANDS_MC_REGISTER_ARRAY_SIZE)
++				return -EINVAL;
+ 			temp_reg = RREG32(MC_PMG_CMD_MRS1);
+ 			table->mc_reg_address[j].s1 = MC_PMG_CMD_MRS1 >> 2;
+ 			table->mc_reg_address[j].s0 = MC_SEQ_PMG_CMD_MRS1_LP >> 2;
+@@ -2752,8 +2752,6 @@ static int ni_set_mc_special_registers(struct radeon_device *rdev,
+ 					(temp_reg & 0xffff0000) |
+ 					(table->mc_reg_table_entry[k].mc_data[i] & 0x0000ffff);
+ 			j++;
+-			if (j > SMC_NISLANDS_MC_REGISTER_ARRAY_SIZE)
+-				return -EINVAL;
+ 			break;
+ 		default:
+ 			break;
+diff --git a/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c b/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
+index ade2327a10e2c..512581698a1e0 100644
+--- a/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
++++ b/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
+@@ -398,7 +398,15 @@ static int rockchip_dp_probe(struct platform_device *pdev)
+ 	if (IS_ERR(dp->adp))
+ 		return PTR_ERR(dp->adp);
+ 
+-	return component_add(dev, &rockchip_dp_component_ops);
++	ret = component_add(dev, &rockchip_dp_component_ops);
++	if (ret)
++		goto err_dp_remove;
++
++	return 0;
++
++err_dp_remove:
++	analogix_dp_remove(dp->adp);
++	return ret;
+ }
+ 
+ static int rockchip_dp_remove(struct platform_device *pdev)
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 91568f166a8ad..af98bfcde5189 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -1530,6 +1530,9 @@ static struct drm_crtc_state *vop_crtc_duplicate_state(struct drm_crtc *crtc)
+ {
+ 	struct rockchip_crtc_state *rockchip_state;
+ 
++	if (WARN_ON(!crtc->state))
++		return NULL;
++
+ 	rockchip_state = kzalloc(sizeof(*rockchip_state), GFP_KERNEL);
+ 	if (!rockchip_state)
+ 		return NULL;
+diff --git a/drivers/gpu/drm/tiny/st7735r.c b/drivers/gpu/drm/tiny/st7735r.c
+index c0bc2a18edde9..9d0c127bdb0c1 100644
+--- a/drivers/gpu/drm/tiny/st7735r.c
++++ b/drivers/gpu/drm/tiny/st7735r.c
+@@ -175,6 +175,7 @@ MODULE_DEVICE_TABLE(of, st7735r_of_match);
+ 
+ static const struct spi_device_id st7735r_id[] = {
+ 	{ "jd-t18003-t01", (uintptr_t)&jd_t18003_t01_cfg },
++	{ "rh128128t", (uintptr_t)&rh128128t_cfg },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(spi, st7735r_id);
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index f4ccca922e44a..79724fddfb4b6 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -319,7 +319,8 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc)
+ 	u32 pixel_rep = (mode->flags & DRM_MODE_FLAG_DBLCLK) ? 2 : 1;
+ 	bool is_dsi = (vc4_encoder->type == VC4_ENCODER_TYPE_DSI0 ||
+ 		       vc4_encoder->type == VC4_ENCODER_TYPE_DSI1);
+-	u32 format = is_dsi ? PV_CONTROL_FORMAT_DSIV_24 : PV_CONTROL_FORMAT_24;
++	bool is_dsi1 = vc4_encoder->type == VC4_ENCODER_TYPE_DSI1;
++	u32 format = is_dsi1 ? PV_CONTROL_FORMAT_DSIV_24 : PV_CONTROL_FORMAT_24;
+ 	u8 ppc = pv_data->pixels_per_clock;
+ 	bool debug_dump_regs = false;
+ 
+@@ -345,7 +346,8 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc)
+ 				 PV_HORZB_HACTIVE));
+ 
+ 	CRTC_WRITE(PV_VERTA,
+-		   VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end,
++		   VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end +
++				 interlace,
+ 				 PV_VERTA_VBP) |
+ 		   VC4_SET_FIELD(mode->crtc_vsync_end - mode->crtc_vsync_start,
+ 				 PV_VERTA_VSYNC));
+@@ -357,7 +359,7 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc)
+ 	if (interlace) {
+ 		CRTC_WRITE(PV_VERTA_EVEN,
+ 			   VC4_SET_FIELD(mode->crtc_vtotal -
+-					 mode->crtc_vsync_end - 1,
++					 mode->crtc_vsync_end,
+ 					 PV_VERTA_VBP) |
+ 			   VC4_SET_FIELD(mode->crtc_vsync_end -
+ 					 mode->crtc_vsync_start,
+@@ -377,7 +379,7 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc)
+ 			   PV_VCONTROL_CONTINUOUS |
+ 			   (is_dsi ? PV_VCONTROL_DSI : 0) |
+ 			   PV_VCONTROL_INTERLACE |
+-			   VC4_SET_FIELD(mode->htotal * pixel_rep / 2,
++			   VC4_SET_FIELD(mode->htotal * pixel_rep / (2 * ppc),
+ 					 PV_VCONTROL_ODD_DELAY));
+ 		CRTC_WRITE(PV_VSYNCD_EVEN, 0);
+ 	} else {
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.c b/drivers/gpu/drm/vc4/vc4_drv.c
+index 839610f8092af..52426bc8edb8b 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.c
++++ b/drivers/gpu/drm/vc4/vc4_drv.c
+@@ -246,6 +246,15 @@ static void vc4_match_add_drivers(struct device *dev,
+ 	}
+ }
+ 
++static const struct of_device_id vc4_dma_range_matches[] = {
++	{ .compatible = "brcm,bcm2711-hvs" },
++	{ .compatible = "brcm,bcm2835-hvs" },
++	{ .compatible = "brcm,bcm2835-v3d" },
++	{ .compatible = "brcm,cygnus-v3d" },
++	{ .compatible = "brcm,vc4-v3d" },
++	{}
++};
++
+ static int vc4_drm_bind(struct device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+@@ -263,6 +272,16 @@ static int vc4_drm_bind(struct device *dev)
+ 		vc4_drm_driver.driver_features &= ~DRIVER_RENDER;
+ 	of_node_put(node);
+ 
++	node = of_find_matching_node_and_match(NULL, vc4_dma_range_matches,
++					       NULL);
++	if (node) {
++		ret = of_dma_configure(dev, node, true);
++		of_node_put(node);
++
++		if (ret)
++			return ret;
++	}
++
+ 	vc4 = devm_drm_dev_alloc(dev, &vc4_drm_driver, struct vc4_dev, base);
+ 	if (IS_ERR(vc4))
+ 		return PTR_ERR(vc4);
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
+index 9809c3a856c67..921463625d82e 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -77,7 +77,6 @@ struct vc4_dev {
+ 	struct vc4_hvs *hvs;
+ 	struct vc4_v3d *v3d;
+ 	struct vc4_dpi *dpi;
+-	struct vc4_dsi *dsi1;
+ 	struct vc4_vec *vec;
+ 	struct vc4_txp *txp;
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_dsi.c b/drivers/gpu/drm/vc4/vc4_dsi.c
+index ad84b56f4091d..0bda40c2d7879 100644
+--- a/drivers/gpu/drm/vc4/vc4_dsi.c
++++ b/drivers/gpu/drm/vc4/vc4_dsi.c
+@@ -181,8 +181,50 @@
+ 
+ #define DSI0_TXPKT_PIX_FIFO		0x20 /* AKA PIX_FIFO */
+ 
+-#define DSI0_INT_STAT		0x24
+-#define DSI0_INT_EN		0x28
++#define DSI0_INT_STAT			0x24
++#define DSI0_INT_EN			0x28
++# define DSI0_INT_FIFO_ERR		BIT(25)
++# define DSI0_INT_CMDC_DONE_MASK	VC4_MASK(24, 23)
++# define DSI0_INT_CMDC_DONE_SHIFT	23
++#  define DSI0_INT_CMDC_DONE_NO_REPEAT		1
++#  define DSI0_INT_CMDC_DONE_REPEAT		3
++# define DSI0_INT_PHY_DIR_RTF		BIT(22)
++# define DSI0_INT_PHY_D1_ULPS		BIT(21)
++# define DSI0_INT_PHY_D1_STOP		BIT(20)
++# define DSI0_INT_PHY_RXLPDT		BIT(19)
++# define DSI0_INT_PHY_RXTRIG		BIT(18)
++# define DSI0_INT_PHY_D0_ULPS		BIT(17)
++# define DSI0_INT_PHY_D0_LPDT		BIT(16)
++# define DSI0_INT_PHY_D0_FTR		BIT(15)
++# define DSI0_INT_PHY_D0_STOP		BIT(14)
++/* Signaled when the clock lane enters the given state. */
++# define DSI0_INT_PHY_CLK_ULPS		BIT(13)
++# define DSI0_INT_PHY_CLK_HS		BIT(12)
++# define DSI0_INT_PHY_CLK_FTR		BIT(11)
++/* Signaled on timeouts */
++# define DSI0_INT_PR_TO			BIT(10)
++# define DSI0_INT_TA_TO			BIT(9)
++# define DSI0_INT_LPRX_TO		BIT(8)
++# define DSI0_INT_HSTX_TO		BIT(7)
++/* Contention on a line when trying to drive the line low */
++# define DSI0_INT_ERR_CONT_LP1		BIT(6)
++# define DSI0_INT_ERR_CONT_LP0		BIT(5)
++/* Control error: incorrect line state sequence on data lane 0. */
++# define DSI0_INT_ERR_CONTROL		BIT(4)
++# define DSI0_INT_ERR_SYNC_ESC		BIT(3)
++# define DSI0_INT_RX2_PKT		BIT(2)
++# define DSI0_INT_RX1_PKT		BIT(1)
++# define DSI0_INT_CMD_PKT		BIT(0)
++
++#define DSI0_INTERRUPTS_ALWAYS_ENABLED	(DSI0_INT_ERR_SYNC_ESC | \
++					 DSI0_INT_ERR_CONTROL |	 \
++					 DSI0_INT_ERR_CONT_LP0 | \
++					 DSI0_INT_ERR_CONT_LP1 | \
++					 DSI0_INT_HSTX_TO |	 \
++					 DSI0_INT_LPRX_TO |	 \
++					 DSI0_INT_TA_TO |	 \
++					 DSI0_INT_PR_TO)
++
+ # define DSI1_INT_PHY_D3_ULPS		BIT(30)
+ # define DSI1_INT_PHY_D3_STOP		BIT(29)
+ # define DSI1_INT_PHY_D2_ULPS		BIT(28)
+@@ -493,6 +535,18 @@
+  */
+ #define DSI1_ID			0x8c
+ 
++struct vc4_dsi_variant {
++	/* Whether we're on bcm2835's DSI0 or DSI1. */
++	unsigned int port;
++
++	bool broken_axi_workaround;
++
++	const char *debugfs_name;
++	const struct debugfs_reg32 *regs;
++	size_t nregs;
++
++};
++
+ /* General DSI hardware state. */
+ struct vc4_dsi {
+ 	struct platform_device *pdev;
+@@ -509,8 +563,7 @@ struct vc4_dsi {
+ 	u32 *reg_dma_mem;
+ 	dma_addr_t reg_paddr;
+ 
+-	/* Whether we're on bcm2835's DSI0 or DSI1. */
+-	int port;
++	const struct vc4_dsi_variant *variant;
+ 
+ 	/* DSI channel for the panel we're connected to. */
+ 	u32 channel;
+@@ -586,10 +639,10 @@ dsi_dma_workaround_write(struct vc4_dsi *dsi, u32 offset, u32 val)
+ #define DSI_READ(offset) readl(dsi->regs + (offset))
+ #define DSI_WRITE(offset, val) dsi_dma_workaround_write(dsi, offset, val)
+ #define DSI_PORT_READ(offset) \
+-	DSI_READ(dsi->port ? DSI1_##offset : DSI0_##offset)
++	DSI_READ(dsi->variant->port ? DSI1_##offset : DSI0_##offset)
+ #define DSI_PORT_WRITE(offset, val) \
+-	DSI_WRITE(dsi->port ? DSI1_##offset : DSI0_##offset, val)
+-#define DSI_PORT_BIT(bit) (dsi->port ? DSI1_##bit : DSI0_##bit)
++	DSI_WRITE(dsi->variant->port ? DSI1_##offset : DSI0_##offset, val)
++#define DSI_PORT_BIT(bit) (dsi->variant->port ? DSI1_##bit : DSI0_##bit)
+ 
+ /* VC4 DSI encoder KMS struct */
+ struct vc4_dsi_encoder {
+@@ -750,6 +803,9 @@ static void vc4_dsi_encoder_disable(struct drm_encoder *encoder)
+ 	list_for_each_entry_reverse(iter, &dsi->bridge_chain, chain_node) {
+ 		if (iter->funcs->disable)
+ 			iter->funcs->disable(iter);
++
++		if (iter == dsi->bridge)
++			break;
+ 	}
+ 
+ 	vc4_dsi_ulps(dsi, true);
+@@ -794,11 +850,9 @@ static bool vc4_dsi_encoder_mode_fixup(struct drm_encoder *encoder,
+ 	/* Find what divider gets us a faster clock than the requested
+ 	 * pixel clock.
+ 	 */
+-	for (divider = 1; divider < 8; divider++) {
+-		if (parent_rate / divider < pll_clock) {
+-			divider--;
++	for (divider = 1; divider < 255; divider++) {
++		if (parent_rate / (divider + 1) < pll_clock)
+ 			break;
+-		}
+ 	}
+ 
+ 	/* Now that we've picked a PLL divider, calculate back to its
+@@ -837,7 +891,7 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
+ 
+ 	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret) {
+-		DRM_ERROR("Failed to runtime PM enable on DSI%d\n", dsi->port);
++		DRM_ERROR("Failed to runtime PM enable on DSI%d\n", dsi->variant->port);
+ 		return;
+ 	}
+ 
+@@ -871,7 +925,7 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
+ 	DSI_PORT_WRITE(STAT, DSI_PORT_READ(STAT));
+ 
+ 	/* Set AFE CTR00/CTR1 to release powerdown of analog. */
+-	if (dsi->port == 0) {
++	if (dsi->variant->port == 0) {
+ 		u32 afec0 = (VC4_SET_FIELD(7, DSI_PHY_AFEC0_PTATADJ) |
+ 			     VC4_SET_FIELD(7, DSI_PHY_AFEC0_CTATADJ));
+ 
+@@ -883,6 +937,9 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
+ 
+ 		DSI_PORT_WRITE(PHY_AFEC0, afec0);
+ 
++		/* AFEC reset hold time */
++		mdelay(1);
++
+ 		DSI_PORT_WRITE(PHY_AFEC1,
+ 			       VC4_SET_FIELD(6,  DSI0_PHY_AFEC1_IDR_DLANE1) |
+ 			       VC4_SET_FIELD(6,  DSI0_PHY_AFEC1_IDR_DLANE0) |
+@@ -1017,7 +1074,7 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
+ 		       DSI_PORT_BIT(PHYC_CLANE_ENABLE) |
+ 		       ((dsi->mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS) ?
+ 			0 : DSI_PORT_BIT(PHYC_HS_CLK_CONTINUOUS)) |
+-		       (dsi->port == 0 ?
++		       (dsi->variant->port == 0 ?
+ 			VC4_SET_FIELD(lpx - 1, DSI0_PHYC_ESC_CLK_LPDT) :
+ 			VC4_SET_FIELD(lpx - 1, DSI1_PHYC_ESC_CLK_LPDT)));
+ 
+@@ -1043,18 +1100,15 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
+ 		       DSI_DISP1_ENABLE);
+ 
+ 	/* Ungate the block. */
+-	if (dsi->port == 0)
++	if (dsi->variant->port == 0)
+ 		DSI_PORT_WRITE(CTRL, DSI_PORT_READ(CTRL) | DSI0_CTRL_CTRL0);
+ 	else
+ 		DSI_PORT_WRITE(CTRL, DSI_PORT_READ(CTRL) | DSI1_CTRL_EN);
+ 
+ 	/* Bring AFE out of reset. */
+-	if (dsi->port == 0) {
+-	} else {
+-		DSI_PORT_WRITE(PHY_AFEC0,
+-			       DSI_PORT_READ(PHY_AFEC0) &
+-			       ~DSI1_PHY_AFEC0_RESET);
+-	}
++	DSI_PORT_WRITE(PHY_AFEC0,
++		       DSI_PORT_READ(PHY_AFEC0) &
++		       ~DSI_PORT_BIT(PHY_AFEC0_RESET));
+ 
+ 	vc4_dsi_ulps(dsi, false);
+ 
+@@ -1173,13 +1227,28 @@ static ssize_t vc4_dsi_host_transfer(struct mipi_dsi_host *host,
+ 	/* Enable the appropriate interrupt for the transfer completion. */
+ 	dsi->xfer_result = 0;
+ 	reinit_completion(&dsi->xfer_completion);
+-	DSI_PORT_WRITE(INT_STAT, DSI1_INT_TXPKT1_DONE | DSI1_INT_PHY_DIR_RTF);
+-	if (msg->rx_len) {
+-		DSI_PORT_WRITE(INT_EN, (DSI1_INTERRUPTS_ALWAYS_ENABLED |
+-					DSI1_INT_PHY_DIR_RTF));
++	if (dsi->variant->port == 0) {
++		DSI_PORT_WRITE(INT_STAT,
++			       DSI0_INT_CMDC_DONE_MASK | DSI1_INT_PHY_DIR_RTF);
++		if (msg->rx_len) {
++			DSI_PORT_WRITE(INT_EN, (DSI0_INTERRUPTS_ALWAYS_ENABLED |
++						DSI0_INT_PHY_DIR_RTF));
++		} else {
++			DSI_PORT_WRITE(INT_EN,
++				       (DSI0_INTERRUPTS_ALWAYS_ENABLED |
++					VC4_SET_FIELD(DSI0_INT_CMDC_DONE_NO_REPEAT,
++						      DSI0_INT_CMDC_DONE)));
++		}
+ 	} else {
+-		DSI_PORT_WRITE(INT_EN, (DSI1_INTERRUPTS_ALWAYS_ENABLED |
+-					DSI1_INT_TXPKT1_DONE));
++		DSI_PORT_WRITE(INT_STAT,
++			       DSI1_INT_TXPKT1_DONE | DSI1_INT_PHY_DIR_RTF);
++		if (msg->rx_len) {
++			DSI_PORT_WRITE(INT_EN, (DSI1_INTERRUPTS_ALWAYS_ENABLED |
++						DSI1_INT_PHY_DIR_RTF));
++		} else {
++			DSI_PORT_WRITE(INT_EN, (DSI1_INTERRUPTS_ALWAYS_ENABLED |
++						DSI1_INT_TXPKT1_DONE));
++		}
+ 	}
+ 
+ 	/* Send the packet. */
+@@ -1196,7 +1265,7 @@ static ssize_t vc4_dsi_host_transfer(struct mipi_dsi_host *host,
+ 		ret = dsi->xfer_result;
+ 	}
+ 
+-	DSI_PORT_WRITE(INT_EN, DSI1_INTERRUPTS_ALWAYS_ENABLED);
++	DSI_PORT_WRITE(INT_EN, DSI_PORT_BIT(INTERRUPTS_ALWAYS_ENABLED));
+ 
+ 	if (ret)
+ 		goto reset_fifo_and_return;
+@@ -1242,7 +1311,7 @@ reset_fifo_and_return:
+ 		       DSI_PORT_BIT(CTRL_RESET_FIFOS));
+ 
+ 	DSI_PORT_WRITE(TXPKT1C, 0);
+-	DSI_PORT_WRITE(INT_EN, DSI1_INTERRUPTS_ALWAYS_ENABLED);
++	DSI_PORT_WRITE(INT_EN, DSI_PORT_BIT(INTERRUPTS_ALWAYS_ENABLED));
+ 	return ret;
+ }
+ 
+@@ -1305,8 +1374,16 @@ static const struct drm_encoder_helper_funcs vc4_dsi_encoder_helper_funcs = {
+ 	.mode_fixup = vc4_dsi_encoder_mode_fixup,
+ };
+ 
++static const struct vc4_dsi_variant bcm2835_dsi1_variant = {
++	.port			= 1,
++	.broken_axi_workaround	= true,
++	.debugfs_name		= "dsi1_regs",
++	.regs			= dsi1_regs,
++	.nregs			= ARRAY_SIZE(dsi1_regs),
++};
++
+ static const struct of_device_id vc4_dsi_dt_match[] = {
+-	{ .compatible = "brcm,bcm2835-dsi1", (void *)(uintptr_t)1 },
++	{ .compatible = "brcm,bcm2835-dsi1", &bcm2835_dsi1_variant },
+ 	{}
+ };
+ 
+@@ -1317,7 +1394,7 @@ static void dsi_handle_error(struct vc4_dsi *dsi,
+ 	if (!(stat & bit))
+ 		return;
+ 
+-	DRM_ERROR("DSI%d: %s error\n", dsi->port, type);
++	DRM_ERROR("DSI%d: %s error\n", dsi->variant->port, type);
+ 	*ret = IRQ_HANDLED;
+ }
+ 
+@@ -1351,26 +1428,28 @@ static irqreturn_t vc4_dsi_irq_handler(int irq, void *data)
+ 	DSI_PORT_WRITE(INT_STAT, stat);
+ 
+ 	dsi_handle_error(dsi, &ret, stat,
+-			 DSI1_INT_ERR_SYNC_ESC, "LPDT sync");
++			 DSI_PORT_BIT(INT_ERR_SYNC_ESC), "LPDT sync");
+ 	dsi_handle_error(dsi, &ret, stat,
+-			 DSI1_INT_ERR_CONTROL, "data lane 0 sequence");
++			 DSI_PORT_BIT(INT_ERR_CONTROL), "data lane 0 sequence");
+ 	dsi_handle_error(dsi, &ret, stat,
+-			 DSI1_INT_ERR_CONT_LP0, "LP0 contention");
++			 DSI_PORT_BIT(INT_ERR_CONT_LP0), "LP0 contention");
+ 	dsi_handle_error(dsi, &ret, stat,
+-			 DSI1_INT_ERR_CONT_LP1, "LP1 contention");
++			 DSI_PORT_BIT(INT_ERR_CONT_LP1), "LP1 contention");
+ 	dsi_handle_error(dsi, &ret, stat,
+-			 DSI1_INT_HSTX_TO, "HSTX timeout");
++			 DSI_PORT_BIT(INT_HSTX_TO), "HSTX timeout");
+ 	dsi_handle_error(dsi, &ret, stat,
+-			 DSI1_INT_LPRX_TO, "LPRX timeout");
++			 DSI_PORT_BIT(INT_LPRX_TO), "LPRX timeout");
+ 	dsi_handle_error(dsi, &ret, stat,
+-			 DSI1_INT_TA_TO, "turnaround timeout");
++			 DSI_PORT_BIT(INT_TA_TO), "turnaround timeout");
+ 	dsi_handle_error(dsi, &ret, stat,
+-			 DSI1_INT_PR_TO, "peripheral reset timeout");
++			 DSI_PORT_BIT(INT_PR_TO), "peripheral reset timeout");
+ 
+-	if (stat & (DSI1_INT_TXPKT1_DONE | DSI1_INT_PHY_DIR_RTF)) {
++	if (stat & ((dsi->variant->port ? DSI1_INT_TXPKT1_DONE :
++					  DSI0_INT_CMDC_DONE_MASK) |
++		    DSI_PORT_BIT(INT_PHY_DIR_RTF))) {
+ 		complete(&dsi->xfer_completion);
+ 		ret = IRQ_HANDLED;
+-	} else if (stat & DSI1_INT_HSTX_TO) {
++	} else if (stat & DSI_PORT_BIT(INT_HSTX_TO)) {
+ 		complete(&dsi->xfer_completion);
+ 		dsi->xfer_result = -ETIMEDOUT;
+ 		ret = IRQ_HANDLED;
+@@ -1390,12 +1469,12 @@ vc4_dsi_init_phy_clocks(struct vc4_dsi *dsi)
+ 	struct device *dev = &dsi->pdev->dev;
+ 	const char *parent_name = __clk_get_name(dsi->pll_phy_clock);
+ 	static const struct {
+-		const char *dsi0_name, *dsi1_name;
++		const char *name;
+ 		int div;
+ 	} phy_clocks[] = {
+-		{ "dsi0_byte", "dsi1_byte", 8 },
+-		{ "dsi0_ddr2", "dsi1_ddr2", 4 },
+-		{ "dsi0_ddr", "dsi1_ddr", 2 },
++		{ "byte", 8 },
++		{ "ddr2", 4 },
++		{ "ddr", 2 },
+ 	};
+ 	int i;
+ 
+@@ -1411,8 +1490,12 @@ vc4_dsi_init_phy_clocks(struct vc4_dsi *dsi)
+ 	for (i = 0; i < ARRAY_SIZE(phy_clocks); i++) {
+ 		struct clk_fixed_factor *fix = &dsi->phy_clocks[i];
+ 		struct clk_init_data init;
++		char clk_name[16];
+ 		int ret;
+ 
++		snprintf(clk_name, sizeof(clk_name),
++			 "dsi%u_%s", dsi->variant->port, phy_clocks[i].name);
++
+ 		/* We just use core fixed factor clock ops for the PHY
+ 		 * clocks.  The clocks are actually gated by the
+ 		 * PHY_AFEC0_DDRCLK_EN bits, which we should be
+@@ -1429,10 +1512,7 @@ vc4_dsi_init_phy_clocks(struct vc4_dsi *dsi)
+ 		memset(&init, 0, sizeof(init));
+ 		init.parent_names = &parent_name;
+ 		init.num_parents = 1;
+-		if (dsi->port == 1)
+-			init.name = phy_clocks[i].dsi1_name;
+-		else
+-			init.name = phy_clocks[i].dsi0_name;
++		init.name = clk_name;
+ 		init.ops = &clk_fixed_factor_ops;
+ 
+ 		ret = devm_clk_hw_register(dev, &fix->hw);
+@@ -1451,7 +1531,6 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+ 	struct drm_device *drm = dev_get_drvdata(master);
+-	struct vc4_dev *vc4 = to_vc4_dev(drm);
+ 	struct vc4_dsi *dsi = dev_get_drvdata(dev);
+ 	struct vc4_dsi_encoder *vc4_dsi_encoder;
+ 	struct drm_panel *panel;
+@@ -1463,7 +1542,7 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ 	if (!match)
+ 		return -ENODEV;
+ 
+-	dsi->port = (uintptr_t)match->data;
++	dsi->variant = match->data;
+ 
+ 	vc4_dsi_encoder = devm_kzalloc(dev, sizeof(*vc4_dsi_encoder),
+ 				       GFP_KERNEL);
+@@ -1471,7 +1550,8 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ 		return -ENOMEM;
+ 
+ 	INIT_LIST_HEAD(&dsi->bridge_chain);
+-	vc4_dsi_encoder->base.type = VC4_ENCODER_TYPE_DSI1;
++	vc4_dsi_encoder->base.type = dsi->variant->port ?
++			VC4_ENCODER_TYPE_DSI1 : VC4_ENCODER_TYPE_DSI0;
+ 	vc4_dsi_encoder->dsi = dsi;
+ 	dsi->encoder = &vc4_dsi_encoder->base.base;
+ 
+@@ -1480,13 +1560,8 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ 		return PTR_ERR(dsi->regs);
+ 
+ 	dsi->regset.base = dsi->regs;
+-	if (dsi->port == 0) {
+-		dsi->regset.regs = dsi0_regs;
+-		dsi->regset.nregs = ARRAY_SIZE(dsi0_regs);
+-	} else {
+-		dsi->regset.regs = dsi1_regs;
+-		dsi->regset.nregs = ARRAY_SIZE(dsi1_regs);
+-	}
++	dsi->regset.regs = dsi->variant->regs;
++	dsi->regset.nregs = dsi->variant->nregs;
+ 
+ 	if (DSI_PORT_READ(ID) != DSI_ID_VALUE) {
+ 		dev_err(dev, "Port returned 0x%08x for ID instead of 0x%08x\n",
+@@ -1498,7 +1573,7 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ 	 * from the ARM.  It does handle writes from the DMA engine,
+ 	 * so set up a channel for talking to it.
+ 	 */
+-	if (dsi->port == 1) {
++	if (dsi->variant->broken_axi_workaround) {
+ 		dsi->reg_dma_mem = dma_alloc_coherent(dev, 4,
+ 						      &dsi->reg_dma_paddr,
+ 						      GFP_KERNEL);
+@@ -1604,9 +1679,6 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (dsi->port == 1)
+-		vc4->dsi1 = dsi;
+-
+ 	drm_simple_encoder_init(drm, dsi->encoder, DRM_MODE_ENCODER_DSI);
+ 	drm_encoder_helper_add(dsi->encoder, &vc4_dsi_encoder_helper_funcs);
+ 
+@@ -1622,10 +1694,7 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ 	 */
+ 	list_splice_init(&dsi->encoder->bridge_chain, &dsi->bridge_chain);
+ 
+-	if (dsi->port == 0)
+-		vc4_debugfs_add_regset32(drm, "dsi0_regs", &dsi->regset);
+-	else
+-		vc4_debugfs_add_regset32(drm, "dsi1_regs", &dsi->regset);
++	vc4_debugfs_add_regset32(drm, dsi->variant->debugfs_name, &dsi->regset);
+ 
+ 	pm_runtime_enable(dev);
+ 
+@@ -1635,8 +1704,6 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ static void vc4_dsi_unbind(struct device *dev, struct device *master,
+ 			   void *data)
+ {
+-	struct drm_device *drm = dev_get_drvdata(master);
+-	struct vc4_dev *vc4 = to_vc4_dev(drm);
+ 	struct vc4_dsi *dsi = dev_get_drvdata(dev);
+ 
+ 	if (dsi->bridge)
+@@ -1648,9 +1715,6 @@ static void vc4_dsi_unbind(struct device *dev, struct device *master,
+ 	 */
+ 	list_splice_init(&dsi->bridge_chain, &dsi->encoder->bridge_chain);
+ 	drm_encoder_cleanup(dsi->encoder);
+-
+-	if (dsi->port == 1)
+-		vc4->dsi1 = NULL;
+ }
+ 
+ static const struct component_ops vc4_dsi_ops = {
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index a308f2d05d173..08175c3dd374b 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -83,6 +83,8 @@
+ #define CEC_CLOCK_FREQ 40000
+ #define VC4_HSM_MID_CLOCK 149985000
+ 
++#define HDMI_14_MAX_TMDS_CLK   (340 * 1000 * 1000)
++
+ static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused)
+ {
+ 	struct drm_info_node *node = (struct drm_info_node *)m->private;
+@@ -209,7 +211,9 @@ static int vc4_hdmi_connector_get_modes(struct drm_connector *connector)
+ static void vc4_hdmi_connector_reset(struct drm_connector *connector)
+ {
+ 	drm_atomic_helper_connector_reset(connector);
+-	drm_atomic_helper_connector_tv_reset(connector);
++
++	if (connector->state)
++		drm_atomic_helper_connector_tv_reset(connector);
+ }
+ 
+ static const struct drm_connector_funcs vc4_hdmi_connector_funcs = {
+@@ -518,12 +522,12 @@ static void vc4_hdmi_set_timings(struct vc4_hdmi *vc4_hdmi,
+ 				   VC4_HDMI_VERTA_VFP) |
+ 		     VC4_SET_FIELD(mode->crtc_vdisplay, VC4_HDMI_VERTA_VAL));
+ 	u32 vertb = (VC4_SET_FIELD(0, VC4_HDMI_VERTB_VSPO) |
+-		     VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end,
++		     VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end +
++				   interlaced,
+ 				   VC4_HDMI_VERTB_VBP));
+ 	u32 vertb_even = (VC4_SET_FIELD(0, VC4_HDMI_VERTB_VSPO) |
+ 			  VC4_SET_FIELD(mode->crtc_vtotal -
+-					mode->crtc_vsync_end -
+-					interlaced,
++					mode->crtc_vsync_end,
+ 					VC4_HDMI_VERTB_VBP));
+ 
+ 	HDMI_WRITE(HDMI_HORZA,
+@@ -561,13 +565,13 @@ static void vc5_hdmi_set_timings(struct vc4_hdmi *vc4_hdmi,
+ 		     VC4_SET_FIELD(mode->crtc_vsync_start - mode->crtc_vdisplay,
+ 				   VC5_HDMI_VERTA_VFP) |
+ 		     VC4_SET_FIELD(mode->crtc_vdisplay, VC5_HDMI_VERTA_VAL));
+-	u32 vertb = (VC4_SET_FIELD(0, VC5_HDMI_VERTB_VSPO) |
++	u32 vertb = (VC4_SET_FIELD(mode->htotal >> (2 - pixel_rep),
++				   VC5_HDMI_VERTB_VSPO) |
+ 		     VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end,
+ 				   VC4_HDMI_VERTB_VBP));
+ 	u32 vertb_even = (VC4_SET_FIELD(0, VC5_HDMI_VERTB_VSPO) |
+ 			  VC4_SET_FIELD(mode->crtc_vtotal -
+-					mode->crtc_vsync_end -
+-					interlaced,
++					mode->crtc_vsync_end - interlaced,
+ 					VC4_HDMI_VERTB_VBP));
+ 
+ 	HDMI_WRITE(HDMI_VEC_INTERFACE_XBAR, 0x354021);
+@@ -1031,22 +1035,12 @@ static int vc4_hdmi_audio_hw_params(struct snd_pcm_substream *substream,
+ 	audio_packet_config |= VC4_SET_FIELD(channel_mask,
+ 					     VC4_HDMI_AUDIO_PACKET_CEA_MASK);
+ 
+-	/* Set the MAI threshold.  This logic mimics the firmware's. */
+-	if (vc4_hdmi->audio.samplerate > 96000) {
+-		HDMI_WRITE(HDMI_MAI_THR,
+-			   VC4_SET_FIELD(0x12, VC4_HD_MAI_THR_DREQHIGH) |
+-			   VC4_SET_FIELD(0x12, VC4_HD_MAI_THR_DREQLOW));
+-	} else if (vc4_hdmi->audio.samplerate > 48000) {
+-		HDMI_WRITE(HDMI_MAI_THR,
+-			   VC4_SET_FIELD(0x14, VC4_HD_MAI_THR_DREQHIGH) |
+-			   VC4_SET_FIELD(0x12, VC4_HD_MAI_THR_DREQLOW));
+-	} else {
+-		HDMI_WRITE(HDMI_MAI_THR,
+-			   VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICHIGH) |
+-			   VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICLOW) |
+-			   VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_DREQHIGH) |
+-			   VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_DREQLOW));
+-	}
++	/* Set the MAI threshold */
++	HDMI_WRITE(HDMI_MAI_THR,
++		   VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICHIGH) |
++		   VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICLOW) |
++		   VC4_SET_FIELD(0x06, VC4_HD_MAI_THR_DREQHIGH) |
++		   VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_DREQLOW));
+ 
+ 	HDMI_WRITE(HDMI_MAI_CONFIG,
+ 		   VC4_HDMI_MAI_CONFIG_BIT_REVERSE |
+@@ -1231,12 +1225,12 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi)
+ 	struct snd_soc_card *card = &vc4_hdmi->audio.card;
+ 	struct device *dev = &vc4_hdmi->pdev->dev;
+ 	const __be32 *addr;
+-	int index;
++	int index, len;
+ 	int ret;
+ 
+-	if (!of_find_property(dev->of_node, "dmas", NULL)) {
++	if (!of_find_property(dev->of_node, "dmas", &len) || !len) {
+ 		dev_warn(dev,
+-			 "'dmas' DT property is missing, no HDMI audio\n");
++			 "'dmas' DT property is missing or empty, no HDMI audio\n");
+ 		return 0;
+ 	}
+ 
+@@ -1947,7 +1941,7 @@ static const struct vc4_hdmi_variant bcm2711_hdmi0_variant = {
+ 	.encoder_type		= VC4_ENCODER_TYPE_HDMI0,
+ 	.debugfs_name		= "hdmi0_regs",
+ 	.card_name		= "vc4-hdmi-0",
+-	.max_pixel_clock	= 297000000,
++	.max_pixel_clock	= HDMI_14_MAX_TMDS_CLK,
+ 	.registers		= vc5_hdmi_hdmi0_fields,
+ 	.num_registers		= ARRAY_SIZE(vc5_hdmi_hdmi0_fields),
+ 	.phy_lane_mapping	= {
+@@ -1973,7 +1967,7 @@ static const struct vc4_hdmi_variant bcm2711_hdmi1_variant = {
+ 	.encoder_type		= VC4_ENCODER_TYPE_HDMI1,
+ 	.debugfs_name		= "hdmi1_regs",
+ 	.card_name		= "vc4-hdmi-1",
+-	.max_pixel_clock	= 297000000,
++	.max_pixel_clock	= HDMI_14_MAX_TMDS_CLK,
+ 	.registers		= vc5_hdmi_hdmi1_fields,
+ 	.num_registers		= ARRAY_SIZE(vc5_hdmi_hdmi1_fields),
+ 	.phy_lane_mapping	= {
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index af4b8944a6032..4df222a830493 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -303,16 +303,16 @@ static int vc4_plane_margins_adj(struct drm_plane_state *pstate)
+ 					       adjhdisplay,
+ 					       crtc_state->mode.hdisplay);
+ 	vc4_pstate->crtc_x += left;
+-	if (vc4_pstate->crtc_x > crtc_state->mode.hdisplay - left)
+-		vc4_pstate->crtc_x = crtc_state->mode.hdisplay - left;
++	if (vc4_pstate->crtc_x > crtc_state->mode.hdisplay - right)
++		vc4_pstate->crtc_x = crtc_state->mode.hdisplay - right;
+ 
+ 	adjvdisplay = crtc_state->mode.vdisplay - (top + bottom);
+ 	vc4_pstate->crtc_y = DIV_ROUND_CLOSEST(vc4_pstate->crtc_y *
+ 					       adjvdisplay,
+ 					       crtc_state->mode.vdisplay);
+ 	vc4_pstate->crtc_y += top;
+-	if (vc4_pstate->crtc_y > crtc_state->mode.vdisplay - top)
+-		vc4_pstate->crtc_y = crtc_state->mode.vdisplay - top;
++	if (vc4_pstate->crtc_y > crtc_state->mode.vdisplay - bottom)
++		vc4_pstate->crtc_y = crtc_state->mode.vdisplay - bottom;
+ 
+ 	vc4_pstate->crtc_w = DIV_ROUND_CLOSEST(vc4_pstate->crtc_w *
+ 					       adjhdisplay,
+@@ -332,7 +332,6 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
+ 	struct vc4_plane_state *vc4_state = to_vc4_plane_state(state);
+ 	struct drm_framebuffer *fb = state->fb;
+ 	struct drm_gem_cma_object *bo = drm_fb_cma_get_gem_obj(fb, 0);
+-	u32 subpixel_src_mask = (1 << 16) - 1;
+ 	int num_planes = fb->format->num_planes;
+ 	struct drm_crtc_state *crtc_state;
+ 	u32 h_subsample = fb->format->hsub;
+@@ -354,18 +353,15 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
+ 	for (i = 0; i < num_planes; i++)
+ 		vc4_state->offsets[i] = bo->paddr + fb->offsets[i];
+ 
+-	/* We don't support subpixel source positioning for scaling. */
+-	if ((state->src.x1 & subpixel_src_mask) ||
+-	    (state->src.x2 & subpixel_src_mask) ||
+-	    (state->src.y1 & subpixel_src_mask) ||
+-	    (state->src.y2 & subpixel_src_mask)) {
+-		return -EINVAL;
+-	}
+-
+-	vc4_state->src_x = state->src.x1 >> 16;
+-	vc4_state->src_y = state->src.y1 >> 16;
+-	vc4_state->src_w[0] = (state->src.x2 - state->src.x1) >> 16;
+-	vc4_state->src_h[0] = (state->src.y2 - state->src.y1) >> 16;
++	/*
++	 * We don't support subpixel source positioning for scaling,
++	 * but fractional coordinates can be generated by clipping
++	 * so just round for now
++	 */
++	vc4_state->src_x = DIV_ROUND_CLOSEST(state->src.x1, 1 << 16);
++	vc4_state->src_y = DIV_ROUND_CLOSEST(state->src.y1, 1 << 16);
++	vc4_state->src_w[0] = DIV_ROUND_CLOSEST(state->src.x2, 1 << 16) - vc4_state->src_x;
++	vc4_state->src_h[0] = DIV_ROUND_CLOSEST(state->src.y2, 1 << 16) - vc4_state->src_y;
+ 
+ 	vc4_state->crtc_x = state->dst.x1;
+ 	vc4_state->crtc_y = state->dst.y1;
+diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+index c8da7adc6b307..33b8ebab178a1 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
++++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+@@ -470,8 +470,10 @@ static int virtio_gpu_get_caps_ioctl(struct drm_device *dev,
+ 	spin_unlock(&vgdev->display_info_lock);
+ 
+ 	/* not in cache - need to talk to hw */
+-	virtio_gpu_cmd_get_capset(vgdev, found_valid, args->cap_set_ver,
+-				  &cache_ent);
++	ret = virtio_gpu_cmd_get_capset(vgdev, found_valid, args->cap_set_ver,
++					&cache_ent);
++	if (ret)
++		return ret;
+ 	virtio_gpu_notify(vgdev);
+ 
+ copy_exit:
+diff --git a/drivers/hid/hid-alps.c b/drivers/hid/hid-alps.c
+index 6b665931147df..ef73fef1b3e3f 100644
+--- a/drivers/hid/hid-alps.c
++++ b/drivers/hid/hid-alps.c
+@@ -830,6 +830,8 @@ static const struct hid_device_id alps_id[] = {
+ 		USB_VENDOR_ID_ALPS_JP, HID_DEVICE_ID_ALPS_U1_DUAL) },
+ 	{ HID_DEVICE(HID_BUS_ANY, HID_GROUP_ANY,
+ 		USB_VENDOR_ID_ALPS_JP, HID_DEVICE_ID_ALPS_U1) },
++	{ HID_DEVICE(HID_BUS_ANY, HID_GROUP_ANY,
++		USB_VENDOR_ID_ALPS_JP, HID_DEVICE_ID_ALPS_U1_UNICORN_LEGACY) },
+ 	{ HID_DEVICE(HID_BUS_ANY, HID_GROUP_ANY,
+ 		USB_VENDOR_ID_ALPS_JP, HID_DEVICE_ID_ALPS_T4_BTNLESS) },
+ 	{ }
+diff --git a/drivers/hid/hid-cp2112.c b/drivers/hid/hid-cp2112.c
+index 477baa30889cc..172f20e88c6c9 100644
+--- a/drivers/hid/hid-cp2112.c
++++ b/drivers/hid/hid-cp2112.c
+@@ -788,6 +788,11 @@ static int cp2112_xfer(struct i2c_adapter *adap, u16 addr,
+ 		data->word = le16_to_cpup((__le16 *)buf);
+ 		break;
+ 	case I2C_SMBUS_I2C_BLOCK_DATA:
++		if (read_length > I2C_SMBUS_BLOCK_MAX) {
++			ret = -EINVAL;
++			goto power_normal;
++		}
++
+ 		memcpy(data->block + 1, buf, read_length);
+ 		break;
+ 	case I2C_SMBUS_BLOCK_DATA:
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 3744c3db51405..bb096dfb7b363 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -389,7 +389,9 @@
+ #define USB_DEVICE_ID_TOSHIBA_CLICK_L9W	0x0401
+ #define USB_DEVICE_ID_HP_X2		0x074d
+ #define USB_DEVICE_ID_HP_X2_10_COVER	0x0755
++#define I2C_DEVICE_ID_HP_SPECTRE_X360_15	0x2817
+ #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN	0x2706
++#define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN	0x261A
+ 
+ #define USB_VENDOR_ID_ELECOM		0x056e
+ #define USB_DEVICE_ID_ELECOM_BM084	0x0061
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index a17d1dda95703..75a4d8d6bb0fd 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -324,6 +324,10 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ 	  HID_BATTERY_QUIRK_IGNORE },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN),
+ 	  HID_BATTERY_QUIRK_IGNORE },
++	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_15),
++	  HID_BATTERY_QUIRK_IGNORE },
++	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN),
++	  HID_BATTERY_QUIRK_IGNORE },
+ 	{}
+ };
+ 
+diff --git a/drivers/hid/hid-mcp2221.c b/drivers/hid/hid-mcp2221.c
+index 4211b9839209b..de52e9f7bb8cb 100644
+--- a/drivers/hid/hid-mcp2221.c
++++ b/drivers/hid/hid-mcp2221.c
+@@ -385,6 +385,9 @@ static int mcp_smbus_write(struct mcp2221 *mcp, u16 addr,
+ 		data_len = 7;
+ 		break;
+ 	default:
++		if (len > I2C_SMBUS_BLOCK_MAX)
++			return -EINVAL;
++
+ 		memcpy(&mcp->txbuf[5], buf, len);
+ 		data_len = len + 5;
+ 	}
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 329bb1a46f90e..4dbf69078387f 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2124,7 +2124,7 @@ static int wacom_register_inputs(struct wacom *wacom)
+ 
+ 	error = wacom_setup_pad_input_capabilities(pad_input_dev, wacom_wac);
+ 	if (error) {
+-		/* no pad in use on this interface */
++		/* no pad events using this interface */
+ 		input_free_device(pad_input_dev);
+ 		wacom_wac->pad_input = NULL;
+ 		pad_input_dev = NULL;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index d90bfa8b7313e..d8d127fcc82a8 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -638,9 +638,26 @@ static int wacom_intuos_id_mangle(int tool_id)
+ 	return (tool_id & ~0xFFF) << 4 | (tool_id & 0xFFF);
+ }
+ 
++static bool wacom_is_art_pen(int tool_id)
++{
++	bool is_art_pen = false;
++
++	switch (tool_id) {
++	case 0x885:	/* Intuos3 Marker Pen */
++	case 0x804:	/* Intuos4/5 13HD/24HD Marker Pen */
++	case 0x10804:	/* Intuos4/5 13HD/24HD Art Pen */
++		is_art_pen = true;
++		break;
++	}
++	return is_art_pen;
++}
++
+ static int wacom_intuos_get_tool_type(int tool_id)
+ {
+-	int tool_type;
++	int tool_type = BTN_TOOL_PEN;
++
++	if (wacom_is_art_pen(tool_id))
++		return tool_type;
+ 
+ 	switch (tool_id) {
+ 	case 0x812: /* Inking pen */
+@@ -655,12 +672,9 @@ static int wacom_intuos_get_tool_type(int tool_id)
+ 	case 0x852:
+ 	case 0x823: /* Intuos3 Grip Pen */
+ 	case 0x813: /* Intuos3 Classic Pen */
+-	case 0x885: /* Intuos3 Marker Pen */
+ 	case 0x802: /* Intuos4/5 13HD/24HD General Pen */
+-	case 0x804: /* Intuos4/5 13HD/24HD Marker Pen */
+ 	case 0x8e2: /* IntuosHT2 pen */
+ 	case 0x022:
+-	case 0x10804: /* Intuos4/5 13HD/24HD Art Pen */
+ 	case 0x10842: /* MobileStudio Pro Pro Pen slim */
+ 	case 0x14802: /* Intuos4/5 13HD/24HD Classic Pen */
+ 	case 0x16802: /* Cintiq 13HD Pro Pen */
+@@ -718,10 +732,6 @@ static int wacom_intuos_get_tool_type(int tool_id)
+ 	case 0x10902: /* Intuos4/5 13HD/24HD Airbrush */
+ 		tool_type = BTN_TOOL_AIRBRUSH;
+ 		break;
+-
+-	default: /* Unknown tool */
+-		tool_type = BTN_TOOL_PEN;
+-		break;
+ 	}
+ 	return tool_type;
+ }
+@@ -2006,7 +2016,6 @@ static void wacom_wac_pad_usage_mapping(struct hid_device *hdev,
+ 		wacom_wac->has_mute_touch_switch = true;
+ 		usage->type = EV_SW;
+ 		usage->code = SW_MUTE_DEVICE;
+-		features->device_type |= WACOM_DEVICETYPE_PAD;
+ 		break;
+ 	case WACOM_HID_WD_TOUCHSTRIP:
+ 		wacom_map_usage(input, usage, field, EV_ABS, ABS_RX, 0);
+@@ -2086,6 +2095,30 @@ static void wacom_wac_pad_event(struct hid_device *hdev, struct hid_field *field
+ 			wacom_wac->hid_data.inrange_state |= value;
+ 	}
+ 
++	/* Process touch switch state first since it is reported through touch interface,
++	 * which is indepentent of pad interface. In the case when there are no other pad
++	 * events, the pad interface will not even be created.
++	 */
++	if ((equivalent_usage == WACOM_HID_WD_MUTE_DEVICE) ||
++	   (equivalent_usage == WACOM_HID_WD_TOUCHONOFF)) {
++		if (wacom_wac->shared->touch_input) {
++			bool *is_touch_on = &wacom_wac->shared->is_touch_on;
++
++			if (equivalent_usage == WACOM_HID_WD_MUTE_DEVICE && value)
++				*is_touch_on = !(*is_touch_on);
++			else if (equivalent_usage == WACOM_HID_WD_TOUCHONOFF)
++				*is_touch_on = value;
++
++			input_report_switch(wacom_wac->shared->touch_input,
++					    SW_MUTE_DEVICE, !(*is_touch_on));
++			input_sync(wacom_wac->shared->touch_input);
++		}
++		return;
++	}
++
++	if (!input)
++		return;
++
+ 	switch (equivalent_usage) {
+ 	case WACOM_HID_WD_TOUCHRING:
+ 		/*
+@@ -2121,22 +2154,6 @@ static void wacom_wac_pad_event(struct hid_device *hdev, struct hid_field *field
+ 			input_event(input, usage->type, usage->code, 0);
+ 		break;
+ 
+-	case WACOM_HID_WD_MUTE_DEVICE:
+-	case WACOM_HID_WD_TOUCHONOFF:
+-		if (wacom_wac->shared->touch_input) {
+-			bool *is_touch_on = &wacom_wac->shared->is_touch_on;
+-
+-			if (equivalent_usage == WACOM_HID_WD_MUTE_DEVICE && value)
+-				*is_touch_on = !(*is_touch_on);
+-			else if (equivalent_usage == WACOM_HID_WD_TOUCHONOFF)
+-				*is_touch_on = value;
+-
+-			input_report_switch(wacom_wac->shared->touch_input,
+-					    SW_MUTE_DEVICE, !(*is_touch_on));
+-			input_sync(wacom_wac->shared->touch_input);
+-		}
+-		break;
+-
+ 	case WACOM_HID_WD_MODE_CHANGE:
+ 		if (wacom_wac->is_direct_mode != value) {
+ 			wacom_wac->is_direct_mode = value;
+@@ -2312,6 +2329,9 @@ static void wacom_wac_pen_event(struct hid_device *hdev, struct hid_field *field
+ 		}
+ 		return;
+ 	case HID_DG_TWIST:
++		/* don't modify the value if the pen doesn't support the feature */
++		if (!wacom_is_art_pen(wacom_wac->id[0])) return;
++
+ 		/*
+ 		 * Userspace expects pen twist to have its zero point when
+ 		 * the buttons/finger is on the tablet's left. HID values
+@@ -2763,7 +2783,7 @@ void wacom_wac_event(struct hid_device *hdev, struct hid_field *field,
+ 	/* usage tests must precede field tests */
+ 	if (WACOM_BATTERY_USAGE(usage))
+ 		wacom_wac_battery_event(hdev, field, usage, value);
+-	else if (WACOM_PAD_FIELD(field) && wacom->wacom_wac.pad_input)
++	else if (WACOM_PAD_FIELD(field))
+ 		wacom_wac_pad_event(hdev, field, usage, value);
+ 	else if (WACOM_PEN_FIELD(field) && wacom->wacom_wac.pen_input)
+ 		wacom_wac_pen_event(hdev, field, usage, value);
+diff --git a/drivers/hwmon/drivetemp.c b/drivers/hwmon/drivetemp.c
+index 72c7603739578..00303af82a777 100644
+--- a/drivers/hwmon/drivetemp.c
++++ b/drivers/hwmon/drivetemp.c
+@@ -621,3 +621,4 @@ module_exit(drivetemp_exit);
+ MODULE_AUTHOR("Guenter Roeck <linus@roeck-us.net>");
+ MODULE_DESCRIPTION("Hard drive temperature monitor");
+ MODULE_LICENSE("GPL");
++MODULE_ALIAS("platform:drivetemp");
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index e8a7f47b8fce3..5ddc8103503b5 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -1382,6 +1382,7 @@ static int coresight_remove_match(struct device *dev, void *data)
+ 			 * platform data.
+ 			 */
+ 			fwnode_handle_put(conn->child_fwnode);
++			conn->child_fwnode = NULL;
+ 			/* No need to continue */
+ 			break;
+ 		}
+diff --git a/drivers/hwtracing/intel_th/msu-sink.c b/drivers/hwtracing/intel_th/msu-sink.c
+index 2c7f5116be126..891b28ea25fe6 100644
+--- a/drivers/hwtracing/intel_th/msu-sink.c
++++ b/drivers/hwtracing/intel_th/msu-sink.c
+@@ -71,6 +71,9 @@ static int msu_sink_alloc_window(void *data, struct sg_table **sgt, size_t size)
+ 		block = dma_alloc_coherent(priv->dev->parent->parent,
+ 					   PAGE_SIZE, &sg_dma_address(sg_ptr),
+ 					   GFP_KERNEL);
++		if (!block)
++			return -ENOMEM;
++
+ 		sg_set_buf(sg_ptr, block, PAGE_SIZE);
+ 	}
+ 
+diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
+index 3a77551fb4fc1..24f56a7c0fcf1 100644
+--- a/drivers/hwtracing/intel_th/msu.c
++++ b/drivers/hwtracing/intel_th/msu.c
+@@ -1053,6 +1053,16 @@ msc_buffer_set_uc(struct msc_window *win, unsigned int nr_segs) {}
+ static inline void msc_buffer_set_wb(struct msc_window *win) {}
+ #endif /* CONFIG_X86 */
+ 
++static struct page *msc_sg_page(struct scatterlist *sg)
++{
++	void *addr = sg_virt(sg);
++
++	if (is_vmalloc_addr(addr))
++		return vmalloc_to_page(addr);
++
++	return sg_page(sg);
++}
++
+ /**
+  * msc_buffer_win_alloc() - alloc a window for a multiblock mode
+  * @msc:	MSC device
+@@ -1125,7 +1135,7 @@ static void __msc_buffer_win_free(struct msc *msc, struct msc_window *win)
+ 	int i;
+ 
+ 	for_each_sg(win->sgt->sgl, sg, win->nr_segs, i) {
+-		struct page *page = sg_page(sg);
++		struct page *page = msc_sg_page(sg);
+ 
+ 		page->mapping = NULL;
+ 		dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE,
+@@ -1387,7 +1397,7 @@ found:
+ 	pgoff -= win->pgoff;
+ 
+ 	for_each_sg(win->sgt->sgl, sg, win->nr_segs, blk) {
+-		struct page *page = sg_page(sg);
++		struct page *page = msc_sg_page(sg);
+ 		size_t pgsz = PFN_DOWN(sg->length);
+ 
+ 		if (pgoff < pgsz)
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index 817cdb29bbd89..e25438025b9f2 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -100,8 +100,10 @@ static int intel_th_pci_probe(struct pci_dev *pdev,
+ 		}
+ 
+ 	th = intel_th_alloc(&pdev->dev, drvdata, resource, r);
+-	if (IS_ERR(th))
+-		return PTR_ERR(th);
++	if (IS_ERR(th)) {
++		err = PTR_ERR(th);
++		goto err_free_irq;
++	}
+ 
+ 	th->activate   = intel_th_pci_activate;
+ 	th->deactivate = intel_th_pci_deactivate;
+@@ -109,6 +111,10 @@ static int intel_th_pci_probe(struct pci_dev *pdev,
+ 	pci_set_master(pdev);
+ 
+ 	return 0;
++
++err_free_irq:
++	pci_free_irq_vectors(pdev);
++	return err;
+ }
+ 
+ static void intel_th_pci_remove(struct pci_dev *pdev)
+@@ -278,6 +284,21 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x54a6),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Meteor Lake-P */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7e24),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
++	{
++		/* Raptor Lake-S */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7a26),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
++	{
++		/* Raptor Lake-S CPU */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa76f),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{
+ 		/* Alder Lake CPU */
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f),
+diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c
+index 0abce487ead72..50928216b3f28 100644
+--- a/drivers/i2c/busses/i2c-cadence.c
++++ b/drivers/i2c/busses/i2c-cadence.c
+@@ -566,8 +566,13 @@ static void cdns_i2c_mrecv(struct cdns_i2c *id)
+ 	ctrl_reg = cdns_i2c_readreg(CDNS_I2C_CR_OFFSET);
+ 	ctrl_reg |= CDNS_I2C_CR_RW | CDNS_I2C_CR_CLR_FIFO;
+ 
++	/*
++	 * Receive up to I2C_SMBUS_BLOCK_MAX data bytes, plus one message length
++	 * byte, plus one checksum byte if PEC is enabled. p_msg->len will be 2 if
++	 * PEC is enabled, otherwise 1.
++	 */
+ 	if (id->p_msg->flags & I2C_M_RECV_LEN)
+-		id->recv_count = I2C_SMBUS_BLOCK_MAX + 1;
++		id->recv_count = I2C_SMBUS_BLOCK_MAX + id->p_msg->len;
+ 
+ 	id->curr_recv_count = id->recv_count;
+ 
+@@ -753,6 +758,9 @@ static int cdns_i2c_process_msg(struct cdns_i2c *id, struct i2c_msg *msg,
+ 	if (id->err_status & CDNS_I2C_IXR_ARB_LOST)
+ 		return -EAGAIN;
+ 
++	if (msg->flags & I2C_M_RECV_LEN)
++		msg->len += min_t(unsigned int, msg->buf[0], I2C_SMBUS_BLOCK_MAX);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index d9ac62c1ac25e..31e3d2c9d6bc5 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -123,11 +123,11 @@ enum i2c_addr {
+  * Since the addr regs are sprinkled all over the address space,
+  * use this array to get the address or each register.
+  */
+-#define I2C_NUM_OWN_ADDR 10
++#define I2C_NUM_OWN_ADDR 2
++#define I2C_NUM_OWN_ADDR_SUPPORTED 2
++
+ static const int npcm_i2caddr[I2C_NUM_OWN_ADDR] = {
+-	NPCM_I2CADDR1, NPCM_I2CADDR2, NPCM_I2CADDR3, NPCM_I2CADDR4,
+-	NPCM_I2CADDR5, NPCM_I2CADDR6, NPCM_I2CADDR7, NPCM_I2CADDR8,
+-	NPCM_I2CADDR9, NPCM_I2CADDR10,
++	NPCM_I2CADDR1, NPCM_I2CADDR2,
+ };
+ #endif
+ 
+@@ -391,14 +391,10 @@ static void npcm_i2c_disable(struct npcm_i2c *bus)
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ 	int i;
+ 
+-	/* select bank 0 for I2C addresses */
+-	npcm_i2c_select_bank(bus, I2C_BANK_0);
+-
+ 	/* Slave addresses removal */
+-	for (i = I2C_SLAVE_ADDR1; i < I2C_NUM_OWN_ADDR; i++)
++	for (i = I2C_SLAVE_ADDR1; i < I2C_NUM_OWN_ADDR_SUPPORTED; i++)
+ 		iowrite8(0, bus->reg + npcm_i2caddr[i]);
+ 
+-	npcm_i2c_select_bank(bus, I2C_BANK_1);
+ #endif
+ 	/* Disable module */
+ 	i2cctl2 = ioread8(bus->reg + NPCM_I2CCTL2);
+@@ -603,8 +599,7 @@ static int npcm_i2c_slave_enable(struct npcm_i2c *bus, enum i2c_addr addr_type,
+ 			i2cctl1 &= ~NPCM_I2CCTL1_GCMEN;
+ 		iowrite8(i2cctl1, bus->reg + NPCM_I2CCTL1);
+ 		return 0;
+-	}
+-	if (addr_type == I2C_ARP_ADDR) {
++	} else if (addr_type == I2C_ARP_ADDR) {
+ 		i2cctl3 = ioread8(bus->reg + NPCM_I2CCTL3);
+ 		if (enable)
+ 			i2cctl3 |= I2CCTL3_ARPMEN;
+@@ -613,16 +608,16 @@ static int npcm_i2c_slave_enable(struct npcm_i2c *bus, enum i2c_addr addr_type,
+ 		iowrite8(i2cctl3, bus->reg + NPCM_I2CCTL3);
+ 		return 0;
+ 	}
++	if (addr_type > I2C_SLAVE_ADDR2 && addr_type <= I2C_SLAVE_ADDR10)
++		dev_err(bus->dev, "try to enable more than 2 SA not supported\n");
++
+ 	if (addr_type >= I2C_ARP_ADDR)
+ 		return -EFAULT;
+-	/* select bank 0 for address 3 to 10 */
+-	if (addr_type > I2C_SLAVE_ADDR2)
+-		npcm_i2c_select_bank(bus, I2C_BANK_0);
++
+ 	/* Set and enable the address */
+ 	iowrite8(sa_reg, bus->reg + npcm_i2caddr[addr_type]);
+ 	npcm_i2c_slave_int_enable(bus, enable);
+-	if (addr_type > I2C_SLAVE_ADDR2)
+-		npcm_i2c_select_bank(bus, I2C_BANK_1);
++
+ 	return 0;
+ }
+ #endif
+@@ -843,15 +838,11 @@ static u8 npcm_i2c_get_slave_addr(struct npcm_i2c *bus, enum i2c_addr addr_type)
+ {
+ 	u8 slave_add;
+ 
+-	/* select bank 0 for address 3 to 10 */
+-	if (addr_type > I2C_SLAVE_ADDR2)
+-		npcm_i2c_select_bank(bus, I2C_BANK_0);
++	if (addr_type > I2C_SLAVE_ADDR2 && addr_type <= I2C_SLAVE_ADDR10)
++		dev_err(bus->dev, "get slave: try to use more than 2 SA not supported\n");
+ 
+ 	slave_add = ioread8(bus->reg + npcm_i2caddr[(int)addr_type]);
+ 
+-	if (addr_type > I2C_SLAVE_ADDR2)
+-		npcm_i2c_select_bank(bus, I2C_BANK_1);
+-
+ 	return slave_add;
+ }
+ 
+@@ -861,12 +852,12 @@ static int npcm_i2c_remove_slave_addr(struct npcm_i2c *bus, u8 slave_add)
+ 
+ 	/* Set the enable bit */
+ 	slave_add |= 0x80;
+-	npcm_i2c_select_bank(bus, I2C_BANK_0);
+-	for (i = I2C_SLAVE_ADDR1; i < I2C_NUM_OWN_ADDR; i++) {
++
++	for (i = I2C_SLAVE_ADDR1; i < I2C_NUM_OWN_ADDR_SUPPORTED; i++) {
+ 		if (ioread8(bus->reg + npcm_i2caddr[i]) == slave_add)
+ 			iowrite8(0, bus->reg + npcm_i2caddr[i]);
+ 	}
+-	npcm_i2c_select_bank(bus, I2C_BANK_1);
++
+ 	return 0;
+ }
+ 
+@@ -921,11 +912,15 @@ static int npcm_i2c_slave_get_wr_buf(struct npcm_i2c *bus)
+ 	for (i = 0; i < I2C_HW_FIFO_SIZE; i++) {
+ 		if (bus->slv_wr_size >= I2C_HW_FIFO_SIZE)
+ 			break;
+-		i2c_slave_event(bus->slave, I2C_SLAVE_READ_REQUESTED, &value);
++		if (bus->state == I2C_SLAVE_MATCH) {
++			i2c_slave_event(bus->slave, I2C_SLAVE_READ_REQUESTED, &value);
++			bus->state = I2C_OPER_STARTED;
++		} else {
++			i2c_slave_event(bus->slave, I2C_SLAVE_READ_PROCESSED, &value);
++		}
+ 		ind = (bus->slv_wr_ind + bus->slv_wr_size) % I2C_HW_FIFO_SIZE;
+ 		bus->slv_wr_buf[ind] = value;
+ 		bus->slv_wr_size++;
+-		i2c_slave_event(bus->slave, I2C_SLAVE_READ_PROCESSED, &value);
+ 	}
+ 	return I2C_HW_FIFO_SIZE - ret;
+ }
+@@ -973,7 +968,6 @@ static void npcm_i2c_slave_xmit(struct npcm_i2c *bus, u16 nwrite,
+ 	if (nwrite == 0)
+ 		return;
+ 
+-	bus->state = I2C_OPER_STARTED;
+ 	bus->operation = I2C_WRITE_OPER;
+ 
+ 	/* get the next buffer */
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index bdce6d3e53273..34fecf97a355b 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -2405,8 +2405,9 @@ void i2c_put_adapter(struct i2c_adapter *adap)
+ 	if (!adap)
+ 		return;
+ 
+-	put_device(&adap->dev);
+ 	module_put(adap->owner);
++	/* Should be last, otherwise we risk use-after-free with 'adap' */
++	put_device(&adap->dev);
+ }
+ EXPORT_SYMBOL(i2c_put_adapter);
+ 
+diff --git a/drivers/i2c/muxes/i2c-mux-gpmux.c b/drivers/i2c/muxes/i2c-mux-gpmux.c
+index d3acd8d66c323..33024acaac02b 100644
+--- a/drivers/i2c/muxes/i2c-mux-gpmux.c
++++ b/drivers/i2c/muxes/i2c-mux-gpmux.c
+@@ -134,6 +134,7 @@ static int i2c_mux_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_children:
++	of_node_put(child);
+ 	i2c_mux_del_adapters(muxc);
+ err_parent:
+ 	i2c_put_adapter(parent);
+diff --git a/drivers/iio/accel/bma400.h b/drivers/iio/accel/bma400.h
+index 5ad10db9819fe..416090c6b1e81 100644
+--- a/drivers/iio/accel/bma400.h
++++ b/drivers/iio/accel/bma400.h
+@@ -83,8 +83,27 @@
+ #define BMA400_ACC_ODR_MIN_WHOLE_HZ 25
+ #define BMA400_ACC_ODR_MIN_HZ       12
+ 
+-#define BMA400_SCALE_MIN            38357
+-#define BMA400_SCALE_MAX            306864
++/*
++ * BMA400_SCALE_MIN macro value represents m/s^2 for 1 LSB before
++ * converting to micro values for +-2g range.
++ *
++ * For +-2g - 1 LSB = 0.976562 milli g = 0.009576 m/s^2
++ * For +-4g - 1 LSB = 1.953125 milli g = 0.019153 m/s^2
++ * For +-16g - 1 LSB = 7.8125 milli g = 0.076614 m/s^2
++ *
++ * The raw value which is used to select the different ranges is determined
++ * by the first bit set position from the scale value, so BMA400_SCALE_MIN
++ * should be odd.
++ *
++ * Scale values for +-2g, +-4g, +-8g and +-16g are populated into bma400_scales
++ * array by left shifting BMA400_SCALE_MIN.
++ * e.g.:
++ * To select +-2g = 9577 << 0 = raw value to write is 0.
++ * To select +-8g = 9577 << 2 = raw value to write is 2.
++ * To select +-16g = 9577 << 3 = raw value to write is 3.
++ */
++#define BMA400_SCALE_MIN            9577
++#define BMA400_SCALE_MAX            76617
+ 
+ #define BMA400_NUM_REGULATORS       2
+ #define BMA400_VDD_REGULATOR        0
+diff --git a/drivers/iio/accel/bma400_core.c b/drivers/iio/accel/bma400_core.c
+index 7eeba80e32cb5..58aa6a0e11807 100644
+--- a/drivers/iio/accel/bma400_core.c
++++ b/drivers/iio/accel/bma400_core.c
+@@ -13,14 +13,14 @@
+ 
+ #include <linux/bitops.h>
+ #include <linux/device.h>
+-#include <linux/iio/iio.h>
+-#include <linux/iio/sysfs.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/mutex.h>
+ #include <linux/regmap.h>
+ #include <linux/regulator/consumer.h>
+ 
++#include <linux/iio/iio.h>
++
+ #include "bma400.h"
+ 
+ /*
+diff --git a/drivers/iio/light/isl29028.c b/drivers/iio/light/isl29028.c
+index 2f8b494f3e080..74e75477660af 100644
+--- a/drivers/iio/light/isl29028.c
++++ b/drivers/iio/light/isl29028.c
+@@ -627,7 +627,7 @@ static int isl29028_probe(struct i2c_client *client,
+ 					 ISL29028_POWER_OFF_DELAY_MS);
+ 	pm_runtime_use_autosuspend(&client->dev);
+ 
+-	ret = devm_iio_device_register(indio_dev->dev.parent, indio_dev);
++	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(&client->dev,
+ 			"%s(): iio registration failed with error %d\n",
+diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
+index cfc2110fc38ab..d84b1098762c1 100644
+--- a/drivers/infiniband/hw/hfi1/file_ops.c
++++ b/drivers/infiniband/hw/hfi1/file_ops.c
+@@ -1220,8 +1220,10 @@ static int setup_base_ctxt(struct hfi1_filedata *fd,
+ 		goto done;
+ 
+ 	ret = init_user_ctxt(fd, uctxt);
+-	if (ret)
++	if (ret) {
++		hfi1_free_ctxt_rcv_groups(uctxt);
+ 		goto done;
++	}
+ 
+ 	user_init(uctxt);
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index abe882ec1bae7..6dab03b7aca80 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -5642,8 +5642,8 @@ static irqreturn_t hns_roce_v2_msix_interrupt_abn(int irq, void *dev_id)
+ 
+ 		dev_err(dev, "AEQ overflow!\n");
+ 
+-		int_st |= 1 << HNS_ROCE_V2_VF_INT_ST_AEQ_OVERFLOW_S;
+-		roce_write(hr_dev, ROCEE_VF_ABN_INT_ST_REG, int_st);
++		roce_write(hr_dev, ROCEE_VF_ABN_INT_ST_REG,
++			   1 << HNS_ROCE_V2_VF_INT_ST_AEQ_OVERFLOW_S);
+ 
+ 		/* Set reset level for reset_event() */
+ 		if (ops->set_default_reset_request)
+diff --git a/drivers/infiniband/hw/mlx5/fs.c b/drivers/infiniband/hw/mlx5/fs.c
+index b3391ecedda7e..0404e6f22d37a 100644
+--- a/drivers/infiniband/hw/mlx5/fs.c
++++ b/drivers/infiniband/hw/mlx5/fs.c
+@@ -2081,12 +2081,10 @@ static int mlx5_ib_matcher_ns(struct uverbs_attr_bundle *attrs,
+ 		if (err)
+ 			return err;
+ 
+-		if (flags) {
+-			mlx5_ib_ft_type_to_namespace(
++		if (flags)
++			return mlx5_ib_ft_type_to_namespace(
+ 				MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_TX,
+ 				&obj->ns_type);
+-			return 0;
+-		}
+ 	}
+ 
+ 	obj->ns_type = MLX5_FLOW_NAMESPACE_BYPASS;
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index f7b97b8e81a43..3543b9af10b7a 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -2989,7 +2989,11 @@ struct ib_mr *qedr_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 len,
+ 
+ 	rc = dev->ops->rdma_alloc_tid(dev->rdma_ctx, &mr->hw_mr.itid);
+ 	if (rc) {
+-		DP_ERR(dev, "roce alloc tid returned an error %d\n", rc);
++		if (rc == -EINVAL)
++			DP_ERR(dev, "Out of MR resources\n");
++		else
++			DP_ERR(dev, "roce alloc tid returned error %d\n", rc);
++
+ 		goto err1;
+ 	}
+ 
+@@ -3084,8 +3088,12 @@ static struct qedr_mr *__qedr_alloc_mr(struct ib_pd *ibpd,
+ 
+ 	rc = dev->ops->rdma_alloc_tid(dev->rdma_ctx, &mr->hw_mr.itid);
+ 	if (rc) {
+-		DP_ERR(dev, "roce alloc tid returned an error %d\n", rc);
+-		goto err0;
++		if (rc == -EINVAL)
++			DP_ERR(dev, "Out of MR resources\n");
++		else
++			DP_ERR(dev, "roce alloc tid returned error %d\n", rc);
++
++		goto err1;
+ 	}
+ 
+ 	/* Index only, 18 bit long, lkey = itid << 8 | key */
+@@ -3109,7 +3117,7 @@ static struct qedr_mr *__qedr_alloc_mr(struct ib_pd *ibpd,
+ 	rc = dev->ops->rdma_register_tid(dev->rdma_ctx, &mr->hw_mr);
+ 	if (rc) {
+ 		DP_ERR(dev, "roce register tid returned an error %d\n", rc);
+-		goto err1;
++		goto err2;
+ 	}
+ 
+ 	mr->ibmr.lkey = mr->hw_mr.itid << 8 | mr->hw_mr.key;
+@@ -3118,8 +3126,10 @@ static struct qedr_mr *__qedr_alloc_mr(struct ib_pd *ibpd,
+ 	DP_DEBUG(dev, QEDR_MSG_MR, "alloc frmr: %x\n", mr->ibmr.lkey);
+ 	return mr;
+ 
+-err1:
++err2:
+ 	dev->ops->rdma_free_tid(dev->rdma_ctx, mr->hw_mr.itid);
++err1:
++	qedr_free_pbl(dev, &mr->info.pbl_info, mr->info.pbl_table);
+ err0:
+ 	kfree(mr);
+ 	return ERR_PTR(rc);
+@@ -3214,7 +3224,11 @@ struct ib_mr *qedr_get_dma_mr(struct ib_pd *ibpd, int acc)
+ 
+ 	rc = dev->ops->rdma_alloc_tid(dev->rdma_ctx, &mr->hw_mr.itid);
+ 	if (rc) {
+-		DP_ERR(dev, "roce alloc tid returned an error %d\n", rc);
++		if (rc == -EINVAL)
++			DP_ERR(dev, "Out of MR resources\n");
++		else
++			DP_ERR(dev, "roce alloc tid returned error %d\n", rc);
++
+ 		goto err1;
+ 	}
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index a1b79015e6f22..2847ab4d9a5f0 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -184,6 +184,14 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 	spin_lock_init(&qp->grp_lock);
+ 	spin_lock_init(&qp->state_lock);
+ 
++	spin_lock_init(&qp->req.task.state_lock);
++	spin_lock_init(&qp->resp.task.state_lock);
++	spin_lock_init(&qp->comp.task.state_lock);
++
++	spin_lock_init(&qp->sq.sq_lock);
++	spin_lock_init(&qp->rq.producer_lock);
++	spin_lock_init(&qp->rq.consumer_lock);
++
+ 	atomic_set(&qp->ssn, 0);
+ 	atomic_set(&qp->skb_out, 0);
+ }
+@@ -239,7 +247,6 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 	qp->req.opcode		= -1;
+ 	qp->comp.opcode		= -1;
+ 
+-	spin_lock_init(&qp->sq.sq_lock);
+ 	skb_queue_head_init(&qp->req_pkts);
+ 
+ 	rxe_init_task(rxe, &qp->req.task, qp,
+@@ -289,9 +296,6 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 		}
+ 	}
+ 
+-	spin_lock_init(&qp->rq.producer_lock);
+-	spin_lock_init(&qp->rq.consumer_lock);
+-
+ 	skb_queue_head_init(&qp->resp_pkts);
+ 
+ 	rxe_init_task(rxe, &qp->resp.task, qp,
+diff --git a/drivers/infiniband/sw/siw/siw_cm.c b/drivers/infiniband/sw/siw/siw_cm.c
+index 6e7399c2ca8c9..b87ba4c9fccf1 100644
+--- a/drivers/infiniband/sw/siw/siw_cm.c
++++ b/drivers/infiniband/sw/siw/siw_cm.c
+@@ -725,11 +725,11 @@ static int siw_proc_mpareply(struct siw_cep *cep)
+ 	enum mpa_v2_ctrl mpa_p2p_mode = MPA_V2_RDMA_NO_RTR;
+ 
+ 	rv = siw_recv_mpa_rr(cep);
+-	if (rv != -EAGAIN)
+-		siw_cancel_mpatimer(cep);
+ 	if (rv)
+ 		goto out_err;
+ 
++	siw_cancel_mpatimer(cep);
++
+ 	rep = &cep->mpa.hdr;
+ 
+ 	if (__mpa_rr_revision(rep->params.bits) > MPA_REVISION_2) {
+@@ -895,7 +895,8 @@ static int siw_proc_mpareply(struct siw_cep *cep)
+ 	}
+ 
+ out_err:
+-	siw_cm_upcall(cep, IW_CM_EVENT_CONNECT_REPLY, -EINVAL);
++	if (rv != -EAGAIN)
++		siw_cm_upcall(cep, IW_CM_EVENT_CONNECT_REPLY, -EINVAL);
+ 
+ 	return rv;
+ }
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 13634eda833de..5c39e4c4bef7f 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -1728,11 +1728,6 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con,
+ 	if (con->c.cid == 0) {
+ 		queue_depth = le16_to_cpu(msg->queue_depth);
+ 
+-		if (queue_depth > MAX_SESS_QUEUE_DEPTH) {
+-			rtrs_err(clt, "Invalid RTRS message: queue=%d\n",
+-				  queue_depth);
+-			return -ECONNRESET;
+-		}
+ 		if (sess->queue_depth > 0 && queue_depth != sess->queue_depth) {
+ 			rtrs_err(clt, "Error: queue depth changed\n");
+ 
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-pri.h b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+index 51c60f5428761..c5ca123d52a87 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-pri.h
++++ b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+@@ -23,6 +23,17 @@
+ #define RTRS_PROTO_VER_STRING __stringify(RTRS_PROTO_VER_MAJOR) "." \
+ 			       __stringify(RTRS_PROTO_VER_MINOR)
+ 
++/*
++ * Max IB immediate data size is 2^28 (MAX_IMM_PAYL_BITS)
++ * and the minimum chunk size is 4096 (2^12).
++ * So the maximum sess_queue_depth is 65536 (2^16) in theory.
++ * But mempool_create, create_qp and ib_post_send fail with
++ * "cannot allocate memory" error if sess_queue_depth is too big.
++ * Therefore the pratical max value of sess_queue_depth is
++ * somewhere between 1 and 65534 and it depends on the system.
++ */
++#define MAX_SESS_QUEUE_DEPTH 65535
++
+ enum rtrs_imm_const {
+ 	MAX_IMM_TYPE_BITS = 4,
+ 	MAX_IMM_TYPE_MASK = ((1 << MAX_IMM_TYPE_BITS) - 1),
+@@ -46,16 +57,7 @@ enum {
+ 
+ 	MAX_PATHS_NUM = 128,
+ 
+-	/*
+-	 * Max IB immediate data size is 2^28 (MAX_IMM_PAYL_BITS)
+-	 * and the minimum chunk size is 4096 (2^12).
+-	 * So the maximum sess_queue_depth is 65536 (2^16) in theory.
+-	 * But mempool_create, create_qp and ib_post_send fail with
+-	 * "cannot allocate memory" error if sess_queue_depth is too big.
+-	 * Therefore the pratical max value of sess_queue_depth is
+-	 * somewhere between 1 and 65536 and it depends on the system.
+-	 */
+-	MAX_SESS_QUEUE_DEPTH = 65536,
++	MIN_CHUNK_SIZE = 8192,
+ 
+ 	RTRS_HB_INTERVAL_MS = 5000,
+ 	RTRS_HB_MISSED_MAX = 5,
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index b033bfa9f3839..b152a742cd3c5 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -2193,9 +2193,9 @@ static int check_module_params(void)
+ 		       sess_queue_depth, 1, MAX_SESS_QUEUE_DEPTH);
+ 		return -EINVAL;
+ 	}
+-	if (max_chunk_size < 4096 || !is_power_of_2(max_chunk_size)) {
++	if (max_chunk_size < MIN_CHUNK_SIZE || !is_power_of_2(max_chunk_size)) {
+ 		pr_err("Invalid max_chunk_size value %d, has to be >= %d and should be power of two.\n",
+-		       max_chunk_size, 4096);
++		       max_chunk_size, MIN_CHUNK_SIZE);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 07ecc7dc1822b..c0ed08fcab480 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -565,12 +565,9 @@ static int srpt_refresh_port(struct srpt_port *sport)
+ 	if (ret)
+ 		return ret;
+ 
+-	sport->port_guid_id.wwn.priv = sport;
+-	srpt_format_guid(sport->port_guid_id.name,
+-			 sizeof(sport->port_guid_id.name),
++	srpt_format_guid(sport->guid_name, ARRAY_SIZE(sport->guid_name),
+ 			 &sport->gid.global.interface_id);
+-	sport->port_gid_id.wwn.priv = sport;
+-	snprintf(sport->port_gid_id.name, sizeof(sport->port_gid_id.name),
++	snprintf(sport->gid_name, ARRAY_SIZE(sport->gid_name),
+ 		 "0x%016llx%016llx",
+ 		 be64_to_cpu(sport->gid.global.subnet_prefix),
+ 		 be64_to_cpu(sport->gid.global.interface_id));
+@@ -2310,31 +2307,35 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ 	tag_num = ch->rq_size;
+ 	tag_size = 1; /* ib_srpt does not use se_sess->sess_cmd_map */
+ 
+-	mutex_lock(&sport->port_guid_id.mutex);
+-	list_for_each_entry(stpg, &sport->port_guid_id.tpg_list, entry) {
+-		if (!IS_ERR_OR_NULL(ch->sess))
+-			break;
+-		ch->sess = target_setup_session(&stpg->tpg, tag_num,
++	if (sport->guid_id) {
++		mutex_lock(&sport->guid_id->mutex);
++		list_for_each_entry(stpg, &sport->guid_id->tpg_list, entry) {
++			if (!IS_ERR_OR_NULL(ch->sess))
++				break;
++			ch->sess = target_setup_session(&stpg->tpg, tag_num,
+ 						tag_size, TARGET_PROT_NORMAL,
+ 						ch->sess_name, ch, NULL);
++		}
++		mutex_unlock(&sport->guid_id->mutex);
+ 	}
+-	mutex_unlock(&sport->port_guid_id.mutex);
+ 
+-	mutex_lock(&sport->port_gid_id.mutex);
+-	list_for_each_entry(stpg, &sport->port_gid_id.tpg_list, entry) {
+-		if (!IS_ERR_OR_NULL(ch->sess))
+-			break;
+-		ch->sess = target_setup_session(&stpg->tpg, tag_num,
++	if (sport->gid_id) {
++		mutex_lock(&sport->gid_id->mutex);
++		list_for_each_entry(stpg, &sport->gid_id->tpg_list, entry) {
++			if (!IS_ERR_OR_NULL(ch->sess))
++				break;
++			ch->sess = target_setup_session(&stpg->tpg, tag_num,
+ 					tag_size, TARGET_PROT_NORMAL, i_port_id,
+ 					ch, NULL);
+-		if (!IS_ERR_OR_NULL(ch->sess))
+-			break;
+-		/* Retry without leading "0x" */
+-		ch->sess = target_setup_session(&stpg->tpg, tag_num,
++			if (!IS_ERR_OR_NULL(ch->sess))
++				break;
++			/* Retry without leading "0x" */
++			ch->sess = target_setup_session(&stpg->tpg, tag_num,
+ 						tag_size, TARGET_PROT_NORMAL,
+ 						i_port_id + 2, ch, NULL);
++		}
++		mutex_unlock(&sport->gid_id->mutex);
+ 	}
+-	mutex_unlock(&sport->port_gid_id.mutex);
+ 
+ 	if (IS_ERR_OR_NULL(ch->sess)) {
+ 		WARN_ON_ONCE(ch->sess == NULL);
+@@ -2980,7 +2981,12 @@ static int srpt_release_sport(struct srpt_port *sport)
+ 	return 0;
+ }
+ 
+-static struct se_wwn *__srpt_lookup_wwn(const char *name)
++struct port_and_port_id {
++	struct srpt_port *sport;
++	struct srpt_port_id **port_id;
++};
++
++static struct port_and_port_id __srpt_lookup_port(const char *name)
+ {
+ 	struct ib_device *dev;
+ 	struct srpt_device *sdev;
+@@ -2995,25 +3001,38 @@ static struct se_wwn *__srpt_lookup_wwn(const char *name)
+ 		for (i = 0; i < dev->phys_port_cnt; i++) {
+ 			sport = &sdev->port[i];
+ 
+-			if (strcmp(sport->port_guid_id.name, name) == 0)
+-				return &sport->port_guid_id.wwn;
+-			if (strcmp(sport->port_gid_id.name, name) == 0)
+-				return &sport->port_gid_id.wwn;
++			if (strcmp(sport->guid_name, name) == 0) {
++				kref_get(&sdev->refcnt);
++				return (struct port_and_port_id){
++					sport, &sport->guid_id};
++			}
++			if (strcmp(sport->gid_name, name) == 0) {
++				kref_get(&sdev->refcnt);
++				return (struct port_and_port_id){
++					sport, &sport->gid_id};
++			}
+ 		}
+ 	}
+ 
+-	return NULL;
++	return (struct port_and_port_id){};
+ }
+ 
+-static struct se_wwn *srpt_lookup_wwn(const char *name)
++/**
++ * srpt_lookup_port() - Look up an RDMA port by name
++ * @name: ASCII port name
++ *
++ * Increments the RDMA port reference count if an RDMA port pointer is returned.
++ * The caller must drop that reference count by calling srpt_port_put_ref().
++ */
++static struct port_and_port_id srpt_lookup_port(const char *name)
+ {
+-	struct se_wwn *wwn;
++	struct port_and_port_id papi;
+ 
+ 	spin_lock(&srpt_dev_lock);
+-	wwn = __srpt_lookup_wwn(name);
++	papi = __srpt_lookup_port(name);
+ 	spin_unlock(&srpt_dev_lock);
+ 
+-	return wwn;
++	return papi;
+ }
+ 
+ static void srpt_free_srq(struct srpt_device *sdev)
+@@ -3098,6 +3117,18 @@ static int srpt_use_srq(struct srpt_device *sdev, bool use_srq)
+ 	return ret;
+ }
+ 
++static void srpt_free_sdev(struct kref *refcnt)
++{
++	struct srpt_device *sdev = container_of(refcnt, typeof(*sdev), refcnt);
++
++	kfree(sdev);
++}
++
++static void srpt_sdev_put(struct srpt_device *sdev)
++{
++	kref_put(&sdev->refcnt, srpt_free_sdev);
++}
++
+ /**
+  * srpt_add_one - InfiniBand device addition callback function
+  * @device: Describes a HCA.
+@@ -3115,6 +3146,7 @@ static int srpt_add_one(struct ib_device *device)
+ 	if (!sdev)
+ 		return -ENOMEM;
+ 
++	kref_init(&sdev->refcnt);
+ 	sdev->device = device;
+ 	mutex_init(&sdev->sdev_mutex);
+ 
+@@ -3178,10 +3210,6 @@ static int srpt_add_one(struct ib_device *device)
+ 		sport->port_attrib.srp_sq_size = DEF_SRPT_SQ_SIZE;
+ 		sport->port_attrib.use_srq = false;
+ 		INIT_WORK(&sport->work, srpt_refresh_port_work);
+-		mutex_init(&sport->port_guid_id.mutex);
+-		INIT_LIST_HEAD(&sport->port_guid_id.tpg_list);
+-		mutex_init(&sport->port_gid_id.mutex);
+-		INIT_LIST_HEAD(&sport->port_gid_id.tpg_list);
+ 
+ 		ret = srpt_refresh_port(sport);
+ 		if (ret) {
+@@ -3210,7 +3238,7 @@ err_ring:
+ 	srpt_free_srq(sdev);
+ 	ib_dealloc_pd(sdev->pd);
+ free_dev:
+-	kfree(sdev);
++	srpt_sdev_put(sdev);
+ 	pr_info("%s(%s) failed.\n", __func__, dev_name(&device->dev));
+ 	return ret;
+ }
+@@ -3254,7 +3282,7 @@ static void srpt_remove_one(struct ib_device *device, void *client_data)
+ 
+ 	ib_dealloc_pd(sdev->pd);
+ 
+-	kfree(sdev);
++	srpt_sdev_put(sdev);
+ }
+ 
+ static struct ib_client srpt_client = {
+@@ -3282,10 +3310,10 @@ static struct srpt_port_id *srpt_wwn_to_sport_id(struct se_wwn *wwn)
+ {
+ 	struct srpt_port *sport = wwn->priv;
+ 
+-	if (wwn == &sport->port_guid_id.wwn)
+-		return &sport->port_guid_id;
+-	if (wwn == &sport->port_gid_id.wwn)
+-		return &sport->port_gid_id;
++	if (sport->guid_id && &sport->guid_id->wwn == wwn)
++		return sport->guid_id;
++	if (sport->gid_id && &sport->gid_id->wwn == wwn)
++		return sport->gid_id;
+ 	WARN_ON_ONCE(true);
+ 	return NULL;
+ }
+@@ -3800,7 +3828,31 @@ static struct se_wwn *srpt_make_tport(struct target_fabric_configfs *tf,
+ 				      struct config_group *group,
+ 				      const char *name)
+ {
+-	return srpt_lookup_wwn(name) ? : ERR_PTR(-EINVAL);
++	struct port_and_port_id papi = srpt_lookup_port(name);
++	struct srpt_port *sport = papi.sport;
++	struct srpt_port_id *port_id;
++
++	if (!papi.port_id)
++		return ERR_PTR(-EINVAL);
++	if (*papi.port_id) {
++		/* Attempt to create a directory that already exists. */
++		WARN_ON_ONCE(true);
++		return &(*papi.port_id)->wwn;
++	}
++	port_id = kzalloc(sizeof(*port_id), GFP_KERNEL);
++	if (!port_id) {
++		srpt_sdev_put(sport->sdev);
++		return ERR_PTR(-ENOMEM);
++	}
++	mutex_init(&port_id->mutex);
++	INIT_LIST_HEAD(&port_id->tpg_list);
++	port_id->wwn.priv = sport;
++	memcpy(port_id->name, port_id == sport->guid_id ? sport->guid_name :
++	       sport->gid_name, ARRAY_SIZE(port_id->name));
++
++	*papi.port_id = port_id;
++
++	return &port_id->wwn;
+ }
+ 
+ /**
+@@ -3809,6 +3861,18 @@ static struct se_wwn *srpt_make_tport(struct target_fabric_configfs *tf,
+  */
+ static void srpt_drop_tport(struct se_wwn *wwn)
+ {
++	struct srpt_port_id *port_id = container_of(wwn, typeof(*port_id), wwn);
++	struct srpt_port *sport = wwn->priv;
++
++	if (sport->guid_id == port_id)
++		sport->guid_id = NULL;
++	else if (sport->gid_id == port_id)
++		sport->gid_id = NULL;
++	else
++		WARN_ON_ONCE(true);
++
++	srpt_sdev_put(sport->sdev);
++	kfree(port_id);
+ }
+ 
+ static ssize_t srpt_wwn_version_show(struct config_item *item, char *buf)
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.h b/drivers/infiniband/ulp/srpt/ib_srpt.h
+index bdeb010efee68..2bf381ecd482b 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.h
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.h
+@@ -376,7 +376,7 @@ struct srpt_tpg {
+ };
+ 
+ /**
+- * struct srpt_port_id - information about an RDMA port name
++ * struct srpt_port_id - LIO RDMA port information
+  * @mutex:	Protects @tpg_list changes.
+  * @tpg_list:	TPGs associated with the RDMA port name.
+  * @wwn:	WWN associated with the RDMA port name.
+@@ -393,7 +393,7 @@ struct srpt_port_id {
+ };
+ 
+ /**
+- * struct srpt_port - information associated by SRPT with a single IB port
++ * struct srpt_port - SRPT RDMA port information
+  * @sdev:      backpointer to the HCA information.
+  * @mad_agent: per-port management datagram processing information.
+  * @enabled:   Whether or not this target port is enabled.
+@@ -402,8 +402,10 @@ struct srpt_port_id {
+  * @lid:       cached value of the port's lid.
+  * @gid:       cached value of the port's gid.
+  * @work:      work structure for refreshing the aforementioned cached values.
+- * @port_guid_id: target port GUID
+- * @port_gid_id: target port GID
++ * @guid_name: port name in GUID format.
++ * @guid_id:   LIO target port information for the port name in GUID format.
++ * @gid_name:  port name in GID format.
++ * @gid_id:    LIO target port information for the port name in GID format.
+  * @port_attrib:   Port attributes that can be accessed through configfs.
+  * @refcount:	   Number of objects associated with this port.
+  * @freed_channels: Completion that will be signaled once @refcount becomes 0.
+@@ -419,8 +421,10 @@ struct srpt_port {
+ 	u32			lid;
+ 	union ib_gid		gid;
+ 	struct work_struct	work;
+-	struct srpt_port_id	port_guid_id;
+-	struct srpt_port_id	port_gid_id;
++	char			guid_name[64];
++	struct srpt_port_id	*guid_id;
++	char			gid_name[64];
++	struct srpt_port_id	*gid_id;
+ 	struct srpt_port_attrib port_attrib;
+ 	atomic_t		refcount;
+ 	struct completion	*freed_channels;
+@@ -430,6 +434,7 @@ struct srpt_port {
+ 
+ /**
+  * struct srpt_device - information associated by SRPT with a single HCA
++ * @refcnt:	   Reference count for this device.
+  * @device:        Backpointer to the struct ib_device managed by the IB core.
+  * @pd:            IB protection domain.
+  * @lkey:          L_Key (local key) with write access to all local memory.
+@@ -445,6 +450,7 @@ struct srpt_port {
+  * @port:          Information about the ports owned by this HCA.
+  */
+ struct srpt_device {
++	struct kref		refcnt;
+ 	struct ib_device	*device;
+ 	struct ib_pd		*pd;
+ 	u32			lkey;
+diff --git a/drivers/input/serio/gscps2.c b/drivers/input/serio/gscps2.c
+index 2f9775de3c5b9..70ea03a35c607 100644
+--- a/drivers/input/serio/gscps2.c
++++ b/drivers/input/serio/gscps2.c
+@@ -350,6 +350,10 @@ static int __init gscps2_probe(struct parisc_device *dev)
+ 	ps2port->port = serio;
+ 	ps2port->padev = dev;
+ 	ps2port->addr = ioremap(hpa, GSC_STATUS + 4);
++	if (!ps2port->addr) {
++		ret = -ENOMEM;
++		goto fail_nomem;
++	}
+ 	spin_lock_init(&ps2port->lock);
+ 
+ 	gscps2_reset(ps2port);
+diff --git a/drivers/input/touchscreen/atmel_mxt_ts.c b/drivers/input/touchscreen/atmel_mxt_ts.c
+index 8df402a1ed446..3c152e934cb88 100644
+--- a/drivers/input/touchscreen/atmel_mxt_ts.c
++++ b/drivers/input/touchscreen/atmel_mxt_ts.c
+@@ -3134,8 +3134,9 @@ static int mxt_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 	if (error)
+ 		return error;
+ 
++	/* Request the RESET line as asserted so we go into reset */
+ 	data->reset_gpio = devm_gpiod_get_optional(&client->dev,
+-						   "reset", GPIOD_OUT_LOW);
++						   "reset", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(data->reset_gpio)) {
+ 		error = PTR_ERR(data->reset_gpio);
+ 		dev_err(&client->dev, "Failed to get reset gpio: %d\n", error);
+@@ -3153,8 +3154,9 @@ static int mxt_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 	disable_irq(client->irq);
+ 
+ 	if (data->reset_gpio) {
++		/* Wait a while and then de-assert the RESET GPIO line */
+ 		msleep(MXT_RESET_GPIO_TIME);
+-		gpiod_set_value(data->reset_gpio, 1);
++		gpiod_set_value(data->reset_gpio, 0);
+ 		msleep(MXT_RESET_INVALID_CHG);
+ 	}
+ 
+diff --git a/drivers/interconnect/imx/imx.c b/drivers/interconnect/imx/imx.c
+index e398ebf1dbbab..36f870e7b5965 100644
+--- a/drivers/interconnect/imx/imx.c
++++ b/drivers/interconnect/imx/imx.c
+@@ -226,16 +226,16 @@ int imx_icc_register(struct platform_device *pdev,
+ 	struct device *dev = &pdev->dev;
+ 	struct icc_onecell_data *data;
+ 	struct icc_provider *provider;
+-	int max_node_id;
++	int num_nodes;
+ 	int ret;
+ 
+ 	/* icc_onecell_data is indexed by node_id, unlike nodes param */
+-	max_node_id = get_max_node_id(nodes, nodes_count);
+-	data = devm_kzalloc(dev, struct_size(data, nodes, max_node_id),
++	num_nodes = get_max_node_id(nodes, nodes_count) + 1;
++	data = devm_kzalloc(dev, struct_size(data, nodes, num_nodes),
+ 			    GFP_KERNEL);
+ 	if (!data)
+ 		return -ENOMEM;
+-	data->num_nodes = max_node_id;
++	data->num_nodes = num_nodes;
+ 
+ 	provider = devm_kzalloc(dev, sizeof(*provider), GFP_KERNEL);
+ 	if (!provider)
+diff --git a/drivers/iommu/arm/arm-smmu/qcom_iommu.c b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+index b30d6c966e2c8..a24390c548a91 100644
+--- a/drivers/iommu/arm/arm-smmu/qcom_iommu.c
++++ b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+@@ -766,9 +766,12 @@ static bool qcom_iommu_has_secure_context(struct qcom_iommu_dev *qcom_iommu)
+ {
+ 	struct device_node *child;
+ 
+-	for_each_child_of_node(qcom_iommu->dev->of_node, child)
+-		if (of_device_is_compatible(child, "qcom,msm-iommu-v1-sec"))
++	for_each_child_of_node(qcom_iommu->dev->of_node, child) {
++		if (of_device_is_compatible(child, "qcom,msm-iommu-v1-sec")) {
++			of_node_put(child);
+ 			return true;
++		}
++	}
+ 
+ 	return false;
+ }
+diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
+index de324b4eedfe9..0cdb5493a464f 100644
+--- a/drivers/iommu/exynos-iommu.c
++++ b/drivers/iommu/exynos-iommu.c
+@@ -635,7 +635,7 @@ static int exynos_sysmmu_probe(struct platform_device *pdev)
+ 
+ 	ret = iommu_device_register(&data->iommu);
+ 	if (ret)
+-		return ret;
++		goto err_iommu_register;
+ 
+ 	platform_set_drvdata(pdev, data);
+ 
+@@ -662,6 +662,10 @@ static int exynos_sysmmu_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(dev);
+ 
+ 	return 0;
++
++err_iommu_register:
++	iommu_device_sysfs_remove(&data->iommu);
++	return ret;
+ }
+ 
+ static int __maybe_unused exynos_sysmmu_suspend(struct device *dev)
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index 70d569b80ecf1..0bc497f4cb9f0 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -497,7 +497,7 @@ static int dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
+ 		if (drhd->reg_base_addr == rhsa->base_address) {
+ 			int node = pxm_to_node(rhsa->proximity_domain);
+ 
+-			if (!node_online(node))
++			if (node != NUMA_NO_NODE && !node_online(node))
+ 				node = NUMA_NO_NODE;
+ 			drhd->iommu->node = node;
+ 			return 0;
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index dc062e8c2caf8..3c24bf45263ce 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -178,7 +178,7 @@ config MADERA_IRQ
+ config IRQ_MIPS_CPU
+ 	bool
+ 	select GENERIC_IRQ_CHIP
+-	select GENERIC_IRQ_IPI if SYS_SUPPORTS_MULTITHREADING
++	select GENERIC_IRQ_IPI if SMP && SYS_SUPPORTS_MULTITHREADING
+ 	select IRQ_DOMAIN
+ 	select GENERIC_IRQ_EFFECTIVE_AFF_MASK
+ 
+@@ -313,7 +313,8 @@ config KEYSTONE_IRQ
+ 
+ config MIPS_GIC
+ 	bool
+-	select GENERIC_IRQ_IPI
++	select GENERIC_IRQ_IPI if SMP
++	select IRQ_DOMAIN_HIERARCHY
+ 	select MIPS_CM
+ 
+ config INGENIC_IRQ
+diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
+index 215885962bb0a..8ada91bdbe4d0 100644
+--- a/drivers/irqchip/irq-mips-gic.c
++++ b/drivers/irqchip/irq-mips-gic.c
+@@ -50,13 +50,15 @@ static DEFINE_PER_CPU_READ_MOSTLY(unsigned long[GIC_MAX_LONGS], pcpu_masks);
+ 
+ static DEFINE_SPINLOCK(gic_lock);
+ static struct irq_domain *gic_irq_domain;
+-static struct irq_domain *gic_ipi_domain;
+ static int gic_shared_intrs;
+ static unsigned int gic_cpu_pin;
+ static unsigned int timer_cpu_pin;
+ static struct irq_chip gic_level_irq_controller, gic_edge_irq_controller;
++
++#ifdef CONFIG_GENERIC_IRQ_IPI
+ static DECLARE_BITMAP(ipi_resrv, GIC_MAX_INTRS);
+ static DECLARE_BITMAP(ipi_available, GIC_MAX_INTRS);
++#endif /* CONFIG_GENERIC_IRQ_IPI */
+ 
+ static struct gic_all_vpes_chip_data {
+ 	u32	map;
+@@ -459,9 +461,11 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
+ 	u32 map;
+ 
+ 	if (hwirq >= GIC_SHARED_HWIRQ_BASE) {
++#ifdef CONFIG_GENERIC_IRQ_IPI
+ 		/* verify that shared irqs don't conflict with an IPI irq */
+ 		if (test_bit(GIC_HWIRQ_TO_SHARED(hwirq), ipi_resrv))
+ 			return -EBUSY;
++#endif /* CONFIG_GENERIC_IRQ_IPI */
+ 
+ 		err = irq_domain_set_hwirq_and_chip(d, virq, hwirq,
+ 						    &gic_level_irq_controller,
+@@ -550,6 +554,8 @@ static const struct irq_domain_ops gic_irq_domain_ops = {
+ 	.map = gic_irq_domain_map,
+ };
+ 
++#ifdef CONFIG_GENERIC_IRQ_IPI
++
+ static int gic_ipi_domain_xlate(struct irq_domain *d, struct device_node *ctrlr,
+ 				const u32 *intspec, unsigned int intsize,
+ 				irq_hw_number_t *out_hwirq,
+@@ -653,6 +659,48 @@ static const struct irq_domain_ops gic_ipi_domain_ops = {
+ 	.match = gic_ipi_domain_match,
+ };
+ 
++static int gic_register_ipi_domain(struct device_node *node)
++{
++	struct irq_domain *gic_ipi_domain;
++	unsigned int v[2], num_ipis;
++
++	gic_ipi_domain = irq_domain_add_hierarchy(gic_irq_domain,
++						  IRQ_DOMAIN_FLAG_IPI_PER_CPU,
++						  GIC_NUM_LOCAL_INTRS + gic_shared_intrs,
++						  node, &gic_ipi_domain_ops, NULL);
++	if (!gic_ipi_domain) {
++		pr_err("Failed to add IPI domain");
++		return -ENXIO;
++	}
++
++	irq_domain_update_bus_token(gic_ipi_domain, DOMAIN_BUS_IPI);
++
++	if (node &&
++	    !of_property_read_u32_array(node, "mti,reserved-ipi-vectors", v, 2)) {
++		bitmap_set(ipi_resrv, v[0], v[1]);
++	} else {
++		/*
++		 * Reserve 2 interrupts per possible CPU/VP for use as IPIs,
++		 * meeting the requirements of arch/mips SMP.
++		 */
++		num_ipis = 2 * num_possible_cpus();
++		bitmap_set(ipi_resrv, gic_shared_intrs - num_ipis, num_ipis);
++	}
++
++	bitmap_copy(ipi_available, ipi_resrv, GIC_MAX_INTRS);
++
++	return 0;
++}
++
++#else /* !CONFIG_GENERIC_IRQ_IPI */
++
++static inline int gic_register_ipi_domain(struct device_node *node)
++{
++	return 0;
++}
++
++#endif /* !CONFIG_GENERIC_IRQ_IPI */
++
+ static int gic_cpu_startup(unsigned int cpu)
+ {
+ 	/* Enable or disable EIC */
+@@ -671,11 +719,12 @@ static int gic_cpu_startup(unsigned int cpu)
+ static int __init gic_of_init(struct device_node *node,
+ 			      struct device_node *parent)
+ {
+-	unsigned int cpu_vec, i, gicconfig, v[2], num_ipis;
++	unsigned int cpu_vec, i, gicconfig;
+ 	unsigned long reserved;
+ 	phys_addr_t gic_base;
+ 	struct resource res;
+ 	size_t gic_len;
++	int ret;
+ 
+ 	/* Find the first available CPU vector. */
+ 	i = 0;
+@@ -717,6 +766,10 @@ static int __init gic_of_init(struct device_node *node,
+ 	}
+ 
+ 	mips_gic_base = ioremap(gic_base, gic_len);
++	if (!mips_gic_base) {
++		pr_err("Failed to ioremap gic_base\n");
++		return -ENOMEM;
++	}
+ 
+ 	gicconfig = read_gic_config();
+ 	gic_shared_intrs = gicconfig & GIC_CONFIG_NUMINTERRUPTS;
+@@ -764,30 +817,9 @@ static int __init gic_of_init(struct device_node *node,
+ 		return -ENXIO;
+ 	}
+ 
+-	gic_ipi_domain = irq_domain_add_hierarchy(gic_irq_domain,
+-						  IRQ_DOMAIN_FLAG_IPI_PER_CPU,
+-						  GIC_NUM_LOCAL_INTRS + gic_shared_intrs,
+-						  node, &gic_ipi_domain_ops, NULL);
+-	if (!gic_ipi_domain) {
+-		pr_err("Failed to add IPI domain");
+-		return -ENXIO;
+-	}
+-
+-	irq_domain_update_bus_token(gic_ipi_domain, DOMAIN_BUS_IPI);
+-
+-	if (node &&
+-	    !of_property_read_u32_array(node, "mti,reserved-ipi-vectors", v, 2)) {
+-		bitmap_set(ipi_resrv, v[0], v[1]);
+-	} else {
+-		/*
+-		 * Reserve 2 interrupts per possible CPU/VP for use as IPIs,
+-		 * meeting the requirements of arch/mips SMP.
+-		 */
+-		num_ipis = 2 * num_possible_cpus();
+-		bitmap_set(ipi_resrv, gic_shared_intrs - num_ipis, num_ipis);
+-	}
+-
+-	bitmap_copy(ipi_available, ipi_resrv, GIC_MAX_INTRS);
++	ret = gic_register_ipi_domain(node);
++	if (ret)
++		return ret;
+ 
+ 	board_bind_eic_interrupt = &gic_bind_eic_interrupt;
+ 
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 4e94200e01423..a2d09c9c6e9f7 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -3514,7 +3514,7 @@ static void raid_status(struct dm_target *ti, status_type_t type,
+ {
+ 	struct raid_set *rs = ti->private;
+ 	struct mddev *mddev = &rs->md;
+-	struct r5conf *conf = mddev->private;
++	struct r5conf *conf = rs_is_raid456(rs) ? mddev->private : NULL;
+ 	int i, max_nr_stripes = conf ? conf->max_nr_stripes : 0;
+ 	unsigned long recovery;
+ 	unsigned int raid_param_cnt = 1; /* at least 1 for chunksize */
+@@ -3794,7 +3794,7 @@ static void attempt_restore_of_faulty_devices(struct raid_set *rs)
+ 
+ 	memset(cleared_failed_devices, 0, sizeof(cleared_failed_devices));
+ 
+-	for (i = 0; i < mddev->raid_disks; i++) {
++	for (i = 0; i < rs->raid_disks; i++) {
+ 		r = &rs->dev[i].rdev;
+ 		/* HM FIXME: enhance journal device recovery processing */
+ 		if (test_bit(Journal, &r->flags))
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 4833f4b20b2c7..5f933dbb0152c 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -397,7 +397,7 @@ static int map_request(struct dm_rq_target_io *tio)
+ 		}
+ 
+ 		/* The target has remapped the I/O so dispatch it */
+-		trace_block_rq_remap(clone->q, clone, disk_devt(dm_disk(md)),
++		trace_block_rq_remap(clone, disk_devt(dm_disk(md)),
+ 				     blk_rq_pos(rq));
+ 		ret = dm_dispatch_clone_request(clone, rq);
+ 		if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) {
+diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
+index 6ebb2127f3e2e..842d79e5ea3aa 100644
+--- a/drivers/md/dm-thin-metadata.c
++++ b/drivers/md/dm-thin-metadata.c
+@@ -2058,10 +2058,13 @@ int dm_pool_register_metadata_threshold(struct dm_pool_metadata *pmd,
+ 					dm_sm_threshold_fn fn,
+ 					void *context)
+ {
+-	int r;
++	int r = -EINVAL;
+ 
+ 	pmd_write_lock_in_core(pmd);
+-	r = dm_sm_register_threshold_callback(pmd->metadata_sm, threshold, fn, context);
++	if (!pmd->fail_io) {
++		r = dm_sm_register_threshold_callback(pmd->metadata_sm,
++						      threshold, fn, context);
++	}
+ 	pmd_write_unlock(pmd);
+ 
+ 	return r;
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index fff4c50df74db..a196d7cb51bd2 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -3401,8 +3401,10 @@ static int pool_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 						calc_metadata_threshold(pt),
+ 						metadata_low_callback,
+ 						pool);
+-	if (r)
++	if (r) {
++		ti->error = "Error registering metadata threshold";
+ 		goto out_flags_changed;
++	}
+ 
+ 	dm_pool_register_pre_commit_callback(pool->pmd,
+ 					     metadata_pre_commit_callback, pool);
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index 9d6ae3e64285b..13cc318db012d 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -20,7 +20,7 @@
+ 
+ #define HIGH_WATERMARK			50
+ #define LOW_WATERMARK			45
+-#define MAX_WRITEBACK_JOBS		0
++#define MAX_WRITEBACK_JOBS		min(0x10000000 / PAGE_SIZE, totalram_pages() / 16)
+ #define ENDIO_LATENCY			16
+ #define WRITEBACK_LATENCY		64
+ #define AUTOCOMMIT_BLOCKS_SSD		65536
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index ab0e2338e47ec..1005abf768609 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -3003,6 +3003,11 @@ static int dm_call_pr(struct block_device *bdev, iterate_devices_callout_fn fn,
+ 		goto out;
+ 	ti = dm_table_get_target(table, 0);
+ 
++	if (dm_suspended_md(md)) {
++		ret = -EAGAIN;
++		goto out;
++	}
++
+ 	ret = -EINVAL;
+ 	if (!ti->type->iterate_devices)
+ 		goto out;
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 5bd1edbb415bd..4463ef3e3729b 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -6278,11 +6278,11 @@ static void mddev_detach(struct mddev *mddev)
+ static void __md_stop(struct mddev *mddev)
+ {
+ 	struct md_personality *pers = mddev->pers;
+-	md_bitmap_destroy(mddev);
+ 	mddev_detach(mddev);
+ 	/* Ensure ->event_work is done */
+ 	if (mddev->event_work.func)
+ 		flush_workqueue(md_misc_wq);
++	md_bitmap_destroy(mddev);
+ 	spin_lock(&mddev->lock);
+ 	mddev->pers = NULL;
+ 	spin_unlock(&mddev->lock);
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 70dccc3c9631d..0e741a8d278df 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1809,9 +1809,12 @@ static int raid10_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 	int err = 0;
+ 	int number = rdev->raid_disk;
+ 	struct md_rdev **rdevp;
+-	struct raid10_info *p = conf->mirrors + number;
++	struct raid10_info *p;
+ 
+ 	print_conf(conf);
++	if (unlikely(number >= mddev->raid_disks))
++		return 0;
++	p = conf->mirrors + number;
+ 	if (rdev == p->rdev)
+ 		rdevp = &p->rdev;
+ 	else if (rdev == p->replacement)
+diff --git a/drivers/media/pci/tw686x/tw686x-core.c b/drivers/media/pci/tw686x/tw686x-core.c
+index 74ae4f0dcee76..8a25a0dac4aeb 100644
+--- a/drivers/media/pci/tw686x/tw686x-core.c
++++ b/drivers/media/pci/tw686x/tw686x-core.c
+@@ -315,13 +315,6 @@ static int tw686x_probe(struct pci_dev *pci_dev,
+ 
+ 	spin_lock_init(&dev->lock);
+ 
+-	err = request_irq(pci_dev->irq, tw686x_irq, IRQF_SHARED,
+-			  dev->name, dev);
+-	if (err < 0) {
+-		dev_err(&pci_dev->dev, "unable to request interrupt\n");
+-		goto iounmap;
+-	}
+-
+ 	timer_setup(&dev->dma_delay_timer, tw686x_dma_delay, 0);
+ 
+ 	/*
+@@ -333,18 +326,23 @@ static int tw686x_probe(struct pci_dev *pci_dev,
+ 	err = tw686x_video_init(dev);
+ 	if (err) {
+ 		dev_err(&pci_dev->dev, "can't register video\n");
+-		goto free_irq;
++		goto iounmap;
+ 	}
+ 
+ 	err = tw686x_audio_init(dev);
+ 	if (err)
+ 		dev_warn(&pci_dev->dev, "can't register audio\n");
+ 
++	err = request_irq(pci_dev->irq, tw686x_irq, IRQF_SHARED,
++			  dev->name, dev);
++	if (err < 0) {
++		dev_err(&pci_dev->dev, "unable to request interrupt\n");
++		goto iounmap;
++	}
++
+ 	pci_set_drvdata(pci_dev, dev);
+ 	return 0;
+ 
+-free_irq:
+-	free_irq(pci_dev->irq, dev);
+ iounmap:
+ 	pci_iounmap(pci_dev, dev->mmio);
+ free_region:
+diff --git a/drivers/media/pci/tw686x/tw686x-video.c b/drivers/media/pci/tw686x/tw686x-video.c
+index 1ced2b0ddb241..55ed8851256f3 100644
+--- a/drivers/media/pci/tw686x/tw686x-video.c
++++ b/drivers/media/pci/tw686x/tw686x-video.c
+@@ -1283,8 +1283,10 @@ int tw686x_video_init(struct tw686x_dev *dev)
+ 		video_set_drvdata(vdev, vc);
+ 
+ 		err = video_register_device(vdev, VFL_TYPE_VIDEO, -1);
+-		if (err < 0)
++		if (err < 0) {
++			video_device_release(vdev);
+ 			goto error;
++		}
+ 		vc->num = vdev->num;
+ 	}
+ 
+diff --git a/drivers/media/platform/mtk-mdp/mtk_mdp_ipi.h b/drivers/media/platform/mtk-mdp/mtk_mdp_ipi.h
+index 2cb8cecb30771..b810c96695c83 100644
+--- a/drivers/media/platform/mtk-mdp/mtk_mdp_ipi.h
++++ b/drivers/media/platform/mtk-mdp/mtk_mdp_ipi.h
+@@ -40,12 +40,14 @@ struct mdp_ipi_init {
+  * @ipi_id        : IPI_MDP
+  * @ap_inst       : AP mtk_mdp_vpu address
+  * @vpu_inst_addr : VPU MDP instance address
++ * @padding       : Alignment padding
+  */
+ struct mdp_ipi_comm {
+ 	uint32_t msg_id;
+ 	uint32_t ipi_id;
+ 	uint64_t ap_inst;
+ 	uint32_t vpu_inst_addr;
++	uint32_t padding;
+ };
+ 
+ /**
+diff --git a/drivers/media/usb/hdpvr/hdpvr-video.c b/drivers/media/usb/hdpvr/hdpvr-video.c
+index 60e57e0f19272..fd7d2a9d0449a 100644
+--- a/drivers/media/usb/hdpvr/hdpvr-video.c
++++ b/drivers/media/usb/hdpvr/hdpvr-video.c
+@@ -409,7 +409,7 @@ static ssize_t hdpvr_read(struct file *file, char __user *buffer, size_t count,
+ 	struct hdpvr_device *dev = video_drvdata(file);
+ 	struct hdpvr_buffer *buf = NULL;
+ 	struct urb *urb;
+-	unsigned int ret = 0;
++	int ret = 0;
+ 	int rem, cnt;
+ 
+ 	if (*pos)
+diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
+index 73190652c267b..ad14d52141067 100644
+--- a/drivers/media/v4l2-core/v4l2-mem2mem.c
++++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
+@@ -927,7 +927,7 @@ static __poll_t v4l2_m2m_poll_for_data(struct file *file,
+ 	if ((!src_q->streaming || src_q->error ||
+ 	     list_empty(&src_q->queued_list)) &&
+ 	    (!dst_q->streaming || dst_q->error ||
+-	     list_empty(&dst_q->queued_list)))
++	     (list_empty(&dst_q->queued_list) && !dst_q->last_buffer_dequeued)))
+ 		return EPOLLERR;
+ 
+ 	spin_lock_irqsave(&src_q->done_lock, flags);
+diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
+index bc1f484f50f1d..6df98c0e56221 100644
+--- a/drivers/memstick/core/ms_block.c
++++ b/drivers/memstick/core/ms_block.c
+@@ -1335,17 +1335,17 @@ static int msb_ftl_initialize(struct msb_data *msb)
+ 	msb->zone_count = msb->block_count / MS_BLOCKS_IN_ZONE;
+ 	msb->logical_block_count = msb->zone_count * 496 - 2;
+ 
+-	msb->used_blocks_bitmap = kzalloc(msb->block_count / 8, GFP_KERNEL);
+-	msb->erased_blocks_bitmap = kzalloc(msb->block_count / 8, GFP_KERNEL);
++	msb->used_blocks_bitmap = bitmap_zalloc(msb->block_count, GFP_KERNEL);
++	msb->erased_blocks_bitmap = bitmap_zalloc(msb->block_count, GFP_KERNEL);
+ 	msb->lba_to_pba_table =
+ 		kmalloc_array(msb->logical_block_count, sizeof(u16),
+ 			      GFP_KERNEL);
+ 
+ 	if (!msb->used_blocks_bitmap || !msb->lba_to_pba_table ||
+ 						!msb->erased_blocks_bitmap) {
+-		kfree(msb->used_blocks_bitmap);
++		bitmap_free(msb->used_blocks_bitmap);
++		bitmap_free(msb->erased_blocks_bitmap);
+ 		kfree(msb->lba_to_pba_table);
+-		kfree(msb->erased_blocks_bitmap);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -1953,7 +1953,8 @@ static int msb_bd_open(struct block_device *bdev, fmode_t mode)
+ static void msb_data_clear(struct msb_data *msb)
+ {
+ 	kfree(msb->boot_page);
+-	kfree(msb->used_blocks_bitmap);
++	bitmap_free(msb->used_blocks_bitmap);
++	bitmap_free(msb->erased_blocks_bitmap);
+ 	kfree(msb->lba_to_pba_table);
+ 	kfree(msb->cache);
+ 	msb->card = NULL;
+diff --git a/drivers/mfd/max77620.c b/drivers/mfd/max77620.c
+index fec2096474ad1..a6661e07035ba 100644
+--- a/drivers/mfd/max77620.c
++++ b/drivers/mfd/max77620.c
+@@ -419,9 +419,11 @@ static int max77620_initialise_fps(struct max77620_chip *chip)
+ 		ret = max77620_config_fps(chip, fps_child);
+ 		if (ret < 0) {
+ 			of_node_put(fps_child);
++			of_node_put(fps_np);
+ 			return ret;
+ 		}
+ 	}
++	of_node_put(fps_np);
+ 
+ 	config = chip->enable_global_lpm ? MAX77620_ONOFFCNFG2_SLP_LPM_MSK : 0;
+ 	ret = regmap_update_bits(chip->rmap, MAX77620_REG_ONOFFCNFG2,
+diff --git a/drivers/mfd/t7l66xb.c b/drivers/mfd/t7l66xb.c
+index 70da0c4ae457e..58811c5ab564f 100644
+--- a/drivers/mfd/t7l66xb.c
++++ b/drivers/mfd/t7l66xb.c
+@@ -405,11 +405,8 @@ err_noirq:
+ 
+ static int t7l66xb_remove(struct platform_device *dev)
+ {
+-	struct t7l66xb_platform_data *pdata = dev_get_platdata(&dev->dev);
+ 	struct t7l66xb *t7l66xb = platform_get_drvdata(dev);
+-	int ret;
+ 
+-	ret = pdata->disable(dev);
+ 	clk_disable_unprepare(t7l66xb->clk48m);
+ 	clk_put(t7l66xb->clk48m);
+ 	clk_disable_unprepare(t7l66xb->clk32k);
+@@ -420,8 +417,7 @@ static int t7l66xb_remove(struct platform_device *dev)
+ 	mfd_remove_devices(&dev->dev);
+ 	kfree(t7l66xb);
+ 
+-	return ret;
+-
++	return 0;
+ }
+ 
+ static struct platform_driver t7l66xb_platform_driver = {
+diff --git a/drivers/misc/cardreader/rtsx_pcr.c b/drivers/misc/cardreader/rtsx_pcr.c
+index 5d15607027e9e..358b000b3a552 100644
+--- a/drivers/misc/cardreader/rtsx_pcr.c
++++ b/drivers/misc/cardreader/rtsx_pcr.c
+@@ -1529,7 +1529,7 @@ static int rtsx_pci_probe(struct pci_dev *pcidev,
+ 	pcr->remap_addr = ioremap(base, len);
+ 	if (!pcr->remap_addr) {
+ 		ret = -ENOMEM;
+-		goto free_handle;
++		goto free_idr;
+ 	}
+ 
+ 	pcr->rtsx_resv_buf = dma_alloc_coherent(&(pcidev->dev),
+@@ -1591,6 +1591,10 @@ disable_msi:
+ 			pcr->rtsx_resv_buf, pcr->rtsx_resv_buf_addr);
+ unmap:
+ 	iounmap(pcr->remap_addr);
++free_idr:
++	spin_lock(&rtsx_pci_lock);
++	idr_remove(&rtsx_pci_idr, pcr->id);
++	spin_unlock(&rtsx_pci_lock);
+ free_handle:
+ 	kfree(handle);
+ free_pcr:
+diff --git a/drivers/misc/eeprom/idt_89hpesx.c b/drivers/misc/eeprom/idt_89hpesx.c
+index 3e4a594c110b3..6a456645efb0d 100644
+--- a/drivers/misc/eeprom/idt_89hpesx.c
++++ b/drivers/misc/eeprom/idt_89hpesx.c
+@@ -940,14 +940,18 @@ static ssize_t idt_dbgfs_csr_write(struct file *filep, const char __user *ubuf,
+ 	u32 csraddr, csrval;
+ 	char *buf;
+ 
++	if (*offp)
++		return 0;
++
+ 	/* Copy data from User-space */
+ 	buf = kmalloc(count + 1, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+-	ret = simple_write_to_buffer(buf, count, offp, ubuf, count);
+-	if (ret < 0)
++	if (copy_from_user(buf, ubuf, count)) {
++		ret = -EFAULT;
+ 		goto free_buf;
++	}
+ 	buf[count] = 0;
+ 
+ 	/* Find position of colon in the buffer */
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 70eb3d03937ff..66a00b7c751f7 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -169,7 +169,7 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
+ 				      unsigned int part_type);
+ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+ 			       struct mmc_card *card,
+-			       int disable_multi,
++			       int recovery_mode,
+ 			       struct mmc_queue *mq);
+ static void mmc_blk_hsq_req_done(struct mmc_request *mrq);
+ 
+@@ -1247,7 +1247,7 @@ static void mmc_blk_eval_resp_error(struct mmc_blk_request *brq)
+ }
+ 
+ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
+-			      int disable_multi, bool *do_rel_wr_p,
++			      int recovery_mode, bool *do_rel_wr_p,
+ 			      bool *do_data_tag_p)
+ {
+ 	struct mmc_blk_data *md = mq->blkdata;
+@@ -1311,12 +1311,12 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
+ 			brq->data.blocks--;
+ 
+ 		/*
+-		 * After a read error, we redo the request one sector
++		 * After a read error, we redo the request one (native) sector
+ 		 * at a time in order to accurately determine which
+ 		 * sectors can be read successfully.
+ 		 */
+-		if (disable_multi)
+-			brq->data.blocks = 1;
++		if (recovery_mode)
++			brq->data.blocks = queue_physical_block_size(mq->queue) >> 9;
+ 
+ 		/*
+ 		 * Some controllers have HW issues while operating
+@@ -1533,7 +1533,7 @@ static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+ 
+ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+ 			       struct mmc_card *card,
+-			       int disable_multi,
++			       int recovery_mode,
+ 			       struct mmc_queue *mq)
+ {
+ 	u32 readcmd, writecmd;
+@@ -1542,7 +1542,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+ 	struct mmc_blk_data *md = mq->blkdata;
+ 	bool do_rel_wr, do_data_tag;
+ 
+-	mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag);
++	mmc_blk_data_prep(mq, mqrq, recovery_mode, &do_rel_wr, &do_data_tag);
+ 
+ 	brq->mrq.cmd = &brq->cmd;
+ 
+@@ -1633,7 +1633,7 @@ static int mmc_blk_fix_state(struct mmc_card *card, struct request *req)
+ 
+ #define MMC_READ_SINGLE_RETRIES	2
+ 
+-/* Single sector read during recovery */
++/* Single (native) sector read during recovery */
+ static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req)
+ {
+ 	struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
+@@ -1641,6 +1641,7 @@ static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req)
+ 	struct mmc_card *card = mq->card;
+ 	struct mmc_host *host = card->host;
+ 	blk_status_t error = BLK_STS_OK;
++	size_t bytes_per_read = queue_physical_block_size(mq->queue);
+ 
+ 	do {
+ 		u32 status;
+@@ -1675,13 +1676,13 @@ static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req)
+ 		else
+ 			error = BLK_STS_OK;
+ 
+-	} while (blk_update_request(req, error, 512));
++	} while (blk_update_request(req, error, bytes_per_read));
+ 
+ 	return;
+ 
+ error_exit:
+ 	mrq->data->bytes_xfered = 0;
+-	blk_update_request(req, BLK_STS_IOERR, 512);
++	blk_update_request(req, BLK_STS_IOERR, bytes_per_read);
+ 	/* Let it try the remaining request again */
+ 	if (mqrq->retries > MMC_MAX_RETRIES - 1)
+ 		mqrq->retries = MMC_MAX_RETRIES - 1;
+@@ -1822,10 +1823,9 @@ static void mmc_blk_mq_rw_recovery(struct mmc_queue *mq, struct request *req)
+ 		return;
+ 	}
+ 
+-	/* FIXME: Missing single sector read for large sector size */
+-	if (!mmc_large_sector(card) && rq_data_dir(req) == READ &&
+-	    brq->data.blocks > 1) {
+-		/* Read one sector at a time */
++	if (rq_data_dir(req) == READ && brq->data.blocks >
++			queue_physical_block_size(mq->queue) >> 9) {
++		/* Read one (native) sector at a time */
+ 		mmc_blk_read_single(mq, req);
+ 		return;
+ 	}
+diff --git a/drivers/mmc/host/cavium-octeon.c b/drivers/mmc/host/cavium-octeon.c
+index 2c4b2df52adb1..12dca91a8ef61 100644
+--- a/drivers/mmc/host/cavium-octeon.c
++++ b/drivers/mmc/host/cavium-octeon.c
+@@ -277,6 +277,7 @@ static int octeon_mmc_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "Error populating slots\n");
+ 			octeon_mmc_set_shared_power(host, 0);
++			of_node_put(cn);
+ 			goto error;
+ 		}
+ 		i++;
+diff --git a/drivers/mmc/host/cavium-thunderx.c b/drivers/mmc/host/cavium-thunderx.c
+index 76013bbbcff30..202b1d6da678c 100644
+--- a/drivers/mmc/host/cavium-thunderx.c
++++ b/drivers/mmc/host/cavium-thunderx.c
+@@ -142,8 +142,10 @@ static int thunder_mmc_probe(struct pci_dev *pdev,
+ 				continue;
+ 
+ 			ret = cvm_mmc_of_slot_probe(&host->slot_pdev[i]->dev, host);
+-			if (ret)
++			if (ret) {
++				of_node_put(child_node);
+ 				goto error;
++			}
+ 		}
+ 		i++;
+ 	}
+diff --git a/drivers/mmc/host/sdhci-of-at91.c b/drivers/mmc/host/sdhci-of-at91.c
+index d1a1c548c515f..0452c312b65eb 100644
+--- a/drivers/mmc/host/sdhci-of-at91.c
++++ b/drivers/mmc/host/sdhci-of-at91.c
+@@ -100,8 +100,13 @@ static void sdhci_at91_set_clock(struct sdhci_host *host, unsigned int clock)
+ static void sdhci_at91_set_uhs_signaling(struct sdhci_host *host,
+ 					 unsigned int timing)
+ {
+-	if (timing == MMC_TIMING_MMC_DDR52)
+-		sdhci_writeb(host, SDMMC_MC1R_DDR, SDMMC_MC1R);
++	u8 mc1r;
++
++	if (timing == MMC_TIMING_MMC_DDR52) {
++		mc1r = sdhci_readb(host, SDMMC_MC1R);
++		mc1r |= SDMMC_MC1R_DDR;
++		sdhci_writeb(host, mc1r, SDMMC_MC1R);
++	}
+ 	sdhci_set_uhs_signaling(host, timing);
+ }
+ 
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 343648fcbc31f..d53374991e137 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -904,6 +904,7 @@ static int esdhc_signal_voltage_switch(struct mmc_host *mmc,
+ 		scfg_node = of_find_matching_node(NULL, scfg_device_ids);
+ 		if (scfg_node)
+ 			scfg_base = of_iomap(scfg_node, 0);
++		of_node_put(scfg_node);
+ 		if (scfg_base) {
+ 			sdhciovselcr = SDHCIOVSELCR_TGLEN |
+ 				       SDHCIOVSELCR_VSELVAL;
+diff --git a/drivers/mtd/devices/st_spi_fsm.c b/drivers/mtd/devices/st_spi_fsm.c
+index 1888523d9745f..9bee99f07af0c 100644
+--- a/drivers/mtd/devices/st_spi_fsm.c
++++ b/drivers/mtd/devices/st_spi_fsm.c
+@@ -2115,10 +2115,12 @@ static int stfsm_probe(struct platform_device *pdev)
+ 		(long long)fsm->mtd.size, (long long)(fsm->mtd.size >> 20),
+ 		fsm->mtd.erasesize, (fsm->mtd.erasesize >> 10));
+ 
+-	return mtd_device_register(&fsm->mtd, NULL, 0);
+-
++	ret = mtd_device_register(&fsm->mtd, NULL, 0);
++	if (ret) {
+ err_clk_unprepare:
+-	clk_disable_unprepare(fsm->clk);
++		clk_disable_unprepare(fsm->clk);
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/mtd/maps/physmap-versatile.c b/drivers/mtd/maps/physmap-versatile.c
+index ad7cd9cfaee04..a1b8b7b25f88b 100644
+--- a/drivers/mtd/maps/physmap-versatile.c
++++ b/drivers/mtd/maps/physmap-versatile.c
+@@ -93,6 +93,7 @@ static int ap_flash_init(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 	ebi_base = of_iomap(ebi, 0);
++	of_node_put(ebi);
+ 	if (!ebi_base)
+ 		return -ENODEV;
+ 
+@@ -207,6 +208,7 @@ int of_flash_probe_versatile(struct platform_device *pdev,
+ 
+ 		versatile_flashprot = (enum versatile_flashprot)devid->data;
+ 		rmap = syscon_node_to_regmap(sysnp);
++		of_node_put(sysnp);
+ 		if (IS_ERR(rmap))
+ 			return PTR_ERR(rmap);
+ 
+diff --git a/drivers/mtd/nand/raw/arasan-nand-controller.c b/drivers/mtd/nand/raw/arasan-nand-controller.c
+index 0ee3192916d97..6a0d48c42cfa9 100644
+--- a/drivers/mtd/nand/raw/arasan-nand-controller.c
++++ b/drivers/mtd/nand/raw/arasan-nand-controller.c
+@@ -91,7 +91,7 @@
+ 
+ #define DATA_INTERFACE_REG		0x6C
+ #define   DIFACE_SDR_MODE(x)		FIELD_PREP(GENMASK(2, 0), (x))
+-#define   DIFACE_DDR_MODE(x)		FIELD_PREP(GENMASK(5, 3), (X))
++#define   DIFACE_DDR_MODE(x)		FIELD_PREP(GENMASK(5, 3), (x))
+ #define   DIFACE_SDR			0
+ #define   DIFACE_NVDDR			BIT(9)
+ 
+@@ -283,17 +283,17 @@ static int anfc_select_target(struct nand_chip *chip, int target)
+ 
+ 	/* Update clock frequency */
+ 	if (nfc->cur_clk != anand->clk) {
+-		clk_disable_unprepare(nfc->controller_clk);
+-		ret = clk_set_rate(nfc->controller_clk, anand->clk);
++		clk_disable_unprepare(nfc->bus_clk);
++		ret = clk_set_rate(nfc->bus_clk, anand->clk);
+ 		if (ret) {
+ 			dev_err(nfc->dev, "Failed to change clock rate\n");
+ 			return ret;
+ 		}
+ 
+-		ret = clk_prepare_enable(nfc->controller_clk);
++		ret = clk_prepare_enable(nfc->bus_clk);
+ 		if (ret) {
+ 			dev_err(nfc->dev,
+-				"Failed to re-enable the controller clock\n");
++				"Failed to re-enable the bus clock\n");
+ 			return ret;
+ 		}
+ 
+@@ -884,21 +884,60 @@ static int anfc_setup_interface(struct nand_chip *chip, int target,
+ 	struct anand *anand = to_anand(chip);
+ 	struct arasan_nfc *nfc = to_anfc(chip->controller);
+ 	struct device_node *np = nfc->dev->of_node;
++	const struct nand_sdr_timings *sdr;
++	const struct nand_nvddr_timings *nvddr;
++
++	if (nand_interface_is_nvddr(conf)) {
++		nvddr = nand_get_nvddr_timings(conf);
++		if (IS_ERR(nvddr))
++			return PTR_ERR(nvddr);
++
++		/*
++		 * The controller only supports data payload requests which are
++		 * a multiple of 4. In practice, most data accesses are 4-byte
++		 * aligned and this is not an issue. However, rounding up will
++		 * simply be refused by the controller if we reached the end of
++		 * the device *and* we are using the NV-DDR interface(!). In
++		 * this situation, unaligned data requests ending at the device
++		 * boundary will confuse the controller and cannot be performed.
++		 *
++		 * This is something that happens in nand_read_subpage() when
++		 * selecting software ECC support and must be avoided.
++		 */
++		if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT)
++			return -ENOTSUPP;
++	} else {
++		sdr = nand_get_sdr_timings(conf);
++		if (IS_ERR(sdr))
++			return PTR_ERR(sdr);
++	}
+ 
+ 	if (target < 0)
+ 		return 0;
+ 
+-	anand->timings = DIFACE_SDR | DIFACE_SDR_MODE(conf->timings.mode);
+-	anand->clk = ANFC_XLNX_SDR_DFLT_CORE_CLK;
++	if (nand_interface_is_sdr(conf))
++		anand->timings = DIFACE_SDR |
++				 DIFACE_SDR_MODE(conf->timings.mode);
++	else
++		anand->timings = DIFACE_NVDDR |
++				 DIFACE_DDR_MODE(conf->timings.mode);
++
++	if (nand_interface_is_sdr(conf)) {
++		anand->clk = ANFC_XLNX_SDR_DFLT_CORE_CLK;
++	} else {
++		/* ONFI timings are defined in picoseconds */
++		anand->clk = div_u64((u64)NSEC_PER_SEC * 1000,
++				     conf->timings.nvddr.tCK_min);
++	}
+ 
+ 	/*
+ 	 * Due to a hardware bug in the ZynqMP SoC, SDR timing modes 0-1 work
+ 	 * with f > 90MHz (default clock is 100MHz) but signals are unstable
+ 	 * with higher modes. Hence we decrease a little bit the clock rate to
+-	 * 80MHz when using modes 2-5 with this SoC.
++	 * 80MHz when using SDR modes 2-5 with this SoC.
+ 	 */
+ 	if (of_device_is_compatible(np, "xlnx,zynqmp-nand-controller") &&
+-	    conf->timings.mode >= 2)
++	    nand_interface_is_sdr(conf) && conf->timings.mode >= 2)
+ 		anand->clk = ANFC_XLNX_SDR_HS_CORE_CLK;
+ 
+ 	return 0;
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index c048e826746a9..2228c34f3deab 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -1246,7 +1246,7 @@ static int atmel_smc_nand_prepare_smcconf(struct atmel_nand *nand,
+ 	nc = to_nand_controller(nand->base.controller);
+ 
+ 	/* DDR interface not supported. */
+-	if (conf->type != NAND_SDR_IFACE)
++	if (!nand_interface_is_sdr(conf))
+ 		return -ENOTSUPP;
+ 
+ 	/*
+diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
+index 817bddccb775f..327a2257ec26d 100644
+--- a/drivers/mtd/nand/raw/meson_nand.c
++++ b/drivers/mtd/nand/raw/meson_nand.c
+@@ -1307,7 +1307,6 @@ static int meson_nfc_nand_chip_cleanup(struct meson_nfc *nfc)
+ 		if (ret)
+ 			return ret;
+ 
+-		meson_nfc_free_buffer(&meson_chip->nand);
+ 		nand_cleanup(&meson_chip->nand);
+ 		list_del(&meson_chip->node);
+ 	}
+diff --git a/drivers/mtd/nand/raw/nand_timings.c b/drivers/mtd/nand/raw/nand_timings.c
+index 94d832646487d..481b56d5f60d9 100644
+--- a/drivers/mtd/nand/raw/nand_timings.c
++++ b/drivers/mtd/nand/raw/nand_timings.c
+@@ -292,6 +292,261 @@ static const struct nand_interface_config onfi_sdr_timings[] = {
+ 	},
+ };
+ 
++static const struct nand_interface_config onfi_nvddr_timings[] = {
++	/* Mode 0 */
++	{
++		.type = NAND_NVDDR_IFACE,
++		.timings.mode = 0,
++		.timings.nvddr = {
++			.tCCS_min = 500000,
++			.tR_max = 200000000,
++			.tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
++			.tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
++			.tAC_min = 3000,
++			.tAC_max = 25000,
++			.tADL_min = 400000,
++			.tCAD_min = 45000,
++			.tCAH_min = 10000,
++			.tCALH_min = 10000,
++			.tCALS_min = 10000,
++			.tCAS_min = 10000,
++			.tCEH_min = 20000,
++			.tCH_min = 10000,
++			.tCK_min = 50000,
++			.tCS_min = 35000,
++			.tDH_min = 5000,
++			.tDQSCK_min = 3000,
++			.tDQSCK_max = 25000,
++			.tDQSD_min = 0,
++			.tDQSD_max = 18000,
++			.tDQSHZ_max = 20000,
++			.tDQSQ_max = 5000,
++			.tDS_min = 5000,
++			.tDSC_min = 50000,
++			.tFEAT_max = 1000000,
++			.tITC_max = 1000000,
++			.tQHS_max = 6000,
++			.tRHW_min = 100000,
++			.tRR_min = 20000,
++			.tRST_max = 500000000,
++			.tWB_max = 100000,
++			.tWHR_min = 80000,
++			.tWRCK_min = 20000,
++			.tWW_min = 100000,
++		},
++	},
++	/* Mode 1 */
++	{
++		.type = NAND_NVDDR_IFACE,
++		.timings.mode = 1,
++		.timings.nvddr = {
++			.tCCS_min = 500000,
++			.tR_max = 200000000,
++			.tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
++			.tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
++			.tAC_min = 3000,
++			.tAC_max = 25000,
++			.tADL_min = 400000,
++			.tCAD_min = 45000,
++			.tCAH_min = 5000,
++			.tCALH_min = 5000,
++			.tCALS_min = 5000,
++			.tCAS_min = 5000,
++			.tCEH_min = 20000,
++			.tCH_min = 5000,
++			.tCK_min = 30000,
++			.tCS_min = 25000,
++			.tDH_min = 2500,
++			.tDQSCK_min = 3000,
++			.tDQSCK_max = 25000,
++			.tDQSD_min = 0,
++			.tDQSD_max = 18000,
++			.tDQSHZ_max = 20000,
++			.tDQSQ_max = 2500,
++			.tDS_min = 3000,
++			.tDSC_min = 30000,
++			.tFEAT_max = 1000000,
++			.tITC_max = 1000000,
++			.tQHS_max = 3000,
++			.tRHW_min = 100000,
++			.tRR_min = 20000,
++			.tRST_max = 500000000,
++			.tWB_max = 100000,
++			.tWHR_min = 80000,
++			.tWRCK_min = 20000,
++			.tWW_min = 100000,
++		},
++	},
++	/* Mode 2 */
++	{
++		.type = NAND_NVDDR_IFACE,
++		.timings.mode = 2,
++		.timings.nvddr = {
++			.tCCS_min = 500000,
++			.tR_max = 200000000,
++			.tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
++			.tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
++			.tAC_min = 3000,
++			.tAC_max = 25000,
++			.tADL_min = 400000,
++			.tCAD_min = 45000,
++			.tCAH_min = 4000,
++			.tCALH_min = 4000,
++			.tCALS_min = 4000,
++			.tCAS_min = 4000,
++			.tCEH_min = 20000,
++			.tCH_min = 4000,
++			.tCK_min = 20000,
++			.tCS_min = 15000,
++			.tDH_min = 1700,
++			.tDQSCK_min = 3000,
++			.tDQSCK_max = 25000,
++			.tDQSD_min = 0,
++			.tDQSD_max = 18000,
++			.tDQSHZ_max = 20000,
++			.tDQSQ_max = 1700,
++			.tDS_min = 2000,
++			.tDSC_min = 20000,
++			.tFEAT_max = 1000000,
++			.tITC_max = 1000000,
++			.tQHS_max = 2000,
++			.tRHW_min = 100000,
++			.tRR_min = 20000,
++			.tRST_max = 500000000,
++			.tWB_max = 100000,
++			.tWHR_min = 80000,
++			.tWRCK_min = 20000,
++			.tWW_min = 100000,
++		},
++	},
++	/* Mode 3 */
++	{
++		.type = NAND_NVDDR_IFACE,
++		.timings.mode = 3,
++		.timings.nvddr = {
++			.tCCS_min = 500000,
++			.tR_max = 200000000,
++			.tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
++			.tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
++			.tAC_min = 3000,
++			.tAC_max = 25000,
++			.tADL_min = 400000,
++			.tCAD_min = 45000,
++			.tCAH_min = 3000,
++			.tCALH_min = 3000,
++			.tCALS_min = 3000,
++			.tCAS_min = 3000,
++			.tCEH_min = 20000,
++			.tCH_min = 3000,
++			.tCK_min = 15000,
++			.tCS_min = 15000,
++			.tDH_min = 1300,
++			.tDQSCK_min = 3000,
++			.tDQSCK_max = 25000,
++			.tDQSD_min = 0,
++			.tDQSD_max = 18000,
++			.tDQSHZ_max = 20000,
++			.tDQSQ_max = 1300,
++			.tDS_min = 1500,
++			.tDSC_min = 15000,
++			.tFEAT_max = 1000000,
++			.tITC_max = 1000000,
++			.tQHS_max = 1500,
++			.tRHW_min = 100000,
++			.tRR_min = 20000,
++			.tRST_max = 500000000,
++			.tWB_max = 100000,
++			.tWHR_min = 80000,
++			.tWRCK_min = 20000,
++			.tWW_min = 100000,
++		},
++	},
++	/* Mode 4 */
++	{
++		.type = NAND_NVDDR_IFACE,
++		.timings.mode = 4,
++		.timings.nvddr = {
++			.tCCS_min = 500000,
++			.tR_max = 200000000,
++			.tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
++			.tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
++			.tAC_min = 3000,
++			.tAC_max = 25000,
++			.tADL_min = 400000,
++			.tCAD_min = 45000,
++			.tCAH_min = 2500,
++			.tCALH_min = 2500,
++			.tCALS_min = 2500,
++			.tCAS_min = 2500,
++			.tCEH_min = 20000,
++			.tCH_min = 2500,
++			.tCK_min = 12000,
++			.tCS_min = 15000,
++			.tDH_min = 1100,
++			.tDQSCK_min = 3000,
++			.tDQSCK_max = 25000,
++			.tDQSD_min = 0,
++			.tDQSD_max = 18000,
++			.tDQSHZ_max = 20000,
++			.tDQSQ_max = 1000,
++			.tDS_min = 1100,
++			.tDSC_min = 12000,
++			.tFEAT_max = 1000000,
++			.tITC_max = 1000000,
++			.tQHS_max = 1200,
++			.tRHW_min = 100000,
++			.tRR_min = 20000,
++			.tRST_max = 500000000,
++			.tWB_max = 100000,
++			.tWHR_min = 80000,
++			.tWRCK_min = 20000,
++			.tWW_min = 100000,
++		},
++	},
++	/* Mode 5 */
++	{
++		.type = NAND_NVDDR_IFACE,
++		.timings.mode = 5,
++		.timings.nvddr = {
++			.tCCS_min = 500000,
++			.tR_max = 200000000,
++			.tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
++			.tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
++			.tAC_min = 3000,
++			.tAC_max = 25000,
++			.tADL_min = 400000,
++			.tCAD_min = 45000,
++			.tCAH_min = 2000,
++			.tCALH_min = 2000,
++			.tCALS_min = 2000,
++			.tCAS_min = 2000,
++			.tCEH_min = 20000,
++			.tCH_min = 2000,
++			.tCK_min = 10000,
++			.tCS_min = 15000,
++			.tDH_min = 900,
++			.tDQSCK_min = 3000,
++			.tDQSCK_max = 25000,
++			.tDQSD_min = 0,
++			.tDQSD_max = 18000,
++			.tDQSHZ_max = 20000,
++			.tDQSQ_max = 850,
++			.tDS_min = 900,
++			.tDSC_min = 10000,
++			.tFEAT_max = 1000000,
++			.tITC_max = 1000000,
++			.tQHS_max = 1000,
++			.tRHW_min = 100000,
++			.tRR_min = 20000,
++			.tRST_max = 500000000,
++			.tWB_max = 100000,
++			.tWHR_min = 80000,
++			.tWRCK_min = 20000,
++			.tWW_min = 100000,
++		},
++	},
++};
++
+ /* All NAND chips share the same reset data interface: SDR mode 0 */
+ const struct nand_interface_config *nand_get_reset_interface_config(void)
+ {
+diff --git a/drivers/mtd/parsers/redboot.c b/drivers/mtd/parsers/redboot.c
+index 3ccd6363ee8cb..4f3bcc59a6385 100644
+--- a/drivers/mtd/parsers/redboot.c
++++ b/drivers/mtd/parsers/redboot.c
+@@ -58,6 +58,7 @@ static void parse_redboot_of(struct mtd_info *master)
+ 		return;
+ 
+ 	ret = of_property_read_u32(npart, "fis-index-block", &dirblock);
++	of_node_put(npart);
+ 	if (ret)
+ 		return;
+ 
+diff --git a/drivers/mtd/sm_ftl.c b/drivers/mtd/sm_ftl.c
+index b9f272408c4d5..2fedae67c07c5 100644
+--- a/drivers/mtd/sm_ftl.c
++++ b/drivers/mtd/sm_ftl.c
+@@ -1098,9 +1098,9 @@ static void sm_release(struct mtd_blktrans_dev *dev)
+ {
+ 	struct sm_ftl *ftl = dev->priv;
+ 
+-	mutex_lock(&ftl->mutex);
+ 	del_timer_sync(&ftl->timer);
+ 	cancel_work_sync(&ftl->flush_work);
++	mutex_lock(&ftl->mutex);
+ 	sm_cache_flush(ftl);
+ 	mutex_unlock(&ftl->mutex);
+ }
+diff --git a/drivers/net/can/pch_can.c b/drivers/net/can/pch_can.c
+index 79d9abdcc65aa..1272ec793a8d6 100644
+--- a/drivers/net/can/pch_can.c
++++ b/drivers/net/can/pch_can.c
+@@ -489,6 +489,7 @@ static void pch_can_error(struct net_device *ndev, u32 status)
+ 	if (!skb)
+ 		return;
+ 
++	errc = ioread32(&priv->regs->errc);
+ 	if (status & PCH_BUS_OFF) {
+ 		pch_can_set_tx_all(priv, 0);
+ 		pch_can_set_rx_all(priv, 0);
+@@ -496,9 +497,11 @@ static void pch_can_error(struct net_device *ndev, u32 status)
+ 		cf->can_id |= CAN_ERR_BUSOFF;
+ 		priv->can.can_stats.bus_off++;
+ 		can_bus_off(ndev);
++	} else {
++		cf->data[6] = errc & PCH_TEC;
++		cf->data[7] = (errc & PCH_REC) >> 8;
+ 	}
+ 
+-	errc = ioread32(&priv->regs->errc);
+ 	/* Warning interrupt. */
+ 	if (status & PCH_EWARN) {
+ 		state = CAN_STATE_ERROR_WARNING;
+@@ -556,9 +559,6 @@ static void pch_can_error(struct net_device *ndev, u32 status)
+ 		break;
+ 	}
+ 
+-	cf->data[6] = errc & PCH_TEC;
+-	cf->data[7] = (errc & PCH_REC) >> 8;
+-
+ 	priv->can.state = state;
+ 	netif_receive_skb(skb);
+ 
+diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c
+index 3570a4de0085a..134eda66f0dcf 100644
+--- a/drivers/net/can/rcar/rcar_can.c
++++ b/drivers/net/can/rcar/rcar_can.c
+@@ -235,11 +235,8 @@ static void rcar_can_error(struct net_device *ndev)
+ 	if (eifr & (RCAR_CAN_EIFR_EWIF | RCAR_CAN_EIFR_EPIF)) {
+ 		txerr = readb(&priv->regs->tecr);
+ 		rxerr = readb(&priv->regs->recr);
+-		if (skb) {
++		if (skb)
+ 			cf->can_id |= CAN_ERR_CRTL;
+-			cf->data[6] = txerr;
+-			cf->data[7] = rxerr;
+-		}
+ 	}
+ 	if (eifr & RCAR_CAN_EIFR_BEIF) {
+ 		int rx_errors = 0, tx_errors = 0;
+@@ -339,6 +336,9 @@ static void rcar_can_error(struct net_device *ndev)
+ 		can_bus_off(ndev);
+ 		if (skb)
+ 			cf->can_id |= CAN_ERR_BUSOFF;
++	} else if (skb) {
++		cf->data[6] = txerr;
++		cf->data[7] = rxerr;
+ 	}
+ 	if (eifr & RCAR_CAN_EIFR_ORIF) {
+ 		netdev_dbg(priv->ndev, "Receive overrun error interrupt\n");
+diff --git a/drivers/net/can/sja1000/sja1000.c b/drivers/net/can/sja1000/sja1000.c
+index 25a4d7d0b3498..ee34baeb2afef 100644
+--- a/drivers/net/can/sja1000/sja1000.c
++++ b/drivers/net/can/sja1000/sja1000.c
+@@ -405,9 +405,6 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ 	txerr = priv->read_reg(priv, SJA1000_TXERR);
+ 	rxerr = priv->read_reg(priv, SJA1000_RXERR);
+ 
+-	cf->data[6] = txerr;
+-	cf->data[7] = rxerr;
+-
+ 	if (isrc & IRQ_DOI) {
+ 		/* data overrun interrupt */
+ 		netdev_dbg(dev, "data overrun interrupt\n");
+@@ -429,6 +426,10 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ 		else
+ 			state = CAN_STATE_ERROR_ACTIVE;
+ 	}
++	if (state != CAN_STATE_BUS_OFF) {
++		cf->data[6] = txerr;
++		cf->data[7] = rxerr;
++	}
+ 	if (isrc & IRQ_BEI) {
+ 		/* bus error interrupt */
+ 		priv->can.can_stats.bus_error++;
+diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c
+index 7d2315c8cacb1..28273e84171a2 100644
+--- a/drivers/net/can/spi/hi311x.c
++++ b/drivers/net/can/spi/hi311x.c
+@@ -670,8 +670,6 @@ static irqreturn_t hi3110_can_ist(int irq, void *dev_id)
+ 
+ 			txerr = hi3110_read(spi, HI3110_READ_TEC);
+ 			rxerr = hi3110_read(spi, HI3110_READ_REC);
+-			cf->data[6] = txerr;
+-			cf->data[7] = rxerr;
+ 			tx_state = txerr >= rxerr ? new_state : 0;
+ 			rx_state = txerr <= rxerr ? new_state : 0;
+ 			can_change_state(net, cf, tx_state, rx_state);
+@@ -684,6 +682,9 @@ static irqreturn_t hi3110_can_ist(int irq, void *dev_id)
+ 					hi3110_hw_sleep(spi);
+ 					break;
+ 				}
++			} else {
++				cf->data[6] = txerr;
++				cf->data[7] = rxerr;
+ 			}
+ 		}
+ 
+diff --git a/drivers/net/can/sun4i_can.c b/drivers/net/can/sun4i_can.c
+index b3f2f4fe5ee04..39ddb3d849dd8 100644
+--- a/drivers/net/can/sun4i_can.c
++++ b/drivers/net/can/sun4i_can.c
+@@ -525,11 +525,6 @@ static int sun4i_can_err(struct net_device *dev, u8 isrc, u8 status)
+ 	rxerr = (errc >> 16) & 0xFF;
+ 	txerr = errc & 0xFF;
+ 
+-	if (skb) {
+-		cf->data[6] = txerr;
+-		cf->data[7] = rxerr;
+-	}
+-
+ 	if (isrc & SUN4I_INT_DATA_OR) {
+ 		/* data overrun interrupt */
+ 		netdev_dbg(dev, "data overrun interrupt\n");
+@@ -560,6 +555,10 @@ static int sun4i_can_err(struct net_device *dev, u8 isrc, u8 status)
+ 		else
+ 			state = CAN_STATE_ERROR_ACTIVE;
+ 	}
++	if (skb && state != CAN_STATE_BUS_OFF) {
++		cf->data[6] = txerr;
++		cf->data[7] = rxerr;
++	}
+ 	if (isrc & SUN4I_INT_BUS_ERR) {
+ 		/* bus error interrupt */
+ 		netdev_dbg(dev, "bus error interrupt\n");
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+index a7c408acb0c09..01d4a731b579c 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+@@ -890,8 +890,10 @@ static void kvaser_usb_hydra_update_state(struct kvaser_usb_net_priv *priv,
+ 	    new_state < CAN_STATE_BUS_OFF)
+ 		priv->can.can_stats.restarts++;
+ 
+-	cf->data[6] = bec->txerr;
+-	cf->data[7] = bec->rxerr;
++	if (new_state != CAN_STATE_BUS_OFF) {
++		cf->data[6] = bec->txerr;
++		cf->data[7] = bec->rxerr;
++	}
+ 
+ 	stats = &netdev->stats;
+ 	stats->rx_packets++;
+@@ -1045,8 +1047,10 @@ kvaser_usb_hydra_error_frame(struct kvaser_usb_net_priv *priv,
+ 	shhwtstamps->hwtstamp = hwtstamp;
+ 
+ 	cf->can_id |= CAN_ERR_BUSERROR;
+-	cf->data[6] = bec.txerr;
+-	cf->data[7] = bec.rxerr;
++	if (new_state != CAN_STATE_BUS_OFF) {
++		cf->data[6] = bec.txerr;
++		cf->data[7] = bec.rxerr;
++	}
+ 
+ 	stats->rx_packets++;
+ 	stats->rx_bytes += cf->can_dlc;
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+index 0e0403dd05500..5e281249ad5fe 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+@@ -857,8 +857,10 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
+ 		break;
+ 	}
+ 
+-	cf->data[6] = es->txerr;
+-	cf->data[7] = es->rxerr;
++	if (new_state != CAN_STATE_BUS_OFF) {
++		cf->data[6] = es->txerr;
++		cf->data[7] = es->rxerr;
++	}
+ 
+ 	stats->rx_packets++;
+ 	stats->rx_bytes += cf->can_dlc;
+diff --git a/drivers/net/can/usb/usb_8dev.c b/drivers/net/can/usb/usb_8dev.c
+index 985e00aee4ee1..885c54c6f81ad 100644
+--- a/drivers/net/can/usb/usb_8dev.c
++++ b/drivers/net/can/usb/usb_8dev.c
+@@ -442,9 +442,10 @@ static void usb_8dev_rx_err_msg(struct usb_8dev_priv *priv,
+ 
+ 	if (rx_errors)
+ 		stats->rx_errors++;
+-
+-	cf->data[6] = txerr;
+-	cf->data[7] = rxerr;
++	if (priv->can.state != CAN_STATE_BUS_OFF) {
++		cf->data[6] = txerr;
++		cf->data[7] = rxerr;
++	}
+ 
+ 	priv->bec.txerr = txerr;
+ 	priv->bec.rxerr = rxerr;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_dev.h b/drivers/net/ethernet/huawei/hinic/hinic_dev.h
+index fb3e89141a0d9..a4fbf44f944cd 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_dev.h
++++ b/drivers/net/ethernet/huawei/hinic/hinic_dev.h
+@@ -95,9 +95,6 @@ struct hinic_dev {
+ 	u16				sq_depth;
+ 	u16				rq_depth;
+ 
+-	struct hinic_txq_stats          tx_stats;
+-	struct hinic_rxq_stats          rx_stats;
+-
+ 	u8				rss_tmpl_idx;
+ 	u8				rss_hash_engine;
+ 	u16				num_rss;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+index ace949fe62331..4f1d585485d7a 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+@@ -62,8 +62,6 @@ MODULE_PARM_DESC(rx_weight, "Number Rx packets for NAPI budget (default=64)");
+ 
+ #define HINIC_LRO_RX_TIMER_DEFAULT	16
+ 
+-#define VLAN_BITMAP_SIZE(nic_dev)       (ALIGN(VLAN_N_VID, 8) / 8)
+-
+ #define work_to_rx_mode_work(work)      \
+ 		container_of(work, struct hinic_rx_mode_work, work)
+ 
+@@ -82,56 +80,44 @@ static int set_features(struct hinic_dev *nic_dev,
+ 			netdev_features_t pre_features,
+ 			netdev_features_t features, bool force_change);
+ 
+-static void update_rx_stats(struct hinic_dev *nic_dev, struct hinic_rxq *rxq)
++static void gather_rx_stats(struct hinic_rxq_stats *nic_rx_stats, struct hinic_rxq *rxq)
+ {
+-	struct hinic_rxq_stats *nic_rx_stats = &nic_dev->rx_stats;
+ 	struct hinic_rxq_stats rx_stats;
+ 
+-	u64_stats_init(&rx_stats.syncp);
+-
+ 	hinic_rxq_get_stats(rxq, &rx_stats);
+ 
+-	u64_stats_update_begin(&nic_rx_stats->syncp);
+ 	nic_rx_stats->bytes += rx_stats.bytes;
+ 	nic_rx_stats->pkts  += rx_stats.pkts;
+ 	nic_rx_stats->errors += rx_stats.errors;
+ 	nic_rx_stats->csum_errors += rx_stats.csum_errors;
+ 	nic_rx_stats->other_errors += rx_stats.other_errors;
+-	u64_stats_update_end(&nic_rx_stats->syncp);
+-
+-	hinic_rxq_clean_stats(rxq);
+ }
+ 
+-static void update_tx_stats(struct hinic_dev *nic_dev, struct hinic_txq *txq)
++static void gather_tx_stats(struct hinic_txq_stats *nic_tx_stats, struct hinic_txq *txq)
+ {
+-	struct hinic_txq_stats *nic_tx_stats = &nic_dev->tx_stats;
+ 	struct hinic_txq_stats tx_stats;
+ 
+-	u64_stats_init(&tx_stats.syncp);
+-
+ 	hinic_txq_get_stats(txq, &tx_stats);
+ 
+-	u64_stats_update_begin(&nic_tx_stats->syncp);
+ 	nic_tx_stats->bytes += tx_stats.bytes;
+ 	nic_tx_stats->pkts += tx_stats.pkts;
+ 	nic_tx_stats->tx_busy += tx_stats.tx_busy;
+ 	nic_tx_stats->tx_wake += tx_stats.tx_wake;
+ 	nic_tx_stats->tx_dropped += tx_stats.tx_dropped;
+ 	nic_tx_stats->big_frags_pkts += tx_stats.big_frags_pkts;
+-	u64_stats_update_end(&nic_tx_stats->syncp);
+-
+-	hinic_txq_clean_stats(txq);
+ }
+ 
+-static void update_nic_stats(struct hinic_dev *nic_dev)
++static void gather_nic_stats(struct hinic_dev *nic_dev,
++			     struct hinic_rxq_stats *nic_rx_stats,
++			     struct hinic_txq_stats *nic_tx_stats)
+ {
+ 	int i, num_qps = hinic_hwdev_num_qps(nic_dev->hwdev);
+ 
+ 	for (i = 0; i < num_qps; i++)
+-		update_rx_stats(nic_dev, &nic_dev->rxqs[i]);
++		gather_rx_stats(nic_rx_stats, &nic_dev->rxqs[i]);
+ 
+ 	for (i = 0; i < num_qps; i++)
+-		update_tx_stats(nic_dev, &nic_dev->txqs[i]);
++		gather_tx_stats(nic_tx_stats, &nic_dev->txqs[i]);
+ }
+ 
+ /**
+@@ -567,8 +553,6 @@ int hinic_close(struct net_device *netdev)
+ 	netif_carrier_off(netdev);
+ 	netif_tx_disable(netdev);
+ 
+-	update_nic_stats(nic_dev);
+-
+ 	up(&nic_dev->mgmt_lock);
+ 
+ 	if (!HINIC_IS_VF(nic_dev->hwdev->hwif))
+@@ -862,26 +846,19 @@ static void hinic_get_stats64(struct net_device *netdev,
+ 			      struct rtnl_link_stats64 *stats)
+ {
+ 	struct hinic_dev *nic_dev = netdev_priv(netdev);
+-	struct hinic_rxq_stats *nic_rx_stats;
+-	struct hinic_txq_stats *nic_tx_stats;
+-
+-	nic_rx_stats = &nic_dev->rx_stats;
+-	nic_tx_stats = &nic_dev->tx_stats;
+-
+-	down(&nic_dev->mgmt_lock);
++	struct hinic_rxq_stats nic_rx_stats = {};
++	struct hinic_txq_stats nic_tx_stats = {};
+ 
+ 	if (nic_dev->flags & HINIC_INTF_UP)
+-		update_nic_stats(nic_dev);
+-
+-	up(&nic_dev->mgmt_lock);
++		gather_nic_stats(nic_dev, &nic_rx_stats, &nic_tx_stats);
+ 
+-	stats->rx_bytes   = nic_rx_stats->bytes;
+-	stats->rx_packets = nic_rx_stats->pkts;
+-	stats->rx_errors  = nic_rx_stats->errors;
++	stats->rx_bytes   = nic_rx_stats.bytes;
++	stats->rx_packets = nic_rx_stats.pkts;
++	stats->rx_errors  = nic_rx_stats.errors;
+ 
+-	stats->tx_bytes   = nic_tx_stats->bytes;
+-	stats->tx_packets = nic_tx_stats->pkts;
+-	stats->tx_errors  = nic_tx_stats->tx_dropped;
++	stats->tx_bytes   = nic_tx_stats.bytes;
++	stats->tx_packets = nic_tx_stats.pkts;
++	stats->tx_errors  = nic_tx_stats.tx_dropped;
+ }
+ 
+ static int hinic_set_features(struct net_device *netdev,
+@@ -1180,8 +1157,6 @@ static void hinic_free_intr_coalesce(struct hinic_dev *nic_dev)
+ static int nic_dev_init(struct pci_dev *pdev)
+ {
+ 	struct hinic_rx_mode_work *rx_mode_work;
+-	struct hinic_txq_stats *tx_stats;
+-	struct hinic_rxq_stats *rx_stats;
+ 	struct hinic_dev *nic_dev;
+ 	struct net_device *netdev;
+ 	struct hinic_hwdev *hwdev;
+@@ -1242,15 +1217,8 @@ static int nic_dev_init(struct pci_dev *pdev)
+ 
+ 	sema_init(&nic_dev->mgmt_lock, 1);
+ 
+-	tx_stats = &nic_dev->tx_stats;
+-	rx_stats = &nic_dev->rx_stats;
+-
+-	u64_stats_init(&tx_stats->syncp);
+-	u64_stats_init(&rx_stats->syncp);
+-
+-	nic_dev->vlan_bitmap = devm_kzalloc(&pdev->dev,
+-					    VLAN_BITMAP_SIZE(nic_dev),
+-					    GFP_KERNEL);
++	nic_dev->vlan_bitmap = devm_bitmap_zalloc(&pdev->dev, VLAN_N_VID,
++						  GFP_KERNEL);
+ 	if (!nic_dev->vlan_bitmap) {
+ 		err = -ENOMEM;
+ 		goto err_vlan_bitmap;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_rx.c b/drivers/net/ethernet/huawei/hinic/hinic_rx.c
+index 070a7cc6392e8..04b19af63fd61 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_rx.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_rx.c
+@@ -73,7 +73,6 @@ void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats)
+ 	struct hinic_rxq_stats *rxq_stats = &rxq->rxq_stats;
+ 	unsigned int start;
+ 
+-	u64_stats_update_begin(&stats->syncp);
+ 	do {
+ 		start = u64_stats_fetch_begin(&rxq_stats->syncp);
+ 		stats->pkts = rxq_stats->pkts;
+@@ -83,7 +82,6 @@ void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats)
+ 		stats->csum_errors = rxq_stats->csum_errors;
+ 		stats->other_errors = rxq_stats->other_errors;
+ 	} while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+-	u64_stats_update_end(&stats->syncp);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+index 3828b09bfea3f..d13514a8160e8 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+@@ -97,7 +97,6 @@ void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats)
+ 	struct hinic_txq_stats *txq_stats = &txq->txq_stats;
+ 	unsigned int start;
+ 
+-	u64_stats_update_begin(&stats->syncp);
+ 	do {
+ 		start = u64_stats_fetch_begin(&txq_stats->syncp);
+ 		stats->pkts    = txq_stats->pkts;
+@@ -107,7 +106,6 @@ void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats)
+ 		stats->tx_dropped = txq_stats->tx_dropped;
+ 		stats->big_frags_pkts = txq_stats->big_frags_pkts;
+ 	} while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+-	u64_stats_update_end(&stats->syncp);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index ce1e2fb22e092..a994a2970ab24 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -86,6 +86,7 @@ struct iavf_vsi {
+ #define IAVF_HKEY_ARRAY_SIZE ((IAVF_VFQF_HKEY_MAX_INDEX + 1) * 4)
+ #define IAVF_HLUT_ARRAY_SIZE ((IAVF_VFQF_HLUT_MAX_INDEX + 1) * 4)
+ #define IAVF_MBPS_DIVISOR	125000 /* divisor to convert to Mbps */
++#define IAVF_MBPS_QUANTA	50
+ 
+ #define IAVF_VIRTCHNL_VF_RESOURCE_SIZE (sizeof(struct virtchnl_vf_resource) + \
+ 					(IAVF_MAX_VF_VSI * \
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index bd1fb3774769b..a9cea7ccdd865 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -2578,6 +2578,7 @@ static int iavf_validate_ch_config(struct iavf_adapter *adapter,
+ 				   struct tc_mqprio_qopt_offload *mqprio_qopt)
+ {
+ 	u64 total_max_rate = 0;
++	u32 tx_rate_rem = 0;
+ 	int i, num_qps = 0;
+ 	u64 tx_rate = 0;
+ 	int ret = 0;
+@@ -2592,12 +2593,32 @@ static int iavf_validate_ch_config(struct iavf_adapter *adapter,
+ 			return -EINVAL;
+ 		if (mqprio_qopt->min_rate[i]) {
+ 			dev_err(&adapter->pdev->dev,
+-				"Invalid min tx rate (greater than 0) specified\n");
++				"Invalid min tx rate (greater than 0) specified for TC%d\n",
++				i);
+ 			return -EINVAL;
+ 		}
+-		/*convert to Mbps */
++
++		/* convert to Mbps */
+ 		tx_rate = div_u64(mqprio_qopt->max_rate[i],
+ 				  IAVF_MBPS_DIVISOR);
++
++		if (mqprio_qopt->max_rate[i] &&
++		    tx_rate < IAVF_MBPS_QUANTA) {
++			dev_err(&adapter->pdev->dev,
++				"Invalid max tx rate for TC%d, minimum %dMbps\n",
++				i, IAVF_MBPS_QUANTA);
++			return -EINVAL;
++		}
++
++		(void)div_u64_rem(tx_rate, IAVF_MBPS_QUANTA, &tx_rate_rem);
++
++		if (tx_rate_rem != 0) {
++			dev_err(&adapter->pdev->dev,
++				"Invalid max tx rate for TC%d, not divisible by %d\n",
++				i, IAVF_MBPS_QUANTA);
++			return -EINVAL;
++		}
++
+ 		total_max_rate += tx_rate;
+ 		num_qps += mqprio_qopt->qopt.count[i];
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 73060b30fece3..b0229ceae2341 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -101,7 +101,7 @@ struct page_pool;
+ #define MLX5E_REQUIRED_WQE_MTTS		(MLX5_ALIGN_MTTS(MLX5_MPWRQ_PAGES_PER_WQE + 1))
+ #define MLX5E_REQUIRED_MTTS(wqes)	(wqes * MLX5E_REQUIRED_WQE_MTTS)
+ #define MLX5E_MAX_RQ_NUM_MTTS	\
+-	((1 << 16) * 2) /* So that MLX5_MTT_OCTW(num_mtts) fits into u16 */
++	(ALIGN_DOWN(U16_MAX, 4) * 2) /* So that MLX5_MTT_OCTW(num_mtts) fits into u16 */
+ #define MLX5E_ORDER2_MAX_PACKET_MTU (order_base_2(10 * 1024))
+ #define MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE_MPW	\
+ 		(ilog2(MLX5E_MAX_RQ_NUM_MTTS / MLX5E_REQUIRED_WQE_MTTS))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
+index 1b392696280d2..f824d781b99ef 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
+@@ -15,7 +15,7 @@ static int mlx5e_ktls_add(struct net_device *netdev, struct sock *sk,
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 	int err;
+ 
+-	if (WARN_ON(!mlx5e_ktls_type_check(mdev, crypto_info)))
++	if (!mlx5e_ktls_type_check(mdev, crypto_info))
+ 		return -EOPNOTSUPP;
+ 
+ 	if (direction == TLS_OFFLOAD_CTX_DIR_TX)
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index e95c09dc2c30d..e42520f909fe2 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -1286,7 +1286,7 @@ static int ionic_set_nic_features(struct ionic_lif *lif,
+ 	if ((old_hw_features ^ lif->hw_features) & IONIC_ETH_HW_RX_HASH)
+ 		ionic_lif_rss_config(lif, lif->rss_types, NULL, NULL);
+ 
+-	if ((vlan_flags & features) &&
++	if ((vlan_flags & le64_to_cpu(ctx.cmd.lif_setattr.features)) &&
+ 	    !(vlan_flags & le64_to_cpu(ctx.comp.lif_setattr.features)))
+ 		dev_info_once(lif->ionic->dev, "NIC is not supporting vlan offload, likely in SmartNIC mode\n");
+ 
+diff --git a/drivers/net/netdevsim/bpf.c b/drivers/net/netdevsim/bpf.c
+index a438202129323..50854265864d1 100644
+--- a/drivers/net/netdevsim/bpf.c
++++ b/drivers/net/netdevsim/bpf.c
+@@ -351,10 +351,12 @@ nsim_map_alloc_elem(struct bpf_offloaded_map *offmap, unsigned int idx)
+ {
+ 	struct nsim_bpf_bound_map *nmap = offmap->dev_priv;
+ 
+-	nmap->entry[idx].key = kmalloc(offmap->map.key_size, GFP_USER);
++	nmap->entry[idx].key = kmalloc(offmap->map.key_size,
++				       GFP_KERNEL_ACCOUNT | __GFP_NOWARN);
+ 	if (!nmap->entry[idx].key)
+ 		return -ENOMEM;
+-	nmap->entry[idx].value = kmalloc(offmap->map.value_size, GFP_USER);
++	nmap->entry[idx].value = kmalloc(offmap->map.value_size,
++					 GFP_KERNEL_ACCOUNT | __GFP_NOWARN);
+ 	if (!nmap->entry[idx].value) {
+ 		kfree(nmap->entry[idx].key);
+ 		nmap->entry[idx].key = NULL;
+@@ -496,7 +498,7 @@ nsim_bpf_map_alloc(struct netdevsim *ns, struct bpf_offloaded_map *offmap)
+ 	if (offmap->map.map_flags)
+ 		return -EINVAL;
+ 
+-	nmap = kzalloc(sizeof(*nmap), GFP_USER);
++	nmap = kzalloc(sizeof(*nmap), GFP_KERNEL_ACCOUNT);
+ 	if (!nmap)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index 0ac4f59e3f186..79a53fe245e5c 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -1796,7 +1796,7 @@ static const struct driver_info ax88179_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1809,7 +1809,7 @@ static const struct driver_info ax88178a_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1822,7 +1822,7 @@ static const struct driver_info cypress_GX3_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1835,7 +1835,7 @@ static const struct driver_info dlink_dub1312_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1848,7 +1848,7 @@ static const struct driver_info sitecom_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1861,7 +1861,7 @@ static const struct driver_info samsung_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1874,7 +1874,7 @@ static const struct driver_info lenovo_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset = ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1887,7 +1887,7 @@ static const struct driver_info belkin_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset	= ax88179_reset,
+ 	.stop	= ax88179_stop,
+-	.flags	= FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++	.flags	= FLAG_ETHER | FLAG_FRAMING_AX,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1900,7 +1900,7 @@ static const struct driver_info toshiba_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset	= ax88179_reset,
+ 	.stop = ax88179_stop,
+-	.flags	= FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++	.flags	= FLAG_ETHER | FLAG_FRAMING_AX,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1913,7 +1913,7 @@ static const struct driver_info mct_info = {
+ 	.link_reset = ax88179_link_reset,
+ 	.reset	= ax88179_reset,
+ 	.stop	= ax88179_stop,
+-	.flags	= FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++	.flags	= FLAG_ETHER | FLAG_FRAMING_AX,
+ 	.rx_fixup = ax88179_rx_fixup,
+ 	.tx_fixup = ax88179_tx_fixup,
+ };
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index e5b7448511467..65d42f5d42a3c 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -564,16 +564,12 @@ static int smsc95xx_phy_update_flowcontrol(struct usbnet *dev)
+ 	return smsc95xx_write_reg(dev, AFC_CFG, afc_cfg);
+ }
+ 
+-static int smsc95xx_link_reset(struct usbnet *dev)
++static void smsc95xx_mac_update_fullduplex(struct usbnet *dev)
+ {
+ 	struct smsc95xx_priv *pdata = dev->driver_priv;
+ 	unsigned long flags;
+ 	int ret;
+ 
+-	ret = smsc95xx_write_reg(dev, INT_STS, INT_STS_CLEAR_ALL_);
+-	if (ret < 0)
+-		return ret;
+-
+ 	spin_lock_irqsave(&pdata->mac_cr_lock, flags);
+ 	if (pdata->phydev->duplex != DUPLEX_FULL) {
+ 		pdata->mac_cr &= ~MAC_CR_FDPX_;
+@@ -585,14 +581,16 @@ static int smsc95xx_link_reset(struct usbnet *dev)
+ 	spin_unlock_irqrestore(&pdata->mac_cr_lock, flags);
+ 
+ 	ret = smsc95xx_write_reg(dev, MAC_CR, pdata->mac_cr);
+-	if (ret < 0)
+-		return ret;
++	if (ret < 0) {
++		if (ret != -ENODEV)
++			netdev_warn(dev->net,
++				    "Error updating MAC full duplex mode\n");
++		return;
++	}
+ 
+ 	ret = smsc95xx_phy_update_flowcontrol(dev);
+ 	if (ret < 0)
+ 		netdev_warn(dev->net, "Error updating PHY flow control\n");
+-
+-	return ret;
+ }
+ 
+ static void smsc95xx_status(struct usbnet *dev, struct urb *urb)
+@@ -609,7 +607,7 @@ static void smsc95xx_status(struct usbnet *dev, struct urb *urb)
+ 	netif_dbg(dev, link, dev->net, "intdata: 0x%08X\n", intdata);
+ 
+ 	if (intdata & INT_ENP_PHY_INT_)
+-		usbnet_defer_kevent(dev, EVENT_LINK_RESET);
++		;
+ 	else
+ 		netdev_warn(dev->net, "unexpected interrupt, intdata=0x%08X\n",
+ 			    intdata);
+@@ -1066,6 +1064,7 @@ static void smsc95xx_handle_link_change(struct net_device *net)
+ 	struct usbnet *dev = netdev_priv(net);
+ 
+ 	phy_print_status(net->phydev);
++	smsc95xx_mac_update_fullduplex(dev);
+ 	usbnet_defer_kevent(dev, EVENT_LINK_CHANGE);
+ }
+ 
+@@ -1972,7 +1971,6 @@ static const struct driver_info smsc95xx_info = {
+ 	.description	= "smsc95xx USB 2.0 Ethernet",
+ 	.bind		= smsc95xx_bind,
+ 	.unbind		= smsc95xx_unbind,
+-	.link_reset	= smsc95xx_link_reset,
+ 	.reset		= smsc95xx_reset,
+ 	.check_connect	= smsc95xx_start_phy,
+ 	.stop		= smsc95xx_stop,
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 58dd77efcaade..1239fd57514bb 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -830,13 +830,11 @@ int usbnet_stop (struct net_device *net)
+ 
+ 	mpn = !test_and_clear_bit(EVENT_NO_RUNTIME_PM, &dev->flags);
+ 
+-	/* deferred work (task, timer, softirq) must also stop.
+-	 * can't flush_scheduled_work() until we drop rtnl (later),
+-	 * else workers could deadlock; so make workers a NOP.
+-	 */
++	/* deferred work (timer, softirq, task) must also stop */
+ 	dev->flags = 0;
+ 	del_timer_sync (&dev->delay);
+ 	tasklet_kill (&dev->bh);
++	cancel_work_sync(&dev->kevent);
+ 	if (!pm)
+ 		usb_autopm_put_interface(dev->intf);
+ 
+@@ -1585,8 +1583,6 @@ void usbnet_disconnect (struct usb_interface *intf)
+ 	net = dev->net;
+ 	unregister_netdev (net);
+ 
+-	cancel_work_sync(&dev->kevent);
+-
+ 	usb_scuttle_anchored_urbs(&dev->deferred);
+ 
+ 	if (dev->driver_info->unbind)
+diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
+index 9a4c8ff32d9dd..5bf7822c53f18 100644
+--- a/drivers/net/wireguard/allowedips.c
++++ b/drivers/net/wireguard/allowedips.c
+@@ -6,6 +6,8 @@
+ #include "allowedips.h"
+ #include "peer.h"
+ 
++enum { MAX_ALLOWEDIPS_BITS = 128 };
++
+ static struct kmem_cache *node_cache;
+ 
+ static void swap_endian(u8 *dst, const u8 *src, u8 bits)
+@@ -40,7 +42,8 @@ static void push_rcu(struct allowedips_node **stack,
+ 		     struct allowedips_node __rcu *p, unsigned int *len)
+ {
+ 	if (rcu_access_pointer(p)) {
+-		WARN_ON(IS_ENABLED(DEBUG) && *len >= 128);
++		if (WARN_ON(IS_ENABLED(DEBUG) && *len >= MAX_ALLOWEDIPS_BITS))
++			return;
+ 		stack[(*len)++] = rcu_dereference_raw(p);
+ 	}
+ }
+@@ -52,7 +55,7 @@ static void node_free_rcu(struct rcu_head *rcu)
+ 
+ static void root_free_rcu(struct rcu_head *rcu)
+ {
+-	struct allowedips_node *node, *stack[128] = {
++	struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_BITS] = {
+ 		container_of(rcu, struct allowedips_node, rcu) };
+ 	unsigned int len = 1;
+ 
+@@ -65,7 +68,7 @@ static void root_free_rcu(struct rcu_head *rcu)
+ 
+ static void root_remove_peer_lists(struct allowedips_node *root)
+ {
+-	struct allowedips_node *node, *stack[128] = { root };
++	struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_BITS] = { root };
+ 	unsigned int len = 1;
+ 
+ 	while (len > 0 && (node = stack[--len])) {
+diff --git a/drivers/net/wireguard/selftest/allowedips.c b/drivers/net/wireguard/selftest/allowedips.c
+index e173204ae7d78..41db10f9be498 100644
+--- a/drivers/net/wireguard/selftest/allowedips.c
++++ b/drivers/net/wireguard/selftest/allowedips.c
+@@ -593,10 +593,10 @@ bool __init wg_allowedips_selftest(void)
+ 	wg_allowedips_remove_by_peer(&t, a, &mutex);
+ 	test_negative(4, a, 192, 168, 0, 1);
+ 
+-	/* These will hit the WARN_ON(len >= 128) in free_node if something
+-	 * goes wrong.
++	/* These will hit the WARN_ON(len >= MAX_ALLOWEDIPS_BITS) in free_node
++	 * if something goes wrong.
+ 	 */
+-	for (i = 0; i < 128; ++i) {
++	for (i = 0; i < MAX_ALLOWEDIPS_BITS; ++i) {
+ 		part = cpu_to_be64(~(1LLU << (i % 64)));
+ 		memset(&ip, 0xff, 16);
+ 		memcpy((u8 *)&ip + (i < 64) * 8, &part, 8);
+diff --git a/drivers/net/wireguard/selftest/ratelimiter.c b/drivers/net/wireguard/selftest/ratelimiter.c
+index 007cd4457c5f6..ba87d294604fe 100644
+--- a/drivers/net/wireguard/selftest/ratelimiter.c
++++ b/drivers/net/wireguard/selftest/ratelimiter.c
+@@ -6,28 +6,29 @@
+ #ifdef DEBUG
+ 
+ #include <linux/jiffies.h>
++#include <linux/hrtimer.h>
+ 
+ static const struct {
+ 	bool result;
+-	unsigned int msec_to_sleep_before;
++	u64 nsec_to_sleep_before;
+ } expected_results[] __initconst = {
+ 	[0 ... PACKETS_BURSTABLE - 1] = { true, 0 },
+ 	[PACKETS_BURSTABLE] = { false, 0 },
+-	[PACKETS_BURSTABLE + 1] = { true, MSEC_PER_SEC / PACKETS_PER_SECOND },
++	[PACKETS_BURSTABLE + 1] = { true, NSEC_PER_SEC / PACKETS_PER_SECOND },
+ 	[PACKETS_BURSTABLE + 2] = { false, 0 },
+-	[PACKETS_BURSTABLE + 3] = { true, (MSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
++	[PACKETS_BURSTABLE + 3] = { true, (NSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
+ 	[PACKETS_BURSTABLE + 4] = { true, 0 },
+ 	[PACKETS_BURSTABLE + 5] = { false, 0 }
+ };
+ 
+ static __init unsigned int maximum_jiffies_at_index(int index)
+ {
+-	unsigned int total_msecs = 2 * MSEC_PER_SEC / PACKETS_PER_SECOND / 3;
++	u64 total_nsecs = 2 * NSEC_PER_SEC / PACKETS_PER_SECOND / 3;
+ 	int i;
+ 
+ 	for (i = 0; i <= index; ++i)
+-		total_msecs += expected_results[i].msec_to_sleep_before;
+-	return msecs_to_jiffies(total_msecs);
++		total_nsecs += expected_results[i].nsec_to_sleep_before;
++	return nsecs_to_jiffies(total_nsecs);
+ }
+ 
+ static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
+@@ -42,8 +43,12 @@ static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
+ 	loop_start_time = jiffies;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(expected_results); ++i) {
+-		if (expected_results[i].msec_to_sleep_before)
+-			msleep(expected_results[i].msec_to_sleep_before);
++		if (expected_results[i].nsec_to_sleep_before) {
++			ktime_t timeout = ktime_add(ktime_add_ns(ktime_get_coarse_boottime(), TICK_NSEC * 4 / 3),
++						    ns_to_ktime(expected_results[i].nsec_to_sleep_before));
++			set_current_state(TASK_UNINTERRUPTIBLE);
++			schedule_hrtimeout_range_clock(&timeout, 0, HRTIMER_MODE_ABS, CLOCK_BOOTTIME);
++		}
+ 
+ 		if (time_is_before_jiffies(loop_start_time +
+ 					   maximum_jiffies_at_index(i)))
+@@ -127,7 +132,7 @@ bool __init wg_ratelimiter_selftest(void)
+ 	if (IS_ENABLED(CONFIG_KASAN) || IS_ENABLED(CONFIG_UBSAN))
+ 		return true;
+ 
+-	BUILD_BUG_ON(MSEC_PER_SEC % PACKETS_PER_SECOND != 0);
++	BUILD_BUG_ON(NSEC_PER_SEC % PACKETS_PER_SECOND != 0);
+ 
+ 	if (wg_ratelimiter_init())
+ 		goto out;
+@@ -176,7 +181,6 @@ bool __init wg_ratelimiter_selftest(void)
+ 				test += test_count;
+ 				goto err;
+ 			}
+-			msleep(500);
+ 			continue;
+ 		} else if (ret < 0) {
+ 			test += test_count;
+@@ -195,7 +199,6 @@ bool __init wg_ratelimiter_selftest(void)
+ 				test += test_count;
+ 				goto err;
+ 			}
+-			msleep(50);
+ 			continue;
+ 		}
+ 		test += test_count;
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index e5a296039f714..4870a3dab0ded 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -1205,13 +1205,12 @@ static void ath10k_snoc_init_napi(struct ath10k *ar)
+ static int ath10k_snoc_request_irq(struct ath10k *ar)
+ {
+ 	struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
+-	int irqflags = IRQF_TRIGGER_RISING;
+ 	int ret, id;
+ 
+ 	for (id = 0; id < CE_COUNT_MAX; id++) {
+ 		ret = request_irq(ar_snoc->ce_irqs[id].irq_line,
+-				  ath10k_snoc_per_engine_handler,
+-				  irqflags, ce_name[id], ar);
++				  ath10k_snoc_per_engine_handler, 0,
++				  ce_name[id], ar);
+ 		if (ret) {
+ 			ath10k_err(ar,
+ 				   "failed to register IRQ handler for CE %d: %d\n",
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index 28de2c7ae8991..473d92240a829 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -476,23 +476,23 @@ static int ath11k_core_pdev_create(struct ath11k_base *ab)
+ 		return ret;
+ 	}
+ 
+-	ret = ath11k_mac_register(ab);
++	ret = ath11k_dp_pdev_alloc(ab);
+ 	if (ret) {
+-		ath11k_err(ab, "failed register the radio with mac80211: %d\n", ret);
++		ath11k_err(ab, "failed to attach DP pdev: %d\n", ret);
+ 		goto err_pdev_debug;
+ 	}
+ 
+-	ret = ath11k_dp_pdev_alloc(ab);
++	ret = ath11k_mac_register(ab);
+ 	if (ret) {
+-		ath11k_err(ab, "failed to attach DP pdev: %d\n", ret);
+-		goto err_mac_unregister;
++		ath11k_err(ab, "failed register the radio with mac80211: %d\n", ret);
++		goto err_dp_pdev_free;
+ 	}
+ 
+ 	ret = ath11k_thermal_register(ab);
+ 	if (ret) {
+ 		ath11k_err(ab, "could not register thermal device: %d\n",
+ 			   ret);
+-		goto err_dp_pdev_free;
++		goto err_mac_unregister;
+ 	}
+ 
+ 	ret = ath11k_spectral_init(ab);
+@@ -505,10 +505,10 @@ static int ath11k_core_pdev_create(struct ath11k_base *ab)
+ 
+ err_thermal_unregister:
+ 	ath11k_thermal_unregister(ab);
+-err_dp_pdev_free:
+-	ath11k_dp_pdev_free(ab);
+ err_mac_unregister:
+ 	ath11k_mac_unregister(ab);
++err_dp_pdev_free:
++	ath11k_dp_pdev_free(ab);
+ err_pdev_debug:
+ 	ath11k_debugfs_pdev_destroy(ab);
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/debug.h b/drivers/net/wireless/ath/ath11k/debug.h
+index 659a275e2eb38..694ebba17fad7 100644
+--- a/drivers/net/wireless/ath/ath11k/debug.h
++++ b/drivers/net/wireless/ath/ath11k/debug.h
+@@ -23,8 +23,8 @@ enum ath11k_debug_mask {
+ 	ATH11K_DBG_TESTMODE	= 0x00000400,
+ 	ATH11k_DBG_HAL		= 0x00000800,
+ 	ATH11K_DBG_PCI		= 0x00001000,
+-	ATH11K_DBG_DP_TX	= 0x00001000,
+-	ATH11K_DBG_DP_RX	= 0x00002000,
++	ATH11K_DBG_DP_TX	= 0x00002000,
++	ATH11K_DBG_DP_RX	= 0x00004000,
+ 	ATH11K_DBG_ANY		= 0xffffffff,
+ };
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/htc.h b/drivers/net/wireless/ath/ath9k/htc.h
+index 6b45e63fae4ba..e3d546ef71ddc 100644
+--- a/drivers/net/wireless/ath/ath9k/htc.h
++++ b/drivers/net/wireless/ath/ath9k/htc.h
+@@ -327,11 +327,11 @@ static inline struct ath9k_htc_tx_ctl *HTC_SKB_CB(struct sk_buff *skb)
+ }
+ 
+ #ifdef CONFIG_ATH9K_HTC_DEBUGFS
+-
+-#define TX_STAT_INC(c) (hif_dev->htc_handle->drv_priv->debug.tx_stats.c++)
+-#define TX_STAT_ADD(c, a) (hif_dev->htc_handle->drv_priv->debug.tx_stats.c += a)
+-#define RX_STAT_INC(c) (hif_dev->htc_handle->drv_priv->debug.skbrx_stats.c++)
+-#define RX_STAT_ADD(c, a) (hif_dev->htc_handle->drv_priv->debug.skbrx_stats.c += a)
++#define __STAT_SAFE(expr) (hif_dev->htc_handle->drv_priv ? (expr) : 0)
++#define TX_STAT_INC(c) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.tx_stats.c++)
++#define TX_STAT_ADD(c, a) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.tx_stats.c += a)
++#define RX_STAT_INC(c) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.skbrx_stats.c++)
++#define RX_STAT_ADD(c, a) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.skbrx_stats.c += a)
+ #define CAB_STAT_INC   priv->debug.tx_stats.cab_queued++
+ 
+ #define TX_QSTAT_INC(q) (priv->debug.tx_stats.queue_stats[q]++)
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+index ff61ae34ecdf0..07ac88fb1c577 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+@@ -944,7 +944,6 @@ int ath9k_htc_probe_device(struct htc_target *htc_handle, struct device *dev,
+ 	priv->hw = hw;
+ 	priv->htc = htc_handle;
+ 	priv->dev = dev;
+-	htc_handle->drv_priv = priv;
+ 	SET_IEEE80211_DEV(hw, priv->dev);
+ 
+ 	ret = ath9k_htc_wait_for_target(priv);
+@@ -965,6 +964,8 @@ int ath9k_htc_probe_device(struct htc_target *htc_handle, struct device *dev,
+ 	if (ret)
+ 		goto err_init;
+ 
++	htc_handle->drv_priv = priv;
++
+ 	return 0;
+ 
+ err_init:
+diff --git a/drivers/net/wireless/ath/wil6210/debugfs.c b/drivers/net/wireless/ath/wil6210/debugfs.c
+index 2d618f90afa7b..cb40162bae995 100644
+--- a/drivers/net/wireless/ath/wil6210/debugfs.c
++++ b/drivers/net/wireless/ath/wil6210/debugfs.c
+@@ -1010,20 +1010,14 @@ static ssize_t wil_write_file_wmi(struct file *file, const char __user *buf,
+ 	void *cmd;
+ 	int cmdlen = len - sizeof(struct wmi_cmd_hdr);
+ 	u16 cmdid;
+-	int rc, rc1;
++	int rc1;
+ 
+-	if (cmdlen < 0)
++	if (cmdlen < 0 || *ppos != 0)
+ 		return -EINVAL;
+ 
+-	wmi = kmalloc(len, GFP_KERNEL);
+-	if (!wmi)
+-		return -ENOMEM;
+-
+-	rc = simple_write_to_buffer(wmi, len, ppos, buf, len);
+-	if (rc < 0) {
+-		kfree(wmi);
+-		return rc;
+-	}
++	wmi = memdup_user(buf, len);
++	if (IS_ERR(wmi))
++		return PTR_ERR(wmi);
+ 
+ 	cmd = (cmdlen > 0) ? &wmi[1] : NULL;
+ 	cmdid = le16_to_cpu(wmi->command_id);
+@@ -1033,7 +1027,7 @@ static ssize_t wil_write_file_wmi(struct file *file, const char __user *buf,
+ 
+ 	wil_info(wil, "0x%04x[%d] -> %d\n", cmdid, cmdlen, rc1);
+ 
+-	return rc;
++	return len;
+ }
+ 
+ static const struct file_operations fops_wmi = {
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-rs.c b/drivers/net/wireless/intel/iwlegacy/4965-rs.c
+index 9a491e5db75bd..532e3b91777d9 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-rs.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-rs.c
+@@ -2403,7 +2403,7 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ 		/* Repeat initial/next rate.
+ 		 * For legacy IL_NUMBER_TRY == 1, this loop will not execute.
+ 		 * For HT IL_HT_NUMBER_TRY == 3, this executes twice. */
+-		while (repeat_rate > 0 && idx < LINK_QUAL_MAX_RETRY_NUM) {
++		while (repeat_rate > 0) {
+ 			if (is_legacy(tbl_type.lq_type)) {
+ 				if (ant_toggle_cnt < NUM_TRY_BEFORE_ANT_TOGGLE)
+ 					ant_toggle_cnt++;
+@@ -2422,6 +2422,8 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ 			    cpu_to_le32(new_rate);
+ 			repeat_rate--;
+ 			idx++;
++			if (idx >= LINK_QUAL_MAX_RETRY_NUM)
++				goto out;
+ 		}
+ 
+ 		il4965_rs_get_tbl_info_from_mcs(new_rate, lq_sta->band,
+@@ -2466,6 +2468,7 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ 		repeat_rate--;
+ 	}
+ 
++out:
+ 	lq_cmd->agg_params.agg_frame_cnt_limit = LINK_QUAL_AGG_FRAME_LIMIT_DEF;
+ 	lq_cmd->agg_params.agg_dis_start_th = LINK_QUAL_AGG_DISABLE_START_DEF;
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index ef62839894c77..09f870c48a4f6 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -1840,6 +1840,7 @@ static void iwl_mvm_disable_sta_queues(struct iwl_mvm *mvm,
+ 			iwl_mvm_txq_from_mac80211(sta->txq[i]);
+ 
+ 		mvmtxq->txq_id = IWL_MVM_INVALID_QUEUE;
++		list_del_init(&mvmtxq->list);
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireless/intersil/p54/main.c b/drivers/net/wireless/intersil/p54/main.c
+index a3ca6620dc0c6..8fa3ec71603e3 100644
+--- a/drivers/net/wireless/intersil/p54/main.c
++++ b/drivers/net/wireless/intersil/p54/main.c
+@@ -682,7 +682,7 @@ static void p54_flush(struct ieee80211_hw *dev, struct ieee80211_vif *vif,
+ 	 * queues have already been stopped and no new frames can sneak
+ 	 * up from behind.
+ 	 */
+-	while ((total = p54_flush_count(priv) && i--)) {
++	while ((total = p54_flush_count(priv)) && i--) {
+ 		/* waste time */
+ 		msleep(20);
+ 	}
+diff --git a/drivers/net/wireless/intersil/p54/p54spi.c b/drivers/net/wireless/intersil/p54/p54spi.c
+index ab0fe85658518..cdb57819684ae 100644
+--- a/drivers/net/wireless/intersil/p54/p54spi.c
++++ b/drivers/net/wireless/intersil/p54/p54spi.c
+@@ -164,7 +164,7 @@ static int p54spi_request_firmware(struct ieee80211_hw *dev)
+ 
+ 	ret = p54_parse_firmware(dev, priv->firmware);
+ 	if (ret) {
+-		release_firmware(priv->firmware);
++		/* the firmware is released by the caller */
+ 		return ret;
+ 	}
+ 
+@@ -659,6 +659,7 @@ static int p54spi_probe(struct spi_device *spi)
+ 	return 0;
+ 
+ err_free_common:
++	release_firmware(priv->firmware);
+ 	free_irq(gpio_to_irq(p54spi_gpio_irq), spi);
+ err_free_gpio_irq:
+ 	gpio_free(p54spi_gpio_irq);
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index afd2d5add04b1..8e412125a49c1 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -593,7 +593,7 @@ struct mac80211_hwsim_data {
+ 	bool ps_poll_pending;
+ 	struct dentry *debugfs;
+ 
+-	uintptr_t pending_cookie;
++	atomic_t pending_cookie;
+ 	struct sk_buff_head pending;	/* packets pending */
+ 	/*
+ 	 * Only radios in the same group can communicate together (the
+@@ -1269,8 +1269,7 @@ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw,
+ 		goto nla_put_failure;
+ 
+ 	/* We create a cookie to identify this skb */
+-	data->pending_cookie++;
+-	cookie = data->pending_cookie;
++	cookie = atomic_inc_return(&data->pending_cookie);
+ 	info->rate_driver_data[0] = (void *)cookie;
+ 	if (nla_put_u64_64bit(skb, HWSIM_ATTR_COOKIE, cookie, HWSIM_ATTR_PAD))
+ 		goto nla_put_failure;
+@@ -3508,6 +3507,7 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2,
+ 	const u8 *src;
+ 	unsigned int hwsim_flags;
+ 	int i;
++	unsigned long flags;
+ 	bool found = false;
+ 
+ 	if (!info->attrs[HWSIM_ATTR_ADDR_TRANSMITTER] ||
+@@ -3535,18 +3535,20 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2,
+ 	}
+ 
+ 	/* look for the skb matching the cookie passed back from user */
++	spin_lock_irqsave(&data2->pending.lock, flags);
+ 	skb_queue_walk_safe(&data2->pending, skb, tmp) {
+-		u64 skb_cookie;
++		uintptr_t skb_cookie;
+ 
+ 		txi = IEEE80211_SKB_CB(skb);
+-		skb_cookie = (u64)(uintptr_t)txi->rate_driver_data[0];
++		skb_cookie = (uintptr_t)txi->rate_driver_data[0];
+ 
+ 		if (skb_cookie == ret_skb_cookie) {
+-			skb_unlink(skb, &data2->pending);
++			__skb_unlink(skb, &data2->pending);
+ 			found = true;
+ 			break;
+ 		}
+ 	}
++	spin_unlock_irqrestore(&data2->pending.lock, flags);
+ 
+ 	/* not found */
+ 	if (!found)
+diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
+index 5d6dc1dd050d4..32fdc4150b605 100644
+--- a/drivers/net/wireless/marvell/libertas/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas/if_usb.c
+@@ -287,6 +287,7 @@ static int if_usb_probe(struct usb_interface *intf,
+ 	return 0;
+ 
+ err_get_fw:
++	usb_put_dev(udev);
+ 	lbs_remove_card(priv);
+ err_add_card:
+ 	if_usb_reset_device(cardp);
+diff --git a/drivers/net/wireless/marvell/mwifiex/main.h b/drivers/net/wireless/marvell/mwifiex/main.h
+index 5923c5c14c8df..f4e3dce10d654 100644
+--- a/drivers/net/wireless/marvell/mwifiex/main.h
++++ b/drivers/net/wireless/marvell/mwifiex/main.h
+@@ -1054,6 +1054,8 @@ struct mwifiex_adapter {
+ 	void *devdump_data;
+ 	int devdump_len;
+ 	struct timer_list devdump_timer;
++
++	bool ignore_btcoex_events;
+ };
+ 
+ void mwifiex_process_tx_queue(struct mwifiex_adapter *adapter);
+diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
+index 7c137eba8cda7..b0024893a1cba 100644
+--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
+@@ -3142,6 +3142,9 @@ static int mwifiex_init_pcie(struct mwifiex_adapter *adapter)
+ 	if (ret)
+ 		goto err_alloc_buffers;
+ 
++	if (pdev->device == PCIE_DEVICE_ID_MARVELL_88W8897)
++		adapter->ignore_btcoex_events = true;
++
+ 	return 0;
+ 
+ err_alloc_buffers:
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_event.c b/drivers/net/wireless/marvell/mwifiex/sta_event.c
+index 753458628f86a..05073a49ab5fe 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_event.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_event.c
+@@ -1061,6 +1061,9 @@ int mwifiex_process_sta_event(struct mwifiex_private *priv)
+ 		break;
+ 	case EVENT_BT_COEX_WLAN_PARA_CHANGE:
+ 		dev_dbg(adapter->dev, "EVENT: BT coex wlan param update\n");
++		if (adapter->ignore_btcoex_events)
++			break;
++
+ 		mwifiex_bt_coex_wlan_param_update_event(priv,
+ 							adapter->event_skb);
+ 		break;
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index 466447a5184f8..81ff3b4c6c1b3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -107,6 +107,7 @@ static int mt76_led_init(struct mt76_dev *dev)
+ 		if (!of_property_read_u32(np, "led-sources", &led_pin))
+ 			dev->led_pin = led_pin;
+ 		dev->led_al = of_property_read_bool(np, "led-active-low");
++		of_node_put(np);
+ 	}
+ 
+ 	return led_classdev_register(dev->dev, &dev->led_cdev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
+index e43d13d7c9881..2dad61fd451fb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
+@@ -108,7 +108,7 @@ __mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb,
+ 	ret = mt76u_bulk_msg(dev, skb->data, skb->len, NULL, 500,
+ 			     MT_EP_OUT_INBAND_CMD);
+ 	if (ret)
+-		return ret;
++		goto out;
+ 
+ 	if (wait_resp)
+ 		ret = mt76x02u_mcu_wait_resp(dev, seq);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/debug.c b/drivers/net/wireless/realtek/rtlwifi/debug.c
+index 901cdfe3723cf..0b1bc04cb6adb 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/debug.c
++++ b/drivers/net/wireless/realtek/rtlwifi/debug.c
+@@ -329,8 +329,8 @@ static ssize_t rtl_debugfs_set_write_h2c(struct file *filp,
+ 
+ 	tmp_len = (count > sizeof(tmp) - 1 ? sizeof(tmp) - 1 : count);
+ 
+-	if (!buffer || copy_from_user(tmp, buffer, tmp_len))
+-		return count;
++	if (copy_from_user(tmp, buffer, tmp_len))
++		return -EFAULT;
+ 
+ 	tmp[tmp_len] = '\0';
+ 
+@@ -340,8 +340,8 @@ static ssize_t rtl_debugfs_set_write_h2c(struct file *filp,
+ 			 &h2c_data[4], &h2c_data[5],
+ 			 &h2c_data[6], &h2c_data[7]);
+ 
+-	if (h2c_len <= 0)
+-		return count;
++	if (h2c_len == 0)
++		return -EINVAL;
+ 
+ 	for (i = 0; i < h2c_len; i++)
+ 		h2c_data_packed[i] = (u8)h2c_data[i];
+diff --git a/drivers/nvme/host/trace.h b/drivers/nvme/host/trace.h
+index 35bac7a254227..aa8b0f86b2be1 100644
+--- a/drivers/nvme/host/trace.h
++++ b/drivers/nvme/host/trace.h
+@@ -98,7 +98,7 @@ TRACE_EVENT(nvme_complete_rq,
+ 	    TP_fast_assign(
+ 		__entry->ctrl_id = nvme_req(req)->ctrl->instance;
+ 		__entry->qid = nvme_req_qid(req);
+-		__entry->cid = req->tag;
++		__entry->cid = nvme_req(req)->cmd->common.command_id;
+ 		__entry->result = le64_to_cpu(nvme_req(req)->result.u64);
+ 		__entry->retries = nvme_req(req)->retries;
+ 		__entry->flags = nvme_req(req)->flags;
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 903b465c8568b..7ed605ffb7171 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -2052,8 +2052,8 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev,
+ 		}
+ 
+ 		virt_dev = dev_pm_domain_attach_by_name(dev, *name);
+-		if (IS_ERR(virt_dev)) {
+-			ret = PTR_ERR(virt_dev);
++		if (IS_ERR_OR_NULL(virt_dev)) {
++			ret = PTR_ERR(virt_dev) ? : -ENODEV;
+ 			dev_err(dev, "Couldn't attach to pm_domain: %d\n", ret);
+ 			goto err;
+ 		}
+diff --git a/drivers/parisc/lba_pci.c b/drivers/parisc/lba_pci.c
+index 732b516c7bf84..afc6e66ddc31c 100644
+--- a/drivers/parisc/lba_pci.c
++++ b/drivers/parisc/lba_pci.c
+@@ -1476,9 +1476,13 @@ lba_driver_probe(struct parisc_device *dev)
+ 	u32 func_class;
+ 	void *tmp_obj;
+ 	char *version;
+-	void __iomem *addr = ioremap(dev->hpa.start, 4096);
++	void __iomem *addr;
+ 	int max;
+ 
++	addr = ioremap(dev->hpa.start, 4096);
++	if (addr == NULL)
++		return -ENOMEM;
++
+ 	/* Read HW Rev First */
+ 	func_class = READ_REG32(addr + LBA_FCLASS);
+ 
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index ad7da4ea43a5e..95ed719402d75 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -773,8 +773,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
+ 	ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys,
+ 					     epc->mem->window.page_size);
+ 	if (!ep->msi_mem) {
++		ret = -ENOMEM;
+ 		dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n");
+-		return -ENOMEM;
++		goto err_exit_epc_mem;
+ 	}
+ 
+ 	if (ep->ops->get_features) {
+@@ -783,6 +784,19 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
+ 			return 0;
+ 	}
+ 
+-	return dw_pcie_ep_init_complete(ep);
++	ret = dw_pcie_ep_init_complete(ep);
++	if (ret)
++		goto err_free_epc_mem;
++
++	return 0;
++
++err_free_epc_mem:
++	pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem,
++			      epc->mem->window.page_size);
++
++err_exit_epc_mem:
++	pci_epc_mem_exit(epc);
++
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(dw_pcie_ep_init);
+diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
+index c2dea8fc97c8f..2b74ff88c5c56 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.c
++++ b/drivers/pci/controller/dwc/pcie-designware.c
+@@ -439,7 +439,7 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
+ void dw_pcie_disable_atu(struct dw_pcie *pci, int index,
+ 			 enum dw_pcie_region_type type)
+ {
+-	int region;
++	u32 region;
+ 
+ 	switch (type) {
+ 	case DW_PCIE_REGION_INBOUND:
+@@ -452,8 +452,18 @@ void dw_pcie_disable_atu(struct dw_pcie *pci, int index,
+ 		return;
+ 	}
+ 
+-	dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, region | index);
+-	dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, ~(u32)PCIE_ATU_ENABLE);
++	if (pci->iatu_unroll_enabled) {
++		if (region == PCIE_ATU_REGION_INBOUND) {
++			dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
++						 ~(u32)PCIE_ATU_ENABLE);
++		} else {
++			dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
++						 ~(u32)PCIE_ATU_ENABLE);
++		}
++	} else {
++		dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, region | index);
++		dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, ~(u32)PCIE_ATU_ENABLE);
++	}
+ }
+ 
+ int dw_pcie_wait_for_link(struct dw_pcie *pci)
+@@ -588,6 +598,13 @@ void dw_pcie_setup(struct dw_pcie *pci)
+ 	val |= PORT_LINK_DLL_LINK_EN;
+ 	dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);
+ 
++	if (of_property_read_bool(np, "snps,enable-cdm-check")) {
++		val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
++		val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS |
++		       PCIE_PL_CHK_REG_CHK_REG_START;
++		dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
++	}
++
+ 	of_property_read_u32(np, "num-lanes", &pci->num_lanes);
+ 	if (!pci->num_lanes) {
+ 		dev_dbg(pci->dev, "Using h/w default number of lanes\n");
+@@ -634,11 +651,4 @@ void dw_pcie_setup(struct dw_pcie *pci)
+ 		break;
+ 	}
+ 	dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
+-
+-	if (of_property_read_bool(np, "snps,enable-cdm-check")) {
+-		val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
+-		val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS |
+-		       PCIE_PL_CHK_REG_CHK_REG_START;
+-		dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
+-	}
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 1b8b3c12eeced..5fbd80908a99a 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -320,8 +320,6 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ 	reset_control_assert(res->ext_reset);
+ 	reset_control_assert(res->phy_reset);
+ 
+-	writel(1, pcie->parf + PCIE20_PARF_PHY_CTRL);
+-
+ 	ret = regulator_bulk_enable(ARRAY_SIZE(res->supplies), res->supplies);
+ 	if (ret < 0) {
+ 		dev_err(dev, "cannot enable regulators\n");
+@@ -364,15 +362,15 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ 		goto err_deassert_axi;
+ 	}
+ 
+-	ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
+-	if (ret)
+-		goto err_clks;
+-
+ 	/* enable PCIe clocks and resets */
+ 	val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL);
+ 	val &= ~BIT(0);
+ 	writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
+ 
++	ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
++	if (ret)
++		goto err_clks;
++
+ 	if (of_device_is_compatible(node, "qcom,pcie-ipq8064") ||
+ 	    of_device_is_compatible(node, "qcom,pcie-ipq8064-v2")) {
+ 		writel(PCS_DEEMPH_TX_DEEMPH_GEN1(24) |
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index a5b677ec07690..1222f5749bc67 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -370,15 +370,14 @@ static irqreturn_t tegra_pcie_rp_irq_handler(int irq, void *arg)
+ 	struct tegra_pcie_dw *pcie = arg;
+ 	struct dw_pcie *pci = &pcie->pci;
+ 	struct pcie_port *pp = &pci->pp;
+-	u32 val, tmp;
++	u32 val, status_l0, status_l1;
+ 	u16 val_w;
+ 
+-	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
+-	if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
+-		val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
+-		if (val & APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED) {
+-			appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0);
+-
++	status_l0 = appl_readl(pcie, APPL_INTR_STATUS_L0);
++	if (status_l0 & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
++		status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
++		appl_writel(pcie, status_l1, APPL_INTR_STATUS_L1_0_0);
++		if (status_l1 & APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED) {
+ 			/* SBR & Surprise Link Down WAR */
+ 			val = appl_readl(pcie, APPL_CAR_RESET_OVRD);
+ 			val &= ~APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N;
+@@ -394,15 +393,15 @@ static irqreturn_t tegra_pcie_rp_irq_handler(int irq, void *arg)
+ 		}
+ 	}
+ 
+-	if (val & APPL_INTR_STATUS_L0_INT_INT) {
+-		val = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0);
+-		if (val & APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS) {
++	if (status_l0 & APPL_INTR_STATUS_L0_INT_INT) {
++		status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0);
++		if (status_l1 & APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS) {
+ 			appl_writel(pcie,
+ 				    APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS,
+ 				    APPL_INTR_STATUS_L1_8_0);
+ 			apply_bad_link_workaround(pp);
+ 		}
+-		if (val & APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS) {
++		if (status_l1 & APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS) {
+ 			appl_writel(pcie,
+ 				    APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS,
+ 				    APPL_INTR_STATUS_L1_8_0);
+@@ -414,25 +413,24 @@ static irqreturn_t tegra_pcie_rp_irq_handler(int irq, void *arg)
+ 		}
+ 	}
+ 
+-	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
+-	if (val & APPL_INTR_STATUS_L0_CDM_REG_CHK_INT) {
+-		val = appl_readl(pcie, APPL_INTR_STATUS_L1_18);
+-		tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
+-		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT) {
++	if (status_l0 & APPL_INTR_STATUS_L0_CDM_REG_CHK_INT) {
++		status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_18);
++		val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
++		if (status_l1 & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT) {
+ 			dev_info(pci->dev, "CDM check complete\n");
+-			tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPLETE;
++			val |= PCIE_PL_CHK_REG_CHK_REG_COMPLETE;
+ 		}
+-		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR) {
++		if (status_l1 & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR) {
+ 			dev_err(pci->dev, "CDM comparison mismatch\n");
+-			tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR;
++			val |= PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR;
+ 		}
+-		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR) {
++		if (status_l1 & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR) {
+ 			dev_err(pci->dev, "CDM Logic error\n");
+-			tmp |= PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR;
++			val |= PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR;
+ 		}
+-		dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, tmp);
+-		tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR);
+-		dev_err(pci->dev, "CDM Error Address Offset = 0x%08X\n", tmp);
++		dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
++		val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR);
++		dev_err(pci->dev, "CDM Error Address Offset = 0x%08X\n", val);
+ 	}
+ 
+ 	return IRQ_HANDLED;
+@@ -965,7 +963,7 @@ static int tegra_pcie_dw_host_init(struct pcie_port *pp)
+ 		offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF);
+ 		val = dw_pcie_readl_dbi(pci, offset + PCI_DLF_CAP);
+ 		val &= ~PCI_DLF_EXCHANGE_ENABLE;
+-		dw_pcie_writel_dbi(pci, offset, val);
++		dw_pcie_writel_dbi(pci, offset + PCI_DLF_CAP, val);
+ 
+ 		tegra_pcie_prepare_host(pp);
+ 
+@@ -1970,6 +1968,7 @@ static int tegra_pcie_config_ep(struct tegra_pcie_dw *pcie,
+ 	if (ret) {
+ 		dev_err(dev, "Failed to initialize DWC Endpoint subsystem: %d\n",
+ 			ret);
++		pm_runtime_disable(dev);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index 262b2c4c70c9f..ddfeca9016a02 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -623,7 +623,6 @@ static void pci_epf_test_unbind(struct pci_epf *epf)
+ 
+ 	cancel_delayed_work(&epf_test->cmd_handler);
+ 	pci_epf_test_clean_dma_chan(epf_test);
+-	pci_epc_stop(epc);
+ 	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
+ 		epf_bar = &epf->bar[bar];
+ 
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 4084764bf0b1b..0039460c6ab02 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -559,8 +559,8 @@ static inline int pci_dev_specific_disable_acs_redir(struct pci_dev *dev)
+ 
+ /* PCI error reporting and recovery */
+ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
+-			pci_channel_state_t state,
+-			pci_ers_result_t (*reset_link)(struct pci_dev *pdev));
++		pci_channel_state_t state,
++		pci_ers_result_t (*reset_subordinates)(struct pci_dev *pdev));
+ 
+ bool pcie_wait_for_link(struct pci_dev *pdev, bool active);
+ #ifdef CONFIG_PCIEASPM
+diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
+index c40546eeecb39..9564b74003f0f 100644
+--- a/drivers/pci/pcie/aer.c
++++ b/drivers/pci/pcie/aer.c
+@@ -305,7 +305,8 @@ int pci_aer_raw_clear_status(struct pci_dev *dev)
+ 		return -EIO;
+ 
+ 	port_type = pci_pcie_type(dev);
+-	if (port_type == PCI_EXP_TYPE_ROOT_PORT) {
++	if (port_type == PCI_EXP_TYPE_ROOT_PORT ||
++	    port_type == PCI_EXP_TYPE_RC_EC) {
+ 		pci_read_config_dword(dev, aer + PCI_ERR_ROOT_STATUS, &status);
+ 		pci_write_config_dword(dev, aer + PCI_ERR_ROOT_STATUS, status);
+ 	}
+@@ -537,7 +538,7 @@ static const char *aer_agent_string[] = {
+ 	struct pci_dev *pdev = to_pci_dev(dev);				\
+ 	u64 *stats = pdev->aer_stats->stats_array;			\
+ 									\
+-	for (i = 0; i < ARRAY_SIZE(strings_array); i++) {		\
++	for (i = 0; i < ARRAY_SIZE(pdev->aer_stats->stats_array); i++) {\
+ 		if (strings_array[i])					\
+ 			str += sprintf(str, "%s %llu\n",		\
+ 				       strings_array[i], stats[i]);	\
+@@ -600,7 +601,8 @@ static umode_t aer_stats_attrs_are_visible(struct kobject *kobj,
+ 	if ((a == &dev_attr_aer_rootport_total_err_cor.attr ||
+ 	     a == &dev_attr_aer_rootport_total_err_fatal.attr ||
+ 	     a == &dev_attr_aer_rootport_total_err_nonfatal.attr) &&
+-	    pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT)
++	    ((pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT) &&
++	     (pci_pcie_type(pdev) != PCI_EXP_TYPE_RC_EC)))
+ 		return 0;
+ 
+ 	return a->mode;
+@@ -1039,6 +1041,7 @@ EXPORT_SYMBOL_GPL(aer_recover_queue);
+  */
+ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
+ {
++	int type = pci_pcie_type(dev);
+ 	int aer = dev->aer_cap;
+ 	int temp;
+ 
+@@ -1057,8 +1060,8 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
+ 			&info->mask);
+ 		if (!(info->status & ~info->mask))
+ 			return 0;
+-	} else if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
+-	           pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
++	} else if (type == PCI_EXP_TYPE_ROOT_PORT ||
++		   type == PCI_EXP_TYPE_DOWNSTREAM ||
+ 		   info->severity == AER_NONFATAL) {
+ 
+ 		/* Link is still healthy for IO reads */
+@@ -1210,6 +1213,7 @@ static int set_device_error_reporting(struct pci_dev *dev, void *data)
+ 	int type = pci_pcie_type(dev);
+ 
+ 	if ((type == PCI_EXP_TYPE_ROOT_PORT) ||
++	    (type == PCI_EXP_TYPE_RC_EC) ||
+ 	    (type == PCI_EXP_TYPE_UPSTREAM) ||
+ 	    (type == PCI_EXP_TYPE_DOWNSTREAM)) {
+ 		if (enable)
+@@ -1334,6 +1338,16 @@ static int aer_probe(struct pcie_device *dev)
+ 	struct device *device = &dev->device;
+ 	struct pci_dev *port = dev->port;
+ 
++	BUILD_BUG_ON(ARRAY_SIZE(aer_correctable_error_string) <
++		     AER_MAX_TYPEOF_COR_ERRS);
++	BUILD_BUG_ON(ARRAY_SIZE(aer_uncorrectable_error_string) <
++		     AER_MAX_TYPEOF_UNCOR_ERRS);
++
++	/* Limit to Root Ports or Root Complex Event Collectors */
++	if ((pci_pcie_type(port) != PCI_EXP_TYPE_RC_EC) &&
++	    (pci_pcie_type(port) != PCI_EXP_TYPE_ROOT_PORT))
++		return -ENODEV;
++
+ 	rpc = devm_kzalloc(device, sizeof(struct aer_rpc), GFP_KERNEL);
+ 	if (!rpc)
+ 		return -ENOMEM;
+@@ -1355,41 +1369,60 @@ static int aer_probe(struct pcie_device *dev)
+ }
+ 
+ /**
+- * aer_root_reset - reset link on Root Port
+- * @dev: pointer to Root Port's pci_dev data structure
++ * aer_root_reset - reset Root Port hierarchy or RCEC
++ * @dev: pointer to Root Port or RCEC
+  *
+- * Invoked by Port Bus driver when performing link reset at Root Port.
++ * Invoked by Port Bus driver when performing reset.
+  */
+ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
+ {
+-	int aer = dev->aer_cap;
++	int type = pci_pcie_type(dev);
++	struct pci_dev *root;
++	int aer;
++	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
+ 	u32 reg32;
+ 	int rc;
+ 
++	root = dev;	/* device with Root Error registers */
++	aer = root->aer_cap;
+ 
+-	/* Disable Root's interrupt in response to error messages */
+-	pci_read_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, &reg32);
+-	reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK;
+-	pci_write_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, reg32);
++	if ((host->native_aer || pcie_ports_native) && aer) {
++		/* Disable Root's interrupt in response to error messages */
++		pci_read_config_dword(root, aer + PCI_ERR_ROOT_COMMAND, &reg32);
++		reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK;
++		pci_write_config_dword(root, aer + PCI_ERR_ROOT_COMMAND, reg32);
++	}
+ 
+-	rc = pci_bus_error_reset(dev);
+-	pci_info(dev, "Root Port link has been reset\n");
++	if (type == PCI_EXP_TYPE_RC_EC) {
++		if (pcie_has_flr(dev)) {
++			rc = pcie_flr(dev);
++			pci_info(dev, "has been reset (%d)\n", rc);
++		} else {
++			pci_info(dev, "not reset (no FLR support)\n");
++			rc = -ENOTTY;
++		}
++	} else {
++		rc = pci_bus_error_reset(dev);
++		pci_info(dev, "Root Port link has been reset (%d)\n", rc);
++	}
+ 
+-	/* Clear Root Error Status */
+-	pci_read_config_dword(dev, aer + PCI_ERR_ROOT_STATUS, &reg32);
+-	pci_write_config_dword(dev, aer + PCI_ERR_ROOT_STATUS, reg32);
++	if ((host->native_aer || pcie_ports_native) && aer) {
++		/* Clear Root Error Status */
++		pci_read_config_dword(root, aer + PCI_ERR_ROOT_STATUS, &reg32);
++		pci_write_config_dword(root, aer + PCI_ERR_ROOT_STATUS, reg32);
+ 
+-	/* Enable Root Port's interrupt in response to error messages */
+-	pci_read_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, &reg32);
+-	reg32 |= ROOT_PORT_INTR_ON_MESG_MASK;
+-	pci_write_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, reg32);
++		/* Enable Root Port's interrupt in response to error messages */
++		pci_read_config_dword(root, aer + PCI_ERR_ROOT_COMMAND, &reg32);
++		reg32 |= ROOT_PORT_INTR_ON_MESG_MASK;
++		pci_write_config_dword(root, aer + PCI_ERR_ROOT_COMMAND, reg32);
++	}
+ 
+ 	return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
+ }
+ 
+ static struct pcie_port_service_driver aerdriver = {
+ 	.name		= "aer",
+-	.port_type	= PCI_EXP_TYPE_ROOT_PORT,
++	.port_type	= PCIE_ANY_PORT,
+ 	.service	= PCIE_PORT_SERVICE_AER,
+ 
+ 	.probe		= aer_probe,
+diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
+index c543f419d8f9e..d89d7ed70768c 100644
+--- a/drivers/pci/pcie/err.c
++++ b/drivers/pci/pcie/err.c
+@@ -146,38 +146,69 @@ out:
+ 	return 0;
+ }
+ 
++/**
++ * pci_walk_bridge - walk bridges potentially AER affected
++ * @bridge:	bridge which may be a Port or an RCEC
++ * @cb:		callback to be called for each device found
++ * @userdata:	arbitrary pointer to be passed to callback
++ *
++ * If the device provided is a bridge, walk the subordinate bus, including
++ * any bridged devices on buses under this bus.  Call the provided callback
++ * on each device found.
++ *
++ * If the device provided has no subordinate bus, e.g., an RCEC, call the
++ * callback on the device itself.
++ */
++static void pci_walk_bridge(struct pci_dev *bridge,
++			    int (*cb)(struct pci_dev *, void *),
++			    void *userdata)
++{
++	if (bridge->subordinate)
++		pci_walk_bus(bridge->subordinate, cb, userdata);
++	else
++		cb(bridge, userdata);
++}
++
+ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
+-			pci_channel_state_t state,
+-			pci_ers_result_t (*reset_link)(struct pci_dev *pdev))
++		pci_channel_state_t state,
++		pci_ers_result_t (*reset_subordinates)(struct pci_dev *pdev))
+ {
++	int type = pci_pcie_type(dev);
++	struct pci_dev *bridge;
+ 	pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
+-	struct pci_bus *bus;
+ 
+ 	/*
+-	 * Error recovery runs on all subordinates of the first downstream port.
+-	 * If the downstream port detected the error, it is cleared at the end.
++	 * If the error was detected by a Root Port, Downstream Port, or
++	 * RCEC, recovery runs on the device itself.  For Ports, that also
++	 * includes any subordinate devices.
++	 *
++	 * If it was detected by another device (Endpoint, etc), recovery
++	 * runs on the device and anything else under the same Port, i.e.,
++	 * everything under "bridge".
+ 	 */
+-	if (!(pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
+-	      pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM))
+-		dev = dev->bus->self;
+-	bus = dev->subordinate;
+-
+-	pci_dbg(dev, "broadcast error_detected message\n");
++	if (type == PCI_EXP_TYPE_ROOT_PORT ||
++	    type == PCI_EXP_TYPE_DOWNSTREAM ||
++	    type == PCI_EXP_TYPE_RC_EC)
++		bridge = dev;
++	else
++		bridge = pci_upstream_bridge(dev);
++
++	pci_dbg(bridge, "broadcast error_detected message\n");
+ 	if (state == pci_channel_io_frozen) {
+-		pci_walk_bus(bus, report_frozen_detected, &status);
+-		status = reset_link(dev);
++		pci_walk_bridge(bridge, report_frozen_detected, &status);
++		status = reset_subordinates(bridge);
+ 		if (status != PCI_ERS_RESULT_RECOVERED) {
+-			pci_warn(dev, "link reset failed\n");
++			pci_warn(bridge, "subordinate device reset failed\n");
+ 			goto failed;
+ 		}
+ 	} else {
+-		pci_walk_bus(bus, report_normal_detected, &status);
++		pci_walk_bridge(bridge, report_normal_detected, &status);
+ 	}
+ 
+ 	if (status == PCI_ERS_RESULT_CAN_RECOVER) {
+ 		status = PCI_ERS_RESULT_RECOVERED;
+-		pci_dbg(dev, "broadcast mmio_enabled message\n");
+-		pci_walk_bus(bus, report_mmio_enabled, &status);
++		pci_dbg(bridge, "broadcast mmio_enabled message\n");
++		pci_walk_bridge(bridge, report_mmio_enabled, &status);
+ 	}
+ 
+ 	if (status == PCI_ERS_RESULT_NEED_RESET) {
+@@ -187,27 +218,27 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
+ 		 * drivers' slot_reset callbacks?
+ 		 */
+ 		status = PCI_ERS_RESULT_RECOVERED;
+-		pci_dbg(dev, "broadcast slot_reset message\n");
+-		pci_walk_bus(bus, report_slot_reset, &status);
++		pci_dbg(bridge, "broadcast slot_reset message\n");
++		pci_walk_bridge(bridge, report_slot_reset, &status);
+ 	}
+ 
+ 	if (status != PCI_ERS_RESULT_RECOVERED)
+ 		goto failed;
+ 
+-	pci_dbg(dev, "broadcast resume message\n");
+-	pci_walk_bus(bus, report_resume, &status);
++	pci_dbg(bridge, "broadcast resume message\n");
++	pci_walk_bridge(bridge, report_resume, &status);
+ 
+-	if (pcie_aer_is_native(dev))
+-		pcie_clear_device_status(dev);
+-	pci_aer_clear_nonfatal_status(dev);
+-	pci_info(dev, "device recovery successful\n");
++	if (pcie_aer_is_native(bridge))
++		pcie_clear_device_status(bridge);
++	pci_aer_clear_nonfatal_status(bridge);
++	pci_info(bridge, "device recovery successful\n");
+ 	return status;
+ 
+ failed:
+-	pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT);
++	pci_uevent_ers(bridge, PCI_ERS_RESULT_DISCONNECT);
+ 
+ 	/* TODO: Should kernel panic here? */
+-	pci_info(dev, "device recovery failed\n");
++	pci_info(bridge, "device recovery failed\n");
+ 
+ 	return status;
+ }
+diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
+index 3779b264dbec3..5ae81f2df45f7 100644
+--- a/drivers/pci/pcie/portdrv_core.c
++++ b/drivers/pci/pcie/portdrv_core.c
+@@ -222,15 +222,8 @@ static int get_port_device_capability(struct pci_dev *dev)
+ 
+ #ifdef CONFIG_PCIEAER
+ 	if (dev->aer_cap && pci_aer_available() &&
+-	    (pcie_ports_native || host->native_aer)) {
++	    (pcie_ports_native || host->native_aer))
+ 		services |= PCIE_PORT_SERVICE_AER;
+-
+-		/*
+-		 * Disable AER on this port in case it's been enabled by the
+-		 * BIOS (the AER service driver will enable it when necessary).
+-		 */
+-		pci_disable_pcie_error_reporting(dev);
+-	}
+ #endif
+ 
+ 	/*
+diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c
+index d4559cf88f79d..aac1a6828b4f9 100644
+--- a/drivers/pci/pcie/portdrv_pci.c
++++ b/drivers/pci/pcie/portdrv_pci.c
+@@ -101,12 +101,14 @@ static const struct dev_pm_ops pcie_portdrv_pm_ops = {
+ static int pcie_portdrv_probe(struct pci_dev *dev,
+ 					const struct pci_device_id *id)
+ {
++	int type = pci_pcie_type(dev);
+ 	int status;
+ 
+ 	if (!pci_is_pcie(dev) ||
+-	    ((pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) &&
+-	     (pci_pcie_type(dev) != PCI_EXP_TYPE_UPSTREAM) &&
+-	     (pci_pcie_type(dev) != PCI_EXP_TYPE_DOWNSTREAM)))
++	    ((type != PCI_EXP_TYPE_ROOT_PORT) &&
++	     (type != PCI_EXP_TYPE_UPSTREAM) &&
++	     (type != PCI_EXP_TYPE_DOWNSTREAM) &&
++	     (type != PCI_EXP_TYPE_RC_EC)))
+ 		return -ENODEV;
+ 
+ 	status = pcie_port_device_register(dev);
+@@ -195,6 +197,8 @@ static const struct pci_device_id port_pci_ids[] = {
+ 	{ PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x00), ~0) },
+ 	/* subtractive decode PCI-to-PCI bridge, class type is 060401h */
+ 	{ PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x01), ~0) },
++	/* handle any Root Complex Event Collector */
++	{ PCI_DEVICE_CLASS(((PCI_CLASS_SYSTEM_RCEC << 8) | 0x00), ~0) },
+ 	{ },
+ };
+ 
+diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
+index cc00915ad6d19..6fbfcab4918cf 100644
+--- a/drivers/perf/arm_spe_pmu.c
++++ b/drivers/perf/arm_spe_pmu.c
+@@ -39,6 +39,24 @@
+ #include <asm/mmu.h>
+ #include <asm/sysreg.h>
+ 
++/*
++ * Cache if the event is allowed to trace Context information.
++ * This allows us to perform the check, i.e, perfmon_capable(),
++ * in the context of the event owner, once, during the event_init().
++ */
++#define SPE_PMU_HW_FLAGS_CX			BIT(0)
++
++static void set_spe_event_has_cx(struct perf_event *event)
++{
++	if (IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR) && perfmon_capable())
++		event->hw.flags |= SPE_PMU_HW_FLAGS_CX;
++}
++
++static bool get_spe_event_has_cx(struct perf_event *event)
++{
++	return !!(event->hw.flags & SPE_PMU_HW_FLAGS_CX);
++}
++
+ #define ARM_SPE_BUF_PAD_BYTE			0
+ 
+ struct arm_spe_pmu_buf {
+@@ -274,7 +292,7 @@ static u64 arm_spe_event_to_pmscr(struct perf_event *event)
+ 	if (!attr->exclude_kernel)
+ 		reg |= BIT(SYS_PMSCR_EL1_E1SPE_SHIFT);
+ 
+-	if (IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR) && perfmon_capable())
++	if (get_spe_event_has_cx(event))
+ 		reg |= BIT(SYS_PMSCR_EL1_CX_SHIFT);
+ 
+ 	return reg;
+@@ -699,10 +717,10 @@ static int arm_spe_pmu_event_init(struct perf_event *event)
+ 	    !(spe_pmu->features & SPE_PMU_FEAT_FILT_LAT))
+ 		return -EOPNOTSUPP;
+ 
++	set_spe_event_has_cx(event);
+ 	reg = arm_spe_event_to_pmscr(event);
+ 	if (!perfmon_capable() &&
+ 	    (reg & (BIT(SYS_PMSCR_EL1_PA_SHIFT) |
+-		    BIT(SYS_PMSCR_EL1_CX_SHIFT) |
+ 		    BIT(SYS_PMSCR_EL1_PCT_SHIFT))))
+ 		return -EACCES;
+ 
+diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c
+index 979f92194e81a..c4de8c4db1930 100644
+--- a/drivers/platform/chrome/cros_ec.c
++++ b/drivers/platform/chrome/cros_ec.c
+@@ -121,16 +121,16 @@ static int cros_ec_sleep_event(struct cros_ec_device *ec_dev, u8 sleep_event)
+ 	buf.msg.command = EC_CMD_HOST_SLEEP_EVENT;
+ 
+ 	ret = cros_ec_cmd_xfer_status(ec_dev, &buf.msg);
+-
+-	/* For now, report failure to transition to S0ix with a warning. */
++	/* Report failure to transition to system wide suspend with a warning. */
+ 	if (ret >= 0 && ec_dev->host_sleep_v1 &&
+-	    (sleep_event == HOST_SLEEP_EVENT_S0IX_RESUME)) {
++	    (sleep_event == HOST_SLEEP_EVENT_S0IX_RESUME ||
++	     sleep_event == HOST_SLEEP_EVENT_S3_RESUME)) {
+ 		ec_dev->last_resume_result =
+ 			buf.u.resp1.resume_response.sleep_transitions;
+ 
+ 		WARN_ONCE(buf.u.resp1.resume_response.sleep_transitions &
+ 			  EC_HOST_RESUME_SLEEP_TIMEOUT,
+-			  "EC detected sleep transition timeout. Total slp_s0 transitions: %d",
++			  "EC detected sleep transition timeout. Total sleep transitions: %d",
+ 			  buf.u.resp1.resume_response.sleep_transitions &
+ 			  EC_HOST_RESUME_SLEEP_TRANSITIONS_MASK);
+ 	}
+diff --git a/drivers/platform/olpc/olpc-ec.c b/drivers/platform/olpc/olpc-ec.c
+index 2db7113383fdc..89d9fca02fe9d 100644
+--- a/drivers/platform/olpc/olpc-ec.c
++++ b/drivers/platform/olpc/olpc-ec.c
+@@ -265,7 +265,7 @@ static ssize_t ec_dbgfs_cmd_write(struct file *file, const char __user *buf,
+ 	int i, m;
+ 	unsigned char ec_cmd[EC_MAX_CMD_ARGS];
+ 	unsigned int ec_cmd_int[EC_MAX_CMD_ARGS];
+-	char cmdbuf[64];
++	char cmdbuf[64] = "";
+ 	int ec_cmd_bytes;
+ 
+ 	mutex_lock(&ec_dbgfs_lock);
+diff --git a/drivers/pwm/pwm-lpc18xx-sct.c b/drivers/pwm/pwm-lpc18xx-sct.c
+index 9b15b6a79082a..f32a9e0692ad6 100644
+--- a/drivers/pwm/pwm-lpc18xx-sct.c
++++ b/drivers/pwm/pwm-lpc18xx-sct.c
+@@ -325,7 +325,6 @@ static int lpc18xx_pwm_probe(struct platform_device *pdev)
+ {
+ 	struct lpc18xx_pwm_chip *lpc18xx_pwm;
+ 	struct pwm_device *pwm;
+-	struct resource *res;
+ 	int ret, i;
+ 	u64 val;
+ 
+@@ -336,8 +335,7 @@ static int lpc18xx_pwm_probe(struct platform_device *pdev)
+ 
+ 	lpc18xx_pwm->dev = &pdev->dev;
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	lpc18xx_pwm->base = devm_ioremap_resource(&pdev->dev, res);
++	lpc18xx_pwm->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(lpc18xx_pwm->base))
+ 		return PTR_ERR(lpc18xx_pwm->base);
+ 
+diff --git a/drivers/pwm/pwm-sifive.c b/drivers/pwm/pwm-sifive.c
+index 2485fbaaead22..9cc0612f08498 100644
+--- a/drivers/pwm/pwm-sifive.c
++++ b/drivers/pwm/pwm-sifive.c
+@@ -23,7 +23,7 @@
+ #define PWM_SIFIVE_PWMCFG		0x0
+ #define PWM_SIFIVE_PWMCOUNT		0x8
+ #define PWM_SIFIVE_PWMS			0x10
+-#define PWM_SIFIVE_PWMCMP0		0x20
++#define PWM_SIFIVE_PWMCMP(i)		(0x20 + 4 * (i))
+ 
+ /* PWMCFG fields */
+ #define PWM_SIFIVE_PWMCFG_SCALE		GENMASK(3, 0)
+@@ -36,8 +36,6 @@
+ #define PWM_SIFIVE_PWMCFG_GANG		BIT(24)
+ #define PWM_SIFIVE_PWMCFG_IP		BIT(28)
+ 
+-/* PWM_SIFIVE_SIZE_PWMCMP is used to calculate offset for pwmcmpX registers */
+-#define PWM_SIFIVE_SIZE_PWMCMP		4
+ #define PWM_SIFIVE_CMPWIDTH		16
+ #define PWM_SIFIVE_DEFAULT_PERIOD	10000000
+ 
+@@ -112,8 +110,7 @@ static void pwm_sifive_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	struct pwm_sifive_ddata *ddata = pwm_sifive_chip_to_ddata(chip);
+ 	u32 duty, val;
+ 
+-	duty = readl(ddata->regs + PWM_SIFIVE_PWMCMP0 +
+-		     pwm->hwpwm * PWM_SIFIVE_SIZE_PWMCMP);
++	duty = readl(ddata->regs + PWM_SIFIVE_PWMCMP(pwm->hwpwm));
+ 
+ 	state->enabled = duty > 0;
+ 
+@@ -194,8 +191,7 @@ static int pwm_sifive_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 		pwm_sifive_update_clock(ddata, clk_get_rate(ddata->clk));
+ 	}
+ 
+-	writel(frac, ddata->regs + PWM_SIFIVE_PWMCMP0 +
+-	       pwm->hwpwm * PWM_SIFIVE_SIZE_PWMCMP);
++	writel(frac, ddata->regs + PWM_SIFIVE_PWMCMP(pwm->hwpwm));
+ 
+ 	if (state->enabled != enabled)
+ 		pwm_sifive_enable(chip, state->enabled);
+@@ -234,6 +230,8 @@ static int pwm_sifive_probe(struct platform_device *pdev)
+ 	struct pwm_chip *chip;
+ 	struct resource *res;
+ 	int ret;
++	u32 val;
++	unsigned int enabled_pwms = 0, enabled_clks = 1;
+ 
+ 	ddata = devm_kzalloc(dev, sizeof(*ddata), GFP_KERNEL);
+ 	if (!ddata)
+@@ -264,6 +262,33 @@ static int pwm_sifive_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	val = readl(ddata->regs + PWM_SIFIVE_PWMCFG);
++	if (val & PWM_SIFIVE_PWMCFG_EN_ALWAYS) {
++		unsigned int i;
++
++		for (i = 0; i < chip->npwm; ++i) {
++			val = readl(ddata->regs + PWM_SIFIVE_PWMCMP(i));
++			if (val > 0)
++				++enabled_pwms;
++		}
++	}
++
++	/* The clk should be on once for each running PWM. */
++	if (enabled_pwms) {
++		while (enabled_clks < enabled_pwms) {
++			/* This is not expected to fail as the clk is already on */
++			ret = clk_enable(ddata->clk);
++			if (unlikely(ret)) {
++				dev_err_probe(dev, ret, "Failed to enable clk\n");
++				goto disable_clk;
++			}
++			++enabled_clks;
++		}
++	} else {
++		clk_disable(ddata->clk);
++		enabled_clks = 0;
++	}
++
+ 	/* Watch for changes to underlying clock frequency */
+ 	ddata->notifier.notifier_call = pwm_sifive_clock_notifier;
+ 	ret = clk_notifier_register(ddata->clk, &ddata->notifier);
+@@ -286,7 +311,11 @@ static int pwm_sifive_probe(struct platform_device *pdev)
+ unregister_clk:
+ 	clk_notifier_unregister(ddata->clk, &ddata->notifier);
+ disable_clk:
+-	clk_disable_unprepare(ddata->clk);
++	while (enabled_clks) {
++		clk_disable(ddata->clk);
++		--enabled_clks;
++	}
++	clk_unprepare(ddata->clk);
+ 
+ 	return ret;
+ }
+@@ -294,25 +323,21 @@ disable_clk:
+ static int pwm_sifive_remove(struct platform_device *dev)
+ {
+ 	struct pwm_sifive_ddata *ddata = platform_get_drvdata(dev);
+-	bool is_enabled = false;
+ 	struct pwm_device *pwm;
+-	int ret, ch;
++	int ch;
++
++	pwmchip_remove(&ddata->chip);
++	clk_notifier_unregister(ddata->clk, &ddata->notifier);
+ 
+ 	for (ch = 0; ch < ddata->chip.npwm; ch++) {
+ 		pwm = &ddata->chip.pwms[ch];
+-		if (pwm->state.enabled) {
+-			is_enabled = true;
+-			break;
+-		}
++		if (pwm->state.enabled)
++			clk_disable(ddata->clk);
+ 	}
+-	if (is_enabled)
+-		clk_disable(ddata->clk);
+ 
+-	clk_disable_unprepare(ddata->clk);
+-	ret = pwmchip_remove(&ddata->chip);
+-	clk_notifier_unregister(ddata->clk, &ddata->notifier);
++	clk_unprepare(ddata->clk);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static const struct of_device_id pwm_sifive_of_match[] = {
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index 06c0b15fe4c08..5d844697c7b68 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -206,8 +206,12 @@ static int of_get_regulation_constraints(struct device *dev,
+ 		}
+ 
+ 		suspend_np = of_get_child_by_name(np, regulator_states[i]);
+-		if (!suspend_np || !suspend_state)
++		if (!suspend_np)
+ 			continue;
++		if (!suspend_state) {
++			of_node_put(suspend_np);
++			continue;
++		}
+ 
+ 		if (!of_property_read_u32(suspend_np, "regulator-mode",
+ 					  &pval)) {
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index 05d227f9d2f28..0295d7b160e5b 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -313,10 +313,10 @@ static const struct regulator_desc pm8941_switch = {
+ 
+ static const struct regulator_desc pm8916_pldo = {
+ 	.linear_ranges = (struct linear_range[]) {
+-		REGULATOR_LINEAR_RANGE(750000, 0, 208, 12500),
++		REGULATOR_LINEAR_RANGE(1750000, 0, 127, 12500),
+ 	},
+ 	.n_linear_ranges = 1,
+-	.n_voltages = 209,
++	.n_voltages = 128,
+ 	.ops = &rpm_smps_ldo_ops,
+ };
+ 
+diff --git a/drivers/remoteproc/qcom_sysmon.c b/drivers/remoteproc/qcom_sysmon.c
+index b37b111b15b39..a26221a6f6c22 100644
+--- a/drivers/remoteproc/qcom_sysmon.c
++++ b/drivers/remoteproc/qcom_sysmon.c
+@@ -41,6 +41,7 @@ struct qcom_sysmon {
+ 	struct completion comp;
+ 	struct completion ind_comp;
+ 	struct completion shutdown_comp;
++	struct completion ssctl_comp;
+ 	struct mutex lock;
+ 
+ 	bool ssr_ack;
+@@ -422,6 +423,8 @@ static int ssctl_new_server(struct qmi_handle *qmi, struct qmi_service *svc)
+ 
+ 	svc->priv = sysmon;
+ 
++	complete(&sysmon->ssctl_comp);
++
+ 	return 0;
+ }
+ 
+@@ -478,6 +481,7 @@ static int sysmon_start(struct rproc_subdev *subdev)
+ 		.ssr_event = SSCTL_SSR_EVENT_AFTER_POWERUP
+ 	};
+ 
++	reinit_completion(&sysmon->ssctl_comp);
+ 	mutex_lock(&sysmon->state_lock);
+ 	sysmon->state = SSCTL_SSR_EVENT_AFTER_POWERUP;
+ 	blocking_notifier_call_chain(&sysmon_notifiers, 0, (void *)&event);
+@@ -520,6 +524,11 @@ static void sysmon_stop(struct rproc_subdev *subdev, bool crashed)
+ 	if (crashed)
+ 		return;
+ 
++	if (sysmon->ssctl_instance) {
++		if (!wait_for_completion_timeout(&sysmon->ssctl_comp, HZ / 2))
++			dev_err(sysmon->dev, "timeout waiting for ssctl service\n");
++	}
++
+ 	if (sysmon->ssctl_version)
+ 		ssctl_request_shutdown(sysmon);
+ 	else if (sysmon->ept)
+@@ -606,6 +615,7 @@ struct qcom_sysmon *qcom_add_sysmon_subdev(struct rproc *rproc,
+ 	init_completion(&sysmon->comp);
+ 	init_completion(&sysmon->ind_comp);
+ 	init_completion(&sysmon->shutdown_comp);
++	init_completion(&sysmon->ssctl_comp);
+ 	mutex_init(&sysmon->lock);
+ 	mutex_init(&sysmon->state_lock);
+ 
+diff --git a/drivers/remoteproc/qcom_wcnss.c b/drivers/remoteproc/qcom_wcnss.c
+index 67286a4505cd1..572f7b8ba2347 100644
+--- a/drivers/remoteproc/qcom_wcnss.c
++++ b/drivers/remoteproc/qcom_wcnss.c
+@@ -415,6 +415,7 @@ static int wcnss_request_irq(struct qcom_wcnss *wcnss,
+ 			     irq_handler_t thread_fn)
+ {
+ 	int ret;
++	int irq_number;
+ 
+ 	ret = platform_get_irq_byname(pdev, name);
+ 	if (ret < 0 && optional) {
+@@ -425,14 +426,19 @@ static int wcnss_request_irq(struct qcom_wcnss *wcnss,
+ 		return ret;
+ 	}
+ 
++	irq_number = ret;
++
+ 	ret = devm_request_threaded_irq(&pdev->dev, ret,
+ 					NULL, thread_fn,
+ 					IRQF_TRIGGER_RISING | IRQF_ONESHOT,
+ 					"wcnss", wcnss);
+-	if (ret)
++	if (ret) {
+ 		dev_err(&pdev->dev, "request %s IRQ failed\n", name);
++		return ret;
++	}
+ 
+-	return ret;
++	/* Return the IRQ number if the IRQ was successfully acquired */
++	return irq_number;
+ }
+ 
+ static int wcnss_alloc_memory_region(struct qcom_wcnss *wcnss)
+diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+index afeb9d6e4313d..f92a18c06d805 100644
+--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+@@ -1283,6 +1283,7 @@ static int k3_r5_cluster_of_init(struct platform_device *pdev)
+ 		if (!cpdev) {
+ 			ret = -ENODEV;
+ 			dev_err(dev, "could not get R5 core platform device\n");
++			of_node_put(child);
+ 			goto fail;
+ 		}
+ 
+@@ -1291,6 +1292,7 @@ static int k3_r5_cluster_of_init(struct platform_device *pdev)
+ 			dev_err(dev, "k3_r5_core_of_init failed, ret = %d\n",
+ 				ret);
+ 			put_device(&cpdev->dev);
++			of_node_put(child);
+ 			goto fail;
+ 		}
+ 
+diff --git a/drivers/rpmsg/mtk_rpmsg.c b/drivers/rpmsg/mtk_rpmsg.c
+index 96a17ec291401..2d8cb596ad691 100644
+--- a/drivers/rpmsg/mtk_rpmsg.c
++++ b/drivers/rpmsg/mtk_rpmsg.c
+@@ -234,7 +234,9 @@ static void mtk_register_device_work_function(struct work_struct *register_work)
+ 		if (info->registered)
+ 			continue;
+ 
++		mutex_unlock(&subdev->channels_lock);
+ 		ret = mtk_rpmsg_register_device(subdev, &info->info);
++		mutex_lock(&subdev->channels_lock);
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "Can't create rpmsg_device\n");
+ 			continue;
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index a4db9f6100d2f..0b1e853d8c91a 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -1364,6 +1364,7 @@ static int qcom_smd_parse_edge(struct device *dev,
+ 		}
+ 
+ 		edge->ipc_regmap = syscon_node_to_regmap(syscon_np);
++		of_node_put(syscon_np);
+ 		if (IS_ERR(edge->ipc_regmap)) {
+ 			ret = PTR_ERR(edge->ipc_regmap);
+ 			goto put_node;
+diff --git a/drivers/s390/char/zcore.c b/drivers/s390/char/zcore.c
+index 1515fdc3c1abd..3841c0e77df69 100644
+--- a/drivers/s390/char/zcore.c
++++ b/drivers/s390/char/zcore.c
+@@ -48,6 +48,7 @@ static struct dentry *zcore_reipl_file;
+ static struct dentry *zcore_hsa_file;
+ static struct ipl_parameter_block *zcore_ipl_block;
+ 
++static DEFINE_MUTEX(hsa_buf_mutex);
+ static char hsa_buf[PAGE_SIZE] __aligned(PAGE_SIZE);
+ 
+ /*
+@@ -64,19 +65,24 @@ int memcpy_hsa_user(void __user *dest, unsigned long src, size_t count)
+ 	if (!hsa_available)
+ 		return -ENODATA;
+ 
++	mutex_lock(&hsa_buf_mutex);
+ 	while (count) {
+ 		if (sclp_sdias_copy(hsa_buf, src / PAGE_SIZE + 2, 1)) {
+ 			TRACE("sclp_sdias_copy() failed\n");
++			mutex_unlock(&hsa_buf_mutex);
+ 			return -EIO;
+ 		}
+ 		offset = src % PAGE_SIZE;
+ 		bytes = min(PAGE_SIZE - offset, count);
+-		if (copy_to_user(dest, hsa_buf + offset, bytes))
++		if (copy_to_user(dest, hsa_buf + offset, bytes)) {
++			mutex_unlock(&hsa_buf_mutex);
+ 			return -EFAULT;
++		}
+ 		src += bytes;
+ 		dest += bytes;
+ 		count -= bytes;
+ 	}
++	mutex_unlock(&hsa_buf_mutex);
+ 	return 0;
+ }
+ 
+@@ -94,9 +100,11 @@ int memcpy_hsa_kernel(void *dest, unsigned long src, size_t count)
+ 	if (!hsa_available)
+ 		return -ENODATA;
+ 
++	mutex_lock(&hsa_buf_mutex);
+ 	while (count) {
+ 		if (sclp_sdias_copy(hsa_buf, src / PAGE_SIZE + 2, 1)) {
+ 			TRACE("sclp_sdias_copy() failed\n");
++			mutex_unlock(&hsa_buf_mutex);
+ 			return -EIO;
+ 		}
+ 		offset = src % PAGE_SIZE;
+@@ -106,6 +114,7 @@ int memcpy_hsa_kernel(void *dest, unsigned long src, size_t count)
+ 		dest += bytes;
+ 		count -= bytes;
+ 	}
++	mutex_unlock(&hsa_buf_mutex);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
+index 9b61e9b131ade..e3c1060b6056c 100644
+--- a/drivers/s390/cio/vfio_ccw_drv.c
++++ b/drivers/s390/cio/vfio_ccw_drv.c
+@@ -288,19 +288,11 @@ static int vfio_ccw_sch_event(struct subchannel *sch, int process)
+ 	if (work_pending(&sch->todo_work))
+ 		goto out_unlock;
+ 
+-	if (cio_update_schib(sch)) {
+-		vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);
+-		rc = 0;
+-		goto out_unlock;
+-	}
+-
+-	private = dev_get_drvdata(&sch->dev);
+-	if (private->state == VFIO_CCW_STATE_NOT_OPER) {
+-		private->state = private->mdev ? VFIO_CCW_STATE_IDLE :
+-				 VFIO_CCW_STATE_STANDBY;
+-	}
+ 	rc = 0;
+ 
++	if (cio_update_schib(sch))
++		vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);
++
+ out_unlock:
+ 	spin_unlock_irqrestore(sch->lock, flags);
+ 
+diff --git a/drivers/s390/scsi/zfcp_fc.c b/drivers/s390/scsi/zfcp_fc.c
+index 511bf8e0a436c..b61acbb09be3b 100644
+--- a/drivers/s390/scsi/zfcp_fc.c
++++ b/drivers/s390/scsi/zfcp_fc.c
+@@ -145,27 +145,33 @@ void zfcp_fc_enqueue_event(struct zfcp_adapter *adapter,
+ 
+ static int zfcp_fc_wka_port_get(struct zfcp_fc_wka_port *wka_port)
+ {
++	int ret = -EIO;
++
+ 	if (mutex_lock_interruptible(&wka_port->mutex))
+ 		return -ERESTARTSYS;
+ 
+ 	if (wka_port->status == ZFCP_FC_WKA_PORT_OFFLINE ||
+ 	    wka_port->status == ZFCP_FC_WKA_PORT_CLOSING) {
+ 		wka_port->status = ZFCP_FC_WKA_PORT_OPENING;
+-		if (zfcp_fsf_open_wka_port(wka_port))
++		if (zfcp_fsf_open_wka_port(wka_port)) {
++			/* could not even send request, nothing to wait for */
+ 			wka_port->status = ZFCP_FC_WKA_PORT_OFFLINE;
++			goto out;
++		}
+ 	}
+ 
+-	mutex_unlock(&wka_port->mutex);
+-
+-	wait_event(wka_port->completion_wq,
++	wait_event(wka_port->opened,
+ 		   wka_port->status == ZFCP_FC_WKA_PORT_ONLINE ||
+ 		   wka_port->status == ZFCP_FC_WKA_PORT_OFFLINE);
+ 
+ 	if (wka_port->status == ZFCP_FC_WKA_PORT_ONLINE) {
+ 		atomic_inc(&wka_port->refcount);
+-		return 0;
++		ret = 0;
++		goto out;
+ 	}
+-	return -EIO;
++out:
++	mutex_unlock(&wka_port->mutex);
++	return ret;
+ }
+ 
+ static void zfcp_fc_wka_port_offline(struct work_struct *work)
+@@ -181,9 +187,12 @@ static void zfcp_fc_wka_port_offline(struct work_struct *work)
+ 
+ 	wka_port->status = ZFCP_FC_WKA_PORT_CLOSING;
+ 	if (zfcp_fsf_close_wka_port(wka_port)) {
++		/* could not even send request, nothing to wait for */
+ 		wka_port->status = ZFCP_FC_WKA_PORT_OFFLINE;
+-		wake_up(&wka_port->completion_wq);
++		goto out;
+ 	}
++	wait_event(wka_port->closed,
++		   wka_port->status == ZFCP_FC_WKA_PORT_OFFLINE);
+ out:
+ 	mutex_unlock(&wka_port->mutex);
+ }
+@@ -193,13 +202,15 @@ static void zfcp_fc_wka_port_put(struct zfcp_fc_wka_port *wka_port)
+ 	if (atomic_dec_return(&wka_port->refcount) != 0)
+ 		return;
+ 	/* wait 10 milliseconds, other reqs might pop in */
+-	schedule_delayed_work(&wka_port->work, HZ / 100);
++	queue_delayed_work(wka_port->adapter->work_queue, &wka_port->work,
++			   msecs_to_jiffies(10));
+ }
+ 
+ static void zfcp_fc_wka_port_init(struct zfcp_fc_wka_port *wka_port, u32 d_id,
+ 				  struct zfcp_adapter *adapter)
+ {
+-	init_waitqueue_head(&wka_port->completion_wq);
++	init_waitqueue_head(&wka_port->opened);
++	init_waitqueue_head(&wka_port->closed);
+ 
+ 	wka_port->adapter = adapter;
+ 	wka_port->d_id = d_id;
+diff --git a/drivers/s390/scsi/zfcp_fc.h b/drivers/s390/scsi/zfcp_fc.h
+index 6902ae1f8e4f0..25bebfaa8cbcd 100644
+--- a/drivers/s390/scsi/zfcp_fc.h
++++ b/drivers/s390/scsi/zfcp_fc.h
+@@ -185,7 +185,8 @@ enum zfcp_fc_wka_status {
+ /**
+  * struct zfcp_fc_wka_port - representation of well-known-address (WKA) FC port
+  * @adapter: Pointer to adapter structure this WKA port belongs to
+- * @completion_wq: Wait for completion of open/close command
++ * @opened: Wait for completion of open command
++ * @closed: Wait for completion of close command
+  * @status: Current status of WKA port
+  * @refcount: Reference count to keep port open as long as it is in use
+  * @d_id: FC destination id or well-known-address
+@@ -195,7 +196,8 @@ enum zfcp_fc_wka_status {
+  */
+ struct zfcp_fc_wka_port {
+ 	struct zfcp_adapter	*adapter;
+-	wait_queue_head_t	completion_wq;
++	wait_queue_head_t	opened;
++	wait_queue_head_t	closed;
+ 	enum zfcp_fc_wka_status	status;
+ 	atomic_t		refcount;
+ 	u32			d_id;
+diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
+index 6cb963a067771..8401c42db5419 100644
+--- a/drivers/s390/scsi/zfcp_fsf.c
++++ b/drivers/s390/scsi/zfcp_fsf.c
+@@ -1889,7 +1889,7 @@ static void zfcp_fsf_open_wka_port_handler(struct zfcp_fsf_req *req)
+ 		wka_port->status = ZFCP_FC_WKA_PORT_ONLINE;
+ 	}
+ out:
+-	wake_up(&wka_port->completion_wq);
++	wake_up(&wka_port->opened);
+ }
+ 
+ /**
+@@ -1948,7 +1948,7 @@ static void zfcp_fsf_close_wka_port_handler(struct zfcp_fsf_req *req)
+ 	}
+ 
+ 	wka_port->status = ZFCP_FC_WKA_PORT_OFFLINE;
+-	wake_up(&wka_port->completion_wq);
++	wake_up(&wka_port->closed);
+ }
+ 
+ /**
+@@ -2359,8 +2359,7 @@ static void zfcp_fsf_req_trace(struct zfcp_fsf_req *req, struct scsi_cmnd *scsi)
+ 		}
+ 	}
+ 
+-	blk_add_driver_data(scsi->request->q, scsi->request, &blktrc,
+-			    sizeof(blktrc));
++	blk_add_driver_data(scsi->request, &blktrc, sizeof(blktrc));
+ }
+ 
+ /**
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 8a8e0920d2b41..6afce455b9d8d 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -3857,6 +3857,7 @@ struct qla_hw_data {
+ 	/* SRB cache. */
+ #define SRB_MIN_REQ     128
+ 	mempool_t       *srb_mempool;
++	u8 port_name[WWN_SIZE];
+ 
+ 	volatile struct {
+ 		uint32_t	mbox_int		:1;
+@@ -4134,8 +4135,8 @@ struct qla_hw_data {
+ #define IS_OEM_001(ha)          ((ha)->device_type & DT_OEM_001)
+ #define HAS_EXTENDED_IDS(ha)    ((ha)->device_type & DT_EXTENDED_IDS)
+ #define IS_CT6_SUPPORTED(ha)	((ha)->device_type & DT_CT6_SUPPORTED)
+-#define IS_MQUE_CAPABLE(ha)	((ha)->mqenable || IS_QLA83XX(ha) || \
+-				IS_QLA27XX(ha) || IS_QLA28XX(ha))
++#define IS_MQUE_CAPABLE(ha)	(IS_QLA83XX(ha) || IS_QLA27XX(ha) || \
++				 IS_QLA28XX(ha))
+ #define IS_BIDI_CAPABLE(ha) \
+     (IS_QLA25XX(ha) || IS_QLA2031(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
+ /* Bit 21 of fw_attributes decides the MCTP capabilities */
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index 3bc1850273421..7e5ee31581d61 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -405,7 +405,8 @@ extern int
+ qla2x00_get_resource_cnts(scsi_qla_host_t *);
+ 
+ extern int
+-qla2x00_get_fcal_position_map(scsi_qla_host_t *ha, char *pos_map);
++qla2x00_get_fcal_position_map(scsi_qla_host_t *ha, char *pos_map,
++		u8 *num_entries);
+ 
+ extern int
+ qla2x00_get_link_status(scsi_qla_host_t *, uint16_t, struct link_statistics *,
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index 73015c69b5e89..20bbd69e35e51 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -1594,7 +1594,6 @@ qla2x00_hba_attributes(scsi_qla_host_t *vha, void *entries,
+ 	unsigned int callopt)
+ {
+ 	struct qla_hw_data *ha = vha->hw;
+-	struct init_cb_24xx *icb24 = (void *)ha->init_cb;
+ 	struct new_utsname *p_sysid = utsname();
+ 	struct ct_fdmi_hba_attr *eiter;
+ 	uint16_t alen;
+@@ -1756,8 +1755,8 @@ qla2x00_hba_attributes(scsi_qla_host_t *vha, void *entries,
+ 	/* MAX CT Payload Length */
+ 	eiter = entries + size;
+ 	eiter->type = cpu_to_be16(FDMI_HBA_MAXIMUM_CT_PAYLOAD_LENGTH);
+-	eiter->a.max_ct_len = cpu_to_be32(le16_to_cpu(IS_FWI2_CAPABLE(ha) ?
+-		icb24->frame_payload_size : ha->init_cb->frame_payload_size));
++	eiter->a.max_ct_len = cpu_to_be32(ha->frame_payload_size >> 2);
++
+ 	alen = sizeof(eiter->a.max_ct_len);
+ 	alen += FDMI_ATTR_TYPELEN(eiter);
+ 	eiter->len = cpu_to_be16(alen);
+@@ -1849,7 +1848,6 @@ qla2x00_port_attributes(scsi_qla_host_t *vha, void *entries,
+ 	unsigned int callopt)
+ {
+ 	struct qla_hw_data *ha = vha->hw;
+-	struct init_cb_24xx *icb24 = (void *)ha->init_cb;
+ 	struct new_utsname *p_sysid = utsname();
+ 	char *hostname = p_sysid ?
+ 		p_sysid->nodename : fc_host_system_hostname(vha->host);
+@@ -1901,8 +1899,7 @@ qla2x00_port_attributes(scsi_qla_host_t *vha, void *entries,
+ 	/* Max frame size. */
+ 	eiter = entries + size;
+ 	eiter->type = cpu_to_be16(FDMI_PORT_MAX_FRAME_SIZE);
+-	eiter->a.max_frame_size = cpu_to_be32(le16_to_cpu(IS_FWI2_CAPABLE(ha) ?
+-		icb24->frame_payload_size : ha->init_cb->frame_payload_size));
++	eiter->a.max_frame_size = cpu_to_be32(ha->frame_payload_size);
+ 	alen = sizeof(eiter->a.max_frame_size);
+ 	alen += FDMI_ATTR_TYPELEN(eiter);
+ 	eiter->len = cpu_to_be16(alen);
+@@ -3555,7 +3552,7 @@ login_logout:
+ 				do_delete) {
+ 				if (fcport->loop_id != FC_NO_LOOP_ID) {
+ 					if (fcport->flags & FCF_FCP2_DEVICE)
+-						fcport->logout_on_delete = 0;
++						continue;
+ 
+ 					ql_dbg(ql_dbg_disc, vha, 0x20f0,
+ 					    "%s %d %8phC post del sess\n",
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 9452848ede3f8..422ff67038d17 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -1734,7 +1734,8 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ 	case RSCN_PORT_ADDR:
+ 		fcport = qla2x00_find_fcport_by_nportid(vha, &ea->id, 1);
+ 		if (fcport) {
+-			if (fcport->flags & FCF_FCP2_DEVICE) {
++			if (fcport->flags & FCF_FCP2_DEVICE &&
++			    atomic_read(&fcport->state) == FCS_ONLINE) {
+ 				ql_dbg(ql_dbg_disc, vha, 0x2115,
+ 				       "Delaying session delete for FCP2 portid=%06x %8phC ",
+ 					fcport->d_id.b24, fcport->port_name);
+@@ -1746,7 +1747,8 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ 		break;
+ 	case RSCN_AREA_ADDR:
+ 		list_for_each_entry(fcport, &vha->vp_fcports, list) {
+-			if (fcport->flags & FCF_FCP2_DEVICE)
++			if (fcport->flags & FCF_FCP2_DEVICE &&
++			    atomic_read(&fcport->state) == FCS_ONLINE)
+ 				continue;
+ 
+ 			if ((ea->id.b24 & 0xffff00) == (fcport->d_id.b24 & 0xffff00)) {
+@@ -1757,7 +1759,8 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ 		break;
+ 	case RSCN_DOM_ADDR:
+ 		list_for_each_entry(fcport, &vha->vp_fcports, list) {
+-			if (fcport->flags & FCF_FCP2_DEVICE)
++			if (fcport->flags & FCF_FCP2_DEVICE &&
++			    atomic_read(&fcport->state) == FCS_ONLINE)
+ 				continue;
+ 
+ 			if ((ea->id.b24 & 0xff0000) == (fcport->d_id.b24 & 0xff0000)) {
+@@ -1769,7 +1772,8 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ 	case RSCN_FAB_ADDR:
+ 	default:
+ 		list_for_each_entry(fcport, &vha->vp_fcports, list) {
+-			if (fcport->flags & FCF_FCP2_DEVICE)
++			if (fcport->flags & FCF_FCP2_DEVICE &&
++			    atomic_read(&fcport->state) == FCS_ONLINE)
+ 				continue;
+ 
+ 			fcport->scan_needed = 1;
+@@ -4328,6 +4332,8 @@ qla2x00_init_rings(scsi_qla_host_t *vha)
+ 			 BIT_6) != 0;
+ 		ql_dbg(ql_dbg_init, vha, 0x00bc, "FA-WWPN Support: %s.\n",
+ 		    (ha->flags.fawwpn_enabled) ? "enabled" : "disabled");
++		/* Init_cb will be reused for other command(s).  Save a backup copy of port_name */
++		memcpy(ha->port_name, ha->init_cb->port_name, WWN_SIZE);
+ 	}
+ 
+ 	rval = qla2x00_init_firmware(vha, ha->init_cb_size);
+@@ -5268,6 +5274,22 @@ static int qla2x00_configure_n2n_loop(scsi_qla_host_t *vha)
+ 	return QLA_FUNCTION_FAILED;
+ }
+ 
++static void
++qla_reinitialize_link(scsi_qla_host_t *vha)
++{
++	int rval;
++
++	atomic_set(&vha->loop_state, LOOP_DOWN);
++	atomic_set(&vha->loop_down_timer, LOOP_DOWN_TIME);
++	rval = qla2x00_full_login_lip(vha);
++	if (rval == QLA_SUCCESS) {
++		ql_dbg(ql_dbg_disc, vha, 0xd050, "Link reinitialized\n");
++	} else {
++		ql_dbg(ql_dbg_disc, vha, 0xd051,
++			"Link reinitialization failed (%d)\n", rval);
++	}
++}
++
+ /*
+  * qla2x00_configure_local_loop
+  *	Updates Fibre Channel Device Database with local loop devices.
+@@ -5319,6 +5341,19 @@ qla2x00_configure_local_loop(scsi_qla_host_t *vha)
+ 		spin_unlock_irqrestore(&vha->work_lock, flags);
+ 
+ 		if (vha->scan.scan_retry < MAX_SCAN_RETRIES) {
++			u8 loop_map_entries = 0;
++			int rc;
++
++			rc = qla2x00_get_fcal_position_map(vha, NULL,
++						&loop_map_entries);
++			if (rc == QLA_SUCCESS && loop_map_entries > 1) {
++				/*
++				 * There are devices that are still not logged
++				 * in. Reinitialize to give them a chance.
++				 */
++				qla_reinitialize_link(vha);
++				return QLA_FUNCTION_FAILED;
++			}
+ 			set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
+ 			set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
+ 		}
+@@ -5547,8 +5582,6 @@ qla2x00_reg_remote_port(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	if (atomic_read(&fcport->state) == FCS_ONLINE)
+ 		return;
+ 
+-	qla2x00_set_fcport_state(fcport, FCS_ONLINE);
+-
+ 	rport_ids.node_name = wwn_to_u64(fcport->node_name);
+ 	rport_ids.port_name = wwn_to_u64(fcport->port_name);
+ 	rport_ids.port_id = fcport->d_id.b.domain << 16 |
+@@ -5649,7 +5682,6 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 		qla2x00_reg_remote_port(vha, fcport);
+ 		break;
+ 	case MODE_TARGET:
+-		qla2x00_set_fcport_state(fcport, FCS_ONLINE);
+ 		if (!vha->vha_tgt.qla_tgt->tgt_stop &&
+ 			!vha->vha_tgt.qla_tgt->tgt_stopped)
+ 			qlt_fc_port_added(vha, fcport);
+@@ -5664,6 +5696,8 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 		break;
+ 	}
+ 
++	qla2x00_set_fcport_state(fcport, FCS_ONLINE);
++
+ 	if (IS_IIDMA_CAPABLE(vha->hw) && vha->hw->flags.gpsc_supported) {
+ 		if (fcport->id_changed) {
+ 			fcport->id_changed = 0;
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index c5c7d60ab2524..7ea73ad845de6 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -1202,9 +1202,7 @@ skip_rio:
+ 			if (!vha->vp_idx) {
+ 				if (ha->flags.fawwpn_enabled &&
+ 				    (ha->current_topology == ISP_CFG_F)) {
+-					void *wwpn = ha->init_cb->port_name;
+-
+-					memcpy(vha->port_name, wwpn, WWN_SIZE);
++					memcpy(vha->port_name, ha->port_name, WWN_SIZE);
+ 					fc_host_port_name(vha->host) =
+ 					    wwn_to_u64(vha->port_name);
+ 					ql_dbg(ql_dbg_init + ql_dbg_verbose,
+@@ -4056,16 +4054,12 @@ msix_register_fail:
+ 	}
+ 
+ 	/* Enable MSI-X vector for response queue update for queue 0 */
+-	if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+-		if (ha->msixbase && ha->mqiobase &&
+-		    (ha->max_rsp_queues > 1 || ha->max_req_queues > 1 ||
+-		     ql2xmqsupport))
+-			ha->mqenable = 1;
+-	} else
+-		if (ha->mqiobase &&
+-		    (ha->max_rsp_queues > 1 || ha->max_req_queues > 1 ||
+-		     ql2xmqsupport))
+-			ha->mqenable = 1;
++	if (IS_MQUE_CAPABLE(ha) &&
++	    (ha->msixbase && ha->mqiobase && ha->max_qpairs))
++		ha->mqenable = 1;
++	else
++		ha->mqenable = 0;
++
+ 	ql_dbg(ql_dbg_multiq, vha, 0xc005,
+ 	    "mqiobase=%p, max_rsp_queues=%d, max_req_queues=%d.\n",
+ 	    ha->mqiobase, ha->max_rsp_queues, ha->max_req_queues);
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index bbb57edc1f662..6ff720d8961d0 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -233,6 +233,8 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ 			ql_dbg(ql_dbg_mbx, vha, 0x1112,
+ 			    "mbox[%d]<-0x%04x\n", cnt, *iptr);
+ 			wrt_reg_word(optr, *iptr);
++		} else {
++			wrt_reg_word(optr, 0);
+ 		}
+ 
+ 		mboxes >>= 1;
+@@ -269,6 +271,12 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ 		atomic_inc(&ha->num_pend_mbx_stage3);
+ 		if (!wait_for_completion_timeout(&ha->mbx_intr_comp,
+ 		    mcp->tov * HZ)) {
++			ql_dbg(ql_dbg_mbx, vha, 0x117a,
++			    "cmd=%x Timeout.\n", command);
++			spin_lock_irqsave(&ha->hardware_lock, flags);
++			clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
++			spin_unlock_irqrestore(&ha->hardware_lock, flags);
++
+ 			if (chip_reset != ha->chip_reset) {
+ 				spin_lock_irqsave(&ha->hardware_lock, flags);
+ 				ha->flags.mbox_busy = 0;
+@@ -279,12 +287,6 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ 				rval = QLA_ABORTED;
+ 				goto premature_exit;
+ 			}
+-			ql_dbg(ql_dbg_mbx, vha, 0x117a,
+-			    "cmd=%x Timeout.\n", command);
+-			spin_lock_irqsave(&ha->hardware_lock, flags);
+-			clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
+-			spin_unlock_irqrestore(&ha->hardware_lock, flags);
+-
+ 		} else if (ha->flags.purge_mbox ||
+ 		    chip_reset != ha->chip_reset) {
+ 			spin_lock_irqsave(&ha->hardware_lock, flags);
+@@ -3015,7 +3017,8 @@ qla2x00_get_resource_cnts(scsi_qla_host_t *vha)
+  *	Kernel context.
+  */
+ int
+-qla2x00_get_fcal_position_map(scsi_qla_host_t *vha, char *pos_map)
++qla2x00_get_fcal_position_map(scsi_qla_host_t *vha, char *pos_map,
++		u8 *num_entries)
+ {
+ 	int rval;
+ 	mbx_cmd_t mc;
+@@ -3055,6 +3058,8 @@ qla2x00_get_fcal_position_map(scsi_qla_host_t *vha, char *pos_map)
+ 
+ 		if (pos_map)
+ 			memcpy(pos_map, pmap, FCAL_MAP_SIZE);
++		if (num_entries)
++			*num_entries = pmap[0];
+ 	}
+ 	dma_pool_free(ha->s_dma_pool, pmap, pmap_dma);
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index ba1b1c7549d35..d63ccdf6e9887 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -35,11 +35,6 @@ int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
+ 		(fcport->nvme_flag & NVME_FLAG_REGISTERED))
+ 		return 0;
+ 
+-	if (atomic_read(&fcport->state) == FCS_ONLINE)
+-		return 0;
+-
+-	qla2x00_set_fcport_state(fcport, FCS_ONLINE);
+-
+ 	fcport->nvme_flag &= ~NVME_FLAG_RESETTING;
+ 
+ 	memset(&req, 0, sizeof(struct nvme_fc_port_info));
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index bfa8d77322d73..e1c086ac8a60e 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -190,7 +190,7 @@ static void sg_link_reserve(Sg_fd * sfp, Sg_request * srp, int size);
+ static void sg_unlink_reserve(Sg_fd * sfp, Sg_request * srp);
+ static Sg_fd *sg_add_sfp(Sg_device * sdp);
+ static void sg_remove_sfp(struct kref *);
+-static Sg_request *sg_get_rq_mark(Sg_fd * sfp, int pack_id);
++static Sg_request *sg_get_rq_mark(Sg_fd * sfp, int pack_id, bool *busy);
+ static Sg_request *sg_add_request(Sg_fd * sfp);
+ static int sg_remove_request(Sg_fd * sfp, Sg_request * srp);
+ static Sg_device *sg_get_dev(int dev);
+@@ -444,6 +444,7 @@ sg_read(struct file *filp, char __user *buf, size_t count, loff_t * ppos)
+ 	Sg_fd *sfp;
+ 	Sg_request *srp;
+ 	int req_pack_id = -1;
++	bool busy;
+ 	sg_io_hdr_t *hp;
+ 	struct sg_header *old_hdr;
+ 	int retval;
+@@ -466,20 +467,16 @@ sg_read(struct file *filp, char __user *buf, size_t count, loff_t * ppos)
+ 	if (retval)
+ 		return retval;
+ 
+-	srp = sg_get_rq_mark(sfp, req_pack_id);
++	srp = sg_get_rq_mark(sfp, req_pack_id, &busy);
+ 	if (!srp) {		/* now wait on packet to arrive */
+-		if (atomic_read(&sdp->detaching))
+-			return -ENODEV;
+ 		if (filp->f_flags & O_NONBLOCK)
+ 			return -EAGAIN;
+ 		retval = wait_event_interruptible(sfp->read_wait,
+-			(atomic_read(&sdp->detaching) ||
+-			(srp = sg_get_rq_mark(sfp, req_pack_id))));
+-		if (atomic_read(&sdp->detaching))
+-			return -ENODEV;
+-		if (retval)
+-			/* -ERESTARTSYS as signal hit process */
+-			return retval;
++			((srp = sg_get_rq_mark(sfp, req_pack_id, &busy)) ||
++			(!busy && atomic_read(&sdp->detaching))));
++		if (!srp)
++			/* signal or detaching */
++			return retval ? retval : -ENODEV;
+ 	}
+ 	if (srp->header.interface_id != '\0')
+ 		return sg_new_read(sfp, buf, count, srp);
+@@ -938,9 +935,7 @@ sg_ioctl_common(struct file *filp, Sg_device *sdp, Sg_fd *sfp,
+ 		if (result < 0)
+ 			return result;
+ 		result = wait_event_interruptible(sfp->read_wait,
+-			(srp_done(sfp, srp) || atomic_read(&sdp->detaching)));
+-		if (atomic_read(&sdp->detaching))
+-			return -ENODEV;
++			srp_done(sfp, srp));
+ 		write_lock_irq(&sfp->rq_list_lock);
+ 		if (srp->done) {
+ 			srp->done = 2;
+@@ -2093,19 +2088,28 @@ sg_unlink_reserve(Sg_fd * sfp, Sg_request * srp)
+ }
+ 
+ static Sg_request *
+-sg_get_rq_mark(Sg_fd * sfp, int pack_id)
++sg_get_rq_mark(Sg_fd * sfp, int pack_id, bool *busy)
+ {
+ 	Sg_request *resp;
+ 	unsigned long iflags;
+ 
++	*busy = false;
+ 	write_lock_irqsave(&sfp->rq_list_lock, iflags);
+ 	list_for_each_entry(resp, &sfp->rq_list, entry) {
+-		/* look for requests that are ready + not SG_IO owned */
+-		if ((1 == resp->done) && (!resp->sg_io_owned) &&
++		/* look for requests that are not SG_IO owned */
++		if ((!resp->sg_io_owned) &&
+ 		    ((-1 == pack_id) || (resp->header.pack_id == pack_id))) {
+-			resp->done = 2;	/* guard against other readers */
+-			write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
+-			return resp;
++			switch (resp->done) {
++			case 0: /* request active */
++				*busy = true;
++				break;
++			case 1: /* request done; response ready to return */
++				resp->done = 2;	/* guard against other readers */
++				write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
++				return resp;
++			case 2: /* response already being returned */
++				break;
++			}
+ 		}
+ 	}
+ 	write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
+@@ -2159,6 +2163,15 @@ sg_remove_request(Sg_fd * sfp, Sg_request * srp)
+ 		res = 1;
+ 	}
+ 	write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
++
++	/*
++	 * If the device is detaching, wakeup any readers in case we just
++	 * removed the last response, which would leave nothing for them to
++	 * return other than -ENODEV.
++	 */
++	if (unlikely(atomic_read(&sfp->parentdp->detaching)))
++		wake_up_interruptible_all(&sfp->read_wait);
++
+ 	return res;
+ }
+ 
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index de73ade70c24c..fcff35e20a4a3 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -4997,10 +4997,10 @@ static int pqi_raid_submit_scsi_cmd_with_io_request(
+ 	}
+ 
+ 	switch (scmd->sc_data_direction) {
+-	case DMA_TO_DEVICE:
++	case DMA_FROM_DEVICE:
+ 		request->data_direction = SOP_READ_FLAG;
+ 		break;
+-	case DMA_FROM_DEVICE:
++	case DMA_TO_DEVICE:
+ 		request->data_direction = SOP_WRITE_FLAG;
+ 		break;
+ 	case DMA_NONE:
+diff --git a/drivers/soc/amlogic/meson-mx-socinfo.c b/drivers/soc/amlogic/meson-mx-socinfo.c
+index 78f0f1aeca578..92125dd65f338 100644
+--- a/drivers/soc/amlogic/meson-mx-socinfo.c
++++ b/drivers/soc/amlogic/meson-mx-socinfo.c
+@@ -126,6 +126,7 @@ static int __init meson_mx_socinfo_init(void)
+ 	np = of_find_matching_node(NULL, meson_mx_socinfo_analog_top_ids);
+ 	if (np) {
+ 		analog_top_regmap = syscon_node_to_regmap(np);
++		of_node_put(np);
+ 		if (IS_ERR(analog_top_regmap))
+ 			return PTR_ERR(analog_top_regmap);
+ 
+diff --git a/drivers/soc/amlogic/meson-secure-pwrc.c b/drivers/soc/amlogic/meson-secure-pwrc.c
+index 5fb29a4758796..fff92e2f39744 100644
+--- a/drivers/soc/amlogic/meson-secure-pwrc.c
++++ b/drivers/soc/amlogic/meson-secure-pwrc.c
+@@ -138,8 +138,10 @@ static int meson_secure_pwrc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	pwrc = devm_kzalloc(&pdev->dev, sizeof(*pwrc), GFP_KERNEL);
+-	if (!pwrc)
++	if (!pwrc) {
++		of_node_put(sm_np);
+ 		return -ENOMEM;
++	}
+ 
+ 	pwrc->fw = meson_sm_get(sm_np);
+ 	of_node_put(sm_np);
+diff --git a/drivers/soc/fsl/guts.c b/drivers/soc/fsl/guts.c
+index 091e94c04f309..6b0c433954bfb 100644
+--- a/drivers/soc/fsl/guts.c
++++ b/drivers/soc/fsl/guts.c
+@@ -141,7 +141,7 @@ static int fsl_guts_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct resource *res;
+ 	const struct fsl_soc_die_attr *soc_die;
+-	const char *machine;
++	const char *machine = NULL;
+ 	u32 svr;
+ 
+ 	/* Initialize guts */
+diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
+index 6a3b69b43ad51..d0cf969a8fb5f 100644
+--- a/drivers/soc/qcom/Kconfig
++++ b/drivers/soc/qcom/Kconfig
+@@ -128,6 +128,7 @@ config QCOM_RPMHPD
+ 
+ config QCOM_RPMPD
+ 	tristate "Qualcomm RPM Power domain driver"
++	depends on PM
+ 	depends on QCOM_SMD_RPM
+ 	help
+ 	  QCOM RPM Power domain driver to support power-domains with
+diff --git a/drivers/soc/qcom/ocmem.c b/drivers/soc/qcom/ocmem.c
+index 85f82e195ef8b..1dfdd0b9ba24d 100644
+--- a/drivers/soc/qcom/ocmem.c
++++ b/drivers/soc/qcom/ocmem.c
+@@ -194,14 +194,17 @@ struct ocmem *of_get_ocmem(struct device *dev)
+ 	devnode = of_parse_phandle(dev->of_node, "sram", 0);
+ 	if (!devnode || !devnode->parent) {
+ 		dev_err(dev, "Cannot look up sram phandle\n");
++		of_node_put(devnode);
+ 		return ERR_PTR(-ENODEV);
+ 	}
+ 
+ 	pdev = of_find_device_by_node(devnode->parent);
+ 	if (!pdev) {
+ 		dev_err(dev, "Cannot find device node %s\n", devnode->name);
++		of_node_put(devnode);
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 	}
++	of_node_put(devnode);
+ 
+ 	ocmem = platform_get_drvdata(pdev);
+ 	if (!ocmem) {
+diff --git a/drivers/soc/qcom/qcom_aoss.c b/drivers/soc/qcom/qcom_aoss.c
+index 941499b117580..401a0be3675af 100644
+--- a/drivers/soc/qcom/qcom_aoss.c
++++ b/drivers/soc/qcom/qcom_aoss.c
+@@ -493,8 +493,10 @@ static int qmp_cooling_devices_register(struct qmp *qmp)
+ 			continue;
+ 		ret = qmp_cooling_device_add(qmp, &qmp->cooling_devs[count++],
+ 					     child);
+-		if (ret)
++		if (ret) {
++			of_node_put(child);
+ 			goto unroll;
++		}
+ 	}
+ 
+ 	if (!count)
+diff --git a/drivers/soc/renesas/r8a779a0-sysc.c b/drivers/soc/renesas/r8a779a0-sysc.c
+index d464ffa1be33d..d0a5434715b89 100644
+--- a/drivers/soc/renesas/r8a779a0-sysc.c
++++ b/drivers/soc/renesas/r8a779a0-sysc.c
+@@ -83,11 +83,11 @@ static struct r8a779a0_sysc_area r8a779a0_areas[] __initdata = {
+ 	{ "a2cv6",	R8A779A0_PD_A2CV6, R8A779A0_PD_A3IR },
+ 	{ "a2cn2",	R8A779A0_PD_A2CN2, R8A779A0_PD_A3IR },
+ 	{ "a2imp23",	R8A779A0_PD_A2IMP23, R8A779A0_PD_A3IR },
+-	{ "a2dp1",	R8A779A0_PD_A2DP0, R8A779A0_PD_A3IR },
+-	{ "a2cv2",	R8A779A0_PD_A2CV0, R8A779A0_PD_A3IR },
+-	{ "a2cv3",	R8A779A0_PD_A2CV1, R8A779A0_PD_A3IR },
+-	{ "a2cv5",	R8A779A0_PD_A2CV4, R8A779A0_PD_A3IR },
+-	{ "a2cv7",	R8A779A0_PD_A2CV6, R8A779A0_PD_A3IR },
++	{ "a2dp1",	R8A779A0_PD_A2DP1, R8A779A0_PD_A3IR },
++	{ "a2cv2",	R8A779A0_PD_A2CV2, R8A779A0_PD_A3IR },
++	{ "a2cv3",	R8A779A0_PD_A2CV3, R8A779A0_PD_A3IR },
++	{ "a2cv5",	R8A779A0_PD_A2CV5, R8A779A0_PD_A3IR },
++	{ "a2cv7",	R8A779A0_PD_A2CV7, R8A779A0_PD_A3IR },
+ 	{ "a2cn1",	R8A779A0_PD_A2CN1, R8A779A0_PD_A3IR },
+ 	{ "a1cnn0",	R8A779A0_PD_A1CNN0, R8A779A0_PD_A2CN0 },
+ 	{ "a1cnn2",	R8A779A0_PD_A1CNN2, R8A779A0_PD_A2CN2 },
+diff --git a/drivers/soundwire/bus_type.c b/drivers/soundwire/bus_type.c
+index 575b9bad99d51..2e8986cccdd49 100644
+--- a/drivers/soundwire/bus_type.c
++++ b/drivers/soundwire/bus_type.c
+@@ -184,12 +184,8 @@ int __sdw_register_driver(struct sdw_driver *drv, struct module *owner)
+ 
+ 	drv->driver.owner = owner;
+ 	drv->driver.probe = sdw_drv_probe;
+-
+-	if (drv->remove)
+-		drv->driver.remove = sdw_drv_remove;
+-
+-	if (drv->shutdown)
+-		drv->driver.shutdown = sdw_drv_shutdown;
++	drv->driver.remove = sdw_drv_remove;
++	drv->driver.shutdown = sdw_drv_shutdown;
+ 
+ 	return driver_register(&drv->driver);
+ }
+diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c
+index ea03cc589e61f..4600e3c9e49e4 100644
+--- a/drivers/spi/spi-rspi.c
++++ b/drivers/spi/spi-rspi.c
+@@ -612,6 +612,10 @@ static int rspi_dma_transfer(struct rspi_data *rspi, struct sg_table *tx,
+ 					       rspi->dma_callbacked, HZ);
+ 	if (ret > 0 && rspi->dma_callbacked) {
+ 		ret = 0;
++		if (tx)
++			dmaengine_synchronize(rspi->ctlr->dma_tx);
++		if (rx)
++			dmaengine_synchronize(rspi->ctlr->dma_rx);
+ 	} else {
+ 		if (!ret) {
+ 			dev_err(&rspi->ctlr->dev, "DMA timeout\n");
+diff --git a/drivers/spi/spi-synquacer.c b/drivers/spi/spi-synquacer.c
+index ea706d9629cb1..47cbe73137c23 100644
+--- a/drivers/spi/spi-synquacer.c
++++ b/drivers/spi/spi-synquacer.c
+@@ -783,6 +783,7 @@ static int __maybe_unused synquacer_spi_resume(struct device *dev)
+ 
+ 		ret = synquacer_spi_enable(master);
+ 		if (ret) {
++			clk_disable_unprepare(sspi->clk);
+ 			dev_err(dev, "failed to enable spi (%d)\n", ret);
+ 			return ret;
+ 		}
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_cmd.c b/drivers/staging/media/atomisp/pci/atomisp_cmd.c
+index 90d50a693ce57..20c19e08968ea 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_cmd.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_cmd.c
+@@ -899,9 +899,9 @@ void atomisp_buf_done(struct atomisp_sub_device *asd, int error,
+ 	int err;
+ 	unsigned long irqflags;
+ 	struct ia_css_frame *frame = NULL;
+-	struct atomisp_s3a_buf *s3a_buf = NULL, *_s3a_buf_tmp;
+-	struct atomisp_dis_buf *dis_buf = NULL, *_dis_buf_tmp;
+-	struct atomisp_metadata_buf *md_buf = NULL, *_md_buf_tmp;
++	struct atomisp_s3a_buf *s3a_buf = NULL, *_s3a_buf_tmp, *s3a_iter;
++	struct atomisp_dis_buf *dis_buf = NULL, *_dis_buf_tmp, *dis_iter;
++	struct atomisp_metadata_buf *md_buf = NULL, *_md_buf_tmp, *md_iter;
+ 	enum atomisp_metadata_type md_type;
+ 	struct atomisp_device *isp = asd->isp;
+ 	struct v4l2_control ctrl;
+@@ -940,60 +940,75 @@ void atomisp_buf_done(struct atomisp_sub_device *asd, int error,
+ 
+ 	switch (buf_type) {
+ 	case IA_CSS_BUFFER_TYPE_3A_STATISTICS:
+-		list_for_each_entry_safe(s3a_buf, _s3a_buf_tmp,
++		list_for_each_entry_safe(s3a_iter, _s3a_buf_tmp,
+ 					 &asd->s3a_stats_in_css, list) {
+-			if (s3a_buf->s3a_data ==
++			if (s3a_iter->s3a_data ==
+ 			    buffer.css_buffer.data.stats_3a) {
+-				list_del_init(&s3a_buf->list);
+-				list_add_tail(&s3a_buf->list,
++				list_del_init(&s3a_iter->list);
++				list_add_tail(&s3a_iter->list,
+ 					      &asd->s3a_stats_ready);
++				s3a_buf = s3a_iter;
+ 				break;
+ 			}
+ 		}
+ 
+ 		asd->s3a_bufs_in_css[css_pipe_id]--;
+ 		atomisp_3a_stats_ready_event(asd, buffer.css_buffer.exp_id);
+-		dev_dbg(isp->dev, "%s: s3a stat with exp_id %d is ready\n",
+-			__func__, s3a_buf->s3a_data->exp_id);
++		if (s3a_buf)
++			dev_dbg(isp->dev, "%s: s3a stat with exp_id %d is ready\n",
++				__func__, s3a_buf->s3a_data->exp_id);
++		else
++			dev_dbg(isp->dev, "%s: s3a stat is ready with no exp_id found\n",
++				__func__);
+ 		break;
+ 	case IA_CSS_BUFFER_TYPE_METADATA:
+ 		if (error)
+ 			break;
+ 
+ 		md_type = atomisp_get_metadata_type(asd, css_pipe_id);
+-		list_for_each_entry_safe(md_buf, _md_buf_tmp,
++		list_for_each_entry_safe(md_iter, _md_buf_tmp,
+ 					 &asd->metadata_in_css[md_type], list) {
+-			if (md_buf->metadata ==
++			if (md_iter->metadata ==
+ 			    buffer.css_buffer.data.metadata) {
+-				list_del_init(&md_buf->list);
+-				list_add_tail(&md_buf->list,
++				list_del_init(&md_iter->list);
++				list_add_tail(&md_iter->list,
+ 					      &asd->metadata_ready[md_type]);
++				md_buf = md_iter;
+ 				break;
+ 			}
+ 		}
+ 		asd->metadata_bufs_in_css[stream_id][css_pipe_id]--;
+ 		atomisp_metadata_ready_event(asd, md_type);
+-		dev_dbg(isp->dev, "%s: metadata with exp_id %d is ready\n",
+-			__func__, md_buf->metadata->exp_id);
++		if (md_buf)
++			dev_dbg(isp->dev, "%s: metadata with exp_id %d is ready\n",
++				__func__, md_buf->metadata->exp_id);
++		else
++			dev_dbg(isp->dev, "%s: metadata is ready with no exp_id found\n",
++				__func__);
+ 		break;
+ 	case IA_CSS_BUFFER_TYPE_DIS_STATISTICS:
+-		list_for_each_entry_safe(dis_buf, _dis_buf_tmp,
++		list_for_each_entry_safe(dis_iter, _dis_buf_tmp,
+ 					 &asd->dis_stats_in_css, list) {
+-			if (dis_buf->dis_data ==
++			if (dis_iter->dis_data ==
+ 			    buffer.css_buffer.data.stats_dvs) {
+ 				spin_lock_irqsave(&asd->dis_stats_lock,
+ 						  irqflags);
+-				list_del_init(&dis_buf->list);
+-				list_add(&dis_buf->list, &asd->dis_stats);
++				list_del_init(&dis_iter->list);
++				list_add(&dis_iter->list, &asd->dis_stats);
+ 				asd->params.dis_proj_data_valid = true;
+ 				spin_unlock_irqrestore(&asd->dis_stats_lock,
+ 						       irqflags);
++				dis_buf = dis_iter;
+ 				break;
+ 			}
+ 		}
+ 		asd->dis_bufs_in_css--;
+-		dev_dbg(isp->dev, "%s: dis stat with exp_id %d is ready\n",
+-			__func__, dis_buf->dis_data->exp_id);
++		if (dis_buf)
++			dev_dbg(isp->dev, "%s: dis stat with exp_id %d is ready\n",
++				__func__, dis_buf->dis_data->exp_id);
++		else
++			dev_dbg(isp->dev, "%s: dis stat is ready with no exp_id found\n",
++				__func__);
+ 		break;
+ 	case IA_CSS_BUFFER_TYPE_VF_OUTPUT_FRAME:
+ 	case IA_CSS_BUFFER_TYPE_SEC_VF_OUTPUT_FRAME:
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+index 368439cf5e174..20c01a56f2849 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+@@ -147,6 +147,9 @@ static void cedrus_h265_frame_info_write_dpb(struct cedrus_ctx *ctx,
+ 			dpb[i].pic_order_cnt[1]
+ 		};
+ 
++		if (buffer_index < 0)
++			continue;
++
+ 		cedrus_h265_frame_info_write_single(ctx, i, dpb[i].field_pic,
+ 						    pic_order_cnt,
+ 						    buffer_index);
+diff --git a/drivers/staging/rtl8192u/r8192U.h b/drivers/staging/rtl8192u/r8192U.h
+index ec33fb9122e96..57badc1e91e30 100644
+--- a/drivers/staging/rtl8192u/r8192U.h
++++ b/drivers/staging/rtl8192u/r8192U.h
+@@ -1013,7 +1013,7 @@ typedef struct r8192_priv {
+ 	bool		bis_any_nonbepkts;
+ 	bool		bcurrent_turbo_EDCA;
+ 	bool		bis_cur_rdlstate;
+-	struct timer_list fsync_timer;
++	struct delayed_work fsync_work;
+ 	bool bfsync_processing;	/* 500ms Fsync timer is active or not */
+ 	u32	rate_record;
+ 	u32	rateCountDiffRecord;
+diff --git a/drivers/staging/rtl8192u/r8192U_dm.c b/drivers/staging/rtl8192u/r8192U_dm.c
+index bac402b40121d..6aa424a31569d 100644
+--- a/drivers/staging/rtl8192u/r8192U_dm.c
++++ b/drivers/staging/rtl8192u/r8192U_dm.c
+@@ -2578,19 +2578,20 @@ static void dm_init_fsync(struct net_device *dev)
+ 	priv->ieee80211->fsync_seconddiff_ratethreshold = 200;
+ 	priv->ieee80211->fsync_state = Default_Fsync;
+ 	priv->framesyncMonitor = 1;	/* current default 0xc38 monitor on */
+-	timer_setup(&priv->fsync_timer, dm_fsync_timer_callback, 0);
++	INIT_DELAYED_WORK(&priv->fsync_work, dm_fsync_work_callback);
+ }
+ 
+ static void dm_deInit_fsync(struct net_device *dev)
+ {
+ 	struct r8192_priv *priv = ieee80211_priv(dev);
+ 
+-	del_timer_sync(&priv->fsync_timer);
++	cancel_delayed_work_sync(&priv->fsync_work);
+ }
+ 
+-void dm_fsync_timer_callback(struct timer_list *t)
++void dm_fsync_work_callback(struct work_struct *work)
+ {
+-	struct r8192_priv *priv = from_timer(priv, t, fsync_timer);
++	struct r8192_priv *priv =
++	    container_of(work, struct r8192_priv, fsync_work.work);
+ 	struct net_device *dev = priv->ieee80211->dev;
+ 	u32 rate_index, rate_count = 0, rate_count_diff = 0;
+ 	bool		bSwitchFromCountDiff = false;
+@@ -2657,17 +2658,16 @@ void dm_fsync_timer_callback(struct timer_list *t)
+ 			}
+ 		}
+ 		if (bDoubleTimeInterval) {
+-			if (timer_pending(&priv->fsync_timer))
+-				del_timer_sync(&priv->fsync_timer);
+-			priv->fsync_timer.expires = jiffies +
+-				msecs_to_jiffies(priv->ieee80211->fsync_time_interval*priv->ieee80211->fsync_multiple_timeinterval);
+-			add_timer(&priv->fsync_timer);
++			cancel_delayed_work_sync(&priv->fsync_work);
++			schedule_delayed_work(&priv->fsync_work,
++					      msecs_to_jiffies(priv
++					      ->ieee80211->fsync_time_interval *
++					      priv->ieee80211->fsync_multiple_timeinterval));
+ 		} else {
+-			if (timer_pending(&priv->fsync_timer))
+-				del_timer_sync(&priv->fsync_timer);
+-			priv->fsync_timer.expires = jiffies +
+-				msecs_to_jiffies(priv->ieee80211->fsync_time_interval);
+-			add_timer(&priv->fsync_timer);
++			cancel_delayed_work_sync(&priv->fsync_work);
++			schedule_delayed_work(&priv->fsync_work,
++					      msecs_to_jiffies(priv
++					      ->ieee80211->fsync_time_interval));
+ 		}
+ 	} else {
+ 		/* Let Register return to default value; */
+@@ -2695,7 +2695,7 @@ static void dm_EndSWFsync(struct net_device *dev)
+ 	struct r8192_priv *priv = ieee80211_priv(dev);
+ 
+ 	RT_TRACE(COMP_HALDM, "%s\n", __func__);
+-	del_timer_sync(&(priv->fsync_timer));
++	cancel_delayed_work_sync(&priv->fsync_work);
+ 
+ 	/* Let Register return to default value; */
+ 	if (priv->bswitch_fsync) {
+@@ -2736,11 +2736,9 @@ static void dm_StartSWFsync(struct net_device *dev)
+ 		if (priv->ieee80211->fsync_rate_bitmap &  rateBitmap)
+ 			priv->rate_record += priv->stats.received_rate_histogram[1][rateIndex];
+ 	}
+-	if (timer_pending(&priv->fsync_timer))
+-		del_timer_sync(&priv->fsync_timer);
+-	priv->fsync_timer.expires = jiffies +
+-			msecs_to_jiffies(priv->ieee80211->fsync_time_interval);
+-	add_timer(&priv->fsync_timer);
++	cancel_delayed_work_sync(&priv->fsync_work);
++	schedule_delayed_work(&priv->fsync_work,
++			      msecs_to_jiffies(priv->ieee80211->fsync_time_interval));
+ 
+ 	write_nic_dword(dev, rOFDM0_RxDetector2, 0x465c12cd);
+ }
+diff --git a/drivers/staging/rtl8192u/r8192U_dm.h b/drivers/staging/rtl8192u/r8192U_dm.h
+index 0b2a1c688597c..2159018b4e38f 100644
+--- a/drivers/staging/rtl8192u/r8192U_dm.h
++++ b/drivers/staging/rtl8192u/r8192U_dm.h
+@@ -166,7 +166,7 @@ void dm_force_tx_fw_info(struct net_device *dev,
+ void dm_init_edca_turbo(struct net_device *dev);
+ void dm_rf_operation_test_callback(unsigned long data);
+ void dm_rf_pathcheck_workitemcallback(struct work_struct *work);
+-void dm_fsync_timer_callback(struct timer_list *t);
++void dm_fsync_work_callback(struct work_struct *work);
+ void dm_cck_txpower_adjust(struct net_device *dev, bool  binch14);
+ void dm_shadow_init(struct net_device *dev);
+ void dm_initialize_txpower_tracking(struct net_device *dev);
+diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
+index 499fccba3d74b..6e662fb131d55 100644
+--- a/drivers/tee/tee_shm.c
++++ b/drivers/tee/tee_shm.c
+@@ -222,6 +222,9 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
+ 		goto err;
+ 	}
+ 
++	if (!access_ok((void __user *)addr, length))
++		return ERR_PTR(-EFAULT);
++
+ 	mutex_lock(&teedev->mutex);
+ 	shm->id = idr_alloc(&teedev->idr, shm, 1, 0, GFP_KERNEL);
+ 	mutex_unlock(&teedev->mutex);
+diff --git a/drivers/thermal/thermal_sysfs.c b/drivers/thermal/thermal_sysfs.c
+index f52708f310e03..05e9a3de80b59 100644
+--- a/drivers/thermal/thermal_sysfs.c
++++ b/drivers/thermal/thermal_sysfs.c
+@@ -893,12 +893,13 @@ static const struct attribute_group cooling_device_stats_attr_group = {
+ 
+ static void cooling_device_stats_setup(struct thermal_cooling_device *cdev)
+ {
++	const struct attribute_group *stats_attr_group = NULL;
+ 	struct cooling_dev_stats *stats;
+ 	unsigned long states;
+ 	int var;
+ 
+ 	if (cdev->ops->get_max_state(cdev, &states))
+-		return;
++		goto out;
+ 
+ 	states++; /* Total number of states is highest state + 1 */
+ 
+@@ -908,7 +909,7 @@ static void cooling_device_stats_setup(struct thermal_cooling_device *cdev)
+ 
+ 	stats = kzalloc(var, GFP_KERNEL);
+ 	if (!stats)
+-		return;
++		goto out;
+ 
+ 	stats->time_in_state = (ktime_t *)(stats + 1);
+ 	stats->trans_table = (unsigned int *)(stats->time_in_state + states);
+@@ -918,9 +919,12 @@ static void cooling_device_stats_setup(struct thermal_cooling_device *cdev)
+ 
+ 	spin_lock_init(&stats->lock);
+ 
++	stats_attr_group = &cooling_device_stats_attr_group;
++
++out:
+ 	/* Fill the empty slot left in cooling_device_attr_groups */
+ 	var = ARRAY_SIZE(cooling_device_attr_groups) - 2;
+-	cooling_device_attr_groups[var] = &cooling_device_stats_attr_group;
++	cooling_device_attr_groups[var] = stats_attr_group;
+ }
+ 
+ static void cooling_device_stats_destroy(struct thermal_cooling_device *cdev)
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index b05b7862778c5..cb5ed4155a8d2 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -417,6 +417,27 @@ static int gsm_read_ea(unsigned int *val, u8 c)
+ 	return c & EA;
+ }
+ 
++/**
++ *	gsm_read_ea_val	-	read a value until EA
++ *	@val: variable holding value
++ *	@data: buffer of data
++ *	@dlen: length of data
++ *
++ *	Processes an EA value. Updates the passed variable and
++ *	returns the processed data length.
++ */
++static unsigned int gsm_read_ea_val(unsigned int *val, const u8 *data, int dlen)
++{
++	unsigned int len = 0;
++
++	for (; dlen > 0; dlen--) {
++		len++;
++		if (gsm_read_ea(val, *data++))
++			break;
++	}
++	return len;
++}
++
+ /**
+  *	gsm_encode_modem	-	encode modem data bits
+  *	@dlci: DLCI to encode from
+@@ -653,6 +674,37 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
+ 	return m;
+ }
+ 
++/**
++ *	gsm_is_flow_ctrl_msg	-	checks if flow control message
++ *	@msg: message to check
++ *
++ *	Returns true if the given message is a flow control command of the
++ *	control channel. False is returned in any other case.
++ */
++static bool gsm_is_flow_ctrl_msg(struct gsm_msg *msg)
++{
++	unsigned int cmd;
++
++	if (msg->addr > 0)
++		return false;
++
++	switch (msg->ctrl & ~PF) {
++	case UI:
++	case UIH:
++		cmd = 0;
++		if (gsm_read_ea_val(&cmd, msg->data + 2, msg->len - 2) < 1)
++			break;
++		switch (cmd & ~PF) {
++		case CMD_FCOFF:
++		case CMD_FCON:
++			return true;
++		}
++		break;
++	}
++
++	return false;
++}
++
+ /**
+  *	gsm_data_kick		-	poke the queue
+  *	@gsm: GSM Mux
+@@ -671,7 +723,7 @@ static void gsm_data_kick(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+ 	int len;
+ 
+ 	list_for_each_entry_safe(msg, nmsg, &gsm->tx_list, list) {
+-		if (gsm->constipated && msg->addr)
++		if (gsm->constipated && !gsm_is_flow_ctrl_msg(msg))
+ 			continue;
+ 		if (gsm->encoding != 0) {
+ 			gsm->txframe[0] = GSM1_SOF;
+@@ -795,41 +847,51 @@ static int gsm_dlci_data_output(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+ {
+ 	struct gsm_msg *msg;
+ 	u8 *dp;
+-	int len, total_size, size;
+-	int h = dlci->adaption - 1;
++	int h, len, size;
+ 
+-	total_size = 0;
+-	while (1) {
+-		len = kfifo_len(&dlci->fifo);
+-		if (len == 0)
+-			return total_size;
+-
+-		/* MTU/MRU count only the data bits */
+-		if (len > gsm->mtu)
+-			len = gsm->mtu;
+-
+-		size = len + h;
+-
+-		msg = gsm_data_alloc(gsm, dlci->addr, size, gsm->ftype);
+-		/* FIXME: need a timer or something to kick this so it can't
+-		   get stuck with no work outstanding and no buffer free */
+-		if (msg == NULL)
+-			return -ENOMEM;
+-		dp = msg->data;
+-		switch (dlci->adaption) {
+-		case 1:	/* Unstructured */
+-			break;
+-		case 2:	/* Unstructed with modem bits.
+-		Always one byte as we never send inline break data */
+-			*dp++ = (gsm_encode_modem(dlci) << 1) | EA;
+-			break;
+-		}
+-		WARN_ON(kfifo_out_locked(&dlci->fifo, dp , len, &dlci->lock) != len);
+-		__gsm_data_queue(dlci, msg);
+-		total_size += size;
++	/* for modem bits without break data */
++	h = ((dlci->adaption == 1) ? 0 : 1);
++
++	len = kfifo_len(&dlci->fifo);
++	if (len == 0)
++		return 0;
++
++	/* MTU/MRU count only the data bits but watch adaption mode */
++	if ((len + h) > gsm->mtu)
++		len = gsm->mtu - h;
++
++	size = len + h;
++
++	msg = gsm_data_alloc(gsm, dlci->addr, size, gsm->ftype);
++	/* FIXME: need a timer or something to kick this so it can't
++	 * get stuck with no work outstanding and no buffer free
++	 */
++	if (!msg)
++		return -ENOMEM;
++	dp = msg->data;
++	switch (dlci->adaption) {
++	case 1: /* Unstructured */
++		break;
++	case 2: /* Unstructured with modem bits.
++		 * Always one byte as we never send inline break data
++		 */
++		*dp++ = (gsm_encode_modem(dlci) << 1) | EA;
++		break;
++	default:
++		pr_err("%s: unsupported adaption %d\n", __func__,
++		       dlci->adaption);
++		break;
+ 	}
++
++	WARN_ON(len != kfifo_out_locked(&dlci->fifo, dp, len,
++		&dlci->lock));
++
++	/* Notify upper layer about available send space. */
++	tty_port_tty_wakeup(&dlci->port);
++
++	__gsm_data_queue(dlci, msg);
+ 	/* Bytes of data we used up */
+-	return total_size;
++	return size;
+ }
+ 
+ /**
+@@ -1326,7 +1388,7 @@ static void gsm_control_retransmit(struct timer_list *t)
+ 	spin_lock_irqsave(&gsm->control_lock, flags);
+ 	ctrl = gsm->pending_cmd;
+ 	if (ctrl) {
+-		if (gsm->cretries == 0) {
++		if (gsm->cretries == 0 || !gsm->dlci[0] || gsm->dlci[0]->dead) {
+ 			gsm->pending_cmd = NULL;
+ 			ctrl->error = -ETIMEDOUT;
+ 			ctrl->done = 1;
+@@ -1429,6 +1491,8 @@ static void gsm_dlci_close(struct gsm_dlci *dlci)
+ 	if (debug & 8)
+ 		pr_debug("DLCI %d goes closed.\n", dlci->addr);
+ 	dlci->state = DLCI_CLOSED;
++	/* Prevent us from sending data before the link is up again */
++	dlci->constipated = true;
+ 	if (dlci->addr != 0) {
+ 		tty_port_tty_hangup(&dlci->port, false);
+ 		spin_lock_irqsave(&dlci->lock, flags);
+@@ -1458,6 +1522,7 @@ static void gsm_dlci_open(struct gsm_dlci *dlci)
+ 	del_timer(&dlci->t1);
+ 	/* This will let a tty open continue */
+ 	dlci->state = DLCI_OPEN;
++	dlci->constipated = false;
+ 	if (debug & 8)
+ 		pr_debug("DLCI %d goes open.\n", dlci->addr);
+ 	wake_up(&dlci->gsm->event);
+@@ -1485,8 +1550,8 @@ static void gsm_dlci_t1(struct timer_list *t)
+ 
+ 	switch (dlci->state) {
+ 	case DLCI_OPENING:
+-		dlci->retries--;
+ 		if (dlci->retries) {
++			dlci->retries--;
+ 			gsm_command(dlci->gsm, dlci->addr, SABM|PF);
+ 			mod_timer(&dlci->t1, jiffies + gsm->t1 * HZ / 100);
+ 		} else if (!dlci->addr && gsm->control == (DM | PF)) {
+@@ -1501,8 +1566,8 @@ static void gsm_dlci_t1(struct timer_list *t)
+ 
+ 		break;
+ 	case DLCI_CLOSING:
+-		dlci->retries--;
+ 		if (dlci->retries) {
++			dlci->retries--;
+ 			gsm_command(dlci->gsm, dlci->addr, DISC|PF);
+ 			mod_timer(&dlci->t1, jiffies + gsm->t1 * HZ / 100);
+ 		} else
+@@ -1535,6 +1600,25 @@ static void gsm_dlci_begin_open(struct gsm_dlci *dlci)
+ 	mod_timer(&dlci->t1, jiffies + gsm->t1 * HZ / 100);
+ }
+ 
++/**
++ *	gsm_dlci_set_opening	-	change state to opening
++ *	@dlci: DLCI to open
++ *
++ *	Change internal state to wait for DLCI open from initiator side.
++ *	We set off timers and responses upon reception of an SABM.
++ */
++static void gsm_dlci_set_opening(struct gsm_dlci *dlci)
++{
++	switch (dlci->state) {
++	case DLCI_CLOSED:
++	case DLCI_CLOSING:
++		dlci->state = DLCI_OPENING;
++		break;
++	default:
++		break;
++	}
++}
++
+ /**
+  *	gsm_dlci_begin_close	-	start channel open procedure
+  *	@dlci: DLCI to open
+@@ -1673,10 +1757,13 @@ static struct gsm_dlci *gsm_dlci_alloc(struct gsm_mux *gsm, int addr)
+ 	dlci->addr = addr;
+ 	dlci->adaption = gsm->adaption;
+ 	dlci->state = DLCI_CLOSED;
+-	if (addr)
++	if (addr) {
+ 		dlci->data = gsm_dlci_data;
+-	else
++		/* Prevent us from sending data before the link is up */
++		dlci->constipated = true;
++	} else {
+ 		dlci->data = gsm_dlci_command;
++	}
+ 	gsm->dlci[addr] = dlci;
+ 	return dlci;
+ }
+@@ -1851,7 +1938,7 @@ static void gsm_queue(struct gsm_mux *gsm)
+ 			goto invalid;
+ #endif
+ 		if (dlci == NULL || dlci->state != DLCI_OPEN) {
+-			gsm_command(gsm, address, DM|PF);
++			gsm_response(gsm, address, DM|PF);
+ 			return;
+ 		}
+ 		dlci->data(dlci, gsm->buf, gsm->len);
+@@ -2618,11 +2705,24 @@ static ssize_t gsmld_read(struct tty_struct *tty, struct file *file,
+ static ssize_t gsmld_write(struct tty_struct *tty, struct file *file,
+ 			   const unsigned char *buf, size_t nr)
+ {
+-	int space = tty_write_room(tty);
++	struct gsm_mux *gsm = tty->disc_data;
++	unsigned long flags;
++	int space;
++	int ret;
++
++	if (!gsm)
++		return -ENODEV;
++
++	ret = -ENOBUFS;
++	spin_lock_irqsave(&gsm->tx_lock, flags);
++	space = tty_write_room(tty);
+ 	if (space >= nr)
+-		return tty->ops->write(tty, buf, nr);
+-	set_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+-	return -ENOBUFS;
++		ret = tty->ops->write(tty, buf, nr);
++	else
++		set_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
++	spin_unlock_irqrestore(&gsm->tx_lock, flags);
++
++	return ret;
+ }
+ 
+ /**
+@@ -2647,12 +2747,15 @@ static __poll_t gsmld_poll(struct tty_struct *tty, struct file *file,
+ 
+ 	poll_wait(file, &tty->read_wait, wait);
+ 	poll_wait(file, &tty->write_wait, wait);
++
++	if (gsm->dead)
++		mask |= EPOLLHUP;
+ 	if (tty_hung_up_p(file))
+ 		mask |= EPOLLHUP;
++	if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
++		mask |= EPOLLHUP;
+ 	if (!tty_is_writelocked(tty) && tty_write_room(tty) > 0)
+ 		mask |= EPOLLOUT | EPOLLWRNORM;
+-	if (gsm->dead)
+-		mask |= EPOLLHUP;
+ 	return mask;
+ }
+ 
+@@ -3024,6 +3127,7 @@ static int gsmtty_open(struct tty_struct *tty, struct file *filp)
+ {
+ 	struct gsm_dlci *dlci = tty->driver_data;
+ 	struct tty_port *port = &dlci->port;
++	struct gsm_mux *gsm = dlci->gsm;
+ 
+ 	port->count++;
+ 	tty_port_tty_set(port, tty);
+@@ -3033,7 +3137,10 @@ static int gsmtty_open(struct tty_struct *tty, struct file *filp)
+ 	   a DM straight back. This is ok as that will have caused a hangup */
+ 	tty_port_set_initialized(port, 1);
+ 	/* Start sending off SABM messages */
+-	gsm_dlci_begin_open(dlci);
++	if (gsm->initiator)
++		gsm_dlci_begin_open(dlci);
++	else
++		gsm_dlci_set_opening(dlci);
+ 	/* And wait for virtual carrier */
+ 	return tty_port_block_til_ready(port, tty, filp);
+ }
+diff --git a/drivers/tty/serial/8250/8250.h b/drivers/tty/serial/8250/8250.h
+index 34aa2714f3c93..b6dc9003b8c4a 100644
+--- a/drivers/tty/serial/8250/8250.h
++++ b/drivers/tty/serial/8250/8250.h
+@@ -119,6 +119,28 @@ static inline void serial_out(struct uart_8250_port *up, int offset, int value)
+ 	up->port.serial_out(&up->port, offset, value);
+ }
+ 
++/*
++ * For the 16C950
++ */
++static void serial_icr_write(struct uart_8250_port *up, int offset, int value)
++{
++	serial_out(up, UART_SCR, offset);
++	serial_out(up, UART_ICR, value);
++}
++
++static unsigned int __maybe_unused serial_icr_read(struct uart_8250_port *up,
++						   int offset)
++{
++	unsigned int value;
++
++	serial_icr_write(up, UART_ACR, up->acr | UART_ACR_ICRRD);
++	serial_out(up, UART_SCR, offset);
++	value = serial_in(up, UART_ICR);
++	serial_icr_write(up, UART_ACR, up->acr);
++
++	return value;
++}
++
+ void serial8250_clear_and_reinit_fifos(struct uart_8250_port *p);
+ 
+ static inline int serial_dl_read(struct uart_8250_port *up)
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index 49559731bbcf1..ace221afeb039 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -124,12 +124,15 @@ static void dw8250_check_lcr(struct uart_port *p, int value)
+ /* Returns once the transmitter is empty or we run out of retries */
+ static void dw8250_tx_wait_empty(struct uart_port *p)
+ {
++	struct uart_8250_port *up = up_to_u8250p(p);
+ 	unsigned int tries = 20000;
+ 	unsigned int delay_threshold = tries - 1000;
+ 	unsigned int lsr;
+ 
+ 	while (tries--) {
+ 		lsr = readb (p->membase + (UART_LSR << p->regshift));
++		up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
++
+ 		if (lsr & UART_LSR_TEMT)
+ 			break;
+ 
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index da2373787f853..df10cc606582b 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -75,13 +75,12 @@ static int pci_default_setup(struct serial_private*,
+ 
+ static void moan_device(const char *str, struct pci_dev *dev)
+ {
+-	dev_err(&dev->dev,
+-	       "%s: %s\n"
++	pci_err(dev, "%s\n"
+ 	       "Please send the output of lspci -vv, this\n"
+ 	       "message (0x%04x,0x%04x,0x%04x,0x%04x), the\n"
+ 	       "manufacturer and name of serial board or\n"
+ 	       "modem board to <linux-serial@vger.kernel.org>.\n",
+-	       pci_name(dev), str, dev->vendor, dev->device,
++	       str, dev->vendor, dev->device,
+ 	       dev->subsystem_vendor, dev->subsystem_device);
+ }
+ 
+@@ -238,7 +237,7 @@ static int pci_inteli960ni_init(struct pci_dev *dev)
+ 	/* is firmware started? */
+ 	pci_read_config_dword(dev, 0x44, &oldval);
+ 	if (oldval == 0x00001000L) { /* RESET value */
+-		dev_dbg(&dev->dev, "Local i960 firmware missing\n");
++		pci_dbg(dev, "Local i960 firmware missing\n");
+ 		return -ENODEV;
+ 	}
+ 	return 0;
+@@ -588,9 +587,8 @@ static int pci_timedia_probe(struct pci_dev *dev)
+ 	 * (0,2,3,5,6: serial only -- 7,8,9: serial + parallel)
+ 	 */
+ 	if ((dev->subsystem_device & 0x00f0) >= 0x70) {
+-		dev_info(&dev->dev,
+-			"ignoring Timedia subdevice %04x for parport_serial\n",
+-			dev->subsystem_device);
++		pci_info(dev, "ignoring Timedia subdevice %04x for parport_serial\n",
++			 dev->subsystem_device);
+ 		return -ENODEV;
+ 	}
+ 
+@@ -827,8 +825,7 @@ static int pci_netmos_9900_numports(struct pci_dev *dev)
+ 		if (sub_serports > 0)
+ 			return sub_serports;
+ 
+-		dev_err(&dev->dev,
+-			"NetMos/Mostech serial driver ignoring port on ambiguous config.\n");
++		pci_err(dev, "NetMos/Mostech serial driver ignoring port on ambiguous config.\n");
+ 		return 0;
+ 	}
+ 
+@@ -897,18 +894,16 @@ static int pci_netmos_init(struct pci_dev *dev)
+ /* enable IO_Space bit */
+ #define ITE_887x_POSIO_ENABLE		(1 << 31)
+ 
++/* inta_addr are the configuration addresses of the ITE */
++static const short inta_addr[] = { 0x2a0, 0x2c0, 0x220, 0x240, 0x1e0, 0x200, 0x280 };
+ static int pci_ite887x_init(struct pci_dev *dev)
+ {
+-	/* inta_addr are the configuration addresses of the ITE */
+-	static const short inta_addr[] = { 0x2a0, 0x2c0, 0x220, 0x240, 0x1e0,
+-							0x200, 0x280, 0 };
+ 	int ret, i, type;
+ 	struct resource *iobase = NULL;
+ 	u32 miscr, uartbar, ioport;
+ 
+ 	/* search for the base-ioport */
+-	i = 0;
+-	while (inta_addr[i] && iobase == NULL) {
++	for (i = 0; i < ARRAY_SIZE(inta_addr); i++) {
+ 		iobase = request_region(inta_addr[i], ITE_887x_IOSIZE,
+ 								"ite887x");
+ 		if (iobase != NULL) {
+@@ -925,13 +920,11 @@ static int pci_ite887x_init(struct pci_dev *dev)
+ 				break;
+ 			}
+ 			release_region(iobase->start, ITE_887x_IOSIZE);
+-			iobase = NULL;
+ 		}
+-		i++;
+ 	}
+ 
+-	if (!inta_addr[i]) {
+-		dev_err(&dev->dev, "ite887x: could not find iobase\n");
++	if (i == ARRAY_SIZE(inta_addr)) {
++		pci_err(dev, "could not find iobase\n");
+ 		return -ENODEV;
+ 	}
+ 
+@@ -1001,43 +994,29 @@ static void pci_ite887x_exit(struct pci_dev *dev)
+ }
+ 
+ /*
+- * EndRun Technologies.
+- * Determine the number of ports available on the device.
++ * Oxford Semiconductor Inc.
++ * Check if an OxSemi device is part of the Tornado range of devices.
+  */
+ #define PCI_VENDOR_ID_ENDRUN			0x7401
+ #define PCI_DEVICE_ID_ENDRUN_1588	0xe100
+ 
+-static int pci_endrun_init(struct pci_dev *dev)
++static bool pci_oxsemi_tornado_p(struct pci_dev *dev)
+ {
+-	u8 __iomem *p;
+-	unsigned long deviceID;
+-	unsigned int  number_uarts = 0;
++	/* OxSemi Tornado devices are all 0xCxxx */
++	if (dev->vendor == PCI_VENDOR_ID_OXSEMI &&
++	    (dev->device & 0xf000) != 0xc000)
++		return false;
+ 
+-	/* EndRun device is all 0xexxx */
++	/* EndRun devices are all 0xExxx */
+ 	if (dev->vendor == PCI_VENDOR_ID_ENDRUN &&
+-		(dev->device & 0xf000) != 0xe000)
+-		return 0;
++	    (dev->device & 0xf000) != 0xe000)
++		return false;
+ 
+-	p = pci_iomap(dev, 0, 5);
+-	if (p == NULL)
+-		return -ENOMEM;
+-
+-	deviceID = ioread32(p);
+-	/* EndRun device */
+-	if (deviceID == 0x07000200) {
+-		number_uarts = ioread8(p + 4);
+-		dev_dbg(&dev->dev,
+-			"%d ports detected on EndRun PCI Express device\n",
+-			number_uarts);
+-	}
+-	pci_iounmap(dev, p);
+-	return number_uarts;
++	return true;
+ }
+ 
+ /*
+- * Oxford Semiconductor Inc.
+- * Check that device is part of the Tornado range of devices, then determine
+- * the number of ports available on the device.
++ * Determine the number of ports available on a Tornado device.
+  */
+ static int pci_oxsemi_tornado_init(struct pci_dev *dev)
+ {
+@@ -1045,9 +1024,7 @@ static int pci_oxsemi_tornado_init(struct pci_dev *dev)
+ 	unsigned long deviceID;
+ 	unsigned int  number_uarts = 0;
+ 
+-	/* OxSemi Tornado devices are all 0xCxxx */
+-	if (dev->vendor == PCI_VENDOR_ID_OXSEMI &&
+-	    (dev->device & 0xF000) != 0xC000)
++	if (!pci_oxsemi_tornado_p(dev))
+ 		return 0;
+ 
+ 	p = pci_iomap(dev, 0, 5);
+@@ -1058,9 +1035,10 @@ static int pci_oxsemi_tornado_init(struct pci_dev *dev)
+ 	/* Tornado device */
+ 	if (deviceID == 0x07000200) {
+ 		number_uarts = ioread8(p + 4);
+-		dev_dbg(&dev->dev,
+-			"%d ports detected on Oxford PCI Express device\n",
+-			number_uarts);
++		pci_dbg(dev, "%d ports detected on %s PCI Express device\n",
++			number_uarts,
++			dev->vendor == PCI_VENDOR_ID_ENDRUN ?
++			"EndRun" : "Oxford");
+ 	}
+ 	pci_iounmap(dev, p);
+ 	return number_uarts;
+@@ -1120,15 +1098,15 @@ static struct quatech_feature quatech_cards[] = {
+ 	{ 0, }
+ };
+ 
+-static int pci_quatech_amcc(u16 devid)
++static int pci_quatech_amcc(struct pci_dev *dev)
+ {
+ 	struct quatech_feature *qf = &quatech_cards[0];
+ 	while (qf->devid) {
+-		if (qf->devid == devid)
++		if (qf->devid == dev->device)
+ 			return qf->amcc;
+ 		qf++;
+ 	}
+-	pr_err("quatech: unknown port type '0x%04X'.\n", devid);
++	pci_err(dev, "unknown port type '0x%04X'.\n", dev->device);
+ 	return 0;
+ };
+ 
+@@ -1291,7 +1269,7 @@ static int pci_quatech_rs422(struct uart_8250_port *port)
+ 
+ static int pci_quatech_init(struct pci_dev *dev)
+ {
+-	if (pci_quatech_amcc(dev->device)) {
++	if (pci_quatech_amcc(dev)) {
+ 		unsigned long base = pci_resource_start(dev, 0);
+ 		if (base) {
+ 			u32 tmp;
+@@ -1315,7 +1293,7 @@ static int pci_quatech_setup(struct serial_private *priv,
+ 	port->port.uartclk = pci_quatech_clock(port);
+ 	/* For now just warn about RS422 */
+ 	if (pci_quatech_rs422(port))
+-		pr_warn("quatech: software control of RS422 features not currently supported.\n");
++		pci_warn(priv->dev, "software control of RS422 features not currently supported.\n");
+ 	return pci_default_setup(priv, board, port, idx);
+ }
+ 
+@@ -1529,7 +1507,7 @@ static int pci_fintek_setup(struct serial_private *priv,
+ 	/* Get the io address from configuration space */
+ 	pci_read_config_word(pdev, config_base + 4, &iobase);
+ 
+-	dev_dbg(&pdev->dev, "%s: idx=%d iobase=0x%x", __func__, idx, iobase);
++	pci_dbg(pdev, "idx=%d iobase=0x%x", idx, iobase);
+ 
+ 	port->port.iotype = UPIO_PORT;
+ 	port->port.iobase = iobase;
+@@ -1693,7 +1671,7 @@ static int skip_tx_en_setup(struct serial_private *priv,
+ 			struct uart_8250_port *port, int idx)
+ {
+ 	port->port.quirks |= UPQ_NO_TXEN_TEST;
+-	dev_dbg(&priv->dev->dev,
++	pci_dbg(priv->dev,
+ 		"serial8250: skipping TxEn test for device [%04x:%04x] subsystem [%04x:%04x]\n",
+ 		priv->dev->vendor, priv->dev->device,
+ 		priv->dev->subsystem_vendor, priv->dev->subsystem_device);
+@@ -2517,7 +2495,7 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
+ 		.device		= PCI_ANY_ID,
+ 		.subvendor	= PCI_ANY_ID,
+ 		.subdevice	= PCI_ANY_ID,
+-		.init		= pci_endrun_init,
++		.init		= pci_oxsemi_tornado_init,
+ 		.setup		= pci_default_setup,
+ 	},
+ 	/*
+@@ -2862,7 +2840,7 @@ enum pci_board_num_t {
+ 	pbn_b0_2_1843200,
+ 	pbn_b0_4_1843200,
+ 
+-	pbn_b0_1_4000000,
++	pbn_b0_1_3906250,
+ 
+ 	pbn_b0_bt_1_115200,
+ 	pbn_b0_bt_2_115200,
+@@ -2940,12 +2918,11 @@ enum pci_board_num_t {
+ 	pbn_panacom2,
+ 	pbn_panacom4,
+ 	pbn_plx_romulus,
+-	pbn_endrun_2_3906250,
+ 	pbn_oxsemi,
+-	pbn_oxsemi_1_4000000,
+-	pbn_oxsemi_2_4000000,
+-	pbn_oxsemi_4_4000000,
+-	pbn_oxsemi_8_4000000,
++	pbn_oxsemi_1_3906250,
++	pbn_oxsemi_2_3906250,
++	pbn_oxsemi_4_3906250,
++	pbn_oxsemi_8_3906250,
+ 	pbn_intel_i960,
+ 	pbn_sgi_ioc3,
+ 	pbn_computone_4,
+@@ -2983,6 +2960,10 @@ enum pci_board_num_t {
+ 	pbn_sunix_pci_4s,
+ 	pbn_sunix_pci_8s,
+ 	pbn_sunix_pci_16s,
++	pbn_titan_1_4000000,
++	pbn_titan_2_4000000,
++	pbn_titan_4_4000000,
++	pbn_titan_8_4000000,
+ 	pbn_moxa8250_2p,
+ 	pbn_moxa8250_4p,
+ 	pbn_moxa8250_8p,
+@@ -3088,10 +3069,10 @@ static struct pciserial_board pci_boards[] = {
+ 		.uart_offset	= 8,
+ 	},
+ 
+-	[pbn_b0_1_4000000] = {
++	[pbn_b0_1_3906250] = {
+ 		.flags		= FL_BASE0,
+ 		.num_ports	= 1,
+-		.base_baud	= 4000000,
++		.base_baud	= 3906250,
+ 		.uart_offset	= 8,
+ 	},
+ 
+@@ -3462,20 +3443,6 @@ static struct pciserial_board pci_boards[] = {
+ 		.first_offset	= 0x03,
+ 	},
+ 
+-	/*
+-	 * EndRun Technologies
+-	* Uses the size of PCI Base region 0 to
+-	* signal now many ports are available
+-	* 2 port 952 Uart support
+-	*/
+-	[pbn_endrun_2_3906250] = {
+-		.flags		= FL_BASE0,
+-		.num_ports	= 2,
+-		.base_baud	= 3906250,
+-		.uart_offset	= 0x200,
+-		.first_offset	= 0x1000,
+-	},
+-
+ 	/*
+ 	 * This board uses the size of PCI Base region 0 to
+ 	 * signal now many ports are available
+@@ -3486,31 +3453,31 @@ static struct pciserial_board pci_boards[] = {
+ 		.base_baud	= 115200,
+ 		.uart_offset	= 8,
+ 	},
+-	[pbn_oxsemi_1_4000000] = {
++	[pbn_oxsemi_1_3906250] = {
+ 		.flags		= FL_BASE0,
+ 		.num_ports	= 1,
+-		.base_baud	= 4000000,
++		.base_baud	= 3906250,
+ 		.uart_offset	= 0x200,
+ 		.first_offset	= 0x1000,
+ 	},
+-	[pbn_oxsemi_2_4000000] = {
++	[pbn_oxsemi_2_3906250] = {
+ 		.flags		= FL_BASE0,
+ 		.num_ports	= 2,
+-		.base_baud	= 4000000,
++		.base_baud	= 3906250,
+ 		.uart_offset	= 0x200,
+ 		.first_offset	= 0x1000,
+ 	},
+-	[pbn_oxsemi_4_4000000] = {
++	[pbn_oxsemi_4_3906250] = {
+ 		.flags		= FL_BASE0,
+ 		.num_ports	= 4,
+-		.base_baud	= 4000000,
++		.base_baud	= 3906250,
+ 		.uart_offset	= 0x200,
+ 		.first_offset	= 0x1000,
+ 	},
+-	[pbn_oxsemi_8_4000000] = {
++	[pbn_oxsemi_8_3906250] = {
+ 		.flags		= FL_BASE0,
+ 		.num_ports	= 8,
+-		.base_baud	= 4000000,
++		.base_baud	= 3906250,
+ 		.uart_offset	= 0x200,
+ 		.first_offset	= 0x1000,
+ 	},
+@@ -3770,6 +3737,34 @@ static struct pciserial_board pci_boards[] = {
+ 		.base_baud      = 921600,
+ 		.uart_offset	= 0x8,
+ 	},
++	[pbn_titan_1_4000000] = {
++		.flags		= FL_BASE0,
++		.num_ports	= 1,
++		.base_baud	= 4000000,
++		.uart_offset	= 0x200,
++		.first_offset	= 0x1000,
++	},
++	[pbn_titan_2_4000000] = {
++		.flags		= FL_BASE0,
++		.num_ports	= 2,
++		.base_baud	= 4000000,
++		.uart_offset	= 0x200,
++		.first_offset	= 0x1000,
++	},
++	[pbn_titan_4_4000000] = {
++		.flags		= FL_BASE0,
++		.num_ports	= 4,
++		.base_baud	= 4000000,
++		.uart_offset	= 0x200,
++		.first_offset	= 0x1000,
++	},
++	[pbn_titan_8_4000000] = {
++		.flags		= FL_BASE0,
++		.num_ports	= 8,
++		.base_baud	= 4000000,
++		.uart_offset	= 0x200,
++		.first_offset	= 0x1000,
++	},
+ 	[pbn_moxa8250_2p] = {
+ 		.flags		= FL_BASE1,
+ 		.num_ports      = 2,
+@@ -3979,12 +3974,12 @@ pciserial_init_ports(struct pci_dev *dev, const struct pciserial_board *board)
+ 		uart.port.irq = 0;
+ 	} else {
+ 		if (pci_match_id(pci_use_msi, dev)) {
+-			dev_dbg(&dev->dev, "Using MSI(-X) interrupts\n");
++			pci_dbg(dev, "Using MSI(-X) interrupts\n");
+ 			pci_set_master(dev);
+ 			uart.port.flags &= ~UPF_SHARE_IRQ;
+ 			rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES);
+ 		} else {
+-			dev_dbg(&dev->dev, "Using legacy interrupts\n");
++			pci_dbg(dev, "Using legacy interrupts\n");
+ 			rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY);
+ 		}
+ 		if (rc < 0) {
+@@ -4002,12 +3997,12 @@ pciserial_init_ports(struct pci_dev *dev, const struct pciserial_board *board)
+ 		if (quirk->setup(priv, board, &uart, i))
+ 			break;
+ 
+-		dev_dbg(&dev->dev, "Setup PCI port: port %lx, irq %d, type %d\n",
++		pci_dbg(dev, "Setup PCI port: port %lx, irq %d, type %d\n",
+ 			uart.port.iobase, uart.port.irq, uart.port.iotype);
+ 
+ 		priv->line[i] = serial8250_register_8250_port(&uart);
+ 		if (priv->line[i] < 0) {
+-			dev_err(&dev->dev,
++			pci_err(dev,
+ 				"Couldn't register serial port %lx, irq %d, type %d, error %d\n",
+ 				uart.port.iobase, uart.port.irq,
+ 				uart.port.iotype, priv->line[i]);
+@@ -4103,8 +4098,7 @@ pciserial_init_one(struct pci_dev *dev, const struct pci_device_id *ent)
+ 	}
+ 
+ 	if (ent->driver_data >= ARRAY_SIZE(pci_boards)) {
+-		dev_err(&dev->dev, "invalid driver_data: %ld\n",
+-			ent->driver_data);
++		pci_err(dev, "invalid driver_data: %ld\n", ent->driver_data);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -4187,7 +4181,7 @@ static int pciserial_resume_one(struct device *dev)
+ 		err = pci_enable_device(pdev);
+ 		/* FIXME: We cannot simply error out here */
+ 		if (err)
+-			dev_err(dev, "Unable to re-enable ports, trying to continue.\n");
++			pci_err(pdev, "Unable to re-enable ports, trying to continue.\n");
+ 		pciserial_resume_ports(priv);
+ 	}
+ 	return 0;
+@@ -4380,13 +4374,6 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 	{	PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_ROMULUS,
+ 		0x10b5, 0x106a, 0, 0,
+ 		pbn_plx_romulus },
+-	/*
+-	* EndRun Technologies. PCI express device range.
+-	*    EndRun PTP/1588 has 2 Native UARTs.
+-	*/
+-	{	PCI_VENDOR_ID_ENDRUN, PCI_DEVICE_ID_ENDRUN_1588,
+-		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_endrun_2_3906250 },
+ 	/*
+ 	 * Quatech cards. These actually have configurable clocks but for
+ 	 * now we just use the default.
+@@ -4496,158 +4483,165 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 	 */
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc101,    /* OXPCIe952 1 Legacy UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_b0_1_4000000 },
++		pbn_b0_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc105,    /* OXPCIe952 1 Legacy UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_b0_1_4000000 },
++		pbn_b0_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc11b,    /* OXPCIe952 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc11f,    /* OXPCIe952 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc120,    /* OXPCIe952 1 Legacy UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_b0_1_4000000 },
++		pbn_b0_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc124,    /* OXPCIe952 1 Legacy UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_b0_1_4000000 },
++		pbn_b0_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc138,    /* OXPCIe952 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc13d,    /* OXPCIe952 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc140,    /* OXPCIe952 1 Legacy UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_b0_1_4000000 },
++		pbn_b0_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc141,    /* OXPCIe952 1 Legacy UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_b0_1_4000000 },
++		pbn_b0_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc144,    /* OXPCIe952 1 Legacy UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_b0_1_4000000 },
++		pbn_b0_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc145,    /* OXPCIe952 1 Legacy UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_b0_1_4000000 },
++		pbn_b0_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc158,    /* OXPCIe952 2 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_2_4000000 },
++		pbn_oxsemi_2_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc15d,    /* OXPCIe952 2 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_2_4000000 },
++		pbn_oxsemi_2_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc208,    /* OXPCIe954 4 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_4_4000000 },
++		pbn_oxsemi_4_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc20d,    /* OXPCIe954 4 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_4_4000000 },
++		pbn_oxsemi_4_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc308,    /* OXPCIe958 8 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_8_4000000 },
++		pbn_oxsemi_8_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc30d,    /* OXPCIe958 8 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_8_4000000 },
++		pbn_oxsemi_8_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc40b,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc40f,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc41b,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc41f,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc42b,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc42f,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc43b,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc43f,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc44b,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc44f,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc45b,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc45f,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc46b,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc46f,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc47b,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc47f,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc48b,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc48f,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc49b,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc49f,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc4ab,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc4af,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc4bb,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc4bf,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc4cb,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_OXSEMI, 0xc4cf,    /* OXPCIe200 1 Native UART */
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	/*
+ 	 * Mainpine Inc. IQ Express "Rev3" utilizing OxSemi Tornado
+ 	 */
+ 	{	PCI_VENDOR_ID_MAINPINE, 0x4000,	/* IQ Express 1 Port V.34 Super-G3 Fax */
+ 		PCI_VENDOR_ID_MAINPINE, 0x4001, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_oxsemi_1_3906250 },
+ 	{	PCI_VENDOR_ID_MAINPINE, 0x4000,	/* IQ Express 2 Port V.34 Super-G3 Fax */
+ 		PCI_VENDOR_ID_MAINPINE, 0x4002, 0, 0,
+-		pbn_oxsemi_2_4000000 },
++		pbn_oxsemi_2_3906250 },
+ 	{	PCI_VENDOR_ID_MAINPINE, 0x4000,	/* IQ Express 4 Port V.34 Super-G3 Fax */
+ 		PCI_VENDOR_ID_MAINPINE, 0x4004, 0, 0,
+-		pbn_oxsemi_4_4000000 },
++		pbn_oxsemi_4_3906250 },
+ 	{	PCI_VENDOR_ID_MAINPINE, 0x4000,	/* IQ Express 8 Port V.34 Super-G3 Fax */
+ 		PCI_VENDOR_ID_MAINPINE, 0x4008, 0, 0,
+-		pbn_oxsemi_8_4000000 },
++		pbn_oxsemi_8_3906250 },
+ 
+ 	/*
+ 	 * Digi/IBM PCIe 2-port Async EIA-232 Adapter utilizing OxSemi Tornado
+ 	 */
+ 	{	PCI_VENDOR_ID_DIGI, PCIE_DEVICE_ID_NEO_2_OX_IBM,
+ 		PCI_SUBVENDOR_ID_IBM, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_2_4000000 },
++		pbn_oxsemi_2_3906250 },
++	/*
++	 * EndRun Technologies. PCI express device range.
++	 * EndRun PTP/1588 has 2 Native UARTs utilizing OxSemi 952.
++	 */
++	{	PCI_VENDOR_ID_ENDRUN, PCI_DEVICE_ID_ENDRUN_1588,
++		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
++		pbn_oxsemi_2_3906250 },
+ 
+ 	/*
+ 	 * SBS Technologies, Inc. P-Octal and PMC-OCTPRO cards,
+@@ -4721,22 +4715,22 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_b0_4_921600 },
+ 	{	PCI_VENDOR_ID_TITAN, PCI_DEVICE_ID_TITAN_100E,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_1_4000000 },
++		pbn_titan_1_4000000 },
+ 	{	PCI_VENDOR_ID_TITAN, PCI_DEVICE_ID_TITAN_200E,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_2_4000000 },
++		pbn_titan_2_4000000 },
+ 	{	PCI_VENDOR_ID_TITAN, PCI_DEVICE_ID_TITAN_400E,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_4_4000000 },
++		pbn_titan_4_4000000 },
+ 	{	PCI_VENDOR_ID_TITAN, PCI_DEVICE_ID_TITAN_800E,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_8_4000000 },
++		pbn_titan_8_4000000 },
+ 	{	PCI_VENDOR_ID_TITAN, PCI_DEVICE_ID_TITAN_200EI,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_2_4000000 },
++		pbn_titan_2_4000000 },
+ 	{	PCI_VENDOR_ID_TITAN, PCI_DEVICE_ID_TITAN_200EISI,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_oxsemi_2_4000000 },
++		pbn_titan_2_4000000 },
+ 	{	PCI_VENDOR_ID_TITAN, PCI_DEVICE_ID_TITAN_200V3,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_b0_bt_2_921600 },
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 43884e8b51610..9d60418e4adb1 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -529,27 +529,6 @@ serial_port_out_sync(struct uart_port *p, int offset, int value)
+ 	}
+ }
+ 
+-/*
+- * For the 16C950
+- */
+-static void serial_icr_write(struct uart_8250_port *up, int offset, int value)
+-{
+-	serial_out(up, UART_SCR, offset);
+-	serial_out(up, UART_ICR, value);
+-}
+-
+-static unsigned int serial_icr_read(struct uart_8250_port *up, int offset)
+-{
+-	unsigned int value;
+-
+-	serial_icr_write(up, UART_ACR, up->acr | UART_ACR_ICRRD);
+-	serial_out(up, UART_SCR, offset);
+-	value = serial_in(up, UART_ICR);
+-	serial_icr_write(up, UART_ACR, up->acr);
+-
+-	return value;
+-}
+-
+ /*
+  * FIFO support.
+  */
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index e941f57de9533..bbf1b0b37b11b 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -238,6 +238,7 @@ static void mvebu_uart_rx_chars(struct uart_port *port, unsigned int status)
+ 	struct tty_port *tport = &port->state->port;
+ 	unsigned char ch = 0;
+ 	char flag = 0;
++	int ret;
+ 
+ 	do {
+ 		if (status & STAT_RX_RDY(port)) {
+@@ -250,6 +251,16 @@ static void mvebu_uart_rx_chars(struct uart_port *port, unsigned int status)
+ 				port->icount.parity++;
+ 		}
+ 
++		/*
++		 * For UART2, error bits are not cleared on buffer read.
++		 * This causes interrupt loop and system hang.
++		 */
++		if (IS_EXTENDED(port) && (status & STAT_BRK_ERR)) {
++			ret = readl(port->membase + UART_STAT);
++			ret |= STAT_BRK_ERR;
++			writel(ret, port->membase + UART_STAT);
++		}
++
+ 		if (status & STAT_BRK_DET) {
+ 			port->icount.brk++;
+ 			status &= ~(STAT_FRM_ERR | STAT_PAR_ERR);
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 2ebe73b116dc7..a4d005fa2569d 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -344,7 +344,7 @@ static struct uni_screen *vc_uniscr_alloc(unsigned int cols, unsigned int rows)
+ 	/* allocate everything in one go */
+ 	memsize = cols * rows * sizeof(char32_t);
+ 	memsize += rows * sizeof(char32_t *);
+-	p = vmalloc(memsize);
++	p = vzalloc(memsize);
+ 	if (!p)
+ 		return NULL;
+ 
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index d5056cc34974a..f120da442d43d 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -2293,11 +2293,16 @@ static int cdns3_gadget_ep_enable(struct usb_ep *ep,
+ 	int ret = 0;
+ 	int val;
+ 
++	if (!ep) {
++		pr_debug("usbss: ep not configured?\n");
++		return -EINVAL;
++	}
++
+ 	priv_ep = ep_to_cdns3_ep(ep);
+ 	priv_dev = priv_ep->cdns3_dev;
+ 	comp_desc = priv_ep->endpoint.comp_desc;
+ 
+-	if (!ep || !desc || desc->bDescriptorType != USB_DT_ENDPOINT) {
++	if (!desc || desc->bDescriptorType != USB_DT_ENDPOINT) {
+ 		dev_dbg(priv_dev->dev, "usbss: invalid parameters\n");
+ 		return -EINVAL;
+ 	}
+@@ -2609,7 +2614,7 @@ int cdns3_gadget_ep_dequeue(struct usb_ep *ep,
+ 			    struct usb_request *request)
+ {
+ 	struct cdns3_endpoint *priv_ep = ep_to_cdns3_ep(ep);
+-	struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
++	struct cdns3_device *priv_dev;
+ 	struct usb_request *req, *req_temp;
+ 	struct cdns3_request *priv_req;
+ 	struct cdns3_trb *link_trb;
+@@ -2620,6 +2625,8 @@ int cdns3_gadget_ep_dequeue(struct usb_ep *ep,
+ 	if (!ep || !request || !ep->desc)
+ 		return -EINVAL;
+ 
++	priv_dev = priv_ep->cdns3_dev;
++
+ 	spin_lock_irqsave(&priv_dev->lock, flags);
+ 
+ 	priv_req = to_cdns3_request(request);
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index bf5e376676977..ac347f9d5ef0b 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -1692,7 +1692,6 @@ static void usb_giveback_urb_bh(struct tasklet_struct *t)
+ 
+ 	spin_lock_irq(&bh->lock);
+ 	bh->running = true;
+- restart:
+ 	list_replace_init(&bh->head, &local_list);
+ 	spin_unlock_irq(&bh->lock);
+ 
+@@ -1706,10 +1705,17 @@ static void usb_giveback_urb_bh(struct tasklet_struct *t)
+ 		bh->completing_ep = NULL;
+ 	}
+ 
+-	/* check if there are new URBs to giveback */
++	/*
++	 * giveback new URBs next time to prevent this function
++	 * from not exiting for a long time.
++	 */
+ 	spin_lock_irq(&bh->lock);
+-	if (!list_empty(&bh->head))
+-		goto restart;
++	if (!list_empty(&bh->head)) {
++		if (bh->high_prio)
++			tasklet_hi_schedule(&bh->bh);
++		else
++			tasklet_schedule(&bh->bh);
++	}
+ 	bh->running = false;
+ 	spin_unlock_irq(&bh->lock);
+ }
+@@ -1734,7 +1740,7 @@ static void usb_giveback_urb_bh(struct tasklet_struct *t)
+ void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb, int status)
+ {
+ 	struct giveback_urb_bh *bh;
+-	bool running, high_prio_bh;
++	bool running;
+ 
+ 	/* pass status to tasklet via unlinked */
+ 	if (likely(!urb->unlinked))
+@@ -1745,13 +1751,10 @@ void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb, int status)
+ 		return;
+ 	}
+ 
+-	if (usb_pipeisoc(urb->pipe) || usb_pipeint(urb->pipe)) {
++	if (usb_pipeisoc(urb->pipe) || usb_pipeint(urb->pipe))
+ 		bh = &hcd->high_prio_bh;
+-		high_prio_bh = true;
+-	} else {
++	else
+ 		bh = &hcd->low_prio_bh;
+-		high_prio_bh = false;
+-	}
+ 
+ 	spin_lock(&bh->lock);
+ 	list_add_tail(&urb->urb_list, &bh->head);
+@@ -1760,7 +1763,7 @@ void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb, int status)
+ 
+ 	if (running)
+ 		;
+-	else if (high_prio_bh)
++	else if (bh->high_prio)
+ 		tasklet_hi_schedule(&bh->bh);
+ 	else
+ 		tasklet_schedule(&bh->bh);
+@@ -2800,6 +2803,7 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ 
+ 	/* initialize tasklets */
+ 	init_giveback_urb_bh(&hcd->high_prio_bh);
++	hcd->high_prio_bh.high_prio = true;
+ 	init_giveback_urb_bh(&hcd->low_prio_bh);
+ 
+ 	/* enable irqs just before we start the controller,
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index d97da7cef8679..572cf34459aa7 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -158,8 +158,13 @@ static void __dwc3_set_mode(struct work_struct *work)
+ 		break;
+ 	}
+ 
+-	/* For DRD host or device mode only */
+-	if (dwc->desired_dr_role != DWC3_GCTL_PRTCAP_OTG) {
++	/*
++	 * When current_dr_role is not set, there's no role switching.
++	 * Only perform GCTL.CoreSoftReset when there's DRD role switching.
++	 */
++	if (dwc->current_dr_role && ((DWC3_IP_IS(DWC3) ||
++			DWC3_VER_IS_PRIOR(DWC31, 190A)) &&
++			dwc->desired_dr_role != DWC3_GCTL_PRTCAP_OTG)) {
+ 		reg = dwc3_readl(dwc->regs, DWC3_GCTL);
+ 		reg |= DWC3_GCTL_CORESOFTRESET;
+ 		dwc3_writel(dwc->regs, DWC3_GCTL, reg);
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 504f8af4d0f80..915fa4197d770 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -443,9 +443,9 @@ static int dwc3_qcom_get_irq(struct platform_device *pdev,
+ 	int ret;
+ 
+ 	if (np)
+-		ret = platform_get_irq_byname(pdev_irq, name);
++		ret = platform_get_irq_byname_optional(pdev_irq, name);
+ 	else
+-		ret = platform_get_irq(pdev_irq, num);
++		ret = platform_get_irq_optional(pdev_irq, num);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 94e9d336855bc..a2a10c05ef3fb 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -970,17 +970,49 @@ static u32 dwc3_calc_trbs_left(struct dwc3_ep *dep)
+ 	return trbs_left;
+ }
+ 
+-static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb,
+-		dma_addr_t dma, unsigned int length, unsigned int chain,
+-		unsigned int node, unsigned int stream_id,
+-		unsigned int short_not_ok, unsigned int no_interrupt,
+-		unsigned int is_last, bool must_interrupt)
++/**
++ * dwc3_prepare_one_trb - setup one TRB from one request
++ * @dep: endpoint for which this request is prepared
++ * @req: dwc3_request pointer
++ * @trb_length: buffer size of the TRB
++ * @chain: should this TRB be chained to the next?
++ * @node: only for isochronous endpoints. First TRB needs different type.
++ * @use_bounce_buffer: set to use bounce buffer
++ * @must_interrupt: set to interrupt on TRB completion
++ */
++static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
++		struct dwc3_request *req, unsigned int trb_length,
++		unsigned int chain, unsigned int node, bool use_bounce_buffer,
++		bool must_interrupt)
+ {
++	struct dwc3_trb		*trb;
++	dma_addr_t		dma;
++	unsigned int		stream_id = req->request.stream_id;
++	unsigned int		short_not_ok = req->request.short_not_ok;
++	unsigned int		no_interrupt = req->request.no_interrupt;
++	unsigned int		is_last = req->request.is_last;
+ 	struct dwc3		*dwc = dep->dwc;
+ 	struct usb_gadget	*gadget = dwc->gadget;
+ 	enum usb_device_speed	speed = gadget->speed;
+ 
+-	trb->size = DWC3_TRB_SIZE_LENGTH(length);
++	if (use_bounce_buffer)
++		dma = dep->dwc->bounce_addr;
++	else if (req->request.num_sgs > 0)
++		dma = sg_dma_address(req->start_sg);
++	else
++		dma = req->request.dma;
++
++	trb = &dep->trb_pool[dep->trb_enqueue];
++
++	if (!req->trb) {
++		dwc3_gadget_move_started_request(req);
++		req->trb = trb;
++		req->trb_dma = dwc3_trb_dma_offset(dep, trb);
++	}
++
++	req->num_trbs++;
++
++	trb->size = DWC3_TRB_SIZE_LENGTH(trb_length);
+ 	trb->bpl = lower_32_bits(dma);
+ 	trb->bph = upper_32_bits(dma);
+ 
+@@ -1020,10 +1052,10 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb,
+ 				unsigned int mult = 2;
+ 				unsigned int maxp = usb_endpoint_maxp(ep->desc);
+ 
+-				if (length <= (2 * maxp))
++				if (req->request.length <= (2 * maxp))
+ 					mult--;
+ 
+-				if (length <= maxp)
++				if (req->request.length <= maxp)
+ 					mult--;
+ 
+ 				trb->size |= DWC3_TRB_SIZE_PCM1(mult);
+@@ -1092,50 +1124,6 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb,
+ 	trace_dwc3_prepare_trb(dep, trb);
+ }
+ 
+-/**
+- * dwc3_prepare_one_trb - setup one TRB from one request
+- * @dep: endpoint for which this request is prepared
+- * @req: dwc3_request pointer
+- * @trb_length: buffer size of the TRB
+- * @chain: should this TRB be chained to the next?
+- * @node: only for isochronous endpoints. First TRB needs different type.
+- * @use_bounce_buffer: set to use bounce buffer
+- * @must_interrupt: set to interrupt on TRB completion
+- */
+-static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
+-		struct dwc3_request *req, unsigned int trb_length,
+-		unsigned int chain, unsigned int node, bool use_bounce_buffer,
+-		bool must_interrupt)
+-{
+-	struct dwc3_trb		*trb;
+-	dma_addr_t		dma;
+-	unsigned int		stream_id = req->request.stream_id;
+-	unsigned int		short_not_ok = req->request.short_not_ok;
+-	unsigned int		no_interrupt = req->request.no_interrupt;
+-	unsigned int		is_last = req->request.is_last;
+-
+-	if (use_bounce_buffer)
+-		dma = dep->dwc->bounce_addr;
+-	else if (req->request.num_sgs > 0)
+-		dma = sg_dma_address(req->start_sg);
+-	else
+-		dma = req->request.dma;
+-
+-	trb = &dep->trb_pool[dep->trb_enqueue];
+-
+-	if (!req->trb) {
+-		dwc3_gadget_move_started_request(req);
+-		req->trb = trb;
+-		req->trb_dma = dwc3_trb_dma_offset(dep, trb);
+-	}
+-
+-	req->num_trbs++;
+-
+-	__dwc3_prepare_one_trb(dep, trb, dma, trb_length, chain, node,
+-			stream_id, short_not_ok, no_interrupt, is_last,
+-			must_interrupt);
+-}
+-
+ static bool dwc3_needs_extra_trb(struct dwc3_ep *dep, struct dwc3_request *req)
+ {
+ 	unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);
+diff --git a/drivers/usb/gadget/udc/Kconfig b/drivers/usb/gadget/udc/Kconfig
+index 933e80d5053ac..f28e1bbd57240 100644
+--- a/drivers/usb/gadget/udc/Kconfig
++++ b/drivers/usb/gadget/udc/Kconfig
+@@ -311,7 +311,7 @@ source "drivers/usb/gadget/udc/bdc/Kconfig"
+ 
+ config USB_AMD5536UDC
+ 	tristate "AMD5536 UDC"
+-	depends on USB_PCI
++	depends on USB_PCI && HAS_DMA
+ 	select USB_SNP_CORE
+ 	help
+ 	   The AMD5536 UDC is part of the AMD Geode CS5536, an x86 southbridge.
+diff --git a/drivers/usb/gadget/udc/aspeed-vhub/hub.c b/drivers/usb/gadget/udc/aspeed-vhub/hub.c
+index bfd8e77788e29..3a4ccc722db53 100644
+--- a/drivers/usb/gadget/udc/aspeed-vhub/hub.c
++++ b/drivers/usb/gadget/udc/aspeed-vhub/hub.c
+@@ -1033,8 +1033,10 @@ static int ast_vhub_init_desc(struct ast_vhub *vhub)
+ 	/* Initialize vhub String Descriptors. */
+ 	INIT_LIST_HEAD(&vhub->vhub_str_desc);
+ 	desc_np = of_get_child_by_name(vhub_np, "vhub-strings");
+-	if (desc_np)
++	if (desc_np) {
+ 		ret = ast_vhub_of_parse_str_desc(vhub, desc_np);
++		of_node_put(desc_np);
++	}
+ 	else
+ 		ret = ast_vhub_str_alloc_add(vhub, &ast_vhub_strings);
+ 
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index de178bf264c21..3ebc8c5416e30 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -3693,15 +3693,15 @@ static int tegra_xudc_powerdomain_init(struct tegra_xudc *xudc)
+ 	int err;
+ 
+ 	xudc->genpd_dev_device = dev_pm_domain_attach_by_name(dev, "dev");
+-	if (IS_ERR(xudc->genpd_dev_device)) {
+-		err = PTR_ERR(xudc->genpd_dev_device);
++	if (IS_ERR_OR_NULL(xudc->genpd_dev_device)) {
++		err = PTR_ERR(xudc->genpd_dev_device) ? : -ENODATA;
+ 		dev_err(dev, "failed to get device power domain: %d\n", err);
+ 		return err;
+ 	}
+ 
+ 	xudc->genpd_dev_ss = dev_pm_domain_attach_by_name(dev, "ss");
+-	if (IS_ERR(xudc->genpd_dev_ss)) {
+-		err = PTR_ERR(xudc->genpd_dev_ss);
++	if (IS_ERR_OR_NULL(xudc->genpd_dev_ss)) {
++		err = PTR_ERR(xudc->genpd_dev_ss) ? : -ENODATA;
+ 		dev_err(dev, "failed to get SuperSpeed power domain: %d\n", err);
+ 		return err;
+ 	}
+diff --git a/drivers/usb/host/ehci-ppc-of.c b/drivers/usb/host/ehci-ppc-of.c
+index 6bbaee74f7e7d..28a19693c19fe 100644
+--- a/drivers/usb/host/ehci-ppc-of.c
++++ b/drivers/usb/host/ehci-ppc-of.c
+@@ -148,6 +148,7 @@ static int ehci_hcd_ppc_of_probe(struct platform_device *op)
+ 		} else {
+ 			ehci->has_amcc_usb23 = 1;
+ 		}
++		of_node_put(np);
+ 	}
+ 
+ 	if (of_get_property(dn, "big-endian", NULL)) {
+diff --git a/drivers/usb/host/ohci-nxp.c b/drivers/usb/host/ohci-nxp.c
+index 85878e8ad3311..106a6bcefb087 100644
+--- a/drivers/usb/host/ohci-nxp.c
++++ b/drivers/usb/host/ohci-nxp.c
+@@ -164,6 +164,7 @@ static int ohci_hcd_nxp_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	isp1301_i2c_client = isp1301_get_client(isp1301_node);
++	of_node_put(isp1301_node);
+ 	if (!isp1301_i2c_client)
+ 		return -EPROBE_DEFER;
+ 
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 50bb91b6a4b8d..246a3d274142b 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -1042,15 +1042,15 @@ static int tegra_xusb_powerdomain_init(struct device *dev,
+ 	int err;
+ 
+ 	tegra->genpd_dev_host = dev_pm_domain_attach_by_name(dev, "xusb_host");
+-	if (IS_ERR(tegra->genpd_dev_host)) {
+-		err = PTR_ERR(tegra->genpd_dev_host);
++	if (IS_ERR_OR_NULL(tegra->genpd_dev_host)) {
++		err = PTR_ERR(tegra->genpd_dev_host) ? : -ENODATA;
+ 		dev_err(dev, "failed to get host pm-domain: %d\n", err);
+ 		return err;
+ 	}
+ 
+ 	tegra->genpd_dev_ss = dev_pm_domain_attach_by_name(dev, "xusb_ss");
+-	if (IS_ERR(tegra->genpd_dev_ss)) {
+-		err = PTR_ERR(tegra->genpd_dev_ss);
++	if (IS_ERR_OR_NULL(tegra->genpd_dev_ss)) {
++		err = PTR_ERR(tegra->genpd_dev_ss) ? : -ENODATA;
+ 		dev_err(dev, "failed to get superspeed pm-domain: %d\n", err);
+ 		return err;
+ 	}
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 0c66424b34ba9..f87e5fe57f225 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -2383,7 +2383,7 @@ static inline const char *xhci_decode_trb(char *str, size_t size,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_STOP_RING:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: slot %d sp %d ep %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			TRB_TO_SLOT_ID(field3),
+diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
+index 57fc3c31712e5..018a27d879b8e 100644
+--- a/drivers/usb/serial/sierra.c
++++ b/drivers/usb/serial/sierra.c
+@@ -737,7 +737,8 @@ static void sierra_close(struct usb_serial_port *port)
+ 
+ 	/*
+ 	 * Need to take susp_lock to make sure port is not already being
+-	 * resumed, but no need to hold it due to initialized
++	 * resumed, but no need to hold it due to the tty-port initialized
++	 * flag.
+ 	 */
+ 	spin_lock_irq(&intfdata->susp_lock);
+ 	if (--intfdata->open_ports == 0)
+diff --git a/drivers/usb/serial/usb-serial.c b/drivers/usb/serial/usb-serial.c
+index 27e3bb58c872e..e8dd4603b201e 100644
+--- a/drivers/usb/serial/usb-serial.c
++++ b/drivers/usb/serial/usb-serial.c
+@@ -254,7 +254,7 @@ static int serial_open(struct tty_struct *tty, struct file *filp)
+  *
+  * Shut down a USB serial port. Serialized against activate by the
+  * tport mutex and kept to matching open/close pairs
+- * of calls by the initialized flag.
++ * of calls by the tty-port initialized flag.
+  *
+  * Not called if tty is console.
+  */
+diff --git a/drivers/usb/serial/usb_wwan.c b/drivers/usb/serial/usb_wwan.c
+index b2285d5a869de..628a75d1232ae 100644
+--- a/drivers/usb/serial/usb_wwan.c
++++ b/drivers/usb/serial/usb_wwan.c
+@@ -435,7 +435,8 @@ void usb_wwan_close(struct usb_serial_port *port)
+ 
+ 	/*
+ 	 * Need to take susp_lock to make sure port is not already being
+-	 * resumed, but no need to hold it due to initialized
++	 * resumed, but no need to hold it due to the tty-port initialized
++	 * flag.
+ 	 */
+ 	spin_lock_irq(&intfdata->susp_lock);
+ 	if (--intfdata->open_ports == 0)
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index dfda8f5487c09..0c16e99807365 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -76,6 +76,10 @@ static int ucsi_read_error(struct ucsi *ucsi)
+ 	if (ret)
+ 		return ret;
+ 
++	ret = ucsi_acknowledge_command(ucsi);
++	if (ret)
++		return ret;
++
+ 	switch (error) {
+ 	case UCSI_ERROR_INCOMPATIBLE_PARTNER:
+ 		return -EOPNOTSUPP;
+diff --git a/drivers/vfio/mdev/mdev_private.h b/drivers/vfio/mdev/mdev_private.h
+index 7d922950caaf3..74c2e54114699 100644
+--- a/drivers/vfio/mdev/mdev_private.h
++++ b/drivers/vfio/mdev/mdev_private.h
+@@ -35,7 +35,10 @@ struct mdev_device {
+ 	bool active;
+ };
+ 
+-#define to_mdev_device(dev)	container_of(dev, struct mdev_device, dev)
++static inline struct mdev_device *to_mdev_device(struct device *dev)
++{
++	return container_of(dev, struct mdev_device, dev);
++}
+ #define dev_is_mdev(d)		((d)->bus == &mdev_bus_type)
+ 
+ struct mdev_type {
+diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
+index 2151bc7f87ab1..f886f2db8153e 100644
+--- a/drivers/vfio/vfio.c
++++ b/drivers/vfio/vfio.c
+@@ -46,7 +46,6 @@ static struct vfio {
+ 	struct mutex			group_lock;
+ 	struct cdev			group_cdev;
+ 	dev_t				group_devt;
+-	wait_queue_head_t		release_q;
+ } vfio;
+ 
+ struct vfio_iommu_driver {
+@@ -90,15 +89,6 @@ struct vfio_group {
+ 	struct blocking_notifier_head	notifier;
+ };
+ 
+-struct vfio_device {
+-	struct kref			kref;
+-	struct device			*dev;
+-	const struct vfio_device_ops	*ops;
+-	struct vfio_group		*group;
+-	struct list_head		group_next;
+-	void				*device_data;
+-};
+-
+ #ifdef CONFIG_VFIO_NOIOMMU
+ static bool noiommu __read_mostly;
+ module_param_named(enable_unsafe_noiommu_mode,
+@@ -532,67 +522,17 @@ static struct vfio_group *vfio_group_get_from_dev(struct device *dev)
+ /**
+  * Device objects - create, release, get, put, search
+  */
+-static
+-struct vfio_device *vfio_group_create_device(struct vfio_group *group,
+-					     struct device *dev,
+-					     const struct vfio_device_ops *ops,
+-					     void *device_data)
+-{
+-	struct vfio_device *device;
+-
+-	device = kzalloc(sizeof(*device), GFP_KERNEL);
+-	if (!device)
+-		return ERR_PTR(-ENOMEM);
+-
+-	kref_init(&device->kref);
+-	device->dev = dev;
+-	device->group = group;
+-	device->ops = ops;
+-	device->device_data = device_data;
+-	dev_set_drvdata(dev, device);
+-
+-	/* No need to get group_lock, caller has group reference */
+-	vfio_group_get(group);
+-
+-	mutex_lock(&group->device_lock);
+-	list_add(&device->group_next, &group->device_list);
+-	group->dev_counter++;
+-	mutex_unlock(&group->device_lock);
+-
+-	return device;
+-}
+-
+-static void vfio_device_release(struct kref *kref)
+-{
+-	struct vfio_device *device = container_of(kref,
+-						  struct vfio_device, kref);
+-	struct vfio_group *group = device->group;
+-
+-	list_del(&device->group_next);
+-	group->dev_counter--;
+-	mutex_unlock(&group->device_lock);
+-
+-	dev_set_drvdata(device->dev, NULL);
+-
+-	kfree(device);
+-
+-	/* vfio_del_group_dev may be waiting for this device */
+-	wake_up(&vfio.release_q);
+-}
+-
+ /* Device reference always implies a group reference */
+ void vfio_device_put(struct vfio_device *device)
+ {
+-	struct vfio_group *group = device->group;
+-	kref_put_mutex(&device->kref, vfio_device_release, &group->device_lock);
+-	vfio_group_put(group);
++	if (refcount_dec_and_test(&device->refcount))
++		complete(&device->comp);
+ }
+ EXPORT_SYMBOL_GPL(vfio_device_put);
+ 
+-static void vfio_device_get(struct vfio_device *device)
++static bool vfio_device_try_get(struct vfio_device *device)
+ {
+-	vfio_group_get(device->group);
+-	kref_get(&device->kref);
++	return refcount_inc_not_zero(&device->refcount);
+ }
+ 
+ static struct vfio_device *vfio_group_get_device(struct vfio_group *group,
+@@ -602,8 +542,7 @@ static struct vfio_device *vfio_group_get_device(struct vfio_group *group,
+ 
+ 	mutex_lock(&group->device_lock);
+ 	list_for_each_entry(device, &group->device_list, group_next) {
+-		if (device->dev == dev) {
+-			vfio_device_get(device);
++		if (device->dev == dev && vfio_device_try_get(device)) {
+ 			mutex_unlock(&group->device_lock);
+ 			return device;
+ 		}
+@@ -801,14 +740,23 @@ static int vfio_iommu_group_notifier(struct notifier_block *nb,
+ /**
+  * VFIO driver API
+  */
+-int vfio_add_group_dev(struct device *dev,
+-		       const struct vfio_device_ops *ops, void *device_data)
++void vfio_init_group_dev(struct vfio_device *device, struct device *dev,
++			 const struct vfio_device_ops *ops, void *device_data)
+ {
++	init_completion(&device->comp);
++	device->dev = dev;
++	device->ops = ops;
++	device->device_data = device_data;
++}
++EXPORT_SYMBOL_GPL(vfio_init_group_dev);
++
++int vfio_register_group_dev(struct vfio_device *device)
++{
++	struct vfio_device *existing_device;
+ 	struct iommu_group *iommu_group;
+ 	struct vfio_group *group;
+-	struct vfio_device *device;
+ 
+-	iommu_group = iommu_group_get(dev);
++	iommu_group = iommu_group_get(device->dev);
+ 	if (!iommu_group)
+ 		return -EINVAL;
+ 
+@@ -827,30 +775,51 @@ int vfio_add_group_dev(struct device *dev,
+ 		iommu_group_put(iommu_group);
+ 	}
+ 
+-	device = vfio_group_get_device(group, dev);
+-	if (device) {
+-		dev_WARN(dev, "Device already exists on group %d\n",
++	existing_device = vfio_group_get_device(group, device->dev);
++	if (existing_device) {
++		dev_WARN(device->dev, "Device already exists on group %d\n",
+ 			 iommu_group_id(iommu_group));
+-		vfio_device_put(device);
++		vfio_device_put(existing_device);
+ 		vfio_group_put(group);
+ 		return -EBUSY;
+ 	}
+ 
+-	device = vfio_group_create_device(group, dev, ops, device_data);
+-	if (IS_ERR(device)) {
+-		vfio_group_put(group);
+-		return PTR_ERR(device);
+-	}
++	/* Our reference on group is moved to the device */
++	device->group = group;
+ 
+-	/*
+-	 * Drop all but the vfio_device reference.  The vfio_device holds
+-	 * a reference to the vfio_group, which holds a reference to the
+-	 * iommu_group.
+-	 */
+-	vfio_group_put(group);
++	/* Refcounting can't start until the driver calls register */
++	refcount_set(&device->refcount, 1);
++
++	mutex_lock(&group->device_lock);
++	list_add(&device->group_next, &group->device_list);
++	group->dev_counter++;
++	mutex_unlock(&group->device_lock);
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(vfio_register_group_dev);
++
++int vfio_add_group_dev(struct device *dev, const struct vfio_device_ops *ops,
++		       void *device_data)
++{
++	struct vfio_device *device;
++	int ret;
++
++	device = kzalloc(sizeof(*device), GFP_KERNEL);
++	if (!device)
++		return -ENOMEM;
++
++	vfio_init_group_dev(device, dev, ops, device_data);
++	ret = vfio_register_group_dev(device);
++	if (ret)
++		goto err_kfree;
++	dev_set_drvdata(dev, device);
++	return 0;
++
++err_kfree:
++	kfree(device);
++	return ret;
++}
+ EXPORT_SYMBOL_GPL(vfio_add_group_dev);
+ 
+ /**
+@@ -895,9 +864,8 @@ static struct vfio_device *vfio_device_get_from_name(struct vfio_group *group,
+ 			ret = !strcmp(dev_name(it->dev), buf);
+ 		}
+ 
+-		if (ret) {
++		if (ret && vfio_device_try_get(it)) {
+ 			device = it;
+-			vfio_device_get(device);
+ 			break;
+ 		}
+ 	}
+@@ -918,21 +886,13 @@ EXPORT_SYMBOL_GPL(vfio_device_data);
+ /*
+  * Decrement the device reference count and wait for the device to be
+  * removed.  Open file descriptors for the device... */
+-void *vfio_del_group_dev(struct device *dev)
++void vfio_unregister_group_dev(struct vfio_device *device)
+ {
+-	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+-	struct vfio_device *device = dev_get_drvdata(dev);
+ 	struct vfio_group *group = device->group;
+-	void *device_data = device->device_data;
+ 	struct vfio_unbound_dev *unbound;
+ 	unsigned int i = 0;
+ 	bool interrupted = false;
+-
+-	/*
+-	 * The group exists so long as we have a device reference.  Get
+-	 * a group reference and use it to scan for the device going away.
+-	 */
+-	vfio_group_get(group);
++	long rc;
+ 
+ 	/*
+ 	 * When the device is removed from the group, the group suddenly
+@@ -945,7 +905,7 @@ void *vfio_del_group_dev(struct device *dev)
+ 	 */
+ 	unbound = kzalloc(sizeof(*unbound), GFP_KERNEL);
+ 	if (unbound) {
+-		unbound->dev = dev;
++		unbound->dev = device->dev;
+ 		mutex_lock(&group->unbound_lock);
+ 		list_add(&unbound->unbound_next, &group->unbound_list);
+ 		mutex_unlock(&group->unbound_lock);
+@@ -953,44 +913,33 @@ void *vfio_del_group_dev(struct device *dev)
+ 	WARN_ON(!unbound);
+ 
+ 	vfio_device_put(device);
+-
+-	/*
+-	 * If the device is still present in the group after the above
+-	 * 'put', then it is in use and we need to request it from the
+-	 * bus driver.  The driver may in turn need to request the
+-	 * device from the user.  We send the request on an arbitrary
+-	 * interval with counter to allow the driver to take escalating
+-	 * measures to release the device if it has the ability to do so.
+-	 */
+-	add_wait_queue(&vfio.release_q, &wait);
+-
+-	do {
+-		device = vfio_group_get_device(group, dev);
+-		if (!device)
+-			break;
+-
++	rc = try_wait_for_completion(&device->comp);
++	while (rc <= 0) {
+ 		if (device->ops->request)
+-			device->ops->request(device_data, i++);
+-
+-		vfio_device_put(device);
++			device->ops->request(device->device_data, i++);
+ 
+ 		if (interrupted) {
+-			wait_woken(&wait, TASK_UNINTERRUPTIBLE, HZ * 10);
++			rc = wait_for_completion_timeout(&device->comp,
++							 HZ * 10);
+ 		} else {
+-			wait_woken(&wait, TASK_INTERRUPTIBLE, HZ * 10);
+-			if (signal_pending(current)) {
++			rc = wait_for_completion_interruptible_timeout(
++				&device->comp, HZ * 10);
++			if (rc < 0) {
+ 				interrupted = true;
+-				dev_warn(dev,
++				dev_warn(device->dev,
+ 					 "Device is currently in use, task"
+ 					 " \"%s\" (%d) "
+ 					 "blocked until device is released",
+ 					 current->comm, task_pid_nr(current));
+ 			}
+ 		}
++	}
+ 
+-	} while (1);
++	mutex_lock(&group->device_lock);
++	list_del(&device->group_next);
++	group->dev_counter--;
++	mutex_unlock(&group->device_lock);
+ 
+-	remove_wait_queue(&vfio.release_q, &wait);
+ 	/*
+ 	 * In order to support multiple devices per group, devices can be
+ 	 * plucked from the group while other devices in the group are still
+@@ -1008,8 +957,19 @@ void *vfio_del_group_dev(struct device *dev)
+ 	if (list_empty(&group->device_list))
+ 		wait_event(group->container_q, !group->container);
+ 
++	/* Matches the get in vfio_register_group_dev() */
+ 	vfio_group_put(group);
++}
++EXPORT_SYMBOL_GPL(vfio_unregister_group_dev);
+ 
++void *vfio_del_group_dev(struct device *dev)
++{
++	struct vfio_device *device = dev_get_drvdata(dev);
++	void *device_data = device->device_data;
++
++	vfio_unregister_group_dev(device);
++	dev_set_drvdata(dev, NULL);
++	kfree(device);
+ 	return device_data;
+ }
+ EXPORT_SYMBOL_GPL(vfio_del_group_dev);
+@@ -2356,7 +2316,6 @@ static int __init vfio_init(void)
+ 	mutex_init(&vfio.iommu_drivers_lock);
+ 	INIT_LIST_HEAD(&vfio.group_list);
+ 	INIT_LIST_HEAD(&vfio.iommu_drivers_list);
+-	init_waitqueue_head(&vfio.release_q);
+ 
+ 	ret = misc_register(&vfio_dev);
+ 	if (ret) {
+diff --git a/drivers/video/fbdev/amba-clcd.c b/drivers/video/fbdev/amba-clcd.c
+index 79efefd224f40..6252cd59673e9 100644
+--- a/drivers/video/fbdev/amba-clcd.c
++++ b/drivers/video/fbdev/amba-clcd.c
+@@ -711,16 +711,18 @@ static int clcdfb_of_init_display(struct clcd_fb *fb)
+ 		return -ENODEV;
+ 
+ 	panel = of_graph_get_remote_port_parent(endpoint);
+-	if (!panel)
+-		return -ENODEV;
++	if (!panel) {
++		err = -ENODEV;
++		goto out_endpoint_put;
++	}
+ 
+ 	err = clcdfb_of_get_backlight(&fb->dev->dev, fb->panel);
+ 	if (err)
+-		return err;
++		goto out_panel_put;
+ 
+ 	err = clcdfb_of_get_mode(&fb->dev->dev, panel, fb->panel);
+ 	if (err)
+-		return err;
++		goto out_panel_put;
+ 
+ 	err = of_property_read_u32(fb->dev->dev.of_node, "max-memory-bandwidth",
+ 			&max_bandwidth);
+@@ -749,11 +751,21 @@ static int clcdfb_of_init_display(struct clcd_fb *fb)
+ 
+ 	if (of_property_read_u32_array(endpoint,
+ 			"arm,pl11x,tft-r0g0b0-pads",
+-			tft_r0b0g0, ARRAY_SIZE(tft_r0b0g0)) != 0)
+-		return -ENOENT;
++			tft_r0b0g0, ARRAY_SIZE(tft_r0b0g0)) != 0) {
++		err = -ENOENT;
++		goto out_panel_put;
++	}
++
++	of_node_put(panel);
++	of_node_put(endpoint);
+ 
+ 	return clcdfb_of_init_tft_panel(fb, tft_r0b0g0[0],
+ 					tft_r0b0g0[1],  tft_r0b0g0[2]);
++out_panel_put:
++	of_node_put(panel);
++out_endpoint_put:
++	of_node_put(endpoint);
++	return err;
+ }
+ 
+ static int clcdfb_of_vram_setup(struct clcd_fb *fb)
+diff --git a/drivers/video/fbdev/arkfb.c b/drivers/video/fbdev/arkfb.c
+index edf169d0816e6..8d092b1064706 100644
+--- a/drivers/video/fbdev/arkfb.c
++++ b/drivers/video/fbdev/arkfb.c
+@@ -778,7 +778,12 @@ static int arkfb_set_par(struct fb_info *info)
+ 		return -EINVAL;
+ 	}
+ 
+-	ark_set_pixclock(info, (hdiv * info->var.pixclock) / hmul);
++	value = (hdiv * info->var.pixclock) / hmul;
++	if (!value) {
++		fb_dbg(info, "invalid pixclock\n");
++		value = 1;
++	}
++	ark_set_pixclock(info, value);
+ 	svga_set_timings(par->state.vgabase, &ark_timing_regs, &(info->var), hmul, hdiv,
+ 			 (info->var.vmode & FB_VMODE_DOUBLE)     ? 2 : 1,
+ 			 (info->var.vmode & FB_VMODE_INTERLACED) ? 2 : 1,
+@@ -789,6 +794,8 @@ static int arkfb_set_par(struct fb_info *info)
+ 	value = ((value * hmul / hdiv) / 8) - 5;
+ 	vga_wcrt(par->state.vgabase, 0x42, (value + 1) / 2);
+ 
++	if (screen_size > info->screen_size)
++		screen_size = info->screen_size;
+ 	memset_io(info->screen_base, 0x00, screen_size);
+ 	/* Device and screen back on */
+ 	svga_wcrt_mask(par->state.vgabase, 0x17, 0x80, 0x80);
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 76fedfd1b1b0a..2618d3beef649 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -123,8 +123,8 @@ static int logo_lines;
+    enums.  */
+ static int logo_shown = FBCON_LOGO_CANSHOW;
+ /* console mappings */
+-static int first_fb_vc;
+-static int last_fb_vc = MAX_NR_CONSOLES - 1;
++static unsigned int first_fb_vc;
++static unsigned int last_fb_vc = MAX_NR_CONSOLES - 1;
+ static int fbcon_is_default = 1; 
+ static int primary_device = -1;
+ static int fbcon_has_console_bind;
+@@ -472,10 +472,12 @@ static int __init fb_console_setup(char *this_opt)
+ 			options += 3;
+ 			if (*options)
+ 				first_fb_vc = simple_strtoul(options, &options, 10) - 1;
+-			if (first_fb_vc < 0)
++			if (first_fb_vc >= MAX_NR_CONSOLES)
+ 				first_fb_vc = 0;
+ 			if (*options++ == '-')
+ 				last_fb_vc = simple_strtoul(options, &options, 10) - 1;
++			if (last_fb_vc < first_fb_vc || last_fb_vc >= MAX_NR_CONSOLES)
++				last_fb_vc = MAX_NR_CONSOLES - 1;
+ 			fbcon_is_default = 0; 
+ 			continue;
+ 		}
+@@ -1717,8 +1719,6 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ 	case SM_UP:
+ 		if (count > vc->vc_rows)	/* Maximum realistic size */
+ 			count = vc->vc_rows;
+-		if (logo_shown >= 0)
+-			goto redraw_up;
+ 		switch (fb_scrollmode(p)) {
+ 		case SCROLL_MOVE:
+ 			fbcon_redraw_blit(vc, info, p, t, b - t - count,
+@@ -1807,8 +1807,6 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ 	case SM_DOWN:
+ 		if (count > vc->vc_rows)	/* Maximum realistic size */
+ 			count = vc->vc_rows;
+-		if (logo_shown >= 0)
+-			goto redraw_down;
+ 		switch (fb_scrollmode(p)) {
+ 		case SCROLL_MOVE:
+ 			fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
+diff --git a/drivers/video/fbdev/s3fb.c b/drivers/video/fbdev/s3fb.c
+index 5c74253e7b2c0..a936455a3df2a 100644
+--- a/drivers/video/fbdev/s3fb.c
++++ b/drivers/video/fbdev/s3fb.c
+@@ -902,6 +902,8 @@ static int s3fb_set_par(struct fb_info *info)
+ 	value = clamp((htotal + hsstart + 1) / 2 + 2, hsstart + 4, htotal + 1);
+ 	svga_wcrt_multi(par->state.vgabase, s3_dtpc_regs, value);
+ 
++	if (screen_size > info->screen_size)
++		screen_size = info->screen_size;
+ 	memset_io(info->screen_base, 0x00, screen_size);
+ 	/* Device and screen back on */
+ 	svga_wcrt_mask(par->state.vgabase, 0x17, 0x80, 0x80);
+diff --git a/drivers/video/fbdev/sis/init.c b/drivers/video/fbdev/sis/init.c
+index fde27feae5d0c..d6b2ce95a8594 100644
+--- a/drivers/video/fbdev/sis/init.c
++++ b/drivers/video/fbdev/sis/init.c
+@@ -355,12 +355,12 @@ SiS_GetModeID(int VGAEngine, unsigned int VBFlags, int HDisplay, int VDisplay,
+ 		}
+ 		break;
+ 	case 400:
+-		if((!(VBFlags & CRT1_LCDA)) || ((LCDwidth >= 800) && (LCDwidth >= 600))) {
++		if((!(VBFlags & CRT1_LCDA)) || ((LCDwidth >= 800) && (LCDheight >= 600))) {
+ 			if(VDisplay == 300) ModeIndex = ModeIndex_400x300[Depth];
+ 		}
+ 		break;
+ 	case 512:
+-		if((!(VBFlags & CRT1_LCDA)) || ((LCDwidth >= 1024) && (LCDwidth >= 768))) {
++		if((!(VBFlags & CRT1_LCDA)) || ((LCDwidth >= 1024) && (LCDheight >= 768))) {
+ 			if(VDisplay == 384) ModeIndex = ModeIndex_512x384[Depth];
+ 		}
+ 		break;
+diff --git a/drivers/video/fbdev/vt8623fb.c b/drivers/video/fbdev/vt8623fb.c
+index 7a959e5ba90b8..c274ec5e965ca 100644
+--- a/drivers/video/fbdev/vt8623fb.c
++++ b/drivers/video/fbdev/vt8623fb.c
+@@ -504,6 +504,8 @@ static int vt8623fb_set_par(struct fb_info *info)
+ 			 (info->var.vmode & FB_VMODE_DOUBLE) ? 2 : 1, 1,
+ 			 1, info->node);
+ 
++	if (screen_size > info->screen_size)
++		screen_size = info->screen_size;
+ 	memset_io(info->screen_base, 0x00, screen_size);
+ 
+ 	/* Device and screen back on */
+diff --git a/drivers/watchdog/armada_37xx_wdt.c b/drivers/watchdog/armada_37xx_wdt.c
+index e5dcb26d85f0a..dcb3ffda3fad4 100644
+--- a/drivers/watchdog/armada_37xx_wdt.c
++++ b/drivers/watchdog/armada_37xx_wdt.c
+@@ -274,6 +274,8 @@ static int armada_37xx_wdt_probe(struct platform_device *pdev)
+ 	if (!res)
+ 		return -ENODEV;
+ 	dev->reg = devm_ioremap(&pdev->dev, res->start, resource_size(res));
++	if (!dev->reg)
++		return -ENOMEM;
+ 
+ 	/* init clock */
+ 	dev->clk = devm_clk_get(&pdev->dev, NULL);
+diff --git a/fs/attr.c b/fs/attr.c
+index b4bbdbd4c8ca0..848ffe6e3c24b 100644
+--- a/fs/attr.c
++++ b/fs/attr.c
+@@ -134,6 +134,8 @@ EXPORT_SYMBOL(setattr_prepare);
+  */
+ int inode_newsize_ok(const struct inode *inode, loff_t offset)
+ {
++	if (offset < 0)
++		return -EINVAL;
+ 	if (inode->i_size < offset) {
+ 		unsigned long limit;
+ 
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index e351f53199505..889a598b17f6b 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -3126,6 +3126,7 @@ int btrfs_chunk_alloc(struct btrfs_trans_handle *trans, u64 flags,
+ 			 * attempt.
+ 			 */
+ 			wait_for_alloc = true;
++			force = CHUNK_ALLOC_NO_FORCE;
+ 			spin_unlock(&space_info->lock);
+ 			mutex_lock(&fs_info->chunk_mutex);
+ 			mutex_unlock(&fs_info->chunk_mutex);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 35acdab56a1c9..2c7e50980a706 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3104,6 +3104,20 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 		err = -EINVAL;
+ 		goto fail_alloc;
+ 	}
++	/*
++	 * We have unsupported RO compat features, although RO mounted, we
++	 * should not cause any metadata write, including log replay.
++	 * Or we could screw up whatever the new feature requires.
++	 */
++	if (unlikely(features && btrfs_super_log_root(disk_super) &&
++		     !btrfs_test_opt(fs_info, NOLOGREPLAY))) {
++		btrfs_err(fs_info,
++"cannot replay dirty log with unsupported compat_ro features (0x%llx), try rescue=nologreplay",
++			  features);
++		err = -EINVAL;
++		goto fail_alloc;
++	}
++
+ 
+ 	ret = btrfs_init_workqueues(fs_info, fs_devices);
+ 	if (ret) {
+diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
+index e65d0fabb83e5..9678d7fa4dcc9 100644
+--- a/fs/btrfs/raid56.c
++++ b/fs/btrfs/raid56.c
+@@ -332,6 +332,9 @@ static void merge_rbio(struct btrfs_raid_bio *dest,
+ {
+ 	bio_list_merge(&dest->bio_list, &victim->bio_list);
+ 	dest->bio_list_bytes += victim->bio_list_bytes;
++	/* Also inherit the bitmaps from @victim. */
++	bitmap_or(dest->dbitmap, victim->dbitmap, dest->dbitmap,
++		  dest->stripe_npages);
+ 	dest->generic_bio_cnt += victim->generic_bio_cnt;
+ 	bio_list_init(&victim->bio_list);
+ }
+@@ -874,6 +877,12 @@ static void rbio_orig_end_io(struct btrfs_raid_bio *rbio, blk_status_t err)
+ 
+ 	if (rbio->generic_bio_cnt)
+ 		btrfs_bio_counter_sub(rbio->fs_info, rbio->generic_bio_cnt);
++	/*
++	 * Clear the data bitmap, as the rbio may be cached for later usage.
++	 * do this before before unlock_stripe() so there will be no new bio
++	 * for this bio.
++	 */
++	bitmap_clear(rbio->dbitmap, 0, rbio->stripe_npages);
+ 
+ 	/*
+ 	 * At this moment, rbio->bio_list is empty, however since rbio does not
+@@ -1207,6 +1216,9 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
+ 	else
+ 		BUG();
+ 
++	/* We should have at least one data sector. */
++	ASSERT(bitmap_weight(rbio->dbitmap, rbio->stripe_npages));
++
+ 	/* at this point we either have a full stripe,
+ 	 * or we've read the full stripe from the drive.
+ 	 * recalculate the parity and write the new results.
+@@ -1280,6 +1292,11 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
+ 	for (stripe = 0; stripe < rbio->real_stripes; stripe++) {
+ 		for (pagenr = 0; pagenr < rbio->stripe_npages; pagenr++) {
+ 			struct page *page;
++
++			/* This vertical stripe has no data, skip it. */
++			if (!test_bit(pagenr, rbio->dbitmap))
++				continue;
++
+ 			if (stripe < rbio->nr_data) {
+ 				page = page_in_rbio(rbio, stripe, pagenr, 1);
+ 				if (!page)
+@@ -1304,6 +1321,11 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
+ 
+ 		for (pagenr = 0; pagenr < rbio->stripe_npages; pagenr++) {
+ 			struct page *page;
++
++			/* This vertical stripe has no data, skip it. */
++			if (!test_bit(pagenr, rbio->dbitmap))
++				continue;
++
+ 			if (stripe < rbio->nr_data) {
+ 				page = page_in_rbio(rbio, stripe, pagenr, 1);
+ 				if (!page)
+@@ -1729,6 +1751,33 @@ static void btrfs_raid_unplug(struct blk_plug_cb *cb, bool from_schedule)
+ 	run_plug(plug);
+ }
+ 
++/* Add the original bio into rbio->bio_list, and update rbio::dbitmap. */
++static void rbio_add_bio(struct btrfs_raid_bio *rbio, struct bio *orig_bio)
++{
++	const struct btrfs_fs_info *fs_info = rbio->fs_info;
++	const u64 orig_logical = orig_bio->bi_iter.bi_sector << SECTOR_SHIFT;
++	const u64 full_stripe_start = rbio->bbio->raid_map[0];
++	const u32 orig_len = orig_bio->bi_iter.bi_size;
++	const u32 sectorsize = fs_info->sectorsize;
++	u64 cur_logical;
++
++	ASSERT(orig_logical >= full_stripe_start &&
++	       orig_logical + orig_len <= full_stripe_start +
++	       rbio->nr_data * rbio->stripe_len);
++
++	bio_list_add(&rbio->bio_list, orig_bio);
++	rbio->bio_list_bytes += orig_bio->bi_iter.bi_size;
++
++	/* Update the dbitmap. */
++	for (cur_logical = orig_logical; cur_logical < orig_logical + orig_len;
++	     cur_logical += sectorsize) {
++		int bit = ((u32)(cur_logical - full_stripe_start) >>
++			   PAGE_SHIFT) % rbio->stripe_npages;
++
++		set_bit(bit, rbio->dbitmap);
++	}
++}
++
+ /*
+  * our main entry point for writes from the rest of the FS.
+  */
+@@ -1745,9 +1794,8 @@ int raid56_parity_write(struct btrfs_fs_info *fs_info, struct bio *bio,
+ 		btrfs_put_bbio(bbio);
+ 		return PTR_ERR(rbio);
+ 	}
+-	bio_list_add(&rbio->bio_list, bio);
+-	rbio->bio_list_bytes = bio->bi_iter.bi_size;
+ 	rbio->operation = BTRFS_RBIO_WRITE;
++	rbio_add_bio(rbio, bio);
+ 
+ 	btrfs_bio_counter_inc_noblocked(fs_info);
+ 	rbio->generic_bio_cnt = 1;
+@@ -2046,9 +2094,12 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio)
+ 	atomic_set(&rbio->error, 0);
+ 
+ 	/*
+-	 * read everything that hasn't failed.  Thanks to the
+-	 * stripe cache, it is possible that some or all of these
+-	 * pages are going to be uptodate.
++	 * Read everything that hasn't failed. However this time we will
++	 * not trust any cached sector.
++	 * As we may read out some stale data but higher layer is not reading
++	 * that stale part.
++	 *
++	 * So here we always re-read everything in recovery path.
+ 	 */
+ 	for (stripe = 0; stripe < rbio->real_stripes; stripe++) {
+ 		if (rbio->faila == stripe || rbio->failb == stripe) {
+@@ -2057,16 +2108,6 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio)
+ 		}
+ 
+ 		for (pagenr = 0; pagenr < rbio->stripe_npages; pagenr++) {
+-			struct page *p;
+-
+-			/*
+-			 * the rmw code may have already read this
+-			 * page in
+-			 */
+-			p = rbio_stripe_page(rbio, stripe, pagenr);
+-			if (PageUptodate(p))
+-				continue;
+-
+ 			ret = rbio_add_io_page(rbio, &bio_list,
+ 				       rbio_stripe_page(rbio, stripe, pagenr),
+ 				       stripe, pagenr, rbio->stripe_len);
+@@ -2144,8 +2185,7 @@ int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
+ 	}
+ 
+ 	rbio->operation = BTRFS_RBIO_READ_REBUILD;
+-	bio_list_add(&rbio->bio_list, bio);
+-	rbio->bio_list_bytes = bio->bi_iter.bi_size;
++	rbio_add_bio(rbio, bio);
+ 
+ 	rbio->faila = find_logical_bio_stripe(rbio, bio);
+ 	if (rbio->faila == -1) {
+diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
+index 8a6260aac26cb..f921580b56cbc 100644
+--- a/fs/erofs/decompressor.c
++++ b/fs/erofs/decompressor.c
+@@ -56,14 +56,18 @@ static int z_erofs_lz4_prepare_destpages(struct z_erofs_decompress_req *rq,
+ 
+ 		if (page) {
+ 			__clear_bit(j, bounced);
+-			if (kaddr) {
+-				if (kaddr + PAGE_SIZE == page_address(page))
++			if (!PageHighMem(page)) {
++				if (!i) {
++					kaddr = page_address(page);
++					continue;
++				}
++				if (kaddr &&
++				    kaddr + PAGE_SIZE == page_address(page)) {
+ 					kaddr += PAGE_SIZE;
+-				else
+-					kaddr = NULL;
+-			} else if (!i) {
+-				kaddr = page_address(page);
++					continue;
++				}
+ 			}
++			kaddr = NULL;
+ 			continue;
+ 		}
+ 		kaddr = NULL;
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 6094b2e9058b0..2f1f053157090 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1804,6 +1804,21 @@ static inline struct timespec64 ep_set_mstimeout(long ms)
+ 	return timespec64_add_safe(now, ts);
+ }
+ 
++/*
++ * autoremove_wake_function, but remove even on failure to wake up, because we
++ * know that default_wake_function/ttwu will only fail if the thread is already
++ * woken, and in that case the ep_poll loop will remove the entry anyways, not
++ * try to reuse it.
++ */
++static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
++				       unsigned int mode, int sync, void *key)
++{
++	int ret = default_wake_function(wq_entry, mode, sync, key);
++
++	list_del_init(&wq_entry->entry);
++	return ret;
++}
++
+ /**
+  * ep_poll - Retrieves ready events, and delivers them to the caller supplied
+  *           event buffer.
+@@ -1881,8 +1896,15 @@ fetch_events:
+ 		 * normal wakeup path no need to call __remove_wait_queue()
+ 		 * explicitly, thus ep->lock is not taken, which halts the
+ 		 * event delivery.
++		 *
++		 * In fact, we now use an even more aggressive function that
++		 * unconditionally removes, because we don't reuse the wait
++		 * entry between loop iterations. This lets us also avoid the
++		 * performance issue if a process is killed, causing all of its
++		 * threads to wake up without being removed normally.
+ 		 */
+ 		init_wait(&wait);
++		wait.func = ep_autoremove_wake_function;
+ 
+ 		write_lock_irq(&ep->lock);
+ 		/*
+diff --git a/fs/exec.c b/fs/exec.c
+index d37a82206fa31..b56bc4b4016e9 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1286,6 +1286,9 @@ int begin_new_exec(struct linux_binprm * bprm)
+ 	bprm->mm = NULL;
+ 
+ #ifdef CONFIG_POSIX_TIMERS
++	spin_lock_irq(&me->sighand->siglock);
++	posix_cpu_timers_exit(me);
++	spin_unlock_irq(&me->sighand->siglock);
+ 	exit_itimers(me);
+ 	flush_itimer_signals();
+ #endif
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index b6314d3c6a87d..9a6475b2ab28b 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -1060,9 +1060,10 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ 			sbi->s_frags_per_group);
+ 		goto failed_mount;
+ 	}
+-	if (sbi->s_inodes_per_group > sb->s_blocksize * 8) {
++	if (sbi->s_inodes_per_group < sbi->s_inodes_per_block ||
++	    sbi->s_inodes_per_group > sb->s_blocksize * 8) {
+ 		ext2_msg(sb, KERN_ERR,
+-			"error: #inodes per group too big: %lu",
++			"error: invalid #inodes per group: %lu",
+ 			sbi->s_inodes_per_group);
+ 		goto failed_mount;
+ 	}
+@@ -1072,6 +1073,13 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ 	sbi->s_groups_count = ((le32_to_cpu(es->s_blocks_count) -
+ 				le32_to_cpu(es->s_first_data_block) - 1)
+ 					/ EXT2_BLOCKS_PER_GROUP(sb)) + 1;
++	if ((u64)sbi->s_groups_count * sbi->s_inodes_per_group !=
++	    le32_to_cpu(es->s_inodes_count)) {
++		ext2_msg(sb, KERN_ERR, "error: invalid #inodes: %u vs computed %llu",
++			 le32_to_cpu(es->s_inodes_count),
++			 (u64)sbi->s_groups_count * sbi->s_inodes_per_group);
++		goto failed_mount;
++	}
+ 	db_count = (sbi->s_groups_count + EXT2_DESC_PER_BLOCK(sb) - 1) /
+ 		   EXT2_DESC_PER_BLOCK(sb);
+ 	sbi->s_group_desc = kmalloc_array (db_count,
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index fbad4180514c9..88bd1d1cca233 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -34,6 +34,9 @@ static int get_max_inline_xattr_value_size(struct inode *inode,
+ 	struct ext4_inode *raw_inode;
+ 	int free, min_offs;
+ 
++	if (!EXT4_INODE_HAS_XATTR_SPACE(inode))
++		return 0;
++
+ 	min_offs = EXT4_SB(inode->i_sb)->s_inode_size -
+ 			EXT4_GOOD_OLD_INODE_SIZE -
+ 			EXT4_I(inode)->i_extra_isize -
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index bd0d0a10ca429..44b6d061ed71c 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1577,7 +1577,14 @@ static void mpage_release_unused_pages(struct mpage_da_data *mpd,
+ 		ext4_lblk_t start, last;
+ 		start = index << (PAGE_SHIFT - inode->i_blkbits);
+ 		last = end << (PAGE_SHIFT - inode->i_blkbits);
++
++		/*
++		 * avoid racing with extent status tree scans made by
++		 * ext4_insert_delayed_block()
++		 */
++		down_write(&EXT4_I(inode)->i_data_sem);
+ 		ext4_es_remove_extent(inode, start, last - start + 1);
++		up_write(&EXT4_I(inode)->i_data_sem);
+ 	}
+ 
+ 	pagevec_init(&pvec);
+@@ -3219,13 +3226,15 @@ static sector_t ext4_bmap(struct address_space *mapping, sector_t block)
+ {
+ 	struct inode *inode = mapping->host;
+ 	journal_t *journal;
++	sector_t ret = 0;
+ 	int err;
+ 
++	inode_lock_shared(inode);
+ 	/*
+ 	 * We can get here for an inline file via the FIBMAP ioctl
+ 	 */
+ 	if (ext4_has_inline_data(inode))
+-		return 0;
++		goto out;
+ 
+ 	if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY) &&
+ 			test_opt(inode->i_sb, DELALLOC)) {
+@@ -3264,10 +3273,14 @@ static sector_t ext4_bmap(struct address_space *mapping, sector_t block)
+ 		jbd2_journal_unlock_updates(journal);
+ 
+ 		if (err)
+-			return 0;
++			goto out;
+ 	}
+ 
+-	return iomap_bmap(mapping, block, &ext4_iomap_ops);
++	ret = iomap_bmap(mapping, block, &ext4_iomap_ops);
++
++out:
++	inode_unlock_shared(inode);
++	return ret;
+ }
+ 
+ static int ext4_readpage(struct file *file, struct page *page)
+@@ -4600,8 +4613,7 @@ static inline int ext4_iget_extra_inode(struct inode *inode,
+ 	__le32 *magic = (void *)raw_inode +
+ 			EXT4_GOOD_OLD_INODE_SIZE + ei->i_extra_isize;
+ 
+-	if (EXT4_GOOD_OLD_INODE_SIZE + ei->i_extra_isize + sizeof(__le32) <=
+-	    EXT4_INODE_SIZE(inode->i_sb) &&
++	if (EXT4_INODE_HAS_XATTR_SPACE(inode)  &&
+ 	    *magic == cpu_to_le32(EXT4_XATTR_MAGIC)) {
+ 		ext4_set_inode_state(inode, EXT4_STATE_XATTR);
+ 		return ext4_find_inline_data_nolock(inode);
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index 49912814f3d8d..04320715d61f1 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -417,7 +417,7 @@ int ext4_ext_migrate(struct inode *inode)
+ 	struct inode *tmp_inode = NULL;
+ 	struct migrate_struct lb;
+ 	unsigned long max_entries;
+-	__u32 goal;
++	__u32 goal, tmp_csum_seed;
+ 	uid_t owner[2];
+ 
+ 	/*
+@@ -465,6 +465,7 @@ int ext4_ext_migrate(struct inode *inode)
+ 	 * the migration.
+ 	 */
+ 	ei = EXT4_I(inode);
++	tmp_csum_seed = EXT4_I(tmp_inode)->i_csum_seed;
+ 	EXT4_I(tmp_inode)->i_csum_seed = ei->i_csum_seed;
+ 	i_size_write(tmp_inode, i_size_read(inode));
+ 	/*
+@@ -575,6 +576,7 @@ err_out:
+ 	 * the inode is not visible to user space.
+ 	 */
+ 	tmp_inode->i_blocks = 0;
++	EXT4_I(tmp_inode)->i_csum_seed = tmp_csum_seed;
+ 
+ 	/* Reset the extent details */
+ 	ext4_ext_tree_init(handle, tmp_inode);
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 2c9ae72a1f5cb..afc20d32c9fd6 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -54,6 +54,7 @@ static struct buffer_head *ext4_append(handle_t *handle,
+ 					struct inode *inode,
+ 					ext4_lblk_t *block)
+ {
++	struct ext4_map_blocks map;
+ 	struct buffer_head *bh;
+ 	int err;
+ 
+@@ -63,6 +64,21 @@ static struct buffer_head *ext4_append(handle_t *handle,
+ 		return ERR_PTR(-ENOSPC);
+ 
+ 	*block = inode->i_size >> inode->i_sb->s_blocksize_bits;
++	map.m_lblk = *block;
++	map.m_len = 1;
++
++	/*
++	 * We're appending new directory block. Make sure the block is not
++	 * allocated yet, otherwise we will end up corrupting the
++	 * directory.
++	 */
++	err = ext4_map_blocks(NULL, inode, &map, 0);
++	if (err < 0)
++		return ERR_PTR(err);
++	if (err) {
++		EXT4_ERROR_INODE(inode, "Logical block already allocated");
++		return ERR_PTR(-EFSCORRUPTED);
++	}
+ 
+ 	bh = ext4_bread(handle, inode, *block, EXT4_GET_BLOCKS_CREATE);
+ 	if (IS_ERR(bh))
+@@ -109,6 +125,13 @@ static struct buffer_head *__ext4_read_dirblock(struct inode *inode,
+ 	struct ext4_dir_entry *dirent;
+ 	int is_dx_block = 0;
+ 
++	if (block >= inode->i_size) {
++		ext4_error_inode(inode, func, line, block,
++		       "Attempting to read directory block (%u) that is past i_size (%llu)",
++		       block, inode->i_size);
++		return ERR_PTR(-EFSCORRUPTED);
++	}
++
+ 	if (ext4_simulate_fail(inode->i_sb, EXT4_SIM_DIRBLOCK_EIO))
+ 		bh = ERR_PTR(-EIO);
+ 	else
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 015028302305d..5cfea77f33227 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1461,6 +1461,7 @@ static void ext4_update_super(struct super_block *sb,
+ 	 * Update the fs overhead information
+ 	 */
+ 	ext4_calculate_overhead(sb);
++	es->s_overhead_clusters = cpu_to_le32(sbi->s_overhead);
+ 
+ 	if (test_opt(sb, DEBUG))
+ 		printk(KERN_DEBUG "EXT4-fs: added group %u:"
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 5462f26907c19..38531c5e16c60 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -2167,8 +2167,9 @@ int ext4_xattr_ibody_find(struct inode *inode, struct ext4_xattr_info *i,
+ 	struct ext4_inode *raw_inode;
+ 	int error;
+ 
+-	if (EXT4_I(inode)->i_extra_isize == 0)
++	if (!EXT4_INODE_HAS_XATTR_SPACE(inode))
+ 		return 0;
++
+ 	raw_inode = ext4_raw_inode(&is->iloc);
+ 	header = IHDR(inode, raw_inode);
+ 	is->s.base = is->s.first = IFIRST(header);
+@@ -2196,8 +2197,9 @@ int ext4_xattr_ibody_inline_set(handle_t *handle, struct inode *inode,
+ 	struct ext4_xattr_search *s = &is->s;
+ 	int error;
+ 
+-	if (EXT4_I(inode)->i_extra_isize == 0)
++	if (!EXT4_INODE_HAS_XATTR_SPACE(inode))
+ 		return -ENOSPC;
++
+ 	error = ext4_xattr_set_entry(i, s, handle, inode, false /* is_block */);
+ 	if (error)
+ 		return error;
+diff --git a/fs/ext4/xattr.h b/fs/ext4/xattr.h
+index 730b91fa0dd70..87e5863bb4931 100644
+--- a/fs/ext4/xattr.h
++++ b/fs/ext4/xattr.h
+@@ -95,6 +95,19 @@ struct ext4_xattr_entry {
+ 
+ #define EXT4_ZERO_XATTR_VALUE ((void *)-1)
+ 
++/*
++ * If we want to add an xattr to the inode, we should make sure that
++ * i_extra_isize is not 0 and that the inode size is not less than
++ * EXT4_GOOD_OLD_INODE_SIZE + extra_isize + pad.
++ *   EXT4_GOOD_OLD_INODE_SIZE   extra_isize header   entry   pad  data
++ * |--------------------------|------------|------|---------|---|-------|
++ */
++#define EXT4_INODE_HAS_XATTR_SPACE(inode)				\
++	((EXT4_I(inode)->i_extra_isize != 0) &&				\
++	 (EXT4_GOOD_OLD_INODE_SIZE + EXT4_I(inode)->i_extra_isize +	\
++	  sizeof(struct ext4_xattr_ibody_header) + EXT4_XATTR_PAD <=	\
++	  EXT4_INODE_SIZE((inode)->i_sb)))
++
+ struct ext4_xattr_info {
+ 	const char *name;
+ 	const void *value;
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index defa068b4c7cd..d56fcace18211 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1844,10 +1844,7 @@ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask)
+ 		if (masked_flags & F2FS_COMPR_FL) {
+ 			if (!f2fs_disable_compressed_file(inode))
+ 				return -EINVAL;
+-		}
+-		if (iflags & F2FS_NOCOMP_FL)
+-			return -EINVAL;
+-		if (iflags & F2FS_COMPR_FL) {
++		} else {
+ 			if (!f2fs_may_compress(inode))
+ 				return -EINVAL;
+ 			if (S_ISREG(inode->i_mode) && inode->i_size)
+@@ -1856,10 +1853,6 @@ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask)
+ 			set_compress_context(inode);
+ 		}
+ 	}
+-	if ((iflags ^ masked_flags) & F2FS_NOCOMP_FL) {
+-		if (masked_flags & F2FS_COMPR_FL)
+-			return -EINVAL;
+-	}
+ 
+ 	fi->i_flags = iflags | (fi->i_flags & ~mask);
+ 	f2fs_bug_on(F2FS_I_SB(inode), (fi->i_flags & F2FS_COMPR_FL) &&
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 24e93fb254c5f..3b53fdebf03da 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1158,7 +1158,8 @@ static int move_data_block(struct inode *inode, block_t bidx,
+ 	}
+ 
+ 	if (f2fs_is_pinned_file(inode)) {
+-		f2fs_pin_file_control(inode, true);
++		if (gc_type == FG_GC)
++			f2fs_pin_file_control(inode, true);
+ 		err = -EAGAIN;
+ 		goto out;
+ 	}
+@@ -1740,23 +1741,31 @@ gc_more:
+ 	if (sync)
+ 		goto stop;
+ 
+-	if (has_not_enough_free_secs(sbi, sec_freed, 0)) {
+-		if (skipped_round <= MAX_SKIP_GC_COUNT ||
+-					skipped_round * 2 < round) {
+-			segno = NULL_SEGNO;
+-			goto gc_more;
+-		}
++	if (!has_not_enough_free_secs(sbi, sec_freed, 0))
++		goto stop;
+ 
+-		if (first_skipped < last_skipped &&
+-				(last_skipped - first_skipped) >
+-						sbi->skipped_gc_rwsem) {
+-			f2fs_drop_inmem_pages_all(sbi, true);
+-			segno = NULL_SEGNO;
+-			goto gc_more;
+-		}
+-		if (gc_type == FG_GC && !is_sbi_flag_set(sbi, SBI_CP_DISABLED))
++	if (skipped_round <= MAX_SKIP_GC_COUNT || skipped_round * 2 < round) {
++
++		/* Write checkpoint to reclaim prefree segments */
++		if (free_sections(sbi) < NR_CURSEG_PERSIST_TYPE &&
++				prefree_segments(sbi) &&
++				!is_sbi_flag_set(sbi, SBI_CP_DISABLED)) {
+ 			ret = f2fs_write_checkpoint(sbi, &cpc);
+-	}
++			if (ret)
++				goto stop;
++		}
++		segno = NULL_SEGNO;
++		goto gc_more;
++	}
++	if (first_skipped < last_skipped &&
++			(last_skipped - first_skipped) >
++					sbi->skipped_gc_rwsem) {
++		f2fs_drop_inmem_pages_all(sbi, true);
++		segno = NULL_SEGNO;
++		goto gc_more;
++	}
++	if (gc_type == FG_GC && !is_sbi_flag_set(sbi, SBI_CP_DISABLED))
++		ret = f2fs_write_checkpoint(sbi, &cpc);
+ stop:
+ 	SIT_I(sbi)->last_victim[ALLOC_NEXT] = 0;
+ 	SIT_I(sbi)->last_victim[FLUSH_DEVICE] = init_segno;
+diff --git a/fs/fuse/control.c b/fs/fuse/control.c
+index cc7e94d73c6cc..24b4d9db231db 100644
+--- a/fs/fuse/control.c
++++ b/fs/fuse/control.c
+@@ -275,7 +275,7 @@ int fuse_ctl_add_conn(struct fuse_conn *fc)
+ 	struct dentry *parent;
+ 	char name[32];
+ 
+-	if (!fuse_control_sb)
++	if (!fuse_control_sb || fc->no_control)
+ 		return 0;
+ 
+ 	parent = fuse_control_sb->s_root;
+@@ -313,7 +313,7 @@ void fuse_ctl_remove_conn(struct fuse_conn *fc)
+ {
+ 	int i;
+ 
+-	if (!fuse_control_sb)
++	if (!fuse_control_sb || fc->no_control)
+ 		return;
+ 
+ 	for (i = fc->ctl_ndents - 1; i >= 0; i--) {
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 5e484676343eb..2ede05df7d069 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -182,6 +182,12 @@ void fuse_change_attributes_common(struct inode *inode, struct fuse_attr *attr,
+ 	inode->i_uid     = make_kuid(fc->user_ns, attr->uid);
+ 	inode->i_gid     = make_kgid(fc->user_ns, attr->gid);
+ 	inode->i_blocks  = attr->blocks;
++
++	/* Sanitize nsecs */
++	attr->atimensec = min_t(u32, attr->atimensec, NSEC_PER_SEC - 1);
++	attr->mtimensec = min_t(u32, attr->mtimensec, NSEC_PER_SEC - 1);
++	attr->ctimensec = min_t(u32, attr->ctimensec, NSEC_PER_SEC - 1);
++
+ 	inode->i_atime.tv_sec   = attr->atime;
+ 	inode->i_atime.tv_nsec  = attr->atimensec;
+ 	/* mtime from server may be stale due to local buffered write */
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index 867362f45cf63..98cfa73cb165b 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -564,13 +564,13 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ 	 */
+ 	jbd2_journal_switch_revoke_table(journal);
+ 
++	write_lock(&journal->j_state_lock);
+ 	/*
+ 	 * Reserved credits cannot be claimed anymore, free them
+ 	 */
+ 	atomic_sub(atomic_read(&journal->j_reserved_credits),
+ 		   &commit_transaction->t_outstanding_credits);
+ 
+-	write_lock(&journal->j_state_lock);
+ 	trace_jbd2_commit_flushing(journal, commit_transaction);
+ 	stats.run.rs_flushing = jiffies;
+ 	stats.run.rs_locked = jbd2_time_diff(stats.run.rs_locked,
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index e8fc45fd751fb..0f1cef90fa7d6 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -1460,8 +1460,6 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 	struct journal_head *jh;
+ 	int ret = 0;
+ 
+-	if (is_handle_aborted(handle))
+-		return -EROFS;
+ 	if (!buffer_jbd(bh))
+ 		return -EUCLEAN;
+ 
+@@ -1508,6 +1506,18 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 	journal = transaction->t_journal;
+ 	spin_lock(&jh->b_state_lock);
+ 
++	if (is_handle_aborted(handle)) {
++		/*
++		 * Check journal aborting with @jh->b_state_lock locked,
++		 * since 'jh->b_transaction' could be replaced with
++		 * 'jh->b_next_transaction' during old transaction
++		 * committing if journal aborted, which may fail
++		 * assertion on 'jh->b_frozen_data == NULL'.
++		 */
++		ret = -EROFS;
++		goto out_unlock_bh;
++	}
++
+ 	if (jh->b_modified == 0) {
+ 		/*
+ 		 * This buffer's got modified and becoming part
+diff --git a/fs/namei.c b/fs/namei.c
+index 72f354b62dd5d..eba2f13d229df 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -1348,6 +1348,8 @@ static bool __follow_mount_rcu(struct nameidata *nd, struct path *path,
+ 				 * becoming unpinned.
+ 				 */
+ 				flags = dentry->d_flags;
++				if (read_seqretry(&mount_lock, nd->m_seq))
++					return false;
+ 				continue;
+ 			}
+ 			if (read_seqretry(&mount_lock, nd->m_seq))
+@@ -3272,6 +3274,8 @@ struct dentry *vfs_tmpfile(struct dentry *dentry, umode_t mode, int open_flag)
+ 	child = d_alloc(dentry, &slash_name);
+ 	if (unlikely(!child))
+ 		goto out_err;
++	if (!IS_POSIXACL(dir))
++		mode &= ~current_umask();
+ 	error = dir->i_op->tmpfile(dir, child, mode);
+ 	if (error)
+ 		goto out_err;
+diff --git a/fs/nfs/nfs3client.c b/fs/nfs/nfs3client.c
+index 5601e47360c28..b49359afac883 100644
+--- a/fs/nfs/nfs3client.c
++++ b/fs/nfs/nfs3client.c
+@@ -108,7 +108,6 @@ struct nfs_client *nfs3_set_ds_client(struct nfs_server *mds_srv,
+ 	if (mds_srv->flags & NFS_MOUNT_NORESVPORT)
+ 		__set_bit(NFS_CS_NORESVPORT, &cl_init.init_flags);
+ 
+-	__set_bit(NFS_CS_NOPING, &cl_init.init_flags);
+ 	__set_bit(NFS_CS_DS, &cl_init.init_flags);
+ 
+ 	/* Use the MDS nfs_client cl_ipaddr. */
+diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
+index f469982dcb36e..44118f0ab0b31 100644
+--- a/fs/overlayfs/export.c
++++ b/fs/overlayfs/export.c
+@@ -257,7 +257,7 @@ static int ovl_encode_fh(struct inode *inode, u32 *fid, int *max_len,
+ 		return FILEID_INVALID;
+ 
+ 	dentry = d_find_any_alias(inode);
+-	if (WARN_ON(!dentry))
++	if (!dentry)
+ 		return FILEID_INVALID;
+ 
+ 	bytes = ovl_dentry_to_fid(dentry, fid, buflen);
+diff --git a/fs/splice.c b/fs/splice.c
+index 866d5c2367b23..6610e55c0e2ab 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -806,17 +806,15 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
+ {
+ 	struct pipe_inode_info *pipe;
+ 	long ret, bytes;
+-	umode_t i_mode;
+ 	size_t len;
+ 	int i, flags, more;
+ 
+ 	/*
+-	 * We require the input being a regular file, as we don't want to
+-	 * randomly drop data for eg socket -> socket splicing. Use the
+-	 * piped splicing for that!
++	 * We require the input to be seekable, as we don't want to randomly
++	 * drop data for eg socket -> socket splicing. Use the piped splicing
++	 * for that!
+ 	 */
+-	i_mode = file_inode(in)->i_mode;
+-	if (unlikely(!S_ISREG(i_mode) && !S_ISBLK(i_mode)))
++	if (unlikely(!(in->f_mode & FMODE_LSEEK)))
+ 		return -EINVAL;
+ 
+ 	/*
+diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
+index deb99300d171c..e69a08ed7de43 100644
+--- a/fs/xfs/xfs_icache.c
++++ b/fs/xfs/xfs_icache.c
+@@ -47,8 +47,9 @@ xfs_inode_alloc(
+ 		return NULL;
+ 	}
+ 
+-	/* VFS doesn't initialise i_mode! */
++	/* VFS doesn't initialise i_mode or i_state! */
+ 	VFS_I(ip)->i_mode = 0;
++	VFS_I(ip)->i_state = 0;
+ 
+ 	XFS_STATS_INC(mp, vn_active);
+ 	ASSERT(atomic_read(&ip->i_pincount) == 0);
+diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
+index 74bc2beadc237..bd5a25f4952d0 100644
+--- a/fs/xfs/xfs_iomap.c
++++ b/fs/xfs/xfs_iomap.c
+@@ -1062,11 +1062,11 @@ found_cow:
+ 		error = xfs_bmbt_to_iomap(ip, srcmap, &imap, 0);
+ 		if (error)
+ 			return error;
+-	} else {
+-		xfs_trim_extent(&cmap, offset_fsb,
+-				imap.br_startoff - offset_fsb);
++		return xfs_bmbt_to_iomap(ip, iomap, &cmap, IOMAP_F_SHARED);
+ 	}
+-	return xfs_bmbt_to_iomap(ip, iomap, &cmap, IOMAP_F_SHARED);
++
++	xfs_trim_extent(&cmap, offset_fsb, imap.br_startoff - offset_fsb);
++	return xfs_bmbt_to_iomap(ip, iomap, &cmap, 0);
+ 
+ out_unlock:
+ 	xfs_iunlock(ip, XFS_ILOCK_EXCL);
+diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
+index b7f7b31a77d59..6a3026e78a9bb 100644
+--- a/fs/xfs/xfs_iops.c
++++ b/fs/xfs/xfs_iops.c
+@@ -1328,7 +1328,7 @@ xfs_setup_inode(
+ 	gfp_t			gfp_mask;
+ 
+ 	inode->i_ino = ip->i_ino;
+-	inode->i_state = I_NEW;
++	inode->i_state |= I_NEW;
+ 
+ 	inode_sb_list_add(inode);
+ 	/* make the inode look hashed for the writeback code */
+diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
+index 69408782019eb..e61f28ce3e44e 100644
+--- a/fs/xfs/xfs_log_recover.c
++++ b/fs/xfs/xfs_log_recover.c
+@@ -2061,7 +2061,9 @@ xlog_recover_add_to_cont_trans(
+ 	old_ptr = item->ri_buf[item->ri_cnt-1].i_addr;
+ 	old_len = item->ri_buf[item->ri_cnt-1].i_len;
+ 
+-	ptr = krealloc(old_ptr, len + old_len, GFP_KERNEL | __GFP_NOFAIL);
++	ptr = kvrealloc(old_ptr, old_len, len + old_len, GFP_KERNEL);
++	if (!ptr)
++		return -ENOMEM;
+ 	memcpy(&ptr[old_len], dp, len);
+ 	item->ri_buf[item->ri_cnt-1].i_len += len;
+ 	item->ri_buf[item->ri_cnt-1].i_addr = ptr;
+diff --git a/include/acpi/cppc_acpi.h b/include/acpi/cppc_acpi.h
+index a6a9373ab8634..d9417abf4cd08 100644
+--- a/include/acpi/cppc_acpi.h
++++ b/include/acpi/cppc_acpi.h
+@@ -16,7 +16,7 @@
+ #include <acpi/pcc.h>
+ #include <acpi/processor.h>
+ 
+-/* Support CPPCv2 and CPPCv3  */
++/* CPPCv2 and CPPCv3 support */
+ #define CPPC_V2_REV	2
+ #define CPPC_V3_REV	3
+ #define CPPC_V2_NUM_ENT	21
+diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
+index 99058eb81042e..c4f6a9270c03c 100644
+--- a/include/linux/bitmap.h
++++ b/include/linux/bitmap.h
+@@ -4,10 +4,12 @@
+ 
+ #ifndef __ASSEMBLY__
+ 
+-#include <linux/types.h>
+ #include <linux/bitops.h>
+-#include <linux/string.h>
+ #include <linux/kernel.h>
++#include <linux/string.h>
++#include <linux/types.h>
++
++struct device;
+ 
+ /*
+  * bitmaps provide bit arrays that consume one or more unsigned
+@@ -122,6 +124,12 @@ extern unsigned long *bitmap_alloc(unsigned int nbits, gfp_t flags);
+ extern unsigned long *bitmap_zalloc(unsigned int nbits, gfp_t flags);
+ extern void bitmap_free(const unsigned long *bitmap);
+ 
++/* Managed variants of the above. */
++unsigned long *devm_bitmap_alloc(struct device *dev,
++				 unsigned int nbits, gfp_t flags);
++unsigned long *devm_bitmap_zalloc(struct device *dev,
++				  unsigned int nbits, gfp_t flags);
++
+ /*
+  * lib/bitmap.c provides these functions:
+  */
+diff --git a/include/linux/blktrace_api.h b/include/linux/blktrace_api.h
+index 3b6ff5902edce..05556573b896a 100644
+--- a/include/linux/blktrace_api.h
++++ b/include/linux/blktrace_api.h
+@@ -75,8 +75,7 @@ static inline bool blk_trace_note_message_enabled(struct request_queue *q)
+ 	return ret;
+ }
+ 
+-extern void blk_add_driver_data(struct request_queue *q, struct request *rq,
+-				void *data, size_t len);
++extern void blk_add_driver_data(struct request *rq, void *data, size_t len);
+ extern int blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
+ 			   struct block_device *bdev,
+ 			   char __user *arg);
+@@ -90,7 +89,7 @@ extern struct attribute_group blk_trace_attr_group;
+ #else /* !CONFIG_BLK_DEV_IO_TRACE */
+ # define blk_trace_ioctl(bdev, cmd, arg)		(-ENOTTY)
+ # define blk_trace_shutdown(q)				do { } while (0)
+-# define blk_add_driver_data(q, rq, data, len)		do {} while (0)
++# define blk_add_driver_data(rq, data, len)		do {} while (0)
+ # define blk_trace_setup(q, name, dev, bdev, arg)	(-ENOTTY)
+ # define blk_trace_startstop(q, start)			(-ENOTTY)
+ # define blk_trace_remove(q)				(-ENOTTY)
+diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
+index 6b47f94378c5a..20a2ff1c07a1b 100644
+--- a/include/linux/buffer_head.h
++++ b/include/linux/buffer_head.h
+@@ -117,7 +117,6 @@ static __always_inline int test_clear_buffer_##name(struct buffer_head *bh) \
+  * of the form "mark_buffer_foo()".  These are higher-level functions which
+  * do something in addition to setting a b_state bit.
+  */
+-BUFFER_FNS(Uptodate, uptodate)
+ BUFFER_FNS(Dirty, dirty)
+ TAS_BUFFER_FNS(Dirty, dirty)
+ BUFFER_FNS(Lock, locked)
+@@ -135,6 +134,30 @@ BUFFER_FNS(Meta, meta)
+ BUFFER_FNS(Prio, prio)
+ BUFFER_FNS(Defer_Completion, defer_completion)
+ 
++static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
++{
++	/*
++	 * make it consistent with folio_mark_uptodate
++	 * pairs with smp_load_acquire in buffer_uptodate
++	 */
++	smp_mb__before_atomic();
++	set_bit(BH_Uptodate, &bh->b_state);
++}
++
++static __always_inline void clear_buffer_uptodate(struct buffer_head *bh)
++{
++	clear_bit(BH_Uptodate, &bh->b_state);
++}
++
++static __always_inline int buffer_uptodate(const struct buffer_head *bh)
++{
++	/*
++	 * make it consistent with folio_test_uptodate
++	 * pairs with smp_mb__before_atomic in set_buffer_uptodate
++	 */
++	return (smp_load_acquire(&bh->b_state) & (1UL << BH_Uptodate)) != 0;
++}
++
+ #define bh_offset(bh)		((unsigned long)(bh)->b_data & ~PAGE_MASK)
+ 
+ /* If we *know* page->private refers to buffer_heads */
+diff --git a/include/linux/kfifo.h b/include/linux/kfifo.h
+index 86249476b57f4..0b35a41440ff1 100644
+--- a/include/linux/kfifo.h
++++ b/include/linux/kfifo.h
+@@ -688,7 +688,7 @@ __kfifo_uint_must_check_helper( \
+  * writer, you don't need extra locking to use these macro.
+  */
+ #define	kfifo_to_user(fifo, to, len, copied) \
+-__kfifo_uint_must_check_helper( \
++__kfifo_int_must_check_helper( \
+ ({ \
+ 	typeof((fifo) + 1) __tmp = (fifo); \
+ 	void __user *__to = (to); \
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 439fbe0ee0c74..94871f12e5362 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -146,6 +146,7 @@ static inline bool is_error_page(struct page *page)
+ #define KVM_REQ_MMU_RELOAD        (1 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+ #define KVM_REQ_PENDING_TIMER     2
+ #define KVM_REQ_UNHALT            3
++#define KVM_REQ_VM_BUGGED         (4 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+ #define KVM_REQUEST_ARCH_BASE     8
+ 
+ #define KVM_ARCH_REQ_FLAGS(nr, flags) ({ \
+@@ -505,6 +506,7 @@ struct kvm {
+ 	struct srcu_struct irq_srcu;
+ 	pid_t userspace_pid;
+ 	unsigned int max_halt_poll_ns;
++	bool vm_bugged;
+ };
+ 
+ #define kvm_err(fmt, ...) \
+@@ -533,6 +535,31 @@ struct kvm {
+ #define vcpu_err(vcpu, fmt, ...)					\
+ 	kvm_err("vcpu%i " fmt, (vcpu)->vcpu_id, ## __VA_ARGS__)
+ 
++bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req);
++static inline void kvm_vm_bugged(struct kvm *kvm)
++{
++	kvm->vm_bugged = true;
++	kvm_make_all_cpus_request(kvm, KVM_REQ_VM_BUGGED);
++}
++
++#define KVM_BUG(cond, kvm, fmt...)				\
++({								\
++	int __ret = (cond);					\
++								\
++	if (WARN_ONCE(__ret && !(kvm)->vm_bugged, fmt))		\
++		kvm_vm_bugged(kvm);				\
++	unlikely(__ret);					\
++})
++
++#define KVM_BUG_ON(cond, kvm)					\
++({								\
++	int __ret = (cond);					\
++								\
++	if (WARN_ON_ONCE(__ret && !(kvm)->vm_bugged))		\
++		kvm_vm_bugged(kvm);				\
++	unlikely(__ret);					\
++})
++
+ static inline bool kvm_dirty_log_manual_protect_and_init_set(struct kvm *kvm)
+ {
+ 	return !!(kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET);
+@@ -850,7 +877,6 @@ void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
+ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
+ 				 struct kvm_vcpu *except,
+ 				 unsigned long *vcpu_bitmap, cpumask_var_t tmp);
+-bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req);
+ bool kvm_make_all_cpus_request_except(struct kvm *kvm, unsigned int req,
+ 				      struct kvm_vcpu *except);
+ bool kvm_make_cpus_request_mask(struct kvm *kvm, unsigned int req,
+diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
+index 20b6797babe2c..2c2586312b447 100644
+--- a/include/linux/lockdep.h
++++ b/include/linux/lockdep.h
+@@ -192,7 +192,7 @@ static inline void
+ lockdep_init_map_waits(struct lockdep_map *lock, const char *name,
+ 		       struct lock_class_key *key, int subclass, u8 inner, u8 outer)
+ {
+-	lockdep_init_map_type(lock, name, key, subclass, inner, LD_WAIT_INV, LD_LOCK_NORMAL);
++	lockdep_init_map_type(lock, name, key, subclass, inner, outer, LD_LOCK_NORMAL);
+ }
+ 
+ static inline void
+@@ -215,24 +215,28 @@ static inline void lockdep_init_map(struct lockdep_map *lock, const char *name,
+  * or they are too narrow (they suffer from a false class-split):
+  */
+ #define lockdep_set_class(lock, key)				\
+-	lockdep_init_map_waits(&(lock)->dep_map, #key, key, 0,	\
+-			       (lock)->dep_map.wait_type_inner,	\
+-			       (lock)->dep_map.wait_type_outer)
++	lockdep_init_map_type(&(lock)->dep_map, #key, key, 0,	\
++			      (lock)->dep_map.wait_type_inner,	\
++			      (lock)->dep_map.wait_type_outer,	\
++			      (lock)->dep_map.lock_type)
+ 
+ #define lockdep_set_class_and_name(lock, key, name)		\
+-	lockdep_init_map_waits(&(lock)->dep_map, name, key, 0,	\
+-			       (lock)->dep_map.wait_type_inner,	\
+-			       (lock)->dep_map.wait_type_outer)
++	lockdep_init_map_type(&(lock)->dep_map, name, key, 0,	\
++			      (lock)->dep_map.wait_type_inner,	\
++			      (lock)->dep_map.wait_type_outer,	\
++			      (lock)->dep_map.lock_type)
+ 
+ #define lockdep_set_class_and_subclass(lock, key, sub)		\
+-	lockdep_init_map_waits(&(lock)->dep_map, #key, key, sub,\
+-			       (lock)->dep_map.wait_type_inner,	\
+-			       (lock)->dep_map.wait_type_outer)
++	lockdep_init_map_type(&(lock)->dep_map, #key, key, sub,	\
++			      (lock)->dep_map.wait_type_inner,	\
++			      (lock)->dep_map.wait_type_outer,	\
++			      (lock)->dep_map.lock_type)
+ 
+ #define lockdep_set_subclass(lock, sub)					\
+-	lockdep_init_map_waits(&(lock)->dep_map, #lock, (lock)->dep_map.key, sub,\
+-			       (lock)->dep_map.wait_type_inner,		\
+-			       (lock)->dep_map.wait_type_outer)
++	lockdep_init_map_type(&(lock)->dep_map, #lock, (lock)->dep_map.key, sub,\
++			      (lock)->dep_map.wait_type_inner,		\
++			      (lock)->dep_map.wait_type_outer,		\
++			      (lock)->dep_map.lock_type)
+ 
+ #define lockdep_set_novalidate_class(lock) \
+ 	lockdep_set_class_and_name(lock, &__lockdep_no_validate__, #lock)
+diff --git a/include/linux/mfd/t7l66xb.h b/include/linux/mfd/t7l66xb.h
+index 69632c1b07bd8..ae3e7a5c5219b 100644
+--- a/include/linux/mfd/t7l66xb.h
++++ b/include/linux/mfd/t7l66xb.h
+@@ -12,7 +12,6 @@
+ 
+ struct t7l66xb_platform_data {
+ 	int (*enable)(struct platform_device *dev);
+-	int (*disable)(struct platform_device *dev);
+ 	int (*suspend)(struct platform_device *dev);
+ 	int (*resume)(struct platform_device *dev);
+ 
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 5b4d88faf114a..b8b677f47a8da 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -788,6 +788,8 @@ static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
+ 	return kvmalloc_array(n, size, flags | __GFP_ZERO);
+ }
+ 
++extern void *kvrealloc(const void *p, size_t oldsize, size_t newsize,
++		gfp_t flags);
+ extern void kvfree(const void *addr);
+ extern void kvfree_sensitive(const void *addr, size_t len);
+ 
+diff --git a/include/linux/mtd/rawnand.h b/include/linux/mtd/rawnand.h
+index db2eaff77f41a..2044fbd55d731 100644
+--- a/include/linux/mtd/rawnand.h
++++ b/include/linux/mtd/rawnand.h
+@@ -474,12 +474,100 @@ struct nand_sdr_timings {
+ 	u32 tWW_min;
+ };
+ 
++/**
++ * struct nand_nvddr_timings - NV-DDR NAND chip timings
++ *
++ * This struct defines the timing requirements of a NV-DDR NAND data interface.
++ * These information can be found in every NAND datasheets and the timings
++ * meaning are described in the ONFI specifications:
++ * https://media-www.micron.com/-/media/client/onfi/specs/onfi_4_1_gold.pdf
++ * (chapter 4.18.2 NV-DDR)
++ *
++ * All these timings are expressed in picoseconds.
++ *
++ * @tBERS_max: Block erase time
++ * @tCCS_min: Change column setup time
++ * @tPROG_max: Page program time
++ * @tR_max: Page read time
++ * @tAC_min: Access window of DQ[7:0] from CLK
++ * @tAC_max: Access window of DQ[7:0] from CLK
++ * @tADL_min: ALE to data loading time
++ * @tCAD_min: Command, Address, Data delay
++ * @tCAH_min: Command/Address DQ hold time
++ * @tCALH_min: W/R_n, CLE and ALE hold time
++ * @tCALS_min: W/R_n, CLE and ALE setup time
++ * @tCAS_min: Command/address DQ setup time
++ * @tCEH_min: CE# high hold time
++ * @tCH_min:  CE# hold time
++ * @tCK_min: Average clock cycle time
++ * @tCS_min: CE# setup time
++ * @tDH_min: Data hold time
++ * @tDQSCK_min: Start of the access window of DQS from CLK
++ * @tDQSCK_max: End of the access window of DQS from CLK
++ * @tDQSD_min: Min W/R_n low to DQS/DQ driven by device
++ * @tDQSD_max: Max W/R_n low to DQS/DQ driven by device
++ * @tDQSHZ_max: W/R_n high to DQS/DQ tri-state by device
++ * @tDQSQ_max: DQS-DQ skew, DQS to last DQ valid, per access
++ * @tDS_min: Data setup time
++ * @tDSC_min: DQS cycle time
++ * @tFEAT_max: Busy time for Set Features and Get Features
++ * @tITC_max: Interface and Timing Mode Change time
++ * @tQHS_max: Data hold skew factor
++ * @tRHW_min: Data output cycle to command, address, or data input cycle
++ * @tRR_min: Ready to RE# low (data only)
++ * @tRST_max: Device reset time, measured from the falling edge of R/B# to the
++ *	      rising edge of R/B#.
++ * @tWB_max: WE# high to SR[6] low
++ * @tWHR_min: WE# high to RE# low
++ * @tWRCK_min: W/R_n low to data output cycle
++ * @tWW_min: WP# transition to WE# low
++ */
++struct nand_nvddr_timings {
++	u64 tBERS_max;
++	u32 tCCS_min;
++	u64 tPROG_max;
++	u64 tR_max;
++	u32 tAC_min;
++	u32 tAC_max;
++	u32 tADL_min;
++	u32 tCAD_min;
++	u32 tCAH_min;
++	u32 tCALH_min;
++	u32 tCALS_min;
++	u32 tCAS_min;
++	u32 tCEH_min;
++	u32 tCH_min;
++	u32 tCK_min;
++	u32 tCS_min;
++	u32 tDH_min;
++	u32 tDQSCK_min;
++	u32 tDQSCK_max;
++	u32 tDQSD_min;
++	u32 tDQSD_max;
++	u32 tDQSHZ_max;
++	u32 tDQSQ_max;
++	u32 tDS_min;
++	u32 tDSC_min;
++	u32 tFEAT_max;
++	u32 tITC_max;
++	u32 tQHS_max;
++	u32 tRHW_min;
++	u32 tRR_min;
++	u32 tRST_max;
++	u32 tWB_max;
++	u32 tWHR_min;
++	u32 tWRCK_min;
++	u32 tWW_min;
++};
++
+ /**
+  * enum nand_interface_type - NAND interface type
+  * @NAND_SDR_IFACE:	Single Data Rate interface
++ * @NAND_NVDDR_IFACE:	Double Data Rate interface
+  */
+ enum nand_interface_type {
+ 	NAND_SDR_IFACE,
++	NAND_NVDDR_IFACE,
+ };
+ 
+ /**
+@@ -488,6 +576,7 @@ enum nand_interface_type {
+  * @timings:	 The timing information
+  * @timings.mode: Timing mode as defined in the specification
+  * @timings.sdr: Use it when @type is %NAND_SDR_IFACE.
++ * @timings.nvddr: Use it when @type is %NAND_NVDDR_IFACE.
+  */
+ struct nand_interface_config {
+ 	enum nand_interface_type type;
+@@ -495,10 +584,29 @@ struct nand_interface_config {
+ 		unsigned int mode;
+ 		union {
+ 			struct nand_sdr_timings sdr;
++			struct nand_nvddr_timings nvddr;
+ 		};
+ 	} timings;
+ };
+ 
++/**
++ * nand_interface_is_sdr - get the interface type
++ * @conf:	The data interface
++ */
++static bool nand_interface_is_sdr(const struct nand_interface_config *conf)
++{
++	return conf->type == NAND_SDR_IFACE;
++}
++
++/**
++ * nand_interface_is_nvddr - get the interface type
++ * @conf:	The data interface
++ */
++static bool nand_interface_is_nvddr(const struct nand_interface_config *conf)
++{
++	return conf->type == NAND_NVDDR_IFACE;
++}
++
+ /**
+  * nand_get_sdr_timings - get SDR timing from data interface
+  * @conf:	The data interface
+@@ -506,12 +614,25 @@ struct nand_interface_config {
+ static inline const struct nand_sdr_timings *
+ nand_get_sdr_timings(const struct nand_interface_config *conf)
+ {
+-	if (conf->type != NAND_SDR_IFACE)
++	if (!nand_interface_is_sdr(conf))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	return &conf->timings.sdr;
+ }
+ 
++/**
++ * nand_get_nvddr_timings - get NV-DDR timing from data interface
++ * @conf:	The data interface
++ */
++static inline const struct nand_nvddr_timings *
++nand_get_nvddr_timings(const struct nand_interface_config *conf)
++{
++	if (!nand_interface_is_nvddr(conf))
++		return ERR_PTR(-EINVAL);
++
++	return &conf->timings.nvddr;
++}
++
+ /**
+  * struct nand_op_cmd_instr - Definition of a command instruction
+  * @opcode: the command to issue in one cycle
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 635a9243cce0d..69e310173fbca 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -59,6 +59,8 @@
+ #define PCI_CLASS_BRIDGE_EISA		0x0602
+ #define PCI_CLASS_BRIDGE_MC		0x0603
+ #define PCI_CLASS_BRIDGE_PCI		0x0604
++#define PCI_CLASS_BRIDGE_PCI_NORMAL		0x060400
++#define PCI_CLASS_BRIDGE_PCI_SUBTRACTIVE	0x060401
+ #define PCI_CLASS_BRIDGE_PCMCIA		0x0605
+ #define PCI_CLASS_BRIDGE_NUBUS		0x0606
+ #define PCI_CLASS_BRIDGE_CARDBUS	0x0607
+@@ -81,6 +83,7 @@
+ #define PCI_CLASS_SYSTEM_RTC		0x0803
+ #define PCI_CLASS_SYSTEM_PCI_HOTPLUG	0x0804
+ #define PCI_CLASS_SYSTEM_SDHCI		0x0805
++#define PCI_CLASS_SYSTEM_RCEC		0x0807
+ #define PCI_CLASS_SYSTEM_OTHER		0x0880
+ 
+ #define PCI_BASE_CLASS_INPUT		0x09
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 4bca80c9931fb..4e8425c1c5605 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1658,7 +1658,7 @@ current_restore_flags(unsigned long orig_flags, unsigned long flags)
+ }
+ 
+ extern int cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial);
+-extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_cpus_allowed);
++extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_effective_cpus);
+ #ifdef CONFIG_SMP
+ extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask);
+ extern int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask);
+diff --git a/include/linux/tpm_eventlog.h b/include/linux/tpm_eventlog.h
+index 739ba9a03ec16..20c0ff54b7a0d 100644
+--- a/include/linux/tpm_eventlog.h
++++ b/include/linux/tpm_eventlog.h
+@@ -157,7 +157,7 @@ struct tcg_algorithm_info {
+  * Return: size of the event on success, 0 on failure
+  */
+ 
+-static inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *event,
++static __always_inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *event,
+ 					 struct tcg_pcr_event *event_header,
+ 					 bool do_mapping)
+ {
+diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
+index 9f05016d823f8..c0cf20b19e637 100644
+--- a/include/linux/usb/hcd.h
++++ b/include/linux/usb/hcd.h
+@@ -66,6 +66,7 @@
+ 
+ struct giveback_urb_bh {
+ 	bool running;
++	bool high_prio;
+ 	spinlock_t lock;
+ 	struct list_head  head;
+ 	struct tasklet_struct bh;
+diff --git a/include/linux/vfio.h b/include/linux/vfio.h
+index 38d3c6a8dc7e0..f479c5d7f2c37 100644
+--- a/include/linux/vfio.h
++++ b/include/linux/vfio.h
+@@ -15,6 +15,18 @@
+ #include <linux/poll.h>
+ #include <uapi/linux/vfio.h>
+ 
++struct vfio_device {
++	struct device *dev;
++	const struct vfio_device_ops *ops;
++	struct vfio_group *group;
++
++	/* Members below here are private, not for driver use */
++	refcount_t refcount;
++	struct completion comp;
++	struct list_head group_next;
++	void *device_data;
++};
++
+ /**
+  * struct vfio_device_ops - VFIO bus driver device callbacks
+  *
+@@ -48,11 +60,15 @@ struct vfio_device_ops {
+ extern struct iommu_group *vfio_iommu_group_get(struct device *dev);
+ extern void vfio_iommu_group_put(struct iommu_group *group, struct device *dev);
+ 
++void vfio_init_group_dev(struct vfio_device *device, struct device *dev,
++			 const struct vfio_device_ops *ops, void *device_data);
++int vfio_register_group_dev(struct vfio_device *device);
+ extern int vfio_add_group_dev(struct device *dev,
+ 			      const struct vfio_device_ops *ops,
+ 			      void *device_data);
+ 
+ extern void *vfio_del_group_dev(struct device *dev);
++void vfio_unregister_group_dev(struct vfio_device *device);
+ extern struct vfio_device *vfio_device_get_from_dev(struct device *dev);
+ extern void vfio_device_put(struct vfio_device *device);
+ extern void *vfio_device_data(struct vfio_device *device);
+diff --git a/include/linux/wait.h b/include/linux/wait.h
+index 9b8b0833100a0..1663e47681a30 100644
+--- a/include/linux/wait.h
++++ b/include/linux/wait.h
+@@ -534,10 +534,11 @@ do {										\
+ 										\
+ 	hrtimer_init_sleeper_on_stack(&__t, CLOCK_MONOTONIC,			\
+ 				      HRTIMER_MODE_REL);			\
+-	if ((timeout) != KTIME_MAX)						\
+-		hrtimer_start_range_ns(&__t.timer, timeout,			\
+-				       current->timer_slack_ns,			\
+-				       HRTIMER_MODE_REL);			\
++	if ((timeout) != KTIME_MAX) {						\
++		hrtimer_set_expires_range_ns(&__t.timer, timeout,		\
++					current->timer_slack_ns);		\
++		hrtimer_sleeper_start_expires(&__t, HRTIMER_MODE_REL);		\
++	}									\
+ 										\
+ 	__ret = ___wait_event(wq_head, condition, state, 0, 0,			\
+ 		if (!__t.task) {						\
+diff --git a/include/net/inet6_hashtables.h b/include/net/inet6_hashtables.h
+index 81b9659530368..56f1286583d3c 100644
+--- a/include/net/inet6_hashtables.h
++++ b/include/net/inet6_hashtables.h
+@@ -103,15 +103,24 @@ struct sock *inet6_lookup(struct net *net, struct inet_hashinfo *hashinfo,
+ 			  const int dif);
+ 
+ int inet6_hash(struct sock *sk);
+-#endif /* IS_ENABLED(CONFIG_IPV6) */
+ 
+-#define INET6_MATCH(__sk, __net, __saddr, __daddr, __ports, __dif, __sdif) \
+-	(((__sk)->sk_portpair == (__ports))			&&	\
+-	 ((__sk)->sk_family == AF_INET6)			&&	\
+-	 ipv6_addr_equal(&(__sk)->sk_v6_daddr, (__saddr))		&&	\
+-	 ipv6_addr_equal(&(__sk)->sk_v6_rcv_saddr, (__daddr))	&&	\
+-	 (((__sk)->sk_bound_dev_if == (__dif))	||			\
+-	  ((__sk)->sk_bound_dev_if == (__sdif)))		&&	\
+-	 net_eq(sock_net(__sk), (__net)))
++static inline bool inet6_match(struct net *net, const struct sock *sk,
++			       const struct in6_addr *saddr,
++			       const struct in6_addr *daddr,
++			       const __portpair ports,
++			       const int dif, const int sdif)
++{
++	if (!net_eq(sock_net(sk), net) ||
++	    sk->sk_family != AF_INET6 ||
++	    sk->sk_portpair != ports ||
++	    !ipv6_addr_equal(&sk->sk_v6_daddr, saddr) ||
++	    !ipv6_addr_equal(&sk->sk_v6_rcv_saddr, daddr))
++		return false;
++
++	/* READ_ONCE() paired with WRITE_ONCE() in sock_bindtoindex_locked() */
++	return inet_sk_bound_dev_eq(net, READ_ONCE(sk->sk_bound_dev_if), dif,
++				    sdif);
++}
++#endif /* IS_ENABLED(CONFIG_IPV6) */
+ 
+ #endif /* _INET6_HASHTABLES_H */
+diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h
+index d4d611064a76f..c9e387d174c63 100644
+--- a/include/net/inet_hashtables.h
++++ b/include/net/inet_hashtables.h
+@@ -197,17 +197,6 @@ static inline void inet_ehash_locks_free(struct inet_hashinfo *hashinfo)
+ 	hashinfo->ehash_locks = NULL;
+ }
+ 
+-static inline bool inet_sk_bound_dev_eq(struct net *net, int bound_dev_if,
+-					int dif, int sdif)
+-{
+-#if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV)
+-	return inet_bound_dev_eq(!!net->ipv4.sysctl_tcp_l3mdev_accept,
+-				 bound_dev_if, dif, sdif);
+-#else
+-	return inet_bound_dev_eq(true, bound_dev_if, dif, sdif);
+-#endif
+-}
+-
+ struct inet_bind_bucket *
+ inet_bind_bucket_create(struct kmem_cache *cachep, struct net *net,
+ 			struct inet_bind_hashbucket *head,
+@@ -289,7 +278,6 @@ static inline struct sock *inet_lookup_listener(struct net *net,
+ 	((__force __portpair)(((__u32)(__dport) << 16) | (__force __u32)(__be16)(__sport)))
+ #endif
+ 
+-#if (BITS_PER_LONG == 64)
+ #ifdef __BIG_ENDIAN
+ #define INET_ADDR_COOKIE(__name, __saddr, __daddr) \
+ 	const __addrpair __name = (__force __addrpair) ( \
+@@ -301,24 +289,20 @@ static inline struct sock *inet_lookup_listener(struct net *net,
+ 				   (((__force __u64)(__be32)(__daddr)) << 32) | \
+ 				   ((__force __u64)(__be32)(__saddr)))
+ #endif /* __BIG_ENDIAN */
+-#define INET_MATCH(__sk, __net, __cookie, __saddr, __daddr, __ports, __dif, __sdif) \
+-	(((__sk)->sk_portpair == (__ports))			&&	\
+-	 ((__sk)->sk_addrpair == (__cookie))			&&	\
+-	 (((__sk)->sk_bound_dev_if == (__dif))			||	\
+-	  ((__sk)->sk_bound_dev_if == (__sdif)))		&&	\
+-	 net_eq(sock_net(__sk), (__net)))
+-#else /* 32-bit arch */
+-#define INET_ADDR_COOKIE(__name, __saddr, __daddr) \
+-	const int __name __deprecated __attribute__((unused))
+-
+-#define INET_MATCH(__sk, __net, __cookie, __saddr, __daddr, __ports, __dif, __sdif) \
+-	(((__sk)->sk_portpair == (__ports))		&&		\
+-	 ((__sk)->sk_daddr	== (__saddr))		&&		\
+-	 ((__sk)->sk_rcv_saddr	== (__daddr))		&&		\
+-	 (((__sk)->sk_bound_dev_if == (__dif))		||		\
+-	  ((__sk)->sk_bound_dev_if == (__sdif)))	&&		\
+-	 net_eq(sock_net(__sk), (__net)))
+-#endif /* 64-bit arch */
++
++static inline bool INET_MATCH(struct net *net, const struct sock *sk,
++			      const __addrpair cookie, const __portpair ports,
++			      int dif, int sdif)
++{
++	if (!net_eq(sock_net(sk), net) ||
++	    sk->sk_portpair != ports ||
++	    sk->sk_addrpair != cookie)
++	        return false;
++
++	/* READ_ONCE() paired with WRITE_ONCE() in sock_bindtoindex_locked() */
++	return inet_sk_bound_dev_eq(net, READ_ONCE(sk->sk_bound_dev_if), dif,
++				    sdif);
++}
+ 
+ /* Sockets in TCP_CLOSE state are _always_ taken out of the hash, so we need
+  * not check it for lookups anymore, thanks Alexey. -DaveM
+diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
+index 3c039d4b0e480..f0faf9d0e7fb7 100644
+--- a/include/net/inet_sock.h
++++ b/include/net/inet_sock.h
+@@ -117,14 +117,15 @@ static inline u32 inet_request_mark(const struct sock *sk, struct sk_buff *skb)
+ static inline int inet_request_bound_dev_if(const struct sock *sk,
+ 					    struct sk_buff *skb)
+ {
++	int bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
+ #ifdef CONFIG_NET_L3_MASTER_DEV
+ 	struct net *net = sock_net(sk);
+ 
+-	if (!sk->sk_bound_dev_if && net->ipv4.sysctl_tcp_l3mdev_accept)
++	if (!bound_dev_if && READ_ONCE(net->ipv4.sysctl_tcp_l3mdev_accept))
+ 		return l3mdev_master_ifindex_by_index(net, skb->skb_iif);
+ #endif
+ 
+-	return sk->sk_bound_dev_if;
++	return bound_dev_if;
+ }
+ 
+ static inline int inet_sk_bound_l3mdev(const struct sock *sk)
+@@ -132,7 +133,7 @@ static inline int inet_sk_bound_l3mdev(const struct sock *sk)
+ #ifdef CONFIG_NET_L3_MASTER_DEV
+ 	struct net *net = sock_net(sk);
+ 
+-	if (!net->ipv4.sysctl_tcp_l3mdev_accept)
++	if (!READ_ONCE(net->ipv4.sysctl_tcp_l3mdev_accept))
+ 		return l3mdev_master_ifindex_by_index(net,
+ 						      sk->sk_bound_dev_if);
+ #endif
+@@ -148,6 +149,17 @@ static inline bool inet_bound_dev_eq(bool l3mdev_accept, int bound_dev_if,
+ 	return bound_dev_if == dif || bound_dev_if == sdif;
+ }
+ 
++static inline bool inet_sk_bound_dev_eq(struct net *net, int bound_dev_if,
++					int dif, int sdif)
++{
++#if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV)
++	return inet_bound_dev_eq(!!READ_ONCE(net->ipv4.sysctl_tcp_l3mdev_accept),
++				 bound_dev_if, dif, sdif);
++#else
++	return inet_bound_dev_eq(true, bound_dev_if, dif, sdif);
++#endif
++}
++
+ struct inet_cork {
+ 	unsigned int		flags;
+ 	__be32			addr;
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 83854cec4a471..333131f47ac13 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -160,9 +160,6 @@ typedef __u64 __bitwise __addrpair;
+  *	for struct sock and struct inet_timewait_sock.
+  */
+ struct sock_common {
+-	/* skc_daddr and skc_rcv_saddr must be grouped on a 8 bytes aligned
+-	 * address on 64bit arches : cf INET_MATCH()
+-	 */
+ 	union {
+ 		__addrpair	skc_addrpair;
+ 		struct {
+@@ -1468,19 +1465,23 @@ static inline bool sk_has_account(struct sock *sk)
+ 
+ static inline bool sk_wmem_schedule(struct sock *sk, int size)
+ {
++	int delta;
++
+ 	if (!sk_has_account(sk))
+ 		return true;
+-	return size <= sk->sk_forward_alloc ||
+-		__sk_mem_schedule(sk, size, SK_MEM_SEND);
++	delta = size - sk->sk_forward_alloc;
++	return delta <= 0 || __sk_mem_schedule(sk, delta, SK_MEM_SEND);
+ }
+ 
+ static inline bool
+ sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size)
+ {
++	int delta;
++
+ 	if (!sk_has_account(sk))
+ 		return true;
+-	return size <= sk->sk_forward_alloc ||
+-		__sk_mem_schedule(sk, size, SK_MEM_RECV) ||
++	delta = size - sk->sk_forward_alloc;
++	return delta <= 0 || __sk_mem_schedule(sk, delta, SK_MEM_RECV) ||
+ 		skb_pfmemalloc(skb);
+ }
+ 
+diff --git a/include/trace/events/block.h b/include/trace/events/block.h
+index 34d64ca306b1c..76a6b3bbc01fe 100644
+--- a/include/trace/events/block.h
++++ b/include/trace/events/block.h
+@@ -64,7 +64,6 @@ DEFINE_EVENT(block_buffer, block_dirty_buffer,
+ 
+ /**
+  * block_rq_requeue - place block IO request back on a queue
+- * @q: queue holding operation
+  * @rq: block IO operation request
+  *
+  * The block operation request @rq is being placed back into queue
+@@ -73,9 +72,9 @@ DEFINE_EVENT(block_buffer, block_dirty_buffer,
+  */
+ TRACE_EVENT(block_rq_requeue,
+ 
+-	TP_PROTO(struct request_queue *q, struct request *rq),
++	TP_PROTO(struct request *rq),
+ 
+-	TP_ARGS(q, rq),
++	TP_ARGS(rq),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(  dev_t,	dev			)
+@@ -147,9 +146,9 @@ TRACE_EVENT(block_rq_complete,
+ 
+ DECLARE_EVENT_CLASS(block_rq,
+ 
+-	TP_PROTO(struct request_queue *q, struct request *rq),
++	TP_PROTO(struct request *rq),
+ 
+-	TP_ARGS(q, rq),
++	TP_ARGS(rq),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(  dev_t,	dev			)
+@@ -181,7 +180,6 @@ DECLARE_EVENT_CLASS(block_rq,
+ 
+ /**
+  * block_rq_insert - insert block operation request into queue
+- * @q: target queue
+  * @rq: block IO operation request
+  *
+  * Called immediately before block operation request @rq is inserted
+@@ -191,14 +189,13 @@ DECLARE_EVENT_CLASS(block_rq,
+  */
+ DEFINE_EVENT(block_rq, block_rq_insert,
+ 
+-	TP_PROTO(struct request_queue *q, struct request *rq),
++	TP_PROTO(struct request *rq),
+ 
+-	TP_ARGS(q, rq)
++	TP_ARGS(rq)
+ );
+ 
+ /**
+  * block_rq_issue - issue pending block IO request operation to device driver
+- * @q: queue holding operation
+  * @rq: block IO operation operation request
+  *
+  * Called when block operation request @rq from queue @q is sent to a
+@@ -206,14 +203,13 @@ DEFINE_EVENT(block_rq, block_rq_insert,
+  */
+ DEFINE_EVENT(block_rq, block_rq_issue,
+ 
+-	TP_PROTO(struct request_queue *q, struct request *rq),
++	TP_PROTO(struct request *rq),
+ 
+-	TP_ARGS(q, rq)
++	TP_ARGS(rq)
+ );
+ 
+ /**
+  * block_rq_merge - merge request with another one in the elevator
+- * @q: queue holding operation
+  * @rq: block IO operation operation request
+  *
+  * Called when block operation request @rq from queue @q is merged to another
+@@ -221,9 +217,9 @@ DEFINE_EVENT(block_rq, block_rq_issue,
+  */
+ DEFINE_EVENT(block_rq, block_rq_merge,
+ 
+-	TP_PROTO(struct request_queue *q, struct request *rq),
++	TP_PROTO(struct request *rq),
+ 
+-	TP_ARGS(q, rq)
++	TP_ARGS(rq)
+ );
+ 
+ /**
+@@ -605,7 +601,6 @@ TRACE_EVENT(block_bio_remap,
+ 
+ /**
+  * block_rq_remap - map request for a block operation request
+- * @q: queue holding the operation
+  * @rq: block IO operation request
+  * @dev: device for the operation
+  * @from: original sector for the operation
+@@ -616,10 +611,9 @@ TRACE_EVENT(block_bio_remap,
+  */
+ TRACE_EVENT(block_rq_remap,
+ 
+-	TP_PROTO(struct request_queue *q, struct request *rq, dev_t dev,
+-		 sector_t from),
++	TP_PROTO(struct request *rq, dev_t dev, sector_t from),
+ 
+-	TP_ARGS(q, rq, dev, from),
++	TP_ARGS(rq, dev, from),
+ 
+ 	TP_STRUCT__entry(
+ 		__field( dev_t,		dev		)
+diff --git a/include/trace/events/spmi.h b/include/trace/events/spmi.h
+index 8b60efe18ba68..a6819fd85cdf4 100644
+--- a/include/trace/events/spmi.h
++++ b/include/trace/events/spmi.h
+@@ -21,15 +21,15 @@ TRACE_EVENT(spmi_write_begin,
+ 		__field		( u8,         sid       )
+ 		__field		( u16,        addr      )
+ 		__field		( u8,         len       )
+-		__dynamic_array	( u8,   buf,  len + 1   )
++		__dynamic_array	( u8,   buf,  len       )
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->opcode = opcode;
+ 		__entry->sid    = sid;
+ 		__entry->addr   = addr;
+-		__entry->len    = len + 1;
+-		memcpy(__get_dynamic_array(buf), buf, len + 1);
++		__entry->len    = len;
++		memcpy(__get_dynamic_array(buf), buf, len);
+ 	),
+ 
+ 	TP_printk("opc=%d sid=%02d addr=0x%04x len=%d buf=0x[%*phD]",
+@@ -92,7 +92,7 @@ TRACE_EVENT(spmi_read_end,
+ 		__field		( u16,        addr      )
+ 		__field		( int,        ret       )
+ 		__field		( u8,         len       )
+-		__dynamic_array	( u8,   buf,  len + 1   )
++		__dynamic_array	( u8,   buf,  len       )
+ 	),
+ 
+ 	TP_fast_assign(
+@@ -100,8 +100,8 @@ TRACE_EVENT(spmi_read_end,
+ 		__entry->sid    = sid;
+ 		__entry->addr   = addr;
+ 		__entry->ret    = ret;
+-		__entry->len    = len + 1;
+-		memcpy(__get_dynamic_array(buf), buf, len + 1);
++		__entry->len    = len;
++		memcpy(__get_dynamic_array(buf), buf, len);
+ 	),
+ 
+ 	TP_printk("opc=%d sid=%02d addr=0x%04x ret=%d len=%02d buf=0x[%*phD]",
+diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
+index 7785961d82bae..d74c076e9e2b4 100644
+--- a/include/trace/trace_events.h
++++ b/include/trace/trace_events.h
+@@ -400,16 +400,18 @@ static struct trace_event_functions trace_event_type_funcs_##call = {	\
+ 
+ #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
+ 
++#define ALIGN_STRUCTFIELD(type) ((int)(offsetof(struct {char a; type b;}, b)))
++
+ #undef __field_ext
+ #define __field_ext(_type, _item, _filter_type) {			\
+ 	.type = #_type, .name = #_item,					\
+-	.size = sizeof(_type), .align = __alignof__(_type),		\
++	.size = sizeof(_type), .align = ALIGN_STRUCTFIELD(_type),	\
+ 	.is_signed = is_signed_type(_type), .filter_type = _filter_type },
+ 
+ #undef __field_struct_ext
+ #define __field_struct_ext(_type, _item, _filter_type) {		\
+ 	.type = #_type, .name = #_item,					\
+-	.size = sizeof(_type), .align = __alignof__(_type),		\
++	.size = sizeof(_type), .align = ALIGN_STRUCTFIELD(_type),	\
+ 	0, .filter_type = _filter_type },
+ 
+ #undef __field
+@@ -421,7 +423,7 @@ static struct trace_event_functions trace_event_type_funcs_##call = {	\
+ #undef __array
+ #define __array(_type, _item, _len) {					\
+ 	.type = #_type"["__stringify(_len)"]", .name = #_item,		\
+-	.size = sizeof(_type[_len]), .align = __alignof__(_type),	\
++	.size = sizeof(_type[_len]), .align = ALIGN_STRUCTFIELD(_type),	\
+ 	.is_signed = is_signed_type(_type), .filter_type = FILTER_OTHER },
+ 
+ #undef __dynamic_array
+diff --git a/include/uapi/linux/can/error.h b/include/uapi/linux/can/error.h
+index 34633283de641..a1000cb630632 100644
+--- a/include/uapi/linux/can/error.h
++++ b/include/uapi/linux/can/error.h
+@@ -120,6 +120,9 @@
+ #define CAN_ERR_TRX_CANL_SHORT_TO_GND  0x70 /* 0111 0000 */
+ #define CAN_ERR_TRX_CANL_SHORT_TO_CANH 0x80 /* 1000 0000 */
+ 
+-/* controller specific additional information / data[5..7] */
++/* data[5] is reserved (do not use) */
++
++/* TX error counter / data[6] */
++/* RX error counter / data[7] */
+ 
+ #endif /* _UAPI_CAN_ERROR_H */
+diff --git a/include/uapi/linux/netfilter/xt_IDLETIMER.h b/include/uapi/linux/netfilter/xt_IDLETIMER.h
+index 49ddcdc61c094..7bfb31a66fc9b 100644
+--- a/include/uapi/linux/netfilter/xt_IDLETIMER.h
++++ b/include/uapi/linux/netfilter/xt_IDLETIMER.h
+@@ -1,6 +1,5 @@
++/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+ /*
+- * linux/include/linux/netfilter/xt_IDLETIMER.h
+- *
+  * Header file for Xtables timer target module.
+  *
+  * Copyright (C) 2004, 2010 Nokia Corporation
+@@ -10,20 +9,6 @@
+  * by Luciano Coelho <luciano.coelho@nokia.com>
+  *
+  * Contact: Luciano Coelho <luciano.coelho@nokia.com>
+- *
+- * This program is free software; you can redistribute it and/or
+- * modify it under the terms of the GNU General Public License
+- * version 2 as published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope that it will be useful, but
+- * WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+- * General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+- * 02110-1301 USA
+  */
+ 
+ #ifndef _XT_IDLETIMER_H
+diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
+index 142b184eca8b4..7e0d526dd96f3 100644
+--- a/include/uapi/linux/pci_regs.h
++++ b/include/uapi/linux/pci_regs.h
+@@ -837,6 +837,13 @@
+ #define  PCI_PWR_CAP_BUDGET(x)	((x) & 1)	/* Included in system budget */
+ #define PCI_EXT_CAP_PWR_SIZEOF	16
+ 
++/* Root Complex Event Collector Endpoint Association  */
++#define PCI_RCEC_RCIEP_BITMAP	4	/* Associated Bitmap for RCiEPs */
++#define PCI_RCEC_BUSN		8	/* RCEC Associated Bus Numbers */
++#define  PCI_RCEC_BUSN_REG_VER	0x02	/* Least version with BUSN present */
++#define  PCI_RCEC_BUSN_NEXT(x)	(((x) >> 8) & 0xff)
++#define  PCI_RCEC_BUSN_LAST(x)	(((x) >> 16) & 0xff)
++
+ /* Vendor-Specific (VSEC, PCI_EXT_CAP_ID_VNDR) */
+ #define PCI_VNDR_HEADER		4	/* Vendor-Specific Header */
+ #define  PCI_VNDR_HEADER_ID(x)	((x) & 0xffff)
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 15ddc4292bc0b..de636b7445b11 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -11152,6 +11152,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ 		/* Below members will be freed only at prog->aux */
+ 		func[i]->aux->btf = prog->aux->btf;
+ 		func[i]->aux->func_info = prog->aux->func_info;
++		func[i]->aux->func_info_cnt = prog->aux->func_info_cnt;
+ 		func[i]->aux->poke_tab = prog->aux->poke_tab;
+ 		func[i]->aux->size_poke_tab = prog->aux->size_poke_tab;
+ 
+@@ -11164,9 +11165,6 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ 				poke->aux = func[i]->aux;
+ 		}
+ 
+-		/* Use bpf_prog_F_tag to indicate functions in stack traces.
+-		 * Long term would need debug info to populate names
+-		 */
+ 		func[i]->aux->name[0] = 'F';
+ 		func[i]->aux->stack_depth = env->subprog_info[i].stack_depth;
+ 		func[i]->jit_requested = 1;
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index ec39e123c2a51..c51863b63f93a 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -2162,7 +2162,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 		goto out_unlock;
+ 
+ 	cgroup_taskset_for_each(task, css, tset) {
+-		ret = task_can_attach(task, cs->cpus_allowed);
++		ret = task_can_attach(task, cs->effective_cpus);
+ 		if (ret)
+ 			goto out_unlock;
+ 		ret = security_task_setscheduler(task);
+diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
+index 164a031cfdb66..0f2a44fc09719 100644
+--- a/kernel/irq/Kconfig
++++ b/kernel/irq/Kconfig
+@@ -82,6 +82,7 @@ config IRQ_FASTEOI_HIERARCHY_HANDLERS
+ # Generic IRQ IPI support
+ config GENERIC_IRQ_IPI
+ 	bool
++	depends on SMP
+ 	select IRQ_DOMAIN_HIERARCHY
+ 
+ # Generic MSI interrupt support
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index 0b70811fd9561..621d8dd157bc1 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -1543,7 +1543,8 @@ int irq_chip_request_resources_parent(struct irq_data *data)
+ 	if (data->chip->irq_request_resources)
+ 		return data->chip->irq_request_resources(data);
+ 
+-	return -ENOSYS;
++	/* no error on missing optional irq_chip::irq_request_resources */
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(irq_chip_request_resources_parent);
+ 
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index cdea59acd66bf..a397042e46607 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1640,7 +1640,8 @@ static int check_kprobe_address_safe(struct kprobe *p,
+ 	preempt_disable();
+ 
+ 	/* Ensure it is not in reserved area nor out of text */
+-	if (!kernel_text_address((unsigned long) p->addr) ||
++	if (!(core_kernel_text((unsigned long) p->addr) ||
++	    is_module_text_address((unsigned long) p->addr)) ||
+ 	    within_kprobe_blacklist((unsigned long) p->addr) ||
+ 	    jump_label_text_reserved(p->addr, p->addr) ||
+ 	    static_call_text_reserved(p->addr, p->addr) ||
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index b6683cefe19a4..6cbd2b4444769 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -1397,7 +1397,7 @@ static int add_lock_to_list(struct lock_class *this,
+ /*
+  * For good efficiency of modular, we use power of 2
+  */
+-#define MAX_CIRCULAR_QUEUE_SIZE		4096UL
++#define MAX_CIRCULAR_QUEUE_SIZE		(1UL << CONFIG_LOCKDEP_CIRCULAR_QUEUE_BITS)
+ #define CQ_MASK				(MAX_CIRCULAR_QUEUE_SIZE-1)
+ 
+ /*
+@@ -5139,9 +5139,10 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
+ 		return 0;
+ 	}
+ 
+-	lockdep_init_map_waits(lock, name, key, 0,
+-			       lock->wait_type_inner,
+-			       lock->wait_type_outer);
++	lockdep_init_map_type(lock, name, key, 0,
++			      lock->wait_type_inner,
++			      lock->wait_type_outer,
++			      lock->lock_type);
+ 	class = register_lock_class(lock, subclass, 0);
+ 	hlock->class_idx = class - lock_classes;
+ 
+diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
+index a19b016353478..bbe9000260d02 100644
+--- a/kernel/locking/lockdep_internals.h
++++ b/kernel/locking/lockdep_internals.h
+@@ -99,16 +99,16 @@ static const unsigned long LOCKF_USED_IN_IRQ_READ =
+ #define MAX_STACK_TRACE_ENTRIES	262144UL
+ #define STACK_TRACE_HASH_SIZE	8192
+ #else
+-#define MAX_LOCKDEP_ENTRIES	32768UL
++#define MAX_LOCKDEP_ENTRIES	(1UL << CONFIG_LOCKDEP_BITS)
+ 
+-#define MAX_LOCKDEP_CHAINS_BITS	16
++#define MAX_LOCKDEP_CHAINS_BITS	CONFIG_LOCKDEP_CHAINS_BITS
+ 
+ /*
+  * Stack-trace: tightly packed array of stack backtrace
+  * addresses. Protected by the hash_lock.
+  */
+-#define MAX_STACK_TRACE_ENTRIES	524288UL
+-#define STACK_TRACE_HASH_SIZE	16384
++#define MAX_STACK_TRACE_ENTRIES	(1UL << CONFIG_LOCKDEP_STACK_TRACE_BITS)
++#define STACK_TRACE_HASH_SIZE	(1 << CONFIG_LOCKDEP_STACK_TRACE_HASH_BITS)
+ #endif
+ 
+ /*
+diff --git a/kernel/power/user.c b/kernel/power/user.c
+index 740723bb38852..13cca2e2c2bc6 100644
+--- a/kernel/power/user.c
++++ b/kernel/power/user.c
+@@ -26,6 +26,7 @@
+ 
+ #include "power.h"
+ 
++static bool need_wait;
+ 
+ static struct snapshot_data {
+ 	struct snapshot_handle handle;
+@@ -78,7 +79,7 @@ static int snapshot_open(struct inode *inode, struct file *filp)
+ 		 * Resuming.  We may need to wait for the image device to
+ 		 * appear.
+ 		 */
+-		wait_for_device_probe();
++		need_wait = true;
+ 
+ 		data->swap = -1;
+ 		data->mode = O_WRONLY;
+@@ -168,6 +169,11 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf,
+ 	ssize_t res;
+ 	loff_t pg_offp = *offp & ~PAGE_MASK;
+ 
++	if (need_wait) {
++		wait_for_device_probe();
++		need_wait = false;
++	}
++
+ 	lock_system_sleep();
+ 
+ 	data = filp->private_data;
+@@ -244,6 +250,11 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
+ 	loff_t size;
+ 	sector_t offset;
+ 
++	if (need_wait) {
++		wait_for_device_probe();
++		need_wait = false;
++	}
++
+ 	if (_IOC_TYPE(cmd) != SNAPSHOT_IOC_MAGIC)
+ 		return -ENOTTY;
+ 	if (_IOC_NR(cmd) > SNAPSHOT_IOC_MAXNR)
+diff --git a/kernel/profile.c b/kernel/profile.c
+index b47fe52f0ade4..737b1c704aa88 100644
+--- a/kernel/profile.c
++++ b/kernel/profile.c
+@@ -109,6 +109,13 @@ int __ref profile_init(void)
+ 
+ 	/* only text is profiled */
+ 	prof_len = (_etext - _stext) >> prof_shift;
++
++	if (!prof_len) {
++		pr_warn("profiling shift: %u too large\n", prof_shift);
++		prof_on = 0;
++		return -EINVAL;
++	}
++
+ 	buffer_bytes = prof_len*sizeof(atomic_t);
+ 
+ 	if (!alloc_cpumask_var(&prof_cpu_mask, GFP_KERNEL))
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index e437d946b27bb..da96a309eefed 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -844,8 +844,9 @@ int tg_nop(struct task_group *tg, void *data)
+ }
+ #endif
+ 
+-static void set_load_weight(struct task_struct *p, bool update_load)
++static void set_load_weight(struct task_struct *p)
+ {
++	bool update_load = !(READ_ONCE(p->state) & TASK_NEW);
+ 	int prio = p->static_prio - MAX_RT_PRIO;
+ 	struct load_weight *load = &p->se.load;
+ 
+@@ -2671,8 +2672,12 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
+ 	 * CPU then use the wakelist to offload the task activation to
+ 	 * the soon-to-be-idle CPU as the current CPU is likely busy.
+ 	 * nr_running is checked to avoid unnecessary task stacking.
++	 *
++	 * Note that we can only get here with (wakee) p->on_rq=0,
++	 * p->on_cpu can be whatever, we've done the dequeue, so
++	 * the wakee has been accounted out of ->nr_running.
+ 	 */
+-	if ((wake_flags & WF_ON_CPU) && cpu_rq(cpu)->nr_running <= 1)
++	if ((wake_flags & WF_ON_CPU) && !cpu_rq(cpu)->nr_running)
+ 		return true;
+ 
+ 	return false;
+@@ -3262,7 +3267,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ 			p->static_prio = NICE_TO_PRIO(0);
+ 
+ 		p->prio = p->normal_prio = p->static_prio;
+-		set_load_weight(p, false);
++		set_load_weight(p);
+ 
+ 		/*
+ 		 * We don't need the reset flag anymore after the fork. It has
+@@ -5011,7 +5016,7 @@ void set_user_nice(struct task_struct *p, long nice)
+ 		put_prev_task(rq, p);
+ 
+ 	p->static_prio = NICE_TO_PRIO(nice);
+-	set_load_weight(p, true);
++	set_load_weight(p);
+ 	old_prio = p->prio;
+ 	p->prio = effective_prio(p);
+ 
+@@ -5184,7 +5189,7 @@ static void __setscheduler_params(struct task_struct *p,
+ 	 */
+ 	p->rt_priority = attr->sched_priority;
+ 	p->normal_prio = normal_prio(p);
+-	set_load_weight(p, true);
++	set_load_weight(p);
+ }
+ 
+ /*
+@@ -6586,7 +6591,7 @@ int cpuset_cpumask_can_shrink(const struct cpumask *cur,
+ }
+ 
+ int task_can_attach(struct task_struct *p,
+-		    const struct cpumask *cs_cpus_allowed)
++		    const struct cpumask *cs_effective_cpus)
+ {
+ 	int ret = 0;
+ 
+@@ -6605,8 +6610,13 @@ int task_can_attach(struct task_struct *p,
+ 	}
+ 
+ 	if (dl_task(p) && !cpumask_intersects(task_rq(p)->rd->span,
+-					      cs_cpus_allowed))
+-		ret = dl_task_can_attach(p, cs_cpus_allowed);
++					      cs_effective_cpus)) {
++		int cpu = cpumask_any_and(cpu_active_mask, cs_effective_cpus);
++
++		if (unlikely(cpu >= nr_cpu_ids))
++			return -EINVAL;
++		ret = dl_cpu_busy(cpu, p);
++	}
+ 
+ out:
+ 	return ret;
+@@ -6865,8 +6875,10 @@ static void cpuset_cpu_active(void)
+ static int cpuset_cpu_inactive(unsigned int cpu)
+ {
+ 	if (!cpuhp_tasks_frozen) {
+-		if (dl_cpu_busy(cpu))
+-			return -EBUSY;
++		int ret = dl_cpu_busy(cpu, NULL);
++
++		if (ret)
++			return ret;
+ 		cpuset_update_active_cpus();
+ 	} else {
+ 		num_cpus_frozen++;
+@@ -7189,7 +7201,7 @@ void __init sched_init(void)
+ 		atomic_set(&rq->nr_iowait, 0);
+ 	}
+ 
+-	set_load_weight(&init_task, false);
++	set_load_weight(&init_task);
+ 
+ 	/*
+ 	 * The boot idle thread does lazy MMU switching as well:
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 933706106b983..aaf98771f9357 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2825,41 +2825,6 @@ bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr)
+ }
+ 
+ #ifdef CONFIG_SMP
+-int dl_task_can_attach(struct task_struct *p, const struct cpumask *cs_cpus_allowed)
+-{
+-	unsigned long flags, cap;
+-	unsigned int dest_cpu;
+-	struct dl_bw *dl_b;
+-	bool overflow;
+-	int ret;
+-
+-	dest_cpu = cpumask_any_and(cpu_active_mask, cs_cpus_allowed);
+-
+-	rcu_read_lock_sched();
+-	dl_b = dl_bw_of(dest_cpu);
+-	raw_spin_lock_irqsave(&dl_b->lock, flags);
+-	cap = dl_bw_capacity(dest_cpu);
+-	overflow = __dl_overflow(dl_b, cap, 0, p->dl.dl_bw);
+-	if (overflow) {
+-		ret = -EBUSY;
+-	} else {
+-		/*
+-		 * We reserve space for this task in the destination
+-		 * root_domain, as we can't fail after this point.
+-		 * We will free resources in the source root_domain
+-		 * later on (see set_cpus_allowed_dl()).
+-		 */
+-		int cpus = dl_bw_cpus(dest_cpu);
+-
+-		__dl_add(dl_b, p->dl.dl_bw, cpus);
+-		ret = 0;
+-	}
+-	raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+-	rcu_read_unlock_sched();
+-
+-	return ret;
+-}
+-
+ int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur,
+ 				 const struct cpumask *trial)
+ {
+@@ -2881,7 +2846,7 @@ int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur,
+ 	return ret;
+ }
+ 
+-bool dl_cpu_busy(unsigned int cpu)
++int dl_cpu_busy(int cpu, struct task_struct *p)
+ {
+ 	unsigned long flags, cap;
+ 	struct dl_bw *dl_b;
+@@ -2891,11 +2856,22 @@ bool dl_cpu_busy(unsigned int cpu)
+ 	dl_b = dl_bw_of(cpu);
+ 	raw_spin_lock_irqsave(&dl_b->lock, flags);
+ 	cap = dl_bw_capacity(cpu);
+-	overflow = __dl_overflow(dl_b, cap, 0, 0);
++	overflow = __dl_overflow(dl_b, cap, 0, p ? p->dl.dl_bw : 0);
++
++	if (!overflow && p) {
++		/*
++		 * We reserve space for this task in the destination
++		 * root_domain, as we can't fail after this point.
++		 * We will free resources in the source root_domain
++		 * later on (see set_cpus_allowed_dl()).
++		 */
++		__dl_add(dl_b, p->dl.dl_bw, dl_bw_cpus(cpu));
++	}
++
+ 	raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+ 	rcu_read_unlock_sched();
+ 
+-	return overflow;
++	return overflow ? -EBUSY : 0;
+ }
+ #endif
+ 
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 41b14d9242039..e6f22836c600b 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -437,7 +437,7 @@ static inline void rt_queue_push_tasks(struct rq *rq)
+ #endif /* CONFIG_SMP */
+ 
+ static void enqueue_top_rt_rq(struct rt_rq *rt_rq);
+-static void dequeue_top_rt_rq(struct rt_rq *rt_rq);
++static void dequeue_top_rt_rq(struct rt_rq *rt_rq, unsigned int count);
+ 
+ static inline int on_rt_rq(struct sched_rt_entity *rt_se)
+ {
+@@ -558,7 +558,7 @@ static void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
+ 	rt_se = rt_rq->tg->rt_se[cpu];
+ 
+ 	if (!rt_se) {
+-		dequeue_top_rt_rq(rt_rq);
++		dequeue_top_rt_rq(rt_rq, rt_rq->rt_nr_running);
+ 		/* Kick cpufreq (see the comment in kernel/sched/sched.h). */
+ 		cpufreq_update_util(rq_of_rt_rq(rt_rq), 0);
+ 	}
+@@ -644,7 +644,7 @@ static inline void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
+ 
+ static inline void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
+ {
+-	dequeue_top_rt_rq(rt_rq);
++	dequeue_top_rt_rq(rt_rq, rt_rq->rt_nr_running);
+ }
+ 
+ static inline int rt_rq_throttled(struct rt_rq *rt_rq)
+@@ -1043,7 +1043,7 @@ static void update_curr_rt(struct rq *rq)
+ }
+ 
+ static void
+-dequeue_top_rt_rq(struct rt_rq *rt_rq)
++dequeue_top_rt_rq(struct rt_rq *rt_rq, unsigned int count)
+ {
+ 	struct rq *rq = rq_of_rt_rq(rt_rq);
+ 
+@@ -1054,7 +1054,7 @@ dequeue_top_rt_rq(struct rt_rq *rt_rq)
+ 
+ 	BUG_ON(!rq->nr_running);
+ 
+-	sub_nr_running(rq, rt_rq->rt_nr_running);
++	sub_nr_running(rq, count);
+ 	rt_rq->rt_queued = 0;
+ 
+ }
+@@ -1333,18 +1333,21 @@ static void __dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flag
+ static void dequeue_rt_stack(struct sched_rt_entity *rt_se, unsigned int flags)
+ {
+ 	struct sched_rt_entity *back = NULL;
++	unsigned int rt_nr_running;
+ 
+ 	for_each_sched_rt_entity(rt_se) {
+ 		rt_se->back = back;
+ 		back = rt_se;
+ 	}
+ 
+-	dequeue_top_rt_rq(rt_rq_of_se(back));
++	rt_nr_running = rt_rq_of_se(back)->rt_nr_running;
+ 
+ 	for (rt_se = back; rt_se; rt_se = rt_se->back) {
+ 		if (on_rt_rq(rt_se))
+ 			__dequeue_rt_entity(rt_se, flags);
+ 	}
++
++	dequeue_top_rt_rq(rt_rq_of_se(back), rt_nr_running);
+ }
+ 
+ static void enqueue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 8d39f5d99172a..12c65628801c6 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -347,9 +347,8 @@ extern void __setparam_dl(struct task_struct *p, const struct sched_attr *attr);
+ extern void __getparam_dl(struct task_struct *p, struct sched_attr *attr);
+ extern bool __checkparam_dl(const struct sched_attr *attr);
+ extern bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr);
+-extern int  dl_task_can_attach(struct task_struct *p, const struct cpumask *cs_cpus_allowed);
+ extern int  dl_cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial);
+-extern bool dl_cpu_busy(unsigned int cpu);
++extern int  dl_cpu_busy(int cpu, struct task_struct *p);
+ 
+ #ifdef CONFIG_CGROUP_SCHED
+ 
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 4ef90718c1146..544ce87ba38a7 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2209,6 +2209,7 @@ schedule_hrtimeout_range_clock(ktime_t *expires, u64 delta,
+ 
+ 	return !t.task ? 0 : -EINTR;
+ }
++EXPORT_SYMBOL_GPL(schedule_hrtimeout_range_clock);
+ 
+ /**
+  * schedule_hrtimeout_range - sleep until timeout
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index e12ce2821dba5..d9b48f7a35e0d 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -23,6 +23,7 @@
+ #include <linux/pvclock_gtod.h>
+ #include <linux/compiler.h>
+ #include <linux/audit.h>
++#include <linux/random.h>
+ 
+ #include "tick-internal.h"
+ #include "ntp_internal.h"
+@@ -1330,8 +1331,10 @@ out:
+ 	/* signal hrtimers about time change */
+ 	clock_was_set();
+ 
+-	if (!ret)
++	if (!ret) {
+ 		audit_tk_injoffset(ts_delta);
++		add_device_randomness(ts, sizeof(*ts));
++	}
+ 
+ 	return ret;
+ }
+@@ -2410,6 +2413,7 @@ int do_adjtimex(struct __kernel_timex *txc)
+ 	ret = timekeeping_validate_timex(txc);
+ 	if (ret)
+ 		return ret;
++	add_device_randomness(txc, sizeof(*txc));
+ 
+ 	if (txc->modes & ADJ_SETOFFSET) {
+ 		struct timespec64 delta;
+@@ -2427,6 +2431,7 @@ int do_adjtimex(struct __kernel_timex *txc)
+ 	audit_ntp_init(&ad);
+ 
+ 	ktime_get_real_ts64(&ts);
++	add_device_randomness(&ts, sizeof(ts));
+ 
+ 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
+ 	write_seqcount_begin(&tk_core.seq);
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index b89ff188a6183..15a376f85e09b 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -800,12 +800,12 @@ static u64 blk_trace_bio_get_cgid(struct request_queue *q, struct bio *bio)
+ #endif
+ 
+ static u64
+-blk_trace_request_get_cgid(struct request_queue *q, struct request *rq)
++blk_trace_request_get_cgid(struct request *rq)
+ {
+ 	if (!rq->bio)
+ 		return 0;
+ 	/* Use the first bio */
+-	return blk_trace_bio_get_cgid(q, rq->bio);
++	return blk_trace_bio_get_cgid(rq->q, rq->bio);
+ }
+ 
+ /*
+@@ -846,40 +846,35 @@ static void blk_add_trace_rq(struct request *rq, int error,
+ 	rcu_read_unlock();
+ }
+ 
+-static void blk_add_trace_rq_insert(void *ignore,
+-				    struct request_queue *q, struct request *rq)
++static void blk_add_trace_rq_insert(void *ignore, struct request *rq)
+ {
+ 	blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_INSERT,
+-			 blk_trace_request_get_cgid(q, rq));
++			 blk_trace_request_get_cgid(rq));
+ }
+ 
+-static void blk_add_trace_rq_issue(void *ignore,
+-				   struct request_queue *q, struct request *rq)
++static void blk_add_trace_rq_issue(void *ignore, struct request *rq)
+ {
+ 	blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_ISSUE,
+-			 blk_trace_request_get_cgid(q, rq));
++			 blk_trace_request_get_cgid(rq));
+ }
+ 
+-static void blk_add_trace_rq_merge(void *ignore,
+-				   struct request_queue *q, struct request *rq)
++static void blk_add_trace_rq_merge(void *ignore, struct request *rq)
+ {
+ 	blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_BACKMERGE,
+-			 blk_trace_request_get_cgid(q, rq));
++			 blk_trace_request_get_cgid(rq));
+ }
+ 
+-static void blk_add_trace_rq_requeue(void *ignore,
+-				     struct request_queue *q,
+-				     struct request *rq)
++static void blk_add_trace_rq_requeue(void *ignore, struct request *rq)
+ {
+ 	blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_REQUEUE,
+-			 blk_trace_request_get_cgid(q, rq));
++			 blk_trace_request_get_cgid(rq));
+ }
+ 
+ static void blk_add_trace_rq_complete(void *ignore, struct request *rq,
+ 			int error, unsigned int nr_bytes)
+ {
+ 	blk_add_trace_rq(rq, error, nr_bytes, BLK_TA_COMPLETE,
+-			 blk_trace_request_get_cgid(rq->q, rq));
++			 blk_trace_request_get_cgid(rq));
+ }
+ 
+ /**
+@@ -1087,16 +1082,14 @@ static void blk_add_trace_bio_remap(void *ignore,
+  *     Add a trace for that action.
+  *
+  **/
+-static void blk_add_trace_rq_remap(void *ignore,
+-				   struct request_queue *q,
+-				   struct request *rq, dev_t dev,
++static void blk_add_trace_rq_remap(void *ignore, struct request *rq, dev_t dev,
+ 				   sector_t from)
+ {
+ 	struct blk_trace *bt;
+ 	struct blk_io_trace_remap r;
+ 
+ 	rcu_read_lock();
+-	bt = rcu_dereference(q->blk_trace);
++	bt = rcu_dereference(rq->q->blk_trace);
+ 	if (likely(!bt)) {
+ 		rcu_read_unlock();
+ 		return;
+@@ -1107,14 +1100,13 @@ static void blk_add_trace_rq_remap(void *ignore,
+ 	r.sector_from = cpu_to_be64(from);
+ 
+ 	__blk_add_trace(bt, blk_rq_pos(rq), blk_rq_bytes(rq),
+-			rq_data_dir(rq), 0, BLK_TA_REMAP, 0,
+-			sizeof(r), &r, blk_trace_request_get_cgid(q, rq));
++			req_op(rq), rq->cmd_flags, BLK_TA_REMAP, 0,
++			sizeof(r), &r, blk_trace_request_get_cgid(rq));
+ 	rcu_read_unlock();
+ }
+ 
+ /**
+  * blk_add_driver_data - Add binary message with driver-specific data
+- * @q:		queue the io is for
+  * @rq:		io request
+  * @data:	driver-specific data
+  * @len:	length of driver-specific data
+@@ -1123,14 +1115,12 @@ static void blk_add_trace_rq_remap(void *ignore,
+  *     Some drivers might want to write driver-specific data per request.
+  *
+  **/
+-void blk_add_driver_data(struct request_queue *q,
+-			 struct request *rq,
+-			 void *data, size_t len)
++void blk_add_driver_data(struct request *rq, void *data, size_t len)
+ {
+ 	struct blk_trace *bt;
+ 
+ 	rcu_read_lock();
+-	bt = rcu_dereference(q->blk_trace);
++	bt = rcu_dereference(rq->q->blk_trace);
+ 	if (likely(!bt)) {
+ 		rcu_read_unlock();
+ 		return;
+@@ -1138,7 +1128,7 @@ void blk_add_driver_data(struct request_queue *q,
+ 
+ 	__blk_add_trace(bt, blk_rq_trace_sector(rq), blk_rq_bytes(rq), 0, 0,
+ 				BLK_TA_DRV_DATA, 0, len, data,
+-				blk_trace_request_get_cgid(q, rq));
++				blk_trace_request_get_cgid(rq));
+ 	rcu_read_unlock();
+ }
+ EXPORT_SYMBOL_GPL(blk_add_driver_data);
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 3656fa8837834..ce796ca869c22 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1307,6 +1307,46 @@ config LOCKDEP
+ config LOCKDEP_SMALL
+ 	bool
+ 
++config LOCKDEP_BITS
++	int "Bitsize for MAX_LOCKDEP_ENTRIES"
++	depends on LOCKDEP && !LOCKDEP_SMALL
++	range 10 30
++	default 15
++	help
++	  Try increasing this value if you hit "BUG: MAX_LOCKDEP_ENTRIES too low!" message.
++
++config LOCKDEP_CHAINS_BITS
++	int "Bitsize for MAX_LOCKDEP_CHAINS"
++	depends on LOCKDEP && !LOCKDEP_SMALL
++	range 10 30
++	default 16
++	help
++	  Try increasing this value if you hit "BUG: MAX_LOCKDEP_CHAINS too low!" message.
++
++config LOCKDEP_STACK_TRACE_BITS
++	int "Bitsize for MAX_STACK_TRACE_ENTRIES"
++	depends on LOCKDEP && !LOCKDEP_SMALL
++	range 10 30
++	default 19
++	help
++	  Try increasing this value if you hit "BUG: MAX_STACK_TRACE_ENTRIES too low!" message.
++
++config LOCKDEP_STACK_TRACE_HASH_BITS
++	int "Bitsize for STACK_TRACE_HASH_SIZE"
++	depends on LOCKDEP && !LOCKDEP_SMALL
++	range 10 30
++	default 14
++	help
++	  Try increasing this value if you need large MAX_STACK_TRACE_ENTRIES.
++
++config LOCKDEP_CIRCULAR_QUEUE_BITS
++	int "Bitsize for elements in circular_queue struct"
++	depends on LOCKDEP
++	range 10 30
++	default 12
++	help
++	  Try increasing this value if you hit "lockdep bfs error:-1" warning due to __cq_enqueue() failure.
++
+ config DEBUG_LOCKDEP
+ 	bool "Lock dependency engine debugging"
+ 	depends on DEBUG_KERNEL && LOCKDEP
+diff --git a/lib/bitmap.c b/lib/bitmap.c
+index 75006c4036e9e..27e08c0e547ec 100644
+--- a/lib/bitmap.c
++++ b/lib/bitmap.c
+@@ -3,17 +3,19 @@
+  * lib/bitmap.c
+  * Helper functions for bitmap.h.
+  */
+-#include <linux/export.h>
+-#include <linux/thread_info.h>
+-#include <linux/ctype.h>
+-#include <linux/errno.h>
++
+ #include <linux/bitmap.h>
+ #include <linux/bitops.h>
+ #include <linux/bug.h>
++#include <linux/ctype.h>
++#include <linux/device.h>
++#include <linux/errno.h>
++#include <linux/export.h>
+ #include <linux/kernel.h>
+ #include <linux/mm.h>
+ #include <linux/slab.h>
+ #include <linux/string.h>
++#include <linux/thread_info.h>
+ #include <linux/uaccess.h>
+ 
+ #include <asm/page.h>
+@@ -1262,6 +1264,38 @@ void bitmap_free(const unsigned long *bitmap)
+ }
+ EXPORT_SYMBOL(bitmap_free);
+ 
++static void devm_bitmap_free(void *data)
++{
++	unsigned long *bitmap = data;
++
++	bitmap_free(bitmap);
++}
++
++unsigned long *devm_bitmap_alloc(struct device *dev,
++				 unsigned int nbits, gfp_t flags)
++{
++	unsigned long *bitmap;
++	int ret;
++
++	bitmap = bitmap_alloc(nbits, flags);
++	if (!bitmap)
++		return NULL;
++
++	ret = devm_add_action_or_reset(dev, devm_bitmap_free, bitmap);
++	if (ret)
++		return NULL;
++
++	return bitmap;
++}
++EXPORT_SYMBOL_GPL(devm_bitmap_alloc);
++
++unsigned long *devm_bitmap_zalloc(struct device *dev,
++				  unsigned int nbits, gfp_t flags)
++{
++	return devm_bitmap_alloc(dev, nbits, flags | __GFP_ZERO);
++}
++EXPORT_SYMBOL_GPL(devm_bitmap_zalloc);
++
+ #if BITS_PER_LONG == 64
+ /**
+  * bitmap_from_arr32 - copy the contents of u32 array of bits to bitmap
+diff --git a/lib/livepatch/test_klp_callbacks_busy.c b/lib/livepatch/test_klp_callbacks_busy.c
+index 7ac845f65be56..133929e0ce8ff 100644
+--- a/lib/livepatch/test_klp_callbacks_busy.c
++++ b/lib/livepatch/test_klp_callbacks_busy.c
+@@ -16,10 +16,12 @@ MODULE_PARM_DESC(block_transition, "block_transition (default=false)");
+ 
+ static void busymod_work_func(struct work_struct *work);
+ static DECLARE_WORK(work, busymod_work_func);
++static DECLARE_COMPLETION(busymod_work_started);
+ 
+ static void busymod_work_func(struct work_struct *work)
+ {
+ 	pr_info("%s enter\n", __func__);
++	complete(&busymod_work_started);
+ 
+ 	while (READ_ONCE(block_transition)) {
+ 		/*
+@@ -37,6 +39,12 @@ static int test_klp_callbacks_busy_init(void)
+ 	pr_info("%s\n", __func__);
+ 	schedule_work(&work);
+ 
++	/*
++	 * To synchronize kernel messages, hold the init function from
++	 * exiting until the work function's entry message has printed.
++	 */
++	wait_for_completion(&busymod_work_started);
++
+ 	if (!block_transition) {
+ 		/*
+ 		 * Serialize output: print all messages from the work
+diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c
+index 525222e4f4099..2916606a93337 100644
+--- a/lib/smp_processor_id.c
++++ b/lib/smp_processor_id.c
+@@ -46,9 +46,9 @@ unsigned int check_preemption_disabled(const char *what1, const char *what2)
+ 
+ 	printk("caller is %pS\n", __builtin_return_address(0));
+ 	dump_stack();
+-	instrumentation_end();
+ 
+ out_enable:
++	instrumentation_end();
+ 	preempt_enable_no_resched_notrace();
+ out:
+ 	return this_cpu;
+diff --git a/lib/test_bpf.c b/lib/test_bpf.c
+index 4a9137c8551ad..8761b9797073c 100644
+--- a/lib/test_bpf.c
++++ b/lib/test_bpf.c
+@@ -6918,9 +6918,9 @@ static struct skb_segment_test skb_segment_tests[] __initconst = {
+ 		.build_skb = build_test_skb_linear_no_head_frag,
+ 		.features = NETIF_F_SG | NETIF_F_FRAGLIST |
+ 			    NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_GSO |
+-			    NETIF_F_LLTX_BIT | NETIF_F_GRO |
++			    NETIF_F_LLTX | NETIF_F_GRO |
+ 			    NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM |
+-			    NETIF_F_HW_VLAN_STAG_TX_BIT
++			    NETIF_F_HW_VLAN_STAG_TX
+ 	}
+ };
+ 
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 102f73ed4b1b9..a50042918cc7e 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1902,7 +1902,6 @@ unmap_and_free_vma:
+ 
+ 	/* Undo any partial mapping done by a device driver. */
+ 	unmap_region(mm, vma, prev, vma->vm_start, vma->vm_end);
+-	charged = 0;
+ 	if (vm_flags & VM_SHARED)
+ 		mapping_unmap_writable(file->f_mapping);
+ allow_write_and_free_vma:
+diff --git a/mm/mremap.c b/mm/mremap.c
+index d4c8d6cca3f46..3334c40222101 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -310,12 +310,10 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ 			 */
+ 			bool moved;
+ 
+-			if (need_rmap_locks)
+-				take_rmap_locks(vma);
++			take_rmap_locks(vma);
+ 			moved = move_normal_pmd(vma, old_addr, new_addr,
+ 						old_pmd, new_pmd);
+-			if (need_rmap_locks)
+-				drop_rmap_locks(vma);
++			drop_rmap_locks(vma);
+ 			if (moved)
+ 				continue;
+ #endif
+diff --git a/mm/util.c b/mm/util.c
+index ba9643de689ea..25bfda774f6fd 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -661,6 +661,21 @@ void kvfree_sensitive(const void *addr, size_t len)
+ }
+ EXPORT_SYMBOL(kvfree_sensitive);
+ 
++void *kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
++{
++	void *newp;
++
++	if (oldsize >= newsize)
++		return (void *)p;
++	newp = kvmalloc(newsize, flags);
++	if (!newp)
++		return NULL;
++	memcpy(newp, p, oldsize);
++	kvfree(p);
++	return newp;
++}
++EXPORT_SYMBOL(kvrealloc);
++
+ static inline void *__page_rmapping(struct page *page)
+ {
+ 	unsigned long mapping;
+diff --git a/net/9p/client.c b/net/9p/client.c
+index bf6ed00d7c37d..e8862cd4f91b4 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -893,16 +893,13 @@ static struct p9_fid *p9_fid_create(struct p9_client *clnt)
+ 	struct p9_fid *fid;
+ 
+ 	p9_debug(P9_DEBUG_FID, "clnt %p\n", clnt);
+-	fid = kmalloc(sizeof(struct p9_fid), GFP_KERNEL);
++	fid = kzalloc(sizeof(struct p9_fid), GFP_KERNEL);
+ 	if (!fid)
+ 		return NULL;
+ 
+-	memset(&fid->qid, 0, sizeof(struct p9_qid));
+ 	fid->mode = -1;
+ 	fid->uid = current_fsuid();
+ 	fid->clnt = clnt;
+-	fid->rdir = NULL;
+-	fid->fid = 0;
+ 
+ 	idr_preload(GFP_KERNEL);
+ 	spin_lock_irq(&clnt->lock);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 6a5ff5dcc09a9..88980015ba813 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1966,11 +1966,11 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
+ 						   bdaddr_t *dst,
+ 						   u8 link_type)
+ {
+-	struct l2cap_chan *c, *c1 = NULL;
++	struct l2cap_chan *c, *tmp, *c1 = NULL;
+ 
+ 	read_lock(&chan_list_lock);
+ 
+-	list_for_each_entry(c, &chan_list, global_l) {
++	list_for_each_entry_safe(c, tmp, &chan_list, global_l) {
+ 		if (state && c->state != state)
+ 			continue;
+ 
+@@ -1989,11 +1989,10 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
+ 			dst_match = !bacmp(&c->dst, dst);
+ 			if (src_match && dst_match) {
+ 				c = l2cap_chan_hold_unless_zero(c);
+-				if (!c)
+-					continue;
+-
+-				read_unlock(&chan_list_lock);
+-				return c;
++				if (c) {
++					read_unlock(&chan_list_lock);
++					return c;
++				}
+ 			}
+ 
+ 			/* Closest match */
+diff --git a/net/dccp/proto.c b/net/dccp/proto.c
+index 548cf0135647d..65e81e0199b04 100644
+--- a/net/dccp/proto.c
++++ b/net/dccp/proto.c
+@@ -747,11 +747,6 @@ int dccp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 
+ 	lock_sock(sk);
+ 
+-	if (dccp_qpolicy_full(sk)) {
+-		rc = -EAGAIN;
+-		goto out_release;
+-	}
+-
+ 	timeo = sock_sndtimeo(sk, noblock);
+ 
+ 	/*
+@@ -770,6 +765,11 @@ int dccp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	if (skb == NULL)
+ 		goto out_release;
+ 
++	if (dccp_qpolicy_full(sk)) {
++		rc = -EAGAIN;
++		goto out_discard;
++	}
++
+ 	if (sk->sk_state == DCCP_CLOSED) {
+ 		rc = -ENOTCONN;
+ 		goto out_discard;
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index f38b71cc3edbe..feb7f072f2b26 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -410,13 +410,11 @@ begin:
+ 	sk_nulls_for_each_rcu(sk, node, &head->chain) {
+ 		if (sk->sk_hash != hash)
+ 			continue;
+-		if (likely(INET_MATCH(sk, net, acookie,
+-				      saddr, daddr, ports, dif, sdif))) {
++		if (likely(INET_MATCH(net, sk, acookie, ports, dif, sdif))) {
+ 			if (unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))
+ 				goto out;
+-			if (unlikely(!INET_MATCH(sk, net, acookie,
+-						 saddr, daddr, ports,
+-						 dif, sdif))) {
++			if (unlikely(!INET_MATCH(net, sk, acookie,
++						 ports, dif, sdif))) {
+ 				sock_gen_put(sk);
+ 				goto begin;
+ 			}
+@@ -465,8 +463,7 @@ static int __inet_check_established(struct inet_timewait_death_row *death_row,
+ 		if (sk2->sk_hash != hash)
+ 			continue;
+ 
+-		if (likely(INET_MATCH(sk2, net, acookie,
+-					 saddr, daddr, ports, dif, sdif))) {
++		if (likely(INET_MATCH(net, sk2, acookie, ports, dif, sdif))) {
+ 			if (sk2->sk_state == TCP_TIME_WAIT) {
+ 				tw = inet_twsk(sk2);
+ 				if (twsk_unique(sk, sk2, twp))
+@@ -532,16 +529,14 @@ static bool inet_ehash_lookup_by_sk(struct sock *sk,
+ 		if (esk->sk_hash != sk->sk_hash)
+ 			continue;
+ 		if (sk->sk_family == AF_INET) {
+-			if (unlikely(INET_MATCH(esk, net, acookie,
+-						sk->sk_daddr,
+-						sk->sk_rcv_saddr,
++			if (unlikely(INET_MATCH(net, esk, acookie,
+ 						ports, dif, sdif))) {
+ 				return true;
+ 			}
+ 		}
+ #if IS_ENABLED(CONFIG_IPV6)
+ 		else if (sk->sk_family == AF_INET6) {
+-			if (unlikely(INET6_MATCH(esk, net,
++			if (unlikely(inet6_match(net, esk,
+ 						 &sk->sk_v6_daddr,
+ 						 &sk->sk_v6_rcv_saddr,
+ 						 ports, dif, sdif))) {
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 657b0a4d93599..4c9274cb92d55 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -3137,7 +3137,7 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs)
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	unsigned int cur_mss;
+ 	int diff, len, err;
+-
++	int avail_wnd;
+ 
+ 	/* Inconclusive MTU probe */
+ 	if (icsk->icsk_mtup.probe_size)
+@@ -3167,17 +3167,25 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs)
+ 		return -EHOSTUNREACH; /* Routing failure or similar. */
+ 
+ 	cur_mss = tcp_current_mss(sk);
++	avail_wnd = tcp_wnd_end(tp) - TCP_SKB_CB(skb)->seq;
+ 
+ 	/* If receiver has shrunk his window, and skb is out of
+ 	 * new window, do not retransmit it. The exception is the
+ 	 * case, when window is shrunk to zero. In this case
+-	 * our retransmit serves as a zero window probe.
++	 * our retransmit of one segment serves as a zero window probe.
+ 	 */
+-	if (!before(TCP_SKB_CB(skb)->seq, tcp_wnd_end(tp)) &&
+-	    TCP_SKB_CB(skb)->seq != tp->snd_una)
+-		return -EAGAIN;
++	if (avail_wnd <= 0) {
++		if (TCP_SKB_CB(skb)->seq != tp->snd_una)
++			return -EAGAIN;
++		avail_wnd = cur_mss;
++	}
+ 
+ 	len = cur_mss * segs;
++	if (len > avail_wnd) {
++		len = rounddown(avail_wnd, cur_mss);
++		if (!len)
++			len = avail_wnd;
++	}
+ 	if (skb->len > len) {
+ 		if (tcp_fragment(sk, TCP_FRAG_IN_RTX_QUEUE, skb, len,
+ 				 cur_mss, GFP_ATOMIC))
+@@ -3191,8 +3199,9 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs)
+ 		diff -= tcp_skb_pcount(skb);
+ 		if (diff)
+ 			tcp_adjust_pcount(sk, skb, diff);
+-		if (skb->len < cur_mss)
+-			tcp_retrans_try_collapse(sk, skb, cur_mss);
++		avail_wnd = min_t(int, avail_wnd, cur_mss);
++		if (skb->len < avail_wnd)
++			tcp_retrans_try_collapse(sk, skb, avail_wnd);
+ 	}
+ 
+ 	/* RFC3168, section 6.1.1.1. ECN fallback */
+@@ -3363,11 +3372,12 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
+  */
+ void sk_forced_mem_schedule(struct sock *sk, int size)
+ {
+-	int amt;
++	int delta, amt;
+ 
+-	if (size <= sk->sk_forward_alloc)
++	delta = size - sk->sk_forward_alloc;
++	if (delta <= 0)
+ 		return;
+-	amt = sk_mem_pages(size);
++	amt = sk_mem_pages(delta);
+ 	sk->sk_forward_alloc += amt * SK_MEM_QUANTUM;
+ 	sk_memory_allocated_add(sk, amt);
+ 
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 6056d5609167c..e498c7666ec62 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2490,8 +2490,7 @@ static struct sock *__udp4_lib_demux_lookup(struct net *net,
+ 	struct sock *sk;
+ 
+ 	udp_portaddr_for_each_entry_rcu(sk, &hslot2->head) {
+-		if (INET_MATCH(sk, net, acookie, rmt_addr,
+-			       loc_addr, ports, dif, sdif))
++		if (INET_MATCH(net, sk, acookie, ports, dif, sdif))
+ 			return sk;
+ 		/* Only check first socket in chain */
+ 		break;
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index 40203255ed88b..b4a5e01e12016 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -71,12 +71,12 @@ begin:
+ 	sk_nulls_for_each_rcu(sk, node, &head->chain) {
+ 		if (sk->sk_hash != hash)
+ 			continue;
+-		if (!INET6_MATCH(sk, net, saddr, daddr, ports, dif, sdif))
++		if (!inet6_match(net, sk, saddr, daddr, ports, dif, sdif))
+ 			continue;
+ 		if (unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))
+ 			goto out;
+ 
+-		if (unlikely(!INET6_MATCH(sk, net, saddr, daddr, ports, dif, sdif))) {
++		if (unlikely(!inet6_match(net, sk, saddr, daddr, ports, dif, sdif))) {
+ 			sock_gen_put(sk);
+ 			goto begin;
+ 		}
+@@ -269,7 +269,7 @@ static int __inet6_check_established(struct inet_timewait_death_row *death_row,
+ 		if (sk2->sk_hash != hash)
+ 			continue;
+ 
+-		if (likely(INET6_MATCH(sk2, net, saddr, daddr, ports,
++		if (likely(inet6_match(net, sk2, saddr, daddr, ports,
+ 				       dif, sdif))) {
+ 			if (sk2->sk_state == TCP_TIME_WAIT) {
+ 				tw = inet_twsk(sk2);
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 7745d8a402091..4e90e5a529455 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1019,7 +1019,7 @@ static struct sock *__udp6_lib_demux_lookup(struct net *net,
+ 
+ 	udp_portaddr_for_each_entry_rcu(sk, &hslot2->head) {
+ 		if (sk->sk_state == TCP_ESTABLISHED &&
+-		    INET6_MATCH(sk, net, rmt_addr, loc_addr, ports, dif, sdif))
++		    inet6_match(net, sk, rmt_addr, loc_addr, ports, dif, sdif))
+ 			return sk;
+ 		/* Only check first socket in chain */
+ 		break;
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index e18c3855f6161..461c03737da8d 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -645,13 +645,13 @@ static int sta_info_insert_finish(struct sta_info *sta) __acquires(RCU)
+ 	/* check if STA exists already */
+ 	if (sta_info_get_bss(sdata, sta->sta.addr)) {
+ 		err = -EEXIST;
+-		goto out_err;
++		goto out_cleanup;
+ 	}
+ 
+ 	sinfo = kzalloc(sizeof(struct station_info), GFP_KERNEL);
+ 	if (!sinfo) {
+ 		err = -ENOMEM;
+-		goto out_err;
++		goto out_cleanup;
+ 	}
+ 
+ 	local->num_sta++;
+@@ -707,8 +707,8 @@ static int sta_info_insert_finish(struct sta_info *sta) __acquires(RCU)
+  out_drop_sta:
+ 	local->num_sta--;
+ 	synchronize_net();
++ out_cleanup:
+ 	cleanup_single_sta(sta);
+- out_err:
+ 	mutex_unlock(&local->sta_mtx);
+ 	kfree(sinfo);
+ 	rcu_read_lock();
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index e5622e925ea97..2ba48f4e2d7da 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -114,6 +114,7 @@ static struct nft_trans *nft_trans_alloc_gfp(const struct nft_ctx *ctx,
+ 	if (trans == NULL)
+ 		return NULL;
+ 
++	INIT_LIST_HEAD(&trans->list);
+ 	trans->msg_type = msg_type;
+ 	trans->ctx	= *ctx;
+ 
+@@ -2265,6 +2266,7 @@ err:
+ }
+ 
+ static struct nft_chain *nft_chain_lookup_byid(const struct net *net,
++					       const struct nft_table *table,
+ 					       const struct nlattr *nla)
+ {
+ 	u32 id = ntohl(nla_get_be32(nla));
+@@ -2274,6 +2276,7 @@ static struct nft_chain *nft_chain_lookup_byid(const struct net *net,
+ 		struct nft_chain *chain = trans->ctx.chain;
+ 
+ 		if (trans->msg_type == NFT_MSG_NEWCHAIN &&
++		    chain->table == table &&
+ 		    id == nft_trans_chain_id(trans))
+ 			return chain;
+ 	}
+@@ -3154,6 +3157,7 @@ static int nft_table_validate(struct net *net, const struct nft_table *table)
+ }
+ 
+ static struct nft_rule *nft_rule_lookup_byid(const struct net *net,
++					     const struct nft_chain *chain,
+ 					     const struct nlattr *nla);
+ 
+ #define NFT_RULE_MAXEXPRS	128
+@@ -3199,7 +3203,7 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 			return -EOPNOTSUPP;
+ 
+ 	} else if (nla[NFTA_RULE_CHAIN_ID]) {
+-		chain = nft_chain_lookup_byid(net, nla[NFTA_RULE_CHAIN_ID]);
++		chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID]);
+ 		if (IS_ERR(chain)) {
+ 			NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN_ID]);
+ 			return PTR_ERR(chain);
+@@ -3241,7 +3245,7 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 				return PTR_ERR(old_rule);
+ 			}
+ 		} else if (nla[NFTA_RULE_POSITION_ID]) {
+-			old_rule = nft_rule_lookup_byid(net, nla[NFTA_RULE_POSITION_ID]);
++			old_rule = nft_rule_lookup_byid(net, chain, nla[NFTA_RULE_POSITION_ID]);
+ 			if (IS_ERR(old_rule)) {
+ 				NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION_ID]);
+ 				return PTR_ERR(old_rule);
+@@ -3380,6 +3384,7 @@ err1:
+ }
+ 
+ static struct nft_rule *nft_rule_lookup_byid(const struct net *net,
++					     const struct nft_chain *chain,
+ 					     const struct nlattr *nla)
+ {
+ 	u32 id = ntohl(nla_get_be32(nla));
+@@ -3389,6 +3394,7 @@ static struct nft_rule *nft_rule_lookup_byid(const struct net *net,
+ 		struct nft_rule *rule = nft_trans_rule(trans);
+ 
+ 		if (trans->msg_type == NFT_MSG_NEWRULE &&
++		    trans->ctx.chain == chain &&
+ 		    id == nft_trans_rule_id(trans))
+ 			return rule;
+ 	}
+@@ -3437,7 +3443,7 @@ static int nf_tables_delrule(struct net *net, struct sock *nlsk,
+ 
+ 			err = nft_delrule(&ctx, rule);
+ 		} else if (nla[NFTA_RULE_ID]) {
+-			rule = nft_rule_lookup_byid(net, nla[NFTA_RULE_ID]);
++			rule = nft_rule_lookup_byid(net, chain, nla[NFTA_RULE_ID]);
+ 			if (IS_ERR(rule)) {
+ 				NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_ID]);
+ 				return PTR_ERR(rule);
+@@ -3638,6 +3644,7 @@ static struct nft_set *nft_set_lookup_byhandle(const struct nft_table *table,
+ }
+ 
+ static struct nft_set *nft_set_lookup_byid(const struct net *net,
++					   const struct nft_table *table,
+ 					   const struct nlattr *nla, u8 genmask)
+ {
+ 	struct nft_trans *trans;
+@@ -3648,6 +3655,7 @@ static struct nft_set *nft_set_lookup_byid(const struct net *net,
+ 			struct nft_set *set = nft_trans_set(trans);
+ 
+ 			if (id == nft_trans_set_id(trans) &&
++			    set->table == table &&
+ 			    nft_active_genmask(set, genmask))
+ 				return set;
+ 		}
+@@ -3668,7 +3676,7 @@ struct nft_set *nft_set_lookup_global(const struct net *net,
+ 		if (!nla_set_id)
+ 			return set;
+ 
+-		set = nft_set_lookup_byid(net, nla_set_id, genmask);
++		set = nft_set_lookup_byid(net, table, nla_set_id, genmask);
+ 	}
+ 	return set;
+ }
+@@ -8669,7 +8677,7 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 						 tb[NFTA_VERDICT_CHAIN],
+ 						 genmask);
+ 		} else if (tb[NFTA_VERDICT_CHAIN_ID]) {
+-			chain = nft_chain_lookup_byid(ctx->net,
++			chain = nft_chain_lookup_byid(ctx->net, ctx->table,
+ 						      tb[NFTA_VERDICT_CHAIN_ID]);
+ 			if (IS_ERR(chain))
+ 				return PTR_ERR(chain);
+diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
+index cf7d974e0f619..29a208ed8fb88 100644
+--- a/net/rose/af_rose.c
++++ b/net/rose/af_rose.c
+@@ -191,6 +191,7 @@ static void rose_kill_by_device(struct net_device *dev)
+ 			rose_disconnect(s, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
+ 			if (rose->neighbour)
+ 				rose->neighbour->use--;
++			dev_put(rose->device);
+ 			rose->device = NULL;
+ 		}
+ 	}
+@@ -591,6 +592,8 @@ static struct sock *rose_make_new(struct sock *osk)
+ 	rose->idle	= orose->idle;
+ 	rose->defer	= orose->defer;
+ 	rose->device	= orose->device;
++	if (rose->device)
++		dev_hold(rose->device);
+ 	rose->qbitincl	= orose->qbitincl;
+ 
+ 	return sk;
+@@ -644,6 +647,7 @@ static int rose_release(struct socket *sock)
+ 		break;
+ 	}
+ 
++	dev_put(rose->device);
+ 	sock->sk = NULL;
+ 	release_sock(sk);
+ 	sock_put(sk);
+@@ -720,7 +724,6 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_le
+ 	struct rose_sock *rose = rose_sk(sk);
+ 	struct sockaddr_rose *addr = (struct sockaddr_rose *)uaddr;
+ 	unsigned char cause, diagnostic;
+-	struct net_device *dev;
+ 	ax25_uid_assoc *user;
+ 	int n, err = 0;
+ 
+@@ -777,9 +780,12 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_le
+ 	}
+ 
+ 	if (sock_flag(sk, SOCK_ZAPPED)) {	/* Must bind first - autobinding in this may or may not work */
++		struct net_device *dev;
++
+ 		sock_reset_flag(sk, SOCK_ZAPPED);
+ 
+-		if ((dev = rose_dev_first()) == NULL) {
++		dev = rose_dev_first();
++		if (!dev) {
+ 			err = -ENETUNREACH;
+ 			goto out_release;
+ 		}
+@@ -787,6 +793,7 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_le
+ 		user = ax25_findbyuid(current_euid());
+ 		if (!user) {
+ 			err = -EINVAL;
++			dev_put(dev);
+ 			goto out_release;
+ 		}
+ 
+diff --git a/net/rose/rose_route.c b/net/rose/rose_route.c
+index 95b198f84a3af..981bdefd478b0 100644
+--- a/net/rose/rose_route.c
++++ b/net/rose/rose_route.c
+@@ -613,6 +613,8 @@ struct net_device *rose_dev_first(void)
+ 			if (first == NULL || strncmp(dev->name, first->name, 3) < 0)
+ 				first = dev;
+ 	}
++	if (first)
++		dev_hold(first);
+ 	rcu_read_unlock();
+ 
+ 	return first;
+diff --git a/net/sched/cls_route.c b/net/sched/cls_route.c
+index 5efa3e7ace152..b775e681cb56e 100644
+--- a/net/sched/cls_route.c
++++ b/net/sched/cls_route.c
+@@ -424,6 +424,11 @@ static int route4_set_parms(struct net *net, struct tcf_proto *tp,
+ 			return -EINVAL;
+ 	}
+ 
++	if (!nhandle) {
++		NL_SET_ERR_MSG(extack, "Replacing with handle of 0 is invalid");
++		return -EINVAL;
++	}
++
+ 	h1 = to_hash(nhandle);
+ 	b = rtnl_dereference(head->table[h1]);
+ 	if (!b) {
+@@ -477,6 +482,11 @@ static int route4_change(struct net *net, struct sk_buff *in_skb,
+ 	int err;
+ 	bool new = true;
+ 
++	if (!handle) {
++		NL_SET_ERR_MSG(extack, "Creating with handle of 0 is invalid");
++		return -EINVAL;
++	}
++
+ 	if (opt == NULL)
+ 		return handle ? -EINVAL : 0;
+ 
+@@ -526,7 +536,7 @@ static int route4_change(struct net *net, struct sk_buff *in_skb,
+ 	rcu_assign_pointer(f->next, f1);
+ 	rcu_assign_pointer(*fp, f);
+ 
+-	if (fold && fold->handle && f->handle != fold->handle) {
++	if (fold) {
+ 		th = to_hash(fold->handle);
+ 		h = from_hash(fold->handle >> 16);
+ 		b = rtnl_dereference(head->table[th]);
+diff --git a/scripts/faddr2line b/scripts/faddr2line
+index 94ed98dd899f3..57099687e5e1d 100755
+--- a/scripts/faddr2line
++++ b/scripts/faddr2line
+@@ -112,7 +112,9 @@ __faddr2line() {
+ 	# section offsets.
+ 	local file_type=$(${READELF} --file-header $objfile |
+ 		${AWK} '$1 == "Type:" { print $2; exit }')
+-	[[ $file_type = "EXEC" ]] && is_vmlinux=1
++	if [[ $file_type = "EXEC" ]] || [[ $file_type == "DYN" ]]; then
++		is_vmlinux=1
++	fi
+ 
+ 	# Go through each of the object's symbols which match the func name.
+ 	# In rare cases there might be duplicates, in which case we print all
+diff --git a/security/selinux/ss/policydb.h b/security/selinux/ss/policydb.h
+index c24d4e1063ea0..ffc4e7bad2054 100644
+--- a/security/selinux/ss/policydb.h
++++ b/security/selinux/ss/policydb.h
+@@ -370,6 +370,8 @@ static inline int put_entry(const void *buf, size_t bytes, int num, struct polic
+ {
+ 	size_t len = bytes * num;
+ 
++	if (len > fp->len)
++		return -EINVAL;
+ 	memcpy(fp->data, buf, len);
+ 	fp->data += len;
+ 	fp->len -= len;
+diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c
+index f46204ab0b903..c10a264e9567d 100644
+--- a/sound/pci/hda/patch_cirrus.c
++++ b/sound/pci/hda/patch_cirrus.c
+@@ -396,6 +396,7 @@ static const struct snd_pci_quirk cs420x_fixup_tbl[] = {
+ 
+ 	/* codec SSID */
+ 	SND_PCI_QUIRK(0x106b, 0x0600, "iMac 14,1", CS420X_IMAC27_122),
++	SND_PCI_QUIRK(0x106b, 0x0900, "iMac 12,1", CS420X_IMAC27_122),
+ 	SND_PCI_QUIRK(0x106b, 0x1c00, "MacBookPro 8,1", CS420X_MBP81),
+ 	SND_PCI_QUIRK(0x106b, 0x2000, "iMac 12,2", CS420X_IMAC27_122),
+ 	SND_PCI_QUIRK(0x106b, 0x2800, "MacBookPro 10,1", CS420X_MBP101),
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 6b5d7b4760eda..2bd0a5839e805 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -215,6 +215,7 @@ enum {
+ 	CXT_PINCFG_LEMOTE_A1205,
+ 	CXT_PINCFG_COMPAQ_CQ60,
+ 	CXT_FIXUP_STEREO_DMIC,
++	CXT_PINCFG_LENOVO_NOTEBOOK,
+ 	CXT_FIXUP_INC_MIC_BOOST,
+ 	CXT_FIXUP_HEADPHONE_MIC_PIN,
+ 	CXT_FIXUP_HEADPHONE_MIC,
+@@ -765,6 +766,14 @@ static const struct hda_fixup cxt_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cxt_fixup_stereo_dmic,
+ 	},
++	[CXT_PINCFG_LENOVO_NOTEBOOK] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x1a, 0x05d71030 },
++			{ }
++		},
++		.chain_id = CXT_FIXUP_STEREO_DMIC,
++	},
+ 	[CXT_FIXUP_INC_MIC_BOOST] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cxt5066_increase_mic_boost,
+@@ -964,7 +973,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3905, "Lenovo G50-30", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x390b, "Lenovo G50-80", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
+-	SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
++	SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_PINCFG_LENOVO_NOTEBOOK),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo G50-70", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 6155261264083..b822248b666e6 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6666,6 +6666,7 @@ enum {
+ 	ALC269_FIXUP_LIMIT_INT_MIC_BOOST,
+ 	ALC269VB_FIXUP_ASUS_ZENBOOK,
+ 	ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A,
++	ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE,
+ 	ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED,
+ 	ALC269VB_FIXUP_ORDISSIMO_EVE2,
+ 	ALC283_FIXUP_CHROME_BOOK,
+@@ -7241,6 +7242,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269VB_FIXUP_ASUS_ZENBOOK,
+ 	},
++	[ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x18, 0x01a110f0 },  /* use as headset mic */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MIC
++	},
+ 	[ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc269_fixup_limit_int_mic_boost,
+@@ -8790,6 +8800,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
++	SND_PCI_QUIRK(0x103c, 0x86e7, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
++	SND_PCI_QUIRK(0x103c, 0x86e8, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ 	SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8720, "HP EliteBook x360 1040 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
+@@ -8805,6 +8817,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8783, "HP ZBook Fury 15 G7 Mobile Workstation",
+ 		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
++	SND_PCI_QUIRK(0x103c, 0x8786, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8787, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+@@ -8846,6 +8859,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
++	SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ 	SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+@@ -8921,6 +8935,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x4018, "Clevo NV40M[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x4019, "Clevo NV40MZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x4020, "Clevo NV40MB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x4041, "Clevo NV4[15]PZ", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x40a1, "Clevo NL40GU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x40c1, "Clevo NL40[CZ]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x40d1, "Clevo NL41DU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+diff --git a/sound/soc/atmel/mchp-spdifrx.c b/sound/soc/atmel/mchp-spdifrx.c
+index e6ded6f8453fc..46f3407ed0e81 100644
+--- a/sound/soc/atmel/mchp-spdifrx.c
++++ b/sound/soc/atmel/mchp-spdifrx.c
+@@ -288,15 +288,17 @@ static void mchp_spdifrx_isr_blockend_en(struct mchp_spdifrx_dev *dev)
+ 	spin_unlock_irqrestore(&dev->blockend_lock, flags);
+ }
+ 
+-/* called from atomic context only */
++/* called from atomic/non-atomic context */
+ static void mchp_spdifrx_isr_blockend_dis(struct mchp_spdifrx_dev *dev)
+ {
+-	spin_lock(&dev->blockend_lock);
++	unsigned long flags;
++
++	spin_lock_irqsave(&dev->blockend_lock, flags);
+ 	dev->blockend_refcount--;
+ 	/* don't enable BLOCKEND interrupt if it's already enabled */
+ 	if (dev->blockend_refcount == 0)
+ 		regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_BLOCKEND);
+-	spin_unlock(&dev->blockend_lock);
++	spin_unlock_irqrestore(&dev->blockend_lock, flags);
+ }
+ 
+ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
+@@ -575,6 +577,7 @@ static int mchp_spdifrx_subcode_ch_get(struct mchp_spdifrx_dev *dev,
+ 	if (ret <= 0) {
+ 		dev_dbg(dev->dev, "user data for channel %d timeout\n",
+ 			channel);
++		mchp_spdifrx_isr_blockend_dis(dev);
+ 		return ret;
+ 	}
+ 
+diff --git a/sound/soc/codecs/cros_ec_codec.c b/sound/soc/codecs/cros_ec_codec.c
+index 5c3b7e5e55d23..dedbaba83792a 100644
+--- a/sound/soc/codecs/cros_ec_codec.c
++++ b/sound/soc/codecs/cros_ec_codec.c
+@@ -994,6 +994,7 @@ static int cros_ec_codec_platform_probe(struct platform_device *pdev)
+ 			dev_dbg(dev, "ap_shm_phys_addr=%#llx len=%#x\n",
+ 				priv->ap_shm_phys_addr, priv->ap_shm_len);
+ 		}
++		of_node_put(node);
+ 	}
+ #endif
+ 
+diff --git a/sound/soc/codecs/da7210.c b/sound/soc/codecs/da7210.c
+index 3d05c37f676eb..4544ed8741b62 100644
+--- a/sound/soc/codecs/da7210.c
++++ b/sound/soc/codecs/da7210.c
+@@ -1336,6 +1336,8 @@ static int __init da7210_modinit(void)
+ 	int ret = 0;
+ #if IS_ENABLED(CONFIG_I2C)
+ 	ret = i2c_add_driver(&da7210_i2c_driver);
++	if (ret)
++		return ret;
+ #endif
+ #if defined(CONFIG_SPI_MASTER)
+ 	ret = spi_register_driver(&da7210_spi_driver);
+diff --git a/sound/soc/codecs/msm8916-wcd-digital.c b/sound/soc/codecs/msm8916-wcd-digital.c
+index 20a07c92b2fc2..098a58990f07d 100644
+--- a/sound/soc/codecs/msm8916-wcd-digital.c
++++ b/sound/soc/codecs/msm8916-wcd-digital.c
+@@ -328,8 +328,8 @@ static const struct snd_kcontrol_new rx1_mix2_inp1_mux = SOC_DAPM_ENUM(
+ static const struct snd_kcontrol_new rx2_mix2_inp1_mux = SOC_DAPM_ENUM(
+ 				"RX2 MIX2 INP1 Mux", rx2_mix2_inp1_chain_enum);
+ 
+-/* Digital Gain control -38.4 dB to +38.4 dB in 0.3 dB steps */
+-static const DECLARE_TLV_DB_SCALE(digital_gain, -3840, 30, 0);
++/* Digital Gain control -84 dB to +40 dB in 1 dB steps */
++static const DECLARE_TLV_DB_SCALE(digital_gain, -8400, 100, -8400);
+ 
+ /* Cutoff Freq for High Pass Filter at -3dB */
+ static const char * const hpf_cutoff_text[] = {
+@@ -510,15 +510,15 @@ static int wcd_iir_filter_info(struct snd_kcontrol *kcontrol,
+ 
+ static const struct snd_kcontrol_new msm8916_wcd_digital_snd_controls[] = {
+ 	SOC_SINGLE_S8_TLV("RX1 Digital Volume", LPASS_CDC_RX1_VOL_CTL_B2_CTL,
+-			  -128, 127, digital_gain),
++			-84, 40, digital_gain),
+ 	SOC_SINGLE_S8_TLV("RX2 Digital Volume", LPASS_CDC_RX2_VOL_CTL_B2_CTL,
+-			  -128, 127, digital_gain),
++			-84, 40, digital_gain),
+ 	SOC_SINGLE_S8_TLV("RX3 Digital Volume", LPASS_CDC_RX3_VOL_CTL_B2_CTL,
+-			  -128, 127, digital_gain),
++			-84, 40, digital_gain),
+ 	SOC_SINGLE_S8_TLV("TX1 Digital Volume", LPASS_CDC_TX1_VOL_CTL_GAIN,
+-			  -128, 127, digital_gain),
++			-84, 40, digital_gain),
+ 	SOC_SINGLE_S8_TLV("TX2 Digital Volume", LPASS_CDC_TX2_VOL_CTL_GAIN,
+-			  -128, 127, digital_gain),
++			-84, 40, digital_gain),
+ 	SOC_ENUM("TX1 HPF Cutoff", tx1_hpf_cutoff_enum),
+ 	SOC_ENUM("TX2 HPF Cutoff", tx2_hpf_cutoff_enum),
+ 	SOC_SINGLE("TX1 HPF Switch", LPASS_CDC_TX1_MUX_CTL, 3, 1, 0),
+@@ -553,22 +553,22 @@ static const struct snd_kcontrol_new msm8916_wcd_digital_snd_controls[] = {
+ 	WCD_IIR_FILTER_CTL("IIR2 Band3", IIR2, BAND3),
+ 	WCD_IIR_FILTER_CTL("IIR2 Band4", IIR2, BAND4),
+ 	WCD_IIR_FILTER_CTL("IIR2 Band5", IIR2, BAND5),
+-	SOC_SINGLE_SX_TLV("IIR1 INP1 Volume", LPASS_CDC_IIR1_GAIN_B1_CTL,
+-			0,  -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("IIR1 INP2 Volume", LPASS_CDC_IIR1_GAIN_B2_CTL,
+-			0,  -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("IIR1 INP3 Volume", LPASS_CDC_IIR1_GAIN_B3_CTL,
+-			0,  -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("IIR1 INP4 Volume", LPASS_CDC_IIR1_GAIN_B4_CTL,
+-			0,  -84,	40, digital_gain),
+-	SOC_SINGLE_SX_TLV("IIR2 INP1 Volume", LPASS_CDC_IIR2_GAIN_B1_CTL,
+-			0,  -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("IIR2 INP2 Volume", LPASS_CDC_IIR2_GAIN_B2_CTL,
+-			0,  -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("IIR2 INP3 Volume", LPASS_CDC_IIR2_GAIN_B3_CTL,
+-			0,  -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("IIR2 INP4 Volume", LPASS_CDC_IIR2_GAIN_B4_CTL,
+-			0,  -84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("IIR1 INP1 Volume", LPASS_CDC_IIR1_GAIN_B1_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("IIR1 INP2 Volume", LPASS_CDC_IIR1_GAIN_B2_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("IIR1 INP3 Volume", LPASS_CDC_IIR1_GAIN_B3_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("IIR1 INP4 Volume", LPASS_CDC_IIR1_GAIN_B4_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("IIR2 INP1 Volume", LPASS_CDC_IIR2_GAIN_B1_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("IIR2 INP2 Volume", LPASS_CDC_IIR2_GAIN_B2_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("IIR2 INP3 Volume", LPASS_CDC_IIR2_GAIN_B3_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("IIR2 INP4 Volume", LPASS_CDC_IIR2_GAIN_B4_CTL,
++			-84, 40, digital_gain),
+ 
+ };
+ 
+diff --git a/sound/soc/codecs/wcd9335.c b/sound/soc/codecs/wcd9335.c
+index 2677d0c3b19ba..8f4ed39c49de2 100644
+--- a/sound/soc/codecs/wcd9335.c
++++ b/sound/soc/codecs/wcd9335.c
+@@ -2252,51 +2252,42 @@ static int wcd9335_rx_hph_mode_put(struct snd_kcontrol *kc,
+ 
+ static const struct snd_kcontrol_new wcd9335_snd_controls[] = {
+ 	/* -84dB min - 40dB max */
+-	SOC_SINGLE_SX_TLV("RX0 Digital Volume", WCD9335_CDC_RX0_RX_VOL_CTL,
+-		0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX1 Digital Volume", WCD9335_CDC_RX1_RX_VOL_CTL,
+-		0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX2 Digital Volume", WCD9335_CDC_RX2_RX_VOL_CTL,
+-		0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX3 Digital Volume", WCD9335_CDC_RX3_RX_VOL_CTL,
+-		0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX4 Digital Volume", WCD9335_CDC_RX4_RX_VOL_CTL,
+-		0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX5 Digital Volume", WCD9335_CDC_RX5_RX_VOL_CTL,
+-		0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX6 Digital Volume", WCD9335_CDC_RX6_RX_VOL_CTL,
+-		0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX7 Digital Volume", WCD9335_CDC_RX7_RX_VOL_CTL,
+-		0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX8 Digital Volume", WCD9335_CDC_RX8_RX_VOL_CTL,
+-		0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX0 Mix Digital Volume",
+-			  WCD9335_CDC_RX0_RX_VOL_MIX_CTL,
+-			  0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX1 Mix Digital Volume",
+-			  WCD9335_CDC_RX1_RX_VOL_MIX_CTL,
+-			  0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX2 Mix Digital Volume",
+-			  WCD9335_CDC_RX2_RX_VOL_MIX_CTL,
+-			  0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX3 Mix Digital Volume",
+-			  WCD9335_CDC_RX3_RX_VOL_MIX_CTL,
+-			  0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX4 Mix Digital Volume",
+-			  WCD9335_CDC_RX4_RX_VOL_MIX_CTL,
+-			  0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX5 Mix Digital Volume",
+-			  WCD9335_CDC_RX5_RX_VOL_MIX_CTL,
+-			  0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX6 Mix Digital Volume",
+-			  WCD9335_CDC_RX6_RX_VOL_MIX_CTL,
+-			  0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX7 Mix Digital Volume",
+-			  WCD9335_CDC_RX7_RX_VOL_MIX_CTL,
+-			  0, -84, 40, digital_gain),
+-	SOC_SINGLE_SX_TLV("RX8 Mix Digital Volume",
+-			  WCD9335_CDC_RX8_RX_VOL_MIX_CTL,
+-			  0, -84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX0 Digital Volume", WCD9335_CDC_RX0_RX_VOL_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX1 Digital Volume", WCD9335_CDC_RX1_RX_VOL_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX2 Digital Volume", WCD9335_CDC_RX2_RX_VOL_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX3 Digital Volume", WCD9335_CDC_RX3_RX_VOL_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX4 Digital Volume", WCD9335_CDC_RX4_RX_VOL_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX5 Digital Volume", WCD9335_CDC_RX5_RX_VOL_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX6 Digital Volume", WCD9335_CDC_RX6_RX_VOL_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX7 Digital Volume", WCD9335_CDC_RX7_RX_VOL_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX8 Digital Volume", WCD9335_CDC_RX8_RX_VOL_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX0 Mix Digital Volume", WCD9335_CDC_RX0_RX_VOL_MIX_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX1 Mix Digital Volume", WCD9335_CDC_RX1_RX_VOL_MIX_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX2 Mix Digital Volume", WCD9335_CDC_RX2_RX_VOL_MIX_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX3 Mix Digital Volume", WCD9335_CDC_RX3_RX_VOL_MIX_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX4 Mix Digital Volume", WCD9335_CDC_RX4_RX_VOL_MIX_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX5 Mix Digital Volume", WCD9335_CDC_RX5_RX_VOL_MIX_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX6 Mix Digital Volume", WCD9335_CDC_RX6_RX_VOL_MIX_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX7 Mix Digital Volume", WCD9335_CDC_RX7_RX_VOL_MIX_CTL,
++			-84, 40, digital_gain),
++	SOC_SINGLE_S8_TLV("RX8 Mix Digital Volume", WCD9335_CDC_RX8_RX_VOL_MIX_CTL,
++			-84, 40, digital_gain),
+ 	SOC_ENUM("RX INT0_1 HPF cut off", cf_int0_1_enum),
+ 	SOC_ENUM("RX INT0_2 HPF cut off", cf_int0_2_enum),
+ 	SOC_ENUM("RX INT1_1 HPF cut off", cf_int1_1_enum),
+diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c
+index 60951a8aabd30..3cf1f40e68924 100644
+--- a/sound/soc/fsl/fsl_easrc.c
++++ b/sound/soc/fsl/fsl_easrc.c
+@@ -476,7 +476,8 @@ static int fsl_easrc_prefilter_config(struct fsl_asrc *easrc,
+ 	struct fsl_asrc_pair *ctx;
+ 	struct device *dev;
+ 	u32 inrate, outrate, offset = 0;
+-	u32 in_s_rate, out_s_rate, in_s_fmt, out_s_fmt;
++	u32 in_s_rate, out_s_rate;
++	snd_pcm_format_t in_s_fmt, out_s_fmt;
+ 	int ret, i;
+ 
+ 	if (!easrc)
+@@ -1873,6 +1874,7 @@ static int fsl_easrc_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	struct device_node *np;
+ 	void __iomem *regs;
++	u32 asrc_fmt = 0;
+ 	int ret, irq;
+ 
+ 	easrc = devm_kzalloc(dev, sizeof(*easrc), GFP_KERNEL);
+@@ -1939,13 +1941,14 @@ static int fsl_easrc_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	ret = of_property_read_u32(np, "fsl,asrc-format", &easrc->asrc_format);
++	ret = of_property_read_u32(np, "fsl,asrc-format", &asrc_fmt);
++	easrc->asrc_format = (__force snd_pcm_format_t)asrc_fmt;
+ 	if (ret) {
+ 		dev_err(dev, "failed to asrc format\n");
+ 		return ret;
+ 	}
+ 
+-	if (!(FSL_EASRC_FORMATS & (1ULL << easrc->asrc_format))) {
++	if (!(FSL_EASRC_FORMATS & (pcm_format_to_bits(easrc->asrc_format)))) {
+ 		dev_warn(dev, "unsupported format, switching to S24_LE\n");
+ 		easrc->asrc_format = SNDRV_PCM_FORMAT_S24_LE;
+ 	}
+diff --git a/sound/soc/fsl/fsl_easrc.h b/sound/soc/fsl/fsl_easrc.h
+index 30620d56252cc..5b8469757c122 100644
+--- a/sound/soc/fsl/fsl_easrc.h
++++ b/sound/soc/fsl/fsl_easrc.h
+@@ -569,7 +569,7 @@ struct fsl_easrc_io_params {
+ 	unsigned int access_len;
+ 	unsigned int fifo_wtmk;
+ 	unsigned int sample_rate;
+-	unsigned int sample_format;
++	snd_pcm_format_t sample_format;
+ 	unsigned int norm_rate;
+ };
+ 
+diff --git a/sound/soc/generic/audio-graph-card.c b/sound/soc/generic/audio-graph-card.c
+index 0c640308ed80b..bfbee2d716f39 100644
+--- a/sound/soc/generic/audio-graph-card.c
++++ b/sound/soc/generic/audio-graph-card.c
+@@ -149,8 +149,10 @@ static int asoc_simple_parse_dai(struct device_node *ep,
+ 	 *    if he unbinded CPU or Codec.
+ 	 */
+ 	ret = snd_soc_get_dai_name(&args, &dlc->dai_name);
+-	if (ret < 0)
++	if (ret < 0) {
++		of_node_put(node);
+ 		return ret;
++	}
+ 
+ 	dlc->of_node = node;
+ 
+diff --git a/sound/soc/mediatek/mt6797/mt6797-mt6351.c b/sound/soc/mediatek/mt6797/mt6797-mt6351.c
+index 496f32bcfb5e3..d2f6213a6bfcc 100644
+--- a/sound/soc/mediatek/mt6797/mt6797-mt6351.c
++++ b/sound/soc/mediatek/mt6797/mt6797-mt6351.c
+@@ -217,7 +217,8 @@ static int mt6797_mt6351_dev_probe(struct platform_device *pdev)
+ 	if (!codec_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_platform_node;
+ 	}
+ 	for_each_card_prelinks(card, i, dai_link) {
+ 		if (dai_link->codecs->name)
+@@ -230,6 +231,9 @@ static int mt6797_mt6351_dev_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
+ 
++	of_node_put(codec_node);
++put_platform_node:
++	of_node_put(platform_node);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
+index c8e4e85e10575..94a9bbf144d15 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
+@@ -256,14 +256,16 @@ static int mt8173_rt5650_rt5676_dev_probe(struct platform_device *pdev)
+ 	if (!mt8173_rt5650_rt5676_dais[DAI_LINK_CODEC_I2S].codecs[0].of_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_node;
+ 	}
+ 	mt8173_rt5650_rt5676_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node =
+ 		of_parse_phandle(pdev->dev.of_node, "mediatek,audio-codec", 1);
+ 	if (!mt8173_rt5650_rt5676_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_node;
+ 	}
+ 	mt8173_rt5650_rt5676_codec_conf[0].dlc.of_node =
+ 		mt8173_rt5650_rt5676_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node;
+@@ -276,7 +278,8 @@ static int mt8173_rt5650_rt5676_dev_probe(struct platform_device *pdev)
+ 	if (!mt8173_rt5650_rt5676_dais[DAI_LINK_HDMI_I2S].codecs->of_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_node;
+ 	}
+ 
+ 	card->dev = &pdev->dev;
+@@ -286,6 +289,7 @@ static int mt8173_rt5650_rt5676_dev_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
+ 
++put_node:
+ 	of_node_put(platform_node);
+ 	return ret;
+ }
+diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650.c b/sound/soc/mediatek/mt8173/mt8173-rt5650.c
+index e168d31f44459..1de9dab218c64 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-rt5650.c
++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650.c
+@@ -280,7 +280,8 @@ static int mt8173_rt5650_dev_probe(struct platform_device *pdev)
+ 	if (!mt8173_rt5650_dais[DAI_LINK_CODEC_I2S].codecs[0].of_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_platform_node;
+ 	}
+ 	mt8173_rt5650_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node =
+ 		mt8173_rt5650_dais[DAI_LINK_CODEC_I2S].codecs[0].of_node;
+@@ -293,7 +294,7 @@ static int mt8173_rt5650_dev_probe(struct platform_device *pdev)
+ 			dev_err(&pdev->dev,
+ 				"%s codec_capture_dai name fail %d\n",
+ 				__func__, ret);
+-			return ret;
++			goto put_platform_node;
+ 		}
+ 		mt8173_rt5650_dais[DAI_LINK_CODEC_I2S].codecs[1].dai_name =
+ 			codec_capture_dai;
+@@ -315,7 +316,8 @@ static int mt8173_rt5650_dev_probe(struct platform_device *pdev)
+ 	if (!mt8173_rt5650_dais[DAI_LINK_HDMI_I2S].codecs->of_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_platform_node;
+ 	}
+ 	card->dev = &pdev->dev;
+ 
+@@ -324,6 +326,7 @@ static int mt8173_rt5650_dev_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
+ 
++put_platform_node:
+ 	of_node_put(platform_node);
+ 	return ret;
+ }
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index e620a62ef534f..03abb3d719d08 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -846,6 +846,7 @@ int asoc_qcom_lpass_cpu_platform_probe(struct platform_device *pdev)
+ 	dsp_of_node = of_parse_phandle(pdev->dev.of_node, "qcom,adsp", 0);
+ 	if (dsp_of_node) {
+ 		dev_err(dev, "DSP exists and holds audio resources\n");
++		of_node_put(dsp_of_node);
+ 		return -EBUSY;
+ 	}
+ 
+diff --git a/sound/soc/qcom/qdsp6/q6adm.c b/sound/soc/qcom/qdsp6/q6adm.c
+index 72f29720398cd..182d36a34fafd 100644
+--- a/sound/soc/qcom/qdsp6/q6adm.c
++++ b/sound/soc/qcom/qdsp6/q6adm.c
+@@ -217,7 +217,7 @@ static struct q6copp *q6adm_alloc_copp(struct q6adm *adm, int port_idx)
+ 	idx = find_first_zero_bit(&adm->copp_bitmap[port_idx],
+ 				  MAX_COPPS_PER_PORT);
+ 
+-	if (idx > MAX_COPPS_PER_PORT)
++	if (idx >= MAX_COPPS_PER_PORT)
+ 		return ERR_PTR(-EBUSY);
+ 
+ 	c = kzalloc(sizeof(*c), GFP_ATOMIC);
+diff --git a/sound/soc/samsung/aries_wm8994.c b/sound/soc/samsung/aries_wm8994.c
+index 18458192aff18..d2908c1ea835f 100644
+--- a/sound/soc/samsung/aries_wm8994.c
++++ b/sound/soc/samsung/aries_wm8994.c
+@@ -628,8 +628,10 @@ static int aries_audio_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 
+ 	codec = of_get_child_by_name(dev->of_node, "codec");
+-	if (!codec)
+-		return -EINVAL;
++	if (!codec) {
++		ret = -EINVAL;
++		goto out;
++	}
+ 
+ 	for_each_card_prelinks(card, i, dai_link) {
+ 		dai_link->codecs->of_node = of_parse_phandle(codec,
+diff --git a/sound/soc/samsung/h1940_uda1380.c b/sound/soc/samsung/h1940_uda1380.c
+index 8aa78ff640f51..adb6b661c799f 100644
+--- a/sound/soc/samsung/h1940_uda1380.c
++++ b/sound/soc/samsung/h1940_uda1380.c
+@@ -8,7 +8,7 @@
+ // Based on version from Arnaud Patard <arnaud.patard@rtp-net.org>
+ 
+ #include <linux/types.h>
+-#include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/module.h>
+ 
+ #include <sound/soc.h>
+diff --git a/sound/soc/samsung/rx1950_uda1380.c b/sound/soc/samsung/rx1950_uda1380.c
+index 400a7f77c7117..354f379268d9f 100644
+--- a/sound/soc/samsung/rx1950_uda1380.c
++++ b/sound/soc/samsung/rx1950_uda1380.c
+@@ -128,7 +128,7 @@ static int rx1950_startup(struct snd_pcm_substream *substream)
+ 					&hw_rates);
+ }
+ 
+-struct gpio_desc *gpiod_speaker_power;
++static struct gpio_desc *gpiod_speaker_power;
+ 
+ static int rx1950_spk_power(struct snd_soc_dapm_widget *w,
+ 				struct snd_kcontrol *kcontrol, int event)
+@@ -227,7 +227,7 @@ static int rx1950_probe(struct platform_device *pdev)
+ 	return devm_snd_soc_register_card(dev, &rx1950_asoc);
+ }
+ 
+-struct platform_driver rx1950_audio = {
++static struct platform_driver rx1950_audio = {
+ 	.driver = {
+ 		.name = "rx1950-audio",
+ 		.pm = &snd_soc_pm_ops,
+diff --git a/sound/usb/bcd2000/bcd2000.c b/sound/usb/bcd2000/bcd2000.c
+index 010976d9ceb25..01f0b329797cc 100644
+--- a/sound/usb/bcd2000/bcd2000.c
++++ b/sound/usb/bcd2000/bcd2000.c
+@@ -348,7 +348,8 @@ static int bcd2000_init_midi(struct bcd2000 *bcd2k)
+ static void bcd2000_free_usb_related_resources(struct bcd2000 *bcd2k,
+ 						struct usb_interface *interface)
+ {
+-	/* usb_kill_urb not necessary, urb is aborted automatically */
++	usb_kill_urb(bcd2k->midi_out_urb);
++	usb_kill_urb(bcd2k->midi_in_urb);
+ 
+ 	usb_free_urb(bcd2k->midi_out_urb);
+ 	usb_free_urb(bcd2k->midi_in_urb);
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 8fada26529b79..66d7f8d494dec 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -3652,7 +3652,7 @@ static int bpf_get_map_info_from_fdinfo(int fd, struct bpf_map_info *info)
+ int bpf_map__reuse_fd(struct bpf_map *map, int fd)
+ {
+ 	struct bpf_map_info info = {};
+-	__u32 len = sizeof(info);
++	__u32 len = sizeof(info), name_len;
+ 	int new_fd, err;
+ 	char *new_name;
+ 
+@@ -3662,7 +3662,12 @@ int bpf_map__reuse_fd(struct bpf_map *map, int fd)
+ 	if (err)
+ 		return err;
+ 
+-	new_name = strdup(info.name);
++	name_len = strlen(info.name);
++	if (name_len == BPF_OBJ_NAME_LEN - 1 && strncmp(map->name, info.name, name_len) == 0)
++		new_name = strdup(map->name);
++	else
++		new_name = strdup(info.name);
++
+ 	if (!new_name)
+ 		return -errno;
+ 
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index c4390ef98b192..e8745f646371f 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -849,8 +849,6 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
+ 		goto out_mmap_tx;
+ 	}
+ 
+-	ctx->prog_fd = -1;
+-
+ 	if (!(xsk->config.libbpf_flags & XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD)) {
+ 		err = xsk_setup_xdp_prog(xsk);
+ 		if (err)
+@@ -931,7 +929,10 @@ void xsk_socket__delete(struct xsk_socket *xsk)
+ 
+ 	ctx = xsk->ctx;
+ 	umem = ctx->umem;
+-	if (ctx->prog_fd != -1) {
++
++	xsk_put_ctx(ctx, true);
++
++	if (!ctx->refcount) {
+ 		xsk_delete_bpf_maps(xsk);
+ 		close(ctx->prog_fd);
+ 	}
+@@ -948,8 +949,6 @@ void xsk_socket__delete(struct xsk_socket *xsk)
+ 		}
+ 	}
+ 
+-	xsk_put_ctx(ctx, true);
+-
+ 	umem->refcount--;
+ 	/* Do not close an fd that also has an associated umem connected
+ 	 * to it.
+diff --git a/tools/perf/util/dsos.c b/tools/perf/util/dsos.c
+index 183a81d5b2f92..2db91121bdafe 100644
+--- a/tools/perf/util/dsos.c
++++ b/tools/perf/util/dsos.c
+@@ -20,8 +20,19 @@ static int __dso_id__cmp(struct dso_id *a, struct dso_id *b)
+ 	if (a->ino > b->ino) return -1;
+ 	if (a->ino < b->ino) return 1;
+ 
+-	if (a->ino_generation > b->ino_generation) return -1;
+-	if (a->ino_generation < b->ino_generation) return 1;
++	/*
++	 * Synthesized MMAP events have zero ino_generation, avoid comparing
++	 * them with MMAP events with actual ino_generation.
++	 *
++	 * I found it harmful because the mismatch resulted in a new
++	 * dso that did not have a build ID whereas the original dso did have a
++	 * build ID. The build ID was essential because the object was not found
++	 * otherwise. - Adrian
++	 */
++	if (a->ino_generation && b->ino_generation) {
++		if (a->ino_generation > b->ino_generation) return -1;
++		if (a->ino_generation < b->ino_generation) return 1;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/tools/perf/util/genelf.c b/tools/perf/util/genelf.c
+index aed49806a09ba..953338b9e887e 100644
+--- a/tools/perf/util/genelf.c
++++ b/tools/perf/util/genelf.c
+@@ -30,7 +30,11 @@
+ 
+ #define BUILD_ID_URANDOM /* different uuid for each run */
+ 
+-#ifdef HAVE_LIBCRYPTO
++// FIXME, remove this and fix the deprecation warnings before its removed and
++// We'll break for good here...
++#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
++
++#ifdef HAVE_LIBCRYPTO_SUPPORT
+ 
+ #define BUILD_ID_MD5
+ #undef BUILD_ID_SHA	/* does not seem to work well when linked with Java */
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 1cab29d45bfb3..d8d79a9ec7758 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -1249,16 +1249,29 @@ int dso__load_sym(struct dso *dso, struct map *map, struct symsrc *syms_ss,
+ 
+ 			if (elf_read_program_header(syms_ss->elf,
+ 						    (u64)sym.st_value, &phdr)) {
+-				pr_warning("%s: failed to find program header for "
++				pr_debug4("%s: failed to find program header for "
+ 					   "symbol: %s st_value: %#" PRIx64 "\n",
+ 					   __func__, elf_name, (u64)sym.st_value);
+-				continue;
++				pr_debug4("%s: adjusting symbol: st_value: %#" PRIx64 " "
++					"sh_addr: %#" PRIx64 " sh_offset: %#" PRIx64 "\n",
++					__func__, (u64)sym.st_value, (u64)shdr.sh_addr,
++					(u64)shdr.sh_offset);
++				/*
++				 * Fail to find program header, let's rollback
++				 * to use shdr.sh_addr and shdr.sh_offset to
++				 * calibrate symbol's file address, though this
++				 * is not necessary for normal C ELF file, we
++				 * still need to handle java JIT symbols in this
++				 * case.
++				 */
++				sym.st_value -= shdr.sh_addr - shdr.sh_offset;
++			} else {
++				pr_debug4("%s: adjusting symbol: st_value: %#" PRIx64 " "
++					"p_vaddr: %#" PRIx64 " p_offset: %#" PRIx64 "\n",
++					__func__, (u64)sym.st_value, (u64)phdr.p_vaddr,
++					(u64)phdr.p_offset);
++				sym.st_value -= phdr.p_vaddr - phdr.p_offset;
+ 			}
+-			pr_debug4("%s: adjusting symbol: st_value: %#" PRIx64 " "
+-				  "p_vaddr: %#" PRIx64 " p_offset: %#" PRIx64 "\n",
+-				  __func__, (u64)sym.st_value, (u64)phdr.p_vaddr,
+-				  (u64)phdr.p_offset);
+-			sym.st_value -= phdr.p_vaddr - phdr.p_offset;
+ 		}
+ 
+ 		demangled = demangle_sym(dso, kmodule, elf_name);
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
+index 93162484c2cad..48b01150e703f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf.c
+@@ -4758,7 +4758,7 @@ static void do_test_pprint(int test_num)
+ 	ret = snprintf(pin_path, sizeof(pin_path), "%s/%s",
+ 		       "/sys/fs/bpf", test->map_name);
+ 
+-	if (CHECK(ret == sizeof(pin_path), "pin_path %s/%s is too long",
++	if (CHECK(ret >= sizeof(pin_path), "pin_path %s/%s is too long",
+ 		  "/sys/fs/bpf", test->map_name)) {
+ 		err = -1;
+ 		goto done;
+diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
+index d10c5c05bdf0b..f5d2d27bee059 100644
+--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
++++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
+@@ -1253,6 +1253,6 @@ uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
+ 
+ 	asm volatile("vmcall"
+ 		     : "=a"(r)
+-		     : "b"(a0), "c"(a1), "d"(a2), "S"(a3));
++		     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
+ 	return r;
+ }
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index e36745995f224..413a7b9f3c4d3 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -798,7 +798,7 @@ void kill_thread_or_group(struct __test_metadata *_metadata,
+ 		.len = (unsigned short)ARRAY_SIZE(filter_thread),
+ 		.filter = filter_thread,
+ 	};
+-	int kill = kill_how == KILL_PROCESS ? SECCOMP_RET_KILL_PROCESS : 0xAAAAAAAAA;
++	int kill = kill_how == KILL_PROCESS ? SECCOMP_RET_KILL_PROCESS : 0xAAAAAAAA;
+ 	struct sock_filter filter_process[] = {
+ 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
+ 			offsetof(struct seccomp_data, nr)),
+diff --git a/tools/testing/selftests/timers/clocksource-switch.c b/tools/testing/selftests/timers/clocksource-switch.c
+index bfc974b4572d5..c18313a5f357b 100644
+--- a/tools/testing/selftests/timers/clocksource-switch.c
++++ b/tools/testing/selftests/timers/clocksource-switch.c
+@@ -110,10 +110,10 @@ int run_tests(int secs)
+ 
+ 	sprintf(buf, "./inconsistency-check -t %i", secs);
+ 	ret = system(buf);
+-	if (ret)
+-		return ret;
++	if (WIFEXITED(ret) && WEXITSTATUS(ret))
++		return WEXITSTATUS(ret);
+ 	ret = system("./nanosleep");
+-	return ret;
++	return WIFEXITED(ret) ? WEXITSTATUS(ret) : 0;
+ }
+ 
+ 
+diff --git a/tools/testing/selftests/timers/valid-adjtimex.c b/tools/testing/selftests/timers/valid-adjtimex.c
+index 5397de708d3c2..48b9a803235a8 100644
+--- a/tools/testing/selftests/timers/valid-adjtimex.c
++++ b/tools/testing/selftests/timers/valid-adjtimex.c
+@@ -40,7 +40,7 @@
+ #define ADJ_SETOFFSET 0x0100
+ 
+ #include <sys/syscall.h>
+-static int clock_adjtime(clockid_t id, struct timex *tx)
++int clock_adjtime(clockid_t id, struct timex *tx)
+ {
+ 	return syscall(__NR_clock_adjtime, id, tx);
+ }
+diff --git a/tools/thermal/tmon/sysfs.c b/tools/thermal/tmon/sysfs.c
+index b00b1bfd9d8e7..cb1108bc92498 100644
+--- a/tools/thermal/tmon/sysfs.c
++++ b/tools/thermal/tmon/sysfs.c
+@@ -13,6 +13,7 @@
+ #include <stdint.h>
+ #include <dirent.h>
+ #include <libintl.h>
++#include <limits.h>
+ #include <ctype.h>
+ #include <time.h>
+ #include <syslog.h>
+@@ -33,9 +34,9 @@ int sysfs_set_ulong(char *path, char *filename, unsigned long val)
+ {
+ 	FILE *fd;
+ 	int ret = -1;
+-	char filepath[256];
++	char filepath[PATH_MAX + 2]; /* NUL and '/' */
+ 
+-	snprintf(filepath, 256, "%s/%s", path, filename);
++	snprintf(filepath, sizeof(filepath), "%s/%s", path, filename);
+ 
+ 	fd = fopen(filepath, "w");
+ 	if (!fd) {
+@@ -57,9 +58,9 @@ static int sysfs_get_ulong(char *path, char *filename, unsigned long *p_ulong)
+ {
+ 	FILE *fd;
+ 	int ret = -1;
+-	char filepath[256];
++	char filepath[PATH_MAX + 2]; /* NUL and '/' */
+ 
+-	snprintf(filepath, 256, "%s/%s", path, filename);
++	snprintf(filepath, sizeof(filepath), "%s/%s", path, filename);
+ 
+ 	fd = fopen(filepath, "r");
+ 	if (!fd) {
+@@ -76,9 +77,9 @@ static int sysfs_get_string(char *path, char *filename, char *str)
+ {
+ 	FILE *fd;
+ 	int ret = -1;
+-	char filepath[256];
++	char filepath[PATH_MAX + 2]; /* NUL and '/' */
+ 
+-	snprintf(filepath, 256, "%s/%s", path, filename);
++	snprintf(filepath, sizeof(filepath), "%s/%s", path, filename);
+ 
+ 	fd = fopen(filepath, "r");
+ 	if (!fd) {
+@@ -199,8 +200,8 @@ static int find_tzone_cdev(struct dirent *nl, char *tz_name,
+ {
+ 	unsigned long trip_instance = 0;
+ 	char cdev_name_linked[256];
+-	char cdev_name[256];
+-	char cdev_trip_name[256];
++	char cdev_name[PATH_MAX];
++	char cdev_trip_name[PATH_MAX];
+ 	int cdev_id;
+ 
+ 	if (nl->d_type == DT_LNK) {
+@@ -213,7 +214,8 @@ static int find_tzone_cdev(struct dirent *nl, char *tz_name,
+ 			return -EINVAL;
+ 		}
+ 		/* find the link to real cooling device record binding */
+-		snprintf(cdev_name, 256, "%s/%s", tz_name, nl->d_name);
++		snprintf(cdev_name, sizeof(cdev_name) - 2, "%s/%s",
++			 tz_name, nl->d_name);
+ 		memset(cdev_name_linked, 0, sizeof(cdev_name_linked));
+ 		if (readlink(cdev_name, cdev_name_linked,
+ 				sizeof(cdev_name_linked) - 1) != -1) {
+@@ -226,8 +228,8 @@ static int find_tzone_cdev(struct dirent *nl, char *tz_name,
+ 			/* find the trip point in which the cdev is binded to
+ 			 * in this tzone
+ 			 */
+-			snprintf(cdev_trip_name, 256, "%s%s", nl->d_name,
+-				"_trip_point");
++			snprintf(cdev_trip_name, sizeof(cdev_trip_name) - 1,
++				"%s%s", nl->d_name, "_trip_point");
+ 			sysfs_get_ulong(tz_name, cdev_trip_name,
+ 					&trip_instance);
+ 			/* validate trip point range, e.g. trip could return -1
+diff --git a/tools/thermal/tmon/tmon.h b/tools/thermal/tmon/tmon.h
+index c9066ec104ddd..44d16d778f044 100644
+--- a/tools/thermal/tmon/tmon.h
++++ b/tools/thermal/tmon/tmon.h
+@@ -27,6 +27,9 @@
+ #define NR_LINES_TZDATA 1
+ #define TMON_LOG_FILE "/var/tmp/tmon.log"
+ 
++#include <sys/time.h>
++#include <pthread.h>
++
+ extern unsigned long ticktime;
+ extern double time_elapsed;
+ extern unsigned long target_temp_user;
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index c5dbac10c3720..578235291e92e 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2339,16 +2339,28 @@ void kvm_release_pfn_dirty(kvm_pfn_t pfn)
+ }
+ EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);
+ 
++static bool kvm_is_ad_tracked_pfn(kvm_pfn_t pfn)
++{
++	if (!pfn_valid(pfn))
++		return false;
++
++	/*
++	 * Per page-flags.h, pages tagged PG_reserved "should in general not be
++	 * touched (e.g. set dirty) except by its owner".
++	 */
++	return !PageReserved(pfn_to_page(pfn));
++}
++
+ void kvm_set_pfn_dirty(kvm_pfn_t pfn)
+ {
+-	if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn))
++	if (kvm_is_ad_tracked_pfn(pfn))
+ 		SetPageDirty(pfn_to_page(pfn));
+ }
+ EXPORT_SYMBOL_GPL(kvm_set_pfn_dirty);
+ 
+ void kvm_set_pfn_accessed(kvm_pfn_t pfn)
+ {
+-	if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn))
++	if (kvm_is_ad_tracked_pfn(pfn))
+ 		mark_page_accessed(pfn_to_page(pfn));
+ }
+ EXPORT_SYMBOL_GPL(kvm_set_pfn_accessed);
+@@ -3252,7 +3264,7 @@ static long kvm_vcpu_ioctl(struct file *filp,
+ 	struct kvm_fpu *fpu = NULL;
+ 	struct kvm_sregs *kvm_sregs = NULL;
+ 
+-	if (vcpu->kvm->mm != current->mm)
++	if (vcpu->kvm->mm != current->mm || vcpu->kvm->vm_bugged)
+ 		return -EIO;
+ 
+ 	if (unlikely(_IOC_TYPE(ioctl) != KVMIO))
+@@ -3458,7 +3470,7 @@ static long kvm_vcpu_compat_ioctl(struct file *filp,
+ 	void __user *argp = compat_ptr(arg);
+ 	int r;
+ 
+-	if (vcpu->kvm->mm != current->mm)
++	if (vcpu->kvm->mm != current->mm || vcpu->kvm->vm_bugged)
+ 		return -EIO;
+ 
+ 	switch (ioctl) {
+@@ -3524,7 +3536,7 @@ static long kvm_device_ioctl(struct file *filp, unsigned int ioctl,
+ {
+ 	struct kvm_device *dev = filp->private_data;
+ 
+-	if (dev->kvm->mm != current->mm)
++	if (dev->kvm->mm != current->mm || dev->kvm->vm_bugged)
+ 		return -EIO;
+ 
+ 	switch (ioctl) {
+@@ -3743,7 +3755,7 @@ static long kvm_vm_ioctl(struct file *filp,
+ 	void __user *argp = (void __user *)arg;
+ 	int r;
+ 
+-	if (kvm->mm != current->mm)
++	if (kvm->mm != current->mm || kvm->vm_bugged)
+ 		return -EIO;
+ 	switch (ioctl) {
+ 	case KVM_CREATE_VCPU:
+@@ -3948,7 +3960,7 @@ static long kvm_vm_compat_ioctl(struct file *filp,
+ 	struct kvm *kvm = filp->private_data;
+ 	int r;
+ 
+-	if (kvm->mm != current->mm)
++	if (kvm->mm != current->mm || kvm->vm_bugged)
+ 		return -EIO;
+ 	switch (ioctl) {
+ #ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-08-25 10:33 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-08-25 10:33 UTC (permalink / raw
  To: gentoo-commits

commit:     d9e01c6dfbdabede97692ee8e5a024698b8398b5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 25 10:33:31 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 25 10:33:31 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d9e01c6d

Linux patch 5.10.138

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1137_linux-5.10.138.patch | 6440 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6444 insertions(+)

diff --git a/0000_README b/0000_README
index 4c11c230..8ea943aa 100644
--- a/0000_README
+++ b/0000_README
@@ -591,6 +591,10 @@ Patch:  1136_linux-5.10.137.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.137
 
+Patch:  1137_linux-5.10.138.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.138
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1137_linux-5.10.138.patch b/1137_linux-5.10.138.patch
new file mode 100644
index 00000000..e11f8d0d
--- /dev/null
+++ b/1137_linux-5.10.138.patch
@@ -0,0 +1,6440 @@
+diff --git a/Documentation/atomic_bitops.txt b/Documentation/atomic_bitops.txt
+index 093cdaefdb373..d8b101c97031b 100644
+--- a/Documentation/atomic_bitops.txt
++++ b/Documentation/atomic_bitops.txt
+@@ -59,7 +59,7 @@ Like with atomic_t, the rule of thumb is:
+  - RMW operations that have a return value are fully ordered.
+ 
+  - RMW operations that are conditional are unordered on FAILURE,
+-   otherwise the above rules apply. In the case of test_and_{}_bit() operations,
++   otherwise the above rules apply. In the case of test_and_set_bit_lock(),
+    if the bit in memory is unchanged by the operation then it is deemed to have
+    failed.
+ 
+diff --git a/Documentation/devicetree/bindings/arm/qcom.yaml b/Documentation/devicetree/bindings/arm/qcom.yaml
+index c97d4a580f47b..42ec1d5fed386 100644
+--- a/Documentation/devicetree/bindings/arm/qcom.yaml
++++ b/Documentation/devicetree/bindings/arm/qcom.yaml
+@@ -123,8 +123,8 @@ properties:
+           - const: qcom,msm8974
+ 
+       - items:
+-          - const: qcom,msm8916-mtp/1
+           - const: qcom,msm8916-mtp
++          - const: qcom,msm8916-mtp/1
+           - const: qcom,msm8916
+ 
+       - items:
+diff --git a/Documentation/devicetree/bindings/clock/qcom,gcc-msm8996.yaml b/Documentation/devicetree/bindings/clock/qcom,gcc-msm8996.yaml
+index 5a5b2214f0cae..005e0edd4609a 100644
+--- a/Documentation/devicetree/bindings/clock/qcom,gcc-msm8996.yaml
++++ b/Documentation/devicetree/bindings/clock/qcom,gcc-msm8996.yaml
+@@ -22,16 +22,32 @@ properties:
+     const: qcom,gcc-msm8996
+ 
+   clocks:
++    minItems: 3
+     items:
+       - description: XO source
+       - description: Second XO source
+       - description: Sleep clock source
++      - description: PCIe 0 PIPE clock (optional)
++      - description: PCIe 1 PIPE clock (optional)
++      - description: PCIe 2 PIPE clock (optional)
++      - description: USB3 PIPE clock (optional)
++      - description: UFS RX symbol 0 clock (optional)
++      - description: UFS RX symbol 1 clock (optional)
++      - description: UFS TX symbol 0 clock (optional)
+ 
+   clock-names:
++    minItems: 3
+     items:
+       - const: cxo
+       - const: cxo2
+       - const: sleep_clk
++      - const: pcie_0_pipe_clk_src
++      - const: pcie_1_pipe_clk_src
++      - const: pcie_2_pipe_clk_src
++      - const: usb3_phy_pipe_clk_src
++      - const: ufs_rx_symbol_0_clk_src
++      - const: ufs_rx_symbol_1_clk_src
++      - const: ufs_tx_symbol_0_clk_src
+ 
+   '#clock-cells':
+     const: 1
+diff --git a/Documentation/devicetree/bindings/regulator/nxp,pca9450-regulator.yaml b/Documentation/devicetree/bindings/regulator/nxp,pca9450-regulator.yaml
+index c2b0a8b6da1ec..7cebd9ddfffd0 100644
+--- a/Documentation/devicetree/bindings/regulator/nxp,pca9450-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/nxp,pca9450-regulator.yaml
+@@ -47,12 +47,6 @@ properties:
+         description:
+           Properties for single LDO regulator.
+ 
+-        properties:
+-          regulator-name:
+-            pattern: "^LDO[1-5]$"
+-            description:
+-              should be "LDO1", ..., "LDO5"
+-
+         unevaluatedProperties: false
+ 
+       "^BUCK[1-6]$":
+@@ -62,11 +56,6 @@ properties:
+           Properties for single BUCK regulator.
+ 
+         properties:
+-          regulator-name:
+-            pattern: "^BUCK[1-6]$"
+-            description:
+-              should be "BUCK1", ..., "BUCK6"
+-
+           nxp,dvs-run-voltage:
+             $ref: "/schemas/types.yaml#/definitions/uint32"
+             minimum: 600000
+diff --git a/Documentation/firmware-guide/acpi/apei/einj.rst b/Documentation/firmware-guide/acpi/apei/einj.rst
+index e588bccf51583..344284236a810 100644
+--- a/Documentation/firmware-guide/acpi/apei/einj.rst
++++ b/Documentation/firmware-guide/acpi/apei/einj.rst
+@@ -168,7 +168,7 @@ An error injection example::
+   0x00000008	Memory Correctable
+   0x00000010	Memory Uncorrectable non-fatal
+   # echo 0x12345000 > param1		# Set memory address for injection
+-  # echo $((-1 << 12)) > param2		# Mask 0xfffffffffffff000 - anywhere in this page
++  # echo 0xfffffffffffff000 > param2		# Mask - anywhere in this page
+   # echo 0x8 > error_type			# Choose correctable memory error
+   # echo 1 > error_inject			# Inject now
+ 
+diff --git a/Makefile b/Makefile
+index b3bfdf51232f3..234c8032c2b4a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 137
++SUBLEVEL = 138
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -1133,13 +1133,11 @@ vmlinux-alldirs	:= $(sort $(vmlinux-dirs) Documentation \
+ 		     $(patsubst %/,%,$(filter %/, $(core-) \
+ 			$(drivers-) $(libs-))))
+ 
+-subdir-modorder := $(addsuffix modules.order,$(filter %/, \
+-			$(core-y) $(core-m) $(libs-y) $(libs-m) \
+-			$(drivers-y) $(drivers-m)))
+-
+ build-dirs	:= $(vmlinux-dirs)
+ clean-dirs	:= $(vmlinux-alldirs)
+ 
++subdir-modorder := $(addsuffix /modules.order, $(build-dirs))
++
+ # Externally visible symbols (used by link-vmlinux.sh)
+ KBUILD_VMLINUX_OBJS := $(head-y) $(patsubst %/,%/built-in.a, $(core-y))
+ KBUILD_VMLINUX_OBJS += $(addsuffix built-in.a, $(filter %/, $(libs-y)))
+diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
+index 556b9ba61ec06..79272dde72db3 100644
+--- a/arch/csky/kernel/probes/kprobes.c
++++ b/arch/csky/kernel/probes/kprobes.c
+@@ -124,6 +124,10 @@ void __kprobes arch_disarm_kprobe(struct kprobe *p)
+ 
+ void __kprobes arch_remove_kprobe(struct kprobe *p)
+ {
++	if (p->ainsn.api.insn) {
++		free_insn_slot(p->ainsn.api.insn, 0);
++		p->ainsn.api.insn = NULL;
++	}
+ }
+ 
+ static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
+diff --git a/arch/mips/cavium-octeon/octeon-platform.c b/arch/mips/cavium-octeon/octeon-platform.c
+index a994022e32c9f..ce05c0dd3acd7 100644
+--- a/arch/mips/cavium-octeon/octeon-platform.c
++++ b/arch/mips/cavium-octeon/octeon-platform.c
+@@ -86,11 +86,12 @@ static void octeon2_usb_clocks_start(struct device *dev)
+ 					 "refclk-frequency", &clock_rate);
+ 		if (i) {
+ 			dev_err(dev, "No UCTL \"refclk-frequency\"\n");
++			of_node_put(uctl_node);
+ 			goto exit;
+ 		}
+ 		i = of_property_read_string(uctl_node,
+ 					    "refclk-type", &clock_type);
+-
++		of_node_put(uctl_node);
+ 		if (!i && strcmp("crystal", clock_type) == 0)
+ 			is_crystal_clock = true;
+ 	}
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
+index a7521b8f76586..e8e3635dda09a 100644
+--- a/arch/mips/mm/tlbex.c
++++ b/arch/mips/mm/tlbex.c
+@@ -633,7 +633,7 @@ static __maybe_unused void build_convert_pte_to_entrylo(u32 **p,
+ 		return;
+ 	}
+ 
+-	if (cpu_has_rixi && !!_PAGE_NO_EXEC) {
++	if (cpu_has_rixi && _PAGE_NO_EXEC != 0) {
+ 		if (fill_includes_sw_bits) {
+ 			UASM_i_ROTR(p, reg, reg, ilog2(_PAGE_GLOBAL));
+ 		} else {
+@@ -2572,7 +2572,7 @@ static void check_pabits(void)
+ 	unsigned long entry;
+ 	unsigned pabits, fillbits;
+ 
+-	if (!cpu_has_rixi || !_PAGE_NO_EXEC) {
++	if (!cpu_has_rixi || _PAGE_NO_EXEC == 0) {
+ 		/*
+ 		 * We'll only be making use of the fact that we can rotate bits
+ 		 * into the fill if the CPU supports RIXI, so don't bother
+diff --git a/arch/nios2/include/asm/entry.h b/arch/nios2/include/asm/entry.h
+index cf37f55efbc22..bafb7b2ca59fc 100644
+--- a/arch/nios2/include/asm/entry.h
++++ b/arch/nios2/include/asm/entry.h
+@@ -50,7 +50,8 @@
+ 	stw	r13, PT_R13(sp)
+ 	stw	r14, PT_R14(sp)
+ 	stw	r15, PT_R15(sp)
+-	stw	r2, PT_ORIG_R2(sp)
++	movi	r24, -1
++	stw	r24, PT_ORIG_R2(sp)
+ 	stw	r7, PT_ORIG_R7(sp)
+ 
+ 	stw	ra, PT_RA(sp)
+diff --git a/arch/nios2/include/asm/ptrace.h b/arch/nios2/include/asm/ptrace.h
+index 6424621448728..9da34c3022a27 100644
+--- a/arch/nios2/include/asm/ptrace.h
++++ b/arch/nios2/include/asm/ptrace.h
+@@ -74,6 +74,8 @@ extern void show_regs(struct pt_regs *);
+ 	((struct pt_regs *)((unsigned long)current_thread_info() + THREAD_SIZE)\
+ 		- 1)
+ 
++#define force_successful_syscall_return() (current_pt_regs()->orig_r2 = -1)
++
+ int do_syscall_trace_enter(void);
+ void do_syscall_trace_exit(void);
+ #endif /* __ASSEMBLY__ */
+diff --git a/arch/nios2/kernel/entry.S b/arch/nios2/kernel/entry.S
+index 0794cd7803dfe..99f0a65e62347 100644
+--- a/arch/nios2/kernel/entry.S
++++ b/arch/nios2/kernel/entry.S
+@@ -185,6 +185,7 @@ ENTRY(handle_system_call)
+ 	ldw	r5, PT_R5(sp)
+ 
+ local_restart:
++	stw	r2, PT_ORIG_R2(sp)
+ 	/* Check that the requested system call is within limits */
+ 	movui	r1, __NR_syscalls
+ 	bgeu	r2, r1, ret_invsyscall
+@@ -192,7 +193,6 @@ local_restart:
+ 	movhi	r11, %hiadj(sys_call_table)
+ 	add	r1, r1, r11
+ 	ldw	r1, %lo(sys_call_table)(r1)
+-	beq	r1, r0, ret_invsyscall
+ 
+ 	/* Check if we are being traced */
+ 	GET_THREAD_INFO r11
+@@ -213,6 +213,9 @@ local_restart:
+ translate_rc_and_ret:
+ 	movi	r1, 0
+ 	bge	r2, zero, 3f
++	ldw	r1, PT_ORIG_R2(sp)
++	addi	r1, r1, 1
++	beq	r1, zero, 3f
+ 	sub	r2, zero, r2
+ 	movi	r1, 1
+ 3:
+@@ -255,9 +258,9 @@ traced_system_call:
+ 	ldw	r6, PT_R6(sp)
+ 	ldw	r7, PT_R7(sp)
+ 
+-	/* Fetch the syscall function, we don't need to check the boundaries
+-	 * since this is already done.
+-	 */
++	/* Fetch the syscall function. */
++	movui	r1, __NR_syscalls
++	bgeu	r2, r1, traced_invsyscall
+ 	slli	r1, r2, 2
+ 	movhi	r11,%hiadj(sys_call_table)
+ 	add	r1, r1, r11
+@@ -276,6 +279,9 @@ traced_system_call:
+ translate_rc_and_ret2:
+ 	movi	r1, 0
+ 	bge	r2, zero, 4f
++	ldw	r1, PT_ORIG_R2(sp)
++	addi	r1, r1, 1
++	beq	r1, zero, 4f
+ 	sub	r2, zero, r2
+ 	movi	r1, 1
+ 4:
+@@ -287,6 +293,11 @@ end_translate_rc_and_ret2:
+ 	RESTORE_SWITCH_STACK
+ 	br	ret_from_exception
+ 
++	/* If the syscall number was invalid return ENOSYS */
++traced_invsyscall:
++	movi	r2, -ENOSYS
++	br	translate_rc_and_ret2
++
+ Luser_return:
+ 	GET_THREAD_INFO	r11			/* get thread_info pointer */
+ 	ldw	r10, TI_FLAGS(r11)		/* get thread_info->flags */
+@@ -336,9 +347,6 @@ external_interrupt:
+ 	/* skip if no interrupt is pending */
+ 	beq	r12, r0, ret_from_interrupt
+ 
+-	movi	r24, -1
+-	stw	r24, PT_ORIG_R2(sp)
+-
+ 	/*
+ 	 * Process an external hardware interrupt.
+ 	 */
+diff --git a/arch/nios2/kernel/signal.c b/arch/nios2/kernel/signal.c
+index e45491d1d3e44..916180e4a9978 100644
+--- a/arch/nios2/kernel/signal.c
++++ b/arch/nios2/kernel/signal.c
+@@ -242,7 +242,7 @@ static int do_signal(struct pt_regs *regs)
+ 	/*
+ 	 * If we were from a system call, check for system call restarting...
+ 	 */
+-	if (regs->orig_r2 >= 0) {
++	if (regs->orig_r2 >= 0 && regs->r1) {
+ 		continue_addr = regs->ea;
+ 		restart_addr = continue_addr - 4;
+ 		retval = regs->r2;
+@@ -264,6 +264,7 @@ static int do_signal(struct pt_regs *regs)
+ 			regs->ea = restart_addr;
+ 			break;
+ 		}
++		regs->orig_r2 = -1;
+ 	}
+ 
+ 	if (get_signal(&ksig)) {
+diff --git a/arch/nios2/kernel/syscall_table.c b/arch/nios2/kernel/syscall_table.c
+index 6176d63023c1d..c2875a6dd5a4a 100644
+--- a/arch/nios2/kernel/syscall_table.c
++++ b/arch/nios2/kernel/syscall_table.c
+@@ -13,5 +13,6 @@
+ #define __SYSCALL(nr, call) [nr] = (call),
+ 
+ void *sys_call_table[__NR_syscalls] = {
++	[0 ... __NR_syscalls-1] = sys_ni_syscall,
+ #include <asm/unistd.h>
+ };
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index 7a96cdefbd4e4..59175651f0b9e 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -17,23 +17,6 @@ HAS_BIARCH	:= $(call cc-option-yn, -m32)
+ # Set default 32 bits cross compilers for vdso and boot wrapper
+ CROSS32_COMPILE ?=
+ 
+-ifeq ($(HAS_BIARCH),y)
+-ifeq ($(CROSS32_COMPILE),)
+-ifdef CONFIG_PPC32
+-# These options will be overridden by any -mcpu option that the CPU
+-# or platform code sets later on the command line, but they are needed
+-# to set a sane 32-bit cpu target for the 64-bit cross compiler which
+-# may default to the wrong ISA.
+-KBUILD_CFLAGS		+= -mcpu=powerpc
+-KBUILD_AFLAGS		+= -mcpu=powerpc
+-endif
+-endif
+-endif
+-
+-ifdef CONFIG_PPC_BOOK3S_32
+-KBUILD_CFLAGS		+= -mcpu=powerpc
+-endif
+-
+ # If we're on a ppc/ppc64/ppc64le machine use that defconfig, otherwise just use
+ # ppc64_defconfig because we have nothing better to go on.
+ uname := $(shell uname -m)
+@@ -190,6 +173,7 @@ endif
+ endif
+ 
+ CFLAGS-$(CONFIG_TARGET_CPU_BOOL) += $(call cc-option,-mcpu=$(CONFIG_TARGET_CPU))
++AFLAGS-$(CONFIG_TARGET_CPU_BOOL) += $(call cc-option,-mcpu=$(CONFIG_TARGET_CPU))
+ 
+ # Altivec option not allowed with e500mc64 in GCC.
+ ifdef CONFIG_ALTIVEC
+@@ -200,14 +184,6 @@ endif
+ CFLAGS-$(CONFIG_E5500_CPU) += $(E5500_CPU)
+ CFLAGS-$(CONFIG_E6500_CPU) += $(call cc-option,-mcpu=e6500,$(E5500_CPU))
+ 
+-ifdef CONFIG_PPC32
+-ifdef CONFIG_PPC_E500MC
+-CFLAGS-y += $(call cc-option,-mcpu=e500mc,-mcpu=powerpc)
+-else
+-CFLAGS-$(CONFIG_E500) += $(call cc-option,-mcpu=8540 -msoft-float,-mcpu=powerpc)
+-endif
+-endif
+-
+ asinstr := $(call as-instr,lis 9$(comma)foo@high,-DHAVE_AS_ATHIGH=1)
+ 
+ KBUILD_CPPFLAGS	+= -I $(srctree)/arch/$(ARCH) $(asinstr)
+diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
+index f9d35c9ea4aed..cfc461413a5f8 100644
+--- a/arch/powerpc/kernel/pci-common.c
++++ b/arch/powerpc/kernel/pci-common.c
+@@ -66,10 +66,6 @@ void set_pci_dma_ops(const struct dma_map_ops *dma_ops)
+ 	pci_dma_ops = dma_ops;
+ }
+ 
+-/*
+- * This function should run under locking protection, specifically
+- * hose_spinlock.
+- */
+ static int get_phb_number(struct device_node *dn)
+ {
+ 	int ret, phb_id = -1;
+@@ -106,15 +102,20 @@ static int get_phb_number(struct device_node *dn)
+ 	if (!ret)
+ 		phb_id = (int)(prop & (MAX_PHBS - 1));
+ 
++	spin_lock(&hose_spinlock);
++
+ 	/* We need to be sure to not use the same PHB number twice. */
+ 	if ((phb_id >= 0) && !test_and_set_bit(phb_id, phb_bitmap))
+-		return phb_id;
++		goto out_unlock;
+ 
+ 	/* If everything fails then fallback to dynamic PHB numbering. */
+ 	phb_id = find_first_zero_bit(phb_bitmap, MAX_PHBS);
+ 	BUG_ON(phb_id >= MAX_PHBS);
+ 	set_bit(phb_id, phb_bitmap);
+ 
++out_unlock:
++	spin_unlock(&hose_spinlock);
++
+ 	return phb_id;
+ }
+ 
+@@ -125,10 +126,13 @@ struct pci_controller *pcibios_alloc_controller(struct device_node *dev)
+ 	phb = zalloc_maybe_bootmem(sizeof(struct pci_controller), GFP_KERNEL);
+ 	if (phb == NULL)
+ 		return NULL;
+-	spin_lock(&hose_spinlock);
++
+ 	phb->global_number = get_phb_number(dev);
++
++	spin_lock(&hose_spinlock);
+ 	list_add_tail(&phb->list_node, &hose_list);
+ 	spin_unlock(&hose_spinlock);
++
+ 	phb->dn = dev;
+ 	phb->is_dynamic = slab_is_available();
+ #ifdef CONFIG_PPC64
+diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
+index 7a14a094be8ac..1dfb4c213feae 100644
+--- a/arch/powerpc/kernel/prom.c
++++ b/arch/powerpc/kernel/prom.c
+@@ -750,6 +750,13 @@ void __init early_init_devtree(void *params)
+ 	of_scan_flat_dt(early_init_dt_scan_root, NULL);
+ 	of_scan_flat_dt(early_init_dt_scan_memory_ppc, NULL);
+ 
++	/*
++	 * As generic code authors expect to be able to use static keys
++	 * in early_param() handlers, we initialize the static keys just
++	 * before parsing early params (it's fine to call jump_label_init()
++	 * more than once).
++	 */
++	jump_label_init();
+ 	parse_early_param();
+ 
+ 	/* make sure we've parsed cmdline for mem= before this */
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index 75ebfbff4debc..84f9dd476bbba 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -119,9 +119,9 @@ config GENERIC_CPU
+ 	depends on PPC64 && CPU_LITTLE_ENDIAN
+ 	select ARCH_HAS_FAST_MULTIPLIER
+ 
+-config GENERIC_CPU
++config POWERPC_CPU
+ 	bool "Generic 32 bits powerpc"
+-	depends on PPC32 && !PPC_8xx
++	depends on PPC32 && !PPC_8xx && !PPC_85xx
+ 
+ config CELL_CPU
+ 	bool "Cell Broadband Engine"
+@@ -175,11 +175,23 @@ config G4_CPU
+ 	depends on PPC_BOOK3S_32
+ 	select ALTIVEC
+ 
++config E500_CPU
++	bool "e500 (8540)"
++	depends on PPC_85xx && !PPC_E500MC
++
++config E500MC_CPU
++	bool "e500mc"
++	depends on PPC_85xx && PPC_E500MC
++
++config TOOLCHAIN_DEFAULT_CPU
++	bool "Rely on the toolchain's implicit default CPU"
++	depends on PPC32
++
+ endchoice
+ 
+ config TARGET_CPU_BOOL
+ 	bool
+-	default !GENERIC_CPU
++	default !GENERIC_CPU && !TOOLCHAIN_DEFAULT_CPU
+ 
+ config TARGET_CPU
+ 	string
+@@ -194,6 +206,9 @@ config TARGET_CPU
+ 	default "e300c2" if E300C2_CPU
+ 	default "e300c3" if E300C3_CPU
+ 	default "G4" if G4_CPU
++	default "8540" if E500_CPU
++	default "e500mc" if E500MC_CPU
++	default "powerpc" if POWERPC_CPU
+ 
+ config PPC_BOOK3S
+ 	def_bool y
+diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
+index 12f8a7fce78b1..8a7880b9c433e 100644
+--- a/arch/riscv/kernel/sys_riscv.c
++++ b/arch/riscv/kernel/sys_riscv.c
+@@ -18,9 +18,8 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
+ 	if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
+ 		return -EINVAL;
+ 
+-	if ((prot & PROT_WRITE) && (prot & PROT_EXEC))
+-		if (unlikely(!(prot & PROT_READ)))
+-			return -EINVAL;
++	if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))
++		return -EINVAL;
+ 
+ 	return ksys_mmap_pgoff(addr, len, prot, flags, fd,
+ 			       offset >> (PAGE_SHIFT - page_shift_offset));
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index ad14f4466d924..c1a13011fb8e5 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -15,6 +15,7 @@
+ #include <linux/mm.h>
+ #include <linux/module.h>
+ #include <linux/irq.h>
++#include <linux/kexec.h>
+ 
+ #include <asm/processor.h>
+ #include <asm/ptrace.h>
+@@ -43,6 +44,9 @@ void die(struct pt_regs *regs, const char *str)
+ 
+ 	ret = notify_die(DIE_OOPS, str, regs, 0, regs->cause, SIGSEGV);
+ 
++	if (regs && kexec_should_crash(current))
++		crash_kexec(regs);
++
+ 	bust_spinlocks(0);
+ 	add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+ 	spin_unlock_irq(&die_lock);
+diff --git a/arch/um/os-Linux/skas/process.c b/arch/um/os-Linux/skas/process.c
+index 94a7c4125ebc8..eecde73b2e782 100644
+--- a/arch/um/os-Linux/skas/process.c
++++ b/arch/um/os-Linux/skas/process.c
+@@ -5,6 +5,7 @@
+  */
+ 
+ #include <stdlib.h>
++#include <stdbool.h>
+ #include <unistd.h>
+ #include <sched.h>
+ #include <errno.h>
+@@ -644,10 +645,24 @@ void halt_skas(void)
+ 	UML_LONGJMP(&initial_jmpbuf, INIT_JMP_HALT);
+ }
+ 
++static bool noreboot;
++
++static int __init noreboot_cmd_param(char *str, int *add)
++{
++	noreboot = true;
++	return 0;
++}
++
++__uml_setup("noreboot", noreboot_cmd_param,
++"noreboot\n"
++"    Rather than rebooting, exit always, akin to QEMU's -no-reboot option.\n"
++"    This is useful if you're using CONFIG_PANIC_TIMEOUT in order to catch\n"
++"    crashes in CI\n");
++
+ void reboot_skas(void)
+ {
+ 	block_signals_trace();
+-	UML_LONGJMP(&initial_jmpbuf, INIT_JMP_REBOOT);
++	UML_LONGJMP(&initial_jmpbuf, noreboot ? INIT_JMP_HALT : INIT_JMP_REBOOT);
+ }
+ 
+ void __switch_mm(struct mm_id *mm_idp)
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 067ca92e69ef9..20951ab522a1d 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -645,7 +645,7 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
+ 			pages++;
+ 			spin_lock(&init_mm.page_table_lock);
+ 
+-			prot = __pgprot(pgprot_val(prot) | __PAGE_KERNEL_LARGE);
++			prot = __pgprot(pgprot_val(prot) | _PAGE_PSE);
+ 
+ 			set_pte_init((pte_t *)pud,
+ 				     pfn_pte((paddr & PUD_MASK) >> PAGE_SHIFT,
+diff --git a/drivers/acpi/pci_mcfg.c b/drivers/acpi/pci_mcfg.c
+index 95f23acd5b802..2709ef2b0351b 100644
+--- a/drivers/acpi/pci_mcfg.c
++++ b/drivers/acpi/pci_mcfg.c
+@@ -41,6 +41,8 @@ struct mcfg_fixup {
+ static struct mcfg_fixup mcfg_quirks[] = {
+ /*	{ OEM_ID, OEM_TABLE_ID, REV, SEGMENT, BUS_RANGE, ops, cfgres }, */
+ 
++#ifdef CONFIG_ARM64
++
+ #define AL_ECAM(table_id, rev, seg, ops) \
+ 	{ "AMAZON", table_id, rev, seg, MCFG_BUS_ANY, ops }
+ 
+@@ -162,6 +164,7 @@ static struct mcfg_fixup mcfg_quirks[] = {
+ 	ALTRA_ECAM_QUIRK(1, 13),
+ 	ALTRA_ECAM_QUIRK(1, 14),
+ 	ALTRA_ECAM_QUIRK(1, 15),
++#endif /* ARM64 */
+ };
+ 
+ static char mcfg_oem_id[ACPI_OEM_ID_SIZE];
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index 619d34d73dcfe..80e92c298055d 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -155,10 +155,10 @@ static bool acpi_nondev_subnode_ok(acpi_handle scope,
+ 	return acpi_nondev_subnode_data_ok(handle, link, list, parent);
+ }
+ 
+-static int acpi_add_nondev_subnodes(acpi_handle scope,
+-				    const union acpi_object *links,
+-				    struct list_head *list,
+-				    struct fwnode_handle *parent)
++static bool acpi_add_nondev_subnodes(acpi_handle scope,
++				     const union acpi_object *links,
++				     struct list_head *list,
++				     struct fwnode_handle *parent)
+ {
+ 	bool ret = false;
+ 	int i;
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 018ed8736a64d..973f4d34d7cda 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -2131,6 +2131,7 @@ const char *ata_get_cmd_descript(u8 command)
+ 		{ ATA_CMD_WRITE_QUEUED_FUA_EXT, "WRITE DMA QUEUED FUA EXT" },
+ 		{ ATA_CMD_FPDMA_READ,		"READ FPDMA QUEUED" },
+ 		{ ATA_CMD_FPDMA_WRITE,		"WRITE FPDMA QUEUED" },
++		{ ATA_CMD_NCQ_NON_DATA,		"NCQ NON-DATA" },
+ 		{ ATA_CMD_FPDMA_SEND,		"SEND FPDMA QUEUED" },
+ 		{ ATA_CMD_FPDMA_RECV,		"RECEIVE FPDMA QUEUED" },
+ 		{ ATA_CMD_PIO_READ,		"READ SECTOR(S)" },
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index 5f0472c18bcbd..82f6f1fbe9e78 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -3767,6 +3767,7 @@ static void __exit idt77252_exit(void)
+ 		card = idt77252_chain;
+ 		dev = card->atmdev;
+ 		idt77252_chain = card->next;
++		del_timer_sync(&card->tst_timer);
+ 
+ 		if (dev->phy->stop)
+ 			dev->phy->stop(dev);
+diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
+index 33e3b76c4fa91..b08650417bf00 100644
+--- a/drivers/block/zram/zcomp.c
++++ b/drivers/block/zram/zcomp.c
+@@ -61,12 +61,6 @@ static int zcomp_strm_init(struct zcomp_strm *zstrm, struct zcomp *comp)
+ 
+ bool zcomp_available_algorithm(const char *comp)
+ {
+-	int i;
+-
+-	i = sysfs_match_string(backends, comp);
+-	if (i >= 0)
+-		return true;
+-
+ 	/*
+ 	 * Crypto does not ignore a trailing new line symbol,
+ 	 * so make sure you don't supply a string containing
+@@ -215,6 +209,11 @@ struct zcomp *zcomp_create(const char *compress)
+ 	struct zcomp *comp;
+ 	int error;
+ 
++	/*
++	 * Crypto API will execute /sbin/modprobe if the compression module
++	 * is not loaded yet. We must do it here, otherwise we are about to
++	 * call /sbin/modprobe under CPU hot-plug lock.
++	 */
+ 	if (!zcomp_available_algorithm(compress))
+ 		return ERR_PTR(-EINVAL);
+ 
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 1a571c04a76cb..cf265ab035ea9 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -1379,7 +1379,7 @@ const struct clk_ops clk_alpha_pll_postdiv_fabia_ops = {
+ EXPORT_SYMBOL_GPL(clk_alpha_pll_postdiv_fabia_ops);
+ 
+ /**
+- * clk_lucid_pll_configure - configure the lucid pll
++ * clk_trion_pll_configure - configure the trion pll
+  *
+  * @pll: clk alpha pll
+  * @regmap: register map
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index 2c2ecfc5e61f5..d6d5defb82c9f 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -662,6 +662,7 @@ static struct clk_branch gcc_sleep_clk_src = {
+ 			},
+ 			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
++			.flags = CLK_IS_CRITICAL,
+ 		},
+ 	},
+ };
+diff --git a/drivers/clk/ti/clk-44xx.c b/drivers/clk/ti/clk-44xx.c
+index a38c921539793..cbf9922d93d4e 100644
+--- a/drivers/clk/ti/clk-44xx.c
++++ b/drivers/clk/ti/clk-44xx.c
+@@ -56,7 +56,7 @@ static const struct omap_clkctrl_bit_data omap4_aess_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap4_func_dmic_abe_gfclk_parents[] __initconst = {
+-	"abe_cm:clk:0018:26",
++	"abe-clkctrl:0018:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -76,7 +76,7 @@ static const struct omap_clkctrl_bit_data omap4_dmic_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap4_func_mcasp_abe_gfclk_parents[] __initconst = {
+-	"abe_cm:clk:0020:26",
++	"abe-clkctrl:0020:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -89,7 +89,7 @@ static const struct omap_clkctrl_bit_data omap4_mcasp_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap4_func_mcbsp1_gfclk_parents[] __initconst = {
+-	"abe_cm:clk:0028:26",
++	"abe-clkctrl:0028:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -102,7 +102,7 @@ static const struct omap_clkctrl_bit_data omap4_mcbsp1_bit_data[] __initconst =
+ };
+ 
+ static const char * const omap4_func_mcbsp2_gfclk_parents[] __initconst = {
+-	"abe_cm:clk:0030:26",
++	"abe-clkctrl:0030:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -115,7 +115,7 @@ static const struct omap_clkctrl_bit_data omap4_mcbsp2_bit_data[] __initconst =
+ };
+ 
+ static const char * const omap4_func_mcbsp3_gfclk_parents[] __initconst = {
+-	"abe_cm:clk:0038:26",
++	"abe-clkctrl:0038:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -183,18 +183,18 @@ static const struct omap_clkctrl_bit_data omap4_timer8_bit_data[] __initconst =
+ 
+ static const struct omap_clkctrl_reg_data omap4_abe_clkctrl_regs[] __initconst = {
+ 	{ OMAP4_L4_ABE_CLKCTRL, NULL, 0, "ocp_abe_iclk" },
+-	{ OMAP4_AESS_CLKCTRL, omap4_aess_bit_data, CLKF_SW_SUP, "abe_cm:clk:0008:24" },
++	{ OMAP4_AESS_CLKCTRL, omap4_aess_bit_data, CLKF_SW_SUP, "abe-clkctrl:0008:24" },
+ 	{ OMAP4_MCPDM_CLKCTRL, NULL, CLKF_SW_SUP, "pad_clks_ck" },
+-	{ OMAP4_DMIC_CLKCTRL, omap4_dmic_bit_data, CLKF_SW_SUP, "abe_cm:clk:0018:24" },
+-	{ OMAP4_MCASP_CLKCTRL, omap4_mcasp_bit_data, CLKF_SW_SUP, "abe_cm:clk:0020:24" },
+-	{ OMAP4_MCBSP1_CLKCTRL, omap4_mcbsp1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0028:24" },
+-	{ OMAP4_MCBSP2_CLKCTRL, omap4_mcbsp2_bit_data, CLKF_SW_SUP, "abe_cm:clk:0030:24" },
+-	{ OMAP4_MCBSP3_CLKCTRL, omap4_mcbsp3_bit_data, CLKF_SW_SUP, "abe_cm:clk:0038:24" },
+-	{ OMAP4_SLIMBUS1_CLKCTRL, omap4_slimbus1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0040:8" },
+-	{ OMAP4_TIMER5_CLKCTRL, omap4_timer5_bit_data, CLKF_SW_SUP, "abe_cm:clk:0048:24" },
+-	{ OMAP4_TIMER6_CLKCTRL, omap4_timer6_bit_data, CLKF_SW_SUP, "abe_cm:clk:0050:24" },
+-	{ OMAP4_TIMER7_CLKCTRL, omap4_timer7_bit_data, CLKF_SW_SUP, "abe_cm:clk:0058:24" },
+-	{ OMAP4_TIMER8_CLKCTRL, omap4_timer8_bit_data, CLKF_SW_SUP, "abe_cm:clk:0060:24" },
++	{ OMAP4_DMIC_CLKCTRL, omap4_dmic_bit_data, CLKF_SW_SUP, "abe-clkctrl:0018:24" },
++	{ OMAP4_MCASP_CLKCTRL, omap4_mcasp_bit_data, CLKF_SW_SUP, "abe-clkctrl:0020:24" },
++	{ OMAP4_MCBSP1_CLKCTRL, omap4_mcbsp1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0028:24" },
++	{ OMAP4_MCBSP2_CLKCTRL, omap4_mcbsp2_bit_data, CLKF_SW_SUP, "abe-clkctrl:0030:24" },
++	{ OMAP4_MCBSP3_CLKCTRL, omap4_mcbsp3_bit_data, CLKF_SW_SUP, "abe-clkctrl:0038:24" },
++	{ OMAP4_SLIMBUS1_CLKCTRL, omap4_slimbus1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0040:8" },
++	{ OMAP4_TIMER5_CLKCTRL, omap4_timer5_bit_data, CLKF_SW_SUP, "abe-clkctrl:0048:24" },
++	{ OMAP4_TIMER6_CLKCTRL, omap4_timer6_bit_data, CLKF_SW_SUP, "abe-clkctrl:0050:24" },
++	{ OMAP4_TIMER7_CLKCTRL, omap4_timer7_bit_data, CLKF_SW_SUP, "abe-clkctrl:0058:24" },
++	{ OMAP4_TIMER8_CLKCTRL, omap4_timer8_bit_data, CLKF_SW_SUP, "abe-clkctrl:0060:24" },
+ 	{ OMAP4_WD_TIMER3_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ 	{ 0 },
+ };
+@@ -287,7 +287,7 @@ static const struct omap_clkctrl_bit_data omap4_fdif_bit_data[] __initconst = {
+ 
+ static const struct omap_clkctrl_reg_data omap4_iss_clkctrl_regs[] __initconst = {
+ 	{ OMAP4_ISS_CLKCTRL, omap4_iss_bit_data, CLKF_SW_SUP, "ducati_clk_mux_ck" },
+-	{ OMAP4_FDIF_CLKCTRL, omap4_fdif_bit_data, CLKF_SW_SUP, "iss_cm:clk:0008:24" },
++	{ OMAP4_FDIF_CLKCTRL, omap4_fdif_bit_data, CLKF_SW_SUP, "iss-clkctrl:0008:24" },
+ 	{ 0 },
+ };
+ 
+@@ -320,7 +320,7 @@ static const struct omap_clkctrl_bit_data omap4_dss_core_bit_data[] __initconst
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap4_l3_dss_clkctrl_regs[] __initconst = {
+-	{ OMAP4_DSS_CORE_CLKCTRL, omap4_dss_core_bit_data, CLKF_SW_SUP, "l3_dss_cm:clk:0000:8" },
++	{ OMAP4_DSS_CORE_CLKCTRL, omap4_dss_core_bit_data, CLKF_SW_SUP, "l3-dss-clkctrl:0000:8" },
+ 	{ 0 },
+ };
+ 
+@@ -336,7 +336,7 @@ static const struct omap_clkctrl_bit_data omap4_gpu_bit_data[] __initconst = {
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap4_l3_gfx_clkctrl_regs[] __initconst = {
+-	{ OMAP4_GPU_CLKCTRL, omap4_gpu_bit_data, CLKF_SW_SUP, "l3_gfx_cm:clk:0000:24" },
++	{ OMAP4_GPU_CLKCTRL, omap4_gpu_bit_data, CLKF_SW_SUP, "l3-gfx-clkctrl:0000:24" },
+ 	{ 0 },
+ };
+ 
+@@ -372,12 +372,12 @@ static const struct omap_clkctrl_bit_data omap4_hsi_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap4_usb_host_hs_utmi_p1_clk_parents[] __initconst = {
+-	"l3_init_cm:clk:0038:24",
++	"l3-init-clkctrl:0038:24",
+ 	NULL,
+ };
+ 
+ static const char * const omap4_usb_host_hs_utmi_p2_clk_parents[] __initconst = {
+-	"l3_init_cm:clk:0038:25",
++	"l3-init-clkctrl:0038:25",
+ 	NULL,
+ };
+ 
+@@ -418,7 +418,7 @@ static const struct omap_clkctrl_bit_data omap4_usb_host_hs_bit_data[] __initcon
+ };
+ 
+ static const char * const omap4_usb_otg_hs_xclk_parents[] __initconst = {
+-	"l3_init_cm:clk:0040:24",
++	"l3-init-clkctrl:0040:24",
+ 	NULL,
+ };
+ 
+@@ -452,14 +452,14 @@ static const struct omap_clkctrl_bit_data omap4_ocp2scp_usb_phy_bit_data[] __ini
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap4_l3_init_clkctrl_regs[] __initconst = {
+-	{ OMAP4_MMC1_CLKCTRL, omap4_mmc1_bit_data, CLKF_SW_SUP, "l3_init_cm:clk:0008:24" },
+-	{ OMAP4_MMC2_CLKCTRL, omap4_mmc2_bit_data, CLKF_SW_SUP, "l3_init_cm:clk:0010:24" },
+-	{ OMAP4_HSI_CLKCTRL, omap4_hsi_bit_data, CLKF_HW_SUP, "l3_init_cm:clk:0018:24" },
++	{ OMAP4_MMC1_CLKCTRL, omap4_mmc1_bit_data, CLKF_SW_SUP, "l3-init-clkctrl:0008:24" },
++	{ OMAP4_MMC2_CLKCTRL, omap4_mmc2_bit_data, CLKF_SW_SUP, "l3-init-clkctrl:0010:24" },
++	{ OMAP4_HSI_CLKCTRL, omap4_hsi_bit_data, CLKF_HW_SUP, "l3-init-clkctrl:0018:24" },
+ 	{ OMAP4_USB_HOST_HS_CLKCTRL, omap4_usb_host_hs_bit_data, CLKF_SW_SUP, "init_60m_fclk" },
+ 	{ OMAP4_USB_OTG_HS_CLKCTRL, omap4_usb_otg_hs_bit_data, CLKF_HW_SUP, "l3_div_ck" },
+ 	{ OMAP4_USB_TLL_HS_CLKCTRL, omap4_usb_tll_hs_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+ 	{ OMAP4_USB_HOST_FS_CLKCTRL, NULL, CLKF_SW_SUP, "func_48mc_fclk" },
+-	{ OMAP4_OCP2SCP_USB_PHY_CLKCTRL, omap4_ocp2scp_usb_phy_bit_data, CLKF_HW_SUP, "l3_init_cm:clk:00c0:8" },
++	{ OMAP4_OCP2SCP_USB_PHY_CLKCTRL, omap4_ocp2scp_usb_phy_bit_data, CLKF_HW_SUP, "l3-init-clkctrl:00c0:8" },
+ 	{ 0 },
+ };
+ 
+@@ -530,7 +530,7 @@ static const struct omap_clkctrl_bit_data omap4_gpio6_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap4_per_mcbsp4_gfclk_parents[] __initconst = {
+-	"l4_per_cm:clk:00c0:26",
++	"l4-per-clkctrl:00c0:26",
+ 	"pad_clks_ck",
+ 	NULL,
+ };
+@@ -570,12 +570,12 @@ static const struct omap_clkctrl_bit_data omap4_slimbus2_bit_data[] __initconst
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap4_l4_per_clkctrl_regs[] __initconst = {
+-	{ OMAP4_TIMER10_CLKCTRL, omap4_timer10_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0008:24" },
+-	{ OMAP4_TIMER11_CLKCTRL, omap4_timer11_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0010:24" },
+-	{ OMAP4_TIMER2_CLKCTRL, omap4_timer2_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0018:24" },
+-	{ OMAP4_TIMER3_CLKCTRL, omap4_timer3_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0020:24" },
+-	{ OMAP4_TIMER4_CLKCTRL, omap4_timer4_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0028:24" },
+-	{ OMAP4_TIMER9_CLKCTRL, omap4_timer9_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0030:24" },
++	{ OMAP4_TIMER10_CLKCTRL, omap4_timer10_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0008:24" },
++	{ OMAP4_TIMER11_CLKCTRL, omap4_timer11_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0010:24" },
++	{ OMAP4_TIMER2_CLKCTRL, omap4_timer2_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0018:24" },
++	{ OMAP4_TIMER3_CLKCTRL, omap4_timer3_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0020:24" },
++	{ OMAP4_TIMER4_CLKCTRL, omap4_timer4_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0028:24" },
++	{ OMAP4_TIMER9_CLKCTRL, omap4_timer9_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0030:24" },
+ 	{ OMAP4_ELM_CLKCTRL, NULL, 0, "l4_div_ck" },
+ 	{ OMAP4_GPIO2_CLKCTRL, omap4_gpio2_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+ 	{ OMAP4_GPIO3_CLKCTRL, omap4_gpio3_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+@@ -588,14 +588,14 @@ static const struct omap_clkctrl_reg_data omap4_l4_per_clkctrl_regs[] __initcons
+ 	{ OMAP4_I2C3_CLKCTRL, NULL, CLKF_SW_SUP, "func_96m_fclk" },
+ 	{ OMAP4_I2C4_CLKCTRL, NULL, CLKF_SW_SUP, "func_96m_fclk" },
+ 	{ OMAP4_L4_PER_CLKCTRL, NULL, 0, "l4_div_ck" },
+-	{ OMAP4_MCBSP4_CLKCTRL, omap4_mcbsp4_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:00c0:24" },
++	{ OMAP4_MCBSP4_CLKCTRL, omap4_mcbsp4_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:00c0:24" },
+ 	{ OMAP4_MCSPI1_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_MCSPI2_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_MCSPI3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_MCSPI4_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_MMC3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_MMC4_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+-	{ OMAP4_SLIMBUS2_CLKCTRL, omap4_slimbus2_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0118:8" },
++	{ OMAP4_SLIMBUS2_CLKCTRL, omap4_slimbus2_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0118:8" },
+ 	{ OMAP4_UART1_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_UART2_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_UART3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+@@ -630,7 +630,7 @@ static const struct omap_clkctrl_reg_data omap4_l4_wkup_clkctrl_regs[] __initcon
+ 	{ OMAP4_L4_WKUP_CLKCTRL, NULL, 0, "l4_wkup_clk_mux_ck" },
+ 	{ OMAP4_WD_TIMER2_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ 	{ OMAP4_GPIO1_CLKCTRL, omap4_gpio1_bit_data, CLKF_HW_SUP, "l4_wkup_clk_mux_ck" },
+-	{ OMAP4_TIMER1_CLKCTRL, omap4_timer1_bit_data, CLKF_SW_SUP, "l4_wkup_cm:clk:0020:24" },
++	{ OMAP4_TIMER1_CLKCTRL, omap4_timer1_bit_data, CLKF_SW_SUP, "l4-wkup-clkctrl:0020:24" },
+ 	{ OMAP4_COUNTER_32K_CLKCTRL, NULL, 0, "sys_32k_ck" },
+ 	{ OMAP4_KBD_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ 	{ 0 },
+@@ -644,7 +644,7 @@ static const char * const omap4_pmd_stm_clock_mux_ck_parents[] __initconst = {
+ };
+ 
+ static const char * const omap4_trace_clk_div_div_ck_parents[] __initconst = {
+-	"emu_sys_cm:clk:0000:22",
++	"emu-sys-clkctrl:0000:22",
+ 	NULL,
+ };
+ 
+@@ -662,7 +662,7 @@ static const struct omap_clkctrl_div_data omap4_trace_clk_div_div_ck_data __init
+ };
+ 
+ static const char * const omap4_stm_clk_div_ck_parents[] __initconst = {
+-	"emu_sys_cm:clk:0000:20",
++	"emu-sys-clkctrl:0000:20",
+ 	NULL,
+ };
+ 
+@@ -716,73 +716,73 @@ static struct ti_dt_clk omap44xx_clks[] = {
+ 	 * hwmod support. Once hwmod is removed, these can be removed
+ 	 * also.
+ 	 */
+-	DT_CLK(NULL, "aess_fclk", "abe_cm:0008:24"),
+-	DT_CLK(NULL, "cm2_dm10_mux", "l4_per_cm:0008:24"),
+-	DT_CLK(NULL, "cm2_dm11_mux", "l4_per_cm:0010:24"),
+-	DT_CLK(NULL, "cm2_dm2_mux", "l4_per_cm:0018:24"),
+-	DT_CLK(NULL, "cm2_dm3_mux", "l4_per_cm:0020:24"),
+-	DT_CLK(NULL, "cm2_dm4_mux", "l4_per_cm:0028:24"),
+-	DT_CLK(NULL, "cm2_dm9_mux", "l4_per_cm:0030:24"),
+-	DT_CLK(NULL, "dmic_sync_mux_ck", "abe_cm:0018:26"),
+-	DT_CLK(NULL, "dmt1_clk_mux", "l4_wkup_cm:0020:24"),
+-	DT_CLK(NULL, "dss_48mhz_clk", "l3_dss_cm:0000:9"),
+-	DT_CLK(NULL, "dss_dss_clk", "l3_dss_cm:0000:8"),
+-	DT_CLK(NULL, "dss_sys_clk", "l3_dss_cm:0000:10"),
+-	DT_CLK(NULL, "dss_tv_clk", "l3_dss_cm:0000:11"),
+-	DT_CLK(NULL, "fdif_fck", "iss_cm:0008:24"),
+-	DT_CLK(NULL, "func_dmic_abe_gfclk", "abe_cm:0018:24"),
+-	DT_CLK(NULL, "func_mcasp_abe_gfclk", "abe_cm:0020:24"),
+-	DT_CLK(NULL, "func_mcbsp1_gfclk", "abe_cm:0028:24"),
+-	DT_CLK(NULL, "func_mcbsp2_gfclk", "abe_cm:0030:24"),
+-	DT_CLK(NULL, "func_mcbsp3_gfclk", "abe_cm:0038:24"),
+-	DT_CLK(NULL, "gpio1_dbclk", "l4_wkup_cm:0018:8"),
+-	DT_CLK(NULL, "gpio2_dbclk", "l4_per_cm:0040:8"),
+-	DT_CLK(NULL, "gpio3_dbclk", "l4_per_cm:0048:8"),
+-	DT_CLK(NULL, "gpio4_dbclk", "l4_per_cm:0050:8"),
+-	DT_CLK(NULL, "gpio5_dbclk", "l4_per_cm:0058:8"),
+-	DT_CLK(NULL, "gpio6_dbclk", "l4_per_cm:0060:8"),
+-	DT_CLK(NULL, "hsi_fck", "l3_init_cm:0018:24"),
+-	DT_CLK(NULL, "hsmmc1_fclk", "l3_init_cm:0008:24"),
+-	DT_CLK(NULL, "hsmmc2_fclk", "l3_init_cm:0010:24"),
+-	DT_CLK(NULL, "iss_ctrlclk", "iss_cm:0000:8"),
+-	DT_CLK(NULL, "mcasp_sync_mux_ck", "abe_cm:0020:26"),
+-	DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe_cm:0028:26"),
+-	DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe_cm:0030:26"),
+-	DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe_cm:0038:26"),
+-	DT_CLK(NULL, "mcbsp4_sync_mux_ck", "l4_per_cm:00c0:26"),
+-	DT_CLK(NULL, "ocp2scp_usb_phy_phy_48m", "l3_init_cm:00c0:8"),
+-	DT_CLK(NULL, "otg_60m_gfclk", "l3_init_cm:0040:24"),
+-	DT_CLK(NULL, "per_mcbsp4_gfclk", "l4_per_cm:00c0:24"),
+-	DT_CLK(NULL, "pmd_stm_clock_mux_ck", "emu_sys_cm:0000:20"),
+-	DT_CLK(NULL, "pmd_trace_clk_mux_ck", "emu_sys_cm:0000:22"),
+-	DT_CLK(NULL, "sgx_clk_mux", "l3_gfx_cm:0000:24"),
+-	DT_CLK(NULL, "slimbus1_fclk_0", "abe_cm:0040:8"),
+-	DT_CLK(NULL, "slimbus1_fclk_1", "abe_cm:0040:9"),
+-	DT_CLK(NULL, "slimbus1_fclk_2", "abe_cm:0040:10"),
+-	DT_CLK(NULL, "slimbus1_slimbus_clk", "abe_cm:0040:11"),
+-	DT_CLK(NULL, "slimbus2_fclk_0", "l4_per_cm:0118:8"),
+-	DT_CLK(NULL, "slimbus2_fclk_1", "l4_per_cm:0118:9"),
+-	DT_CLK(NULL, "slimbus2_slimbus_clk", "l4_per_cm:0118:10"),
+-	DT_CLK(NULL, "stm_clk_div_ck", "emu_sys_cm:0000:27"),
+-	DT_CLK(NULL, "timer5_sync_mux", "abe_cm:0048:24"),
+-	DT_CLK(NULL, "timer6_sync_mux", "abe_cm:0050:24"),
+-	DT_CLK(NULL, "timer7_sync_mux", "abe_cm:0058:24"),
+-	DT_CLK(NULL, "timer8_sync_mux", "abe_cm:0060:24"),
+-	DT_CLK(NULL, "trace_clk_div_div_ck", "emu_sys_cm:0000:24"),
+-	DT_CLK(NULL, "usb_host_hs_func48mclk", "l3_init_cm:0038:15"),
+-	DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3_init_cm:0038:13"),
+-	DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3_init_cm:0038:14"),
+-	DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3_init_cm:0038:11"),
+-	DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3_init_cm:0038:12"),
+-	DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3_init_cm:0038:8"),
+-	DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3_init_cm:0038:9"),
+-	DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3_init_cm:0038:10"),
+-	DT_CLK(NULL, "usb_otg_hs_xclk", "l3_init_cm:0040:8"),
+-	DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3_init_cm:0048:8"),
+-	DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3_init_cm:0048:9"),
+-	DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3_init_cm:0048:10"),
+-	DT_CLK(NULL, "utmi_p1_gfclk", "l3_init_cm:0038:24"),
+-	DT_CLK(NULL, "utmi_p2_gfclk", "l3_init_cm:0038:25"),
++	DT_CLK(NULL, "aess_fclk", "abe-clkctrl:0008:24"),
++	DT_CLK(NULL, "cm2_dm10_mux", "l4-per-clkctrl:0008:24"),
++	DT_CLK(NULL, "cm2_dm11_mux", "l4-per-clkctrl:0010:24"),
++	DT_CLK(NULL, "cm2_dm2_mux", "l4-per-clkctrl:0018:24"),
++	DT_CLK(NULL, "cm2_dm3_mux", "l4-per-clkctrl:0020:24"),
++	DT_CLK(NULL, "cm2_dm4_mux", "l4-per-clkctrl:0028:24"),
++	DT_CLK(NULL, "cm2_dm9_mux", "l4-per-clkctrl:0030:24"),
++	DT_CLK(NULL, "dmic_sync_mux_ck", "abe-clkctrl:0018:26"),
++	DT_CLK(NULL, "dmt1_clk_mux", "l4-wkup-clkctrl:0020:24"),
++	DT_CLK(NULL, "dss_48mhz_clk", "l3-dss-clkctrl:0000:9"),
++	DT_CLK(NULL, "dss_dss_clk", "l3-dss-clkctrl:0000:8"),
++	DT_CLK(NULL, "dss_sys_clk", "l3-dss-clkctrl:0000:10"),
++	DT_CLK(NULL, "dss_tv_clk", "l3-dss-clkctrl:0000:11"),
++	DT_CLK(NULL, "fdif_fck", "iss-clkctrl:0008:24"),
++	DT_CLK(NULL, "func_dmic_abe_gfclk", "abe-clkctrl:0018:24"),
++	DT_CLK(NULL, "func_mcasp_abe_gfclk", "abe-clkctrl:0020:24"),
++	DT_CLK(NULL, "func_mcbsp1_gfclk", "abe-clkctrl:0028:24"),
++	DT_CLK(NULL, "func_mcbsp2_gfclk", "abe-clkctrl:0030:24"),
++	DT_CLK(NULL, "func_mcbsp3_gfclk", "abe-clkctrl:0038:24"),
++	DT_CLK(NULL, "gpio1_dbclk", "l4-wkup-clkctrl:0018:8"),
++	DT_CLK(NULL, "gpio2_dbclk", "l4-per-clkctrl:0040:8"),
++	DT_CLK(NULL, "gpio3_dbclk", "l4-per-clkctrl:0048:8"),
++	DT_CLK(NULL, "gpio4_dbclk", "l4-per-clkctrl:0050:8"),
++	DT_CLK(NULL, "gpio5_dbclk", "l4-per-clkctrl:0058:8"),
++	DT_CLK(NULL, "gpio6_dbclk", "l4-per-clkctrl:0060:8"),
++	DT_CLK(NULL, "hsi_fck", "l3-init-clkctrl:0018:24"),
++	DT_CLK(NULL, "hsmmc1_fclk", "l3-init-clkctrl:0008:24"),
++	DT_CLK(NULL, "hsmmc2_fclk", "l3-init-clkctrl:0010:24"),
++	DT_CLK(NULL, "iss_ctrlclk", "iss-clkctrl:0000:8"),
++	DT_CLK(NULL, "mcasp_sync_mux_ck", "abe-clkctrl:0020:26"),
++	DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe-clkctrl:0028:26"),
++	DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe-clkctrl:0030:26"),
++	DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe-clkctrl:0038:26"),
++	DT_CLK(NULL, "mcbsp4_sync_mux_ck", "l4-per-clkctrl:00c0:26"),
++	DT_CLK(NULL, "ocp2scp_usb_phy_phy_48m", "l3-init-clkctrl:00c0:8"),
++	DT_CLK(NULL, "otg_60m_gfclk", "l3-init-clkctrl:0040:24"),
++	DT_CLK(NULL, "per_mcbsp4_gfclk", "l4-per-clkctrl:00c0:24"),
++	DT_CLK(NULL, "pmd_stm_clock_mux_ck", "emu-sys-clkctrl:0000:20"),
++	DT_CLK(NULL, "pmd_trace_clk_mux_ck", "emu-sys-clkctrl:0000:22"),
++	DT_CLK(NULL, "sgx_clk_mux", "l3-gfx-clkctrl:0000:24"),
++	DT_CLK(NULL, "slimbus1_fclk_0", "abe-clkctrl:0040:8"),
++	DT_CLK(NULL, "slimbus1_fclk_1", "abe-clkctrl:0040:9"),
++	DT_CLK(NULL, "slimbus1_fclk_2", "abe-clkctrl:0040:10"),
++	DT_CLK(NULL, "slimbus1_slimbus_clk", "abe-clkctrl:0040:11"),
++	DT_CLK(NULL, "slimbus2_fclk_0", "l4-per-clkctrl:0118:8"),
++	DT_CLK(NULL, "slimbus2_fclk_1", "l4-per-clkctrl:0118:9"),
++	DT_CLK(NULL, "slimbus2_slimbus_clk", "l4-per-clkctrl:0118:10"),
++	DT_CLK(NULL, "stm_clk_div_ck", "emu-sys-clkctrl:0000:27"),
++	DT_CLK(NULL, "timer5_sync_mux", "abe-clkctrl:0048:24"),
++	DT_CLK(NULL, "timer6_sync_mux", "abe-clkctrl:0050:24"),
++	DT_CLK(NULL, "timer7_sync_mux", "abe-clkctrl:0058:24"),
++	DT_CLK(NULL, "timer8_sync_mux", "abe-clkctrl:0060:24"),
++	DT_CLK(NULL, "trace_clk_div_div_ck", "emu-sys-clkctrl:0000:24"),
++	DT_CLK(NULL, "usb_host_hs_func48mclk", "l3-init-clkctrl:0038:15"),
++	DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3-init-clkctrl:0038:13"),
++	DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3-init-clkctrl:0038:14"),
++	DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3-init-clkctrl:0038:11"),
++	DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3-init-clkctrl:0038:12"),
++	DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3-init-clkctrl:0038:8"),
++	DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3-init-clkctrl:0038:9"),
++	DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3_init-clkctrl:0038:10"),
++	DT_CLK(NULL, "usb_otg_hs_xclk", "l3-init-clkctrl:0040:8"),
++	DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3-init-clkctrl:0048:8"),
++	DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3-init-clkctrl:0048:9"),
++	DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3-init-clkctrl:0048:10"),
++	DT_CLK(NULL, "utmi_p1_gfclk", "l3-init-clkctrl:0038:24"),
++	DT_CLK(NULL, "utmi_p2_gfclk", "l3-init-clkctrl:0038:25"),
+ 	{ .node_name = NULL },
+ };
+ 
+diff --git a/drivers/clk/ti/clk-54xx.c b/drivers/clk/ti/clk-54xx.c
+index 8694bc9f5fc7f..04a5408085acc 100644
+--- a/drivers/clk/ti/clk-54xx.c
++++ b/drivers/clk/ti/clk-54xx.c
+@@ -50,7 +50,7 @@ static const struct omap_clkctrl_bit_data omap5_aess_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap5_dmic_gfclk_parents[] __initconst = {
+-	"abe_cm:clk:0018:26",
++	"abe-clkctrl:0018:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -70,7 +70,7 @@ static const struct omap_clkctrl_bit_data omap5_dmic_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap5_mcbsp1_gfclk_parents[] __initconst = {
+-	"abe_cm:clk:0028:26",
++	"abe-clkctrl:0028:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -83,7 +83,7 @@ static const struct omap_clkctrl_bit_data omap5_mcbsp1_bit_data[] __initconst =
+ };
+ 
+ static const char * const omap5_mcbsp2_gfclk_parents[] __initconst = {
+-	"abe_cm:clk:0030:26",
++	"abe-clkctrl:0030:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -96,7 +96,7 @@ static const struct omap_clkctrl_bit_data omap5_mcbsp2_bit_data[] __initconst =
+ };
+ 
+ static const char * const omap5_mcbsp3_gfclk_parents[] __initconst = {
+-	"abe_cm:clk:0038:26",
++	"abe-clkctrl:0038:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -136,16 +136,16 @@ static const struct omap_clkctrl_bit_data omap5_timer8_bit_data[] __initconst =
+ 
+ static const struct omap_clkctrl_reg_data omap5_abe_clkctrl_regs[] __initconst = {
+ 	{ OMAP5_L4_ABE_CLKCTRL, NULL, 0, "abe_iclk" },
+-	{ OMAP5_AESS_CLKCTRL, omap5_aess_bit_data, CLKF_SW_SUP, "abe_cm:clk:0008:24" },
++	{ OMAP5_AESS_CLKCTRL, omap5_aess_bit_data, CLKF_SW_SUP, "abe-clkctrl:0008:24" },
+ 	{ OMAP5_MCPDM_CLKCTRL, NULL, CLKF_SW_SUP, "pad_clks_ck" },
+-	{ OMAP5_DMIC_CLKCTRL, omap5_dmic_bit_data, CLKF_SW_SUP, "abe_cm:clk:0018:24" },
+-	{ OMAP5_MCBSP1_CLKCTRL, omap5_mcbsp1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0028:24" },
+-	{ OMAP5_MCBSP2_CLKCTRL, omap5_mcbsp2_bit_data, CLKF_SW_SUP, "abe_cm:clk:0030:24" },
+-	{ OMAP5_MCBSP3_CLKCTRL, omap5_mcbsp3_bit_data, CLKF_SW_SUP, "abe_cm:clk:0038:24" },
+-	{ OMAP5_TIMER5_CLKCTRL, omap5_timer5_bit_data, CLKF_SW_SUP, "abe_cm:clk:0048:24" },
+-	{ OMAP5_TIMER6_CLKCTRL, omap5_timer6_bit_data, CLKF_SW_SUP, "abe_cm:clk:0050:24" },
+-	{ OMAP5_TIMER7_CLKCTRL, omap5_timer7_bit_data, CLKF_SW_SUP, "abe_cm:clk:0058:24" },
+-	{ OMAP5_TIMER8_CLKCTRL, omap5_timer8_bit_data, CLKF_SW_SUP, "abe_cm:clk:0060:24" },
++	{ OMAP5_DMIC_CLKCTRL, omap5_dmic_bit_data, CLKF_SW_SUP, "abe-clkctrl:0018:24" },
++	{ OMAP5_MCBSP1_CLKCTRL, omap5_mcbsp1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0028:24" },
++	{ OMAP5_MCBSP2_CLKCTRL, omap5_mcbsp2_bit_data, CLKF_SW_SUP, "abe-clkctrl:0030:24" },
++	{ OMAP5_MCBSP3_CLKCTRL, omap5_mcbsp3_bit_data, CLKF_SW_SUP, "abe-clkctrl:0038:24" },
++	{ OMAP5_TIMER5_CLKCTRL, omap5_timer5_bit_data, CLKF_SW_SUP, "abe-clkctrl:0048:24" },
++	{ OMAP5_TIMER6_CLKCTRL, omap5_timer6_bit_data, CLKF_SW_SUP, "abe-clkctrl:0050:24" },
++	{ OMAP5_TIMER7_CLKCTRL, omap5_timer7_bit_data, CLKF_SW_SUP, "abe-clkctrl:0058:24" },
++	{ OMAP5_TIMER8_CLKCTRL, omap5_timer8_bit_data, CLKF_SW_SUP, "abe-clkctrl:0060:24" },
+ 	{ 0 },
+ };
+ 
+@@ -266,12 +266,12 @@ static const struct omap_clkctrl_bit_data omap5_gpio8_bit_data[] __initconst = {
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap5_l4per_clkctrl_regs[] __initconst = {
+-	{ OMAP5_TIMER10_CLKCTRL, omap5_timer10_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0008:24" },
+-	{ OMAP5_TIMER11_CLKCTRL, omap5_timer11_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0010:24" },
+-	{ OMAP5_TIMER2_CLKCTRL, omap5_timer2_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0018:24" },
+-	{ OMAP5_TIMER3_CLKCTRL, omap5_timer3_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0020:24" },
+-	{ OMAP5_TIMER4_CLKCTRL, omap5_timer4_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0028:24" },
+-	{ OMAP5_TIMER9_CLKCTRL, omap5_timer9_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0030:24" },
++	{ OMAP5_TIMER10_CLKCTRL, omap5_timer10_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0008:24" },
++	{ OMAP5_TIMER11_CLKCTRL, omap5_timer11_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0010:24" },
++	{ OMAP5_TIMER2_CLKCTRL, omap5_timer2_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0018:24" },
++	{ OMAP5_TIMER3_CLKCTRL, omap5_timer3_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0020:24" },
++	{ OMAP5_TIMER4_CLKCTRL, omap5_timer4_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0028:24" },
++	{ OMAP5_TIMER9_CLKCTRL, omap5_timer9_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0030:24" },
+ 	{ OMAP5_GPIO2_CLKCTRL, omap5_gpio2_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ 	{ OMAP5_GPIO3_CLKCTRL, omap5_gpio3_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ 	{ OMAP5_GPIO4_CLKCTRL, omap5_gpio4_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+@@ -343,7 +343,7 @@ static const struct omap_clkctrl_bit_data omap5_dss_core_bit_data[] __initconst
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap5_dss_clkctrl_regs[] __initconst = {
+-	{ OMAP5_DSS_CORE_CLKCTRL, omap5_dss_core_bit_data, CLKF_SW_SUP, "dss_cm:clk:0000:8" },
++	{ OMAP5_DSS_CORE_CLKCTRL, omap5_dss_core_bit_data, CLKF_SW_SUP, "dss-clkctrl:0000:8" },
+ 	{ 0 },
+ };
+ 
+@@ -376,7 +376,7 @@ static const struct omap_clkctrl_bit_data omap5_gpu_core_bit_data[] __initconst
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap5_gpu_clkctrl_regs[] __initconst = {
+-	{ OMAP5_GPU_CLKCTRL, omap5_gpu_core_bit_data, CLKF_SW_SUP, "gpu_cm:clk:0000:24" },
++	{ OMAP5_GPU_CLKCTRL, omap5_gpu_core_bit_data, CLKF_SW_SUP, "gpu-clkctrl:0000:24" },
+ 	{ 0 },
+ };
+ 
+@@ -387,7 +387,7 @@ static const char * const omap5_mmc1_fclk_mux_parents[] __initconst = {
+ };
+ 
+ static const char * const omap5_mmc1_fclk_parents[] __initconst = {
+-	"l3init_cm:clk:0008:24",
++	"l3init-clkctrl:0008:24",
+ 	NULL,
+ };
+ 
+@@ -403,7 +403,7 @@ static const struct omap_clkctrl_bit_data omap5_mmc1_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap5_mmc2_fclk_parents[] __initconst = {
+-	"l3init_cm:clk:0010:24",
++	"l3init-clkctrl:0010:24",
+ 	NULL,
+ };
+ 
+@@ -428,12 +428,12 @@ static const char * const omap5_usb_host_hs_hsic480m_p3_clk_parents[] __initcons
+ };
+ 
+ static const char * const omap5_usb_host_hs_utmi_p1_clk_parents[] __initconst = {
+-	"l3init_cm:clk:0038:24",
++	"l3init-clkctrl:0038:24",
+ 	NULL,
+ };
+ 
+ static const char * const omap5_usb_host_hs_utmi_p2_clk_parents[] __initconst = {
+-	"l3init_cm:clk:0038:25",
++	"l3init-clkctrl:0038:25",
+ 	NULL,
+ };
+ 
+@@ -492,8 +492,8 @@ static const struct omap_clkctrl_bit_data omap5_usb_otg_ss_bit_data[] __initcons
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap5_l3init_clkctrl_regs[] __initconst = {
+-	{ OMAP5_MMC1_CLKCTRL, omap5_mmc1_bit_data, CLKF_SW_SUP, "l3init_cm:clk:0008:25" },
+-	{ OMAP5_MMC2_CLKCTRL, omap5_mmc2_bit_data, CLKF_SW_SUP, "l3init_cm:clk:0010:25" },
++	{ OMAP5_MMC1_CLKCTRL, omap5_mmc1_bit_data, CLKF_SW_SUP, "l3init-clkctrl:0008:25" },
++	{ OMAP5_MMC2_CLKCTRL, omap5_mmc2_bit_data, CLKF_SW_SUP, "l3init-clkctrl:0010:25" },
+ 	{ OMAP5_USB_HOST_HS_CLKCTRL, omap5_usb_host_hs_bit_data, CLKF_SW_SUP, "l3init_60m_fclk" },
+ 	{ OMAP5_USB_TLL_HS_CLKCTRL, omap5_usb_tll_hs_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ 	{ OMAP5_SATA_CLKCTRL, omap5_sata_bit_data, CLKF_SW_SUP, "func_48m_fclk" },
+@@ -517,7 +517,7 @@ static const struct omap_clkctrl_reg_data omap5_wkupaon_clkctrl_regs[] __initcon
+ 	{ OMAP5_L4_WKUP_CLKCTRL, NULL, 0, "wkupaon_iclk_mux" },
+ 	{ OMAP5_WD_TIMER2_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ 	{ OMAP5_GPIO1_CLKCTRL, omap5_gpio1_bit_data, CLKF_HW_SUP, "wkupaon_iclk_mux" },
+-	{ OMAP5_TIMER1_CLKCTRL, omap5_timer1_bit_data, CLKF_SW_SUP, "wkupaon_cm:clk:0020:24" },
++	{ OMAP5_TIMER1_CLKCTRL, omap5_timer1_bit_data, CLKF_SW_SUP, "wkupaon-clkctrl:0020:24" },
+ 	{ OMAP5_COUNTER_32K_CLKCTRL, NULL, 0, "wkupaon_iclk_mux" },
+ 	{ OMAP5_KBD_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ 	{ 0 },
+@@ -547,58 +547,58 @@ const struct omap_clkctrl_data omap5_clkctrl_data[] __initconst = {
+ static struct ti_dt_clk omap54xx_clks[] = {
+ 	DT_CLK(NULL, "timer_32k_ck", "sys_32k_ck"),
+ 	DT_CLK(NULL, "sys_clkin_ck", "sys_clkin"),
+-	DT_CLK(NULL, "dmic_gfclk", "abe_cm:0018:24"),
+-	DT_CLK(NULL, "dmic_sync_mux_ck", "abe_cm:0018:26"),
+-	DT_CLK(NULL, "dss_32khz_clk", "dss_cm:0000:11"),
+-	DT_CLK(NULL, "dss_48mhz_clk", "dss_cm:0000:9"),
+-	DT_CLK(NULL, "dss_dss_clk", "dss_cm:0000:8"),
+-	DT_CLK(NULL, "dss_sys_clk", "dss_cm:0000:10"),
+-	DT_CLK(NULL, "gpio1_dbclk", "wkupaon_cm:0018:8"),
+-	DT_CLK(NULL, "gpio2_dbclk", "l4per_cm:0040:8"),
+-	DT_CLK(NULL, "gpio3_dbclk", "l4per_cm:0048:8"),
+-	DT_CLK(NULL, "gpio4_dbclk", "l4per_cm:0050:8"),
+-	DT_CLK(NULL, "gpio5_dbclk", "l4per_cm:0058:8"),
+-	DT_CLK(NULL, "gpio6_dbclk", "l4per_cm:0060:8"),
+-	DT_CLK(NULL, "gpio7_dbclk", "l4per_cm:00f0:8"),
+-	DT_CLK(NULL, "gpio8_dbclk", "l4per_cm:00f8:8"),
+-	DT_CLK(NULL, "mcbsp1_gfclk", "abe_cm:0028:24"),
+-	DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe_cm:0028:26"),
+-	DT_CLK(NULL, "mcbsp2_gfclk", "abe_cm:0030:24"),
+-	DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe_cm:0030:26"),
+-	DT_CLK(NULL, "mcbsp3_gfclk", "abe_cm:0038:24"),
+-	DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe_cm:0038:26"),
+-	DT_CLK(NULL, "mmc1_32khz_clk", "l3init_cm:0008:8"),
+-	DT_CLK(NULL, "mmc1_fclk", "l3init_cm:0008:25"),
+-	DT_CLK(NULL, "mmc1_fclk_mux", "l3init_cm:0008:24"),
+-	DT_CLK(NULL, "mmc2_fclk", "l3init_cm:0010:25"),
+-	DT_CLK(NULL, "mmc2_fclk_mux", "l3init_cm:0010:24"),
+-	DT_CLK(NULL, "sata_ref_clk", "l3init_cm:0068:8"),
+-	DT_CLK(NULL, "timer10_gfclk_mux", "l4per_cm:0008:24"),
+-	DT_CLK(NULL, "timer11_gfclk_mux", "l4per_cm:0010:24"),
+-	DT_CLK(NULL, "timer1_gfclk_mux", "wkupaon_cm:0020:24"),
+-	DT_CLK(NULL, "timer2_gfclk_mux", "l4per_cm:0018:24"),
+-	DT_CLK(NULL, "timer3_gfclk_mux", "l4per_cm:0020:24"),
+-	DT_CLK(NULL, "timer4_gfclk_mux", "l4per_cm:0028:24"),
+-	DT_CLK(NULL, "timer5_gfclk_mux", "abe_cm:0048:24"),
+-	DT_CLK(NULL, "timer6_gfclk_mux", "abe_cm:0050:24"),
+-	DT_CLK(NULL, "timer7_gfclk_mux", "abe_cm:0058:24"),
+-	DT_CLK(NULL, "timer8_gfclk_mux", "abe_cm:0060:24"),
+-	DT_CLK(NULL, "timer9_gfclk_mux", "l4per_cm:0030:24"),
+-	DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3init_cm:0038:13"),
+-	DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3init_cm:0038:14"),
+-	DT_CLK(NULL, "usb_host_hs_hsic480m_p3_clk", "l3init_cm:0038:7"),
+-	DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3init_cm:0038:11"),
+-	DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3init_cm:0038:12"),
+-	DT_CLK(NULL, "usb_host_hs_hsic60m_p3_clk", "l3init_cm:0038:6"),
+-	DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3init_cm:0038:8"),
+-	DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3init_cm:0038:9"),
+-	DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3init_cm:0038:10"),
+-	DT_CLK(NULL, "usb_otg_ss_refclk960m", "l3init_cm:00d0:8"),
+-	DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3init_cm:0048:8"),
+-	DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3init_cm:0048:9"),
+-	DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3init_cm:0048:10"),
+-	DT_CLK(NULL, "utmi_p1_gfclk", "l3init_cm:0038:24"),
+-	DT_CLK(NULL, "utmi_p2_gfclk", "l3init_cm:0038:25"),
++	DT_CLK(NULL, "dmic_gfclk", "abe-clkctrl:0018:24"),
++	DT_CLK(NULL, "dmic_sync_mux_ck", "abe-clkctrl:0018:26"),
++	DT_CLK(NULL, "dss_32khz_clk", "dss-clkctrl:0000:11"),
++	DT_CLK(NULL, "dss_48mhz_clk", "dss-clkctrl:0000:9"),
++	DT_CLK(NULL, "dss_dss_clk", "dss-clkctrl:0000:8"),
++	DT_CLK(NULL, "dss_sys_clk", "dss-clkctrl:0000:10"),
++	DT_CLK(NULL, "gpio1_dbclk", "wkupaon-clkctrl:0018:8"),
++	DT_CLK(NULL, "gpio2_dbclk", "l4per-clkctrl:0040:8"),
++	DT_CLK(NULL, "gpio3_dbclk", "l4per-clkctrl:0048:8"),
++	DT_CLK(NULL, "gpio4_dbclk", "l4per-clkctrl:0050:8"),
++	DT_CLK(NULL, "gpio5_dbclk", "l4per-clkctrl:0058:8"),
++	DT_CLK(NULL, "gpio6_dbclk", "l4per-clkctrl:0060:8"),
++	DT_CLK(NULL, "gpio7_dbclk", "l4per-clkctrl:00f0:8"),
++	DT_CLK(NULL, "gpio8_dbclk", "l4per-clkctrl:00f8:8"),
++	DT_CLK(NULL, "mcbsp1_gfclk", "abe-clkctrl:0028:24"),
++	DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe-clkctrl:0028:26"),
++	DT_CLK(NULL, "mcbsp2_gfclk", "abe-clkctrl:0030:24"),
++	DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe-clkctrl:0030:26"),
++	DT_CLK(NULL, "mcbsp3_gfclk", "abe-clkctrl:0038:24"),
++	DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe-clkctrl:0038:26"),
++	DT_CLK(NULL, "mmc1_32khz_clk", "l3init-clkctrl:0008:8"),
++	DT_CLK(NULL, "mmc1_fclk", "l3init-clkctrl:0008:25"),
++	DT_CLK(NULL, "mmc1_fclk_mux", "l3init-clkctrl:0008:24"),
++	DT_CLK(NULL, "mmc2_fclk", "l3init-clkctrl:0010:25"),
++	DT_CLK(NULL, "mmc2_fclk_mux", "l3init-clkctrl:0010:24"),
++	DT_CLK(NULL, "sata_ref_clk", "l3init-clkctrl:0068:8"),
++	DT_CLK(NULL, "timer10_gfclk_mux", "l4per-clkctrl:0008:24"),
++	DT_CLK(NULL, "timer11_gfclk_mux", "l4per-clkctrl:0010:24"),
++	DT_CLK(NULL, "timer1_gfclk_mux", "wkupaon-clkctrl:0020:24"),
++	DT_CLK(NULL, "timer2_gfclk_mux", "l4per-clkctrl:0018:24"),
++	DT_CLK(NULL, "timer3_gfclk_mux", "l4per-clkctrl:0020:24"),
++	DT_CLK(NULL, "timer4_gfclk_mux", "l4per-clkctrl:0028:24"),
++	DT_CLK(NULL, "timer5_gfclk_mux", "abe-clkctrl:0048:24"),
++	DT_CLK(NULL, "timer6_gfclk_mux", "abe-clkctrl:0050:24"),
++	DT_CLK(NULL, "timer7_gfclk_mux", "abe-clkctrl:0058:24"),
++	DT_CLK(NULL, "timer8_gfclk_mux", "abe-clkctrl:0060:24"),
++	DT_CLK(NULL, "timer9_gfclk_mux", "l4per-clkctrl:0030:24"),
++	DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3init-clkctrl:0038:13"),
++	DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3init-clkctrl:0038:14"),
++	DT_CLK(NULL, "usb_host_hs_hsic480m_p3_clk", "l3init-clkctrl:0038:7"),
++	DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3init-clkctrl:0038:11"),
++	DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3init-clkctrl:0038:12"),
++	DT_CLK(NULL, "usb_host_hs_hsic60m_p3_clk", "l3init-clkctrl:0038:6"),
++	DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3init-clkctrl:0038:8"),
++	DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3init-clkctrl:0038:9"),
++	DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3init-clkctrl:0038:10"),
++	DT_CLK(NULL, "usb_otg_ss_refclk960m", "l3init-clkctrl:00d0:8"),
++	DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3init-clkctrl:0048:8"),
++	DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3init-clkctrl:0048:9"),
++	DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3init-clkctrl:0048:10"),
++	DT_CLK(NULL, "utmi_p1_gfclk", "l3init-clkctrl:0038:24"),
++	DT_CLK(NULL, "utmi_p2_gfclk", "l3init-clkctrl:0038:25"),
+ 	{ .node_name = NULL },
+ };
+ 
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 864c484bde1b4..08a85c559f795 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -511,10 +511,6 @@ static void __init _ti_omap4_clkctrl_setup(struct device_node *node)
+ 	char *c;
+ 	u16 soc_mask = 0;
+ 
+-	if (!(ti_clk_get_features()->flags & TI_CLK_CLKCTRL_COMPAT) &&
+-	    of_node_name_eq(node, "clk"))
+-		ti_clk_features.flags |= TI_CLK_CLKCTRL_COMPAT;
+-
+ 	addrp = of_get_address(node, 0, NULL, NULL);
+ 	addr = (u32)of_translate_address(node, addrp);
+ 
+diff --git a/drivers/dma/sprd-dma.c b/drivers/dma/sprd-dma.c
+index 4357d2395e6b7..60115d8d40832 100644
+--- a/drivers/dma/sprd-dma.c
++++ b/drivers/dma/sprd-dma.c
+@@ -1236,11 +1236,8 @@ static int sprd_dma_remove(struct platform_device *pdev)
+ {
+ 	struct sprd_dma_dev *sdev = platform_get_drvdata(pdev);
+ 	struct sprd_dma_chn *c, *cn;
+-	int ret;
+ 
+-	ret = pm_runtime_get_sync(&pdev->dev);
+-	if (ret < 0)
+-		return ret;
++	pm_runtime_get_sync(&pdev->dev);
+ 
+ 	/* explicitly free the irq */
+ 	if (sdev->irq > 0)
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index 728fea5094124..2d022f3fb437e 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -116,8 +116,11 @@ static bool meson_vpu_has_available_connectors(struct device *dev)
+ 	for_each_endpoint_of_node(dev->of_node, ep) {
+ 		/* If the endpoint node exists, consider it enabled */
+ 		remote = of_graph_get_remote_port(ep);
+-		if (remote)
++		if (remote) {
++			of_node_put(remote);
++			of_node_put(ep);
+ 			return true;
++		}
+ 	}
+ 
+ 	return false;
+diff --git a/drivers/gpu/drm/meson/meson_viu.c b/drivers/gpu/drm/meson/meson_viu.c
+index 259f3e6bec90a..bb7e109534de1 100644
+--- a/drivers/gpu/drm/meson/meson_viu.c
++++ b/drivers/gpu/drm/meson/meson_viu.c
+@@ -469,17 +469,17 @@ void meson_viu_init(struct meson_drm *priv)
+ 			priv->io_base + _REG(VD2_IF0_LUMA_FIFO_SIZE));
+ 
+ 	if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
+-		writel_relaxed(VIU_OSD_BLEND_REORDER(0, 1) |
+-			       VIU_OSD_BLEND_REORDER(1, 0) |
+-			       VIU_OSD_BLEND_REORDER(2, 0) |
+-			       VIU_OSD_BLEND_REORDER(3, 0) |
+-			       VIU_OSD_BLEND_DIN_EN(1) |
+-			       VIU_OSD_BLEND1_DIN3_BYPASS_TO_DOUT1 |
+-			       VIU_OSD_BLEND1_DOUT_BYPASS_TO_BLEND2 |
+-			       VIU_OSD_BLEND_DIN0_BYPASS_TO_DOUT0 |
+-			       VIU_OSD_BLEND_BLEN2_PREMULT_EN(1) |
+-			       VIU_OSD_BLEND_HOLD_LINES(4),
+-			       priv->io_base + _REG(VIU_OSD_BLEND_CTRL));
++		u32 val = (u32)VIU_OSD_BLEND_REORDER(0, 1) |
++			  (u32)VIU_OSD_BLEND_REORDER(1, 0) |
++			  (u32)VIU_OSD_BLEND_REORDER(2, 0) |
++			  (u32)VIU_OSD_BLEND_REORDER(3, 0) |
++			  (u32)VIU_OSD_BLEND_DIN_EN(1) |
++			  (u32)VIU_OSD_BLEND1_DIN3_BYPASS_TO_DOUT1 |
++			  (u32)VIU_OSD_BLEND1_DOUT_BYPASS_TO_BLEND2 |
++			  (u32)VIU_OSD_BLEND_DIN0_BYPASS_TO_DOUT0 |
++			  (u32)VIU_OSD_BLEND_BLEN2_PREMULT_EN(1) |
++			  (u32)VIU_OSD_BLEND_HOLD_LINES(4);
++		writel_relaxed(val, priv->io_base + _REG(VIU_OSD_BLEND_CTRL));
+ 
+ 		writel_relaxed(OSD_BLEND_PATH_SEL_ENABLE,
+ 			       priv->io_base + _REG(OSD1_BLEND_SRC_CTRL));
+diff --git a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+index 4f5efcace68ea..51edb4244af7c 100644
+--- a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
++++ b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+@@ -531,7 +531,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ 				    struct drm_display_mode *mode)
+ {
+ 	struct mipi_dsi_device *device = dsi->device;
+-	unsigned int Bpp = mipi_dsi_pixel_format_to_bpp(device->format) / 8;
++	int Bpp = mipi_dsi_pixel_format_to_bpp(device->format) / 8;
+ 	u16 hbp = 0, hfp = 0, hsa = 0, hblk = 0, vblk = 0;
+ 	u32 basic_ctl = 0;
+ 	size_t bytes;
+@@ -555,7 +555,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ 		 * (4 bytes). Its minimal size is therefore 10 bytes
+ 		 */
+ #define HSA_PACKET_OVERHEAD	10
+-		hsa = max((unsigned int)HSA_PACKET_OVERHEAD,
++		hsa = max(HSA_PACKET_OVERHEAD,
+ 			  (mode->hsync_end - mode->hsync_start) * Bpp - HSA_PACKET_OVERHEAD);
+ 
+ 		/*
+@@ -564,7 +564,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ 		 * therefore 6 bytes
+ 		 */
+ #define HBP_PACKET_OVERHEAD	6
+-		hbp = max((unsigned int)HBP_PACKET_OVERHEAD,
++		hbp = max(HBP_PACKET_OVERHEAD,
+ 			  (mode->htotal - mode->hsync_end) * Bpp - HBP_PACKET_OVERHEAD);
+ 
+ 		/*
+@@ -574,7 +574,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ 		 * 16 bytes
+ 		 */
+ #define HFP_PACKET_OVERHEAD	16
+-		hfp = max((unsigned int)HFP_PACKET_OVERHEAD,
++		hfp = max(HFP_PACKET_OVERHEAD,
+ 			  (mode->hsync_start - mode->hdisplay) * Bpp - HFP_PACKET_OVERHEAD);
+ 
+ 		/*
+@@ -583,7 +583,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ 		 * bytes). Its minimal size is therefore 10 bytes.
+ 		 */
+ #define HBLK_PACKET_OVERHEAD	10
+-		hblk = max((unsigned int)HBLK_PACKET_OVERHEAD,
++		hblk = max(HBLK_PACKET_OVERHEAD,
+ 			   (mode->htotal - (mode->hsync_end - mode->hsync_start)) * Bpp -
+ 			   HBLK_PACKET_OVERHEAD);
+ 
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 72af4b4d13180..d3719df1c40dc 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -1280,9 +1280,7 @@ static int i2c_imx_remove(struct platform_device *pdev)
+ 	struct imx_i2c_struct *i2c_imx = platform_get_drvdata(pdev);
+ 	int irq, ret;
+ 
+-	ret = pm_runtime_resume_and_get(&pdev->dev);
+-	if (ret < 0)
+-		return ret;
++	ret = pm_runtime_get_sync(&pdev->dev);
+ 
+ 	/* remove adapter */
+ 	dev_dbg(&i2c_imx->adapter.dev, "adapter removed\n");
+@@ -1291,17 +1289,21 @@ static int i2c_imx_remove(struct platform_device *pdev)
+ 	if (i2c_imx->dma)
+ 		i2c_imx_dma_free(i2c_imx);
+ 
+-	/* setup chip registers to defaults */
+-	imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IADR);
+-	imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IFDR);
+-	imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2CR);
+-	imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2SR);
++	if (ret == 0) {
++		/* setup chip registers to defaults */
++		imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IADR);
++		imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IFDR);
++		imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2CR);
++		imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2SR);
++		clk_disable(i2c_imx->clk);
++	}
+ 
+ 	clk_notifier_unregister(i2c_imx->clk, &i2c_imx->clk_change_nb);
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq >= 0)
+ 		free_irq(irq, i2c_imx);
+-	clk_disable_unprepare(i2c_imx->clk);
++
++	clk_unprepare(i2c_imx->clk);
+ 
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
+index f9fb56ec6dfda..dca86422b0a2c 100644
+--- a/drivers/infiniband/sw/rxe/rxe_param.h
++++ b/drivers/infiniband/sw/rxe/rxe_param.h
+@@ -98,6 +98,12 @@ enum rxe_device_param {
+ 	RXE_INFLIGHT_SKBS_PER_QP_HIGH	= 64,
+ 	RXE_INFLIGHT_SKBS_PER_QP_LOW	= 16,
+ 
++	/* Max number of interations of each tasklet
++	 * before yielding the cpu to let other
++	 * work make progress
++	 */
++	RXE_MAX_ITERATIONS		= 1024,
++
+ 	/* Delay before calling arbiter timer */
+ 	RXE_NSEC_ARB_TIMER_DELAY	= 200,
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_task.c b/drivers/infiniband/sw/rxe/rxe_task.c
+index 6951fdcb31bf5..568cf56c236bc 100644
+--- a/drivers/infiniband/sw/rxe/rxe_task.c
++++ b/drivers/infiniband/sw/rxe/rxe_task.c
+@@ -8,7 +8,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/hardirq.h>
+ 
+-#include "rxe_task.h"
++#include "rxe.h"
+ 
+ int __rxe_do_task(struct rxe_task *task)
+ 
+@@ -34,6 +34,7 @@ void rxe_do_task(struct tasklet_struct *t)
+ 	int ret;
+ 	unsigned long flags;
+ 	struct rxe_task *task = from_tasklet(task, t, tasklet);
++	unsigned int iterations = RXE_MAX_ITERATIONS;
+ 
+ 	spin_lock_irqsave(&task->state_lock, flags);
+ 	switch (task->state) {
+@@ -62,13 +63,20 @@ void rxe_do_task(struct tasklet_struct *t)
+ 		spin_lock_irqsave(&task->state_lock, flags);
+ 		switch (task->state) {
+ 		case TASK_STATE_BUSY:
+-			if (ret)
++			if (ret) {
+ 				task->state = TASK_STATE_START;
+-			else
++			} else if (iterations--) {
+ 				cont = 1;
++			} else {
++				/* reschedule the tasklet and exit
++				 * the loop to give up the cpu
++				 */
++				tasklet_schedule(&task->tasklet);
++				task->state = TASK_STATE_START;
++			}
+ 			break;
+ 
+-		/* soneone tried to run the task since the last time we called
++		/* someone tried to run the task since the last time we called
+ 		 * func, so we will call one more time regardless of the
+ 		 * return value
+ 		 */
+diff --git a/drivers/irqchip/irq-tegra.c b/drivers/irqchip/irq-tegra.c
+index e1f771c72fc4c..ad3e2c1b3c87b 100644
+--- a/drivers/irqchip/irq-tegra.c
++++ b/drivers/irqchip/irq-tegra.c
+@@ -148,10 +148,10 @@ static int tegra_ictlr_suspend(void)
+ 		lic->cop_iep[i] = readl_relaxed(ictlr + ICTLR_COP_IEP_CLASS);
+ 
+ 		/* Disable COP interrupts */
+-		writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
++		writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
+ 
+ 		/* Disable CPU interrupts */
+-		writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
++		writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
+ 
+ 		/* Enable the wakeup sources of ictlr */
+ 		writel_relaxed(lic->ictlr_wake_mask[i], ictlr + ICTLR_CPU_IER_SET);
+@@ -172,12 +172,12 @@ static void tegra_ictlr_resume(void)
+ 
+ 		writel_relaxed(lic->cpu_iep[i],
+ 			       ictlr + ICTLR_CPU_IEP_CLASS);
+-		writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
++		writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
+ 		writel_relaxed(lic->cpu_ier[i],
+ 			       ictlr + ICTLR_CPU_IER_SET);
+ 		writel_relaxed(lic->cop_iep[i],
+ 			       ictlr + ICTLR_COP_IEP_CLASS);
+-		writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
++		writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
+ 		writel_relaxed(lic->cop_ier[i],
+ 			       ictlr + ICTLR_COP_IER_SET);
+ 	}
+@@ -312,7 +312,7 @@ static int __init tegra_ictlr_init(struct device_node *node,
+ 		lic->base[i] = base;
+ 
+ 		/* Disable all interrupts */
+-		writel_relaxed(~0UL, base + ICTLR_CPU_IER_CLR);
++		writel_relaxed(GENMASK(31, 0), base + ICTLR_CPU_IER_CLR);
+ 		/* All interrupts target IRQ */
+ 		writel_relaxed(0, base + ICTLR_CPU_IEP_CLASS);
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 4463ef3e3729b..884317ee1759f 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -9424,6 +9424,7 @@ void md_reap_sync_thread(struct mddev *mddev)
+ 	wake_up(&resync_wait);
+ 	/* flag recovery needed just to double check */
+ 	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
++	sysfs_notify_dirent_safe(mddev->sysfs_completed);
+ 	sysfs_notify_dirent_safe(mddev->sysfs_action);
+ 	md_new_event(mddev);
+ 	if (mddev->event_work.func)
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index c8cafdb094aaa..01c7edf329367 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -2864,10 +2864,10 @@ static void raid5_end_write_request(struct bio *bi)
+ 	if (!test_and_clear_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags))
+ 		clear_bit(R5_LOCKED, &sh->dev[i].flags);
+ 	set_bit(STRIPE_HANDLE, &sh->state);
+-	raid5_release_stripe(sh);
+ 
+ 	if (sh->batch_head && sh != sh->batch_head)
+ 		raid5_release_stripe(sh->batch_head);
++	raid5_release_stripe(sh);
+ }
+ 
+ static void raid5_error(struct mddev *mddev, struct md_rdev *rdev)
+diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
+index 4cb829d5d873c..2e4dcfebf19af 100644
+--- a/drivers/misc/cxl/irq.c
++++ b/drivers/misc/cxl/irq.c
+@@ -349,6 +349,7 @@ int afu_allocate_irqs(struct cxl_context *ctx, u32 count)
+ 
+ out:
+ 	cxl_ops->release_irq_ranges(&ctx->irqs, ctx->afu->adapter);
++	bitmap_free(ctx->irq_bitmap);
+ 	afu_irq_name_free(ctx);
+ 	return -ENOMEM;
+ }
+diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
+index 56dd98ab5a814..95e56eb2cdd06 100644
+--- a/drivers/misc/uacce/uacce.c
++++ b/drivers/misc/uacce/uacce.c
+@@ -9,43 +9,38 @@
+ 
+ static struct class *uacce_class;
+ static dev_t uacce_devt;
+-static DEFINE_MUTEX(uacce_mutex);
+ static DEFINE_XARRAY_ALLOC(uacce_xa);
+ 
+-static int uacce_start_queue(struct uacce_queue *q)
++/*
++ * If the parent driver or the device disappears, the queue state is invalid and
++ * ops are not usable anymore.
++ */
++static bool uacce_queue_is_valid(struct uacce_queue *q)
+ {
+-	int ret = 0;
++	return q->state == UACCE_Q_INIT || q->state == UACCE_Q_STARTED;
++}
+ 
+-	mutex_lock(&uacce_mutex);
++static int uacce_start_queue(struct uacce_queue *q)
++{
++	int ret;
+ 
+-	if (q->state != UACCE_Q_INIT) {
+-		ret = -EINVAL;
+-		goto out_with_lock;
+-	}
++	if (q->state != UACCE_Q_INIT)
++		return -EINVAL;
+ 
+ 	if (q->uacce->ops->start_queue) {
+ 		ret = q->uacce->ops->start_queue(q);
+ 		if (ret < 0)
+-			goto out_with_lock;
++			return ret;
+ 	}
+ 
+ 	q->state = UACCE_Q_STARTED;
+-
+-out_with_lock:
+-	mutex_unlock(&uacce_mutex);
+-
+-	return ret;
++	return 0;
+ }
+ 
+ static int uacce_put_queue(struct uacce_queue *q)
+ {
+ 	struct uacce_device *uacce = q->uacce;
+ 
+-	mutex_lock(&uacce_mutex);
+-
+-	if (q->state == UACCE_Q_ZOMBIE)
+-		goto out;
+-
+ 	if ((q->state == UACCE_Q_STARTED) && uacce->ops->stop_queue)
+ 		uacce->ops->stop_queue(q);
+ 
+@@ -54,8 +49,6 @@ static int uacce_put_queue(struct uacce_queue *q)
+ 		uacce->ops->put_queue(q);
+ 
+ 	q->state = UACCE_Q_ZOMBIE;
+-out:
+-	mutex_unlock(&uacce_mutex);
+ 
+ 	return 0;
+ }
+@@ -65,20 +58,36 @@ static long uacce_fops_unl_ioctl(struct file *filep,
+ {
+ 	struct uacce_queue *q = filep->private_data;
+ 	struct uacce_device *uacce = q->uacce;
++	long ret = -ENXIO;
++
++	/*
++	 * uacce->ops->ioctl() may take the mmap_lock when copying arg to/from
++	 * user. Avoid a circular lock dependency with uacce_fops_mmap(), which
++	 * gets called with mmap_lock held, by taking uacce->mutex instead of
++	 * q->mutex. Doing this in uacce_fops_mmap() is not possible because
++	 * uacce_fops_open() calls iommu_sva_bind_device(), which takes
++	 * mmap_lock, while holding uacce->mutex.
++	 */
++	mutex_lock(&uacce->mutex);
++	if (!uacce_queue_is_valid(q))
++		goto out_unlock;
+ 
+ 	switch (cmd) {
+ 	case UACCE_CMD_START_Q:
+-		return uacce_start_queue(q);
+-
++		ret = uacce_start_queue(q);
++		break;
+ 	case UACCE_CMD_PUT_Q:
+-		return uacce_put_queue(q);
+-
++		ret = uacce_put_queue(q);
++		break;
+ 	default:
+-		if (!uacce->ops->ioctl)
+-			return -EINVAL;
+-
+-		return uacce->ops->ioctl(q, cmd, arg);
++		if (uacce->ops->ioctl)
++			ret = uacce->ops->ioctl(q, cmd, arg);
++		else
++			ret = -EINVAL;
+ 	}
++out_unlock:
++	mutex_unlock(&uacce->mutex);
++	return ret;
+ }
+ 
+ #ifdef CONFIG_COMPAT
+@@ -136,6 +145,13 @@ static int uacce_fops_open(struct inode *inode, struct file *filep)
+ 	if (!q)
+ 		return -ENOMEM;
+ 
++	mutex_lock(&uacce->mutex);
++
++	if (!uacce->parent) {
++		ret = -EINVAL;
++		goto out_with_mem;
++	}
++
+ 	ret = uacce_bind_queue(uacce, q);
+ 	if (ret)
+ 		goto out_with_mem;
+@@ -152,10 +168,9 @@ static int uacce_fops_open(struct inode *inode, struct file *filep)
+ 	filep->private_data = q;
+ 	uacce->inode = inode;
+ 	q->state = UACCE_Q_INIT;
+-
+-	mutex_lock(&uacce->queues_lock);
++	mutex_init(&q->mutex);
+ 	list_add(&q->list, &uacce->queues);
+-	mutex_unlock(&uacce->queues_lock);
++	mutex_unlock(&uacce->mutex);
+ 
+ 	return 0;
+ 
+@@ -163,18 +178,20 @@ out_with_bond:
+ 	uacce_unbind_queue(q);
+ out_with_mem:
+ 	kfree(q);
++	mutex_unlock(&uacce->mutex);
+ 	return ret;
+ }
+ 
+ static int uacce_fops_release(struct inode *inode, struct file *filep)
+ {
+ 	struct uacce_queue *q = filep->private_data;
++	struct uacce_device *uacce = q->uacce;
+ 
+-	mutex_lock(&q->uacce->queues_lock);
+-	list_del(&q->list);
+-	mutex_unlock(&q->uacce->queues_lock);
++	mutex_lock(&uacce->mutex);
+ 	uacce_put_queue(q);
+ 	uacce_unbind_queue(q);
++	list_del(&q->list);
++	mutex_unlock(&uacce->mutex);
+ 	kfree(q);
+ 
+ 	return 0;
+@@ -217,10 +234,9 @@ static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
+ 	vma->vm_private_data = q;
+ 	qfr->type = type;
+ 
+-	mutex_lock(&uacce_mutex);
+-
+-	if (q->state != UACCE_Q_INIT && q->state != UACCE_Q_STARTED) {
+-		ret = -EINVAL;
++	mutex_lock(&q->mutex);
++	if (!uacce_queue_is_valid(q)) {
++		ret = -ENXIO;
+ 		goto out_with_lock;
+ 	}
+ 
+@@ -259,12 +275,12 @@ static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
+ 	}
+ 
+ 	q->qfrs[type] = qfr;
+-	mutex_unlock(&uacce_mutex);
++	mutex_unlock(&q->mutex);
+ 
+ 	return ret;
+ 
+ out_with_lock:
+-	mutex_unlock(&uacce_mutex);
++	mutex_unlock(&q->mutex);
+ 	kfree(qfr);
+ 	return ret;
+ }
+@@ -273,12 +289,20 @@ static __poll_t uacce_fops_poll(struct file *file, poll_table *wait)
+ {
+ 	struct uacce_queue *q = file->private_data;
+ 	struct uacce_device *uacce = q->uacce;
++	__poll_t ret = 0;
++
++	mutex_lock(&q->mutex);
++	if (!uacce_queue_is_valid(q))
++		goto out_unlock;
+ 
+ 	poll_wait(file, &q->wait, wait);
++
+ 	if (uacce->ops->is_q_updated && uacce->ops->is_q_updated(q))
+-		return EPOLLIN | EPOLLRDNORM;
++		ret = EPOLLIN | EPOLLRDNORM;
+ 
+-	return 0;
++out_unlock:
++	mutex_unlock(&q->mutex);
++	return ret;
+ }
+ 
+ static const struct file_operations uacce_fops = {
+@@ -431,7 +455,7 @@ struct uacce_device *uacce_alloc(struct device *parent,
+ 		goto err_with_uacce;
+ 
+ 	INIT_LIST_HEAD(&uacce->queues);
+-	mutex_init(&uacce->queues_lock);
++	mutex_init(&uacce->mutex);
+ 	device_initialize(&uacce->dev);
+ 	uacce->dev.devt = MKDEV(MAJOR(uacce_devt), uacce->dev_id);
+ 	uacce->dev.class = uacce_class;
+@@ -489,13 +513,23 @@ void uacce_remove(struct uacce_device *uacce)
+ 	if (uacce->inode)
+ 		unmap_mapping_range(uacce->inode->i_mapping, 0, 0, 1);
+ 
++	/*
++	 * uacce_fops_open() may be running concurrently, even after we remove
++	 * the cdev. Holding uacce->mutex ensures that open() does not obtain a
++	 * removed uacce device.
++	 */
++	mutex_lock(&uacce->mutex);
+ 	/* ensure no open queue remains */
+-	mutex_lock(&uacce->queues_lock);
+ 	list_for_each_entry_safe(q, next_q, &uacce->queues, list) {
++		/*
++		 * Taking q->mutex ensures that fops do not use the defunct
++		 * uacce->ops after the queue is disabled.
++		 */
++		mutex_lock(&q->mutex);
+ 		uacce_put_queue(q);
++		mutex_unlock(&q->mutex);
+ 		uacce_unbind_queue(q);
+ 	}
+-	mutex_unlock(&uacce->queues_lock);
+ 
+ 	/* disable sva now since no opened queues */
+ 	if (uacce->flags & UACCE_DEV_SVA)
+@@ -504,6 +538,13 @@ void uacce_remove(struct uacce_device *uacce)
+ 	if (uacce->cdev)
+ 		cdev_device_del(uacce->cdev, &uacce->dev);
+ 	xa_erase(&uacce_xa, uacce->dev_id);
++	/*
++	 * uacce exists as long as there are open fds, but ops will be freed
++	 * now. Ensure that bugs cause NULL deref rather than use-after-free.
++	 */
++	uacce->ops = NULL;
++	uacce->parent = NULL;
++	mutex_unlock(&uacce->mutex);
+ 	put_device(&uacce->dev);
+ }
+ EXPORT_SYMBOL_GPL(uacce_remove);
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index 091e0e051d109..bccc85b3fc50a 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -1161,8 +1161,10 @@ static int meson_mmc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	ret = device_reset_optional(&pdev->dev);
+-	if (ret)
+-		return dev_err_probe(&pdev->dev, ret, "device reset failed\n");
++	if (ret) {
++		dev_err_probe(&pdev->dev, ret, "device reset failed\n");
++		goto free_host;
++	}
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	host->regs = devm_ioremap_resource(&pdev->dev, res);
+diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c
+index 316393c694d7a..55868b6b86583 100644
+--- a/drivers/mmc/host/pxamci.c
++++ b/drivers/mmc/host/pxamci.c
+@@ -648,7 +648,7 @@ static int pxamci_probe(struct platform_device *pdev)
+ 
+ 	ret = pxamci_of_init(pdev, mmc);
+ 	if (ret)
+-		return ret;
++		goto out;
+ 
+ 	host = mmc_priv(mmc);
+ 	host->mmc = mmc;
+@@ -672,7 +672,7 @@ static int pxamci_probe(struct platform_device *pdev)
+ 
+ 	ret = pxamci_init_ocr(host);
+ 	if (ret < 0)
+-		return ret;
++		goto out;
+ 
+ 	mmc->caps = 0;
+ 	host->cmdat = 0;
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index 89897a2d41fa6..5dde3c42d241b 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -1074,9 +1074,6 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
+ 
+ 		mcp251x_read_2regs(spi, CANINTF, &intf, &eflag);
+ 
+-		/* mask out flags we don't care about */
+-		intf &= CANINTF_RX | CANINTF_TX | CANINTF_ERR;
+-
+ 		/* receive buffer 0 */
+ 		if (intf & CANINTF_RX0IF) {
+ 			mcp251x_hw_rx(spi, 0);
+@@ -1086,6 +1083,18 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
+ 			if (mcp251x_is_2510(spi))
+ 				mcp251x_write_bits(spi, CANINTF,
+ 						   CANINTF_RX0IF, 0x00);
++
++			/* check if buffer 1 is already known to be full, no need to re-read */
++			if (!(intf & CANINTF_RX1IF)) {
++				u8 intf1, eflag1;
++
++				/* intf needs to be read again to avoid a race condition */
++				mcp251x_read_2regs(spi, CANINTF, &intf1, &eflag1);
++
++				/* combine flags from both operations for error handling */
++				intf |= intf1;
++				eflag |= eflag1;
++			}
+ 		}
+ 
+ 		/* receive buffer 1 */
+@@ -1096,6 +1105,9 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
+ 				clear_intf |= CANINTF_RX1IF;
+ 		}
+ 
++		/* mask out flags we don't care about */
++		intf &= CANINTF_RX | CANINTF_TX | CANINTF_ERR;
++
+ 		/* any error or tx interrupt we need to clear? */
+ 		if (intf & (CANINTF_ERR | CANINTF_TX))
+ 			clear_intf |= intf & (CANINTF_ERR | CANINTF_TX);
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index 6458da9c13b95..ff05b5230f0b8 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -194,7 +194,7 @@ struct __packed ems_cpc_msg {
+ 	__le32 ts_sec;	/* timestamp in seconds */
+ 	__le32 ts_nsec;	/* timestamp in nano seconds */
+ 
+-	union {
++	union __packed {
+ 		u8 generic[64];
+ 		struct cpc_can_msg can_msg;
+ 		struct cpc_can_params can_params;
+diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
+index b3aa99eb6c2c5..ece4c0512ee2d 100644
+--- a/drivers/net/dsa/microchip/ksz9477.c
++++ b/drivers/net/dsa/microchip/ksz9477.c
+@@ -762,6 +762,9 @@ static int ksz9477_port_fdb_dump(struct dsa_switch *ds, int port,
+ 			goto exit;
+ 		}
+ 
++		if (!(ksz_data & ALU_VALID))
++			continue;
++
+ 		/* read ALU table */
+ 		ksz9477_read_table(dev, alu_table);
+ 
+diff --git a/drivers/net/dsa/mv88e6060.c b/drivers/net/dsa/mv88e6060.c
+index 24b8219fd6077..dafddf8000d04 100644
+--- a/drivers/net/dsa/mv88e6060.c
++++ b/drivers/net/dsa/mv88e6060.c
+@@ -118,6 +118,9 @@ static int mv88e6060_setup_port(struct mv88e6060_priv *priv, int p)
+ 	int addr = REG_PORT(p);
+ 	int ret;
+ 
++	if (dsa_is_unused_port(priv->ds, p))
++		return 0;
++
+ 	/* Do not force flow control, disable Ingress and Egress
+ 	 * Header tagging, disable VLAN tunneling, and set the port
+ 	 * state to Forwarding.  Additionally, if this is the CPU
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index c96dfc11aa6fc..161a5eac60d62 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -578,7 +578,8 @@ static const struct ocelot_stat_layout vsc9959_stats_layout[] = {
+ 	{ .offset = 0x87,	.name = "tx_frames_below_65_octets", },
+ 	{ .offset = 0x88,	.name = "tx_frames_65_to_127_octets", },
+ 	{ .offset = 0x89,	.name = "tx_frames_128_255_octets", },
+-	{ .offset = 0x8B,	.name = "tx_frames_256_511_octets", },
++	{ .offset = 0x8A,	.name = "tx_frames_256_511_octets", },
++	{ .offset = 0x8B,	.name = "tx_frames_512_1023_octets", },
+ 	{ .offset = 0x8C,	.name = "tx_frames_1024_1526_octets", },
+ 	{ .offset = 0x8D,	.name = "tx_frames_over_1526_octets", },
+ 	{ .offset = 0x8E,	.name = "tx_yellow_prio_0", },
+diff --git a/drivers/net/dsa/sja1105/sja1105_devlink.c b/drivers/net/dsa/sja1105/sja1105_devlink.c
+index 4a2ec395bcb00..ec2ac91abcfa4 100644
+--- a/drivers/net/dsa/sja1105/sja1105_devlink.c
++++ b/drivers/net/dsa/sja1105/sja1105_devlink.c
+@@ -93,7 +93,7 @@ static int sja1105_setup_devlink_regions(struct dsa_switch *ds)
+ 
+ 		region = dsa_devlink_region_create(ds, ops, 1, size);
+ 		if (IS_ERR(region)) {
+-			while (i-- >= 0)
++			while (--i >= 0)
+ 				dsa_devlink_region_destroy(priv->regions[i]);
+ 			return PTR_ERR(region);
+ 		}
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index 2fb4126ae8d8a..2d491efa11bdf 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -265,12 +265,10 @@ static void aq_nic_service_timer_cb(struct timer_list *t)
+ static void aq_nic_polling_timer_cb(struct timer_list *t)
+ {
+ 	struct aq_nic_s *self = from_timer(self, t, polling_timer);
+-	struct aq_vec_s *aq_vec = NULL;
+ 	unsigned int i = 0U;
+ 
+-	for (i = 0U, aq_vec = self->aq_vec[0];
+-		self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i])
+-		aq_vec_isr(i, (void *)aq_vec);
++	for (i = 0U; self->aq_vecs > i; ++i)
++		aq_vec_isr(i, (void *)self->aq_vec[i]);
+ 
+ 	mod_timer(&self->polling_timer, jiffies +
+ 		  AQ_CFG_POLLING_TIMER_INTERVAL);
+@@ -872,7 +870,6 @@ int aq_nic_get_regs_count(struct aq_nic_s *self)
+ 
+ u64 *aq_nic_get_stats(struct aq_nic_s *self, u64 *data)
+ {
+-	struct aq_vec_s *aq_vec = NULL;
+ 	struct aq_stats_s *stats;
+ 	unsigned int count = 0U;
+ 	unsigned int i = 0U;
+@@ -922,11 +919,11 @@ u64 *aq_nic_get_stats(struct aq_nic_s *self, u64 *data)
+ 	data += i;
+ 
+ 	for (tc = 0U; tc < self->aq_nic_cfg.tcs; tc++) {
+-		for (i = 0U, aq_vec = self->aq_vec[0];
+-		     aq_vec && self->aq_vecs > i;
+-		     ++i, aq_vec = self->aq_vec[i]) {
++		for (i = 0U; self->aq_vecs > i; ++i) {
++			if (!self->aq_vec[i])
++				break;
+ 			data += count;
+-			count = aq_vec_get_sw_stats(aq_vec, tc, data);
++			count = aq_vec_get_sw_stats(self->aq_vec[i], tc, data);
+ 		}
+ 	}
+ 
+@@ -1240,7 +1237,6 @@ int aq_nic_set_loopback(struct aq_nic_s *self)
+ 
+ int aq_nic_stop(struct aq_nic_s *self)
+ {
+-	struct aq_vec_s *aq_vec = NULL;
+ 	unsigned int i = 0U;
+ 
+ 	netif_tx_disable(self->ndev);
+@@ -1258,9 +1254,8 @@ int aq_nic_stop(struct aq_nic_s *self)
+ 
+ 	aq_ptp_irq_free(self);
+ 
+-	for (i = 0U, aq_vec = self->aq_vec[0];
+-		self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i])
+-		aq_vec_stop(aq_vec);
++	for (i = 0U; self->aq_vecs > i; ++i)
++		aq_vec_stop(self->aq_vec[i]);
+ 
+ 	aq_ptp_ring_stop(self);
+ 
+diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
+index 98ec1b8a7d8e5..6290d8bedc92e 100644
+--- a/drivers/net/ethernet/broadcom/bgmac.c
++++ b/drivers/net/ethernet/broadcom/bgmac.c
+@@ -189,8 +189,8 @@ static netdev_tx_t bgmac_dma_tx_add(struct bgmac *bgmac,
+ 	}
+ 
+ 	slot->skb = skb;
+-	ring->end += nr_frags + 1;
+ 	netdev_sent_queue(net_dev, skb->len);
++	ring->end += nr_frags + 1;
+ 
+ 	wmb();
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index d89ddc165ec24..35401202523ef 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -1349,8 +1349,8 @@ static int dpaa2_eth_add_bufs(struct dpaa2_eth_priv *priv,
+ 		buf_array[i] = addr;
+ 
+ 		/* tracing point */
+-		trace_dpaa2_eth_buf_seed(priv->net_dev,
+-					 page, DPAA2_ETH_RX_BUF_RAW_SIZE,
++		trace_dpaa2_eth_buf_seed(priv->net_dev, page_address(page),
++					 DPAA2_ETH_RX_BUF_RAW_SIZE,
+ 					 addr, priv->rx_buf_size,
+ 					 bpid);
+ 	}
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index d71eac7e19249..c5ae673005908 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -136,11 +136,7 @@ static int fec_ptp_enable_pps(struct fec_enet_private *fep, uint enable)
+ 		 * NSEC_PER_SEC - ts.tv_nsec. Add the remaining nanoseconds
+ 		 * to current timer would be next second.
+ 		 */
+-		tempval = readl(fep->hwp + FEC_ATIME_CTRL);
+-		tempval |= FEC_T_CTRL_CAPTURE;
+-		writel(tempval, fep->hwp + FEC_ATIME_CTRL);
+-
+-		tempval = readl(fep->hwp + FEC_ATIME);
++		tempval = fep->cc.read(&fep->cc);
+ 		/* Convert the ptp local counter to 1588 timestamp */
+ 		ns = timecounter_cyc2time(&fep->tc, tempval);
+ 		ts = ns_to_timespec64(ns);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 1dad62ecb8a3a..97009cbea7793 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -382,7 +382,9 @@ static void i40e_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+ 		set_bit(__I40E_GLOBAL_RESET_REQUESTED, pf->state);
+ 		break;
+ 	default:
+-		netdev_err(netdev, "tx_timeout recovery unsuccessful\n");
++		netdev_err(netdev, "tx_timeout recovery unsuccessful, device is in non-recoverable state.\n");
++		set_bit(__I40E_DOWN_REQUESTED, pf->state);
++		set_bit(__I40E_VSI_DOWN_REQUESTED, vsi->state);
+ 		break;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_adminq.c b/drivers/net/ethernet/intel/iavf/iavf_adminq.c
+index 9fa3fa99b4c20..897b349cdaf1c 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_adminq.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_adminq.c
+@@ -324,6 +324,7 @@ static enum iavf_status iavf_config_arq_regs(struct iavf_hw *hw)
+ static enum iavf_status iavf_init_asq(struct iavf_hw *hw)
+ {
+ 	enum iavf_status ret_code = 0;
++	int i;
+ 
+ 	if (hw->aq.asq.count > 0) {
+ 		/* queue already initialized */
+@@ -354,12 +355,17 @@ static enum iavf_status iavf_init_asq(struct iavf_hw *hw)
+ 	/* initialize base registers */
+ 	ret_code = iavf_config_asq_regs(hw);
+ 	if (ret_code)
+-		goto init_adminq_free_rings;
++		goto init_free_asq_bufs;
+ 
+ 	/* success! */
+ 	hw->aq.asq.count = hw->aq.num_asq_entries;
+ 	goto init_adminq_exit;
+ 
++init_free_asq_bufs:
++	for (i = 0; i < hw->aq.num_asq_entries; i++)
++		iavf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
++	iavf_free_virt_mem(hw, &hw->aq.asq.dma_head);
++
+ init_adminq_free_rings:
+ 	iavf_free_adminq_asq(hw);
+ 
+@@ -383,6 +389,7 @@ init_adminq_exit:
+ static enum iavf_status iavf_init_arq(struct iavf_hw *hw)
+ {
+ 	enum iavf_status ret_code = 0;
++	int i;
+ 
+ 	if (hw->aq.arq.count > 0) {
+ 		/* queue already initialized */
+@@ -413,12 +420,16 @@ static enum iavf_status iavf_init_arq(struct iavf_hw *hw)
+ 	/* initialize base registers */
+ 	ret_code = iavf_config_arq_regs(hw);
+ 	if (ret_code)
+-		goto init_adminq_free_rings;
++		goto init_free_arq_bufs;
+ 
+ 	/* success! */
+ 	hw->aq.arq.count = hw->aq.num_arq_entries;
+ 	goto init_adminq_exit;
+ 
++init_free_arq_bufs:
++	for (i = 0; i < hw->aq.num_arq_entries; i++)
++		iavf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
++	iavf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+ init_adminq_free_rings:
+ 	iavf_free_adminq_arq(hw);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 5ce8590cdb374..0155c45d9d7f0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -2590,7 +2590,7 @@ ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+ 		else
+ 			status = ice_set_vsi_promisc(hw, vsi_handle,
+ 						     promisc_mask, vlan_id);
+-		if (status)
++		if (status && status != -EEXIST)
+ 			break;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
+index 7bda8c5edea5d..e6d2800a8abc5 100644
+--- a/drivers/net/ethernet/intel/igb/igb.h
++++ b/drivers/net/ethernet/intel/igb/igb.h
+@@ -664,6 +664,8 @@ struct igb_adapter {
+ 	struct igb_mac_addr *mac_table;
+ 	struct vf_mac_filter vf_macs;
+ 	struct vf_mac_filter *vf_mac_list;
++	/* lock for VF resources */
++	spinlock_t vfs_lock;
+ };
+ 
+ /* flags controlling PTP/1588 function */
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 4e51f4bb58ffc..327196d15a6ae 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -3638,6 +3638,7 @@ static int igb_disable_sriov(struct pci_dev *pdev)
+ 	struct net_device *netdev = pci_get_drvdata(pdev);
+ 	struct igb_adapter *adapter = netdev_priv(netdev);
+ 	struct e1000_hw *hw = &adapter->hw;
++	unsigned long flags;
+ 
+ 	/* reclaim resources allocated to VFs */
+ 	if (adapter->vf_data) {
+@@ -3650,12 +3651,13 @@ static int igb_disable_sriov(struct pci_dev *pdev)
+ 			pci_disable_sriov(pdev);
+ 			msleep(500);
+ 		}
+-
++		spin_lock_irqsave(&adapter->vfs_lock, flags);
+ 		kfree(adapter->vf_mac_list);
+ 		adapter->vf_mac_list = NULL;
+ 		kfree(adapter->vf_data);
+ 		adapter->vf_data = NULL;
+ 		adapter->vfs_allocated_count = 0;
++		spin_unlock_irqrestore(&adapter->vfs_lock, flags);
+ 		wr32(E1000_IOVCTL, E1000_IOVCTL_REUSE_VFQ);
+ 		wrfl();
+ 		msleep(100);
+@@ -3815,7 +3817,9 @@ static void igb_remove(struct pci_dev *pdev)
+ 	igb_release_hw_control(adapter);
+ 
+ #ifdef CONFIG_PCI_IOV
++	rtnl_lock();
+ 	igb_disable_sriov(pdev);
++	rtnl_unlock();
+ #endif
+ 
+ 	unregister_netdev(netdev);
+@@ -3975,6 +3979,9 @@ static int igb_sw_init(struct igb_adapter *adapter)
+ 
+ 	spin_lock_init(&adapter->nfc_lock);
+ 	spin_lock_init(&adapter->stats64_lock);
++
++	/* init spinlock to avoid concurrency of VF resources */
++	spin_lock_init(&adapter->vfs_lock);
+ #ifdef CONFIG_PCI_IOV
+ 	switch (hw->mac.type) {
+ 	case e1000_82576:
+@@ -7852,8 +7859,10 @@ unlock:
+ static void igb_msg_task(struct igb_adapter *adapter)
+ {
+ 	struct e1000_hw *hw = &adapter->hw;
++	unsigned long flags;
+ 	u32 vf;
+ 
++	spin_lock_irqsave(&adapter->vfs_lock, flags);
+ 	for (vf = 0; vf < adapter->vfs_allocated_count; vf++) {
+ 		/* process any reset requests */
+ 		if (!igb_check_for_rst(hw, vf))
+@@ -7867,6 +7876,7 @@ static void igb_msg_task(struct igb_adapter *adapter)
+ 		if (!igb_check_for_ack(hw, vf))
+ 			igb_rcv_ack_from_vf(adapter, vf);
+ 	}
++	spin_unlock_irqrestore(&adapter->vfs_lock, flags);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
+index a0e1ccca505b7..6137000b11c5c 100644
+--- a/drivers/net/ethernet/moxa/moxart_ether.c
++++ b/drivers/net/ethernet/moxa/moxart_ether.c
+@@ -77,7 +77,7 @@ static void moxart_mac_free_memory(struct net_device *ndev)
+ 	int i;
+ 
+ 	for (i = 0; i < RX_DESC_NUM; i++)
+-		dma_unmap_single(&ndev->dev, priv->rx_mapping[i],
++		dma_unmap_single(&priv->pdev->dev, priv->rx_mapping[i],
+ 				 priv->rx_buf_size, DMA_FROM_DEVICE);
+ 
+ 	if (priv->tx_desc_base)
+@@ -147,11 +147,11 @@ static void moxart_mac_setup_desc_ring(struct net_device *ndev)
+ 		       desc + RX_REG_OFFSET_DESC1);
+ 
+ 		priv->rx_buf[i] = priv->rx_buf_base + priv->rx_buf_size * i;
+-		priv->rx_mapping[i] = dma_map_single(&ndev->dev,
++		priv->rx_mapping[i] = dma_map_single(&priv->pdev->dev,
+ 						     priv->rx_buf[i],
+ 						     priv->rx_buf_size,
+ 						     DMA_FROM_DEVICE);
+-		if (dma_mapping_error(&ndev->dev, priv->rx_mapping[i]))
++		if (dma_mapping_error(&priv->pdev->dev, priv->rx_mapping[i]))
+ 			netdev_err(ndev, "DMA mapping error\n");
+ 
+ 		moxart_desc_write(priv->rx_mapping[i],
+@@ -240,7 +240,7 @@ static int moxart_rx_poll(struct napi_struct *napi, int budget)
+ 		if (len > RX_BUF_SIZE)
+ 			len = RX_BUF_SIZE;
+ 
+-		dma_sync_single_for_cpu(&ndev->dev,
++		dma_sync_single_for_cpu(&priv->pdev->dev,
+ 					priv->rx_mapping[rx_head],
+ 					priv->rx_buf_size, DMA_FROM_DEVICE);
+ 		skb = netdev_alloc_skb_ip_align(ndev, len);
+@@ -294,7 +294,7 @@ static void moxart_tx_finished(struct net_device *ndev)
+ 	unsigned int tx_tail = priv->tx_tail;
+ 
+ 	while (tx_tail != tx_head) {
+-		dma_unmap_single(&ndev->dev, priv->tx_mapping[tx_tail],
++		dma_unmap_single(&priv->pdev->dev, priv->tx_mapping[tx_tail],
+ 				 priv->tx_len[tx_tail], DMA_TO_DEVICE);
+ 
+ 		ndev->stats.tx_packets++;
+@@ -358,9 +358,9 @@ static netdev_tx_t moxart_mac_start_xmit(struct sk_buff *skb,
+ 
+ 	len = skb->len > TX_BUF_SIZE ? TX_BUF_SIZE : skb->len;
+ 
+-	priv->tx_mapping[tx_head] = dma_map_single(&ndev->dev, skb->data,
++	priv->tx_mapping[tx_head] = dma_map_single(&priv->pdev->dev, skb->data,
+ 						   len, DMA_TO_DEVICE);
+-	if (dma_mapping_error(&ndev->dev, priv->tx_mapping[tx_head])) {
++	if (dma_mapping_error(&priv->pdev->dev, priv->tx_mapping[tx_head])) {
+ 		netdev_err(ndev, "DMA mapping error\n");
+ 		goto out_unlock;
+ 	}
+@@ -379,7 +379,7 @@ static netdev_tx_t moxart_mac_start_xmit(struct sk_buff *skb,
+ 		len = ETH_ZLEN;
+ 	}
+ 
+-	dma_sync_single_for_device(&ndev->dev, priv->tx_mapping[tx_head],
++	dma_sync_single_for_device(&priv->pdev->dev, priv->tx_mapping[tx_head],
+ 				   priv->tx_buf_size, DMA_TO_DEVICE);
+ 
+ 	txdes1 = TX_DESC1_LTS | TX_DESC1_FTS | (len & TX_DESC1_BUF_SIZE_MASK);
+@@ -494,7 +494,7 @@ static int moxart_mac_probe(struct platform_device *pdev)
+ 	priv->tx_buf_size = TX_BUF_SIZE;
+ 	priv->rx_buf_size = RX_BUF_SIZE;
+ 
+-	priv->tx_desc_base = dma_alloc_coherent(&pdev->dev, TX_REG_DESC_SIZE *
++	priv->tx_desc_base = dma_alloc_coherent(p_dev, TX_REG_DESC_SIZE *
+ 						TX_DESC_NUM, &priv->tx_base,
+ 						GFP_DMA | GFP_KERNEL);
+ 	if (!priv->tx_desc_base) {
+@@ -502,7 +502,7 @@ static int moxart_mac_probe(struct platform_device *pdev)
+ 		goto init_fail;
+ 	}
+ 
+-	priv->rx_desc_base = dma_alloc_coherent(&pdev->dev, RX_REG_DESC_SIZE *
++	priv->rx_desc_base = dma_alloc_coherent(p_dev, RX_REG_DESC_SIZE *
+ 						RX_DESC_NUM, &priv->rx_base,
+ 						GFP_DMA | GFP_KERNEL);
+ 	if (!priv->rx_desc_base) {
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index e0b801d107396..bf8590ef0964b 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -1225,6 +1225,8 @@ nfp_port_get_module_info(struct net_device *netdev,
+ 	u8 data;
+ 
+ 	port = nfp_port_from_netdev(netdev);
++	/* update port state to get latest interface */
++	set_bit(NFP_PORT_CHANGED, &port->flags);
+ 	eth_port = nfp_port_get_eth_port(port);
+ 	if (!eth_port)
+ 		return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index fb065b074553e..5406f5a9bbe59 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -669,6 +669,7 @@ static void intel_eth_pci_remove(struct pci_dev *pdev)
+ 
+ 	pci_free_irq_vectors(pdev);
+ 
++	clk_disable_unprepare(priv->plat->stmmac_clk);
+ 	clk_unregister_fixed_rate(priv->plat->stmmac_clk);
+ 
+ 	pcim_iounmap_regions(pdev, BIT(0));
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 5ddb2dbb8572b..081939cb420b0 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -772,7 +772,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
+ 				       struct geneve_sock *gs4,
+ 				       struct flowi4 *fl4,
+ 				       const struct ip_tunnel_info *info,
+-				       __be16 dport, __be16 sport)
++				       __be16 dport, __be16 sport,
++				       __u8 *full_tos)
+ {
+ 	bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ 	struct geneve_dev *geneve = netdev_priv(dev);
+@@ -797,6 +798,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
+ 		use_cache = false;
+ 	}
+ 	fl4->flowi4_tos = RT_TOS(tos);
++	if (full_tos)
++		*full_tos = tos;
+ 
+ 	dst_cache = (struct dst_cache *)&info->dst_cache;
+ 	if (use_cache) {
+@@ -850,8 +853,7 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
+ 		use_cache = false;
+ 	}
+ 
+-	fl6->flowlabel = ip6_make_flowinfo(RT_TOS(prio),
+-					   info->key.label);
++	fl6->flowlabel = ip6_make_flowinfo(prio, info->key.label);
+ 	dst_cache = (struct dst_cache *)&info->dst_cache;
+ 	if (use_cache) {
+ 		dst = dst_cache_get_ip6(dst_cache, &fl6->saddr);
+@@ -885,6 +887,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	const struct ip_tunnel_key *key = &info->key;
+ 	struct rtable *rt;
+ 	struct flowi4 fl4;
++	__u8 full_tos;
+ 	__u8 tos, ttl;
+ 	__be16 df = 0;
+ 	__be16 sport;
+@@ -895,7 +898,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 
+ 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ 	rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+-			      geneve->cfg.info.key.tp_dst, sport);
++			      geneve->cfg.info.key.tp_dst, sport, &full_tos);
+ 	if (IS_ERR(rt))
+ 		return PTR_ERR(rt);
+ 
+@@ -939,7 +942,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 
+ 		df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0;
+ 	} else {
+-		tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, ip_hdr(skb), skb);
++		tos = ip_tunnel_ecn_encap(full_tos, ip_hdr(skb), skb);
+ 		if (geneve->cfg.ttl_inherit)
+ 			ttl = ip_tunnel_get_ttl(ip_hdr(skb), skb);
+ 		else
+@@ -1121,7 +1124,7 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ 					  1, USHRT_MAX, true);
+ 
+ 		rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+-				      geneve->cfg.info.key.tp_dst, sport);
++				      geneve->cfg.info.key.tp_dst, sport, NULL);
+ 		if (IS_ERR(rt))
+ 			return PTR_ERR(rt);
+ 
+diff --git a/drivers/net/plip/plip.c b/drivers/net/plip/plip.c
+index 4406b353123ed..5a0e5a8a8917b 100644
+--- a/drivers/net/plip/plip.c
++++ b/drivers/net/plip/plip.c
+@@ -1103,7 +1103,7 @@ plip_open(struct net_device *dev)
+ 		/* Any address will do - we take the first. We already
+ 		   have the first two bytes filled with 0xfc, from
+ 		   plip_init_dev(). */
+-		const struct in_ifaddr *ifa = rcu_dereference(in_dev->ifa_list);
++		const struct in_ifaddr *ifa = rtnl_dereference(in_dev->ifa_list);
+ 		if (ifa != NULL) {
+ 			memcpy(dev->dev_addr+2, &ifa->ifa_local, 4);
+ 		}
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 0a07c05a610d1..c942cd6a2c65e 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -968,8 +968,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ 		case XDP_TX:
+ 			stats->xdp_tx++;
+ 			xdpf = xdp_convert_buff_to_frame(&xdp);
+-			if (unlikely(!xdpf))
++			if (unlikely(!xdpf)) {
++				if (unlikely(xdp_page != page))
++					put_page(xdp_page);
+ 				goto err_xdp;
++			}
+ 			err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
+ 			if (unlikely(err < 0)) {
+ 				trace_xdp_exception(vi->dev, xdp_prog, act);
+diff --git a/drivers/ntb/test/ntb_tool.c b/drivers/ntb/test/ntb_tool.c
+index b7bf3f863d79b..5ee0afa621a95 100644
+--- a/drivers/ntb/test/ntb_tool.c
++++ b/drivers/ntb/test/ntb_tool.c
+@@ -367,14 +367,16 @@ static ssize_t tool_fn_write(struct tool_ctx *tc,
+ 	u64 bits;
+ 	int n;
+ 
++	if (*offp)
++		return 0;
++
+ 	buf = kmalloc(size + 1, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+-	ret = simple_write_to_buffer(buf, size, offp, ubuf, size);
+-	if (ret < 0) {
++	if (copy_from_user(buf, ubuf, size)) {
+ 		kfree(buf);
+-		return ret;
++		return -EFAULT;
+ 	}
+ 
+ 	buf[size] = 0;
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 96b67a70cbbbd..d030d5e69dc50 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1802,7 +1802,8 @@ static int __init nvmet_tcp_init(void)
+ {
+ 	int ret;
+ 
+-	nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq", WQ_HIGHPRI, 0);
++	nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq",
++				WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+ 	if (!nvmet_tcp_wq)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
+index d89d7ed70768c..984aa023c753f 100644
+--- a/drivers/pci/pcie/err.c
++++ b/drivers/pci/pcie/err.c
+@@ -196,8 +196,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
+ 	pci_dbg(bridge, "broadcast error_detected message\n");
+ 	if (state == pci_channel_io_frozen) {
+ 		pci_walk_bridge(bridge, report_frozen_detected, &status);
+-		status = reset_subordinates(bridge);
+-		if (status != PCI_ERS_RESULT_RECOVERED) {
++		if (reset_subordinates(bridge) != PCI_ERS_RESULT_RECOVERED) {
+ 			pci_warn(bridge, "subordinate device reset failed\n");
+ 			goto failed;
+ 		}
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 1be2894ada70c..fb2e52fd01b39 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4897,6 +4897,9 @@ static const struct pci_dev_acs_enabled {
+ 	{ PCI_VENDOR_ID_AMPERE, 0xE00C, pci_quirk_xgene_acs },
+ 	/* Broadcom multi-function device */
+ 	{ PCI_VENDOR_ID_BROADCOM, 0x16D7, pci_quirk_mf_endpoint_acs },
++	{ PCI_VENDOR_ID_BROADCOM, 0x1750, pci_quirk_mf_endpoint_acs },
++	{ PCI_VENDOR_ID_BROADCOM, 0x1751, pci_quirk_mf_endpoint_acs },
++	{ PCI_VENDOR_ID_BROADCOM, 0x1752, pci_quirk_mf_endpoint_acs },
+ 	{ PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs },
+ 	/* Amazon Annapurna Labs */
+ 	{ PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs },
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index 348c670a7b07d..4de832ac47d38 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -1571,16 +1571,14 @@ EXPORT_SYMBOL_GPL(intel_pinctrl_probe_by_uid);
+ 
+ const struct intel_pinctrl_soc_data *intel_pinctrl_get_soc_data(struct platform_device *pdev)
+ {
++	const struct intel_pinctrl_soc_data * const *table;
+ 	const struct intel_pinctrl_soc_data *data = NULL;
+-	const struct intel_pinctrl_soc_data **table;
+-	struct acpi_device *adev;
+-	unsigned int i;
+ 
+-	adev = ACPI_COMPANION(&pdev->dev);
+-	if (adev) {
+-		const void *match = device_get_match_data(&pdev->dev);
++	table = device_get_match_data(&pdev->dev);
++	if (table) {
++		struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
++		unsigned int i;
+ 
+-		table = (const struct intel_pinctrl_soc_data **)match;
+ 		for (i = 0; table[i]; i++) {
+ 			if (!strcmp(adev->pnp.unique_id, table[i]->uid)) {
+ 				data = table[i];
+@@ -1594,7 +1592,7 @@ const struct intel_pinctrl_soc_data *intel_pinctrl_get_soc_data(struct platform_
+ 		if (!id)
+ 			return ERR_PTR(-ENODEV);
+ 
+-		table = (const struct intel_pinctrl_soc_data **)id->driver_data;
++		table = (const struct intel_pinctrl_soc_data * const *)id->driver_data;
+ 		data = table[pdev->id];
+ 	}
+ 
+diff --git a/drivers/pinctrl/nomadik/pinctrl-nomadik.c b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+index 6d77feda9090a..d80ec0d8eaada 100644
+--- a/drivers/pinctrl/nomadik/pinctrl-nomadik.c
++++ b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+@@ -1421,8 +1421,10 @@ static int nmk_pinctrl_dt_subnode_to_map(struct pinctrl_dev *pctldev,
+ 
+ 	has_config = nmk_pinctrl_dt_get_config(np, &configs);
+ 	np_config = of_parse_phandle(np, "ste,config", 0);
+-	if (np_config)
++	if (np_config) {
+ 		has_config |= nmk_pinctrl_dt_get_config(np_config, &configs);
++		of_node_put(np_config);
++	}
+ 	if (has_config) {
+ 		const char *gpio_name;
+ 		const char *pin;
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm8916.c b/drivers/pinctrl/qcom/pinctrl-msm8916.c
+index 396db12ae9048..bf68913ba8212 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm8916.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm8916.c
+@@ -844,8 +844,8 @@ static const struct msm_pingroup msm8916_groups[] = {
+ 	PINGROUP(28, pwr_modem_enabled_a, NA, NA, NA, NA, NA, qdss_tracedata_b, NA, atest_combodac),
+ 	PINGROUP(29, cci_i2c, NA, NA, NA, NA, NA, qdss_tracedata_b, NA, atest_combodac),
+ 	PINGROUP(30, cci_i2c, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
+-	PINGROUP(31, cci_timer0, NA, NA, NA, NA, NA, NA, NA, NA),
+-	PINGROUP(32, cci_timer1, NA, NA, NA, NA, NA, NA, NA, NA),
++	PINGROUP(31, cci_timer0, flash_strobe, NA, NA, NA, NA, NA, NA, NA),
++	PINGROUP(32, cci_timer1, flash_strobe, NA, NA, NA, NA, NA, NA, NA),
+ 	PINGROUP(33, cci_async, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
+ 	PINGROUP(34, pwr_nav_enabled_a, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
+ 	PINGROUP(35, pwr_crypto_enabled_a, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
+diff --git a/drivers/pinctrl/qcom/pinctrl-sm8250.c b/drivers/pinctrl/qcom/pinctrl-sm8250.c
+index af144e724bd9c..3bd7f9fedcc34 100644
+--- a/drivers/pinctrl/qcom/pinctrl-sm8250.c
++++ b/drivers/pinctrl/qcom/pinctrl-sm8250.c
+@@ -1316,7 +1316,7 @@ static const struct msm_pingroup sm8250_groups[] = {
+ static const struct msm_gpio_wakeirq_map sm8250_pdc_map[] = {
+ 	{ 0, 79 }, { 1, 84 }, { 2, 80 }, { 3, 82 }, { 4, 107 }, { 7, 43 },
+ 	{ 11, 42 }, { 14, 44 }, { 15, 52 }, { 19, 67 }, { 23, 68 }, { 24, 105 },
+-	{ 27, 92 }, { 28, 106 }, { 31, 69 }, { 35, 70 }, { 39, 37 },
++	{ 27, 92 }, { 28, 106 }, { 31, 69 }, { 35, 70 }, { 39, 73 },
+ 	{ 40, 108 }, { 43, 71 }, { 45, 72 }, { 47, 83 }, { 51, 74 }, { 55, 77 },
+ 	{ 59, 78 }, { 63, 75 }, { 64, 81 }, { 65, 87 }, { 66, 88 }, { 67, 89 },
+ 	{ 68, 54 }, { 70, 85 }, { 77, 46 }, { 80, 90 }, { 81, 91 }, { 83, 97 },
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sun50i-h6-r.c b/drivers/pinctrl/sunxi/pinctrl-sun50i-h6-r.c
+index 4557e18d59899..12c40f9c1a247 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sun50i-h6-r.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sun50i-h6-r.c
+@@ -105,6 +105,7 @@ static const struct sunxi_pinctrl_desc sun50i_h6_r_pinctrl_data = {
+ 	.npins = ARRAY_SIZE(sun50i_h6_r_pins),
+ 	.pin_base = PL_BASE,
+ 	.irq_banks = 2,
++	.io_bias_cfg_variant = BIAS_VOLTAGE_PIO_POW_MODE_SEL,
+ };
+ 
+ static int sun50i_h6_r_pinctrl_probe(struct platform_device *pdev)
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index 24c861434bf13..e4b41cc6c5860 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -624,7 +624,7 @@ static int sunxi_pinctrl_set_io_bias_cfg(struct sunxi_pinctrl *pctl,
+ 					 unsigned pin,
+ 					 struct regulator *supply)
+ {
+-	unsigned short bank = pin / PINS_PER_BANK;
++	unsigned short bank;
+ 	unsigned long flags;
+ 	u32 val, reg;
+ 	int uV;
+@@ -640,6 +640,9 @@ static int sunxi_pinctrl_set_io_bias_cfg(struct sunxi_pinctrl *pctl,
+ 	if (uV == 0)
+ 		return 0;
+ 
++	pin -= pctl->desc->pin_base;
++	bank = pin / PINS_PER_BANK;
++
+ 	switch (pctl->desc->io_bias_cfg_variant) {
+ 	case BIAS_VOLTAGE_GRP_CONFIG:
+ 		/*
+@@ -657,8 +660,6 @@ static int sunxi_pinctrl_set_io_bias_cfg(struct sunxi_pinctrl *pctl,
+ 		else
+ 			val = 0xD; /* 3.3V */
+ 
+-		pin -= pctl->desc->pin_base;
+-
+ 		reg = readl(pctl->membase + sunxi_grp_config_reg(pin));
+ 		reg &= ~IO_BIAS_MASK;
+ 		writel(reg | val, pctl->membase + sunxi_grp_config_reg(pin));
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index e1fadf059e05e..3a2a78ff33304 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -507,13 +507,13 @@ int cros_ec_query_all(struct cros_ec_device *ec_dev)
+ 	ret = cros_ec_get_host_command_version_mask(ec_dev,
+ 						    EC_CMD_GET_NEXT_EVENT,
+ 						    &ver_mask);
+-	if (ret < 0 || ver_mask == 0)
++	if (ret < 0 || ver_mask == 0) {
+ 		ec_dev->mkbp_event_supported = 0;
+-	else
++	} else {
+ 		ec_dev->mkbp_event_supported = fls(ver_mask);
+ 
+-	dev_dbg(ec_dev->dev, "MKBP support version %u\n",
+-		ec_dev->mkbp_event_supported - 1);
++		dev_dbg(ec_dev->dev, "MKBP support version %u\n", ec_dev->mkbp_event_supported - 1);
++	}
+ 
+ 	/* Probe if host sleep v1 is supported for S0ix failure detection. */
+ 	ret = cros_ec_get_host_command_version_mask(ec_dev,
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index beaf3a8d206f8..fbc76d69ea0b4 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -2609,8 +2609,8 @@ lpfc_debugfs_multixripools_write(struct file *file, const char __user *buf,
+ 	struct lpfc_sli4_hdw_queue *qp;
+ 	struct lpfc_multixri_pool *multixri_pool;
+ 
+-	if (nbytes > 64)
+-		nbytes = 64;
++	if (nbytes > sizeof(mybuf) - 1)
++		nbytes = sizeof(mybuf) - 1;
+ 
+ 	memset(mybuf, 0, sizeof(mybuf));
+ 
+@@ -2690,8 +2690,8 @@ lpfc_debugfs_nvmestat_write(struct file *file, const char __user *buf,
+ 	if (!phba->targetport)
+ 		return -ENXIO;
+ 
+-	if (nbytes > 64)
+-		nbytes = 64;
++	if (nbytes > sizeof(mybuf) - 1)
++		nbytes = sizeof(mybuf) - 1;
+ 
+ 	memset(mybuf, 0, sizeof(mybuf));
+ 
+@@ -2828,8 +2828,8 @@ lpfc_debugfs_ioktime_write(struct file *file, const char __user *buf,
+ 	char mybuf[64];
+ 	char *pbuf;
+ 
+-	if (nbytes > 64)
+-		nbytes = 64;
++	if (nbytes > sizeof(mybuf) - 1)
++		nbytes = sizeof(mybuf) - 1;
+ 
+ 	memset(mybuf, 0, sizeof(mybuf));
+ 
+@@ -2956,8 +2956,8 @@ lpfc_debugfs_nvmeio_trc_write(struct file *file, const char __user *buf,
+ 	char mybuf[64];
+ 	char *pbuf;
+ 
+-	if (nbytes > 63)
+-		nbytes = 63;
++	if (nbytes > sizeof(mybuf) - 1)
++		nbytes = sizeof(mybuf) - 1;
+ 
+ 	memset(mybuf, 0, sizeof(mybuf));
+ 
+@@ -3062,8 +3062,8 @@ lpfc_debugfs_hdwqstat_write(struct file *file, const char __user *buf,
+ 	char *pbuf;
+ 	int i;
+ 
+-	if (nbytes > 64)
+-		nbytes = 64;
++	if (nbytes > sizeof(mybuf) - 1)
++		nbytes = sizeof(mybuf) - 1;
+ 
+ 	memset(mybuf, 0, sizeof(mybuf));
+ 
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index 0bc7daa7afc83..e4cb52e1fe261 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -156,6 +156,7 @@ struct meson_spicc_device {
+ 	void __iomem			*base;
+ 	struct clk			*core;
+ 	struct clk			*pclk;
++	struct clk_divider		pow2_div;
+ 	struct clk			*clk;
+ 	struct spi_message		*message;
+ 	struct spi_transfer		*xfer;
+@@ -168,6 +169,8 @@ struct meson_spicc_device {
+ 	unsigned long			xfer_remain;
+ };
+ 
++#define pow2_clk_to_spicc(_div) container_of(_div, struct meson_spicc_device, pow2_div)
++
+ static void meson_spicc_oen_enable(struct meson_spicc_device *spicc)
+ {
+ 	u32 conf;
+@@ -421,7 +424,7 @@ static int meson_spicc_prepare_message(struct spi_master *master,
+ {
+ 	struct meson_spicc_device *spicc = spi_master_get_devdata(master);
+ 	struct spi_device *spi = message->spi;
+-	u32 conf = 0;
++	u32 conf = readl_relaxed(spicc->base + SPICC_CONREG) & SPICC_DATARATE_MASK;
+ 
+ 	/* Store current message */
+ 	spicc->message = message;
+@@ -458,8 +461,6 @@ static int meson_spicc_prepare_message(struct spi_master *master,
+ 	/* Select CS */
+ 	conf |= FIELD_PREP(SPICC_CS_MASK, spi->chip_select);
+ 
+-	/* Default Clock rate core/4 */
+-
+ 	/* Default 8bit word */
+ 	conf |= FIELD_PREP(SPICC_BITLENGTH_MASK, 8 - 1);
+ 
+@@ -476,12 +477,16 @@ static int meson_spicc_prepare_message(struct spi_master *master,
+ static int meson_spicc_unprepare_transfer(struct spi_master *master)
+ {
+ 	struct meson_spicc_device *spicc = spi_master_get_devdata(master);
++	u32 conf = readl_relaxed(spicc->base + SPICC_CONREG) & SPICC_DATARATE_MASK;
+ 
+ 	/* Disable all IRQs */
+ 	writel(0, spicc->base + SPICC_INTREG);
+ 
+ 	device_reset_optional(&spicc->pdev->dev);
+ 
++	/* Set default configuration, keeping datarate field */
++	writel_relaxed(conf, spicc->base + SPICC_CONREG);
++
+ 	return 0;
+ }
+ 
+@@ -518,14 +523,60 @@ static void meson_spicc_cleanup(struct spi_device *spi)
+  * Clk path for G12A series:
+  *    pclk -> pow2 fixed div -> pow2 div -> mux -> out
+  *    pclk -> enh fixed div -> enh div -> mux -> out
++ *
++ * The pow2 divider is tied to the controller HW state, and the
++ * divider is only valid when the controller is initialized.
++ *
++ * A set of clock ops is added to make sure we don't read/set this
++ * clock rate while the controller is in an unknown state.
+  */
+ 
+-static int meson_spicc_clk_init(struct meson_spicc_device *spicc)
++static unsigned long meson_spicc_pow2_recalc_rate(struct clk_hw *hw,
++						  unsigned long parent_rate)
++{
++	struct clk_divider *divider = to_clk_divider(hw);
++	struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
++
++	if (!spicc->master->cur_msg || !spicc->master->busy)
++		return 0;
++
++	return clk_divider_ops.recalc_rate(hw, parent_rate);
++}
++
++static int meson_spicc_pow2_determine_rate(struct clk_hw *hw,
++					   struct clk_rate_request *req)
++{
++	struct clk_divider *divider = to_clk_divider(hw);
++	struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
++
++	if (!spicc->master->cur_msg || !spicc->master->busy)
++		return -EINVAL;
++
++	return clk_divider_ops.determine_rate(hw, req);
++}
++
++static int meson_spicc_pow2_set_rate(struct clk_hw *hw, unsigned long rate,
++				     unsigned long parent_rate)
++{
++	struct clk_divider *divider = to_clk_divider(hw);
++	struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
++
++	if (!spicc->master->cur_msg || !spicc->master->busy)
++		return -EINVAL;
++
++	return clk_divider_ops.set_rate(hw, rate, parent_rate);
++}
++
++const struct clk_ops meson_spicc_pow2_clk_ops = {
++	.recalc_rate = meson_spicc_pow2_recalc_rate,
++	.determine_rate = meson_spicc_pow2_determine_rate,
++	.set_rate = meson_spicc_pow2_set_rate,
++};
++
++static int meson_spicc_pow2_clk_init(struct meson_spicc_device *spicc)
+ {
+ 	struct device *dev = &spicc->pdev->dev;
+-	struct clk_fixed_factor *pow2_fixed_div, *enh_fixed_div;
+-	struct clk_divider *pow2_div, *enh_div;
+-	struct clk_mux *mux;
++	struct clk_fixed_factor *pow2_fixed_div;
+ 	struct clk_init_data init;
+ 	struct clk *clk;
+ 	struct clk_parent_data parent_data[2];
+@@ -560,31 +611,45 @@ static int meson_spicc_clk_init(struct meson_spicc_device *spicc)
+ 	if (WARN_ON(IS_ERR(clk)))
+ 		return PTR_ERR(clk);
+ 
+-	pow2_div = devm_kzalloc(dev, sizeof(*pow2_div), GFP_KERNEL);
+-	if (!pow2_div)
+-		return -ENOMEM;
+-
+ 	snprintf(name, sizeof(name), "%s#pow2_div", dev_name(dev));
+ 	init.name = name;
+-	init.ops = &clk_divider_ops;
+-	init.flags = CLK_SET_RATE_PARENT;
++	init.ops = &meson_spicc_pow2_clk_ops;
++	/*
++	 * Set NOCACHE here to make sure we read the actual HW value
++	 * since we reset the HW after each transfer.
++	 */
++	init.flags = CLK_SET_RATE_PARENT | CLK_GET_RATE_NOCACHE;
+ 	parent_data[0].hw = &pow2_fixed_div->hw;
+ 	init.num_parents = 1;
+ 
+-	pow2_div->shift = 16,
+-	pow2_div->width = 3,
+-	pow2_div->flags = CLK_DIVIDER_POWER_OF_TWO,
+-	pow2_div->reg = spicc->base + SPICC_CONREG;
+-	pow2_div->hw.init = &init;
++	spicc->pow2_div.shift = 16,
++	spicc->pow2_div.width = 3,
++	spicc->pow2_div.flags = CLK_DIVIDER_POWER_OF_TWO,
++	spicc->pow2_div.reg = spicc->base + SPICC_CONREG;
++	spicc->pow2_div.hw.init = &init;
+ 
+-	clk = devm_clk_register(dev, &pow2_div->hw);
+-	if (WARN_ON(IS_ERR(clk)))
+-		return PTR_ERR(clk);
++	spicc->clk = devm_clk_register(dev, &spicc->pow2_div.hw);
++	if (WARN_ON(IS_ERR(spicc->clk)))
++		return PTR_ERR(spicc->clk);
+ 
+-	if (!spicc->data->has_enhance_clk_div) {
+-		spicc->clk = clk;
+-		return 0;
+-	}
++	return 0;
++}
++
++static int meson_spicc_enh_clk_init(struct meson_spicc_device *spicc)
++{
++	struct device *dev = &spicc->pdev->dev;
++	struct clk_fixed_factor *enh_fixed_div;
++	struct clk_divider *enh_div;
++	struct clk_mux *mux;
++	struct clk_init_data init;
++	struct clk *clk;
++	struct clk_parent_data parent_data[2];
++	char name[64];
++
++	memset(&init, 0, sizeof(init));
++	memset(&parent_data, 0, sizeof(parent_data));
++
++	init.parent_data = parent_data;
+ 
+ 	/* algorithm for enh div: rate = freq / 2 / (N + 1) */
+ 
+@@ -637,7 +702,7 @@ static int meson_spicc_clk_init(struct meson_spicc_device *spicc)
+ 	snprintf(name, sizeof(name), "%s#sel", dev_name(dev));
+ 	init.name = name;
+ 	init.ops = &clk_mux_ops;
+-	parent_data[0].hw = &pow2_div->hw;
++	parent_data[0].hw = &spicc->pow2_div.hw;
+ 	parent_data[1].hw = &enh_div->hw;
+ 	init.num_parents = 2;
+ 	init.flags = CLK_SET_RATE_PARENT;
+@@ -754,12 +819,20 @@ static int meson_spicc_probe(struct platform_device *pdev)
+ 
+ 	meson_spicc_oen_enable(spicc);
+ 
+-	ret = meson_spicc_clk_init(spicc);
++	ret = meson_spicc_pow2_clk_init(spicc);
+ 	if (ret) {
+-		dev_err(&pdev->dev, "clock registration failed\n");
++		dev_err(&pdev->dev, "pow2 clock registration failed\n");
+ 		goto out_clk;
+ 	}
+ 
++	if (spicc->data->has_enhance_clk_div) {
++		ret = meson_spicc_enh_clk_init(spicc);
++		if (ret) {
++			dev_err(&pdev->dev, "clock registration failed\n");
++			goto out_clk;
++		}
++	}
++
+ 	ret = devm_spi_register_master(&pdev->dev, master);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "spi master registration failed\n");
+diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
+index e07f997cf8dd3..9cc4a7b63b0d6 100644
+--- a/drivers/tee/tee_core.c
++++ b/drivers/tee/tee_core.c
+@@ -334,6 +334,9 @@ tee_ioctl_shm_register(struct tee_context *ctx,
+ 	if (data.flags)
+ 		return -EINVAL;
+ 
++	if (!access_ok((void __user *)(unsigned long)data.addr, data.length))
++		return -EFAULT;
++
+ 	shm = tee_shm_register(ctx, data.addr, data.length,
+ 			       TEE_SHM_DMA_BUF | TEE_SHM_USER_MAPPED);
+ 	if (IS_ERR(shm))
+diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
+index 6e662fb131d55..499fccba3d74b 100644
+--- a/drivers/tee/tee_shm.c
++++ b/drivers/tee/tee_shm.c
+@@ -222,9 +222,6 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
+ 		goto err;
+ 	}
+ 
+-	if (!access_ok((void __user *)addr, length))
+-		return ERR_PTR(-EFAULT);
+-
+ 	mutex_lock(&teedev->mutex);
+ 	shm->id = idr_alloc(&teedev->idr, shm, 1, 0, GFP_KERNEL);
+ 	mutex_unlock(&teedev->mutex);
+diff --git a/drivers/tty/serial/ucc_uart.c b/drivers/tty/serial/ucc_uart.c
+index d6a8604157aba..d1fecc88330ec 100644
+--- a/drivers/tty/serial/ucc_uart.c
++++ b/drivers/tty/serial/ucc_uart.c
+@@ -1137,6 +1137,8 @@ static unsigned int soc_info(unsigned int *rev_h, unsigned int *rev_l)
+ 		/* No compatible property, so try the name. */
+ 		soc_string = np->name;
+ 
++	of_node_put(np);
++
+ 	/* Extract the SOC number from the "PowerPC," string */
+ 	if ((sscanf(soc_string, "PowerPC,%u", &soc) != 1) || !soc)
+ 		return 0;
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index f120da442d43d..a37ea946459cc 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -655,9 +655,9 @@ static void cdns3_wa2_remove_old_request(struct cdns3_endpoint *priv_ep)
+ 		trace_cdns3_wa2(priv_ep, "removes eldest request");
+ 
+ 		kfree(priv_req->request.buf);
++		list_del_init(&priv_req->list);
+ 		cdns3_gadget_ep_free_request(&priv_ep->endpoint,
+ 					     &priv_req->request);
+-		list_del_init(&priv_req->list);
+ 		--priv_ep->wa2_counter;
+ 
+ 		if (!chain)
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 64485f82dc5b9..da0df69cc2344 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -3593,7 +3593,8 @@ void dwc2_hsotg_core_disconnect(struct dwc2_hsotg *hsotg)
+ void dwc2_hsotg_core_connect(struct dwc2_hsotg *hsotg)
+ {
+ 	/* remove the soft-disconnect and let's go */
+-	dwc2_clear_bit(hsotg, DCTL, DCTL_SFTDISCON);
++	if (!hsotg->role_sw || (dwc2_readl(hsotg, GOTGCTL) & GOTGCTL_BSESVLD))
++		dwc2_clear_bit(hsotg, DCTL, DCTL_SFTDISCON);
+ }
+ 
+ /**
+diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
+index 633e23d58d868..5ce548c2359d8 100644
+--- a/drivers/usb/gadget/function/uvc_video.c
++++ b/drivers/usb/gadget/function/uvc_video.c
+@@ -159,7 +159,7 @@ uvc_video_complete(struct usb_ep *ep, struct usb_request *req)
+ 		break;
+ 
+ 	default:
+-		uvcg_info(&video->uvc->func,
++		uvcg_warn(&video->uvc->func,
+ 			  "VS request completed with status %d.\n",
+ 			  req->status);
+ 		uvcg_queue_cancel(queue, 0);
+diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
+index 454860d52ce77..cd097474b6c39 100644
+--- a/drivers/usb/gadget/legacy/inode.c
++++ b/drivers/usb/gadget/legacy/inode.c
+@@ -362,6 +362,7 @@ ep_io (struct ep_data *epdata, void *buf, unsigned len)
+ 				spin_unlock_irq (&epdata->dev->lock);
+ 
+ 				DBG (epdata->dev, "endpoint gone\n");
++				wait_for_completion(&done);
+ 				epdata->status = -ENODEV;
+ 			}
+ 		}
+diff --git a/drivers/usb/host/ohci-ppc-of.c b/drivers/usb/host/ohci-ppc-of.c
+index 45f7cceb6df31..98e46725999e9 100644
+--- a/drivers/usb/host/ohci-ppc-of.c
++++ b/drivers/usb/host/ohci-ppc-of.c
+@@ -169,6 +169,7 @@ static int ohci_hcd_ppc_of_probe(struct platform_device *op)
+ 				release_mem_region(res.start, 0x4);
+ 		} else
+ 			pr_debug("%s: cannot get ehci offset from fdt\n", __FILE__);
++		of_node_put(np);
+ 	}
+ 
+ 	irq_dispose_mapping(irq);
+diff --git a/drivers/usb/renesas_usbhs/rza.c b/drivers/usb/renesas_usbhs/rza.c
+index 24de64edb674b..2d77edefb4b30 100644
+--- a/drivers/usb/renesas_usbhs/rza.c
++++ b/drivers/usb/renesas_usbhs/rza.c
+@@ -23,6 +23,10 @@ static int usbhs_rza1_hardware_init(struct platform_device *pdev)
+ 	extal_clk = of_find_node_by_name(NULL, "extal");
+ 	of_property_read_u32(usb_x1_clk, "clock-frequency", &freq_usb);
+ 	of_property_read_u32(extal_clk, "clock-frequency", &freq_extal);
++
++	of_node_put(usb_x1_clk);
++	of_node_put(extal_clk);
++
+ 	if (freq_usb == 0) {
+ 		if (freq_extal == 12000000) {
+ 			/* Select 12MHz XTAL */
+diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
+index f886f2db8153e..90db9d66867c7 100644
+--- a/drivers/vfio/vfio.c
++++ b/drivers/vfio/vfio.c
+@@ -1783,6 +1783,7 @@ struct vfio_info_cap_header *vfio_info_cap_add(struct vfio_info_cap *caps,
+ 	buf = krealloc(caps->buf, caps->size + size, GFP_KERNEL);
+ 	if (!buf) {
+ 		kfree(caps->buf);
++		caps->buf = NULL;
+ 		caps->size = 0;
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+diff --git a/drivers/video/fbdev/i740fb.c b/drivers/video/fbdev/i740fb.c
+index 52cce0db8bd34..ad5ced4ef972d 100644
+--- a/drivers/video/fbdev/i740fb.c
++++ b/drivers/video/fbdev/i740fb.c
+@@ -400,7 +400,7 @@ static int i740fb_decode_var(const struct fb_var_screeninfo *var,
+ 	u32 xres, right, hslen, left, xtotal;
+ 	u32 yres, lower, vslen, upper, ytotal;
+ 	u32 vxres, xoffset, vyres, yoffset;
+-	u32 bpp, base, dacspeed24, mem;
++	u32 bpp, base, dacspeed24, mem, freq;
+ 	u8 r7;
+ 	int i;
+ 
+@@ -643,7 +643,12 @@ static int i740fb_decode_var(const struct fb_var_screeninfo *var,
+ 	par->atc[VGA_ATC_OVERSCAN] = 0;
+ 
+ 	/* Calculate VCLK that most closely matches the requested dot clock */
+-	i740_calc_vclk((((u32)1e9) / var->pixclock) * (u32)(1e3), par);
++	freq = (((u32)1e9) / var->pixclock) * (u32)(1e3);
++	if (freq < I740_RFREQ_FIX) {
++		fb_dbg(info, "invalid pixclock\n");
++		freq = I740_RFREQ_FIX;
++	}
++	i740_calc_vclk(freq, par);
+ 
+ 	/* Since we program the clocks ourselves, always use VCLK2. */
+ 	par->misc |= 0x0C;
+diff --git a/drivers/virt/vboxguest/vboxguest_linux.c b/drivers/virt/vboxguest/vboxguest_linux.c
+index 73eb34849eaba..4ccfd30c2a304 100644
+--- a/drivers/virt/vboxguest/vboxguest_linux.c
++++ b/drivers/virt/vboxguest/vboxguest_linux.c
+@@ -356,8 +356,8 @@ static int vbg_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ 		goto err_vbg_core_exit;
+ 	}
+ 
+-	ret = devm_request_irq(dev, pci->irq, vbg_core_isr, IRQF_SHARED,
+-			       DEVICE_NAME, gdev);
++	ret = request_irq(pci->irq, vbg_core_isr, IRQF_SHARED, DEVICE_NAME,
++			  gdev);
+ 	if (ret) {
+ 		vbg_err("vboxguest: Error requesting irq: %d\n", ret);
+ 		goto err_vbg_core_exit;
+@@ -367,7 +367,7 @@ static int vbg_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ 	if (ret) {
+ 		vbg_err("vboxguest: Error misc_register %s failed: %d\n",
+ 			DEVICE_NAME, ret);
+-		goto err_vbg_core_exit;
++		goto err_free_irq;
+ 	}
+ 
+ 	ret = misc_register(&gdev->misc_device_user);
+@@ -403,6 +403,8 @@ err_unregister_misc_device_user:
+ 	misc_deregister(&gdev->misc_device_user);
+ err_unregister_misc_device:
+ 	misc_deregister(&gdev->misc_device);
++err_free_irq:
++	free_irq(pci->irq, gdev);
+ err_vbg_core_exit:
+ 	vbg_core_exit(gdev);
+ err_disable_pcidev:
+@@ -419,6 +421,7 @@ static void vbg_pci_remove(struct pci_dev *pci)
+ 	vbg_gdev = NULL;
+ 	mutex_unlock(&vbg_gdev_mutex);
+ 
++	free_irq(pci->irq, gdev);
+ 	device_remove_file(gdev->dev, &dev_attr_host_features);
+ 	device_remove_file(gdev->dev, &dev_attr_host_version);
+ 	misc_deregister(&gdev->misc_device_user);
+diff --git a/drivers/xen/xenbus/xenbus_dev_frontend.c b/drivers/xen/xenbus/xenbus_dev_frontend.c
+index 597af455a522b..0792fda49a15f 100644
+--- a/drivers/xen/xenbus/xenbus_dev_frontend.c
++++ b/drivers/xen/xenbus/xenbus_dev_frontend.c
+@@ -128,7 +128,7 @@ static ssize_t xenbus_file_read(struct file *filp,
+ {
+ 	struct xenbus_file_priv *u = filp->private_data;
+ 	struct read_buffer *rb;
+-	unsigned i;
++	ssize_t i;
+ 	int ret;
+ 
+ 	mutex_lock(&u->reply_mutex);
+@@ -148,7 +148,7 @@ again:
+ 	rb = list_entry(u->read_buffers.next, struct read_buffer, list);
+ 	i = 0;
+ 	while (i < len) {
+-		unsigned sz = min((unsigned)len - i, rb->len - rb->cons);
++		size_t sz = min_t(size_t, len - i, rb->len - rb->cons);
+ 
+ 		ret = copy_to_user(ubuf + i, &rb->msg[rb->cons], sz);
+ 
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index c246ccc6bf057..9a8dc16673b43 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1075,7 +1075,9 @@ again:
+ 	extref = btrfs_lookup_inode_extref(NULL, root, path, name, namelen,
+ 					   inode_objectid, parent_objectid, 0,
+ 					   0);
+-	if (!IS_ERR_OR_NULL(extref)) {
++	if (IS_ERR(extref)) {
++		return PTR_ERR(extref);
++	} else if (extref) {
+ 		u32 item_size;
+ 		u32 cur_offset = 0;
+ 		unsigned long base;
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index d3f67271d3c72..76e43a487bc63 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -3501,24 +3501,23 @@ static void handle_cap_grant(struct inode *inode,
+ 			fill_inline = true;
+ 	}
+ 
+-	if (ci->i_auth_cap == cap &&
+-	    le32_to_cpu(grant->op) == CEPH_CAP_OP_IMPORT) {
+-		if (newcaps & ~extra_info->issued)
+-			wake = true;
++	if (le32_to_cpu(grant->op) == CEPH_CAP_OP_IMPORT) {
++		if (ci->i_auth_cap == cap) {
++			if (newcaps & ~extra_info->issued)
++				wake = true;
++
++			if (ci->i_requested_max_size > max_size ||
++			    !(le32_to_cpu(grant->wanted) & CEPH_CAP_ANY_FILE_WR)) {
++				/* re-request max_size if necessary */
++				ci->i_requested_max_size = 0;
++				wake = true;
++			}
+ 
+-		if (ci->i_requested_max_size > max_size ||
+-		    !(le32_to_cpu(grant->wanted) & CEPH_CAP_ANY_FILE_WR)) {
+-			/* re-request max_size if necessary */
+-			ci->i_requested_max_size = 0;
+-			wake = true;
++			ceph_kick_flushing_inode_caps(session, ci);
+ 		}
+-
+-		ceph_kick_flushing_inode_caps(session, ci);
+-		spin_unlock(&ci->i_ceph_lock);
+ 		up_read(&session->s_mdsc->snap_rwsem);
+-	} else {
+-		spin_unlock(&ci->i_ceph_lock);
+ 	}
++	spin_unlock(&ci->i_ceph_lock);
+ 
+ 	if (fill_inline)
+ 		ceph_fill_inline_data(inode, NULL, extra_info->inline_data,
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 981a915906314..6859967df2b19 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -1184,14 +1184,17 @@ static int encode_supported_features(void **p, void *end)
+ 	if (count > 0) {
+ 		size_t i;
+ 		size_t size = FEATURE_BYTES(count);
++		unsigned long bit;
+ 
+ 		if (WARN_ON_ONCE(*p + 4 + size > end))
+ 			return -ERANGE;
+ 
+ 		ceph_encode_32(p, size);
+ 		memset(*p, 0, size);
+-		for (i = 0; i < count; i++)
+-			((unsigned char*)(*p))[i / 8] |= BIT(feature_bits[i] % 8);
++		for (i = 0; i < count; i++) {
++			bit = feature_bits[i];
++			((unsigned char *)(*p))[bit / 8] |= BIT(bit % 8);
++		}
+ 		*p += size;
+ 	} else {
+ 		if (WARN_ON_ONCE(*p + 4 > end))
+diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
+index f5adbebcb38e5..acf33d7192bb6 100644
+--- a/fs/ceph/mds_client.h
++++ b/fs/ceph/mds_client.h
+@@ -33,10 +33,6 @@ enum ceph_feature_type {
+ 	CEPHFS_FEATURE_MAX = CEPHFS_FEATURE_METRIC_COLLECT,
+ };
+ 
+-/*
+- * This will always have the highest feature bit value
+- * as the last element of the array.
+- */
+ #define CEPHFS_FEATURES_CLIENT_SUPPORTED {	\
+ 	0, 1, 2, 3, 4, 5, 6, 7,			\
+ 	CEPHFS_FEATURE_MIMIC,			\
+@@ -45,8 +41,6 @@ enum ceph_feature_type {
+ 	CEPHFS_FEATURE_MULTI_RECONNECT,		\
+ 	CEPHFS_FEATURE_DELEG_INO,		\
+ 	CEPHFS_FEATURE_METRIC_COLLECT,		\
+-						\
+-	CEPHFS_FEATURE_MAX,			\
+ }
+ #define CEPHFS_FEATURES_CLIENT_REQUIRED {}
+ 
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index b855abfaaf87b..b6d72e3c5ebad 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1000,9 +1000,7 @@ move_smb2_ea_to_cifs(char *dst, size_t dst_size,
+ 	size_t name_len, value_len, user_name_len;
+ 
+ 	while (src_size > 0) {
+-		name = &src->ea_data[0];
+ 		name_len = (size_t)src->ea_name_length;
+-		value = &src->ea_data[src->ea_name_length + 1];
+ 		value_len = (size_t)le16_to_cpu(src->ea_value_length);
+ 
+ 		if (name_len == 0)
+@@ -1014,6 +1012,9 @@ move_smb2_ea_to_cifs(char *dst, size_t dst_size,
+ 			goto out;
+ 		}
+ 
++		name = &src->ea_data[0];
++		value = &src->ea_data[src->ea_name_length + 1];
++
+ 		if (ea_name) {
+ 			if (ea_name_len == name_len &&
+ 			    memcmp(ea_name, name, name_len) == 0) {
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index afc20d32c9fd6..58b0f1b12095b 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2961,11 +2961,8 @@ bool ext4_empty_dir(struct inode *inode)
+ 		de = (struct ext4_dir_entry_2 *) (bh->b_data +
+ 					(offset & (sb->s_blocksize - 1)));
+ 		if (ext4_check_dir_entry(inode, NULL, de, bh,
+-					 bh->b_data, bh->b_size, offset)) {
+-			offset = (offset | (sb->s_blocksize - 1)) + 1;
+-			continue;
+-		}
+-		if (le32_to_cpu(de->inode)) {
++					 bh->b_data, bh->b_size, offset) ||
++		    le32_to_cpu(de->inode)) {
+ 			brelse(bh);
+ 			return false;
+ 		}
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 5cfea77f33227..f6409ddfd1172 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1957,6 +1957,16 @@ int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count)
+ 	}
+ 	brelse(bh);
+ 
++	/*
++	 * For bigalloc, trim the requested size to the nearest cluster
++	 * boundary to avoid creating an unusable filesystem. We do this
++	 * silently, instead of returning an error, to avoid breaking
++	 * callers that blindly resize the filesystem to the full size of
++	 * the underlying block device.
++	 */
++	if (ext4_has_feature_bigalloc(sb))
++		n_blocks_count &= ~((1 << EXT4_CLUSTER_BITS(sb)) - 1);
++
+ retry:
+ 	o_blocks_count = ext4_blocks_count(es);
+ 
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 5fa10d0b00683..c63274d4b74b0 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1238,7 +1238,11 @@ struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs)
+ 		dec_valid_node_count(sbi, dn->inode, !ofs);
+ 		goto fail;
+ 	}
+-	f2fs_bug_on(sbi, new_ni.blk_addr != NULL_ADDR);
++	if (unlikely(new_ni.blk_addr != NULL_ADDR)) {
++		err = -EFSCORRUPTED;
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++		goto fail;
++	}
+ #endif
+ 	new_ni.nid = dn->nid;
+ 	new_ni.ino = dn->inode->i_ino;
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 20091f4cf84de..19224e7d2ad04 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -4449,6 +4449,12 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ 				return err;
+ 			seg_info_from_raw_sit(se, &sit);
+ 
++			if (se->type >= NR_PERSISTENT_LOG) {
++				f2fs_err(sbi, "Invalid segment type: %u, segno: %u",
++							se->type, start);
++				return -EFSCORRUPTED;
++			}
++
+ 			sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks;
+ 
+ 			/* build discard map only one time */
+@@ -4495,6 +4501,13 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ 			break;
+ 		seg_info_from_raw_sit(se, &sit);
+ 
++		if (se->type >= NR_PERSISTENT_LOG) {
++			f2fs_err(sbi, "Invalid segment type: %u, segno: %u",
++							se->type, start);
++			err = -EFSCORRUPTED;
++			break;
++		}
++
+ 		sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks;
+ 
+ 		if (is_set_ckpt_flags(sbi, CP_TRIMMED_FLAG)) {
+diff --git a/fs/nfs/nfs4idmap.c b/fs/nfs/nfs4idmap.c
+index f331866dd4182..ec6afd3c4bca6 100644
+--- a/fs/nfs/nfs4idmap.c
++++ b/fs/nfs/nfs4idmap.c
+@@ -561,22 +561,20 @@ nfs_idmap_prepare_pipe_upcall(struct idmap *idmap,
+ 	return true;
+ }
+ 
+-static void
+-nfs_idmap_complete_pipe_upcall_locked(struct idmap *idmap, int ret)
++static void nfs_idmap_complete_pipe_upcall(struct idmap_legacy_upcalldata *data,
++					   int ret)
+ {
+-	struct key *authkey = idmap->idmap_upcall_data->authkey;
+-
+-	kfree(idmap->idmap_upcall_data);
+-	idmap->idmap_upcall_data = NULL;
+-	complete_request_key(authkey, ret);
+-	key_put(authkey);
++	complete_request_key(data->authkey, ret);
++	key_put(data->authkey);
++	kfree(data);
+ }
+ 
+-static void
+-nfs_idmap_abort_pipe_upcall(struct idmap *idmap, int ret)
++static void nfs_idmap_abort_pipe_upcall(struct idmap *idmap,
++					struct idmap_legacy_upcalldata *data,
++					int ret)
+ {
+-	if (idmap->idmap_upcall_data != NULL)
+-		nfs_idmap_complete_pipe_upcall_locked(idmap, ret);
++	if (cmpxchg(&idmap->idmap_upcall_data, data, NULL) == data)
++		nfs_idmap_complete_pipe_upcall(data, ret);
+ }
+ 
+ static int nfs_idmap_legacy_upcall(struct key *authkey, void *aux)
+@@ -613,7 +611,7 @@ static int nfs_idmap_legacy_upcall(struct key *authkey, void *aux)
+ 
+ 	ret = rpc_queue_upcall(idmap->idmap_pipe, msg);
+ 	if (ret < 0)
+-		nfs_idmap_abort_pipe_upcall(idmap, ret);
++		nfs_idmap_abort_pipe_upcall(idmap, data, ret);
+ 
+ 	return ret;
+ out2:
+@@ -669,6 +667,7 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+ 	struct request_key_auth *rka;
+ 	struct rpc_inode *rpci = RPC_I(file_inode(filp));
+ 	struct idmap *idmap = (struct idmap *)rpci->private;
++	struct idmap_legacy_upcalldata *data;
+ 	struct key *authkey;
+ 	struct idmap_msg im;
+ 	size_t namelen_in;
+@@ -678,10 +677,11 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+ 	 * will have been woken up and someone else may now have used
+ 	 * idmap_key_cons - so after this point we may no longer touch it.
+ 	 */
+-	if (idmap->idmap_upcall_data == NULL)
++	data = xchg(&idmap->idmap_upcall_data, NULL);
++	if (data == NULL)
+ 		goto out_noupcall;
+ 
+-	authkey = idmap->idmap_upcall_data->authkey;
++	authkey = data->authkey;
+ 	rka = get_request_key_auth(authkey);
+ 
+ 	if (mlen != sizeof(im)) {
+@@ -703,18 +703,17 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+ 	if (namelen_in == 0 || namelen_in == IDMAP_NAMESZ) {
+ 		ret = -EINVAL;
+ 		goto out;
+-}
++	}
+ 
+-	ret = nfs_idmap_read_and_verify_message(&im,
+-			&idmap->idmap_upcall_data->idmap_msg,
+-			rka->target_key, authkey);
++	ret = nfs_idmap_read_and_verify_message(&im, &data->idmap_msg,
++						rka->target_key, authkey);
+ 	if (ret >= 0) {
+ 		key_set_timeout(rka->target_key, nfs_idmap_cache_timeout);
+ 		ret = mlen;
+ 	}
+ 
+ out:
+-	nfs_idmap_complete_pipe_upcall_locked(idmap, ret);
++	nfs_idmap_complete_pipe_upcall(data, ret);
+ out_noupcall:
+ 	return ret;
+ }
+@@ -728,7 +727,7 @@ idmap_pipe_destroy_msg(struct rpc_pipe_msg *msg)
+ 	struct idmap *idmap = data->idmap;
+ 
+ 	if (msg->errno)
+-		nfs_idmap_abort_pipe_upcall(idmap, msg->errno);
++		nfs_idmap_abort_pipe_upcall(idmap, data, msg->errno);
+ }
+ 
+ static void
+@@ -736,8 +735,11 @@ idmap_release_pipe(struct inode *inode)
+ {
+ 	struct rpc_inode *rpci = RPC_I(inode);
+ 	struct idmap *idmap = (struct idmap *)rpci->private;
++	struct idmap_legacy_upcalldata *data;
+ 
+-	nfs_idmap_abort_pipe_upcall(idmap, -EPIPE);
++	data = xchg(&idmap->idmap_upcall_data, NULL);
++	if (data)
++		nfs_idmap_complete_pipe_upcall(data, -EPIPE);
+ }
+ 
+ int nfs_map_name_to_uid(const struct nfs_server *server, const char *name, size_t namelen, kuid_t *uid)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index b22da4e3165b4..03f09399abf4f 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -790,10 +790,9 @@ static void nfs4_slot_sequence_record_sent(struct nfs4_slot *slot,
+ 	if ((s32)(seqnr - slot->seq_nr_highest_sent) > 0)
+ 		slot->seq_nr_highest_sent = seqnr;
+ }
+-static void nfs4_slot_sequence_acked(struct nfs4_slot *slot,
+-		u32 seqnr)
++static void nfs4_slot_sequence_acked(struct nfs4_slot *slot, u32 seqnr)
+ {
+-	slot->seq_nr_highest_sent = seqnr;
++	nfs4_slot_sequence_record_sent(slot, seqnr);
+ 	slot->seq_nr_last_acked = seqnr;
+ }
+ 
+@@ -860,7 +859,6 @@ static int nfs41_sequence_process(struct rpc_task *task,
+ 			__func__,
+ 			slot->slot_nr,
+ 			slot->seq_nr);
+-		nfs4_slot_sequence_acked(slot, slot->seq_nr);
+ 		goto out_retry;
+ 	case -NFS4ERR_RETRY_UNCACHED_REP:
+ 	case -NFS4ERR_SEQ_FALSE_RETRY:
+@@ -3086,12 +3084,13 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ 	}
+ 
+ out:
+-	if (opendata->lgp) {
+-		nfs4_lgopen_release(opendata->lgp);
+-		opendata->lgp = NULL;
+-	}
+-	if (!opendata->cancelled)
++	if (!opendata->cancelled) {
++		if (opendata->lgp) {
++			nfs4_lgopen_release(opendata->lgp);
++			opendata->lgp = NULL;
++		}
+ 		nfs4_sequence_free_slot(&opendata->o_res.seq_res);
++	}
+ 	return ret;
+ }
+ 
+@@ -9275,6 +9274,9 @@ static int nfs41_reclaim_complete_handle_errors(struct rpc_task *task, struct nf
+ 		rpc_delay(task, NFS4_POLL_RETRY_MAX);
+ 		fallthrough;
+ 	case -NFS4ERR_RETRY_UNCACHED_REP:
++	case -EACCES:
++		dprintk("%s: failed to reclaim complete error %d for server %s, retrying\n",
++			__func__, task->tk_status, clp->cl_hostname);
+ 		return -EAGAIN;
+ 	case -NFS4ERR_BADSESSION:
+ 	case -NFS4ERR_DEADSESSION:
+diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
+index 0e7316a86240b..21aa26e7c9882 100644
+--- a/include/asm-generic/bitops/atomic.h
++++ b/include/asm-generic/bitops/atomic.h
+@@ -35,9 +35,6 @@ static inline int test_and_set_bit(unsigned int nr, volatile unsigned long *p)
+ 	unsigned long mask = BIT_MASK(nr);
+ 
+ 	p += BIT_WORD(nr);
+-	if (READ_ONCE(*p) & mask)
+-		return 1;
+-
+ 	old = atomic_long_fetch_or(mask, (atomic_long_t *)p);
+ 	return !!(old & mask);
+ }
+@@ -48,9 +45,6 @@ static inline int test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
+ 	unsigned long mask = BIT_MASK(nr);
+ 
+ 	p += BIT_WORD(nr);
+-	if (!(READ_ONCE(*p) & mask))
+-		return 0;
+-
+ 	old = atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
+ 	return !!(old & mask);
+ }
+diff --git a/include/linux/netfilter/nfnetlink.h b/include/linux/netfilter/nfnetlink.h
+index f6267e2883f26..791d516e1e880 100644
+--- a/include/linux/netfilter/nfnetlink.h
++++ b/include/linux/netfilter/nfnetlink.h
+@@ -57,6 +57,33 @@ static inline u16 nfnl_msg_type(u8 subsys, u8 msg_type)
+ 	return subsys << 8 | msg_type;
+ }
+ 
++static inline void nfnl_fill_hdr(struct nlmsghdr *nlh, u8 family, u8 version,
++				 __be16 res_id)
++{
++	struct nfgenmsg *nfmsg;
++
++	nfmsg = nlmsg_data(nlh);
++	nfmsg->nfgen_family = family;
++	nfmsg->version = version;
++	nfmsg->res_id = res_id;
++}
++
++static inline struct nlmsghdr *nfnl_msg_put(struct sk_buff *skb, u32 portid,
++					    u32 seq, int type, int flags,
++					    u8 family, u8 version,
++					    __be16 res_id)
++{
++	struct nlmsghdr *nlh;
++
++	nlh = nlmsg_put(skb, portid, seq, type, sizeof(struct nfgenmsg), flags);
++	if (!nlh)
++		return NULL;
++
++	nfnl_fill_hdr(nlh, family, version, res_id);
++
++	return nlh;
++}
++
+ void nfnl_lock(__u8 subsys_id);
+ void nfnl_unlock(__u8 subsys_id);
+ #ifdef CONFIG_PROVE_LOCKING
+diff --git a/include/linux/nmi.h b/include/linux/nmi.h
+index 750c7f395ca90..f700ff2df074e 100644
+--- a/include/linux/nmi.h
++++ b/include/linux/nmi.h
+@@ -122,6 +122,8 @@ int watchdog_nmi_probe(void);
+ int watchdog_nmi_enable(unsigned int cpu);
+ void watchdog_nmi_disable(unsigned int cpu);
+ 
++void lockup_detector_reconfigure(void);
++
+ /**
+  * touch_nmi_watchdog - restart NMI watchdog timeout.
+  *
+diff --git a/include/linux/uacce.h b/include/linux/uacce.h
+index 48e319f402751..9ce88c28b0a87 100644
+--- a/include/linux/uacce.h
++++ b/include/linux/uacce.h
+@@ -70,6 +70,7 @@ enum uacce_q_state {
+  * @wait: wait queue head
+  * @list: index into uacce queues list
+  * @qfrs: pointer of qfr regions
++ * @mutex: protects queue state
+  * @state: queue state machine
+  * @pasid: pasid associated to the mm
+  * @handle: iommu_sva handle returned by iommu_sva_bind_device()
+@@ -80,6 +81,7 @@ struct uacce_queue {
+ 	wait_queue_head_t wait;
+ 	struct list_head list;
+ 	struct uacce_qfile_region *qfrs[UACCE_MAX_REGION];
++	struct mutex mutex;
+ 	enum uacce_q_state state;
+ 	u32 pasid;
+ 	struct iommu_sva *handle;
+@@ -97,9 +99,9 @@ struct uacce_queue {
+  * @dev_id: id of the uacce device
+  * @cdev: cdev of the uacce
+  * @dev: dev of the uacce
++ * @mutex: protects uacce operation
+  * @priv: private pointer of the uacce
+  * @queues: list of queues
+- * @queues_lock: lock for queues list
+  * @inode: core vfs
+  */
+ struct uacce_device {
+@@ -113,9 +115,9 @@ struct uacce_device {
+ 	u32 dev_id;
+ 	struct cdev *cdev;
+ 	struct device dev;
++	struct mutex mutex;
+ 	void *priv;
+ 	struct list_head queues;
+-	struct mutex queues_lock;
+ 	struct inode *inode;
+ };
+ 
+diff --git a/include/sound/control.h b/include/sound/control.h
+index 77d9fa10812d0..41bd72ffd2322 100644
+--- a/include/sound/control.h
++++ b/include/sound/control.h
+@@ -103,7 +103,7 @@ struct snd_ctl_file {
+ 	int preferred_subdevice[SND_CTL_SUBDEV_ITEMS];
+ 	wait_queue_head_t change_sleep;
+ 	spinlock_t read_lock;
+-	struct fasync_struct *fasync;
++	struct snd_fasync *fasync;
+ 	int subscribed;			/* read interface is activated */
+ 	struct list_head events;	/* waiting events for read */
+ };
+diff --git a/include/sound/core.h b/include/sound/core.h
+index 0462c577d7a3f..85610ede9ea01 100644
+--- a/include/sound/core.h
++++ b/include/sound/core.h
+@@ -446,4 +446,12 @@ snd_pci_quirk_lookup_id(u16 vendor, u16 device,
+ }
+ #endif
+ 
++/* async signal helpers */
++struct snd_fasync;
++
++int snd_fasync_helper(int fd, struct file *file, int on,
++		      struct snd_fasync **fasyncp);
++void snd_kill_fasync(struct snd_fasync *fasync, int signal, int poll);
++void snd_fasync_free(struct snd_fasync *fasync);
++
+ #endif /* __SOUND_CORE_H */
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index 36c68dcea2369..f241bda2679d4 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -616,6 +616,11 @@ static int bpf_iter_init_array_map(void *priv_data,
+ 		seq_info->percpu_value_buf = value_buf;
+ 	}
+ 
++	/* bpf_iter_attach_map() acquires a map uref, and the uref may be
++	 * released before or in the middle of iterating map elements, so
++	 * acquire an extra map uref for iterator.
++	 */
++	bpf_map_inc_with_uref(map);
+ 	seq_info->map = map;
+ 	return 0;
+ }
+@@ -624,6 +629,7 @@ static void bpf_iter_fini_array_map(void *priv_data)
+ {
+ 	struct bpf_iter_seq_array_map_info *seq_info = priv_data;
+ 
++	bpf_map_put_with_uref(seq_info->map);
+ 	kfree(seq_info->percpu_value_buf);
+ }
+ 
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 6aa9e10c6335a..d154e52dd7ae0 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -653,6 +653,60 @@ static struct bpf_prog_list *find_detach_entry(struct list_head *progs,
+ 	return ERR_PTR(-ENOENT);
+ }
+ 
++/**
++ * purge_effective_progs() - After compute_effective_progs fails to alloc new
++ *			     cgrp->bpf.inactive table we can recover by
++ *			     recomputing the array in place.
++ *
++ * @cgrp: The cgroup which descendants to travers
++ * @prog: A program to detach or NULL
++ * @link: A link to detach or NULL
++ * @type: Type of detach operation
++ */
++static void purge_effective_progs(struct cgroup *cgrp, struct bpf_prog *prog,
++				  struct bpf_cgroup_link *link,
++				  enum bpf_attach_type type)
++{
++	struct cgroup_subsys_state *css;
++	struct bpf_prog_array *progs;
++	struct bpf_prog_list *pl;
++	struct list_head *head;
++	struct cgroup *cg;
++	int pos;
++
++	/* recompute effective prog array in place */
++	css_for_each_descendant_pre(css, &cgrp->self) {
++		struct cgroup *desc = container_of(css, struct cgroup, self);
++
++		if (percpu_ref_is_zero(&desc->bpf.refcnt))
++			continue;
++
++		/* find position of link or prog in effective progs array */
++		for (pos = 0, cg = desc; cg; cg = cgroup_parent(cg)) {
++			if (pos && !(cg->bpf.flags[type] & BPF_F_ALLOW_MULTI))
++				continue;
++
++			head = &cg->bpf.progs[type];
++			list_for_each_entry(pl, head, node) {
++				if (!prog_list_prog(pl))
++					continue;
++				if (pl->prog == prog && pl->link == link)
++					goto found;
++				pos++;
++			}
++		}
++found:
++		BUG_ON(!cg);
++		progs = rcu_dereference_protected(
++				desc->bpf.effective[type],
++				lockdep_is_held(&cgroup_mutex));
++
++		/* Remove the program from the array */
++		WARN_ONCE(bpf_prog_array_delete_safe_at(progs, pos),
++			  "Failed to purge a prog from array at index %d", pos);
++	}
++}
++
+ /**
+  * __cgroup_bpf_detach() - Detach the program or link from a cgroup, and
+  *                         propagate the change to descendants
+@@ -671,7 +725,6 @@ int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
+ 	u32 flags = cgrp->bpf.flags[type];
+ 	struct bpf_prog_list *pl;
+ 	struct bpf_prog *old_prog;
+-	int err;
+ 
+ 	if (prog && link)
+ 		/* only one of prog or link can be specified */
+@@ -686,9 +739,12 @@ int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
+ 	pl->prog = NULL;
+ 	pl->link = NULL;
+ 
+-	err = update_effective_progs(cgrp, type);
+-	if (err)
+-		goto cleanup;
++	if (update_effective_progs(cgrp, type)) {
++		/* if update effective array failed replace the prog with a dummy prog*/
++		pl->prog = old_prog;
++		pl->link = link;
++		purge_effective_progs(cgrp, old_prog, link, type);
++	}
+ 
+ 	/* now can actually delete it from this cgroup list */
+ 	list_del(&pl->node);
+@@ -700,12 +756,6 @@ int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
+ 		bpf_prog_put(old_prog);
+ 	static_branch_dec(&cgroup_bpf_enabled_key);
+ 	return 0;
+-
+-cleanup:
+-	/* restore back prog or link */
+-	pl->prog = old_prog;
+-	pl->link = link;
+-	return err;
+ }
+ 
+ /* Must be called with cgroup_mutex held to avoid races. */
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 6c444e815406b..0ce445aadfdfb 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -1801,6 +1801,7 @@ static int bpf_iter_init_hash_map(void *priv_data,
+ 		seq_info->percpu_value_buf = value_buf;
+ 	}
+ 
++	bpf_map_inc_with_uref(map);
+ 	seq_info->map = map;
+ 	seq_info->htab = container_of(map, struct bpf_htab, map);
+ 	return 0;
+@@ -1810,6 +1811,7 @@ static void bpf_iter_fini_hash_map(void *priv_data)
+ {
+ 	struct bpf_iter_seq_hash_map_info *seq_info = priv_data;
+ 
++	bpf_map_put_with_uref(seq_info->map);
+ 	kfree(seq_info->percpu_value_buf);
+ }
+ 
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 7cc5f0a77c3cc..826ecf01e380c 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -168,6 +168,7 @@ static int trace_define_generic_fields(void)
+ 
+ 	__generic_field(int, CPU, FILTER_CPU);
+ 	__generic_field(int, cpu, FILTER_CPU);
++	__generic_field(int, common_cpu, FILTER_CPU);
+ 	__generic_field(char *, COMM, FILTER_COMM);
+ 	__generic_field(char *, comm, FILTER_COMM);
+ 
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 1d31bc4acf7a5..073abbe3866b4 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -300,7 +300,7 @@ static int parse_probe_vars(char *arg, const struct fetch_type *t,
+ 			}
+ 		} else
+ 			goto inval_var;
+-	} else if (strcmp(arg, "comm") == 0) {
++	} else if (strcmp(arg, "comm") == 0 || strcmp(arg, "COMM") == 0) {
+ 		code->op = FETCH_OP_COMM;
+ #ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API
+ 	} else if (((flags & TPARG_FL_MASK) ==
+@@ -595,7 +595,8 @@ static int traceprobe_parse_probe_arg_body(char *arg, ssize_t *size,
+ 	 * Since $comm and immediate string can not be dereferred,
+ 	 * we can find those by strcmp.
+ 	 */
+-	if (strcmp(arg, "$comm") == 0 || strncmp(arg, "\\\"", 2) == 0) {
++	if (strcmp(arg, "$comm") == 0 || strcmp(arg, "$COMM") == 0 ||
++	    strncmp(arg, "\\\"", 2) == 0) {
+ 		/* The type of $comm must be "string", and not an array. */
+ 		if (parg->count || (t && strcmp(t, "string")))
+ 			return -EINVAL;
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 01bf977090dc2..ec34d9f2eab2d 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -518,7 +518,7 @@ int lockup_detector_offline_cpu(unsigned int cpu)
+ 	return 0;
+ }
+ 
+-static void lockup_detector_reconfigure(void)
++static void __lockup_detector_reconfigure(void)
+ {
+ 	cpus_read_lock();
+ 	watchdog_nmi_stop();
+@@ -538,6 +538,13 @@ static void lockup_detector_reconfigure(void)
+ 	__lockup_detector_cleanup();
+ }
+ 
++void lockup_detector_reconfigure(void)
++{
++	mutex_lock(&watchdog_mutex);
++	__lockup_detector_reconfigure();
++	mutex_unlock(&watchdog_mutex);
++}
++
+ /*
+  * Create the watchdog thread infrastructure and configure the detector(s).
+  *
+@@ -558,13 +565,13 @@ static __init void lockup_detector_setup(void)
+ 		return;
+ 
+ 	mutex_lock(&watchdog_mutex);
+-	lockup_detector_reconfigure();
++	__lockup_detector_reconfigure();
+ 	softlockup_initialized = true;
+ 	mutex_unlock(&watchdog_mutex);
+ }
+ 
+ #else /* CONFIG_SOFTLOCKUP_DETECTOR */
+-static void lockup_detector_reconfigure(void)
++static void __lockup_detector_reconfigure(void)
+ {
+ 	cpus_read_lock();
+ 	watchdog_nmi_stop();
+@@ -572,9 +579,13 @@ static void lockup_detector_reconfigure(void)
+ 	watchdog_nmi_start();
+ 	cpus_read_unlock();
+ }
++void lockup_detector_reconfigure(void)
++{
++	__lockup_detector_reconfigure();
++}
+ static inline void lockup_detector_setup(void)
+ {
+-	lockup_detector_reconfigure();
++	__lockup_detector_reconfigure();
+ }
+ #endif /* !CONFIG_SOFTLOCKUP_DETECTOR */
+ 
+@@ -614,7 +625,7 @@ static void proc_watchdog_update(void)
+ {
+ 	/* Remove impossible cpus to keep sysctl output clean. */
+ 	cpumask_and(&watchdog_cpumask, &watchdog_cpumask, cpu_possible_mask);
+-	lockup_detector_reconfigure();
++	__lockup_detector_reconfigure();
+ }
+ 
+ /*
+diff --git a/lib/list_debug.c b/lib/list_debug.c
+index 5d5424b51b746..413daa72a3d83 100644
+--- a/lib/list_debug.c
++++ b/lib/list_debug.c
+@@ -20,7 +20,11 @@
+ bool __list_add_valid(struct list_head *new, struct list_head *prev,
+ 		      struct list_head *next)
+ {
+-	if (CHECK_DATA_CORRUPTION(next->prev != prev,
++	if (CHECK_DATA_CORRUPTION(prev == NULL,
++			"list_add corruption. prev is NULL.\n") ||
++	    CHECK_DATA_CORRUPTION(next == NULL,
++			"list_add corruption. next is NULL.\n") ||
++	    CHECK_DATA_CORRUPTION(next->prev != prev,
+ 			"list_add corruption. next->prev should be prev (%px), but was %px. (next=%px).\n",
+ 			prev, next->prev, next) ||
+ 	    CHECK_DATA_CORRUPTION(prev->next != next,
+@@ -42,7 +46,11 @@ bool __list_del_entry_valid(struct list_head *entry)
+ 	prev = entry->prev;
+ 	next = entry->next;
+ 
+-	if (CHECK_DATA_CORRUPTION(next == LIST_POISON1,
++	if (CHECK_DATA_CORRUPTION(next == NULL,
++			"list_del corruption, %px->next is NULL\n", entry) ||
++	    CHECK_DATA_CORRUPTION(prev == NULL,
++			"list_del corruption, %px->prev is NULL\n", entry) ||
++	    CHECK_DATA_CORRUPTION(next == LIST_POISON1,
+ 			"list_del corruption, %px->next is LIST_POISON1 (%px)\n",
+ 			entry, LIST_POISON1) ||
+ 	    CHECK_DATA_CORRUPTION(prev == LIST_POISON2,
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index e1a399821238f..709141abd1318 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -178,7 +178,10 @@ activate_next:
+ 	if (!first)
+ 		return;
+ 
+-	if (WARN_ON_ONCE(j1939_session_activate(first))) {
++	if (j1939_session_activate(first)) {
++		netdev_warn_once(first->priv->ndev,
++				 "%s: 0x%p: Identical session is already activated.\n",
++				 __func__, first);
+ 		first->err = -EBUSY;
+ 		goto activate_next;
+ 	} else {
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 9c39b0f5d6e07..2830a12a4dd1b 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -260,6 +260,8 @@ static void __j1939_session_drop(struct j1939_session *session)
+ 
+ static void j1939_session_destroy(struct j1939_session *session)
+ {
++	struct sk_buff *skb;
++
+ 	if (session->err)
+ 		j1939_sk_errqueue(session, J1939_ERRQUEUE_ABORT);
+ 	else
+@@ -270,7 +272,11 @@ static void j1939_session_destroy(struct j1939_session *session)
+ 	WARN_ON_ONCE(!list_empty(&session->sk_session_queue_entry));
+ 	WARN_ON_ONCE(!list_empty(&session->active_session_list_entry));
+ 
+-	skb_queue_purge(&session->skb_queue);
++	while ((skb = skb_dequeue(&session->skb_queue)) != NULL) {
++		/* drop ref taken in j1939_session_skb_queue() */
++		skb_unref(skb);
++		kfree_skb(skb);
++	}
+ 	__j1939_session_drop(session);
+ 	j1939_priv_put(session->priv);
+ 	kfree(session);
+diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
+index c907f0dc7f87a..5f773624948ff 100644
+--- a/net/core/bpf_sk_storage.c
++++ b/net/core/bpf_sk_storage.c
+@@ -794,10 +794,18 @@ static int bpf_iter_init_sk_storage_map(void *priv_data,
+ {
+ 	struct bpf_iter_seq_sk_storage_map_info *seq_info = priv_data;
+ 
++	bpf_map_inc_with_uref(aux->map);
+ 	seq_info->map = aux->map;
+ 	return 0;
+ }
+ 
++static void bpf_iter_fini_sk_storage_map(void *priv_data)
++{
++	struct bpf_iter_seq_sk_storage_map_info *seq_info = priv_data;
++
++	bpf_map_put_with_uref(seq_info->map);
++}
++
+ static int bpf_iter_attach_map(struct bpf_prog *prog,
+ 			       union bpf_iter_link_info *linfo,
+ 			       struct bpf_iter_aux_info *aux)
+@@ -815,7 +823,7 @@ static int bpf_iter_attach_map(struct bpf_prog *prog,
+ 	if (map->map_type != BPF_MAP_TYPE_SK_STORAGE)
+ 		goto put_map;
+ 
+-	if (prog->aux->max_rdonly_access > map->value_size) {
++	if (prog->aux->max_rdwr_access > map->value_size) {
+ 		err = -EACCES;
+ 		goto put_map;
+ 	}
+@@ -843,7 +851,7 @@ static const struct seq_operations bpf_sk_storage_map_seq_ops = {
+ static const struct bpf_iter_seq_info iter_seq_info = {
+ 	.seq_ops		= &bpf_sk_storage_map_seq_ops,
+ 	.init_seq_private	= bpf_iter_init_sk_storage_map,
+-	.fini_seq_private	= NULL,
++	.fini_seq_private	= bpf_iter_fini_sk_storage_map,
+ 	.seq_priv_size		= sizeof(struct bpf_iter_seq_sk_storage_map_info),
+ };
+ 
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 646d90f63dafc..72047750dcd96 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -3620,7 +3620,7 @@ static int devlink_param_get(struct devlink *devlink,
+ 			     const struct devlink_param *param,
+ 			     struct devlink_param_gset_ctx *ctx)
+ {
+-	if (!param->get)
++	if (!param->get || devlink->reload_failed)
+ 		return -EOPNOTSUPP;
+ 	return param->get(devlink, param->id, ctx);
+ }
+@@ -3629,7 +3629,7 @@ static int devlink_param_set(struct devlink *devlink,
+ 			     const struct devlink_param *param,
+ 			     struct devlink_param_gset_ctx *ctx)
+ {
+-	if (!param->set)
++	if (!param->set || devlink->reload_failed)
+ 		return -EOPNOTSUPP;
+ 	return param->set(devlink, param->id, ctx);
+ }
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 4ea5bc65848f2..cbf4184fabc98 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -815,13 +815,22 @@ static int sock_map_init_seq_private(void *priv_data,
+ {
+ 	struct sock_map_seq_info *info = priv_data;
+ 
++	bpf_map_inc_with_uref(aux->map);
+ 	info->map = aux->map;
+ 	return 0;
+ }
+ 
++static void sock_map_fini_seq_private(void *priv_data)
++{
++	struct sock_map_seq_info *info = priv_data;
++
++	bpf_map_put_with_uref(info->map);
++}
++
+ static const struct bpf_iter_seq_info sock_map_iter_seq_info = {
+ 	.seq_ops		= &sock_map_seq_ops,
+ 	.init_seq_private	= sock_map_init_seq_private,
++	.fini_seq_private	= sock_map_fini_seq_private,
+ 	.seq_priv_size		= sizeof(struct sock_map_seq_info),
+ };
+ 
+@@ -1422,18 +1431,27 @@ static const struct seq_operations sock_hash_seq_ops = {
+ };
+ 
+ static int sock_hash_init_seq_private(void *priv_data,
+-				     struct bpf_iter_aux_info *aux)
++				      struct bpf_iter_aux_info *aux)
+ {
+ 	struct sock_hash_seq_info *info = priv_data;
+ 
++	bpf_map_inc_with_uref(aux->map);
+ 	info->map = aux->map;
+ 	info->htab = container_of(aux->map, struct bpf_shtab, map);
+ 	return 0;
+ }
+ 
++static void sock_hash_fini_seq_private(void *priv_data)
++{
++	struct sock_hash_seq_info *info = priv_data;
++
++	bpf_map_put_with_uref(info->map);
++}
++
+ static const struct bpf_iter_seq_info sock_hash_iter_seq_info = {
+ 	.seq_ops		= &sock_hash_seq_ops,
+ 	.init_seq_private	= sock_hash_init_seq_private,
++	.fini_seq_private	= sock_hash_fini_seq_private,
+ 	.seq_priv_size		= sizeof(struct sock_hash_seq_info),
+ };
+ 
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 05e19e5d65140..fadad8e83521d 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1313,8 +1313,7 @@ struct dst_entry *ip6_dst_lookup_tunnel(struct sk_buff *skb,
+ 	fl6.daddr = info->key.u.ipv6.dst;
+ 	fl6.saddr = info->key.u.ipv6.src;
+ 	prio = info->key.tos;
+-	fl6.flowlabel = ip6_make_flowinfo(RT_TOS(prio),
+-					  info->key.label);
++	fl6.flowlabel = ip6_make_flowinfo(prio, info->key.label);
+ 
+ 	dst = ipv6_stub->ipv6_dst_lookup_flow(net, sock->sk, &fl6,
+ 					      NULL);
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index 2b19189a930fd..c17a7dda0163f 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -963,20 +963,9 @@ static struct nlmsghdr *
+ start_msg(struct sk_buff *skb, u32 portid, u32 seq, unsigned int flags,
+ 	  enum ipset_cmd cmd)
+ {
+-	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+-
+-	nlh = nlmsg_put(skb, portid, seq, nfnl_msg_type(NFNL_SUBSYS_IPSET, cmd),
+-			sizeof(*nfmsg), flags);
+-	if (!nlh)
+-		return NULL;
+-
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = NFPROTO_IPV4;
+-	nfmsg->version = NFNETLINK_V0;
+-	nfmsg->res_id = 0;
+-
+-	return nlh;
++	return nfnl_msg_put(skb, portid, seq,
++			    nfnl_msg_type(NFNL_SUBSYS_IPSET, cmd), flags,
++			    NFPROTO_IPV4, NFNETLINK_V0, 0);
+ }
+ 
+ /* Create a set */
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index eeeaa34b3e7b5..9e6898164199b 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -553,22 +553,17 @@ ctnetlink_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
+ {
+ 	const struct nf_conntrack_zone *zone;
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	struct nlattr *nest_parms;
+ 	unsigned int event;
+ 
+ 	if (portid)
+ 		flags |= NLM_F_MULTI;
+ 	event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK, IPCTNL_MSG_CT_NEW);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, nf_ct_l3num(ct),
++			   NFNETLINK_V0, 0);
++	if (!nlh)
+ 		goto nlmsg_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = nf_ct_l3num(ct);
+-	nfmsg->version      = NFNETLINK_V0;
+-	nfmsg->res_id	    = 0;
+-
+ 	zone = nf_ct_zone(ct);
+ 
+ 	nest_parms = nla_nest_start(skb, CTA_TUPLE_ORIG);
+@@ -711,7 +706,6 @@ ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item)
+ 	const struct nf_conntrack_zone *zone;
+ 	struct net *net;
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	struct nlattr *nest_parms;
+ 	struct nf_conn *ct = item->ct;
+ 	struct sk_buff *skb;
+@@ -741,15 +735,11 @@ ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item)
+ 		goto errout;
+ 
+ 	type = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK, type);
+-	nlh = nlmsg_put(skb, item->portid, 0, type, sizeof(*nfmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, item->portid, 0, type, flags, nf_ct_l3num(ct),
++			   NFNETLINK_V0, 0);
++	if (!nlh)
+ 		goto nlmsg_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = nf_ct_l3num(ct);
+-	nfmsg->version	= NFNETLINK_V0;
+-	nfmsg->res_id	= 0;
+-
+ 	zone = nf_ct_zone(ct);
+ 
+ 	nest_parms = nla_nest_start(skb, CTA_TUPLE_ORIG);
+@@ -2483,20 +2473,15 @@ ctnetlink_ct_stat_cpu_fill_info(struct sk_buff *skb, u32 portid, u32 seq,
+ 				__u16 cpu, const struct ip_conntrack_stat *st)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	unsigned int flags = portid ? NLM_F_MULTI : 0, event;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK,
+ 			      IPCTNL_MSG_CT_GET_STATS_CPU);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
++			   NFNETLINK_V0, htons(cpu));
++	if (!nlh)
+ 		goto nlmsg_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = AF_UNSPEC;
+-	nfmsg->version      = NFNETLINK_V0;
+-	nfmsg->res_id	    = htons(cpu);
+-
+ 	if (nla_put_be32(skb, CTA_STATS_FOUND, htonl(st->found)) ||
+ 	    nla_put_be32(skb, CTA_STATS_INVALID, htonl(st->invalid)) ||
+ 	    nla_put_be32(skb, CTA_STATS_INSERT, htonl(st->insert)) ||
+@@ -2568,20 +2553,15 @@ ctnetlink_stat_ct_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
+ 			    struct net *net)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	unsigned int flags = portid ? NLM_F_MULTI : 0, event;
+ 	unsigned int nr_conntracks = atomic_read(&net->ct.count);
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK, IPCTNL_MSG_CT_GET_STATS);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
++			   NFNETLINK_V0, 0);
++	if (!nlh)
+ 		goto nlmsg_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = AF_UNSPEC;
+-	nfmsg->version      = NFNETLINK_V0;
+-	nfmsg->res_id	    = 0;
+-
+ 	if (nla_put_be32(skb, CTA_STATS_GLOBAL_ENTRIES, htonl(nr_conntracks)))
+ 		goto nla_put_failure;
+ 
+@@ -3085,19 +3065,14 @@ ctnetlink_exp_fill_info(struct sk_buff *skb, u32 portid, u32 seq,
+ 			int event, const struct nf_conntrack_expect *exp)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	unsigned int flags = portid ? NLM_F_MULTI : 0;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_EXP, event);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags,
++			   exp->tuple.src.l3num, NFNETLINK_V0, 0);
++	if (!nlh)
+ 		goto nlmsg_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = exp->tuple.src.l3num;
+-	nfmsg->version	    = NFNETLINK_V0;
+-	nfmsg->res_id	    = 0;
+-
+ 	if (ctnetlink_exp_dump_expect(skb, exp) < 0)
+ 		goto nla_put_failure;
+ 
+@@ -3117,7 +3092,6 @@ ctnetlink_expect_event(unsigned int events, struct nf_exp_event *item)
+ 	struct nf_conntrack_expect *exp = item->exp;
+ 	struct net *net = nf_ct_exp_net(exp);
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	struct sk_buff *skb;
+ 	unsigned int type, group;
+ 	int flags = 0;
+@@ -3140,15 +3114,11 @@ ctnetlink_expect_event(unsigned int events, struct nf_exp_event *item)
+ 		goto errout;
+ 
+ 	type = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_EXP, type);
+-	nlh = nlmsg_put(skb, item->portid, 0, type, sizeof(*nfmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, item->portid, 0, type, flags,
++			   exp->tuple.src.l3num, NFNETLINK_V0, 0);
++	if (!nlh)
+ 		goto nlmsg_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = exp->tuple.src.l3num;
+-	nfmsg->version	    = NFNETLINK_V0;
+-	nfmsg->res_id	    = 0;
+-
+ 	if (ctnetlink_exp_dump_expect(skb, exp) < 0)
+ 		goto nla_put_failure;
+ 
+@@ -3716,20 +3686,15 @@ ctnetlink_exp_stat_fill_info(struct sk_buff *skb, u32 portid, u32 seq, int cpu,
+ 			     const struct ip_conntrack_stat *st)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	unsigned int flags = portid ? NLM_F_MULTI : 0, event;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK,
+ 			      IPCTNL_MSG_EXP_GET_STATS_CPU);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
++			   NFNETLINK_V0, htons(cpu));
++	if (!nlh)
+ 		goto nlmsg_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = AF_UNSPEC;
+-	nfmsg->version      = NFNETLINK_V0;
+-	nfmsg->res_id	    = htons(cpu);
+-
+ 	if (nla_put_be32(skb, CTA_STATS_EXP_NEW, htonl(st->expect_new)) ||
+ 	    nla_put_be32(skb, CTA_STATS_EXP_CREATE, htonl(st->expect_create)) ||
+ 	    nla_put_be32(skb, CTA_STATS_EXP_DELETE, htonl(st->expect_delete)))
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 2ba48f4e2d7da..30bd4b867912c 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -66,6 +66,41 @@ static const struct rhashtable_params nft_objname_ht_params = {
+ 	.automatic_shrinking	= true,
+ };
+ 
++struct nft_audit_data {
++	struct nft_table *table;
++	int entries;
++	int op;
++	struct list_head list;
++};
++
++static const u8 nft2audit_op[NFT_MSG_MAX] = { // enum nf_tables_msg_types
++	[NFT_MSG_NEWTABLE]	= AUDIT_NFT_OP_TABLE_REGISTER,
++	[NFT_MSG_GETTABLE]	= AUDIT_NFT_OP_INVALID,
++	[NFT_MSG_DELTABLE]	= AUDIT_NFT_OP_TABLE_UNREGISTER,
++	[NFT_MSG_NEWCHAIN]	= AUDIT_NFT_OP_CHAIN_REGISTER,
++	[NFT_MSG_GETCHAIN]	= AUDIT_NFT_OP_INVALID,
++	[NFT_MSG_DELCHAIN]	= AUDIT_NFT_OP_CHAIN_UNREGISTER,
++	[NFT_MSG_NEWRULE]	= AUDIT_NFT_OP_RULE_REGISTER,
++	[NFT_MSG_GETRULE]	= AUDIT_NFT_OP_INVALID,
++	[NFT_MSG_DELRULE]	= AUDIT_NFT_OP_RULE_UNREGISTER,
++	[NFT_MSG_NEWSET]	= AUDIT_NFT_OP_SET_REGISTER,
++	[NFT_MSG_GETSET]	= AUDIT_NFT_OP_INVALID,
++	[NFT_MSG_DELSET]	= AUDIT_NFT_OP_SET_UNREGISTER,
++	[NFT_MSG_NEWSETELEM]	= AUDIT_NFT_OP_SETELEM_REGISTER,
++	[NFT_MSG_GETSETELEM]	= AUDIT_NFT_OP_INVALID,
++	[NFT_MSG_DELSETELEM]	= AUDIT_NFT_OP_SETELEM_UNREGISTER,
++	[NFT_MSG_NEWGEN]	= AUDIT_NFT_OP_GEN_REGISTER,
++	[NFT_MSG_GETGEN]	= AUDIT_NFT_OP_INVALID,
++	[NFT_MSG_TRACE]		= AUDIT_NFT_OP_INVALID,
++	[NFT_MSG_NEWOBJ]	= AUDIT_NFT_OP_OBJ_REGISTER,
++	[NFT_MSG_GETOBJ]	= AUDIT_NFT_OP_INVALID,
++	[NFT_MSG_DELOBJ]	= AUDIT_NFT_OP_OBJ_UNREGISTER,
++	[NFT_MSG_GETOBJ_RESET]	= AUDIT_NFT_OP_OBJ_RESET,
++	[NFT_MSG_NEWFLOWTABLE]	= AUDIT_NFT_OP_FLOWTABLE_REGISTER,
++	[NFT_MSG_GETFLOWTABLE]	= AUDIT_NFT_OP_INVALID,
++	[NFT_MSG_DELFLOWTABLE]	= AUDIT_NFT_OP_FLOWTABLE_UNREGISTER,
++};
++
+ static void nft_validate_state_update(struct net *net, u8 new_validate_state)
+ {
+ 	switch (net->nft.validate_state) {
+@@ -648,6 +683,11 @@ nf_tables_chain_type_lookup(struct net *net, const struct nlattr *nla,
+ 	return ERR_PTR(-ENOENT);
+ }
+ 
++static __be16 nft_base_seq(const struct net *net)
++{
++	return htons(net->nft.base_seq & 0xffff);
++}
++
+ static const struct nla_policy nft_table_policy[NFTA_TABLE_MAX + 1] = {
+ 	[NFTA_TABLE_NAME]	= { .type = NLA_STRING,
+ 				    .len = NFT_TABLE_MAXNAMELEN - 1 },
+@@ -662,18 +702,13 @@ static int nf_tables_fill_table_info(struct sk_buff *skb, struct net *net,
+ 				     int family, const struct nft_table *table)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
++			   NFNETLINK_V0, nft_base_seq(net));
++	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family	= family;
+-	nfmsg->version		= NFNETLINK_V0;
+-	nfmsg->res_id		= htons(net->nft.base_seq & 0xffff);
+-
+ 	if (nla_put_string(skb, NFTA_TABLE_NAME, table->name) ||
+ 	    nla_put_be32(skb, NFTA_TABLE_FLAGS, htonl(table->flags)) ||
+ 	    nla_put_be32(skb, NFTA_TABLE_USE, htonl(table->use)) ||
+@@ -710,17 +745,6 @@ static void nf_tables_table_notify(const struct nft_ctx *ctx, int event)
+ {
+ 	struct sk_buff *skb;
+ 	int err;
+-	char *buf = kasprintf(GFP_KERNEL, "%s:%llu;?:0",
+-			      ctx->table->name, ctx->table->handle);
+-
+-	audit_log_nfcfg(buf,
+-			ctx->family,
+-			ctx->table->use,
+-			event == NFT_MSG_NEWTABLE ?
+-				AUDIT_NFT_OP_TABLE_REGISTER :
+-				AUDIT_NFT_OP_TABLE_UNREGISTER,
+-			GFP_KERNEL);
+-	kfree(buf);
+ 
+ 	if (!ctx->report &&
+ 	    !nfnetlink_has_listeners(ctx->net, NFNLGRP_NFTABLES))
+@@ -1414,18 +1438,13 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ 				     const struct nft_chain *chain)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
++			   NFNETLINK_V0, nft_base_seq(net));
++	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family	= family;
+-	nfmsg->version		= NFNETLINK_V0;
+-	nfmsg->res_id		= htons(net->nft.base_seq & 0xffff);
+-
+ 	if (nla_put_string(skb, NFTA_CHAIN_TABLE, table->name))
+ 		goto nla_put_failure;
+ 	if (nla_put_be64(skb, NFTA_CHAIN_HANDLE, cpu_to_be64(chain->handle),
+@@ -1477,18 +1496,6 @@ static void nf_tables_chain_notify(const struct nft_ctx *ctx, int event)
+ {
+ 	struct sk_buff *skb;
+ 	int err;
+-	char *buf = kasprintf(GFP_KERNEL, "%s:%llu;%s:%llu",
+-			      ctx->table->name, ctx->table->handle,
+-			      ctx->chain->name, ctx->chain->handle);
+-
+-	audit_log_nfcfg(buf,
+-			ctx->family,
+-			ctx->chain->use,
+-			event == NFT_MSG_NEWCHAIN ?
+-				AUDIT_NFT_OP_CHAIN_REGISTER :
+-				AUDIT_NFT_OP_CHAIN_UNREGISTER,
+-			GFP_KERNEL);
+-	kfree(buf);
+ 
+ 	if (!ctx->report &&
+ 	    !nfnetlink_has_listeners(ctx->net, NFNLGRP_NFTABLES))
+@@ -2786,20 +2793,15 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
+ 				    const struct nft_rule *prule)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	const struct nft_expr *expr, *next;
+ 	struct nlattr *list;
+ 	u16 type = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+ 
+-	nlh = nlmsg_put(skb, portid, seq, type, sizeof(struct nfgenmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, type, flags, family, NFNETLINK_V0,
++			   nft_base_seq(net));
++	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family	= family;
+-	nfmsg->version		= NFNETLINK_V0;
+-	nfmsg->res_id		= htons(net->nft.base_seq & 0xffff);
+-
+ 	if (nla_put_string(skb, NFTA_RULE_TABLE, table->name))
+ 		goto nla_put_failure;
+ 	if (nla_put_string(skb, NFTA_RULE_CHAIN, chain->name))
+@@ -2844,18 +2846,6 @@ static void nf_tables_rule_notify(const struct nft_ctx *ctx,
+ {
+ 	struct sk_buff *skb;
+ 	int err;
+-	char *buf = kasprintf(GFP_KERNEL, "%s:%llu;%s:%llu",
+-			      ctx->table->name, ctx->table->handle,
+-			      ctx->chain->name, ctx->chain->handle);
+-
+-	audit_log_nfcfg(buf,
+-			ctx->family,
+-			rule->handle,
+-			event == NFT_MSG_NEWRULE ?
+-				AUDIT_NFT_OP_RULE_REGISTER :
+-				AUDIT_NFT_OP_RULE_UNREGISTER,
+-			GFP_KERNEL);
+-	kfree(buf);
+ 
+ 	if (!ctx->report &&
+ 	    !nfnetlink_has_listeners(ctx->net, NFNLGRP_NFTABLES))
+@@ -3702,7 +3692,7 @@ cont:
+ 		list_for_each_entry(i, &ctx->table->sets, list) {
+ 			int tmp;
+ 
+-			if (!nft_is_active_next(ctx->net, set))
++			if (!nft_is_active_next(ctx->net, i))
+ 				continue;
+ 			if (!sscanf(i->name, name, &tmp))
+ 				continue;
+@@ -3786,23 +3776,17 @@ static int nf_tables_fill_set_concat(struct sk_buff *skb,
+ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ 			      const struct nft_set *set, u16 event, u16 flags)
+ {
+-	struct nfgenmsg *nfmsg;
+ 	struct nlmsghdr *nlh;
+ 	u32 portid = ctx->portid;
+ 	struct nlattr *nest;
+ 	u32 seq = ctx->seq;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg),
+-			flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, ctx->family,
++			   NFNETLINK_V0, nft_base_seq(ctx->net));
++	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family	= ctx->family;
+-	nfmsg->version		= NFNETLINK_V0;
+-	nfmsg->res_id		= htons(ctx->net->nft.base_seq & 0xffff);
+-
+ 	if (nla_put_string(skb, NFTA_SET_TABLE, ctx->table->name))
+ 		goto nla_put_failure;
+ 	if (nla_put_string(skb, NFTA_SET_NAME, set->name))
+@@ -3882,18 +3866,6 @@ static void nf_tables_set_notify(const struct nft_ctx *ctx,
+ 	struct sk_buff *skb;
+ 	u32 portid = ctx->portid;
+ 	int err;
+-	char *buf = kasprintf(gfp_flags, "%s:%llu;%s:%llu",
+-			      ctx->table->name, ctx->table->handle,
+-			      set->name, set->handle);
+-
+-	audit_log_nfcfg(buf,
+-			ctx->family,
+-			set->field_count,
+-			event == NFT_MSG_NEWSET ?
+-				AUDIT_NFT_OP_SET_REGISTER :
+-				AUDIT_NFT_OP_SET_UNREGISTER,
+-			gfp_flags);
+-	kfree(buf);
+ 
+ 	if (!ctx->report &&
+ 	    !nfnetlink_has_listeners(ctx->net, NFNLGRP_NFTABLES))
+@@ -4241,6 +4213,11 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 		err = nf_tables_set_desc_parse(&desc, nla[NFTA_SET_DESC]);
+ 		if (err < 0)
+ 			return err;
++
++		if (desc.field_count > 1 && !(flags & NFT_SET_CONCAT))
++			return -EINVAL;
++	} else if (flags & NFT_SET_CONCAT) {
++		return -EINVAL;
+ 	}
+ 
+ 	if (nla[NFTA_SET_EXPR])
+@@ -4717,7 +4694,6 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
+ 	struct nft_set *set;
+ 	struct nft_set_dump_args args;
+ 	bool set_found = false;
+-	struct nfgenmsg *nfmsg;
+ 	struct nlmsghdr *nlh;
+ 	struct nlattr *nest;
+ 	u32 portid, seq;
+@@ -4750,16 +4726,11 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
+ 	portid = NETLINK_CB(cb->skb).portid;
+ 	seq    = cb->nlh->nlmsg_seq;
+ 
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg),
+-			NLM_F_MULTI);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, NLM_F_MULTI,
++			   table->family, NFNETLINK_V0, nft_base_seq(net));
++	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = table->family;
+-	nfmsg->version      = NFNETLINK_V0;
+-	nfmsg->res_id	    = htons(net->nft.base_seq & 0xffff);
+-
+ 	if (nla_put_string(skb, NFTA_SET_ELEM_LIST_TABLE, table->name))
+ 		goto nla_put_failure;
+ 	if (nla_put_string(skb, NFTA_SET_ELEM_LIST_SET, set->name))
+@@ -4816,22 +4787,16 @@ static int nf_tables_fill_setelem_info(struct sk_buff *skb,
+ 				       const struct nft_set *set,
+ 				       const struct nft_set_elem *elem)
+ {
+-	struct nfgenmsg *nfmsg;
+ 	struct nlmsghdr *nlh;
+ 	struct nlattr *nest;
+ 	int err;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg),
+-			flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, ctx->family,
++			   NFNETLINK_V0, nft_base_seq(ctx->net));
++	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family	= ctx->family;
+-	nfmsg->version		= NFNETLINK_V0;
+-	nfmsg->res_id		= htons(ctx->net->nft.base_seq & 0xffff);
+-
+ 	if (nla_put_string(skb, NFTA_SET_TABLE, ctx->table->name))
+ 		goto nla_put_failure;
+ 	if (nla_put_string(skb, NFTA_SET_NAME, set->name))
+@@ -5030,18 +4995,6 @@ static void nf_tables_setelem_notify(const struct nft_ctx *ctx,
+ 	u32 portid = ctx->portid;
+ 	struct sk_buff *skb;
+ 	int err;
+-	char *buf = kasprintf(GFP_KERNEL, "%s:%llu;%s:%llu",
+-			      ctx->table->name, ctx->table->handle,
+-			      set->name, set->handle);
+-
+-	audit_log_nfcfg(buf,
+-			ctx->family,
+-			set->handle,
+-			event == NFT_MSG_NEWSETELEM ?
+-				AUDIT_NFT_OP_SETELEM_REGISTER :
+-				AUDIT_NFT_OP_SETELEM_UNREGISTER,
+-			GFP_KERNEL);
+-	kfree(buf);
+ 
+ 	if (!ctx->report && !nfnetlink_has_listeners(net, NFNLGRP_NFTABLES))
+ 		return;
+@@ -5245,6 +5198,15 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ 			return -EINVAL;
+ 	}
+ 
++	if (set->flags & NFT_SET_OBJECT) {
++		if (!nla[NFTA_SET_ELEM_OBJREF] &&
++		    !(flags & NFT_SET_ELEM_INTERVAL_END))
++			return -EINVAL;
++	} else {
++		if (nla[NFTA_SET_ELEM_OBJREF])
++			return -EINVAL;
++	}
++
+ 	if ((flags & NFT_SET_ELEM_INTERVAL_END) &&
+ 	     (nla[NFTA_SET_ELEM_DATA] ||
+ 	      nla[NFTA_SET_ELEM_OBJREF] ||
+@@ -5322,10 +5284,6 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ 				       expr->ops->size);
+ 
+ 	if (nla[NFTA_SET_ELEM_OBJREF] != NULL) {
+-		if (!(set->flags & NFT_SET_OBJECT)) {
+-			err = -EINVAL;
+-			goto err_parse_key_end;
+-		}
+ 		obj = nft_obj_lookup(ctx->net, ctx->table,
+ 				     nla[NFTA_SET_ELEM_OBJREF],
+ 				     set->objtype, genmask);
+@@ -6095,19 +6053,14 @@ static int nf_tables_fill_obj_info(struct sk_buff *skb, struct net *net,
+ 				   int family, const struct nft_table *table,
+ 				   struct nft_object *obj, bool reset)
+ {
+-	struct nfgenmsg *nfmsg;
+ 	struct nlmsghdr *nlh;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
++			   NFNETLINK_V0, nft_base_seq(net));
++	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family	= family;
+-	nfmsg->version		= NFNETLINK_V0;
+-	nfmsg->res_id		= htons(net->nft.base_seq & 0xffff);
+-
+ 	if (nla_put_string(skb, NFTA_OBJ_TABLE, table->name) ||
+ 	    nla_put_string(skb, NFTA_OBJ_NAME, obj->key.name) ||
+ 	    nla_put_be32(skb, NFTA_OBJ_TYPE, htonl(obj->ops->type->type)) ||
+@@ -6170,12 +6123,11 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+ 			    filter->type != NFT_OBJECT_UNSPEC &&
+ 			    obj->ops->type->type != filter->type)
+ 				goto cont;
+-
+ 			if (reset) {
+ 				char *buf = kasprintf(GFP_ATOMIC,
+-						      "%s:%llu;?:0",
++						      "%s:%u",
+ 						      table->name,
+-						      table->handle);
++						      net->nft.base_seq);
+ 
+ 				audit_log_nfcfg(buf,
+ 						family,
+@@ -6296,8 +6248,8 @@ static int nf_tables_getobj(struct net *net, struct sock *nlsk,
+ 		reset = true;
+ 
+ 	if (reset) {
+-		char *buf = kasprintf(GFP_ATOMIC, "%s:%llu;?:0",
+-				      table->name, table->handle);
++		char *buf = kasprintf(GFP_ATOMIC, "%s:%u",
++				      table->name, net->nft.base_seq);
+ 
+ 		audit_log_nfcfg(buf,
+ 				family,
+@@ -6384,15 +6336,15 @@ void nft_obj_notify(struct net *net, const struct nft_table *table,
+ {
+ 	struct sk_buff *skb;
+ 	int err;
+-	char *buf = kasprintf(gfp, "%s:%llu;?:0",
+-			      table->name, table->handle);
++	char *buf = kasprintf(gfp, "%s:%u",
++			      table->name, net->nft.base_seq);
+ 
+ 	audit_log_nfcfg(buf,
+ 			family,
+ 			obj->handle,
+ 			event == NFT_MSG_NEWOBJ ?
+-				AUDIT_NFT_OP_OBJ_REGISTER :
+-				AUDIT_NFT_OP_OBJ_UNREGISTER,
++				 AUDIT_NFT_OP_OBJ_REGISTER :
++				 AUDIT_NFT_OP_OBJ_UNREGISTER,
+ 			gfp);
+ 	kfree(buf);
+ 
+@@ -7007,20 +6959,15 @@ static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net,
+ 					 struct list_head *hook_list)
+ {
+ 	struct nlattr *nest, *nest_devs;
+-	struct nfgenmsg *nfmsg;
+ 	struct nft_hook *hook;
+ 	struct nlmsghdr *nlh;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
++			   NFNETLINK_V0, nft_base_seq(net));
++	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family	= family;
+-	nfmsg->version		= NFNETLINK_V0;
+-	nfmsg->res_id		= htons(net->nft.base_seq & 0xffff);
+-
+ 	if (nla_put_string(skb, NFTA_FLOWTABLE_TABLE, flowtable->table->name) ||
+ 	    nla_put_string(skb, NFTA_FLOWTABLE_NAME, flowtable->name) ||
+ 	    nla_put_be32(skb, NFTA_FLOWTABLE_USE, htonl(flowtable->use)) ||
+@@ -7210,18 +7157,6 @@ static void nf_tables_flowtable_notify(struct nft_ctx *ctx,
+ {
+ 	struct sk_buff *skb;
+ 	int err;
+-	char *buf = kasprintf(GFP_KERNEL, "%s:%llu;%s:%llu",
+-			      flowtable->table->name, flowtable->table->handle,
+-			      flowtable->name, flowtable->handle);
+-
+-	audit_log_nfcfg(buf,
+-			ctx->family,
+-			flowtable->hooknum,
+-			event == NFT_MSG_NEWFLOWTABLE ?
+-				AUDIT_NFT_OP_FLOWTABLE_REGISTER :
+-				AUDIT_NFT_OP_FLOWTABLE_UNREGISTER,
+-			GFP_KERNEL);
+-	kfree(buf);
+ 
+ 	if (!ctx->report &&
+ 	    !nfnetlink_has_listeners(ctx->net, NFNLGRP_NFTABLES))
+@@ -7265,19 +7200,14 @@ static int nf_tables_fill_gen_info(struct sk_buff *skb, struct net *net,
+ 				   u32 portid, u32 seq)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	char buf[TASK_COMM_LEN];
+ 	int event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_NEWGEN);
+ 
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), 0);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, 0, AF_UNSPEC,
++			   NFNETLINK_V0, nft_base_seq(net));
++	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family	= AF_UNSPEC;
+-	nfmsg->version		= NFNETLINK_V0;
+-	nfmsg->res_id		= htons(net->nft.base_seq & 0xffff);
+-
+ 	if (nla_put_be32(skb, NFTA_GEN_ID, htonl(net->nft.base_seq)) ||
+ 	    nla_put_be32(skb, NFTA_GEN_PROC_PID, htonl(task_pid_nr(current))) ||
+ 	    nla_put_string(skb, NFTA_GEN_PROC_NAME, get_task_comm(buf, current)))
+@@ -7342,9 +7272,6 @@ static void nf_tables_gen_notify(struct net *net, struct sk_buff *skb,
+ 	struct sk_buff *skb2;
+ 	int err;
+ 
+-	audit_log_nfcfg("?:0;?:0", 0, net->nft.base_seq,
+-			AUDIT_NFT_OP_GEN_REGISTER, GFP_KERNEL);
+-
+ 	if (!nlmsg_report(nlh) &&
+ 	    !nfnetlink_has_listeners(net, NFNLGRP_NFTABLES))
+ 		return;
+@@ -7875,12 +7802,74 @@ new_batch:
+ 	WARN_ON_ONCE(!list_empty(&net->nft.notify_list));
+ }
+ 
++static int nf_tables_commit_audit_alloc(struct list_head *adl,
++					struct nft_table *table)
++{
++	struct nft_audit_data *adp;
++
++	list_for_each_entry(adp, adl, list) {
++		if (adp->table == table)
++			return 0;
++	}
++	adp = kzalloc(sizeof(*adp), GFP_KERNEL);
++	if (!adp)
++		return -ENOMEM;
++	adp->table = table;
++	list_add(&adp->list, adl);
++	return 0;
++}
++
++static void nf_tables_commit_audit_free(struct list_head *adl)
++{
++	struct nft_audit_data *adp, *adn;
++
++	list_for_each_entry_safe(adp, adn, adl, list) {
++		list_del(&adp->list);
++		kfree(adp);
++	}
++}
++
++static void nf_tables_commit_audit_collect(struct list_head *adl,
++					   struct nft_table *table, u32 op)
++{
++	struct nft_audit_data *adp;
++
++	list_for_each_entry(adp, adl, list) {
++		if (adp->table == table)
++			goto found;
++	}
++	WARN_ONCE(1, "table=%s not expected in commit list", table->name);
++	return;
++found:
++	adp->entries++;
++	if (!adp->op || adp->op > op)
++		adp->op = op;
++}
++
++#define AUNFTABLENAMELEN (NFT_TABLE_MAXNAMELEN + 22)
++
++static void nf_tables_commit_audit_log(struct list_head *adl, u32 generation)
++{
++	struct nft_audit_data *adp, *adn;
++	char aubuf[AUNFTABLENAMELEN];
++
++	list_for_each_entry_safe(adp, adn, adl, list) {
++		snprintf(aubuf, AUNFTABLENAMELEN, "%s:%u", adp->table->name,
++			 generation);
++		audit_log_nfcfg(aubuf, adp->table->family, adp->entries,
++				nft2audit_op[adp->op], GFP_KERNEL);
++		list_del(&adp->list);
++		kfree(adp);
++	}
++}
++
+ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ {
+ 	struct nft_trans *trans, *next;
+ 	struct nft_trans_elem *te;
+ 	struct nft_chain *chain;
+ 	struct nft_table *table;
++	LIST_HEAD(adl);
+ 	int err;
+ 
+ 	if (list_empty(&net->nft.commit_list)) {
+@@ -7900,6 +7889,12 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 	list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) {
+ 		int ret;
+ 
++		ret = nf_tables_commit_audit_alloc(&adl, trans->ctx.table);
++		if (ret) {
++			nf_tables_commit_chain_prepare_cancel(net);
++			nf_tables_commit_audit_free(&adl);
++			return ret;
++		}
+ 		if (trans->msg_type == NFT_MSG_NEWRULE ||
+ 		    trans->msg_type == NFT_MSG_DELRULE) {
+ 			chain = trans->ctx.chain;
+@@ -7907,6 +7902,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 			ret = nf_tables_commit_chain_prepare(net, chain);
+ 			if (ret < 0) {
+ 				nf_tables_commit_chain_prepare_cancel(net);
++				nf_tables_commit_audit_free(&adl);
+ 				return ret;
+ 			}
+ 		}
+@@ -7928,6 +7924,8 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 	net->nft.gencursor = nft_gencursor_next(net);
+ 
+ 	list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) {
++		nf_tables_commit_audit_collect(&adl, trans->ctx.table,
++					       trans->msg_type);
+ 		switch (trans->msg_type) {
+ 		case NFT_MSG_NEWTABLE:
+ 			if (nft_trans_table_update(trans)) {
+@@ -8082,6 +8080,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 
+ 	nft_commit_notify(net, NETLINK_CB(skb).portid);
+ 	nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN);
++	nf_tables_commit_audit_log(&adl, net->nft.base_seq);
+ 	nf_tables_commit_release(net);
+ 
+ 	return 0;
+diff --git a/net/netfilter/nf_tables_trace.c b/net/netfilter/nf_tables_trace.c
+index 87b36da5cd985..0cf3278007ba5 100644
+--- a/net/netfilter/nf_tables_trace.c
++++ b/net/netfilter/nf_tables_trace.c
+@@ -183,7 +183,6 @@ static bool nft_trace_have_verdict_chain(struct nft_traceinfo *info)
+ void nft_trace_notify(struct nft_traceinfo *info)
+ {
+ 	const struct nft_pktinfo *pkt = info->pkt;
+-	struct nfgenmsg *nfmsg;
+ 	struct nlmsghdr *nlh;
+ 	struct sk_buff *skb;
+ 	unsigned int size;
+@@ -219,15 +218,11 @@ void nft_trace_notify(struct nft_traceinfo *info)
+ 		return;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_TRACE);
+-	nlh = nlmsg_put(skb, 0, 0, event, sizeof(struct nfgenmsg), 0);
++	nlh = nfnl_msg_put(skb, 0, 0, event, 0, info->basechain->type->family,
++			   NFNETLINK_V0, 0);
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family	= info->basechain->type->family;
+-	nfmsg->version		= NFNETLINK_V0;
+-	nfmsg->res_id		= 0;
+-
+ 	if (nla_put_be32(skb, NFTA_TRACE_NFPROTO, htonl(nft_pf(pkt))))
+ 		goto nla_put_failure;
+ 
+diff --git a/net/netfilter/nfnetlink_acct.c b/net/netfilter/nfnetlink_acct.c
+index 5bfec829c12f3..ec3e378da73d1 100644
+--- a/net/netfilter/nfnetlink_acct.c
++++ b/net/netfilter/nfnetlink_acct.c
+@@ -132,21 +132,16 @@ nfnl_acct_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
+ 		   int event, struct nf_acct *acct)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	unsigned int flags = portid ? NLM_F_MULTI : 0;
+ 	u64 pkts, bytes;
+ 	u32 old_flags;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_ACCT, event);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
++			   NFNETLINK_V0, 0);
++	if (!nlh)
+ 		goto nlmsg_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = AF_UNSPEC;
+-	nfmsg->version = NFNETLINK_V0;
+-	nfmsg->res_id = 0;
+-
+ 	if (nla_put_string(skb, NFACCT_NAME, acct->name))
+ 		goto nla_put_failure;
+ 
+diff --git a/net/netfilter/nfnetlink_cthelper.c b/net/netfilter/nfnetlink_cthelper.c
+index 91afbf8ac8cf0..52d5f24118342 100644
+--- a/net/netfilter/nfnetlink_cthelper.c
++++ b/net/netfilter/nfnetlink_cthelper.c
+@@ -530,20 +530,15 @@ nfnl_cthelper_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
+ 			int event, struct nf_conntrack_helper *helper)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	unsigned int flags = portid ? NLM_F_MULTI : 0;
+ 	int status;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_CTHELPER, event);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
++			   NFNETLINK_V0, 0);
++	if (!nlh)
+ 		goto nlmsg_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = AF_UNSPEC;
+-	nfmsg->version = NFNETLINK_V0;
+-	nfmsg->res_id = 0;
+-
+ 	if (nla_put_string(skb, NFCTH_NAME, helper->name))
+ 		goto nla_put_failure;
+ 
+diff --git a/net/netfilter/nfnetlink_cttimeout.c b/net/netfilter/nfnetlink_cttimeout.c
+index 89a381f7f9459..de831a2575129 100644
+--- a/net/netfilter/nfnetlink_cttimeout.c
++++ b/net/netfilter/nfnetlink_cttimeout.c
+@@ -160,22 +160,17 @@ ctnl_timeout_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
+ 		       int event, struct ctnl_timeout *timeout)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	unsigned int flags = portid ? NLM_F_MULTI : 0;
+ 	const struct nf_conntrack_l4proto *l4proto = timeout->timeout.l4proto;
+ 	struct nlattr *nest_parms;
+ 	int ret;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_TIMEOUT, event);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
++			   NFNETLINK_V0, 0);
++	if (!nlh)
+ 		goto nlmsg_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = AF_UNSPEC;
+-	nfmsg->version = NFNETLINK_V0;
+-	nfmsg->res_id = 0;
+-
+ 	if (nla_put_string(skb, CTA_TIMEOUT_NAME, timeout->name) ||
+ 	    nla_put_be16(skb, CTA_TIMEOUT_L3PROTO,
+ 			 htons(timeout->timeout.l3num)) ||
+@@ -382,21 +377,16 @@ cttimeout_default_fill_info(struct net *net, struct sk_buff *skb, u32 portid,
+ 			    const unsigned int *timeouts)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	unsigned int flags = portid ? NLM_F_MULTI : 0;
+ 	struct nlattr *nest_parms;
+ 	int ret;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_TIMEOUT, event);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
++			   NFNETLINK_V0, 0);
++	if (!nlh)
+ 		goto nlmsg_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = AF_UNSPEC;
+-	nfmsg->version = NFNETLINK_V0;
+-	nfmsg->res_id = 0;
+-
+ 	if (nla_put_be16(skb, CTA_TIMEOUT_L3PROTO, htons(l3num)) ||
+ 	    nla_put_u8(skb, CTA_TIMEOUT_L4PROTO, l4proto->l4proto))
+ 		goto nla_put_failure;
+diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c
+index 33c13edbca4bb..f087baa95b07b 100644
+--- a/net/netfilter/nfnetlink_log.c
++++ b/net/netfilter/nfnetlink_log.c
+@@ -452,20 +452,15 @@ __build_packet_message(struct nfnl_log_net *log,
+ {
+ 	struct nfulnl_msg_packet_hdr pmsg;
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	sk_buff_data_t old_tail = inst->skb->tail;
+ 	struct sock *sk;
+ 	const unsigned char *hwhdrp;
+ 
+-	nlh = nlmsg_put(inst->skb, 0, 0,
+-			nfnl_msg_type(NFNL_SUBSYS_ULOG, NFULNL_MSG_PACKET),
+-			sizeof(struct nfgenmsg), 0);
++	nlh = nfnl_msg_put(inst->skb, 0, 0,
++			   nfnl_msg_type(NFNL_SUBSYS_ULOG, NFULNL_MSG_PACKET),
++			   0, pf, NFNETLINK_V0, htons(inst->group_num));
+ 	if (!nlh)
+ 		return -1;
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = pf;
+-	nfmsg->version = NFNETLINK_V0;
+-	nfmsg->res_id = htons(inst->group_num);
+ 
+ 	memset(&pmsg, 0, sizeof(pmsg));
+ 	pmsg.hw_protocol	= skb->protocol;
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index 72d30922ed290..9d87606c76ff4 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -383,7 +383,6 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue,
+ 	struct nlattr *nla;
+ 	struct nfqnl_msg_packet_hdr *pmsg;
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	struct sk_buff *entskb = entry->skb;
+ 	struct net_device *indev;
+ 	struct net_device *outdev;
+@@ -469,18 +468,15 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue,
+ 		goto nlmsg_failure;
+ 	}
+ 
+-	nlh = nlmsg_put(skb, 0, 0,
+-			nfnl_msg_type(NFNL_SUBSYS_QUEUE, NFQNL_MSG_PACKET),
+-			sizeof(struct nfgenmsg), 0);
++	nlh = nfnl_msg_put(skb, 0, 0,
++			   nfnl_msg_type(NFNL_SUBSYS_QUEUE, NFQNL_MSG_PACKET),
++			   0, entry->state.pf, NFNETLINK_V0,
++			   htons(queue->queue_num));
+ 	if (!nlh) {
+ 		skb_tx_error(entskb);
+ 		kfree_skb(skb);
+ 		goto nlmsg_failure;
+ 	}
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = entry->state.pf;
+-	nfmsg->version = NFNETLINK_V0;
+-	nfmsg->res_id = htons(queue->queue_num);
+ 
+ 	nla = __nla_reserve(skb, NFQA_PACKET_HDR, sizeof(*pmsg));
+ 	pmsg = nla_data(nla);
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index 8e56f353ff351..b8dbd20a6a4c5 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -591,19 +591,14 @@ nfnl_compat_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
+ 		      int rev, int target)
+ {
+ 	struct nlmsghdr *nlh;
+-	struct nfgenmsg *nfmsg;
+ 	unsigned int flags = portid ? NLM_F_MULTI : 0;
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_NFT_COMPAT, event);
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
+-	if (nlh == NULL)
++	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
++			   NFNETLINK_V0, 0);
++	if (!nlh)
+ 		goto nlmsg_failure;
+ 
+-	nfmsg = nlmsg_data(nlh);
+-	nfmsg->nfgen_family = family;
+-	nfmsg->version = NFNETLINK_V0;
+-	nfmsg->res_id = 0;
+-
+ 	if (nla_put_string(skb, NFTA_COMPAT_NAME, name) ||
+ 	    nla_put_be32(skb, NFTA_COMPAT_REV, htonl(rev)) ||
+ 	    nla_put_be32(skb, NFTA_COMPAT_TYPE, htonl(target)))
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index c992424e4d632..9fd7ba01b9f8b 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -1182,13 +1182,17 @@ static int ctrl_dumppolicy_start(struct netlink_callback *cb)
+ 							     op.policy,
+ 							     op.maxattr);
+ 			if (err)
+-				return err;
++				goto err_free_state;
+ 		}
+ 	}
+ 
+ 	if (!ctx->state)
+ 		return -ENODATA;
+ 	return 0;
++
++err_free_state:
++	netlink_policy_dump_free(ctx->state);
++	return err;
+ }
+ 
+ static void *ctrl_dumppolicy_prep(struct sk_buff *skb,
+diff --git a/net/netlink/policy.c b/net/netlink/policy.c
+index 8d7c900e27f4c..87e3de0fde896 100644
+--- a/net/netlink/policy.c
++++ b/net/netlink/policy.c
+@@ -144,7 +144,7 @@ int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
+ 
+ 	err = add_policy(&state, policy, maxtype);
+ 	if (err)
+-		return err;
++		goto err_try_undo;
+ 
+ 	for (policy_idx = 0;
+ 	     policy_idx < state->n_alloc && state->policies[policy_idx].policy;
+@@ -164,7 +164,7 @@ int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
+ 						 policy[type].nested_policy,
+ 						 policy[type].len);
+ 				if (err)
+-					return err;
++					goto err_try_undo;
+ 				break;
+ 			default:
+ 				break;
+@@ -174,6 +174,16 @@ int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
+ 
+ 	*pstate = state;
+ 	return 0;
++
++err_try_undo:
++	/* Try to preserve reasonable unwind semantics - if we're starting from
++	 * scratch clean up fully, otherwise record what we got and caller will.
++	 */
++	if (!*pstate)
++		netlink_policy_dump_free(state);
++	else
++		*pstate = state;
++	return err;
+ }
+ 
+ static bool
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 56cffbfa000b7..13448ca5aeff2 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -20,6 +20,8 @@
+ /* auto-bind range */
+ #define QRTR_MIN_EPH_SOCKET 0x4000
+ #define QRTR_MAX_EPH_SOCKET 0x7fff
++#define QRTR_EPH_PORT_RANGE \
++		XA_LIMIT(QRTR_MIN_EPH_SOCKET, QRTR_MAX_EPH_SOCKET)
+ 
+ /**
+  * struct qrtr_hdr_v1 - (I|R)PCrouter packet header version 1
+@@ -106,8 +108,7 @@ static LIST_HEAD(qrtr_all_nodes);
+ static DEFINE_MUTEX(qrtr_node_lock);
+ 
+ /* local port allocation management */
+-static DEFINE_IDR(qrtr_ports);
+-static DEFINE_MUTEX(qrtr_port_lock);
++static DEFINE_XARRAY_ALLOC(qrtr_ports);
+ 
+ /**
+  * struct qrtr_node - endpoint node
+@@ -635,7 +636,7 @@ static struct qrtr_sock *qrtr_port_lookup(int port)
+ 		port = 0;
+ 
+ 	rcu_read_lock();
+-	ipc = idr_find(&qrtr_ports, port);
++	ipc = xa_load(&qrtr_ports, port);
+ 	if (ipc)
+ 		sock_hold(&ipc->sk);
+ 	rcu_read_unlock();
+@@ -677,9 +678,7 @@ static void qrtr_port_remove(struct qrtr_sock *ipc)
+ 
+ 	__sock_put(&ipc->sk);
+ 
+-	mutex_lock(&qrtr_port_lock);
+-	idr_remove(&qrtr_ports, port);
+-	mutex_unlock(&qrtr_port_lock);
++	xa_erase(&qrtr_ports, port);
+ 
+ 	/* Ensure that if qrtr_port_lookup() did enter the RCU read section we
+ 	 * wait for it to up increment the refcount */
+@@ -698,29 +697,20 @@ static void qrtr_port_remove(struct qrtr_sock *ipc)
+  */
+ static int qrtr_port_assign(struct qrtr_sock *ipc, int *port)
+ {
+-	u32 min_port;
+ 	int rc;
+ 
+-	mutex_lock(&qrtr_port_lock);
+ 	if (!*port) {
+-		min_port = QRTR_MIN_EPH_SOCKET;
+-		rc = idr_alloc_u32(&qrtr_ports, ipc, &min_port, QRTR_MAX_EPH_SOCKET, GFP_ATOMIC);
+-		if (!rc)
+-			*port = min_port;
++		rc = xa_alloc(&qrtr_ports, port, ipc, QRTR_EPH_PORT_RANGE,
++				GFP_KERNEL);
+ 	} else if (*port < QRTR_MIN_EPH_SOCKET && !capable(CAP_NET_ADMIN)) {
+ 		rc = -EACCES;
+ 	} else if (*port == QRTR_PORT_CTRL) {
+-		min_port = 0;
+-		rc = idr_alloc_u32(&qrtr_ports, ipc, &min_port, 0, GFP_ATOMIC);
++		rc = xa_insert(&qrtr_ports, 0, ipc, GFP_KERNEL);
+ 	} else {
+-		min_port = *port;
+-		rc = idr_alloc_u32(&qrtr_ports, ipc, &min_port, *port, GFP_ATOMIC);
+-		if (!rc)
+-			*port = min_port;
++		rc = xa_insert(&qrtr_ports, *port, ipc, GFP_KERNEL);
+ 	}
+-	mutex_unlock(&qrtr_port_lock);
+ 
+-	if (rc == -ENOSPC)
++	if (rc == -EBUSY)
+ 		return -EADDRINUSE;
+ 	else if (rc < 0)
+ 		return rc;
+@@ -734,20 +724,16 @@ static int qrtr_port_assign(struct qrtr_sock *ipc, int *port)
+ static void qrtr_reset_ports(void)
+ {
+ 	struct qrtr_sock *ipc;
+-	int id;
+-
+-	mutex_lock(&qrtr_port_lock);
+-	idr_for_each_entry(&qrtr_ports, ipc, id) {
+-		/* Don't reset control port */
+-		if (id == 0)
+-			continue;
++	unsigned long index;
+ 
++	rcu_read_lock();
++	xa_for_each_start(&qrtr_ports, index, ipc, 1) {
+ 		sock_hold(&ipc->sk);
+ 		ipc->sk.sk_err = ENETRESET;
+ 		ipc->sk.sk_error_report(&ipc->sk);
+ 		sock_put(&ipc->sk);
+ 	}
+-	mutex_unlock(&qrtr_port_lock);
++	rcu_read_unlock();
+ }
+ 
+ /* Bind socket to address.
+diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
+index 6fdedd9dbbc28..cfbf0e129cba5 100644
+--- a/net/rds/ib_recv.c
++++ b/net/rds/ib_recv.c
+@@ -363,6 +363,7 @@ static int acquire_refill(struct rds_connection *conn)
+ static void release_refill(struct rds_connection *conn)
+ {
+ 	clear_bit(RDS_RECV_REFILL, &conn->c_flags);
++	smp_mb__after_atomic();
+ 
+ 	/* We don't use wait_on_bit()/wake_up_bit() because our waking is in a
+ 	 * hot path and finding waiters is very rare.  We don't want to walk
+diff --git a/net/sunrpc/auth.c b/net/sunrpc/auth.c
+index a9f0d17fdb0d6..1bae32c482846 100644
+--- a/net/sunrpc/auth.c
++++ b/net/sunrpc/auth.c
+@@ -445,7 +445,7 @@ rpcauth_prune_expired(struct list_head *free, int nr_to_scan)
+ 		 * Enforce a 60 second garbage collection moratorium
+ 		 * Note that the cred_unused list must be time-ordered.
+ 		 */
+-		if (!time_in_range(cred->cr_expire, expired, jiffies))
++		if (time_in_range(cred->cr_expire, expired, jiffies))
+ 			continue;
+ 		if (!rpcauth_unhash_cred(cred))
+ 			continue;
+diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
+index 22a2c235abf1b..77e347a45344c 100644
+--- a/net/sunrpc/backchannel_rqst.c
++++ b/net/sunrpc/backchannel_rqst.c
+@@ -64,6 +64,17 @@ static void xprt_free_allocation(struct rpc_rqst *req)
+ 	kfree(req);
+ }
+ 
++static void xprt_bc_reinit_xdr_buf(struct xdr_buf *buf)
++{
++	buf->head[0].iov_len = PAGE_SIZE;
++	buf->tail[0].iov_len = 0;
++	buf->pages = NULL;
++	buf->page_len = 0;
++	buf->flags = 0;
++	buf->len = 0;
++	buf->buflen = PAGE_SIZE;
++}
++
+ static int xprt_alloc_xdr_buf(struct xdr_buf *buf, gfp_t gfp_flags)
+ {
+ 	struct page *page;
+@@ -292,6 +303,9 @@ void xprt_free_bc_rqst(struct rpc_rqst *req)
+ 	 */
+ 	spin_lock_bh(&xprt->bc_pa_lock);
+ 	if (xprt_need_to_requeue(xprt)) {
++		xprt_bc_reinit_xdr_buf(&req->rq_snd_buf);
++		xprt_bc_reinit_xdr_buf(&req->rq_rcv_buf);
++		req->rq_rcv_buf.len = PAGE_SIZE;
+ 		list_add_tail(&req->rq_bc_pa_list, &xprt->bc_pa_list);
+ 		xprt->bc_alloc_count++;
+ 		atomic_inc(&xprt->bc_slot_count);
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index c59806253a65a..7829a5018ef9f 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1242,6 +1242,7 @@ static void vsock_connect_timeout(struct work_struct *work)
+ 	if (sk->sk_state == TCP_SYN_SENT &&
+ 	    (sk->sk_shutdown != SHUTDOWN_MASK)) {
+ 		sk->sk_state = TCP_CLOSE;
++		sk->sk_socket->state = SS_UNCONNECTED;
+ 		sk->sk_err = ETIMEDOUT;
+ 		sk->sk_error_report(sk);
+ 		vsock_transport_cancel_pkt(vsk);
+@@ -1347,7 +1348,14 @@ static int vsock_stream_connect(struct socket *sock, struct sockaddr *addr,
+ 			 * timeout fires.
+ 			 */
+ 			sock_hold(sk);
+-			schedule_delayed_work(&vsk->connect_work, timeout);
++
++			/* If the timeout function is already scheduled,
++			 * reschedule it, then ungrab the socket refcount to
++			 * keep it balanced.
++			 */
++			if (mod_delayed_work(system_wq, &vsk->connect_work,
++					     timeout))
++				sock_put(sk);
+ 
+ 			/* Skip ahead to preserve error code set above. */
+ 			goto out_wait;
+diff --git a/scripts/Makefile.gcc-plugins b/scripts/Makefile.gcc-plugins
+index 4aad284800355..36814be80264a 100644
+--- a/scripts/Makefile.gcc-plugins
++++ b/scripts/Makefile.gcc-plugins
+@@ -6,7 +6,7 @@ gcc-plugin-$(CONFIG_GCC_PLUGIN_LATENT_ENTROPY)	+= latent_entropy_plugin.so
+ gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_LATENT_ENTROPY)		\
+ 		+= -DLATENT_ENTROPY_PLUGIN
+ ifdef CONFIG_GCC_PLUGIN_LATENT_ENTROPY
+-    DISABLE_LATENT_ENTROPY_PLUGIN += -fplugin-arg-latent_entropy_plugin-disable
++    DISABLE_LATENT_ENTROPY_PLUGIN += -fplugin-arg-latent_entropy_plugin-disable -ULATENT_ENTROPY_PLUGIN
+ endif
+ export DISABLE_LATENT_ENTROPY_PLUGIN
+ 
+diff --git a/scripts/dummy-tools/gcc b/scripts/dummy-tools/gcc
+index 0d0589cf8184e..346757a87dbc8 100755
+--- a/scripts/dummy-tools/gcc
++++ b/scripts/dummy-tools/gcc
+@@ -77,12 +77,8 @@ fi
+ 
+ # To set GCC_PLUGINS
+ if arg_contain -print-file-name=plugin "$@"; then
+-	plugin_dir=$(mktemp -d)
+-
+-	mkdir -p $plugin_dir/include
+-	touch $plugin_dir/include/plugin-version.h
+-
+-	echo $plugin_dir
++	# Use $0 to find the in-tree dummy directory
++	echo "$(dirname "$(readlink -f "$0")")/dummy-plugin-dir"
+ 	exit 0
+ fi
+ 
+diff --git a/scripts/module.lds.S b/scripts/module.lds.S
+index c5f12195817bb..2c510db6c2ed6 100644
+--- a/scripts/module.lds.S
++++ b/scripts/module.lds.S
+@@ -22,6 +22,8 @@ SECTIONS {
+ 
+ 	.init_array		0 : ALIGN(8) { *(SORT(.init_array.*)) *(.init_array) }
+ 
++	.altinstructions	0 : ALIGN(8) { KEEP(*(.altinstructions)) }
++	__bug_table		0 : ALIGN(8) { KEEP(*(__bug_table)) }
+ 	__jump_table		0 : ALIGN(8) { KEEP(*(__jump_table)) }
+ 
+ 	__patchable_function_entries : { *(__patchable_function_entries) }
+diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
+index 5fd4a64e431f6..c173f6fd7aeed 100644
+--- a/security/apparmor/apparmorfs.c
++++ b/security/apparmor/apparmorfs.c
+@@ -401,7 +401,7 @@ static struct aa_loaddata *aa_simple_write_to_buffer(const char __user *userbuf,
+ 
+ 	data->size = copy_size;
+ 	if (copy_from_user(data->data, userbuf, copy_size)) {
+-		kvfree(data);
++		aa_put_loaddata(data);
+ 		return ERR_PTR(-EFAULT);
+ 	}
+ 
+diff --git a/security/apparmor/audit.c b/security/apparmor/audit.c
+index f7e97c7e80f3d..704b0c895605a 100644
+--- a/security/apparmor/audit.c
++++ b/security/apparmor/audit.c
+@@ -137,7 +137,7 @@ int aa_audit(int type, struct aa_profile *profile, struct common_audit_data *sa,
+ 	}
+ 	if (AUDIT_MODE(profile) == AUDIT_QUIET ||
+ 	    (type == AUDIT_APPARMOR_DENIED &&
+-	     AUDIT_MODE(profile) == AUDIT_QUIET))
++	     AUDIT_MODE(profile) == AUDIT_QUIET_DENIED))
+ 		return aad(sa)->error;
+ 
+ 	if (KILL_MODE(profile) && type == AUDIT_APPARMOR_DENIED)
+diff --git a/security/apparmor/domain.c b/security/apparmor/domain.c
+index f919ebd042fd2..87a9e6fd7908d 100644
+--- a/security/apparmor/domain.c
++++ b/security/apparmor/domain.c
+@@ -465,7 +465,7 @@ restart:
+ 				 * xattrs, or a longer match
+ 				 */
+ 				candidate = profile;
+-				candidate_len = profile->xmatch_len;
++				candidate_len = max(count, profile->xmatch_len);
+ 				candidate_xattrs = ret;
+ 				conflict = false;
+ 			}
+diff --git a/security/apparmor/include/lib.h b/security/apparmor/include/lib.h
+index 7d27db740bc2f..ac5054899f6f4 100644
+--- a/security/apparmor/include/lib.h
++++ b/security/apparmor/include/lib.h
+@@ -22,6 +22,11 @@
+  */
+ 
+ #define DEBUG_ON (aa_g_debug)
++/*
++ * split individual debug cases out in preparation for finer grained
++ * debug controls in the future.
++ */
++#define AA_DEBUG_LABEL DEBUG_ON
+ #define dbg_printk(__fmt, __args...) pr_debug(__fmt, ##__args)
+ #define AA_DEBUG(fmt, args...)						\
+ 	do {								\
+diff --git a/security/apparmor/include/policy.h b/security/apparmor/include/policy.h
+index b5b4b8190e654..b5aa4231af682 100644
+--- a/security/apparmor/include/policy.h
++++ b/security/apparmor/include/policy.h
+@@ -135,7 +135,7 @@ struct aa_profile {
+ 
+ 	const char *attach;
+ 	struct aa_dfa *xmatch;
+-	int xmatch_len;
++	unsigned int xmatch_len;
+ 	enum audit_mode audit;
+ 	long mode;
+ 	u32 path_flags;
+diff --git a/security/apparmor/label.c b/security/apparmor/label.c
+index 6222fdfebe4e5..66bc4704f8044 100644
+--- a/security/apparmor/label.c
++++ b/security/apparmor/label.c
+@@ -1632,9 +1632,9 @@ int aa_label_snxprint(char *str, size_t size, struct aa_ns *ns,
+ 	AA_BUG(!str && size != 0);
+ 	AA_BUG(!label);
+ 
+-	if (flags & FLAG_ABS_ROOT) {
++	if (AA_DEBUG_LABEL && (flags & FLAG_ABS_ROOT)) {
+ 		ns = root_ns;
+-		len = snprintf(str, size, "=");
++		len = snprintf(str, size, "_");
+ 		update_for_len(total, len, size, str);
+ 	} else if (!ns) {
+ 		ns = labels_ns(label);
+@@ -1745,7 +1745,7 @@ void aa_label_xaudit(struct audit_buffer *ab, struct aa_ns *ns,
+ 	if (!use_label_hname(ns, label, flags) ||
+ 	    display_mode(ns, label, flags)) {
+ 		len  = aa_label_asxprint(&name, ns, label, flags, gfp);
+-		if (len == -1) {
++		if (len < 0) {
+ 			AA_DEBUG("label print error");
+ 			return;
+ 		}
+@@ -1773,7 +1773,7 @@ void aa_label_seq_xprint(struct seq_file *f, struct aa_ns *ns,
+ 		int len;
+ 
+ 		len = aa_label_asxprint(&str, ns, label, flags, gfp);
+-		if (len == -1) {
++		if (len < 0) {
+ 			AA_DEBUG("label print error");
+ 			return;
+ 		}
+@@ -1796,7 +1796,7 @@ void aa_label_xprintk(struct aa_ns *ns, struct aa_label *label, int flags,
+ 		int len;
+ 
+ 		len = aa_label_asxprint(&str, ns, label, flags, gfp);
+-		if (len == -1) {
++		if (len < 0) {
+ 			AA_DEBUG("label print error");
+ 			return;
+ 		}
+@@ -1896,7 +1896,8 @@ struct aa_label *aa_label_strn_parse(struct aa_label *base, const char *str,
+ 	AA_BUG(!str);
+ 
+ 	str = skipn_spaces(str, n);
+-	if (str == NULL || (*str == '=' && base != &root_ns->unconfined->label))
++	if (str == NULL || (AA_DEBUG_LABEL && *str == '_' &&
++			    base != &root_ns->unconfined->label))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	len = label_count_strn_entries(str, end - str);
+diff --git a/security/apparmor/mount.c b/security/apparmor/mount.c
+index e0828ee7a3457..e64f76d347d6e 100644
+--- a/security/apparmor/mount.c
++++ b/security/apparmor/mount.c
+@@ -229,7 +229,8 @@ static const char * const mnt_info_table[] = {
+ 	"failed srcname match",
+ 	"failed type match",
+ 	"failed flags match",
+-	"failed data match"
++	"failed data match",
++	"failed perms check"
+ };
+ 
+ /*
+@@ -284,8 +285,8 @@ static int do_match_mnt(struct aa_dfa *dfa, unsigned int start,
+ 			return 0;
+ 	}
+ 
+-	/* failed at end of flags match */
+-	return 4;
++	/* failed at perms check, don't confuse with flags match */
++	return 6;
+ }
+ 
+ 
+@@ -718,6 +719,7 @@ int aa_pivotroot(struct aa_label *label, const struct path *old_path,
+ 			aa_put_label(target);
+ 			goto out;
+ 		}
++		aa_put_label(target);
+ 	} else
+ 		/* already audited error */
+ 		error = PTR_ERR(target);
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index dc345ac932053..556ef65ab6ee6 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -746,16 +746,18 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
+ 		profile->label.flags |= FLAG_HAT;
+ 	if (!unpack_u32(e, &tmp, NULL))
+ 		goto fail;
+-	if (tmp == PACKED_MODE_COMPLAIN || (e->version & FORCE_COMPLAIN_FLAG))
++	if (tmp == PACKED_MODE_COMPLAIN || (e->version & FORCE_COMPLAIN_FLAG)) {
+ 		profile->mode = APPARMOR_COMPLAIN;
+-	else if (tmp == PACKED_MODE_ENFORCE)
++	} else if (tmp == PACKED_MODE_ENFORCE) {
+ 		profile->mode = APPARMOR_ENFORCE;
+-	else if (tmp == PACKED_MODE_KILL)
++	} else if (tmp == PACKED_MODE_KILL) {
+ 		profile->mode = APPARMOR_KILL;
+-	else if (tmp == PACKED_MODE_UNCONFINED)
++	} else if (tmp == PACKED_MODE_UNCONFINED) {
+ 		profile->mode = APPARMOR_UNCONFINED;
+-	else
++		profile->label.flags |= FLAG_UNCONFINED;
++	} else {
+ 		goto fail;
++	}
+ 	if (!unpack_u32(e, &tmp, NULL))
+ 		goto fail;
+ 	if (tmp)
+diff --git a/sound/core/control.c b/sound/core/control.c
+index 3b44378b9dec9..732eb515d2f59 100644
+--- a/sound/core/control.c
++++ b/sound/core/control.c
+@@ -121,6 +121,7 @@ static int snd_ctl_release(struct inode *inode, struct file *file)
+ 			if (control->vd[idx].owner == ctl)
+ 				control->vd[idx].owner = NULL;
+ 	up_write(&card->controls_rwsem);
++	snd_fasync_free(ctl->fasync);
+ 	snd_ctl_empty_read_queue(ctl);
+ 	put_pid(ctl->pid);
+ 	kfree(ctl);
+@@ -175,7 +176,7 @@ void snd_ctl_notify(struct snd_card *card, unsigned int mask,
+ 	_found:
+ 		wake_up(&ctl->change_sleep);
+ 		spin_unlock(&ctl->read_lock);
+-		kill_fasync(&ctl->fasync, SIGIO, POLL_IN);
++		snd_kill_fasync(ctl->fasync, SIGIO, POLL_IN);
+ 	}
+ 	read_unlock_irqrestore(&card->ctl_files_rwlock, flags);
+ }
+@@ -1941,7 +1942,7 @@ static int snd_ctl_fasync(int fd, struct file * file, int on)
+ 	struct snd_ctl_file *ctl;
+ 
+ 	ctl = file->private_data;
+-	return fasync_helper(fd, file, on, &ctl->fasync);
++	return snd_fasync_helper(fd, file, on, &ctl->fasync);
+ }
+ 
+ /* return the preferred subdevice number if already assigned;
+@@ -2015,7 +2016,7 @@ static int snd_ctl_dev_disconnect(struct snd_device *device)
+ 	read_lock_irqsave(&card->ctl_files_rwlock, flags);
+ 	list_for_each_entry(ctl, &card->ctl_files, list) {
+ 		wake_up(&ctl->change_sleep);
+-		kill_fasync(&ctl->fasync, SIGIO, POLL_ERR);
++		snd_kill_fasync(ctl->fasync, SIGIO, POLL_ERR);
+ 	}
+ 	read_unlock_irqrestore(&card->ctl_files_rwlock, flags);
+ 
+diff --git a/sound/core/info.c b/sound/core/info.c
+index 9fec3070f8ba3..d6fb11c3250c4 100644
+--- a/sound/core/info.c
++++ b/sound/core/info.c
+@@ -112,9 +112,9 @@ static loff_t snd_info_entry_llseek(struct file *file, loff_t offset, int orig)
+ 	entry = data->entry;
+ 	mutex_lock(&entry->access);
+ 	if (entry->c.ops->llseek) {
+-		offset = entry->c.ops->llseek(entry,
+-					      data->file_private_data,
+-					      file, offset, orig);
++		ret = entry->c.ops->llseek(entry,
++					   data->file_private_data,
++					   file, offset, orig);
+ 		goto out;
+ 	}
+ 
+diff --git a/sound/core/misc.c b/sound/core/misc.c
+index 3579dd7a161f7..c3f3d94b51970 100644
+--- a/sound/core/misc.c
++++ b/sound/core/misc.c
+@@ -10,6 +10,7 @@
+ #include <linux/time.h>
+ #include <linux/slab.h>
+ #include <linux/ioport.h>
++#include <linux/fs.h>
+ #include <sound/core.h>
+ 
+ #ifdef CONFIG_SND_DEBUG
+@@ -145,3 +146,96 @@ snd_pci_quirk_lookup(struct pci_dev *pci, const struct snd_pci_quirk *list)
+ }
+ EXPORT_SYMBOL(snd_pci_quirk_lookup);
+ #endif
++
++/*
++ * Deferred async signal helpers
++ *
++ * Below are a few helper functions to wrap the async signal handling
++ * in the deferred work.  The main purpose is to avoid the messy deadlock
++ * around tasklist_lock and co at the kill_fasync() invocation.
++ * fasync_helper() and kill_fasync() are replaced with snd_fasync_helper()
++ * and snd_kill_fasync(), respectively.  In addition, snd_fasync_free() has
++ * to be called at releasing the relevant file object.
++ */
++struct snd_fasync {
++	struct fasync_struct *fasync;
++	int signal;
++	int poll;
++	int on;
++	struct list_head list;
++};
++
++static DEFINE_SPINLOCK(snd_fasync_lock);
++static LIST_HEAD(snd_fasync_list);
++
++static void snd_fasync_work_fn(struct work_struct *work)
++{
++	struct snd_fasync *fasync;
++
++	spin_lock_irq(&snd_fasync_lock);
++	while (!list_empty(&snd_fasync_list)) {
++		fasync = list_first_entry(&snd_fasync_list, struct snd_fasync, list);
++		list_del_init(&fasync->list);
++		spin_unlock_irq(&snd_fasync_lock);
++		if (fasync->on)
++			kill_fasync(&fasync->fasync, fasync->signal, fasync->poll);
++		spin_lock_irq(&snd_fasync_lock);
++	}
++	spin_unlock_irq(&snd_fasync_lock);
++}
++
++static DECLARE_WORK(snd_fasync_work, snd_fasync_work_fn);
++
++int snd_fasync_helper(int fd, struct file *file, int on,
++		      struct snd_fasync **fasyncp)
++{
++	struct snd_fasync *fasync = NULL;
++
++	if (on) {
++		fasync = kzalloc(sizeof(*fasync), GFP_KERNEL);
++		if (!fasync)
++			return -ENOMEM;
++		INIT_LIST_HEAD(&fasync->list);
++	}
++
++	spin_lock_irq(&snd_fasync_lock);
++	if (*fasyncp) {
++		kfree(fasync);
++		fasync = *fasyncp;
++	} else {
++		if (!fasync) {
++			spin_unlock_irq(&snd_fasync_lock);
++			return 0;
++		}
++		*fasyncp = fasync;
++	}
++	fasync->on = on;
++	spin_unlock_irq(&snd_fasync_lock);
++	return fasync_helper(fd, file, on, &fasync->fasync);
++}
++EXPORT_SYMBOL_GPL(snd_fasync_helper);
++
++void snd_kill_fasync(struct snd_fasync *fasync, int signal, int poll)
++{
++	unsigned long flags;
++
++	if (!fasync || !fasync->on)
++		return;
++	spin_lock_irqsave(&snd_fasync_lock, flags);
++	fasync->signal = signal;
++	fasync->poll = poll;
++	list_move(&fasync->list, &snd_fasync_list);
++	schedule_work(&snd_fasync_work);
++	spin_unlock_irqrestore(&snd_fasync_lock, flags);
++}
++EXPORT_SYMBOL_GPL(snd_kill_fasync);
++
++void snd_fasync_free(struct snd_fasync *fasync)
++{
++	if (!fasync)
++		return;
++	fasync->on = 0;
++	flush_work(&snd_fasync_work);
++	kfree(fasync);
++}
++EXPORT_SYMBOL_GPL(snd_fasync_free);
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index 04cd8953605ab..764d2b19344e3 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -83,7 +83,7 @@ struct snd_timer_user {
+ 	unsigned int filter;
+ 	struct timespec64 tstamp;		/* trigger tstamp */
+ 	wait_queue_head_t qchange_sleep;
+-	struct fasync_struct *fasync;
++	struct snd_fasync *fasync;
+ 	struct mutex ioctl_lock;
+ };
+ 
+@@ -1345,7 +1345,7 @@ static void snd_timer_user_interrupt(struct snd_timer_instance *timeri,
+ 	}
+       __wake:
+ 	spin_unlock(&tu->qlock);
+-	kill_fasync(&tu->fasync, SIGIO, POLL_IN);
++	snd_kill_fasync(tu->fasync, SIGIO, POLL_IN);
+ 	wake_up(&tu->qchange_sleep);
+ }
+ 
+@@ -1383,7 +1383,7 @@ static void snd_timer_user_ccallback(struct snd_timer_instance *timeri,
+ 	spin_lock_irqsave(&tu->qlock, flags);
+ 	snd_timer_user_append_to_tqueue(tu, &r1);
+ 	spin_unlock_irqrestore(&tu->qlock, flags);
+-	kill_fasync(&tu->fasync, SIGIO, POLL_IN);
++	snd_kill_fasync(tu->fasync, SIGIO, POLL_IN);
+ 	wake_up(&tu->qchange_sleep);
+ }
+ 
+@@ -1453,7 +1453,7 @@ static void snd_timer_user_tinterrupt(struct snd_timer_instance *timeri,
+ 	spin_unlock(&tu->qlock);
+ 	if (append == 0)
+ 		return;
+-	kill_fasync(&tu->fasync, SIGIO, POLL_IN);
++	snd_kill_fasync(tu->fasync, SIGIO, POLL_IN);
+ 	wake_up(&tu->qchange_sleep);
+ }
+ 
+@@ -1521,6 +1521,7 @@ static int snd_timer_user_release(struct inode *inode, struct file *file)
+ 			snd_timer_instance_free(tu->timeri);
+ 		}
+ 		mutex_unlock(&tu->ioctl_lock);
++		snd_fasync_free(tu->fasync);
+ 		kfree(tu->queue);
+ 		kfree(tu->tqueue);
+ 		kfree(tu);
+@@ -2135,7 +2136,7 @@ static int snd_timer_user_fasync(int fd, struct file * file, int on)
+ 	struct snd_timer_user *tu;
+ 
+ 	tu = file->private_data;
+-	return fasync_helper(fd, file, on, &tu->fasync);
++	return snd_fasync_helper(fd, file, on, &tu->fasync);
+ }
+ 
+ static ssize_t snd_timer_user_read(struct file *file, char __user *buffer,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index b822248b666e6..6e679c86b6fa3 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8963,6 +8963,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x7717, "Clevo NS70PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x7718, "Clevo L140PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index 315fd9d971c8c..024ec68e8d356 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -46,34 +46,22 @@ static void tas2770_reset(struct tas2770_priv *tas2770)
+ 	usleep_range(1000, 2000);
+ }
+ 
+-static int tas2770_set_bias_level(struct snd_soc_component *component,
+-				 enum snd_soc_bias_level level)
++static int tas2770_update_pwr_ctrl(struct tas2770_priv *tas2770)
+ {
+-	struct tas2770_priv *tas2770 =
+-			snd_soc_component_get_drvdata(component);
++	struct snd_soc_component *component = tas2770->component;
++	unsigned int val;
++	int ret;
+ 
+-	switch (level) {
+-	case SND_SOC_BIAS_ON:
+-		snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+-					      TAS2770_PWR_CTRL_MASK,
+-					      TAS2770_PWR_CTRL_ACTIVE);
+-		break;
+-	case SND_SOC_BIAS_STANDBY:
+-	case SND_SOC_BIAS_PREPARE:
+-		snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+-					      TAS2770_PWR_CTRL_MASK,
+-					      TAS2770_PWR_CTRL_MUTE);
+-		break;
+-	case SND_SOC_BIAS_OFF:
+-		snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+-					      TAS2770_PWR_CTRL_MASK,
+-					      TAS2770_PWR_CTRL_SHUTDOWN);
+-		break;
++	if (tas2770->dac_powered)
++		val = tas2770->unmuted ?
++			TAS2770_PWR_CTRL_ACTIVE : TAS2770_PWR_CTRL_MUTE;
++	else
++		val = TAS2770_PWR_CTRL_SHUTDOWN;
+ 
+-	default:
+-		dev_err(tas2770->dev, "wrong power level setting %d\n", level);
+-		return -EINVAL;
+-	}
++	ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
++					    TAS2770_PWR_CTRL_MASK, val);
++	if (ret < 0)
++		return ret;
+ 
+ 	return 0;
+ }
+@@ -114,9 +102,7 @@ static int tas2770_codec_resume(struct snd_soc_component *component)
+ 		gpiod_set_value_cansleep(tas2770->sdz_gpio, 1);
+ 		usleep_range(1000, 2000);
+ 	} else {
+-		ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+-						    TAS2770_PWR_CTRL_MASK,
+-						    TAS2770_PWR_CTRL_ACTIVE);
++		ret = tas2770_update_pwr_ctrl(tas2770);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+@@ -152,24 +138,19 @@ static int tas2770_dac_event(struct snd_soc_dapm_widget *w,
+ 
+ 	switch (event) {
+ 	case SND_SOC_DAPM_POST_PMU:
+-		ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+-						    TAS2770_PWR_CTRL_MASK,
+-						    TAS2770_PWR_CTRL_MUTE);
++		tas2770->dac_powered = 1;
++		ret = tas2770_update_pwr_ctrl(tas2770);
+ 		break;
+ 	case SND_SOC_DAPM_PRE_PMD:
+-		ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+-						    TAS2770_PWR_CTRL_MASK,
+-						    TAS2770_PWR_CTRL_SHUTDOWN);
++		tas2770->dac_powered = 0;
++		ret = tas2770_update_pwr_ctrl(tas2770);
+ 		break;
+ 	default:
+ 		dev_err(tas2770->dev, "Not supported evevt\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	if (ret < 0)
+-		return ret;
+-
+-	return 0;
++	return ret;
+ }
+ 
+ static const struct snd_kcontrol_new isense_switch =
+@@ -203,21 +184,11 @@ static const struct snd_soc_dapm_route tas2770_audio_map[] = {
+ static int tas2770_mute(struct snd_soc_dai *dai, int mute, int direction)
+ {
+ 	struct snd_soc_component *component = dai->component;
+-	int ret;
+-
+-	if (mute)
+-		ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+-						    TAS2770_PWR_CTRL_MASK,
+-						    TAS2770_PWR_CTRL_MUTE);
+-	else
+-		ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+-						    TAS2770_PWR_CTRL_MASK,
+-						    TAS2770_PWR_CTRL_ACTIVE);
+-
+-	if (ret < 0)
+-		return ret;
++	struct tas2770_priv *tas2770 =
++			snd_soc_component_get_drvdata(component);
+ 
+-	return 0;
++	tas2770->unmuted = !mute;
++	return tas2770_update_pwr_ctrl(tas2770);
+ }
+ 
+ static int tas2770_set_bitwidth(struct tas2770_priv *tas2770, int bitwidth)
+@@ -337,7 +308,7 @@ static int tas2770_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 	struct snd_soc_component *component = dai->component;
+ 	struct tas2770_priv *tas2770 =
+ 			snd_soc_component_get_drvdata(component);
+-	u8 tdm_rx_start_slot = 0, asi_cfg_1 = 0;
++	u8 tdm_rx_start_slot = 0, invert_fpol = 0, fpol_preinv = 0, asi_cfg_1 = 0;
+ 	int ret;
+ 
+ 	switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
+@@ -349,9 +320,15 @@ static int tas2770_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 	}
+ 
+ 	switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
++	case SND_SOC_DAIFMT_NB_IF:
++		invert_fpol = 1;
++		fallthrough;
+ 	case SND_SOC_DAIFMT_NB_NF:
+ 		asi_cfg_1 |= TAS2770_TDM_CFG_REG1_RX_RSING;
+ 		break;
++	case SND_SOC_DAIFMT_IB_IF:
++		invert_fpol = 1;
++		fallthrough;
+ 	case SND_SOC_DAIFMT_IB_NF:
+ 		asi_cfg_1 |= TAS2770_TDM_CFG_REG1_RX_FALING;
+ 		break;
+@@ -369,15 +346,19 @@ static int tas2770_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 	switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ 	case SND_SOC_DAIFMT_I2S:
+ 		tdm_rx_start_slot = 1;
++		fpol_preinv = 0;
+ 		break;
+ 	case SND_SOC_DAIFMT_DSP_A:
+ 		tdm_rx_start_slot = 0;
++		fpol_preinv = 1;
+ 		break;
+ 	case SND_SOC_DAIFMT_DSP_B:
+ 		tdm_rx_start_slot = 1;
++		fpol_preinv = 1;
+ 		break;
+ 	case SND_SOC_DAIFMT_LEFT_J:
+ 		tdm_rx_start_slot = 0;
++		fpol_preinv = 1;
+ 		break;
+ 	default:
+ 		dev_err(tas2770->dev,
+@@ -391,6 +372,14 @@ static int tas2770_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	ret = snd_soc_component_update_bits(component, TAS2770_TDM_CFG_REG0,
++					    TAS2770_TDM_CFG_REG0_FPOL_MASK,
++					    (fpol_preinv ^ invert_fpol)
++					     ? TAS2770_TDM_CFG_REG0_FPOL_RSING
++					     : TAS2770_TDM_CFG_REG0_FPOL_FALING);
++	if (ret < 0)
++		return ret;
++
+ 	return 0;
+ }
+ 
+@@ -489,7 +478,7 @@ static struct snd_soc_dai_driver tas2770_dai_driver[] = {
+ 		.id = 0,
+ 		.playback = {
+ 			.stream_name    = "ASI1 Playback",
+-			.channels_min   = 2,
++			.channels_min   = 1,
+ 			.channels_max   = 2,
+ 			.rates      = TAS2770_RATES,
+ 			.formats    = TAS2770_FORMATS,
+@@ -537,7 +526,6 @@ static const struct snd_soc_component_driver soc_component_driver_tas2770 = {
+ 	.probe			= tas2770_codec_probe,
+ 	.suspend		= tas2770_codec_suspend,
+ 	.resume			= tas2770_codec_resume,
+-	.set_bias_level = tas2770_set_bias_level,
+ 	.controls		= tas2770_snd_controls,
+ 	.num_controls		= ARRAY_SIZE(tas2770_snd_controls),
+ 	.dapm_widgets		= tas2770_dapm_widgets,
+diff --git a/sound/soc/codecs/tas2770.h b/sound/soc/codecs/tas2770.h
+index d156666bcc552..f75f40781ab13 100644
+--- a/sound/soc/codecs/tas2770.h
++++ b/sound/soc/codecs/tas2770.h
+@@ -41,6 +41,9 @@
+ #define TAS2770_TDM_CFG_REG0_31_44_1_48KHZ  0x6
+ #define TAS2770_TDM_CFG_REG0_31_88_2_96KHZ  0x8
+ #define TAS2770_TDM_CFG_REG0_31_176_4_192KHZ  0xa
++#define TAS2770_TDM_CFG_REG0_FPOL_MASK  BIT(0)
++#define TAS2770_TDM_CFG_REG0_FPOL_RSING  0
++#define TAS2770_TDM_CFG_REG0_FPOL_FALING  1
+     /* TDM Configuration Reg1 */
+ #define TAS2770_TDM_CFG_REG1  TAS2770_REG(0X0, 0x0B)
+ #define TAS2770_TDM_CFG_REG1_MASK	GENMASK(5, 1)
+@@ -135,6 +138,8 @@ struct tas2770_priv {
+ 	struct device *dev;
+ 	int v_sense_slot;
+ 	int i_sense_slot;
++	bool dac_powered;
++	bool unmuted;
+ };
+ 
+ #endif /* __TAS2770__ */
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index b0faf050132d8..b4cc724831370 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -39,6 +39,17 @@
+ #define EXCEPT_MAX_HDR_SIZE	0x400
+ #define HDA_EXT_ROM_STATUS_SIZE 8
+ 
++static const struct sof_intel_dsp_desc
++	*get_chip_info(struct snd_sof_pdata *pdata)
++{
++	const struct sof_dev_desc *desc = pdata->desc;
++	const struct sof_intel_dsp_desc *chip_info;
++
++	chip_info = desc->chip_info;
++
++	return chip_info;
++}
++
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_INTEL_SOUNDWIRE)
+ 
+ /*
+@@ -674,17 +685,6 @@ skip_soundwire:
+ 	return 0;
+ }
+ 
+-static const struct sof_intel_dsp_desc
+-	*get_chip_info(struct snd_sof_pdata *pdata)
+-{
+-	const struct sof_dev_desc *desc = pdata->desc;
+-	const struct sof_intel_dsp_desc *chip_info;
+-
+-	chip_info = desc->chip_info;
+-
+-	return chip_info;
+-}
+-
+ static irqreturn_t hda_dsp_interrupt_handler(int irq, void *context)
+ {
+ 	struct snd_sof_dev *sdev = context;
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 45fc217e4e97b..a3e06a71cf356 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -379,6 +379,14 @@ static const struct usb_audio_device_name usb_audio_names[] = {
+ 
+ 	DEVICE_NAME(0x046d, 0x0990, "Logitech, Inc.", "QuickCam Pro 9000"),
+ 
++	/* ASUS ROG Zenith II: this machine has also two devices, one for
++	 * the front headphone and another for the rest
++	 */
++	PROFILE_NAME(0x0b05, 0x1915, "ASUS", "Zenith II Front Headphone",
++		     "Zenith-II-Front-Headphone"),
++	PROFILE_NAME(0x0b05, 0x1916, "ASUS", "Zenith II Main Audio",
++		     "Zenith-II-Main-Audio"),
++
+ 	/* ASUS ROG Strix */
+ 	PROFILE_NAME(0x0b05, 0x1917,
+ 		     "Realtek", "ALC1220-VB-DT", "Realtek-ALC1220-VB-Desktop"),
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 81ace832d7e42..b708a240a5f06 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -367,13 +367,28 @@ static const struct usbmix_name_map corsair_virtuoso_map[] = {
+ 	{ 0 }
+ };
+ 
+-/* Some mobos shipped with a dummy HD-audio show the invalid GET_MIN/GET_MAX
+- * response for Input Gain Pad (id=19, control=12) and the connector status
+- * for SPDIF terminal (id=18).  Skip them.
+- */
+-static const struct usbmix_name_map asus_rog_map[] = {
+-	{ 18, NULL }, /* OT, connector control */
+-	{ 19, NULL, 12 }, /* FU, Input Gain Pad */
++/* ASUS ROG Zenith II with Realtek ALC1220-VB */
++static const struct usbmix_name_map asus_zenith_ii_map[] = {
++	{ 19, NULL, 12 }, /* FU, Input Gain Pad - broken response, disabled */
++	{ 16, "Speaker" },		/* OT */
++	{ 22, "Speaker Playback" },	/* FU */
++	{ 7, "Line" },			/* IT */
++	{ 19, "Line Capture" },		/* FU */
++	{ 8, "Mic" },			/* IT */
++	{ 20, "Mic Capture" },		/* FU */
++	{ 9, "Front Mic" },		/* IT */
++	{ 21, "Front Mic Capture" },	/* FU */
++	{ 17, "IEC958" },		/* OT */
++	{ 23, "IEC958 Playback" },	/* FU */
++	{}
++};
++
++static const struct usbmix_connector_map asus_zenith_ii_connector_map[] = {
++	{ 10, 16 },	/* (Back) Speaker */
++	{ 11, 17 },	/* SPDIF */
++	{ 13, 7 },	/* Line */
++	{ 14, 8 },	/* Mic */
++	{ 15, 9 },	/* Front Mic */
+ 	{}
+ };
+ 
+@@ -590,9 +605,10 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.map = trx40_mobo_map,
+ 		.connector_map = trx40_mobo_connector_map,
+ 	},
+-	{	/* ASUS ROG Zenith II */
++	{	/* ASUS ROG Zenith II (main audio) */
+ 		.id = USB_ID(0x0b05, 0x1916),
+-		.map = asus_rog_map,
++		.map = asus_zenith_ii_map,
++		.connector_map = asus_zenith_ii_connector_map,
+ 	},
+ 	{	/* ASUS ROG Strix */
+ 		.id = USB_ID(0x0b05, 0x1917),
+diff --git a/tools/build/feature/test-libcrypto.c b/tools/build/feature/test-libcrypto.c
+index a98174e0569c8..bc34a5bbb5049 100644
+--- a/tools/build/feature/test-libcrypto.c
++++ b/tools/build/feature/test-libcrypto.c
+@@ -1,16 +1,23 @@
+ // SPDX-License-Identifier: GPL-2.0
++#include <openssl/evp.h>
+ #include <openssl/sha.h>
+ #include <openssl/md5.h>
+ 
+ int main(void)
+ {
+-	MD5_CTX context;
++	EVP_MD_CTX *mdctx;
+ 	unsigned char md[MD5_DIGEST_LENGTH + SHA_DIGEST_LENGTH];
+ 	unsigned char dat[] = "12345";
++	unsigned int digest_len;
+ 
+-	MD5_Init(&context);
+-	MD5_Update(&context, &dat[0], sizeof(dat));
+-	MD5_Final(&md[0], &context);
++	mdctx = EVP_MD_CTX_new();
++	if (!mdctx)
++		return 0;
++
++	EVP_DigestInit_ex(mdctx, EVP_md5(), NULL);
++	EVP_DigestUpdate(mdctx, &dat[0], sizeof(dat));
++	EVP_DigestFinal_ex(mdctx, &md[0], &digest_len);
++	EVP_MD_CTX_free(mdctx);
+ 
+ 	SHA1(&dat[0], sizeof(dat), &md[0]);
+ 
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index d103084fcd56c..97e2a72bd6f5e 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -1760,8 +1760,10 @@ int parse_perf_probe_command(const char *cmd, struct perf_probe_event *pev)
+ 	if (!pev->event && pev->point.function && pev->point.line
+ 			&& !pev->point.lazy_line && !pev->point.offset) {
+ 		if (asprintf(&pev->event, "%s_L%d", pev->point.function,
+-			pev->point.line) < 0)
+-			return -ENOMEM;
++			pev->point.line) < 0) {
++			ret = -ENOMEM;
++			goto out;
++		}
+ 	}
+ 
+ 	/* Copy arguments and ensure return probe has no C argument */
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
+index fa928b431555c..7c02509c71d0a 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
+@@ -21,7 +21,6 @@ check_error 'p:^/bar vfs_read'		# NO_GROUP_NAME
+ check_error 'p:^12345678901234567890123456789012345678901234567890123456789012345/bar vfs_read'	# GROUP_TOO_LONG
+ 
+ check_error 'p:^foo.1/bar vfs_read'	# BAD_GROUP_NAME
+-check_error 'p:foo/^ vfs_read'		# NO_EVENT_NAME
+ check_error 'p:foo/^12345678901234567890123456789012345678901234567890123456789012345 vfs_read'	# EVENT_TOO_LONG
+ check_error 'p:foo/^bar.1 vfs_read'	# BAD_EVENT_NAME
+ 
+diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
+index 9b68658b6bb85..3ae985dc24b6d 100644
+--- a/tools/vm/slabinfo.c
++++ b/tools/vm/slabinfo.c
+@@ -125,7 +125,7 @@ static void usage(void)
+ 		"-n|--numa              Show NUMA information\n"
+ 		"-N|--lines=K           Show the first K slabs\n"
+ 		"-o|--ops               Show kmem_cache_ops\n"
+-		"-P|--partial		Sort by number of partial slabs\n"
++		"-P|--partial           Sort by number of partial slabs\n"
+ 		"-r|--report            Detailed report on single slabs\n"
+ 		"-s|--shrink            Shrink slabs\n"
+ 		"-S|--Size              Sort by size\n"
+@@ -1045,15 +1045,27 @@ static void sort_slabs(void)
+ 		for (s2 = s1 + 1; s2 < slabinfo + slabs; s2++) {
+ 			int result;
+ 
+-			if (sort_size)
+-				result = slab_size(s1) < slab_size(s2);
+-			else if (sort_active)
+-				result = slab_activity(s1) < slab_activity(s2);
+-			else if (sort_loss)
+-				result = slab_waste(s1) < slab_waste(s2);
+-			else if (sort_partial)
+-				result = s1->partial < s2->partial;
+-			else
++			if (sort_size) {
++				if (slab_size(s1) == slab_size(s2))
++					result = strcasecmp(s1->name, s2->name);
++				else
++					result = slab_size(s1) < slab_size(s2);
++			} else if (sort_active) {
++				if (slab_activity(s1) == slab_activity(s2))
++					result = strcasecmp(s1->name, s2->name);
++				else
++					result = slab_activity(s1) < slab_activity(s2);
++			} else if (sort_loss) {
++				if (slab_waste(s1) == slab_waste(s2))
++					result = strcasecmp(s1->name, s2->name);
++				else
++					result = slab_waste(s1) < slab_waste(s2);
++			} else if (sort_partial) {
++				if (s1->partial == s2->partial)
++					result = strcasecmp(s1->name, s2->name);
++				else
++					result = s1->partial < s2->partial;
++			} else
+ 				result = strcasecmp(s1->name, s2->name);
+ 
+ 			if (show_inverted)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-08-29 10:46 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-08-29 10:46 UTC (permalink / raw
  To: gentoo-commits

commit:     6732004d4061d4e84ac779b81796ab9ed3d9da34
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug 29 10:45:43 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Aug 29 10:45:43 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6732004d

Linux patch 5.10.139

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |  4 ++++
 1138_linux-5.10.139.patch | 16 ++++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/0000_README b/0000_README
index 8ea943aa..2f48b8ae 100644
--- a/0000_README
+++ b/0000_README
@@ -595,6 +595,10 @@ Patch:  1137_linux-5.10.138.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.138
 
+Patch:  1138_linux-5.10.139.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.139
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1138_linux-5.10.139.patch b/1138_linux-5.10.139.patch
new file mode 100644
index 00000000..8d1e360f
--- /dev/null
+++ b/1138_linux-5.10.139.patch
@@ -0,0 +1,16 @@
+diff --git a/Makefile b/Makefile
+index 234c8032c2b4a..48140575f960b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 138
++SUBLEVEL = 139
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/scripts/dummy-tools/dummy-plugin-dir/include/plugin-version.h b/scripts/dummy-tools/dummy-plugin-dir/include/plugin-version.h
+new file mode 100644
+index 0000000000000..e69de29bb2d1d


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-08-31 15:39 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-08-31 15:39 UTC (permalink / raw
  To: gentoo-commits

commit:     92fb566de11ac6624545feff2c5ffe8a51627926
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 31 15:39:12 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 31 15:39:12 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=92fb566d

Linux patch 5.10.140

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1139_linux-5.10.140.patch | 3424 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3428 insertions(+)

diff --git a/0000_README b/0000_README
index 2f48b8ae..f5306e89 100644
--- a/0000_README
+++ b/0000_README
@@ -599,6 +599,10 @@ Patch:  1138_linux-5.10.139.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.139
 
+Patch:  1139_linux-5.10.140.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.140
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1139_linux-5.10.140.patch b/1139_linux-5.10.140.patch
new file mode 100644
index 00000000..d204d073
--- /dev/null
+++ b/1139_linux-5.10.140.patch
@@ -0,0 +1,3424 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 44c6e57303988..500d5d8937cbb 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -511,6 +511,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+ 		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
+ 		/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
++		/sys/devices/system/cpu/vulnerabilities/retbleed
+ Date:		January 2018
+ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description:	Information about CPU vulnerabilities
+diff --git a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+index 9393c50b5afc9..c98fd11907cc8 100644
+--- a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
++++ b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+@@ -230,6 +230,20 @@ The possible values in this file are:
+      * - 'Mitigation: Clear CPU buffers'
+        - The processor is vulnerable and the CPU buffer clearing mitigation is
+          enabled.
++     * - 'Unknown: No mitigations'
++       - The processor vulnerability status is unknown because it is
++	 out of Servicing period. Mitigation is not attempted.
++
++Definitions:
++------------
++
++Servicing period: The process of providing functional and security updates to
++Intel processors or platforms, utilizing the Intel Platform Update (IPU)
++process or other similar mechanisms.
++
++End of Servicing Updates (ESU): ESU is the date at which Intel will no
++longer provide Servicing, such as through IPU or other similar update
++processes. ESU dates will typically be aligned to end of quarter.
+ 
+ If the processor is vulnerable then the following information is appended to
+ the above information:
+diff --git a/Documentation/admin-guide/sysctl/net.rst b/Documentation/admin-guide/sysctl/net.rst
+index f2ab8a5b6a4b8..7f553859dba82 100644
+--- a/Documentation/admin-guide/sysctl/net.rst
++++ b/Documentation/admin-guide/sysctl/net.rst
+@@ -271,7 +271,7 @@ poll cycle or the number of packets processed reaches netdev_budget.
+ netdev_max_backlog
+ ------------------
+ 
+-Maximum number  of  packets,  queued  on  the  INPUT  side, when the interface
++Maximum number of packets, queued on the INPUT side, when the interface
+ receives packets faster than kernel can process them.
+ 
+ netdev_rss_key
+diff --git a/Makefile b/Makefile
+index 48140575f960b..a80179d2c0057 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 139
++SUBLEVEL = 140
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index ca42d58e8c821..78263dadd00da 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -220,6 +220,8 @@ static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = {
+ #ifdef CONFIG_ARM64_ERRATUM_1286807
+ 	{
+ 		ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 0),
++	},
++	{
+ 		/* Kryo4xx Gold (rcpe to rfpe) => (r0p0 to r3p0) */
+ 		ERRATA_MIDR_RANGE(MIDR_QCOM_KRYO_4XX_GOLD, 0xc, 0xe, 0xf, 0xe),
+ 	},
+diff --git a/arch/parisc/kernel/unaligned.c b/arch/parisc/kernel/unaligned.c
+index 286cec4d86d7b..cc6ed74960501 100644
+--- a/arch/parisc/kernel/unaligned.c
++++ b/arch/parisc/kernel/unaligned.c
+@@ -107,7 +107,7 @@
+ #define R1(i) (((i)>>21)&0x1f)
+ #define R2(i) (((i)>>16)&0x1f)
+ #define R3(i) ((i)&0x1f)
+-#define FR3(i) ((((i)<<1)&0x1f)|(((i)>>6)&1))
++#define FR3(i) ((((i)&0x1f)<<1)|(((i)>>6)&1))
+ #define IM(i,n) (((i)>>1&((1<<(n-1))-1))|((i)&1?((0-1L)<<(n-1)):0))
+ #define IM5_2(i) IM((i)>>16,5)
+ #define IM5_3(i) IM((i),5)
+diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
+index ec801d3bbb37a..137a170f47d4f 100644
+--- a/arch/s390/kernel/process.c
++++ b/arch/s390/kernel/process.c
+@@ -77,6 +77,18 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+ 
+ 	memcpy(dst, src, arch_task_struct_size);
+ 	dst->thread.fpu.regs = dst->thread.fpu.fprs;
++
++	/*
++	 * Don't transfer over the runtime instrumentation or the guarded
++	 * storage control block pointers. These fields are cleared here instead
++	 * of in copy_thread() to avoid premature freeing of associated memory
++	 * on fork() failure. Wait to clear the RI flag because ->stack still
++	 * refers to the source thread.
++	 */
++	dst->thread.ri_cb = NULL;
++	dst->thread.gs_cb = NULL;
++	dst->thread.gs_bc_cb = NULL;
++
+ 	return 0;
+ }
+ 
+@@ -134,13 +146,11 @@ int copy_thread(unsigned long clone_flags, unsigned long new_stackp,
+ 	frame->childregs.flags = 0;
+ 	if (new_stackp)
+ 		frame->childregs.gprs[15] = new_stackp;
+-
+-	/* Don't copy runtime instrumentation info */
+-	p->thread.ri_cb = NULL;
++	/*
++	 * Clear the runtime instrumentation flag after the above childregs
++	 * copy. The CB pointer was already cleared in arch_dup_task_struct().
++	 */
+ 	frame->childregs.psw.mask &= ~PSW_MASK_RI;
+-	/* Don't copy guarded storage control block */
+-	p->thread.gs_cb = NULL;
+-	p->thread.gs_bc_cb = NULL;
+ 
+ 	/* Set a new TLS ?  */
+ 	if (clone_flags & CLONE_SETTLS) {
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index bd8516e6c353c..42173a7be3bb4 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -1114,6 +1114,14 @@ static int intel_pmu_setup_hw_lbr_filter(struct perf_event *event)
+ 
+ 	if (static_cpu_has(X86_FEATURE_ARCH_LBR)) {
+ 		reg->config = mask;
++
++		/*
++		 * The Arch LBR HW can retrieve the common branch types
++		 * from the LBR_INFO. It doesn't require the high overhead
++		 * SW disassemble.
++		 * Enable the branch type by default for the Arch LBR.
++		 */
++		reg->reg |= X86_BR_TYPE_SAVE;
+ 		return 0;
+ 	}
+ 
+diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
+index bbd1120ae1610..fa9289718147a 100644
+--- a/arch/x86/events/intel/uncore_snb.c
++++ b/arch/x86/events/intel/uncore_snb.c
+@@ -657,6 +657,22 @@ int snb_pci2phy_map_init(int devid)
+ 	return 0;
+ }
+ 
++static u64 snb_uncore_imc_read_counter(struct intel_uncore_box *box, struct perf_event *event)
++{
++	struct hw_perf_event *hwc = &event->hw;
++
++	/*
++	 * SNB IMC counters are 32-bit and are laid out back to back
++	 * in MMIO space. Therefore we must use a 32-bit accessor function
++	 * using readq() from uncore_mmio_read_counter() causes problems
++	 * because it is reading 64-bit at a time. This is okay for the
++	 * uncore_perf_event_update() function because it drops the upper
++	 * 32-bits but not okay for plain uncore_read_counter() as invoked
++	 * in uncore_pmu_event_start().
++	 */
++	return (u64)readl(box->io_addr + hwc->event_base);
++}
++
+ static struct pmu snb_uncore_imc_pmu = {
+ 	.task_ctx_nr	= perf_invalid_context,
+ 	.event_init	= snb_uncore_imc_event_init,
+@@ -676,7 +692,7 @@ static struct intel_uncore_ops snb_uncore_imc_ops = {
+ 	.disable_event	= snb_uncore_imc_disable_event,
+ 	.enable_event	= snb_uncore_imc_enable_event,
+ 	.hw_config	= snb_uncore_imc_hw_config,
+-	.read_counter	= uncore_mmio_read_counter,
++	.read_counter	= snb_uncore_imc_read_counter,
+ };
+ 
+ static struct intel_uncore_type snb_uncore_imc = {
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 37ba0cdf99aa8..f507ad7c7fd7b 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -429,7 +429,8 @@
+ #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
+ #define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
+ #define X86_BUG_MMIO_STALE_DATA		X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
+-#define X86_BUG_RETBLEED		X86_BUG(26) /* CPU is affected by RETBleed */
+-#define X86_BUG_EIBRS_PBRSB		X86_BUG(27) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
++#define X86_BUG_MMIO_UNKNOWN		X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
++#define X86_BUG_RETBLEED		X86_BUG(27) /* CPU is affected by RETBleed */
++#define X86_BUG_EIBRS_PBRSB		X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index aa4ee46f00ce5..a300a19255b66 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -424,7 +424,8 @@ static void __init mmio_select_mitigation(void)
+ 	u64 ia32_cap;
+ 
+ 	if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) ||
+-	    cpu_mitigations_off()) {
++	     boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN) ||
++	     cpu_mitigations_off()) {
+ 		mmio_mitigation = MMIO_MITIGATION_OFF;
+ 		return;
+ 	}
+@@ -529,6 +530,8 @@ out:
+ 		pr_info("TAA: %s\n", taa_strings[taa_mitigation]);
+ 	if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
+ 		pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]);
++	else if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
++		pr_info("MMIO Stale Data: Unknown: No mitigations\n");
+ }
+ 
+ static void __init md_clear_select_mitigation(void)
+@@ -2198,6 +2201,9 @@ static ssize_t tsx_async_abort_show_state(char *buf)
+ 
+ static ssize_t mmio_stale_data_show_state(char *buf)
+ {
++	if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
++		return sysfs_emit(buf, "Unknown: No mitigations\n");
++
+ 	if (mmio_mitigation == MMIO_MITIGATION_OFF)
+ 		return sysfs_emit(buf, "%s\n", mmio_strings[mmio_mitigation]);
+ 
+@@ -2344,6 +2350,7 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 		return srbds_show_state(buf);
+ 
+ 	case X86_BUG_MMIO_STALE_DATA:
++	case X86_BUG_MMIO_UNKNOWN:
+ 		return mmio_stale_data_show_state(buf);
+ 
+ 	case X86_BUG_RETBLEED:
+@@ -2403,7 +2410,10 @@ ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *
+ 
+ ssize_t cpu_show_mmio_stale_data(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+-	return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_STALE_DATA);
++	if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
++		return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_UNKNOWN);
++	else
++		return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_STALE_DATA);
+ }
+ 
+ ssize_t cpu_show_retbleed(struct device *dev, struct device_attribute *attr, char *buf)
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 9fc91482e85e3..56573241d0293 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1024,7 +1024,8 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
+ #define NO_SWAPGS		BIT(6)
+ #define NO_ITLB_MULTIHIT	BIT(7)
+ #define NO_SPECTRE_V2		BIT(8)
+-#define NO_EIBRS_PBRSB		BIT(9)
++#define NO_MMIO			BIT(9)
++#define NO_EIBRS_PBRSB		BIT(10)
+ 
+ #define VULNWL(vendor, family, model, whitelist)	\
+ 	X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, whitelist)
+@@ -1045,6 +1046,11 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ 	VULNWL(NSC,	5, X86_MODEL_ANY,	NO_SPECULATION),
+ 
+ 	/* Intel Family 6 */
++	VULNWL_INTEL(TIGERLAKE,			NO_MMIO),
++	VULNWL_INTEL(TIGERLAKE_L,		NO_MMIO),
++	VULNWL_INTEL(ALDERLAKE,			NO_MMIO),
++	VULNWL_INTEL(ALDERLAKE_L,		NO_MMIO),
++
+ 	VULNWL_INTEL(ATOM_SALTWELL,		NO_SPECULATION | NO_ITLB_MULTIHIT),
+ 	VULNWL_INTEL(ATOM_SALTWELL_TABLET,	NO_SPECULATION | NO_ITLB_MULTIHIT),
+ 	VULNWL_INTEL(ATOM_SALTWELL_MID,		NO_SPECULATION | NO_ITLB_MULTIHIT),
+@@ -1063,9 +1069,9 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ 	VULNWL_INTEL(ATOM_AIRMONT_MID,		NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
+ 	VULNWL_INTEL(ATOM_AIRMONT_NP,		NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+ 
+-	VULNWL_INTEL(ATOM_GOLDMONT,		NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+-	VULNWL_INTEL(ATOM_GOLDMONT_D,		NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+-	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB),
++	VULNWL_INTEL(ATOM_GOLDMONT,		NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++	VULNWL_INTEL(ATOM_GOLDMONT_D,		NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),
+ 
+ 	/*
+ 	 * Technically, swapgs isn't serializing on AMD (despite it previously
+@@ -1080,18 +1086,18 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ 	VULNWL_INTEL(ATOM_TREMONT_D,		NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB),
+ 
+ 	/* AMD Family 0xf - 0x12 */
+-	VULNWL_AMD(0x0f,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+-	VULNWL_AMD(0x10,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+-	VULNWL_AMD(0x11,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+-	VULNWL_AMD(0x12,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
++	VULNWL_AMD(0x0f,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++	VULNWL_AMD(0x10,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++	VULNWL_AMD(0x11,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++	VULNWL_AMD(0x12,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+ 
+ 	/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
+-	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+-	VULNWL_HYGON(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
++	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++	VULNWL_HYGON(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+ 
+ 	/* Zhaoxin Family 7 */
+-	VULNWL(CENTAUR,	7, X86_MODEL_ANY,	NO_SPECTRE_V2 | NO_SWAPGS),
+-	VULNWL(ZHAOXIN,	7, X86_MODEL_ANY,	NO_SPECTRE_V2 | NO_SWAPGS),
++	VULNWL(CENTAUR,	7, X86_MODEL_ANY,	NO_SPECTRE_V2 | NO_SWAPGS | NO_MMIO),
++	VULNWL(ZHAOXIN,	7, X86_MODEL_ANY,	NO_SPECTRE_V2 | NO_SWAPGS | NO_MMIO),
+ 	{}
+ };
+ 
+@@ -1245,10 +1251,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	 * Affected CPU list is generally enough to enumerate the vulnerability,
+ 	 * but for virtualization case check for ARCH_CAP MSR bits also, VMM may
+ 	 * not want the guest to enumerate the bug.
++	 *
++	 * Set X86_BUG_MMIO_UNKNOWN for CPUs that are neither in the blacklist,
++	 * nor in the whitelist and also don't enumerate MSR ARCH_CAP MMIO bits.
+ 	 */
+-	if (cpu_matches(cpu_vuln_blacklist, MMIO) &&
+-	    !arch_cap_mmio_immune(ia32_cap))
+-		setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA);
++	if (!arch_cap_mmio_immune(ia32_cap)) {
++		if (cpu_matches(cpu_vuln_blacklist, MMIO))
++			setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA);
++		else if (!cpu_matches(cpu_vuln_whitelist, NO_MMIO))
++			setup_force_cpu_bug(X86_BUG_MMIO_UNKNOWN);
++	}
+ 
+ 	if (!cpu_has(c, X86_FEATURE_BTC_NO)) {
+ 		if (cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RSBA))
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index c451d5f6422f6..cc071c4c65240 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -93,22 +93,27 @@ static struct orc_entry *orc_find(unsigned long ip);
+ static struct orc_entry *orc_ftrace_find(unsigned long ip)
+ {
+ 	struct ftrace_ops *ops;
+-	unsigned long caller;
++	unsigned long tramp_addr, offset;
+ 
+ 	ops = ftrace_ops_trampoline(ip);
+ 	if (!ops)
+ 		return NULL;
+ 
++	/* Set tramp_addr to the start of the code copied by the trampoline */
+ 	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS)
+-		caller = (unsigned long)ftrace_regs_call;
++		tramp_addr = (unsigned long)ftrace_regs_caller;
+ 	else
+-		caller = (unsigned long)ftrace_call;
++		tramp_addr = (unsigned long)ftrace_caller;
++
++	/* Now place tramp_addr to the location within the trampoline ip is at */
++	offset = ip - ops->trampoline;
++	tramp_addr += offset;
+ 
+ 	/* Prevent unlikely recursion */
+-	if (ip == caller)
++	if (ip == tramp_addr)
+ 		return NULL;
+ 
+-	return orc_find(caller);
++	return orc_find(tramp_addr);
+ }
+ #else
+ static struct orc_entry *orc_ftrace_find(unsigned long ip)
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 90f64bb42fbd1..cfc039fabf8ce 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1402,7 +1402,8 @@ out:
+ 	/* If we didn't flush the entire list, we could have told the driver
+ 	 * there was more coming, but that turned out to be a lie.
+ 	 */
+-	if ((!list_empty(list) || errors) && q->mq_ops->commit_rqs && queued)
++	if ((!list_empty(list) || errors || needs_resource ||
++	     ret == BLK_STS_DEV_RESOURCE) && q->mq_ops->commit_rqs && queued)
+ 		q->mq_ops->commit_rqs(hctx);
+ 	/*
+ 	 * Any items that need requeuing? Stuff them into hctx->dispatch,
+@@ -2080,6 +2081,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ 		list_del_init(&rq->queuelist);
+ 		ret = blk_mq_request_issue_directly(rq, list_empty(list));
+ 		if (ret != BLK_STS_OK) {
++			errors++;
+ 			if (ret == BLK_STS_RESOURCE ||
+ 					ret == BLK_STS_DEV_RESOURCE) {
+ 				blk_mq_request_bypass_insert(rq, false,
+@@ -2087,7 +2089,6 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ 				break;
+ 			}
+ 			blk_mq_end_request(rq, ret);
+-			errors++;
+ 		} else
+ 			queued++;
+ 	}
+diff --git a/drivers/acpi/processor_thermal.c b/drivers/acpi/processor_thermal.c
+index 6c7d05b37c986..7df0c6e3ba63c 100644
+--- a/drivers/acpi/processor_thermal.c
++++ b/drivers/acpi/processor_thermal.c
+@@ -148,7 +148,7 @@ void acpi_thermal_cpufreq_exit(struct cpufreq_policy *policy)
+ 	unsigned int cpu;
+ 
+ 	for_each_cpu(cpu, policy->related_cpus) {
+-		struct acpi_processor *pr = per_cpu(processors, policy->cpu);
++		struct acpi_processor *pr = per_cpu(processors, cpu);
+ 
+ 		if (pr)
+ 			freq_qos_remove_request(&pr->thermal_req);
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index e4517d483bdc3..b10410585a746 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1031,6 +1031,11 @@ loop_set_status_from_info(struct loop_device *lo,
+ 
+ 	lo->lo_offset = info->lo_offset;
+ 	lo->lo_sizelimit = info->lo_sizelimit;
++
++	/* loff_t vars have been assigned __u64 */
++	if (lo->lo_offset < 0 || lo->lo_sizelimit < 0)
++		return -EOVERFLOW;
++
+ 	memcpy(lo->lo_file_name, info->lo_file_name, LO_NAME_SIZE);
+ 	memcpy(lo->lo_crypt_name, info->lo_crypt_name, LO_NAME_SIZE);
+ 	lo->lo_file_name[LO_NAME_SIZE-1] = 0;
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 884317ee1759f..0043dec37a870 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -6278,11 +6278,11 @@ static void mddev_detach(struct mddev *mddev)
+ static void __md_stop(struct mddev *mddev)
+ {
+ 	struct md_personality *pers = mddev->pers;
++	md_bitmap_destroy(mddev);
+ 	mddev_detach(mddev);
+ 	/* Ensure ->event_work is done */
+ 	if (mddev->event_work.func)
+ 		flush_workqueue(md_misc_wq);
+-	md_bitmap_destroy(mddev);
+ 	spin_lock(&mddev->lock);
+ 	mddev->pers = NULL;
+ 	spin_unlock(&mddev->lock);
+@@ -6299,6 +6299,7 @@ void md_stop(struct mddev *mddev)
+ 	/* stop the array and free an attached data structures.
+ 	 * This is called from dm-raid
+ 	 */
++	__md_stop_writes(mddev);
+ 	__md_stop(mddev);
+ 	bioset_exit(&mddev->bio_set);
+ 	bioset_exit(&mddev->sync_set);
+diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
+index 325b20729d8ba..b0f8d551b61db 100644
+--- a/drivers/net/bonding/bond_3ad.c
++++ b/drivers/net/bonding/bond_3ad.c
+@@ -1988,30 +1988,24 @@ void bond_3ad_initiate_agg_selection(struct bonding *bond, int timeout)
+  */
+ void bond_3ad_initialize(struct bonding *bond, u16 tick_resolution)
+ {
+-	/* check that the bond is not initialized yet */
+-	if (!MAC_ADDRESS_EQUAL(&(BOND_AD_INFO(bond).system.sys_mac_addr),
+-				bond->dev->dev_addr)) {
+-
+-		BOND_AD_INFO(bond).aggregator_identifier = 0;
+-
+-		BOND_AD_INFO(bond).system.sys_priority =
+-			bond->params.ad_actor_sys_prio;
+-		if (is_zero_ether_addr(bond->params.ad_actor_system))
+-			BOND_AD_INFO(bond).system.sys_mac_addr =
+-			    *((struct mac_addr *)bond->dev->dev_addr);
+-		else
+-			BOND_AD_INFO(bond).system.sys_mac_addr =
+-			    *((struct mac_addr *)bond->params.ad_actor_system);
++	BOND_AD_INFO(bond).aggregator_identifier = 0;
++	BOND_AD_INFO(bond).system.sys_priority =
++		bond->params.ad_actor_sys_prio;
++	if (is_zero_ether_addr(bond->params.ad_actor_system))
++		BOND_AD_INFO(bond).system.sys_mac_addr =
++		    *((struct mac_addr *)bond->dev->dev_addr);
++	else
++		BOND_AD_INFO(bond).system.sys_mac_addr =
++		    *((struct mac_addr *)bond->params.ad_actor_system);
+ 
+-		/* initialize how many times this module is called in one
+-		 * second (should be about every 100ms)
+-		 */
+-		ad_ticks_per_sec = tick_resolution;
++	/* initialize how many times this module is called in one
++	 * second (should be about every 100ms)
++	 */
++	ad_ticks_per_sec = tick_resolution;
+ 
+-		bond_3ad_initiate_agg_selection(bond,
+-						AD_AGGREGATOR_SELECTION_TIMER *
+-						ad_ticks_per_sec);
+-	}
++	bond_3ad_initiate_agg_selection(bond,
++					AD_AGGREGATOR_SELECTION_TIMER *
++					ad_ticks_per_sec);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+index 23b80aa171dd0..819f9df9425c6 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+@@ -599,7 +599,7 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs, bool reset)
+ 		hw_resc->max_stat_ctxs -= le16_to_cpu(req.min_stat_ctx) * n;
+ 		hw_resc->max_vnics -= le16_to_cpu(req.min_vnics) * n;
+ 		if (bp->flags & BNXT_FLAG_CHIP_P5)
+-			hw_resc->max_irqs -= vf_msix * n;
++			hw_resc->max_nqs -= vf_msix;
+ 
+ 		rc = pf->active_vfs;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index 5733526fa245c..59963b901be0f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -371,6 +371,19 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
+ 	bool if_running, pool_present = !!pool;
+ 	int ret = 0, pool_failure = 0;
+ 
++	if (qid >= vsi->num_rxq || qid >= vsi->num_txq) {
++		netdev_err(vsi->netdev, "Please use queue id in scope of combined queues count\n");
++		pool_failure = -EINVAL;
++		goto failure;
++	}
++
++	if (!is_power_of_2(vsi->rx_rings[qid]->count) ||
++	    !is_power_of_2(vsi->tx_rings[qid]->count)) {
++		netdev_err(vsi->netdev, "Please align ring sizes to power of 2\n");
++		pool_failure = -EINVAL;
++		goto failure;
++	}
++
+ 	if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi);
+ 
+ 	if (if_running) {
+@@ -393,6 +406,7 @@ xsk_pool_if_up:
+ 			netdev_err(vsi->netdev, "ice_qp_ena error = %d\n", ret);
+ 	}
+ 
++failure:
+ 	if (pool_failure) {
+ 		netdev_err(vsi->netdev, "Could not %sable buffer pool, error = %d\n",
+ 			   pool_present ? "en" : "dis", pool_failure);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
+index 22a874eee2e84..8b7f300355710 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
+@@ -1211,7 +1211,6 @@ void ixgbe_ptp_start_cyclecounter(struct ixgbe_adapter *adapter)
+ 	struct cyclecounter cc;
+ 	unsigned long flags;
+ 	u32 incval = 0;
+-	u32 tsauxc = 0;
+ 	u32 fuse0 = 0;
+ 
+ 	/* For some of the boards below this mask is technically incorrect.
+@@ -1246,18 +1245,6 @@ void ixgbe_ptp_start_cyclecounter(struct ixgbe_adapter *adapter)
+ 	case ixgbe_mac_x550em_a:
+ 	case ixgbe_mac_X550:
+ 		cc.read = ixgbe_ptp_read_X550;
+-
+-		/* enable SYSTIME counter */
+-		IXGBE_WRITE_REG(hw, IXGBE_SYSTIMR, 0);
+-		IXGBE_WRITE_REG(hw, IXGBE_SYSTIML, 0);
+-		IXGBE_WRITE_REG(hw, IXGBE_SYSTIMH, 0);
+-		tsauxc = IXGBE_READ_REG(hw, IXGBE_TSAUXC);
+-		IXGBE_WRITE_REG(hw, IXGBE_TSAUXC,
+-				tsauxc & ~IXGBE_TSAUXC_DISABLE_SYSTIME);
+-		IXGBE_WRITE_REG(hw, IXGBE_TSIM, IXGBE_TSIM_TXTS);
+-		IXGBE_WRITE_REG(hw, IXGBE_EIMS, IXGBE_EIMS_TIMESYNC);
+-
+-		IXGBE_WRITE_FLUSH(hw);
+ 		break;
+ 	case ixgbe_mac_X540:
+ 		cc.read = ixgbe_ptp_read_82599;
+@@ -1289,6 +1276,50 @@ void ixgbe_ptp_start_cyclecounter(struct ixgbe_adapter *adapter)
+ 	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+ }
+ 
++/**
++ * ixgbe_ptp_init_systime - Initialize SYSTIME registers
++ * @adapter: the ixgbe private board structure
++ *
++ * Initialize and start the SYSTIME registers.
++ */
++static void ixgbe_ptp_init_systime(struct ixgbe_adapter *adapter)
++{
++	struct ixgbe_hw *hw = &adapter->hw;
++	u32 tsauxc;
++
++	switch (hw->mac.type) {
++	case ixgbe_mac_X550EM_x:
++	case ixgbe_mac_x550em_a:
++	case ixgbe_mac_X550:
++		tsauxc = IXGBE_READ_REG(hw, IXGBE_TSAUXC);
++
++		/* Reset SYSTIME registers to 0 */
++		IXGBE_WRITE_REG(hw, IXGBE_SYSTIMR, 0);
++		IXGBE_WRITE_REG(hw, IXGBE_SYSTIML, 0);
++		IXGBE_WRITE_REG(hw, IXGBE_SYSTIMH, 0);
++
++		/* Reset interrupt settings */
++		IXGBE_WRITE_REG(hw, IXGBE_TSIM, IXGBE_TSIM_TXTS);
++		IXGBE_WRITE_REG(hw, IXGBE_EIMS, IXGBE_EIMS_TIMESYNC);
++
++		/* Activate the SYSTIME counter */
++		IXGBE_WRITE_REG(hw, IXGBE_TSAUXC,
++				tsauxc & ~IXGBE_TSAUXC_DISABLE_SYSTIME);
++		break;
++	case ixgbe_mac_X540:
++	case ixgbe_mac_82599EB:
++		/* Reset SYSTIME registers to 0 */
++		IXGBE_WRITE_REG(hw, IXGBE_SYSTIML, 0);
++		IXGBE_WRITE_REG(hw, IXGBE_SYSTIMH, 0);
++		break;
++	default:
++		/* Other devices aren't supported */
++		return;
++	};
++
++	IXGBE_WRITE_FLUSH(hw);
++}
++
+ /**
+  * ixgbe_ptp_reset
+  * @adapter: the ixgbe private board structure
+@@ -1315,6 +1346,8 @@ void ixgbe_ptp_reset(struct ixgbe_adapter *adapter)
+ 
+ 	ixgbe_ptp_start_cyclecounter(adapter);
+ 
++	ixgbe_ptp_init_systime(adapter);
++
+ 	spin_lock_irqsave(&adapter->tmreg_lock, flags);
+ 	timecounter_init(&adapter->hw_tc, &adapter->hw_cc,
+ 			 ktime_to_ns(ktime_get_real()));
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 304435e561170..b991f03c7e991 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -706,6 +706,8 @@ static void mlx5e_build_rep_params(struct net_device *netdev)
+ 
+ 	params->num_tc                = 1;
+ 	params->tunneled_offload_en = false;
++	if (rep->vport != MLX5_VPORT_UPLINK)
++		params->vlan_strip_disable = true;
+ 
+ 	mlx5_query_min_inline(mdev, &params->tx_min_inline_mode);
+ 
+diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
+index 6137000b11c5c..73aac97fb5c96 100644
+--- a/drivers/net/ethernet/moxa/moxart_ether.c
++++ b/drivers/net/ethernet/moxa/moxart_ether.c
+@@ -74,11 +74,6 @@ static int moxart_set_mac_address(struct net_device *ndev, void *addr)
+ static void moxart_mac_free_memory(struct net_device *ndev)
+ {
+ 	struct moxart_mac_priv_t *priv = netdev_priv(ndev);
+-	int i;
+-
+-	for (i = 0; i < RX_DESC_NUM; i++)
+-		dma_unmap_single(&priv->pdev->dev, priv->rx_mapping[i],
+-				 priv->rx_buf_size, DMA_FROM_DEVICE);
+ 
+ 	if (priv->tx_desc_base)
+ 		dma_free_coherent(&priv->pdev->dev,
+@@ -193,6 +188,7 @@ static int moxart_mac_open(struct net_device *ndev)
+ static int moxart_mac_stop(struct net_device *ndev)
+ {
+ 	struct moxart_mac_priv_t *priv = netdev_priv(ndev);
++	int i;
+ 
+ 	napi_disable(&priv->napi);
+ 
+@@ -204,6 +200,11 @@ static int moxart_mac_stop(struct net_device *ndev)
+ 	/* disable all functions */
+ 	writel(0, priv->base + REG_MAC_CTRL);
+ 
++	/* unmap areas mapped in moxart_mac_setup_desc_ring() */
++	for (i = 0; i < RX_DESC_NUM; i++)
++		dma_unmap_single(&priv->pdev->dev, priv->rx_mapping[i],
++				 priv->rx_buf_size, DMA_FROM_DEVICE);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+index e14869a2e24a5..f60ffef33e0ce 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+@@ -378,8 +378,8 @@ try_again:
+ 				ionic_opcode_to_str(opcode), opcode,
+ 				ionic_error_to_str(err), err);
+ 
+-			msleep(1000);
+ 			iowrite32(0, &idev->dev_cmd_regs->done);
++			msleep(1000);
+ 			iowrite32(1, &idev->dev_cmd_regs->doorbell);
+ 			goto try_again;
+ 		}
+@@ -392,6 +392,8 @@ try_again:
+ 		return ionic_error_to_errno(err);
+ 	}
+ 
++	ionic_dev_cmd_clean(ionic);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
+index a78d66051a17d..25a8d029f2075 100644
+--- a/drivers/net/ipa/ipa_mem.c
++++ b/drivers/net/ipa/ipa_mem.c
+@@ -414,7 +414,7 @@ static int ipa_smem_init(struct ipa *ipa, u32 item, size_t size)
+ 	}
+ 
+ 	/* Align the address down and the size up to a page boundary */
+-	addr = qcom_smem_virt_to_phys(virt) & PAGE_MASK;
++	addr = qcom_smem_virt_to_phys(virt);
+ 	phys = addr & PAGE_MASK;
+ 	size = PAGE_ALIGN(size + addr - phys);
+ 	iova = phys;	/* We just want a direct mapping */
+diff --git a/drivers/net/ipvlan/ipvtap.c b/drivers/net/ipvlan/ipvtap.c
+index 1cedb634f4f7b..f01078b2581ce 100644
+--- a/drivers/net/ipvlan/ipvtap.c
++++ b/drivers/net/ipvlan/ipvtap.c
+@@ -194,7 +194,7 @@ static struct notifier_block ipvtap_notifier_block __read_mostly = {
+ 	.notifier_call	= ipvtap_device_event,
+ };
+ 
+-static int ipvtap_init(void)
++static int __init ipvtap_init(void)
+ {
+ 	int err;
+ 
+@@ -228,7 +228,7 @@ out1:
+ }
+ module_init(ipvtap_init);
+ 
+-static void ipvtap_exit(void)
++static void __exit ipvtap_exit(void)
+ {
+ 	rtnl_link_unregister(&ipvtap_link_ops);
+ 	unregister_netdevice_notifier(&ipvtap_notifier_block);
+diff --git a/drivers/nfc/pn533/uart.c b/drivers/nfc/pn533/uart.c
+index a0665d8ea85bc..e92535ebb5287 100644
+--- a/drivers/nfc/pn533/uart.c
++++ b/drivers/nfc/pn533/uart.c
+@@ -310,6 +310,7 @@ static void pn532_uart_remove(struct serdev_device *serdev)
+ 	pn53x_unregister_nfc(pn532->priv);
+ 	serdev_device_close(serdev);
+ 	pn53x_common_clean(pn532->priv);
++	del_timer_sync(&pn532->cmd_timeout);
+ 	kfree_skb(pn532->recv_skb);
+ 	kfree(pn532);
+ }
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index e20bcc835d6a8..82b658a3c220a 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -815,6 +815,7 @@ static int amd_gpio_suspend(struct device *dev)
+ {
+ 	struct amd_gpio *gpio_dev = dev_get_drvdata(dev);
+ 	struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
++	unsigned long flags;
+ 	int i;
+ 
+ 	for (i = 0; i < desc->npins; i++) {
+@@ -823,7 +824,9 @@ static int amd_gpio_suspend(struct device *dev)
+ 		if (!amd_gpio_should_save(gpio_dev, pin))
+ 			continue;
+ 
+-		gpio_dev->saved_regs[i] = readl(gpio_dev->base + pin*4);
++		raw_spin_lock_irqsave(&gpio_dev->lock, flags);
++		gpio_dev->saved_regs[i] = readl(gpio_dev->base + pin * 4) & ~PIN_IRQ_PENDING;
++		raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ 	}
+ 
+ 	return 0;
+@@ -833,6 +836,7 @@ static int amd_gpio_resume(struct device *dev)
+ {
+ 	struct amd_gpio *gpio_dev = dev_get_drvdata(dev);
+ 	struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
++	unsigned long flags;
+ 	int i;
+ 
+ 	for (i = 0; i < desc->npins; i++) {
+@@ -841,7 +845,10 @@ static int amd_gpio_resume(struct device *dev)
+ 		if (!amd_gpio_should_save(gpio_dev, pin))
+ 			continue;
+ 
+-		writel(gpio_dev->saved_regs[i], gpio_dev->base + pin*4);
++		raw_spin_lock_irqsave(&gpio_dev->lock, flags);
++		gpio_dev->saved_regs[i] |= readl(gpio_dev->base + pin * 4) & PIN_IRQ_PENDING;
++		writel(gpio_dev->saved_regs[i], gpio_dev->base + pin * 4);
++		raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 0ee0b80006e05..7ac1090d4379c 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1997,7 +1997,7 @@ static int storvsc_probe(struct hv_device *device,
+ 	 */
+ 	host_dev->handle_error_wq =
+ 			alloc_ordered_workqueue("storvsc_error_wq_%d",
+-						WQ_MEM_RECLAIM,
++						0,
+ 						host->host_no);
+ 	if (!host_dev->handle_error_wq) {
+ 		ret = -ENOMEM;
+diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
+index 1d999228efc85..e380941318117 100644
+--- a/drivers/scsi/ufs/ufshci.h
++++ b/drivers/scsi/ufs/ufshci.h
+@@ -129,11 +129,7 @@ enum {
+ 
+ #define UFSHCD_UIC_MASK		(UIC_COMMAND_COMPL | UFSHCD_UIC_PWR_MASK)
+ 
+-#define UFSHCD_ERROR_MASK	(UIC_ERROR |\
+-				DEVICE_FATAL_ERROR |\
+-				CONTROLLER_FATAL_ERROR |\
+-				SYSTEM_BUS_FATAL_ERROR |\
+-				CRYPTO_ENGINE_FATAL_ERROR)
++#define UFSHCD_ERROR_MASK	(UIC_ERROR | INT_FATAL_ERRORS)
+ 
+ #define INT_FATAL_ERRORS	(DEVICE_FATAL_ERROR |\
+ 				CONTROLLER_FATAL_ERROR |\
+diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
+index fe8df32bb612b..cd5f2f09468e2 100644
+--- a/drivers/xen/privcmd.c
++++ b/drivers/xen/privcmd.c
+@@ -581,27 +581,30 @@ static int lock_pages(
+ 	struct privcmd_dm_op_buf kbufs[], unsigned int num,
+ 	struct page *pages[], unsigned int nr_pages, unsigned int *pinned)
+ {
+-	unsigned int i;
++	unsigned int i, off = 0;
+ 
+-	for (i = 0; i < num; i++) {
++	for (i = 0; i < num; ) {
+ 		unsigned int requested;
+ 		int page_count;
+ 
+ 		requested = DIV_ROUND_UP(
+ 			offset_in_page(kbufs[i].uptr) + kbufs[i].size,
+-			PAGE_SIZE);
++			PAGE_SIZE) - off;
+ 		if (requested > nr_pages)
+ 			return -ENOSPC;
+ 
+ 		page_count = pin_user_pages_fast(
+-			(unsigned long) kbufs[i].uptr,
++			(unsigned long)kbufs[i].uptr + off * PAGE_SIZE,
+ 			requested, FOLL_WRITE, pages);
+-		if (page_count < 0)
+-			return page_count;
++		if (page_count <= 0)
++			return page_count ? : -EFAULT;
+ 
+ 		*pinned += page_count;
+ 		nr_pages -= page_count;
+ 		pages += page_count;
++
++		off = (requested == page_count) ? 0 : off + page_count;
++		i += !off;
+ 	}
+ 
+ 	return 0;
+@@ -677,10 +680,8 @@ static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
+ 	}
+ 
+ 	rc = lock_pages(kbufs, kdata.num, pages, nr_pages, &pinned);
+-	if (rc < 0) {
+-		nr_pages = pinned;
++	if (rc < 0)
+ 		goto out;
+-	}
+ 
+ 	for (i = 0; i < kdata.num; i++) {
+ 		set_xen_guest_handle(xbufs[i].h, kbufs[i].uptr);
+@@ -692,7 +693,7 @@ static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
+ 	xen_preemptible_hcall_end();
+ 
+ out:
+-	unlock_pages(pages, nr_pages);
++	unlock_pages(pages, pinned);
+ 	kfree(xbufs);
+ 	kfree(pages);
+ 	kfree(kbufs);
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index d297804631829..be6935d191970 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -161,7 +161,7 @@ no_valid_dev_replace_entry_found:
+ 		if (btrfs_find_device(fs_info->fs_devices,
+ 				      BTRFS_DEV_REPLACE_DEVID, NULL, NULL, false)) {
+ 			btrfs_err(fs_info,
+-			"replace devid present without an active replace item");
++"replace without active item, run 'device scan --forget' on the target device");
+ 			ret = -EUCLEAN;
+ 		} else {
+ 			dev_replace->srcdev = NULL;
+@@ -954,8 +954,7 @@ int btrfs_dev_replace_cancel(struct btrfs_fs_info *fs_info)
+ 		up_write(&dev_replace->rwsem);
+ 
+ 		/* Scrub for replace must not be running in suspended state */
+-		ret = btrfs_scrub_cancel(fs_info);
+-		ASSERT(ret != -ENOTCONN);
++		btrfs_scrub_cancel(fs_info);
+ 
+ 		trans = btrfs_start_transaction(root, 0);
+ 		if (IS_ERR(trans)) {
+diff --git a/fs/btrfs/root-tree.c b/fs/btrfs/root-tree.c
+index db37a37996497..e9e8ca4e98a75 100644
+--- a/fs/btrfs/root-tree.c
++++ b/fs/btrfs/root-tree.c
+@@ -336,9 +336,10 @@ int btrfs_del_root_ref(struct btrfs_trans_handle *trans, u64 root_id,
+ 	key.offset = ref_id;
+ again:
+ 	ret = btrfs_search_slot(trans, tree_root, &key, path, -1, 1);
+-	if (ret < 0)
++	if (ret < 0) {
++		err = ret;
+ 		goto out;
+-	if (ret == 0) {
++	} else if (ret == 0) {
+ 		leaf = path->nodes[0];
+ 		ref = btrfs_item_ptr(leaf, path->slots[0],
+ 				     struct btrfs_root_ref);
+diff --git a/fs/btrfs/xattr.c b/fs/btrfs/xattr.c
+index f1a60bcdb3db8..cd6049b0bde53 100644
+--- a/fs/btrfs/xattr.c
++++ b/fs/btrfs/xattr.c
+@@ -389,6 +389,9 @@ static int btrfs_xattr_handler_set(const struct xattr_handler *handler,
+ 				   const char *name, const void *buffer,
+ 				   size_t size, int flags)
+ {
++	if (btrfs_root_readonly(BTRFS_I(inode)->root))
++		return -EROFS;
++
+ 	name = xattr_full_name(handler, name);
+ 	return btrfs_setxattr_trans(inode, name, buffer, size, flags);
+ }
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 9fdecd9090493..70cd0d764c447 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -321,7 +321,7 @@ static int read_name_gen = 1;
+ static struct file *__nfs42_ssc_open(struct vfsmount *ss_mnt,
+ 		struct nfs_fh *src_fh, nfs4_stateid *stateid)
+ {
+-	struct nfs_fattr fattr;
++	struct nfs_fattr *fattr = nfs_alloc_fattr();
+ 	struct file *filep, *res;
+ 	struct nfs_server *server;
+ 	struct inode *r_ino = NULL;
+@@ -332,14 +332,20 @@ static struct file *__nfs42_ssc_open(struct vfsmount *ss_mnt,
+ 
+ 	server = NFS_SERVER(ss_mnt->mnt_root->d_inode);
+ 
+-	nfs_fattr_init(&fattr);
++	if (!fattr)
++		return ERR_PTR(-ENOMEM);
+ 
+-	status = nfs4_proc_getattr(server, src_fh, &fattr, NULL, NULL);
++	status = nfs4_proc_getattr(server, src_fh, fattr, NULL, NULL);
+ 	if (status < 0) {
+ 		res = ERR_PTR(status);
+ 		goto out;
+ 	}
+ 
++	if (!S_ISREG(fattr->mode)) {
++		res = ERR_PTR(-EBADF);
++		goto out;
++	}
++
+ 	res = ERR_PTR(-ENOMEM);
+ 	len = strlen(SSC_READ_NAME_BODY) + 16;
+ 	read_name = kzalloc(len, GFP_NOFS);
+@@ -347,7 +353,7 @@ static struct file *__nfs42_ssc_open(struct vfsmount *ss_mnt,
+ 		goto out;
+ 	snprintf(read_name, len, SSC_READ_NAME_BODY, read_name_gen++);
+ 
+-	r_ino = nfs_fhget(ss_mnt->mnt_root->d_inode->i_sb, src_fh, &fattr,
++	r_ino = nfs_fhget(ss_mnt->mnt_root->d_inode->i_sb, src_fh, fattr,
+ 			NULL);
+ 	if (IS_ERR(r_ino)) {
+ 		res = ERR_CAST(r_ino);
+@@ -358,6 +364,7 @@ static struct file *__nfs42_ssc_open(struct vfsmount *ss_mnt,
+ 				     r_ino->i_fop);
+ 	if (IS_ERR(filep)) {
+ 		res = ERR_CAST(filep);
++		iput(r_ino);
+ 		goto out_free_name;
+ 	}
+ 	filep->f_mode |= FMODE_READ;
+@@ -392,6 +399,7 @@ static struct file *__nfs42_ssc_open(struct vfsmount *ss_mnt,
+ out_free_name:
+ 	kfree(read_name);
+ out:
++	nfs_free_fattr(fattr);
+ 	return res;
+ out_stateowner:
+ 	nfs4_put_state_owner(sp);
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index ba98371e9d164..ef18f0d71b11b 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -503,10 +503,12 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
+ 	struct vm_area_struct *vma = walk->vma;
+ 	bool locked = !!(vma->vm_flags & VM_LOCKED);
+ 	struct page *page = NULL;
+-	bool migration = false;
++	bool migration = false, young = false, dirty = false;
+ 
+ 	if (pte_present(*pte)) {
+ 		page = vm_normal_page(vma, addr, *pte);
++		young = pte_young(*pte);
++		dirty = pte_dirty(*pte);
+ 	} else if (is_swap_pte(*pte)) {
+ 		swp_entry_t swpent = pte_to_swp_entry(*pte);
+ 
+@@ -540,8 +542,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
+ 	if (!page)
+ 		return;
+ 
+-	smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte),
+-		      locked, migration);
++	smaps_account(mss, page, false, young, dirty, locked, migration);
+ }
+ 
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+diff --git a/fs/sync.c b/fs/sync.c
+index 1373a610dc784..79180e58d8628 100644
+--- a/fs/sync.c
++++ b/fs/sync.c
+@@ -21,25 +21,6 @@
+ #define VALID_FLAGS (SYNC_FILE_RANGE_WAIT_BEFORE|SYNC_FILE_RANGE_WRITE| \
+ 			SYNC_FILE_RANGE_WAIT_AFTER)
+ 
+-/*
+- * Do the filesystem syncing work. For simple filesystems
+- * writeback_inodes_sb(sb) just dirties buffers with inodes so we have to
+- * submit IO for these buffers via __sync_blockdev(). This also speeds up the
+- * wait == 1 case since in that case write_inode() functions do
+- * sync_dirty_buffer() and thus effectively write one block at a time.
+- */
+-static int __sync_filesystem(struct super_block *sb, int wait)
+-{
+-	if (wait)
+-		sync_inodes_sb(sb);
+-	else
+-		writeback_inodes_sb(sb, WB_REASON_SYNC);
+-
+-	if (sb->s_op->sync_fs)
+-		sb->s_op->sync_fs(sb, wait);
+-	return __sync_blockdev(sb->s_bdev, wait);
+-}
+-
+ /*
+  * Write out and wait upon all dirty data associated with this
+  * superblock.  Filesystem data as well as the underlying block
+@@ -47,7 +28,7 @@ static int __sync_filesystem(struct super_block *sb, int wait)
+  */
+ int sync_filesystem(struct super_block *sb)
+ {
+-	int ret;
++	int ret = 0;
+ 
+ 	/*
+ 	 * We need to be protected against the filesystem going from
+@@ -61,10 +42,31 @@ int sync_filesystem(struct super_block *sb)
+ 	if (sb_rdonly(sb))
+ 		return 0;
+ 
+-	ret = __sync_filesystem(sb, 0);
+-	if (ret < 0)
++	/*
++	 * Do the filesystem syncing work.  For simple filesystems
++	 * writeback_inodes_sb(sb) just dirties buffers with inodes so we have
++	 * to submit I/O for these buffers via __sync_blockdev().  This also
++	 * speeds up the wait == 1 case since in that case write_inode()
++	 * methods call sync_dirty_buffer() and thus effectively write one block
++	 * at a time.
++	 */
++	writeback_inodes_sb(sb, WB_REASON_SYNC);
++	if (sb->s_op->sync_fs) {
++		ret = sb->s_op->sync_fs(sb, 0);
++		if (ret)
++			return ret;
++	}
++	ret = __sync_blockdev(sb->s_bdev, 0);
++	if (ret)
+ 		return ret;
+-	return __sync_filesystem(sb, 1);
++
++	sync_inodes_sb(sb);
++	if (sb->s_op->sync_fs) {
++		ret = sb->s_op->sync_fs(sb, 1);
++		if (ret)
++			return ret;
++	}
++	return __sync_blockdev(sb->s_bdev, 1);
+ }
+ EXPORT_SYMBOL(sync_filesystem);
+ 
+diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c
+index 646735aad45df..103fa8381e7dc 100644
+--- a/fs/xfs/xfs_ioctl.c
++++ b/fs/xfs/xfs_ioctl.c
+@@ -371,7 +371,7 @@ int
+ xfs_ioc_attr_list(
+ 	struct xfs_inode		*dp,
+ 	void __user			*ubuf,
+-	int				bufsize,
++	size_t				bufsize,
+ 	int				flags,
+ 	struct xfs_attrlist_cursor __user *ucursor)
+ {
+@@ -1689,7 +1689,7 @@ xfs_ioc_getbmap(
+ 
+ 	if (bmx.bmv_count < 2)
+ 		return -EINVAL;
+-	if (bmx.bmv_count > ULONG_MAX / recsize)
++	if (bmx.bmv_count >= INT_MAX / recsize)
+ 		return -ENOMEM;
+ 
+ 	buf = kvzalloc(bmx.bmv_count * sizeof(*buf), GFP_KERNEL);
+diff --git a/fs/xfs/xfs_ioctl.h b/fs/xfs/xfs_ioctl.h
+index bab6a5a924077..416e20de66e7d 100644
+--- a/fs/xfs/xfs_ioctl.h
++++ b/fs/xfs/xfs_ioctl.h
+@@ -38,8 +38,9 @@ xfs_readlink_by_handle(
+ int xfs_ioc_attrmulti_one(struct file *parfilp, struct inode *inode,
+ 		uint32_t opcode, void __user *uname, void __user *value,
+ 		uint32_t *len, uint32_t flags);
+-int xfs_ioc_attr_list(struct xfs_inode *dp, void __user *ubuf, int bufsize,
+-	int flags, struct xfs_attrlist_cursor __user *ucursor);
++int xfs_ioc_attr_list(struct xfs_inode *dp, void __user *ubuf,
++		      size_t bufsize, int flags,
++		      struct xfs_attrlist_cursor __user *ucursor);
+ 
+ extern struct dentry *
+ xfs_handle_to_dentry(
+diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
+index 6323974d6b3e6..434c87cc9fbf5 100644
+--- a/fs/xfs/xfs_super.c
++++ b/fs/xfs/xfs_super.c
+@@ -757,6 +757,7 @@ xfs_fs_sync_fs(
+ 	int			wait)
+ {
+ 	struct xfs_mount	*mp = XFS_M(sb);
++	int			error;
+ 
+ 	/*
+ 	 * Doing anything during the async pass would be counterproductive.
+@@ -764,7 +765,10 @@ xfs_fs_sync_fs(
+ 	if (!wait)
+ 		return 0;
+ 
+-	xfs_log_force(mp, XFS_LOG_SYNC);
++	error = xfs_log_force(mp, XFS_LOG_SYNC);
++	if (error)
++		return error;
++
+ 	if (laptop_mode) {
+ 		/*
+ 		 * The disk must be active because we're syncing.
+@@ -1716,6 +1720,11 @@ xfs_remount_ro(
+ 	};
+ 	int			error;
+ 
++	/* Flush all the dirty data to disk. */
++	error = sync_filesystem(mp->m_super);
++	if (error)
++		return error;
++
+ 	/*
+ 	 * Cancel background eofb scanning so it cannot race with the final
+ 	 * log force+buftarg wait and deadlock the remount.
+@@ -1786,8 +1795,6 @@ xfs_fc_reconfigure(
+ 	if (error)
+ 		return error;
+ 
+-	sync_filesystem(mp->m_super);
+-
+ 	/* inode32 -> inode64 */
+ 	if ((mp->m_flags & XFS_MOUNT_SMALL_INUMS) &&
+ 	    !(new_mp->m_flags & XFS_MOUNT_SMALL_INUMS)) {
+diff --git a/include/asm-generic/sections.h b/include/asm-generic/sections.h
+index d16302d3eb597..72f1e2a8c1670 100644
+--- a/include/asm-generic/sections.h
++++ b/include/asm-generic/sections.h
+@@ -114,7 +114,7 @@ static inline bool memory_contains(void *begin, void *end, void *virt,
+ /**
+  * memory_intersects - checks if the region occupied by an object intersects
+  *                     with another memory region
+- * @begin: virtual address of the beginning of the memory regien
++ * @begin: virtual address of the beginning of the memory region
+  * @end: virtual address of the end of the memory region
+  * @virt: virtual address of the memory object
+  * @size: size of the memory object
+@@ -127,7 +127,10 @@ static inline bool memory_intersects(void *begin, void *end, void *virt,
+ {
+ 	void *vend = virt + size;
+ 
+-	return (virt >= begin && virt < end) || (vend >= begin && vend < end);
++	if (virt < end && vend > begin)
++		return true;
++
++	return false;
+ }
+ 
+ /**
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index ed2d531400051..6564fb4ac49e1 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -633,9 +633,23 @@ extern int sysctl_devconf_inherit_init_net;
+  */
+ static inline bool net_has_fallback_tunnels(const struct net *net)
+ {
+-	return !IS_ENABLED(CONFIG_SYSCTL) ||
+-	       !sysctl_fb_tunnels_only_for_init_net ||
+-	       (net == &init_net && sysctl_fb_tunnels_only_for_init_net == 1);
++#if IS_ENABLED(CONFIG_SYSCTL)
++	int fb_tunnels_only_for_init_net = READ_ONCE(sysctl_fb_tunnels_only_for_init_net);
++
++	return !fb_tunnels_only_for_init_net ||
++		(net_eq(net, &init_net) && fb_tunnels_only_for_init_net == 1);
++#else
++	return true;
++#endif
++}
++
++static inline int net_inherit_devconf(void)
++{
++#if IS_ENABLED(CONFIG_SYSCTL)
++	return READ_ONCE(sysctl_devconf_inherit_init_net);
++#else
++	return 0;
++#endif
+ }
+ 
+ static inline int netdev_queue_numa_node_read(const struct netdev_queue *q)
+diff --git a/include/linux/netfilter_bridge/ebtables.h b/include/linux/netfilter_bridge/ebtables.h
+index 3a956145a25cb..a18fb73a2b772 100644
+--- a/include/linux/netfilter_bridge/ebtables.h
++++ b/include/linux/netfilter_bridge/ebtables.h
+@@ -94,10 +94,6 @@ struct ebt_table {
+ 	struct ebt_replace_kernel *table;
+ 	unsigned int valid_hooks;
+ 	rwlock_t lock;
+-	/* e.g. could be the table explicitly only allows certain
+-	 * matches, targets, ... 0 == let it in */
+-	int (*check)(const struct ebt_table_info *info,
+-	   unsigned int valid_hooks);
+ 	/* the data used by the kernel */
+ 	struct ebt_table_info *private;
+ 	struct module *me;
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 4e8425c1c5605..b055c217eb0be 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -542,10 +542,6 @@ struct sched_dl_entity {
+ 	 * task has to wait for a replenishment to be performed at the
+ 	 * next firing of dl_timer.
+ 	 *
+-	 * @dl_boosted tells if we are boosted due to DI. If so we are
+-	 * outside bandwidth enforcement mechanism (but only until we
+-	 * exit the critical section);
+-	 *
+ 	 * @dl_yielded tells if task gave up the CPU before consuming
+ 	 * all its available runtime during the last job.
+ 	 *
+diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
+index 716b7c5f6fdd9..36e5e75e71720 100644
+--- a/include/net/busy_poll.h
++++ b/include/net/busy_poll.h
+@@ -31,7 +31,7 @@ extern unsigned int sysctl_net_busy_poll __read_mostly;
+ 
+ static inline bool net_busy_loop_on(void)
+ {
+-	return sysctl_net_busy_poll;
++	return READ_ONCE(sysctl_net_busy_poll);
+ }
+ 
+ static inline bool sk_can_busy_loop(const struct sock *sk)
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index b9948e7861f22..e66fee99ed3ea 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -176,13 +176,18 @@ struct nft_ctx {
+ 	bool				report;
+ };
+ 
++enum nft_data_desc_flags {
++	NFT_DATA_DESC_SETELEM	= (1 << 0),
++};
++
+ struct nft_data_desc {
+ 	enum nft_data_types		type;
++	unsigned int			size;
+ 	unsigned int			len;
++	unsigned int			flags;
+ };
+ 
+-int nft_data_init(const struct nft_ctx *ctx,
+-		  struct nft_data *data, unsigned int size,
++int nft_data_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 		  struct nft_data_desc *desc, const struct nlattr *nla);
+ void nft_data_hold(const struct nft_data *data, enum nft_data_types type);
+ void nft_data_release(const struct nft_data *data, enum nft_data_types type);
+diff --git a/include/net/netfilter/nf_tables_core.h b/include/net/netfilter/nf_tables_core.h
+index fd10a7862fdc6..ce75121782bf7 100644
+--- a/include/net/netfilter/nf_tables_core.h
++++ b/include/net/netfilter/nf_tables_core.h
+@@ -38,6 +38,14 @@ struct nft_cmp_fast_expr {
+ 	bool			inv;
+ };
+ 
++struct nft_cmp16_fast_expr {
++	struct nft_data		data;
++	struct nft_data		mask;
++	u8			sreg;
++	u8			len;
++	bool			inv;
++};
++
+ struct nft_immediate_expr {
+ 	struct nft_data		data;
+ 	u8			dreg;
+@@ -55,6 +63,7 @@ static inline u32 nft_cmp_fast_mask(unsigned int len)
+ }
+ 
+ extern const struct nft_expr_ops nft_cmp_fast_ops;
++extern const struct nft_expr_ops nft_cmp16_fast_ops;
+ 
+ struct nft_payload {
+ 	enum nft_payload_bases	base:8;
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 333131f47ac13..d31c2b9107e54 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2678,18 +2678,18 @@ static inline int sk_get_wmem0(const struct sock *sk, const struct proto *proto)
+ {
+ 	/* Does this proto have per netns sysctl_wmem ? */
+ 	if (proto->sysctl_wmem_offset)
+-		return *(int *)((void *)sock_net(sk) + proto->sysctl_wmem_offset);
++		return READ_ONCE(*(int *)((void *)sock_net(sk) + proto->sysctl_wmem_offset));
+ 
+-	return *proto->sysctl_wmem;
++	return READ_ONCE(*proto->sysctl_wmem);
+ }
+ 
+ static inline int sk_get_rmem0(const struct sock *sk, const struct proto *proto)
+ {
+ 	/* Does this proto have per netns sysctl_rmem ? */
+ 	if (proto->sysctl_rmem_offset)
+-		return *(int *)((void *)sock_net(sk) + proto->sysctl_rmem_offset);
++		return READ_ONCE(*(int *)((void *)sock_net(sk) + proto->sysctl_rmem_offset));
+ 
+-	return *proto->sysctl_rmem;
++	return READ_ONCE(*proto->sysctl_rmem);
+ }
+ 
+ /* Default TCP Small queue budget is ~1 ms of data (1sec >> 10)
+diff --git a/kernel/audit_fsnotify.c b/kernel/audit_fsnotify.c
+index 5b3f01da172bc..b2ebacd2f3097 100644
+--- a/kernel/audit_fsnotify.c
++++ b/kernel/audit_fsnotify.c
+@@ -102,6 +102,7 @@ struct audit_fsnotify_mark *audit_alloc_mark(struct audit_krule *krule, char *pa
+ 
+ 	ret = fsnotify_add_inode_mark(&audit_mark->mark, inode, true);
+ 	if (ret < 0) {
++		audit_mark->path = NULL;
+ 		fsnotify_put_mark(&audit_mark->mark);
+ 		audit_mark = ERR_PTR(ret);
+ 	}
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index de636b7445b11..e4dcc23b52c01 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5282,8 +5282,7 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
+ 	struct bpf_insn_aux_data *aux = &env->insn_aux_data[insn_idx];
+ 	struct bpf_reg_state *regs = cur_regs(env), *reg;
+ 	struct bpf_map *map = meta->map_ptr;
+-	struct tnum range;
+-	u64 val;
++	u64 val, max;
+ 	int err;
+ 
+ 	if (func_id != BPF_FUNC_tail_call)
+@@ -5293,10 +5292,11 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
+ 		return -EINVAL;
+ 	}
+ 
+-	range = tnum_range(0, map->max_entries - 1);
+ 	reg = &regs[BPF_REG_3];
++	val = reg->var_off.value;
++	max = map->max_entries;
+ 
+-	if (!register_is_const(reg) || !tnum_in(range, reg->var_off)) {
++	if (!(register_is_const(reg) && val < max)) {
+ 		bpf_map_key_store(aux, BPF_MAP_KEY_POISON);
+ 		return 0;
+ 	}
+@@ -5304,8 +5304,6 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
+ 	err = mark_chain_precision(env, BPF_REG_3);
+ 	if (err)
+ 		return err;
+-
+-	val = reg->var_off.value;
+ 	if (bpf_map_key_unseen(aux))
+ 		bpf_map_key_store(aux, val);
+ 	else if (!bpf_map_key_poisoned(aux) &&
+diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
+index f27ac94d5fa72..cdecd47e5580d 100644
+--- a/kernel/sys_ni.c
++++ b/kernel/sys_ni.c
+@@ -268,6 +268,7 @@ COND_SYSCALL_COMPAT(keyctl);
+ 
+ /* mm/fadvise.c */
+ COND_SYSCALL(fadvise64_64);
++COND_SYSCALL_COMPAT(fadvise64_64);
+ 
+ /* mm/, CONFIG_MMU only */
+ COND_SYSCALL(swapon);
+diff --git a/lib/ratelimit.c b/lib/ratelimit.c
+index e01a93f46f833..ce945c17980b9 100644
+--- a/lib/ratelimit.c
++++ b/lib/ratelimit.c
+@@ -26,10 +26,16 @@
+  */
+ int ___ratelimit(struct ratelimit_state *rs, const char *func)
+ {
++	/* Paired with WRITE_ONCE() in .proc_handler().
++	 * Changing two values seperately could be inconsistent
++	 * and some message could be lost.  (See: net_ratelimit_state).
++	 */
++	int interval = READ_ONCE(rs->interval);
++	int burst = READ_ONCE(rs->burst);
+ 	unsigned long flags;
+ 	int ret;
+ 
+-	if (!rs->interval)
++	if (!interval)
+ 		return 1;
+ 
+ 	/*
+@@ -44,7 +50,7 @@ int ___ratelimit(struct ratelimit_state *rs, const char *func)
+ 	if (!rs->begin)
+ 		rs->begin = jiffies;
+ 
+-	if (time_is_before_jiffies(rs->begin + rs->interval)) {
++	if (time_is_before_jiffies(rs->begin + interval)) {
+ 		if (rs->missed) {
+ 			if (!(rs->flags & RATELIMIT_MSG_ON_RELEASE)) {
+ 				printk_deferred(KERN_WARNING
+@@ -56,7 +62,7 @@ int ___ratelimit(struct ratelimit_state *rs, const char *func)
+ 		rs->begin   = jiffies;
+ 		rs->printed = 0;
+ 	}
+-	if (rs->burst && rs->burst > rs->printed) {
++	if (burst && burst > rs->printed) {
+ 		rs->printed++;
+ 		ret = 1;
+ 	} else {
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 594368f6134f1..cb7b0aead7096 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1691,7 +1691,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 
+ 			VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
+ 			entry = pmd_to_swp_entry(orig_pmd);
+-			page = pfn_to_page(swp_offset(entry));
++			page = migration_entry_to_page(entry);
+ 			flush_needed = 0;
+ 		} else
+ 			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
+@@ -2110,7 +2110,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
+ 		swp_entry_t entry;
+ 
+ 		entry = pmd_to_swp_entry(old_pmd);
+-		page = pfn_to_page(swp_offset(entry));
++		page = migration_entry_to_page(entry);
+ 		write = is_write_migration_entry(entry);
+ 		young = false;
+ 		soft_dirty = pmd_swp_soft_dirty(old_pmd);
+diff --git a/mm/mmap.c b/mm/mmap.c
+index a50042918cc7e..a1ee93f55cebb 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1694,8 +1694,12 @@ int vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot)
+ 	    pgprot_val(vm_pgprot_modify(vm_page_prot, vm_flags)))
+ 		return 0;
+ 
+-	/* Do we need to track softdirty? */
+-	if (IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) && !(vm_flags & VM_SOFTDIRTY))
++	/*
++	 * Do we need to track softdirty? hugetlb does not support softdirty
++	 * tracking yet.
++	 */
++	if (IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) && !(vm_flags & VM_SOFTDIRTY) &&
++	    !is_vm_hugetlb_page(vma))
+ 		return 1;
+ 
+ 	/* Specialty mapping? */
+diff --git a/net/bridge/netfilter/ebtable_broute.c b/net/bridge/netfilter/ebtable_broute.c
+index 32bc2821027f3..57f91efce0f73 100644
+--- a/net/bridge/netfilter/ebtable_broute.c
++++ b/net/bridge/netfilter/ebtable_broute.c
+@@ -36,18 +36,10 @@ static struct ebt_replace_kernel initial_table = {
+ 	.entries	= (char *)&initial_chain,
+ };
+ 
+-static int check(const struct ebt_table_info *info, unsigned int valid_hooks)
+-{
+-	if (valid_hooks & ~(1 << NF_BR_BROUTING))
+-		return -EINVAL;
+-	return 0;
+-}
+-
+ static const struct ebt_table broute_table = {
+ 	.name		= "broute",
+ 	.table		= &initial_table,
+ 	.valid_hooks	= 1 << NF_BR_BROUTING,
+-	.check		= check,
+ 	.me		= THIS_MODULE,
+ };
+ 
+diff --git a/net/bridge/netfilter/ebtable_filter.c b/net/bridge/netfilter/ebtable_filter.c
+index bcf982e12f16b..7f2e620f4978f 100644
+--- a/net/bridge/netfilter/ebtable_filter.c
++++ b/net/bridge/netfilter/ebtable_filter.c
+@@ -43,18 +43,10 @@ static struct ebt_replace_kernel initial_table = {
+ 	.entries	= (char *)initial_chains,
+ };
+ 
+-static int check(const struct ebt_table_info *info, unsigned int valid_hooks)
+-{
+-	if (valid_hooks & ~FILTER_VALID_HOOKS)
+-		return -EINVAL;
+-	return 0;
+-}
+-
+ static const struct ebt_table frame_filter = {
+ 	.name		= "filter",
+ 	.table		= &initial_table,
+ 	.valid_hooks	= FILTER_VALID_HOOKS,
+-	.check		= check,
+ 	.me		= THIS_MODULE,
+ };
+ 
+diff --git a/net/bridge/netfilter/ebtable_nat.c b/net/bridge/netfilter/ebtable_nat.c
+index 0d092773f8161..1743a105485c4 100644
+--- a/net/bridge/netfilter/ebtable_nat.c
++++ b/net/bridge/netfilter/ebtable_nat.c
+@@ -43,18 +43,10 @@ static struct ebt_replace_kernel initial_table = {
+ 	.entries	= (char *)initial_chains,
+ };
+ 
+-static int check(const struct ebt_table_info *info, unsigned int valid_hooks)
+-{
+-	if (valid_hooks & ~NAT_VALID_HOOKS)
+-		return -EINVAL;
+-	return 0;
+-}
+-
+ static const struct ebt_table frame_nat = {
+ 	.name		= "nat",
+ 	.table		= &initial_table,
+ 	.valid_hooks	= NAT_VALID_HOOKS,
+-	.check		= check,
+ 	.me		= THIS_MODULE,
+ };
+ 
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index d481ff24a1501..310740cc684ad 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -999,8 +999,7 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl,
+ 		goto free_iterate;
+ 	}
+ 
+-	/* the table doesn't like it */
+-	if (t->check && (ret = t->check(newinfo, repl->valid_hooks)))
++	if (repl->valid_hooks != t->valid_hooks)
+ 		goto free_unlock;
+ 
+ 	if (repl->num_counters && repl->num_counters != t->private->nentries) {
+@@ -1186,11 +1185,6 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
+ 	if (ret != 0)
+ 		goto free_chainstack;
+ 
+-	if (table->check && table->check(newinfo, table->valid_hooks)) {
+-		ret = -EINVAL;
+-		goto free_chainstack;
+-	}
+-
+ 	table->private = newinfo;
+ 	rwlock_init(&table->lock);
+ 	mutex_lock(&ebt_mutex);
+diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
+index 5f773624948ff..d67d06d6b817c 100644
+--- a/net/core/bpf_sk_storage.c
++++ b/net/core/bpf_sk_storage.c
+@@ -15,18 +15,6 @@
+ 
+ DEFINE_BPF_STORAGE_CACHE(sk_cache);
+ 
+-static int omem_charge(struct sock *sk, unsigned int size)
+-{
+-	/* same check as in sock_kmalloc() */
+-	if (size <= sysctl_optmem_max &&
+-	    atomic_read(&sk->sk_omem_alloc) + size < sysctl_optmem_max) {
+-		atomic_add(size, &sk->sk_omem_alloc);
+-		return 0;
+-	}
+-
+-	return -ENOMEM;
+-}
+-
+ static struct bpf_local_storage_data *
+ sk_storage_lookup(struct sock *sk, struct bpf_map *map, bool cacheit_lockit)
+ {
+@@ -316,7 +304,17 @@ BPF_CALL_2(bpf_sk_storage_delete, struct bpf_map *, map, struct sock *, sk)
+ static int sk_storage_charge(struct bpf_local_storage_map *smap,
+ 			     void *owner, u32 size)
+ {
+-	return omem_charge(owner, size);
++	int optmem_max = READ_ONCE(sysctl_optmem_max);
++	struct sock *sk = (struct sock *)owner;
++
++	/* same check as in sock_kmalloc() */
++	if (size <= optmem_max &&
++	    atomic_read(&sk->sk_omem_alloc) + size < optmem_max) {
++		atomic_add(size, &sk->sk_omem_alloc);
++		return 0;
++	}
++
++	return -ENOMEM;
+ }
+ 
+ static void sk_storage_uncharge(struct bpf_local_storage_map *smap,
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 637bc576fbd26..8355cc5e11a98 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4516,7 +4516,7 @@ static bool skb_flow_limit(struct sk_buff *skb, unsigned int qlen)
+ 	struct softnet_data *sd;
+ 	unsigned int old_flow, new_flow;
+ 
+-	if (qlen < (netdev_max_backlog >> 1))
++	if (qlen < (READ_ONCE(netdev_max_backlog) >> 1))
+ 		return false;
+ 
+ 	sd = this_cpu_ptr(&softnet_data);
+@@ -4564,7 +4564,7 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
+ 	if (!netif_running(skb->dev))
+ 		goto drop;
+ 	qlen = skb_queue_len(&sd->input_pkt_queue);
+-	if (qlen <= netdev_max_backlog && !skb_flow_limit(skb, qlen)) {
++	if (qlen <= READ_ONCE(netdev_max_backlog) && !skb_flow_limit(skb, qlen)) {
+ 		if (qlen) {
+ enqueue:
+ 			__skb_queue_tail(&sd->input_pkt_queue, skb);
+@@ -4795,7 +4795,7 @@ static int netif_rx_internal(struct sk_buff *skb)
+ {
+ 	int ret;
+ 
+-	net_timestamp_check(netdev_tstamp_prequeue, skb);
++	net_timestamp_check(READ_ONCE(netdev_tstamp_prequeue), skb);
+ 
+ 	trace_netif_rx(skb);
+ 
+@@ -5156,7 +5156,7 @@ static int __netif_receive_skb_core(struct sk_buff **pskb, bool pfmemalloc,
+ 	int ret = NET_RX_DROP;
+ 	__be16 type;
+ 
+-	net_timestamp_check(!netdev_tstamp_prequeue, skb);
++	net_timestamp_check(!READ_ONCE(netdev_tstamp_prequeue), skb);
+ 
+ 	trace_netif_receive_skb(skb);
+ 
+@@ -5558,7 +5558,7 @@ static int netif_receive_skb_internal(struct sk_buff *skb)
+ {
+ 	int ret;
+ 
+-	net_timestamp_check(netdev_tstamp_prequeue, skb);
++	net_timestamp_check(READ_ONCE(netdev_tstamp_prequeue), skb);
+ 
+ 	if (skb_defer_rx_timestamp(skb))
+ 		return NET_RX_SUCCESS;
+@@ -5588,7 +5588,7 @@ static void netif_receive_skb_list_internal(struct list_head *head)
+ 
+ 	INIT_LIST_HEAD(&sublist);
+ 	list_for_each_entry_safe(skb, next, head, list) {
+-		net_timestamp_check(netdev_tstamp_prequeue, skb);
++		net_timestamp_check(READ_ONCE(netdev_tstamp_prequeue), skb);
+ 		skb_list_del_init(skb);
+ 		if (!skb_defer_rx_timestamp(skb))
+ 			list_add_tail(&skb->list, &sublist);
+@@ -6371,7 +6371,7 @@ static int process_backlog(struct napi_struct *napi, int quota)
+ 		net_rps_action_and_irq_enable(sd);
+ 	}
+ 
+-	napi->weight = dev_rx_weight;
++	napi->weight = READ_ONCE(dev_rx_weight);
+ 	while (again) {
+ 		struct sk_buff *skb;
+ 
+@@ -6879,8 +6879,8 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
+ {
+ 	struct softnet_data *sd = this_cpu_ptr(&softnet_data);
+ 	unsigned long time_limit = jiffies +
+-		usecs_to_jiffies(netdev_budget_usecs);
+-	int budget = netdev_budget;
++		usecs_to_jiffies(READ_ONCE(netdev_budget_usecs));
++	int budget = READ_ONCE(netdev_budget);
+ 	LIST_HEAD(list);
+ 	LIST_HEAD(repoll);
+ 
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 815edf7bc4390..4c22e6d1da746 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -1212,10 +1212,11 @@ void sk_filter_uncharge(struct sock *sk, struct sk_filter *fp)
+ static bool __sk_filter_charge(struct sock *sk, struct sk_filter *fp)
+ {
+ 	u32 filter_size = bpf_prog_size(fp->prog->len);
++	int optmem_max = READ_ONCE(sysctl_optmem_max);
+ 
+ 	/* same check as in sock_kmalloc() */
+-	if (filter_size <= sysctl_optmem_max &&
+-	    atomic_read(&sk->sk_omem_alloc) + filter_size < sysctl_optmem_max) {
++	if (filter_size <= optmem_max &&
++	    atomic_read(&sk->sk_omem_alloc) + filter_size < optmem_max) {
+ 		atomic_add(filter_size, &sk->sk_omem_alloc);
+ 		return true;
+ 	}
+@@ -1547,7 +1548,7 @@ int sk_reuseport_attach_filter(struct sock_fprog *fprog, struct sock *sk)
+ 	if (IS_ERR(prog))
+ 		return PTR_ERR(prog);
+ 
+-	if (bpf_prog_size(prog->len) > sysctl_optmem_max)
++	if (bpf_prog_size(prog->len) > READ_ONCE(sysctl_optmem_max))
+ 		err = -ENOMEM;
+ 	else
+ 		err = reuseport_attach_prog(sk, prog);
+@@ -1614,7 +1615,7 @@ int sk_reuseport_attach_bpf(u32 ufd, struct sock *sk)
+ 		}
+ 	} else {
+ 		/* BPF_PROG_TYPE_SOCKET_FILTER */
+-		if (bpf_prog_size(prog->len) > sysctl_optmem_max) {
++		if (bpf_prog_size(prog->len) > READ_ONCE(sysctl_optmem_max)) {
+ 			err = -ENOMEM;
+ 			goto err_prog_put;
+ 		}
+@@ -4713,14 +4714,14 @@ static int _bpf_setsockopt(struct sock *sk, int level, int optname,
+ 		/* Only some socketops are supported */
+ 		switch (optname) {
+ 		case SO_RCVBUF:
+-			val = min_t(u32, val, sysctl_rmem_max);
++			val = min_t(u32, val, READ_ONCE(sysctl_rmem_max));
+ 			val = min_t(int, val, INT_MAX / 2);
+ 			sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
+ 			WRITE_ONCE(sk->sk_rcvbuf,
+ 				   max_t(int, val * 2, SOCK_MIN_RCVBUF));
+ 			break;
+ 		case SO_SNDBUF:
+-			val = min_t(u32, val, sysctl_wmem_max);
++			val = min_t(u32, val, READ_ONCE(sysctl_wmem_max));
+ 			val = min_t(int, val, INT_MAX / 2);
+ 			sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
+ 			WRITE_ONCE(sk->sk_sndbuf,
+diff --git a/net/core/gro_cells.c b/net/core/gro_cells.c
+index 6eb2e5ec2c506..2f66f3f295630 100644
+--- a/net/core/gro_cells.c
++++ b/net/core/gro_cells.c
+@@ -26,7 +26,7 @@ int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
+ 
+ 	cell = this_cpu_ptr(gcells->cells);
+ 
+-	if (skb_queue_len(&cell->napi_skbs) > netdev_max_backlog) {
++	if (skb_queue_len(&cell->napi_skbs) > READ_ONCE(netdev_max_backlog)) {
+ drop:
+ 		atomic_long_inc(&dev->rx_dropped);
+ 		kfree_skb(skb);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 48b6438f2a3d9..635cabcf8794f 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -4691,7 +4691,7 @@ static bool skb_may_tx_timestamp(struct sock *sk, bool tsonly)
+ {
+ 	bool ret;
+ 
+-	if (likely(sysctl_tstamp_allow_data || tsonly))
++	if (likely(READ_ONCE(sysctl_tstamp_allow_data) || tsonly))
+ 		return true;
+ 
+ 	read_lock_bh(&sk->sk_callback_lock);
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 6d9af4ef93d7a..1bb6a003323b3 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -887,7 +887,7 @@ int sock_setsockopt(struct socket *sock, int level, int optname,
+ 		 * play 'guess the biggest size' games. RCVBUF/SNDBUF
+ 		 * are treated in BSD as hints
+ 		 */
+-		val = min_t(u32, val, sysctl_wmem_max);
++		val = min_t(u32, val, READ_ONCE(sysctl_wmem_max));
+ set_sndbuf:
+ 		/* Ensure val * 2 fits into an int, to prevent max_t()
+ 		 * from treating it as a negative value.
+@@ -919,7 +919,7 @@ set_sndbuf:
+ 		 * play 'guess the biggest size' games. RCVBUF/SNDBUF
+ 		 * are treated in BSD as hints
+ 		 */
+-		__sock_set_rcvbuf(sk, min_t(u32, val, sysctl_rmem_max));
++		__sock_set_rcvbuf(sk, min_t(u32, val, READ_ONCE(sysctl_rmem_max)));
+ 		break;
+ 
+ 	case SO_RCVBUFFORCE:
+@@ -2219,7 +2219,7 @@ struct sk_buff *sock_omalloc(struct sock *sk, unsigned long size,
+ 
+ 	/* small safe race: SKB_TRUESIZE may differ from final skb->truesize */
+ 	if (atomic_read(&sk->sk_omem_alloc) + SKB_TRUESIZE(size) >
+-	    sysctl_optmem_max)
++	    READ_ONCE(sysctl_optmem_max))
+ 		return NULL;
+ 
+ 	skb = alloc_skb(size, priority);
+@@ -2237,8 +2237,10 @@ struct sk_buff *sock_omalloc(struct sock *sk, unsigned long size,
+  */
+ void *sock_kmalloc(struct sock *sk, int size, gfp_t priority)
+ {
+-	if ((unsigned int)size <= sysctl_optmem_max &&
+-	    atomic_read(&sk->sk_omem_alloc) + size < sysctl_optmem_max) {
++	int optmem_max = READ_ONCE(sysctl_optmem_max);
++
++	if ((unsigned int)size <= optmem_max &&
++	    atomic_read(&sk->sk_omem_alloc) + size < optmem_max) {
+ 		void *mem;
+ 		/* First do the add, to avoid the race if kmalloc
+ 		 * might sleep.
+@@ -2974,8 +2976,8 @@ void sock_init_data(struct socket *sock, struct sock *sk)
+ 	timer_setup(&sk->sk_timer, NULL, 0);
+ 
+ 	sk->sk_allocation	=	GFP_KERNEL;
+-	sk->sk_rcvbuf		=	sysctl_rmem_default;
+-	sk->sk_sndbuf		=	sysctl_wmem_default;
++	sk->sk_rcvbuf		=	READ_ONCE(sysctl_rmem_default);
++	sk->sk_sndbuf		=	READ_ONCE(sysctl_wmem_default);
+ 	sk->sk_state		=	TCP_CLOSE;
+ 	sk_set_socket(sk, sock);
+ 
+@@ -3030,7 +3032,7 @@ void sock_init_data(struct socket *sock, struct sock *sk)
+ 
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+ 	sk->sk_napi_id		=	0;
+-	sk->sk_ll_usec		=	sysctl_net_busy_read;
++	sk->sk_ll_usec		=	READ_ONCE(sysctl_net_busy_read);
+ #endif
+ 
+ 	sk->sk_max_pacing_rate = ~0UL;
+diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
+index 2e0a4378e778a..0dfe9f255ab3a 100644
+--- a/net/core/sysctl_net_core.c
++++ b/net/core/sysctl_net_core.c
+@@ -235,14 +235,17 @@ static int set_default_qdisc(struct ctl_table *table, int write,
+ static int proc_do_dev_weight(struct ctl_table *table, int write,
+ 			   void *buffer, size_t *lenp, loff_t *ppos)
+ {
+-	int ret;
++	static DEFINE_MUTEX(dev_weight_mutex);
++	int ret, weight;
+ 
++	mutex_lock(&dev_weight_mutex);
+ 	ret = proc_dointvec(table, write, buffer, lenp, ppos);
+-	if (ret != 0)
+-		return ret;
+-
+-	dev_rx_weight = weight_p * dev_weight_rx_bias;
+-	dev_tx_weight = weight_p * dev_weight_tx_bias;
++	if (!ret && write) {
++		weight = READ_ONCE(weight_p);
++		WRITE_ONCE(dev_rx_weight, weight * dev_weight_rx_bias);
++		WRITE_ONCE(dev_tx_weight, weight * dev_weight_tx_bias);
++	}
++	mutex_unlock(&dev_weight_mutex);
+ 
+ 	return ret;
+ }
+diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c
+index dc92a67baea39..7d542eb461729 100644
+--- a/net/decnet/af_decnet.c
++++ b/net/decnet/af_decnet.c
+@@ -480,8 +480,8 @@ static struct sock *dn_alloc_sock(struct net *net, struct socket *sock, gfp_t gf
+ 	sk->sk_family      = PF_DECnet;
+ 	sk->sk_protocol    = 0;
+ 	sk->sk_allocation  = gfp;
+-	sk->sk_sndbuf	   = sysctl_decnet_wmem[1];
+-	sk->sk_rcvbuf	   = sysctl_decnet_rmem[1];
++	sk->sk_sndbuf	   = READ_ONCE(sysctl_decnet_wmem[1]);
++	sk->sk_rcvbuf	   = READ_ONCE(sysctl_decnet_rmem[1]);
+ 
+ 	/* Initialization of DECnet Session Control Port		*/
+ 	scp = DN_SK(sk);
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index 148ef484a66ce..8f17538755507 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -2668,23 +2668,27 @@ static __net_init int devinet_init_net(struct net *net)
+ #endif
+ 
+ 	if (!net_eq(net, &init_net)) {
+-		if (IS_ENABLED(CONFIG_SYSCTL) &&
+-		    sysctl_devconf_inherit_init_net == 3) {
++		switch (net_inherit_devconf()) {
++		case 3:
+ 			/* copy from the current netns */
+ 			memcpy(all, current->nsproxy->net_ns->ipv4.devconf_all,
+ 			       sizeof(ipv4_devconf));
+ 			memcpy(dflt,
+ 			       current->nsproxy->net_ns->ipv4.devconf_dflt,
+ 			       sizeof(ipv4_devconf_dflt));
+-		} else if (!IS_ENABLED(CONFIG_SYSCTL) ||
+-			   sysctl_devconf_inherit_init_net != 2) {
+-			/* inherit == 0 or 1: copy from init_net */
++			break;
++		case 0:
++		case 1:
++			/* copy from init_net */
+ 			memcpy(all, init_net.ipv4.devconf_all,
+ 			       sizeof(ipv4_devconf));
+ 			memcpy(dflt, init_net.ipv4.devconf_dflt,
+ 			       sizeof(ipv4_devconf_dflt));
++			break;
++		case 2:
++			/* use compiled values */
++			break;
+ 		}
+-		/* else inherit == 2: use compiled values */
+ 	}
+ 
+ #ifdef CONFIG_SYSCTL
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index f77b0af3cb657..0dbf950de832f 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1721,7 +1721,7 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
+ 
+ 	sk->sk_protocol = ip_hdr(skb)->protocol;
+ 	sk->sk_bound_dev_if = arg->bound_dev_if;
+-	sk->sk_sndbuf = sysctl_wmem_default;
++	sk->sk_sndbuf = READ_ONCE(sysctl_wmem_default);
+ 	ipc.sockc.mark = fl4.flowi4_mark;
+ 	err = ip_append_data(sk, &fl4, ip_reply_glue_bits, arg->iov->iov_base,
+ 			     len, 0, &ipc, &rt, MSG_DONTWAIT);
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index 22507a6a3f71c..4cc39c62af55d 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -773,7 +773,7 @@ static int ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval, int optlen)
+ 
+ 	if (optlen < GROUP_FILTER_SIZE(0))
+ 		return -EINVAL;
+-	if (optlen > sysctl_optmem_max)
++	if (optlen > READ_ONCE(sysctl_optmem_max))
+ 		return -ENOBUFS;
+ 
+ 	gsf = memdup_sockptr(optval, optlen);
+@@ -808,7 +808,7 @@ static int compat_ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ 
+ 	if (optlen < size0)
+ 		return -EINVAL;
+-	if (optlen > sysctl_optmem_max - 4)
++	if (optlen > READ_ONCE(sysctl_optmem_max) - 4)
+ 		return -ENOBUFS;
+ 
+ 	p = kmalloc(optlen + 4, GFP_KERNEL);
+@@ -1231,7 +1231,7 @@ static int do_ip_setsockopt(struct sock *sk, int level, int optname,
+ 
+ 		if (optlen < IP_MSFILTER_SIZE(0))
+ 			goto e_inval;
+-		if (optlen > sysctl_optmem_max) {
++		if (optlen > READ_ONCE(sysctl_optmem_max)) {
+ 			err = -ENOBUFS;
+ 			break;
+ 		}
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 78460eb39b3af..bfeb05f62b94f 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -451,8 +451,8 @@ void tcp_init_sock(struct sock *sk)
+ 
+ 	icsk->icsk_sync_mss = tcp_sync_mss;
+ 
+-	WRITE_ONCE(sk->sk_sndbuf, sock_net(sk)->ipv4.sysctl_tcp_wmem[1]);
+-	WRITE_ONCE(sk->sk_rcvbuf, sock_net(sk)->ipv4.sysctl_tcp_rmem[1]);
++	WRITE_ONCE(sk->sk_sndbuf, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_wmem[1]));
++	WRITE_ONCE(sk->sk_rcvbuf, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[1]));
+ 
+ 	sk_sockets_allocated_inc(sk);
+ 	sk->sk_route_forced_caps = NETIF_F_GSO;
+@@ -1711,7 +1711,7 @@ int tcp_set_rcvlowat(struct sock *sk, int val)
+ 	if (sk->sk_userlocks & SOCK_RCVBUF_LOCK)
+ 		cap = sk->sk_rcvbuf >> 1;
+ 	else
+-		cap = sock_net(sk)->ipv4.sysctl_tcp_rmem[2] >> 1;
++		cap = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]) >> 1;
+ 	val = min(val, cap);
+ 	WRITE_ONCE(sk->sk_rcvlowat, val ? : 1);
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index d35e88b5ffcbe..41b44b311e8a0 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -425,7 +425,7 @@ static void tcp_sndbuf_expand(struct sock *sk)
+ 
+ 	if (sk->sk_sndbuf < sndmem)
+ 		WRITE_ONCE(sk->sk_sndbuf,
+-			   min(sndmem, sock_net(sk)->ipv4.sysctl_tcp_wmem[2]));
++			   min(sndmem, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_wmem[2])));
+ }
+ 
+ /* 2. Tuning advertised window (window_clamp, rcv_ssthresh)
+@@ -454,12 +454,13 @@ static void tcp_sndbuf_expand(struct sock *sk)
+  */
+ 
+ /* Slow part of check#2. */
+-static int __tcp_grow_window(const struct sock *sk, const struct sk_buff *skb)
++static int __tcp_grow_window(const struct sock *sk, const struct sk_buff *skb,
++			     unsigned int skbtruesize)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	/* Optimize this! */
+-	int truesize = tcp_win_from_space(sk, skb->truesize) >> 1;
+-	int window = tcp_win_from_space(sk, sock_net(sk)->ipv4.sysctl_tcp_rmem[2]) >> 1;
++	int truesize = tcp_win_from_space(sk, skbtruesize) >> 1;
++	int window = tcp_win_from_space(sk, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2])) >> 1;
+ 
+ 	while (tp->rcv_ssthresh <= window) {
+ 		if (truesize <= skb->len)
+@@ -471,7 +472,27 @@ static int __tcp_grow_window(const struct sock *sk, const struct sk_buff *skb)
+ 	return 0;
+ }
+ 
+-static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
++/* Even if skb appears to have a bad len/truesize ratio, TCP coalescing
++ * can play nice with us, as sk_buff and skb->head might be either
++ * freed or shared with up to MAX_SKB_FRAGS segments.
++ * Only give a boost to drivers using page frag(s) to hold the frame(s),
++ * and if no payload was pulled in skb->head before reaching us.
++ */
++static u32 truesize_adjust(bool adjust, const struct sk_buff *skb)
++{
++	u32 truesize = skb->truesize;
++
++	if (adjust && !skb_headlen(skb)) {
++		truesize -= SKB_TRUESIZE(skb_end_offset(skb));
++		/* paranoid check, some drivers might be buggy */
++		if (unlikely((int)truesize < (int)skb->len))
++			truesize = skb->truesize;
++	}
++	return truesize;
++}
++
++static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb,
++			    bool adjust)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	int room;
+@@ -480,15 +501,16 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
+ 
+ 	/* Check #1 */
+ 	if (room > 0 && !tcp_under_memory_pressure(sk)) {
++		unsigned int truesize = truesize_adjust(adjust, skb);
+ 		int incr;
+ 
+ 		/* Check #2. Increase window, if skb with such overhead
+ 		 * will fit to rcvbuf in future.
+ 		 */
+-		if (tcp_win_from_space(sk, skb->truesize) <= skb->len)
++		if (tcp_win_from_space(sk, truesize) <= skb->len)
+ 			incr = 2 * tp->advmss;
+ 		else
+-			incr = __tcp_grow_window(sk, skb);
++			incr = __tcp_grow_window(sk, skb, truesize);
+ 
+ 		if (incr) {
+ 			incr = max_t(int, incr, 2 * skb->len);
+@@ -543,16 +565,17 @@ static void tcp_clamp_window(struct sock *sk)
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+ 	struct net *net = sock_net(sk);
++	int rmem2;
+ 
+ 	icsk->icsk_ack.quick = 0;
++	rmem2 = READ_ONCE(net->ipv4.sysctl_tcp_rmem[2]);
+ 
+-	if (sk->sk_rcvbuf < net->ipv4.sysctl_tcp_rmem[2] &&
++	if (sk->sk_rcvbuf < rmem2 &&
+ 	    !(sk->sk_userlocks & SOCK_RCVBUF_LOCK) &&
+ 	    !tcp_under_memory_pressure(sk) &&
+ 	    sk_memory_allocated(sk) < sk_prot_mem_limits(sk, 0)) {
+ 		WRITE_ONCE(sk->sk_rcvbuf,
+-			   min(atomic_read(&sk->sk_rmem_alloc),
+-			       net->ipv4.sysctl_tcp_rmem[2]));
++			   min(atomic_read(&sk->sk_rmem_alloc), rmem2));
+ 	}
+ 	if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf)
+ 		tp->rcv_ssthresh = min(tp->window_clamp, 2U * tp->advmss);
+@@ -714,7 +737,7 @@ void tcp_rcv_space_adjust(struct sock *sk)
+ 
+ 		do_div(rcvwin, tp->advmss);
+ 		rcvbuf = min_t(u64, rcvwin * rcvmem,
+-			       sock_net(sk)->ipv4.sysctl_tcp_rmem[2]);
++			       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
+ 		if (rcvbuf > sk->sk_rcvbuf) {
+ 			WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
+ 
+@@ -782,7 +805,7 @@ static void tcp_event_data_recv(struct sock *sk, struct sk_buff *skb)
+ 	tcp_ecn_check_ce(sk, skb);
+ 
+ 	if (skb->len >= 128)
+-		tcp_grow_window(sk, skb);
++		tcp_grow_window(sk, skb, true);
+ }
+ 
+ /* Called to compute a smoothed rtt estimate. The data fed to this
+@@ -4761,7 +4784,7 @@ coalesce_done:
+ 		 * and trigger fast retransmit.
+ 		 */
+ 		if (tcp_is_sack(tp))
+-			tcp_grow_window(sk, skb);
++			tcp_grow_window(sk, skb, true);
+ 		kfree_skb_partial(skb, fragstolen);
+ 		skb = NULL;
+ 		goto add_sack;
+@@ -4849,7 +4872,7 @@ end:
+ 		 * and trigger fast retransmit.
+ 		 */
+ 		if (tcp_is_sack(tp))
+-			tcp_grow_window(sk, skb);
++			tcp_grow_window(sk, skb, false);
+ 		skb_condense(skb);
+ 		skb_set_owner_r(skb, sk);
+ 	}
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 4c9274cb92d55..48fce999dc612 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -238,8 +238,8 @@ void tcp_select_initial_window(const struct sock *sk, int __space, __u32 mss,
+ 	*rcv_wscale = 0;
+ 	if (wscale_ok) {
+ 		/* Set window scaling on max possible window */
+-		space = max_t(u32, space, sock_net(sk)->ipv4.sysctl_tcp_rmem[2]);
+-		space = max_t(u32, space, sysctl_rmem_max);
++		space = max_t(u32, space, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
++		space = max_t(u32, space, READ_ONCE(sysctl_rmem_max));
+ 		space = min_t(u32, space, *window_clamp);
+ 		*rcv_wscale = clamp_t(int, ilog2(space) - 15,
+ 				      0, TCP_MAX_WSCALE);
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 05317e6f48f8a..ed1e5bfc97b31 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -7042,9 +7042,8 @@ static int __net_init addrconf_init_net(struct net *net)
+ 	if (!dflt)
+ 		goto err_alloc_dflt;
+ 
+-	if (IS_ENABLED(CONFIG_SYSCTL) &&
+-	    !net_eq(net, &init_net)) {
+-		switch (sysctl_devconf_inherit_init_net) {
++	if (!net_eq(net, &init_net)) {
++		switch (net_inherit_devconf()) {
+ 		case 1:  /* copy from init_net */
+ 			memcpy(all, init_net.ipv6.devconf_all,
+ 			       sizeof(ipv6_devconf));
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 43a894bf9a1be..6fa118bf40cdd 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -208,7 +208,7 @@ static int ipv6_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ 
+ 	if (optlen < GROUP_FILTER_SIZE(0))
+ 		return -EINVAL;
+-	if (optlen > sysctl_optmem_max)
++	if (optlen > READ_ONCE(sysctl_optmem_max))
+ 		return -ENOBUFS;
+ 
+ 	gsf = memdup_sockptr(optval, optlen);
+@@ -242,7 +242,7 @@ static int compat_ipv6_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ 
+ 	if (optlen < size0)
+ 		return -EINVAL;
+-	if (optlen > sysctl_optmem_max - 4)
++	if (optlen > READ_ONCE(sysctl_optmem_max) - 4)
+ 		return -ENOBUFS;
+ 
+ 	p = kmalloc(optlen + 4, GFP_KERNEL);
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 2aa16a171285b..05e2710988883 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -1701,9 +1701,12 @@ static int pfkey_register(struct sock *sk, struct sk_buff *skb, const struct sad
+ 		pfk->registered |= (1<<hdr->sadb_msg_satype);
+ 	}
+ 
++	mutex_lock(&pfkey_mutex);
+ 	xfrm_probe_algs();
+ 
+ 	supp_skb = compose_sadb_supported(hdr, GFP_KERNEL | __GFP_ZERO);
++	mutex_unlock(&pfkey_mutex);
++
+ 	if (!supp_skb) {
+ 		if (hdr->sadb_msg_satype != SADB_SATYPE_UNSPEC)
+ 			pfk->registered &= ~(1<<hdr->sadb_msg_satype);
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index d0e91aa7b30e5..e61c85873ea2f 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1439,7 +1439,7 @@ static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied)
+ 
+ 		do_div(rcvwin, advmss);
+ 		rcvbuf = min_t(u64, rcvwin * rcvmem,
+-			       sock_net(sk)->ipv4.sysctl_tcp_rmem[2]);
++			       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
+ 
+ 		if (rcvbuf > sk->sk_rcvbuf) {
+ 			u32 window_clamp;
+@@ -1872,8 +1872,8 @@ static int mptcp_init_sock(struct sock *sk)
+ 		return ret;
+ 
+ 	sk_sockets_allocated_inc(sk);
+-	sk->sk_rcvbuf = sock_net(sk)->ipv4.sysctl_tcp_rmem[1];
+-	sk->sk_sndbuf = sock_net(sk)->ipv4.sysctl_tcp_wmem[1];
++	sk->sk_rcvbuf = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[1]);
++	sk->sk_sndbuf = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_wmem[1]);
+ 
+ 	return 0;
+ }
+diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
+index 16b48064f715e..daab857c52a80 100644
+--- a/net/netfilter/ipvs/ip_vs_sync.c
++++ b/net/netfilter/ipvs/ip_vs_sync.c
+@@ -1280,12 +1280,12 @@ static void set_sock_size(struct sock *sk, int mode, int val)
+ 	lock_sock(sk);
+ 	if (mode) {
+ 		val = clamp_t(int, val, (SOCK_MIN_SNDBUF + 1) / 2,
+-			      sysctl_wmem_max);
++			      READ_ONCE(sysctl_wmem_max));
+ 		sk->sk_sndbuf = val * 2;
+ 		sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
+ 	} else {
+ 		val = clamp_t(int, val, (SOCK_MIN_RCVBUF + 1) / 2,
+-			      sysctl_rmem_max);
++			      READ_ONCE(sysctl_rmem_max));
+ 		sk->sk_rcvbuf = val * 2;
+ 		sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
+ 	}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 30bd4b867912c..1b039476e4d6a 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1999,9 +1999,9 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 			      u8 policy, u32 flags)
+ {
+ 	const struct nlattr * const *nla = ctx->nla;
++	struct nft_stats __percpu *stats = NULL;
+ 	struct nft_table *table = ctx->table;
+ 	struct nft_base_chain *basechain;
+-	struct nft_stats __percpu *stats;
+ 	struct net *net = ctx->net;
+ 	char name[NFT_NAME_MAXLEN];
+ 	struct nft_trans *trans;
+@@ -2037,7 +2037,6 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 				return PTR_ERR(stats);
+ 			}
+ 			rcu_assign_pointer(basechain->stats, stats);
+-			static_branch_inc(&nft_counters_enabled);
+ 		}
+ 
+ 		err = nft_basechain_init(basechain, family, &hook, flags);
+@@ -2120,6 +2119,9 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 		goto err_unregister_hook;
+ 	}
+ 
++	if (stats)
++		static_branch_inc(&nft_counters_enabled);
++
+ 	table->use++;
+ 
+ 	return 0;
+@@ -4839,19 +4841,13 @@ static int nft_setelem_parse_flags(const struct nft_set *set,
+ static int nft_setelem_parse_key(struct nft_ctx *ctx, struct nft_set *set,
+ 				 struct nft_data *key, struct nlattr *attr)
+ {
+-	struct nft_data_desc desc;
+-	int err;
+-
+-	err = nft_data_init(ctx, key, NFT_DATA_VALUE_MAXLEN, &desc, attr);
+-	if (err < 0)
+-		return err;
+-
+-	if (desc.type != NFT_DATA_VALUE || desc.len != set->klen) {
+-		nft_data_release(key, desc.type);
+-		return -EINVAL;
+-	}
++	struct nft_data_desc desc = {
++		.type	= NFT_DATA_VALUE,
++		.size	= NFT_DATA_VALUE_MAXLEN,
++		.len	= set->klen,
++	};
+ 
+-	return 0;
++	return nft_data_init(ctx, key, &desc, attr);
+ }
+ 
+ static int nft_setelem_parse_data(struct nft_ctx *ctx, struct nft_set *set,
+@@ -4860,24 +4856,18 @@ static int nft_setelem_parse_data(struct nft_ctx *ctx, struct nft_set *set,
+ 				  struct nlattr *attr)
+ {
+ 	u32 dtype;
+-	int err;
+-
+-	err = nft_data_init(ctx, data, NFT_DATA_VALUE_MAXLEN, desc, attr);
+-	if (err < 0)
+-		return err;
+ 
+ 	if (set->dtype == NFT_DATA_VERDICT)
+ 		dtype = NFT_DATA_VERDICT;
+ 	else
+ 		dtype = NFT_DATA_VALUE;
+ 
+-	if (dtype != desc->type ||
+-	    set->dlen != desc->len) {
+-		nft_data_release(data, desc->type);
+-		return -EINVAL;
+-	}
++	desc->type = dtype;
++	desc->size = NFT_DATA_VALUE_MAXLEN;
++	desc->len = set->dlen;
++	desc->flags = NFT_DATA_DESC_SETELEM;
+ 
+-	return 0;
++	return nft_data_init(ctx, data, desc, attr);
+ }
+ 
+ static int nft_get_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+@@ -8688,6 +8678,11 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 			return PTR_ERR(chain);
+ 		if (nft_is_base_chain(chain))
+ 			return -EOPNOTSUPP;
++		if (nft_chain_is_bound(chain))
++			return -EINVAL;
++		if (desc->flags & NFT_DATA_DESC_SETELEM &&
++		    chain->flags & NFT_CHAIN_BINDING)
++			return -EINVAL;
+ 
+ 		chain->use++;
+ 		data->verdict.chain = chain;
+@@ -8695,7 +8690,7 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 	}
+ 
+ 	desc->len = sizeof(data->verdict);
+-	desc->type = NFT_DATA_VERDICT;
++
+ 	return 0;
+ }
+ 
+@@ -8748,20 +8743,25 @@ nla_put_failure:
+ }
+ 
+ static int nft_value_init(const struct nft_ctx *ctx,
+-			  struct nft_data *data, unsigned int size,
+-			  struct nft_data_desc *desc, const struct nlattr *nla)
++			  struct nft_data *data, struct nft_data_desc *desc,
++			  const struct nlattr *nla)
+ {
+ 	unsigned int len;
+ 
+ 	len = nla_len(nla);
+ 	if (len == 0)
+ 		return -EINVAL;
+-	if (len > size)
++	if (len > desc->size)
+ 		return -EOVERFLOW;
++	if (desc->len) {
++		if (len != desc->len)
++			return -EINVAL;
++	} else {
++		desc->len = len;
++	}
+ 
+ 	nla_memcpy(data->data, nla, len);
+-	desc->type = NFT_DATA_VALUE;
+-	desc->len  = len;
++
+ 	return 0;
+ }
+ 
+@@ -8781,7 +8781,6 @@ static const struct nla_policy nft_data_policy[NFTA_DATA_MAX + 1] = {
+  *
+  *	@ctx: context of the expression using the data
+  *	@data: destination struct nft_data
+- *	@size: maximum data length
+  *	@desc: data description
+  *	@nla: netlink attribute containing data
+  *
+@@ -8791,24 +8790,35 @@ static const struct nla_policy nft_data_policy[NFTA_DATA_MAX + 1] = {
+  *	The caller can indicate that it only wants to accept data of type
+  *	NFT_DATA_VALUE by passing NULL for the ctx argument.
+  */
+-int nft_data_init(const struct nft_ctx *ctx,
+-		  struct nft_data *data, unsigned int size,
++int nft_data_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 		  struct nft_data_desc *desc, const struct nlattr *nla)
+ {
+ 	struct nlattr *tb[NFTA_DATA_MAX + 1];
+ 	int err;
+ 
++	if (WARN_ON_ONCE(!desc->size))
++		return -EINVAL;
++
+ 	err = nla_parse_nested_deprecated(tb, NFTA_DATA_MAX, nla,
+ 					  nft_data_policy, NULL);
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (tb[NFTA_DATA_VALUE])
+-		return nft_value_init(ctx, data, size, desc,
+-				      tb[NFTA_DATA_VALUE]);
+-	if (tb[NFTA_DATA_VERDICT] && ctx != NULL)
+-		return nft_verdict_init(ctx, data, desc, tb[NFTA_DATA_VERDICT]);
+-	return -EINVAL;
++	if (tb[NFTA_DATA_VALUE]) {
++		if (desc->type != NFT_DATA_VALUE)
++			return -EINVAL;
++
++		err = nft_value_init(ctx, data, desc, tb[NFTA_DATA_VALUE]);
++	} else if (tb[NFTA_DATA_VERDICT] && ctx != NULL) {
++		if (desc->type != NFT_DATA_VERDICT)
++			return -EINVAL;
++
++		err = nft_verdict_init(ctx, data, desc, tb[NFTA_DATA_VERDICT]);
++	} else {
++		err = -EINVAL;
++	}
++
++	return err;
+ }
+ EXPORT_SYMBOL_GPL(nft_data_init);
+ 
+diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
+index a61b5bf5aa0fb..9dc18429ed875 100644
+--- a/net/netfilter/nf_tables_core.c
++++ b/net/netfilter/nf_tables_core.c
+@@ -67,6 +67,50 @@ static void nft_cmp_fast_eval(const struct nft_expr *expr,
+ 	regs->verdict.code = NFT_BREAK;
+ }
+ 
++static void nft_cmp16_fast_eval(const struct nft_expr *expr,
++				struct nft_regs *regs)
++{
++	const struct nft_cmp16_fast_expr *priv = nft_expr_priv(expr);
++	const u64 *reg_data = (const u64 *)&regs->data[priv->sreg];
++	const u64 *mask = (const u64 *)&priv->mask;
++	const u64 *data = (const u64 *)&priv->data;
++
++	if (((reg_data[0] & mask[0]) == data[0] &&
++	    ((reg_data[1] & mask[1]) == data[1])) ^ priv->inv)
++		return;
++	regs->verdict.code = NFT_BREAK;
++}
++
++static noinline void __nft_trace_verdict(struct nft_traceinfo *info,
++					 const struct nft_chain *chain,
++					 const struct nft_regs *regs)
++{
++	enum nft_trace_types type;
++
++	switch (regs->verdict.code) {
++	case NFT_CONTINUE:
++	case NFT_RETURN:
++		type = NFT_TRACETYPE_RETURN;
++		break;
++	default:
++		type = NFT_TRACETYPE_RULE;
++		break;
++	}
++
++	__nft_trace_packet(info, chain, type);
++}
++
++static inline void nft_trace_verdict(struct nft_traceinfo *info,
++				     const struct nft_chain *chain,
++				     const struct nft_rule *rule,
++				     const struct nft_regs *regs)
++{
++	if (static_branch_unlikely(&nft_trace_enabled)) {
++		info->rule = rule;
++		__nft_trace_verdict(info, chain, regs);
++	}
++}
++
+ static bool nft_payload_fast_eval(const struct nft_expr *expr,
+ 				  struct nft_regs *regs,
+ 				  const struct nft_pktinfo *pkt)
+@@ -185,6 +229,8 @@ next_rule:
+ 		nft_rule_for_each_expr(expr, last, rule) {
+ 			if (expr->ops == &nft_cmp_fast_ops)
+ 				nft_cmp_fast_eval(expr, &regs);
++			else if (expr->ops == &nft_cmp16_fast_ops)
++				nft_cmp16_fast_eval(expr, &regs);
+ 			else if (expr->ops == &nft_bitwise_fast_ops)
+ 				nft_bitwise_fast_eval(expr, &regs);
+ 			else if (expr->ops != &nft_payload_fast_ops ||
+@@ -207,13 +253,13 @@ next_rule:
+ 		break;
+ 	}
+ 
++	nft_trace_verdict(&info, chain, rule, &regs);
++
+ 	switch (regs.verdict.code & NF_VERDICT_MASK) {
+ 	case NF_ACCEPT:
+ 	case NF_DROP:
+ 	case NF_QUEUE:
+ 	case NF_STOLEN:
+-		nft_trace_packet(&info, chain, rule,
+-				 NFT_TRACETYPE_RULE);
+ 		return regs.verdict.code;
+ 	}
+ 
+@@ -226,15 +272,10 @@ next_rule:
+ 		stackptr++;
+ 		fallthrough;
+ 	case NFT_GOTO:
+-		nft_trace_packet(&info, chain, rule,
+-				 NFT_TRACETYPE_RULE);
+-
+ 		chain = regs.verdict.chain;
+ 		goto do_chain;
+ 	case NFT_CONTINUE:
+ 	case NFT_RETURN:
+-		nft_trace_packet(&info, chain, rule,
+-				 NFT_TRACETYPE_RETURN);
+ 		break;
+ 	default:
+ 		WARN_ON(1);
+diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c
+index 47b0dba95054f..d6ab7aa14adc2 100644
+--- a/net/netfilter/nft_bitwise.c
++++ b/net/netfilter/nft_bitwise.c
+@@ -93,7 +93,16 @@ static const struct nla_policy nft_bitwise_policy[NFTA_BITWISE_MAX + 1] = {
+ static int nft_bitwise_init_bool(struct nft_bitwise *priv,
+ 				 const struct nlattr *const tb[])
+ {
+-	struct nft_data_desc mask, xor;
++	struct nft_data_desc mask = {
++		.type	= NFT_DATA_VALUE,
++		.size	= sizeof(priv->mask),
++		.len	= priv->len,
++	};
++	struct nft_data_desc xor = {
++		.type	= NFT_DATA_VALUE,
++		.size	= sizeof(priv->xor),
++		.len	= priv->len,
++	};
+ 	int err;
+ 
+ 	if (tb[NFTA_BITWISE_DATA])
+@@ -103,36 +112,30 @@ static int nft_bitwise_init_bool(struct nft_bitwise *priv,
+ 	    !tb[NFTA_BITWISE_XOR])
+ 		return -EINVAL;
+ 
+-	err = nft_data_init(NULL, &priv->mask, sizeof(priv->mask), &mask,
+-			    tb[NFTA_BITWISE_MASK]);
++	err = nft_data_init(NULL, &priv->mask, &mask, tb[NFTA_BITWISE_MASK]);
+ 	if (err < 0)
+ 		return err;
+-	if (mask.type != NFT_DATA_VALUE || mask.len != priv->len) {
+-		err = -EINVAL;
+-		goto err1;
+-	}
+ 
+-	err = nft_data_init(NULL, &priv->xor, sizeof(priv->xor), &xor,
+-			    tb[NFTA_BITWISE_XOR]);
++	err = nft_data_init(NULL, &priv->xor, &xor, tb[NFTA_BITWISE_XOR]);
+ 	if (err < 0)
+-		goto err1;
+-	if (xor.type != NFT_DATA_VALUE || xor.len != priv->len) {
+-		err = -EINVAL;
+-		goto err2;
+-	}
++		goto err_xor_err;
+ 
+ 	return 0;
+-err2:
+-	nft_data_release(&priv->xor, xor.type);
+-err1:
++
++err_xor_err:
+ 	nft_data_release(&priv->mask, mask.type);
++
+ 	return err;
+ }
+ 
+ static int nft_bitwise_init_shift(struct nft_bitwise *priv,
+ 				  const struct nlattr *const tb[])
+ {
+-	struct nft_data_desc d;
++	struct nft_data_desc desc = {
++		.type	= NFT_DATA_VALUE,
++		.size	= sizeof(priv->data),
++		.len	= sizeof(u32),
++	};
+ 	int err;
+ 
+ 	if (tb[NFTA_BITWISE_MASK] ||
+@@ -142,13 +145,12 @@ static int nft_bitwise_init_shift(struct nft_bitwise *priv,
+ 	if (!tb[NFTA_BITWISE_DATA])
+ 		return -EINVAL;
+ 
+-	err = nft_data_init(NULL, &priv->data, sizeof(priv->data), &d,
+-			    tb[NFTA_BITWISE_DATA]);
++	err = nft_data_init(NULL, &priv->data, &desc, tb[NFTA_BITWISE_DATA]);
+ 	if (err < 0)
+ 		return err;
+-	if (d.type != NFT_DATA_VALUE || d.len != sizeof(u32) ||
+-	    priv->data.data[0] >= BITS_PER_TYPE(u32)) {
+-		nft_data_release(&priv->data, d.type);
++
++	if (priv->data.data[0] >= BITS_PER_TYPE(u32)) {
++		nft_data_release(&priv->data, desc.type);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -290,22 +292,21 @@ static const struct nft_expr_ops nft_bitwise_ops = {
+ static int
+ nft_bitwise_extract_u32_data(const struct nlattr * const tb, u32 *out)
+ {
+-	struct nft_data_desc desc;
+ 	struct nft_data data;
+-	int err = 0;
++	struct nft_data_desc desc = {
++		.type	= NFT_DATA_VALUE,
++		.size	= sizeof(data),
++		.len	= sizeof(u32),
++	};
++	int err;
+ 
+-	err = nft_data_init(NULL, &data, sizeof(data), &desc, tb);
++	err = nft_data_init(NULL, &data, &desc, tb);
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (desc.type != NFT_DATA_VALUE || desc.len != sizeof(u32)) {
+-		err = -EINVAL;
+-		goto err;
+-	}
+ 	*out = data.data[0];
+-err:
+-	nft_data_release(&data, desc.type);
+-	return err;
++
++	return 0;
+ }
+ 
+ static int nft_bitwise_fast_init(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_cmp.c b/net/netfilter/nft_cmp.c
+index b529c0e865466..461763a571f20 100644
+--- a/net/netfilter/nft_cmp.c
++++ b/net/netfilter/nft_cmp.c
+@@ -73,20 +73,16 @@ static int nft_cmp_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 			const struct nlattr * const tb[])
+ {
+ 	struct nft_cmp_expr *priv = nft_expr_priv(expr);
+-	struct nft_data_desc desc;
++	struct nft_data_desc desc = {
++		.type	= NFT_DATA_VALUE,
++		.size	= sizeof(priv->data),
++	};
+ 	int err;
+ 
+-	err = nft_data_init(NULL, &priv->data, sizeof(priv->data), &desc,
+-			    tb[NFTA_CMP_DATA]);
++	err = nft_data_init(NULL, &priv->data, &desc, tb[NFTA_CMP_DATA]);
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (desc.type != NFT_DATA_VALUE) {
+-		err = -EINVAL;
+-		nft_data_release(&priv->data, desc.type);
+-		return err;
+-	}
+-
+ 	err = nft_parse_register_load(tb[NFTA_CMP_SREG], &priv->sreg, desc.len);
+ 	if (err < 0)
+ 		return err;
+@@ -201,12 +197,14 @@ static int nft_cmp_fast_init(const struct nft_ctx *ctx,
+ 			     const struct nlattr * const tb[])
+ {
+ 	struct nft_cmp_fast_expr *priv = nft_expr_priv(expr);
+-	struct nft_data_desc desc;
+ 	struct nft_data data;
++	struct nft_data_desc desc = {
++		.type	= NFT_DATA_VALUE,
++		.size	= sizeof(data),
++	};
+ 	int err;
+ 
+-	err = nft_data_init(NULL, &data, sizeof(data), &desc,
+-			    tb[NFTA_CMP_DATA]);
++	err = nft_data_init(NULL, &data, &desc, tb[NFTA_CMP_DATA]);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -272,12 +270,108 @@ const struct nft_expr_ops nft_cmp_fast_ops = {
+ 	.offload	= nft_cmp_fast_offload,
+ };
+ 
++static u32 nft_cmp_mask(u32 bitlen)
++{
++	return (__force u32)cpu_to_le32(~0U >> (sizeof(u32) * BITS_PER_BYTE - bitlen));
++}
++
++static void nft_cmp16_fast_mask(struct nft_data *data, unsigned int bitlen)
++{
++	int len = bitlen / BITS_PER_BYTE;
++	int i, words = len / sizeof(u32);
++
++	for (i = 0; i < words; i++) {
++		data->data[i] = 0xffffffff;
++		bitlen -= sizeof(u32) * BITS_PER_BYTE;
++	}
++
++	if (len % sizeof(u32))
++		data->data[i++] = nft_cmp_mask(bitlen);
++
++	for (; i < 4; i++)
++		data->data[i] = 0;
++}
++
++static int nft_cmp16_fast_init(const struct nft_ctx *ctx,
++			       const struct nft_expr *expr,
++			       const struct nlattr * const tb[])
++{
++	struct nft_cmp16_fast_expr *priv = nft_expr_priv(expr);
++	struct nft_data_desc desc = {
++		.type	= NFT_DATA_VALUE,
++		.size	= sizeof(priv->data),
++	};
++	int err;
++
++	err = nft_data_init(NULL, &priv->data, &desc, tb[NFTA_CMP_DATA]);
++	if (err < 0)
++		return err;
++
++	err = nft_parse_register_load(tb[NFTA_CMP_SREG], &priv->sreg, desc.len);
++	if (err < 0)
++		return err;
++
++	nft_cmp16_fast_mask(&priv->mask, desc.len * BITS_PER_BYTE);
++	priv->inv = ntohl(nla_get_be32(tb[NFTA_CMP_OP])) != NFT_CMP_EQ;
++	priv->len = desc.len;
++
++	return 0;
++}
++
++static int nft_cmp16_fast_offload(struct nft_offload_ctx *ctx,
++				  struct nft_flow_rule *flow,
++				  const struct nft_expr *expr)
++{
++	const struct nft_cmp16_fast_expr *priv = nft_expr_priv(expr);
++	struct nft_cmp_expr cmp = {
++		.data	= priv->data,
++		.sreg	= priv->sreg,
++		.len	= priv->len,
++		.op	= priv->inv ? NFT_CMP_NEQ : NFT_CMP_EQ,
++	};
++
++	return __nft_cmp_offload(ctx, flow, &cmp);
++}
++
++static int nft_cmp16_fast_dump(struct sk_buff *skb, const struct nft_expr *expr)
++{
++	const struct nft_cmp16_fast_expr *priv = nft_expr_priv(expr);
++	enum nft_cmp_ops op = priv->inv ? NFT_CMP_NEQ : NFT_CMP_EQ;
++
++	if (nft_dump_register(skb, NFTA_CMP_SREG, priv->sreg))
++		goto nla_put_failure;
++	if (nla_put_be32(skb, NFTA_CMP_OP, htonl(op)))
++		goto nla_put_failure;
++
++	if (nft_data_dump(skb, NFTA_CMP_DATA, &priv->data,
++			  NFT_DATA_VALUE, priv->len) < 0)
++		goto nla_put_failure;
++	return 0;
++
++nla_put_failure:
++	return -1;
++}
++
++
++const struct nft_expr_ops nft_cmp16_fast_ops = {
++	.type		= &nft_cmp_type,
++	.size		= NFT_EXPR_SIZE(sizeof(struct nft_cmp16_fast_expr)),
++	.eval		= NULL,	/* inlined */
++	.init		= nft_cmp16_fast_init,
++	.dump		= nft_cmp16_fast_dump,
++	.offload	= nft_cmp16_fast_offload,
++};
++
+ static const struct nft_expr_ops *
+ nft_cmp_select_ops(const struct nft_ctx *ctx, const struct nlattr * const tb[])
+ {
+-	struct nft_data_desc desc;
+ 	struct nft_data data;
++	struct nft_data_desc desc = {
++		.type	= NFT_DATA_VALUE,
++		.size	= sizeof(data),
++	};
+ 	enum nft_cmp_ops op;
++	u8 sreg;
+ 	int err;
+ 
+ 	if (tb[NFTA_CMP_SREG] == NULL ||
+@@ -298,23 +392,21 @@ nft_cmp_select_ops(const struct nft_ctx *ctx, const struct nlattr * const tb[])
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+-	err = nft_data_init(NULL, &data, sizeof(data), &desc,
+-			    tb[NFTA_CMP_DATA]);
++	err = nft_data_init(NULL, &data, &desc, tb[NFTA_CMP_DATA]);
+ 	if (err < 0)
+ 		return ERR_PTR(err);
+ 
+-	if (desc.type != NFT_DATA_VALUE) {
+-		err = -EINVAL;
+-		goto err1;
+-	}
+-
+-	if (desc.len <= sizeof(u32) && (op == NFT_CMP_EQ || op == NFT_CMP_NEQ))
+-		return &nft_cmp_fast_ops;
++	sreg = ntohl(nla_get_be32(tb[NFTA_CMP_SREG]));
+ 
++	if (op == NFT_CMP_EQ || op == NFT_CMP_NEQ) {
++		if (desc.len <= sizeof(u32))
++			return &nft_cmp_fast_ops;
++		else if (desc.len <= sizeof(data) &&
++			 ((sreg >= NFT_REG_1 && sreg <= NFT_REG_4) ||
++			  (sreg >= NFT_REG32_00 && sreg <= NFT_REG32_12 && sreg % 2 == 0)))
++			return &nft_cmp16_fast_ops;
++	}
+ 	return &nft_cmp_ops;
+-err1:
+-	nft_data_release(&data, desc.type);
+-	return ERR_PTR(-EINVAL);
+ }
+ 
+ struct nft_expr_type nft_cmp_type __read_mostly = {
+diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c
+index d0f67d325bdfd..fcdbc5ed3f367 100644
+--- a/net/netfilter/nft_immediate.c
++++ b/net/netfilter/nft_immediate.c
+@@ -29,20 +29,36 @@ static const struct nla_policy nft_immediate_policy[NFTA_IMMEDIATE_MAX + 1] = {
+ 	[NFTA_IMMEDIATE_DATA]	= { .type = NLA_NESTED },
+ };
+ 
++static enum nft_data_types nft_reg_to_type(const struct nlattr *nla)
++{
++	enum nft_data_types type;
++	u8 reg;
++
++	reg = ntohl(nla_get_be32(nla));
++	if (reg == NFT_REG_VERDICT)
++		type = NFT_DATA_VERDICT;
++	else
++		type = NFT_DATA_VALUE;
++
++	return type;
++}
++
+ static int nft_immediate_init(const struct nft_ctx *ctx,
+ 			      const struct nft_expr *expr,
+ 			      const struct nlattr * const tb[])
+ {
+ 	struct nft_immediate_expr *priv = nft_expr_priv(expr);
+-	struct nft_data_desc desc;
++	struct nft_data_desc desc = {
++		.size	= sizeof(priv->data),
++	};
+ 	int err;
+ 
+ 	if (tb[NFTA_IMMEDIATE_DREG] == NULL ||
+ 	    tb[NFTA_IMMEDIATE_DATA] == NULL)
+ 		return -EINVAL;
+ 
+-	err = nft_data_init(ctx, &priv->data, sizeof(priv->data), &desc,
+-			    tb[NFTA_IMMEDIATE_DATA]);
++	desc.type = nft_reg_to_type(tb[NFTA_IMMEDIATE_DREG]);
++	err = nft_data_init(ctx, &priv->data, &desc, tb[NFTA_IMMEDIATE_DATA]);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/net/netfilter/nft_osf.c b/net/netfilter/nft_osf.c
+index d82677e83400b..720dc9fba6d4f 100644
+--- a/net/netfilter/nft_osf.c
++++ b/net/netfilter/nft_osf.c
+@@ -115,9 +115,21 @@ static int nft_osf_validate(const struct nft_ctx *ctx,
+ 			    const struct nft_expr *expr,
+ 			    const struct nft_data **data)
+ {
+-	return nft_chain_validate_hooks(ctx->chain, (1 << NF_INET_LOCAL_IN) |
+-						    (1 << NF_INET_PRE_ROUTING) |
+-						    (1 << NF_INET_FORWARD));
++	unsigned int hooks;
++
++	switch (ctx->family) {
++	case NFPROTO_IPV4:
++	case NFPROTO_IPV6:
++	case NFPROTO_INET:
++		hooks = (1 << NF_INET_LOCAL_IN) |
++			(1 << NF_INET_PRE_ROUTING) |
++			(1 << NF_INET_FORWARD);
++		break;
++	default:
++		return -EOPNOTSUPP;
++	}
++
++	return nft_chain_validate_hooks(ctx->chain, hooks);
+ }
+ 
+ static struct nft_expr_type nft_osf_type;
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index 01878c16418c2..551e0d6cf63d4 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -660,17 +660,23 @@ static int nft_payload_set_init(const struct nft_ctx *ctx,
+ 				const struct nlattr * const tb[])
+ {
+ 	struct nft_payload_set *priv = nft_expr_priv(expr);
++	u32 csum_offset, csum_type = NFT_PAYLOAD_CSUM_NONE;
++	int err;
+ 
+ 	priv->base        = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_BASE]));
+ 	priv->offset      = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_OFFSET]));
+ 	priv->len         = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_LEN]));
+ 
+ 	if (tb[NFTA_PAYLOAD_CSUM_TYPE])
+-		priv->csum_type =
+-			ntohl(nla_get_be32(tb[NFTA_PAYLOAD_CSUM_TYPE]));
+-	if (tb[NFTA_PAYLOAD_CSUM_OFFSET])
+-		priv->csum_offset =
+-			ntohl(nla_get_be32(tb[NFTA_PAYLOAD_CSUM_OFFSET]));
++		csum_type = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_CSUM_TYPE]));
++	if (tb[NFTA_PAYLOAD_CSUM_OFFSET]) {
++		err = nft_parse_u32_check(tb[NFTA_PAYLOAD_CSUM_OFFSET], U8_MAX,
++					  &csum_offset);
++		if (err < 0)
++			return err;
++
++		priv->csum_offset = csum_offset;
++	}
+ 	if (tb[NFTA_PAYLOAD_CSUM_FLAGS]) {
+ 		u32 flags;
+ 
+@@ -681,7 +687,7 @@ static int nft_payload_set_init(const struct nft_ctx *ctx,
+ 		priv->csum_flags = flags;
+ 	}
+ 
+-	switch (priv->csum_type) {
++	switch (csum_type) {
+ 	case NFT_PAYLOAD_CSUM_NONE:
+ 	case NFT_PAYLOAD_CSUM_INET:
+ 		break;
+@@ -695,6 +701,7 @@ static int nft_payload_set_init(const struct nft_ctx *ctx,
+ 	default:
+ 		return -EOPNOTSUPP;
+ 	}
++	priv->csum_type = csum_type;
+ 
+ 	return nft_parse_register_load(tb[NFTA_PAYLOAD_SREG], &priv->sreg,
+ 				       priv->len);
+@@ -733,6 +740,7 @@ nft_payload_select_ops(const struct nft_ctx *ctx,
+ {
+ 	enum nft_payload_bases base;
+ 	unsigned int offset, len;
++	int err;
+ 
+ 	if (tb[NFTA_PAYLOAD_BASE] == NULL ||
+ 	    tb[NFTA_PAYLOAD_OFFSET] == NULL ||
+@@ -758,8 +766,13 @@ nft_payload_select_ops(const struct nft_ctx *ctx,
+ 	if (tb[NFTA_PAYLOAD_DREG] == NULL)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	offset = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_OFFSET]));
+-	len    = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_LEN]));
++	err = nft_parse_u32_check(tb[NFTA_PAYLOAD_OFFSET], U8_MAX, &offset);
++	if (err < 0)
++		return ERR_PTR(err);
++
++	err = nft_parse_u32_check(tb[NFTA_PAYLOAD_LEN], U8_MAX, &len);
++	if (err < 0)
++		return ERR_PTR(err);
+ 
+ 	if (len <= 4 && is_power_of_2(len) && IS_ALIGNED(offset, len) &&
+ 	    base != NFT_PAYLOAD_LL_HEADER)
+diff --git a/net/netfilter/nft_range.c b/net/netfilter/nft_range.c
+index e4a1c44d7f513..e6bbe32c323df 100644
+--- a/net/netfilter/nft_range.c
++++ b/net/netfilter/nft_range.c
+@@ -51,7 +51,14 @@ static int nft_range_init(const struct nft_ctx *ctx, const struct nft_expr *expr
+ 			const struct nlattr * const tb[])
+ {
+ 	struct nft_range_expr *priv = nft_expr_priv(expr);
+-	struct nft_data_desc desc_from, desc_to;
++	struct nft_data_desc desc_from = {
++		.type	= NFT_DATA_VALUE,
++		.size	= sizeof(priv->data_from),
++	};
++	struct nft_data_desc desc_to = {
++		.type	= NFT_DATA_VALUE,
++		.size	= sizeof(priv->data_to),
++	};
+ 	int err;
+ 	u32 op;
+ 
+@@ -61,26 +68,16 @@ static int nft_range_init(const struct nft_ctx *ctx, const struct nft_expr *expr
+ 	    !tb[NFTA_RANGE_TO_DATA])
+ 		return -EINVAL;
+ 
+-	err = nft_data_init(NULL, &priv->data_from, sizeof(priv->data_from),
+-			    &desc_from, tb[NFTA_RANGE_FROM_DATA]);
++	err = nft_data_init(NULL, &priv->data_from, &desc_from,
++			    tb[NFTA_RANGE_FROM_DATA]);
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (desc_from.type != NFT_DATA_VALUE) {
+-		err = -EINVAL;
+-		goto err1;
+-	}
+-
+-	err = nft_data_init(NULL, &priv->data_to, sizeof(priv->data_to),
+-			    &desc_to, tb[NFTA_RANGE_TO_DATA]);
++	err = nft_data_init(NULL, &priv->data_to, &desc_to,
++			    tb[NFTA_RANGE_TO_DATA]);
+ 	if (err < 0)
+ 		goto err1;
+ 
+-	if (desc_to.type != NFT_DATA_VALUE) {
+-		err = -EINVAL;
+-		goto err2;
+-	}
+-
+ 	if (desc_from.len != desc_to.len) {
+ 		err = -EINVAL;
+ 		goto err2;
+diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c
+index 3b27926d5382c..2ee50996da8cc 100644
+--- a/net/netfilter/nft_tunnel.c
++++ b/net/netfilter/nft_tunnel.c
+@@ -133,6 +133,7 @@ static const struct nft_expr_ops nft_tunnel_get_ops = {
+ 
+ static struct nft_expr_type nft_tunnel_type __read_mostly = {
+ 	.name		= "tunnel",
++	.family		= NFPROTO_NETDEV,
+ 	.ops		= &nft_tunnel_get_ops,
+ 	.policy		= nft_tunnel_policy,
+ 	.maxattr	= NFTA_TUNNEL_MAX,
+diff --git a/net/rose/rose_loopback.c b/net/rose/rose_loopback.c
+index 11c45c8c6c164..036d92c0ad794 100644
+--- a/net/rose/rose_loopback.c
++++ b/net/rose/rose_loopback.c
+@@ -96,7 +96,8 @@ static void rose_loopback_timer(struct timer_list *unused)
+ 		}
+ 
+ 		if (frametype == ROSE_CALL_REQUEST) {
+-			if (!rose_loopback_neigh->dev) {
++			if (!rose_loopback_neigh->dev &&
++			    !rose_loopback_neigh->loopback) {
+ 				kfree_skb(skb);
+ 				continue;
+ 			}
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index 043508fd8d8a5..150cd7b2154c8 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -285,8 +285,10 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
+ 	_enter("%p,%lx", rx, p->user_call_ID);
+ 
+ 	limiter = rxrpc_get_call_slot(p, gfp);
+-	if (!limiter)
++	if (!limiter) {
++		release_sock(&rx->sk);
+ 		return ERR_PTR(-ERESTARTSYS);
++	}
+ 
+ 	call = rxrpc_alloc_client_call(rx, srx, gfp, debug_id);
+ 	if (IS_ERR(call)) {
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index aa23ba4e25662..eef3c14fd1c18 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -51,10 +51,7 @@ static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx,
+ 			return sock_intr_errno(*timeo);
+ 
+ 		trace_rxrpc_transmit(call, rxrpc_transmit_wait);
+-		mutex_unlock(&call->user_mutex);
+ 		*timeo = schedule_timeout(*timeo);
+-		if (mutex_lock_interruptible(&call->user_mutex) < 0)
+-			return sock_intr_errno(*timeo);
+ 	}
+ }
+ 
+@@ -290,37 +287,48 @@ out:
+ static int rxrpc_send_data(struct rxrpc_sock *rx,
+ 			   struct rxrpc_call *call,
+ 			   struct msghdr *msg, size_t len,
+-			   rxrpc_notify_end_tx_t notify_end_tx)
++			   rxrpc_notify_end_tx_t notify_end_tx,
++			   bool *_dropped_lock)
+ {
+ 	struct rxrpc_skb_priv *sp;
+ 	struct sk_buff *skb;
+ 	struct sock *sk = &rx->sk;
++	enum rxrpc_call_state state;
+ 	long timeo;
+-	bool more;
+-	int ret, copied;
++	bool more = msg->msg_flags & MSG_MORE;
++	int ret, copied = 0;
+ 
+ 	timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
+ 
+ 	/* this should be in poll */
+ 	sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
+ 
++reload:
++	ret = -EPIPE;
+ 	if (sk->sk_shutdown & SEND_SHUTDOWN)
+-		return -EPIPE;
+-
+-	more = msg->msg_flags & MSG_MORE;
+-
++		goto maybe_error;
++	state = READ_ONCE(call->state);
++	ret = -ESHUTDOWN;
++	if (state >= RXRPC_CALL_COMPLETE)
++		goto maybe_error;
++	ret = -EPROTO;
++	if (state != RXRPC_CALL_CLIENT_SEND_REQUEST &&
++	    state != RXRPC_CALL_SERVER_ACK_REQUEST &&
++	    state != RXRPC_CALL_SERVER_SEND_REPLY)
++		goto maybe_error;
++
++	ret = -EMSGSIZE;
+ 	if (call->tx_total_len != -1) {
+-		if (len > call->tx_total_len)
+-			return -EMSGSIZE;
+-		if (!more && len != call->tx_total_len)
+-			return -EMSGSIZE;
++		if (len - copied > call->tx_total_len)
++			goto maybe_error;
++		if (!more && len - copied != call->tx_total_len)
++			goto maybe_error;
+ 	}
+ 
+ 	skb = call->tx_pending;
+ 	call->tx_pending = NULL;
+ 	rxrpc_see_skb(skb, rxrpc_skb_seen);
+ 
+-	copied = 0;
+ 	do {
+ 		/* Check to see if there's a ping ACK to reply to. */
+ 		if (call->ackr_reason == RXRPC_ACK_PING_RESPONSE)
+@@ -331,16 +339,8 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
+ 
+ 			_debug("alloc");
+ 
+-			if (!rxrpc_check_tx_space(call, NULL)) {
+-				ret = -EAGAIN;
+-				if (msg->msg_flags & MSG_DONTWAIT)
+-					goto maybe_error;
+-				ret = rxrpc_wait_for_tx_window(rx, call,
+-							       &timeo,
+-							       msg->msg_flags & MSG_WAITALL);
+-				if (ret < 0)
+-					goto maybe_error;
+-			}
++			if (!rxrpc_check_tx_space(call, NULL))
++				goto wait_for_space;
+ 
+ 			max = RXRPC_JUMBO_DATALEN;
+ 			max -= call->conn->security_size;
+@@ -485,6 +485,27 @@ maybe_error:
+ efault:
+ 	ret = -EFAULT;
+ 	goto out;
++
++wait_for_space:
++	ret = -EAGAIN;
++	if (msg->msg_flags & MSG_DONTWAIT)
++		goto maybe_error;
++	mutex_unlock(&call->user_mutex);
++	*_dropped_lock = true;
++	ret = rxrpc_wait_for_tx_window(rx, call, &timeo,
++				       msg->msg_flags & MSG_WAITALL);
++	if (ret < 0)
++		goto maybe_error;
++	if (call->interruptibility == RXRPC_INTERRUPTIBLE) {
++		if (mutex_lock_interruptible(&call->user_mutex) < 0) {
++			ret = sock_intr_errno(timeo);
++			goto maybe_error;
++		}
++	} else {
++		mutex_lock(&call->user_mutex);
++	}
++	*_dropped_lock = false;
++	goto reload;
+ }
+ 
+ /*
+@@ -646,6 +667,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ 	enum rxrpc_call_state state;
+ 	struct rxrpc_call *call;
+ 	unsigned long now, j;
++	bool dropped_lock = false;
+ 	int ret;
+ 
+ 	struct rxrpc_send_params p = {
+@@ -754,21 +776,13 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ 			ret = rxrpc_send_abort_packet(call);
+ 	} else if (p.command != RXRPC_CMD_SEND_DATA) {
+ 		ret = -EINVAL;
+-	} else if (rxrpc_is_client_call(call) &&
+-		   state != RXRPC_CALL_CLIENT_SEND_REQUEST) {
+-		/* request phase complete for this client call */
+-		ret = -EPROTO;
+-	} else if (rxrpc_is_service_call(call) &&
+-		   state != RXRPC_CALL_SERVER_ACK_REQUEST &&
+-		   state != RXRPC_CALL_SERVER_SEND_REPLY) {
+-		/* Reply phase not begun or not complete for service call. */
+-		ret = -EPROTO;
+ 	} else {
+-		ret = rxrpc_send_data(rx, call, msg, len, NULL);
++		ret = rxrpc_send_data(rx, call, msg, len, NULL, &dropped_lock);
+ 	}
+ 
+ out_put_unlock:
+-	mutex_unlock(&call->user_mutex);
++	if (!dropped_lock)
++		mutex_unlock(&call->user_mutex);
+ error_put:
+ 	rxrpc_put_call(call, rxrpc_call_put);
+ 	_leave(" = %d", ret);
+@@ -796,6 +810,7 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call,
+ 			   struct msghdr *msg, size_t len,
+ 			   rxrpc_notify_end_tx_t notify_end_tx)
+ {
++	bool dropped_lock = false;
+ 	int ret;
+ 
+ 	_enter("{%d,%s},", call->debug_id, rxrpc_call_states[call->state]);
+@@ -813,7 +828,7 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call,
+ 	case RXRPC_CALL_SERVER_ACK_REQUEST:
+ 	case RXRPC_CALL_SERVER_SEND_REPLY:
+ 		ret = rxrpc_send_data(rxrpc_sk(sock->sk), call, msg, len,
+-				      notify_end_tx);
++				      notify_end_tx, &dropped_lock);
+ 		break;
+ 	case RXRPC_CALL_COMPLETE:
+ 		read_lock_bh(&call->state_lock);
+@@ -827,7 +842,8 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call,
+ 		break;
+ 	}
+ 
+-	mutex_unlock(&call->user_mutex);
++	if (!dropped_lock)
++		mutex_unlock(&call->user_mutex);
+ 	_leave(" = %d", ret);
+ 	return ret;
+ }
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 5d5391adb667c..68f1e89430b3b 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -403,7 +403,7 @@ static inline bool qdisc_restart(struct Qdisc *q, int *packets)
+ 
+ void __qdisc_run(struct Qdisc *q)
+ {
+-	int quota = dev_tx_weight;
++	int quota = READ_ONCE(dev_tx_weight);
+ 	int packets;
+ 
+ 	while (qdisc_restart(q, &packets)) {
+diff --git a/net/socket.c b/net/socket.c
+index d52c265ad449b..bcf68b150fe29 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -1670,7 +1670,7 @@ int __sys_listen(int fd, int backlog)
+ 
+ 	sock = sockfd_lookup_light(fd, &err, &fput_needed);
+ 	if (sock) {
+-		somaxconn = sock_net(sock->sk)->core.sysctl_somaxconn;
++		somaxconn = READ_ONCE(sock_net(sock->sk)->core.sysctl_somaxconn);
+ 		if ((unsigned int)backlog > somaxconn)
+ 			backlog = somaxconn;
+ 
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index c5af31312e0cf..78c6648af7827 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1867,7 +1867,7 @@ call_encode(struct rpc_task *task)
+ 			break;
+ 		case -EKEYEXPIRED:
+ 			if (!task->tk_cred_retry) {
+-				rpc_exit(task, task->tk_status);
++				rpc_call_rpcerror(task, task->tk_status);
+ 			} else {
+ 				task->tk_action = call_refresh;
+ 				task->tk_cred_retry--;
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 38256aabf4f1d..8f3c9fbb99165 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -504,7 +504,7 @@ static int tipc_sk_create(struct net *net, struct socket *sock,
+ 	timer_setup(&sk->sk_timer, tipc_sk_timeout, 0);
+ 	sk->sk_shutdown = 0;
+ 	sk->sk_backlog_rcv = tipc_sk_backlog_rcv;
+-	sk->sk_rcvbuf = sysctl_tipc_rmem[1];
++	sk->sk_rcvbuf = READ_ONCE(sysctl_tipc_rmem[1]);
+ 	sk->sk_data_ready = tipc_data_ready;
+ 	sk->sk_write_space = tipc_write_space;
+ 	sk->sk_destruct = tipc_sock_destruct;
+diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c
+index 1f08ebf7d80c5..24ca49ecebea3 100644
+--- a/net/xfrm/espintcp.c
++++ b/net/xfrm/espintcp.c
+@@ -170,7 +170,7 @@ int espintcp_queue_out(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct espintcp_ctx *ctx = espintcp_getctx(sk);
+ 
+-	if (skb_queue_len(&ctx->out_queue) >= netdev_max_backlog)
++	if (skb_queue_len(&ctx->out_queue) >= READ_ONCE(netdev_max_backlog))
+ 		return -ENOBUFS;
+ 
+ 	__skb_queue_tail(&ctx->out_queue, skb);
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index 61e6220ddd5ae..77e82033ad700 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -782,7 +782,7 @@ int xfrm_trans_queue_net(struct net *net, struct sk_buff *skb,
+ 
+ 	trans = this_cpu_ptr(&xfrm_trans_tasklet);
+ 
+-	if (skb_queue_len(&trans->queue) >= netdev_max_backlog)
++	if (skb_queue_len(&trans->queue) >= READ_ONCE(netdev_max_backlog))
+ 		return -ENOBUFS;
+ 
+ 	BUILD_BUG_ON(sizeof(struct xfrm_trans_cb) > sizeof(skb->cb));
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 603b05ed7eb4c..0d12bdf59d4cc 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3164,7 +3164,7 @@ ok:
+ 	return dst;
+ 
+ nopol:
+-	if (!(dst_orig->dev->flags & IFF_LOOPBACK) &&
++	if ((!dst_orig->dev || !(dst_orig->dev->flags & IFF_LOOPBACK)) &&
+ 	    net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK) {
+ 		err = -EPERM;
+ 		goto error;
+@@ -3641,6 +3641,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 		if (pols[1]) {
+ 			if (IS_ERR(pols[1])) {
+ 				XFRM_INC_STATS(net, LINUX_MIB_XFRMINPOLERROR);
++				xfrm_pol_put(pols[0]);
+ 				return 0;
+ 			}
+ 			pols[1]->curlft.use_time = ktime_get_real_seconds();
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index bc0bbb1571cef..fdbd56ed4bd52 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -1557,6 +1557,7 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
+ 	x->replay = orig->replay;
+ 	x->preplay = orig->preplay;
+ 	x->mapping_maxage = orig->mapping_maxage;
++	x->lastused = orig->lastused;
+ 	x->new_mapping = 0;
+ 	x->new_mapping_sport = 0;
+ 
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 5ee3c4d1fbb2b..3e7706c251e9e 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -248,7 +248,7 @@ endif
+ # defined. get-executable-or-default fails with an error if the first argument is supplied but
+ # doesn't exist.
+ override PYTHON_CONFIG := $(call get-executable-or-default,PYTHON_CONFIG,$(PYTHON_AUTO))
+-override PYTHON := $(call get-executable-or-default,PYTHON,$(subst -config,,$(PYTHON_AUTO)))
++override PYTHON := $(call get-executable-or-default,PYTHON,$(subst -config,,$(PYTHON_CONFIG)))
+ 
+ grep-libs  = $(filter -l%,$(1))
+ strip-libs  = $(filter-out -l%,$(1))


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-09-05 12:04 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-09-05 12:04 UTC (permalink / raw
  To: gentoo-commits

commit:     e733b0241620ba620ba58ee2a86aebd811b73cc1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep  5 12:03:56 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep  5 12:03:56 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e733b024

Linux patch 5.10.141

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1140_linux-5.10.141.patch | 1181 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1185 insertions(+)

diff --git a/0000_README b/0000_README
index f5306e89..1da294a6 100644
--- a/0000_README
+++ b/0000_README
@@ -603,6 +603,10 @@ Patch:  1139_linux-5.10.140.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.140
 
+Patch:  1140_linux-5.10.141.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.141
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1140_linux-5.10.141.patch b/1140_linux-5.10.141.patch
new file mode 100644
index 00000000..c5adbeb2
--- /dev/null
+++ b/1140_linux-5.10.141.patch
@@ -0,0 +1,1181 @@
+diff --git a/Makefile b/Makefile
+index a80179d2c0057..d2833d29d65f5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 140
++SUBLEVEL = 141
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/s390/hypfs/hypfs_diag.c b/arch/s390/hypfs/hypfs_diag.c
+index f0bc4dc3e9bf0..6511d15ace45e 100644
+--- a/arch/s390/hypfs/hypfs_diag.c
++++ b/arch/s390/hypfs/hypfs_diag.c
+@@ -437,7 +437,7 @@ __init int hypfs_diag_init(void)
+ 	int rc;
+ 
+ 	if (diag204_probe()) {
+-		pr_err("The hardware system does not support hypfs\n");
++		pr_info("The hardware system does not support hypfs\n");
+ 		return -ENODATA;
+ 	}
+ 
+diff --git a/arch/s390/hypfs/inode.c b/arch/s390/hypfs/inode.c
+index 5c97f48cea91d..ee919bfc81867 100644
+--- a/arch/s390/hypfs/inode.c
++++ b/arch/s390/hypfs/inode.c
+@@ -496,9 +496,9 @@ fail_hypfs_sprp_exit:
+ 	hypfs_vm_exit();
+ fail_hypfs_diag_exit:
+ 	hypfs_diag_exit();
++	pr_err("Initialization of hypfs failed with rc=%i\n", rc);
+ fail_dbfs_exit:
+ 	hypfs_dbfs_exit();
+-	pr_err("Initialization of hypfs failed with rc=%i\n", rc);
+ 	return rc;
+ }
+ device_initcall(hypfs_init)
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index ed517fad0d035..1866374356c84 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -429,7 +429,9 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
+ 	flags = FAULT_FLAG_DEFAULT;
+ 	if (user_mode(regs))
+ 		flags |= FAULT_FLAG_USER;
+-	if (access == VM_WRITE || (trans_exc_code & store_indication) == 0x400)
++	if ((trans_exc_code & store_indication) == 0x400)
++		access = VM_WRITE;
++	if (access == VM_WRITE)
+ 		flags |= FAULT_FLAG_WRITE;
+ 	mmap_read_lock(mm);
+ 
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 0acd99329923c..07f5030073bbc 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -35,33 +35,56 @@
+ #define RSB_CLEAR_LOOPS		32	/* To forcibly overwrite all entries */
+ 
+ /*
++ * Common helper for __FILL_RETURN_BUFFER and __FILL_ONE_RETURN.
++ */
++#define __FILL_RETURN_SLOT			\
++	ANNOTATE_INTRA_FUNCTION_CALL;		\
++	call	772f;				\
++	int3;					\
++772:
++
++/*
++ * Stuff the entire RSB.
++ *
+  * Google experimented with loop-unrolling and this turned out to be
+  * the optimal version — two calls, each with their own speculation
+  * trap should their return address end up getting used, in a loop.
+  */
+-#define __FILL_RETURN_BUFFER(reg, nr, sp)	\
+-	mov	$(nr/2), reg;			\
+-771:						\
+-	ANNOTATE_INTRA_FUNCTION_CALL;		\
+-	call	772f;				\
+-773:	/* speculation trap */			\
+-	UNWIND_HINT_EMPTY;			\
+-	pause;					\
+-	lfence;					\
+-	jmp	773b;				\
+-772:						\
+-	ANNOTATE_INTRA_FUNCTION_CALL;		\
+-	call	774f;				\
+-775:	/* speculation trap */			\
+-	UNWIND_HINT_EMPTY;			\
+-	pause;					\
+-	lfence;					\
+-	jmp	775b;				\
+-774:						\
+-	add	$(BITS_PER_LONG/8) * 2, sp;	\
+-	dec	reg;				\
+-	jnz	771b;				\
+-	/* barrier for jnz misprediction */	\
++#ifdef CONFIG_X86_64
++#define __FILL_RETURN_BUFFER(reg, nr)			\
++	mov	$(nr/2), reg;				\
++771:							\
++	__FILL_RETURN_SLOT				\
++	__FILL_RETURN_SLOT				\
++	add	$(BITS_PER_LONG/8) * 2, %_ASM_SP;	\
++	dec	reg;					\
++	jnz	771b;					\
++	/* barrier for jnz misprediction */		\
++	lfence;
++#else
++/*
++ * i386 doesn't unconditionally have LFENCE, as such it can't
++ * do a loop.
++ */
++#define __FILL_RETURN_BUFFER(reg, nr)			\
++	.rept nr;					\
++	__FILL_RETURN_SLOT;				\
++	.endr;						\
++	add	$(BITS_PER_LONG/8) * nr, %_ASM_SP;
++#endif
++
++/*
++ * Stuff a single RSB slot.
++ *
++ * To mitigate Post-Barrier RSB speculation, one CALL instruction must be
++ * forced to retire before letting a RET instruction execute.
++ *
++ * On PBRSB-vulnerable CPUs, it is not safe for a RET to be executed
++ * before this point.
++ */
++#define __FILL_ONE_RETURN				\
++	__FILL_RETURN_SLOT				\
++	add	$(BITS_PER_LONG/8), %_ASM_SP;		\
+ 	lfence;
+ 
+ #ifdef __ASSEMBLY__
+@@ -120,28 +143,15 @@
+ #endif
+ .endm
+ 
+-.macro ISSUE_UNBALANCED_RET_GUARD
+-	ANNOTATE_INTRA_FUNCTION_CALL
+-	call .Lunbalanced_ret_guard_\@
+-	int3
+-.Lunbalanced_ret_guard_\@:
+-	add $(BITS_PER_LONG/8), %_ASM_SP
+-	lfence
+-.endm
+-
+  /*
+   * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
+   * monstrosity above, manually.
+   */
+-.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req ftr2
+-.ifb \ftr2
+-	ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr
+-.else
+-	ALTERNATIVE_2 "jmp .Lskip_rsb_\@", "", \ftr, "jmp .Lunbalanced_\@", \ftr2
+-.endif
+-	__FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP)
+-.Lunbalanced_\@:
+-	ISSUE_UNBALANCED_RET_GUARD
++.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req ftr2=ALT_NOT(X86_FEATURE_ALWAYS)
++	ALTERNATIVE_2 "jmp .Lskip_rsb_\@", \
++		__stringify(__FILL_RETURN_BUFFER(\reg,\nr)), \ftr, \
++		__stringify(__FILL_ONE_RETURN), \ftr2
++
+ .Lskip_rsb_\@:
+ .endm
+ 
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 366b124057081..a5d5247c4f3e8 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -6069,6 +6069,7 @@ const struct file_operations binder_fops = {
+ 	.open = binder_open,
+ 	.flush = binder_flush,
+ 	.release = binder_release,
++	.may_pollfree = true,
+ };
+ 
+ static int __init init_binder_device(const char *name)
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index 2e3b76519b49d..b624f3d8f0e64 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -327,7 +327,23 @@ static struct miscdevice udmabuf_misc = {
+ 
+ static int __init udmabuf_dev_init(void)
+ {
+-	return misc_register(&udmabuf_misc);
++	int ret;
++
++	ret = misc_register(&udmabuf_misc);
++	if (ret < 0) {
++		pr_err("Could not initialize udmabuf device\n");
++		return ret;
++	}
++
++	ret = dma_coerce_mask_and_coherent(udmabuf_misc.this_device,
++					   DMA_BIT_MASK(64));
++	if (ret < 0) {
++		pr_err("Could not setup DMA mask for udmabuf device\n");
++		misc_deregister(&udmabuf_misc);
++		return ret;
++	}
++
++	return 0;
+ }
+ 
+ static void __exit udmabuf_dev_exit(void)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index d949d6c52f24b..ff5555353eb4f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -283,7 +283,7 @@ enum amdgpu_kiq_irq {
+ 	AMDGPU_CP_KIQ_IRQ_DRIVER0 = 0,
+ 	AMDGPU_CP_KIQ_IRQ_LAST
+ };
+-
++#define SRIOV_USEC_TIMEOUT  1200000 /* wait 12 * 100ms for SRIOV */
+ #define MAX_KIQ_REG_WAIT       5000 /* in usecs, 5ms */
+ #define MAX_KIQ_REG_BAILOUT_INTERVAL   5 /* in msecs, 5ms */
+ #define MAX_KIQ_REG_TRY 80 /* 20 -> 80 */
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+index 150fa5258fb6f..2aa9242c58ab9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+@@ -371,6 +371,7 @@ static int gmc_v10_0_flush_gpu_tlb_pasid(struct amdgpu_device *adev,
+ 	uint32_t seq;
+ 	uint16_t queried_pasid;
+ 	bool ret;
++	u32 usec_timeout = amdgpu_sriov_vf(adev) ? SRIOV_USEC_TIMEOUT : adev->usec_timeout;
+ 	struct amdgpu_ring *ring = &adev->gfx.kiq.ring;
+ 	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+ 
+@@ -389,7 +390,7 @@ static int gmc_v10_0_flush_gpu_tlb_pasid(struct amdgpu_device *adev,
+ 
+ 		amdgpu_ring_commit(ring);
+ 		spin_unlock(&adev->gfx.kiq.ring_lock);
+-		r = amdgpu_fence_wait_polling(ring, seq, adev->usec_timeout);
++		r = amdgpu_fence_wait_polling(ring, seq, usec_timeout);
+ 		if (r < 1) {
+ 			dev_err(adev->dev, "wait for kiq fence error: %ld.\n", r);
+ 			return -ETIME;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 3a864041968f6..1673bf3bae55a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -839,6 +839,7 @@ static int gmc_v9_0_flush_gpu_tlb_pasid(struct amdgpu_device *adev,
+ 	uint32_t seq;
+ 	uint16_t queried_pasid;
+ 	bool ret;
++	u32 usec_timeout = amdgpu_sriov_vf(adev) ? SRIOV_USEC_TIMEOUT : adev->usec_timeout;
+ 	struct amdgpu_ring *ring = &adev->gfx.kiq.ring;
+ 	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+ 
+@@ -878,7 +879,7 @@ static int gmc_v9_0_flush_gpu_tlb_pasid(struct amdgpu_device *adev,
+ 
+ 		amdgpu_ring_commit(ring);
+ 		spin_unlock(&adev->gfx.kiq.ring_lock);
+-		r = amdgpu_fence_wait_polling(ring, seq, adev->usec_timeout);
++		r = amdgpu_fence_wait_polling(ring, seq, usec_timeout);
+ 		if (r < 1) {
+ 			dev_err(adev->dev, "wait for kiq fence error: %ld.\n", r);
+ 			up_read(&adev->reset_sem);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+index bae3a146b2cc2..89cc852cb27c5 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+@@ -546,9 +546,11 @@ static void dce112_get_pix_clk_dividers_helper (
+ 		switch (pix_clk_params->color_depth) {
+ 		case COLOR_DEPTH_101010:
+ 			actual_pixel_clock_100hz = (actual_pixel_clock_100hz * 5) >> 2;
++			actual_pixel_clock_100hz -= actual_pixel_clock_100hz % 10;
+ 			break;
+ 		case COLOR_DEPTH_121212:
+ 			actual_pixel_clock_100hz = (actual_pixel_clock_100hz * 6) >> 2;
++			actual_pixel_clock_100hz -= actual_pixel_clock_100hz % 10;
+ 			break;
+ 		case COLOR_DEPTH_161616:
+ 			actual_pixel_clock_100hz = actual_pixel_clock_100hz * 2;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
+index 3fcd408e91032..855682590c1bb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
+@@ -125,6 +125,12 @@ struct mpcc *mpc1_get_mpcc_for_dpp(struct mpc_tree *tree, int dpp_id)
+ 	while (tmp_mpcc != NULL) {
+ 		if (tmp_mpcc->dpp_id == dpp_id)
+ 			return tmp_mpcc;
++
++		/* avoid circular linked list */
++		ASSERT(tmp_mpcc != tmp_mpcc->mpcc_bot);
++		if (tmp_mpcc == tmp_mpcc->mpcc_bot)
++			break;
++
+ 		tmp_mpcc = tmp_mpcc->mpcc_bot;
+ 	}
+ 	return NULL;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
+index 800be2693faca..963d72f96dca3 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
+@@ -464,6 +464,11 @@ void optc1_enable_optc_clock(struct timing_generator *optc, bool enable)
+ 				OTG_CLOCK_ON, 1,
+ 				1, 1000);
+ 	} else  {
++
++		//last chance to clear underflow, otherwise, it will always there due to clock is off.
++		if (optc->funcs->is_optc_underflow_occurred(optc) == true)
++			optc->funcs->clear_optc_underflow(optc);
++
+ 		REG_UPDATE_2(OTG_CLOCK_CONTROL,
+ 				OTG_CLOCK_GATE_DIS, 0,
+ 				OTG_CLOCK_EN, 0);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c
+index 99cc095dc33c7..a701ea56c0aa0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c
+@@ -533,6 +533,12 @@ struct mpcc *mpc2_get_mpcc_for_dpp(struct mpc_tree *tree, int dpp_id)
+ 	while (tmp_mpcc != NULL) {
+ 		if (tmp_mpcc->dpp_id == 0xf || tmp_mpcc->dpp_id == dpp_id)
+ 			return tmp_mpcc;
++
++		/* avoid circular linked list */
++		ASSERT(tmp_mpcc != tmp_mpcc->mpcc_bot);
++		if (tmp_mpcc == tmp_mpcc->mpcc_bot)
++			break;
++
+ 		tmp_mpcc = tmp_mpcc->mpcc_bot;
+ 	}
+ 	return NULL;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubp.c
+index af462fe4260de..b0fd8859bd2f2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubp.c
+@@ -86,7 +86,7 @@ bool hubp3_program_surface_flip_and_addr(
+ 			VMID, address->vmid);
+ 
+ 	if (address->type == PLN_ADDR_TYPE_GRPH_STEREO) {
+-		REG_UPDATE(DCSURF_FLIP_CONTROL, SURFACE_FLIP_MODE_FOR_STEREOSYNC, 0x1);
++		REG_UPDATE(DCSURF_FLIP_CONTROL, SURFACE_FLIP_MODE_FOR_STEREOSYNC, 0);
+ 		REG_UPDATE(DCSURF_FLIP_CONTROL, SURFACE_FLIP_IN_STEREOSYNC, 0x1);
+ 
+ 	} else {
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 8556c229ff598..49d7fa1d08427 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -2759,6 +2759,7 @@ static const struct pptable_funcs sienna_cichlid_ppt_funcs = {
+ 	.dump_pptable = sienna_cichlid_dump_pptable,
+ 	.init_microcode = smu_v11_0_init_microcode,
+ 	.load_microcode = smu_v11_0_load_microcode,
++	.fini_microcode = smu_v11_0_fini_microcode,
+ 	.init_smc_tables = sienna_cichlid_init_smc_tables,
+ 	.fini_smc_tables = smu_v11_0_fini_smc_tables,
+ 	.init_power = smu_v11_0_init_power,
+diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c
+index a3b151b29bd71..fc616db4231bb 100644
+--- a/drivers/hid/hid-steam.c
++++ b/drivers/hid/hid-steam.c
+@@ -134,6 +134,11 @@ static int steam_recv_report(struct steam_device *steam,
+ 	int ret;
+ 
+ 	r = steam->hdev->report_enum[HID_FEATURE_REPORT].report_id_hash[0];
++	if (!r) {
++		hid_err(steam->hdev, "No HID_FEATURE_REPORT submitted -  nothing to read\n");
++		return -EINVAL;
++	}
++
+ 	if (hid_report_len(r) < 64)
+ 		return -EINVAL;
+ 
+@@ -165,6 +170,11 @@ static int steam_send_report(struct steam_device *steam,
+ 	int ret;
+ 
+ 	r = steam->hdev->report_enum[HID_FEATURE_REPORT].report_id_hash[0];
++	if (!r) {
++		hid_err(steam->hdev, "No HID_FEATURE_REPORT submitted -  nothing to read\n");
++		return -EINVAL;
++	}
++
+ 	if (hid_report_len(r) < 64)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/hid/hidraw.c b/drivers/hid/hidraw.c
+index 2eee5e31c2b7e..fade7fcf6a146 100644
+--- a/drivers/hid/hidraw.c
++++ b/drivers/hid/hidraw.c
+@@ -346,10 +346,13 @@ static int hidraw_release(struct inode * inode, struct file * file)
+ 	unsigned int minor = iminor(inode);
+ 	struct hidraw_list *list = file->private_data;
+ 	unsigned long flags;
++	int i;
+ 
+ 	mutex_lock(&minors_lock);
+ 
+ 	spin_lock_irqsave(&hidraw_table[minor]->list_lock, flags);
++	for (i = list->tail; i < list->head; i++)
++		kfree(list->buffer[i].value);
+ 	list_del(&list->node);
+ 	spin_unlock_irqrestore(&hidraw_table[minor]->list_lock, flags);
+ 	kfree(list);
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+index fccd1798445d5..d22ce328a2797 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+@@ -2610,6 +2610,7 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf,
+ 		del_timer_sync(&hdw->encoder_run_timer);
+ 		del_timer_sync(&hdw->encoder_wait_timer);
+ 		flush_work(&hdw->workpoll);
++		v4l2_device_unregister(&hdw->v4l2_dev);
+ 		usb_free_urb(hdw->ctl_read_urb);
+ 		usb_free_urb(hdw->ctl_write_urb);
+ 		kfree(hdw->ctl_read_buffer);
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index f5c965da95013..d71c113f428f6 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2293,6 +2293,9 @@ static void msdc_cqe_disable(struct mmc_host *mmc, bool recovery)
+ 	/* disable busy check */
+ 	sdr_clr_bits(host->base + MSDC_PATCH_BIT1, MSDC_PB1_BUSY_CHECK_SEL);
+ 
++	val = readl(host->base + MSDC_INT);
++	writel(val, host->base + MSDC_INT);
++
+ 	if (recovery) {
+ 		sdr_set_field(host->base + MSDC_DMA_CTRL,
+ 			      MSDC_DMA_CTRL_STOP, 1);
+@@ -2693,11 +2696,14 @@ static int __maybe_unused msdc_suspend(struct device *dev)
+ {
+ 	struct mmc_host *mmc = dev_get_drvdata(dev);
+ 	int ret;
++	u32 val;
+ 
+ 	if (mmc->caps2 & MMC_CAP2_CQE) {
+ 		ret = cqhci_suspend(mmc);
+ 		if (ret)
+ 			return ret;
++		val = readl(((struct msdc_host *)mmc_priv(mmc))->base + MSDC_INT);
++		writel(val, ((struct msdc_host *)mmc_priv(mmc))->base + MSDC_INT);
+ 	}
+ 
+ 	return pm_runtime_force_suspend(dev);
+diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
+index 5ae81f2df45f7..3779b264dbec3 100644
+--- a/drivers/pci/pcie/portdrv_core.c
++++ b/drivers/pci/pcie/portdrv_core.c
+@@ -222,8 +222,15 @@ static int get_port_device_capability(struct pci_dev *dev)
+ 
+ #ifdef CONFIG_PCIEAER
+ 	if (dev->aer_cap && pci_aer_available() &&
+-	    (pcie_ports_native || host->native_aer))
++	    (pcie_ports_native || host->native_aer)) {
+ 		services |= PCIE_PORT_SERVICE_AER;
++
++		/*
++		 * Disable AER on this port in case it's been enabled by the
++		 * BIOS (the AER service driver will enable it when necessary).
++		 */
++		pci_disable_pcie_error_reporting(dev);
++	}
+ #endif
+ 
+ 	/*
+diff --git a/drivers/video/fbdev/pm2fb.c b/drivers/video/fbdev/pm2fb.c
+index 0642555289e06..c12d46e283598 100644
+--- a/drivers/video/fbdev/pm2fb.c
++++ b/drivers/video/fbdev/pm2fb.c
+@@ -616,6 +616,11 @@ static int pm2fb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ 		return -EINVAL;
+ 	}
+ 
++	if (!var->pixclock) {
++		DPRINTK("pixclock is zero\n");
++		return -EINVAL;
++	}
++
+ 	if (PICOS2KHZ(var->pixclock) > PM2_MAX_PIXCLOCK) {
+ 		DPRINTK("pixclock too high (%ldKHz)\n",
+ 			PICOS2KHZ(var->pixclock));
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index a952288b2ab8e..9654b60a06a58 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -5198,6 +5198,11 @@ static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
+ 	struct io_ring_ctx *ctx = req->ctx;
+ 	bool cancel = false;
+ 
++	if (req->file->f_op->may_pollfree) {
++		spin_lock_irq(&ctx->completion_lock);
++		return -EOPNOTSUPP;
++	}
++
+ 	INIT_HLIST_NODE(&req->hash_node);
+ 	io_init_poll_iocb(poll, mask, wake_func);
+ 	poll->file = req->file;
+diff --git a/fs/signalfd.c b/fs/signalfd.c
+index b94fb5f81797a..41dc597b78cc6 100644
+--- a/fs/signalfd.c
++++ b/fs/signalfd.c
+@@ -248,6 +248,7 @@ static const struct file_operations signalfd_fops = {
+ 	.poll		= signalfd_poll,
+ 	.read		= signalfd_read,
+ 	.llseek		= noop_llseek,
++	.may_pollfree	= true,
+ };
+ 
+ static int do_signalfd4(int ufd, sigset_t *mask, int flags)
+diff --git a/fs/xfs/xfs_filestream.c b/fs/xfs/xfs_filestream.c
+index db23e455eb91d..bc41ec0c483d0 100644
+--- a/fs/xfs/xfs_filestream.c
++++ b/fs/xfs/xfs_filestream.c
+@@ -128,11 +128,12 @@ xfs_filestream_pick_ag(
+ 		if (!pag->pagf_init) {
+ 			err = xfs_alloc_pagf_init(mp, NULL, ag, trylock);
+ 			if (err) {
+-				xfs_perag_put(pag);
+-				if (err != -EAGAIN)
++				if (err != -EAGAIN) {
++					xfs_perag_put(pag);
+ 					return err;
++				}
+ 				/* Couldn't lock the AGF, skip this AG. */
+-				continue;
++				goto next_ag;
+ 			}
+ 		}
+ 
+diff --git a/fs/xfs/xfs_fsops.c b/fs/xfs/xfs_fsops.c
+index ef1d5bb88b93a..775f833146e30 100644
+--- a/fs/xfs/xfs_fsops.c
++++ b/fs/xfs/xfs_fsops.c
+@@ -376,46 +376,36 @@ xfs_reserve_blocks(
+ 	 * If the request is larger than the current reservation, reserve the
+ 	 * blocks before we update the reserve counters. Sample m_fdblocks and
+ 	 * perform a partial reservation if the request exceeds free space.
++	 *
++	 * The code below estimates how many blocks it can request from
++	 * fdblocks to stash in the reserve pool.  This is a classic TOCTOU
++	 * race since fdblocks updates are not always coordinated via
++	 * m_sb_lock.  Set the reserve size even if there's not enough free
++	 * space to fill it because mod_fdblocks will refill an undersized
++	 * reserve when it can.
+ 	 */
+-	error = -ENOSPC;
+-	do {
+-		free = percpu_counter_sum(&mp->m_fdblocks) -
+-						mp->m_alloc_set_aside;
+-		if (free <= 0)
+-			break;
+-
+-		delta = request - mp->m_resblks;
+-		lcounter = free - delta;
+-		if (lcounter < 0)
+-			/* We can't satisfy the request, just get what we can */
+-			fdblks_delta = free;
+-		else
+-			fdblks_delta = delta;
+-
++	free = percpu_counter_sum(&mp->m_fdblocks) -
++						xfs_fdblocks_unavailable(mp);
++	delta = request - mp->m_resblks;
++	mp->m_resblks = request;
++	if (delta > 0 && free > 0) {
+ 		/*
+ 		 * We'll either succeed in getting space from the free block
+-		 * count or we'll get an ENOSPC. If we get a ENOSPC, it means
+-		 * things changed while we were calculating fdblks_delta and so
+-		 * we should try again to see if there is anything left to
+-		 * reserve.
++		 * count or we'll get an ENOSPC.  Don't set the reserved flag
++		 * here - we don't want to reserve the extra reserve blocks
++		 * from the reserve.
+ 		 *
+-		 * Don't set the reserved flag here - we don't want to reserve
+-		 * the extra reserve blocks from the reserve.....
++		 * The desired reserve size can change after we drop the lock.
++		 * Use mod_fdblocks to put the space into the reserve or into
++		 * fdblocks as appropriate.
+ 		 */
++		fdblks_delta = min(free, delta);
+ 		spin_unlock(&mp->m_sb_lock);
+ 		error = xfs_mod_fdblocks(mp, -fdblks_delta, 0);
++		if (!error)
++			xfs_mod_fdblocks(mp, fdblks_delta, 0);
+ 		spin_lock(&mp->m_sb_lock);
+-	} while (error == -ENOSPC);
+-
+-	/*
+-	 * Update the reserve counters if blocks have been successfully
+-	 * allocated.
+-	 */
+-	if (!error && fdblks_delta) {
+-		mp->m_resblks += fdblks_delta;
+-		mp->m_resblks_avail += fdblks_delta;
+ 	}
+-
+ out:
+ 	if (outval) {
+ 		outval->resblks = mp->m_resblks;
+diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
+index dfa429b77ee28..3a6bc9dc11b5c 100644
+--- a/fs/xfs/xfs_mount.h
++++ b/fs/xfs/xfs_mount.h
+@@ -406,6 +406,14 @@ extern int	xfs_initialize_perag(xfs_mount_t *mp, xfs_agnumber_t agcount,
+ 				     xfs_agnumber_t *maxagi);
+ extern void	xfs_unmountfs(xfs_mount_t *);
+ 
++/* Accessor added for 5.10.y backport */
++static inline uint64_t
++xfs_fdblocks_unavailable(
++	struct xfs_mount	*mp)
++{
++	return mp->m_alloc_set_aside;
++}
++
+ extern int	xfs_mod_fdblocks(struct xfs_mount *mp, int64_t delta,
+ 				 bool reserved);
+ extern int	xfs_mod_frextents(struct xfs_mount *mp, int64_t delta);
+diff --git a/fs/xfs/xfs_trans_dquot.c b/fs/xfs/xfs_trans_dquot.c
+index fe45b0c3970c1..288ea38c43ad0 100644
+--- a/fs/xfs/xfs_trans_dquot.c
++++ b/fs/xfs/xfs_trans_dquot.c
+@@ -615,7 +615,6 @@ xfs_dqresv_check(
+ 			return QUOTA_NL_ISOFTLONGWARN;
+ 		}
+ 
+-		res->warnings++;
+ 		return QUOTA_NL_ISOFTWARN;
+ 	}
+ 
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 42d246a942283..c8f887641878f 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1859,6 +1859,7 @@ struct file_operations {
+ 				   struct file *file_out, loff_t pos_out,
+ 				   loff_t len, unsigned int remap_flags);
+ 	int (*fadvise)(struct file *, loff_t, loff_t, int);
++	bool may_pollfree;
+ } __randomize_layout;
+ 
+ struct inode_operations {
+diff --git a/include/linux/rmap.h b/include/linux/rmap.h
+index 8d04e7deedc66..297744ea4dd0c 100644
+--- a/include/linux/rmap.h
++++ b/include/linux/rmap.h
+@@ -39,12 +39,15 @@ struct anon_vma {
+ 	atomic_t refcount;
+ 
+ 	/*
+-	 * Count of child anon_vmas and VMAs which points to this anon_vma.
++	 * Count of child anon_vmas. Equals to the count of all anon_vmas that
++	 * have ->parent pointing to this one, including itself.
+ 	 *
+ 	 * This counter is used for making decision about reusing anon_vma
+ 	 * instead of forking new one. See comments in function anon_vma_clone.
+ 	 */
+-	unsigned degree;
++	unsigned long num_children;
++	/* Count of VMAs whose ->anon_vma pointer points to this object. */
++	unsigned long num_active_vmas;
+ 
+ 	struct anon_vma *parent;	/* Parent of this anon_vma */
+ 
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index acbf1875ad506..61fc053a4a4ef 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -2222,6 +2222,14 @@ static inline void skb_set_tail_pointer(struct sk_buff *skb, const int offset)
+ 
+ #endif /* NET_SKBUFF_DATA_USES_OFFSET */
+ 
++static inline void skb_assert_len(struct sk_buff *skb)
++{
++#ifdef CONFIG_DEBUG_NET
++	if (WARN_ONCE(!skb->len, "%s\n", __func__))
++		DO_ONCE_LITE(skb_dump, KERN_ERR, skb, false);
++#endif /* CONFIG_DEBUG_NET */
++}
++
+ /*
+  *	Add data to an sk_buff
+  */
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 822c048934e3f..1138dd3071dbd 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -281,7 +281,8 @@ static inline void sk_msg_sg_copy_clear(struct sk_msg *msg, u32 start)
+ 
+ static inline struct sk_psock *sk_psock(const struct sock *sk)
+ {
+-	return rcu_dereference_sk_user_data(sk);
++	return __rcu_dereference_sk_user_data_with_flags(sk,
++							 SK_USER_DATA_PSOCK);
+ }
+ 
+ static inline void sk_psock_queue_msg(struct sk_psock *psock,
+diff --git a/include/net/sock.h b/include/net/sock.h
+index d31c2b9107e54..d53fb64374767 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -527,14 +527,26 @@ enum sk_pacing {
+ 	SK_PACING_FQ		= 2,
+ };
+ 
+-/* Pointer stored in sk_user_data might not be suitable for copying
+- * when cloning the socket. For instance, it can point to a reference
+- * counted object. sk_user_data bottom bit is set if pointer must not
+- * be copied.
++/* flag bits in sk_user_data
++ *
++ * - SK_USER_DATA_NOCOPY:      Pointer stored in sk_user_data might
++ *   not be suitable for copying when cloning the socket. For instance,
++ *   it can point to a reference counted object. sk_user_data bottom
++ *   bit is set if pointer must not be copied.
++ *
++ * - SK_USER_DATA_BPF:         Mark whether sk_user_data field is
++ *   managed/owned by a BPF reuseport array. This bit should be set
++ *   when sk_user_data's sk is added to the bpf's reuseport_array.
++ *
++ * - SK_USER_DATA_PSOCK:       Mark whether pointer stored in
++ *   sk_user_data points to psock type. This bit should be set
++ *   when sk_user_data is assigned to a psock object.
+  */
+ #define SK_USER_DATA_NOCOPY	1UL
+-#define SK_USER_DATA_BPF	2UL	/* Managed by BPF */
+-#define SK_USER_DATA_PTRMASK	~(SK_USER_DATA_NOCOPY | SK_USER_DATA_BPF)
++#define SK_USER_DATA_BPF	2UL
++#define SK_USER_DATA_PSOCK	4UL
++#define SK_USER_DATA_PTRMASK	~(SK_USER_DATA_NOCOPY | SK_USER_DATA_BPF |\
++				  SK_USER_DATA_PSOCK)
+ 
+ /**
+  * sk_user_data_is_nocopy - Test if sk_user_data pointer must not be copied
+@@ -547,24 +559,40 @@ static inline bool sk_user_data_is_nocopy(const struct sock *sk)
+ 
+ #define __sk_user_data(sk) ((*((void __rcu **)&(sk)->sk_user_data)))
+ 
++/**
++ * __rcu_dereference_sk_user_data_with_flags - return the pointer
++ * only if argument flags all has been set in sk_user_data. Otherwise
++ * return NULL
++ *
++ * @sk: socket
++ * @flags: flag bits
++ */
++static inline void *
++__rcu_dereference_sk_user_data_with_flags(const struct sock *sk,
++					  uintptr_t flags)
++{
++	uintptr_t sk_user_data = (uintptr_t)rcu_dereference(__sk_user_data(sk));
++
++	WARN_ON_ONCE(flags & SK_USER_DATA_PTRMASK);
++
++	if ((sk_user_data & flags) == flags)
++		return (void *)(sk_user_data & SK_USER_DATA_PTRMASK);
++	return NULL;
++}
++
+ #define rcu_dereference_sk_user_data(sk)				\
++	__rcu_dereference_sk_user_data_with_flags(sk, 0)
++#define __rcu_assign_sk_user_data_with_flags(sk, ptr, flags)		\
+ ({									\
+-	void *__tmp = rcu_dereference(__sk_user_data((sk)));		\
+-	(void *)((uintptr_t)__tmp & SK_USER_DATA_PTRMASK);		\
+-})
+-#define rcu_assign_sk_user_data(sk, ptr)				\
+-({									\
+-	uintptr_t __tmp = (uintptr_t)(ptr);				\
+-	WARN_ON_ONCE(__tmp & ~SK_USER_DATA_PTRMASK);			\
+-	rcu_assign_pointer(__sk_user_data((sk)), __tmp);		\
+-})
+-#define rcu_assign_sk_user_data_nocopy(sk, ptr)				\
+-({									\
+-	uintptr_t __tmp = (uintptr_t)(ptr);				\
+-	WARN_ON_ONCE(__tmp & ~SK_USER_DATA_PTRMASK);			\
++	uintptr_t __tmp1 = (uintptr_t)(ptr),				\
++		  __tmp2 = (uintptr_t)(flags);				\
++	WARN_ON_ONCE(__tmp1 & ~SK_USER_DATA_PTRMASK);			\
++	WARN_ON_ONCE(__tmp2 & SK_USER_DATA_PTRMASK);			\
+ 	rcu_assign_pointer(__sk_user_data((sk)),			\
+-			   __tmp | SK_USER_DATA_NOCOPY);		\
++			   __tmp1 | __tmp2);				\
+ })
++#define rcu_assign_sk_user_data(sk, ptr)				\
++	__rcu_assign_sk_user_data_with_flags(sk, ptr, 0)
+ 
+ /*
+  * SK_CAN_REUSE and SK_NO_REUSE on a socket mean that the socket is OK
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index a397042e46607..a93407da0ae10 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1786,11 +1786,12 @@ static struct kprobe *__disable_kprobe(struct kprobe *p)
+ 		/* Try to disarm and disable this/parent probe */
+ 		if (p == orig_p || aggr_kprobe_disabled(orig_p)) {
+ 			/*
+-			 * If kprobes_all_disarmed is set, orig_p
+-			 * should have already been disarmed, so
+-			 * skip unneed disarming process.
++			 * Don't be lazy here.  Even if 'kprobes_all_disarmed'
++			 * is false, 'orig_p' might not have been armed yet.
++			 * Note arm_all_kprobes() __tries__ to arm all kprobes
++			 * on the best effort basis.
+ 			 */
+-			if (!kprobes_all_disarmed) {
++			if (!kprobes_all_disarmed && !kprobe_disabled(orig_p)) {
+ 				ret = disarm_kprobe(orig_p, true);
+ 				if (ret) {
+ 					p->flags &= ~KPROBE_FLAG_DISABLED;
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index a63713dcd05d5..d868df6f13c86 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -2899,6 +2899,16 @@ int ftrace_startup(struct ftrace_ops *ops, int command)
+ 
+ 	ftrace_startup_enable(command);
+ 
++	/*
++	 * If ftrace is in an undefined state, we just remove ops from list
++	 * to prevent the NULL pointer, instead of totally rolling it back and
++	 * free trampoline, because those actions could cause further damage.
++	 */
++	if (unlikely(ftrace_disabled)) {
++		__unregister_ftrace_function(ops);
++		return -ENODEV;
++	}
++
+ 	ops->flags &= ~FTRACE_OPS_FL_ADDING;
+ 
+ 	return 0;
+diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
+index 2082af43d51fb..0717a0dcefed1 100644
+--- a/lib/crypto/Kconfig
++++ b/lib/crypto/Kconfig
+@@ -33,7 +33,6 @@ config CRYPTO_ARCH_HAVE_LIB_CHACHA
+ 
+ config CRYPTO_LIB_CHACHA_GENERIC
+ 	tristate
+-	select XOR_BLOCKS
+ 	help
+ 	  This symbol can be depended upon by arch implementations of the
+ 	  ChaCha library interface that require the generic code as a
+diff --git a/lib/vdso/gettimeofday.c b/lib/vdso/gettimeofday.c
+index 2919f16981404..c6f6dee087460 100644
+--- a/lib/vdso/gettimeofday.c
++++ b/lib/vdso/gettimeofday.c
+@@ -46,8 +46,8 @@ static inline bool vdso_cycles_ok(u64 cycles)
+ #endif
+ 
+ #ifdef CONFIG_TIME_NS
+-static int do_hres_timens(const struct vdso_data *vdns, clockid_t clk,
+-			  struct __kernel_timespec *ts)
++static __always_inline int do_hres_timens(const struct vdso_data *vdns, clockid_t clk,
++					  struct __kernel_timespec *ts)
+ {
+ 	const struct vdso_data *vd = __arch_get_timens_vdso_data();
+ 	const struct timens_offset *offs = &vdns->offset[clk];
+@@ -97,8 +97,8 @@ static __always_inline const struct vdso_data *__arch_get_timens_vdso_data(void)
+ 	return NULL;
+ }
+ 
+-static int do_hres_timens(const struct vdso_data *vdns, clockid_t clk,
+-			  struct __kernel_timespec *ts)
++static __always_inline int do_hres_timens(const struct vdso_data *vdns, clockid_t clk,
++					  struct __kernel_timespec *ts)
+ {
+ 	return -EINVAL;
+ }
+@@ -159,8 +159,8 @@ static __always_inline int do_hres(const struct vdso_data *vd, clockid_t clk,
+ }
+ 
+ #ifdef CONFIG_TIME_NS
+-static int do_coarse_timens(const struct vdso_data *vdns, clockid_t clk,
+-			    struct __kernel_timespec *ts)
++static __always_inline int do_coarse_timens(const struct vdso_data *vdns, clockid_t clk,
++					    struct __kernel_timespec *ts)
+ {
+ 	const struct vdso_data *vd = __arch_get_timens_vdso_data();
+ 	const struct vdso_timestamp *vdso_ts = &vd->basetime[clk];
+@@ -188,8 +188,8 @@ static int do_coarse_timens(const struct vdso_data *vdns, clockid_t clk,
+ 	return 0;
+ }
+ #else
+-static int do_coarse_timens(const struct vdso_data *vdns, clockid_t clk,
+-			    struct __kernel_timespec *ts)
++static __always_inline int do_coarse_timens(const struct vdso_data *vdns, clockid_t clk,
++					    struct __kernel_timespec *ts)
+ {
+ 	return -1;
+ }
+diff --git a/mm/mmap.c b/mm/mmap.c
+index a1ee93f55cebb..b69c9711bb269 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -2669,6 +2669,18 @@ static void unmap_region(struct mm_struct *mm,
+ 	tlb_gather_mmu(&tlb, mm, start, end);
+ 	update_hiwater_rss(mm);
+ 	unmap_vmas(&tlb, vma, start, end);
++
++	/*
++	 * Ensure we have no stale TLB entries by the time this mapping is
++	 * removed from the rmap.
++	 * Note that we don't have to worry about nested flushes here because
++	 * we're holding the mm semaphore for removing the mapping - so any
++	 * concurrent flush in this region has to be coming through the rmap,
++	 * and we synchronize against that using the rmap lock.
++	 */
++	if ((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) != 0)
++		tlb_flush_mmu(&tlb);
++
+ 	free_pgtables(&tlb, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
+ 				 next ? next->vm_start : USER_PGTABLES_CEILING);
+ 	tlb_finish_mmu(&tlb, start, end);
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 44ad7bf2e5631..e6f840be18906 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -89,7 +89,8 @@ static inline struct anon_vma *anon_vma_alloc(void)
+ 	anon_vma = kmem_cache_alloc(anon_vma_cachep, GFP_KERNEL);
+ 	if (anon_vma) {
+ 		atomic_set(&anon_vma->refcount, 1);
+-		anon_vma->degree = 1;	/* Reference for first vma */
++		anon_vma->num_children = 0;
++		anon_vma->num_active_vmas = 0;
+ 		anon_vma->parent = anon_vma;
+ 		/*
+ 		 * Initialise the anon_vma root to point to itself. If called
+@@ -197,6 +198,7 @@ int __anon_vma_prepare(struct vm_area_struct *vma)
+ 		anon_vma = anon_vma_alloc();
+ 		if (unlikely(!anon_vma))
+ 			goto out_enomem_free_avc;
++		anon_vma->num_children++; /* self-parent link for new root */
+ 		allocated = anon_vma;
+ 	}
+ 
+@@ -206,8 +208,7 @@ int __anon_vma_prepare(struct vm_area_struct *vma)
+ 	if (likely(!vma->anon_vma)) {
+ 		vma->anon_vma = anon_vma;
+ 		anon_vma_chain_link(vma, avc, anon_vma);
+-		/* vma reference or self-parent link for new root */
+-		anon_vma->degree++;
++		anon_vma->num_active_vmas++;
+ 		allocated = NULL;
+ 		avc = NULL;
+ 	}
+@@ -292,19 +293,19 @@ int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
+ 		anon_vma_chain_link(dst, avc, anon_vma);
+ 
+ 		/*
+-		 * Reuse existing anon_vma if its degree lower than two,
+-		 * that means it has no vma and only one anon_vma child.
++		 * Reuse existing anon_vma if it has no vma and only one
++		 * anon_vma child.
+ 		 *
+-		 * Do not chose parent anon_vma, otherwise first child
+-		 * will always reuse it. Root anon_vma is never reused:
++		 * Root anon_vma is never reused:
+ 		 * it has self-parent reference and at least one child.
+ 		 */
+ 		if (!dst->anon_vma && src->anon_vma &&
+-		    anon_vma != src->anon_vma && anon_vma->degree < 2)
++		    anon_vma->num_children < 2 &&
++		    anon_vma->num_active_vmas == 0)
+ 			dst->anon_vma = anon_vma;
+ 	}
+ 	if (dst->anon_vma)
+-		dst->anon_vma->degree++;
++		dst->anon_vma->num_active_vmas++;
+ 	unlock_anon_vma_root(root);
+ 	return 0;
+ 
+@@ -354,6 +355,7 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
+ 	anon_vma = anon_vma_alloc();
+ 	if (!anon_vma)
+ 		goto out_error;
++	anon_vma->num_active_vmas++;
+ 	avc = anon_vma_chain_alloc(GFP_KERNEL);
+ 	if (!avc)
+ 		goto out_error_free_anon_vma;
+@@ -374,7 +376,7 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
+ 	vma->anon_vma = anon_vma;
+ 	anon_vma_lock_write(anon_vma);
+ 	anon_vma_chain_link(vma, avc, anon_vma);
+-	anon_vma->parent->degree++;
++	anon_vma->parent->num_children++;
+ 	anon_vma_unlock_write(anon_vma);
+ 
+ 	return 0;
+@@ -406,7 +408,7 @@ void unlink_anon_vmas(struct vm_area_struct *vma)
+ 		 * to free them outside the lock.
+ 		 */
+ 		if (RB_EMPTY_ROOT(&anon_vma->rb_root.rb_root)) {
+-			anon_vma->parent->degree--;
++			anon_vma->parent->num_children--;
+ 			continue;
+ 		}
+ 
+@@ -414,7 +416,7 @@ void unlink_anon_vmas(struct vm_area_struct *vma)
+ 		anon_vma_chain_free(avc);
+ 	}
+ 	if (vma->anon_vma)
+-		vma->anon_vma->degree--;
++		vma->anon_vma->num_active_vmas--;
+ 	unlock_anon_vma_root(root);
+ 
+ 	/*
+@@ -425,7 +427,8 @@ void unlink_anon_vmas(struct vm_area_struct *vma)
+ 	list_for_each_entry_safe(avc, next, &vma->anon_vma_chain, same_vma) {
+ 		struct anon_vma *anon_vma = avc->anon_vma;
+ 
+-		VM_WARN_ON(anon_vma->degree);
++		VM_WARN_ON(anon_vma->num_children);
++		VM_WARN_ON(anon_vma->num_active_vmas);
+ 		put_anon_vma(anon_vma);
+ 
+ 		list_del(&avc->same_vma);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 88980015ba813..0c38af2ff2097 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1988,11 +1988,11 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
+ 			src_match = !bacmp(&c->src, src);
+ 			dst_match = !bacmp(&c->dst, dst);
+ 			if (src_match && dst_match) {
+-				c = l2cap_chan_hold_unless_zero(c);
+-				if (c) {
+-					read_unlock(&chan_list_lock);
+-					return c;
+-				}
++				if (!l2cap_chan_hold_unless_zero(c))
++					continue;
++
++				read_unlock(&chan_list_lock);
++				return c;
+ 			}
+ 
+ 			/* Closest match */
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index f8b231bbbe381..2983e926fe3cc 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -441,6 +441,9 @@ static int convert___skb_to_skb(struct sk_buff *skb, struct __sk_buff *__skb)
+ {
+ 	struct qdisc_skb_cb *cb = (struct qdisc_skb_cb *)skb->cb;
+ 
++	if (!skb->len)
++		return -EINVAL;
++
+ 	if (!__skb)
+ 		return 0;
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 8355cc5e11a98..34b5aab42b912 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4097,6 +4097,7 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
+ 	bool again = false;
+ 
+ 	skb_reset_mac_header(skb);
++	skb_assert_len(skb);
+ 
+ 	if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_SCHED_TSTAMP))
+ 		__skb_tstamp_tx(skb, NULL, skb->sk, SCM_TSTAMP_SCHED);
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 52a1c8725337b..434c5aab83ea2 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -280,11 +280,26 @@ static int neigh_del_timer(struct neighbour *n)
+ 	return 0;
+ }
+ 
+-static void pneigh_queue_purge(struct sk_buff_head *list)
++static void pneigh_queue_purge(struct sk_buff_head *list, struct net *net)
+ {
++	struct sk_buff_head tmp;
++	unsigned long flags;
+ 	struct sk_buff *skb;
+ 
+-	while ((skb = skb_dequeue(list)) != NULL) {
++	skb_queue_head_init(&tmp);
++	spin_lock_irqsave(&list->lock, flags);
++	skb = skb_peek(list);
++	while (skb != NULL) {
++		struct sk_buff *skb_next = skb_peek_next(skb, list);
++		if (net == NULL || net_eq(dev_net(skb->dev), net)) {
++			__skb_unlink(skb, list);
++			__skb_queue_tail(&tmp, skb);
++		}
++		skb = skb_next;
++	}
++	spin_unlock_irqrestore(&list->lock, flags);
++
++	while ((skb = __skb_dequeue(&tmp))) {
+ 		dev_put(skb->dev);
+ 		kfree_skb(skb);
+ 	}
+@@ -358,9 +373,9 @@ static int __neigh_ifdown(struct neigh_table *tbl, struct net_device *dev,
+ 	write_lock_bh(&tbl->lock);
+ 	neigh_flush_dev(tbl, dev, skip_perm);
+ 	pneigh_ifdown_and_unlock(tbl, dev);
+-
+-	del_timer_sync(&tbl->proxy_timer);
+-	pneigh_queue_purge(&tbl->proxy_queue);
++	pneigh_queue_purge(&tbl->proxy_queue, dev_net(dev));
++	if (skb_queue_empty_lockless(&tbl->proxy_queue))
++		del_timer_sync(&tbl->proxy_timer);
+ 	return 0;
+ }
+ 
+@@ -1743,7 +1758,7 @@ int neigh_table_clear(int index, struct neigh_table *tbl)
+ 	/* It is not clean... Fix it to unload IPv6 module safely */
+ 	cancel_delayed_work_sync(&tbl->gc_work);
+ 	del_timer_sync(&tbl->proxy_timer);
+-	pneigh_queue_purge(&tbl->proxy_queue);
++	pneigh_queue_purge(&tbl->proxy_queue, NULL);
+ 	neigh_ifdown(tbl, NULL);
+ 	if (atomic_read(&tbl->entries))
+ 		pr_crit("neighbour leakage\n");
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 545181a1ae043..bb4fbc60b272e 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -612,7 +612,9 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
+ 	sk_psock_set_state(psock, SK_PSOCK_TX_ENABLED);
+ 	refcount_set(&psock->refcnt, 1);
+ 
+-	rcu_assign_sk_user_data_nocopy(sk, psock);
++	__rcu_assign_sk_user_data_with_flags(sk, psock,
++					     SK_USER_DATA_NOCOPY |
++					     SK_USER_DATA_PSOCK);
+ 	sock_hold(sk);
+ 
+ out:
+diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig
+index 6bafd3876aff3..8bf70ce03f951 100644
+--- a/net/netfilter/Kconfig
++++ b/net/netfilter/Kconfig
+@@ -118,7 +118,6 @@ config NF_CONNTRACK_ZONES
+ 
+ config NF_CONNTRACK_PROCFS
+ 	bool "Supply CT list in procfs (OBSOLETE)"
+-	default y
+ 	depends on PROC_FS
+ 	help
+ 	This option enables for the list of known conntrack entries
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 5ee600d108a0a..b70b06e312bd0 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2986,8 +2986,8 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 	if (err)
+ 		goto out_free;
+ 
+-	if (sock->type == SOCK_RAW &&
+-	    !dev_validate_header(dev, skb->data, len)) {
++	if ((sock->type == SOCK_RAW &&
++	     !dev_validate_header(dev, skb->data, len)) || !skb->len) {
+ 		err = -EINVAL;
+ 		goto out_free;
+ 	}
+diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost
+index 12a87be0fb446..42154b6df6529 100644
+--- a/scripts/Makefile.modpost
++++ b/scripts/Makefile.modpost
+@@ -87,8 +87,7 @@ obj := $(KBUILD_EXTMOD)
+ src := $(obj)
+ 
+ # Include the module's Makefile to find KBUILD_EXTRA_SYMBOLS
+-include $(if $(wildcard $(KBUILD_EXTMOD)/Kbuild), \
+-             $(KBUILD_EXTMOD)/Kbuild, $(KBUILD_EXTMOD)/Makefile)
++include $(if $(wildcard $(src)/Kbuild), $(src)/Kbuild, $(src)/Makefile)
+ 
+ # modpost option for external modules
+ MODPOST += -e


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-09-08 10:46 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-09-08 10:46 UTC (permalink / raw
  To: gentoo-commits

commit:     0fbc7d9ba25f59820591cc22dac9de945298103d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep  8 10:46:05 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep  8 10:46:05 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0fbc7d9b

Linux patch 5.10.142

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1141_linux-5.10.142.patch | 2472 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2476 insertions(+)

diff --git a/0000_README b/0000_README
index 1da294a6..75caafbb 100644
--- a/0000_README
+++ b/0000_README
@@ -607,6 +607,10 @@ Patch:  1140_linux-5.10.141.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.141
 
+Patch:  1141_linux-5.10.142.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.142
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1141_linux-5.10.142.patch b/1141_linux-5.10.142.patch
new file mode 100644
index 00000000..9c622600
--- /dev/null
+++ b/1141_linux-5.10.142.patch
@@ -0,0 +1,2472 @@
+diff --git a/Makefile b/Makefile
+index d2833d29d65f5..655fe095459b3 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 141
++SUBLEVEL = 142
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/powerpc/kernel/systbl.S b/arch/powerpc/kernel/systbl.S
+index d34276f3c495f..b0a3063ab1b1b 100644
+--- a/arch/powerpc/kernel/systbl.S
++++ b/arch/powerpc/kernel/systbl.S
+@@ -18,6 +18,7 @@
+ 	.p2align	3
+ #define __SYSCALL(nr, entry)	.8byte entry
+ #else
++	.p2align	2
+ #define __SYSCALL(nr, entry)	.long entry
+ #endif
+ 
+diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
+index 19fecb362d815..09f6be19ba7b3 100644
+--- a/arch/riscv/mm/pageattr.c
++++ b/arch/riscv/mm/pageattr.c
+@@ -118,10 +118,10 @@ static int __set_memory(unsigned long addr, int numpages, pgprot_t set_mask,
+ 	if (!numpages)
+ 		return 0;
+ 
+-	mmap_read_lock(&init_mm);
++	mmap_write_lock(&init_mm);
+ 	ret =  walk_page_range_novma(&init_mm, start, end, &pageattr_ops, NULL,
+ 				     &masks);
+-	mmap_read_unlock(&init_mm);
++	mmap_write_unlock(&init_mm);
+ 
+ 	flush_tlb_kernel_range(start, end);
+ 
+diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
+index 60f9241e5e4a6..d3642fb634bd9 100644
+--- a/arch/s390/include/asm/hugetlb.h
++++ b/arch/s390/include/asm/hugetlb.h
+@@ -28,9 +28,11 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ static inline int prepare_hugepage_range(struct file *file,
+ 			unsigned long addr, unsigned long len)
+ {
+-	if (len & ~HPAGE_MASK)
++	struct hstate *h = hstate_file(file);
++
++	if (len & ~huge_page_mask(h))
+ 		return -EINVAL;
+-	if (addr & ~HPAGE_MASK)
++	if (addr & ~huge_page_mask(h))
+ 		return -EINVAL;
+ 	return 0;
+ }
+diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
+index 177ccfbda40a9..9505bdb0aa544 100644
+--- a/arch/s390/kernel/vmlinux.lds.S
++++ b/arch/s390/kernel/vmlinux.lds.S
+@@ -122,6 +122,7 @@ SECTIONS
+ 	/*
+ 	 * Table with the patch locations to undo expolines
+ 	*/
++	. = ALIGN(4);
+ 	.nospec_call_table : {
+ 		__nospec_call_start = . ;
+ 		*(.s390_indirect*)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 5f4f855bb3b10..c5a08ec348e6f 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1364,12 +1364,32 @@ static const u32 msr_based_features_all[] = {
+ static u32 msr_based_features[ARRAY_SIZE(msr_based_features_all)];
+ static unsigned int num_msr_based_features;
+ 
++/*
++ * Some IA32_ARCH_CAPABILITIES bits have dependencies on MSRs that KVM
++ * does not yet virtualize. These include:
++ *   10 - MISC_PACKAGE_CTRLS
++ *   11 - ENERGY_FILTERING_CTL
++ *   12 - DOITM
++ *   18 - FB_CLEAR_CTRL
++ *   21 - XAPIC_DISABLE_STATUS
++ *   23 - OVERCLOCKING_STATUS
++ */
++
++#define KVM_SUPPORTED_ARCH_CAP \
++	(ARCH_CAP_RDCL_NO | ARCH_CAP_IBRS_ALL | ARCH_CAP_RSBA | \
++	 ARCH_CAP_SKIP_VMENTRY_L1DFLUSH | ARCH_CAP_SSB_NO | ARCH_CAP_MDS_NO | \
++	 ARCH_CAP_PSCHANGE_MC_NO | ARCH_CAP_TSX_CTRL_MSR | ARCH_CAP_TAA_NO | \
++	 ARCH_CAP_SBDR_SSDP_NO | ARCH_CAP_FBSDP_NO | ARCH_CAP_PSDP_NO | \
++	 ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO)
++
+ static u64 kvm_get_arch_capabilities(void)
+ {
+ 	u64 data = 0;
+ 
+-	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
++	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES)) {
+ 		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, data);
++		data &= KVM_SUPPORTED_ARCH_CAP;
++	}
+ 
+ 	/*
+ 	 * If nx_huge_pages is enabled, KVM's shadow paging will ensure that
+@@ -1417,9 +1437,6 @@ static u64 kvm_get_arch_capabilities(void)
+ 		 */
+ 	}
+ 
+-	/* Guests don't need to know "Fill buffer clear control" exists */
+-	data &= ~ARCH_CAP_FB_CLEAR_CTRL;
+-
+ 	return data;
+ }
+ 
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index a5d5247c4f3e8..cfb1393a0891a 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -1744,6 +1744,18 @@ static int binder_inc_ref_for_node(struct binder_proc *proc,
+ 	}
+ 	ret = binder_inc_ref_olocked(ref, strong, target_list);
+ 	*rdata = ref->data;
++	if (ret && ref == new_ref) {
++		/*
++		 * Cleanup the failed reference here as the target
++		 * could now be dead and have already released its
++		 * references by now. Calling on the new reference
++		 * with strong=0 and a tmp_refs will not decrement
++		 * the node. The new_ref gets kfree'd below.
++		 */
++		binder_cleanup_ref_olocked(new_ref);
++		ref = NULL;
++	}
++
+ 	binder_proc_unlock(proc);
+ 	if (new_ref && ref != new_ref)
+ 		/*
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index b5441741274bb..72ef9e83a84b2 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -837,6 +837,11 @@ static int __device_attach_driver(struct device_driver *drv, void *_data)
+ 	} else if (ret == -EPROBE_DEFER) {
+ 		dev_dbg(dev, "Device match requests probe deferral\n");
+ 		driver_deferred_probe_add(dev);
++		/*
++		 * Device can't match with a driver right now, so don't attempt
++		 * to match or bind with other drivers on the bus.
++		 */
++		return ret;
+ 	} else if (ret < 0) {
+ 		dev_dbg(dev, "Bus failed to match device: %d\n", ret);
+ 		return ret;
+@@ -1076,6 +1081,11 @@ static int __driver_attach(struct device *dev, void *data)
+ 	} else if (ret == -EPROBE_DEFER) {
+ 		dev_dbg(dev, "Device match requests probe deferral\n");
+ 		driver_deferred_probe_add(dev);
++		/*
++		 * Driver could not match with device, but may match with
++		 * another device on the bus.
++		 */
++		return 0;
+ 	} else if (ret < 0) {
+ 		dev_dbg(dev, "Bus failed to match device: %d\n", ret);
+ 		return ret;
+diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
+index 040829e2d0162..5eff34767b775 100644
+--- a/drivers/block/xen-blkback/common.h
++++ b/drivers/block/xen-blkback/common.h
+@@ -226,6 +226,9 @@ struct xen_vbd {
+ 	sector_t		size;
+ 	unsigned int		flush_support:1;
+ 	unsigned int		discard_secure:1;
++	/* Connect-time cached feature_persistent parameter value */
++	unsigned int		feature_gnt_persistent_parm:1;
++	/* Persistent grants feature negotiation result */
+ 	unsigned int		feature_gnt_persistent:1;
+ 	unsigned int		overflow_max_grants:1;
+ };
+diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
+index 44782b15b9fdb..ddea362959318 100644
+--- a/drivers/block/xen-blkback/xenbus.c
++++ b/drivers/block/xen-blkback/xenbus.c
+@@ -911,7 +911,7 @@ again:
+ 	xen_blkbk_barrier(xbt, be, be->blkif->vbd.flush_support);
+ 
+ 	err = xenbus_printf(xbt, dev->nodename, "feature-persistent", "%u",
+-			be->blkif->vbd.feature_gnt_persistent);
++			be->blkif->vbd.feature_gnt_persistent_parm);
+ 	if (err) {
+ 		xenbus_dev_fatal(dev, err, "writing %s/feature-persistent",
+ 				 dev->nodename);
+@@ -1089,7 +1089,9 @@ static int connect_ring(struct backend_info *be)
+ 		return -ENOSYS;
+ 	}
+ 
+-	blkif->vbd.feature_gnt_persistent = feature_persistent &&
++	blkif->vbd.feature_gnt_persistent_parm = feature_persistent;
++	blkif->vbd.feature_gnt_persistent =
++		blkif->vbd.feature_gnt_persistent_parm &&
+ 		xenbus_read_unsigned(dev->otherend, "feature-persistent", 0);
+ 
+ 	blkif->vbd.overflow_max_grants = 0;
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 03e079a6f0721..9d5460f6e0ff1 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -211,6 +211,9 @@ struct blkfront_info
+ 	unsigned int feature_fua:1;
+ 	unsigned int feature_discard:1;
+ 	unsigned int feature_secdiscard:1;
++	/* Connect-time cached feature_persistent parameter */
++	unsigned int feature_persistent_parm:1;
++	/* Persistent grants feature negotiation result */
+ 	unsigned int feature_persistent:1;
+ 	unsigned int bounce:1;
+ 	unsigned int discard_granularity;
+@@ -1941,7 +1944,7 @@ again:
+ 		goto abort_transaction;
+ 	}
+ 	err = xenbus_printf(xbt, dev->nodename, "feature-persistent", "%u",
+-			info->feature_persistent);
++			info->feature_persistent_parm);
+ 	if (err)
+ 		dev_warn(&dev->dev,
+ 			 "writing persistent grants feature to xenbus");
+@@ -2391,7 +2394,8 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
+ 	if (xenbus_read_unsigned(info->xbdev->otherend, "feature-discard", 0))
+ 		blkfront_setup_discard(info);
+ 
+-	if (feature_persistent)
++	info->feature_persistent_parm = feature_persistent;
++	if (info->feature_persistent_parm)
+ 		info->feature_persistent =
+ 			!!xenbus_read_unsigned(info->xbdev->otherend,
+ 					       "feature-persistent", 0);
+diff --git a/drivers/clk/bcm/clk-raspberrypi.c b/drivers/clk/bcm/clk-raspberrypi.c
+index f89b9cfc43099..969227e2df215 100644
+--- a/drivers/clk/bcm/clk-raspberrypi.c
++++ b/drivers/clk/bcm/clk-raspberrypi.c
+@@ -139,7 +139,7 @@ static unsigned long raspberrypi_fw_get_rate(struct clk_hw *hw,
+ 	ret = raspberrypi_clock_property(rpi->firmware, data,
+ 					 RPI_FIRMWARE_GET_CLOCK_RATE, &val);
+ 	if (ret)
+-		return ret;
++		return 0;
+ 
+ 	return val;
+ }
+@@ -156,7 +156,7 @@ static int raspberrypi_fw_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	ret = raspberrypi_clock_property(rpi->firmware, data,
+ 					 RPI_FIRMWARE_SET_CLOCK_RATE, &_rate);
+ 	if (ret)
+-		dev_err_ratelimited(rpi->dev, "Failed to change %s frequency: %d",
++		dev_err_ratelimited(rpi->dev, "Failed to change %s frequency: %d\n",
+ 				    clk_hw_get_name(hw), ret);
+ 
+ 	return ret;
+@@ -208,7 +208,7 @@ static struct clk_hw *raspberrypi_clk_register(struct raspberrypi_clk *rpi,
+ 					 RPI_FIRMWARE_GET_MIN_CLOCK_RATE,
+ 					 &min_rate);
+ 	if (ret) {
+-		dev_err(rpi->dev, "Failed to get clock %d min freq: %d",
++		dev_err(rpi->dev, "Failed to get clock %d min freq: %d\n",
+ 			id, ret);
+ 		return ERR_PTR(ret);
+ 	}
+@@ -251,8 +251,13 @@ static int raspberrypi_discover_clocks(struct raspberrypi_clk *rpi,
+ 	struct rpi_firmware_get_clocks_response *clks;
+ 	int ret;
+ 
++	/*
++	 * The firmware doesn't guarantee that the last element of
++	 * RPI_FIRMWARE_GET_CLOCKS is zeroed. So allocate an additional
++	 * zero element as sentinel.
++	 */
+ 	clks = devm_kcalloc(rpi->dev,
+-			    sizeof(*clks), RPI_FIRMWARE_NUM_CLK_ID,
++			    RPI_FIRMWARE_NUM_CLK_ID + 1, sizeof(*clks),
+ 			    GFP_KERNEL);
+ 	if (!clks)
+ 		return -ENOMEM;
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 2e56cc0a3bce6..b355d3d40f63a 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -846,10 +846,9 @@ static void clk_core_unprepare(struct clk_core *core)
+ 	if (core->ops->unprepare)
+ 		core->ops->unprepare(core->hw);
+ 
+-	clk_pm_runtime_put(core);
+-
+ 	trace_clk_unprepare_complete(core);
+ 	clk_core_unprepare(core->parent);
++	clk_pm_runtime_put(core);
+ }
+ 
+ static void clk_core_unprepare_lock(struct clk_core *core)
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 957be5f69406a..3ad1a9e432c8a 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -1162,7 +1162,9 @@ static int pca953x_suspend(struct device *dev)
+ {
+ 	struct pca953x_chip *chip = dev_get_drvdata(dev);
+ 
++	mutex_lock(&chip->i2c_lock);
+ 	regcache_cache_only(chip->regmap, true);
++	mutex_unlock(&chip->i2c_lock);
+ 
+ 	if (atomic_read(&chip->wakeup_path))
+ 		device_set_wakeup_path(dev);
+@@ -1185,13 +1187,17 @@ static int pca953x_resume(struct device *dev)
+ 		}
+ 	}
+ 
++	mutex_lock(&chip->i2c_lock);
+ 	regcache_cache_only(chip->regmap, false);
+ 	regcache_mark_dirty(chip->regmap);
+ 	ret = pca953x_regcache_sync(dev);
+-	if (ret)
++	if (ret) {
++		mutex_unlock(&chip->i2c_lock);
+ 		return ret;
++	}
+ 
+ 	ret = regcache_sync(chip->regmap);
++	mutex_unlock(&chip->i2c_lock);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to restore register map: %d\n", ret);
+ 		return ret;
+diff --git a/drivers/gpu/drm/i915/display/intel_quirks.c b/drivers/gpu/drm/i915/display/intel_quirks.c
+index 46beb155d835f..8eb1842f14cea 100644
+--- a/drivers/gpu/drm/i915/display/intel_quirks.c
++++ b/drivers/gpu/drm/i915/display/intel_quirks.c
+@@ -156,6 +156,9 @@ static struct intel_quirk intel_quirks[] = {
+ 	/* ASRock ITX*/
+ 	{ 0x3185, 0x1849, 0x2212, quirk_increase_ddi_disabled_time },
+ 	{ 0x3184, 0x1849, 0x2212, quirk_increase_ddi_disabled_time },
++	/* ECS Liva Q2 */
++	{ 0x3185, 0x1019, 0xa94d, quirk_increase_ddi_disabled_time },
++	{ 0x3184, 0x1019, 0xa94d, quirk_increase_ddi_disabled_time },
+ };
+ 
+ void intel_init_quirks(struct drm_i915_private *i915)
+diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
+index 0b1ea29dcffac..606e6c315fe24 100644
+--- a/drivers/gpu/drm/i915/gvt/handlers.c
++++ b/drivers/gpu/drm/i915/gvt/handlers.c
+@@ -660,7 +660,7 @@ static int update_fdi_rx_iir_status(struct intel_vgpu *vgpu,
+ 	else if (FDI_RX_IMR_TO_PIPE(offset) != INVALID_INDEX)
+ 		index = FDI_RX_IMR_TO_PIPE(offset);
+ 	else {
+-		gvt_vgpu_err("Unsupport registers %x\n", offset);
++		gvt_vgpu_err("Unsupported registers %x\n", offset);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index b9ca844ce2ad0..9fac55c24214a 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1205,7 +1205,7 @@ static int dp_ctrl_link_train_2(struct dp_ctrl_private *ctrl,
+ 	if (ret)
+ 		return ret;
+ 
+-	dp_ctrl_train_pattern_set(ctrl, pattern | DP_RECOVERED_CLOCK_OUT_EN);
++	dp_ctrl_train_pattern_set(ctrl, pattern);
+ 
+ 	for (tries = 0; tries <= maximum_retries; tries++) {
+ 		drm_dp_link_train_channel_eq_delay(ctrl->panel->dpcd);
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_cfg.c b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+index d255bea87ca41..73f066ef6f406 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_cfg.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+@@ -117,7 +117,7 @@ static const char * const dsi_8996_bus_clk_names[] = {
+ static const struct msm_dsi_config msm8996_dsi_cfg = {
+ 	.io_offset = DSI_6G_REG_SHIFT,
+ 	.reg_cfg = {
+-		.num = 2,
++		.num = 3,
+ 		.regs = {
+ 			{"vdda", 18160, 1 },	/* 1.25 V */
+ 			{"vcca", 17000, 32 },	/* 0.925 V */
+@@ -156,7 +156,7 @@ static const char * const dsi_sdm660_bus_clk_names[] = {
+ static const struct msm_dsi_config sdm660_dsi_cfg = {
+ 	.io_offset = DSI_6G_REG_SHIFT,
+ 	.reg_cfg = {
+-		.num = 2,
++		.num = 1,
+ 		.regs = {
+ 			{"vdda", 12560, 4 },	/* 1.2 V */
+ 		},
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+index e07986ab52c22..2e0be85ec3947 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+@@ -345,7 +345,7 @@ int msm_dsi_dphy_timing_calc_v3(struct msm_dsi_dphy_timing *timing,
+ 	} else {
+ 		timing->shared_timings.clk_pre =
+ 			linear_inter(tmax, tmin, pcnt2, 0, false);
+-			timing->shared_timings.clk_pre_inc_by_2 = 0;
++		timing->shared_timings.clk_pre_inc_by_2 = 0;
+ 	}
+ 
+ 	timing->ta_go = 3;
+diff --git a/drivers/hwmon/gpio-fan.c b/drivers/hwmon/gpio-fan.c
+index 3ea4021f267cf..d96e435cc42b1 100644
+--- a/drivers/hwmon/gpio-fan.c
++++ b/drivers/hwmon/gpio-fan.c
+@@ -391,6 +391,9 @@ static int gpio_fan_set_cur_state(struct thermal_cooling_device *cdev,
+ 	if (!fan_data)
+ 		return -EINVAL;
+ 
++	if (state >= fan_data->num_speed)
++		return -EINVAL;
++
+ 	set_fan_speed(fan_data, state);
+ 	return 0;
+ }
+diff --git a/drivers/iio/adc/ad7292.c b/drivers/iio/adc/ad7292.c
+index ab204e9199e99..3e6ece05854d8 100644
+--- a/drivers/iio/adc/ad7292.c
++++ b/drivers/iio/adc/ad7292.c
+@@ -289,10 +289,8 @@ static int ad7292_probe(struct spi_device *spi)
+ 
+ 		ret = devm_add_action_or_reset(&spi->dev,
+ 					       ad7292_regulator_disable, st);
+-		if (ret) {
+-			regulator_disable(st->reg);
++		if (ret)
+ 			return ret;
+-		}
+ 
+ 		ret = regulator_get_voltage(st->reg);
+ 		if (ret < 0)
+diff --git a/drivers/iio/adc/mcp3911.c b/drivers/iio/adc/mcp3911.c
+index e573da5397bb3..65278270a75ce 100644
+--- a/drivers/iio/adc/mcp3911.c
++++ b/drivers/iio/adc/mcp3911.c
+@@ -38,8 +38,8 @@
+ #define MCP3911_CHANNEL(x)		(MCP3911_REG_CHANNEL0 + x * 3)
+ #define MCP3911_OFFCAL(x)		(MCP3911_REG_OFFCAL_CH0 + x * 6)
+ 
+-/* Internal voltage reference in uV */
+-#define MCP3911_INT_VREF_UV		1200000
++/* Internal voltage reference in mV */
++#define MCP3911_INT_VREF_MV		1200
+ 
+ #define MCP3911_REG_READ(reg, id)	((((reg) << 1) | ((id) << 5) | (1 << 0)) & 0xff)
+ #define MCP3911_REG_WRITE(reg, id)	((((reg) << 1) | ((id) << 5) | (0 << 0)) & 0xff)
+@@ -111,6 +111,8 @@ static int mcp3911_read_raw(struct iio_dev *indio_dev,
+ 		if (ret)
+ 			goto out;
+ 
++		*val = sign_extend32(*val, 23);
++
+ 		ret = IIO_VAL_INT;
+ 		break;
+ 
+@@ -135,11 +137,18 @@ static int mcp3911_read_raw(struct iio_dev *indio_dev,
+ 
+ 			*val = ret / 1000;
+ 		} else {
+-			*val = MCP3911_INT_VREF_UV;
++			*val = MCP3911_INT_VREF_MV;
+ 		}
+ 
+-		*val2 = 24;
+-		ret = IIO_VAL_FRACTIONAL_LOG2;
++		/*
++		 * For 24bit Conversion
++		 * Raw = ((Voltage)/(Vref) * 2^23 * Gain * 1.5
++		 * Voltage = Raw * (Vref)/(2^23 * Gain * 1.5)
++		 */
++
++		/* val2 = (2^23 * 1.5) */
++		*val2 = 12582912;
++		ret = IIO_VAL_FRACTIONAL;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/input/joystick/iforce/iforce-serio.c b/drivers/input/joystick/iforce/iforce-serio.c
+index f95a81b9fac72..2380546d79782 100644
+--- a/drivers/input/joystick/iforce/iforce-serio.c
++++ b/drivers/input/joystick/iforce/iforce-serio.c
+@@ -39,7 +39,7 @@ static void iforce_serio_xmit(struct iforce *iforce)
+ 
+ again:
+ 	if (iforce->xmit.head == iforce->xmit.tail) {
+-		clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
++		iforce_clear_xmit_and_wake(iforce);
+ 		spin_unlock_irqrestore(&iforce->xmit_lock, flags);
+ 		return;
+ 	}
+@@ -64,7 +64,7 @@ again:
+ 	if (test_and_clear_bit(IFORCE_XMIT_AGAIN, iforce->xmit_flags))
+ 		goto again;
+ 
+-	clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
++	iforce_clear_xmit_and_wake(iforce);
+ 
+ 	spin_unlock_irqrestore(&iforce->xmit_lock, flags);
+ }
+@@ -169,7 +169,7 @@ static irqreturn_t iforce_serio_irq(struct serio *serio,
+ 			iforce_serio->cmd_response_len = iforce_serio->len;
+ 
+ 			/* Signal that command is done */
+-			wake_up(&iforce->wait);
++			wake_up_all(&iforce->wait);
+ 		} else if (likely(iforce->type)) {
+ 			iforce_process_packet(iforce, iforce_serio->id,
+ 					      iforce_serio->data_in,
+diff --git a/drivers/input/joystick/iforce/iforce-usb.c b/drivers/input/joystick/iforce/iforce-usb.c
+index ea58805c480fa..cba92bd590a8d 100644
+--- a/drivers/input/joystick/iforce/iforce-usb.c
++++ b/drivers/input/joystick/iforce/iforce-usb.c
+@@ -30,7 +30,7 @@ static void __iforce_usb_xmit(struct iforce *iforce)
+ 	spin_lock_irqsave(&iforce->xmit_lock, flags);
+ 
+ 	if (iforce->xmit.head == iforce->xmit.tail) {
+-		clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
++		iforce_clear_xmit_and_wake(iforce);
+ 		spin_unlock_irqrestore(&iforce->xmit_lock, flags);
+ 		return;
+ 	}
+@@ -58,9 +58,9 @@ static void __iforce_usb_xmit(struct iforce *iforce)
+ 	XMIT_INC(iforce->xmit.tail, n);
+ 
+ 	if ( (n=usb_submit_urb(iforce_usb->out, GFP_ATOMIC)) ) {
+-		clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
+ 		dev_warn(&iforce_usb->intf->dev,
+ 			 "usb_submit_urb failed %d\n", n);
++		iforce_clear_xmit_and_wake(iforce);
+ 	}
+ 
+ 	/* The IFORCE_XMIT_RUNNING bit is not cleared here. That's intended.
+@@ -175,15 +175,15 @@ static void iforce_usb_out(struct urb *urb)
+ 	struct iforce *iforce = &iforce_usb->iforce;
+ 
+ 	if (urb->status) {
+-		clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
+ 		dev_dbg(&iforce_usb->intf->dev, "urb->status %d, exiting\n",
+ 			urb->status);
++		iforce_clear_xmit_and_wake(iforce);
+ 		return;
+ 	}
+ 
+ 	__iforce_usb_xmit(iforce);
+ 
+-	wake_up(&iforce->wait);
++	wake_up_all(&iforce->wait);
+ }
+ 
+ static int iforce_usb_probe(struct usb_interface *intf,
+diff --git a/drivers/input/joystick/iforce/iforce.h b/drivers/input/joystick/iforce/iforce.h
+index 6aa761ebbdf77..9ccb9107ccbef 100644
+--- a/drivers/input/joystick/iforce/iforce.h
++++ b/drivers/input/joystick/iforce/iforce.h
+@@ -119,6 +119,12 @@ static inline int iforce_get_id_packet(struct iforce *iforce, u8 id,
+ 					 response_data, response_len);
+ }
+ 
++static inline void iforce_clear_xmit_and_wake(struct iforce *iforce)
++{
++	clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
++	wake_up_all(&iforce->wait);
++}
++
+ /* Public functions */
+ /* iforce-main.c */
+ int iforce_init_device(struct device *parent, u16 bustype,
+diff --git a/drivers/input/misc/rk805-pwrkey.c b/drivers/input/misc/rk805-pwrkey.c
+index 3fb64dbda1a21..76873aa005b41 100644
+--- a/drivers/input/misc/rk805-pwrkey.c
++++ b/drivers/input/misc/rk805-pwrkey.c
+@@ -98,6 +98,7 @@ static struct platform_driver rk805_pwrkey_driver = {
+ };
+ module_platform_driver(rk805_pwrkey_driver);
+ 
++MODULE_ALIAS("platform:rk805-pwrkey");
+ MODULE_AUTHOR("Joseph Chen <chenjh@rock-chips.com>");
+ MODULE_DESCRIPTION("RK805 PMIC Power Key driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
+index dbb5a4f44bda5..de4cf6eb5258b 100644
+--- a/drivers/media/rc/mceusb.c
++++ b/drivers/media/rc/mceusb.c
+@@ -1416,42 +1416,37 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
+ {
+ 	int ret;
+ 	struct device *dev = ir->dev;
+-	char *data;
+-
+-	data = kzalloc(USB_CTRL_MSG_SZ, GFP_KERNEL);
+-	if (!data) {
+-		dev_err(dev, "%s: memory allocation failed!", __func__);
+-		return;
+-	}
++	char data[USB_CTRL_MSG_SZ];
+ 
+ 	/*
+ 	 * This is a strange one. Windows issues a set address to the device
+ 	 * on the receive control pipe and expect a certain value pair back
+ 	 */
+-	ret = usb_control_msg(ir->usbdev, usb_rcvctrlpipe(ir->usbdev, 0),
+-			      USB_REQ_SET_ADDRESS, USB_TYPE_VENDOR, 0, 0,
+-			      data, USB_CTRL_MSG_SZ, 3000);
++	ret = usb_control_msg_recv(ir->usbdev, 0, USB_REQ_SET_ADDRESS,
++				   USB_DIR_IN | USB_TYPE_VENDOR,
++				   0, 0, data, USB_CTRL_MSG_SZ, 3000,
++				   GFP_KERNEL);
+ 	dev_dbg(dev, "set address - ret = %d", ret);
+ 	dev_dbg(dev, "set address - data[0] = %d, data[1] = %d",
+ 						data[0], data[1]);
+ 
+ 	/* set feature: bit rate 38400 bps */
+-	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
+-			      USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
+-			      0xc04e, 0x0000, NULL, 0, 3000);
++	ret = usb_control_msg_send(ir->usbdev, 0,
++				   USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
++				   0xc04e, 0x0000, NULL, 0, 3000, GFP_KERNEL);
+ 
+ 	dev_dbg(dev, "set feature - ret = %d", ret);
+ 
+ 	/* bRequest 4: set char length to 8 bits */
+-	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
+-			      4, USB_TYPE_VENDOR,
+-			      0x0808, 0x0000, NULL, 0, 3000);
++	ret = usb_control_msg_send(ir->usbdev, 0,
++				   4, USB_TYPE_VENDOR,
++				   0x0808, 0x0000, NULL, 0, 3000, GFP_KERNEL);
+ 	dev_dbg(dev, "set char length - retB = %d", ret);
+ 
+ 	/* bRequest 2: set handshaking to use DTR/DSR */
+-	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
+-			      2, USB_TYPE_VENDOR,
+-			      0x0000, 0x0100, NULL, 0, 3000);
++	ret = usb_control_msg_send(ir->usbdev, 0,
++				   2, USB_TYPE_VENDOR,
++				   0x0000, 0x0100, NULL, 0, 3000, GFP_KERNEL);
+ 	dev_dbg(dev, "set handshake  - retC = %d", ret);
+ 
+ 	/* device resume */
+@@ -1459,8 +1454,6 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
+ 
+ 	/* get hw/sw revision? */
+ 	mce_command_out(ir, GET_REVISION, sizeof(GET_REVISION));
+-
+-	kfree(data);
+ }
+ 
+ static void mceusb_gen2_init(struct mceusb_dev *ir)
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 65f24b6150aa3..2c3142b4b5dd7 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1548,7 +1548,12 @@ static int fastrpc_cb_probe(struct platform_device *pdev)
+ 	of_property_read_u32(dev->of_node, "qcom,nsessions", &sessions);
+ 
+ 	spin_lock_irqsave(&cctx->lock, flags);
+-	sess = &cctx->session[cctx->sesscount];
++	if (cctx->sesscount >= FASTRPC_MAX_SESSIONS) {
++		dev_err(&pdev->dev, "too many sessions\n");
++		spin_unlock_irqrestore(&cctx->lock, flags);
++		return -ENOSPC;
++	}
++	sess = &cctx->session[cctx->sesscount++];
+ 	sess->used = false;
+ 	sess->valid = true;
+ 	sess->dev = dev;
+@@ -1561,13 +1566,12 @@ static int fastrpc_cb_probe(struct platform_device *pdev)
+ 		struct fastrpc_session_ctx *dup_sess;
+ 
+ 		for (i = 1; i < sessions; i++) {
+-			if (cctx->sesscount++ >= FASTRPC_MAX_SESSIONS)
++			if (cctx->sesscount >= FASTRPC_MAX_SESSIONS)
+ 				break;
+-			dup_sess = &cctx->session[cctx->sesscount];
++			dup_sess = &cctx->session[cctx->sesscount++];
+ 			memcpy(dup_sess, sess, sizeof(*dup_sess));
+ 		}
+ 	}
+-	cctx->sesscount++;
+ 	spin_unlock_irqrestore(&cctx->lock, flags);
+ 	rc = dma_set_mask(dev, DMA_BIT_MASK(32));
+ 	if (rc) {
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index bac343a8d569a..0b09cdaaeb6c1 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -1107,7 +1107,7 @@ retry:
+ 					mmc_remove_card(card);
+ 				goto retry;
+ 			}
+-			goto done;
++			goto cont;
+ 		}
+ 	}
+ 
+@@ -1143,7 +1143,7 @@ retry:
+ 			mmc_set_bus_width(host, MMC_BUS_WIDTH_4);
+ 		}
+ 	}
+-
++cont:
+ 	if (host->cqe_ops && !host->cqe_enabled) {
+ 		err = host->cqe_ops->cqe_enable(host, card);
+ 		if (!err) {
+@@ -1161,7 +1161,7 @@ retry:
+ 		err = -EINVAL;
+ 		goto free_card;
+ 	}
+-done:
++
+ 	host->card = card;
+ 	return 0;
+ 
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index d11fcfd927c0b..85ea073b742fb 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -1920,7 +1920,7 @@ static void gmac_get_stats64(struct net_device *netdev,
+ 
+ 	/* Racing with RX NAPI */
+ 	do {
+-		start = u64_stats_fetch_begin(&port->rx_stats_syncp);
++		start = u64_stats_fetch_begin_irq(&port->rx_stats_syncp);
+ 
+ 		stats->rx_packets = port->stats.rx_packets;
+ 		stats->rx_bytes = port->stats.rx_bytes;
+@@ -1932,11 +1932,11 @@ static void gmac_get_stats64(struct net_device *netdev,
+ 		stats->rx_crc_errors = port->stats.rx_crc_errors;
+ 		stats->rx_frame_errors = port->stats.rx_frame_errors;
+ 
+-	} while (u64_stats_fetch_retry(&port->rx_stats_syncp, start));
++	} while (u64_stats_fetch_retry_irq(&port->rx_stats_syncp, start));
+ 
+ 	/* Racing with MIB and TX completion interrupts */
+ 	do {
+-		start = u64_stats_fetch_begin(&port->ir_stats_syncp);
++		start = u64_stats_fetch_begin_irq(&port->ir_stats_syncp);
+ 
+ 		stats->tx_errors = port->stats.tx_errors;
+ 		stats->tx_packets = port->stats.tx_packets;
+@@ -1946,15 +1946,15 @@ static void gmac_get_stats64(struct net_device *netdev,
+ 		stats->rx_missed_errors = port->stats.rx_missed_errors;
+ 		stats->rx_fifo_errors = port->stats.rx_fifo_errors;
+ 
+-	} while (u64_stats_fetch_retry(&port->ir_stats_syncp, start));
++	} while (u64_stats_fetch_retry_irq(&port->ir_stats_syncp, start));
+ 
+ 	/* Racing with hard_start_xmit */
+ 	do {
+-		start = u64_stats_fetch_begin(&port->tx_stats_syncp);
++		start = u64_stats_fetch_begin_irq(&port->tx_stats_syncp);
+ 
+ 		stats->tx_dropped = port->stats.tx_dropped;
+ 
+-	} while (u64_stats_fetch_retry(&port->tx_stats_syncp, start));
++	} while (u64_stats_fetch_retry_irq(&port->tx_stats_syncp, start));
+ 
+ 	stats->rx_dropped += stats->rx_missed_errors;
+ }
+@@ -2032,18 +2032,18 @@ static void gmac_get_ethtool_stats(struct net_device *netdev,
+ 	/* Racing with MIB interrupt */
+ 	do {
+ 		p = values;
+-		start = u64_stats_fetch_begin(&port->ir_stats_syncp);
++		start = u64_stats_fetch_begin_irq(&port->ir_stats_syncp);
+ 
+ 		for (i = 0; i < RX_STATS_NUM; i++)
+ 			*p++ = port->hw_stats[i];
+ 
+-	} while (u64_stats_fetch_retry(&port->ir_stats_syncp, start));
++	} while (u64_stats_fetch_retry_irq(&port->ir_stats_syncp, start));
+ 	values = p;
+ 
+ 	/* Racing with RX NAPI */
+ 	do {
+ 		p = values;
+-		start = u64_stats_fetch_begin(&port->rx_stats_syncp);
++		start = u64_stats_fetch_begin_irq(&port->rx_stats_syncp);
+ 
+ 		for (i = 0; i < RX_STATUS_NUM; i++)
+ 			*p++ = port->rx_stats[i];
+@@ -2051,13 +2051,13 @@ static void gmac_get_ethtool_stats(struct net_device *netdev,
+ 			*p++ = port->rx_csum_stats[i];
+ 		*p++ = port->rx_napi_exits;
+ 
+-	} while (u64_stats_fetch_retry(&port->rx_stats_syncp, start));
++	} while (u64_stats_fetch_retry_irq(&port->rx_stats_syncp, start));
+ 	values = p;
+ 
+ 	/* Racing with TX start_xmit */
+ 	do {
+ 		p = values;
+-		start = u64_stats_fetch_begin(&port->tx_stats_syncp);
++		start = u64_stats_fetch_begin_irq(&port->tx_stats_syncp);
+ 
+ 		for (i = 0; i < TX_MAX_FRAGS; i++) {
+ 			*values++ = port->tx_frag_stats[i];
+@@ -2066,7 +2066,7 @@ static void gmac_get_ethtool_stats(struct net_device *netdev,
+ 		*values++ = port->tx_frags_linearized;
+ 		*values++ = port->tx_hw_csummed;
+ 
+-	} while (u64_stats_fetch_retry(&port->tx_stats_syncp, start));
++	} while (u64_stats_fetch_retry_irq(&port->tx_stats_syncp, start));
+ }
+ 
+ static int gmac_get_ksettings(struct net_device *netdev,
+diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
+index 7b44769bd87c3..c53a043139446 100644
+--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
++++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
+@@ -172,14 +172,14 @@ gve_get_ethtool_stats(struct net_device *netdev,
+ 				struct gve_rx_ring *rx = &priv->rx[ring];
+ 
+ 				start =
+-				  u64_stats_fetch_begin(&priv->rx[ring].statss);
++				  u64_stats_fetch_begin_irq(&priv->rx[ring].statss);
+ 				tmp_rx_pkts = rx->rpackets;
+ 				tmp_rx_bytes = rx->rbytes;
+ 				tmp_rx_skb_alloc_fail = rx->rx_skb_alloc_fail;
+ 				tmp_rx_buf_alloc_fail = rx->rx_buf_alloc_fail;
+ 				tmp_rx_desc_err_dropped_pkt =
+ 					rx->rx_desc_err_dropped_pkt;
+-			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
++			} while (u64_stats_fetch_retry_irq(&priv->rx[ring].statss,
+ 						       start));
+ 			rx_pkts += tmp_rx_pkts;
+ 			rx_bytes += tmp_rx_bytes;
+@@ -193,10 +193,10 @@ gve_get_ethtool_stats(struct net_device *netdev,
+ 		if (priv->tx) {
+ 			do {
+ 				start =
+-				  u64_stats_fetch_begin(&priv->tx[ring].statss);
++				  u64_stats_fetch_begin_irq(&priv->tx[ring].statss);
+ 				tmp_tx_pkts = priv->tx[ring].pkt_done;
+ 				tmp_tx_bytes = priv->tx[ring].bytes_done;
+-			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
++			} while (u64_stats_fetch_retry_irq(&priv->tx[ring].statss,
+ 						       start));
+ 			tx_pkts += tmp_tx_pkts;
+ 			tx_bytes += tmp_tx_bytes;
+@@ -254,13 +254,13 @@ gve_get_ethtool_stats(struct net_device *netdev,
+ 			data[i++] = rx->cnt;
+ 			do {
+ 				start =
+-				  u64_stats_fetch_begin(&priv->rx[ring].statss);
++				  u64_stats_fetch_begin_irq(&priv->rx[ring].statss);
+ 				tmp_rx_bytes = rx->rbytes;
+ 				tmp_rx_skb_alloc_fail = rx->rx_skb_alloc_fail;
+ 				tmp_rx_buf_alloc_fail = rx->rx_buf_alloc_fail;
+ 				tmp_rx_desc_err_dropped_pkt =
+ 					rx->rx_desc_err_dropped_pkt;
+-			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
++			} while (u64_stats_fetch_retry_irq(&priv->rx[ring].statss,
+ 						       start));
+ 			data[i++] = tmp_rx_bytes;
+ 			/* rx dropped packets */
+@@ -313,9 +313,9 @@ gve_get_ethtool_stats(struct net_device *netdev,
+ 			data[i++] = tx->done;
+ 			do {
+ 				start =
+-				  u64_stats_fetch_begin(&priv->tx[ring].statss);
++				  u64_stats_fetch_begin_irq(&priv->tx[ring].statss);
+ 				tmp_tx_bytes = tx->bytes_done;
+-			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
++			} while (u64_stats_fetch_retry_irq(&priv->tx[ring].statss,
+ 						       start));
+ 			data[i++] = tmp_tx_bytes;
+ 			data[i++] = tx->wake_queue;
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 6cb75bb1ed052..f0c1e6c80b61c 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -40,10 +40,10 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
+ 		for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) {
+ 			do {
+ 				start =
+-				  u64_stats_fetch_begin(&priv->rx[ring].statss);
++				  u64_stats_fetch_begin_irq(&priv->rx[ring].statss);
+ 				packets = priv->rx[ring].rpackets;
+ 				bytes = priv->rx[ring].rbytes;
+-			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
++			} while (u64_stats_fetch_retry_irq(&priv->rx[ring].statss,
+ 						       start));
+ 			s->rx_packets += packets;
+ 			s->rx_bytes += bytes;
+@@ -53,10 +53,10 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
+ 		for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) {
+ 			do {
+ 				start =
+-				  u64_stats_fetch_begin(&priv->tx[ring].statss);
++				  u64_stats_fetch_begin_irq(&priv->tx[ring].statss);
+ 				packets = priv->tx[ring].pkt_done;
+ 				bytes = priv->tx[ring].bytes_done;
+-			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
++			} while (u64_stats_fetch_retry_irq(&priv->tx[ring].statss,
+ 						       start));
+ 			s->tx_packets += packets;
+ 			s->tx_bytes += bytes;
+@@ -1041,9 +1041,9 @@ void gve_handle_report_stats(struct gve_priv *priv)
+ 	if (priv->tx) {
+ 		for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
+ 			do {
+-				start = u64_stats_fetch_begin(&priv->tx[idx].statss);
++				start = u64_stats_fetch_begin_irq(&priv->tx[idx].statss);
+ 				tx_bytes = priv->tx[idx].bytes_done;
+-			} while (u64_stats_fetch_retry(&priv->tx[idx].statss, start));
++			} while (u64_stats_fetch_retry_irq(&priv->tx[idx].statss, start));
+ 			stats[stats_idx++] = (struct stats) {
+ 				.stat_name = cpu_to_be32(TX_WAKE_CNT),
+ 				.value = cpu_to_be64(priv->tx[idx].wake_queue),
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_rx.c b/drivers/net/ethernet/huawei/hinic/hinic_rx.c
+index 04b19af63fd61..30ab05289e8df 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_rx.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_rx.c
+@@ -74,14 +74,14 @@ void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats)
+ 	unsigned int start;
+ 
+ 	do {
+-		start = u64_stats_fetch_begin(&rxq_stats->syncp);
++		start = u64_stats_fetch_begin_irq(&rxq_stats->syncp);
+ 		stats->pkts = rxq_stats->pkts;
+ 		stats->bytes = rxq_stats->bytes;
+ 		stats->errors = rxq_stats->csum_errors +
+ 				rxq_stats->other_errors;
+ 		stats->csum_errors = rxq_stats->csum_errors;
+ 		stats->other_errors = rxq_stats->other_errors;
+-	} while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
++	} while (u64_stats_fetch_retry_irq(&rxq_stats->syncp, start));
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+index d13514a8160e8..c12d814ac94a6 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+@@ -98,14 +98,14 @@ void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats)
+ 	unsigned int start;
+ 
+ 	do {
+-		start = u64_stats_fetch_begin(&txq_stats->syncp);
++		start = u64_stats_fetch_begin_irq(&txq_stats->syncp);
+ 		stats->pkts    = txq_stats->pkts;
+ 		stats->bytes   = txq_stats->bytes;
+ 		stats->tx_busy = txq_stats->tx_busy;
+ 		stats->tx_wake = txq_stats->tx_wake;
+ 		stats->tx_dropped = txq_stats->tx_dropped;
+ 		stats->big_frags_pkts = txq_stats->big_frags_pkts;
+-	} while (u64_stats_fetch_retry(&txq_stats->syncp, start));
++	} while (u64_stats_fetch_retry_irq(&txq_stats->syncp, start));
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index dfc1f32cda2b3..5ab230aab2cd8 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -3373,21 +3373,21 @@ static void nfp_net_stat64(struct net_device *netdev,
+ 		unsigned int start;
+ 
+ 		do {
+-			start = u64_stats_fetch_begin(&r_vec->rx_sync);
++			start = u64_stats_fetch_begin_irq(&r_vec->rx_sync);
+ 			data[0] = r_vec->rx_pkts;
+ 			data[1] = r_vec->rx_bytes;
+ 			data[2] = r_vec->rx_drops;
+-		} while (u64_stats_fetch_retry(&r_vec->rx_sync, start));
++		} while (u64_stats_fetch_retry_irq(&r_vec->rx_sync, start));
+ 		stats->rx_packets += data[0];
+ 		stats->rx_bytes += data[1];
+ 		stats->rx_dropped += data[2];
+ 
+ 		do {
+-			start = u64_stats_fetch_begin(&r_vec->tx_sync);
++			start = u64_stats_fetch_begin_irq(&r_vec->tx_sync);
+ 			data[0] = r_vec->tx_pkts;
+ 			data[1] = r_vec->tx_bytes;
+ 			data[2] = r_vec->tx_errors;
+-		} while (u64_stats_fetch_retry(&r_vec->tx_sync, start));
++		} while (u64_stats_fetch_retry_irq(&r_vec->tx_sync, start));
+ 		stats->tx_packets += data[0];
+ 		stats->tx_bytes += data[1];
+ 		stats->tx_errors += data[2];
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index bf8590ef0964b..3977aa2f59bd1 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -494,7 +494,7 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
+ 		unsigned int start;
+ 
+ 		do {
+-			start = u64_stats_fetch_begin(&nn->r_vecs[i].rx_sync);
++			start = u64_stats_fetch_begin_irq(&nn->r_vecs[i].rx_sync);
+ 			data[0] = nn->r_vecs[i].rx_pkts;
+ 			tmp[0] = nn->r_vecs[i].hw_csum_rx_ok;
+ 			tmp[1] = nn->r_vecs[i].hw_csum_rx_inner_ok;
+@@ -502,10 +502,10 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
+ 			tmp[3] = nn->r_vecs[i].hw_csum_rx_error;
+ 			tmp[4] = nn->r_vecs[i].rx_replace_buf_alloc_fail;
+ 			tmp[5] = nn->r_vecs[i].hw_tls_rx;
+-		} while (u64_stats_fetch_retry(&nn->r_vecs[i].rx_sync, start));
++		} while (u64_stats_fetch_retry_irq(&nn->r_vecs[i].rx_sync, start));
+ 
+ 		do {
+-			start = u64_stats_fetch_begin(&nn->r_vecs[i].tx_sync);
++			start = u64_stats_fetch_begin_irq(&nn->r_vecs[i].tx_sync);
+ 			data[1] = nn->r_vecs[i].tx_pkts;
+ 			data[2] = nn->r_vecs[i].tx_busy;
+ 			tmp[6] = nn->r_vecs[i].hw_csum_tx;
+@@ -515,7 +515,7 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
+ 			tmp[10] = nn->r_vecs[i].hw_tls_tx;
+ 			tmp[11] = nn->r_vecs[i].tls_tx_fallback;
+ 			tmp[12] = nn->r_vecs[i].tls_tx_no_fallback;
+-		} while (u64_stats_fetch_retry(&nn->r_vecs[i].tx_sync, start));
++		} while (u64_stats_fetch_retry_irq(&nn->r_vecs[i].tx_sync, start));
+ 
+ 		data += NN_RVEC_PER_Q_STATS;
+ 
+diff --git a/drivers/net/ethernet/rocker/rocker_ofdpa.c b/drivers/net/ethernet/rocker/rocker_ofdpa.c
+index 8157666209798..e4d919de7e3fc 100644
+--- a/drivers/net/ethernet/rocker/rocker_ofdpa.c
++++ b/drivers/net/ethernet/rocker/rocker_ofdpa.c
+@@ -1273,7 +1273,7 @@ static int ofdpa_port_ipv4_neigh(struct ofdpa_port *ofdpa_port,
+ 	bool removing;
+ 	int err = 0;
+ 
+-	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
++	entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
+ 	if (!entry)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/net/ieee802154/adf7242.c b/drivers/net/ieee802154/adf7242.c
+index 7db9cbd0f5ded..07adbeec19787 100644
+--- a/drivers/net/ieee802154/adf7242.c
++++ b/drivers/net/ieee802154/adf7242.c
+@@ -1310,10 +1310,11 @@ static int adf7242_remove(struct spi_device *spi)
+ 
+ 	debugfs_remove_recursive(lp->debugfs_root);
+ 
++	ieee802154_unregister_hw(lp->hw);
++
+ 	cancel_delayed_work_sync(&lp->work);
+ 	destroy_workqueue(lp->wqueue);
+ 
+-	ieee802154_unregister_hw(lp->hw);
+ 	mutex_destroy(&lp->bmux);
+ 	ieee802154_free_hw(lp->hw);
+ 
+diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
+index ad6dbf0110526..4fb0638a55b44 100644
+--- a/drivers/net/netdevsim/netdev.c
++++ b/drivers/net/netdevsim/netdev.c
+@@ -67,10 +67,10 @@ nsim_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats)
+ 	unsigned int start;
+ 
+ 	do {
+-		start = u64_stats_fetch_begin(&ns->syncp);
++		start = u64_stats_fetch_begin_irq(&ns->syncp);
+ 		stats->tx_bytes = ns->tx_bytes;
+ 		stats->tx_packets = ns->tx_packets;
+-	} while (u64_stats_fetch_retry(&ns->syncp, start));
++	} while (u64_stats_fetch_retry_irq(&ns->syncp, start));
+ }
+ 
+ static int
+diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
+index a9d2a4b98e570..4b0739f95f8b9 100644
+--- a/drivers/platform/x86/pmc_atom.c
++++ b/drivers/platform/x86/pmc_atom.c
+@@ -244,7 +244,7 @@ static void pmc_power_off(void)
+ 	pm1_cnt_port = acpi_base_addr + PM1_CNT;
+ 
+ 	pm1_cnt_value = inl(pm1_cnt_port);
+-	pm1_cnt_value &= SLEEP_TYPE_MASK;
++	pm1_cnt_value &= ~SLEEP_TYPE_MASK;
+ 	pm1_cnt_value |= SLEEP_TYPE_S5;
+ 	pm1_cnt_value |= SLEEP_ENABLE;
+ 
+diff --git a/drivers/staging/rtl8712/rtl8712_cmd.c b/drivers/staging/rtl8712/rtl8712_cmd.c
+index ff3cb09c57a63..30e965c410ffd 100644
+--- a/drivers/staging/rtl8712/rtl8712_cmd.c
++++ b/drivers/staging/rtl8712/rtl8712_cmd.c
+@@ -117,34 +117,6 @@ static void r871x_internal_cmd_hdl(struct _adapter *padapter, u8 *pbuf)
+ 	kfree(pdrvcmd->pbuf);
+ }
+ 
+-static u8 read_macreg_hdl(struct _adapter *padapter, u8 *pbuf)
+-{
+-	void (*pcmd_callback)(struct _adapter *dev, struct cmd_obj	*pcmd);
+-	struct cmd_obj *pcmd  = (struct cmd_obj *)pbuf;
+-
+-	/*  invoke cmd->callback function */
+-	pcmd_callback = cmd_callback[pcmd->cmdcode].callback;
+-	if (!pcmd_callback)
+-		r8712_free_cmd_obj(pcmd);
+-	else
+-		pcmd_callback(padapter, pcmd);
+-	return H2C_SUCCESS;
+-}
+-
+-static u8 write_macreg_hdl(struct _adapter *padapter, u8 *pbuf)
+-{
+-	void (*pcmd_callback)(struct _adapter *dev, struct cmd_obj	*pcmd);
+-	struct cmd_obj *pcmd  = (struct cmd_obj *)pbuf;
+-
+-	/*  invoke cmd->callback function */
+-	pcmd_callback = cmd_callback[pcmd->cmdcode].callback;
+-	if (!pcmd_callback)
+-		r8712_free_cmd_obj(pcmd);
+-	else
+-		pcmd_callback(padapter, pcmd);
+-	return H2C_SUCCESS;
+-}
+-
+ static u8 read_bbreg_hdl(struct _adapter *padapter, u8 *pbuf)
+ {
+ 	struct cmd_obj *pcmd  = (struct cmd_obj *)pbuf;
+@@ -213,14 +185,6 @@ static struct cmd_obj *cmd_hdl_filter(struct _adapter *padapter,
+ 	pcmd_r = NULL;
+ 
+ 	switch (pcmd->cmdcode) {
+-	case GEN_CMD_CODE(_Read_MACREG):
+-		read_macreg_hdl(padapter, (u8 *)pcmd);
+-		pcmd_r = pcmd;
+-		break;
+-	case GEN_CMD_CODE(_Write_MACREG):
+-		write_macreg_hdl(padapter, (u8 *)pcmd);
+-		pcmd_r = pcmd;
+-		break;
+ 	case GEN_CMD_CODE(_Read_BBREG):
+ 		read_bbreg_hdl(padapter, (u8 *)pcmd);
+ 		break;
+diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
+index 9894b8f630648..772acb190f507 100644
+--- a/drivers/thunderbolt/ctl.c
++++ b/drivers/thunderbolt/ctl.c
+@@ -396,7 +396,7 @@ static void tb_ctl_rx_submit(struct ctl_pkg *pkg)
+ 
+ static int tb_async_error(const struct ctl_pkg *pkg)
+ {
+-	const struct cfg_error_pkg *error = (const struct cfg_error_pkg *)pkg;
++	const struct cfg_error_pkg *error = pkg->buffer;
+ 
+ 	if (pkg->frame.eof != TB_CFG_PKG_ERROR)
+ 		return false;
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 52a603a6f9b88..a2c4eab0b4703 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1376,9 +1376,9 @@ static int lpuart32_config_rs485(struct uart_port *port,
+ 		 * Note: UART is assumed to be active high.
+ 		 */
+ 		if (rs485->flags & SER_RS485_RTS_ON_SEND)
+-			modem &= ~UARTMODEM_TXRTSPOL;
+-		else if (rs485->flags & SER_RS485_RTS_AFTER_SEND)
+ 			modem |= UARTMODEM_TXRTSPOL;
++		else if (rs485->flags & SER_RS485_RTS_AFTER_SEND)
++			modem &= ~UARTMODEM_TXRTSPOL;
+ 	}
+ 
+ 	/* Store the new configuration */
+@@ -2138,6 +2138,7 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	uart_update_timeout(port, termios->c_cflag, baud);
+ 
+ 	/* wait transmit engin complete */
++	lpuart32_write(&sport->port, 0, UARTMODIR);
+ 	lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
+ 
+ 	/* disable transmit and receive */
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index a4d005fa2569d..0252c0562dbc8 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -4671,9 +4671,11 @@ static int con_font_set(struct vc_data *vc, struct console_font_op *op)
+ 	console_lock();
+ 	if (vc->vc_mode != KD_TEXT)
+ 		rc = -EINVAL;
+-	else if (vc->vc_sw->con_font_set)
++	else if (vc->vc_sw->con_font_set) {
++		if (vc_is_sel(vc))
++			clear_selection();
+ 		rc = vc->vc_sw->con_font_set(vc, &font, op->flags);
+-	else
++	} else
+ 		rc = -ENOSYS;
+ 	console_unlock();
+ 	kfree(font.data);
+@@ -4700,9 +4702,11 @@ static int con_font_default(struct vc_data *vc, struct console_font_op *op)
+ 		console_unlock();
+ 		return -EINVAL;
+ 	}
+-	if (vc->vc_sw->con_font_default)
++	if (vc->vc_sw->con_font_default) {
++		if (vc_is_sel(vc))
++			clear_selection();
+ 		rc = vc->vc_sw->con_font_default(vc, &font, s);
+-	else
++	} else
+ 		rc = -ENOSYS;
+ 	console_unlock();
+ 	if (!rc) {
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 7950d5b3af429..070b838c7da98 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1830,6 +1830,9 @@ static const struct usb_device_id acm_ids[] = {
+ 	{ USB_DEVICE(0x09d8, 0x0320), /* Elatec GmbH TWN3 */
+ 	.driver_info = NO_UNION_NORMAL, /* has misplaced union descriptor */
+ 	},
++	{ USB_DEVICE(0x0c26, 0x0020), /* Icom ICF3400 Serie */
++	.driver_info = NO_UNION_NORMAL, /* reports zero length descriptor */
++	},
+ 	{ USB_DEVICE(0x0ca6, 0xa050), /* Castles VEGA3000 */
+ 	.driver_info = NO_UNION_NORMAL, /* reports zero length descriptor */
+ 	},
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 18ee3914b4686..53b3d77fba6a2 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -5967,6 +5967,11 @@ re_enumerate_no_bos:
+  * the reset is over (using their post_reset method).
+  *
+  * Return: The same as for usb_reset_and_verify_device().
++ * However, if a reset is already in progress (for instance, if a
++ * driver doesn't have pre_ or post_reset() callbacks, and while
++ * being unbound or re-bound during the ongoing reset its disconnect()
++ * or probe() routine tries to perform a second, nested reset), the
++ * routine returns -EINPROGRESS.
+  *
+  * Note:
+  * The caller must own the device lock.  For example, it's safe to use
+@@ -6000,6 +6005,10 @@ int usb_reset_device(struct usb_device *udev)
+ 		return -EISDIR;
+ 	}
+ 
++	if (udev->reset_in_progress)
++		return -EINPROGRESS;
++	udev->reset_in_progress = 1;
++
+ 	port_dev = hub->ports[udev->portnum - 1];
+ 
+ 	/*
+@@ -6064,6 +6073,7 @@ int usb_reset_device(struct usb_device *udev)
+ 
+ 	usb_autosuspend_device(udev);
+ 	memalloc_noio_restore(noio_flag);
++	udev->reset_in_progress = 0;
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(usb_reset_device);
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index 49d333f02af4e..8851db646ef53 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -154,9 +154,9 @@ static int __dwc2_lowlevel_hw_enable(struct dwc2_hsotg *hsotg)
+ 	} else if (hsotg->plat && hsotg->plat->phy_init) {
+ 		ret = hsotg->plat->phy_init(pdev, hsotg->plat->phy_type);
+ 	} else {
+-		ret = phy_power_on(hsotg->phy);
++		ret = phy_init(hsotg->phy);
+ 		if (ret == 0)
+-			ret = phy_init(hsotg->phy);
++			ret = phy_power_on(hsotg->phy);
+ 	}
+ 
+ 	return ret;
+@@ -188,9 +188,9 @@ static int __dwc2_lowlevel_hw_disable(struct dwc2_hsotg *hsotg)
+ 	} else if (hsotg->plat && hsotg->plat->phy_exit) {
+ 		ret = hsotg->plat->phy_exit(pdev, hsotg->plat->phy_type);
+ 	} else {
+-		ret = phy_exit(hsotg->phy);
++		ret = phy_power_off(hsotg->phy);
+ 		if (ret == 0)
+-			ret = phy_power_off(hsotg->phy);
++			ret = phy_exit(hsotg->phy);
+ 	}
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 572cf34459aa7..5aae7504f78a1 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -728,15 +728,16 @@ static void dwc3_core_exit(struct dwc3 *dwc)
+ {
+ 	dwc3_event_buffers_cleanup(dwc);
+ 
++	usb_phy_set_suspend(dwc->usb2_phy, 1);
++	usb_phy_set_suspend(dwc->usb3_phy, 1);
++	phy_power_off(dwc->usb2_generic_phy);
++	phy_power_off(dwc->usb3_generic_phy);
++
+ 	usb_phy_shutdown(dwc->usb2_phy);
+ 	usb_phy_shutdown(dwc->usb3_phy);
+ 	phy_exit(dwc->usb2_generic_phy);
+ 	phy_exit(dwc->usb3_generic_phy);
+ 
+-	usb_phy_set_suspend(dwc->usb2_phy, 1);
+-	usb_phy_set_suspend(dwc->usb3_phy, 1);
+-	phy_power_off(dwc->usb2_generic_phy);
+-	phy_power_off(dwc->usb3_generic_phy);
+ 	clk_bulk_disable_unprepare(dwc->num_clks, dwc->clks);
+ 	reset_control_assert(dwc->reset);
+ }
+@@ -1606,16 +1607,16 @@ err5:
+ 	dwc3_debugfs_exit(dwc);
+ 	dwc3_event_buffers_cleanup(dwc);
+ 
+-	usb_phy_shutdown(dwc->usb2_phy);
+-	usb_phy_shutdown(dwc->usb3_phy);
+-	phy_exit(dwc->usb2_generic_phy);
+-	phy_exit(dwc->usb3_generic_phy);
+-
+ 	usb_phy_set_suspend(dwc->usb2_phy, 1);
+ 	usb_phy_set_suspend(dwc->usb3_phy, 1);
+ 	phy_power_off(dwc->usb2_generic_phy);
+ 	phy_power_off(dwc->usb3_generic_phy);
+ 
++	usb_phy_shutdown(dwc->usb2_phy);
++	usb_phy_shutdown(dwc->usb3_phy);
++	phy_exit(dwc->usb2_generic_phy);
++	phy_exit(dwc->usb3_generic_phy);
++
+ 	dwc3_ulpi_exit(dwc);
+ 
+ err4:
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 915fa4197d770..ca3a35fd8f746 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -296,6 +296,14 @@ static void dwc3_qcom_interconnect_exit(struct dwc3_qcom *qcom)
+ 	icc_put(qcom->icc_path_apps);
+ }
+ 
++/* Only usable in contexts where the role can not change. */
++static bool dwc3_qcom_is_host(struct dwc3_qcom *qcom)
++{
++	struct dwc3 *dwc = platform_get_drvdata(qcom->dwc3);
++
++	return dwc->xhci;
++}
++
+ static void dwc3_qcom_disable_interrupts(struct dwc3_qcom *qcom)
+ {
+ 	if (qcom->hs_phy_irq) {
+@@ -411,7 +419,11 @@ static irqreturn_t qcom_dwc3_resume_irq(int irq, void *data)
+ 	if (qcom->pm_suspended)
+ 		return IRQ_HANDLED;
+ 
+-	if (dwc->xhci)
++	/*
++	 * This is safe as role switching is done from a freezable workqueue
++	 * and the wakeup interrupts are disabled as part of resume.
++	 */
++	if (dwc3_qcom_is_host(qcom))
+ 		pm_runtime_resume(&dwc->xhci->dev);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c
+index e195176580de1..86bc2bec9038d 100644
+--- a/drivers/usb/dwc3/host.c
++++ b/drivers/usb/dwc3/host.c
+@@ -10,8 +10,13 @@
+ #include <linux/acpi.h>
+ #include <linux/platform_device.h>
+ 
++#include "../host/xhci-plat.h"
+ #include "core.h"
+ 
++static const struct xhci_plat_priv dwc3_xhci_plat_priv = {
++	.quirks = XHCI_SKIP_PHY_INIT,
++};
++
+ static int dwc3_host_get_irq(struct dwc3 *dwc)
+ {
+ 	struct platform_device	*dwc3_pdev = to_platform_device(dwc->dev);
+@@ -87,6 +92,11 @@ int dwc3_host_init(struct dwc3 *dwc)
+ 		goto err;
+ 	}
+ 
++	ret = platform_device_add_data(xhci, &dwc3_xhci_plat_priv,
++					sizeof(dwc3_xhci_plat_priv));
++	if (ret)
++		goto err;
++
+ 	memset(props, 0, sizeof(struct property_entry) * ARRAY_SIZE(props));
+ 
+ 	if (dwc->usb3_lpm_capable)
+@@ -130,4 +140,5 @@ err:
+ void dwc3_host_exit(struct dwc3 *dwc)
+ {
+ 	platform_device_unregister(dwc->xhci);
++	dwc->xhci = NULL;
+ }
+diff --git a/drivers/usb/gadget/function/storage_common.c b/drivers/usb/gadget/function/storage_common.c
+index f7e6c42558eb7..021984921f919 100644
+--- a/drivers/usb/gadget/function/storage_common.c
++++ b/drivers/usb/gadget/function/storage_common.c
+@@ -294,8 +294,10 @@ EXPORT_SYMBOL_GPL(fsg_lun_fsync_sub);
+ void store_cdrom_address(u8 *dest, int msf, u32 addr)
+ {
+ 	if (msf) {
+-		/* Convert to Minutes-Seconds-Frames */
+-		addr >>= 2;		/* Convert to 2048-byte frames */
++		/*
++		 * Convert to Minutes-Seconds-Frames.
++		 * Sector size is already set to 2048 bytes.
++		 */
+ 		addr += 2*75;		/* Lead-in occupies 2 seconds */
+ 		dest[3] = addr % 75;	/* Frames */
+ 		addr /= 75;
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 94adae8b19f00..7bb3067418076 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -566,7 +566,7 @@ struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd)
+  * It will release and re-aquire the lock while calling ACPI
+  * method.
+  */
+-void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd,
++static void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd,
+ 				u16 index, bool on, unsigned long *flags)
+ 	__must_hold(&xhci->lock)
+ {
+@@ -1561,6 +1561,17 @@ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf)
+ 
+ 	status = bus_state->resuming_ports;
+ 
++	/*
++	 * SS devices are only visible to roothub after link training completes.
++	 * Keep polling roothubs for a grace period after xHC start
++	 */
++	if (xhci->run_graceperiod) {
++		if (time_before(jiffies, xhci->run_graceperiod))
++			status = 1;
++		else
++			xhci->run_graceperiod = 0;
++	}
++
+ 	mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC | PORT_CEC;
+ 
+ 	/* For each port, did anything change?  If so, set that bit in buf. */
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 997de5f294f15..7b16b6b45af7d 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -149,9 +149,11 @@ int xhci_start(struct xhci_hcd *xhci)
+ 		xhci_err(xhci, "Host took too long to start, "
+ 				"waited %u microseconds.\n",
+ 				XHCI_MAX_HALT_USEC);
+-	if (!ret)
++	if (!ret) {
+ 		/* clear state flags. Including dying, halted or removing */
+ 		xhci->xhc_state = 0;
++		xhci->run_graceperiod = jiffies + msecs_to_jiffies(500);
++	}
+ 
+ 	return ret;
+ }
+@@ -775,8 +777,6 @@ static void xhci_stop(struct usb_hcd *hcd)
+ void xhci_shutdown(struct usb_hcd *hcd)
+ {
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+-	unsigned long flags;
+-	int i;
+ 
+ 	if (xhci->quirks & XHCI_SPURIOUS_REBOOT)
+ 		usb_disable_xhci_ports(to_pci_dev(hcd->self.sysdev));
+@@ -792,21 +792,12 @@ void xhci_shutdown(struct usb_hcd *hcd)
+ 		del_timer_sync(&xhci->shared_hcd->rh_timer);
+ 	}
+ 
+-	spin_lock_irqsave(&xhci->lock, flags);
++	spin_lock_irq(&xhci->lock);
+ 	xhci_halt(xhci);
+-
+-	/* Power off USB2 ports*/
+-	for (i = 0; i < xhci->usb2_rhub.num_ports; i++)
+-		xhci_set_port_power(xhci, xhci->main_hcd, i, false, &flags);
+-
+-	/* Power off USB3 ports*/
+-	for (i = 0; i < xhci->usb3_rhub.num_ports; i++)
+-		xhci_set_port_power(xhci, xhci->shared_hcd, i, false, &flags);
+-
+ 	/* Workaround for spurious wakeups at shutdown with HSW */
+ 	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+ 		xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+-	spin_unlock_irqrestore(&xhci->lock, flags);
++	spin_unlock_irq(&xhci->lock);
+ 
+ 	xhci_cleanup_msix(xhci);
+ 
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index f87e5fe57f225..6f16a05b19584 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1816,7 +1816,7 @@ struct xhci_hcd {
+ 
+ 	/* Host controller watchdog timer structures */
+ 	unsigned int		xhc_state;
+-
++	unsigned long		run_graceperiod;
+ 	u32			command;
+ 	struct s3_save		s3;
+ /* Host controller is dying - not responding to commands. "I'm not dead yet!"
+@@ -2162,8 +2162,6 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, u16 wIndex,
+ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf);
+ int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1);
+ struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd);
+-void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd, u16 index,
+-			 bool on, unsigned long *flags);
+ 
+ void xhci_hc_died(struct xhci_hcd *xhci);
+ 
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index a2a38fc76ca53..97a250e75ab7f 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -97,7 +97,10 @@ struct ch341_private {
+ 	u8 mcr;
+ 	u8 msr;
+ 	u8 lcr;
++
+ 	unsigned long quirks;
++	u8 version;
++
+ 	unsigned long break_end;
+ };
+ 
+@@ -256,8 +259,12 @@ static int ch341_set_baudrate_lcr(struct usb_device *dev,
+ 	/*
+ 	 * CH341A buffers data until a full endpoint-size packet (32 bytes)
+ 	 * has been received unless bit 7 is set.
++	 *
++	 * At least one device with version 0x27 appears to have this bit
++	 * inverted.
+ 	 */
+-	val |= BIT(7);
++	if (priv->version > 0x27)
++		val |= BIT(7);
+ 
+ 	r = ch341_control_out(dev, CH341_REQ_WRITE_REG,
+ 			      CH341_REG_DIVISOR << 8 | CH341_REG_PRESCALER,
+@@ -271,6 +278,9 @@ static int ch341_set_baudrate_lcr(struct usb_device *dev,
+ 	 * (stop bits, parity and word length). Version 0x30 and above use
+ 	 * CH341_REG_LCR only and CH341_REG_LCR2 is always set to zero.
+ 	 */
++	if (priv->version < 0x30)
++		return 0;
++
+ 	r = ch341_control_out(dev, CH341_REQ_WRITE_REG,
+ 			      CH341_REG_LCR2 << 8 | CH341_REG_LCR, lcr);
+ 	if (r)
+@@ -323,7 +333,9 @@ static int ch341_configure(struct usb_device *dev, struct ch341_private *priv)
+ 	r = ch341_control_in(dev, CH341_REQ_READ_VERSION, 0, 0, buffer, size);
+ 	if (r < 0)
+ 		goto out;
+-	dev_dbg(&dev->dev, "Chip version: 0x%02x\n", buffer[0]);
++
++	priv->version = buffer[0];
++	dev_dbg(&dev->dev, "Chip version: 0x%02x\n", priv->version);
+ 
+ 	r = ch341_control_out(dev, CH341_REQ_SERIAL_INIT, 0, 0);
+ 	if (r < 0)
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 067b206bd2527..6b5ba6180c307 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -134,6 +134,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x83AA) }, /* Mark-10 Digital Force Gauge */
+ 	{ USB_DEVICE(0x10C4, 0x83D8) }, /* DekTec DTA Plus VHF/UHF Booster/Attenuator */
+ 	{ USB_DEVICE(0x10C4, 0x8411) }, /* Kyocera GPS Module */
++	{ USB_DEVICE(0x10C4, 0x8414) }, /* Decagon USB Cable Adapter */
+ 	{ USB_DEVICE(0x10C4, 0x8418) }, /* IRZ Automation Teleport SG-10 GSM/GPRS Modem */
+ 	{ USB_DEVICE(0x10C4, 0x846E) }, /* BEI USB Sensor Interface (VCP) */
+ 	{ USB_DEVICE(0x10C4, 0x8470) }, /* Juniper Networks BX Series System Console */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 8f980fc6efc19..5480bacba39fc 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1045,6 +1045,8 @@ static const struct usb_device_id id_table_combined[] = {
+ 	/* IDS GmbH devices */
+ 	{ USB_DEVICE(IDS_VID, IDS_SI31A_PID) },
+ 	{ USB_DEVICE(IDS_VID, IDS_CM31A_PID) },
++	/* Omron devices */
++	{ USB_DEVICE(OMRON_VID, OMRON_CS1W_CIF31_PID) },
+ 	/* U-Blox devices */
+ 	{ USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ZED_PID) },
+ 	{ USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ODIN_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 4e92c165c86bf..31c8ccabbbb78 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -661,6 +661,12 @@
+ #define INFINEON_TRIBOARD_TC1798_PID	0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */
+ #define INFINEON_TRIBOARD_TC2X7_PID	0x0043 /* DAS JTAG TriBoard TC2X7 V1.0 */
+ 
++/*
++ * Omron corporation (https://www.omron.com)
++ */
++ #define OMRON_VID			0x0590
++ #define OMRON_CS1W_CIF31_PID		0x00b2
++
+ /*
+  * Acton Research Corp.
+  */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 44e06b95584e5..211e03a204072 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -253,6 +253,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_BG96			0x0296
+ #define QUECTEL_PRODUCT_EP06			0x0306
+ #define QUECTEL_PRODUCT_EM05G			0x030a
++#define QUECTEL_PRODUCT_EM060K			0x030b
+ #define QUECTEL_PRODUCT_EM12			0x0512
+ #define QUECTEL_PRODUCT_RM500Q			0x0800
+ #define QUECTEL_PRODUCT_EC200S_CN		0x6002
+@@ -438,6 +439,8 @@ static void option_instat_callback(struct urb *urb);
+ #define CINTERION_PRODUCT_MV31_2_RMNET		0x00b9
+ #define CINTERION_PRODUCT_MV32_WA		0x00f1
+ #define CINTERION_PRODUCT_MV32_WB		0x00f2
++#define CINTERION_PRODUCT_MV32_WA_RMNET		0x00f3
++#define CINTERION_PRODUCT_MV32_WB_RMNET		0x00f4
+ 
+ /* Olivetti products */
+ #define OLIVETTI_VENDOR_ID			0x0b3c
+@@ -573,6 +576,10 @@ static void option_instat_callback(struct urb *urb);
+ #define WETELECOM_PRODUCT_6802			0x6802
+ #define WETELECOM_PRODUCT_WMD300		0x6803
+ 
++/* OPPO products */
++#define OPPO_VENDOR_ID				0x22d9
++#define OPPO_PRODUCT_R11			0x276c
++
+ 
+ /* Device flags */
+ 
+@@ -1138,6 +1145,9 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G, 0xff),
+ 	  .driver_info = RSVD(6) | ZLP },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
+@@ -1993,8 +2003,12 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(0)},
+ 	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WA, 0xff),
+ 	  .driver_info = RSVD(3)},
++	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WA_RMNET, 0xff),
++	  .driver_info = RSVD(0) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WB, 0xff),
+ 	  .driver_info = RSVD(3)},
++	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WB_RMNET, 0xff),
++	  .driver_info = RSVD(0) },
+ 	{ USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100),
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD120),
+@@ -2155,6 +2169,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) },			/* GosunCn GM500 RNDIS */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) },			/* GosunCn GM500 MBIM */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) },			/* GosunCn GM500 ECM/NCM */
++	{ USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
+ 	{ } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 1a05e3dcfec8a..4993227ab2930 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2294,6 +2294,13 @@ UNUSUAL_DEV( 0x1e74, 0x4621, 0x0000, 0x0000,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_BULK_IGNORE_TAG | US_FL_MAX_SECTORS_64 ),
+ 
++/* Reported by Witold Lipieta <witold.lipieta@thaumatec.com> */
++UNUSUAL_DEV( 0x1fc9, 0x0117, 0x0100, 0x0100,
++		"NXP Semiconductors",
++		"PN7462AU",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_IGNORE_RESIDUE ),
++
+ /* Supplied with some Castlewood ORB removable drives */
+ UNUSUAL_DEV(  0x2027, 0xa001, 0x0000, 0x9999,
+ 		"Double-H Technology",
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index e62e5e3da01e4..5e293ccf0e904 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -88,8 +88,8 @@ static int dp_altmode_configure(struct dp_altmode *dp, u8 con)
+ 	case DP_STATUS_CON_UFP_D:
+ 	case DP_STATUS_CON_BOTH: /* NOTE: First acting as DP source */
+ 		conf |= DP_CONF_UFP_U_AS_UFP_D;
+-		pin_assign = DP_CAP_DFP_D_PIN_ASSIGN(dp->alt->vdo) &
+-			     DP_CAP_UFP_D_PIN_ASSIGN(dp->port->vdo);
++		pin_assign = DP_CAP_PIN_ASSIGN_UFP_D(dp->alt->vdo) &
++				 DP_CAP_PIN_ASSIGN_DFP_D(dp->port->vdo);
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
+index 5c83d41766c85..0a2d24d6ac6f7 100644
+--- a/drivers/xen/grant-table.c
++++ b/drivers/xen/grant-table.c
+@@ -981,6 +981,9 @@ int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
+ 	size_t size;
+ 	int i, ret;
+ 
++	if (args->nr_pages < 0 || args->nr_pages > (INT_MAX >> PAGE_SHIFT))
++		return -ENOMEM;
++
+ 	size = args->nr_pages << PAGE_SHIFT;
+ 	if (args->coherent)
+ 		args->vaddr = dma_alloc_coherent(args->dev, size,
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 2fdf178aa76f6..d4d89e0738ff4 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -540,15 +540,47 @@ error:
+ 	return ret;
+ }
+ 
+-static bool device_path_matched(const char *path, struct btrfs_device *device)
++/*
++ * Check if the device in the path matches the device in the given struct device.
++ *
++ * Returns:
++ *   true  If it is the same device.
++ *   false If it is not the same device or on error.
++ */
++static bool device_matched(const struct btrfs_device *device, const char *path)
+ {
+-	int found;
++	char *device_name;
++	struct block_device *bdev_old;
++	struct block_device *bdev_new;
++
++	/*
++	 * If we are looking for a device with the matching dev_t, then skip
++	 * device without a name (a missing device).
++	 */
++	if (!device->name)
++		return false;
++
++	device_name = kzalloc(BTRFS_PATH_NAME_MAX, GFP_KERNEL);
++	if (!device_name)
++		return false;
+ 
+ 	rcu_read_lock();
+-	found = strcmp(rcu_str_deref(device->name), path);
++	scnprintf(device_name, BTRFS_PATH_NAME_MAX, "%s", rcu_str_deref(device->name));
+ 	rcu_read_unlock();
+ 
+-	return found == 0;
++	bdev_old = lookup_bdev(device_name);
++	kfree(device_name);
++	if (IS_ERR(bdev_old))
++		return false;
++
++	bdev_new = lookup_bdev(path);
++	if (IS_ERR(bdev_new))
++		return false;
++
++	if (bdev_old == bdev_new)
++		return true;
++
++	return false;
+ }
+ 
+ /*
+@@ -581,9 +613,7 @@ static int btrfs_free_stale_devices(const char *path,
+ 					 &fs_devices->devices, dev_list) {
+ 			if (skip_device && skip_device == device)
+ 				continue;
+-			if (path && !device->name)
+-				continue;
+-			if (path && !device_path_matched(path, device))
++			if (path && !device_matched(device, path))
+ 				continue;
+ 			if (fs_devices->opened) {
+ 				/* for an already deleted device return 0 */
+diff --git a/include/linux/platform_data/x86/pmc_atom.h b/include/linux/platform_data/x86/pmc_atom.h
+index 022bcea9edec5..99a9b09dc839d 100644
+--- a/include/linux/platform_data/x86/pmc_atom.h
++++ b/include/linux/platform_data/x86/pmc_atom.h
+@@ -7,6 +7,8 @@
+ #ifndef PMC_ATOM_H
+ #define PMC_ATOM_H
+ 
++#include <linux/bits.h>
++
+ /* ValleyView Power Control Unit PCI Device ID */
+ #define	PCI_DEVICE_ID_VLV_PMC	0x0F1C
+ /* CherryTrail Power Control Unit PCI Device ID */
+@@ -139,9 +141,9 @@
+ #define	ACPI_MMIO_REG_LEN	0x100
+ 
+ #define	PM1_CNT			0x4
+-#define	SLEEP_TYPE_MASK		0xFFFFECFF
++#define	SLEEP_TYPE_MASK		GENMASK(12, 10)
+ #define	SLEEP_TYPE_S5		0x1C00
+-#define	SLEEP_ENABLE		0x2000
++#define	SLEEP_ENABLE		BIT(13)
+ 
+ extern int pmc_atom_read(int offset, u32 *value);
+ extern int pmc_atom_write(int offset, u32 value);
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index d6a41841b93e4..a093667991bb9 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -580,6 +580,7 @@ struct usb3_lpm_parameters {
+  * @devaddr: device address, XHCI: assigned by HW, others: same as devnum
+  * @can_submit: URBs may be submitted
+  * @persist_enabled:  USB_PERSIST enabled for this device
++ * @reset_in_progress: the device is being reset
+  * @have_langid: whether string_langid is valid
+  * @authorized: policy has said we can use it;
+  *	(user space) policy determines if we authorize this device to be
+@@ -665,6 +666,7 @@ struct usb_device {
+ 
+ 	unsigned can_submit:1;
+ 	unsigned persist_enabled:1;
++	unsigned reset_in_progress:1;
+ 	unsigned have_langid:1;
+ 	unsigned authorized:1;
+ 	unsigned authenticated:1;
+diff --git a/include/linux/usb/typec_dp.h b/include/linux/usb/typec_dp.h
+index fc4c7edb2e8a4..296909ea04f26 100644
+--- a/include/linux/usb/typec_dp.h
++++ b/include/linux/usb/typec_dp.h
+@@ -73,6 +73,11 @@ enum {
+ #define DP_CAP_USB			BIT(7)
+ #define DP_CAP_DFP_D_PIN_ASSIGN(_cap_)	(((_cap_) & GENMASK(15, 8)) >> 8)
+ #define DP_CAP_UFP_D_PIN_ASSIGN(_cap_)	(((_cap_) & GENMASK(23, 16)) >> 16)
++/* Get pin assignment taking plug & receptacle into consideration */
++#define DP_CAP_PIN_ASSIGN_UFP_D(_cap_) ((_cap_ & DP_CAP_RECEPTACLE) ? \
++			DP_CAP_UFP_D_PIN_ASSIGN(_cap_) : DP_CAP_DFP_D_PIN_ASSIGN(_cap_))
++#define DP_CAP_PIN_ASSIGN_DFP_D(_cap_) ((_cap_ & DP_CAP_RECEPTACLE) ? \
++			DP_CAP_DFP_D_PIN_ASSIGN(_cap_) : DP_CAP_UFP_D_PIN_ASSIGN(_cap_))
+ 
+ /* DisplayPort Status Update VDO bits */
+ #define DP_STATUS_CONNECTION(_status_)	((_status_) & 3)
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index d154e52dd7ae0..6d92e393e1bc6 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -695,8 +695,10 @@ static void purge_effective_progs(struct cgroup *cgrp, struct bpf_prog *prog,
+ 				pos++;
+ 			}
+ 		}
++
++		/* no link or prog match, skip the cgroup of this layer */
++		continue;
+ found:
+-		BUG_ON(!cg);
+ 		progs = rcu_dereference_protected(
+ 				desc->bpf.effective[type],
+ 				lockdep_is_held(&cgroup_mutex));
+diff --git a/mm/pagewalk.c b/mm/pagewalk.c
+index e81640d9f1770..371ec21a19899 100644
+--- a/mm/pagewalk.c
++++ b/mm/pagewalk.c
+@@ -71,7 +71,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
+ 	do {
+ again:
+ 		next = pmd_addr_end(addr, end);
+-		if (pmd_none(*pmd) || (!walk->vma && !walk->no_vma)) {
++		if (pmd_none(*pmd)) {
+ 			if (ops->pte_hole)
+ 				err = ops->pte_hole(addr, next, depth, walk);
+ 			if (err)
+@@ -129,7 +129,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
+ 	do {
+  again:
+ 		next = pud_addr_end(addr, end);
+-		if (pud_none(*pud) || (!walk->vma && !walk->no_vma)) {
++		if (pud_none(*pud)) {
+ 			if (ops->pte_hole)
+ 				err = ops->pte_hole(addr, next, depth, walk);
+ 			if (err)
+@@ -318,19 +318,19 @@ static int __walk_page_range(unsigned long start, unsigned long end,
+ 	struct vm_area_struct *vma = walk->vma;
+ 	const struct mm_walk_ops *ops = walk->ops;
+ 
+-	if (vma && ops->pre_vma) {
++	if (ops->pre_vma) {
+ 		err = ops->pre_vma(start, end, walk);
+ 		if (err)
+ 			return err;
+ 	}
+ 
+-	if (vma && is_vm_hugetlb_page(vma)) {
++	if (is_vm_hugetlb_page(vma)) {
+ 		if (ops->hugetlb_entry)
+ 			err = walk_hugetlb_range(start, end, walk);
+ 	} else
+ 		err = walk_pgd_range(start, end, walk);
+ 
+-	if (vma && ops->post_vma)
++	if (ops->post_vma)
+ 		ops->post_vma(walk);
+ 
+ 	return err;
+@@ -402,9 +402,13 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
+ 		if (!vma) { /* after the last vma */
+ 			walk.vma = NULL;
+ 			next = end;
++			if (ops->pte_hole)
++				err = ops->pte_hole(start, next, -1, &walk);
+ 		} else if (start < vma->vm_start) { /* outside vma */
+ 			walk.vma = NULL;
+ 			next = min(end, vma->vm_start);
++			if (ops->pte_hole)
++				err = ops->pte_hole(start, next, -1, &walk);
+ 		} else { /* inside vma */
+ 			walk.vma = vma;
+ 			next = min(end, vma->vm_end);
+@@ -422,9 +426,8 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
+ 			}
+ 			if (err < 0)
+ 				break;
+-		}
+-		if (walk.vma || walk.ops->pte_hole)
+ 			err = __walk_page_range(start, next, &walk);
++		}
+ 		if (err)
+ 			break;
+ 	} while (start = next, start < end);
+@@ -453,9 +456,9 @@ int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
+ 	if (start >= end || !walk.mm)
+ 		return -EINVAL;
+ 
+-	mmap_assert_locked(walk.mm);
++	mmap_assert_write_locked(walk.mm);
+ 
+-	return __walk_page_range(start, end, &walk);
++	return walk_pgd_range(start, end, &walk);
+ }
+ 
+ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
+diff --git a/mm/ptdump.c b/mm/ptdump.c
+index 93f2f63dc52dc..a917bf55c61ea 100644
+--- a/mm/ptdump.c
++++ b/mm/ptdump.c
+@@ -141,13 +141,13 @@ void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
+ {
+ 	const struct ptdump_range *range = st->range;
+ 
+-	mmap_read_lock(mm);
++	mmap_write_lock(mm);
+ 	while (range->start != range->end) {
+ 		walk_page_range_novma(mm, range->start, range->end,
+ 				      &ptdump_ops, pgd, st);
+ 		range++;
+ 	}
+-	mmap_read_unlock(mm);
++	mmap_write_unlock(mm);
+ 
+ 	/* Flush out the last page */
+ 	st->note_page(st, 0, -1, 0);
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 0df4594b49c78..af8a4255cf1ba 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -389,7 +389,7 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
+ 	dev_match = dev_match || (res.type == RTN_LOCAL &&
+ 				  dev == net->loopback_dev);
+ 	if (dev_match) {
+-		ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_HOST;
++		ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_LINK;
+ 		return ret;
+ 	}
+ 	if (no_addr)
+@@ -401,7 +401,7 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
+ 	ret = 0;
+ 	if (fib_lookup(net, &fl4, &res, FIB_LOOKUP_IGNORE_LINKSTATE) == 0) {
+ 		if (res.type == RTN_UNICAST)
+-			ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_HOST;
++			ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_LINK;
+ 	}
+ 	return ret;
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 41b44b311e8a0..e62500d6fe0d0 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -3599,11 +3599,11 @@ static void tcp_send_challenge_ack(struct sock *sk, const struct sk_buff *skb)
+ 
+ 	/* Then check host-wide RFC 5961 rate limit. */
+ 	now = jiffies / HZ;
+-	if (now != challenge_timestamp) {
++	if (now != READ_ONCE(challenge_timestamp)) {
+ 		u32 ack_limit = READ_ONCE(net->ipv4.sysctl_tcp_challenge_ack_limit);
+ 		u32 half = (ack_limit + 1) >> 1;
+ 
+-		challenge_timestamp = now;
++		WRITE_ONCE(challenge_timestamp, now);
+ 		WRITE_ONCE(challenge_count, half + prandom_u32_max(ack_limit));
+ 	}
+ 	count = READ_ONCE(challenge_count);
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 56dad9565bc93..18469f1f707e5 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -1411,12 +1411,6 @@ static int kcm_attach(struct socket *sock, struct socket *csock,
+ 	psock->sk = csk;
+ 	psock->bpf_prog = prog;
+ 
+-	err = strp_init(&psock->strp, csk, &cb);
+-	if (err) {
+-		kmem_cache_free(kcm_psockp, psock);
+-		goto out;
+-	}
+-
+ 	write_lock_bh(&csk->sk_callback_lock);
+ 
+ 	/* Check if sk_user_data is aready by KCM or someone else.
+@@ -1424,13 +1418,18 @@ static int kcm_attach(struct socket *sock, struct socket *csock,
+ 	 */
+ 	if (csk->sk_user_data) {
+ 		write_unlock_bh(&csk->sk_callback_lock);
+-		strp_stop(&psock->strp);
+-		strp_done(&psock->strp);
+ 		kmem_cache_free(kcm_psockp, psock);
+ 		err = -EALREADY;
+ 		goto out;
+ 	}
+ 
++	err = strp_init(&psock->strp, csk, &cb);
++	if (err) {
++		write_unlock_bh(&csk->sk_callback_lock);
++		kmem_cache_free(kcm_psockp, psock);
++		goto out;
++	}
++
+ 	psock->save_data_ready = csk->sk_data_ready;
+ 	psock->save_write_space = csk->sk_write_space;
+ 	psock->save_state_change = csk->sk_state_change;
+diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c
+index a7ac53a2f00d8..78ae58ec397a0 100644
+--- a/net/mac80211/ibss.c
++++ b/net/mac80211/ibss.c
+@@ -541,6 +541,10 @@ int ieee80211_ibss_finish_csa(struct ieee80211_sub_if_data *sdata)
+ 
+ 	sdata_assert_lock(sdata);
+ 
++	/* When not connected/joined, sending CSA doesn't make sense. */
++	if (ifibss->state != IEEE80211_IBSS_MLME_JOINED)
++		return -ENOLINK;
++
+ 	/* update cfg80211 bss information with the new channel */
+ 	if (!is_zero_ether_addr(ifibss->bssid)) {
+ 		cbss = cfg80211_get_bss(sdata->local->hw.wiphy,
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index 887f945bb12d4..d6afaacaf7ef8 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -461,16 +461,19 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted)
+ 	scan_req = rcu_dereference_protected(local->scan_req,
+ 					     lockdep_is_held(&local->mtx));
+ 
+-	if (scan_req != local->int_scan_req) {
+-		local->scan_info.aborted = aborted;
+-		cfg80211_scan_done(scan_req, &local->scan_info);
+-	}
+ 	RCU_INIT_POINTER(local->scan_req, NULL);
+ 	RCU_INIT_POINTER(local->scan_sdata, NULL);
+ 
+ 	local->scanning = 0;
+ 	local->scan_chandef.chan = NULL;
+ 
++	synchronize_rcu();
++
++	if (scan_req != local->int_scan_req) {
++		local->scan_info.aborted = aborted;
++		cfg80211_scan_done(scan_req, &local->scan_info);
++	}
++
+ 	/* Set power back to normal operating levels. */
+ 	ieee80211_hw_config(local, 0);
+ 
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index 461c03737da8d..cee39ae52245c 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -2175,9 +2175,9 @@ static inline u64 sta_get_tidstats_msdu(struct ieee80211_sta_rx_stats *rxstats,
+ 	u64 value;
+ 
+ 	do {
+-		start = u64_stats_fetch_begin(&rxstats->syncp);
++		start = u64_stats_fetch_begin_irq(&rxstats->syncp);
+ 		value = rxstats->msdu[tid];
+-	} while (u64_stats_fetch_retry(&rxstats->syncp, start));
++	} while (u64_stats_fetch_retry_irq(&rxstats->syncp, start));
+ 
+ 	return value;
+ }
+@@ -2241,9 +2241,9 @@ static inline u64 sta_get_stats_bytes(struct ieee80211_sta_rx_stats *rxstats)
+ 	u64 value;
+ 
+ 	do {
+-		start = u64_stats_fetch_begin(&rxstats->syncp);
++		start = u64_stats_fetch_begin_irq(&rxstats->syncp);
+ 		value = rxstats->bytes;
+-	} while (u64_stats_fetch_retry(&rxstats->syncp, start));
++	} while (u64_stats_fetch_retry_irq(&rxstats->syncp, start));
+ 
+ 	return value;
+ }
+diff --git a/net/mac802154/rx.c b/net/mac802154/rx.c
+index b8ce84618a55b..c439125ef2b91 100644
+--- a/net/mac802154/rx.c
++++ b/net/mac802154/rx.c
+@@ -44,7 +44,7 @@ ieee802154_subif_frame(struct ieee802154_sub_if_data *sdata,
+ 
+ 	switch (mac_cb(skb)->dest.mode) {
+ 	case IEEE802154_ADDR_NONE:
+-		if (mac_cb(skb)->dest.mode != IEEE802154_ADDR_NONE)
++		if (hdr->source.mode != IEEE802154_ADDR_NONE)
+ 			/* FIXME: check if we are PAN coordinator */
+ 			skb->pkt_type = PACKET_OTHERHOST;
+ 		else
+diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
+index 9c047c148a112..72398149e4d4f 100644
+--- a/net/mpls/af_mpls.c
++++ b/net/mpls/af_mpls.c
+@@ -1078,9 +1078,9 @@ static void mpls_get_stats(struct mpls_dev *mdev,
+ 
+ 		p = per_cpu_ptr(mdev->stats, i);
+ 		do {
+-			start = u64_stats_fetch_begin(&p->syncp);
++			start = u64_stats_fetch_begin_irq(&p->syncp);
+ 			local = p->stats;
+-		} while (u64_stats_fetch_retry(&p->syncp, start));
++		} while (u64_stats_fetch_retry_irq(&p->syncp, start));
+ 
+ 		stats->rx_packets	+= local.rx_packets;
+ 		stats->rx_bytes		+= local.rx_bytes;
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 68f1e89430b3b..ecdd9e83f2f49 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -1057,6 +1057,21 @@ struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue,
+ }
+ EXPORT_SYMBOL(dev_graft_qdisc);
+ 
++static void shutdown_scheduler_queue(struct net_device *dev,
++				     struct netdev_queue *dev_queue,
++				     void *_qdisc_default)
++{
++	struct Qdisc *qdisc = dev_queue->qdisc_sleeping;
++	struct Qdisc *qdisc_default = _qdisc_default;
++
++	if (qdisc) {
++		rcu_assign_pointer(dev_queue->qdisc, qdisc_default);
++		dev_queue->qdisc_sleeping = qdisc_default;
++
++		qdisc_put(qdisc);
++	}
++}
++
+ static void attach_one_default_qdisc(struct net_device *dev,
+ 				     struct netdev_queue *dev_queue,
+ 				     void *_unused)
+@@ -1104,6 +1119,7 @@ static void attach_default_qdiscs(struct net_device *dev)
+ 	if (qdisc == &noop_qdisc) {
+ 		netdev_warn(dev, "default qdisc (%s) fail, fallback to %s\n",
+ 			    default_qdisc_ops->id, noqueue_qdisc_ops.id);
++		netdev_for_each_tx_queue(dev, shutdown_scheduler_queue, &noop_qdisc);
+ 		dev->priv_flags |= IFF_NO_QUEUE;
+ 		netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL);
+ 		qdisc = txq->qdisc_sleeping;
+@@ -1357,21 +1373,6 @@ void dev_init_scheduler(struct net_device *dev)
+ 	timer_setup(&dev->watchdog_timer, dev_watchdog, 0);
+ }
+ 
+-static void shutdown_scheduler_queue(struct net_device *dev,
+-				     struct netdev_queue *dev_queue,
+-				     void *_qdisc_default)
+-{
+-	struct Qdisc *qdisc = dev_queue->qdisc_sleeping;
+-	struct Qdisc *qdisc_default = _qdisc_default;
+-
+-	if (qdisc) {
+-		rcu_assign_pointer(dev_queue->qdisc, qdisc_default);
+-		dev_queue->qdisc_sleeping = qdisc_default;
+-
+-		qdisc_put(qdisc);
+-	}
+-}
+-
+ void dev_shutdown(struct net_device *dev)
+ {
+ 	netdev_for_each_tx_queue(dev, shutdown_scheduler_queue, &noop_qdisc);
+diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
+index 78e79029dc631..6eb17004a9e44 100644
+--- a/net/sched/sch_tbf.c
++++ b/net/sched/sch_tbf.c
+@@ -342,6 +342,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
+ 	struct nlattr *tb[TCA_TBF_MAX + 1];
+ 	struct tc_tbf_qopt *qopt;
+ 	struct Qdisc *child = NULL;
++	struct Qdisc *old = NULL;
+ 	struct psched_ratecfg rate;
+ 	struct psched_ratecfg peak;
+ 	u64 max_size;
+@@ -433,7 +434,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
+ 	sch_tree_lock(sch);
+ 	if (child) {
+ 		qdisc_tree_flush_backlog(q->qdisc);
+-		qdisc_put(q->qdisc);
++		old = q->qdisc;
+ 		q->qdisc = child;
+ 	}
+ 	q->limit = qopt->limit;
+@@ -453,6 +454,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
+ 	memcpy(&q->peak, &peak, sizeof(struct psched_ratecfg));
+ 
+ 	sch_tree_unlock(sch);
++	qdisc_put(old);
+ 	err = 0;
+ 
+ 	tbf_offload_change(sch);
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 5d7710dd95145..41cbc7c89c9d2 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1325,7 +1325,6 @@ static void smc_listen_out_connected(struct smc_sock *new_smc)
+ {
+ 	struct sock *newsmcsk = &new_smc->sk;
+ 
+-	sk_refcnt_debug_inc(newsmcsk);
+ 	if (newsmcsk->sk_state == SMC_INIT)
+ 		newsmcsk->sk_state = SMC_ACTIVE;
+ 
+diff --git a/net/wireless/debugfs.c b/net/wireless/debugfs.c
+index 76b845f68ac89..d80b06d669593 100644
+--- a/net/wireless/debugfs.c
++++ b/net/wireless/debugfs.c
+@@ -65,9 +65,10 @@ static ssize_t ht40allow_map_read(struct file *file,
+ {
+ 	struct wiphy *wiphy = file->private_data;
+ 	char *buf;
+-	unsigned int offset = 0, buf_size = PAGE_SIZE, i, r;
++	unsigned int offset = 0, buf_size = PAGE_SIZE, i;
+ 	enum nl80211_band band;
+ 	struct ieee80211_supported_band *sband;
++	ssize_t r;
+ 
+ 	buf = kzalloc(buf_size, GFP_KERNEL);
+ 	if (!buf)
+diff --git a/sound/core/seq/oss/seq_oss_midi.c b/sound/core/seq/oss/seq_oss_midi.c
+index 2ddfe22266517..f73ee0798aeab 100644
+--- a/sound/core/seq/oss/seq_oss_midi.c
++++ b/sound/core/seq/oss/seq_oss_midi.c
+@@ -267,7 +267,9 @@ snd_seq_oss_midi_clear_all(void)
+ void
+ snd_seq_oss_midi_setup(struct seq_oss_devinfo *dp)
+ {
++	spin_lock_irq(&register_lock);
+ 	dp->max_mididev = max_midi_devs;
++	spin_unlock_irq(&register_lock);
+ }
+ 
+ /*
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index cc93157fa9500..0363670a56e7c 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -121,13 +121,13 @@ struct snd_seq_client *snd_seq_client_use_ptr(int clientid)
+ 	spin_unlock_irqrestore(&clients_lock, flags);
+ #ifdef CONFIG_MODULES
+ 	if (!in_interrupt()) {
+-		static char client_requested[SNDRV_SEQ_GLOBAL_CLIENTS];
+-		static char card_requested[SNDRV_CARDS];
++		static DECLARE_BITMAP(client_requested, SNDRV_SEQ_GLOBAL_CLIENTS);
++		static DECLARE_BITMAP(card_requested, SNDRV_CARDS);
++
+ 		if (clientid < SNDRV_SEQ_GLOBAL_CLIENTS) {
+ 			int idx;
+ 			
+-			if (!client_requested[clientid]) {
+-				client_requested[clientid] = 1;
++			if (!test_and_set_bit(clientid, client_requested)) {
+ 				for (idx = 0; idx < 15; idx++) {
+ 					if (seq_client_load[idx] < 0)
+ 						break;
+@@ -142,10 +142,8 @@ struct snd_seq_client *snd_seq_client_use_ptr(int clientid)
+ 			int card = (clientid - SNDRV_SEQ_GLOBAL_CLIENTS) /
+ 				SNDRV_SEQ_CLIENTS_PER_CARD;
+ 			if (card < snd_ecards_limit) {
+-				if (! card_requested[card]) {
+-					card_requested[card] = 1;
++				if (!test_and_set_bit(card, card_requested))
+ 					snd_request_card(card);
+-				}
+ 				snd_seq_device_load_drivers();
+ 			}
+ 		}
+diff --git a/sound/hda/intel-nhlt.c b/sound/hda/intel-nhlt.c
+index e2237239d922a..8714891f50b0a 100644
+--- a/sound/hda/intel-nhlt.c
++++ b/sound/hda/intel-nhlt.c
+@@ -55,20 +55,26 @@ int intel_nhlt_get_dmic_geo(struct device *dev, struct nhlt_acpi_table *nhlt)
+ 
+ 		/* find max number of channels based on format_configuration */
+ 		if (fmt_configs->fmt_count) {
+-			dev_dbg(dev, "%s: found %d format definitions\n",
+-				__func__, fmt_configs->fmt_count);
++			struct nhlt_fmt_cfg *fmt_cfg = fmt_configs->fmt_config;
++
++			dev_dbg(dev, "found %d format definitions\n",
++				fmt_configs->fmt_count);
+ 
+ 			for (i = 0; i < fmt_configs->fmt_count; i++) {
+ 				struct wav_fmt_ext *fmt_ext;
+ 
+-				fmt_ext = &fmt_configs->fmt_config[i].fmt_ext;
++				fmt_ext = &fmt_cfg->fmt_ext;
+ 
+ 				if (fmt_ext->fmt.channels > max_ch)
+ 					max_ch = fmt_ext->fmt.channels;
++
++				/* Move to the next nhlt_fmt_cfg */
++				fmt_cfg = (struct nhlt_fmt_cfg *)(fmt_cfg->config.caps +
++								  fmt_cfg->config.size);
+ 			}
+-			dev_dbg(dev, "%s: max channels found %d\n", __func__, max_ch);
++			dev_dbg(dev, "max channels found %d\n", max_ch);
+ 		} else {
+-			dev_dbg(dev, "%s: No format information found\n", __func__);
++			dev_dbg(dev, "No format information found\n");
+ 		}
+ 
+ 		if (cfg->device_config.config_type != NHLT_CONFIG_TYPE_MIC_ARRAY) {
+@@ -95,17 +101,16 @@ int intel_nhlt_get_dmic_geo(struct device *dev, struct nhlt_acpi_table *nhlt)
+ 			}
+ 
+ 			if (dmic_geo > 0) {
+-				dev_dbg(dev, "%s: Array with %d dmics\n", __func__, dmic_geo);
++				dev_dbg(dev, "Array with %d dmics\n", dmic_geo);
+ 			}
+ 			if (max_ch > dmic_geo) {
+-				dev_dbg(dev, "%s: max channels %d exceed dmic number %d\n",
+-					__func__, max_ch, dmic_geo);
++				dev_dbg(dev, "max channels %d exceed dmic number %d\n",
++					max_ch, dmic_geo);
+ 			}
+ 		}
+ 	}
+ 
+-	dev_dbg(dev, "%s: dmic number %d max_ch %d\n",
+-		__func__, dmic_geo, max_ch);
++	dev_dbg(dev, "dmic number %d max_ch %d\n", dmic_geo, max_ch);
+ 
+ 	return dmic_geo;
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 6e679c86b6fa3..78f4f684a3c72 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4628,6 +4628,48 @@ static void alc236_fixup_hp_mute_led_micmute_vref(struct hda_codec *codec,
+ 	alc236_fixup_hp_micmute_led_vref(codec, fix, action);
+ }
+ 
++static inline void alc298_samsung_write_coef_pack(struct hda_codec *codec,
++						  const unsigned short coefs[2])
++{
++	alc_write_coef_idx(codec, 0x23, coefs[0]);
++	alc_write_coef_idx(codec, 0x25, coefs[1]);
++	alc_write_coef_idx(codec, 0x26, 0xb011);
++}
++
++struct alc298_samsung_amp_desc {
++	unsigned char nid;
++	unsigned short init_seq[2][2];
++};
++
++static void alc298_fixup_samsung_amp(struct hda_codec *codec,
++				     const struct hda_fixup *fix, int action)
++{
++	int i, j;
++	static const unsigned short init_seq[][2] = {
++		{ 0x19, 0x00 }, { 0x20, 0xc0 }, { 0x22, 0x44 }, { 0x23, 0x08 },
++		{ 0x24, 0x85 }, { 0x25, 0x41 }, { 0x35, 0x40 }, { 0x36, 0x01 },
++		{ 0x38, 0x81 }, { 0x3a, 0x03 }, { 0x3b, 0x81 }, { 0x40, 0x3e },
++		{ 0x41, 0x07 }, { 0x400, 0x1 }
++	};
++	static const struct alc298_samsung_amp_desc amps[] = {
++		{ 0x3a, { { 0x18, 0x1 }, { 0x26, 0x0 } } },
++		{ 0x39, { { 0x18, 0x2 }, { 0x26, 0x1 } } }
++	};
++
++	if (action != HDA_FIXUP_ACT_INIT)
++		return;
++
++	for (i = 0; i < ARRAY_SIZE(amps); i++) {
++		alc_write_coef_idx(codec, 0x22, amps[i].nid);
++
++		for (j = 0; j < ARRAY_SIZE(amps[i].init_seq); j++)
++			alc298_samsung_write_coef_pack(codec, amps[i].init_seq[j]);
++
++		for (j = 0; j < ARRAY_SIZE(init_seq); j++)
++			alc298_samsung_write_coef_pack(codec, init_seq[j]);
++	}
++}
++
+ #if IS_REACHABLE(CONFIG_INPUT)
+ static void gpio2_mic_hotkey_event(struct hda_codec *codec,
+ 				   struct hda_jack_callback *event)
+@@ -6787,6 +6829,7 @@ enum {
+ 	ALC236_FIXUP_HP_GPIO_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
++	ALC298_FIXUP_SAMSUNG_AMP,
+ 	ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ 	ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ 	ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
+@@ -8140,6 +8183,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc236_fixup_hp_mute_led_micmute_vref,
+ 	},
++	[ALC298_FIXUP_SAMSUNG_AMP] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc298_fixup_samsung_amp,
++		.chained = true,
++		.chain_id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET
++	},
+ 	[ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET] = {
+ 		.type = HDA_FIXUP_VERBS,
+ 		.v.verbs = (const struct hda_verb[]) {
+@@ -8914,13 +8963,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+-	SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+-	SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+-	SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+-	SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++	SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_AMP),
++	SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_AMP),
++	SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_AMP),
++	SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+-	SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+-	SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++	SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_AMP),
++	SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+@@ -9280,7 +9329,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC299_FIXUP_PREDATOR_SPK, .name = "predator-spk"},
+ 	{.id = ALC298_FIXUP_HUAWEI_MBX_STEREO, .name = "huawei-mbx-stereo"},
+ 	{.id = ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, .name = "alc256-medion-headset"},
+-	{.id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc298-samsung-headphone"},
++	{.id = ALC298_FIXUP_SAMSUNG_AMP, .name = "alc298-samsung-amp"},
+ 	{.id = ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc256-samsung-headphone"},
+ 	{.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
+ 	{.id = ALC274_FIXUP_HP_MIC, .name = "alc274-hp-mic-detect"},


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-09-15 10:31 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-09-15 10:31 UTC (permalink / raw
  To: gentoo-commits

commit:     5001fb691f5a0cb75ff7bfc439fdcbe1da7fef5c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 15 10:30:56 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 15 10:30:56 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5001fb69

Linux patch 5.10.143

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1142_linux-5.10.143.patch | 2685 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2689 insertions(+)

diff --git a/0000_README b/0000_README
index 75caafbb..32d72e53 100644
--- a/0000_README
+++ b/0000_README
@@ -611,6 +611,10 @@ Patch:  1141_linux-5.10.142.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.142
 
+Patch:  1142_linux-5.10.143.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.143
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1142_linux-5.10.143.patch b/1142_linux-5.10.143.patch
new file mode 100644
index 00000000..28c57e76
--- /dev/null
+++ b/1142_linux-5.10.143.patch
@@ -0,0 +1,2685 @@
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index f01eed0ee23ad..22a07c208fee0 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -92,6 +92,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A77      | #1508412        | ARM64_ERRATUM_1508412       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A510     | #2457168        | ARM64_ERRATUM_2457168       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1188873,1418040| ARM64_ERRATUM_1418040       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1349291        | N/A                         |
+diff --git a/Makefile b/Makefile
+index 655fe095459b3..60b2018c26dba 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 142
++SUBLEVEL = 143
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/at91-sama5d27_wlsom1.dtsi b/arch/arm/boot/dts/at91-sama5d27_wlsom1.dtsi
+index a06700e53e4c3..9c8b3eb49ea30 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_wlsom1.dtsi
++++ b/arch/arm/boot/dts/at91-sama5d27_wlsom1.dtsi
+@@ -62,8 +62,8 @@
+ 		regulators {
+ 			vdd_3v3: VDD_IO {
+ 				regulator-name = "VDD_IO";
+-				regulator-min-microvolt = <1200000>;
+-				regulator-max-microvolt = <3700000>;
++				regulator-min-microvolt = <3300000>;
++				regulator-max-microvolt = <3300000>;
+ 				regulator-initial-mode = <2>;
+ 				regulator-allowed-modes = <2>, <4>;
+ 				regulator-always-on;
+@@ -81,8 +81,8 @@
+ 
+ 			vddio_ddr: VDD_DDR {
+ 				regulator-name = "VDD_DDR";
+-				regulator-min-microvolt = <600000>;
+-				regulator-max-microvolt = <1850000>;
++				regulator-min-microvolt = <1200000>;
++				regulator-max-microvolt = <1200000>;
+ 				regulator-initial-mode = <2>;
+ 				regulator-allowed-modes = <2>, <4>;
+ 				regulator-always-on;
+@@ -104,8 +104,8 @@
+ 
+ 			vdd_core: VDD_CORE {
+ 				regulator-name = "VDD_CORE";
+-				regulator-min-microvolt = <600000>;
+-				regulator-max-microvolt = <1850000>;
++				regulator-min-microvolt = <1250000>;
++				regulator-max-microvolt = <1250000>;
+ 				regulator-initial-mode = <2>;
+ 				regulator-allowed-modes = <2>, <4>;
+ 				regulator-always-on;
+@@ -146,8 +146,8 @@
+ 
+ 			LDO1 {
+ 				regulator-name = "LDO1";
+-				regulator-min-microvolt = <1200000>;
+-				regulator-max-microvolt = <3700000>;
++				regulator-min-microvolt = <3300000>;
++				regulator-max-microvolt = <3300000>;
+ 				regulator-always-on;
+ 
+ 				regulator-state-standby {
+@@ -161,9 +161,8 @@
+ 
+ 			LDO2 {
+ 				regulator-name = "LDO2";
+-				regulator-min-microvolt = <1200000>;
+-				regulator-max-microvolt = <3700000>;
+-				regulator-always-on;
++				regulator-min-microvolt = <1800000>;
++				regulator-max-microvolt = <3300000>;
+ 
+ 				regulator-state-standby {
+ 					regulator-on-in-suspend;
+diff --git a/arch/arm/boot/dts/at91-sama5d2_icp.dts b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+index 634411d13b4aa..00b9e88ff5451 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_icp.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+@@ -195,8 +195,8 @@
+ 			regulators {
+ 				vdd_io_reg: VDD_IO {
+ 					regulator-name = "VDD_IO";
+-					regulator-min-microvolt = <1200000>;
+-					regulator-max-microvolt = <3700000>;
++					regulator-min-microvolt = <3300000>;
++					regulator-max-microvolt = <3300000>;
+ 					regulator-initial-mode = <2>;
+ 					regulator-allowed-modes = <2>, <4>;
+ 					regulator-always-on;
+@@ -214,8 +214,8 @@
+ 
+ 				VDD_DDR {
+ 					regulator-name = "VDD_DDR";
+-					regulator-min-microvolt = <600000>;
+-					regulator-max-microvolt = <1850000>;
++					regulator-min-microvolt = <1350000>;
++					regulator-max-microvolt = <1350000>;
+ 					regulator-initial-mode = <2>;
+ 					regulator-allowed-modes = <2>, <4>;
+ 					regulator-always-on;
+@@ -233,8 +233,8 @@
+ 
+ 				VDD_CORE {
+ 					regulator-name = "VDD_CORE";
+-					regulator-min-microvolt = <600000>;
+-					regulator-max-microvolt = <1850000>;
++					regulator-min-microvolt = <1250000>;
++					regulator-max-microvolt = <1250000>;
+ 					regulator-initial-mode = <2>;
+ 					regulator-allowed-modes = <2>, <4>;
+ 					regulator-always-on;
+@@ -256,7 +256,6 @@
+ 					regulator-max-microvolt = <1850000>;
+ 					regulator-initial-mode = <2>;
+ 					regulator-allowed-modes = <2>, <4>;
+-					regulator-always-on;
+ 
+ 					regulator-state-standby {
+ 						regulator-on-in-suspend;
+@@ -271,8 +270,8 @@
+ 
+ 				LDO1 {
+ 					regulator-name = "LDO1";
+-					regulator-min-microvolt = <1200000>;
+-					regulator-max-microvolt = <3700000>;
++					regulator-min-microvolt = <2500000>;
++					regulator-max-microvolt = <2500000>;
+ 					regulator-always-on;
+ 
+ 					regulator-state-standby {
+@@ -286,8 +285,8 @@
+ 
+ 				LDO2 {
+ 					regulator-name = "LDO2";
+-					regulator-min-microvolt = <1200000>;
+-					regulator-max-microvolt = <3700000>;
++					regulator-min-microvolt = <3300000>;
++					regulator-max-microvolt = <3300000>;
+ 					regulator-always-on;
+ 
+ 					regulator-state-standby {
+diff --git a/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi b/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
+index 92f9977d14822..e9a4115124eb0 100644
+--- a/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
+@@ -51,16 +51,6 @@
+ 		vin-supply = <&reg_3p3v_s5>;
+ 	};
+ 
+-	reg_3p3v_s0: regulator-3p3v-s0 {
+-		compatible = "regulator-fixed";
+-		regulator-name = "V_3V3_S0";
+-		regulator-min-microvolt = <3300000>;
+-		regulator-max-microvolt = <3300000>;
+-		regulator-always-on;
+-		regulator-boot-on;
+-		vin-supply = <&reg_3p3v_s5>;
+-	};
+-
+ 	reg_3p3v_s5: regulator-3p3v-s5 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "V_3V3_S5";
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 7c7906e9dafda..1116a8d092c01 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -657,6 +657,24 @@ config ARM64_ERRATUM_1508412
+ 
+ 	  If unsure, say Y.
+ 
++config ARM64_ERRATUM_2457168
++	bool "Cortex-A510: 2457168: workaround for AMEVCNTR01 incrementing incorrectly"
++	depends on ARM64_AMU_EXTN
++	default y
++	help
++	  This option adds the workaround for ARM Cortex-A510 erratum 2457168.
++
++	  The AMU counter AMEVCNTR01 (constant counter) should increment at the same rate
++	  as the system counter. On affected Cortex-A510 cores AMEVCNTR01 increments
++	  incorrectly giving a significantly higher output value.
++
++	  Work around this problem by keeping the reference values of affected counters
++	  to 0 thus signaling an error case. This effect is the same to firmware disabling
++	  affected counters, in which case 0 will be returned when reading the disabled
++	  counters.
++
++	  If unsure, say Y.
++
+ config CAVIUM_ERRATUM_22375
+ 	bool "Cavium erratum 22375, 24313"
+ 	default y
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index f42fd0a2e81c8..53030d3c03a2c 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -67,7 +67,8 @@
+ #define ARM64_MTE				57
+ #define ARM64_WORKAROUND_1508412		58
+ #define ARM64_SPECTRE_BHB			59
++#define ARM64_WORKAROUND_2457168		60
+ 
+-#define ARM64_NCAPS				60
++#define ARM64_NCAPS				61
+ 
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
+index 587543c6c51cb..97c42be71338a 100644
+--- a/arch/arm64/kernel/cacheinfo.c
++++ b/arch/arm64/kernel/cacheinfo.c
+@@ -45,7 +45,8 @@ static void ci_leaf_init(struct cacheinfo *this_leaf,
+ 
+ int init_cache_level(unsigned int cpu)
+ {
+-	unsigned int ctype, level, leaves, fw_level;
++	unsigned int ctype, level, leaves;
++	int fw_level;
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+ 
+ 	for (level = 1, leaves = 0; level <= MAX_CACHE_LEVEL; level++) {
+@@ -63,6 +64,9 @@ int init_cache_level(unsigned int cpu)
+ 	else
+ 		fw_level = acpi_find_last_cache_level(cpu);
+ 
++	if (fw_level < 0)
++		return fw_level;
++
+ 	if (level < fw_level) {
+ 		/*
+ 		 * some external caches not specified in CLIDR_EL1
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 78263dadd00da..aaacca6fd52f6 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -545,6 +545,15 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 				  0, 0,
+ 				  1, 0),
+ 	},
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_2457168
++	{
++		.desc = "ARM erratum 2457168",
++		.capability = ARM64_WORKAROUND_2457168,
++		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
++		/* Cortex-A510 r0p0-r1p1 */
++		CAP_MIDR_RANGE(MIDR_CORTEX_A510, 0, 0, 1, 1)
++	},
+ #endif
+ 	{
+ 	}
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 4087e2d1f39e2..e72c90b826568 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1559,7 +1559,10 @@ static void cpu_amu_enable(struct arm64_cpu_capabilities const *cap)
+ 		pr_info("detected CPU%d: Activity Monitors Unit (AMU)\n",
+ 			smp_processor_id());
+ 		cpumask_set_cpu(smp_processor_id(), &amu_cpus);
+-		init_cpu_freq_invariance_counters();
++
++		/* 0 reference values signal broken/disabled counters */
++		if (!this_cpu_has_cap(ARM64_WORKAROUND_2457168))
++			init_cpu_freq_invariance_counters();
+ 	}
+ }
+ 
+diff --git a/arch/mips/loongson32/ls1c/board.c b/arch/mips/loongson32/ls1c/board.c
+index e9de6da0ce51f..9dcfe9de55b0a 100644
+--- a/arch/mips/loongson32/ls1c/board.c
++++ b/arch/mips/loongson32/ls1c/board.c
+@@ -15,7 +15,6 @@ static struct platform_device *ls1c_platform_devices[] __initdata = {
+ static int __init ls1c_platform_init(void)
+ {
+ 	ls1x_serial_set_uartclk(&ls1x_uart_pdev);
+-	ls1x_rtc_set_extclk(&ls1x_rtc_pdev);
+ 
+ 	return platform_add_devices(ls1c_platform_devices,
+ 				   ARRAY_SIZE(ls1c_platform_devices));
+diff --git a/arch/parisc/kernel/head.S b/arch/parisc/kernel/head.S
+index aa93d775c34db..598d0938449da 100644
+--- a/arch/parisc/kernel/head.S
++++ b/arch/parisc/kernel/head.S
+@@ -22,7 +22,7 @@
+ #include <linux/init.h>
+ #include <linux/pgtable.h>
+ 
+-	.level	PA_ASM_LEVEL
++	.level	1.1
+ 
+ 	__INITDATA
+ ENTRY(boot_args)
+@@ -69,6 +69,47 @@ $bss_loop:
+ 	stw,ma          %arg2,4(%r1)
+ 	stw,ma          %arg3,4(%r1)
+ 
++#if !defined(CONFIG_64BIT) && defined(CONFIG_PA20)
++	/* This 32-bit kernel was compiled for PA2.0 CPUs. Check current CPU
++	 * and halt kernel if we detect a PA1.x CPU. */
++	ldi		32,%r10
++	mtctl		%r10,%cr11
++	.level 2.0
++	mfctl,w		%cr11,%r10
++	.level 1.1
++	comib,<>,n	0,%r10,$cpu_ok
++
++	load32		PA(msg1),%arg0
++	ldi		msg1_end-msg1,%arg1
++$iodc_panic:
++	copy		%arg0, %r10
++	copy		%arg1, %r11
++	load32		PA(init_stack),%sp
++#define MEM_CONS 0x3A0
++	ldw		MEM_CONS+32(%r0),%arg0	// HPA
++	ldi		ENTRY_IO_COUT,%arg1
++	ldw		MEM_CONS+36(%r0),%arg2	// SPA
++	ldw		MEM_CONS+8(%r0),%arg3	// layers
++	load32		PA(__bss_start),%r1
++	stw		%r1,-52(%sp)		// arg4
++	stw		%r0,-56(%sp)		// arg5
++	stw		%r10,-60(%sp)		// arg6 = ptr to text
++	stw		%r11,-64(%sp)		// arg7 = len
++	stw		%r0,-68(%sp)		// arg8
++	load32		PA(.iodc_panic_ret), %rp
++	ldw		MEM_CONS+40(%r0),%r1	// ENTRY_IODC
++	bv,n		(%r1)
++.iodc_panic_ret:
++	b .				/* wait endless with ... */
++	or		%r10,%r10,%r10	/* qemu idle sleep */
++msg1:	.ascii "Can't boot kernel which was built for PA8x00 CPUs on this machine.\r\n"
++msg1_end:
++
++$cpu_ok:
++#endif
++
++	.level	PA_ASM_LEVEL
++
+ 	/* Initialize startup VM. Just map first 16/32 MB of memory */
+ 	load32		PA(swapper_pg_dir),%r4
+ 	mtctl		%r4,%cr24	/* Initialize kernel root pointer */
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 9d5460f6e0ff1..6f33d62331b1f 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1852,6 +1852,12 @@ static void free_info(struct blkfront_info *info)
+ 	kfree(info);
+ }
+ 
++/* Enable the persistent grants feature. */
++static bool feature_persistent = true;
++module_param(feature_persistent, bool, 0644);
++MODULE_PARM_DESC(feature_persistent,
++		"Enables the persistent grants feature");
++
+ /* Common code used when first setting up, and when resuming. */
+ static int talk_to_blkback(struct xenbus_device *dev,
+ 			   struct blkfront_info *info)
+@@ -1943,6 +1949,7 @@ again:
+ 		message = "writing protocol";
+ 		goto abort_transaction;
+ 	}
++	info->feature_persistent_parm = feature_persistent;
+ 	err = xenbus_printf(xbt, dev->nodename, "feature-persistent", "%u",
+ 			info->feature_persistent_parm);
+ 	if (err)
+@@ -2019,12 +2026,6 @@ static int negotiate_mq(struct blkfront_info *info)
+ 	return 0;
+ }
+ 
+-/* Enable the persistent grants feature. */
+-static bool feature_persistent = true;
+-module_param(feature_persistent, bool, 0644);
+-MODULE_PARM_DESC(feature_persistent,
+-		"Enables the persistent grants feature");
+-
+ /**
+  * Entry point to this code when a new device is created.  Allocate the basic
+  * structures and the ring buffer for communication with the backend, and
+@@ -2394,7 +2395,6 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
+ 	if (xenbus_read_unsigned(info->xbdev->otherend, "feature-discard", 0))
+ 		blkfront_setup_discard(info);
+ 
+-	info->feature_persistent_parm = feature_persistent;
+ 	if (info->feature_persistent_parm)
+ 		info->feature_persistent =
+ 			!!xenbus_read_unsigned(info->xbdev->otherend,
+diff --git a/drivers/firmware/efi/capsule-loader.c b/drivers/firmware/efi/capsule-loader.c
+index 4dde8edd53b62..3e8d4b51a8140 100644
+--- a/drivers/firmware/efi/capsule-loader.c
++++ b/drivers/firmware/efi/capsule-loader.c
+@@ -242,29 +242,6 @@ failed:
+ 	return ret;
+ }
+ 
+-/**
+- * efi_capsule_flush - called by file close or file flush
+- * @file: file pointer
+- * @id: not used
+- *
+- *	If a capsule is being partially uploaded then calling this function
+- *	will be treated as upload termination and will free those completed
+- *	buffer pages and -ECANCELED will be returned.
+- **/
+-static int efi_capsule_flush(struct file *file, fl_owner_t id)
+-{
+-	int ret = 0;
+-	struct capsule_info *cap_info = file->private_data;
+-
+-	if (cap_info->index > 0) {
+-		pr_err("capsule upload not complete\n");
+-		efi_free_all_buff_pages(cap_info);
+-		ret = -ECANCELED;
+-	}
+-
+-	return ret;
+-}
+-
+ /**
+  * efi_capsule_release - called by file close
+  * @inode: not used
+@@ -277,6 +254,13 @@ static int efi_capsule_release(struct inode *inode, struct file *file)
+ {
+ 	struct capsule_info *cap_info = file->private_data;
+ 
++	if (cap_info->index > 0 &&
++	    (cap_info->header.headersize == 0 ||
++	     cap_info->count < cap_info->total_size)) {
++		pr_err("capsule upload not complete\n");
++		efi_free_all_buff_pages(cap_info);
++	}
++
+ 	kfree(cap_info->pages);
+ 	kfree(cap_info->phys);
+ 	kfree(file->private_data);
+@@ -324,7 +308,6 @@ static const struct file_operations efi_capsule_fops = {
+ 	.owner = THIS_MODULE,
+ 	.open = efi_capsule_open,
+ 	.write = efi_capsule_write,
+-	.flush = efi_capsule_flush,
+ 	.release = efi_capsule_release,
+ 	.llseek = no_llseek,
+ };
+diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
+index a2ae9c3b95793..433e11dab4a87 100644
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -37,6 +37,13 @@ KBUILD_CFLAGS			:= $(cflags-y) -Os -DDISABLE_BRANCH_PROFILING \
+ 				   $(call cc-option,-fno-addrsig) \
+ 				   -D__DISABLE_EXPORTS
+ 
++#
++# struct randomization only makes sense for Linux internal types, which the EFI
++# stub code never touches, so let's turn off struct randomization for the stub
++# altogether
++#
++KBUILD_CFLAGS := $(filter-out $(RANDSTRUCT_CFLAGS), $(KBUILD_CFLAGS))
++
+ # remove SCS flags from all objects in this directory
+ KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 2f47f81a74a57..ae84d3b582aa5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -2146,6 +2146,9 @@ static int psp_hw_fini(void *handle)
+ 		psp_rap_terminate(psp);
+ 		psp_dtm_terminate(psp);
+ 		psp_hdcp_terminate(psp);
++
++		if (adev->gmc.xgmi.num_physical_nodes > 1)
++			psp_xgmi_terminate(psp);
+ 	}
+ 
+ 	psp_asd_unload(psp);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+index 042c85fc528bb..def0b7092438f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+@@ -622,7 +622,7 @@ int amdgpu_xgmi_remove_device(struct amdgpu_device *adev)
+ 		amdgpu_put_xgmi_hive(hive);
+ 	}
+ 
+-	return psp_xgmi_terminate(&adev->psp);
++	return 0;
+ }
+ 
+ int amdgpu_xgmi_ras_late_init(struct amdgpu_device *adev)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 405bb3efa2a96..38f4c7474487b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -2570,7 +2570,8 @@ static void gfx_v9_0_constants_init(struct amdgpu_device *adev)
+ 
+ 	gfx_v9_0_tiling_mode_table_init(adev);
+ 
+-	gfx_v9_0_setup_rb(adev);
++	if (adev->gfx.num_gfx_rings)
++		gfx_v9_0_setup_rb(adev);
+ 	gfx_v9_0_get_cu_info(adev, &adev->gfx.cu_info);
+ 	adev->gfx.config.db_debug2 = RREG32_SOC15(GC, 0, mmDB_DEBUG2);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
+index f84701c562bf2..97441f373531f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
+@@ -178,6 +178,7 @@ static void mmhub_v1_0_init_cache_regs(struct amdgpu_device *adev)
+ 	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
+ 	WREG32_SOC15(MMHUB, 0, mmVM_L2_CNTL2, tmp);
+ 
++	tmp = mmVM_L2_CNTL3_DEFAULT;
+ 	if (adev->gmc.translate_further) {
+ 		tmp = REG_SET_FIELD(tmp, VM_L2_CNTL3, BANK_SELECT, 12);
+ 		tmp = REG_SET_FIELD(tmp, VM_L2_CNTL3,
+diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
+index 5979af230eda0..8b30e8d83fbcf 100644
+--- a/drivers/gpu/drm/drm_gem.c
++++ b/drivers/gpu/drm/drm_gem.c
+@@ -166,21 +166,6 @@ void drm_gem_private_object_init(struct drm_device *dev,
+ }
+ EXPORT_SYMBOL(drm_gem_private_object_init);
+ 
+-static void
+-drm_gem_remove_prime_handles(struct drm_gem_object *obj, struct drm_file *filp)
+-{
+-	/*
+-	 * Note: obj->dma_buf can't disappear as long as we still hold a
+-	 * handle reference in obj->handle_count.
+-	 */
+-	mutex_lock(&filp->prime.lock);
+-	if (obj->dma_buf) {
+-		drm_prime_remove_buf_handle_locked(&filp->prime,
+-						   obj->dma_buf);
+-	}
+-	mutex_unlock(&filp->prime.lock);
+-}
+-
+ /**
+  * drm_gem_object_handle_free - release resources bound to userspace handles
+  * @obj: GEM object to clean up.
+@@ -254,7 +239,7 @@ drm_gem_object_release_handle(int id, void *ptr, void *data)
+ 	else if (dev->driver->gem_close_object)
+ 		dev->driver->gem_close_object(obj, file_priv);
+ 
+-	drm_gem_remove_prime_handles(obj, file_priv);
++	drm_prime_remove_buf_handle(&file_priv->prime, id);
+ 	drm_vma_node_revoke(&obj->vma_node, file_priv);
+ 
+ 	drm_gem_object_handle_put_unlocked(obj);
+diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
+index b65865c630b0a..f80e0f28087d1 100644
+--- a/drivers/gpu/drm/drm_internal.h
++++ b/drivers/gpu/drm/drm_internal.h
+@@ -86,8 +86,8 @@ int drm_prime_fd_to_handle_ioctl(struct drm_device *dev, void *data,
+ 
+ void drm_prime_init_file_private(struct drm_prime_file_private *prime_fpriv);
+ void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv);
+-void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpriv,
+-					struct dma_buf *dma_buf);
++void drm_prime_remove_buf_handle(struct drm_prime_file_private *prime_fpriv,
++				 uint32_t handle);
+ 
+ /* drm_drv.c */
+ struct drm_minor *drm_minor_acquire(unsigned int minor_id);
+diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
+index 9f955f2010c25..825499ea3ff59 100644
+--- a/drivers/gpu/drm/drm_prime.c
++++ b/drivers/gpu/drm/drm_prime.c
+@@ -187,29 +187,33 @@ static int drm_prime_lookup_buf_handle(struct drm_prime_file_private *prime_fpri
+ 	return -ENOENT;
+ }
+ 
+-void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpriv,
+-					struct dma_buf *dma_buf)
++void drm_prime_remove_buf_handle(struct drm_prime_file_private *prime_fpriv,
++				 uint32_t handle)
+ {
+ 	struct rb_node *rb;
+ 
+-	rb = prime_fpriv->dmabufs.rb_node;
++	mutex_lock(&prime_fpriv->lock);
++
++	rb = prime_fpriv->handles.rb_node;
+ 	while (rb) {
+ 		struct drm_prime_member *member;
+ 
+-		member = rb_entry(rb, struct drm_prime_member, dmabuf_rb);
+-		if (member->dma_buf == dma_buf) {
++		member = rb_entry(rb, struct drm_prime_member, handle_rb);
++		if (member->handle == handle) {
+ 			rb_erase(&member->handle_rb, &prime_fpriv->handles);
+ 			rb_erase(&member->dmabuf_rb, &prime_fpriv->dmabufs);
+ 
+-			dma_buf_put(dma_buf);
++			dma_buf_put(member->dma_buf);
+ 			kfree(member);
+-			return;
+-		} else if (member->dma_buf < dma_buf) {
++			break;
++		} else if (member->handle < handle) {
+ 			rb = rb->rb_right;
+ 		} else {
+ 			rb = rb->rb_left;
+ 		}
+ 	}
++
++	mutex_unlock(&prime_fpriv->lock);
+ }
+ 
+ void drm_prime_init_file_private(struct drm_prime_file_private *prime_fpriv)
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+index f2c8b56be9ead..261a5e97a0b4a 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+@@ -163,6 +163,28 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
+ 	intel_dp_compute_rate(intel_dp, intel_dp->link_rate,
+ 			      &link_bw, &rate_select);
+ 
++	/*
++	 * WaEdpLinkRateDataReload
++	 *
++	 * Parade PS8461E MUX (used on varius TGL+ laptops) needs
++	 * to snoop the link rates reported by the sink when we
++	 * use LINK_RATE_SET in order to operate in jitter cleaning
++	 * mode (as opposed to redriver mode). Unfortunately it
++	 * loses track of the snooped link rates when powered down,
++	 * so we need to make it re-snoop often. Without this high
++	 * link rates are not stable.
++	 */
++	if (!link_bw) {
++		struct intel_connector *connector = intel_dp->attached_connector;
++		__le16 sink_rates[DP_MAX_SUPPORTED_RATES];
++
++		drm_dbg_kms(&i915->drm, "[CONNECTOR:%d:%s] Reloading eDP link rates\n",
++			    connector->base.base.id, connector->base.name);
++
++		drm_dp_dpcd_read(&intel_dp->aux, DP_SUPPORTED_LINK_RATES,
++				 sink_rates, sizeof(sink_rates));
++	}
++
+ 	if (link_bw)
+ 		drm_dbg_kms(&i915->drm,
+ 			    "Using LINK_BW_SET value %02x\n", link_bw);
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index 266e3cbbd09bd..8287410f471fb 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -1623,6 +1623,9 @@ int radeon_suspend_kms(struct drm_device *dev, bool suspend,
+ 		if (r) {
+ 			/* delay GPU reset to resume */
+ 			radeon_fence_driver_force_completion(rdev, i);
++		} else {
++			/* finish executing delayed work */
++			flush_delayed_work(&rdev->fence_drv[i].lockup_work);
+ 		}
+ 	}
+ 
+diff --git a/drivers/hwmon/mr75203.c b/drivers/hwmon/mr75203.c
+index 046523d47c29b..41e3d3b54baff 100644
+--- a/drivers/hwmon/mr75203.c
++++ b/drivers/hwmon/mr75203.c
+@@ -68,8 +68,9 @@
+ 
+ /* VM Individual Macro Register */
+ #define VM_COM_REG_SIZE	0x200
+-#define VM_SDIF_DONE(n)	(VM_COM_REG_SIZE + 0x34 + 0x200 * (n))
+-#define VM_SDIF_DATA(n)	(VM_COM_REG_SIZE + 0x40 + 0x200 * (n))
++#define VM_SDIF_DONE(vm)	(VM_COM_REG_SIZE + 0x34 + 0x200 * (vm))
++#define VM_SDIF_DATA(vm, ch)	\
++	(VM_COM_REG_SIZE + 0x40 + 0x200 * (vm) + 0x4 * (ch))
+ 
+ /* SDA Slave Register */
+ #define IP_CTRL			0x00
+@@ -115,6 +116,7 @@ struct pvt_device {
+ 	u32			t_num;
+ 	u32			p_num;
+ 	u32			v_num;
++	u32			c_num;
+ 	u32			ip_freq;
+ 	u8			*vm_idx;
+ };
+@@ -178,14 +180,15 @@ static int pvt_read_in(struct device *dev, u32 attr, int channel, long *val)
+ {
+ 	struct pvt_device *pvt = dev_get_drvdata(dev);
+ 	struct regmap *v_map = pvt->v_map;
++	u8 vm_idx, ch_idx;
+ 	u32 n, stat;
+-	u8 vm_idx;
+ 	int ret;
+ 
+-	if (channel >= pvt->v_num)
++	if (channel >= pvt->v_num * pvt->c_num)
+ 		return -EINVAL;
+ 
+-	vm_idx = pvt->vm_idx[channel];
++	vm_idx = pvt->vm_idx[channel / pvt->c_num];
++	ch_idx = channel % pvt->c_num;
+ 
+ 	switch (attr) {
+ 	case hwmon_in_input:
+@@ -196,13 +199,23 @@ static int pvt_read_in(struct device *dev, u32 attr, int channel, long *val)
+ 		if (ret)
+ 			return ret;
+ 
+-		ret = regmap_read(v_map, VM_SDIF_DATA(vm_idx), &n);
++		ret = regmap_read(v_map, VM_SDIF_DATA(vm_idx, ch_idx), &n);
+ 		if(ret < 0)
+ 			return ret;
+ 
+ 		n &= SAMPLE_DATA_MSK;
+-		/* Convert the N bitstream count into voltage */
+-		*val = (PVT_N_CONST * n - PVT_R_CONST) >> PVT_CONV_BITS;
++		/*
++		 * Convert the N bitstream count into voltage.
++		 * To support negative voltage calculation for 64bit machines
++		 * n must be cast to long, since n and *val differ both in
++		 * signedness and in size.
++		 * Division is used instead of right shift, because for signed
++		 * numbers, the sign bit is used to fill the vacated bit
++		 * positions, and if the number is negative, 1 is used.
++		 * BIT(x) may not be used instead of (1 << x) because it's
++		 * unsigned.
++		 */
++		*val = (PVT_N_CONST * (long)n - PVT_R_CONST) / (1 << PVT_CONV_BITS);
+ 
+ 		return 0;
+ 	default:
+@@ -385,6 +398,19 @@ static int pvt_init(struct pvt_device *pvt)
+ 		if (ret)
+ 			return ret;
+ 
++		val = (BIT(pvt->c_num) - 1) | VM_CH_INIT |
++		      IP_POLL << SDIF_ADDR_SFT | SDIF_WRN_W | SDIF_PROG;
++		ret = regmap_write(v_map, SDIF_W, val);
++		if (ret < 0)
++			return ret;
++
++		ret = regmap_read_poll_timeout(v_map, SDIF_STAT,
++					       val, !(val & SDIF_BUSY),
++					       PVT_POLL_DELAY_US,
++					       PVT_POLL_TIMEOUT_US);
++		if (ret)
++			return ret;
++
+ 		val = CFG1_VOL_MEAS_MODE | CFG1_PARALLEL_OUT |
+ 		      CFG1_14_BIT | IP_CFG << SDIF_ADDR_SFT |
+ 		      SDIF_WRN_W | SDIF_PROG;
+@@ -499,8 +525,8 @@ static int pvt_reset_control_deassert(struct device *dev, struct pvt_device *pvt
+ 
+ static int mr75203_probe(struct platform_device *pdev)
+ {
++	u32 ts_num, vm_num, pd_num, ch_num, val, index, i;
+ 	const struct hwmon_channel_info **pvt_info;
+-	u32 ts_num, vm_num, pd_num, val, index, i;
+ 	struct device *dev = &pdev->dev;
+ 	u32 *temp_config, *in_config;
+ 	struct device *hwmon_dev;
+@@ -541,9 +567,11 @@ static int mr75203_probe(struct platform_device *pdev)
+ 	ts_num = (val & TS_NUM_MSK) >> TS_NUM_SFT;
+ 	pd_num = (val & PD_NUM_MSK) >> PD_NUM_SFT;
+ 	vm_num = (val & VM_NUM_MSK) >> VM_NUM_SFT;
++	ch_num = (val & CH_NUM_MSK) >> CH_NUM_SFT;
+ 	pvt->t_num = ts_num;
+ 	pvt->p_num = pd_num;
+ 	pvt->v_num = vm_num;
++	pvt->c_num = ch_num;
+ 	val = 0;
+ 	if (ts_num)
+ 		val++;
+@@ -580,7 +608,7 @@ static int mr75203_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	if (vm_num) {
+-		u32 num = vm_num;
++		u32 total_ch;
+ 
+ 		ret = pvt_get_regmap(pdev, "vm", pvt);
+ 		if (ret)
+@@ -594,30 +622,30 @@ static int mr75203_probe(struct platform_device *pdev)
+ 		ret = device_property_read_u8_array(dev, "intel,vm-map",
+ 						    pvt->vm_idx, vm_num);
+ 		if (ret) {
+-			num = 0;
++			/*
++			 * Incase intel,vm-map property is not defined, we
++			 * assume incremental channel numbers.
++			 */
++			for (i = 0; i < vm_num; i++)
++				pvt->vm_idx[i] = i;
+ 		} else {
+ 			for (i = 0; i < vm_num; i++)
+ 				if (pvt->vm_idx[i] >= vm_num ||
+ 				    pvt->vm_idx[i] == 0xff) {
+-					num = i;
++					pvt->v_num = i;
++					vm_num = i;
+ 					break;
+ 				}
+ 		}
+ 
+-		/*
+-		 * Incase intel,vm-map property is not defined, we assume
+-		 * incremental channel numbers.
+-		 */
+-		for (i = num; i < vm_num; i++)
+-			pvt->vm_idx[i] = i;
+-
+-		in_config = devm_kcalloc(dev, num + 1,
++		total_ch = ch_num * vm_num;
++		in_config = devm_kcalloc(dev, total_ch + 1,
+ 					 sizeof(*in_config), GFP_KERNEL);
+ 		if (!in_config)
+ 			return -ENOMEM;
+ 
+-		memset32(in_config, HWMON_I_INPUT, num);
+-		in_config[num] = 0;
++		memset32(in_config, HWMON_I_INPUT, total_ch);
++		in_config[total_ch] = 0;
+ 		pvt_in.config = in_config;
+ 
+ 		pvt_info[index++] = &pvt_in;
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 3c40aa50cd60c..b5fa19a033c0a 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1722,8 +1722,8 @@ cma_ib_id_from_event(struct ib_cm_id *cm_id,
+ 		}
+ 
+ 		if (!validate_net_dev(*net_dev,
+-				 (struct sockaddr *)&req->listen_addr_storage,
+-				 (struct sockaddr *)&req->src_addr_storage)) {
++				 (struct sockaddr *)&req->src_addr_storage,
++				 (struct sockaddr *)&req->listen_addr_storage)) {
+ 			id_priv = ERR_PTR(-EHOSTUNREACH);
+ 			goto err;
+ 		}
+diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
+index 323f6cf006824..af4af4789ef27 100644
+--- a/drivers/infiniband/core/umem_odp.c
++++ b/drivers/infiniband/core/umem_odp.c
+@@ -466,7 +466,7 @@ retry:
+ 		mutex_unlock(&umem_odp->umem_mutex);
+ 
+ out_put_mm:
+-	mmput(owning_mm);
++	mmput_async(owning_mm);
+ out_put_task:
+ 	if (owning_process)
+ 		put_task_struct(owning_process);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index be7f2fe1e8839..8a92faeb3d237 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -92,7 +92,7 @@
+ 
+ #define HNS_ROCE_V2_QPC_TIMER_ENTRY_SZ		PAGE_SIZE
+ #define HNS_ROCE_V2_CQC_TIMER_ENTRY_SZ		PAGE_SIZE
+-#define HNS_ROCE_V2_PAGE_SIZE_SUPPORTED		0xFFFFF000
++#define HNS_ROCE_V2_PAGE_SIZE_SUPPORTED		0xFFFF000
+ #define HNS_ROCE_V2_MAX_INNER_MTPT_NUM		2
+ #define HNS_ROCE_INVALID_LKEY			0x100
+ #define HNS_ROCE_CMQ_TX_TIMEOUT			30000
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 291e06d631505..6fe98af7741b5 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -386,11 +386,8 @@ static int set_rq_size(struct hns_roce_dev *hr_dev, struct ib_qp_cap *cap,
+ 
+ 	hr_qp->rq.max_gs = roundup_pow_of_two(max(1U, cap->max_recv_sge));
+ 
+-	if (hr_dev->caps.max_rq_sg <= HNS_ROCE_SGE_IN_WQE)
+-		hr_qp->rq.wqe_shift = ilog2(hr_dev->caps.max_rq_desc_sz);
+-	else
+-		hr_qp->rq.wqe_shift = ilog2(hr_dev->caps.max_rq_desc_sz *
+-					    hr_qp->rq.max_gs);
++	hr_qp->rq.wqe_shift = ilog2(hr_dev->caps.max_rq_desc_sz *
++				    hr_qp->rq.max_gs);
+ 
+ 	hr_qp->rq.wqe_cnt = cnt;
+ 	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE)
+diff --git a/drivers/infiniband/hw/mlx5/mad.c b/drivers/infiniband/hw/mlx5/mad.c
+index 9bb9bb058932f..cca7a4a6bd82d 100644
+--- a/drivers/infiniband/hw/mlx5/mad.c
++++ b/drivers/infiniband/hw/mlx5/mad.c
+@@ -166,6 +166,12 @@ static int process_pma_cmd(struct mlx5_ib_dev *dev, u8 port_num,
+ 		mdev = dev->mdev;
+ 		mdev_port_num = 1;
+ 	}
++	if (MLX5_CAP_GEN(dev->mdev, num_ports) == 1) {
++		/* set local port to one for Function-Per-Port HCA. */
++		mdev = dev->mdev;
++		mdev_port_num = 1;
++	}
++
+ 	/* Declaring support of extended counters */
+ 	if (in_mad->mad_hdr.attr_id == IB_PMA_CLASS_PORT_INFO) {
+ 		struct ib_class_port_info cpi = {};
+diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
+index 7989c4043db4e..3c3ae5ef29428 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
+@@ -29,7 +29,7 @@ static struct page *siw_get_pblpage(struct siw_mem *mem, u64 addr, int *idx)
+ 	dma_addr_t paddr = siw_pbl_get_buffer(pbl, offset, NULL, idx);
+ 
+ 	if (paddr)
+-		return virt_to_page(paddr);
++		return virt_to_page((void *)paddr);
+ 
+ 	return NULL;
+ }
+@@ -523,13 +523,23 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
+ 					kunmap(p);
+ 				}
+ 			} else {
+-				u64 va = sge->laddr + sge_off;
++				/*
++				 * Cast to an uintptr_t to preserve all 64 bits
++				 * in sge->laddr.
++				 */
++				uintptr_t va = (uintptr_t)(sge->laddr + sge_off);
+ 
+-				page_array[seg] = virt_to_page(va & PAGE_MASK);
++				/*
++				 * virt_to_page() takes a (void *) pointer
++				 * so cast to a (void *) meaning it will be 64
++				 * bits on a 64 bit platform and 32 bits on a
++				 * 32 bit platform.
++				 */
++				page_array[seg] = virt_to_page((void *)(va & PAGE_MASK));
+ 				if (do_crc)
+ 					crypto_shash_update(
+ 						c_tx->mpa_crc_hd,
+-						(void *)(uintptr_t)va,
++						(void *)va,
+ 						plen);
+ 			}
+ 
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 200cf5da5e0ad..f216a86d9c817 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -923,7 +923,8 @@ static void build_completion_wait(struct iommu_cmd *cmd,
+ 	memset(cmd, 0, sizeof(*cmd));
+ 	cmd->data[0] = lower_32_bits(paddr) | CMD_COMPL_WAIT_STORE_MASK;
+ 	cmd->data[1] = upper_32_bits(paddr);
+-	cmd->data[2] = data;
++	cmd->data[2] = lower_32_bits(data);
++	cmd->data[3] = upper_32_bits(data);
+ 	CMD_SET_TYPE(cmd, CMD_COMPL_WAIT);
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c
+index 32f3facbed1a5..b3cb5d1033260 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_client.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_client.c
+@@ -178,6 +178,10 @@ void i40e_notify_client_of_netdev_close(struct i40e_vsi *vsi, bool reset)
+ 			"Cannot locate client instance close routine\n");
+ 		return;
+ 	}
++	if (!test_bit(__I40E_CLIENT_INSTANCE_OPENED, &cdev->state)) {
++		dev_dbg(&pf->pdev->dev, "Client is not open, abort close\n");
++		return;
++	}
+ 	cdev->client->ops->close(&cdev->lan_info, cdev->client, reset);
+ 	clear_bit(__I40E_CLIENT_INSTANCE_OPENED, &cdev->state);
+ 	i40e_client_release_qvlist(&cdev->lan_info);
+@@ -374,7 +378,6 @@ void i40e_client_subtask(struct i40e_pf *pf)
+ 				/* Remove failed client instance */
+ 				clear_bit(__I40E_CLIENT_INSTANCE_OPENED,
+ 					  &cdev->state);
+-				i40e_client_del_instance(pf);
+ 				return;
+ 			}
+ 		}
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 810f2bdb91645..f193709c8efc6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -3404,7 +3404,7 @@ static int ice_init_pf(struct ice_pf *pf)
+ 
+ 	pf->avail_rxqs = bitmap_zalloc(pf->max_pf_rxqs, GFP_KERNEL);
+ 	if (!pf->avail_rxqs) {
+-		devm_kfree(ice_pf_to_dev(pf), pf->avail_txqs);
++		bitmap_free(pf->avail_txqs);
+ 		pf->avail_txqs = NULL;
+ 		return -ENOMEM;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-rs.c b/drivers/net/wireless/intel/iwlegacy/4965-rs.c
+index 532e3b91777d9..150805aec4071 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-rs.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-rs.c
+@@ -2403,7 +2403,7 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ 		/* Repeat initial/next rate.
+ 		 * For legacy IL_NUMBER_TRY == 1, this loop will not execute.
+ 		 * For HT IL_HT_NUMBER_TRY == 3, this executes twice. */
+-		while (repeat_rate > 0) {
++		while (repeat_rate > 0 && idx < (LINK_QUAL_MAX_RETRY_NUM - 1)) {
+ 			if (is_legacy(tbl_type.lq_type)) {
+ 				if (ant_toggle_cnt < NUM_TRY_BEFORE_ANT_TOGGLE)
+ 					ant_toggle_cnt++;
+@@ -2422,8 +2422,6 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ 			    cpu_to_le32(new_rate);
+ 			repeat_rate--;
+ 			idx++;
+-			if (idx >= LINK_QUAL_MAX_RETRY_NUM)
+-				goto out;
+ 		}
+ 
+ 		il4965_rs_get_tbl_info_from_mcs(new_rate, lq_sta->band,
+@@ -2468,7 +2466,6 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ 		repeat_rate--;
+ 	}
+ 
+-out:
+ 	lq_cmd->agg_params.agg_frame_cnt_limit = LINK_QUAL_AGG_FRAME_LIMIT_DEF;
+ 	lq_cmd->agg_params.agg_dis_start_th = LINK_QUAL_AGG_DISABLE_START_DEF;
+ 
+diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
+index ca261e0fc9c9b..9ee9ce0493fe6 100644
+--- a/drivers/net/xen-netback/xenbus.c
++++ b/drivers/net/xen-netback/xenbus.c
+@@ -256,7 +256,6 @@ static void backend_disconnect(struct backend_info *be)
+ 		unsigned int queue_index;
+ 
+ 		xen_unregister_watchers(vif);
+-		xenbus_rm(XBT_NIL, be->dev->nodename, "hotplug-status");
+ #ifdef CONFIG_DEBUG_FS
+ 		xenvif_debugfs_delif(vif);
+ #endif /* CONFIG_DEBUG_FS */
+@@ -984,6 +983,7 @@ static int netback_remove(struct xenbus_device *dev)
+ 	struct backend_info *be = dev_get_drvdata(&dev->dev);
+ 
+ 	unregister_hotplug_status_watch(be);
++	xenbus_rm(XBT_NIL, dev->nodename, "hotplug-status");
+ 	if (be->vif) {
+ 		kobject_uevent(&dev->dev.kobj, KOBJ_OFFLINE);
+ 		backend_disconnect(be);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index fe8c27bbc3f20..57df87def8c33 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -118,7 +118,6 @@ struct nvme_tcp_queue {
+ 	struct mutex		send_mutex;
+ 	struct llist_head	req_list;
+ 	struct list_head	send_list;
+-	bool			more_requests;
+ 
+ 	/* recv state */
+ 	void			*pdu;
+@@ -314,7 +313,7 @@ static inline void nvme_tcp_send_all(struct nvme_tcp_queue *queue)
+ static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
+ {
+ 	return !list_empty(&queue->send_list) ||
+-		!llist_empty(&queue->req_list) || queue->more_requests;
++		!llist_empty(&queue->req_list);
+ }
+ 
+ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
+@@ -333,9 +332,7 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
+ 	 */
+ 	if (queue->io_cpu == raw_smp_processor_id() &&
+ 	    sync && empty && mutex_trylock(&queue->send_mutex)) {
+-		queue->more_requests = !last;
+ 		nvme_tcp_send_all(queue);
+-		queue->more_requests = false;
+ 		mutex_unlock(&queue->send_mutex);
+ 	}
+ 
+@@ -1196,7 +1193,7 @@ static void nvme_tcp_io_work(struct work_struct *w)
+ 		else if (unlikely(result < 0))
+ 			return;
+ 
+-		if (!pending)
++		if (!pending || !queue->rd_enabled)
+ 			return;
+ 
+ 	} while (!time_after(jiffies, deadline)); /* quota is exhausted */
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 9a8fa2e582d5b..bc88ff2912f56 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -730,6 +730,8 @@ static void nvmet_set_error(struct nvmet_req *req, u16 status)
+ 
+ static void __nvmet_req_complete(struct nvmet_req *req, u16 status)
+ {
++	struct nvmet_ns *ns = req->ns;
++
+ 	if (!req->sq->sqhd_disabled)
+ 		nvmet_update_sq_head(req);
+ 	req->cqe->sq_id = cpu_to_le16(req->sq->qid);
+@@ -740,9 +742,9 @@ static void __nvmet_req_complete(struct nvmet_req *req, u16 status)
+ 
+ 	trace_nvmet_req_complete(req);
+ 
+-	if (req->ns)
+-		nvmet_put_namespace(req->ns);
+ 	req->ops->queue_response(req);
++	if (ns)
++		nvmet_put_namespace(ns);
+ }
+ 
+ void nvmet_req_complete(struct nvmet_req *req, u16 status)
+diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
+index b916fab9b1618..ffd5000c23d39 100644
+--- a/drivers/parisc/ccio-dma.c
++++ b/drivers/parisc/ccio-dma.c
+@@ -1380,15 +1380,17 @@ ccio_init_resource(struct resource *res, char *name, void __iomem *ioaddr)
+ 	}
+ }
+ 
+-static void __init ccio_init_resources(struct ioc *ioc)
++static int __init ccio_init_resources(struct ioc *ioc)
+ {
+ 	struct resource *res = ioc->mmio_region;
+ 	char *name = kmalloc(14, GFP_KERNEL);
+-
++	if (unlikely(!name))
++		return -ENOMEM;
+ 	snprintf(name, 14, "GSC Bus [%d/]", ioc->hw_path);
+ 
+ 	ccio_init_resource(res, name, &ioc->ioc_regs->io_io_low);
+ 	ccio_init_resource(res + 1, name, &ioc->ioc_regs->io_io_low_hv);
++	return 0;
+ }
+ 
+ static int new_ioc_area(struct resource *res, unsigned long size,
+@@ -1543,7 +1545,10 @@ static int __init ccio_probe(struct parisc_device *dev)
+ 		return -ENOMEM;
+ 	}
+ 	ccio_ioc_init(ioc);
+-	ccio_init_resources(ioc);
++	if (ccio_init_resources(ioc)) {
++		kfree(ioc);
++		return -ENOMEM;
++	}
+ 	hppa_dma_ops = &ccio_ops;
+ 
+ 	hba = kzalloc(sizeof(*hba), GFP_KERNEL);
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 6e3f3511e7ddd..317d701487ecd 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -2596,13 +2596,18 @@ static int _regulator_do_enable(struct regulator_dev *rdev)
+  */
+ static int _regulator_handle_consumer_enable(struct regulator *regulator)
+ {
++	int ret;
+ 	struct regulator_dev *rdev = regulator->rdev;
+ 
+ 	lockdep_assert_held_once(&rdev->mutex.base);
+ 
+ 	regulator->enable_count++;
+-	if (regulator->uA_load && regulator->enable_count == 1)
+-		return drms_uA_update(rdev);
++	if (regulator->uA_load && regulator->enable_count == 1) {
++		ret = drms_uA_update(rdev);
++		if (ret)
++			regulator->enable_count--;
++		return ret;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 134e4ee5dc481..17200b453cbbb 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -6670,7 +6670,7 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
+ 	/* Allocate device driver memory */
+ 	rc = lpfc_mem_alloc(phba, SGL_ALIGN_SZ);
+ 	if (rc)
+-		return -ENOMEM;
++		goto out_destroy_workqueue;
+ 
+ 	/* IF Type 2 ports get initialized now. */
+ 	if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) >=
+@@ -7076,6 +7076,9 @@ out_free_bsmbx:
+ 	lpfc_destroy_bootstrap_mbox(phba);
+ out_free_mem:
+ 	lpfc_mem_free(phba);
++out_destroy_workqueue:
++	destroy_workqueue(phba->wq);
++	phba->wq = NULL;
+ 	return rc;
+ }
+ 
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 13022a42fd6f4..7838c7911adde 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -5198,7 +5198,6 @@ megasas_alloc_fusion_context(struct megasas_instance *instance)
+ 		if (!fusion->log_to_span) {
+ 			dev_err(&instance->pdev->dev, "Failed from %s %d\n",
+ 				__func__, __LINE__);
+-			kfree(instance->ctrl_context);
+ 			return -ENOMEM;
+ 		}
+ 	}
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 8418b59b3743b..c3a5978b0efac 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -3501,6 +3501,7 @@ static struct fw_event_work *dequeue_next_fw_event(struct MPT3SAS_ADAPTER *ioc)
+ 		fw_event = list_first_entry(&ioc->fw_event_list,
+ 				struct fw_event_work, list);
+ 		list_del_init(&fw_event->list);
++		fw_event_work_put(fw_event);
+ 	}
+ 	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+ 
+@@ -3559,7 +3560,6 @@ _scsih_fw_event_cleanup_queue(struct MPT3SAS_ADAPTER *ioc)
+ 		if (cancel_work_sync(&fw_event->work))
+ 			fw_event_work_put(fw_event);
+ 
+-		fw_event_work_put(fw_event);
+ 	}
+ 	ioc->fw_events_cleanup = 0;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index ba823e8eb902b..ecb30c2738b8b 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -6817,14 +6817,8 @@ qlt_24xx_config_rings(struct scsi_qla_host *vha)
+ 
+ 	if (ha->flags.msix_enabled) {
+ 		if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+-			if (IS_QLA2071(ha)) {
+-				/* 4 ports Baker: Enable Interrupt Handshake */
+-				icb->msix_atio = 0;
+-				icb->firmware_options_2 |= cpu_to_le32(BIT_26);
+-			} else {
+-				icb->msix_atio = cpu_to_le16(msix->entry);
+-				icb->firmware_options_2 &= cpu_to_le32(~BIT_26);
+-			}
++			icb->msix_atio = cpu_to_le16(msix->entry);
++			icb->firmware_options_2 &= cpu_to_le32(~BIT_26);
+ 			ql_dbg(ql_dbg_init, vha, 0xf072,
+ 			    "Registering ICB vector 0x%x for atio que.\n",
+ 			    msix->entry);
+diff --git a/drivers/soc/bcm/brcmstb/pm/pm-arm.c b/drivers/soc/bcm/brcmstb/pm/pm-arm.c
+index c6ec7d95bcfcc..722fd54e537cf 100644
+--- a/drivers/soc/bcm/brcmstb/pm/pm-arm.c
++++ b/drivers/soc/bcm/brcmstb/pm/pm-arm.c
+@@ -681,13 +681,14 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ 	const struct of_device_id *of_id = NULL;
+ 	struct device_node *dn;
+ 	void __iomem *base;
+-	int ret, i;
++	int ret, i, s;
+ 
+ 	/* AON ctrl registers */
+ 	base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 0, NULL);
+ 	if (IS_ERR(base)) {
+ 		pr_err("error mapping AON_CTRL\n");
+-		return PTR_ERR(base);
++		ret = PTR_ERR(base);
++		goto aon_err;
+ 	}
+ 	ctrl.aon_ctrl_base = base;
+ 
+@@ -697,8 +698,10 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ 		/* Assume standard offset */
+ 		ctrl.aon_sram = ctrl.aon_ctrl_base +
+ 				     AON_CTRL_SYSTEM_DATA_RAM_OFS;
++		s = 0;
+ 	} else {
+ 		ctrl.aon_sram = base;
++		s = 1;
+ 	}
+ 
+ 	writel_relaxed(0, ctrl.aon_sram + AON_REG_PANIC);
+@@ -708,7 +711,8 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ 				     (const void **)&ddr_phy_data);
+ 	if (IS_ERR(base)) {
+ 		pr_err("error mapping DDR PHY\n");
+-		return PTR_ERR(base);
++		ret = PTR_ERR(base);
++		goto ddr_phy_err;
+ 	}
+ 	ctrl.support_warm_boot = ddr_phy_data->supports_warm_boot;
+ 	ctrl.pll_status_offset = ddr_phy_data->pll_status_offset;
+@@ -728,17 +732,20 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ 	for_each_matching_node(dn, ddr_shimphy_dt_ids) {
+ 		i = ctrl.num_memc;
+ 		if (i >= MAX_NUM_MEMC) {
++			of_node_put(dn);
+ 			pr_warn("too many MEMCs (max %d)\n", MAX_NUM_MEMC);
+ 			break;
+ 		}
+ 
+ 		base = of_io_request_and_map(dn, 0, dn->full_name);
+ 		if (IS_ERR(base)) {
++			of_node_put(dn);
+ 			if (!ctrl.support_warm_boot)
+ 				break;
+ 
+ 			pr_err("error mapping DDR SHIMPHY %d\n", i);
+-			return PTR_ERR(base);
++			ret = PTR_ERR(base);
++			goto ddr_shimphy_err;
+ 		}
+ 		ctrl.memcs[i].ddr_shimphy_base = base;
+ 		ctrl.num_memc++;
+@@ -749,14 +756,18 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ 	for_each_matching_node(dn, brcmstb_memc_of_match) {
+ 		base = of_iomap(dn, 0);
+ 		if (!base) {
++			of_node_put(dn);
+ 			pr_err("error mapping DDR Sequencer %d\n", i);
+-			return -ENOMEM;
++			ret = -ENOMEM;
++			goto brcmstb_memc_err;
+ 		}
+ 
+ 		of_id = of_match_node(brcmstb_memc_of_match, dn);
+ 		if (!of_id) {
+ 			iounmap(base);
+-			return -EINVAL;
++			of_node_put(dn);
++			ret = -EINVAL;
++			goto brcmstb_memc_err;
+ 		}
+ 
+ 		ddr_seq_data = of_id->data;
+@@ -776,21 +787,24 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ 	dn = of_find_matching_node(NULL, sram_dt_ids);
+ 	if (!dn) {
+ 		pr_err("SRAM not found\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto brcmstb_memc_err;
+ 	}
+ 
+ 	ret = brcmstb_init_sram(dn);
+ 	of_node_put(dn);
+ 	if (ret) {
+ 		pr_err("error setting up SRAM for PM\n");
+-		return ret;
++		goto brcmstb_memc_err;
+ 	}
+ 
+ 	ctrl.pdev = pdev;
+ 
+ 	ctrl.s3_params = kmalloc(sizeof(*ctrl.s3_params), GFP_KERNEL);
+-	if (!ctrl.s3_params)
+-		return -ENOMEM;
++	if (!ctrl.s3_params) {
++		ret = -ENOMEM;
++		goto s3_params_err;
++	}
+ 	ctrl.s3_params_pa = dma_map_single(&pdev->dev, ctrl.s3_params,
+ 					   sizeof(*ctrl.s3_params),
+ 					   DMA_TO_DEVICE);
+@@ -810,7 +824,21 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ 
+ out:
+ 	kfree(ctrl.s3_params);
+-
++s3_params_err:
++	iounmap(ctrl.boot_sram);
++brcmstb_memc_err:
++	for (i--; i >= 0; i--)
++		iounmap(ctrl.memcs[i].ddr_ctrl);
++ddr_shimphy_err:
++	for (i = 0; i < ctrl.num_memc; i++)
++		iounmap(ctrl.memcs[i].ddr_shimphy_base);
++
++	iounmap(ctrl.memcs[0].ddr_phy_base);
++ddr_phy_err:
++	iounmap(ctrl.aon_ctrl_base);
++	if (s)
++		iounmap(ctrl.aon_sram);
++aon_err:
+ 	pr_warn("PM: initialization failed with code %d\n", ret);
+ 
+ 	return ret;
+diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
+index 499fccba3d74b..6fb4400333fb4 100644
+--- a/drivers/tee/tee_shm.c
++++ b/drivers/tee/tee_shm.c
+@@ -9,6 +9,7 @@
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/tee_drv.h>
++#include <linux/uaccess.h>
+ #include <linux/uio.h>
+ #include "tee_private.h"
+ 
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index cb5ed4155a8d2..c91a3004931f1 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -235,7 +235,7 @@ struct gsm_mux {
+ 	int old_c_iflag;		/* termios c_iflag value before attach */
+ 	bool constipated;		/* Asked by remote to shut up */
+ 
+-	spinlock_t tx_lock;
++	struct mutex tx_mutex;
+ 	unsigned int tx_bytes;		/* TX data outstanding */
+ #define TX_THRESH_HI		8192
+ #define TX_THRESH_LO		2048
+@@ -820,15 +820,14 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+  *
+  *	Add data to the transmit queue and try and get stuff moving
+  *	out of the mux tty if not already doing so. Take the
+- *	the gsm tx lock and dlci lock.
++ *	the gsm tx mutex and dlci lock.
+  */
+ 
+ static void gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+ {
+-	unsigned long flags;
+-	spin_lock_irqsave(&dlci->gsm->tx_lock, flags);
++	mutex_lock(&dlci->gsm->tx_mutex);
+ 	__gsm_data_queue(dlci, msg);
+-	spin_unlock_irqrestore(&dlci->gsm->tx_lock, flags);
++	mutex_unlock(&dlci->gsm->tx_mutex);
+ }
+ 
+ /**
+@@ -840,7 +839,7 @@ static void gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+  *	is data. Keep to the MRU of the mux. This path handles the usual tty
+  *	interface which is a byte stream with optional modem data.
+  *
+- *	Caller must hold the tx_lock of the mux.
++ *	Caller must hold the tx_mutex of the mux.
+  */
+ 
+ static int gsm_dlci_data_output(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+@@ -903,7 +902,7 @@ static int gsm_dlci_data_output(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+  *	is data. Keep to the MRU of the mux. This path handles framed data
+  *	queued as skbuffs to the DLCI.
+  *
+- *	Caller must hold the tx_lock of the mux.
++ *	Caller must hold the tx_mutex of the mux.
+  */
+ 
+ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
+@@ -919,7 +918,7 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
+ 	if (dlci->adaption == 4)
+ 		overhead = 1;
+ 
+-	/* dlci->skb is locked by tx_lock */
++	/* dlci->skb is locked by tx_mutex */
+ 	if (dlci->skb == NULL) {
+ 		dlci->skb = skb_dequeue_tail(&dlci->skb_list);
+ 		if (dlci->skb == NULL)
+@@ -1019,13 +1018,12 @@ static void gsm_dlci_data_sweep(struct gsm_mux *gsm)
+ 
+ static void gsm_dlci_data_kick(struct gsm_dlci *dlci)
+ {
+-	unsigned long flags;
+ 	int sweep;
+ 
+ 	if (dlci->constipated)
+ 		return;
+ 
+-	spin_lock_irqsave(&dlci->gsm->tx_lock, flags);
++	mutex_lock(&dlci->gsm->tx_mutex);
+ 	/* If we have nothing running then we need to fire up */
+ 	sweep = (dlci->gsm->tx_bytes < TX_THRESH_LO);
+ 	if (dlci->gsm->tx_bytes == 0) {
+@@ -1036,7 +1034,7 @@ static void gsm_dlci_data_kick(struct gsm_dlci *dlci)
+ 	}
+ 	if (sweep)
+ 		gsm_dlci_data_sweep(dlci->gsm);
+-	spin_unlock_irqrestore(&dlci->gsm->tx_lock, flags);
++	mutex_unlock(&dlci->gsm->tx_mutex);
+ }
+ 
+ /*
+@@ -1258,7 +1256,6 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
+ 						const u8 *data, int clen)
+ {
+ 	u8 buf[1];
+-	unsigned long flags;
+ 
+ 	switch (command) {
+ 	case CMD_CLD: {
+@@ -1280,9 +1277,9 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
+ 		gsm->constipated = false;
+ 		gsm_control_reply(gsm, CMD_FCON, NULL, 0);
+ 		/* Kick the link in case it is idling */
+-		spin_lock_irqsave(&gsm->tx_lock, flags);
++		mutex_lock(&gsm->tx_mutex);
+ 		gsm_data_kick(gsm, NULL);
+-		spin_unlock_irqrestore(&gsm->tx_lock, flags);
++		mutex_unlock(&gsm->tx_mutex);
+ 		break;
+ 	case CMD_FCOFF:
+ 		/* Modem wants us to STFU */
+@@ -2200,11 +2197,6 @@ static int gsm_activate_mux(struct gsm_mux *gsm)
+ {
+ 	struct gsm_dlci *dlci;
+ 
+-	timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0);
+-	init_waitqueue_head(&gsm->event);
+-	spin_lock_init(&gsm->control_lock);
+-	spin_lock_init(&gsm->tx_lock);
+-
+ 	if (gsm->encoding == 0)
+ 		gsm->receive = gsm0_receive;
+ 	else
+@@ -2233,6 +2225,7 @@ static void gsm_free_mux(struct gsm_mux *gsm)
+ 			break;
+ 		}
+ 	}
++	mutex_destroy(&gsm->tx_mutex);
+ 	mutex_destroy(&gsm->mutex);
+ 	kfree(gsm->txframe);
+ 	kfree(gsm->buf);
+@@ -2304,8 +2297,12 @@ static struct gsm_mux *gsm_alloc_mux(void)
+ 	}
+ 	spin_lock_init(&gsm->lock);
+ 	mutex_init(&gsm->mutex);
++	mutex_init(&gsm->tx_mutex);
+ 	kref_init(&gsm->ref);
+ 	INIT_LIST_HEAD(&gsm->tx_list);
++	timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0);
++	init_waitqueue_head(&gsm->event);
++	spin_lock_init(&gsm->control_lock);
+ 
+ 	gsm->t1 = T1;
+ 	gsm->t2 = T2;
+@@ -2330,6 +2327,7 @@ static struct gsm_mux *gsm_alloc_mux(void)
+ 	}
+ 	spin_unlock(&gsm_mux_lock);
+ 	if (i == MAX_MUX) {
++		mutex_destroy(&gsm->tx_mutex);
+ 		mutex_destroy(&gsm->mutex);
+ 		kfree(gsm->txframe);
+ 		kfree(gsm->buf);
+@@ -2654,16 +2652,15 @@ static int gsmld_open(struct tty_struct *tty)
+ static void gsmld_write_wakeup(struct tty_struct *tty)
+ {
+ 	struct gsm_mux *gsm = tty->disc_data;
+-	unsigned long flags;
+ 
+ 	/* Queue poll */
+ 	clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+-	spin_lock_irqsave(&gsm->tx_lock, flags);
++	mutex_lock(&gsm->tx_mutex);
+ 	gsm_data_kick(gsm, NULL);
+ 	if (gsm->tx_bytes < TX_THRESH_LO) {
+ 		gsm_dlci_data_sweep(gsm);
+ 	}
+-	spin_unlock_irqrestore(&gsm->tx_lock, flags);
++	mutex_unlock(&gsm->tx_mutex);
+ }
+ 
+ /**
+@@ -2706,7 +2703,6 @@ static ssize_t gsmld_write(struct tty_struct *tty, struct file *file,
+ 			   const unsigned char *buf, size_t nr)
+ {
+ 	struct gsm_mux *gsm = tty->disc_data;
+-	unsigned long flags;
+ 	int space;
+ 	int ret;
+ 
+@@ -2714,13 +2710,13 @@ static ssize_t gsmld_write(struct tty_struct *tty, struct file *file,
+ 		return -ENODEV;
+ 
+ 	ret = -ENOBUFS;
+-	spin_lock_irqsave(&gsm->tx_lock, flags);
++	mutex_lock(&gsm->tx_mutex);
+ 	space = tty_write_room(tty);
+ 	if (space >= nr)
+ 		ret = tty->ops->write(tty, buf, nr);
+ 	else
+ 		set_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+-	spin_unlock_irqrestore(&gsm->tx_lock, flags);
++	mutex_unlock(&gsm->tx_mutex);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/video/fbdev/chipsfb.c b/drivers/video/fbdev/chipsfb.c
+index 393894af26f84..2b00a9d554fc0 100644
+--- a/drivers/video/fbdev/chipsfb.c
++++ b/drivers/video/fbdev/chipsfb.c
+@@ -430,6 +430,7 @@ static int chipsfb_pci_init(struct pci_dev *dp, const struct pci_device_id *ent)
+  err_release_fb:
+ 	framebuffer_release(p);
+  err_disable:
++	pci_disable_device(dp);
+  err_out:
+ 	return rc;
+ }
+diff --git a/fs/afs/flock.c b/fs/afs/flock.c
+index cb3054c7843ea..466ad609f2057 100644
+--- a/fs/afs/flock.c
++++ b/fs/afs/flock.c
+@@ -76,7 +76,7 @@ void afs_lock_op_done(struct afs_call *call)
+ 	if (call->error == 0) {
+ 		spin_lock(&vnode->lock);
+ 		trace_afs_flock_ev(vnode, NULL, afs_flock_timestamp, 0);
+-		vnode->locked_at = call->reply_time;
++		vnode->locked_at = call->issue_time;
+ 		afs_schedule_lock_extension(vnode);
+ 		spin_unlock(&vnode->lock);
+ 	}
+diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
+index 1d95ed9dd86e6..0048a32cb040e 100644
+--- a/fs/afs/fsclient.c
++++ b/fs/afs/fsclient.c
+@@ -130,7 +130,7 @@ bad:
+ 
+ static time64_t xdr_decode_expiry(struct afs_call *call, u32 expiry)
+ {
+-	return ktime_divns(call->reply_time, NSEC_PER_SEC) + expiry;
++	return ktime_divns(call->issue_time, NSEC_PER_SEC) + expiry;
+ }
+ 
+ static void xdr_decode_AFSCallBack(const __be32 **_bp,
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index dc08a3d9b3a8b..637cbe549397c 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -135,7 +135,6 @@ struct afs_call {
+ 	bool			need_attention;	/* T if RxRPC poked us */
+ 	bool			async;		/* T if asynchronous */
+ 	bool			upgrade;	/* T to request service upgrade */
+-	bool			have_reply_time; /* T if have got reply_time */
+ 	bool			intr;		/* T if interruptible */
+ 	bool			unmarshalling_error; /* T if an unmarshalling error occurred */
+ 	u16			service_id;	/* Actual service ID (after upgrade) */
+@@ -149,7 +148,7 @@ struct afs_call {
+ 		} __attribute__((packed));
+ 		__be64		tmp64;
+ 	};
+-	ktime_t			reply_time;	/* Time of first reply packet */
++	ktime_t			issue_time;	/* Time of issue of operation */
+ };
+ 
+ struct afs_call_type {
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index efe0fb3ad8bdc..535d28b44bca3 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -429,6 +429,7 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp)
+ 	if (call->max_lifespan)
+ 		rxrpc_kernel_set_max_life(call->net->socket, rxcall,
+ 					  call->max_lifespan);
++	call->issue_time = ktime_get_real();
+ 
+ 	/* send the request */
+ 	iov[0].iov_base	= call->request;
+@@ -533,12 +534,6 @@ static void afs_deliver_to_call(struct afs_call *call)
+ 			return;
+ 		}
+ 
+-		if (!call->have_reply_time &&
+-		    rxrpc_kernel_get_reply_time(call->net->socket,
+-						call->rxcall,
+-						&call->reply_time))
+-			call->have_reply_time = true;
+-
+ 		ret = call->type->deliver(call);
+ 		state = READ_ONCE(call->state);
+ 		if (ret == 0 && call->unmarshalling_error)
+diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
+index bd787e71a657f..5b2ef5ffd716f 100644
+--- a/fs/afs/yfsclient.c
++++ b/fs/afs/yfsclient.c
+@@ -239,8 +239,7 @@ static void xdr_decode_YFSCallBack(const __be32 **_bp,
+ 	struct afs_callback *cb = &scb->callback;
+ 	ktime_t cb_expiry;
+ 
+-	cb_expiry = call->reply_time;
+-	cb_expiry = ktime_add(cb_expiry, xdr_to_u64(x->expiration_time) * 100);
++	cb_expiry = ktime_add(call->issue_time, xdr_to_u64(x->expiration_time) * 100);
+ 	cb->expires_at	= ktime_divns(cb_expiry, NSEC_PER_SEC);
+ 	scb->have_cb	= true;
+ 	*_bp += xdr_size(x);
+diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
+index 2fa3ba354cc96..001c26daacbaa 100644
+--- a/fs/cifs/smb2file.c
++++ b/fs/cifs/smb2file.c
+@@ -74,7 +74,6 @@ smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms,
+ 		nr_ioctl_req.Reserved = 0;
+ 		rc = SMB2_ioctl(xid, oparms->tcon, fid->persistent_fid,
+ 			fid->volatile_fid, FSCTL_LMR_REQUEST_RESILIENCY,
+-			true /* is_fsctl */,
+ 			(char *)&nr_ioctl_req, sizeof(nr_ioctl_req),
+ 			CIFSMaxBufSize, NULL, NULL /* no return info */);
+ 		if (rc == -EOPNOTSUPP) {
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index b6d72e3c5ebad..11efd5289ec43 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -587,7 +587,7 @@ SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon)
+ 	struct cifs_ses *ses = tcon->ses;
+ 
+ 	rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
+-			FSCTL_QUERY_NETWORK_INTERFACE_INFO, true /* is_fsctl */,
++			FSCTL_QUERY_NETWORK_INTERFACE_INFO,
+ 			NULL /* no data input */, 0 /* no data input */,
+ 			CIFSMaxBufSize, (char **)&out_buf, &ret_data_len);
+ 	if (rc == -EOPNOTSUPP) {
+@@ -1470,9 +1470,8 @@ SMB2_request_res_key(const unsigned int xid, struct cifs_tcon *tcon,
+ 	struct resume_key_req *res_key;
+ 
+ 	rc = SMB2_ioctl(xid, tcon, persistent_fid, volatile_fid,
+-			FSCTL_SRV_REQUEST_RESUME_KEY, true /* is_fsctl */,
+-			NULL, 0 /* no input */, CIFSMaxBufSize,
+-			(char **)&res_key, &ret_data_len);
++			FSCTL_SRV_REQUEST_RESUME_KEY, NULL, 0 /* no input */,
++			CIFSMaxBufSize, (char **)&res_key, &ret_data_len);
+ 
+ 	if (rc) {
+ 		cifs_tcon_dbg(VFS, "refcpy ioctl error %d getting resume key\n", rc);
+@@ -1611,7 +1610,7 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 		rqst[1].rq_nvec = SMB2_IOCTL_IOV_SIZE;
+ 
+ 		rc = SMB2_ioctl_init(tcon, server, &rqst[1], COMPOUND_FID, COMPOUND_FID,
+-				     qi.info_type, true, buffer, qi.output_buffer_length,
++				     qi.info_type, buffer, qi.output_buffer_length,
+ 				     CIFSMaxBufSize - MAX_SMB2_CREATE_RESPONSE_SIZE -
+ 				     MAX_SMB2_CLOSE_RESPONSE_SIZE);
+ 		free_req1_func = SMB2_ioctl_free;
+@@ -1787,9 +1786,8 @@ smb2_copychunk_range(const unsigned int xid,
+ 		retbuf = NULL;
+ 		rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid,
+ 			trgtfile->fid.volatile_fid, FSCTL_SRV_COPYCHUNK_WRITE,
+-			true /* is_fsctl */, (char *)pcchunk,
+-			sizeof(struct copychunk_ioctl),	CIFSMaxBufSize,
+-			(char **)&retbuf, &ret_data_len);
++			(char *)pcchunk, sizeof(struct copychunk_ioctl),
++			CIFSMaxBufSize, (char **)&retbuf, &ret_data_len);
+ 		if (rc == 0) {
+ 			if (ret_data_len !=
+ 					sizeof(struct copychunk_ioctl_rsp)) {
+@@ -1949,7 +1947,6 @@ static bool smb2_set_sparse(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ 	rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ 			cfile->fid.volatile_fid, FSCTL_SET_SPARSE,
+-			true /* is_fctl */,
+ 			&setsparse, 1, CIFSMaxBufSize, NULL, NULL);
+ 	if (rc) {
+ 		tcon->broken_sparse_sup = true;
+@@ -2032,7 +2029,6 @@ smb2_duplicate_extents(const unsigned int xid,
+ 	rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid,
+ 			trgtfile->fid.volatile_fid,
+ 			FSCTL_DUPLICATE_EXTENTS_TO_FILE,
+-			true /* is_fsctl */,
+ 			(char *)&dup_ext_buf,
+ 			sizeof(struct duplicate_extents_to_file),
+ 			CIFSMaxBufSize, NULL,
+@@ -2067,7 +2063,6 @@ smb3_set_integrity(const unsigned int xid, struct cifs_tcon *tcon,
+ 	return SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ 			cfile->fid.volatile_fid,
+ 			FSCTL_SET_INTEGRITY_INFORMATION,
+-			true /* is_fsctl */,
+ 			(char *)&integr_info,
+ 			sizeof(struct fsctl_set_integrity_information_req),
+ 			CIFSMaxBufSize, NULL,
+@@ -2120,7 +2115,6 @@ smb3_enum_snapshots(const unsigned int xid, struct cifs_tcon *tcon,
+ 	rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ 			cfile->fid.volatile_fid,
+ 			FSCTL_SRV_ENUMERATE_SNAPSHOTS,
+-			true /* is_fsctl */,
+ 			NULL, 0 /* no input data */, max_response_size,
+ 			(char **)&retbuf,
+ 			&ret_data_len);
+@@ -2762,7 +2756,6 @@ smb2_get_dfs_refer(const unsigned int xid, struct cifs_ses *ses,
+ 	do {
+ 		rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
+ 				FSCTL_DFS_GET_REFERRALS,
+-				true /* is_fsctl */,
+ 				(char *)dfs_req, dfs_req_size, CIFSMaxBufSize,
+ 				(char **)&dfs_rsp, &dfs_rsp_size);
+ 	} while (rc == -EAGAIN);
+@@ -2964,8 +2957,7 @@ smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ 	rc = SMB2_ioctl_init(tcon, server,
+ 			     &rqst[1], fid.persistent_fid,
+-			     fid.volatile_fid, FSCTL_GET_REPARSE_POINT,
+-			     true /* is_fctl */, NULL, 0,
++			     fid.volatile_fid, FSCTL_GET_REPARSE_POINT, NULL, 0,
+ 			     CIFSMaxBufSize -
+ 			     MAX_SMB2_CREATE_RESPONSE_SIZE -
+ 			     MAX_SMB2_CLOSE_RESPONSE_SIZE);
+@@ -3145,8 +3137,7 @@ smb2_query_reparse_tag(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ 	rc = SMB2_ioctl_init(tcon, server,
+ 			     &rqst[1], COMPOUND_FID,
+-			     COMPOUND_FID, FSCTL_GET_REPARSE_POINT,
+-			     true /* is_fctl */, NULL, 0,
++			     COMPOUND_FID, FSCTL_GET_REPARSE_POINT, NULL, 0,
+ 			     CIFSMaxBufSize -
+ 			     MAX_SMB2_CREATE_RESPONSE_SIZE -
+ 			     MAX_SMB2_CLOSE_RESPONSE_SIZE);
+@@ -3409,7 +3400,7 @@ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
+ 	fsctl_buf.BeyondFinalZero = cpu_to_le64(offset + len);
+ 
+ 	rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+-			cfile->fid.volatile_fid, FSCTL_SET_ZERO_DATA, true,
++			cfile->fid.volatile_fid, FSCTL_SET_ZERO_DATA,
+ 			(char *)&fsctl_buf,
+ 			sizeof(struct file_zero_data_information),
+ 			0, NULL, NULL);
+@@ -3439,7 +3430,7 @@ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
+ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+ 			    loff_t offset, loff_t len)
+ {
+-	struct inode *inode;
++	struct inode *inode = file_inode(file);
+ 	struct cifsFileInfo *cfile = file->private_data;
+ 	struct file_zero_data_information fsctl_buf;
+ 	long rc;
+@@ -3448,14 +3439,12 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+ 
+ 	xid = get_xid();
+ 
+-	inode = d_inode(cfile->dentry);
+-
++	inode_lock(inode);
+ 	/* Need to make file sparse, if not already, before freeing range. */
+ 	/* Consider adding equivalent for compressed since it could also work */
+ 	if (!smb2_set_sparse(xid, tcon, cfile, inode, set_sparse)) {
+ 		rc = -EOPNOTSUPP;
+-		free_xid(xid);
+-		return rc;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -3471,9 +3460,11 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+ 
+ 	rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ 			cfile->fid.volatile_fid, FSCTL_SET_ZERO_DATA,
+-			true /* is_fctl */, (char *)&fsctl_buf,
++			(char *)&fsctl_buf,
+ 			sizeof(struct file_zero_data_information),
+ 			CIFSMaxBufSize, NULL, NULL);
++out:
++	inode_unlock(inode);
+ 	free_xid(xid);
+ 	return rc;
+ }
+@@ -3530,7 +3521,7 @@ static int smb3_simple_fallocate_range(unsigned int xid,
+ 	in_data.length = cpu_to_le64(len);
+ 	rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ 			cfile->fid.volatile_fid,
+-			FSCTL_QUERY_ALLOCATED_RANGES, true,
++			FSCTL_QUERY_ALLOCATED_RANGES,
+ 			(char *)&in_data, sizeof(in_data),
+ 			1024 * sizeof(struct file_allocated_range_buffer),
+ 			(char **)&out_data, &out_data_len);
+@@ -3771,7 +3762,7 @@ static loff_t smb3_llseek(struct file *file, struct cifs_tcon *tcon, loff_t offs
+ 
+ 	rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ 			cfile->fid.volatile_fid,
+-			FSCTL_QUERY_ALLOCATED_RANGES, true,
++			FSCTL_QUERY_ALLOCATED_RANGES,
+ 			(char *)&in_data, sizeof(in_data),
+ 			sizeof(struct file_allocated_range_buffer),
+ 			(char **)&out_data, &out_data_len);
+@@ -3831,7 +3822,7 @@ static int smb3_fiemap(struct cifs_tcon *tcon,
+ 
+ 	rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ 			cfile->fid.volatile_fid,
+-			FSCTL_QUERY_ALLOCATED_RANGES, true,
++			FSCTL_QUERY_ALLOCATED_RANGES,
+ 			(char *)&in_data, sizeof(in_data),
+ 			1024 * sizeof(struct file_allocated_range_buffer),
+ 			(char **)&out_data, &out_data_len);
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 24dd711fa9b95..7ee8abd1f79be 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1081,7 +1081,7 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ 	}
+ 
+ 	rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
+-		FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */,
++		FSCTL_VALIDATE_NEGOTIATE_INFO,
+ 		(char *)pneg_inbuf, inbuflen, CIFSMaxBufSize,
+ 		(char **)&pneg_rsp, &rsplen);
+ 	if (rc == -EOPNOTSUPP) {
+@@ -2922,7 +2922,7 @@ int
+ SMB2_ioctl_init(struct cifs_tcon *tcon, struct TCP_Server_Info *server,
+ 		struct smb_rqst *rqst,
+ 		u64 persistent_fid, u64 volatile_fid, u32 opcode,
+-		bool is_fsctl, char *in_data, u32 indatalen,
++		char *in_data, u32 indatalen,
+ 		__u32 max_response_size)
+ {
+ 	struct smb2_ioctl_req *req;
+@@ -2997,10 +2997,8 @@ SMB2_ioctl_init(struct cifs_tcon *tcon, struct TCP_Server_Info *server,
+ 	req->sync_hdr.CreditCharge =
+ 		cpu_to_le16(DIV_ROUND_UP(max(indatalen, max_response_size),
+ 					 SMB2_MAX_BUFFER_SIZE));
+-	if (is_fsctl)
+-		req->Flags = cpu_to_le32(SMB2_0_IOCTL_IS_FSCTL);
+-	else
+-		req->Flags = 0;
++	/* always an FSCTL (for now) */
++	req->Flags = cpu_to_le32(SMB2_0_IOCTL_IS_FSCTL);
+ 
+ 	/* validate negotiate request must be signed - see MS-SMB2 3.2.5.5 */
+ 	if (opcode == FSCTL_VALIDATE_NEGOTIATE_INFO)
+@@ -3027,9 +3025,9 @@ SMB2_ioctl_free(struct smb_rqst *rqst)
+  */
+ int
+ SMB2_ioctl(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid,
+-	   u64 volatile_fid, u32 opcode, bool is_fsctl,
+-	   char *in_data, u32 indatalen, u32 max_out_data_len,
+-	   char **out_data, u32 *plen /* returned data len */)
++	   u64 volatile_fid, u32 opcode, char *in_data, u32 indatalen,
++	   u32 max_out_data_len, char **out_data,
++	   u32 *plen /* returned data len */)
+ {
+ 	struct smb_rqst rqst;
+ 	struct smb2_ioctl_rsp *rsp = NULL;
+@@ -3071,7 +3069,7 @@ SMB2_ioctl(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid,
+ 
+ 	rc = SMB2_ioctl_init(tcon, server,
+ 			     &rqst, persistent_fid, volatile_fid, opcode,
+-			     is_fsctl, in_data, indatalen, max_out_data_len);
++			     in_data, indatalen, max_out_data_len);
+ 	if (rc)
+ 		goto ioctl_exit;
+ 
+@@ -3153,7 +3151,7 @@ SMB2_set_compression(const unsigned int xid, struct cifs_tcon *tcon,
+ 			cpu_to_le16(COMPRESSION_FORMAT_DEFAULT);
+ 
+ 	rc = SMB2_ioctl(xid, tcon, persistent_fid, volatile_fid,
+-			FSCTL_SET_COMPRESSION, true /* is_fsctl */,
++			FSCTL_SET_COMPRESSION,
+ 			(char *)&fsctl_input /* data input */,
+ 			2 /* in data len */, CIFSMaxBufSize /* max out data */,
+ 			&ret_data /* out data */, NULL);
+diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h
+index 4eb0ca84355a6..ed2b4fb012a41 100644
+--- a/fs/cifs/smb2proto.h
++++ b/fs/cifs/smb2proto.h
+@@ -155,13 +155,13 @@ extern int SMB2_open_init(struct cifs_tcon *tcon,
+ extern void SMB2_open_free(struct smb_rqst *rqst);
+ extern int SMB2_ioctl(const unsigned int xid, struct cifs_tcon *tcon,
+ 		     u64 persistent_fid, u64 volatile_fid, u32 opcode,
+-		     bool is_fsctl, char *in_data, u32 indatalen, u32 maxoutlen,
++		     char *in_data, u32 indatalen, u32 maxoutlen,
+ 		     char **out_data, u32 *plen /* returned data len */);
+ extern int SMB2_ioctl_init(struct cifs_tcon *tcon,
+ 			   struct TCP_Server_Info *server,
+ 			   struct smb_rqst *rqst,
+ 			   u64 persistent_fid, u64 volatile_fid, u32 opcode,
+-			   bool is_fsctl, char *in_data, u32 indatalen,
++			   char *in_data, u32 indatalen,
+ 			   __u32 max_response_size);
+ extern void SMB2_ioctl_free(struct smb_rqst *rqst);
+ extern int SMB2_change_notify(const unsigned int xid, struct cifs_tcon *tcon,
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 848e0aaa8da5d..f47f0a7d2c3b9 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -730,6 +730,28 @@ void debugfs_remove(struct dentry *dentry)
+ }
+ EXPORT_SYMBOL_GPL(debugfs_remove);
+ 
++/**
++ * debugfs_lookup_and_remove - lookup a directory or file and recursively remove it
++ * @name: a pointer to a string containing the name of the item to look up.
++ * @parent: a pointer to the parent dentry of the item.
++ *
++ * This is the equlivant of doing something like
++ * debugfs_remove(debugfs_lookup(..)) but with the proper reference counting
++ * handled for the directory being looked up.
++ */
++void debugfs_lookup_and_remove(const char *name, struct dentry *parent)
++{
++	struct dentry *dentry;
++
++	dentry = debugfs_lookup(name, parent);
++	if (!dentry)
++		return;
++
++	debugfs_remove(dentry);
++	dput(dentry);
++}
++EXPORT_SYMBOL_GPL(debugfs_lookup_and_remove);
++
+ /**
+  * debugfs_rename - rename a file/directory in the debugfs filesystem
+  * @old_dir: a pointer to the parent dentry for the renamed object. This
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index c852bb5ff2121..a4ae1fcd2ab1e 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -1014,6 +1014,10 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
+ 	iov_iter_kvec(&iter, WRITE, vec, vlen, *cnt);
+ 	since = READ_ONCE(file->f_wb_err);
+ 	if (flags & RWF_SYNC) {
++		if (verf)
++			nfsd_copy_boot_verifier(verf,
++					net_generic(SVC_NET(rqstp),
++					nfsd_net_id));
+ 		host_err = vfs_iter_write(file, &iter, &pos, flags);
+ 		if (host_err < 0)
+ 			nfsd_reset_boot_verifier(net_generic(SVC_NET(rqstp),
+diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
+index 20a2ff1c07a1b..e93e3faa82296 100644
+--- a/include/linux/buffer_head.h
++++ b/include/linux/buffer_head.h
+@@ -136,6 +136,17 @@ BUFFER_FNS(Defer_Completion, defer_completion)
+ 
+ static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
+ {
++	/*
++	 * If somebody else already set this uptodate, they will
++	 * have done the memory barrier, and a reader will thus
++	 * see *some* valid buffer state.
++	 *
++	 * Any other serialization (with IO errors or whatever that
++	 * might clear the bit) has to come from other state (eg BH_Lock).
++	 */
++	if (test_bit(BH_Uptodate, &bh->b_state))
++		return;
++
+ 	/*
+ 	 * make it consistent with folio_mark_uptodate
+ 	 * pairs with smp_load_acquire in buffer_uptodate
+diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
+index d6c4cc9ecc77c..2357109a8901b 100644
+--- a/include/linux/debugfs.h
++++ b/include/linux/debugfs.h
+@@ -91,6 +91,8 @@ struct dentry *debugfs_create_automount(const char *name,
+ void debugfs_remove(struct dentry *dentry);
+ #define debugfs_remove_recursive debugfs_remove
+ 
++void debugfs_lookup_and_remove(const char *name, struct dentry *parent);
++
+ const struct file_operations *debugfs_real_fops(const struct file *filp);
+ 
+ int debugfs_file_get(struct dentry *dentry);
+@@ -220,6 +222,10 @@ static inline void debugfs_remove(struct dentry *dentry)
+ static inline void debugfs_remove_recursive(struct dentry *dentry)
+ { }
+ 
++static inline void debugfs_lookup_and_remove(const char *name,
++					     struct dentry *parent)
++{ }
++
+ const struct file_operations *debugfs_real_fops(const struct file *filp);
+ 
+ static inline int debugfs_file_get(struct dentry *dentry)
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 5046c99deba86..684c16849eff3 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2304,6 +2304,47 @@ int task_cgroup_path(struct task_struct *task, char *buf, size_t buflen)
+ }
+ EXPORT_SYMBOL_GPL(task_cgroup_path);
+ 
++/**
++ * cgroup_attach_lock - Lock for ->attach()
++ * @lock_threadgroup: whether to down_write cgroup_threadgroup_rwsem
++ *
++ * cgroup migration sometimes needs to stabilize threadgroups against forks and
++ * exits by write-locking cgroup_threadgroup_rwsem. However, some ->attach()
++ * implementations (e.g. cpuset), also need to disable CPU hotplug.
++ * Unfortunately, letting ->attach() operations acquire cpus_read_lock() can
++ * lead to deadlocks.
++ *
++ * Bringing up a CPU may involve creating and destroying tasks which requires
++ * read-locking threadgroup_rwsem, so threadgroup_rwsem nests inside
++ * cpus_read_lock(). If we call an ->attach() which acquires the cpus lock while
++ * write-locking threadgroup_rwsem, the locking order is reversed and we end up
++ * waiting for an on-going CPU hotplug operation which in turn is waiting for
++ * the threadgroup_rwsem to be released to create new tasks. For more details:
++ *
++ *   http://lkml.kernel.org/r/20220711174629.uehfmqegcwn2lqzu@wubuntu
++ *
++ * Resolve the situation by always acquiring cpus_read_lock() before optionally
++ * write-locking cgroup_threadgroup_rwsem. This allows ->attach() to assume that
++ * CPU hotplug is disabled on entry.
++ */
++static void cgroup_attach_lock(bool lock_threadgroup)
++{
++	cpus_read_lock();
++	if (lock_threadgroup)
++		percpu_down_write(&cgroup_threadgroup_rwsem);
++}
++
++/**
++ * cgroup_attach_unlock - Undo cgroup_attach_lock()
++ * @lock_threadgroup: whether to up_write cgroup_threadgroup_rwsem
++ */
++static void cgroup_attach_unlock(bool lock_threadgroup)
++{
++	if (lock_threadgroup)
++		percpu_up_write(&cgroup_threadgroup_rwsem);
++	cpus_read_unlock();
++}
++
+ /**
+  * cgroup_migrate_add_task - add a migration target task to a migration context
+  * @task: target task
+@@ -2780,8 +2821,7 @@ int cgroup_attach_task(struct cgroup *dst_cgrp, struct task_struct *leader,
+ }
+ 
+ struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
+-					     bool *locked)
+-	__acquires(&cgroup_threadgroup_rwsem)
++					     bool *threadgroup_locked)
+ {
+ 	struct task_struct *tsk;
+ 	pid_t pid;
+@@ -2798,12 +2838,8 @@ struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
+ 	 * Therefore, we can skip the global lock.
+ 	 */
+ 	lockdep_assert_held(&cgroup_mutex);
+-	if (pid || threadgroup) {
+-		percpu_down_write(&cgroup_threadgroup_rwsem);
+-		*locked = true;
+-	} else {
+-		*locked = false;
+-	}
++	*threadgroup_locked = pid || threadgroup;
++	cgroup_attach_lock(*threadgroup_locked);
+ 
+ 	rcu_read_lock();
+ 	if (pid) {
+@@ -2834,17 +2870,14 @@ struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
+ 	goto out_unlock_rcu;
+ 
+ out_unlock_threadgroup:
+-	if (*locked) {
+-		percpu_up_write(&cgroup_threadgroup_rwsem);
+-		*locked = false;
+-	}
++	cgroup_attach_unlock(*threadgroup_locked);
++	*threadgroup_locked = false;
+ out_unlock_rcu:
+ 	rcu_read_unlock();
+ 	return tsk;
+ }
+ 
+-void cgroup_procs_write_finish(struct task_struct *task, bool locked)
+-	__releases(&cgroup_threadgroup_rwsem)
++void cgroup_procs_write_finish(struct task_struct *task, bool threadgroup_locked)
+ {
+ 	struct cgroup_subsys *ss;
+ 	int ssid;
+@@ -2852,8 +2885,8 @@ void cgroup_procs_write_finish(struct task_struct *task, bool locked)
+ 	/* release reference from cgroup_procs_write_start() */
+ 	put_task_struct(task);
+ 
+-	if (locked)
+-		percpu_up_write(&cgroup_threadgroup_rwsem);
++	cgroup_attach_unlock(threadgroup_locked);
++
+ 	for_each_subsys(ss, ssid)
+ 		if (ss->post_attach)
+ 			ss->post_attach();
+@@ -2908,12 +2941,11 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
+ 	struct cgroup_subsys_state *d_css;
+ 	struct cgroup *dsct;
+ 	struct css_set *src_cset;
++	bool has_tasks;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&cgroup_mutex);
+ 
+-	percpu_down_write(&cgroup_threadgroup_rwsem);
+-
+ 	/* look up all csses currently attached to @cgrp's subtree */
+ 	spin_lock_irq(&css_set_lock);
+ 	cgroup_for_each_live_descendant_pre(dsct, d_css, cgrp) {
+@@ -2924,6 +2956,15 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
+ 	}
+ 	spin_unlock_irq(&css_set_lock);
+ 
++	/*
++	 * We need to write-lock threadgroup_rwsem while migrating tasks.
++	 * However, if there are no source csets for @cgrp, changing its
++	 * controllers isn't gonna produce any task migrations and the
++	 * write-locking can be skipped safely.
++	 */
++	has_tasks = !list_empty(&mgctx.preloaded_src_csets);
++	cgroup_attach_lock(has_tasks);
++
+ 	/* NULL dst indicates self on default hierarchy */
+ 	ret = cgroup_migrate_prepare_dst(&mgctx);
+ 	if (ret)
+@@ -2943,7 +2984,7 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
+ 	ret = cgroup_migrate_execute(&mgctx);
+ out_finish:
+ 	cgroup_migrate_finish(&mgctx);
+-	percpu_up_write(&cgroup_threadgroup_rwsem);
++	cgroup_attach_unlock(has_tasks);
+ 	return ret;
+ }
+ 
+@@ -4799,13 +4840,13 @@ static ssize_t cgroup_procs_write(struct kernfs_open_file *of,
+ 	struct task_struct *task;
+ 	const struct cred *saved_cred;
+ 	ssize_t ret;
+-	bool locked;
++	bool threadgroup_locked;
+ 
+ 	dst_cgrp = cgroup_kn_lock_live(of->kn, false);
+ 	if (!dst_cgrp)
+ 		return -ENODEV;
+ 
+-	task = cgroup_procs_write_start(buf, true, &locked);
++	task = cgroup_procs_write_start(buf, true, &threadgroup_locked);
+ 	ret = PTR_ERR_OR_ZERO(task);
+ 	if (ret)
+ 		goto out_unlock;
+@@ -4831,7 +4872,7 @@ static ssize_t cgroup_procs_write(struct kernfs_open_file *of,
+ 	ret = cgroup_attach_task(dst_cgrp, task, true);
+ 
+ out_finish:
+-	cgroup_procs_write_finish(task, locked);
++	cgroup_procs_write_finish(task, threadgroup_locked);
+ out_unlock:
+ 	cgroup_kn_unlock(of->kn);
+ 
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index c51863b63f93a..b7830f1f1f3a5 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -2212,7 +2212,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
+ 	cgroup_taskset_first(tset, &css);
+ 	cs = css_cs(css);
+ 
+-	cpus_read_lock();
++	lockdep_assert_cpus_held();	/* see cgroup_attach_lock() */
+ 	percpu_down_write(&cpuset_rwsem);
+ 
+ 	/* prepare for attach */
+@@ -2268,7 +2268,6 @@ static void cpuset_attach(struct cgroup_taskset *tset)
+ 		wake_up(&cpuset_attach_wq);
+ 
+ 	percpu_up_write(&cpuset_rwsem);
+-	cpus_read_unlock();
+ }
+ 
+ /* The various types of files and directories in a cpuset file system */
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 274587a57717f..4a9831d01f0ea 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -452,7 +452,10 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
+ 	}
+ }
+ 
+-#define slot_addr(start, idx)	((start) + ((idx) << IO_TLB_SHIFT))
++static inline phys_addr_t slot_addr(phys_addr_t start, phys_addr_t idx)
++{
++	return start + (idx << IO_TLB_SHIFT);
++}
+ 
+ /*
+  * Return the offset into a iotlb slot required to keep the device happy.
+diff --git a/kernel/fork.c b/kernel/fork.c
+index a78c0b02edd55..b877480c901f0 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1127,6 +1127,7 @@ void mmput_async(struct mm_struct *mm)
+ 		schedule_work(&mm->async_put_work);
+ 	}
+ }
++EXPORT_SYMBOL_GPL(mmput_async);
+ #endif
+ 
+ /**
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index a93407da0ae10..dac82a0e7c0b0 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1642,6 +1642,7 @@ static int check_kprobe_address_safe(struct kprobe *p,
+ 	/* Ensure it is not in reserved area nor out of text */
+ 	if (!(core_kernel_text((unsigned long) p->addr) ||
+ 	    is_module_text_address((unsigned long) p->addr)) ||
++	    in_gate_area_no_mm((unsigned long) p->addr) ||
+ 	    within_kprobe_blacklist((unsigned long) p->addr) ||
+ 	    jump_label_text_reserved(p->addr, p->addr) ||
+ 	    static_call_text_reserved(p->addr, p->addr) ||
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 5bfae0686199e..4801751cb6b6d 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -1123,7 +1123,7 @@ EXPORT_SYMBOL(kmemleak_no_scan);
+ void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
+ 			       gfp_t gfp)
+ {
+-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
++	if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
+ 		kmemleak_alloc(__va(phys), size, min_count, gfp);
+ }
+ EXPORT_SYMBOL(kmemleak_alloc_phys);
+@@ -1137,7 +1137,7 @@ EXPORT_SYMBOL(kmemleak_alloc_phys);
+  */
+ void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
+ {
+-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
++	if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
+ 		kmemleak_free_part(__va(phys), size);
+ }
+ EXPORT_SYMBOL(kmemleak_free_part_phys);
+@@ -1149,7 +1149,7 @@ EXPORT_SYMBOL(kmemleak_free_part_phys);
+  */
+ void __ref kmemleak_not_leak_phys(phys_addr_t phys)
+ {
+-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
++	if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
+ 		kmemleak_not_leak(__va(phys));
+ }
+ EXPORT_SYMBOL(kmemleak_not_leak_phys);
+@@ -1161,7 +1161,7 @@ EXPORT_SYMBOL(kmemleak_not_leak_phys);
+  */
+ void __ref kmemleak_ignore_phys(phys_addr_t phys)
+ {
+-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
++	if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
+ 		kmemleak_ignore(__va(phys));
+ }
+ EXPORT_SYMBOL(kmemleak_ignore_phys);
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 10a2c7bca7199..a718204c4bfdd 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -384,6 +384,7 @@ static int br_nf_pre_routing_finish(struct net *net, struct sock *sk, struct sk_
+ 				/* - Bridged-and-DNAT'ed traffic doesn't
+ 				 *   require ip_forwarding. */
+ 				if (rt->dst.dev == dev) {
++					skb_dst_drop(skb);
+ 					skb_dst_set(skb, &rt->dst);
+ 					goto bridged_dnat;
+ 				}
+@@ -413,6 +414,7 @@ bridged_dnat:
+ 			kfree_skb(skb);
+ 			return 0;
+ 		}
++		skb_dst_drop(skb);
+ 		skb_dst_set_noref(skb, &rt->dst);
+ 	}
+ 
+diff --git a/net/bridge/br_netfilter_ipv6.c b/net/bridge/br_netfilter_ipv6.c
+index e4e0c836c3f51..6b07f30675bb0 100644
+--- a/net/bridge/br_netfilter_ipv6.c
++++ b/net/bridge/br_netfilter_ipv6.c
+@@ -197,6 +197,7 @@ static int br_nf_pre_routing_finish_ipv6(struct net *net, struct sock *sk, struc
+ 			kfree_skb(skb);
+ 			return 0;
+ 		}
++		skb_dst_drop(skb);
+ 		skb_dst_set_noref(skb, &rt->dst);
+ 	}
+ 
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 635cabcf8794f..7bdcdad58dc86 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3986,9 +3986,8 @@ normal:
+ 				SKB_GSO_CB(nskb)->csum_start =
+ 					skb_headroom(nskb) + doffset;
+ 			} else {
+-				skb_copy_bits(head_skb, offset,
+-					      skb_put(nskb, len),
+-					      len);
++				if (skb_copy_bits(head_skb, offset, skb_put(nskb, len), len))
++					goto err;
+ 			}
+ 			continue;
+ 		}
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index e62500d6fe0d0..4ecd85b1e806c 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2496,6 +2496,21 @@ static inline bool tcp_may_undo(const struct tcp_sock *tp)
+ 	return tp->undo_marker && (!tp->undo_retrans || tcp_packet_delayed(tp));
+ }
+ 
++static bool tcp_is_non_sack_preventing_reopen(struct sock *sk)
++{
++	struct tcp_sock *tp = tcp_sk(sk);
++
++	if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
++		/* Hold old state until something *above* high_seq
++		 * is ACKed. For Reno it is MUST to prevent false
++		 * fast retransmits (RFC2582). SACK TCP is safe. */
++		if (!tcp_any_retrans_done(sk))
++			tp->retrans_stamp = 0;
++		return true;
++	}
++	return false;
++}
++
+ /* People celebrate: "We love our President!" */
+ static bool tcp_try_undo_recovery(struct sock *sk)
+ {
+@@ -2518,14 +2533,8 @@ static bool tcp_try_undo_recovery(struct sock *sk)
+ 	} else if (tp->rack.reo_wnd_persist) {
+ 		tp->rack.reo_wnd_persist--;
+ 	}
+-	if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
+-		/* Hold old state until something *above* high_seq
+-		 * is ACKed. For Reno it is MUST to prevent false
+-		 * fast retransmits (RFC2582). SACK TCP is safe. */
+-		if (!tcp_any_retrans_done(sk))
+-			tp->retrans_stamp = 0;
++	if (tcp_is_non_sack_preventing_reopen(sk))
+ 		return true;
+-	}
+ 	tcp_set_ca_state(sk, TCP_CA_Open);
+ 	tp->is_sack_reneg = 0;
+ 	return false;
+@@ -2561,6 +2570,8 @@ static bool tcp_try_undo_loss(struct sock *sk, bool frto_undo)
+ 			NET_INC_STATS(sock_net(sk),
+ 					LINUX_MIB_TCPSPURIOUSRTOS);
+ 		inet_csk(sk)->icsk_retransmits = 0;
++		if (tcp_is_non_sack_preventing_reopen(sk))
++			return true;
+ 		if (frto_undo || tcp_is_sack(tp)) {
+ 			tcp_set_ca_state(sk, TCP_CA_Open);
+ 			tp->is_sack_reneg = 0;
+diff --git a/net/ipv6/seg6.c b/net/ipv6/seg6.c
+index d2f8138e5a73a..2278c0234c497 100644
+--- a/net/ipv6/seg6.c
++++ b/net/ipv6/seg6.c
+@@ -135,6 +135,11 @@ static int seg6_genl_sethmac(struct sk_buff *skb, struct genl_info *info)
+ 		goto out_unlock;
+ 	}
+ 
++	if (slen > nla_len(info->attrs[SEG6_ATTR_SECRET])) {
++		err = -EINVAL;
++		goto out_unlock;
++	}
++
+ 	if (hinfo) {
+ 		err = seg6_hmac_info_del(net, hmackeyid);
+ 		if (err)
+diff --git a/net/netfilter/nf_conntrack_irc.c b/net/netfilter/nf_conntrack_irc.c
+index e40988a2f22fb..26245419ef4a9 100644
+--- a/net/netfilter/nf_conntrack_irc.c
++++ b/net/netfilter/nf_conntrack_irc.c
+@@ -185,8 +185,9 @@ static int help(struct sk_buff *skb, unsigned int protoff,
+ 
+ 			/* dcc_ip can be the internal OR external (NAT'ed) IP */
+ 			tuple = &ct->tuplehash[dir].tuple;
+-			if (tuple->src.u3.ip != dcc_ip &&
+-			    tuple->dst.u3.ip != dcc_ip) {
++			if ((tuple->src.u3.ip != dcc_ip &&
++			     ct->tuplehash[!dir].tuple.dst.u3.ip != dcc_ip) ||
++			    dcc_port == 0) {
+ 				net_warn_ratelimited("Forged DCC command from %pI4: %pI4:%u\n",
+ 						     &tuple->src.u3.ip,
+ 						     &dcc_ip, dcc_port);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 1b039476e4d6a..b8e7e1c5c08a8 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1971,8 +1971,10 @@ static int nft_basechain_init(struct nft_base_chain *basechain, u8 family,
+ 	chain->flags |= NFT_CHAIN_BASE | flags;
+ 	basechain->policy = NF_ACCEPT;
+ 	if (chain->flags & NFT_CHAIN_HW_OFFLOAD &&
+-	    !nft_chain_offload_support(basechain))
++	    !nft_chain_offload_support(basechain)) {
++		list_splice_init(&basechain->hook_list, &hook->list);
+ 		return -EOPNOTSUPP;
++	}
+ 
+ 	flow_block_init(&basechain->flow_block);
+ 
+diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
+index f114dc2af5cf3..5345e8eefd33c 100644
+--- a/net/rxrpc/rxkad.c
++++ b/net/rxrpc/rxkad.c
+@@ -451,7 +451,7 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb,
+ 	 * directly into the target buffer.
+ 	 */
+ 	sg = _sg;
+-	nsg = skb_shinfo(skb)->nr_frags;
++	nsg = skb_shinfo(skb)->nr_frags + 1;
+ 	if (nsg <= 4) {
+ 		nsg = 4;
+ 	} else {
+diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
+index da047a37a3bf3..b2724057629f6 100644
+--- a/net/sched/sch_sfb.c
++++ b/net/sched/sch_sfb.c
+@@ -135,15 +135,15 @@ static void increment_one_qlen(u32 sfbhash, u32 slot, struct sfb_sched_data *q)
+ 	}
+ }
+ 
+-static void increment_qlen(const struct sk_buff *skb, struct sfb_sched_data *q)
++static void increment_qlen(const struct sfb_skb_cb *cb, struct sfb_sched_data *q)
+ {
+ 	u32 sfbhash;
+ 
+-	sfbhash = sfb_hash(skb, 0);
++	sfbhash = cb->hashes[0];
+ 	if (sfbhash)
+ 		increment_one_qlen(sfbhash, 0, q);
+ 
+-	sfbhash = sfb_hash(skb, 1);
++	sfbhash = cb->hashes[1];
+ 	if (sfbhash)
+ 		increment_one_qlen(sfbhash, 1, q);
+ }
+@@ -281,8 +281,10 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ {
+ 
+ 	struct sfb_sched_data *q = qdisc_priv(sch);
++	unsigned int len = qdisc_pkt_len(skb);
+ 	struct Qdisc *child = q->qdisc;
+ 	struct tcf_proto *fl;
++	struct sfb_skb_cb cb;
+ 	int i;
+ 	u32 p_min = ~0;
+ 	u32 minqlen = ~0;
+@@ -399,11 +401,12 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 	}
+ 
+ enqueue:
++	memcpy(&cb, sfb_skb_cb(skb), sizeof(cb));
+ 	ret = qdisc_enqueue(skb, child, to_free);
+ 	if (likely(ret == NET_XMIT_SUCCESS)) {
+-		qdisc_qstats_backlog_inc(sch, skb);
++		sch->qstats.backlog += len;
+ 		sch->q.qlen++;
+-		increment_qlen(skb, q);
++		increment_qlen(&cb, q);
+ 	} else if (net_xmit_drop_count(ret)) {
+ 		q->stats.childdrop++;
+ 		qdisc_qstats_drop(sch);
+diff --git a/net/tipc/monitor.c b/net/tipc/monitor.c
+index a37190da5a504..1d90f39129ca0 100644
+--- a/net/tipc/monitor.c
++++ b/net/tipc/monitor.c
+@@ -130,7 +130,7 @@ static void map_set(u64 *up_map, int i, unsigned int v)
+ 
+ static int map_get(u64 up_map, int i)
+ {
+-	return (up_map & (1 << i)) >> i;
++	return (up_map & (1ULL << i)) >> i;
+ }
+ 
+ static struct tipc_peer *peer_prev(struct tipc_peer *peer)
+diff --git a/sound/drivers/aloop.c b/sound/drivers/aloop.c
+index 2c5f7e905ab8f..fb45a32d99cd9 100644
+--- a/sound/drivers/aloop.c
++++ b/sound/drivers/aloop.c
+@@ -606,17 +606,18 @@ static unsigned int loopback_jiffies_timer_pos_update
+ 			cable->streams[SNDRV_PCM_STREAM_PLAYBACK];
+ 	struct loopback_pcm *dpcm_capt =
+ 			cable->streams[SNDRV_PCM_STREAM_CAPTURE];
+-	unsigned long delta_play = 0, delta_capt = 0;
++	unsigned long delta_play = 0, delta_capt = 0, cur_jiffies;
+ 	unsigned int running, count1, count2;
+ 
++	cur_jiffies = jiffies;
+ 	running = cable->running ^ cable->pause;
+ 	if (running & (1 << SNDRV_PCM_STREAM_PLAYBACK)) {
+-		delta_play = jiffies - dpcm_play->last_jiffies;
++		delta_play = cur_jiffies - dpcm_play->last_jiffies;
+ 		dpcm_play->last_jiffies += delta_play;
+ 	}
+ 
+ 	if (running & (1 << SNDRV_PCM_STREAM_CAPTURE)) {
+-		delta_capt = jiffies - dpcm_capt->last_jiffies;
++		delta_capt = cur_jiffies - dpcm_capt->last_jiffies;
+ 		dpcm_capt->last_jiffies += delta_capt;
+ 	}
+ 
+diff --git a/sound/pci/emu10k1/emupcm.c b/sound/pci/emu10k1/emupcm.c
+index b2ddabb994381..8d2c101d66a23 100644
+--- a/sound/pci/emu10k1/emupcm.c
++++ b/sound/pci/emu10k1/emupcm.c
+@@ -123,7 +123,7 @@ static int snd_emu10k1_pcm_channel_alloc(struct snd_emu10k1_pcm * epcm, int voic
+ 	epcm->voices[0]->epcm = epcm;
+ 	if (voices > 1) {
+ 		for (i = 1; i < voices; i++) {
+-			epcm->voices[i] = &epcm->emu->voices[epcm->voices[0]->number + i];
++			epcm->voices[i] = &epcm->emu->voices[(epcm->voices[0]->number + i) % NUM_G];
+ 			epcm->voices[i]->epcm = epcm;
+ 		}
+ 	}
+diff --git a/sound/soc/atmel/mchp-spdiftx.c b/sound/soc/atmel/mchp-spdiftx.c
+index 3bd350afb7434..0d2e3fa21519c 100644
+--- a/sound/soc/atmel/mchp-spdiftx.c
++++ b/sound/soc/atmel/mchp-spdiftx.c
+@@ -196,8 +196,7 @@ struct mchp_spdiftx_dev {
+ 	struct clk				*pclk;
+ 	struct clk				*gclk;
+ 	unsigned int				fmt;
+-	const struct mchp_i2s_caps		*caps;
+-	int					gclk_enabled:1;
++	unsigned int				gclk_enabled:1;
+ };
+ 
+ static inline int mchp_spdiftx_is_running(struct mchp_spdiftx_dev *dev)
+@@ -766,8 +765,6 @@ static const struct of_device_id mchp_spdiftx_dt_ids[] = {
+ MODULE_DEVICE_TABLE(of, mchp_spdiftx_dt_ids);
+ static int mchp_spdiftx_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np = pdev->dev.of_node;
+-	const struct of_device_id *match;
+ 	struct mchp_spdiftx_dev *dev;
+ 	struct resource *mem;
+ 	struct regmap *regmap;
+@@ -781,11 +778,6 @@ static int mchp_spdiftx_probe(struct platform_device *pdev)
+ 	if (!dev)
+ 		return -ENOMEM;
+ 
+-	/* Get hardware capabilities. */
+-	match = of_match_node(mchp_spdiftx_dt_ids, np);
+-	if (match)
+-		dev->caps = match->data;
+-
+ 	/* Map I/O registers. */
+ 	base = devm_platform_get_and_ioremap_resource(pdev, 0, &mem);
+ 	if (IS_ERR(base))
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index a3e06a71cf356..6b172db58a310 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -667,7 +667,7 @@ static bool check_delayed_register_option(struct snd_usb_audio *chip, int iface)
+ 		if (delayed_register[i] &&
+ 		    sscanf(delayed_register[i], "%x:%x", &id, &inum) == 2 &&
+ 		    id == chip->usb_id)
+-			return inum != iface;
++			return iface < inum;
+ 	}
+ 
+ 	return false;
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 6333a2ecb848a..41f5d8242478f 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1911,7 +1911,7 @@ bool snd_usb_registration_quirk(struct snd_usb_audio *chip, int iface)
+ 
+ 	for (q = registration_quirks; q->usb_id; q++)
+ 		if (chip->usb_id == q->usb_id)
+-			return iface != q->interface;
++			return iface < q->interface;
+ 
+ 	/* Register as normal */
+ 	return false;
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index 2f6d39c2ba7c8..c4f4585f9b851 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -496,6 +496,10 @@ static int __snd_usb_add_audio_stream(struct snd_usb_audio *chip,
+ 			return 0;
+ 		}
+ 	}
++
++	if (chip->card->registered)
++		chip->need_delayed_register = true;
++
+ 	/* look for an empty stream */
+ 	list_for_each_entry(as, &chip->pcm_list, list) {
+ 		if (as->fmt_type != fp->fmt_type)
+@@ -503,9 +507,6 @@ static int __snd_usb_add_audio_stream(struct snd_usb_audio *chip,
+ 		subs = &as->substream[stream];
+ 		if (subs->ep_num)
+ 			continue;
+-		if (snd_device_get_state(chip->card, as->pcm) !=
+-		    SNDRV_DEV_BUILD)
+-			chip->need_delayed_register = true;
+ 		err = snd_pcm_new_stream(as->pcm, stream, 1);
+ 		if (err < 0)
+ 			return err;
+@@ -1106,7 +1107,7 @@ static int __snd_usb_parse_audio_interface(struct snd_usb_audio *chip,
+ 	 * Dallas DS4201 workaround: It presents 5 altsettings, but the last
+ 	 * one misses syncpipe, and does not produce any sound.
+ 	 */
+-	if (chip->usb_id == USB_ID(0x04fa, 0x4201))
++	if (chip->usb_id == USB_ID(0x04fa, 0x4201) && num >= 4)
+ 		num = 4;
+ 
+ 	for (i = 0; i < num; i++) {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-09-20 12:01 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-09-20 12:01 UTC (permalink / raw
  To: gentoo-commits

commit:     534fd45571e70d0b43f914362658570a9cd45bfc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 20 12:01:49 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Sep 20 12:01:49 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=534fd455

Linux patch 5.10.144

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1143_linux-5.10.144.patch | 1010 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1014 insertions(+)

diff --git a/0000_README b/0000_README
index 32d72e53..7aedc075 100644
--- a/0000_README
+++ b/0000_README
@@ -615,6 +615,10 @@ Patch:  1142_linux-5.10.143.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.143
 
+Patch:  1143_linux-5.10.144.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.144
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1143_linux-5.10.144.patch b/1143_linux-5.10.144.patch
new file mode 100644
index 00000000..f1021dcf
--- /dev/null
+++ b/1143_linux-5.10.144.patch
@@ -0,0 +1,1010 @@
+diff --git a/Documentation/input/joydev/joystick.rst b/Documentation/input/joydev/joystick.rst
+index 9746fd76cc581..f38c330c028e5 100644
+--- a/Documentation/input/joydev/joystick.rst
++++ b/Documentation/input/joydev/joystick.rst
+@@ -517,6 +517,7 @@ All I-Force devices are supported by the iforce module. This includes:
+ * AVB Mag Turbo Force
+ * AVB Top Shot Pegasus
+ * AVB Top Shot Force Feedback Racing Wheel
++* Boeder Force Feedback Wheel
+ * Logitech WingMan Force
+ * Logitech WingMan Force Wheel
+ * Guillemot Race Leader Force Feedback
+diff --git a/Makefile b/Makefile
+index 60b2018c26dba..21aa9b04164d1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 143
++SUBLEVEL = 144
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx28-evk.dts b/arch/arm/boot/dts/imx28-evk.dts
+index 7e2b0f198dfad..1053b7c584d81 100644
+--- a/arch/arm/boot/dts/imx28-evk.dts
++++ b/arch/arm/boot/dts/imx28-evk.dts
+@@ -129,7 +129,7 @@
+ 				pinctrl-0 = <&spi2_pins_a>;
+ 				status = "okay";
+ 
+-				flash: m25p80@0 {
++				flash: flash@0 {
+ 					#address-cells = <1>;
+ 					#size-cells = <1>;
+ 					compatible = "sst,sst25vf016b", "jedec,spi-nor";
+diff --git a/arch/arm/boot/dts/imx28-m28evk.dts b/arch/arm/boot/dts/imx28-m28evk.dts
+index f3bddc5ada4b8..13acdc7916b9b 100644
+--- a/arch/arm/boot/dts/imx28-m28evk.dts
++++ b/arch/arm/boot/dts/imx28-m28evk.dts
+@@ -33,7 +33,7 @@
+ 				pinctrl-0 = <&spi2_pins_a>;
+ 				status = "okay";
+ 
+-				flash: m25p80@0 {
++				flash: flash@0 {
+ 					#address-cells = <1>;
+ 					#size-cells = <1>;
+ 					compatible = "m25p80", "jedec,spi-nor";
+diff --git a/arch/arm/boot/dts/imx28-sps1.dts b/arch/arm/boot/dts/imx28-sps1.dts
+index 43be7a6a769bc..90928db0df701 100644
+--- a/arch/arm/boot/dts/imx28-sps1.dts
++++ b/arch/arm/boot/dts/imx28-sps1.dts
+@@ -51,7 +51,7 @@
+ 				pinctrl-0 = <&spi2_pins_a>;
+ 				status = "okay";
+ 
+-				flash: m25p80@0 {
++				flash: flash@0 {
+ 					#address-cells = <1>;
+ 					#size-cells = <1>;
+ 					compatible = "everspin,mr25h256", "mr25h256";
+diff --git a/arch/arm/boot/dts/imx6dl-rex-basic.dts b/arch/arm/boot/dts/imx6dl-rex-basic.dts
+index 0f1616bfa9a80..b72f8ea1e6f6c 100644
+--- a/arch/arm/boot/dts/imx6dl-rex-basic.dts
++++ b/arch/arm/boot/dts/imx6dl-rex-basic.dts
+@@ -19,7 +19,7 @@
+ };
+ 
+ &ecspi3 {
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		compatible = "sst,sst25vf016b", "jedec,spi-nor";
+ 		spi-max-frequency = <20000000>;
+ 		reg = <0>;
+diff --git a/arch/arm/boot/dts/imx6q-ba16.dtsi b/arch/arm/boot/dts/imx6q-ba16.dtsi
+index e4578ed3371ef..133991ca8c633 100644
+--- a/arch/arm/boot/dts/imx6q-ba16.dtsi
++++ b/arch/arm/boot/dts/imx6q-ba16.dtsi
+@@ -139,7 +139,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi1>;
+ 	status = "okay";
+ 
+-	flash: n25q032@0 {
++	flash: flash@0 {
+ 		compatible = "jedec,spi-nor";
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+diff --git a/arch/arm/boot/dts/imx6q-bx50v3.dtsi b/arch/arm/boot/dts/imx6q-bx50v3.dtsi
+index 2a98cc657595f..66be04299cbf8 100644
+--- a/arch/arm/boot/dts/imx6q-bx50v3.dtsi
++++ b/arch/arm/boot/dts/imx6q-bx50v3.dtsi
+@@ -160,7 +160,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi5>;
+ 	status = "okay";
+ 
+-	m25_eeprom: m25p80@0 {
++	m25_eeprom: flash@0 {
+ 		compatible = "atmel,at25";
+ 		spi-max-frequency = <10000000>;
+ 		size = <0x8000>;
+diff --git a/arch/arm/boot/dts/imx6q-cm-fx6.dts b/arch/arm/boot/dts/imx6q-cm-fx6.dts
+index bfb530f29d9de..1ad41c944b4b9 100644
+--- a/arch/arm/boot/dts/imx6q-cm-fx6.dts
++++ b/arch/arm/boot/dts/imx6q-cm-fx6.dts
+@@ -260,7 +260,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi1>;
+ 	status = "okay";
+ 
+-	m25p80@0 {
++	flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "st,m25p", "jedec,spi-nor";
+diff --git a/arch/arm/boot/dts/imx6q-dmo-edmqmx6.dts b/arch/arm/boot/dts/imx6q-dmo-edmqmx6.dts
+index fa2307d8ce861..4dee1b22d5c17 100644
+--- a/arch/arm/boot/dts/imx6q-dmo-edmqmx6.dts
++++ b/arch/arm/boot/dts/imx6q-dmo-edmqmx6.dts
+@@ -102,7 +102,7 @@
+ 	cs-gpios = <&gpio1 12 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		compatible = "m25p80", "jedec,spi-nor";
+ 		spi-max-frequency = <40000000>;
+ 		reg = <0>;
+diff --git a/arch/arm/boot/dts/imx6q-dms-ba16.dts b/arch/arm/boot/dts/imx6q-dms-ba16.dts
+index 48fb47e715f6d..137db38f0d27b 100644
+--- a/arch/arm/boot/dts/imx6q-dms-ba16.dts
++++ b/arch/arm/boot/dts/imx6q-dms-ba16.dts
+@@ -47,7 +47,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi5>;
+ 	status = "okay";
+ 
+-	m25_eeprom: m25p80@0 {
++	m25_eeprom: flash@0 {
+ 		compatible = "atmel,at25256B", "atmel,at25";
+ 		spi-max-frequency = <20000000>;
+ 		size = <0x8000>;
+diff --git a/arch/arm/boot/dts/imx6q-gw5400-a.dts b/arch/arm/boot/dts/imx6q-gw5400-a.dts
+index 4cde45d5c90c8..e894faba571f9 100644
+--- a/arch/arm/boot/dts/imx6q-gw5400-a.dts
++++ b/arch/arm/boot/dts/imx6q-gw5400-a.dts
+@@ -137,7 +137,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi1>;
+ 	status = "okay";
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		compatible = "sst,w25q256", "jedec,spi-nor";
+ 		spi-max-frequency = <30000000>;
+ 		reg = <0>;
+diff --git a/arch/arm/boot/dts/imx6q-marsboard.dts b/arch/arm/boot/dts/imx6q-marsboard.dts
+index 05ee283882290..cc18010023942 100644
+--- a/arch/arm/boot/dts/imx6q-marsboard.dts
++++ b/arch/arm/boot/dts/imx6q-marsboard.dts
+@@ -100,7 +100,7 @@
+ 	cs-gpios = <&gpio2 30 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ 
+-	m25p80@0 {
++	flash@0 {
+ 		compatible = "microchip,sst25vf016b";
+ 		spi-max-frequency = <20000000>;
+ 		reg = <0>;
+diff --git a/arch/arm/boot/dts/imx6q-rex-pro.dts b/arch/arm/boot/dts/imx6q-rex-pro.dts
+index 1767e1a3cd53a..271f4b2d9b9f0 100644
+--- a/arch/arm/boot/dts/imx6q-rex-pro.dts
++++ b/arch/arm/boot/dts/imx6q-rex-pro.dts
+@@ -19,7 +19,7 @@
+ };
+ 
+ &ecspi3 {
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		compatible = "sst,sst25vf032b", "jedec,spi-nor";
+ 		spi-max-frequency = <20000000>;
+ 		reg = <0>;
+diff --git a/arch/arm/boot/dts/imx6qdl-aristainetos.dtsi b/arch/arm/boot/dts/imx6qdl-aristainetos.dtsi
+index e21f6ac864e54..baa197c90060e 100644
+--- a/arch/arm/boot/dts/imx6qdl-aristainetos.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-aristainetos.dtsi
+@@ -96,7 +96,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi4>;
+ 	status = "okay";
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "micron,n25q128a11", "jedec,spi-nor";
+diff --git a/arch/arm/boot/dts/imx6qdl-aristainetos2.dtsi b/arch/arm/boot/dts/imx6qdl-aristainetos2.dtsi
+index ead7ba27e1053..ff8cb47fb9fdb 100644
+--- a/arch/arm/boot/dts/imx6qdl-aristainetos2.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-aristainetos2.dtsi
+@@ -131,7 +131,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi4>;
+ 	status = "okay";
+ 
+-	flash: m25p80@1 {
++	flash: flash@1 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "micron,n25q128a11", "jedec,spi-nor";
+diff --git a/arch/arm/boot/dts/imx6qdl-dfi-fs700-m60.dtsi b/arch/arm/boot/dts/imx6qdl-dfi-fs700-m60.dtsi
+index 648f5fcb72e65..2c1d6f28e6950 100644
+--- a/arch/arm/boot/dts/imx6qdl-dfi-fs700-m60.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-dfi-fs700-m60.dtsi
+@@ -35,7 +35,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi3>;
+ 	status = "okay";
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "sst,sst25vf040b", "jedec,spi-nor";
+diff --git a/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi b/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
+index e9a4115124eb0..37d94aa45a8b7 100644
+--- a/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
+@@ -248,8 +248,8 @@
+ 	status = "okay";
+ 
+ 	/* default boot source: workaround #1 for errata ERR006282 */
+-	smarc_flash: spi-flash@0 {
+-		compatible = "winbond,w25q16dw", "jedec,spi-nor";
++	smarc_flash: flash@0 {
++		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <20000000>;
+ 	};
+diff --git a/arch/arm/boot/dts/imx6qdl-nit6xlite.dtsi b/arch/arm/boot/dts/imx6qdl-nit6xlite.dtsi
+index d526f01a2c520..b7e74d859a962 100644
+--- a/arch/arm/boot/dts/imx6qdl-nit6xlite.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-nit6xlite.dtsi
+@@ -179,7 +179,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi1>;
+ 	status = "okay";
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		compatible = "microchip,sst25vf016b";
+ 		spi-max-frequency = <20000000>;
+ 		reg = <0>;
+diff --git a/arch/arm/boot/dts/imx6qdl-nitrogen6_max.dtsi b/arch/arm/boot/dts/imx6qdl-nitrogen6_max.dtsi
+index a0917823c244f..a88323ac6c696 100644
+--- a/arch/arm/boot/dts/imx6qdl-nitrogen6_max.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-nitrogen6_max.dtsi
+@@ -321,7 +321,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi1>;
+ 	status = "okay";
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		compatible = "microchip,sst25vf016b";
+ 		spi-max-frequency = <20000000>;
+ 		reg = <0>;
+diff --git a/arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi b/arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi
+index 92d09a3ebe0ee..ee7e2371f94bd 100644
+--- a/arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi
+@@ -252,7 +252,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi1>;
+ 	status = "okay";
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		compatible = "microchip,sst25vf016b";
+ 		spi-max-frequency = <20000000>;
+ 		reg = <0>;
+diff --git a/arch/arm/boot/dts/imx6qdl-nitrogen6x.dtsi b/arch/arm/boot/dts/imx6qdl-nitrogen6x.dtsi
+index 1243677b5f977..5adeb7aed2204 100644
+--- a/arch/arm/boot/dts/imx6qdl-nitrogen6x.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-nitrogen6x.dtsi
+@@ -237,7 +237,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi1>;
+ 	status = "okay";
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		compatible = "sst,sst25vf016b", "jedec,spi-nor";
+ 		spi-max-frequency = <20000000>;
+ 		reg = <0>;
+diff --git a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
+index afe477f329846..17535bf12516d 100644
+--- a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
+@@ -272,7 +272,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi1 &pinctrl_ecspi1_cs>;
+ 	status = "disabled"; /* pin conflict with WEIM NOR */
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "st,m25p32", "jedec,spi-nor";
+diff --git a/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi b/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi
+index fdc3aa9d544d3..0aa1a0a28de0c 100644
+--- a/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi
+@@ -313,7 +313,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi1>;
+ 	status = "okay";
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		compatible = "sst,sst25vf016b", "jedec,spi-nor";
+ 		spi-max-frequency = <20000000>;
+ 		reg = <0>;
+diff --git a/arch/arm/boot/dts/imx6qdl-sabresd.dtsi b/arch/arm/boot/dts/imx6qdl-sabresd.dtsi
+index f824c9abd11a3..758c62fb9cac1 100644
+--- a/arch/arm/boot/dts/imx6qdl-sabresd.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-sabresd.dtsi
+@@ -194,7 +194,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi1>;
+ 	status = "okay";
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "st,m25p32", "jedec,spi-nor";
+diff --git a/arch/arm/boot/dts/imx6sl-evk.dts b/arch/arm/boot/dts/imx6sl-evk.dts
+index 25f6f2fb1555e..f16c830f1e918 100644
+--- a/arch/arm/boot/dts/imx6sl-evk.dts
++++ b/arch/arm/boot/dts/imx6sl-evk.dts
+@@ -137,7 +137,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi1>;
+ 	status = "okay";
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "st,m25p32", "jedec,spi-nor";
+diff --git a/arch/arm/boot/dts/imx6sx-nitrogen6sx.dts b/arch/arm/boot/dts/imx6sx-nitrogen6sx.dts
+index 66af78e83b701..a2c79bcf9a11c 100644
+--- a/arch/arm/boot/dts/imx6sx-nitrogen6sx.dts
++++ b/arch/arm/boot/dts/imx6sx-nitrogen6sx.dts
+@@ -107,7 +107,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi1>;
+ 	status = "okay";
+ 
+-	flash: m25p80@0 {
++	flash: flash@0 {
+ 		compatible = "microchip,sst25vf016b";
+ 		spi-max-frequency = <20000000>;
+ 		reg = <0>;
+diff --git a/arch/arm/boot/dts/imx6sx-sdb-reva.dts b/arch/arm/boot/dts/imx6sx-sdb-reva.dts
+index dce5dcf96c255..7dda42553f4bc 100644
+--- a/arch/arm/boot/dts/imx6sx-sdb-reva.dts
++++ b/arch/arm/boot/dts/imx6sx-sdb-reva.dts
+@@ -123,7 +123,7 @@
+ 	pinctrl-0 = <&pinctrl_qspi2>;
+ 	status = "okay";
+ 
+-	flash0: s25fl128s@0 {
++	flash0: flash@0 {
+ 		reg = <0>;
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+@@ -133,7 +133,7 @@
+ 		spi-tx-bus-width = <4>;
+ 	};
+ 
+-	flash1: s25fl128s@2 {
++	flash1: flash@2 {
+ 		reg = <2>;
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+diff --git a/arch/arm/boot/dts/imx6sx-sdb.dts b/arch/arm/boot/dts/imx6sx-sdb.dts
+index 5a63ca6157229..1b808563a536a 100644
+--- a/arch/arm/boot/dts/imx6sx-sdb.dts
++++ b/arch/arm/boot/dts/imx6sx-sdb.dts
+@@ -108,7 +108,7 @@
+ 	pinctrl-0 = <&pinctrl_qspi2>;
+ 	status = "okay";
+ 
+-	flash0: n25q256a@0 {
++	flash0: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "micron,n25q256a", "jedec,spi-nor";
+@@ -118,7 +118,7 @@
+ 		reg = <0>;
+ 	};
+ 
+-	flash1: n25q256a@2 {
++	flash1: flash@2 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "micron,n25q256a", "jedec,spi-nor";
+diff --git a/arch/arm/boot/dts/imx6ul-14x14-evk.dtsi b/arch/arm/boot/dts/imx6ul-14x14-evk.dtsi
+index 64c2d1e9f7fce..71d3c7e05e08f 100644
+--- a/arch/arm/boot/dts/imx6ul-14x14-evk.dtsi
++++ b/arch/arm/boot/dts/imx6ul-14x14-evk.dtsi
+@@ -239,7 +239,7 @@
+ 	pinctrl-0 = <&pinctrl_qspi>;
+ 	status = "okay";
+ 
+-	flash0: n25q256a@0 {
++	flash0: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "micron,n25q256a", "jedec,spi-nor";
+diff --git a/arch/arm/boot/dts/imx6ul-kontron-n6310-som.dtsi b/arch/arm/boot/dts/imx6ul-kontron-n6310-som.dtsi
+index 47d3ce5d255fa..acd936540d898 100644
+--- a/arch/arm/boot/dts/imx6ul-kontron-n6310-som.dtsi
++++ b/arch/arm/boot/dts/imx6ul-kontron-n6310-som.dtsi
+@@ -19,7 +19,7 @@
+ };
+ 
+ &qspi {
+-	spi-flash@0 {
++	flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "spi-nand";
+diff --git a/arch/arm/boot/dts/imx6ul-kontron-n6311-som.dtsi b/arch/arm/boot/dts/imx6ul-kontron-n6311-som.dtsi
+index a095a7654ac65..29ed38dce5802 100644
+--- a/arch/arm/boot/dts/imx6ul-kontron-n6311-som.dtsi
++++ b/arch/arm/boot/dts/imx6ul-kontron-n6311-som.dtsi
+@@ -18,7 +18,7 @@
+ };
+ 
+ &qspi {
+-	spi-flash@0 {
++	flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "spi-nand";
+diff --git a/arch/arm/boot/dts/imx6ul-kontron-n6x1x-som-common.dtsi b/arch/arm/boot/dts/imx6ul-kontron-n6x1x-som-common.dtsi
+index 2a449a3c1ae27..09a83dbdf6510 100644
+--- a/arch/arm/boot/dts/imx6ul-kontron-n6x1x-som-common.dtsi
++++ b/arch/arm/boot/dts/imx6ul-kontron-n6x1x-som-common.dtsi
+@@ -19,7 +19,7 @@
+ 	pinctrl-0 = <&pinctrl_ecspi2>;
+ 	status = "okay";
+ 
+-	spi-flash@0 {
++	flash@0 {
+ 		compatible = "mxicy,mx25v8035f", "jedec,spi-nor";
+ 		spi-max-frequency = <50000000>;
+ 		reg = <0>;
+diff --git a/arch/arm/boot/dts/imx6ull-kontron-n6411-som.dtsi b/arch/arm/boot/dts/imx6ull-kontron-n6411-som.dtsi
+index b7e984284e1ad..d000606c07049 100644
+--- a/arch/arm/boot/dts/imx6ull-kontron-n6411-som.dtsi
++++ b/arch/arm/boot/dts/imx6ull-kontron-n6411-som.dtsi
+@@ -18,7 +18,7 @@
+ };
+ 
+ &qspi {
+-	spi-flash@0 {
++	flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "spi-nand";
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 9a8633a6506ca..d096b5a1dbebe 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -322,12 +322,12 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ 	unsigned long offset;
+ 	unsigned long npages;
+ 	unsigned long size;
+-	unsigned long retq;
+ 	unsigned long *ptr;
+ 	void *trampoline;
+ 	void *ip;
+ 	/* 48 8b 15 <offset> is movq <offset>(%rip), %rdx */
+ 	unsigned const char op_ref[] = { 0x48, 0x8b, 0x15 };
++	unsigned const char retq[] = { RET_INSN_OPCODE, INT3_INSN_OPCODE };
+ 	union ftrace_op_code_union op_ptr;
+ 	int ret;
+ 
+@@ -367,13 +367,10 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ 	ip = trampoline + size;
+ 
+ 	/* The trampoline ends with ret(q) */
+-	retq = (unsigned long)ftrace_stub;
+ 	if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
+ 		memcpy(ip, text_gen_insn(JMP32_INSN_OPCODE, ip, &__x86_return_thunk), JMP32_INSN_SIZE);
+ 	else
+-		ret = copy_from_kernel_nofault(ip, (void *)retq, RET_SIZE);
+-	if (WARN_ON(ret < 0))
+-		goto fail;
++		memcpy(ip, retq, sizeof(retq));
+ 
+ 	/* No need to test direct calls on created trampolines */
+ 	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
+diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
+index e3a375185a1b4..5b2dabedcf664 100644
+--- a/arch/x86/kernel/ftrace_64.S
++++ b/arch/x86/kernel/ftrace_64.S
+@@ -170,7 +170,6 @@ SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)
+ 
+ /*
+  * This is weak to keep gas from relaxing the jumps.
+- * It is also used to copy the RET for trampolines.
+  */
+ SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK)
+ 	UNWIND_HINT_FUNC
+@@ -325,7 +324,7 @@ SYM_FUNC_END(ftrace_graph_caller)
+ 
+ SYM_CODE_START(return_to_handler)
+ 	UNWIND_HINT_EMPTY
+-	subq  $24, %rsp
++	subq  $16, %rsp
+ 
+ 	/* Save the return values */
+ 	movq %rax, (%rsp)
+@@ -337,7 +336,19 @@ SYM_CODE_START(return_to_handler)
+ 	movq %rax, %rdi
+ 	movq 8(%rsp), %rdx
+ 	movq (%rsp), %rax
+-	addq $24, %rsp
+-	JMP_NOSPEC rdi
++
++	addq $16, %rsp
++	/*
++	 * Jump back to the old return address. This cannot be JMP_NOSPEC rdi
++	 * since IBT would demand that contain ENDBR, which simply isn't so for
++	 * return addresses. Use a retpoline here to keep the RSB balanced.
++	 */
++	ANNOTATE_INTRA_FUNCTION_CALL
++	call .Ldo_rop
++	int3
++.Ldo_rop:
++	mov %rdi, (%rsp)
++	UNWIND_HINT_FUNC
++	RET
+ SYM_CODE_END(return_to_handler)
+ #endif
+diff --git a/drivers/gpio/gpio-mockup.c b/drivers/gpio/gpio-mockup.c
+index 67ed4f238d437..780cba4e30d0e 100644
+--- a/drivers/gpio/gpio-mockup.c
++++ b/drivers/gpio/gpio-mockup.c
+@@ -375,6 +375,13 @@ static void gpio_mockup_debugfs_setup(struct device *dev,
+ 	}
+ }
+ 
++static void gpio_mockup_debugfs_cleanup(void *data)
++{
++	struct gpio_mockup_chip *chip = data;
++
++	debugfs_remove_recursive(chip->dbg_dir);
++}
++
+ static void gpio_mockup_dispose_mappings(void *data)
+ {
+ 	struct gpio_mockup_chip *chip = data;
+@@ -457,7 +464,7 @@ static int gpio_mockup_probe(struct platform_device *pdev)
+ 
+ 	gpio_mockup_debugfs_setup(dev, chip);
+ 
+-	return 0;
++	return devm_add_action_or_reset(dev, gpio_mockup_debugfs_cleanup, chip);
+ }
+ 
+ static struct platform_driver gpio_mockup_driver = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index ae84d3b582aa5..8a2abcfd5a889 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -1921,7 +1921,7 @@ static int psp_load_smu_fw(struct psp_context *psp)
+ static bool fw_load_skip_check(struct psp_context *psp,
+ 			       struct amdgpu_firmware_info *ucode)
+ {
+-	if (!ucode->fw)
++	if (!ucode->fw || !ucode->ucode_size)
+ 		return true;
+ 
+ 	if (ucode->ucode_id == AMDGPU_UCODE_ID_SMC &&
+diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
+index fea30e7aa9e83..084b6ae2a4761 100644
+--- a/drivers/gpu/drm/msm/msm_rd.c
++++ b/drivers/gpu/drm/msm/msm_rd.c
+@@ -191,6 +191,9 @@ static int rd_open(struct inode *inode, struct file *file)
+ 	file->private_data = rd;
+ 	rd->open = true;
+ 
++	/* Reset fifo to clear any previously unread data: */
++	rd->fifo.head = rd->fifo.tail = 0;
++
+ 	/* the parsing tools need to know gpu-id to know which
+ 	 * register database to load.
+ 	 */
+diff --git a/drivers/hid/intel-ish-hid/ishtp-hid.h b/drivers/hid/intel-ish-hid/ishtp-hid.h
+index 5ffd0da3cf1fa..65af0ebef79f6 100644
+--- a/drivers/hid/intel-ish-hid/ishtp-hid.h
++++ b/drivers/hid/intel-ish-hid/ishtp-hid.h
+@@ -110,7 +110,7 @@ struct report_list {
+  * @multi_packet_cnt:	Count of fragmented packet count
+  *
+  * This structure is used to store completion flags and per client data like
+- * like report description, number of HID devices etc.
++ * report description, number of HID devices etc.
+  */
+ struct ishtp_cl_data {
+ 	/* completion flags */
+diff --git a/drivers/hid/intel-ish-hid/ishtp/client.c b/drivers/hid/intel-ish-hid/ishtp/client.c
+index 1cc157126fce7..c0d69303e3b09 100644
+--- a/drivers/hid/intel-ish-hid/ishtp/client.c
++++ b/drivers/hid/intel-ish-hid/ishtp/client.c
+@@ -626,13 +626,14 @@ static void ishtp_cl_read_complete(struct ishtp_cl_rb *rb)
+ }
+ 
+ /**
+- * ipc_tx_callback() - IPC tx callback function
++ * ipc_tx_send() - IPC tx send function
+  * @prm: Pointer to client device instance
+  *
+- * Send message over IPC either first time or on callback on previous message
+- * completion
++ * Send message over IPC. Message will be split into fragments
++ * if message size is bigger than IPC FIFO size, and all
++ * fragments will be sent one by one.
+  */
+-static void ipc_tx_callback(void *prm)
++static void ipc_tx_send(void *prm)
+ {
+ 	struct ishtp_cl	*cl = prm;
+ 	struct ishtp_cl_tx_ring	*cl_msg;
+@@ -677,32 +678,41 @@ static void ipc_tx_callback(void *prm)
+ 			    list);
+ 	rem = cl_msg->send_buf.size - cl->tx_offs;
+ 
+-	ishtp_hdr.host_addr = cl->host_client_id;
+-	ishtp_hdr.fw_addr = cl->fw_client_id;
+-	ishtp_hdr.reserved = 0;
+-	pmsg = cl_msg->send_buf.data + cl->tx_offs;
++	while (rem > 0) {
++		ishtp_hdr.host_addr = cl->host_client_id;
++		ishtp_hdr.fw_addr = cl->fw_client_id;
++		ishtp_hdr.reserved = 0;
++		pmsg = cl_msg->send_buf.data + cl->tx_offs;
++
++		if (rem <= dev->mtu) {
++			/* Last fragment or only one packet */
++			ishtp_hdr.length = rem;
++			ishtp_hdr.msg_complete = 1;
++			/* Submit to IPC queue with no callback */
++			ishtp_write_message(dev, &ishtp_hdr, pmsg);
++			cl->tx_offs = 0;
++			cl->sending = 0;
+ 
+-	if (rem <= dev->mtu) {
+-		ishtp_hdr.length = rem;
+-		ishtp_hdr.msg_complete = 1;
+-		cl->sending = 0;
+-		list_del_init(&cl_msg->list);	/* Must be before write */
+-		spin_unlock_irqrestore(&cl->tx_list_spinlock, tx_flags);
+-		/* Submit to IPC queue with no callback */
+-		ishtp_write_message(dev, &ishtp_hdr, pmsg);
+-		spin_lock_irqsave(&cl->tx_free_list_spinlock, tx_free_flags);
+-		list_add_tail(&cl_msg->list, &cl->tx_free_list.list);
+-		++cl->tx_ring_free_size;
+-		spin_unlock_irqrestore(&cl->tx_free_list_spinlock,
+-			tx_free_flags);
+-	} else {
+-		/* Send IPC fragment */
+-		spin_unlock_irqrestore(&cl->tx_list_spinlock, tx_flags);
+-		cl->tx_offs += dev->mtu;
+-		ishtp_hdr.length = dev->mtu;
+-		ishtp_hdr.msg_complete = 0;
+-		ishtp_send_msg(dev, &ishtp_hdr, pmsg, ipc_tx_callback, cl);
++			break;
++		} else {
++			/* Send ipc fragment */
++			ishtp_hdr.length = dev->mtu;
++			ishtp_hdr.msg_complete = 0;
++			/* All fregments submitted to IPC queue with no callback */
++			ishtp_write_message(dev, &ishtp_hdr, pmsg);
++			cl->tx_offs += dev->mtu;
++			rem = cl_msg->send_buf.size - cl->tx_offs;
++		}
+ 	}
++
++	list_del_init(&cl_msg->list);
++	spin_unlock_irqrestore(&cl->tx_list_spinlock, tx_flags);
++
++	spin_lock_irqsave(&cl->tx_free_list_spinlock, tx_free_flags);
++	list_add_tail(&cl_msg->list, &cl->tx_free_list.list);
++	++cl->tx_ring_free_size;
++	spin_unlock_irqrestore(&cl->tx_free_list_spinlock,
++		tx_free_flags);
+ }
+ 
+ /**
+@@ -720,7 +730,7 @@ static void ishtp_cl_send_msg_ipc(struct ishtp_device *dev,
+ 		return;
+ 
+ 	cl->tx_offs = 0;
+-	ipc_tx_callback(cl);
++	ipc_tx_send(cl);
+ 	++cl->send_msg_cnt_ipc;
+ }
+ 
+diff --git a/drivers/input/joystick/iforce/iforce-main.c b/drivers/input/joystick/iforce/iforce-main.c
+index b2a68bc9f0b4d..b86de1312512b 100644
+--- a/drivers/input/joystick/iforce/iforce-main.c
++++ b/drivers/input/joystick/iforce/iforce-main.c
+@@ -50,6 +50,7 @@ static struct iforce_device iforce_device[] = {
+ 	{ 0x046d, 0xc291, "Logitech WingMan Formula Force",		btn_wheel, abs_wheel, ff_iforce },
+ 	{ 0x05ef, 0x020a, "AVB Top Shot Pegasus",			btn_joystick_avb, abs_avb_pegasus, ff_iforce },
+ 	{ 0x05ef, 0x8884, "AVB Mag Turbo Force",			btn_wheel, abs_wheel, ff_iforce },
++	{ 0x05ef, 0x8886, "Boeder Force Feedback Wheel",		btn_wheel, abs_wheel, ff_iforce },
+ 	{ 0x05ef, 0x8888, "AVB Top Shot Force Feedback Racing Wheel",	btn_wheel, abs_wheel, ff_iforce }, //?
+ 	{ 0x061c, 0xc0a4, "ACT LABS Force RS",                          btn_wheel, abs_wheel, ff_iforce }, //?
+ 	{ 0x061c, 0xc084, "ACT LABS Force RS",				btn_wheel, abs_wheel, ff_iforce },
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index 5fc789f717c8a..b23abde5d7db3 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -154,6 +154,7 @@ static const struct goodix_chip_data gt9x_chip_data = {
+ 
+ static const struct goodix_chip_id goodix_chip_ids[] = {
+ 	{ .id = "1151", .data = &gt1x_chip_data },
++	{ .id = "1158", .data = &gt1x_chip_data },
+ 	{ .id = "5663", .data = &gt1x_chip_data },
+ 	{ .id = "5688", .data = &gt1x_chip_data },
+ 	{ .id = "917S", .data = &gt1x_chip_data },
+@@ -1385,6 +1386,7 @@ MODULE_DEVICE_TABLE(acpi, goodix_acpi_match);
+ #ifdef CONFIG_OF
+ static const struct of_device_id goodix_of_match[] = {
+ 	{ .compatible = "goodix,gt1151" },
++	{ .compatible = "goodix,gt1158" },
+ 	{ .compatible = "goodix,gt5663" },
+ 	{ .compatible = "goodix,gt5688" },
+ 	{ .compatible = "goodix,gt911" },
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 477dde39823c7..93c60712a948e 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -560,14 +560,36 @@ static inline int domain_pfn_supported(struct dmar_domain *domain,
+ 	return !(addr_width < BITS_PER_LONG && pfn >> addr_width);
+ }
+ 
++/*
++ * Calculate the Supported Adjusted Guest Address Widths of an IOMMU.
++ * Refer to 11.4.2 of the VT-d spec for the encoding of each bit of
++ * the returned SAGAW.
++ */
++static unsigned long __iommu_calculate_sagaw(struct intel_iommu *iommu)
++{
++	unsigned long fl_sagaw, sl_sagaw;
++
++	fl_sagaw = BIT(2) | (cap_fl1gp_support(iommu->cap) ? BIT(3) : 0);
++	sl_sagaw = cap_sagaw(iommu->cap);
++
++	/* Second level only. */
++	if (!sm_supported(iommu) || !ecap_flts(iommu->ecap))
++		return sl_sagaw;
++
++	/* First level only. */
++	if (!ecap_slts(iommu->ecap))
++		return fl_sagaw;
++
++	return fl_sagaw & sl_sagaw;
++}
++
+ static int __iommu_calculate_agaw(struct intel_iommu *iommu, int max_gaw)
+ {
+ 	unsigned long sagaw;
+ 	int agaw = -1;
+ 
+-	sagaw = cap_sagaw(iommu->cap);
+-	for (agaw = width_to_agaw(max_gaw);
+-	     agaw >= 0; agaw--) {
++	sagaw = __iommu_calculate_sagaw(iommu);
++	for (agaw = width_to_agaw(max_gaw); agaw >= 0; agaw--) {
+ 		if (test_bit(agaw, &sagaw))
+ 			break;
+ 	}
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index 5143cdd0eecad..be96116dc2ccb 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -18146,16 +18146,20 @@ static void tg3_shutdown(struct pci_dev *pdev)
+ 	struct net_device *dev = pci_get_drvdata(pdev);
+ 	struct tg3 *tp = netdev_priv(dev);
+ 
++	tg3_reset_task_cancel(tp);
++
+ 	rtnl_lock();
++
+ 	netif_device_detach(dev);
+ 
+ 	if (netif_running(dev))
+ 		dev_close(dev);
+ 
+-	if (system_state == SYSTEM_POWER_OFF)
+-		tg3_power_down(tp);
++	tg3_power_down(tp);
+ 
+ 	rtnl_unlock();
++
++	pci_disable_device(pdev);
+ }
+ 
+ /**
+diff --git a/drivers/net/ieee802154/cc2520.c b/drivers/net/ieee802154/cc2520.c
+index 89c046b204e0c..4517517215f2b 100644
+--- a/drivers/net/ieee802154/cc2520.c
++++ b/drivers/net/ieee802154/cc2520.c
+@@ -504,6 +504,7 @@ cc2520_tx(struct ieee802154_hw *hw, struct sk_buff *skb)
+ 		goto err_tx;
+ 
+ 	if (status & CC2520_STATUS_TX_UNDERFLOW) {
++		rc = -EINVAL;
+ 		dev_err(&priv->spi->dev, "cc2520 tx underflow exception\n");
+ 		goto err_tx;
+ 	}
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index d030d5e69dc50..e3e35b9bd6846 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1471,6 +1471,9 @@ static void nvmet_tcp_state_change(struct sock *sk)
+ 		goto done;
+ 
+ 	switch (sk->sk_state) {
++	case TCP_FIN_WAIT2:
++	case TCP_LAST_ACK:
++		break;
+ 	case TCP_FIN_WAIT1:
+ 	case TCP_CLOSE_WAIT:
+ 	case TCP_CLOSE:
+diff --git a/drivers/perf/arm_pmu_platform.c b/drivers/perf/arm_pmu_platform.c
+index ef9676418c9f4..2e1f3680d8466 100644
+--- a/drivers/perf/arm_pmu_platform.c
++++ b/drivers/perf/arm_pmu_platform.c
+@@ -117,7 +117,7 @@ static int pmu_parse_irqs(struct arm_pmu *pmu)
+ 
+ 	if (num_irqs == 1) {
+ 		int irq = platform_get_irq(pdev, 0);
+-		if (irq && irq_is_percpu_devid(irq))
++		if ((irq > 0) && irq_is_percpu_devid(irq))
+ 			return pmu_parse_percpu_irq(pmu, irq);
+ 	}
+ 
+diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c
+index 80983f9dfcd55..8e696262215fc 100644
+--- a/drivers/platform/x86/acer-wmi.c
++++ b/drivers/platform/x86/acer-wmi.c
+@@ -93,6 +93,7 @@ static const struct key_entry acer_wmi_keymap[] __initconst = {
+ 	{KE_KEY, 0x22, {KEY_PROG2} },    /* Arcade */
+ 	{KE_KEY, 0x23, {KEY_PROG3} },    /* P_Key */
+ 	{KE_KEY, 0x24, {KEY_PROG4} },    /* Social networking_Key */
++	{KE_KEY, 0x27, {KEY_HELP} },
+ 	{KE_KEY, 0x29, {KEY_PROG3} },    /* P_Key for TM8372 */
+ 	{KE_IGNORE, 0x41, {KEY_MUTE} },
+ 	{KE_IGNORE, 0x42, {KEY_PREVIOUSSONG} },
+@@ -106,7 +107,13 @@ static const struct key_entry acer_wmi_keymap[] __initconst = {
+ 	{KE_IGNORE, 0x48, {KEY_VOLUMEUP} },
+ 	{KE_IGNORE, 0x49, {KEY_VOLUMEDOWN} },
+ 	{KE_IGNORE, 0x4a, {KEY_VOLUMEDOWN} },
+-	{KE_IGNORE, 0x61, {KEY_SWITCHVIDEOMODE} },
++	/*
++	 * 0x61 is KEY_SWITCHVIDEOMODE. Usually this is a duplicate input event
++	 * with the "Video Bus" input device events. But sometimes it is not
++	 * a dup. Map it to KEY_UNKNOWN instead of using KE_IGNORE so that
++	 * udev/hwdb can override it on systems where it is not a dup.
++	 */
++	{KE_KEY, 0x61, {KEY_UNKNOWN} },
+ 	{KE_IGNORE, 0x62, {KEY_BRIGHTNESSUP} },
+ 	{KE_IGNORE, 0x63, {KEY_BRIGHTNESSDOWN} },
+ 	{KE_KEY, 0x64, {KEY_SWITCHVIDEOMODE} },	/* Display Switch */
+diff --git a/drivers/soc/fsl/Kconfig b/drivers/soc/fsl/Kconfig
+index 4df32bc4c7a6e..c5d46152d4680 100644
+--- a/drivers/soc/fsl/Kconfig
++++ b/drivers/soc/fsl/Kconfig
+@@ -24,6 +24,7 @@ config FSL_MC_DPIO
+         tristate "QorIQ DPAA2 DPIO driver"
+         depends on FSL_MC_BUS
+         select SOC_BUS
++        select FSL_GUTS
+         help
+ 	  Driver for the DPAA2 DPIO object.  A DPIO provides queue and
+ 	  buffer management facilities for software to interact with
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index 4051c8cd0cd8a..23ab3b048d9be 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -62,6 +62,13 @@ UNUSUAL_DEV(0x0984, 0x0301, 0x0128, 0x0128,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_IGNORE_UAS),
+ 
++/* Reported-by: Tom Hu <huxiaoying@kylinos.cn> */
++UNUSUAL_DEV(0x0b05, 0x1932, 0x0000, 0x9999,
++		"ASUS",
++		"External HDD",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_IGNORE_UAS),
++
+ /* Reported-by: David Webb <djw@noc.ac.uk> */
+ UNUSUAL_DEV(0x0bc2, 0x331a, 0x0000, 0x9999,
+ 		"Seagate",
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index 8b7315c22f0d1..4b70571368526 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -139,6 +139,8 @@ struct tracefs_mount_opts {
+ 	kuid_t uid;
+ 	kgid_t gid;
+ 	umode_t mode;
++	/* Opt_* bitfield. */
++	unsigned int opts;
+ };
+ 
+ enum {
+@@ -239,6 +241,7 @@ static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts)
+ 	kgid_t gid;
+ 	char *p;
+ 
++	opts->opts = 0;
+ 	opts->mode = TRACEFS_DEFAULT_MODE;
+ 
+ 	while ((p = strsep(&data, ",")) != NULL) {
+@@ -273,24 +276,36 @@ static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts)
+ 		 * but traditionally tracefs has ignored all mount options
+ 		 */
+ 		}
++
++		opts->opts |= BIT(token);
+ 	}
+ 
+ 	return 0;
+ }
+ 
+-static int tracefs_apply_options(struct super_block *sb)
++static int tracefs_apply_options(struct super_block *sb, bool remount)
+ {
+ 	struct tracefs_fs_info *fsi = sb->s_fs_info;
+ 	struct inode *inode = sb->s_root->d_inode;
+ 	struct tracefs_mount_opts *opts = &fsi->mount_opts;
+ 
+-	inode->i_mode &= ~S_IALLUGO;
+-	inode->i_mode |= opts->mode;
++	/*
++	 * On remount, only reset mode/uid/gid if they were provided as mount
++	 * options.
++	 */
++
++	if (!remount || opts->opts & BIT(Opt_mode)) {
++		inode->i_mode &= ~S_IALLUGO;
++		inode->i_mode |= opts->mode;
++	}
+ 
+-	inode->i_uid = opts->uid;
++	if (!remount || opts->opts & BIT(Opt_uid))
++		inode->i_uid = opts->uid;
+ 
+-	/* Set all the group ids to the mount option */
+-	set_gid(sb->s_root, opts->gid);
++	if (!remount || opts->opts & BIT(Opt_gid)) {
++		/* Set all the group ids to the mount option */
++		set_gid(sb->s_root, opts->gid);
++	}
+ 
+ 	return 0;
+ }
+@@ -305,7 +320,7 @@ static int tracefs_remount(struct super_block *sb, int *flags, char *data)
+ 	if (err)
+ 		goto fail;
+ 
+-	tracefs_apply_options(sb);
++	tracefs_apply_options(sb, true);
+ 
+ fail:
+ 	return err;
+@@ -357,7 +372,7 @@ static int trace_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	sb->s_op = &tracefs_super_operations;
+ 
+-	tracefs_apply_options(sb);
++	tracefs_apply_options(sb, false);
+ 
+ 	return 0;
+ 
+diff --git a/mm/mmap.c b/mm/mmap.c
+index b69c9711bb269..31fc116a8ec9b 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -2664,6 +2664,7 @@ static void unmap_region(struct mm_struct *mm,
+ {
+ 	struct vm_area_struct *next = vma_next(mm, prev);
+ 	struct mmu_gather tlb;
++	struct vm_area_struct *cur_vma;
+ 
+ 	lru_add_drain();
+ 	tlb_gather_mmu(&tlb, mm, start, end);
+@@ -2678,8 +2679,12 @@ static void unmap_region(struct mm_struct *mm,
+ 	 * concurrent flush in this region has to be coming through the rmap,
+ 	 * and we synchronize against that using the rmap lock.
+ 	 */
+-	if ((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) != 0)
+-		tlb_flush_mmu(&tlb);
++	for (cur_vma = vma; cur_vma; cur_vma = cur_vma->vm_next) {
++		if ((cur_vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) != 0) {
++			tlb_flush_mmu(&tlb);
++			break;
++		}
++	}
+ 
+ 	free_pgtables(&tlb, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
+ 				 next ? next->vm_start : USER_PGTABLES_CEILING);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-09-23 12:40 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-09-23 12:40 UTC (permalink / raw
  To: gentoo-commits

commit:     ecbe39ac61f676205b3c89ec43cbffbfd3f77c90
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 23 12:39:57 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep 23 12:39:57 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ecbe39ac

Linux patch 5.10.145

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1144_linux-5.10.145.patch | 1878 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1882 insertions(+)

diff --git a/0000_README b/0000_README
index 7aedc075..0670d018 100644
--- a/0000_README
+++ b/0000_README
@@ -619,6 +619,10 @@ Patch:  1143_linux-5.10.144.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.144
 
+Patch:  1144_linux-5.10.145.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.145
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1144_linux-5.10.145.patch b/1144_linux-5.10.145.patch
new file mode 100644
index 00000000..c4117822
--- /dev/null
+++ b/1144_linux-5.10.145.patch
@@ -0,0 +1,1878 @@
+diff --git a/Makefile b/Makefile
+index 21aa9b04164d1..76c85e40beea3 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 144
++SUBLEVEL = 145
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c
+index 6501a842c41a5..191bcaf565138 100644
+--- a/arch/mips/cavium-octeon/octeon-irq.c
++++ b/arch/mips/cavium-octeon/octeon-irq.c
+@@ -127,6 +127,16 @@ static void octeon_irq_free_cd(struct irq_domain *d, unsigned int irq)
+ static int octeon_irq_force_ciu_mapping(struct irq_domain *domain,
+ 					int irq, int line, int bit)
+ {
++	struct device_node *of_node;
++	int ret;
++
++	of_node = irq_domain_get_of_node(domain);
++	if (!of_node)
++		return -EINVAL;
++	ret = irq_alloc_desc_at(irq, of_node_to_nid(of_node));
++	if (ret < 0)
++		return ret;
++
+ 	return irq_domain_associate(domain, irq, line << 6 | bit);
+ }
+ 
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index 2d89f79f460cb..07a4d4badd697 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -315,6 +315,16 @@ config IRQSTACKS
+ 	  for handling hard and soft interrupts.  This can help avoid
+ 	  overflowing the process kernel stacks.
+ 
++config TLB_PTLOCK
++	bool "Use page table locks in TLB fault handler"
++	depends on SMP
++	default n
++	help
++	  Select this option to enable page table locking in the TLB
++	  fault handler. This ensures that page table entries are
++	  updated consistently on SMP machines at the expense of some
++	  loss in performance.
++
+ config HOTPLUG_CPU
+ 	bool
+ 	default y if SMP
+diff --git a/arch/parisc/include/asm/mmu_context.h b/arch/parisc/include/asm/mmu_context.h
+index cb5f2f7304213..aba69ff79e8c1 100644
+--- a/arch/parisc/include/asm/mmu_context.h
++++ b/arch/parisc/include/asm/mmu_context.h
+@@ -5,6 +5,7 @@
+ #include <linux/mm.h>
+ #include <linux/sched.h>
+ #include <linux/atomic.h>
++#include <linux/spinlock.h>
+ #include <asm-generic/mm_hooks.h>
+ 
+ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+@@ -52,6 +53,12 @@ static inline void switch_mm_irqs_off(struct mm_struct *prev,
+ 		struct mm_struct *next, struct task_struct *tsk)
+ {
+ 	if (prev != next) {
++#ifdef CONFIG_TLB_PTLOCK
++		/* put physical address of page_table_lock in cr28 (tr4)
++		   for TLB faults */
++		spinlock_t *pgd_lock = &next->page_table_lock;
++		mtctl(__pa(__ldcw_align(&pgd_lock->rlock.raw_lock)), 28);
++#endif
+ 		mtctl(__pa(next->pgd), 25);
+ 		load_context(next->context);
+ 	}
+diff --git a/arch/parisc/include/asm/page.h b/arch/parisc/include/asm/page.h
+index 8802ce651a3af..0561568f7b489 100644
+--- a/arch/parisc/include/asm/page.h
++++ b/arch/parisc/include/asm/page.h
+@@ -112,7 +112,7 @@ extern int npmem_ranges;
+ #else
+ #define BITS_PER_PTE_ENTRY	2
+ #define BITS_PER_PMD_ENTRY	2
+-#define BITS_PER_PGD_ENTRY	BITS_PER_PMD_ENTRY
++#define BITS_PER_PGD_ENTRY	2
+ #endif
+ #define PGD_ENTRY_SIZE	(1UL << BITS_PER_PGD_ENTRY)
+ #define PMD_ENTRY_SIZE	(1UL << BITS_PER_PMD_ENTRY)
+diff --git a/arch/parisc/include/asm/pgalloc.h b/arch/parisc/include/asm/pgalloc.h
+index a6482b2ce0eab..dda5570853116 100644
+--- a/arch/parisc/include/asm/pgalloc.h
++++ b/arch/parisc/include/asm/pgalloc.h
+@@ -15,47 +15,23 @@
+ #define __HAVE_ARCH_PGD_FREE
+ #include <asm-generic/pgalloc.h>
+ 
+-/* Allocate the top level pgd (page directory)
+- *
+- * Here (for 64 bit kernels) we implement a Hybrid L2/L3 scheme: we
+- * allocate the first pmd adjacent to the pgd.  This means that we can
+- * subtract a constant offset to get to it.  The pmd and pgd sizes are
+- * arranged so that a single pmd covers 4GB (giving a full 64-bit
+- * process access to 8TB) so our lookups are effectively L2 for the
+- * first 4GB of the kernel (i.e. for all ILP32 processes and all the
+- * kernel for machines with under 4GB of memory) */
++/* Allocate the top level pgd (page directory) */
+ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+ {
+-	pgd_t *pgd = (pgd_t *)__get_free_pages(GFP_KERNEL,
+-					       PGD_ALLOC_ORDER);
+-	pgd_t *actual_pgd = pgd;
++	pgd_t *pgd;
+ 
+-	if (likely(pgd != NULL)) {
+-		memset(pgd, 0, PAGE_SIZE<<PGD_ALLOC_ORDER);
+-#if CONFIG_PGTABLE_LEVELS == 3
+-		actual_pgd += PTRS_PER_PGD;
+-		/* Populate first pmd with allocated memory.  We mark it
+-		 * with PxD_FLAG_ATTACHED as a signal to the system that this
+-		 * pmd entry may not be cleared. */
+-		set_pgd(actual_pgd, __pgd((PxD_FLAG_PRESENT |
+-				        PxD_FLAG_VALID |
+-					PxD_FLAG_ATTACHED)
+-			+ (__u32)(__pa((unsigned long)pgd) >> PxD_VALUE_SHIFT)));
+-		/* The first pmd entry also is marked with PxD_FLAG_ATTACHED as
+-		 * a signal that this pmd may not be freed */
+-		set_pgd(pgd, __pgd(PxD_FLAG_ATTACHED));
+-#endif
+-	}
+-	spin_lock_init(pgd_spinlock(actual_pgd));
+-	return actual_pgd;
++	pgd = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER);
++	if (unlikely(pgd == NULL))
++		return NULL;
++
++	memset(pgd, 0, PAGE_SIZE << PGD_ORDER);
++
++	return pgd;
+ }
+ 
+ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
+ {
+-#if CONFIG_PGTABLE_LEVELS == 3
+-	pgd -= PTRS_PER_PGD;
+-#endif
+-	free_pages((unsigned long)pgd, PGD_ALLOC_ORDER);
++	free_pages((unsigned long)pgd, PGD_ORDER);
+ }
+ 
+ #if CONFIG_PGTABLE_LEVELS == 3
+@@ -70,41 +46,25 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+ 
+ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
+ {
+-	return (pmd_t *)__get_free_pages(GFP_PGTABLE_KERNEL, PMD_ORDER);
++	pmd_t *pmd;
++
++	pmd = (pmd_t *)__get_free_pages(GFP_PGTABLE_KERNEL, PMD_ORDER);
++	if (likely(pmd))
++		memset ((void *)pmd, 0, PAGE_SIZE << PMD_ORDER);
++	return pmd;
+ }
+ 
+ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
+ {
+-	if (pmd_flag(*pmd) & PxD_FLAG_ATTACHED) {
+-		/*
+-		 * This is the permanent pmd attached to the pgd;
+-		 * cannot free it.
+-		 * Increment the counter to compensate for the decrement
+-		 * done by generic mm code.
+-		 */
+-		mm_inc_nr_pmds(mm);
+-		return;
+-	}
+ 	free_pages((unsigned long)pmd, PMD_ORDER);
+ }
+-
+ #endif
+ 
+ static inline void
+ pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte)
+ {
+-#if CONFIG_PGTABLE_LEVELS == 3
+-	/* preserve the gateway marker if this is the beginning of
+-	 * the permanent pmd */
+-	if(pmd_flag(*pmd) & PxD_FLAG_ATTACHED)
+-		set_pmd(pmd, __pmd((PxD_FLAG_PRESENT |
+-				PxD_FLAG_VALID |
+-				PxD_FLAG_ATTACHED)
+-			+ (__u32)(__pa((unsigned long)pte) >> PxD_VALUE_SHIFT)));
+-	else
+-#endif
+-		set_pmd(pmd, __pmd((PxD_FLAG_PRESENT | PxD_FLAG_VALID)
+-			+ (__u32)(__pa((unsigned long)pte) >> PxD_VALUE_SHIFT)));
++	set_pmd(pmd, __pmd((PxD_FLAG_PRESENT | PxD_FLAG_VALID)
++		+ (__u32)(__pa((unsigned long)pte) >> PxD_VALUE_SHIFT)));
+ }
+ 
+ #define pmd_populate(mm, pmd, pte_page) \
+diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
+index 75cf84070fc91..8964798b8274e 100644
+--- a/arch/parisc/include/asm/pgtable.h
++++ b/arch/parisc/include/asm/pgtable.h
+@@ -23,8 +23,6 @@
+ #include <asm/processor.h>
+ #include <asm/cache.h>
+ 
+-static inline spinlock_t *pgd_spinlock(pgd_t *);
+-
+ /*
+  * kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel
+  * memory.  For the return value to be meaningful, ADDR must be >=
+@@ -42,12 +40,8 @@ static inline spinlock_t *pgd_spinlock(pgd_t *);
+ 
+ /* This is for the serialization of PxTLB broadcasts. At least on the N class
+  * systems, only one PxTLB inter processor broadcast can be active at any one
+- * time on the Merced bus.
+-
+- * PTE updates are protected by locks in the PMD.
+- */
++ * time on the Merced bus. */
+ extern spinlock_t pa_tlb_flush_lock;
+-extern spinlock_t pa_swapper_pg_lock;
+ #if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
+ extern int pa_serialize_tlb_flushes;
+ #else
+@@ -82,22 +76,25 @@ static inline void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
+ 	purge_tlb_end(flags);
+ }
+ 
++extern void __update_cache(pte_t pte);
++
+ /* Certain architectures need to do special things when PTEs
+  * within a page table are directly modified.  Thus, the following
+  * hook is made available.
+  */
+-#define set_pte(pteptr, pteval)                                 \
+-        do{                                                     \
+-                *(pteptr) = (pteval);                           \
+-        } while(0)
+-
+-#define set_pte_at(mm, addr, ptep, pteval)			\
+-	do {							\
+-		unsigned long flags;				\
+-		spin_lock_irqsave(pgd_spinlock((mm)->pgd), flags);\
+-		set_pte(ptep, pteval);				\
+-		purge_tlb_entries(mm, addr);			\
+-		spin_unlock_irqrestore(pgd_spinlock((mm)->pgd), flags);\
++#define set_pte(pteptr, pteval)			\
++	do {					\
++		*(pteptr) = (pteval);		\
++		mb();				\
++	} while(0)
++
++#define set_pte_at(mm, addr, pteptr, pteval)	\
++	do {					\
++		if (pte_present(pteval) &&	\
++		    pte_user(pteval))		\
++			__update_cache(pteval);	\
++		*(pteptr) = (pteval);		\
++		purge_tlb_entries(mm, addr);	\
+ 	} while (0)
+ 
+ #endif /* !__ASSEMBLY__ */
+@@ -120,12 +117,10 @@ static inline void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
+ #define KERNEL_INITIAL_SIZE	(1 << KERNEL_INITIAL_ORDER)
+ 
+ #if CONFIG_PGTABLE_LEVELS == 3
+-#define PGD_ORDER	1 /* Number of pages per pgd */
+-#define PMD_ORDER	1 /* Number of pages per pmd */
+-#define PGD_ALLOC_ORDER	(2 + 1) /* first pgd contains pmd */
++#define PMD_ORDER	1
++#define PGD_ORDER	0
+ #else
+-#define PGD_ORDER	1 /* Number of pages per pgd */
+-#define PGD_ALLOC_ORDER	(PGD_ORDER + 1)
++#define PGD_ORDER	1
+ #endif
+ 
+ /* Definitions for 3rd level (we use PLD here for Page Lower directory
+@@ -240,11 +235,9 @@ static inline void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
+  * able to effectively address 40/42/44-bits of physical address space
+  * depending on 4k/16k/64k PAGE_SIZE */
+ #define _PxD_PRESENT_BIT   31
+-#define _PxD_ATTACHED_BIT  30
+-#define _PxD_VALID_BIT     29
++#define _PxD_VALID_BIT     30
+ 
+ #define PxD_FLAG_PRESENT  (1 << xlate_pabit(_PxD_PRESENT_BIT))
+-#define PxD_FLAG_ATTACHED (1 << xlate_pabit(_PxD_ATTACHED_BIT))
+ #define PxD_FLAG_VALID    (1 << xlate_pabit(_PxD_VALID_BIT))
+ #define PxD_FLAG_MASK     (0xf)
+ #define PxD_FLAG_SHIFT    (4)
+@@ -317,6 +310,7 @@ extern unsigned long *empty_zero_page;
+ 
+ #define pte_none(x)     (pte_val(x) == 0)
+ #define pte_present(x)	(pte_val(x) & _PAGE_PRESENT)
++#define pte_user(x)	(pte_val(x) & _PAGE_USER)
+ #define pte_clear(mm, addr, xp)  set_pte_at(mm, addr, xp, __pte(0))
+ 
+ #define pmd_flag(x)	(pmd_val(x) & PxD_FLAG_MASK)
+@@ -326,23 +320,10 @@ extern unsigned long *empty_zero_page;
+ #define pgd_flag(x)	(pgd_val(x) & PxD_FLAG_MASK)
+ #define pgd_address(x)	((unsigned long)(pgd_val(x) &~ PxD_FLAG_MASK) << PxD_VALUE_SHIFT)
+ 
+-#if CONFIG_PGTABLE_LEVELS == 3
+-/* The first entry of the permanent pmd is not there if it contains
+- * the gateway marker */
+-#define pmd_none(x)	(!pmd_val(x) || pmd_flag(x) == PxD_FLAG_ATTACHED)
+-#else
+ #define pmd_none(x)	(!pmd_val(x))
+-#endif
+ #define pmd_bad(x)	(!(pmd_flag(x) & PxD_FLAG_VALID))
+ #define pmd_present(x)	(pmd_flag(x) & PxD_FLAG_PRESENT)
+ static inline void pmd_clear(pmd_t *pmd) {
+-#if CONFIG_PGTABLE_LEVELS == 3
+-	if (pmd_flag(*pmd) & PxD_FLAG_ATTACHED)
+-		/* This is the entry pointing to the permanent pmd
+-		 * attached to the pgd; cannot clear it */
+-		set_pmd(pmd, __pmd(PxD_FLAG_ATTACHED));
+-	else
+-#endif
+ 		set_pmd(pmd,  __pmd(0));
+ }
+ 
+@@ -358,12 +339,6 @@ static inline void pmd_clear(pmd_t *pmd) {
+ #define pud_bad(x)      (!(pud_flag(x) & PxD_FLAG_VALID))
+ #define pud_present(x)  (pud_flag(x) & PxD_FLAG_PRESENT)
+ static inline void pud_clear(pud_t *pud) {
+-#if CONFIG_PGTABLE_LEVELS == 3
+-	if(pud_flag(*pud) & PxD_FLAG_ATTACHED)
+-		/* This is the permanent pmd attached to the pud; cannot
+-		 * free it */
+-		return;
+-#endif
+ 	set_pud(pud, __pud(0));
+ }
+ #endif
+@@ -443,7 +418,7 @@ extern void paging_init (void);
+ 
+ #define PG_dcache_dirty         PG_arch_1
+ 
+-extern void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *);
++#define update_mmu_cache(vms,addr,ptep) __update_cache(*ptep)
+ 
+ /* Encode and de-code a swap entry */
+ 
+@@ -456,32 +431,18 @@ extern void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *);
+ #define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val(pte) })
+ #define __swp_entry_to_pte(x)		((pte_t) { (x).val })
+ 
+-
+-static inline spinlock_t *pgd_spinlock(pgd_t *pgd)
+-{
+-	if (unlikely(pgd == swapper_pg_dir))
+-		return &pa_swapper_pg_lock;
+-	return (spinlock_t *)((char *)pgd + (PAGE_SIZE << (PGD_ALLOC_ORDER - 1)));
+-}
+-
+-
+ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
+ {
+ 	pte_t pte;
+-	unsigned long flags;
+ 
+ 	if (!pte_young(*ptep))
+ 		return 0;
+ 
+-	spin_lock_irqsave(pgd_spinlock(vma->vm_mm->pgd), flags);
+ 	pte = *ptep;
+ 	if (!pte_young(pte)) {
+-		spin_unlock_irqrestore(pgd_spinlock(vma->vm_mm->pgd), flags);
+ 		return 0;
+ 	}
+-	set_pte(ptep, pte_mkold(pte));
+-	purge_tlb_entries(vma->vm_mm, addr);
+-	spin_unlock_irqrestore(pgd_spinlock(vma->vm_mm->pgd), flags);
++	set_pte_at(vma->vm_mm, addr, ptep, pte_mkold(pte));
+ 	return 1;
+ }
+ 
+@@ -489,24 +450,16 @@ struct mm_struct;
+ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ {
+ 	pte_t old_pte;
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(pgd_spinlock(mm->pgd), flags);
+ 	old_pte = *ptep;
+-	set_pte(ptep, __pte(0));
+-	purge_tlb_entries(mm, addr);
+-	spin_unlock_irqrestore(pgd_spinlock(mm->pgd), flags);
++	set_pte_at(mm, addr, ptep, __pte(0));
+ 
+ 	return old_pte;
+ }
+ 
+ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ {
+-	unsigned long flags;
+-	spin_lock_irqsave(pgd_spinlock(mm->pgd), flags);
+-	set_pte(ptep, pte_wrprotect(*ptep));
+-	purge_tlb_entries(mm, addr);
+-	spin_unlock_irqrestore(pgd_spinlock(mm->pgd), flags);
++	set_pte_at(mm, addr, ptep, pte_wrprotect(*ptep));
+ }
+ 
+ #define pte_same(A,B)	(pte_val(A) == pte_val(B))
+diff --git a/arch/parisc/kernel/asm-offsets.c b/arch/parisc/kernel/asm-offsets.c
+index 305768a40773f..cd2cc1b1648c0 100644
+--- a/arch/parisc/kernel/asm-offsets.c
++++ b/arch/parisc/kernel/asm-offsets.c
+@@ -268,7 +268,6 @@ int main(void)
+ 	DEFINE(ASM_BITS_PER_PGD, BITS_PER_PGD);
+ 	DEFINE(ASM_BITS_PER_PMD, BITS_PER_PMD);
+ 	DEFINE(ASM_BITS_PER_PTE, BITS_PER_PTE);
+-	DEFINE(ASM_PGD_PMD_OFFSET, -(PAGE_SIZE << PGD_ORDER));
+ 	DEFINE(ASM_PMD_ENTRY, ((PAGE_OFFSET & PMD_MASK) >> PMD_SHIFT));
+ 	DEFINE(ASM_PGD_ENTRY, PAGE_OFFSET >> PGDIR_SHIFT);
+ 	DEFINE(ASM_PGD_ENTRY_SIZE, PGD_ENTRY_SIZE);
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index 86a1a63563fd5..c81ab0cb89255 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -83,9 +83,9 @@ EXPORT_SYMBOL(flush_cache_all_local);
+ #define pfn_va(pfn)	__va(PFN_PHYS(pfn))
+ 
+ void
+-update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
++__update_cache(pte_t pte)
+ {
+-	unsigned long pfn = pte_pfn(*ptep);
++	unsigned long pfn = pte_pfn(pte);
+ 	struct page *page;
+ 
+ 	/* We don't have pte special.  As a result, we can be called with
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index 3da39140babcf..05bed27eef859 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -35,10 +35,9 @@
+ 	.level 2.0
+ #endif
+ 
+-	.import		pa_tlb_lock,data
+-	.macro  load_pa_tlb_lock reg
+-	mfctl		%cr25,\reg
+-	addil		L%(PAGE_SIZE << (PGD_ALLOC_ORDER - 1)),\reg
++	/* Get aligned page_table_lock address for this mm from cr28/tr4 */
++	.macro  get_ptl reg
++	mfctl	%cr28,\reg
+ 	.endm
+ 
+ 	/* space_to_prot macro creates a prot id from a space id */
+@@ -407,7 +406,9 @@
+ # endif
+ #endif
+ 	dep             %r0,31,PAGE_SHIFT,\pmd  /* clear offset */
++#if CONFIG_PGTABLE_LEVELS < 3
+ 	copy		%r0,\pte
++#endif
+ 	ldw,s		\index(\pmd),\pmd
+ 	bb,>=,n		\pmd,_PxD_PRESENT_BIT,\fault
+ 	dep		%r0,31,PxD_FLAG_SHIFT,\pmd /* clear flags */
+@@ -417,38 +418,23 @@
+ 	shladd		\index,BITS_PER_PTE_ENTRY,\pmd,\pmd /* pmd is now pte */
+ 	.endm
+ 
+-	/* Look up PTE in a 3-Level scheme.
+-	 *
+-	 * Here we implement a Hybrid L2/L3 scheme: we allocate the
+-	 * first pmd adjacent to the pgd.  This means that we can
+-	 * subtract a constant offset to get to it.  The pmd and pgd
+-	 * sizes are arranged so that a single pmd covers 4GB (giving
+-	 * a full LP64 process access to 8TB) so our lookups are
+-	 * effectively L2 for the first 4GB of the kernel (i.e. for
+-	 * all ILP32 processes and all the kernel for machines with
+-	 * under 4GB of memory) */
++	/* Look up PTE in a 3-Level scheme. */
+ 	.macro		L3_ptep pgd,pte,index,va,fault
+-#if CONFIG_PGTABLE_LEVELS == 3 /* we might have a 2-Level scheme, e.g. with 16kb page size */
++#if CONFIG_PGTABLE_LEVELS == 3
++	copy		%r0,\pte
+ 	extrd,u		\va,63-ASM_PGDIR_SHIFT,ASM_BITS_PER_PGD,\index
+-	extrd,u,*=	\va,63-ASM_PGDIR_SHIFT,64-ASM_PGDIR_SHIFT,%r0
+ 	ldw,s		\index(\pgd),\pgd
+-	extrd,u,*=	\va,63-ASM_PGDIR_SHIFT,64-ASM_PGDIR_SHIFT,%r0
+ 	bb,>=,n		\pgd,_PxD_PRESENT_BIT,\fault
+-	extrd,u,*=	\va,63-ASM_PGDIR_SHIFT,64-ASM_PGDIR_SHIFT,%r0
+-	shld		\pgd,PxD_VALUE_SHIFT,\index
+-	extrd,u,*=	\va,63-ASM_PGDIR_SHIFT,64-ASM_PGDIR_SHIFT,%r0
+-	copy		\index,\pgd
+-	extrd,u,*<>	\va,63-ASM_PGDIR_SHIFT,64-ASM_PGDIR_SHIFT,%r0
+-	ldo		ASM_PGD_PMD_OFFSET(\pgd),\pgd
++	shld		\pgd,PxD_VALUE_SHIFT,\pgd
+ #endif
+ 	L2_ptep		\pgd,\pte,\index,\va,\fault
+ 	.endm
+ 
+-	/* Acquire pa_tlb_lock lock and check page is present. */
+-	.macro		tlb_lock	spc,ptp,pte,tmp,tmp1,fault
+-#ifdef CONFIG_SMP
++	/* Acquire page_table_lock and check page is present. */
++	.macro		ptl_lock	spc,ptp,pte,tmp,tmp1,fault
++#ifdef CONFIG_TLB_PTLOCK
+ 98:	cmpib,COND(=),n	0,\spc,2f
+-	load_pa_tlb_lock \tmp
++	get_ptl		\tmp
+ 1:	LDCW		0(\tmp),\tmp1
+ 	cmpib,COND(=)	0,\tmp1,1b
+ 	nop
+@@ -463,26 +449,26 @@
+ 3:
+ 	.endm
+ 
+-	/* Release pa_tlb_lock lock without reloading lock address.
++	/* Release page_table_lock without reloading lock address.
+ 	   Note that the values in the register spc are limited to
+ 	   NR_SPACE_IDS (262144). Thus, the stw instruction always
+ 	   stores a nonzero value even when register spc is 64 bits.
+ 	   We use an ordered store to ensure all prior accesses are
+ 	   performed prior to releasing the lock. */
+-	.macro		tlb_unlock0	spc,tmp
+-#ifdef CONFIG_SMP
++	.macro		ptl_unlock0	spc,tmp
++#ifdef CONFIG_TLB_PTLOCK
+ 98:	or,COND(=)	%r0,\spc,%r0
+ 	stw,ma		\spc,0(\tmp)
+ 99:	ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+ #endif
+ 	.endm
+ 
+-	/* Release pa_tlb_lock lock. */
+-	.macro		tlb_unlock1	spc,tmp
+-#ifdef CONFIG_SMP
+-98:	load_pa_tlb_lock \tmp
++	/* Release page_table_lock. */
++	.macro		ptl_unlock1	spc,tmp
++#ifdef CONFIG_TLB_PTLOCK
++98:	get_ptl		\tmp
++	ptl_unlock0	\spc,\tmp
+ 99:	ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+-	tlb_unlock0	\spc,\tmp
+ #endif
+ 	.endm
+ 
+@@ -1165,14 +1151,14 @@ dtlb_miss_20w:
+ 
+ 	L3_ptep		ptp,pte,t0,va,dtlb_check_alias_20w
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,dtlb_check_alias_20w
++	ptl_lock	spc,ptp,pte,t0,t1,dtlb_check_alias_20w
+ 	update_accessed	ptp,pte,t0,t1
+ 
+ 	make_insert_tlb	spc,pte,prot,t1
+ 	
+ 	idtlbt          pte,prot
+ 
+-	tlb_unlock1	spc,t0
++	ptl_unlock1	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1191,14 +1177,14 @@ nadtlb_miss_20w:
+ 
+ 	L3_ptep		ptp,pte,t0,va,nadtlb_check_alias_20w
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,nadtlb_check_alias_20w
++	ptl_lock	spc,ptp,pte,t0,t1,nadtlb_check_alias_20w
+ 	update_accessed	ptp,pte,t0,t1
+ 
+ 	make_insert_tlb	spc,pte,prot,t1
+ 
+ 	idtlbt          pte,prot
+ 
+-	tlb_unlock1	spc,t0
++	ptl_unlock1	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1219,7 +1205,7 @@ dtlb_miss_11:
+ 
+ 	L2_ptep		ptp,pte,t0,va,dtlb_check_alias_11
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,dtlb_check_alias_11
++	ptl_lock	spc,ptp,pte,t0,t1,dtlb_check_alias_11
+ 	update_accessed	ptp,pte,t0,t1
+ 
+ 	make_insert_tlb_11	spc,pte,prot
+@@ -1232,7 +1218,7 @@ dtlb_miss_11:
+ 
+ 	mtsp		t1, %sr1	/* Restore sr1 */
+ 
+-	tlb_unlock1	spc,t0
++	ptl_unlock1	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1252,7 +1238,7 @@ nadtlb_miss_11:
+ 
+ 	L2_ptep		ptp,pte,t0,va,nadtlb_check_alias_11
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,nadtlb_check_alias_11
++	ptl_lock	spc,ptp,pte,t0,t1,nadtlb_check_alias_11
+ 	update_accessed	ptp,pte,t0,t1
+ 
+ 	make_insert_tlb_11	spc,pte,prot
+@@ -1265,7 +1251,7 @@ nadtlb_miss_11:
+ 
+ 	mtsp		t1, %sr1	/* Restore sr1 */
+ 
+-	tlb_unlock1	spc,t0
++	ptl_unlock1	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1285,7 +1271,7 @@ dtlb_miss_20:
+ 
+ 	L2_ptep		ptp,pte,t0,va,dtlb_check_alias_20
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,dtlb_check_alias_20
++	ptl_lock	spc,ptp,pte,t0,t1,dtlb_check_alias_20
+ 	update_accessed	ptp,pte,t0,t1
+ 
+ 	make_insert_tlb	spc,pte,prot,t1
+@@ -1294,7 +1280,7 @@ dtlb_miss_20:
+ 
+ 	idtlbt          pte,prot
+ 
+-	tlb_unlock1	spc,t0
++	ptl_unlock1	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1313,7 +1299,7 @@ nadtlb_miss_20:
+ 
+ 	L2_ptep		ptp,pte,t0,va,nadtlb_check_alias_20
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,nadtlb_check_alias_20
++	ptl_lock	spc,ptp,pte,t0,t1,nadtlb_check_alias_20
+ 	update_accessed	ptp,pte,t0,t1
+ 
+ 	make_insert_tlb	spc,pte,prot,t1
+@@ -1322,7 +1308,7 @@ nadtlb_miss_20:
+ 	
+ 	idtlbt		pte,prot
+ 
+-	tlb_unlock1	spc,t0
++	ptl_unlock1	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1422,14 +1408,14 @@ itlb_miss_20w:
+ 
+ 	L3_ptep		ptp,pte,t0,va,itlb_fault
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,itlb_fault
++	ptl_lock	spc,ptp,pte,t0,t1,itlb_fault
+ 	update_accessed	ptp,pte,t0,t1
+ 
+ 	make_insert_tlb	spc,pte,prot,t1
+ 	
+ 	iitlbt          pte,prot
+ 
+-	tlb_unlock1	spc,t0
++	ptl_unlock1	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1446,14 +1432,14 @@ naitlb_miss_20w:
+ 
+ 	L3_ptep		ptp,pte,t0,va,naitlb_check_alias_20w
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,naitlb_check_alias_20w
++	ptl_lock	spc,ptp,pte,t0,t1,naitlb_check_alias_20w
+ 	update_accessed	ptp,pte,t0,t1
+ 
+ 	make_insert_tlb	spc,pte,prot,t1
+ 
+ 	iitlbt          pte,prot
+ 
+-	tlb_unlock1	spc,t0
++	ptl_unlock1	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1474,7 +1460,7 @@ itlb_miss_11:
+ 
+ 	L2_ptep		ptp,pte,t0,va,itlb_fault
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,itlb_fault
++	ptl_lock	spc,ptp,pte,t0,t1,itlb_fault
+ 	update_accessed	ptp,pte,t0,t1
+ 
+ 	make_insert_tlb_11	spc,pte,prot
+@@ -1487,7 +1473,7 @@ itlb_miss_11:
+ 
+ 	mtsp		t1, %sr1	/* Restore sr1 */
+ 
+-	tlb_unlock1	spc,t0
++	ptl_unlock1	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1498,7 +1484,7 @@ naitlb_miss_11:
+ 
+ 	L2_ptep		ptp,pte,t0,va,naitlb_check_alias_11
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,naitlb_check_alias_11
++	ptl_lock	spc,ptp,pte,t0,t1,naitlb_check_alias_11
+ 	update_accessed	ptp,pte,t0,t1
+ 
+ 	make_insert_tlb_11	spc,pte,prot
+@@ -1511,7 +1497,7 @@ naitlb_miss_11:
+ 
+ 	mtsp		t1, %sr1	/* Restore sr1 */
+ 
+-	tlb_unlock1	spc,t0
++	ptl_unlock1	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1532,7 +1518,7 @@ itlb_miss_20:
+ 
+ 	L2_ptep		ptp,pte,t0,va,itlb_fault
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,itlb_fault
++	ptl_lock	spc,ptp,pte,t0,t1,itlb_fault
+ 	update_accessed	ptp,pte,t0,t1
+ 
+ 	make_insert_tlb	spc,pte,prot,t1
+@@ -1541,7 +1527,7 @@ itlb_miss_20:
+ 
+ 	iitlbt          pte,prot
+ 
+-	tlb_unlock1	spc,t0
++	ptl_unlock1	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1552,7 +1538,7 @@ naitlb_miss_20:
+ 
+ 	L2_ptep		ptp,pte,t0,va,naitlb_check_alias_20
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,naitlb_check_alias_20
++	ptl_lock	spc,ptp,pte,t0,t1,naitlb_check_alias_20
+ 	update_accessed	ptp,pte,t0,t1
+ 
+ 	make_insert_tlb	spc,pte,prot,t1
+@@ -1561,7 +1547,7 @@ naitlb_miss_20:
+ 
+ 	iitlbt          pte,prot
+ 
+-	tlb_unlock1	spc,t0
++	ptl_unlock1	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1584,14 +1570,14 @@ dbit_trap_20w:
+ 
+ 	L3_ptep		ptp,pte,t0,va,dbit_fault
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,dbit_fault
++	ptl_lock	spc,ptp,pte,t0,t1,dbit_fault
+ 	update_dirty	ptp,pte,t1
+ 
+ 	make_insert_tlb	spc,pte,prot,t1
+ 		
+ 	idtlbt          pte,prot
+ 
+-	tlb_unlock0	spc,t0
++	ptl_unlock0	spc,t0
+ 	rfir
+ 	nop
+ #else
+@@ -1604,7 +1590,7 @@ dbit_trap_11:
+ 
+ 	L2_ptep		ptp,pte,t0,va,dbit_fault
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,dbit_fault
++	ptl_lock	spc,ptp,pte,t0,t1,dbit_fault
+ 	update_dirty	ptp,pte,t1
+ 
+ 	make_insert_tlb_11	spc,pte,prot
+@@ -1617,7 +1603,7 @@ dbit_trap_11:
+ 
+ 	mtsp            t1, %sr1     /* Restore sr1 */
+ 
+-	tlb_unlock0	spc,t0
++	ptl_unlock0	spc,t0
+ 	rfir
+ 	nop
+ 
+@@ -1628,7 +1614,7 @@ dbit_trap_20:
+ 
+ 	L2_ptep		ptp,pte,t0,va,dbit_fault
+ 
+-	tlb_lock	spc,ptp,pte,t0,t1,dbit_fault
++	ptl_lock	spc,ptp,pte,t0,t1,dbit_fault
+ 	update_dirty	ptp,pte,t1
+ 
+ 	make_insert_tlb	spc,pte,prot,t1
+@@ -1637,7 +1623,7 @@ dbit_trap_20:
+ 	
+ 	idtlbt		pte,prot
+ 
+-	tlb_unlock0	spc,t0
++	ptl_unlock0	spc,t0
+ 	rfir
+ 	nop
+ #endif
+diff --git a/arch/parisc/mm/hugetlbpage.c b/arch/parisc/mm/hugetlbpage.c
+index d7ba014a7fbb5..43652de5f139f 100644
+--- a/arch/parisc/mm/hugetlbpage.c
++++ b/arch/parisc/mm/hugetlbpage.c
+@@ -142,24 +142,17 @@ static void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+ 		     pte_t *ptep, pte_t entry)
+ {
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(pgd_spinlock((mm)->pgd), flags);
+ 	__set_huge_pte_at(mm, addr, ptep, entry);
+-	spin_unlock_irqrestore(pgd_spinlock((mm)->pgd), flags);
+ }
+ 
+ 
+ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+ 			      pte_t *ptep)
+ {
+-	unsigned long flags;
+ 	pte_t entry;
+ 
+-	spin_lock_irqsave(pgd_spinlock((mm)->pgd), flags);
+ 	entry = *ptep;
+ 	__set_huge_pte_at(mm, addr, ptep, __pte(0));
+-	spin_unlock_irqrestore(pgd_spinlock((mm)->pgd), flags);
+ 
+ 	return entry;
+ }
+@@ -168,29 +161,23 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+ void huge_ptep_set_wrprotect(struct mm_struct *mm,
+ 				unsigned long addr, pte_t *ptep)
+ {
+-	unsigned long flags;
+ 	pte_t old_pte;
+ 
+-	spin_lock_irqsave(pgd_spinlock((mm)->pgd), flags);
+ 	old_pte = *ptep;
+ 	__set_huge_pte_at(mm, addr, ptep, pte_wrprotect(old_pte));
+-	spin_unlock_irqrestore(pgd_spinlock((mm)->pgd), flags);
+ }
+ 
+ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+ 				unsigned long addr, pte_t *ptep,
+ 				pte_t pte, int dirty)
+ {
+-	unsigned long flags;
+ 	int changed;
+ 	struct mm_struct *mm = vma->vm_mm;
+ 
+-	spin_lock_irqsave(pgd_spinlock((mm)->pgd), flags);
+ 	changed = !pte_same(*ptep, pte);
+ 	if (changed) {
+ 		__set_huge_pte_at(mm, addr, ptep, pte);
+ 	}
+-	spin_unlock_irqrestore(pgd_spinlock((mm)->pgd), flags);
+ 	return changed;
+ }
+ 
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 319afa00cdf7b..6a083fc87a038 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -37,11 +37,6 @@ extern int  data_start;
+ extern void parisc_kernel_start(void);	/* Kernel entry point in head.S */
+ 
+ #if CONFIG_PGTABLE_LEVELS == 3
+-/* NOTE: This layout exactly conforms to the hybrid L2/L3 page table layout
+- * with the first pmd adjacent to the pgd and below it. gcc doesn't actually
+- * guarantee that global objects will be laid out in memory in the same order
+- * as the order of declaration, so put these in different sections and use
+- * the linker script to order them. */
+ pmd_t pmd0[PTRS_PER_PMD] __section(".data..vm0.pmd") __attribute__ ((aligned(PAGE_SIZE)));
+ #endif
+ 
+@@ -558,6 +553,11 @@ void __init mem_init(void)
+ 	BUILD_BUG_ON(PGD_ENTRY_SIZE != sizeof(pgd_t));
+ 	BUILD_BUG_ON(PAGE_SHIFT + BITS_PER_PTE + BITS_PER_PMD + BITS_PER_PGD
+ 			> BITS_PER_LONG);
++#if CONFIG_PGTABLE_LEVELS == 3
++	BUILD_BUG_ON(PT_INITIAL > PTRS_PER_PMD);
++#else
++	BUILD_BUG_ON(PT_INITIAL > PTRS_PER_PGD);
++#endif
+ 
+ 	high_memory = __va((max_pfn << PAGE_SHIFT));
+ 	set_max_mapnr(max_low_pfn);
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 38b7a3491aac0..1d25932389951 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -3399,8 +3399,22 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
+ 
+ 	kvmppc_set_host_core(pcpu);
+ 
++	context_tracking_guest_exit();
++	if (!vtime_accounting_enabled_this_cpu()) {
++		local_irq_enable();
++		/*
++		 * Service IRQs here before vtime_account_guest_exit() so any
++		 * ticks that occurred while running the guest are accounted to
++		 * the guest. If vtime accounting is enabled, accounting uses
++		 * TB rather than ticks, so it can be done without enabling
++		 * interrupts here, which has the problem that it accounts
++		 * interrupt processing overhead to the host.
++		 */
++		local_irq_disable();
++	}
++	vtime_account_guest_exit();
++
+ 	local_irq_enable();
+-	guest_exit();
+ 
+ 	/* Let secondaries go back to the offline loop */
+ 	for (i = 0; i < controlled_threads; ++i) {
+@@ -4235,8 +4249,22 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
+ 
+ 	kvmppc_set_host_core(pcpu);
+ 
++	context_tracking_guest_exit();
++	if (!vtime_accounting_enabled_this_cpu()) {
++		local_irq_enable();
++		/*
++		 * Service IRQs here before vtime_account_guest_exit() so any
++		 * ticks that occurred while running the guest are accounted to
++		 * the guest. If vtime accounting is enabled, accounting uses
++		 * TB rather than ticks, so it can be done without enabling
++		 * interrupts here, which has the problem that it accounts
++		 * interrupt processing overhead to the host.
++		 */
++		local_irq_disable();
++	}
++	vtime_account_guest_exit();
++
+ 	local_irq_enable();
+-	guest_exit();
+ 
+ 	cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest);
+ 
+diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
+index b1abcb8164397..75381beb7514a 100644
+--- a/arch/powerpc/kvm/booke.c
++++ b/arch/powerpc/kvm/booke.c
+@@ -1016,7 +1016,21 @@ int kvmppc_handle_exit(struct kvm_vcpu *vcpu, unsigned int exit_nr)
+ 	}
+ 
+ 	trace_kvm_exit(exit_nr, vcpu);
+-	guest_exit_irqoff();
++
++	context_tracking_guest_exit();
++	if (!vtime_accounting_enabled_this_cpu()) {
++		local_irq_enable();
++		/*
++		 * Service IRQs here before vtime_account_guest_exit() so any
++		 * ticks that occurred while running the guest are accounted to
++		 * the guest. If vtime accounting is enabled, accounting uses
++		 * TB rather than ticks, so it can be done without enabling
++		 * interrupts here, which has the problem that it accounts
++		 * interrupt processing overhead to the host.
++		 */
++		local_irq_disable();
++	}
++	vtime_account_guest_exit();
+ 
+ 	local_irq_enable();
+ 
+diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
+index 2f73cb5bf12d5..f386a7bc38114 100644
+--- a/arch/powerpc/platforms/pseries/mobility.c
++++ b/arch/powerpc/platforms/pseries/mobility.c
+@@ -59,18 +59,31 @@ static int mobility_rtas_call(int token, char *buf, s32 scope)
+ 	return rc;
+ }
+ 
+-static int delete_dt_node(__be32 phandle)
++static int delete_dt_node(struct device_node *dn)
+ {
+-	struct device_node *dn;
++	struct device_node *pdn;
++	bool is_platfac;
+ 
+-	dn = of_find_node_by_phandle(be32_to_cpu(phandle));
+-	if (!dn)
+-		return -ENOENT;
++	pdn = of_get_parent(dn);
++	is_platfac = of_node_is_type(dn, "ibm,platform-facilities") ||
++		     of_node_is_type(pdn, "ibm,platform-facilities");
++	of_node_put(pdn);
+ 
+-	pr_debug("removing node %pOFfp\n", dn);
++	/*
++	 * The drivers that bind to nodes in the platform-facilities
++	 * hierarchy don't support node removal, and the removal directive
++	 * from firmware is always followed by an add of an equivalent
++	 * node. The capability (e.g. RNG, encryption, compression)
++	 * represented by the node is never interrupted by the migration.
++	 * So ignore changes to this part of the tree.
++	 */
++	if (is_platfac) {
++		pr_notice("ignoring remove operation for %pOFfp\n", dn);
++		return 0;
++	}
+ 
++	pr_debug("removing node %pOFfp\n", dn);
+ 	dlpar_detach_node(dn);
+-	of_node_put(dn);
+ 	return 0;
+ }
+ 
+@@ -135,10 +148,9 @@ static int update_dt_property(struct device_node *dn, struct property **prop,
+ 	return 0;
+ }
+ 
+-static int update_dt_node(__be32 phandle, s32 scope)
++static int update_dt_node(struct device_node *dn, s32 scope)
+ {
+ 	struct update_props_workarea *upwa;
+-	struct device_node *dn;
+ 	struct property *prop = NULL;
+ 	int i, rc, rtas_rc;
+ 	char *prop_data;
+@@ -155,14 +167,8 @@ static int update_dt_node(__be32 phandle, s32 scope)
+ 	if (!rtas_buf)
+ 		return -ENOMEM;
+ 
+-	dn = of_find_node_by_phandle(be32_to_cpu(phandle));
+-	if (!dn) {
+-		kfree(rtas_buf);
+-		return -ENOENT;
+-	}
+-
+ 	upwa = (struct update_props_workarea *)&rtas_buf[0];
+-	upwa->phandle = phandle;
++	upwa->phandle = cpu_to_be32(dn->phandle);
+ 
+ 	do {
+ 		rtas_rc = mobility_rtas_call(update_properties_token, rtas_buf,
+@@ -221,25 +227,30 @@ static int update_dt_node(__be32 phandle, s32 scope)
+ 		cond_resched();
+ 	} while (rtas_rc == 1);
+ 
+-	of_node_put(dn);
+ 	kfree(rtas_buf);
+ 	return 0;
+ }
+ 
+-static int add_dt_node(__be32 parent_phandle, __be32 drc_index)
++static int add_dt_node(struct device_node *parent_dn, __be32 drc_index)
+ {
+ 	struct device_node *dn;
+-	struct device_node *parent_dn;
+ 	int rc;
+ 
+-	parent_dn = of_find_node_by_phandle(be32_to_cpu(parent_phandle));
+-	if (!parent_dn)
+-		return -ENOENT;
+-
+ 	dn = dlpar_configure_connector(drc_index, parent_dn);
+-	if (!dn) {
+-		of_node_put(parent_dn);
++	if (!dn)
+ 		return -ENOENT;
++
++	/*
++	 * Since delete_dt_node() ignores this node type, this is the
++	 * necessary counterpart. We also know that a platform-facilities
++	 * node returned from dlpar_configure_connector() has children
++	 * attached, and dlpar_attach_node() only adds the parent, leaking
++	 * the children. So ignore these on the add side for now.
++	 */
++	if (of_node_is_type(dn, "ibm,platform-facilities")) {
++		pr_notice("ignoring add operation for %pOF\n", dn);
++		dlpar_free_cc_nodes(dn);
++		return 0;
+ 	}
+ 
+ 	rc = dlpar_attach_node(dn, parent_dn);
+@@ -248,7 +259,6 @@ static int add_dt_node(__be32 parent_phandle, __be32 drc_index)
+ 
+ 	pr_debug("added node %pOFfp\n", dn);
+ 
+-	of_node_put(parent_dn);
+ 	return rc;
+ }
+ 
+@@ -281,22 +291,31 @@ int pseries_devicetree_update(s32 scope)
+ 			data++;
+ 
+ 			for (i = 0; i < node_count; i++) {
++				struct device_node *np;
+ 				__be32 phandle = *data++;
+ 				__be32 drc_index;
+ 
++				np = of_find_node_by_phandle(be32_to_cpu(phandle));
++				if (!np) {
++					pr_warn("Failed lookup: phandle 0x%x for action 0x%x\n",
++						be32_to_cpu(phandle), action);
++					continue;
++				}
++
+ 				switch (action) {
+ 				case DELETE_DT_NODE:
+-					delete_dt_node(phandle);
++					delete_dt_node(np);
+ 					break;
+ 				case UPDATE_DT_NODE:
+-					update_dt_node(phandle, scope);
++					update_dt_node(np, scope);
+ 					break;
+ 				case ADD_DT_NODE:
+ 					drc_index = *data++;
+-					add_dt_node(phandle, drc_index);
++					add_dt_node(np, drc_index);
+ 					break;
+ 				}
+ 
++				of_node_put(np);
+ 				cond_resched();
+ 			}
+ 		}
+diff --git a/drivers/dma/bestcomm/ata.c b/drivers/dma/bestcomm/ata.c
+index 2fd87f83cf90b..e169f18da551f 100644
+--- a/drivers/dma/bestcomm/ata.c
++++ b/drivers/dma/bestcomm/ata.c
+@@ -133,7 +133,7 @@ void bcom_ata_reset_bd(struct bcom_task *tsk)
+ 	struct bcom_ata_var *var;
+ 
+ 	/* Reset all BD */
+-	memset(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
++	memset_io(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
+ 
+ 	tsk->index = 0;
+ 	tsk->outdex = 0;
+diff --git a/drivers/dma/bestcomm/bestcomm.c b/drivers/dma/bestcomm/bestcomm.c
+index d91cbbe7a48fb..8c42e5ca00a99 100644
+--- a/drivers/dma/bestcomm/bestcomm.c
++++ b/drivers/dma/bestcomm/bestcomm.c
+@@ -95,7 +95,7 @@ bcom_task_alloc(int bd_count, int bd_size, int priv_size)
+ 		tsk->bd = bcom_sram_alloc(bd_count * bd_size, 4, &tsk->bd_pa);
+ 		if (!tsk->bd)
+ 			goto error;
+-		memset(tsk->bd, 0x00, bd_count * bd_size);
++		memset_io(tsk->bd, 0x00, bd_count * bd_size);
+ 
+ 		tsk->num_bd = bd_count;
+ 		tsk->bd_size = bd_size;
+@@ -186,16 +186,16 @@ bcom_load_image(int task, u32 *task_image)
+ 	inc = bcom_task_inc(task);
+ 
+ 	/* Clear & copy */
+-	memset(var, 0x00, BCOM_VAR_SIZE);
+-	memset(inc, 0x00, BCOM_INC_SIZE);
++	memset_io(var, 0x00, BCOM_VAR_SIZE);
++	memset_io(inc, 0x00, BCOM_INC_SIZE);
+ 
+ 	desc_src = (u32 *)(hdr + 1);
+ 	var_src = desc_src + hdr->desc_size;
+ 	inc_src = var_src + hdr->var_size;
+ 
+-	memcpy(desc, desc_src, hdr->desc_size * sizeof(u32));
+-	memcpy(var + hdr->first_var, var_src, hdr->var_size * sizeof(u32));
+-	memcpy(inc, inc_src, hdr->inc_size * sizeof(u32));
++	memcpy_toio(desc, desc_src, hdr->desc_size * sizeof(u32));
++	memcpy_toio(var + hdr->first_var, var_src, hdr->var_size * sizeof(u32));
++	memcpy_toio(inc, inc_src, hdr->inc_size * sizeof(u32));
+ 
+ 	return 0;
+ }
+@@ -302,13 +302,13 @@ static int bcom_engine_init(void)
+ 		return -ENOMEM;
+ 	}
+ 
+-	memset(bcom_eng->tdt, 0x00, tdt_size);
+-	memset(bcom_eng->ctx, 0x00, ctx_size);
+-	memset(bcom_eng->var, 0x00, var_size);
+-	memset(bcom_eng->fdt, 0x00, fdt_size);
++	memset_io(bcom_eng->tdt, 0x00, tdt_size);
++	memset_io(bcom_eng->ctx, 0x00, ctx_size);
++	memset_io(bcom_eng->var, 0x00, var_size);
++	memset_io(bcom_eng->fdt, 0x00, fdt_size);
+ 
+ 	/* Copy the FDT for the EU#3 */
+-	memcpy(&bcom_eng->fdt[48], fdt_ops, sizeof(fdt_ops));
++	memcpy_toio(&bcom_eng->fdt[48], fdt_ops, sizeof(fdt_ops));
+ 
+ 	/* Initialize Task base structure */
+ 	for (task=0; task<BCOM_MAX_TASKS; task++)
+diff --git a/drivers/dma/bestcomm/fec.c b/drivers/dma/bestcomm/fec.c
+index 7f1fb1c999e43..d203618ac11fe 100644
+--- a/drivers/dma/bestcomm/fec.c
++++ b/drivers/dma/bestcomm/fec.c
+@@ -140,7 +140,7 @@ bcom_fec_rx_reset(struct bcom_task *tsk)
+ 	tsk->index = 0;
+ 	tsk->outdex = 0;
+ 
+-	memset(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
++	memset_io(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
+ 
+ 	/* Configure some stuff */
+ 	bcom_set_task_pragma(tsk->tasknum, BCOM_FEC_RX_BD_PRAGMA);
+@@ -241,7 +241,7 @@ bcom_fec_tx_reset(struct bcom_task *tsk)
+ 	tsk->index = 0;
+ 	tsk->outdex = 0;
+ 
+-	memset(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
++	memset_io(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
+ 
+ 	/* Configure some stuff */
+ 	bcom_set_task_pragma(tsk->tasknum, BCOM_FEC_TX_BD_PRAGMA);
+diff --git a/drivers/dma/bestcomm/gen_bd.c b/drivers/dma/bestcomm/gen_bd.c
+index 906ddba6a6f5d..8a24a5cbc2633 100644
+--- a/drivers/dma/bestcomm/gen_bd.c
++++ b/drivers/dma/bestcomm/gen_bd.c
+@@ -142,7 +142,7 @@ bcom_gen_bd_rx_reset(struct bcom_task *tsk)
+ 	tsk->index = 0;
+ 	tsk->outdex = 0;
+ 
+-	memset(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
++	memset_io(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
+ 
+ 	/* Configure some stuff */
+ 	bcom_set_task_pragma(tsk->tasknum, BCOM_GEN_RX_BD_PRAGMA);
+@@ -226,7 +226,7 @@ bcom_gen_bd_tx_reset(struct bcom_task *tsk)
+ 	tsk->index = 0;
+ 	tsk->outdex = 0;
+ 
+-	memset(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
++	memset_io(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
+ 
+ 	/* Configure some stuff */
+ 	bcom_set_task_pragma(tsk->tasknum, BCOM_GEN_TX_BD_PRAGMA);
+diff --git a/drivers/gpio/gpio-mpc8xxx.c b/drivers/gpio/gpio-mpc8xxx.c
+index d60d5520707dc..60c2533a39a5f 100644
+--- a/drivers/gpio/gpio-mpc8xxx.c
++++ b/drivers/gpio/gpio-mpc8xxx.c
+@@ -169,6 +169,7 @@ static int mpc8xxx_irq_set_type(struct irq_data *d, unsigned int flow_type)
+ 
+ 	switch (flow_type) {
+ 	case IRQ_TYPE_EDGE_FALLING:
++	case IRQ_TYPE_LEVEL_LOW:
+ 		raw_spin_lock_irqsave(&mpc8xxx_gc->lock, flags);
+ 		gc->write_reg(mpc8xxx_gc->regs + GPIO_ICR,
+ 			gc->read_reg(mpc8xxx_gc->regs + GPIO_ICR)
+diff --git a/drivers/gpu/drm/meson/meson_plane.c b/drivers/gpu/drm/meson/meson_plane.c
+index 35338ed182099..255c6b863f8d2 100644
+--- a/drivers/gpu/drm/meson/meson_plane.c
++++ b/drivers/gpu/drm/meson/meson_plane.c
+@@ -163,7 +163,7 @@ static void meson_plane_atomic_update(struct drm_plane *plane,
+ 
+ 	/* Enable OSD and BLK0, set max global alpha */
+ 	priv->viu.osd1_ctrl_stat = OSD_ENABLE |
+-				   (0xFF << OSD_GLOBAL_ALPHA_SHIFT) |
++				   (0x100 << OSD_GLOBAL_ALPHA_SHIFT) |
+ 				   OSD_BLK0_ENABLE;
+ 
+ 	priv->viu.osd1_ctrl_stat2 = readl(priv->io_base +
+diff --git a/drivers/gpu/drm/meson/meson_viu.c b/drivers/gpu/drm/meson/meson_viu.c
+index bb7e109534de1..d4b907889a21d 100644
+--- a/drivers/gpu/drm/meson/meson_viu.c
++++ b/drivers/gpu/drm/meson/meson_viu.c
+@@ -94,7 +94,7 @@ static void meson_viu_set_g12a_osd1_matrix(struct meson_drm *priv,
+ 		priv->io_base + _REG(VPP_WRAP_OSD1_MATRIX_COEF11_12));
+ 	writel(((m[9] & 0x1fff) << 16) | (m[10] & 0x1fff),
+ 		priv->io_base + _REG(VPP_WRAP_OSD1_MATRIX_COEF20_21));
+-	writel((m[11] & 0x1fff) << 16,
++	writel((m[11] & 0x1fff),
+ 		priv->io_base +	_REG(VPP_WRAP_OSD1_MATRIX_COEF22));
+ 
+ 	writel(((m[18] & 0xfff) << 16) | (m[19] & 0xfff),
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 7b7a8a74405df..371b345635e62 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -666,44 +666,48 @@ static void mv88e6xxx_mac_config(struct dsa_switch *ds, int port,
+ {
+ 	struct mv88e6xxx_chip *chip = ds->priv;
+ 	struct mv88e6xxx_port *p;
+-	int err;
++	int err = 0;
+ 
+ 	p = &chip->ports[port];
+ 
+-	/* FIXME: is this the correct test? If we're in fixed mode on an
+-	 * internal port, why should we process this any different from
+-	 * PHY mode? On the other hand, the port may be automedia between
+-	 * an internal PHY and the serdes...
+-	 */
+-	if ((mode == MLO_AN_PHY) && mv88e6xxx_phy_is_internal(ds, port))
+-		return;
+-
+ 	mv88e6xxx_reg_lock(chip);
+-	/* In inband mode, the link may come up at any time while the link
+-	 * is not forced down. Force the link down while we reconfigure the
+-	 * interface mode.
+-	 */
+-	if (mode == MLO_AN_INBAND && p->interface != state->interface &&
+-	    chip->info->ops->port_set_link)
+-		chip->info->ops->port_set_link(chip, port, LINK_FORCED_DOWN);
+-
+-	err = mv88e6xxx_port_config_interface(chip, port, state->interface);
+-	if (err && err != -EOPNOTSUPP)
+-		goto err_unlock;
+ 
+-	err = mv88e6xxx_serdes_pcs_config(chip, port, mode, state->interface,
+-					  state->advertising);
+-	/* FIXME: we should restart negotiation if something changed - which
+-	 * is something we get if we convert to using phylinks PCS operations.
+-	 */
+-	if (err > 0)
+-		err = 0;
++	if (mode != MLO_AN_PHY || !mv88e6xxx_phy_is_internal(ds, port)) {
++		/* In inband mode, the link may come up at any time while the
++		 * link is not forced down. Force the link down while we
++		 * reconfigure the interface mode.
++		 */
++		if (mode == MLO_AN_INBAND &&
++		    p->interface != state->interface &&
++		    chip->info->ops->port_set_link)
++			chip->info->ops->port_set_link(chip, port,
++						       LINK_FORCED_DOWN);
++
++		err = mv88e6xxx_port_config_interface(chip, port,
++						      state->interface);
++		if (err && err != -EOPNOTSUPP)
++			goto err_unlock;
++
++		err = mv88e6xxx_serdes_pcs_config(chip, port, mode,
++						  state->interface,
++						  state->advertising);
++		/* FIXME: we should restart negotiation if something changed -
++		 * which is something we get if we convert to using phylinks
++		 * PCS operations.
++		 */
++		if (err > 0)
++			err = 0;
++	}
+ 
+ 	/* Undo the forced down state above after completing configuration
+-	 * irrespective of its state on entry, which allows the link to come up.
++	 * irrespective of its state on entry, which allows the link to come
++	 * up in the in-band case where there is no separate SERDES. Also
++	 * ensure that the link can come up if the PPU is in use and we are
++	 * in PHY mode (we treat the PPU as an effective in-band mechanism.)
+ 	 */
+-	if (mode == MLO_AN_INBAND && p->interface != state->interface &&
+-	    chip->info->ops->port_set_link)
++	if (chip->info->ops->port_set_link &&
++	    ((mode == MLO_AN_INBAND && p->interface != state->interface) ||
++	     (mode == MLO_AN_PHY && mv88e6xxx_port_ppu_updates(chip, port))))
+ 		chip->info->ops->port_set_link(chip, port, LINK_UNFORCED);
+ 
+ 	p->interface = state->interface;
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 48e8b94e4a7c5..1502069f3a4e2 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1024,6 +1024,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0512)},	/* Quectel EG12/EM12 */
+ 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0620)},	/* Quectel EM160R-GL */
+ 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0800)},	/* Quectel RM500Q-GL */
++	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0801)},	/* Quectel RM520N */
+ 
+ 	/* 3. Combined interface devices matching on interface number */
+ 	{QMI_FIXED_INTF(0x0408, 0xea42, 4)},	/* Yota / Megafon M100-1 */
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 8e412125a49c1..50190ded7edc7 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -4209,6 +4209,10 @@ static int hwsim_virtio_handle_cmd(struct sk_buff *skb)
+ 
+ 	nlh = nlmsg_hdr(skb);
+ 	gnlh = nlmsg_data(nlh);
++
++	if (skb->len < nlh->nlmsg_len)
++		return -EINVAL;
++
+ 	err = genlmsg_parse(nlh, &hwsim_genl_family, tb, HWSIM_ATTR_MAX,
+ 			    hwsim_genl_policy, NULL);
+ 	if (err) {
+@@ -4251,7 +4255,8 @@ static void hwsim_virtio_rx_work(struct work_struct *work)
+ 	spin_unlock_irqrestore(&hwsim_virtio_lock, flags);
+ 
+ 	skb->data = skb->head;
+-	skb_set_tail_pointer(skb, len);
++	skb_reset_tail_pointer(skb);
++	skb_put(skb, len);
+ 	hwsim_virtio_handle_cmd(skb);
+ 
+ 	spin_lock_irqsave(&hwsim_virtio_lock, flags);
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index 57ff31b6b1e47..5a1b8688b4605 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -315,7 +315,7 @@ static int unflatten_dt_nodes(const void *blob,
+ 	for (offset = 0;
+ 	     offset >= 0 && depth >= initial_depth;
+ 	     offset = fdt_next_node(blob, offset, &depth)) {
+-		if (WARN_ON_ONCE(depth >= FDT_MAX_DEPTH))
++		if (WARN_ON_ONCE(depth >= FDT_MAX_DEPTH - 1))
+ 			continue;
+ 
+ 		if (!IS_ENABLED(CONFIG_OF_KOBJ) &&
+diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
+index ffd5000c23d39..be81b765858be 100644
+--- a/drivers/parisc/ccio-dma.c
++++ b/drivers/parisc/ccio-dma.c
+@@ -1546,6 +1546,7 @@ static int __init ccio_probe(struct parisc_device *dev)
+ 	}
+ 	ccio_ioc_init(ioc);
+ 	if (ccio_init_resources(ioc)) {
++		iounmap(ioc->ioc_regs);
+ 		kfree(ioc);
+ 		return -ENOMEM;
+ 	}
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sun50i-a100-r.c b/drivers/pinctrl/sunxi/pinctrl-sun50i-a100-r.c
+index 21054fcacd345..18088f6f44b23 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sun50i-a100-r.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sun50i-a100-r.c
+@@ -98,7 +98,7 @@ MODULE_DEVICE_TABLE(of, a100_r_pinctrl_match);
+ static struct platform_driver a100_r_pinctrl_driver = {
+ 	.probe	= a100_r_pinctrl_probe,
+ 	.driver	= {
+-		.name		= "sun50iw10p1-r-pinctrl",
++		.name		= "sun50i-a100-r-pinctrl",
+ 		.of_match_table	= a100_r_pinctrl_match,
+ 	},
+ };
+diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
+index 8a0cd5bf00657..cebddefba2f42 100644
+--- a/drivers/platform/x86/intel-hid.c
++++ b/drivers/platform/x86/intel-hid.c
+@@ -93,6 +93,13 @@ static const struct dmi_system_id button_array_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_FAMILY, "ThinkPad X1 Tablet Gen 2"),
+ 		},
+ 	},
++	{
++		.ident = "Microsoft Surface Go 3",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 3"),
++		},
++	},
+ 	{ }
+ };
+ 
+diff --git a/drivers/regulator/pfuze100-regulator.c b/drivers/regulator/pfuze100-regulator.c
+index 0a19500d3725e..44a8e500fb304 100644
+--- a/drivers/regulator/pfuze100-regulator.c
++++ b/drivers/regulator/pfuze100-regulator.c
+@@ -791,7 +791,7 @@ static int pfuze100_regulator_probe(struct i2c_client *client,
+ 		((pfuze_chip->chip_id == PFUZE3000) ? "3000" : "3001"))));
+ 
+ 	memcpy(pfuze_chip->regulator_descs, pfuze_chip->pfuze_regulators,
+-		sizeof(pfuze_chip->regulator_descs));
++		regulator_num * sizeof(struct pfuze_regulator));
+ 
+ 	ret = pfuze_parse_regulators_dt(pfuze_chip);
+ 	if (ret)
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index a37ea946459cc..c6fc14b169dac 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -352,19 +352,6 @@ static void cdns3_ep_inc_deq(struct cdns3_endpoint *priv_ep)
+ 	cdns3_ep_inc_trb(&priv_ep->dequeue, &priv_ep->ccs, priv_ep->num_trbs);
+ }
+ 
+-static void cdns3_move_deq_to_next_trb(struct cdns3_request *priv_req)
+-{
+-	struct cdns3_endpoint *priv_ep = priv_req->priv_ep;
+-	int current_trb = priv_req->start_trb;
+-
+-	while (current_trb != priv_req->end_trb) {
+-		cdns3_ep_inc_deq(priv_ep);
+-		current_trb = priv_ep->dequeue;
+-	}
+-
+-	cdns3_ep_inc_deq(priv_ep);
+-}
+-
+ /**
+  * cdns3_allow_enable_l1 - enable/disable permits to transition to L1.
+  * @priv_dev: Extended gadget object
+@@ -1518,10 +1505,11 @@ static void cdns3_transfer_completed(struct cdns3_device *priv_dev,
+ 
+ 		trb = priv_ep->trb_pool + priv_ep->dequeue;
+ 
+-		/* Request was dequeued and TRB was changed to TRB_LINK. */
+-		if (TRB_FIELD_TO_TYPE(le32_to_cpu(trb->control)) == TRB_LINK) {
++		/* The TRB was changed as link TRB, and the request was handled at ep_dequeue */
++		while (TRB_FIELD_TO_TYPE(le32_to_cpu(trb->control)) == TRB_LINK) {
+ 			trace_cdns3_complete_trb(priv_ep, trb);
+-			cdns3_move_deq_to_next_trb(priv_req);
++			cdns3_ep_inc_deq(priv_ep);
++			trb = priv_ep->trb_pool + priv_ep->dequeue;
+ 		}
+ 
+ 		if (!request->stream_id) {
+diff --git a/drivers/video/fbdev/i740fb.c b/drivers/video/fbdev/i740fb.c
+index ad5ced4ef972d..8fb4e01e1943f 100644
+--- a/drivers/video/fbdev/i740fb.c
++++ b/drivers/video/fbdev/i740fb.c
+@@ -662,6 +662,9 @@ static int i740fb_decode_var(const struct fb_var_screeninfo *var,
+ 
+ static int i740fb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ {
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	switch (var->bits_per_pixel) {
+ 	case 8:
+ 		var->red.offset	= var->green.offset = var->blue.offset = 0;
+diff --git a/drivers/video/fbdev/pxa3xx-gcu.c b/drivers/video/fbdev/pxa3xx-gcu.c
+index 9421d14d0eb02..9e9888e40c573 100644
+--- a/drivers/video/fbdev/pxa3xx-gcu.c
++++ b/drivers/video/fbdev/pxa3xx-gcu.c
+@@ -381,7 +381,7 @@ pxa3xx_gcu_write(struct file *file, const char *buff,
+ 	struct pxa3xx_gcu_batch	*buffer;
+ 	struct pxa3xx_gcu_priv *priv = to_pxa3xx_gcu_priv(file);
+ 
+-	int words = count / 4;
++	size_t words = count / 4;
+ 
+ 	/* Does not need to be atomic. There's a lock in user space,
+ 	 * but anyhow, this is just for statistics. */
+diff --git a/fs/afs/misc.c b/fs/afs/misc.c
+index 1d1a8debe4723..f1dc2162900a4 100644
+--- a/fs/afs/misc.c
++++ b/fs/afs/misc.c
+@@ -69,6 +69,7 @@ int afs_abort_to_error(u32 abort_code)
+ 		/* Unified AFS error table */
+ 	case UAEPERM:			return -EPERM;
+ 	case UAENOENT:			return -ENOENT;
++	case UAEAGAIN:			return -EAGAIN;
+ 	case UAEACCES:			return -EACCES;
+ 	case UAEBUSY:			return -EBUSY;
+ 	case UAEEXIST:			return -EEXIST;
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 6c06870f90184..fafb69d338c26 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -3244,6 +3244,9 @@ static ssize_t __cifs_writev(
+ 
+ ssize_t cifs_direct_writev(struct kiocb *iocb, struct iov_iter *from)
+ {
++	struct file *file = iocb->ki_filp;
++
++	cifs_revalidate_mapping(file->f_inode);
+ 	return __cifs_writev(iocb, from, true);
+ }
+ 
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 503a0056b60f2..383ae8744c337 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -209,8 +209,8 @@ smb_send_kvec(struct TCP_Server_Info *server, struct msghdr *smb_msg,
+ 
+ 	*sent = 0;
+ 
+-	smb_msg->msg_name = (struct sockaddr *) &server->dstaddr;
+-	smb_msg->msg_namelen = sizeof(struct sockaddr);
++	smb_msg->msg_name = NULL;
++	smb_msg->msg_namelen = 0;
+ 	smb_msg->msg_control = NULL;
+ 	smb_msg->msg_controllen = 0;
+ 	if (server->noblocksnd)
+diff --git a/fs/nfs/super.c b/fs/nfs/super.c
+index 4034102010f05..b3fcc27b95648 100644
+--- a/fs/nfs/super.c
++++ b/fs/nfs/super.c
+@@ -1029,22 +1029,31 @@ static void nfs_fill_super(struct super_block *sb, struct nfs_fs_context *ctx)
+ 	if (ctx && ctx->bsize)
+ 		sb->s_blocksize = nfs_block_size(ctx->bsize, &sb->s_blocksize_bits);
+ 
+-	if (server->nfs_client->rpc_ops->version != 2) {
+-		/* The VFS shouldn't apply the umask to mode bits. We will do
+-		 * so ourselves when necessary.
++	switch (server->nfs_client->rpc_ops->version) {
++	case 2:
++		sb->s_time_gran = 1000;
++		sb->s_time_min = 0;
++		sb->s_time_max = U32_MAX;
++		break;
++	case 3:
++		/*
++		 * The VFS shouldn't apply the umask to mode bits.
++		 * We will do so ourselves when necessary.
+ 		 */
+ 		sb->s_flags |= SB_POSIXACL;
+ 		sb->s_time_gran = 1;
+-		sb->s_export_op = &nfs_export_ops;
+-	} else
+-		sb->s_time_gran = 1000;
+-
+-	if (server->nfs_client->rpc_ops->version != 4) {
+ 		sb->s_time_min = 0;
+ 		sb->s_time_max = U32_MAX;
+-	} else {
++		sb->s_export_op = &nfs_export_ops;
++		break;
++	case 4:
++		sb->s_flags |= SB_POSIXACL;
++		sb->s_time_gran = 1;
+ 		sb->s_time_min = S64_MIN;
+ 		sb->s_time_max = S64_MAX;
++		if (server->caps & NFS_CAP_ATOMIC_OPEN_V1)
++			sb->s_export_op = &nfs_export_ops;
++		break;
+ 	}
+ 
+ 	sb->s_magic = NFS_SUPER_MAGIC;
+diff --git a/include/linux/of_device.h b/include/linux/of_device.h
+index 07ca187fc5e44..fe339106e02c4 100644
+--- a/include/linux/of_device.h
++++ b/include/linux/of_device.h
+@@ -113,8 +113,9 @@ static inline struct device_node *of_cpu_device_node_get(int cpu)
+ }
+ 
+ static inline int of_dma_configure_id(struct device *dev,
+-				   struct device_node *np,
+-				   bool force_dma)
++				      struct device_node *np,
++				      bool force_dma,
++				      const u32 *id)
+ {
+ 	return 0;
+ }
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index 1a0a9f820c69b..433b9e840b387 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -57,6 +57,7 @@ int cgroup_attach_task_all(struct task_struct *from, struct task_struct *tsk)
+ 	int retval = 0;
+ 
+ 	mutex_lock(&cgroup_mutex);
++	cpus_read_lock();
+ 	percpu_down_write(&cgroup_threadgroup_rwsem);
+ 	for_each_root(root) {
+ 		struct cgroup *from_cgrp;
+@@ -73,6 +74,7 @@ int cgroup_attach_task_all(struct task_struct *from, struct task_struct *tsk)
+ 			break;
+ 	}
+ 	percpu_up_write(&cgroup_threadgroup_rwsem);
++	cpus_read_unlock();
+ 	mutex_unlock(&cgroup_mutex);
+ 
+ 	return retval;
+diff --git a/kernel/trace/trace_preemptirq.c b/kernel/trace/trace_preemptirq.c
+index f4938040c2286..3aa55b8075608 100644
+--- a/kernel/trace/trace_preemptirq.c
++++ b/kernel/trace/trace_preemptirq.c
+@@ -94,15 +94,15 @@ __visible void trace_hardirqs_on_caller(unsigned long caller_addr)
+ 		this_cpu_write(tracing_irq_cpu, 0);
+ 	}
+ 
+-	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
+-	lockdep_hardirqs_on(CALLER_ADDR0);
++	lockdep_hardirqs_on_prepare(caller_addr);
++	lockdep_hardirqs_on(caller_addr);
+ }
+ EXPORT_SYMBOL(trace_hardirqs_on_caller);
+ NOKPROBE_SYMBOL(trace_hardirqs_on_caller);
+ 
+ __visible void trace_hardirqs_off_caller(unsigned long caller_addr)
+ {
+-	lockdep_hardirqs_off(CALLER_ADDR0);
++	lockdep_hardirqs_off(caller_addr);
+ 
+ 	if (!this_cpu_read(tracing_irq_cpu)) {
+ 		this_cpu_write(tracing_irq_cpu, 1);
+diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
+index f8ecad2b730e8..2a93e7b5fbd05 100644
+--- a/net/rxrpc/call_event.c
++++ b/net/rxrpc/call_event.c
+@@ -166,7 +166,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
+ 	_enter("{%d,%d}", call->tx_hard_ack, call->tx_top);
+ 
+ 	now = ktime_get_real();
+-	max_age = ktime_sub(now, jiffies_to_usecs(call->peer->rto_j));
++	max_age = ktime_sub_us(now, jiffies_to_usecs(call->peer->rto_j));
+ 
+ 	spin_lock_bh(&call->lock);
+ 
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index 8c2881054266d..ebbf1b03b62cf 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -424,6 +424,9 @@ static void rxrpc_local_processor(struct work_struct *work)
+ 		container_of(work, struct rxrpc_local, processor);
+ 	bool again;
+ 
++	if (local->dead)
++		return;
++
+ 	trace_rxrpc_local(local->debug_id, rxrpc_local_processing,
+ 			  atomic_read(&local->usage), NULL);
+ 
+diff --git a/scripts/mksysmap b/scripts/mksysmap
+index 9aa23d15862a0..ad8bbc52267d0 100755
+--- a/scripts/mksysmap
++++ b/scripts/mksysmap
+@@ -41,4 +41,4 @@
+ # so we just ignore them to let readprofile continue to work.
+ # (At least sparc64 has __crc_ in the middle).
+ 
+-$NM -n $1 | grep -v '\( [aNUw] \)\|\(__crc_\)\|\( \$[adt]\)\|\( \.L\)' > $2
++$NM -n $1 | grep -v '\( [aNUw] \)\|\(__crc_\)\|\( \$[adt]\)\|\( \.L\)\|\( L0\)' > $2
+diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c
+index 07787698b9738..1e44e337986e8 100644
+--- a/sound/pci/hda/hda_tegra.c
++++ b/sound/pci/hda/hda_tegra.c
+@@ -479,7 +479,8 @@ MODULE_DEVICE_TABLE(of, hda_tegra_match);
+ static int hda_tegra_probe(struct platform_device *pdev)
+ {
+ 	const unsigned int driver_flags = AZX_DCAPS_CORBRP_SELF_CLEAR |
+-					  AZX_DCAPS_PM_RUNTIME;
++					  AZX_DCAPS_PM_RUNTIME |
++					  AZX_DCAPS_4K_BDLE_BOUNDARY;
+ 	struct snd_card *card;
+ 	struct azx *chip;
+ 	struct hda_tegra *hda;
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index c662431bf13a5..b848e435b93fd 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -209,6 +209,7 @@ struct sigmatel_spec {
+ 
+ 	/* beep widgets */
+ 	hda_nid_t anabeep_nid;
++	bool beep_power_on;
+ 
+ 	/* SPDIF-out mux */
+ 	const char * const *spdif_labels;
+@@ -4447,6 +4448,28 @@ static int stac_suspend(struct hda_codec *codec)
+ 	stac_shutup(codec);
+ 	return 0;
+ }
++
++static int stac_check_power_status(struct hda_codec *codec, hda_nid_t nid)
++{
++#ifdef CONFIG_SND_HDA_INPUT_BEEP
++	struct sigmatel_spec *spec = codec->spec;
++#endif
++	int ret = snd_hda_gen_check_power_status(codec, nid);
++
++#ifdef CONFIG_SND_HDA_INPUT_BEEP
++	if (nid == spec->gen.beep_nid && codec->beep) {
++		if (codec->beep->enabled != spec->beep_power_on) {
++			spec->beep_power_on = codec->beep->enabled;
++			if (spec->beep_power_on)
++				snd_hda_power_up_pm(codec);
++			else
++				snd_hda_power_down_pm(codec);
++		}
++		ret |= spec->beep_power_on;
++	}
++#endif
++	return ret;
++}
+ #else
+ #define stac_suspend		NULL
+ #endif /* CONFIG_PM */
+@@ -4459,6 +4482,7 @@ static const struct hda_codec_ops stac_patch_ops = {
+ 	.unsol_event = snd_hda_jack_unsol_event,
+ #ifdef CONFIG_PM
+ 	.suspend = stac_suspend,
++	.check_power_status = stac_check_power_status,
+ #endif
+ 	.reboot_notify = stac_shutup,
+ };
+diff --git a/sound/soc/codecs/nau8824.c b/sound/soc/codecs/nau8824.c
+index c8ccfa2fff848..a95fe3fff1db8 100644
+--- a/sound/soc/codecs/nau8824.c
++++ b/sound/soc/codecs/nau8824.c
+@@ -1072,6 +1072,7 @@ static int nau8824_hw_params(struct snd_pcm_substream *substream,
+ 	struct snd_soc_component *component = dai->component;
+ 	struct nau8824 *nau8824 = snd_soc_component_get_drvdata(component);
+ 	unsigned int val_len = 0, osr, ctrl_val, bclk_fs, bclk_div;
++	int err = -EINVAL;
+ 
+ 	nau8824_sema_acquire(nau8824, HZ);
+ 
+@@ -1088,7 +1089,7 @@ static int nau8824_hw_params(struct snd_pcm_substream *substream,
+ 		osr &= NAU8824_DAC_OVERSAMPLE_MASK;
+ 		if (nau8824_clock_check(nau8824, substream->stream,
+ 			nau8824->fs, osr))
+-			return -EINVAL;
++			goto error;
+ 		regmap_update_bits(nau8824->regmap, NAU8824_REG_CLK_DIVIDER,
+ 			NAU8824_CLK_DAC_SRC_MASK,
+ 			osr_dac_sel[osr].clk_src << NAU8824_CLK_DAC_SRC_SFT);
+@@ -1098,7 +1099,7 @@ static int nau8824_hw_params(struct snd_pcm_substream *substream,
+ 		osr &= NAU8824_ADC_SYNC_DOWN_MASK;
+ 		if (nau8824_clock_check(nau8824, substream->stream,
+ 			nau8824->fs, osr))
+-			return -EINVAL;
++			goto error;
+ 		regmap_update_bits(nau8824->regmap, NAU8824_REG_CLK_DIVIDER,
+ 			NAU8824_CLK_ADC_SRC_MASK,
+ 			osr_adc_sel[osr].clk_src << NAU8824_CLK_ADC_SRC_SFT);
+@@ -1119,7 +1120,7 @@ static int nau8824_hw_params(struct snd_pcm_substream *substream,
+ 		else if (bclk_fs <= 256)
+ 			bclk_div = 0;
+ 		else
+-			return -EINVAL;
++			goto error;
+ 		regmap_update_bits(nau8824->regmap,
+ 			NAU8824_REG_PORT0_I2S_PCM_CTRL_2,
+ 			NAU8824_I2S_LRC_DIV_MASK | NAU8824_I2S_BLK_DIV_MASK,
+@@ -1140,15 +1141,17 @@ static int nau8824_hw_params(struct snd_pcm_substream *substream,
+ 		val_len |= NAU8824_I2S_DL_32;
+ 		break;
+ 	default:
+-		return -EINVAL;
++		goto error;
+ 	}
+ 
+ 	regmap_update_bits(nau8824->regmap, NAU8824_REG_PORT0_I2S_PCM_CTRL_1,
+ 		NAU8824_I2S_DL_MASK, val_len);
++	err = 0;
+ 
++ error:
+ 	nau8824_sema_release(nau8824);
+ 
+-	return 0;
++	return err;
+ }
+ 
+ static int nau8824_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+@@ -1157,8 +1160,6 @@ static int nau8824_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 	struct nau8824 *nau8824 = snd_soc_component_get_drvdata(component);
+ 	unsigned int ctrl1_val = 0, ctrl2_val = 0;
+ 
+-	nau8824_sema_acquire(nau8824, HZ);
+-
+ 	switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
+ 	case SND_SOC_DAIFMT_CBM_CFM:
+ 		ctrl2_val |= NAU8824_I2S_MS_MASTER;
+@@ -1200,6 +1201,8 @@ static int nau8824_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 		return -EINVAL;
+ 	}
+ 
++	nau8824_sema_acquire(nau8824, HZ);
++
+ 	regmap_update_bits(nau8824->regmap, NAU8824_REG_PORT0_I2S_PCM_CTRL_1,
+ 		NAU8824_I2S_DF_MASK | NAU8824_I2S_BP_MASK |
+ 		NAU8824_I2S_PCMB_EN, ctrl1_val);
+diff --git a/tools/include/uapi/asm/errno.h b/tools/include/uapi/asm/errno.h
+index d30439b4b8ab4..869379f91fe48 100644
+--- a/tools/include/uapi/asm/errno.h
++++ b/tools/include/uapi/asm/errno.h
+@@ -9,8 +9,8 @@
+ #include "../../../arch/alpha/include/uapi/asm/errno.h"
+ #elif defined(__mips__)
+ #include "../../../arch/mips/include/uapi/asm/errno.h"
+-#elif defined(__xtensa__)
+-#include "../../../arch/xtensa/include/uapi/asm/errno.h"
++#elif defined(__hppa__)
++#include "../../../arch/parisc/include/uapi/asm/errno.h"
+ #else
+ #include <asm-generic/errno.h>
+ #endif


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-09-28  9:30 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-09-28  9:30 UTC (permalink / raw
  To: gentoo-commits

commit:     909e1c9f3afbe76ba852d8efe3eec49b98577a7b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 28 09:30:12 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 28 09:30:12 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=909e1c9f

Linux patch 5.10.146

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1145_linux-5.10.146.patch | 4791 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4795 insertions(+)

diff --git a/0000_README b/0000_README
index 0670d018..ef3cbd20 100644
--- a/0000_README
+++ b/0000_README
@@ -623,6 +623,10 @@ Patch:  1144_linux-5.10.145.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.145
 
+Patch:  1145_linux-5.10.146.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.146
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1145_linux-5.10.146.patch b/1145_linux-5.10.146.patch
new file mode 100644
index 00000000..51366bac
--- /dev/null
+++ b/1145_linux-5.10.146.patch
@@ -0,0 +1,4791 @@
+diff --git a/Makefile b/Makefile
+index 76c85e40beea3..26a871eebe924 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 145
++SUBLEVEL = 146
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 1116a8d092c01..af65ab83e63d4 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1654,7 +1654,10 @@ config ARM64_BTI_KERNEL
+ 	depends on CC_HAS_BRANCH_PROT_PAC_RET_BTI
+ 	# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697
+ 	depends on !CC_IS_GCC || GCC_VERSION >= 100100
+-	depends on !(CC_IS_CLANG && GCOV_KERNEL)
++	# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106671
++	depends on !CC_IS_GCC
++	# https://github.com/llvm/llvm-project/commit/a88c722e687e6780dcd6a58718350dc76fcc4cc9
++	depends on !CC_IS_CLANG || CLANG_VERSION >= 120000
+ 	depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)
+ 	help
+ 	  Build the kernel with Branch Target Identification annotations
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-gru-bob.dts b/arch/arm64/boot/dts/rockchip/rk3399-gru-bob.dts
+index e6c1c94c8d69c..07737b65d7a3d 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-gru-bob.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-gru-bob.dts
+@@ -87,3 +87,8 @@
+ 		};
+ 	};
+ };
++
++&wlan_host_wake_l {
++	/* Kevin has an external pull up, but Bob does not. */
++	rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_up>;
++};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-gru-chromebook.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-gru-chromebook.dtsi
+index 1384dabbdf406..739937f70f8d0 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-gru-chromebook.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-gru-chromebook.dtsi
+@@ -237,6 +237,14 @@
+ &edp {
+ 	status = "okay";
+ 
++	/*
++	 * eDP PHY/clk don't sync reliably at anything other than 24 MHz. Only
++	 * set this here, because rk3399-gru.dtsi ensures we can generate this
++	 * off GPLL=600MHz, whereas some other RK3399 boards may not.
++	 */
++	assigned-clocks = <&cru PCLK_EDP>;
++	assigned-clock-rates = <24000000>;
++
+ 	ports {
+ 		edp_out: port@1 {
+ 			reg = <1>;
+@@ -395,6 +403,7 @@ ap_i2c_tp: &i2c5 {
+ 	};
+ 
+ 	wlan_host_wake_l: wlan-host-wake-l {
++		/* Kevin has an external pull up, but Bob does not */
+ 		rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_none>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index 544110aaffc56..95bc7a5f61dd5 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -102,7 +102,6 @@
+ 	vcc5v0_host: vcc5v0-host-regulator {
+ 		compatible = "regulator-fixed";
+ 		gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>;
+-		enable-active-low;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&vcc5v0_host_en>;
+ 		regulator-name = "vcc5v0_host";
+diff --git a/arch/mips/lantiq/clk.c b/arch/mips/lantiq/clk.c
+index 7a623684d9b5e..2d5a0bcb0cec1 100644
+--- a/arch/mips/lantiq/clk.c
++++ b/arch/mips/lantiq/clk.c
+@@ -50,6 +50,7 @@ struct clk *clk_get_io(void)
+ {
+ 	return &cpu_clk_generic[2];
+ }
++EXPORT_SYMBOL_GPL(clk_get_io);
+ 
+ struct clk *clk_get_ppe(void)
+ {
+diff --git a/arch/mips/loongson32/common/platform.c b/arch/mips/loongson32/common/platform.c
+index 794c96c2a4cdd..311dc1580bbde 100644
+--- a/arch/mips/loongson32/common/platform.c
++++ b/arch/mips/loongson32/common/platform.c
+@@ -98,7 +98,7 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
+ 	if (plat_dat->bus_id) {
+ 		__raw_writel(__raw_readl(LS1X_MUX_CTRL0) | GMAC1_USE_UART1 |
+ 			     GMAC1_USE_UART0, LS1X_MUX_CTRL0);
+-		switch (plat_dat->interface) {
++		switch (plat_dat->phy_interface) {
+ 		case PHY_INTERFACE_MODE_RGMII:
+ 			val &= ~(GMAC1_USE_TXCLK | GMAC1_USE_PWM23);
+ 			break;
+@@ -107,12 +107,12 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
+ 			break;
+ 		default:
+ 			pr_err("unsupported mii mode %d\n",
+-			       plat_dat->interface);
++			       plat_dat->phy_interface);
+ 			return -ENOTSUPP;
+ 		}
+ 		val &= ~GMAC1_SHUT;
+ 	} else {
+-		switch (plat_dat->interface) {
++		switch (plat_dat->phy_interface) {
+ 		case PHY_INTERFACE_MODE_RGMII:
+ 			val &= ~(GMAC0_USE_TXCLK | GMAC0_USE_PWM01);
+ 			break;
+@@ -121,7 +121,7 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
+ 			break;
+ 		default:
+ 			pr_err("unsupported mii mode %d\n",
+-			       plat_dat->interface);
++			       plat_dat->phy_interface);
+ 			return -ENOTSUPP;
+ 		}
+ 		val &= ~GMAC0_SHUT;
+@@ -131,7 +131,7 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
+ 	plat_dat = dev_get_platdata(&pdev->dev);
+ 
+ 	val &= ~PHY_INTF_SELI;
+-	if (plat_dat->interface == PHY_INTERFACE_MODE_RMII)
++	if (plat_dat->phy_interface == PHY_INTERFACE_MODE_RMII)
+ 		val |= 0x4 << PHY_INTF_SELI_SHIFT;
+ 	__raw_writel(val, LS1X_MUX_CTRL1);
+ 
+@@ -146,9 +146,9 @@ static struct plat_stmmacenet_data ls1x_eth0_pdata = {
+ 	.bus_id			= 0,
+ 	.phy_addr		= -1,
+ #if defined(CONFIG_LOONGSON1_LS1B)
+-	.interface		= PHY_INTERFACE_MODE_MII,
++	.phy_interface		= PHY_INTERFACE_MODE_MII,
+ #elif defined(CONFIG_LOONGSON1_LS1C)
+-	.interface		= PHY_INTERFACE_MODE_RMII,
++	.phy_interface		= PHY_INTERFACE_MODE_RMII,
+ #endif
+ 	.mdio_bus_data		= &ls1x_mdio_bus_data,
+ 	.dma_cfg		= &ls1x_eth_dma_cfg,
+@@ -186,7 +186,7 @@ struct platform_device ls1x_eth0_pdev = {
+ static struct plat_stmmacenet_data ls1x_eth1_pdata = {
+ 	.bus_id			= 1,
+ 	.phy_addr		= -1,
+-	.interface		= PHY_INTERFACE_MODE_MII,
++	.phy_interface		= PHY_INTERFACE_MODE_MII,
+ 	.mdio_bus_data		= &ls1x_mdio_bus_data,
+ 	.dma_cfg		= &ls1x_eth_dma_cfg,
+ 	.has_gmac		= 1,
+diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
+index bc6841867b512..529c123cf0a47 100644
+--- a/arch/riscv/kernel/signal.c
++++ b/arch/riscv/kernel/signal.c
+@@ -121,6 +121,8 @@ SYSCALL_DEFINE0(rt_sigreturn)
+ 	if (restore_altstack(&frame->uc.uc_stack))
+ 		goto badframe;
+ 
++	regs->cause = -1UL;
++
+ 	return regs->a0;
+ 
+ badframe:
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 38c63a78aba6f..660012ab7bfa5 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1275,6 +1275,7 @@ struct kvm_x86_ops {
+ 	int (*mem_enc_op)(struct kvm *kvm, void __user *argp);
+ 	int (*mem_enc_reg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
+ 	int (*mem_enc_unreg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
++	void (*guest_memory_reclaimed)(struct kvm *kvm);
+ 
+ 	int (*get_msr_feature)(struct kvm_msr_entry *entry);
+ 
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 7397cc449e2fc..c2b34998c27df 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -1177,6 +1177,14 @@ void sev_hardware_teardown(void)
+ 	sev_flush_asids();
+ }
+ 
++void sev_guest_memory_reclaimed(struct kvm *kvm)
++{
++	if (!sev_guest(kvm))
++		return;
++
++	wbinvd_on_all_cpus();
++}
++
+ void pre_sev_run(struct vcpu_svm *svm, int cpu)
+ {
+ 	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 442705517caf4..a0512a91760d2 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -4325,6 +4325,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
+ 	.mem_enc_op = svm_mem_enc_op,
+ 	.mem_enc_reg_region = svm_register_enc_region,
+ 	.mem_enc_unreg_region = svm_unregister_enc_region,
++	.guest_memory_reclaimed = sev_guest_memory_reclaimed,
+ 
+ 	.can_emulate_instruction = svm_can_emulate_instruction,
+ 
+diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
+index 10aba1dd264ed..f62d13fc6e01f 100644
+--- a/arch/x86/kvm/svm/svm.h
++++ b/arch/x86/kvm/svm/svm.h
+@@ -491,6 +491,8 @@ int svm_register_enc_region(struct kvm *kvm,
+ 			    struct kvm_enc_region *range);
+ int svm_unregister_enc_region(struct kvm *kvm,
+ 			      struct kvm_enc_region *range);
++void sev_guest_memory_reclaimed(struct kvm *kvm);
++
+ void pre_sev_run(struct vcpu_svm *svm, int cpu);
+ int __init sev_hardware_setup(void);
+ void sev_hardware_teardown(void);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index c5a08ec348e6f..f3473418dcd5d 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -8875,6 +8875,12 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+ 		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
+ }
+ 
++void kvm_arch_guest_memory_reclaimed(struct kvm *kvm)
++{
++	if (kvm_x86_ops.guest_memory_reclaimed)
++		kvm_x86_ops.guest_memory_reclaimed(kvm);
++}
++
+ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
+ {
+ 	if (!lapic_in_kernel(vcpu))
+diff --git a/drivers/dax/hmem/device.c b/drivers/dax/hmem/device.c
+index cb6401c9e9a4f..acf31cc1dbcca 100644
+--- a/drivers/dax/hmem/device.c
++++ b/drivers/dax/hmem/device.c
+@@ -15,6 +15,7 @@ void hmem_register_device(int target_nid, struct resource *r)
+ 		.start = r->start,
+ 		.end = r->end,
+ 		.flags = IORESOURCE_MEM,
++		.desc = IORES_DESC_SOFT_RESERVED,
+ 	};
+ 	struct platform_device *pdev;
+ 	struct memregion_info info;
+diff --git a/drivers/dma/ti/k3-udma-private.c b/drivers/dma/ti/k3-udma-private.c
+index 8563a392f30bf..dadab2feca080 100644
+--- a/drivers/dma/ti/k3-udma-private.c
++++ b/drivers/dma/ti/k3-udma-private.c
+@@ -31,14 +31,14 @@ struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property)
+ 	}
+ 
+ 	pdev = of_find_device_by_node(udma_node);
++	if (np != udma_node)
++		of_node_put(udma_node);
++
+ 	if (!pdev) {
+ 		pr_debug("UDMA device not found\n");
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 	}
+ 
+-	if (np != udma_node)
+-		of_node_put(udma_node);
+-
+ 	ud = platform_get_drvdata(pdev);
+ 	if (!ud) {
+ 		pr_debug("UDMA has not been probed\n");
+diff --git a/drivers/firmware/efi/libstub/secureboot.c b/drivers/firmware/efi/libstub/secureboot.c
+index 5efc524b14bef..a2be3a71bcf8e 100644
+--- a/drivers/firmware/efi/libstub/secureboot.c
++++ b/drivers/firmware/efi/libstub/secureboot.c
+@@ -19,7 +19,7 @@ static const efi_char16_t efi_SetupMode_name[] = L"SetupMode";
+ 
+ /* SHIM variables */
+ static const efi_guid_t shim_guid = EFI_SHIM_LOCK_GUID;
+-static const efi_char16_t shim_MokSBState_name[] = L"MokSBState";
++static const efi_char16_t shim_MokSBState_name[] = L"MokSBStateRT";
+ 
+ /*
+  * Determine whether we're in secure boot mode.
+@@ -53,8 +53,8 @@ enum efi_secureboot_mode efi_get_secureboot(void)
+ 
+ 	/*
+ 	 * See if a user has put the shim into insecure mode. If so, and if the
+-	 * variable doesn't have the runtime attribute set, we might as well
+-	 * honor that.
++	 * variable doesn't have the non-volatile attribute set, we might as
++	 * well honor that.
+ 	 */
+ 	size = sizeof(moksbstate);
+ 	status = get_efi_var(shim_MokSBState_name, &shim_guid,
+@@ -63,7 +63,7 @@ enum efi_secureboot_mode efi_get_secureboot(void)
+ 	/* If it fails, we don't care why. Default to secure */
+ 	if (status != EFI_SUCCESS)
+ 		goto secure_boot_enabled;
+-	if (!(attr & EFI_VARIABLE_RUNTIME_ACCESS) && moksbstate == 1)
++	if (!(attr & EFI_VARIABLE_NON_VOLATILE) && moksbstate == 1)
+ 		return efi_secureboot_mode_disabled;
+ 
+ secure_boot_enabled:
+diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
+index 3672539cb96eb..5d0f1b1966fc6 100644
+--- a/drivers/firmware/efi/libstub/x86-stub.c
++++ b/drivers/firmware/efi/libstub/x86-stub.c
+@@ -414,6 +414,13 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
+ 	hdr->ramdisk_image = 0;
+ 	hdr->ramdisk_size = 0;
+ 
++	/*
++	 * Disregard any setup data that was provided by the bootloader:
++	 * setup_data could be pointing anywhere, and we have no way of
++	 * authenticating or validating the payload.
++	 */
++	hdr->setup_data = 0;
++
+ 	efi_stub_entry(handle, sys_table_arg, boot_params);
+ 	/* not reached */
+ 
+diff --git a/drivers/gpio/gpio-mockup.c b/drivers/gpio/gpio-mockup.c
+index 780cba4e30d0e..876027fdefc95 100644
+--- a/drivers/gpio/gpio-mockup.c
++++ b/drivers/gpio/gpio-mockup.c
+@@ -604,9 +604,9 @@ static int __init gpio_mockup_init(void)
+ 
+ static void __exit gpio_mockup_exit(void)
+ {
++	gpio_mockup_unregister_pdevs();
+ 	debugfs_remove_recursive(gpio_mockup_dbg_dir);
+ 	platform_driver_unregister(&gpio_mockup_driver);
+-	gpio_mockup_unregister_pdevs();
+ }
+ 
+ module_init(gpio_mockup_init);
+diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
+index 2613881a66e66..381cfa26a4a1a 100644
+--- a/drivers/gpio/gpiolib-cdev.c
++++ b/drivers/gpio/gpiolib-cdev.c
+@@ -1769,7 +1769,6 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
+ 		ret = -ENODEV;
+ 		goto out_free_le;
+ 	}
+-	le->irq = irq;
+ 
+ 	if (eflags & GPIOEVENT_REQUEST_RISING_EDGE)
+ 		irqflags |= test_bit(FLAG_ACTIVE_LOW, &desc->flags) ?
+@@ -1783,7 +1782,7 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
+ 	init_waitqueue_head(&le->wait);
+ 
+ 	/* Request a thread to read the events */
+-	ret = request_threaded_irq(le->irq,
++	ret = request_threaded_irq(irq,
+ 				   lineevent_irq_handler,
+ 				   lineevent_irq_thread,
+ 				   irqflags,
+@@ -1792,6 +1791,8 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
+ 	if (ret)
+ 		goto out_free_le;
+ 
++	le->irq = irq;
++
+ 	fd = get_unused_fd_flags(O_RDONLY | O_CLOEXEC);
+ 	if (fd < 0) {
+ 		ret = fd;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index f262c4e7a48a2..881045e600af2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2047,6 +2047,11 @@ static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
+ 				amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_ATOMBIOS_INIT_FAIL, 0, 0);
+ 				return r;
+ 			}
++
++			/*get pf2vf msg info at it's earliest time*/
++			if (amdgpu_sriov_vf(adev))
++				amdgpu_virt_init_data_exchange(adev);
++
+ 		}
+ 	}
+ 
+@@ -2174,8 +2179,20 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
+ 		}
+ 		adev->ip_blocks[i].status.sw = true;
+ 
+-		/* need to do gmc hw init early so we can allocate gpu mem */
+-		if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) {
++		if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_COMMON) {
++			/* need to do common hw init early so everything is set up for gmc */
++			r = adev->ip_blocks[i].version->funcs->hw_init((void *)adev);
++			if (r) {
++				DRM_ERROR("hw_init %d failed %d\n", i, r);
++				goto init_failed;
++			}
++			adev->ip_blocks[i].status.hw = true;
++		} else if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) {
++			/* need to do gmc hw init early so we can allocate gpu mem */
++			/* Try to reserve bad pages early */
++			if (amdgpu_sriov_vf(adev))
++				amdgpu_virt_exchange_data(adev);
++
+ 			r = amdgpu_device_vram_scratch_init(adev);
+ 			if (r) {
+ 				DRM_ERROR("amdgpu_vram_scratch_init failed %d\n", r);
+@@ -2753,8 +2770,8 @@ static int amdgpu_device_ip_reinit_early_sriov(struct amdgpu_device *adev)
+ 	int i, r;
+ 
+ 	static enum amd_ip_block_type ip_order[] = {
+-		AMD_IP_BLOCK_TYPE_GMC,
+ 		AMD_IP_BLOCK_TYPE_COMMON,
++		AMD_IP_BLOCK_TYPE_GMC,
+ 		AMD_IP_BLOCK_TYPE_PSP,
+ 		AMD_IP_BLOCK_TYPE_IH,
+ 	};
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+index 7cc7af2a6822e..947f50e402ba0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+@@ -35,6 +35,7 @@
+ #include <linux/pci.h>
+ #include <linux/pm_runtime.h>
+ #include <drm/drm_crtc_helper.h>
++#include <drm/drm_damage_helper.h>
+ #include <drm/drm_edid.h>
+ #include <drm/drm_gem_framebuffer_helper.h>
+ #include <drm/drm_fb_helper.h>
+@@ -498,6 +499,7 @@ bool amdgpu_display_ddc_probe(struct amdgpu_connector *amdgpu_connector,
+ static const struct drm_framebuffer_funcs amdgpu_fb_funcs = {
+ 	.destroy = drm_gem_fb_destroy,
+ 	.create_handle = drm_gem_fb_create_handle,
++	.dirty = drm_atomic_helper_dirtyfb,
+ };
+ 
+ uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index eb22a190c2423..3638f0e12a2b8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -1979,15 +1979,12 @@ int amdgpu_ras_request_reset_on_boot(struct amdgpu_device *adev,
+ 	return 0;
+ }
+ 
+-static int amdgpu_ras_check_asic_type(struct amdgpu_device *adev)
++static bool amdgpu_ras_asic_supported(struct amdgpu_device *adev)
+ {
+-	if (adev->asic_type != CHIP_VEGA10 &&
+-		adev->asic_type != CHIP_VEGA20 &&
+-		adev->asic_type != CHIP_ARCTURUS &&
+-		adev->asic_type != CHIP_SIENNA_CICHLID)
+-		return 1;
+-	else
+-		return 0;
++	return adev->asic_type == CHIP_VEGA10 ||
++		adev->asic_type == CHIP_VEGA20 ||
++		adev->asic_type == CHIP_ARCTURUS ||
++		adev->asic_type == CHIP_SIENNA_CICHLID;
+ }
+ 
+ /*
+@@ -2006,7 +2003,7 @@ static void amdgpu_ras_check_supported(struct amdgpu_device *adev,
+ 	*supported = 0;
+ 
+ 	if (amdgpu_sriov_vf(adev) || !adev->is_atom_fw ||
+-		amdgpu_ras_check_asic_type(adev))
++	    !amdgpu_ras_asic_supported(adev))
+ 		return;
+ 
+ 	if (amdgpu_atomfirmware_mem_ecc_supported(adev)) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+index e7678ba8fdcf8..16bfb36c27e41 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+@@ -580,16 +580,34 @@ void amdgpu_virt_fini_data_exchange(struct amdgpu_device *adev)
+ 
+ void amdgpu_virt_init_data_exchange(struct amdgpu_device *adev)
+ {
+-	uint64_t bp_block_offset = 0;
+-	uint32_t bp_block_size = 0;
+-	struct amd_sriov_msg_pf2vf_info *pf2vf_v2 = NULL;
+-
+ 	adev->virt.fw_reserve.p_pf2vf = NULL;
+ 	adev->virt.fw_reserve.p_vf2pf = NULL;
+ 	adev->virt.vf2pf_update_interval_ms = 0;
+ 
+ 	if (adev->mman.fw_vram_usage_va != NULL) {
+-		adev->virt.vf2pf_update_interval_ms = 2000;
++		/* go through this logic in ip_init and reset to init workqueue*/
++		amdgpu_virt_exchange_data(adev);
++
++		INIT_DELAYED_WORK(&adev->virt.vf2pf_work, amdgpu_virt_update_vf2pf_work_item);
++		schedule_delayed_work(&(adev->virt.vf2pf_work), msecs_to_jiffies(adev->virt.vf2pf_update_interval_ms));
++	} else if (adev->bios != NULL) {
++		/* got through this logic in early init stage to get necessary flags, e.g. rlcg_acc related*/
++		adev->virt.fw_reserve.p_pf2vf =
++			(struct amd_sriov_msg_pf2vf_info_header *)
++			(adev->bios + (AMD_SRIOV_MSG_PF2VF_OFFSET_KB << 10));
++
++		amdgpu_virt_read_pf2vf_data(adev);
++	}
++}
++
++
++void amdgpu_virt_exchange_data(struct amdgpu_device *adev)
++{
++	uint64_t bp_block_offset = 0;
++	uint32_t bp_block_size = 0;
++	struct amd_sriov_msg_pf2vf_info *pf2vf_v2 = NULL;
++
++	if (adev->mman.fw_vram_usage_va != NULL) {
+ 
+ 		adev->virt.fw_reserve.p_pf2vf =
+ 			(struct amd_sriov_msg_pf2vf_info_header *)
+@@ -616,13 +634,9 @@ void amdgpu_virt_init_data_exchange(struct amdgpu_device *adev)
+ 					amdgpu_virt_add_bad_page(adev, bp_block_offset, bp_block_size);
+ 			}
+ 	}
+-
+-	if (adev->virt.vf2pf_update_interval_ms != 0) {
+-		INIT_DELAYED_WORK(&adev->virt.vf2pf_work, amdgpu_virt_update_vf2pf_work_item);
+-		schedule_delayed_work(&(adev->virt.vf2pf_work), adev->virt.vf2pf_update_interval_ms);
+-	}
+ }
+ 
++
+ void amdgpu_detect_virtualization(struct amdgpu_device *adev)
+ {
+ 	uint32_t reg;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+index 8dd624c20f895..77b9d37bfa1b2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+@@ -271,6 +271,7 @@ int amdgpu_virt_alloc_mm_table(struct amdgpu_device *adev);
+ void amdgpu_virt_free_mm_table(struct amdgpu_device *adev);
+ void amdgpu_virt_release_ras_err_handler_data(struct amdgpu_device *adev);
+ void amdgpu_virt_init_data_exchange(struct amdgpu_device *adev);
++void amdgpu_virt_exchange_data(struct amdgpu_device *adev);
+ void amdgpu_virt_fini_data_exchange(struct amdgpu_device *adev);
+ void amdgpu_detect_virtualization(struct amdgpu_device *adev);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index 1f2e2460e121e..a1a8e026b9fa6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -1475,6 +1475,11 @@ static int sdma_v4_0_start(struct amdgpu_device *adev)
+ 		WREG32_SDMA(i, mmSDMA0_CNTL, temp);
+ 
+ 		if (!amdgpu_sriov_vf(adev)) {
++			ring = &adev->sdma.instance[i].ring;
++			adev->nbio.funcs->sdma_doorbell_range(adev, i,
++				ring->use_doorbell, ring->doorbell_index,
++				adev->doorbell_index.sdma_doorbell_range);
++
+ 			/* unhalt engine */
+ 			temp = RREG32_SDMA(i, mmSDMA0_F32_CNTL);
+ 			temp = REG_SET_FIELD(temp, SDMA0_F32_CNTL, HALT, 0);
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index 7212b9900e0ab..abd649285a22d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -1332,25 +1332,6 @@ static int soc15_common_sw_fini(void *handle)
+ 	return 0;
+ }
+ 
+-static void soc15_doorbell_range_init(struct amdgpu_device *adev)
+-{
+-	int i;
+-	struct amdgpu_ring *ring;
+-
+-	/* sdma/ih doorbell range are programed by hypervisor */
+-	if (!amdgpu_sriov_vf(adev)) {
+-		for (i = 0; i < adev->sdma.num_instances; i++) {
+-			ring = &adev->sdma.instance[i].ring;
+-			adev->nbio.funcs->sdma_doorbell_range(adev, i,
+-				ring->use_doorbell, ring->doorbell_index,
+-				adev->doorbell_index.sdma_doorbell_range);
+-		}
+-
+-		adev->nbio.funcs->ih_doorbell_range(adev, adev->irq.ih.use_doorbell,
+-						adev->irq.ih.doorbell_index);
+-	}
+-}
+-
+ static int soc15_common_hw_init(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+@@ -1370,12 +1351,6 @@ static int soc15_common_hw_init(void *handle)
+ 
+ 	/* enable the doorbell aperture */
+ 	soc15_enable_doorbell_aperture(adev, true);
+-	/* HW doorbell routing policy: doorbell writing not
+-	 * in SDMA/IH/MM/ACV range will be routed to CP. So
+-	 * we need to init SDMA/IH/MM/ACV doorbell range prior
+-	 * to CP ip block init and ring test.
+-	 */
+-	soc15_doorbell_range_init(adev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+index 2663f1b318420..e427f4ffa0807 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+@@ -6653,8 +6653,7 @@ static double CalculateUrgentLatency(
+ 	return ret;
+ }
+ 
+-
+-static void UseMinimumDCFCLK(
++static noinline_for_stack void UseMinimumDCFCLK(
+ 		struct display_mode_lib *mode_lib,
+ 		int MaxInterDCNTileRepeaters,
+ 		int MaxPrefetchMode,
+diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+index 09bc2c249e1af..3c4390d71a827 100644
+--- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
++++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+@@ -1524,6 +1524,7 @@ static void interpolate_user_regamma(uint32_t hw_points_num,
+ 	struct fixed31_32 lut2;
+ 	struct fixed31_32 delta_lut;
+ 	struct fixed31_32 delta_index;
++	const struct fixed31_32 one = dc_fixpt_from_int(1);
+ 
+ 	i = 0;
+ 	/* fixed_pt library has problems handling too small values */
+@@ -1552,6 +1553,9 @@ static void interpolate_user_regamma(uint32_t hw_points_num,
+ 			} else
+ 				hw_x = coordinates_x[i].x;
+ 
++			if (dc_fixpt_le(one, hw_x))
++				hw_x = one;
++
+ 			norm_x = dc_fixpt_mul(norm_factor, hw_x);
+ 			index = dc_fixpt_floor(norm_x);
+ 			if (index < 0 || index > 255)
+diff --git a/drivers/gpu/drm/gma500/gma_display.c b/drivers/gpu/drm/gma500/gma_display.c
+index 3df6d6e850f52..70148ae16f146 100644
+--- a/drivers/gpu/drm/gma500/gma_display.c
++++ b/drivers/gpu/drm/gma500/gma_display.c
+@@ -529,15 +529,18 @@ int gma_crtc_page_flip(struct drm_crtc *crtc,
+ 		WARN_ON(drm_crtc_vblank_get(crtc) != 0);
+ 
+ 		gma_crtc->page_flip_event = event;
++		spin_unlock_irqrestore(&dev->event_lock, flags);
+ 
+ 		/* Call this locked if we want an event at vblank interrupt. */
+ 		ret = crtc_funcs->mode_set_base(crtc, crtc->x, crtc->y, old_fb);
+ 		if (ret) {
+-			gma_crtc->page_flip_event = NULL;
+-			drm_crtc_vblank_put(crtc);
++			spin_lock_irqsave(&dev->event_lock, flags);
++			if (gma_crtc->page_flip_event) {
++				gma_crtc->page_flip_event = NULL;
++				drm_crtc_vblank_put(crtc);
++			}
++			spin_unlock_irqrestore(&dev->event_lock, flags);
+ 		}
+-
+-		spin_unlock_irqrestore(&dev->event_lock, flags);
+ 	} else {
+ 		ret = crtc_funcs->mode_set_base(crtc, crtc->x, crtc->y, old_fb);
+ 	}
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/Kconfig b/drivers/gpu/drm/hisilicon/hibmc/Kconfig
+index 43943e9802036..4e41c144a2902 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/Kconfig
++++ b/drivers/gpu/drm/hisilicon/hibmc/Kconfig
+@@ -1,7 +1,8 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ config DRM_HISI_HIBMC
+ 	tristate "DRM Support for Hisilicon Hibmc"
+-	depends on DRM && PCI && ARM64
++	depends on DRM && PCI && (ARM64 || COMPILE_TEST)
++	depends on MMU
+ 	select DRM_KMS_HELPER
+ 	select DRM_VRAM_HELPER
+ 	select DRM_TTM
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index 7d37d2a01e3cf..146c4d04f572d 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -668,6 +668,16 @@ static void mtk_dsi_poweroff(struct mtk_dsi *dsi)
+ 	if (--dsi->refcount != 0)
+ 		return;
+ 
++	/*
++	 * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since
++	 * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),
++	 * which needs irq for vblank, and mtk_dsi_stop() will disable irq.
++	 * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),
++	 * after dsi is fully set.
++	 */
++	mtk_dsi_stop(dsi);
++
++	mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);
+ 	mtk_dsi_reset_engine(dsi);
+ 	mtk_dsi_lane0_ulp_mode_enter(dsi);
+ 	mtk_dsi_clk_ulp_mode_enter(dsi);
+@@ -718,17 +728,6 @@ static void mtk_output_dsi_disable(struct mtk_dsi *dsi)
+ 	if (!dsi->enabled)
+ 		return;
+ 
+-	/*
+-	 * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since
+-	 * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),
+-	 * which needs irq for vblank, and mtk_dsi_stop() will disable irq.
+-	 * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),
+-	 * after dsi is fully set.
+-	 */
+-	mtk_dsi_stop(dsi);
+-
+-	mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);
+-
+ 	dsi->enabled = false;
+ }
+ 
+@@ -791,10 +790,13 @@ static void mtk_dsi_bridge_atomic_post_disable(struct drm_bridge *bridge,
+ 
+ static const struct drm_bridge_funcs mtk_dsi_bridge_funcs = {
+ 	.attach = mtk_dsi_bridge_attach,
++	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
+ 	.atomic_disable = mtk_dsi_bridge_atomic_disable,
++	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
+ 	.atomic_enable = mtk_dsi_bridge_atomic_enable,
+ 	.atomic_pre_enable = mtk_dsi_bridge_atomic_pre_enable,
+ 	.atomic_post_disable = mtk_dsi_bridge_atomic_post_disable,
++	.atomic_reset = drm_atomic_helper_bridge_reset,
+ 	.mode_set = mtk_dsi_bridge_mode_set,
+ };
+ 
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index bf2c845ef3a20..b7b37082a9d72 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2201,7 +2201,7 @@ static const struct panel_desc innolux_g121i1_l01 = {
+ 		.enable = 200,
+ 		.disable = 20,
+ 	},
+-	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
++	.bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG,
+ 	.connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+ 
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+index dec54c70e0082..857c47c69ef15 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+@@ -276,8 +276,9 @@ static int cdn_dp_connector_get_modes(struct drm_connector *connector)
+ 	return ret;
+ }
+ 
+-static int cdn_dp_connector_mode_valid(struct drm_connector *connector,
+-				       struct drm_display_mode *mode)
++static enum drm_mode_status
++cdn_dp_connector_mode_valid(struct drm_connector *connector,
++			    struct drm_display_mode *mode)
+ {
+ 	struct cdn_dp_device *dp = connector_to_dp(connector);
+ 	struct drm_display_info *display_info = &dp->connector.display_info;
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 5d820037e2918..514279dac7cb5 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -2251,7 +2251,7 @@ int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
+ 			bool fb_overlap_ok)
+ {
+ 	struct resource *iter, *shadow;
+-	resource_size_t range_min, range_max, start;
++	resource_size_t range_min, range_max, start, end;
+ 	const char *dev_n = dev_name(&device_obj->device);
+ 	int retval;
+ 
+@@ -2286,6 +2286,14 @@ int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
+ 		range_max = iter->end;
+ 		start = (range_min + align - 1) & ~(align - 1);
+ 		for (; start + size - 1 <= range_max; start += align) {
++			end = start + size - 1;
++
++			/* Skip the whole fb_mmio region if not fb_overlap_ok */
++			if (!fb_overlap_ok && fb_mmio &&
++			    (((start >= fb_mmio->start) && (start <= fb_mmio->end)) ||
++			     ((end >= fb_mmio->start) && (end <= fb_mmio->end))))
++				continue;
++
+ 			shadow = __request_region(iter, start, size, NULL,
+ 						  IORESOURCE_BUSY);
+ 			if (!shadow)
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index d3719df1c40dc..be4ad516293b0 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -1289,7 +1289,7 @@ static int i2c_imx_remove(struct platform_device *pdev)
+ 	if (i2c_imx->dma)
+ 		i2c_imx_dma_free(i2c_imx);
+ 
+-	if (ret == 0) {
++	if (ret >= 0) {
+ 		/* setup chip registers to defaults */
+ 		imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IADR);
+ 		imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IFDR);
+diff --git a/drivers/i2c/busses/i2c-mlxbf.c b/drivers/i2c/busses/i2c-mlxbf.c
+index ab261d762dea3..bea82a787b4f3 100644
+--- a/drivers/i2c/busses/i2c-mlxbf.c
++++ b/drivers/i2c/busses/i2c-mlxbf.c
+@@ -6,6 +6,7 @@
+  */
+ 
+ #include <linux/acpi.h>
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/interrupt.h>
+@@ -63,13 +64,14 @@
+  */
+ #define MLXBF_I2C_TYU_PLL_OUT_FREQ  (400 * 1000 * 1000)
+ /* Reference clock for Bluefield - 156 MHz. */
+-#define MLXBF_I2C_PLL_IN_FREQ       (156 * 1000 * 1000)
++#define MLXBF_I2C_PLL_IN_FREQ       156250000ULL
+ 
+ /* Constant used to determine the PLL frequency. */
+-#define MLNXBF_I2C_COREPLL_CONST    16384
++#define MLNXBF_I2C_COREPLL_CONST    16384ULL
++
++#define MLXBF_I2C_FREQUENCY_1GHZ  1000000000ULL
+ 
+ /* PLL registers. */
+-#define MLXBF_I2C_CORE_PLL_REG0         0x0
+ #define MLXBF_I2C_CORE_PLL_REG1         0x4
+ #define MLXBF_I2C_CORE_PLL_REG2         0x8
+ 
+@@ -187,22 +189,15 @@ enum {
+ #define MLXBF_I2C_COREPLL_FREQ          MLXBF_I2C_TYU_PLL_OUT_FREQ
+ 
+ /* Core PLL TYU configuration. */
+-#define MLXBF_I2C_COREPLL_CORE_F_TYU_MASK   GENMASK(12, 0)
+-#define MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK  GENMASK(3, 0)
+-#define MLXBF_I2C_COREPLL_CORE_R_TYU_MASK   GENMASK(5, 0)
+-
+-#define MLXBF_I2C_COREPLL_CORE_F_TYU_SHIFT  3
+-#define MLXBF_I2C_COREPLL_CORE_OD_TYU_SHIFT 16
+-#define MLXBF_I2C_COREPLL_CORE_R_TYU_SHIFT  20
++#define MLXBF_I2C_COREPLL_CORE_F_TYU_MASK   GENMASK(15, 3)
++#define MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK  GENMASK(19, 16)
++#define MLXBF_I2C_COREPLL_CORE_R_TYU_MASK   GENMASK(25, 20)
+ 
+ /* Core PLL YU configuration. */
+ #define MLXBF_I2C_COREPLL_CORE_F_YU_MASK    GENMASK(25, 0)
+ #define MLXBF_I2C_COREPLL_CORE_OD_YU_MASK   GENMASK(3, 0)
+-#define MLXBF_I2C_COREPLL_CORE_R_YU_MASK    GENMASK(5, 0)
++#define MLXBF_I2C_COREPLL_CORE_R_YU_MASK    GENMASK(31, 26)
+ 
+-#define MLXBF_I2C_COREPLL_CORE_F_YU_SHIFT   0
+-#define MLXBF_I2C_COREPLL_CORE_OD_YU_SHIFT  1
+-#define MLXBF_I2C_COREPLL_CORE_R_YU_SHIFT   26
+ 
+ /* Core PLL frequency. */
+ static u64 mlxbf_i2c_corepll_frequency;
+@@ -485,8 +480,6 @@ static struct mutex mlxbf_i2c_bus_lock;
+ #define MLXBF_I2C_MASK_8    GENMASK(7, 0)
+ #define MLXBF_I2C_MASK_16   GENMASK(15, 0)
+ 
+-#define MLXBF_I2C_FREQUENCY_1GHZ  1000000000
+-
+ /*
+  * Function to poll a set of bits at a specific address; it checks whether
+  * the bits are equal to zero when eq_zero is set to 'true', and not equal
+@@ -675,7 +668,7 @@ static int mlxbf_i2c_smbus_enable(struct mlxbf_i2c_priv *priv, u8 slave,
+ 	/* Clear status bits. */
+ 	writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_STATUS);
+ 	/* Set the cause data. */
+-	writel(~0x0, priv->smbus->io + MLXBF_I2C_CAUSE_OR_CLEAR);
++	writel(~0x0, priv->mst_cause->io + MLXBF_I2C_CAUSE_OR_CLEAR);
+ 	/* Zero PEC byte. */
+ 	writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_PEC);
+ 	/* Zero byte count. */
+@@ -744,6 +737,9 @@ mlxbf_i2c_smbus_start_transaction(struct mlxbf_i2c_priv *priv,
+ 		if (flags & MLXBF_I2C_F_WRITE) {
+ 			write_en = 1;
+ 			write_len += operation->length;
++			if (data_idx + operation->length >
++					MLXBF_I2C_MASTER_DATA_DESC_SIZE)
++				return -ENOBUFS;
+ 			memcpy(data_desc + data_idx,
+ 			       operation->buffer, operation->length);
+ 			data_idx += operation->length;
+@@ -1413,24 +1409,19 @@ static int mlxbf_i2c_init_master(struct platform_device *pdev,
+ 	return 0;
+ }
+ 
+-static u64 mlxbf_calculate_freq_from_tyu(struct mlxbf_i2c_resource *corepll_res)
++static u64 mlxbf_i2c_calculate_freq_from_tyu(struct mlxbf_i2c_resource *corepll_res)
+ {
+-	u64 core_frequency, pad_frequency;
++	u64 core_frequency;
+ 	u8 core_od, core_r;
+ 	u32 corepll_val;
+ 	u16 core_f;
+ 
+-	pad_frequency = MLXBF_I2C_PLL_IN_FREQ;
+-
+ 	corepll_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG1);
+ 
+ 	/* Get Core PLL configuration bits. */
+-	core_f = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_F_TYU_SHIFT) &
+-			MLXBF_I2C_COREPLL_CORE_F_TYU_MASK;
+-	core_od = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_OD_TYU_SHIFT) &
+-			MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK;
+-	core_r = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_R_TYU_SHIFT) &
+-			MLXBF_I2C_COREPLL_CORE_R_TYU_MASK;
++	core_f = FIELD_GET(MLXBF_I2C_COREPLL_CORE_F_TYU_MASK, corepll_val);
++	core_od = FIELD_GET(MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK, corepll_val);
++	core_r = FIELD_GET(MLXBF_I2C_COREPLL_CORE_R_TYU_MASK, corepll_val);
+ 
+ 	/*
+ 	 * Compute PLL output frequency as follow:
+@@ -1442,31 +1433,26 @@ static u64 mlxbf_calculate_freq_from_tyu(struct mlxbf_i2c_resource *corepll_res)
+ 	 * Where PLL_OUT_FREQ and PLL_IN_FREQ refer to CoreFrequency
+ 	 * and PadFrequency, respectively.
+ 	 */
+-	core_frequency = pad_frequency * (++core_f);
++	core_frequency = MLXBF_I2C_PLL_IN_FREQ * (++core_f);
+ 	core_frequency /= (++core_r) * (++core_od);
+ 
+ 	return core_frequency;
+ }
+ 
+-static u64 mlxbf_calculate_freq_from_yu(struct mlxbf_i2c_resource *corepll_res)
++static u64 mlxbf_i2c_calculate_freq_from_yu(struct mlxbf_i2c_resource *corepll_res)
+ {
+ 	u32 corepll_reg1_val, corepll_reg2_val;
+-	u64 corepll_frequency, pad_frequency;
++	u64 corepll_frequency;
+ 	u8 core_od, core_r;
+ 	u32 core_f;
+ 
+-	pad_frequency = MLXBF_I2C_PLL_IN_FREQ;
+-
+ 	corepll_reg1_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG1);
+ 	corepll_reg2_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG2);
+ 
+ 	/* Get Core PLL configuration bits */
+-	core_f = rol32(corepll_reg1_val, MLXBF_I2C_COREPLL_CORE_F_YU_SHIFT) &
+-			MLXBF_I2C_COREPLL_CORE_F_YU_MASK;
+-	core_r = rol32(corepll_reg1_val, MLXBF_I2C_COREPLL_CORE_R_YU_SHIFT) &
+-			MLXBF_I2C_COREPLL_CORE_R_YU_MASK;
+-	core_od = rol32(corepll_reg2_val,  MLXBF_I2C_COREPLL_CORE_OD_YU_SHIFT) &
+-			MLXBF_I2C_COREPLL_CORE_OD_YU_MASK;
++	core_f = FIELD_GET(MLXBF_I2C_COREPLL_CORE_F_YU_MASK, corepll_reg1_val);
++	core_r = FIELD_GET(MLXBF_I2C_COREPLL_CORE_R_YU_MASK, corepll_reg1_val);
++	core_od = FIELD_GET(MLXBF_I2C_COREPLL_CORE_OD_YU_MASK, corepll_reg2_val);
+ 
+ 	/*
+ 	 * Compute PLL output frequency as follow:
+@@ -1478,7 +1464,7 @@ static u64 mlxbf_calculate_freq_from_yu(struct mlxbf_i2c_resource *corepll_res)
+ 	 * Where PLL_OUT_FREQ and PLL_IN_FREQ refer to CoreFrequency
+ 	 * and PadFrequency, respectively.
+ 	 */
+-	corepll_frequency = (pad_frequency * core_f) / MLNXBF_I2C_COREPLL_CONST;
++	corepll_frequency = (MLXBF_I2C_PLL_IN_FREQ * core_f) / MLNXBF_I2C_COREPLL_CONST;
+ 	corepll_frequency /= (++core_r) * (++core_od);
+ 
+ 	return corepll_frequency;
+@@ -2186,14 +2172,14 @@ static struct mlxbf_i2c_chip_info mlxbf_i2c_chip[] = {
+ 			[1] = &mlxbf_i2c_corepll_res[MLXBF_I2C_CHIP_TYPE_1],
+ 			[2] = &mlxbf_i2c_gpio_res[MLXBF_I2C_CHIP_TYPE_1]
+ 		},
+-		.calculate_freq = mlxbf_calculate_freq_from_tyu
++		.calculate_freq = mlxbf_i2c_calculate_freq_from_tyu
+ 	},
+ 	[MLXBF_I2C_CHIP_TYPE_2] = {
+ 		.type = MLXBF_I2C_CHIP_TYPE_2,
+ 		.shared_res = {
+ 			[0] = &mlxbf_i2c_corepll_res[MLXBF_I2C_CHIP_TYPE_2]
+ 		},
+-		.calculate_freq = mlxbf_calculate_freq_from_yu
++		.calculate_freq = mlxbf_i2c_calculate_freq_from_yu
+ 	}
+ };
+ 
+diff --git a/drivers/interconnect/qcom/icc-rpmh.c b/drivers/interconnect/qcom/icc-rpmh.c
+index f6fae64861ce8..27cc5f03611cb 100644
+--- a/drivers/interconnect/qcom/icc-rpmh.c
++++ b/drivers/interconnect/qcom/icc-rpmh.c
+@@ -20,13 +20,18 @@ void qcom_icc_pre_aggregate(struct icc_node *node)
+ {
+ 	size_t i;
+ 	struct qcom_icc_node *qn;
++	struct qcom_icc_provider *qp;
+ 
+ 	qn = node->data;
++	qp = to_qcom_provider(node->provider);
+ 
+ 	for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+ 		qn->sum_avg[i] = 0;
+ 		qn->max_peak[i] = 0;
+ 	}
++
++	for (i = 0; i < qn->num_bcms; i++)
++		qcom_icc_bcm_voter_add(qp->voter, qn->bcms[i]);
+ }
+ EXPORT_SYMBOL_GPL(qcom_icc_pre_aggregate);
+ 
+@@ -44,10 +49,8 @@ int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+ {
+ 	size_t i;
+ 	struct qcom_icc_node *qn;
+-	struct qcom_icc_provider *qp;
+ 
+ 	qn = node->data;
+-	qp = to_qcom_provider(node->provider);
+ 
+ 	if (!tag)
+ 		tag = QCOM_ICC_TAG_ALWAYS;
+@@ -67,9 +70,6 @@ int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+ 	*agg_avg += avg_bw;
+ 	*agg_peak = max_t(u32, *agg_peak, peak_bw);
+ 
+-	for (i = 0; i < qn->num_bcms; i++)
+-		qcom_icc_bcm_voter_add(qp->voter, qn->bcms[i]);
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(qcom_icc_aggregate);
+diff --git a/drivers/interconnect/qcom/sm8150.c b/drivers/interconnect/qcom/sm8150.c
+index c76b2c7f9b106..b936196c229c8 100644
+--- a/drivers/interconnect/qcom/sm8150.c
++++ b/drivers/interconnect/qcom/sm8150.c
+@@ -627,7 +627,6 @@ static struct platform_driver qnoc_driver = {
+ 	.driver = {
+ 		.name = "qnoc-sm8150",
+ 		.of_match_table = qnoc_of_match,
+-		.sync_state = icc_sync_state,
+ 	},
+ };
+ module_platform_driver(qnoc_driver);
+diff --git a/drivers/interconnect/qcom/sm8250.c b/drivers/interconnect/qcom/sm8250.c
+index cc558fec74e38..40820043c8d36 100644
+--- a/drivers/interconnect/qcom/sm8250.c
++++ b/drivers/interconnect/qcom/sm8250.c
+@@ -643,7 +643,6 @@ static struct platform_driver qnoc_driver = {
+ 	.driver = {
+ 		.name = "qnoc-sm8250",
+ 		.of_match_table = qnoc_of_match,
+-		.sync_state = icc_sync_state,
+ 	},
+ };
+ module_platform_driver(qnoc_driver);
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 93c60712a948e..c48cf737b521d 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -569,7 +569,7 @@ static unsigned long __iommu_calculate_sagaw(struct intel_iommu *iommu)
+ {
+ 	unsigned long fl_sagaw, sl_sagaw;
+ 
+-	fl_sagaw = BIT(2) | (cap_fl1gp_support(iommu->cap) ? BIT(3) : 0);
++	fl_sagaw = BIT(2) | (cap_5lp_support(iommu->cap) ? BIT(3) : 0);
+ 	sl_sagaw = cap_sagaw(iommu->cap);
+ 
+ 	/* Second level only. */
+diff --git a/drivers/media/usb/b2c2/flexcop-usb.c b/drivers/media/usb/b2c2/flexcop-usb.c
+index a2563c2540808..2299d5cca8ffb 100644
+--- a/drivers/media/usb/b2c2/flexcop-usb.c
++++ b/drivers/media/usb/b2c2/flexcop-usb.c
+@@ -512,7 +512,7 @@ static int flexcop_usb_init(struct flexcop_usb *fc_usb)
+ 
+ 	if (fc_usb->uintf->cur_altsetting->desc.bNumEndpoints < 1)
+ 		return -ENODEV;
+-	if (!usb_endpoint_is_isoc_in(&fc_usb->uintf->cur_altsetting->endpoint[1].desc))
++	if (!usb_endpoint_is_isoc_in(&fc_usb->uintf->cur_altsetting->endpoint[0].desc))
+ 		return -ENODEV;
+ 
+ 	switch (fc_usb->udev->speed) {
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 0b09cdaaeb6c1..899768ed1688d 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -932,15 +932,16 @@ int mmc_sd_setup_card(struct mmc_host *host, struct mmc_card *card,
+ 
+ 		/* Erase init depends on CSD and SSR */
+ 		mmc_init_erase(card);
+-
+-		/*
+-		 * Fetch switch information from card.
+-		 */
+-		err = mmc_read_switch(card);
+-		if (err)
+-			return err;
+ 	}
+ 
++	/*
++	 * Fetch switch information from card. Note, sd3_bus_mode can change if
++	 * voltage switch outcome changes, so do this always.
++	 */
++	err = mmc_read_switch(card);
++	if (err)
++		return err;
++
+ 	/*
+ 	 * For SPI, enable CRC as appropriate.
+ 	 * This CRC enable is located AFTER the reading of the
+@@ -1089,26 +1090,15 @@ retry:
+ 	if (!v18_fixup_failed && !mmc_host_is_spi(host) && mmc_host_uhs(host) &&
+ 	    mmc_sd_card_using_v18(card) &&
+ 	    host->ios.signal_voltage != MMC_SIGNAL_VOLTAGE_180) {
+-		/*
+-		 * Re-read switch information in case it has changed since
+-		 * oldcard was initialized.
+-		 */
+-		if (oldcard) {
+-			err = mmc_read_switch(card);
+-			if (err)
+-				goto free_card;
+-		}
+-		if (mmc_sd_card_using_v18(card)) {
+-			if (mmc_host_set_uhs_voltage(host) ||
+-			    mmc_sd_init_uhs_card(card)) {
+-				v18_fixup_failed = true;
+-				mmc_power_cycle(host, ocr);
+-				if (!oldcard)
+-					mmc_remove_card(card);
+-				goto retry;
+-			}
+-			goto cont;
++		if (mmc_host_set_uhs_voltage(host) ||
++		    mmc_sd_init_uhs_card(card)) {
++			v18_fixup_failed = true;
++			mmc_power_cycle(host, ocr);
++			if (!oldcard)
++				mmc_remove_card(card);
++			goto retry;
+ 		}
++		goto cont;
+ 	}
+ 
+ 	/* Initialization sequence for UHS-I cards */
+diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
+index b0f8d551b61db..acb6ff0be5fff 100644
+--- a/drivers/net/bonding/bond_3ad.c
++++ b/drivers/net/bonding/bond_3ad.c
+@@ -85,8 +85,9 @@ static const u8 null_mac_addr[ETH_ALEN + 2] __long_aligned = {
+ static u16 ad_ticks_per_sec;
+ static const int ad_delta_in_ticks = (AD_TIMER_INTERVAL * HZ) / 1000;
+ 
+-static const u8 lacpdu_mcast_addr[ETH_ALEN + 2] __long_aligned =
+-	MULTICAST_LACPDU_ADDR;
++const u8 lacpdu_mcast_addr[ETH_ALEN + 2] __long_aligned = {
++	0x01, 0x80, 0xC2, 0x00, 0x00, 0x02
++};
+ 
+ /* ================= main 802.3ad protocol functions ================== */
+ static int ad_lacpdu_send(struct port *port);
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 9c4b45341fd28..f38a6ce5749bb 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -827,12 +827,8 @@ static void bond_hw_addr_flush(struct net_device *bond_dev,
+ 	dev_uc_unsync(slave_dev, bond_dev);
+ 	dev_mc_unsync(slave_dev, bond_dev);
+ 
+-	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
+-		/* del lacpdu mc addr from mc list */
+-		u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
+-
+-		dev_mc_del(slave_dev, lacpdu_multicast);
+-	}
++	if (BOND_MODE(bond) == BOND_MODE_8023AD)
++		dev_mc_del(slave_dev, lacpdu_mcast_addr);
+ }
+ 
+ /*--------------------------- Active slave change ---------------------------*/
+@@ -852,7 +848,8 @@ static void bond_hw_addr_swap(struct bonding *bond, struct slave *new_active,
+ 		if (bond->dev->flags & IFF_ALLMULTI)
+ 			dev_set_allmulti(old_active->dev, -1);
+ 
+-		bond_hw_addr_flush(bond->dev, old_active->dev);
++		if (bond->dev->flags & IFF_UP)
++			bond_hw_addr_flush(bond->dev, old_active->dev);
+ 	}
+ 
+ 	if (new_active) {
+@@ -863,10 +860,12 @@ static void bond_hw_addr_swap(struct bonding *bond, struct slave *new_active,
+ 		if (bond->dev->flags & IFF_ALLMULTI)
+ 			dev_set_allmulti(new_active->dev, 1);
+ 
+-		netif_addr_lock_bh(bond->dev);
+-		dev_uc_sync(new_active->dev, bond->dev);
+-		dev_mc_sync(new_active->dev, bond->dev);
+-		netif_addr_unlock_bh(bond->dev);
++		if (bond->dev->flags & IFF_UP) {
++			netif_addr_lock_bh(bond->dev);
++			dev_uc_sync(new_active->dev, bond->dev);
++			dev_mc_sync(new_active->dev, bond->dev);
++			netif_addr_unlock_bh(bond->dev);
++		}
+ 	}
+ }
+ 
+@@ -2073,16 +2072,14 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ 			}
+ 		}
+ 
+-		netif_addr_lock_bh(bond_dev);
+-		dev_mc_sync_multiple(slave_dev, bond_dev);
+-		dev_uc_sync_multiple(slave_dev, bond_dev);
+-		netif_addr_unlock_bh(bond_dev);
++		if (bond_dev->flags & IFF_UP) {
++			netif_addr_lock_bh(bond_dev);
++			dev_mc_sync_multiple(slave_dev, bond_dev);
++			dev_uc_sync_multiple(slave_dev, bond_dev);
++			netif_addr_unlock_bh(bond_dev);
+ 
+-		if (BOND_MODE(bond) == BOND_MODE_8023AD) {
+-			/* add lacpdu mc addr to mc list */
+-			u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
+-
+-			dev_mc_add(slave_dev, lacpdu_multicast);
++			if (BOND_MODE(bond) == BOND_MODE_8023AD)
++				dev_mc_add(slave_dev, lacpdu_mcast_addr);
+ 		}
+ 	}
+ 
+@@ -2310,7 +2307,8 @@ static int __bond_release_one(struct net_device *bond_dev,
+ 		if (old_flags & IFF_ALLMULTI)
+ 			dev_set_allmulti(slave_dev, -1);
+ 
+-		bond_hw_addr_flush(bond_dev, slave_dev);
++		if (old_flags & IFF_UP)
++			bond_hw_addr_flush(bond_dev, slave_dev);
+ 	}
+ 
+ 	slave_disable_netpoll(slave);
+@@ -3772,6 +3770,9 @@ static int bond_open(struct net_device *bond_dev)
+ 		/* register to receive LACPDUs */
+ 		bond->recv_probe = bond_3ad_lacpdu_recv;
+ 		bond_3ad_initiate_agg_selection(bond, 1);
++
++		bond_for_each_slave(bond, slave, iter)
++			dev_mc_add(slave->dev, lacpdu_mcast_addr);
+ 	}
+ 
+ 	if (bond_mode_can_use_xmit_hash(bond))
+@@ -3783,6 +3784,7 @@ static int bond_open(struct net_device *bond_dev)
+ static int bond_close(struct net_device *bond_dev)
+ {
+ 	struct bonding *bond = netdev_priv(bond_dev);
++	struct slave *slave;
+ 
+ 	bond_work_cancel_all(bond);
+ 	bond->send_peer_notif = 0;
+@@ -3790,6 +3792,19 @@ static int bond_close(struct net_device *bond_dev)
+ 		bond_alb_deinitialize(bond);
+ 	bond->recv_probe = NULL;
+ 
++	if (bond_uses_primary(bond)) {
++		rcu_read_lock();
++		slave = rcu_dereference(bond->curr_active_slave);
++		if (slave)
++			bond_hw_addr_flush(bond_dev, slave->dev);
++		rcu_read_unlock();
++	} else {
++		struct list_head *iter;
++
++		bond_for_each_slave(bond, slave, iter)
++			bond_hw_addr_flush(bond_dev, slave->dev);
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
+index 7cbaac238ff62..429950241de32 100644
+--- a/drivers/net/can/flexcan.c
++++ b/drivers/net/can/flexcan.c
+@@ -954,11 +954,6 @@ static struct sk_buff *flexcan_mailbox_read(struct can_rx_offload *offload,
+ 	u32 reg_ctrl, reg_id, reg_iflag1;
+ 	int i;
+ 
+-	if (unlikely(drop)) {
+-		skb = ERR_PTR(-ENOBUFS);
+-		goto mark_as_read;
+-	}
+-
+ 	mb = flexcan_get_mb(priv, n);
+ 
+ 	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
+@@ -987,6 +982,11 @@ static struct sk_buff *flexcan_mailbox_read(struct can_rx_offload *offload,
+ 		reg_ctrl = priv->read(&mb->can_ctrl);
+ 	}
+ 
++	if (unlikely(drop)) {
++		skb = ERR_PTR(-ENOBUFS);
++		goto mark_as_read;
++	}
++
+ 	if (reg_ctrl & FLEXCAN_MB_CNT_EDL)
+ 		skb = alloc_canfd_skb(offload->dev, &cfd);
+ 	else
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index 1bfc497da9ac8..a879200eaab02 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -678,6 +678,7 @@ static int gs_can_open(struct net_device *netdev)
+ 		flags |= GS_CAN_MODE_TRIPLE_SAMPLE;
+ 
+ 	/* finally start device */
++	dev->can.state = CAN_STATE_ERROR_ACTIVE;
+ 	dm->mode = cpu_to_le32(GS_CAN_MODE_START);
+ 	dm->flags = cpu_to_le32(flags);
+ 	rc = usb_control_msg(interface_to_usbdev(dev->iface),
+@@ -694,13 +695,12 @@ static int gs_can_open(struct net_device *netdev)
+ 	if (rc < 0) {
+ 		netdev_err(netdev, "Couldn't start device (err=%d)\n", rc);
+ 		kfree(dm);
++		dev->can.state = CAN_STATE_STOPPED;
+ 		return rc;
+ 	}
+ 
+ 	kfree(dm);
+ 
+-	dev->can.state = CAN_STATE_ERROR_ACTIVE;
+-
+ 	parent->active_channels++;
+ 	if (!(dev->can.ctrlmode & CAN_CTRLMODE_LISTENONLY))
+ 		netif_start_queue(netdev);
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 15aa3b3c0089f..4af2538259576 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -1671,29 +1671,6 @@ static int enetc_set_rss(struct net_device *ndev, int en)
+ 	return 0;
+ }
+ 
+-static int enetc_set_psfp(struct net_device *ndev, int en)
+-{
+-	struct enetc_ndev_priv *priv = netdev_priv(ndev);
+-	int err;
+-
+-	if (en) {
+-		err = enetc_psfp_enable(priv);
+-		if (err)
+-			return err;
+-
+-		priv->active_offloads |= ENETC_F_QCI;
+-		return 0;
+-	}
+-
+-	err = enetc_psfp_disable(priv);
+-	if (err)
+-		return err;
+-
+-	priv->active_offloads &= ~ENETC_F_QCI;
+-
+-	return 0;
+-}
+-
+ static void enetc_enable_rxvlan(struct net_device *ndev, bool en)
+ {
+ 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
+@@ -1712,11 +1689,9 @@ static void enetc_enable_txvlan(struct net_device *ndev, bool en)
+ 		enetc_bdr_enable_txvlan(&priv->si->hw, i, en);
+ }
+ 
+-int enetc_set_features(struct net_device *ndev,
+-		       netdev_features_t features)
++void enetc_set_features(struct net_device *ndev, netdev_features_t features)
+ {
+ 	netdev_features_t changed = ndev->features ^ features;
+-	int err = 0;
+ 
+ 	if (changed & NETIF_F_RXHASH)
+ 		enetc_set_rss(ndev, !!(features & NETIF_F_RXHASH));
+@@ -1728,11 +1703,6 @@ int enetc_set_features(struct net_device *ndev,
+ 	if (changed & NETIF_F_HW_VLAN_CTAG_TX)
+ 		enetc_enable_txvlan(ndev,
+ 				    !!(features & NETIF_F_HW_VLAN_CTAG_TX));
+-
+-	if (changed & NETIF_F_HW_TC)
+-		err = enetc_set_psfp(ndev, !!(features & NETIF_F_HW_TC));
+-
+-	return err;
+ }
+ 
+ #ifdef CONFIG_FSL_ENETC_PTP_CLOCK
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
+index 15d19cbd5a954..00386c5d3cde9 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc.h
+@@ -301,8 +301,7 @@ void enetc_start(struct net_device *ndev);
+ void enetc_stop(struct net_device *ndev);
+ netdev_tx_t enetc_xmit(struct sk_buff *skb, struct net_device *ndev);
+ struct net_device_stats *enetc_get_stats(struct net_device *ndev);
+-int enetc_set_features(struct net_device *ndev,
+-		       netdev_features_t features);
++void enetc_set_features(struct net_device *ndev, netdev_features_t features);
+ int enetc_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd);
+ int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
+ 		   void *type_data);
+@@ -335,6 +334,7 @@ int enetc_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+ int enetc_setup_tc_psfp(struct net_device *ndev, void *type_data);
+ int enetc_psfp_init(struct enetc_ndev_priv *priv);
+ int enetc_psfp_clean(struct enetc_ndev_priv *priv);
++int enetc_set_psfp(struct net_device *ndev, bool en);
+ 
+ static inline void enetc_get_max_cap(struct enetc_ndev_priv *priv)
+ {
+@@ -410,4 +410,9 @@ static inline int enetc_psfp_disable(struct enetc_ndev_priv *priv)
+ {
+ 	return 0;
+ }
++
++static inline int enetc_set_psfp(struct net_device *ndev, bool en)
++{
++	return 0;
++}
+ #endif
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index 716b396bf0947..6904e10dd46b3 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -671,6 +671,13 @@ static int enetc_pf_set_features(struct net_device *ndev,
+ {
+ 	netdev_features_t changed = ndev->features ^ features;
+ 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
++	int err;
++
++	if (changed & NETIF_F_HW_TC) {
++		err = enetc_set_psfp(ndev, !!(features & NETIF_F_HW_TC));
++		if (err)
++			return err;
++	}
+ 
+ 	if (changed & NETIF_F_HW_VLAN_CTAG_FILTER) {
+ 		struct enetc_pf *pf = enetc_si_priv(priv->si);
+@@ -684,7 +691,9 @@ static int enetc_pf_set_features(struct net_device *ndev,
+ 	if (changed & NETIF_F_LOOPBACK)
+ 		enetc_set_loopback(ndev, !!(features & NETIF_F_LOOPBACK));
+ 
+-	return enetc_set_features(ndev, features);
++	enetc_set_features(ndev, features);
++
++	return 0;
+ }
+ 
+ static const struct net_device_ops enetc_ndev_ops = {
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+index 9e6988fd3787a..62efe1aebf86a 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+@@ -1525,6 +1525,29 @@ int enetc_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+ 	}
+ }
+ 
++int enetc_set_psfp(struct net_device *ndev, bool en)
++{
++	struct enetc_ndev_priv *priv = netdev_priv(ndev);
++	int err;
++
++	if (en) {
++		err = enetc_psfp_enable(priv);
++		if (err)
++			return err;
++
++		priv->active_offloads |= ENETC_F_QCI;
++		return 0;
++	}
++
++	err = enetc_psfp_disable(priv);
++	if (err)
++		return err;
++
++	priv->active_offloads &= ~ENETC_F_QCI;
++
++	return 0;
++}
++
+ int enetc_psfp_init(struct enetc_ndev_priv *priv)
+ {
+ 	if (epsfp.psfp_sfi_bitmap)
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_vf.c b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
+index 33c125735db7e..5ce3e2593bdde 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_vf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
+@@ -88,7 +88,9 @@ static int enetc_vf_set_mac_addr(struct net_device *ndev, void *addr)
+ static int enetc_vf_set_features(struct net_device *ndev,
+ 				 netdev_features_t features)
+ {
+-	return enetc_set_features(ndev, features);
++	enetc_set_features(ndev, features);
++
++	return 0;
+ }
+ 
+ /* Probing/ Init */
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 97009cbea7793..c7f243ddbcf72 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -5733,6 +5733,26 @@ static int i40e_get_link_speed(struct i40e_vsi *vsi)
+ 	}
+ }
+ 
++/**
++ * i40e_bw_bytes_to_mbits - Convert max_tx_rate from bytes to mbits
++ * @vsi: Pointer to vsi structure
++ * @max_tx_rate: max TX rate in bytes to be converted into Mbits
++ *
++ * Helper function to convert units before send to set BW limit
++ **/
++static u64 i40e_bw_bytes_to_mbits(struct i40e_vsi *vsi, u64 max_tx_rate)
++{
++	if (max_tx_rate < I40E_BW_MBPS_DIVISOR) {
++		dev_warn(&vsi->back->pdev->dev,
++			 "Setting max tx rate to minimum usable value of 50Mbps.\n");
++		max_tx_rate = I40E_BW_CREDIT_DIVISOR;
++	} else {
++		do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);
++	}
++
++	return max_tx_rate;
++}
++
+ /**
+  * i40e_set_bw_limit - setup BW limit for Tx traffic based on max_tx_rate
+  * @vsi: VSI to be configured
+@@ -5755,10 +5775,10 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
+ 			max_tx_rate, seid);
+ 		return -EINVAL;
+ 	}
+-	if (max_tx_rate && max_tx_rate < 50) {
++	if (max_tx_rate && max_tx_rate < I40E_BW_CREDIT_DIVISOR) {
+ 		dev_warn(&pf->pdev->dev,
+ 			 "Setting max tx rate to minimum usable value of 50Mbps.\n");
+-		max_tx_rate = 50;
++		max_tx_rate = I40E_BW_CREDIT_DIVISOR;
+ 	}
+ 
+ 	/* Tx rate credits are in values of 50Mbps, 0 is disabled */
+@@ -7719,9 +7739,9 @@ config_tc:
+ 
+ 	if (pf->flags & I40E_FLAG_TC_MQPRIO) {
+ 		if (vsi->mqprio_qopt.max_rate[0]) {
+-			u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];
++			u64 max_tx_rate = i40e_bw_bytes_to_mbits(vsi,
++						  vsi->mqprio_qopt.max_rate[0]);
+ 
+-			do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);
+ 			ret = i40e_set_bw_limit(vsi, vsi->seid, max_tx_rate);
+ 			if (!ret) {
+ 				u64 credits = max_tx_rate;
+@@ -10366,10 +10386,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ 	}
+ 
+ 	if (vsi->mqprio_qopt.max_rate[0]) {
+-		u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];
++		u64 max_tx_rate = i40e_bw_bytes_to_mbits(vsi,
++						  vsi->mqprio_qopt.max_rate[0]);
+ 		u64 credits = 0;
+ 
+-		do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);
+ 		ret = i40e_set_bw_limit(vsi, vsi->seid, max_tx_rate);
+ 		if (ret)
+ 			goto end_unlock;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 1947c5a775505..ffff7de801af7 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1985,6 +1985,25 @@ static void i40e_del_qch(struct i40e_vf *vf)
+ 	}
+ }
+ 
++/**
++ * i40e_vc_get_max_frame_size
++ * @vf: pointer to the VF
++ *
++ * Max frame size is determined based on the current port's max frame size and
++ * whether a port VLAN is configured on this VF. The VF is not aware whether
++ * it's in a port VLAN so the PF needs to account for this in max frame size
++ * checks and sending the max frame size to the VF.
++ **/
++static u16 i40e_vc_get_max_frame_size(struct i40e_vf *vf)
++{
++	u16 max_frame_size = vf->pf->hw.phy.link_info.max_frame_size;
++
++	if (vf->port_vlan_id)
++		max_frame_size -= VLAN_HLEN;
++
++	return max_frame_size;
++}
++
+ /**
+  * i40e_vc_get_vf_resources_msg
+  * @vf: pointer to the VF info
+@@ -2085,6 +2104,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
+ 	vfres->max_vectors = pf->hw.func_caps.num_msix_vectors_vf;
+ 	vfres->rss_key_size = I40E_HKEY_ARRAY_SIZE;
+ 	vfres->rss_lut_size = I40E_VF_HLUT_ARRAY_SIZE;
++	vfres->max_mtu = i40e_vc_get_max_frame_size(vf);
+ 
+ 	if (vf->lan_vsi_idx) {
+ 		vfres->vsi_res[0].vsi_id = vf->lan_vsi_id;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+index 99983f7a0ce0b..d481a922f0184 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+@@ -114,8 +114,11 @@ u32 iavf_get_tx_pending(struct iavf_ring *ring, bool in_sw)
+ {
+ 	u32 head, tail;
+ 
++	/* underlying hardware might not allow access and/or always return
++	 * 0 for the head/tail registers so just use the cached values
++	 */
+ 	head = ring->next_to_clean;
+-	tail = readl(ring->tail);
++	tail = ring->next_to_use;
+ 
+ 	if (head != tail)
+ 		return (head < tail) ?
+@@ -1368,7 +1371,7 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
+ #endif
+ 	struct sk_buff *skb;
+ 
+-	if (!rx_buffer)
++	if (!rx_buffer || !size)
+ 		return NULL;
+ 	/* prefetch first cache line of first page */
+ 	va = page_address(rx_buffer->page) + rx_buffer->page_offset;
+@@ -1526,7 +1529,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
+ 		/* exit if we failed to retrieve a buffer */
+ 		if (!skb) {
+ 			rx_ring->rx_stats.alloc_buff_failed++;
+-			if (rx_buffer)
++			if (rx_buffer && size)
+ 				rx_buffer->pagecnt_bias++;
+ 			break;
+ 		}
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index ff479bf721443..5deee75bc4360 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -241,11 +241,14 @@ out:
+ void iavf_configure_queues(struct iavf_adapter *adapter)
+ {
+ 	struct virtchnl_vsi_queue_config_info *vqci;
+-	struct virtchnl_queue_pair_info *vqpi;
++	int i, max_frame = adapter->vf_res->max_mtu;
+ 	int pairs = adapter->num_active_queues;
+-	int i, max_frame = IAVF_MAX_RXBUFFER;
++	struct virtchnl_queue_pair_info *vqpi;
+ 	size_t len;
+ 
++	if (max_frame > IAVF_MAX_RXBUFFER || !max_frame)
++		max_frame = IAVF_MAX_RXBUFFER;
++
+ 	if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
+ 		/* bail because we already have a command pending */
+ 		dev_err(&adapter->pdev->dev, "Cannot configure queues, command %d pending\n",
+diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
+index d0f1b2dc7dff0..c49168ba7a4d6 100644
+--- a/drivers/net/ethernet/sfc/efx_channels.c
++++ b/drivers/net/ethernet/sfc/efx_channels.c
+@@ -308,7 +308,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
+ 		efx->n_channels = 1 + (efx_separate_tx_channels ? 1 : 0);
+ 		efx->n_rx_channels = 1;
+ 		efx->n_tx_channels = 1;
+-		efx->tx_channel_offset = 1;
++		efx->tx_channel_offset = efx_separate_tx_channels ? 1 : 0;
+ 		efx->n_xdp_channels = 0;
+ 		efx->xdp_channel_offset = efx->n_channels;
+ 		efx->legacy_irq = efx->pci_dev->irq;
+diff --git a/drivers/net/ethernet/sfc/tx.c b/drivers/net/ethernet/sfc/tx.c
+index 1665529a72717..fcc7de8ae2bfa 100644
+--- a/drivers/net/ethernet/sfc/tx.c
++++ b/drivers/net/ethernet/sfc/tx.c
+@@ -545,7 +545,7 @@ netdev_tx_t efx_hard_start_xmit(struct sk_buff *skb,
+ 		 * previous packets out.
+ 		 */
+ 		if (!netdev_xmit_more())
+-			efx_tx_send_pending(tx_queue->channel);
++			efx_tx_send_pending(efx_get_tx_channel(efx, index));
+ 		return NETDEV_TX_OK;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/sun/sunhme.c b/drivers/net/ethernet/sun/sunhme.c
+index 69fc47089e625..940db4ec57142 100644
+--- a/drivers/net/ethernet/sun/sunhme.c
++++ b/drivers/net/ethernet/sun/sunhme.c
+@@ -2063,9 +2063,9 @@ static void happy_meal_rx(struct happy_meal *hp, struct net_device *dev)
+ 
+ 			skb_reserve(copy_skb, 2);
+ 			skb_put(copy_skb, len);
+-			dma_sync_single_for_cpu(hp->dma_dev, dma_addr, len, DMA_FROM_DEVICE);
++			dma_sync_single_for_cpu(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
+ 			skb_copy_from_linear_data(skb, copy_skb->data, len);
+-			dma_sync_single_for_device(hp->dma_dev, dma_addr, len, DMA_FROM_DEVICE);
++			dma_sync_single_for_device(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
+ 			/* Reuse original ring buffer. */
+ 			hme_write_rxd(hp, this,
+ 				      (RXFLAG_OWN|((RX_BUF_ALLOC_SIZE-RX_OFFSET)<<16)),
+diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
+index fe91b72eca36c..64b12e462765e 100644
+--- a/drivers/net/ipa/gsi.c
++++ b/drivers/net/ipa/gsi.c
+@@ -1251,20 +1251,18 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
+ /* Initialize a ring, including allocating DMA memory for its entries */
+ static int gsi_ring_alloc(struct gsi *gsi, struct gsi_ring *ring, u32 count)
+ {
+-	size_t size = count * GSI_RING_ELEMENT_SIZE;
++	u32 size = count * GSI_RING_ELEMENT_SIZE;
+ 	struct device *dev = gsi->dev;
+ 	dma_addr_t addr;
+ 
+-	/* Hardware requires a 2^n ring size, with alignment equal to size */
++	/* Hardware requires a 2^n ring size, with alignment equal to size.
++	 * The DMA address returned by dma_alloc_coherent() is guaranteed to
++	 * be a power-of-2 number of pages, which satisfies the requirement.
++	 */
+ 	ring->virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
+-	if (ring->virt && addr % size) {
+-		dma_free_coherent(dev, size, ring->virt, addr);
+-		dev_err(dev, "unable to alloc 0x%zx-aligned ring buffer\n",
+-			size);
+-		return -EINVAL;	/* Not a good error value, but distinct */
+-	} else if (!ring->virt) {
++	if (!ring->virt)
+ 		return -ENOMEM;
+-	}
++
+ 	ring->addr = addr;
+ 	ring->count = count;
+ 
+diff --git a/drivers/net/ipa/gsi_private.h b/drivers/net/ipa/gsi_private.h
+index 1785c9d3344d1..d58dce46e061a 100644
+--- a/drivers/net/ipa/gsi_private.h
++++ b/drivers/net/ipa/gsi_private.h
+@@ -14,7 +14,7 @@ struct gsi_trans;
+ struct gsi_ring;
+ struct gsi_channel;
+ 
+-#define GSI_RING_ELEMENT_SIZE	16	/* bytes */
++#define GSI_RING_ELEMENT_SIZE	16	/* bytes; must be a power of 2 */
+ 
+ /* Return the entry that follows one provided in a transaction pool */
+ void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element);
+diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/gsi_trans.c
+index 6c3ed5b17b80c..70c2b585f98d6 100644
+--- a/drivers/net/ipa/gsi_trans.c
++++ b/drivers/net/ipa/gsi_trans.c
+@@ -153,11 +153,10 @@ int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
+ 	size = __roundup_pow_of_two(size);
+ 	total_size = (count + max_alloc - 1) * size;
+ 
+-	/* The allocator will give us a power-of-2 number of pages.  But we
+-	 * can't guarantee that, so request it.  That way we won't waste any
+-	 * memory that would be available beyond the required space.
+-	 *
+-	 * Note that gsi_trans_pool_exit_dma() assumes the total allocated
++	/* The allocator will give us a power-of-2 number of pages
++	 * sufficient to satisfy our request.  Round up our requested
++	 * size to avoid any unused space in the allocation.  This way
++	 * gsi_trans_pool_exit_dma() can assume the total allocated
+ 	 * size is exactly (count * size).
+ 	 */
+ 	total_size = get_order(total_size) << PAGE_SHIFT;
+diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
+index a47378b7d9b2f..dc94ce0356556 100644
+--- a/drivers/net/ipa/ipa_cmd.c
++++ b/drivers/net/ipa/ipa_cmd.c
+@@ -154,7 +154,7 @@ static void ipa_cmd_validate_build(void)
+ 	 * of entries, as and IPv4 and IPv6 route tables have the same number
+ 	 * of entries.
+ 	 */
+-#define TABLE_SIZE	(TABLE_COUNT_MAX * IPA_TABLE_ENTRY_SIZE)
++#define TABLE_SIZE	(TABLE_COUNT_MAX * sizeof(__le64))
+ #define TABLE_COUNT_MAX	max_t(u32, IPA_ROUTE_COUNT_MAX, IPA_FILTER_COUNT_MAX)
+ 	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK));
+ 	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
+diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
+index 7fc1058a5ca93..ba05e26c3c60e 100644
+--- a/drivers/net/ipa/ipa_data.h
++++ b/drivers/net/ipa/ipa_data.h
+@@ -72,8 +72,8 @@
+  * that can be included in a single transaction.
+  */
+ struct gsi_channel_data {
+-	u16 tre_count;
+-	u16 event_count;
++	u16 tre_count;			/* must be a power of 2 */
++	u16 event_count;		/* must be a power of 2 */
+ 	u8 tlv_count;
+ };
+ 
+diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
+index 1a87a49538c50..880ec353f958f 100644
+--- a/drivers/net/ipa/ipa_qmi.c
++++ b/drivers/net/ipa/ipa_qmi.c
+@@ -308,12 +308,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
+ 	mem = &ipa->mem[IPA_MEM_V4_ROUTE];
+ 	req.v4_route_tbl_info_valid = 1;
+ 	req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset;
+-	req.v4_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE;
++	req.v4_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
+ 
+ 	mem = &ipa->mem[IPA_MEM_V6_ROUTE];
+ 	req.v6_route_tbl_info_valid = 1;
+ 	req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset;
+-	req.v6_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE;
++	req.v6_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
+ 
+ 	mem = &ipa->mem[IPA_MEM_V4_FILTER];
+ 	req.v4_filter_tbl_start_valid = 1;
+@@ -352,8 +352,7 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
+ 		req.v4_hash_route_tbl_info_valid = 1;
+ 		req.v4_hash_route_tbl_info.start =
+ 				ipa->mem_offset + mem->offset;
+-		req.v4_hash_route_tbl_info.count =
+-				mem->size / IPA_TABLE_ENTRY_SIZE;
++		req.v4_hash_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
+ 	}
+ 
+ 	mem = &ipa->mem[IPA_MEM_V6_ROUTE_HASHED];
+@@ -361,8 +360,7 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
+ 		req.v6_hash_route_tbl_info_valid = 1;
+ 		req.v6_hash_route_tbl_info.start =
+ 			ipa->mem_offset + mem->offset;
+-		req.v6_hash_route_tbl_info.count =
+-			mem->size / IPA_TABLE_ENTRY_SIZE;
++		req.v6_hash_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
+ 	}
+ 
+ 	mem = &ipa->mem[IPA_MEM_V4_FILTER_HASHED];
+diff --git a/drivers/net/ipa/ipa_qmi_msg.c b/drivers/net/ipa/ipa_qmi_msg.c
+index 73413371e3d3e..ecf9f863c842b 100644
+--- a/drivers/net/ipa/ipa_qmi_msg.c
++++ b/drivers/net/ipa/ipa_qmi_msg.c
+@@ -271,7 +271,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+ 		.tlv_type	= 0x12,
+ 		.offset		= offsetof(struct ipa_init_modem_driver_req,
+ 					   v4_route_tbl_info),
+-		.ei_array	= ipa_mem_array_ei,
++		.ei_array	= ipa_mem_bounds_ei,
+ 	},
+ 	{
+ 		.data_type	= QMI_OPT_FLAG,
+@@ -292,7 +292,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+ 		.tlv_type	= 0x13,
+ 		.offset		= offsetof(struct ipa_init_modem_driver_req,
+ 					   v6_route_tbl_info),
+-		.ei_array	= ipa_mem_array_ei,
++		.ei_array	= ipa_mem_bounds_ei,
+ 	},
+ 	{
+ 		.data_type	= QMI_OPT_FLAG,
+@@ -456,7 +456,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+ 		.tlv_type	= 0x1b,
+ 		.offset		= offsetof(struct ipa_init_modem_driver_req,
+ 					   v4_hash_route_tbl_info),
+-		.ei_array	= ipa_mem_array_ei,
++		.ei_array	= ipa_mem_bounds_ei,
+ 	},
+ 	{
+ 		.data_type	= QMI_OPT_FLAG,
+@@ -477,7 +477,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+ 		.tlv_type	= 0x1c,
+ 		.offset		= offsetof(struct ipa_init_modem_driver_req,
+ 					   v6_hash_route_tbl_info),
+-		.ei_array	= ipa_mem_array_ei,
++		.ei_array	= ipa_mem_bounds_ei,
+ 	},
+ 	{
+ 		.data_type	= QMI_OPT_FLAG,
+diff --git a/drivers/net/ipa/ipa_qmi_msg.h b/drivers/net/ipa/ipa_qmi_msg.h
+index cfac456cea0ca..58de425bb8e61 100644
+--- a/drivers/net/ipa/ipa_qmi_msg.h
++++ b/drivers/net/ipa/ipa_qmi_msg.h
+@@ -82,9 +82,11 @@ enum ipa_platform_type {
+ 	IPA_QMI_PLATFORM_TYPE_MSM_QNX_V01	= 5,	/* QNX MSM */
+ };
+ 
+-/* This defines the start and end offset of a range of memory.  Both
+- * fields are offsets relative to the start of IPA shared memory.
+- * The end value is the last addressable byte *within* the range.
++/* This defines the start and end offset of a range of memory.  The start
++ * value is a byte offset relative to the start of IPA shared memory.  The
++ * end value is the last addressable unit *within* the range.  Typically
++ * the end value is in units of bytes, however it can also be a maximum
++ * array index value.
+  */
+ struct ipa_mem_bounds {
+ 	u32 start;
+@@ -125,18 +127,19 @@ struct ipa_init_modem_driver_req {
+ 	u8			hdr_tbl_info_valid;
+ 	struct ipa_mem_bounds	hdr_tbl_info;
+ 
+-	/* Routing table information.  These define the location and size of
+-	 * non-hashable IPv4 and IPv6 filter tables.  The start values are
+-	 * offsets relative to the start of IPA shared memory.
++	/* Routing table information.  These define the location and maximum
++	 * *index* (not byte) for the modem portion of non-hashable IPv4 and
++	 * IPv6 routing tables.  The start values are byte offsets relative
++	 * to the start of IPA shared memory.
+ 	 */
+ 	u8			v4_route_tbl_info_valid;
+-	struct ipa_mem_array	v4_route_tbl_info;
++	struct ipa_mem_bounds	v4_route_tbl_info;
+ 	u8			v6_route_tbl_info_valid;
+-	struct ipa_mem_array	v6_route_tbl_info;
++	struct ipa_mem_bounds	v6_route_tbl_info;
+ 
+ 	/* Filter table information.  These define the location of the
+ 	 * non-hashable IPv4 and IPv6 filter tables.  The start values are
+-	 * offsets relative to the start of IPA shared memory.
++	 * byte offsets relative to the start of IPA shared memory.
+ 	 */
+ 	u8			v4_filter_tbl_start_valid;
+ 	u32			v4_filter_tbl_start;
+@@ -177,18 +180,20 @@ struct ipa_init_modem_driver_req {
+ 	u8			zip_tbl_info_valid;
+ 	struct ipa_mem_bounds	zip_tbl_info;
+ 
+-	/* Routing table information.  These define the location and size
+-	 * of hashable IPv4 and IPv6 filter tables.  The start values are
+-	 * offsets relative to the start of IPA shared memory.
++	/* Routing table information.  These define the location and maximum
++	 * *index* (not byte) for the modem portion of hashable IPv4 and IPv6
++	 * routing tables (if supported by hardware).  The start values are
++	 * byte offsets relative to the start of IPA shared memory.
+ 	 */
+ 	u8			v4_hash_route_tbl_info_valid;
+-	struct ipa_mem_array	v4_hash_route_tbl_info;
++	struct ipa_mem_bounds	v4_hash_route_tbl_info;
+ 	u8			v6_hash_route_tbl_info_valid;
+-	struct ipa_mem_array	v6_hash_route_tbl_info;
++	struct ipa_mem_bounds	v6_hash_route_tbl_info;
+ 
+ 	/* Filter table information.  These define the location and size
+-	 * of hashable IPv4 and IPv6 filter tables.  The start values are
+-	 * offsets relative to the start of IPA shared memory.
++	 * of hashable IPv4 and IPv6 filter tables (if supported by hardware).
++	 * The start values are byte offsets relative to the start of IPA
++	 * shared memory.
+ 	 */
+ 	u8			v4_hash_filter_tbl_start_valid;
+ 	u32			v4_hash_filter_tbl_start;
+diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
+index 0747866d60abc..02c1928374144 100644
+--- a/drivers/net/ipa/ipa_table.c
++++ b/drivers/net/ipa/ipa_table.c
+@@ -27,28 +27,38 @@
+ /**
+  * DOC: IPA Filter and Route Tables
+  *
+- * The IPA has tables defined in its local shared memory that define filter
+- * and routing rules.  Each entry in these tables contains a 64-bit DMA
+- * address that refers to DRAM (system memory) containing a rule definition.
++ * The IPA has tables defined in its local (IPA-resident) memory that define
++ * filter and routing rules.  An entry in either of these tables is a little
++ * endian 64-bit "slot" that holds the address of a rule definition.  (The
++ * size of these slots is 64 bits regardless of the host DMA address size.)
++ *
++ * Separate tables (both filter and route) used for IPv4 and IPv6.  There
++ * are normally another set of "hashed" filter and route tables, which are
++ * used with a hash of message metadata.  Hashed operation is not supported
++ * by all IPA hardware (IPA v4.2 doesn't support hashed tables).
++ *
++ * Rules can be in local memory or in DRAM (system memory).  The offset of
++ * an object (such as a route or filter table) in IPA-resident memory must
++ * 128-byte aligned.  An object in system memory (such as a route or filter
++ * rule) must be at an 8-byte aligned address.  We currently only place
++ * route or filter rules in system memory.
++ *
+  * A rule consists of a contiguous block of 32-bit values terminated with
+  * 32 zero bits.  A special "zero entry" rule consisting of 64 zero bits
+  * represents "no filtering" or "no routing," and is the reset value for
+- * filter or route table rules.  Separate tables (both filter and route)
+- * used for IPv4 and IPv6.  Additionally, there can be hashed filter or
+- * route tables, which are used when a hash of message metadata matches.
+- * Hashed operation is not supported by all IPA hardware.
++ * filter or route table rules.
+  *
+  * Each filter rule is associated with an AP or modem TX endpoint, though
+- * not all TX endpoints support filtering.  The first 64-bit entry in a
++ * not all TX endpoints support filtering.  The first 64-bit slot in a
+  * filter table is a bitmap indicating which endpoints have entries in
+  * the table.  The low-order bit (bit 0) in this bitmap represents a
+  * special global filter, which applies to all traffic.  This is not
+  * used in the current code.  Bit 1, if set, indicates that there is an
+- * entry (i.e. a DMA address referring to a rule) for endpoint 0 in the
+- * table.  Bit 2, if set, indicates there is an entry for endpoint 1,
+- * and so on.  Space is set aside in IPA local memory to hold as many
+- * filter table entries as might be required, but typically they are not
+- * all used.
++ * entry (i.e. slot containing a system address referring to a rule) for
++ * endpoint 0 in the table.  Bit 3, if set, indicates there is an entry
++ * for endpoint 2, and so on.  Space is set aside in IPA local memory to
++ * hold as many filter table entries as might be required, but typically
++ * they are not all used.
+  *
+  * The AP initializes all entries in a filter table to refer to a "zero"
+  * entry.  Once initialized the modem and AP update the entries for
+@@ -96,13 +106,8 @@
+  *                 ----------------------
+  */
+ 
+-/* IPA hardware constrains filter and route tables alignment */
+-#define IPA_TABLE_ALIGN			128	/* Minimum table alignment */
+-
+ /* Assignment of route table entries to the modem and AP */
+ #define IPA_ROUTE_MODEM_MIN		0
+-#define IPA_ROUTE_MODEM_COUNT		8
+-
+ #define IPA_ROUTE_AP_MIN		IPA_ROUTE_MODEM_COUNT
+ #define IPA_ROUTE_AP_COUNT \
+ 		(IPA_ROUTE_COUNT_MAX - IPA_ROUTE_MODEM_COUNT)
+@@ -118,21 +123,14 @@
+ /* Check things that can be validated at build time. */
+ static void ipa_table_validate_build(void)
+ {
+-	/* IPA hardware accesses memory 128 bytes at a time.  Addresses
+-	 * referred to by entries in filter and route tables must be
+-	 * aligned on 128-byte byte boundaries.  The only rule address
+-	 * ever use is the "zero rule", and it's aligned at the base
+-	 * of a coherent DMA allocation.
+-	 */
+-	BUILD_BUG_ON(ARCH_DMA_MINALIGN % IPA_TABLE_ALIGN);
+-
+-	/* Filter and route tables contain DMA addresses that refer to
+-	 * filter or route rules.  We use a fixed constant to represent
+-	 * the size of either type of table entry.  Code in ipa_table_init()
+-	 * uses a pointer to __le64 to initialize table entriews.
++	/* Filter and route tables contain DMA addresses that refer
++	 * to filter or route rules.  But the size of a table entry
++	 * is 64 bits regardless of what the size of an AP DMA address
++	 * is.  A fixed constant defines the size of an entry, and
++	 * code in ipa_table_init() uses a pointer to __le64 to
++	 * initialize tables.
+ 	 */
+-	BUILD_BUG_ON(IPA_TABLE_ENTRY_SIZE != sizeof(dma_addr_t));
+-	BUILD_BUG_ON(sizeof(dma_addr_t) != sizeof(__le64));
++	BUILD_BUG_ON(sizeof(dma_addr_t) > sizeof(__le64));
+ 
+ 	/* A "zero rule" is used to represent no filtering or no routing.
+ 	 * It is a 64-bit block of zeroed memory.  Code in ipa_table_init()
+@@ -163,7 +161,7 @@ ipa_table_valid_one(struct ipa *ipa, bool route, bool ipv6, bool hashed)
+ 		else
+ 			mem = hashed ? &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]
+ 				     : &ipa->mem[IPA_MEM_V4_ROUTE];
+-		size = IPA_ROUTE_COUNT_MAX * IPA_TABLE_ENTRY_SIZE;
++		size = IPA_ROUTE_COUNT_MAX * sizeof(__le64);
+ 	} else {
+ 		if (ipv6)
+ 			mem = hashed ? &ipa->mem[IPA_MEM_V6_FILTER_HASHED]
+@@ -171,7 +169,7 @@ ipa_table_valid_one(struct ipa *ipa, bool route, bool ipv6, bool hashed)
+ 		else
+ 			mem = hashed ? &ipa->mem[IPA_MEM_V4_FILTER_HASHED]
+ 				     : &ipa->mem[IPA_MEM_V4_FILTER];
+-		size = (1 + IPA_FILTER_COUNT_MAX) * IPA_TABLE_ENTRY_SIZE;
++		size = (1 + IPA_FILTER_COUNT_MAX) * sizeof(__le64);
+ 	}
+ 
+ 	if (!ipa_cmd_table_valid(ipa, mem, route, ipv6, hashed))
+@@ -270,8 +268,8 @@ static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
+ 	if (filter)
+ 		first++;	/* skip over bitmap */
+ 
+-	offset = mem->offset + first * IPA_TABLE_ENTRY_SIZE;
+-	size = count * IPA_TABLE_ENTRY_SIZE;
++	offset = mem->offset + first * sizeof(__le64);
++	size = count * sizeof(__le64);
+ 	addr = ipa_table_addr(ipa, false, count);
+ 
+ 	ipa_cmd_dma_shared_mem_add(trans, offset, size, addr, true);
+@@ -455,11 +453,11 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
+ 		count = 1 + hweight32(ipa->filter_map);
+ 		hash_count = hash_mem->size ? count : 0;
+ 	} else {
+-		count = mem->size / IPA_TABLE_ENTRY_SIZE;
+-		hash_count = hash_mem->size / IPA_TABLE_ENTRY_SIZE;
++		count = mem->size / sizeof(__le64);
++		hash_count = hash_mem->size / sizeof(__le64);
+ 	}
+-	size = count * IPA_TABLE_ENTRY_SIZE;
+-	hash_size = hash_count * IPA_TABLE_ENTRY_SIZE;
++	size = count * sizeof(__le64);
++	hash_size = hash_count * sizeof(__le64);
+ 
+ 	addr = ipa_table_addr(ipa, filter, count);
+ 	hash_addr = ipa_table_addr(ipa, filter, hash_count);
+@@ -662,7 +660,13 @@ int ipa_table_init(struct ipa *ipa)
+ 
+ 	ipa_table_validate_build();
+ 
+-	size = IPA_ZERO_RULE_SIZE + (1 + count) * IPA_TABLE_ENTRY_SIZE;
++	/* The IPA hardware requires route and filter table rules to be
++	 * aligned on a 128-byte boundary.  We put the "zero rule" at the
++	 * base of the table area allocated here.  The DMA address returned
++	 * by dma_alloc_coherent() is guaranteed to be a power-of-2 number
++	 * of pages, which satisfies the rule alignment requirement.
++	 */
++	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
+ 	virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
+ 	if (!virt)
+ 		return -ENOMEM;
+@@ -694,7 +698,7 @@ void ipa_table_exit(struct ipa *ipa)
+ 	struct device *dev = &ipa->pdev->dev;
+ 	size_t size;
+ 
+-	size = IPA_ZERO_RULE_SIZE + (1 + count) * IPA_TABLE_ENTRY_SIZE;
++	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
+ 
+ 	dma_free_coherent(dev, size, ipa->table_virt, ipa->table_addr);
+ 	ipa->table_addr = 0;
+diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
+index 78038d14fcea9..35e519cef25da 100644
+--- a/drivers/net/ipa/ipa_table.h
++++ b/drivers/net/ipa/ipa_table.h
+@@ -10,12 +10,12 @@
+ 
+ struct ipa;
+ 
+-/* The size of a filter or route table entry */
+-#define IPA_TABLE_ENTRY_SIZE	sizeof(__le64)	/* Holds a physical address */
+-
+ /* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */
+ #define IPA_FILTER_COUNT_MAX	14
+ 
++/* The number of route table entries allotted to the modem */
++#define IPA_ROUTE_MODEM_COUNT	8
++
+ /* The maximum number of route table entries (IPv4, IPv6; hashed or not) */
+ #define IPA_ROUTE_COUNT_MAX	15
+ 
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index 8801d093135c3..a33149ee0ddcf 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -496,7 +496,6 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
+ 
+ static int ipvlan_process_outbound(struct sk_buff *skb)
+ {
+-	struct ethhdr *ethh = eth_hdr(skb);
+ 	int ret = NET_XMIT_DROP;
+ 
+ 	/* The ipvlan is a pseudo-L2 device, so the packets that we receive
+@@ -506,6 +505,8 @@ static int ipvlan_process_outbound(struct sk_buff *skb)
+ 	if (skb_mac_header_was_set(skb)) {
+ 		/* In this mode we dont care about
+ 		 * multicast and broadcast traffic */
++		struct ethhdr *ethh = eth_hdr(skb);
++
+ 		if (is_multicast_ether_addr(ethh->h_dest)) {
+ 			pr_debug_ratelimited(
+ 				"Dropped {multi|broad}cast of type=[%x]\n",
+@@ -590,7 +591,7 @@ out:
+ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	const struct ipvl_dev *ipvlan = netdev_priv(dev);
+-	struct ethhdr *eth = eth_hdr(skb);
++	struct ethhdr *eth = skb_eth_hdr(skb);
+ 	struct ipvl_addr *addr;
+ 	void *lyr3h;
+ 	int addr_type;
+@@ -620,6 +621,7 @@ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
+ 		return dev_forward_skb(ipvlan->phy_dev, skb);
+ 
+ 	} else if (is_multicast_ether_addr(eth->h_dest)) {
++		skb_reset_mac_header(skb);
+ 		ipvlan_skb_crossing_ns(skb, NULL);
+ 		ipvlan_multicast_enqueue(ipvlan->port, skb, true);
+ 		return NET_XMIT_SUCCESS;
+diff --git a/drivers/net/mdio/of_mdio.c b/drivers/net/mdio/of_mdio.c
+index ea0bf13e8ac3f..5bae47f3da405 100644
+--- a/drivers/net/mdio/of_mdio.c
++++ b/drivers/net/mdio/of_mdio.c
+@@ -332,6 +332,7 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
+ 	return 0;
+ 
+ unregister:
++	of_node_put(child);
+ 	mdiobus_unregister(mdio);
+ 	return rc;
+ }
+diff --git a/drivers/net/phy/aquantia_main.c b/drivers/net/phy/aquantia_main.c
+index 75a62d1cc7375..7045595f8d7d1 100644
+--- a/drivers/net/phy/aquantia_main.c
++++ b/drivers/net/phy/aquantia_main.c
+@@ -89,6 +89,9 @@
+ #define VEND1_GLOBAL_FW_ID_MAJOR		GENMASK(15, 8)
+ #define VEND1_GLOBAL_FW_ID_MINOR		GENMASK(7, 0)
+ 
++#define VEND1_GLOBAL_GEN_STAT2			0xc831
++#define VEND1_GLOBAL_GEN_STAT2_OP_IN_PROG	BIT(15)
++
+ #define VEND1_GLOBAL_RSVD_STAT1			0xc885
+ #define VEND1_GLOBAL_RSVD_STAT1_FW_BUILD_ID	GENMASK(7, 4)
+ #define VEND1_GLOBAL_RSVD_STAT1_PROV_ID		GENMASK(3, 0)
+@@ -123,6 +126,12 @@
+ #define VEND1_GLOBAL_INT_VEND_MASK_GLOBAL2	BIT(1)
+ #define VEND1_GLOBAL_INT_VEND_MASK_GLOBAL3	BIT(0)
+ 
++/* Sleep and timeout for checking if the Processor-Intensive
++ * MDIO operation is finished
++ */
++#define AQR107_OP_IN_PROG_SLEEP		1000
++#define AQR107_OP_IN_PROG_TIMEOUT	100000
++
+ struct aqr107_hw_stat {
+ 	const char *name;
+ 	int reg;
+@@ -569,16 +578,52 @@ static void aqr107_link_change_notify(struct phy_device *phydev)
+ 		phydev_info(phydev, "Aquantia 1000Base-T2 mode active\n");
+ }
+ 
++static int aqr107_wait_processor_intensive_op(struct phy_device *phydev)
++{
++	int val, err;
++
++	/* The datasheet notes to wait at least 1ms after issuing a
++	 * processor intensive operation before checking.
++	 * We cannot use the 'sleep_before_read' parameter of read_poll_timeout
++	 * because that just determines the maximum time slept, not the minimum.
++	 */
++	usleep_range(1000, 5000);
++
++	err = phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1,
++					VEND1_GLOBAL_GEN_STAT2, val,
++					!(val & VEND1_GLOBAL_GEN_STAT2_OP_IN_PROG),
++					AQR107_OP_IN_PROG_SLEEP,
++					AQR107_OP_IN_PROG_TIMEOUT, false);
++	if (err) {
++		phydev_err(phydev, "timeout: processor-intensive MDIO operation\n");
++		return err;
++	}
++
++	return 0;
++}
++
+ static int aqr107_suspend(struct phy_device *phydev)
+ {
+-	return phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
+-				MDIO_CTRL1_LPOWER);
++	int err;
++
++	err = phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
++			       MDIO_CTRL1_LPOWER);
++	if (err)
++		return err;
++
++	return aqr107_wait_processor_intensive_op(phydev);
+ }
+ 
+ static int aqr107_resume(struct phy_device *phydev)
+ {
+-	return phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
+-				  MDIO_CTRL1_LPOWER);
++	int err;
++
++	err = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
++				 MDIO_CTRL1_LPOWER);
++	if (err)
++		return err;
++
++	return aqr107_wait_processor_intensive_op(phydev);
+ }
+ 
+ static int aqr107_probe(struct phy_device *phydev)
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 615f3776b4bee..7117d559a32e4 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -1270,10 +1270,12 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
+ 		}
+ 	}
+ 
+-	netif_addr_lock_bh(dev);
+-	dev_uc_sync_multiple(port_dev, dev);
+-	dev_mc_sync_multiple(port_dev, dev);
+-	netif_addr_unlock_bh(dev);
++	if (dev->flags & IFF_UP) {
++		netif_addr_lock_bh(dev);
++		dev_uc_sync_multiple(port_dev, dev);
++		dev_mc_sync_multiple(port_dev, dev);
++		netif_addr_unlock_bh(dev);
++	}
+ 
+ 	port->index = -1;
+ 	list_add_tail_rcu(&port->list, &team->port_list);
+@@ -1344,8 +1346,10 @@ static int team_port_del(struct team *team, struct net_device *port_dev)
+ 	netdev_rx_handler_unregister(port_dev);
+ 	team_port_disable_netpoll(port);
+ 	vlan_vids_del_by_dev(port_dev, dev);
+-	dev_uc_unsync(port_dev, dev);
+-	dev_mc_unsync(port_dev, dev);
++	if (dev->flags & IFF_UP) {
++		dev_uc_unsync(port_dev, dev);
++		dev_mc_unsync(port_dev, dev);
++	}
+ 	dev_close(port_dev);
+ 	team_port_leave(team, port);
+ 
+@@ -1695,6 +1699,14 @@ static int team_open(struct net_device *dev)
+ 
+ static int team_close(struct net_device *dev)
+ {
++	struct team *team = netdev_priv(dev);
++	struct team_port *port;
++
++	list_for_each_entry(port, &team->port_list, list) {
++		dev_uc_unsync(port->dev, dev);
++		dev_mc_unsync(port->dev, dev);
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireguard/netlink.c b/drivers/net/wireguard/netlink.c
+index d0f3b6d7f4089..5c804bcabfe6b 100644
+--- a/drivers/net/wireguard/netlink.c
++++ b/drivers/net/wireguard/netlink.c
+@@ -436,14 +436,13 @@ static int set_peer(struct wg_device *wg, struct nlattr **attrs)
+ 	if (attrs[WGPEER_A_ENDPOINT]) {
+ 		struct sockaddr *addr = nla_data(attrs[WGPEER_A_ENDPOINT]);
+ 		size_t len = nla_len(attrs[WGPEER_A_ENDPOINT]);
++		struct endpoint endpoint = { { { 0 } } };
+ 
+-		if ((len == sizeof(struct sockaddr_in) &&
+-		     addr->sa_family == AF_INET) ||
+-		    (len == sizeof(struct sockaddr_in6) &&
+-		     addr->sa_family == AF_INET6)) {
+-			struct endpoint endpoint = { { { 0 } } };
+-
+-			memcpy(&endpoint.addr, addr, len);
++		if (len == sizeof(struct sockaddr_in) && addr->sa_family == AF_INET) {
++			endpoint.addr4 = *(struct sockaddr_in *)addr;
++			wg_socket_set_peer_endpoint(peer, &endpoint);
++		} else if (len == sizeof(struct sockaddr_in6) && addr->sa_family == AF_INET6) {
++			endpoint.addr6 = *(struct sockaddr_in6 *)addr;
+ 			wg_socket_set_peer_endpoint(peer, &endpoint);
+ 		}
+ 	}
+diff --git a/drivers/net/wireguard/selftest/ratelimiter.c b/drivers/net/wireguard/selftest/ratelimiter.c
+index ba87d294604fe..d4bb40a695ab6 100644
+--- a/drivers/net/wireguard/selftest/ratelimiter.c
++++ b/drivers/net/wireguard/selftest/ratelimiter.c
+@@ -6,29 +6,28 @@
+ #ifdef DEBUG
+ 
+ #include <linux/jiffies.h>
+-#include <linux/hrtimer.h>
+ 
+ static const struct {
+ 	bool result;
+-	u64 nsec_to_sleep_before;
++	unsigned int msec_to_sleep_before;
+ } expected_results[] __initconst = {
+ 	[0 ... PACKETS_BURSTABLE - 1] = { true, 0 },
+ 	[PACKETS_BURSTABLE] = { false, 0 },
+-	[PACKETS_BURSTABLE + 1] = { true, NSEC_PER_SEC / PACKETS_PER_SECOND },
++	[PACKETS_BURSTABLE + 1] = { true, MSEC_PER_SEC / PACKETS_PER_SECOND },
+ 	[PACKETS_BURSTABLE + 2] = { false, 0 },
+-	[PACKETS_BURSTABLE + 3] = { true, (NSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
++	[PACKETS_BURSTABLE + 3] = { true, (MSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
+ 	[PACKETS_BURSTABLE + 4] = { true, 0 },
+ 	[PACKETS_BURSTABLE + 5] = { false, 0 }
+ };
+ 
+ static __init unsigned int maximum_jiffies_at_index(int index)
+ {
+-	u64 total_nsecs = 2 * NSEC_PER_SEC / PACKETS_PER_SECOND / 3;
++	unsigned int total_msecs = 2 * MSEC_PER_SEC / PACKETS_PER_SECOND / 3;
+ 	int i;
+ 
+ 	for (i = 0; i <= index; ++i)
+-		total_nsecs += expected_results[i].nsec_to_sleep_before;
+-	return nsecs_to_jiffies(total_nsecs);
++		total_msecs += expected_results[i].msec_to_sleep_before;
++	return msecs_to_jiffies(total_msecs);
+ }
+ 
+ static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
+@@ -43,12 +42,8 @@ static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
+ 	loop_start_time = jiffies;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(expected_results); ++i) {
+-		if (expected_results[i].nsec_to_sleep_before) {
+-			ktime_t timeout = ktime_add(ktime_add_ns(ktime_get_coarse_boottime(), TICK_NSEC * 4 / 3),
+-						    ns_to_ktime(expected_results[i].nsec_to_sleep_before));
+-			set_current_state(TASK_UNINTERRUPTIBLE);
+-			schedule_hrtimeout_range_clock(&timeout, 0, HRTIMER_MODE_ABS, CLOCK_BOOTTIME);
+-		}
++		if (expected_results[i].msec_to_sleep_before)
++			msleep(expected_results[i].msec_to_sleep_before);
+ 
+ 		if (time_is_before_jiffies(loop_start_time +
+ 					   maximum_jiffies_at_index(i)))
+@@ -132,7 +127,7 @@ bool __init wg_ratelimiter_selftest(void)
+ 	if (IS_ENABLED(CONFIG_KASAN) || IS_ENABLED(CONFIG_UBSAN))
+ 		return true;
+ 
+-	BUILD_BUG_ON(NSEC_PER_SEC % PACKETS_PER_SECOND != 0);
++	BUILD_BUG_ON(MSEC_PER_SEC % PACKETS_PER_SECOND != 0);
+ 
+ 	if (wg_ratelimiter_init())
+ 		goto out;
+@@ -172,7 +167,7 @@ bool __init wg_ratelimiter_selftest(void)
+ 	++test;
+ #endif
+ 
+-	for (trials = TRIALS_BEFORE_GIVING_UP;;) {
++	for (trials = TRIALS_BEFORE_GIVING_UP; IS_ENABLED(DEBUG_RATELIMITER_TIMINGS);) {
+ 		int test_count = 0, ret;
+ 
+ 		ret = timings_test(skb4, hdr4, skb6, hdr6, &test_count);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index 1465a92ea3fc9..b26617026e831 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -950,7 +950,7 @@ u32 mt7615_mac_get_sta_tid_sn(struct mt7615_dev *dev, int wcid, u8 tid)
+ 	offset %= 32;
+ 
+ 	val = mt76_rr(dev, addr);
+-	val >>= (tid % 32);
++	val >>= offset;
+ 
+ 	if (offset > 20) {
+ 		addr += 4;
+diff --git a/drivers/s390/block/dasd_alias.c b/drivers/s390/block/dasd_alias.c
+index dc78a523a69f2..b6b938aa66158 100644
+--- a/drivers/s390/block/dasd_alias.c
++++ b/drivers/s390/block/dasd_alias.c
+@@ -675,12 +675,12 @@ int dasd_alias_remove_device(struct dasd_device *device)
+ struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *base_device)
+ {
+ 	struct dasd_eckd_private *alias_priv, *private = base_device->private;
+-	struct alias_pav_group *group = private->pavgroup;
+ 	struct alias_lcu *lcu = private->lcu;
+ 	struct dasd_device *alias_device;
++	struct alias_pav_group *group;
+ 	unsigned long flags;
+ 
+-	if (!group || !lcu)
++	if (!lcu)
+ 		return NULL;
+ 	if (lcu->pav == NO_PAV ||
+ 	    lcu->flags & (NEED_UAC_UPDATE | UPDATE_PENDING))
+@@ -697,6 +697,11 @@ struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *base_device)
+ 	}
+ 
+ 	spin_lock_irqsave(&lcu->lock, flags);
++	group = private->pavgroup;
++	if (!group) {
++		spin_unlock_irqrestore(&lcu->lock, flags);
++		return NULL;
++	}
+ 	alias_device = group->next;
+ 	if (!alias_device) {
+ 		if (list_empty(&group->aliaslist)) {
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 3153f164554aa..c1b76cda60dbc 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -2822,23 +2822,22 @@ static int
+ _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
+ {
+ 	struct sysinfo s;
+-	int dma_mask;
+ 
+ 	if (ioc->is_mcpu_endpoint ||
+ 	    sizeof(dma_addr_t) == 4 || ioc->use_32bit_dma ||
+-	    dma_get_required_mask(&pdev->dev) <= 32)
+-		dma_mask = 32;
++	    dma_get_required_mask(&pdev->dev) <= DMA_BIT_MASK(32))
++		ioc->dma_mask = 32;
+ 	/* Set 63 bit DMA mask for all SAS3 and SAS35 controllers */
+ 	else if (ioc->hba_mpi_version_belonged > MPI2_VERSION)
+-		dma_mask = 63;
++		ioc->dma_mask = 63;
+ 	else
+-		dma_mask = 64;
++		ioc->dma_mask = 64;
+ 
+-	if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(dma_mask)) ||
+-	    dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(dma_mask)))
++	if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(ioc->dma_mask)) ||
++	    dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(ioc->dma_mask)))
+ 		return -ENODEV;
+ 
+-	if (dma_mask > 32) {
++	if (ioc->dma_mask > 32) {
+ 		ioc->base_add_sg_single = &_base_add_sg_single_64;
+ 		ioc->sge_size = sizeof(Mpi2SGESimple64_t);
+ 	} else {
+@@ -2848,7 +2847,7 @@ _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
+ 
+ 	si_meminfo(&s);
+ 	ioc_info(ioc, "%d BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (%ld kB)\n",
+-		dma_mask, convert_to_kb(s.totalram));
++		ioc->dma_mask, convert_to_kb(s.totalram));
+ 
+ 	return 0;
+ }
+@@ -4902,10 +4901,10 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
+ 			dma_pool_free(ioc->pcie_sgl_dma_pool,
+ 					ioc->pcie_sg_lookup[i].pcie_sgl,
+ 					ioc->pcie_sg_lookup[i].pcie_sgl_dma);
++			ioc->pcie_sg_lookup[i].pcie_sgl = NULL;
+ 		}
+ 		dma_pool_destroy(ioc->pcie_sgl_dma_pool);
+ 	}
+-
+ 	if (ioc->config_page) {
+ 		dexitprintk(ioc,
+ 			    ioc_info(ioc, "config_page(0x%p): free\n",
+@@ -4960,6 +4959,89 @@ mpt3sas_check_same_4gb_region(long reply_pool_start_address, u32 pool_sz)
+ 		return 0;
+ }
+ 
++/**
++ * _base_reduce_hba_queue_depth- Retry with reduced queue depth
++ * @ioc: Adapter object
++ *
++ * Return: 0 for success, non-zero for failure.
++ **/
++static inline int
++_base_reduce_hba_queue_depth(struct MPT3SAS_ADAPTER *ioc)
++{
++	int reduce_sz = 64;
++
++	if ((ioc->hba_queue_depth - reduce_sz) >
++	    (ioc->internal_depth + INTERNAL_SCSIIO_CMDS_COUNT)) {
++		ioc->hba_queue_depth -= reduce_sz;
++		return 0;
++	} else
++		return -ENOMEM;
++}
++
++/**
++ * _base_allocate_pcie_sgl_pool - Allocating DMA'able memory
++ *			for pcie sgl pools.
++ * @ioc: Adapter object
++ * @sz: DMA Pool size
++ * @ct: Chain tracker
++ * Return: 0 for success, non-zero for failure.
++ */
++
++static int
++_base_allocate_pcie_sgl_pool(struct MPT3SAS_ADAPTER *ioc, u32 sz)
++{
++	int i = 0, j = 0;
++	struct chain_tracker *ct;
++
++	ioc->pcie_sgl_dma_pool =
++	    dma_pool_create("PCIe SGL pool", &ioc->pdev->dev, sz,
++	    ioc->page_size, 0);
++	if (!ioc->pcie_sgl_dma_pool) {
++		ioc_err(ioc, "PCIe SGL pool: dma_pool_create failed\n");
++		return -ENOMEM;
++	}
++
++	ioc->chains_per_prp_buffer = sz/ioc->chain_segment_sz;
++	ioc->chains_per_prp_buffer =
++	    min(ioc->chains_per_prp_buffer, ioc->chains_needed_per_io);
++	for (i = 0; i < ioc->scsiio_depth; i++) {
++		ioc->pcie_sg_lookup[i].pcie_sgl =
++		    dma_pool_alloc(ioc->pcie_sgl_dma_pool, GFP_KERNEL,
++		    &ioc->pcie_sg_lookup[i].pcie_sgl_dma);
++		if (!ioc->pcie_sg_lookup[i].pcie_sgl) {
++			ioc_err(ioc, "PCIe SGL pool: dma_pool_alloc failed\n");
++			return -EAGAIN;
++		}
++
++		if (!mpt3sas_check_same_4gb_region(
++		    (long)ioc->pcie_sg_lookup[i].pcie_sgl, sz)) {
++			ioc_err(ioc, "PCIE SGLs are not in same 4G !! pcie sgl (0x%p) dma = (0x%llx)\n",
++			    ioc->pcie_sg_lookup[i].pcie_sgl,
++			    (unsigned long long)
++			    ioc->pcie_sg_lookup[i].pcie_sgl_dma);
++			ioc->use_32bit_dma = true;
++			return -EAGAIN;
++		}
++
++		for (j = 0; j < ioc->chains_per_prp_buffer; j++) {
++			ct = &ioc->chain_lookup[i].chains_per_smid[j];
++			ct->chain_buffer =
++			    ioc->pcie_sg_lookup[i].pcie_sgl +
++			    (j * ioc->chain_segment_sz);
++			ct->chain_buffer_dma =
++			    ioc->pcie_sg_lookup[i].pcie_sgl_dma +
++			    (j * ioc->chain_segment_sz);
++		}
++	}
++	dinitprintk(ioc, ioc_info(ioc,
++	    "PCIe sgl pool depth(%d), element_size(%d), pool_size(%d kB)\n",
++	    ioc->scsiio_depth, sz, (sz * ioc->scsiio_depth)/1024));
++	dinitprintk(ioc, ioc_info(ioc,
++	    "Number of chains can fit in a PRP page(%d)\n",
++	    ioc->chains_per_prp_buffer));
++	return 0;
++}
++
+ /**
+  * base_alloc_rdpq_dma_pool - Allocating DMA'able memory
+  *                     for reply queues.
+@@ -5058,7 +5140,7 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
+ 	unsigned short sg_tablesize;
+ 	u16 sge_size;
+ 	int i, j;
+-	int ret = 0;
++	int ret = 0, rc = 0;
+ 	struct chain_tracker *ct;
+ 
+ 	dinitprintk(ioc, ioc_info(ioc, "%s\n", __func__));
+@@ -5357,6 +5439,7 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
+ 	 * be required for NVMe PRP's, only each set of NVMe blocks will be
+ 	 * contiguous, so a new set is allocated for each possible I/O.
+ 	 */
++
+ 	ioc->chains_per_prp_buffer = 0;
+ 	if (ioc->facts.ProtocolFlags & MPI2_IOCFACTS_PROTOCOL_NVME_DEVICES) {
+ 		nvme_blocks_needed =
+@@ -5371,43 +5454,11 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
+ 			goto out;
+ 		}
+ 		sz = nvme_blocks_needed * ioc->page_size;
+-		ioc->pcie_sgl_dma_pool =
+-			dma_pool_create("PCIe SGL pool", &ioc->pdev->dev, sz, 16, 0);
+-		if (!ioc->pcie_sgl_dma_pool) {
+-			ioc_info(ioc, "PCIe SGL pool: dma_pool_create failed\n");
+-			goto out;
+-		}
+-
+-		ioc->chains_per_prp_buffer = sz/ioc->chain_segment_sz;
+-		ioc->chains_per_prp_buffer = min(ioc->chains_per_prp_buffer,
+-						ioc->chains_needed_per_io);
+-
+-		for (i = 0; i < ioc->scsiio_depth; i++) {
+-			ioc->pcie_sg_lookup[i].pcie_sgl = dma_pool_alloc(
+-				ioc->pcie_sgl_dma_pool, GFP_KERNEL,
+-				&ioc->pcie_sg_lookup[i].pcie_sgl_dma);
+-			if (!ioc->pcie_sg_lookup[i].pcie_sgl) {
+-				ioc_info(ioc, "PCIe SGL pool: dma_pool_alloc failed\n");
+-				goto out;
+-			}
+-			for (j = 0; j < ioc->chains_per_prp_buffer; j++) {
+-				ct = &ioc->chain_lookup[i].chains_per_smid[j];
+-				ct->chain_buffer =
+-				    ioc->pcie_sg_lookup[i].pcie_sgl +
+-				    (j * ioc->chain_segment_sz);
+-				ct->chain_buffer_dma =
+-				    ioc->pcie_sg_lookup[i].pcie_sgl_dma +
+-				    (j * ioc->chain_segment_sz);
+-			}
+-		}
+-
+-		dinitprintk(ioc,
+-			    ioc_info(ioc, "PCIe sgl pool depth(%d), element_size(%d), pool_size(%d kB)\n",
+-				     ioc->scsiio_depth, sz,
+-				     (sz * ioc->scsiio_depth) / 1024));
+-		dinitprintk(ioc,
+-			    ioc_info(ioc, "Number of chains can fit in a PRP page(%d)\n",
+-				     ioc->chains_per_prp_buffer));
++		rc = _base_allocate_pcie_sgl_pool(ioc, sz);
++		if (rc == -ENOMEM)
++			return -ENOMEM;
++		else if (rc == -EAGAIN)
++			goto try_32bit_dma;
+ 		total_sz += sz * ioc->scsiio_depth;
+ 	}
+ 
+@@ -5577,6 +5628,19 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
+ 		 ioc->shost->sg_tablesize);
+ 	return 0;
+ 
++try_32bit_dma:
++	_base_release_memory_pools(ioc);
++	if (ioc->use_32bit_dma && (ioc->dma_mask > 32)) {
++		/* Change dma coherent mask to 32 bit and reallocate */
++		if (_base_config_dma_addressing(ioc, ioc->pdev) != 0) {
++			pr_err("Setting 32 bit coherent DMA mask Failed %s\n",
++			    pci_name(ioc->pdev));
++			return -ENODEV;
++		}
++	} else if (_base_reduce_hba_queue_depth(ioc) != 0)
++		return -ENOMEM;
++	goto retry_allocation;
++
+  out:
+ 	return -ENOMEM;
+ }
+@@ -7239,6 +7303,7 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
+ 
+ 	ioc->rdpq_array_enable_assigned = 0;
+ 	ioc->use_32bit_dma = false;
++	ioc->dma_mask = 64;
+ 	if (ioc->is_aero_ioc)
+ 		ioc->base_readl = &_base_readl_aero;
+ 	else
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.h b/drivers/scsi/mpt3sas/mpt3sas_base.h
+index bc8beb10f3fc3..823bbe64a477f 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.h
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.h
+@@ -1257,6 +1257,7 @@ struct MPT3SAS_ADAPTER {
+ 	u16		thresh_hold;
+ 	u8		high_iops_queues;
+ 	u32		drv_support_bitmap;
++	u32             dma_mask;
+ 	bool		enable_sdev_max_qd;
+ 	bool		use_32bit_dma;
+ 
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 602065bfc9bb8..b7872ad3e7622 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -295,20 +295,16 @@ static int atmel_config_rs485(struct uart_port *port,
+ 
+ 	mode = atmel_uart_readl(port, ATMEL_US_MR);
+ 
+-	/* Resetting serial mode to RS232 (0x0) */
+-	mode &= ~ATMEL_US_USMODE;
+-
+-	port->rs485 = *rs485conf;
+-
+ 	if (rs485conf->flags & SER_RS485_ENABLED) {
+ 		dev_dbg(port->dev, "Setting UART to RS485\n");
+-		if (port->rs485.flags & SER_RS485_RX_DURING_TX)
++		if (rs485conf->flags & SER_RS485_RX_DURING_TX)
+ 			atmel_port->tx_done_mask = ATMEL_US_TXRDY;
+ 		else
+ 			atmel_port->tx_done_mask = ATMEL_US_TXEMPTY;
+ 
+ 		atmel_uart_writel(port, ATMEL_US_TTGR,
+ 				  rs485conf->delay_rts_after_send);
++		mode &= ~ATMEL_US_USMODE;
+ 		mode |= ATMEL_US_USMODE_RS485;
+ 	} else {
+ 		dev_dbg(port->dev, "Setting UART to RS232\n");
+diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
+index c2be22c3b7d1b..cda71802b6982 100644
+--- a/drivers/tty/serial/serial-tegra.c
++++ b/drivers/tty/serial/serial-tegra.c
+@@ -520,7 +520,7 @@ static void tegra_uart_tx_dma_complete(void *args)
+ 	count = tup->tx_bytes_requested - state.residue;
+ 	async_tx_ack(tup->tx_dma_desc);
+ 	spin_lock_irqsave(&tup->uport.lock, flags);
+-	xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
++	uart_xmit_advance(&tup->uport, count);
+ 	tup->tx_in_progress = 0;
+ 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ 		uart_write_wakeup(&tup->uport);
+@@ -608,7 +608,6 @@ static unsigned int tegra_uart_tx_empty(struct uart_port *u)
+ static void tegra_uart_stop_tx(struct uart_port *u)
+ {
+ 	struct tegra_uart_port *tup = to_tegra_uport(u);
+-	struct circ_buf *xmit = &tup->uport.state->xmit;
+ 	struct dma_tx_state state;
+ 	unsigned int count;
+ 
+@@ -619,7 +618,7 @@ static void tegra_uart_stop_tx(struct uart_port *u)
+ 	dmaengine_tx_status(tup->tx_dma_chan, tup->tx_cookie, &state);
+ 	count = tup->tx_bytes_requested - state.residue;
+ 	async_tx_ack(tup->tx_dma_desc);
+-	xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
++	uart_xmit_advance(&tup->uport, count);
+ 	tup->tx_in_progress = 0;
+ }
+ 
+diff --git a/drivers/tty/serial/tegra-tcu.c b/drivers/tty/serial/tegra-tcu.c
+index aaf8748a61479..31ae705aa38b7 100644
+--- a/drivers/tty/serial/tegra-tcu.c
++++ b/drivers/tty/serial/tegra-tcu.c
+@@ -101,7 +101,7 @@ static void tegra_tcu_uart_start_tx(struct uart_port *port)
+ 			break;
+ 
+ 		tegra_tcu_write(tcu, &xmit->buf[xmit->tail], count);
+-		xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
++		uart_xmit_advance(port, count);
+ 	}
+ 
+ 	uart_write_wakeup(port);
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index c6fc14b169dac..e3a8b6c71aa1d 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -1531,7 +1531,8 @@ static void cdns3_transfer_completed(struct cdns3_device *priv_dev,
+ 						TRB_LEN(le32_to_cpu(trb->length));
+ 
+ 				if (priv_req->num_of_trb > 1 &&
+-					le32_to_cpu(trb->control) & TRB_SMM)
++					le32_to_cpu(trb->control) & TRB_SMM &&
++					le32_to_cpu(trb->control) & TRB_CHAIN)
+ 					transfer_end = true;
+ 
+ 				cdns3_ep_inc_deq(priv_ep);
+@@ -1691,6 +1692,7 @@ static int cdns3_check_ep_interrupt_proceed(struct cdns3_endpoint *priv_ep)
+ 				ep_cfg &= ~EP_CFG_ENABLE;
+ 				writel(ep_cfg, &priv_dev->regs->ep_cfg);
+ 				priv_ep->flags &= ~EP_QUIRK_ISO_OUT_EN;
++				priv_ep->flags |= EP_UPDATE_EP_TRBADDR;
+ 			}
+ 			cdns3_transfer_completed(priv_dev, priv_ep);
+ 		} else if (!(priv_ep->flags & EP_STALLED) &&
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 53b3d77fba6a2..f2a3c0b5b535d 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -5968,7 +5968,7 @@ re_enumerate_no_bos:
+  *
+  * Return: The same as for usb_reset_and_verify_device().
+  * However, if a reset is already in progress (for instance, if a
+- * driver doesn't have pre_ or post_reset() callbacks, and while
++ * driver doesn't have pre_reset() or post_reset() callbacks, and while
+  * being unbound or re-bound during the ongoing reset its disconnect()
+  * or probe() routine tries to perform a second, nested reset), the
+  * routine returns -EINPROGRESS.
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 5aae7504f78a1..4a0eec1765118 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -114,8 +114,6 @@ void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode)
+ 	dwc->current_dr_role = mode;
+ }
+ 
+-static int dwc3_core_soft_reset(struct dwc3 *dwc);
+-
+ static void __dwc3_set_mode(struct work_struct *work)
+ {
+ 	struct dwc3 *dwc = work_to_dwc(work);
+@@ -265,7 +263,7 @@ u32 dwc3_core_fifo_space(struct dwc3_ep *dep, u8 type)
+  * dwc3_core_soft_reset - Issues core soft reset and PHY reset
+  * @dwc: pointer to our context structure
+  */
+-static int dwc3_core_soft_reset(struct dwc3 *dwc)
++int dwc3_core_soft_reset(struct dwc3 *dwc)
+ {
+ 	u32		reg;
+ 	int		retries = 1000;
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 79e1b82e5e057..cbebe541f7e8f 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -1010,6 +1010,7 @@ struct dwc3_scratchpad_array {
+  * @tx_max_burst_prd: max periodic ESS transmit burst size
+  * @hsphy_interface: "utmi" or "ulpi"
+  * @connected: true when we're connected to a host, false otherwise
++ * @softconnect: true when gadget connect is called, false when disconnect runs
+  * @delayed_status: true when gadget driver asks for delayed status
+  * @ep0_bounced: true when we used bounce buffer
+  * @ep0_expect_in: true when we expect a DATA IN transfer
+@@ -1218,6 +1219,7 @@ struct dwc3 {
+ 	const char		*hsphy_interface;
+ 
+ 	unsigned		connected:1;
++	unsigned		softconnect:1;
+ 	unsigned		delayed_status:1;
+ 	unsigned		ep0_bounced:1;
+ 	unsigned		ep0_expect_in:1;
+@@ -1456,6 +1458,8 @@ bool dwc3_has_imod(struct dwc3 *dwc);
+ int dwc3_event_buffers_setup(struct dwc3 *dwc);
+ void dwc3_event_buffers_cleanup(struct dwc3 *dwc);
+ 
++int dwc3_core_soft_reset(struct dwc3 *dwc);
++
+ #if IS_ENABLED(CONFIG_USB_DWC3_HOST) || IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE)
+ int dwc3_host_init(struct dwc3 *dwc);
+ void dwc3_host_exit(struct dwc3 *dwc);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index a2a10c05ef3fb..41ed2f6f8a8d0 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2120,14 +2120,42 @@ static void dwc3_gadget_disable_irq(struct dwc3 *dwc);
+ static void __dwc3_gadget_stop(struct dwc3 *dwc);
+ static int __dwc3_gadget_start(struct dwc3 *dwc);
+ 
++static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&dwc->lock, flags);
++	dwc->connected = false;
++
++	/*
++	 * In the Synopsys DesignWare Cores USB3 Databook Rev. 3.30a
++	 * Section 4.1.8 Table 4-7, it states that for a device-initiated
++	 * disconnect, the SW needs to ensure that it sends "a DEPENDXFER
++	 * command for any active transfers" before clearing the RunStop
++	 * bit.
++	 */
++	dwc3_stop_active_transfers(dwc);
++	__dwc3_gadget_stop(dwc);
++	spin_unlock_irqrestore(&dwc->lock, flags);
++
++	/*
++	 * Note: if the GEVNTCOUNT indicates events in the event buffer, the
++	 * driver needs to acknowledge them before the controller can halt.
++	 * Simply let the interrupt handler acknowledges and handle the
++	 * remaining event generated by the controller while polling for
++	 * DSTS.DEVCTLHLT.
++	 */
++	return dwc3_gadget_run_stop(dwc, false, false);
++}
++
+ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ {
+ 	struct dwc3		*dwc = gadget_to_dwc(g);
+-	unsigned long		flags;
+ 	int			ret;
+ 
+ 	is_on = !!is_on;
+ 
++	dwc->softconnect = is_on;
+ 	/*
+ 	 * Per databook, when we want to stop the gadget, if a control transfer
+ 	 * is still in process, complete it and get the core into setup phase.
+@@ -2163,50 +2191,27 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 		return 0;
+ 	}
+ 
+-	/*
+-	 * Synchronize and disable any further event handling while controller
+-	 * is being enabled/disabled.
+-	 */
+-	disable_irq(dwc->irq_gadget);
+-
+-	spin_lock_irqsave(&dwc->lock, flags);
++	if (dwc->pullups_connected == is_on) {
++		pm_runtime_put(dwc->dev);
++		return 0;
++	}
+ 
+ 	if (!is_on) {
+-		u32 count;
+-
+-		dwc->connected = false;
++		ret = dwc3_gadget_soft_disconnect(dwc);
++	} else {
+ 		/*
+-		 * In the Synopsis DesignWare Cores USB3 Databook Rev. 3.30a
+-		 * Section 4.1.8 Table 4-7, it states that for a device-initiated
+-		 * disconnect, the SW needs to ensure that it sends "a DEPENDXFER
+-		 * command for any active transfers" before clearing the RunStop
+-		 * bit.
++		 * In the Synopsys DWC_usb31 1.90a programming guide section
++		 * 4.1.9, it specifies that for a reconnect after a
++		 * device-initiated disconnect requires a core soft reset
++		 * (DCTL.CSftRst) before enabling the run/stop bit.
+ 		 */
+-		dwc3_stop_active_transfers(dwc);
+-		__dwc3_gadget_stop(dwc);
++		dwc3_core_soft_reset(dwc);
+ 
+-		/*
+-		 * In the Synopsis DesignWare Cores USB3 Databook Rev. 3.30a
+-		 * Section 1.3.4, it mentions that for the DEVCTRLHLT bit, the
+-		 * "software needs to acknowledge the events that are generated
+-		 * (by writing to GEVNTCOUNTn) while it is waiting for this bit
+-		 * to be set to '1'."
+-		 */
+-		count = dwc3_readl(dwc->regs, DWC3_GEVNTCOUNT(0));
+-		count &= DWC3_GEVNTCOUNT_MASK;
+-		if (count > 0) {
+-			dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), count);
+-			dwc->ev_buf->lpos = (dwc->ev_buf->lpos + count) %
+-						dwc->ev_buf->length;
+-		}
+-	} else {
++		dwc3_event_buffers_setup(dwc);
+ 		__dwc3_gadget_start(dwc);
++		ret = dwc3_gadget_run_stop(dwc, true, false);
+ 	}
+ 
+-	ret = dwc3_gadget_run_stop(dwc, is_on, false);
+-	spin_unlock_irqrestore(&dwc->lock, flags);
+-	enable_irq(dwc->irq_gadget);
+-
+ 	pm_runtime_put(dwc->dev);
+ 
+ 	return ret;
+@@ -4048,7 +4053,7 @@ int dwc3_gadget_resume(struct dwc3 *dwc)
+ {
+ 	int			ret;
+ 
+-	if (!dwc->gadget_driver)
++	if (!dwc->gadget_driver || !dwc->softconnect)
+ 		return 0;
+ 
+ 	ret = __dwc3_gadget_start(dwc);
+diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
+index 8950d1f10a7fb..86c4bc9df3b80 100644
+--- a/drivers/usb/host/xhci-mtk-sch.c
++++ b/drivers/usb/host/xhci-mtk-sch.c
+@@ -25,6 +25,13 @@
+  */
+ #define TT_MICROFRAMES_MAX 9
+ 
++/* schedule error type */
++#define ESCH_SS_Y6		1001
++#define ESCH_SS_OVERLAP		1002
++#define ESCH_CS_OVERFLOW	1003
++#define ESCH_BW_OVERFLOW	1004
++#define ESCH_FIXME		1005
++
+ /* mtk scheduler bitmasks */
+ #define EP_BPKTS(p)	((p) & 0x7f)
+ #define EP_BCSCOUNT(p)	(((p) & 0x7) << 8)
+@@ -32,6 +39,24 @@
+ #define EP_BOFFSET(p)	((p) & 0x3fff)
+ #define EP_BREPEAT(p)	(((p) & 0x7fff) << 16)
+ 
++static char *sch_error_string(int err_num)
++{
++	switch (err_num) {
++	case ESCH_SS_Y6:
++		return "Can't schedule Start-Split in Y6";
++	case ESCH_SS_OVERLAP:
++		return "Can't find a suitable Start-Split location";
++	case ESCH_CS_OVERFLOW:
++		return "The last Complete-Split is greater than 7";
++	case ESCH_BW_OVERFLOW:
++		return "Bandwidth exceeds the maximum limit";
++	case ESCH_FIXME:
++		return "FIXME, to be resolved";
++	default:
++		return "Unknown";
++	}
++}
++
+ static int is_fs_or_ls(enum usb_device_speed speed)
+ {
+ 	return speed == USB_SPEED_FULL || speed == USB_SPEED_LOW;
+@@ -375,7 +400,6 @@ static void update_bus_bw(struct mu3h_sch_bw_info *sch_bw,
+ 					sch_ep->bw_budget_table[j];
+ 		}
+ 	}
+-	sch_ep->allocated = used;
+ }
+ 
+ static int check_fs_bus_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
+@@ -384,19 +408,20 @@ static int check_fs_bus_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
+ 	u32 num_esit, tmp;
+ 	int base;
+ 	int i, j;
++	u8 uframes = DIV_ROUND_UP(sch_ep->maxpkt, FS_PAYLOAD_MAX);
+ 
+ 	num_esit = XHCI_MTK_MAX_ESIT / sch_ep->esit;
++
++	if (sch_ep->ep_type == INT_IN_EP || sch_ep->ep_type == ISOC_IN_EP)
++		offset++;
++
+ 	for (i = 0; i < num_esit; i++) {
+ 		base = offset + i * sch_ep->esit;
+ 
+-		/*
+-		 * Compared with hs bus, no matter what ep type,
+-		 * the hub will always delay one uframe to send data
+-		 */
+-		for (j = 0; j < sch_ep->cs_count; j++) {
++		for (j = 0; j < uframes; j++) {
+ 			tmp = tt->fs_bus_bw[base + j] + sch_ep->bw_cost_per_microframe;
+ 			if (tmp > FS_PAYLOAD_MAX)
+-				return -ERANGE;
++				return -ESCH_BW_OVERFLOW;
+ 		}
+ 	}
+ 
+@@ -406,15 +431,11 @@ static int check_fs_bus_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
+ static int check_sch_tt(struct usb_device *udev,
+ 	struct mu3h_sch_ep_info *sch_ep, u32 offset)
+ {
+-	struct mu3h_sch_tt *tt = sch_ep->sch_tt;
+ 	u32 extra_cs_count;
+-	u32 fs_budget_start;
+ 	u32 start_ss, last_ss;
+ 	u32 start_cs, last_cs;
+-	int i;
+ 
+ 	start_ss = offset % 8;
+-	fs_budget_start = (start_ss + 1) % 8;
+ 
+ 	if (sch_ep->ep_type == ISOC_OUT_EP) {
+ 		last_ss = start_ss + sch_ep->cs_count - 1;
+@@ -424,11 +445,7 @@ static int check_sch_tt(struct usb_device *udev,
+ 		 * must never schedule Start-Split in Y6
+ 		 */
+ 		if (!(start_ss == 7 || last_ss < 6))
+-			return -ERANGE;
+-
+-		for (i = 0; i < sch_ep->cs_count; i++)
+-			if (test_bit(offset + i, tt->ss_bit_map))
+-				return -ERANGE;
++			return -ESCH_SS_Y6;
+ 
+ 	} else {
+ 		u32 cs_count = DIV_ROUND_UP(sch_ep->maxpkt, FS_PAYLOAD_MAX);
+@@ -438,29 +455,24 @@ static int check_sch_tt(struct usb_device *udev,
+ 		 * must never schedule Start-Split in Y6
+ 		 */
+ 		if (start_ss == 6)
+-			return -ERANGE;
++			return -ESCH_SS_Y6;
+ 
+ 		/* one uframe for ss + one uframe for idle */
+ 		start_cs = (start_ss + 2) % 8;
+ 		last_cs = start_cs + cs_count - 1;
+ 
+ 		if (last_cs > 7)
+-			return -ERANGE;
++			return -ESCH_CS_OVERFLOW;
+ 
+ 		if (sch_ep->ep_type == ISOC_IN_EP)
+ 			extra_cs_count = (last_cs == 7) ? 1 : 2;
+ 		else /*  ep_type : INTR IN / INTR OUT */
+-			extra_cs_count = (fs_budget_start == 6) ? 1 : 2;
++			extra_cs_count = 1;
+ 
+ 		cs_count += extra_cs_count;
+ 		if (cs_count > 7)
+ 			cs_count = 7; /* HW limit */
+ 
+-		for (i = 0; i < cs_count + 2; i++) {
+-			if (test_bit(offset + i, tt->ss_bit_map))
+-				return -ERANGE;
+-		}
+-
+ 		sch_ep->cs_count = cs_count;
+ 		/* one for ss, the other for idle */
+ 		sch_ep->num_budget_microframes = cs_count + 2;
+@@ -482,28 +494,24 @@ static void update_sch_tt(struct usb_device *udev,
+ 	struct mu3h_sch_tt *tt = sch_ep->sch_tt;
+ 	u32 base, num_esit;
+ 	int bw_updated;
+-	int bits;
+ 	int i, j;
++	int offset = sch_ep->offset;
++	u8 uframes = DIV_ROUND_UP(sch_ep->maxpkt, FS_PAYLOAD_MAX);
+ 
+ 	num_esit = XHCI_MTK_MAX_ESIT / sch_ep->esit;
+-	bits = (sch_ep->ep_type == ISOC_OUT_EP) ? sch_ep->cs_count : 1;
+ 
+ 	if (used)
+ 		bw_updated = sch_ep->bw_cost_per_microframe;
+ 	else
+ 		bw_updated = -sch_ep->bw_cost_per_microframe;
+ 
+-	for (i = 0; i < num_esit; i++) {
+-		base = sch_ep->offset + i * sch_ep->esit;
++	if (sch_ep->ep_type == INT_IN_EP || sch_ep->ep_type == ISOC_IN_EP)
++		offset++;
+ 
+-		for (j = 0; j < bits; j++) {
+-			if (used)
+-				set_bit(base + j, tt->ss_bit_map);
+-			else
+-				clear_bit(base + j, tt->ss_bit_map);
+-		}
++	for (i = 0; i < num_esit; i++) {
++		base = offset + i * sch_ep->esit;
+ 
+-		for (j = 0; j < sch_ep->cs_count; j++)
++		for (j = 0; j < uframes; j++)
+ 			tt->fs_bus_bw[base + j] += bw_updated;
+ 	}
+ 
+@@ -513,21 +521,48 @@ static void update_sch_tt(struct usb_device *udev,
+ 		list_del(&sch_ep->tt_endpoint);
+ }
+ 
++static int load_ep_bw(struct usb_device *udev, struct mu3h_sch_bw_info *sch_bw,
++		      struct mu3h_sch_ep_info *sch_ep, bool loaded)
++{
++	if (sch_ep->sch_tt)
++		update_sch_tt(udev, sch_ep, loaded);
++
++	/* update bus bandwidth info */
++	update_bus_bw(sch_bw, sch_ep, loaded);
++	sch_ep->allocated = loaded;
++
++	return 0;
++}
++
++static u32 get_esit_boundary(struct mu3h_sch_ep_info *sch_ep)
++{
++	u32 boundary = sch_ep->esit;
++
++	if (sch_ep->sch_tt) { /* LS/FS with TT */
++		/*
++		 * tune for CS, normally esit >= 8 for FS/LS,
++		 * not add one for other types to avoid access array
++		 * out of boundary
++		 */
++		if (sch_ep->ep_type == ISOC_OUT_EP && boundary > 1)
++			boundary--;
++	}
++
++	return boundary;
++}
++
+ static int check_sch_bw(struct usb_device *udev,
+ 	struct mu3h_sch_bw_info *sch_bw, struct mu3h_sch_ep_info *sch_ep)
+ {
+ 	u32 offset;
+-	u32 esit;
+ 	u32 min_bw;
+ 	u32 min_index;
+ 	u32 worst_bw;
+ 	u32 bw_boundary;
++	u32 esit_boundary;
+ 	u32 min_num_budget;
+ 	u32 min_cs_count;
+-	bool tt_offset_ok = false;
+-	int ret;
+-
+-	esit = sch_ep->esit;
++	int ret = 0;
+ 
+ 	/*
+ 	 * Search through all possible schedule microframes.
+@@ -537,16 +572,15 @@ static int check_sch_bw(struct usb_device *udev,
+ 	min_index = 0;
+ 	min_cs_count = sch_ep->cs_count;
+ 	min_num_budget = sch_ep->num_budget_microframes;
+-	for (offset = 0; offset < esit; offset++) {
+-		if (is_fs_or_ls(udev->speed)) {
++	esit_boundary = get_esit_boundary(sch_ep);
++	for (offset = 0; offset < sch_ep->esit; offset++) {
++		if (sch_ep->sch_tt) {
+ 			ret = check_sch_tt(udev, sch_ep, offset);
+ 			if (ret)
+ 				continue;
+-			else
+-				tt_offset_ok = true;
+ 		}
+ 
+-		if ((offset + sch_ep->num_budget_microframes) > sch_ep->esit)
++		if ((offset + sch_ep->num_budget_microframes) > esit_boundary)
+ 			break;
+ 
+ 		worst_bw = get_max_bw(sch_bw, sch_ep, offset);
+@@ -569,35 +603,21 @@ static int check_sch_bw(struct usb_device *udev,
+ 
+ 	/* check bandwidth */
+ 	if (min_bw > bw_boundary)
+-		return -ERANGE;
++		return ret ? ret : -ESCH_BW_OVERFLOW;
+ 
+ 	sch_ep->offset = min_index;
+ 	sch_ep->cs_count = min_cs_count;
+ 	sch_ep->num_budget_microframes = min_num_budget;
+ 
+-	if (is_fs_or_ls(udev->speed)) {
+-		/* all offset for tt is not ok*/
+-		if (!tt_offset_ok)
+-			return -ERANGE;
+-
+-		update_sch_tt(udev, sch_ep, 1);
+-	}
+-
+-	/* update bus bandwidth info */
+-	update_bus_bw(sch_bw, sch_ep, 1);
+-
+-	return 0;
++	return load_ep_bw(udev, sch_bw, sch_ep, true);
+ }
+ 
+ static void destroy_sch_ep(struct usb_device *udev,
+ 	struct mu3h_sch_bw_info *sch_bw, struct mu3h_sch_ep_info *sch_ep)
+ {
+ 	/* only release ep bw check passed by check_sch_bw() */
+-	if (sch_ep->allocated) {
+-		update_bus_bw(sch_bw, sch_ep, 0);
+-		if (sch_ep->sch_tt)
+-			update_sch_tt(udev, sch_ep, 0);
+-	}
++	if (sch_ep->allocated)
++		load_ep_bw(udev, sch_bw, sch_ep, false);
+ 
+ 	if (sch_ep->sch_tt)
+ 		drop_tt(udev);
+@@ -760,7 +780,8 @@ int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+ 
+ 		ret = check_sch_bw(udev, sch_bw, sch_ep);
+ 		if (ret) {
+-			xhci_err(xhci, "Not enough bandwidth!\n");
++			xhci_err(xhci, "Not enough bandwidth! (%s)\n",
++				 sch_error_string(-ret));
+ 			return -ENOSPC;
+ 		}
+ 	}
+diff --git a/drivers/usb/host/xhci-mtk.h b/drivers/usb/host/xhci-mtk.h
+index 2fc0568ba054e..3e2c607b5d64c 100644
+--- a/drivers/usb/host/xhci-mtk.h
++++ b/drivers/usb/host/xhci-mtk.h
+@@ -20,14 +20,12 @@
+ #define XHCI_MTK_MAX_ESIT	64
+ 
+ /**
+- * @ss_bit_map: used to avoid start split microframes overlay
+  * @fs_bus_bw: array to keep track of bandwidth already used for FS
+  * @ep_list: Endpoints using this TT
+  * @usb_tt: usb TT related
+  * @tt_port: TT port number
+  */
+ struct mu3h_sch_tt {
+-	DECLARE_BITMAP(ss_bit_map, XHCI_MTK_MAX_ESIT);
+ 	u32 fs_bus_bw[XHCI_MTK_MAX_ESIT];
+ 	struct list_head ep_list;
+ 	struct usb_tt *usb_tt;
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 211e03a204072..eea3dd18a044c 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -256,6 +256,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EM060K			0x030b
+ #define QUECTEL_PRODUCT_EM12			0x0512
+ #define QUECTEL_PRODUCT_RM500Q			0x0800
++#define QUECTEL_PRODUCT_RM520N			0x0801
+ #define QUECTEL_PRODUCT_EC200S_CN		0x6002
+ #define QUECTEL_PRODUCT_EC200T			0x6026
+ #define QUECTEL_PRODUCT_RM500K			0x7001
+@@ -1138,6 +1139,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0xff, 0xff),
+ 	  .driver_info = NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0, 0) },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, 0x0203, 0xff), /* BG95-M3 */
++	  .driver_info = ZLP },
+ 	{ USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_BG96),
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
+@@ -1159,6 +1162,9 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10),
+ 	  .driver_info = ZLP },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },
+diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c
+index acdef6fbb85e0..80daa70e288b0 100644
+--- a/drivers/usb/typec/mux/intel_pmc_mux.c
++++ b/drivers/usb/typec/mux/intel_pmc_mux.c
+@@ -83,8 +83,6 @@ enum {
+ /*
+  * Input Output Manager (IOM) PORT STATUS
+  */
+-#define IOM_PORT_STATUS_OFFSET				0x560
+-
+ #define IOM_PORT_STATUS_ACTIVITY_TYPE_MASK		GENMASK(9, 6)
+ #define IOM_PORT_STATUS_ACTIVITY_TYPE_SHIFT		6
+ #define IOM_PORT_STATUS_ACTIVITY_TYPE_USB		0x03
+@@ -144,6 +142,7 @@ struct pmc_usb {
+ 	struct pmc_usb_port *port;
+ 	struct acpi_device *iom_adev;
+ 	void __iomem *iom_base;
++	u32 iom_port_status_offset;
+ };
+ 
+ static void update_port_status(struct pmc_usb_port *port)
+@@ -153,7 +152,8 @@ static void update_port_status(struct pmc_usb_port *port)
+ 	/* SoC expects the USB Type-C port numbers to start with 0 */
+ 	port_num = port->usb3_port - 1;
+ 
+-	port->iom_status = readl(port->pmc->iom_base + IOM_PORT_STATUS_OFFSET +
++	port->iom_status = readl(port->pmc->iom_base +
++				 port->pmc->iom_port_status_offset +
+ 				 port_num * sizeof(u32));
+ }
+ 
+@@ -541,19 +541,42 @@ err_unregister_switch:
+ 
+ static int is_memory(struct acpi_resource *res, void *data)
+ {
+-	struct resource r;
++	struct resource_win win = {};
++	struct resource *r = &win.res;
+ 
+-	return !acpi_dev_resource_memory(res, &r);
++	return !(acpi_dev_resource_memory(res, r) ||
++		 acpi_dev_resource_address_space(res, &win));
+ }
+ 
++/* IOM ACPI IDs and IOM_PORT_STATUS_OFFSET */
++static const struct acpi_device_id iom_acpi_ids[] = {
++	/* TigerLake */
++	{ "INTC1072", 0x560, },
++
++	/* AlderLake */
++	{ "INTC1079", 0x160, },
++
++	/* Meteor Lake */
++	{ "INTC107A", 0x160, },
++	{}
++};
++
+ static int pmc_usb_probe_iom(struct pmc_usb *pmc)
+ {
+ 	struct list_head resource_list;
+ 	struct resource_entry *rentry;
+-	struct acpi_device *adev;
++	static const struct acpi_device_id *dev_id;
++	struct acpi_device *adev = NULL;
+ 	int ret;
+ 
+-	adev = acpi_dev_get_first_match_dev("INTC1072", NULL, -1);
++	for (dev_id = &iom_acpi_ids[0]; dev_id->id[0]; dev_id++) {
++		if (acpi_dev_present(dev_id->id, NULL, -1)) {
++			pmc->iom_port_status_offset = (u32)dev_id->driver_data;
++			adev = acpi_dev_get_first_match_dev(dev_id->id, NULL, -1);
++			break;
++		}
++	}
++
+ 	if (!adev)
+ 		return -ENODEV;
+ 
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index fbd438e9b9b03..ce50ca9a320c7 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -98,6 +98,12 @@ struct vfio_dma {
+ 	unsigned long		*bitmap;
+ };
+ 
++struct vfio_batch {
++	struct page		**pages;	/* for pin_user_pages_remote */
++	struct page		*fallback_page; /* if pages alloc fails */
++	int			capacity;	/* length of pages array */
++};
++
+ struct vfio_group {
+ 	struct iommu_group	*iommu_group;
+ 	struct list_head	next;
+@@ -428,6 +434,31 @@ static int put_pfn(unsigned long pfn, int prot)
+ 	return 0;
+ }
+ 
++#define VFIO_BATCH_MAX_CAPACITY (PAGE_SIZE / sizeof(struct page *))
++
++static void vfio_batch_init(struct vfio_batch *batch)
++{
++	if (unlikely(disable_hugepages))
++		goto fallback;
++
++	batch->pages = (struct page **) __get_free_page(GFP_KERNEL);
++	if (!batch->pages)
++		goto fallback;
++
++	batch->capacity = VFIO_BATCH_MAX_CAPACITY;
++	return;
++
++fallback:
++	batch->pages = &batch->fallback_page;
++	batch->capacity = 1;
++}
++
++static void vfio_batch_fini(struct vfio_batch *batch)
++{
++	if (batch->capacity == VFIO_BATCH_MAX_CAPACITY)
++		free_page((unsigned long)batch->pages);
++}
++
+ static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
+ 			    unsigned long vaddr, unsigned long *pfn,
+ 			    bool write_fault)
+@@ -464,10 +495,14 @@ static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
+ 	return ret;
+ }
+ 
+-static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
+-			 int prot, unsigned long *pfn)
++/*
++ * Returns the positive number of pfns successfully obtained or a negative
++ * error code.
++ */
++static int vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
++			  long npages, int prot, unsigned long *pfn,
++			  struct page **pages)
+ {
+-	struct page *page[1];
+ 	struct vm_area_struct *vma;
+ 	unsigned int flags = 0;
+ 	int ret;
+@@ -476,11 +511,22 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
+ 		flags |= FOLL_WRITE;
+ 
+ 	mmap_read_lock(mm);
+-	ret = pin_user_pages_remote(mm, vaddr, 1, flags | FOLL_LONGTERM,
+-				    page, NULL, NULL);
+-	if (ret == 1) {
+-		*pfn = page_to_pfn(page[0]);
+-		ret = 0;
++	ret = pin_user_pages_remote(mm, vaddr, npages, flags | FOLL_LONGTERM,
++				    pages, NULL, NULL);
++	if (ret > 0) {
++		int i;
++
++		/*
++		 * The zero page is always resident, we don't need to pin it
++		 * and it falls into our invalid/reserved test so we don't
++		 * unpin in put_pfn().  Unpin all zero pages in the batch here.
++		 */
++		for (i = 0 ; i < ret; i++) {
++			if (unlikely(is_zero_pfn(page_to_pfn(pages[i]))))
++				unpin_user_page(pages[i]);
++		}
++
++		*pfn = page_to_pfn(pages[0]);
+ 		goto done;
+ 	}
+ 
+@@ -494,8 +540,12 @@ retry:
+ 		if (ret == -EAGAIN)
+ 			goto retry;
+ 
+-		if (!ret && !is_invalid_reserved_pfn(*pfn))
+-			ret = -EFAULT;
++		if (!ret) {
++			if (is_invalid_reserved_pfn(*pfn))
++				ret = 1;
++			else
++				ret = -EFAULT;
++		}
+ 	}
+ done:
+ 	mmap_read_unlock(mm);
+@@ -509,7 +559,7 @@ done:
+  */
+ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ 				  long npage, unsigned long *pfn_base,
+-				  unsigned long limit)
++				  unsigned long limit, struct vfio_batch *batch)
+ {
+ 	unsigned long pfn = 0;
+ 	long ret, pinned = 0, lock_acct = 0;
+@@ -520,8 +570,9 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ 	if (!current->mm)
+ 		return -ENODEV;
+ 
+-	ret = vaddr_get_pfn(current->mm, vaddr, dma->prot, pfn_base);
+-	if (ret)
++	ret = vaddr_get_pfns(current->mm, vaddr, 1, dma->prot, pfn_base,
++			     batch->pages);
++	if (ret < 0)
+ 		return ret;
+ 
+ 	pinned++;
+@@ -547,8 +598,9 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ 	/* Lock all the consecutive pages from pfn_base */
+ 	for (vaddr += PAGE_SIZE, iova += PAGE_SIZE; pinned < npage;
+ 	     pinned++, vaddr += PAGE_SIZE, iova += PAGE_SIZE) {
+-		ret = vaddr_get_pfn(current->mm, vaddr, dma->prot, &pfn);
+-		if (ret)
++		ret = vaddr_get_pfns(current->mm, vaddr, 1, dma->prot, &pfn,
++				     batch->pages);
++		if (ret < 0)
+ 			break;
+ 
+ 		if (pfn != *pfn_base + pinned ||
+@@ -574,7 +626,7 @@ out:
+ 	ret = vfio_lock_acct(dma, lock_acct, false);
+ 
+ unpin_out:
+-	if (ret) {
++	if (ret < 0) {
+ 		if (!rsvd) {
+ 			for (pfn = *pfn_base ; pinned ; pfn++, pinned--)
+ 				put_pfn(pfn, dma->prot);
+@@ -610,6 +662,7 @@ static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
+ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
+ 				  unsigned long *pfn_base, bool do_accounting)
+ {
++	struct page *pages[1];
+ 	struct mm_struct *mm;
+ 	int ret;
+ 
+@@ -617,8 +670,13 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
+ 	if (!mm)
+ 		return -ENODEV;
+ 
+-	ret = vaddr_get_pfn(mm, vaddr, dma->prot, pfn_base);
+-	if (!ret && do_accounting && !is_invalid_reserved_pfn(*pfn_base)) {
++	ret = vaddr_get_pfns(mm, vaddr, 1, dma->prot, pfn_base, pages);
++	if (ret != 1)
++		goto out;
++
++	ret = 0;
++
++	if (do_accounting && !is_invalid_reserved_pfn(*pfn_base)) {
+ 		ret = vfio_lock_acct(dma, 1, true);
+ 		if (ret) {
+ 			put_pfn(*pfn_base, dma->prot);
+@@ -630,6 +688,7 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
+ 		}
+ 	}
+ 
++out:
+ 	mmput(mm);
+ 	return ret;
+ }
+@@ -1263,15 +1322,19 @@ static int vfio_pin_map_dma(struct vfio_iommu *iommu, struct vfio_dma *dma,
+ {
+ 	dma_addr_t iova = dma->iova;
+ 	unsigned long vaddr = dma->vaddr;
++	struct vfio_batch batch;
+ 	size_t size = map_size;
+ 	long npage;
+ 	unsigned long pfn, limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+ 	int ret = 0;
+ 
++	vfio_batch_init(&batch);
++
+ 	while (size) {
+ 		/* Pin a contiguous chunk of memory */
+ 		npage = vfio_pin_pages_remote(dma, vaddr + dma->size,
+-					      size >> PAGE_SHIFT, &pfn, limit);
++					      size >> PAGE_SHIFT, &pfn, limit,
++					      &batch);
+ 		if (npage <= 0) {
+ 			WARN_ON(!npage);
+ 			ret = (int)npage;
+@@ -1291,6 +1354,7 @@ static int vfio_pin_map_dma(struct vfio_iommu *iommu, struct vfio_dma *dma,
+ 		dma->size += npage << PAGE_SHIFT;
+ 	}
+ 
++	vfio_batch_fini(&batch);
+ 	dma->iommu_mapped = true;
+ 
+ 	if (ret)
+@@ -1449,6 +1513,7 @@ static int vfio_bus_type(struct device *dev, void *data)
+ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ 			     struct vfio_domain *domain)
+ {
++	struct vfio_batch batch;
+ 	struct vfio_domain *d = NULL;
+ 	struct rb_node *n;
+ 	unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+@@ -1459,6 +1524,8 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ 		d = list_first_entry(&iommu->domain_list,
+ 				     struct vfio_domain, next);
+ 
++	vfio_batch_init(&batch);
++
+ 	n = rb_first(&iommu->dma_list);
+ 
+ 	for (; n; n = rb_next(n)) {
+@@ -1506,7 +1573,8 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ 
+ 				npage = vfio_pin_pages_remote(dma, vaddr,
+ 							      n >> PAGE_SHIFT,
+-							      &pfn, limit);
++							      &pfn, limit,
++							      &batch);
+ 				if (npage <= 0) {
+ 					WARN_ON(!npage);
+ 					ret = (int)npage;
+@@ -1539,6 +1607,7 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ 		dma->iommu_mapped = true;
+ 	}
+ 
++	vfio_batch_fini(&batch);
+ 	return 0;
+ 
+ unwind:
+@@ -1579,6 +1648,7 @@ unwind:
+ 		}
+ 	}
+ 
++	vfio_batch_fini(&batch);
+ 	return ret;
+ }
+ 
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index 24c6f36177bac..a6ca4eda9a5ae 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -230,6 +230,8 @@ extern unsigned int setup_special_user_owner_ACE(struct cifs_ace *pace);
+ extern void dequeue_mid(struct mid_q_entry *mid, bool malformed);
+ extern int cifs_read_from_socket(struct TCP_Server_Info *server, char *buf,
+ 			         unsigned int to_read);
++extern ssize_t cifs_discard_from_socket(struct TCP_Server_Info *server,
++					size_t to_read);
+ extern int cifs_read_page_from_socket(struct TCP_Server_Info *server,
+ 					struct page *page,
+ 					unsigned int page_offset,
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 0496934feecb7..c279527aae92d 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -1451,9 +1451,9 @@ cifs_discard_remaining_data(struct TCP_Server_Info *server)
+ 	while (remaining > 0) {
+ 		int length;
+ 
+-		length = cifs_read_from_socket(server, server->bigbuf,
+-				min_t(unsigned int, remaining,
+-				    CIFSMaxBufSize + MAX_HEADER_SIZE(server)));
++		length = cifs_discard_from_socket(server,
++				min_t(size_t, remaining,
++				      CIFSMaxBufSize + MAX_HEADER_SIZE(server)));
+ 		if (length < 0)
+ 			return length;
+ 		server->total_read += length;
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 7f5d173760cfc..d1c3086d7ddd0 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -695,9 +695,6 @@ cifs_readv_from_socket(struct TCP_Server_Info *server, struct msghdr *smb_msg)
+ 	int length = 0;
+ 	int total_read;
+ 
+-	smb_msg->msg_control = NULL;
+-	smb_msg->msg_controllen = 0;
+-
+ 	for (total_read = 0; msg_data_left(smb_msg); total_read += length) {
+ 		try_to_freeze();
+ 
+@@ -748,18 +745,33 @@ int
+ cifs_read_from_socket(struct TCP_Server_Info *server, char *buf,
+ 		      unsigned int to_read)
+ {
+-	struct msghdr smb_msg;
++	struct msghdr smb_msg = {};
+ 	struct kvec iov = {.iov_base = buf, .iov_len = to_read};
+ 	iov_iter_kvec(&smb_msg.msg_iter, READ, &iov, 1, to_read);
+ 
+ 	return cifs_readv_from_socket(server, &smb_msg);
+ }
+ 
++ssize_t
++cifs_discard_from_socket(struct TCP_Server_Info *server, size_t to_read)
++{
++	struct msghdr smb_msg = {};
++
++	/*
++	 *  iov_iter_discard already sets smb_msg.type and count and iov_offset
++	 *  and cifs_readv_from_socket sets msg_control and msg_controllen
++	 *  so little to initialize in struct msghdr
++	 */
++	iov_iter_discard(&smb_msg.msg_iter, READ, to_read);
++
++	return cifs_readv_from_socket(server, &smb_msg);
++}
++
+ int
+ cifs_read_page_from_socket(struct TCP_Server_Info *server, struct page *page,
+ 	unsigned int page_offset, unsigned int to_read)
+ {
+-	struct msghdr smb_msg;
++	struct msghdr smb_msg = {};
+ 	struct bio_vec bv = {
+ 		.bv_page = page, .bv_len = to_read, .bv_offset = page_offset};
+ 	iov_iter_bvec(&smb_msg.msg_iter, READ, &bv, 1, to_read);
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 383ae8744c337..b137006f0fd25 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -209,10 +209,6 @@ smb_send_kvec(struct TCP_Server_Info *server, struct msghdr *smb_msg,
+ 
+ 	*sent = 0;
+ 
+-	smb_msg->msg_name = NULL;
+-	smb_msg->msg_namelen = 0;
+-	smb_msg->msg_control = NULL;
+-	smb_msg->msg_controllen = 0;
+ 	if (server->noblocksnd)
+ 		smb_msg->msg_flags = MSG_DONTWAIT + MSG_NOSIGNAL;
+ 	else
+@@ -324,7 +320,7 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
+ 	sigset_t mask, oldmask;
+ 	size_t total_len = 0, sent, size;
+ 	struct socket *ssocket = server->ssocket;
+-	struct msghdr smb_msg;
++	struct msghdr smb_msg = {};
+ 	__be32 rfc1002_marker;
+ 
+ 	if (cifs_rdma_enabled(server)) {
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 0f49bf547b848..30add5a3df3df 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -459,6 +459,10 @@ static int __ext4_ext_check(const char *function, unsigned int line,
+ 		error_msg = "invalid eh_entries";
+ 		goto corrupted;
+ 	}
++	if (unlikely((eh->eh_entries == 0) && (depth > 0))) {
++		error_msg = "eh_entries is 0 but eh_depth is > 0";
++		goto corrupted;
++	}
+ 	if (!ext4_valid_extent_entries(inode, eh, lblk, &pblk, depth)) {
+ 		error_msg = "invalid extent entries";
+ 		goto corrupted;
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index 875af329c43ec..c53c9b1322049 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -508,7 +508,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
+ 		goto fallback;
+ 	}
+ 
+-	max_dirs = ndirs / ngroups + inodes_per_group / 16;
++	max_dirs = ndirs / ngroups + inodes_per_group*flex_size / 16;
+ 	min_inodes = avefreei - inodes_per_group*flex_size / 4;
+ 	if (min_inodes < 1)
+ 		min_inodes = 1;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index c32d0895c3a3d..d5ca02a7766e0 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -4959,6 +4959,7 @@ ext4_fsblk_t ext4_mb_new_blocks(handle_t *handle,
+ 	ext4_fsblk_t block = 0;
+ 	unsigned int inquota = 0;
+ 	unsigned int reserv_clstrs = 0;
++	int retries = 0;
+ 	u64 seq;
+ 
+ 	might_sleep();
+@@ -5061,7 +5062,8 @@ repeat:
+ 			ar->len = ac->ac_b_ex.fe_len;
+ 		}
+ 	} else {
+-		if (ext4_mb_discard_preallocations_should_retry(sb, ac, &seq))
++		if (++retries < 3 &&
++		    ext4_mb_discard_preallocations_should_retry(sb, ac, &seq))
+ 			goto repeat;
+ 		/*
+ 		 * If block allocation fails then the pa allocated above
+diff --git a/fs/xfs/libxfs/xfs_inode_buf.c b/fs/xfs/libxfs/xfs_inode_buf.c
+index c667c63f2cb00..fa8aefe6b7ec3 100644
+--- a/fs/xfs/libxfs/xfs_inode_buf.c
++++ b/fs/xfs/libxfs/xfs_inode_buf.c
+@@ -358,19 +358,36 @@ xfs_dinode_verify_fork(
+ 	int			whichfork)
+ {
+ 	uint32_t		di_nextents = XFS_DFORK_NEXTENTS(dip, whichfork);
++	mode_t			mode = be16_to_cpu(dip->di_mode);
++	uint32_t		fork_size = XFS_DFORK_SIZE(dip, mp, whichfork);
++	uint32_t		fork_format = XFS_DFORK_FORMAT(dip, whichfork);
+ 
+-	switch (XFS_DFORK_FORMAT(dip, whichfork)) {
++	/*
++	 * For fork types that can contain local data, check that the fork
++	 * format matches the size of local data contained within the fork.
++	 *
++	 * For all types, check that when the size says the should be in extent
++	 * or btree format, the inode isn't claiming it is in local format.
++	 */
++	if (whichfork == XFS_DATA_FORK) {
++		if (S_ISDIR(mode) || S_ISLNK(mode)) {
++			if (be64_to_cpu(dip->di_size) <= fork_size &&
++			    fork_format != XFS_DINODE_FMT_LOCAL)
++				return __this_address;
++		}
++
++		if (be64_to_cpu(dip->di_size) > fork_size &&
++		    fork_format == XFS_DINODE_FMT_LOCAL)
++			return __this_address;
++	}
++
++	switch (fork_format) {
+ 	case XFS_DINODE_FMT_LOCAL:
+ 		/*
+-		 * no local regular files yet
++		 * No local regular files yet.
+ 		 */
+-		if (whichfork == XFS_DATA_FORK) {
+-			if (S_ISREG(be16_to_cpu(dip->di_mode)))
+-				return __this_address;
+-			if (be64_to_cpu(dip->di_size) >
+-					XFS_DFORK_SIZE(dip, mp, whichfork))
+-				return __this_address;
+-		}
++		if (S_ISREG(mode) && whichfork == XFS_DATA_FORK)
++			return __this_address;
+ 		if (di_nextents)
+ 			return __this_address;
+ 		break;
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index 1f61e085676b3..19008838df769 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -802,6 +802,7 @@ xfs_ialloc(
+ 	xfs_buf_t	**ialloc_context,
+ 	xfs_inode_t	**ipp)
+ {
++	struct inode	*dir = pip ? VFS_I(pip) : NULL;
+ 	struct xfs_mount *mp = tp->t_mountp;
+ 	xfs_ino_t	ino;
+ 	xfs_inode_t	*ip;
+@@ -847,18 +848,17 @@ xfs_ialloc(
+ 		return error;
+ 	ASSERT(ip != NULL);
+ 	inode = VFS_I(ip);
+-	inode->i_mode = mode;
+ 	set_nlink(inode, nlink);
+-	inode->i_uid = current_fsuid();
+ 	inode->i_rdev = rdev;
+ 	ip->i_d.di_projid = prid;
+ 
+-	if (pip && XFS_INHERIT_GID(pip)) {
+-		inode->i_gid = VFS_I(pip)->i_gid;
+-		if ((VFS_I(pip)->i_mode & S_ISGID) && S_ISDIR(mode))
+-			inode->i_mode |= S_ISGID;
++	if (dir && !(dir->i_mode & S_ISGID) &&
++	    (mp->m_flags & XFS_MOUNT_GRPID)) {
++		inode->i_uid = current_fsuid();
++		inode->i_gid = dir->i_gid;
++		inode->i_mode = mode;
+ 	} else {
+-		inode->i_gid = current_fsgid();
++		inode_init_owner(inode, dir, mode);
+ 	}
+ 
+ 	/*
+@@ -2669,14 +2669,13 @@ xfs_ifree_cluster(
+ }
+ 
+ /*
+- * This is called to return an inode to the inode free list.
+- * The inode should already be truncated to 0 length and have
+- * no pages associated with it.  This routine also assumes that
+- * the inode is already a part of the transaction.
++ * This is called to return an inode to the inode free list.  The inode should
++ * already be truncated to 0 length and have no pages associated with it.  This
++ * routine also assumes that the inode is already a part of the transaction.
+  *
+- * The on-disk copy of the inode will have been added to the list
+- * of unlinked inodes in the AGI. We need to remove the inode from
+- * that list atomically with respect to freeing it here.
++ * The on-disk copy of the inode will have been added to the list of unlinked
++ * inodes in the AGI. We need to remove the inode from that list atomically with
++ * respect to freeing it here.
+  */
+ int
+ xfs_ifree(
+@@ -2694,13 +2693,16 @@ xfs_ifree(
+ 	ASSERT(ip->i_d.di_nblocks == 0);
+ 
+ 	/*
+-	 * Pull the on-disk inode from the AGI unlinked list.
++	 * Free the inode first so that we guarantee that the AGI lock is going
++	 * to be taken before we remove the inode from the unlinked list. This
++	 * makes the AGI lock -> unlinked list modification order the same as
++	 * used in O_TMPFILE creation.
+ 	 */
+-	error = xfs_iunlink_remove(tp, ip);
++	error = xfs_difree(tp, ip->i_ino, &xic);
+ 	if (error)
+ 		return error;
+ 
+-	error = xfs_difree(tp, ip->i_ino, &xic);
++	error = xfs_iunlink_remove(tp, ip);
+ 	if (error)
+ 		return error;
+ 
+diff --git a/include/linux/inetdevice.h b/include/linux/inetdevice.h
+index b68fca08be27c..3088d94684c1c 100644
+--- a/include/linux/inetdevice.h
++++ b/include/linux/inetdevice.h
+@@ -178,6 +178,15 @@ static inline struct net_device *ip_dev_find(struct net *net, __be32 addr)
+ 
+ int inet_addr_onlink(struct in_device *in_dev, __be32 a, __be32 b);
+ int devinet_ioctl(struct net *net, unsigned int cmd, struct ifreq *);
++#ifdef CONFIG_INET
++int inet_gifconf(struct net_device *dev, char __user *buf, int len, int size);
++#else
++static inline int inet_gifconf(struct net_device *dev, char __user *buf,
++			       int len, int size)
++{
++	return 0;
++}
++#endif
+ void devinet_init(void);
+ struct in_device *inetdev_by_index(struct net *, int);
+ __be32 inet_select_addr(const struct net_device *dev, __be32 dst, int scope);
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 94871f12e5362..896e563e2c181 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -1489,6 +1489,8 @@ static inline long kvm_arch_vcpu_async_ioctl(struct file *filp,
+ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+ 					    unsigned long start, unsigned long end);
+ 
++void kvm_arch_guest_memory_reclaimed(struct kvm *kvm);
++
+ #ifdef CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE
+ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu);
+ #else
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 6564fb4ac49e1..ef75567efd27a 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -3201,14 +3201,6 @@ static inline bool dev_has_header(const struct net_device *dev)
+ 	return dev->header_ops && dev->header_ops->create;
+ }
+ 
+-typedef int gifconf_func_t(struct net_device * dev, char __user * bufptr,
+-			   int len, int size);
+-int register_gifconf(unsigned int family, gifconf_func_t *gifconf);
+-static inline int unregister_gifconf(unsigned int family)
+-{
+-	return register_gifconf(family, NULL);
+-}
+-
+ #ifdef CONFIG_NET_FLOW_LIMIT
+ #define FLOW_LIMIT_HISTORY	(1 << 7)  /* must be ^2 and !overflow buckets */
+ struct sd_flow_limit {
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index 9c1292ea47fdc..59a8caf3230a4 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -300,6 +300,23 @@ struct uart_state {
+ /* number of characters left in xmit buffer before we ask for more */
+ #define WAKEUP_CHARS		256
+ 
++/**
++ * uart_xmit_advance - Advance xmit buffer and account Tx'ed chars
++ * @up: uart_port structure describing the port
++ * @chars: number of characters sent
++ *
++ * This function advances the tail of circular xmit buffer by the number of
++ * @chars transmitted and handles accounting of transmitted bytes (into
++ * @up's icount.tx).
++ */
++static inline void uart_xmit_advance(struct uart_port *up, unsigned int chars)
++{
++	struct circ_buf *xmit = &up->state->xmit;
++
++	xmit->tail = (xmit->tail + chars) & (UART_XMIT_SIZE - 1);
++	up->icount.tx += chars;
++}
++
+ struct module;
+ struct tty_driver;
+ 
+diff --git a/include/net/bond_3ad.h b/include/net/bond_3ad.h
+index 1a28f299a4c61..895eae18271fa 100644
+--- a/include/net/bond_3ad.h
++++ b/include/net/bond_3ad.h
+@@ -15,8 +15,6 @@
+ #define PKT_TYPE_LACPDU         cpu_to_be16(ETH_P_SLOW)
+ #define AD_TIMER_INTERVAL       100 /*msec*/
+ 
+-#define MULTICAST_LACPDU_ADDR    {0x01, 0x80, 0xC2, 0x00, 0x00, 0x02}
+-
+ #define AD_LACP_SLOW 0
+ #define AD_LACP_FAST 1
+ 
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index 67d676059aa0d..d9cc3f5602fb2 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -763,6 +763,9 @@ extern struct rtnl_link_ops bond_link_ops;
+ /* exported from bond_sysfs_slave.c */
+ extern const struct sysfs_ops slave_sysfs_ops;
+ 
++/* exported from bond_3ad.c */
++extern const u8 lacpdu_mcast_addr[];
++
+ static inline netdev_tx_t bond_tx_drop(struct net_device *dev, struct sk_buff *skb)
+ {
+ 	atomic_long_inc(&dev->tx_dropped);
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index fdf5fa4bf4448..0cc2a62e88f9e 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3047,10 +3047,8 @@ static bool __flush_work(struct work_struct *work, bool from_cancel)
+ 	if (WARN_ON(!work->func))
+ 		return false;
+ 
+-	if (!from_cancel) {
+-		lock_map_acquire(&work->lockdep_map);
+-		lock_map_release(&work->lockdep_map);
+-	}
++	lock_map_acquire(&work->lockdep_map);
++	lock_map_release(&work->lockdep_map);
+ 
+ 	if (start_flush_work(work, &barr, from_cancel)) {
+ 		wait_for_completion(&barr.done);
+diff --git a/mm/slub.c b/mm/slub.c
+index b395ef0645444..b0f637519ac99 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -5559,7 +5559,8 @@ static char *create_unique_id(struct kmem_cache *s)
+ 	char *name = kmalloc(ID_STR_LENGTH, GFP_KERNEL);
+ 	char *p = name;
+ 
+-	BUG_ON(!name);
++	if (!name)
++		return ERR_PTR(-ENOMEM);
+ 
+ 	*p++ = ':';
+ 	/*
+@@ -5617,6 +5618,8 @@ static int sysfs_slab_add(struct kmem_cache *s)
+ 		 * for the symlinks.
+ 		 */
+ 		name = create_unique_id(s);
++		if (IS_ERR(name))
++			return PTR_ERR(name);
+ 	}
+ 
+ 	s->kobj.kset = kset;
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index 310740cc684ad..06b80b5843819 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -999,8 +999,10 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl,
+ 		goto free_iterate;
+ 	}
+ 
+-	if (repl->valid_hooks != t->valid_hooks)
++	if (repl->valid_hooks != t->valid_hooks) {
++		ret = -EINVAL;
+ 		goto free_unlock;
++	}
+ 
+ 	if (repl->num_counters && repl->num_counters != t->private->nentries) {
+ 		ret = -EINVAL;
+diff --git a/net/core/dev_ioctl.c b/net/core/dev_ioctl.c
+index 54fb18b4f55e4..993420da29307 100644
+--- a/net/core/dev_ioctl.c
++++ b/net/core/dev_ioctl.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/kmod.h>
+ #include <linux/netdevice.h>
++#include <linux/inetdevice.h>
+ #include <linux/etherdevice.h>
+ #include <linux/rtnetlink.h>
+ #include <linux/net_tstamp.h>
+@@ -25,26 +26,6 @@ static int dev_ifname(struct net *net, struct ifreq *ifr)
+ 	return netdev_get_name(net, ifr->ifr_name, ifr->ifr_ifindex);
+ }
+ 
+-static gifconf_func_t *gifconf_list[NPROTO];
+-
+-/**
+- *	register_gifconf	-	register a SIOCGIF handler
+- *	@family: Address family
+- *	@gifconf: Function handler
+- *
+- *	Register protocol dependent address dumping routines. The handler
+- *	that is passed must not be freed or reused until it has been replaced
+- *	by another handler.
+- */
+-int register_gifconf(unsigned int family, gifconf_func_t *gifconf)
+-{
+-	if (family >= NPROTO)
+-		return -EINVAL;
+-	gifconf_list[family] = gifconf;
+-	return 0;
+-}
+-EXPORT_SYMBOL(register_gifconf);
+-
+ /*
+  *	Perform a SIOCGIFCONF call. This structure will change
+  *	size eventually, and there is nothing I can do about it.
+@@ -57,7 +38,6 @@ int dev_ifconf(struct net *net, struct ifconf *ifc, int size)
+ 	char __user *pos;
+ 	int len;
+ 	int total;
+-	int i;
+ 
+ 	/*
+ 	 *	Fetch the caller's info block.
+@@ -72,19 +52,15 @@ int dev_ifconf(struct net *net, struct ifconf *ifc, int size)
+ 
+ 	total = 0;
+ 	for_each_netdev(net, dev) {
+-		for (i = 0; i < NPROTO; i++) {
+-			if (gifconf_list[i]) {
+-				int done;
+-				if (!pos)
+-					done = gifconf_list[i](dev, NULL, 0, size);
+-				else
+-					done = gifconf_list[i](dev, pos + total,
+-							       len - total, size);
+-				if (done < 0)
+-					return -EFAULT;
+-				total += done;
+-			}
+-		}
++		int done;
++		if (!pos)
++			done = inet_gifconf(dev, NULL, 0, size);
++		else
++			done = inet_gifconf(dev, pos + total,
++					    len - total, size);
++		if (done < 0)
++			return -EFAULT;
++		total += done;
+ 	}
+ 
+ 	/*
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index f9baa9b1c77f7..ed120828c7e21 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1485,7 +1485,7 @@ __be32 flow_get_u32_dst(const struct flow_keys *flow)
+ }
+ EXPORT_SYMBOL(flow_get_u32_dst);
+ 
+-/* Sort the source and destination IP (and the ports if the IP are the same),
++/* Sort the source and destination IP and the ports,
+  * to have consistent hash within the two directions
+  */
+ static inline void __flow_hash_consistentify(struct flow_keys *keys)
+@@ -1494,13 +1494,12 @@ static inline void __flow_hash_consistentify(struct flow_keys *keys)
+ 
+ 	switch (keys->control.addr_type) {
+ 	case FLOW_DISSECTOR_KEY_IPV4_ADDRS:
+-		addr_diff = (__force u32)keys->addrs.v4addrs.dst -
+-			    (__force u32)keys->addrs.v4addrs.src;
+-		if ((addr_diff < 0) ||
+-		    (addr_diff == 0 &&
+-		     ((__force u16)keys->ports.dst <
+-		      (__force u16)keys->ports.src))) {
++		if ((__force u32)keys->addrs.v4addrs.dst <
++		    (__force u32)keys->addrs.v4addrs.src)
+ 			swap(keys->addrs.v4addrs.src, keys->addrs.v4addrs.dst);
++
++		if ((__force u16)keys->ports.dst <
++		    (__force u16)keys->ports.src) {
+ 			swap(keys->ports.src, keys->ports.dst);
+ 		}
+ 		break;
+@@ -1508,13 +1507,13 @@ static inline void __flow_hash_consistentify(struct flow_keys *keys)
+ 		addr_diff = memcmp(&keys->addrs.v6addrs.dst,
+ 				   &keys->addrs.v6addrs.src,
+ 				   sizeof(keys->addrs.v6addrs.dst));
+-		if ((addr_diff < 0) ||
+-		    (addr_diff == 0 &&
+-		     ((__force u16)keys->ports.dst <
+-		      (__force u16)keys->ports.src))) {
++		if (addr_diff < 0) {
+ 			for (i = 0; i < 4; i++)
+ 				swap(keys->addrs.v6addrs.src.s6_addr32[i],
+ 				     keys->addrs.v6addrs.dst.s6_addr32[i]);
++		}
++		if ((__force u16)keys->ports.dst <
++		    (__force u16)keys->ports.src) {
+ 			swap(keys->ports.src, keys->ports.dst);
+ 		}
+ 		break;
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index 8f17538755507..88b6120878cd9 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -1244,7 +1244,7 @@ out:
+ 	return ret;
+ }
+ 
+-static int inet_gifconf(struct net_device *dev, char __user *buf, int len, int size)
++int inet_gifconf(struct net_device *dev, char __user *buf, int len, int size)
+ {
+ 	struct in_device *in_dev = __in_dev_get_rtnl(dev);
+ 	const struct in_ifaddr *ifa;
+@@ -2766,8 +2766,6 @@ void __init devinet_init(void)
+ 		INIT_HLIST_HEAD(&inet_addr_lst[i]);
+ 
+ 	register_pernet_subsys(&devinet_ops);
+-
+-	register_gifconf(PF_INET, inet_gifconf);
+ 	register_netdevice_notifier(&ip_netdev_notifier);
+ 
+ 	queue_delayed_work(system_power_efficient_wq, &check_lifetime_work, 0);
+diff --git a/net/netfilter/nf_conntrack_irc.c b/net/netfilter/nf_conntrack_irc.c
+index 26245419ef4a9..65b5b05fe38d3 100644
+--- a/net/netfilter/nf_conntrack_irc.c
++++ b/net/netfilter/nf_conntrack_irc.c
+@@ -148,15 +148,37 @@ static int help(struct sk_buff *skb, unsigned int protoff,
+ 	data = ib_ptr;
+ 	data_limit = ib_ptr + skb->len - dataoff;
+ 
+-	/* strlen("\1DCC SENT t AAAAAAAA P\1\n")=24
+-	 * 5+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=14 */
+-	while (data < data_limit - (19 + MINMATCHLEN)) {
+-		if (memcmp(data, "\1DCC ", 5)) {
++	/* Skip any whitespace */
++	while (data < data_limit - 10) {
++		if (*data == ' ' || *data == '\r' || *data == '\n')
++			data++;
++		else
++			break;
++	}
++
++	/* strlen("PRIVMSG x ")=10 */
++	if (data < data_limit - 10) {
++		if (strncasecmp("PRIVMSG ", data, 8))
++			goto out;
++		data += 8;
++	}
++
++	/* strlen(" :\1DCC SENT t AAAAAAAA P\1\n")=26
++	 * 7+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=26
++	 */
++	while (data < data_limit - (21 + MINMATCHLEN)) {
++		/* Find first " :", the start of message */
++		if (memcmp(data, " :", 2)) {
+ 			data++;
+ 			continue;
+ 		}
++		data += 2;
++
++		/* then check that place only for the DCC command */
++		if (memcmp(data, "\1DCC ", 5))
++			goto out;
+ 		data += 5;
+-		/* we have at least (19+MINMATCHLEN)-5 bytes valid data left */
++		/* we have at least (21+MINMATCHLEN)-(2+5) bytes valid data left */
+ 
+ 		iph = ip_hdr(skb);
+ 		pr_debug("DCC found in master %pI4:%u %pI4:%u\n",
+@@ -172,7 +194,7 @@ static int help(struct sk_buff *skb, unsigned int protoff,
+ 			pr_debug("DCC %s detected\n", dccprotos[i]);
+ 
+ 			/* we have at least
+-			 * (19+MINMATCHLEN)-5-dccprotos[i].matchlen bytes valid
++			 * (21+MINMATCHLEN)-7-dccprotos[i].matchlen bytes valid
+ 			 * data left (== 14/13 bytes) */
+ 			if (parse_dcc(data, data_limit, &dcc_ip,
+ 				       &dcc_port, &addr_beg_p, &addr_end_p)) {
+diff --git a/net/netfilter/nf_conntrack_sip.c b/net/netfilter/nf_conntrack_sip.c
+index b83dc9bf0a5dd..78fd9122b70c7 100644
+--- a/net/netfilter/nf_conntrack_sip.c
++++ b/net/netfilter/nf_conntrack_sip.c
+@@ -477,7 +477,7 @@ static int ct_sip_walk_headers(const struct nf_conn *ct, const char *dptr,
+ 				return ret;
+ 			if (ret == 0)
+ 				break;
+-			dataoff += *matchoff;
++			dataoff = *matchoff;
+ 		}
+ 		*in_header = 0;
+ 	}
+@@ -489,7 +489,7 @@ static int ct_sip_walk_headers(const struct nf_conn *ct, const char *dptr,
+ 			break;
+ 		if (ret == 0)
+ 			return ret;
+-		dataoff += *matchoff;
++		dataoff = *matchoff;
+ 	}
+ 
+ 	if (in_header)
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index b8e7e1c5c08a8..810995d712ac7 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2001,7 +2001,6 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 			      u8 policy, u32 flags)
+ {
+ 	const struct nlattr * const *nla = ctx->nla;
+-	struct nft_stats __percpu *stats = NULL;
+ 	struct nft_table *table = ctx->table;
+ 	struct nft_base_chain *basechain;
+ 	struct net *net = ctx->net;
+@@ -2015,6 +2014,7 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 		return -EOVERFLOW;
+ 
+ 	if (nla[NFTA_CHAIN_HOOK]) {
++		struct nft_stats __percpu *stats = NULL;
+ 		struct nft_chain_hook hook;
+ 
+ 		if (flags & NFT_CHAIN_BINDING)
+@@ -2045,8 +2045,11 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 		if (err < 0) {
+ 			nft_chain_release_hook(&hook);
+ 			kfree(basechain);
++			free_percpu(stats);
+ 			return err;
+ 		}
++		if (stats)
++			static_branch_inc(&nft_counters_enabled);
+ 	} else {
+ 		if (flags & NFT_CHAIN_BASE)
+ 			return -EINVAL;
+@@ -2121,9 +2124,6 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 		goto err_unregister_hook;
+ 	}
+ 
+-	if (stats)
+-		static_branch_inc(&nft_counters_enabled);
+-
+ 	table->use++;
+ 
+ 	return 0;
+diff --git a/net/netfilter/nfnetlink_osf.c b/net/netfilter/nfnetlink_osf.c
+index 79fbf37291f38..51e3953b414c0 100644
+--- a/net/netfilter/nfnetlink_osf.c
++++ b/net/netfilter/nfnetlink_osf.c
+@@ -269,6 +269,7 @@ bool nf_osf_find(const struct sk_buff *skb,
+ 	struct nf_osf_hdr_ctx ctx;
+ 	const struct tcphdr *tcp;
+ 	struct tcphdr _tcph;
++	bool found = false;
+ 
+ 	memset(&ctx, 0, sizeof(ctx));
+ 
+@@ -283,10 +284,11 @@ bool nf_osf_find(const struct sk_buff *skb,
+ 
+ 		data->genre = f->genre;
+ 		data->version = f->version;
++		found = true;
+ 		break;
+ 	}
+ 
+-	return true;
++	return found;
+ }
+ EXPORT_SYMBOL_GPL(nf_osf_find);
+ 
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index b8ffb7e4f696c..c410a736301bc 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -2124,6 +2124,7 @@ replay:
+ 	}
+ 
+ 	if (chain->tmplt_ops && chain->tmplt_ops != tp->ops) {
++		tfilter_put(tp, fh);
+ 		NL_SET_ERR_MSG(extack, "Chain template is set to a different filter kind");
+ 		err = -EINVAL;
+ 		goto errout;
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index eca525791013e..ab8835a72cee6 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -65,6 +65,7 @@ struct taprio_sched {
+ 	u32 flags;
+ 	enum tk_offsets tk_offset;
+ 	int clockid;
++	bool offloaded;
+ 	atomic64_t picos_per_byte; /* Using picoseconds because for 10Gbps+
+ 				    * speeds it's sub-nanoseconds per byte
+ 				    */
+@@ -1267,6 +1268,8 @@ static int taprio_enable_offload(struct net_device *dev,
+ 		goto done;
+ 	}
+ 
++	q->offloaded = true;
++
+ done:
+ 	taprio_offload_free(offload);
+ 
+@@ -1281,12 +1284,9 @@ static int taprio_disable_offload(struct net_device *dev,
+ 	struct tc_taprio_qopt_offload *offload;
+ 	int err;
+ 
+-	if (!FULL_OFFLOAD_IS_ENABLED(q->flags))
++	if (!q->offloaded)
+ 		return 0;
+ 
+-	if (!ops->ndo_setup_tc)
+-		return -EOPNOTSUPP;
+-
+ 	offload = taprio_offload_alloc(0);
+ 	if (!offload) {
+ 		NL_SET_ERR_MSG(extack,
+@@ -1302,6 +1302,8 @@ static int taprio_disable_offload(struct net_device *dev,
+ 		goto out;
+ 	}
+ 
++	q->offloaded = false;
++
+ out:
+ 	taprio_offload_free(offload);
+ 
+@@ -1904,12 +1906,14 @@ start_error:
+ 
+ static struct Qdisc *taprio_leaf(struct Qdisc *sch, unsigned long cl)
+ {
+-	struct netdev_queue *dev_queue = taprio_queue_get(sch, cl);
++	struct taprio_sched *q = qdisc_priv(sch);
++	struct net_device *dev = qdisc_dev(sch);
++	unsigned int ntx = cl - 1;
+ 
+-	if (!dev_queue)
++	if (ntx >= dev->num_tx_queues)
+ 		return NULL;
+ 
+-	return dev_queue->qdisc_sleeping;
++	return q->qdiscs[ntx];
+ }
+ 
+ static unsigned long taprio_find(struct Qdisc *sch, u32 classid)
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index ef2fd28999baf..bf485a2017a4e 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1584,7 +1584,7 @@ static struct smc_buf_desc *smcr_new_buf_create(struct smc_link_group *lgr,
+ static int smcr_buf_map_usable_links(struct smc_link_group *lgr,
+ 				     struct smc_buf_desc *buf_desc, bool is_rmb)
+ {
+-	int i, rc = 0;
++	int i, rc = 0, cnt = 0;
+ 
+ 	/* protect against parallel link reconfiguration */
+ 	mutex_lock(&lgr->llc_conf_mutex);
+@@ -1597,9 +1597,12 @@ static int smcr_buf_map_usable_links(struct smc_link_group *lgr,
+ 			rc = -ENOMEM;
+ 			goto out;
+ 		}
++		cnt++;
+ 	}
+ out:
+ 	mutex_unlock(&lgr->llc_conf_mutex);
++	if (!rc && !cnt)
++		rc = -EINVAL;
+ 	return rc;
+ }
+ 
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 600ea241ead79..79b8d4258fd3b 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2584,6 +2584,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	/* 5 Series/3400 */
+ 	{ PCI_DEVICE(0x8086, 0x3b56),
+ 	  .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_NOPM },
++	{ PCI_DEVICE(0x8086, 0x3b57),
++	  .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_NOPM },
+ 	/* Poulsbo */
+ 	{ PCI_DEVICE(0x8086, 0x811b),
+ 	  .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_BASE },
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 71e11481ba41c..7551cdf3b4529 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -3839,6 +3839,7 @@ static int patch_tegra_hdmi(struct hda_codec *codec)
+ 	if (err)
+ 		return err;
+ 
++	codec->depop_delay = 10;
+ 	codec->patch_ops.build_pcms = tegra_hdmi_build_pcms;
+ 	spec = codec->spec;
+ 	spec->chmap.ops.chmap_cea_alloc_validate_get_type =
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 78f4f684a3c72..574fe798d5125 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6824,6 +6824,8 @@ enum {
+ 	ALC294_FIXUP_ASUS_GU502_HP,
+ 	ALC294_FIXUP_ASUS_GU502_PINS,
+ 	ALC294_FIXUP_ASUS_GU502_VERBS,
++	ALC294_FIXUP_ASUS_G513_PINS,
++	ALC285_FIXUP_ASUS_G533Z_PINS,
+ 	ALC285_FIXUP_HP_GPIO_LED,
+ 	ALC285_FIXUP_HP_MUTE_LED,
+ 	ALC236_FIXUP_HP_GPIO_LED,
+@@ -8149,6 +8151,24 @@ static const struct hda_fixup alc269_fixups[] = {
+ 	[ALC294_FIXUP_ASUS_GU502_HP] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc294_fixup_gu502_hp,
++	},
++	 [ALC294_FIXUP_ASUS_G513_PINS] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++				{ 0x19, 0x03a11050 }, /* front HP mic */
++				{ 0x1a, 0x03a11c30 }, /* rear external mic */
++				{ 0x21, 0x03211420 }, /* front HP out */
++				{ }
++		},
++	},
++	[ALC285_FIXUP_ASUS_G533Z_PINS] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x14, 0x90170120 },
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC294_FIXUP_ASUS_G513_PINS,
+ 	},
+ 	[ALC294_FIXUP_ASUS_COEF_1B] = {
+ 		.type = HDA_FIXUP_VERBS,
+@@ -8754,6 +8774,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
++	SND_PCI_QUIRK(0x1028, 0x087d, "Dell Precision 5530", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+@@ -8769,6 +8790,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0a9d, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0b19, "Dell XPS 15 9520", ALC289_FIXUP_DUAL_SPK),
++	SND_PCI_QUIRK(0x1028, 0x0b1a, "Dell Precision 5570", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -8912,10 +8934,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ 	SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
++	SND_PCI_QUIRK(0x1043, 0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK),
++	SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS),
+ 	SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
+-	SND_PCI_QUIRK(0x1043, 0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
+@@ -8930,14 +8953,16 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x1043, 0x1c92, "ASUS ROG Strix G15", ALC285_FIXUP_ASUS_G533Z_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
++	SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
+ 	SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
++	SND_PCI_QUIRK(0x1043, 0x1e5e, "ASUS ROG Strix G513", ALC294_FIXUP_ASUS_G513_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
++	SND_PCI_QUIRK(0x1043, 0x1c52, "ASUS Zephyrus G15 2022", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
+-	SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
+-	SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ 	SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+@@ -9134,6 +9159,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ 	SND_PCI_QUIRK(0x1849, 0x1233, "ASRock NUC Box 1100", ALC233_FIXUP_NO_AUDIO_JACK),
+ 	SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
++	SND_PCI_QUIRK(0x19e5, 0x320f, "Huawei WRT-WX9 ", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1b35, 0x1235, "CZC B20", ALC269_FIXUP_CZC_B20),
+ 	SND_PCI_QUIRK(0x1b35, 0x1236, "CZC TMI", ALC269_FIXUP_CZC_TMI),
+ 	SND_PCI_QUIRK(0x1b35, 0x1237, "CZC L101", ALC269_FIXUP_CZC_L101),
+diff --git a/tools/perf/util/genelf.c b/tools/perf/util/genelf.c
+index 953338b9e887e..02cd9f75e3d2f 100644
+--- a/tools/perf/util/genelf.c
++++ b/tools/perf/util/genelf.c
+@@ -251,6 +251,7 @@ jit_write_elf(int fd, uint64_t load_addr, const char *sym,
+ 	Elf_Data *d;
+ 	Elf_Scn *scn;
+ 	Elf_Ehdr *ehdr;
++	Elf_Phdr *phdr;
+ 	Elf_Shdr *shdr;
+ 	uint64_t eh_frame_base_offset;
+ 	char *strsym = NULL;
+@@ -285,6 +286,19 @@ jit_write_elf(int fd, uint64_t load_addr, const char *sym,
+ 	ehdr->e_version = EV_CURRENT;
+ 	ehdr->e_shstrndx= unwinding ? 4 : 2; /* shdr index for section name */
+ 
++	/*
++	 * setup program header
++	 */
++	phdr = elf_newphdr(e, 1);
++	phdr[0].p_type = PT_LOAD;
++	phdr[0].p_offset = 0;
++	phdr[0].p_vaddr = 0;
++	phdr[0].p_paddr = 0;
++	phdr[0].p_filesz = csize;
++	phdr[0].p_memsz = csize;
++	phdr[0].p_flags = PF_X | PF_R;
++	phdr[0].p_align = 8;
++
+ 	/*
+ 	 * setup text section
+ 	 */
+diff --git a/tools/perf/util/genelf.h b/tools/perf/util/genelf.h
+index d4137559be053..ac638945b4cb0 100644
+--- a/tools/perf/util/genelf.h
++++ b/tools/perf/util/genelf.h
+@@ -50,8 +50,10 @@ int jit_add_debug_info(Elf *e, uint64_t code_addr, void *debug, int nr_debug_ent
+ 
+ #if GEN_ELF_CLASS == ELFCLASS64
+ #define elf_newehdr	elf64_newehdr
++#define elf_newphdr	elf64_newphdr
+ #define elf_getshdr	elf64_getshdr
+ #define Elf_Ehdr	Elf64_Ehdr
++#define Elf_Phdr	Elf64_Phdr
+ #define Elf_Shdr	Elf64_Shdr
+ #define Elf_Sym		Elf64_Sym
+ #define ELF_ST_TYPE(a)	ELF64_ST_TYPE(a)
+@@ -59,8 +61,10 @@ int jit_add_debug_info(Elf *e, uint64_t code_addr, void *debug, int nr_debug_ent
+ #define ELF_ST_VIS(a)	ELF64_ST_VISIBILITY(a)
+ #else
+ #define elf_newehdr	elf32_newehdr
++#define elf_newphdr	elf32_newphdr
+ #define elf_getshdr	elf32_getshdr
+ #define Elf_Ehdr	Elf32_Ehdr
++#define Elf_Phdr	Elf32_Phdr
+ #define Elf_Shdr	Elf32_Shdr
+ #define Elf_Sym		Elf32_Sym
+ #define ELF_ST_TYPE(a)	ELF32_ST_TYPE(a)
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index d8d79a9ec7758..3e423a9200151 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -2002,8 +2002,8 @@ static int kcore_copy__compare_file(const char *from_dir, const char *to_dir,
+  * unusual.  One significant peculiarity is that the mapping (start -> pgoff)
+  * is not the same for the kernel map and the modules map.  That happens because
+  * the data is copied adjacently whereas the original kcore has gaps.  Finally,
+- * kallsyms and modules files are compared with their copies to check that
+- * modules have not been loaded or unloaded while the copies were taking place.
++ * kallsyms file is compared with its copy to check that modules have not been
++ * loaded or unloaded while the copies were taking place.
+  *
+  * Return: %0 on success, %-1 on failure.
+  */
+@@ -2066,9 +2066,6 @@ int kcore_copy(const char *from_dir, const char *to_dir)
+ 			goto out_extract_close;
+ 	}
+ 
+-	if (kcore_copy__compare_file(from_dir, to_dir, "modules"))
+-		goto out_extract_close;
+-
+ 	if (kcore_copy__compare_file(from_dir, to_dir, "kallsyms"))
+ 		goto out_extract_close;
+ 
+diff --git a/tools/testing/selftests/net/forwarding/sch_red.sh b/tools/testing/selftests/net/forwarding/sch_red.sh
+index e714bae473fb4..81f31179ac887 100755
+--- a/tools/testing/selftests/net/forwarding/sch_red.sh
++++ b/tools/testing/selftests/net/forwarding/sch_red.sh
+@@ -1,3 +1,4 @@
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ # This test sends one stream of traffic from H1 through a TBF shaper, to a RED
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 578235291e92e..c4cce817a4522 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -159,6 +159,10 @@ __weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+ {
+ }
+ 
++__weak void kvm_arch_guest_memory_reclaimed(struct kvm *kvm)
++{
++}
++
+ bool kvm_is_zone_device_pfn(kvm_pfn_t pfn)
+ {
+ 	/*
+@@ -340,6 +344,12 @@ void kvm_reload_remote_mmus(struct kvm *kvm)
+ 	kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
+ }
+ 
++static void kvm_flush_shadow_all(struct kvm *kvm)
++{
++	kvm_arch_flush_shadow_all(kvm);
++	kvm_arch_guest_memory_reclaimed(kvm);
++}
++
+ #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
+ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
+ 					       gfp_t gfp_flags)
+@@ -489,6 +499,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
+ 		kvm_flush_remote_tlbs(kvm);
+ 
+ 	spin_unlock(&kvm->mmu_lock);
++	kvm_arch_guest_memory_reclaimed(kvm);
+ 	srcu_read_unlock(&kvm->srcu, idx);
+ 
+ 	return 0;
+@@ -592,7 +603,7 @@ static void kvm_mmu_notifier_release(struct mmu_notifier *mn,
+ 	int idx;
+ 
+ 	idx = srcu_read_lock(&kvm->srcu);
+-	kvm_arch_flush_shadow_all(kvm);
++	kvm_flush_shadow_all(kvm);
+ 	srcu_read_unlock(&kvm->srcu, idx);
+ }
+ 
+@@ -896,7 +907,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
+ #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
+ 	mmu_notifier_unregister(&kvm->mmu_notifier, kvm->mm);
+ #else
+-	kvm_arch_flush_shadow_all(kvm);
++	kvm_flush_shadow_all(kvm);
+ #endif
+ 	kvm_arch_destroy_vm(kvm);
+ 	kvm_destroy_devices(kvm);
+@@ -1238,6 +1249,7 @@ static int kvm_set_memslot(struct kvm *kvm,
+ 		 *	- kvm_is_visible_gfn (mmu_check_root)
+ 		 */
+ 		kvm_arch_flush_shadow_memslot(kvm, slot);
++		kvm_arch_guest_memory_reclaimed(kvm);
+ 	}
+ 
+ 	r = kvm_arch_prepare_memory_region(kvm, new, mem, change);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-10-05 11:58 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-10-05 11:58 UTC (permalink / raw
  To: gentoo-commits

commit:     020e5dd57319eb0ebd5150b5547b93f2c8553082
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct  5 11:58:00 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct  5 11:58:00 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=020e5dd5

Linux patch 5.10.147

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1146_linux-5.10.147.patch | 1492 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1496 insertions(+)

diff --git a/0000_README b/0000_README
index ef3cbd20..33d96fa1 100644
--- a/0000_README
+++ b/0000_README
@@ -627,6 +627,10 @@ Patch:  1145_linux-5.10.146.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.146
 
+Patch:  1146_linux-5.10.147.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.147
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1146_linux-5.10.147.patch b/1146_linux-5.10.147.patch
new file mode 100644
index 00000000..5c09dfe4
--- /dev/null
+++ b/1146_linux-5.10.147.patch
@@ -0,0 +1,1492 @@
+diff --git a/Makefile b/Makefile
+index 26a871eebe924..24110f834775a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 146
++SUBLEVEL = 147
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/am33xx-l4.dtsi b/arch/arm/boot/dts/am33xx-l4.dtsi
+index 29fafb67cfaad..0d36e9dd14a45 100644
+--- a/arch/arm/boot/dts/am33xx-l4.dtsi
++++ b/arch/arm/boot/dts/am33xx-l4.dtsi
+@@ -1352,8 +1352,7 @@
+ 			mmc1: mmc@0 {
+ 				compatible = "ti,am335-sdhci";
+ 				ti,needs-special-reset;
+-				dmas = <&edma_xbar 24 0 0
+-					&edma_xbar 25 0 0>;
++				dmas = <&edma 24 0>, <&edma 25 0>;
+ 				dma-names = "tx", "rx";
+ 				interrupts = <64>;
+ 				reg = <0x0 0x1000>;
+diff --git a/arch/arm/boot/dts/integratorap.dts b/arch/arm/boot/dts/integratorap.dts
+index 67d1f9b24a52f..8600c0548525e 100644
+--- a/arch/arm/boot/dts/integratorap.dts
++++ b/arch/arm/boot/dts/integratorap.dts
+@@ -153,6 +153,7 @@
+ 
+ 	pci: pciv3@62000000 {
+ 		compatible = "arm,integrator-ap-pci", "v3,v360epc-pci";
++		device_type = "pci";
+ 		#interrupt-cells = <1>;
+ 		#size-cells = <2>;
+ 		#address-cells = <3>;
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index a85fb17f11804..e6e63a9d27cbe 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -1330,22 +1330,23 @@ struct bp_patching_desc {
+ 	atomic_t refs;
+ };
+ 
+-static struct bp_patching_desc *bp_desc;
++static struct bp_patching_desc bp_desc;
+ 
+ static __always_inline
+-struct bp_patching_desc *try_get_desc(struct bp_patching_desc **descp)
++struct bp_patching_desc *try_get_desc(void)
+ {
+-	/* rcu_dereference */
+-	struct bp_patching_desc *desc = __READ_ONCE(*descp);
++	struct bp_patching_desc *desc = &bp_desc;
+ 
+-	if (!desc || !arch_atomic_inc_not_zero(&desc->refs))
++	if (!arch_atomic_inc_not_zero(&desc->refs))
+ 		return NULL;
+ 
+ 	return desc;
+ }
+ 
+-static __always_inline void put_desc(struct bp_patching_desc *desc)
++static __always_inline void put_desc(void)
+ {
++	struct bp_patching_desc *desc = &bp_desc;
++
+ 	smp_mb__before_atomic();
+ 	arch_atomic_dec(&desc->refs);
+ }
+@@ -1378,15 +1379,15 @@ noinstr int poke_int3_handler(struct pt_regs *regs)
+ 
+ 	/*
+ 	 * Having observed our INT3 instruction, we now must observe
+-	 * bp_desc:
++	 * bp_desc with non-zero refcount:
+ 	 *
+-	 *	bp_desc = desc			INT3
++	 *	bp_desc.refs = 1		INT3
+ 	 *	WMB				RMB
+-	 *	write INT3			if (desc)
++	 *	write INT3			if (bp_desc.refs != 0)
+ 	 */
+ 	smp_rmb();
+ 
+-	desc = try_get_desc(&bp_desc);
++	desc = try_get_desc();
+ 	if (!desc)
+ 		return 0;
+ 
+@@ -1440,7 +1441,7 @@ noinstr int poke_int3_handler(struct pt_regs *regs)
+ 	ret = 1;
+ 
+ out_put:
+-	put_desc(desc);
++	put_desc();
+ 	return ret;
+ }
+ 
+@@ -1471,18 +1472,20 @@ static int tp_vec_nr;
+  */
+ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries)
+ {
+-	struct bp_patching_desc desc = {
+-		.vec = tp,
+-		.nr_entries = nr_entries,
+-		.refs = ATOMIC_INIT(1),
+-	};
+ 	unsigned char int3 = INT3_INSN_OPCODE;
+ 	unsigned int i;
+ 	int do_sync;
+ 
+ 	lockdep_assert_held(&text_mutex);
+ 
+-	smp_store_release(&bp_desc, &desc); /* rcu_assign_pointer */
++	bp_desc.vec = tp;
++	bp_desc.nr_entries = nr_entries;
++
++	/*
++	 * Corresponds to the implicit memory barrier in try_get_desc() to
++	 * ensure reading a non-zero refcount provides up to date bp_desc data.
++	 */
++	atomic_set_release(&bp_desc.refs, 1);
+ 
+ 	/*
+ 	 * Corresponding read barrier in int3 notifier for making sure the
+@@ -1570,12 +1573,10 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
+ 		text_poke_sync();
+ 
+ 	/*
+-	 * Remove and synchronize_rcu(), except we have a very primitive
+-	 * refcount based completion.
++	 * Remove and wait for refs to be zero.
+ 	 */
+-	WRITE_ONCE(bp_desc, NULL); /* RCU_INIT_POINTER */
+-	if (!atomic_dec_and_test(&desc.refs))
+-		atomic_cond_read_acquire(&desc.refs, !VAL);
++	if (!atomic_dec_and_test(&bp_desc.refs))
++		atomic_cond_read_acquire(&bp_desc.refs, !VAL);
+ }
+ 
+ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr,
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 6e1ea5e85e598..6f44274aa949d 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -661,8 +661,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 			entry->edx = 0;
+ 		}
+ 		break;
+-	case 9:
+-		break;
+ 	case 0xa: { /* Architectural Performance Monitoring */
+ 		struct x86_pmu_capability cap;
+ 		union cpuid10_eax eax;
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 2402fa4d8aa55..d13474c6d1818 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -3936,6 +3936,10 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	{ "PIONEER DVD-RW  DVR-212D",	NULL,	ATA_HORKAGE_NOSETXFER },
+ 	{ "PIONEER DVD-RW  DVR-216D",	NULL,	ATA_HORKAGE_NOSETXFER },
+ 
++	/* These specific Pioneer models have LPM issues */
++	{ "PIONEER BD-RW   BDR-207M",	NULL,	ATA_HORKAGE_NOLPM },
++	{ "PIONEER BD-RW   BDR-205",	NULL,	ATA_HORKAGE_NOLPM },
++
+ 	/* Crucial BX100 SSD 500GB has broken LPM support */
+ 	{ "CT500BX100SSD1",		NULL,	ATA_HORKAGE_NOLPM },
+ 
+diff --git a/drivers/clk/bcm/clk-iproc-pll.c b/drivers/clk/bcm/clk-iproc-pll.c
+index 274441e2ddb28..8f0619f362e3b 100644
+--- a/drivers/clk/bcm/clk-iproc-pll.c
++++ b/drivers/clk/bcm/clk-iproc-pll.c
+@@ -736,6 +736,7 @@ void iproc_pll_clk_setup(struct device_node *node,
+ 	const char *parent_name;
+ 	struct iproc_clk *iclk_array;
+ 	struct clk_hw_onecell_data *clk_data;
++	const char *clk_name;
+ 
+ 	if (WARN_ON(!pll_ctrl) || WARN_ON(!clk_ctrl))
+ 		return;
+@@ -783,7 +784,12 @@ void iproc_pll_clk_setup(struct device_node *node,
+ 	iclk = &iclk_array[0];
+ 	iclk->pll = pll;
+ 
+-	init.name = node->name;
++	ret = of_property_read_string_index(node, "clock-output-names",
++					    0, &clk_name);
++	if (WARN_ON(ret))
++		goto err_pll_register;
++
++	init.name = clk_name;
+ 	init.ops = &iproc_pll_ops;
+ 	init.flags = 0;
+ 	parent_name = of_clk_get_parent_name(node, 0);
+@@ -803,13 +809,11 @@ void iproc_pll_clk_setup(struct device_node *node,
+ 		goto err_pll_register;
+ 
+ 	clk_data->hws[0] = &iclk->hw;
++	parent_name = clk_name;
+ 
+ 	/* now initialize and register all leaf clocks */
+ 	for (i = 1; i < num_clks; i++) {
+-		const char *clk_name;
+-
+ 		memset(&init, 0, sizeof(init));
+-		parent_name = node->name;
+ 
+ 		ret = of_property_read_string_index(node, "clock-output-names",
+ 						    i, &clk_name);
+diff --git a/drivers/clk/imx/clk-imx6sx.c b/drivers/clk/imx/clk-imx6sx.c
+index fc1bd23d45834..598f3cf4eba49 100644
+--- a/drivers/clk/imx/clk-imx6sx.c
++++ b/drivers/clk/imx/clk-imx6sx.c
+@@ -280,13 +280,13 @@ static void __init imx6sx_clocks_init(struct device_node *ccm_node)
+ 	hws[IMX6SX_CLK_SSI3_SEL]           = imx_clk_hw_mux("ssi3_sel",         base + 0x1c,  14,     2,      ssi_sels,          ARRAY_SIZE(ssi_sels));
+ 	hws[IMX6SX_CLK_SSI2_SEL]           = imx_clk_hw_mux("ssi2_sel",         base + 0x1c,  12,     2,      ssi_sels,          ARRAY_SIZE(ssi_sels));
+ 	hws[IMX6SX_CLK_SSI1_SEL]           = imx_clk_hw_mux("ssi1_sel",         base + 0x1c,  10,     2,      ssi_sels,          ARRAY_SIZE(ssi_sels));
+-	hws[IMX6SX_CLK_QSPI1_SEL]          = imx_clk_hw_mux_flags("qspi1_sel", base + 0x1c,  7, 3, qspi1_sels, ARRAY_SIZE(qspi1_sels), CLK_SET_RATE_PARENT);
++	hws[IMX6SX_CLK_QSPI1_SEL]          = imx_clk_hw_mux("qspi1_sel",        base + 0x1c,  7,      3,      qspi1_sels,        ARRAY_SIZE(qspi1_sels));
+ 	hws[IMX6SX_CLK_PERCLK_SEL]         = imx_clk_hw_mux("perclk_sel",       base + 0x1c,  6,      1,      perclk_sels,       ARRAY_SIZE(perclk_sels));
+ 	hws[IMX6SX_CLK_VID_SEL]            = imx_clk_hw_mux("vid_sel",          base + 0x20,  21,     3,      vid_sels,          ARRAY_SIZE(vid_sels));
+ 	hws[IMX6SX_CLK_ESAI_SEL]           = imx_clk_hw_mux("esai_sel",         base + 0x20,  19,     2,      audio_sels,        ARRAY_SIZE(audio_sels));
+ 	hws[IMX6SX_CLK_CAN_SEL]            = imx_clk_hw_mux("can_sel",          base + 0x20,  8,      2,      can_sels,          ARRAY_SIZE(can_sels));
+ 	hws[IMX6SX_CLK_UART_SEL]           = imx_clk_hw_mux("uart_sel",         base + 0x24,  6,      1,      uart_sels,         ARRAY_SIZE(uart_sels));
+-	hws[IMX6SX_CLK_QSPI2_SEL]          = imx_clk_hw_mux_flags("qspi2_sel", base + 0x2c, 15, 3, qspi2_sels, ARRAY_SIZE(qspi2_sels), CLK_SET_RATE_PARENT);
++	hws[IMX6SX_CLK_QSPI2_SEL]          = imx_clk_hw_mux("qspi2_sel",        base + 0x2c,  15,     3,      qspi2_sels,        ARRAY_SIZE(qspi2_sels));
+ 	hws[IMX6SX_CLK_SPDIF_SEL]          = imx_clk_hw_mux("spdif_sel",        base + 0x30,  20,     2,      audio_sels,        ARRAY_SIZE(audio_sels));
+ 	hws[IMX6SX_CLK_AUDIO_SEL]          = imx_clk_hw_mux("audio_sel",        base + 0x30,  7,      2,      audio_sels,        ARRAY_SIZE(audio_sels));
+ 	hws[IMX6SX_CLK_ENET_PRE_SEL]       = imx_clk_hw_mux("enet_pre_sel",     base + 0x34,  15,     3,      enet_pre_sels,     ARRAY_SIZE(enet_pre_sels));
+diff --git a/drivers/clk/ingenic/tcu.c b/drivers/clk/ingenic/tcu.c
+index 9382dc3aa27e6..1999c114f4656 100644
+--- a/drivers/clk/ingenic/tcu.c
++++ b/drivers/clk/ingenic/tcu.c
+@@ -100,15 +100,11 @@ static bool ingenic_tcu_enable_regs(struct clk_hw *hw)
+ 	bool enabled = false;
+ 
+ 	/*
+-	 * If the SoC has no global TCU clock, we must ungate the channel's
+-	 * clock to be able to access its registers.
+-	 * If we have a TCU clock, it will be enabled automatically as it has
+-	 * been attached to the regmap.
++	 * According to the programming manual, a timer channel's registers can
++	 * only be accessed when the channel's stop bit is clear.
+ 	 */
+-	if (!tcu->clk) {
+-		enabled = !!ingenic_tcu_is_enabled(hw);
+-		regmap_write(tcu->map, TCU_REG_TSCR, BIT(info->gate_bit));
+-	}
++	enabled = !!ingenic_tcu_is_enabled(hw);
++	regmap_write(tcu->map, TCU_REG_TSCR, BIT(info->gate_bit));
+ 
+ 	return enabled;
+ }
+@@ -119,8 +115,7 @@ static void ingenic_tcu_disable_regs(struct clk_hw *hw)
+ 	const struct ingenic_tcu_clk_info *info = tcu_clk->info;
+ 	struct ingenic_tcu *tcu = tcu_clk->tcu;
+ 
+-	if (!tcu->clk)
+-		regmap_write(tcu->map, TCU_REG_TSSR, BIT(info->gate_bit));
++	regmap_write(tcu->map, TCU_REG_TSSR, BIT(info->gate_bit));
+ }
+ 
+ static u8 ingenic_tcu_get_parent(struct clk_hw *hw)
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+index a7bcb429c02b5..e8baa07450b7d 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+@@ -1865,12 +1865,6 @@ EXPORT_SYMBOL_GPL(analogix_dp_remove);
+ int analogix_dp_suspend(struct analogix_dp_device *dp)
+ {
+ 	clk_disable_unprepare(dp->clock);
+-
+-	if (dp->plat_data->panel) {
+-		if (drm_panel_unprepare(dp->plat_data->panel))
+-			DRM_ERROR("failed to turnoff the panel\n");
+-	}
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_suspend);
+@@ -1885,13 +1879,6 @@ int analogix_dp_resume(struct analogix_dp_device *dp)
+ 		return ret;
+ 	}
+ 
+-	if (dp->plat_data->panel) {
+-		if (drm_panel_prepare(dp->plat_data->panel)) {
+-			DRM_ERROR("failed to setup the panel\n");
+-			return -EBUSY;
+-		}
+-	}
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_resume);
+diff --git a/drivers/input/keyboard/snvs_pwrkey.c b/drivers/input/keyboard/snvs_pwrkey.c
+index 65286762b02ab..ad8660be0127c 100644
+--- a/drivers/input/keyboard/snvs_pwrkey.c
++++ b/drivers/input/keyboard/snvs_pwrkey.c
+@@ -20,7 +20,7 @@
+ #include <linux/mfd/syscon.h>
+ #include <linux/regmap.h>
+ 
+-#define SNVS_HPVIDR1_REG	0xF8
++#define SNVS_HPVIDR1_REG	0xBF8
+ #define SNVS_LPSR_REG		0x4C	/* LP Status Register */
+ #define SNVS_LPCR_REG		0x38	/* LP Control Register */
+ #define SNVS_HPSR_REG		0x14
+diff --git a/drivers/input/touchscreen/melfas_mip4.c b/drivers/input/touchscreen/melfas_mip4.c
+index f67efdd040b24..43fcc8c84e2f5 100644
+--- a/drivers/input/touchscreen/melfas_mip4.c
++++ b/drivers/input/touchscreen/melfas_mip4.c
+@@ -1453,7 +1453,7 @@ static int mip4_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 					      "ce", GPIOD_OUT_LOW);
+ 	if (IS_ERR(ts->gpio_ce)) {
+ 		error = PTR_ERR(ts->gpio_ce);
+-		if (error != EPROBE_DEFER)
++		if (error != -EPROBE_DEFER)
+ 			dev_err(&client->dev,
+ 				"Failed to get gpio: %d\n", error);
+ 		return error;
+diff --git a/drivers/media/dvb-core/dvb_vb2.c b/drivers/media/dvb-core/dvb_vb2.c
+index 6974f17315294..1331f2c2237e6 100644
+--- a/drivers/media/dvb-core/dvb_vb2.c
++++ b/drivers/media/dvb-core/dvb_vb2.c
+@@ -358,6 +358,12 @@ int dvb_vb2_reqbufs(struct dvb_vb2_ctx *ctx, struct dmx_requestbuffers *req)
+ 
+ int dvb_vb2_querybuf(struct dvb_vb2_ctx *ctx, struct dmx_buffer *b)
+ {
++	struct vb2_queue *q = &ctx->vb_q;
++
++	if (b->index >= q->num_buffers) {
++		dprintk(1, "[%s] buffer index out of range\n", ctx->name);
++		return -EINVAL;
++	}
+ 	vb2_core_querybuf(&ctx->vb_q, b->index, b);
+ 	dprintk(3, "[%s] index=%d\n", ctx->name, b->index);
+ 	return 0;
+@@ -382,8 +388,13 @@ int dvb_vb2_expbuf(struct dvb_vb2_ctx *ctx, struct dmx_exportbuffer *exp)
+ 
+ int dvb_vb2_qbuf(struct dvb_vb2_ctx *ctx, struct dmx_buffer *b)
+ {
++	struct vb2_queue *q = &ctx->vb_q;
+ 	int ret;
+ 
++	if (b->index >= q->num_buffers) {
++		dprintk(1, "[%s] buffer index out of range\n", ctx->name);
++		return -EINVAL;
++	}
+ 	ret = vb2_core_qbuf(&ctx->vb_q, b->index, b, NULL);
+ 	if (ret) {
+ 		dprintk(1, "[%s] index=%d errno=%d\n", ctx->name,
+diff --git a/drivers/mmc/host/mmc_hsq.c b/drivers/mmc/host/mmc_hsq.c
+index a5e05ed0fda3e..9d35453e7371b 100644
+--- a/drivers/mmc/host/mmc_hsq.c
++++ b/drivers/mmc/host/mmc_hsq.c
+@@ -34,7 +34,7 @@ static void mmc_hsq_pump_requests(struct mmc_hsq *hsq)
+ 	spin_lock_irqsave(&hsq->lock, flags);
+ 
+ 	/* Make sure we are not already running a request now */
+-	if (hsq->mrq) {
++	if (hsq->mrq || hsq->recovery_halt) {
+ 		spin_unlock_irqrestore(&hsq->lock, flags);
+ 		return;
+ 	}
+diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
+index ea67a7ef2390c..c16300b921391 100644
+--- a/drivers/mmc/host/moxart-mmc.c
++++ b/drivers/mmc/host/moxart-mmc.c
+@@ -111,8 +111,8 @@
+ #define CLK_DIV_MASK		0x7f
+ 
+ /* REG_BUS_WIDTH */
+-#define BUS_WIDTH_8		BIT(2)
+-#define BUS_WIDTH_4		BIT(1)
++#define BUS_WIDTH_4_SUPPORT	BIT(3)
++#define BUS_WIDTH_4		BIT(2)
+ #define BUS_WIDTH_1		BIT(0)
+ 
+ #define MMC_VDD_360		23
+@@ -527,9 +527,6 @@ static void moxart_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 	case MMC_BUS_WIDTH_4:
+ 		writel(BUS_WIDTH_4, host->base + REG_BUS_WIDTH);
+ 		break;
+-	case MMC_BUS_WIDTH_8:
+-		writel(BUS_WIDTH_8, host->base + REG_BUS_WIDTH);
+-		break;
+ 	default:
+ 		writel(BUS_WIDTH_1, host->base + REG_BUS_WIDTH);
+ 		break;
+@@ -654,16 +651,8 @@ static int moxart_probe(struct platform_device *pdev)
+ 		dmaengine_slave_config(host->dma_chan_rx, &cfg);
+ 	}
+ 
+-	switch ((readl(host->base + REG_BUS_WIDTH) >> 3) & 3) {
+-	case 1:
++	if (readl(host->base + REG_BUS_WIDTH) & BUS_WIDTH_4_SUPPORT)
+ 		mmc->caps |= MMC_CAP_4_BIT_DATA;
+-		break;
+-	case 2:
+-		mmc->caps |= MMC_CAP_4_BIT_DATA | MMC_CAP_8_BIT_DATA;
+-		break;
+-	default:
+-		break;
+-	}
+ 
+ 	writel(0, host->base + REG_INTERRUPT_MASK);
+ 
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 265620a81f9f6..70155e996f7d7 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -502,14 +502,19 @@ static bool mt7531_dual_sgmii_supported(struct mt7530_priv *priv)
+ static int
+ mt7531_pad_setup(struct dsa_switch *ds, phy_interface_t interface)
+ {
+-	struct mt7530_priv *priv = ds->priv;
++	return 0;
++}
++
++static void
++mt7531_pll_setup(struct mt7530_priv *priv)
++{
+ 	u32 top_sig;
+ 	u32 hwstrap;
+ 	u32 xtal;
+ 	u32 val;
+ 
+ 	if (mt7531_dual_sgmii_supported(priv))
+-		return 0;
++		return;
+ 
+ 	val = mt7530_read(priv, MT7531_CREV);
+ 	top_sig = mt7530_read(priv, MT7531_TOP_SIG_SR);
+@@ -588,8 +593,6 @@ mt7531_pad_setup(struct dsa_switch *ds, phy_interface_t interface)
+ 	val |= EN_COREPLL;
+ 	mt7530_write(priv, MT7531_PLLGP_EN, val);
+ 	usleep_range(25, 35);
+-
+-	return 0;
+ }
+ 
+ static void
+@@ -1731,6 +1734,8 @@ mt7531_setup(struct dsa_switch *ds)
+ 		     SYS_CTRL_PHY_RST | SYS_CTRL_SW_RST |
+ 		     SYS_CTRL_REG_RST);
+ 
++	mt7531_pll_setup(priv);
++
+ 	if (mt7531_dual_sgmii_supported(priv)) {
+ 		priv->p5_intf_sel = P5_INTF_SEL_GMAC5_SGMII;
+ 
+@@ -2281,8 +2286,6 @@ mt7531_cpu_port_config(struct dsa_switch *ds, int port)
+ 	case 6:
+ 		interface = PHY_INTERFACE_MODE_2500BASEX;
+ 
+-		mt7531_pad_setup(ds, interface);
+-
+ 		priv->p6_interface = interface;
+ 		break;
+ 	default:
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+index c5b0e725b2382..2169351b6afc3 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+@@ -14,6 +14,7 @@
+ #include "cudbg_entity.h"
+ #include "cudbg_lib.h"
+ #include "cudbg_zlib.h"
++#include "cxgb4_tc_mqprio.h"
+ 
+ static const u32 t6_tp_pio_array[][IREG_NUM_ELEM] = {
+ 	{0x7e40, 0x7e44, 0x020, 28}, /* t6_tp_pio_regs_20_to_3b */
+@@ -3476,7 +3477,7 @@ int cudbg_collect_qdesc(struct cudbg_init *pdbg_init,
+ 			for (i = 0; i < utxq->ntxq; i++)
+ 				QDESC_GET_TXQ(&utxq->uldtxq[i].q,
+ 					      cudbg_uld_txq_to_qtype(j),
+-					      out_unlock);
++					      out_unlock_uld);
+ 		}
+ 	}
+ 
+@@ -3493,7 +3494,7 @@ int cudbg_collect_qdesc(struct cudbg_init *pdbg_init,
+ 			for (i = 0; i < urxq->nrxq; i++)
+ 				QDESC_GET_RXQ(&urxq->uldrxq[i].rspq,
+ 					      cudbg_uld_rxq_to_qtype(j),
+-					      out_unlock);
++					      out_unlock_uld);
+ 		}
+ 
+ 		/* ULD FLQ */
+@@ -3505,7 +3506,7 @@ int cudbg_collect_qdesc(struct cudbg_init *pdbg_init,
+ 			for (i = 0; i < urxq->nrxq; i++)
+ 				QDESC_GET_FLQ(&urxq->uldrxq[i].fl,
+ 					      cudbg_uld_flq_to_qtype(j),
+-					      out_unlock);
++					      out_unlock_uld);
+ 		}
+ 
+ 		/* ULD CIQ */
+@@ -3518,29 +3519,34 @@ int cudbg_collect_qdesc(struct cudbg_init *pdbg_init,
+ 			for (i = 0; i < urxq->nciq; i++)
+ 				QDESC_GET_RXQ(&urxq->uldrxq[base + i].rspq,
+ 					      cudbg_uld_ciq_to_qtype(j),
+-					      out_unlock);
++					      out_unlock_uld);
+ 		}
+ 	}
++	mutex_unlock(&uld_mutex);
++
++	if (!padap->tc_mqprio)
++		goto out;
+ 
++	mutex_lock(&padap->tc_mqprio->mqprio_mutex);
+ 	/* ETHOFLD TXQ */
+ 	if (s->eohw_txq)
+ 		for (i = 0; i < s->eoqsets; i++)
+ 			QDESC_GET_TXQ(&s->eohw_txq[i].q,
+-				      CUDBG_QTYPE_ETHOFLD_TXQ, out);
++				      CUDBG_QTYPE_ETHOFLD_TXQ, out_unlock_mqprio);
+ 
+ 	/* ETHOFLD RXQ and FLQ */
+ 	if (s->eohw_rxq) {
+ 		for (i = 0; i < s->eoqsets; i++)
+ 			QDESC_GET_RXQ(&s->eohw_rxq[i].rspq,
+-				      CUDBG_QTYPE_ETHOFLD_RXQ, out);
++				      CUDBG_QTYPE_ETHOFLD_RXQ, out_unlock_mqprio);
+ 
+ 		for (i = 0; i < s->eoqsets; i++)
+ 			QDESC_GET_FLQ(&s->eohw_rxq[i].fl,
+-				      CUDBG_QTYPE_ETHOFLD_FLQ, out);
++				      CUDBG_QTYPE_ETHOFLD_FLQ, out_unlock_mqprio);
+ 	}
+ 
+-out_unlock:
+-	mutex_unlock(&uld_mutex);
++out_unlock_mqprio:
++	mutex_unlock(&padap->tc_mqprio->mqprio_mutex);
+ 
+ out:
+ 	qdesc_info->qdesc_entry_size = sizeof(*qdesc_entry);
+@@ -3578,6 +3584,10 @@ out_free:
+ #undef QDESC_GET
+ 
+ 	return rc;
++
++out_unlock_uld:
++	mutex_unlock(&uld_mutex);
++	goto out;
+ }
+ 
+ int cudbg_collect_flash(struct cudbg_init *pdbg_init,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 27b7bb64a0281..41e71a26b1ade 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2907,6 +2907,15 @@ static int stmmac_open(struct net_device *dev)
+ 		goto init_error;
+ 	}
+ 
++	if (priv->plat->serdes_powerup) {
++		ret = priv->plat->serdes_powerup(dev, priv->plat->bsp_priv);
++		if (ret < 0) {
++			netdev_err(priv->dev, "%s: Serdes powerup failed\n",
++				   __func__);
++			goto init_error;
++		}
++	}
++
+ 	ret = stmmac_hw_setup(dev, true);
+ 	if (ret < 0) {
+ 		netdev_err(priv->dev, "%s: Hw setup failed\n", __func__);
+@@ -3022,6 +3031,10 @@ static int stmmac_release(struct net_device *dev)
+ 	/* Disable the MAC Rx/Tx */
+ 	stmmac_mac_set(priv, priv->ioaddr, false);
+ 
++	/* Powerdown Serdes if there is */
++	if (priv->plat->serdes_powerdown)
++		priv->plat->serdes_powerdown(dev, priv->plat->bsp_priv);
++
+ 	netif_carrier_off(dev);
+ 
+ 	stmmac_release_ptp(priv);
+@@ -5178,14 +5191,6 @@ int stmmac_dvr_probe(struct device *device,
+ 		goto error_netdev_register;
+ 	}
+ 
+-	if (priv->plat->serdes_powerup) {
+-		ret = priv->plat->serdes_powerup(ndev,
+-						 priv->plat->bsp_priv);
+-
+-		if (ret < 0)
+-			goto error_serdes_powerup;
+-	}
+-
+ #ifdef CONFIG_DEBUG_FS
+ 	stmmac_init_fs(ndev);
+ #endif
+@@ -5197,8 +5202,6 @@ int stmmac_dvr_probe(struct device *device,
+ 
+ 	return ret;
+ 
+-error_serdes_powerup:
+-	unregister_netdev(ndev);
+ error_netdev_register:
+ 	phylink_destroy(priv->phylink);
+ error_phy_setup:
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 1502069f3a4e2..a1c9233e264d9 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1332,6 +1332,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x413c, 0x81b3, 8)},	/* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
+ 	{QMI_FIXED_INTF(0x413c, 0x81b6, 8)},	/* Dell Wireless 5811e */
+ 	{QMI_FIXED_INTF(0x413c, 0x81b6, 10)},	/* Dell Wireless 5811e */
++	{QMI_FIXED_INTF(0x413c, 0x81c2, 8)},	/* Dell Wireless 5811e */
+ 	{QMI_FIXED_INTF(0x413c, 0x81cc, 8)},	/* Dell Wireless 5816e */
+ 	{QMI_FIXED_INTF(0x413c, 0x81d7, 0)},	/* Dell Wireless 5821e */
+ 	{QMI_FIXED_INTF(0x413c, 0x81d7, 1)},	/* Dell Wireless 5821e preproduction config */
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 1239fd57514bb..43d70348343b2 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1567,6 +1567,7 @@ void usbnet_disconnect (struct usb_interface *intf)
+ 	struct usbnet		*dev;
+ 	struct usb_device	*xdev;
+ 	struct net_device	*net;
++	struct urb		*urb;
+ 
+ 	dev = usb_get_intfdata(intf);
+ 	usb_set_intfdata(intf, NULL);
+@@ -1583,7 +1584,11 @@ void usbnet_disconnect (struct usb_interface *intf)
+ 	net = dev->net;
+ 	unregister_netdev (net);
+ 
+-	usb_scuttle_anchored_urbs(&dev->deferred);
++	while ((urb = usb_get_from_anchor(&dev->deferred))) {
++		dev_kfree_skb(urb->context);
++		kfree(urb->sg);
++		usb_free_urb(urb);
++	}
+ 
+ 	if (dev->driver_info->unbind)
+ 		dev->driver_info->unbind (dev, intf);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index ab060b4911ffd..265d9199b657f 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2285,18 +2285,21 @@ static int nvme_pr_preempt(struct block_device *bdev, u64 old, u64 new,
+ 		enum pr_type type, bool abort)
+ {
+ 	u32 cdw10 = nvme_pr_type(type) << 8 | (abort ? 2 : 1);
++
+ 	return nvme_pr_command(bdev, cdw10, old, new, nvme_cmd_resv_acquire);
+ }
+ 
+ static int nvme_pr_clear(struct block_device *bdev, u64 key)
+ {
+-	u32 cdw10 = 1 | (key ? 1 << 3 : 0);
+-	return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_register);
++	u32 cdw10 = 1 | (key ? 0 : 1 << 3);
++
++	return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release);
+ }
+ 
+ static int nvme_pr_release(struct block_device *bdev, u64 key, enum pr_type type)
+ {
+-	u32 cdw10 = nvme_pr_type(type) << 8 | (key ? 1 << 3 : 0);
++	u32 cdw10 = nvme_pr_type(type) << 8 | (key ? 0 : 1 << 3);
++
+ 	return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release);
+ }
+ 
+diff --git a/drivers/reset/reset-imx7.c b/drivers/reset/reset-imx7.c
+index 185a333df66c5..d2408725eb2c3 100644
+--- a/drivers/reset/reset-imx7.c
++++ b/drivers/reset/reset-imx7.c
+@@ -329,6 +329,7 @@ static int imx8mp_reset_set(struct reset_controller_dev *rcdev,
+ 		break;
+ 
+ 	case IMX8MP_RESET_PCIE_CTRL_APPS_EN:
++	case IMX8MP_RESET_PCIEPHY_PERST:
+ 		value = assert ? 0 : bit;
+ 		break;
+ 	}
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index dfe7e6370d84f..cd41dc061d874 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2738,7 +2738,6 @@ static int slave_configure_v3_hw(struct scsi_device *sdev)
+ 	struct hisi_hba *hisi_hba = shost_priv(shost);
+ 	struct device *dev = hisi_hba->dev;
+ 	int ret = sas_slave_configure(sdev);
+-	unsigned int max_sectors;
+ 
+ 	if (ret)
+ 		return ret;
+@@ -2756,12 +2755,6 @@ static int slave_configure_v3_hw(struct scsi_device *sdev)
+ 		}
+ 	}
+ 
+-	/* Set according to IOMMU IOVA caching limit */
+-	max_sectors = min_t(size_t, queue_max_hw_sectors(sdev->request_queue),
+-			    (PAGE_SIZE * 32) >> SECTOR_SHIFT);
+-
+-	blk_queue_max_hw_sectors(sdev->request_queue, max_sectors);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/soc/sunxi/sunxi_sram.c b/drivers/soc/sunxi/sunxi_sram.c
+index d4c7bd59429ec..443e38e9f30a2 100644
+--- a/drivers/soc/sunxi/sunxi_sram.c
++++ b/drivers/soc/sunxi/sunxi_sram.c
+@@ -78,8 +78,8 @@ static struct sunxi_sram_desc sun4i_a10_sram_d = {
+ 
+ static struct sunxi_sram_desc sun50i_a64_sram_c = {
+ 	.data	= SUNXI_SRAM_DATA("C", 0x4, 24, 1,
+-				  SUNXI_SRAM_MAP(0, 1, "cpu"),
+-				  SUNXI_SRAM_MAP(1, 0, "de2")),
++				  SUNXI_SRAM_MAP(1, 0, "cpu"),
++				  SUNXI_SRAM_MAP(0, 1, "de2")),
+ };
+ 
+ static const struct of_device_id sunxi_sram_dt_ids[] = {
+@@ -254,6 +254,7 @@ int sunxi_sram_claim(struct device *dev)
+ 	writel(val | ((device << sram_data->offset) & mask),
+ 	       base + sram_data->reg);
+ 
++	sram_desc->claimed = true;
+ 	spin_unlock(&sram_lock);
+ 
+ 	return 0;
+@@ -318,12 +319,11 @@ static struct regmap_config sunxi_sram_emac_clock_regmap = {
+ 	.writeable_reg	= sunxi_sram_regmap_accessible_reg,
+ };
+ 
+-static int sunxi_sram_probe(struct platform_device *pdev)
++static int __init sunxi_sram_probe(struct platform_device *pdev)
+ {
+-	struct resource *res;
+-	struct dentry *d;
+ 	struct regmap *emac_clock;
+ 	const struct sunxi_sramc_variant *variant;
++	struct device *dev = &pdev->dev;
+ 
+ 	sram_dev = &pdev->dev;
+ 
+@@ -331,18 +331,10 @@ static int sunxi_sram_probe(struct platform_device *pdev)
+ 	if (!variant)
+ 		return -EINVAL;
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	base = devm_ioremap_resource(&pdev->dev, res);
++	base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(base))
+ 		return PTR_ERR(base);
+ 
+-	of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
+-
+-	d = debugfs_create_file("sram", S_IRUGO, NULL, NULL,
+-				&sunxi_sram_fops);
+-	if (!d)
+-		return -ENOMEM;
+-
+ 	if (variant->has_emac_clock) {
+ 		emac_clock = devm_regmap_init_mmio(&pdev->dev, base,
+ 						   &sunxi_sram_emac_clock_regmap);
+@@ -351,6 +343,10 @@ static int sunxi_sram_probe(struct platform_device *pdev)
+ 			return PTR_ERR(emac_clock);
+ 	}
+ 
++	of_platform_populate(dev->of_node, NULL, NULL, dev);
++
++	debugfs_create_file("sram", 0444, NULL, NULL, &sunxi_sram_fops);
++
+ 	return 0;
+ }
+ 
+@@ -396,9 +392,8 @@ static struct platform_driver sunxi_sram_driver = {
+ 		.name		= "sunxi-sram",
+ 		.of_match_table	= sunxi_sram_dt_match,
+ 	},
+-	.probe	= sunxi_sram_probe,
+ };
+-module_platform_driver(sunxi_sram_driver);
++builtin_platform_driver_probe(sunxi_sram_driver, sunxi_sram_probe);
+ 
+ MODULE_AUTHOR("Maxime Ripard <maxime.ripard@free-electrons.com>");
+ MODULE_DESCRIPTION("Allwinner sunXi SRAM Controller Driver");
+diff --git a/drivers/staging/media/rkvdec/rkvdec-h264.c b/drivers/staging/media/rkvdec/rkvdec-h264.c
+index 7013f7ce36781..ddccd97a359fa 100644
+--- a/drivers/staging/media/rkvdec/rkvdec-h264.c
++++ b/drivers/staging/media/rkvdec/rkvdec-h264.c
+@@ -1124,8 +1124,8 @@ static int rkvdec_h264_run(struct rkvdec_ctx *ctx)
+ 
+ 	schedule_delayed_work(&rkvdec->watchdog_work, msecs_to_jiffies(2000));
+ 
+-	writel(0xffffffff, rkvdec->regs + RKVDEC_REG_STRMD_ERR_EN);
+-	writel(0xffffffff, rkvdec->regs + RKVDEC_REG_H264_ERR_E);
++	writel(0, rkvdec->regs + RKVDEC_REG_STRMD_ERR_EN);
++	writel(0, rkvdec->regs + RKVDEC_REG_H264_ERR_E);
+ 	writel(1, rkvdec->regs + RKVDEC_REG_PREF_LUMA_CACHE_COMMAND);
+ 	writel(1, rkvdec->regs + RKVDEC_REG_PREF_CHR_CACHE_COMMAND);
+ 
+diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
+index 82c46b200c347..b2fb3397310e4 100644
+--- a/drivers/thunderbolt/icm.c
++++ b/drivers/thunderbolt/icm.c
+@@ -2300,6 +2300,18 @@ struct tb *icm_probe(struct tb_nhi *nhi)
+ 		icm->rtd3_veto = icm_icl_rtd3_veto;
+ 		tb->cm_ops = &icm_icl_ops;
+ 		break;
++
++	case PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_2C_NHI:
++	case PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_4C_NHI:
++		icm->is_supported = icm_tgl_is_supported;
++		icm->get_mode = icm_ar_get_mode;
++		icm->driver_ready = icm_tr_driver_ready;
++		icm->device_connected = icm_tr_device_connected;
++		icm->device_disconnected = icm_tr_device_disconnected;
++		icm->xdomain_connected = icm_tr_xdomain_connected;
++		icm->xdomain_disconnected = icm_tr_xdomain_disconnected;
++		tb->cm_ops = &icm_tr_ops;
++		break;
+ 	}
+ 
+ 	if (!icm->is_supported || !icm->is_supported(tb)) {
+diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
+index 4e0861d750720..7ad6d3f0583b3 100644
+--- a/drivers/thunderbolt/nhi.h
++++ b/drivers/thunderbolt/nhi.h
+@@ -55,6 +55,8 @@ extern const struct tb_nhi_ops icl_nhi_ops;
+  * need for the PCI quirk anymore as we will use ICM also on Apple
+  * hardware.
+  */
++#define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_2C_NHI		0x1134
++#define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_4C_NHI		0x1137
+ #define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_NHI            0x157d
+ #define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_BRIDGE         0x157e
+ #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI		0x15bf
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index c4b157c29af7a..65f99d744654f 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -2046,6 +2046,7 @@ int tb_switch_configure(struct tb_switch *sw)
+ 		 * additional capabilities.
+ 		 */
+ 		sw->config.cmuv = USB4_VERSION_1_0;
++		sw->config.plug_events_delay = 0xa;
+ 
+ 		/* Enumerate the switch */
+ 		ret = tb_sw_write(sw, (u32 *)&sw->config + 1, TB_CFG_SWITCH,
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index 23ab3b048d9be..251778d14e2dd 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -52,6 +52,13 @@ UNUSUAL_DEV(0x059f, 0x1061, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME),
+ 
++/* Reported-by: Hongling Zeng <zenghongling@kylinos.cn> */
++UNUSUAL_DEV(0x090c, 0x2000, 0x0000, 0x9999,
++		"Hiksemi",
++		"External HDD",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_IGNORE_UAS),
++
+ /*
+  * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
+  * commands in UAS mode.  Observed with the 1.28 firmware; are there others?
+@@ -76,6 +83,13 @@ UNUSUAL_DEV(0x0bc2, 0x331a, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_NO_REPORT_LUNS),
+ 
++/* Reported-by: Hongling Zeng <zenghongling@kylinos.cn> */
++UNUSUAL_DEV(0x0bda, 0x9210, 0x0000, 0x9999,
++		"Hiksemi",
++		"External HDD",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_IGNORE_UAS),
++
+ /* Reported-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> */
+ UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x9999,
+ 		"Initio Corporation",
+@@ -118,6 +132,13 @@ UNUSUAL_DEV(0x154b, 0xf00d, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_NO_ATA_1X),
+ 
++/* Reported-by: Hongling Zeng <zenghongling@kylinos.cn> */
++UNUSUAL_DEV(0x17ef, 0x3899, 0x0000, 0x9999,
++		"Thinkplus",
++		"External HDD",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_IGNORE_UAS),
++
+ /* Reported-by: Hans de Goede <hdegoede@redhat.com> */
+ UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999,
+ 		"VIA",
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 0c16e99807365..4cd5c291cdf38 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -515,8 +515,6 @@ static int ucsi_get_pdos(struct ucsi_connector *con, int is_partner,
+ 				num_pdos * sizeof(u32));
+ 	if (ret < 0)
+ 		dev_err(ucsi->dev, "UCSI_GET_PDOS failed (%d)\n", ret);
+-	if (ret == 0 && offset == 0)
+-		dev_warn(ucsi->dev, "UCSI_GET_PDOS returned 0 bytes\n");
+ 
+ 	return ret;
+ }
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 2c7e50980a706..f2abd8bfd4a0f 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4105,6 +4105,31 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ 	/* clear out the rbtree of defraggable inodes */
+ 	btrfs_cleanup_defrag_inodes(fs_info);
+ 
++	/*
++	 * After we parked the cleaner kthread, ordered extents may have
++	 * completed and created new delayed iputs. If one of the async reclaim
++	 * tasks is running and in the RUN_DELAYED_IPUTS flush state, then we
++	 * can hang forever trying to stop it, because if a delayed iput is
++	 * added after it ran btrfs_run_delayed_iputs() and before it called
++	 * btrfs_wait_on_delayed_iputs(), it will hang forever since there is
++	 * no one else to run iputs.
++	 *
++	 * So wait for all ongoing ordered extents to complete and then run
++	 * delayed iputs. This works because once we reach this point no one
++	 * can either create new ordered extents nor create delayed iputs
++	 * through some other means.
++	 *
++	 * Also note that btrfs_wait_ordered_roots() is not safe here, because
++	 * it waits for BTRFS_ORDERED_COMPLETE to be set on an ordered extent,
++	 * but the delayed iput for the respective inode is made only when doing
++	 * the final btrfs_put_ordered_extent() (which must happen at
++	 * btrfs_finish_ordered_io() when we are unmounting).
++	 */
++	btrfs_flush_workqueue(fs_info->endio_write_workers);
++	/* Ordered extents for free space inodes. */
++	btrfs_flush_workqueue(fs_info->endio_freespace_worker);
++	btrfs_run_delayed_iputs(fs_info);
++
+ 	cancel_work_sync(&fs_info->async_reclaim_work);
+ 	cancel_work_sync(&fs_info->async_data_reclaim_work);
+ 
+diff --git a/fs/ntfs/super.c b/fs/ntfs/super.c
+index 0d7e948cb29c9..7f69422d5191d 100644
+--- a/fs/ntfs/super.c
++++ b/fs/ntfs/super.c
+@@ -2092,7 +2092,8 @@ get_ctx_vol_failed:
+ 	// TODO: Initialize security.
+ 	/* Get the extended system files' directory inode. */
+ 	vol->extend_ino = ntfs_iget(sb, FILE_Extend);
+-	if (IS_ERR(vol->extend_ino) || is_bad_inode(vol->extend_ino)) {
++	if (IS_ERR(vol->extend_ino) || is_bad_inode(vol->extend_ino) ||
++	    !S_ISDIR(vol->extend_ino->i_mode)) {
+ 		if (!IS_ERR(vol->extend_ino))
+ 			iput(vol->extend_ino);
+ 		ntfs_error(sb, "Failed to load $Extend.");
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 4a9831d01f0ea..d897d161366a4 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -734,7 +734,18 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
+ 
+ size_t swiotlb_max_mapping_size(struct device *dev)
+ {
+-	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
++	int min_align_mask = dma_get_min_align_mask(dev);
++	int min_align = 0;
++
++	/*
++	 * swiotlb_find_slots() skips slots according to
++	 * min align mask. This affects max mapping size.
++	 * Take it into acount here.
++	 */
++	if (min_align_mask)
++		min_align = roundup(min_align_mask, IO_TLB_SIZE);
++
++	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE - min_align;
+ }
+ 
+ bool is_swiotlb_active(void)
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 77e1dc2d4e186..f71fc88f0b331 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -434,8 +434,11 @@ regular_page:
+ 			continue;
+ 		}
+ 
+-		/* Do not interfere with other mappings of this page */
+-		if (page_mapcount(page) != 1)
++		/*
++		 * Do not interfere with other mappings of this page and
++		 * non-LRU page.
++		 */
++		if (!PageLRU(page) || page_mapcount(page) != 1)
+ 			continue;
+ 
+ 		VM_BUG_ON_PAGE(PageTransCompound(page), page);
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 495bdac5cf929..b716b8fa2c3ff 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -2473,13 +2473,14 @@ next:
+ 		migrate->dst[migrate->npages] = 0;
+ 		migrate->src[migrate->npages++] = mpfn;
+ 	}
+-	arch_leave_lazy_mmu_mode();
+-	pte_unmap_unlock(ptep - 1, ptl);
+ 
+ 	/* Only flush the TLB if we actually modified any entries */
+ 	if (unmapped)
+ 		flush_tlb_range(walk->vma, start, end);
+ 
++	arch_leave_lazy_mmu_mode();
++	pte_unmap_unlock(ptep - 1, ptl);
++
+ 	return 0;
+ }
+ 
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 43ff22ce76324..a56f2b9df5a01 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -4322,6 +4322,30 @@ void fs_reclaim_release(gfp_t gfp_mask)
+ EXPORT_SYMBOL_GPL(fs_reclaim_release);
+ #endif
+ 
++/*
++ * Zonelists may change due to hotplug during allocation. Detect when zonelists
++ * have been rebuilt so allocation retries. Reader side does not lock and
++ * retries the allocation if zonelist changes. Writer side is protected by the
++ * embedded spin_lock.
++ */
++static DEFINE_SEQLOCK(zonelist_update_seq);
++
++static unsigned int zonelist_iter_begin(void)
++{
++	if (IS_ENABLED(CONFIG_MEMORY_HOTREMOVE))
++		return read_seqbegin(&zonelist_update_seq);
++
++	return 0;
++}
++
++static unsigned int check_retry_zonelist(unsigned int seq)
++{
++	if (IS_ENABLED(CONFIG_MEMORY_HOTREMOVE))
++		return read_seqretry(&zonelist_update_seq, seq);
++
++	return seq;
++}
++
+ /* Perform direct synchronous page reclaim */
+ static unsigned long
+ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
+@@ -4629,6 +4653,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ 	int compaction_retries;
+ 	int no_progress_loops;
+ 	unsigned int cpuset_mems_cookie;
++	unsigned int zonelist_iter_cookie;
+ 	int reserve_flags;
+ 
+ 	/*
+@@ -4639,11 +4664,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ 				(__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)))
+ 		gfp_mask &= ~__GFP_ATOMIC;
+ 
+-retry_cpuset:
++restart:
+ 	compaction_retries = 0;
+ 	no_progress_loops = 0;
+ 	compact_priority = DEF_COMPACT_PRIORITY;
+ 	cpuset_mems_cookie = read_mems_allowed_begin();
++	zonelist_iter_cookie = zonelist_iter_begin();
+ 
+ 	/*
+ 	 * The fast path uses conservative alloc_flags to succeed only until
+@@ -4802,9 +4828,13 @@ retry:
+ 		goto retry;
+ 
+ 
+-	/* Deal with possible cpuset update races before we start OOM killing */
+-	if (check_retry_cpuset(cpuset_mems_cookie, ac))
+-		goto retry_cpuset;
++	/*
++	 * Deal with possible cpuset update races or zonelist updates to avoid
++	 * a unnecessary OOM kill.
++	 */
++	if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
++	    check_retry_zonelist(zonelist_iter_cookie))
++		goto restart;
+ 
+ 	/* Reclaim has failed us, start killing things */
+ 	page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
+@@ -4824,9 +4854,13 @@ retry:
+ 	}
+ 
+ nopage:
+-	/* Deal with possible cpuset update races before we fail */
+-	if (check_retry_cpuset(cpuset_mems_cookie, ac))
+-		goto retry_cpuset;
++	/*
++	 * Deal with possible cpuset update races or zonelist updates to avoid
++	 * a unnecessary OOM kill.
++	 */
++	if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
++	    check_retry_zonelist(zonelist_iter_cookie))
++		goto restart;
+ 
+ 	/*
+ 	 * Make sure that __GFP_NOFAIL request doesn't leak out and make sure
+@@ -5129,6 +5163,18 @@ refill:
+ 		/* reset page count bias and offset to start of new frag */
+ 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+ 		offset = size - fragsz;
++		if (unlikely(offset < 0)) {
++			/*
++			 * The caller is trying to allocate a fragment
++			 * with fragsz > PAGE_SIZE but the cache isn't big
++			 * enough to satisfy the request, this may
++			 * happen in low memory conditions.
++			 * We don't release the cache page because
++			 * it could make memory pressure worse
++			 * so we simply return NULL here.
++			 */
++			return NULL;
++		}
+ 	}
+ 
+ 	nc->pagecnt_bias--;
+@@ -5924,9 +5970,8 @@ static void __build_all_zonelists(void *data)
+ 	int nid;
+ 	int __maybe_unused cpu;
+ 	pg_data_t *self = data;
+-	static DEFINE_SPINLOCK(lock);
+ 
+-	spin_lock(&lock);
++	write_seqlock(&zonelist_update_seq);
+ 
+ #ifdef CONFIG_NUMA
+ 	memset(node_load, 0, sizeof(node_load));
+@@ -5959,7 +6004,7 @@ static void __build_all_zonelists(void *data)
+ #endif
+ 	}
+ 
+-	spin_unlock(&lock);
++	write_sequnlock(&zonelist_update_seq);
+ }
+ 
+ static noinline void __init
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 825b3e9b55f7e..f7e88d7466c30 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -1293,7 +1293,7 @@ static int tcf_ct_init(struct net *net, struct nlattr *nla,
+ 
+ 	err = tcf_ct_flow_table_get(params);
+ 	if (err)
+-		goto cleanup;
++		goto cleanup_params;
+ 
+ 	spin_lock_bh(&c->tcf_lock);
+ 	goto_ch = tcf_action_set_ctrlact(*a, parm->action, goto_ch);
+@@ -1308,6 +1308,9 @@ static int tcf_ct_init(struct net *net, struct nlattr *nla,
+ 
+ 	return res;
+ 
++cleanup_params:
++	if (params->tmpl)
++		nf_ct_put(params->tmpl);
+ cleanup:
+ 	if (goto_ch)
+ 		tcf_chain_put_by_act(goto_ch);
+diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c
+index 1e44e337986e8..17b06f7b69ee6 100644
+--- a/sound/pci/hda/hda_tegra.c
++++ b/sound/pci/hda/hda_tegra.c
+@@ -17,6 +17,7 @@
+ #include <linux/moduleparam.h>
+ #include <linux/mutex.h>
+ #include <linux/of_device.h>
++#include <linux/reset.h>
+ #include <linux/slab.h>
+ #include <linux/time.h>
+ #include <linux/string.h>
+@@ -70,9 +71,9 @@
+ struct hda_tegra {
+ 	struct azx chip;
+ 	struct device *dev;
+-	struct clk *hda_clk;
+-	struct clk *hda2codec_2x_clk;
+-	struct clk *hda2hdmi_clk;
++	struct reset_control *reset;
++	struct clk_bulk_data clocks[3];
++	unsigned int nclocks;
+ 	void __iomem *regs;
+ 	struct work_struct probe_work;
+ };
+@@ -113,36 +114,6 @@ static void hda_tegra_init(struct hda_tegra *hda)
+ 	writel(v, hda->regs + HDA_IPFS_INTR_MASK);
+ }
+ 
+-static int hda_tegra_enable_clocks(struct hda_tegra *data)
+-{
+-	int rc;
+-
+-	rc = clk_prepare_enable(data->hda_clk);
+-	if (rc)
+-		return rc;
+-	rc = clk_prepare_enable(data->hda2codec_2x_clk);
+-	if (rc)
+-		goto disable_hda;
+-	rc = clk_prepare_enable(data->hda2hdmi_clk);
+-	if (rc)
+-		goto disable_codec_2x;
+-
+-	return 0;
+-
+-disable_codec_2x:
+-	clk_disable_unprepare(data->hda2codec_2x_clk);
+-disable_hda:
+-	clk_disable_unprepare(data->hda_clk);
+-	return rc;
+-}
+-
+-static void hda_tegra_disable_clocks(struct hda_tegra *data)
+-{
+-	clk_disable_unprepare(data->hda2hdmi_clk);
+-	clk_disable_unprepare(data->hda2codec_2x_clk);
+-	clk_disable_unprepare(data->hda_clk);
+-}
+-
+ /*
+  * power management
+  */
+@@ -186,7 +157,7 @@ static int __maybe_unused hda_tegra_runtime_suspend(struct device *dev)
+ 		azx_stop_chip(chip);
+ 		azx_enter_link_reset(chip);
+ 	}
+-	hda_tegra_disable_clocks(hda);
++	clk_bulk_disable_unprepare(hda->nclocks, hda->clocks);
+ 
+ 	return 0;
+ }
+@@ -198,7 +169,13 @@ static int __maybe_unused hda_tegra_runtime_resume(struct device *dev)
+ 	struct hda_tegra *hda = container_of(chip, struct hda_tegra, chip);
+ 	int rc;
+ 
+-	rc = hda_tegra_enable_clocks(hda);
++	if (!chip->running) {
++		rc = reset_control_assert(hda->reset);
++		if (rc)
++			return rc;
++	}
++
++	rc = clk_bulk_prepare_enable(hda->nclocks, hda->clocks);
+ 	if (rc != 0)
+ 		return rc;
+ 	if (chip && chip->running) {
+@@ -207,6 +184,12 @@ static int __maybe_unused hda_tegra_runtime_resume(struct device *dev)
+ 		/* disable controller wake up event*/
+ 		azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) &
+ 			   ~STATESTS_INT_MASK);
++	} else {
++		usleep_range(10, 100);
++
++		rc = reset_control_deassert(hda->reset);
++		if (rc)
++			return rc;
+ 	}
+ 
+ 	return 0;
+@@ -268,29 +251,6 @@ static int hda_tegra_init_chip(struct azx *chip, struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static int hda_tegra_init_clk(struct hda_tegra *hda)
+-{
+-	struct device *dev = hda->dev;
+-
+-	hda->hda_clk = devm_clk_get(dev, "hda");
+-	if (IS_ERR(hda->hda_clk)) {
+-		dev_err(dev, "failed to get hda clock\n");
+-		return PTR_ERR(hda->hda_clk);
+-	}
+-	hda->hda2codec_2x_clk = devm_clk_get(dev, "hda2codec_2x");
+-	if (IS_ERR(hda->hda2codec_2x_clk)) {
+-		dev_err(dev, "failed to get hda2codec_2x clock\n");
+-		return PTR_ERR(hda->hda2codec_2x_clk);
+-	}
+-	hda->hda2hdmi_clk = devm_clk_get(dev, "hda2hdmi");
+-	if (IS_ERR(hda->hda2hdmi_clk)) {
+-		dev_err(dev, "failed to get hda2hdmi clock\n");
+-		return PTR_ERR(hda->hda2hdmi_clk);
+-	}
+-
+-	return 0;
+-}
+-
+ static int hda_tegra_first_init(struct azx *chip, struct platform_device *pdev)
+ {
+ 	struct hda_tegra *hda = container_of(chip, struct hda_tegra, chip);
+@@ -499,7 +459,17 @@ static int hda_tegra_probe(struct platform_device *pdev)
+ 		return err;
+ 	}
+ 
+-	err = hda_tegra_init_clk(hda);
++	hda->reset = devm_reset_control_array_get_exclusive(&pdev->dev);
++	if (IS_ERR(hda->reset)) {
++		err = PTR_ERR(hda->reset);
++		goto out_free;
++	}
++
++	hda->clocks[hda->nclocks++].id = "hda";
++	hda->clocks[hda->nclocks++].id = "hda2hdmi";
++	hda->clocks[hda->nclocks++].id = "hda2codec_2x";
++
++	err = devm_clk_bulk_get(&pdev->dev, hda->nclocks, hda->clocks);
+ 	if (err < 0)
+ 		goto out_free;
+ 
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 7551cdf3b4529..e6f261e8c5ae7 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -157,6 +157,9 @@ struct hdmi_spec {
+ 
+ 	bool dyn_pin_out;
+ 	bool dyn_pcm_assign;
++	bool dyn_pcm_no_legacy;
++	bool nv_dp_workaround; /* workaround DP audio infoframe for Nvidia */
++
+ 	bool intel_hsw_fixup;	/* apply Intel platform-specific fixups */
+ 	/*
+ 	 * Non-generic VIA/NVIDIA specific
+@@ -666,15 +669,24 @@ static void hdmi_pin_setup_infoframe(struct hda_codec *codec,
+ 				     int ca, int active_channels,
+ 				     int conn_type)
+ {
++	struct hdmi_spec *spec = codec->spec;
+ 	union audio_infoframe ai;
+ 
+ 	memset(&ai, 0, sizeof(ai));
+-	if (conn_type == 0) { /* HDMI */
++	if ((conn_type == 0) || /* HDMI */
++		/* Nvidia DisplayPort: Nvidia HW expects same layout as HDMI */
++		(conn_type == 1 && spec->nv_dp_workaround)) {
+ 		struct hdmi_audio_infoframe *hdmi_ai = &ai.hdmi;
+ 
+-		hdmi_ai->type		= 0x84;
+-		hdmi_ai->ver		= 0x01;
+-		hdmi_ai->len		= 0x0a;
++		if (conn_type == 0) { /* HDMI */
++			hdmi_ai->type		= 0x84;
++			hdmi_ai->ver		= 0x01;
++			hdmi_ai->len		= 0x0a;
++		} else {/* Nvidia DP */
++			hdmi_ai->type		= 0x84;
++			hdmi_ai->ver		= 0x1b;
++			hdmi_ai->len		= 0x11 << 2;
++		}
+ 		hdmi_ai->CC02_CT47	= active_channels - 1;
+ 		hdmi_ai->CA		= ca;
+ 		hdmi_checksum_audio_infoframe(hdmi_ai);
+@@ -1348,6 +1360,12 @@ static int hdmi_find_pcm_slot(struct hdmi_spec *spec,
+ {
+ 	int i;
+ 
++	/* on the new machines, try to assign the pcm slot dynamically,
++	 * not use the preferred fixed map (legacy way) anymore.
++	 */
++	if (spec->dyn_pcm_no_legacy)
++		goto last_try;
++
+ 	/*
+ 	 * generic_hdmi_build_pcms() may allocate extra PCMs on some
+ 	 * platforms (with maximum of 'num_nids + dev_num - 1')
+@@ -1377,8 +1395,9 @@ static int hdmi_find_pcm_slot(struct hdmi_spec *spec,
+ 			return i;
+ 	}
+ 
++ last_try:
+ 	/* the last try; check the empty slots in pins */
+-	for (i = 0; i < spec->num_nids; i++) {
++	for (i = 0; i < spec->pcm_used; i++) {
+ 		if (!test_bit(i, &spec->pcm_bitmap))
+ 			return i;
+ 	}
+@@ -2254,7 +2273,9 @@ static int generic_hdmi_build_pcms(struct hda_codec *codec)
+ 	 * dev_num is the device entry number in a pin
+ 	 */
+ 
+-	if (codec->mst_no_extra_pcms)
++	if (spec->dyn_pcm_no_legacy && codec->mst_no_extra_pcms)
++		pcm_num = spec->num_cvts;
++	else if (codec->mst_no_extra_pcms)
+ 		pcm_num = spec->num_nids;
+ 	else
+ 		pcm_num = spec->num_nids + spec->dev_num - 1;
+@@ -3010,8 +3031,16 @@ static int patch_i915_tgl_hdmi(struct hda_codec *codec)
+ 	 * the index indicate the port number.
+ 	 */
+ 	static const int map[] = {0x4, 0x6, 0x8, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf};
++	int ret;
+ 
+-	return intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map));
++	ret = intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map));
++	if (!ret) {
++		struct hdmi_spec *spec = codec->spec;
++
++		spec->dyn_pcm_no_legacy = true;
++	}
++
++	return ret;
+ }
+ 
+ /* Intel Baytrail and Braswell; with eld notifier */
+@@ -3510,6 +3539,7 @@ static int patch_nvhdmi_2ch(struct hda_codec *codec)
+ 	spec->pcm_playback.rates = SUPPORTED_RATES;
+ 	spec->pcm_playback.maxbps = SUPPORTED_MAXBPS;
+ 	spec->pcm_playback.formats = SUPPORTED_FORMATS;
++	spec->nv_dp_workaround = true;
+ 	return 0;
+ }
+ 
+@@ -3649,6 +3679,7 @@ static int patch_nvhdmi(struct hda_codec *codec)
+ 	spec->chmap.ops.chmap_cea_alloc_validate_get_type =
+ 		nvhdmi_chmap_cea_alloc_validate_get_type;
+ 	spec->chmap.ops.chmap_validate = nvhdmi_chmap_validate;
++	spec->nv_dp_workaround = true;
+ 
+ 	codec->link_down_at_suspend = 1;
+ 
+@@ -3672,6 +3703,7 @@ static int patch_nvhdmi_legacy(struct hda_codec *codec)
+ 	spec->chmap.ops.chmap_cea_alloc_validate_get_type =
+ 		nvhdmi_chmap_cea_alloc_validate_get_type;
+ 	spec->chmap.ops.chmap_validate = nvhdmi_chmap_validate;
++	spec->nv_dp_workaround = true;
+ 
+ 	codec->link_down_at_suspend = 1;
+ 
+@@ -3845,6 +3877,7 @@ static int patch_tegra_hdmi(struct hda_codec *codec)
+ 	spec->chmap.ops.chmap_cea_alloc_validate_get_type =
+ 		nvhdmi_chmap_cea_alloc_validate_get_type;
+ 	spec->chmap.ops.chmap_validate = nvhdmi_chmap_validate;
++	spec->nv_dp_workaround = true;
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index 024ec68e8d356..171bbcc919d55 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -495,6 +495,8 @@ static struct snd_soc_dai_driver tas2770_dai_driver[] = {
+ 	},
+ };
+ 
++static const struct regmap_config tas2770_i2c_regmap;
++
+ static int tas2770_codec_probe(struct snd_soc_component *component)
+ {
+ 	struct tas2770_priv *tas2770 =
+@@ -508,6 +510,7 @@ static int tas2770_codec_probe(struct snd_soc_component *component)
+ 	}
+ 
+ 	tas2770_reset(tas2770);
++	regmap_reinit_cache(tas2770->regmap, &tas2770_i2c_regmap);
+ 
+ 	return 0;
+ }
+diff --git a/tools/testing/selftests/net/reuseport_bpf.c b/tools/testing/selftests/net/reuseport_bpf.c
+index b5277106df1fd..b0cc082fbb84f 100644
+--- a/tools/testing/selftests/net/reuseport_bpf.c
++++ b/tools/testing/selftests/net/reuseport_bpf.c
+@@ -330,7 +330,7 @@ static void test_extra_filter(const struct test_params p)
+ 	if (bind(fd1, addr, sockaddr_size()))
+ 		error(1, errno, "failed to bind recv socket 1");
+ 
+-	if (!bind(fd2, addr, sockaddr_size()) && errno != EADDRINUSE)
++	if (!bind(fd2, addr, sockaddr_size()) || errno != EADDRINUSE)
+ 		error(1, errno, "bind socket 2 should fail with EADDRINUSE");
+ 
+ 	free(addr);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-10-15 10:05 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-10-15 10:05 UTC (permalink / raw
  To: gentoo-commits

commit:     be811b7b46514343f3c5d5134e45dffc3a08ea88
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 15 10:05:36 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Oct 15 10:05:36 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=be811b7b

Linux patch 5.10.148

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1147_linux-5.10.148.patch | 2197 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2201 insertions(+)

diff --git a/0000_README b/0000_README
index 33d96fa1..cfc507bc 100644
--- a/0000_README
+++ b/0000_README
@@ -631,6 +631,10 @@ Patch:  1146_linux-5.10.147.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.147
 
+Patch:  1147_linux-5.10.148.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.148
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1147_linux-5.10.148.patch b/1147_linux-5.10.148.patch
new file mode 100644
index 00000000..3dc73f12
--- /dev/null
+++ b/1147_linux-5.10.148.patch
@@ -0,0 +1,2197 @@
+diff --git a/Documentation/devicetree/bindings/dma/moxa,moxart-dma.txt b/Documentation/devicetree/bindings/dma/moxa,moxart-dma.txt
+index 8a9f3559335b5..7e14e26676ec9 100644
+--- a/Documentation/devicetree/bindings/dma/moxa,moxart-dma.txt
++++ b/Documentation/devicetree/bindings/dma/moxa,moxart-dma.txt
+@@ -34,8 +34,8 @@ Example:
+ Use specific request line passing from dma
+ For example, MMC request line is 5
+ 
+-	sdhci: sdhci@98e00000 {
+-		compatible = "moxa,moxart-sdhci";
++	mmc: mmc@98e00000 {
++		compatible = "moxa,moxart-mmc";
+ 		reg = <0x98e00000 0x5C>;
+ 		interrupts = <5 0>;
+ 		clocks = <&clk_apb>;
+diff --git a/Documentation/process/code-of-conduct-interpretation.rst b/Documentation/process/code-of-conduct-interpretation.rst
+index e899f14a4ba24..4f8a06b00f608 100644
+--- a/Documentation/process/code-of-conduct-interpretation.rst
++++ b/Documentation/process/code-of-conduct-interpretation.rst
+@@ -51,7 +51,7 @@ the Technical Advisory Board (TAB) or other maintainers if you're
+ uncertain how to handle situations that come up.  It will not be
+ considered a violation report unless you want it to be.  If you are
+ uncertain about approaching the TAB or any other maintainers, please
+-reach out to our conflict mediator, Mishi Choudhary <mishi@linux.com>.
++reach out to our conflict mediator, Joanna Lee <joanna.lee@gesmer.com>.
+ 
+ In the end, "be kind to each other" is really what the end goal is for
+ everybody.  We know everyone is human and we all fail at times, but the
+diff --git a/Makefile b/Makefile
+index 24110f834775a..c40acf09ce29d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 147
++SUBLEVEL = 148
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/moxart-uc7112lx.dts b/arch/arm/boot/dts/moxart-uc7112lx.dts
+index eb5291b0ee3aa..e07b807b4cec5 100644
+--- a/arch/arm/boot/dts/moxart-uc7112lx.dts
++++ b/arch/arm/boot/dts/moxart-uc7112lx.dts
+@@ -79,7 +79,7 @@
+ 	clocks = <&ref12>;
+ };
+ 
+-&sdhci {
++&mmc {
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm/boot/dts/moxart.dtsi b/arch/arm/boot/dts/moxart.dtsi
+index f5f070a874823..764832ddfa78a 100644
+--- a/arch/arm/boot/dts/moxart.dtsi
++++ b/arch/arm/boot/dts/moxart.dtsi
+@@ -93,8 +93,8 @@
+ 			clock-names = "PCLK";
+ 		};
+ 
+-		sdhci: sdhci@98e00000 {
+-			compatible = "moxa,moxart-sdhci";
++		mmc: mmc@98e00000 {
++			compatible = "moxa,moxart-mmc";
+ 			reg = <0x98e00000 0x5C>;
+ 			interrupts = <5 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&clk_apb>;
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index 295959487b76d..ae4ba6a6745d4 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -997,15 +997,6 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre
+ 	pmd = *pmdp;
+ 	pmd_clear(pmdp);
+ 
+-	/*
+-	 * pmdp collapse_flush need to ensure that there are no parallel gup
+-	 * walk after this call. This is needed so that we can have stable
+-	 * page ref count when collapsing a page. We don't allow a collapse page
+-	 * if we have gup taken on the page. We can ensure that by sending IPI
+-	 * because gup walk happens with IRQ disabled.
+-	 */
+-	serialize_against_pte_lookup(vma->vm_mm);
+-
+ 	radix__flush_tlb_collapsed_pmd(vma->vm_mm, address);
+ 
+ 	return pmd;
+diff --git a/arch/um/Makefile b/arch/um/Makefile
+index 1cea46ff9bb78..7756151413393 100644
+--- a/arch/um/Makefile
++++ b/arch/um/Makefile
+@@ -131,10 +131,18 @@ export LDS_ELF_FORMAT := $(ELF_FORMAT)
+ # The wrappers will select whether using "malloc" or the kernel allocator.
+ LINK_WRAPS = -Wl,--wrap,malloc -Wl,--wrap,free -Wl,--wrap,calloc
+ 
++# Avoid binutils 2.39+ warnings by marking the stack non-executable and
++# ignorning warnings for the kallsyms sections.
++LDFLAGS_EXECSTACK = -z noexecstack
++ifeq ($(CONFIG_LD_IS_BFD),y)
++LDFLAGS_EXECSTACK += $(call ld-option,--no-warn-rwx-segments)
++endif
++
+ LD_FLAGS_CMDLINE = $(foreach opt,$(KBUILD_LDFLAGS),-Wl,$(opt))
+ 
+ # Used by link-vmlinux.sh which has special support for um link
+ export CFLAGS_vmlinux := $(LINK-y) $(LINK_WRAPS) $(LD_FLAGS_CMDLINE)
++export LDFLAGS_vmlinux := $(LDFLAGS_EXECSTACK)
+ 
+ # When cleaning we don't include .config, so we don't include
+ # TT or skas makefiles and don't clean skas_ptregs.h.
+diff --git a/arch/x86/um/shared/sysdep/syscalls_32.h b/arch/x86/um/shared/sysdep/syscalls_32.h
+index 68fd2cf526fd7..f6e9f84397e79 100644
+--- a/arch/x86/um/shared/sysdep/syscalls_32.h
++++ b/arch/x86/um/shared/sysdep/syscalls_32.h
+@@ -6,10 +6,9 @@
+ #include <asm/unistd.h>
+ #include <sysdep/ptrace.h>
+ 
+-typedef long syscall_handler_t(struct pt_regs);
++typedef long syscall_handler_t(struct syscall_args);
+ 
+ extern syscall_handler_t *sys_call_table[];
+ 
+ #define EXECUTE_SYSCALL(syscall, regs) \
+-	((long (*)(struct syscall_args)) \
+-	 (*sys_call_table[syscall]))(SYSCALL_ARGS(&regs->regs))
++	((*sys_call_table[syscall]))(SYSCALL_ARGS(&regs->regs))
+diff --git a/arch/x86/um/tls_32.c b/arch/x86/um/tls_32.c
+index ac8eee093f9cd..66162eafd8e8f 100644
+--- a/arch/x86/um/tls_32.c
++++ b/arch/x86/um/tls_32.c
+@@ -65,9 +65,6 @@ static int get_free_idx(struct task_struct* task)
+ 	struct thread_struct *t = &task->thread;
+ 	int idx;
+ 
+-	if (!t->arch.tls_array)
+-		return GDT_ENTRY_TLS_MIN;
+-
+ 	for (idx = 0; idx < GDT_ENTRY_TLS_ENTRIES; idx++)
+ 		if (!t->arch.tls_array[idx].present)
+ 			return idx + GDT_ENTRY_TLS_MIN;
+@@ -240,9 +237,6 @@ static int get_tls_entry(struct task_struct *task, struct user_desc *info,
+ {
+ 	struct thread_struct *t = &task->thread;
+ 
+-	if (!t->arch.tls_array)
+-		goto clear;
+-
+ 	if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+ 		return -EINVAL;
+ 
+diff --git a/arch/x86/um/vdso/Makefile b/arch/x86/um/vdso/Makefile
+index 5943387e3f357..5ca366e15c767 100644
+--- a/arch/x86/um/vdso/Makefile
++++ b/arch/x86/um/vdso/Makefile
+@@ -62,7 +62,7 @@ quiet_cmd_vdso = VDSO    $@
+ 		       -Wl,-T,$(filter %.lds,$^) $(filter %.o,$^) && \
+ 		 sh $(srctree)/$(src)/checkundef.sh '$(NM)' '$@'
+ 
+-VDSO_LDFLAGS = -fPIC -shared -Wl,--hash-style=sysv
++VDSO_LDFLAGS = -fPIC -shared -Wl,--hash-style=sysv -z noexecstack
+ GCOV_PROFILE := n
+ 
+ #
+diff --git a/drivers/char/mem.c b/drivers/char/mem.c
+index 94c2b556cf972..7d483c3323480 100644
+--- a/drivers/char/mem.c
++++ b/drivers/char/mem.c
+@@ -981,8 +981,8 @@ static const struct memdev {
+ #endif
+ 	 [5] = { "zero", 0666, &zero_fops, 0 },
+ 	 [7] = { "full", 0666, &full_fops, 0 },
+-	 [8] = { "random", 0666, &random_fops, 0 },
+-	 [9] = { "urandom", 0666, &urandom_fops, 0 },
++	 [8] = { "random", 0666, &random_fops, FMODE_NOWAIT },
++	 [9] = { "urandom", 0666, &urandom_fops, FMODE_NOWAIT },
+ #ifdef CONFIG_PRINTK
+ 	[11] = { "kmsg", 0644, &kmsg_fops, 0 },
+ #endif
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index f769d858eda73..b54481e667307 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -895,20 +895,23 @@ void __cold add_bootloader_randomness(const void *buf, size_t len)
+ EXPORT_SYMBOL_GPL(add_bootloader_randomness);
+ 
+ struct fast_pool {
+-	struct work_struct mix;
+ 	unsigned long pool[4];
+ 	unsigned long last;
+ 	unsigned int count;
++	struct timer_list mix;
+ };
+ 
++static void mix_interrupt_randomness(struct timer_list *work);
++
+ static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
+ #ifdef CONFIG_64BIT
+ #define FASTMIX_PERM SIPHASH_PERMUTATION
+-	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
++	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 },
+ #else
+ #define FASTMIX_PERM HSIPHASH_PERMUTATION
+-	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
++	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 },
+ #endif
++	.mix = __TIMER_INITIALIZER(mix_interrupt_randomness, 0)
+ };
+ 
+ /*
+@@ -950,7 +953,7 @@ int __cold random_online_cpu(unsigned int cpu)
+ }
+ #endif
+ 
+-static void mix_interrupt_randomness(struct work_struct *work)
++static void mix_interrupt_randomness(struct timer_list *work)
+ {
+ 	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
+ 	/*
+@@ -981,7 +984,7 @@ static void mix_interrupt_randomness(struct work_struct *work)
+ 	local_irq_enable();
+ 
+ 	mix_pool_bytes(pool, sizeof(pool));
+-	credit_init_bits(max(1u, (count & U16_MAX) / 64));
++	credit_init_bits(clamp_t(unsigned int, (count & U16_MAX) / 64, 1, sizeof(pool) * 8));
+ 
+ 	memzero_explicit(pool, sizeof(pool));
+ }
+@@ -1004,10 +1007,11 @@ void add_interrupt_randomness(int irq)
+ 	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
+ 		return;
+ 
+-	if (unlikely(!fast_pool->mix.func))
+-		INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
+ 	fast_pool->count |= MIX_INFLIGHT;
+-	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
++	if (!timer_pending(&fast_pool->mix)) {
++		fast_pool->mix.expires = jiffies;
++		add_timer_on(&fast_pool->mix, raw_smp_processor_id());
++	}
+ }
+ EXPORT_SYMBOL_GPL(add_interrupt_randomness);
+ 
+@@ -1299,6 +1303,11 @@ static ssize_t random_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
+ {
+ 	int ret;
+ 
++	if (!crng_ready() &&
++	    ((kiocb->ki_flags & (IOCB_NOWAIT | IOCB_NOIO)) ||
++	     (kiocb->ki_filp->f_flags & O_NONBLOCK)))
++		return -EAGAIN;
++
+ 	ret = wait_for_random_bytes();
+ 	if (ret != 0)
+ 		return ret;
+diff --git a/drivers/clk/ti/clk-44xx.c b/drivers/clk/ti/clk-44xx.c
+index cbf9922d93d4e..a38c921539793 100644
+--- a/drivers/clk/ti/clk-44xx.c
++++ b/drivers/clk/ti/clk-44xx.c
+@@ -56,7 +56,7 @@ static const struct omap_clkctrl_bit_data omap4_aess_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap4_func_dmic_abe_gfclk_parents[] __initconst = {
+-	"abe-clkctrl:0018:26",
++	"abe_cm:clk:0018:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -76,7 +76,7 @@ static const struct omap_clkctrl_bit_data omap4_dmic_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap4_func_mcasp_abe_gfclk_parents[] __initconst = {
+-	"abe-clkctrl:0020:26",
++	"abe_cm:clk:0020:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -89,7 +89,7 @@ static const struct omap_clkctrl_bit_data omap4_mcasp_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap4_func_mcbsp1_gfclk_parents[] __initconst = {
+-	"abe-clkctrl:0028:26",
++	"abe_cm:clk:0028:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -102,7 +102,7 @@ static const struct omap_clkctrl_bit_data omap4_mcbsp1_bit_data[] __initconst =
+ };
+ 
+ static const char * const omap4_func_mcbsp2_gfclk_parents[] __initconst = {
+-	"abe-clkctrl:0030:26",
++	"abe_cm:clk:0030:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -115,7 +115,7 @@ static const struct omap_clkctrl_bit_data omap4_mcbsp2_bit_data[] __initconst =
+ };
+ 
+ static const char * const omap4_func_mcbsp3_gfclk_parents[] __initconst = {
+-	"abe-clkctrl:0038:26",
++	"abe_cm:clk:0038:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -183,18 +183,18 @@ static const struct omap_clkctrl_bit_data omap4_timer8_bit_data[] __initconst =
+ 
+ static const struct omap_clkctrl_reg_data omap4_abe_clkctrl_regs[] __initconst = {
+ 	{ OMAP4_L4_ABE_CLKCTRL, NULL, 0, "ocp_abe_iclk" },
+-	{ OMAP4_AESS_CLKCTRL, omap4_aess_bit_data, CLKF_SW_SUP, "abe-clkctrl:0008:24" },
++	{ OMAP4_AESS_CLKCTRL, omap4_aess_bit_data, CLKF_SW_SUP, "abe_cm:clk:0008:24" },
+ 	{ OMAP4_MCPDM_CLKCTRL, NULL, CLKF_SW_SUP, "pad_clks_ck" },
+-	{ OMAP4_DMIC_CLKCTRL, omap4_dmic_bit_data, CLKF_SW_SUP, "abe-clkctrl:0018:24" },
+-	{ OMAP4_MCASP_CLKCTRL, omap4_mcasp_bit_data, CLKF_SW_SUP, "abe-clkctrl:0020:24" },
+-	{ OMAP4_MCBSP1_CLKCTRL, omap4_mcbsp1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0028:24" },
+-	{ OMAP4_MCBSP2_CLKCTRL, omap4_mcbsp2_bit_data, CLKF_SW_SUP, "abe-clkctrl:0030:24" },
+-	{ OMAP4_MCBSP3_CLKCTRL, omap4_mcbsp3_bit_data, CLKF_SW_SUP, "abe-clkctrl:0038:24" },
+-	{ OMAP4_SLIMBUS1_CLKCTRL, omap4_slimbus1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0040:8" },
+-	{ OMAP4_TIMER5_CLKCTRL, omap4_timer5_bit_data, CLKF_SW_SUP, "abe-clkctrl:0048:24" },
+-	{ OMAP4_TIMER6_CLKCTRL, omap4_timer6_bit_data, CLKF_SW_SUP, "abe-clkctrl:0050:24" },
+-	{ OMAP4_TIMER7_CLKCTRL, omap4_timer7_bit_data, CLKF_SW_SUP, "abe-clkctrl:0058:24" },
+-	{ OMAP4_TIMER8_CLKCTRL, omap4_timer8_bit_data, CLKF_SW_SUP, "abe-clkctrl:0060:24" },
++	{ OMAP4_DMIC_CLKCTRL, omap4_dmic_bit_data, CLKF_SW_SUP, "abe_cm:clk:0018:24" },
++	{ OMAP4_MCASP_CLKCTRL, omap4_mcasp_bit_data, CLKF_SW_SUP, "abe_cm:clk:0020:24" },
++	{ OMAP4_MCBSP1_CLKCTRL, omap4_mcbsp1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0028:24" },
++	{ OMAP4_MCBSP2_CLKCTRL, omap4_mcbsp2_bit_data, CLKF_SW_SUP, "abe_cm:clk:0030:24" },
++	{ OMAP4_MCBSP3_CLKCTRL, omap4_mcbsp3_bit_data, CLKF_SW_SUP, "abe_cm:clk:0038:24" },
++	{ OMAP4_SLIMBUS1_CLKCTRL, omap4_slimbus1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0040:8" },
++	{ OMAP4_TIMER5_CLKCTRL, omap4_timer5_bit_data, CLKF_SW_SUP, "abe_cm:clk:0048:24" },
++	{ OMAP4_TIMER6_CLKCTRL, omap4_timer6_bit_data, CLKF_SW_SUP, "abe_cm:clk:0050:24" },
++	{ OMAP4_TIMER7_CLKCTRL, omap4_timer7_bit_data, CLKF_SW_SUP, "abe_cm:clk:0058:24" },
++	{ OMAP4_TIMER8_CLKCTRL, omap4_timer8_bit_data, CLKF_SW_SUP, "abe_cm:clk:0060:24" },
+ 	{ OMAP4_WD_TIMER3_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ 	{ 0 },
+ };
+@@ -287,7 +287,7 @@ static const struct omap_clkctrl_bit_data omap4_fdif_bit_data[] __initconst = {
+ 
+ static const struct omap_clkctrl_reg_data omap4_iss_clkctrl_regs[] __initconst = {
+ 	{ OMAP4_ISS_CLKCTRL, omap4_iss_bit_data, CLKF_SW_SUP, "ducati_clk_mux_ck" },
+-	{ OMAP4_FDIF_CLKCTRL, omap4_fdif_bit_data, CLKF_SW_SUP, "iss-clkctrl:0008:24" },
++	{ OMAP4_FDIF_CLKCTRL, omap4_fdif_bit_data, CLKF_SW_SUP, "iss_cm:clk:0008:24" },
+ 	{ 0 },
+ };
+ 
+@@ -320,7 +320,7 @@ static const struct omap_clkctrl_bit_data omap4_dss_core_bit_data[] __initconst
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap4_l3_dss_clkctrl_regs[] __initconst = {
+-	{ OMAP4_DSS_CORE_CLKCTRL, omap4_dss_core_bit_data, CLKF_SW_SUP, "l3-dss-clkctrl:0000:8" },
++	{ OMAP4_DSS_CORE_CLKCTRL, omap4_dss_core_bit_data, CLKF_SW_SUP, "l3_dss_cm:clk:0000:8" },
+ 	{ 0 },
+ };
+ 
+@@ -336,7 +336,7 @@ static const struct omap_clkctrl_bit_data omap4_gpu_bit_data[] __initconst = {
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap4_l3_gfx_clkctrl_regs[] __initconst = {
+-	{ OMAP4_GPU_CLKCTRL, omap4_gpu_bit_data, CLKF_SW_SUP, "l3-gfx-clkctrl:0000:24" },
++	{ OMAP4_GPU_CLKCTRL, omap4_gpu_bit_data, CLKF_SW_SUP, "l3_gfx_cm:clk:0000:24" },
+ 	{ 0 },
+ };
+ 
+@@ -372,12 +372,12 @@ static const struct omap_clkctrl_bit_data omap4_hsi_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap4_usb_host_hs_utmi_p1_clk_parents[] __initconst = {
+-	"l3-init-clkctrl:0038:24",
++	"l3_init_cm:clk:0038:24",
+ 	NULL,
+ };
+ 
+ static const char * const omap4_usb_host_hs_utmi_p2_clk_parents[] __initconst = {
+-	"l3-init-clkctrl:0038:25",
++	"l3_init_cm:clk:0038:25",
+ 	NULL,
+ };
+ 
+@@ -418,7 +418,7 @@ static const struct omap_clkctrl_bit_data omap4_usb_host_hs_bit_data[] __initcon
+ };
+ 
+ static const char * const omap4_usb_otg_hs_xclk_parents[] __initconst = {
+-	"l3-init-clkctrl:0040:24",
++	"l3_init_cm:clk:0040:24",
+ 	NULL,
+ };
+ 
+@@ -452,14 +452,14 @@ static const struct omap_clkctrl_bit_data omap4_ocp2scp_usb_phy_bit_data[] __ini
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap4_l3_init_clkctrl_regs[] __initconst = {
+-	{ OMAP4_MMC1_CLKCTRL, omap4_mmc1_bit_data, CLKF_SW_SUP, "l3-init-clkctrl:0008:24" },
+-	{ OMAP4_MMC2_CLKCTRL, omap4_mmc2_bit_data, CLKF_SW_SUP, "l3-init-clkctrl:0010:24" },
+-	{ OMAP4_HSI_CLKCTRL, omap4_hsi_bit_data, CLKF_HW_SUP, "l3-init-clkctrl:0018:24" },
++	{ OMAP4_MMC1_CLKCTRL, omap4_mmc1_bit_data, CLKF_SW_SUP, "l3_init_cm:clk:0008:24" },
++	{ OMAP4_MMC2_CLKCTRL, omap4_mmc2_bit_data, CLKF_SW_SUP, "l3_init_cm:clk:0010:24" },
++	{ OMAP4_HSI_CLKCTRL, omap4_hsi_bit_data, CLKF_HW_SUP, "l3_init_cm:clk:0018:24" },
+ 	{ OMAP4_USB_HOST_HS_CLKCTRL, omap4_usb_host_hs_bit_data, CLKF_SW_SUP, "init_60m_fclk" },
+ 	{ OMAP4_USB_OTG_HS_CLKCTRL, omap4_usb_otg_hs_bit_data, CLKF_HW_SUP, "l3_div_ck" },
+ 	{ OMAP4_USB_TLL_HS_CLKCTRL, omap4_usb_tll_hs_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+ 	{ OMAP4_USB_HOST_FS_CLKCTRL, NULL, CLKF_SW_SUP, "func_48mc_fclk" },
+-	{ OMAP4_OCP2SCP_USB_PHY_CLKCTRL, omap4_ocp2scp_usb_phy_bit_data, CLKF_HW_SUP, "l3-init-clkctrl:00c0:8" },
++	{ OMAP4_OCP2SCP_USB_PHY_CLKCTRL, omap4_ocp2scp_usb_phy_bit_data, CLKF_HW_SUP, "l3_init_cm:clk:00c0:8" },
+ 	{ 0 },
+ };
+ 
+@@ -530,7 +530,7 @@ static const struct omap_clkctrl_bit_data omap4_gpio6_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap4_per_mcbsp4_gfclk_parents[] __initconst = {
+-	"l4-per-clkctrl:00c0:26",
++	"l4_per_cm:clk:00c0:26",
+ 	"pad_clks_ck",
+ 	NULL,
+ };
+@@ -570,12 +570,12 @@ static const struct omap_clkctrl_bit_data omap4_slimbus2_bit_data[] __initconst
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap4_l4_per_clkctrl_regs[] __initconst = {
+-	{ OMAP4_TIMER10_CLKCTRL, omap4_timer10_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0008:24" },
+-	{ OMAP4_TIMER11_CLKCTRL, omap4_timer11_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0010:24" },
+-	{ OMAP4_TIMER2_CLKCTRL, omap4_timer2_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0018:24" },
+-	{ OMAP4_TIMER3_CLKCTRL, omap4_timer3_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0020:24" },
+-	{ OMAP4_TIMER4_CLKCTRL, omap4_timer4_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0028:24" },
+-	{ OMAP4_TIMER9_CLKCTRL, omap4_timer9_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0030:24" },
++	{ OMAP4_TIMER10_CLKCTRL, omap4_timer10_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0008:24" },
++	{ OMAP4_TIMER11_CLKCTRL, omap4_timer11_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0010:24" },
++	{ OMAP4_TIMER2_CLKCTRL, omap4_timer2_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0018:24" },
++	{ OMAP4_TIMER3_CLKCTRL, omap4_timer3_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0020:24" },
++	{ OMAP4_TIMER4_CLKCTRL, omap4_timer4_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0028:24" },
++	{ OMAP4_TIMER9_CLKCTRL, omap4_timer9_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0030:24" },
+ 	{ OMAP4_ELM_CLKCTRL, NULL, 0, "l4_div_ck" },
+ 	{ OMAP4_GPIO2_CLKCTRL, omap4_gpio2_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+ 	{ OMAP4_GPIO3_CLKCTRL, omap4_gpio3_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+@@ -588,14 +588,14 @@ static const struct omap_clkctrl_reg_data omap4_l4_per_clkctrl_regs[] __initcons
+ 	{ OMAP4_I2C3_CLKCTRL, NULL, CLKF_SW_SUP, "func_96m_fclk" },
+ 	{ OMAP4_I2C4_CLKCTRL, NULL, CLKF_SW_SUP, "func_96m_fclk" },
+ 	{ OMAP4_L4_PER_CLKCTRL, NULL, 0, "l4_div_ck" },
+-	{ OMAP4_MCBSP4_CLKCTRL, omap4_mcbsp4_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:00c0:24" },
++	{ OMAP4_MCBSP4_CLKCTRL, omap4_mcbsp4_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:00c0:24" },
+ 	{ OMAP4_MCSPI1_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_MCSPI2_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_MCSPI3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_MCSPI4_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_MMC3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_MMC4_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+-	{ OMAP4_SLIMBUS2_CLKCTRL, omap4_slimbus2_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0118:8" },
++	{ OMAP4_SLIMBUS2_CLKCTRL, omap4_slimbus2_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0118:8" },
+ 	{ OMAP4_UART1_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_UART2_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ 	{ OMAP4_UART3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+@@ -630,7 +630,7 @@ static const struct omap_clkctrl_reg_data omap4_l4_wkup_clkctrl_regs[] __initcon
+ 	{ OMAP4_L4_WKUP_CLKCTRL, NULL, 0, "l4_wkup_clk_mux_ck" },
+ 	{ OMAP4_WD_TIMER2_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ 	{ OMAP4_GPIO1_CLKCTRL, omap4_gpio1_bit_data, CLKF_HW_SUP, "l4_wkup_clk_mux_ck" },
+-	{ OMAP4_TIMER1_CLKCTRL, omap4_timer1_bit_data, CLKF_SW_SUP, "l4-wkup-clkctrl:0020:24" },
++	{ OMAP4_TIMER1_CLKCTRL, omap4_timer1_bit_data, CLKF_SW_SUP, "l4_wkup_cm:clk:0020:24" },
+ 	{ OMAP4_COUNTER_32K_CLKCTRL, NULL, 0, "sys_32k_ck" },
+ 	{ OMAP4_KBD_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ 	{ 0 },
+@@ -644,7 +644,7 @@ static const char * const omap4_pmd_stm_clock_mux_ck_parents[] __initconst = {
+ };
+ 
+ static const char * const omap4_trace_clk_div_div_ck_parents[] __initconst = {
+-	"emu-sys-clkctrl:0000:22",
++	"emu_sys_cm:clk:0000:22",
+ 	NULL,
+ };
+ 
+@@ -662,7 +662,7 @@ static const struct omap_clkctrl_div_data omap4_trace_clk_div_div_ck_data __init
+ };
+ 
+ static const char * const omap4_stm_clk_div_ck_parents[] __initconst = {
+-	"emu-sys-clkctrl:0000:20",
++	"emu_sys_cm:clk:0000:20",
+ 	NULL,
+ };
+ 
+@@ -716,73 +716,73 @@ static struct ti_dt_clk omap44xx_clks[] = {
+ 	 * hwmod support. Once hwmod is removed, these can be removed
+ 	 * also.
+ 	 */
+-	DT_CLK(NULL, "aess_fclk", "abe-clkctrl:0008:24"),
+-	DT_CLK(NULL, "cm2_dm10_mux", "l4-per-clkctrl:0008:24"),
+-	DT_CLK(NULL, "cm2_dm11_mux", "l4-per-clkctrl:0010:24"),
+-	DT_CLK(NULL, "cm2_dm2_mux", "l4-per-clkctrl:0018:24"),
+-	DT_CLK(NULL, "cm2_dm3_mux", "l4-per-clkctrl:0020:24"),
+-	DT_CLK(NULL, "cm2_dm4_mux", "l4-per-clkctrl:0028:24"),
+-	DT_CLK(NULL, "cm2_dm9_mux", "l4-per-clkctrl:0030:24"),
+-	DT_CLK(NULL, "dmic_sync_mux_ck", "abe-clkctrl:0018:26"),
+-	DT_CLK(NULL, "dmt1_clk_mux", "l4-wkup-clkctrl:0020:24"),
+-	DT_CLK(NULL, "dss_48mhz_clk", "l3-dss-clkctrl:0000:9"),
+-	DT_CLK(NULL, "dss_dss_clk", "l3-dss-clkctrl:0000:8"),
+-	DT_CLK(NULL, "dss_sys_clk", "l3-dss-clkctrl:0000:10"),
+-	DT_CLK(NULL, "dss_tv_clk", "l3-dss-clkctrl:0000:11"),
+-	DT_CLK(NULL, "fdif_fck", "iss-clkctrl:0008:24"),
+-	DT_CLK(NULL, "func_dmic_abe_gfclk", "abe-clkctrl:0018:24"),
+-	DT_CLK(NULL, "func_mcasp_abe_gfclk", "abe-clkctrl:0020:24"),
+-	DT_CLK(NULL, "func_mcbsp1_gfclk", "abe-clkctrl:0028:24"),
+-	DT_CLK(NULL, "func_mcbsp2_gfclk", "abe-clkctrl:0030:24"),
+-	DT_CLK(NULL, "func_mcbsp3_gfclk", "abe-clkctrl:0038:24"),
+-	DT_CLK(NULL, "gpio1_dbclk", "l4-wkup-clkctrl:0018:8"),
+-	DT_CLK(NULL, "gpio2_dbclk", "l4-per-clkctrl:0040:8"),
+-	DT_CLK(NULL, "gpio3_dbclk", "l4-per-clkctrl:0048:8"),
+-	DT_CLK(NULL, "gpio4_dbclk", "l4-per-clkctrl:0050:8"),
+-	DT_CLK(NULL, "gpio5_dbclk", "l4-per-clkctrl:0058:8"),
+-	DT_CLK(NULL, "gpio6_dbclk", "l4-per-clkctrl:0060:8"),
+-	DT_CLK(NULL, "hsi_fck", "l3-init-clkctrl:0018:24"),
+-	DT_CLK(NULL, "hsmmc1_fclk", "l3-init-clkctrl:0008:24"),
+-	DT_CLK(NULL, "hsmmc2_fclk", "l3-init-clkctrl:0010:24"),
+-	DT_CLK(NULL, "iss_ctrlclk", "iss-clkctrl:0000:8"),
+-	DT_CLK(NULL, "mcasp_sync_mux_ck", "abe-clkctrl:0020:26"),
+-	DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe-clkctrl:0028:26"),
+-	DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe-clkctrl:0030:26"),
+-	DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe-clkctrl:0038:26"),
+-	DT_CLK(NULL, "mcbsp4_sync_mux_ck", "l4-per-clkctrl:00c0:26"),
+-	DT_CLK(NULL, "ocp2scp_usb_phy_phy_48m", "l3-init-clkctrl:00c0:8"),
+-	DT_CLK(NULL, "otg_60m_gfclk", "l3-init-clkctrl:0040:24"),
+-	DT_CLK(NULL, "per_mcbsp4_gfclk", "l4-per-clkctrl:00c0:24"),
+-	DT_CLK(NULL, "pmd_stm_clock_mux_ck", "emu-sys-clkctrl:0000:20"),
+-	DT_CLK(NULL, "pmd_trace_clk_mux_ck", "emu-sys-clkctrl:0000:22"),
+-	DT_CLK(NULL, "sgx_clk_mux", "l3-gfx-clkctrl:0000:24"),
+-	DT_CLK(NULL, "slimbus1_fclk_0", "abe-clkctrl:0040:8"),
+-	DT_CLK(NULL, "slimbus1_fclk_1", "abe-clkctrl:0040:9"),
+-	DT_CLK(NULL, "slimbus1_fclk_2", "abe-clkctrl:0040:10"),
+-	DT_CLK(NULL, "slimbus1_slimbus_clk", "abe-clkctrl:0040:11"),
+-	DT_CLK(NULL, "slimbus2_fclk_0", "l4-per-clkctrl:0118:8"),
+-	DT_CLK(NULL, "slimbus2_fclk_1", "l4-per-clkctrl:0118:9"),
+-	DT_CLK(NULL, "slimbus2_slimbus_clk", "l4-per-clkctrl:0118:10"),
+-	DT_CLK(NULL, "stm_clk_div_ck", "emu-sys-clkctrl:0000:27"),
+-	DT_CLK(NULL, "timer5_sync_mux", "abe-clkctrl:0048:24"),
+-	DT_CLK(NULL, "timer6_sync_mux", "abe-clkctrl:0050:24"),
+-	DT_CLK(NULL, "timer7_sync_mux", "abe-clkctrl:0058:24"),
+-	DT_CLK(NULL, "timer8_sync_mux", "abe-clkctrl:0060:24"),
+-	DT_CLK(NULL, "trace_clk_div_div_ck", "emu-sys-clkctrl:0000:24"),
+-	DT_CLK(NULL, "usb_host_hs_func48mclk", "l3-init-clkctrl:0038:15"),
+-	DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3-init-clkctrl:0038:13"),
+-	DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3-init-clkctrl:0038:14"),
+-	DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3-init-clkctrl:0038:11"),
+-	DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3-init-clkctrl:0038:12"),
+-	DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3-init-clkctrl:0038:8"),
+-	DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3-init-clkctrl:0038:9"),
+-	DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3_init-clkctrl:0038:10"),
+-	DT_CLK(NULL, "usb_otg_hs_xclk", "l3-init-clkctrl:0040:8"),
+-	DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3-init-clkctrl:0048:8"),
+-	DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3-init-clkctrl:0048:9"),
+-	DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3-init-clkctrl:0048:10"),
+-	DT_CLK(NULL, "utmi_p1_gfclk", "l3-init-clkctrl:0038:24"),
+-	DT_CLK(NULL, "utmi_p2_gfclk", "l3-init-clkctrl:0038:25"),
++	DT_CLK(NULL, "aess_fclk", "abe_cm:0008:24"),
++	DT_CLK(NULL, "cm2_dm10_mux", "l4_per_cm:0008:24"),
++	DT_CLK(NULL, "cm2_dm11_mux", "l4_per_cm:0010:24"),
++	DT_CLK(NULL, "cm2_dm2_mux", "l4_per_cm:0018:24"),
++	DT_CLK(NULL, "cm2_dm3_mux", "l4_per_cm:0020:24"),
++	DT_CLK(NULL, "cm2_dm4_mux", "l4_per_cm:0028:24"),
++	DT_CLK(NULL, "cm2_dm9_mux", "l4_per_cm:0030:24"),
++	DT_CLK(NULL, "dmic_sync_mux_ck", "abe_cm:0018:26"),
++	DT_CLK(NULL, "dmt1_clk_mux", "l4_wkup_cm:0020:24"),
++	DT_CLK(NULL, "dss_48mhz_clk", "l3_dss_cm:0000:9"),
++	DT_CLK(NULL, "dss_dss_clk", "l3_dss_cm:0000:8"),
++	DT_CLK(NULL, "dss_sys_clk", "l3_dss_cm:0000:10"),
++	DT_CLK(NULL, "dss_tv_clk", "l3_dss_cm:0000:11"),
++	DT_CLK(NULL, "fdif_fck", "iss_cm:0008:24"),
++	DT_CLK(NULL, "func_dmic_abe_gfclk", "abe_cm:0018:24"),
++	DT_CLK(NULL, "func_mcasp_abe_gfclk", "abe_cm:0020:24"),
++	DT_CLK(NULL, "func_mcbsp1_gfclk", "abe_cm:0028:24"),
++	DT_CLK(NULL, "func_mcbsp2_gfclk", "abe_cm:0030:24"),
++	DT_CLK(NULL, "func_mcbsp3_gfclk", "abe_cm:0038:24"),
++	DT_CLK(NULL, "gpio1_dbclk", "l4_wkup_cm:0018:8"),
++	DT_CLK(NULL, "gpio2_dbclk", "l4_per_cm:0040:8"),
++	DT_CLK(NULL, "gpio3_dbclk", "l4_per_cm:0048:8"),
++	DT_CLK(NULL, "gpio4_dbclk", "l4_per_cm:0050:8"),
++	DT_CLK(NULL, "gpio5_dbclk", "l4_per_cm:0058:8"),
++	DT_CLK(NULL, "gpio6_dbclk", "l4_per_cm:0060:8"),
++	DT_CLK(NULL, "hsi_fck", "l3_init_cm:0018:24"),
++	DT_CLK(NULL, "hsmmc1_fclk", "l3_init_cm:0008:24"),
++	DT_CLK(NULL, "hsmmc2_fclk", "l3_init_cm:0010:24"),
++	DT_CLK(NULL, "iss_ctrlclk", "iss_cm:0000:8"),
++	DT_CLK(NULL, "mcasp_sync_mux_ck", "abe_cm:0020:26"),
++	DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe_cm:0028:26"),
++	DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe_cm:0030:26"),
++	DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe_cm:0038:26"),
++	DT_CLK(NULL, "mcbsp4_sync_mux_ck", "l4_per_cm:00c0:26"),
++	DT_CLK(NULL, "ocp2scp_usb_phy_phy_48m", "l3_init_cm:00c0:8"),
++	DT_CLK(NULL, "otg_60m_gfclk", "l3_init_cm:0040:24"),
++	DT_CLK(NULL, "per_mcbsp4_gfclk", "l4_per_cm:00c0:24"),
++	DT_CLK(NULL, "pmd_stm_clock_mux_ck", "emu_sys_cm:0000:20"),
++	DT_CLK(NULL, "pmd_trace_clk_mux_ck", "emu_sys_cm:0000:22"),
++	DT_CLK(NULL, "sgx_clk_mux", "l3_gfx_cm:0000:24"),
++	DT_CLK(NULL, "slimbus1_fclk_0", "abe_cm:0040:8"),
++	DT_CLK(NULL, "slimbus1_fclk_1", "abe_cm:0040:9"),
++	DT_CLK(NULL, "slimbus1_fclk_2", "abe_cm:0040:10"),
++	DT_CLK(NULL, "slimbus1_slimbus_clk", "abe_cm:0040:11"),
++	DT_CLK(NULL, "slimbus2_fclk_0", "l4_per_cm:0118:8"),
++	DT_CLK(NULL, "slimbus2_fclk_1", "l4_per_cm:0118:9"),
++	DT_CLK(NULL, "slimbus2_slimbus_clk", "l4_per_cm:0118:10"),
++	DT_CLK(NULL, "stm_clk_div_ck", "emu_sys_cm:0000:27"),
++	DT_CLK(NULL, "timer5_sync_mux", "abe_cm:0048:24"),
++	DT_CLK(NULL, "timer6_sync_mux", "abe_cm:0050:24"),
++	DT_CLK(NULL, "timer7_sync_mux", "abe_cm:0058:24"),
++	DT_CLK(NULL, "timer8_sync_mux", "abe_cm:0060:24"),
++	DT_CLK(NULL, "trace_clk_div_div_ck", "emu_sys_cm:0000:24"),
++	DT_CLK(NULL, "usb_host_hs_func48mclk", "l3_init_cm:0038:15"),
++	DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3_init_cm:0038:13"),
++	DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3_init_cm:0038:14"),
++	DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3_init_cm:0038:11"),
++	DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3_init_cm:0038:12"),
++	DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3_init_cm:0038:8"),
++	DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3_init_cm:0038:9"),
++	DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3_init_cm:0038:10"),
++	DT_CLK(NULL, "usb_otg_hs_xclk", "l3_init_cm:0040:8"),
++	DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3_init_cm:0048:8"),
++	DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3_init_cm:0048:9"),
++	DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3_init_cm:0048:10"),
++	DT_CLK(NULL, "utmi_p1_gfclk", "l3_init_cm:0038:24"),
++	DT_CLK(NULL, "utmi_p2_gfclk", "l3_init_cm:0038:25"),
+ 	{ .node_name = NULL },
+ };
+ 
+diff --git a/drivers/clk/ti/clk-54xx.c b/drivers/clk/ti/clk-54xx.c
+index 04a5408085acc..8694bc9f5fc7f 100644
+--- a/drivers/clk/ti/clk-54xx.c
++++ b/drivers/clk/ti/clk-54xx.c
+@@ -50,7 +50,7 @@ static const struct omap_clkctrl_bit_data omap5_aess_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap5_dmic_gfclk_parents[] __initconst = {
+-	"abe-clkctrl:0018:26",
++	"abe_cm:clk:0018:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -70,7 +70,7 @@ static const struct omap_clkctrl_bit_data omap5_dmic_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap5_mcbsp1_gfclk_parents[] __initconst = {
+-	"abe-clkctrl:0028:26",
++	"abe_cm:clk:0028:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -83,7 +83,7 @@ static const struct omap_clkctrl_bit_data omap5_mcbsp1_bit_data[] __initconst =
+ };
+ 
+ static const char * const omap5_mcbsp2_gfclk_parents[] __initconst = {
+-	"abe-clkctrl:0030:26",
++	"abe_cm:clk:0030:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -96,7 +96,7 @@ static const struct omap_clkctrl_bit_data omap5_mcbsp2_bit_data[] __initconst =
+ };
+ 
+ static const char * const omap5_mcbsp3_gfclk_parents[] __initconst = {
+-	"abe-clkctrl:0038:26",
++	"abe_cm:clk:0038:26",
+ 	"pad_clks_ck",
+ 	"slimbus_clk",
+ 	NULL,
+@@ -136,16 +136,16 @@ static const struct omap_clkctrl_bit_data omap5_timer8_bit_data[] __initconst =
+ 
+ static const struct omap_clkctrl_reg_data omap5_abe_clkctrl_regs[] __initconst = {
+ 	{ OMAP5_L4_ABE_CLKCTRL, NULL, 0, "abe_iclk" },
+-	{ OMAP5_AESS_CLKCTRL, omap5_aess_bit_data, CLKF_SW_SUP, "abe-clkctrl:0008:24" },
++	{ OMAP5_AESS_CLKCTRL, omap5_aess_bit_data, CLKF_SW_SUP, "abe_cm:clk:0008:24" },
+ 	{ OMAP5_MCPDM_CLKCTRL, NULL, CLKF_SW_SUP, "pad_clks_ck" },
+-	{ OMAP5_DMIC_CLKCTRL, omap5_dmic_bit_data, CLKF_SW_SUP, "abe-clkctrl:0018:24" },
+-	{ OMAP5_MCBSP1_CLKCTRL, omap5_mcbsp1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0028:24" },
+-	{ OMAP5_MCBSP2_CLKCTRL, omap5_mcbsp2_bit_data, CLKF_SW_SUP, "abe-clkctrl:0030:24" },
+-	{ OMAP5_MCBSP3_CLKCTRL, omap5_mcbsp3_bit_data, CLKF_SW_SUP, "abe-clkctrl:0038:24" },
+-	{ OMAP5_TIMER5_CLKCTRL, omap5_timer5_bit_data, CLKF_SW_SUP, "abe-clkctrl:0048:24" },
+-	{ OMAP5_TIMER6_CLKCTRL, omap5_timer6_bit_data, CLKF_SW_SUP, "abe-clkctrl:0050:24" },
+-	{ OMAP5_TIMER7_CLKCTRL, omap5_timer7_bit_data, CLKF_SW_SUP, "abe-clkctrl:0058:24" },
+-	{ OMAP5_TIMER8_CLKCTRL, omap5_timer8_bit_data, CLKF_SW_SUP, "abe-clkctrl:0060:24" },
++	{ OMAP5_DMIC_CLKCTRL, omap5_dmic_bit_data, CLKF_SW_SUP, "abe_cm:clk:0018:24" },
++	{ OMAP5_MCBSP1_CLKCTRL, omap5_mcbsp1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0028:24" },
++	{ OMAP5_MCBSP2_CLKCTRL, omap5_mcbsp2_bit_data, CLKF_SW_SUP, "abe_cm:clk:0030:24" },
++	{ OMAP5_MCBSP3_CLKCTRL, omap5_mcbsp3_bit_data, CLKF_SW_SUP, "abe_cm:clk:0038:24" },
++	{ OMAP5_TIMER5_CLKCTRL, omap5_timer5_bit_data, CLKF_SW_SUP, "abe_cm:clk:0048:24" },
++	{ OMAP5_TIMER6_CLKCTRL, omap5_timer6_bit_data, CLKF_SW_SUP, "abe_cm:clk:0050:24" },
++	{ OMAP5_TIMER7_CLKCTRL, omap5_timer7_bit_data, CLKF_SW_SUP, "abe_cm:clk:0058:24" },
++	{ OMAP5_TIMER8_CLKCTRL, omap5_timer8_bit_data, CLKF_SW_SUP, "abe_cm:clk:0060:24" },
+ 	{ 0 },
+ };
+ 
+@@ -266,12 +266,12 @@ static const struct omap_clkctrl_bit_data omap5_gpio8_bit_data[] __initconst = {
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap5_l4per_clkctrl_regs[] __initconst = {
+-	{ OMAP5_TIMER10_CLKCTRL, omap5_timer10_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0008:24" },
+-	{ OMAP5_TIMER11_CLKCTRL, omap5_timer11_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0010:24" },
+-	{ OMAP5_TIMER2_CLKCTRL, omap5_timer2_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0018:24" },
+-	{ OMAP5_TIMER3_CLKCTRL, omap5_timer3_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0020:24" },
+-	{ OMAP5_TIMER4_CLKCTRL, omap5_timer4_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0028:24" },
+-	{ OMAP5_TIMER9_CLKCTRL, omap5_timer9_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0030:24" },
++	{ OMAP5_TIMER10_CLKCTRL, omap5_timer10_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0008:24" },
++	{ OMAP5_TIMER11_CLKCTRL, omap5_timer11_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0010:24" },
++	{ OMAP5_TIMER2_CLKCTRL, omap5_timer2_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0018:24" },
++	{ OMAP5_TIMER3_CLKCTRL, omap5_timer3_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0020:24" },
++	{ OMAP5_TIMER4_CLKCTRL, omap5_timer4_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0028:24" },
++	{ OMAP5_TIMER9_CLKCTRL, omap5_timer9_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0030:24" },
+ 	{ OMAP5_GPIO2_CLKCTRL, omap5_gpio2_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ 	{ OMAP5_GPIO3_CLKCTRL, omap5_gpio3_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ 	{ OMAP5_GPIO4_CLKCTRL, omap5_gpio4_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+@@ -343,7 +343,7 @@ static const struct omap_clkctrl_bit_data omap5_dss_core_bit_data[] __initconst
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap5_dss_clkctrl_regs[] __initconst = {
+-	{ OMAP5_DSS_CORE_CLKCTRL, omap5_dss_core_bit_data, CLKF_SW_SUP, "dss-clkctrl:0000:8" },
++	{ OMAP5_DSS_CORE_CLKCTRL, omap5_dss_core_bit_data, CLKF_SW_SUP, "dss_cm:clk:0000:8" },
+ 	{ 0 },
+ };
+ 
+@@ -376,7 +376,7 @@ static const struct omap_clkctrl_bit_data omap5_gpu_core_bit_data[] __initconst
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap5_gpu_clkctrl_regs[] __initconst = {
+-	{ OMAP5_GPU_CLKCTRL, omap5_gpu_core_bit_data, CLKF_SW_SUP, "gpu-clkctrl:0000:24" },
++	{ OMAP5_GPU_CLKCTRL, omap5_gpu_core_bit_data, CLKF_SW_SUP, "gpu_cm:clk:0000:24" },
+ 	{ 0 },
+ };
+ 
+@@ -387,7 +387,7 @@ static const char * const omap5_mmc1_fclk_mux_parents[] __initconst = {
+ };
+ 
+ static const char * const omap5_mmc1_fclk_parents[] __initconst = {
+-	"l3init-clkctrl:0008:24",
++	"l3init_cm:clk:0008:24",
+ 	NULL,
+ };
+ 
+@@ -403,7 +403,7 @@ static const struct omap_clkctrl_bit_data omap5_mmc1_bit_data[] __initconst = {
+ };
+ 
+ static const char * const omap5_mmc2_fclk_parents[] __initconst = {
+-	"l3init-clkctrl:0010:24",
++	"l3init_cm:clk:0010:24",
+ 	NULL,
+ };
+ 
+@@ -428,12 +428,12 @@ static const char * const omap5_usb_host_hs_hsic480m_p3_clk_parents[] __initcons
+ };
+ 
+ static const char * const omap5_usb_host_hs_utmi_p1_clk_parents[] __initconst = {
+-	"l3init-clkctrl:0038:24",
++	"l3init_cm:clk:0038:24",
+ 	NULL,
+ };
+ 
+ static const char * const omap5_usb_host_hs_utmi_p2_clk_parents[] __initconst = {
+-	"l3init-clkctrl:0038:25",
++	"l3init_cm:clk:0038:25",
+ 	NULL,
+ };
+ 
+@@ -492,8 +492,8 @@ static const struct omap_clkctrl_bit_data omap5_usb_otg_ss_bit_data[] __initcons
+ };
+ 
+ static const struct omap_clkctrl_reg_data omap5_l3init_clkctrl_regs[] __initconst = {
+-	{ OMAP5_MMC1_CLKCTRL, omap5_mmc1_bit_data, CLKF_SW_SUP, "l3init-clkctrl:0008:25" },
+-	{ OMAP5_MMC2_CLKCTRL, omap5_mmc2_bit_data, CLKF_SW_SUP, "l3init-clkctrl:0010:25" },
++	{ OMAP5_MMC1_CLKCTRL, omap5_mmc1_bit_data, CLKF_SW_SUP, "l3init_cm:clk:0008:25" },
++	{ OMAP5_MMC2_CLKCTRL, omap5_mmc2_bit_data, CLKF_SW_SUP, "l3init_cm:clk:0010:25" },
+ 	{ OMAP5_USB_HOST_HS_CLKCTRL, omap5_usb_host_hs_bit_data, CLKF_SW_SUP, "l3init_60m_fclk" },
+ 	{ OMAP5_USB_TLL_HS_CLKCTRL, omap5_usb_tll_hs_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ 	{ OMAP5_SATA_CLKCTRL, omap5_sata_bit_data, CLKF_SW_SUP, "func_48m_fclk" },
+@@ -517,7 +517,7 @@ static const struct omap_clkctrl_reg_data omap5_wkupaon_clkctrl_regs[] __initcon
+ 	{ OMAP5_L4_WKUP_CLKCTRL, NULL, 0, "wkupaon_iclk_mux" },
+ 	{ OMAP5_WD_TIMER2_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ 	{ OMAP5_GPIO1_CLKCTRL, omap5_gpio1_bit_data, CLKF_HW_SUP, "wkupaon_iclk_mux" },
+-	{ OMAP5_TIMER1_CLKCTRL, omap5_timer1_bit_data, CLKF_SW_SUP, "wkupaon-clkctrl:0020:24" },
++	{ OMAP5_TIMER1_CLKCTRL, omap5_timer1_bit_data, CLKF_SW_SUP, "wkupaon_cm:clk:0020:24" },
+ 	{ OMAP5_COUNTER_32K_CLKCTRL, NULL, 0, "wkupaon_iclk_mux" },
+ 	{ OMAP5_KBD_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ 	{ 0 },
+@@ -547,58 +547,58 @@ const struct omap_clkctrl_data omap5_clkctrl_data[] __initconst = {
+ static struct ti_dt_clk omap54xx_clks[] = {
+ 	DT_CLK(NULL, "timer_32k_ck", "sys_32k_ck"),
+ 	DT_CLK(NULL, "sys_clkin_ck", "sys_clkin"),
+-	DT_CLK(NULL, "dmic_gfclk", "abe-clkctrl:0018:24"),
+-	DT_CLK(NULL, "dmic_sync_mux_ck", "abe-clkctrl:0018:26"),
+-	DT_CLK(NULL, "dss_32khz_clk", "dss-clkctrl:0000:11"),
+-	DT_CLK(NULL, "dss_48mhz_clk", "dss-clkctrl:0000:9"),
+-	DT_CLK(NULL, "dss_dss_clk", "dss-clkctrl:0000:8"),
+-	DT_CLK(NULL, "dss_sys_clk", "dss-clkctrl:0000:10"),
+-	DT_CLK(NULL, "gpio1_dbclk", "wkupaon-clkctrl:0018:8"),
+-	DT_CLK(NULL, "gpio2_dbclk", "l4per-clkctrl:0040:8"),
+-	DT_CLK(NULL, "gpio3_dbclk", "l4per-clkctrl:0048:8"),
+-	DT_CLK(NULL, "gpio4_dbclk", "l4per-clkctrl:0050:8"),
+-	DT_CLK(NULL, "gpio5_dbclk", "l4per-clkctrl:0058:8"),
+-	DT_CLK(NULL, "gpio6_dbclk", "l4per-clkctrl:0060:8"),
+-	DT_CLK(NULL, "gpio7_dbclk", "l4per-clkctrl:00f0:8"),
+-	DT_CLK(NULL, "gpio8_dbclk", "l4per-clkctrl:00f8:8"),
+-	DT_CLK(NULL, "mcbsp1_gfclk", "abe-clkctrl:0028:24"),
+-	DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe-clkctrl:0028:26"),
+-	DT_CLK(NULL, "mcbsp2_gfclk", "abe-clkctrl:0030:24"),
+-	DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe-clkctrl:0030:26"),
+-	DT_CLK(NULL, "mcbsp3_gfclk", "abe-clkctrl:0038:24"),
+-	DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe-clkctrl:0038:26"),
+-	DT_CLK(NULL, "mmc1_32khz_clk", "l3init-clkctrl:0008:8"),
+-	DT_CLK(NULL, "mmc1_fclk", "l3init-clkctrl:0008:25"),
+-	DT_CLK(NULL, "mmc1_fclk_mux", "l3init-clkctrl:0008:24"),
+-	DT_CLK(NULL, "mmc2_fclk", "l3init-clkctrl:0010:25"),
+-	DT_CLK(NULL, "mmc2_fclk_mux", "l3init-clkctrl:0010:24"),
+-	DT_CLK(NULL, "sata_ref_clk", "l3init-clkctrl:0068:8"),
+-	DT_CLK(NULL, "timer10_gfclk_mux", "l4per-clkctrl:0008:24"),
+-	DT_CLK(NULL, "timer11_gfclk_mux", "l4per-clkctrl:0010:24"),
+-	DT_CLK(NULL, "timer1_gfclk_mux", "wkupaon-clkctrl:0020:24"),
+-	DT_CLK(NULL, "timer2_gfclk_mux", "l4per-clkctrl:0018:24"),
+-	DT_CLK(NULL, "timer3_gfclk_mux", "l4per-clkctrl:0020:24"),
+-	DT_CLK(NULL, "timer4_gfclk_mux", "l4per-clkctrl:0028:24"),
+-	DT_CLK(NULL, "timer5_gfclk_mux", "abe-clkctrl:0048:24"),
+-	DT_CLK(NULL, "timer6_gfclk_mux", "abe-clkctrl:0050:24"),
+-	DT_CLK(NULL, "timer7_gfclk_mux", "abe-clkctrl:0058:24"),
+-	DT_CLK(NULL, "timer8_gfclk_mux", "abe-clkctrl:0060:24"),
+-	DT_CLK(NULL, "timer9_gfclk_mux", "l4per-clkctrl:0030:24"),
+-	DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3init-clkctrl:0038:13"),
+-	DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3init-clkctrl:0038:14"),
+-	DT_CLK(NULL, "usb_host_hs_hsic480m_p3_clk", "l3init-clkctrl:0038:7"),
+-	DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3init-clkctrl:0038:11"),
+-	DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3init-clkctrl:0038:12"),
+-	DT_CLK(NULL, "usb_host_hs_hsic60m_p3_clk", "l3init-clkctrl:0038:6"),
+-	DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3init-clkctrl:0038:8"),
+-	DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3init-clkctrl:0038:9"),
+-	DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3init-clkctrl:0038:10"),
+-	DT_CLK(NULL, "usb_otg_ss_refclk960m", "l3init-clkctrl:00d0:8"),
+-	DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3init-clkctrl:0048:8"),
+-	DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3init-clkctrl:0048:9"),
+-	DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3init-clkctrl:0048:10"),
+-	DT_CLK(NULL, "utmi_p1_gfclk", "l3init-clkctrl:0038:24"),
+-	DT_CLK(NULL, "utmi_p2_gfclk", "l3init-clkctrl:0038:25"),
++	DT_CLK(NULL, "dmic_gfclk", "abe_cm:0018:24"),
++	DT_CLK(NULL, "dmic_sync_mux_ck", "abe_cm:0018:26"),
++	DT_CLK(NULL, "dss_32khz_clk", "dss_cm:0000:11"),
++	DT_CLK(NULL, "dss_48mhz_clk", "dss_cm:0000:9"),
++	DT_CLK(NULL, "dss_dss_clk", "dss_cm:0000:8"),
++	DT_CLK(NULL, "dss_sys_clk", "dss_cm:0000:10"),
++	DT_CLK(NULL, "gpio1_dbclk", "wkupaon_cm:0018:8"),
++	DT_CLK(NULL, "gpio2_dbclk", "l4per_cm:0040:8"),
++	DT_CLK(NULL, "gpio3_dbclk", "l4per_cm:0048:8"),
++	DT_CLK(NULL, "gpio4_dbclk", "l4per_cm:0050:8"),
++	DT_CLK(NULL, "gpio5_dbclk", "l4per_cm:0058:8"),
++	DT_CLK(NULL, "gpio6_dbclk", "l4per_cm:0060:8"),
++	DT_CLK(NULL, "gpio7_dbclk", "l4per_cm:00f0:8"),
++	DT_CLK(NULL, "gpio8_dbclk", "l4per_cm:00f8:8"),
++	DT_CLK(NULL, "mcbsp1_gfclk", "abe_cm:0028:24"),
++	DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe_cm:0028:26"),
++	DT_CLK(NULL, "mcbsp2_gfclk", "abe_cm:0030:24"),
++	DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe_cm:0030:26"),
++	DT_CLK(NULL, "mcbsp3_gfclk", "abe_cm:0038:24"),
++	DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe_cm:0038:26"),
++	DT_CLK(NULL, "mmc1_32khz_clk", "l3init_cm:0008:8"),
++	DT_CLK(NULL, "mmc1_fclk", "l3init_cm:0008:25"),
++	DT_CLK(NULL, "mmc1_fclk_mux", "l3init_cm:0008:24"),
++	DT_CLK(NULL, "mmc2_fclk", "l3init_cm:0010:25"),
++	DT_CLK(NULL, "mmc2_fclk_mux", "l3init_cm:0010:24"),
++	DT_CLK(NULL, "sata_ref_clk", "l3init_cm:0068:8"),
++	DT_CLK(NULL, "timer10_gfclk_mux", "l4per_cm:0008:24"),
++	DT_CLK(NULL, "timer11_gfclk_mux", "l4per_cm:0010:24"),
++	DT_CLK(NULL, "timer1_gfclk_mux", "wkupaon_cm:0020:24"),
++	DT_CLK(NULL, "timer2_gfclk_mux", "l4per_cm:0018:24"),
++	DT_CLK(NULL, "timer3_gfclk_mux", "l4per_cm:0020:24"),
++	DT_CLK(NULL, "timer4_gfclk_mux", "l4per_cm:0028:24"),
++	DT_CLK(NULL, "timer5_gfclk_mux", "abe_cm:0048:24"),
++	DT_CLK(NULL, "timer6_gfclk_mux", "abe_cm:0050:24"),
++	DT_CLK(NULL, "timer7_gfclk_mux", "abe_cm:0058:24"),
++	DT_CLK(NULL, "timer8_gfclk_mux", "abe_cm:0060:24"),
++	DT_CLK(NULL, "timer9_gfclk_mux", "l4per_cm:0030:24"),
++	DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3init_cm:0038:13"),
++	DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3init_cm:0038:14"),
++	DT_CLK(NULL, "usb_host_hs_hsic480m_p3_clk", "l3init_cm:0038:7"),
++	DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3init_cm:0038:11"),
++	DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3init_cm:0038:12"),
++	DT_CLK(NULL, "usb_host_hs_hsic60m_p3_clk", "l3init_cm:0038:6"),
++	DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3init_cm:0038:8"),
++	DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3init_cm:0038:9"),
++	DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3init_cm:0038:10"),
++	DT_CLK(NULL, "usb_otg_ss_refclk960m", "l3init_cm:00d0:8"),
++	DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3init_cm:0048:8"),
++	DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3init_cm:0048:9"),
++	DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3init_cm:0048:10"),
++	DT_CLK(NULL, "utmi_p1_gfclk", "l3init_cm:0038:24"),
++	DT_CLK(NULL, "utmi_p2_gfclk", "l3init_cm:0038:25"),
+ 	{ .node_name = NULL },
+ };
+ 
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 08a85c559f795..864c484bde1b4 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -511,6 +511,10 @@ static void __init _ti_omap4_clkctrl_setup(struct device_node *node)
+ 	char *c;
+ 	u16 soc_mask = 0;
+ 
++	if (!(ti_clk_get_features()->flags & TI_CLK_CLKCTRL_COMPAT) &&
++	    of_node_name_eq(node, "clk"))
++		ti_clk_features.flags |= TI_CLK_CLKCTRL_COMPAT;
++
+ 	addrp = of_get_address(node, 0, NULL, NULL);
+ 	addr = (u32)of_translate_address(node, addrp);
+ 
+diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
+index cab4719e4cf9c..e76adc31ab66f 100644
+--- a/drivers/dma/xilinx/xilinx_dma.c
++++ b/drivers/dma/xilinx/xilinx_dma.c
+@@ -3020,9 +3020,10 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+ 
+ 	/* Request and map I/O memory */
+ 	xdev->regs = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(xdev->regs))
+-		return PTR_ERR(xdev->regs);
+-
++	if (IS_ERR(xdev->regs)) {
++		err = PTR_ERR(xdev->regs);
++		goto disable_clks;
++	}
+ 	/* Retrieve the DMA engine properties from the device tree */
+ 	xdev->max_buffer_len = GENMASK(XILINX_DMA_MAX_TRANS_LEN_MAX - 1, 0);
+ 	xdev->s2mm_chan_id = xdev->dma_config->max_channels / 2;
+@@ -3050,7 +3051,7 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+ 		if (err < 0) {
+ 			dev_err(xdev->dev,
+ 				"missing xlnx,num-fstores property\n");
+-			return err;
++			goto disable_clks;
+ 		}
+ 
+ 		err = of_property_read_u32(node, "xlnx,flush-fsync",
+@@ -3070,7 +3071,11 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+ 		xdev->ext_addr = false;
+ 
+ 	/* Set the dma mask bits */
+-	dma_set_mask_and_coherent(xdev->dev, DMA_BIT_MASK(addr_width));
++	err = dma_set_mask_and_coherent(xdev->dev, DMA_BIT_MASK(addr_width));
++	if (err < 0) {
++		dev_err(xdev->dev, "DMA mask error %d\n", err);
++		goto disable_clks;
++	}
+ 
+ 	/* Initialize the DMA engine */
+ 	xdev->common.dev = &pdev->dev;
+@@ -3115,7 +3120,7 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+ 	for_each_child_of_node(node, child) {
+ 		err = xilinx_dma_child_probe(xdev, child);
+ 		if (err < 0)
+-			goto disable_clks;
++			goto error;
+ 	}
+ 
+ 	if (xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
+@@ -3150,12 +3155,12 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
+-disable_clks:
+-	xdma_disable_allclks(xdev);
+ error:
+ 	for (i = 0; i < xdev->dma_config->max_channels; i++)
+ 		if (xdev->chan[i])
+ 			xilinx_dma_chan_remove(xdev->chan[i]);
++disable_clks:
++	xdma_disable_allclks(xdev);
+ 
+ 	return err;
+ }
+diff --git a/drivers/firmware/arm_scmi/scmi_pm_domain.c b/drivers/firmware/arm_scmi/scmi_pm_domain.c
+index a4e4aa9a35426..af74e521f89f3 100644
+--- a/drivers/firmware/arm_scmi/scmi_pm_domain.c
++++ b/drivers/firmware/arm_scmi/scmi_pm_domain.c
+@@ -106,9 +106,28 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
+ 	scmi_pd_data->domains = domains;
+ 	scmi_pd_data->num_domains = num_domains;
+ 
++	dev_set_drvdata(dev, scmi_pd_data);
++
+ 	return of_genpd_add_provider_onecell(np, scmi_pd_data);
+ }
+ 
++static void scmi_pm_domain_remove(struct scmi_device *sdev)
++{
++	int i;
++	struct genpd_onecell_data *scmi_pd_data;
++	struct device *dev = &sdev->dev;
++	struct device_node *np = dev->of_node;
++
++	of_genpd_del_provider(np);
++
++	scmi_pd_data = dev_get_drvdata(dev);
++	for (i = 0; i < scmi_pd_data->num_domains; i++) {
++		if (!scmi_pd_data->domains[i])
++			continue;
++		pm_genpd_remove(scmi_pd_data->domains[i]);
++	}
++}
++
+ static const struct scmi_device_id scmi_id_table[] = {
+ 	{ SCMI_PROTOCOL_POWER, "genpd" },
+ 	{ },
+@@ -118,6 +137,7 @@ MODULE_DEVICE_TABLE(scmi, scmi_id_table);
+ static struct scmi_driver scmi_power_domain_driver = {
+ 	.name = "scmi-power-domain",
+ 	.probe = scmi_pm_domain_probe,
++	.remove = scmi_pm_domain_remove,
+ 	.id_table = scmi_id_table,
+ };
+ module_scmi_driver(scmi_power_domain_driver);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index 3ac6c7b65a45a..e33fe0207b9e5 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -2047,7 +2047,8 @@ static void dce110_setup_audio_dto(
+ 			continue;
+ 		if (pipe_ctx->stream->signal != SIGNAL_TYPE_HDMI_TYPE_A)
+ 			continue;
+-		if (pipe_ctx->stream_res.audio != NULL) {
++		if (pipe_ctx->stream_res.audio != NULL &&
++			pipe_ctx->stream_res.audio->enabled == false) {
+ 			struct audio_output audio_output;
+ 
+ 			build_audio_output(context, pipe_ctx, &audio_output);
+@@ -2075,7 +2076,8 @@ static void dce110_setup_audio_dto(
+ 			if (!dc_is_dp_signal(pipe_ctx->stream->signal))
+ 				continue;
+ 
+-			if (pipe_ctx->stream_res.audio != NULL) {
++			if (pipe_ctx->stream_res.audio != NULL &&
++				pipe_ctx->stream_res.audio->enabled == false) {
+ 				struct audio_output audio_output;
+ 
+ 				build_audio_output(context, pipe_ctx, &audio_output);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index 3d778760a3b55..8f66eef0c6837 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1481,6 +1481,7 @@ static void dcn20_update_dchubp_dpp(
+ 	/* Any updates are handled in dc interface, just need
+ 	 * to apply existing for plane enable / opp change */
+ 	if (pipe_ctx->update_flags.bits.enable || pipe_ctx->update_flags.bits.opp_changed
++			|| pipe_ctx->update_flags.bits.plane_changed
+ 			|| pipe_ctx->stream->update_flags.bits.gamut_remap
+ 			|| pipe_ctx->stream->update_flags.bits.out_csc) {
+ #if defined(CONFIG_DRM_AMD_DC_DCN3_0)
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index ba101afcfc27f..70dedc0f7827c 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -112,6 +112,8 @@ static const struct xpad_device {
+ 	u8 xtype;
+ } xpad_device[] = {
+ 	{ 0x0079, 0x18d4, "GPD Win 2 X-Box Controller", 0, XTYPE_XBOX360 },
++	{ 0x03eb, 0xff01, "Wooting One (Legacy)", 0, XTYPE_XBOX360 },
++	{ 0x03eb, 0xff02, "Wooting Two (Legacy)", 0, XTYPE_XBOX360 },
+ 	{ 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ 	{ 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ 	{ 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX },
+@@ -242,6 +244,7 @@ static const struct xpad_device {
+ 	{ 0x0f0d, 0x0063, "Hori Real Arcade Pro Hayabusa (USA) Xbox One", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+ 	{ 0x0f0d, 0x0067, "HORIPAD ONE", 0, XTYPE_XBOXONE },
+ 	{ 0x0f0d, 0x0078, "Hori Real Arcade Pro V Kai Xbox One", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
++	{ 0x0f0d, 0x00c5, "Hori Fighting Commander ONE", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+ 	{ 0x0f30, 0x010b, "Philips Recoil", 0, XTYPE_XBOX },
+ 	{ 0x0f30, 0x0202, "Joytech Advanced Controller", 0, XTYPE_XBOX },
+ 	{ 0x0f30, 0x8888, "BigBen XBMiniPad Controller", 0, XTYPE_XBOX },
+@@ -258,6 +261,7 @@ static const struct xpad_device {
+ 	{ 0x1430, 0x8888, "TX6500+ Dance Pad (first generation)", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX },
+ 	{ 0x1430, 0xf801, "RedOctane Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x146b, 0x0601, "BigBen Interactive XBOX 360 Controller", 0, XTYPE_XBOX360 },
++	{ 0x146b, 0x0604, "Bigben Interactive DAIJA Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ 	{ 0x1532, 0x0037, "Razer Sabertooth", 0, XTYPE_XBOX360 },
+ 	{ 0x1532, 0x0a00, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+ 	{ 0x1532, 0x0a03, "Razer Wildcat", 0, XTYPE_XBOXONE },
+@@ -322,6 +326,7 @@ static const struct xpad_device {
+ 	{ 0x24c6, 0x5502, "Hori Fighting Stick VX Alt", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0x5503, "Hori Fighting Edge", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0x5506, "Hori SOULCALIBUR V Stick", 0, XTYPE_XBOX360 },
++	{ 0x24c6, 0x5510, "Hori Fighting Commander ONE (Xbox 360/PC Mode)", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0x550d, "Hori GEM Xbox controller", 0, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0x550e, "Hori Real Arcade Pro V Kai 360", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0x551a, "PowerA FUSION Pro Controller", 0, XTYPE_XBOXONE },
+@@ -331,6 +336,14 @@ static const struct xpad_device {
+ 	{ 0x24c6, 0x5b03, "Thrustmaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0x5d04, "Razer Sabertooth", 0, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0xfafe, "Rock Candy Gamepad for Xbox 360", 0, XTYPE_XBOX360 },
++	{ 0x2563, 0x058d, "OneXPlayer Gamepad", 0, XTYPE_XBOX360 },
++	{ 0x2dc8, 0x2000, "8BitDo Pro 2 Wired Controller fox Xbox", 0, XTYPE_XBOXONE },
++	{ 0x31e3, 0x1100, "Wooting One", 0, XTYPE_XBOX360 },
++	{ 0x31e3, 0x1200, "Wooting Two", 0, XTYPE_XBOX360 },
++	{ 0x31e3, 0x1210, "Wooting Lekker", 0, XTYPE_XBOX360 },
++	{ 0x31e3, 0x1220, "Wooting Two HE", 0, XTYPE_XBOX360 },
++	{ 0x31e3, 0x1300, "Wooting 60HE (AVR)", 0, XTYPE_XBOX360 },
++	{ 0x31e3, 0x1310, "Wooting 60HE (ARM)", 0, XTYPE_XBOX360 },
+ 	{ 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 },
+ 	{ 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX },
+ 	{ 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX },
+@@ -416,6 +429,7 @@ static const signed short xpad_abs_triggers[] = {
+ static const struct usb_device_id xpad_table[] = {
+ 	{ USB_INTERFACE_INFO('X', 'B', 0) },	/* X-Box USB-IF not approved class */
+ 	XPAD_XBOX360_VENDOR(0x0079),		/* GPD Win 2 Controller */
++	XPAD_XBOX360_VENDOR(0x03eb),		/* Wooting Keyboards (Legacy) */
+ 	XPAD_XBOX360_VENDOR(0x044f),		/* Thrustmaster X-Box 360 controllers */
+ 	XPAD_XBOX360_VENDOR(0x045e),		/* Microsoft X-Box 360 controllers */
+ 	XPAD_XBOXONE_VENDOR(0x045e),		/* Microsoft X-Box One controllers */
+@@ -426,6 +440,7 @@ static const struct usb_device_id xpad_table[] = {
+ 	{ USB_DEVICE(0x0738, 0x4540) },		/* Mad Catz Beat Pad */
+ 	XPAD_XBOXONE_VENDOR(0x0738),		/* Mad Catz FightStick TE 2 */
+ 	XPAD_XBOX360_VENDOR(0x07ff),		/* Mad Catz GamePad */
++	XPAD_XBOX360_VENDOR(0x0c12),		/* Zeroplus X-Box 360 controllers */
+ 	XPAD_XBOX360_VENDOR(0x0e6f),		/* 0x0e6f X-Box 360 controllers */
+ 	XPAD_XBOXONE_VENDOR(0x0e6f),		/* 0x0e6f X-Box One controllers */
+ 	XPAD_XBOX360_VENDOR(0x0f0d),		/* Hori Controllers */
+@@ -446,8 +461,12 @@ static const struct usb_device_id xpad_table[] = {
+ 	XPAD_XBOXONE_VENDOR(0x20d6),		/* PowerA Controllers */
+ 	XPAD_XBOX360_VENDOR(0x24c6),		/* PowerA Controllers */
+ 	XPAD_XBOXONE_VENDOR(0x24c6),		/* PowerA Controllers */
++	XPAD_XBOX360_VENDOR(0x2563),		/* OneXPlayer Gamepad */
++	XPAD_XBOX360_VENDOR(0x260d),		/* Dareu H101 */
++	XPAD_XBOXONE_VENDOR(0x2dc8),		/* 8BitDo Pro 2 Wired Controller for Xbox */
+ 	XPAD_XBOXONE_VENDOR(0x2e24),		/* Hyperkin Duke X-Box One pad */
+ 	XPAD_XBOX360_VENDOR(0x2f24),		/* GameSir Controllers */
++	XPAD_XBOX360_VENDOR(0x31e3),		/* Wooting Keyboards */
+ 	XPAD_XBOX360_VENDOR(0x3285),		/* Nacon GC-100 */
+ 	{ }
+ };
+@@ -1964,7 +1983,6 @@ static struct usb_driver xpad_driver = {
+ 	.disconnect	= xpad_disconnect,
+ 	.suspend	= xpad_suspend,
+ 	.resume		= xpad_resume,
+-	.reset_resume	= xpad_resume,
+ 	.id_table	= xpad_table,
+ };
+ 
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index d3844730eacaf..48eec5fe7397b 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -331,6 +331,22 @@ static bool pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
+ 	return false;
+ }
+ 
++static int pci_endpoint_test_validate_xfer_params(struct device *dev,
++		struct pci_endpoint_test_xfer_param *param, size_t alignment)
++{
++	if (!param->size) {
++		dev_dbg(dev, "Data size is zero\n");
++		return -EINVAL;
++	}
++
++	if (param->size > SIZE_MAX - alignment) {
++		dev_dbg(dev, "Maximum transfer data size exceeded\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test,
+ 				   unsigned long arg)
+ {
+@@ -362,9 +378,11 @@ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test,
+ 		return false;
+ 	}
+ 
++	err = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
++	if (err)
++		return false;
++
+ 	size = param.size;
+-	if (size > SIZE_MAX - alignment)
+-		goto err;
+ 
+ 	use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA);
+ 	if (use_dma)
+@@ -496,9 +514,11 @@ static bool pci_endpoint_test_write(struct pci_endpoint_test *test,
+ 		return false;
+ 	}
+ 
++	err = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
++	if (err)
++		return false;
++
+ 	size = param.size;
+-	if (size > SIZE_MAX - alignment)
+-		goto err;
+ 
+ 	use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA);
+ 	if (use_dma)
+@@ -594,9 +614,11 @@ static bool pci_endpoint_test_read(struct pci_endpoint_test *test,
+ 		return false;
+ 	}
+ 
++	err = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
++	if (err)
++		return false;
++
+ 	size = param.size;
+-	if (size > SIZE_MAX - alignment)
+-		goto err;
+ 
+ 	use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA);
+ 	if (use_dma)
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 899768ed1688d..868b121ce4f35 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -853,7 +853,8 @@ try_again:
+ 	 * the CCS bit is set as well. We deliberately deviate from the spec in
+ 	 * regards to this, which allows UHS-I to be supported for SDSC cards.
+ 	 */
+-	if (!mmc_host_is_spi(host) && rocr && (*rocr & 0x01000000)) {
++	if (!mmc_host_is_spi(host) && (ocr & SD_OCR_S18R) &&
++	    rocr && (*rocr & SD_ROCR_S18A)) {
+ 		err = mmc_set_uhs_voltage(host, pocr);
+ 		if (err == -EAGAIN) {
+ 			retries--;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_main.c b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+index 4af0cd9530de6..ff245f75fa3d1 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_main.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+@@ -89,11 +89,8 @@ static int aq_ndev_close(struct net_device *ndev)
+ 	int err = 0;
+ 
+ 	err = aq_nic_stop(aq_nic);
+-	if (err < 0)
+-		goto err_exit;
+ 	aq_nic_deinit(aq_nic, true);
+ 
+-err_exit:
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 50190ded7edc7..a6d4ff4760ad1 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -3675,6 +3675,8 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
+ 
+ 	rx_status.band = channel->band;
+ 	rx_status.rate_idx = nla_get_u32(info->attrs[HWSIM_ATTR_RX_RATE]);
++	if (rx_status.rate_idx >= data2->hw->wiphy->bands[rx_status.band]->n_bitrates)
++		goto out;
+ 	rx_status.signal = nla_get_u32(info->attrs[HWSIM_ATTR_SIGNAL]);
+ 
+ 	hdr = (void *)skb->data;
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index 4840886532ff7..7cbed0310c09f 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -1472,7 +1472,7 @@ static void qcom_glink_rx_close(struct qcom_glink *glink, unsigned int rcid)
+ 	cancel_work_sync(&channel->intent_work);
+ 
+ 	if (channel->rpdev) {
+-		strncpy(chinfo.name, channel->name, sizeof(chinfo.name));
++		strscpy_pad(chinfo.name, channel->name, sizeof(chinfo.name));
+ 		chinfo.src = RPMSG_ADDR_ANY;
+ 		chinfo.dst = RPMSG_ADDR_ANY;
+ 
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index 0b1e853d8c91a..b5167ef93abf9 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -1073,7 +1073,7 @@ static int qcom_smd_create_device(struct qcom_smd_channel *channel)
+ 
+ 	/* Assign public information to the rpmsg_device */
+ 	rpdev = &qsdev->rpdev;
+-	strncpy(rpdev->id.name, channel->name, RPMSG_NAME_SIZE);
++	strscpy_pad(rpdev->id.name, channel->name, RPMSG_NAME_SIZE);
+ 	rpdev->src = RPMSG_ADDR_ANY;
+ 	rpdev->dst = RPMSG_ADDR_ANY;
+ 
+@@ -1304,7 +1304,7 @@ static void qcom_channel_state_worker(struct work_struct *work)
+ 
+ 		spin_unlock_irqrestore(&edge->channels_lock, flags);
+ 
+-		strncpy(chinfo.name, channel->name, sizeof(chinfo.name));
++		strscpy_pad(chinfo.name, channel->name, sizeof(chinfo.name));
+ 		chinfo.src = RPMSG_ADDR_ANY;
+ 		chinfo.dst = RPMSG_ADDR_ANY;
+ 		rpmsg_unregister_device(&edge->dev, &chinfo);
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index e64457f53da86..de5b6453827c0 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3671,11 +3671,6 @@ err2:
+ err1:
+ 	scsi_host_put(lport->host);
+ err0:
+-	if (qedf) {
+-		QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Probe done.\n");
+-
+-		clear_bit(QEDF_PROBING, &qedf->flags);
+-	}
+ 	return rc;
+ }
+ 
+diff --git a/drivers/scsi/stex.c b/drivers/scsi/stex.c
+index d4f10c0d813cf..a3bce11ed4b4b 100644
+--- a/drivers/scsi/stex.c
++++ b/drivers/scsi/stex.c
+@@ -668,16 +668,17 @@ stex_queuecommand_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
+ 		return 0;
+ 	case PASSTHRU_CMD:
+ 		if (cmd->cmnd[1] == PASSTHRU_GET_DRVVER) {
+-			struct st_drvver ver;
++			const struct st_drvver ver = {
++				.major = ST_VER_MAJOR,
++				.minor = ST_VER_MINOR,
++				.oem = ST_OEM,
++				.build = ST_BUILD_VER,
++				.signature[0] = PASSTHRU_SIGNATURE,
++				.console_id = host->max_id - 1,
++				.host_no = hba->host->host_no,
++			};
+ 			size_t cp_len = sizeof(ver);
+ 
+-			ver.major = ST_VER_MAJOR;
+-			ver.minor = ST_VER_MINOR;
+-			ver.oem = ST_OEM;
+-			ver.build = ST_BUILD_VER;
+-			ver.signature[0] = PASSTHRU_SIGNATURE;
+-			ver.console_id = host->max_id - 1;
+-			ver.host_no = hba->host->host_no;
+ 			cp_len = scsi_sg_copy_from_buffer(cmd, &ver, cp_len);
+ 			cmd->result = sizeof(ver) == cp_len ?
+ 				DID_OK << 16 | COMMAND_COMPLETE << 8 :
+diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
+index f48a23adbc35d..094e812e9e692 100644
+--- a/drivers/usb/mon/mon_bin.c
++++ b/drivers/usb/mon/mon_bin.c
+@@ -1268,6 +1268,11 @@ static int mon_bin_mmap(struct file *filp, struct vm_area_struct *vma)
+ {
+ 	/* don't do anything here: "fault" will set up page table entries */
+ 	vma->vm_ops = &mon_bin_vm_ops;
++
++	if (vma->vm_flags & VM_WRITE)
++		return -EPERM;
++
++	vma->vm_flags &= ~VM_MAYWRITE;
+ 	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+ 	vma->vm_private_data = filp->private_data;
+ 	mon_bin_vma_open(vma);
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 5480bacba39fc..3bfa395c31120 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1320,8 +1320,7 @@ static u32 get_ftdi_divisor(struct tty_struct *tty,
+ 		case 38400: div_value = ftdi_sio_b38400; break;
+ 		case 57600: div_value = ftdi_sio_b57600;  break;
+ 		case 115200: div_value = ftdi_sio_b115200; break;
+-		} /* baud */
+-		if (div_value == 0) {
++		default:
+ 			dev_dbg(dev, "%s - Baudrate (%d) requested is not supported\n",
+ 				__func__,  baud);
+ 			div_value = ftdi_sio_b9600;
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index 586ef5551e76e..b1e844bf31f81 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -177,6 +177,7 @@ static const struct usb_device_id id_table[] = {
+ 	{DEVICE_SWI(0x413c, 0x81b3)},	/* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
+ 	{DEVICE_SWI(0x413c, 0x81b5)},	/* Dell Wireless 5811e QDL */
+ 	{DEVICE_SWI(0x413c, 0x81b6)},	/* Dell Wireless 5811e QDL */
++	{DEVICE_SWI(0x413c, 0x81c2)},	/* Dell Wireless 5811e */
+ 	{DEVICE_SWI(0x413c, 0x81cb)},	/* Dell Wireless 5816e QDL */
+ 	{DEVICE_SWI(0x413c, 0x81cc)},	/* Dell Wireless 5816e */
+ 	{DEVICE_SWI(0x413c, 0x81cf)},   /* Dell Wireless 5819 */
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 93d986856f1c9..943655e36a799 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -703,6 +703,12 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ 	if (dentry->d_name.len > NAME_MAX)
+ 		return -ENAMETOOLONG;
+ 
++	/*
++	 * Do not truncate the file, since atomic_open is called before the
++	 * permission check. The caller will do the truncation afterward.
++	 */
++	flags &= ~O_TRUNC;
++
+ 	if (flags & O_CREAT) {
+ 		if (ceph_quota_is_max_files_exceeded(dir))
+ 			return -EDQUOT;
+@@ -769,9 +775,7 @@ retry:
+ 	}
+ 
+ 	set_bit(CEPH_MDS_R_PARENT_LOCKED, &req->r_req_flags);
+-	err = ceph_mdsc_do_request(mdsc,
+-				   (flags & (O_CREAT|O_TRUNC)) ? dir : NULL,
+-				   req);
++	err = ceph_mdsc_do_request(mdsc, (flags & O_CREAT) ? dir : NULL, req);
+ 	err = ceph_handle_snapdir(req, dentry, err);
+ 	if (err)
+ 		goto out_req;
+diff --git a/fs/inode.c b/fs/inode.c
+index 638d5d5bf42df..9f49e0bdc2f77 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -168,8 +168,6 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
+ 	inode->i_wb_frn_history = 0;
+ #endif
+ 
+-	if (security_inode_alloc(inode))
+-		goto out;
+ 	spin_lock_init(&inode->i_lock);
+ 	lockdep_set_class(&inode->i_lock, &sb->s_type->i_lock_key);
+ 
+@@ -202,11 +200,12 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
+ 	inode->i_fsnotify_mask = 0;
+ #endif
+ 	inode->i_flctx = NULL;
++
++	if (unlikely(security_inode_alloc(inode)))
++		return -ENOMEM;
+ 	this_cpu_inc(nr_inodes);
+ 
+ 	return 0;
+-out:
+-	return -ENOMEM;
+ }
+ EXPORT_SYMBOL(inode_init_always);
+ 
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index 95684fa3c985a..fb594edc0837c 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -332,6 +332,7 @@ struct inode *nilfs_new_inode(struct inode *dir, umode_t mode)
+ 	struct inode *inode;
+ 	struct nilfs_inode_info *ii;
+ 	struct nilfs_root *root;
++	struct buffer_head *bh;
+ 	int err = -ENOMEM;
+ 	ino_t ino;
+ 
+@@ -347,11 +348,25 @@ struct inode *nilfs_new_inode(struct inode *dir, umode_t mode)
+ 	ii->i_state = BIT(NILFS_I_NEW);
+ 	ii->i_root = root;
+ 
+-	err = nilfs_ifile_create_inode(root->ifile, &ino, &ii->i_bh);
++	err = nilfs_ifile_create_inode(root->ifile, &ino, &bh);
+ 	if (unlikely(err))
+ 		goto failed_ifile_create_inode;
+ 	/* reference count of i_bh inherits from nilfs_mdt_read_block() */
+ 
++	if (unlikely(ino < NILFS_USER_INO)) {
++		nilfs_warn(sb,
++			   "inode bitmap is inconsistent for reserved inodes");
++		do {
++			brelse(bh);
++			err = nilfs_ifile_create_inode(root->ifile, &ino, &bh);
++			if (unlikely(err))
++				goto failed_ifile_create_inode;
++		} while (ino < NILFS_USER_INO);
++
++		nilfs_info(sb, "repaired inode bitmap for reserved inodes");
++	}
++	ii->i_bh = bh;
++
+ 	atomic64_inc(&root->inodes_count);
+ 	inode_init_owner(inode, dir, mode);
+ 	inode->i_ino = ino;
+@@ -444,6 +459,8 @@ int nilfs_read_inode_common(struct inode *inode,
+ 	inode->i_atime.tv_nsec = le32_to_cpu(raw_inode->i_mtime_nsec);
+ 	inode->i_ctime.tv_nsec = le32_to_cpu(raw_inode->i_ctime_nsec);
+ 	inode->i_mtime.tv_nsec = le32_to_cpu(raw_inode->i_mtime_nsec);
++	if (nilfs_is_metadata_file_inode(inode) && !S_ISREG(inode->i_mode))
++		return -EIO; /* this inode is for metadata and corrupted */
+ 	if (inode->i_nlink == 0)
+ 		return -ESTALE; /* this inode is deleted */
+ 
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 8350c2eaee75a..545f764d70b12 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -880,9 +880,11 @@ static int nilfs_segctor_create_checkpoint(struct nilfs_sc_info *sci)
+ 		nilfs_mdt_mark_dirty(nilfs->ns_cpfile);
+ 		nilfs_cpfile_put_checkpoint(
+ 			nilfs->ns_cpfile, nilfs->ns_cno, bh_cp);
+-	} else
+-		WARN_ON(err == -EINVAL || err == -ENOENT);
+-
++	} else if (err == -EINVAL || err == -ENOENT) {
++		nilfs_error(sci->sc_super,
++			    "checkpoint creation failed due to metadata corruption.");
++		err = -EIO;
++	}
+ 	return err;
+ }
+ 
+@@ -896,7 +898,11 @@ static int nilfs_segctor_fill_in_checkpoint(struct nilfs_sc_info *sci)
+ 	err = nilfs_cpfile_get_checkpoint(nilfs->ns_cpfile, nilfs->ns_cno, 0,
+ 					  &raw_cp, &bh_cp);
+ 	if (unlikely(err)) {
+-		WARN_ON(err == -EINVAL || err == -ENOENT);
++		if (err == -EINVAL || err == -ENOENT) {
++			nilfs_error(sci->sc_super,
++				    "checkpoint finalization failed due to metadata corruption.");
++			err = -EIO;
++		}
+ 		goto failed_ibh;
+ 	}
+ 	raw_cp->cp_snapshot_list.ssl_next = 0;
+@@ -2791,10 +2797,9 @@ int nilfs_attach_log_writer(struct super_block *sb, struct nilfs_root *root)
+ 	inode_attach_wb(nilfs->ns_bdev->bd_inode, NULL);
+ 
+ 	err = nilfs_segctor_start_thread(nilfs->ns_writer);
+-	if (err) {
+-		kfree(nilfs->ns_writer);
+-		nilfs->ns_writer = NULL;
+-	}
++	if (unlikely(err))
++		nilfs_detach_log_writer(sb);
++
+ 	return err;
+ }
+ 
+diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
+index 4cf524ccab430..ae2de4e1cd6fa 100644
+--- a/include/linux/compiler-gcc.h
++++ b/include/linux/compiler-gcc.h
+@@ -54,9 +54,6 @@
+ 
+ #define __compiletime_object_size(obj) __builtin_object_size(obj, 0)
+ 
+-#define __compiletime_warning(message) __attribute__((__warning__(message)))
+-#define __compiletime_error(message) __attribute__((__error__(message)))
+-
+ #if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__)
+ #define __latent_entropy __attribute__((latent_entropy))
+ #endif
+diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
+index b2a3f4f641a70..08eb06301791d 100644
+--- a/include/linux/compiler_attributes.h
++++ b/include/linux/compiler_attributes.h
+@@ -30,6 +30,7 @@
+ # define __GCC4_has_attribute___assume_aligned__      (__GNUC_MINOR__ >= 9)
+ # define __GCC4_has_attribute___copy__                0
+ # define __GCC4_has_attribute___designated_init__     0
++# define __GCC4_has_attribute___error__               1
+ # define __GCC4_has_attribute___externally_visible__  1
+ # define __GCC4_has_attribute___no_caller_saved_registers__ 0
+ # define __GCC4_has_attribute___noclone__             1
+@@ -37,6 +38,7 @@
+ # define __GCC4_has_attribute___no_sanitize_address__ (__GNUC_MINOR__ >= 8)
+ # define __GCC4_has_attribute___no_sanitize_undefined__ (__GNUC_MINOR__ >= 9)
+ # define __GCC4_has_attribute___fallthrough__         0
++# define __GCC4_has_attribute___warning__             1
+ #endif
+ 
+ /*
+@@ -136,6 +138,17 @@
+ # define __designated_init
+ #endif
+ 
++/*
++ * Optional: only supported since clang >= 14.0
++ *
++ *   gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-error-function-attribute
++ */
++#if __has_attribute(__error__)
++# define __compiletime_error(msg)       __attribute__((__error__(msg)))
++#else
++# define __compiletime_error(msg)
++#endif
++
+ /*
+  * Optional: not supported by clang
+  *
+@@ -272,6 +285,17 @@
+  */
+ #define __used                          __attribute__((__used__))
+ 
++/*
++ * Optional: only supported since clang >= 14.0
++ *
++ *   gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-warning-function-attribute
++ */
++#if __has_attribute(__warning__)
++# define __compiletime_warning(msg)     __attribute__((__warning__(msg)))
++#else
++# define __compiletime_warning(msg)
++#endif
++
+ /*
+  *   gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-weak-function-attribute
+  *   gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-weak-variable-attribute
+diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
+index 2a1c202baa1fe..eb2bda017ccb7 100644
+--- a/include/linux/compiler_types.h
++++ b/include/linux/compiler_types.h
+@@ -281,12 +281,6 @@ struct ftrace_likely_data {
+ #ifndef __compiletime_object_size
+ # define __compiletime_object_size(obj) -1
+ #endif
+-#ifndef __compiletime_warning
+-# define __compiletime_warning(message)
+-#endif
+-#ifndef __compiletime_error
+-# define __compiletime_error(message)
+-#endif
+ 
+ #ifdef __OPTIMIZE__
+ # define __compiletime_assert(condition, msg, prefix, suffix)		\
+diff --git a/include/net/ieee802154_netdev.h b/include/net/ieee802154_netdev.h
+index d0d188c3294bd..a8994f307fc38 100644
+--- a/include/net/ieee802154_netdev.h
++++ b/include/net/ieee802154_netdev.h
+@@ -15,6 +15,22 @@
+ #ifndef IEEE802154_NETDEVICE_H
+ #define IEEE802154_NETDEVICE_H
+ 
++#define IEEE802154_REQUIRED_SIZE(struct_type, member) \
++	(offsetof(typeof(struct_type), member) + \
++	sizeof(((typeof(struct_type) *)(NULL))->member))
++
++#define IEEE802154_ADDR_OFFSET \
++	offsetof(typeof(struct sockaddr_ieee802154), addr)
++
++#define IEEE802154_MIN_NAMELEN (IEEE802154_ADDR_OFFSET + \
++	IEEE802154_REQUIRED_SIZE(struct ieee802154_addr_sa, addr_type))
++
++#define IEEE802154_NAMELEN_SHORT (IEEE802154_ADDR_OFFSET + \
++	IEEE802154_REQUIRED_SIZE(struct ieee802154_addr_sa, short_addr))
++
++#define IEEE802154_NAMELEN_LONG (IEEE802154_ADDR_OFFSET + \
++	IEEE802154_REQUIRED_SIZE(struct ieee802154_addr_sa, hwaddr))
++
+ #include <net/af_ieee802154.h>
+ #include <linux/netdevice.h>
+ #include <linux/skbuff.h>
+@@ -165,6 +181,27 @@ static inline void ieee802154_devaddr_to_raw(void *raw, __le64 addr)
+ 	memcpy(raw, &temp, IEEE802154_ADDR_LEN);
+ }
+ 
++static inline int
++ieee802154_sockaddr_check_size(struct sockaddr_ieee802154 *daddr, int len)
++{
++	struct ieee802154_addr_sa *sa;
++
++	sa = &daddr->addr;
++	if (len < IEEE802154_MIN_NAMELEN)
++		return -EINVAL;
++	switch (sa->addr_type) {
++	case IEEE802154_ADDR_SHORT:
++		if (len < IEEE802154_NAMELEN_SHORT)
++			return -EINVAL;
++		break;
++	case IEEE802154_ADDR_LONG:
++		if (len < IEEE802154_NAMELEN_LONG)
++			return -EINVAL;
++		break;
++	}
++	return 0;
++}
++
+ static inline void ieee802154_addr_from_sa(struct ieee802154_addr *a,
+ 					   const struct ieee802154_addr_sa *sa)
+ {
+diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
+index 7a9a23e7a604a..c9a47d3d8f503 100644
+--- a/include/net/xsk_buff_pool.h
++++ b/include/net/xsk_buff_pool.h
+@@ -86,7 +86,7 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
+ 						struct xdp_umem *umem);
+ int xp_assign_dev(struct xsk_buff_pool *pool, struct net_device *dev,
+ 		  u16 queue_id, u16 flags);
+-int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem,
++int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_sock *umem_xs,
+ 			 struct net_device *dev, u16 queue_id);
+ void xp_destroy(struct xsk_buff_pool *pool);
+ void xp_release(struct xdp_buff_xsk *xskb);
+diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
+index 69ade4fb71aab..4d272e834ca2e 100644
+--- a/include/scsi/scsi_cmnd.h
++++ b/include/scsi/scsi_cmnd.h
+@@ -205,7 +205,7 @@ static inline unsigned int scsi_get_resid(struct scsi_cmnd *cmd)
+ 	for_each_sg(scsi_sglist(cmd), sg, nseg, __i)
+ 
+ static inline int scsi_sg_copy_from_buffer(struct scsi_cmnd *cmd,
+-					   void *buf, int buflen)
++					   const void *buf, int buflen)
+ {
+ 	return sg_copy_from_buffer(scsi_sglist(cmd), scsi_sg_count(cmd),
+ 				   buf, buflen);
+diff --git a/mm/gup.c b/mm/gup.c
+index 6cb7d8ae56f66..b47c751df069a 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -2128,8 +2128,28 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
+ }
+ 
+ #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
+-static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
+-			 unsigned int flags, struct page **pages, int *nr)
++/*
++ * Fast-gup relies on pte change detection to avoid concurrent pgtable
++ * operations.
++ *
++ * To pin the page, fast-gup needs to do below in order:
++ * (1) pin the page (by prefetching pte), then (2) check pte not changed.
++ *
++ * For the rest of pgtable operations where pgtable updates can be racy
++ * with fast-gup, we need to do (1) clear pte, then (2) check whether page
++ * is pinned.
++ *
++ * Above will work for all pte-level operations, including THP split.
++ *
++ * For THP collapse, it's a bit more complicated because fast-gup may be
++ * walking a pgtable page that is being freed (pte is still valid but pmd
++ * can be cleared already).  To avoid race in such condition, we need to
++ * also check pmd here to make sure pmd doesn't change (corresponds to
++ * pmdp_collapse_flush() in the THP collapse code path).
++ */
++static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
++			 unsigned long end, unsigned int flags,
++			 struct page **pages, int *nr)
+ {
+ 	struct dev_pagemap *pgmap = NULL;
+ 	int nr_start = *nr, ret = 0;
+@@ -2169,7 +2189,8 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
+ 		if (!head)
+ 			goto pte_unmap;
+ 
+-		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
++		if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
++		    unlikely(pte_val(pte) != pte_val(*ptep))) {
+ 			put_compound_head(head, 1, flags);
+ 			goto pte_unmap;
+ 		}
+@@ -2214,8 +2235,9 @@ pte_unmap:
+  * get_user_pages_fast_only implementation that can pin pages. Thus it's still
+  * useful to have gup_huge_pmd even if we can't operate on ptes.
+  */
+-static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
+-			 unsigned int flags, struct page **pages, int *nr)
++static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
++			 unsigned long end, unsigned int flags,
++			 struct page **pages, int *nr)
+ {
+ 	return 0;
+ }
+@@ -2522,7 +2544,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
+ 			if (!gup_huge_pd(__hugepd(pmd_val(pmd)), addr,
+ 					 PMD_SHIFT, next, flags, pages, nr))
+ 				return 0;
+-		} else if (!gup_pte_range(pmd, addr, next, flags, pages, nr))
++		} else if (!gup_pte_range(pmd, pmdp, addr, next, flags, pages, nr))
+ 			return 0;
+ 	} while (pmdp++, addr = next, addr != end);
+ 
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 969e57dde65f9..cf4dceb9682bf 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1144,10 +1144,12 @@ static void collapse_huge_page(struct mm_struct *mm,
+ 
+ 	pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
+ 	/*
+-	 * After this gup_fast can't run anymore. This also removes
+-	 * any huge TLB entry from the CPU so we won't allow
+-	 * huge and small TLB entries for the same virtual address
+-	 * to avoid the risk of CPU bugs in that area.
++	 * This removes any huge TLB entry from the CPU so we won't allow
++	 * huge and small TLB entries for the same virtual address to
++	 * avoid the risk of CPU bugs in that area.
++	 *
++	 * Parallel fast GUP is fine since fast GUP will back off when
++	 * it detects PMD is changed.
+ 	 */
+ 	_pmd = pmdp_collapse_flush(vma, address, pmd);
+ 	spin_unlock(pmd_ptl);
+diff --git a/net/ieee802154/socket.c b/net/ieee802154/socket.c
+index c25f7617770c8..7edec210780a3 100644
+--- a/net/ieee802154/socket.c
++++ b/net/ieee802154/socket.c
+@@ -201,8 +201,9 @@ static int raw_bind(struct sock *sk, struct sockaddr *_uaddr, int len)
+ 	int err = 0;
+ 	struct net_device *dev = NULL;
+ 
+-	if (len < sizeof(*uaddr))
+-		return -EINVAL;
++	err = ieee802154_sockaddr_check_size(uaddr, len);
++	if (err < 0)
++		return err;
+ 
+ 	uaddr = (struct sockaddr_ieee802154 *)_uaddr;
+ 	if (uaddr->family != AF_IEEE802154)
+@@ -494,7 +495,8 @@ static int dgram_bind(struct sock *sk, struct sockaddr *uaddr, int len)
+ 
+ 	ro->bound = 0;
+ 
+-	if (len < sizeof(*addr))
++	err = ieee802154_sockaddr_check_size(addr, len);
++	if (err < 0)
+ 		goto out;
+ 
+ 	if (addr->family != AF_IEEE802154)
+@@ -565,8 +567,9 @@ static int dgram_connect(struct sock *sk, struct sockaddr *uaddr,
+ 	struct dgram_sock *ro = dgram_sk(sk);
+ 	int err = 0;
+ 
+-	if (len < sizeof(*addr))
+-		return -EINVAL;
++	err = ieee802154_sockaddr_check_size(addr, len);
++	if (err < 0)
++		return err;
+ 
+ 	if (addr->family != AF_IEEE802154)
+ 		return -EINVAL;
+@@ -605,6 +608,7 @@ static int dgram_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ 	struct ieee802154_mac_cb *cb;
+ 	struct dgram_sock *ro = dgram_sk(sk);
+ 	struct ieee802154_addr dst_addr;
++	DECLARE_SOCKADDR(struct sockaddr_ieee802154*, daddr, msg->msg_name);
+ 	int hlen, tlen;
+ 	int err;
+ 
+@@ -613,10 +617,20 @@ static int dgram_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	if (!ro->connected && !msg->msg_name)
+-		return -EDESTADDRREQ;
+-	else if (ro->connected && msg->msg_name)
+-		return -EISCONN;
++	if (msg->msg_name) {
++		if (ro->connected)
++			return -EISCONN;
++		if (msg->msg_namelen < IEEE802154_MIN_NAMELEN)
++			return -EINVAL;
++		err = ieee802154_sockaddr_check_size(daddr, msg->msg_namelen);
++		if (err < 0)
++			return err;
++		ieee802154_addr_from_sa(&dst_addr, &daddr->addr);
++	} else {
++		if (!ro->connected)
++			return -EDESTADDRREQ;
++		dst_addr = ro->dst_addr;
++	}
+ 
+ 	if (!ro->bound)
+ 		dev = dev_getfirstbyhwtype(sock_net(sk), ARPHRD_IEEE802154);
+@@ -652,16 +666,6 @@ static int dgram_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ 	cb = mac_cb_init(skb);
+ 	cb->type = IEEE802154_FC_TYPE_DATA;
+ 	cb->ackreq = ro->want_ack;
+-
+-	if (msg->msg_name) {
+-		DECLARE_SOCKADDR(struct sockaddr_ieee802154*,
+-				 daddr, msg->msg_name);
+-
+-		ieee802154_addr_from_sa(&dst_addr, &daddr->addr);
+-	} else {
+-		dst_addr = ro->dst_addr;
+-	}
+-
+ 	cb->secen = ro->secen;
+ 	cb->secen_override = ro->secen_override;
+ 	cb->seclevel = ro->seclevel;
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index e991abb45f68f..97a63b940482d 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -1975,10 +1975,11 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
+ 
+ 		if (mmie_keyidx < NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS ||
+ 		    mmie_keyidx >= NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS +
+-		    NUM_DEFAULT_BEACON_KEYS) {
+-			cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
+-						     skb->data,
+-						     skb->len);
++				   NUM_DEFAULT_BEACON_KEYS) {
++			if (rx->sdata->dev)
++				cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
++							     skb->data,
++							     skb->len);
+ 			return RX_DROP_MONITOR; /* unexpected BIP keyidx */
+ 		}
+ 
+@@ -2126,7 +2127,8 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
+ 	/* either the frame has been decrypted or will be dropped */
+ 	status->flag |= RX_FLAG_DECRYPTED;
+ 
+-	if (unlikely(ieee80211_is_beacon(fc) && result == RX_DROP_UNUSABLE))
++	if (unlikely(ieee80211_is_beacon(fc) && result == RX_DROP_UNUSABLE &&
++		     rx->sdata->dev))
+ 		cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
+ 					     skb->data, skb->len);
+ 
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index a1f129292ad88..11d5686893c6a 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1409,6 +1409,8 @@ static size_t ieee802_11_find_bssid_profile(const u8 *start, size_t len,
+ 	for_each_element_id(elem, WLAN_EID_MULTIPLE_BSSID, start, len) {
+ 		if (elem->datalen < 2)
+ 			continue;
++		if (elem->data[0] < 1 || elem->data[0] > 8)
++			continue;
+ 
+ 		for_each_element(sub, elem->data + 1, elem->datalen - 1) {
+ 			u8 new_bssid[ETH_ALEN];
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 6dc9b7e22b71d..22d169923261f 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -143,18 +143,12 @@ static inline void bss_ref_get(struct cfg80211_registered_device *rdev,
+ 	lockdep_assert_held(&rdev->bss_lock);
+ 
+ 	bss->refcount++;
+-	if (bss->pub.hidden_beacon_bss) {
+-		bss = container_of(bss->pub.hidden_beacon_bss,
+-				   struct cfg80211_internal_bss,
+-				   pub);
+-		bss->refcount++;
+-	}
+-	if (bss->pub.transmitted_bss) {
+-		bss = container_of(bss->pub.transmitted_bss,
+-				   struct cfg80211_internal_bss,
+-				   pub);
+-		bss->refcount++;
+-	}
++
++	if (bss->pub.hidden_beacon_bss)
++		bss_from_pub(bss->pub.hidden_beacon_bss)->refcount++;
++
++	if (bss->pub.transmitted_bss)
++		bss_from_pub(bss->pub.transmitted_bss)->refcount++;
+ }
+ 
+ static inline void bss_ref_put(struct cfg80211_registered_device *rdev,
+@@ -304,7 +298,8 @@ static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
+ 	tmp_old = cfg80211_find_ie(WLAN_EID_SSID, ie, ielen);
+ 	tmp_old = (tmp_old) ? tmp_old + tmp_old[1] + 2 : ie;
+ 
+-	while (tmp_old + tmp_old[1] + 2 - ie <= ielen) {
++	while (tmp_old + 2 - ie <= ielen &&
++	       tmp_old + tmp_old[1] + 2 - ie <= ielen) {
+ 		if (tmp_old[0] == 0) {
+ 			tmp_old++;
+ 			continue;
+@@ -364,7 +359,8 @@ static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
+ 	 * copied to new ie, skip ssid, capability, bssid-index ie
+ 	 */
+ 	tmp_new = sub_copy;
+-	while (tmp_new + tmp_new[1] + 2 - sub_copy <= subie_len) {
++	while (tmp_new + 2 - sub_copy <= subie_len &&
++	       tmp_new + tmp_new[1] + 2 - sub_copy <= subie_len) {
+ 		if (!(tmp_new[0] == WLAN_EID_NON_TX_BSSID_CAP ||
+ 		      tmp_new[0] == WLAN_EID_SSID)) {
+ 			memcpy(pos, tmp_new, tmp_new[1] + 2);
+@@ -429,6 +425,15 @@ cfg80211_add_nontrans_list(struct cfg80211_bss *trans_bss,
+ 
+ 	rcu_read_unlock();
+ 
++	/*
++	 * This is a bit weird - it's not on the list, but already on another
++	 * one! The only way that could happen is if there's some BSSID/SSID
++	 * shared by multiple APs in their multi-BSSID profiles, potentially
++	 * with hidden SSID mixed in ... ignore it.
++	 */
++	if (!list_empty(&nontrans_bss->nontrans_list))
++		return -EINVAL;
++
+ 	/* add to the list */
+ 	list_add_tail(&nontrans_bss->nontrans_list, &trans_bss->nontrans_list);
+ 	return 0;
+@@ -1597,6 +1602,23 @@ struct cfg80211_non_tx_bss {
+ 	u8 bssid_index;
+ };
+ 
++static void cfg80211_update_hidden_bsses(struct cfg80211_internal_bss *known,
++					 const struct cfg80211_bss_ies *new_ies,
++					 const struct cfg80211_bss_ies *old_ies)
++{
++	struct cfg80211_internal_bss *bss;
++
++	/* Assign beacon IEs to all sub entries */
++	list_for_each_entry(bss, &known->hidden_list, hidden_list) {
++		const struct cfg80211_bss_ies *ies;
++
++		ies = rcu_access_pointer(bss->pub.beacon_ies);
++		WARN_ON(ies != old_ies);
++
++		rcu_assign_pointer(bss->pub.beacon_ies, new_ies);
++	}
++}
++
+ static bool
+ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
+ 			  struct cfg80211_internal_bss *known,
+@@ -1620,7 +1642,6 @@ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
+ 			kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
+ 	} else if (rcu_access_pointer(new->pub.beacon_ies)) {
+ 		const struct cfg80211_bss_ies *old;
+-		struct cfg80211_internal_bss *bss;
+ 
+ 		if (known->pub.hidden_beacon_bss &&
+ 		    !list_empty(&known->hidden_list)) {
+@@ -1648,16 +1669,7 @@ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
+ 		if (old == rcu_access_pointer(known->pub.ies))
+ 			rcu_assign_pointer(known->pub.ies, new->pub.beacon_ies);
+ 
+-		/* Assign beacon IEs to all sub entries */
+-		list_for_each_entry(bss, &known->hidden_list, hidden_list) {
+-			const struct cfg80211_bss_ies *ies;
+-
+-			ies = rcu_access_pointer(bss->pub.beacon_ies);
+-			WARN_ON(ies != old);
+-
+-			rcu_assign_pointer(bss->pub.beacon_ies,
+-					   new->pub.beacon_ies);
+-		}
++		cfg80211_update_hidden_bsses(known, new->pub.beacon_ies, old);
+ 
+ 		if (old)
+ 			kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
+@@ -1734,6 +1746,8 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 		new->refcount = 1;
+ 		INIT_LIST_HEAD(&new->hidden_list);
+ 		INIT_LIST_HEAD(&new->pub.nontrans_list);
++		/* we'll set this later if it was non-NULL */
++		new->pub.transmitted_bss = NULL;
+ 
+ 		if (rcu_access_pointer(tmp->pub.proberesp_ies)) {
+ 			hidden = rb_find_bss(rdev, tmp, BSS_CMP_HIDE_ZLEN);
+@@ -1971,10 +1985,15 @@ cfg80211_inform_single_bss_data(struct wiphy *wiphy,
+ 		spin_lock_bh(&rdev->bss_lock);
+ 		if (cfg80211_add_nontrans_list(non_tx_data->tx_bss,
+ 					       &res->pub)) {
+-			if (__cfg80211_unlink_bss(rdev, res))
++			if (__cfg80211_unlink_bss(rdev, res)) {
+ 				rdev->bss_generation++;
++				res = NULL;
++			}
+ 		}
+ 		spin_unlock_bh(&rdev->bss_lock);
++
++		if (!res)
++			return NULL;
+ 	}
+ 
+ 	trace_cfg80211_return_bss(&res->pub);
+@@ -2093,6 +2112,8 @@ static void cfg80211_parse_mbssid_data(struct wiphy *wiphy,
+ 	for_each_element_id(elem, WLAN_EID_MULTIPLE_BSSID, ie, ielen) {
+ 		if (elem->datalen < 4)
+ 			continue;
++		if (elem->data[0] < 1 || (int)elem->data[0] > 8)
++			continue;
+ 		for_each_element(sub, elem->data + 1, elem->datalen - 1) {
+ 			u8 profile_len;
+ 
+@@ -2228,7 +2249,7 @@ cfg80211_update_notlisted_nontrans(struct wiphy *wiphy,
+ 	size_t new_ie_len;
+ 	struct cfg80211_bss_ies *new_ies;
+ 	const struct cfg80211_bss_ies *old;
+-	u8 cpy_len;
++	size_t cpy_len;
+ 
+ 	lockdep_assert_held(&wiphy_to_rdev(wiphy)->bss_lock);
+ 
+@@ -2295,6 +2316,8 @@ cfg80211_update_notlisted_nontrans(struct wiphy *wiphy,
+ 	} else {
+ 		old = rcu_access_pointer(nontrans_bss->beacon_ies);
+ 		rcu_assign_pointer(nontrans_bss->beacon_ies, new_ies);
++		cfg80211_update_hidden_bsses(bss_from_pub(nontrans_bss),
++					     new_ies, old);
+ 		rcu_assign_pointer(nontrans_bss->ies, new_ies);
+ 		if (old)
+ 			kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index ca4716b92774b..691841dc6d334 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -742,8 +742,8 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ 				goto out_unlock;
+ 			}
+ 
+-			err = xp_assign_dev_shared(xs->pool, umem_xs->umem,
+-						   dev, qid);
++			err = xp_assign_dev_shared(xs->pool, umem_xs, dev,
++						   qid);
+ 			if (err) {
+ 				xp_destroy(xs->pool);
+ 				xs->pool = NULL;
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index e63a285a98565..c347e52f58df8 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -198,17 +198,18 @@ int xp_assign_dev(struct xsk_buff_pool *pool, struct net_device *dev,
+ 	return __xp_assign_dev(pool, dev, queue_id, flags);
+ }
+ 
+-int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem,
++int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_sock *umem_xs,
+ 			 struct net_device *dev, u16 queue_id)
+ {
+ 	u16 flags;
++	struct xdp_umem *umem = umem_xs->umem;
+ 
+ 	/* One fill and completion ring required for each queue id. */
+ 	if (!pool->fq || !pool->cq)
+ 		return -EINVAL;
+ 
+ 	flags = umem->zc ? XDP_ZEROCOPY : XDP_COPY;
+-	if (pool->uses_need_wakeup)
++	if (umem_xs->pool->uses_need_wakeup)
+ 		flags |= XDP_USE_NEED_WAKEUP;
+ 
+ 	return __xp_assign_dev(pool, dev, queue_id, flags);
+diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
+index 23d3967786b9f..fe327a4532ddb 100644
+--- a/scripts/Makefile.extrawarn
++++ b/scripts/Makefile.extrawarn
+@@ -52,6 +52,7 @@ KBUILD_CFLAGS += -Wno-format-zero-length
+ KBUILD_CFLAGS += $(call cc-disable-warning, pointer-to-enum-cast)
+ KBUILD_CFLAGS += -Wno-tautological-constant-out-of-range-compare
+ KBUILD_CFLAGS += $(call cc-disable-warning, unaligned-access)
++KBUILD_CFLAGS += $(call cc-disable-warning, cast-function-type-strict)
+ endif
+ 
+ endif
+diff --git a/security/integrity/platform_certs/load_uefi.c b/security/integrity/platform_certs/load_uefi.c
+index 555d2dfc0ff79..185c609c6e380 100644
+--- a/security/integrity/platform_certs/load_uefi.c
++++ b/security/integrity/platform_certs/load_uefi.c
+@@ -30,7 +30,7 @@ static const struct dmi_system_id uefi_skip_cert[] = {
+ 	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,1") },
+ 	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,2") },
+ 	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir9,1") },
+-	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacMini8,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "Macmini8,1") },
+ 	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacPro7,1") },
+ 	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,1") },
+ 	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,2") },
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index f88de74da1eb3..de6f94bee50b9 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -1662,13 +1662,14 @@ static int snd_pcm_oss_sync(struct snd_pcm_oss_file *pcm_oss_file)
+ 		runtime = substream->runtime;
+ 		if (atomic_read(&substream->mmap_count))
+ 			goto __direct;
+-		if ((err = snd_pcm_oss_make_ready(substream)) < 0)
+-			return err;
+ 		atomic_inc(&runtime->oss.rw_ref);
+ 		if (mutex_lock_interruptible(&runtime->oss.params_lock)) {
+ 			atomic_dec(&runtime->oss.rw_ref);
+ 			return -ERESTARTSYS;
+ 		}
++		err = snd_pcm_oss_make_ready_locked(substream);
++		if (err < 0)
++			goto unlock;
+ 		format = snd_pcm_oss_format_from(runtime->oss.format);
+ 		width = snd_pcm_format_physical_width(format);
+ 		if (runtime->oss.buffer_used > 0) {
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 79b8d4258fd3b..26dfa8558792f 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2588,7 +2588,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	  .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_NOPM },
+ 	/* Poulsbo */
+ 	{ PCI_DEVICE(0x8086, 0x811b),
+-	  .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_BASE },
++	  .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_BASE |
++	  AZX_DCAPS_POSFIX_LPIB },
+ 	/* Oaktrail */
+ 	{ PCI_DEVICE(0x8086, 0x080a),
+ 	  .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_BASE },
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index e6f261e8c5ae7..c3fcf478037f9 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1269,6 +1269,7 @@ static int hdmi_pcm_open(struct hda_pcm_stream *hinfo,
+ 	set_bit(pcm_idx, &spec->pcm_in_use);
+ 	per_pin = get_pin(spec, pin_idx);
+ 	per_pin->cvt_nid = per_cvt->cvt_nid;
++	per_pin->silent_stream = false;
+ 	hinfo->nid = per_cvt->cvt_nid;
+ 
+ 	/* flip stripe flag for the assigned stream if supported */
+diff --git a/tools/perf/util/get_current_dir_name.c b/tools/perf/util/get_current_dir_name.c
+index b205d929245f5..e68935e9ac8ce 100644
+--- a/tools/perf/util/get_current_dir_name.c
++++ b/tools/perf/util/get_current_dir_name.c
+@@ -3,8 +3,9 @@
+ //
+ #ifndef HAVE_GET_CURRENT_DIR_NAME
+ #include "get_current_dir_name.h"
++#include <limits.h>
++#include <string.h>
+ #include <unistd.h>
+-#include <stdlib.h>
+ 
+ /* Android's 'bionic' library, for one, doesn't have this */
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-10-17 16:46 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-10-17 16:46 UTC (permalink / raw
  To: gentoo-commits

commit:     3a1701399bf760fde24f81dfcc38733840c42224
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 17 16:45:58 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Oct 17 16:45:58 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3a170139

Linux patch 5.10.149

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 +
 1148_linux-5.10.149.patch | 202 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 206 insertions(+)

diff --git a/0000_README b/0000_README
index cfc507bc..73ccd6af 100644
--- a/0000_README
+++ b/0000_README
@@ -635,6 +635,10 @@ Patch:  1147_linux-5.10.148.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.148
 
+Patch:  1148_linux-5.10.149.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.149
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1148_linux-5.10.149.patch b/1148_linux-5.10.149.patch
new file mode 100644
index 00000000..6dda5b25
--- /dev/null
+++ b/1148_linux-5.10.149.patch
@@ -0,0 +1,202 @@
+diff --git a/Makefile b/Makefile
+index c40acf09ce29d..b824bdb0457c5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 148
++SUBLEVEL = 149
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/fs/splice.c b/fs/splice.c
+index 6610e55c0e2ab..866d5c2367b23 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -806,15 +806,17 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
+ {
+ 	struct pipe_inode_info *pipe;
+ 	long ret, bytes;
++	umode_t i_mode;
+ 	size_t len;
+ 	int i, flags, more;
+ 
+ 	/*
+-	 * We require the input to be seekable, as we don't want to randomly
+-	 * drop data for eg socket -> socket splicing. Use the piped splicing
+-	 * for that!
++	 * We require the input being a regular file, as we don't want to
++	 * randomly drop data for eg socket -> socket splicing. Use the
++	 * piped splicing for that!
+ 	 */
+-	if (unlikely(!(in->f_mode & FMODE_LSEEK)))
++	i_mode = file_inode(in)->i_mode;
++	if (unlikely(!S_ISREG(i_mode) && !S_ISBLK(i_mode)))
+ 		return -EINVAL;
+ 
+ 	/*
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index bcc94cc1b6201..63499db5c63d9 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1485,7 +1485,6 @@ struct ieee802_11_elems {
+ 	const u8 *supp_rates;
+ 	const u8 *ds_params;
+ 	const struct ieee80211_tim_ie *tim;
+-	const u8 *challenge;
+ 	const u8 *rsn;
+ 	const u8 *rsnx;
+ 	const u8 *erp_info;
+@@ -1538,7 +1537,6 @@ struct ieee802_11_elems {
+ 	u8 ssid_len;
+ 	u8 supp_rates_len;
+ 	u8 tim_len;
+-	u8 challenge_len;
+ 	u8 rsn_len;
+ 	u8 rsnx_len;
+ 	u8 ext_supp_rates_len;
+@@ -1553,6 +1551,8 @@ struct ieee802_11_elems {
+ 	u8 country_elem_len;
+ 	u8 bssid_index_len;
+ 
++	void *nontx_profile;
++
+ 	/* whether a parse error occurred while retrieving these elements */
+ 	bool parse_error;
+ };
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 3988403064ab6..c52b8eb7fb8a2 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -2899,14 +2899,14 @@ static void ieee80211_auth_challenge(struct ieee80211_sub_if_data *sdata,
+ {
+ 	struct ieee80211_local *local = sdata->local;
+ 	struct ieee80211_mgd_auth_data *auth_data = sdata->u.mgd.auth_data;
++	const struct element *challenge;
+ 	u8 *pos;
+-	struct ieee802_11_elems elems;
+ 	u32 tx_flags = 0;
+ 
+ 	pos = mgmt->u.auth.variable;
+-	ieee802_11_parse_elems(pos, len - (pos - (u8 *)mgmt), false, &elems,
+-			       mgmt->bssid, auth_data->bss->bssid);
+-	if (!elems.challenge)
++	challenge = cfg80211_find_elem(WLAN_EID_CHALLENGE, pos,
++				       len - (pos - (u8 *)mgmt));
++	if (!challenge)
+ 		return;
+ 	auth_data->expected_transaction = 4;
+ 	drv_mgd_prepare_tx(sdata->local, sdata, 0);
+@@ -2914,7 +2914,8 @@ static void ieee80211_auth_challenge(struct ieee80211_sub_if_data *sdata,
+ 		tx_flags = IEEE80211_TX_CTL_REQ_TX_STATUS |
+ 			   IEEE80211_TX_INTFL_MLME_CONN_TX;
+ 	ieee80211_send_auth(sdata, 3, auth_data->algorithm, 0,
+-			    elems.challenge - 2, elems.challenge_len + 2,
++			    (void *)challenge,
++			    challenge->datalen + sizeof(*challenge),
+ 			    auth_data->bss->bssid, auth_data->bss->bssid,
+ 			    auth_data->key, auth_data->key_len,
+ 			    auth_data->key_idx, tx_flags);
+@@ -3299,7 +3300,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata,
+ 	}
+ 	capab_info = le16_to_cpu(mgmt->u.assoc_resp.capab_info);
+ 	ieee802_11_parse_elems(pos, len - (pos - (u8 *)mgmt), false, elems,
+-			       mgmt->bssid, assoc_data->bss->bssid);
++			       mgmt->bssid, NULL);
+ 
+ 	if (elems->aid_resp)
+ 		aid = le16_to_cpu(elems->aid_resp->aid);
+@@ -3393,6 +3394,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata,
+ 			sdata_info(sdata,
+ 				   "AP bug: VHT operation missing from AssocResp\n");
+ 		}
++		kfree(bss_elems.nontx_profile);
+ 	}
+ 
+ 	/*
+@@ -3707,7 +3709,7 @@ static void ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
+ 		return;
+ 
+ 	ieee802_11_parse_elems(pos, len - (pos - (u8 *)mgmt), false, &elems,
+-			       mgmt->bssid, assoc_data->bss->bssid);
++			       mgmt->bssid, NULL);
+ 
+ 	if (status_code == WLAN_STATUS_ASSOC_REJECTED_TEMPORARILY &&
+ 	    elems.timeout_int &&
+@@ -4044,6 +4046,7 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
+ 		ifmgd->assoc_data->timeout = jiffies;
+ 		ifmgd->assoc_data->timeout_started = true;
+ 		run_again(sdata, ifmgd->assoc_data->timeout);
++		kfree(elems.nontx_profile);
+ 		return;
+ 	}
+ 
+@@ -4221,7 +4224,7 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
+ 		ieee80211_report_disconnect(sdata, deauth_buf,
+ 					    sizeof(deauth_buf), true,
+ 					    WLAN_REASON_DEAUTH_LEAVING);
+-		return;
++		goto free;
+ 	}
+ 
+ 	if (sta && elems.opmode_notif)
+@@ -4236,6 +4239,8 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
+ 					       elems.cisco_dtpc_elem);
+ 
+ 	ieee80211_bss_info_change_notify(sdata, changed);
++free:
++	kfree(elems.nontx_profile);
+ }
+ 
+ void ieee80211_sta_rx_queued_ext(struct ieee80211_sub_if_data *sdata,
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index d6afaacaf7ef8..b241ff8c015a9 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -227,6 +227,8 @@ ieee80211_bss_info_update(struct ieee80211_local *local,
+ 						rx_status, beacon);
+ 	}
+ 
++	kfree(elems.nontx_profile);
++
+ 	return bss;
+ }
+ 
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 11d5686893c6a..7fa6efa8b83c1 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1124,10 +1124,6 @@ _ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action,
+ 			} else
+ 				elem_parse_failed = true;
+ 			break;
+-		case WLAN_EID_CHALLENGE:
+-			elems->challenge = pos;
+-			elems->challenge_len = elen;
+-			break;
+ 		case WLAN_EID_VENDOR_SPECIFIC:
+ 			if (elen >= 4 && pos[0] == 0x00 && pos[1] == 0x50 &&
+ 			    pos[2] == 0xf2) {
+@@ -1487,6 +1483,11 @@ u32 ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action,
+ 			cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
+ 					       nontransmitted_profile,
+ 					       nontransmitted_profile_len);
++		if (!nontransmitted_profile_len) {
++			nontransmitted_profile_len = 0;
++			kfree(nontransmitted_profile);
++			nontransmitted_profile = NULL;
++		}
+ 	}
+ 
+ 	crc = _ieee802_11_parse_elems_crc(start, len, action, elems, filter,
+@@ -1516,7 +1517,7 @@ u32 ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action,
+ 	    offsetofend(struct ieee80211_bssid_index, dtim_count))
+ 		elems->dtim_count = elems->bssid_index->dtim_count;
+ 
+-	kfree(nontransmitted_profile);
++	elems->nontx_profile = nontransmitted_profile;
+ 
+ 	return crc;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-10-26 11:46 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-10-26 11:46 UTC (permalink / raw
  To: gentoo-commits

commit:     d31d5a89f293a02f994b7c6232dfdc46a78944fc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 26 11:45:50 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 26 11:45:50 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d31d5a89

Linux patch 5.10.150

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1149_linux-5.10.150.patch | 12282 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 12286 insertions(+)

diff --git a/0000_README b/0000_README
index 73ccd6af..5c39ef3a 100644
--- a/0000_README
+++ b/0000_README
@@ -639,6 +639,10 @@ Patch:  1148_linux-5.10.149.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.149
 
+Patch:  1149_linux-5.10.150.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.150
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1149_linux-5.10.150.patch b/1149_linux-5.10.150.patch
new file mode 100644
index 00000000..02e6d9cf
--- /dev/null
+++ b/1149_linux-5.10.150.patch
@@ -0,0 +1,12282 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-iio b/Documentation/ABI/testing/sysfs-bus-iio
+index df42bed09f25d..53f07fc41b966 100644
+--- a/Documentation/ABI/testing/sysfs-bus-iio
++++ b/Documentation/ABI/testing/sysfs-bus-iio
+@@ -142,7 +142,7 @@ Description:
+ 		Raw capacitance measurement from channel Y. Units after
+ 		application of scale and offset are nanofarads.
+ 
+-What:		/sys/.../iio:deviceX/in_capacitanceY-in_capacitanceZ_raw
++What:		/sys/.../iio:deviceX/in_capacitanceY-capacitanceZ_raw
+ KernelVersion:	3.2
+ Contact:	linux-iio@vger.kernel.org
+ Description:
+diff --git a/Makefile b/Makefile
+index b824bdb0457c5..5c7075d3b2f65 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 149
++SUBLEVEL = 150
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -816,12 +816,12 @@ endif
+ 
+ # Initialize all stack variables with a zero value.
+ ifdef CONFIG_INIT_STACK_ALL_ZERO
+-# Future support for zero initialization is still being debated, see
+-# https://bugs.llvm.org/show_bug.cgi?id=45497. These flags are subject to being
+-# renamed or dropped.
+ KBUILD_CFLAGS	+= -ftrivial-auto-var-init=zero
++ifdef CONFIG_CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER
++# https://github.com/llvm/llvm-project/issues/44842
+ KBUILD_CFLAGS	+= -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang
+ endif
++endif
+ 
+ DEBUG_CFLAGS	:=
+ 
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index b587ecc6f9493..985ab0b091a6a 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -1791,7 +1791,6 @@ config CMDLINE
+ choice
+ 	prompt "Kernel command line type" if CMDLINE != ""
+ 	default CMDLINE_FROM_BOOTLOADER
+-	depends on ATAGS
+ 
+ config CMDLINE_FROM_BOOTLOADER
+ 	bool "Use bootloader kernel arguments if available"
+diff --git a/arch/arm/boot/dts/armada-385-turris-omnia.dts b/arch/arm/boot/dts/armada-385-turris-omnia.dts
+index fde4c302f08ec..92e08486ec81f 100644
+--- a/arch/arm/boot/dts/armada-385-turris-omnia.dts
++++ b/arch/arm/boot/dts/armada-385-turris-omnia.dts
+@@ -307,7 +307,7 @@
+ 		marvell,function = "spi0";
+ 	};
+ 
+-	spi0cs1_pins: spi0cs1-pins {
++	spi0cs2_pins: spi0cs2-pins {
+ 		marvell,pins = "mpp26";
+ 		marvell,function = "spi0";
+ 	};
+@@ -342,7 +342,7 @@
+ 		};
+ 	};
+ 
+-	/* MISO, MOSI, SCLK and CS1 are routed to pin header CN11 */
++	/* MISO, MOSI, SCLK and CS2 are routed to pin header CN11 */
+ };
+ 
+ &uart0 {
+diff --git a/arch/arm/boot/dts/exynos4412-midas.dtsi b/arch/arm/boot/dts/exynos4412-midas.dtsi
+index 06450066b1787..255a13666edcd 100644
+--- a/arch/arm/boot/dts/exynos4412-midas.dtsi
++++ b/arch/arm/boot/dts/exynos4412-midas.dtsi
+@@ -588,7 +588,7 @@
+ 		clocks = <&camera 1>;
+ 		clock-names = "extclk";
+ 		samsung,camclk-out = <1>;
+-		gpios = <&gpm1 6 GPIO_ACTIVE_HIGH>;
++		gpios = <&gpm1 6 GPIO_ACTIVE_LOW>;
+ 
+ 		port {
+ 			is_s5k6a3_ep: endpoint {
+diff --git a/arch/arm/boot/dts/exynos4412-origen.dts b/arch/arm/boot/dts/exynos4412-origen.dts
+index c2e793b69e7d9..e2d76ea4404e8 100644
+--- a/arch/arm/boot/dts/exynos4412-origen.dts
++++ b/arch/arm/boot/dts/exynos4412-origen.dts
+@@ -95,7 +95,7 @@
+ };
+ 
+ &ehci {
+-	samsung,vbus-gpio = <&gpx3 5 1>;
++	samsung,vbus-gpio = <&gpx3 5 GPIO_ACTIVE_HIGH>;
+ 	status = "okay";
+ 	phys = <&exynos_usbphy 2>, <&exynos_usbphy 3>;
+ 	phy-names = "hsic0", "hsic1";
+diff --git a/arch/arm/boot/dts/imx6dl.dtsi b/arch/arm/boot/dts/imx6dl.dtsi
+index fdd81fdc3f357..cd3183c36488a 100644
+--- a/arch/arm/boot/dts/imx6dl.dtsi
++++ b/arch/arm/boot/dts/imx6dl.dtsi
+@@ -84,6 +84,9 @@
+ 		ocram: sram@900000 {
+ 			compatible = "mmio-sram";
+ 			reg = <0x00900000 0x20000>;
++			ranges = <0 0x00900000 0x20000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 			clocks = <&clks IMX6QDL_CLK_OCRAM>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi
+index 5277e39032912..afec1677e6baf 100644
+--- a/arch/arm/boot/dts/imx6q.dtsi
++++ b/arch/arm/boot/dts/imx6q.dtsi
+@@ -163,6 +163,9 @@
+ 		ocram: sram@900000 {
+ 			compatible = "mmio-sram";
+ 			reg = <0x00900000 0x40000>;
++			ranges = <0 0x00900000 0x40000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 			clocks = <&clks IMX6QDL_CLK_OCRAM>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/imx6qp.dtsi b/arch/arm/boot/dts/imx6qp.dtsi
+index b310f13a53f22..4d23c92aa8a6b 100644
+--- a/arch/arm/boot/dts/imx6qp.dtsi
++++ b/arch/arm/boot/dts/imx6qp.dtsi
+@@ -9,12 +9,18 @@
+ 		ocram2: sram@940000 {
+ 			compatible = "mmio-sram";
+ 			reg = <0x00940000 0x20000>;
++			ranges = <0 0x00940000 0x20000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 			clocks = <&clks IMX6QDL_CLK_OCRAM>;
+ 		};
+ 
+ 		ocram3: sram@960000 {
+ 			compatible = "mmio-sram";
+ 			reg = <0x00960000 0x20000>;
++			ranges = <0 0x00960000 0x20000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 			clocks = <&clks IMX6QDL_CLK_OCRAM>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/imx6sl.dtsi b/arch/arm/boot/dts/imx6sl.dtsi
+index 91a8c54d5e113..c184a6d5bc420 100644
+--- a/arch/arm/boot/dts/imx6sl.dtsi
++++ b/arch/arm/boot/dts/imx6sl.dtsi
+@@ -114,6 +114,9 @@
+ 		ocram: sram@900000 {
+ 			compatible = "mmio-sram";
+ 			reg = <0x00900000 0x20000>;
++			ranges = <0 0x00900000 0x20000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 			clocks = <&clks IMX6SL_CLK_OCRAM>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/imx6sll.dtsi b/arch/arm/boot/dts/imx6sll.dtsi
+index 0b622201a1f3d..bf5b262b91f91 100644
+--- a/arch/arm/boot/dts/imx6sll.dtsi
++++ b/arch/arm/boot/dts/imx6sll.dtsi
+@@ -115,6 +115,9 @@
+ 		ocram: sram@900000 {
+ 			compatible = "mmio-sram";
+ 			reg = <0x00900000 0x20000>;
++			ranges = <0 0x00900000 0x20000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 		};
+ 
+ 		intc: interrupt-controller@a01000 {
+diff --git a/arch/arm/boot/dts/imx6sx.dtsi b/arch/arm/boot/dts/imx6sx.dtsi
+index dfdca1804f9f8..c399919943c34 100644
+--- a/arch/arm/boot/dts/imx6sx.dtsi
++++ b/arch/arm/boot/dts/imx6sx.dtsi
+@@ -161,12 +161,18 @@
+ 		ocram_s: sram@8f8000 {
+ 			compatible = "mmio-sram";
+ 			reg = <0x008f8000 0x4000>;
++			ranges = <0 0x008f8000 0x4000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 			clocks = <&clks IMX6SX_CLK_OCRAM_S>;
+ 		};
+ 
+ 		ocram: sram@900000 {
+ 			compatible = "mmio-sram";
+ 			reg = <0x00900000 0x20000>;
++			ranges = <0 0x00900000 0x20000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 			clocks = <&clks IMX6SX_CLK_OCRAM>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/imx7d-sdb.dts b/arch/arm/boot/dts/imx7d-sdb.dts
+index 6823b9f1a2a32..6d562ebe90295 100644
+--- a/arch/arm/boot/dts/imx7d-sdb.dts
++++ b/arch/arm/boot/dts/imx7d-sdb.dts
+@@ -199,12 +199,7 @@
+ 		interrupt-parent = <&gpio2>;
+ 		interrupts = <29 0>;
+ 		pendown-gpio = <&gpio2 29 GPIO_ACTIVE_HIGH>;
+-		ti,x-min = /bits/ 16 <0>;
+-		ti,x-max = /bits/ 16 <0>;
+-		ti,y-min = /bits/ 16 <0>;
+-		ti,y-max = /bits/ 16 <0>;
+-		ti,pressure-max = /bits/ 16 <0>;
+-		ti,x-plate-ohms = /bits/ 16 <400>;
++		touchscreen-max-pressure = <255>;
+ 		wakeup-source;
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/kirkwood-lsxl.dtsi b/arch/arm/boot/dts/kirkwood-lsxl.dtsi
+index 7b151acb99846..88b70ba1c8fee 100644
+--- a/arch/arm/boot/dts/kirkwood-lsxl.dtsi
++++ b/arch/arm/boot/dts/kirkwood-lsxl.dtsi
+@@ -10,6 +10,11 @@
+ 
+ 	ocp@f1000000 {
+ 		pinctrl: pin-controller@10000 {
++			/* Non-default UART pins */
++			pmx_uart0: pmx-uart0 {
++				marvell,pins = "mpp4", "mpp5";
++			};
++
+ 			pmx_power_hdd: pmx-power-hdd {
+ 				marvell,pins = "mpp10";
+ 				marvell,function = "gpo";
+@@ -213,22 +218,11 @@
+ &mdio {
+ 	status = "okay";
+ 
+-	ethphy0: ethernet-phy@0 {
+-		reg = <0>;
+-	};
+-
+ 	ethphy1: ethernet-phy@8 {
+ 		reg = <8>;
+ 	};
+ };
+ 
+-&eth0 {
+-	status = "okay";
+-	ethernet0-port@0 {
+-		phy-handle = <&ethphy0>;
+-	};
+-};
+-
+ &eth1 {
+ 	status = "okay";
+ 	ethernet1-port@0 {
+diff --git a/arch/arm/mm/dump.c b/arch/arm/mm/dump.c
+index c18d23a5e5f12..9b9023a92d464 100644
+--- a/arch/arm/mm/dump.c
++++ b/arch/arm/mm/dump.c
+@@ -342,7 +342,7 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
+ 		addr = start + i * PMD_SIZE;
+ 		domain = get_domain_name(pmd);
+ 		if (pmd_none(*pmd) || pmd_large(*pmd) || !pmd_present(*pmd))
+-			note_page(st, addr, 3, pmd_val(*pmd), domain);
++			note_page(st, addr, 4, pmd_val(*pmd), domain);
+ 		else
+ 			walk_pte(st, pmd, addr, domain);
+ 
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index 86f213f1b44b8..0d0c3bf239143 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -300,7 +300,11 @@ static struct mem_type mem_types[] __ro_after_init = {
+ 		.prot_pte  = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
+ 			     L_PTE_XN | L_PTE_RDONLY,
+ 		.prot_l1   = PMD_TYPE_TABLE,
++#ifdef CONFIG_ARM_LPAE
++		.prot_sect = PMD_TYPE_SECT | L_PMD_SECT_RDONLY | PMD_SECT_AP2,
++#else
+ 		.prot_sect = PMD_TYPE_SECT,
++#endif
+ 		.domain    = DOMAIN_KERNEL,
+ 	},
+ 	[MT_ROM] = {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi b/arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi
+index e3c6d1272198f..325ea100969a8 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi
+@@ -899,6 +899,7 @@
+ 		interrupts = <20 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_gauge>;
++		power-supplies = <&bq25895>;
+ 		maxim,over-heat-temp = <700>;
+ 		maxim,over-volt = <4500>;
+ 		maxim,rsns-microohm = <5000>;
+diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
+index 3724bab278b28..402a24f845b9e 100644
+--- a/arch/arm64/kernel/ftrace.c
++++ b/arch/arm64/kernel/ftrace.c
+@@ -216,11 +216,26 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
+ 	unsigned long pc = rec->ip;
+ 	u32 old = 0, new;
+ 
++	new = aarch64_insn_gen_nop();
++
++	/*
++	 * When using mcount, callsites in modules may have been initalized to
++	 * call an arbitrary module PLT (which redirects to the _mcount stub)
++	 * rather than the ftrace PLT we'll use at runtime (which redirects to
++	 * the ftrace trampoline). We can ignore the old PLT when initializing
++	 * the callsite.
++	 *
++	 * Note: 'mod' is only set at module load time.
++	 */
++	if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) &&
++	    IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) && mod) {
++		return aarch64_insn_patch_text_nosync((void *)pc, new);
++	}
++
+ 	if (!ftrace_find_callable_addr(rec, mod, &addr))
+ 		return -EINVAL;
+ 
+ 	old = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
+-	new = aarch64_insn_gen_nop();
+ 
+ 	return ftrace_modify_code(pc, old, new, true);
+ }
+diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
+index 543c67cae02ff..4358bc3193067 100644
+--- a/arch/arm64/kernel/topology.c
++++ b/arch/arm64/kernel/topology.c
+@@ -158,7 +158,7 @@ static int validate_cpu_freq_invariance_counters(int cpu)
+ 	}
+ 
+ 	/* Convert maximum frequency from KHz to Hz and validate */
+-	max_freq_hz = cpufreq_get_hw_max_freq(cpu) * 1000;
++	max_freq_hz = cpufreq_get_hw_max_freq(cpu) * 1000ULL;
+ 	if (unlikely(!max_freq_hz)) {
+ 		pr_debug("CPU%d: invalid maximum frequency.\n", cpu);
+ 		return -EINVAL;
+diff --git a/arch/ia64/mm/numa.c b/arch/ia64/mm/numa.c
+index f34964271101a..6cd002e8163db 100644
+--- a/arch/ia64/mm/numa.c
++++ b/arch/ia64/mm/numa.c
+@@ -106,5 +106,6 @@ int memory_add_physaddr_to_nid(u64 addr)
+ 		return 0;
+ 	return nid;
+ }
++EXPORT_SYMBOL(memory_add_physaddr_to_nid);
+ #endif
+ #endif
+diff --git a/arch/mips/bcm47xx/prom.c b/arch/mips/bcm47xx/prom.c
+index 3e2a8166377f5..22509b5fab74f 100644
+--- a/arch/mips/bcm47xx/prom.c
++++ b/arch/mips/bcm47xx/prom.c
+@@ -86,7 +86,7 @@ static __init void prom_init_mem(void)
+ 			pr_debug("Assume 128MB RAM\n");
+ 			break;
+ 		}
+-		if (!memcmp(prom_init, prom_init + mem, 32))
++		if (!memcmp((void *)prom_init, (void *)prom_init + mem, 32))
+ 			break;
+ 	}
+ 	lowmem = mem;
+@@ -163,7 +163,7 @@ void __init bcm47xx_prom_highmem_init(void)
+ 
+ 	off = EXTVBASE + __pa(off);
+ 	for (extmem = 128 << 20; extmem < 512 << 20; extmem <<= 1) {
+-		if (!memcmp(prom_init, (void *)(off + extmem), 16))
++		if (!memcmp((void *)prom_init, (void *)(off + extmem), 16))
+ 			break;
+ 	}
+ 	extmem -= lowmem;
+diff --git a/arch/mips/sgi-ip27/ip27-xtalk.c b/arch/mips/sgi-ip27/ip27-xtalk.c
+index 000ede156bdc0..5143d1cf8984c 100644
+--- a/arch/mips/sgi-ip27/ip27-xtalk.c
++++ b/arch/mips/sgi-ip27/ip27-xtalk.c
+@@ -27,15 +27,18 @@ static void bridge_platform_create(nasid_t nasid, int widget, int masterwid)
+ {
+ 	struct xtalk_bridge_platform_data *bd;
+ 	struct sgi_w1_platform_data *wd;
+-	struct platform_device *pdev;
++	struct platform_device *pdev_wd;
++	struct platform_device *pdev_bd;
+ 	struct resource w1_res;
+ 	unsigned long offset;
+ 
+ 	offset = NODE_OFFSET(nasid);
+ 
+ 	wd = kzalloc(sizeof(*wd), GFP_KERNEL);
+-	if (!wd)
+-		goto no_mem;
++	if (!wd) {
++		pr_warn("xtalk:n%d/%x bridge create out of memory\n", nasid, widget);
++		return;
++	}
+ 
+ 	snprintf(wd->dev_id, sizeof(wd->dev_id), "bridge-%012lx",
+ 		 offset + (widget << SWIN_SIZE_BITS));
+@@ -46,22 +49,35 @@ static void bridge_platform_create(nasid_t nasid, int widget, int masterwid)
+ 	w1_res.end = w1_res.start + 3;
+ 	w1_res.flags = IORESOURCE_MEM;
+ 
+-	pdev = platform_device_alloc("sgi_w1", PLATFORM_DEVID_AUTO);
+-	if (!pdev) {
+-		kfree(wd);
+-		goto no_mem;
++	pdev_wd = platform_device_alloc("sgi_w1", PLATFORM_DEVID_AUTO);
++	if (!pdev_wd) {
++		pr_warn("xtalk:n%d/%x bridge create out of memory\n", nasid, widget);
++		goto err_kfree_wd;
++	}
++	if (platform_device_add_resources(pdev_wd, &w1_res, 1)) {
++		pr_warn("xtalk:n%d/%x bridge failed to add platform resources.\n", nasid, widget);
++		goto err_put_pdev_wd;
++	}
++	if (platform_device_add_data(pdev_wd, wd, sizeof(*wd))) {
++		pr_warn("xtalk:n%d/%x bridge failed to add platform data.\n", nasid, widget);
++		goto err_put_pdev_wd;
++	}
++	if (platform_device_add(pdev_wd)) {
++		pr_warn("xtalk:n%d/%x bridge failed to add platform device.\n", nasid, widget);
++		goto err_put_pdev_wd;
+ 	}
+-	platform_device_add_resources(pdev, &w1_res, 1);
+-	platform_device_add_data(pdev, wd, sizeof(*wd));
+-	platform_device_add(pdev);
++	/* platform_device_add_data() duplicates the data */
++	kfree(wd);
+ 
+ 	bd = kzalloc(sizeof(*bd), GFP_KERNEL);
+-	if (!bd)
+-		goto no_mem;
+-	pdev = platform_device_alloc("xtalk-bridge", PLATFORM_DEVID_AUTO);
+-	if (!pdev) {
+-		kfree(bd);
+-		goto no_mem;
++	if (!bd) {
++		pr_warn("xtalk:n%d/%x bridge create out of memory\n", nasid, widget);
++		goto err_unregister_pdev_wd;
++	}
++	pdev_bd = platform_device_alloc("xtalk-bridge", PLATFORM_DEVID_AUTO);
++	if (!pdev_bd) {
++		pr_warn("xtalk:n%d/%x bridge create out of memory\n", nasid, widget);
++		goto err_kfree_bd;
+ 	}
+ 
+ 
+@@ -82,13 +98,31 @@ static void bridge_platform_create(nasid_t nasid, int widget, int masterwid)
+ 	bd->io.flags	= IORESOURCE_IO;
+ 	bd->io_offset	= offset;
+ 
+-	platform_device_add_data(pdev, bd, sizeof(*bd));
+-	platform_device_add(pdev);
++	if (platform_device_add_data(pdev_bd, bd, sizeof(*bd))) {
++		pr_warn("xtalk:n%d/%x bridge failed to add platform data.\n", nasid, widget);
++		goto err_put_pdev_bd;
++	}
++	if (platform_device_add(pdev_bd)) {
++		pr_warn("xtalk:n%d/%x bridge failed to add platform device.\n", nasid, widget);
++		goto err_put_pdev_bd;
++	}
++	/* platform_device_add_data() duplicates the data */
++	kfree(bd);
+ 	pr_info("xtalk:n%d/%x bridge widget\n", nasid, widget);
+ 	return;
+ 
+-no_mem:
+-	pr_warn("xtalk:n%d/%x bridge create out of memory\n", nasid, widget);
++err_put_pdev_bd:
++	platform_device_put(pdev_bd);
++err_kfree_bd:
++	kfree(bd);
++err_unregister_pdev_wd:
++	platform_device_unregister(pdev_wd);
++	return;
++err_put_pdev_wd:
++	platform_device_put(pdev_wd);
++err_kfree_wd:
++	kfree(wd);
++	return;
+ }
+ 
+ static int probe_one_port(nasid_t nasid, int widget, int masterwid)
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index 59175651f0b9e..6122541412961 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -153,7 +153,7 @@ CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=power8
+ CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power9,-mtune=power8)
+ else
+ CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power7,$(call cc-option,-mtune=power5))
+-CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mcpu=power5,-mcpu=power4)
++CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=power4
+ endif
+ else ifdef CONFIG_PPC_BOOK3E_64
+ CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
+diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
+index e4b364b5da9e7..8b78eba755f9a 100644
+--- a/arch/powerpc/boot/Makefile
++++ b/arch/powerpc/boot/Makefile
+@@ -30,6 +30,7 @@ endif
+ 
+ BOOTCFLAGS    := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+ 		 -fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx \
++		 $(call cc-option,-mno-spe) $(call cc-option,-mspe=no) \
+ 		 -pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc \
+ 		 $(LINUXINCLUDE)
+ 
+diff --git a/arch/powerpc/boot/dts/fsl/e500v1_power_isa.dtsi b/arch/powerpc/boot/dts/fsl/e500v1_power_isa.dtsi
+new file mode 100644
+index 0000000000000..7e2a90cde72e5
+--- /dev/null
++++ b/arch/powerpc/boot/dts/fsl/e500v1_power_isa.dtsi
+@@ -0,0 +1,51 @@
++/*
++ * e500v1 Power ISA Device Tree Source (include)
++ *
++ * Copyright 2012 Freescale Semiconductor Inc.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions are met:
++ *     * Redistributions of source code must retain the above copyright
++ *       notice, this list of conditions and the following disclaimer.
++ *     * Redistributions in binary form must reproduce the above copyright
++ *       notice, this list of conditions and the following disclaimer in the
++ *       documentation and/or other materials provided with the distribution.
++ *     * Neither the name of Freescale Semiconductor nor the
++ *       names of its contributors may be used to endorse or promote products
++ *       derived from this software without specific prior written permission.
++ *
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") as published by the Free Software
++ * Foundation, either version 2 of that License or (at your option) any
++ * later version.
++ *
++ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
++ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
++ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
++ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
++ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
++ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
++ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++/ {
++	cpus {
++		power-isa-version = "2.03";
++		power-isa-b;		// Base
++		power-isa-e;		// Embedded
++		power-isa-atb;		// Alternate Time Base
++		power-isa-cs;		// Cache Specification
++		power-isa-e.le;		// Embedded.Little-Endian
++		power-isa-e.pm;		// Embedded.Performance Monitor
++		power-isa-ecl;		// Embedded Cache Locking
++		power-isa-mmc;		// Memory Coherence
++		power-isa-sp;		// Signal Processing Engine
++		power-isa-sp.fs;	// SPE.Embedded Float Scalar Single
++		power-isa-sp.fv;	// SPE.Embedded Float Vector
++		mmu-type = "power-embedded";
++	};
++};
+diff --git a/arch/powerpc/boot/dts/fsl/mpc8540ads.dts b/arch/powerpc/boot/dts/fsl/mpc8540ads.dts
+index 18a885130538a..e03ae130162ba 100644
+--- a/arch/powerpc/boot/dts/fsl/mpc8540ads.dts
++++ b/arch/powerpc/boot/dts/fsl/mpc8540ads.dts
+@@ -7,7 +7,7 @@
+ 
+ /dts-v1/;
+ 
+-/include/ "e500v2_power_isa.dtsi"
++/include/ "e500v1_power_isa.dtsi"
+ 
+ / {
+ 	model = "MPC8540ADS";
+diff --git a/arch/powerpc/boot/dts/fsl/mpc8541cds.dts b/arch/powerpc/boot/dts/fsl/mpc8541cds.dts
+index ac381e7b1c60e..a2a6c5cf852e9 100644
+--- a/arch/powerpc/boot/dts/fsl/mpc8541cds.dts
++++ b/arch/powerpc/boot/dts/fsl/mpc8541cds.dts
+@@ -7,7 +7,7 @@
+ 
+ /dts-v1/;
+ 
+-/include/ "e500v2_power_isa.dtsi"
++/include/ "e500v1_power_isa.dtsi"
+ 
+ / {
+ 	model = "MPC8541CDS";
+diff --git a/arch/powerpc/boot/dts/fsl/mpc8555cds.dts b/arch/powerpc/boot/dts/fsl/mpc8555cds.dts
+index 9f58db2a7e661..901b6ff06dfbb 100644
+--- a/arch/powerpc/boot/dts/fsl/mpc8555cds.dts
++++ b/arch/powerpc/boot/dts/fsl/mpc8555cds.dts
+@@ -7,7 +7,7 @@
+ 
+ /dts-v1/;
+ 
+-/include/ "e500v2_power_isa.dtsi"
++/include/ "e500v1_power_isa.dtsi"
+ 
+ / {
+ 	model = "MPC8555CDS";
+diff --git a/arch/powerpc/boot/dts/fsl/mpc8560ads.dts b/arch/powerpc/boot/dts/fsl/mpc8560ads.dts
+index a24722ccaebf1..c2f9aea78b29f 100644
+--- a/arch/powerpc/boot/dts/fsl/mpc8560ads.dts
++++ b/arch/powerpc/boot/dts/fsl/mpc8560ads.dts
+@@ -7,7 +7,7 @@
+ 
+ /dts-v1/;
+ 
+-/include/ "e500v2_power_isa.dtsi"
++/include/ "e500v1_power_isa.dtsi"
+ 
+ / {
+ 	model = "MPC8560ADS";
+diff --git a/arch/powerpc/kernel/pci_dn.c b/arch/powerpc/kernel/pci_dn.c
+index e99b7c547d7e9..b173ba3426450 100644
+--- a/arch/powerpc/kernel/pci_dn.c
++++ b/arch/powerpc/kernel/pci_dn.c
+@@ -330,6 +330,7 @@ struct pci_dn *pci_add_device_node_info(struct pci_controller *hose,
+ 	INIT_LIST_HEAD(&pdn->list);
+ 	parent = of_get_parent(dn);
+ 	pdn->parent = parent ? PCI_DN(parent) : NULL;
++	of_node_put(parent);
+ 	if (pdn->parent)
+ 		list_add_tail(&pdn->list, &pdn->parent->child_list);
+ 
+diff --git a/arch/powerpc/math-emu/math_efp.c b/arch/powerpc/math-emu/math_efp.c
+index 0a05e51964c10..90111c9e7521f 100644
+--- a/arch/powerpc/math-emu/math_efp.c
++++ b/arch/powerpc/math-emu/math_efp.c
+@@ -17,6 +17,7 @@
+ 
+ #include <linux/types.h>
+ #include <linux/prctl.h>
++#include <linux/module.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/reg.h>
+diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
+index c61c3b62c8c62..1d05c168c8fb5 100644
+--- a/arch/powerpc/platforms/powernv/opal.c
++++ b/arch/powerpc/platforms/powernv/opal.c
+@@ -892,6 +892,7 @@ static void opal_export_attrs(void)
+ 	kobj = kobject_create_and_add("exports", opal_kobj);
+ 	if (!kobj) {
+ 		pr_warn("kobject_create_and_add() of exports failed\n");
++		of_node_put(np);
+ 		return;
+ 	}
+ 
+diff --git a/arch/powerpc/sysdev/fsl_msi.c b/arch/powerpc/sysdev/fsl_msi.c
+index 808e7118abfc5..d276c5e964458 100644
+--- a/arch/powerpc/sysdev/fsl_msi.c
++++ b/arch/powerpc/sysdev/fsl_msi.c
+@@ -211,8 +211,10 @@ static int fsl_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
+ 			dev_err(&pdev->dev,
+ 				"node %pOF has an invalid fsl,msi phandle %u\n",
+ 				hose->dn, np->phandle);
++			of_node_put(np);
+ 			return -EINVAL;
+ 		}
++		of_node_put(np);
+ 	}
+ 
+ 	for_each_pci_msi_entry(entry, pdev) {
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index 3d3016092b31c..1bb1bf1141cc7 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -37,6 +37,7 @@ else
+ endif
+ 
+ ifeq ($(CONFIG_LD_IS_LLD),y)
++ifeq ($(shell test $(CONFIG_LLD_VERSION) -lt 150000; echo $$?),0)
+ 	KBUILD_CFLAGS += -mno-relax
+ 	KBUILD_AFLAGS += -mno-relax
+ ifneq ($(LLVM_IAS),1)
+@@ -44,6 +45,7 @@ ifneq ($(LLVM_IAS),1)
+ 	KBUILD_AFLAGS += -Wa,-mno-relax
+ endif
+ endif
++endif
+ 
+ # ISA string setting
+ riscv-march-$(CONFIG_ARCH_RV32I)	:= rv32ima
+diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
+index c025a746a1486..391dd869db64b 100644
+--- a/arch/riscv/include/asm/io.h
++++ b/arch/riscv/include/asm/io.h
+@@ -114,9 +114,9 @@ __io_reads_ins(reads, u32, l, __io_br(), __io_ar(addr))
+ __io_reads_ins(ins,  u8, b, __io_pbr(), __io_par(addr))
+ __io_reads_ins(ins, u16, w, __io_pbr(), __io_par(addr))
+ __io_reads_ins(ins, u32, l, __io_pbr(), __io_par(addr))
+-#define insb(addr, buffer, count) __insb((void __iomem *)(long)addr, buffer, count)
+-#define insw(addr, buffer, count) __insw((void __iomem *)(long)addr, buffer, count)
+-#define insl(addr, buffer, count) __insl((void __iomem *)(long)addr, buffer, count)
++#define insb(addr, buffer, count) __insb(PCI_IOBASE + (addr), buffer, count)
++#define insw(addr, buffer, count) __insw(PCI_IOBASE + (addr), buffer, count)
++#define insl(addr, buffer, count) __insl(PCI_IOBASE + (addr), buffer, count)
+ 
+ __io_writes_outs(writes,  u8, b, __io_bw(), __io_aw())
+ __io_writes_outs(writes, u16, w, __io_bw(), __io_aw())
+@@ -128,22 +128,22 @@ __io_writes_outs(writes, u32, l, __io_bw(), __io_aw())
+ __io_writes_outs(outs,  u8, b, __io_pbw(), __io_paw())
+ __io_writes_outs(outs, u16, w, __io_pbw(), __io_paw())
+ __io_writes_outs(outs, u32, l, __io_pbw(), __io_paw())
+-#define outsb(addr, buffer, count) __outsb((void __iomem *)(long)addr, buffer, count)
+-#define outsw(addr, buffer, count) __outsw((void __iomem *)(long)addr, buffer, count)
+-#define outsl(addr, buffer, count) __outsl((void __iomem *)(long)addr, buffer, count)
++#define outsb(addr, buffer, count) __outsb(PCI_IOBASE + (addr), buffer, count)
++#define outsw(addr, buffer, count) __outsw(PCI_IOBASE + (addr), buffer, count)
++#define outsl(addr, buffer, count) __outsl(PCI_IOBASE + (addr), buffer, count)
+ 
+ #ifdef CONFIG_64BIT
+ __io_reads_ins(reads, u64, q, __io_br(), __io_ar(addr))
+ #define readsq(addr, buffer, count) __readsq(addr, buffer, count)
+ 
+ __io_reads_ins(ins, u64, q, __io_pbr(), __io_par(addr))
+-#define insq(addr, buffer, count) __insq((void __iomem *)addr, buffer, count)
++#define insq(addr, buffer, count) __insq(PCI_IOBASE + (addr), buffer, count)
+ 
+ __io_writes_outs(writes, u64, q, __io_bw(), __io_aw())
+ #define writesq(addr, buffer, count) __writesq(addr, buffer, count)
+ 
+ __io_writes_outs(outs, u64, q, __io_pbr(), __io_paw())
+-#define outsq(addr, buffer, count) __outsq((void __iomem *)addr, buffer, count)
++#define outsq(addr, buffer, count) __outsq(PCI_IOBASE + (addr), buffer, count)
+ #endif
+ 
+ #include <asm-generic/io.h>
+diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
+index 8a7880b9c433e..bb402685057a2 100644
+--- a/arch/riscv/kernel/sys_riscv.c
++++ b/arch/riscv/kernel/sys_riscv.c
+@@ -18,9 +18,6 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
+ 	if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
+ 		return -EINVAL;
+ 
+-	if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))
+-		return -EINVAL;
+-
+ 	return ksys_mmap_pgoff(addr, len, prot, flags, fd,
+ 			       offset >> (PAGE_SHIFT - page_shift_offset));
+ }
+diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
+index 3c8b9e433c673..8f84bbe0ac33e 100644
+--- a/arch/riscv/mm/fault.c
++++ b/arch/riscv/mm/fault.c
+@@ -167,7 +167,8 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
+ 		}
+ 		break;
+ 	case EXC_LOAD_PAGE_FAULT:
+-		if (!(vma->vm_flags & VM_READ)) {
++		/* Write implies read */
++		if (!(vma->vm_flags & (VM_READ | VM_WRITE))) {
+ 			return true;
+ 		}
+ 		break;
+diff --git a/arch/sh/include/asm/sections.h b/arch/sh/include/asm/sections.h
+index 8edb824049b9e..0cb0ca149ac34 100644
+--- a/arch/sh/include/asm/sections.h
++++ b/arch/sh/include/asm/sections.h
+@@ -4,7 +4,7 @@
+ 
+ #include <asm-generic/sections.h>
+ 
+-extern long __machvec_start, __machvec_end;
++extern char __machvec_start[], __machvec_end[];
+ extern char __uncached_start, __uncached_end;
+ extern char __start_eh_frame[], __stop_eh_frame[];
+ 
+diff --git a/arch/sh/kernel/machvec.c b/arch/sh/kernel/machvec.c
+index d606679a211e1..57efaf5b82ae0 100644
+--- a/arch/sh/kernel/machvec.c
++++ b/arch/sh/kernel/machvec.c
+@@ -20,8 +20,8 @@
+ #define MV_NAME_SIZE 32
+ 
+ #define for_each_mv(mv) \
+-	for ((mv) = (struct sh_machine_vector *)&__machvec_start; \
+-	     (mv) && (unsigned long)(mv) < (unsigned long)&__machvec_end; \
++	for ((mv) = (struct sh_machine_vector *)__machvec_start; \
++	     (mv) && (unsigned long)(mv) < (unsigned long)__machvec_end; \
+ 	     (mv)++)
+ 
+ static struct sh_machine_vector * __init get_mv_byname(const char *name)
+@@ -87,8 +87,8 @@ void __init sh_mv_setup(void)
+ 	if (!machvec_selected) {
+ 		unsigned long machvec_size;
+ 
+-		machvec_size = ((unsigned long)&__machvec_end -
+-				(unsigned long)&__machvec_start);
++		machvec_size = ((unsigned long)__machvec_end -
++				(unsigned long)__machvec_start);
+ 
+ 		/*
+ 		 * Sanity check for machvec section alignment. Ensure
+@@ -102,7 +102,7 @@ void __init sh_mv_setup(void)
+ 		 * vector (usually the only one) from .machvec.init.
+ 		 */
+ 		if (machvec_size >= sizeof(struct sh_machine_vector))
+-			sh_mv = *(struct sh_machine_vector *)&__machvec_start;
++			sh_mv = *(struct sh_machine_vector *)__machvec_start;
+ 	}
+ 
+ 	pr_notice("Booting machvec: %s\n", get_system_type());
+diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
+index 52e2e2a3e4aef..00c6dce14bd2b 100644
+--- a/arch/um/kernel/um_arch.c
++++ b/arch/um/kernel/um_arch.c
+@@ -77,7 +77,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
+ 
+ static void *c_start(struct seq_file *m, loff_t *pos)
+ {
+-	return *pos < NR_CPUS ? cpu_data + *pos : NULL;
++	return *pos < nr_cpu_ids ? cpu_data + *pos : NULL;
+ }
+ 
+ static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
+index 0ed20e8bba9e5..ae7192b751361 100644
+--- a/arch/x86/include/asm/hyperv-tlfs.h
++++ b/arch/x86/include/asm/hyperv-tlfs.h
+@@ -474,7 +474,7 @@ struct hv_enlightened_vmcs {
+ 	u64 guest_rip;
+ 
+ 	u32 hv_clean_fields;
+-	u32 hv_padding_32;
++	u32 padding32_1;
+ 	u32 hv_synthetic_controls;
+ 	struct {
+ 		u32 nested_flush_hypercall:1;
+@@ -482,7 +482,7 @@ struct hv_enlightened_vmcs {
+ 		u32 reserved:30;
+ 	}  __packed hv_enlightenments_control;
+ 	u32 hv_vp_id;
+-
++	u32 padding32_2;
+ 	u64 hv_vm_id;
+ 	u64 partition_assist_page;
+ 	u64 padding64_4[4];
+diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
+index 91a06cef50c1b..f73327397b898 100644
+--- a/arch/x86/include/asm/microcode.h
++++ b/arch/x86/include/asm/microcode.h
+@@ -9,6 +9,7 @@
+ struct ucode_patch {
+ 	struct list_head plist;
+ 	void *data;		/* Intel uses only this one */
++	unsigned int size;
+ 	u32 patch_id;
+ 	u16 equiv_cpu;
+ };
+diff --git a/arch/x86/kernel/cpu/feat_ctl.c b/arch/x86/kernel/cpu/feat_ctl.c
+index 29a3bedabd060..d7541851288e3 100644
+--- a/arch/x86/kernel/cpu/feat_ctl.c
++++ b/arch/x86/kernel/cpu/feat_ctl.c
+@@ -1,11 +1,11 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/tboot.h>
+ 
++#include <asm/cpu.h>
+ #include <asm/cpufeature.h>
+ #include <asm/msr-index.h>
+ #include <asm/processor.h>
+ #include <asm/vmx.h>
+-#include "cpu.h"
+ 
+ #undef pr_fmt
+ #define pr_fmt(fmt)	"x86/cpu: " fmt
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 3f6b137ef4e6e..c879364413390 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -783,6 +783,7 @@ static int verify_and_add_patch(u8 family, u8 *fw, unsigned int leftover,
+ 		kfree(patch);
+ 		return -EINVAL;
+ 	}
++	patch->size = *patch_size;
+ 
+ 	mc_hdr      = (struct microcode_header_amd *)(fw + SECTION_HDR_SIZE);
+ 	proc_id     = mc_hdr->processor_rev_id;
+@@ -864,7 +865,7 @@ load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
+ 		return ret;
+ 
+ 	memset(amd_ucode_patch, 0, PATCH_MAX_SIZE);
+-	memcpy(amd_ucode_patch, p->data, min_t(u32, ksize(p->data), PATCH_MAX_SIZE));
++	memcpy(amd_ucode_patch, p->data, min_t(u32, p->size, PATCH_MAX_SIZE));
+ 
+ 	return ret;
+ }
+diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+index 0daf2f1cf7a86..465dce141bfc8 100644
+--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
++++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+@@ -416,6 +416,7 @@ static int pseudo_lock_fn(void *_rdtgrp)
+ 	struct pseudo_lock_region *plr = rdtgrp->plr;
+ 	u32 rmid_p, closid_p;
+ 	unsigned long i;
++	u64 saved_msr;
+ #ifdef CONFIG_KASAN
+ 	/*
+ 	 * The registers used for local register variables are also used
+@@ -459,6 +460,7 @@ static int pseudo_lock_fn(void *_rdtgrp)
+ 	 * the buffer and evict pseudo-locked memory read earlier from the
+ 	 * cache.
+ 	 */
++	saved_msr = __rdmsr(MSR_MISC_FEATURE_CONTROL);
+ 	__wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0);
+ 	closid_p = this_cpu_read(pqr_state.cur_closid);
+ 	rmid_p = this_cpu_read(pqr_state.cur_rmid);
+@@ -510,7 +512,7 @@ static int pseudo_lock_fn(void *_rdtgrp)
+ 	__wrmsr(IA32_PQR_ASSOC, rmid_p, closid_p);
+ 
+ 	/* Re-enable the hardware prefetcher(s) */
+-	wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0);
++	wrmsrl(MSR_MISC_FEATURE_CONTROL, saved_msr);
+ 	local_irq_enable();
+ 
+ 	plr->thread_done = 1;
+@@ -867,6 +869,7 @@ bool rdtgroup_pseudo_locked_in_hierarchy(struct rdt_domain *d)
+ static int measure_cycles_lat_fn(void *_plr)
+ {
+ 	struct pseudo_lock_region *plr = _plr;
++	u32 saved_low, saved_high;
+ 	unsigned long i;
+ 	u64 start, end;
+ 	void *mem_r;
+@@ -875,6 +878,7 @@ static int measure_cycles_lat_fn(void *_plr)
+ 	/*
+ 	 * Disable hardware prefetchers.
+ 	 */
++	rdmsr(MSR_MISC_FEATURE_CONTROL, saved_low, saved_high);
+ 	wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0);
+ 	mem_r = READ_ONCE(plr->kmem);
+ 	/*
+@@ -891,7 +895,7 @@ static int measure_cycles_lat_fn(void *_plr)
+ 		end = rdtsc_ordered();
+ 		trace_pseudo_lock_mem_latency((u32)(end - start));
+ 	}
+-	wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0);
++	wrmsr(MSR_MISC_FEATURE_CONTROL, saved_low, saved_high);
+ 	local_irq_enable();
+ 	plr->thread_done = 1;
+ 	wake_up_interruptible(&plr->lock_thread_wq);
+@@ -936,6 +940,7 @@ static int measure_residency_fn(struct perf_event_attr *miss_attr,
+ 	u64 hits_before = 0, hits_after = 0, miss_before = 0, miss_after = 0;
+ 	struct perf_event *miss_event, *hit_event;
+ 	int hit_pmcnum, miss_pmcnum;
++	u32 saved_low, saved_high;
+ 	unsigned int line_size;
+ 	unsigned int size;
+ 	unsigned long i;
+@@ -969,6 +974,7 @@ static int measure_residency_fn(struct perf_event_attr *miss_attr,
+ 	/*
+ 	 * Disable hardware prefetchers.
+ 	 */
++	rdmsr(MSR_MISC_FEATURE_CONTROL, saved_low, saved_high);
+ 	wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0);
+ 
+ 	/* Initialize rest of local variables */
+@@ -1027,7 +1033,7 @@ static int measure_residency_fn(struct perf_event_attr *miss_attr,
+ 	 */
+ 	rmb();
+ 	/* Re-enable hardware prefetchers */
+-	wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0);
++	wrmsr(MSR_MISC_FEATURE_CONTROL, saved_low, saved_high);
+ 	local_irq_enable();
+ out_hit:
+ 	perf_event_release_kernel(hit_event);
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 2aa41d682bb2c..52a881d240704 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -2039,7 +2039,7 @@ static int em_pop_sreg(struct x86_emulate_ctxt *ctxt)
+ 	if (rc != X86EMUL_CONTINUE)
+ 		return rc;
+ 
+-	if (ctxt->modrm_reg == VCPU_SREG_SS)
++	if (seg == VCPU_SREG_SS)
+ 		ctxt->interruptibility = KVM_X86_SHADOW_INT_MOV_SS;
+ 	if (ctxt->op_bytes > 2)
+ 		rsp_increment(ctxt, ctxt->op_bytes - 2);
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 6c4277e99d586..7f15e2b2a0d6c 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -3776,7 +3776,16 @@ static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu,
+ 	u32 intr_info = nr | INTR_INFO_VALID_MASK;
+ 
+ 	if (vcpu->arch.exception.has_error_code) {
+-		vmcs12->vm_exit_intr_error_code = vcpu->arch.exception.error_code;
++		/*
++		 * Intel CPUs do not generate error codes with bits 31:16 set,
++		 * and more importantly VMX disallows setting bits 31:16 in the
++		 * injected error code for VM-Entry.  Drop the bits to mimic
++		 * hardware and avoid inducing failure on nested VM-Entry if L1
++		 * chooses to inject the exception back to L2.  AMD CPUs _do_
++		 * generate "full" 32-bit error codes, so KVM allows userspace
++		 * to inject exception error codes with bits 31:16 set.
++		 */
++		vmcs12->vm_exit_intr_error_code = (u16)vcpu->arch.exception.error_code;
+ 		intr_info |= INTR_INFO_DELIVER_CODE_MASK;
+ 	}
+ 
+@@ -4183,14 +4192,6 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ 			nested_vmx_abort(vcpu,
+ 					 VMX_ABORT_SAVE_GUEST_MSR_FAIL);
+ 	}
+-
+-	/*
+-	 * Drop what we picked up for L2 via vmx_complete_interrupts. It is
+-	 * preserved above and would only end up incorrectly in L1.
+-	 */
+-	vcpu->arch.nmi_injected = false;
+-	kvm_clear_exception_queue(vcpu);
+-	kvm_clear_interrupt_queue(vcpu);
+ }
+ 
+ /*
+@@ -4530,6 +4531,17 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
+ 		WARN_ON_ONCE(nested_early_check);
+ 	}
+ 
++	/*
++	 * Drop events/exceptions that were queued for re-injection to L2
++	 * (picked up via vmx_complete_interrupts()), as well as exceptions
++	 * that were pending for L2.  Note, this must NOT be hoisted above
++	 * prepare_vmcs12(), events/exceptions queued for re-injection need to
++	 * be captured in vmcs12 (see vmcs12_save_pending_event()).
++	 */
++	vcpu->arch.nmi_injected = false;
++	kvm_clear_exception_queue(vcpu);
++	kvm_clear_interrupt_queue(vcpu);
++
+ 	vmx_switch_vmcs(vcpu, &vmx->vmcs01);
+ 
+ 	/* Update any VMCS fields that might have changed while L2 ran */
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index b33d0f283d4f8..af6742d11ca14 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1737,7 +1737,17 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu)
+ 	kvm_deliver_exception_payload(vcpu);
+ 
+ 	if (has_error_code) {
+-		vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE, error_code);
++		/*
++		 * Despite the error code being architecturally defined as 32
++		 * bits, and the VMCS field being 32 bits, Intel CPUs and thus
++		 * VMX don't actually supporting setting bits 31:16.  Hardware
++		 * will (should) never provide a bogus error code, but AMD CPUs
++		 * do generate error codes with bits 31:16 set, and so KVM's
++		 * ABI lets userspace shove in arbitrary 32-bit values.  Drop
++		 * the upper bits to avoid VM-Fail, losing information that
++		 * does't really exist is preferable to killing the VM.
++		 */
++		vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE, (u16)error_code);
+ 		intr_info |= INTR_INFO_DELIVER_CODE_MASK;
+ 	}
+ 
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 804c65d2b95f3..815030b7f6fa8 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -768,6 +768,7 @@ static void xen_load_idt(const struct desc_ptr *desc)
+ {
+ 	static DEFINE_SPINLOCK(lock);
+ 	static struct trap_info traps[257];
++	static const struct trap_info zero = { };
+ 	unsigned out;
+ 
+ 	trace_xen_cpu_load_idt(desc);
+@@ -777,7 +778,7 @@ static void xen_load_idt(const struct desc_ptr *desc)
+ 	memcpy(this_cpu_ptr(&idt_desc), desc, sizeof(idt_desc));
+ 
+ 	out = xen_convert_trap_info(desc, traps, false);
+-	memset(&traps[out], 0, sizeof(traps[0]));
++	traps[out] = zero;
+ 
+ 	xen_mc_flush();
+ 	if (HYPERVISOR_set_trap_table(traps))
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index cfc039fabf8ce..e37ba792902af 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -105,7 +105,8 @@ static bool blk_mq_check_inflight(struct blk_mq_hw_ctx *hctx,
+ {
+ 	struct mq_inflight *mi = priv;
+ 
+-	if (rq->part == mi->part && blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT)
++	if ((!mi->part->partno || rq->part == mi->part) &&
++	    blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT)
+ 		mi->inflight[rq_data_dir(rq)]++;
+ 
+ 	return true;
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index c53a254171a29..c526fdd0a7b90 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -944,7 +944,7 @@ static bool tg_with_in_bps_limit(struct throtl_grp *tg, struct bio *bio,
+ 				 u64 bps_limit, unsigned long *wait)
+ {
+ 	bool rw = bio_data_dir(bio);
+-	u64 bytes_allowed, extra_bytes, tmp;
++	u64 bytes_allowed, extra_bytes;
+ 	unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd;
+ 	unsigned int bio_size = throtl_bio_data_size(bio);
+ 
+@@ -961,10 +961,8 @@ static bool tg_with_in_bps_limit(struct throtl_grp *tg, struct bio *bio,
+ 		jiffy_elapsed_rnd = tg->td->throtl_slice;
+ 
+ 	jiffy_elapsed_rnd = roundup(jiffy_elapsed_rnd, tg->td->throtl_slice);
+-
+-	tmp = bps_limit * jiffy_elapsed_rnd;
+-	do_div(tmp, HZ);
+-	bytes_allowed = tmp;
++	bytes_allowed = mul_u64_u64_div_u64(bps_limit, (u64)jiffy_elapsed_rnd,
++					    (u64)HZ);
+ 
+ 	if (tg->bytes_disp[rw] + bio_size <= bytes_allowed) {
+ 		if (wait)
+diff --git a/crypto/akcipher.c b/crypto/akcipher.c
+index f866085c8a4a3..ab975a420e1e9 100644
+--- a/crypto/akcipher.c
++++ b/crypto/akcipher.c
+@@ -120,6 +120,12 @@ static int akcipher_default_op(struct akcipher_request *req)
+ 	return -ENOSYS;
+ }
+ 
++static int akcipher_default_set_key(struct crypto_akcipher *tfm,
++				     const void *key, unsigned int keylen)
++{
++	return -ENOSYS;
++}
++
+ int crypto_register_akcipher(struct akcipher_alg *alg)
+ {
+ 	struct crypto_alg *base = &alg->base;
+@@ -132,6 +138,8 @@ int crypto_register_akcipher(struct akcipher_alg *alg)
+ 		alg->encrypt = akcipher_default_op;
+ 	if (!alg->decrypt)
+ 		alg->decrypt = akcipher_default_op;
++	if (!alg->set_priv_key)
++		alg->set_priv_key = akcipher_default_set_key;
+ 
+ 	akcipher_prepare_alg(alg);
+ 	return crypto_register_alg(base);
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index eb04b2f828eef..cf6c9ffe04a2d 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -498,6 +498,22 @@ static const struct dmi_system_id video_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "SATELLITE R830"),
+ 		},
+ 	},
++	{
++	 .callback = video_disable_backlight_sysfs_if,
++	 .ident = "Toshiba Satellite Z830",
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "SATELLITE Z830"),
++		},
++	},
++	{
++	 .callback = video_disable_backlight_sysfs_if,
++	 .ident = "Toshiba Portege Z830",
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "PORTEGE Z830"),
++		},
++	},
+ 	/*
+ 	 * Some machine's _DOD IDs don't have bit 31(Device ID Scheme) set
+ 	 * but the IDs actually follow the Device ID Scheme.
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 0c8330ed1ffd5..5206fd3b78678 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -985,7 +985,7 @@ static void ghes_proc_in_irq(struct irq_work *irq_work)
+ 				ghes_estatus_cache_add(generic, estatus);
+ 		}
+ 
+-		if (task_work_pending && current->mm != &init_mm) {
++		if (task_work_pending && current->mm) {
+ 			estatus_node->task_work.func = ghes_kick_task_work;
+ 			estatus_node->task_work_cpu = smp_processor_id();
+ 			ret = task_work_add(current, &estatus_node->task_work,
+diff --git a/drivers/ata/libahci_platform.c b/drivers/ata/libahci_platform.c
+index 0910441321f72..64d6da0a53035 100644
+--- a/drivers/ata/libahci_platform.c
++++ b/drivers/ata/libahci_platform.c
+@@ -451,14 +451,24 @@ struct ahci_host_priv *ahci_platform_get_resources(struct platform_device *pdev,
+ 		}
+ 	}
+ 
+-	hpriv->nports = child_nodes = of_get_child_count(dev->of_node);
++	/*
++	 * Too many sub-nodes most likely means having something wrong with
++	 * the firmware.
++	 */
++	child_nodes = of_get_child_count(dev->of_node);
++	if (child_nodes > AHCI_MAX_PORTS) {
++		rc = -EINVAL;
++		goto err_out;
++	}
+ 
+ 	/*
+ 	 * If no sub-node was found, we still need to set nports to
+ 	 * one in order to be able to use the
+ 	 * ahci_platform_[en|dis]able_[phys|regulators] functions.
+ 	 */
+-	if (!child_nodes)
++	if (child_nodes)
++		hpriv->nports = child_nodes;
++	else
+ 		hpriv->nports = 1;
+ 
+ 	hpriv->phys = devm_kcalloc(dev, hpriv->nports, sizeof(*hpriv->phys), GFP_KERNEL);
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 4a6b82d434eef..b0d3dadeb9643 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1342,10 +1342,12 @@ static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *b
+ 	mutex_unlock(&nbd->config_lock);
+ 	ret = wait_event_interruptible(config->recv_wq,
+ 					 atomic_read(&config->recv_threads) == 0);
+-	if (ret)
++	if (ret) {
+ 		sock_shutdown(nbd);
+-	flush_workqueue(nbd->recv_workq);
++		nbd_clear_que(nbd);
++	}
+ 
++	flush_workqueue(nbd->recv_workq);
+ 	mutex_lock(&nbd->config_lock);
+ 	nbd_bdev_reset(bdev);
+ 	/* user requested, ignore socket errors */
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index a699e6166aefe..6efd981979bd3 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2816,6 +2816,7 @@ enum {
+ enum {
+ 	BTMTK_WMT_INVALID,
+ 	BTMTK_WMT_PATCH_UNDONE,
++	BTMTK_WMT_PATCH_PROGRESS,
+ 	BTMTK_WMT_PATCH_DONE,
+ 	BTMTK_WMT_ON_UNDONE,
+ 	BTMTK_WMT_ON_DONE,
+@@ -2831,7 +2832,7 @@ struct btmtk_wmt_hdr {
+ 
+ struct btmtk_hci_wmt_cmd {
+ 	struct btmtk_wmt_hdr hdr;
+-	u8 data[256];
++	u8 data[];
+ } __packed;
+ 
+ struct btmtk_hci_wmt_evt {
+@@ -2934,7 +2935,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 	 * to generate the event. Otherwise, the WMT event cannot return from
+ 	 * the device successfully.
+ 	 */
+-	udelay(100);
++	udelay(500);
+ 
+ 	usb_anchor_urb(urb, &data->ctrl_anchor);
+ 	err = usb_submit_urb(urb, GFP_ATOMIC);
+@@ -3010,7 +3011,7 @@ static int btusb_mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 	struct btmtk_hci_wmt_evt_funcc *wmt_evt_funcc;
+ 	u32 hlen, status = BTMTK_WMT_INVALID;
+ 	struct btmtk_hci_wmt_evt *wmt_evt;
+-	struct btmtk_hci_wmt_cmd wc;
++	struct btmtk_hci_wmt_cmd *wc;
+ 	struct btmtk_wmt_hdr *hdr;
+ 	int err;
+ 
+@@ -3019,24 +3020,42 @@ static int btusb_mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 	if (hlen > 255)
+ 		return -EINVAL;
+ 
+-	hdr = (struct btmtk_wmt_hdr *)&wc;
++	wc = kzalloc(hlen, GFP_KERNEL);
++	if (!wc)
++		return -ENOMEM;
++
++	hdr = &wc->hdr;
+ 	hdr->dir = 1;
+ 	hdr->op = wmt_params->op;
+ 	hdr->dlen = cpu_to_le16(wmt_params->dlen + 1);
+ 	hdr->flag = wmt_params->flag;
+-	memcpy(wc.data, wmt_params->data, wmt_params->dlen);
++	memcpy(wc->data, wmt_params->data, wmt_params->dlen);
+ 
+ 	set_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags);
+ 
+-	err = __hci_cmd_send(hdev, 0xfc6f, hlen, &wc);
++	/* WMT cmd/event doesn't follow up the generic HCI cmd/event handling,
++	 * it needs constantly polling control pipe until the host received the
++	 * WMT event, thus, we should require to specifically acquire PM counter
++	 * on the USB to prevent the interface from entering auto suspended
++	 * while WMT cmd/event in progress.
++	 */
++	err = usb_autopm_get_interface(data->intf);
++	if (err < 0)
++		goto err_free_wc;
++
++	err = __hci_cmd_send(hdev, 0xfc6f, hlen, wc);
+ 
+ 	if (err < 0) {
+ 		clear_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags);
+-		return err;
++		usb_autopm_put_interface(data->intf);
++		goto err_free_wc;
+ 	}
+ 
+ 	/* Submit control IN URB on demand to process the WMT event */
+ 	err = btusb_mtk_submit_wmt_recv_urb(hdev);
++
++	usb_autopm_put_interface(data->intf);
++
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -3054,13 +3073,14 @@ static int btusb_mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 	if (err == -EINTR) {
+ 		bt_dev_err(hdev, "Execution of wmt command interrupted");
+ 		clear_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags);
+-		return err;
++		goto err_free_wc;
+ 	}
+ 
+ 	if (err) {
+ 		bt_dev_err(hdev, "Execution of wmt command timed out");
+ 		clear_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags);
+-		return -ETIMEDOUT;
++		err = -ETIMEDOUT;
++		goto err_free_wc;
+ 	}
+ 
+ 	/* Parse and handle the return WMT event */
+@@ -3096,7 +3116,8 @@ static int btusb_mtk_hci_wmt_sync(struct hci_dev *hdev,
+ err_free_skb:
+ 	kfree_skb(data->evt_skb);
+ 	data->evt_skb = NULL;
+-
++err_free_wc:
++	kfree(wc);
+ 	return err;
+ }
+ 
+@@ -3238,9 +3259,9 @@ err_free_buf:
+ 	return err;
+ }
+ 
+-static int btusb_mtk_id_get(struct btusb_data *data, u32 *id)
++static int btusb_mtk_id_get(struct btusb_data *data, u32 reg, u32 *id)
+ {
+-	return btusb_mtk_reg_read(data, 0x80000008, id);
++	return btusb_mtk_reg_read(data, reg, id);
+ }
+ 
+ static int btusb_mtk_setup(struct hci_dev *hdev)
+@@ -3258,7 +3279,7 @@ static int btusb_mtk_setup(struct hci_dev *hdev)
+ 
+ 	calltime = ktime_get();
+ 
+-	err = btusb_mtk_id_get(data, &dev_id);
++	err = btusb_mtk_id_get(data, 0x80000008, &dev_id);
+ 	if (err < 0) {
+ 		bt_dev_err(hdev, "Failed to get device id (%d)", err);
+ 		return err;
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index 637c5b8c2aa1a..726d5c83c550e 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -490,6 +490,11 @@ static int hci_uart_tty_open(struct tty_struct *tty)
+ 		BT_ERR("Can't allocate control structure");
+ 		return -ENFILE;
+ 	}
++	if (percpu_init_rwsem(&hu->proto_lock)) {
++		BT_ERR("Can't allocate semaphore structure");
++		kfree(hu);
++		return -ENOMEM;
++	}
+ 
+ 	tty->disc_data = hu;
+ 	hu->tty = tty;
+@@ -502,8 +507,6 @@ static int hci_uart_tty_open(struct tty_struct *tty)
+ 	INIT_WORK(&hu->init_ready, hci_uart_init_work);
+ 	INIT_WORK(&hu->write_work, hci_uart_write_work);
+ 
+-	percpu_init_rwsem(&hu->proto_lock);
+-
+ 	/* Flush any pending characters in the driver */
+ 	tty_driver_flush_buffer(tty);
+ 
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index e9a44ab3812df..f2e2e553d4de7 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -301,11 +301,12 @@ int hci_uart_register_device(struct hci_uart *hu,
+ 
+ 	serdev_device_set_client_ops(hu->serdev, &hci_serdev_client_ops);
+ 
++	if (percpu_init_rwsem(&hu->proto_lock))
++		return -ENOMEM;
++
+ 	err = serdev_device_open(hu->serdev);
+ 	if (err)
+-		return err;
+-
+-	percpu_init_rwsem(&hu->proto_lock);
++		goto err_rwsem;
+ 
+ 	err = p->open(hu);
+ 	if (err)
+@@ -375,6 +376,8 @@ err_alloc:
+ 	p->close(hu);
+ err_open:
+ 	serdev_device_close(hu->serdev);
++err_rwsem:
++	percpu_free_rwsem(&hu->proto_lock);
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(hci_uart_register_device);
+@@ -396,5 +399,6 @@ void hci_uart_unregister_device(struct hci_uart *hu)
+ 		clear_bit(HCI_UART_PROTO_READY, &hu->flags);
+ 		serdev_device_close(hu->serdev);
+ 	}
++	percpu_free_rwsem(&hu->proto_lock);
+ }
+ EXPORT_SYMBOL_GPL(hci_uart_unregister_device);
+diff --git a/drivers/char/hw_random/imx-rngc.c b/drivers/char/hw_random/imx-rngc.c
+index 61c844baf26e8..9b182e5bfa87a 100644
+--- a/drivers/char/hw_random/imx-rngc.c
++++ b/drivers/char/hw_random/imx-rngc.c
+@@ -272,13 +272,6 @@ static int imx_rngc_probe(struct platform_device *pdev)
+ 		goto err;
+ 	}
+ 
+-	ret = devm_request_irq(&pdev->dev,
+-			irq, imx_rngc_irq, 0, pdev->name, (void *)rngc);
+-	if (ret) {
+-		dev_err(rngc->dev, "Can't get interrupt working.\n");
+-		goto err;
+-	}
+-
+ 	init_completion(&rngc->rng_op_done);
+ 
+ 	rngc->rng.name = pdev->name;
+@@ -292,6 +285,13 @@ static int imx_rngc_probe(struct platform_device *pdev)
+ 
+ 	imx_rngc_irq_mask_clear(rngc);
+ 
++	ret = devm_request_irq(&pdev->dev,
++			irq, imx_rngc_irq, 0, pdev->name, (void *)rngc);
++	if (ret) {
++		dev_err(rngc->dev, "Can't get interrupt working.\n");
++		return ret;
++	}
++
+ 	if (self_test) {
+ 		ret = imx_rngc_self_test(rngc);
+ 		if (ret) {
+diff --git a/drivers/clk/baikal-t1/ccu-div.c b/drivers/clk/baikal-t1/ccu-div.c
+index 4062092d67f90..a6642f3d33d44 100644
+--- a/drivers/clk/baikal-t1/ccu-div.c
++++ b/drivers/clk/baikal-t1/ccu-div.c
+@@ -34,6 +34,7 @@
+ #define CCU_DIV_CTL_CLKDIV_MASK(_width) \
+ 	GENMASK((_width) + CCU_DIV_CTL_CLKDIV_FLD - 1, CCU_DIV_CTL_CLKDIV_FLD)
+ #define CCU_DIV_CTL_LOCK_SHIFTED	BIT(27)
++#define CCU_DIV_CTL_GATE_REF_BUF	BIT(28)
+ #define CCU_DIV_CTL_LOCK_NORMAL		BIT(31)
+ 
+ #define CCU_DIV_RST_DELAY_US		1
+@@ -170,6 +171,40 @@ static int ccu_div_gate_is_enabled(struct clk_hw *hw)
+ 	return !!(val & CCU_DIV_CTL_EN);
+ }
+ 
++static int ccu_div_buf_enable(struct clk_hw *hw)
++{
++	struct ccu_div *div = to_ccu_div(hw);
++	unsigned long flags;
++
++	spin_lock_irqsave(&div->lock, flags);
++	regmap_update_bits(div->sys_regs, div->reg_ctl,
++			   CCU_DIV_CTL_GATE_REF_BUF, 0);
++	spin_unlock_irqrestore(&div->lock, flags);
++
++	return 0;
++}
++
++static void ccu_div_buf_disable(struct clk_hw *hw)
++{
++	struct ccu_div *div = to_ccu_div(hw);
++	unsigned long flags;
++
++	spin_lock_irqsave(&div->lock, flags);
++	regmap_update_bits(div->sys_regs, div->reg_ctl,
++			   CCU_DIV_CTL_GATE_REF_BUF, CCU_DIV_CTL_GATE_REF_BUF);
++	spin_unlock_irqrestore(&div->lock, flags);
++}
++
++static int ccu_div_buf_is_enabled(struct clk_hw *hw)
++{
++	struct ccu_div *div = to_ccu_div(hw);
++	u32 val = 0;
++
++	regmap_read(div->sys_regs, div->reg_ctl, &val);
++
++	return !(val & CCU_DIV_CTL_GATE_REF_BUF);
++}
++
+ static unsigned long ccu_div_var_recalc_rate(struct clk_hw *hw,
+ 					     unsigned long parent_rate)
+ {
+@@ -323,6 +358,7 @@ static const struct ccu_div_dbgfs_bit ccu_div_bits[] = {
+ 	CCU_DIV_DBGFS_BIT_ATTR("div_en", CCU_DIV_CTL_EN),
+ 	CCU_DIV_DBGFS_BIT_ATTR("div_rst", CCU_DIV_CTL_RST),
+ 	CCU_DIV_DBGFS_BIT_ATTR("div_bypass", CCU_DIV_CTL_SET_CLKDIV),
++	CCU_DIV_DBGFS_BIT_ATTR("div_buf", CCU_DIV_CTL_GATE_REF_BUF),
+ 	CCU_DIV_DBGFS_BIT_ATTR("div_lock", CCU_DIV_CTL_LOCK_NORMAL)
+ };
+ 
+@@ -441,6 +477,9 @@ static void ccu_div_var_debug_init(struct clk_hw *hw, struct dentry *dentry)
+ 			continue;
+ 		}
+ 
++		if (!strcmp("div_buf", name))
++			continue;
++
+ 		bits[didx] = ccu_div_bits[bidx];
+ 		bits[didx].div = div;
+ 
+@@ -477,6 +516,21 @@ static void ccu_div_gate_debug_init(struct clk_hw *hw, struct dentry *dentry)
+ 				   &ccu_div_dbgfs_fixed_clkdiv_fops);
+ }
+ 
++static void ccu_div_buf_debug_init(struct clk_hw *hw, struct dentry *dentry)
++{
++	struct ccu_div *div = to_ccu_div(hw);
++	struct ccu_div_dbgfs_bit *bit;
++
++	bit = kmalloc(sizeof(*bit), GFP_KERNEL);
++	if (!bit)
++		return;
++
++	*bit = ccu_div_bits[3];
++	bit->div = div;
++	debugfs_create_file_unsafe(bit->name, ccu_div_dbgfs_mode, dentry, bit,
++				   &ccu_div_dbgfs_bit_fops);
++}
++
+ static void ccu_div_fixed_debug_init(struct clk_hw *hw, struct dentry *dentry)
+ {
+ 	struct ccu_div *div = to_ccu_div(hw);
+@@ -489,6 +543,7 @@ static void ccu_div_fixed_debug_init(struct clk_hw *hw, struct dentry *dentry)
+ 
+ #define ccu_div_var_debug_init NULL
+ #define ccu_div_gate_debug_init NULL
++#define ccu_div_buf_debug_init NULL
+ #define ccu_div_fixed_debug_init NULL
+ 
+ #endif /* !CONFIG_DEBUG_FS */
+@@ -520,6 +575,13 @@ static const struct clk_ops ccu_div_gate_ops = {
+ 	.debug_init = ccu_div_gate_debug_init
+ };
+ 
++static const struct clk_ops ccu_div_buf_ops = {
++	.enable = ccu_div_buf_enable,
++	.disable = ccu_div_buf_disable,
++	.is_enabled = ccu_div_buf_is_enabled,
++	.debug_init = ccu_div_buf_debug_init
++};
++
+ static const struct clk_ops ccu_div_fixed_ops = {
+ 	.recalc_rate = ccu_div_fixed_recalc_rate,
+ 	.round_rate = ccu_div_fixed_round_rate,
+@@ -566,6 +628,8 @@ struct ccu_div *ccu_div_hw_register(const struct ccu_div_init_data *div_init)
+ 	} else if (div_init->type == CCU_DIV_GATE) {
+ 		hw_init.ops = &ccu_div_gate_ops;
+ 		div->divider = div_init->divider;
++	} else if (div_init->type == CCU_DIV_BUF) {
++		hw_init.ops = &ccu_div_buf_ops;
+ 	} else if (div_init->type == CCU_DIV_FIXED) {
+ 		hw_init.ops = &ccu_div_fixed_ops;
+ 		div->divider = div_init->divider;
+@@ -579,6 +643,7 @@ struct ccu_div *ccu_div_hw_register(const struct ccu_div_init_data *div_init)
+ 		goto err_free_div;
+ 	}
+ 	parent_data.fw_name = div_init->parent_name;
++	parent_data.name = div_init->parent_name;
+ 	hw_init.parent_data = &parent_data;
+ 	hw_init.num_parents = 1;
+ 
+diff --git a/drivers/clk/baikal-t1/ccu-div.h b/drivers/clk/baikal-t1/ccu-div.h
+index 795665caefbdc..4eb49ff4803c6 100644
+--- a/drivers/clk/baikal-t1/ccu-div.h
++++ b/drivers/clk/baikal-t1/ccu-div.h
+@@ -13,6 +13,14 @@
+ #include <linux/bits.h>
+ #include <linux/of.h>
+ 
++/*
++ * CCU Divider private clock IDs
++ * @CCU_SYS_SATA_CLK: CCU SATA internal clock
++ * @CCU_SYS_XGMAC_CLK: CCU XGMAC internal clock
++ */
++#define CCU_SYS_SATA_CLK		-1
++#define CCU_SYS_XGMAC_CLK		-2
++
+ /*
+  * CCU Divider private flags
+  * @CCU_DIV_SKIP_ONE: Due to some reason divider can't be set to 1.
+@@ -31,11 +39,13 @@
+  * enum ccu_div_type - CCU Divider types
+  * @CCU_DIV_VAR: Clocks gate with variable divider.
+  * @CCU_DIV_GATE: Clocks gate with fixed divider.
++ * @CCU_DIV_BUF: Clock gate with no divider.
+  * @CCU_DIV_FIXED: Ungateable clock with fixed divider.
+  */
+ enum ccu_div_type {
+ 	CCU_DIV_VAR,
+ 	CCU_DIV_GATE,
++	CCU_DIV_BUF,
+ 	CCU_DIV_FIXED
+ };
+ 
+diff --git a/drivers/clk/baikal-t1/clk-ccu-div.c b/drivers/clk/baikal-t1/clk-ccu-div.c
+index f141fda12b09a..90f4fda406ee6 100644
+--- a/drivers/clk/baikal-t1/clk-ccu-div.c
++++ b/drivers/clk/baikal-t1/clk-ccu-div.c
+@@ -76,6 +76,16 @@
+ 		.divider = _divider				\
+ 	}
+ 
++#define CCU_DIV_BUF_INFO(_id, _name, _pname, _base, _flags)	\
++	{							\
++		.id = _id,					\
++		.name = _name,					\
++		.parent_name = _pname,				\
++		.base = _base,					\
++		.type = CCU_DIV_BUF,				\
++		.flags = _flags					\
++	}
++
+ #define CCU_DIV_FIXED_INFO(_id, _name, _pname, _divider)	\
+ 	{							\
+ 		.id = _id,					\
+@@ -188,11 +198,14 @@ static const struct ccu_div_rst_map axi_rst_map[] = {
+  * for the SoC devices registers IO-operations.
+  */
+ static const struct ccu_div_info sys_info[] = {
+-	CCU_DIV_VAR_INFO(CCU_SYS_SATA_REF_CLK, "sys_sata_ref_clk",
++	CCU_DIV_VAR_INFO(CCU_SYS_SATA_CLK, "sys_sata_clk",
+ 			 "sata_clk", CCU_SYS_SATA_REF_BASE, 4,
+ 			 CLK_SET_RATE_GATE,
+ 			 CCU_DIV_SKIP_ONE | CCU_DIV_LOCK_SHIFTED |
+ 			 CCU_DIV_RESET_DOMAIN),
++	CCU_DIV_BUF_INFO(CCU_SYS_SATA_REF_CLK, "sys_sata_ref_clk",
++			 "sys_sata_clk", CCU_SYS_SATA_REF_BASE,
++			 CLK_SET_RATE_PARENT),
+ 	CCU_DIV_VAR_INFO(CCU_SYS_APB_CLK, "sys_apb_clk",
+ 			 "pcie_clk", CCU_SYS_APB_BASE, 5,
+ 			 CLK_IS_CRITICAL, CCU_DIV_RESET_DOMAIN),
+@@ -204,10 +217,12 @@ static const struct ccu_div_info sys_info[] = {
+ 			  "eth_clk", CCU_SYS_GMAC1_BASE, 5),
+ 	CCU_DIV_FIXED_INFO(CCU_SYS_GMAC1_PTP_CLK, "sys_gmac1_ptp_clk",
+ 			   "eth_clk", 10),
+-	CCU_DIV_GATE_INFO(CCU_SYS_XGMAC_REF_CLK, "sys_xgmac_ref_clk",
+-			  "eth_clk", CCU_SYS_XGMAC_BASE, 8),
++	CCU_DIV_GATE_INFO(CCU_SYS_XGMAC_CLK, "sys_xgmac_clk",
++			  "eth_clk", CCU_SYS_XGMAC_BASE, 1),
++	CCU_DIV_FIXED_INFO(CCU_SYS_XGMAC_REF_CLK, "sys_xgmac_ref_clk",
++			   "sys_xgmac_clk", 8),
+ 	CCU_DIV_FIXED_INFO(CCU_SYS_XGMAC_PTP_CLK, "sys_xgmac_ptp_clk",
+-			   "eth_clk", 10),
++			   "sys_xgmac_clk", 8),
+ 	CCU_DIV_GATE_INFO(CCU_SYS_USB_CLK, "sys_usb_clk",
+ 			  "eth_clk", CCU_SYS_USB_BASE, 10),
+ 	CCU_DIV_VAR_INFO(CCU_SYS_PVT_CLK, "sys_pvt_clk",
+@@ -396,6 +411,9 @@ static int ccu_div_clk_register(struct ccu_div_data *data)
+ 			init.base = info->base;
+ 			init.sys_regs = data->sys_regs;
+ 			init.divider = info->divider;
++		} else if (init.type == CCU_DIV_BUF) {
++			init.base = info->base;
++			init.sys_regs = data->sys_regs;
+ 		} else {
+ 			init.divider = info->divider;
+ 		}
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index 178886823b90c..b7f89873fcf54 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -968,9 +968,9 @@ static u32 bcm2835_clock_choose_div(struct clk_hw *hw,
+ 	return div;
+ }
+ 
+-static long bcm2835_clock_rate_from_divisor(struct bcm2835_clock *clock,
+-					    unsigned long parent_rate,
+-					    u32 div)
++static unsigned long bcm2835_clock_rate_from_divisor(struct bcm2835_clock *clock,
++						     unsigned long parent_rate,
++						     u32 div)
+ {
+ 	const struct bcm2835_clock_data *data = clock->data;
+ 	u64 temp;
+@@ -1786,7 +1786,7 @@ static const struct bcm2835_clk_desc clk_desc_array[] = {
+ 		.load_mask = CM_PLLC_LOADPER,
+ 		.hold_mask = CM_PLLC_HOLDPER,
+ 		.fixed_divider = 1,
+-		.flags = CLK_SET_RATE_PARENT),
++		.flags = CLK_IS_CRITICAL | CLK_SET_RATE_PARENT),
+ 
+ 	/*
+ 	 * PLLD is the display PLL, used to drive DSI display panels.
+diff --git a/drivers/clk/berlin/bg2.c b/drivers/clk/berlin/bg2.c
+index bccdfa00fd373..67a9edbba29c4 100644
+--- a/drivers/clk/berlin/bg2.c
++++ b/drivers/clk/berlin/bg2.c
+@@ -500,12 +500,15 @@ static void __init berlin2_clock_setup(struct device_node *np)
+ 	int n, ret;
+ 
+ 	clk_data = kzalloc(struct_size(clk_data, hws, MAX_CLKS), GFP_KERNEL);
+-	if (!clk_data)
++	if (!clk_data) {
++		of_node_put(parent_np);
+ 		return;
++	}
+ 	clk_data->num = MAX_CLKS;
+ 	hws = clk_data->hws;
+ 
+ 	gbase = of_iomap(parent_np, 0);
++	of_node_put(parent_np);
+ 	if (!gbase)
+ 		return;
+ 
+diff --git a/drivers/clk/berlin/bg2q.c b/drivers/clk/berlin/bg2q.c
+index e9518d35f262e..dd2784bb75b64 100644
+--- a/drivers/clk/berlin/bg2q.c
++++ b/drivers/clk/berlin/bg2q.c
+@@ -286,19 +286,23 @@ static void __init berlin2q_clock_setup(struct device_node *np)
+ 	int n, ret;
+ 
+ 	clk_data = kzalloc(struct_size(clk_data, hws, MAX_CLKS), GFP_KERNEL);
+-	if (!clk_data)
++	if (!clk_data) {
++		of_node_put(parent_np);
+ 		return;
++	}
+ 	clk_data->num = MAX_CLKS;
+ 	hws = clk_data->hws;
+ 
+ 	gbase = of_iomap(parent_np, 0);
+ 	if (!gbase) {
++		of_node_put(parent_np);
+ 		pr_err("%pOF: Unable to map global base\n", np);
+ 		return;
+ 	}
+ 
+ 	/* BG2Q CPU PLL is not part of global registers */
+ 	cpupll_base = of_iomap(parent_np, 1);
++	of_node_put(parent_np);
+ 	if (!cpupll_base) {
+ 		pr_err("%pOF: Unable to map cpupll base\n", np);
+ 		iounmap(gbase);
+diff --git a/drivers/clk/clk-ast2600.c b/drivers/clk/clk-ast2600.c
+index 24dab2312bc6f..9c3305bcb27ae 100644
+--- a/drivers/clk/clk-ast2600.c
++++ b/drivers/clk/clk-ast2600.c
+@@ -622,7 +622,7 @@ static int aspeed_g6_clk_probe(struct platform_device *pdev)
+ 	regmap_write(map, 0x308, 0x12000); /* 3x3 = 9 */
+ 
+ 	/* P-Bus (BCLK) clock divider */
+-	hw = clk_hw_register_divider_table(dev, "bclk", "hpll", 0,
++	hw = clk_hw_register_divider_table(dev, "bclk", "epll", 0,
+ 			scu_g6_base + ASPEED_G6_CLK_SELECTION1, 20, 3, 0,
+ 			ast2600_div_table,
+ 			&aspeed_g6_clk_lock);
+diff --git a/drivers/clk/clk-oxnas.c b/drivers/clk/clk-oxnas.c
+index 78d5ea669fea7..2fe36f579ac5e 100644
+--- a/drivers/clk/clk-oxnas.c
++++ b/drivers/clk/clk-oxnas.c
+@@ -207,7 +207,7 @@ static const struct of_device_id oxnas_stdclk_dt_ids[] = {
+ 
+ static int oxnas_stdclk_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np = pdev->dev.of_node;
++	struct device_node *np = pdev->dev.of_node, *parent_np;
+ 	const struct oxnas_stdclk_data *data;
+ 	const struct of_device_id *id;
+ 	struct regmap *regmap;
+@@ -219,7 +219,9 @@ static int oxnas_stdclk_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	data = id->data;
+ 
+-	regmap = syscon_node_to_regmap(of_get_parent(np));
++	parent_np = of_get_parent(np);
++	regmap = syscon_node_to_regmap(parent_np);
++	of_node_put(parent_np);
+ 	if (IS_ERR(regmap)) {
+ 		dev_err(&pdev->dev, "failed to have parent regmap\n");
+ 		return PTR_ERR(regmap);
+diff --git a/drivers/clk/clk-qoriq.c b/drivers/clk/clk-qoriq.c
+index 46101c6a20f26..585b9ac118818 100644
+--- a/drivers/clk/clk-qoriq.c
++++ b/drivers/clk/clk-qoriq.c
+@@ -1038,8 +1038,13 @@ static void __init _clockgen_init(struct device_node *np, bool legacy);
+  */
+ static void __init legacy_init_clockgen(struct device_node *np)
+ {
+-	if (!clockgen.node)
+-		_clockgen_init(of_get_parent(np), true);
++	if (!clockgen.node) {
++		struct device_node *parent_np;
++
++		parent_np = of_get_parent(np);
++		_clockgen_init(parent_np, true);
++		of_node_put(parent_np);
++	}
+ }
+ 
+ /* Legacy node */
+@@ -1134,6 +1139,7 @@ static struct clk * __init create_sysclk(const char *name)
+ 	sysclk = of_get_child_by_name(clockgen.node, "sysclk");
+ 	if (sysclk) {
+ 		clk = sysclk_from_fixed(sysclk, name);
++		of_node_put(sysclk);
+ 		if (!IS_ERR(clk))
+ 			return clk;
+ 	}
+diff --git a/drivers/clk/clk-versaclock5.c b/drivers/clk/clk-versaclock5.c
+index 4e741f94baf02..eb597ea7bb87b 100644
+--- a/drivers/clk/clk-versaclock5.c
++++ b/drivers/clk/clk-versaclock5.c
+@@ -1116,7 +1116,7 @@ static const struct vc5_chip_info idt_5p49v6901_info = {
+ 	.model = IDT_VC6_5P49V6901,
+ 	.clk_fod_cnt = 4,
+ 	.clk_out_cnt = 5,
+-	.flags = VC5_HAS_PFD_FREQ_DBL,
++	.flags = VC5_HAS_PFD_FREQ_DBL | VC5_HAS_BYPASS_SYNC_BIT,
+ };
+ 
+ static const struct vc5_chip_info idt_5p49v6965_info = {
+diff --git a/drivers/clk/mediatek/clk-mt8183-mfgcfg.c b/drivers/clk/mediatek/clk-mt8183-mfgcfg.c
+index 37b4162c58820..3a33014eee7f7 100644
+--- a/drivers/clk/mediatek/clk-mt8183-mfgcfg.c
++++ b/drivers/clk/mediatek/clk-mt8183-mfgcfg.c
+@@ -18,9 +18,9 @@ static const struct mtk_gate_regs mfg_cg_regs = {
+ 	.sta_ofs = 0x0,
+ };
+ 
+-#define GATE_MFG(_id, _name, _parent, _shift)			\
+-	GATE_MTK(_id, _name, _parent, &mfg_cg_regs, _shift,	\
+-		&mtk_clk_gate_ops_setclr)
++#define GATE_MFG(_id, _name, _parent, _shift)				\
++	GATE_MTK_FLAGS(_id, _name, _parent, &mfg_cg_regs, _shift,	\
++		       &mtk_clk_gate_ops_setclr, CLK_SET_RATE_PARENT)
+ 
+ static const struct mtk_gate mfg_clks[] = {
+ 	GATE_MFG(CLK_MFG_BG3D, "mfg_bg3d", "mfg_sel", 0)
+diff --git a/drivers/clk/meson/meson-aoclk.c b/drivers/clk/meson/meson-aoclk.c
+index 3a6d84cd66012..67d8a0d30221c 100644
+--- a/drivers/clk/meson/meson-aoclk.c
++++ b/drivers/clk/meson/meson-aoclk.c
+@@ -36,6 +36,7 @@ int meson_aoclkc_probe(struct platform_device *pdev)
+ 	struct meson_aoclk_reset_controller *rstc;
+ 	struct meson_aoclk_data *data;
+ 	struct device *dev = &pdev->dev;
++	struct device_node *np;
+ 	struct regmap *regmap;
+ 	int ret, clkid;
+ 
+@@ -47,7 +48,9 @@ int meson_aoclkc_probe(struct platform_device *pdev)
+ 	if (!rstc)
+ 		return -ENOMEM;
+ 
+-	regmap = syscon_node_to_regmap(of_get_parent(dev->of_node));
++	np = of_get_parent(dev->of_node);
++	regmap = syscon_node_to_regmap(np);
++	of_node_put(np);
+ 	if (IS_ERR(regmap)) {
+ 		dev_err(dev, "failed to get regmap\n");
+ 		return PTR_ERR(regmap);
+diff --git a/drivers/clk/meson/meson-eeclk.c b/drivers/clk/meson/meson-eeclk.c
+index a7cb1e7aedc46..18ae387872687 100644
+--- a/drivers/clk/meson/meson-eeclk.c
++++ b/drivers/clk/meson/meson-eeclk.c
+@@ -17,6 +17,7 @@ int meson_eeclkc_probe(struct platform_device *pdev)
+ {
+ 	const struct meson_eeclkc_data *data;
+ 	struct device *dev = &pdev->dev;
++	struct device_node *np;
+ 	struct regmap *map;
+ 	int ret, i;
+ 
+@@ -25,7 +26,9 @@ int meson_eeclkc_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 
+ 	/* Get the hhi system controller node */
+-	map = syscon_node_to_regmap(of_get_parent(dev->of_node));
++	np = of_get_parent(dev->of_node);
++	map = syscon_node_to_regmap(np);
++	of_node_put(np);
+ 	if (IS_ERR(map)) {
+ 		dev_err(dev,
+ 			"failed to get HHI regmap\n");
+diff --git a/drivers/clk/meson/meson8b.c b/drivers/clk/meson/meson8b.c
+index 862f0756b50f0..1da9d212f8b77 100644
+--- a/drivers/clk/meson/meson8b.c
++++ b/drivers/clk/meson/meson8b.c
+@@ -3735,13 +3735,16 @@ static void __init meson8b_clkc_init_common(struct device_node *np,
+ 			struct clk_hw_onecell_data *clk_hw_onecell_data)
+ {
+ 	struct meson8b_clk_reset *rstc;
++	struct device_node *parent_np;
+ 	const char *notifier_clk_name;
+ 	struct clk *notifier_clk;
+ 	void __iomem *clk_base;
+ 	struct regmap *map;
+ 	int i, ret;
+ 
+-	map = syscon_node_to_regmap(of_get_parent(np));
++	parent_np = of_get_parent(np);
++	map = syscon_node_to_regmap(parent_np);
++	of_node_put(parent_np);
+ 	if (IS_ERR(map)) {
+ 		pr_info("failed to get HHI regmap - Trying obsolete regs\n");
+ 
+diff --git a/drivers/clk/qcom/apss-ipq6018.c b/drivers/clk/qcom/apss-ipq6018.c
+index d78ff2f310bfa..b5d93657e1ee3 100644
+--- a/drivers/clk/qcom/apss-ipq6018.c
++++ b/drivers/clk/qcom/apss-ipq6018.c
+@@ -57,7 +57,7 @@ static struct clk_branch apcs_alias0_core_clk = {
+ 			.parent_hws = (const struct clk_hw *[]){
+ 				&apcs_alias0_clk_src.clkr.hw },
+ 			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
++			.flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+diff --git a/drivers/clk/sprd/common.c b/drivers/clk/sprd/common.c
+index d620bbbcdfc88..ce81e4087a8fc 100644
+--- a/drivers/clk/sprd/common.c
++++ b/drivers/clk/sprd/common.c
+@@ -41,7 +41,7 @@ int sprd_clk_regmap_init(struct platform_device *pdev,
+ {
+ 	void __iomem *base;
+ 	struct device *dev = &pdev->dev;
+-	struct device_node *node = dev->of_node;
++	struct device_node *node = dev->of_node, *np;
+ 	struct regmap *regmap;
+ 
+ 	if (of_find_property(node, "sprd,syscon", NULL)) {
+@@ -50,9 +50,10 @@ int sprd_clk_regmap_init(struct platform_device *pdev,
+ 			pr_err("%s: failed to get syscon regmap\n", __func__);
+ 			return PTR_ERR(regmap);
+ 		}
+-	} else if (of_device_is_compatible(of_get_parent(dev->of_node),
+-			   "syscon")) {
+-		regmap = device_node_to_regmap(of_get_parent(dev->of_node));
++	} else if (of_device_is_compatible(np =	of_get_parent(node), "syscon") ||
++		   (of_node_put(np), 0)) {
++		regmap = device_node_to_regmap(np);
++		of_node_put(np);
+ 		if (IS_ERR(regmap)) {
+ 			dev_err(dev, "failed to get regmap from its parent.\n");
+ 			return PTR_ERR(regmap);
+diff --git a/drivers/clk/tegra/clk-tegra114.c b/drivers/clk/tegra/clk-tegra114.c
+index bc9e47a4cb60a..4e2b26e3e5738 100644
+--- a/drivers/clk/tegra/clk-tegra114.c
++++ b/drivers/clk/tegra/clk-tegra114.c
+@@ -1317,6 +1317,7 @@ static void __init tegra114_clock_init(struct device_node *np)
+ 	}
+ 
+ 	pmc_base = of_iomap(node, 0);
++	of_node_put(node);
+ 	if (!pmc_base) {
+ 		pr_err("Can't map pmc registers\n");
+ 		WARN_ON(1);
+diff --git a/drivers/clk/tegra/clk-tegra20.c b/drivers/clk/tegra/clk-tegra20.c
+index 3efc651b42e3a..d60ee6e318a55 100644
+--- a/drivers/clk/tegra/clk-tegra20.c
++++ b/drivers/clk/tegra/clk-tegra20.c
+@@ -1128,6 +1128,7 @@ static void __init tegra20_clock_init(struct device_node *np)
+ 	}
+ 
+ 	pmc_base = of_iomap(node, 0);
++	of_node_put(node);
+ 	if (!pmc_base) {
+ 		pr_err("Can't map pmc registers\n");
+ 		BUG();
+diff --git a/drivers/clk/tegra/clk-tegra210.c b/drivers/clk/tegra/clk-tegra210.c
+index 68cbb98af567d..1a0016d07f88c 100644
+--- a/drivers/clk/tegra/clk-tegra210.c
++++ b/drivers/clk/tegra/clk-tegra210.c
+@@ -3697,6 +3697,7 @@ static void __init tegra210_clock_init(struct device_node *np)
+ 	}
+ 
+ 	pmc_base = of_iomap(node, 0);
++	of_node_put(node);
+ 	if (!pmc_base) {
+ 		pr_err("Can't map pmc registers\n");
+ 		WARN_ON(1);
+diff --git a/drivers/clk/ti/clk-dra7-atl.c b/drivers/clk/ti/clk-dra7-atl.c
+index 8d4c08b034bdd..e2e59d78c173f 100644
+--- a/drivers/clk/ti/clk-dra7-atl.c
++++ b/drivers/clk/ti/clk-dra7-atl.c
+@@ -251,14 +251,16 @@ static int of_dra7_atl_clk_probe(struct platform_device *pdev)
+ 		if (rc) {
+ 			pr_err("%s: failed to lookup atl clock %d\n", __func__,
+ 			       i);
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto pm_put;
+ 		}
+ 
+ 		clk = of_clk_get_from_provider(&clkspec);
+ 		if (IS_ERR(clk)) {
+ 			pr_err("%s: failed to get atl clock %d from provider\n",
+ 			       __func__, i);
+-			return PTR_ERR(clk);
++			ret = PTR_ERR(clk);
++			goto pm_put;
+ 		}
+ 
+ 		cdesc = to_atl_desc(__clk_get_hw(clk));
+@@ -291,8 +293,9 @@ static int of_dra7_atl_clk_probe(struct platform_device *pdev)
+ 		if (cdesc->enabled)
+ 			atl_clk_enable(__clk_get_hw(clk));
+ 	}
+-	pm_runtime_put_sync(cinfo->dev);
+ 
++pm_put:
++	pm_runtime_put_sync(cinfo->dev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/clk/zynqmp/clkc.c b/drivers/clk/zynqmp/clkc.c
+index db8d0d7161ce2..9c82ae240c407 100644
+--- a/drivers/clk/zynqmp/clkc.c
++++ b/drivers/clk/zynqmp/clkc.c
+@@ -687,6 +687,13 @@ static void zynqmp_get_clock_info(void)
+ 				  FIELD_PREP(CLK_ATTR_NODE_INDEX, i);
+ 
+ 		zynqmp_pm_clock_get_name(clock[i].clk_id, &name);
++
++		/*
++		 * Terminate with NULL character in case name provided by firmware
++		 * is longer and truncated due to size limit.
++		 */
++		name.name[sizeof(name.name) - 1] = '\0';
++
+ 		if (!strcmp(name.name, RESERVED_CLK_NAME))
+ 			continue;
+ 		strncpy(clock[i].clk_name, name.name, MAX_NAME_LEN);
+diff --git a/drivers/clk/zynqmp/pll.c b/drivers/clk/zynqmp/pll.c
+index abe6afbf3407b..2ae7f9129b07a 100644
+--- a/drivers/clk/zynqmp/pll.c
++++ b/drivers/clk/zynqmp/pll.c
+@@ -99,26 +99,25 @@ static long zynqmp_pll_round_rate(struct clk_hw *hw, unsigned long rate,
+ 				  unsigned long *prate)
+ {
+ 	u32 fbdiv;
+-	long rate_div, f;
++	u32 mult, div;
+ 
+-	/* Enable the fractional mode if needed */
+-	rate_div = (rate * FRAC_DIV) / *prate;
+-	f = rate_div % FRAC_DIV;
+-	if (f) {
+-		if (rate > PS_PLL_VCO_MAX) {
+-			fbdiv = rate / PS_PLL_VCO_MAX;
+-			rate = rate / (fbdiv + 1);
+-		}
+-		if (rate < PS_PLL_VCO_MIN) {
+-			fbdiv = DIV_ROUND_UP(PS_PLL_VCO_MIN, rate);
+-			rate = rate * fbdiv;
+-		}
+-		return rate;
++	/* Let rate fall inside the range PS_PLL_VCO_MIN ~ PS_PLL_VCO_MAX */
++	if (rate > PS_PLL_VCO_MAX) {
++		div = DIV_ROUND_UP(rate, PS_PLL_VCO_MAX);
++		rate = rate / div;
++	}
++	if (rate < PS_PLL_VCO_MIN) {
++		mult = DIV_ROUND_UP(PS_PLL_VCO_MIN, rate);
++		rate = rate * mult;
+ 	}
+ 
+ 	fbdiv = DIV_ROUND_CLOSEST(rate, *prate);
+-	fbdiv = clamp_t(u32, fbdiv, PLL_FBDIV_MIN, PLL_FBDIV_MAX);
+-	return *prate * fbdiv;
++	if (fbdiv < PLL_FBDIV_MIN || fbdiv > PLL_FBDIV_MAX) {
++		fbdiv = clamp_t(u32, fbdiv, PLL_FBDIV_MIN, PLL_FBDIV_MAX);
++		rate = *prate * fbdiv;
++	}
++
++	return rate;
+ }
+ 
+ /**
+diff --git a/drivers/crypto/cavium/cpt/cptpf_main.c b/drivers/crypto/cavium/cpt/cptpf_main.c
+index 7819490274512..d9362199423f2 100644
+--- a/drivers/crypto/cavium/cpt/cptpf_main.c
++++ b/drivers/crypto/cavium/cpt/cptpf_main.c
+@@ -254,6 +254,7 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
+ 	const struct firmware *fw_entry;
+ 	struct device *dev = &cpt->pdev->dev;
+ 	struct ucode_header *ucode;
++	unsigned int code_length;
+ 	struct microcode *mcode;
+ 	int j, ret = 0;
+ 
+@@ -264,11 +265,12 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
+ 	ucode = (struct ucode_header *)fw_entry->data;
+ 	mcode = &cpt->mcode[cpt->next_mc_idx];
+ 	memcpy(mcode->version, (u8 *)fw_entry->data, CPT_UCODE_VERSION_SZ);
+-	mcode->code_size = ntohl(ucode->code_length) * 2;
+-	if (!mcode->code_size) {
++	code_length = ntohl(ucode->code_length);
++	if (code_length == 0 || code_length >= INT_MAX / 2) {
+ 		ret = -EINVAL;
+ 		goto fw_release;
+ 	}
++	mcode->code_size = code_length * 2;
+ 
+ 	mcode->is_ae = is_ae;
+ 	mcode->core_mask = 0ULL;
+diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
+index b3eea329f840f..b9299defb431d 100644
+--- a/drivers/crypto/ccp/ccp-dmaengine.c
++++ b/drivers/crypto/ccp/ccp-dmaengine.c
+@@ -642,6 +642,10 @@ static void ccp_dma_release(struct ccp_device *ccp)
+ 	for (i = 0; i < ccp->cmd_q_count; i++) {
+ 		chan = ccp->ccp_dma_chan + i;
+ 		dma_chan = &chan->dma_chan;
++
++		if (dma_chan->client_count)
++			dma_release_channel(dma_chan);
++
+ 		tasklet_kill(&chan->cleanup_tasklet);
+ 		list_del_rcu(&dma_chan->device_node);
+ 	}
+@@ -767,8 +771,8 @@ void ccp_dmaengine_unregister(struct ccp_device *ccp)
+ 	if (!dmaengine)
+ 		return;
+ 
+-	dma_async_device_unregister(dma_dev);
+ 	ccp_dma_release(ccp);
++	dma_async_device_unregister(dma_dev);
+ 
+ 	kmem_cache_destroy(ccp->dma_desc_cache);
+ 	kmem_cache_destroy(ccp->dma_cmd_cache);
+diff --git a/drivers/crypto/hisilicon/zip/zip_crypto.c b/drivers/crypto/hisilicon/zip/zip_crypto.c
+index 08b4660b014c6..5db7cdea994ae 100644
+--- a/drivers/crypto/hisilicon/zip/zip_crypto.c
++++ b/drivers/crypto/hisilicon/zip/zip_crypto.c
+@@ -107,12 +107,12 @@ static int sgl_sge_nr_set(const char *val, const struct kernel_param *kp)
+ 	if (ret || n == 0 || n > HISI_ACC_SGL_SGE_NR_MAX)
+ 		return -EINVAL;
+ 
+-	return param_set_int(val, kp);
++	return param_set_ushort(val, kp);
+ }
+ 
+ static const struct kernel_param_ops sgl_sge_nr_ops = {
+ 	.set = sgl_sge_nr_set,
+-	.get = param_get_int,
++	.get = param_get_ushort,
+ };
+ 
+ static u16 sgl_sge_nr = HZIP_SGL_SGE_NR;
+diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
+index 56d5ccb5cc004..1c9af02eb63b6 100644
+--- a/drivers/crypto/inside-secure/safexcel_hash.c
++++ b/drivers/crypto/inside-secure/safexcel_hash.c
+@@ -381,7 +381,7 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
+ 					u32 x;
+ 
+ 					x = ipad[i] ^ ipad[i + 4];
+-					cache[i] ^= swab(x);
++					cache[i] ^= swab32(x);
+ 				}
+ 			}
+ 			cache_len = AES_BLOCK_SIZE;
+@@ -819,7 +819,7 @@ static int safexcel_ahash_final(struct ahash_request *areq)
+ 			u32 *result = (void *)areq->result;
+ 
+ 			/* K3 */
+-			result[i] = swab(ctx->base.ipad.word[i + 4]);
++			result[i] = swab32(ctx->base.ipad.word[i + 4]);
+ 		}
+ 		areq->result[0] ^= 0x80;			// 10- padding
+ 		crypto_cipher_encrypt_one(ctx->kaes, areq->result, areq->result);
+@@ -2104,7 +2104,7 @@ static int safexcel_xcbcmac_setkey(struct crypto_ahash *tfm, const u8 *key,
+ 	crypto_cipher_encrypt_one(ctx->kaes, (u8 *)key_tmp + AES_BLOCK_SIZE,
+ 		"\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3");
+ 	for (i = 0; i < 3 * AES_BLOCK_SIZE / sizeof(u32); i++)
+-		ctx->base.ipad.word[i] = swab(key_tmp[i]);
++		ctx->base.ipad.word[i] = swab32(key_tmp[i]);
+ 
+ 	crypto_cipher_clear_flags(ctx->kaes, CRYPTO_TFM_REQ_MASK);
+ 	crypto_cipher_set_flags(ctx->kaes, crypto_ahash_get_flags(tfm) &
+@@ -2187,7 +2187,7 @@ static int safexcel_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
+ 		return ret;
+ 
+ 	for (i = 0; i < len / sizeof(u32); i++)
+-		ctx->base.ipad.word[i + 8] = swab(aes.key_enc[i]);
++		ctx->base.ipad.word[i + 8] = swab32(aes.key_enc[i]);
+ 
+ 	/* precompute the CMAC key material */
+ 	crypto_cipher_clear_flags(ctx->kaes, CRYPTO_TFM_REQ_MASK);
+diff --git a/drivers/crypto/marvell/octeontx/otx_cptpf_ucode.c b/drivers/crypto/marvell/octeontx/otx_cptpf_ucode.c
+index 40b482198ebc5..a765eefb18c2f 100644
+--- a/drivers/crypto/marvell/octeontx/otx_cptpf_ucode.c
++++ b/drivers/crypto/marvell/octeontx/otx_cptpf_ucode.c
+@@ -286,6 +286,7 @@ static int process_tar_file(struct device *dev,
+ 	struct tar_ucode_info_t *tar_info;
+ 	struct otx_cpt_ucode_hdr *ucode_hdr;
+ 	int ucode_type, ucode_size;
++	unsigned int code_length;
+ 
+ 	/*
+ 	 * If size is less than microcode header size then don't report
+@@ -303,7 +304,13 @@ static int process_tar_file(struct device *dev,
+ 	if (get_ucode_type(ucode_hdr, &ucode_type))
+ 		return 0;
+ 
+-	ucode_size = ntohl(ucode_hdr->code_length) * 2;
++	code_length = ntohl(ucode_hdr->code_length);
++	if (code_length >= INT_MAX / 2) {
++		dev_err(dev, "Invalid code_length %u\n", code_length);
++		return -EINVAL;
++	}
++
++	ucode_size = code_length * 2;
+ 	if (!ucode_size || (size < round_up(ucode_size, 16) +
+ 	    sizeof(struct otx_cpt_ucode_hdr) + OTX_CPT_UCODE_SIGN_LEN)) {
+ 		dev_err(dev, "Ucode %s invalid size\n", filename);
+@@ -886,6 +893,7 @@ static int ucode_load(struct device *dev, struct otx_cpt_ucode *ucode,
+ {
+ 	struct otx_cpt_ucode_hdr *ucode_hdr;
+ 	const struct firmware *fw;
++	unsigned int code_length;
+ 	int ret;
+ 
+ 	set_ucode_filename(ucode, ucode_filename);
+@@ -896,7 +904,13 @@ static int ucode_load(struct device *dev, struct otx_cpt_ucode *ucode,
+ 	ucode_hdr = (struct otx_cpt_ucode_hdr *) fw->data;
+ 	memcpy(ucode->ver_str, ucode_hdr->ver_str, OTX_CPT_UCODE_VER_STR_SZ);
+ 	ucode->ver_num = ucode_hdr->ver_num;
+-	ucode->size = ntohl(ucode_hdr->code_length) * 2;
++	code_length = ntohl(ucode_hdr->code_length);
++	if (code_length >= INT_MAX / 2) {
++		dev_err(dev, "Ucode invalid code_length %u\n", code_length);
++		ret = -EINVAL;
++		goto release_fw;
++	}
++	ucode->size = code_length * 2;
+ 	if (!ucode->size || (fw->size < round_up(ucode->size, 16)
+ 	    + sizeof(struct otx_cpt_ucode_hdr) + OTX_CPT_UCODE_SIGN_LEN)) {
+ 		dev_err(dev, "Ucode %s invalid size\n", ucode_filename);
+diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
+index 06abe1e2074e9..5b71768fc0c7f 100644
+--- a/drivers/crypto/qat/qat_common/qat_algs.c
++++ b/drivers/crypto/qat/qat_common/qat_algs.c
+@@ -34,19 +34,6 @@
+ static DEFINE_MUTEX(algs_lock);
+ static unsigned int active_devs;
+ 
+-struct qat_alg_buf {
+-	u32 len;
+-	u32 resrvd;
+-	u64 addr;
+-} __packed;
+-
+-struct qat_alg_buf_list {
+-	u64 resrvd;
+-	u32 num_bufs;
+-	u32 num_mapped_bufs;
+-	struct qat_alg_buf bufers[];
+-} __packed __aligned(64);
+-
+ /* Common content descriptor */
+ struct qat_alg_cd {
+ 	union {
+@@ -637,14 +624,20 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
+ 	dma_addr_t blpout = qat_req->buf.bloutp;
+ 	size_t sz = qat_req->buf.sz;
+ 	size_t sz_out = qat_req->buf.sz_out;
++	int bl_dma_dir;
+ 	int i;
+ 
++	bl_dma_dir = blp != blpout ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
++
+ 	for (i = 0; i < bl->num_bufs; i++)
+ 		dma_unmap_single(dev, bl->bufers[i].addr,
+-				 bl->bufers[i].len, DMA_BIDIRECTIONAL);
++				 bl->bufers[i].len, bl_dma_dir);
+ 
+ 	dma_unmap_single(dev, blp, sz, DMA_TO_DEVICE);
+-	kfree(bl);
++
++	if (!qat_req->buf.sgl_src_valid)
++		kfree(bl);
++
+ 	if (blp != blpout) {
+ 		/* If out of place operation dma unmap only data */
+ 		int bufless = blout->num_bufs - blout->num_mapped_bufs;
+@@ -652,10 +645,12 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
+ 		for (i = bufless; i < blout->num_bufs; i++) {
+ 			dma_unmap_single(dev, blout->bufers[i].addr,
+ 					 blout->bufers[i].len,
+-					 DMA_BIDIRECTIONAL);
++					 DMA_FROM_DEVICE);
+ 		}
+ 		dma_unmap_single(dev, blpout, sz_out, DMA_TO_DEVICE);
+-		kfree(blout);
++
++		if (!qat_req->buf.sgl_dst_valid)
++			kfree(blout);
+ 	}
+ }
+ 
+@@ -669,26 +664,34 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+ 	int n = sg_nents(sgl);
+ 	struct qat_alg_buf_list *bufl;
+ 	struct qat_alg_buf_list *buflout = NULL;
+-	dma_addr_t blp;
+-	dma_addr_t bloutp;
++	dma_addr_t blp = DMA_MAPPING_ERROR;
++	dma_addr_t bloutp = DMA_MAPPING_ERROR;
+ 	struct scatterlist *sg;
+-	size_t sz_out, sz = struct_size(bufl, bufers, n + 1);
++	size_t sz_out, sz = struct_size(bufl, bufers, n);
++	int node = dev_to_node(&GET_DEV(inst->accel_dev));
++	int bufl_dma_dir;
+ 
+ 	if (unlikely(!n))
+ 		return -EINVAL;
+ 
+-	bufl = kzalloc_node(sz, GFP_ATOMIC,
+-			    dev_to_node(&GET_DEV(inst->accel_dev)));
+-	if (unlikely(!bufl))
+-		return -ENOMEM;
++	qat_req->buf.sgl_src_valid = false;
++	qat_req->buf.sgl_dst_valid = false;
++
++	if (n > QAT_MAX_BUFF_DESC) {
++		bufl = kzalloc_node(sz, GFP_ATOMIC, node);
++		if (unlikely(!bufl))
++			return -ENOMEM;
++	} else {
++		bufl = &qat_req->buf.sgl_src.sgl_hdr;
++		memset(bufl, 0, sizeof(struct qat_alg_buf_list));
++		qat_req->buf.sgl_src_valid = true;
++	}
++
++	bufl_dma_dir = sgl != sglout ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
+ 
+ 	for_each_sg(sgl, sg, n, i)
+ 		bufl->bufers[i].addr = DMA_MAPPING_ERROR;
+ 
+-	blp = dma_map_single(dev, bufl, sz, DMA_TO_DEVICE);
+-	if (unlikely(dma_mapping_error(dev, blp)))
+-		goto err_in;
+-
+ 	for_each_sg(sgl, sg, n, i) {
+ 		int y = sg_nctr;
+ 
+@@ -697,13 +700,16 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+ 
+ 		bufl->bufers[y].addr = dma_map_single(dev, sg_virt(sg),
+ 						      sg->length,
+-						      DMA_BIDIRECTIONAL);
++						      bufl_dma_dir);
+ 		bufl->bufers[y].len = sg->length;
+ 		if (unlikely(dma_mapping_error(dev, bufl->bufers[y].addr)))
+ 			goto err_in;
+ 		sg_nctr++;
+ 	}
+ 	bufl->num_bufs = sg_nctr;
++	blp = dma_map_single(dev, bufl, sz, DMA_TO_DEVICE);
++	if (unlikely(dma_mapping_error(dev, blp)))
++		goto err_in;
+ 	qat_req->buf.bl = bufl;
+ 	qat_req->buf.blp = blp;
+ 	qat_req->buf.sz = sz;
+@@ -712,20 +718,23 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+ 		struct qat_alg_buf *bufers;
+ 
+ 		n = sg_nents(sglout);
+-		sz_out = struct_size(buflout, bufers, n + 1);
++		sz_out = struct_size(buflout, bufers, n);
+ 		sg_nctr = 0;
+-		buflout = kzalloc_node(sz_out, GFP_ATOMIC,
+-				       dev_to_node(&GET_DEV(inst->accel_dev)));
+-		if (unlikely(!buflout))
+-			goto err_in;
++
++		if (n > QAT_MAX_BUFF_DESC) {
++			buflout = kzalloc_node(sz_out, GFP_ATOMIC, node);
++			if (unlikely(!buflout))
++				goto err_in;
++		} else {
++			buflout = &qat_req->buf.sgl_dst.sgl_hdr;
++			memset(buflout, 0, sizeof(struct qat_alg_buf_list));
++			qat_req->buf.sgl_dst_valid = true;
++		}
+ 
+ 		bufers = buflout->bufers;
+ 		for_each_sg(sglout, sg, n, i)
+ 			bufers[i].addr = DMA_MAPPING_ERROR;
+ 
+-		bloutp = dma_map_single(dev, buflout, sz_out, DMA_TO_DEVICE);
+-		if (unlikely(dma_mapping_error(dev, bloutp)))
+-			goto err_out;
+ 		for_each_sg(sglout, sg, n, i) {
+ 			int y = sg_nctr;
+ 
+@@ -734,7 +743,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+ 
+ 			bufers[y].addr = dma_map_single(dev, sg_virt(sg),
+ 							sg->length,
+-							DMA_BIDIRECTIONAL);
++							DMA_FROM_DEVICE);
+ 			if (unlikely(dma_mapping_error(dev, bufers[y].addr)))
+ 				goto err_out;
+ 			bufers[y].len = sg->length;
+@@ -742,6 +751,9 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+ 		}
+ 		buflout->num_bufs = sg_nctr;
+ 		buflout->num_mapped_bufs = sg_nctr;
++		bloutp = dma_map_single(dev, buflout, sz_out, DMA_TO_DEVICE);
++		if (unlikely(dma_mapping_error(dev, bloutp)))
++			goto err_out;
+ 		qat_req->buf.blout = buflout;
+ 		qat_req->buf.bloutp = bloutp;
+ 		qat_req->buf.sz_out = sz_out;
+@@ -753,27 +765,32 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+ 	return 0;
+ 
+ err_out:
++	if (!dma_mapping_error(dev, bloutp))
++		dma_unmap_single(dev, bloutp, sz_out, DMA_TO_DEVICE);
++
+ 	n = sg_nents(sglout);
+ 	for (i = 0; i < n; i++)
+ 		if (!dma_mapping_error(dev, buflout->bufers[i].addr))
+ 			dma_unmap_single(dev, buflout->bufers[i].addr,
+ 					 buflout->bufers[i].len,
+-					 DMA_BIDIRECTIONAL);
+-	if (!dma_mapping_error(dev, bloutp))
+-		dma_unmap_single(dev, bloutp, sz_out, DMA_TO_DEVICE);
+-	kfree(buflout);
++					 DMA_FROM_DEVICE);
++
++	if (!qat_req->buf.sgl_dst_valid)
++		kfree(buflout);
+ 
+ err_in:
++	if (!dma_mapping_error(dev, blp))
++		dma_unmap_single(dev, blp, sz, DMA_TO_DEVICE);
++
+ 	n = sg_nents(sgl);
+ 	for (i = 0; i < n; i++)
+ 		if (!dma_mapping_error(dev, bufl->bufers[i].addr))
+ 			dma_unmap_single(dev, bufl->bufers[i].addr,
+ 					 bufl->bufers[i].len,
+-					 DMA_BIDIRECTIONAL);
++					 bufl_dma_dir);
+ 
+-	if (!dma_mapping_error(dev, blp))
+-		dma_unmap_single(dev, blp, sz, DMA_TO_DEVICE);
+-	kfree(bufl);
++	if (!qat_req->buf.sgl_src_valid)
++		kfree(bufl);
+ 
+ 	dev_err(dev, "Failed to map buf for dma\n");
+ 	return -ENOMEM;
+diff --git a/drivers/crypto/qat/qat_common/qat_crypto.h b/drivers/crypto/qat/qat_common/qat_crypto.h
+index 12682d1e9f5f3..5f9328201ba46 100644
+--- a/drivers/crypto/qat/qat_common/qat_crypto.h
++++ b/drivers/crypto/qat/qat_common/qat_crypto.h
+@@ -20,6 +20,26 @@ struct qat_crypto_instance {
+ 	atomic_t refctr;
+ };
+ 
++#define QAT_MAX_BUFF_DESC	4
++
++struct qat_alg_buf {
++	u32 len;
++	u32 resrvd;
++	u64 addr;
++} __packed;
++
++struct qat_alg_buf_list {
++	u64 resrvd;
++	u32 num_bufs;
++	u32 num_mapped_bufs;
++	struct qat_alg_buf bufers[];
++} __packed;
++
++struct qat_alg_fixed_buf_list {
++	struct qat_alg_buf_list sgl_hdr;
++	struct qat_alg_buf descriptors[QAT_MAX_BUFF_DESC];
++} __packed __aligned(64);
++
+ struct qat_crypto_request_buffs {
+ 	struct qat_alg_buf_list *bl;
+ 	dma_addr_t blp;
+@@ -27,6 +47,10 @@ struct qat_crypto_request_buffs {
+ 	dma_addr_t bloutp;
+ 	size_t sz;
+ 	size_t sz_out;
++	bool sgl_src_valid;
++	bool sgl_dst_valid;
++	struct qat_alg_fixed_buf_list sgl_src;
++	struct qat_alg_fixed_buf_list sgl_dst;
+ };
+ 
+ struct qat_crypto_request;
+diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
+index d60679c798224..2043dd0611217 100644
+--- a/drivers/crypto/sahara.c
++++ b/drivers/crypto/sahara.c
+@@ -25,10 +25,10 @@
+ #include <linux/kernel.h>
+ #include <linux/kthread.h>
+ #include <linux/module.h>
+-#include <linux/mutex.h>
+ #include <linux/of.h>
+ #include <linux/of_device.h>
+ #include <linux/platform_device.h>
++#include <linux/spinlock.h>
+ 
+ #define SHA_BUFFER_LEN		PAGE_SIZE
+ #define SAHARA_MAX_SHA_BLOCK_SIZE	SHA256_BLOCK_SIZE
+@@ -195,7 +195,7 @@ struct sahara_dev {
+ 	void __iomem		*regs_base;
+ 	struct clk		*clk_ipg;
+ 	struct clk		*clk_ahb;
+-	struct mutex		queue_mutex;
++	spinlock_t		queue_spinlock;
+ 	struct task_struct	*kthread;
+ 	struct completion	dma_completion;
+ 
+@@ -641,9 +641,9 @@ static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode)
+ 
+ 	rctx->mode = mode;
+ 
+-	mutex_lock(&dev->queue_mutex);
++	spin_lock_bh(&dev->queue_spinlock);
+ 	err = crypto_enqueue_request(&dev->queue, &req->base);
+-	mutex_unlock(&dev->queue_mutex);
++	spin_unlock_bh(&dev->queue_spinlock);
+ 
+ 	wake_up_process(dev->kthread);
+ 
+@@ -1042,10 +1042,10 @@ static int sahara_queue_manage(void *data)
+ 	do {
+ 		__set_current_state(TASK_INTERRUPTIBLE);
+ 
+-		mutex_lock(&dev->queue_mutex);
++		spin_lock_bh(&dev->queue_spinlock);
+ 		backlog = crypto_get_backlog(&dev->queue);
+ 		async_req = crypto_dequeue_request(&dev->queue);
+-		mutex_unlock(&dev->queue_mutex);
++		spin_unlock_bh(&dev->queue_spinlock);
+ 
+ 		if (backlog)
+ 			backlog->complete(backlog, -EINPROGRESS);
+@@ -1091,9 +1091,9 @@ static int sahara_sha_enqueue(struct ahash_request *req, int last)
+ 		rctx->first = 1;
+ 	}
+ 
+-	mutex_lock(&dev->queue_mutex);
++	spin_lock_bh(&dev->queue_spinlock);
+ 	ret = crypto_enqueue_request(&dev->queue, &req->base);
+-	mutex_unlock(&dev->queue_mutex);
++	spin_unlock_bh(&dev->queue_spinlock);
+ 
+ 	wake_up_process(dev->kthread);
+ 
+@@ -1454,7 +1454,7 @@ static int sahara_probe(struct platform_device *pdev)
+ 
+ 	crypto_init_queue(&dev->queue, SAHARA_QUEUE_LENGTH);
+ 
+-	mutex_init(&dev->queue_mutex);
++	spin_lock_init(&dev->queue_spinlock);
+ 
+ 	dev_ptr = dev;
+ 
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index b624f3d8f0e64..e359c5c6c4df2 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -118,17 +118,20 @@ static int begin_cpu_udmabuf(struct dma_buf *buf,
+ {
+ 	struct udmabuf *ubuf = buf->priv;
+ 	struct device *dev = ubuf->device->this_device;
++	int ret = 0;
+ 
+ 	if (!ubuf->sg) {
+ 		ubuf->sg = get_sg_table(dev, buf, direction);
+-		if (IS_ERR(ubuf->sg))
+-			return PTR_ERR(ubuf->sg);
++		if (IS_ERR(ubuf->sg)) {
++			ret = PTR_ERR(ubuf->sg);
++			ubuf->sg = NULL;
++		}
+ 	} else {
+ 		dma_sync_sg_for_cpu(dev, ubuf->sg->sgl, ubuf->sg->nents,
+ 				    direction);
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int end_cpu_udmabuf(struct dma_buf *buf,
+diff --git a/drivers/dma/hisi_dma.c b/drivers/dma/hisi_dma.c
+index 3e83769615d1c..8f16513673100 100644
+--- a/drivers/dma/hisi_dma.c
++++ b/drivers/dma/hisi_dma.c
+@@ -185,7 +185,8 @@ static void hisi_dma_reset_qp_point(struct hisi_dma_dev *hdma_dev, u32 index)
+ 	hisi_dma_chan_write(hdma_dev->base, HISI_DMA_CQ_HEAD_PTR, index, 0);
+ }
+ 
+-static void hisi_dma_reset_hw_chan(struct hisi_dma_chan *chan)
++static void hisi_dma_reset_or_disable_hw_chan(struct hisi_dma_chan *chan,
++					      bool disable)
+ {
+ 	struct hisi_dma_dev *hdma_dev = chan->hdma_dev;
+ 	u32 index = chan->qp_num, tmp;
+@@ -206,8 +207,11 @@ static void hisi_dma_reset_hw_chan(struct hisi_dma_chan *chan)
+ 	hisi_dma_do_reset(hdma_dev, index);
+ 	hisi_dma_reset_qp_point(hdma_dev, index);
+ 	hisi_dma_pause_dma(hdma_dev, index, false);
+-	hisi_dma_enable_dma(hdma_dev, index, true);
+-	hisi_dma_unmask_irq(hdma_dev, index);
++
++	if (!disable) {
++		hisi_dma_enable_dma(hdma_dev, index, true);
++		hisi_dma_unmask_irq(hdma_dev, index);
++	}
+ 
+ 	ret = readl_relaxed_poll_timeout(hdma_dev->base +
+ 		HISI_DMA_Q_FSM_STS + index * HISI_DMA_OFFSET, tmp,
+@@ -223,7 +227,7 @@ static void hisi_dma_free_chan_resources(struct dma_chan *c)
+ 	struct hisi_dma_chan *chan = to_hisi_dma_chan(c);
+ 	struct hisi_dma_dev *hdma_dev = chan->hdma_dev;
+ 
+-	hisi_dma_reset_hw_chan(chan);
++	hisi_dma_reset_or_disable_hw_chan(chan, false);
+ 	vchan_free_chan_resources(&chan->vc);
+ 
+ 	memset(chan->sq, 0, sizeof(struct hisi_dma_sqe) * hdma_dev->chan_depth);
+@@ -272,7 +276,6 @@ static void hisi_dma_start_transfer(struct hisi_dma_chan *chan)
+ 
+ 	vd = vchan_next_desc(&chan->vc);
+ 	if (!vd) {
+-		dev_err(&hdma_dev->pdev->dev, "no issued task!\n");
+ 		chan->desc = NULL;
+ 		return;
+ 	}
+@@ -304,7 +307,7 @@ static void hisi_dma_issue_pending(struct dma_chan *c)
+ 
+ 	spin_lock_irqsave(&chan->vc.lock, flags);
+ 
+-	if (vchan_issue_pending(&chan->vc))
++	if (vchan_issue_pending(&chan->vc) && !chan->desc)
+ 		hisi_dma_start_transfer(chan);
+ 
+ 	spin_unlock_irqrestore(&chan->vc.lock, flags);
+@@ -399,7 +402,7 @@ static void hisi_dma_enable_qp(struct hisi_dma_dev *hdma_dev, u32 qp_index)
+ 
+ static void hisi_dma_disable_qp(struct hisi_dma_dev *hdma_dev, u32 qp_index)
+ {
+-	hisi_dma_reset_hw_chan(&hdma_dev->chan[qp_index]);
++	hisi_dma_reset_or_disable_hw_chan(&hdma_dev->chan[qp_index], true);
+ }
+ 
+ static void hisi_dma_enable_qps(struct hisi_dma_dev *hdma_dev)
+@@ -438,18 +441,15 @@ static irqreturn_t hisi_dma_irq(int irq, void *data)
+ 	desc = chan->desc;
+ 	cqe = chan->cq + chan->cq_head;
+ 	if (desc) {
++		chan->cq_head = (chan->cq_head + 1) % hdma_dev->chan_depth;
++		hisi_dma_chan_write(hdma_dev->base, HISI_DMA_CQ_HEAD_PTR,
++				    chan->qp_num, chan->cq_head);
+ 		if (FIELD_GET(STATUS_MASK, cqe->w0) == STATUS_SUCC) {
+-			chan->cq_head = (chan->cq_head + 1) %
+-					hdma_dev->chan_depth;
+-			hisi_dma_chan_write(hdma_dev->base,
+-					    HISI_DMA_CQ_HEAD_PTR, chan->qp_num,
+-					    chan->cq_head);
+ 			vchan_cookie_complete(&desc->vd);
++			hisi_dma_start_transfer(chan);
+ 		} else {
+ 			dev_err(&hdma_dev->pdev->dev, "task error!\n");
+ 		}
+-
+-		chan->desc = NULL;
+ 	}
+ 
+ 	spin_unlock_irqrestore(&chan->vc.lock, flags);
+diff --git a/drivers/dma/ioat/dma.c b/drivers/dma/ioat/dma.c
+index 37ff4ec7db76f..e2070df6cad28 100644
+--- a/drivers/dma/ioat/dma.c
++++ b/drivers/dma/ioat/dma.c
+@@ -656,7 +656,7 @@ static void __cleanup(struct ioatdma_chan *ioat_chan, dma_addr_t phys_complete)
+ 	if (active - i == 0) {
+ 		dev_dbg(to_dev(ioat_chan), "%s: cancel completion timeout\n",
+ 			__func__);
+-		mod_timer(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
++		mod_timer_pending(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
+ 	}
+ 
+ 	/* microsecond delay by sysfs variable  per pending descriptor */
+@@ -682,7 +682,7 @@ static void ioat_cleanup(struct ioatdma_chan *ioat_chan)
+ 
+ 		if (chanerr &
+ 		    (IOAT_CHANERR_HANDLE_MASK | IOAT_CHANERR_RECOVER_MASK)) {
+-			mod_timer(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
++			mod_timer_pending(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
+ 			ioat_eh(ioat_chan);
+ 		}
+ 	}
+@@ -879,7 +879,7 @@ static void check_active(struct ioatdma_chan *ioat_chan)
+ 	}
+ 
+ 	if (test_and_clear_bit(IOAT_CHAN_ACTIVE, &ioat_chan->state))
+-		mod_timer(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
++		mod_timer_pending(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
+ }
+ 
+ static void ioat_reboot_chan(struct ioatdma_chan *ioat_chan)
+diff --git a/drivers/firmware/efi/libstub/fdt.c b/drivers/firmware/efi/libstub/fdt.c
+index 368cd60000eec..d48b0de05b62f 100644
+--- a/drivers/firmware/efi/libstub/fdt.c
++++ b/drivers/firmware/efi/libstub/fdt.c
+@@ -281,14 +281,6 @@ efi_status_t allocate_new_fdt_and_exit_boot(void *handle,
+ 		goto fail;
+ 	}
+ 
+-	/*
+-	 * Now that we have done our final memory allocation (and free)
+-	 * we can get the memory map key needed for exit_boot_services().
+-	 */
+-	status = efi_get_memory_map(&map);
+-	if (status != EFI_SUCCESS)
+-		goto fail_free_new_fdt;
+-
+ 	status = update_fdt((void *)fdt_addr, fdt_size,
+ 			    (void *)*new_fdt_addr, MAX_FDT_SIZE, cmdline_ptr,
+ 			    initrd_addr, initrd_size);
+diff --git a/drivers/firmware/google/gsmi.c b/drivers/firmware/google/gsmi.c
+index 7d9367b220107..c1cd5ca875caa 100644
+--- a/drivers/firmware/google/gsmi.c
++++ b/drivers/firmware/google/gsmi.c
+@@ -680,6 +680,15 @@ static struct notifier_block gsmi_die_notifier = {
+ static int gsmi_panic_callback(struct notifier_block *nb,
+ 			       unsigned long reason, void *arg)
+ {
++
++	/*
++	 * Panic callbacks are executed with all other CPUs stopped,
++	 * so we must not attempt to spin waiting for gsmi_dev.lock
++	 * to be released.
++	 */
++	if (spin_is_locked(&gsmi_dev.lock))
++		return NOTIFY_DONE;
++
+ 	gsmi_shutdown_reason(GSMI_SHUTDOWN_PANIC);
+ 	return NOTIFY_DONE;
+ }
+diff --git a/drivers/fpga/dfl.c b/drivers/fpga/dfl.c
+index b450870b75ed8..eb8a6e329af9b 100644
+--- a/drivers/fpga/dfl.c
++++ b/drivers/fpga/dfl.c
+@@ -1857,7 +1857,7 @@ long dfl_feature_ioctl_set_irq(struct platform_device *pdev,
+ 		return -EINVAL;
+ 
+ 	fds = memdup_user((void __user *)(arg + sizeof(hdr)),
+-			  hdr.count * sizeof(s32));
++			  array_size(hdr.count, sizeof(s32)));
+ 	if (IS_ERR(fds))
+ 		return PTR_ERR(fds);
+ 
+diff --git a/drivers/fsi/fsi-core.c b/drivers/fsi/fsi-core.c
+index 59ddc9fd5bca4..92e6eebd1851e 100644
+--- a/drivers/fsi/fsi-core.c
++++ b/drivers/fsi/fsi-core.c
+@@ -1309,6 +1309,9 @@ int fsi_master_register(struct fsi_master *master)
+ 
+ 	mutex_init(&master->scan_lock);
+ 	master->idx = ida_simple_get(&master_ida, 0, INT_MAX, GFP_KERNEL);
++	if (master->idx < 0)
++		return master->idx;
++
+ 	dev_set_name(&master->dev, "fsi%d", master->idx);
+ 	master->dev.class = &fsi_master_class;
+ 
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index ca868271f4c43..4e9b3a95fa7c7 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -30,6 +30,7 @@ menuconfig DRM
+ config DRM_MIPI_DBI
+ 	tristate
+ 	depends on DRM
++	select DRM_KMS_HELPER
+ 
+ config DRM_MIPI_DSI
+ 	bool
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+index df1f9b88a53f9..98d3661336a46 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+@@ -1671,10 +1671,12 @@ amdgpu_connector_add(struct amdgpu_device *adev,
+ 						   adev->mode_info.dither_property,
+ 						   AMDGPU_FMT_DITHER_DISABLE);
+ 
+-			if (amdgpu_audio != 0)
++			if (amdgpu_audio != 0) {
+ 				drm_object_attach_property(&amdgpu_connector->base.base,
+ 							   adev->mode_info.audio_property,
+ 							   AMDGPU_AUDIO_AUTO);
++				amdgpu_connector->audio = AMDGPU_AUDIO_AUTO;
++			}
+ 
+ 			subpixel_order = SubPixelHorizontalRGB;
+ 			connector->interlace_allowed = true;
+@@ -1796,6 +1798,7 @@ amdgpu_connector_add(struct amdgpu_device *adev,
+ 				drm_object_attach_property(&amdgpu_connector->base.base,
+ 							   adev->mode_info.audio_property,
+ 							   AMDGPU_AUDIO_AUTO);
++				amdgpu_connector->audio = AMDGPU_AUDIO_AUTO;
+ 			}
+ 			drm_object_attach_property(&amdgpu_connector->base.base,
+ 						   adev->mode_info.dither_property,
+@@ -1849,6 +1852,7 @@ amdgpu_connector_add(struct amdgpu_device *adev,
+ 				drm_object_attach_property(&amdgpu_connector->base.base,
+ 							   adev->mode_info.audio_property,
+ 							   AMDGPU_AUDIO_AUTO);
++				amdgpu_connector->audio = AMDGPU_AUDIO_AUTO;
+ 			}
+ 			drm_object_attach_property(&amdgpu_connector->base.base,
+ 						   adev->mode_info.dither_property,
+@@ -1899,6 +1903,7 @@ amdgpu_connector_add(struct amdgpu_device *adev,
+ 				drm_object_attach_property(&amdgpu_connector->base.base,
+ 							   adev->mode_info.audio_property,
+ 							   AMDGPU_AUDIO_AUTO);
++				amdgpu_connector->audio = AMDGPU_AUDIO_AUTO;
+ 			}
+ 			drm_object_attach_property(&amdgpu_connector->base.base,
+ 						   adev->mode_info.dither_property,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 881045e600af2..bde0496d2f153 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2179,16 +2179,8 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
+ 		}
+ 		adev->ip_blocks[i].status.sw = true;
+ 
+-		if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_COMMON) {
+-			/* need to do common hw init early so everything is set up for gmc */
+-			r = adev->ip_blocks[i].version->funcs->hw_init((void *)adev);
+-			if (r) {
+-				DRM_ERROR("hw_init %d failed %d\n", i, r);
+-				goto init_failed;
+-			}
+-			adev->ip_blocks[i].status.hw = true;
+-		} else if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) {
+-			/* need to do gmc hw init early so we can allocate gpu mem */
++		/* need to do gmc hw init early so we can allocate gpu mem */
++		if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) {
+ 			/* Try to reserve bad pages early */
+ 			if (amdgpu_sriov_vf(adev))
+ 				amdgpu_virt_exchange_data(adev);
+@@ -2770,8 +2762,8 @@ static int amdgpu_device_ip_reinit_early_sriov(struct amdgpu_device *adev)
+ 	int i, r;
+ 
+ 	static enum amd_ip_block_type ip_order[] = {
+-		AMD_IP_BLOCK_TYPE_COMMON,
+ 		AMD_IP_BLOCK_TYPE_GMC,
++		AMD_IP_BLOCK_TYPE_COMMON,
+ 		AMD_IP_BLOCK_TYPE_PSP,
+ 		AMD_IP_BLOCK_TYPE_IH,
+ 	};
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+index 947f50e402ba0..7cc7af2a6822e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+@@ -35,7 +35,6 @@
+ #include <linux/pci.h>
+ #include <linux/pm_runtime.h>
+ #include <drm/drm_crtc_helper.h>
+-#include <drm/drm_damage_helper.h>
+ #include <drm/drm_edid.h>
+ #include <drm/drm_gem_framebuffer_helper.h>
+ #include <drm/drm_fb_helper.h>
+@@ -499,7 +498,6 @@ bool amdgpu_display_ddc_probe(struct amdgpu_connector *amdgpu_connector,
+ static const struct drm_framebuffer_funcs amdgpu_fb_funcs = {
+ 	.destroy = drm_gem_fb_destroy,
+ 	.create_handle = drm_gem_fb_create_handle,
+-	.dirty = drm_atomic_helper_dirtyfb,
+ };
+ 
+ uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index a1a8e026b9fa6..1f2e2460e121e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -1475,11 +1475,6 @@ static int sdma_v4_0_start(struct amdgpu_device *adev)
+ 		WREG32_SDMA(i, mmSDMA0_CNTL, temp);
+ 
+ 		if (!amdgpu_sriov_vf(adev)) {
+-			ring = &adev->sdma.instance[i].ring;
+-			adev->nbio.funcs->sdma_doorbell_range(adev, i,
+-				ring->use_doorbell, ring->doorbell_index,
+-				adev->doorbell_index.sdma_doorbell_range);
+-
+ 			/* unhalt engine */
+ 			temp = RREG32_SDMA(i, mmSDMA0_F32_CNTL);
+ 			temp = REG_SET_FIELD(temp, SDMA0_F32_CNTL, HALT, 0);
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index abd649285a22d..7212b9900e0ab 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -1332,6 +1332,25 @@ static int soc15_common_sw_fini(void *handle)
+ 	return 0;
+ }
+ 
++static void soc15_doorbell_range_init(struct amdgpu_device *adev)
++{
++	int i;
++	struct amdgpu_ring *ring;
++
++	/* sdma/ih doorbell range are programed by hypervisor */
++	if (!amdgpu_sriov_vf(adev)) {
++		for (i = 0; i < adev->sdma.num_instances; i++) {
++			ring = &adev->sdma.instance[i].ring;
++			adev->nbio.funcs->sdma_doorbell_range(adev, i,
++				ring->use_doorbell, ring->doorbell_index,
++				adev->doorbell_index.sdma_doorbell_range);
++		}
++
++		adev->nbio.funcs->ih_doorbell_range(adev, adev->irq.ih.use_doorbell,
++						adev->irq.ih.doorbell_index);
++	}
++}
++
+ static int soc15_common_hw_init(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+@@ -1351,6 +1370,12 @@ static int soc15_common_hw_init(void *handle)
+ 
+ 	/* enable the doorbell aperture */
+ 	soc15_enable_doorbell_aperture(adev, true);
++	/* HW doorbell routing policy: doorbell writing not
++	 * in SDMA/IH/MM/ACV range will be routed to CP. So
++	 * we need to init SDMA/IH/MM/ACV doorbell range prior
++	 * to CP ip block init and ring test.
++	 */
++	soc15_doorbell_range_init(adev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/calcs/bw_fixed.c b/drivers/gpu/drm/amd/display/dc/calcs/bw_fixed.c
+index 6ca288fb5fb9e..2d46bc527b218 100644
+--- a/drivers/gpu/drm/amd/display/dc/calcs/bw_fixed.c
++++ b/drivers/gpu/drm/amd/display/dc/calcs/bw_fixed.c
+@@ -26,12 +26,12 @@
+ #include "bw_fixed.h"
+ 
+ 
+-#define MIN_I64 \
+-	(int64_t)(-(1LL << 63))
+-
+ #define MAX_I64 \
+ 	(int64_t)((1ULL << 63) - 1)
+ 
++#define MIN_I64 \
++	(-MAX_I64 - 1)
++
+ #define FRACTIONAL_PART_MASK \
+ 	((1ULL << BW_FIXED_BITS_PER_FRACTIONAL_PART) - 1)
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 93f5229c303e7..99887bcfada04 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2202,11 +2202,8 @@ static void copy_stream_update_to_stream(struct dc *dc,
+ 	if (update->abm_level)
+ 		stream->abm_level = *update->abm_level;
+ 
+-	if (update->periodic_interrupt0)
+-		stream->periodic_interrupt0 = *update->periodic_interrupt0;
+-
+-	if (update->periodic_interrupt1)
+-		stream->periodic_interrupt1 = *update->periodic_interrupt1;
++	if (update->periodic_interrupt)
++		stream->periodic_interrupt = *update->periodic_interrupt;
+ 
+ 	if (update->gamut_remap)
+ 		stream->gamut_remap_matrix = *update->gamut_remap;
+@@ -2288,13 +2285,8 @@ static void commit_planes_do_stream_update(struct dc *dc,
+ 
+ 		if (!pipe_ctx->top_pipe &&  !pipe_ctx->prev_odm_pipe && pipe_ctx->stream == stream) {
+ 
+-			if (stream_update->periodic_interrupt0 &&
+-					dc->hwss.setup_periodic_interrupt)
+-				dc->hwss.setup_periodic_interrupt(dc, pipe_ctx, VLINE0);
+-
+-			if (stream_update->periodic_interrupt1 &&
+-					dc->hwss.setup_periodic_interrupt)
+-				dc->hwss.setup_periodic_interrupt(dc, pipe_ctx, VLINE1);
++			if (stream_update->periodic_interrupt && dc->hwss.setup_periodic_interrupt)
++				dc->hwss.setup_periodic_interrupt(dc, pipe_ctx);
+ 
+ 			if ((stream_update->hdr_static_metadata && !stream->use_dynamic_meta) ||
+ 					stream_update->vrr_infopacket ||
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+index 205bedd1b1966..0487c1b8957c0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+@@ -179,8 +179,7 @@ struct dc_stream_state {
+ 	/* DMCU info */
+ 	unsigned int abm_level;
+ 
+-	struct periodic_interrupt_config periodic_interrupt0;
+-	struct periodic_interrupt_config periodic_interrupt1;
++	struct periodic_interrupt_config periodic_interrupt;
+ 
+ 	/* from core_stream struct */
+ 	struct dc_context *ctx;
+@@ -244,8 +243,7 @@ struct dc_stream_update {
+ 	struct dc_info_packet *hdr_static_metadata;
+ 	unsigned int *abm_level;
+ 
+-	struct periodic_interrupt_config *periodic_interrupt0;
+-	struct periodic_interrupt_config *periodic_interrupt1;
++	struct periodic_interrupt_config *periodic_interrupt;
+ 
+ 	struct dc_info_packet *vrr_infopacket;
+ 	struct dc_info_packet *vsc_infopacket;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 31a13daf4289c..71a85c5306ed0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -3611,7 +3611,7 @@ void dcn10_calc_vupdate_position(
+ {
+ 	const struct dc_crtc_timing *dc_crtc_timing = &pipe_ctx->stream->timing;
+ 	int vline_int_offset_from_vupdate =
+-			pipe_ctx->stream->periodic_interrupt0.lines_offset;
++			pipe_ctx->stream->periodic_interrupt.lines_offset;
+ 	int vupdate_offset_from_vsync = dc->hwss.get_vupdate_offset_from_vsync(pipe_ctx);
+ 	int start_position;
+ 
+@@ -3636,18 +3636,10 @@ void dcn10_calc_vupdate_position(
+ static void dcn10_cal_vline_position(
+ 		struct dc *dc,
+ 		struct pipe_ctx *pipe_ctx,
+-		enum vline_select vline,
+ 		uint32_t *start_line,
+ 		uint32_t *end_line)
+ {
+-	enum vertical_interrupt_ref_point ref_point = INVALID_POINT;
+-
+-	if (vline == VLINE0)
+-		ref_point = pipe_ctx->stream->periodic_interrupt0.ref_point;
+-	else if (vline == VLINE1)
+-		ref_point = pipe_ctx->stream->periodic_interrupt1.ref_point;
+-
+-	switch (ref_point) {
++	switch (pipe_ctx->stream->periodic_interrupt.ref_point) {
+ 	case START_V_UPDATE:
+ 		dcn10_calc_vupdate_position(
+ 				dc,
+@@ -3656,7 +3648,9 @@ static void dcn10_cal_vline_position(
+ 				end_line);
+ 		break;
+ 	case START_V_SYNC:
+-		// Suppose to do nothing because vsync is 0;
++		// vsync is line 0 so start_line is just the requested line offset
++		*start_line = pipe_ctx->stream->periodic_interrupt.lines_offset;
++		*end_line = *start_line + 2;
+ 		break;
+ 	default:
+ 		ASSERT(0);
+@@ -3666,24 +3660,15 @@ static void dcn10_cal_vline_position(
+ 
+ void dcn10_setup_periodic_interrupt(
+ 		struct dc *dc,
+-		struct pipe_ctx *pipe_ctx,
+-		enum vline_select vline)
++		struct pipe_ctx *pipe_ctx)
+ {
+ 	struct timing_generator *tg = pipe_ctx->stream_res.tg;
++	uint32_t start_line = 0;
++	uint32_t end_line = 0;
+ 
+-	if (vline == VLINE0) {
+-		uint32_t start_line = 0;
+-		uint32_t end_line = 0;
++	dcn10_cal_vline_position(dc, pipe_ctx, &start_line, &end_line);
+ 
+-		dcn10_cal_vline_position(dc, pipe_ctx, vline, &start_line, &end_line);
+-
+-		tg->funcs->setup_vertical_interrupt0(tg, start_line, end_line);
+-
+-	} else if (vline == VLINE1) {
+-		pipe_ctx->stream_res.tg->funcs->setup_vertical_interrupt1(
+-				tg,
+-				pipe_ctx->stream->periodic_interrupt1.lines_offset);
+-	}
++	tg->funcs->setup_vertical_interrupt0(tg, start_line, end_line);
+ }
+ 
+ void dcn10_setup_vupdate_interrupt(struct dc *dc, struct pipe_ctx *pipe_ctx)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
+index e5691e4990231..81b5057d5ff12 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
+@@ -174,8 +174,7 @@ void dcn10_set_cursor_attribute(struct pipe_ctx *pipe_ctx);
+ void dcn10_set_cursor_sdr_white_level(struct pipe_ctx *pipe_ctx);
+ void dcn10_setup_periodic_interrupt(
+ 		struct dc *dc,
+-		struct pipe_ctx *pipe_ctx,
+-		enum vline_select vline);
++		struct pipe_ctx *pipe_ctx);
+ enum dc_status dcn10_set_clock(struct dc *dc,
+ 		enum dc_clock_type clock_type,
+ 		uint32_t clk_khz,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
+index 64c1be818b0e8..3165a66c5362f 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
+@@ -32,11 +32,6 @@
+ #include "inc/hw/link_encoder.h"
+ #include "core_status.h"
+ 
+-enum vline_select {
+-	VLINE0,
+-	VLINE1
+-};
+-
+ struct pipe_ctx;
+ struct dc_state;
+ struct dc_stream_status;
+@@ -112,8 +107,7 @@ struct hw_sequencer_funcs {
+ 			int group_index, int group_size,
+ 			struct pipe_ctx *grouped_pipes[]);
+ 	void (*setup_periodic_interrupt)(struct dc *dc,
+-			struct pipe_ctx *pipe_ctx,
+-			enum vline_select vline);
++			struct pipe_ctx *pipe_ctx);
+ 	void (*set_drr)(struct pipe_ctx **pipe_ctx, int num_pipes,
+ 			unsigned int vmin, unsigned int vmax,
+ 			unsigned int vmid, unsigned int vmid_frame_number);
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511.h b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+index a0f6ee15c2485..711061bf3eb7e 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511.h
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+@@ -386,10 +386,7 @@ void adv7511_cec_irq_process(struct adv7511 *adv7511, unsigned int irq1);
+ #else
+ static inline int adv7511_cec_init(struct device *dev, struct adv7511 *adv7511)
+ {
+-	unsigned int offset = adv7511->type == ADV7533 ?
+-						ADV7533_REG_CEC_OFFSET : 0;
+-
+-	regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL + offset,
++	regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL,
+ 		     ADV7511_CEC_CTRL_POWER_DOWN);
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c b/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c
+index a20a45c0b353f..ddd1305b82b2c 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c
+@@ -316,7 +316,7 @@ int adv7511_cec_init(struct device *dev, struct adv7511 *adv7511)
+ 		goto err_cec_alloc;
+ 	}
+ 
+-	regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL + offset, 0);
++	regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL, 0);
+ 	/* cec soft reset */
+ 	regmap_write(adv7511->regmap_cec,
+ 		     ADV7511_REG_CEC_SOFT_RESET + offset, 0x01);
+@@ -343,7 +343,7 @@ err_cec_alloc:
+ 	dev_info(dev, "Initializing CEC failed with error %d, disabling CEC\n",
+ 		 ret);
+ err_cec_parse_dt:
+-	regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL + offset,
++	regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL,
+ 		     ADV7511_CEC_CTRL_POWER_DOWN);
+ 	return ret == -EPROBE_DEFER ? ret : 0;
+ }
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611.c b/drivers/gpu/drm/bridge/lontium-lt9611.c
+index 29b1ce2140abc..1dcc28a4d8537 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611.c
+@@ -816,13 +816,14 @@ static int lt9611_connector_init(struct drm_bridge *bridge, struct lt9611 *lt961
+ 
+ 	drm_connector_helper_add(&lt9611->connector,
+ 				 &lt9611_bridge_connector_helper_funcs);
+-	drm_connector_attach_encoder(&lt9611->connector, bridge->encoder);
+ 
+ 	if (!bridge->encoder) {
+ 		DRM_ERROR("Parent encoder object not found");
+ 		return -ENODEV;
+ 	}
+ 
++	drm_connector_attach_encoder(&lt9611->connector, bridge->encoder);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+index cce98bf2a4e73..72248a565579e 100644
+--- a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
++++ b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+@@ -296,7 +296,9 @@ static void ge_b850v3_lvds_remove(void)
+ 	 * This check is to avoid both the drivers
+ 	 * removing the bridge in their remove() function
+ 	 */
+-	if (!ge_b850v3_lvds_ptr)
++	if (!ge_b850v3_lvds_ptr ||
++	    !ge_b850v3_lvds_ptr->stdp2690_i2c ||
++		!ge_b850v3_lvds_ptr->stdp4028_i2c)
+ 		goto out;
+ 
+ 	drm_bridge_remove(&ge_b850v3_lvds_ptr->bridge);
+diff --git a/drivers/gpu/drm/bridge/parade-ps8640.c b/drivers/gpu/drm/bridge/parade-ps8640.c
+index 7bd0affa057a5..9248510104005 100644
+--- a/drivers/gpu/drm/bridge/parade-ps8640.c
++++ b/drivers/gpu/drm/bridge/parade-ps8640.c
+@@ -333,8 +333,8 @@ static int ps8640_probe(struct i2c_client *client)
+ 	if (IS_ERR(ps_bridge->panel_bridge))
+ 		return PTR_ERR(ps_bridge->panel_bridge);
+ 
+-	ps_bridge->supplies[0].supply = "vdd33";
+-	ps_bridge->supplies[1].supply = "vdd12";
++	ps_bridge->supplies[0].supply = "vdd12";
++	ps_bridge->supplies[1].supply = "vdd33";
+ 	ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(ps_bridge->supplies),
+ 				      ps_bridge->supplies);
+ 	if (ret)
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+index b10228b9e3a93..356c7d0bd035f 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+@@ -2984,6 +2984,7 @@ static irqreturn_t dw_hdmi_irq(int irq, void *dev_id)
+ {
+ 	struct dw_hdmi *hdmi = dev_id;
+ 	u8 intr_stat, phy_int_pol, phy_pol_mask, phy_stat;
++	enum drm_connector_status status = connector_status_unknown;
+ 
+ 	intr_stat = hdmi_readb(hdmi, HDMI_IH_PHY_STAT0);
+ 	phy_int_pol = hdmi_readb(hdmi, HDMI_PHY_POL0);
+@@ -3022,13 +3023,15 @@ static irqreturn_t dw_hdmi_irq(int irq, void *dev_id)
+ 			cec_notifier_phys_addr_invalidate(hdmi->cec_notifier);
+ 			mutex_unlock(&hdmi->cec_notifier_mutex);
+ 		}
+-	}
+ 
+-	if (intr_stat & HDMI_IH_PHY_STAT0_HPD) {
+-		enum drm_connector_status status = phy_int_pol & HDMI_PHY_HPD
+-						 ? connector_status_connected
+-						 : connector_status_disconnected;
++		if (phy_stat & HDMI_PHY_HPD)
++			status = connector_status_connected;
++
++		if (!(phy_stat & (HDMI_PHY_HPD | HDMI_PHY_RX_SENSE)))
++			status = connector_status_disconnected;
++	}
+ 
++	if (status != connector_status_unknown) {
+ 		dev_dbg(hdmi->dev, "EVENT=%s\n",
+ 			status == connector_status_connected ?
+ 			"plugin" : "plugout");
+diff --git a/drivers/gpu/drm/drm_bridge.c b/drivers/gpu/drm/drm_bridge.c
+index 044acd07c1538..d799ec14fd7f5 100644
+--- a/drivers/gpu/drm/drm_bridge.c
++++ b/drivers/gpu/drm/drm_bridge.c
+@@ -753,8 +753,8 @@ static int select_bus_fmt_recursive(struct drm_bridge *first_bridge,
+ 				    struct drm_connector_state *conn_state,
+ 				    u32 out_bus_fmt)
+ {
++	unsigned int i, num_in_bus_fmts = 0;
+ 	struct drm_bridge_state *cur_state;
+-	unsigned int num_in_bus_fmts, i;
+ 	struct drm_bridge *prev_bridge;
+ 	u32 *in_bus_fmts;
+ 	int ret;
+@@ -875,7 +875,7 @@ drm_atomic_bridge_chain_select_bus_fmts(struct drm_bridge *bridge,
+ 	struct drm_connector *conn = conn_state->connector;
+ 	struct drm_encoder *encoder = bridge->encoder;
+ 	struct drm_bridge_state *last_bridge_state;
+-	unsigned int i, num_out_bus_fmts;
++	unsigned int i, num_out_bus_fmts = 0;
+ 	struct drm_bridge *last_bridge;
+ 	u32 *out_bus_fmts;
+ 	int ret = 0;
+diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
+index 3c55753bab161..6ba16db775003 100644
+--- a/drivers/gpu/drm/drm_dp_helper.c
++++ b/drivers/gpu/drm/drm_dp_helper.c
+@@ -2172,17 +2172,8 @@ int drm_dp_set_phy_test_pattern(struct drm_dp_aux *aux,
+ 				struct drm_dp_phy_test_params *data, u8 dp_rev)
+ {
+ 	int err, i;
+-	u8 link_config[2];
+ 	u8 test_pattern;
+ 
+-	link_config[0] = drm_dp_link_rate_to_bw_code(data->link_rate);
+-	link_config[1] = data->num_lanes;
+-	if (data->enhanced_frame_cap)
+-		link_config[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;
+-	err = drm_dp_dpcd_write(aux, DP_LINK_BW_SET, link_config, 2);
+-	if (err < 0)
+-		return err;
+-
+ 	test_pattern = data->phy_pattern;
+ 	if (dp_rev < 0x12) {
+ 		test_pattern = (test_pattern << 2) &
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index ab423b0413ee5..4272cd3622f8b 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -4856,14 +4856,14 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
+ 		seq_printf(m, "dpcd: %*ph\n", DP_RECEIVER_CAP_SIZE, buf);
+ 
+ 		ret = drm_dp_dpcd_read(mgr->aux, DP_FAUX_CAP, buf, 2);
+-		if (ret) {
++		if (ret != 2) {
+ 			seq_printf(m, "faux/mst read failed\n");
+ 			goto out;
+ 		}
+ 		seq_printf(m, "faux/mst: %*ph\n", 2, buf);
+ 
+ 		ret = drm_dp_dpcd_read(mgr->aux, DP_MSTM_CTRL, buf, 1);
+-		if (ret) {
++		if (ret != 1) {
+ 			seq_printf(m, "mst ctrl read failed\n");
+ 			goto out;
+ 		}
+@@ -4871,7 +4871,7 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
+ 
+ 		/* dump the standard OUI branch header */
+ 		ret = drm_dp_dpcd_read(mgr->aux, DP_BRANCH_OUI, buf, DP_BRANCH_OUI_HEADER_SIZE);
+-		if (ret) {
++		if (ret != DP_BRANCH_OUI_HEADER_SIZE) {
+ 			seq_printf(m, "branch oui read failed\n");
+ 			goto out;
+ 		}
+diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
+index 4606cc938b36d..c160a45a4274f 100644
+--- a/drivers/gpu/drm/drm_ioctl.c
++++ b/drivers/gpu/drm/drm_ioctl.c
+@@ -473,7 +473,13 @@ EXPORT_SYMBOL(drm_invalid_op);
+  */
+ static int drm_copy_field(char __user *buf, size_t *buf_len, const char *value)
+ {
+-	int len;
++	size_t len;
++
++	/* don't attempt to copy a NULL pointer */
++	if (WARN_ONCE(!value, "BUG: the value to copy was not set!")) {
++		*buf_len = 0;
++		return 0;
++	}
+ 
+ 	/* don't overflow userbuf */
+ 	len = strlen(value);
+diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
+index 5dd475e829950..2c43d54766f34 100644
+--- a/drivers/gpu/drm/drm_mipi_dsi.c
++++ b/drivers/gpu/drm/drm_mipi_dsi.c
+@@ -300,6 +300,7 @@ static int mipi_dsi_remove_device_fn(struct device *dev, void *priv)
+ {
+ 	struct mipi_dsi_device *dsi = to_mipi_dsi_device(dev);
+ 
++	mipi_dsi_detach(dsi);
+ 	mipi_dsi_device_unregister(dsi);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index f5ab891731d0b..083273736c837 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -128,6 +128,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "One S1003"),
+ 		},
+ 		.driver_data = (void *)&lcd800x1280_rightside_up,
++	}, {	/* Anbernic Win600 */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Anbernic"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Win600"),
++		},
++		.driver_data = (void *)&lcd720x1280_rightside_up,
+ 	}, {	/* Asus T100HA */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index 2f2dc029668bc..1b5e8d3e45a98 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -5145,10 +5145,14 @@ skl_compute_wm_params(const struct intel_crtc_state *crtc_state,
+ 	wp->y_tiled = modifier == I915_FORMAT_MOD_Y_TILED ||
+ 		      modifier == I915_FORMAT_MOD_Yf_TILED ||
+ 		      modifier == I915_FORMAT_MOD_Y_TILED_CCS ||
+-		      modifier == I915_FORMAT_MOD_Yf_TILED_CCS;
++		      modifier == I915_FORMAT_MOD_Yf_TILED_CCS ||
++		      modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS ||
++		      modifier == I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS;
+ 	wp->x_tiled = modifier == I915_FORMAT_MOD_X_TILED;
+ 	wp->rc_surface = modifier == I915_FORMAT_MOD_Y_TILED_CCS ||
+-			 modifier == I915_FORMAT_MOD_Yf_TILED_CCS;
++			 modifier == I915_FORMAT_MOD_Yf_TILED_CCS ||
++			 modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS ||
++			 modifier == I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS;
+ 	wp->is_planar = intel_format_info_is_yuv_semiplanar(format, modifier);
+ 
+ 	wp->width = width;
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index 2d022f3fb437e..b0bfe85f5f6a8 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -528,6 +528,13 @@ static int meson_drv_probe(struct platform_device *pdev)
+ 	return 0;
+ };
+ 
++static int meson_drv_remove(struct platform_device *pdev)
++{
++	component_master_del(&pdev->dev, &meson_drv_master_ops);
++
++	return 0;
++}
++
+ static struct meson_drm_match_data meson_drm_gxbb_data = {
+ 	.compat = VPU_COMPATIBLE_GXBB,
+ };
+@@ -565,6 +572,7 @@ static const struct dev_pm_ops meson_drv_pm_ops = {
+ 
+ static struct platform_driver meson_drm_platform_driver = {
+ 	.probe      = meson_drv_probe,
++	.remove     = meson_drv_remove,
+ 	.shutdown   = meson_drv_shutdown,
+ 	.driver     = {
+ 		.name	= "meson-drm",
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index 7503f093f3b64..b7841f7fc10a8 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -675,12 +675,10 @@ static void _dpu_kms_hw_destroy(struct dpu_kms *dpu_kms)
+ 	_dpu_kms_mmu_destroy(dpu_kms);
+ 
+ 	if (dpu_kms->catalog) {
+-		for (i = 0; i < dpu_kms->catalog->vbif_count; i++) {
+-			u32 vbif_idx = dpu_kms->catalog->vbif[i].id;
+-
+-			if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx]) {
+-				dpu_hw_vbif_destroy(dpu_kms->hw_vbif[vbif_idx]);
+-				dpu_kms->hw_vbif[vbif_idx] = NULL;
++		for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) {
++			if (dpu_kms->hw_vbif[i]) {
++				dpu_hw_vbif_destroy(dpu_kms->hw_vbif[i]);
++				dpu_kms->hw_vbif[i] = NULL;
+ 			}
+ 		}
+ 	}
+@@ -987,7 +985,7 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
+ 	for (i = 0; i < dpu_kms->catalog->vbif_count; i++) {
+ 		u32 vbif_idx = dpu_kms->catalog->vbif[i].id;
+ 
+-		dpu_kms->hw_vbif[i] = dpu_hw_vbif_init(vbif_idx,
++		dpu_kms->hw_vbif[vbif_idx] = dpu_hw_vbif_init(vbif_idx,
+ 				dpu_kms->vbif[vbif_idx], dpu_kms->catalog);
+ 		if (IS_ERR_OR_NULL(dpu_kms->hw_vbif[vbif_idx])) {
+ 			rc = PTR_ERR(dpu_kms->hw_vbif[vbif_idx]);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
+index 5e8c3f3e66256..fc86d34aec805 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
+@@ -11,6 +11,14 @@
+ #include "dpu_hw_vbif.h"
+ #include "dpu_trace.h"
+ 
++static struct dpu_hw_vbif *dpu_get_vbif(struct dpu_kms *dpu_kms, enum dpu_vbif vbif_idx)
++{
++	if (vbif_idx < ARRAY_SIZE(dpu_kms->hw_vbif))
++		return dpu_kms->hw_vbif[vbif_idx];
++
++	return NULL;
++}
++
+ /**
+  * _dpu_vbif_wait_for_xin_halt - wait for the xin to halt
+  * @vbif:	Pointer to hardware vbif driver
+@@ -148,20 +156,15 @@ exit:
+ void dpu_vbif_set_ot_limit(struct dpu_kms *dpu_kms,
+ 		struct dpu_vbif_set_ot_params *params)
+ {
+-	struct dpu_hw_vbif *vbif = NULL;
++	struct dpu_hw_vbif *vbif;
+ 	struct dpu_hw_mdp *mdp;
+ 	bool forced_on = false;
+ 	u32 ot_lim;
+-	int ret, i;
++	int ret;
+ 
+ 	mdp = dpu_kms->hw_mdp;
+ 
+-	for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) {
+-		if (dpu_kms->hw_vbif[i] &&
+-				dpu_kms->hw_vbif[i]->idx == params->vbif_idx)
+-			vbif = dpu_kms->hw_vbif[i];
+-	}
+-
++	vbif = dpu_get_vbif(dpu_kms, params->vbif_idx);
+ 	if (!vbif || !mdp) {
+ 		DPU_DEBUG("invalid arguments vbif %d mdp %d\n",
+ 				vbif != NULL, mdp != NULL);
+@@ -204,7 +207,7 @@ void dpu_vbif_set_ot_limit(struct dpu_kms *dpu_kms,
+ void dpu_vbif_set_qos_remap(struct dpu_kms *dpu_kms,
+ 		struct dpu_vbif_set_qos_params *params)
+ {
+-	struct dpu_hw_vbif *vbif = NULL;
++	struct dpu_hw_vbif *vbif;
+ 	struct dpu_hw_mdp *mdp;
+ 	bool forced_on = false;
+ 	const struct dpu_vbif_qos_tbl *qos_tbl;
+@@ -216,13 +219,7 @@ void dpu_vbif_set_qos_remap(struct dpu_kms *dpu_kms,
+ 	}
+ 	mdp = dpu_kms->hw_mdp;
+ 
+-	for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) {
+-		if (dpu_kms->hw_vbif[i] &&
+-				dpu_kms->hw_vbif[i]->idx == params->vbif_idx) {
+-			vbif = dpu_kms->hw_vbif[i];
+-			break;
+-		}
+-	}
++	vbif = dpu_get_vbif(dpu_kms, params->vbif_idx);
+ 
+ 	if (!vbif || !vbif->cap) {
+ 		DPU_ERROR("invalid vbif %d\n", params->vbif_idx);
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c
+index 2da6982efdbfc..613348b022fe8 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
+@@ -416,7 +416,7 @@ void dp_catalog_ctrl_config_msa(struct dp_catalog *dp_catalog,
+ 
+ 	if (rate == link_rate_hbr3)
+ 		pixel_div = 6;
+-	else if (rate == 1620000 || rate == 270000)
++	else if (rate == 162000 || rate == 270000)
+ 		pixel_div = 2;
+ 	else if (rate == link_rate_hbr2)
+ 		pixel_div = 4;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
+index b4946b595d86e..b57dcad8865fa 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
+@@ -279,8 +279,10 @@ nouveau_bo_alloc(struct nouveau_cli *cli, u64 *size, int *align, u32 domain,
+ 			break;
+ 	}
+ 
+-	if (WARN_ON(pi < 0))
++	if (WARN_ON(pi < 0)) {
++		kfree(nvbo);
+ 		return ERR_PTR(-EINVAL);
++	}
+ 
+ 	/* Disable compression if suitable settings couldn't be found. */
+ 	if (nvbo->comp && !vmm->page[pi].comp) {
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index 4c992fd5bd68a..9542fc63e7968 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -500,7 +500,8 @@ nouveau_connector_set_encoder(struct drm_connector *connector,
+ 			connector->interlace_allowed =
+ 				nv_encoder->caps.dp_interlace;
+ 		else
+-			connector->interlace_allowed = true;
++			connector->interlace_allowed =
++				drm->client.device.info.family < NV_DEVICE_INFO_V0_VOLTA;
+ 		connector->doublescan_allowed = true;
+ 	} else
+ 	if (nv_encoder->dcb->type == DCB_OUTPUT_LVDS ||
+diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
+index 5f5b87f995468..f08bda533bd94 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
++++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
+@@ -89,7 +89,6 @@ struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
+ 	ret = nouveau_bo_init(nvbo, size, align, NOUVEAU_GEM_DOMAIN_GART,
+ 			      sg, robj);
+ 	if (ret) {
+-		nouveau_bo_ref(NULL, &nvbo);
+ 		obj = ERR_PTR(ret);
+ 		goto unlock;
+ 	}
+diff --git a/drivers/gpu/drm/omapdrm/dss/dss.c b/drivers/gpu/drm/omapdrm/dss/dss.c
+index 6ccbc29c4ce4b..d5b3123ed081e 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dss.c
++++ b/drivers/gpu/drm/omapdrm/dss/dss.c
+@@ -1173,6 +1173,7 @@ static void __dss_uninit_ports(struct dss_device *dss, unsigned int num_ports)
+ 		default:
+ 			break;
+ 		}
++		of_node_put(port);
+ 	}
+ }
+ 
+@@ -1205,11 +1206,13 @@ static int dss_init_ports(struct dss_device *dss)
+ 		default:
+ 			break;
+ 		}
++		of_node_put(port);
+ 	}
+ 
+ 	return 0;
+ 
+ error:
++	of_node_put(port);
+ 	__dss_uninit_ports(dss, i);
+ 	return r;
+ }
+diff --git a/drivers/gpu/drm/pl111/pl111_versatile.c b/drivers/gpu/drm/pl111/pl111_versatile.c
+index bdd883f4f0da5..963a5d5e6987a 100644
+--- a/drivers/gpu/drm/pl111/pl111_versatile.c
++++ b/drivers/gpu/drm/pl111/pl111_versatile.c
+@@ -402,6 +402,7 @@ static int pl111_vexpress_clcd_init(struct device *dev, struct device_node *np,
+ 		if (of_device_is_compatible(child, "arm,pl111")) {
+ 			has_coretile_clcd = true;
+ 			ct_clcd = child;
++			of_node_put(child);
+ 			break;
+ 		}
+ 		if (of_device_is_compatible(child, "arm,hdlcd")) {
+diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
+index edcfd8c120c44..209b5ceba3e00 100644
+--- a/drivers/gpu/drm/udl/udl_modeset.c
++++ b/drivers/gpu/drm/udl/udl_modeset.c
+@@ -400,9 +400,6 @@ udl_simple_display_pipe_enable(struct drm_simple_display_pipe *pipe,
+ 
+ 	udl_handle_damage(fb, 0, 0, fb->width, fb->height);
+ 
+-	if (!crtc_state->mode_changed)
+-		return;
+-
+ 	/* enable display */
+ 	udl_crtc_write_mode_to_hw(crtc);
+ }
+diff --git a/drivers/gpu/drm/vc4/vc4_vec.c b/drivers/gpu/drm/vc4/vc4_vec.c
+index bd5b8eb58b180..c6bd168a58983 100644
+--- a/drivers/gpu/drm/vc4/vc4_vec.c
++++ b/drivers/gpu/drm/vc4/vc4_vec.c
+@@ -257,7 +257,7 @@ static void vc4_vec_ntsc_j_mode_set(struct vc4_vec *vec)
+ static const struct drm_display_mode ntsc_mode = {
+ 	DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 13500,
+ 		 720, 720 + 14, 720 + 14 + 64, 720 + 14 + 64 + 60, 0,
+-		 480, 480 + 3, 480 + 3 + 3, 480 + 3 + 3 + 16, 0,
++		 480, 480 + 7, 480 + 7 + 6, 525, 0,
+ 		 DRM_MODE_FLAG_INTERLACE)
+ };
+ 
+@@ -279,7 +279,7 @@ static void vc4_vec_pal_m_mode_set(struct vc4_vec *vec)
+ static const struct drm_display_mode pal_mode = {
+ 	DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 13500,
+ 		 720, 720 + 20, 720 + 20 + 64, 720 + 20 + 64 + 60, 0,
+-		 576, 576 + 2, 576 + 2 + 3, 576 + 2 + 3 + 20, 0,
++		 576, 576 + 4, 576 + 4 + 6, 625, 0,
+ 		 DRM_MODE_FLAG_INTERLACE)
+ };
+ 
+diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
+index 5e40fa0f5e8f2..e98a29d243c08 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
++++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
+@@ -601,7 +601,7 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
+ 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
+ 	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
+ 
+-	if (use_dma_api)
++	if (virtio_gpu_is_shmem(bo) && use_dma_api)
+ 		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
+ 					    shmem->pages, DMA_TO_DEVICE);
+ 
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index d686917cc3b1f..a78ce16d4782d 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1155,7 +1155,7 @@ static void mt_touch_report(struct hid_device *hid,
+ 	int contact_count = -1;
+ 
+ 	/* sticky fingers release in progress, abort */
+-	if (test_and_set_bit(MT_IO_FLAGS_RUNNING, &td->mt_io_flags))
++	if (test_and_set_bit_lock(MT_IO_FLAGS_RUNNING, &td->mt_io_flags))
+ 		return;
+ 
+ 	scantime = *app->scantime;
+@@ -1236,7 +1236,7 @@ static void mt_touch_report(struct hid_device *hid,
+ 			del_timer(&td->release_timer);
+ 	}
+ 
+-	clear_bit(MT_IO_FLAGS_RUNNING, &td->mt_io_flags);
++	clear_bit_unlock(MT_IO_FLAGS_RUNNING, &td->mt_io_flags);
+ }
+ 
+ static int mt_touch_input_configured(struct hid_device *hdev,
+@@ -1671,11 +1671,11 @@ static void mt_expired_timeout(struct timer_list *t)
+ 	 * An input report came in just before we release the sticky fingers,
+ 	 * it will take care of the sticky fingers.
+ 	 */
+-	if (test_and_set_bit(MT_IO_FLAGS_RUNNING, &td->mt_io_flags))
++	if (test_and_set_bit_lock(MT_IO_FLAGS_RUNNING, &td->mt_io_flags))
+ 		return;
+ 	if (test_bit(MT_IO_FLAGS_PENDING_SLOTS, &td->mt_io_flags))
+ 		mt_release_contacts(hdev);
+-	clear_bit(MT_IO_FLAGS_RUNNING, &td->mt_io_flags);
++	clear_bit_unlock(MT_IO_FLAGS_RUNNING, &td->mt_io_flags);
+ }
+ 
+ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
+diff --git a/drivers/hid/hid-roccat.c b/drivers/hid/hid-roccat.c
+index 26373b82fe812..6da80e442fdd1 100644
+--- a/drivers/hid/hid-roccat.c
++++ b/drivers/hid/hid-roccat.c
+@@ -257,6 +257,8 @@ int roccat_report_event(int minor, u8 const *data)
+ 	if (!new_value)
+ 		return -ENOMEM;
+ 
++	mutex_lock(&device->cbuf_lock);
++
+ 	report = &device->cbuf[device->cbuf_end];
+ 
+ 	/* passing NULL is safe */
+@@ -276,6 +278,8 @@ int roccat_report_event(int minor, u8 const *data)
+ 			reader->cbuf_start = (reader->cbuf_start + 1) % ROCCAT_CBUF_SIZE;
+ 	}
+ 
++	mutex_unlock(&device->cbuf_lock);
++
+ 	wake_up_interruptible(&device->wait);
+ 	return 0;
+ }
+diff --git a/drivers/hsi/controllers/omap_ssi_core.c b/drivers/hsi/controllers/omap_ssi_core.c
+index 44a3f5660c109..eb98201583185 100644
+--- a/drivers/hsi/controllers/omap_ssi_core.c
++++ b/drivers/hsi/controllers/omap_ssi_core.c
+@@ -524,6 +524,7 @@ static int ssi_probe(struct platform_device *pd)
+ 		if (!childpdev) {
+ 			err = -ENODEV;
+ 			dev_err(&pd->dev, "failed to create ssi controller port\n");
++			of_node_put(child);
+ 			goto out3;
+ 		}
+ 	}
+diff --git a/drivers/hsi/controllers/omap_ssi_port.c b/drivers/hsi/controllers/omap_ssi_port.c
+index a0cb5be246e1c..b9495b720f1bd 100644
+--- a/drivers/hsi/controllers/omap_ssi_port.c
++++ b/drivers/hsi/controllers/omap_ssi_port.c
+@@ -230,10 +230,10 @@ static int ssi_start_dma(struct hsi_msg *msg, int lch)
+ 	if (msg->ttype == HSI_MSG_READ) {
+ 		err = dma_map_sg(&ssi->device, msg->sgt.sgl, msg->sgt.nents,
+ 							DMA_FROM_DEVICE);
+-		if (err < 0) {
++		if (!err) {
+ 			dev_dbg(&ssi->device, "DMA map SG failed !\n");
+ 			pm_runtime_put_autosuspend(omap_port->pdev);
+-			return err;
++			return -EIO;
+ 		}
+ 		csdp = SSI_DST_BURST_4x32_BIT | SSI_DST_MEMORY_PORT |
+ 			SSI_SRC_SINGLE_ACCESS0 | SSI_SRC_PERIPHERAL_PORT |
+@@ -247,10 +247,10 @@ static int ssi_start_dma(struct hsi_msg *msg, int lch)
+ 	} else {
+ 		err = dma_map_sg(&ssi->device, msg->sgt.sgl, msg->sgt.nents,
+ 							DMA_TO_DEVICE);
+-		if (err < 0) {
++		if (!err) {
+ 			dev_dbg(&ssi->device, "DMA map SG failed !\n");
+ 			pm_runtime_put_autosuspend(omap_port->pdev);
+-			return err;
++			return -EIO;
+ 		}
+ 		csdp = SSI_SRC_BURST_4x32_BIT | SSI_SRC_MEMORY_PORT |
+ 			SSI_DST_SINGLE_ACCESS0 | SSI_DST_PERIPHERAL_PORT |
+diff --git a/drivers/hwmon/gsc-hwmon.c b/drivers/hwmon/gsc-hwmon.c
+index 1fe37418ff46c..f29ce49294daf 100644
+--- a/drivers/hwmon/gsc-hwmon.c
++++ b/drivers/hwmon/gsc-hwmon.c
+@@ -267,6 +267,7 @@ gsc_hwmon_get_devtree_pdata(struct device *dev)
+ 	pdata->nchannels = nchannels;
+ 
+ 	/* fan controller base address */
++	of_node_get(dev->parent->of_node);
+ 	fan = of_find_compatible_node(dev->parent->of_node, NULL, "gw,gsc-fan");
+ 	if (fan && of_property_read_u32(fan, "reg", &pdata->fan_base)) {
+ 		dev_err(dev, "fan node without base\n");
+diff --git a/drivers/i2c/busses/i2c-mlxbf.c b/drivers/i2c/busses/i2c-mlxbf.c
+index bea82a787b4f3..90c488a606934 100644
+--- a/drivers/i2c/busses/i2c-mlxbf.c
++++ b/drivers/i2c/busses/i2c-mlxbf.c
+@@ -312,6 +312,7 @@ static u64 mlxbf_i2c_corepll_frequency;
+  * exact.
+  */
+ #define MLXBF_I2C_SMBUS_TIMEOUT   (300 * 1000) /* 300ms */
++#define MLXBF_I2C_SMBUS_LOCK_POLL_TIMEOUT (300 * 1000) /* 300ms */
+ 
+ /* Encapsulates timing parameters. */
+ struct mlxbf_i2c_timings {
+@@ -520,6 +521,25 @@ static bool mlxbf_smbus_master_wait_for_idle(struct mlxbf_i2c_priv *priv)
+ 	return false;
+ }
+ 
++/*
++ * wait for the lock to be released before acquiring it.
++ */
++static bool mlxbf_i2c_smbus_master_lock(struct mlxbf_i2c_priv *priv)
++{
++	if (mlxbf_smbus_poll(priv->smbus->io, MLXBF_I2C_SMBUS_MASTER_GW,
++			   MLXBF_I2C_MASTER_LOCK_BIT, true,
++			   MLXBF_I2C_SMBUS_LOCK_POLL_TIMEOUT))
++		return true;
++
++	return false;
++}
++
++static void mlxbf_i2c_smbus_master_unlock(struct mlxbf_i2c_priv *priv)
++{
++	/* Clear the gw to clear the lock */
++	writel(0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_GW);
++}
++
+ static bool mlxbf_i2c_smbus_transaction_success(u32 master_status,
+ 						u32 cause_status)
+ {
+@@ -711,10 +731,19 @@ mlxbf_i2c_smbus_start_transaction(struct mlxbf_i2c_priv *priv,
+ 	slave = request->slave & GENMASK(6, 0);
+ 	addr = slave << 1;
+ 
+-	/* First of all, check whether the HW is idle. */
+-	if (WARN_ON(!mlxbf_smbus_master_wait_for_idle(priv)))
++	/*
++	 * Try to acquire the smbus gw lock before any reads of the GW register since
++	 * a read sets the lock.
++	 */
++	if (WARN_ON(!mlxbf_i2c_smbus_master_lock(priv)))
+ 		return -EBUSY;
+ 
++	/* Check whether the HW is idle */
++	if (WARN_ON(!mlxbf_smbus_master_wait_for_idle(priv))) {
++		ret = -EBUSY;
++		goto out_unlock;
++	}
++
+ 	/* Set first byte. */
+ 	data_desc[data_idx++] = addr;
+ 
+@@ -738,8 +767,10 @@ mlxbf_i2c_smbus_start_transaction(struct mlxbf_i2c_priv *priv,
+ 			write_en = 1;
+ 			write_len += operation->length;
+ 			if (data_idx + operation->length >
+-					MLXBF_I2C_MASTER_DATA_DESC_SIZE)
+-				return -ENOBUFS;
++					MLXBF_I2C_MASTER_DATA_DESC_SIZE) {
++				ret = -ENOBUFS;
++				goto out_unlock;
++			}
+ 			memcpy(data_desc + data_idx,
+ 			       operation->buffer, operation->length);
+ 			data_idx += operation->length;
+@@ -771,7 +802,7 @@ mlxbf_i2c_smbus_start_transaction(struct mlxbf_i2c_priv *priv,
+ 		ret = mlxbf_i2c_smbus_enable(priv, slave, write_len, block_en,
+ 					 pec_en, 0);
+ 		if (ret)
+-			return ret;
++			goto out_unlock;
+ 	}
+ 
+ 	if (read_en) {
+@@ -798,6 +829,9 @@ mlxbf_i2c_smbus_start_transaction(struct mlxbf_i2c_priv *priv,
+ 			priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_FSM);
+ 	}
+ 
++out_unlock:
++	mlxbf_i2c_smbus_master_unlock(priv);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/iio/adc/ad7923.c b/drivers/iio/adc/ad7923.c
+index 8c1e866f72e85..96eeda433ad66 100644
+--- a/drivers/iio/adc/ad7923.c
++++ b/drivers/iio/adc/ad7923.c
+@@ -93,6 +93,7 @@ enum ad7923_id {
+ 			.sign = 'u',					\
+ 			.realbits = (bits),				\
+ 			.storagebits = 16,				\
++			.shift = 12 - (bits),				\
+ 			.endianness = IIO_BE,				\
+ 		},							\
+ 	}
+@@ -274,7 +275,8 @@ static int ad7923_read_raw(struct iio_dev *indio_dev,
+ 			return ret;
+ 
+ 		if (chan->address == EXTRACT(ret, 12, 4))
+-			*val = EXTRACT(ret, 0, 12);
++			*val = EXTRACT(ret, chan->scan_type.shift,
++				       chan->scan_type.realbits);
+ 		else
+ 			return -EIO;
+ 
+diff --git a/drivers/iio/adc/at91-sama5d2_adc.c b/drivers/iio/adc/at91-sama5d2_adc.c
+index 4ede7e766765e..250b78ee16251 100644
+--- a/drivers/iio/adc/at91-sama5d2_adc.c
++++ b/drivers/iio/adc/at91-sama5d2_adc.c
+@@ -74,7 +74,7 @@
+ #define	AT91_SAMA5D2_MR_ANACH		BIT(23)
+ /* Tracking Time */
+ #define	AT91_SAMA5D2_MR_TRACKTIM(v)	((v) << 24)
+-#define	AT91_SAMA5D2_MR_TRACKTIM_MAX	0xff
++#define	AT91_SAMA5D2_MR_TRACKTIM_MAX	0xf
+ /* Transfer Time */
+ #define	AT91_SAMA5D2_MR_TRANSFER(v)	((v) << 28)
+ #define	AT91_SAMA5D2_MR_TRANSFER_MAX	0x3
+@@ -1353,10 +1353,12 @@ static int at91_adc_read_info_raw(struct iio_dev *indio_dev,
+ 		ret = at91_adc_read_position(st, chan->channel,
+ 					     &tmp_val);
+ 		*val = tmp_val;
++		if (ret > 0)
++			ret = at91_adc_adjust_val_osr(st, val);
+ 		mutex_unlock(&st->lock);
+ 		iio_device_release_direct_mode(indio_dev);
+ 
+-		return at91_adc_adjust_val_osr(st, val);
++		return ret;
+ 	}
+ 	if (chan->type == IIO_PRESSURE) {
+ 		ret = iio_device_claim_direct_mode(indio_dev);
+@@ -1367,10 +1369,12 @@ static int at91_adc_read_info_raw(struct iio_dev *indio_dev,
+ 		ret = at91_adc_read_pressure(st, chan->channel,
+ 					     &tmp_val);
+ 		*val = tmp_val;
++		if (ret > 0)
++			ret = at91_adc_adjust_val_osr(st, val);
+ 		mutex_unlock(&st->lock);
+ 		iio_device_release_direct_mode(indio_dev);
+ 
+-		return at91_adc_adjust_val_osr(st, val);
++		return ret;
+ 	}
+ 
+ 	/* in this case we have a voltage channel */
+@@ -1461,16 +1465,20 @@ static int at91_adc_write_raw(struct iio_dev *indio_dev,
+ 		/* if no change, optimize out */
+ 		if (val == st->oversampling_ratio)
+ 			return 0;
++		mutex_lock(&st->lock);
+ 		st->oversampling_ratio = val;
+ 		/* update ratio */
+ 		at91_adc_config_emr(st);
++		mutex_unlock(&st->lock);
+ 		return 0;
+ 	case IIO_CHAN_INFO_SAMP_FREQ:
+ 		if (val < st->soc_info.min_sample_rate ||
+ 		    val > st->soc_info.max_sample_rate)
+ 			return -EINVAL;
+ 
++		mutex_lock(&st->lock);
+ 		at91_adc_setup_samp_freq(indio_dev, val);
++		mutex_unlock(&st->lock);
+ 		return 0;
+ 	default:
+ 		return -EINVAL;
+@@ -1899,6 +1907,9 @@ static __maybe_unused int at91_adc_suspend(struct device *dev)
+ 	struct iio_dev *indio_dev = dev_get_drvdata(dev);
+ 	struct at91_adc_state *st = iio_priv(indio_dev);
+ 
++	if (iio_buffer_enabled(indio_dev))
++		at91_adc_buffer_postdisable(indio_dev);
++
+ 	/*
+ 	 * Do a sofware reset of the ADC before we go to suspend.
+ 	 * this will ensure that all pins are free from being muxed by the ADC
+@@ -1942,14 +1953,11 @@ static __maybe_unused int at91_adc_resume(struct device *dev)
+ 	if (!iio_buffer_enabled(indio_dev))
+ 		return 0;
+ 
+-	/* check if we are enabling triggered buffer or the touchscreen */
+-	if (at91_adc_current_chan_is_touch(indio_dev))
+-		return at91_adc_configure_touch(st, true);
+-	else
+-		return at91_adc_configure_trigger(st->trig, true);
++	ret = at91_adc_buffer_prepare(indio_dev);
++	if (ret)
++		goto vref_disable_resume;
+ 
+-	/* not needed but more explicit */
+-	return 0;
++	return at91_adc_configure_trigger(st->trig, true);
+ 
+ vref_disable_resume:
+ 	regulator_disable(st->vref);
+diff --git a/drivers/iio/adc/ltc2497.c b/drivers/iio/adc/ltc2497.c
+index 1adddf5a88a94..61f373fab9a11 100644
+--- a/drivers/iio/adc/ltc2497.c
++++ b/drivers/iio/adc/ltc2497.c
+@@ -41,6 +41,19 @@ static int ltc2497_result_and_measure(struct ltc2497core_driverdata *ddata,
+ 		}
+ 
+ 		*val = (be32_to_cpu(st->buf) >> 14) - (1 << 17);
++
++		/*
++		 * The part started a new conversion at the end of the above i2c
++		 * transfer, so if the address didn't change since the last call
++		 * everything is fine and we can return early.
++		 * If not (which should only happen when some sort of bulk
++		 * conversion is implemented) we have to program the new
++		 * address. Note that this probably fails as the conversion that
++		 * was triggered above is like not complete yet and the two
++		 * operations have to be done in a single transfer.
++		 */
++		if (ddata->addr_prev == address)
++			return 0;
+ 	}
+ 
+ 	ret = i2c_smbus_write_byte(st->client,
+diff --git a/drivers/iio/dac/ad5593r.c b/drivers/iio/dac/ad5593r.c
+index 5b4df36fdc2ad..4cc855c781218 100644
+--- a/drivers/iio/dac/ad5593r.c
++++ b/drivers/iio/dac/ad5593r.c
+@@ -13,6 +13,8 @@
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
+ 
++#include <asm/unaligned.h>
++
+ #define AD5593R_MODE_CONF		(0 << 4)
+ #define AD5593R_MODE_DAC_WRITE		(1 << 4)
+ #define AD5593R_MODE_ADC_READBACK	(4 << 4)
+@@ -20,6 +22,24 @@
+ #define AD5593R_MODE_GPIO_READBACK	(6 << 4)
+ #define AD5593R_MODE_REG_READBACK	(7 << 4)
+ 
++static int ad5593r_read_word(struct i2c_client *i2c, u8 reg, u16 *value)
++{
++	int ret;
++	u8 buf[2];
++
++	ret = i2c_smbus_write_byte(i2c, reg);
++	if (ret < 0)
++		return ret;
++
++	ret = i2c_master_recv(i2c, buf, sizeof(buf));
++	if (ret < 0)
++		return ret;
++
++	*value = get_unaligned_be16(buf);
++
++	return 0;
++}
++
+ static int ad5593r_write_dac(struct ad5592r_state *st, unsigned chan, u16 value)
+ {
+ 	struct i2c_client *i2c = to_i2c_client(st->dev);
+@@ -38,13 +58,7 @@ static int ad5593r_read_adc(struct ad5592r_state *st, unsigned chan, u16 *value)
+ 	if (val < 0)
+ 		return (int) val;
+ 
+-	val = i2c_smbus_read_word_swapped(i2c, AD5593R_MODE_ADC_READBACK);
+-	if (val < 0)
+-		return (int) val;
+-
+-	*value = (u16) val;
+-
+-	return 0;
++	return ad5593r_read_word(i2c, AD5593R_MODE_ADC_READBACK, value);
+ }
+ 
+ static int ad5593r_reg_write(struct ad5592r_state *st, u8 reg, u16 value)
+@@ -58,25 +72,19 @@ static int ad5593r_reg_write(struct ad5592r_state *st, u8 reg, u16 value)
+ static int ad5593r_reg_read(struct ad5592r_state *st, u8 reg, u16 *value)
+ {
+ 	struct i2c_client *i2c = to_i2c_client(st->dev);
+-	s32 val;
+-
+-	val = i2c_smbus_read_word_swapped(i2c, AD5593R_MODE_REG_READBACK | reg);
+-	if (val < 0)
+-		return (int) val;
+ 
+-	*value = (u16) val;
+-
+-	return 0;
++	return ad5593r_read_word(i2c, AD5593R_MODE_REG_READBACK | reg, value);
+ }
+ 
+ static int ad5593r_gpio_read(struct ad5592r_state *st, u8 *value)
+ {
+ 	struct i2c_client *i2c = to_i2c_client(st->dev);
+-	s32 val;
++	u16 val;
++	int ret;
+ 
+-	val = i2c_smbus_read_word_swapped(i2c, AD5593R_MODE_GPIO_READBACK);
+-	if (val < 0)
+-		return (int) val;
++	ret = ad5593r_read_word(i2c, AD5593R_MODE_GPIO_READBACK, &val);
++	if (ret)
++		return ret;
+ 
+ 	*value = (u8) val;
+ 
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index 8c3faa7972842..c32b2577dd991 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -136,9 +136,10 @@ static int __of_iio_channel_get(struct iio_channel *channel,
+ 
+ 	idev = bus_find_device(&iio_bus_type, NULL, iiospec.np,
+ 			       iio_dev_node_match);
+-	of_node_put(iiospec.np);
+-	if (idev == NULL)
++	if (idev == NULL) {
++		of_node_put(iiospec.np);
+ 		return -EPROBE_DEFER;
++	}
+ 
+ 	indio_dev = dev_to_iio_dev(idev);
+ 	channel->indio_dev = indio_dev;
+@@ -146,6 +147,7 @@ static int __of_iio_channel_get(struct iio_channel *channel,
+ 		index = indio_dev->info->of_xlate(indio_dev, &iiospec);
+ 	else
+ 		index = __of_iio_simple_xlate(indio_dev, &iiospec);
++	of_node_put(iiospec.np);
+ 	if (index < 0)
+ 		goto err_put;
+ 	channel->channel = &indio_dev->channels[index];
+diff --git a/drivers/iio/pressure/dps310.c b/drivers/iio/pressure/dps310.c
+index 0730380ceb692..cf8b92fae1b3d 100644
+--- a/drivers/iio/pressure/dps310.c
++++ b/drivers/iio/pressure/dps310.c
+@@ -89,6 +89,7 @@ struct dps310_data {
+ 	s32 c00, c10, c20, c30, c01, c11, c21;
+ 	s32 pressure_raw;
+ 	s32 temp_raw;
++	bool timeout_recovery_failed;
+ };
+ 
+ static const struct iio_chan_spec dps310_channels[] = {
+@@ -159,6 +160,102 @@ static int dps310_get_coefs(struct dps310_data *data)
+ 	return 0;
+ }
+ 
++/*
++ * Some versions of the chip will read temperatures in the ~60C range when
++ * it's actually ~20C. This is the manufacturer recommended workaround
++ * to correct the issue. The registers used below are undocumented.
++ */
++static int dps310_temp_workaround(struct dps310_data *data)
++{
++	int rc;
++	int reg;
++
++	rc = regmap_read(data->regmap, 0x32, &reg);
++	if (rc)
++		return rc;
++
++	/*
++	 * If bit 1 is set then the device is okay, and the workaround does not
++	 * need to be applied
++	 */
++	if (reg & BIT(1))
++		return 0;
++
++	rc = regmap_write(data->regmap, 0x0e, 0xA5);
++	if (rc)
++		return rc;
++
++	rc = regmap_write(data->regmap, 0x0f, 0x96);
++	if (rc)
++		return rc;
++
++	rc = regmap_write(data->regmap, 0x62, 0x02);
++	if (rc)
++		return rc;
++
++	rc = regmap_write(data->regmap, 0x0e, 0x00);
++	if (rc)
++		return rc;
++
++	return regmap_write(data->regmap, 0x0f, 0x00);
++}
++
++static int dps310_startup(struct dps310_data *data)
++{
++	int rc;
++	int ready;
++
++	/*
++	 * Set up pressure sensor in single sample, one measurement per second
++	 * mode
++	 */
++	rc = regmap_write(data->regmap, DPS310_PRS_CFG, 0);
++	if (rc)
++		return rc;
++
++	/*
++	 * Set up external (MEMS) temperature sensor in single sample, one
++	 * measurement per second mode
++	 */
++	rc = regmap_write(data->regmap, DPS310_TMP_CFG, DPS310_TMP_EXT);
++	if (rc)
++		return rc;
++
++	/* Temp and pressure shifts are disabled when PRC <= 8 */
++	rc = regmap_write_bits(data->regmap, DPS310_CFG_REG,
++			       DPS310_PRS_SHIFT_EN | DPS310_TMP_SHIFT_EN, 0);
++	if (rc)
++		return rc;
++
++	/* MEAS_CFG doesn't update correctly unless first written with 0 */
++	rc = regmap_write_bits(data->regmap, DPS310_MEAS_CFG,
++			       DPS310_MEAS_CTRL_BITS, 0);
++	if (rc)
++		return rc;
++
++	/* Turn on temperature and pressure measurement in the background */
++	rc = regmap_write_bits(data->regmap, DPS310_MEAS_CFG,
++			       DPS310_MEAS_CTRL_BITS, DPS310_PRS_EN |
++			       DPS310_TEMP_EN | DPS310_BACKGROUND);
++	if (rc)
++		return rc;
++
++	/*
++	 * Calibration coefficients required for reporting temperature.
++	 * They are available 40ms after the device has started
++	 */
++	rc = regmap_read_poll_timeout(data->regmap, DPS310_MEAS_CFG, ready,
++				      ready & DPS310_COEF_RDY, 10000, 40000);
++	if (rc)
++		return rc;
++
++	rc = dps310_get_coefs(data);
++	if (rc)
++		return rc;
++
++	return dps310_temp_workaround(data);
++}
++
+ static int dps310_get_pres_precision(struct dps310_data *data)
+ {
+ 	int rc;
+@@ -297,11 +394,69 @@ static int dps310_get_temp_k(struct dps310_data *data)
+ 	return scale_factors[ilog2(rc)];
+ }
+ 
++static int dps310_reset_wait(struct dps310_data *data)
++{
++	int rc;
++
++	rc = regmap_write(data->regmap, DPS310_RESET, DPS310_RESET_MAGIC);
++	if (rc)
++		return rc;
++
++	/* Wait for device chip access: 2.5ms in specification */
++	usleep_range(2500, 12000);
++	return 0;
++}
++
++static int dps310_reset_reinit(struct dps310_data *data)
++{
++	int rc;
++
++	rc = dps310_reset_wait(data);
++	if (rc)
++		return rc;
++
++	return dps310_startup(data);
++}
++
++static int dps310_ready_status(struct dps310_data *data, int ready_bit, int timeout)
++{
++	int sleep = DPS310_POLL_SLEEP_US(timeout);
++	int ready;
++
++	return regmap_read_poll_timeout(data->regmap, DPS310_MEAS_CFG, ready, ready & ready_bit,
++					sleep, timeout);
++}
++
++static int dps310_ready(struct dps310_data *data, int ready_bit, int timeout)
++{
++	int rc;
++
++	rc = dps310_ready_status(data, ready_bit, timeout);
++	if (rc) {
++		if (rc == -ETIMEDOUT && !data->timeout_recovery_failed) {
++			/* Reset and reinitialize the chip. */
++			if (dps310_reset_reinit(data)) {
++				data->timeout_recovery_failed = true;
++			} else {
++				/* Try again to get sensor ready status. */
++				if (dps310_ready_status(data, ready_bit, timeout))
++					data->timeout_recovery_failed = true;
++				else
++					return 0;
++			}
++		}
++
++		return rc;
++	}
++
++	data->timeout_recovery_failed = false;
++	return 0;
++}
++
+ static int dps310_read_pres_raw(struct dps310_data *data)
+ {
+ 	int rc;
+ 	int rate;
+-	int ready;
+ 	int timeout;
+ 	s32 raw;
+ 	u8 val[3];
+@@ -313,9 +468,7 @@ static int dps310_read_pres_raw(struct dps310_data *data)
+ 	timeout = DPS310_POLL_TIMEOUT_US(rate);
+ 
+ 	/* Poll for sensor readiness; base the timeout upon the sample rate. */
+-	rc = regmap_read_poll_timeout(data->regmap, DPS310_MEAS_CFG, ready,
+-				      ready & DPS310_PRS_RDY,
+-				      DPS310_POLL_SLEEP_US(timeout), timeout);
++	rc = dps310_ready(data, DPS310_PRS_RDY, timeout);
+ 	if (rc)
+ 		goto done;
+ 
+@@ -352,7 +505,6 @@ static int dps310_read_temp_raw(struct dps310_data *data)
+ {
+ 	int rc;
+ 	int rate;
+-	int ready;
+ 	int timeout;
+ 
+ 	if (mutex_lock_interruptible(&data->lock))
+@@ -362,10 +514,8 @@ static int dps310_read_temp_raw(struct dps310_data *data)
+ 	timeout = DPS310_POLL_TIMEOUT_US(rate);
+ 
+ 	/* Poll for sensor readiness; base the timeout upon the sample rate. */
+-	rc = regmap_read_poll_timeout(data->regmap, DPS310_MEAS_CFG, ready,
+-				      ready & DPS310_TMP_RDY,
+-				      DPS310_POLL_SLEEP_US(timeout), timeout);
+-	if (rc < 0)
++	rc = dps310_ready(data, DPS310_TMP_RDY, timeout);
++	if (rc)
+ 		goto done;
+ 
+ 	rc = dps310_read_temp_ready(data);
+@@ -660,7 +810,7 @@ static void dps310_reset(void *action_data)
+ {
+ 	struct dps310_data *data = action_data;
+ 
+-	regmap_write(data->regmap, DPS310_RESET, DPS310_RESET_MAGIC);
++	dps310_reset_wait(data);
+ }
+ 
+ static const struct regmap_config dps310_regmap_config = {
+@@ -677,52 +827,12 @@ static const struct iio_info dps310_info = {
+ 	.write_raw = dps310_write_raw,
+ };
+ 
+-/*
+- * Some verions of chip will read temperatures in the ~60C range when
+- * its actually ~20C. This is the manufacturer recommended workaround
+- * to correct the issue. The registers used below are undocumented.
+- */
+-static int dps310_temp_workaround(struct dps310_data *data)
+-{
+-	int rc;
+-	int reg;
+-
+-	rc = regmap_read(data->regmap, 0x32, &reg);
+-	if (rc < 0)
+-		return rc;
+-
+-	/*
+-	 * If bit 1 is set then the device is okay, and the workaround does not
+-	 * need to be applied
+-	 */
+-	if (reg & BIT(1))
+-		return 0;
+-
+-	rc = regmap_write(data->regmap, 0x0e, 0xA5);
+-	if (rc < 0)
+-		return rc;
+-
+-	rc = regmap_write(data->regmap, 0x0f, 0x96);
+-	if (rc < 0)
+-		return rc;
+-
+-	rc = regmap_write(data->regmap, 0x62, 0x02);
+-	if (rc < 0)
+-		return rc;
+-
+-	rc = regmap_write(data->regmap, 0x0e, 0x00);
+-	if (rc < 0)
+-		return rc;
+-
+-	return regmap_write(data->regmap, 0x0f, 0x00);
+-}
+-
+ static int dps310_probe(struct i2c_client *client,
+ 			const struct i2c_device_id *id)
+ {
+ 	struct dps310_data *data;
+ 	struct iio_dev *iio;
+-	int rc, ready;
++	int rc;
+ 
+ 	iio = devm_iio_device_alloc(&client->dev,  sizeof(*data));
+ 	if (!iio)
+@@ -747,54 +857,8 @@ static int dps310_probe(struct i2c_client *client,
+ 	if (rc)
+ 		return rc;
+ 
+-	/*
+-	 * Set up pressure sensor in single sample, one measurement per second
+-	 * mode
+-	 */
+-	rc = regmap_write(data->regmap, DPS310_PRS_CFG, 0);
+-
+-	/*
+-	 * Set up external (MEMS) temperature sensor in single sample, one
+-	 * measurement per second mode
+-	 */
+-	rc = regmap_write(data->regmap, DPS310_TMP_CFG, DPS310_TMP_EXT);
+-	if (rc < 0)
+-		return rc;
+-
+-	/* Temp and pressure shifts are disabled when PRC <= 8 */
+-	rc = regmap_write_bits(data->regmap, DPS310_CFG_REG,
+-			       DPS310_PRS_SHIFT_EN | DPS310_TMP_SHIFT_EN, 0);
+-	if (rc < 0)
+-		return rc;
+-
+-	/* MEAS_CFG doesn't update correctly unless first written with 0 */
+-	rc = regmap_write_bits(data->regmap, DPS310_MEAS_CFG,
+-			       DPS310_MEAS_CTRL_BITS, 0);
+-	if (rc < 0)
+-		return rc;
+-
+-	/* Turn on temperature and pressure measurement in the background */
+-	rc = regmap_write_bits(data->regmap, DPS310_MEAS_CFG,
+-			       DPS310_MEAS_CTRL_BITS, DPS310_PRS_EN |
+-			       DPS310_TEMP_EN | DPS310_BACKGROUND);
+-	if (rc < 0)
+-		return rc;
+-
+-	/*
+-	 * Calibration coefficients required for reporting temperature.
+-	 * They are available 40ms after the device has started
+-	 */
+-	rc = regmap_read_poll_timeout(data->regmap, DPS310_MEAS_CFG, ready,
+-				      ready & DPS310_COEF_RDY, 10000, 40000);
+-	if (rc < 0)
+-		return rc;
+-
+-	rc = dps310_get_coefs(data);
+-	if (rc < 0)
+-		return rc;
+-
+-	rc = dps310_temp_workaround(data);
+-	if (rc < 0)
++	rc = dps310_startup(data);
++	if (rc)
+ 		return rc;
+ 
+ 	rc = devm_iio_device_register(&client->dev, iio);
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 3cc7a23fa69fe..3133b6be6cab9 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -1643,14 +1643,13 @@ static void cm_path_set_rec_type(struct ib_device *ib_device, u8 port_num,
+ 
+ static void cm_format_path_lid_from_req(struct cm_req_msg *req_msg,
+ 					struct sa_path_rec *primary_path,
+-					struct sa_path_rec *alt_path)
++					struct sa_path_rec *alt_path,
++					struct ib_wc *wc)
+ {
+ 	u32 lid;
+ 
+ 	if (primary_path->rec_type != SA_PATH_REC_TYPE_OPA) {
+-		sa_path_set_dlid(primary_path,
+-				 IBA_GET(CM_REQ_PRIMARY_LOCAL_PORT_LID,
+-					 req_msg));
++		sa_path_set_dlid(primary_path, wc->slid);
+ 		sa_path_set_slid(primary_path,
+ 				 IBA_GET(CM_REQ_PRIMARY_REMOTE_PORT_LID,
+ 					 req_msg));
+@@ -1687,7 +1686,8 @@ static void cm_format_path_lid_from_req(struct cm_req_msg *req_msg,
+ 
+ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
+ 				     struct sa_path_rec *primary_path,
+-				     struct sa_path_rec *alt_path)
++				     struct sa_path_rec *alt_path,
++				     struct ib_wc *wc)
+ {
+ 	primary_path->dgid =
+ 		*IBA_GET_MEM_PTR(CM_REQ_PRIMARY_LOCAL_PORT_GID, req_msg);
+@@ -1745,7 +1745,7 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
+ 		if (sa_path_is_roce(alt_path))
+ 			alt_path->roce.route_resolved = false;
+ 	}
+-	cm_format_path_lid_from_req(req_msg, primary_path, alt_path);
++	cm_format_path_lid_from_req(req_msg, primary_path, alt_path, wc);
+ }
+ 
+ static u16 cm_get_bth_pkey(struct cm_work *work)
+@@ -2163,7 +2163,7 @@ static int cm_req_handler(struct cm_work *work)
+ 	if (cm_req_has_alt_path(req_msg))
+ 		work->path[1].rec_type = work->path[0].rec_type;
+ 	cm_format_paths_from_req(req_msg, &work->path[0],
+-				 &work->path[1]);
++				 &work->path[1], work->mad_recv_wc->wc);
+ 	if (cm_id_priv->av.ah_attr.type == RDMA_AH_ATTR_TYPE_ROCE)
+ 		sa_path_set_dmac(&work->path[0],
+ 				 cm_id_priv->av.ah_attr.roce.dmac);
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 466026825dd75..d7c90da9ce7f1 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -749,6 +749,7 @@ static int ib_uverbs_reg_mr(struct uverbs_attr_bundle *attrs)
+ 	mr->uobject = uobj;
+ 	atomic_inc(&pd->usecnt);
+ 	mr->iova = cmd.hca_va;
++	mr->length = cmd.length;
+ 
+ 	rdma_restrack_new(&mr->res, RDMA_RESTRACK_MR);
+ 	rdma_restrack_set_name(&mr->res, NULL);
+@@ -832,8 +833,10 @@ static int ib_uverbs_rereg_mr(struct uverbs_attr_bundle *attrs)
+ 		atomic_dec(&old_pd->usecnt);
+ 	}
+ 
+-	if (cmd.flags & IB_MR_REREG_TRANS)
++	if (cmd.flags & IB_MR_REREG_TRANS) {
+ 		mr->iova = cmd.hca_va;
++		mr->length = cmd.length;
++	}
+ 
+ 	memset(&resp, 0, sizeof(resp));
+ 	resp.lkey      = mr->lkey;
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index 597e889ba8312..5889639e90a1c 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -2082,6 +2082,8 @@ struct ib_mr *ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ 	mr->pd = pd;
+ 	mr->dm = NULL;
+ 	atomic_inc(&pd->usecnt);
++	mr->iova =  virt_addr;
++	mr->length = length;
+ 
+ 	rdma_restrack_new(&mr->res, RDMA_RESTRACK_MR);
+ 	rdma_restrack_parent_name(&mr->res, &pd->res);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index 027ec8413ac25..6d7cc724862fa 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -286,7 +286,6 @@ struct ib_mr *hns_roce_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ 		goto err_alloc_pbl;
+ 
+ 	mr->ibmr.rkey = mr->ibmr.lkey = mr->key;
+-	mr->ibmr.length = length;
+ 
+ 	return &mr->ibmr;
+ 
+diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
+index 426fed005d538..811b4bb345247 100644
+--- a/drivers/infiniband/hw/mlx4/mr.c
++++ b/drivers/infiniband/hw/mlx4/mr.c
+@@ -439,7 +439,6 @@ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ 		goto err_mr;
+ 
+ 	mr->ibmr.rkey = mr->ibmr.lkey = mr->mmr.key;
+-	mr->ibmr.length = length;
+ 	mr->ibmr.page_size = 1U << shift;
+ 
+ 	return &mr->ibmr;
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 2847ab4d9a5f0..2e4b008f03870 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -775,7 +775,9 @@ void rxe_qp_destroy(struct rxe_qp *qp)
+ 	rxe_cleanup_task(&qp->comp.task);
+ 
+ 	/* flush out any receive wr's or pending requests */
+-	__rxe_do_task(&qp->req.task);
++	if (qp->req.task.func)
++		__rxe_do_task(&qp->req.task);
++
+ 	if (qp->sq.queue) {
+ 		__rxe_do_task(&qp->comp.task);
+ 		__rxe_do_task(&qp->req.task);
+@@ -815,8 +817,10 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
+ 
+ 	free_rd_atomic_resources(qp);
+ 
+-	kernel_sock_shutdown(qp->sk, SHUT_RDWR);
+-	sock_release(qp->sk);
++	if (qp->sk) {
++		kernel_sock_shutdown(qp->sk, SHUT_RDWR);
++		sock_release(qp->sk);
++	}
+ }
+ 
+ /* called when the last reference to the qp is dropped */
+diff --git a/drivers/infiniband/sw/siw/siw_qp_rx.c b/drivers/infiniband/sw/siw/siw_qp_rx.c
+index 875ea6f1b04a2..fd721cc19682e 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_rx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_rx.c
+@@ -961,27 +961,28 @@ out:
+ static int siw_get_trailer(struct siw_qp *qp, struct siw_rx_stream *srx)
+ {
+ 	struct sk_buff *skb = srx->skb;
++	int avail = min(srx->skb_new, srx->fpdu_part_rem);
+ 	u8 *tbuf = (u8 *)&srx->trailer.crc - srx->pad;
+ 	__wsum crc_in, crc_own = 0;
+ 
+ 	siw_dbg_qp(qp, "expected %d, available %d, pad %u\n",
+ 		   srx->fpdu_part_rem, srx->skb_new, srx->pad);
+ 
+-	if (srx->skb_new < srx->fpdu_part_rem)
+-		return -EAGAIN;
+-
+-	skb_copy_bits(skb, srx->skb_offset, tbuf, srx->fpdu_part_rem);
++	skb_copy_bits(skb, srx->skb_offset, tbuf, avail);
+ 
+-	if (srx->mpa_crc_hd && srx->pad)
+-		crypto_shash_update(srx->mpa_crc_hd, tbuf, srx->pad);
++	srx->skb_new -= avail;
++	srx->skb_offset += avail;
++	srx->skb_copied += avail;
++	srx->fpdu_part_rem -= avail;
+ 
+-	srx->skb_new -= srx->fpdu_part_rem;
+-	srx->skb_offset += srx->fpdu_part_rem;
+-	srx->skb_copied += srx->fpdu_part_rem;
++	if (srx->fpdu_part_rem)
++		return -EAGAIN;
+ 
+ 	if (!srx->mpa_crc_hd)
+ 		return 0;
+ 
++	if (srx->pad)
++		crypto_shash_update(srx->mpa_crc_hd, tbuf, srx->pad);
+ 	/*
+ 	 * CRC32 is computed, transmitted and received directly in NBO,
+ 	 * so there's never a reason to convert byte order.
+@@ -1083,10 +1084,9 @@ static int siw_get_hdr(struct siw_rx_stream *srx)
+ 	 * completely received.
+ 	 */
+ 	if (iwarp_pktinfo[opcode].hdr_len > sizeof(struct iwarp_ctrl_tagged)) {
+-		bytes = iwarp_pktinfo[opcode].hdr_len - MIN_DDP_HDR;
++		int hdrlen = iwarp_pktinfo[opcode].hdr_len;
+ 
+-		if (srx->skb_new < bytes)
+-			return -EAGAIN;
++		bytes = min_t(int, hdrlen - MIN_DDP_HDR, srx->skb_new);
+ 
+ 		skb_copy_bits(skb, srx->skb_offset,
+ 			      (char *)c_hdr + srx->fpdu_part_rcvd, bytes);
+@@ -1096,6 +1096,9 @@ static int siw_get_hdr(struct siw_rx_stream *srx)
+ 		srx->skb_new -= bytes;
+ 		srx->skb_offset += bytes;
+ 		srx->skb_copied += bytes;
++
++		if (srx->fpdu_part_rcvd < hdrlen)
++			return -EAGAIN;
+ 	}
+ 
+ 	/*
+diff --git a/drivers/iommu/omap-iommu-debug.c b/drivers/iommu/omap-iommu-debug.c
+index a99afb5d9011c..259f65291d909 100644
+--- a/drivers/iommu/omap-iommu-debug.c
++++ b/drivers/iommu/omap-iommu-debug.c
+@@ -32,12 +32,12 @@ static inline bool is_omap_iommu_detached(struct omap_iommu *obj)
+ 		ssize_t bytes;						\
+ 		const char *str = "%20s: %08x\n";			\
+ 		const int maxcol = 32;					\
+-		bytes = snprintf(p, maxcol, str, __stringify(name),	\
++		if (len < maxcol)					\
++			goto out;					\
++		bytes = scnprintf(p, maxcol, str, __stringify(name),	\
+ 				 iommu_read_reg(obj, MMU_##name));	\
+ 		p += bytes;						\
+ 		len -= bytes;						\
+-		if (len < maxcol)					\
+-			goto out;					\
+ 	} while (0)
+ 
+ static ssize_t
+diff --git a/drivers/isdn/mISDN/l1oip.h b/drivers/isdn/mISDN/l1oip.h
+index 7ea10db20e3a6..48133d0228120 100644
+--- a/drivers/isdn/mISDN/l1oip.h
++++ b/drivers/isdn/mISDN/l1oip.h
+@@ -59,6 +59,7 @@ struct l1oip {
+ 	int			bundle;		/* bundle channels in one frm */
+ 	int			codec;		/* codec to use for transmis. */
+ 	int			limit;		/* limit number of bchannels */
++	bool			shutdown;	/* if card is released */
+ 
+ 	/* timer */
+ 	struct timer_list	keep_tl;
+diff --git a/drivers/isdn/mISDN/l1oip_core.c b/drivers/isdn/mISDN/l1oip_core.c
+index b57dcb834594d..aec4f2a69c3bd 100644
+--- a/drivers/isdn/mISDN/l1oip_core.c
++++ b/drivers/isdn/mISDN/l1oip_core.c
+@@ -275,7 +275,7 @@ l1oip_socket_send(struct l1oip *hc, u8 localcodec, u8 channel, u32 chanmask,
+ 	p = frame;
+ 
+ 	/* restart timer */
+-	if (time_before(hc->keep_tl.expires, jiffies + 5 * HZ))
++	if (time_before(hc->keep_tl.expires, jiffies + 5 * HZ) && !hc->shutdown)
+ 		mod_timer(&hc->keep_tl, jiffies + L1OIP_KEEPALIVE * HZ);
+ 	else
+ 		hc->keep_tl.expires = jiffies + L1OIP_KEEPALIVE * HZ;
+@@ -601,7 +601,9 @@ multiframe:
+ 		goto multiframe;
+ 
+ 	/* restart timer */
+-	if (time_before(hc->timeout_tl.expires, jiffies + 5 * HZ) || !hc->timeout_on) {
++	if ((time_before(hc->timeout_tl.expires, jiffies + 5 * HZ) ||
++	     !hc->timeout_on) &&
++	    !hc->shutdown) {
+ 		hc->timeout_on = 1;
+ 		mod_timer(&hc->timeout_tl, jiffies + L1OIP_TIMEOUT * HZ);
+ 	} else /* only adjust timer */
+@@ -1232,11 +1234,10 @@ release_card(struct l1oip *hc)
+ {
+ 	int	ch;
+ 
+-	if (timer_pending(&hc->keep_tl))
+-		del_timer(&hc->keep_tl);
++	hc->shutdown = true;
+ 
+-	if (timer_pending(&hc->timeout_tl))
+-		del_timer(&hc->timeout_tl);
++	del_timer_sync(&hc->keep_tl);
++	del_timer_sync(&hc->timeout_tl);
+ 
+ 	cancel_work_sync(&hc->workq);
+ 
+diff --git a/drivers/leds/leds-lm3601x.c b/drivers/leds/leds-lm3601x.c
+index d0e1d4814042e..3d12727482017 100644
+--- a/drivers/leds/leds-lm3601x.c
++++ b/drivers/leds/leds-lm3601x.c
+@@ -444,8 +444,6 @@ static int lm3601x_remove(struct i2c_client *client)
+ {
+ 	struct lm3601x_led *led = i2c_get_clientdata(client);
+ 
+-	mutex_destroy(&led->lock);
+-
+ 	return regmap_update_bits(led->regmap, LM3601X_ENABLE_REG,
+ 			   LM3601X_ENABLE_MASK,
+ 			   LM3601X_MODE_STANDBY);
+diff --git a/drivers/mailbox/bcm-flexrm-mailbox.c b/drivers/mailbox/bcm-flexrm-mailbox.c
+index bee33abb53084..e913ed1e34c63 100644
+--- a/drivers/mailbox/bcm-flexrm-mailbox.c
++++ b/drivers/mailbox/bcm-flexrm-mailbox.c
+@@ -632,15 +632,15 @@ static int flexrm_spu_dma_map(struct device *dev, struct brcm_message *msg)
+ 
+ 	rc = dma_map_sg(dev, msg->spu.src, sg_nents(msg->spu.src),
+ 			DMA_TO_DEVICE);
+-	if (rc < 0)
+-		return rc;
++	if (!rc)
++		return -EIO;
+ 
+ 	rc = dma_map_sg(dev, msg->spu.dst, sg_nents(msg->spu.dst),
+ 			DMA_FROM_DEVICE);
+-	if (rc < 0) {
++	if (!rc) {
+ 		dma_unmap_sg(dev, msg->spu.src, sg_nents(msg->spu.src),
+ 			     DMA_TO_DEVICE);
+-		return rc;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index a878b959fbcdd..3aa73da2c67bd 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -119,6 +119,53 @@ static void __update_writeback_rate(struct cached_dev *dc)
+ 	dc->writeback_rate_target = target;
+ }
+ 
++static bool idle_counter_exceeded(struct cache_set *c)
++{
++	int counter, dev_nr;
++
++	/*
++	 * If c->idle_counter is overflow (idel for really long time),
++	 * reset as 0 and not set maximum rate this time for code
++	 * simplicity.
++	 */
++	counter = atomic_inc_return(&c->idle_counter);
++	if (counter <= 0) {
++		atomic_set(&c->idle_counter, 0);
++		return false;
++	}
++
++	dev_nr = atomic_read(&c->attached_dev_nr);
++	if (dev_nr == 0)
++		return false;
++
++	/*
++	 * c->idle_counter is increased by writeback thread of all
++	 * attached backing devices, in order to represent a rough
++	 * time period, counter should be divided by dev_nr.
++	 * Otherwise the idle time cannot be larger with more backing
++	 * device attached.
++	 * The following calculation equals to checking
++	 *	(counter / dev_nr) < (dev_nr * 6)
++	 */
++	if (counter < (dev_nr * dev_nr * 6))
++		return false;
++
++	return true;
++}
++
++/*
++ * Idle_counter is increased every time when update_writeback_rate() is
++ * called. If all backing devices attached to the same cache set have
++ * identical dc->writeback_rate_update_seconds values, it is about 6
++ * rounds of update_writeback_rate() on each backing device before
++ * c->at_max_writeback_rate is set to 1, and then max wrteback rate set
++ * to each dc->writeback_rate.rate.
++ * In order to avoid extra locking cost for counting exact dirty cached
++ * devices number, c->attached_dev_nr is used to calculate the idle
++ * throushold. It might be bigger if not all cached device are in write-
++ * back mode, but it still works well with limited extra rounds of
++ * update_writeback_rate().
++ */
+ static bool set_at_max_writeback_rate(struct cache_set *c,
+ 				       struct cached_dev *dc)
+ {
+@@ -129,21 +176,8 @@ static bool set_at_max_writeback_rate(struct cache_set *c,
+ 	/* Don't set max writeback rate if gc is running */
+ 	if (!c->gc_mark_valid)
+ 		return false;
+-	/*
+-	 * Idle_counter is increased everytime when update_writeback_rate() is
+-	 * called. If all backing devices attached to the same cache set have
+-	 * identical dc->writeback_rate_update_seconds values, it is about 6
+-	 * rounds of update_writeback_rate() on each backing device before
+-	 * c->at_max_writeback_rate is set to 1, and then max wrteback rate set
+-	 * to each dc->writeback_rate.rate.
+-	 * In order to avoid extra locking cost for counting exact dirty cached
+-	 * devices number, c->attached_dev_nr is used to calculate the idle
+-	 * throushold. It might be bigger if not all cached device are in write-
+-	 * back mode, but it still works well with limited extra rounds of
+-	 * update_writeback_rate().
+-	 */
+-	if (atomic_inc_return(&c->idle_counter) <
+-	    atomic_read(&c->attached_dev_nr) * 6)
++
++	if (!idle_counter_exceeded(c))
+ 		return false;
+ 
+ 	if (atomic_read(&c->at_max_writeback_rate) != 1)
+@@ -157,13 +191,10 @@ static bool set_at_max_writeback_rate(struct cache_set *c,
+ 	dc->writeback_rate_change = 0;
+ 
+ 	/*
+-	 * Check c->idle_counter and c->at_max_writeback_rate agagain in case
+-	 * new I/O arrives during before set_at_max_writeback_rate() returns.
+-	 * Then the writeback rate is set to 1, and its new value should be
+-	 * decided via __update_writeback_rate().
++	 * In case new I/O arrives during before
++	 * set_at_max_writeback_rate() returns.
+ 	 */
+-	if ((atomic_read(&c->idle_counter) <
+-	     atomic_read(&c->attached_dev_nr) * 6) ||
++	if (!idle_counter_exceeded(c) ||
+ 	    !atomic_read(&c->at_max_writeback_rate))
+ 		return false;
+ 
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index a4c0cafa6010a..a20332e755e81 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -48,7 +48,7 @@ static void dump_zones(struct mddev *mddev)
+ 		int len = 0;
+ 
+ 		for (k = 0; k < conf->strip_zone[j].nb_dev; k++)
+-			len += snprintf(line+len, 200-len, "%s%s", k?"/":"",
++			len += scnprintf(line+len, 200-len, "%s%s", k?"/":"",
+ 					bdevname(conf->devlist[j*raid_disks
+ 							       + k]->bdev, b));
+ 		pr_debug("md: zone%d=[%s]\n", j, line);
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 01c7edf329367..9f114b9d8dc6b 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -36,6 +36,7 @@
+  */
+ 
+ #include <linux/blkdev.h>
++#include <linux/delay.h>
+ #include <linux/kthread.h>
+ #include <linux/raid/pq.h>
+ #include <linux/async_tx.h>
+@@ -3936,7 +3937,7 @@ static void handle_stripe_fill(struct stripe_head *sh,
+ 		 * back cache (prexor with orig_page, and then xor with
+ 		 * page) in the read path
+ 		 */
+-		if (s->injournal && s->failed) {
++		if (s->to_read && s->injournal && s->failed) {
+ 			if (test_bit(STRIPE_R5C_CACHING, &sh->state))
+ 				r5c_make_stripe_write_out(sh);
+ 			goto out;
+@@ -6519,7 +6520,18 @@ static void raid5d(struct md_thread *thread)
+ 			spin_unlock_irq(&conf->device_lock);
+ 			md_check_recovery(mddev);
+ 			spin_lock_irq(&conf->device_lock);
++
++			/*
++			 * Waiting on MD_SB_CHANGE_PENDING below may deadlock
++			 * seeing md_check_recovery() is needed to clear
++			 * the flag when using mdmon.
++			 */
++			continue;
+ 		}
++
++		wait_event_lock_irq(mddev->sb_wait,
++			!test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags),
++			conf->device_lock);
+ 	}
+ 	pr_debug("%d stripes handled\n", handled);
+ 
+diff --git a/drivers/media/pci/cx88/cx88-vbi.c b/drivers/media/pci/cx88/cx88-vbi.c
+index 58489ea0c1da1..7cf2271866d05 100644
+--- a/drivers/media/pci/cx88/cx88-vbi.c
++++ b/drivers/media/pci/cx88/cx88-vbi.c
+@@ -144,11 +144,10 @@ static int buffer_prepare(struct vb2_buffer *vb)
+ 		return -EINVAL;
+ 	vb2_set_plane_payload(vb, 0, size);
+ 
+-	cx88_risc_buffer(dev->pci, &buf->risc, sgt->sgl,
+-			 0, VBI_LINE_LENGTH * lines,
+-			 VBI_LINE_LENGTH, 0,
+-			 lines);
+-	return 0;
++	return cx88_risc_buffer(dev->pci, &buf->risc, sgt->sgl,
++				0, VBI_LINE_LENGTH * lines,
++				VBI_LINE_LENGTH, 0,
++				lines);
+ }
+ 
+ static void buffer_finish(struct vb2_buffer *vb)
+diff --git a/drivers/media/pci/cx88/cx88-video.c b/drivers/media/pci/cx88/cx88-video.c
+index 8cffdacf60079..e5adffa3a99a8 100644
+--- a/drivers/media/pci/cx88/cx88-video.c
++++ b/drivers/media/pci/cx88/cx88-video.c
+@@ -431,6 +431,7 @@ static int queue_setup(struct vb2_queue *q,
+ 
+ static int buffer_prepare(struct vb2_buffer *vb)
+ {
++	int ret;
+ 	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ 	struct cx8800_dev *dev = vb->vb2_queue->drv_priv;
+ 	struct cx88_core *core = dev->core;
+@@ -445,35 +446,35 @@ static int buffer_prepare(struct vb2_buffer *vb)
+ 
+ 	switch (core->field) {
+ 	case V4L2_FIELD_TOP:
+-		cx88_risc_buffer(dev->pci, &buf->risc,
+-				 sgt->sgl, 0, UNSET,
+-				 buf->bpl, 0, core->height);
++		ret = cx88_risc_buffer(dev->pci, &buf->risc,
++				       sgt->sgl, 0, UNSET,
++				       buf->bpl, 0, core->height);
+ 		break;
+ 	case V4L2_FIELD_BOTTOM:
+-		cx88_risc_buffer(dev->pci, &buf->risc,
+-				 sgt->sgl, UNSET, 0,
+-				 buf->bpl, 0, core->height);
++		ret = cx88_risc_buffer(dev->pci, &buf->risc,
++				       sgt->sgl, UNSET, 0,
++				       buf->bpl, 0, core->height);
+ 		break;
+ 	case V4L2_FIELD_SEQ_TB:
+-		cx88_risc_buffer(dev->pci, &buf->risc,
+-				 sgt->sgl,
+-				 0, buf->bpl * (core->height >> 1),
+-				 buf->bpl, 0,
+-				 core->height >> 1);
++		ret = cx88_risc_buffer(dev->pci, &buf->risc,
++				       sgt->sgl,
++				       0, buf->bpl * (core->height >> 1),
++				       buf->bpl, 0,
++				       core->height >> 1);
+ 		break;
+ 	case V4L2_FIELD_SEQ_BT:
+-		cx88_risc_buffer(dev->pci, &buf->risc,
+-				 sgt->sgl,
+-				 buf->bpl * (core->height >> 1), 0,
+-				 buf->bpl, 0,
+-				 core->height >> 1);
++		ret = cx88_risc_buffer(dev->pci, &buf->risc,
++				       sgt->sgl,
++				       buf->bpl * (core->height >> 1), 0,
++				       buf->bpl, 0,
++				       core->height >> 1);
+ 		break;
+ 	case V4L2_FIELD_INTERLACED:
+ 	default:
+-		cx88_risc_buffer(dev->pci, &buf->risc,
+-				 sgt->sgl, 0, buf->bpl,
+-				 buf->bpl, buf->bpl,
+-				 core->height >> 1);
++		ret = cx88_risc_buffer(dev->pci, &buf->risc,
++				       sgt->sgl, 0, buf->bpl,
++				       buf->bpl, buf->bpl,
++				       core->height >> 1);
+ 		break;
+ 	}
+ 	dprintk(2,
+@@ -481,7 +482,7 @@ static int buffer_prepare(struct vb2_buffer *vb)
+ 		buf, buf->vb.vb2_buf.index, __func__,
+ 		core->width, core->height, dev->fmt->depth, dev->fmt->fourcc,
+ 		(unsigned long)buf->risc.dma);
+-	return 0;
++	return ret;
+ }
+ 
+ static void buffer_finish(struct vb2_buffer *vb)
+diff --git a/drivers/media/platform/exynos4-is/fimc-is.c b/drivers/media/platform/exynos4-is/fimc-is.c
+index dc2a144cd29b6..b52d2203eac57 100644
+--- a/drivers/media/platform/exynos4-is/fimc-is.c
++++ b/drivers/media/platform/exynos4-is/fimc-is.c
+@@ -213,6 +213,7 @@ static int fimc_is_register_subdevs(struct fimc_is *is)
+ 
+ 			if (ret < 0 || index >= FIMC_IS_SENSORS_NUM) {
+ 				of_node_put(child);
++				of_node_put(i2c_bus);
+ 				return ret;
+ 			}
+ 			index++;
+diff --git a/drivers/media/platform/xilinx/xilinx-vipp.c b/drivers/media/platform/xilinx/xilinx-vipp.c
+index cc2856efea594..f2b0c490187cd 100644
+--- a/drivers/media/platform/xilinx/xilinx-vipp.c
++++ b/drivers/media/platform/xilinx/xilinx-vipp.c
+@@ -472,7 +472,7 @@ static int xvip_graph_dma_init(struct xvip_composite_device *xdev)
+ {
+ 	struct device_node *ports;
+ 	struct device_node *port;
+-	int ret;
++	int ret = 0;
+ 
+ 	ports = of_get_child_by_name(xdev->dev->of_node, "ports");
+ 	if (ports == NULL) {
+@@ -482,13 +482,14 @@ static int xvip_graph_dma_init(struct xvip_composite_device *xdev)
+ 
+ 	for_each_child_of_node(ports, port) {
+ 		ret = xvip_graph_dma_init_one(xdev, port);
+-		if (ret < 0) {
++		if (ret) {
+ 			of_node_put(port);
+-			return ret;
++			break;
+ 		}
+ 	}
+ 
+-	return 0;
++	of_node_put(ports);
++	return ret;
+ }
+ 
+ static void xvip_graph_cleanup(struct xvip_composite_device *xdev)
+diff --git a/drivers/memory/of_memory.c b/drivers/memory/of_memory.c
+index d9f5437d3bce0..1791614f324b7 100644
+--- a/drivers/memory/of_memory.c
++++ b/drivers/memory/of_memory.c
+@@ -134,6 +134,7 @@ const struct lpddr2_timings *of_get_ddr_timings(struct device_node *np_ddr,
+ 	for_each_child_of_node(np_ddr, np_tim) {
+ 		if (of_device_is_compatible(np_tim, tim_compat)) {
+ 			if (of_do_get_timings(np_tim, &timings[i])) {
++				of_node_put(np_tim);
+ 				devm_kfree(dev, timings);
+ 				goto default_timings;
+ 			}
+@@ -282,6 +283,7 @@ const struct lpddr3_timings
+ 		if (of_device_is_compatible(np_tim, tim_compat)) {
+ 			if (of_lpddr3_do_get_timings(np_tim, &timings[i])) {
+ 				devm_kfree(dev, timings);
++				of_node_put(np_tim);
+ 				goto default_timings;
+ 			}
+ 			i++;
+diff --git a/drivers/memory/pl353-smc.c b/drivers/memory/pl353-smc.c
+index b0b251bb207f3..1a6964f1ba6a7 100644
+--- a/drivers/memory/pl353-smc.c
++++ b/drivers/memory/pl353-smc.c
+@@ -416,6 +416,7 @@ static int pl353_smc_probe(struct amba_device *adev, const struct amba_id *id)
+ 	if (init)
+ 		init(adev, child);
+ 	of_platform_device_create(child, NULL, &adev->dev);
++	of_node_put(child);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/mfd/fsl-imx25-tsadc.c b/drivers/mfd/fsl-imx25-tsadc.c
+index a016b39fe9b0e..5f1f6f3a0696a 100644
+--- a/drivers/mfd/fsl-imx25-tsadc.c
++++ b/drivers/mfd/fsl-imx25-tsadc.c
+@@ -69,7 +69,7 @@ static int mx25_tsadc_setup_irq(struct platform_device *pdev,
+ 	int irq;
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0)
++	if (irq < 0)
+ 		return irq;
+ 
+ 	tsadc->domain = irq_domain_add_simple(np, 2, 0, &mx25_tsadc_domain_ops,
+@@ -84,6 +84,19 @@ static int mx25_tsadc_setup_irq(struct platform_device *pdev,
+ 	return 0;
+ }
+ 
++static int mx25_tsadc_unset_irq(struct platform_device *pdev)
++{
++	struct mx25_tsadc *tsadc = platform_get_drvdata(pdev);
++	int irq = platform_get_irq(pdev, 0);
++
++	if (irq >= 0) {
++		irq_set_chained_handler_and_data(irq, NULL, NULL);
++		irq_domain_remove(tsadc->domain);
++	}
++
++	return 0;
++}
++
+ static void mx25_tsadc_setup_clk(struct platform_device *pdev,
+ 				 struct mx25_tsadc *tsadc)
+ {
+@@ -171,18 +184,21 @@ static int mx25_tsadc_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, tsadc);
+ 
+-	return devm_of_platform_populate(dev);
++	ret = devm_of_platform_populate(dev);
++	if (ret)
++		goto err_irq;
++
++	return 0;
++
++err_irq:
++	mx25_tsadc_unset_irq(pdev);
++
++	return ret;
+ }
+ 
+ static int mx25_tsadc_remove(struct platform_device *pdev)
+ {
+-	struct mx25_tsadc *tsadc = platform_get_drvdata(pdev);
+-	int irq = platform_get_irq(pdev, 0);
+-
+-	if (irq) {
+-		irq_set_chained_handler_and_data(irq, NULL, NULL);
+-		irq_domain_remove(tsadc->domain);
+-	}
++	mx25_tsadc_unset_irq(pdev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mfd/intel_soc_pmic_core.c b/drivers/mfd/intel_soc_pmic_core.c
+index ddd64f9e3341e..926653e1f6033 100644
+--- a/drivers/mfd/intel_soc_pmic_core.c
++++ b/drivers/mfd/intel_soc_pmic_core.c
+@@ -95,6 +95,7 @@ static int intel_soc_pmic_i2c_probe(struct i2c_client *i2c,
+ 	return 0;
+ 
+ err_del_irq_chip:
++	pwm_remove_table(crc_pwm_lookup, ARRAY_SIZE(crc_pwm_lookup));
+ 	regmap_del_irq_chip(pmic->irq, pmic->irq_chip_data);
+ 	return ret;
+ }
+diff --git a/drivers/mfd/lp8788-irq.c b/drivers/mfd/lp8788-irq.c
+index 348439a3fbbd4..39006297f3d27 100644
+--- a/drivers/mfd/lp8788-irq.c
++++ b/drivers/mfd/lp8788-irq.c
+@@ -175,6 +175,7 @@ int lp8788_irq_init(struct lp8788 *lp, int irq)
+ 				IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ 				"lp8788-irq", irqd);
+ 	if (ret) {
++		irq_domain_remove(lp->irqdm);
+ 		dev_err(lp->dev, "failed to create a thread for IRQ_N\n");
+ 		return ret;
+ 	}
+@@ -188,4 +189,6 @@ void lp8788_irq_exit(struct lp8788 *lp)
+ {
+ 	if (lp->irq)
+ 		free_irq(lp->irq, lp->irqdm);
++	if (lp->irqdm)
++		irq_domain_remove(lp->irqdm);
+ }
+diff --git a/drivers/mfd/lp8788.c b/drivers/mfd/lp8788.c
+index 768d556b3fe98..5c3d642c8e3a3 100644
+--- a/drivers/mfd/lp8788.c
++++ b/drivers/mfd/lp8788.c
+@@ -195,8 +195,16 @@ static int lp8788_probe(struct i2c_client *cl, const struct i2c_device_id *id)
+ 	if (ret)
+ 		return ret;
+ 
+-	return mfd_add_devices(lp->dev, -1, lp8788_devs,
+-			       ARRAY_SIZE(lp8788_devs), NULL, 0, NULL);
++	ret = mfd_add_devices(lp->dev, -1, lp8788_devs,
++			      ARRAY_SIZE(lp8788_devs), NULL, 0, NULL);
++	if (ret)
++		goto err_exit_irq;
++
++	return 0;
++
++err_exit_irq:
++	lp8788_irq_exit(lp);
++	return ret;
+ }
+ 
+ static int lp8788_remove(struct i2c_client *cl)
+diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
+index 6d2f4a0a901dc..37ad72d8cde2a 100644
+--- a/drivers/mfd/sm501.c
++++ b/drivers/mfd/sm501.c
+@@ -1720,7 +1720,12 @@ static struct platform_driver sm501_plat_driver = {
+ 
+ static int __init sm501_base_init(void)
+ {
+-	platform_driver_register(&sm501_plat_driver);
++	int ret;
++
++	ret = platform_driver_register(&sm501_plat_driver);
++	if (ret < 0)
++		return ret;
++
+ 	return pci_register_driver(&sm501_pci_driver);
+ }
+ 
+diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c
+index c742ab02ae186..e094809b54ff5 100644
+--- a/drivers/misc/ocxl/file.c
++++ b/drivers/misc/ocxl/file.c
+@@ -259,6 +259,8 @@ static long afu_ioctl(struct file *file, unsigned int cmd,
+ 		if (IS_ERR(ev_ctx))
+ 			return PTR_ERR(ev_ctx);
+ 		rc = ocxl_irq_set_handler(ctx, irq_id, irq_handler, irq_free, ev_ctx);
++		if (rc)
++			eventfd_ctx_put(ev_ctx);
+ 		break;
+ 
+ 	case OCXL_IOCTL_GET_METADATA:
+diff --git a/drivers/mmc/host/au1xmmc.c b/drivers/mmc/host/au1xmmc.c
+index bd00515fbaba2..56a3bf51d446d 100644
+--- a/drivers/mmc/host/au1xmmc.c
++++ b/drivers/mmc/host/au1xmmc.c
+@@ -1097,8 +1097,9 @@ out5:
+ 	if (host->platdata && host->platdata->cd_setup &&
+ 	    !(mmc->caps & MMC_CAP_NEEDS_POLL))
+ 		host->platdata->cd_setup(mmc, 0);
+-out_clk:
++
+ 	clk_disable_unprepare(host->clk);
++out_clk:
+ 	clk_put(host->clk);
+ out_irq:
+ 	free_irq(host->irq, host);
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index 192cb8b20b472..ad2e73f9a58f4 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -2182,6 +2182,7 @@ static const struct sdhci_msm_variant_info sm8250_sdhci_var = {
+ static const struct of_device_id sdhci_msm_dt_match[] = {
+ 	{.compatible = "qcom,sdhci-msm-v4", .data = &sdhci_msm_mci_var},
+ 	{.compatible = "qcom,sdhci-msm-v5", .data = &sdhci_msm_v5_var},
++	{.compatible = "qcom,sdm670-sdhci", .data = &sdm845_sdhci_var},
+ 	{.compatible = "qcom,sdm845-sdhci", .data = &sdm845_sdhci_var},
+ 	{.compatible = "qcom,sm8250-sdhci", .data = &sm8250_sdhci_var},
+ 	{.compatible = "qcom,sc7180-sdhci", .data = &sdm845_sdhci_var},
+diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c
+index 9cd8862e6cbd0..8575f4537e57b 100644
+--- a/drivers/mmc/host/sdhci-sprd.c
++++ b/drivers/mmc/host/sdhci-sprd.c
+@@ -296,7 +296,7 @@ static unsigned int sdhci_sprd_get_max_clock(struct sdhci_host *host)
+ 
+ static unsigned int sdhci_sprd_get_min_clock(struct sdhci_host *host)
+ {
+-	return 400000;
++	return 100000;
+ }
+ 
+ static void sdhci_sprd_set_uhs_signaling(struct sdhci_host *host,
+diff --git a/drivers/mmc/host/wmt-sdmmc.c b/drivers/mmc/host/wmt-sdmmc.c
+index cf10949fb0acc..8df722ec57edc 100644
+--- a/drivers/mmc/host/wmt-sdmmc.c
++++ b/drivers/mmc/host/wmt-sdmmc.c
+@@ -849,7 +849,7 @@ static int wmt_mci_probe(struct platform_device *pdev)
+ 	if (IS_ERR(priv->clk_sdmmc)) {
+ 		dev_err(&pdev->dev, "Error getting clock\n");
+ 		ret = PTR_ERR(priv->clk_sdmmc);
+-		goto fail5;
++		goto fail5_and_a_half;
+ 	}
+ 
+ 	ret = clk_prepare_enable(priv->clk_sdmmc);
+@@ -866,6 +866,9 @@ static int wmt_mci_probe(struct platform_device *pdev)
+ 	return 0;
+ fail6:
+ 	clk_put(priv->clk_sdmmc);
++fail5_and_a_half:
++	dma_free_coherent(&pdev->dev, mmc->max_blk_count * 16,
++			  priv->dma_desc_buffer, priv->dma_desc_device_addr);
+ fail5:
+ 	free_irq(dma_irq, priv);
+ fail4:
+diff --git a/drivers/mtd/devices/docg3.c b/drivers/mtd/devices/docg3.c
+index a030792115bc2..fa42473d04c1b 100644
+--- a/drivers/mtd/devices/docg3.c
++++ b/drivers/mtd/devices/docg3.c
+@@ -1975,9 +1975,14 @@ static int __init docg3_probe(struct platform_device *pdev)
+ 		dev_err(dev, "No I/O memory resource defined\n");
+ 		return ret;
+ 	}
+-	base = devm_ioremap(dev, ress->start, DOC_IOSPACE_SIZE);
+ 
+ 	ret = -ENOMEM;
++	base = devm_ioremap(dev, ress->start, DOC_IOSPACE_SIZE);
++	if (!base) {
++		dev_err(dev, "devm_ioremap dev failed\n");
++		return ret;
++	}
++
+ 	cascade = devm_kcalloc(dev, DOC_MAX_NBFLOORS, sizeof(*cascade),
+ 			       GFP_KERNEL);
+ 	if (!cascade)
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index 2228c34f3deab..0d84f8156d8e4 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -405,6 +405,7 @@ static int atmel_nand_dma_transfer(struct atmel_nand_controller *nc,
+ 
+ 	dma_async_issue_pending(nc->dmac);
+ 	wait_for_completion(&finished);
++	dma_unmap_single(nc->dev, buf_dma, len, dir);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/mtd/nand/raw/fsl_elbc_nand.c b/drivers/mtd/nand/raw/fsl_elbc_nand.c
+index b2af7f81fdf8f..c174b6dc3c6ba 100644
+--- a/drivers/mtd/nand/raw/fsl_elbc_nand.c
++++ b/drivers/mtd/nand/raw/fsl_elbc_nand.c
+@@ -727,36 +727,40 @@ static int fsl_elbc_attach_chip(struct nand_chip *chip)
+ 	struct fsl_lbc_regs __iomem *lbc = ctrl->regs;
+ 	unsigned int al;
+ 
+-	switch (chip->ecc.engine_type) {
+ 	/*
+ 	 * if ECC was not chosen in DT, decide whether to use HW or SW ECC from
+ 	 * CS Base Register
+ 	 */
+-	case NAND_ECC_ENGINE_TYPE_NONE:
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_INVALID) {
+ 		/* If CS Base Register selects full hardware ECC then use it */
+ 		if ((in_be32(&lbc->bank[priv->bank].br) & BR_DECC) ==
+ 		    BR_DECC_CHK_GEN) {
+-			chip->ecc.read_page = fsl_elbc_read_page;
+-			chip->ecc.write_page = fsl_elbc_write_page;
+-			chip->ecc.write_subpage = fsl_elbc_write_subpage;
+-
+ 			chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
+-			mtd_set_ooblayout(mtd, &fsl_elbc_ooblayout_ops);
+-			chip->ecc.size = 512;
+-			chip->ecc.bytes = 3;
+-			chip->ecc.strength = 1;
+ 		} else {
+ 			/* otherwise fall back to default software ECC */
+ 			chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+ 			chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 		}
++	}
++
++	switch (chip->ecc.engine_type) {
++	/* if HW ECC was chosen, setup ecc and oob layout */
++	case NAND_ECC_ENGINE_TYPE_ON_HOST:
++		chip->ecc.read_page = fsl_elbc_read_page;
++		chip->ecc.write_page = fsl_elbc_write_page;
++		chip->ecc.write_subpage = fsl_elbc_write_subpage;
++		mtd_set_ooblayout(mtd, &fsl_elbc_ooblayout_ops);
++		chip->ecc.size = 512;
++		chip->ecc.bytes = 3;
++		chip->ecc.strength = 1;
+ 		break;
+ 
+-	/* if SW ECC was chosen in DT, we do not need to set anything here */
++	/* if none or SW ECC was chosen, we do not need to set anything here */
++	case NAND_ECC_ENGINE_TYPE_NONE:
+ 	case NAND_ECC_ENGINE_TYPE_SOFT:
++	case NAND_ECC_ENGINE_TYPE_ON_DIE:
+ 		break;
+ 
+-	/* should we also implement *_ECC_ENGINE_CONTROLLER to do as above? */
+ 	default:
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
+index 327a2257ec26d..38f490088d764 100644
+--- a/drivers/mtd/nand/raw/meson_nand.c
++++ b/drivers/mtd/nand/raw/meson_nand.c
+@@ -454,7 +454,7 @@ static int meson_nfc_ecc_correct(struct nand_chip *nand, u32 *bitflips,
+ 		if (ECC_ERR_CNT(*info) != ECC_UNCORRECTABLE) {
+ 			mtd->ecc_stats.corrected += ECC_ERR_CNT(*info);
+ 			*bitflips = max_t(u32, *bitflips, ECC_ERR_CNT(*info));
+-			*correct_bitmap |= 1 >> i;
++			*correct_bitmap |= BIT_ULL(i);
+ 			continue;
+ 		}
+ 		if ((nand->options & NAND_NEED_SCRAMBLING) &&
+@@ -800,7 +800,7 @@ static int meson_nfc_read_page_hwecc(struct nand_chip *nand, u8 *buf,
+ 			u8 *data = buf + i * ecc->size;
+ 			u8 *oob = nand->oob_poi + i * (ecc->bytes + 2);
+ 
+-			if (correct_bitmap & (1 << i))
++			if (correct_bitmap & BIT_ULL(i))
+ 				continue;
+ 			ret = nand_check_erased_ecc_chunk(data,	ecc->size,
+ 							  oob, ecc->bytes + 2,
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
+index 61e67986b625e..62958f04a2f20 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
+@@ -178,6 +178,8 @@ struct kvaser_usb_dev_cfg {
+ extern const struct kvaser_usb_dev_ops kvaser_usb_hydra_dev_ops;
+ extern const struct kvaser_usb_dev_ops kvaser_usb_leaf_dev_ops;
+ 
++void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv);
++
+ int kvaser_usb_recv_cmd(const struct kvaser_usb *dev, void *cmd, int len,
+ 			int *actual_len);
+ 
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index 416763fd1f11c..7491f85e85b30 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -453,7 +453,7 @@ static void kvaser_usb_reset_tx_urb_contexts(struct kvaser_usb_net_priv *priv)
+ /* This method might sleep. Do not call it in the atomic context
+  * of URB completions.
+  */
+-static void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv)
++void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv)
+ {
+ 	usb_kill_anchored_urbs(&priv->tx_submitted);
+ 	kvaser_usb_reset_tx_urb_contexts(priv);
+@@ -690,6 +690,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ 	init_usb_anchor(&priv->tx_submitted);
+ 	init_completion(&priv->start_comp);
+ 	init_completion(&priv->stop_comp);
++	init_completion(&priv->flush_comp);
+ 	priv->can.ctrlmode_supported = 0;
+ 
+ 	priv->dev = dev;
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+index 01d4a731b579c..5d642458bac54 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+@@ -1886,7 +1886,7 @@ static int kvaser_usb_hydra_flush_queue(struct kvaser_usb_net_priv *priv)
+ {
+ 	int err;
+ 
+-	init_completion(&priv->flush_comp);
++	reinit_completion(&priv->flush_comp);
+ 
+ 	err = kvaser_usb_hydra_send_simple_cmd(priv->dev, CMD_FLUSH_QUEUE,
+ 					       priv->channel);
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+index 5e281249ad5fe..78d52a5e8fd5d 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+@@ -309,6 +309,38 @@ struct kvaser_cmd {
+ 	} u;
+ } __packed;
+ 
++#define CMD_SIZE_ANY 0xff
++#define kvaser_fsize(field) sizeof_field(struct kvaser_cmd, field)
++
++static const u8 kvaser_usb_leaf_cmd_sizes_leaf[] = {
++	[CMD_START_CHIP_REPLY]		= kvaser_fsize(u.simple),
++	[CMD_STOP_CHIP_REPLY]		= kvaser_fsize(u.simple),
++	[CMD_GET_CARD_INFO_REPLY]	= kvaser_fsize(u.cardinfo),
++	[CMD_TX_ACKNOWLEDGE]		= kvaser_fsize(u.tx_acknowledge_header),
++	[CMD_GET_SOFTWARE_INFO_REPLY]	= kvaser_fsize(u.leaf.softinfo),
++	[CMD_RX_STD_MESSAGE]		= kvaser_fsize(u.leaf.rx_can),
++	[CMD_RX_EXT_MESSAGE]		= kvaser_fsize(u.leaf.rx_can),
++	[CMD_LEAF_LOG_MESSAGE]		= kvaser_fsize(u.leaf.log_message),
++	[CMD_CHIP_STATE_EVENT]		= kvaser_fsize(u.leaf.chip_state_event),
++	[CMD_CAN_ERROR_EVENT]		= kvaser_fsize(u.leaf.error_event),
++	/* ignored events: */
++	[CMD_FLUSH_QUEUE_REPLY]		= CMD_SIZE_ANY,
++};
++
++static const u8 kvaser_usb_leaf_cmd_sizes_usbcan[] = {
++	[CMD_START_CHIP_REPLY]		= kvaser_fsize(u.simple),
++	[CMD_STOP_CHIP_REPLY]		= kvaser_fsize(u.simple),
++	[CMD_GET_CARD_INFO_REPLY]	= kvaser_fsize(u.cardinfo),
++	[CMD_TX_ACKNOWLEDGE]		= kvaser_fsize(u.tx_acknowledge_header),
++	[CMD_GET_SOFTWARE_INFO_REPLY]	= kvaser_fsize(u.usbcan.softinfo),
++	[CMD_RX_STD_MESSAGE]		= kvaser_fsize(u.usbcan.rx_can),
++	[CMD_RX_EXT_MESSAGE]		= kvaser_fsize(u.usbcan.rx_can),
++	[CMD_CHIP_STATE_EVENT]		= kvaser_fsize(u.usbcan.chip_state_event),
++	[CMD_CAN_ERROR_EVENT]		= kvaser_fsize(u.usbcan.error_event),
++	/* ignored events: */
++	[CMD_USBCAN_CLOCK_OVERFLOW_EVENT] = CMD_SIZE_ANY,
++};
++
+ /* Summary of a kvaser error event, for a unified Leaf/Usbcan error
+  * handling. Some discrepancies between the two families exist:
+  *
+@@ -396,6 +428,43 @@ static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_32mhz = {
+ 	.bittiming_const = &kvaser_usb_flexc_bittiming_const,
+ };
+ 
++static int kvaser_usb_leaf_verify_size(const struct kvaser_usb *dev,
++				       const struct kvaser_cmd *cmd)
++{
++	/* buffer size >= cmd->len ensured by caller */
++	u8 min_size = 0;
++
++	switch (dev->driver_info->family) {
++	case KVASER_LEAF:
++		if (cmd->id < ARRAY_SIZE(kvaser_usb_leaf_cmd_sizes_leaf))
++			min_size = kvaser_usb_leaf_cmd_sizes_leaf[cmd->id];
++		break;
++	case KVASER_USBCAN:
++		if (cmd->id < ARRAY_SIZE(kvaser_usb_leaf_cmd_sizes_usbcan))
++			min_size = kvaser_usb_leaf_cmd_sizes_usbcan[cmd->id];
++		break;
++	}
++
++	if (min_size == CMD_SIZE_ANY)
++		return 0;
++
++	if (min_size) {
++		min_size += CMD_HEADER_LEN;
++		if (cmd->len >= min_size)
++			return 0;
++
++		dev_err_ratelimited(&dev->intf->dev,
++				    "Received command %u too short (size %u, needed %u)",
++				    cmd->id, cmd->len, min_size);
++		return -EIO;
++	}
++
++	dev_warn_ratelimited(&dev->intf->dev,
++			     "Unhandled command (%d, size %d)\n",
++			     cmd->id, cmd->len);
++	return -EINVAL;
++}
++
+ static void *
+ kvaser_usb_leaf_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
+ 			     const struct sk_buff *skb, int *frame_len,
+@@ -503,6 +572,9 @@ static int kvaser_usb_leaf_wait_cmd(const struct kvaser_usb *dev, u8 id,
+ end:
+ 	kfree(buf);
+ 
++	if (err == 0)
++		err = kvaser_usb_leaf_verify_size(dev, cmd);
++
+ 	return err;
+ }
+ 
+@@ -1137,6 +1209,9 @@ static void kvaser_usb_leaf_stop_chip_reply(const struct kvaser_usb *dev,
+ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
+ 					   const struct kvaser_cmd *cmd)
+ {
++	if (kvaser_usb_leaf_verify_size(dev, cmd) < 0)
++		return;
++
+ 	switch (cmd->id) {
+ 	case CMD_START_CHIP_REPLY:
+ 		kvaser_usb_leaf_start_chip_reply(dev, cmd);
+@@ -1355,9 +1430,13 @@ static int kvaser_usb_leaf_set_mode(struct net_device *netdev,
+ 
+ 	switch (mode) {
+ 	case CAN_MODE_START:
++		kvaser_usb_unlink_tx_urbs(priv);
++
+ 		err = kvaser_usb_leaf_simple_cmd_async(priv, CMD_START_CHIP);
+ 		if (err)
+ 			return err;
++
++		priv->can.state = CAN_STATE_ERROR_ACTIVE;
+ 		break;
+ 	default:
+ 		return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index 198e041d84109..4f669e7c75587 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -788,6 +788,7 @@ static void bnx2x_tpa_stop(struct bnx2x *bp, struct bnx2x_fastpath *fp,
+ 			BNX2X_ERR("skb_put is about to fail...  pad %d  len %d  rx_buf_size %d\n",
+ 				  pad, len, fp->rx_buf_size);
+ 			bnx2x_panic();
++			bnx2x_frag_free(fp, new_data);
+ 			return;
+ 		}
+ #endif
+diff --git a/drivers/net/ethernet/freescale/fs_enet/mac-fec.c b/drivers/net/ethernet/freescale/fs_enet/mac-fec.c
+index 99fe2c210d0f6..61f4b6e50d29b 100644
+--- a/drivers/net/ethernet/freescale/fs_enet/mac-fec.c
++++ b/drivers/net/ethernet/freescale/fs_enet/mac-fec.c
+@@ -98,7 +98,7 @@ static int do_pd_setup(struct fs_enet_private *fep)
+ 		return -EINVAL;
+ 
+ 	fep->fec.fecp = of_iomap(ofdev->dev.of_node, 0);
+-	if (!fep->fcc.fccp)
++	if (!fep->fec.fecp)
+ 		return -EINVAL;
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+index d825eb021b22e..e999ac2de34e8 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+@@ -1434,6 +1434,7 @@ u32 mvpp2_read(struct mvpp2 *priv, u32 offset);
+ void mvpp2_dbgfs_init(struct mvpp2 *priv, const char *name);
+ 
+ void mvpp2_dbgfs_cleanup(struct mvpp2 *priv);
++void mvpp2_dbgfs_exit(void);
+ 
+ #ifdef CONFIG_MVPP2_PTP
+ int mvpp22_tai_probe(struct device *dev, struct mvpp2 *priv);
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_debugfs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_debugfs.c
+index 4a3baa7e01424..75e83ea2a926e 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_debugfs.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_debugfs.c
+@@ -691,6 +691,13 @@ static int mvpp2_dbgfs_port_init(struct dentry *parent,
+ 	return 0;
+ }
+ 
++static struct dentry *mvpp2_root;
++
++void mvpp2_dbgfs_exit(void)
++{
++	debugfs_remove(mvpp2_root);
++}
++
+ void mvpp2_dbgfs_cleanup(struct mvpp2 *priv)
+ {
+ 	debugfs_remove_recursive(priv->dbgfs_dir);
+@@ -700,10 +707,9 @@ void mvpp2_dbgfs_cleanup(struct mvpp2 *priv)
+ 
+ void mvpp2_dbgfs_init(struct mvpp2 *priv, const char *name)
+ {
+-	struct dentry *mvpp2_dir, *mvpp2_root;
++	struct dentry *mvpp2_dir;
+ 	int ret, i;
+ 
+-	mvpp2_root = debugfs_lookup(MVPP2_DRIVER_NAME, NULL);
+ 	if (!mvpp2_root)
+ 		mvpp2_root = debugfs_create_dir(MVPP2_DRIVER_NAME, NULL);
+ 
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 542cd6f2c9bd4..68c5ed8716c84 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -7155,7 +7155,18 @@ static struct platform_driver mvpp2_driver = {
+ 	},
+ };
+ 
+-module_platform_driver(mvpp2_driver);
++static int __init mvpp2_driver_init(void)
++{
++	return platform_driver_register(&mvpp2_driver);
++}
++module_init(mvpp2_driver_init);
++
++static void __exit mvpp2_driver_exit(void)
++{
++	platform_driver_unregister(&mvpp2_driver);
++	mvpp2_dbgfs_exit();
++}
++module_exit(mvpp2_driver_exit);
+ 
+ MODULE_DESCRIPTION("Marvell PPv2 Ethernet Driver - www.marvell.com");
+ MODULE_AUTHOR("Marcin Wojtas <mw@semihalf.com>");
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 0bb5b1c786546..a526242a3e36d 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -1689,7 +1689,9 @@ static void intr_callback(struct urb *urb)
+ 			   "Stop submitting intr, status %d\n", status);
+ 		return;
+ 	case -EOVERFLOW:
+-		netif_info(tp, intr, tp->netdev, "intr status -EOVERFLOW\n");
++		if (net_ratelimit())
++			netif_info(tp, intr, tp->netdev,
++				   "intr status -EOVERFLOW\n");
+ 		goto resubmit;
+ 	/* -EPIPE:  should clear the halt */
+ 	default:
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index b61cd275fbda2..15f02bf23e9bd 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -853,11 +853,36 @@ static int ath10k_peer_delete(struct ath10k *ar, u32 vdev_id, const u8 *addr)
+ 	return 0;
+ }
+ 
++static void ath10k_peer_map_cleanup(struct ath10k *ar, struct ath10k_peer *peer)
++{
++	int peer_id, i;
++
++	lockdep_assert_held(&ar->conf_mutex);
++
++	for_each_set_bit(peer_id, peer->peer_ids,
++			 ATH10K_MAX_NUM_PEER_IDS) {
++		ar->peer_map[peer_id] = NULL;
++	}
++
++	/* Double check that peer is properly un-referenced from
++	 * the peer_map
++	 */
++	for (i = 0; i < ARRAY_SIZE(ar->peer_map); i++) {
++		if (ar->peer_map[i] == peer) {
++			ath10k_warn(ar, "removing stale peer_map entry for %pM (ptr %pK idx %d)\n",
++				    peer->addr, peer, i);
++			ar->peer_map[i] = NULL;
++		}
++	}
++
++	list_del(&peer->list);
++	kfree(peer);
++	ar->num_peers--;
++}
++
+ static void ath10k_peer_cleanup(struct ath10k *ar, u32 vdev_id)
+ {
+ 	struct ath10k_peer *peer, *tmp;
+-	int peer_id;
+-	int i;
+ 
+ 	lockdep_assert_held(&ar->conf_mutex);
+ 
+@@ -869,25 +894,7 @@ static void ath10k_peer_cleanup(struct ath10k *ar, u32 vdev_id)
+ 		ath10k_warn(ar, "removing stale peer %pM from vdev_id %d\n",
+ 			    peer->addr, vdev_id);
+ 
+-		for_each_set_bit(peer_id, peer->peer_ids,
+-				 ATH10K_MAX_NUM_PEER_IDS) {
+-			ar->peer_map[peer_id] = NULL;
+-		}
+-
+-		/* Double check that peer is properly un-referenced from
+-		 * the peer_map
+-		 */
+-		for (i = 0; i < ARRAY_SIZE(ar->peer_map); i++) {
+-			if (ar->peer_map[i] == peer) {
+-				ath10k_warn(ar, "removing stale peer_map entry for %pM (ptr %pK idx %d)\n",
+-					    peer->addr, peer, i);
+-				ar->peer_map[i] = NULL;
+-			}
+-		}
+-
+-		list_del(&peer->list);
+-		kfree(peer);
+-		ar->num_peers--;
++		ath10k_peer_map_cleanup(ar, peer);
+ 	}
+ 	spin_unlock_bh(&ar->data_lock);
+ }
+@@ -7470,10 +7477,7 @@ static int ath10k_sta_state(struct ieee80211_hw *hw,
+ 				/* Clean up the peer object as well since we
+ 				 * must have failed to do this above.
+ 				 */
+-				list_del(&peer->list);
+-				ar->peer_map[i] = NULL;
+-				kfree(peer);
+-				ar->num_peers--;
++				ath10k_peer_map_cleanup(ar, peer);
+ 			}
+ 		}
+ 		spin_unlock_bh(&ar->data_lock);
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 44282aec069d3..67faf62999ded 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -3419,6 +3419,8 @@ static int ath11k_mac_set_txbf_conf(struct ath11k_vif *arvif)
+ 	if (vht_cap & (IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE)) {
+ 		nsts = vht_cap & IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK;
+ 		nsts >>= IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT;
++		if (nsts > (ar->num_rx_chains - 1))
++			nsts = ar->num_rx_chains - 1;
+ 		value |= SM(nsts, WMI_TXBF_STS_CAP_OFFSET);
+ 	}
+ 
+@@ -3459,7 +3461,7 @@ static int ath11k_mac_set_txbf_conf(struct ath11k_vif *arvif)
+ static void ath11k_set_vht_txbf_cap(struct ath11k *ar, u32 *vht_cap)
+ {
+ 	bool subfer, subfee;
+-	int sound_dim = 0;
++	int sound_dim = 0, nsts = 0;
+ 
+ 	subfer = !!(*vht_cap & (IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE));
+ 	subfee = !!(*vht_cap & (IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE));
+@@ -3469,6 +3471,11 @@ static void ath11k_set_vht_txbf_cap(struct ath11k *ar, u32 *vht_cap)
+ 		subfer = false;
+ 	}
+ 
++	if (ar->num_rx_chains < 2) {
++		*vht_cap &= ~(IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE);
++		subfee = false;
++	}
++
+ 	/* If SU Beaformer is not set, then disable MU Beamformer Capability */
+ 	if (!subfer)
+ 		*vht_cap &= ~(IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE);
+@@ -3481,7 +3488,9 @@ static void ath11k_set_vht_txbf_cap(struct ath11k *ar, u32 *vht_cap)
+ 	sound_dim >>= IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_SHIFT;
+ 	*vht_cap &= ~IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK;
+ 
+-	/* TODO: Need to check invalid STS and Sound_dim values set by FW? */
++	nsts = (*vht_cap & IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK);
++	nsts >>= IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT;
++	*vht_cap &= ~IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK;
+ 
+ 	/* Enable Sounding Dimension Field only if SU BF is enabled */
+ 	if (subfer) {
+@@ -3493,9 +3502,15 @@ static void ath11k_set_vht_txbf_cap(struct ath11k *ar, u32 *vht_cap)
+ 		*vht_cap |= sound_dim;
+ 	}
+ 
+-	/* Use the STS advertised by FW unless SU Beamformee is not supported*/
+-	if (!subfee)
+-		*vht_cap &= ~(IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK);
++	/* Enable Beamformee STS Field only if SU BF is enabled */
++	if (subfee) {
++		if (nsts > (ar->num_rx_chains - 1))
++			nsts = ar->num_rx_chains - 1;
++
++		nsts <<= IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT;
++		nsts &=  IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK;
++		*vht_cap |= nsts;
++	}
+ }
+ 
+ static struct ieee80211_sta_vht_cap
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index 994ec48b2f669..ca05b07a45e67 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -364,33 +364,27 @@ ret:
+ }
+ 
+ static void ath9k_htc_fw_panic_report(struct htc_target *htc_handle,
+-				      struct sk_buff *skb)
++				      struct sk_buff *skb, u32 len)
+ {
+ 	uint32_t *pattern = (uint32_t *)skb->data;
+ 
+-	switch (*pattern) {
+-	case 0x33221199:
+-		{
++	if (*pattern == 0x33221199 && len >= sizeof(struct htc_panic_bad_vaddr)) {
+ 		struct htc_panic_bad_vaddr *htc_panic;
+ 		htc_panic = (struct htc_panic_bad_vaddr *) skb->data;
+ 		dev_err(htc_handle->dev, "ath: firmware panic! "
+ 			"exccause: 0x%08x; pc: 0x%08x; badvaddr: 0x%08x.\n",
+ 			htc_panic->exccause, htc_panic->pc,
+ 			htc_panic->badvaddr);
+-		break;
+-		}
+-	case 0x33221299:
+-		{
++		return;
++	}
++	if (*pattern == 0x33221299) {
+ 		struct htc_panic_bad_epid *htc_panic;
+ 		htc_panic = (struct htc_panic_bad_epid *) skb->data;
+ 		dev_err(htc_handle->dev, "ath: firmware panic! "
+ 			"bad epid: 0x%08x\n", htc_panic->epid);
+-		break;
+-		}
+-	default:
+-		dev_err(htc_handle->dev, "ath: unknown panic pattern!\n");
+-		break;
++		return;
+ 	}
++	dev_err(htc_handle->dev, "ath: unknown panic pattern!\n");
+ }
+ 
+ /*
+@@ -411,16 +405,26 @@ void ath9k_htc_rx_msg(struct htc_target *htc_handle,
+ 	if (!htc_handle || !skb)
+ 		return;
+ 
++	/* A valid message requires len >= 8.
++	 *
++	 *   sizeof(struct htc_frame_hdr) == 8
++	 *   sizeof(struct htc_ready_msg) == 8
++	 *   sizeof(struct htc_panic_bad_vaddr) == 16
++	 *   sizeof(struct htc_panic_bad_epid) == 8
++	 */
++	if (unlikely(len < sizeof(struct htc_frame_hdr)))
++		goto invalid;
+ 	htc_hdr = (struct htc_frame_hdr *) skb->data;
+ 	epid = htc_hdr->endpoint_id;
+ 
+ 	if (epid == 0x99) {
+-		ath9k_htc_fw_panic_report(htc_handle, skb);
++		ath9k_htc_fw_panic_report(htc_handle, skb, len);
+ 		kfree_skb(skb);
+ 		return;
+ 	}
+ 
+ 	if (epid < 0 || epid >= ENDPOINT_MAX) {
++invalid:
+ 		if (pipe_id != USB_REG_IN_PIPE)
+ 			dev_kfree_skb_any(skb);
+ 		else
+@@ -432,21 +436,30 @@ void ath9k_htc_rx_msg(struct htc_target *htc_handle,
+ 
+ 		/* Handle trailer */
+ 		if (htc_hdr->flags & HTC_FLAGS_RECV_TRAILER) {
+-			if (be32_to_cpu(*(__be32 *) skb->data) == 0x00C60000)
++			if (be32_to_cpu(*(__be32 *) skb->data) == 0x00C60000) {
+ 				/* Move past the Watchdog pattern */
+ 				htc_hdr = (struct htc_frame_hdr *)(skb->data + 4);
++				len -= 4;
++			}
+ 		}
+ 
+ 		/* Get the message ID */
++		if (unlikely(len < sizeof(struct htc_frame_hdr) + sizeof(__be16)))
++			goto invalid;
+ 		msg_id = (__be16 *) ((void *) htc_hdr +
+ 				     sizeof(struct htc_frame_hdr));
+ 
+ 		/* Now process HTC messages */
+ 		switch (be16_to_cpu(*msg_id)) {
+ 		case HTC_MSG_READY_ID:
++			if (unlikely(len < sizeof(struct htc_ready_msg)))
++				goto invalid;
+ 			htc_process_target_rdy(htc_handle, htc_hdr);
+ 			break;
+ 		case HTC_MSG_CONNECT_SERVICE_RESPONSE_ID:
++			if (unlikely(len < sizeof(struct htc_frame_hdr) +
++				     sizeof(struct htc_conn_svc_rspmsg)))
++				goto invalid;
+ 			htc_process_conn_rsp(htc_handle, htc_hdr);
+ 			break;
+ 		default:
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index 61039538a15bc..c8e1d505f7b5d 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -290,6 +290,7 @@ static netdev_tx_t brcmf_netdev_start_xmit(struct sk_buff *skb,
+ 	struct brcmf_pub *drvr = ifp->drvr;
+ 	struct ethhdr *eh;
+ 	int head_delta;
++	unsigned int tx_bytes = skb->len;
+ 
+ 	brcmf_dbg(DATA, "Enter, bsscfgidx=%d\n", ifp->bsscfgidx);
+ 
+@@ -364,7 +365,7 @@ done:
+ 		ndev->stats.tx_dropped++;
+ 	} else {
+ 		ndev->stats.tx_packets++;
+-		ndev->stats.tx_bytes += skb->len;
++		ndev->stats.tx_bytes += tx_bytes;
+ 	}
+ 
+ 	/* Return ok: we always eat the packet */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c
+index fabfbb0b40b0c..d0a7465be586d 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c
+@@ -158,12 +158,12 @@ static int brcmf_pno_set_random(struct brcmf_if *ifp, struct brcmf_pno_info *pi)
+ 	struct brcmf_pno_macaddr_le pfn_mac;
+ 	u8 *mac_addr = NULL;
+ 	u8 *mac_mask = NULL;
+-	int err, i;
++	int err, i, ri;
+ 
+-	for (i = 0; i < pi->n_reqs; i++)
+-		if (pi->reqs[i]->flags & NL80211_SCAN_FLAG_RANDOM_ADDR) {
+-			mac_addr = pi->reqs[i]->mac_addr;
+-			mac_mask = pi->reqs[i]->mac_addr_mask;
++	for (ri = 0; ri < pi->n_reqs; ri++)
++		if (pi->reqs[ri]->flags & NL80211_SCAN_FLAG_RANDOM_ADDR) {
++			mac_addr = pi->reqs[ri]->mac_addr;
++			mac_mask = pi->reqs[ri]->mac_addr_mask;
+ 			break;
+ 		}
+ 
+@@ -185,7 +185,7 @@ static int brcmf_pno_set_random(struct brcmf_if *ifp, struct brcmf_pno_info *pi)
+ 	pfn_mac.mac[0] |= 0x02;
+ 
+ 	brcmf_dbg(SCAN, "enabling random mac: reqid=%llu mac=%pM\n",
+-		  pi->reqs[i]->reqid, pfn_mac.mac);
++		  pi->reqs[ri]->reqid, pfn_mac.mac);
+ 	err = brcmf_fil_iovar_data_set(ifp, "pfn_macaddr", &pfn_mac,
+ 				       sizeof(pfn_mac));
+ 	if (err)
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
+index fed6d21cd6ce1..4bdd3a95f2d21 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
+@@ -4151,7 +4151,10 @@ static void rt2800_config_channel(struct rt2x00_dev *rt2x00dev,
+ 		rt2800_bbp_write(rt2x00dev, 62, 0x37 - rt2x00dev->lna_gain);
+ 		rt2800_bbp_write(rt2x00dev, 63, 0x37 - rt2x00dev->lna_gain);
+ 		rt2800_bbp_write(rt2x00dev, 64, 0x37 - rt2x00dev->lna_gain);
+-		rt2800_bbp_write(rt2x00dev, 86, 0);
++		if (rt2x00_rt(rt2x00dev, RT6352))
++			rt2800_bbp_write(rt2x00dev, 86, 0x38);
++		else
++			rt2800_bbp_write(rt2x00dev, 86, 0);
+ 	}
+ 
+ 	if (rf->channel <= 14) {
+@@ -4352,7 +4355,8 @@ static void rt2800_config_channel(struct rt2x00_dev *rt2x00dev,
+ 		reg = (rf->channel <= 14 ? 0x1c : 0x24) + 2*rt2x00dev->lna_gain;
+ 		rt2800_bbp_write_with_rx_chain(rt2x00dev, 66, reg);
+ 
+-		rt2800_iq_calibrate(rt2x00dev, rf->channel);
++		if (rt2x00_rt(rt2x00dev, RT5592))
++			rt2800_iq_calibrate(rt2x00dev, rf->channel);
+ 	}
+ 
+ 	bbp = rt2800_bbp_read(rt2x00dev, 4);
+@@ -5625,7 +5629,8 @@ static inline void rt2800_set_vgc(struct rt2x00_dev *rt2x00dev,
+ 	if (qual->vgc_level != vgc_level) {
+ 		if (rt2x00_rt(rt2x00dev, RT3572) ||
+ 		    rt2x00_rt(rt2x00dev, RT3593) ||
+-		    rt2x00_rt(rt2x00dev, RT3883)) {
++		    rt2x00_rt(rt2x00dev, RT3883) ||
++		    rt2x00_rt(rt2x00dev, RT6352)) {
+ 			rt2800_bbp_write_with_rx_chain(rt2x00dev, 66,
+ 						       vgc_level);
+ 		} else if (rt2x00_rt(rt2x00dev, RT5592)) {
+@@ -5848,7 +5853,7 @@ static int rt2800_init_registers(struct rt2x00_dev *rt2x00dev)
+ 		rt2800_register_write(rt2x00dev, TX_SW_CFG0, 0x00000404);
+ 	} else if (rt2x00_rt(rt2x00dev, RT6352)) {
+ 		rt2800_register_write(rt2x00dev, TX_SW_CFG0, 0x00000401);
+-		rt2800_register_write(rt2x00dev, TX_SW_CFG1, 0x000C0000);
++		rt2800_register_write(rt2x00dev, TX_SW_CFG1, 0x000C0001);
+ 		rt2800_register_write(rt2x00dev, TX_SW_CFG2, 0x00000000);
+ 		rt2800_register_write(rt2x00dev, TX_ALC_VGA3, 0x00000000);
+ 		rt2800_register_write(rt2x00dev, TX0_BB_GAIN_ATTEN, 0x0);
+@@ -6110,6 +6115,27 @@ static int rt2800_init_registers(struct rt2x00_dev *rt2x00dev)
+ 		reg = rt2800_register_read(rt2x00dev, US_CYC_CNT);
+ 		rt2x00_set_field32(&reg, US_CYC_CNT_CLOCK_CYCLE, 125);
+ 		rt2800_register_write(rt2x00dev, US_CYC_CNT, reg);
++	} else if (rt2x00_is_soc(rt2x00dev)) {
++		struct clk *clk = clk_get_sys("bus", NULL);
++		int rate;
++
++		if (IS_ERR(clk)) {
++			clk = clk_get_sys("cpu", NULL);
++
++			if (IS_ERR(clk)) {
++				rate = 125;
++			} else {
++				rate = clk_get_rate(clk) / 3000000;
++				clk_put(clk);
++			}
++		} else {
++			rate = clk_get_rate(clk) / 1000000;
++			clk_put(clk);
++		}
++
++		reg = rt2800_register_read(rt2x00dev, US_CYC_CNT);
++		rt2x00_set_field32(&reg, US_CYC_CNT_CLOCK_CYCLE, rate);
++		rt2800_register_write(rt2x00dev, US_CYC_CNT, reg);
+ 	}
+ 
+ 	reg = rt2800_register_read(rt2x00dev, HT_FBK_CFG0);
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 0d374a2948406..e34cd6fed7e88 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -1874,13 +1874,6 @@ static int rtl8xxxu_read_efuse(struct rtl8xxxu_priv *priv)
+ 
+ 		/* We have 8 bits to indicate validity */
+ 		map_addr = offset * 8;
+-		if (map_addr >= EFUSE_MAP_LEN) {
+-			dev_warn(dev, "%s: Illegal map_addr (%04x), "
+-				 "efuse corrupt!\n",
+-				 __func__, map_addr);
+-			ret = -EINVAL;
+-			goto exit;
+-		}
+ 		for (i = 0; i < EFUSE_MAX_WORD_UNIT; i++) {
+ 			/* Check word enable condition in the section */
+ 			if (word_mask & BIT(i)) {
+@@ -1891,6 +1884,13 @@ static int rtl8xxxu_read_efuse(struct rtl8xxxu_priv *priv)
+ 			ret = rtl8xxxu_read_efuse8(priv, efuse_addr++, &val8);
+ 			if (ret)
+ 				goto exit;
++			if (map_addr >= EFUSE_MAP_LEN - 1) {
++				dev_warn(dev, "%s: Illegal map_addr (%04x), "
++					 "efuse corrupt!\n",
++					 __func__, map_addr);
++				ret = -EINVAL;
++				goto exit;
++			}
+ 			priv->efuse_wifi.raw[map_addr++] = val8;
+ 
+ 			ret = rtl8xxxu_read_efuse8(priv, efuse_addr++, &val8);
+@@ -2925,12 +2925,12 @@ bool rtl8xxxu_gen2_simularity_compare(struct rtl8xxxu_priv *priv,
+ 		}
+ 
+ 		if (!(simubitmap & 0x30) && priv->tx_paths > 1) {
+-			/* path B RX OK */
++			/* path B TX OK */
+ 			for (i = 4; i < 6; i++)
+ 				result[3][i] = result[c1][i];
+ 		}
+ 
+-		if (!(simubitmap & 0x30) && priv->tx_paths > 1) {
++		if (!(simubitmap & 0xc0) && priv->tx_paths > 1) {
+ 			/* path B RX OK */
+ 			for (i = 6; i < 8; i++)
+ 				result[3][i] = result[c1][i];
+@@ -4338,15 +4338,14 @@ void rtl8xxxu_gen2_update_rate_mask(struct rtl8xxxu_priv *priv,
+ 	h2c.b_macid_cfg.ramask2 = (ramask >> 16) & 0xff;
+ 	h2c.b_macid_cfg.ramask3 = (ramask >> 24) & 0xff;
+ 
+-	h2c.ramask.arg = 0x80;
+ 	h2c.b_macid_cfg.data1 = rateid;
+ 	if (sgi)
+ 		h2c.b_macid_cfg.data1 |= BIT(7);
+ 
+ 	h2c.b_macid_cfg.data2 = bw;
+ 
+-	dev_dbg(&priv->udev->dev, "%s: rate mask %08x, arg %02x, size %zi\n",
+-		__func__, ramask, h2c.ramask.arg, sizeof(h2c.b_macid_cfg));
++	dev_dbg(&priv->udev->dev, "%s: rate mask %08x, rateid %02x, sgi %d, size %zi\n",
++		__func__, ramask, rateid, sgi, sizeof(h2c.b_macid_cfg));
+ 	rtl8xxxu_gen2_h2c_cmd(priv, &h2c, sizeof(h2c.b_macid_cfg));
+ }
+ 
+@@ -4508,6 +4507,53 @@ rtl8xxxu_wireless_mode(struct ieee80211_hw *hw, struct ieee80211_sta *sta)
+ 	return network_type;
+ }
+ 
++static void rtl8xxxu_set_aifs(struct rtl8xxxu_priv *priv, u8 slot_time)
++{
++	u32 reg_edca_param[IEEE80211_NUM_ACS] = {
++		[IEEE80211_AC_VO] = REG_EDCA_VO_PARAM,
++		[IEEE80211_AC_VI] = REG_EDCA_VI_PARAM,
++		[IEEE80211_AC_BE] = REG_EDCA_BE_PARAM,
++		[IEEE80211_AC_BK] = REG_EDCA_BK_PARAM,
++	};
++	u32 val32;
++	u16 wireless_mode = 0;
++	u8 aifs, aifsn, sifs;
++	int i;
++
++	if (priv->vif) {
++		struct ieee80211_sta *sta;
++
++		rcu_read_lock();
++		sta = ieee80211_find_sta(priv->vif, priv->vif->bss_conf.bssid);
++		if (sta)
++			wireless_mode = rtl8xxxu_wireless_mode(priv->hw, sta);
++		rcu_read_unlock();
++	}
++
++	if (priv->hw->conf.chandef.chan->band == NL80211_BAND_5GHZ ||
++	    (wireless_mode & WIRELESS_MODE_N_24G))
++		sifs = 16;
++	else
++		sifs = 10;
++
++	for (i = 0; i < IEEE80211_NUM_ACS; i++) {
++		val32 = rtl8xxxu_read32(priv, reg_edca_param[i]);
++
++		/* It was set in conf_tx. */
++		aifsn = val32 & 0xff;
++
++		/* aifsn not set yet or already fixed */
++		if (aifsn < 2 || aifsn > 15)
++			continue;
++
++		aifs = aifsn * slot_time + sifs;
++
++		val32 &= ~0xff;
++		val32 |= aifs;
++		rtl8xxxu_write32(priv, reg_edca_param[i], val32);
++	}
++}
++
+ static void
+ rtl8xxxu_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 			  struct ieee80211_bss_conf *bss_conf, u32 changed)
+@@ -4593,6 +4639,8 @@ rtl8xxxu_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 		else
+ 			val8 = 20;
+ 		rtl8xxxu_write8(priv, REG_SLOT, val8);
++
++		rtl8xxxu_set_aifs(priv, val8);
+ 	}
+ 
+ 	if (changed & BSS_CHANGED_BSSID) {
+@@ -4984,6 +5032,8 @@ static void rtl8xxxu_tx(struct ieee80211_hw *hw,
+ 	if (control && control->sta)
+ 		sta = control->sta;
+ 
++	queue = rtl8xxxu_queue_select(hw, skb);
++
+ 	tx_desc = skb_push(skb, tx_desc_size);
+ 
+ 	memset(tx_desc, 0, tx_desc_size);
+@@ -4996,7 +5046,6 @@ static void rtl8xxxu_tx(struct ieee80211_hw *hw,
+ 	    is_broadcast_ether_addr(ieee80211_get_DA(hdr)))
+ 		tx_desc->txdw0 |= TXDESC_BROADMULTICAST;
+ 
+-	queue = rtl8xxxu_queue_select(hw, skb);
+ 	tx_desc->txdw1 = cpu_to_le32(queue << TXDESC_QUEUE_SHIFT);
+ 
+ 	if (tx_info->control.hw_key) {
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 265d9199b657f..e9c13804760e7 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2949,7 +2949,6 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+ 	nvme_init_subnqn(subsys, ctrl, id);
+ 	memcpy(subsys->serial, id->sn, sizeof(subsys->serial));
+ 	memcpy(subsys->model, id->mn, sizeof(subsys->model));
+-	memcpy(subsys->firmware_rev, id->fr, sizeof(subsys->firmware_rev));
+ 	subsys->vendor_id = le16_to_cpu(id->vid);
+ 	subsys->cmic = id->cmic;
+ 	subsys->awupf = le16_to_cpu(id->awupf);
+@@ -3110,6 +3109,8 @@ int nvme_init_identify(struct nvme_ctrl *ctrl)
+ 				ctrl->quirks |= core_quirks[i].quirks;
+ 		}
+ 	}
++	memcpy(ctrl->subsys->firmware_rev, id->fr,
++	       sizeof(ctrl->subsys->firmware_rev));
+ 
+ 	if (force_apst && (ctrl->quirks & NVME_QUIRK_NO_DEEPEST_PS)) {
+ 		dev_warn(ctrl->device, "forcibly allowing all power states due to nvme_core.force_apst -- use at your own risk\n");
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index ce129655ef0a3..65f4bf8806087 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2624,6 +2624,8 @@ static void nvme_reset_work(struct work_struct *work)
+ 	if (result)
+ 		goto out_unlock;
+ 
++	dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
++
+ 	/*
+ 	 * Limit the max command size to prevent iod->sg allocations going
+ 	 * over a single page.
+@@ -2636,7 +2638,6 @@ static void nvme_reset_work(struct work_struct *work)
+ 	 * Don't limit the IOMMU merged segment size.
+ 	 */
+ 	dma_set_max_seg_size(dev->dev, 0xffffffff);
+-	dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
+ 
+ 	mutex_unlock(&dev->shutdown_lock);
+ 
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index e3e35b9bd6846..2ddbd4f4f6281 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -922,10 +922,17 @@ static int nvmet_tcp_handle_h2c_data_pdu(struct nvmet_tcp_queue *queue)
+ 	struct nvme_tcp_data_pdu *data = &queue->pdu.data;
+ 	struct nvmet_tcp_cmd *cmd;
+ 
+-	if (likely(queue->nr_cmds))
++	if (likely(queue->nr_cmds)) {
++		if (unlikely(data->ttag >= queue->nr_cmds)) {
++			pr_err("queue %d: received out of bound ttag %u, nr_cmds %u\n",
++				queue->idx, data->ttag, queue->nr_cmds);
++			nvmet_tcp_fatal_error(queue);
++			return -EPROTO;
++		}
+ 		cmd = &queue->cmds[data->ttag];
+-	else
++	} else {
+ 		cmd = &queue->connect;
++	}
+ 
+ 	if (le32_to_cpu(data->data_offset) != cmd->rbytes_done) {
+ 		pr_err("ttag %u unexpected data offset %u (expected %u)\n",
+diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c
+index 7f1acb3918d0c..875d50c16f19d 100644
+--- a/drivers/pci/setup-res.c
++++ b/drivers/pci/setup-res.c
+@@ -210,6 +210,17 @@ static int pci_revert_fw_address(struct resource *res, struct pci_dev *dev,
+ 
+ 	root = pci_find_parent_resource(dev, res);
+ 	if (!root) {
++		/*
++		 * If dev is behind a bridge, accesses will only reach it
++		 * if res is inside the relevant bridge window.
++		 */
++		if (pci_upstream_bridge(dev))
++			return -ENXIO;
++
++		/*
++		 * On the root bus, assume the host bridge will forward
++		 * everything.
++		 */
+ 		if (res->flags & IORESOURCE_IO)
+ 			root = &ioport_resource;
+ 		else
+diff --git a/drivers/phy/qualcomm/phy-qcom-usb-hsic.c b/drivers/phy/qualcomm/phy-qcom-usb-hsic.c
+index 04d18d52f700d..d4741c2dbbb56 100644
+--- a/drivers/phy/qualcomm/phy-qcom-usb-hsic.c
++++ b/drivers/phy/qualcomm/phy-qcom-usb-hsic.c
+@@ -54,8 +54,10 @@ static int qcom_usb_hsic_phy_power_on(struct phy *phy)
+ 
+ 	/* Configure pins for HSIC functionality */
+ 	pins_default = pinctrl_lookup_state(uphy->pctl, PINCTRL_STATE_DEFAULT);
+-	if (IS_ERR(pins_default))
+-		return PTR_ERR(pins_default);
++	if (IS_ERR(pins_default)) {
++		ret = PTR_ERR(pins_default);
++		goto err_ulpi;
++	}
+ 
+ 	ret = pinctrl_select_state(uphy->pctl, pins_default);
+ 	if (ret)
+diff --git a/drivers/platform/chrome/chromeos_laptop.c b/drivers/platform/chrome/chromeos_laptop.c
+index 472a03daa8693..109c191d35cfc 100644
+--- a/drivers/platform/chrome/chromeos_laptop.c
++++ b/drivers/platform/chrome/chromeos_laptop.c
+@@ -718,6 +718,7 @@ static int __init
+ chromeos_laptop_prepare_i2c_peripherals(struct chromeos_laptop *cros_laptop,
+ 					const struct chromeos_laptop *src)
+ {
++	struct i2c_peripheral *i2c_peripherals;
+ 	struct i2c_peripheral *i2c_dev;
+ 	struct i2c_board_info *info;
+ 	int i;
+@@ -726,17 +727,15 @@ chromeos_laptop_prepare_i2c_peripherals(struct chromeos_laptop *cros_laptop,
+ 	if (!src->num_i2c_peripherals)
+ 		return 0;
+ 
+-	cros_laptop->i2c_peripherals = kmemdup(src->i2c_peripherals,
+-					       src->num_i2c_peripherals *
+-						sizeof(*src->i2c_peripherals),
+-					       GFP_KERNEL);
+-	if (!cros_laptop->i2c_peripherals)
++	i2c_peripherals = kmemdup(src->i2c_peripherals,
++					      src->num_i2c_peripherals *
++					  sizeof(*src->i2c_peripherals),
++					  GFP_KERNEL);
++	if (!i2c_peripherals)
+ 		return -ENOMEM;
+ 
+-	cros_laptop->num_i2c_peripherals = src->num_i2c_peripherals;
+-
+-	for (i = 0; i < cros_laptop->num_i2c_peripherals; i++) {
+-		i2c_dev = &cros_laptop->i2c_peripherals[i];
++	for (i = 0; i < src->num_i2c_peripherals; i++) {
++		i2c_dev = &i2c_peripherals[i];
+ 		info = &i2c_dev->board_info;
+ 
+ 		error = chromeos_laptop_setup_irq(i2c_dev);
+@@ -754,16 +753,19 @@ chromeos_laptop_prepare_i2c_peripherals(struct chromeos_laptop *cros_laptop,
+ 		}
+ 	}
+ 
++	cros_laptop->i2c_peripherals = i2c_peripherals;
++	cros_laptop->num_i2c_peripherals = src->num_i2c_peripherals;
++
+ 	return 0;
+ 
+ err_out:
+ 	while (--i >= 0) {
+-		i2c_dev = &cros_laptop->i2c_peripherals[i];
++		i2c_dev = &i2c_peripherals[i];
+ 		info = &i2c_dev->board_info;
+ 		if (info->properties)
+ 			property_entries_free(info->properties);
+ 	}
+-	kfree(cros_laptop->i2c_peripherals);
++	kfree(i2c_peripherals);
+ 	return error;
+ }
+ 
+diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c
+index c4de8c4db1930..5a622666a0755 100644
+--- a/drivers/platform/chrome/cros_ec.c
++++ b/drivers/platform/chrome/cros_ec.c
+@@ -332,10 +332,16 @@ EXPORT_SYMBOL(cros_ec_suspend);
+ 
+ static void cros_ec_report_events_during_suspend(struct cros_ec_device *ec_dev)
+ {
++	bool wake_event;
++
+ 	while (ec_dev->mkbp_event_supported &&
+-	       cros_ec_get_next_event(ec_dev, NULL, NULL) > 0)
++	       cros_ec_get_next_event(ec_dev, &wake_event, NULL) > 0) {
+ 		blocking_notifier_call_chain(&ec_dev->event_notifier,
+ 					     1, ec_dev);
++
++		if (wake_event && device_may_wakeup(ec_dev->dev))
++			pm_wakeup_event(ec_dev->dev, 0);
++	}
+ }
+ 
+ /**
+diff --git a/drivers/platform/chrome/cros_ec_chardev.c b/drivers/platform/chrome/cros_ec_chardev.c
+index fd33de546aee0..0de7c255254e0 100644
+--- a/drivers/platform/chrome/cros_ec_chardev.c
++++ b/drivers/platform/chrome/cros_ec_chardev.c
+@@ -327,6 +327,9 @@ static long cros_ec_chardev_ioctl_readmem(struct cros_ec_dev *ec,
+ 	if (copy_from_user(&s_mem, arg, sizeof(s_mem)))
+ 		return -EFAULT;
+ 
++	if (s_mem.bytes > sizeof(s_mem.buffer))
++		return -EINVAL;
++
+ 	num = ec_dev->cmd_readmem(ec_dev, s_mem.offset, s_mem.bytes,
+ 				  s_mem.buffer);
+ 	if (num <= 0)
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index 3a2a78ff33304..8ffbf92ec1e07 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -748,6 +748,7 @@ int cros_ec_get_next_event(struct cros_ec_device *ec_dev,
+ 	u8 event_type;
+ 	u32 host_event;
+ 	int ret;
++	u32 ver_mask;
+ 
+ 	/*
+ 	 * Default value for wake_event.
+@@ -769,6 +770,37 @@ int cros_ec_get_next_event(struct cros_ec_device *ec_dev,
+ 		return get_keyboard_state_event(ec_dev);
+ 
+ 	ret = get_next_event(ec_dev);
++	/*
++	 * -ENOPROTOOPT is returned when EC returns EC_RES_INVALID_VERSION.
++	 * This can occur when EC based device (e.g. Fingerprint MCU) jumps to
++	 * the RO image which doesn't support newer version of the command. In
++	 * this case we will attempt to update maximum supported version of the
++	 * EC_CMD_GET_NEXT_EVENT.
++	 */
++	if (ret == -ENOPROTOOPT) {
++		dev_dbg(ec_dev->dev,
++			"GET_NEXT_EVENT returned invalid version error.\n");
++		ret = cros_ec_get_host_command_version_mask(ec_dev,
++							EC_CMD_GET_NEXT_EVENT,
++							&ver_mask);
++		if (ret < 0 || ver_mask == 0)
++			/*
++			 * Do not change the MKBP supported version if we can't
++			 * obtain supported version correctly. Please note that
++			 * calling EC_CMD_GET_NEXT_EVENT returned
++			 * EC_RES_INVALID_VERSION which means that the command
++			 * is present.
++			 */
++			return -ENOPROTOOPT;
++
++		ec_dev->mkbp_event_supported = fls(ver_mask);
++		dev_dbg(ec_dev->dev, "MKBP support version changed to %u\n",
++			ec_dev->mkbp_event_supported - 1);
++
++		/* Try to get next event with new MKBP support version set. */
++		ret = get_next_event(ec_dev);
++	}
++
+ 	if (ret <= 0)
+ 		return ret;
+ 
+diff --git a/drivers/platform/x86/msi-laptop.c b/drivers/platform/x86/msi-laptop.c
+index 24ffc8e2d2d1e..0e804b6c2d242 100644
+--- a/drivers/platform/x86/msi-laptop.c
++++ b/drivers/platform/x86/msi-laptop.c
+@@ -596,11 +596,10 @@ static const struct dmi_system_id msi_dmi_table[] __initconst = {
+ 	{
+ 		.ident = "MSI S270",
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "MICRO-STAR INT'L CO.,LTD"),
++			DMI_MATCH(DMI_SYS_VENDOR, "MICRO-STAR INT"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "MS-1013"),
+ 			DMI_MATCH(DMI_PRODUCT_VERSION, "0131"),
+-			DMI_MATCH(DMI_CHASSIS_VENDOR,
+-				  "MICRO-STAR INT'L CO.,LTD")
++			DMI_MATCH(DMI_CHASSIS_VENDOR, "MICRO-STAR INT")
+ 		},
+ 		.driver_data = &quirk_old_ec_model,
+ 		.callback = dmi_check_cb
+@@ -633,8 +632,7 @@ static const struct dmi_system_id msi_dmi_table[] __initconst = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "NOTEBOOK"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "SAM2000"),
+ 			DMI_MATCH(DMI_PRODUCT_VERSION, "0131"),
+-			DMI_MATCH(DMI_CHASSIS_VENDOR,
+-				  "MICRO-STAR INT'L CO.,LTD")
++			DMI_MATCH(DMI_CHASSIS_VENDOR, "MICRO-STAR INT")
+ 		},
+ 		.driver_data = &quirk_old_ec_model,
+ 		.callback = dmi_check_cb
+@@ -1048,8 +1046,7 @@ static int __init msi_init(void)
+ 		return -EINVAL;
+ 
+ 	/* Register backlight stuff */
+-
+-	if (quirks->old_ec_model ||
++	if (quirks->old_ec_model &&
+ 	    acpi_video_get_backlight_type() == acpi_backlight_vendor) {
+ 		struct backlight_properties props;
+ 		memset(&props, 0, sizeof(struct backlight_properties));
+@@ -1117,6 +1114,8 @@ fail_create_attr:
+ fail_create_group:
+ 	if (quirks->load_scm_model) {
+ 		i8042_remove_filter(msi_laptop_i8042_filter);
++		cancel_delayed_work_sync(&msi_touchpad_dwork);
++		input_unregister_device(msi_laptop_input_dev);
+ 		cancel_delayed_work_sync(&msi_rfkill_dwork);
+ 		cancel_work_sync(&msi_rfkill_work);
+ 		rfkill_cleanup();
+@@ -1137,6 +1136,7 @@ static void __exit msi_cleanup(void)
+ {
+ 	if (quirks->load_scm_model) {
+ 		i8042_remove_filter(msi_laptop_i8042_filter);
++		cancel_delayed_work_sync(&msi_touchpad_dwork);
+ 		input_unregister_device(msi_laptop_input_dev);
+ 		cancel_delayed_work_sync(&msi_rfkill_dwork);
+ 		cancel_work_sync(&msi_rfkill_work);
+diff --git a/drivers/power/supply/adp5061.c b/drivers/power/supply/adp5061.c
+index 003557043ab3a..daee1161c3059 100644
+--- a/drivers/power/supply/adp5061.c
++++ b/drivers/power/supply/adp5061.c
+@@ -427,11 +427,11 @@ static int adp5061_get_chg_type(struct adp5061_state *st,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	chg_type = adp5061_chg_type[ADP5061_CHG_STATUS_1_CHG_STATUS(status1)];
+-	if (chg_type > ADP5061_CHG_FAST_CV)
++	chg_type = ADP5061_CHG_STATUS_1_CHG_STATUS(status1);
++	if (chg_type >= ARRAY_SIZE(adp5061_chg_type))
+ 		val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
+ 	else
+-		val->intval = chg_type;
++		val->intval = adp5061_chg_type[chg_type];
+ 
+ 	return ret;
+ }
+diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
+index 70d6d52bc1e21..285420c1eb7c3 100644
+--- a/drivers/powercap/intel_rapl_common.c
++++ b/drivers/powercap/intel_rapl_common.c
+@@ -938,6 +938,9 @@ static u64 rapl_compute_time_window_core(struct rapl_package *rp, u64 value,
+ 		y = value & 0x1f;
+ 		value = (1 << y) * (4 + f) * rp->time_unit / 4;
+ 	} else {
++		if (value < rp->time_unit)
++			return 0;
++
+ 		do_div(value, rp->time_unit);
+ 		y = ilog2(value);
+ 		f = div64_u64(4 * (value - (1 << y)), 1 << y);
+@@ -979,7 +982,6 @@ static const struct rapl_defaults rapl_defaults_spr_server = {
+ 	.check_unit = rapl_check_unit_core,
+ 	.set_floor_freq = set_floor_freq_default,
+ 	.compute_time_window = rapl_compute_time_window_core,
+-	.dram_domain_energy_unit = 15300,
+ 	.psys_domain_energy_unit = 1000000000,
+ };
+ 
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 317d701487ecd..bf8ba73d6c7c7 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -2544,7 +2544,7 @@ static int _regulator_do_enable(struct regulator_dev *rdev)
+ 	 * expired, return -ETIMEDOUT.
+ 	 */
+ 	if (rdev->desc->poll_enabled_time) {
+-		unsigned int time_remaining = delay;
++		int time_remaining = delay;
+ 
+ 		while (time_remaining > 0) {
+ 			_regulator_enable_delay(rdev->desc->poll_enabled_time);
+diff --git a/drivers/regulator/qcom_rpm-regulator.c b/drivers/regulator/qcom_rpm-regulator.c
+index 7f9d66ac37ff8..3c41b71a1f529 100644
+--- a/drivers/regulator/qcom_rpm-regulator.c
++++ b/drivers/regulator/qcom_rpm-regulator.c
+@@ -802,6 +802,12 @@ static const struct rpm_regulator_data rpm_pm8018_regulators[] = {
+ };
+ 
+ static const struct rpm_regulator_data rpm_pm8058_regulators[] = {
++	{ "s0",   QCOM_RPM_PM8058_SMPS0,  &pm8058_smps, "vdd_s0" },
++	{ "s1",   QCOM_RPM_PM8058_SMPS1,  &pm8058_smps, "vdd_s1" },
++	{ "s2",   QCOM_RPM_PM8058_SMPS2,  &pm8058_smps, "vdd_s2" },
++	{ "s3",   QCOM_RPM_PM8058_SMPS3,  &pm8058_smps, "vdd_s3" },
++	{ "s4",   QCOM_RPM_PM8058_SMPS4,  &pm8058_smps, "vdd_s4" },
++
+ 	{ "l0",   QCOM_RPM_PM8058_LDO0,   &pm8058_nldo, "vdd_l0_l1_lvs"	},
+ 	{ "l1",   QCOM_RPM_PM8058_LDO1,   &pm8058_nldo, "vdd_l0_l1_lvs" },
+ 	{ "l2",   QCOM_RPM_PM8058_LDO2,   &pm8058_pldo, "vdd_l2_l11_l12" },
+@@ -829,12 +835,6 @@ static const struct rpm_regulator_data rpm_pm8058_regulators[] = {
+ 	{ "l24",  QCOM_RPM_PM8058_LDO24,  &pm8058_nldo, "vdd_l23_l24_l25" },
+ 	{ "l25",  QCOM_RPM_PM8058_LDO25,  &pm8058_nldo, "vdd_l23_l24_l25" },
+ 
+-	{ "s0",   QCOM_RPM_PM8058_SMPS0,  &pm8058_smps, "vdd_s0" },
+-	{ "s1",   QCOM_RPM_PM8058_SMPS1,  &pm8058_smps, "vdd_s1" },
+-	{ "s2",   QCOM_RPM_PM8058_SMPS2,  &pm8058_smps, "vdd_s2" },
+-	{ "s3",   QCOM_RPM_PM8058_SMPS3,  &pm8058_smps, "vdd_s3" },
+-	{ "s4",   QCOM_RPM_PM8058_SMPS4,  &pm8058_smps, "vdd_s4" },
+-
+ 	{ "lvs0", QCOM_RPM_PM8058_LVS0, &pm8058_switch, "vdd_l0_l1_lvs" },
+ 	{ "lvs1", QCOM_RPM_PM8058_LVS1, &pm8058_switch, "vdd_l0_l1_lvs" },
+ 
+@@ -843,6 +843,12 @@ static const struct rpm_regulator_data rpm_pm8058_regulators[] = {
+ };
+ 
+ static const struct rpm_regulator_data rpm_pm8901_regulators[] = {
++	{ "s0",   QCOM_RPM_PM8901_SMPS0, &pm8901_ftsmps, "vdd_s0" },
++	{ "s1",   QCOM_RPM_PM8901_SMPS1, &pm8901_ftsmps, "vdd_s1" },
++	{ "s2",   QCOM_RPM_PM8901_SMPS2, &pm8901_ftsmps, "vdd_s2" },
++	{ "s3",   QCOM_RPM_PM8901_SMPS3, &pm8901_ftsmps, "vdd_s3" },
++	{ "s4",   QCOM_RPM_PM8901_SMPS4, &pm8901_ftsmps, "vdd_s4" },
++
+ 	{ "l0",   QCOM_RPM_PM8901_LDO0, &pm8901_nldo, "vdd_l0" },
+ 	{ "l1",   QCOM_RPM_PM8901_LDO1, &pm8901_pldo, "vdd_l1" },
+ 	{ "l2",   QCOM_RPM_PM8901_LDO2, &pm8901_pldo, "vdd_l2" },
+@@ -851,12 +857,6 @@ static const struct rpm_regulator_data rpm_pm8901_regulators[] = {
+ 	{ "l5",   QCOM_RPM_PM8901_LDO5, &pm8901_pldo, "vdd_l5" },
+ 	{ "l6",   QCOM_RPM_PM8901_LDO6, &pm8901_pldo, "vdd_l6" },
+ 
+-	{ "s0",   QCOM_RPM_PM8901_SMPS0, &pm8901_ftsmps, "vdd_s0" },
+-	{ "s1",   QCOM_RPM_PM8901_SMPS1, &pm8901_ftsmps, "vdd_s1" },
+-	{ "s2",   QCOM_RPM_PM8901_SMPS2, &pm8901_ftsmps, "vdd_s2" },
+-	{ "s3",   QCOM_RPM_PM8901_SMPS3, &pm8901_ftsmps, "vdd_s3" },
+-	{ "s4",   QCOM_RPM_PM8901_SMPS4, &pm8901_ftsmps, "vdd_s4" },
+-
+ 	{ "lvs0", QCOM_RPM_PM8901_LVS0, &pm8901_switch, "lvs0_in" },
+ 	{ "lvs1", QCOM_RPM_PM8901_LVS1, &pm8901_switch, "lvs1_in" },
+ 	{ "lvs2", QCOM_RPM_PM8901_LVS2, &pm8901_switch, "lvs2_in" },
+diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
+index 3337b1e80412b..f6f92033132a8 100644
+--- a/drivers/scsi/3w-9xxx.c
++++ b/drivers/scsi/3w-9xxx.c
+@@ -2014,7 +2014,7 @@ static int twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	retval = pci_enable_device(pdev);
+ 	if (retval) {
+ 		TW_PRINTK(host, TW_DRIVER, 0x34, "Failed to enable pci device");
+-		goto out_disable_device;
++		return -ENODEV;
+ 	}
+ 
+ 	pci_set_master(pdev);
+diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
+index df47557a02a3c..6485c1aa9e741 100644
+--- a/drivers/scsi/iscsi_tcp.c
++++ b/drivers/scsi/iscsi_tcp.c
+@@ -558,6 +558,8 @@ iscsi_sw_tcp_conn_create(struct iscsi_cls_session *cls_session,
+ 	tcp_conn = conn->dd_data;
+ 	tcp_sw_conn = tcp_conn->dd_data;
+ 
++	mutex_init(&tcp_sw_conn->sock_lock);
++
+ 	tfm = crypto_alloc_ahash("crc32c", 0, CRYPTO_ALG_ASYNC);
+ 	if (IS_ERR(tfm))
+ 		goto free_conn;
+@@ -592,11 +594,15 @@ free_conn:
+ 
+ static void iscsi_sw_tcp_release_conn(struct iscsi_conn *conn)
+ {
+-	struct iscsi_session *session = conn->session;
+ 	struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+ 	struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
+ 	struct socket *sock = tcp_sw_conn->sock;
+ 
++	/*
++	 * The iscsi transport class will make sure we are not called in
++	 * parallel with start, stop, bind and destroys. However, this can be
++	 * called twice if userspace does a stop then a destroy.
++	 */
+ 	if (!sock)
+ 		return;
+ 
+@@ -604,9 +610,9 @@ static void iscsi_sw_tcp_release_conn(struct iscsi_conn *conn)
+ 	iscsi_sw_tcp_conn_restore_callbacks(conn);
+ 	sock_put(sock->sk);
+ 
+-	spin_lock_bh(&session->frwd_lock);
++	mutex_lock(&tcp_sw_conn->sock_lock);
+ 	tcp_sw_conn->sock = NULL;
+-	spin_unlock_bh(&session->frwd_lock);
++	mutex_unlock(&tcp_sw_conn->sock_lock);
+ 	sockfd_put(sock);
+ }
+ 
+@@ -658,7 +664,6 @@ iscsi_sw_tcp_conn_bind(struct iscsi_cls_session *cls_session,
+ 		       struct iscsi_cls_conn *cls_conn, uint64_t transport_eph,
+ 		       int is_leading)
+ {
+-	struct iscsi_session *session = cls_session->dd_data;
+ 	struct iscsi_conn *conn = cls_conn->dd_data;
+ 	struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+ 	struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
+@@ -678,10 +683,10 @@ iscsi_sw_tcp_conn_bind(struct iscsi_cls_session *cls_session,
+ 	if (err)
+ 		goto free_socket;
+ 
+-	spin_lock_bh(&session->frwd_lock);
++	mutex_lock(&tcp_sw_conn->sock_lock);
+ 	/* bind iSCSI connection and socket */
+ 	tcp_sw_conn->sock = sock;
+-	spin_unlock_bh(&session->frwd_lock);
++	mutex_unlock(&tcp_sw_conn->sock_lock);
+ 
+ 	/* setup Socket parameters */
+ 	sk = sock->sk;
+@@ -717,8 +722,15 @@ static int iscsi_sw_tcp_conn_set_param(struct iscsi_cls_conn *cls_conn,
+ 		break;
+ 	case ISCSI_PARAM_DATADGST_EN:
+ 		iscsi_set_param(cls_conn, param, buf, buflen);
++
++		mutex_lock(&tcp_sw_conn->sock_lock);
++		if (!tcp_sw_conn->sock) {
++			mutex_unlock(&tcp_sw_conn->sock_lock);
++			return -ENOTCONN;
++		}
+ 		tcp_sw_conn->sendpage = conn->datadgst_en ?
+ 			sock_no_sendpage : tcp_sw_conn->sock->ops->sendpage;
++		mutex_unlock(&tcp_sw_conn->sock_lock);
+ 		break;
+ 	case ISCSI_PARAM_MAX_R2T:
+ 		return iscsi_tcp_set_max_r2t(conn, buf);
+@@ -733,8 +745,8 @@ static int iscsi_sw_tcp_conn_get_param(struct iscsi_cls_conn *cls_conn,
+ 				       enum iscsi_param param, char *buf)
+ {
+ 	struct iscsi_conn *conn = cls_conn->dd_data;
+-	struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+-	struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
++	struct iscsi_sw_tcp_conn *tcp_sw_conn;
++	struct iscsi_tcp_conn *tcp_conn;
+ 	struct sockaddr_in6 addr;
+ 	struct socket *sock;
+ 	int rc;
+@@ -744,21 +756,36 @@ static int iscsi_sw_tcp_conn_get_param(struct iscsi_cls_conn *cls_conn,
+ 	case ISCSI_PARAM_CONN_ADDRESS:
+ 	case ISCSI_PARAM_LOCAL_PORT:
+ 		spin_lock_bh(&conn->session->frwd_lock);
+-		if (!tcp_sw_conn || !tcp_sw_conn->sock) {
++		if (!conn->session->leadconn) {
+ 			spin_unlock_bh(&conn->session->frwd_lock);
+ 			return -ENOTCONN;
+ 		}
+-		sock = tcp_sw_conn->sock;
+-		sock_hold(sock->sk);
++		/*
++		 * The conn has been setup and bound, so just grab a ref
++		 * incase a destroy runs while we are in the net layer.
++		 */
++		iscsi_get_conn(conn->cls_conn);
+ 		spin_unlock_bh(&conn->session->frwd_lock);
+ 
++		tcp_conn = conn->dd_data;
++		tcp_sw_conn = tcp_conn->dd_data;
++
++		mutex_lock(&tcp_sw_conn->sock_lock);
++		sock = tcp_sw_conn->sock;
++		if (!sock) {
++			rc = -ENOTCONN;
++			goto sock_unlock;
++		}
++
+ 		if (param == ISCSI_PARAM_LOCAL_PORT)
+ 			rc = kernel_getsockname(sock,
+ 						(struct sockaddr *)&addr);
+ 		else
+ 			rc = kernel_getpeername(sock,
+ 						(struct sockaddr *)&addr);
+-		sock_put(sock->sk);
++sock_unlock:
++		mutex_unlock(&tcp_sw_conn->sock_lock);
++		iscsi_put_conn(conn->cls_conn);
+ 		if (rc < 0)
+ 			return rc;
+ 
+@@ -796,17 +823,21 @@ static int iscsi_sw_tcp_host_get_param(struct Scsi_Host *shost,
+ 		}
+ 		tcp_conn = conn->dd_data;
+ 		tcp_sw_conn = tcp_conn->dd_data;
+-		sock = tcp_sw_conn->sock;
+-		if (!sock) {
+-			spin_unlock_bh(&session->frwd_lock);
+-			return -ENOTCONN;
+-		}
+-		sock_hold(sock->sk);
++		/*
++		 * The conn has been setup and bound, so just grab a ref
++		 * incase a destroy runs while we are in the net layer.
++		 */
++		iscsi_get_conn(conn->cls_conn);
+ 		spin_unlock_bh(&session->frwd_lock);
+ 
+-		rc = kernel_getsockname(sock,
+-					(struct sockaddr *)&addr);
+-		sock_put(sock->sk);
++		mutex_lock(&tcp_sw_conn->sock_lock);
++		sock = tcp_sw_conn->sock;
++		if (!sock)
++			rc = -ENOTCONN;
++		else
++			rc = kernel_getsockname(sock, (struct sockaddr *)&addr);
++		mutex_unlock(&tcp_sw_conn->sock_lock);
++		iscsi_put_conn(conn->cls_conn);
+ 		if (rc < 0)
+ 			return rc;
+ 
+diff --git a/drivers/scsi/iscsi_tcp.h b/drivers/scsi/iscsi_tcp.h
+index 791453195099c..1731956326e2b 100644
+--- a/drivers/scsi/iscsi_tcp.h
++++ b/drivers/scsi/iscsi_tcp.h
+@@ -28,6 +28,8 @@ struct iscsi_sw_tcp_send {
+ 
+ struct iscsi_sw_tcp_conn {
+ 	struct socket		*sock;
++	/* Taken when accessing the sock from the netlink/sysfs interface */
++	struct mutex		sock_lock;
+ 
+ 	struct iscsi_sw_tcp_send out;
+ 	/* old values for socket callbacks */
+diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
+index 8d6bcc19359ff..51485d0251f2d 100644
+--- a/drivers/scsi/libsas/sas_expander.c
++++ b/drivers/scsi/libsas/sas_expander.c
+@@ -85,7 +85,7 @@ static int smp_execute_task_sg(struct domain_device *dev,
+ 		res = i->dft->lldd_execute_task(task, GFP_KERNEL);
+ 
+ 		if (res) {
+-			del_timer(&task->slow_task->timer);
++			del_timer_sync(&task->slow_task->timer);
+ 			pr_notice("executing SMP task failed:%d\n", res);
+ 			break;
+ 		}
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index de5b6453827c0..f48ef47546f4d 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -1917,6 +1917,27 @@ static int qedf_vport_create(struct fc_vport *vport, bool disabled)
+ 		fc_vport_setlink(vn_port);
+ 	}
+ 
++	/* Set symbolic node name */
++	if (base_qedf->pdev->device == QL45xxx)
++		snprintf(fc_host_symbolic_name(vn_port->host), 256,
++			 "Marvell FastLinQ 45xxx FCoE v%s", QEDF_VERSION);
++
++	if (base_qedf->pdev->device == QL41xxx)
++		snprintf(fc_host_symbolic_name(vn_port->host), 256,
++			 "Marvell FastLinQ 41xxx FCoE v%s", QEDF_VERSION);
++
++	/* Set supported speed */
++	fc_host_supported_speeds(vn_port->host) = n_port->link_supported_speeds;
++
++	/* Set speed */
++	vn_port->link_speed = n_port->link_speed;
++
++	/* Set port type */
++	fc_host_port_type(vn_port->host) = FC_PORTTYPE_NPIV;
++
++	/* Set maxframe size */
++	fc_host_maxframe_size(vn_port->host) = n_port->mfs;
++
+ 	QEDF_INFO(&(base_qedf->dbg_ctx), QEDF_LOG_NPIV, "vn_port=%p.\n",
+ 		   vn_port);
+ 
+diff --git a/drivers/soc/qcom/smem_state.c b/drivers/soc/qcom/smem_state.c
+index d2b558438deb7..41e9294071960 100644
+--- a/drivers/soc/qcom/smem_state.c
++++ b/drivers/soc/qcom/smem_state.c
+@@ -136,6 +136,7 @@ static void qcom_smem_state_release(struct kref *ref)
+ 	struct qcom_smem_state *state = container_of(ref, struct qcom_smem_state, refcount);
+ 
+ 	list_del(&state->list);
++	of_node_put(state->of_node);
+ 	kfree(state);
+ }
+ 
+@@ -169,7 +170,7 @@ struct qcom_smem_state *qcom_smem_state_register(struct device_node *of_node,
+ 
+ 	kref_init(&state->refcount);
+ 
+-	state->of_node = of_node;
++	state->of_node = of_node_get(of_node);
+ 	state->ops = *ops;
+ 	state->priv = priv;
+ 
+diff --git a/drivers/soc/qcom/smsm.c b/drivers/soc/qcom/smsm.c
+index 6564f15c53190..acba67dfbc859 100644
+--- a/drivers/soc/qcom/smsm.c
++++ b/drivers/soc/qcom/smsm.c
+@@ -511,7 +511,7 @@ static int qcom_smsm_probe(struct platform_device *pdev)
+ 	for (id = 0; id < smsm->num_hosts; id++) {
+ 		ret = smsm_parse_ipc(smsm, id);
+ 		if (ret < 0)
+-			return ret;
++			goto out_put;
+ 	}
+ 
+ 	/* Acquire the main SMSM state vector */
+@@ -519,13 +519,14 @@ static int qcom_smsm_probe(struct platform_device *pdev)
+ 			      smsm->num_entries * sizeof(u32));
+ 	if (ret < 0 && ret != -EEXIST) {
+ 		dev_err(&pdev->dev, "unable to allocate shared state entry\n");
+-		return ret;
++		goto out_put;
+ 	}
+ 
+ 	states = qcom_smem_get(QCOM_SMEM_HOST_ANY, SMEM_SMSM_SHARED_STATE, NULL);
+ 	if (IS_ERR(states)) {
+ 		dev_err(&pdev->dev, "Unable to acquire shared state entry\n");
+-		return PTR_ERR(states);
++		ret = PTR_ERR(states);
++		goto out_put;
+ 	}
+ 
+ 	/* Acquire the list of interrupt mask vectors */
+@@ -533,13 +534,14 @@ static int qcom_smsm_probe(struct platform_device *pdev)
+ 	ret = qcom_smem_alloc(QCOM_SMEM_HOST_ANY, SMEM_SMSM_CPU_INTR_MASK, size);
+ 	if (ret < 0 && ret != -EEXIST) {
+ 		dev_err(&pdev->dev, "unable to allocate smsm interrupt mask\n");
+-		return ret;
++		goto out_put;
+ 	}
+ 
+ 	intr_mask = qcom_smem_get(QCOM_SMEM_HOST_ANY, SMEM_SMSM_CPU_INTR_MASK, NULL);
+ 	if (IS_ERR(intr_mask)) {
+ 		dev_err(&pdev->dev, "unable to acquire shared memory interrupt mask\n");
+-		return PTR_ERR(intr_mask);
++		ret = PTR_ERR(intr_mask);
++		goto out_put;
+ 	}
+ 
+ 	/* Setup the reference to the local state bits */
+@@ -550,7 +552,8 @@ static int qcom_smsm_probe(struct platform_device *pdev)
+ 	smsm->state = qcom_smem_state_register(local_node, &smsm_state_ops, smsm);
+ 	if (IS_ERR(smsm->state)) {
+ 		dev_err(smsm->dev, "failed to register qcom_smem_state\n");
+-		return PTR_ERR(smsm->state);
++		ret = PTR_ERR(smsm->state);
++		goto out_put;
+ 	}
+ 
+ 	/* Register handlers for remote processor entries of interest. */
+@@ -580,16 +583,19 @@ static int qcom_smsm_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	platform_set_drvdata(pdev, smsm);
++	of_node_put(local_node);
+ 
+ 	return 0;
+ 
+ unwind_interfaces:
++	of_node_put(node);
+ 	for (id = 0; id < smsm->num_entries; id++)
+ 		if (smsm->entries[id].domain)
+ 			irq_domain_remove(smsm->entries[id].domain);
+ 
+ 	qcom_smem_state_unregister(smsm->state);
+-
++out_put:
++	of_node_put(local_node);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/soc/tegra/Kconfig b/drivers/soc/tegra/Kconfig
+index 976dee0364700..676807c5a2154 100644
+--- a/drivers/soc/tegra/Kconfig
++++ b/drivers/soc/tegra/Kconfig
+@@ -136,7 +136,6 @@ config SOC_TEGRA_FUSE
+ 	def_bool y
+ 	depends on ARCH_TEGRA
+ 	select SOC_BUS
+-	select TEGRA20_APB_DMA if ARCH_TEGRA_2x_SOC
+ 
+ config SOC_TEGRA_FLOWCTRL
+ 	bool
+diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
+index c6d421a4b91b6..a3247692ddc07 100644
+--- a/drivers/soundwire/cadence_master.c
++++ b/drivers/soundwire/cadence_master.c
+@@ -501,9 +501,12 @@ cdns_fill_msg_resp(struct sdw_cdns *cdns,
+ 		return SDW_CMD_IGNORED;
+ 	}
+ 
+-	/* fill response */
+-	for (i = 0; i < count; i++)
+-		msg->buf[i + offset] = FIELD_GET(CDNS_MCP_RESP_RDATA, cdns->response_buf[i]);
++	if (msg->flags == SDW_MSG_FLAG_READ) {
++		/* fill response */
++		for (i = 0; i < count; i++)
++			msg->buf[i + offset] = FIELD_GET(CDNS_MCP_RESP_RDATA,
++							 cdns->response_buf[i]);
++	}
+ 
+ 	return SDW_CMD_OK;
+ }
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index 824d9f900aca7..942d2fe132181 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -1470,7 +1470,6 @@ int intel_master_startup(struct platform_device *pdev)
+ 	ret = intel_register_dai(sdw);
+ 	if (ret) {
+ 		dev_err(dev, "DAI registration failed: %d\n", ret);
+-		snd_soc_unregister_component(dev);
+ 		goto err_interrupt;
+ 	}
+ 
+diff --git a/drivers/spi/spi-dw-bt1.c b/drivers/spi/spi-dw-bt1.c
+index bc9d5eab3c589..8f6a1af144562 100644
+--- a/drivers/spi/spi-dw-bt1.c
++++ b/drivers/spi/spi-dw-bt1.c
+@@ -293,8 +293,10 @@ static int dw_spi_bt1_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(&pdev->dev);
+ 
+ 	ret = dw_spi_add_host(&pdev->dev, dws);
+-	if (ret)
++	if (ret) {
++		pm_runtime_disable(&pdev->dev);
+ 		goto err_disable_clk;
++	}
+ 
+ 	platform_set_drvdata(pdev, dwsbt1);
+ 
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index e4cb52e1fe261..6974a1c947aad 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -537,7 +537,7 @@ static unsigned long meson_spicc_pow2_recalc_rate(struct clk_hw *hw,
+ 	struct clk_divider *divider = to_clk_divider(hw);
+ 	struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
+ 
+-	if (!spicc->master->cur_msg || !spicc->master->busy)
++	if (!spicc->master->cur_msg)
+ 		return 0;
+ 
+ 	return clk_divider_ops.recalc_rate(hw, parent_rate);
+@@ -549,7 +549,7 @@ static int meson_spicc_pow2_determine_rate(struct clk_hw *hw,
+ 	struct clk_divider *divider = to_clk_divider(hw);
+ 	struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
+ 
+-	if (!spicc->master->cur_msg || !spicc->master->busy)
++	if (!spicc->master->cur_msg)
+ 		return -EINVAL;
+ 
+ 	return clk_divider_ops.determine_rate(hw, req);
+@@ -561,7 +561,7 @@ static int meson_spicc_pow2_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	struct clk_divider *divider = to_clk_divider(hw);
+ 	struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
+ 
+-	if (!spicc->master->cur_msg || !spicc->master->busy)
++	if (!spicc->master->cur_msg)
+ 		return -EINVAL;
+ 
+ 	return clk_divider_ops.set_rate(hw, rate, parent_rate);
+diff --git a/drivers/spi/spi-mt7621.c b/drivers/spi/spi-mt7621.c
+index b4b9b7309b5e9..351b0ef52bbc8 100644
+--- a/drivers/spi/spi-mt7621.c
++++ b/drivers/spi/spi-mt7621.c
+@@ -340,11 +340,9 @@ static int mt7621_spi_probe(struct platform_device *pdev)
+ 		return PTR_ERR(base);
+ 
+ 	clk = devm_clk_get(&pdev->dev, NULL);
+-	if (IS_ERR(clk)) {
+-		dev_err(&pdev->dev, "unable to get SYS clock, err=%d\n",
+-			status);
+-		return PTR_ERR(clk);
+-	}
++	if (IS_ERR(clk))
++		return dev_err_probe(&pdev->dev, PTR_ERR(clk),
++				     "unable to get SYS clock\n");
+ 
+ 	status = clk_prepare_enable(clk);
+ 	if (status)
+diff --git a/drivers/spi/spi-omap-100k.c b/drivers/spi/spi-omap-100k.c
+index 0d0cd061d3563..7c992d1f4abd9 100644
+--- a/drivers/spi/spi-omap-100k.c
++++ b/drivers/spi/spi-omap-100k.c
+@@ -414,6 +414,7 @@ static int omap1_spi100k_probe(struct platform_device *pdev)
+ 	return status;
+ 
+ err_fck:
++	pm_runtime_disable(&pdev->dev);
+ 	clk_disable_unprepare(spi100k->fck);
+ err_ick:
+ 	clk_disable_unprepare(spi100k->ick);
+diff --git a/drivers/spi/spi-qup.c b/drivers/spi/spi-qup.c
+index d39dec6d1c91e..f3877eeb3da65 100644
+--- a/drivers/spi/spi-qup.c
++++ b/drivers/spi/spi-qup.c
+@@ -1199,8 +1199,10 @@ static int spi_qup_pm_resume_runtime(struct device *device)
+ 		return ret;
+ 
+ 	ret = clk_prepare_enable(controller->cclk);
+-	if (ret)
++	if (ret) {
++		clk_disable_unprepare(controller->iclk);
+ 		return ret;
++	}
+ 
+ 	/* Disable clocks auto gaiting */
+ 	config = readl_relaxed(controller->base + QUP_CONFIG);
+@@ -1246,14 +1248,25 @@ static int spi_qup_resume(struct device *device)
+ 		return ret;
+ 
+ 	ret = clk_prepare_enable(controller->cclk);
+-	if (ret)
++	if (ret) {
++		clk_disable_unprepare(controller->iclk);
+ 		return ret;
++	}
+ 
+ 	ret = spi_qup_set_state(controller, QUP_STATE_RESET);
+ 	if (ret)
+-		return ret;
++		goto disable_clk;
+ 
+-	return spi_master_resume(master);
++	ret = spi_master_resume(master);
++	if (ret)
++		goto disable_clk;
++
++	return 0;
++
++disable_clk:
++	clk_disable_unprepare(controller->cclk);
++	clk_disable_unprepare(controller->iclk);
++	return ret;
+ }
+ #endif /* CONFIG_PM_SLEEP */
+ 
+diff --git a/drivers/spi/spi-s3c64xx.c b/drivers/spi/spi-s3c64xx.c
+index dfa7c91e13aa5..d435df1b715bb 100644
+--- a/drivers/spi/spi-s3c64xx.c
++++ b/drivers/spi/spi-s3c64xx.c
+@@ -84,6 +84,7 @@
+ #define S3C64XX_SPI_ST_TX_FIFORDY		(1<<0)
+ 
+ #define S3C64XX_SPI_PACKET_CNT_EN		(1<<16)
++#define S3C64XX_SPI_PACKET_CNT_MASK		GENMASK(15, 0)
+ 
+ #define S3C64XX_SPI_PND_TX_UNDERRUN_CLR		(1<<4)
+ #define S3C64XX_SPI_PND_TX_OVERRUN_CLR		(1<<3)
+@@ -660,6 +661,13 @@ static int s3c64xx_spi_prepare_message(struct spi_master *master,
+ 	return 0;
+ }
+ 
++static size_t s3c64xx_spi_max_transfer_size(struct spi_device *spi)
++{
++	struct spi_controller *ctlr = spi->controller;
++
++	return ctlr->can_dma ? S3C64XX_SPI_PACKET_CNT_MASK : SIZE_MAX;
++}
++
+ static int s3c64xx_spi_transfer_one(struct spi_master *master,
+ 				    struct spi_device *spi,
+ 				    struct spi_transfer *xfer)
+@@ -1135,6 +1143,7 @@ static int s3c64xx_spi_probe(struct platform_device *pdev)
+ 	master->prepare_transfer_hardware = s3c64xx_spi_prepare_transfer;
+ 	master->prepare_message = s3c64xx_spi_prepare_message;
+ 	master->transfer_one = s3c64xx_spi_transfer_one;
++	master->max_transfer_size = s3c64xx_spi_max_transfer_size;
+ 	master->num_chipselect = sci->num_cs;
+ 	master->dma_alignment = 8;
+ 	master->bits_per_word_mask = SPI_BPW_MASK(32) | SPI_BPW_MASK(16) |
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 6ea7b286c80c2..857a1399850c3 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -946,6 +946,8 @@ void spi_unmap_buf(struct spi_controller *ctlr, struct device *dev,
+ 	if (sgt->orig_nents) {
+ 		dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, dir);
+ 		sg_free_table(sgt);
++		sgt->orig_nents = 0;
++		sgt->nents = 0;
+ 	}
+ }
+ 
+diff --git a/drivers/spmi/spmi-pmic-arb.c b/drivers/spmi/spmi-pmic-arb.c
+index bbbd311eda030..e6de2aeece8d3 100644
+--- a/drivers/spmi/spmi-pmic-arb.c
++++ b/drivers/spmi/spmi-pmic-arb.c
+@@ -887,7 +887,8 @@ static int pmic_arb_read_apid_map_v5(struct spmi_pmic_arb *pmic_arb)
+ 	 * version 5, there is more than one APID mapped to each PPID.
+ 	 * The owner field for each of these mappings specifies the EE which is
+ 	 * allowed to write to the APID.  The owner of the last (highest) APID
+-	 * for a given PPID will receive interrupts from the PPID.
++	 * which has the IRQ owner bit set for a given PPID will receive
++	 * interrupts from the PPID.
+ 	 */
+ 	for (i = 0; ; i++, apidd++) {
+ 		offset = pmic_arb->ver_ops->apid_map_offset(i);
+@@ -910,16 +911,16 @@ static int pmic_arb_read_apid_map_v5(struct spmi_pmic_arb *pmic_arb)
+ 		apid = pmic_arb->ppid_to_apid[ppid] & ~PMIC_ARB_APID_VALID;
+ 		prev_apidd = &pmic_arb->apid_data[apid];
+ 
+-		if (valid && is_irq_ee &&
+-				prev_apidd->write_ee == pmic_arb->ee) {
++		if (!valid || apidd->write_ee == pmic_arb->ee) {
++			/* First PPID mapping or one for this EE */
++			pmic_arb->ppid_to_apid[ppid] = i | PMIC_ARB_APID_VALID;
++		} else if (valid && is_irq_ee &&
++			   prev_apidd->write_ee == pmic_arb->ee) {
+ 			/*
+ 			 * Duplicate PPID mapping after the one for this EE;
+ 			 * override the irq owner
+ 			 */
+ 			prev_apidd->irq_ee = apidd->irq_ee;
+-		} else if (!valid || is_irq_ee) {
+-			/* First PPID mapping or duplicate for another EE */
+-			pmic_arb->ppid_to_apid[ppid] = i | PMIC_ARB_APID_VALID;
+ 		}
+ 
+ 		apidd->ppid = ppid;
+diff --git a/drivers/staging/greybus/audio_helper.c b/drivers/staging/greybus/audio_helper.c
+index a9576f92efaa4..08443f4aa045d 100644
+--- a/drivers/staging/greybus/audio_helper.c
++++ b/drivers/staging/greybus/audio_helper.c
+@@ -3,7 +3,6 @@
+  * Greybus Audio Sound SoC helper APIs
+  */
+ 
+-#include <linux/debugfs.h>
+ #include <sound/core.h>
+ #include <sound/soc.h>
+ #include <sound/soc-dapm.h>
+@@ -116,10 +115,6 @@ int gbaudio_dapm_free_controls(struct snd_soc_dapm_context *dapm,
+ {
+ 	int i;
+ 	struct snd_soc_dapm_widget *w, *next_w;
+-#ifdef CONFIG_DEBUG_FS
+-	struct dentry *parent = dapm->debugfs_dapm;
+-	struct dentry *debugfs_w = NULL;
+-#endif
+ 
+ 	mutex_lock(&dapm->card->dapm_mutex);
+ 	for (i = 0; i < num; i++) {
+@@ -139,12 +134,6 @@ int gbaudio_dapm_free_controls(struct snd_soc_dapm_context *dapm,
+ 			continue;
+ 		}
+ 		widget++;
+-#ifdef CONFIG_DEBUG_FS
+-		if (!parent)
+-			debugfs_w = debugfs_lookup(w->name, parent);
+-		debugfs_remove(debugfs_w);
+-		debugfs_w = NULL;
+-#endif
+ 		gbaudio_dapm_free_widget(w);
+ 	}
+ 	mutex_unlock(&dapm->card->dapm_mutex);
+diff --git a/drivers/staging/media/meson/vdec/vdec_hevc.c b/drivers/staging/media/meson/vdec/vdec_hevc.c
+index 9530e580e57a2..afced435c9070 100644
+--- a/drivers/staging/media/meson/vdec/vdec_hevc.c
++++ b/drivers/staging/media/meson/vdec/vdec_hevc.c
+@@ -167,8 +167,12 @@ static int vdec_hevc_start(struct amvdec_session *sess)
+ 
+ 	clk_set_rate(core->vdec_hevc_clk, 666666666);
+ 	ret = clk_prepare_enable(core->vdec_hevc_clk);
+-	if (ret)
++	if (ret) {
++		if (core->platform->revision == VDEC_REVISION_G12A ||
++		    core->platform->revision == VDEC_REVISION_SM1)
++			clk_disable_unprepare(core->vdec_hevcf_clk);
+ 		return ret;
++	}
+ 
+ 	if (core->platform->revision == VDEC_REVISION_SM1)
+ 		regmap_update_bits(core->regmap_ao, AO_RTI_GEN_PWR_SLEEP0,
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus.c b/drivers/staging/media/sunxi/cedrus/cedrus.c
+index 1dd833757c4ee..28de90edf4cc5 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus.c
+@@ -399,6 +399,8 @@ static int cedrus_probe(struct platform_device *pdev)
+ 	if (!dev)
+ 		return -ENOMEM;
+ 
++	platform_set_drvdata(pdev, dev);
++
+ 	dev->vfd = cedrus_video_device;
+ 	dev->dev = &pdev->dev;
+ 	dev->pdev = pdev;
+@@ -469,8 +471,6 @@ static int cedrus_probe(struct platform_device *pdev)
+ 		goto err_m2m_mc;
+ 	}
+ 
+-	platform_set_drvdata(pdev, dev);
+-
+ 	return 0;
+ 
+ err_m2m_mc:
+diff --git a/drivers/staging/rtl8723bs/core/rtw_cmd.c b/drivers/staging/rtl8723bs/core/rtw_cmd.c
+index 2abe205e34535..cee05385f8727 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_cmd.c
++++ b/drivers/staging/rtl8723bs/core/rtw_cmd.c
+@@ -165,8 +165,6 @@ No irqsave is necessary.
+ 
+ int rtw_init_cmd_priv(struct	cmd_priv *pcmdpriv)
+ {
+-	int res = 0;
+-
+ 	init_completion(&pcmdpriv->cmd_queue_comp);
+ 	init_completion(&pcmdpriv->terminate_cmdthread_comp);
+ 
+@@ -178,18 +176,16 @@ int rtw_init_cmd_priv(struct	cmd_priv *pcmdpriv)
+ 
+ 	pcmdpriv->cmd_allocated_buf = rtw_zmalloc(MAX_CMDSZ + CMDBUFF_ALIGN_SZ);
+ 
+-	if (!pcmdpriv->cmd_allocated_buf) {
+-		res = -ENOMEM;
+-		goto exit;
+-	}
++	if (!pcmdpriv->cmd_allocated_buf)
++		return -ENOMEM;
+ 
+ 	pcmdpriv->cmd_buf = pcmdpriv->cmd_allocated_buf  +  CMDBUFF_ALIGN_SZ - ((SIZE_PTR)(pcmdpriv->cmd_allocated_buf) & (CMDBUFF_ALIGN_SZ-1));
+ 
+ 	pcmdpriv->rsp_allocated_buf = rtw_zmalloc(MAX_RSPSZ + 4);
+ 
+ 	if (!pcmdpriv->rsp_allocated_buf) {
+-		res = -ENOMEM;
+-		goto exit;
++		kfree(pcmdpriv->cmd_allocated_buf);
++		return -ENOMEM;
+ 	}
+ 
+ 	pcmdpriv->rsp_buf = pcmdpriv->rsp_allocated_buf  +  4 - ((SIZE_PTR)(pcmdpriv->rsp_allocated_buf) & 3);
+@@ -199,8 +195,8 @@ int rtw_init_cmd_priv(struct	cmd_priv *pcmdpriv)
+ 	pcmdpriv->rsp_cnt = 0;
+ 
+ 	mutex_init(&pcmdpriv->sctx_mutex);
+-exit:
+-	return res;
++
++	return 0;
+ }
+ 
+ static void c2h_wk_callback(_workitem * work);
+diff --git a/drivers/staging/vt6655/device_main.c b/drivers/staging/vt6655/device_main.c
+index 09ab6d6f2429b..343f0de031546 100644
+--- a/drivers/staging/vt6655/device_main.c
++++ b/drivers/staging/vt6655/device_main.c
+@@ -564,7 +564,7 @@ err_free_rd:
+ 	kfree(desc->rd_info);
+ 
+ err_free_desc:
+-	while (--i) {
++	while (i--) {
+ 		desc = &priv->aRD0Ring[i];
+ 		device_free_rx_buf(priv, desc);
+ 		kfree(desc->rd_info);
+@@ -610,7 +610,7 @@ err_free_rd:
+ 	kfree(desc->rd_info);
+ 
+ err_free_desc:
+-	while (--i) {
++	while (i--) {
+ 		desc = &priv->aRD1Ring[i];
+ 		device_free_rx_buf(priv, desc);
+ 		kfree(desc->rd_info);
+@@ -675,7 +675,7 @@ static int device_init_td0_ring(struct vnt_private *priv)
+ 	return 0;
+ 
+ err_free_desc:
+-	while (--i) {
++	while (i--) {
+ 		desc = &priv->apTD0Rings[i];
+ 		kfree(desc->td_info);
+ 	}
+@@ -715,7 +715,7 @@ static int device_init_td1_ring(struct vnt_private *priv)
+ 	return 0;
+ 
+ err_free_desc:
+-	while (--i) {
++	while (i--) {
+ 		desc = &priv->apTD1Rings[i];
+ 		kfree(desc->td_info);
+ 	}
+diff --git a/drivers/thermal/intel/intel_powerclamp.c b/drivers/thermal/intel/intel_powerclamp.c
+index b0eb5ece9243b..fb04470d7d4bb 100644
+--- a/drivers/thermal/intel/intel_powerclamp.c
++++ b/drivers/thermal/intel/intel_powerclamp.c
+@@ -531,9 +531,7 @@ static int start_power_clamp(void)
+ 	get_online_cpus();
+ 
+ 	/* prefer BSP */
+-	control_cpu = 0;
+-	if (!cpu_online(control_cpu))
+-		control_cpu = smp_processor_id();
++	control_cpu = cpumask_first(cpu_online_mask);
+ 
+ 	clamping = true;
+ 	schedule_delayed_work(&poll_pkg_cstate_work, 0);
+diff --git a/drivers/thermal/qcom/tsens-v0_1.c b/drivers/thermal/qcom/tsens-v0_1.c
+index 4ffa2e2c01457..9b8ba429a3049 100644
+--- a/drivers/thermal/qcom/tsens-v0_1.c
++++ b/drivers/thermal/qcom/tsens-v0_1.c
+@@ -522,7 +522,7 @@ static const struct tsens_ops ops_8939 = {
+ struct tsens_plat_data data_8939 = {
+ 	.num_sensors	= 10,
+ 	.ops		= &ops_8939,
+-	.hw_ids		= (unsigned int []){ 0, 1, 2, 4, 5, 6, 7, 8, 9, 10 },
++	.hw_ids		= (unsigned int []){ 0, 1, 2, 3, 5, 6, 7, 8, 9, 10 },
+ 
+ 	.feat		= &tsens_v0_1_feat,
+ 	.fields	= tsens_v0_1_regfields,
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index 65f99d744654f..e881b72833dcf 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -2413,6 +2413,26 @@ void tb_switch_unconfigure_link(struct tb_switch *sw)
+ 		tb_lc_unconfigure_port(down);
+ }
+ 
++static int tb_switch_port_hotplug_enable(struct tb_switch *sw)
++{
++	struct tb_port *port;
++
++	if (tb_switch_is_icm(sw))
++		return 0;
++
++	tb_switch_for_each_port(sw, port) {
++		int res;
++
++		if (!port->cap_usb4)
++			continue;
++
++		res = usb4_port_hotplug_enable(port);
++		if (res)
++			return res;
++	}
++	return 0;
++}
++
+ /**
+  * tb_switch_add() - Add a switch to the domain
+  * @sw: Switch to add
+@@ -2480,6 +2500,10 @@ int tb_switch_add(struct tb_switch *sw)
+ 			return ret;
+ 	}
+ 
++	ret = tb_switch_port_hotplug_enable(sw);
++	if (ret)
++		return ret;
++
+ 	ret = device_add(&sw->dev);
+ 	if (ret) {
+ 		dev_err(&sw->dev, "failed to add device: %d\n", ret);
+diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
+index 8ea360b0ff773..266f3bf8ff5c6 100644
+--- a/drivers/thunderbolt/tb.h
++++ b/drivers/thunderbolt/tb.h
+@@ -979,6 +979,7 @@ struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
+ 					  const struct tb_port *port);
+ 
+ int usb4_port_unlock(struct tb_port *port);
++int usb4_port_hotplug_enable(struct tb_port *port);
+ int usb4_port_configure(struct tb_port *port);
+ void usb4_port_unconfigure(struct tb_port *port);
+ int usb4_port_configure_xdomain(struct tb_port *port);
+diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
+index e7d9529822fab..26868e2f9d0bd 100644
+--- a/drivers/thunderbolt/tb_regs.h
++++ b/drivers/thunderbolt/tb_regs.h
+@@ -285,6 +285,7 @@ struct tb_regs_port_header {
+ #define ADP_CS_5				0x05
+ #define ADP_CS_5_LCA_MASK			GENMASK(28, 22)
+ #define ADP_CS_5_LCA_SHIFT			22
++#define ADP_CS_5_DHP				BIT(31)
+ 
+ /* TMU adapter registers */
+ #define TMU_ADP_CS_3				0x03
+diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
+index c05ec6fad77f6..0b3a77ade04d9 100644
+--- a/drivers/thunderbolt/usb4.c
++++ b/drivers/thunderbolt/usb4.c
+@@ -854,6 +854,26 @@ int usb4_port_unlock(struct tb_port *port)
+ 	return tb_port_write(port, &val, TB_CFG_PORT, ADP_CS_4, 1);
+ }
+ 
++/**
++ * usb4_port_hotplug_enable() - Enables hotplug for a port
++ * @port: USB4 port to operate on
++ *
++ * Enables hot plug events on a given port. This is only intended
++ * to be used on lane, DP-IN, and DP-OUT adapters.
++ */
++int usb4_port_hotplug_enable(struct tb_port *port)
++{
++	int ret;
++	u32 val;
++
++	ret = tb_port_read(port, &val, TB_CFG_PORT, ADP_CS_5, 1);
++	if (ret)
++		return ret;
++
++	val &= ~ADP_CS_5_DHP;
++	return tb_port_write(port, &val, TB_CFG_PORT, ADP_CS_5, 1);
++}
++
+ static int usb4_port_set_configured(struct tb_port *port, bool configured)
+ {
+ 	int ret;
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index 98ce484f1089d..0a7e9491b4d14 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -310,10 +310,9 @@ static void serial8250_backup_timeout(struct timer_list *t)
+ 		jiffies + uart_poll_timeout(&up->port) + HZ / 5);
+ }
+ 
+-static int univ8250_setup_irq(struct uart_8250_port *up)
++static void univ8250_setup_timer(struct uart_8250_port *up)
+ {
+ 	struct uart_port *port = &up->port;
+-	int retval = 0;
+ 
+ 	/*
+ 	 * The above check will only give an accurate result the first time
+@@ -332,12 +331,18 @@ static int univ8250_setup_irq(struct uart_8250_port *up)
+ 	 * hardware interrupt, we use a timer-based system.  The original
+ 	 * driver used to do this with IRQ0.
+ 	 */
+-	if (!port->irq) {
++	if (!port->irq)
+ 		mod_timer(&up->timer, jiffies + uart_poll_timeout(port));
+-	} else
+-		retval = serial_link_irq_chain(up);
++}
+ 
+-	return retval;
++static int univ8250_setup_irq(struct uart_8250_port *up)
++{
++	struct uart_port *port = &up->port;
++
++	if (port->irq)
++		return serial_link_irq_chain(up);
++
++	return 0;
+ }
+ 
+ static void univ8250_release_irq(struct uart_8250_port *up)
+@@ -393,6 +398,7 @@ static struct uart_ops univ8250_port_ops;
+ static const struct uart_8250_ops univ8250_driver_ops = {
+ 	.setup_irq	= univ8250_setup_irq,
+ 	.release_irq	= univ8250_release_irq,
++	.setup_timer	= univ8250_setup_timer,
+ };
+ 
+ static struct uart_8250_port serial8250_ports[UART_NR];
+@@ -766,6 +772,7 @@ void serial8250_suspend_port(int line)
+ 	if (!console_suspend_enabled && uart_console(port) &&
+ 	    port->type != PORT_8250) {
+ 		unsigned char canary = 0xa5;
++
+ 		serial_out(up, UART_SCR, canary);
+ 		if (serial_in(up, UART_SCR) == canary)
+ 			up->canary = canary;
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 9d60418e4adb1..71d143c002488 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2276,6 +2276,10 @@ int serial8250_do_startup(struct uart_port *port)
+ 	if (port->irq && (up->port.flags & UPF_SHARE_IRQ))
+ 		up->port.irqflags |= IRQF_SHARED;
+ 
++	retval = up->ops->setup_irq(up);
++	if (retval)
++		goto out;
++
+ 	if (port->irq && !(up->port.flags & UPF_NO_THRE_TEST)) {
+ 		unsigned char iir1;
+ 
+@@ -2318,9 +2322,7 @@ int serial8250_do_startup(struct uart_port *port)
+ 		}
+ 	}
+ 
+-	retval = up->ops->setup_irq(up);
+-	if (retval)
+-		goto out;
++	up->ops->setup_timer(up);
+ 
+ 	/*
+ 	 * Now, initialize the UART
+@@ -3286,8 +3288,13 @@ static void serial8250_console_restore(struct uart_8250_port *up)
+ 	unsigned int baud, quot, frac = 0;
+ 
+ 	termios.c_cflag = port->cons->cflag;
+-	if (port->state->port.tty && termios.c_cflag == 0)
++	termios.c_ispeed = port->cons->ispeed;
++	termios.c_ospeed = port->cons->ospeed;
++	if (port->state->port.tty && termios.c_cflag == 0) {
+ 		termios.c_cflag = port->state->port.tty->termios.c_cflag;
++		termios.c_ispeed = port->state->port.tty->termios.c_ispeed;
++		termios.c_ospeed = port->state->port.tty->termios.c_ospeed;
++	}
+ 
+ 	baud = serial8250_get_baud_rate(port, &termios, NULL);
+ 	quot = serial8250_get_divisor(port, baud, &frac);
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index a2c4eab0b4703..269d1e3a025d2 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1725,6 +1725,7 @@ static void lpuart_dma_shutdown(struct lpuart_port *sport)
+ 	if (sport->lpuart_dma_rx_use) {
+ 		del_timer_sync(&sport->lpuart_timer);
+ 		lpuart_dma_rx_free(&sport->port);
++		sport->lpuart_dma_rx_use = false;
+ 	}
+ 
+ 	if (sport->lpuart_dma_tx_use) {
+@@ -1733,6 +1734,7 @@ static void lpuart_dma_shutdown(struct lpuart_port *sport)
+ 			sport->dma_tx_in_progress = false;
+ 			dmaengine_terminate_all(sport->dma_tx_chan);
+ 		}
++		sport->lpuart_dma_tx_use = false;
+ 	}
+ 
+ 	if (sport->dma_tx_chan)
+diff --git a/drivers/tty/serial/jsm/jsm_driver.c b/drivers/tty/serial/jsm/jsm_driver.c
+index cd30da0ef0834..b5b61e598b533 100644
+--- a/drivers/tty/serial/jsm/jsm_driver.c
++++ b/drivers/tty/serial/jsm/jsm_driver.c
+@@ -212,7 +212,8 @@ static int jsm_probe_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 		break;
+ 	default:
+-		return -ENXIO;
++		rc = -ENXIO;
++		goto out_kfree_brd;
+ 	}
+ 
+ 	rc = request_irq(brd->irq, brd->bd_ops->intr, IRQF_SHARED, "JSM", brd);
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index b5a8afbc452ba..f7dfa123907ab 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -375,6 +375,8 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id)
+ 		isrstatus &= ~CDNS_UART_IXR_TXEMPTY;
+ 	}
+ 
++	isrstatus &= port->read_status_mask;
++	isrstatus &= ~port->ignore_status_mask;
+ 	/*
+ 	 * Skip RX processing if RX is disabled as RXEMPTY will never be set
+ 	 * as read bytes will not be removed from the FIFO.
+diff --git a/drivers/usb/common/common.c b/drivers/usb/common/common.c
+index 1433260d99b48..347fb3d3894a5 100644
+--- a/drivers/usb/common/common.c
++++ b/drivers/usb/common/common.c
+@@ -25,6 +25,12 @@ static const char *const ep_type_names[] = {
+ 	[USB_ENDPOINT_XFER_INT] = "intr",
+ };
+ 
++/**
++ * usb_ep_type_string() - Returns human readable-name of the endpoint type.
++ * @ep_type: The endpoint type to return human-readable name for.  If it's not
++ *   any of the types: USB_ENDPOINT_XFER_{CONTROL, ISOC, BULK, INT},
++ *   usually got by usb_endpoint_type(), the string 'unknown' will be returned.
++ */
+ const char *usb_ep_type_string(int ep_type)
+ {
+ 	if (ep_type < 0 || ep_type >= ARRAY_SIZE(ep_type_names))
+@@ -69,6 +75,19 @@ static const char *const speed_names[] = {
+ 	[USB_SPEED_SUPER_PLUS] = "super-speed-plus",
+ };
+ 
++static const char *const ssp_rate[] = {
++	[USB_SSP_GEN_UNKNOWN] = "UNKNOWN",
++	[USB_SSP_GEN_2x1] = "super-speed-plus-gen2x1",
++	[USB_SSP_GEN_1x2] = "super-speed-plus-gen1x2",
++	[USB_SSP_GEN_2x2] = "super-speed-plus-gen2x2",
++};
++
++/**
++ * usb_speed_string() - Returns human readable-name of the speed.
++ * @speed: The speed to return human-readable name for.  If it's not
++ *   any of the speeds defined in usb_device_speed enum, string for
++ *   USB_SPEED_UNKNOWN will be returned.
++ */
+ const char *usb_speed_string(enum usb_device_speed speed)
+ {
+ 	if (speed < 0 || speed >= ARRAY_SIZE(speed_names))
+@@ -77,6 +96,14 @@ const char *usb_speed_string(enum usb_device_speed speed)
+ }
+ EXPORT_SYMBOL_GPL(usb_speed_string);
+ 
++/**
++ * usb_get_maximum_speed - Get maximum requested speed for a given USB
++ * controller.
++ * @dev: Pointer to the given USB controller device
++ *
++ * The function gets the maximum speed string from property "maximum-speed",
++ * and returns the corresponding enum usb_device_speed.
++ */
+ enum usb_device_speed usb_get_maximum_speed(struct device *dev)
+ {
+ 	const char *maximum_speed;
+@@ -86,12 +113,44 @@ enum usb_device_speed usb_get_maximum_speed(struct device *dev)
+ 	if (ret < 0)
+ 		return USB_SPEED_UNKNOWN;
+ 
+-	ret = match_string(speed_names, ARRAY_SIZE(speed_names), maximum_speed);
++	ret = match_string(ssp_rate, ARRAY_SIZE(ssp_rate), maximum_speed);
++	if (ret > 0)
++		return USB_SPEED_SUPER_PLUS;
+ 
++	ret = match_string(speed_names, ARRAY_SIZE(speed_names), maximum_speed);
+ 	return (ret < 0) ? USB_SPEED_UNKNOWN : ret;
+ }
+ EXPORT_SYMBOL_GPL(usb_get_maximum_speed);
+ 
++/**
++ * usb_get_maximum_ssp_rate - Get the signaling rate generation and lane count
++ *	of a SuperSpeed Plus capable device.
++ * @dev: Pointer to the given USB controller device
++ *
++ * If the string from "maximum-speed" property is super-speed-plus-genXxY where
++ * 'X' is the generation number and 'Y' is the number of lanes, then this
++ * function returns the corresponding enum usb_ssp_rate.
++ */
++enum usb_ssp_rate usb_get_maximum_ssp_rate(struct device *dev)
++{
++	const char *maximum_speed;
++	int ret;
++
++	ret = device_property_read_string(dev, "maximum-speed", &maximum_speed);
++	if (ret < 0)
++		return USB_SSP_GEN_UNKNOWN;
++
++	ret = match_string(ssp_rate, ARRAY_SIZE(ssp_rate), maximum_speed);
++	return (ret < 0) ? USB_SSP_GEN_UNKNOWN : ret;
++}
++EXPORT_SYMBOL_GPL(usb_get_maximum_ssp_rate);
++
++/**
++ * usb_state_string - Returns human readable name for the state.
++ * @state: The state to return a human-readable name for. If it's not
++ *	any of the states devices in usb_device_state_string enum,
++ *	the string UNKNOWN will be returned.
++ */
+ const char *usb_state_string(enum usb_device_state state)
+ {
+ 	static const char *const names[] = {
+@@ -141,6 +200,47 @@ enum usb_dr_mode usb_get_dr_mode(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(usb_get_dr_mode);
+ 
++/**
++ * usb_decode_interval - Decode bInterval into the time expressed in 1us unit
++ * @epd: The descriptor of the endpoint
++ * @speed: The speed that the endpoint works as
++ *
++ * Function returns the interval expressed in 1us unit for servicing
++ * endpoint for data transfers.
++ */
++unsigned int usb_decode_interval(const struct usb_endpoint_descriptor *epd,
++				 enum usb_device_speed speed)
++{
++	unsigned int interval = 0;
++
++	switch (usb_endpoint_type(epd)) {
++	case USB_ENDPOINT_XFER_CONTROL:
++		/* uframes per NAK */
++		if (speed == USB_SPEED_HIGH)
++			interval = epd->bInterval;
++		break;
++	case USB_ENDPOINT_XFER_ISOC:
++		interval = 1 << (epd->bInterval - 1);
++		break;
++	case USB_ENDPOINT_XFER_BULK:
++		/* uframes per NAK */
++		if (speed == USB_SPEED_HIGH && usb_endpoint_dir_out(epd))
++			interval = epd->bInterval;
++		break;
++	case USB_ENDPOINT_XFER_INT:
++		if (speed >= USB_SPEED_HIGH)
++			interval = 1 << (epd->bInterval - 1);
++		else
++			interval = epd->bInterval;
++		break;
++	}
++
++	interval *= (speed >= USB_SPEED_HIGH) ? 125 : 1000;
++
++	return interval;
++}
++EXPORT_SYMBOL_GPL(usb_decode_interval);
++
+ #ifdef CONFIG_OF
+ /**
+  * of_usb_get_dr_mode_by_phy - Get dual role mode for the controller device
+diff --git a/drivers/usb/common/debug.c b/drivers/usb/common/debug.c
+index ba849c7bc5c7f..f0c0e8db70388 100644
+--- a/drivers/usb/common/debug.c
++++ b/drivers/usb/common/debug.c
+@@ -207,12 +207,28 @@ static void usb_decode_set_isoch_delay(__u8 wValue, char *str, size_t size)
+ 	snprintf(str, size, "Set Isochronous Delay(Delay = %d ns)", wValue);
+ }
+ 
+-/*
+- * usb_decode_ctrl - returns a string representation of ctrl request
+- */
+-const char *usb_decode_ctrl(char *str, size_t size, __u8 bRequestType,
+-			    __u8 bRequest, __u16 wValue, __u16 wIndex,
+-			    __u16 wLength)
++static void usb_decode_ctrl_generic(char *str, size_t size, __u8 bRequestType,
++				    __u8 bRequest, __u16 wValue, __u16 wIndex,
++				    __u16 wLength)
++{
++	u8 recip = bRequestType & USB_RECIP_MASK;
++	u8 type = bRequestType & USB_TYPE_MASK;
++
++	snprintf(str, size,
++		 "Type=%s Recipient=%s Dir=%s bRequest=%u wValue=%u wIndex=%u wLength=%u",
++		 (type == USB_TYPE_STANDARD)    ? "Standard" :
++		 (type == USB_TYPE_VENDOR)      ? "Vendor" :
++		 (type == USB_TYPE_CLASS)       ? "Class" : "Unknown",
++		 (recip == USB_RECIP_DEVICE)    ? "Device" :
++		 (recip == USB_RECIP_INTERFACE) ? "Interface" :
++		 (recip == USB_RECIP_ENDPOINT)  ? "Endpoint" : "Unknown",
++		 (bRequestType & USB_DIR_IN)    ? "IN" : "OUT",
++		 bRequest, wValue, wIndex, wLength);
++}
++
++static void usb_decode_ctrl_standard(char *str, size_t size, __u8 bRequestType,
++				     __u8 bRequest, __u16 wValue, __u16 wIndex,
++				     __u16 wLength)
+ {
+ 	switch (bRequest) {
+ 	case USB_REQ_GET_STATUS:
+@@ -253,14 +269,48 @@ const char *usb_decode_ctrl(char *str, size_t size, __u8 bRequestType,
+ 		usb_decode_set_isoch_delay(wValue, str, size);
+ 		break;
+ 	default:
+-		snprintf(str, size, "%02x %02x %02x %02x %02x %02x %02x %02x",
+-			 bRequestType, bRequest,
+-			 (u8)(cpu_to_le16(wValue) & 0xff),
+-			 (u8)(cpu_to_le16(wValue) >> 8),
+-			 (u8)(cpu_to_le16(wIndex) & 0xff),
+-			 (u8)(cpu_to_le16(wIndex) >> 8),
+-			 (u8)(cpu_to_le16(wLength) & 0xff),
+-			 (u8)(cpu_to_le16(wLength) >> 8));
++		usb_decode_ctrl_generic(str, size, bRequestType, bRequest,
++					wValue, wIndex, wLength);
++		break;
++	}
++}
++
++/**
++ * usb_decode_ctrl - Returns human readable representation of control request.
++ * @str: buffer to return a human-readable representation of control request.
++ *       This buffer should have about 200 bytes.
++ * @size: size of str buffer.
++ * @bRequestType: matches the USB bmRequestType field
++ * @bRequest: matches the USB bRequest field
++ * @wValue: matches the USB wValue field (CPU byte order)
++ * @wIndex: matches the USB wIndex field (CPU byte order)
++ * @wLength: matches the USB wLength field (CPU byte order)
++ *
++ * Function returns decoded, formatted and human-readable description of
++ * control request packet.
++ *
++ * The usage scenario for this is for tracepoints, so function as a return
++ * use the same value as in parameters. This approach allows to use this
++ * function in TP_printk
++ *
++ * Important: wValue, wIndex, wLength parameters before invoking this function
++ * should be processed by le16_to_cpu macro.
++ */
++const char *usb_decode_ctrl(char *str, size_t size, __u8 bRequestType,
++			    __u8 bRequest, __u16 wValue, __u16 wIndex,
++			    __u16 wLength)
++{
++	switch (bRequestType & USB_TYPE_MASK) {
++	case USB_TYPE_STANDARD:
++		usb_decode_ctrl_standard(str, size, bRequestType, bRequest,
++					 wValue, wIndex, wLength);
++		break;
++	case USB_TYPE_VENDOR:
++	case USB_TYPE_CLASS:
++	default:
++		usb_decode_ctrl_generic(str, size, bRequestType, bRequest,
++					wValue, wIndex, wLength);
++		break;
+ 	}
+ 
+ 	return str;
+diff --git a/drivers/usb/core/devices.c b/drivers/usb/core/devices.c
+index 1ef2de6e375ac..d8b0041de612f 100644
+--- a/drivers/usb/core/devices.c
++++ b/drivers/usb/core/devices.c
+@@ -157,38 +157,25 @@ static char *usb_dump_endpoint_descriptor(int speed, char *start, char *end,
+ 	switch (usb_endpoint_type(desc)) {
+ 	case USB_ENDPOINT_XFER_CONTROL:
+ 		type = "Ctrl";
+-		if (speed == USB_SPEED_HIGH)	/* uframes per NAK */
+-			interval = desc->bInterval;
+-		else
+-			interval = 0;
+ 		dir = 'B';			/* ctrl is bidirectional */
+ 		break;
+ 	case USB_ENDPOINT_XFER_ISOC:
+ 		type = "Isoc";
+-		interval = 1 << (desc->bInterval - 1);
+ 		break;
+ 	case USB_ENDPOINT_XFER_BULK:
+ 		type = "Bulk";
+-		if (speed == USB_SPEED_HIGH && dir == 'O') /* uframes per NAK */
+-			interval = desc->bInterval;
+-		else
+-			interval = 0;
+ 		break;
+ 	case USB_ENDPOINT_XFER_INT:
+ 		type = "Int.";
+-		if (speed == USB_SPEED_HIGH || speed >= USB_SPEED_SUPER)
+-			interval = 1 << (desc->bInterval - 1);
+-		else
+-			interval = desc->bInterval;
+ 		break;
+ 	default:	/* "can't happen" */
+ 		return start;
+ 	}
+-	interval *= (speed == USB_SPEED_HIGH ||
+-		     speed >= USB_SPEED_SUPER) ? 125 : 1000;
+-	if (interval % 1000)
++
++	interval = usb_decode_interval(desc, speed);
++	if (interval % 1000) {
+ 		unit = 'u';
+-	else {
++	} else {
+ 		unit = 'm';
+ 		interval /= 1000;
+ 	}
+diff --git a/drivers/usb/core/endpoint.c b/drivers/usb/core/endpoint.c
+index 1c2c040796760..fc3341f2bb61a 100644
+--- a/drivers/usb/core/endpoint.c
++++ b/drivers/usb/core/endpoint.c
+@@ -84,40 +84,13 @@ static ssize_t interval_show(struct device *dev, struct device_attribute *attr,
+ 			     char *buf)
+ {
+ 	struct ep_device *ep = to_ep_device(dev);
++	unsigned int interval;
+ 	char unit;
+-	unsigned interval = 0;
+-	unsigned in;
+ 
+-	in = (ep->desc->bEndpointAddress & USB_DIR_IN);
+-
+-	switch (usb_endpoint_type(ep->desc)) {
+-	case USB_ENDPOINT_XFER_CONTROL:
+-		if (ep->udev->speed == USB_SPEED_HIGH)
+-			/* uframes per NAK */
+-			interval = ep->desc->bInterval;
+-		break;
+-
+-	case USB_ENDPOINT_XFER_ISOC:
+-		interval = 1 << (ep->desc->bInterval - 1);
+-		break;
+-
+-	case USB_ENDPOINT_XFER_BULK:
+-		if (ep->udev->speed == USB_SPEED_HIGH && !in)
+-			/* uframes per NAK */
+-			interval = ep->desc->bInterval;
+-		break;
+-
+-	case USB_ENDPOINT_XFER_INT:
+-		if (ep->udev->speed == USB_SPEED_HIGH)
+-			interval = 1 << (ep->desc->bInterval - 1);
+-		else
+-			interval = ep->desc->bInterval;
+-		break;
+-	}
+-	interval *= (ep->udev->speed == USB_SPEED_HIGH) ? 125 : 1000;
+-	if (interval % 1000)
++	interval = usb_decode_interval(ep->desc, ep->udev->speed);
++	if (interval % 1000) {
+ 		unit = 'u';
+-	else {
++	} else {
+ 		unit = 'm';
+ 		interval /= 1000;
+ 	}
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index f03ee889ecc70..03473e20e2186 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -438,6 +438,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x1532, 0x0116), .driver_info =
+ 			USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL },
+ 
++	/* Lenovo ThinkPad OneLink+ Dock twin hub controllers (VIA Labs VL812) */
++	{ USB_DEVICE(0x17ef, 0x1018), .driver_info = USB_QUIRK_RESET_RESUME },
++	{ USB_DEVICE(0x17ef, 0x1019), .driver_info = USB_QUIRK_RESET_RESUME },
++
+ 	/* Lenovo USB-C to Ethernet Adapter RTL8153-04 */
+ 	{ USB_DEVICE(0x17ef, 0x720c), .driver_info = USB_QUIRK_NO_LPM },
+ 
+diff --git a/drivers/usb/gadget/function/f_printer.c b/drivers/usb/gadget/function/f_printer.c
+index 236ecc9689985..c13bb29a160e8 100644
+--- a/drivers/usb/gadget/function/f_printer.c
++++ b/drivers/usb/gadget/function/f_printer.c
+@@ -87,7 +87,7 @@ struct printer_dev {
+ 	u8			printer_cdev_open;
+ 	wait_queue_head_t	wait;
+ 	unsigned		q_len;
+-	char			*pnp_string;	/* We don't own memory! */
++	char			**pnp_string;	/* We don't own memory! */
+ 	struct usb_function	function;
+ };
+ 
+@@ -999,16 +999,16 @@ static int printer_func_setup(struct usb_function *f,
+ 			if ((wIndex>>8) != dev->interface)
+ 				break;
+ 
+-			if (!dev->pnp_string) {
++			if (!*dev->pnp_string) {
+ 				value = 0;
+ 				break;
+ 			}
+-			value = strlen(dev->pnp_string);
++			value = strlen(*dev->pnp_string);
+ 			buf[0] = (value >> 8) & 0xFF;
+ 			buf[1] = value & 0xFF;
+-			memcpy(buf + 2, dev->pnp_string, value);
++			memcpy(buf + 2, *dev->pnp_string, value);
+ 			DBG(dev, "1284 PNP String: %x %s\n", value,
+-			    dev->pnp_string);
++			    *dev->pnp_string);
+ 			break;
+ 
+ 		case GET_PORT_STATUS: /* Get Port Status */
+@@ -1471,7 +1471,7 @@ static struct usb_function *gprinter_alloc(struct usb_function_instance *fi)
+ 	kref_init(&dev->kref);
+ 	++opts->refcnt;
+ 	dev->minor = opts->minor;
+-	dev->pnp_string = opts->pnp_string;
++	dev->pnp_string = &opts->pnp_string;
+ 	dev->q_len = opts->q_len;
+ 	mutex_unlock(&opts->lock);
+ 
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 024e8911df344..1fba5605a88ea 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -659,7 +659,7 @@ struct xhci_stream_info *xhci_alloc_stream_info(struct xhci_hcd *xhci,
+ 			num_stream_ctxs, &stream_info->ctx_array_dma,
+ 			mem_flags);
+ 	if (!stream_info->stream_ctx_array)
+-		goto cleanup_ctx;
++		goto cleanup_ring_array;
+ 	memset(stream_info->stream_ctx_array, 0,
+ 			sizeof(struct xhci_stream_ctx)*num_stream_ctxs);
+ 
+@@ -720,6 +720,11 @@ cleanup_rings:
+ 	}
+ 	xhci_free_command(xhci, stream_info->free_streams_command);
+ cleanup_ctx:
++	xhci_free_stream_ctx(xhci,
++		stream_info->num_stream_ctxs,
++		stream_info->stream_ctx_array,
++		stream_info->ctx_array_dma);
++cleanup_ring_array:
+ 	kfree(stream_info->stream_rings);
+ cleanup_info:
+ 	kfree(stream_info);
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index dc570ce4e8319..972a44b2a7f12 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -134,7 +134,7 @@ static const struct xhci_plat_priv xhci_plat_renesas_rcar_gen3 = {
+ };
+ 
+ static const struct xhci_plat_priv xhci_plat_brcm = {
+-	.quirks = XHCI_RESET_ON_RESUME,
++	.quirks = XHCI_RESET_ON_RESUME | XHCI_SUSPEND_RESUME_CLKS,
+ };
+ 
+ static const struct of_device_id usb_xhci_of_match[] = {
+@@ -447,7 +447,16 @@ static int __maybe_unused xhci_plat_suspend(struct device *dev)
+ 	 * xhci_suspend() needs `do_wakeup` to know whether host is allowed
+ 	 * to do wakeup during suspend.
+ 	 */
+-	return xhci_suspend(xhci, device_may_wakeup(dev));
++	ret = xhci_suspend(xhci, device_may_wakeup(dev));
++	if (ret)
++		return ret;
++
++	if (!device_may_wakeup(dev) && (xhci->quirks & XHCI_SUSPEND_RESUME_CLKS)) {
++		clk_disable_unprepare(xhci->clk);
++		clk_disable_unprepare(xhci->reg_clk);
++	}
++
++	return 0;
+ }
+ 
+ static int __maybe_unused xhci_plat_resume(struct device *dev)
+@@ -456,6 +465,11 @@ static int __maybe_unused xhci_plat_resume(struct device *dev)
+ 	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+ 	int ret;
+ 
++	if (!device_may_wakeup(dev) && (xhci->quirks & XHCI_SUSPEND_RESUME_CLKS)) {
++		clk_prepare_enable(xhci->clk);
++		clk_prepare_enable(xhci->reg_clk);
++	}
++
+ 	ret = xhci_priv_resume_quirk(hcd);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 7b16b6b45af7d..8918e6ae5c4b6 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1163,7 +1163,8 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 	/* re-initialize the HC on Restore Error, or Host Controller Error */
+ 	if (temp & (STS_SRE | STS_HCE)) {
+ 		reinit_xhc = true;
+-		xhci_warn(xhci, "xHC error in resume, USBSTS 0x%x, Reinit\n", temp);
++		if (!xhci->broken_suspend)
++			xhci_warn(xhci, "xHC error in resume, USBSTS 0x%x, Reinit\n", temp);
+ 	}
+ 
+ 	if (reinit_xhc) {
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 6f16a05b19584..e668740000b25 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1888,6 +1888,7 @@ struct xhci_hcd {
+ #define XHCI_SG_TRB_CACHE_SIZE_QUIRK	BIT_ULL(39)
+ #define XHCI_NO_SOFT_RETRY	BIT_ULL(40)
+ #define XHCI_EP_CTX_BROKEN_DCS	BIT_ULL(42)
++#define XHCI_SUSPEND_RESUME_CLKS	BIT_ULL(43)
+ 
+ 	unsigned int		num_active_eps;
+ 	unsigned int		limit_active_eps;
+diff --git a/drivers/usb/misc/idmouse.c b/drivers/usb/misc/idmouse.c
+index e9437a176518a..ea39243efee39 100644
+--- a/drivers/usb/misc/idmouse.c
++++ b/drivers/usb/misc/idmouse.c
+@@ -177,10 +177,6 @@ static int idmouse_create_image(struct usb_idmouse *dev)
+ 		bytes_read += bulk_read;
+ 	}
+ 
+-	/* reset the device */
+-reset:
+-	ftip_command(dev, FTIP_RELEASE, 0, 0);
+-
+ 	/* check for valid image */
+ 	/* right border should be black (0x00) */
+ 	for (bytes_read = sizeof(HEADER)-1 + WIDTH-1; bytes_read < IMGSIZE; bytes_read += WIDTH)
+@@ -192,6 +188,10 @@ reset:
+ 		if (dev->bulk_in_buffer[bytes_read] != 0xFF)
+ 			return -EAGAIN;
+ 
++	/* reset the device */
++reset:
++	ftip_command(dev, FTIP_RELEASE, 0, 0);
++
+ 	/* should be IMGSIZE == 65040 */
+ 	dev_dbg(&dev->interface->dev, "read %d bytes fingerprint data\n",
+ 		bytes_read);
+diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
+index fb806b33178a0..c273eee35aaa7 100644
+--- a/drivers/usb/musb/musb_gadget.c
++++ b/drivers/usb/musb/musb_gadget.c
+@@ -760,6 +760,9 @@ static void rxstate(struct musb *musb, struct musb_request *req)
+ 			musb_writew(epio, MUSB_RXCSR, csr);
+ 
+ buffer_aint_mapped:
++			fifo_count = min_t(unsigned int,
++					request->length - request->actual,
++					(unsigned int)fifo_count);
+ 			musb_read_fifo(musb_ep->hw_ep, fifo_count, (u8 *)
+ 					(request->buf + request->actual));
+ 			request->actual += fifo_count;
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 4993227ab2930..20dcbccb290b3 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -1275,12 +1275,6 @@ UNUSUAL_DEV( 0x090a, 0x1200, 0x0000, 0x9999,
+ 		USB_SC_RBC, USB_PR_BULK, NULL,
+ 		0 ),
+ 
+-UNUSUAL_DEV(0x090c, 0x1000, 0x1100, 0x1100,
+-		"Samsung",
+-		"Flash Drive FIT",
+-		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+-		US_FL_MAX_SECTORS_64),
+-
+ /* aeb */
+ UNUSUAL_DEV( 0x090c, 0x1132, 0x0000, 0xffff,
+ 		"Feiya",
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index 5d2d6ce7ff413..b0153617fe0e0 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -359,7 +359,7 @@ vhost_vsock_alloc_pkt(struct vhost_virtqueue *vq,
+ 		return NULL;
+ 	}
+ 
+-	pkt->buf = kmalloc(pkt->len, GFP_KERNEL);
++	pkt->buf = kvmalloc(pkt->len, GFP_KERNEL);
+ 	if (!pkt->buf) {
+ 		kfree(pkt);
+ 		return NULL;
+diff --git a/drivers/video/fbdev/smscufx.c b/drivers/video/fbdev/smscufx.c
+index 28768c272b73d..7673db5da26b0 100644
+--- a/drivers/video/fbdev/smscufx.c
++++ b/drivers/video/fbdev/smscufx.c
+@@ -137,6 +137,8 @@ static int ufx_submit_urb(struct ufx_data *dev, struct urb * urb, size_t len);
+ static int ufx_alloc_urb_list(struct ufx_data *dev, int count, size_t size);
+ static void ufx_free_urb_list(struct ufx_data *dev);
+ 
++static DEFINE_MUTEX(disconnect_mutex);
++
+ /* reads a control register */
+ static int ufx_reg_read(struct ufx_data *dev, u32 index, u32 *data)
+ {
+@@ -1070,9 +1072,13 @@ static int ufx_ops_open(struct fb_info *info, int user)
+ 	if (user == 0 && !console)
+ 		return -EBUSY;
+ 
++	mutex_lock(&disconnect_mutex);
++
+ 	/* If the USB device is gone, we don't accept new opens */
+-	if (dev->virtualized)
++	if (dev->virtualized) {
++		mutex_unlock(&disconnect_mutex);
+ 		return -ENODEV;
++	}
+ 
+ 	dev->fb_count++;
+ 
+@@ -1096,6 +1102,8 @@ static int ufx_ops_open(struct fb_info *info, int user)
+ 	pr_debug("open /dev/fb%d user=%d fb_info=%p count=%d",
+ 		info->node, user, info, dev->fb_count);
+ 
++	mutex_unlock(&disconnect_mutex);
++
+ 	return 0;
+ }
+ 
+@@ -1740,6 +1748,8 @@ static void ufx_usb_disconnect(struct usb_interface *interface)
+ {
+ 	struct ufx_data *dev;
+ 
++	mutex_lock(&disconnect_mutex);
++
+ 	dev = usb_get_intfdata(interface);
+ 
+ 	pr_debug("USB disconnect starting\n");
+@@ -1760,6 +1770,8 @@ static void ufx_usb_disconnect(struct usb_interface *interface)
+ 	kref_put(&dev->kref, ufx_free);
+ 
+ 	/* consider ufx_data freed */
++
++	mutex_unlock(&disconnect_mutex);
+ }
+ 
+ static struct usb_driver ufx_driver = {
+diff --git a/drivers/video/fbdev/stifb.c b/drivers/video/fbdev/stifb.c
+index 002f265d8db58..b0470f4f595ee 100644
+--- a/drivers/video/fbdev/stifb.c
++++ b/drivers/video/fbdev/stifb.c
+@@ -1257,7 +1257,7 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
+ 	
+ 	/* limit fbsize to max visible screen size */
+ 	if (fix->smem_len > yres*fix->line_length)
+-		fix->smem_len = yres*fix->line_length;
++		fix->smem_len = ALIGN(yres*fix->line_length, 4*1024*1024);
+ 	
+ 	fix->accel = FB_ACCEL_NONE;
+ 
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index a02e38fb696c1..36da775340768 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1158,6 +1158,21 @@ out_add_root:
+ 		fs_info->qgroup_rescan_running = true;
+ 	        btrfs_queue_work(fs_info->qgroup_rescan_workers,
+ 	                         &fs_info->qgroup_rescan_work);
++	} else {
++		/*
++		 * We have set both BTRFS_FS_QUOTA_ENABLED and
++		 * BTRFS_QGROUP_STATUS_FLAG_ON, so we can only fail with
++		 * -EINPROGRESS. That can happen because someone started the
++		 * rescan worker by calling quota rescan ioctl before we
++		 * attempted to initialize the rescan worker. Failure due to
++		 * quotas disabled in the meanwhile is not possible, because
++		 * we are holding a write lock on fs_info->subvol_sem, which
++		 * is also acquired when disabling quotas.
++		 * Ignore such error, and any other error would need to undo
++		 * everything we did in the transaction we just committed.
++		 */
++		ASSERT(ret == -EINPROGRESS);
++		ret = 0;
+ 	}
+ 
+ out_free_path:
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 0392c556af601..88b9a5394561e 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -3811,6 +3811,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 	int ret;
+ 	struct btrfs_device *dev;
+ 	unsigned int nofs_flag;
++	bool need_commit = false;
+ 
+ 	if (btrfs_fs_closing(fs_info))
+ 		return -EAGAIN;
+@@ -3924,6 +3925,12 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 	 */
+ 	nofs_flag = memalloc_nofs_save();
+ 	if (!is_dev_replace) {
++		u64 old_super_errors;
++
++		spin_lock(&sctx->stat_lock);
++		old_super_errors = sctx->stat.super_errors;
++		spin_unlock(&sctx->stat_lock);
++
+ 		btrfs_info(fs_info, "scrub: started on devid %llu", devid);
+ 		/*
+ 		 * by holding device list mutex, we can
+@@ -3932,6 +3939,16 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 		mutex_lock(&fs_info->fs_devices->device_list_mutex);
+ 		ret = scrub_supers(sctx, dev);
+ 		mutex_unlock(&fs_info->fs_devices->device_list_mutex);
++
++		spin_lock(&sctx->stat_lock);
++		/*
++		 * Super block errors found, but we can not commit transaction
++		 * at current context, since btrfs_commit_transaction() needs
++		 * to pause the current running scrub (hold by ourselves).
++		 */
++		if (sctx->stat.super_errors > old_super_errors && !sctx->readonly)
++			need_commit = true;
++		spin_unlock(&sctx->stat_lock);
+ 	}
+ 
+ 	if (!ret)
+@@ -3958,6 +3975,25 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 	scrub_workers_put(fs_info);
+ 	scrub_put_ctx(sctx);
+ 
++	/*
++	 * We found some super block errors before, now try to force a
++	 * transaction commit, as scrub has finished.
++	 */
++	if (need_commit) {
++		struct btrfs_trans_handle *trans;
++
++		trans = btrfs_start_transaction(fs_info->tree_root, 0);
++		if (IS_ERR(trans)) {
++			ret = PTR_ERR(trans);
++			btrfs_err(fs_info,
++	"scrub: failed to start transaction to fix super block errors: %d", ret);
++			return ret;
++		}
++		ret = btrfs_commit_transaction(trans);
++		if (ret < 0)
++			btrfs_err(fs_info,
++	"scrub: failed to commit transaction to fix super block errors: %d", ret);
++	}
+ 	return ret;
+ out:
+ 	scrub_workers_put(fs_info);
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index fafb69d338c26..a648146e49cfa 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -3936,6 +3936,15 @@ static ssize_t __cifs_readv(
+ 		len = ctx->len;
+ 	}
+ 
++	if (direct) {
++		rc = filemap_write_and_wait_range(file->f_inode->i_mapping,
++						  offset, offset + len - 1);
++		if (rc) {
++			kref_put(&ctx->refcount, cifs_aio_ctx_release);
++			return -EAGAIN;
++		}
++	}
++
+ 	/* grab a lock here due to read response handlers can access ctx */
+ 	mutex_lock(&ctx->aio_mutex);
+ 
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 7ee8abd1f79be..0c4a2474e75be 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1075,9 +1075,9 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ 		pneg_inbuf->Dialects[0] =
+ 			cpu_to_le16(server->vals->protocol_id);
+ 		pneg_inbuf->DialectCount = cpu_to_le16(1);
+-		/* structure is big enough for 3 dialects, sending only 1 */
++		/* structure is big enough for 4 dialects, sending only 1 */
+ 		inbuflen = sizeof(*pneg_inbuf) -
+-				sizeof(pneg_inbuf->Dialects[0]) * 2;
++				sizeof(pneg_inbuf->Dialects[0]) * 3;
+ 	}
+ 
+ 	rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
+@@ -2294,7 +2294,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
+ 	unsigned int acelen, acl_size, ace_count;
+ 	unsigned int owner_offset = 0;
+ 	unsigned int group_offset = 0;
+-	struct smb3_acl acl;
++	struct smb3_acl acl = {};
+ 
+ 	*len = roundup(sizeof(struct crt_sd_ctxt) + (sizeof(struct cifs_ace) * 4), 8);
+ 
+@@ -2367,6 +2367,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
+ 	acl.AclRevision = ACL_REVISION; /* See 2.4.4.1 of MS-DTYP */
+ 	acl.AclSize = cpu_to_le16(acl_size);
+ 	acl.AceCount = cpu_to_le16(ace_count);
++	/* acl.Sbz1 and Sbz2 MBZ so are not set here, but initialized above */
+ 	memcpy(aclptr, &acl, sizeof(struct smb3_acl));
+ 
+ 	buf->ccontext.DataLength = cpu_to_le32(ptr - (__u8 *)&buf->sd);
+diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
+index 283c7b94eddad..ca06069e95c8c 100644
+--- a/fs/dlm/ast.c
++++ b/fs/dlm/ast.c
+@@ -198,13 +198,13 @@ void dlm_add_cb(struct dlm_lkb *lkb, uint32_t flags, int mode, int status,
+ 	if (!prev_seq) {
+ 		kref_get(&lkb->lkb_ref);
+ 
++		mutex_lock(&ls->ls_cb_mutex);
+ 		if (test_bit(LSFL_CB_DELAY, &ls->ls_flags)) {
+-			mutex_lock(&ls->ls_cb_mutex);
+ 			list_add(&lkb->lkb_cb_list, &ls->ls_cb_delay);
+-			mutex_unlock(&ls->ls_cb_mutex);
+ 		} else {
+ 			queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work);
+ 		}
++		mutex_unlock(&ls->ls_cb_mutex);
+ 	}
+  out:
+ 	mutex_unlock(&lkb->lkb_cb_mutex);
+@@ -284,7 +284,9 @@ void dlm_callback_stop(struct dlm_ls *ls)
+ 
+ void dlm_callback_suspend(struct dlm_ls *ls)
+ {
++	mutex_lock(&ls->ls_cb_mutex);
+ 	set_bit(LSFL_CB_DELAY, &ls->ls_flags);
++	mutex_unlock(&ls->ls_cb_mutex);
+ 
+ 	if (ls->ls_callback_wq)
+ 		flush_workqueue(ls->ls_callback_wq);
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index eaa28d654e9f0..dde9afb6747ba 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -2888,24 +2888,24 @@ static int set_unlock_args(uint32_t flags, void *astarg, struct dlm_args *args)
+ static int validate_lock_args(struct dlm_ls *ls, struct dlm_lkb *lkb,
+ 			      struct dlm_args *args)
+ {
+-	int rv = -EINVAL;
++	int rv = -EBUSY;
+ 
+ 	if (args->flags & DLM_LKF_CONVERT) {
+-		if (lkb->lkb_flags & DLM_IFL_MSTCPY)
++		if (lkb->lkb_status != DLM_LKSTS_GRANTED)
+ 			goto out;
+ 
+-		if (args->flags & DLM_LKF_QUECVT &&
+-		    !__quecvt_compat_matrix[lkb->lkb_grmode+1][args->mode+1])
++		if (lkb->lkb_wait_type)
+ 			goto out;
+ 
+-		rv = -EBUSY;
+-		if (lkb->lkb_status != DLM_LKSTS_GRANTED)
++		if (is_overlap(lkb))
+ 			goto out;
+ 
+-		if (lkb->lkb_wait_type)
++		rv = -EINVAL;
++		if (lkb->lkb_flags & DLM_IFL_MSTCPY)
+ 			goto out;
+ 
+-		if (is_overlap(lkb))
++		if (args->flags & DLM_LKF_QUECVT &&
++		    !__quecvt_compat_matrix[lkb->lkb_grmode+1][args->mode+1])
+ 			goto out;
+ 	}
+ 
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 501e60713010e..41dcf21558c4e 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -766,22 +766,25 @@ static int ext4_fc_write_inode(struct inode *inode, u32 *crc)
+ 	tl.fc_tag = cpu_to_le16(EXT4_FC_TAG_INODE);
+ 	tl.fc_len = cpu_to_le16(inode_len + sizeof(fc_inode.fc_ino));
+ 
++	ret = -ECANCELED;
+ 	dst = ext4_fc_reserve_space(inode->i_sb,
+ 			sizeof(tl) + inode_len + sizeof(fc_inode.fc_ino), crc);
+ 	if (!dst)
+-		return -ECANCELED;
++		goto err;
+ 
+ 	if (!ext4_fc_memcpy(inode->i_sb, dst, &tl, sizeof(tl), crc))
+-		return -ECANCELED;
++		goto err;
+ 	dst += sizeof(tl);
+ 	if (!ext4_fc_memcpy(inode->i_sb, dst, &fc_inode, sizeof(fc_inode), crc))
+-		return -ECANCELED;
++		goto err;
+ 	dst += sizeof(fc_inode);
+ 	if (!ext4_fc_memcpy(inode->i_sb, dst, (u8 *)ext4_raw_inode(&iloc),
+ 					inode_len, crc))
+-		return -ECANCELED;
+-
+-	return 0;
++		goto err;
++	ret = 0;
++err:
++	brelse(iloc.bh);
++	return ret;
+ }
+ 
+ /*
+@@ -1388,13 +1391,15 @@ static int ext4_fc_record_modified_inode(struct super_block *sb, int ino)
+ 		if (state->fc_modified_inodes[i] == ino)
+ 			return 0;
+ 	if (state->fc_modified_inodes_used == state->fc_modified_inodes_size) {
+-		state->fc_modified_inodes = krealloc(
+-				state->fc_modified_inodes,
++		int *fc_modified_inodes;
++
++		fc_modified_inodes = krealloc(state->fc_modified_inodes,
+ 				sizeof(int) * (state->fc_modified_inodes_size +
+ 				EXT4_FC_REPLAY_REALLOC_INCREMENT),
+ 				GFP_KERNEL);
+-		if (!state->fc_modified_inodes)
++		if (!fc_modified_inodes)
+ 			return -ENOMEM;
++		state->fc_modified_inodes = fc_modified_inodes;
+ 		state->fc_modified_inodes_size +=
+ 			EXT4_FC_REPLAY_REALLOC_INCREMENT;
+ 	}
+@@ -1579,15 +1584,18 @@ int ext4_fc_record_regions(struct super_block *sb, int ino,
+ 	if (replay && state->fc_regions_used != state->fc_regions_valid)
+ 		state->fc_regions_used = state->fc_regions_valid;
+ 	if (state->fc_regions_used == state->fc_regions_size) {
++		struct ext4_fc_alloc_region *fc_regions;
++
++		fc_regions = krealloc(state->fc_regions,
++				      sizeof(struct ext4_fc_alloc_region) *
++				      (state->fc_regions_size +
++				       EXT4_FC_REPLAY_REALLOC_INCREMENT),
++				      GFP_KERNEL);
++		if (!fc_regions)
++			return -ENOMEM;
+ 		state->fc_regions_size +=
+ 			EXT4_FC_REPLAY_REALLOC_INCREMENT;
+-		state->fc_regions = krealloc(
+-					state->fc_regions,
+-					state->fc_regions_size *
+-					sizeof(struct ext4_fc_alloc_region),
+-					GFP_KERNEL);
+-		if (!state->fc_regions)
+-			return -ENOMEM;
++		state->fc_regions = fc_regions;
+ 	}
+ 	region = &state->fc_regions[state->fc_regions_used++];
+ 	region->ino = ino;
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 7b28d44b0ddd1..0f61e0aa85d6f 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -529,6 +529,12 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 		ret = -EAGAIN;
+ 		goto out;
+ 	}
++	/*
++	 * Make sure inline data cannot be created anymore since we are going
++	 * to allocate blocks for DIO. We know the inode does not have any
++	 * inline data now because ext4_dio_supported() checked for that.
++	 */
++	ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+ 
+ 	offset = iocb->ki_pos;
+ 	count = ret;
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 44b6d061ed71c..45f31dc1e66ff 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1175,6 +1175,13 @@ retry_grab:
+ 	page = grab_cache_page_write_begin(mapping, index, flags);
+ 	if (!page)
+ 		return -ENOMEM;
++	/*
++	 * The same as page allocation, we prealloc buffer heads before
++	 * starting the handle.
++	 */
++	if (!page_has_buffers(page))
++		create_empty_buffers(page, inode->i_sb->s_blocksize, 0);
++
+ 	unlock_page(page);
+ 
+ retry_journal:
+@@ -5769,7 +5776,12 @@ int ext4_mark_iloc_dirty(handle_t *handle,
+ 	}
+ 	ext4_fc_track_inode(handle, inode);
+ 
+-	if (IS_I_VERSION(inode))
++	/*
++	 * ea_inodes are using i_version for storing reference count, don't
++	 * mess with it
++	 */
++	if (IS_I_VERSION(inode) &&
++	    !(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL))
+ 		inode_inc_iversion(inode);
+ 
+ 	/* the do_update_inode consumes one bh->b_count */
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 58b0f1b12095b..646cc1935dffe 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -125,7 +125,7 @@ static struct buffer_head *__ext4_read_dirblock(struct inode *inode,
+ 	struct ext4_dir_entry *dirent;
+ 	int is_dx_block = 0;
+ 
+-	if (block >= inode->i_size) {
++	if (block >= inode->i_size >> inode->i_blkbits) {
+ 		ext4_error_inode(inode, func, line, block,
+ 		       "Attempting to read directory block (%u) that is past i_size (%llu)",
+ 		       block, inode->i_size);
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index f6409ddfd1172..c55ba0390021e 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -2068,7 +2068,7 @@ retry:
+ 			goto out;
+ 	}
+ 
+-	if (ext4_blocks_count(es) == n_blocks_count)
++	if (ext4_blocks_count(es) == n_blocks_count && n_blocks_count_retry == 0)
+ 		goto out;
+ 
+ 	err = ext4_alloc_flex_bg_array(sb, n_group + 1);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index a0af833f7da70..9573d493c374b 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -188,19 +188,12 @@ int ext4_read_bh(struct buffer_head *bh, int op_flags, bh_end_io_t *end_io)
+ 
+ int ext4_read_bh_lock(struct buffer_head *bh, int op_flags, bool wait)
+ {
+-	if (trylock_buffer(bh)) {
+-		if (wait)
+-			return ext4_read_bh(bh, op_flags, NULL);
++	lock_buffer(bh);
++	if (!wait) {
+ 		ext4_read_bh_nowait(bh, op_flags, NULL);
+ 		return 0;
+ 	}
+-	if (wait) {
+-		wait_on_buffer(bh);
+-		if (buffer_uptodate(bh))
+-			return 0;
+-		return -EIO;
+-	}
+-	return 0;
++	return ext4_read_bh(bh, op_flags, NULL);
+ }
+ 
+ /*
+@@ -247,7 +240,8 @@ void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block)
+ 	struct buffer_head *bh = sb_getblk_gfp(sb, block, 0);
+ 
+ 	if (likely(bh)) {
+-		ext4_read_bh_lock(bh, REQ_RAHEAD, false);
++		if (trylock_buffer(bh))
++			ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL);
+ 		brelse(bh);
+ 	}
+ }
+@@ -3550,6 +3544,7 @@ static int ext4_lazyinit_thread(void *arg)
+ 	unsigned long next_wakeup, cur;
+ 
+ 	BUG_ON(NULL == eli);
++	set_freezable();
+ 
+ cont_thread:
+ 	while (true) {
+@@ -6273,7 +6268,7 @@ static int ext4_write_info(struct super_block *sb, int type)
+ 	handle_t *handle;
+ 
+ 	/* Data block + inode block */
+-	handle = ext4_journal_start(d_inode(sb->s_root), EXT4_HT_QUOTA, 2);
++	handle = ext4_journal_start_sb(sb, EXT4_HT_QUOTA, 2);
+ 	if (IS_ERR(handle))
+ 		return PTR_ERR(handle);
+ 	ret = dquot_commit_info(sb, type);
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 1c49b9959b32a..cd46a64ace1b3 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -136,7 +136,7 @@ static bool __is_bitmap_valid(struct f2fs_sb_info *sbi, block_t blkaddr,
+ 	unsigned int segno, offset;
+ 	bool exist;
+ 
+-	if (type != DATA_GENERIC_ENHANCE && type != DATA_GENERIC_ENHANCE_READ)
++	if (type == DATA_GENERIC)
+ 		return true;
+ 
+ 	segno = GET_SEGNO(sbi, blkaddr);
+@@ -144,6 +144,13 @@ static bool __is_bitmap_valid(struct f2fs_sb_info *sbi, block_t blkaddr,
+ 	se = get_seg_entry(sbi, segno);
+ 
+ 	exist = f2fs_test_bit(offset, se->cur_valid_map);
++	if (exist && type == DATA_GENERIC_ENHANCE_UPDATE) {
++		f2fs_err(sbi, "Inconsistent error blkaddr:%u, sit bitmap:%d",
++			 blkaddr, exist);
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++		return exist;
++	}
++
+ 	if (!exist && type == DATA_GENERIC_ENHANCE) {
+ 		f2fs_err(sbi, "Inconsistent error blkaddr:%u, sit bitmap:%d",
+ 			 blkaddr, exist);
+@@ -181,6 +188,7 @@ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
+ 	case DATA_GENERIC:
+ 	case DATA_GENERIC_ENHANCE:
+ 	case DATA_GENERIC_ENHANCE_READ:
++	case DATA_GENERIC_ENHANCE_UPDATE:
+ 		if (unlikely(blkaddr >= MAX_BLKADDR(sbi) ||
+ 				blkaddr < MAIN_BLKADDR(sbi))) {
+ 			f2fs_warn(sbi, "access invalid blkaddr:%u",
+@@ -1039,7 +1047,8 @@ void f2fs_remove_dirty_inode(struct inode *inode)
+ 	spin_unlock(&sbi->inode_lock[type]);
+ }
+ 
+-int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type)
++int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type,
++						bool from_cp)
+ {
+ 	struct list_head *head;
+ 	struct inode *inode;
+@@ -1074,11 +1083,15 @@ retry:
+ 	if (inode) {
+ 		unsigned long cur_ino = inode->i_ino;
+ 
+-		F2FS_I(inode)->cp_task = current;
++		if (from_cp)
++			F2FS_I(inode)->cp_task = current;
++		F2FS_I(inode)->wb_task = current;
+ 
+ 		filemap_fdatawrite(inode->i_mapping);
+ 
+-		F2FS_I(inode)->cp_task = NULL;
++		F2FS_I(inode)->wb_task = NULL;
++		if (from_cp)
++			F2FS_I(inode)->cp_task = NULL;
+ 
+ 		iput(inode);
+ 		/* We need to give cpu to another writers. */
+@@ -1207,7 +1220,7 @@ retry_flush_dents:
+ 	/* write all the dirty dentry pages */
+ 	if (get_pages(sbi, F2FS_DIRTY_DENTS)) {
+ 		f2fs_unlock_all(sbi);
+-		err = f2fs_sync_dirty_inodes(sbi, DIR_INODE);
++		err = f2fs_sync_dirty_inodes(sbi, DIR_INODE, true);
+ 		if (err)
+ 			return err;
+ 		cond_resched();
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index b2016fd3a7ca3..9270330ec5ced 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2912,7 +2912,7 @@ out:
+ 	}
+ 	unlock_page(page);
+ 	if (!S_ISDIR(inode->i_mode) && !IS_NOQUOTA(inode) &&
+-			!F2FS_I(inode)->cp_task && allow_balance)
++			!F2FS_I(inode)->wb_task && allow_balance)
+ 		f2fs_balance_fs(sbi, need_balance_fs);
+ 
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+@@ -3210,7 +3210,7 @@ static inline bool __should_serialize_io(struct inode *inode,
+ 					struct writeback_control *wbc)
+ {
+ 	/* to avoid deadlock in path of data flush */
+-	if (F2FS_I(inode)->cp_task)
++	if (F2FS_I(inode)->wb_task)
+ 		return false;
+ 
+ 	if (!S_ISREG(inode->i_mode))
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 3ebf976a682d5..bd16c78b5bf22 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -762,9 +762,8 @@ void f2fs_drop_extent_tree(struct inode *inode)
+ 	if (!f2fs_may_extent_tree(inode))
+ 		return;
+ 
+-	set_inode_flag(inode, FI_NO_EXTENT);
+-
+ 	write_lock(&et->lock);
++	set_inode_flag(inode, FI_NO_EXTENT);
+ 	__free_extent_tree(sbi, et);
+ 	if (et->largest.len) {
+ 		et->largest.len = 0;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 1066725c3c5d5..c03fdda1bddf6 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -235,6 +235,10 @@ enum {
+ 					 * condition of read on truncated area
+ 					 * by extent_cache
+ 					 */
++	DATA_GENERIC_ENHANCE_UPDATE,	/*
++					 * strong check on range and segment
++					 * bitmap for update case
++					 */
+ 	META_GENERIC,
+ };
+ 
+@@ -697,6 +701,7 @@ struct f2fs_inode_info {
+ 	unsigned int clevel;		/* maximum level of given file name */
+ 	struct task_struct *task;	/* lookup and create consistency */
+ 	struct task_struct *cp_task;	/* separate cp/wb IO stats*/
++	struct task_struct *wb_task;	/* indicate inode is in context of writeback */
+ 	nid_t i_xattr_nid;		/* node id that contains xattrs */
+ 	loff_t	last_disk_size;		/* lastly written file size */
+ 	spinlock_t i_size_lock;		/* protect last_disk_size */
+@@ -2422,24 +2427,31 @@ static inline void *f2fs_kmem_cache_alloc(struct kmem_cache *cachep,
+ 	return entry;
+ }
+ 
+-static inline bool is_idle(struct f2fs_sb_info *sbi, int type)
++static inline bool is_inflight_io(struct f2fs_sb_info *sbi, int type)
+ {
+-	if (sbi->gc_mode == GC_URGENT_HIGH)
+-		return true;
+-
+ 	if (get_pages(sbi, F2FS_RD_DATA) || get_pages(sbi, F2FS_RD_NODE) ||
+ 		get_pages(sbi, F2FS_RD_META) || get_pages(sbi, F2FS_WB_DATA) ||
+ 		get_pages(sbi, F2FS_WB_CP_DATA) ||
+ 		get_pages(sbi, F2FS_DIO_READ) ||
+ 		get_pages(sbi, F2FS_DIO_WRITE))
+-		return false;
++		return true;
+ 
+ 	if (type != DISCARD_TIME && SM_I(sbi) && SM_I(sbi)->dcc_info &&
+ 			atomic_read(&SM_I(sbi)->dcc_info->queued_discard))
+-		return false;
++		return true;
+ 
+ 	if (SM_I(sbi) && SM_I(sbi)->fcc_info &&
+ 			atomic_read(&SM_I(sbi)->fcc_info->queued_flush))
++		return true;
++	return false;
++}
++
++static inline bool is_idle(struct f2fs_sb_info *sbi, int type)
++{
++	if (sbi->gc_mode == GC_URGENT_HIGH)
++		return true;
++
++	if (is_inflight_io(sbi, type))
+ 		return false;
+ 
+ 	if (sbi->gc_mode == GC_URGENT_LOW &&
+@@ -3389,7 +3401,8 @@ int f2fs_recover_orphan_inodes(struct f2fs_sb_info *sbi);
+ int f2fs_get_valid_checkpoint(struct f2fs_sb_info *sbi);
+ void f2fs_update_dirty_page(struct inode *inode, struct page *page);
+ void f2fs_remove_dirty_inode(struct inode *inode);
+-int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type);
++int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type,
++								bool from_cp);
+ void f2fs_wait_on_all_pages(struct f2fs_sb_info *sbi, int type);
+ int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+ void f2fs_init_ino_entry_info(struct f2fs_sb_info *sbi);
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 3b53fdebf03da..3baa62ef6e3a3 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -977,7 +977,7 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ {
+ 	struct page *node_page;
+ 	nid_t nid;
+-	unsigned int ofs_in_node;
++	unsigned int ofs_in_node, max_addrs;
+ 	block_t source_blkaddr;
+ 
+ 	nid = le32_to_cpu(sum->nid);
+@@ -1003,6 +1003,14 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 		return false;
+ 	}
+ 
++	max_addrs = IS_INODE(node_page) ? DEF_ADDRS_PER_INODE :
++						DEF_ADDRS_PER_BLOCK;
++	if (ofs_in_node >= max_addrs) {
++		f2fs_err(sbi, "Inconsistent ofs_in_node:%u in summary, ino:%u, nid:%u, max:%u",
++			ofs_in_node, dni->ino, dni->nid, max_addrs);
++		return false;
++	}
++
+ 	*nofs = ofs_of_node(node_page);
+ 	source_blkaddr = data_blkaddr(NULL, node_page, ofs_in_node);
+ 	f2fs_put_page(node_page, 1);
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index 72ce131116791..c3c527afdd074 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -437,7 +437,7 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
+ 	struct dnode_of_data tdn = *dn;
+ 	nid_t ino, nid;
+ 	struct inode *inode;
+-	unsigned int offset;
++	unsigned int offset, ofs_in_node, max_addrs;
+ 	block_t bidx;
+ 	int i;
+ 
+@@ -463,15 +463,24 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
+ got_it:
+ 	/* Use the locked dnode page and inode */
+ 	nid = le32_to_cpu(sum.nid);
++	ofs_in_node = le16_to_cpu(sum.ofs_in_node);
++
++	max_addrs = ADDRS_PER_PAGE(dn->node_page, dn->inode);
++	if (ofs_in_node >= max_addrs) {
++		f2fs_err(sbi, "Inconsistent ofs_in_node:%u in summary, ino:%lu, nid:%u, max:%u",
++			ofs_in_node, dn->inode->i_ino, nid, max_addrs);
++		return -EFSCORRUPTED;
++	}
++
+ 	if (dn->inode->i_ino == nid) {
+ 		tdn.nid = nid;
+ 		if (!dn->inode_page_locked)
+ 			lock_page(dn->inode_page);
+ 		tdn.node_page = dn->inode_page;
+-		tdn.ofs_in_node = le16_to_cpu(sum.ofs_in_node);
++		tdn.ofs_in_node = ofs_in_node;
+ 		goto truncate_out;
+ 	} else if (dn->nid == nid) {
+-		tdn.ofs_in_node = le16_to_cpu(sum.ofs_in_node);
++		tdn.ofs_in_node = ofs_in_node;
+ 		goto truncate_out;
+ 	}
+ 
+@@ -661,6 +670,14 @@ retry_prev:
+ 				goto err;
+ 			}
+ 
++			if (f2fs_is_valid_blkaddr(sbi, dest,
++					DATA_GENERIC_ENHANCE_UPDATE)) {
++				f2fs_err(sbi, "Inconsistent dest blkaddr:%u, ino:%lu, ofs:%u",
++					dest, inode->i_ino, dn.ofs_in_node);
++				err = -EFSCORRUPTED;
++				goto err;
++			}
++
+ 			/* write dummy data page */
+ 			f2fs_replace_block(sbi, &dn, src, dest,
+ 						ni.version, false, false);
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 19224e7d2ad04..68774d6198a59 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -536,31 +536,38 @@ void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi, bool from_bg)
+ 	else
+ 		f2fs_build_free_nids(sbi, false, false);
+ 
+-	if (!is_idle(sbi, REQ_TIME) &&
+-		(!excess_dirty_nats(sbi) && !excess_dirty_nodes(sbi)))
++	if (excess_dirty_nats(sbi) || excess_dirty_nodes(sbi) ||
++		excess_prefree_segs(sbi))
++		goto do_sync;
++
++	/* there is background inflight IO or foreground operation recently */
++	if (is_inflight_io(sbi, REQ_TIME) ||
++		(!f2fs_time_over(sbi, REQ_TIME) && rwsem_is_locked(&sbi->cp_rwsem)))
+ 		return;
+ 
++	/* exceed periodical checkpoint timeout threshold */
++	if (f2fs_time_over(sbi, CP_TIME))
++		goto do_sync;
++
+ 	/* checkpoint is the only way to shrink partial cached entries */
+-	if (!f2fs_available_free_memory(sbi, NAT_ENTRIES) ||
+-			!f2fs_available_free_memory(sbi, INO_ENTRIES) ||
+-			excess_prefree_segs(sbi) ||
+-			excess_dirty_nats(sbi) ||
+-			excess_dirty_nodes(sbi) ||
+-			f2fs_time_over(sbi, CP_TIME)) {
+-		if (test_opt(sbi, DATA_FLUSH) && from_bg) {
+-			struct blk_plug plug;
+-
+-			mutex_lock(&sbi->flush_lock);
+-
+-			blk_start_plug(&plug);
+-			f2fs_sync_dirty_inodes(sbi, FILE_INODE);
+-			blk_finish_plug(&plug);
++	if (f2fs_available_free_memory(sbi, NAT_ENTRIES) &&
++		f2fs_available_free_memory(sbi, INO_ENTRIES))
++		return;
+ 
+-			mutex_unlock(&sbi->flush_lock);
+-		}
+-		f2fs_sync_fs(sbi->sb, true);
+-		stat_inc_bg_cp_count(sbi->stat_info);
++do_sync:
++	if (test_opt(sbi, DATA_FLUSH) && from_bg) {
++		struct blk_plug plug;
++
++		mutex_lock(&sbi->flush_lock);
++
++		blk_start_plug(&plug);
++		f2fs_sync_dirty_inodes(sbi, FILE_INODE, false);
++		blk_finish_plug(&plug);
++
++		mutex_unlock(&sbi->flush_lock);
+ 	}
++	f2fs_sync_fs(sbi->sb, true);
++	stat_inc_bg_cp_count(sbi->stat_info);
+ }
+ 
+ static int __submit_flush_wait(struct f2fs_sb_info *sbi,
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index ccfb6c5a8fbc0..fba413ced9826 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -267,10 +267,10 @@ static int f2fs_sb_read_encoding(const struct f2fs_super_block *sb,
+ 
+ static inline void limit_reserve_root(struct f2fs_sb_info *sbi)
+ {
+-	block_t limit = min((sbi->user_block_count << 1) / 1000,
++	block_t limit = min((sbi->user_block_count >> 3),
+ 			sbi->user_block_count - sbi->reserved_blocks);
+ 
+-	/* limit is 0.2% */
++	/* limit is 12.5% */
+ 	if (test_opt(sbi, RESERVE_ROOT) &&
+ 			F2FS_OPTION(sbi).root_reserved_blocks > limit) {
+ 		F2FS_OPTION(sbi).root_reserved_blocks = limit;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 9654b60a06a58..05f360b66b07a 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -7301,6 +7301,7 @@ static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
+ 	}
+ 
+ 	skb->sk = sk;
++	skb->scm_io_uring = 1;
+ 
+ 	nr_files = 0;
+ 	fpl->user = get_uid(ctx->user);
+@@ -8436,8 +8437,6 @@ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ 	if (ctx->sqo_task) {
+ 		put_task_struct(ctx->sqo_task);
+ 		ctx->sqo_task = NULL;
+-		mmdrop(ctx->mm_account);
+-		ctx->mm_account = NULL;
+ 	}
+ 
+ #ifdef CONFIG_BLK_CGROUP
+@@ -8456,6 +8455,11 @@ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ 	}
+ #endif
+ 
++	if (ctx->mm_account) {
++		mmdrop(ctx->mm_account);
++		ctx->mm_account = NULL;
++	}
++
+ 	io_mem_free(ctx->rings);
+ 	io_mem_free(ctx->sq_sqes);
+ 
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index 98cfa73cb165b..fa24b407a9dcb 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -581,7 +581,7 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ 	journal->j_running_transaction = NULL;
+ 	start_time = ktime_get();
+ 	commit_transaction->t_log_start = journal->j_head;
+-	wake_up(&journal->j_wait_transaction_locked);
++	wake_up_all(&journal->j_wait_transaction_locked);
+ 	write_unlock(&journal->j_state_lock);
+ 
+ 	jbd_debug(3, "JBD2: commit phase 2a\n");
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index b748329bb0bab..6689d235de8a4 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -924,10 +924,16 @@ int jbd2_fc_wait_bufs(journal_t *journal, int num_blks)
+ 	for (i = j_fc_off - 1; i >= j_fc_off - num_blks; i--) {
+ 		bh = journal->j_fc_wbuf[i];
+ 		wait_on_buffer(bh);
++		/*
++		 * Update j_fc_off so jbd2_fc_release_bufs can release remain
++		 * buffer head.
++		 */
++		if (unlikely(!buffer_uptodate(bh))) {
++			journal->j_fc_off = i + 1;
++			return -EIO;
++		}
+ 		put_bh(bh);
+ 		journal->j_fc_wbuf[i] = NULL;
+-		if (unlikely(!buffer_uptodate(bh)))
+-			return -EIO;
+ 	}
+ 
+ 	return 0;
+diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
+index 1e07dfac4d811..1ae1697fe99bd 100644
+--- a/fs/jbd2/recovery.c
++++ b/fs/jbd2/recovery.c
+@@ -256,6 +256,7 @@ static int fc_do_one_pass(journal_t *journal,
+ 		err = journal->j_fc_replay_callback(journal, bh, pass,
+ 					next_fc_block - journal->j_fc_first,
+ 					expected_commit_id);
++		brelse(bh);
+ 		next_fc_block++;
+ 		if (err < 0 || err == JBD2_FC_REPLAY_STOP)
+ 			break;
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 0f1cef90fa7d6..86472212cce17 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -173,7 +173,7 @@ static void wait_transaction_locked(journal_t *journal)
+ 	int need_to_start;
+ 	tid_t tid = journal->j_running_transaction->t_tid;
+ 
+-	prepare_to_wait(&journal->j_wait_transaction_locked, &wait,
++	prepare_to_wait_exclusive(&journal->j_wait_transaction_locked, &wait,
+ 			TASK_UNINTERRUPTIBLE);
+ 	need_to_start = !tid_geq(journal->j_commit_request, tid);
+ 	read_unlock(&journal->j_state_lock);
+@@ -199,7 +199,7 @@ static void wait_transaction_switching(journal_t *journal)
+ 		read_unlock(&journal->j_state_lock);
+ 		return;
+ 	}
+-	prepare_to_wait(&journal->j_wait_transaction_locked, &wait,
++	prepare_to_wait_exclusive(&journal->j_wait_transaction_locked, &wait,
+ 			TASK_UNINTERRUPTIBLE);
+ 	read_unlock(&journal->j_state_lock);
+ 	/*
+@@ -894,7 +894,7 @@ void jbd2_journal_unlock_updates (journal_t *journal)
+ 	write_lock(&journal->j_state_lock);
+ 	--journal->j_barrier_count;
+ 	write_unlock(&journal->j_state_lock);
+-	wake_up(&journal->j_wait_transaction_locked);
++	wake_up_all(&journal->j_wait_transaction_locked);
+ }
+ 
+ static void warn_dirty_buffer(struct buffer_head *bh)
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index f9b730c43192d..83c4e68839537 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -815,8 +815,10 @@ __cld_pipe_inprogress_downcall(const struct cld_msg_v2 __user *cmsg,
+ 				princhash.data = memdup_user(
+ 						&ci->cc_princhash.cp_data,
+ 						princhashlen);
+-				if (IS_ERR_OR_NULL(princhash.data))
++				if (IS_ERR_OR_NULL(princhash.data)) {
++					kfree(name.data);
+ 					return -EFAULT;
++				}
+ 				princhash.len = princhashlen;
+ 			} else
+ 				princhash.len = 0;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index f1b503bec2221..665d0eaeb8dbf 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -843,6 +843,7 @@ static struct nfs4_ol_stateid * nfs4_alloc_open_stateid(struct nfs4_client *clp)
+ 
+ static void nfs4_free_deleg(struct nfs4_stid *stid)
+ {
++	WARN_ON(!list_empty(&stid->sc_cp_list));
+ 	kmem_cache_free(deleg_slab, stid);
+ 	atomic_long_dec(&num_delegations);
+ }
+@@ -1358,6 +1359,7 @@ static void nfs4_free_ol_stateid(struct nfs4_stid *stid)
+ 	release_all_access(stp);
+ 	if (stp->st_stateowner)
+ 		nfs4_put_stateowner(stp->st_stateowner);
++	WARN_ON(!list_empty(&stid->sc_cp_list));
+ 	kmem_cache_free(stateid_slab, stid);
+ }
+ 
+@@ -6207,6 +6209,7 @@ static void nfsd4_close_open_stateid(struct nfs4_ol_stateid *s)
+ 	struct nfs4_client *clp = s->st_stid.sc_client;
+ 	bool unhashed;
+ 	LIST_HEAD(reaplist);
++	struct nfs4_ol_stateid *stp;
+ 
+ 	spin_lock(&clp->cl_lock);
+ 	unhashed = unhash_open_stateid(s, &reaplist);
+@@ -6215,6 +6218,8 @@ static void nfsd4_close_open_stateid(struct nfs4_ol_stateid *s)
+ 		if (unhashed)
+ 			put_ol_stateid_locked(s, &reaplist);
+ 		spin_unlock(&clp->cl_lock);
++		list_for_each_entry(stp, &reaplist, st_locks)
++			nfs4_free_cpntf_statelist(clp->net, &stp->st_stid);
+ 		free_ol_stateid_reaplist(&reaplist);
+ 	} else {
+ 		spin_unlock(&clp->cl_lock);
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 46f825cf53f4f..cc605ee0b2fae 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3871,7 +3871,7 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 	if (resp->xdr.buf->page_len &&
+ 	    test_bit(RQ_SPLICE_OK, &resp->rqstp->rq_flags)) {
+ 		WARN_ON_ONCE(1);
+-		return nfserr_resource;
++		return nfserr_serverfault;
+ 	}
+ 	xdr_commit_encode(xdr);
+ 
+diff --git a/fs/quota/quota_tree.c b/fs/quota/quota_tree.c
+index 1a188fbdf34e5..07948f6ac84ef 100644
+--- a/fs/quota/quota_tree.c
++++ b/fs/quota/quota_tree.c
+@@ -80,6 +80,35 @@ static ssize_t write_blk(struct qtree_mem_dqinfo *info, uint blk, char *buf)
+ 	return ret;
+ }
+ 
++static inline int do_check_range(struct super_block *sb, const char *val_name,
++				 uint val, uint min_val, uint max_val)
++{
++	if (val < min_val || val > max_val) {
++		quota_error(sb, "Getting %s %u out of range %u-%u",
++			    val_name, val, min_val, max_val);
++		return -EUCLEAN;
++	}
++
++	return 0;
++}
++
++static int check_dquot_block_header(struct qtree_mem_dqinfo *info,
++				    struct qt_disk_dqdbheader *dh)
++{
++	int err = 0;
++
++	err = do_check_range(info->dqi_sb, "dqdh_next_free",
++			     le32_to_cpu(dh->dqdh_next_free), 0,
++			     info->dqi_blocks - 1);
++	if (err)
++		return err;
++	err = do_check_range(info->dqi_sb, "dqdh_prev_free",
++			     le32_to_cpu(dh->dqdh_prev_free), 0,
++			     info->dqi_blocks - 1);
++
++	return err;
++}
++
+ /* Remove empty block from list and return it */
+ static int get_free_dqblk(struct qtree_mem_dqinfo *info)
+ {
+@@ -94,6 +123,9 @@ static int get_free_dqblk(struct qtree_mem_dqinfo *info)
+ 		ret = read_blk(info, blk, buf);
+ 		if (ret < 0)
+ 			goto out_buf;
++		ret = check_dquot_block_header(info, dh);
++		if (ret)
++			goto out_buf;
+ 		info->dqi_free_blk = le32_to_cpu(dh->dqdh_next_free);
+ 	}
+ 	else {
+@@ -241,6 +273,9 @@ static uint find_free_dqentry(struct qtree_mem_dqinfo *info,
+ 		*err = read_blk(info, blk, buf);
+ 		if (*err < 0)
+ 			goto out_buf;
++		*err = check_dquot_block_header(info, dh);
++		if (*err)
++			goto out_buf;
+ 	} else {
+ 		blk = get_free_dqblk(info);
+ 		if ((int)blk < 0) {
+@@ -433,6 +468,9 @@ static int free_dqentry(struct qtree_mem_dqinfo *info, struct dquot *dquot,
+ 		goto out_buf;
+ 	}
+ 	dh = (struct qt_disk_dqdbheader *)buf;
++	ret = check_dquot_block_header(info, dh);
++	if (ret)
++		goto out_buf;
+ 	le16_add_cpu(&dh->dqdh_entries, -1);
+ 	if (!le16_to_cpu(dh->dqdh_entries)) {	/* Block got free? */
+ 		ret = remove_free_dqentry(info, buf, blk);
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index aef0da5d6f636..a3074a9d71a6a 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -974,7 +974,7 @@ static int resolve_userfault_fork(struct userfaultfd_ctx *ctx,
+ 	int fd;
+ 
+ 	fd = anon_inode_getfd("[userfaultfd]", &userfaultfd_fops, new,
+-			      O_RDWR | (new->flags & UFFD_SHARED_FCNTL_FLAGS));
++			      O_RDONLY | (new->flags & UFFD_SHARED_FCNTL_FLAGS));
+ 	if (fd < 0)
+ 		return fd;
+ 
+@@ -1987,7 +1987,7 @@ SYSCALL_DEFINE1(userfaultfd, int, flags)
+ 	mmgrab(ctx->mm);
+ 
+ 	fd = anon_inode_getfd("[userfaultfd]", &userfaultfd_fops, ctx,
+-			      O_RDWR | (flags & UFFD_SHARED_FCNTL_FLAGS));
++			      O_RDONLY | (flags & UFFD_SHARED_FCNTL_FLAGS));
+ 	if (fd < 0) {
+ 		mmdrop(ctx->mm);
+ 		kmem_cache_free(userfaultfd_ctx_cachep, ctx);
+diff --git a/include/linux/ata.h b/include/linux/ata.h
+index 6e67aded28f8c..6d2d31b03b4de 100644
+--- a/include/linux/ata.h
++++ b/include/linux/ata.h
+@@ -565,6 +565,18 @@ struct ata_bmdma_prd {
+ 	((((id)[ATA_ID_SATA_CAPABILITY] != 0x0000) && \
+ 	  ((id)[ATA_ID_SATA_CAPABILITY] != 0xffff)) && \
+ 	 ((id)[ATA_ID_FEATURE_SUPP] & (1 << 2)))
++#define ata_id_has_devslp(id)	\
++	((((id)[ATA_ID_SATA_CAPABILITY] != 0x0000) && \
++	  ((id)[ATA_ID_SATA_CAPABILITY] != 0xffff)) && \
++	 ((id)[ATA_ID_FEATURE_SUPP] & (1 << 8)))
++#define ata_id_has_ncq_autosense(id) \
++	((((id)[ATA_ID_SATA_CAPABILITY] != 0x0000) && \
++	  ((id)[ATA_ID_SATA_CAPABILITY] != 0xffff)) && \
++	 ((id)[ATA_ID_FEATURE_SUPP] & (1 << 7)))
++#define ata_id_has_dipm(id)	\
++	((((id)[ATA_ID_SATA_CAPABILITY] != 0x0000) && \
++	  ((id)[ATA_ID_SATA_CAPABILITY] != 0xffff)) && \
++	 ((id)[ATA_ID_FEATURE_SUPP] & (1 << 3)))
+ #define ata_id_iordy_disable(id) ((id)[ATA_ID_CAPABILITY] & (1 << 10))
+ #define ata_id_has_iordy(id) ((id)[ATA_ID_CAPABILITY] & (1 << 11))
+ #define ata_id_u32(id,n)	\
+@@ -577,9 +589,6 @@ struct ata_bmdma_prd {
+ 
+ #define ata_id_cdb_intr(id)	(((id)[ATA_ID_CONFIG] & 0x60) == 0x20)
+ #define ata_id_has_da(id)	((id)[ATA_ID_SATA_CAPABILITY_2] & (1 << 4))
+-#define ata_id_has_devslp(id)	((id)[ATA_ID_FEATURE_SUPP] & (1 << 8))
+-#define ata_id_has_ncq_autosense(id) \
+-				((id)[ATA_ID_FEATURE_SUPP] & (1 << 7))
+ 
+ static inline bool ata_id_has_hipm(const u16 *id)
+ {
+@@ -591,17 +600,6 @@ static inline bool ata_id_has_hipm(const u16 *id)
+ 	return val & (1 << 9);
+ }
+ 
+-static inline bool ata_id_has_dipm(const u16 *id)
+-{
+-	u16 val = id[ATA_ID_FEATURE_SUPP];
+-
+-	if (val == 0 || val == 0xffff)
+-		return false;
+-
+-	return val & (1 << 3);
+-}
+-
+-
+ static inline bool ata_id_has_fua(const u16 *id)
+ {
+ 	if ((id[ATA_ID_CFSSE] & 0xC000) != 0x4000)
+@@ -770,16 +768,21 @@ static inline bool ata_id_has_read_log_dma_ext(const u16 *id)
+ 
+ static inline bool ata_id_has_sense_reporting(const u16 *id)
+ {
+-	if (!(id[ATA_ID_CFS_ENABLE_2] & (1 << 15)))
++	if (!(id[ATA_ID_CFS_ENABLE_2] & BIT(15)))
++		return false;
++	if ((id[ATA_ID_COMMAND_SET_3] & (BIT(15) | BIT(14))) != BIT(14))
+ 		return false;
+-	return id[ATA_ID_COMMAND_SET_3] & (1 << 6);
++	return id[ATA_ID_COMMAND_SET_3] & BIT(6);
+ }
+ 
+ static inline bool ata_id_sense_reporting_enabled(const u16 *id)
+ {
+-	if (!(id[ATA_ID_CFS_ENABLE_2] & (1 << 15)))
++	if (!ata_id_has_sense_reporting(id))
++		return false;
++	/* ata_id_has_sense_reporting() == true, word 86 must have bit 15 set */
++	if ((id[ATA_ID_COMMAND_SET_4] & (BIT(15) | BIT(14))) != BIT(14))
+ 		return false;
+-	return id[ATA_ID_COMMAND_SET_4] & (1 << 6);
++	return id[ATA_ID_COMMAND_SET_4] & BIT(6);
+ }
+ 
+ /**
+diff --git a/include/linux/dynamic_debug.h b/include/linux/dynamic_debug.h
+index a57ee75342cf8..c0c6ea9ea7e36 100644
+--- a/include/linux/dynamic_debug.h
++++ b/include/linux/dynamic_debug.h
+@@ -50,9 +50,6 @@ struct _ddebug {
+ 
+ #if defined(CONFIG_DYNAMIC_DEBUG_CORE)
+ 
+-/* exported for module authors to exercise >control */
+-int dynamic_debug_exec_queries(const char *query, const char *modname);
+-
+ int ddebug_add_module(struct _ddebug *tab, unsigned int n,
+ 				const char *modname);
+ extern int ddebug_remove_module(const char *mod_name);
+@@ -196,7 +193,7 @@ static inline int ddebug_remove_module(const char *mod)
+ static inline int ddebug_dyndbg_module_param_cb(char *param, char *val,
+ 						const char *modname)
+ {
+-	if (strstr(param, "dyndbg")) {
++	if (!strcmp(param, "dyndbg")) {
+ 		/* avoid pr_warn(), which wants pr_fmt() fully defined */
+ 		printk(KERN_WARNING "dyndbg param is supported only in "
+ 			"CONFIG_DYNAMIC_DEBUG builds\n");
+@@ -216,12 +213,6 @@ static inline int ddebug_dyndbg_module_param_cb(char *param, char *val,
+ 				rowsize, groupsize, buf, len, ascii);	\
+ 	} while (0)
+ 
+-static inline int dynamic_debug_exec_queries(const char *query, const char *modname)
+-{
+-	pr_warn("kernel not built with CONFIG_DYNAMIC_DEBUG_CORE\n");
+-	return 0;
+-}
+-
+ #endif /* !CONFIG_DYNAMIC_DEBUG_CORE */
+ 
+ #endif
+diff --git a/include/linux/iova.h b/include/linux/iova.h
+index a0637abffee88..6c19b09e96634 100644
+--- a/include/linux/iova.h
++++ b/include/linux/iova.h
+@@ -132,7 +132,7 @@ static inline unsigned long iova_pfn(struct iova_domain *iovad, dma_addr_t iova)
+ 	return iova >> iova_shift(iovad);
+ }
+ 
+-#if IS_ENABLED(CONFIG_IOMMU_IOVA)
++#if IS_REACHABLE(CONFIG_IOMMU_IOVA)
+ int iova_cache_get(void);
+ void iova_cache_put(void);
+ 
+diff --git a/include/linux/once.h b/include/linux/once.h
+index ae6f4eb41cbe7..bb58e1c3aa034 100644
+--- a/include/linux/once.h
++++ b/include/linux/once.h
+@@ -5,10 +5,18 @@
+ #include <linux/types.h>
+ #include <linux/jump_label.h>
+ 
++/* Helpers used from arbitrary contexts.
++ * Hard irqs are blocked, be cautious.
++ */
+ bool __do_once_start(bool *done, unsigned long *flags);
+ void __do_once_done(bool *done, struct static_key_true *once_key,
+ 		    unsigned long *flags, struct module *mod);
+ 
++/* Variant for process contexts only. */
++bool __do_once_slow_start(bool *done);
++void __do_once_slow_done(bool *done, struct static_key_true *once_key,
++			 struct module *mod);
++
+ /* Call a function exactly once. The idea of DO_ONCE() is to perform
+  * a function call such as initialization of random seeds, etc, only
+  * once, where DO_ONCE() can live in the fast-path. After @func has
+@@ -52,9 +60,29 @@ void __do_once_done(bool *done, struct static_key_true *once_key,
+ 		___ret;							     \
+ 	})
+ 
++/* Variant of DO_ONCE() for process/sleepable contexts. */
++#define DO_ONCE_SLOW(func, ...)						     \
++	({								     \
++		bool ___ret = false;					     \
++		static bool __section(".data.once") ___done = false;	     \
++		static DEFINE_STATIC_KEY_TRUE(___once_key);		     \
++		if (static_branch_unlikely(&___once_key)) {		     \
++			___ret = __do_once_slow_start(&___done);	     \
++			if (unlikely(___ret)) {				     \
++				func(__VA_ARGS__);			     \
++				__do_once_slow_done(&___done, &___once_key,  \
++						    THIS_MODULE);	     \
++			}						     \
++		}							     \
++		___ret;							     \
++	})
++
+ #define get_random_once(buf, nbytes)					     \
+ 	DO_ONCE(get_random_bytes, (buf), (nbytes))
+ #define get_random_once_wait(buf, nbytes)                                    \
+ 	DO_ONCE(get_random_bytes_wait, (buf), (nbytes))                      \
+ 
++#define get_random_slow_once(buf, nbytes)				     \
++	DO_ONCE_SLOW(get_random_bytes, (buf), (nbytes))
++
+ #endif /* _LINUX_ONCE_H */
+diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
+index 136ea0997e6df..c9237d30c29b3 100644
+--- a/include/linux/ring_buffer.h
++++ b/include/linux/ring_buffer.h
+@@ -100,7 +100,7 @@ __ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *k
+ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
+ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ 			  struct file *filp, poll_table *poll_table);
+-
++void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu);
+ 
+ #define RING_BUFFER_ALL_CPUS -1
+ 
+diff --git a/include/linux/serial_8250.h b/include/linux/serial_8250.h
+index 2b70f736b091d..92f3b778d8c20 100644
+--- a/include/linux/serial_8250.h
++++ b/include/linux/serial_8250.h
+@@ -74,6 +74,7 @@ struct uart_8250_port;
+ struct uart_8250_ops {
+ 	int		(*setup_irq)(struct uart_8250_port *);
+ 	void		(*release_irq)(struct uart_8250_port *);
++	void		(*setup_timer)(struct uart_8250_port *);
+ };
+ 
+ struct uart_8250_em485 {
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 61fc053a4a4ef..462b0e3ef2b27 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -681,6 +681,7 @@ typedef unsigned char *sk_buff_data_t;
+  *	@csum_level: indicates the number of consecutive checksums found in
+  *		the packet minus one that have been verified as
+  *		CHECKSUM_UNNECESSARY (max 3)
++ *	@scm_io_uring: SKB holds io_uring registered files
+  *	@dst_pending_confirm: need to confirm neighbour
+  *	@decrypted: Decrypted SKB
+  *	@napi_id: id of the NAPI struct this skb came from
+@@ -858,6 +859,7 @@ struct sk_buff {
+ #ifdef CONFIG_TLS_DEVICE
+ 	__u8			decrypted:1;
+ #endif
++	__u8			scm_io_uring:1;
+ 
+ #ifdef CONFIG_NET_SCHED
+ 	__u16			tc_index;	/* traffic control index */
+diff --git a/include/linux/tcp.h b/include/linux/tcp.h
+index 2f87377e9af70..6e3340379d85f 100644
+--- a/include/linux/tcp.h
++++ b/include/linux/tcp.h
+@@ -265,7 +265,7 @@ struct tcp_sock {
+ 	u32	packets_out;	/* Packets which are "in flight"	*/
+ 	u32	retrans_out;	/* Retransmitted packets out		*/
+ 	u32	max_packets_out;  /* max packets_out in last window */
+-	u32	max_packets_seq;  /* right edge of max_packets_out flight */
++	u32	cwnd_usage_seq;  /* right edge of cwnd usage tracking flight */
+ 
+ 	u16	urg_data;	/* Saved octet of OOB data and control flags */
+ 	u8	ecn_flags;	/* ECN status bits.			*/
+diff --git a/include/linux/usb/ch9.h b/include/linux/usb/ch9.h
+index 604c6c514a504..1cffa34740b00 100644
+--- a/include/linux/usb/ch9.h
++++ b/include/linux/usb/ch9.h
+@@ -36,62 +36,24 @@
+ #include <linux/device.h>
+ #include <uapi/linux/usb/ch9.h>
+ 
+-/**
+- * usb_ep_type_string() - Returns human readable-name of the endpoint type.
+- * @ep_type: The endpoint type to return human-readable name for.  If it's not
+- *   any of the types: USB_ENDPOINT_XFER_{CONTROL, ISOC, BULK, INT},
+- *   usually got by usb_endpoint_type(), the string 'unknown' will be returned.
+- */
+-extern const char *usb_ep_type_string(int ep_type);
++/* USB 3.2 SuperSpeed Plus phy signaling rate generation and lane count */
+ 
+-/**
+- * usb_speed_string() - Returns human readable-name of the speed.
+- * @speed: The speed to return human-readable name for.  If it's not
+- *   any of the speeds defined in usb_device_speed enum, string for
+- *   USB_SPEED_UNKNOWN will be returned.
+- */
+-extern const char *usb_speed_string(enum usb_device_speed speed);
++enum usb_ssp_rate {
++	USB_SSP_GEN_UNKNOWN = 0,
++	USB_SSP_GEN_2x1,
++	USB_SSP_GEN_1x2,
++	USB_SSP_GEN_2x2,
++};
+ 
+-/**
+- * usb_get_maximum_speed - Get maximum requested speed for a given USB
+- * controller.
+- * @dev: Pointer to the given USB controller device
+- *
+- * The function gets the maximum speed string from property "maximum-speed",
+- * and returns the corresponding enum usb_device_speed.
+- */
++extern const char *usb_ep_type_string(int ep_type);
++extern const char *usb_speed_string(enum usb_device_speed speed);
+ extern enum usb_device_speed usb_get_maximum_speed(struct device *dev);
+-
+-/**
+- * usb_state_string - Returns human readable name for the state.
+- * @state: The state to return a human-readable name for. If it's not
+- *	any of the states devices in usb_device_state_string enum,
+- *	the string UNKNOWN will be returned.
+- */
++extern enum usb_ssp_rate usb_get_maximum_ssp_rate(struct device *dev);
+ extern const char *usb_state_string(enum usb_device_state state);
++unsigned int usb_decode_interval(const struct usb_endpoint_descriptor *epd,
++				 enum usb_device_speed speed);
+ 
+ #ifdef CONFIG_TRACING
+-/**
+- * usb_decode_ctrl - Returns human readable representation of control request.
+- * @str: buffer to return a human-readable representation of control request.
+- *       This buffer should have about 200 bytes.
+- * @size: size of str buffer.
+- * @bRequestType: matches the USB bmRequestType field
+- * @bRequest: matches the USB bRequest field
+- * @wValue: matches the USB wValue field (CPU byte order)
+- * @wIndex: matches the USB wIndex field (CPU byte order)
+- * @wLength: matches the USB wLength field (CPU byte order)
+- *
+- * Function returns decoded, formatted and human-readable description of
+- * control request packet.
+- *
+- * The usage scenario for this is for tracepoints, so function as a return
+- * use the same value as in parameters. This approach allows to use this
+- * function in TP_printk
+- *
+- * Important: wValue, wIndex, wLength parameters before invoking this function
+- * should be processed by le16_to_cpu macro.
+- */
+ extern const char *usb_decode_ctrl(char *str, size_t size, __u8 bRequestType,
+ 				   __u8 bRequest, __u16 wValue, __u16 wIndex,
+ 				   __u16 wLength);
+diff --git a/include/net/ieee802154_netdev.h b/include/net/ieee802154_netdev.h
+index a8994f307fc38..03b64bf876a46 100644
+--- a/include/net/ieee802154_netdev.h
++++ b/include/net/ieee802154_netdev.h
+@@ -185,21 +185,27 @@ static inline int
+ ieee802154_sockaddr_check_size(struct sockaddr_ieee802154 *daddr, int len)
+ {
+ 	struct ieee802154_addr_sa *sa;
++	int ret = 0;
+ 
+ 	sa = &daddr->addr;
+ 	if (len < IEEE802154_MIN_NAMELEN)
+ 		return -EINVAL;
+ 	switch (sa->addr_type) {
++	case IEEE802154_ADDR_NONE:
++		break;
+ 	case IEEE802154_ADDR_SHORT:
+ 		if (len < IEEE802154_NAMELEN_SHORT)
+-			return -EINVAL;
++			ret = -EINVAL;
+ 		break;
+ 	case IEEE802154_ADDR_LONG:
+ 		if (len < IEEE802154_NAMELEN_LONG)
+-			return -EINVAL;
++			ret = -EINVAL;
++		break;
++	default:
++		ret = -EINVAL;
+ 		break;
+ 	}
+-	return 0;
++	return ret;
+ }
+ 
+ static inline void ieee802154_addr_from_sa(struct ieee802154_addr *a,
+diff --git a/include/net/sock.h b/include/net/sock.h
+index d53fb64374767..90a8b8b26a207 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -421,7 +421,7 @@ struct sock {
+ #ifdef CONFIG_XFRM
+ 	struct xfrm_policy __rcu *sk_policy[2];
+ #endif
+-	struct dst_entry	*sk_rx_dst;
++	struct dst_entry __rcu	*sk_rx_dst;
+ 	struct dst_entry __rcu	*sk_dst_cache;
+ 	atomic_t		sk_omem_alloc;
+ 	int			sk_sndbuf;
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 8129ce9a07719..bf4af27f56200 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1271,11 +1271,14 @@ static inline bool tcp_is_cwnd_limited(const struct sock *sk)
+ {
+ 	const struct tcp_sock *tp = tcp_sk(sk);
+ 
++	if (tp->is_cwnd_limited)
++		return true;
++
+ 	/* If in slow start, ensure cwnd grows to twice what was ACKed. */
+ 	if (tcp_in_slow_start(tp))
+ 		return tp->snd_cwnd < 2 * tp->max_packets_out;
+ 
+-	return tp->is_cwnd_limited;
++	return false;
+ }
+ 
+ /* BBR congestion control needs pacing.
+diff --git a/include/uapi/linux/usb/ch9.h b/include/uapi/linux/usb/ch9.h
+index 0f865ae4ba89d..17ce56198c9ad 100644
+--- a/include/uapi/linux/usb/ch9.h
++++ b/include/uapi/linux/usb/ch9.h
+@@ -968,9 +968,22 @@ struct usb_ssp_cap_descriptor {
+ 	__le32 bmSublinkSpeedAttr[1]; /* list of sublink speed attrib entries */
+ #define USB_SSP_SUBLINK_SPEED_SSID	(0xf)		/* sublink speed ID */
+ #define USB_SSP_SUBLINK_SPEED_LSE	(0x3 << 4)	/* Lanespeed exponent */
++#define USB_SSP_SUBLINK_SPEED_LSE_BPS		0
++#define USB_SSP_SUBLINK_SPEED_LSE_KBPS		1
++#define USB_SSP_SUBLINK_SPEED_LSE_MBPS		2
++#define USB_SSP_SUBLINK_SPEED_LSE_GBPS		3
++
+ #define USB_SSP_SUBLINK_SPEED_ST	(0x3 << 6)	/* Sublink type */
++#define USB_SSP_SUBLINK_SPEED_ST_SYM_RX		0
++#define USB_SSP_SUBLINK_SPEED_ST_ASYM_RX	1
++#define USB_SSP_SUBLINK_SPEED_ST_SYM_TX		2
++#define USB_SSP_SUBLINK_SPEED_ST_ASYM_TX	3
++
+ #define USB_SSP_SUBLINK_SPEED_RSVD	(0x3f << 8)	/* Reserved */
+ #define USB_SSP_SUBLINK_SPEED_LP	(0x3 << 14)	/* Link protocol */
++#define USB_SSP_SUBLINK_SPEED_LP_SS		0
++#define USB_SSP_SUBLINK_SPEED_LP_SSP		1
++
+ #define USB_SSP_SUBLINK_SPEED_LSM	(0xff << 16)	/* Lanespeed mantissa */
+ } __attribute__((packed));
+ 
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index dc497eaf22663..9232938e3f960 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -2913,7 +2913,7 @@ static int btf_struct_resolve(struct btf_verifier_env *env,
+ 	if (v->next_member) {
+ 		const struct btf_type *last_member_type;
+ 		const struct btf_member *last_member;
+-		u16 last_member_type_id;
++		u32 last_member_type_id;
+ 
+ 		last_member = btf_type_member(v->t) + v->next_member - 1;
+ 		last_member_type_id = last_member->type;
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 419dbc3d060ee..aaad2dce2be6f 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -3915,7 +3915,9 @@ static int bpf_task_fd_query(const union bpf_attr *attr,
+ 	if (attr->task_fd_query.flags != 0)
+ 		return -EINVAL;
+ 
++	rcu_read_lock();
+ 	task = get_pid_task(find_vpid(pid), PIDTYPE_PID);
++	rcu_read_unlock();
+ 	if (!task)
+ 		return -ENOENT;
+ 
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index b7830f1f1f3a5..43270b07b2e0b 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -33,6 +33,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/kernel.h>
+ #include <linux/kmod.h>
++#include <linux/kthread.h>
+ #include <linux/list.h>
+ #include <linux/mempolicy.h>
+ #include <linux/mm.h>
+@@ -1059,10 +1060,18 @@ static void update_tasks_cpumask(struct cpuset *cs)
+ {
+ 	struct css_task_iter it;
+ 	struct task_struct *task;
++	bool top_cs = cs == &top_cpuset;
+ 
+ 	css_task_iter_start(&cs->css, 0, &it);
+-	while ((task = css_task_iter_next(&it)))
++	while ((task = css_task_iter_next(&it))) {
++		/*
++		 * Percpu kthreads in top_cpuset are ignored
++		 */
++		if (top_cs && (task->flags & PF_KTHREAD) &&
++		    kthread_is_per_cpu(task))
++			continue;
+ 		set_cpus_allowed_ptr(task, cs->effective_cpus);
++	}
+ 	css_task_iter_end(&it);
+ }
+ 
+@@ -2016,12 +2025,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
+ 		update_flag(CS_CPU_EXCLUSIVE, cs, 0);
+ 	}
+ 
+-	/*
+-	 * Update cpumask of parent's tasks except when it is the top
+-	 * cpuset as some system daemons cannot be mapped to other CPUs.
+-	 */
+-	if (parent != &top_cpuset)
+-		update_tasks_cpumask(parent);
++	update_tasks_cpumask(parent);
+ 
+ 	if (parent->child_ecpus_count)
+ 		update_sibling_cpumasks(parent, cs, &tmpmask);
+diff --git a/kernel/gcov/gcc_4_7.c b/kernel/gcov/gcc_4_7.c
+index 53c67c87f141b..c699feda21ac0 100644
+--- a/kernel/gcov/gcc_4_7.c
++++ b/kernel/gcov/gcc_4_7.c
+@@ -33,6 +33,13 @@
+ 
+ #define GCOV_TAG_FUNCTION_LENGTH	3
+ 
++/* Since GCC 12.1 sizes are in BYTES and not in WORDS (4B). */
++#if (__GNUC__ >= 12)
++#define GCOV_UNIT_SIZE				4
++#else
++#define GCOV_UNIT_SIZE				1
++#endif
++
+ static struct gcov_info *gcov_info_head;
+ 
+ /**
+@@ -451,12 +458,18 @@ static size_t convert_to_gcda(char *buffer, struct gcov_info *info)
+ 	pos += store_gcov_u32(buffer, pos, info->version);
+ 	pos += store_gcov_u32(buffer, pos, info->stamp);
+ 
++#if (__GNUC__ >= 12)
++	/* Use zero as checksum of the compilation unit. */
++	pos += store_gcov_u32(buffer, pos, 0);
++#endif
++
+ 	for (fi_idx = 0; fi_idx < info->n_functions; fi_idx++) {
+ 		fi_ptr = info->functions[fi_idx];
+ 
+ 		/* Function record. */
+ 		pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION);
+-		pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION_LENGTH);
++		pos += store_gcov_u32(buffer, pos,
++			GCOV_TAG_FUNCTION_LENGTH * GCOV_UNIT_SIZE);
+ 		pos += store_gcov_u32(buffer, pos, fi_ptr->ident);
+ 		pos += store_gcov_u32(buffer, pos, fi_ptr->lineno_checksum);
+ 		pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum);
+@@ -470,7 +483,8 @@ static size_t convert_to_gcda(char *buffer, struct gcov_info *info)
+ 			/* Counter record. */
+ 			pos += store_gcov_u32(buffer, pos,
+ 					      GCOV_TAG_FOR_COUNTER(ct_idx));
+-			pos += store_gcov_u32(buffer, pos, ci_ptr->num * 2);
++			pos += store_gcov_u32(buffer, pos,
++				ci_ptr->num * 2 * GCOV_UNIT_SIZE);
+ 
+ 			for (cv_idx = 0; cv_idx < ci_ptr->num; cv_idx++) {
+ 				pos += store_gcov_u64(buffer, pos,
+diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
+index f6310f848f342..b04b87a4e0a7b 100644
+--- a/kernel/livepatch/transition.c
++++ b/kernel/livepatch/transition.c
+@@ -611,9 +611,23 @@ void klp_reverse_transition(void)
+ /* Called from copy_process() during fork */
+ void klp_copy_process(struct task_struct *child)
+ {
+-	child->patch_state = current->patch_state;
+ 
+-	/* TIF_PATCH_PENDING gets copied in setup_thread_stack() */
++	/*
++	 * The parent process may have gone through a KLP transition since
++	 * the thread flag was copied in setup_thread_stack earlier. Bring
++	 * the task flag up to date with the parent here.
++	 *
++	 * The operation is serialized against all klp_*_transition()
++	 * operations by the tasklist_lock. The only exception is
++	 * klp_update_patch_state(current), but we cannot race with
++	 * that because we are current.
++	 */
++	if (test_tsk_thread_flag(current, TIF_PATCH_PENDING))
++		set_tsk_thread_flag(child, TIF_PATCH_PENDING);
++	else
++		clear_tsk_thread_flag(child, TIF_PATCH_PENDING);
++
++	child->patch_state = current->patch_state;
+ }
+ 
+ /*
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 14af29fe13770..8b51e6a5b3869 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -171,7 +171,7 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
+ static void synchronize_rcu_tasks_generic(struct rcu_tasks *rtp)
+ {
+ 	/* Complain if the scheduler has not started.  */
+-	RCU_LOCKDEP_WARN(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
++	WARN_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
+ 			 "synchronize_rcu_tasks called too soon");
+ 
+ 	/* Wait for the grace period. */
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index b41009a283caf..b10d6bcea77df 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3393,15 +3393,16 @@ static void fill_page_cache_func(struct work_struct *work)
+ 		bnode = (struct kvfree_rcu_bulk_data *)
+ 			__get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
+ 
+-		if (bnode) {
+-			raw_spin_lock_irqsave(&krcp->lock, flags);
+-			pushed = put_cached_bnode(krcp, bnode);
+-			raw_spin_unlock_irqrestore(&krcp->lock, flags);
++		if (!bnode)
++			break;
+ 
+-			if (!pushed) {
+-				free_page((unsigned long) bnode);
+-				break;
+-			}
++		raw_spin_lock_irqsave(&krcp->lock, flags);
++		pushed = put_cached_bnode(krcp, bnode);
++		raw_spin_unlock_irqrestore(&krcp->lock, flags);
++
++		if (!pushed) {
++			free_page((unsigned long) bnode);
++			break;
+ 		}
+ 	}
+ 
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index d868df6f13c86..2165c9ac14bf4 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -5662,8 +5662,12 @@ int ftrace_regex_release(struct inode *inode, struct file *file)
+ 
+ 		if (filter_hash) {
+ 			orig_hash = &iter->ops->func_hash->filter_hash;
+-			if (iter->tr && !list_empty(&iter->tr->mod_trace))
+-				iter->hash->flags |= FTRACE_HASH_FL_MOD;
++			if (iter->tr) {
++				if (list_empty(&iter->tr->mod_trace))
++					iter->hash->flags &= ~FTRACE_HASH_FL_MOD;
++				else
++					iter->hash->flags |= FTRACE_HASH_FL_MOD;
++			}
+ 		} else
+ 			orig_hash = &iter->ops->func_hash->notrace_hash;
+ 
+diff --git a/kernel/trace/kprobe_event_gen_test.c b/kernel/trace/kprobe_event_gen_test.c
+index 18b0f1cbb947f..80e04a1e19772 100644
+--- a/kernel/trace/kprobe_event_gen_test.c
++++ b/kernel/trace/kprobe_event_gen_test.c
+@@ -35,6 +35,45 @@
+ static struct trace_event_file *gen_kprobe_test;
+ static struct trace_event_file *gen_kretprobe_test;
+ 
++#define KPROBE_GEN_TEST_FUNC	"do_sys_open"
++
++/* X86 */
++#if defined(CONFIG_X86_64) || defined(CONFIG_X86_32)
++#define KPROBE_GEN_TEST_ARG0	"dfd=%ax"
++#define KPROBE_GEN_TEST_ARG1	"filename=%dx"
++#define KPROBE_GEN_TEST_ARG2	"flags=%cx"
++#define KPROBE_GEN_TEST_ARG3	"mode=+4($stack)"
++
++/* ARM64 */
++#elif defined(CONFIG_ARM64)
++#define KPROBE_GEN_TEST_ARG0	"dfd=%x0"
++#define KPROBE_GEN_TEST_ARG1	"filename=%x1"
++#define KPROBE_GEN_TEST_ARG2	"flags=%x2"
++#define KPROBE_GEN_TEST_ARG3	"mode=%x3"
++
++/* ARM */
++#elif defined(CONFIG_ARM)
++#define KPROBE_GEN_TEST_ARG0	"dfd=%r0"
++#define KPROBE_GEN_TEST_ARG1	"filename=%r1"
++#define KPROBE_GEN_TEST_ARG2	"flags=%r2"
++#define KPROBE_GEN_TEST_ARG3	"mode=%r3"
++
++/* RISCV */
++#elif defined(CONFIG_RISCV)
++#define KPROBE_GEN_TEST_ARG0	"dfd=%a0"
++#define KPROBE_GEN_TEST_ARG1	"filename=%a1"
++#define KPROBE_GEN_TEST_ARG2	"flags=%a2"
++#define KPROBE_GEN_TEST_ARG3	"mode=%a3"
++
++/* others */
++#else
++#define KPROBE_GEN_TEST_ARG0	NULL
++#define KPROBE_GEN_TEST_ARG1	NULL
++#define KPROBE_GEN_TEST_ARG2	NULL
++#define KPROBE_GEN_TEST_ARG3	NULL
++#endif
++
++
+ /*
+  * Test to make sure we can create a kprobe event, then add more
+  * fields.
+@@ -58,14 +97,14 @@ static int __init test_gen_kprobe_cmd(void)
+ 	 * fields.
+ 	 */
+ 	ret = kprobe_event_gen_cmd_start(&cmd, "gen_kprobe_test",
+-					 "do_sys_open",
+-					 "dfd=%ax", "filename=%dx");
++					 KPROBE_GEN_TEST_FUNC,
++					 KPROBE_GEN_TEST_ARG0, KPROBE_GEN_TEST_ARG1);
+ 	if (ret)
+ 		goto free;
+ 
+ 	/* Use kprobe_event_add_fields to add the rest of the fields */
+ 
+-	ret = kprobe_event_add_fields(&cmd, "flags=%cx", "mode=+4($stack)");
++	ret = kprobe_event_add_fields(&cmd, KPROBE_GEN_TEST_ARG2, KPROBE_GEN_TEST_ARG3);
+ 	if (ret)
+ 		goto free;
+ 
+@@ -128,7 +167,7 @@ static int __init test_gen_kretprobe_cmd(void)
+ 	 * Define the kretprobe event.
+ 	 */
+ 	ret = kretprobe_event_gen_cmd_start(&cmd, "gen_kretprobe_test",
+-					    "do_sys_open",
++					    KPROBE_GEN_TEST_FUNC,
+ 					    "$retval");
+ 	if (ret)
+ 		goto free;
+@@ -206,7 +245,7 @@ static void __exit kprobe_event_gen_test_exit(void)
+ 	WARN_ON(kprobe_event_delete("gen_kprobe_test"));
+ 
+ 	/* Disable the event or you can't remove it */
+-	WARN_ON(trace_array_set_clr_event(gen_kprobe_test->tr,
++	WARN_ON(trace_array_set_clr_event(gen_kretprobe_test->tr,
+ 					  "kprobes",
+ 					  "gen_kretprobe_test", false));
+ 
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 6deac666ba3e4..a12e278155550 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -414,6 +414,7 @@ struct rb_irq_work {
+ 	struct irq_work			work;
+ 	wait_queue_head_t		waiters;
+ 	wait_queue_head_t		full_waiters;
++	long				wait_index;
+ 	bool				waiters_pending;
+ 	bool				full_waiters_pending;
+ 	bool				wakeup_full;
+@@ -794,12 +795,44 @@ static void rb_wake_up_waiters(struct irq_work *work)
+ 	struct rb_irq_work *rbwork = container_of(work, struct rb_irq_work, work);
+ 
+ 	wake_up_all(&rbwork->waiters);
+-	if (rbwork->wakeup_full) {
++	if (rbwork->full_waiters_pending || rbwork->wakeup_full) {
+ 		rbwork->wakeup_full = false;
++		rbwork->full_waiters_pending = false;
+ 		wake_up_all(&rbwork->full_waiters);
+ 	}
+ }
+ 
++/**
++ * ring_buffer_wake_waiters - wake up any waiters on this ring buffer
++ * @buffer: The ring buffer to wake waiters on
++ *
++ * In the case of a file that represents a ring buffer is closing,
++ * it is prudent to wake up any waiters that are on this.
++ */
++void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu)
++{
++	struct ring_buffer_per_cpu *cpu_buffer;
++	struct rb_irq_work *rbwork;
++
++	if (cpu == RING_BUFFER_ALL_CPUS) {
++
++		/* Wake up individual ones too. One level recursion */
++		for_each_buffer_cpu(buffer, cpu)
++			ring_buffer_wake_waiters(buffer, cpu);
++
++		rbwork = &buffer->irq_work;
++	} else {
++		cpu_buffer = buffer->buffers[cpu];
++		rbwork = &cpu_buffer->irq_work;
++	}
++
++	rbwork->wait_index++;
++	/* make sure the waiters see the new index */
++	smp_wmb();
++
++	rb_wake_up_waiters(&rbwork->work);
++}
++
+ /**
+  * ring_buffer_wait - wait for input to the ring buffer
+  * @buffer: buffer to wait on
+@@ -815,6 +848,7 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	DEFINE_WAIT(wait);
+ 	struct rb_irq_work *work;
++	long wait_index;
+ 	int ret = 0;
+ 
+ 	/*
+@@ -833,6 +867,7 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+ 		work = &cpu_buffer->irq_work;
+ 	}
+ 
++	wait_index = READ_ONCE(work->wait_index);
+ 
+ 	while (true) {
+ 		if (full)
+@@ -888,7 +923,7 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+ 			nr_pages = cpu_buffer->nr_pages;
+ 			dirty = ring_buffer_nr_dirty_pages(buffer, cpu);
+ 			if (!cpu_buffer->shortest_full ||
+-			    cpu_buffer->shortest_full < full)
++			    cpu_buffer->shortest_full > full)
+ 				cpu_buffer->shortest_full = full;
+ 			raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ 			if (!pagebusy &&
+@@ -897,6 +932,11 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+ 		}
+ 
+ 		schedule();
++
++		/* Make sure to see the new wait index */
++		smp_rmb();
++		if (wait_index != work->wait_index)
++			break;
+ 	}
+ 
+ 	if (full)
+@@ -2491,6 +2531,9 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer,
+ 		/* Mark the rest of the page with padding */
+ 		rb_event_set_padding(event);
+ 
++		/* Make sure the padding is visible before the write update */
++		smp_wmb();
++
+ 		/* Set the write back to the previous setting */
+ 		local_sub(length, &tail_page->write);
+ 		return;
+@@ -2502,6 +2545,9 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer,
+ 	/* time delta must be non zero */
+ 	event->time_delta = 1;
+ 
++	/* Make sure the padding is visible before the tail_page->write update */
++	smp_wmb();
++
+ 	/* Set write to end of buffer */
+ 	length = (tail + length) - BUF_PAGE_SIZE;
+ 	local_sub(length, &tail_page->write);
+@@ -4316,6 +4362,33 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
+ 	arch_spin_unlock(&cpu_buffer->lock);
+ 	local_irq_restore(flags);
+ 
++	/*
++	 * The writer has preempt disable, wait for it. But not forever
++	 * Although, 1 second is pretty much "forever"
++	 */
++#define USECS_WAIT	1000000
++        for (nr_loops = 0; nr_loops < USECS_WAIT; nr_loops++) {
++		/* If the write is past the end of page, a writer is still updating it */
++		if (likely(!reader || rb_page_write(reader) <= BUF_PAGE_SIZE))
++			break;
++
++		udelay(1);
++
++		/* Get the latest version of the reader write value */
++		smp_rmb();
++	}
++
++	/* The writer is not moving forward? Something is wrong */
++	if (RB_WARN_ON(cpu_buffer, nr_loops == USECS_WAIT))
++		reader = NULL;
++
++	/*
++	 * Make sure we see any padding after the write update
++	 * (see rb_reset_tail())
++	 */
++	smp_rmb();
++
++
+ 	return reader;
+ }
+ 
+@@ -5341,7 +5414,15 @@ int ring_buffer_read_page(struct trace_buffer *buffer,
+ 		unsigned int pos = 0;
+ 		unsigned int size;
+ 
+-		if (full)
++		/*
++		 * If a full page is expected, this can still be returned
++		 * if there's been a previous partial read and the
++		 * rest of the page can be read and the commit page is off
++		 * the reader page.
++		 */
++		if (full &&
++		    (!read || (len < (commit - read)) ||
++		     cpu_buffer->reader_page == cpu_buffer->commit_page))
+ 			goto out_unlock;
+ 
+ 		if (len > (commit - read))
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 50200898410d5..a5245362ce7a8 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1197,12 +1197,14 @@ void *tracing_cond_snapshot_data(struct trace_array *tr)
+ {
+ 	void *cond_data = NULL;
+ 
++	local_irq_disable();
+ 	arch_spin_lock(&tr->max_lock);
+ 
+ 	if (tr->cond_snapshot)
+ 		cond_data = tr->cond_snapshot->cond_data;
+ 
+ 	arch_spin_unlock(&tr->max_lock);
++	local_irq_enable();
+ 
+ 	return cond_data;
+ }
+@@ -1338,9 +1340,11 @@ int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data,
+ 		goto fail_unlock;
+ 	}
+ 
++	local_irq_disable();
+ 	arch_spin_lock(&tr->max_lock);
+ 	tr->cond_snapshot = cond_snapshot;
+ 	arch_spin_unlock(&tr->max_lock);
++	local_irq_enable();
+ 
+ 	mutex_unlock(&trace_types_lock);
+ 
+@@ -1367,6 +1371,7 @@ int tracing_snapshot_cond_disable(struct trace_array *tr)
+ {
+ 	int ret = 0;
+ 
++	local_irq_disable();
+ 	arch_spin_lock(&tr->max_lock);
+ 
+ 	if (!tr->cond_snapshot)
+@@ -1377,6 +1382,7 @@ int tracing_snapshot_cond_disable(struct trace_array *tr)
+ 	}
+ 
+ 	arch_spin_unlock(&tr->max_lock);
++	local_irq_enable();
+ 
+ 	return ret;
+ }
+@@ -2198,6 +2204,11 @@ static size_t tgid_map_max;
+ 
+ #define SAVED_CMDLINES_DEFAULT 128
+ #define NO_CMDLINE_MAP UINT_MAX
++/*
++ * Preemption must be disabled before acquiring trace_cmdline_lock.
++ * The various trace_arrays' max_lock must be acquired in a context
++ * where interrupt is disabled.
++ */
+ static arch_spinlock_t trace_cmdline_lock = __ARCH_SPIN_LOCK_UNLOCKED;
+ struct saved_cmdlines_buffer {
+ 	unsigned map_pid_to_cmdline[PID_MAX_DEFAULT+1];
+@@ -2410,7 +2421,11 @@ static int trace_save_cmdline(struct task_struct *tsk)
+ 	 * the lock, but we also don't want to spin
+ 	 * nor do we want to disable interrupts,
+ 	 * so if we miss here, then better luck next time.
++	 *
++	 * This is called within the scheduler and wake up, so interrupts
++	 * had better been disabled and run queue lock been held.
+ 	 */
++	lockdep_assert_preemption_disabled();
+ 	if (!arch_spin_trylock(&trace_cmdline_lock))
+ 		return 0;
+ 
+@@ -5470,9 +5485,11 @@ tracing_saved_cmdlines_size_read(struct file *filp, char __user *ubuf,
+ 	char buf[64];
+ 	int r;
+ 
++	preempt_disable();
+ 	arch_spin_lock(&trace_cmdline_lock);
+ 	r = scnprintf(buf, sizeof(buf), "%u\n", savedcmd->cmdline_num);
+ 	arch_spin_unlock(&trace_cmdline_lock);
++	preempt_enable();
+ 
+ 	return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
+ }
+@@ -5497,10 +5514,12 @@ static int tracing_resize_saved_cmdlines(unsigned int val)
+ 		return -ENOMEM;
+ 	}
+ 
++	preempt_disable();
+ 	arch_spin_lock(&trace_cmdline_lock);
+ 	savedcmd_temp = savedcmd;
+ 	savedcmd = s;
+ 	arch_spin_unlock(&trace_cmdline_lock);
++	preempt_enable();
+ 	free_saved_cmdlines_buffer(savedcmd_temp);
+ 
+ 	return 0;
+@@ -5953,10 +5972,12 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ 
+ #ifdef CONFIG_TRACER_SNAPSHOT
+ 	if (t->use_max_tr) {
++		local_irq_disable();
+ 		arch_spin_lock(&tr->max_lock);
+ 		if (tr->cond_snapshot)
+ 			ret = -EBUSY;
+ 		arch_spin_unlock(&tr->max_lock);
++		local_irq_enable();
+ 		if (ret)
+ 			goto out;
+ 	}
+@@ -7030,10 +7051,12 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ 		goto out;
+ 	}
+ 
++	local_irq_disable();
+ 	arch_spin_lock(&tr->max_lock);
+ 	if (tr->cond_snapshot)
+ 		ret = -EBUSY;
+ 	arch_spin_unlock(&tr->max_lock);
++	local_irq_enable();
+ 	if (ret)
+ 		goto out;
+ 
+diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
+index 921d0a654243c..10a50c03074ee 100644
+--- a/lib/dynamic_debug.c
++++ b/lib/dynamic_debug.c
+@@ -207,10 +207,11 @@ static int ddebug_change(const struct ddebug_query *query,
+ 				continue;
+ #ifdef CONFIG_JUMP_LABEL
+ 			if (dp->flags & _DPRINTK_FLAGS_PRINT) {
+-				if (!(modifiers->flags & _DPRINTK_FLAGS_PRINT))
++				if (!(newflags & _DPRINTK_FLAGS_PRINT))
+ 					static_branch_disable(&dp->key.dd_key_true);
+-			} else if (modifiers->flags & _DPRINTK_FLAGS_PRINT)
++			} else if (newflags & _DPRINTK_FLAGS_PRINT) {
+ 				static_branch_enable(&dp->key.dd_key_true);
++			}
+ #endif
+ 			dp->flags = newflags;
+ 			v2pr_info("changed %s:%d [%s]%s =%s\n",
+@@ -379,10 +380,6 @@ static int ddebug_parse_query(char *words[], int nwords,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (modname)
+-		/* support $modname.dyndbg=<multiple queries> */
+-		query->module = modname;
+-
+ 	for (i = 0; i < nwords; i += 2) {
+ 		char *keyword = words[i];
+ 		char *arg = words[i+1];
+@@ -423,6 +420,13 @@ static int ddebug_parse_query(char *words[], int nwords,
+ 		if (rc)
+ 			return rc;
+ 	}
++	if (!query->module && modname)
++		/*
++		 * support $modname.dyndbg=<multiple queries>, when
++		 * not given in the query itself
++		 */
++		query->module = modname;
++
+ 	vpr_info_dq(query, "parsed");
+ 	return 0;
+ }
+@@ -548,35 +552,6 @@ static int ddebug_exec_queries(char *query, const char *modname)
+ 	return nfound;
+ }
+ 
+-/**
+- * dynamic_debug_exec_queries - select and change dynamic-debug prints
+- * @query: query-string described in admin-guide/dynamic-debug-howto
+- * @modname: string containing module name, usually &module.mod_name
+- *
+- * This uses the >/proc/dynamic_debug/control reader, allowing module
+- * authors to modify their dynamic-debug callsites. The modname is
+- * canonically struct module.mod_name, but can also be null or a
+- * module-wildcard, for example: "drm*".
+- */
+-int dynamic_debug_exec_queries(const char *query, const char *modname)
+-{
+-	int rc;
+-	char *qry; /* writable copy of query */
+-
+-	if (!query) {
+-		pr_err("non-null query/command string expected\n");
+-		return -EINVAL;
+-	}
+-	qry = kstrndup(query, PAGE_SIZE, GFP_KERNEL);
+-	if (!qry)
+-		return -ENOMEM;
+-
+-	rc = ddebug_exec_queries(qry, modname);
+-	kfree(qry);
+-	return rc;
+-}
+-EXPORT_SYMBOL_GPL(dynamic_debug_exec_queries);
+-
+ #define PREFIX_SIZE 64
+ 
+ static int remaining(int wrote)
+diff --git a/lib/once.c b/lib/once.c
+index 59149bf3bfb4a..351f66aad310a 100644
+--- a/lib/once.c
++++ b/lib/once.c
+@@ -66,3 +66,33 @@ void __do_once_done(bool *done, struct static_key_true *once_key,
+ 	once_disable_jump(once_key, mod);
+ }
+ EXPORT_SYMBOL(__do_once_done);
++
++static DEFINE_MUTEX(once_mutex);
++
++bool __do_once_slow_start(bool *done)
++	__acquires(once_mutex)
++{
++	mutex_lock(&once_mutex);
++	if (*done) {
++		mutex_unlock(&once_mutex);
++		/* Keep sparse happy by restoring an even lock count on
++		 * this mutex. In case we return here, we don't call into
++		 * __do_once_done but return early in the DO_ONCE_SLOW() macro.
++		 */
++		__acquire(once_mutex);
++		return false;
++	}
++
++	return true;
++}
++EXPORT_SYMBOL(__do_once_slow_start);
++
++void __do_once_slow_done(bool *done, struct static_key_true *once_key,
++			 struct module *mod)
++	__releases(once_mutex)
++{
++	*done = true;
++	mutex_unlock(&once_mutex);
++	once_disable_jump(once_key, mod);
++}
++EXPORT_SYMBOL(__do_once_slow_done);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index c42c76447e103..c57c165bfbbc4 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4337,6 +4337,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
+ 	spinlock_t *ptl;
+ 	unsigned long haddr = address & huge_page_mask(h);
+ 	bool new_page = false;
++	u32 hash = hugetlb_fault_mutex_hash(mapping, idx);
+ 
+ 	/*
+ 	 * Currently, we are forced to kill the process in the event the
+@@ -4346,7 +4347,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
+ 	if (is_vma_resv_set(vma, HPAGE_RESV_UNMAPPED)) {
+ 		pr_warn_ratelimited("PID %d killed due to inadequate hugepage pool\n",
+ 			   current->pid);
+-		return ret;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -4365,7 +4366,6 @@ retry:
+ 		 * Check for page in userfault range
+ 		 */
+ 		if (userfaultfd_missing(vma)) {
+-			u32 hash;
+ 			struct vm_fault vmf = {
+ 				.vma = vma,
+ 				.address = haddr,
+@@ -4380,17 +4380,14 @@ retry:
+ 			};
+ 
+ 			/*
+-			 * hugetlb_fault_mutex and i_mmap_rwsem must be
+-			 * dropped before handling userfault.  Reacquire
+-			 * after handling fault to make calling code simpler.
++			 * vma_lock and hugetlb_fault_mutex must be dropped
++			 * before handling userfault. Also mmap_lock will
++			 * be dropped during handling userfault, any vma
++			 * operation should be careful from here.
+ 			 */
+-			hash = hugetlb_fault_mutex_hash(mapping, idx);
+ 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+ 			i_mmap_unlock_read(mapping);
+-			ret = handle_userfault(&vmf, VM_UFFD_MISSING);
+-			i_mmap_lock_read(mapping);
+-			mutex_lock(&hugetlb_fault_mutex_table[hash]);
+-			goto out;
++			return handle_userfault(&vmf, VM_UFFD_MISSING);
+ 		}
+ 
+ 		page = alloc_huge_page(vma, haddr, 0);
+@@ -4497,6 +4494,8 @@ retry:
+ 
+ 	unlock_page(page);
+ out:
++	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
++	i_mmap_unlock_read(mapping);
+ 	return ret;
+ 
+ backout:
+@@ -4592,10 +4591,12 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ 	mutex_lock(&hugetlb_fault_mutex_table[hash]);
+ 
+ 	entry = huge_ptep_get(ptep);
+-	if (huge_pte_none(entry)) {
+-		ret = hugetlb_no_page(mm, vma, mapping, idx, address, ptep, flags);
+-		goto out_mutex;
+-	}
++	if (huge_pte_none(entry))
++		/*
++		 * hugetlb_no_page will drop vma lock and hugetlb fault
++		 * mutex internally, which make us return immediately.
++		 */
++		return hugetlb_no_page(mm, vma, mapping, idx, address, ptep, flags);
+ 
+ 	ret = 0;
+ 
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 31fc116a8ec9b..33ebda8385b95 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1856,7 +1856,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
+ 	if (!arch_validate_flags(vma->vm_flags)) {
+ 		error = -EINVAL;
+ 		if (file)
+-			goto unmap_and_free_vma;
++			goto close_and_free_vma;
+ 		else
+ 			goto free_vma;
+ 	}
+@@ -1900,6 +1900,9 @@ out:
+ 
+ 	return addr;
+ 
++close_and_free_vma:
++	if (vma->vm_ops && vma->vm_ops->close)
++		vma->vm_ops->close(vma);
+ unmap_and_free_vma:
+ 	vma->vm_file = NULL;
+ 	fput(file);
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 2cb0cf035476b..866eb22432de2 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -4482,15 +4482,27 @@ static inline int __get_blocks(struct hci_dev *hdev, struct sk_buff *skb)
+ 	return DIV_ROUND_UP(skb->len - HCI_ACL_HDR_SIZE, hdev->block_len);
+ }
+ 
+-static void __check_timeout(struct hci_dev *hdev, unsigned int cnt)
++static void __check_timeout(struct hci_dev *hdev, unsigned int cnt, u8 type)
+ {
+-	if (!hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
+-		/* ACL tx timeout must be longer than maximum
+-		 * link supervision timeout (40.9 seconds) */
+-		if (!cnt && time_after(jiffies, hdev->acl_last_tx +
+-				       HCI_ACL_TX_TIMEOUT))
+-			hci_link_tx_to(hdev, ACL_LINK);
++	unsigned long last_tx;
++
++	if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED))
++		return;
++
++	switch (type) {
++	case LE_LINK:
++		last_tx = hdev->le_last_tx;
++		break;
++	default:
++		last_tx = hdev->acl_last_tx;
++		break;
+ 	}
++
++	/* tx timeout must be longer than maximum link supervision timeout
++	 * (40.9 seconds)
++	 */
++	if (!cnt && time_after(jiffies, last_tx + HCI_ACL_TX_TIMEOUT))
++		hci_link_tx_to(hdev, type);
+ }
+ 
+ /* Schedule SCO */
+@@ -4548,7 +4560,7 @@ static void hci_sched_acl_pkt(struct hci_dev *hdev)
+ 	struct sk_buff *skb;
+ 	int quote;
+ 
+-	__check_timeout(hdev, cnt);
++	__check_timeout(hdev, cnt, ACL_LINK);
+ 
+ 	while (hdev->acl_cnt &&
+ 	       (chan = hci_chan_sent(hdev, ACL_LINK, &quote))) {
+@@ -4591,8 +4603,6 @@ static void hci_sched_acl_blk(struct hci_dev *hdev)
+ 	int quote;
+ 	u8 type;
+ 
+-	__check_timeout(hdev, cnt);
+-
+ 	BT_DBG("%s", hdev->name);
+ 
+ 	if (hdev->dev_type == HCI_AMP)
+@@ -4600,6 +4610,8 @@ static void hci_sched_acl_blk(struct hci_dev *hdev)
+ 	else
+ 		type = ACL_LINK;
+ 
++	__check_timeout(hdev, cnt, type);
++
+ 	while (hdev->block_cnt > 0 &&
+ 	       (chan = hci_chan_sent(hdev, type, &quote))) {
+ 		u32 priority = (skb_peek(&chan->data_q))->priority;
+@@ -4673,7 +4685,7 @@ static void hci_sched_le(struct hci_dev *hdev)
+ 
+ 	cnt = hdev->le_pkts ? hdev->le_cnt : hdev->acl_cnt;
+ 
+-	__check_timeout(hdev, cnt);
++	__check_timeout(hdev, cnt, LE_LINK);
+ 
+ 	tmp = cnt;
+ 	while (cnt && (chan = hci_chan_sent(hdev, LE_LINK, &quote))) {
+diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c
+index b69d88b88d2e4..ccd2c377bf83c 100644
+--- a/net/bluetooth/hci_sysfs.c
++++ b/net/bluetooth/hci_sysfs.c
+@@ -48,6 +48,9 @@ void hci_conn_add_sysfs(struct hci_conn *conn)
+ 
+ 	BT_DBG("conn %p", conn);
+ 
++	if (device_is_registered(&conn->dev))
++		return;
++
+ 	dev_set_name(&conn->dev, "%s:%d", hdev->name, conn->handle);
+ 
+ 	if (device_add(&conn->dev) < 0) {
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 0c38af2ff2097..83dd76e9196f3 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -61,6 +61,9 @@ static void l2cap_send_disconn_req(struct l2cap_chan *chan, int err);
+ 
+ static void l2cap_tx(struct l2cap_chan *chan, struct l2cap_ctrl *control,
+ 		     struct sk_buff_head *skbs, u8 event);
++static void l2cap_retrans_timeout(struct work_struct *work);
++static void l2cap_monitor_timeout(struct work_struct *work);
++static void l2cap_ack_timeout(struct work_struct *work);
+ 
+ static inline u8 bdaddr_type(u8 link_type, u8 bdaddr_type)
+ {
+@@ -476,6 +479,9 @@ struct l2cap_chan *l2cap_chan_create(void)
+ 	write_unlock(&chan_list_lock);
+ 
+ 	INIT_DELAYED_WORK(&chan->chan_timer, l2cap_chan_timeout);
++	INIT_DELAYED_WORK(&chan->retrans_timer, l2cap_retrans_timeout);
++	INIT_DELAYED_WORK(&chan->monitor_timer, l2cap_monitor_timeout);
++	INIT_DELAYED_WORK(&chan->ack_timer, l2cap_ack_timeout);
+ 
+ 	chan->state = BT_OPEN;
+ 
+@@ -3316,10 +3322,6 @@ int l2cap_ertm_init(struct l2cap_chan *chan)
+ 	chan->rx_state = L2CAP_RX_STATE_RECV;
+ 	chan->tx_state = L2CAP_TX_STATE_XMIT;
+ 
+-	INIT_DELAYED_WORK(&chan->retrans_timer, l2cap_retrans_timeout);
+-	INIT_DELAYED_WORK(&chan->monitor_timer, l2cap_monitor_timeout);
+-	INIT_DELAYED_WORK(&chan->ack_timer, l2cap_ack_timeout);
+-
+ 	skb_queue_head_init(&chan->srej_q);
+ 
+ 	err = l2cap_seq_list_init(&chan->srej_list, chan->tx_win);
+@@ -4303,6 +4305,12 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn,
+ 		}
+ 	}
+ 
++	chan = l2cap_chan_hold_unless_zero(chan);
++	if (!chan) {
++		err = -EBADSLT;
++		goto unlock;
++	}
++
+ 	err = 0;
+ 
+ 	l2cap_chan_lock(chan);
+@@ -4332,6 +4340,7 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn,
+ 	}
+ 
+ 	l2cap_chan_unlock(chan);
++	l2cap_chan_put(chan);
+ 
+ unlock:
+ 	mutex_unlock(&conn->chan_lock);
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index e918a0f3cda28..afa82adaf6cd5 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -274,6 +274,7 @@ static void bcm_can_tx(struct bcm_op *op)
+ 	struct sk_buff *skb;
+ 	struct net_device *dev;
+ 	struct canfd_frame *cf = op->frames + op->cfsiz * op->currframe;
++	int err;
+ 
+ 	/* no target device? => exit */
+ 	if (!op->ifindex)
+@@ -298,11 +299,11 @@ static void bcm_can_tx(struct bcm_op *op)
+ 	/* send with loopback */
+ 	skb->dev = dev;
+ 	can_skb_set_owner(skb, op->sk);
+-	can_send(skb, 1);
++	err = can_send(skb, 1);
++	if (!err)
++		op->frames_abs++;
+ 
+-	/* update statistics */
+ 	op->currframe++;
+-	op->frames_abs++;
+ 
+ 	/* reached last frame? */
+ 	if (op->currframe >= op->nframes)
+diff --git a/net/core/stream.c b/net/core/stream.c
+index a166a32b411fa..a61130504827a 100644
+--- a/net/core/stream.c
++++ b/net/core/stream.c
+@@ -159,7 +159,8 @@ int sk_stream_wait_memory(struct sock *sk, long *timeo_p)
+ 		*timeo_p = current_timeo;
+ 	}
+ out:
+-	remove_wait_queue(sk_sleep(sk), &wait);
++	if (!sock_flag(sk, SOCK_DEAD))
++		remove_wait_queue(sk_sleep(sk), &wait);
+ 	return err;
+ 
+ do_error:
+diff --git a/net/ieee802154/socket.c b/net/ieee802154/socket.c
+index 7edec210780a3..ecc0d5fbde048 100644
+--- a/net/ieee802154/socket.c
++++ b/net/ieee802154/socket.c
+@@ -273,6 +273,10 @@ static int raw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ 		err = -EMSGSIZE;
+ 		goto out_dev;
+ 	}
++	if (!size) {
++		err = 0;
++		goto out_dev;
++	}
+ 
+ 	hlen = LL_RESERVED_SPACE(dev);
+ 	tlen = dev->needed_tailroom;
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index a733ce1a3f8f4..87d73a3e92bad 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -158,7 +158,7 @@ void inet_sock_destruct(struct sock *sk)
+ 
+ 	kfree(rcu_dereference_protected(inet->inet_opt, 1));
+ 	dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1));
+-	dst_release(sk->sk_rx_dst);
++	dst_release(rcu_dereference_protected(sk->sk_rx_dst, 1));
+ 	sk_refcnt_debug_dec(sk);
+ }
+ EXPORT_SYMBOL(inet_sock_destruct);
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index feb7f072f2b26..c0de655fffd74 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -771,8 +771,8 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+ 	if (likely(remaining > 1))
+ 		remaining &= ~1U;
+ 
+-	net_get_random_once(table_perturb,
+-			    INET_TABLE_PERTURB_SIZE * sizeof(*table_perturb));
++	get_random_slow_once(table_perturb,
++			     INET_TABLE_PERTURB_SIZE * sizeof(*table_perturb));
+ 	index = port_offset & (INET_TABLE_PERTURB_SIZE - 1);
+ 
+ 	offset = READ_ONCE(table_perturb[index]) + (port_offset >> 32);
+diff --git a/net/ipv4/netfilter/nft_fib_ipv4.c b/net/ipv4/netfilter/nft_fib_ipv4.c
+index 03df986217b7b..9e6f0f1275e2c 100644
+--- a/net/ipv4/netfilter/nft_fib_ipv4.c
++++ b/net/ipv4/netfilter/nft_fib_ipv4.c
+@@ -83,6 +83,9 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	else
+ 		oif = NULL;
+ 
++	if (priv->flags & NFTA_FIB_F_IIF)
++		fl4.flowi4_oif = l3mdev_master_ifindex_rcu(oif);
++
+ 	if (nft_hook(pkt) == NF_INET_PRE_ROUTING &&
+ 	    nft_fib_is_loopback(pkt->skb, nft_in(pkt))) {
+ 		nft_fib_store_result(dest, priv, nft_in(pkt));
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index bfeb05f62b94f..a7127364253c6 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2796,6 +2796,8 @@ int tcp_disconnect(struct sock *sk, int flags)
+ 	tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
+ 	tp->snd_cwnd = TCP_INIT_CWND;
+ 	tp->snd_cwnd_cnt = 0;
++	tp->is_cwnd_limited = 0;
++	tp->max_packets_out = 0;
+ 	tp->window_clamp = 0;
+ 	tp->delivered = 0;
+ 	tp->delivered_ce = 0;
+@@ -2814,8 +2816,7 @@ int tcp_disconnect(struct sock *sk, int flags)
+ 	icsk->icsk_ack.rcv_mss = TCP_MIN_MSS;
+ 	memset(&tp->rx_opt, 0, sizeof(tp->rx_opt));
+ 	__sk_dst_reset(sk);
+-	dst_release(sk->sk_rx_dst);
+-	sk->sk_rx_dst = NULL;
++	dst_release(xchg((__force struct dst_entry **)&sk->sk_rx_dst, NULL));
+ 	tcp_saved_syn_free(tp);
+ 	tp->compressed_ack = 0;
+ 	tp->segs_in = 0;
+@@ -4041,12 +4042,16 @@ static void __tcp_alloc_md5sig_pool(void)
+ 	 * to memory. See smp_rmb() in tcp_get_md5sig_pool()
+ 	 */
+ 	smp_wmb();
+-	tcp_md5sig_pool_populated = true;
++	/* Paired with READ_ONCE() from tcp_alloc_md5sig_pool()
++	 * and tcp_get_md5sig_pool().
++	*/
++	WRITE_ONCE(tcp_md5sig_pool_populated, true);
+ }
+ 
+ bool tcp_alloc_md5sig_pool(void)
+ {
+-	if (unlikely(!tcp_md5sig_pool_populated)) {
++	/* Paired with WRITE_ONCE() from __tcp_alloc_md5sig_pool() */
++	if (unlikely(!READ_ONCE(tcp_md5sig_pool_populated))) {
+ 		mutex_lock(&tcp_md5sig_mutex);
+ 
+ 		if (!tcp_md5sig_pool_populated) {
+@@ -4057,7 +4062,8 @@ bool tcp_alloc_md5sig_pool(void)
+ 
+ 		mutex_unlock(&tcp_md5sig_mutex);
+ 	}
+-	return tcp_md5sig_pool_populated;
++	/* Paired with WRITE_ONCE() from __tcp_alloc_md5sig_pool() */
++	return READ_ONCE(tcp_md5sig_pool_populated);
+ }
+ EXPORT_SYMBOL(tcp_alloc_md5sig_pool);
+ 
+@@ -4073,7 +4079,8 @@ struct tcp_md5sig_pool *tcp_get_md5sig_pool(void)
+ {
+ 	local_bh_disable();
+ 
+-	if (tcp_md5sig_pool_populated) {
++	/* Paired with WRITE_ONCE() from __tcp_alloc_md5sig_pool() */
++	if (READ_ONCE(tcp_md5sig_pool_populated)) {
+ 		/* coupled with smp_wmb() in __tcp_alloc_md5sig_pool() */
+ 		smp_rmb();
+ 		return this_cpu_ptr(&tcp_md5sig_pool);
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 4ecd85b1e806c..377cba9b124d0 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -5777,7 +5777,7 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb)
+ 	trace_tcp_probe(sk, skb);
+ 
+ 	tcp_mstamp_refresh(tp);
+-	if (unlikely(!sk->sk_rx_dst))
++	if (unlikely(!rcu_access_pointer(sk->sk_rx_dst)))
+ 		inet_csk(sk)->icsk_af_ops->sk_rx_dst_set(sk, skb);
+ 	/*
+ 	 *	Header prediction.
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 0d165ce2d80a7..5c1e6b0687e2a 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1670,15 +1670,18 @@ int tcp_v4_do_rcv(struct sock *sk, struct sk_buff *skb)
+ 	struct sock *rsk;
+ 
+ 	if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */
+-		struct dst_entry *dst = sk->sk_rx_dst;
++		struct dst_entry *dst;
++
++		dst = rcu_dereference_protected(sk->sk_rx_dst,
++						lockdep_sock_is_held(sk));
+ 
+ 		sock_rps_save_rxhash(sk, skb);
+ 		sk_mark_napi_id(sk, skb);
+ 		if (dst) {
+ 			if (inet_sk(sk)->rx_dst_ifindex != skb->skb_iif ||
+ 			    !dst->ops->check(dst, 0)) {
++				RCU_INIT_POINTER(sk->sk_rx_dst, NULL);
+ 				dst_release(dst);
+-				sk->sk_rx_dst = NULL;
+ 			}
+ 		}
+ 		tcp_rcv_established(sk, skb);
+@@ -1753,7 +1756,7 @@ int tcp_v4_early_demux(struct sk_buff *skb)
+ 		skb->sk = sk;
+ 		skb->destructor = sock_edemux;
+ 		if (sk_fullsock(sk)) {
+-			struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst);
++			struct dst_entry *dst = rcu_dereference(sk->sk_rx_dst);
+ 
+ 			if (dst)
+ 				dst = dst_check(dst, 0);
+@@ -2162,7 +2165,7 @@ void inet_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
+ 	struct dst_entry *dst = skb_dst(skb);
+ 
+ 	if (dst && dst_hold_safe(dst)) {
+-		sk->sk_rx_dst = dst;
++		rcu_assign_pointer(sk->sk_rx_dst, dst);
+ 		inet_sk(sk)->rx_dst_ifindex = skb->skb_iif;
+ 	}
+ }
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 48fce999dc612..eefd032bc6dbd 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1876,15 +1876,20 @@ static void tcp_cwnd_validate(struct sock *sk, bool is_cwnd_limited)
+ 	const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 
+-	/* Track the maximum number of outstanding packets in each
+-	 * window, and remember whether we were cwnd-limited then.
++	/* Track the strongest available signal of the degree to which the cwnd
++	 * is fully utilized. If cwnd-limited then remember that fact for the
++	 * current window. If not cwnd-limited then track the maximum number of
++	 * outstanding packets in the current window. (If cwnd-limited then we
++	 * chose to not update tp->max_packets_out to avoid an extra else
++	 * clause with no functional impact.)
+ 	 */
+-	if (!before(tp->snd_una, tp->max_packets_seq) ||
+-	    tp->packets_out > tp->max_packets_out ||
+-	    is_cwnd_limited) {
+-		tp->max_packets_out = tp->packets_out;
+-		tp->max_packets_seq = tp->snd_nxt;
++	if (!before(tp->snd_una, tp->cwnd_usage_seq) ||
++	    is_cwnd_limited ||
++	    (!tp->is_cwnd_limited &&
++	     tp->packets_out > tp->max_packets_out)) {
+ 		tp->is_cwnd_limited = is_cwnd_limited;
++		tp->max_packets_out = tp->packets_out;
++		tp->cwnd_usage_seq = tp->snd_nxt;
+ 	}
+ 
+ 	if (tcp_is_cwnd_limited(sk)) {
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index e498c7666ec62..4446aa8237ff0 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2193,7 +2193,7 @@ bool udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst)
+ 	struct dst_entry *old;
+ 
+ 	if (dst_hold_safe(dst)) {
+-		old = xchg(&sk->sk_rx_dst, dst);
++		old = xchg((__force struct dst_entry **)&sk->sk_rx_dst, dst);
+ 		dst_release(old);
+ 		return old != dst;
+ 	}
+@@ -2383,7 +2383,7 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 		struct dst_entry *dst = skb_dst(skb);
+ 		int ret;
+ 
+-		if (unlikely(sk->sk_rx_dst != dst))
++		if (unlikely(rcu_dereference(sk->sk_rx_dst) != dst))
+ 			udp_sk_rx_dst_set(sk, dst);
+ 
+ 		ret = udp_unicast_rcv_skb(sk, skb, uh);
+@@ -2541,7 +2541,7 @@ int udp_v4_early_demux(struct sk_buff *skb)
+ 
+ 	skb->sk = sk;
+ 	skb->destructor = sock_efree;
+-	dst = READ_ONCE(sk->sk_rx_dst);
++	dst = rcu_dereference(sk->sk_rx_dst);
+ 
+ 	if (dst)
+ 		dst = dst_check(dst, 0);
+diff --git a/net/ipv6/netfilter/nft_fib_ipv6.c b/net/ipv6/netfilter/nft_fib_ipv6.c
+index 92f3235fa2874..602743f6dcee0 100644
+--- a/net/ipv6/netfilter/nft_fib_ipv6.c
++++ b/net/ipv6/netfilter/nft_fib_ipv6.c
+@@ -37,6 +37,9 @@ static int nft_fib6_flowi_init(struct flowi6 *fl6, const struct nft_fib *priv,
+ 	if (ipv6_addr_type(&fl6->daddr) & IPV6_ADDR_LINKLOCAL) {
+ 		lookup_flags |= RT6_LOOKUP_F_IFACE;
+ 		fl6->flowi6_oif = get_ifindex(dev ? dev : pkt->skb->dev);
++	} else if ((priv->flags & NFTA_FIB_F_IIF) &&
++		   (netif_is_l3_master(dev) || netif_is_l3_slave(dev))) {
++		fl6->flowi6_oif = dev->ifindex;
+ 	}
+ 
+ 	if (ipv6_addr_type(&fl6->saddr) & IPV6_ADDR_UNICAST)
+@@ -193,7 +196,8 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	if (rt->rt6i_flags & (RTF_REJECT | RTF_ANYCAST | RTF_LOCAL))
+ 		goto put_rt_err;
+ 
+-	if (oif && oif != rt->rt6i_idev->dev)
++	if (oif && oif != rt->rt6i_idev->dev &&
++	    l3mdev_master_ifindex_rcu(rt->rt6i_idev->dev) != oif->ifindex)
+ 		goto put_rt_err;
+ 
+ 	nft_fib_store_result(dest, priv, rt->rt6i_idev->dev);
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 8d91f36cb11bc..c14eaec64a0b8 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -107,7 +107,7 @@ static void inet6_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
+ 	if (dst && dst_hold_safe(dst)) {
+ 		const struct rt6_info *rt = (const struct rt6_info *)dst;
+ 
+-		sk->sk_rx_dst = dst;
++		rcu_assign_pointer(sk->sk_rx_dst, dst);
+ 		inet_sk(sk)->rx_dst_ifindex = skb->skb_iif;
+ 		tcp_inet6_sk(sk)->rx_dst_cookie = rt6_get_cookie(rt);
+ 	}
+@@ -1482,15 +1482,18 @@ static int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
+ 		opt_skb = skb_clone(skb, sk_gfp_mask(sk, GFP_ATOMIC));
+ 
+ 	if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */
+-		struct dst_entry *dst = sk->sk_rx_dst;
++		struct dst_entry *dst;
++
++		dst = rcu_dereference_protected(sk->sk_rx_dst,
++						lockdep_sock_is_held(sk));
+ 
+ 		sock_rps_save_rxhash(sk, skb);
+ 		sk_mark_napi_id(sk, skb);
+ 		if (dst) {
+ 			if (inet_sk(sk)->rx_dst_ifindex != skb->skb_iif ||
+ 			    dst->ops->check(dst, np->rx_dst_cookie) == NULL) {
++				RCU_INIT_POINTER(sk->sk_rx_dst, NULL);
+ 				dst_release(dst);
+-				sk->sk_rx_dst = NULL;
+ 			}
+ 		}
+ 
+@@ -1842,7 +1845,7 @@ INDIRECT_CALLABLE_SCOPE void tcp_v6_early_demux(struct sk_buff *skb)
+ 		skb->sk = sk;
+ 		skb->destructor = sock_edemux;
+ 		if (sk_fullsock(sk)) {
+-			struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst);
++			struct dst_entry *dst = rcu_dereference(sk->sk_rx_dst);
+ 
+ 			if (dst)
+ 				dst = dst_check(dst, tcp_inet6_sk(sk)->rx_dst_cookie);
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 4e90e5a529455..9b504bf492144 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -941,7 +941,7 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 		struct dst_entry *dst = skb_dst(skb);
+ 		int ret;
+ 
+-		if (unlikely(sk->sk_rx_dst != dst))
++		if (unlikely(rcu_dereference(sk->sk_rx_dst) != dst))
+ 			udp6_sk_rx_dst_set(sk, dst);
+ 
+ 		if (!uh->check && !udp_sk(sk)->no_check6_rx) {
+@@ -1055,7 +1055,7 @@ INDIRECT_CALLABLE_SCOPE void udp_v6_early_demux(struct sk_buff *skb)
+ 
+ 	skb->sk = sk;
+ 	skb->destructor = sock_efree;
+-	dst = READ_ONCE(sk->sk_rx_dst);
++	dst = rcu_dereference(sk->sk_rx_dst);
+ 
+ 	if (dst)
+ 		dst = dst_check(dst, inet6_sk(sk)->rx_dst_cookie);
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 8010967a68741..c6a7f1c99abc5 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -3357,9 +3357,6 @@ static int ieee80211_set_csa_beacon(struct ieee80211_sub_if_data *sdata,
+ 	case NL80211_IFTYPE_MESH_POINT: {
+ 		struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
+ 
+-		if (params->chandef.width != sdata->vif.bss_conf.chandef.width)
+-			return -EINVAL;
+-
+ 		/* changes into another band are not supported */
+ 		if (sdata->vif.bss_conf.chandef.chan->band !=
+ 		    params->chandef.chan->band)
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index 9d6ef6cb9b263..6b5c0abf7f1b5 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -241,10 +241,17 @@ void ovs_dp_process_packet(struct sk_buff *skb, struct sw_flow_key *key)
+ 		upcall.portid = ovs_vport_find_upcall_portid(p, skb);
+ 		upcall.mru = OVS_CB(skb)->mru;
+ 		error = ovs_dp_upcall(dp, skb, key, &upcall, 0);
+-		if (unlikely(error))
+-			kfree_skb(skb);
+-		else
++		switch (error) {
++		case 0:
++		case -EAGAIN:
++		case -ERESTARTSYS:
++		case -EINTR:
+ 			consume_skb(skb);
++			break;
++		default:
++			kfree_skb(skb);
++			break;
++		}
+ 		stats_counter = &stats->n_missed;
+ 		goto out;
+ 	}
+@@ -537,8 +544,9 @@ static int queue_userspace_packet(struct datapath *dp, struct sk_buff *skb,
+ out:
+ 	if (err)
+ 		skb_tx_error(skb);
+-	kfree_skb(user_skb);
+-	kfree_skb(nskb);
++	consume_skb(user_skb);
++	consume_skb(nskb);
++
+ 	return err;
+ }
+ 
+diff --git a/net/rds/tcp.c b/net/rds/tcp.c
+index 5327d130c4b56..b560d06e6d96d 100644
+--- a/net/rds/tcp.c
++++ b/net/rds/tcp.c
+@@ -166,10 +166,10 @@ void rds_tcp_reset_callbacks(struct socket *sock,
+ 	 */
+ 	atomic_set(&cp->cp_state, RDS_CONN_RESETTING);
+ 	wait_event(cp->cp_waitq, !test_bit(RDS_IN_XMIT, &cp->cp_flags));
+-	lock_sock(osock->sk);
+ 	/* reset receive side state for rds_tcp_data_recv() for osock  */
+ 	cancel_delayed_work_sync(&cp->cp_send_w);
+ 	cancel_delayed_work_sync(&cp->cp_recv_w);
++	lock_sock(osock->sk);
+ 	if (tc->t_tinc) {
+ 		rds_inc_put(&tc->t_tinc->ti_inc);
+ 		tc->t_tinc = NULL;
+diff --git a/net/sctp/auth.c b/net/sctp/auth.c
+index db6b7373d16c3..34964145514e6 100644
+--- a/net/sctp/auth.c
++++ b/net/sctp/auth.c
+@@ -863,12 +863,17 @@ int sctp_auth_set_key(struct sctp_endpoint *ep,
+ 	}
+ 
+ 	list_del_init(&shkey->key_list);
+-	sctp_auth_shkey_release(shkey);
+ 	list_add(&cur_key->key_list, sh_keys);
+ 
+-	if (asoc && asoc->active_key_id == auth_key->sca_keynumber)
+-		sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
++	if (asoc && asoc->active_key_id == auth_key->sca_keynumber &&
++	    sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL)) {
++		list_del_init(&cur_key->key_list);
++		sctp_auth_shkey_release(cur_key);
++		list_add(&shkey->key_list, sh_keys);
++		return -ENOMEM;
++	}
+ 
++	sctp_auth_shkey_release(shkey);
+ 	return 0;
+ }
+ 
+@@ -902,8 +907,13 @@ int sctp_auth_set_active_key(struct sctp_endpoint *ep,
+ 		return -EINVAL;
+ 
+ 	if (asoc) {
++		__u16  active_key_id = asoc->active_key_id;
++
+ 		asoc->active_key_id = key_id;
+-		sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
++		if (sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL)) {
++			asoc->active_key_id = active_key_id;
++			return -ENOMEM;
++		}
+ 	} else
+ 		ep->active_key_id = key_id;
+ 
+diff --git a/net/unix/garbage.c b/net/unix/garbage.c
+index d45d5366115a7..dc27635403932 100644
+--- a/net/unix/garbage.c
++++ b/net/unix/garbage.c
+@@ -204,6 +204,7 @@ void wait_for_unix_gc(void)
+ /* The external entry point: unix_gc() */
+ void unix_gc(void)
+ {
++	struct sk_buff *next_skb, *skb;
+ 	struct unix_sock *u;
+ 	struct unix_sock *next;
+ 	struct sk_buff_head hitlist;
+@@ -297,11 +298,30 @@ void unix_gc(void)
+ 
+ 	spin_unlock(&unix_gc_lock);
+ 
++	/* We need io_uring to clean its registered files, ignore all io_uring
++	 * originated skbs. It's fine as io_uring doesn't keep references to
++	 * other io_uring instances and so killing all other files in the cycle
++	 * will put all io_uring references forcing it to go through normal
++	 * release.path eventually putting registered files.
++	 */
++	skb_queue_walk_safe(&hitlist, skb, next_skb) {
++		if (skb->scm_io_uring) {
++			__skb_unlink(skb, &hitlist);
++			skb_queue_tail(&skb->sk->sk_receive_queue, skb);
++		}
++	}
++
+ 	/* Here we are. Hitlist is filled. Die. */
+ 	__skb_queue_purge(&hitlist);
+ 
+ 	spin_lock(&unix_gc_lock);
+ 
++	/* There could be io_uring registered files, just push them back to
++	 * the inflight list
++	 */
++	list_for_each_entry_safe(u, next, &gc_candidates, link)
++		list_move_tail(&u->link, &gc_inflight_list);
++
+ 	/* All candidates should have been detached by now. */
+ 	BUG_ON(!list_empty(&gc_candidates));
+ 
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index d6d3a05c008a4..c9ee9259af48a 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -1196,7 +1196,7 @@ EXPORT_SYMBOL_GPL(virtio_transport_recv_pkt);
+ 
+ void virtio_transport_free_pkt(struct virtio_vsock_pkt *pkt)
+ {
+-	kfree(pkt->buf);
++	kvfree(pkt->buf);
+ 	kfree(pkt);
+ }
+ EXPORT_SYMBOL_GPL(virtio_transport_free_pkt);
+diff --git a/net/xfrm/xfrm_ipcomp.c b/net/xfrm/xfrm_ipcomp.c
+index 0814320472f18..24ac6805275e9 100644
+--- a/net/xfrm/xfrm_ipcomp.c
++++ b/net/xfrm/xfrm_ipcomp.c
+@@ -212,6 +212,7 @@ static void ipcomp_free_scratches(void)
+ 		vfree(*per_cpu_ptr(scratches, i));
+ 
+ 	free_percpu(scratches);
++	ipcomp_scratches = NULL;
+ }
+ 
+ static void * __percpu *ipcomp_alloc_scratches(void)
+diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
+index 0d6e118207913..25696de8114a3 100644
+--- a/scripts/Kbuild.include
++++ b/scripts/Kbuild.include
+@@ -179,8 +179,29 @@ echo-cmd = $(if $($(quiet)cmd_$(1)),\
+  quiet_redirect :=
+ silent_redirect := exec >/dev/null;
+ 
++# Delete the target on interruption
++#
++# GNU Make automatically deletes the target if it has already been changed by
++# the interrupted recipe. So, you can safely stop the build by Ctrl-C (Make
++# will delete incomplete targets), and resume it later.
++#
++# However, this does not work when the stderr is piped to another program, like
++#  $ make >&2 | tee log
++# Make dies with SIGPIPE before cleaning the targets.
++#
++# To address it, we clean the target in signal traps.
++#
++# Make deletes the target when it catches SIGHUP, SIGINT, SIGQUIT, SIGTERM.
++# So, we cover them, and also SIGPIPE just in case.
++#
++# Of course, this is unneeded for phony targets.
++delete-on-interrupt = \
++	$(if $(filter-out $(PHONY), $@), \
++		$(foreach sig, HUP INT QUIT TERM PIPE, \
++			trap 'rm -f $@; trap - $(sig); kill -s $(sig) $$$$' $(sig);))
++
+ # printing commands
+-cmd = @set -e; $(echo-cmd) $($(quiet)redirect) $(cmd_$(1))
++cmd = @set -e; $(echo-cmd) $($(quiet)redirect) $(delete-on-interrupt) $(cmd_$(1))
+ 
+ ###
+ # if_changed      - execute command if any prerequisite is newer than
+diff --git a/scripts/package/mkspec b/scripts/package/mkspec
+index 7c477ca7dc982..951cc60e5a903 100755
+--- a/scripts/package/mkspec
++++ b/scripts/package/mkspec
+@@ -85,10 +85,10 @@ $S
+ 	mkdir -p %{buildroot}/boot
+ 	%ifarch ia64
+ 	mkdir -p %{buildroot}/boot/efi
+-	cp \$($MAKE image_name) %{buildroot}/boot/efi/vmlinuz-$KERNELRELEASE
++	cp \$($MAKE -s image_name) %{buildroot}/boot/efi/vmlinuz-$KERNELRELEASE
+ 	ln -s efi/vmlinuz-$KERNELRELEASE %{buildroot}/boot/
+ 	%else
+-	cp \$($MAKE image_name) %{buildroot}/boot/vmlinuz-$KERNELRELEASE
++	cp \$($MAKE -s image_name) %{buildroot}/boot/vmlinuz-$KERNELRELEASE
+ 	%endif
+ $M	$MAKE %{?_smp_mflags} INSTALL_MOD_PATH=%{buildroot} modules_install
+ 	$MAKE %{?_smp_mflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install
+diff --git a/scripts/selinux/install_policy.sh b/scripts/selinux/install_policy.sh
+index 2dccf141241d7..20af56ce245c5 100755
+--- a/scripts/selinux/install_policy.sh
++++ b/scripts/selinux/install_policy.sh
+@@ -78,7 +78,7 @@ cd /etc/selinux/dummy/contexts/files
+ $SF -F file_contexts /
+ 
+ mounts=`cat /proc/$$/mounts | \
+-	egrep "ext[234]|jfs|xfs|reiserfs|jffs2|gfs2|btrfs|f2fs|ocfs2" | \
++	grep -E "ext[234]|jfs|xfs|reiserfs|jffs2|gfs2|btrfs|f2fs|ocfs2" | \
+ 	awk '{ print $2 '}`
+ $SF -F file_contexts $mounts
+ 
+diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
+index 269967c4fc1b6..b54eb7177a31f 100644
+--- a/security/Kconfig.hardening
++++ b/security/Kconfig.hardening
+@@ -22,13 +22,23 @@ menu "Memory initialization"
+ config CC_HAS_AUTO_VAR_INIT_PATTERN
+ 	def_bool $(cc-option,-ftrivial-auto-var-init=pattern)
+ 
+-config CC_HAS_AUTO_VAR_INIT_ZERO
++config CC_HAS_AUTO_VAR_INIT_ZERO_BARE
++	def_bool $(cc-option,-ftrivial-auto-var-init=zero)
++
++config CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER
++	# Clang 16 and later warn about using the -enable flag, but it
++	# is required before then.
+ 	def_bool $(cc-option,-ftrivial-auto-var-init=zero -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang)
++	depends on !CC_HAS_AUTO_VAR_INIT_ZERO_BARE
++
++config CC_HAS_AUTO_VAR_INIT_ZERO
++	def_bool CC_HAS_AUTO_VAR_INIT_ZERO_BARE || CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER
+ 
+ choice
+ 	prompt "Initialize kernel stack variables at function entry"
+ 	default GCC_PLUGIN_STRUCTLEAK_BYREF_ALL if COMPILE_TEST && GCC_PLUGINS
+ 	default INIT_STACK_ALL_PATTERN if COMPILE_TEST && CC_HAS_AUTO_VAR_INIT_PATTERN
++	default INIT_STACK_ALL_ZERO if CC_HAS_AUTO_VAR_INIT_ZERO
+ 	default INIT_STACK_NONE
+ 	help
+ 	  This option enables initialization of stack variables at
+@@ -39,11 +49,11 @@ choice
+ 	  syscalls.
+ 
+ 	  This chooses the level of coverage over classes of potentially
+-	  uninitialized variables. The selected class will be
++	  uninitialized variables. The selected class of variable will be
+ 	  initialized before use in a function.
+ 
+ 	config INIT_STACK_NONE
+-		bool "no automatic initialization (weakest)"
++		bool "no automatic stack variable initialization (weakest)"
+ 		help
+ 		  Disable automatic stack variable initialization.
+ 		  This leaves the kernel vulnerable to the standard
+@@ -80,7 +90,7 @@ choice
+ 		  and is disallowed.
+ 
+ 	config GCC_PLUGIN_STRUCTLEAK_BYREF_ALL
+-		bool "zero-init anything passed by reference (very strong)"
++		bool "zero-init everything passed by reference (very strong)"
+ 		depends on GCC_PLUGINS
+ 		depends on !(KASAN && KASAN_STACK=1)
+ 		select GCC_PLUGIN_STRUCTLEAK
+@@ -91,33 +101,44 @@ choice
+ 		  of uninitialized stack variable exploits and information
+ 		  exposures.
+ 
++		  As a side-effect, this keeps a lot of variables on the
++		  stack that can otherwise be optimized out, so combining
++		  this with CONFIG_KASAN_STACK can lead to a stack overflow
++		  and is disallowed.
++
+ 	config INIT_STACK_ALL_PATTERN
+-		bool "0xAA-init everything on the stack (strongest)"
++		bool "pattern-init everything (strongest)"
+ 		depends on CC_HAS_AUTO_VAR_INIT_PATTERN
+ 		help
+-		  Initializes everything on the stack with a 0xAA
+-		  pattern. This is intended to eliminate all classes
+-		  of uninitialized stack variable exploits and information
+-		  exposures, even variables that were warned to have been
+-		  left uninitialized.
++		  Initializes everything on the stack (including padding)
++		  with a specific debug value. This is intended to eliminate
++		  all classes of uninitialized stack variable exploits and
++		  information exposures, even variables that were warned about
++		  having been left uninitialized.
+ 
+ 		  Pattern initialization is known to provoke many existing bugs
+ 		  related to uninitialized locals, e.g. pointers receive
+-		  non-NULL values, buffer sizes and indices are very big.
++		  non-NULL values, buffer sizes and indices are very big. The
++		  pattern is situation-specific; Clang on 64-bit uses 0xAA
++		  repeating for all types and padding except float and double
++		  which use 0xFF repeating (-NaN). Clang on 32-bit uses 0xFF
++		  repeating for all types and padding.
+ 
+ 	config INIT_STACK_ALL_ZERO
+-		bool "zero-init everything on the stack (strongest and safest)"
++		bool "zero-init everything (strongest and safest)"
+ 		depends on CC_HAS_AUTO_VAR_INIT_ZERO
+ 		help
+-		  Initializes everything on the stack with a zero
+-		  value. This is intended to eliminate all classes
+-		  of uninitialized stack variable exploits and information
+-		  exposures, even variables that were warned to have been
+-		  left uninitialized.
+-
+-		  Zero initialization provides safe defaults for strings,
+-		  pointers, indices and sizes, and is therefore
+-		  more suitable as a security mitigation measure.
++		  Initializes everything on the stack (including padding)
++		  with a zero value. This is intended to eliminate all
++		  classes of uninitialized stack variable exploits and
++		  information exposures, even variables that were warned
++		  about having been left uninitialized.
++
++		  Zero initialization provides safe defaults for strings
++		  (immediately NUL-terminated), pointers (NULL), indices
++		  (index 0), and sizes (0 length), so it is therefore more
++		  suitable as a production security mitigation than pattern
++		  initialization.
+ 
+ endchoice
+ 
+diff --git a/sound/core/pcm_dmaengine.c b/sound/core/pcm_dmaengine.c
+index 4d0e8fe535a1e..be58505889a36 100644
+--- a/sound/core/pcm_dmaengine.c
++++ b/sound/core/pcm_dmaengine.c
+@@ -130,12 +130,14 @@ EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_set_config_from_dai_data);
+ 
+ static void dmaengine_pcm_dma_complete(void *arg)
+ {
++	unsigned int new_pos;
+ 	struct snd_pcm_substream *substream = arg;
+ 	struct dmaengine_pcm_runtime_data *prtd = substream_to_prtd(substream);
+ 
+-	prtd->pos += snd_pcm_lib_period_bytes(substream);
+-	if (prtd->pos >= snd_pcm_lib_buffer_bytes(substream))
+-		prtd->pos = 0;
++	new_pos = prtd->pos + snd_pcm_lib_period_bytes(substream);
++	if (new_pos >= snd_pcm_lib_buffer_bytes(substream))
++		new_pos = 0;
++	prtd->pos = new_pos;
+ 
+ 	snd_pcm_period_elapsed(substream);
+ }
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index 257ad5206240f..0d91143eb4647 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -1736,10 +1736,8 @@ static int snd_rawmidi_free(struct snd_rawmidi *rmidi)
+ 
+ 	snd_info_free_entry(rmidi->proc_entry);
+ 	rmidi->proc_entry = NULL;
+-	mutex_lock(&register_mutex);
+ 	if (rmidi->ops && rmidi->ops->dev_unregister)
+ 		rmidi->ops->dev_unregister(rmidi);
+-	mutex_unlock(&register_mutex);
+ 
+ 	snd_rawmidi_free_substreams(&rmidi->streams[SNDRV_RAWMIDI_STREAM_INPUT]);
+ 	snd_rawmidi_free_substreams(&rmidi->streams[SNDRV_RAWMIDI_STREAM_OUTPUT]);
+diff --git a/sound/core/sound_oss.c b/sound/core/sound_oss.c
+index 610f317bea9d1..99874e80b6829 100644
+--- a/sound/core/sound_oss.c
++++ b/sound/core/sound_oss.c
+@@ -162,7 +162,6 @@ int snd_unregister_oss_device(int type, struct snd_card *card, int dev)
+ 		mutex_unlock(&sound_oss_mutex);
+ 		return -ENOENT;
+ 	}
+-	unregister_sound_special(minor);
+ 	switch (SNDRV_MINOR_OSS_DEVICE(minor)) {
+ 	case SNDRV_MINOR_OSS_PCM:
+ 		track2 = SNDRV_MINOR_OSS(cidx, SNDRV_MINOR_OSS_AUDIO);
+@@ -174,12 +173,18 @@ int snd_unregister_oss_device(int type, struct snd_card *card, int dev)
+ 		track2 = SNDRV_MINOR_OSS(cidx, SNDRV_MINOR_OSS_DMMIDI1);
+ 		break;
+ 	}
+-	if (track2 >= 0) {
+-		unregister_sound_special(track2);
++	if (track2 >= 0)
+ 		snd_oss_minors[track2] = NULL;
+-	}
+ 	snd_oss_minors[minor] = NULL;
+ 	mutex_unlock(&sound_oss_mutex);
++
++	/* call unregister_sound_special() outside sound_oss_mutex;
++	 * otherwise may deadlock, as it can trigger the release of a card
++	 */
++	unregister_sound_special(minor);
++	if (track2 >= 0)
++		unregister_sound_special(track2);
++
+ 	kfree(mptr);
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/hda_beep.c b/sound/pci/hda/hda_beep.c
+index 53a2b89f8983c..e63621bcb2142 100644
+--- a/sound/pci/hda/hda_beep.c
++++ b/sound/pci/hda/hda_beep.c
+@@ -118,6 +118,12 @@ static int snd_hda_beep_event(struct input_dev *dev, unsigned int type,
+ 	return 0;
+ }
+ 
++static void turn_on_beep(struct hda_beep *beep)
++{
++	if (beep->keep_power_at_enable)
++		snd_hda_power_up_pm(beep->codec);
++}
++
+ static void turn_off_beep(struct hda_beep *beep)
+ {
+ 	cancel_work_sync(&beep->beep_work);
+@@ -125,6 +131,8 @@ static void turn_off_beep(struct hda_beep *beep)
+ 		/* turn off beep */
+ 		generate_tone(beep, 0);
+ 	}
++	if (beep->keep_power_at_enable)
++		snd_hda_power_down_pm(beep->codec);
+ }
+ 
+ /**
+@@ -140,7 +148,9 @@ int snd_hda_enable_beep_device(struct hda_codec *codec, int enable)
+ 	enable = !!enable;
+ 	if (beep->enabled != enable) {
+ 		beep->enabled = enable;
+-		if (!enable)
++		if (enable)
++			turn_on_beep(beep);
++		else
+ 			turn_off_beep(beep);
+ 		return 1;
+ 	}
+@@ -167,7 +177,8 @@ static int beep_dev_disconnect(struct snd_device *device)
+ 		input_unregister_device(beep->dev);
+ 	else
+ 		input_free_device(beep->dev);
+-	turn_off_beep(beep);
++	if (beep->enabled)
++		turn_off_beep(beep);
+ 	return 0;
+ }
+ 
+diff --git a/sound/pci/hda/hda_beep.h b/sound/pci/hda/hda_beep.h
+index a25358a4807ab..db76e3ddba654 100644
+--- a/sound/pci/hda/hda_beep.h
++++ b/sound/pci/hda/hda_beep.h
+@@ -25,6 +25,7 @@ struct hda_beep {
+ 	unsigned int enabled:1;
+ 	unsigned int linear_tone:1;	/* linear tone for IDT/STAC codec */
+ 	unsigned int playing:1;
++	unsigned int keep_power_at_enable:1;	/* set by driver */
+ 	struct work_struct beep_work; /* scheduled task for beep event */
+ 	struct mutex mutex;
+ 	void (*power_hook)(struct hda_beep *beep, bool on);
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index c3fcf478037f9..b1c57c65f6cd5 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2684,9 +2684,6 @@ static void generic_acomp_pin_eld_notify(void *audio_ptr, int port, int dev_id)
+ 	 */
+ 	if (codec->core.dev.power.power_state.event == PM_EVENT_SUSPEND)
+ 		return;
+-	/* ditto during suspend/resume process itself */
+-	if (snd_hdac_is_in_pm(&codec->core))
+-		return;
+ 
+ 	check_presence_and_report(codec, pin_nid, dev_id);
+ }
+@@ -2870,9 +2867,6 @@ static void intel_pin_eld_notify(void *audio_ptr, int port, int pipe)
+ 	 */
+ 	if (codec->core.dev.power.power_state.event == PM_EVENT_SUSPEND)
+ 		return;
+-	/* ditto during suspend/resume process itself */
+-	if (snd_hdac_is_in_pm(&codec->core))
+-		return;
+ 
+ 	snd_hdac_i915_set_bclk(&codec->bus->core);
+ 	check_presence_and_report(codec, pin_nid, dev_id);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 574fe798d5125..60e3bc1248363 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8164,11 +8164,13 @@ static const struct hda_fixup alc269_fixups[] = {
+ 	[ALC285_FIXUP_ASUS_G533Z_PINS] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+-			{ 0x14, 0x90170120 },
++			{ 0x14, 0x90170152 }, /* Speaker Surround Playback Switch */
++			{ 0x19, 0x03a19020 }, /* Mic Boost Volume */
++			{ 0x1a, 0x03a11c30 }, /* Mic Boost Volume */
++			{ 0x1e, 0x90170151 }, /* Rear jack, IN OUT EAPD Detect */
++			{ 0x21, 0x03211420 },
+ 			{ }
+ 		},
+-		.chained = true,
+-		.chain_id = ALC294_FIXUP_ASUS_G513_PINS,
+ 	},
+ 	[ALC294_FIXUP_ASUS_COEF_1B] = {
+ 		.type = HDA_FIXUP_VERBS,
+@@ -8774,7 +8776,6 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
+-	SND_PCI_QUIRK(0x1028, 0x087d, "Dell Precision 5530", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+@@ -8963,6 +8964,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1c52, "ASUS Zephyrus G15 2022", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
++	SND_PCI_QUIRK(0x1043, 0x1f92, "ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ 	SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+@@ -8984,6 +8986,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
+ 	SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
++	SND_PCI_QUIRK(0x10ec, 0x124c, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE),
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index b848e435b93fd..6fc0c4e77cd1e 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -4308,6 +4308,8 @@ static int stac_parse_auto_config(struct hda_codec *codec)
+ 		if (codec->beep) {
+ 			/* IDT/STAC codecs have linear beep tone parameter */
+ 			codec->beep->linear_tone = spec->linear_tone_beep;
++			/* keep power up while beep is enabled */
++			codec->beep->keep_power_at_enable = 1;
+ 			/* if no beep switch is available, make its own one */
+ 			caps = query_amp_caps(codec, nid, HDA_OUTPUT);
+ 			if (!(caps & AC_AMPCAP_MUTE)) {
+@@ -4448,28 +4450,6 @@ static int stac_suspend(struct hda_codec *codec)
+ 	stac_shutup(codec);
+ 	return 0;
+ }
+-
+-static int stac_check_power_status(struct hda_codec *codec, hda_nid_t nid)
+-{
+-#ifdef CONFIG_SND_HDA_INPUT_BEEP
+-	struct sigmatel_spec *spec = codec->spec;
+-#endif
+-	int ret = snd_hda_gen_check_power_status(codec, nid);
+-
+-#ifdef CONFIG_SND_HDA_INPUT_BEEP
+-	if (nid == spec->gen.beep_nid && codec->beep) {
+-		if (codec->beep->enabled != spec->beep_power_on) {
+-			spec->beep_power_on = codec->beep->enabled;
+-			if (spec->beep_power_on)
+-				snd_hda_power_up_pm(codec);
+-			else
+-				snd_hda_power_down_pm(codec);
+-		}
+-		ret |= spec->beep_power_on;
+-	}
+-#endif
+-	return ret;
+-}
+ #else
+ #define stac_suspend		NULL
+ #endif /* CONFIG_PM */
+@@ -4482,7 +4462,6 @@ static const struct hda_codec_ops stac_patch_ops = {
+ 	.unsol_event = snd_hda_jack_unsol_event,
+ #ifdef CONFIG_PM
+ 	.suspend = stac_suspend,
+-	.check_power_status = stac_check_power_status,
+ #endif
+ 	.reboot_notify = stac_shutup,
+ };
+diff --git a/sound/soc/codecs/da7219.c b/sound/soc/codecs/da7219.c
+index 5f8c96dea094a..f9e58d6509a83 100644
+--- a/sound/soc/codecs/da7219.c
++++ b/sound/soc/codecs/da7219.c
+@@ -2194,6 +2194,7 @@ static int da7219_register_dai_clks(struct snd_soc_component *component)
+ 			dai_clk_lookup = clkdev_hw_create(dai_clk_hw, init.name,
+ 							  "%s", dev_name(dev));
+ 			if (!dai_clk_lookup) {
++				clk_hw_unregister(dai_clk_hw);
+ 				ret = -ENOMEM;
+ 				goto err;
+ 			} else {
+@@ -2215,12 +2216,12 @@ static int da7219_register_dai_clks(struct snd_soc_component *component)
+ 	return 0;
+ 
+ err:
+-	do {
++	while (--i >= 0) {
+ 		if (da7219->dai_clks_lookup[i])
+ 			clkdev_drop(da7219->dai_clks_lookup[i]);
+ 
+ 		clk_hw_unregister(&da7219->dai_clks_hw[i]);
+-	} while (i-- > 0);
++	}
+ 
+ 	if (np)
+ 		kfree(da7219->clk_hw_data);
+diff --git a/sound/soc/codecs/mt6660.c b/sound/soc/codecs/mt6660.c
+index d1797003c83dd..e18a58868273b 100644
+--- a/sound/soc/codecs/mt6660.c
++++ b/sound/soc/codecs/mt6660.c
+@@ -504,13 +504,17 @@ static int mt6660_i2c_probe(struct i2c_client *client,
+ 		dev_err(chip->dev, "read chip revision fail\n");
+ 		goto probe_fail;
+ 	}
+-	pm_runtime_set_active(chip->dev);
+-	pm_runtime_enable(chip->dev);
+ 
+ 	ret = devm_snd_soc_register_component(chip->dev,
+ 					       &mt6660_component_driver,
+ 					       &mt6660_codec_dai, 1);
++	if (!ret) {
++		pm_runtime_set_active(chip->dev);
++		pm_runtime_enable(chip->dev);
++	}
++
+ 	return ret;
++
+ probe_fail:
+ 	_mt6660_chip_power_on(chip, 0);
+ 	mutex_destroy(&chip->io_lock);
+diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c
+index 37588804a6b5f..8b262e7f5275f 100644
+--- a/sound/soc/codecs/tas2764.c
++++ b/sound/soc/codecs/tas2764.c
+@@ -34,6 +34,9 @@ struct tas2764_priv {
+ 	
+ 	int v_sense_slot;
+ 	int i_sense_slot;
++
++	bool dac_powered;
++	bool unmuted;
+ };
+ 
+ static void tas2764_reset(struct tas2764_priv *tas2764)
+@@ -50,34 +53,22 @@ static void tas2764_reset(struct tas2764_priv *tas2764)
+ 	usleep_range(1000, 2000);
+ }
+ 
+-static int tas2764_set_bias_level(struct snd_soc_component *component,
+-				 enum snd_soc_bias_level level)
++static int tas2764_update_pwr_ctrl(struct tas2764_priv *tas2764)
+ {
+-	struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component);
++	struct snd_soc_component *component = tas2764->component;
++	unsigned int val;
++	int ret;
+ 
+-	switch (level) {
+-	case SND_SOC_BIAS_ON:
+-		snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+-					      TAS2764_PWR_CTRL_MASK,
+-					      TAS2764_PWR_CTRL_ACTIVE);
+-		break;
+-	case SND_SOC_BIAS_STANDBY:
+-	case SND_SOC_BIAS_PREPARE:
+-		snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+-					      TAS2764_PWR_CTRL_MASK,
+-					      TAS2764_PWR_CTRL_MUTE);
+-		break;
+-	case SND_SOC_BIAS_OFF:
+-		snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+-					      TAS2764_PWR_CTRL_MASK,
+-					      TAS2764_PWR_CTRL_SHUTDOWN);
+-		break;
++	if (tas2764->dac_powered)
++		val = tas2764->unmuted ?
++			TAS2764_PWR_CTRL_ACTIVE : TAS2764_PWR_CTRL_MUTE;
++	else
++		val = TAS2764_PWR_CTRL_SHUTDOWN;
+ 
+-	default:
+-		dev_err(tas2764->dev,
+-				"wrong power level setting %d\n", level);
+-		return -EINVAL;
+-	}
++	ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
++					    TAS2764_PWR_CTRL_MASK, val);
++	if (ret < 0)
++		return ret;
+ 
+ 	return 0;
+ }
+@@ -114,9 +105,7 @@ static int tas2764_codec_resume(struct snd_soc_component *component)
+ 		usleep_range(1000, 2000);
+ 	}
+ 
+-	ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+-					    TAS2764_PWR_CTRL_MASK,
+-					    TAS2764_PWR_CTRL_ACTIVE);
++	ret = tas2764_update_pwr_ctrl(tas2764);
+ 
+ 	if (ret < 0)
+ 		return ret;
+@@ -150,14 +139,12 @@ static int tas2764_dac_event(struct snd_soc_dapm_widget *w,
+ 
+ 	switch (event) {
+ 	case SND_SOC_DAPM_POST_PMU:
+-		ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+-						    TAS2764_PWR_CTRL_MASK,
+-						    TAS2764_PWR_CTRL_MUTE);
++		tas2764->dac_powered = true;
++		ret = tas2764_update_pwr_ctrl(tas2764);
+ 		break;
+ 	case SND_SOC_DAPM_PRE_PMD:
+-		ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+-						    TAS2764_PWR_CTRL_MASK,
+-						    TAS2764_PWR_CTRL_SHUTDOWN);
++		tas2764->dac_powered = false;
++		ret = tas2764_update_pwr_ctrl(tas2764);
+ 		break;
+ 	default:
+ 		dev_err(tas2764->dev, "Unsupported event\n");
+@@ -202,17 +189,11 @@ static const struct snd_soc_dapm_route tas2764_audio_map[] = {
+ 
+ static int tas2764_mute(struct snd_soc_dai *dai, int mute, int direction)
+ {
+-	struct snd_soc_component *component = dai->component;
+-	int ret;
+-
+-	ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+-					    TAS2764_PWR_CTRL_MASK,
+-					    mute ? TAS2764_PWR_CTRL_MUTE : 0);
++	struct tas2764_priv *tas2764 =
++			snd_soc_component_get_drvdata(dai->component);
+ 
+-	if (ret < 0)
+-		return ret;
+-
+-	return 0;
++	tas2764->unmuted = !mute;
++	return tas2764_update_pwr_ctrl(tas2764);
+ }
+ 
+ static int tas2764_set_bitwidth(struct tas2764_priv *tas2764, int bitwidth)
+@@ -485,7 +466,7 @@ static struct snd_soc_dai_driver tas2764_dai_driver[] = {
+ 		.id = 0,
+ 		.playback = {
+ 			.stream_name    = "ASI1 Playback",
+-			.channels_min   = 2,
++			.channels_min   = 1,
+ 			.channels_max   = 2,
+ 			.rates      = TAS2764_RATES,
+ 			.formats    = TAS2764_FORMATS,
+@@ -526,12 +507,6 @@ static int tas2764_codec_probe(struct snd_soc_component *component)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+-					    TAS2764_PWR_CTRL_MASK,
+-					    TAS2764_PWR_CTRL_MUTE);
+-	if (ret < 0)
+-		return ret;
+-
+ 	return 0;
+ }
+ 
+@@ -549,7 +524,6 @@ static const struct snd_soc_component_driver soc_component_driver_tas2764 = {
+ 	.probe			= tas2764_codec_probe,
+ 	.suspend		= tas2764_codec_suspend,
+ 	.resume			= tas2764_codec_resume,
+-	.set_bias_level		= tas2764_set_bias_level,
+ 	.controls		= tas2764_snd_controls,
+ 	.num_controls		= ARRAY_SIZE(tas2764_snd_controls),
+ 	.dapm_widgets		= tas2764_dapm_widgets,
+diff --git a/sound/soc/codecs/wcd9335.c b/sound/soc/codecs/wcd9335.c
+index 8f4ed39c49de2..33c29a1f52d00 100644
+--- a/sound/soc/codecs/wcd9335.c
++++ b/sound/soc/codecs/wcd9335.c
+@@ -1971,8 +1971,8 @@ static int wcd9335_trigger(struct snd_pcm_substream *substream, int cmd,
+ 	case SNDRV_PCM_TRIGGER_STOP:
+ 	case SNDRV_PCM_TRIGGER_SUSPEND:
+ 	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+-		slim_stream_unprepare(dai_data->sruntime);
+ 		slim_stream_disable(dai_data->sruntime);
++		slim_stream_unprepare(dai_data->sruntime);
+ 		break;
+ 	default:
+ 		break;
+diff --git a/sound/soc/codecs/wcd934x.c b/sound/soc/codecs/wcd934x.c
+index fd704df9b1758..104751ac6cd14 100644
+--- a/sound/soc/codecs/wcd934x.c
++++ b/sound/soc/codecs/wcd934x.c
+@@ -1829,8 +1829,8 @@ static int wcd934x_trigger(struct snd_pcm_substream *substream, int cmd,
+ 	case SNDRV_PCM_TRIGGER_STOP:
+ 	case SNDRV_PCM_TRIGGER_SUSPEND:
+ 	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+-		slim_stream_unprepare(dai_data->sruntime);
+ 		slim_stream_disable(dai_data->sruntime);
++		slim_stream_unprepare(dai_data->sruntime);
+ 		break;
+ 	default:
+ 		break;
+diff --git a/sound/soc/codecs/wm5102.c b/sound/soc/codecs/wm5102.c
+index 2ed3fa67027d0..b7f5e5391fdb7 100644
+--- a/sound/soc/codecs/wm5102.c
++++ b/sound/soc/codecs/wm5102.c
+@@ -2083,9 +2083,6 @@ static int wm5102_probe(struct platform_device *pdev)
+ 		regmap_update_bits(arizona->regmap, wm5102_digital_vu[i],
+ 				   WM5102_DIG_VU, WM5102_DIG_VU);
+ 
+-	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_idle(&pdev->dev);
+-
+ 	ret = arizona_request_irq(arizona, ARIZONA_IRQ_DSP_IRQ1,
+ 				  "ADSP2 Compressed IRQ", wm5102_adsp2_irq,
+ 				  wm5102);
+@@ -2118,6 +2115,9 @@ static int wm5102_probe(struct platform_device *pdev)
+ 		goto err_spk_irqs;
+ 	}
+ 
++	pm_runtime_enable(&pdev->dev);
++	pm_runtime_idle(&pdev->dev);
++
+ 	return ret;
+ 
+ err_spk_irqs:
+diff --git a/sound/soc/codecs/wm5110.c b/sound/soc/codecs/wm5110.c
+index d0cef982215dc..c158f8b1e8e46 100644
+--- a/sound/soc/codecs/wm5110.c
++++ b/sound/soc/codecs/wm5110.c
+@@ -2452,9 +2452,6 @@ static int wm5110_probe(struct platform_device *pdev)
+ 		regmap_update_bits(arizona->regmap, wm5110_digital_vu[i],
+ 				   WM5110_DIG_VU, WM5110_DIG_VU);
+ 
+-	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_idle(&pdev->dev);
+-
+ 	ret = arizona_request_irq(arizona, ARIZONA_IRQ_DSP_IRQ1,
+ 				  "ADSP2 Compressed IRQ", wm5110_adsp2_irq,
+ 				  wm5110);
+@@ -2487,6 +2484,9 @@ static int wm5110_probe(struct platform_device *pdev)
+ 		goto err_spk_irqs;
+ 	}
+ 
++	pm_runtime_enable(&pdev->dev);
++	pm_runtime_idle(&pdev->dev);
++
+ 	return ret;
+ 
+ err_spk_irqs:
+diff --git a/sound/soc/codecs/wm8997.c b/sound/soc/codecs/wm8997.c
+index 229f2986cd96b..07378714b013a 100644
+--- a/sound/soc/codecs/wm8997.c
++++ b/sound/soc/codecs/wm8997.c
+@@ -1156,9 +1156,6 @@ static int wm8997_probe(struct platform_device *pdev)
+ 		regmap_update_bits(arizona->regmap, wm8997_digital_vu[i],
+ 				   WM8997_DIG_VU, WM8997_DIG_VU);
+ 
+-	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_idle(&pdev->dev);
+-
+ 	arizona_init_common(arizona);
+ 
+ 	ret = arizona_init_vol_limit(arizona);
+@@ -1177,6 +1174,9 @@ static int wm8997_probe(struct platform_device *pdev)
+ 		goto err_spk_irqs;
+ 	}
+ 
++	pm_runtime_enable(&pdev->dev);
++	pm_runtime_idle(&pdev->dev);
++
+ 	return ret;
+ 
+ err_spk_irqs:
+diff --git a/sound/soc/fsl/eukrea-tlv320.c b/sound/soc/fsl/eukrea-tlv320.c
+index e13271ea84ded..29cf9234984d9 100644
+--- a/sound/soc/fsl/eukrea-tlv320.c
++++ b/sound/soc/fsl/eukrea-tlv320.c
+@@ -86,7 +86,7 @@ static int eukrea_tlv320_probe(struct platform_device *pdev)
+ 	int ret;
+ 	int int_port = 0, ext_port;
+ 	struct device_node *np = pdev->dev.of_node;
+-	struct device_node *ssi_np = NULL, *codec_np = NULL;
++	struct device_node *ssi_np = NULL, *codec_np = NULL, *tmp_np = NULL;
+ 
+ 	eukrea_tlv320.dev = &pdev->dev;
+ 	if (np) {
+@@ -143,7 +143,7 @@ static int eukrea_tlv320_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	if (machine_is_eukrea_cpuimx27() ||
+-	    of_find_compatible_node(NULL, NULL, "fsl,imx21-audmux")) {
++	    (tmp_np = of_find_compatible_node(NULL, NULL, "fsl,imx21-audmux"))) {
+ 		imx_audmux_v1_configure_port(MX27_AUDMUX_HPCR1_SSI0,
+ 			IMX_AUDMUX_V1_PCR_SYN |
+ 			IMX_AUDMUX_V1_PCR_TFSDIR |
+@@ -158,10 +158,11 @@ static int eukrea_tlv320_probe(struct platform_device *pdev)
+ 			IMX_AUDMUX_V1_PCR_SYN |
+ 			IMX_AUDMUX_V1_PCR_RXDSEL(MX27_AUDMUX_HPCR1_SSI0)
+ 		);
++		of_node_put(tmp_np);
+ 	} else if (machine_is_eukrea_cpuimx25sd() ||
+ 		   machine_is_eukrea_cpuimx35sd() ||
+ 		   machine_is_eukrea_cpuimx51sd() ||
+-		   of_find_compatible_node(NULL, NULL, "fsl,imx31-audmux")) {
++		   (tmp_np = of_find_compatible_node(NULL, NULL, "fsl,imx31-audmux"))) {
+ 		if (!np)
+ 			ext_port = machine_is_eukrea_cpuimx25sd() ?
+ 				4 : 3;
+@@ -178,6 +179,7 @@ static int eukrea_tlv320_probe(struct platform_device *pdev)
+ 			IMX_AUDMUX_V2_PTCR_SYN,
+ 			IMX_AUDMUX_V2_PDCR_RXDSEL(int_port)
+ 		);
++		of_node_put(tmp_np);
+ 	} else {
+ 		if (np) {
+ 			/* The eukrea,asoc-tlv320 driver was explicitly
+diff --git a/sound/soc/sh/rcar/ctu.c b/sound/soc/sh/rcar/ctu.c
+index 7647b3d4c0baa..25a8cfc274335 100644
+--- a/sound/soc/sh/rcar/ctu.c
++++ b/sound/soc/sh/rcar/ctu.c
+@@ -171,7 +171,11 @@ static int rsnd_ctu_init(struct rsnd_mod *mod,
+ 			 struct rsnd_dai_stream *io,
+ 			 struct rsnd_priv *priv)
+ {
+-	rsnd_mod_power_on(mod);
++	int ret;
++
++	ret = rsnd_mod_power_on(mod);
++	if (ret < 0)
++		return ret;
+ 
+ 	rsnd_ctu_activation(mod);
+ 
+diff --git a/sound/soc/sh/rcar/dvc.c b/sound/soc/sh/rcar/dvc.c
+index 8d91c0eb0880f..53b2ad01222b5 100644
+--- a/sound/soc/sh/rcar/dvc.c
++++ b/sound/soc/sh/rcar/dvc.c
+@@ -186,7 +186,11 @@ static int rsnd_dvc_init(struct rsnd_mod *mod,
+ 			 struct rsnd_dai_stream *io,
+ 			 struct rsnd_priv *priv)
+ {
+-	rsnd_mod_power_on(mod);
++	int ret;
++
++	ret = rsnd_mod_power_on(mod);
++	if (ret < 0)
++		return ret;
+ 
+ 	rsnd_dvc_activation(mod);
+ 
+diff --git a/sound/soc/sh/rcar/mix.c b/sound/soc/sh/rcar/mix.c
+index a3e0370f5704a..c6fe2595c373a 100644
+--- a/sound/soc/sh/rcar/mix.c
++++ b/sound/soc/sh/rcar/mix.c
+@@ -146,7 +146,11 @@ static int rsnd_mix_init(struct rsnd_mod *mod,
+ 			 struct rsnd_dai_stream *io,
+ 			 struct rsnd_priv *priv)
+ {
+-	rsnd_mod_power_on(mod);
++	int ret;
++
++	ret = rsnd_mod_power_on(mod);
++	if (ret < 0)
++		return ret;
+ 
+ 	rsnd_mix_activation(mod);
+ 
+diff --git a/sound/soc/sh/rcar/src.c b/sound/soc/sh/rcar/src.c
+index 585ffba0244b9..fd52e26a3808b 100644
+--- a/sound/soc/sh/rcar/src.c
++++ b/sound/soc/sh/rcar/src.c
+@@ -454,11 +454,14 @@ static int rsnd_src_init(struct rsnd_mod *mod,
+ 			 struct rsnd_priv *priv)
+ {
+ 	struct rsnd_src *src = rsnd_mod_to_src(mod);
++	int ret;
+ 
+ 	/* reset sync convert_rate */
+ 	src->sync.val = 0;
+ 
+-	rsnd_mod_power_on(mod);
++	ret = rsnd_mod_power_on(mod);
++	if (ret < 0)
++		return ret;
+ 
+ 	rsnd_src_activation(mod);
+ 
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index 042207c116514..2ead44779d46d 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -518,7 +518,9 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
+ 
+ 	ssi->usrcnt++;
+ 
+-	rsnd_mod_power_on(mod);
++	ret = rsnd_mod_power_on(mod);
++	if (ret < 0)
++		return ret;
+ 
+ 	rsnd_ssi_config_init(mod, io);
+ 
+diff --git a/sound/soc/sof/sof-pci-dev.c b/sound/soc/sof/sof-pci-dev.c
+index 75657a25dbc05..fe9feaab6a0ac 100644
+--- a/sound/soc/sof/sof-pci-dev.c
++++ b/sound/soc/sof/sof-pci-dev.c
+@@ -75,7 +75,7 @@ static const struct dmi_system_id community_key_platforms[] = {
+ 	{
+ 		.ident = "Google Chromebooks",
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Google"),
++			DMI_MATCH(DMI_PRODUCT_FAMILY, "Google"),
+ 		}
+ 	},
+ 	{},
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index 8527267725bb7..80dcac5abe0c4 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -73,12 +73,13 @@ static inline unsigned get_usb_high_speed_rate(unsigned int rate)
+  */
+ static void release_urb_ctx(struct snd_urb_ctx *u)
+ {
+-	if (u->buffer_size)
++	if (u->urb && u->buffer_size)
+ 		usb_free_coherent(u->ep->chip->dev, u->buffer_size,
+ 				  u->urb->transfer_buffer,
+ 				  u->urb->transfer_dma);
+ 	usb_free_urb(u->urb);
+ 	u->urb = NULL;
++	u->buffer_size = 0;
+ }
+ 
+ static const char *usb_error_string(int err)
+@@ -998,6 +999,7 @@ static int sync_ep_set_params(struct snd_usb_endpoint *ep)
+ 	if (!ep->syncbuf)
+ 		return -ENOMEM;
+ 
++	ep->nurbs = SYNC_URBS;
+ 	for (i = 0; i < SYNC_URBS; i++) {
+ 		struct snd_urb_ctx *u = &ep->urb[i];
+ 		u->index = i;
+@@ -1017,8 +1019,6 @@ static int sync_ep_set_params(struct snd_usb_endpoint *ep)
+ 		u->urb->complete = snd_complete_urb;
+ 	}
+ 
+-	ep->nurbs = SYNC_URBS;
+-
+ 	return 0;
+ 
+ out_of_memory:
+diff --git a/tools/bpf/bpftool/btf_dumper.c b/tools/bpf/bpftool/btf_dumper.c
+index 0e9310727281a..13be487631992 100644
+--- a/tools/bpf/bpftool/btf_dumper.c
++++ b/tools/bpf/bpftool/btf_dumper.c
+@@ -416,7 +416,7 @@ static int btf_dumper_int(const struct btf_type *t, __u8 bit_offset,
+ 					     *(char *)data);
+ 		break;
+ 	case BTF_INT_BOOL:
+-		jsonw_bool(jw, *(int *)data);
++		jsonw_bool(jw, *(bool *)data);
+ 		break;
+ 	default:
+ 		/* shouldn't happen */
+diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
+index 1854d6b978604..4fd4e3462ebce 100644
+--- a/tools/bpf/bpftool/main.c
++++ b/tools/bpf/bpftool/main.c
+@@ -398,6 +398,16 @@ int main(int argc, char **argv)
+ 
+ 	setlinebuf(stdout);
+ 
++#ifdef USE_LIBCAP
++	/* Libcap < 2.63 hooks before main() to compute the number of
++	 * capabilities of the running kernel, and doing so it calls prctl()
++	 * which may fail and set errno to non-zero.
++	 * Let's reset errno to make sure this does not interfere with the
++	 * batch mode.
++	 */
++	errno = 0;
++#endif
++
+ 	last_do_help = do_help;
+ 	pretty_output = false;
+ 	json_output = false;
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index e8745f646371f..fa1f8faf7dfe9 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -930,13 +930,13 @@ void xsk_socket__delete(struct xsk_socket *xsk)
+ 	ctx = xsk->ctx;
+ 	umem = ctx->umem;
+ 
+-	xsk_put_ctx(ctx, true);
+-
+-	if (!ctx->refcount) {
++	if (ctx->refcount == 1) {
+ 		xsk_delete_bpf_maps(xsk);
+ 		close(ctx->prog_fd);
+ 	}
+ 
++	xsk_put_ctx(ctx, true);
++
+ 	err = xsk_get_mmap_offsets(xsk->fd, &off);
+ 	if (!err) {
+ 		if (xsk->rx) {
+diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
+index 5aa3b4e76479e..a2ea3931e01d5 100644
+--- a/tools/objtool/elf.c
++++ b/tools/objtool/elf.c
+@@ -578,6 +578,11 @@ static int elf_update_symbol(struct elf *elf, struct section *symtab,
+ 	Elf64_Xword entsize = symtab->sh.sh_entsize;
+ 	int max_idx, idx = sym->idx;
+ 	Elf_Scn *s, *t = NULL;
++	bool is_special_shndx = sym->sym.st_shndx >= SHN_LORESERVE &&
++				sym->sym.st_shndx != SHN_XINDEX;
++
++	if (is_special_shndx)
++		shndx = sym->sym.st_shndx;
+ 
+ 	s = elf_getscn(elf->elf, symtab->idx);
+ 	if (!s) {
+@@ -663,7 +668,7 @@ static int elf_update_symbol(struct elf *elf, struct section *symtab,
+ 	}
+ 
+ 	/* setup extended section index magic and write the symbol */
+-	if (shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) {
++	if ((shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) || is_special_shndx) {
+ 		sym->sym.st_shndx = shndx;
+ 		if (!shndx_data)
+ 			shndx = 0;
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 5163d2ffea70d..453773ce6f455 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -3279,6 +3279,7 @@ static const char * const intel_pt_info_fmts[] = {
+ 	[INTEL_PT_SNAPSHOT_MODE]	= "  Snapshot mode       %"PRId64"\n",
+ 	[INTEL_PT_PER_CPU_MMAPS]	= "  Per-cpu maps        %"PRId64"\n",
+ 	[INTEL_PT_MTC_BIT]		= "  MTC bit             %#"PRIx64"\n",
++	[INTEL_PT_MTC_FREQ_BITS]	= "  MTC freq bits       %#"PRIx64"\n",
+ 	[INTEL_PT_TSC_CTC_N]		= "  TSC:CTC numerator   %"PRIu64"\n",
+ 	[INTEL_PT_TSC_CTC_D]		= "  TSC:CTC denominator %"PRIu64"\n",
+ 	[INTEL_PT_CYC_BIT]		= "  CYC bit             %#"PRIx64"\n",
+@@ -3293,8 +3294,12 @@ static void intel_pt_print_info(__u64 *arr, int start, int finish)
+ 	if (!dump_trace)
+ 		return;
+ 
+-	for (i = start; i <= finish; i++)
+-		fprintf(stdout, intel_pt_info_fmts[i], arr[i]);
++	for (i = start; i <= finish; i++) {
++		const char *fmt = intel_pt_info_fmts[i];
++
++		if (fmt)
++			fprintf(stdout, fmt, arr[i]);
++	}
+ }
+ 
+ static void intel_pt_print_info_str(const char *name, const char *str)
+diff --git a/tools/testing/selftests/arm64/signal/testcases/testcases.c b/tools/testing/selftests/arm64/signal/testcases/testcases.c
+index 61ebcdf638311..a3ac5c2d8aac7 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/testcases.c
++++ b/tools/testing/selftests/arm64/signal/testcases/testcases.c
+@@ -33,7 +33,7 @@ bool validate_extra_context(struct extra_context *extra, char **err)
+ 		return false;
+ 
+ 	fprintf(stderr, "Validating EXTRA...\n");
+-	term = GET_RESV_NEXT_HEAD(extra);
++	term = GET_RESV_NEXT_HEAD(&extra->head);
+ 	if (!term || term->magic || term->size) {
+ 		*err = "Missing terminator after EXTRA context";
+ 		return false;
+diff --git a/tools/testing/selftests/tpm2/tpm2.py b/tools/testing/selftests/tpm2/tpm2.py
+index f34486cd7342d..3e67fdb518ec3 100644
+--- a/tools/testing/selftests/tpm2/tpm2.py
++++ b/tools/testing/selftests/tpm2/tpm2.py
+@@ -370,6 +370,10 @@ class Client:
+             fcntl.fcntl(self.tpm, fcntl.F_SETFL, flags)
+             self.tpm_poll = select.poll()
+ 
++    def __del__(self):
++        if self.tpm:
++            self.tpm.close()
++
+     def close(self):
+         self.tpm.close()
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-10-28 13:38 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-10-28 13:38 UTC (permalink / raw
  To: gentoo-commits

commit:     de90e6e904f3e5566d88854bbdddcffe9a15b88c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Oct 28 13:37:59 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Oct 28 13:37:59 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=de90e6e9

Linux patch 5.10.151

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |  4 +++
 1150_linux-5.10.151.patch | 70 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 74 insertions(+)

diff --git a/0000_README b/0000_README
index 5c39ef3a..9f9c67d6 100644
--- a/0000_README
+++ b/0000_README
@@ -643,6 +643,10 @@ Patch:  1149_linux-5.10.150.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.150
 
+Patch:  1150_linux-5.10.151.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.151
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1150_linux-5.10.151.patch b/1150_linux-5.10.151.patch
new file mode 100644
index 00000000..88985b0e
--- /dev/null
+++ b/1150_linux-5.10.151.patch
@@ -0,0 +1,70 @@
+diff --git a/Makefile b/Makefile
+index 5c7075d3b2f65..0e22d4c8bc79b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 150
++SUBLEVEL = 151
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -465,6 +465,8 @@ LZ4		= lz4c
+ XZ		= xz
+ ZSTD		= zstd
+ 
++PAHOLE_FLAGS	= $(shell PAHOLE=$(PAHOLE) $(srctree)/scripts/pahole-flags.sh)
++
+ CHECKFLAGS     := -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ \
+ 		  -Wbitwise -Wno-return-void -Wno-unknown-attribute $(CF)
+ NOSTDINC_FLAGS :=
+@@ -518,6 +520,7 @@ export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
+ export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
+ export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
+ export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
++export PAHOLE_FLAGS
+ 
+ # Files to ignore in find ... statements
+ 
+diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
+index d0b44bee9286e..acd07a70a2f4e 100755
+--- a/scripts/link-vmlinux.sh
++++ b/scripts/link-vmlinux.sh
+@@ -161,7 +161,7 @@ gen_btf()
+ 	vmlinux_link ${1}
+ 
+ 	info "BTF" ${2}
+-	LLVM_OBJCOPY=${OBJCOPY} ${PAHOLE} -J ${1}
++	LLVM_OBJCOPY="${OBJCOPY}" ${PAHOLE} -J ${PAHOLE_FLAGS} ${1}
+ 
+ 	# Create ${2} which contains just .BTF section but no symbols. Add
+ 	# SHF_ALLOC because .BTF will be part of the vmlinux image. --strip-all
+diff --git a/scripts/pahole-flags.sh b/scripts/pahole-flags.sh
+new file mode 100755
+index 0000000000000..8c82173e42e52
+--- /dev/null
++++ b/scripts/pahole-flags.sh
+@@ -0,0 +1,21 @@
++#!/bin/sh
++# SPDX-License-Identifier: GPL-2.0
++
++extra_paholeopt=
++
++if ! [ -x "$(command -v ${PAHOLE})" ]; then
++	exit 0
++fi
++
++pahole_ver=$(${PAHOLE} --version | sed -E 's/v([0-9]+)\.([0-9]+)/\1\2/')
++
++if [ "${pahole_ver}" -ge "118" ] && [ "${pahole_ver}" -le "121" ]; then
++	# pahole 1.18 through 1.21 can't handle zero-sized per-CPU vars
++	extra_paholeopt="${extra_paholeopt} --skip_encoding_btf_vars"
++fi
++
++if [ "${pahole_ver}" -ge "124" ]; then
++	extra_paholeopt="${extra_paholeopt} --skip_encoding_btf_enum64"
++fi
++
++echo ${extra_paholeopt}


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-10-30  9:33 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-10-30  9:33 UTC (permalink / raw
  To: gentoo-commits

commit:     44b00d2dfe78ec70b2cd9efa0b1a8865c2ab9b00
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Oct 30 09:32:53 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Oct 30 09:32:53 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=44b00d2d

Linux patch 5.10.152

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1151_linux-5.10.152.patch | 3209 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3213 insertions(+)

diff --git a/0000_README b/0000_README
index 9f9c67d6..2d158649 100644
--- a/0000_README
+++ b/0000_README
@@ -647,6 +647,10 @@ Patch:  1150_linux-5.10.151.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.151
 
+Patch:  1151_linux-5.10.152.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.152
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1151_linux-5.10.152.patch b/1151_linux-5.10.152.patch
new file mode 100644
index 00000000..e772b821
--- /dev/null
+++ b/1151_linux-5.10.152.patch
@@ -0,0 +1,3209 @@
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index 22a07c208fee0..4f3206495217c 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -76,10 +76,14 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A57      | #1319537        | ARM64_ERRATUM_1319367       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A57      | #1742098        | ARM64_ERRATUM_1742098       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A72      | #853709         | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A72      | #1319367        | ARM64_ERRATUM_1319367       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A72      | #1655431        | ARM64_ERRATUM_1742098       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A73      | #858921         | ARM64_ERRATUM_858921        |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A76      | #1188873,1418040| ARM64_ERRATUM_1418040       |
+diff --git a/Makefile b/Makefile
+index 0e22d4c8bc79b..a0750d0519820 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 151
++SUBLEVEL = 152
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -842,7 +842,9 @@ else
+ DEBUG_CFLAGS	+= -g
+ endif
+ 
+-ifneq ($(LLVM_IAS),1)
++ifeq ($(LLVM_IAS),1)
++KBUILD_AFLAGS	+= -g
++else
+ KBUILD_AFLAGS	+= -Wa,-gdwarf-2
+ endif
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index af65ab83e63d4..34bd4cba81e66 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -481,6 +481,22 @@ config ARM64_ERRATUM_834220
+ 
+ 	  If unsure, say Y.
+ 
++config ARM64_ERRATUM_1742098
++	bool "Cortex-A57/A72: 1742098: ELR recorded incorrectly on interrupt taken between cryptographic instructions in a sequence"
++	depends on COMPAT
++	default y
++	help
++	  This option removes the AES hwcap for aarch32 user-space to
++	  workaround erratum 1742098 on Cortex-A57 and Cortex-A72.
++
++	  Affected parts may corrupt the AES state if an interrupt is
++	  taken between a pair of AES instructions. These instructions
++	  are only present if the cryptography extensions are present.
++	  All software should have a fallback implementation for CPUs
++	  that don't implement the cryptography extensions.
++
++	  If unsure, say Y.
++
+ config ARM64_ERRATUM_845719
+ 	bool "Cortex-A53: 845719: a load might read incorrect data"
+ 	depends on COMPAT
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lte-sku.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lte-sku.dtsi
+index 44956e3165a16..469aad4e5948c 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lte-sku.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lte-sku.dtsi
+@@ -9,6 +9,10 @@
+ 	label = "proximity-wifi-lte";
+ };
+ 
++&mpss_mem {
++	reg = <0x0 0x86000000 0x0 0x8c00000>;
++};
++
+ &remoteproc_mpss {
+ 	firmware-name = "qcom/sc7180-trogdor/modem/mba.mbn",
+ 			"qcom/sc7180-trogdor/modem/qdsp6sw.mbn";
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+index 5b2a616c6257b..cb2c47f13a8a4 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+@@ -39,7 +39,7 @@
+ 		};
+ 
+ 		mpss_mem: memory@86000000 {
+-			reg = <0x0 0x86000000 0x0 0x8c00000>;
++			reg = <0x0 0x86000000 0x0 0x2000000>;
+ 			no-map;
+ 		};
+ 
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index 53030d3c03a2c..d2080a41f6e6f 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -68,7 +68,8 @@
+ #define ARM64_WORKAROUND_1508412		58
+ #define ARM64_SPECTRE_BHB			59
+ #define ARM64_WORKAROUND_2457168		60
++#define ARM64_WORKAROUND_1742098		61
+ 
+-#define ARM64_NCAPS				61
++#define ARM64_NCAPS				62
+ 
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index aaacca6fd52f6..5d6f19bc628c2 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -356,6 +356,14 @@ static const struct midr_range erratum_1463225[] = {
+ };
+ #endif
+ 
++#ifdef CONFIG_ARM64_ERRATUM_1742098
++static struct midr_range broken_aarch32_aes[] = {
++	MIDR_RANGE(MIDR_CORTEX_A57, 0, 1, 0xf, 0xf),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
++	{},
++};
++#endif
++
+ const struct arm64_cpu_capabilities arm64_errata[] = {
+ #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
+ 	{
+@@ -554,6 +562,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 		/* Cortex-A510 r0p0-r1p1 */
+ 		CAP_MIDR_RANGE(MIDR_CORTEX_A510, 0, 0, 1, 1)
+ 	},
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_1742098
++	{
++		.desc = "ARM erratum 1742098",
++		.capability = ARM64_WORKAROUND_1742098,
++		CAP_MIDR_RANGE_LIST(broken_aarch32_aes),
++		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++	},
+ #endif
+ 	{
+ 	}
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index e72c90b826568..f3767c1445933 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -76,6 +76,7 @@
+ #include <asm/cpufeature.h>
+ #include <asm/cpu_ops.h>
+ #include <asm/fpsimd.h>
++#include <asm/hwcap.h>
+ #include <asm/mmu_context.h>
+ #include <asm/mte.h>
+ #include <asm/processor.h>
+@@ -1730,6 +1731,14 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
+ }
+ #endif /* CONFIG_ARM64_MTE */
+ 
++static void elf_hwcap_fixup(void)
++{
++#ifdef CONFIG_ARM64_ERRATUM_1742098
++	if (cpus_have_const_cap(ARM64_WORKAROUND_1742098))
++		compat_elf_hwcap2 &= ~COMPAT_HWCAP2_AES;
++#endif /* ARM64_ERRATUM_1742098 */
++}
++
+ /* Internal helper functions to match cpu capability type */
+ static bool
+ cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
+@@ -2735,8 +2744,10 @@ void __init setup_cpu_features(void)
+ 	setup_system_capabilities();
+ 	setup_elf_hwcaps(arm64_elf_hwcaps);
+ 
+-	if (system_supports_32bit_el0())
++	if (system_supports_32bit_el0()) {
+ 		setup_elf_hwcaps(compat_elf_hwcaps);
++		elf_hwcap_fixup();
++	}
+ 
+ 	if (system_uses_ttbr0_pan())
+ 		pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");
+diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
+index 4358bc3193067..f35af19b70555 100644
+--- a/arch/arm64/kernel/topology.c
++++ b/arch/arm64/kernel/topology.c
+@@ -22,46 +22,6 @@
+ #include <asm/cputype.h>
+ #include <asm/topology.h>
+ 
+-void store_cpu_topology(unsigned int cpuid)
+-{
+-	struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
+-	u64 mpidr;
+-
+-	if (cpuid_topo->package_id != -1)
+-		goto topology_populated;
+-
+-	mpidr = read_cpuid_mpidr();
+-
+-	/* Uniprocessor systems can rely on default topology values */
+-	if (mpidr & MPIDR_UP_BITMASK)
+-		return;
+-
+-	/*
+-	 * This would be the place to create cpu topology based on MPIDR.
+-	 *
+-	 * However, it cannot be trusted to depict the actual topology; some
+-	 * pieces of the architecture enforce an artificial cap on Aff0 values
+-	 * (e.g. GICv3's ICC_SGI1R_EL1 limits it to 15), leading to an
+-	 * artificial cycling of Aff1, Aff2 and Aff3 values. IOW, these end up
+-	 * having absolutely no relationship to the actual underlying system
+-	 * topology, and cannot be reasonably used as core / package ID.
+-	 *
+-	 * If the MT bit is set, Aff0 *could* be used to define a thread ID, but
+-	 * we still wouldn't be able to obtain a sane core ID. This means we
+-	 * need to entirely ignore MPIDR for any topology deduction.
+-	 */
+-	cpuid_topo->thread_id  = -1;
+-	cpuid_topo->core_id    = cpuid;
+-	cpuid_topo->package_id = cpu_to_node(cpuid);
+-
+-	pr_debug("CPU%u: cluster %d core %d thread %d mpidr %#016llx\n",
+-		 cpuid, cpuid_topo->package_id, cpuid_topo->core_id,
+-		 cpuid_topo->thread_id, mpidr);
+-
+-topology_populated:
+-	update_siblings_masks(cpuid);
+-}
+-
+ #ifdef CONFIG_ACPI
+ static bool __init acpi_cpu_is_threaded(int cpu)
+ {
+diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
+index b9518f94bd435..23710bf5a86b6 100644
+--- a/arch/arm64/kvm/vgic/vgic-its.c
++++ b/arch/arm64/kvm/vgic/vgic-its.c
+@@ -2096,7 +2096,7 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
+ 
+ 	memset(entry, 0, esz);
+ 
+-	while (len > 0) {
++	while (true) {
+ 		int next_offset;
+ 		size_t byte_offset;
+ 
+@@ -2109,6 +2109,9 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
+ 			return next_offset;
+ 
+ 		byte_offset = next_offset * esz;
++		if (byte_offset >= len)
++			break;
++
+ 		id += next_offset;
+ 		gpa += byte_offset;
+ 		len -= byte_offset;
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 1b894c3275781..557c4a8c4087d 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -35,7 +35,7 @@ config RISCV
+ 	select CLINT_TIMER if !MMU
+ 	select COMMON_CLK
+ 	select EDAC_SUPPORT
+-	select GENERIC_ARCH_TOPOLOGY if SMP
++	select GENERIC_ARCH_TOPOLOGY
+ 	select GENERIC_ATOMIC64 if !64BIT
+ 	select GENERIC_CLOCKEVENTS
+ 	select GENERIC_EARLY_IOREMAP
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index 117f3212a8e4b..cc85858f7fe8e 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -54,10 +54,17 @@ static DEFINE_PER_CPU(struct cpu, cpu_devices);
+ static void __init parse_dtb(void)
+ {
+ 	/* Early scan of device tree from init memory */
+-	if (early_init_dt_scan(dtb_early_va))
+-		return;
++	if (early_init_dt_scan(dtb_early_va)) {
++		const char *name = of_flat_dt_get_machine_name();
++
++		if (name) {
++			pr_info("Machine model: %s\n", name);
++			dump_stack_set_arch_desc("%s (DT)", name);
++		}
++	} else {
++		pr_err("No DTB passed to the kernel\n");
++	}
+ 
+-	pr_err("No DTB passed to the kernel\n");
+ #ifdef CONFIG_CMDLINE_FORCE
+ 	strlcpy(boot_command_line, CONFIG_CMDLINE, COMMAND_LINE_SIZE);
+ 	pr_info("Forcing kernel command line to: %s\n", boot_command_line);
+diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
+index 0b04e0eae3ab5..0e0aed380e281 100644
+--- a/arch/riscv/kernel/smpboot.c
++++ b/arch/riscv/kernel/smpboot.c
+@@ -46,6 +46,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
+ 	int cpuid;
+ 	int ret;
+ 
++	store_cpu_topology(smp_processor_id());
++
+ 	/* This covers non-smp usecase mandated by "nosmp" option */
+ 	if (max_cpus == 0)
+ 		return;
+@@ -152,8 +154,8 @@ asmlinkage __visible void smp_callin(void)
+ 	mmgrab(mm);
+ 	current->active_mm = mm;
+ 
++	store_cpu_topology(curr_cpuid);
+ 	notify_cpu_starting(curr_cpuid);
+-	update_siblings_masks(curr_cpuid);
+ 	set_cpu_online(curr_cpuid, 1);
+ 
+ 	/*
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 159646da3c6bc..d64e690139950 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1945,7 +1945,6 @@ config EFI
+ config EFI_STUB
+ 	bool "EFI stub support"
+ 	depends on EFI && !X86_USE_3DNOW
+-	depends on $(cc-option,-mabi=ms) || X86_32
+ 	select RELOCATABLE
+ 	help
+ 	  This kernel feature allows a bzImage to be loaded directly
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index cc3b79c066853..95234f46b0fb9 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -13,6 +13,8 @@
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+ 
+ #include <linux/types.h>
++#include <linux/bits.h>
++#include <linux/limits.h>
+ #include <linux/slab.h>
+ #include <linux/device.h>
+ 
+@@ -1348,11 +1350,37 @@ static void pt_addr_filters_fini(struct perf_event *event)
+ 	event->hw.addr_filters = NULL;
+ }
+ 
+-static inline bool valid_kernel_ip(unsigned long ip)
++#ifdef CONFIG_X86_64
++static u64 canonical_address(u64 vaddr, u8 vaddr_bits)
+ {
+-	return virt_addr_valid(ip) && kernel_ip(ip);
++	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
+ }
+ 
++static u64 is_canonical_address(u64 vaddr, u8 vaddr_bits)
++{
++	return canonical_address(vaddr, vaddr_bits) == vaddr;
++}
++
++/* Clamp to a canonical address greater-than-or-equal-to the address given */
++static u64 clamp_to_ge_canonical_addr(u64 vaddr, u8 vaddr_bits)
++{
++	return is_canonical_address(vaddr, vaddr_bits) ?
++	       vaddr :
++	       -BIT_ULL(vaddr_bits - 1);
++}
++
++/* Clamp to a canonical address less-than-or-equal-to the address given */
++static u64 clamp_to_le_canonical_addr(u64 vaddr, u8 vaddr_bits)
++{
++	return is_canonical_address(vaddr, vaddr_bits) ?
++	       vaddr :
++	       BIT_ULL(vaddr_bits - 1) - 1;
++}
++#else
++#define clamp_to_ge_canonical_addr(x, y) (x)
++#define clamp_to_le_canonical_addr(x, y) (x)
++#endif
++
+ static int pt_event_addr_filters_validate(struct list_head *filters)
+ {
+ 	struct perf_addr_filter *filter;
+@@ -1367,14 +1395,6 @@ static int pt_event_addr_filters_validate(struct list_head *filters)
+ 		    filter->action == PERF_ADDR_FILTER_ACTION_START)
+ 			return -EOPNOTSUPP;
+ 
+-		if (!filter->path.dentry) {
+-			if (!valid_kernel_ip(filter->offset))
+-				return -EINVAL;
+-
+-			if (!valid_kernel_ip(filter->offset + filter->size))
+-				return -EINVAL;
+-		}
+-
+ 		if (++range > intel_pt_validate_hw_cap(PT_CAP_num_address_ranges))
+ 			return -EOPNOTSUPP;
+ 	}
+@@ -1398,9 +1418,26 @@ static void pt_event_addr_filters_sync(struct perf_event *event)
+ 		if (filter->path.dentry && !fr[range].start) {
+ 			msr_a = msr_b = 0;
+ 		} else {
+-			/* apply the offset */
+-			msr_a = fr[range].start;
+-			msr_b = msr_a + fr[range].size - 1;
++			unsigned long n = fr[range].size - 1;
++			unsigned long a = fr[range].start;
++			unsigned long b;
++
++			if (a > ULONG_MAX - n)
++				b = ULONG_MAX;
++			else
++				b = a + n;
++			/*
++			 * Apply the offset. 64-bit addresses written to the
++			 * MSRs must be canonical, but the range can encompass
++			 * non-canonical addresses. Since software cannot
++			 * execute at non-canonical addresses, adjusting to
++			 * canonical addresses does not affect the result of the
++			 * address filter.
++			 */
++			msr_a = clamp_to_ge_canonical_addr(a, boot_cpu_data.x86_virt_bits);
++			msr_b = clamp_to_le_canonical_addr(b, boot_cpu_data.x86_virt_bits);
++			if (msr_b < msr_a)
++				msr_a = msr_b = 0;
+ 		}
+ 
+ 		filters->filter[range].msr_a  = msr_a;
+diff --git a/arch/x86/include/asm/iommu.h b/arch/x86/include/asm/iommu.h
+index bf1ed2ddc74bd..7a983119bc403 100644
+--- a/arch/x86/include/asm/iommu.h
++++ b/arch/x86/include/asm/iommu.h
+@@ -17,8 +17,10 @@ arch_rmrr_sanity_check(struct acpi_dmar_reserved_memory *rmrr)
+ {
+ 	u64 start = rmrr->base_address;
+ 	u64 end = rmrr->end_address + 1;
++	int entry_type;
+ 
+-	if (e820__mapped_all(start, end, E820_TYPE_RESERVED))
++	entry_type = e820__get_entry_type(start, end);
++	if (entry_type == E820_TYPE_RESERVED || entry_type == E820_TYPE_NVS)
+ 		return 0;
+ 
+ 	pr_err(FW_BUG "No firmware reserved region can cover this RMRR [%#018Lx-%#018Lx], contact BIOS vendor for fixes\n",
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index c879364413390..234a96f25248d 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -441,7 +441,13 @@ apply_microcode_early_amd(u32 cpuid_1_eax, void *ucode, size_t size, bool save_p
+ 		return ret;
+ 
+ 	native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+-	if (rev >= mc->hdr.patch_id)
++
++	/*
++	 * Allow application of the same revision to pick up SMT-specific
++	 * changes even if the revision of the other SMT thread is already
++	 * up-to-date.
++	 */
++	if (rev > mc->hdr.patch_id)
+ 		return ret;
+ 
+ 	if (!__apply_microcode_amd(mc)) {
+@@ -523,8 +529,12 @@ void load_ucode_amd_ap(unsigned int cpuid_1_eax)
+ 
+ 	native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+ 
+-	/* Check whether we have saved a new patch already: */
+-	if (*new_rev && rev < mc->hdr.patch_id) {
++	/*
++	 * Check whether a new patch has been saved already. Also, allow application of
++	 * the same revision in order to pick up SMT-thread-specific configuration even
++	 * if the sibling SMT thread already has an up-to-date revision.
++	 */
++	if (*new_rev && rev <= mc->hdr.patch_id) {
+ 		if (!__apply_microcode_amd(mc)) {
+ 			*new_rev = mc->hdr.patch_id;
+ 			return;
+diff --git a/block/blk-wbt.c b/block/blk-wbt.c
+index 35d81b5deae1c..6f63920f073c6 100644
+--- a/block/blk-wbt.c
++++ b/block/blk-wbt.c
+@@ -838,9 +838,11 @@ int wbt_init(struct request_queue *q)
+ 	rwb->last_comp = rwb->last_issue = jiffies;
+ 	rwb->win_nsec = RWB_WINDOW_NSEC;
+ 	rwb->enable_state = WBT_STATE_ON_DEFAULT;
+-	rwb->wc = 1;
++	rwb->wc = test_bit(QUEUE_FLAG_WC, &q->queue_flags);
+ 	rwb->rq_depth.default_depth = RWB_DEF_DEPTH;
+-	wbt_update_limits(rwb);
++	rwb->min_lat_nsec = wbt_default_latency_nsec(q);
++
++	wbt_queue_depth_changed(&rwb->rqos);
+ 
+ 	/*
+ 	 * Assign rwb and add the stats callback.
+@@ -848,10 +850,5 @@ int wbt_init(struct request_queue *q)
+ 	rq_qos_add(q, &rwb->rqos);
+ 	blk_stat_add_callback(q, rwb->cb);
+ 
+-	rwb->min_lat_nsec = wbt_default_latency_nsec(q);
+-
+-	wbt_queue_depth_changed(&rwb->rqos);
+-	wbt_set_write_cache(q, test_bit(QUEUE_FLAG_WC, &q->queue_flags));
+-
+ 	return 0;
+ }
+diff --git a/drivers/acpi/acpi_extlog.c b/drivers/acpi/acpi_extlog.c
+index 72f1fb77abcd0..e648158368a7d 100644
+--- a/drivers/acpi/acpi_extlog.c
++++ b/drivers/acpi/acpi_extlog.c
+@@ -12,6 +12,7 @@
+ #include <linux/ratelimit.h>
+ #include <linux/edac.h>
+ #include <linux/ras.h>
++#include <acpi/ghes.h>
+ #include <asm/cpu.h>
+ #include <asm/mce.h>
+ 
+@@ -138,8 +139,8 @@ static int extlog_print(struct notifier_block *nb, unsigned long val,
+ 	int	cpu = mce->extcpu;
+ 	struct acpi_hest_generic_status *estatus, *tmp;
+ 	struct acpi_hest_generic_data *gdata;
+-	const guid_t *fru_id = &guid_null;
+-	char *fru_text = "";
++	const guid_t *fru_id;
++	char *fru_text;
+ 	guid_t *sec_type;
+ 	static u32 err_seq;
+ 
+@@ -160,17 +161,23 @@ static int extlog_print(struct notifier_block *nb, unsigned long val,
+ 
+ 	/* log event via trace */
+ 	err_seq++;
+-	gdata = (struct acpi_hest_generic_data *)(tmp + 1);
+-	if (gdata->validation_bits & CPER_SEC_VALID_FRU_ID)
+-		fru_id = (guid_t *)gdata->fru_id;
+-	if (gdata->validation_bits & CPER_SEC_VALID_FRU_TEXT)
+-		fru_text = gdata->fru_text;
+-	sec_type = (guid_t *)gdata->section_type;
+-	if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) {
+-		struct cper_sec_mem_err *mem = (void *)(gdata + 1);
+-		if (gdata->error_data_length >= sizeof(*mem))
+-			trace_extlog_mem_event(mem, err_seq, fru_id, fru_text,
+-					       (u8)gdata->error_severity);
++	apei_estatus_for_each_section(tmp, gdata) {
++		if (gdata->validation_bits & CPER_SEC_VALID_FRU_ID)
++			fru_id = (guid_t *)gdata->fru_id;
++		else
++			fru_id = &guid_null;
++		if (gdata->validation_bits & CPER_SEC_VALID_FRU_TEXT)
++			fru_text = gdata->fru_text;
++		else
++			fru_text = "";
++		sec_type = (guid_t *)gdata->section_type;
++		if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) {
++			struct cper_sec_mem_err *mem = (void *)(gdata + 1);
++
++			if (gdata->error_data_length >= sizeof(*mem))
++				trace_extlog_mem_event(mem, err_seq, fru_id, fru_text,
++						       (u8)gdata->error_severity);
++		}
+ 	}
+ 
+ out:
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index e39d59ad64964..b13713199ad94 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -500,6 +500,70 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_BOARD_NAME, "PF5LUXG"),
+ 		},
+ 	},
++	/*
++	 * More Tongfang devices with the same issue as the Clevo NL5xRU and
++	 * NL5xNU/TUXEDO Aura 15 Gen1 and Gen2. See the description above.
++	 */
++	{
++	.callback = video_detect_force_native,
++	.ident = "TongFang GKxNRxx",
++	.matches = {
++		DMI_MATCH(DMI_BOARD_NAME, "GKxNRxx"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "TongFang GKxNRxx",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "POLARIS1501A1650TI"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "TongFang GKxNRxx",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "POLARIS1501A2060"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "TongFang GKxNRxx",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "POLARIS1701A1650TI"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "TongFang GKxNRxx",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "POLARIS1701A2060"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "TongFang GMxNGxx",
++	.matches = {
++		DMI_MATCH(DMI_BOARD_NAME, "GMxNGxx"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "TongFang GMxZGxx",
++	.matches = {
++		DMI_MATCH(DMI_BOARD_NAME, "GMxZGxx"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "TongFang GMxRGxx",
++	.matches = {
++		DMI_MATCH(DMI_BOARD_NAME, "GMxRGxx"),
++		},
++	},
+ 	/*
+ 	 * Desktops which falsely report a backlight and which our heuristics
+ 	 * for this do not catch.
+diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h
+index d1f284f0c83d9..1ce8973569933 100644
+--- a/drivers/ata/ahci.h
++++ b/drivers/ata/ahci.h
+@@ -254,7 +254,7 @@ enum {
+ 	PCS_7				= 0x94, /* 7+ port PCS (Denverton) */
+ 
+ 	/* em constants */
+-	EM_MAX_SLOTS			= 8,
++	EM_MAX_SLOTS			= SATA_PMP_MAX_PORTS,
+ 	EM_MAX_RETRY			= 5,
+ 
+ 	/* em_ctl bits */
+diff --git a/drivers/ata/ahci_imx.c b/drivers/ata/ahci_imx.c
+index 388baf528fa81..189f75d537414 100644
+--- a/drivers/ata/ahci_imx.c
++++ b/drivers/ata/ahci_imx.c
+@@ -1230,4 +1230,4 @@ module_platform_driver(imx_ahci_driver);
+ MODULE_DESCRIPTION("Freescale i.MX AHCI SATA platform driver");
+ MODULE_AUTHOR("Richard Zhu <Hong-Xing.Zhu@freescale.com>");
+ MODULE_LICENSE("GPL");
+-MODULE_ALIAS("ahci:imx");
++MODULE_ALIAS("platform:" DRV_NAME);
+diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
+index 8272a3a002a34..51647926e6051 100644
+--- a/drivers/base/arch_topology.c
++++ b/drivers/base/arch_topology.c
+@@ -596,4 +596,23 @@ void __init init_cpu_topology(void)
+ 	else if (of_have_populated_dt() && parse_dt_topology())
+ 		reset_cpu_topology();
+ }
++
++void store_cpu_topology(unsigned int cpuid)
++{
++	struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
++
++	if (cpuid_topo->package_id != -1)
++		goto topology_populated;
++
++	cpuid_topo->thread_id = -1;
++	cpuid_topo->core_id = cpuid;
++	cpuid_topo->package_id = cpu_to_node(cpuid);
++
++	pr_debug("CPU%u: package %d core %d thread %d\n",
++		 cpuid, cpuid_topo->package_id, cpuid_topo->core_id,
++		 cpuid_topo->thread_id);
++
++topology_populated:
++	update_siblings_masks(cpuid);
++}
+ #endif
+diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+index 7fdd30e92e429..9b3d24721d7b2 100644
+--- a/drivers/cpufreq/qcom-cpufreq-nvmem.c
++++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+@@ -215,6 +215,7 @@ static int qcom_cpufreq_krait_name_version(struct device *cpu_dev,
+ 	int speed = 0, pvs = 0, pvs_ver = 0;
+ 	u8 *speedbin;
+ 	size_t len;
++	int ret = 0;
+ 
+ 	speedbin = nvmem_cell_read(speedbin_nvmem, &len);
+ 
+@@ -232,7 +233,8 @@ static int qcom_cpufreq_krait_name_version(struct device *cpu_dev,
+ 		break;
+ 	default:
+ 		dev_err(cpu_dev, "Unable to read nvmem data. Defaulting to 0!\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto len_error;
+ 	}
+ 
+ 	snprintf(*pvs_name, sizeof("speedXX-pvsXX-vXX"), "speed%d-pvs%d-v%d",
+@@ -240,8 +242,9 @@ static int qcom_cpufreq_krait_name_version(struct device *cpu_dev,
+ 
+ 	drv->versions = (1 << speed);
+ 
++len_error:
+ 	kfree(speedbin);
+-	return 0;
++	return ret;
+ }
+ 
+ static const struct qcom_cpufreq_match_data match_data_kryo = {
+@@ -264,7 +267,8 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
+ 	struct nvmem_cell *speedbin_nvmem;
+ 	struct device_node *np;
+ 	struct device *cpu_dev;
+-	char *pvs_name = "speedXX-pvsXX-vXX";
++	char pvs_name_buffer[] = "speedXX-pvsXX-vXX";
++	char *pvs_name = pvs_name_buffer;
+ 	unsigned cpu;
+ 	const struct of_device_id *match;
+ 	int ret;
+diff --git a/drivers/dma/mxs-dma.c b/drivers/dma/mxs-dma.c
+index 65f816b40c328..dc147cc2436e9 100644
+--- a/drivers/dma/mxs-dma.c
++++ b/drivers/dma/mxs-dma.c
+@@ -167,29 +167,11 @@ static struct mxs_dma_type mxs_dma_types[] = {
+ 	}
+ };
+ 
+-static const struct platform_device_id mxs_dma_ids[] = {
+-	{
+-		.name = "imx23-dma-apbh",
+-		.driver_data = (kernel_ulong_t) &mxs_dma_types[0],
+-	}, {
+-		.name = "imx23-dma-apbx",
+-		.driver_data = (kernel_ulong_t) &mxs_dma_types[1],
+-	}, {
+-		.name = "imx28-dma-apbh",
+-		.driver_data = (kernel_ulong_t) &mxs_dma_types[2],
+-	}, {
+-		.name = "imx28-dma-apbx",
+-		.driver_data = (kernel_ulong_t) &mxs_dma_types[3],
+-	}, {
+-		/* end of list */
+-	}
+-};
+-
+ static const struct of_device_id mxs_dma_dt_ids[] = {
+-	{ .compatible = "fsl,imx23-dma-apbh", .data = &mxs_dma_ids[0], },
+-	{ .compatible = "fsl,imx23-dma-apbx", .data = &mxs_dma_ids[1], },
+-	{ .compatible = "fsl,imx28-dma-apbh", .data = &mxs_dma_ids[2], },
+-	{ .compatible = "fsl,imx28-dma-apbx", .data = &mxs_dma_ids[3], },
++	{ .compatible = "fsl,imx23-dma-apbh", .data = &mxs_dma_types[0], },
++	{ .compatible = "fsl,imx23-dma-apbx", .data = &mxs_dma_types[1], },
++	{ .compatible = "fsl,imx28-dma-apbh", .data = &mxs_dma_types[2], },
++	{ .compatible = "fsl,imx28-dma-apbx", .data = &mxs_dma_types[3], },
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, mxs_dma_dt_ids);
+@@ -688,7 +670,7 @@ static enum dma_status mxs_dma_tx_status(struct dma_chan *chan,
+ 	return mxs_chan->status;
+ }
+ 
+-static int __init mxs_dma_init(struct mxs_dma_engine *mxs_dma)
++static int mxs_dma_init(struct mxs_dma_engine *mxs_dma)
+ {
+ 	int ret;
+ 
+@@ -759,11 +741,9 @@ static struct dma_chan *mxs_dma_xlate(struct of_phandle_args *dma_spec,
+ 				     ofdma->of_node);
+ }
+ 
+-static int __init mxs_dma_probe(struct platform_device *pdev)
++static int mxs_dma_probe(struct platform_device *pdev)
+ {
+ 	struct device_node *np = pdev->dev.of_node;
+-	const struct platform_device_id *id_entry;
+-	const struct of_device_id *of_id;
+ 	const struct mxs_dma_type *dma_type;
+ 	struct mxs_dma_engine *mxs_dma;
+ 	struct resource *iores;
+@@ -779,13 +759,7 @@ static int __init mxs_dma_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	of_id = of_match_device(mxs_dma_dt_ids, &pdev->dev);
+-	if (of_id)
+-		id_entry = of_id->data;
+-	else
+-		id_entry = platform_get_device_id(pdev);
+-
+-	dma_type = (struct mxs_dma_type *)id_entry->driver_data;
++	dma_type = (struct mxs_dma_type *)of_device_get_match_data(&pdev->dev);
+ 	mxs_dma->type = dma_type->type;
+ 	mxs_dma->dev_id = dma_type->id;
+ 
+@@ -865,11 +839,7 @@ static struct platform_driver mxs_dma_driver = {
+ 		.name	= "mxs-dma",
+ 		.of_match_table = mxs_dma_dt_ids,
+ 	},
+-	.id_table	= mxs_dma_ids,
++	.probe = mxs_dma_probe,
+ };
+ 
+-static int __init mxs_dma_module_init(void)
+-{
+-	return platform_driver_probe(&mxs_dma_driver, mxs_dma_probe);
+-}
+-subsys_initcall(mxs_dma_module_init);
++builtin_platform_driver(mxs_dma_driver);
+diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
+index 6a311cd934403..e6de627342695 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
++++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
+@@ -213,14 +213,14 @@ static int virtio_gpu_cursor_prepare_fb(struct drm_plane *plane,
+ }
+ 
+ static void virtio_gpu_cursor_cleanup_fb(struct drm_plane *plane,
+-					 struct drm_plane_state *old_state)
++					struct drm_plane_state *state)
+ {
+ 	struct virtio_gpu_framebuffer *vgfb;
+ 
+-	if (!plane->state->fb)
++	if (!state->fb)
+ 		return;
+ 
+-	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
++	vgfb = to_virtio_gpu_framebuffer(state->fb);
+ 	if (vgfb->fence) {
+ 		dma_fence_put(&vgfb->fence->f);
+ 		vgfb->fence = NULL;
+diff --git a/drivers/hid/hid-magicmouse.c b/drivers/hid/hid-magicmouse.c
+index fc4c074597539..28158d2f23523 100644
+--- a/drivers/hid/hid-magicmouse.c
++++ b/drivers/hid/hid-magicmouse.c
+@@ -387,7 +387,7 @@ static int magicmouse_raw_event(struct hid_device *hdev,
+ 		magicmouse_raw_event(hdev, report, data + 2, data[1]);
+ 		magicmouse_raw_event(hdev, report, data + 2 + data[1],
+ 			size - 2 - data[1]);
+-		break;
++		return 0;
+ 	default:
+ 		return 0;
+ 	}
+diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
+index bb9211215a688..032129292957e 100644
+--- a/drivers/hwmon/coretemp.c
++++ b/drivers/hwmon/coretemp.c
+@@ -46,9 +46,6 @@ MODULE_PARM_DESC(tjmax, "TjMax value in degrees Celsius");
+ #define TOTAL_ATTRS		(MAX_CORE_ATTRS + 1)
+ #define MAX_CORE_DATA		(NUM_REAL_CORES + BASE_SYSFS_ATTR_NO)
+ 
+-#define TO_CORE_ID(cpu)		(cpu_data(cpu).cpu_core_id)
+-#define TO_ATTR_NO(cpu)		(TO_CORE_ID(cpu) + BASE_SYSFS_ATTR_NO)
+-
+ #ifdef CONFIG_SMP
+ #define for_each_sibling(i, cpu) \
+ 	for_each_cpu(i, topology_sibling_cpumask(cpu))
+@@ -91,6 +88,8 @@ struct temp_data {
+ struct platform_data {
+ 	struct device		*hwmon_dev;
+ 	u16			pkg_id;
++	u16			cpu_map[NUM_REAL_CORES];
++	struct ida		ida;
+ 	struct cpumask		cpumask;
+ 	struct temp_data	*core_data[MAX_CORE_DATA];
+ 	struct device_attribute name_attr;
+@@ -441,7 +440,7 @@ static struct temp_data *init_temp_data(unsigned int cpu, int pkg_flag)
+ 							MSR_IA32_THERM_STATUS;
+ 	tdata->is_pkg_data = pkg_flag;
+ 	tdata->cpu = cpu;
+-	tdata->cpu_core_id = TO_CORE_ID(cpu);
++	tdata->cpu_core_id = topology_core_id(cpu);
+ 	tdata->attr_size = MAX_CORE_ATTRS;
+ 	mutex_init(&tdata->update_lock);
+ 	return tdata;
+@@ -454,7 +453,7 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
+ 	struct platform_data *pdata = platform_get_drvdata(pdev);
+ 	struct cpuinfo_x86 *c = &cpu_data(cpu);
+ 	u32 eax, edx;
+-	int err, attr_no;
++	int err, index, attr_no;
+ 
+ 	/*
+ 	 * Find attr number for sysfs:
+@@ -462,14 +461,26 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
+ 	 * The attr number is always core id + 2
+ 	 * The Pkgtemp will always show up as temp1_*, if available
+ 	 */
+-	attr_no = pkg_flag ? PKG_SYSFS_ATTR_NO : TO_ATTR_NO(cpu);
++	if (pkg_flag) {
++		attr_no = PKG_SYSFS_ATTR_NO;
++	} else {
++		index = ida_alloc(&pdata->ida, GFP_KERNEL);
++		if (index < 0)
++			return index;
++		pdata->cpu_map[index] = topology_core_id(cpu);
++		attr_no = index + BASE_SYSFS_ATTR_NO;
++	}
+ 
+-	if (attr_no > MAX_CORE_DATA - 1)
+-		return -ERANGE;
++	if (attr_no > MAX_CORE_DATA - 1) {
++		err = -ERANGE;
++		goto ida_free;
++	}
+ 
+ 	tdata = init_temp_data(cpu, pkg_flag);
+-	if (!tdata)
+-		return -ENOMEM;
++	if (!tdata) {
++		err = -ENOMEM;
++		goto ida_free;
++	}
+ 
+ 	/* Test if we can access the status register */
+ 	err = rdmsr_safe_on_cpu(cpu, tdata->status_reg, &eax, &edx);
+@@ -505,6 +516,9 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
+ exit_free:
+ 	pdata->core_data[attr_no] = NULL;
+ 	kfree(tdata);
++ida_free:
++	if (!pkg_flag)
++		ida_free(&pdata->ida, index);
+ 	return err;
+ }
+ 
+@@ -524,6 +538,9 @@ static void coretemp_remove_core(struct platform_data *pdata, int indx)
+ 
+ 	kfree(pdata->core_data[indx]);
+ 	pdata->core_data[indx] = NULL;
++
++	if (indx >= BASE_SYSFS_ATTR_NO)
++		ida_free(&pdata->ida, indx - BASE_SYSFS_ATTR_NO);
+ }
+ 
+ static int coretemp_probe(struct platform_device *pdev)
+@@ -537,6 +554,7 @@ static int coretemp_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	pdata->pkg_id = pdev->id;
++	ida_init(&pdata->ida);
+ 	platform_set_drvdata(pdev, pdata);
+ 
+ 	pdata->hwmon_dev = devm_hwmon_device_register_with_groups(dev, DRVNAME,
+@@ -553,6 +571,7 @@ static int coretemp_remove(struct platform_device *pdev)
+ 		if (pdata->core_data[i])
+ 			coretemp_remove_core(pdata, i);
+ 
++	ida_destroy(&pdata->ida);
+ 	return 0;
+ }
+ 
+@@ -647,7 +666,7 @@ static int coretemp_cpu_offline(unsigned int cpu)
+ 	struct platform_device *pdev = coretemp_get_pdev(cpu);
+ 	struct platform_data *pd;
+ 	struct temp_data *tdata;
+-	int indx, target;
++	int i, indx = -1, target;
+ 
+ 	/*
+ 	 * Don't execute this on suspend as the device remove locks
+@@ -660,12 +679,19 @@ static int coretemp_cpu_offline(unsigned int cpu)
+ 	if (!pdev)
+ 		return 0;
+ 
+-	/* The core id is too big, just return */
+-	indx = TO_ATTR_NO(cpu);
+-	if (indx > MAX_CORE_DATA - 1)
++	pd = platform_get_drvdata(pdev);
++
++	for (i = 0; i < NUM_REAL_CORES; i++) {
++		if (pd->cpu_map[i] == topology_core_id(cpu)) {
++			indx = i + BASE_SYSFS_ATTR_NO;
++			break;
++		}
++	}
++
++	/* Too many cores and this core is not populated, just return */
++	if (indx < 0)
+ 		return 0;
+ 
+-	pd = platform_get_drvdata(pdev);
+ 	tdata = pd->core_data[indx];
+ 
+ 	cpumask_clear_cpu(cpu, &pd->cpumask);
+diff --git a/drivers/i2c/busses/i2c-qcom-cci.c b/drivers/i2c/busses/i2c-qcom-cci.c
+index 09e599069a81d..06c87c79bae76 100644
+--- a/drivers/i2c/busses/i2c-qcom-cci.c
++++ b/drivers/i2c/busses/i2c-qcom-cci.c
+@@ -638,6 +638,11 @@ static int cci_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		goto error;
+ 
++	pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
++	pm_runtime_use_autosuspend(dev);
++	pm_runtime_set_active(dev);
++	pm_runtime_enable(dev);
++
+ 	for (i = 0; i < cci->data->num_masters; i++) {
+ 		if (!cci->master[i].cci)
+ 			continue;
+@@ -649,14 +654,12 @@ static int cci_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
+-	pm_runtime_use_autosuspend(dev);
+-	pm_runtime_set_active(dev);
+-	pm_runtime_enable(dev);
+-
+ 	return 0;
+ 
+ error_i2c:
++	pm_runtime_disable(dev);
++	pm_runtime_dont_use_autosuspend(dev);
++
+ 	for (--i ; i >= 0; i--) {
+ 		if (cci->master[i].cci) {
+ 			i2c_del_adapter(&cci->master[i].adap);
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index c48cf737b521d..f23329b7f97cd 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -2846,6 +2846,7 @@ static int __init si_domain_init(int hw)
+ 
+ 	if (md_domain_init(si_domain, DEFAULT_DOMAIN_ADDRESS_WIDTH)) {
+ 		domain_exit(si_domain);
++		si_domain = NULL;
+ 		return -EFAULT;
+ 	}
+ 
+@@ -3505,6 +3506,10 @@ free_iommu:
+ 		disable_dmar_iommu(iommu);
+ 		free_dmar_iommu(iommu);
+ 	}
++	if (si_domain) {
++		domain_exit(si_domain);
++		si_domain = NULL;
++	}
+ 
+ 	kfree(g_iommus);
+ 
+diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c
+index ea13170a6a2cf..de34a87d1130e 100644
+--- a/drivers/media/platform/qcom/venus/vdec.c
++++ b/drivers/media/platform/qcom/venus/vdec.c
+@@ -158,6 +158,8 @@ vdec_try_fmt_common(struct venus_inst *inst, struct v4l2_format *f)
+ 		else
+ 			return NULL;
+ 		fmt = find_format(inst, pixmp->pixelformat, f->type);
++		if (!fmt)
++			return NULL;
+ 	}
+ 
+ 	pixmp->width = clamp(pixmp->width, frame_width_min(inst),
+diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
+index de4cf6eb5258b..142319c48405d 100644
+--- a/drivers/media/rc/mceusb.c
++++ b/drivers/media/rc/mceusb.c
+@@ -1077,7 +1077,7 @@ static int mceusb_set_timeout(struct rc_dev *dev, unsigned int timeout)
+ 	struct mceusb_dev *ir = dev->priv;
+ 	unsigned int units;
+ 
+-	units = DIV_ROUND_CLOSEST(timeout, MCE_TIME_UNIT);
++	units = DIV_ROUND_UP(timeout, MCE_TIME_UNIT);
+ 
+ 	cmdbuf[2] = units >> 8;
+ 	cmdbuf[3] = units;
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 66a00b7c751f7..6622e32621874 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1069,6 +1069,11 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
+ 	nr = blk_rq_sectors(req);
+ 
+ 	do {
++		unsigned int erase_arg = card->erase_arg;
++
++		if (mmc_card_broken_sd_discard(card))
++			erase_arg = SD_ERASE_ARG;
++
+ 		err = 0;
+ 		if (card->quirks & MMC_QUIRK_INAND_CMD38) {
+ 			err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
+@@ -1079,7 +1084,7 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
+ 					 card->ext_csd.generic_cmd6_time);
+ 		}
+ 		if (!err)
+-			err = mmc_erase(card, from, nr, card->erase_arg);
++			err = mmc_erase(card, from, nr, erase_arg);
+ 	} while (err == -EIO && !mmc_blk_reset(md, card->host, type));
+ 	if (err)
+ 		status = BLK_STS_IOERR;
+diff --git a/drivers/mmc/core/card.h b/drivers/mmc/core/card.h
+index 7bd392d55cfa5..5c6986131faff 100644
+--- a/drivers/mmc/core/card.h
++++ b/drivers/mmc/core/card.h
+@@ -70,6 +70,7 @@ struct mmc_fixup {
+ #define EXT_CSD_REV_ANY (-1u)
+ 
+ #define CID_MANFID_SANDISK      0x2
++#define CID_MANFID_SANDISK_SD   0x3
+ #define CID_MANFID_ATP          0x9
+ #define CID_MANFID_TOSHIBA      0x11
+ #define CID_MANFID_MICRON       0x13
+@@ -222,4 +223,9 @@ static inline int mmc_card_broken_hpi(const struct mmc_card *c)
+ 	return c->quirks & MMC_QUIRK_BROKEN_HPI;
+ }
+ 
++static inline int mmc_card_broken_sd_discard(const struct mmc_card *c)
++{
++	return c->quirks & MMC_QUIRK_BROKEN_SD_DISCARD;
++}
++
+ #endif
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index d68e6e513a4f4..c8c0f50a2076d 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -99,6 +99,12 @@ static const struct mmc_fixup __maybe_unused mmc_blk_fixups[] = {
+ 	MMC_FIXUP("V10016", CID_MANFID_KINGSTON, CID_OEMID_ANY, add_quirk_mmc,
+ 		  MMC_QUIRK_TRIM_BROKEN),
+ 
++	/*
++	 * Some SD cards reports discard support while they don't
++	 */
++	MMC_FIXUP(CID_NAME_ANY, CID_MANFID_SANDISK_SD, 0x5344, add_quirk_sd,
++		  MMC_QUIRK_BROKEN_SD_DISCARD),
++
+ 	END_FIXUP
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index d50b691f6c44e..67211fc42d242 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -760,7 +760,7 @@ static void tegra_sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
+ 	 */
+ 	host_clk = tegra_host->ddr_signaling ? clock * 2 : clock;
+ 	clk_set_rate(pltfm_host->clk, host_clk);
+-	tegra_host->curr_clk_rate = host_clk;
++	tegra_host->curr_clk_rate = clk_get_rate(pltfm_host->clk);
+ 	if (tegra_host->ddr_signaling)
+ 		host->max_clk = host_clk;
+ 	else
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.c b/drivers/net/ethernet/hisilicon/hns/hnae.c
+index 00fafc0f85121..430eccea8e5e9 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.c
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.c
+@@ -419,8 +419,10 @@ int hnae_ae_register(struct hnae_ae_dev *hdev, struct module *owner)
+ 	hdev->cls_dev.release = hnae_release;
+ 	(void)dev_set_name(&hdev->cls_dev, "hnae%d", hdev->id);
+ 	ret = device_register(&hdev->cls_dev);
+-	if (ret)
++	if (ret) {
++		put_device(&hdev->cls_dev);
+ 		return ret;
++	}
+ 
+ 	__module_get(THIS_MODULE);
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 63054061966e6..cc5f5c237774f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -2081,9 +2081,6 @@ static int i40e_set_ringparam(struct net_device *netdev,
+ 			 */
+ 			rx_rings[i].tail = hw->hw_addr + I40E_PRTGEN_STATUS;
+ 			err = i40e_setup_rx_descriptors(&rx_rings[i]);
+-			if (err)
+-				goto rx_unwind;
+-			err = i40e_alloc_rx_bi(&rx_rings[i]);
+ 			if (err)
+ 				goto rx_unwind;
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index c7f243ddbcf72..ea6a984c6d12b 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -3409,12 +3409,8 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
+ 	if (ring->vsi->type == I40E_VSI_MAIN)
+ 		xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);
+ 
+-	kfree(ring->rx_bi);
+ 	ring->xsk_pool = i40e_xsk_pool(ring);
+ 	if (ring->xsk_pool) {
+-		ret = i40e_alloc_rx_bi_zc(ring);
+-		if (ret)
+-			return ret;
+ 		ring->rx_buf_len =
+ 		  xsk_pool_get_rx_frame_size(ring->xsk_pool);
+ 		/* For AF_XDP ZC, we disallow packets to span on
+@@ -3432,9 +3428,6 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
+ 			 ring->queue_index);
+ 
+ 	} else {
+-		ret = i40e_alloc_rx_bi(ring);
+-		if (ret)
+-			return ret;
+ 		ring->rx_buf_len = vsi->rx_buf_len;
+ 		if (ring->vsi->type == I40E_VSI_MAIN) {
+ 			ret = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
+@@ -12684,6 +12677,14 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi,
+ 		i40e_reset_and_rebuild(pf, true, true);
+ 	}
+ 
++	if (!i40e_enabled_xdp_vsi(vsi) && prog) {
++		if (i40e_realloc_rx_bi_zc(vsi, true))
++			return -ENOMEM;
++	} else if (i40e_enabled_xdp_vsi(vsi) && !prog) {
++		if (i40e_realloc_rx_bi_zc(vsi, false))
++			return -ENOMEM;
++	}
++
+ 	for (i = 0; i < vsi->num_queue_pairs; i++)
+ 		WRITE_ONCE(vsi->rx_rings[i]->xdp_prog, vsi->xdp_prog);
+ 
+@@ -12916,6 +12917,7 @@ int i40e_queue_pair_disable(struct i40e_vsi *vsi, int queue_pair)
+ 
+ 	i40e_queue_pair_disable_irq(vsi, queue_pair);
+ 	err = i40e_queue_pair_toggle_rings(vsi, queue_pair, false /* off */);
++	i40e_clean_rx_ring(vsi->rx_rings[queue_pair]);
+ 	i40e_queue_pair_toggle_napi(vsi, queue_pair, false /* off */);
+ 	i40e_queue_pair_clean_rings(vsi, queue_pair);
+ 	i40e_queue_pair_reset_stats(vsi, queue_pair);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 5ad28129fab2a..43be33d87e391 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -1305,14 +1305,6 @@ err:
+ 	return -ENOMEM;
+ }
+ 
+-int i40e_alloc_rx_bi(struct i40e_ring *rx_ring)
+-{
+-	unsigned long sz = sizeof(*rx_ring->rx_bi) * rx_ring->count;
+-
+-	rx_ring->rx_bi = kzalloc(sz, GFP_KERNEL);
+-	return rx_ring->rx_bi ? 0 : -ENOMEM;
+-}
+-
+ static void i40e_clear_rx_bi(struct i40e_ring *rx_ring)
+ {
+ 	memset(rx_ring->rx_bi, 0, sizeof(*rx_ring->rx_bi) * rx_ring->count);
+@@ -1443,6 +1435,11 @@ int i40e_setup_rx_descriptors(struct i40e_ring *rx_ring)
+ 
+ 	rx_ring->xdp_prog = rx_ring->vsi->xdp_prog;
+ 
++	rx_ring->rx_bi =
++		kcalloc(rx_ring->count, sizeof(*rx_ring->rx_bi), GFP_KERNEL);
++	if (!rx_ring->rx_bi)
++		return -ENOMEM;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+index 93ac201f68b8e..af843e8169f7d 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+@@ -465,7 +465,6 @@ int __i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size);
+ bool __i40e_chk_linearize(struct sk_buff *skb);
+ int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
+ 		  u32 flags);
+-int i40e_alloc_rx_bi(struct i40e_ring *rx_ring);
+ 
+ /**
+  * i40e_get_head - Retrieve head from head writeback
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+index 75e4a698c3db2..7f12261236293 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+@@ -9,14 +9,6 @@
+ #include "i40e_txrx_common.h"
+ #include "i40e_xsk.h"
+ 
+-int i40e_alloc_rx_bi_zc(struct i40e_ring *rx_ring)
+-{
+-	unsigned long sz = sizeof(*rx_ring->rx_bi_zc) * rx_ring->count;
+-
+-	rx_ring->rx_bi_zc = kzalloc(sz, GFP_KERNEL);
+-	return rx_ring->rx_bi_zc ? 0 : -ENOMEM;
+-}
+-
+ void i40e_clear_rx_bi_zc(struct i40e_ring *rx_ring)
+ {
+ 	memset(rx_ring->rx_bi_zc, 0,
+@@ -28,6 +20,58 @@ static struct xdp_buff **i40e_rx_bi(struct i40e_ring *rx_ring, u32 idx)
+ 	return &rx_ring->rx_bi_zc[idx];
+ }
+ 
++/**
++ * i40e_realloc_rx_xdp_bi - reallocate SW ring for either XSK or normal buffer
++ * @rx_ring: Current rx ring
++ * @pool_present: is pool for XSK present
++ *
++ * Try allocating memory and return ENOMEM, if failed to allocate.
++ * If allocation was successful, substitute buffer with allocated one.
++ * Returns 0 on success, negative on failure
++ */
++static int i40e_realloc_rx_xdp_bi(struct i40e_ring *rx_ring, bool pool_present)
++{
++	size_t elem_size = pool_present ? sizeof(*rx_ring->rx_bi_zc) :
++					  sizeof(*rx_ring->rx_bi);
++	void *sw_ring = kcalloc(rx_ring->count, elem_size, GFP_KERNEL);
++
++	if (!sw_ring)
++		return -ENOMEM;
++
++	if (pool_present) {
++		kfree(rx_ring->rx_bi);
++		rx_ring->rx_bi = NULL;
++		rx_ring->rx_bi_zc = sw_ring;
++	} else {
++		kfree(rx_ring->rx_bi_zc);
++		rx_ring->rx_bi_zc = NULL;
++		rx_ring->rx_bi = sw_ring;
++	}
++	return 0;
++}
++
++/**
++ * i40e_realloc_rx_bi_zc - reallocate rx SW rings
++ * @vsi: Current VSI
++ * @zc: is zero copy set
++ *
++ * Reallocate buffer for rx_rings that might be used by XSK.
++ * XDP requires more memory, than rx_buf provides.
++ * Returns 0 on success, negative on failure
++ */
++int i40e_realloc_rx_bi_zc(struct i40e_vsi *vsi, bool zc)
++{
++	struct i40e_ring *rx_ring;
++	unsigned long q;
++
++	for_each_set_bit(q, vsi->af_xdp_zc_qps, vsi->alloc_queue_pairs) {
++		rx_ring = vsi->rx_rings[q];
++		if (i40e_realloc_rx_xdp_bi(rx_ring, zc))
++			return -ENOMEM;
++	}
++	return 0;
++}
++
+ /**
+  * i40e_xsk_pool_enable - Enable/associate an AF_XDP buffer pool to a
+  * certain ring/qid
+@@ -68,6 +112,10 @@ static int i40e_xsk_pool_enable(struct i40e_vsi *vsi,
+ 		if (err)
+ 			return err;
+ 
++		err = i40e_realloc_rx_xdp_bi(vsi->rx_rings[qid], true);
++		if (err)
++			return err;
++
+ 		err = i40e_queue_pair_enable(vsi, qid);
+ 		if (err)
+ 			return err;
+@@ -112,6 +160,9 @@ static int i40e_xsk_pool_disable(struct i40e_vsi *vsi, u16 qid)
+ 	xsk_pool_dma_unmap(pool, I40E_RX_DMA_ATTR);
+ 
+ 	if (if_running) {
++		err = i40e_realloc_rx_xdp_bi(vsi->rx_rings[qid], false);
++		if (err)
++			return err;
+ 		err = i40e_queue_pair_enable(vsi, qid);
+ 		if (err)
+ 			return err;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.h b/drivers/net/ethernet/intel/i40e/i40e_xsk.h
+index 7adfd8539247c..36f5b6d206010 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.h
+@@ -17,7 +17,7 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget);
+ 
+ bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, struct i40e_ring *tx_ring);
+ int i40e_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags);
+-int i40e_alloc_rx_bi_zc(struct i40e_ring *rx_ring);
++int i40e_realloc_rx_bi_zc(struct i40e_vsi *vsi, bool zc);
+ void i40e_clear_rx_bi_zc(struct i40e_ring *rx_ring);
+ 
+ #endif /* _I40E_XSK_H_ */
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index e42520f909fe2..cb12d0171517e 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -2383,11 +2383,15 @@ err_out:
+ 	 * than the full array, but leave the qcq shells in place
+ 	 */
+ 	for (i = lif->nxqs; i < lif->ionic->ntxqs_per_lif; i++) {
+-		lif->txqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
+-		ionic_qcq_free(lif, lif->txqcqs[i]);
++		if (lif->txqcqs && lif->txqcqs[i]) {
++			lif->txqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
++			ionic_qcq_free(lif, lif->txqcqs[i]);
++		}
+ 
+-		lif->rxqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
+-		ionic_qcq_free(lif, lif->rxqcqs[i]);
++		if (lif->rxqcqs && lif->rxqcqs[i]) {
++			lif->rxqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
++			ionic_qcq_free(lif, lif->rxqcqs[i]);
++		}
+ 	}
+ 
+ 	return err;
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index 5b7413305be63..eb1be73020822 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -3255,6 +3255,30 @@ static int efx_ef10_set_mac_address(struct efx_nic *efx)
+ 	bool was_enabled = efx->port_enabled;
+ 	int rc;
+ 
++#ifdef CONFIG_SFC_SRIOV
++	/* If this function is a VF and we have access to the parent PF,
++	 * then use the PF control path to attempt to change the VF MAC address.
++	 */
++	if (efx->pci_dev->is_virtfn && efx->pci_dev->physfn) {
++		struct efx_nic *efx_pf = pci_get_drvdata(efx->pci_dev->physfn);
++		struct efx_ef10_nic_data *nic_data = efx->nic_data;
++		u8 mac[ETH_ALEN];
++
++		/* net_dev->dev_addr can be zeroed by efx_net_stop in
++		 * efx_ef10_sriov_set_vf_mac, so pass in a copy.
++		 */
++		ether_addr_copy(mac, efx->net_dev->dev_addr);
++
++		rc = efx_ef10_sriov_set_vf_mac(efx_pf, nic_data->vf_index, mac);
++		if (!rc)
++			return 0;
++
++		netif_dbg(efx, drv, efx->net_dev,
++			  "Updating VF mac via PF failed (%d), setting directly\n",
++			  rc);
++	}
++#endif
++
+ 	efx_device_detach_sync(efx);
+ 	efx_net_stop(efx->net_dev);
+ 
+@@ -3277,40 +3301,6 @@ static int efx_ef10_set_mac_address(struct efx_nic *efx)
+ 		efx_net_open(efx->net_dev);
+ 	efx_device_attach_if_not_resetting(efx);
+ 
+-#ifdef CONFIG_SFC_SRIOV
+-	if (efx->pci_dev->is_virtfn && efx->pci_dev->physfn) {
+-		struct efx_ef10_nic_data *nic_data = efx->nic_data;
+-		struct pci_dev *pci_dev_pf = efx->pci_dev->physfn;
+-
+-		if (rc == -EPERM) {
+-			struct efx_nic *efx_pf;
+-
+-			/* Switch to PF and change MAC address on vport */
+-			efx_pf = pci_get_drvdata(pci_dev_pf);
+-
+-			rc = efx_ef10_sriov_set_vf_mac(efx_pf,
+-						       nic_data->vf_index,
+-						       efx->net_dev->dev_addr);
+-		} else if (!rc) {
+-			struct efx_nic *efx_pf = pci_get_drvdata(pci_dev_pf);
+-			struct efx_ef10_nic_data *nic_data = efx_pf->nic_data;
+-			unsigned int i;
+-
+-			/* MAC address successfully changed by VF (with MAC
+-			 * spoofing) so update the parent PF if possible.
+-			 */
+-			for (i = 0; i < efx_pf->vf_count; ++i) {
+-				struct ef10_vf *vf = nic_data->vf + i;
+-
+-				if (vf->efx == efx) {
+-					ether_addr_copy(vf->mac,
+-							efx->net_dev->dev_addr);
+-					return 0;
+-				}
+-			}
+-		}
+-	} else
+-#endif
+ 	if (rc == -EPERM) {
+ 		netif_err(efx, drv, efx->net_dev,
+ 			  "Cannot change MAC address; use sfboot to enable"
+diff --git a/drivers/net/ethernet/sfc/filter.h b/drivers/net/ethernet/sfc/filter.h
+index 40b2af8bfb81c..2ac3c8f1b04b5 100644
+--- a/drivers/net/ethernet/sfc/filter.h
++++ b/drivers/net/ethernet/sfc/filter.h
+@@ -157,7 +157,8 @@ struct efx_filter_spec {
+ 	u32	flags:6;
+ 	u32	dmaq_id:12;
+ 	u32	rss_context;
+-	__be16	outer_vid __aligned(4); /* allow jhash2() of match values */
++	u32	vport_id;
++	__be16	outer_vid;
+ 	__be16	inner_vid;
+ 	u8	loc_mac[ETH_ALEN];
+ 	u8	rem_mac[ETH_ALEN];
+diff --git a/drivers/net/ethernet/sfc/rx_common.c b/drivers/net/ethernet/sfc/rx_common.c
+index 2c09afac5beb4..36b46ddb67107 100644
+--- a/drivers/net/ethernet/sfc/rx_common.c
++++ b/drivers/net/ethernet/sfc/rx_common.c
+@@ -676,17 +676,17 @@ bool efx_filter_spec_equal(const struct efx_filter_spec *left,
+ 	     (EFX_FILTER_FLAG_RX | EFX_FILTER_FLAG_TX)))
+ 		return false;
+ 
+-	return memcmp(&left->outer_vid, &right->outer_vid,
++	return memcmp(&left->vport_id, &right->vport_id,
+ 		      sizeof(struct efx_filter_spec) -
+-		      offsetof(struct efx_filter_spec, outer_vid)) == 0;
++		      offsetof(struct efx_filter_spec, vport_id)) == 0;
+ }
+ 
+ u32 efx_filter_spec_hash(const struct efx_filter_spec *spec)
+ {
+-	BUILD_BUG_ON(offsetof(struct efx_filter_spec, outer_vid) & 3);
+-	return jhash2((const u32 *)&spec->outer_vid,
++	BUILD_BUG_ON(offsetof(struct efx_filter_spec, vport_id) & 3);
++	return jhash2((const u32 *)&spec->vport_id,
+ 		      (sizeof(struct efx_filter_spec) -
+-		       offsetof(struct efx_filter_spec, outer_vid)) / 4,
++		       offsetof(struct efx_filter_spec, vport_id)) / 4,
+ 		      0);
+ }
+ 
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index a0f338cf14247..367878493e704 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -977,7 +977,8 @@ struct net_device_context {
+ 	u32 vf_alloc;
+ 	/* Serial number of the VF to team with */
+ 	u32 vf_serial;
+-
++	/* completion variable to confirm vf association */
++	struct completion vf_add;
+ 	/* Is the current data path through the VF NIC? */
+ 	bool  data_path_is_vf;
+ 
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index 6a7ab930ef70d..d15da8287df32 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -1327,6 +1327,10 @@ static void netvsc_send_vf(struct net_device *ndev,
+ 
+ 	net_device_ctx->vf_alloc = nvmsg->msg.v4_msg.vf_assoc.allocated;
+ 	net_device_ctx->vf_serial = nvmsg->msg.v4_msg.vf_assoc.serial;
++
++	if (net_device_ctx->vf_alloc)
++		complete(&net_device_ctx->vf_add);
++
+ 	netdev_info(ndev, "VF slot %u %s\n",
+ 		    net_device_ctx->vf_serial,
+ 		    net_device_ctx->vf_alloc ? "added" : "removed");
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 18484370da0d4..f2020be43cfea 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2290,6 +2290,7 @@ static struct net_device *get_netvsc_byslot(const struct net_device *vf_netdev)
+ {
+ 	struct device *parent = vf_netdev->dev.parent;
+ 	struct net_device_context *ndev_ctx;
++	struct net_device *ndev;
+ 	struct pci_dev *pdev;
+ 	u32 serial;
+ 
+@@ -2316,6 +2317,18 @@ static struct net_device *get_netvsc_byslot(const struct net_device *vf_netdev)
+ 			return hv_get_drvdata(ndev_ctx->device_ctx);
+ 	}
+ 
++	/* Fallback path to check synthetic vf with
++	 * help of mac addr
++	 */
++	list_for_each_entry(ndev_ctx, &netvsc_dev_list, list) {
++		ndev = hv_get_drvdata(ndev_ctx->device_ctx);
++		if (ether_addr_equal(vf_netdev->perm_addr, ndev->perm_addr)) {
++			netdev_notice(vf_netdev,
++				      "falling back to mac addr based matching\n");
++			return ndev;
++		}
++	}
++
+ 	netdev_notice(vf_netdev,
+ 		      "no netdev found for vf serial:%u\n", serial);
+ 	return NULL;
+@@ -2406,6 +2419,11 @@ static int netvsc_vf_changed(struct net_device *vf_netdev)
+ 		return NOTIFY_OK;
+ 	net_device_ctx->data_path_is_vf = vf_is_up;
+ 
++	if (vf_is_up && !net_device_ctx->vf_alloc) {
++		netdev_info(ndev, "Waiting for the VF association from host\n");
++		wait_for_completion(&net_device_ctx->vf_add);
++	}
++
+ 	netvsc_switch_datapath(ndev, vf_is_up);
+ 	netdev_info(ndev, "Data path switched %s VF: %s\n",
+ 		    vf_is_up ? "to" : "from", vf_netdev->name);
+@@ -2429,6 +2447,7 @@ static int netvsc_unregister_vf(struct net_device *vf_netdev)
+ 
+ 	netvsc_vf_setxdp(vf_netdev, NULL);
+ 
++	reinit_completion(&net_device_ctx->vf_add);
+ 	netdev_rx_handler_unregister(vf_netdev);
+ 	netdev_upper_dev_unlink(vf_netdev, ndev);
+ 	RCU_INIT_POINTER(net_device_ctx->vf_netdev, NULL);
+@@ -2466,6 +2485,7 @@ static int netvsc_probe(struct hv_device *dev,
+ 
+ 	INIT_DELAYED_WORK(&net_device_ctx->dwork, netvsc_link_change);
+ 
++	init_completion(&net_device_ctx->vf_add);
+ 	spin_lock_init(&net_device_ctx->lock);
+ 	INIT_LIST_HEAD(&net_device_ctx->reconfig_events);
+ 	INIT_DELAYED_WORK(&net_device_ctx->vf_takeover, netvsc_vf_setup);
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index 3a8849716459a..db651649e0b80 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -268,8 +268,7 @@ static int dp83822_config_intr(struct phy_device *phydev)
+ 				DP83822_EEE_ERROR_CHANGE_INT_EN);
+ 
+ 		if (!dp83822->fx_enabled)
+-			misr_status |= DP83822_MDI_XOVER_INT_EN |
+-				       DP83822_ANEG_ERR_INT_EN |
++			misr_status |= DP83822_ANEG_ERR_INT_EN |
+ 				       DP83822_WOL_PKT_INT_EN;
+ 
+ 		err = phy_write(phydev, MII_DP83822_MISR2, misr_status);
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index f86acad0aad44..c8031e297faf4 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -757,6 +757,14 @@ static int dp83867_config_init(struct phy_device *phydev)
+ 		else
+ 			val &= ~DP83867_SGMII_TYPE;
+ 		phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_SGMIICTL, val);
++
++		/* This is a SW workaround for link instability if RX_CTRL is
++		 * not strapped to mode 3 or 4 in HW. This is required for SGMII
++		 * in addition to clearing bit 7, handled above.
++		 */
++		if (dp83867->rxctrl_strap_quirk)
++			phy_set_bits_mmd(phydev, DP83867_DEVADDR, DP83867_CFG4,
++					 BIT(8));
+ 	}
+ 
+ 	val = phy_read(phydev, DP83867_CFG3);
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index 43ddbe61dc58e..935cd296887f2 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -763,6 +763,13 @@ static const struct usb_device_id	products[] = {
+ },
+ #endif
+ 
++/* Lenovo ThinkPad OneLink+ Dock (based on Realtek RTL8153) */
++{
++	USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x3054, USB_CLASS_COMM,
++			USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
++	.driver_info = 0,
++},
++
+ /* ThinkPad USB-C Dock (based on Realtek RTL8153) */
+ {
+ 	USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x3062, USB_CLASS_COMM,
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index a526242a3e36d..f9a79d67d6d4f 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -6870,6 +6870,7 @@ static const struct usb_device_id rtl8152_table[] = {
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0927)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x304f)},
++	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x3054)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x3062)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x3069)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x3082)},
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index e9c13804760e7..3f106771d15be 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3232,8 +3232,12 @@ int nvme_init_identify(struct nvme_ctrl *ctrl)
+ 		return ret;
+ 
+ 	if (!ctrl->identified && !nvme_discovery_ctrl(ctrl)) {
++		/*
++		 * Do not return errors unless we are in a controller reset,
++		 * the controller works perfectly fine without hwmon.
++		 */
+ 		ret = nvme_hwmon_init(ctrl);
+-		if (ret < 0)
++		if (ret == -EINTR)
+ 			return ret;
+ 	}
+ 
+@@ -4485,6 +4489,7 @@ EXPORT_SYMBOL_GPL(nvme_start_ctrl);
+ 
+ void nvme_uninit_ctrl(struct nvme_ctrl *ctrl)
+ {
++	nvme_hwmon_exit(ctrl);
+ 	nvme_fault_inject_fini(&ctrl->fault_inject);
+ 	dev_pm_qos_hide_latency_tolerance(ctrl->device);
+ 	cdev_device_del(&ctrl->cdev, ctrl->device);
+diff --git a/drivers/nvme/host/hwmon.c b/drivers/nvme/host/hwmon.c
+index 552dbc04567bc..9e6e56c20ec99 100644
+--- a/drivers/nvme/host/hwmon.c
++++ b/drivers/nvme/host/hwmon.c
+@@ -12,7 +12,7 @@
+ 
+ struct nvme_hwmon_data {
+ 	struct nvme_ctrl *ctrl;
+-	struct nvme_smart_log log;
++	struct nvme_smart_log *log;
+ 	struct mutex read_lock;
+ };
+ 
+@@ -60,14 +60,14 @@ static int nvme_set_temp_thresh(struct nvme_ctrl *ctrl, int sensor, bool under,
+ static int nvme_hwmon_get_smart_log(struct nvme_hwmon_data *data)
+ {
+ 	return nvme_get_log(data->ctrl, NVME_NSID_ALL, NVME_LOG_SMART, 0,
+-			   NVME_CSI_NVM, &data->log, sizeof(data->log), 0);
++			   NVME_CSI_NVM, data->log, sizeof(*data->log), 0);
+ }
+ 
+ static int nvme_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
+ 			   u32 attr, int channel, long *val)
+ {
+ 	struct nvme_hwmon_data *data = dev_get_drvdata(dev);
+-	struct nvme_smart_log *log = &data->log;
++	struct nvme_smart_log *log = data->log;
+ 	int temp;
+ 	int err;
+ 
+@@ -163,7 +163,7 @@ static umode_t nvme_hwmon_is_visible(const void *_data,
+ 	case hwmon_temp_max:
+ 	case hwmon_temp_min:
+ 		if ((!channel && data->ctrl->wctemp) ||
+-		    (channel && data->log.temp_sensor[channel - 1])) {
++		    (channel && data->log->temp_sensor[channel - 1])) {
+ 			if (data->ctrl->quirks &
+ 			    NVME_QUIRK_NO_TEMP_THRESH_CHANGE)
+ 				return 0444;
+@@ -176,7 +176,7 @@ static umode_t nvme_hwmon_is_visible(const void *_data,
+ 		break;
+ 	case hwmon_temp_input:
+ 	case hwmon_temp_label:
+-		if (!channel || data->log.temp_sensor[channel - 1])
++		if (!channel || data->log->temp_sensor[channel - 1])
+ 			return 0444;
+ 		break;
+ 	default:
+@@ -223,33 +223,57 @@ static const struct hwmon_chip_info nvme_hwmon_chip_info = {
+ 
+ int nvme_hwmon_init(struct nvme_ctrl *ctrl)
+ {
+-	struct device *dev = ctrl->dev;
++	struct device *dev = ctrl->device;
+ 	struct nvme_hwmon_data *data;
+ 	struct device *hwmon;
+ 	int err;
+ 
+-	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
++	data = kzalloc(sizeof(*data), GFP_KERNEL);
+ 	if (!data)
+-		return 0;
++		return -ENOMEM;
++
++	data->log = kzalloc(sizeof(*data->log), GFP_KERNEL);
++	if (!data->log) {
++		err = -ENOMEM;
++		goto err_free_data;
++	}
+ 
+ 	data->ctrl = ctrl;
+ 	mutex_init(&data->read_lock);
+ 
+ 	err = nvme_hwmon_get_smart_log(data);
+ 	if (err) {
+-		dev_warn(ctrl->device,
+-			"Failed to read smart log (error %d)\n", err);
+-		devm_kfree(dev, data);
+-		return err;
++		dev_warn(dev, "Failed to read smart log (error %d)\n", err);
++		goto err_free_log;
+ 	}
+ 
+-	hwmon = devm_hwmon_device_register_with_info(dev, "nvme", data,
+-						     &nvme_hwmon_chip_info,
+-						     NULL);
++	hwmon = hwmon_device_register_with_info(dev, "nvme",
++						data, &nvme_hwmon_chip_info,
++						NULL);
+ 	if (IS_ERR(hwmon)) {
+ 		dev_warn(dev, "Failed to instantiate hwmon device\n");
+-		devm_kfree(dev, data);
++		err = PTR_ERR(hwmon);
++		goto err_free_log;
+ 	}
+-
++	ctrl->hwmon_device = hwmon;
+ 	return 0;
++
++err_free_log:
++	kfree(data->log);
++err_free_data:
++	kfree(data);
++	return err;
++}
++
++void nvme_hwmon_exit(struct nvme_ctrl *ctrl)
++{
++	if (ctrl->hwmon_device) {
++		struct nvme_hwmon_data *data =
++			dev_get_drvdata(ctrl->hwmon_device);
++
++		hwmon_device_unregister(ctrl->hwmon_device);
++		ctrl->hwmon_device = NULL;
++		kfree(data->log);
++		kfree(data);
++	}
+ }
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 58cf9e39d613e..abae7ef2ac511 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -257,6 +257,9 @@ struct nvme_ctrl {
+ 	struct rw_semaphore namespaces_rwsem;
+ 	struct device ctrl_device;
+ 	struct device *device;	/* char device */
++#ifdef CONFIG_NVME_HWMON
++	struct device *hwmon_device;
++#endif
+ 	struct cdev cdev;
+ 	struct work_struct reset_work;
+ 	struct work_struct delete_work;
+@@ -876,11 +879,16 @@ static inline struct nvme_ns *nvme_get_ns_from_dev(struct device *dev)
+ 
+ #ifdef CONFIG_NVME_HWMON
+ int nvme_hwmon_init(struct nvme_ctrl *ctrl);
++void nvme_hwmon_exit(struct nvme_ctrl *ctrl);
+ #else
+ static inline int nvme_hwmon_init(struct nvme_ctrl *ctrl)
+ {
+ 	return 0;
+ }
++
++static inline void nvme_hwmon_exit(struct nvme_ctrl *ctrl)
++{
++}
+ #endif
+ 
+ u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
+index 40ef379c28ab0..9c286b2a19001 100644
+--- a/drivers/xen/gntdev-common.h
++++ b/drivers/xen/gntdev-common.h
+@@ -44,9 +44,10 @@ struct gntdev_unmap_notify {
+ };
+ 
+ struct gntdev_grant_map {
++	atomic_t in_use;
+ 	struct mmu_interval_notifier notifier;
++	bool notifier_init;
+ 	struct list_head next;
+-	struct vm_area_struct *vma;
+ 	int index;
+ 	int count;
+ 	int flags;
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 54fee4087bf10..ff195b5717630 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -276,6 +276,9 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
+ 		 */
+ 	}
+ 
++	if (use_ptemod && map->notifier_init)
++		mmu_interval_notifier_remove(&map->notifier);
++
+ 	if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
+ 		notify_remote_via_evtchn(map->notify.event);
+ 		evtchn_put(map->notify.event);
+@@ -288,21 +291,14 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
+ static int find_grant_ptes(pte_t *pte, unsigned long addr, void *data)
+ {
+ 	struct gntdev_grant_map *map = data;
+-	unsigned int pgnr = (addr - map->vma->vm_start) >> PAGE_SHIFT;
+-	int flags = map->flags | GNTMAP_application_map | GNTMAP_contains_pte;
++	unsigned int pgnr = (addr - map->pages_vm_start) >> PAGE_SHIFT;
++	int flags = map->flags | GNTMAP_application_map | GNTMAP_contains_pte |
++		    (1 << _GNTMAP_guest_avail0);
+ 	u64 pte_maddr;
+ 
+ 	BUG_ON(pgnr >= map->count);
+ 	pte_maddr = arbitrary_virt_to_machine(pte).maddr;
+ 
+-	/*
+-	 * Set the PTE as special to force get_user_pages_fast() fall
+-	 * back to the slow path.  If this is not supported as part of
+-	 * the grant map, it will be done afterwards.
+-	 */
+-	if (xen_feature(XENFEAT_gnttab_map_avail_bits))
+-		flags |= (1 << _GNTMAP_guest_avail0);
+-
+ 	gnttab_set_map_op(&map->map_ops[pgnr], pte_maddr, flags,
+ 			  map->grants[pgnr].ref,
+ 			  map->grants[pgnr].domid);
+@@ -311,14 +307,6 @@ static int find_grant_ptes(pte_t *pte, unsigned long addr, void *data)
+ 	return 0;
+ }
+ 
+-#ifdef CONFIG_X86
+-static int set_grant_ptes_as_special(pte_t *pte, unsigned long addr, void *data)
+-{
+-	set_pte_at(current->mm, addr, pte, pte_mkspecial(*pte));
+-	return 0;
+-}
+-#endif
+-
+ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
+ {
+ 	size_t alloced = 0;
+@@ -493,11 +481,7 @@ static void gntdev_vma_close(struct vm_area_struct *vma)
+ 	struct gntdev_priv *priv = file->private_data;
+ 
+ 	pr_debug("gntdev_vma_close %p\n", vma);
+-	if (use_ptemod) {
+-		WARN_ON(map->vma != vma);
+-		mmu_interval_notifier_remove(&map->notifier);
+-		map->vma = NULL;
+-	}
++
+ 	vma->vm_private_data = NULL;
+ 	gntdev_put_map(priv, map);
+ }
+@@ -525,29 +509,30 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
+ 	struct gntdev_grant_map *map =
+ 		container_of(mn, struct gntdev_grant_map, notifier);
+ 	unsigned long mstart, mend;
++	unsigned long map_start, map_end;
+ 
+ 	if (!mmu_notifier_range_blockable(range))
+ 		return false;
+ 
++	map_start = map->pages_vm_start;
++	map_end = map->pages_vm_start + (map->count << PAGE_SHIFT);
++
+ 	/*
+ 	 * If the VMA is split or otherwise changed the notifier is not
+ 	 * updated, but we don't want to process VA's outside the modified
+ 	 * VMA. FIXME: It would be much more understandable to just prevent
+ 	 * modifying the VMA in the first place.
+ 	 */
+-	if (map->vma->vm_start >= range->end ||
+-	    map->vma->vm_end <= range->start)
++	if (map_start >= range->end || map_end <= range->start)
+ 		return true;
+ 
+-	mstart = max(range->start, map->vma->vm_start);
+-	mend = min(range->end, map->vma->vm_end);
++	mstart = max(range->start, map_start);
++	mend = min(range->end, map_end);
+ 	pr_debug("map %d+%d (%lx %lx), range %lx %lx, mrange %lx %lx\n",
+-			map->index, map->count,
+-			map->vma->vm_start, map->vma->vm_end,
+-			range->start, range->end, mstart, mend);
+-	unmap_grant_pages(map,
+-				(mstart - map->vma->vm_start) >> PAGE_SHIFT,
+-				(mend - mstart) >> PAGE_SHIFT);
++		 map->index, map->count, map_start, map_end,
++		 range->start, range->end, mstart, mend);
++	unmap_grant_pages(map, (mstart - map_start) >> PAGE_SHIFT,
++			  (mend - mstart) >> PAGE_SHIFT);
+ 
+ 	return true;
+ }
+@@ -1027,18 +1012,15 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+ 		return -EINVAL;
+ 
+ 	pr_debug("map %d+%d at %lx (pgoff %lx)\n",
+-			index, count, vma->vm_start, vma->vm_pgoff);
++		 index, count, vma->vm_start, vma->vm_pgoff);
+ 
+ 	mutex_lock(&priv->lock);
+ 	map = gntdev_find_map_index(priv, index, count);
+ 	if (!map)
+ 		goto unlock_out;
+-	if (use_ptemod && map->vma)
++	if (!atomic_add_unless(&map->in_use, 1, 1))
+ 		goto unlock_out;
+-	if (atomic_read(&map->live_grants)) {
+-		err = -EAGAIN;
+-		goto unlock_out;
+-	}
++
+ 	refcount_inc(&map->users);
+ 
+ 	vma->vm_ops = &gntdev_vmops;
+@@ -1059,15 +1041,16 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+ 			map->flags |= GNTMAP_readonly;
+ 	}
+ 
++	map->pages_vm_start = vma->vm_start;
++
+ 	if (use_ptemod) {
+-		map->vma = vma;
+ 		err = mmu_interval_notifier_insert_locked(
+ 			&map->notifier, vma->vm_mm, vma->vm_start,
+ 			vma->vm_end - vma->vm_start, &gntdev_mmu_ops);
+-		if (err) {
+-			map->vma = NULL;
++		if (err)
+ 			goto out_unlock_put;
+-		}
++
++		map->notifier_init = true;
+ 	}
+ 	mutex_unlock(&priv->lock);
+ 
+@@ -1084,7 +1067,6 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+ 		 */
+ 		mmu_interval_read_begin(&map->notifier);
+ 
+-		map->pages_vm_start = vma->vm_start;
+ 		err = apply_to_page_range(vma->vm_mm, vma->vm_start,
+ 					  vma->vm_end - vma->vm_start,
+ 					  find_grant_ptes, map);
+@@ -1102,23 +1084,6 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+ 		err = vm_map_pages_zero(vma, map->pages, map->count);
+ 		if (err)
+ 			goto out_put_map;
+-	} else {
+-#ifdef CONFIG_X86
+-		/*
+-		 * If the PTEs were not made special by the grant map
+-		 * hypercall, do so here.
+-		 *
+-		 * This is racy since the mapping is already visible
+-		 * to userspace but userspace should be well-behaved
+-		 * enough to not touch it until the mmap() call
+-		 * returns.
+-		 */
+-		if (!xen_feature(XENFEAT_gnttab_map_avail_bits)) {
+-			apply_to_page_range(vma->vm_mm, vma->vm_start,
+-					    vma->vm_end - vma->vm_start,
+-					    set_grant_ptes_as_special, NULL);
+-		}
+-#endif
+ 	}
+ 
+ 	return 0;
+@@ -1130,13 +1095,8 @@ unlock_out:
+ out_unlock_put:
+ 	mutex_unlock(&priv->lock);
+ out_put_map:
+-	if (use_ptemod) {
++	if (use_ptemod)
+ 		unmap_grant_pages(map, 0, map->count);
+-		if (map->vma) {
+-			mmu_interval_notifier_remove(&map->notifier);
+-			map->vma = NULL;
+-		}
+-	}
+ 	gntdev_put_map(priv, map);
+ 	return err;
+ }
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index baff31a147e7d..92cb16c0e5ee1 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -137,6 +137,7 @@ struct share_check {
+ 	u64 root_objectid;
+ 	u64 inum;
+ 	int share_count;
++	bool have_delayed_delete_refs;
+ };
+ 
+ static inline int extent_is_shared(struct share_check *sc)
+@@ -817,16 +818,11 @@ static int add_delayed_refs(const struct btrfs_fs_info *fs_info,
+ 			    struct preftrees *preftrees, struct share_check *sc)
+ {
+ 	struct btrfs_delayed_ref_node *node;
+-	struct btrfs_delayed_extent_op *extent_op = head->extent_op;
+ 	struct btrfs_key key;
+-	struct btrfs_key tmp_op_key;
+ 	struct rb_node *n;
+ 	int count;
+ 	int ret = 0;
+ 
+-	if (extent_op && extent_op->update_key)
+-		btrfs_disk_key_to_cpu(&tmp_op_key, &extent_op->key);
+-
+ 	spin_lock(&head->lock);
+ 	for (n = rb_first_cached(&head->ref_tree); n; n = rb_next(n)) {
+ 		node = rb_entry(n, struct btrfs_delayed_ref_node,
+@@ -852,10 +848,16 @@ static int add_delayed_refs(const struct btrfs_fs_info *fs_info,
+ 		case BTRFS_TREE_BLOCK_REF_KEY: {
+ 			/* NORMAL INDIRECT METADATA backref */
+ 			struct btrfs_delayed_tree_ref *ref;
++			struct btrfs_key *key_ptr = NULL;
++
++			if (head->extent_op && head->extent_op->update_key) {
++				btrfs_disk_key_to_cpu(&key, &head->extent_op->key);
++				key_ptr = &key;
++			}
+ 
+ 			ref = btrfs_delayed_node_to_tree_ref(node);
+ 			ret = add_indirect_ref(fs_info, preftrees, ref->root,
+-					       &tmp_op_key, ref->level + 1,
++					       key_ptr, ref->level + 1,
+ 					       node->bytenr, count, sc,
+ 					       GFP_ATOMIC);
+ 			break;
+@@ -881,13 +883,22 @@ static int add_delayed_refs(const struct btrfs_fs_info *fs_info,
+ 			key.offset = ref->offset;
+ 
+ 			/*
+-			 * Found a inum that doesn't match our known inum, we
+-			 * know it's shared.
++			 * If we have a share check context and a reference for
++			 * another inode, we can't exit immediately. This is
++			 * because even if this is a BTRFS_ADD_DELAYED_REF
++			 * reference we may find next a BTRFS_DROP_DELAYED_REF
++			 * which cancels out this ADD reference.
++			 *
++			 * If this is a DROP reference and there was no previous
++			 * ADD reference, then we need to signal that when we
++			 * process references from the extent tree (through
++			 * add_inline_refs() and add_keyed_refs()), we should
++			 * not exit early if we find a reference for another
++			 * inode, because one of the delayed DROP references
++			 * may cancel that reference in the extent tree.
+ 			 */
+-			if (sc && sc->inum && ref->objectid != sc->inum) {
+-				ret = BACKREF_FOUND_SHARED;
+-				goto out;
+-			}
++			if (sc && count < 0)
++				sc->have_delayed_delete_refs = true;
+ 
+ 			ret = add_indirect_ref(fs_info, preftrees, ref->root,
+ 					       &key, 0, node->bytenr, count, sc,
+@@ -917,7 +928,7 @@ static int add_delayed_refs(const struct btrfs_fs_info *fs_info,
+ 	}
+ 	if (!ret)
+ 		ret = extent_is_shared(sc);
+-out:
++
+ 	spin_unlock(&head->lock);
+ 	return ret;
+ }
+@@ -1020,7 +1031,8 @@ static int add_inline_refs(const struct btrfs_fs_info *fs_info,
+ 			key.type = BTRFS_EXTENT_DATA_KEY;
+ 			key.offset = btrfs_extent_data_ref_offset(leaf, dref);
+ 
+-			if (sc && sc->inum && key.objectid != sc->inum) {
++			if (sc && sc->inum && key.objectid != sc->inum &&
++			    !sc->have_delayed_delete_refs) {
+ 				ret = BACKREF_FOUND_SHARED;
+ 				break;
+ 			}
+@@ -1030,6 +1042,7 @@ static int add_inline_refs(const struct btrfs_fs_info *fs_info,
+ 			ret = add_indirect_ref(fs_info, preftrees, root,
+ 					       &key, 0, bytenr, count,
+ 					       sc, GFP_NOFS);
++
+ 			break;
+ 		}
+ 		default:
+@@ -1119,7 +1132,8 @@ static int add_keyed_refs(struct btrfs_fs_info *fs_info,
+ 			key.type = BTRFS_EXTENT_DATA_KEY;
+ 			key.offset = btrfs_extent_data_ref_offset(leaf, dref);
+ 
+-			if (sc && sc->inum && key.objectid != sc->inum) {
++			if (sc && sc->inum && key.objectid != sc->inum &&
++			    !sc->have_delayed_delete_refs) {
+ 				ret = BACKREF_FOUND_SHARED;
+ 				break;
+ 			}
+@@ -1542,6 +1556,7 @@ int btrfs_check_shared(struct btrfs_root *root, u64 inum, u64 bytenr,
+ 		.root_objectid = root->root_key.objectid,
+ 		.inum = inum,
+ 		.share_count = 0,
++		.have_delayed_delete_refs = false,
+ 	};
+ 
+ 	ulist_init(roots);
+@@ -1576,6 +1591,7 @@ int btrfs_check_shared(struct btrfs_root *root, u64 inum, u64 bytenr,
+ 			break;
+ 		bytenr = node->val;
+ 		shared.share_count = 0;
++		shared.have_delayed_delete_refs = false;
+ 		cond_resched();
+ 	}
+ 
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index bc957e6ca48b9..f442ef8b65dad 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -1221,8 +1221,11 @@ static ssize_t cifs_copy_file_range(struct file *src_file, loff_t off,
+ 	ssize_t rc;
+ 	struct cifsFileInfo *cfile = dst_file->private_data;
+ 
+-	if (cfile->swapfile)
+-		return -EOPNOTSUPP;
++	if (cfile->swapfile) {
++		rc = -EOPNOTSUPP;
++		free_xid(xid);
++		return rc;
++	}
+ 
+ 	rc = cifs_file_copychunk_range(xid, src_file, off, dst_file, destoff,
+ 					len, flags);
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index a648146e49cfa..144064dc0d38a 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -1735,11 +1735,13 @@ int cifs_flock(struct file *file, int cmd, struct file_lock *fl)
+ 	struct cifsFileInfo *cfile;
+ 	__u32 type;
+ 
+-	rc = -EACCES;
+ 	xid = get_xid();
+ 
+-	if (!(fl->fl_flags & FL_FLOCK))
+-		return -ENOLCK;
++	if (!(fl->fl_flags & FL_FLOCK)) {
++		rc = -ENOLCK;
++		free_xid(xid);
++		return rc;
++	}
+ 
+ 	cfile = (struct cifsFileInfo *)file->private_data;
+ 	tcon = tlink_tcon(cfile->tlink);
+@@ -1758,8 +1760,9 @@ int cifs_flock(struct file *file, int cmd, struct file_lock *fl)
+ 		 * if no lock or unlock then nothing to do since we do not
+ 		 * know what it is
+ 		 */
++		rc = -EOPNOTSUPP;
+ 		free_xid(xid);
+-		return -EOPNOTSUPP;
++		return rc;
+ 	}
+ 
+ 	rc = cifs_setlk(file, fl, type, wait_flag, posix_lck, lock, unlock,
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index d58c5ffeca0d9..cf6fd138d8d5c 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -306,6 +306,7 @@ out:
+ 		cifs_put_tcp_session(chan->server, 0);
+ 	unload_nls(vol.local_nls);
+ 
++	free_xid(xid);
+ 	return rc;
+ }
+ 
+diff --git a/fs/fcntl.c b/fs/fcntl.c
+index 71b43538fa44c..fcf34f83bf6a8 100644
+--- a/fs/fcntl.c
++++ b/fs/fcntl.c
+@@ -148,12 +148,17 @@ void f_delown(struct file *filp)
+ 
+ pid_t f_getown(struct file *filp)
+ {
+-	pid_t pid;
+-	read_lock(&filp->f_owner.lock);
+-	pid = pid_vnr(filp->f_owner.pid);
+-	if (filp->f_owner.pid_type == PIDTYPE_PGID)
+-		pid = -pid;
+-	read_unlock(&filp->f_owner.lock);
++	pid_t pid = 0;
++
++	read_lock_irq(&filp->f_owner.lock);
++	rcu_read_lock();
++	if (pid_task(filp->f_owner.pid, filp->f_owner.pid_type)) {
++		pid = pid_vnr(filp->f_owner.pid);
++		if (filp->f_owner.pid_type == PIDTYPE_PGID)
++			pid = -pid;
++	}
++	rcu_read_unlock();
++	read_unlock_irq(&filp->f_owner.lock);
+ 	return pid;
+ }
+ 
+@@ -200,11 +205,14 @@ static int f_setown_ex(struct file *filp, unsigned long arg)
+ static int f_getown_ex(struct file *filp, unsigned long arg)
+ {
+ 	struct f_owner_ex __user *owner_p = (void __user *)arg;
+-	struct f_owner_ex owner;
++	struct f_owner_ex owner = {};
+ 	int ret = 0;
+ 
+-	read_lock(&filp->f_owner.lock);
+-	owner.pid = pid_vnr(filp->f_owner.pid);
++	read_lock_irq(&filp->f_owner.lock);
++	rcu_read_lock();
++	if (pid_task(filp->f_owner.pid, filp->f_owner.pid_type))
++		owner.pid = pid_vnr(filp->f_owner.pid);
++	rcu_read_unlock();
+ 	switch (filp->f_owner.pid_type) {
+ 	case PIDTYPE_PID:
+ 		owner.type = F_OWNER_TID;
+@@ -223,7 +231,7 @@ static int f_getown_ex(struct file *filp, unsigned long arg)
+ 		ret = -EINVAL;
+ 		break;
+ 	}
+-	read_unlock(&filp->f_owner.lock);
++	read_unlock_irq(&filp->f_owner.lock);
+ 
+ 	if (!ret) {
+ 		ret = copy_to_user(owner_p, &owner, sizeof(owner));
+@@ -241,10 +249,10 @@ static int f_getowner_uids(struct file *filp, unsigned long arg)
+ 	uid_t src[2];
+ 	int err;
+ 
+-	read_lock(&filp->f_owner.lock);
++	read_lock_irq(&filp->f_owner.lock);
+ 	src[0] = from_kuid(user_ns, filp->f_owner.uid);
+ 	src[1] = from_kuid(user_ns, filp->f_owner.euid);
+-	read_unlock(&filp->f_owner.lock);
++	read_unlock_irq(&filp->f_owner.lock);
+ 
+ 	err  = put_user(src[0], &dst[0]);
+ 	err |= put_user(src[1], &dst[1]);
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index c46bf7f581a14..856474b0a1ae7 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -231,6 +231,7 @@ static int ocfs2_mknod(struct inode *dir,
+ 	handle_t *handle = NULL;
+ 	struct ocfs2_super *osb;
+ 	struct ocfs2_dinode *dirfe;
++	struct ocfs2_dinode *fe = NULL;
+ 	struct buffer_head *new_fe_bh = NULL;
+ 	struct inode *inode = NULL;
+ 	struct ocfs2_alloc_context *inode_ac = NULL;
+@@ -381,6 +382,7 @@ static int ocfs2_mknod(struct inode *dir,
+ 		goto leave;
+ 	}
+ 
++	fe = (struct ocfs2_dinode *) new_fe_bh->b_data;
+ 	if (S_ISDIR(mode)) {
+ 		status = ocfs2_fill_new_dir(osb, handle, dir, inode,
+ 					    new_fe_bh, data_ac, meta_ac);
+@@ -453,8 +455,11 @@ roll_back:
+ leave:
+ 	if (status < 0 && did_quota_inode)
+ 		dquot_free_inode(inode);
+-	if (handle)
++	if (handle) {
++		if (status < 0 && fe)
++			ocfs2_set_links_count(fe, 0);
+ 		ocfs2_commit_trans(osb, handle);
++	}
+ 
+ 	ocfs2_inode_unlock(dir, 1);
+ 	if (did_block_signals)
+@@ -631,18 +636,9 @@ static int ocfs2_mknod_locked(struct ocfs2_super *osb,
+ 		return status;
+ 	}
+ 
+-	status = __ocfs2_mknod_locked(dir, inode, dev, new_fe_bh,
++	return __ocfs2_mknod_locked(dir, inode, dev, new_fe_bh,
+ 				    parent_fe_bh, handle, inode_ac,
+ 				    fe_blkno, suballoc_loc, suballoc_bit);
+-	if (status < 0) {
+-		u64 bg_blkno = ocfs2_which_suballoc_group(fe_blkno, suballoc_bit);
+-		int tmp = ocfs2_free_suballoc_bits(handle, inode_ac->ac_inode,
+-				inode_ac->ac_bh, suballoc_bit, bg_blkno, 1);
+-		if (tmp)
+-			mlog_errno(tmp);
+-	}
+-
+-	return status;
+ }
+ 
+ static int ocfs2_mkdir(struct inode *dir,
+@@ -2023,8 +2019,11 @@ bail:
+ 					ocfs2_clusters_to_bytes(osb->sb, 1));
+ 	if (status < 0 && did_quota_inode)
+ 		dquot_free_inode(inode);
+-	if (handle)
++	if (handle) {
++		if (status < 0 && fe)
++			ocfs2_set_links_count(fe, 0);
+ 		ocfs2_commit_trans(osb, handle);
++	}
+ 
+ 	ocfs2_inode_unlock(dir, 1);
+ 	if (did_block_signals)
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index ef18f0d71b11b..8b75a04836b63 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -951,7 +951,7 @@ static int show_smaps_rollup(struct seq_file *m, void *v)
+ 		vma = vma->vm_next;
+ 	}
+ 
+-	show_vma_header_prefix(m, priv->mm->mmap->vm_start,
++	show_vma_header_prefix(m, priv->mm->mmap ? priv->mm->mmap->vm_start : 0,
+ 			       last_vma_end, 0, 0, 0, 0);
+ 	seq_pad(m, ' ');
+ 	seq_puts(m, "[rollup]\n");
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 896e563e2c181..9cb0a3d7874f2 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -911,6 +911,8 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
+ 			    struct kvm_enable_cap *cap);
+ long kvm_arch_vm_ioctl(struct file *filp,
+ 		       unsigned int ioctl, unsigned long arg);
++long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
++			      unsigned long arg);
+ 
+ int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu);
+ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu);
+diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
+index 42df06c6b19ce..ef870d1f4f5f7 100644
+--- a/include/linux/mmc/card.h
++++ b/include/linux/mmc/card.h
+@@ -270,6 +270,7 @@ struct mmc_card {
+ #define MMC_QUIRK_BROKEN_IRQ_POLLING	(1<<11)	/* Polling SDIO_CCCR_INTx could create a fake interrupt */
+ #define MMC_QUIRK_TRIM_BROKEN	(1<<12)		/* Skip trim */
+ #define MMC_QUIRK_BROKEN_HPI	(1<<13)		/* Disable broken HPI support */
++#define MMC_QUIRK_BROKEN_SD_DISCARD	(1<<14)	/* Disable broken SD discard support */
+ 
+ 	bool			reenable_cmdq;	/* Re-enable Command Queue */
+ 
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index bed2387af456d..e7e8c318925de 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -1178,7 +1178,6 @@ static inline void __qdisc_reset_queue(struct qdisc_skb_head *qh)
+ static inline void qdisc_reset_queue(struct Qdisc *sch)
+ {
+ 	__qdisc_reset_queue(&sch->q);
+-	sch->qstats.backlog = 0;
+ }
+ 
+ static inline struct Qdisc *qdisc_replace(struct Qdisc *sch, struct Qdisc *new,
+diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
+index 505f1e18e9bf9..3eac185ae2e8a 100644
+--- a/include/net/sock_reuseport.h
++++ b/include/net/sock_reuseport.h
+@@ -38,21 +38,20 @@ extern struct sock *reuseport_select_sock(struct sock *sk,
+ extern int reuseport_attach_prog(struct sock *sk, struct bpf_prog *prog);
+ extern int reuseport_detach_prog(struct sock *sk);
+ 
+-static inline bool reuseport_has_conns(struct sock *sk, bool set)
++static inline bool reuseport_has_conns(struct sock *sk)
+ {
+ 	struct sock_reuseport *reuse;
+ 	bool ret = false;
+ 
+ 	rcu_read_lock();
+ 	reuse = rcu_dereference(sk->sk_reuseport_cb);
+-	if (reuse) {
+-		if (set)
+-			reuse->has_conns = 1;
+-		ret = reuse->has_conns;
+-	}
++	if (reuse && reuse->has_conns)
++		ret = true;
+ 	rcu_read_unlock();
+ 
+ 	return ret;
+ }
+ 
++void reuseport_has_conns_set(struct sock *sk);
++
+ #endif  /* _SOCK_REUSEPORT_H */
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index a5245362ce7a8..b7cb9147f0c59 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6008,12 +6008,12 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ 	if (tr->current_trace->reset)
+ 		tr->current_trace->reset(tr);
+ 
++#ifdef CONFIG_TRACER_MAX_TRACE
++	had_max_tr = tr->current_trace->use_max_tr;
++
+ 	/* Current trace needs to be nop_trace before synchronize_rcu */
+ 	tr->current_trace = &nop_trace;
+ 
+-#ifdef CONFIG_TRACER_MAX_TRACE
+-	had_max_tr = tr->allocated_snapshot;
+-
+ 	if (had_max_tr && !t->use_max_tr) {
+ 		/*
+ 		 * We need to make sure that the update_max_tr sees that
+@@ -6025,14 +6025,14 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ 		synchronize_rcu();
+ 		free_snapshot(tr);
+ 	}
+-#endif
+ 
+-#ifdef CONFIG_TRACER_MAX_TRACE
+-	if (t->use_max_tr && !had_max_tr) {
++	if (t->use_max_tr && !tr->allocated_snapshot) {
+ 		ret = tracing_alloc_snapshot_instance(tr);
+ 		if (ret < 0)
+ 			goto out;
+ 	}
++#else
++	tr->current_trace = &nop_trace;
+ #endif
+ 
+ 	if (t->init) {
+diff --git a/net/atm/mpoa_proc.c b/net/atm/mpoa_proc.c
+index 829db9eba0cb9..aaf64b9539150 100644
+--- a/net/atm/mpoa_proc.c
++++ b/net/atm/mpoa_proc.c
+@@ -219,11 +219,12 @@ static ssize_t proc_mpc_write(struct file *file, const char __user *buff,
+ 	if (!page)
+ 		return -ENOMEM;
+ 
+-	for (p = page, len = 0; len < nbytes; p++, len++) {
++	for (p = page, len = 0; len < nbytes; p++) {
+ 		if (get_user(*p, buff++)) {
+ 			free_page((unsigned long)page);
+ 			return -EFAULT;
+ 		}
++		len += 1;
+ 		if (*p == '\0' || *p == '\n')
+ 			break;
+ 	}
+diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
+index b065f0a103ed0..49f9c2c4ffd5a 100644
+--- a/net/core/sock_reuseport.c
++++ b/net/core/sock_reuseport.c
+@@ -18,6 +18,22 @@ DEFINE_SPINLOCK(reuseport_lock);
+ 
+ static DEFINE_IDA(reuseport_ida);
+ 
++void reuseport_has_conns_set(struct sock *sk)
++{
++	struct sock_reuseport *reuse;
++
++	if (!rcu_access_pointer(sk->sk_reuseport_cb))
++		return;
++
++	spin_lock_bh(&reuseport_lock);
++	reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
++					  lockdep_is_held(&reuseport_lock));
++	if (likely(reuse))
++		reuse->has_conns = 1;
++	spin_unlock_bh(&reuseport_lock);
++}
++EXPORT_SYMBOL(reuseport_has_conns_set);
++
+ static struct sock_reuseport *__reuseport_alloc(unsigned int max_socks)
+ {
+ 	unsigned int size = sizeof(struct sock_reuseport) +
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index baf4765be6d78..908324b46328f 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -108,15 +108,15 @@ struct sk_buff *hsr_get_untagged_frame(struct hsr_frame_info *frame,
+ 				       struct hsr_port *port)
+ {
+ 	if (!frame->skb_std) {
+-		if (frame->skb_hsr) {
++		if (frame->skb_hsr)
+ 			frame->skb_std =
+ 				create_stripped_skb_hsr(frame->skb_hsr, frame);
+-		} else {
+-			/* Unexpected */
+-			WARN_ONCE(1, "%s:%d: Unexpected frame received (port_src %s)\n",
+-				  __FILE__, __LINE__, port->dev->name);
++		else
++			netdev_warn_once(port->dev,
++					 "Unexpected frame received in hsr_get_untagged_frame()\n");
++
++		if (!frame->skb_std)
+ 			return NULL;
+-		}
+ 	}
+ 
+ 	return skb_clone(frame->skb_std, GFP_ATOMIC);
+diff --git a/net/ipv4/datagram.c b/net/ipv4/datagram.c
+index 4a8550c49202d..112c6e892d305 100644
+--- a/net/ipv4/datagram.c
++++ b/net/ipv4/datagram.c
+@@ -70,7 +70,7 @@ int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len
+ 	}
+ 	inet->inet_daddr = fl4->daddr;
+ 	inet->inet_dport = usin->sin_port;
+-	reuseport_has_conns(sk, true);
++	reuseport_has_conns_set(sk);
+ 	sk->sk_state = TCP_ESTABLISHED;
+ 	sk_set_txhash(sk);
+ 	inet->inet_id = prandom_u32();
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 4446aa8237ff0..b093daaa3deb9 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -446,7 +446,7 @@ static struct sock *udp4_lib_lookup2(struct net *net,
+ 			result = lookup_reuseport(net, sk, skb,
+ 						  saddr, sport, daddr, hnum);
+ 			/* Fall back to scoring if group has connections */
+-			if (result && !reuseport_has_conns(sk, false))
++			if (result && !reuseport_has_conns(sk))
+ 				return result;
+ 
+ 			result = result ? : sk;
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index 206f66310a88d..f4559e5bc84bf 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -256,7 +256,7 @@ ipv4_connected:
+ 		goto out;
+ 	}
+ 
+-	reuseport_has_conns(sk, true);
++	reuseport_has_conns_set(sk);
+ 	sk->sk_state = TCP_ESTABLISHED;
+ 	sk_set_txhash(sk);
+ out:
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 9b504bf492144..514e6a55959fe 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -179,7 +179,7 @@ static struct sock *udp6_lib_lookup2(struct net *net,
+ 			result = lookup_reuseport(net, sk, skb,
+ 						  saddr, sport, daddr, hnum);
+ 			/* Fall back to scoring if group has connections */
+-			if (result && !reuseport_has_conns(sk, false))
++			if (result && !reuseport_has_conns(sk))
+ 				return result;
+ 
+ 			result = result ? : sk;
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 6e18aa4177828..d8ffe41143856 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1081,12 +1081,13 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ 
+ skip:
+ 		if (!ingress) {
+-			notify_and_destroy(net, skb, n, classid,
+-					   rtnl_dereference(dev->qdisc), new);
++			old = rtnl_dereference(dev->qdisc);
+ 			if (new && !new->ops->attach)
+ 				qdisc_refcount_inc(new);
+ 			rcu_assign_pointer(dev->qdisc, new ? : &noop_qdisc);
+ 
++			notify_and_destroy(net, skb, n, classid, old, new);
++
+ 			if (new && new->ops->attach)
+ 				new->ops->attach(new);
+ 		} else {
+diff --git a/net/sched/sch_atm.c b/net/sched/sch_atm.c
+index 1c281cc81f577..794c7377cd7e9 100644
+--- a/net/sched/sch_atm.c
++++ b/net/sched/sch_atm.c
+@@ -575,7 +575,6 @@ static void atm_tc_reset(struct Qdisc *sch)
+ 	pr_debug("atm_tc_reset(sch %p,[qdisc %p])\n", sch, p);
+ 	list_for_each_entry(flow, &p->flows, list)
+ 		qdisc_reset(flow->q);
+-	sch->q.qlen = 0;
+ }
+ 
+ static void atm_tc_destroy(struct Qdisc *sch)
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index c580139fcedec..5dc7a3c310c9d 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -2224,8 +2224,12 @@ retry:
+ 
+ static void cake_reset(struct Qdisc *sch)
+ {
++	struct cake_sched_data *q = qdisc_priv(sch);
+ 	u32 c;
+ 
++	if (!q->tins)
++		return;
++
+ 	for (c = 0; c < CAKE_MAX_TINS; c++)
+ 		cake_clear_tin(sch, c);
+ }
+diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
+index 4a78fcf5d4f98..9a3dff02b7a2b 100644
+--- a/net/sched/sch_cbq.c
++++ b/net/sched/sch_cbq.c
+@@ -1053,7 +1053,6 @@ cbq_reset(struct Qdisc *sch)
+ 			cl->cpriority = cl->priority;
+ 		}
+ 	}
+-	sch->q.qlen = 0;
+ }
+ 
+ 
+diff --git a/net/sched/sch_choke.c b/net/sched/sch_choke.c
+index 2adbd945bf15a..25d2daaa81227 100644
+--- a/net/sched/sch_choke.c
++++ b/net/sched/sch_choke.c
+@@ -315,8 +315,6 @@ static void choke_reset(struct Qdisc *sch)
+ 		rtnl_qdisc_drop(skb, sch);
+ 	}
+ 
+-	sch->q.qlen = 0;
+-	sch->qstats.backlog = 0;
+ 	if (q->tab)
+ 		memset(q->tab, 0, (q->tab_mask + 1) * sizeof(struct sk_buff *));
+ 	q->head = q->tail = 0;
+diff --git a/net/sched/sch_drr.c b/net/sched/sch_drr.c
+index dde564670ad8c..08424aac6da82 100644
+--- a/net/sched/sch_drr.c
++++ b/net/sched/sch_drr.c
+@@ -443,8 +443,6 @@ static void drr_reset_qdisc(struct Qdisc *sch)
+ 			qdisc_reset(cl->qdisc);
+ 		}
+ 	}
+-	sch->qstats.backlog = 0;
+-	sch->q.qlen = 0;
+ }
+ 
+ static void drr_destroy_qdisc(struct Qdisc *sch)
+diff --git a/net/sched/sch_dsmark.c b/net/sched/sch_dsmark.c
+index 76ed1a05ded27..a75bc7f80cd7e 100644
+--- a/net/sched/sch_dsmark.c
++++ b/net/sched/sch_dsmark.c
+@@ -408,8 +408,6 @@ static void dsmark_reset(struct Qdisc *sch)
+ 	pr_debug("%s(sch %p,[qdisc %p])\n", __func__, sch, p);
+ 	if (p->q)
+ 		qdisc_reset(p->q);
+-	sch->qstats.backlog = 0;
+-	sch->q.qlen = 0;
+ }
+ 
+ static void dsmark_destroy(struct Qdisc *sch)
+diff --git a/net/sched/sch_etf.c b/net/sched/sch_etf.c
+index c48f91075b5c6..d96103b0e2bf5 100644
+--- a/net/sched/sch_etf.c
++++ b/net/sched/sch_etf.c
+@@ -445,9 +445,6 @@ static void etf_reset(struct Qdisc *sch)
+ 	timesortedlist_clear(sch);
+ 	__qdisc_reset_queue(&sch->q);
+ 
+-	sch->qstats.backlog = 0;
+-	sch->q.qlen = 0;
+-
+ 	q->last = 0;
+ }
+ 
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index 9c224872ef035..05817c55692f0 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -722,8 +722,6 @@ static void ets_qdisc_reset(struct Qdisc *sch)
+ 	}
+ 	for (band = 0; band < q->nbands; band++)
+ 		qdisc_reset(q->classes[band].qdisc);
+-	sch->qstats.backlog = 0;
+-	sch->q.qlen = 0;
+ }
+ 
+ static void ets_qdisc_destroy(struct Qdisc *sch)
+diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
+index 99e8db2621984..01d6eea5b0ce9 100644
+--- a/net/sched/sch_fq_codel.c
++++ b/net/sched/sch_fq_codel.c
+@@ -347,8 +347,6 @@ static void fq_codel_reset(struct Qdisc *sch)
+ 		codel_vars_init(&flow->cvars);
+ 	}
+ 	memset(q->backlogs, 0, q->flows_cnt * sizeof(u32));
+-	sch->q.qlen = 0;
+-	sch->qstats.backlog = 0;
+ 	q->memory_usage = 0;
+ }
+ 
+diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
+index c70802785518f..cf04f70e96bf1 100644
+--- a/net/sched/sch_fq_pie.c
++++ b/net/sched/sch_fq_pie.c
+@@ -521,9 +521,6 @@ static void fq_pie_reset(struct Qdisc *sch)
+ 		INIT_LIST_HEAD(&flow->flowchain);
+ 		pie_vars_init(&flow->vars);
+ 	}
+-
+-	sch->q.qlen = 0;
+-	sch->qstats.backlog = 0;
+ }
+ 
+ static void fq_pie_destroy(struct Qdisc *sch)
+diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
+index d1902fca98447..cdc43a06aa9bc 100644
+--- a/net/sched/sch_hfsc.c
++++ b/net/sched/sch_hfsc.c
+@@ -1484,8 +1484,6 @@ hfsc_reset_qdisc(struct Qdisc *sch)
+ 	}
+ 	q->eligible = RB_ROOT;
+ 	qdisc_watchdog_cancel(&q->watchdog);
+-	sch->qstats.backlog = 0;
+-	sch->q.qlen = 0;
+ }
+ 
+ static void
+diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
+index cd70dbcbd72fd..c3ba018fd083e 100644
+--- a/net/sched/sch_htb.c
++++ b/net/sched/sch_htb.c
+@@ -966,8 +966,6 @@ static void htb_reset(struct Qdisc *sch)
+ 	}
+ 	qdisc_watchdog_cancel(&q->watchdog);
+ 	__qdisc_reset_queue(&q->direct_queue);
+-	sch->q.qlen = 0;
+-	sch->qstats.backlog = 0;
+ 	memset(q->hlevel, 0, sizeof(q->hlevel));
+ 	memset(q->row_mask, 0, sizeof(q->row_mask));
+ }
+diff --git a/net/sched/sch_multiq.c b/net/sched/sch_multiq.c
+index 5c27b4270b908..1c6dbcfa89b87 100644
+--- a/net/sched/sch_multiq.c
++++ b/net/sched/sch_multiq.c
+@@ -152,7 +152,6 @@ multiq_reset(struct Qdisc *sch)
+ 
+ 	for (band = 0; band < q->bands; band++)
+ 		qdisc_reset(q->queues[band]);
+-	sch->q.qlen = 0;
+ 	q->curband = 0;
+ }
+ 
+diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.c
+index 3eabb871a1d52..1c805fe05b82a 100644
+--- a/net/sched/sch_prio.c
++++ b/net/sched/sch_prio.c
+@@ -135,8 +135,6 @@ prio_reset(struct Qdisc *sch)
+ 
+ 	for (prio = 0; prio < q->bands; prio++)
+ 		qdisc_reset(q->queues[prio]);
+-	sch->qstats.backlog = 0;
+-	sch->q.qlen = 0;
+ }
+ 
+ static int prio_offload(struct Qdisc *sch, struct tc_prio_qopt *qopt)
+diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
+index af8c63a9ec18c..1d1d81aeb389f 100644
+--- a/net/sched/sch_qfq.c
++++ b/net/sched/sch_qfq.c
+@@ -1458,8 +1458,6 @@ static void qfq_reset_qdisc(struct Qdisc *sch)
+ 			qdisc_reset(cl->qdisc);
+ 		}
+ 	}
+-	sch->qstats.backlog = 0;
+-	sch->q.qlen = 0;
+ }
+ 
+ static void qfq_destroy_qdisc(struct Qdisc *sch)
+diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c
+index 40adf1f07a82d..f1e013e3f04a9 100644
+--- a/net/sched/sch_red.c
++++ b/net/sched/sch_red.c
+@@ -176,8 +176,6 @@ static void red_reset(struct Qdisc *sch)
+ 	struct red_sched_data *q = qdisc_priv(sch);
+ 
+ 	qdisc_reset(q->qdisc);
+-	sch->qstats.backlog = 0;
+-	sch->q.qlen = 0;
+ 	red_restart(&q->vars);
+ }
+ 
+diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
+index b2724057629f6..9ded56228ea10 100644
+--- a/net/sched/sch_sfb.c
++++ b/net/sched/sch_sfb.c
+@@ -455,9 +455,8 @@ static void sfb_reset(struct Qdisc *sch)
+ {
+ 	struct sfb_sched_data *q = qdisc_priv(sch);
+ 
+-	qdisc_reset(q->qdisc);
+-	sch->qstats.backlog = 0;
+-	sch->q.qlen = 0;
++	if (likely(q->qdisc))
++		qdisc_reset(q->qdisc);
+ 	q->slot = 0;
+ 	q->double_buffering = false;
+ 	sfb_zero_all_buckets(q);
+diff --git a/net/sched/sch_skbprio.c b/net/sched/sch_skbprio.c
+index 7a5e4c4547156..df72fb83d9c7d 100644
+--- a/net/sched/sch_skbprio.c
++++ b/net/sched/sch_skbprio.c
+@@ -213,9 +213,6 @@ static void skbprio_reset(struct Qdisc *sch)
+ 	struct skbprio_sched_data *q = qdisc_priv(sch);
+ 	int prio;
+ 
+-	sch->qstats.backlog = 0;
+-	sch->q.qlen = 0;
+-
+ 	for (prio = 0; prio < SKBPRIO_MAX_PRIORITY; prio++)
+ 		__skb_queue_purge(&q->qdiscs[prio]);
+ 
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index ab8835a72cee6..7f33b31c7b8bd 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1626,8 +1626,6 @@ static void taprio_reset(struct Qdisc *sch)
+ 			if (q->qdiscs[i])
+ 				qdisc_reset(q->qdiscs[i]);
+ 	}
+-	sch->qstats.backlog = 0;
+-	sch->q.qlen = 0;
+ }
+ 
+ static void taprio_destroy(struct Qdisc *sch)
+diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
+index 6eb17004a9e44..7461e5c67d50a 100644
+--- a/net/sched/sch_tbf.c
++++ b/net/sched/sch_tbf.c
+@@ -316,8 +316,6 @@ static void tbf_reset(struct Qdisc *sch)
+ 	struct tbf_sched_data *q = qdisc_priv(sch);
+ 
+ 	qdisc_reset(q->qdisc);
+-	sch->qstats.backlog = 0;
+-	sch->q.qlen = 0;
+ 	q->t_c = ktime_get_ns();
+ 	q->tokens = q->buffer;
+ 	q->ptokens = q->mtu;
+diff --git a/net/sched/sch_teql.c b/net/sched/sch_teql.c
+index 6af6b95bdb672..79aaab51cbf5c 100644
+--- a/net/sched/sch_teql.c
++++ b/net/sched/sch_teql.c
+@@ -124,7 +124,6 @@ teql_reset(struct Qdisc *sch)
+ 	struct teql_sched_data *dat = qdisc_priv(sch);
+ 
+ 	skb_queue_purge(&dat->q);
+-	sch->q.qlen = 0;
+ }
+ 
+ static void
+diff --git a/net/tipc/discover.c b/net/tipc/discover.c
+index 14bc20604051d..2ae268b674650 100644
+--- a/net/tipc/discover.c
++++ b/net/tipc/discover.c
+@@ -147,8 +147,8 @@ static bool tipc_disc_addr_trial_msg(struct tipc_discoverer *d,
+ {
+ 	struct net *net = d->net;
+ 	struct tipc_net *tn = tipc_net(net);
+-	bool trial = time_before(jiffies, tn->addr_trial_end);
+ 	u32 self = tipc_own_addr(net);
++	bool trial = time_before(jiffies, tn->addr_trial_end) && !self;
+ 
+ 	if (mtyp == DSC_TRIAL_FAIL_MSG) {
+ 		if (!trial)
+diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
+index 13f3143609f9e..d9e2c0fea3f2b 100644
+--- a/net/tipc/topsrv.c
++++ b/net/tipc/topsrv.c
+@@ -568,7 +568,7 @@ bool tipc_topsrv_kern_subscr(struct net *net, u32 port, u32 type, u32 lower,
+ 	sub.seq.upper = upper;
+ 	sub.timeout = TIPC_WAIT_FOREVER;
+ 	sub.filter = filter;
+-	*(u32 *)&sub.usr_handle = port;
++	*(u64 *)&sub.usr_handle = (u64)port;
+ 
+ 	con = tipc_conn_alloc(tipc_topsrv(net));
+ 	if (IS_ERR(con))
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index 31d631fa846ef..3db8bd2158d9b 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -2011,7 +2011,8 @@ static inline int convert_context_handle_invalid_context(
+  * in `newc'.  Verify that the context is valid
+  * under the new policy.
+  */
+-static int convert_context(struct context *oldc, struct context *newc, void *p)
++static int convert_context(struct context *oldc, struct context *newc, void *p,
++			   gfp_t gfp_flags)
+ {
+ 	struct convert_context_args *args;
+ 	struct ocontext *oc;
+@@ -2025,7 +2026,7 @@ static int convert_context(struct context *oldc, struct context *newc, void *p)
+ 	args = p;
+ 
+ 	if (oldc->str) {
+-		s = kstrdup(oldc->str, GFP_KERNEL);
++		s = kstrdup(oldc->str, gfp_flags);
+ 		if (!s)
+ 			return -ENOMEM;
+ 
+diff --git a/security/selinux/ss/sidtab.c b/security/selinux/ss/sidtab.c
+index 656d50b09f762..1981c5af13e0a 100644
+--- a/security/selinux/ss/sidtab.c
++++ b/security/selinux/ss/sidtab.c
+@@ -325,7 +325,7 @@ int sidtab_context_to_sid(struct sidtab *s, struct context *context,
+ 		}
+ 
+ 		rc = convert->func(context, &dst_convert->context,
+-				   convert->args);
++				   convert->args, GFP_ATOMIC);
+ 		if (rc) {
+ 			context_destroy(&dst->context);
+ 			goto out_unlock;
+@@ -404,7 +404,7 @@ static int sidtab_convert_tree(union sidtab_entry_inner *edst,
+ 		while (i < SIDTAB_LEAF_ENTRIES && *pos < count) {
+ 			rc = convert->func(&esrc->ptr_leaf->entries[i].context,
+ 					   &edst->ptr_leaf->entries[i].context,
+-					   convert->args);
++					   convert->args, GFP_KERNEL);
+ 			if (rc)
+ 				return rc;
+ 			(*pos)++;
+diff --git a/security/selinux/ss/sidtab.h b/security/selinux/ss/sidtab.h
+index 4eff0e49dcb22..9fce0d553fe2c 100644
+--- a/security/selinux/ss/sidtab.h
++++ b/security/selinux/ss/sidtab.h
+@@ -65,7 +65,7 @@ struct sidtab_isid_entry {
+ };
+ 
+ struct sidtab_convert_params {
+-	int (*func)(struct context *oldc, struct context *newc, void *args);
++	int (*func)(struct context *oldc, struct context *newc, void *args, gfp_t gfp_flags);
+ 	void *args;
+ 	struct sidtab *target;
+ };
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 3a0a7930cd10a..c56a4d9c3be94 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -356,6 +356,12 @@ __add_event(struct list_head *list, int *idx,
+ 	struct perf_cpu_map *cpus = pmu ? perf_cpu_map__get(pmu->cpus) :
+ 			       cpu_list ? perf_cpu_map__new(cpu_list) : NULL;
+ 
++	if (pmu)
++		perf_pmu__warn_invalid_formats(pmu);
++
++	if (pmu && attr->type == PERF_TYPE_RAW)
++		perf_pmu__warn_invalid_config(pmu, attr->config, name);
++
+ 	if (init_attr)
+ 		event_attr_init(attr);
+ 
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index d41caeb35cf6c..ac45da0302a73 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -862,6 +862,23 @@ static struct perf_pmu *pmu_lookup(const char *name)
+ 	return pmu;
+ }
+ 
++void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu)
++{
++	struct perf_pmu_format *format;
++
++	/* fake pmu doesn't have format list */
++	if (pmu == &perf_pmu__fake)
++		return;
++
++	list_for_each_entry(format, &pmu->format, list)
++		if (format->value >= PERF_PMU_FORMAT_VALUE_CONFIG_END) {
++			pr_warning("WARNING: '%s' format '%s' requires 'perf_event_attr::config%d'"
++				   "which is not supported by this version of perf!\n",
++				   pmu->name, format->name, format->value);
++			return;
++		}
++}
++
+ static struct perf_pmu *pmu_find(const char *name)
+ {
+ 	struct perf_pmu *pmu;
+@@ -1716,3 +1733,36 @@ int perf_pmu__caps_parse(struct perf_pmu *pmu)
+ 
+ 	return nr_caps;
+ }
++
++void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config,
++				   char *name)
++{
++	struct perf_pmu_format *format;
++	__u64 masks = 0, bits;
++	char buf[100];
++	unsigned int i;
++
++	list_for_each_entry(format, &pmu->format, list)	{
++		if (format->value != PERF_PMU_FORMAT_VALUE_CONFIG)
++			continue;
++
++		for_each_set_bit(i, format->bits, PERF_PMU_FORMAT_BITS)
++			masks |= 1ULL << i;
++	}
++
++	/*
++	 * Kernel doesn't export any valid format bits.
++	 */
++	if (masks == 0)
++		return;
++
++	bits = config & ~masks;
++	if (bits == 0)
++		return;
++
++	bitmap_scnprintf((unsigned long *)&bits, sizeof(bits) * 8, buf, sizeof(buf));
++
++	pr_warning("WARNING: event '%s' not valid (bits %s of config "
++		   "'%llx' not supported by kernel)!\n",
++		   name ?: "N/A", buf, config);
++}
+diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
+index a64e9c9ce731a..7d208b8507695 100644
+--- a/tools/perf/util/pmu.h
++++ b/tools/perf/util/pmu.h
+@@ -15,6 +15,7 @@ enum {
+ 	PERF_PMU_FORMAT_VALUE_CONFIG,
+ 	PERF_PMU_FORMAT_VALUE_CONFIG1,
+ 	PERF_PMU_FORMAT_VALUE_CONFIG2,
++	PERF_PMU_FORMAT_VALUE_CONFIG_END,
+ };
+ 
+ #define PERF_PMU_FORMAT_BITS 64
+@@ -120,4 +121,8 @@ int perf_pmu__convert_scale(const char *scale, char **end, double *sval);
+ 
+ int perf_pmu__caps_parse(struct perf_pmu *pmu);
+ 
++void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config,
++				   char *name);
++void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu);
++
+ #endif /* __PMU_H */
+diff --git a/tools/perf/util/pmu.l b/tools/perf/util/pmu.l
+index a15d9fbd7c0ed..58b4926cfaca9 100644
+--- a/tools/perf/util/pmu.l
++++ b/tools/perf/util/pmu.l
+@@ -27,8 +27,6 @@ num_dec         [0-9]+
+ 
+ {num_dec}	{ return value(10); }
+ config		{ return PP_CONFIG; }
+-config1		{ return PP_CONFIG1; }
+-config2		{ return PP_CONFIG2; }
+ -		{ return '-'; }
+ :		{ return ':'; }
+ ,		{ return ','; }
+diff --git a/tools/perf/util/pmu.y b/tools/perf/util/pmu.y
+index bfd7e8509869b..283efe059819d 100644
+--- a/tools/perf/util/pmu.y
++++ b/tools/perf/util/pmu.y
+@@ -20,7 +20,7 @@ do { \
+ 
+ %}
+ 
+-%token PP_CONFIG PP_CONFIG1 PP_CONFIG2
++%token PP_CONFIG
+ %token PP_VALUE PP_ERROR
+ %type <num> PP_VALUE
+ %type <bits> bit_term
+@@ -47,18 +47,11 @@ PP_CONFIG ':' bits
+ 				      $3));
+ }
+ |
+-PP_CONFIG1 ':' bits
++PP_CONFIG PP_VALUE ':' bits
+ {
+ 	ABORT_ON(perf_pmu__new_format(format, name,
+-				      PERF_PMU_FORMAT_VALUE_CONFIG1,
+-				      $3));
+-}
+-|
+-PP_CONFIG2 ':' bits
+-{
+-	ABORT_ON(perf_pmu__new_format(format, name,
+-				      PERF_PMU_FORMAT_VALUE_CONFIG2,
+-				      $3));
++				      $2,
++				      $4));
+ }
+ 
+ bits:
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index c4cce817a4522..564d5c145fbe7 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -3966,6 +3966,12 @@ struct compat_kvm_clear_dirty_log {
+ 	};
+ };
+ 
++long __weak kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
++				     unsigned long arg)
++{
++	return -ENOTTY;
++}
++
+ static long kvm_vm_compat_ioctl(struct file *filp,
+ 			   unsigned int ioctl, unsigned long arg)
+ {
+@@ -3974,6 +3980,11 @@ static long kvm_vm_compat_ioctl(struct file *filp,
+ 
+ 	if (kvm->mm != current->mm || kvm->vm_bugged)
+ 		return -EIO;
++
++	r = kvm_arch_vm_compat_ioctl(filp, ioctl, arg);
++	if (r != -ENOTTY)
++		return r;
++
+ 	switch (ioctl) {
+ #ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
+ 	case KVM_CLEAR_DIRTY_LOG: {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-11-03 15:17 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-11-03 15:17 UTC (permalink / raw
  To: gentoo-commits

commit:     c99b58476496fd6a51f7ae85c130f0787e87ac69
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Nov  3 15:16:58 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Nov  3 15:16:58 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c99b5847

Linux patch 5.10.153

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1152_linux-5.10.153.patch | 3272 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3276 insertions(+)

diff --git a/0000_README b/0000_README
index 2d158649..76e2297c 100644
--- a/0000_README
+++ b/0000_README
@@ -651,6 +651,10 @@ Patch:  1151_linux-5.10.152.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.152
 
+Patch:  1152_linux-5.10.153.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.153
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1152_linux-5.10.153.patch b/1152_linux-5.10.153.patch
new file mode 100644
index 00000000..6f989aad
--- /dev/null
+++ b/1152_linux-5.10.153.patch
@@ -0,0 +1,3272 @@
+diff --git a/Makefile b/Makefile
+index a0750d0519820..d1cd7539105df 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 152
++SUBLEVEL = 153
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arc/include/asm/io.h b/arch/arc/include/asm/io.h
+index 8f777d6441a5d..80347382a3800 100644
+--- a/arch/arc/include/asm/io.h
++++ b/arch/arc/include/asm/io.h
+@@ -32,7 +32,7 @@ static inline void ioport_unmap(void __iomem *addr)
+ {
+ }
+ 
+-extern void iounmap(const void __iomem *addr);
++extern void iounmap(const volatile void __iomem *addr);
+ 
+ /*
+  * io{read,write}{16,32}be() macros
+diff --git a/arch/arc/mm/ioremap.c b/arch/arc/mm/ioremap.c
+index 95c649fbc95af..d3b1ea16e9cd3 100644
+--- a/arch/arc/mm/ioremap.c
++++ b/arch/arc/mm/ioremap.c
+@@ -93,7 +93,7 @@ void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
+ EXPORT_SYMBOL(ioremap_prot);
+ 
+ 
+-void iounmap(const void __iomem *addr)
++void iounmap(const volatile void __iomem *addr)
+ {
+ 	/* weird double cast to handle phys_addr_t > 32 bits */
+ 	if (arc_uncached_addr_space((phys_addr_t)(u32)addr))
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index 423f9b40e4d95..31ba0ac7db630 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -648,7 +648,8 @@ static inline bool system_supports_4kb_granule(void)
+ 	val = cpuid_feature_extract_unsigned_field(mmfr0,
+ 						ID_AA64MMFR0_TGRAN4_SHIFT);
+ 
+-	return val == ID_AA64MMFR0_TGRAN4_SUPPORTED;
++	return (val >= ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN) &&
++	       (val <= ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX);
+ }
+ 
+ static inline bool system_supports_64kb_granule(void)
+@@ -660,7 +661,8 @@ static inline bool system_supports_64kb_granule(void)
+ 	val = cpuid_feature_extract_unsigned_field(mmfr0,
+ 						ID_AA64MMFR0_TGRAN64_SHIFT);
+ 
+-	return val == ID_AA64MMFR0_TGRAN64_SUPPORTED;
++	return (val >= ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN) &&
++	       (val <= ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX);
+ }
+ 
+ static inline bool system_supports_16kb_granule(void)
+@@ -672,7 +674,8 @@ static inline bool system_supports_16kb_granule(void)
+ 	val = cpuid_feature_extract_unsigned_field(mmfr0,
+ 						ID_AA64MMFR0_TGRAN16_SHIFT);
+ 
+-	return val == ID_AA64MMFR0_TGRAN16_SUPPORTED;
++	return (val >= ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN) &&
++	       (val <= ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX);
+ }
+ 
+ static inline bool system_supports_mixed_endian_el0(void)
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 39f5c1672f480..457b6bb276bb2 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -60,6 +60,7 @@
+ #define ARM_CPU_IMP_FUJITSU		0x46
+ #define ARM_CPU_IMP_HISI		0x48
+ #define ARM_CPU_IMP_APPLE		0x61
++#define ARM_CPU_IMP_AMPERE		0xC0
+ 
+ #define ARM_CPU_PART_AEM_V8		0xD0F
+ #define ARM_CPU_PART_FOUNDATION		0xD00
+@@ -112,6 +113,8 @@
+ #define APPLE_CPU_PART_M1_ICESTORM	0x022
+ #define APPLE_CPU_PART_M1_FIRESTORM	0x023
+ 
++#define AMPERE_CPU_PART_AMPERE1		0xAC3
++
+ #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
+ #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
+ #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
+@@ -151,6 +154,7 @@
+ #define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110)
+ #define MIDR_APPLE_M1_ICESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM)
+ #define MIDR_APPLE_M1_FIRESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM)
++#define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1)
+ 
+ /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
+ #define MIDR_FUJITSU_ERRATUM_010001		MIDR_FUJITSU_A64FX
+diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
+index 1f2209ad2cca1..06755fad38304 100644
+--- a/arch/arm64/include/asm/sysreg.h
++++ b/arch/arm64/include/asm/sysreg.h
+@@ -786,15 +786,24 @@
+ #define ID_AA64MMFR0_ASID_SHIFT		4
+ #define ID_AA64MMFR0_PARANGE_SHIFT	0
+ 
+-#define ID_AA64MMFR0_TGRAN4_NI		0xf
+-#define ID_AA64MMFR0_TGRAN4_SUPPORTED	0x0
+-#define ID_AA64MMFR0_TGRAN64_NI		0xf
+-#define ID_AA64MMFR0_TGRAN64_SUPPORTED	0x0
+-#define ID_AA64MMFR0_TGRAN16_NI		0x0
+-#define ID_AA64MMFR0_TGRAN16_SUPPORTED	0x1
++#define ID_AA64MMFR0_TGRAN4_NI			0xf
++#define ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN	0x0
++#define ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX	0x7
++#define ID_AA64MMFR0_TGRAN64_NI			0xf
++#define ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN	0x0
++#define ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX	0x7
++#define ID_AA64MMFR0_TGRAN16_NI			0x0
++#define ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN	0x1
++#define ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX	0xf
++
+ #define ID_AA64MMFR0_PARANGE_48		0x5
+ #define ID_AA64MMFR0_PARANGE_52		0x6
+ 
++#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_DEFAULT	0x0
++#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_NONE	0x1
++#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MIN	0x2
++#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MAX	0x7
++
+ #ifdef CONFIG_ARM64_PA_BITS_52
+ #define ID_AA64MMFR0_PARANGE_MAX	ID_AA64MMFR0_PARANGE_52
+ #else
+@@ -955,14 +964,17 @@
+ #define ID_PFR1_PROGMOD_SHIFT		0
+ 
+ #if defined(CONFIG_ARM64_4K_PAGES)
+-#define ID_AA64MMFR0_TGRAN_SHIFT	ID_AA64MMFR0_TGRAN4_SHIFT
+-#define ID_AA64MMFR0_TGRAN_SUPPORTED	ID_AA64MMFR0_TGRAN4_SUPPORTED
++#define ID_AA64MMFR0_TGRAN_SHIFT		ID_AA64MMFR0_TGRAN4_SHIFT
++#define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN	ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN
++#define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX	ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX
+ #elif defined(CONFIG_ARM64_16K_PAGES)
+-#define ID_AA64MMFR0_TGRAN_SHIFT	ID_AA64MMFR0_TGRAN16_SHIFT
+-#define ID_AA64MMFR0_TGRAN_SUPPORTED	ID_AA64MMFR0_TGRAN16_SUPPORTED
++#define ID_AA64MMFR0_TGRAN_SHIFT		ID_AA64MMFR0_TGRAN16_SHIFT
++#define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN	ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN
++#define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX	ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX
+ #elif defined(CONFIG_ARM64_64K_PAGES)
+-#define ID_AA64MMFR0_TGRAN_SHIFT	ID_AA64MMFR0_TGRAN64_SHIFT
+-#define ID_AA64MMFR0_TGRAN_SUPPORTED	ID_AA64MMFR0_TGRAN64_SUPPORTED
++#define ID_AA64MMFR0_TGRAN_SHIFT		ID_AA64MMFR0_TGRAN64_SHIFT
++#define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN	ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN
++#define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX	ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX
+ #endif
+ 
+ #define MVFR2_FPMISC_SHIFT		4
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index f9119eea735e2..e1c25fa3b8e6c 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -797,8 +797,10 @@ SYM_FUNC_END(__secondary_too_slow)
+ SYM_FUNC_START(__enable_mmu)
+ 	mrs	x2, ID_AA64MMFR0_EL1
+ 	ubfx	x2, x2, #ID_AA64MMFR0_TGRAN_SHIFT, 4
+-	cmp	x2, #ID_AA64MMFR0_TGRAN_SUPPORTED
+-	b.ne	__no_granule_support
++	cmp     x2, #ID_AA64MMFR0_TGRAN_SUPPORTED_MIN
++	b.lt    __no_granule_support
++	cmp     x2, #ID_AA64MMFR0_TGRAN_SUPPORTED_MAX
++	b.gt    __no_granule_support
+ 	update_early_cpu_boot_status 0, x2, x3
+ 	adrp	x2, idmap_pg_dir
+ 	phys_to_ttbr x1, x1
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index 6ae53d8cd576f..faa8a6bf2376e 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -876,6 +876,10 @@ u8 spectre_bhb_loop_affected(int scope)
+ 			MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
+ 			{},
+ 		};
++		static const struct midr_range spectre_bhb_k11_list[] = {
++			MIDR_ALL_VERSIONS(MIDR_AMPERE1),
++			{},
++		};
+ 		static const struct midr_range spectre_bhb_k8_list[] = {
+ 			MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+ 			MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+@@ -886,6 +890,8 @@ u8 spectre_bhb_loop_affected(int scope)
+ 			k = 32;
+ 		else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
+ 			k = 24;
++		else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k11_list))
++			k = 11;
+ 		else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
+ 			k =  8;
+ 
+diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
+index 204c62debf06e..6f85c1821c3fb 100644
+--- a/arch/arm64/kvm/reset.c
++++ b/arch/arm64/kvm/reset.c
+@@ -397,16 +397,18 @@ int kvm_set_ipa_limit(void)
+ 	}
+ 
+ 	switch (cpuid_feature_extract_unsigned_field(mmfr0, tgran_2)) {
+-	default:
+-	case 1:
++	case ID_AA64MMFR0_TGRAN_2_SUPPORTED_NONE:
+ 		kvm_err("PAGE_SIZE not supported at Stage-2, giving up\n");
+ 		return -EINVAL;
+-	case 0:
++	case ID_AA64MMFR0_TGRAN_2_SUPPORTED_DEFAULT:
+ 		kvm_debug("PAGE_SIZE supported at Stage-2 (default)\n");
+ 		break;
+-	case 2:
++	case ID_AA64MMFR0_TGRAN_2_SUPPORTED_MIN ... ID_AA64MMFR0_TGRAN_2_SUPPORTED_MAX:
+ 		kvm_debug("PAGE_SIZE supported at Stage-2 (advertised)\n");
+ 		break;
++	default:
++		kvm_err("Unsupported value for TGRAN_2, giving up\n");
++		return -EINVAL;
+ 	}
+ 
+ 	kvm_ipa_limit = id_aa64mmfr0_parange_to_phys_shift(parange);
+diff --git a/arch/s390/include/asm/futex.h b/arch/s390/include/asm/futex.h
+index 26f9144562c9e..e1d0b2aaaddd3 100644
+--- a/arch/s390/include/asm/futex.h
++++ b/arch/s390/include/asm/futex.h
+@@ -16,7 +16,8 @@
+ 		"3: jl    1b\n"						\
+ 		"   lhi   %0,0\n"					\
+ 		"4: sacf  768\n"					\
+-		EX_TABLE(0b,4b) EX_TABLE(2b,4b) EX_TABLE(3b,4b)		\
++		EX_TABLE(0b,4b) EX_TABLE(1b,4b)				\
++		EX_TABLE(2b,4b) EX_TABLE(3b,4b)				\
+ 		: "=d" (ret), "=&d" (oldval), "=&d" (newval),		\
+ 		  "=m" (*uaddr)						\
+ 		: "0" (-EFAULT), "d" (oparg), "a" (uaddr),		\
+diff --git a/arch/s390/pci/pci_mmio.c b/arch/s390/pci/pci_mmio.c
+index 37b1bbd1a27cc..1ec8076209cab 100644
+--- a/arch/s390/pci/pci_mmio.c
++++ b/arch/s390/pci/pci_mmio.c
+@@ -64,7 +64,7 @@ static inline int __pcistg_mio_inuser(
+ 	asm volatile (
+ 		"       sacf    256\n"
+ 		"0:     llgc    %[tmp],0(%[src])\n"
+-		"       sllg    %[val],%[val],8\n"
++		"4:	sllg	%[val],%[val],8\n"
+ 		"       aghi    %[src],1\n"
+ 		"       ogr     %[val],%[tmp]\n"
+ 		"       brctg   %[cnt],0b\n"
+@@ -72,7 +72,7 @@ static inline int __pcistg_mio_inuser(
+ 		"2:     ipm     %[cc]\n"
+ 		"       srl     %[cc],28\n"
+ 		"3:     sacf    768\n"
+-		EX_TABLE(0b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b)
++		EX_TABLE(0b, 3b) EX_TABLE(4b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b)
+ 		:
+ 		[src] "+a" (src), [cnt] "+d" (cnt),
+ 		[val] "+d" (val), [tmp] "=d" (tmp),
+@@ -222,10 +222,10 @@ static inline int __pcilg_mio_inuser(
+ 		"2:     ahi     %[shift],-8\n"
+ 		"       srlg    %[tmp],%[val],0(%[shift])\n"
+ 		"3:     stc     %[tmp],0(%[dst])\n"
+-		"       aghi    %[dst],1\n"
++		"5:	aghi	%[dst],1\n"
+ 		"       brctg   %[cnt],2b\n"
+ 		"4:     sacf    768\n"
+-		EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b)
++		EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b) EX_TABLE(5b, 4b)
+ 		:
+ 		[cc] "+d" (cc), [val] "=d" (val), [len] "+d" (len),
+ 		[dst] "+a" (dst), [cnt] "+d" (cnt), [tmp] "=d" (tmp),
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index 42173a7be3bb4..4b6c39c5facba 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -1847,7 +1847,7 @@ void __init intel_pmu_arch_lbr_init(void)
+ 	return;
+ 
+ clear_arch_lbr:
+-	clear_cpu_cap(&boot_cpu_data, X86_FEATURE_ARCH_LBR);
++	setup_clear_cpu_cap(X86_FEATURE_ARCH_LBR);
+ }
+ 
+ /**
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index cc071c4c65240..d557a545f4bc5 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -697,7 +697,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ 	/* Otherwise, skip ahead to the user-specified starting frame: */
+ 	while (!unwind_done(state) &&
+ 	       (!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
+-			state->sp < (unsigned long)first_frame))
++			state->sp <= (unsigned long)first_frame))
+ 		unwind_next_frame(state);
+ 
+ 	return;
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 743268996336d..d0ba5459ce0b9 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -2789,6 +2789,10 @@ static int genpd_iterate_idle_states(struct device_node *dn,
+ 		np = it.node;
+ 		if (!of_match_node(idle_state_match, np))
+ 			continue;
++
++		if (!of_device_is_available(np))
++			continue;
++
+ 		if (states) {
+ 			ret = genpd_parse_state(&states[i], np);
+ 			if (ret) {
+diff --git a/drivers/counter/microchip-tcb-capture.c b/drivers/counter/microchip-tcb-capture.c
+index 710acc0a37044..85fbbac06d314 100644
+--- a/drivers/counter/microchip-tcb-capture.c
++++ b/drivers/counter/microchip-tcb-capture.c
+@@ -29,7 +29,6 @@ struct mchp_tc_data {
+ 	int qdec_mode;
+ 	int num_channels;
+ 	int channel[2];
+-	bool trig_inverted;
+ };
+ 
+ enum mchp_tc_count_function {
+@@ -163,7 +162,7 @@ static int mchp_tc_count_signal_read(struct counter_device *counter,
+ 
+ 	regmap_read(priv->regmap, ATMEL_TC_REG(priv->channel[0], SR), &sr);
+ 
+-	if (priv->trig_inverted)
++	if (signal->id == 1)
+ 		sigstatus = (sr & ATMEL_TC_MTIOB);
+ 	else
+ 		sigstatus = (sr & ATMEL_TC_MTIOA);
+@@ -181,6 +180,17 @@ static int mchp_tc_count_action_get(struct counter_device *counter,
+ 	struct mchp_tc_data *const priv = counter->priv;
+ 	u32 cmr;
+ 
++	if (priv->qdec_mode) {
++		*action = COUNTER_SYNAPSE_ACTION_BOTH_EDGES;
++		return 0;
++	}
++
++	/* Only TIOA signal is evaluated in non-QDEC mode */
++	if (synapse->signal->id != 0) {
++		*action = COUNTER_SYNAPSE_ACTION_NONE;
++		return 0;
++	}
++
+ 	regmap_read(priv->regmap, ATMEL_TC_REG(priv->channel[0], CMR), &cmr);
+ 
+ 	switch (cmr & ATMEL_TC_ETRGEDG) {
+@@ -209,8 +219,8 @@ static int mchp_tc_count_action_set(struct counter_device *counter,
+ 	struct mchp_tc_data *const priv = counter->priv;
+ 	u32 edge = ATMEL_TC_ETRGEDG_NONE;
+ 
+-	/* QDEC mode is rising edge only */
+-	if (priv->qdec_mode)
++	/* QDEC mode is rising edge only; only TIOA handled in non-QDEC mode */
++	if (priv->qdec_mode || synapse->signal->id != 0)
+ 		return -EINVAL;
+ 
+ 	switch (action) {
+diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
+index 415a971e76947..7f4bafcd9d335 100644
+--- a/drivers/firmware/efi/libstub/arm64-stub.c
++++ b/drivers/firmware/efi/libstub/arm64-stub.c
+@@ -24,7 +24,7 @@ efi_status_t check_platform_features(void)
+ 		return EFI_SUCCESS;
+ 
+ 	tg = (read_cpuid(ID_AA64MMFR0_EL1) >> ID_AA64MMFR0_TGRAN_SHIFT) & 0xf;
+-	if (tg != ID_AA64MMFR0_TGRAN_SUPPORTED) {
++	if (tg < ID_AA64MMFR0_TGRAN_SUPPORTED_MIN || tg > ID_AA64MMFR0_TGRAN_SUPPORTED_MAX) {
+ 		if (IS_ENABLED(CONFIG_ARM64_64K_PAGES))
+ 			efi_err("This 64 KB granular kernel is not supported by your CPU\n");
+ 		else
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_lvds_connector.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_lvds_connector.c
+index 7288041dd86ad..7444b75c42157 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_lvds_connector.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_lvds_connector.c
+@@ -56,8 +56,9 @@ static int mdp4_lvds_connector_get_modes(struct drm_connector *connector)
+ 	return ret;
+ }
+ 
+-static int mdp4_lvds_connector_mode_valid(struct drm_connector *connector,
+-				 struct drm_display_mode *mode)
++static enum drm_mode_status
++mdp4_lvds_connector_mode_valid(struct drm_connector *connector,
++			       struct drm_display_mode *mode)
+ {
+ 	struct mdp4_lvds_connector *mdp4_lvds_connector =
+ 			to_mdp4_lvds_connector(connector);
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index a3de1d0523ea0..5a152d505dfb9 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -1201,7 +1201,7 @@ int dp_display_request_irq(struct msm_dp *dp_display)
+ 		return -EINVAL;
+ 	}
+ 
+-	rc = devm_request_irq(&dp->pdev->dev, dp->irq,
++	rc = devm_request_irq(dp_display->drm_dev->dev, dp->irq,
+ 			dp_display_irq_handler,
+ 			IRQF_TRIGGER_HIGH, "dp_display_isr", dp);
+ 	if (rc < 0) {
+diff --git a/drivers/gpu/drm/msm/dsi/dsi.c b/drivers/gpu/drm/msm/dsi/dsi.c
+index f845333593daa..7377596a13f4b 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi.c
++++ b/drivers/gpu/drm/msm/dsi/dsi.c
+@@ -205,6 +205,12 @@ int msm_dsi_modeset_init(struct msm_dsi *msm_dsi, struct drm_device *dev,
+ 		return -EINVAL;
+ 
+ 	priv = dev->dev_private;
++
++	if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) {
++		DRM_DEV_ERROR(dev->dev, "too many bridges\n");
++		return -ENOSPC;
++	}
++
+ 	msm_dsi->dev = dev;
+ 
+ 	ret = msm_dsi_host_modeset_init(msm_dsi->host, dev);
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
+index 28b33b35a30ce..47796e12b4322 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
+@@ -293,6 +293,11 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
+ 	struct platform_device *pdev = hdmi->pdev;
+ 	int ret;
+ 
++	if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) {
++		DRM_DEV_ERROR(dev->dev, "too many bridges\n");
++		return -ENOSPC;
++	}
++
+ 	hdmi->dev = dev;
+ 	hdmi->encoder = encoder;
+ 
+diff --git a/drivers/iio/light/tsl2583.c b/drivers/iio/light/tsl2583.c
+index 40b7dd266b314..e39d512145a67 100644
+--- a/drivers/iio/light/tsl2583.c
++++ b/drivers/iio/light/tsl2583.c
+@@ -856,7 +856,7 @@ static int tsl2583_probe(struct i2c_client *clientp,
+ 					 TSL2583_POWER_OFF_DELAY_MS);
+ 	pm_runtime_use_autosuspend(&clientp->dev);
+ 
+-	ret = devm_iio_device_register(indio_dev->dev.parent, indio_dev);
++	ret = iio_device_register(indio_dev);
+ 	if (ret) {
+ 		dev_err(&clientp->dev, "%s: iio registration failed\n",
+ 			__func__);
+diff --git a/drivers/iio/temperature/ltc2983.c b/drivers/iio/temperature/ltc2983.c
+index 3b4a0e60e6059..8306daa779081 100644
+--- a/drivers/iio/temperature/ltc2983.c
++++ b/drivers/iio/temperature/ltc2983.c
+@@ -1376,13 +1376,6 @@ static int ltc2983_setup(struct ltc2983_data *st, bool assign_iio)
+ 		return ret;
+ 	}
+ 
+-	st->iio_chan = devm_kzalloc(&st->spi->dev,
+-				    st->iio_channels * sizeof(*st->iio_chan),
+-				    GFP_KERNEL);
+-
+-	if (!st->iio_chan)
+-		return -ENOMEM;
+-
+ 	ret = regmap_update_bits(st->regmap, LTC2983_GLOBAL_CONFIG_REG,
+ 				 LTC2983_NOTCH_FREQ_MASK,
+ 				 LTC2983_NOTCH_FREQ(st->filter_notch_freq));
+@@ -1494,6 +1487,12 @@ static int ltc2983_probe(struct spi_device *spi)
+ 	if (ret)
+ 		return ret;
+ 
++	st->iio_chan = devm_kzalloc(&spi->dev,
++				    st->iio_channels * sizeof(*st->iio_chan),
++				    GFP_KERNEL);
++	if (!st->iio_chan)
++		return -ENOMEM;
++
+ 	ret = ltc2983_setup(st, true);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/media/test-drivers/vivid/vivid-core.c b/drivers/media/test-drivers/vivid/vivid-core.c
+index 1e356dc65d318..761d2abd40067 100644
+--- a/drivers/media/test-drivers/vivid/vivid-core.c
++++ b/drivers/media/test-drivers/vivid/vivid-core.c
+@@ -330,6 +330,28 @@ static int vidioc_g_fbuf(struct file *file, void *fh, struct v4l2_framebuffer *a
+ 	return vivid_vid_out_g_fbuf(file, fh, a);
+ }
+ 
++/*
++ * Only support the framebuffer of one of the vivid instances.
++ * Anything else is rejected.
++ */
++bool vivid_validate_fb(const struct v4l2_framebuffer *a)
++{
++	struct vivid_dev *dev;
++	int i;
++
++	for (i = 0; i < n_devs; i++) {
++		dev = vivid_devs[i];
++		if (!dev || !dev->video_pbase)
++			continue;
++		if ((unsigned long)a->base == dev->video_pbase &&
++		    a->fmt.width <= dev->display_width &&
++		    a->fmt.height <= dev->display_height &&
++		    a->fmt.bytesperline <= dev->display_byte_stride)
++			return true;
++	}
++	return false;
++}
++
+ static int vidioc_s_fbuf(struct file *file, void *fh, const struct v4l2_framebuffer *a)
+ {
+ 	struct video_device *vdev = video_devdata(file);
+@@ -850,8 +872,12 @@ static int vivid_detect_feature_set(struct vivid_dev *dev, int inst,
+ 
+ 	/* how many inputs do we have and of what type? */
+ 	dev->num_inputs = num_inputs[inst];
+-	if (dev->num_inputs < 1)
+-		dev->num_inputs = 1;
++	if (node_type & 0x20007) {
++		if (dev->num_inputs < 1)
++			dev->num_inputs = 1;
++	} else {
++		dev->num_inputs = 0;
++	}
+ 	if (dev->num_inputs >= MAX_INPUTS)
+ 		dev->num_inputs = MAX_INPUTS;
+ 	for (i = 0; i < dev->num_inputs; i++) {
+@@ -868,8 +894,12 @@ static int vivid_detect_feature_set(struct vivid_dev *dev, int inst,
+ 
+ 	/* how many outputs do we have and of what type? */
+ 	dev->num_outputs = num_outputs[inst];
+-	if (dev->num_outputs < 1)
+-		dev->num_outputs = 1;
++	if (node_type & 0x40300) {
++		if (dev->num_outputs < 1)
++			dev->num_outputs = 1;
++	} else {
++		dev->num_outputs = 0;
++	}
+ 	if (dev->num_outputs >= MAX_OUTPUTS)
+ 		dev->num_outputs = MAX_OUTPUTS;
+ 	for (i = 0; i < dev->num_outputs; i++) {
+diff --git a/drivers/media/test-drivers/vivid/vivid-core.h b/drivers/media/test-drivers/vivid/vivid-core.h
+index 99e69b8f770f0..6aa32c8e6fb5c 100644
+--- a/drivers/media/test-drivers/vivid/vivid-core.h
++++ b/drivers/media/test-drivers/vivid/vivid-core.h
+@@ -609,4 +609,6 @@ static inline bool vivid_is_hdmi_out(const struct vivid_dev *dev)
+ 	return dev->output_type[dev->output] == HDMI;
+ }
+ 
++bool vivid_validate_fb(const struct v4l2_framebuffer *a);
++
+ #endif
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+index eadf28ab1e393..d493bd17481b0 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+@@ -452,6 +452,12 @@ void vivid_update_format_cap(struct vivid_dev *dev, bool keep_controls)
+ 	tpg_reset_source(&dev->tpg, dev->src_rect.width, dev->src_rect.height, dev->field_cap);
+ 	dev->crop_cap = dev->src_rect;
+ 	dev->crop_bounds_cap = dev->src_rect;
++	if (dev->bitmap_cap &&
++	    (dev->compose_cap.width != dev->crop_cap.width ||
++	     dev->compose_cap.height != dev->crop_cap.height)) {
++		vfree(dev->bitmap_cap);
++		dev->bitmap_cap = NULL;
++	}
+ 	dev->compose_cap = dev->crop_cap;
+ 	if (V4L2_FIELD_HAS_T_OR_B(dev->field_cap))
+ 		dev->compose_cap.height /= 2;
+@@ -909,6 +915,8 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
+ 	struct vivid_dev *dev = video_drvdata(file);
+ 	struct v4l2_rect *crop = &dev->crop_cap;
+ 	struct v4l2_rect *compose = &dev->compose_cap;
++	unsigned orig_compose_w = compose->width;
++	unsigned orig_compose_h = compose->height;
+ 	unsigned factor = V4L2_FIELD_HAS_T_OR_B(dev->field_cap) ? 2 : 1;
+ 	int ret;
+ 
+@@ -1025,17 +1033,17 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
+ 			s->r.height /= factor;
+ 		}
+ 		v4l2_rect_map_inside(&s->r, &dev->fmt_cap_rect);
+-		if (dev->bitmap_cap && (compose->width != s->r.width ||
+-					compose->height != s->r.height)) {
+-			vfree(dev->bitmap_cap);
+-			dev->bitmap_cap = NULL;
+-		}
+ 		*compose = s->r;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+ 	}
+ 
++	if (dev->bitmap_cap && (compose->width != orig_compose_w ||
++				compose->height != orig_compose_h)) {
++		vfree(dev->bitmap_cap);
++		dev->bitmap_cap = NULL;
++	}
+ 	tpg_s_crop_compose(&dev->tpg, crop, compose);
+ 	return 0;
+ }
+@@ -1276,7 +1284,14 @@ int vivid_vid_cap_s_fbuf(struct file *file, void *fh,
+ 		return -EINVAL;
+ 	if (a->fmt.bytesperline < (a->fmt.width * fmt->bit_depth[0]) / 8)
+ 		return -EINVAL;
+-	if (a->fmt.height * a->fmt.bytesperline < a->fmt.sizeimage)
++	if (a->fmt.bytesperline > a->fmt.sizeimage / a->fmt.height)
++		return -EINVAL;
++
++	/*
++	 * Only support the framebuffer of one of the vivid instances.
++	 * Anything else is rejected.
++	 */
++	if (!vivid_validate_fb(a))
+ 		return -EINVAL;
+ 
+ 	dev->fb_vbase_cap = phys_to_virt((unsigned long)a->base);
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index af48705c704f8..003c32fed3f75 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -161,6 +161,20 @@ bool v4l2_valid_dv_timings(const struct v4l2_dv_timings *t,
+ 	    (bt->interlaced && !(caps & V4L2_DV_BT_CAP_INTERLACED)) ||
+ 	    (!bt->interlaced && !(caps & V4L2_DV_BT_CAP_PROGRESSIVE)))
+ 		return false;
++
++	/* sanity checks for the blanking timings */
++	if (!bt->interlaced &&
++	    (bt->il_vbackporch || bt->il_vsync || bt->il_vfrontporch))
++		return false;
++	if (bt->hfrontporch > 2 * bt->width ||
++	    bt->hsync > 1024 || bt->hbackporch > 1024)
++		return false;
++	if (bt->vfrontporch > 4096 ||
++	    bt->vsync > 128 || bt->vbackporch > 4096)
++		return false;
++	if (bt->interlaced && (bt->il_vfrontporch > 4096 ||
++	    bt->il_vsync > 128 || bt->il_vbackporch > 4096))
++		return false;
+ 	return fnc == NULL || fnc(t, fnc_handle);
+ }
+ EXPORT_SYMBOL_GPL(v4l2_valid_dv_timings);
+diff --git a/drivers/mmc/core/sdio_bus.c b/drivers/mmc/core/sdio_bus.c
+index 3d709029e07ce..a448535c1265d 100644
+--- a/drivers/mmc/core/sdio_bus.c
++++ b/drivers/mmc/core/sdio_bus.c
+@@ -292,7 +292,8 @@ static void sdio_release_func(struct device *dev)
+ {
+ 	struct sdio_func *func = dev_to_sdio_func(dev);
+ 
+-	sdio_free_func_cis(func);
++	if (!(func->card->quirks & MMC_QUIRK_NONSTD_SDIO))
++		sdio_free_func_cis(func);
+ 
+ 	kfree(func->info);
+ 	kfree(func->tmpbuf);
+diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
+index 30ff42fd173e2..82e1fbd6b2ff0 100644
+--- a/drivers/mmc/host/Kconfig
++++ b/drivers/mmc/host/Kconfig
+@@ -1079,9 +1079,10 @@ config MMC_SDHCI_OMAP
+ 
+ config MMC_SDHCI_AM654
+ 	tristate "Support for the SDHCI Controller in TI's AM654 SOCs"
+-	depends on MMC_SDHCI_PLTFM && OF && REGMAP_MMIO
++	depends on MMC_SDHCI_PLTFM && OF
+ 	select MMC_SDHCI_IO_ACCESSORS
+ 	select MMC_CQHCI
++	select REGMAP_MMIO
+ 	help
+ 	  This selects the Secure Digital Host Controller Interface (SDHCI)
+ 	  support present in TI's AM654 SOCs. The controller supports
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index d00c916f133bd..dce35f81e0a55 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -2672,7 +2672,7 @@ static int marvell_nand_chip_init(struct device *dev, struct marvell_nfc *nfc,
+ 	chip->controller = &nfc->controller;
+ 	nand_set_flash_node(chip, np);
+ 
+-	if (!of_property_read_bool(np, "marvell,nand-keep-config"))
++	if (of_property_read_bool(np, "marvell,nand-keep-config"))
+ 		chip->options |= NAND_KEEP_TIMINGS;
+ 
+ 	mtd = nand_to_mtd(chip);
+diff --git a/drivers/net/can/mscan/mpc5xxx_can.c b/drivers/net/can/mscan/mpc5xxx_can.c
+index e254e04ae257f..ef649764f9b4e 100644
+--- a/drivers/net/can/mscan/mpc5xxx_can.c
++++ b/drivers/net/can/mscan/mpc5xxx_can.c
+@@ -325,14 +325,14 @@ static int mpc5xxx_can_probe(struct platform_device *ofdev)
+ 					       &mscan_clksrc);
+ 	if (!priv->can.clock.freq) {
+ 		dev_err(&ofdev->dev, "couldn't get MSCAN clock properties\n");
+-		goto exit_free_mscan;
++		goto exit_put_clock;
+ 	}
+ 
+ 	err = register_mscandev(dev, mscan_clksrc);
+ 	if (err) {
+ 		dev_err(&ofdev->dev, "registering %s failed (err=%d)\n",
+ 			DRV_NAME, err);
+-		goto exit_free_mscan;
++		goto exit_put_clock;
+ 	}
+ 
+ 	dev_info(&ofdev->dev, "MSCAN at 0x%p, irq %d, clock %d Hz\n",
+@@ -340,7 +340,9 @@ static int mpc5xxx_can_probe(struct platform_device *ofdev)
+ 
+ 	return 0;
+ 
+-exit_free_mscan:
++exit_put_clock:
++	if (data->put_clock)
++		data->put_clock(ofdev);
+ 	free_candev(dev);
+ exit_dispose_irq:
+ 	irq_dispose_mapping(irq);
+diff --git a/drivers/net/can/rcar/rcar_canfd.c b/drivers/net/can/rcar/rcar_canfd.c
+index 67f0f14e2bf4e..c61534a2a2d3a 100644
+--- a/drivers/net/can/rcar/rcar_canfd.c
++++ b/drivers/net/can/rcar/rcar_canfd.c
+@@ -1075,7 +1075,7 @@ static irqreturn_t rcar_canfd_global_interrupt(int irq, void *dev_id)
+ 	struct rcar_canfd_global *gpriv = dev_id;
+ 	struct net_device *ndev;
+ 	struct rcar_canfd_channel *priv;
+-	u32 sts, gerfl;
++	u32 sts, cc, gerfl;
+ 	u32 ch, ridx;
+ 
+ 	/* Global error interrupts still indicate a condition specific
+@@ -1093,7 +1093,9 @@ static irqreturn_t rcar_canfd_global_interrupt(int irq, void *dev_id)
+ 
+ 		/* Handle Rx interrupts */
+ 		sts = rcar_canfd_read(priv->base, RCANFD_RFSTS(ridx));
+-		if (likely(sts & RCANFD_RFSTS_RFIF)) {
++		cc = rcar_canfd_read(priv->base, RCANFD_RFCC(ridx));
++		if (likely(sts & RCANFD_RFSTS_RFIF &&
++			   cc & RCANFD_RFCC_RFIE)) {
+ 			if (napi_schedule_prep(&priv->napi)) {
+ 				/* Disable Rx FIFO interrupts */
+ 				rcar_canfd_clear_bit(priv->base,
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index 5dde3c42d241b..ffcb04aac9729 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -1419,11 +1419,14 @@ static int mcp251x_can_probe(struct spi_device *spi)
+ 
+ 	ret = mcp251x_gpio_setup(priv);
+ 	if (ret)
+-		goto error_probe;
++		goto out_unregister_candev;
+ 
+ 	netdev_info(net, "MCP%x successfully initialized.\n", priv->model);
+ 	return 0;
+ 
++out_unregister_candev:
++	unregister_candev(net);
++
+ error_probe:
+ 	destroy_workqueue(priv->wq);
+ 	priv->wq = NULL;
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+index 5d642458bac54..45d2787248839 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+@@ -1845,7 +1845,7 @@ static int kvaser_usb_hydra_start_chip(struct kvaser_usb_net_priv *priv)
+ {
+ 	int err;
+ 
+-	init_completion(&priv->start_comp);
++	reinit_completion(&priv->start_comp);
+ 
+ 	err = kvaser_usb_hydra_send_simple_cmd(priv->dev, CMD_START_CHIP_REQ,
+ 					       priv->channel);
+@@ -1863,7 +1863,7 @@ static int kvaser_usb_hydra_stop_chip(struct kvaser_usb_net_priv *priv)
+ {
+ 	int err;
+ 
+-	init_completion(&priv->stop_comp);
++	reinit_completion(&priv->stop_comp);
+ 
+ 	/* Make sure we do not report invalid BUS_OFF from CMD_CHIP_STATE_EVENT
+ 	 * see comment in kvaser_usb_hydra_update_state()
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+index 78d52a5e8fd5d..15380cc08ee69 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+@@ -1324,7 +1324,7 @@ static int kvaser_usb_leaf_start_chip(struct kvaser_usb_net_priv *priv)
+ {
+ 	int err;
+ 
+-	init_completion(&priv->start_comp);
++	reinit_completion(&priv->start_comp);
+ 
+ 	err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_START_CHIP,
+ 					      priv->channel);
+@@ -1342,7 +1342,7 @@ static int kvaser_usb_leaf_stop_chip(struct kvaser_usb_net_priv *priv)
+ {
+ 	int err;
+ 
+-	init_completion(&priv->stop_comp);
++	reinit_completion(&priv->stop_comp);
+ 
+ 	err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_STOP_CHIP,
+ 					      priv->channel);
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+index 213769054391c..a7166cd1179f2 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+@@ -239,6 +239,7 @@ enum xgbe_sfp_speed {
+ #define XGBE_SFP_BASE_BR_1GBE_MAX		0x0d
+ #define XGBE_SFP_BASE_BR_10GBE_MIN		0x64
+ #define XGBE_SFP_BASE_BR_10GBE_MAX		0x68
++#define XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX	0x78
+ 
+ #define XGBE_SFP_BASE_CU_CABLE_LEN		18
+ 
+@@ -284,6 +285,8 @@ struct xgbe_sfp_eeprom {
+ #define XGBE_BEL_FUSE_VENDOR	"BEL-FUSE        "
+ #define XGBE_BEL_FUSE_PARTNO	"1GBT-SFP06      "
+ 
++#define XGBE_MOLEX_VENDOR	"Molex Inc.      "
++
+ struct xgbe_sfp_ascii {
+ 	union {
+ 		char vendor[XGBE_SFP_BASE_VENDOR_NAME_LEN + 1];
+@@ -834,7 +837,11 @@ static bool xgbe_phy_sfp_bit_rate(struct xgbe_sfp_eeprom *sfp_eeprom,
+ 		break;
+ 	case XGBE_SFP_SPEED_10000:
+ 		min = XGBE_SFP_BASE_BR_10GBE_MIN;
+-		max = XGBE_SFP_BASE_BR_10GBE_MAX;
++		if (memcmp(&sfp_eeprom->base[XGBE_SFP_BASE_VENDOR_NAME],
++			   XGBE_MOLEX_VENDOR, XGBE_SFP_BASE_VENDOR_NAME_LEN) == 0)
++			max = XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX;
++		else
++			max = XGBE_SFP_BASE_BR_10GBE_MAX;
+ 		break;
+ 	default:
+ 		return false;
+@@ -1151,7 +1158,10 @@ static void xgbe_phy_sfp_parse_eeprom(struct xgbe_prv_data *pdata)
+ 	}
+ 
+ 	/* Determine the type of SFP */
+-	if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
++	if (phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE &&
++	    xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
++		phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;
++	else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
+ 		phy_data->sfp_base = XGBE_SFP_BASE_10000_SR;
+ 	else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_LR)
+ 		phy_data->sfp_base = XGBE_SFP_BASE_10000_LR;
+@@ -1167,9 +1177,6 @@ static void xgbe_phy_sfp_parse_eeprom(struct xgbe_prv_data *pdata)
+ 		phy_data->sfp_base = XGBE_SFP_BASE_1000_CX;
+ 	else if (sfp_base[XGBE_SFP_BASE_1GBE_CC] & XGBE_SFP_BASE_1GBE_CC_T)
+ 		phy_data->sfp_base = XGBE_SFP_BASE_1000_T;
+-	else if ((phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE) &&
+-		 xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
+-		phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;
+ 
+ 	switch (phy_data->sfp_base) {
+ 	case XGBE_SFP_BASE_1000_T:
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
+index 4a6dfac857ca9..7c6e0811f2e63 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
+@@ -1451,26 +1451,57 @@ static void aq_check_txsa_expiration(struct aq_nic_s *nic)
+ 			egress_sa_threshold_expired);
+ }
+ 
++#define AQ_LOCKED_MDO_DEF(mdo)						\
++static int aq_locked_mdo_##mdo(struct macsec_context *ctx)		\
++{									\
++	struct aq_nic_s *nic = netdev_priv(ctx->netdev);		\
++	int ret;							\
++	mutex_lock(&nic->macsec_mutex);					\
++	ret = aq_mdo_##mdo(ctx);					\
++	mutex_unlock(&nic->macsec_mutex);				\
++	return ret;							\
++}
++
++AQ_LOCKED_MDO_DEF(dev_open)
++AQ_LOCKED_MDO_DEF(dev_stop)
++AQ_LOCKED_MDO_DEF(add_secy)
++AQ_LOCKED_MDO_DEF(upd_secy)
++AQ_LOCKED_MDO_DEF(del_secy)
++AQ_LOCKED_MDO_DEF(add_rxsc)
++AQ_LOCKED_MDO_DEF(upd_rxsc)
++AQ_LOCKED_MDO_DEF(del_rxsc)
++AQ_LOCKED_MDO_DEF(add_rxsa)
++AQ_LOCKED_MDO_DEF(upd_rxsa)
++AQ_LOCKED_MDO_DEF(del_rxsa)
++AQ_LOCKED_MDO_DEF(add_txsa)
++AQ_LOCKED_MDO_DEF(upd_txsa)
++AQ_LOCKED_MDO_DEF(del_txsa)
++AQ_LOCKED_MDO_DEF(get_dev_stats)
++AQ_LOCKED_MDO_DEF(get_tx_sc_stats)
++AQ_LOCKED_MDO_DEF(get_tx_sa_stats)
++AQ_LOCKED_MDO_DEF(get_rx_sc_stats)
++AQ_LOCKED_MDO_DEF(get_rx_sa_stats)
++
+ const struct macsec_ops aq_macsec_ops = {
+-	.mdo_dev_open = aq_mdo_dev_open,
+-	.mdo_dev_stop = aq_mdo_dev_stop,
+-	.mdo_add_secy = aq_mdo_add_secy,
+-	.mdo_upd_secy = aq_mdo_upd_secy,
+-	.mdo_del_secy = aq_mdo_del_secy,
+-	.mdo_add_rxsc = aq_mdo_add_rxsc,
+-	.mdo_upd_rxsc = aq_mdo_upd_rxsc,
+-	.mdo_del_rxsc = aq_mdo_del_rxsc,
+-	.mdo_add_rxsa = aq_mdo_add_rxsa,
+-	.mdo_upd_rxsa = aq_mdo_upd_rxsa,
+-	.mdo_del_rxsa = aq_mdo_del_rxsa,
+-	.mdo_add_txsa = aq_mdo_add_txsa,
+-	.mdo_upd_txsa = aq_mdo_upd_txsa,
+-	.mdo_del_txsa = aq_mdo_del_txsa,
+-	.mdo_get_dev_stats = aq_mdo_get_dev_stats,
+-	.mdo_get_tx_sc_stats = aq_mdo_get_tx_sc_stats,
+-	.mdo_get_tx_sa_stats = aq_mdo_get_tx_sa_stats,
+-	.mdo_get_rx_sc_stats = aq_mdo_get_rx_sc_stats,
+-	.mdo_get_rx_sa_stats = aq_mdo_get_rx_sa_stats,
++	.mdo_dev_open = aq_locked_mdo_dev_open,
++	.mdo_dev_stop = aq_locked_mdo_dev_stop,
++	.mdo_add_secy = aq_locked_mdo_add_secy,
++	.mdo_upd_secy = aq_locked_mdo_upd_secy,
++	.mdo_del_secy = aq_locked_mdo_del_secy,
++	.mdo_add_rxsc = aq_locked_mdo_add_rxsc,
++	.mdo_upd_rxsc = aq_locked_mdo_upd_rxsc,
++	.mdo_del_rxsc = aq_locked_mdo_del_rxsc,
++	.mdo_add_rxsa = aq_locked_mdo_add_rxsa,
++	.mdo_upd_rxsa = aq_locked_mdo_upd_rxsa,
++	.mdo_del_rxsa = aq_locked_mdo_del_rxsa,
++	.mdo_add_txsa = aq_locked_mdo_add_txsa,
++	.mdo_upd_txsa = aq_locked_mdo_upd_txsa,
++	.mdo_del_txsa = aq_locked_mdo_del_txsa,
++	.mdo_get_dev_stats = aq_locked_mdo_get_dev_stats,
++	.mdo_get_tx_sc_stats = aq_locked_mdo_get_tx_sc_stats,
++	.mdo_get_tx_sa_stats = aq_locked_mdo_get_tx_sa_stats,
++	.mdo_get_rx_sc_stats = aq_locked_mdo_get_rx_sc_stats,
++	.mdo_get_rx_sa_stats = aq_locked_mdo_get_rx_sa_stats,
+ };
+ 
+ int aq_macsec_init(struct aq_nic_s *nic)
+@@ -1492,6 +1523,7 @@ int aq_macsec_init(struct aq_nic_s *nic)
+ 
+ 	nic->ndev->features |= NETIF_F_HW_MACSEC;
+ 	nic->ndev->macsec_ops = &aq_macsec_ops;
++	mutex_init(&nic->macsec_mutex);
+ 
+ 	return 0;
+ }
+@@ -1515,7 +1547,7 @@ int aq_macsec_enable(struct aq_nic_s *nic)
+ 	if (!nic->macsec_cfg)
+ 		return 0;
+ 
+-	rtnl_lock();
++	mutex_lock(&nic->macsec_mutex);
+ 
+ 	if (nic->aq_fw_ops->send_macsec_req) {
+ 		struct macsec_cfg_request cfg = { 0 };
+@@ -1564,7 +1596,7 @@ int aq_macsec_enable(struct aq_nic_s *nic)
+ 	ret = aq_apply_macsec_cfg(nic);
+ 
+ unlock:
+-	rtnl_unlock();
++	mutex_unlock(&nic->macsec_mutex);
+ 	return ret;
+ }
+ 
+@@ -1576,9 +1608,9 @@ void aq_macsec_work(struct aq_nic_s *nic)
+ 	if (!netif_carrier_ok(nic->ndev))
+ 		return;
+ 
+-	rtnl_lock();
++	mutex_lock(&nic->macsec_mutex);
+ 	aq_check_txsa_expiration(nic);
+-	rtnl_unlock();
++	mutex_unlock(&nic->macsec_mutex);
+ }
+ 
+ int aq_macsec_rx_sa_cnt(struct aq_nic_s *nic)
+@@ -1589,21 +1621,30 @@ int aq_macsec_rx_sa_cnt(struct aq_nic_s *nic)
+ 	if (!cfg)
+ 		return 0;
+ 
++	mutex_lock(&nic->macsec_mutex);
++
+ 	for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
+ 		if (!test_bit(i, &cfg->rxsc_idx_busy))
+ 			continue;
+ 		cnt += hweight_long(cfg->aq_rxsc[i].rx_sa_idx_busy);
+ 	}
+ 
++	mutex_unlock(&nic->macsec_mutex);
+ 	return cnt;
+ }
+ 
+ int aq_macsec_tx_sc_cnt(struct aq_nic_s *nic)
+ {
++	int cnt;
++
+ 	if (!nic->macsec_cfg)
+ 		return 0;
+ 
+-	return hweight_long(nic->macsec_cfg->txsc_idx_busy);
++	mutex_lock(&nic->macsec_mutex);
++	cnt = hweight_long(nic->macsec_cfg->txsc_idx_busy);
++	mutex_unlock(&nic->macsec_mutex);
++
++	return cnt;
+ }
+ 
+ int aq_macsec_tx_sa_cnt(struct aq_nic_s *nic)
+@@ -1614,12 +1655,15 @@ int aq_macsec_tx_sa_cnt(struct aq_nic_s *nic)
+ 	if (!cfg)
+ 		return 0;
+ 
++	mutex_lock(&nic->macsec_mutex);
++
+ 	for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
+ 		if (!test_bit(i, &cfg->txsc_idx_busy))
+ 			continue;
+ 		cnt += hweight_long(cfg->aq_txsc[i].tx_sa_idx_busy);
+ 	}
+ 
++	mutex_unlock(&nic->macsec_mutex);
+ 	return cnt;
+ }
+ 
+@@ -1691,6 +1735,8 @@ u64 *aq_macsec_get_stats(struct aq_nic_s *nic, u64 *data)
+ 	if (!cfg)
+ 		return data;
+ 
++	mutex_lock(&nic->macsec_mutex);
++
+ 	aq_macsec_update_stats(nic);
+ 
+ 	common_stats = &cfg->stats;
+@@ -1773,5 +1819,7 @@ u64 *aq_macsec_get_stats(struct aq_nic_s *nic, u64 *data)
+ 
+ 	data += i;
+ 
++	mutex_unlock(&nic->macsec_mutex);
++
+ 	return data;
+ }
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
+index 926cca9a0c837..6da3efa289a3f 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
+@@ -152,6 +152,8 @@ struct aq_nic_s {
+ 	struct mutex fwreq_mutex;
+ #if IS_ENABLED(CONFIG_MACSEC)
+ 	struct aq_macsec_cfg *macsec_cfg;
++	/* mutex to protect data in macsec_cfg */
++	struct mutex macsec_mutex;
+ #endif
+ 	/* PTP support */
+ 	struct aq_ptp_s *aq_ptp;
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 4af2538259576..ca62c72eb7729 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -1241,7 +1241,12 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
+ 
+ 	enetc_rxbdr_wr(hw, idx, ENETC_RBBSR, ENETC_RXB_DMA_SIZE);
+ 
++	/* Also prepare the consumer index in case page allocation never
++	 * succeeds. In that case, hardware will never advance producer index
++	 * to match consumer index, and will drop all frames.
++	 */
+ 	enetc_rxbdr_wr(hw, idx, ENETC_RBPIR, 0);
++	enetc_rxbdr_wr(hw, idx, ENETC_RBCIR, 1);
+ 
+ 	/* enable Rx ints by setting pkt thr to 1 */
+ 	enetc_rxbdr_wr(hw, idx, ENETC_RBICR0, ENETC_RBICR0_ICEN | 0x1);
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index d8bdaf2e5365c..e183caf381765 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -2251,6 +2251,31 @@ static u32 fec_enet_register_offset[] = {
+ 	IEEE_R_DROP, IEEE_R_FRAME_OK, IEEE_R_CRC, IEEE_R_ALIGN, IEEE_R_MACERR,
+ 	IEEE_R_FDXFC, IEEE_R_OCTETS_OK
+ };
++/* for i.MX6ul */
++static u32 fec_enet_register_offset_6ul[] = {
++	FEC_IEVENT, FEC_IMASK, FEC_R_DES_ACTIVE_0, FEC_X_DES_ACTIVE_0,
++	FEC_ECNTRL, FEC_MII_DATA, FEC_MII_SPEED, FEC_MIB_CTRLSTAT, FEC_R_CNTRL,
++	FEC_X_CNTRL, FEC_ADDR_LOW, FEC_ADDR_HIGH, FEC_OPD, FEC_TXIC0, FEC_RXIC0,
++	FEC_HASH_TABLE_HIGH, FEC_HASH_TABLE_LOW, FEC_GRP_HASH_TABLE_HIGH,
++	FEC_GRP_HASH_TABLE_LOW, FEC_X_WMRK, FEC_R_DES_START_0,
++	FEC_X_DES_START_0, FEC_R_BUFF_SIZE_0, FEC_R_FIFO_RSFL, FEC_R_FIFO_RSEM,
++	FEC_R_FIFO_RAEM, FEC_R_FIFO_RAFL, FEC_RACC,
++	RMON_T_DROP, RMON_T_PACKETS, RMON_T_BC_PKT, RMON_T_MC_PKT,
++	RMON_T_CRC_ALIGN, RMON_T_UNDERSIZE, RMON_T_OVERSIZE, RMON_T_FRAG,
++	RMON_T_JAB, RMON_T_COL, RMON_T_P64, RMON_T_P65TO127, RMON_T_P128TO255,
++	RMON_T_P256TO511, RMON_T_P512TO1023, RMON_T_P1024TO2047,
++	RMON_T_P_GTE2048, RMON_T_OCTETS,
++	IEEE_T_DROP, IEEE_T_FRAME_OK, IEEE_T_1COL, IEEE_T_MCOL, IEEE_T_DEF,
++	IEEE_T_LCOL, IEEE_T_EXCOL, IEEE_T_MACERR, IEEE_T_CSERR, IEEE_T_SQE,
++	IEEE_T_FDXFC, IEEE_T_OCTETS_OK,
++	RMON_R_PACKETS, RMON_R_BC_PKT, RMON_R_MC_PKT, RMON_R_CRC_ALIGN,
++	RMON_R_UNDERSIZE, RMON_R_OVERSIZE, RMON_R_FRAG, RMON_R_JAB,
++	RMON_R_RESVD_O, RMON_R_P64, RMON_R_P65TO127, RMON_R_P128TO255,
++	RMON_R_P256TO511, RMON_R_P512TO1023, RMON_R_P1024TO2047,
++	RMON_R_P_GTE2048, RMON_R_OCTETS,
++	IEEE_R_DROP, IEEE_R_FRAME_OK, IEEE_R_CRC, IEEE_R_ALIGN, IEEE_R_MACERR,
++	IEEE_R_FDXFC, IEEE_R_OCTETS_OK
++};
+ #else
+ static __u32 fec_enet_register_version = 1;
+ static u32 fec_enet_register_offset[] = {
+@@ -2275,7 +2300,24 @@ static void fec_enet_get_regs(struct net_device *ndev,
+ 	u32 *buf = (u32 *)regbuf;
+ 	u32 i, off;
+ 	int ret;
++#if defined(CONFIG_M523x) || defined(CONFIG_M527x) || defined(CONFIG_M528x) || \
++	defined(CONFIG_M520x) || defined(CONFIG_M532x) || defined(CONFIG_ARM) || \
++	defined(CONFIG_ARM64) || defined(CONFIG_COMPILE_TEST)
++	u32 *reg_list;
++	u32 reg_cnt;
+ 
++	if (!of_machine_is_compatible("fsl,imx6ul")) {
++		reg_list = fec_enet_register_offset;
++		reg_cnt = ARRAY_SIZE(fec_enet_register_offset);
++	} else {
++		reg_list = fec_enet_register_offset_6ul;
++		reg_cnt = ARRAY_SIZE(fec_enet_register_offset_6ul);
++	}
++#else
++	/* coldfire */
++	static u32 *reg_list = fec_enet_register_offset;
++	static const u32 reg_cnt = ARRAY_SIZE(fec_enet_register_offset);
++#endif
+ 	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return;
+@@ -2284,8 +2326,8 @@ static void fec_enet_get_regs(struct net_device *ndev,
+ 
+ 	memset(buf, 0, regs->len);
+ 
+-	for (i = 0; i < ARRAY_SIZE(fec_enet_register_offset); i++) {
+-		off = fec_enet_register_offset[i];
++	for (i = 0; i < reg_cnt; i++) {
++		off = reg_list[i];
+ 
+ 		if ((off == FEC_R_BOUND || off == FEC_R_FSTART) &&
+ 		    !(fep->quirks & FEC_QUIRK_HAS_FRREG))
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_debugfs.c b/drivers/net/ethernet/huawei/hinic/hinic_debugfs.c
+index 19eb839177ec2..061952c6c21a4 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_debugfs.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_debugfs.c
+@@ -85,6 +85,7 @@ static int hinic_dbg_get_func_table(struct hinic_dev *nic_dev, int idx)
+ 	struct tag_sml_funcfg_tbl *funcfg_table_elem;
+ 	struct hinic_cmd_lt_rd *read_data;
+ 	u16 out_size = sizeof(*read_data);
++	int ret = ~0;
+ 	int err;
+ 
+ 	read_data = kzalloc(sizeof(*read_data), GFP_KERNEL);
+@@ -111,20 +112,25 @@ static int hinic_dbg_get_func_table(struct hinic_dev *nic_dev, int idx)
+ 
+ 	switch (idx) {
+ 	case VALID:
+-		return funcfg_table_elem->dw0.bs.valid;
++		ret = funcfg_table_elem->dw0.bs.valid;
++		break;
+ 	case RX_MODE:
+-		return funcfg_table_elem->dw0.bs.nic_rx_mode;
++		ret = funcfg_table_elem->dw0.bs.nic_rx_mode;
++		break;
+ 	case MTU:
+-		return funcfg_table_elem->dw1.bs.mtu;
++		ret = funcfg_table_elem->dw1.bs.mtu;
++		break;
+ 	case RQ_DEPTH:
+-		return funcfg_table_elem->dw13.bs.cfg_rq_depth;
++		ret = funcfg_table_elem->dw13.bs.cfg_rq_depth;
++		break;
+ 	case QUEUE_NUM:
+-		return funcfg_table_elem->dw13.bs.cfg_q_num;
++		ret = funcfg_table_elem->dw13.bs.cfg_q_num;
++		break;
+ 	}
+ 
+ 	kfree(read_data);
+ 
+-	return ~0;
++	return ret;
+ }
+ 
+ static ssize_t hinic_dbg_cmd_read(struct file *filp, char __user *buffer, size_t count,
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
+index 21b8235952d33..dff979f5d08b5 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
+@@ -929,7 +929,7 @@ int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif,
+ 
+ err_set_cmdq_depth:
+ 	hinic_ceq_unregister_cb(&func_to_io->ceqs, HINIC_CEQ_CMDQ);
+-
++	free_cmdq(&cmdqs->cmdq[HINIC_CMDQ_SYNC]);
+ err_cmdq_ctxt:
+ 	hinic_wqs_cmdq_free(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
+ 			    HINIC_MAX_CMDQ_TYPES);
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
+index 799b85c88eff8..bcf2476512a5a 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
+@@ -892,7 +892,7 @@ int hinic_set_interrupt_cfg(struct hinic_hwdev *hwdev,
+ 	if (err)
+ 		return -EINVAL;
+ 
+-	interrupt_info->lli_credit_cnt = temp_info.lli_timer_cnt;
++	interrupt_info->lli_credit_cnt = temp_info.lli_credit_cnt;
+ 	interrupt_info->lli_timer_cnt = temp_info.lli_timer_cnt;
+ 
+ 	err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_sriov.c b/drivers/net/ethernet/huawei/hinic/hinic_sriov.c
+index f8a26459ff653..4d82ebfe27f93 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_sriov.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_sriov.c
+@@ -1178,7 +1178,6 @@ int hinic_vf_func_init(struct hinic_hwdev *hwdev)
+ 			dev_err(&hwdev->hwif->pdev->dev,
+ 				"Failed to register VF, err: %d, status: 0x%x, out size: 0x%x\n",
+ 				err, register_info.status, out_size);
+-			hinic_unregister_vf_mbox_cb(hwdev, HINIC_MOD_L2NIC);
+ 			return -EIO;
+ 		}
+ 	} else {
+diff --git a/drivers/net/ethernet/ibm/ehea/ehea_main.c b/drivers/net/ethernet/ibm/ehea/ehea_main.c
+index f630667364253..28a5f8d73a614 100644
+--- a/drivers/net/ethernet/ibm/ehea/ehea_main.c
++++ b/drivers/net/ethernet/ibm/ehea/ehea_main.c
+@@ -2897,6 +2897,7 @@ static struct device *ehea_register_port(struct ehea_port *port,
+ 	ret = of_device_register(&port->ofdev);
+ 	if (ret) {
+ 		pr_err("failed to register device. ret=%d\n", ret);
++		put_device(&port->ofdev.dev);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index cc5f5c237774f..144c4824b5e80 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -3083,10 +3083,17 @@ static int i40e_get_rss_hash_opts(struct i40e_pf *pf, struct ethtool_rxnfc *cmd)
+ 
+ 		if (cmd->flow_type == TCP_V4_FLOW ||
+ 		    cmd->flow_type == UDP_V4_FLOW) {
+-			if (i_set & I40E_L3_SRC_MASK)
+-				cmd->data |= RXH_IP_SRC;
+-			if (i_set & I40E_L3_DST_MASK)
+-				cmd->data |= RXH_IP_DST;
++			if (hw->mac.type == I40E_MAC_X722) {
++				if (i_set & I40E_X722_L3_SRC_MASK)
++					cmd->data |= RXH_IP_SRC;
++				if (i_set & I40E_X722_L3_DST_MASK)
++					cmd->data |= RXH_IP_DST;
++			} else {
++				if (i_set & I40E_L3_SRC_MASK)
++					cmd->data |= RXH_IP_SRC;
++				if (i_set & I40E_L3_DST_MASK)
++					cmd->data |= RXH_IP_DST;
++			}
+ 		} else if (cmd->flow_type == TCP_V6_FLOW ||
+ 			  cmd->flow_type == UDP_V6_FLOW) {
+ 			if (i_set & I40E_L3_V6_SRC_MASK)
+@@ -3393,12 +3400,15 @@ static int i40e_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd,
+ 
+ /**
+  * i40e_get_rss_hash_bits - Read RSS Hash bits from register
++ * @hw: hw structure
+  * @nfc: pointer to user request
+  * @i_setc: bits currently set
+  *
+  * Returns value of bits to be set per user request
+  **/
+-static u64 i40e_get_rss_hash_bits(struct ethtool_rxnfc *nfc, u64 i_setc)
++static u64 i40e_get_rss_hash_bits(struct i40e_hw *hw,
++				  struct ethtool_rxnfc *nfc,
++				  u64 i_setc)
+ {
+ 	u64 i_set = i_setc;
+ 	u64 src_l3 = 0, dst_l3 = 0;
+@@ -3417,8 +3427,13 @@ static u64 i40e_get_rss_hash_bits(struct ethtool_rxnfc *nfc, u64 i_setc)
+ 		dst_l3 = I40E_L3_V6_DST_MASK;
+ 	} else if (nfc->flow_type == TCP_V4_FLOW ||
+ 		  nfc->flow_type == UDP_V4_FLOW) {
+-		src_l3 = I40E_L3_SRC_MASK;
+-		dst_l3 = I40E_L3_DST_MASK;
++		if (hw->mac.type == I40E_MAC_X722) {
++			src_l3 = I40E_X722_L3_SRC_MASK;
++			dst_l3 = I40E_X722_L3_DST_MASK;
++		} else {
++			src_l3 = I40E_L3_SRC_MASK;
++			dst_l3 = I40E_L3_DST_MASK;
++		}
+ 	} else {
+ 		/* Any other flow type are not supported here */
+ 		return i_set;
+@@ -3436,6 +3451,7 @@ static u64 i40e_get_rss_hash_bits(struct ethtool_rxnfc *nfc, u64 i_setc)
+ 	return i_set;
+ }
+ 
++#define FLOW_PCTYPES_SIZE 64
+ /**
+  * i40e_set_rss_hash_opt - Enable/Disable flow types for RSS hash
+  * @pf: pointer to the physical function struct
+@@ -3448,9 +3464,11 @@ static int i40e_set_rss_hash_opt(struct i40e_pf *pf, struct ethtool_rxnfc *nfc)
+ 	struct i40e_hw *hw = &pf->hw;
+ 	u64 hena = (u64)i40e_read_rx_ctl(hw, I40E_PFQF_HENA(0)) |
+ 		   ((u64)i40e_read_rx_ctl(hw, I40E_PFQF_HENA(1)) << 32);
+-	u8 flow_pctype = 0;
++	DECLARE_BITMAP(flow_pctypes, FLOW_PCTYPES_SIZE);
+ 	u64 i_set, i_setc;
+ 
++	bitmap_zero(flow_pctypes, FLOW_PCTYPES_SIZE);
++
+ 	if (pf->flags & I40E_FLAG_MFP_ENABLED) {
+ 		dev_err(&pf->pdev->dev,
+ 			"Change of RSS hash input set is not supported when MFP mode is enabled\n");
+@@ -3466,36 +3484,35 @@ static int i40e_set_rss_hash_opt(struct i40e_pf *pf, struct ethtool_rxnfc *nfc)
+ 
+ 	switch (nfc->flow_type) {
+ 	case TCP_V4_FLOW:
+-		flow_pctype = I40E_FILTER_PCTYPE_NONF_IPV4_TCP;
++		set_bit(I40E_FILTER_PCTYPE_NONF_IPV4_TCP, flow_pctypes);
+ 		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
+-			hena |=
+-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK);
++			set_bit(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK,
++				flow_pctypes);
+ 		break;
+ 	case TCP_V6_FLOW:
+-		flow_pctype = I40E_FILTER_PCTYPE_NONF_IPV6_TCP;
+-		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
+-			hena |=
+-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK);
++		set_bit(I40E_FILTER_PCTYPE_NONF_IPV6_TCP, flow_pctypes);
+ 		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
+-			hena |=
+-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK);
++			set_bit(I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK,
++				flow_pctypes);
+ 		break;
+ 	case UDP_V4_FLOW:
+-		flow_pctype = I40E_FILTER_PCTYPE_NONF_IPV4_UDP;
+-		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
+-			hena |=
+-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP) |
+-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP);
+-
++		set_bit(I40E_FILTER_PCTYPE_NONF_IPV4_UDP, flow_pctypes);
++		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE) {
++			set_bit(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP,
++				flow_pctypes);
++			set_bit(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP,
++				flow_pctypes);
++		}
+ 		hena |= BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV4);
+ 		break;
+ 	case UDP_V6_FLOW:
+-		flow_pctype = I40E_FILTER_PCTYPE_NONF_IPV6_UDP;
+-		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
+-			hena |=
+-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) |
+-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP);
+-
++		set_bit(I40E_FILTER_PCTYPE_NONF_IPV6_UDP, flow_pctypes);
++		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE) {
++			set_bit(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP,
++				flow_pctypes);
++			set_bit(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP,
++				flow_pctypes);
++		}
+ 		hena |= BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV6);
+ 		break;
+ 	case AH_ESP_V4_FLOW:
+@@ -3528,17 +3545,20 @@ static int i40e_set_rss_hash_opt(struct i40e_pf *pf, struct ethtool_rxnfc *nfc)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (flow_pctype) {
+-		i_setc = (u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(0,
+-					       flow_pctype)) |
+-			((u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(1,
+-					       flow_pctype)) << 32);
+-		i_set = i40e_get_rss_hash_bits(nfc, i_setc);
+-		i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_pctype),
+-				  (u32)i_set);
+-		i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_pctype),
+-				  (u32)(i_set >> 32));
+-		hena |= BIT_ULL(flow_pctype);
++	if (bitmap_weight(flow_pctypes, FLOW_PCTYPES_SIZE)) {
++		u8 flow_id;
++
++		for_each_set_bit(flow_id, flow_pctypes, FLOW_PCTYPES_SIZE) {
++			i_setc = (u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_id)) |
++				 ((u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_id)) << 32);
++			i_set = i40e_get_rss_hash_bits(&pf->hw, nfc, i_setc);
++
++			i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_id),
++					  (u32)i_set);
++			i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_id),
++					  (u32)(i_set >> 32));
++			hena |= BIT_ULL(flow_id);
++		}
+ 	}
+ 
+ 	i40e_write_rx_ctl(hw, I40E_PFQF_HENA(0), (u32)hena);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
+index 446672a7e39fb..0872448c0e804 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
+@@ -1404,6 +1404,10 @@ struct i40e_lldp_variables {
+ #define I40E_PFQF_CTL_0_HASHLUTSIZE_512	0x00010000
+ 
+ /* INPUT SET MASK for RSS, flow director, and flexible payload */
++#define I40E_X722_L3_SRC_SHIFT		49
++#define I40E_X722_L3_SRC_MASK		(0x3ULL << I40E_X722_L3_SRC_SHIFT)
++#define I40E_X722_L3_DST_SHIFT		41
++#define I40E_X722_L3_DST_MASK		(0x3ULL << I40E_X722_L3_DST_SHIFT)
+ #define I40E_L3_SRC_SHIFT		47
+ #define I40E_L3_SRC_MASK		(0x3ULL << I40E_L3_SRC_SHIFT)
+ #define I40E_L3_V6_SRC_SHIFT		43
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index ffff7de801af7..381b28a087467 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1483,10 +1483,12 @@ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
+ 	if (test_bit(__I40E_VF_RESETS_DISABLED, pf->state))
+ 		return true;
+ 
+-	/* If the VFs have been disabled, this means something else is
+-	 * resetting the VF, so we shouldn't continue.
+-	 */
+-	if (test_and_set_bit(__I40E_VF_DISABLE, pf->state))
++	/* Bail out if VFs are disabled. */
++	if (test_bit(__I40E_VF_DISABLE, pf->state))
++		return true;
++
++	/* If VF is being reset already we don't need to continue. */
++	if (test_and_set_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+ 		return true;
+ 
+ 	i40e_trigger_vf_reset(vf, flr);
+@@ -1523,7 +1525,7 @@ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
+ 	i40e_cleanup_reset_vf(vf);
+ 
+ 	i40e_flush(hw);
+-	clear_bit(__I40E_VF_DISABLE, pf->state);
++	clear_bit(I40E_VF_STATE_RESETTING, &vf->vf_states);
+ 
+ 	return true;
+ }
+@@ -1556,8 +1558,12 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
+ 		return false;
+ 
+ 	/* Begin reset on all VFs at once */
+-	for (v = 0; v < pf->num_alloc_vfs; v++)
+-		i40e_trigger_vf_reset(&pf->vf[v], flr);
++	for (v = 0; v < pf->num_alloc_vfs; v++) {
++		vf = &pf->vf[v];
++		/* If VF is being reset no need to trigger reset again */
++		if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
++			i40e_trigger_vf_reset(&pf->vf[v], flr);
++	}
+ 
+ 	/* HW requires some time to make sure it can flush the FIFO for a VF
+ 	 * when it resets it. Poll the VPGEN_VFRSTAT register for each VF in
+@@ -1573,9 +1579,11 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
+ 		 */
+ 		while (v < pf->num_alloc_vfs) {
+ 			vf = &pf->vf[v];
+-			reg = rd32(hw, I40E_VPGEN_VFRSTAT(vf->vf_id));
+-			if (!(reg & I40E_VPGEN_VFRSTAT_VFRD_MASK))
+-				break;
++			if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states)) {
++				reg = rd32(hw, I40E_VPGEN_VFRSTAT(vf->vf_id));
++				if (!(reg & I40E_VPGEN_VFRSTAT_VFRD_MASK))
++					break;
++			}
+ 
+ 			/* If the current VF has finished resetting, move on
+ 			 * to the next VF in sequence.
+@@ -1603,6 +1611,10 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
+ 		if (pf->vf[v].lan_vsi_idx == 0)
+ 			continue;
+ 
++		/* If VF is reset in another thread just continue */
++		if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
++			continue;
++
+ 		i40e_vsi_stop_rings_no_wait(pf->vsi[pf->vf[v].lan_vsi_idx]);
+ 	}
+ 
+@@ -1614,6 +1626,10 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
+ 		if (pf->vf[v].lan_vsi_idx == 0)
+ 			continue;
+ 
++		/* If VF is reset in another thread just continue */
++		if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
++			continue;
++
+ 		i40e_vsi_wait_queues_disabled(pf->vsi[pf->vf[v].lan_vsi_idx]);
+ 	}
+ 
+@@ -1623,8 +1639,13 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
+ 	mdelay(50);
+ 
+ 	/* Finish the reset on each VF */
+-	for (v = 0; v < pf->num_alloc_vfs; v++)
++	for (v = 0; v < pf->num_alloc_vfs; v++) {
++		/* If VF is reset in another thread just continue */
++		if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
++			continue;
++
+ 		i40e_cleanup_reset_vf(&pf->vf[v]);
++	}
+ 
+ 	i40e_flush(hw);
+ 	clear_bit(__I40E_VF_DISABLE, pf->state);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index a554d0a0b09bd..358bbdb587951 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -39,6 +39,7 @@ enum i40e_vf_states {
+ 	I40E_VF_STATE_MC_PROMISC,
+ 	I40E_VF_STATE_UC_PROMISC,
+ 	I40E_VF_STATE_PRE_ENABLE,
++	I40E_VF_STATE_RESETTING
+ };
+ 
+ /* VF capabilities */
+diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c
+index 2d0c52f7106bc..5ea626b1e5783 100644
+--- a/drivers/net/ethernet/lantiq_etop.c
++++ b/drivers/net/ethernet/lantiq_etop.c
+@@ -466,7 +466,6 @@ ltq_etop_tx(struct sk_buff *skb, struct net_device *dev)
+ 	len = skb->len < ETH_ZLEN ? ETH_ZLEN : skb->len;
+ 
+ 	if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) || ch->skb[ch->dma.desc]) {
+-		dev_kfree_skb_any(skb);
+ 		netdev_err(dev, "tx ring full\n");
+ 		netif_tx_stop_queue(txq);
+ 		return NETDEV_TX_BUSY;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 94426d29025eb..6612b2c0be486 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -1853,7 +1853,7 @@ void mlx5_cmd_init_async_ctx(struct mlx5_core_dev *dev,
+ 	ctx->dev = dev;
+ 	/* Starts at 1 to avoid doing wake_up if we are not cleaning up */
+ 	atomic_set(&ctx->num_inflight, 1);
+-	init_waitqueue_head(&ctx->wait);
++	init_completion(&ctx->inflight_done);
+ }
+ EXPORT_SYMBOL(mlx5_cmd_init_async_ctx);
+ 
+@@ -1867,8 +1867,8 @@ EXPORT_SYMBOL(mlx5_cmd_init_async_ctx);
+  */
+ void mlx5_cmd_cleanup_async_ctx(struct mlx5_async_ctx *ctx)
+ {
+-	atomic_dec(&ctx->num_inflight);
+-	wait_event(ctx->wait, atomic_read(&ctx->num_inflight) == 0);
++	if (!atomic_dec_and_test(&ctx->num_inflight))
++		wait_for_completion(&ctx->inflight_done);
+ }
+ EXPORT_SYMBOL(mlx5_cmd_cleanup_async_ctx);
+ 
+@@ -1879,7 +1879,7 @@ static void mlx5_cmd_exec_cb_handler(int status, void *_work)
+ 
+ 	work->user_callback(status, work);
+ 	if (atomic_dec_and_test(&ctx->num_inflight))
+-		wake_up(&ctx->wait);
++		complete(&ctx->inflight_done);
+ }
+ 
+ int mlx5_cmd_exec_cb(struct mlx5_async_ctx *ctx, void *in, int in_size,
+@@ -1895,7 +1895,7 @@ int mlx5_cmd_exec_cb(struct mlx5_async_ctx *ctx, void *in, int in_size,
+ 	ret = cmd_exec(ctx->dev, in, in_size, out, out_size,
+ 		       mlx5_cmd_exec_cb_handler, work, false);
+ 	if (ret && atomic_dec_and_test(&ctx->num_inflight))
+-		wake_up(&ctx->wait);
++		complete(&ctx->inflight_done);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+index 26f7fab109d97..d08bd22dc5698 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+@@ -113,7 +113,6 @@ static bool mlx5e_ipsec_update_esn_state(struct mlx5e_ipsec_sa_entry *sa_entry)
+ 	struct xfrm_replay_state_esn *replay_esn;
+ 	u32 seq_bottom = 0;
+ 	u8 overlap;
+-	u32 *esn;
+ 
+ 	if (!(sa_entry->x->props.flags & XFRM_STATE_ESN)) {
+ 		sa_entry->esn_state.trigger = 0;
+@@ -128,11 +127,9 @@ static bool mlx5e_ipsec_update_esn_state(struct mlx5e_ipsec_sa_entry *sa_entry)
+ 
+ 	sa_entry->esn_state.esn = xfrm_replay_seqhi(sa_entry->x,
+ 						    htonl(seq_bottom));
+-	esn = &sa_entry->esn_state.esn;
+ 
+ 	sa_entry->esn_state.trigger = 1;
+ 	if (unlikely(overlap && seq_bottom < MLX5E_IPSEC_ESN_SCOPE_MID)) {
+-		++(*esn);
+ 		sa_entry->esn_state.overlap = 0;
+ 		return true;
+ 	} else if (unlikely(!overlap &&
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
+index 839a01da110f3..8ff16318e32dc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
+@@ -122,7 +122,7 @@ void mlx5_mpfs_cleanup(struct mlx5_core_dev *dev)
+ {
+ 	struct mlx5_mpfs *mpfs = dev->priv.mpfs;
+ 
+-	if (!MLX5_ESWITCH_MANAGER(dev))
++	if (!mpfs)
+ 		return;
+ 
+ 	WARN_ON(!hlist_empty(mpfs->hash));
+@@ -137,7 +137,7 @@ int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac)
+ 	int err = 0;
+ 	u32 index;
+ 
+-	if (!MLX5_ESWITCH_MANAGER(dev))
++	if (!mpfs)
+ 		return 0;
+ 
+ 	mutex_lock(&mpfs->lock);
+@@ -185,7 +185,7 @@ int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac)
+ 	int err = 0;
+ 	u32 index;
+ 
+-	if (!MLX5_ESWITCH_MANAGER(dev))
++	if (!mpfs)
+ 		return 0;
+ 
+ 	mutex_lock(&mpfs->lock);
+diff --git a/drivers/net/ethernet/micrel/ksz884x.c b/drivers/net/ethernet/micrel/ksz884x.c
+index 9ed264ed70705..1fa16064142d9 100644
+--- a/drivers/net/ethernet/micrel/ksz884x.c
++++ b/drivers/net/ethernet/micrel/ksz884x.c
+@@ -6923,7 +6923,7 @@ static int pcidev_init(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	char banner[sizeof(version)];
+ 	struct ksz_switch *sw = NULL;
+ 
+-	result = pci_enable_device(pdev);
++	result = pcim_enable_device(pdev);
+ 	if (result)
+ 		return result;
+ 
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index ef3634d1b9f7f..b9acee214bb6a 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -1958,11 +1958,13 @@ static int netsec_register_mdio(struct netsec_priv *priv, u32 phy_addr)
+ 			ret = PTR_ERR(priv->phydev);
+ 			dev_err(priv->dev, "get_phy_device err(%d)\n", ret);
+ 			priv->phydev = NULL;
++			mdiobus_unregister(bus);
+ 			return -ENODEV;
+ 		}
+ 
+ 		ret = phy_device_register(priv->phydev);
+ 		if (ret) {
++			phy_device_free(priv->phydev);
+ 			mdiobus_unregister(bus);
+ 			dev_err(priv->dev,
+ 				"phy_device_register err(%d)\n", ret);
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index d0407f44de78d..61b9dc511d904 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -3262,11 +3262,34 @@ struct fc_function_template qla2xxx_transport_vport_functions = {
+ 	.bsg_timeout = qla24xx_bsg_timeout,
+ };
+ 
++static uint
++qla2x00_get_host_supported_speeds(scsi_qla_host_t *vha, uint speeds)
++{
++	uint supported_speeds = FC_PORTSPEED_UNKNOWN;
++
++	if (speeds & FDMI_PORT_SPEED_64GB)
++		supported_speeds |= FC_PORTSPEED_64GBIT;
++	if (speeds & FDMI_PORT_SPEED_32GB)
++		supported_speeds |= FC_PORTSPEED_32GBIT;
++	if (speeds & FDMI_PORT_SPEED_16GB)
++		supported_speeds |= FC_PORTSPEED_16GBIT;
++	if (speeds & FDMI_PORT_SPEED_8GB)
++		supported_speeds |= FC_PORTSPEED_8GBIT;
++	if (speeds & FDMI_PORT_SPEED_4GB)
++		supported_speeds |= FC_PORTSPEED_4GBIT;
++	if (speeds & FDMI_PORT_SPEED_2GB)
++		supported_speeds |= FC_PORTSPEED_2GBIT;
++	if (speeds & FDMI_PORT_SPEED_1GB)
++		supported_speeds |= FC_PORTSPEED_1GBIT;
++
++	return supported_speeds;
++}
++
+ void
+ qla2x00_init_host_attr(scsi_qla_host_t *vha)
+ {
+ 	struct qla_hw_data *ha = vha->hw;
+-	u32 speeds = FC_PORTSPEED_UNKNOWN;
++	u32 speeds = 0, fdmi_speed = 0;
+ 
+ 	fc_host_dev_loss_tmo(vha->host) = ha->port_down_retry_count;
+ 	fc_host_node_name(vha->host) = wwn_to_u64(vha->node_name);
+@@ -3276,7 +3299,8 @@ qla2x00_init_host_attr(scsi_qla_host_t *vha)
+ 	fc_host_max_npiv_vports(vha->host) = ha->max_npiv_vports;
+ 	fc_host_npiv_vports_inuse(vha->host) = ha->cur_vport_count;
+ 
+-	speeds = qla25xx_fdmi_port_speed_capability(ha);
++	fdmi_speed = qla25xx_fdmi_port_speed_capability(ha);
++	speeds = qla2x00_get_host_supported_speeds(vha, fdmi_speed);
+ 
+ 	fc_host_supported_speeds(vha->host) = speeds;
+ }
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index bd068d3bb455d..58f66176bcb28 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1074,6 +1074,7 @@ static blk_status_t sd_setup_write_same_cmnd(struct scsi_cmnd *cmd)
+ 	struct bio *bio = rq->bio;
+ 	u64 lba = sectors_to_logical(sdp, blk_rq_pos(rq));
+ 	u32 nr_blocks = sectors_to_logical(sdp, blk_rq_sectors(rq));
++	unsigned int nr_bytes = blk_rq_bytes(rq);
+ 	blk_status_t ret;
+ 
+ 	if (sdkp->device->no_write_same)
+@@ -1110,7 +1111,7 @@ static blk_status_t sd_setup_write_same_cmnd(struct scsi_cmnd *cmd)
+ 	 */
+ 	rq->__data_len = sdp->sector_size;
+ 	ret = scsi_alloc_sgtables(cmd);
+-	rq->__data_len = blk_rq_bytes(rq);
++	rq->__data_len = nr_bytes;
+ 
+ 	return ret;
+ }
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 537bee8d2258a..f3744ac805ecb 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -342,6 +342,9 @@ static void omap8250_restore_regs(struct uart_8250_port *up)
+ 	omap8250_update_mdr1(up, priv);
+ 
+ 	up->port.ops->set_mctrl(&up->port, up->port.mctrl);
++
++	if (up->port.rs485.flags & SER_RS485_ENABLED)
++		serial8250_em485_stop_tx(up);
+ }
+ 
+ /*
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index df10cc606582b..b6656898699d1 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -1531,7 +1531,6 @@ static int pci_fintek_init(struct pci_dev *dev)
+ 	resource_size_t bar_data[3];
+ 	u8 config_base;
+ 	struct serial_private *priv = pci_get_drvdata(dev);
+-	struct uart_8250_port *port;
+ 
+ 	if (!(pci_resource_flags(dev, 5) & IORESOURCE_IO) ||
+ 			!(pci_resource_flags(dev, 4) & IORESOURCE_IO) ||
+@@ -1578,13 +1577,7 @@ static int pci_fintek_init(struct pci_dev *dev)
+ 
+ 		pci_write_config_byte(dev, config_base + 0x06, dev->irq);
+ 
+-		if (priv) {
+-			/* re-apply RS232/485 mode when
+-			 * pciserial_resume_ports()
+-			 */
+-			port = serial8250_get_port(priv->line[i]);
+-			pci_fintek_rs485_config(&port->port, NULL);
+-		} else {
++		if (!priv) {
+ 			/* First init without port data
+ 			 * force init to RS232 Mode
+ 			 */
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 71d143c002488..8b3756e4bb05c 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -592,7 +592,7 @@ EXPORT_SYMBOL_GPL(serial8250_rpm_put);
+ static int serial8250_em485_init(struct uart_8250_port *p)
+ {
+ 	if (p->em485)
+-		return 0;
++		goto deassert_rts;
+ 
+ 	p->em485 = kmalloc(sizeof(struct uart_8250_em485), GFP_ATOMIC);
+ 	if (!p->em485)
+@@ -608,7 +608,9 @@ static int serial8250_em485_init(struct uart_8250_port *p)
+ 	p->em485->active_timer = NULL;
+ 	p->em485->tx_stopped = true;
+ 
+-	p->rs485_stop_tx(p);
++deassert_rts:
++	if (p->em485->tx_stopped)
++		p->rs485_stop_tx(p);
+ 
+ 	return 0;
+ }
+@@ -2030,6 +2032,9 @@ EXPORT_SYMBOL_GPL(serial8250_do_set_mctrl);
+ 
+ static void serial8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
+ {
++	if (port->rs485.flags & SER_RS485_ENABLED)
++		return;
++
+ 	if (port->set_mctrl)
+ 		port->set_mctrl(port, mctrl);
+ 	else
+@@ -3161,9 +3166,6 @@ static void serial8250_config_port(struct uart_port *port, int flags)
+ 	if (flags & UART_CONFIG_TYPE)
+ 		autoconfig(up);
+ 
+-	if (port->rs485.flags & SER_RS485_ENABLED)
+-		port->rs485_config(port, &port->rs485);
+-
+ 	/* if access method is AU, it is a 16550 with a quirk */
+ 	if (port->type == PORT_16550A && port->iotype == UPIO_AU)
+ 		up->bugs |= UART_BUG_NOMSR;
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 269d1e3a025d2..43aca5a2ef0f2 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2669,10 +2669,6 @@ static int lpuart_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto failed_irq_request;
+ 
+-	ret = uart_add_one_port(&lpuart_reg, &sport->port);
+-	if (ret)
+-		goto failed_attach_port;
+-
+ 	ret = uart_get_rs485_mode(&sport->port);
+ 	if (ret)
+ 		goto failed_get_rs485;
+@@ -2684,7 +2680,9 @@ static int lpuart_probe(struct platform_device *pdev)
+ 	    sport->port.rs485.delay_rts_after_send)
+ 		dev_err(&pdev->dev, "driver doesn't support RTS delays\n");
+ 
+-	sport->port.rs485_config(&sport->port, &sport->port.rs485);
++	ret = uart_add_one_port(&lpuart_reg, &sport->port);
++	if (ret)
++		goto failed_attach_port;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index bfbca711bbf9b..cf3d531657762 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -398,8 +398,7 @@ static void imx_uart_rts_active(struct imx_port *sport, u32 *ucr2)
+ {
+ 	*ucr2 &= ~(UCR2_CTSC | UCR2_CTS);
+ 
+-	sport->port.mctrl |= TIOCM_RTS;
+-	mctrl_gpio_set(sport->gpios, sport->port.mctrl);
++	mctrl_gpio_set(sport->gpios, sport->port.mctrl | TIOCM_RTS);
+ }
+ 
+ /* called with port.lock taken and irqs caller dependent */
+@@ -408,8 +407,7 @@ static void imx_uart_rts_inactive(struct imx_port *sport, u32 *ucr2)
+ 	*ucr2 &= ~UCR2_CTSC;
+ 	*ucr2 |= UCR2_CTS;
+ 
+-	sport->port.mctrl &= ~TIOCM_RTS;
+-	mctrl_gpio_set(sport->gpios, sport->port.mctrl);
++	mctrl_gpio_set(sport->gpios, sport->port.mctrl & ~TIOCM_RTS);
+ }
+ 
+ static void start_hrtimer_ms(struct hrtimer *hrt, unsigned long msec)
+@@ -2381,8 +2379,6 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev,
+ 			"low-active RTS not possible when receiver is off, enabling receiver\n");
+ 
+-	imx_uart_rs485_config(&sport->port, &sport->port.rs485);
+-
+ 	/* Disable interrupts before requesting them */
+ 	ucr1 = imx_uart_readl(sport, UCR1);
+ 	ucr1 &= ~(UCR1_ADEN | UCR1_TRDYEN | UCR1_IDEN | UCR1_RRDYEN | UCR1_RTSDEN);
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index b578f7090b637..605f928f0636a 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -42,6 +42,11 @@ static struct lock_class_key port_lock_key;
+ 
+ #define HIGH_BITS_OFFSET	((sizeof(long)-sizeof(int))*8)
+ 
++/*
++ * Max time with active RTS before/after data is sent.
++ */
++#define RS485_MAX_RTS_DELAY	100 /* msecs */
++
+ static void uart_change_speed(struct tty_struct *tty, struct uart_state *state,
+ 					struct ktermios *old_termios);
+ static void uart_wait_until_sent(struct tty_struct *tty, int timeout);
+@@ -144,15 +149,10 @@ uart_update_mctrl(struct uart_port *port, unsigned int set, unsigned int clear)
+ 	unsigned long flags;
+ 	unsigned int old;
+ 
+-	if (port->rs485.flags & SER_RS485_ENABLED) {
+-		set &= ~TIOCM_RTS;
+-		clear &= ~TIOCM_RTS;
+-	}
+-
+ 	spin_lock_irqsave(&port->lock, flags);
+ 	old = port->mctrl;
+ 	port->mctrl = (old & ~clear) | set;
+-	if (old != port->mctrl)
++	if (old != port->mctrl && !(port->rs485.flags & SER_RS485_ENABLED))
+ 		port->ops->set_mctrl(port, port->mctrl);
+ 	spin_unlock_irqrestore(&port->lock, flags);
+ }
+@@ -1326,8 +1326,41 @@ static int uart_set_rs485_config(struct uart_port *port,
+ 	if (copy_from_user(&rs485, rs485_user, sizeof(*rs485_user)))
+ 		return -EFAULT;
+ 
++	/* pick sane settings if the user hasn't */
++	if (!(rs485.flags & SER_RS485_RTS_ON_SEND) ==
++	    !(rs485.flags & SER_RS485_RTS_AFTER_SEND)) {
++		dev_warn_ratelimited(port->dev,
++			"%s (%d): invalid RTS setting, using RTS_ON_SEND instead\n",
++			port->name, port->line);
++		rs485.flags |= SER_RS485_RTS_ON_SEND;
++		rs485.flags &= ~SER_RS485_RTS_AFTER_SEND;
++	}
++
++	if (rs485.delay_rts_before_send > RS485_MAX_RTS_DELAY) {
++		rs485.delay_rts_before_send = RS485_MAX_RTS_DELAY;
++		dev_warn_ratelimited(port->dev,
++			"%s (%d): RTS delay before sending clamped to %u ms\n",
++			port->name, port->line, rs485.delay_rts_before_send);
++	}
++
++	if (rs485.delay_rts_after_send > RS485_MAX_RTS_DELAY) {
++		rs485.delay_rts_after_send = RS485_MAX_RTS_DELAY;
++		dev_warn_ratelimited(port->dev,
++			"%s (%d): RTS delay after sending clamped to %u ms\n",
++			port->name, port->line, rs485.delay_rts_after_send);
++	}
++	/* Return clean padding area to userspace */
++	memset(rs485.padding, 0, sizeof(rs485.padding));
++
+ 	spin_lock_irqsave(&port->lock, flags);
+ 	ret = port->rs485_config(port, &rs485);
++	if (!ret) {
++		port->rs485 = rs485;
++
++		/* Reset RTS and other mctrl lines when disabling RS485 */
++		if (!(rs485.flags & SER_RS485_ENABLED))
++			port->ops->set_mctrl(port, port->mctrl);
++	}
+ 	spin_unlock_irqrestore(&port->lock, flags);
+ 	if (ret)
+ 		return ret;
+@@ -2302,7 +2335,8 @@ int uart_resume_port(struct uart_driver *drv, struct uart_port *uport)
+ 
+ 		uart_change_pm(state, UART_PM_STATE_ON);
+ 		spin_lock_irq(&uport->lock);
+-		ops->set_mctrl(uport, 0);
++		if (!(uport->rs485.flags & SER_RS485_ENABLED))
++			ops->set_mctrl(uport, 0);
+ 		spin_unlock_irq(&uport->lock);
+ 		if (console_suspend_enabled || !uart_console(uport)) {
+ 			/* Protected by port mutex for now */
+@@ -2313,7 +2347,10 @@ int uart_resume_port(struct uart_driver *drv, struct uart_port *uport)
+ 				if (tty)
+ 					uart_change_speed(tty, state, NULL);
+ 				spin_lock_irq(&uport->lock);
+-				ops->set_mctrl(uport, uport->mctrl);
++				if (!(uport->rs485.flags & SER_RS485_ENABLED))
++					ops->set_mctrl(uport, uport->mctrl);
++				else
++					uport->rs485_config(uport, &uport->rs485);
+ 				ops->start_tx(uport);
+ 				spin_unlock_irq(&uport->lock);
+ 				tty_port_set_initialized(port, 1);
+@@ -2411,10 +2448,10 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
+ 		 */
+ 		spin_lock_irqsave(&port->lock, flags);
+ 		port->mctrl &= TIOCM_DTR;
+-		if (port->rs485.flags & SER_RS485_ENABLED &&
+-		    !(port->rs485.flags & SER_RS485_RTS_AFTER_SEND))
+-			port->mctrl |= TIOCM_RTS;
+-		port->ops->set_mctrl(port, port->mctrl);
++		if (!(port->rs485.flags & SER_RS485_ENABLED))
++			port->ops->set_mctrl(port, port->mctrl);
++		else
++			port->rs485_config(port, &port->rs485);
+ 		spin_unlock_irqrestore(&port->lock, flags);
+ 
+ 		/*
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 03473e20e2186..eb3ea45d5d13a 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -388,6 +388,15 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* Kingston DataTraveler 3.0 */
+ 	{ USB_DEVICE(0x0951, 0x1666), .driver_info = USB_QUIRK_NO_LPM },
+ 
++	/* NVIDIA Jetson devices in Force Recovery mode */
++	{ USB_DEVICE(0x0955, 0x7018), .driver_info = USB_QUIRK_RESET_RESUME },
++	{ USB_DEVICE(0x0955, 0x7019), .driver_info = USB_QUIRK_RESET_RESUME },
++	{ USB_DEVICE(0x0955, 0x7418), .driver_info = USB_QUIRK_RESET_RESUME },
++	{ USB_DEVICE(0x0955, 0x7721), .driver_info = USB_QUIRK_RESET_RESUME },
++	{ USB_DEVICE(0x0955, 0x7c18), .driver_info = USB_QUIRK_RESET_RESUME },
++	{ USB_DEVICE(0x0955, 0x7e19), .driver_info = USB_QUIRK_RESET_RESUME },
++	{ USB_DEVICE(0x0955, 0x7f21), .driver_info = USB_QUIRK_RESET_RESUME },
++
+ 	/* X-Rite/Gretag-Macbeth Eye-One Pro display colorimeter */
+ 	{ USB_DEVICE(0x0971, 0x2000), .driver_info = USB_QUIRK_NO_SET_INTF },
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 41ed2f6f8a8d0..347ba7e4bd81a 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1064,8 +1064,8 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
+ 			trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS;
+ 		}
+ 
+-		/* always enable Interrupt on Missed ISOC */
+-		trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
++		if (!no_interrupt && !chain)
++			trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
+ 		break;
+ 
+ 	case USB_ENDPOINT_XFER_BULK:
+@@ -2800,6 +2800,10 @@ static int dwc3_gadget_ep_reclaim_completed_trb(struct dwc3_ep *dep,
+ 	if (event->status & DEPEVT_STATUS_SHORT && !chain)
+ 		return 1;
+ 
++	if ((trb->ctrl & DWC3_TRB_CTRL_ISP_IMI) &&
++	    DWC3_TRB_SIZE_TRBSTS(trb->size) == DWC3_TRBSTS_MISSED_ISOC)
++		return 1;
++
+ 	if ((trb->ctrl & DWC3_TRB_CTRL_IOC) ||
+ 	    (trb->ctrl & DWC3_TRB_CTRL_LST))
+ 		return 1;
+diff --git a/drivers/usb/gadget/udc/bdc/bdc_udc.c b/drivers/usb/gadget/udc/bdc/bdc_udc.c
+index 248426a3e88a7..5f0b3fd936319 100644
+--- a/drivers/usb/gadget/udc/bdc/bdc_udc.c
++++ b/drivers/usb/gadget/udc/bdc/bdc_udc.c
+@@ -151,6 +151,7 @@ static void bdc_uspc_disconnected(struct bdc *bdc, bool reinit)
+ 	bdc->delayed_status = false;
+ 	bdc->reinit = reinit;
+ 	bdc->test_mode = false;
++	usb_gadget_set_state(&bdc->gadget, USB_STATE_NOTATTACHED);
+ }
+ 
+ /* TNotify wkaeup timer */
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 1fba5605a88ea..d1a42300ae58f 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -915,15 +915,19 @@ void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id)
+ 		if (dev->eps[i].stream_info)
+ 			xhci_free_stream_info(xhci,
+ 					dev->eps[i].stream_info);
+-		/* Endpoints on the TT/root port lists should have been removed
+-		 * when usb_disable_device() was called for the device.
+-		 * We can't drop them anyway, because the udev might have gone
+-		 * away by this point, and we can't tell what speed it was.
++		/*
++		 * Endpoints are normally deleted from the bandwidth list when
++		 * endpoints are dropped, before device is freed.
++		 * If host is dying or being removed then endpoints aren't
++		 * dropped cleanly, so delete the endpoint from list here.
++		 * Only applicable for hosts with software bandwidth checking.
+ 		 */
+-		if (!list_empty(&dev->eps[i].bw_endpoint_list))
+-			xhci_warn(xhci, "Slot %u endpoint %u "
+-					"not removed from BW list!\n",
+-					slot_id, i);
++
++		if (!list_empty(&dev->eps[i].bw_endpoint_list)) {
++			list_del_init(&dev->eps[i].bw_endpoint_list);
++			xhci_dbg(xhci, "Slot %u endpoint %u not removed from BW list!\n",
++				 slot_id, i);
++		}
+ 	}
+ 	/* If this is a hub, free the TT(s) from the TT list */
+ 	xhci_free_tt_info(xhci, dev, slot_id);
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 8952492d43be6..64d5a593682b8 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -253,6 +253,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_DNV_XHCI))
+ 		xhci->quirks |= XHCI_MISSING_CAS;
+ 
++	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
++	    pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI)
++		xhci->quirks |= XHCI_RESET_TO_DEFAULT;
++
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ 	    (pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_XHCI ||
+@@ -302,8 +306,14 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	}
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+-		pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
++		pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI) {
++		/*
++		 * try to tame the ASMedia 1042 controller which reports 0.96
++		 * but appears to behave more like 1.0
++		 */
++		xhci->quirks |= XHCI_SPURIOUS_SUCCESS;
+ 		xhci->quirks |= XHCI_BROKEN_STREAMS;
++	}
+ 	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+ 		pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI) {
+ 		xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 8918e6ae5c4b6..c968dd8653140 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -794,9 +794,15 @@ void xhci_shutdown(struct usb_hcd *hcd)
+ 
+ 	spin_lock_irq(&xhci->lock);
+ 	xhci_halt(xhci);
+-	/* Workaround for spurious wakeups at shutdown with HSW */
+-	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
++
++	/*
++	 * Workaround for spurious wakeps at shutdown with HSW, and for boot
++	 * firmware delay in ADL-P PCH if port are left in U3 at shutdown
++	 */
++	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP ||
++	    xhci->quirks & XHCI_RESET_TO_DEFAULT)
+ 		xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
++
+ 	spin_unlock_irq(&xhci->lock);
+ 
+ 	xhci_cleanup_msix(xhci);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index e668740000b25..059050f135225 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1889,6 +1889,7 @@ struct xhci_hcd {
+ #define XHCI_NO_SOFT_RETRY	BIT_ULL(40)
+ #define XHCI_EP_CTX_BROKEN_DCS	BIT_ULL(42)
+ #define XHCI_SUSPEND_RESUME_CLKS	BIT_ULL(43)
++#define XHCI_RESET_TO_DEFAULT	BIT_ULL(44)
+ 
+ 	unsigned int		num_active_eps;
+ 	unsigned int		limit_active_eps;
+diff --git a/drivers/video/fbdev/smscufx.c b/drivers/video/fbdev/smscufx.c
+index 7673db5da26b0..5fa3f1e5dfe88 100644
+--- a/drivers/video/fbdev/smscufx.c
++++ b/drivers/video/fbdev/smscufx.c
+@@ -97,7 +97,6 @@ struct ufx_data {
+ 	struct kref kref;
+ 	int fb_count;
+ 	bool virtualized; /* true when physical usb device not present */
+-	struct delayed_work free_framebuffer_work;
+ 	atomic_t usb_active; /* 0 = update virtual buffer, but no usb traffic */
+ 	atomic_t lost_pixels; /* 1 = a render op failed. Need screen refresh */
+ 	u8 *edid; /* null until we read edid from hw or get from sysfs */
+@@ -1116,15 +1115,24 @@ static void ufx_free(struct kref *kref)
+ {
+ 	struct ufx_data *dev = container_of(kref, struct ufx_data, kref);
+ 
+-	/* this function will wait for all in-flight urbs to complete */
+-	if (dev->urbs.count > 0)
+-		ufx_free_urb_list(dev);
++	kfree(dev);
++}
+ 
+-	pr_debug("freeing ufx_data %p", dev);
++static void ufx_ops_destory(struct fb_info *info)
++{
++	struct ufx_data *dev = info->par;
++	int node = info->node;
+ 
+-	kfree(dev);
++	/* Assume info structure is freed after this point */
++	framebuffer_release(info);
++
++	pr_debug("fb_info for /dev/fb%d has been freed", node);
++
++	/* release reference taken by kref_init in probe() */
++	kref_put(&dev->kref, ufx_free);
+ }
+ 
++
+ static void ufx_release_urb_work(struct work_struct *work)
+ {
+ 	struct urb_node *unode = container_of(work, struct urb_node,
+@@ -1133,14 +1141,9 @@ static void ufx_release_urb_work(struct work_struct *work)
+ 	up(&unode->dev->urbs.limit_sem);
+ }
+ 
+-static void ufx_free_framebuffer_work(struct work_struct *work)
++static void ufx_free_framebuffer(struct ufx_data *dev)
+ {
+-	struct ufx_data *dev = container_of(work, struct ufx_data,
+-					    free_framebuffer_work.work);
+ 	struct fb_info *info = dev->info;
+-	int node = info->node;
+-
+-	unregister_framebuffer(info);
+ 
+ 	if (info->cmap.len != 0)
+ 		fb_dealloc_cmap(&info->cmap);
+@@ -1152,11 +1155,6 @@ static void ufx_free_framebuffer_work(struct work_struct *work)
+ 
+ 	dev->info = NULL;
+ 
+-	/* Assume info structure is freed after this point */
+-	framebuffer_release(info);
+-
+-	pr_debug("fb_info for /dev/fb%d has been freed", node);
+-
+ 	/* ref taken in probe() as part of registering framebfufer */
+ 	kref_put(&dev->kref, ufx_free);
+ }
+@@ -1168,11 +1166,13 @@ static int ufx_ops_release(struct fb_info *info, int user)
+ {
+ 	struct ufx_data *dev = info->par;
+ 
++	mutex_lock(&disconnect_mutex);
++
+ 	dev->fb_count--;
+ 
+ 	/* We can't free fb_info here - fbmem will touch it when we return */
+ 	if (dev->virtualized && (dev->fb_count == 0))
+-		schedule_delayed_work(&dev->free_framebuffer_work, HZ);
++		ufx_free_framebuffer(dev);
+ 
+ 	if ((dev->fb_count == 0) && (info->fbdefio)) {
+ 		fb_deferred_io_cleanup(info);
+@@ -1185,6 +1185,8 @@ static int ufx_ops_release(struct fb_info *info, int user)
+ 
+ 	kref_put(&dev->kref, ufx_free);
+ 
++	mutex_unlock(&disconnect_mutex);
++
+ 	return 0;
+ }
+ 
+@@ -1291,6 +1293,7 @@ static const struct fb_ops ufx_ops = {
+ 	.fb_blank = ufx_ops_blank,
+ 	.fb_check_var = ufx_ops_check_var,
+ 	.fb_set_par = ufx_ops_set_par,
++	.fb_destroy = ufx_ops_destory,
+ };
+ 
+ /* Assumes &info->lock held by caller
+@@ -1672,9 +1675,6 @@ static int ufx_usb_probe(struct usb_interface *interface,
+ 		goto destroy_modedb;
+ 	}
+ 
+-	INIT_DELAYED_WORK(&dev->free_framebuffer_work,
+-			  ufx_free_framebuffer_work);
+-
+ 	retval = ufx_reg_read(dev, 0x3000, &id_rev);
+ 	check_warn_goto_error(retval, "error %d reading 0x3000 register from device", retval);
+ 	dev_dbg(dev->gdev, "ID_REV register value 0x%08x", id_rev);
+@@ -1747,10 +1747,12 @@ e_nomem:
+ static void ufx_usb_disconnect(struct usb_interface *interface)
+ {
+ 	struct ufx_data *dev;
++	struct fb_info *info;
+ 
+ 	mutex_lock(&disconnect_mutex);
+ 
+ 	dev = usb_get_intfdata(interface);
++	info = dev->info;
+ 
+ 	pr_debug("USB disconnect starting\n");
+ 
+@@ -1764,12 +1766,15 @@ static void ufx_usb_disconnect(struct usb_interface *interface)
+ 
+ 	/* if clients still have us open, will be freed on last close */
+ 	if (dev->fb_count == 0)
+-		schedule_delayed_work(&dev->free_framebuffer_work, 0);
++		ufx_free_framebuffer(dev);
+ 
+-	/* release reference taken by kref_init in probe() */
+-	kref_put(&dev->kref, ufx_free);
++	/* this function will wait for all in-flight urbs to complete */
++	if (dev->urbs.count > 0)
++		ufx_free_urb_list(dev);
+ 
+-	/* consider ufx_data freed */
++	pr_debug("freeing ufx_data %p", dev);
++
++	unregister_framebuffer(info);
+ 
+ 	mutex_unlock(&disconnect_mutex);
+ }
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index ff195b5717630..16acddaff9aea 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -360,8 +360,7 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
+ 	for (i = 0; i < map->count; i++) {
+ 		if (map->map_ops[i].status == GNTST_okay) {
+ 			map->unmap_ops[i].handle = map->map_ops[i].handle;
+-			if (!use_ptemod)
+-				alloced++;
++			alloced++;
+ 		} else if (!err)
+ 			err = -EINVAL;
+ 
+@@ -370,8 +369,7 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
+ 
+ 		if (use_ptemod) {
+ 			if (map->kmap_ops[i].status == GNTST_okay) {
+-				if (map->map_ops[i].status == GNTST_okay)
+-					alloced++;
++				alloced++;
+ 				map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
+ 			} else if (!err)
+ 				err = -EINVAL;
+@@ -387,20 +385,42 @@ static void __unmap_grant_pages_done(int result,
+ 	unsigned int i;
+ 	struct gntdev_grant_map *map = data->data;
+ 	unsigned int offset = data->unmap_ops - map->unmap_ops;
++	int successful_unmaps = 0;
++	int live_grants;
+ 
+ 	for (i = 0; i < data->count; i++) {
++		if (map->unmap_ops[offset + i].status == GNTST_okay &&
++		    map->unmap_ops[offset + i].handle != -1)
++			successful_unmaps++;
++
+ 		WARN_ON(map->unmap_ops[offset+i].status &&
+ 			map->unmap_ops[offset+i].handle != -1);
+ 		pr_debug("unmap handle=%d st=%d\n",
+ 			map->unmap_ops[offset+i].handle,
+ 			map->unmap_ops[offset+i].status);
+ 		map->unmap_ops[offset+i].handle = -1;
++		if (use_ptemod) {
++			if (map->kunmap_ops[offset + i].status == GNTST_okay &&
++			    map->kunmap_ops[offset + i].handle != -1)
++				successful_unmaps++;
++
++			WARN_ON(map->kunmap_ops[offset+i].status &&
++				map->kunmap_ops[offset+i].handle != -1);
++			pr_debug("kunmap handle=%u st=%d\n",
++				 map->kunmap_ops[offset+i].handle,
++				 map->kunmap_ops[offset+i].status);
++			map->kunmap_ops[offset+i].handle = -1;
++		}
+ 	}
++
+ 	/*
+ 	 * Decrease the live-grant counter.  This must happen after the loop to
+ 	 * prevent premature reuse of the grants by gnttab_mmap().
+ 	 */
+-	atomic_sub(data->count, &map->live_grants);
++	live_grants = atomic_sub_return(successful_unmaps, &map->live_grants);
++	if (WARN_ON(live_grants < 0))
++		pr_err("%s: live_grants became negative (%d) after unmapping %d pages!\n",
++		       __func__, live_grants, successful_unmaps);
+ 
+ 	/* Release reference taken by __unmap_grant_pages */
+ 	gntdev_put_map(NULL, map);
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 213864bc7e8c0..ccc4c6d8a578f 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -907,7 +907,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ 		interp_elf_ex = kmalloc(sizeof(*interp_elf_ex), GFP_KERNEL);
+ 		if (!interp_elf_ex) {
+ 			retval = -ENOMEM;
+-			goto out_free_ph;
++			goto out_free_file;
+ 		}
+ 
+ 		/* Get the exec headers */
+@@ -1328,6 +1328,7 @@ out:
+ out_free_dentry:
+ 	kfree(interp_elf_ex);
+ 	kfree(interp_elf_phdata);
++out_free_file:
+ 	allow_write_access(interpreter);
+ 	if (interpreter)
+ 		fput(interpreter);
+diff --git a/fs/exec.c b/fs/exec.c
+index b56bc4b4016e9..983295c0b8acf 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1198,11 +1198,11 @@ static int unshare_sighand(struct task_struct *me)
+ 			return -ENOMEM;
+ 
+ 		refcount_set(&newsighand->count, 1);
+-		memcpy(newsighand->action, oldsighand->action,
+-		       sizeof(newsighand->action));
+ 
+ 		write_lock_irq(&tasklist_lock);
+ 		spin_lock(&oldsighand->siglock);
++		memcpy(newsighand->action, oldsighand->action,
++		       sizeof(newsighand->action));
+ 		rcu_assign_pointer(me->sighand, newsighand);
+ 		spin_unlock(&oldsighand->siglock);
+ 		write_unlock_irq(&tasklist_lock);
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index afb39e1bbe3bf..8b3c86a502daa 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -1519,8 +1519,11 @@ int kernfs_remove_by_name_ns(struct kernfs_node *parent, const char *name,
+ 	mutex_lock(&kernfs_mutex);
+ 
+ 	kn = kernfs_find_ns(parent, name, ns);
+-	if (kn)
++	if (kn) {
++		kernfs_get(kn);
+ 		__kernfs_remove(kn);
++		kernfs_put(kn);
++	}
+ 
+ 	mutex_unlock(&kernfs_mutex);
+ 
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 41fbb4793394d..ae88362216a4e 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -899,7 +899,7 @@ void mlx5_cmd_allowed_opcode(struct mlx5_core_dev *dev, u16 opcode);
+ struct mlx5_async_ctx {
+ 	struct mlx5_core_dev *dev;
+ 	atomic_t num_inflight;
+-	struct wait_queue_head wait;
++	struct completion inflight_done;
+ };
+ 
+ struct mlx5_async_work;
+diff --git a/include/media/v4l2-common.h b/include/media/v4l2-common.h
+index a3083529b6985..2e53ee1c8db49 100644
+--- a/include/media/v4l2-common.h
++++ b/include/media/v4l2-common.h
+@@ -175,7 +175,8 @@ struct v4l2_subdev *v4l2_i2c_new_subdev_board(struct v4l2_device *v4l2_dev,
+  *
+  * @sd: pointer to &struct v4l2_subdev
+  * @client: pointer to struct i2c_client
+- * @devname: the name of the device; if NULL, the I²C device's name will be used
++ * @devname: the name of the device; if NULL, the I²C device drivers's name
++ *           will be used
+  * @postfix: sub-device specific string to put right after the I²C device name;
+  *	     may be NULL
+  */
+diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
+index 534eaa4d39bc8..b28817c59fdf2 100644
+--- a/include/uapi/linux/videodev2.h
++++ b/include/uapi/linux/videodev2.h
+@@ -1552,7 +1552,8 @@ struct v4l2_bt_timings {
+ 	((bt)->width + V4L2_DV_BT_BLANKING_WIDTH(bt))
+ #define V4L2_DV_BT_BLANKING_HEIGHT(bt) \
+ 	((bt)->vfrontporch + (bt)->vsync + (bt)->vbackporch + \
+-	 (bt)->il_vfrontporch + (bt)->il_vsync + (bt)->il_vbackporch)
++	 ((bt)->interlaced ? \
++	  ((bt)->il_vfrontporch + (bt)->il_vsync + (bt)->il_vbackporch) : 0))
+ #define V4L2_DV_BT_FRAME_HEIGHT(bt) \
+ 	((bt)->height + V4L2_DV_BT_BLANKING_HEIGHT(bt))
+ 
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index 522cb1387462c..59a1b126c369b 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -637,7 +637,7 @@ static void power_down(void)
+ 	int error;
+ 
+ 	if (hibernation_mode == HIBERNATION_SUSPEND) {
+-		error = suspend_devices_and_enter(PM_SUSPEND_MEM);
++		error = suspend_devices_and_enter(mem_sleep_current);
+ 		if (error) {
+ 			hibernation_mode = hibernation_ops ?
+ 						HIBERNATION_PLATFORM :
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index c57c165bfbbc4..d8c63d79af206 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -2387,11 +2387,11 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
+ 		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
+ 		if (!page)
+ 			goto out_uncharge_cgroup;
++		spin_lock(&hugetlb_lock);
+ 		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
+ 			SetPagePrivate(page);
+ 			h->resv_huge_pages--;
+ 		}
+-		spin_lock(&hugetlb_lock);
+ 		list_add(&page->lru, &h->hugepage_activelist);
+ 		/* Fall through */
+ 	}
+diff --git a/mm/memory.c b/mm/memory.c
+index cc50fa0f4590d..cbc0a163d7057 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -823,6 +823,17 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
+ 	if (likely(!page_maybe_dma_pinned(page)))
+ 		return 1;
+ 
++	/*
++	 * The vma->anon_vma of the child process may be NULL
++	 * because the entire vma does not contain anonymous pages.
++	 * A BUG will occur when the copy_present_page() passes
++	 * a copy of a non-anonymous page of that vma to the
++	 * page_add_new_anon_rmap() to set up new anonymous rmap.
++	 * Return 1 if the page is not an anonymous page.
++	 */
++	if (!PageAnon(page))
++		return 1;
++
+ 	new_page = *prealloc;
+ 	if (!new_page)
+ 		return -EAGAIN;
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 2830a12a4dd1b..78f6a91106994 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -338,10 +338,12 @@ static void j1939_session_skb_drop_old(struct j1939_session *session)
+ 		__skb_unlink(do_skb, &session->skb_queue);
+ 		/* drop ref taken in j1939_session_skb_queue() */
+ 		skb_unref(do_skb);
++		spin_unlock_irqrestore(&session->skb_queue.lock, flags);
+ 
+ 		kfree_skb(do_skb);
++	} else {
++		spin_unlock_irqrestore(&session->skb_queue.lock, flags);
+ 	}
+-	spin_unlock_irqrestore(&session->skb_queue.lock, flags);
+ }
+ 
+ void j1939_session_skb_queue(struct j1939_session *session,
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index cbff7d94b993e..a3b7d965e9c01 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -135,6 +135,7 @@ static int net_assign_generic(struct net *net, unsigned int id, void *data)
+ 
+ static int ops_init(const struct pernet_operations *ops, struct net *net)
+ {
++	struct net_generic *ng;
+ 	int err = -ENOMEM;
+ 	void *data = NULL;
+ 
+@@ -153,7 +154,13 @@ static int ops_init(const struct pernet_operations *ops, struct net *net)
+ 	if (!err)
+ 		return 0;
+ 
++	if (ops->id && ops->size) {
+ cleanup:
++		ng = rcu_dereference_protected(net->gen,
++					       lockdep_is_held(&pernet_ops_rwsem));
++		ng->ptr[*ops->id] = NULL;
++	}
++
+ 	kfree(data);
+ 
+ out:
+diff --git a/net/ieee802154/socket.c b/net/ieee802154/socket.c
+index ecc0d5fbde048..d4c275e56d825 100644
+--- a/net/ieee802154/socket.c
++++ b/net/ieee802154/socket.c
+@@ -503,8 +503,10 @@ static int dgram_bind(struct sock *sk, struct sockaddr *uaddr, int len)
+ 	if (err < 0)
+ 		goto out;
+ 
+-	if (addr->family != AF_IEEE802154)
++	if (addr->family != AF_IEEE802154) {
++		err = -EINVAL;
+ 		goto out;
++	}
+ 
+ 	ieee802154_addr_from_sa(&haddr, &addr->addr);
+ 	dev = ieee802154_get_dev(sock_net(sk), &haddr);
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 2a17dc9413ae9..7a0102a4b1de7 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -1346,7 +1346,7 @@ static int nh_create_ipv4(struct net *net, struct nexthop *nh,
+ 	if (!err) {
+ 		nh->nh_flags = fib_nh->fib_nh_flags;
+ 		fib_info_update_nhc_saddr(net, &fib_nh->nh_common,
+-					  fib_nh->fib_nh_scope);
++					  !fib_nh->fib_nh_scope ? 0 : fib_nh->fib_nh_scope - 1);
+ 	} else {
+ 		fib_nh_release(net, fib_nh);
+ 	}
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 377cba9b124d0..541758cd0b81f 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2175,7 +2175,8 @@ void tcp_enter_loss(struct sock *sk)
+  */
+ static bool tcp_check_sack_reneging(struct sock *sk, int flag)
+ {
+-	if (flag & FLAG_SACK_RENEGING) {
++	if (flag & FLAG_SACK_RENEGING &&
++	    flag & FLAG_SND_UNA_ADVANCED) {
+ 		struct tcp_sock *tp = tcp_sk(sk);
+ 		unsigned long delay = max(usecs_to_jiffies(tp->srtt_us >> 4),
+ 					  msecs_to_jiffies(10));
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 5c1e6b0687e2a..31a8009f74eea 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1770,8 +1770,7 @@ int tcp_v4_early_demux(struct sk_buff *skb)
+ 
+ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ {
+-	u32 limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf);
+-	u32 tail_gso_size, tail_gso_segs;
++	u32 limit, tail_gso_size, tail_gso_segs;
+ 	struct skb_shared_info *shinfo;
+ 	const struct tcphdr *th;
+ 	struct tcphdr *thtail;
+@@ -1874,11 +1873,13 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ 	__skb_push(skb, hdrlen);
+ 
+ no_coalesce:
++	limit = (u32)READ_ONCE(sk->sk_rcvbuf) + (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
++
+ 	/* Only socket owner can try to collapse/prune rx queues
+ 	 * to reduce memory overhead, so add a little headroom here.
+ 	 * Few sockets backlog are possibly concurrently non empty.
+ 	 */
+-	limit += 64*1024;
++	limit += 64 * 1024;
+ 
+ 	if (unlikely(sk_add_backlog(sk, skb, limit))) {
+ 		bh_unlock_sock(sk);
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 9e0890738d93f..0010f9e54f13b 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -1153,14 +1153,16 @@ static void ip6gre_tnl_link_config_route(struct ip6_tnl *t, int set_mtu,
+ 				dev->needed_headroom = dst_len;
+ 
+ 			if (set_mtu) {
+-				dev->mtu = rt->dst.dev->mtu - t_hlen;
++				int mtu = rt->dst.dev->mtu - t_hlen;
++
+ 				if (!(t->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT))
+-					dev->mtu -= 8;
++					mtu -= 8;
+ 				if (dev->type == ARPHRD_ETHER)
+-					dev->mtu -= ETH_HLEN;
++					mtu -= ETH_HLEN;
+ 
+-				if (dev->mtu < IPV6_MIN_MTU)
+-					dev->mtu = IPV6_MIN_MTU;
++				if (mtu < IPV6_MIN_MTU)
++					mtu = IPV6_MIN_MTU;
++				WRITE_ONCE(dev->mtu, mtu);
+ 			}
+ 		}
+ 		ip6_rt_put(rt);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 3a2741569b847..0d4cab94c5dd2 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1476,8 +1476,8 @@ static void ip6_tnl_link_config(struct ip6_tnl *t)
+ 	struct net_device *tdev = NULL;
+ 	struct __ip6_tnl_parm *p = &t->parms;
+ 	struct flowi6 *fl6 = &t->fl.u.ip6;
+-	unsigned int mtu;
+ 	int t_hlen;
++	int mtu;
+ 
+ 	memcpy(dev->dev_addr, &p->laddr, sizeof(struct in6_addr));
+ 	memcpy(dev->broadcast, &p->raddr, sizeof(struct in6_addr));
+@@ -1524,12 +1524,13 @@ static void ip6_tnl_link_config(struct ip6_tnl *t)
+ 			dev->hard_header_len = tdev->hard_header_len + t_hlen;
+ 			mtu = min_t(unsigned int, tdev->mtu, IP6_MAX_MTU);
+ 
+-			dev->mtu = mtu - t_hlen;
++			mtu = mtu - t_hlen;
+ 			if (!(t->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT))
+-				dev->mtu -= 8;
++				mtu -= 8;
+ 
+-			if (dev->mtu < IPV6_MIN_MTU)
+-				dev->mtu = IPV6_MIN_MTU;
++			if (mtu < IPV6_MIN_MTU)
++				mtu = IPV6_MIN_MTU;
++			WRITE_ONCE(dev->mtu, mtu);
+ 		}
+ 	}
+ }
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index 3c92e8cacbbab..1ce486a9bc076 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -1123,10 +1123,12 @@ static void ipip6_tunnel_bind_dev(struct net_device *dev)
+ 
+ 	if (tdev && !netif_is_l3_master(tdev)) {
+ 		int t_hlen = tunnel->hlen + sizeof(struct iphdr);
++		int mtu;
+ 
+-		dev->mtu = tdev->mtu - t_hlen;
+-		if (dev->mtu < IPV6_MIN_MTU)
+-			dev->mtu = IPV6_MIN_MTU;
++		mtu = tdev->mtu - t_hlen;
++		if (mtu < IPV6_MIN_MTU)
++			mtu = IPV6_MIN_MTU;
++		WRITE_ONCE(dev->mtu, mtu);
+ 	}
+ }
+ 
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 18469f1f707e5..6b362b362f790 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -161,7 +161,8 @@ static void kcm_rcv_ready(struct kcm_sock *kcm)
+ 	/* Buffer limit is okay now, add to ready list */
+ 	list_add_tail(&kcm->wait_rx_list,
+ 		      &kcm->mux->kcm_rx_waiters);
+-	kcm->rx_wait = true;
++	/* paired with lockless reads in kcm_rfree() */
++	WRITE_ONCE(kcm->rx_wait, true);
+ }
+ 
+ static void kcm_rfree(struct sk_buff *skb)
+@@ -177,7 +178,7 @@ static void kcm_rfree(struct sk_buff *skb)
+ 	/* For reading rx_wait and rx_psock without holding lock */
+ 	smp_mb__after_atomic();
+ 
+-	if (!kcm->rx_wait && !kcm->rx_psock &&
++	if (!READ_ONCE(kcm->rx_wait) && !READ_ONCE(kcm->rx_psock) &&
+ 	    sk_rmem_alloc_get(sk) < sk->sk_rcvlowat) {
+ 		spin_lock_bh(&mux->rx_lock);
+ 		kcm_rcv_ready(kcm);
+@@ -236,7 +237,8 @@ try_again:
+ 		if (kcm_queue_rcv_skb(&kcm->sk, skb)) {
+ 			/* Should mean socket buffer full */
+ 			list_del(&kcm->wait_rx_list);
+-			kcm->rx_wait = false;
++			/* paired with lockless reads in kcm_rfree() */
++			WRITE_ONCE(kcm->rx_wait, false);
+ 
+ 			/* Commit rx_wait to read in kcm_free */
+ 			smp_wmb();
+@@ -279,10 +281,12 @@ static struct kcm_sock *reserve_rx_kcm(struct kcm_psock *psock,
+ 	kcm = list_first_entry(&mux->kcm_rx_waiters,
+ 			       struct kcm_sock, wait_rx_list);
+ 	list_del(&kcm->wait_rx_list);
+-	kcm->rx_wait = false;
++	/* paired with lockless reads in kcm_rfree() */
++	WRITE_ONCE(kcm->rx_wait, false);
+ 
+ 	psock->rx_kcm = kcm;
+-	kcm->rx_psock = psock;
++	/* paired with lockless reads in kcm_rfree() */
++	WRITE_ONCE(kcm->rx_psock, psock);
+ 
+ 	spin_unlock_bh(&mux->rx_lock);
+ 
+@@ -309,7 +313,8 @@ static void unreserve_rx_kcm(struct kcm_psock *psock,
+ 	spin_lock_bh(&mux->rx_lock);
+ 
+ 	psock->rx_kcm = NULL;
+-	kcm->rx_psock = NULL;
++	/* paired with lockless reads in kcm_rfree() */
++	WRITE_ONCE(kcm->rx_psock, NULL);
+ 
+ 	/* Commit kcm->rx_psock before sk_rmem_alloc_get to sync with
+ 	 * kcm_rfree
+@@ -1239,7 +1244,8 @@ static void kcm_recv_disable(struct kcm_sock *kcm)
+ 	if (!kcm->rx_psock) {
+ 		if (kcm->rx_wait) {
+ 			list_del(&kcm->wait_rx_list);
+-			kcm->rx_wait = false;
++			/* paired with lockless reads in kcm_rfree() */
++			WRITE_ONCE(kcm->rx_wait, false);
+ 		}
+ 
+ 		requeue_rx_msgs(mux, &kcm->sk.sk_receive_queue);
+@@ -1792,7 +1798,8 @@ static void kcm_done(struct kcm_sock *kcm)
+ 
+ 	if (kcm->rx_wait) {
+ 		list_del(&kcm->wait_rx_list);
+-		kcm->rx_wait = false;
++		/* paired with lockless reads in kcm_rfree() */
++		WRITE_ONCE(kcm->rx_wait, false);
+ 	}
+ 	/* Move any pending receive messages to other kcm sockets */
+ 	requeue_rx_msgs(mux, &sk->sk_receive_queue);
+diff --git a/net/mac802154/rx.c b/net/mac802154/rx.c
+index c439125ef2b91..726b47a4611b5 100644
+--- a/net/mac802154/rx.c
++++ b/net/mac802154/rx.c
+@@ -132,7 +132,7 @@ static int
+ ieee802154_parse_frame_start(struct sk_buff *skb, struct ieee802154_hdr *hdr)
+ {
+ 	int hlen;
+-	struct ieee802154_mac_cb *cb = mac_cb_init(skb);
++	struct ieee802154_mac_cb *cb = mac_cb(skb);
+ 
+ 	skb_reset_mac_header(skb);
+ 
+@@ -294,8 +294,9 @@ void
+ ieee802154_rx_irqsafe(struct ieee802154_hw *hw, struct sk_buff *skb, u8 lqi)
+ {
+ 	struct ieee802154_local *local = hw_to_local(hw);
++	struct ieee802154_mac_cb *cb = mac_cb_init(skb);
+ 
+-	mac_cb(skb)->lqi = lqi;
++	cb->lqi = lqi;
+ 	skb->pkt_type = IEEE802154_RX_MSG;
+ 	skb_queue_tail(&local->skb_queue, skb);
+ 	tasklet_schedule(&local->tasklet);
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index 6b5c0abf7f1b5..7ed97dc0b5617 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -1592,7 +1592,8 @@ static void ovs_dp_reset_user_features(struct sk_buff *skb,
+ 	if (IS_ERR(dp))
+ 		return;
+ 
+-	WARN(dp->user_features, "Dropping previously announced user features\n");
++	pr_warn("%s: Dropping previously announced user features\n",
++		ovs_dp_name(dp));
+ 	dp->user_features = 0;
+ }
+ 
+diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
+index d9e2c0fea3f2b..561e709ae06ab 100644
+--- a/net/tipc/topsrv.c
++++ b/net/tipc/topsrv.c
+@@ -450,12 +450,19 @@ static void tipc_conn_data_ready(struct sock *sk)
+ static void tipc_topsrv_accept(struct work_struct *work)
+ {
+ 	struct tipc_topsrv *srv = container_of(work, struct tipc_topsrv, awork);
+-	struct socket *lsock = srv->listener;
+-	struct socket *newsock;
++	struct socket *newsock, *lsock;
+ 	struct tipc_conn *con;
+ 	struct sock *newsk;
+ 	int ret;
+ 
++	spin_lock_bh(&srv->idr_lock);
++	if (!srv->listener) {
++		spin_unlock_bh(&srv->idr_lock);
++		return;
++	}
++	lsock = srv->listener;
++	spin_unlock_bh(&srv->idr_lock);
++
+ 	while (1) {
+ 		ret = kernel_accept(lsock, &newsock, O_NONBLOCK);
+ 		if (ret < 0)
+@@ -489,7 +496,7 @@ static void tipc_topsrv_listener_data_ready(struct sock *sk)
+ 
+ 	read_lock_bh(&sk->sk_callback_lock);
+ 	srv = sk->sk_user_data;
+-	if (srv->listener)
++	if (srv)
+ 		queue_work(srv->rcv_wq, &srv->awork);
+ 	read_unlock_bh(&sk->sk_callback_lock);
+ }
+@@ -699,8 +706,9 @@ static void tipc_topsrv_stop(struct net *net)
+ 	__module_get(lsock->sk->sk_prot_creator->owner);
+ 	srv->listener = NULL;
+ 	spin_unlock_bh(&srv->idr_lock);
+-	sock_release(lsock);
++
+ 	tipc_topsrv_work_stop(srv);
++	sock_release(lsock);
+ 	idr_destroy(&srv->conn_idr);
+ 	kfree(srv);
+ }
+diff --git a/sound/aoa/soundbus/i2sbus/core.c b/sound/aoa/soundbus/i2sbus/core.c
+index faf6b03131ee4..51ed2f34b276d 100644
+--- a/sound/aoa/soundbus/i2sbus/core.c
++++ b/sound/aoa/soundbus/i2sbus/core.c
+@@ -147,6 +147,7 @@ static int i2sbus_get_and_fixup_rsrc(struct device_node *np, int index,
+ 	return rc;
+ }
+ 
++/* Returns 1 if added, 0 for otherwise; don't return a negative value! */
+ /* FIXME: look at device node refcounting */
+ static int i2sbus_add_dev(struct macio_dev *macio,
+ 			  struct i2sbus_control *control,
+@@ -213,7 +214,7 @@ static int i2sbus_add_dev(struct macio_dev *macio,
+ 	 * either as the second one in that case is just a modem. */
+ 	if (!ok) {
+ 		kfree(dev);
+-		return -ENODEV;
++		return 0;
+ 	}
+ 
+ 	mutex_init(&dev->lock);
+@@ -302,6 +303,10 @@ static int i2sbus_add_dev(struct macio_dev *macio,
+ 
+ 	if (soundbus_add_one(&dev->sound)) {
+ 		printk(KERN_DEBUG "i2sbus: device registration error!\n");
++		if (dev->sound.ofdev.dev.kobj.state_initialized) {
++			soundbus_dev_put(&dev->sound);
++			return 0;
++		}
+ 		goto err;
+ 	}
+ 
+diff --git a/sound/pci/ac97/ac97_codec.c b/sound/pci/ac97/ac97_codec.c
+index 963731cf0d8c8..cd66632bf1c37 100644
+--- a/sound/pci/ac97/ac97_codec.c
++++ b/sound/pci/ac97/ac97_codec.c
+@@ -1946,6 +1946,7 @@ static int snd_ac97_dev_register(struct snd_device *device)
+ 		     snd_ac97_get_short_name(ac97));
+ 	if ((err = device_register(&ac97->dev)) < 0) {
+ 		ac97_err(ac97, "Can't register ac97 bus\n");
++		put_device(&ac97->dev);
+ 		ac97->dev.bus = NULL;
+ 		return err;
+ 	}
+diff --git a/sound/pci/au88x0/au88x0.h b/sound/pci/au88x0/au88x0.h
+index 0aa7af049b1b9..6cbb2bc4a0483 100644
+--- a/sound/pci/au88x0/au88x0.h
++++ b/sound/pci/au88x0/au88x0.h
+@@ -141,7 +141,7 @@ struct snd_vortex {
+ #ifndef CHIP_AU8810
+ 	stream_t dma_wt[NR_WT];
+ 	wt_voice_t wt_voice[NR_WT];	/* WT register cache. */
+-	char mixwt[(NR_WT / NR_WTPB) * 6];	/* WT mixin objects */
++	s8 mixwt[(NR_WT / NR_WTPB) * 6];	/* WT mixin objects */
+ #endif
+ 
+ 	/* Global resources */
+@@ -235,8 +235,8 @@ static int vortex_alsafmt_aspfmt(snd_pcm_format_t alsafmt, vortex_t *v);
+ static void vortex_connect_default(vortex_t * vortex, int en);
+ static int vortex_adb_allocroute(vortex_t * vortex, int dma, int nr_ch,
+ 				 int dir, int type, int subdev);
+-static char vortex_adb_checkinout(vortex_t * vortex, int resmap[], int out,
+-				  int restype);
++static int vortex_adb_checkinout(vortex_t * vortex, int resmap[], int out,
++				 int restype);
+ #ifndef CHIP_AU8810
+ static int vortex_wt_allocroute(vortex_t * vortex, int dma, int nr_ch);
+ static void vortex_wt_connect(vortex_t * vortex, int en);
+diff --git a/sound/pci/au88x0/au88x0_core.c b/sound/pci/au88x0/au88x0_core.c
+index 5180f1bd1326c..0b04436ac017e 100644
+--- a/sound/pci/au88x0/au88x0_core.c
++++ b/sound/pci/au88x0/au88x0_core.c
+@@ -1998,7 +1998,7 @@ static const int resnum[VORTEX_RESOURCE_LAST] =
+  out: Mean checkout if != 0. Else mean Checkin resource.
+  restype: Indicates type of resource to be checked in or out.
+ */
+-static char
++static int
+ vortex_adb_checkinout(vortex_t * vortex, int resmap[], int out, int restype)
+ {
+ 	int i, qty = resnum[restype], resinuse = 0;
+diff --git a/sound/pci/rme9652/hdsp.c b/sound/pci/rme9652/hdsp.c
+index 4aee30db034dd..9543474245004 100644
+--- a/sound/pci/rme9652/hdsp.c
++++ b/sound/pci/rme9652/hdsp.c
+@@ -436,7 +436,7 @@ struct hdsp_midi {
+     struct snd_rawmidi           *rmidi;
+     struct snd_rawmidi_substream *input;
+     struct snd_rawmidi_substream *output;
+-    char                     istimer; /* timer in use */
++    signed char		     istimer; /* timer in use */
+     struct timer_list	     timer;
+     spinlock_t               lock;
+     int			     pending;
+@@ -479,7 +479,7 @@ struct hdsp {
+ 	pid_t                 playback_pid;
+ 	int                   running;
+ 	int                   system_sample_rate;
+-	const char           *channel_map;
++	const signed char    *channel_map;
+ 	int                   dev;
+ 	int                   irq;
+ 	unsigned long         port;
+@@ -501,7 +501,7 @@ struct hdsp {
+    where the data for that channel can be read/written from/to.
+ */
+ 
+-static const char channel_map_df_ss[HDSP_MAX_CHANNELS] = {
++static const signed char channel_map_df_ss[HDSP_MAX_CHANNELS] = {
+ 	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+ 	18, 19, 20, 21, 22, 23, 24, 25
+ };
+@@ -516,7 +516,7 @@ static const char channel_map_mf_ss[HDSP_MAX_CHANNELS] = { /* Multiface */
+ 	-1, -1, -1, -1, -1, -1, -1, -1
+ };
+ 
+-static const char channel_map_ds[HDSP_MAX_CHANNELS] = {
++static const signed char channel_map_ds[HDSP_MAX_CHANNELS] = {
+ 	/* ADAT channels are remapped */
+ 	1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23,
+ 	/* channels 12 and 13 are S/PDIF */
+@@ -525,7 +525,7 @@ static const char channel_map_ds[HDSP_MAX_CHANNELS] = {
+ 	-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1
+ };
+ 
+-static const char channel_map_H9632_ss[HDSP_MAX_CHANNELS] = {
++static const signed char channel_map_H9632_ss[HDSP_MAX_CHANNELS] = {
+ 	/* ADAT channels */
+ 	0, 1, 2, 3, 4, 5, 6, 7,
+ 	/* SPDIF */
+@@ -539,7 +539,7 @@ static const char channel_map_H9632_ss[HDSP_MAX_CHANNELS] = {
+ 	-1, -1
+ };
+ 
+-static const char channel_map_H9632_ds[HDSP_MAX_CHANNELS] = {
++static const signed char channel_map_H9632_ds[HDSP_MAX_CHANNELS] = {
+ 	/* ADAT */
+ 	1, 3, 5, 7,
+ 	/* SPDIF */
+@@ -553,7 +553,7 @@ static const char channel_map_H9632_ds[HDSP_MAX_CHANNELS] = {
+ 	-1, -1, -1, -1, -1, -1
+ };
+ 
+-static const char channel_map_H9632_qs[HDSP_MAX_CHANNELS] = {
++static const signed char channel_map_H9632_qs[HDSP_MAX_CHANNELS] = {
+ 	/* ADAT is disabled in this mode */
+ 	/* SPDIF */
+ 	8, 9,
+@@ -3869,7 +3869,7 @@ static snd_pcm_uframes_t snd_hdsp_hw_pointer(struct snd_pcm_substream *substream
+ 	return hdsp_hw_pointer(hdsp);
+ }
+ 
+-static char *hdsp_channel_buffer_location(struct hdsp *hdsp,
++static signed char *hdsp_channel_buffer_location(struct hdsp *hdsp,
+ 					     int stream,
+ 					     int channel)
+ 
+@@ -3893,7 +3893,7 @@ static int snd_hdsp_playback_copy(struct snd_pcm_substream *substream,
+ 				  void __user *src, unsigned long count)
+ {
+ 	struct hdsp *hdsp = snd_pcm_substream_chip(substream);
+-	char *channel_buf;
++	signed char *channel_buf;
+ 
+ 	if (snd_BUG_ON(pos + count > HDSP_CHANNEL_BUFFER_BYTES))
+ 		return -EINVAL;
+@@ -3911,7 +3911,7 @@ static int snd_hdsp_playback_copy_kernel(struct snd_pcm_substream *substream,
+ 					 void *src, unsigned long count)
+ {
+ 	struct hdsp *hdsp = snd_pcm_substream_chip(substream);
+-	char *channel_buf;
++	signed char *channel_buf;
+ 
+ 	channel_buf = hdsp_channel_buffer_location(hdsp, substream->pstr->stream, channel);
+ 	if (snd_BUG_ON(!channel_buf))
+@@ -3925,7 +3925,7 @@ static int snd_hdsp_capture_copy(struct snd_pcm_substream *substream,
+ 				 void __user *dst, unsigned long count)
+ {
+ 	struct hdsp *hdsp = snd_pcm_substream_chip(substream);
+-	char *channel_buf;
++	signed char *channel_buf;
+ 
+ 	if (snd_BUG_ON(pos + count > HDSP_CHANNEL_BUFFER_BYTES))
+ 		return -EINVAL;
+@@ -3943,7 +3943,7 @@ static int snd_hdsp_capture_copy_kernel(struct snd_pcm_substream *substream,
+ 					void *dst, unsigned long count)
+ {
+ 	struct hdsp *hdsp = snd_pcm_substream_chip(substream);
+-	char *channel_buf;
++	signed char *channel_buf;
+ 
+ 	channel_buf = hdsp_channel_buffer_location(hdsp, substream->pstr->stream, channel);
+ 	if (snd_BUG_ON(!channel_buf))
+@@ -3957,7 +3957,7 @@ static int snd_hdsp_hw_silence(struct snd_pcm_substream *substream,
+ 			       unsigned long count)
+ {
+ 	struct hdsp *hdsp = snd_pcm_substream_chip(substream);
+-	char *channel_buf;
++	signed char *channel_buf;
+ 
+ 	channel_buf = hdsp_channel_buffer_location (hdsp, substream->pstr->stream, channel);
+ 	if (snd_BUG_ON(!channel_buf))
+diff --git a/sound/pci/rme9652/rme9652.c b/sound/pci/rme9652/rme9652.c
+index 8def24673f35f..459696844b8ce 100644
+--- a/sound/pci/rme9652/rme9652.c
++++ b/sound/pci/rme9652/rme9652.c
+@@ -229,7 +229,7 @@ struct snd_rme9652 {
+ 	int last_spdif_sample_rate;	/* so that we can catch externally ... */
+ 	int last_adat_sample_rate;	/* ... induced rate changes            */
+ 
+-	const char *channel_map;
++	const signed char *channel_map;
+ 
+ 	struct snd_card *card;
+ 	struct snd_pcm *pcm;
+@@ -246,12 +246,12 @@ struct snd_rme9652 {
+    where the data for that channel can be read/written from/to.
+ */
+ 
+-static const char channel_map_9652_ss[26] = {
++static const signed char channel_map_9652_ss[26] = {
+ 	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+ 	18, 19, 20, 21, 22, 23, 24, 25
+ };
+ 
+-static const char channel_map_9636_ss[26] = {
++static const signed char channel_map_9636_ss[26] = {
+ 	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 
+ 	/* channels 16 and 17 are S/PDIF */
+ 	24, 25,
+@@ -259,7 +259,7 @@ static const char channel_map_9636_ss[26] = {
+ 	-1, -1, -1, -1, -1, -1, -1, -1
+ };
+ 
+-static const char channel_map_9652_ds[26] = {
++static const signed char channel_map_9652_ds[26] = {
+ 	/* ADAT channels are remapped */
+ 	1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23,
+ 	/* channels 12 and 13 are S/PDIF */
+@@ -268,7 +268,7 @@ static const char channel_map_9652_ds[26] = {
+ 	-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1
+ };
+ 
+-static const char channel_map_9636_ds[26] = {
++static const signed char channel_map_9636_ds[26] = {
+ 	/* ADAT channels are remapped */
+ 	1, 3, 5, 7, 9, 11, 13, 15,
+ 	/* channels 8 and 9 are S/PDIF */
+@@ -1841,7 +1841,7 @@ static snd_pcm_uframes_t snd_rme9652_hw_pointer(struct snd_pcm_substream *substr
+ 	return rme9652_hw_pointer(rme9652);
+ }
+ 
+-static char *rme9652_channel_buffer_location(struct snd_rme9652 *rme9652,
++static signed char *rme9652_channel_buffer_location(struct snd_rme9652 *rme9652,
+ 					     int stream,
+ 					     int channel)
+ 
+@@ -1869,7 +1869,7 @@ static int snd_rme9652_playback_copy(struct snd_pcm_substream *substream,
+ 				     void __user *src, unsigned long count)
+ {
+ 	struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream);
+-	char *channel_buf;
++	signed char *channel_buf;
+ 
+ 	if (snd_BUG_ON(pos + count > RME9652_CHANNEL_BUFFER_BYTES))
+ 		return -EINVAL;
+@@ -1889,7 +1889,7 @@ static int snd_rme9652_playback_copy_kernel(struct snd_pcm_substream *substream,
+ 					    void *src, unsigned long count)
+ {
+ 	struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream);
+-	char *channel_buf;
++	signed char *channel_buf;
+ 
+ 	channel_buf = rme9652_channel_buffer_location(rme9652,
+ 						      substream->pstr->stream,
+@@ -1905,7 +1905,7 @@ static int snd_rme9652_capture_copy(struct snd_pcm_substream *substream,
+ 				    void __user *dst, unsigned long count)
+ {
+ 	struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream);
+-	char *channel_buf;
++	signed char *channel_buf;
+ 
+ 	if (snd_BUG_ON(pos + count > RME9652_CHANNEL_BUFFER_BYTES))
+ 		return -EINVAL;
+@@ -1925,7 +1925,7 @@ static int snd_rme9652_capture_copy_kernel(struct snd_pcm_substream *substream,
+ 					   void *dst, unsigned long count)
+ {
+ 	struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream);
+-	char *channel_buf;
++	signed char *channel_buf;
+ 
+ 	channel_buf = rme9652_channel_buffer_location(rme9652,
+ 						      substream->pstr->stream,
+@@ -1941,7 +1941,7 @@ static int snd_rme9652_hw_silence(struct snd_pcm_substream *substream,
+ 				  unsigned long count)
+ {
+ 	struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream);
+-	char *channel_buf;
++	signed char *channel_buf;
+ 
+ 	channel_buf = rme9652_channel_buffer_location (rme9652,
+ 						       substream->pstr->stream,
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index 03abb3d719d08..ecd6c049ace24 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -745,10 +745,20 @@ static bool lpass_hdmi_regmap_volatile(struct device *dev, unsigned int reg)
+ 		return true;
+ 	if (reg == LPASS_HDMI_TX_LEGACY_ADDR(v))
+ 		return true;
++	if (reg == LPASS_HDMI_TX_VBIT_CTL_ADDR(v))
++		return true;
++	if (reg == LPASS_HDMI_TX_PARITY_ADDR(v))
++		return true;
+ 
+ 	for (i = 0; i < v->hdmi_rdma_channels; ++i) {
+ 		if (reg == LPAIF_HDMI_RDMACURR_REG(v, i))
+ 			return true;
++		if (reg == LPASS_HDMI_TX_DMA_ADDR(v, i))
++			return true;
++		if (reg == LPASS_HDMI_TX_CH_LSB_ADDR(v, i))
++			return true;
++		if (reg == LPASS_HDMI_TX_CH_MSB_ADDR(v, i))
++			return true;
+ 	}
+ 	return false;
+ }
+diff --git a/sound/synth/emux/emux.c b/sound/synth/emux/emux.c
+index 6695530bba9b3..c60ff81390a44 100644
+--- a/sound/synth/emux/emux.c
++++ b/sound/synth/emux/emux.c
+@@ -125,15 +125,10 @@ EXPORT_SYMBOL(snd_emux_register);
+  */
+ int snd_emux_free(struct snd_emux *emu)
+ {
+-	unsigned long flags;
+-
+ 	if (! emu)
+ 		return -EINVAL;
+ 
+-	spin_lock_irqsave(&emu->voice_lock, flags);
+-	if (emu->timer_active)
+-		del_timer(&emu->tlist);
+-	spin_unlock_irqrestore(&emu->voice_lock, flags);
++	del_timer_sync(&emu->tlist);
+ 
+ 	snd_emux_proc_free(emu);
+ 	snd_emux_delete_virmidi(emu);
+diff --git a/tools/iio/iio_utils.c b/tools/iio/iio_utils.c
+index 7399eb7f13786..d66b18c54606a 100644
+--- a/tools/iio/iio_utils.c
++++ b/tools/iio/iio_utils.c
+@@ -543,6 +543,10 @@ static int calc_digits(int num)
+ {
+ 	int count = 0;
+ 
++	/* It takes a digit to represent zero */
++	if (!num)
++		return 1;
++
+ 	while (num != 0) {
+ 		num /= 10;
+ 		count++;
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index d3c15b53495d6..d96e86ddd2c53 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -2164,11 +2164,19 @@ struct sym_args {
+ 	bool		near;
+ };
+ 
++static bool kern_sym_name_match(const char *kname, const char *name)
++{
++	size_t n = strlen(name);
++
++	return !strcmp(kname, name) ||
++	       (!strncmp(kname, name, n) && kname[n] == '\t');
++}
++
+ static bool kern_sym_match(struct sym_args *args, const char *name, char type)
+ {
+ 	/* A function with the same name, and global or the n'th found or any */
+ 	return kallsyms__is_function(type) &&
+-	       !strcmp(name, args->name) &&
++	       kern_sym_name_match(name, args->name) &&
+ 	       ((args->global && isupper(type)) ||
+ 		(args->selected && ++(args->cnt) == args->idx) ||
+ 		(!args->global && !args->selected));


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-11-10 18:05 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-11-10 18:05 UTC (permalink / raw
  To: gentoo-commits

commit:     f562e0f6a071e4493f1141246e27739f21a4413b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 10 18:05:34 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Nov 10 18:05:34 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f562e0f6

Linux patch 5.10.154

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1153_linux-5.10.154.patch | 4938 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4942 insertions(+)

diff --git a/0000_README b/0000_README
index 76e2297c..6ff64173 100644
--- a/0000_README
+++ b/0000_README
@@ -655,6 +655,10 @@ Patch:  1152_linux-5.10.153.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.153
 
+Patch:  1153_linux-5.10.154.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.154
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1153_linux-5.10.154.patch b/1153_linux-5.10.154.patch
new file mode 100644
index 00000000..df2ed1ad
--- /dev/null
+++ b/1153_linux-5.10.154.patch
@@ -0,0 +1,4938 @@
+diff --git a/Documentation/trace/histogram.rst b/Documentation/trace/histogram.rst
+index f99be8062bc82..a9ffc4f3ee69a 100644
+--- a/Documentation/trace/histogram.rst
++++ b/Documentation/trace/histogram.rst
+@@ -39,7 +39,7 @@ Documentation written by Tom Zanussi
+   will use the event's kernel stacktrace as the key.  The keywords
+   'keys' or 'key' can be used to specify keys, and the keywords
+   'values', 'vals', or 'val' can be used to specify values.  Compound
+-  keys consisting of up to two fields can be specified by the 'keys'
++  keys consisting of up to three fields can be specified by the 'keys'
+   keyword.  Hashing a compound key produces a unique entry in the
+   table for each unique combination of component keys, and can be
+   useful for providing more fine-grained summaries of event data.
+diff --git a/Makefile b/Makefile
+index d1cd7539105df..43fecb4045814 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 153
++SUBLEVEL = 154
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-gw5910.dtsi b/arch/arm/boot/dts/imx6qdl-gw5910.dtsi
+index ed4e222599594..852601d5ab6b8 100644
+--- a/arch/arm/boot/dts/imx6qdl-gw5910.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-gw5910.dtsi
+@@ -31,7 +31,7 @@
+ 
+ 		user-pb {
+ 			label = "user_pb";
+-			gpios = <&gsc_gpio 0 GPIO_ACTIVE_LOW>;
++			gpios = <&gsc_gpio 2 GPIO_ACTIVE_LOW>;
+ 			linux,code = <BTN_0>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-gw5913.dtsi b/arch/arm/boot/dts/imx6qdl-gw5913.dtsi
+index 4cd7d290f5b28..7a2628fdd1424 100644
+--- a/arch/arm/boot/dts/imx6qdl-gw5913.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-gw5913.dtsi
+@@ -28,7 +28,7 @@
+ 
+ 		user-pb {
+ 			label = "user_pb";
+-			gpios = <&gsc_gpio 0 GPIO_ACTIVE_LOW>;
++			gpios = <&gsc_gpio 2 GPIO_ACTIVE_LOW>;
+ 			linux,code = <BTN_0>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/arm/juno-base.dtsi b/arch/arm64/boot/dts/arm/juno-base.dtsi
+index 2c0161125ece4..cb45a2f0537a9 100644
+--- a/arch/arm64/boot/dts/arm/juno-base.dtsi
++++ b/arch/arm64/boot/dts/arm/juno-base.dtsi
+@@ -595,12 +595,26 @@
+ 			polling-delay = <1000>;
+ 			polling-delay-passive = <100>;
+ 			thermal-sensors = <&scpi_sensors0 0>;
++			trips {
++				pmic_crit0: trip0 {
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "critical";
++				};
++			};
+ 		};
+ 
+ 		soc {
+ 			polling-delay = <1000>;
+ 			polling-delay-passive = <100>;
+ 			thermal-sensors = <&scpi_sensors0 3>;
++			trips {
++				soc_crit0: trip0 {
++					temperature = <80000>;
++					hysteresis = <2000>;
++					type = "critical";
++				};
++			};
+ 		};
+ 
+ 		big_cluster_thermal_zone: big-cluster {
+diff --git a/arch/parisc/include/asm/hardware.h b/arch/parisc/include/asm/hardware.h
+index 9d3d7737c58b1..a005ebc547793 100644
+--- a/arch/parisc/include/asm/hardware.h
++++ b/arch/parisc/include/asm/hardware.h
+@@ -10,12 +10,12 @@
+ #define SVERSION_ANY_ID		PA_SVERSION_ANY_ID
+ 
+ struct hp_hardware {
+-	unsigned short	hw_type:5;	/* HPHW_xxx */
+-	unsigned short	hversion;
+-	unsigned long	sversion:28;
+-	unsigned short	opt;
+-	const char	name[80];	/* The hardware description */
+-};
++	unsigned int	hw_type:8;	/* HPHW_xxx */
++	unsigned int	hversion:12;
++	unsigned int	sversion:12;
++	unsigned char	opt;
++	unsigned char	name[59];	/* The hardware description */
++} __packed;
+ 
+ struct parisc_device;
+ 
+diff --git a/arch/parisc/kernel/drivers.c b/arch/parisc/kernel/drivers.c
+index f5a25ed0930d2..d95157488832a 100644
+--- a/arch/parisc/kernel/drivers.c
++++ b/arch/parisc/kernel/drivers.c
+@@ -883,15 +883,13 @@ void __init walk_central_bus(void)
+ 			&root);
+ }
+ 
+-static void print_parisc_device(struct parisc_device *dev)
++static __init void print_parisc_device(struct parisc_device *dev)
+ {
+-	char hw_path[64];
+-	static int count;
++	static int count __initdata;
+ 
+-	print_pa_hwpath(dev, hw_path);
+-	pr_info("%d. %s at %pap [%s] { %d, 0x%x, 0x%.3x, 0x%.5x }",
+-		++count, dev->name, &(dev->hpa.start), hw_path, dev->id.hw_type,
+-		dev->id.hversion_rev, dev->id.hversion, dev->id.sversion);
++	pr_info("%d. %s at %pap { type:%d, hv:%#x, sv:%#x, rev:%#x }",
++		++count, dev->name, &(dev->hpa.start), dev->id.hw_type,
++		dev->id.hversion, dev->id.sversion, dev->id.hversion_rev);
+ 
+ 	if (dev->num_addrs) {
+ 		int k;
+@@ -1080,7 +1078,7 @@ static __init int qemu_print_iodc_data(struct device *lin_dev, void *data)
+ 
+ 
+ 
+-static int print_one_device(struct device * dev, void * data)
++static __init int print_one_device(struct device * dev, void * data)
+ {
+ 	struct parisc_device * pdev = to_parisc_device(dev);
+ 
+diff --git a/arch/s390/boot/compressed/vmlinux.lds.S b/arch/s390/boot/compressed/vmlinux.lds.S
+index 9427e2cd0c154..11bf3919610e4 100644
+--- a/arch/s390/boot/compressed/vmlinux.lds.S
++++ b/arch/s390/boot/compressed/vmlinux.lds.S
+@@ -91,8 +91,17 @@ SECTIONS
+ 		_compressed_start = .;
+ 		*(.vmlinux.bin.compressed)
+ 		_compressed_end = .;
+-		FILL(0xff);
+-		. = ALIGN(4096);
++	}
++
++#define SB_TRAILER_SIZE 32
++	/* Trailer needed for Secure Boot */
++	. += SB_TRAILER_SIZE; /* make sure .sb.trailer does not overwrite the previous section */
++	. = ALIGN(4096) - SB_TRAILER_SIZE;
++	.sb.trailer : {
++		QUAD(0)
++		QUAD(0)
++		QUAD(0)
++		QUAD(0x000000207a49504c)
+ 	}
+ 	_end = .;
+ 
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index f6eadf9320a1a..990d5543e3bf2 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4412,6 +4412,7 @@ static const struct x86_cpu_desc isolation_ucodes[] = {
+ 	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X,		 5, 0x00000000),
+ 	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X,		 6, 0x00000000),
+ 	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X,		 7, 0x00000000),
++	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X,		11, 0x00000000),
+ 	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_L,		 3, 0x0000007c),
+ 	INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE,		 3, 0x0000007c),
+ 	INTEL_CPU_DESC(INTEL_FAM6_KABYLAKE,		 9, 0x0000004e),
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 945d470f62d0f..48f30ffef1f4b 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -855,8 +855,13 @@ struct event_constraint intel_icl_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x0400, 0x800000000ULL),	/* SLOTS */
+ 
+ 	INTEL_PLD_CONSTRAINT(0x1cd, 0xff),			/* MEM_TRANS_RETIRED.LOAD_LATENCY */
+-	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x1d0, 0xf),	/* MEM_INST_RETIRED.LOAD */
+-	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x2d0, 0xf),	/* MEM_INST_RETIRED.STORE */
++	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf),	/* MEM_INST_RETIRED.STLB_MISS_LOADS */
++	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x12d0, 0xf),	/* MEM_INST_RETIRED.STLB_MISS_STORES */
++	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x21d0, 0xf),	/* MEM_INST_RETIRED.LOCK_LOADS */
++	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x41d0, 0xf),	/* MEM_INST_RETIRED.SPLIT_LOADS */
++	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x42d0, 0xf),	/* MEM_INST_RETIRED.SPLIT_STORES */
++	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x81d0, 0xf),	/* MEM_INST_RETIRED.ALL_LOADS */
++	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x82d0, 0xf),	/* MEM_INST_RETIRED.ALL_STORES */
+ 
+ 	INTEL_FLAGS_EVENT_CONSTRAINT_DATALA_LD_RANGE(0xd1, 0xd4, 0xf), /* MEM_LOAD_*_RETIRED.* */
+ 
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index 91288da295995..37d48ab3d077c 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -96,6 +96,8 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
+ 	unsigned int ht_mask_width, core_plus_mask_width, die_plus_mask_width;
+ 	unsigned int core_select_mask, core_level_siblings;
+ 	unsigned int die_select_mask, die_level_siblings;
++	unsigned int pkg_mask_width;
++	bool die_level_present = false;
+ 	int leaf;
+ 
+ 	leaf = detect_extended_topology_leaf(c);
+@@ -110,10 +112,10 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
+ 	core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
+ 	core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+ 	die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
+-	die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
++	pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+ 
+ 	sub_index = 1;
+-	do {
++	while (true) {
+ 		cpuid_count(leaf, sub_index, &eax, &ebx, &ecx, &edx);
+ 
+ 		/*
+@@ -126,23 +128,33 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
+ 			die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+ 		}
+ 		if (LEAFB_SUBTYPE(ecx) == DIE_TYPE) {
++			die_level_present = true;
+ 			die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
+ 			die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+ 		}
+ 
++		if (LEAFB_SUBTYPE(ecx) != INVALID_TYPE)
++			pkg_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
++		else
++			break;
++
+ 		sub_index++;
+-	} while (LEAFB_SUBTYPE(ecx) != INVALID_TYPE);
++	}
+ 
+-	core_select_mask = (~(-1 << core_plus_mask_width)) >> ht_mask_width;
++	core_select_mask = (~(-1 << pkg_mask_width)) >> ht_mask_width;
+ 	die_select_mask = (~(-1 << die_plus_mask_width)) >>
+ 				core_plus_mask_width;
+ 
+ 	c->cpu_core_id = apic->phys_pkg_id(c->initial_apicid,
+ 				ht_mask_width) & core_select_mask;
+-	c->cpu_die_id = apic->phys_pkg_id(c->initial_apicid,
+-				core_plus_mask_width) & die_select_mask;
++
++	if (die_level_present) {
++		c->cpu_die_id = apic->phys_pkg_id(c->initial_apicid,
++					core_plus_mask_width) & die_select_mask;
++	}
++
+ 	c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid,
+-				die_plus_mask_width);
++				pkg_mask_width);
+ 	/*
+ 	 * Reinit the apicid, now that we have extended initial_apicid.
+ 	 */
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 6f44274aa949d..06a776fdb90cf 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -813,11 +813,13 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 		entry->eax = min(entry->eax, 0x8000001f);
+ 		break;
+ 	case 0x80000001:
++		entry->ebx &= ~GENMASK(27, 16);
+ 		cpuid_entry_override(entry, CPUID_8000_0001_EDX);
+ 		cpuid_entry_override(entry, CPUID_8000_0001_ECX);
+ 		break;
+ 	case 0x80000006:
+-		/* L2 cache and TLB: pass through host info. */
++		/* Drop reserved bits, pass host L2 cache and TLB info. */
++		entry->edx &= ~GENMASK(17, 16);
+ 		break;
+ 	case 0x80000007: /* Advanced power management */
+ 		/* invariant TSC is CPUID.80000007H:EDX[8] */
+@@ -840,6 +842,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 			g_phys_as = phys_as;
+ 
+ 		entry->eax = g_phys_as | (virt_as << 8);
++		entry->ecx &= ~(GENMASK(31, 16) | GENMASK(11, 8));
+ 		entry->edx = 0;
+ 		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
+ 		break;
+@@ -859,6 +862,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 		entry->ecx = entry->edx = 0;
+ 		break;
+ 	case 0x8000001a:
++		entry->eax &= GENMASK(2, 0);
++		entry->ebx = entry->ecx = entry->edx = 0;
++		break;
+ 	case 0x8000001e:
+ 		break;
+ 	/* Support memory encryption cpuid if host supports it */
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 52a881d240704..63efccc8f4292 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -796,8 +796,7 @@ static int linearize(struct x86_emulate_ctxt *ctxt,
+ 			   ctxt->mode, linear);
+ }
+ 
+-static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst,
+-			     enum x86emul_mode mode)
++static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst)
+ {
+ 	ulong linear;
+ 	int rc;
+@@ -807,41 +806,71 @@ static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst,
+ 
+ 	if (ctxt->op_bytes != sizeof(unsigned long))
+ 		addr.ea = dst & ((1UL << (ctxt->op_bytes << 3)) - 1);
+-	rc = __linearize(ctxt, addr, &max_size, 1, false, true, mode, &linear);
++	rc = __linearize(ctxt, addr, &max_size, 1, false, true, ctxt->mode, &linear);
+ 	if (rc == X86EMUL_CONTINUE)
+ 		ctxt->_eip = addr.ea;
+ 	return rc;
+ }
+ 
++static inline int emulator_recalc_and_set_mode(struct x86_emulate_ctxt *ctxt)
++{
++	u64 efer;
++	struct desc_struct cs;
++	u16 selector;
++	u32 base3;
++
++	ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
++
++	if (!(ctxt->ops->get_cr(ctxt, 0) & X86_CR0_PE)) {
++		/* Real mode. cpu must not have long mode active */
++		if (efer & EFER_LMA)
++			return X86EMUL_UNHANDLEABLE;
++		ctxt->mode = X86EMUL_MODE_REAL;
++		return X86EMUL_CONTINUE;
++	}
++
++	if (ctxt->eflags & X86_EFLAGS_VM) {
++		/* Protected/VM86 mode. cpu must not have long mode active */
++		if (efer & EFER_LMA)
++			return X86EMUL_UNHANDLEABLE;
++		ctxt->mode = X86EMUL_MODE_VM86;
++		return X86EMUL_CONTINUE;
++	}
++
++	if (!ctxt->ops->get_segment(ctxt, &selector, &cs, &base3, VCPU_SREG_CS))
++		return X86EMUL_UNHANDLEABLE;
++
++	if (efer & EFER_LMA) {
++		if (cs.l) {
++			/* Proper long mode */
++			ctxt->mode = X86EMUL_MODE_PROT64;
++		} else if (cs.d) {
++			/* 32 bit compatibility mode*/
++			ctxt->mode = X86EMUL_MODE_PROT32;
++		} else {
++			ctxt->mode = X86EMUL_MODE_PROT16;
++		}
++	} else {
++		/* Legacy 32 bit / 16 bit mode */
++		ctxt->mode = cs.d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
++	}
++
++	return X86EMUL_CONTINUE;
++}
++
+ static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
+ {
+-	return assign_eip(ctxt, dst, ctxt->mode);
++	return assign_eip(ctxt, dst);
+ }
+ 
+-static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst,
+-			  const struct desc_struct *cs_desc)
++static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst)
+ {
+-	enum x86emul_mode mode = ctxt->mode;
+-	int rc;
++	int rc = emulator_recalc_and_set_mode(ctxt);
+ 
+-#ifdef CONFIG_X86_64
+-	if (ctxt->mode >= X86EMUL_MODE_PROT16) {
+-		if (cs_desc->l) {
+-			u64 efer = 0;
++	if (rc != X86EMUL_CONTINUE)
++		return rc;
+ 
+-			ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
+-			if (efer & EFER_LMA)
+-				mode = X86EMUL_MODE_PROT64;
+-		} else
+-			mode = X86EMUL_MODE_PROT32; /* temporary value */
+-	}
+-#endif
+-	if (mode == X86EMUL_MODE_PROT16 || mode == X86EMUL_MODE_PROT32)
+-		mode = cs_desc->d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
+-	rc = assign_eip(ctxt, dst, mode);
+-	if (rc == X86EMUL_CONTINUE)
+-		ctxt->mode = mode;
+-	return rc;
++	return assign_eip(ctxt, dst);
+ }
+ 
+ static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
+@@ -2256,7 +2285,7 @@ static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
+ 	if (rc != X86EMUL_CONTINUE)
+ 		return rc;
+ 
+-	rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
++	rc = assign_eip_far(ctxt, ctxt->src.val);
+ 	/* Error handling is not implemented. */
+ 	if (rc != X86EMUL_CONTINUE)
+ 		return X86EMUL_UNHANDLEABLE;
+@@ -2337,7 +2366,7 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt)
+ 				       &new_desc);
+ 	if (rc != X86EMUL_CONTINUE)
+ 		return rc;
+-	rc = assign_eip_far(ctxt, eip, &new_desc);
++	rc = assign_eip_far(ctxt, eip);
+ 	/* Error handling is not implemented. */
+ 	if (rc != X86EMUL_CONTINUE)
+ 		return X86EMUL_UNHANDLEABLE;
+@@ -2957,6 +2986,7 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt)
+ 	ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS);
+ 
+ 	ctxt->_eip = rdx;
++	ctxt->mode = usermode;
+ 	*reg_write(ctxt, VCPU_REGS_RSP) = rcx;
+ 
+ 	return X86EMUL_CONTINUE;
+@@ -3553,7 +3583,7 @@ static int em_call_far(struct x86_emulate_ctxt *ctxt)
+ 	if (rc != X86EMUL_CONTINUE)
+ 		return rc;
+ 
+-	rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
++	rc = assign_eip_far(ctxt, ctxt->src.val);
+ 	if (rc != X86EMUL_CONTINUE)
+ 		goto fail;
+ 
+@@ -3695,11 +3725,25 @@ static int em_movbe(struct x86_emulate_ctxt *ctxt)
+ 
+ static int em_cr_write(struct x86_emulate_ctxt *ctxt)
+ {
+-	if (ctxt->ops->set_cr(ctxt, ctxt->modrm_reg, ctxt->src.val))
++	int cr_num = ctxt->modrm_reg;
++	int r;
++
++	if (ctxt->ops->set_cr(ctxt, cr_num, ctxt->src.val))
+ 		return emulate_gp(ctxt, 0);
+ 
+ 	/* Disable writeback. */
+ 	ctxt->dst.type = OP_NONE;
++
++	if (cr_num == 0) {
++		/*
++		 * CR0 write might have updated CR0.PE and/or CR0.PG
++		 * which can affect the cpu's execution mode.
++		 */
++		r = emulator_recalc_and_set_mode(ctxt);
++		if (r != X86EMUL_CONTINUE)
++			return r;
++	}
++
+ 	return X86EMUL_CONTINUE;
+ }
+ 
+diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
+index a2835d784f4be..3d4988ea8b572 100644
+--- a/arch/x86/kvm/trace.h
++++ b/arch/x86/kvm/trace.h
+@@ -304,25 +304,29 @@ TRACE_EVENT(kvm_inj_virq,
+  * Tracepoint for kvm interrupt injection:
+  */
+ TRACE_EVENT(kvm_inj_exception,
+-	TP_PROTO(unsigned exception, bool has_error, unsigned error_code),
+-	TP_ARGS(exception, has_error, error_code),
++	TP_PROTO(unsigned exception, bool has_error, unsigned error_code,
++		 bool reinjected),
++	TP_ARGS(exception, has_error, error_code, reinjected),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(	u8,	exception	)
+ 		__field(	u8,	has_error	)
+ 		__field(	u32,	error_code	)
++		__field(	bool,	reinjected	)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->exception	= exception;
+ 		__entry->has_error	= has_error;
+ 		__entry->error_code	= error_code;
++		__entry->reinjected	= reinjected;
+ 	),
+ 
+-	TP_printk("%s (0x%x)",
++	TP_printk("%s (0x%x)%s",
+ 		  __print_symbolic(__entry->exception, kvm_trace_sym_exc),
+ 		  /* FIXME: don't print error_code if not present */
+-		  __entry->has_error ? __entry->error_code : 0)
++		  __entry->has_error ? __entry->error_code : 0,
++		  __entry->reinjected ? " [reinjected]" : "")
+ );
+ 
+ /*
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 7f15e2b2a0d6c..498fed0dda98c 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2232,7 +2232,8 @@ static void prepare_vmcs02_early_rare(struct vcpu_vmx *vmx,
+ 	}
+ }
+ 
+-static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
++static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs01,
++				 struct vmcs12 *vmcs12)
+ {
+ 	u32 exec_control, vmcs12_exec_ctrl;
+ 	u64 guest_efer = nested_vmx_calc_efer(vmx, vmcs12);
+@@ -2243,7 +2244,7 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
+ 	/*
+ 	 * PIN CONTROLS
+ 	 */
+-	exec_control = vmx_pin_based_exec_ctrl(vmx);
++	exec_control = __pin_controls_get(vmcs01);
+ 	exec_control |= (vmcs12->pin_based_vm_exec_control &
+ 			 ~PIN_BASED_VMX_PREEMPTION_TIMER);
+ 
+@@ -2258,7 +2259,7 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
+ 	/*
+ 	 * EXEC CONTROLS
+ 	 */
+-	exec_control = vmx_exec_control(vmx); /* L0's desires */
++	exec_control = __exec_controls_get(vmcs01); /* L0's desires */
+ 	exec_control &= ~CPU_BASED_INTR_WINDOW_EXITING;
+ 	exec_control &= ~CPU_BASED_NMI_WINDOW_EXITING;
+ 	exec_control &= ~CPU_BASED_TPR_SHADOW;
+@@ -2295,17 +2296,20 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
+ 	 * SECONDARY EXEC CONTROLS
+ 	 */
+ 	if (cpu_has_secondary_exec_ctrls()) {
+-		exec_control = vmx->secondary_exec_control;
++		exec_control = __secondary_exec_controls_get(vmcs01);
+ 
+ 		/* Take the following fields only from vmcs12 */
+ 		exec_control &= ~(SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
++				  SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE |
+ 				  SECONDARY_EXEC_ENABLE_INVPCID |
+ 				  SECONDARY_EXEC_ENABLE_RDTSCP |
+ 				  SECONDARY_EXEC_XSAVES |
+ 				  SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE |
+ 				  SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
+ 				  SECONDARY_EXEC_APIC_REGISTER_VIRT |
+-				  SECONDARY_EXEC_ENABLE_VMFUNC);
++				  SECONDARY_EXEC_ENABLE_VMFUNC |
++				  SECONDARY_EXEC_DESC);
++
+ 		if (nested_cpu_has(vmcs12,
+ 				   CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)) {
+ 			vmcs12_exec_ctrl = vmcs12->secondary_vm_exec_control &
+@@ -2341,9 +2345,15 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
+ 	 * are emulated by vmx_set_efer() in prepare_vmcs02(), but speculate
+ 	 * on the related bits (if supported by the CPU) in the hope that
+ 	 * we can avoid VMWrites during vmx_set_efer().
++	 *
++	 * Similarly, take vmcs01's PERF_GLOBAL_CTRL in the hope that if KVM is
++	 * loading PERF_GLOBAL_CTRL via the VMCS for L1, then KVM will want to
++	 * do the same for L2.
+ 	 */
+-	exec_control = (vmcs12->vm_entry_controls | vmx_vmentry_ctrl()) &
+-			~VM_ENTRY_IA32E_MODE & ~VM_ENTRY_LOAD_IA32_EFER;
++	exec_control = __vm_entry_controls_get(vmcs01);
++	exec_control |= (vmcs12->vm_entry_controls &
++			 ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL);
++	exec_control &= ~(VM_ENTRY_IA32E_MODE | VM_ENTRY_LOAD_IA32_EFER);
+ 	if (cpu_has_load_ia32_efer()) {
+ 		if (guest_efer & EFER_LMA)
+ 			exec_control |= VM_ENTRY_IA32E_MODE;
+@@ -2359,9 +2369,11 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
+ 	 * we should use its exit controls. Note that VM_EXIT_LOAD_IA32_EFER
+ 	 * bits may be modified by vmx_set_efer() in prepare_vmcs02().
+ 	 */
+-	exec_control = vmx_vmexit_ctrl();
++	exec_control = __vm_exit_controls_get(vmcs01);
+ 	if (cpu_has_load_ia32_efer() && guest_efer != host_efer)
+ 		exec_control |= VM_EXIT_LOAD_IA32_EFER;
++	else
++		exec_control &= ~VM_EXIT_LOAD_IA32_EFER;
+ 	vm_exit_controls_set(vmx, exec_control);
+ 
+ 	/*
+@@ -3370,7 +3382,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
+ 
+ 	vmx_switch_vmcs(vcpu, &vmx->nested.vmcs02);
+ 
+-	prepare_vmcs02_early(vmx, vmcs12);
++	prepare_vmcs02_early(vmx, &vmx->vmcs01, vmcs12);
+ 
+ 	if (from_vmentry) {
+ 		if (unlikely(!nested_get_vmcs12_pages(vcpu))) {
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 24903f05c204b..ed4b6da83aa87 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -386,9 +386,13 @@ static inline void lname##_controls_set(struct vcpu_vmx *vmx, u32 val)	    \
+ 		vmx->loaded_vmcs->controls_shadow.lname = val;		    \
+ 	}								    \
+ }									    \
++static inline u32 __##lname##_controls_get(struct loaded_vmcs *vmcs)	    \
++{									    \
++	return vmcs->controls_shadow.lname;				    \
++}									    \
+ static inline u32 lname##_controls_get(struct vcpu_vmx *vmx)		    \
+ {									    \
+-	return vmx->loaded_vmcs->controls_shadow.lname;			    \
++	return __##lname##_controls_get(vmx->loaded_vmcs);		    \
+ }									    \
+ static inline void lname##_controls_setbit(struct vcpu_vmx *vmx, u32 val)   \
+ {									    \
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index f3473418dcd5d..0ac80b3ff0f5b 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -459,6 +459,7 @@ static int exception_class(int vector)
+ #define EXCPT_TRAP		1
+ #define EXCPT_ABORT		2
+ #define EXCPT_INTERRUPT		3
++#define EXCPT_DB		4
+ 
+ static int exception_type(int vector)
+ {
+@@ -469,8 +470,14 @@ static int exception_type(int vector)
+ 
+ 	mask = 1 << vector;
+ 
+-	/* #DB is trap, as instruction watchpoints are handled elsewhere */
+-	if (mask & ((1 << DB_VECTOR) | (1 << BP_VECTOR) | (1 << OF_VECTOR)))
++	/*
++	 * #DBs can be trap-like or fault-like, the caller must check other CPU
++	 * state, e.g. DR6, to determine whether a #DB is a trap or fault.
++	 */
++	if (mask & (1 << DB_VECTOR))
++		return EXCPT_DB;
++
++	if (mask & ((1 << BP_VECTOR) | (1 << OF_VECTOR)))
+ 		return EXCPT_TRAP;
+ 
+ 	if (mask & ((1 << DF_VECTOR) | (1 << MC_VECTOR)))
+@@ -5353,6 +5360,11 @@ split_irqchip_unlock:
+ 		r = 0;
+ 		break;
+ 	case KVM_CAP_X86_USER_SPACE_MSR:
++		r = -EINVAL;
++		if (cap->args[0] & ~(KVM_MSR_EXIT_REASON_INVAL |
++				     KVM_MSR_EXIT_REASON_UNKNOWN |
++				     KVM_MSR_EXIT_REASON_FILTER))
++			break;
+ 		kvm->arch.user_space_msr_mask = cap->args[0];
+ 		r = 0;
+ 		break;
+@@ -5434,23 +5446,22 @@ err:
+ 	return r;
+ }
+ 
+-static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
++static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm,
++				       struct kvm_msr_filter *filter)
+ {
+-	struct kvm_msr_filter __user *user_msr_filter = argp;
+ 	struct kvm_x86_msr_filter *new_filter, *old_filter;
+-	struct kvm_msr_filter filter;
+ 	bool default_allow;
+ 	bool empty = true;
+ 	int r = 0;
+ 	u32 i;
+ 
+-	if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
+-		return -EFAULT;
++	if (filter->flags & ~KVM_MSR_FILTER_DEFAULT_DENY)
++		return -EINVAL;
+ 
+-	for (i = 0; i < ARRAY_SIZE(filter.ranges); i++)
+-		empty &= !filter.ranges[i].nmsrs;
++	for (i = 0; i < ARRAY_SIZE(filter->ranges); i++)
++		empty &= !filter->ranges[i].nmsrs;
+ 
+-	default_allow = !(filter.flags & KVM_MSR_FILTER_DEFAULT_DENY);
++	default_allow = !(filter->flags & KVM_MSR_FILTER_DEFAULT_DENY);
+ 	if (empty && !default_allow)
+ 		return -EINVAL;
+ 
+@@ -5458,8 +5469,8 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
+ 	if (!new_filter)
+ 		return -ENOMEM;
+ 
+-	for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) {
+-		r = kvm_add_msr_filter(new_filter, &filter.ranges[i]);
++	for (i = 0; i < ARRAY_SIZE(filter->ranges); i++) {
++		r = kvm_add_msr_filter(new_filter, &filter->ranges[i]);
+ 		if (r) {
+ 			kvm_free_msr_filter(new_filter);
+ 			return r;
+@@ -5482,6 +5493,62 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_KVM_COMPAT
++/* for KVM_X86_SET_MSR_FILTER */
++struct kvm_msr_filter_range_compat {
++	__u32 flags;
++	__u32 nmsrs;
++	__u32 base;
++	__u32 bitmap;
++};
++
++struct kvm_msr_filter_compat {
++	__u32 flags;
++	struct kvm_msr_filter_range_compat ranges[KVM_MSR_FILTER_MAX_RANGES];
++};
++
++#define KVM_X86_SET_MSR_FILTER_COMPAT _IOW(KVMIO, 0xc6, struct kvm_msr_filter_compat)
++
++long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
++			      unsigned long arg)
++{
++	void __user *argp = (void __user *)arg;
++	struct kvm *kvm = filp->private_data;
++	long r = -ENOTTY;
++
++	switch (ioctl) {
++	case KVM_X86_SET_MSR_FILTER_COMPAT: {
++		struct kvm_msr_filter __user *user_msr_filter = argp;
++		struct kvm_msr_filter_compat filter_compat;
++		struct kvm_msr_filter filter;
++		int i;
++
++		if (copy_from_user(&filter_compat, user_msr_filter,
++				   sizeof(filter_compat)))
++			return -EFAULT;
++
++		filter.flags = filter_compat.flags;
++		for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) {
++			struct kvm_msr_filter_range_compat *cr;
++
++			cr = &filter_compat.ranges[i];
++			filter.ranges[i] = (struct kvm_msr_filter_range) {
++				.flags = cr->flags,
++				.nmsrs = cr->nmsrs,
++				.base = cr->base,
++				.bitmap = (__u8 *)(ulong)cr->bitmap,
++			};
++		}
++
++		r = kvm_vm_ioctl_set_msr_filter(kvm, &filter);
++		break;
++	}
++	}
++
++	return r;
++}
++#endif
++
+ long kvm_arch_vm_ioctl(struct file *filp,
+ 		       unsigned int ioctl, unsigned long arg)
+ {
+@@ -5788,9 +5855,16 @@ set_pit2_out:
+ 	case KVM_SET_PMU_EVENT_FILTER:
+ 		r = kvm_vm_ioctl_set_pmu_event_filter(kvm, argp);
+ 		break;
+-	case KVM_X86_SET_MSR_FILTER:
+-		r = kvm_vm_ioctl_set_msr_filter(kvm, argp);
++	case KVM_X86_SET_MSR_FILTER: {
++		struct kvm_msr_filter __user *user_msr_filter = argp;
++		struct kvm_msr_filter filter;
++
++		if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
++			return -EFAULT;
++
++		r = kvm_vm_ioctl_set_msr_filter(kvm, &filter);
+ 		break;
++	}
+ 	default:
+ 		r = -ENOTTY;
+ 	}
+@@ -7560,6 +7634,12 @@ restart:
+ 		unsigned long rflags = kvm_x86_ops.get_rflags(vcpu);
+ 		toggle_interruptibility(vcpu, ctxt->interruptibility);
+ 		vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
++
++		/*
++		 * Note, EXCPT_DB is assumed to be fault-like as the emulator
++		 * only supports code breakpoints and general detect #DB, both
++		 * of which are fault-like.
++		 */
+ 		if (!ctxt->have_exception ||
+ 		    exception_type(ctxt->exception.vector) == EXCPT_TRAP) {
+ 			kvm_rip_write(vcpu, ctxt->eip);
+@@ -8347,6 +8427,11 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu)
+ 
+ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
+ {
++	trace_kvm_inj_exception(vcpu->arch.exception.nr,
++				vcpu->arch.exception.has_error_code,
++				vcpu->arch.exception.error_code,
++				vcpu->arch.exception.injected);
++
+ 	if (vcpu->arch.exception.error_code && !is_protmode(vcpu))
+ 		vcpu->arch.exception.error_code = false;
+ 	kvm_x86_ops.queue_exception(vcpu);
+@@ -8404,13 +8489,16 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
+ 
+ 	/* try to inject new event if pending */
+ 	if (vcpu->arch.exception.pending) {
+-		trace_kvm_inj_exception(vcpu->arch.exception.nr,
+-					vcpu->arch.exception.has_error_code,
+-					vcpu->arch.exception.error_code);
+-
+-		vcpu->arch.exception.pending = false;
+-		vcpu->arch.exception.injected = true;
+-
++		/*
++		 * Fault-class exceptions, except #DBs, set RF=1 in the RFLAGS
++		 * value pushed on the stack.  Trap-like exception and all #DBs
++		 * leave RF as-is (KVM follows Intel's behavior in this regard;
++		 * AMD states that code breakpoint #DBs excplitly clear RF=0).
++		 *
++		 * Note, most versions of Intel's SDM and AMD's APM incorrectly
++		 * describe the behavior of General Detect #DBs, which are
++		 * fault-like.  They do _not_ set RF, a la code breakpoints.
++		 */
+ 		if (exception_type(vcpu->arch.exception.nr) == EXCPT_FAULT)
+ 			__kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) |
+ 					     X86_EFLAGS_RF);
+@@ -8424,6 +8512,10 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
+ 		}
+ 
+ 		kvm_inject_exception(vcpu);
++
++		vcpu->arch.exception.pending = false;
++		vcpu->arch.exception.injected = true;
++
+ 		can_inject = false;
+ 	}
+ 
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 592d32a46c4c3..7c4b8d0635ebd 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -421,6 +421,8 @@ static struct bfq_io_cq *bfq_bic_lookup(struct bfq_data *bfqd,
+  */
+ void bfq_schedule_dispatch(struct bfq_data *bfqd)
+ {
++	lockdep_assert_held(&bfqd->lock);
++
+ 	if (bfqd->queued != 0) {
+ 		bfq_log(bfqd, "schedule dispatch");
+ 		blk_mq_run_hw_queues(bfqd->queue, true);
+@@ -6269,8 +6271,8 @@ bfq_idle_slice_timer_body(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 	bfq_bfqq_expire(bfqd, bfqq, true, reason);
+ 
+ schedule_dispatch:
+-	spin_unlock_irqrestore(&bfqd->lock, flags);
+ 	bfq_schedule_dispatch(bfqd);
++	spin_unlock_irqrestore(&bfqd->lock, flags);
+ }
+ 
+ /*
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 5206fd3b78678..9bdb5bd5fda63 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -163,7 +163,7 @@ static void ghes_unmap(void __iomem *vaddr, enum fixed_addresses fixmap_idx)
+ 	clear_fixmap(fixmap_idx);
+ }
+ 
+-int ghes_estatus_pool_init(int num_ghes)
++int ghes_estatus_pool_init(unsigned int num_ghes)
+ {
+ 	unsigned long addr, len;
+ 	int rc;
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 95ca4f934d283..a77ed66425f27 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -212,7 +212,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
+ 		mm = alloc->vma_vm_mm;
+ 
+ 	if (mm) {
+-		mmap_read_lock(mm);
++		mmap_write_lock(mm);
+ 		vma = alloc->vma;
+ 	}
+ 
+@@ -270,7 +270,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
+ 		trace_binder_alloc_page_end(alloc, index);
+ 	}
+ 	if (mm) {
+-		mmap_read_unlock(mm);
++		mmap_write_unlock(mm);
+ 		mmput(mm);
+ 	}
+ 	return 0;
+@@ -303,7 +303,7 @@ err_page_ptr_cleared:
+ 	}
+ err_no_vma:
+ 	if (mm) {
+-		mmap_read_unlock(mm);
++		mmap_write_unlock(mm);
+ 		mmput(mm);
+ 	}
+ 	return vma ? -ENOMEM : -ESRCH;
+diff --git a/drivers/ata/pata_legacy.c b/drivers/ata/pata_legacy.c
+index d91ba47f2fc44..4405d255e3aa2 100644
+--- a/drivers/ata/pata_legacy.c
++++ b/drivers/ata/pata_legacy.c
+@@ -278,9 +278,10 @@ static void pdc20230_set_piomode(struct ata_port *ap, struct ata_device *adev)
+ 	outb(inb(0x1F4) & 0x07, 0x1F4);
+ 
+ 	rt = inb(0x1F3);
+-	rt &= 0x07 << (3 * adev->devno);
++	rt &= ~(0x07 << (3 * !adev->devno));
+ 	if (pio)
+-		rt |= (1 + 3 * pio) << (3 * adev->devno);
++		rt |= (1 + 3 * pio) << (3 * !adev->devno);
++	outb(rt, 0x1F3);
+ 
+ 	udelay(100);
+ 	outb(inb(0x1F2) | 0x01, 0x1F2);
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index 745b7f9eb3351..8d24082848a83 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -711,8 +711,12 @@ scmi_txrx_setup(struct scmi_info *info, struct device *dev, int prot_id)
+ {
+ 	int ret = scmi_chan_setup(info, dev, prot_id, true);
+ 
+-	if (!ret) /* Rx is optional, hence no error check */
+-		scmi_chan_setup(info, dev, prot_id, false);
++	if (!ret) {
++		/* Rx is optional, report only memory errors */
++		ret = scmi_chan_setup(info, dev, prot_id, false);
++		if (ret && ret != -ENOMEM)
++			ret = 0;
++	}
+ 
+ 	return ret;
+ }
+@@ -942,6 +946,7 @@ MODULE_DEVICE_TABLE(of, scmi_of_match);
+ static struct platform_driver scmi_driver = {
+ 	.driver = {
+ 		   .name = "arm-scmi",
++		   .suppress_bind_attrs = true,
+ 		   .of_match_table = scmi_of_match,
+ 		   .dev_groups = versions_groups,
+ 		   },
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index e3df82d5d37a8..70be9c87fb673 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -590,7 +590,7 @@ int __init efi_config_parse_tables(const efi_config_table_t *config_tables,
+ 
+ 		seed = early_memremap(efi_rng_seed, sizeof(*seed));
+ 		if (seed != NULL) {
+-			size = READ_ONCE(seed->size);
++			size = min(seed->size, EFI_RANDOM_SEED_SIZE);
+ 			early_memunmap(seed, sizeof(*seed));
+ 		} else {
+ 			pr_err("Could not map UEFI random seed!\n");
+diff --git a/drivers/firmware/efi/libstub/random.c b/drivers/firmware/efi/libstub/random.c
+index 24aa375353724..33ab567695951 100644
+--- a/drivers/firmware/efi/libstub/random.c
++++ b/drivers/firmware/efi/libstub/random.c
+@@ -75,7 +75,12 @@ efi_status_t efi_random_get_seed(void)
+ 	if (status != EFI_SUCCESS)
+ 		return status;
+ 
+-	status = efi_bs_call(allocate_pool, EFI_RUNTIME_SERVICES_DATA,
++	/*
++	 * Use EFI_ACPI_RECLAIM_MEMORY here so that it is guaranteed that the
++	 * allocation will survive a kexec reboot (although we refresh the seed
++	 * beforehand)
++	 */
++	status = efi_bs_call(allocate_pool, EFI_ACPI_RECLAIM_MEMORY,
+ 			     sizeof(*seed) + EFI_RANDOM_SEED_SIZE,
+ 			     (void **)&seed);
+ 	if (status != EFI_SUCCESS)
+diff --git a/drivers/firmware/efi/tpm.c b/drivers/firmware/efi/tpm.c
+index 8f665678e9e39..e8d69bd548f3f 100644
+--- a/drivers/firmware/efi/tpm.c
++++ b/drivers/firmware/efi/tpm.c
+@@ -97,7 +97,7 @@ int __init efi_tpm_eventlog_init(void)
+ 		goto out_calc;
+ 	}
+ 
+-	memblock_reserve((unsigned long)final_tbl,
++	memblock_reserve(efi.tpm_final_log,
+ 			 tbl_size + sizeof(*final_tbl));
+ 	efi_tpm_final_log_size = tbl_size;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+index 16bfb36c27e41..d6f2951035959 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+@@ -670,6 +670,12 @@ void amdgpu_detect_virtualization(struct amdgpu_device *adev)
+ 			adev->virt.caps |= AMDGPU_PASSTHROUGH_MODE;
+ 	}
+ 
++	if (amdgpu_sriov_vf(adev) && adev->asic_type == CHIP_SIENNA_CICHLID)
++		/* VF MMIO access (except mailbox range) from CPU
++		 * will be blocked during sriov runtime
++		 */
++		adev->virt.caps |= AMDGPU_VF_MMIO_ACCESS_PROTECT;
++
+ 	/* we have the ability to check now */
+ 	if (amdgpu_sriov_vf(adev)) {
+ 		switch (adev->asic_type) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+index 77b9d37bfa1b2..aea49bad914fa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+@@ -31,6 +31,7 @@
+ #define AMDGPU_SRIOV_CAPS_IS_VF        (1 << 2) /* this GPU is a virtual function */
+ #define AMDGPU_PASSTHROUGH_MODE        (1 << 3) /* thw whole GPU is pass through for VM */
+ #define AMDGPU_SRIOV_CAPS_RUNTIME      (1 << 4) /* is out of full access mode */
++#define AMDGPU_VF_MMIO_ACCESS_PROTECT  (1 << 5) /* MMIO write access is not allowed in sriov runtime */
+ 
+ /* all asic after AI use this offset */
+ #define mmRCC_IOV_FUNC_IDENTIFIER 0xDE5
+@@ -241,6 +242,9 @@ struct amdgpu_virt {
+ #define amdgpu_passthrough(adev) \
+ ((adev)->virt.caps & AMDGPU_PASSTHROUGH_MODE)
+ 
++#define amdgpu_sriov_vf_mmio_access_protection(adev) \
++((adev)->virt.caps & AMDGPU_VF_MMIO_ACCESS_PROTECT)
++
+ static inline bool is_virtual_machine(void)
+ {
+ #ifdef CONFIG_X86
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 635601d8b1310..45b1f00c59680 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -3200,7 +3200,11 @@ void amdgpu_vm_manager_init(struct amdgpu_device *adev)
+ 	 */
+ #ifdef CONFIG_X86_64
+ 	if (amdgpu_vm_update_mode == -1) {
+-		if (amdgpu_gmc_vram_full_visible(&adev->gmc))
++		/* For asic with VF MMIO access protection
++		 * avoid using CPU for VM table updates
++		 */
++		if (amdgpu_gmc_vram_full_visible(&adev->gmc) &&
++		    !amdgpu_sriov_vf_mmio_access_protection(adev))
+ 			adev->vm_manager.vm_update_mode =
+ 				AMDGPU_VM_USE_CPU_FOR_COMPUTE;
+ 		else
+diff --git a/drivers/gpu/drm/i915/display/intel_sdvo.c b/drivers/gpu/drm/i915/display/intel_sdvo.c
+index 4eaa4aa86ecdf..58f8fb7c8799c 100644
+--- a/drivers/gpu/drm/i915/display/intel_sdvo.c
++++ b/drivers/gpu/drm/i915/display/intel_sdvo.c
+@@ -2760,13 +2760,10 @@ intel_sdvo_dvi_init(struct intel_sdvo *intel_sdvo, int device)
+ 	if (!intel_sdvo_connector)
+ 		return false;
+ 
+-	if (device == 0) {
+-		intel_sdvo->controlled_output |= SDVO_OUTPUT_TMDS0;
++	if (device == 0)
+ 		intel_sdvo_connector->output_flag = SDVO_OUTPUT_TMDS0;
+-	} else if (device == 1) {
+-		intel_sdvo->controlled_output |= SDVO_OUTPUT_TMDS1;
++	else if (device == 1)
+ 		intel_sdvo_connector->output_flag = SDVO_OUTPUT_TMDS1;
+-	}
+ 
+ 	intel_connector = &intel_sdvo_connector->base;
+ 	connector = &intel_connector->base;
+@@ -2821,7 +2818,6 @@ intel_sdvo_tv_init(struct intel_sdvo *intel_sdvo, int type)
+ 	encoder->encoder_type = DRM_MODE_ENCODER_TVDAC;
+ 	connector->connector_type = DRM_MODE_CONNECTOR_SVIDEO;
+ 
+-	intel_sdvo->controlled_output |= type;
+ 	intel_sdvo_connector->output_flag = type;
+ 
+ 	if (intel_sdvo_connector_init(intel_sdvo_connector, intel_sdvo) < 0) {
+@@ -2862,13 +2858,10 @@ intel_sdvo_analog_init(struct intel_sdvo *intel_sdvo, int device)
+ 	encoder->encoder_type = DRM_MODE_ENCODER_DAC;
+ 	connector->connector_type = DRM_MODE_CONNECTOR_VGA;
+ 
+-	if (device == 0) {
+-		intel_sdvo->controlled_output |= SDVO_OUTPUT_RGB0;
++	if (device == 0)
+ 		intel_sdvo_connector->output_flag = SDVO_OUTPUT_RGB0;
+-	} else if (device == 1) {
+-		intel_sdvo->controlled_output |= SDVO_OUTPUT_RGB1;
++	else if (device == 1)
+ 		intel_sdvo_connector->output_flag = SDVO_OUTPUT_RGB1;
+-	}
+ 
+ 	if (intel_sdvo_connector_init(intel_sdvo_connector, intel_sdvo) < 0) {
+ 		kfree(intel_sdvo_connector);
+@@ -2898,13 +2891,10 @@ intel_sdvo_lvds_init(struct intel_sdvo *intel_sdvo, int device)
+ 	encoder->encoder_type = DRM_MODE_ENCODER_LVDS;
+ 	connector->connector_type = DRM_MODE_CONNECTOR_LVDS;
+ 
+-	if (device == 0) {
+-		intel_sdvo->controlled_output |= SDVO_OUTPUT_LVDS0;
++	if (device == 0)
+ 		intel_sdvo_connector->output_flag = SDVO_OUTPUT_LVDS0;
+-	} else if (device == 1) {
+-		intel_sdvo->controlled_output |= SDVO_OUTPUT_LVDS1;
++	else if (device == 1)
+ 		intel_sdvo_connector->output_flag = SDVO_OUTPUT_LVDS1;
+-	}
+ 
+ 	if (intel_sdvo_connector_init(intel_sdvo_connector, intel_sdvo) < 0) {
+ 		kfree(intel_sdvo_connector);
+@@ -2937,16 +2927,39 @@ err:
+ 	return false;
+ }
+ 
++static u16 intel_sdvo_filter_output_flags(u16 flags)
++{
++	flags &= SDVO_OUTPUT_MASK;
++
++	/* SDVO requires XXX1 function may not exist unless it has XXX0 function.*/
++	if (!(flags & SDVO_OUTPUT_TMDS0))
++		flags &= ~SDVO_OUTPUT_TMDS1;
++
++	if (!(flags & SDVO_OUTPUT_RGB0))
++		flags &= ~SDVO_OUTPUT_RGB1;
++
++	if (!(flags & SDVO_OUTPUT_LVDS0))
++		flags &= ~SDVO_OUTPUT_LVDS1;
++
++	return flags;
++}
++
+ static bool
+ intel_sdvo_output_setup(struct intel_sdvo *intel_sdvo, u16 flags)
+ {
+-	/* SDVO requires XXX1 function may not exist unless it has XXX0 function.*/
++	struct drm_i915_private *i915 = to_i915(intel_sdvo->base.base.dev);
++
++	flags = intel_sdvo_filter_output_flags(flags);
++
++	intel_sdvo->controlled_output = flags;
++
++	intel_sdvo_select_ddc_bus(i915, intel_sdvo);
+ 
+ 	if (flags & SDVO_OUTPUT_TMDS0)
+ 		if (!intel_sdvo_dvi_init(intel_sdvo, 0))
+ 			return false;
+ 
+-	if ((flags & SDVO_TMDS_MASK) == SDVO_TMDS_MASK)
++	if (flags & SDVO_OUTPUT_TMDS1)
+ 		if (!intel_sdvo_dvi_init(intel_sdvo, 1))
+ 			return false;
+ 
+@@ -2967,7 +2980,7 @@ intel_sdvo_output_setup(struct intel_sdvo *intel_sdvo, u16 flags)
+ 		if (!intel_sdvo_analog_init(intel_sdvo, 0))
+ 			return false;
+ 
+-	if ((flags & SDVO_RGB_MASK) == SDVO_RGB_MASK)
++	if (flags & SDVO_OUTPUT_RGB1)
+ 		if (!intel_sdvo_analog_init(intel_sdvo, 1))
+ 			return false;
+ 
+@@ -2975,14 +2988,13 @@ intel_sdvo_output_setup(struct intel_sdvo *intel_sdvo, u16 flags)
+ 		if (!intel_sdvo_lvds_init(intel_sdvo, 0))
+ 			return false;
+ 
+-	if ((flags & SDVO_LVDS_MASK) == SDVO_LVDS_MASK)
++	if (flags & SDVO_OUTPUT_LVDS1)
+ 		if (!intel_sdvo_lvds_init(intel_sdvo, 1))
+ 			return false;
+ 
+-	if ((flags & SDVO_OUTPUT_MASK) == 0) {
++	if (flags == 0) {
+ 		unsigned char bytes[2];
+ 
+-		intel_sdvo->controlled_output = 0;
+ 		memcpy(bytes, &intel_sdvo->caps.output_flags, 2);
+ 		DRM_DEBUG_KMS("%s: Unknown SDVO output type (0x%02x%02x)\n",
+ 			      SDVO_NAME(intel_sdvo),
+@@ -3394,8 +3406,6 @@ bool intel_sdvo_init(struct drm_i915_private *dev_priv,
+ 	 */
+ 	intel_sdvo->base.cloneable = 0;
+ 
+-	intel_sdvo_select_ddc_bus(dev_priv, intel_sdvo);
+-
+ 	/* Set the input timing to the screen. Assume always input 0. */
+ 	if (!intel_sdvo_set_target_input(intel_sdvo))
+ 		goto err_output;
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
+index 47796e12b4322..bd65dc9b88923 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
+@@ -326,8 +326,8 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
+ 		goto fail;
+ 	}
+ 
+-	ret = devm_request_irq(&pdev->dev, hdmi->irq,
+-			msm_hdmi_irq, IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
++	ret = devm_request_irq(dev->dev, hdmi->irq,
++			msm_hdmi_irq, IRQF_TRIGGER_HIGH,
+ 			"hdmi_isr", hdmi);
+ 	if (ret < 0) {
+ 		DRM_DEV_ERROR(dev->dev, "failed to request IRQ%u: %d\n",
+diff --git a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+index b0fb3c3cba596..c51be1c9c2070 100644
+--- a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+@@ -1286,5 +1286,11 @@ struct platform_driver dw_mipi_dsi_rockchip_driver = {
+ 		.of_match_table = dw_mipi_dsi_rockchip_dt_ids,
+ 		.pm	= &dw_mipi_dsi_rockchip_pm_ops,
+ 		.name	= "dw-mipi-dsi-rockchip",
++		/*
++		 * For dual-DSI display, one DSI pokes at the other DSI's
++		 * drvdata in dw_mipi_dsi_rockchip_find_second(). This is not
++		 * safe for asynchronous probe.
++		 */
++		.probe_type = PROBE_FORCE_SYNCHRONOUS,
+ 	},
+ };
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index bb096dfb7b363..3350a41d7dce1 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -827,6 +827,7 @@
+ #define USB_DEVICE_ID_MADCATZ_BEATPAD	0x4540
+ #define USB_DEVICE_ID_MADCATZ_RAT5	0x1705
+ #define USB_DEVICE_ID_MADCATZ_RAT9	0x1709
++#define USB_DEVICE_ID_MADCATZ_MMO7  0x1713
+ 
+ #define USB_VENDOR_ID_MCC		0x09db
+ #define USB_DEVICE_ID_MCC_PMD1024LS	0x0076
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 2ab71d717bb03..4a8014e9a511c 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -609,6 +609,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_MMO7) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT5) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT9) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_MMO7) },
+ #endif
+ #if IS_ENABLED(CONFIG_HID_SAMSUNG)
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SAMSUNG, USB_DEVICE_ID_SAMSUNG_IR_REMOTE) },
+diff --git a/drivers/hid/hid-saitek.c b/drivers/hid/hid-saitek.c
+index c7bf14c019605..b84e975977c42 100644
+--- a/drivers/hid/hid-saitek.c
++++ b/drivers/hid/hid-saitek.c
+@@ -187,6 +187,8 @@ static const struct hid_device_id saitek_devices[] = {
+ 		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_MMO7),
+ 		.driver_data = SAITEK_RELEASE_MODE_MMO7 },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_MMO7),
++		.driver_data = SAITEK_RELEASE_MODE_MMO7 },
+ 	{ }
+ };
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-cti-core.c b/drivers/hwtracing/coresight/coresight-cti-core.c
+index 0276700c246d5..90270696206c9 100644
+--- a/drivers/hwtracing/coresight/coresight-cti-core.c
++++ b/drivers/hwtracing/coresight/coresight-cti-core.c
+@@ -90,11 +90,9 @@ void cti_write_all_hw_regs(struct cti_drvdata *drvdata)
+ static int cti_enable_hw(struct cti_drvdata *drvdata)
+ {
+ 	struct cti_config *config = &drvdata->config;
+-	struct device *dev = &drvdata->csdev->dev;
+ 	unsigned long flags;
+ 	int rc = 0;
+ 
+-	pm_runtime_get_sync(dev->parent);
+ 	spin_lock_irqsave(&drvdata->spinlock, flags);
+ 
+ 	/* no need to do anything if enabled or unpowered*/
+@@ -119,7 +117,6 @@ cti_state_unchanged:
+ 	/* cannot enable due to error */
+ cti_err_not_enabled:
+ 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+-	pm_runtime_put(dev->parent);
+ 	return rc;
+ }
+ 
+@@ -153,7 +150,6 @@ cti_hp_not_enabled:
+ static int cti_disable_hw(struct cti_drvdata *drvdata)
+ {
+ 	struct cti_config *config = &drvdata->config;
+-	struct device *dev = &drvdata->csdev->dev;
+ 
+ 	spin_lock(&drvdata->spinlock);
+ 
+@@ -174,7 +170,6 @@ static int cti_disable_hw(struct cti_drvdata *drvdata)
+ 	coresight_disclaim_device_unlocked(drvdata->base);
+ 	CS_LOCK(drvdata->base);
+ 	spin_unlock(&drvdata->spinlock);
+-	pm_runtime_put(dev->parent);
+ 	return 0;
+ 
+ 	/* not disabled this call */
+diff --git a/drivers/i2c/busses/i2c-piix4.c b/drivers/i2c/busses/i2c-piix4.c
+index 8c1b31ed0c429..aa1d3657ab4e6 100644
+--- a/drivers/i2c/busses/i2c-piix4.c
++++ b/drivers/i2c/busses/i2c-piix4.c
+@@ -961,6 +961,7 @@ static int piix4_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 					   "", &piix4_main_adapters[0]);
+ 		if (retval < 0)
+ 			return retval;
++		piix4_adapter_count = 1;
+ 	}
+ 
+ 	/* Check for auxiliary SMBus on some AMD chipsets */
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index 8dabb6ffb1a4f..3b564e68130b5 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -935,6 +935,7 @@ static struct platform_driver xiic_i2c_driver = {
+ 
+ module_platform_driver(xiic_i2c_driver);
+ 
++MODULE_ALIAS("platform:" DRIVER_NAME);
+ MODULE_AUTHOR("info@mocean-labs.com");
+ MODULE_DESCRIPTION("Xilinx I2C bus driver");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index b5fa19a033c0a..9ed5de38e372f 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1437,7 +1437,7 @@ static bool validate_ipv4_net_dev(struct net_device *net_dev,
+ 		return false;
+ 
+ 	memset(&fl4, 0, sizeof(fl4));
+-	fl4.flowi4_iif = net_dev->ifindex;
++	fl4.flowi4_oif = net_dev->ifindex;
+ 	fl4.daddr = daddr;
+ 	fl4.saddr = saddr;
+ 
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index aa526c5ca0cf3..d91892ffe2436 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -2759,10 +2759,18 @@ static int __init ib_core_init(void)
+ 
+ 	nldev_init();
+ 	rdma_nl_register(RDMA_NL_LS, ibnl_ls_cb_table);
+-	roce_gid_mgmt_init();
++	ret = roce_gid_mgmt_init();
++	if (ret) {
++		pr_warn("Couldn't init RoCE GID management\n");
++		goto err_parent;
++	}
+ 
+ 	return 0;
+ 
++err_parent:
++	rdma_nl_unregister(RDMA_NL_LS);
++	nldev_exit();
++	unregister_pernet_device(&rdma_dev_net_ops);
+ err_compat:
+ 	unregister_blocking_lsm_notifier(&ibdev_lsm_nb);
+ err_sa:
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index 12d29d54a0812..c90f6378d8396 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -2181,7 +2181,7 @@ void __init nldev_init(void)
+ 	rdma_nl_register(RDMA_NL_NLDEV, nldev_cb_table);
+ }
+ 
+-void __exit nldev_exit(void)
++void nldev_exit(void)
+ {
+ 	rdma_nl_unregister(RDMA_NL_NLDEV);
+ }
+diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c
+index 1cd8f80f097a7..60eb3a64518f3 100644
+--- a/drivers/infiniband/hw/hfi1/pio.c
++++ b/drivers/infiniband/hw/hfi1/pio.c
+@@ -955,8 +955,7 @@ void sc_disable(struct send_context *sc)
+ 	spin_unlock(&sc->release_lock);
+ 
+ 	write_seqlock(&sc->waitlock);
+-	if (!list_empty(&sc->piowait))
+-		list_move(&sc->piowait, &wake_list);
++	list_splice_init(&sc->piowait, &wake_list);
+ 	write_sequnlock(&sc->waitlock);
+ 	while (!list_empty(&wake_list)) {
+ 		struct iowait *wait;
+diff --git a/drivers/infiniband/hw/qedr/main.c b/drivers/infiniband/hw/qedr/main.c
+index 967641662b24a..d0bb61b7e419f 100644
+--- a/drivers/infiniband/hw/qedr/main.c
++++ b/drivers/infiniband/hw/qedr/main.c
+@@ -374,6 +374,10 @@ static int qedr_alloc_resources(struct qedr_dev *dev)
+ 	if (IS_IWARP(dev)) {
+ 		xa_init(&dev->qps);
+ 		dev->iwarp_wq = create_singlethread_workqueue("qedr_iwarpq");
++		if (!dev->iwarp_wq) {
++			rc = -ENOMEM;
++			goto err1;
++		}
+ 	}
+ 
+ 	/* Allocate Status blocks for CNQ */
+@@ -381,7 +385,7 @@ static int qedr_alloc_resources(struct qedr_dev *dev)
+ 				GFP_KERNEL);
+ 	if (!dev->sb_array) {
+ 		rc = -ENOMEM;
+-		goto err1;
++		goto err_destroy_wq;
+ 	}
+ 
+ 	dev->cnq_array = kcalloc(dev->num_cnq,
+@@ -432,6 +436,9 @@ err3:
+ 	kfree(dev->cnq_array);
+ err2:
+ 	kfree(dev->sb_array);
++err_destroy_wq:
++	if (IS_IWARP(dev))
++		destroy_workqueue(dev->iwarp_wq);
+ err1:
+ 	kfree(dev->sgid_tbl);
+ 	return rc;
+diff --git a/drivers/isdn/hardware/mISDN/netjet.c b/drivers/isdn/hardware/mISDN/netjet.c
+index a52f275f82634..f8447135a9022 100644
+--- a/drivers/isdn/hardware/mISDN/netjet.c
++++ b/drivers/isdn/hardware/mISDN/netjet.c
+@@ -956,7 +956,7 @@ nj_release(struct tiger_hw *card)
+ 	}
+ 	if (card->irq > 0)
+ 		free_irq(card->irq, card);
+-	if (card->isac.dch.dev.dev.class)
++	if (device_is_registered(&card->isac.dch.dev.dev))
+ 		mISDN_unregister_device(&card->isac.dch.dev);
+ 
+ 	for (i = 0; i < 2; i++) {
+diff --git a/drivers/isdn/mISDN/core.c b/drivers/isdn/mISDN/core.c
+index a41b4b2645941..7ea0100f218a0 100644
+--- a/drivers/isdn/mISDN/core.c
++++ b/drivers/isdn/mISDN/core.c
+@@ -233,11 +233,12 @@ mISDN_register_device(struct mISDNdevice *dev,
+ 	if (debug & DEBUG_CORE)
+ 		printk(KERN_DEBUG "mISDN_register %s %d\n",
+ 		       dev_name(&dev->dev), dev->id);
++	dev->dev.class = &mISDN_class;
++
+ 	err = create_stack(dev);
+ 	if (err)
+ 		goto error1;
+ 
+-	dev->dev.class = &mISDN_class;
+ 	dev->dev.platform_data = dev;
+ 	dev->dev.parent = parent;
+ 	dev_set_drvdata(&dev->dev, dev);
+@@ -249,8 +250,8 @@ mISDN_register_device(struct mISDNdevice *dev,
+ 
+ error3:
+ 	delete_stack(dev);
+-	return err;
+ error1:
++	put_device(&dev->dev);
+ 	return err;
+ 
+ }
+diff --git a/drivers/media/cec/platform/cros-ec/cros-ec-cec.c b/drivers/media/cec/platform/cros-ec/cros-ec-cec.c
+index 2d95e16cd2489..f66699d5dc66e 100644
+--- a/drivers/media/cec/platform/cros-ec/cros-ec-cec.c
++++ b/drivers/media/cec/platform/cros-ec/cros-ec-cec.c
+@@ -44,6 +44,8 @@ static void handle_cec_message(struct cros_ec_cec *cros_ec_cec)
+ 	uint8_t *cec_message = cros_ec->event_data.data.cec_message;
+ 	unsigned int len = cros_ec->event_size;
+ 
++	if (len > CEC_MAX_MSG_SIZE)
++		len = CEC_MAX_MSG_SIZE;
+ 	cros_ec_cec->rx_msg.len = len;
+ 	memcpy(cros_ec_cec->rx_msg.msg, cec_message, len);
+ 
+diff --git a/drivers/media/cec/platform/s5p/s5p_cec.c b/drivers/media/cec/platform/s5p/s5p_cec.c
+index 028a09a7531ef..102f1af01000a 100644
+--- a/drivers/media/cec/platform/s5p/s5p_cec.c
++++ b/drivers/media/cec/platform/s5p/s5p_cec.c
+@@ -115,6 +115,8 @@ static irqreturn_t s5p_cec_irq_handler(int irq, void *priv)
+ 				dev_dbg(cec->dev, "Buffer overrun (worker did not process previous message)\n");
+ 			cec->rx = STATE_BUSY;
+ 			cec->msg.len = status >> 24;
++			if (cec->msg.len > CEC_MAX_MSG_SIZE)
++				cec->msg.len = CEC_MAX_MSG_SIZE;
+ 			cec->msg.rx_status = CEC_RX_STATUS_OK;
+ 			s5p_cec_get_rx_buf(cec, cec->msg.len,
+ 					cec->msg.msg);
+diff --git a/drivers/media/dvb-frontends/drxk_hard.c b/drivers/media/dvb-frontends/drxk_hard.c
+index a57470bf71bf3..2134e25096aac 100644
+--- a/drivers/media/dvb-frontends/drxk_hard.c
++++ b/drivers/media/dvb-frontends/drxk_hard.c
+@@ -6672,7 +6672,7 @@ static int drxk_read_snr(struct dvb_frontend *fe, u16 *snr)
+ static int drxk_read_ucblocks(struct dvb_frontend *fe, u32 *ucblocks)
+ {
+ 	struct drxk_state *state = fe->demodulator_priv;
+-	u16 err;
++	u16 err = 0;
+ 
+ 	dprintk(1, "\n");
+ 
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index a4bd85b200a3e..be4e5cdda1fa3 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -1692,6 +1692,10 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
+ 		host->mmc_host_ops.execute_tuning = usdhc_execute_tuning;
+ 	}
+ 
++	err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data);
++	if (err)
++		goto disable_ahb_clk;
++
+ 	if (imx_data->socdata->flags & ESDHC_FLAG_MAN_TUNING)
+ 		sdhci_esdhc_ops.platform_execute_tuning =
+ 					esdhc_executing_tuning;
+@@ -1699,13 +1703,15 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
+ 	if (imx_data->socdata->flags & ESDHC_FLAG_ERR004536)
+ 		host->quirks |= SDHCI_QUIRK_BROKEN_ADMA;
+ 
+-	if (imx_data->socdata->flags & ESDHC_FLAG_HS400)
++	if (host->caps & MMC_CAP_8_BIT_DATA &&
++	    imx_data->socdata->flags & ESDHC_FLAG_HS400)
+ 		host->quirks2 |= SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400;
+ 
+ 	if (imx_data->socdata->flags & ESDHC_FLAG_BROKEN_AUTO_CMD23)
+ 		host->quirks2 |= SDHCI_QUIRK2_ACMD23_BROKEN;
+ 
+-	if (imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {
++	if (host->caps & MMC_CAP_8_BIT_DATA &&
++	    imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {
+ 		host->mmc->caps2 |= MMC_CAP2_HS400_ES;
+ 		host->mmc_host_ops.hs400_enhanced_strobe =
+ 					esdhc_hs400_enhanced_strobe;
+@@ -1727,13 +1733,6 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
+ 			goto disable_ahb_clk;
+ 	}
+ 
+-	if (of_id)
+-		err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data);
+-	else
+-		err = sdhci_esdhc_imx_probe_nondt(pdev, host, imx_data);
+-	if (err)
+-		goto disable_ahb_clk;
+-
+ 	sdhci_esdhc_imx_hwinit(host);
+ 
+ 	err = sdhci_add_host(host);
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index a78b060ce8471..7eb9a62ee0743 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -967,6 +967,12 @@ static bool glk_broken_cqhci(struct sdhci_pci_slot *slot)
+ 		dmi_match(DMI_SYS_VENDOR, "IRBIS"));
+ }
+ 
++static bool jsl_broken_hs400es(struct sdhci_pci_slot *slot)
++{
++	return slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_JSL_EMMC &&
++			dmi_match(DMI_BIOS_VENDOR, "ASUSTeK COMPUTER INC.");
++}
++
+ static int glk_emmc_probe_slot(struct sdhci_pci_slot *slot)
+ {
+ 	int ret = byt_emmc_probe_slot(slot);
+@@ -975,9 +981,11 @@ static int glk_emmc_probe_slot(struct sdhci_pci_slot *slot)
+ 		slot->host->mmc->caps2 |= MMC_CAP2_CQE;
+ 
+ 	if (slot->chip->pdev->device != PCI_DEVICE_ID_INTEL_GLK_EMMC) {
+-		slot->host->mmc->caps2 |= MMC_CAP2_HS400_ES,
+-		slot->host->mmc_host_ops.hs400_enhanced_strobe =
+-						intel_hs400_enhanced_strobe;
++		if (!jsl_broken_hs400es(slot)) {
++			slot->host->mmc->caps2 |= MMC_CAP2_HS400_ES;
++			slot->host->mmc_host_ops.hs400_enhanced_strobe =
++							intel_hs400_enhanced_strobe;
++		}
+ 		slot->host->mmc->caps2 |= MMC_CAP2_CQE_DCMD;
+ 	}
+ 
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index 92e8ca56f5665..200d3ab343b00 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -653,8 +653,9 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
+ 	unsigned int tRP_ps;
+ 	bool use_half_period;
+ 	int sample_delay_ps, sample_delay_factor;
+-	u16 busy_timeout_cycles;
++	unsigned int busy_timeout_cycles;
+ 	u8 wrn_dly_sel;
++	u64 busy_timeout_ps;
+ 
+ 	if (sdr->tRC_min >= 30000) {
+ 		/* ONFI non-EDO modes [0-3] */
+@@ -678,7 +679,8 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
+ 	addr_setup_cycles = TO_CYCLES(sdr->tALS_min, period_ps);
+ 	data_setup_cycles = TO_CYCLES(sdr->tDS_min, period_ps);
+ 	data_hold_cycles = TO_CYCLES(sdr->tDH_min, period_ps);
+-	busy_timeout_cycles = TO_CYCLES(sdr->tWB_max + sdr->tR_max, period_ps);
++	busy_timeout_ps = max(sdr->tBERS_max, sdr->tPROG_max);
++	busy_timeout_cycles = TO_CYCLES(busy_timeout_ps, period_ps);
+ 
+ 	hw->timing0 = BF_GPMI_TIMING0_ADDRESS_SETUP(addr_setup_cycles) |
+ 		      BF_GPMI_TIMING0_DATA_HOLD(data_hold_cycles) |
+diff --git a/drivers/mtd/parsers/bcm47xxpart.c b/drivers/mtd/parsers/bcm47xxpart.c
+index 6012a10f10c83..13daf9bffd081 100644
+--- a/drivers/mtd/parsers/bcm47xxpart.c
++++ b/drivers/mtd/parsers/bcm47xxpart.c
+@@ -233,11 +233,11 @@ static int bcm47xxpart_parse(struct mtd_info *master,
+ 		}
+ 
+ 		/* Read middle of the block */
+-		err = mtd_read(master, offset + 0x8000, 0x4, &bytes_read,
++		err = mtd_read(master, offset + (blocksize / 2), 0x4, &bytes_read,
+ 			       (uint8_t *)buf);
+ 		if (err && !mtd_is_bitflip(err)) {
+ 			pr_err("mtd_read error while parsing (offset: 0x%X): %d\n",
+-			       offset, err);
++			       offset + (blocksize / 2), err);
+ 			continue;
+ 		}
+ 
+diff --git a/drivers/net/dsa/dsa_loop.c b/drivers/net/dsa/dsa_loop.c
+index e38906ae8f235..fbeb99ab9e4dd 100644
+--- a/drivers/net/dsa/dsa_loop.c
++++ b/drivers/net/dsa/dsa_loop.c
+@@ -376,6 +376,17 @@ static struct mdio_driver dsa_loop_drv = {
+ 
+ #define NUM_FIXED_PHYS	(DSA_LOOP_NUM_PORTS - 2)
+ 
++static void dsa_loop_phydevs_unregister(void)
++{
++	unsigned int i;
++
++	for (i = 0; i < NUM_FIXED_PHYS; i++)
++		if (!IS_ERR(phydevs[i])) {
++			fixed_phy_unregister(phydevs[i]);
++			phy_device_free(phydevs[i]);
++		}
++}
++
+ static int __init dsa_loop_init(void)
+ {
+ 	struct fixed_phy_status status = {
+@@ -383,23 +394,23 @@ static int __init dsa_loop_init(void)
+ 		.speed = SPEED_100,
+ 		.duplex = DUPLEX_FULL,
+ 	};
+-	unsigned int i;
++	unsigned int i, ret;
+ 
+ 	for (i = 0; i < NUM_FIXED_PHYS; i++)
+ 		phydevs[i] = fixed_phy_register(PHY_POLL, &status, NULL);
+ 
+-	return mdio_driver_register(&dsa_loop_drv);
++	ret = mdio_driver_register(&dsa_loop_drv);
++	if (ret)
++		dsa_loop_phydevs_unregister();
++
++	return ret;
+ }
+ module_init(dsa_loop_init);
+ 
+ static void __exit dsa_loop_exit(void)
+ {
+-	unsigned int i;
+-
+ 	mdio_driver_unregister(&dsa_loop_drv);
+-	for (i = 0; i < NUM_FIXED_PHYS; i++)
+-		if (!IS_ERR(phydevs[i]))
+-			fixed_phy_unregister(phydevs[i]);
++	dsa_loop_phydevs_unregister();
+ }
+ module_exit(dsa_loop_exit);
+ 
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index e183caf381765..686bb873125cc 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -623,7 +623,7 @@ fec_enet_txq_put_data_tso(struct fec_enet_priv_tx_q *txq, struct sk_buff *skb,
+ 		dev_kfree_skb_any(skb);
+ 		if (net_ratelimit())
+ 			netdev_err(ndev, "Tx DMA memory map failed\n");
+-		return NETDEV_TX_BUSY;
++		return NETDEV_TX_OK;
+ 	}
+ 
+ 	bdp->cbd_datlen = cpu_to_fec16(size);
+@@ -685,7 +685,7 @@ fec_enet_txq_put_hdr_tso(struct fec_enet_priv_tx_q *txq,
+ 			dev_kfree_skb_any(skb);
+ 			if (net_ratelimit())
+ 				netdev_err(ndev, "Tx DMA memory map failed\n");
+-			return NETDEV_TX_BUSY;
++			return NETDEV_TX_OK;
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index c1cbdac4b376f..77ba6c3c7a090 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -574,7 +574,7 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
+ 	}
+ 
+ 	for (i = 0; i < PHY_MAX_ADDR; i++) {
+-		if ((bus->phy_mask & (1 << i)) == 0) {
++		if ((bus->phy_mask & BIT(i)) == 0) {
+ 			struct phy_device *phydev;
+ 
+ 			phydev = mdiobus_scan(bus, i);
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index a643b2f2f4de9..0c09f8e9d3836 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1475,7 +1475,8 @@ static struct sk_buff *tun_napi_alloc_frags(struct tun_file *tfile,
+ 	int err;
+ 	int i;
+ 
+-	if (it->nr_segs > MAX_SKB_FRAGS + 1)
++	if (it->nr_segs > MAX_SKB_FRAGS + 1 ||
++	    len > (ETH_MAX_MTU - NET_SKB_PAD - NET_IP_ALIGN))
+ 		return ERR_PTR(-EMSGSIZE);
+ 
+ 	local_bh_disable();
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
+index 430d2cca98b33..1285d3685c4f5 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
+@@ -228,6 +228,10 @@ static void brcmf_fweh_event_worker(struct work_struct *work)
+ 			  brcmf_fweh_event_name(event->code), event->code,
+ 			  event->emsg.ifidx, event->emsg.bsscfgidx,
+ 			  event->emsg.addr);
++		if (event->emsg.bsscfgidx >= BRCMF_MAX_IFS) {
++			bphy_err(drvr, "invalid bsscfg index: %u\n", event->emsg.bsscfgidx);
++			goto event_free;
++		}
+ 
+ 		/* convert event message */
+ 		emsg_be = &event->emsg;
+diff --git a/drivers/nfc/fdp/fdp.c b/drivers/nfc/fdp/fdp.c
+index 4dc7bd7e02b6b..90bea6a1db692 100644
+--- a/drivers/nfc/fdp/fdp.c
++++ b/drivers/nfc/fdp/fdp.c
+@@ -238,9 +238,6 @@ static int fdp_nci_open(struct nci_dev *ndev)
+ {
+ 	int r;
+ 	struct fdp_nci_info *info = nci_get_drvdata(ndev);
+-	struct device *dev = &info->phy->i2c_dev->dev;
+-
+-	dev_dbg(dev, "%s\n", __func__);
+ 
+ 	r = info->phy_ops->enable(info->phy);
+ 
+@@ -249,35 +246,26 @@ static int fdp_nci_open(struct nci_dev *ndev)
+ 
+ static int fdp_nci_close(struct nci_dev *ndev)
+ {
+-	struct fdp_nci_info *info = nci_get_drvdata(ndev);
+-	struct device *dev = &info->phy->i2c_dev->dev;
+-
+-	dev_dbg(dev, "%s\n", __func__);
+ 	return 0;
+ }
+ 
+ static int fdp_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
+ {
+ 	struct fdp_nci_info *info = nci_get_drvdata(ndev);
+-	struct device *dev = &info->phy->i2c_dev->dev;
+-
+-	dev_dbg(dev, "%s\n", __func__);
++	int ret;
+ 
+ 	if (atomic_dec_and_test(&info->data_pkt_counter))
+ 		info->data_pkt_counter_cb(ndev);
+ 
+-	return info->phy_ops->write(info->phy, skb);
+-}
+-
+-int fdp_nci_recv_frame(struct nci_dev *ndev, struct sk_buff *skb)
+-{
+-	struct fdp_nci_info *info = nci_get_drvdata(ndev);
+-	struct device *dev = &info->phy->i2c_dev->dev;
++	ret = info->phy_ops->write(info->phy, skb);
++	if (ret < 0) {
++		kfree_skb(skb);
++		return ret;
++	}
+ 
+-	dev_dbg(dev, "%s\n", __func__);
+-	return nci_recv_frame(ndev, skb);
++	consume_skb(skb);
++	return 0;
+ }
+-EXPORT_SYMBOL(fdp_nci_recv_frame);
+ 
+ static int fdp_nci_request_firmware(struct nci_dev *ndev)
+ {
+@@ -489,8 +477,6 @@ static int fdp_nci_setup(struct nci_dev *ndev)
+ 	int r;
+ 	u8 patched = 0;
+ 
+-	dev_dbg(dev, "%s\n", __func__);
+-
+ 	r = nci_core_init(ndev);
+ 	if (r)
+ 		goto error;
+@@ -598,9 +584,7 @@ static int fdp_nci_core_reset_ntf_packet(struct nci_dev *ndev,
+ 					  struct sk_buff *skb)
+ {
+ 	struct fdp_nci_info *info = nci_get_drvdata(ndev);
+-	struct device *dev = &info->phy->i2c_dev->dev;
+ 
+-	dev_dbg(dev, "%s\n", __func__);
+ 	info->setup_reset_ntf = 1;
+ 	wake_up(&info->setup_wq);
+ 
+@@ -611,9 +595,7 @@ static int fdp_nci_prop_patch_ntf_packet(struct nci_dev *ndev,
+ 					  struct sk_buff *skb)
+ {
+ 	struct fdp_nci_info *info = nci_get_drvdata(ndev);
+-	struct device *dev = &info->phy->i2c_dev->dev;
+ 
+-	dev_dbg(dev, "%s\n", __func__);
+ 	info->setup_patch_ntf = 1;
+ 	info->setup_patch_status = skb->data[0];
+ 	wake_up(&info->setup_wq);
+@@ -786,11 +768,6 @@ EXPORT_SYMBOL(fdp_nci_probe);
+ 
+ void fdp_nci_remove(struct nci_dev *ndev)
+ {
+-	struct fdp_nci_info *info = nci_get_drvdata(ndev);
+-	struct device *dev = &info->phy->i2c_dev->dev;
+-
+-	dev_dbg(dev, "%s\n", __func__);
+-
+ 	nci_unregister_device(ndev);
+ 	nci_free_device(ndev);
+ }
+diff --git a/drivers/nfc/fdp/fdp.h b/drivers/nfc/fdp/fdp.h
+index 9bd1f3f23e2d1..ead3b21ccae68 100644
+--- a/drivers/nfc/fdp/fdp.h
++++ b/drivers/nfc/fdp/fdp.h
+@@ -25,6 +25,5 @@ int fdp_nci_probe(struct fdp_i2c_phy *phy, struct nfc_phy_ops *phy_ops,
+ 		  struct nci_dev **ndev, int tx_headroom, int tx_tailroom,
+ 		  u8 clock_type, u32 clock_freq, u8 *fw_vsc_cfg);
+ void fdp_nci_remove(struct nci_dev *ndev);
+-int fdp_nci_recv_frame(struct nci_dev *ndev, struct sk_buff *skb);
+ 
+ #endif /* __LOCAL_FDP_H_ */
+diff --git a/drivers/nfc/fdp/i2c.c b/drivers/nfc/fdp/i2c.c
+index ad0abb1f0bae9..5e300788be525 100644
+--- a/drivers/nfc/fdp/i2c.c
++++ b/drivers/nfc/fdp/i2c.c
+@@ -49,7 +49,6 @@ static int fdp_nci_i2c_enable(void *phy_id)
+ {
+ 	struct fdp_i2c_phy *phy = phy_id;
+ 
+-	dev_dbg(&phy->i2c_dev->dev, "%s\n", __func__);
+ 	fdp_nci_i2c_reset(phy);
+ 
+ 	return 0;
+@@ -59,7 +58,6 @@ static void fdp_nci_i2c_disable(void *phy_id)
+ {
+ 	struct fdp_i2c_phy *phy = phy_id;
+ 
+-	dev_dbg(&phy->i2c_dev->dev, "%s\n", __func__);
+ 	fdp_nci_i2c_reset(phy);
+ }
+ 
+@@ -197,7 +195,6 @@ flush:
+ static irqreturn_t fdp_nci_i2c_irq_thread_fn(int irq, void *phy_id)
+ {
+ 	struct fdp_i2c_phy *phy = phy_id;
+-	struct i2c_client *client;
+ 	struct sk_buff *skb;
+ 	int r;
+ 
+@@ -206,9 +203,6 @@ static irqreturn_t fdp_nci_i2c_irq_thread_fn(int irq, void *phy_id)
+ 		return IRQ_NONE;
+ 	}
+ 
+-	client = phy->i2c_dev;
+-	dev_dbg(&client->dev, "%s\n", __func__);
+-
+ 	r = fdp_nci_i2c_read(phy, &skb);
+ 
+ 	if (r == -EREMOTEIO)
+@@ -217,7 +211,7 @@ static irqreturn_t fdp_nci_i2c_irq_thread_fn(int irq, void *phy_id)
+ 		return IRQ_HANDLED;
+ 
+ 	if (skb != NULL)
+-		fdp_nci_recv_frame(phy->ndev, skb);
++		nci_recv_frame(phy->ndev, skb);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -288,8 +282,6 @@ static int fdp_nci_i2c_probe(struct i2c_client *client)
+ 	u32 clock_freq;
+ 	int r = 0;
+ 
+-	dev_dbg(dev, "%s\n", __func__);
+-
+ 	if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
+ 		nfc_err(dev, "No I2C_FUNC_I2C support\n");
+ 		return -ENODEV;
+@@ -351,8 +343,6 @@ static int fdp_nci_i2c_remove(struct i2c_client *client)
+ {
+ 	struct fdp_i2c_phy *phy = i2c_get_clientdata(client);
+ 
+-	dev_dbg(&client->dev, "%s\n", __func__);
+-
+ 	fdp_nci_remove(phy->ndev);
+ 	fdp_nci_i2c_disable(phy);
+ 
+diff --git a/drivers/nfc/nfcmrvl/i2c.c b/drivers/nfc/nfcmrvl/i2c.c
+index f81f1cae93243..41f27e1cac20a 100644
+--- a/drivers/nfc/nfcmrvl/i2c.c
++++ b/drivers/nfc/nfcmrvl/i2c.c
+@@ -151,10 +151,15 @@ static int nfcmrvl_i2c_nci_send(struct nfcmrvl_private *priv,
+ 			ret = -EREMOTEIO;
+ 		} else
+ 			ret = 0;
++	}
++
++	if (ret) {
+ 		kfree_skb(skb);
++		return ret;
+ 	}
+ 
+-	return ret;
++	consume_skb(skb);
++	return 0;
+ }
+ 
+ static void nfcmrvl_i2c_nci_update_config(struct nfcmrvl_private *priv,
+diff --git a/drivers/nfc/nxp-nci/core.c b/drivers/nfc/nxp-nci/core.c
+index a0ce95a287c54..b68b315689c3a 100644
+--- a/drivers/nfc/nxp-nci/core.c
++++ b/drivers/nfc/nxp-nci/core.c
+@@ -70,22 +70,20 @@ static int nxp_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
+ 	struct nxp_nci_info *info = nci_get_drvdata(ndev);
+ 	int r;
+ 
+-	if (!info->phy_ops->write) {
+-		r = -ENOTSUPP;
+-		goto send_exit;
+-	}
++	if (!info->phy_ops->write)
++		return -EOPNOTSUPP;
+ 
+-	if (info->mode != NXP_NCI_MODE_NCI) {
+-		r = -EINVAL;
+-		goto send_exit;
+-	}
++	if (info->mode != NXP_NCI_MODE_NCI)
++		return -EINVAL;
+ 
+ 	r = info->phy_ops->write(info->phy_id, skb);
+-	if (r < 0)
++	if (r < 0) {
+ 		kfree_skb(skb);
++		return r;
++	}
+ 
+-send_exit:
+-	return r;
++	consume_skb(skb);
++	return 0;
+ }
+ 
+ static struct nci_ops nxp_nci_ops = {
+@@ -104,10 +102,8 @@ int nxp_nci_probe(void *phy_id, struct device *pdev,
+ 	int r;
+ 
+ 	info = devm_kzalloc(pdev, sizeof(struct nxp_nci_info), GFP_KERNEL);
+-	if (!info) {
+-		r = -ENOMEM;
+-		goto probe_exit;
+-	}
++	if (!info)
++		return -ENOMEM;
+ 
+ 	info->phy_id = phy_id;
+ 	info->pdev = pdev;
+@@ -120,31 +116,25 @@ int nxp_nci_probe(void *phy_id, struct device *pdev,
+ 	if (info->phy_ops->set_mode) {
+ 		r = info->phy_ops->set_mode(info->phy_id, NXP_NCI_MODE_COLD);
+ 		if (r < 0)
+-			goto probe_exit;
++			return r;
+ 	}
+ 
+ 	info->mode = NXP_NCI_MODE_COLD;
+ 
+ 	info->ndev = nci_allocate_device(&nxp_nci_ops, NXP_NCI_NFC_PROTOCOLS,
+ 					 NXP_NCI_HDR_LEN, 0);
+-	if (!info->ndev) {
+-		r = -ENOMEM;
+-		goto probe_exit;
+-	}
++	if (!info->ndev)
++		return -ENOMEM;
+ 
+ 	nci_set_parent_dev(info->ndev, pdev);
+ 	nci_set_drvdata(info->ndev, info);
+ 	r = nci_register_device(info->ndev);
+-	if (r < 0)
+-		goto probe_exit_free_nci;
++	if (r < 0) {
++		nci_free_device(info->ndev);
++		return r;
++	}
+ 
+ 	*ndev = info->ndev;
+-
+-	goto probe_exit;
+-
+-probe_exit_free_nci:
+-	nci_free_device(info->ndev);
+-probe_exit:
+ 	return r;
+ }
+ EXPORT_SYMBOL(nxp_nci_probe);
+diff --git a/drivers/nfc/s3fwrn5/core.c b/drivers/nfc/s3fwrn5/core.c
+index ba6c486d64659..9b43cd3a45afc 100644
+--- a/drivers/nfc/s3fwrn5/core.c
++++ b/drivers/nfc/s3fwrn5/core.c
+@@ -97,11 +97,15 @@ static int s3fwrn5_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
+ 	}
+ 
+ 	ret = s3fwrn5_write(info, skb);
+-	if (ret < 0)
++	if (ret < 0) {
+ 		kfree_skb(skb);
++		mutex_unlock(&info->mutex);
++		return ret;
++	}
+ 
++	consume_skb(skb);
+ 	mutex_unlock(&info->mutex);
+-	return ret;
++	return 0;
+ }
+ 
+ static int s3fwrn5_nci_post_setup(struct nci_dev *ndev)
+diff --git a/drivers/parisc/iosapic.c b/drivers/parisc/iosapic.c
+index 8a3b0c3a1e92b..fd99735dca3e6 100644
+--- a/drivers/parisc/iosapic.c
++++ b/drivers/parisc/iosapic.c
+@@ -875,6 +875,7 @@ int iosapic_serial_irq(struct parisc_device *dev)
+ 
+ 	return vi->txn_irq;
+ }
++EXPORT_SYMBOL(iosapic_serial_irq);
+ #endif
+ 
+ 
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 42db9c52208e6..6cc4d0792e3d0 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -815,6 +815,14 @@ store_state_field(struct device *dev, struct device_attribute *attr,
+ 	}
+ 
+ 	mutex_lock(&sdev->state_mutex);
++	switch (sdev->sdev_state) {
++	case SDEV_RUNNING:
++	case SDEV_OFFLINE:
++		break;
++	default:
++		mutex_unlock(&sdev->state_mutex);
++		return -EINVAL;
++	}
+ 	if (sdev->sdev_state == SDEV_RUNNING && state == SDEV_RUNNING) {
+ 		ret = 0;
+ 	} else {
+diff --git a/drivers/staging/media/meson/vdec/vdec.c b/drivers/staging/media/meson/vdec/vdec.c
+index 5ccb3846c8797..7a818ca15b37d 100644
+--- a/drivers/staging/media/meson/vdec/vdec.c
++++ b/drivers/staging/media/meson/vdec/vdec.c
+@@ -1109,6 +1109,7 @@ static int vdec_probe(struct platform_device *pdev)
+ 
+ err_vdev_release:
+ 	video_device_release(vdev);
++	v4l2_device_unregister(&core->v4l2_dev);
+ 	return ret;
+ }
+ 
+@@ -1117,6 +1118,7 @@ static int vdec_remove(struct platform_device *pdev)
+ 	struct amvdec_core *core = platform_get_drvdata(pdev);
+ 
+ 	video_unregister_device(core->vdev_dec);
++	v4l2_device_unregister(&core->v4l2_dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/staging/media/rkisp1/rkisp1-capture.c b/drivers/staging/media/rkisp1/rkisp1-capture.c
+index 0c934ca5adaa3..8936f5a81680c 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-capture.c
++++ b/drivers/staging/media/rkisp1/rkisp1-capture.c
+@@ -1258,11 +1258,12 @@ static int rkisp1_capture_link_validate(struct media_link *link)
+ 	struct rkisp1_capture *cap = video_get_drvdata(vdev);
+ 	const struct rkisp1_capture_fmt_cfg *fmt =
+ 		rkisp1_find_fmt_cfg(cap, cap->pix.fmt.pixelformat);
+-	struct v4l2_subdev_format sd_fmt;
++	struct v4l2_subdev_format sd_fmt = {
++		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++		.pad = link->source->index,
++	};
+ 	int ret;
+ 
+-	sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+-	sd_fmt.pad = link->source->index;
+ 	ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &sd_fmt);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/staging/media/rkisp1/rkisp1-resizer.c b/drivers/staging/media/rkisp1/rkisp1-resizer.c
+index 4dcc342ac2b27..76f17dd7670f7 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-resizer.c
++++ b/drivers/staging/media/rkisp1/rkisp1-resizer.c
+@@ -500,6 +500,10 @@ static int rkisp1_rsz_init_config(struct v4l2_subdev *sd,
+ 	sink_fmt->height = RKISP1_DEFAULT_HEIGHT;
+ 	sink_fmt->field = V4L2_FIELD_NONE;
+ 	sink_fmt->code = RKISP1_DEF_FMT;
++	sink_fmt->colorspace = V4L2_COLORSPACE_SRGB;
++	sink_fmt->xfer_func = V4L2_XFER_FUNC_SRGB;
++	sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601;
++	sink_fmt->quantization = V4L2_QUANTIZATION_LIM_RANGE;
+ 
+ 	sink_crop = v4l2_subdev_get_try_crop(sd, cfg, RKISP1_RSZ_PAD_SINK);
+ 	sink_crop->width = RKISP1_DEFAULT_WIDTH;
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 8b3756e4bb05c..f648fd1d7548e 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1023,7 +1023,8 @@ static void autoconfig_16550a(struct uart_8250_port *up)
+ 	up->port.type = PORT_16550A;
+ 	up->capabilities |= UART_CAP_FIFO;
+ 
+-	if (!IS_ENABLED(CONFIG_SERIAL_8250_16550A_VARIANTS))
++	if (!IS_ENABLED(CONFIG_SERIAL_8250_16550A_VARIANTS) &&
++	    !(up->port.flags & UPF_FULL_PROBE))
+ 		return;
+ 
+ 	/*
+diff --git a/drivers/tty/serial/8250/Kconfig b/drivers/tty/serial/8250/Kconfig
+index 603137da47363..136f2b1460f91 100644
+--- a/drivers/tty/serial/8250/Kconfig
++++ b/drivers/tty/serial/8250/Kconfig
+@@ -119,7 +119,7 @@ config SERIAL_8250_CONSOLE
+ 
+ config SERIAL_8250_GSC
+ 	tristate
+-	depends on SERIAL_8250 && GSC
++	depends on SERIAL_8250 && PARISC
+ 	default SERIAL_8250
+ 
+ config SERIAL_8250_DMA
+diff --git a/drivers/tty/serial/ar933x_uart.c b/drivers/tty/serial/ar933x_uart.c
+index c2be7cf913992..fcbaff8941930 100644
+--- a/drivers/tty/serial/ar933x_uart.c
++++ b/drivers/tty/serial/ar933x_uart.c
+@@ -593,6 +593,11 @@ static int ar933x_config_rs485(struct uart_port *port,
+ 		dev_err(port->dev, "RS485 needs rts-gpio\n");
+ 		return 1;
+ 	}
++
++	if (rs485conf->flags & SER_RS485_ENABLED)
++		gpiod_set_value(up->rts_gpiod,
++			!!(rs485conf->flags & SER_RS485_RTS_AFTER_SEND));
++
+ 	port->rs485 = *rs485conf;
+ 	return 0;
+ }
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 64d5a593682b8..0ee11a9370116 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -58,24 +58,12 @@
+ #define PCI_DEVICE_ID_INTEL_CML_XHCI			0xa3af
+ #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI		0x9a13
+ #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI		0x1138
+-#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI		0x461e
+-#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI		0x464e
+-#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI	0x51ed
+-#define PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI		0xa71e
+-#define PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI		0x7ec0
++#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI		0x51ed
+ 
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_4			0x43b9
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3			0x43ba
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_2			0x43bb
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_1			0x43bc
+-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_1		0x161a
+-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_2		0x161b
+-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3		0x161d
+-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4		0x161e
+-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5		0x15d6
+-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6		0x15d7
+-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7		0x161c
+-#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8		0x161f
+ 
+ #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI			0x1042
+ #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI		0x1142
+@@ -268,12 +256,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
+-	     pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
+-	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI ||
+-	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI ||
+-	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI ||
+-	     pdev->device == PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI ||
+-	     pdev->device == PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI))
++	     pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI))
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+@@ -342,15 +325,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	     pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_4))
+ 		xhci->quirks |= XHCI_NO_SOFT_RETRY;
+ 
+-	if (pdev->vendor == PCI_VENDOR_ID_AMD &&
+-	    (pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_1 ||
+-	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_2 ||
+-	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 ||
+-	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 ||
+-	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 ||
+-	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 ||
+-	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 ||
+-	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8))
++	/* xHC spec requires PCI devices to support D3hot and D3cold */
++	if (xhci->hci_version >= 0x120)
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+ 
+ 	if (xhci->quirks & XHCI_RESET_ON_RESUME)
+diff --git a/drivers/video/fbdev/stifb.c b/drivers/video/fbdev/stifb.c
+index b0470f4f595ee..3feb6e40d56d8 100644
+--- a/drivers/video/fbdev/stifb.c
++++ b/drivers/video/fbdev/stifb.c
+@@ -1041,6 +1041,48 @@ stifb_copyarea(struct fb_info *info, const struct fb_copyarea *area)
+ 	SETUP_FB(fb);
+ }
+ 
++#define ARTIST_VRAM_SIZE			0x000804
++#define ARTIST_VRAM_SRC				0x000808
++#define ARTIST_VRAM_SIZE_TRIGGER_WINFILL	0x000a04
++#define ARTIST_VRAM_DEST_TRIGGER_BLOCKMOVE	0x000b00
++#define ARTIST_SRC_BM_ACCESS			0x018008
++#define ARTIST_FGCOLOR				0x018010
++#define ARTIST_BGCOLOR				0x018014
++#define ARTIST_BITMAP_OP			0x01801c
++
++static void
++stifb_fillrect(struct fb_info *info, const struct fb_fillrect *rect)
++{
++	struct stifb_info *fb = container_of(info, struct stifb_info, info);
++
++	if (rect->rop != ROP_COPY ||
++	    (fb->id == S9000_ID_HCRX && fb->info.var.bits_per_pixel == 32))
++		return cfb_fillrect(info, rect);
++
++	SETUP_HW(fb);
++
++	if (fb->info.var.bits_per_pixel == 32) {
++		WRITE_WORD(0xBBA0A000, fb, REG_10);
++
++		NGLE_REALLY_SET_IMAGE_PLANEMASK(fb, 0xffffffff);
++	} else {
++		WRITE_WORD(fb->id == S9000_ID_HCRX ? 0x13a02000 : 0x13a01000, fb, REG_10);
++
++		NGLE_REALLY_SET_IMAGE_PLANEMASK(fb, 0xff);
++	}
++
++	WRITE_WORD(0x03000300, fb, ARTIST_BITMAP_OP);
++	WRITE_WORD(0x2ea01000, fb, ARTIST_SRC_BM_ACCESS);
++	NGLE_QUICK_SET_DST_BM_ACCESS(fb, 0x2ea01000);
++	NGLE_REALLY_SET_IMAGE_FG_COLOR(fb, rect->color);
++	WRITE_WORD(0, fb, ARTIST_BGCOLOR);
++
++	NGLE_SET_DSTXY(fb, (rect->dx << 16) | (rect->dy));
++	SET_LENXY_START_RECFILL(fb, (rect->width << 16) | (rect->height));
++
++	SETUP_FB(fb);
++}
++
+ static void __init
+ stifb_init_display(struct stifb_info *fb)
+ {
+@@ -1105,7 +1147,7 @@ static const struct fb_ops stifb_ops = {
+ 	.owner		= THIS_MODULE,
+ 	.fb_setcolreg	= stifb_setcolreg,
+ 	.fb_blank	= stifb_blank,
+-	.fb_fillrect	= cfb_fillrect,
++	.fb_fillrect	= stifb_fillrect,
+ 	.fb_copyarea	= stifb_copyarea,
+ 	.fb_imageblit	= cfb_imageblit,
+ };
+@@ -1297,7 +1339,7 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
+ 		goto out_err0;
+ 	}
+ 	info->screen_size = fix->smem_len;
+-	info->flags = FBINFO_DEFAULT | FBINFO_HWACCEL_COPYAREA;
++	info->flags = FBINFO_HWACCEL_COPYAREA | FBINFO_HWACCEL_FILLRECT;
+ 	info->pseudo_palette = &fb->pseudo_palette;
+ 
+ 	/* This has to be done !!! */
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index 92cb16c0e5ee1..6942707f8b034 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -288,8 +288,10 @@ static void prelim_release(struct preftree *preftree)
+ 	struct prelim_ref *ref, *next_ref;
+ 
+ 	rbtree_postorder_for_each_entry_safe(ref, next_ref,
+-					     &preftree->root.rb_root, rbnode)
++					     &preftree->root.rb_root, rbnode) {
++		free_inode_elem_list(ref->inode_list);
+ 		free_pref(ref);
++	}
+ 
+ 	preftree->root = RB_ROOT_CACHED;
+ 	preftree->count = 0;
+@@ -647,6 +649,18 @@ unode_aux_to_inode_list(struct ulist_node *node)
+ 	return (struct extent_inode_elem *)(uintptr_t)node->aux;
+ }
+ 
++static void free_leaf_list(struct ulist *ulist)
++{
++	struct ulist_node *node;
++	struct ulist_iterator uiter;
++
++	ULIST_ITER_INIT(&uiter);
++	while ((node = ulist_next(ulist, &uiter)))
++		free_inode_elem_list(unode_aux_to_inode_list(node));
++
++	ulist_free(ulist);
++}
++
+ /*
+  * We maintain three separate rbtrees: one for direct refs, one for
+  * indirect refs which have a key, and one for indirect refs which do not
+@@ -761,7 +775,11 @@ static int resolve_indirect_refs(struct btrfs_fs_info *fs_info,
+ 		cond_resched();
+ 	}
+ out:
+-	ulist_free(parents);
++	/*
++	 * We may have inode lists attached to refs in the parents ulist, so we
++	 * must free them before freeing the ulist and its refs.
++	 */
++	free_leaf_list(parents);
+ 	return ret;
+ }
+ 
+@@ -1372,6 +1390,12 @@ again:
+ 				if (ret < 0)
+ 					goto out;
+ 				ref->inode_list = eie;
++				/*
++				 * We transferred the list ownership to the ref,
++				 * so set to NULL to avoid a double free in case
++				 * an error happens after this.
++				 */
++				eie = NULL;
+ 			}
+ 			ret = ulist_add_merge_ptr(refs, ref->parent,
+ 						  ref->inode_list,
+@@ -1397,6 +1421,14 @@ again:
+ 				eie->next = ref->inode_list;
+ 			}
+ 			eie = NULL;
++			/*
++			 * We have transferred the inode list ownership from
++			 * this ref to the ref we added to the 'refs' ulist.
++			 * So set this ref's inode list to NULL to avoid
++			 * use-after-free when our caller uses it or double
++			 * frees in case an error happens before we return.
++			 */
++			ref->inode_list = NULL;
+ 		}
+ 		cond_resched();
+ 	}
+@@ -1413,24 +1445,6 @@ out:
+ 	return ret;
+ }
+ 
+-static void free_leaf_list(struct ulist *blocks)
+-{
+-	struct ulist_node *node = NULL;
+-	struct extent_inode_elem *eie;
+-	struct ulist_iterator uiter;
+-
+-	ULIST_ITER_INIT(&uiter);
+-	while ((node = ulist_next(blocks, &uiter))) {
+-		if (!node->aux)
+-			continue;
+-		eie = unode_aux_to_inode_list(node);
+-		free_inode_elem_list(eie);
+-		node->aux = 0;
+-	}
+-
+-	ulist_free(blocks);
+-}
+-
+ /*
+  * Finds all leafs with a reference to the specified combination of bytenr and
+  * offset. key_list_head will point to a list of corresponding keys (caller must
+diff --git a/fs/btrfs/export.c b/fs/btrfs/export.c
+index 1a8d419d9e1f4..bfa2bf44529c2 100644
+--- a/fs/btrfs/export.c
++++ b/fs/btrfs/export.c
+@@ -58,7 +58,7 @@ static int btrfs_encode_fh(struct inode *inode, u32 *fh, int *max_len,
+ }
+ 
+ struct dentry *btrfs_get_dentry(struct super_block *sb, u64 objectid,
+-				u64 root_objectid, u32 generation,
++				u64 root_objectid, u64 generation,
+ 				int check_generation)
+ {
+ 	struct btrfs_fs_info *fs_info = btrfs_sb(sb);
+diff --git a/fs/btrfs/export.h b/fs/btrfs/export.h
+index f32f4113c976a..5afb7ca428289 100644
+--- a/fs/btrfs/export.h
++++ b/fs/btrfs/export.h
+@@ -19,7 +19,7 @@ struct btrfs_fid {
+ } __attribute__ ((packed));
+ 
+ struct dentry *btrfs_get_dentry(struct super_block *sb, u64 objectid,
+-				u64 root_objectid, u32 generation,
++				u64 root_objectid, u64 generation,
+ 				int check_generation);
+ struct dentry *btrfs_get_parent(struct dentry *child);
+ 
+diff --git a/fs/btrfs/tests/qgroup-tests.c b/fs/btrfs/tests/qgroup-tests.c
+index ce1ca8e73c2d1..c4b31dccc1846 100644
+--- a/fs/btrfs/tests/qgroup-tests.c
++++ b/fs/btrfs/tests/qgroup-tests.c
+@@ -237,8 +237,10 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
+ 
+ 	ret = insert_normal_tree_ref(root, nodesize, nodesize, 0,
+ 				BTRFS_FS_TREE_OBJECTID);
+-	if (ret)
++	if (ret) {
++		ulist_free(old_roots);
+ 		return ret;
++	}
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
+ 			false);
+@@ -273,8 +275,10 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
+ 	}
+ 
+ 	ret = remove_extent_item(root, nodesize, nodesize);
+-	if (ret)
++	if (ret) {
++		ulist_free(old_roots);
+ 		return -EINVAL;
++	}
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
+ 			false);
+@@ -338,8 +342,10 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 
+ 	ret = insert_normal_tree_ref(root, nodesize, nodesize, 0,
+ 				BTRFS_FS_TREE_OBJECTID);
+-	if (ret)
++	if (ret) {
++		ulist_free(old_roots);
+ 		return ret;
++	}
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
+ 			false);
+@@ -373,8 +379,10 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 
+ 	ret = add_tree_ref(root, nodesize, nodesize, 0,
+ 			BTRFS_FIRST_FREE_OBJECTID);
+-	if (ret)
++	if (ret) {
++		ulist_free(old_roots);
+ 		return ret;
++	}
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
+ 			false);
+@@ -414,8 +422,10 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 
+ 	ret = remove_extent_ref(root, nodesize, nodesize, 0,
+ 				BTRFS_FIRST_FREE_OBJECTID);
+-	if (ret)
++	if (ret) {
++		ulist_free(old_roots);
+ 		return ret;
++	}
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
+ 			false);
+diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
+index 052ad40ecdb28..b746d7df37582 100644
+--- a/fs/crypto/fscrypt_private.h
++++ b/fs/crypto/fscrypt_private.h
+@@ -220,7 +220,7 @@ struct fscrypt_info {
+ 	 * will be NULL if the master key was found in a process-subscribed
+ 	 * keyring rather than in the filesystem-level keyring.
+ 	 */
+-	struct key *ci_master_key;
++	struct fscrypt_master_key *ci_master_key;
+ 
+ 	/*
+ 	 * Link in list of inodes that were unlocked with the master key.
+@@ -431,6 +431,40 @@ struct fscrypt_master_key_secret {
+  */
+ struct fscrypt_master_key {
+ 
++	/*
++	 * Back-pointer to the super_block of the filesystem to which this
++	 * master key has been added.  Only valid if ->mk_active_refs > 0.
++	 */
++	struct super_block			*mk_sb;
++
++	/*
++	 * Link in ->mk_sb->s_master_keys->key_hashtable.
++	 * Only valid if ->mk_active_refs > 0.
++	 */
++	struct hlist_node			mk_node;
++
++	/* Semaphore that protects ->mk_secret and ->mk_users */
++	struct rw_semaphore			mk_sem;
++
++	/*
++	 * Active and structural reference counts.  An active ref guarantees
++	 * that the struct continues to exist, continues to be in the keyring
++	 * ->mk_sb->s_master_keys, and that any embedded subkeys (e.g.
++	 * ->mk_direct_keys) that have been prepared continue to exist.
++	 * A structural ref only guarantees that the struct continues to exist.
++	 *
++	 * There is one active ref associated with ->mk_secret being present,
++	 * and one active ref for each inode in ->mk_decrypted_inodes.
++	 *
++	 * There is one structural ref associated with the active refcount being
++	 * nonzero.  Finding a key in the keyring also takes a structural ref,
++	 * which is then held temporarily while the key is operated on.
++	 */
++	refcount_t				mk_active_refs;
++	refcount_t				mk_struct_refs;
++
++	struct rcu_head				mk_rcu_head;
++
+ 	/*
+ 	 * The secret key material.  After FS_IOC_REMOVE_ENCRYPTION_KEY is
+ 	 * executed, this is wiped and no new inodes can be unlocked with this
+@@ -439,16 +473,12 @@ struct fscrypt_master_key {
+ 	 * FS_IOC_REMOVE_ENCRYPTION_KEY can be retried, or
+ 	 * FS_IOC_ADD_ENCRYPTION_KEY can add the secret again.
+ 	 *
+-	 * Locking: protected by key->sem (outer) and mk_secret_sem (inner).
+-	 * The reason for two locks is that key->sem also protects modifying
+-	 * mk_users, which ranks it above the semaphore for the keyring key
+-	 * type, which is in turn above page faults (via keyring_read).  But
+-	 * sometimes filesystems call fscrypt_get_encryption_info() from within
+-	 * a transaction, which ranks it below page faults.  So we need a
+-	 * separate lock which protects mk_secret but not also mk_users.
++	 * While ->mk_secret is present, one ref in ->mk_active_refs is held.
++	 *
++	 * Locking: protected by ->mk_sem.  The manipulation of ->mk_active_refs
++	 *	    associated with this field is protected by ->mk_sem as well.
+ 	 */
+ 	struct fscrypt_master_key_secret	mk_secret;
+-	struct rw_semaphore			mk_secret_sem;
+ 
+ 	/*
+ 	 * For v1 policy keys: an arbitrary key descriptor which was assigned by
+@@ -467,22 +497,12 @@ struct fscrypt_master_key {
+ 	 *
+ 	 * This is NULL for v1 policy keys; those can only be added by root.
+ 	 *
+-	 * Locking: in addition to this keyrings own semaphore, this is
+-	 * protected by the master key's key->sem, so we can do atomic
+-	 * search+insert.  It can also be searched without taking any locks, but
+-	 * in that case the returned key may have already been removed.
++	 * Locking: protected by ->mk_sem.  (We don't just rely on the keyrings
++	 * subsystem semaphore ->mk_users->sem, as we need support for atomic
++	 * search+insert along with proper synchronization with ->mk_secret.)
+ 	 */
+ 	struct key		*mk_users;
+ 
+-	/*
+-	 * Length of ->mk_decrypted_inodes, plus one if mk_secret is present.
+-	 * Once this goes to 0, the master key is removed from ->s_master_keys.
+-	 * The 'struct fscrypt_master_key' will continue to live as long as the
+-	 * 'struct key' whose payload it is, but we won't let this reference
+-	 * count rise again.
+-	 */
+-	refcount_t		mk_refcount;
+-
+ 	/*
+ 	 * List of inodes that were unlocked using this key.  This allows the
+ 	 * inodes to be evicted efficiently if the key is removed.
+@@ -508,11 +528,11 @@ static inline bool
+ is_master_key_secret_present(const struct fscrypt_master_key_secret *secret)
+ {
+ 	/*
+-	 * The READ_ONCE() is only necessary for fscrypt_drop_inode() and
+-	 * fscrypt_key_describe().  These run in atomic context, so they can't
+-	 * take ->mk_secret_sem and thus 'secret' can change concurrently which
+-	 * would be a data race.  But they only need to know whether the secret
+-	 * *was* present at the time of check, so READ_ONCE() suffices.
++	 * The READ_ONCE() is only necessary for fscrypt_drop_inode().
++	 * fscrypt_drop_inode() runs in atomic context, so it can't take the key
++	 * semaphore and thus 'secret' can change concurrently which would be a
++	 * data race.  But fscrypt_drop_inode() only need to know whether the
++	 * secret *was* present at the time of check, so READ_ONCE() suffices.
+ 	 */
+ 	return READ_ONCE(secret->size) != 0;
+ }
+@@ -540,7 +560,11 @@ static inline int master_key_spec_len(const struct fscrypt_key_specifier *spec)
+ 	return 0;
+ }
+ 
+-struct key *
++void fscrypt_put_master_key(struct fscrypt_master_key *mk);
++
++void fscrypt_put_master_key_activeref(struct fscrypt_master_key *mk);
++
++struct fscrypt_master_key *
+ fscrypt_find_master_key(struct super_block *sb,
+ 			const struct fscrypt_key_specifier *mk_spec);
+ 
+diff --git a/fs/crypto/hooks.c b/fs/crypto/hooks.c
+index 4180371bf8642..8268206ef21e5 100644
+--- a/fs/crypto/hooks.c
++++ b/fs/crypto/hooks.c
+@@ -5,8 +5,6 @@
+  * Encryption hooks for higher-level filesystem operations.
+  */
+ 
+-#include <linux/key.h>
+-
+ #include "fscrypt_private.h"
+ 
+ /**
+@@ -154,13 +152,13 @@ int fscrypt_prepare_setflags(struct inode *inode,
+ 		ci = inode->i_crypt_info;
+ 		if (ci->ci_policy.version != FSCRYPT_POLICY_V2)
+ 			return -EINVAL;
+-		mk = ci->ci_master_key->payload.data[0];
+-		down_read(&mk->mk_secret_sem);
++		mk = ci->ci_master_key;
++		down_read(&mk->mk_sem);
+ 		if (is_master_key_secret_present(&mk->mk_secret))
+ 			err = fscrypt_derive_dirhash_key(ci, mk);
+ 		else
+ 			err = -ENOKEY;
+-		up_read(&mk->mk_secret_sem);
++		up_read(&mk->mk_sem);
+ 		return err;
+ 	}
+ 	return 0;
+diff --git a/fs/crypto/keyring.c b/fs/crypto/keyring.c
+index d7ec52cb3d9af..02f8bf8bd54da 100644
+--- a/fs/crypto/keyring.c
++++ b/fs/crypto/keyring.c
+@@ -18,6 +18,7 @@
+  * information about these ioctls.
+  */
+ 
++#include <asm/unaligned.h>
+ #include <crypto/skcipher.h>
+ #include <linux/key-type.h>
+ #include <linux/random.h>
+@@ -25,6 +26,18 @@
+ 
+ #include "fscrypt_private.h"
+ 
++/* The master encryption keys for a filesystem (->s_master_keys) */
++struct fscrypt_keyring {
++	/*
++	 * Lock that protects ->key_hashtable.  It does *not* protect the
++	 * fscrypt_master_key structs themselves.
++	 */
++	spinlock_t lock;
++
++	/* Hash table that maps fscrypt_key_specifier to fscrypt_master_key */
++	struct hlist_head key_hashtable[128];
++};
++
+ static void wipe_master_key_secret(struct fscrypt_master_key_secret *secret)
+ {
+ 	fscrypt_destroy_hkdf(&secret->hkdf);
+@@ -38,20 +51,70 @@ static void move_master_key_secret(struct fscrypt_master_key_secret *dst,
+ 	memzero_explicit(src, sizeof(*src));
+ }
+ 
+-static void free_master_key(struct fscrypt_master_key *mk)
++static void fscrypt_free_master_key(struct rcu_head *head)
++{
++	struct fscrypt_master_key *mk =
++		container_of(head, struct fscrypt_master_key, mk_rcu_head);
++	/*
++	 * The master key secret and any embedded subkeys should have already
++	 * been wiped when the last active reference to the fscrypt_master_key
++	 * struct was dropped; doing it here would be unnecessarily late.
++	 * Nevertheless, use kfree_sensitive() in case anything was missed.
++	 */
++	kfree_sensitive(mk);
++}
++
++void fscrypt_put_master_key(struct fscrypt_master_key *mk)
++{
++	if (!refcount_dec_and_test(&mk->mk_struct_refs))
++		return;
++	/*
++	 * No structural references left, so free ->mk_users, and also free the
++	 * fscrypt_master_key struct itself after an RCU grace period ensures
++	 * that concurrent keyring lookups can no longer find it.
++	 */
++	WARN_ON(refcount_read(&mk->mk_active_refs) != 0);
++	key_put(mk->mk_users);
++	mk->mk_users = NULL;
++	call_rcu(&mk->mk_rcu_head, fscrypt_free_master_key);
++}
++
++void fscrypt_put_master_key_activeref(struct fscrypt_master_key *mk)
+ {
++	struct super_block *sb = mk->mk_sb;
++	struct fscrypt_keyring *keyring = sb->s_master_keys;
+ 	size_t i;
+ 
+-	wipe_master_key_secret(&mk->mk_secret);
++	if (!refcount_dec_and_test(&mk->mk_active_refs))
++		return;
++	/*
++	 * No active references left, so complete the full removal of this
++	 * fscrypt_master_key struct by removing it from the keyring and
++	 * destroying any subkeys embedded in it.
++	 */
++
++	spin_lock(&keyring->lock);
++	hlist_del_rcu(&mk->mk_node);
++	spin_unlock(&keyring->lock);
++
++	/*
++	 * ->mk_active_refs == 0 implies that ->mk_secret is not present and
++	 * that ->mk_decrypted_inodes is empty.
++	 */
++	WARN_ON(is_master_key_secret_present(&mk->mk_secret));
++	WARN_ON(!list_empty(&mk->mk_decrypted_inodes));
+ 
+ 	for (i = 0; i <= FSCRYPT_MODE_MAX; i++) {
+ 		fscrypt_destroy_prepared_key(&mk->mk_direct_keys[i]);
+ 		fscrypt_destroy_prepared_key(&mk->mk_iv_ino_lblk_64_keys[i]);
+ 		fscrypt_destroy_prepared_key(&mk->mk_iv_ino_lblk_32_keys[i]);
+ 	}
++	memzero_explicit(&mk->mk_ino_hash_key,
++			 sizeof(mk->mk_ino_hash_key));
++	mk->mk_ino_hash_key_initialized = false;
+ 
+-	key_put(mk->mk_users);
+-	kfree_sensitive(mk);
++	/* Drop the structural ref associated with the active refs. */
++	fscrypt_put_master_key(mk);
+ }
+ 
+ static inline bool valid_key_spec(const struct fscrypt_key_specifier *spec)
+@@ -61,44 +124,6 @@ static inline bool valid_key_spec(const struct fscrypt_key_specifier *spec)
+ 	return master_key_spec_len(spec) != 0;
+ }
+ 
+-static int fscrypt_key_instantiate(struct key *key,
+-				   struct key_preparsed_payload *prep)
+-{
+-	key->payload.data[0] = (struct fscrypt_master_key *)prep->data;
+-	return 0;
+-}
+-
+-static void fscrypt_key_destroy(struct key *key)
+-{
+-	free_master_key(key->payload.data[0]);
+-}
+-
+-static void fscrypt_key_describe(const struct key *key, struct seq_file *m)
+-{
+-	seq_puts(m, key->description);
+-
+-	if (key_is_positive(key)) {
+-		const struct fscrypt_master_key *mk = key->payload.data[0];
+-
+-		if (!is_master_key_secret_present(&mk->mk_secret))
+-			seq_puts(m, ": secret removed");
+-	}
+-}
+-
+-/*
+- * Type of key in ->s_master_keys.  Each key of this type represents a master
+- * key which has been added to the filesystem.  Its payload is a
+- * 'struct fscrypt_master_key'.  The "." prefix in the key type name prevents
+- * users from adding keys of this type via the keyrings syscalls rather than via
+- * the intended method of FS_IOC_ADD_ENCRYPTION_KEY.
+- */
+-static struct key_type key_type_fscrypt = {
+-	.name			= "._fscrypt",
+-	.instantiate		= fscrypt_key_instantiate,
+-	.destroy		= fscrypt_key_destroy,
+-	.describe		= fscrypt_key_describe,
+-};
+-
+ static int fscrypt_user_key_instantiate(struct key *key,
+ 					struct key_preparsed_payload *prep)
+ {
+@@ -131,32 +156,6 @@ static struct key_type key_type_fscrypt_user = {
+ 	.describe		= fscrypt_user_key_describe,
+ };
+ 
+-/* Search ->s_master_keys or ->mk_users */
+-static struct key *search_fscrypt_keyring(struct key *keyring,
+-					  struct key_type *type,
+-					  const char *description)
+-{
+-	/*
+-	 * We need to mark the keyring reference as "possessed" so that we
+-	 * acquire permission to search it, via the KEY_POS_SEARCH permission.
+-	 */
+-	key_ref_t keyref = make_key_ref(keyring, true /* possessed */);
+-
+-	keyref = keyring_search(keyref, type, description, false);
+-	if (IS_ERR(keyref)) {
+-		if (PTR_ERR(keyref) == -EAGAIN || /* not found */
+-		    PTR_ERR(keyref) == -EKEYREVOKED) /* recently invalidated */
+-			keyref = ERR_PTR(-ENOKEY);
+-		return ERR_CAST(keyref);
+-	}
+-	return key_ref_to_ptr(keyref);
+-}
+-
+-#define FSCRYPT_FS_KEYRING_DESCRIPTION_SIZE	\
+-	(CONST_STRLEN("fscrypt-") + sizeof_field(struct super_block, s_id))
+-
+-#define FSCRYPT_MK_DESCRIPTION_SIZE	(2 * FSCRYPT_KEY_IDENTIFIER_SIZE + 1)
+-
+ #define FSCRYPT_MK_USERS_DESCRIPTION_SIZE	\
+ 	(CONST_STRLEN("fscrypt-") + 2 * FSCRYPT_KEY_IDENTIFIER_SIZE + \
+ 	 CONST_STRLEN("-users") + 1)
+@@ -164,21 +163,6 @@ static struct key *search_fscrypt_keyring(struct key *keyring,
+ #define FSCRYPT_MK_USER_DESCRIPTION_SIZE	\
+ 	(2 * FSCRYPT_KEY_IDENTIFIER_SIZE + CONST_STRLEN(".uid.") + 10 + 1)
+ 
+-static void format_fs_keyring_description(
+-			char description[FSCRYPT_FS_KEYRING_DESCRIPTION_SIZE],
+-			const struct super_block *sb)
+-{
+-	sprintf(description, "fscrypt-%s", sb->s_id);
+-}
+-
+-static void format_mk_description(
+-			char description[FSCRYPT_MK_DESCRIPTION_SIZE],
+-			const struct fscrypt_key_specifier *mk_spec)
+-{
+-	sprintf(description, "%*phN",
+-		master_key_spec_len(mk_spec), (u8 *)&mk_spec->u);
+-}
+-
+ static void format_mk_users_keyring_description(
+ 			char description[FSCRYPT_MK_USERS_DESCRIPTION_SIZE],
+ 			const u8 mk_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
+@@ -199,20 +183,15 @@ static void format_mk_user_description(
+ /* Create ->s_master_keys if needed.  Synchronized by fscrypt_add_key_mutex. */
+ static int allocate_filesystem_keyring(struct super_block *sb)
+ {
+-	char description[FSCRYPT_FS_KEYRING_DESCRIPTION_SIZE];
+-	struct key *keyring;
++	struct fscrypt_keyring *keyring;
+ 
+ 	if (sb->s_master_keys)
+ 		return 0;
+ 
+-	format_fs_keyring_description(description, sb);
+-	keyring = keyring_alloc(description, GLOBAL_ROOT_UID, GLOBAL_ROOT_GID,
+-				current_cred(), KEY_POS_SEARCH |
+-				  KEY_USR_SEARCH | KEY_USR_READ | KEY_USR_VIEW,
+-				KEY_ALLOC_NOT_IN_QUOTA, NULL, NULL);
+-	if (IS_ERR(keyring))
+-		return PTR_ERR(keyring);
+-
++	keyring = kzalloc(sizeof(*keyring), GFP_KERNEL);
++	if (!keyring)
++		return -ENOMEM;
++	spin_lock_init(&keyring->lock);
+ 	/*
+ 	 * Pairs with the smp_load_acquire() in fscrypt_find_master_key().
+ 	 * I.e., here we publish ->s_master_keys with a RELEASE barrier so that
+@@ -222,21 +201,80 @@ static int allocate_filesystem_keyring(struct super_block *sb)
+ 	return 0;
+ }
+ 
+-void fscrypt_sb_free(struct super_block *sb)
++/*
++ * Release all encryption keys that have been added to the filesystem, along
++ * with the keyring that contains them.
++ *
++ * This is called at unmount time.  The filesystem's underlying block device(s)
++ * are still available at this time; this is important because after user file
++ * accesses have been allowed, this function may need to evict keys from the
++ * keyslots of an inline crypto engine, which requires the block device(s).
++ *
++ * This is also called when the super_block is being freed.  This is needed to
++ * avoid a memory leak if mounting fails after the "test_dummy_encryption"
++ * option was processed, as in that case the unmount-time call isn't made.
++ */
++void fscrypt_destroy_keyring(struct super_block *sb)
+ {
+-	key_put(sb->s_master_keys);
++	struct fscrypt_keyring *keyring = sb->s_master_keys;
++	size_t i;
++
++	if (!keyring)
++		return;
++
++	for (i = 0; i < ARRAY_SIZE(keyring->key_hashtable); i++) {
++		struct hlist_head *bucket = &keyring->key_hashtable[i];
++		struct fscrypt_master_key *mk;
++		struct hlist_node *tmp;
++
++		hlist_for_each_entry_safe(mk, tmp, bucket, mk_node) {
++			/*
++			 * Since all inodes were already evicted, every key
++			 * remaining in the keyring should have an empty inode
++			 * list, and should only still be in the keyring due to
++			 * the single active ref associated with ->mk_secret.
++			 * There should be no structural refs beyond the one
++			 * associated with the active ref.
++			 */
++			WARN_ON(refcount_read(&mk->mk_active_refs) != 1);
++			WARN_ON(refcount_read(&mk->mk_struct_refs) != 1);
++			WARN_ON(!is_master_key_secret_present(&mk->mk_secret));
++			wipe_master_key_secret(&mk->mk_secret);
++			fscrypt_put_master_key_activeref(mk);
++		}
++	}
++	kfree_sensitive(keyring);
+ 	sb->s_master_keys = NULL;
+ }
+ 
++static struct hlist_head *
++fscrypt_mk_hash_bucket(struct fscrypt_keyring *keyring,
++		       const struct fscrypt_key_specifier *mk_spec)
++{
++	/*
++	 * Since key specifiers should be "random" values, it is sufficient to
++	 * use a trivial hash function that just takes the first several bits of
++	 * the key specifier.
++	 */
++	unsigned long i = get_unaligned((unsigned long *)&mk_spec->u);
++
++	return &keyring->key_hashtable[i % ARRAY_SIZE(keyring->key_hashtable)];
++}
++
+ /*
+- * Find the specified master key in ->s_master_keys.
+- * Returns ERR_PTR(-ENOKEY) if not found.
++ * Find the specified master key struct in ->s_master_keys and take a structural
++ * ref to it.  The structural ref guarantees that the key struct continues to
++ * exist, but it does *not* guarantee that ->s_master_keys continues to contain
++ * the key struct.  The structural ref needs to be dropped by
++ * fscrypt_put_master_key().  Returns NULL if the key struct is not found.
+  */
+-struct key *fscrypt_find_master_key(struct super_block *sb,
+-				    const struct fscrypt_key_specifier *mk_spec)
++struct fscrypt_master_key *
++fscrypt_find_master_key(struct super_block *sb,
++			const struct fscrypt_key_specifier *mk_spec)
+ {
+-	struct key *keyring;
+-	char description[FSCRYPT_MK_DESCRIPTION_SIZE];
++	struct fscrypt_keyring *keyring;
++	struct hlist_head *bucket;
++	struct fscrypt_master_key *mk;
+ 
+ 	/*
+ 	 * Pairs with the smp_store_release() in allocate_filesystem_keyring().
+@@ -246,10 +284,38 @@ struct key *fscrypt_find_master_key(struct super_block *sb,
+ 	 */
+ 	keyring = smp_load_acquire(&sb->s_master_keys);
+ 	if (keyring == NULL)
+-		return ERR_PTR(-ENOKEY); /* No keyring yet, so no keys yet. */
+-
+-	format_mk_description(description, mk_spec);
+-	return search_fscrypt_keyring(keyring, &key_type_fscrypt, description);
++		return NULL; /* No keyring yet, so no keys yet. */
++
++	bucket = fscrypt_mk_hash_bucket(keyring, mk_spec);
++	rcu_read_lock();
++	switch (mk_spec->type) {
++	case FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR:
++		hlist_for_each_entry_rcu(mk, bucket, mk_node) {
++			if (mk->mk_spec.type ==
++				FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR &&
++			    memcmp(mk->mk_spec.u.descriptor,
++				   mk_spec->u.descriptor,
++				   FSCRYPT_KEY_DESCRIPTOR_SIZE) == 0 &&
++			    refcount_inc_not_zero(&mk->mk_struct_refs))
++				goto out;
++		}
++		break;
++	case FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER:
++		hlist_for_each_entry_rcu(mk, bucket, mk_node) {
++			if (mk->mk_spec.type ==
++				FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER &&
++			    memcmp(mk->mk_spec.u.identifier,
++				   mk_spec->u.identifier,
++				   FSCRYPT_KEY_IDENTIFIER_SIZE) == 0 &&
++			    refcount_inc_not_zero(&mk->mk_struct_refs))
++				goto out;
++		}
++		break;
++	}
++	mk = NULL;
++out:
++	rcu_read_unlock();
++	return mk;
+ }
+ 
+ static int allocate_master_key_users_keyring(struct fscrypt_master_key *mk)
+@@ -277,17 +343,30 @@ static int allocate_master_key_users_keyring(struct fscrypt_master_key *mk)
+ static struct key *find_master_key_user(struct fscrypt_master_key *mk)
+ {
+ 	char description[FSCRYPT_MK_USER_DESCRIPTION_SIZE];
++	key_ref_t keyref;
+ 
+ 	format_mk_user_description(description, mk->mk_spec.u.identifier);
+-	return search_fscrypt_keyring(mk->mk_users, &key_type_fscrypt_user,
+-				      description);
++
++	/*
++	 * We need to mark the keyring reference as "possessed" so that we
++	 * acquire permission to search it, via the KEY_POS_SEARCH permission.
++	 */
++	keyref = keyring_search(make_key_ref(mk->mk_users, true /*possessed*/),
++				&key_type_fscrypt_user, description, false);
++	if (IS_ERR(keyref)) {
++		if (PTR_ERR(keyref) == -EAGAIN || /* not found */
++		    PTR_ERR(keyref) == -EKEYREVOKED) /* recently invalidated */
++			keyref = ERR_PTR(-ENOKEY);
++		return ERR_CAST(keyref);
++	}
++	return key_ref_to_ptr(keyref);
+ }
+ 
+ /*
+  * Give the current user a "key" in ->mk_users.  This charges the user's quota
+  * and marks the master key as added by the current user, so that it cannot be
+- * removed by another user with the key.  Either the master key's key->sem must
+- * be held for write, or the master key must be still undergoing initialization.
++ * removed by another user with the key.  Either ->mk_sem must be held for
++ * write, or the master key must be still undergoing initialization.
+  */
+ static int add_master_key_user(struct fscrypt_master_key *mk)
+ {
+@@ -309,7 +388,7 @@ static int add_master_key_user(struct fscrypt_master_key *mk)
+ 
+ /*
+  * Remove the current user's "key" from ->mk_users.
+- * The master key's key->sem must be held for write.
++ * ->mk_sem must be held for write.
+  *
+  * Returns 0 if removed, -ENOKEY if not found, or another -errno code.
+  */
+@@ -327,64 +406,49 @@ static int remove_master_key_user(struct fscrypt_master_key *mk)
+ }
+ 
+ /*
+- * Allocate a new fscrypt_master_key which contains the given secret, set it as
+- * the payload of a new 'struct key' of type fscrypt, and link the 'struct key'
+- * into the given keyring.  Synchronized by fscrypt_add_key_mutex.
++ * Allocate a new fscrypt_master_key, transfer the given secret over to it, and
++ * insert it into sb->s_master_keys.
+  */
+-static int add_new_master_key(struct fscrypt_master_key_secret *secret,
+-			      const struct fscrypt_key_specifier *mk_spec,
+-			      struct key *keyring)
++static int add_new_master_key(struct super_block *sb,
++			      struct fscrypt_master_key_secret *secret,
++			      const struct fscrypt_key_specifier *mk_spec)
+ {
++	struct fscrypt_keyring *keyring = sb->s_master_keys;
+ 	struct fscrypt_master_key *mk;
+-	char description[FSCRYPT_MK_DESCRIPTION_SIZE];
+-	struct key *key;
+ 	int err;
+ 
+ 	mk = kzalloc(sizeof(*mk), GFP_KERNEL);
+ 	if (!mk)
+ 		return -ENOMEM;
+ 
++	mk->mk_sb = sb;
++	init_rwsem(&mk->mk_sem);
++	refcount_set(&mk->mk_struct_refs, 1);
+ 	mk->mk_spec = *mk_spec;
+ 
+-	move_master_key_secret(&mk->mk_secret, secret);
+-	init_rwsem(&mk->mk_secret_sem);
+-
+-	refcount_set(&mk->mk_refcount, 1); /* secret is present */
+ 	INIT_LIST_HEAD(&mk->mk_decrypted_inodes);
+ 	spin_lock_init(&mk->mk_decrypted_inodes_lock);
+ 
+ 	if (mk_spec->type == FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER) {
+ 		err = allocate_master_key_users_keyring(mk);
+ 		if (err)
+-			goto out_free_mk;
++			goto out_put;
+ 		err = add_master_key_user(mk);
+ 		if (err)
+-			goto out_free_mk;
++			goto out_put;
+ 	}
+ 
+-	/*
+-	 * Note that we don't charge this key to anyone's quota, since when
+-	 * ->mk_users is in use those keys are charged instead, and otherwise
+-	 * (when ->mk_users isn't in use) only root can add these keys.
+-	 */
+-	format_mk_description(description, mk_spec);
+-	key = key_alloc(&key_type_fscrypt, description,
+-			GLOBAL_ROOT_UID, GLOBAL_ROOT_GID, current_cred(),
+-			KEY_POS_SEARCH | KEY_USR_SEARCH | KEY_USR_VIEW,
+-			KEY_ALLOC_NOT_IN_QUOTA, NULL);
+-	if (IS_ERR(key)) {
+-		err = PTR_ERR(key);
+-		goto out_free_mk;
+-	}
+-	err = key_instantiate_and_link(key, mk, sizeof(*mk), keyring, NULL);
+-	key_put(key);
+-	if (err)
+-		goto out_free_mk;
++	move_master_key_secret(&mk->mk_secret, secret);
++	refcount_set(&mk->mk_active_refs, 1); /* ->mk_secret is present */
+ 
++	spin_lock(&keyring->lock);
++	hlist_add_head_rcu(&mk->mk_node,
++			   fscrypt_mk_hash_bucket(keyring, mk_spec));
++	spin_unlock(&keyring->lock);
+ 	return 0;
+ 
+-out_free_mk:
+-	free_master_key(mk);
++out_put:
++	fscrypt_put_master_key(mk);
+ 	return err;
+ }
+ 
+@@ -393,45 +457,34 @@ out_free_mk:
+ static int add_existing_master_key(struct fscrypt_master_key *mk,
+ 				   struct fscrypt_master_key_secret *secret)
+ {
+-	struct key *mk_user;
+-	bool rekey;
+ 	int err;
+ 
+ 	/*
+ 	 * If the current user is already in ->mk_users, then there's nothing to
+-	 * do.  (Not applicable for v1 policy keys, which have NULL ->mk_users.)
++	 * do.  Otherwise, we need to add the user to ->mk_users.  (Neither is
++	 * applicable for v1 policy keys, which have NULL ->mk_users.)
+ 	 */
+ 	if (mk->mk_users) {
+-		mk_user = find_master_key_user(mk);
++		struct key *mk_user = find_master_key_user(mk);
++
+ 		if (mk_user != ERR_PTR(-ENOKEY)) {
+ 			if (IS_ERR(mk_user))
+ 				return PTR_ERR(mk_user);
+ 			key_put(mk_user);
+ 			return 0;
+ 		}
+-	}
+-
+-	/* If we'll be re-adding ->mk_secret, try to take the reference. */
+-	rekey = !is_master_key_secret_present(&mk->mk_secret);
+-	if (rekey && !refcount_inc_not_zero(&mk->mk_refcount))
+-		return KEY_DEAD;
+-
+-	/* Add the current user to ->mk_users, if applicable. */
+-	if (mk->mk_users) {
+ 		err = add_master_key_user(mk);
+-		if (err) {
+-			if (rekey && refcount_dec_and_test(&mk->mk_refcount))
+-				return KEY_DEAD;
++		if (err)
+ 			return err;
+-		}
+ 	}
+ 
+ 	/* Re-add the secret if needed. */
+-	if (rekey) {
+-		down_write(&mk->mk_secret_sem);
++	if (!is_master_key_secret_present(&mk->mk_secret)) {
++		if (!refcount_inc_not_zero(&mk->mk_active_refs))
++			return KEY_DEAD;
+ 		move_master_key_secret(&mk->mk_secret, secret);
+-		up_write(&mk->mk_secret_sem);
+ 	}
++
+ 	return 0;
+ }
+ 
+@@ -440,38 +493,36 @@ static int do_add_master_key(struct super_block *sb,
+ 			     const struct fscrypt_key_specifier *mk_spec)
+ {
+ 	static DEFINE_MUTEX(fscrypt_add_key_mutex);
+-	struct key *key;
++	struct fscrypt_master_key *mk;
+ 	int err;
+ 
+ 	mutex_lock(&fscrypt_add_key_mutex); /* serialize find + link */
+-retry:
+-	key = fscrypt_find_master_key(sb, mk_spec);
+-	if (IS_ERR(key)) {
+-		err = PTR_ERR(key);
+-		if (err != -ENOKEY)
+-			goto out_unlock;
++
++	mk = fscrypt_find_master_key(sb, mk_spec);
++	if (!mk) {
+ 		/* Didn't find the key in ->s_master_keys.  Add it. */
+ 		err = allocate_filesystem_keyring(sb);
+-		if (err)
+-			goto out_unlock;
+-		err = add_new_master_key(secret, mk_spec, sb->s_master_keys);
++		if (!err)
++			err = add_new_master_key(sb, secret, mk_spec);
+ 	} else {
+ 		/*
+ 		 * Found the key in ->s_master_keys.  Re-add the secret if
+ 		 * needed, and add the user to ->mk_users if needed.
+ 		 */
+-		down_write(&key->sem);
+-		err = add_existing_master_key(key->payload.data[0], secret);
+-		up_write(&key->sem);
++		down_write(&mk->mk_sem);
++		err = add_existing_master_key(mk, secret);
++		up_write(&mk->mk_sem);
+ 		if (err == KEY_DEAD) {
+-			/* Key being removed or needs to be removed */
+-			key_invalidate(key);
+-			key_put(key);
+-			goto retry;
++			/*
++			 * We found a key struct, but it's already been fully
++			 * removed.  Ignore the old struct and add a new one.
++			 * fscrypt_add_key_mutex means we don't need to worry
++			 * about concurrent adds.
++			 */
++			err = add_new_master_key(sb, secret, mk_spec);
+ 		}
+-		key_put(key);
++		fscrypt_put_master_key(mk);
+ 	}
+-out_unlock:
+ 	mutex_unlock(&fscrypt_add_key_mutex);
+ 	return err;
+ }
+@@ -735,19 +786,19 @@ int fscrypt_verify_key_added(struct super_block *sb,
+ 			     const u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
+ {
+ 	struct fscrypt_key_specifier mk_spec;
+-	struct key *key, *mk_user;
+ 	struct fscrypt_master_key *mk;
++	struct key *mk_user;
+ 	int err;
+ 
+ 	mk_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+ 	memcpy(mk_spec.u.identifier, identifier, FSCRYPT_KEY_IDENTIFIER_SIZE);
+ 
+-	key = fscrypt_find_master_key(sb, &mk_spec);
+-	if (IS_ERR(key)) {
+-		err = PTR_ERR(key);
++	mk = fscrypt_find_master_key(sb, &mk_spec);
++	if (!mk) {
++		err = -ENOKEY;
+ 		goto out;
+ 	}
+-	mk = key->payload.data[0];
++	down_read(&mk->mk_sem);
+ 	mk_user = find_master_key_user(mk);
+ 	if (IS_ERR(mk_user)) {
+ 		err = PTR_ERR(mk_user);
+@@ -755,7 +806,8 @@ int fscrypt_verify_key_added(struct super_block *sb,
+ 		key_put(mk_user);
+ 		err = 0;
+ 	}
+-	key_put(key);
++	up_read(&mk->mk_sem);
++	fscrypt_put_master_key(mk);
+ out:
+ 	if (err == -ENOKEY && capable(CAP_FOWNER))
+ 		err = 0;
+@@ -917,11 +969,10 @@ static int do_remove_key(struct file *filp, void __user *_uarg, bool all_users)
+ 	struct super_block *sb = file_inode(filp)->i_sb;
+ 	struct fscrypt_remove_key_arg __user *uarg = _uarg;
+ 	struct fscrypt_remove_key_arg arg;
+-	struct key *key;
+ 	struct fscrypt_master_key *mk;
+ 	u32 status_flags = 0;
+ 	int err;
+-	bool dead;
++	bool inodes_remain;
+ 
+ 	if (copy_from_user(&arg, uarg, sizeof(arg)))
+ 		return -EFAULT;
+@@ -941,12 +992,10 @@ static int do_remove_key(struct file *filp, void __user *_uarg, bool all_users)
+ 		return -EACCES;
+ 
+ 	/* Find the key being removed. */
+-	key = fscrypt_find_master_key(sb, &arg.key_spec);
+-	if (IS_ERR(key))
+-		return PTR_ERR(key);
+-	mk = key->payload.data[0];
+-
+-	down_write(&key->sem);
++	mk = fscrypt_find_master_key(sb, &arg.key_spec);
++	if (!mk)
++		return -ENOKEY;
++	down_write(&mk->mk_sem);
+ 
+ 	/* If relevant, remove current user's (or all users) claim to the key */
+ 	if (mk->mk_users && mk->mk_users->keys.nr_leaves_on_tree != 0) {
+@@ -955,7 +1004,7 @@ static int do_remove_key(struct file *filp, void __user *_uarg, bool all_users)
+ 		else
+ 			err = remove_master_key_user(mk);
+ 		if (err) {
+-			up_write(&key->sem);
++			up_write(&mk->mk_sem);
+ 			goto out_put_key;
+ 		}
+ 		if (mk->mk_users->keys.nr_leaves_on_tree != 0) {
+@@ -967,28 +1016,22 @@ static int do_remove_key(struct file *filp, void __user *_uarg, bool all_users)
+ 			status_flags |=
+ 				FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS;
+ 			err = 0;
+-			up_write(&key->sem);
++			up_write(&mk->mk_sem);
+ 			goto out_put_key;
+ 		}
+ 	}
+ 
+ 	/* No user claims remaining.  Go ahead and wipe the secret. */
+-	dead = false;
++	err = -ENOKEY;
+ 	if (is_master_key_secret_present(&mk->mk_secret)) {
+-		down_write(&mk->mk_secret_sem);
+ 		wipe_master_key_secret(&mk->mk_secret);
+-		dead = refcount_dec_and_test(&mk->mk_refcount);
+-		up_write(&mk->mk_secret_sem);
+-	}
+-	up_write(&key->sem);
+-	if (dead) {
+-		/*
+-		 * No inodes reference the key, and we wiped the secret, so the
+-		 * key object is free to be removed from the keyring.
+-		 */
+-		key_invalidate(key);
++		fscrypt_put_master_key_activeref(mk);
+ 		err = 0;
+-	} else {
++	}
++	inodes_remain = refcount_read(&mk->mk_active_refs) > 0;
++	up_write(&mk->mk_sem);
++
++	if (inodes_remain) {
+ 		/* Some inodes still reference this key; try to evict them. */
+ 		err = try_to_lock_encrypted_files(sb, mk);
+ 		if (err == -EBUSY) {
+@@ -1004,7 +1047,7 @@ static int do_remove_key(struct file *filp, void __user *_uarg, bool all_users)
+ 	 * has been fully removed including all files locked.
+ 	 */
+ out_put_key:
+-	key_put(key);
++	fscrypt_put_master_key(mk);
+ 	if (err == 0)
+ 		err = put_user(status_flags, &uarg->removal_status_flags);
+ 	return err;
+@@ -1051,7 +1094,6 @@ int fscrypt_ioctl_get_key_status(struct file *filp, void __user *uarg)
+ {
+ 	struct super_block *sb = file_inode(filp)->i_sb;
+ 	struct fscrypt_get_key_status_arg arg;
+-	struct key *key;
+ 	struct fscrypt_master_key *mk;
+ 	int err;
+ 
+@@ -1068,19 +1110,18 @@ int fscrypt_ioctl_get_key_status(struct file *filp, void __user *uarg)
+ 	arg.user_count = 0;
+ 	memset(arg.__out_reserved, 0, sizeof(arg.__out_reserved));
+ 
+-	key = fscrypt_find_master_key(sb, &arg.key_spec);
+-	if (IS_ERR(key)) {
+-		if (key != ERR_PTR(-ENOKEY))
+-			return PTR_ERR(key);
++	mk = fscrypt_find_master_key(sb, &arg.key_spec);
++	if (!mk) {
+ 		arg.status = FSCRYPT_KEY_STATUS_ABSENT;
+ 		err = 0;
+ 		goto out;
+ 	}
+-	mk = key->payload.data[0];
+-	down_read(&key->sem);
++	down_read(&mk->mk_sem);
+ 
+ 	if (!is_master_key_secret_present(&mk->mk_secret)) {
+-		arg.status = FSCRYPT_KEY_STATUS_INCOMPLETELY_REMOVED;
++		arg.status = refcount_read(&mk->mk_active_refs) > 0 ?
++			FSCRYPT_KEY_STATUS_INCOMPLETELY_REMOVED :
++			FSCRYPT_KEY_STATUS_ABSENT /* raced with full removal */;
+ 		err = 0;
+ 		goto out_release_key;
+ 	}
+@@ -1102,8 +1143,8 @@ int fscrypt_ioctl_get_key_status(struct file *filp, void __user *uarg)
+ 	}
+ 	err = 0;
+ out_release_key:
+-	up_read(&key->sem);
+-	key_put(key);
++	up_read(&mk->mk_sem);
++	fscrypt_put_master_key(mk);
+ out:
+ 	if (!err && copy_to_user(uarg, &arg, sizeof(arg)))
+ 		err = -EFAULT;
+@@ -1115,13 +1156,9 @@ int __init fscrypt_init_keyring(void)
+ {
+ 	int err;
+ 
+-	err = register_key_type(&key_type_fscrypt);
+-	if (err)
+-		return err;
+-
+ 	err = register_key_type(&key_type_fscrypt_user);
+ 	if (err)
+-		goto err_unregister_fscrypt;
++		return err;
+ 
+ 	err = register_key_type(&key_type_fscrypt_provisioning);
+ 	if (err)
+@@ -1131,7 +1168,5 @@ int __init fscrypt_init_keyring(void)
+ 
+ err_unregister_fscrypt_user:
+ 	unregister_key_type(&key_type_fscrypt_user);
+-err_unregister_fscrypt:
+-	unregister_key_type(&key_type_fscrypt);
+ 	return err;
+ }
+diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
+index 73d96e35d9ae4..7b14054fab494 100644
+--- a/fs/crypto/keysetup.c
++++ b/fs/crypto/keysetup.c
+@@ -9,7 +9,6 @@
+  */
+ 
+ #include <crypto/skcipher.h>
+-#include <linux/key.h>
+ #include <linux/random.h>
+ 
+ #include "fscrypt_private.h"
+@@ -151,6 +150,7 @@ void fscrypt_destroy_prepared_key(struct fscrypt_prepared_key *prep_key)
+ {
+ 	crypto_free_skcipher(prep_key->tfm);
+ 	fscrypt_destroy_inline_crypt_key(prep_key);
++	memzero_explicit(prep_key, sizeof(*prep_key));
+ }
+ 
+ /* Given a per-file encryption key, set up the file's crypto transform object */
+@@ -404,20 +404,18 @@ static bool fscrypt_valid_master_key_size(const struct fscrypt_master_key *mk,
+ /*
+  * Find the master key, then set up the inode's actual encryption key.
+  *
+- * If the master key is found in the filesystem-level keyring, then the
+- * corresponding 'struct key' is returned in *master_key_ret with
+- * ->mk_secret_sem read-locked.  This is needed to ensure that only one task
+- * links the fscrypt_info into ->mk_decrypted_inodes (as multiple tasks may race
+- * to create an fscrypt_info for the same inode), and to synchronize the master
+- * key being removed with a new inode starting to use it.
++ * If the master key is found in the filesystem-level keyring, then it is
++ * returned in *mk_ret with its semaphore read-locked.  This is needed to ensure
++ * that only one task links the fscrypt_info into ->mk_decrypted_inodes (as
++ * multiple tasks may race to create an fscrypt_info for the same inode), and to
++ * synchronize the master key being removed with a new inode starting to use it.
+  */
+ static int setup_file_encryption_key(struct fscrypt_info *ci,
+ 				     bool need_dirhash_key,
+-				     struct key **master_key_ret)
++				     struct fscrypt_master_key **mk_ret)
+ {
+-	struct key *key;
+-	struct fscrypt_master_key *mk = NULL;
+ 	struct fscrypt_key_specifier mk_spec;
++	struct fscrypt_master_key *mk;
+ 	int err;
+ 
+ 	err = fscrypt_select_encryption_impl(ci);
+@@ -442,11 +440,10 @@ static int setup_file_encryption_key(struct fscrypt_info *ci,
+ 		return -EINVAL;
+ 	}
+ 
+-	key = fscrypt_find_master_key(ci->ci_inode->i_sb, &mk_spec);
+-	if (IS_ERR(key)) {
+-		if (key != ERR_PTR(-ENOKEY) ||
+-		    ci->ci_policy.version != FSCRYPT_POLICY_V1)
+-			return PTR_ERR(key);
++	mk = fscrypt_find_master_key(ci->ci_inode->i_sb, &mk_spec);
++	if (!mk) {
++		if (ci->ci_policy.version != FSCRYPT_POLICY_V1)
++			return -ENOKEY;
+ 
+ 		/*
+ 		 * As a legacy fallback for v1 policies, search for the key in
+@@ -456,9 +453,7 @@ static int setup_file_encryption_key(struct fscrypt_info *ci,
+ 		 */
+ 		return fscrypt_setup_v1_file_key_via_subscribed_keyrings(ci);
+ 	}
+-
+-	mk = key->payload.data[0];
+-	down_read(&mk->mk_secret_sem);
++	down_read(&mk->mk_sem);
+ 
+ 	/* Has the secret been removed (via FS_IOC_REMOVE_ENCRYPTION_KEY)? */
+ 	if (!is_master_key_secret_present(&mk->mk_secret)) {
+@@ -486,18 +481,18 @@ static int setup_file_encryption_key(struct fscrypt_info *ci,
+ 	if (err)
+ 		goto out_release_key;
+ 
+-	*master_key_ret = key;
++	*mk_ret = mk;
+ 	return 0;
+ 
+ out_release_key:
+-	up_read(&mk->mk_secret_sem);
+-	key_put(key);
++	up_read(&mk->mk_sem);
++	fscrypt_put_master_key(mk);
+ 	return err;
+ }
+ 
+ static void put_crypt_info(struct fscrypt_info *ci)
+ {
+-	struct key *key;
++	struct fscrypt_master_key *mk;
+ 
+ 	if (!ci)
+ 		return;
+@@ -507,24 +502,18 @@ static void put_crypt_info(struct fscrypt_info *ci)
+ 	else if (ci->ci_owns_key)
+ 		fscrypt_destroy_prepared_key(&ci->ci_enc_key);
+ 
+-	key = ci->ci_master_key;
+-	if (key) {
+-		struct fscrypt_master_key *mk = key->payload.data[0];
+-
++	mk = ci->ci_master_key;
++	if (mk) {
+ 		/*
+ 		 * Remove this inode from the list of inodes that were unlocked
+-		 * with the master key.
+-		 *
+-		 * In addition, if we're removing the last inode from a key that
+-		 * already had its secret removed, invalidate the key so that it
+-		 * gets removed from ->s_master_keys.
++		 * with the master key.  In addition, if we're removing the last
++		 * inode from a master key struct that already had its secret
++		 * removed, then complete the full removal of the struct.
+ 		 */
+ 		spin_lock(&mk->mk_decrypted_inodes_lock);
+ 		list_del(&ci->ci_master_key_link);
+ 		spin_unlock(&mk->mk_decrypted_inodes_lock);
+-		if (refcount_dec_and_test(&mk->mk_refcount))
+-			key_invalidate(key);
+-		key_put(key);
++		fscrypt_put_master_key_activeref(mk);
+ 	}
+ 	memzero_explicit(ci, sizeof(*ci));
+ 	kmem_cache_free(fscrypt_info_cachep, ci);
+@@ -538,7 +527,7 @@ fscrypt_setup_encryption_info(struct inode *inode,
+ {
+ 	struct fscrypt_info *crypt_info;
+ 	struct fscrypt_mode *mode;
+-	struct key *master_key = NULL;
++	struct fscrypt_master_key *mk = NULL;
+ 	int res;
+ 
+ 	res = fscrypt_initialize(inode->i_sb->s_cop->flags);
+@@ -561,8 +550,7 @@ fscrypt_setup_encryption_info(struct inode *inode,
+ 	WARN_ON(mode->ivsize > FSCRYPT_MAX_IV_SIZE);
+ 	crypt_info->ci_mode = mode;
+ 
+-	res = setup_file_encryption_key(crypt_info, need_dirhash_key,
+-					&master_key);
++	res = setup_file_encryption_key(crypt_info, need_dirhash_key, &mk);
+ 	if (res)
+ 		goto out;
+ 
+@@ -577,12 +565,9 @@ fscrypt_setup_encryption_info(struct inode *inode,
+ 		 * We won the race and set ->i_crypt_info to our crypt_info.
+ 		 * Now link it into the master key's inode list.
+ 		 */
+-		if (master_key) {
+-			struct fscrypt_master_key *mk =
+-				master_key->payload.data[0];
+-
+-			refcount_inc(&mk->mk_refcount);
+-			crypt_info->ci_master_key = key_get(master_key);
++		if (mk) {
++			crypt_info->ci_master_key = mk;
++			refcount_inc(&mk->mk_active_refs);
+ 			spin_lock(&mk->mk_decrypted_inodes_lock);
+ 			list_add(&crypt_info->ci_master_key_link,
+ 				 &mk->mk_decrypted_inodes);
+@@ -592,11 +577,9 @@ fscrypt_setup_encryption_info(struct inode *inode,
+ 	}
+ 	res = 0;
+ out:
+-	if (master_key) {
+-		struct fscrypt_master_key *mk = master_key->payload.data[0];
+-
+-		up_read(&mk->mk_secret_sem);
+-		key_put(master_key);
++	if (mk) {
++		up_read(&mk->mk_sem);
++		fscrypt_put_master_key(mk);
+ 	}
+ 	put_crypt_info(crypt_info);
+ 	return res;
+@@ -747,7 +730,6 @@ EXPORT_SYMBOL(fscrypt_free_inode);
+ int fscrypt_drop_inode(struct inode *inode)
+ {
+ 	const struct fscrypt_info *ci = fscrypt_get_info(inode);
+-	const struct fscrypt_master_key *mk;
+ 
+ 	/*
+ 	 * If ci is NULL, then the inode doesn't have an encryption key set up
+@@ -757,7 +739,6 @@ int fscrypt_drop_inode(struct inode *inode)
+ 	 */
+ 	if (!ci || !ci->ci_master_key)
+ 		return 0;
+-	mk = ci->ci_master_key->payload.data[0];
+ 
+ 	/*
+ 	 * With proper, non-racy use of FS_IOC_REMOVE_ENCRYPTION_KEY, all inodes
+@@ -769,13 +750,13 @@ int fscrypt_drop_inode(struct inode *inode)
+ 		return 0;
+ 
+ 	/*
+-	 * Note: since we aren't holding ->mk_secret_sem, the result here can
++	 * Note: since we aren't holding the key semaphore, the result here can
+ 	 * immediately become outdated.  But there's no correctness problem with
+ 	 * unnecessarily evicting.  Nor is there a correctness problem with not
+ 	 * evicting while iput() is racing with the key being removed, since
+ 	 * then the thread removing the key will either evict the inode itself
+ 	 * or will correctly detect that it wasn't evicted due to the race.
+ 	 */
+-	return !is_master_key_secret_present(&mk->mk_secret);
++	return !is_master_key_secret_present(&ci->ci_master_key->mk_secret);
+ }
+ EXPORT_SYMBOL_GPL(fscrypt_drop_inode);
+diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
+index faa0f21daa684..f68265c36377e 100644
+--- a/fs/crypto/policy.c
++++ b/fs/crypto/policy.c
+@@ -686,12 +686,8 @@ int fscrypt_set_context(struct inode *inode, void *fs_data)
+ 	 * delayed key setup that requires the inode number.
+ 	 */
+ 	if (ci->ci_policy.version == FSCRYPT_POLICY_V2 &&
+-	    (ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32)) {
+-		const struct fscrypt_master_key *mk =
+-			ci->ci_master_key->payload.data[0];
+-
+-		fscrypt_hash_inode_number(ci, mk);
+-	}
++	    (ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32))
++		fscrypt_hash_inode_number(ci, ci->ci_master_key);
+ 
+ 	return inode->i_sb->s_cop->set_context(inode, &ctx, ctxsize, fs_data);
+ }
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index 04320715d61f1..4bfe2252d9a4e 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -425,7 +425,8 @@ int ext4_ext_migrate(struct inode *inode)
+ 	 * already is extent-based, error out.
+ 	 */
+ 	if (!ext4_has_feature_extents(inode->i_sb) ||
+-	    (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
++	    ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS) ||
++	    ext4_has_inline_data(inode))
+ 		return -EINVAL;
+ 
+ 	if (S_ISLNK(inode->i_mode) && inode->i_blocks == 0)
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 646cc1935dffe..b2e131d11cf8b 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2153,8 +2153,16 @@ static int make_indexed_dir(handle_t *handle, struct ext4_filename *fname,
+ 	memcpy(data2, de, len);
+ 	de = (struct ext4_dir_entry_2 *) data2;
+ 	top = data2 + len;
+-	while ((char *)(de2 = ext4_next_entry(de, blocksize)) < top)
++	while ((char *)(de2 = ext4_next_entry(de, blocksize)) < top) {
++		if (ext4_check_dir_entry(dir, NULL, de, bh2, data2, len,
++					 (data2 + (blocksize - csum_size) -
++					  (char *) de))) {
++			brelse(bh2);
++			brelse(bh);
++			return -EFSCORRUPTED;
++		}
+ 		de = de2;
++	}
+ 	de->rec_len = ext4_rec_len_to_disk(data2 + (blocksize - csum_size) -
+ 					   (char *) de, blocksize);
+ 
+diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c
+index 00e3cbde472e4..35be8e7ec2a04 100644
+--- a/fs/ext4/verity.c
++++ b/fs/ext4/verity.c
+@@ -370,13 +370,14 @@ static struct page *ext4_read_merkle_tree_page(struct inode *inode,
+ 					       pgoff_t index,
+ 					       unsigned long num_ra_pages)
+ {
+-	DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, index);
+ 	struct page *page;
+ 
+ 	index += ext4_verity_metadata_pos(inode) >> PAGE_SHIFT;
+ 
+ 	page = find_get_page_flags(inode->i_mapping, index, FGP_ACCESSED);
+ 	if (!page || !PageUptodate(page)) {
++		DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, index);
++
+ 		if (page)
+ 			put_page(page);
+ 		else if (num_ra_pages > 1)
+diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c
+index 15ba36926fad7..cff94d095d0fe 100644
+--- a/fs/f2fs/verity.c
++++ b/fs/f2fs/verity.c
+@@ -261,13 +261,14 @@ static struct page *f2fs_read_merkle_tree_page(struct inode *inode,
+ 					       pgoff_t index,
+ 					       unsigned long num_ra_pages)
+ {
+-	DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, index);
+ 	struct page *page;
+ 
+ 	index += f2fs_verity_metadata_pos(inode) >> PAGE_SHIFT;
+ 
+ 	page = find_get_page_flags(inode->i_mapping, index, FGP_ACCESSED);
+ 	if (!page || !PageUptodate(page)) {
++		DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, index);
++
+ 		if (page)
+ 			put_page(page);
+ 		else if (num_ra_pages > 1)
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index d1bc96ee6eb3d..253308fcb0478 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -3311,6 +3311,10 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
+ 			goto out;
+ 	}
+ 
++	err = file_modified(file);
++	if (err)
++		goto out;
++
+ 	if (!(mode & FALLOC_FL_KEEP_SIZE))
+ 		set_bit(FUSE_I_SIZE_UNSTABLE, &fi->state);
+ 
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index d6ac2c4f88b6b..1eb6c7a142ff0 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -228,8 +228,7 @@ again:
+  *
+  */
+ void nfs_inode_reclaim_delegation(struct inode *inode, const struct cred *cred,
+-				  fmode_t type,
+-				  const nfs4_stateid *stateid,
++				  fmode_t type, const nfs4_stateid *stateid,
+ 				  unsigned long pagemod_limit)
+ {
+ 	struct nfs_delegation *delegation;
+@@ -239,25 +238,24 @@ void nfs_inode_reclaim_delegation(struct inode *inode, const struct cred *cred,
+ 	delegation = rcu_dereference(NFS_I(inode)->delegation);
+ 	if (delegation != NULL) {
+ 		spin_lock(&delegation->lock);
+-		if (nfs4_is_valid_delegation(delegation, 0)) {
+-			nfs4_stateid_copy(&delegation->stateid, stateid);
+-			delegation->type = type;
+-			delegation->pagemod_limit = pagemod_limit;
+-			oldcred = delegation->cred;
+-			delegation->cred = get_cred(cred);
+-			clear_bit(NFS_DELEGATION_NEED_RECLAIM,
+-				  &delegation->flags);
+-			spin_unlock(&delegation->lock);
+-			rcu_read_unlock();
+-			put_cred(oldcred);
+-			trace_nfs4_reclaim_delegation(inode, type);
+-			return;
+-		}
+-		/* We appear to have raced with a delegation return. */
++		nfs4_stateid_copy(&delegation->stateid, stateid);
++		delegation->type = type;
++		delegation->pagemod_limit = pagemod_limit;
++		oldcred = delegation->cred;
++		delegation->cred = get_cred(cred);
++		clear_bit(NFS_DELEGATION_NEED_RECLAIM, &delegation->flags);
++		if (test_and_clear_bit(NFS_DELEGATION_REVOKED,
++				       &delegation->flags))
++			atomic_long_inc(&nfs_active_delegations);
+ 		spin_unlock(&delegation->lock);
++		rcu_read_unlock();
++		put_cred(oldcred);
++		trace_nfs4_reclaim_delegation(inode, type);
++	} else {
++		rcu_read_unlock();
++		nfs_inode_set_delegation(inode, cred, type, stateid,
++					 pagemod_limit);
+ 	}
+-	rcu_read_unlock();
+-	nfs_inode_set_delegation(inode, cred, type, stateid, pagemod_limit);
+ }
+ 
+ static int nfs_do_return_delegation(struct inode *inode, struct nfs_delegation *delegation, int issync)
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index 0e6437b08a3a5..252c99c76a42d 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -346,6 +346,7 @@ int nfs40_init_client(struct nfs_client *clp)
+ 	ret = nfs4_setup_slot_table(tbl, NFS4_MAX_SLOT_TABLE,
+ 					"NFSv4.0 transport Slot table");
+ 	if (ret) {
++		nfs4_shutdown_slot_table(tbl);
+ 		kfree(tbl);
+ 		return ret;
+ 	}
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index a8fe8f84c5ae0..a77a3d8c0b3f4 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1777,6 +1777,7 @@ static void nfs4_state_mark_reclaim_helper(struct nfs_client *clp,
+ 
+ static void nfs4_state_start_reclaim_reboot(struct nfs_client *clp)
+ {
++	set_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state);
+ 	/* Mark all delegations for reclaim */
+ 	nfs_delegation_mark_reclaim(clp);
+ 	nfs4_state_mark_reclaim_helper(clp, nfs4_state_mark_reclaim_reboot);
+@@ -2642,6 +2643,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
+ 			if (status < 0)
+ 				goto out_error;
+ 			nfs4_state_end_reclaim_reboot(clp);
++			continue;
+ 		}
+ 
+ 		/* Detect expired delegations... */
+diff --git a/fs/super.c b/fs/super.c
+index bae3fe80f852e..7629f9dd031cc 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -293,7 +293,7 @@ static void __put_super(struct super_block *s)
+ 		WARN_ON(s->s_inode_lru.node);
+ 		WARN_ON(!list_empty(&s->s_mounts));
+ 		security_sb_free(s);
+-		fscrypt_sb_free(s);
++		fscrypt_destroy_keyring(s);
+ 		put_user_ns(s->s_user_ns);
+ 		kfree(s->s_subtype);
+ 		call_rcu(&s->rcu, destroy_super_rcu);
+@@ -454,6 +454,7 @@ void generic_shutdown_super(struct super_block *sb)
+ 		evict_inodes(sb);
+ 		/* only nonzero refcount inodes can have marks */
+ 		fsnotify_sb_delete(sb);
++		fscrypt_destroy_keyring(sb);
+ 
+ 		if (sb->s_dio_done_wq) {
+ 			destroy_workqueue(sb->s_dio_done_wq);
+diff --git a/include/acpi/ghes.h b/include/acpi/ghes.h
+index 34fb3431a8f36..292a5c40bd0c6 100644
+--- a/include/acpi/ghes.h
++++ b/include/acpi/ghes.h
+@@ -71,7 +71,7 @@ int ghes_register_vendor_record_notifier(struct notifier_block *nb);
+ void ghes_unregister_vendor_record_notifier(struct notifier_block *nb);
+ #endif
+ 
+-int ghes_estatus_pool_init(int num_ghes);
++int ghes_estatus_pool_init(unsigned int num_ghes);
+ 
+ /* From drivers/edac/ghes_edac.c */
+ 
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index 3bac68fb7ff1c..7feb70d32d955 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -1161,7 +1161,7 @@ void efi_retrieve_tpm2_eventlog(void);
+ 	arch_efi_call_virt_teardown();					\
+ })
+ 
+-#define EFI_RANDOM_SEED_SIZE		64U
++#define EFI_RANDOM_SEED_SIZE		32U // BLAKE2S_HASH_SIZE
+ 
+ struct linux_efi_random_seed {
+ 	u32	size;
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index c8f887641878f..df54acdd35549 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1437,7 +1437,7 @@ struct super_block {
+ 	const struct xattr_handler **s_xattr;
+ #ifdef CONFIG_FS_ENCRYPTION
+ 	const struct fscrypt_operations	*s_cop;
+-	struct key		*s_master_keys; /* master crypto keys in use */
++	struct fscrypt_keyring	*s_master_keys; /* master crypto keys in use */
+ #endif
+ #ifdef CONFIG_FS_VERITY
+ 	const struct fsverity_operations *s_vop;
+diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
+index d0a1b8edfd9db..d0bc66fae7e01 100644
+--- a/include/linux/fscrypt.h
++++ b/include/linux/fscrypt.h
+@@ -193,7 +193,7 @@ fscrypt_free_dummy_policy(struct fscrypt_dummy_policy *dummy_policy)
+ }
+ 
+ /* keyring.c */
+-void fscrypt_sb_free(struct super_block *sb);
++void fscrypt_destroy_keyring(struct super_block *sb);
+ int fscrypt_ioctl_add_key(struct file *filp, void __user *arg);
+ int fscrypt_ioctl_remove_key(struct file *filp, void __user *arg);
+ int fscrypt_ioctl_remove_key_all_users(struct file *filp, void __user *arg);
+@@ -380,7 +380,7 @@ fscrypt_free_dummy_policy(struct fscrypt_dummy_policy *dummy_policy)
+ }
+ 
+ /* keyring.c */
+-static inline void fscrypt_sb_free(struct super_block *sb)
++static inline void fscrypt_destroy_keyring(struct super_block *sb)
+ {
+ }
+ 
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index 59a8caf3230a4..6df4c3356ae61 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -100,7 +100,7 @@ struct uart_icount {
+ 	__u32	buf_overrun;
+ };
+ 
+-typedef unsigned int __bitwise upf_t;
++typedef u64 __bitwise upf_t;
+ typedef unsigned int __bitwise upstat_t;
+ 
+ struct uart_port {
+@@ -207,6 +207,7 @@ struct uart_port {
+ #define UPF_FIXED_PORT		((__force upf_t) (1 << 29))
+ #define UPF_DEAD		((__force upf_t) (1 << 30))
+ #define UPF_IOREMAP		((__force upf_t) (1 << 31))
++#define UPF_FULL_PROBE		((__force upf_t) (1ULL << 32))
+ 
+ #define __UPF_CHANGE_MASK	0x17fff
+ #define UPF_CHANGE_MASK		((__force upf_t) __UPF_CHANGE_MASK)
+diff --git a/include/net/protocol.h b/include/net/protocol.h
+index 2b778e1d2d8f1..0fd2df844fc71 100644
+--- a/include/net/protocol.h
++++ b/include/net/protocol.h
+@@ -35,8 +35,6 @@
+ 
+ /* This is used to register protocols. */
+ struct net_protocol {
+-	int			(*early_demux)(struct sk_buff *skb);
+-	int			(*early_demux_handler)(struct sk_buff *skb);
+ 	int			(*handler)(struct sk_buff *skb);
+ 
+ 	/* This returns an error if we weren't able to handle the error. */
+@@ -53,8 +51,6 @@ struct net_protocol {
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+ struct inet6_protocol {
+-	void	(*early_demux)(struct sk_buff *skb);
+-	void    (*early_demux_handler)(struct sk_buff *skb);
+ 	int	(*handler)(struct sk_buff *skb);
+ 
+ 	/* This returns an error if we weren't able to handle the error. */
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index bf4af27f56200..9a8d98639b20f 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -934,7 +934,7 @@ extern const struct inet_connection_sock_af_ops ipv6_specific;
+ 
+ INDIRECT_CALLABLE_DECLARE(void tcp_v6_send_check(struct sock *sk, struct sk_buff *skb));
+ INDIRECT_CALLABLE_DECLARE(int tcp_v6_rcv(struct sk_buff *skb));
+-INDIRECT_CALLABLE_DECLARE(void tcp_v6_early_demux(struct sk_buff *skb));
++void tcp_v6_early_demux(struct sk_buff *skb);
+ 
+ #endif
+ 
+diff --git a/include/net/udp.h b/include/net/udp.h
+index 010bc324f860b..388e68c7bca05 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -176,6 +176,7 @@ INDIRECT_CALLABLE_DECLARE(int udp6_gro_complete(struct sk_buff *, int));
+ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
+ 				struct udphdr *uh, struct sock *sk);
+ int udp_gro_complete(struct sk_buff *skb, int nhoff, udp_lookup_t lookup);
++void udp_v6_early_demux(struct sk_buff *skb);
+ 
+ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+ 				  netdev_features_t features, bool is_ipv6);
+diff --git a/ipc/msg.c b/ipc/msg.c
+index 6e6c8e0c9380e..8ded6b8f10a2c 100644
+--- a/ipc/msg.c
++++ b/ipc/msg.c
+@@ -147,7 +147,7 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
+ 	key_t key = params->key;
+ 	int msgflg = params->flg;
+ 
+-	msq = kvmalloc(sizeof(*msq), GFP_KERNEL);
++	msq = kvmalloc(sizeof(*msq), GFP_KERNEL_ACCOUNT);
+ 	if (unlikely(!msq))
+ 		return -ENOMEM;
+ 
+diff --git a/ipc/sem.c b/ipc/sem.c
+index 7d9c06b0ad6e2..2cb6515ef1dd1 100644
+--- a/ipc/sem.c
++++ b/ipc/sem.c
+@@ -511,7 +511,7 @@ static struct sem_array *sem_alloc(size_t nsems)
+ 	if (nsems > (INT_MAX - sizeof(*sma)) / sizeof(sma->sems[0]))
+ 		return NULL;
+ 
+-	sma = kvzalloc(struct_size(sma, sems, nsems), GFP_KERNEL);
++	sma = kvzalloc(struct_size(sma, sems, nsems), GFP_KERNEL_ACCOUNT);
+ 	if (unlikely(!sma))
+ 		return NULL;
+ 
+@@ -1852,7 +1852,7 @@ static inline int get_undo_list(struct sem_undo_list **undo_listp)
+ 
+ 	undo_list = current->sysvsem.undo_list;
+ 	if (!undo_list) {
+-		undo_list = kzalloc(sizeof(*undo_list), GFP_KERNEL);
++		undo_list = kzalloc(sizeof(*undo_list), GFP_KERNEL_ACCOUNT);
+ 		if (undo_list == NULL)
+ 			return -ENOMEM;
+ 		spin_lock_init(&undo_list->lock);
+@@ -1937,7 +1937,7 @@ static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid)
+ 	rcu_read_unlock();
+ 
+ 	/* step 2: allocate new undo structure */
+-	new = kzalloc(sizeof(struct sem_undo) + sizeof(short)*nsems, GFP_KERNEL);
++	new = kzalloc(sizeof(struct sem_undo) + sizeof(short)*nsems, GFP_KERNEL_ACCOUNT);
+ 	if (!new) {
+ 		ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
+ 		return ERR_PTR(-ENOMEM);
+diff --git a/ipc/shm.c b/ipc/shm.c
+index 471ac3e7498d5..b418731d66e88 100644
+--- a/ipc/shm.c
++++ b/ipc/shm.c
+@@ -711,7 +711,7 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
+ 			ns->shm_tot + numpages > ns->shm_ctlall)
+ 		return -ENOSPC;
+ 
+-	shp = kvmalloc(sizeof(*shp), GFP_KERNEL);
++	shp = kvmalloc(sizeof(*shp), GFP_KERNEL_ACCOUNT);
+ 	if (unlikely(!shp))
+ 		return -ENOMEM;
+ 
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index dac82a0e7c0b0..b0f444e86487c 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -2335,8 +2335,11 @@ int enable_kprobe(struct kprobe *kp)
+ 	if (!kprobes_all_disarmed && kprobe_disabled(p)) {
+ 		p->flags &= ~KPROBE_FLAG_DISABLED;
+ 		ret = arm_kprobe(p);
+-		if (ret)
++		if (ret) {
+ 			p->flags |= KPROBE_FLAG_DISABLED;
++			if (p != kp)
++				kp->flags |= KPROBE_FLAG_DISABLED;
++		}
+ 	}
+ out:
+ 	mutex_unlock(&kprobe_mutex);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 2165c9ac14bf4..8e9ef0f555962 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -2946,18 +2946,8 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
+ 		command |= FTRACE_UPDATE_TRACE_FUNC;
+ 	}
+ 
+-	if (!command || !ftrace_enabled) {
+-		/*
+-		 * If these are dynamic or per_cpu ops, they still
+-		 * need their data freed. Since, function tracing is
+-		 * not currently active, we can just free them
+-		 * without synchronizing all CPUs.
+-		 */
+-		if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
+-			goto free_ops;
+-
+-		return 0;
+-	}
++	if (!command || !ftrace_enabled)
++		goto out;
+ 
+ 	/*
+ 	 * If the ops uses a trampoline, then it needs to be
+@@ -2994,6 +2984,7 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
+ 	removed_ops = NULL;
+ 	ops->flags &= ~FTRACE_OPS_FL_REMOVING;
+ 
++out:
+ 	/*
+ 	 * Dynamic ops may be freed, we must make sure that all
+ 	 * callers are done before leaving this function.
+@@ -3021,7 +3012,6 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
+ 		if (IS_ENABLED(CONFIG_PREEMPTION))
+ 			synchronize_rcu_tasks();
+ 
+- free_ops:
+ 		ftrace_trampoline_free(ops);
+ 	}
+ 
+diff --git a/kernel/trace/kprobe_event_gen_test.c b/kernel/trace/kprobe_event_gen_test.c
+index 80e04a1e19772..d81f7c51025c7 100644
+--- a/kernel/trace/kprobe_event_gen_test.c
++++ b/kernel/trace/kprobe_event_gen_test.c
+@@ -100,20 +100,20 @@ static int __init test_gen_kprobe_cmd(void)
+ 					 KPROBE_GEN_TEST_FUNC,
+ 					 KPROBE_GEN_TEST_ARG0, KPROBE_GEN_TEST_ARG1);
+ 	if (ret)
+-		goto free;
++		goto out;
+ 
+ 	/* Use kprobe_event_add_fields to add the rest of the fields */
+ 
+ 	ret = kprobe_event_add_fields(&cmd, KPROBE_GEN_TEST_ARG2, KPROBE_GEN_TEST_ARG3);
+ 	if (ret)
+-		goto free;
++		goto out;
+ 
+ 	/*
+ 	 * This actually creates the event.
+ 	 */
+ 	ret = kprobe_event_gen_cmd_end(&cmd);
+ 	if (ret)
+-		goto free;
++		goto out;
+ 
+ 	/*
+ 	 * Now get the gen_kprobe_test event file.  We need to prevent
+@@ -136,13 +136,11 @@ static int __init test_gen_kprobe_cmd(void)
+ 		goto delete;
+ 	}
+  out:
++	kfree(buf);
+ 	return ret;
+  delete:
+ 	/* We got an error after creating the event, delete it */
+ 	ret = kprobe_event_delete("gen_kprobe_test");
+- free:
+-	kfree(buf);
+-
+ 	goto out;
+ }
+ 
+@@ -170,14 +168,14 @@ static int __init test_gen_kretprobe_cmd(void)
+ 					    KPROBE_GEN_TEST_FUNC,
+ 					    "$retval");
+ 	if (ret)
+-		goto free;
++		goto out;
+ 
+ 	/*
+ 	 * This actually creates the event.
+ 	 */
+ 	ret = kretprobe_event_gen_cmd_end(&cmd);
+ 	if (ret)
+-		goto free;
++		goto out;
+ 
+ 	/*
+ 	 * Now get the gen_kretprobe_test event file.  We need to
+@@ -201,13 +199,11 @@ static int __init test_gen_kretprobe_cmd(void)
+ 		goto delete;
+ 	}
+  out:
++	kfree(buf);
+ 	return ret;
+  delete:
+ 	/* We got an error after creating the event, delete it */
+ 	ret = kprobe_event_delete("gen_kretprobe_test");
+- free:
+-	kfree(buf);
+-
+ 	goto out;
+ }
+ 
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 83dd76e9196f3..e69e96ef49276 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -3760,7 +3760,8 @@ done:
+ 			l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC,
+ 					   sizeof(rfc), (unsigned long) &rfc, endptr - ptr);
+ 
+-			if (test_bit(FLAG_EFS_ENABLE, &chan->flags)) {
++			if (remote_efs &&
++			    test_bit(FLAG_EFS_ENABLE, &chan->flags)) {
+ 				chan->remote_id = efs.id;
+ 				chan->remote_stype = efs.stype;
+ 				chan->remote_msdu = le16_to_cpu(efs.msdu);
+@@ -5808,6 +5809,19 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn,
+ 	BT_DBG("psm 0x%2.2x scid 0x%4.4x mtu %u mps %u", __le16_to_cpu(psm),
+ 	       scid, mtu, mps);
+ 
++	/* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 3, Part A
++	 * page 1059:
++	 *
++	 * Valid range: 0x0001-0x00ff
++	 *
++	 * Table 4.15: L2CAP_LE_CREDIT_BASED_CONNECTION_REQ SPSM ranges
++	 */
++	if (!psm || __le16_to_cpu(psm) > L2CAP_PSM_LE_DYN_END) {
++		result = L2CAP_CR_LE_BAD_PSM;
++		chan = NULL;
++		goto response;
++	}
++
+ 	/* Check if we have socket listening on psm */
+ 	pchan = l2cap_global_chan_by_psm(BT_LISTEN, psm, &conn->hcon->src,
+ 					 &conn->hcon->dst, LE_LINK);
+@@ -5988,6 +6002,18 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
+ 
+ 	psm  = req->psm;
+ 
++	/* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 3, Part A
++	 * page 1059:
++	 *
++	 * Valid range: 0x0001-0x00ff
++	 *
++	 * Table 4.15: L2CAP_LE_CREDIT_BASED_CONNECTION_REQ SPSM ranges
++	 */
++	if (!psm || __le16_to_cpu(psm) > L2CAP_PSM_LE_DYN_END) {
++		result = L2CAP_CR_LE_BAD_PSM;
++		goto response;
++	}
++
+ 	BT_DBG("psm 0x%2.2x mtu %u mps %u", __le16_to_cpu(psm), mtu, mps);
+ 
+ 	memset(&pdu, 0, sizeof(pdu));
+@@ -6874,6 +6900,7 @@ static int l2cap_rx_state_recv(struct l2cap_chan *chan,
+ 			       struct l2cap_ctrl *control,
+ 			       struct sk_buff *skb, u8 event)
+ {
++	struct l2cap_ctrl local_control;
+ 	int err = 0;
+ 	bool skb_in_use = false;
+ 
+@@ -6898,15 +6925,32 @@ static int l2cap_rx_state_recv(struct l2cap_chan *chan,
+ 			chan->buffer_seq = chan->expected_tx_seq;
+ 			skb_in_use = true;
+ 
++			/* l2cap_reassemble_sdu may free skb, hence invalidate
++			 * control, so make a copy in advance to use it after
++			 * l2cap_reassemble_sdu returns and to avoid the race
++			 * condition, for example:
++			 *
++			 * The current thread calls:
++			 *   l2cap_reassemble_sdu
++			 *     chan->ops->recv == l2cap_sock_recv_cb
++			 *       __sock_queue_rcv_skb
++			 * Another thread calls:
++			 *   bt_sock_recvmsg
++			 *     skb_recv_datagram
++			 *     skb_free_datagram
++			 * Then the current thread tries to access control, but
++			 * it was freed by skb_free_datagram.
++			 */
++			local_control = *control;
+ 			err = l2cap_reassemble_sdu(chan, skb, control);
+ 			if (err)
+ 				break;
+ 
+-			if (control->final) {
++			if (local_control.final) {
+ 				if (!test_and_clear_bit(CONN_REJ_ACT,
+ 							&chan->conn_state)) {
+-					control->final = 0;
+-					l2cap_retransmit_all(chan, control);
++					local_control.final = 0;
++					l2cap_retransmit_all(chan, &local_control);
+ 					l2cap_ertm_send(chan);
+ 				}
+ 			}
+@@ -7286,11 +7330,27 @@ static int l2cap_rx(struct l2cap_chan *chan, struct l2cap_ctrl *control,
+ static int l2cap_stream_rx(struct l2cap_chan *chan, struct l2cap_ctrl *control,
+ 			   struct sk_buff *skb)
+ {
++	/* l2cap_reassemble_sdu may free skb, hence invalidate control, so store
++	 * the txseq field in advance to use it after l2cap_reassemble_sdu
++	 * returns and to avoid the race condition, for example:
++	 *
++	 * The current thread calls:
++	 *   l2cap_reassemble_sdu
++	 *     chan->ops->recv == l2cap_sock_recv_cb
++	 *       __sock_queue_rcv_skb
++	 * Another thread calls:
++	 *   bt_sock_recvmsg
++	 *     skb_recv_datagram
++	 *     skb_free_datagram
++	 * Then the current thread tries to access control, but it was freed by
++	 * skb_free_datagram.
++	 */
++	u16 txseq = control->txseq;
++
+ 	BT_DBG("chan %p, control %p, skb %p, state %d", chan, control, skb,
+ 	       chan->rx_state);
+ 
+-	if (l2cap_classify_txseq(chan, control->txseq) ==
+-	    L2CAP_TXSEQ_EXPECTED) {
++	if (l2cap_classify_txseq(chan, txseq) == L2CAP_TXSEQ_EXPECTED) {
+ 		l2cap_pass_to_tx(chan, control);
+ 
+ 		BT_DBG("buffer_seq %d->%d", chan->buffer_seq,
+@@ -7313,8 +7373,8 @@ static int l2cap_stream_rx(struct l2cap_chan *chan, struct l2cap_ctrl *control,
+ 		}
+ 	}
+ 
+-	chan->last_acked_seq = control->txseq;
+-	chan->expected_tx_seq = __next_seq(chan, control->txseq);
++	chan->last_acked_seq = txseq;
++	chan->expected_tx_seq = __next_seq(chan, txseq);
+ 
+ 	return 0;
+ }
+@@ -7570,6 +7630,7 @@ static void l2cap_data_channel(struct l2cap_conn *conn, u16 cid,
+ 				return;
+ 			}
+ 
++			l2cap_chan_hold(chan);
+ 			l2cap_chan_lock(chan);
+ 		} else {
+ 			BT_DBG("unknown cid 0x%4.4x", cid);
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 434c5aab83ea2..f6f580e9d2820 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -373,7 +373,7 @@ static int __neigh_ifdown(struct neigh_table *tbl, struct net_device *dev,
+ 	write_lock_bh(&tbl->lock);
+ 	neigh_flush_dev(tbl, dev, skip_perm);
+ 	pneigh_ifdown_and_unlock(tbl, dev);
+-	pneigh_queue_purge(&tbl->proxy_queue, dev_net(dev));
++	pneigh_queue_purge(&tbl->proxy_queue, dev ? dev_net(dev) : NULL);
+ 	if (skb_queue_empty_lockless(&tbl->proxy_queue))
+ 		del_timer_sync(&tbl->proxy_timer);
+ 	return 0;
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 87d73a3e92bad..48223c264991b 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1726,12 +1726,7 @@ static const struct net_protocol igmp_protocol = {
+ };
+ #endif
+ 
+-/* thinking of making this const? Don't.
+- * early_demux can change based on sysctl.
+- */
+-static struct net_protocol tcp_protocol = {
+-	.early_demux	=	tcp_v4_early_demux,
+-	.early_demux_handler =  tcp_v4_early_demux,
++static const struct net_protocol tcp_protocol = {
+ 	.handler	=	tcp_v4_rcv,
+ 	.err_handler	=	tcp_v4_err,
+ 	.no_policy	=	1,
+@@ -1739,12 +1734,7 @@ static struct net_protocol tcp_protocol = {
+ 	.icmp_strict_tag_validation = 1,
+ };
+ 
+-/* thinking of making this const? Don't.
+- * early_demux can change based on sysctl.
+- */
+-static struct net_protocol udp_protocol = {
+-	.early_demux =	udp_v4_early_demux,
+-	.early_demux_handler =	udp_v4_early_demux,
++static const struct net_protocol udp_protocol = {
+ 	.handler =	udp_rcv,
+ 	.err_handler =	udp_err,
+ 	.no_policy =	1,
+diff --git a/net/ipv4/ip_input.c b/net/ipv4/ip_input.c
+index b0c244af1e4d5..f6b3237e88cab 100644
+--- a/net/ipv4/ip_input.c
++++ b/net/ipv4/ip_input.c
+@@ -309,14 +309,13 @@ static bool ip_can_use_hint(const struct sk_buff *skb, const struct iphdr *iph,
+ 	       ip_hdr(hint)->tos == iph->tos;
+ }
+ 
+-INDIRECT_CALLABLE_DECLARE(int udp_v4_early_demux(struct sk_buff *));
+-INDIRECT_CALLABLE_DECLARE(int tcp_v4_early_demux(struct sk_buff *));
++int tcp_v4_early_demux(struct sk_buff *skb);
++int udp_v4_early_demux(struct sk_buff *skb);
+ static int ip_rcv_finish_core(struct net *net, struct sock *sk,
+ 			      struct sk_buff *skb, struct net_device *dev,
+ 			      const struct sk_buff *hint)
+ {
+ 	const struct iphdr *iph = ip_hdr(skb);
+-	int (*edemux)(struct sk_buff *skb);
+ 	struct rtable *rt;
+ 	int err;
+ 
+@@ -327,21 +326,29 @@ static int ip_rcv_finish_core(struct net *net, struct sock *sk,
+ 			goto drop_error;
+ 	}
+ 
+-	if (net->ipv4.sysctl_ip_early_demux &&
++	if (READ_ONCE(net->ipv4.sysctl_ip_early_demux) &&
+ 	    !skb_dst(skb) &&
+ 	    !skb->sk &&
+ 	    !ip_is_fragment(iph)) {
+-		const struct net_protocol *ipprot;
+-		int protocol = iph->protocol;
+-
+-		ipprot = rcu_dereference(inet_protos[protocol]);
+-		if (ipprot && (edemux = READ_ONCE(ipprot->early_demux))) {
+-			err = INDIRECT_CALL_2(edemux, tcp_v4_early_demux,
+-					      udp_v4_early_demux, skb);
+-			if (unlikely(err))
+-				goto drop_error;
+-			/* must reload iph, skb->head might have changed */
+-			iph = ip_hdr(skb);
++		switch (iph->protocol) {
++		case IPPROTO_TCP:
++			if (READ_ONCE(net->ipv4.sysctl_tcp_early_demux)) {
++				tcp_v4_early_demux(skb);
++
++				/* must reload iph, skb->head might have changed */
++				iph = ip_hdr(skb);
++			}
++			break;
++		case IPPROTO_UDP:
++			if (READ_ONCE(net->ipv4.sysctl_udp_early_demux)) {
++				err = udp_v4_early_demux(skb);
++				if (unlikely(err))
++					goto drop_error;
++
++				/* must reload iph, skb->head might have changed */
++				iph = ip_hdr(skb);
++			}
++			break;
+ 		}
+ 	}
+ 
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 86f553864f98f..439970e02ac65 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -361,61 +361,6 @@ bad_key:
+ 	return ret;
+ }
+ 
+-static void proc_configure_early_demux(int enabled, int protocol)
+-{
+-	struct net_protocol *ipprot;
+-#if IS_ENABLED(CONFIG_IPV6)
+-	struct inet6_protocol *ip6prot;
+-#endif
+-
+-	rcu_read_lock();
+-
+-	ipprot = rcu_dereference(inet_protos[protocol]);
+-	if (ipprot)
+-		ipprot->early_demux = enabled ? ipprot->early_demux_handler :
+-						NULL;
+-
+-#if IS_ENABLED(CONFIG_IPV6)
+-	ip6prot = rcu_dereference(inet6_protos[protocol]);
+-	if (ip6prot)
+-		ip6prot->early_demux = enabled ? ip6prot->early_demux_handler :
+-						 NULL;
+-#endif
+-	rcu_read_unlock();
+-}
+-
+-static int proc_tcp_early_demux(struct ctl_table *table, int write,
+-				void *buffer, size_t *lenp, loff_t *ppos)
+-{
+-	int ret = 0;
+-
+-	ret = proc_dointvec(table, write, buffer, lenp, ppos);
+-
+-	if (write && !ret) {
+-		int enabled = init_net.ipv4.sysctl_tcp_early_demux;
+-
+-		proc_configure_early_demux(enabled, IPPROTO_TCP);
+-	}
+-
+-	return ret;
+-}
+-
+-static int proc_udp_early_demux(struct ctl_table *table, int write,
+-				void *buffer, size_t *lenp, loff_t *ppos)
+-{
+-	int ret = 0;
+-
+-	ret = proc_dointvec(table, write, buffer, lenp, ppos);
+-
+-	if (write && !ret) {
+-		int enabled = init_net.ipv4.sysctl_udp_early_demux;
+-
+-		proc_configure_early_demux(enabled, IPPROTO_UDP);
+-	}
+-
+-	return ret;
+-}
+-
+ static int proc_tfo_blackhole_detect_timeout(struct ctl_table *table,
+ 					     int write, void *buffer,
+ 					     size_t *lenp, loff_t *ppos)
+@@ -685,14 +630,14 @@ static struct ctl_table ipv4_net_table[] = {
+ 		.data           = &init_net.ipv4.sysctl_udp_early_demux,
+ 		.maxlen         = sizeof(int),
+ 		.mode           = 0644,
+-		.proc_handler   = proc_udp_early_demux
++		.proc_handler   = proc_douintvec_minmax,
+ 	},
+ 	{
+ 		.procname       = "tcp_early_demux",
+ 		.data           = &init_net.ipv4.sysctl_tcp_early_demux,
+ 		.maxlen         = sizeof(int),
+ 		.mode           = 0644,
+-		.proc_handler   = proc_tcp_early_demux
++		.proc_handler   = proc_douintvec_minmax,
+ 	},
+ 	{
+ 		.procname       = "nexthop_compat_mode",
+diff --git a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c
+index 15ea3d082534d..4eb9fbfdce332 100644
+--- a/net/ipv6/ip6_input.c
++++ b/net/ipv6/ip6_input.c
+@@ -44,21 +44,25 @@
+ #include <net/inet_ecn.h>
+ #include <net/dst_metadata.h>
+ 
+-INDIRECT_CALLABLE_DECLARE(void udp_v6_early_demux(struct sk_buff *));
+-INDIRECT_CALLABLE_DECLARE(void tcp_v6_early_demux(struct sk_buff *));
++void udp_v6_early_demux(struct sk_buff *);
++void tcp_v6_early_demux(struct sk_buff *);
+ static void ip6_rcv_finish_core(struct net *net, struct sock *sk,
+ 				struct sk_buff *skb)
+ {
+-	void (*edemux)(struct sk_buff *skb);
+-
+-	if (net->ipv4.sysctl_ip_early_demux && !skb_dst(skb) && skb->sk == NULL) {
+-		const struct inet6_protocol *ipprot;
+-
+-		ipprot = rcu_dereference(inet6_protos[ipv6_hdr(skb)->nexthdr]);
+-		if (ipprot && (edemux = READ_ONCE(ipprot->early_demux)))
+-			INDIRECT_CALL_2(edemux, tcp_v6_early_demux,
+-					udp_v6_early_demux, skb);
++	if (READ_ONCE(net->ipv4.sysctl_ip_early_demux) &&
++	    !skb_dst(skb) && !skb->sk) {
++		switch (ipv6_hdr(skb)->nexthdr) {
++		case IPPROTO_TCP:
++			if (READ_ONCE(net->ipv4.sysctl_tcp_early_demux))
++				tcp_v6_early_demux(skb);
++			break;
++		case IPPROTO_UDP:
++			if (READ_ONCE(net->ipv4.sysctl_udp_early_demux))
++				udp_v6_early_demux(skb);
++			break;
++		}
+ 	}
++
+ 	if (!skb_valid_dst(skb))
+ 		ip6_route_input(skb);
+ }
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 6fa118bf40cdd..2017257cb2784 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -417,6 +417,12 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ 		rtnl_lock();
+ 	lock_sock(sk);
+ 
++	/* Another thread has converted the socket into IPv4 with
++	 * IPV6_ADDRFORM concurrently.
++	 */
++	if (unlikely(sk->sk_family != AF_INET6))
++		goto unlock;
++
+ 	switch (optname) {
+ 
+ 	case IPV6_ADDRFORM:
+@@ -976,6 +982,7 @@ done:
+ 		break;
+ 	}
+ 
++unlock:
+ 	release_sock(sk);
+ 	if (needs_rtnl)
+ 		rtnl_unlock();
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index cdf215442d373..803d1aa83140c 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -6405,10 +6405,16 @@ static void __net_exit ip6_route_net_exit(struct net *net)
+ static int __net_init ip6_route_net_init_late(struct net *net)
+ {
+ #ifdef CONFIG_PROC_FS
+-	proc_create_net("ipv6_route", 0, net->proc_net, &ipv6_route_seq_ops,
+-			sizeof(struct ipv6_route_iter));
+-	proc_create_net_single("rt6_stats", 0444, net->proc_net,
+-			rt6_stats_seq_show, NULL);
++	if (!proc_create_net("ipv6_route", 0, net->proc_net,
++			     &ipv6_route_seq_ops,
++			     sizeof(struct ipv6_route_iter)))
++		return -ENOMEM;
++
++	if (!proc_create_net_single("rt6_stats", 0444, net->proc_net,
++				    rt6_stats_seq_show, NULL)) {
++		remove_proc_entry("ipv6_route", net->proc_net);
++		return -ENOMEM;
++	}
+ #endif
+ 	return 0;
+ }
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index c14eaec64a0b8..a558dd9d177b0 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1818,7 +1818,7 @@ do_time_wait:
+ 	goto discard_it;
+ }
+ 
+-INDIRECT_CALLABLE_SCOPE void tcp_v6_early_demux(struct sk_buff *skb)
++void tcp_v6_early_demux(struct sk_buff *skb)
+ {
+ 	const struct ipv6hdr *hdr;
+ 	const struct tcphdr *th;
+@@ -2169,12 +2169,7 @@ struct proto tcpv6_prot = {
+ };
+ EXPORT_SYMBOL_GPL(tcpv6_prot);
+ 
+-/* thinking of making this const? Don't.
+- * early_demux can change based on sysctl.
+- */
+-static struct inet6_protocol tcpv6_protocol = {
+-	.early_demux	=	tcp_v6_early_demux,
+-	.early_demux_handler =  tcp_v6_early_demux,
++static const struct inet6_protocol tcpv6_protocol = {
+ 	.handler	=	tcp_v6_rcv,
+ 	.err_handler	=	tcp_v6_err,
+ 	.flags		=	INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL,
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 514e6a55959fe..1805cc5f7418b 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1027,7 +1027,7 @@ static struct sock *__udp6_lib_demux_lookup(struct net *net,
+ 	return NULL;
+ }
+ 
+-INDIRECT_CALLABLE_SCOPE void udp_v6_early_demux(struct sk_buff *skb)
++void udp_v6_early_demux(struct sk_buff *skb)
+ {
+ 	struct net *net = dev_net(skb->dev);
+ 	const struct udphdr *uh;
+@@ -1640,12 +1640,7 @@ int udpv6_getsockopt(struct sock *sk, int level, int optname,
+ 	return ipv6_getsockopt(sk, level, optname, optval, optlen);
+ }
+ 
+-/* thinking of making this const? Don't.
+- * early_demux can change based on sysctl.
+- */
+-static struct inet6_protocol udpv6_protocol = {
+-	.early_demux	=	udp_v6_early_demux,
+-	.early_demux_handler =  udp_v6_early_demux,
++static const struct inet6_protocol udpv6_protocol = {
+ 	.handler	=	udpv6_rcv,
+ 	.err_handler	=	udpv6_err,
+ 	.flags		=	INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL,
+diff --git a/net/netfilter/ipvs/ip_vs_app.c b/net/netfilter/ipvs/ip_vs_app.c
+index f9b16f2b22191..fdacbc3c15bef 100644
+--- a/net/netfilter/ipvs/ip_vs_app.c
++++ b/net/netfilter/ipvs/ip_vs_app.c
+@@ -599,13 +599,19 @@ static const struct seq_operations ip_vs_app_seq_ops = {
+ int __net_init ip_vs_app_net_init(struct netns_ipvs *ipvs)
+ {
+ 	INIT_LIST_HEAD(&ipvs->app_list);
+-	proc_create_net("ip_vs_app", 0, ipvs->net->proc_net, &ip_vs_app_seq_ops,
+-			sizeof(struct seq_net_private));
++#ifdef CONFIG_PROC_FS
++	if (!proc_create_net("ip_vs_app", 0, ipvs->net->proc_net,
++			     &ip_vs_app_seq_ops,
++			     sizeof(struct seq_net_private)))
++		return -ENOMEM;
++#endif
+ 	return 0;
+ }
+ 
+ void __net_exit ip_vs_app_net_cleanup(struct netns_ipvs *ipvs)
+ {
+ 	unregister_ip_vs_app(ipvs, NULL /* all */);
++#ifdef CONFIG_PROC_FS
+ 	remove_proc_entry("ip_vs_app", ipvs->net->proc_net);
++#endif
+ }
+diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
+index fb67f1ca2495b..cb6d68220c265 100644
+--- a/net/netfilter/ipvs/ip_vs_conn.c
++++ b/net/netfilter/ipvs/ip_vs_conn.c
+@@ -1265,8 +1265,8 @@ static inline int todrop_entry(struct ip_vs_conn *cp)
+ 	 * The drop rate array needs tuning for real environments.
+ 	 * Called from timer bh only => no locking
+ 	 */
+-	static const char todrop_rate[9] = {0, 1, 2, 3, 4, 5, 6, 7, 8};
+-	static char todrop_counter[9] = {0};
++	static const signed char todrop_rate[9] = {0, 1, 2, 3, 4, 5, 6, 7, 8};
++	static signed char todrop_counter[9] = {0};
+ 	int i;
+ 
+ 	/* if the conn entry hasn't lasted for 60 seconds, don't drop it.
+@@ -1447,20 +1447,36 @@ int __net_init ip_vs_conn_net_init(struct netns_ipvs *ipvs)
+ {
+ 	atomic_set(&ipvs->conn_count, 0);
+ 
+-	proc_create_net("ip_vs_conn", 0, ipvs->net->proc_net,
+-			&ip_vs_conn_seq_ops, sizeof(struct ip_vs_iter_state));
+-	proc_create_net("ip_vs_conn_sync", 0, ipvs->net->proc_net,
+-			&ip_vs_conn_sync_seq_ops,
+-			sizeof(struct ip_vs_iter_state));
++#ifdef CONFIG_PROC_FS
++	if (!proc_create_net("ip_vs_conn", 0, ipvs->net->proc_net,
++			     &ip_vs_conn_seq_ops,
++			     sizeof(struct ip_vs_iter_state)))
++		goto err_conn;
++
++	if (!proc_create_net("ip_vs_conn_sync", 0, ipvs->net->proc_net,
++			     &ip_vs_conn_sync_seq_ops,
++			     sizeof(struct ip_vs_iter_state)))
++		goto err_conn_sync;
++#endif
++
+ 	return 0;
++
++#ifdef CONFIG_PROC_FS
++err_conn_sync:
++	remove_proc_entry("ip_vs_conn", ipvs->net->proc_net);
++err_conn:
++	return -ENOMEM;
++#endif
+ }
+ 
+ void __net_exit ip_vs_conn_net_cleanup(struct netns_ipvs *ipvs)
+ {
+ 	/* flush all the connection entries first */
+ 	ip_vs_conn_flush(ipvs);
++#ifdef CONFIG_PROC_FS
+ 	remove_proc_entry("ip_vs_conn", ipvs->net->proc_net);
+ 	remove_proc_entry("ip_vs_conn_sync", ipvs->net->proc_net);
++#endif
+ }
+ 
+ int __init ip_vs_conn_init(void)
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 810995d712ac7..2143edafba772 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -7527,9 +7527,6 @@ static void nft_commit_release(struct nft_trans *trans)
+ 		nf_tables_chain_destroy(&trans->ctx);
+ 		break;
+ 	case NFT_MSG_DELRULE:
+-		if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
+-			nft_flow_rule_destroy(nft_trans_flow_rule(trans));
+-
+ 		nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans));
+ 		break;
+ 	case NFT_MSG_DELSET:
+@@ -7973,6 +7970,9 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 			nft_rule_expr_deactivate(&trans->ctx,
+ 						 nft_trans_rule(trans),
+ 						 NFT_TRANS_COMMIT);
++
++			if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
++				nft_flow_rule_destroy(nft_trans_flow_rule(trans));
+ 			break;
+ 		case NFT_MSG_NEWSET:
+ 			nft_clear(net, nft_trans_set(trans));
+diff --git a/net/rose/rose_link.c b/net/rose/rose_link.c
+index f6102e6f51617..730d2205f1976 100644
+--- a/net/rose/rose_link.c
++++ b/net/rose/rose_link.c
+@@ -236,6 +236,9 @@ void rose_transmit_clear_request(struct rose_neigh *neigh, unsigned int lci, uns
+ 	unsigned char *dptr;
+ 	int len;
+ 
++	if (!neigh->dev)
++		return;
++
+ 	len = AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + ROSE_MIN_LEN + 3;
+ 
+ 	if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL)
+diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c
+index f1e013e3f04a9..935d90874b1b7 100644
+--- a/net/sched/sch_red.c
++++ b/net/sched/sch_red.c
+@@ -72,6 +72,7 @@ static int red_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ {
+ 	struct red_sched_data *q = qdisc_priv(sch);
+ 	struct Qdisc *child = q->qdisc;
++	unsigned int len;
+ 	int ret;
+ 
+ 	q->vars.qavg = red_calc_qavg(&q->parms,
+@@ -126,9 +127,10 @@ static int red_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 		break;
+ 	}
+ 
++	len = qdisc_pkt_len(skb);
+ 	ret = qdisc_enqueue(skb, child, to_free);
+ 	if (likely(ret == NET_XMIT_SUCCESS)) {
+-		qdisc_qstats_backlog_inc(sch, skb);
++		sch->qstats.backlog += len;
+ 		sch->q.qlen++;
+ 	} else if (net_xmit_drop_count(ret)) {
+ 		q->stats.pdrop++;
+diff --git a/security/commoncap.c b/security/commoncap.c
+index 28d582ed80c9d..b44b69796c0ba 100644
+--- a/security/commoncap.c
++++ b/security/commoncap.c
+@@ -391,8 +391,10 @@ int cap_inode_getsecurity(struct inode *inode, const char *name, void **buffer,
+ 				 &tmpbuf, size, GFP_NOFS);
+ 	dput(dentry);
+ 
+-	if (ret < 0 || !tmpbuf)
+-		return ret;
++	if (ret < 0 || !tmpbuf) {
++		size = ret;
++		goto out_free;
++	}
+ 
+ 	fs_ns = inode->i_sb->s_user_ns;
+ 	cap = (struct vfs_cap_data *) tmpbuf;
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 1ac91c46da3cf..a51591f68ae68 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3656,6 +3656,58 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ 	}
+ },
+ 
++/*
++ * MacroSilicon MS2100/MS2106 based AV capture cards
++ *
++ * These claim 96kHz 1ch in the descriptors, but are actually 48kHz 2ch.
++ * They also need QUIRK_AUDIO_ALIGN_TRANSFER, which makes one wonder if
++ * they pretend to be 96kHz mono as a workaround for stereo being broken
++ * by that...
++ *
++ * They also have an issue with initial stream alignment that causes the
++ * channels to be swapped and out of phase, which is dealt with in quirks.c.
++ */
++{
++	USB_AUDIO_DEVICE(0x534d, 0x0021),
++	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++		.vendor_name = "MacroSilicon",
++		.product_name = "MS210x",
++		.ifnum = QUIRK_ANY_INTERFACE,
++		.type = QUIRK_COMPOSITE,
++		.data = &(const struct snd_usb_audio_quirk[]) {
++			{
++				.ifnum = 2,
++				.type = QUIRK_AUDIO_ALIGN_TRANSFER,
++			},
++			{
++				.ifnum = 2,
++				.type = QUIRK_AUDIO_STANDARD_MIXER,
++			},
++			{
++				.ifnum = 3,
++				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
++				.data = &(const struct audioformat) {
++					.formats = SNDRV_PCM_FMTBIT_S16_LE,
++					.channels = 2,
++					.iface = 3,
++					.altsetting = 1,
++					.altset_idx = 1,
++					.attributes = 0,
++					.endpoint = 0x82,
++					.ep_attr = USB_ENDPOINT_XFER_ISOC |
++						USB_ENDPOINT_SYNC_ASYNC,
++					.rates = SNDRV_PCM_RATE_CONTINUOUS,
++					.rate_min = 48000,
++					.rate_max = 48000,
++				}
++			},
++			{
++				.ifnum = -1
++			}
++		}
++	}
++},
++
+ /*
+  * MacroSilicon MS2109 based HDMI capture cards
+  *
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 41f5d8242478f..04a691bc560cb 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1508,6 +1508,7 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ 	case USB_ID(0x2b73, 0x0017): /* Pioneer DJ DJM-250MK2 */
+ 		pioneer_djm_set_format_quirk(subs);
+ 		break;
++	case USB_ID(0x534d, 0x0021): /* MacroSilicon MS2100/MS2106 */
+ 	case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */
+ 		subs->stream_offset_adj = 2;
+ 		break;
+diff --git a/tools/include/nolibc/nolibc.h b/tools/include/nolibc/nolibc.h
+index b8cecb66d28b7..c20d2fe7cebaa 100644
+--- a/tools/include/nolibc/nolibc.h
++++ b/tools/include/nolibc/nolibc.h
+@@ -2318,9 +2318,9 @@ static __attribute__((unused))
+ int memcmp(const void *s1, const void *s2, size_t n)
+ {
+ 	size_t ofs = 0;
+-	char c1 = 0;
++	int c1 = 0;
+ 
+-	while (ofs < n && !(c1 = ((char *)s1)[ofs] - ((char *)s2)[ofs])) {
++	while (ofs < n && !(c1 = ((unsigned char *)s1)[ofs] - ((unsigned char *)s2)[ofs])) {
+ 		ofs++;
+ 	}
+ 	return c1;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-11-16 12:08 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2022-11-16 12:08 UTC (permalink / raw
  To: gentoo-commits

commit:     32d5ec466bd2bc476549cc33ccea45fbc944fad7
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Nov 16 12:08:14 2022 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Nov 16 12:08:14 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=32d5ec46

Linux patch 5.10.155

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |    4 +
 1154_linux-5.10.155.patch | 2834 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2838 insertions(+)

diff --git a/0000_README b/0000_README
index 6ff64173..3e23190c 100644
--- a/0000_README
+++ b/0000_README
@@ -659,6 +659,10 @@ Patch:  1153_linux-5.10.154.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.154
 
+Patch:  1154_linux-5.10.155.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.155
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1154_linux-5.10.155.patch b/1154_linux-5.10.155.patch
new file mode 100644
index 00000000..5bb17cda
--- /dev/null
+++ b/1154_linux-5.10.155.patch
@@ -0,0 +1,2834 @@
+diff --git a/Documentation/virt/kvm/devices/vm.rst b/Documentation/virt/kvm/devices/vm.rst
+index 0aa5b1cfd700c..60acc39e0e937 100644
+--- a/Documentation/virt/kvm/devices/vm.rst
++++ b/Documentation/virt/kvm/devices/vm.rst
+@@ -215,6 +215,7 @@ KVM_S390_VM_TOD_EXT).
+ :Parameters: address of a buffer in user space to store the data (u8) to
+ :Returns:   -EFAULT if the given address is not accessible from kernel space;
+ 	    -EINVAL if setting the TOD clock extension to != 0 is not supported
++	    -EOPNOTSUPP for a PV guest (TOD managed by the ultravisor)
+ 
+ 3.2. ATTRIBUTE: KVM_S390_VM_TOD_LOW
+ -----------------------------------
+@@ -224,6 +225,7 @@ the POP (u64).
+ 
+ :Parameters: address of a buffer in user space to store the data (u64) to
+ :Returns:    -EFAULT if the given address is not accessible from kernel space
++	     -EOPNOTSUPP for a PV guest (TOD managed by the ultravisor)
+ 
+ 3.3. ATTRIBUTE: KVM_S390_VM_TOD_EXT
+ -----------------------------------
+@@ -237,6 +239,7 @@ it, it is stored as 0 and not allowed to be set to a value != 0.
+ 	     (kvm_s390_vm_tod_clock) to
+ :Returns:   -EFAULT if the given address is not accessible from kernel space;
+ 	    -EINVAL if setting the TOD clock extension to != 0 is not supported
++	    -EOPNOTSUPP for a PV guest (TOD managed by the ultravisor)
+ 
+ 4. GROUP: KVM_S390_VM_CRYPTO
+ ============================
+diff --git a/Makefile b/Makefile
+index 43fecb4045814..8ccf902b3609f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 154
++SUBLEVEL = 155
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
+index fa02efb28e88e..c5685179db5af 100644
+--- a/arch/arm64/kernel/efi.c
++++ b/arch/arm64/kernel/efi.c
+@@ -12,6 +12,14 @@
+ 
+ #include <asm/efi.h>
+ 
++static bool region_is_misaligned(const efi_memory_desc_t *md)
++{
++	if (PAGE_SIZE == EFI_PAGE_SIZE)
++		return false;
++	return !PAGE_ALIGNED(md->phys_addr) ||
++	       !PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT);
++}
++
+ /*
+  * Only regions of type EFI_RUNTIME_SERVICES_CODE need to be
+  * executable, everything else can be mapped with the XN bits
+@@ -25,14 +33,22 @@ static __init pteval_t create_mapping_protection(efi_memory_desc_t *md)
+ 	if (type == EFI_MEMORY_MAPPED_IO)
+ 		return PROT_DEVICE_nGnRE;
+ 
+-	if (WARN_ONCE(!PAGE_ALIGNED(md->phys_addr),
+-		      "UEFI Runtime regions are not aligned to 64 KB -- buggy firmware?"))
++	if (region_is_misaligned(md)) {
++		static bool __initdata code_is_misaligned;
++
+ 		/*
+-		 * If the region is not aligned to the page size of the OS, we
+-		 * can not use strict permissions, since that would also affect
+-		 * the mapping attributes of the adjacent regions.
++		 * Regions that are not aligned to the OS page size cannot be
++		 * mapped with strict permissions, as those might interfere
++		 * with the permissions that are needed by the adjacent
++		 * region's mapping. However, if we haven't encountered any
++		 * misaligned runtime code regions so far, we can safely use
++		 * non-executable permissions for non-code regions.
+ 		 */
+-		return pgprot_val(PAGE_KERNEL_EXEC);
++		code_is_misaligned |= (type == EFI_RUNTIME_SERVICES_CODE);
++
++		return code_is_misaligned ? pgprot_val(PAGE_KERNEL_EXEC)
++					  : pgprot_val(PAGE_KERNEL);
++	}
+ 
+ 	/* R-- */
+ 	if ((attr & (EFI_MEMORY_XP | EFI_MEMORY_RO)) ==
+@@ -62,19 +78,16 @@ int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
+ 	bool page_mappings_only = (md->type == EFI_RUNTIME_SERVICES_CODE ||
+ 				   md->type == EFI_RUNTIME_SERVICES_DATA);
+ 
+-	if (!PAGE_ALIGNED(md->phys_addr) ||
+-	    !PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT)) {
+-		/*
+-		 * If the end address of this region is not aligned to page
+-		 * size, the mapping is rounded up, and may end up sharing a
+-		 * page frame with the next UEFI memory region. If we create
+-		 * a block entry now, we may need to split it again when mapping
+-		 * the next region, and support for that is going to be removed
+-		 * from the MMU routines. So avoid block mappings altogether in
+-		 * that case.
+-		 */
++	/*
++	 * If this region is not aligned to the page size used by the OS, the
++	 * mapping will be rounded outwards, and may end up sharing a page
++	 * frame with an adjacent runtime memory region. Given that the page
++	 * table descriptor covering the shared page will be rewritten when the
++	 * adjacent region gets mapped, we must avoid block mappings here so we
++	 * don't have to worry about splitting them when that happens.
++	 */
++	if (region_is_misaligned(md))
+ 		page_mappings_only = true;
+-	}
+ 
+ 	create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
+ 			   md->num_pages << EFI_PAGE_SHIFT,
+@@ -101,6 +114,9 @@ int __init efi_set_mapping_permissions(struct mm_struct *mm,
+ 	BUG_ON(md->type != EFI_RUNTIME_SERVICES_CODE &&
+ 	       md->type != EFI_RUNTIME_SERVICES_DATA);
+ 
++	if (region_is_misaligned(md))
++		return 0;
++
+ 	/*
+ 	 * Calling apply_to_page_range() is only safe on regions that are
+ 	 * guaranteed to be mapped down to pages. Since we are only called
+diff --git a/arch/mips/kernel/jump_label.c b/arch/mips/kernel/jump_label.c
+index 662c8db9f45ba..9f5b1247b4ba4 100644
+--- a/arch/mips/kernel/jump_label.c
++++ b/arch/mips/kernel/jump_label.c
+@@ -56,7 +56,7 @@ void arch_jump_label_transform(struct jump_entry *e,
+ 			 * The branch offset must fit in the instruction's 26
+ 			 * bit field.
+ 			 */
+-			WARN_ON((offset >= BIT(25)) ||
++			WARN_ON((offset >= (long)BIT(25)) ||
+ 				(offset < -(long)BIT(25)));
+ 
+ 			insn.j_format.opcode = bc6_op;
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index 73e8b5e5bb654..b16304fdf4489 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -470,6 +470,7 @@ extern void *dtb_early_va;
+ extern uintptr_t dtb_early_pa;
+ void setup_bootmem(void);
+ void paging_init(void);
++void misc_mem_init(void);
+ 
+ #define FIRST_USER_ADDRESS  0
+ 
+diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
+index dd5f985b1f40e..9a8b2e60adcf1 100644
+--- a/arch/riscv/kernel/process.c
++++ b/arch/riscv/kernel/process.c
+@@ -111,6 +111,8 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
+ {
+ 	struct pt_regs *childregs = task_pt_regs(p);
+ 
++	memset(&p->thread.s, 0, sizeof(p->thread.s));
++
+ 	/* p->thread holds context to be restored by __switch_to() */
+ 	if (unlikely(p->flags & PF_KTHREAD)) {
+ 		/* Kernel thread */
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index cc85858f7fe8e..8e78a8ab6a345 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -96,6 +96,8 @@ void __init setup_arch(char **cmdline_p)
+ 	else
+ 		pr_err("No DTB found in kernel mappings\n");
+ #endif
++	early_init_fdt_scan_reserved_mem();
++	misc_mem_init();
+ 
+ #ifdef CONFIG_SWIOTLB
+ 	swiotlb_init(1);
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index 24d936c147cdf..926ab3960f9e4 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -30,7 +30,7 @@ obj-y += vdso.o vdso-syms.o
+ CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+ 
+ # Disable -pg to prevent insert call site
+-CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os
++CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE)
+ 
+ # Disable profiling and instrumentation for VDSO code
+ GCOV_PROFILE := n
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index e8921e78a2926..6c2f38aac5443 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -13,6 +13,7 @@
+ #include <linux/of_fdt.h>
+ #include <linux/libfdt.h>
+ #include <linux/set_memory.h>
++#include <linux/dma-map-ops.h>
+ 
+ #include <asm/fixmap.h>
+ #include <asm/tlbflush.h>
+@@ -41,13 +42,14 @@ struct pt_alloc_ops {
+ #endif
+ };
+ 
++static phys_addr_t dma32_phys_limit __ro_after_init;
++
+ static void __init zone_sizes_init(void)
+ {
+ 	unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0, };
+ 
+ #ifdef CONFIG_ZONE_DMA32
+-	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(min(4UL * SZ_1G,
+-			(unsigned long) PFN_PHYS(max_low_pfn)));
++	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
+ #endif
+ 	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+ 
+@@ -193,6 +195,7 @@ void __init setup_bootmem(void)
+ 
+ 	max_pfn = PFN_DOWN(dram_end);
+ 	max_low_pfn = max_pfn;
++	dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn));
+ 	set_max_mapnr(max_low_pfn);
+ 
+ #ifdef CONFIG_BLK_DEV_INITRD
+@@ -205,7 +208,7 @@ void __init setup_bootmem(void)
+ 	 */
+ 	memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va));
+ 
+-	early_init_fdt_scan_reserved_mem();
++	dma_contiguous_reserve(dma32_phys_limit);
+ 	memblock_allow_resize();
+ 	memblock_dump_all();
+ }
+@@ -665,8 +668,12 @@ static void __init resource_init(void)
+ void __init paging_init(void)
+ {
+ 	setup_vm_final();
+-	sparse_init();
+ 	setup_zero_page();
++}
++
++void __init misc_mem_init(void)
++{
++	sparse_init();
+ 	zone_sizes_init();
+ 	resource_init();
+ }
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index d8e9239c24ffc..59db85fb63e1c 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -1092,6 +1092,8 @@ static int kvm_s390_vm_get_migration(struct kvm *kvm,
+ 	return 0;
+ }
+ 
++static void __kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod);
++
+ static int kvm_s390_set_tod_ext(struct kvm *kvm, struct kvm_device_attr *attr)
+ {
+ 	struct kvm_s390_vm_tod_clock gtod;
+@@ -1101,7 +1103,7 @@ static int kvm_s390_set_tod_ext(struct kvm *kvm, struct kvm_device_attr *attr)
+ 
+ 	if (!test_kvm_facility(kvm, 139) && gtod.epoch_idx)
+ 		return -EINVAL;
+-	kvm_s390_set_tod_clock(kvm, &gtod);
++	__kvm_s390_set_tod_clock(kvm, &gtod);
+ 
+ 	VM_EVENT(kvm, 3, "SET: TOD extension: 0x%x, TOD base: 0x%llx",
+ 		gtod.epoch_idx, gtod.tod);
+@@ -1132,7 +1134,7 @@ static int kvm_s390_set_tod_low(struct kvm *kvm, struct kvm_device_attr *attr)
+ 			   sizeof(gtod.tod)))
+ 		return -EFAULT;
+ 
+-	kvm_s390_set_tod_clock(kvm, &gtod);
++	__kvm_s390_set_tod_clock(kvm, &gtod);
+ 	VM_EVENT(kvm, 3, "SET: TOD base: 0x%llx", gtod.tod);
+ 	return 0;
+ }
+@@ -1144,6 +1146,16 @@ static int kvm_s390_set_tod(struct kvm *kvm, struct kvm_device_attr *attr)
+ 	if (attr->flags)
+ 		return -EINVAL;
+ 
++	mutex_lock(&kvm->lock);
++	/*
++	 * For protected guests, the TOD is managed by the ultravisor, so trying
++	 * to change it will never bring the expected results.
++	 */
++	if (kvm_s390_pv_is_protected(kvm)) {
++		ret = -EOPNOTSUPP;
++		goto out_unlock;
++	}
++
+ 	switch (attr->attr) {
+ 	case KVM_S390_VM_TOD_EXT:
+ 		ret = kvm_s390_set_tod_ext(kvm, attr);
+@@ -1158,6 +1170,9 @@ static int kvm_s390_set_tod(struct kvm *kvm, struct kvm_device_attr *attr)
+ 		ret = -ENXIO;
+ 		break;
+ 	}
++
++out_unlock:
++	mutex_unlock(&kvm->lock);
+ 	return ret;
+ }
+ 
+@@ -3862,14 +3877,12 @@ retry:
+ 	return 0;
+ }
+ 
+-void kvm_s390_set_tod_clock(struct kvm *kvm,
+-			    const struct kvm_s390_vm_tod_clock *gtod)
++static void __kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod)
+ {
+ 	struct kvm_vcpu *vcpu;
+ 	struct kvm_s390_tod_clock_ext htod;
+ 	int i;
+ 
+-	mutex_lock(&kvm->lock);
+ 	preempt_disable();
+ 
+ 	get_tod_clock_ext((char *)&htod);
+@@ -3890,7 +3903,15 @@ void kvm_s390_set_tod_clock(struct kvm *kvm,
+ 
+ 	kvm_s390_vcpu_unblock_all(kvm);
+ 	preempt_enable();
++}
++
++int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod)
++{
++	if (!mutex_trylock(&kvm->lock))
++		return 0;
++	__kvm_s390_set_tod_clock(kvm, gtod);
+ 	mutex_unlock(&kvm->lock);
++	return 1;
+ }
+ 
+ /**
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index a3e9b71d426f9..b6ff64796af9d 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -326,8 +326,7 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu);
+ int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu);
+ 
+ /* implemented in kvm-s390.c */
+-void kvm_s390_set_tod_clock(struct kvm *kvm,
+-			    const struct kvm_s390_vm_tod_clock *gtod);
++int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod);
+ long kvm_arch_fault_in_page(struct kvm_vcpu *vcpu, gpa_t gpa, int writable);
+ int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long addr);
+ int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr);
+diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
+index 3b1a498e58d25..e34d518dd3d39 100644
+--- a/arch/s390/kvm/priv.c
++++ b/arch/s390/kvm/priv.c
+@@ -102,7 +102,20 @@ static int handle_set_clock(struct kvm_vcpu *vcpu)
+ 		return kvm_s390_inject_prog_cond(vcpu, rc);
+ 
+ 	VCPU_EVENT(vcpu, 3, "SCK: setting guest TOD to 0x%llx", gtod.tod);
+-	kvm_s390_set_tod_clock(vcpu->kvm, &gtod);
++	/*
++	 * To set the TOD clock the kvm lock must be taken, but the vcpu lock
++	 * is already held in handle_set_clock. The usual lock order is the
++	 * opposite.  As SCK is deprecated and should not be used in several
++	 * cases, for example when the multiple epoch facility or TOD clock
++	 * steering facility is installed (see Principles of Operation),  a
++	 * slow path can be used.  If the lock can not be taken via try_lock,
++	 * the instruction will be retried via -EAGAIN at a later point in
++	 * time.
++	 */
++	if (!kvm_s390_try_set_tod_clock(vcpu->kvm, &gtod)) {
++		kvm_s390_retry_instr(vcpu);
++		return -EAGAIN;
++	}
+ 
+ 	kvm_s390_set_psw_cc(vcpu, 0);
+ 	return 0;
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 144dc164b7596..5a8ee3b83af2a 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -489,6 +489,11 @@
+ #define MSR_AMD64_CPUID_FN_1		0xc0011004
+ #define MSR_AMD64_LS_CFG		0xc0011020
+ #define MSR_AMD64_DC_CFG		0xc0011022
++
++#define MSR_AMD64_DE_CFG		0xc0011029
++#define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT	 1
++#define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE	BIT_ULL(MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT)
++
+ #define MSR_AMD64_BU_CFG2		0xc001102a
+ #define MSR_AMD64_IBSFETCHCTL		0xc0011030
+ #define MSR_AMD64_IBSFETCHLINAD		0xc0011031
+@@ -565,9 +570,6 @@
+ #define FAM10H_MMIO_CONF_BASE_MASK	0xfffffffULL
+ #define FAM10H_MMIO_CONF_BASE_SHIFT	20
+ #define MSR_FAM10H_NODE_ID		0xc001100c
+-#define MSR_F10H_DECFG			0xc0011029
+-#define MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT	1
+-#define MSR_F10H_DECFG_LFENCE_SERIALIZE		BIT_ULL(MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT)
+ 
+ /* K8 MSRs */
+ #define MSR_K8_TOP_MEM1			0xc001001a
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 8b9e3277a6ceb..ec3fa4dc90318 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -822,8 +822,6 @@ static void init_amd_gh(struct cpuinfo_x86 *c)
+ 		set_cpu_bug(c, X86_BUG_AMD_TLB_MMATCH);
+ }
+ 
+-#define MSR_AMD64_DE_CFG	0xC0011029
+-
+ static void init_amd_ln(struct cpuinfo_x86 *c)
+ {
+ 	/*
+@@ -1018,8 +1016,8 @@ static void init_amd(struct cpuinfo_x86 *c)
+ 		 * msr_set_bit() uses the safe accessors, too, even if the MSR
+ 		 * is not present.
+ 		 */
+-		msr_set_bit(MSR_F10H_DECFG,
+-			    MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);
++		msr_set_bit(MSR_AMD64_DE_CFG,
++			    MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT);
+ 
+ 		/* A serializing LFENCE stops RDTSC speculation */
+ 		set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
+diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
+index 774ca6bfda9f4..205fa420ee7ca 100644
+--- a/arch/x86/kernel/cpu/hygon.c
++++ b/arch/x86/kernel/cpu/hygon.c
+@@ -342,8 +342,8 @@ static void init_hygon(struct cpuinfo_x86 *c)
+ 		 * msr_set_bit() uses the safe accessors, too, even if the MSR
+ 		 * is not present.
+ 		 */
+-		msr_set_bit(MSR_F10H_DECFG,
+-			    MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);
++		msr_set_bit(MSR_AMD64_DE_CFG,
++			    MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT);
+ 
+ 		/* A serializing LFENCE stops RDTSC speculation */
+ 		set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index a0512a91760d2..2b7528821577c 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -2475,9 +2475,9 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
+ 	msr->data = 0;
+ 
+ 	switch (msr->index) {
+-	case MSR_F10H_DECFG:
+-		if (boot_cpu_has(X86_FEATURE_LFENCE_RDTSC))
+-			msr->data |= MSR_F10H_DECFG_LFENCE_SERIALIZE;
++	case MSR_AMD64_DE_CFG:
++		if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
++			msr->data |= MSR_AMD64_DE_CFG_LFENCE_SERIALIZE;
+ 		break;
+ 	case MSR_IA32_PERF_CAPABILITIES:
+ 		return 0;
+@@ -2584,7 +2584,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			msr_info->data = 0x1E;
+ 		}
+ 		break;
+-	case MSR_F10H_DECFG:
++	case MSR_AMD64_DE_CFG:
+ 		msr_info->data = svm->msr_decfg;
+ 		break;
+ 	default:
+@@ -2764,7 +2764,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ 	case MSR_VM_IGNNE:
+ 		vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);
+ 		break;
+-	case MSR_F10H_DECFG: {
++	case MSR_AMD64_DE_CFG: {
+ 		struct kvm_msr_entry msr_entry;
+ 
+ 		msr_entry.index = msr->index;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0ac80b3ff0f5b..23d7c563e012b 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1362,7 +1362,7 @@ static const u32 msr_based_features_all[] = {
+ 	MSR_IA32_VMX_EPT_VPID_CAP,
+ 	MSR_IA32_VMX_VMFUNC,
+ 
+-	MSR_F10H_DECFG,
++	MSR_AMD64_DE_CFG,
+ 	MSR_IA32_UCODE_REV,
+ 	MSR_IA32_ARCH_CAPABILITIES,
+ 	MSR_IA32_PERF_CAPABILITIES,
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index d023c85e3c536..61581c45788e3 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -522,6 +522,7 @@ static void pm_save_spec_msr(void)
+ 		MSR_TSX_FORCE_ABORT,
+ 		MSR_IA32_MCU_OPT_CTRL,
+ 		MSR_AMD64_LS_CFG,
++		MSR_AMD64_DE_CFG,
+ 	};
+ 
+ 	msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index a0e788b648214..459ece666c623 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -3303,6 +3303,7 @@ static unsigned int ata_scsiop_maint_in(struct ata_scsi_args *args, u8 *rbuf)
+ 	case REPORT_LUNS:
+ 	case REQUEST_SENSE:
+ 	case SYNCHRONIZE_CACHE:
++	case SYNCHRONIZE_CACHE_16:
+ 	case REZERO_UNIT:
+ 	case SEEK_6:
+ 	case SEEK_10:
+@@ -3969,6 +3970,7 @@ static inline ata_xlat_func_t ata_get_xlat_func(struct ata_device *dev, u8 cmd)
+ 		return ata_scsi_write_same_xlat;
+ 
+ 	case SYNCHRONIZE_CACHE:
++	case SYNCHRONIZE_CACHE_16:
+ 		if (ata_try_flush_cache(dev))
+ 			return ata_scsi_flush_xlat;
+ 		break;
+@@ -4215,6 +4217,7 @@ void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd)
+ 	 * turning this into a no-op.
+ 	 */
+ 	case SYNCHRONIZE_CACHE:
++	case SYNCHRONIZE_CACHE_16:
+ 		fallthrough;
+ 
+ 	/* no-op's, complete with success */
+diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
+index 7eaee5b705b1b..6a4f9697b5742 100644
+--- a/drivers/dma/at_hdmac.c
++++ b/drivers/dma/at_hdmac.c
+@@ -237,6 +237,8 @@ static void atc_dostart(struct at_dma_chan *atchan, struct at_desc *first)
+ 		       ATC_SPIP_BOUNDARY(first->boundary));
+ 	channel_writel(atchan, DPIP, ATC_DPIP_HOLE(first->dst_hole) |
+ 		       ATC_DPIP_BOUNDARY(first->boundary));
++	/* Don't allow CPU to reorder channel enable. */
++	wmb();
+ 	dma_writel(atdma, CHER, atchan->mask);
+ 
+ 	vdbg_dump_regs(atchan);
+@@ -297,7 +299,8 @@ static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie)
+ 	struct at_desc *desc_first = atc_first_active(atchan);
+ 	struct at_desc *desc;
+ 	int ret;
+-	u32 ctrla, dscr, trials;
++	u32 ctrla, dscr;
++	unsigned int i;
+ 
+ 	/*
+ 	 * If the cookie doesn't match to the currently running transfer then
+@@ -367,7 +370,7 @@ static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie)
+ 		dscr = channel_readl(atchan, DSCR);
+ 		rmb(); /* ensure DSCR is read before CTRLA */
+ 		ctrla = channel_readl(atchan, CTRLA);
+-		for (trials = 0; trials < ATC_MAX_DSCR_TRIALS; ++trials) {
++		for (i = 0; i < ATC_MAX_DSCR_TRIALS; ++i) {
+ 			u32 new_dscr;
+ 
+ 			rmb(); /* ensure DSCR is read after CTRLA */
+@@ -393,7 +396,7 @@ static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie)
+ 			rmb(); /* ensure DSCR is read before CTRLA */
+ 			ctrla = channel_readl(atchan, CTRLA);
+ 		}
+-		if (unlikely(trials >= ATC_MAX_DSCR_TRIALS))
++		if (unlikely(i == ATC_MAX_DSCR_TRIALS))
+ 			return -ETIMEDOUT;
+ 
+ 		/* for the first descriptor we can be more accurate */
+@@ -443,18 +446,6 @@ atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
+ 	if (!atc_chan_is_cyclic(atchan))
+ 		dma_cookie_complete(txd);
+ 
+-	/* If the transfer was a memset, free our temporary buffer */
+-	if (desc->memset_buffer) {
+-		dma_pool_free(atdma->memset_pool, desc->memset_vaddr,
+-			      desc->memset_paddr);
+-		desc->memset_buffer = false;
+-	}
+-
+-	/* move children to free_list */
+-	list_splice_init(&desc->tx_list, &atchan->free_list);
+-	/* move myself to free_list */
+-	list_move(&desc->desc_node, &atchan->free_list);
+-
+ 	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+ 	dma_descriptor_unmap(txd);
+@@ -464,42 +455,20 @@ atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
+ 		dmaengine_desc_get_callback_invoke(txd, NULL);
+ 
+ 	dma_run_dependencies(txd);
+-}
+-
+-/**
+- * atc_complete_all - finish work for all transactions
+- * @atchan: channel to complete transactions for
+- *
+- * Eventually submit queued descriptors if any
+- *
+- * Assume channel is idle while calling this function
+- * Called with atchan->lock held and bh disabled
+- */
+-static void atc_complete_all(struct at_dma_chan *atchan)
+-{
+-	struct at_desc *desc, *_desc;
+-	LIST_HEAD(list);
+-	unsigned long flags;
+-
+-	dev_vdbg(chan2dev(&atchan->chan_common), "complete all\n");
+ 
+ 	spin_lock_irqsave(&atchan->lock, flags);
+-
+-	/*
+-	 * Submit queued descriptors ASAP, i.e. before we go through
+-	 * the completed ones.
+-	 */
+-	if (!list_empty(&atchan->queue))
+-		atc_dostart(atchan, atc_first_queued(atchan));
+-	/* empty active_list now it is completed */
+-	list_splice_init(&atchan->active_list, &list);
+-	/* empty queue list by moving descriptors (if any) to active_list */
+-	list_splice_init(&atchan->queue, &atchan->active_list);
+-
++	/* move children to free_list */
++	list_splice_init(&desc->tx_list, &atchan->free_list);
++	/* add myself to free_list */
++	list_add(&desc->desc_node, &atchan->free_list);
+ 	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+-	list_for_each_entry_safe(desc, _desc, &list, desc_node)
+-		atc_chain_complete(atchan, desc);
++	/* If the transfer was a memset, free our temporary buffer */
++	if (desc->memset_buffer) {
++		dma_pool_free(atdma->memset_pool, desc->memset_vaddr,
++			      desc->memset_paddr);
++		desc->memset_buffer = false;
++	}
+ }
+ 
+ /**
+@@ -508,26 +477,28 @@ static void atc_complete_all(struct at_dma_chan *atchan)
+  */
+ static void atc_advance_work(struct at_dma_chan *atchan)
+ {
++	struct at_desc *desc;
+ 	unsigned long flags;
+-	int ret;
+ 
+ 	dev_vdbg(chan2dev(&atchan->chan_common), "advance_work\n");
+ 
+ 	spin_lock_irqsave(&atchan->lock, flags);
+-	ret = atc_chan_is_enabled(atchan);
+-	spin_unlock_irqrestore(&atchan->lock, flags);
+-	if (ret)
+-		return;
+-
+-	if (list_empty(&atchan->active_list) ||
+-	    list_is_singular(&atchan->active_list))
+-		return atc_complete_all(atchan);
++	if (atc_chan_is_enabled(atchan) || list_empty(&atchan->active_list))
++		return spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+-	atc_chain_complete(atchan, atc_first_active(atchan));
++	desc = atc_first_active(atchan);
++	/* Remove the transfer node from the active list. */
++	list_del_init(&desc->desc_node);
++	spin_unlock_irqrestore(&atchan->lock, flags);
++	atc_chain_complete(atchan, desc);
+ 
+ 	/* advance work */
+ 	spin_lock_irqsave(&atchan->lock, flags);
+-	atc_dostart(atchan, atc_first_active(atchan));
++	if (!list_empty(&atchan->active_list)) {
++		desc = atc_first_queued(atchan);
++		list_move_tail(&desc->desc_node, &atchan->active_list);
++		atc_dostart(atchan, desc);
++	}
+ 	spin_unlock_irqrestore(&atchan->lock, flags);
+ }
+ 
+@@ -539,6 +510,7 @@ static void atc_advance_work(struct at_dma_chan *atchan)
+ static void atc_handle_error(struct at_dma_chan *atchan)
+ {
+ 	struct at_desc *bad_desc;
++	struct at_desc *desc;
+ 	struct at_desc *child;
+ 	unsigned long flags;
+ 
+@@ -551,13 +523,12 @@ static void atc_handle_error(struct at_dma_chan *atchan)
+ 	bad_desc = atc_first_active(atchan);
+ 	list_del_init(&bad_desc->desc_node);
+ 
+-	/* As we are stopped, take advantage to push queued descriptors
+-	 * in active_list */
+-	list_splice_init(&atchan->queue, atchan->active_list.prev);
+-
+ 	/* Try to restart the controller */
+-	if (!list_empty(&atchan->active_list))
+-		atc_dostart(atchan, atc_first_active(atchan));
++	if (!list_empty(&atchan->active_list)) {
++		desc = atc_first_queued(atchan);
++		list_move_tail(&desc->desc_node, &atchan->active_list);
++		atc_dostart(atchan, desc);
++	}
+ 
+ 	/*
+ 	 * KERN_CRITICAL may seem harsh, but since this only happens
+@@ -672,19 +643,11 @@ static dma_cookie_t atc_tx_submit(struct dma_async_tx_descriptor *tx)
+ 	spin_lock_irqsave(&atchan->lock, flags);
+ 	cookie = dma_cookie_assign(tx);
+ 
+-	if (list_empty(&atchan->active_list)) {
+-		dev_vdbg(chan2dev(tx->chan), "tx_submit: started %u\n",
+-				desc->txd.cookie);
+-		atc_dostart(atchan, desc);
+-		list_add_tail(&desc->desc_node, &atchan->active_list);
+-	} else {
+-		dev_vdbg(chan2dev(tx->chan), "tx_submit: queued %u\n",
+-				desc->txd.cookie);
+-		list_add_tail(&desc->desc_node, &atchan->queue);
+-	}
+-
++	list_add_tail(&desc->desc_node, &atchan->queue);
+ 	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
++	dev_vdbg(chan2dev(tx->chan), "tx_submit: queued %u\n",
++		 desc->txd.cookie);
+ 	return cookie;
+ }
+ 
+@@ -1418,11 +1381,8 @@ static int atc_terminate_all(struct dma_chan *chan)
+ 	struct at_dma_chan	*atchan = to_at_dma_chan(chan);
+ 	struct at_dma		*atdma = to_at_dma(chan->device);
+ 	int			chan_id = atchan->chan_common.chan_id;
+-	struct at_desc		*desc, *_desc;
+ 	unsigned long		flags;
+ 
+-	LIST_HEAD(list);
+-
+ 	dev_vdbg(chan2dev(chan), "%s\n", __func__);
+ 
+ 	/*
+@@ -1441,19 +1401,15 @@ static int atc_terminate_all(struct dma_chan *chan)
+ 		cpu_relax();
+ 
+ 	/* active_list entries will end up before queued entries */
+-	list_splice_init(&atchan->queue, &list);
+-	list_splice_init(&atchan->active_list, &list);
+-
+-	spin_unlock_irqrestore(&atchan->lock, flags);
+-
+-	/* Flush all pending and queued descriptors */
+-	list_for_each_entry_safe(desc, _desc, &list, desc_node)
+-		atc_chain_complete(atchan, desc);
++	list_splice_tail_init(&atchan->queue, &atchan->free_list);
++	list_splice_tail_init(&atchan->active_list, &atchan->free_list);
+ 
+ 	clear_bit(ATC_IS_PAUSED, &atchan->status);
+ 	/* if channel dedicated to cyclic operations, free it */
+ 	clear_bit(ATC_IS_CYCLIC, &atchan->status);
+ 
++	spin_unlock_irqrestore(&atchan->lock, flags);
++
+ 	return 0;
+ }
+ 
+@@ -1508,20 +1464,26 @@ atc_tx_status(struct dma_chan *chan,
+ }
+ 
+ /**
+- * atc_issue_pending - try to finish work
++ * atc_issue_pending - takes the first transaction descriptor in the pending
++ * queue and starts the transfer.
+  * @chan: target DMA channel
+  */
+ static void atc_issue_pending(struct dma_chan *chan)
+ {
+-	struct at_dma_chan	*atchan = to_at_dma_chan(chan);
++	struct at_dma_chan *atchan = to_at_dma_chan(chan);
++	struct at_desc *desc;
++	unsigned long flags;
+ 
+ 	dev_vdbg(chan2dev(chan), "issue_pending\n");
+ 
+-	/* Not needed for cyclic transfers */
+-	if (atc_chan_is_cyclic(atchan))
+-		return;
++	spin_lock_irqsave(&atchan->lock, flags);
++	if (atc_chan_is_enabled(atchan) || list_empty(&atchan->queue))
++		return spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+-	atc_advance_work(atchan);
++	desc = atc_first_queued(atchan);
++	list_move_tail(&desc->desc_node, &atchan->active_list);
++	atc_dostart(atchan, desc);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ }
+ 
+ /**
+@@ -1939,7 +1901,11 @@ static int __init at_dma_probe(struct platform_device *pdev)
+ 	  dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask)  ? "slave " : "",
+ 	  plat_dat->nr_channels);
+ 
+-	dma_async_device_register(&atdma->dma_common);
++	err = dma_async_device_register(&atdma->dma_common);
++	if (err) {
++		dev_err(&pdev->dev, "Unable to register: %d.\n", err);
++		goto err_dma_async_device_register;
++	}
+ 
+ 	/*
+ 	 * Do not return an error if the dmac node is not present in order to
+@@ -1959,6 +1925,7 @@ static int __init at_dma_probe(struct platform_device *pdev)
+ 
+ err_of_dma_controller_register:
+ 	dma_async_device_unregister(&atdma->dma_common);
++err_dma_async_device_register:
+ 	dma_pool_destroy(atdma->memset_pool);
+ err_memset_pool_create:
+ 	dma_pool_destroy(atdma->dma_desc_pool);
+diff --git a/drivers/dma/at_hdmac_regs.h b/drivers/dma/at_hdmac_regs.h
+index 80fc2fe8c77ea..8dc82c7b1dcf0 100644
+--- a/drivers/dma/at_hdmac_regs.h
++++ b/drivers/dma/at_hdmac_regs.h
+@@ -164,13 +164,13 @@
+ /* LLI == Linked List Item; aka DMA buffer descriptor */
+ struct at_lli {
+ 	/* values that are not changed by hardware */
+-	dma_addr_t	saddr;
+-	dma_addr_t	daddr;
++	u32 saddr;
++	u32 daddr;
+ 	/* value that may get written back: */
+-	u32		ctrla;
++	u32 ctrla;
+ 	/* more values that are not changed by hardware */
+-	u32		ctrlb;
+-	dma_addr_t	dscr;	/* chain to next lli */
++	u32 ctrlb;
++	u32 dscr;	/* chain to next lli */
+ };
+ 
+ /**
+diff --git a/drivers/dma/mv_xor_v2.c b/drivers/dma/mv_xor_v2.c
+index 9b0d463f89bbd..4800c596433ad 100644
+--- a/drivers/dma/mv_xor_v2.c
++++ b/drivers/dma/mv_xor_v2.c
+@@ -899,6 +899,7 @@ static int mv_xor_v2_remove(struct platform_device *pdev)
+ 	tasklet_kill(&xor_dev->irq_tasklet);
+ 
+ 	clk_disable_unprepare(xor_dev->clk);
++	clk_disable_unprepare(xor_dev->reg_clk);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/dma/pxa_dma.c b/drivers/dma/pxa_dma.c
+index b4ef4f19f7dec..68d9d60c051d9 100644
+--- a/drivers/dma/pxa_dma.c
++++ b/drivers/dma/pxa_dma.c
+@@ -1249,14 +1249,14 @@ static int pxad_init_phys(struct platform_device *op,
+ 		return -ENOMEM;
+ 
+ 	for (i = 0; i < nb_phy_chans; i++)
+-		if (platform_get_irq(op, i) > 0)
++		if (platform_get_irq_optional(op, i) > 0)
+ 			nr_irq++;
+ 
+ 	for (i = 0; i < nb_phy_chans; i++) {
+ 		phy = &pdev->phys[i];
+ 		phy->base = pdev->base;
+ 		phy->idx = i;
+-		irq = platform_get_irq(op, i);
++		irq = platform_get_irq_optional(op, i);
+ 		if ((nr_irq > 1) && (irq > 0))
+ 			ret = devm_request_irq(&op->dev, irq,
+ 					       pxad_chan_handler,
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+index 8dd295dbe2416..dd35d3d7ad03d 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+@@ -36,13 +36,13 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
+ 		goto err_unpin_pages;
+ 	}
+ 
+-	ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL);
++	ret = sg_alloc_table(st, obj->mm.pages->orig_nents, GFP_KERNEL);
+ 	if (ret)
+ 		goto err_free;
+ 
+ 	src = obj->mm.pages->sgl;
+ 	dst = st->sgl;
+-	for (i = 0; i < obj->mm.pages->nents; i++) {
++	for (i = 0; i < obj->mm.pages->orig_nents; i++) {
+ 		sg_set_page(dst, sg_page(src), src->length, 0);
+ 		dst = sg_next(dst);
+ 		src = sg_next(src);
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.c b/drivers/gpu/drm/vc4/vc4_drv.c
+index 52426bc8edb8b..888aec1bbeee6 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.c
++++ b/drivers/gpu/drm/vc4/vc4_drv.c
+@@ -404,7 +404,12 @@ static int __init vc4_drm_register(void)
+ 	if (ret)
+ 		return ret;
+ 
+-	return platform_driver_register(&vc4_platform_driver);
++	ret = platform_driver_register(&vc4_platform_driver);
++	if (ret)
++		platform_unregister_drivers(component_drivers,
++					    ARRAY_SIZE(component_drivers));
++
++	return ret;
+ }
+ 
+ static void __exit vc4_drm_unregister(void)
+diff --git a/drivers/hid/hid-hyperv.c b/drivers/hid/hid-hyperv.c
+index 978ee2aab2d40..b7704dd6809dc 100644
+--- a/drivers/hid/hid-hyperv.c
++++ b/drivers/hid/hid-hyperv.c
+@@ -498,7 +498,7 @@ static int mousevsc_probe(struct hv_device *device,
+ 
+ 	ret = hid_add_device(hid_dev);
+ 	if (ret)
+-		goto probe_err1;
++		goto probe_err2;
+ 
+ 
+ 	ret = hid_parse(hid_dev);
+diff --git a/drivers/hwspinlock/qcom_hwspinlock.c b/drivers/hwspinlock/qcom_hwspinlock.c
+index 3647109666658..e499146648639 100644
+--- a/drivers/hwspinlock/qcom_hwspinlock.c
++++ b/drivers/hwspinlock/qcom_hwspinlock.c
+@@ -105,7 +105,7 @@ static const struct regmap_config tcsr_mutex_config = {
+ 	.reg_bits		= 32,
+ 	.reg_stride		= 4,
+ 	.val_bits		= 32,
+-	.max_register		= 0x40000,
++	.max_register		= 0x20000,
+ 	.fast_io		= true,
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci-cqhci.h b/drivers/mmc/host/sdhci-cqhci.h
+new file mode 100644
+index 0000000000000..cf8e7ba71bbd7
+--- /dev/null
++++ b/drivers/mmc/host/sdhci-cqhci.h
+@@ -0,0 +1,24 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright 2022 The Chromium OS Authors
++ *
++ * Support that applies to the combination of SDHCI and CQHCI, while not
++ * expressing a dependency between the two modules.
++ */
++
++#ifndef __MMC_HOST_SDHCI_CQHCI_H__
++#define __MMC_HOST_SDHCI_CQHCI_H__
++
++#include "cqhci.h"
++#include "sdhci.h"
++
++static inline void sdhci_and_cqhci_reset(struct sdhci_host *host, u8 mask)
++{
++	if ((host->mmc->caps2 & MMC_CAP2_CQE) && (mask & SDHCI_RESET_ALL) &&
++	    host->mmc->cqe_private)
++		cqhci_deactivate(host->mmc);
++
++	sdhci_reset(host, mask);
++}
++
++#endif /* __MMC_HOST_SDHCI_CQHCI_H__ */
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index be4e5cdda1fa3..449562122adc1 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -26,6 +26,7 @@
+ #include <linux/pinctrl/consumer.h>
+ #include <linux/platform_data/mmc-esdhc-imx.h>
+ #include <linux/pm_runtime.h>
++#include "sdhci-cqhci.h"
+ #include "sdhci-pltfm.h"
+ #include "sdhci-esdhc.h"
+ #include "cqhci.h"
+@@ -294,22 +295,6 @@ struct pltfm_imx_data {
+ 	struct pm_qos_request pm_qos_req;
+ };
+ 
+-static const struct platform_device_id imx_esdhc_devtype[] = {
+-	{
+-		.name = "sdhci-esdhc-imx25",
+-		.driver_data = (kernel_ulong_t) &esdhc_imx25_data,
+-	}, {
+-		.name = "sdhci-esdhc-imx35",
+-		.driver_data = (kernel_ulong_t) &esdhc_imx35_data,
+-	}, {
+-		.name = "sdhci-esdhc-imx51",
+-		.driver_data = (kernel_ulong_t) &esdhc_imx51_data,
+-	}, {
+-		/* sentinel */
+-	}
+-};
+-MODULE_DEVICE_TABLE(platform, imx_esdhc_devtype);
+-
+ static const struct of_device_id imx_esdhc_dt_ids[] = {
+ 	{ .compatible = "fsl,imx25-esdhc", .data = &esdhc_imx25_data, },
+ 	{ .compatible = "fsl,imx35-esdhc", .data = &esdhc_imx35_data, },
+@@ -1243,7 +1228,7 @@ static void esdhc_set_uhs_signaling(struct sdhci_host *host, unsigned timing)
+ 
+ static void esdhc_reset(struct sdhci_host *host, u8 mask)
+ {
+-	sdhci_reset(host, mask);
++	sdhci_and_cqhci_reset(host, mask);
+ 
+ 	sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
+ 	sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
+@@ -1545,72 +1530,6 @@ sdhci_esdhc_imx_probe_dt(struct platform_device *pdev,
+ }
+ #endif
+ 
+-static int sdhci_esdhc_imx_probe_nondt(struct platform_device *pdev,
+-			 struct sdhci_host *host,
+-			 struct pltfm_imx_data *imx_data)
+-{
+-	struct esdhc_platform_data *boarddata = &imx_data->boarddata;
+-	int err;
+-
+-	if (!host->mmc->parent->platform_data) {
+-		dev_err(mmc_dev(host->mmc), "no board data!\n");
+-		return -EINVAL;
+-	}
+-
+-	imx_data->boarddata = *((struct esdhc_platform_data *)
+-				host->mmc->parent->platform_data);
+-	/* write_protect */
+-	if (boarddata->wp_type == ESDHC_WP_GPIO) {
+-		host->mmc->caps2 |= MMC_CAP2_RO_ACTIVE_HIGH;
+-
+-		err = mmc_gpiod_request_ro(host->mmc, "wp", 0, 0);
+-		if (err) {
+-			dev_err(mmc_dev(host->mmc),
+-				"failed to request write-protect gpio!\n");
+-			return err;
+-		}
+-	}
+-
+-	/* card_detect */
+-	switch (boarddata->cd_type) {
+-	case ESDHC_CD_GPIO:
+-		err = mmc_gpiod_request_cd(host->mmc, "cd", 0, false, 0);
+-		if (err) {
+-			dev_err(mmc_dev(host->mmc),
+-				"failed to request card-detect gpio!\n");
+-			return err;
+-		}
+-		fallthrough;
+-
+-	case ESDHC_CD_CONTROLLER:
+-		/* we have a working card_detect back */
+-		host->quirks &= ~SDHCI_QUIRK_BROKEN_CARD_DETECTION;
+-		break;
+-
+-	case ESDHC_CD_PERMANENT:
+-		host->mmc->caps |= MMC_CAP_NONREMOVABLE;
+-		break;
+-
+-	case ESDHC_CD_NONE:
+-		break;
+-	}
+-
+-	switch (boarddata->max_bus_width) {
+-	case 8:
+-		host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_4_BIT_DATA;
+-		break;
+-	case 4:
+-		host->mmc->caps |= MMC_CAP_4_BIT_DATA;
+-		break;
+-	case 1:
+-	default:
+-		host->quirks |= SDHCI_QUIRK_FORCE_1_BIT_DATA;
+-		break;
+-	}
+-
+-	return 0;
+-}
+-
+ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
+ {
+ 	const struct of_device_id *of_id =
+@@ -1630,8 +1549,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
+ 
+ 	imx_data = sdhci_pltfm_priv(pltfm_host);
+ 
+-	imx_data->socdata = of_id ? of_id->data : (struct esdhc_soc_data *)
+-						  pdev->id_entry->driver_data;
++	imx_data->socdata = of_id->data;
+ 
+ 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
+ 		cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);
+@@ -1943,7 +1861,6 @@ static struct platform_driver sdhci_esdhc_imx_driver = {
+ 		.of_match_table = imx_esdhc_dt_ids,
+ 		.pm	= &sdhci_esdhc_pmops,
+ 	},
+-	.id_table	= imx_esdhc_devtype,
+ 	.probe		= sdhci_esdhc_imx_probe,
+ 	.remove		= sdhci_esdhc_imx_remove,
+ };
+diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
+index fc38db64a6b48..9da49dc152489 100644
+--- a/drivers/mmc/host/sdhci-of-arasan.c
++++ b/drivers/mmc/host/sdhci-of-arasan.c
+@@ -25,6 +25,7 @@
+ #include <linux/firmware/xlnx-zynqmp.h>
+ 
+ #include "cqhci.h"
++#include "sdhci-cqhci.h"
+ #include "sdhci-pltfm.h"
+ 
+ #define SDHCI_ARASAN_VENDOR_REGISTER	0x78
+@@ -359,7 +360,7 @@ static void sdhci_arasan_reset(struct sdhci_host *host, u8 mask)
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host);
+ 
+-	sdhci_reset(host, mask);
++	sdhci_and_cqhci_reset(host, mask);
+ 
+ 	if (sdhci_arasan->quirks & SDHCI_ARASAN_QUIRK_FORCE_CDTEST) {
+ 		ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 67211fc42d242..d8fd2b5efd387 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -24,6 +24,7 @@
+ #include <linux/gpio/consumer.h>
+ #include <linux/ktime.h>
+ 
++#include "sdhci-cqhci.h"
+ #include "sdhci-pltfm.h"
+ #include "cqhci.h"
+ 
+@@ -361,7 +362,7 @@ static void tegra_sdhci_reset(struct sdhci_host *host, u8 mask)
+ 	const struct sdhci_tegra_soc_data *soc_data = tegra_host->soc_data;
+ 	u32 misc_ctrl, clk_ctrl, pad_ctrl;
+ 
+-	sdhci_reset(host, mask);
++	sdhci_and_cqhci_reset(host, mask);
+ 
+ 	if (!(mask & SDHCI_RESET_ALL))
+ 		return;
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index 7cab9d831afb3..24cd6d3dc6477 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -15,6 +15,7 @@
+ #include <linux/sys_soc.h>
+ 
+ #include "cqhci.h"
++#include "sdhci-cqhci.h"
+ #include "sdhci-pltfm.h"
+ 
+ /* CTL_CFG Registers */
+@@ -378,7 +379,7 @@ static void sdhci_am654_reset(struct sdhci_host *host, u8 mask)
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host);
+ 
+-	sdhci_reset(host, mask);
++	sdhci_and_cqhci_reset(host, mask);
+ 
+ 	if (sdhci_am654->quirks & SDHCI_AM654_QUIRK_FORCE_CDTEST) {
+ 		ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
+@@ -464,7 +465,7 @@ static struct sdhci_ops sdhci_am654_ops = {
+ 	.set_clock = sdhci_am654_set_clock,
+ 	.write_b = sdhci_am654_write_b,
+ 	.irq = sdhci_am654_cqhci_irq,
+-	.reset = sdhci_reset,
++	.reset = sdhci_and_cqhci_reset,
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_am654_pdata = {
+@@ -494,7 +495,7 @@ static struct sdhci_ops sdhci_j721e_8bit_ops = {
+ 	.set_clock = sdhci_am654_set_clock,
+ 	.write_b = sdhci_am654_write_b,
+ 	.irq = sdhci_am654_cqhci_irq,
+-	.reset = sdhci_reset,
++	.reset = sdhci_and_cqhci_reset,
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_j721e_8bit_pdata = {
+diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+index 78c7cbc372b05..71151f675a498 100644
+--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
++++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+@@ -1004,8 +1004,10 @@ static int xgene_enet_open(struct net_device *ndev)
+ 
+ 	xgene_enet_napi_enable(pdata);
+ 	ret = xgene_enet_register_irq(ndev);
+-	if (ret)
++	if (ret) {
++		xgene_enet_napi_disable(pdata);
+ 		return ret;
++	}
+ 
+ 	if (ndev->phydev) {
+ 		phy_start(ndev->phydev);
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
+index 7c6e0811f2e63..ee823a18294cd 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
+@@ -585,6 +585,7 @@ static int aq_update_txsa(struct aq_nic_s *nic, const unsigned int sc_idx,
+ 
+ 	ret = aq_mss_set_egress_sakey_record(hw, &key_rec, sa_idx);
+ 
++	memzero_explicit(&key_rec, sizeof(key_rec));
+ 	return ret;
+ }
+ 
+@@ -932,6 +933,7 @@ static int aq_update_rxsa(struct aq_nic_s *nic, const unsigned int sc_idx,
+ 
+ 	ret = aq_mss_set_ingress_sakey_record(hw, &sa_key_record, sa_idx);
+ 
++	memzero_explicit(&sa_key_record, sizeof(sa_key_record));
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c
+index 36c7cf05630a1..4319249595207 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c
++++ b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c
+@@ -757,6 +757,7 @@ set_ingress_sakey_record(struct aq_hw_s *hw,
+ 			 u16 table_index)
+ {
+ 	u16 packed_record[18];
++	int ret;
+ 
+ 	if (table_index >= NUMROWS_INGRESSSAKEYRECORD)
+ 		return -EINVAL;
+@@ -789,9 +790,12 @@ set_ingress_sakey_record(struct aq_hw_s *hw,
+ 
+ 	packed_record[16] = rec->key_len & 0x3;
+ 
+-	return set_raw_ingress_record(hw, packed_record, 18, 2,
+-				      ROWOFFSET_INGRESSSAKEYRECORD +
+-					      table_index);
++	ret = set_raw_ingress_record(hw, packed_record, 18, 2,
++				     ROWOFFSET_INGRESSSAKEYRECORD +
++				     table_index);
++
++	memzero_explicit(packed_record, sizeof(packed_record));
++	return ret;
+ }
+ 
+ int aq_mss_set_ingress_sakey_record(struct aq_hw_s *hw,
+@@ -1739,14 +1743,14 @@ static int set_egress_sakey_record(struct aq_hw_s *hw,
+ 	ret = set_raw_egress_record(hw, packed_record, 8, 2,
+ 				    ROWOFFSET_EGRESSSAKEYRECORD + table_index);
+ 	if (unlikely(ret))
+-		return ret;
++		goto clear_key;
+ 	ret = set_raw_egress_record(hw, packed_record + 8, 8, 2,
+ 				    ROWOFFSET_EGRESSSAKEYRECORD + table_index -
+ 					    32);
+-	if (unlikely(ret))
+-		return ret;
+ 
+-	return 0;
++clear_key:
++	memzero_explicit(packed_record, sizeof(packed_record));
++	return ret;
+ }
+ 
+ int aq_mss_set_egress_sakey_record(struct aq_hw_s *hw,
+diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig
+index 7b79528d6eed2..2b6d929d462f4 100644
+--- a/drivers/net/ethernet/broadcom/Kconfig
++++ b/drivers/net/ethernet/broadcom/Kconfig
+@@ -69,7 +69,7 @@ config BCMGENET
+ 	select BCM7XXX_PHY
+ 	select MDIO_BCM_UNIMAC
+ 	select DIMLIB
+-	select BROADCOM_PHY if ARCH_BCM2835
++	select BROADCOM_PHY if (ARCH_BCM2835 && PTP_1588_CLOCK_OPTIONAL)
+ 	help
+ 	  This driver supports the built-in Ethernet MACs found in the
+ 	  Broadcom BCM7xxx Set Top Box family chipset.
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index b818d5f342d53..8311473d537bd 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -12008,8 +12008,8 @@ static int bnxt_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
+ 	rcu_read_lock();
+ 	hlist_for_each_entry_rcu(fltr, head, hash) {
+ 		if (bnxt_fltr_match(fltr, new_fltr)) {
++			rc = fltr->sw_id;
+ 			rcu_read_unlock();
+-			rc = 0;
+ 			goto err_free;
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index f8f7756195205..81b63d1c2391f 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -125,7 +125,7 @@ static int bnxt_set_coalesce(struct net_device *dev,
+ 	}
+ 
+ reset_coalesce:
+-	if (netif_running(dev)) {
++	if (test_bit(BNXT_STATE_OPEN, &bp->state)) {
+ 		if (update_stats) {
+ 			rc = bnxt_close_nic(bp, true, false);
+ 			if (!rc)
+diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+index 84ad7261e243a..8a167eea288c3 100644
+--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+@@ -1302,6 +1302,7 @@ static int cxgb_up(struct adapter *adap)
+ 		if (ret < 0) {
+ 			CH_ERR(adap, "failed to bind qsets, err %d\n", ret);
+ 			t3_intr_disable(adap);
++			quiesce_rx(adap);
+ 			free_irq_resources(adap);
+ 			err = ret;
+ 			goto out;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
+index 2820a0bb971bf..5e1e46425014c 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
+@@ -858,7 +858,7 @@ static int cxgb4vf_open(struct net_device *dev)
+ 	 */
+ 	err = t4vf_update_port_info(pi);
+ 	if (err < 0)
+-		return err;
++		goto err_unwind;
+ 
+ 	/*
+ 	 * Note that this interface is up and start everything up ...
+diff --git a/drivers/net/ethernet/freescale/fman/mac.c b/drivers/net/ethernet/freescale/fman/mac.c
+index 6eeccc11b76ef..3312dc4083a08 100644
+--- a/drivers/net/ethernet/freescale/fman/mac.c
++++ b/drivers/net/ethernet/freescale/fman/mac.c
+@@ -884,12 +884,21 @@ _return:
+ 	return err;
+ }
+ 
++static int mac_remove(struct platform_device *pdev)
++{
++	struct mac_device *mac_dev = platform_get_drvdata(pdev);
++
++	platform_device_unregister(mac_dev->priv->eth_dev);
++	return 0;
++}
++
+ static struct platform_driver mac_driver = {
+ 	.driver = {
+ 		.name		= KBUILD_MODNAME,
+ 		.of_match_table	= mac_match,
+ 	},
+ 	.probe		= mac_probe,
++	.remove		= mac_remove,
+ };
+ 
+ builtin_platform_driver(mac_driver);
+diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c
+index 90e6111ce534d..735b76effc492 100644
+--- a/drivers/net/ethernet/marvell/mv643xx_eth.c
++++ b/drivers/net/ethernet/marvell/mv643xx_eth.c
+@@ -2471,6 +2471,7 @@ out_free:
+ 	for (i = 0; i < mp->rxq_count; i++)
+ 		rxq_deinit(mp->rxq + i);
+ out:
++	napi_disable(&mp->napi);
+ 	free_irq(dev->irq, dev);
+ 
+ 	return err;
+diff --git a/drivers/net/ethernet/marvell/prestera/prestera_rxtx.c b/drivers/net/ethernet/marvell/prestera/prestera_rxtx.c
+index 2a13c318048cc..59a3ea02b8add 100644
+--- a/drivers/net/ethernet/marvell/prestera/prestera_rxtx.c
++++ b/drivers/net/ethernet/marvell/prestera/prestera_rxtx.c
+@@ -771,6 +771,7 @@ tx_done:
+ int prestera_rxtx_switch_init(struct prestera_switch *sw)
+ {
+ 	struct prestera_rxtx *rxtx;
++	int err;
+ 
+ 	rxtx = kzalloc(sizeof(*rxtx), GFP_KERNEL);
+ 	if (!rxtx)
+@@ -778,7 +779,11 @@ int prestera_rxtx_switch_init(struct prestera_switch *sw)
+ 
+ 	sw->rxtx = rxtx;
+ 
+-	return prestera_sdma_switch_init(sw);
++	err = prestera_sdma_switch_init(sw);
++	if (err)
++		kfree(rxtx);
++
++	return err;
+ }
+ 
+ void prestera_rxtx_switch_fini(struct prestera_switch *sw)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 6612b2c0be486..cf07318048df1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -1687,12 +1687,17 @@ void mlx5_cmd_flush(struct mlx5_core_dev *dev)
+ 	struct mlx5_cmd *cmd = &dev->cmd;
+ 	int i;
+ 
+-	for (i = 0; i < cmd->max_reg_cmds; i++)
+-		while (down_trylock(&cmd->sem))
++	for (i = 0; i < cmd->max_reg_cmds; i++) {
++		while (down_trylock(&cmd->sem)) {
+ 			mlx5_cmd_trigger_completions(dev);
++			cond_resched();
++		}
++	}
+ 
+-	while (down_trylock(&cmd->pages_sem))
++	while (down_trylock(&cmd->pages_sem)) {
+ 		mlx5_cmd_trigger_completions(dev);
++		cond_resched();
++	}
+ 
+ 	/* Unlock cmdif */
+ 	up(&cmd->pages_sem);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+index 1cbb330b9f42b..6c865cb7f445d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+@@ -30,9 +30,9 @@ mlx5_eswitch_termtbl_hash(struct mlx5_flow_act *flow_act,
+ 		     sizeof(dest->vport.num), hash);
+ 	hash = jhash((const void *)&dest->vport.vhca_id,
+ 		     sizeof(dest->vport.num), hash);
+-	if (dest->vport.pkt_reformat)
+-		hash = jhash(dest->vport.pkt_reformat,
+-			     sizeof(*dest->vport.pkt_reformat),
++	if (flow_act->pkt_reformat)
++		hash = jhash(flow_act->pkt_reformat,
++			     sizeof(*flow_act->pkt_reformat),
+ 			     hash);
+ 	return hash;
+ }
+@@ -53,9 +53,11 @@ mlx5_eswitch_termtbl_cmp(struct mlx5_flow_act *flow_act1,
+ 	if (ret)
+ 		return ret;
+ 
+-	return dest1->vport.pkt_reformat && dest2->vport.pkt_reformat ?
+-	       memcmp(dest1->vport.pkt_reformat, dest2->vport.pkt_reformat,
+-		      sizeof(*dest1->vport.pkt_reformat)) : 0;
++	if (flow_act1->pkt_reformat && flow_act2->pkt_reformat)
++		return memcmp(flow_act1->pkt_reformat, flow_act2->pkt_reformat,
++			      sizeof(*flow_act1->pkt_reformat));
++
++	return !(flow_act1->pkt_reformat == flow_act2->pkt_reformat);
+ }
+ 
+ static int
+diff --git a/drivers/net/ethernet/neterion/s2io.c b/drivers/net/ethernet/neterion/s2io.c
+index 3cae8449fadb7..8a30be698f992 100644
+--- a/drivers/net/ethernet/neterion/s2io.c
++++ b/drivers/net/ethernet/neterion/s2io.c
+@@ -7114,9 +7114,8 @@ static int s2io_card_up(struct s2io_nic *sp)
+ 		if (ret) {
+ 			DBG_PRINT(ERR_DBG, "%s: Out of memory in Open\n",
+ 				  dev->name);
+-			s2io_reset(sp);
+-			free_rx_buffers(sp);
+-			return -ENOMEM;
++			ret = -ENOMEM;
++			goto err_fill_buff;
+ 		}
+ 		DBG_PRINT(INFO_DBG, "Buf in ring:%d is %d:\n", i,
+ 			  ring->rx_bufs_left);
+@@ -7154,18 +7153,16 @@ static int s2io_card_up(struct s2io_nic *sp)
+ 	/* Enable Rx Traffic and interrupts on the NIC */
+ 	if (start_nic(sp)) {
+ 		DBG_PRINT(ERR_DBG, "%s: Starting NIC failed\n", dev->name);
+-		s2io_reset(sp);
+-		free_rx_buffers(sp);
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err_out;
+ 	}
+ 
+ 	/* Add interrupt service routine */
+ 	if (s2io_add_isr(sp) != 0) {
+ 		if (sp->config.intr_type == MSI_X)
+ 			s2io_rem_isr(sp);
+-		s2io_reset(sp);
+-		free_rx_buffers(sp);
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err_out;
+ 	}
+ 
+ 	timer_setup(&sp->alarm_timer, s2io_alarm_handle, 0);
+@@ -7185,6 +7182,20 @@ static int s2io_card_up(struct s2io_nic *sp)
+ 	}
+ 
+ 	return 0;
++
++err_out:
++	if (config->napi) {
++		if (config->intr_type == MSI_X) {
++			for (i = 0; i < sp->config.rx_ring_num; i++)
++				napi_disable(&sp->mac_control.rings[i].napi);
++		} else {
++			napi_disable(&sp->napi);
++		}
++	}
++err_fill_buff:
++	s2io_reset(sp);
++	free_rx_buffers(sp);
++	return ret;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/ni/nixge.c b/drivers/net/ethernet/ni/nixge.c
+index a6861df9904f9..9c48fd85c418a 100644
+--- a/drivers/net/ethernet/ni/nixge.c
++++ b/drivers/net/ethernet/ni/nixge.c
+@@ -899,6 +899,7 @@ static int nixge_open(struct net_device *ndev)
+ err_rx_irq:
+ 	free_irq(priv->tx_irq, ndev);
+ err_tx_irq:
++	napi_disable(&priv->napi);
+ 	phy_stop(phy);
+ 	phy_disconnect(phy);
+ 	tasklet_kill(&priv->dma_err_tasklet);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+index 752658ec7beeb..50ef68497bce9 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+@@ -261,11 +261,9 @@ static int meson8b_devm_clk_prepare_enable(struct meson8b_dwmac *dwmac,
+ 	if (ret)
+ 		return ret;
+ 
+-	devm_add_action_or_reset(dwmac->dev,
+-				 (void(*)(void *))clk_disable_unprepare,
+-				 dwmac->rgmii_tx_clk);
+-
+-	return 0;
++	return devm_add_action_or_reset(dwmac->dev,
++					(void(*)(void *))clk_disable_unprepare,
++					clk);
+ }
+ 
+ static int meson8b_init_prg_eth(struct meson8b_dwmac *dwmac)
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index b0f00b4edd949..5af0f9f8c0975 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -864,6 +864,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
+ 
+ err_cleanup:
+ 	if (!cpsw->usage_count) {
++		napi_disable(&cpsw->napi_rx);
++		napi_disable(&cpsw->napi_tx);
+ 		cpdma_ctlr_stop(cpsw->dma);
+ 		cpsw_destroy_xdp_rxqs(cpsw);
+ 	}
+diff --git a/drivers/net/ethernet/tundra/tsi108_eth.c b/drivers/net/ethernet/tundra/tsi108_eth.c
+index c62f474b6d08e..fcebd2418dbd3 100644
+--- a/drivers/net/ethernet/tundra/tsi108_eth.c
++++ b/drivers/net/ethernet/tundra/tsi108_eth.c
+@@ -1302,12 +1302,15 @@ static int tsi108_open(struct net_device *dev)
+ 
+ 	data->rxring = dma_alloc_coherent(&data->pdev->dev, rxring_size,
+ 					  &data->rxdma, GFP_KERNEL);
+-	if (!data->rxring)
++	if (!data->rxring) {
++		free_irq(data->irq_num, dev);
+ 		return -ENOMEM;
++	}
+ 
+ 	data->txring = dma_alloc_coherent(&data->pdev->dev, txring_size,
+ 					  &data->txdma, GFP_KERNEL);
+ 	if (!data->txring) {
++		free_irq(data->irq_num, dev);
+ 		dma_free_coherent(&data->pdev->dev, rxring_size, data->rxring,
+ 				    data->rxdma);
+ 		return -ENOMEM;
+diff --git a/drivers/net/hamradio/bpqether.c b/drivers/net/hamradio/bpqether.c
+index 1ad6085994b1c..5c17c92add8ab 100644
+--- a/drivers/net/hamradio/bpqether.c
++++ b/drivers/net/hamradio/bpqether.c
+@@ -533,7 +533,7 @@ static int bpq_device_event(struct notifier_block *this,
+ 	if (!net_eq(dev_net(dev), &init_net))
+ 		return NOTIFY_DONE;
+ 
+-	if (!dev_is_ethdev(dev))
++	if (!dev_is_ethdev(dev) && !bpq_get_ax25_dev(dev))
+ 		return NOTIFY_DONE;
+ 
+ 	switch (event) {
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 70c5905a916b9..f84e3cc0d3ec2 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -1390,7 +1390,8 @@ static struct macsec_rx_sc *del_rx_sc(struct macsec_secy *secy, sci_t sci)
+ 	return NULL;
+ }
+ 
+-static struct macsec_rx_sc *create_rx_sc(struct net_device *dev, sci_t sci)
++static struct macsec_rx_sc *create_rx_sc(struct net_device *dev, sci_t sci,
++					 bool active)
+ {
+ 	struct macsec_rx_sc *rx_sc;
+ 	struct macsec_dev *macsec;
+@@ -1414,7 +1415,7 @@ static struct macsec_rx_sc *create_rx_sc(struct net_device *dev, sci_t sci)
+ 	}
+ 
+ 	rx_sc->sci = sci;
+-	rx_sc->active = true;
++	rx_sc->active = active;
+ 	refcount_set(&rx_sc->refcnt, 1);
+ 
+ 	secy = &macsec_priv(dev)->secy;
+@@ -1823,6 +1824,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
+ 		       secy->key_len);
+ 
+ 		err = macsec_offload(ops->mdo_add_rxsa, &ctx);
++		memzero_explicit(ctx.sa.key, secy->key_len);
+ 		if (err)
+ 			goto cleanup;
+ 	}
+@@ -1867,7 +1869,7 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
+ 	struct macsec_rx_sc *rx_sc;
+ 	struct nlattr *tb_rxsc[MACSEC_RXSC_ATTR_MAX + 1];
+ 	struct macsec_secy *secy;
+-	bool was_active;
++	bool active = true;
+ 	int ret;
+ 
+ 	if (!attrs[MACSEC_ATTR_IFINDEX])
+@@ -1889,16 +1891,15 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
+ 	secy = &macsec_priv(dev)->secy;
+ 	sci = nla_get_sci(tb_rxsc[MACSEC_RXSC_ATTR_SCI]);
+ 
+-	rx_sc = create_rx_sc(dev, sci);
++	if (tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE])
++		active = nla_get_u8(tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE]);
++
++	rx_sc = create_rx_sc(dev, sci, active);
+ 	if (IS_ERR(rx_sc)) {
+ 		rtnl_unlock();
+ 		return PTR_ERR(rx_sc);
+ 	}
+ 
+-	was_active = rx_sc->active;
+-	if (tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE])
+-		rx_sc->active = !!nla_get_u8(tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE]);
+-
+ 	if (macsec_is_offloaded(netdev_priv(dev))) {
+ 		const struct macsec_ops *ops;
+ 		struct macsec_context ctx;
+@@ -1922,7 +1923,8 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
+ 	return 0;
+ 
+ cleanup:
+-	rx_sc->active = was_active;
++	del_rx_sc(secy, sci);
++	free_rx_sc(rx_sc);
+ 	rtnl_unlock();
+ 	return ret;
+ }
+@@ -2065,6 +2067,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
+ 		       secy->key_len);
+ 
+ 		err = macsec_offload(ops->mdo_add_txsa, &ctx);
++		memzero_explicit(ctx.sa.key, secy->key_len);
+ 		if (err)
+ 			goto cleanup;
+ 	}
+@@ -2561,7 +2564,7 @@ static bool macsec_is_configured(struct macsec_dev *macsec)
+ 	struct macsec_tx_sc *tx_sc = &secy->tx_sc;
+ 	int i;
+ 
+-	if (secy->n_rx_sc > 0)
++	if (secy->rx_sc)
+ 		return true;
+ 
+ 	for (i = 0; i < MACSEC_NUM_AN; i++)
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index c8d803d3616c9..6b269a72388b8 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -1509,8 +1509,10 @@ destroy_macvlan_port:
+ 	/* the macvlan port may be freed by macvlan_uninit when fail to register.
+ 	 * so we destroy the macvlan port only when it's valid.
+ 	 */
+-	if (create && macvlan_port_get_rtnl(lowerdev))
++	if (create && macvlan_port_get_rtnl(lowerdev)) {
++		macvlan_flush_sources(port, vlan);
+ 		macvlan_port_destroy(port->dev);
++	}
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(macvlan_common_newlink);
+diff --git a/drivers/net/phy/mscc/mscc_macsec.c b/drivers/net/phy/mscc/mscc_macsec.c
+index b7b2521c73fb6..c00eef457b850 100644
+--- a/drivers/net/phy/mscc/mscc_macsec.c
++++ b/drivers/net/phy/mscc/mscc_macsec.c
+@@ -632,6 +632,7 @@ static void vsc8584_macsec_free_flow(struct vsc8531_private *priv,
+ 
+ 	list_del(&flow->list);
+ 	clear_bit(flow->index, bitmap);
++	memzero_explicit(flow->key, sizeof(flow->key));
+ 	kfree(flow);
+ }
+ 
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 0c09f8e9d3836..cb42fdbfeb326 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1986,17 +1986,25 @@ drop:
+ 					  skb_headlen(skb));
+ 
+ 		if (unlikely(headlen > skb_headlen(skb))) {
++			WARN_ON_ONCE(1);
++			err = -ENOMEM;
+ 			this_cpu_inc(tun->pcpu_stats->rx_dropped);
++napi_busy:
+ 			napi_free_frags(&tfile->napi);
+ 			rcu_read_unlock();
+ 			mutex_unlock(&tfile->napi_mutex);
+-			WARN_ON(1);
+-			return -ENOMEM;
++			return err;
+ 		}
+ 
+-		local_bh_disable();
+-		napi_gro_frags(&tfile->napi);
+-		local_bh_enable();
++		if (likely(napi_schedule_prep(&tfile->napi))) {
++			local_bh_disable();
++			napi_gro_frags(&tfile->napi);
++			napi_complete(&tfile->napi);
++			local_bh_enable();
++		} else {
++			err = -EBUSY;
++			goto napi_busy;
++		}
+ 		mutex_unlock(&tfile->napi_mutex);
+ 	} else if (tfile->napi_enabled) {
+ 		struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index f6562a343cb4e..b965eb6a4bb17 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -403,7 +403,7 @@ static int lapbeth_device_event(struct notifier_block *this,
+ 	if (dev_net(dev) != &init_net)
+ 		return NOTIFY_DONE;
+ 
+-	if (!dev_is_ethdev(dev))
++	if (!dev_is_ethdev(dev) && !lapbeth_get_x25_dev(dev))
+ 		return NOTIFY_DONE;
+ 
+ 	switch (event) {
+diff --git a/drivers/phy/st/phy-stm32-usbphyc.c b/drivers/phy/st/phy-stm32-usbphyc.c
+index 2b3639cba51aa..03fc567e9f188 100644
+--- a/drivers/phy/st/phy-stm32-usbphyc.c
++++ b/drivers/phy/st/phy-stm32-usbphyc.c
+@@ -393,6 +393,8 @@ static int stm32_usbphyc_probe(struct platform_device *pdev)
+ 		ret = of_property_read_u32(child, "reg", &index);
+ 		if (ret || index > usbphyc->nphys) {
+ 			dev_err(&phy->dev, "invalid reg property: %d\n", ret);
++			if (!ret)
++				ret = -EINVAL;
+ 			goto put_child;
+ 		}
+ 
+diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
+index 012639f6d3354..519b2ab84a63f 100644
+--- a/drivers/platform/x86/hp-wmi.c
++++ b/drivers/platform/x86/hp-wmi.c
+@@ -900,8 +900,16 @@ static int __init hp_wmi_bios_setup(struct platform_device *device)
+ 	wwan_rfkill = NULL;
+ 	rfkill2_count = 0;
+ 
+-	if (hp_wmi_rfkill_setup(device))
+-		hp_wmi_rfkill2_setup(device);
++	/*
++	 * In pre-2009 BIOS, command 1Bh return 0x4 to indicate that
++	 * BIOS no longer controls the power for the wireless
++	 * devices. All features supported by this command will no
++	 * longer be supported.
++	 */
++	if (!hp_wmi_bios_2009_later()) {
++		if (hp_wmi_rfkill_setup(device))
++			hp_wmi_rfkill2_setup(device);
++	}
+ 
+ 	thermal_policy_setup(device);
+ 
+diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c
+index 999c14e5d0bdd..0599566c66b06 100644
+--- a/fs/btrfs/tests/btrfs-tests.c
++++ b/fs/btrfs/tests/btrfs-tests.c
+@@ -192,7 +192,7 @@ void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info)
+ 
+ void btrfs_free_dummy_root(struct btrfs_root *root)
+ {
+-	if (!root)
++	if (IS_ERR_OR_NULL(root))
+ 		return;
+ 	/* Will be freed by btrfs_free_fs_roots */
+ 	if (WARN_ON(test_bit(BTRFS_ROOT_IN_RADIX, &root->state)))
+diff --git a/fs/fuse/readdir.c b/fs/fuse/readdir.c
+index bc267832310c7..d5294e663df50 100644
+--- a/fs/fuse/readdir.c
++++ b/fs/fuse/readdir.c
+@@ -77,8 +77,10 @@ static void fuse_add_dirent_to_cache(struct file *file,
+ 		goto unlock;
+ 
+ 	addr = kmap_atomic(page);
+-	if (!offset)
++	if (!offset) {
+ 		clear_page(addr);
++		SetPageUptodate(page);
++	}
+ 	memcpy(addr + offset, dirent, reclen);
+ 	kunmap_atomic(addr);
+ 	fi->rdc.size = (index << PAGE_SHIFT) + offset + reclen;
+@@ -516,6 +518,12 @@ retry_locked:
+ 
+ 	page = find_get_page_flags(file->f_mapping, index,
+ 				   FGP_ACCESSED | FGP_LOCK);
++	/* Page gone missing, then re-added to cache, but not initialized? */
++	if (page && !PageUptodate(page)) {
++		unlock_page(page);
++		put_page(page);
++		page = NULL;
++	}
+ 	spin_lock(&fi->rdc.lock);
+ 	if (!page) {
+ 		/*
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 05f360b66b07a..d1cb1addea965 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -9038,7 +9038,7 @@ static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
+ 
+ 		if (unlikely(ctx->sqo_dead)) {
+ 			ret = -EOWNERDEAD;
+-			goto out;
++			break;
+ 		}
+ 
+ 		if (!io_sqring_full(ctx))
+@@ -9048,7 +9048,6 @@ static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
+ 	} while (!signal_pending(current));
+ 
+ 	finish_wait(&ctx->sqo_sq_wait, &wait);
+-out:
+ 	return ret;
+ }
+ 
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 545f764d70b12..5ee4973525f01 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -322,7 +322,7 @@ void nilfs_relax_pressure_in_lock(struct super_block *sb)
+ 	struct the_nilfs *nilfs = sb->s_fs_info;
+ 	struct nilfs_sc_info *sci = nilfs->ns_writer;
+ 
+-	if (!sci || !sci->sc_flush_request)
++	if (sb_rdonly(sb) || unlikely(!sci) || !sci->sc_flush_request)
+ 		return;
+ 
+ 	set_bit(NILFS_SC_PRIOR_FLUSH, &sci->sc_flags);
+@@ -2248,7 +2248,7 @@ int nilfs_construct_segment(struct super_block *sb)
+ 	struct nilfs_transaction_info *ti;
+ 	int err;
+ 
+-	if (!sci)
++	if (sb_rdonly(sb) || unlikely(!sci))
+ 		return -EROFS;
+ 
+ 	/* A call inside transactions causes a deadlock. */
+@@ -2287,7 +2287,7 @@ int nilfs_construct_dsync_segment(struct super_block *sb, struct inode *inode,
+ 	struct nilfs_transaction_info ti;
+ 	int err = 0;
+ 
+-	if (!sci)
++	if (sb_rdonly(sb) || unlikely(!sci))
+ 		return -EROFS;
+ 
+ 	nilfs_transaction_lock(sb, &ti, 0);
+@@ -2783,11 +2783,12 @@ int nilfs_attach_log_writer(struct super_block *sb, struct nilfs_root *root)
+ 
+ 	if (nilfs->ns_writer) {
+ 		/*
+-		 * This happens if the filesystem was remounted
+-		 * read/write after nilfs_error degenerated it into a
+-		 * read-only mount.
++		 * This happens if the filesystem is made read-only by
++		 * __nilfs_error or nilfs_remount and then remounted
++		 * read/write.  In these cases, reuse the existing
++		 * writer.
+ 		 */
+-		nilfs_detach_log_writer(sb);
++		return 0;
+ 	}
+ 
+ 	nilfs->ns_writer = nilfs_segctor_new(sb, root);
+diff --git a/fs/nilfs2/super.c b/fs/nilfs2/super.c
+index b9d30e8c43b06..7a41c9727c9e2 100644
+--- a/fs/nilfs2/super.c
++++ b/fs/nilfs2/super.c
+@@ -1133,8 +1133,6 @@ static int nilfs_remount(struct super_block *sb, int *flags, char *data)
+ 	if ((bool)(*flags & SB_RDONLY) == sb_rdonly(sb))
+ 		goto out;
+ 	if (*flags & SB_RDONLY) {
+-		/* Shutting down log writer */
+-		nilfs_detach_log_writer(sb);
+ 		sb->s_flags |= SB_RDONLY;
+ 
+ 		/*
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index c20ebecd7bc24..ce103dd39b899 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -690,9 +690,7 @@ int nilfs_count_free_blocks(struct the_nilfs *nilfs, sector_t *nblocks)
+ {
+ 	unsigned long ncleansegs;
+ 
+-	down_read(&NILFS_MDT(nilfs->ns_dat)->mi_sem);
+ 	ncleansegs = nilfs_sufile_get_ncleansegs(nilfs->ns_sufile);
+-	up_read(&NILFS_MDT(nilfs->ns_dat)->mi_sem);
+ 	*nblocks = (sector_t)ncleansegs * nilfs->ns_blocks_per_segment;
+ 	return 0;
+ }
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index 9f3aced46c68f..aff5ca32e4f64 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -241,7 +241,7 @@ static struct fileIdentDesc *udf_find_entry(struct inode *dir,
+ 						      poffset - lfi);
+ 			else {
+ 				if (!copy_name) {
+-					copy_name = kmalloc(UDF_NAME_LEN,
++					copy_name = kmalloc(UDF_NAME_LEN_CS0,
+ 							    GFP_NOFS);
+ 					if (!copy_name) {
+ 						fi = ERR_PTR(-ENOMEM);
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index a774361f28d40..d233f9e4b9c60 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -328,6 +328,7 @@
+ #define DATA_DATA							\
+ 	*(.xiptext)							\
+ 	*(DATA_MAIN)							\
++	*(.data..decrypted)						\
+ 	*(.ref.data)							\
+ 	*(.data..shared_aligned) /* percpu related */			\
+ 	MEM_KEEP(init.data*)						\
+@@ -972,7 +973,6 @@
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ #define PERCPU_DECRYPTED_SECTION					\
+ 	. = ALIGN(PAGE_SIZE);						\
+-	*(.data..decrypted)						\
+ 	*(.data..percpu..decrypted)					\
+ 	. = ALIGN(PAGE_SIZE);
+ #else
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 391bc1480dfb1..4d37c69e76b17 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -45,7 +45,7 @@ struct bpf_reg_state {
+ 	enum bpf_reg_type type;
+ 	union {
+ 		/* valid when type == PTR_TO_PACKET */
+-		u16 range;
++		int range;
+ 
+ 		/* valid when type == CONST_PTR_TO_MAP | PTR_TO_MAP_VALUE |
+ 		 *   PTR_TO_MAP_VALUE_OR_NULL
+@@ -290,6 +290,27 @@ struct bpf_verifier_state {
+ 	     iter < frame->allocated_stack / BPF_REG_SIZE;		\
+ 	     iter++, reg = bpf_get_spilled_reg(iter, frame))
+ 
++/* Invoke __expr over regsiters in __vst, setting __state and __reg */
++#define bpf_for_each_reg_in_vstate(__vst, __state, __reg, __expr)   \
++	({                                                               \
++		struct bpf_verifier_state *___vstate = __vst;            \
++		int ___i, ___j;                                          \
++		for (___i = 0; ___i <= ___vstate->curframe; ___i++) {    \
++			struct bpf_reg_state *___regs;                   \
++			__state = ___vstate->frame[___i];                \
++			___regs = __state->regs;                         \
++			for (___j = 0; ___j < MAX_BPF_REG; ___j++) {     \
++				__reg = &___regs[___j];                  \
++				(void)(__expr);                          \
++			}                                                \
++			bpf_for_each_spilled_reg(___j, __state, __reg) { \
++				if (!__reg)                              \
++					continue;                        \
++				(void)(__expr);                          \
++			}                                                \
++		}                                                        \
++	})
++
+ /* linked list of verifier states used to prune search */
+ struct bpf_verifier_state_list {
+ 	struct bpf_verifier_state state;
+diff --git a/include/uapi/linux/capability.h b/include/uapi/linux/capability.h
+index 2ddb4226cd231..43a44538ec8d0 100644
+--- a/include/uapi/linux/capability.h
++++ b/include/uapi/linux/capability.h
+@@ -427,7 +427,7 @@ struct vfs_ns_cap_data {
+  */
+ 
+ #define CAP_TO_INDEX(x)     ((x) >> 5)        /* 1 << 5 == bits in __u32 */
+-#define CAP_TO_MASK(x)      (1 << ((x) & 31)) /* mask for indexed __u32 */
++#define CAP_TO_MASK(x)      (1U << ((x) & 31)) /* mask for indexed __u32 */
+ 
+ 
+ #endif /* _UAPI_LINUX_CAPABILITY_H */
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index e4dcc23b52c01..50364031eb4d1 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2978,7 +2978,9 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
+ 			regno);
+ 		return -EACCES;
+ 	}
+-	err = __check_mem_access(env, regno, off, size, reg->range,
++
++	err = reg->range < 0 ? -EINVAL :
++	      __check_mem_access(env, regno, off, size, reg->range,
+ 				 zero_size_allowed);
+ 	if (err) {
+ 		verbose(env, "R%d offset is outside of the packet\n", regno);
+@@ -4991,50 +4993,41 @@ static int check_func_proto(const struct bpf_func_proto *fn, int func_id)
+ /* Packet data might have moved, any old PTR_TO_PACKET[_META,_END]
+  * are now invalid, so turn them into unknown SCALAR_VALUE.
+  */
+-static void __clear_all_pkt_pointers(struct bpf_verifier_env *env,
+-				     struct bpf_func_state *state)
++static void clear_all_pkt_pointers(struct bpf_verifier_env *env)
+ {
+-	struct bpf_reg_state *regs = state->regs, *reg;
+-	int i;
+-
+-	for (i = 0; i < MAX_BPF_REG; i++)
+-		if (reg_is_pkt_pointer_any(&regs[i]))
+-			mark_reg_unknown(env, regs, i);
++	struct bpf_func_state *state;
++	struct bpf_reg_state *reg;
+ 
+-	bpf_for_each_spilled_reg(i, state, reg) {
+-		if (!reg)
+-			continue;
++	bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
+ 		if (reg_is_pkt_pointer_any(reg))
+ 			__mark_reg_unknown(env, reg);
+-	}
++	}));
+ }
+ 
+-static void clear_all_pkt_pointers(struct bpf_verifier_env *env)
+-{
+-	struct bpf_verifier_state *vstate = env->cur_state;
+-	int i;
+-
+-	for (i = 0; i <= vstate->curframe; i++)
+-		__clear_all_pkt_pointers(env, vstate->frame[i]);
+-}
++enum {
++	AT_PKT_END = -1,
++	BEYOND_PKT_END = -2,
++};
+ 
+-static void release_reg_references(struct bpf_verifier_env *env,
+-				   struct bpf_func_state *state,
+-				   int ref_obj_id)
++static void mark_pkt_end(struct bpf_verifier_state *vstate, int regn, bool range_open)
+ {
+-	struct bpf_reg_state *regs = state->regs, *reg;
+-	int i;
++	struct bpf_func_state *state = vstate->frame[vstate->curframe];
++	struct bpf_reg_state *reg = &state->regs[regn];
+ 
+-	for (i = 0; i < MAX_BPF_REG; i++)
+-		if (regs[i].ref_obj_id == ref_obj_id)
+-			mark_reg_unknown(env, regs, i);
++	if (reg->type != PTR_TO_PACKET)
++		/* PTR_TO_PACKET_META is not supported yet */
++		return;
+ 
+-	bpf_for_each_spilled_reg(i, state, reg) {
+-		if (!reg)
+-			continue;
+-		if (reg->ref_obj_id == ref_obj_id)
+-			__mark_reg_unknown(env, reg);
+-	}
++	/* The 'reg' is pkt > pkt_end or pkt >= pkt_end.
++	 * How far beyond pkt_end it goes is unknown.
++	 * if (!range_open) it's the case of pkt >= pkt_end
++	 * if (range_open) it's the case of pkt > pkt_end
++	 * hence this pointer is at least 1 byte bigger than pkt_end
++	 */
++	if (range_open)
++		reg->range = BEYOND_PKT_END;
++	else
++		reg->range = AT_PKT_END;
+ }
+ 
+ /* The pointer with the specified id has released its reference to kernel
+@@ -5043,16 +5036,22 @@ static void release_reg_references(struct bpf_verifier_env *env,
+ static int release_reference(struct bpf_verifier_env *env,
+ 			     int ref_obj_id)
+ {
+-	struct bpf_verifier_state *vstate = env->cur_state;
++	struct bpf_func_state *state;
++	struct bpf_reg_state *reg;
+ 	int err;
+-	int i;
+ 
+ 	err = release_reference_state(cur_func(env), ref_obj_id);
+ 	if (err)
+ 		return err;
+ 
+-	for (i = 0; i <= vstate->curframe; i++)
+-		release_reg_references(env, vstate->frame[i], ref_obj_id);
++	bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
++		if (reg->ref_obj_id == ref_obj_id) {
++			if (!env->allow_ptr_leaks)
++				__mark_reg_not_init(env, reg);
++			else
++				__mark_reg_unknown(env, reg);
++		}
++	}));
+ 
+ 	return 0;
+ }
+@@ -7191,35 +7190,14 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ 	return 0;
+ }
+ 
+-static void __find_good_pkt_pointers(struct bpf_func_state *state,
+-				     struct bpf_reg_state *dst_reg,
+-				     enum bpf_reg_type type, u16 new_range)
+-{
+-	struct bpf_reg_state *reg;
+-	int i;
+-
+-	for (i = 0; i < MAX_BPF_REG; i++) {
+-		reg = &state->regs[i];
+-		if (reg->type == type && reg->id == dst_reg->id)
+-			/* keep the maximum range already checked */
+-			reg->range = max(reg->range, new_range);
+-	}
+-
+-	bpf_for_each_spilled_reg(i, state, reg) {
+-		if (!reg)
+-			continue;
+-		if (reg->type == type && reg->id == dst_reg->id)
+-			reg->range = max(reg->range, new_range);
+-	}
+-}
+-
+ static void find_good_pkt_pointers(struct bpf_verifier_state *vstate,
+ 				   struct bpf_reg_state *dst_reg,
+ 				   enum bpf_reg_type type,
+ 				   bool range_right_open)
+ {
+-	u16 new_range;
+-	int i;
++	struct bpf_func_state *state;
++	struct bpf_reg_state *reg;
++	int new_range;
+ 
+ 	if (dst_reg->off < 0 ||
+ 	    (dst_reg->off == 0 && range_right_open))
+@@ -7284,9 +7262,11 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *vstate,
+ 	 * the range won't allow anything.
+ 	 * dst_reg->off is known < MAX_PACKET_OFF, therefore it fits in a u16.
+ 	 */
+-	for (i = 0; i <= vstate->curframe; i++)
+-		__find_good_pkt_pointers(vstate->frame[i], dst_reg, type,
+-					 new_range);
++	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
++		if (reg->type == type && reg->id == dst_reg->id)
++			/* keep the maximum range already checked */
++			reg->range = max(reg->range, new_range);
++	}));
+ }
+ 
+ static int is_branch32_taken(struct bpf_reg_state *reg, u32 val, u8 opcode)
+@@ -7470,6 +7450,67 @@ static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode,
+ 	return is_branch64_taken(reg, val, opcode);
+ }
+ 
++static int flip_opcode(u32 opcode)
++{
++	/* How can we transform "a <op> b" into "b <op> a"? */
++	static const u8 opcode_flip[16] = {
++		/* these stay the same */
++		[BPF_JEQ  >> 4] = BPF_JEQ,
++		[BPF_JNE  >> 4] = BPF_JNE,
++		[BPF_JSET >> 4] = BPF_JSET,
++		/* these swap "lesser" and "greater" (L and G in the opcodes) */
++		[BPF_JGE  >> 4] = BPF_JLE,
++		[BPF_JGT  >> 4] = BPF_JLT,
++		[BPF_JLE  >> 4] = BPF_JGE,
++		[BPF_JLT  >> 4] = BPF_JGT,
++		[BPF_JSGE >> 4] = BPF_JSLE,
++		[BPF_JSGT >> 4] = BPF_JSLT,
++		[BPF_JSLE >> 4] = BPF_JSGE,
++		[BPF_JSLT >> 4] = BPF_JSGT
++	};
++	return opcode_flip[opcode >> 4];
++}
++
++static int is_pkt_ptr_branch_taken(struct bpf_reg_state *dst_reg,
++				   struct bpf_reg_state *src_reg,
++				   u8 opcode)
++{
++	struct bpf_reg_state *pkt;
++
++	if (src_reg->type == PTR_TO_PACKET_END) {
++		pkt = dst_reg;
++	} else if (dst_reg->type == PTR_TO_PACKET_END) {
++		pkt = src_reg;
++		opcode = flip_opcode(opcode);
++	} else {
++		return -1;
++	}
++
++	if (pkt->range >= 0)
++		return -1;
++
++	switch (opcode) {
++	case BPF_JLE:
++		/* pkt <= pkt_end */
++		fallthrough;
++	case BPF_JGT:
++		/* pkt > pkt_end */
++		if (pkt->range == BEYOND_PKT_END)
++			/* pkt has at last one extra byte beyond pkt_end */
++			return opcode == BPF_JGT;
++		break;
++	case BPF_JLT:
++		/* pkt < pkt_end */
++		fallthrough;
++	case BPF_JGE:
++		/* pkt >= pkt_end */
++		if (pkt->range == BEYOND_PKT_END || pkt->range == AT_PKT_END)
++			return opcode == BPF_JGE;
++		break;
++	}
++	return -1;
++}
++
+ /* Adjusts the register min/max values in the case that the dst_reg is the
+  * variable register that we are working on, and src_reg is a constant or we're
+  * simply doing a BPF_K check.
+@@ -7640,23 +7681,7 @@ static void reg_set_min_max_inv(struct bpf_reg_state *true_reg,
+ 				u64 val, u32 val32,
+ 				u8 opcode, bool is_jmp32)
+ {
+-	/* How can we transform "a <op> b" into "b <op> a"? */
+-	static const u8 opcode_flip[16] = {
+-		/* these stay the same */
+-		[BPF_JEQ  >> 4] = BPF_JEQ,
+-		[BPF_JNE  >> 4] = BPF_JNE,
+-		[BPF_JSET >> 4] = BPF_JSET,
+-		/* these swap "lesser" and "greater" (L and G in the opcodes) */
+-		[BPF_JGE  >> 4] = BPF_JLE,
+-		[BPF_JGT  >> 4] = BPF_JLT,
+-		[BPF_JLE  >> 4] = BPF_JGE,
+-		[BPF_JLT  >> 4] = BPF_JGT,
+-		[BPF_JSGE >> 4] = BPF_JSLE,
+-		[BPF_JSGT >> 4] = BPF_JSLT,
+-		[BPF_JSLE >> 4] = BPF_JSGE,
+-		[BPF_JSLT >> 4] = BPF_JSGT
+-	};
+-	opcode = opcode_flip[opcode >> 4];
++	opcode = flip_opcode(opcode);
+ 	/* This uses zero as "not present in table"; luckily the zero opcode,
+ 	 * BPF_JA, can't get here.
+ 	 */
+@@ -7754,7 +7779,7 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
+ 			reg->ref_obj_id = 0;
+ 		} else if (!reg_may_point_to_spin_lock(reg)) {
+ 			/* For not-NULL ptr, reg->ref_obj_id will be reset
+-			 * in release_reg_references().
++			 * in release_reference().
+ 			 *
+ 			 * reg->id is still used by spin_lock ptr. Other
+ 			 * than spin_lock ptr type, reg->id can be reset.
+@@ -7764,22 +7789,6 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
+ 	}
+ }
+ 
+-static void __mark_ptr_or_null_regs(struct bpf_func_state *state, u32 id,
+-				    bool is_null)
+-{
+-	struct bpf_reg_state *reg;
+-	int i;
+-
+-	for (i = 0; i < MAX_BPF_REG; i++)
+-		mark_ptr_or_null_reg(state, &state->regs[i], id, is_null);
+-
+-	bpf_for_each_spilled_reg(i, state, reg) {
+-		if (!reg)
+-			continue;
+-		mark_ptr_or_null_reg(state, reg, id, is_null);
+-	}
+-}
+-
+ /* The logic is similar to find_good_pkt_pointers(), both could eventually
+  * be folded together at some point.
+  */
+@@ -7787,10 +7796,9 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
+ 				  bool is_null)
+ {
+ 	struct bpf_func_state *state = vstate->frame[vstate->curframe];
+-	struct bpf_reg_state *regs = state->regs;
++	struct bpf_reg_state *regs = state->regs, *reg;
+ 	u32 ref_obj_id = regs[regno].ref_obj_id;
+ 	u32 id = regs[regno].id;
+-	int i;
+ 
+ 	if (ref_obj_id && ref_obj_id == id && is_null)
+ 		/* regs[regno] is in the " == NULL" branch.
+@@ -7799,8 +7807,9 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
+ 		 */
+ 		WARN_ON_ONCE(release_reference_state(state, id));
+ 
+-	for (i = 0; i <= vstate->curframe; i++)
+-		__mark_ptr_or_null_regs(vstate->frame[i], id, is_null);
++	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
++		mark_ptr_or_null_reg(state, reg, id, is_null);
++	}));
+ }
+ 
+ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
+@@ -7825,6 +7834,7 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
+ 			/* pkt_data' > pkt_end, pkt_meta' > pkt_data */
+ 			find_good_pkt_pointers(this_branch, dst_reg,
+ 					       dst_reg->type, false);
++			mark_pkt_end(other_branch, insn->dst_reg, true);
+ 		} else if ((dst_reg->type == PTR_TO_PACKET_END &&
+ 			    src_reg->type == PTR_TO_PACKET) ||
+ 			   (reg_is_init_pkt_pointer(dst_reg, PTR_TO_PACKET) &&
+@@ -7832,6 +7842,7 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
+ 			/* pkt_end > pkt_data', pkt_data > pkt_meta' */
+ 			find_good_pkt_pointers(other_branch, src_reg,
+ 					       src_reg->type, true);
++			mark_pkt_end(this_branch, insn->src_reg, false);
+ 		} else {
+ 			return false;
+ 		}
+@@ -7844,6 +7855,7 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
+ 			/* pkt_data' < pkt_end, pkt_meta' < pkt_data */
+ 			find_good_pkt_pointers(other_branch, dst_reg,
+ 					       dst_reg->type, true);
++			mark_pkt_end(this_branch, insn->dst_reg, false);
+ 		} else if ((dst_reg->type == PTR_TO_PACKET_END &&
+ 			    src_reg->type == PTR_TO_PACKET) ||
+ 			   (reg_is_init_pkt_pointer(dst_reg, PTR_TO_PACKET) &&
+@@ -7851,6 +7863,7 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
+ 			/* pkt_end < pkt_data', pkt_data > pkt_meta' */
+ 			find_good_pkt_pointers(this_branch, src_reg,
+ 					       src_reg->type, false);
++			mark_pkt_end(other_branch, insn->src_reg, true);
+ 		} else {
+ 			return false;
+ 		}
+@@ -7863,6 +7876,7 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
+ 			/* pkt_data' >= pkt_end, pkt_meta' >= pkt_data */
+ 			find_good_pkt_pointers(this_branch, dst_reg,
+ 					       dst_reg->type, true);
++			mark_pkt_end(other_branch, insn->dst_reg, false);
+ 		} else if ((dst_reg->type == PTR_TO_PACKET_END &&
+ 			    src_reg->type == PTR_TO_PACKET) ||
+ 			   (reg_is_init_pkt_pointer(dst_reg, PTR_TO_PACKET) &&
+@@ -7870,6 +7884,7 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
+ 			/* pkt_end >= pkt_data', pkt_data >= pkt_meta' */
+ 			find_good_pkt_pointers(other_branch, src_reg,
+ 					       src_reg->type, false);
++			mark_pkt_end(this_branch, insn->src_reg, true);
+ 		} else {
+ 			return false;
+ 		}
+@@ -7882,6 +7897,7 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
+ 			/* pkt_data' <= pkt_end, pkt_meta' <= pkt_data */
+ 			find_good_pkt_pointers(other_branch, dst_reg,
+ 					       dst_reg->type, false);
++			mark_pkt_end(this_branch, insn->dst_reg, true);
+ 		} else if ((dst_reg->type == PTR_TO_PACKET_END &&
+ 			    src_reg->type == PTR_TO_PACKET) ||
+ 			   (reg_is_init_pkt_pointer(dst_reg, PTR_TO_PACKET) &&
+@@ -7889,6 +7905,7 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
+ 			/* pkt_end <= pkt_data', pkt_data <= pkt_meta' */
+ 			find_good_pkt_pointers(this_branch, src_reg,
+ 					       src_reg->type, true);
++			mark_pkt_end(other_branch, insn->src_reg, false);
+ 		} else {
+ 			return false;
+ 		}
+@@ -7905,23 +7922,11 @@ static void find_equal_scalars(struct bpf_verifier_state *vstate,
+ {
+ 	struct bpf_func_state *state;
+ 	struct bpf_reg_state *reg;
+-	int i, j;
+-
+-	for (i = 0; i <= vstate->curframe; i++) {
+-		state = vstate->frame[i];
+-		for (j = 0; j < MAX_BPF_REG; j++) {
+-			reg = &state->regs[j];
+-			if (reg->type == SCALAR_VALUE && reg->id == known_reg->id)
+-				*reg = *known_reg;
+-		}
+ 
+-		bpf_for_each_spilled_reg(j, state, reg) {
+-			if (!reg)
+-				continue;
+-			if (reg->type == SCALAR_VALUE && reg->id == known_reg->id)
+-				*reg = *known_reg;
+-		}
+-	}
++	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
++		if (reg->type == SCALAR_VALUE && reg->id == known_reg->id)
++			*reg = *known_reg;
++	}));
+ }
+ 
+ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+@@ -7988,6 +7993,10 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 				       src_reg->var_off.value,
+ 				       opcode,
+ 				       is_jmp32);
++	} else if (reg_is_pkt_pointer_any(dst_reg) &&
++		   reg_is_pkt_pointer_any(src_reg) &&
++		   !is_jmp32) {
++		pred = is_pkt_ptr_branch_taken(dst_reg, src_reg, opcode);
+ 	}
+ 
+ 	if (pred >= 0) {
+@@ -7996,7 +8005,8 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 		 */
+ 		if (!__is_pointer_value(false, dst_reg))
+ 			err = mark_chain_precision(env, insn->dst_reg);
+-		if (BPF_SRC(insn->code) == BPF_X && !err)
++		if (BPF_SRC(insn->code) == BPF_X && !err &&
++		    !__is_pointer_value(false, src_reg))
+ 			err = mark_chain_precision(env, insn->src_reg);
+ 		if (err)
+ 			return err;
+diff --git a/mm/memremap.c b/mm/memremap.c
+index 2455bac895066..299aad0d26e56 100644
+--- a/mm/memremap.c
++++ b/mm/memremap.c
+@@ -348,6 +348,7 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
+ 			WARN(1, "File system DAX not supported\n");
+ 			return ERR_PTR(-EINVAL);
+ 		}
++		params.pgprot = pgprot_decrypted(params.pgprot);
+ 		break;
+ 	case MEMORY_DEVICE_GENERIC:
+ 		break;
+diff --git a/net/can/af_can.c b/net/can/af_can.c
+index 1c95ede2c9a6e..cf554e8555214 100644
+--- a/net/can/af_can.c
++++ b/net/can/af_can.c
+@@ -451,7 +451,7 @@ int can_rx_register(struct net *net, struct net_device *dev, canid_t can_id,
+ 
+ 	/* insert new receiver  (dev,canid,mask) -> (func,data) */
+ 
+-	if (dev && dev->type != ARPHRD_CAN)
++	if (dev && (dev->type != ARPHRD_CAN || !can_get_ml_priv(dev)))
+ 		return -ENODEV;
+ 
+ 	if (dev && !net_eq(net, dev_net(dev)))
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index ca75d1b8f415c..9da8fbc81c04a 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -332,6 +332,9 @@ int j1939_send_one(struct j1939_priv *priv, struct sk_buff *skb)
+ 	/* re-claim the CAN_HDR from the SKB */
+ 	cf = skb_push(skb, J1939_CAN_HDR);
+ 
++	/* initialize header structure */
++	memset(cf, 0, J1939_CAN_HDR);
++
+ 	/* make it a full can frame again */
+ 	skb_put(skb, J1939_CAN_FTR + (8 - dlc));
+ 
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 7bdcdad58dc86..06169889b0ca0 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3809,23 +3809,25 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
+ 	int i = 0;
+ 	int pos;
+ 
+-	if (list_skb && !list_skb->head_frag && skb_headlen(list_skb) &&
+-	    (skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY)) {
+-		/* gso_size is untrusted, and we have a frag_list with a linear
+-		 * non head_frag head.
+-		 *
+-		 * (we assume checking the first list_skb member suffices;
+-		 * i.e if either of the list_skb members have non head_frag
+-		 * head, then the first one has too).
+-		 *
+-		 * If head_skb's headlen does not fit requested gso_size, it
+-		 * means that the frag_list members do NOT terminate on exact
+-		 * gso_size boundaries. Hence we cannot perform skb_frag_t page
+-		 * sharing. Therefore we must fallback to copying the frag_list
+-		 * skbs; we do so by disabling SG.
+-		 */
+-		if (mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb))
+-			features &= ~NETIF_F_SG;
++	if ((skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY) &&
++	    mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb)) {
++		struct sk_buff *check_skb;
++
++		for (check_skb = list_skb; check_skb; check_skb = check_skb->next) {
++			if (skb_headlen(check_skb) && !check_skb->head_frag) {
++				/* gso_size is untrusted, and we have a frag_list with
++				 * a linear non head_frag item.
++				 *
++				 * If head_skb's headlen does not fit requested gso_size,
++				 * it means that the frag_list members do NOT terminate
++				 * on exact gso_size boundaries. Hence we cannot perform
++				 * skb_frag_t page sharing. Therefore we must fallback to
++				 * copying the frag_list skbs; we do so by disabling SG.
++				 */
++				features &= ~NETIF_F_SG;
++				break;
++			}
++		}
+ 	}
+ 
+ 	__skb_push(head_skb, doffset);
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index a7127364253c6..cc588bc2b11d7 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3291,7 +3291,7 @@ static int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 	case TCP_REPAIR_OPTIONS:
+ 		if (!tp->repair)
+ 			err = -EINVAL;
+-		else if (sk->sk_state == TCP_ESTABLISHED)
++		else if (sk->sk_state == TCP_ESTABLISHED && !tp->bytes_sent)
+ 			err = tcp_repair_options_est(sk, optval, optlen);
+ 		else
+ 			err = -EPERM;
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index eaf2308c355a6..809ee0f32d598 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -315,7 +315,7 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ {
+ 	bool cork = false, enospc = sk_msg_full(msg);
+ 	struct sock *sk_redir;
+-	u32 tosend, delta = 0;
++	u32 tosend, origsize, sent, delta = 0;
+ 	u32 eval = __SK_NONE;
+ 	int ret;
+ 
+@@ -370,10 +370,12 @@ more_data:
+ 			cork = true;
+ 			psock->cork = NULL;
+ 		}
+-		sk_msg_return(sk, msg, msg->sg.size);
++		sk_msg_return(sk, msg, tosend);
+ 		release_sock(sk);
+ 
++		origsize = msg->sg.size;
+ 		ret = tcp_bpf_sendmsg_redir(sk_redir, msg, tosend, flags);
++		sent = origsize - msg->sg.size;
+ 
+ 		if (eval == __SK_REDIRECT)
+ 			sock_put(sk_redir);
+@@ -412,7 +414,7 @@ more_data:
+ 		    msg->sg.data[msg->sg.start].page_link &&
+ 		    msg->sg.data[msg->sg.start].length) {
+ 			if (eval == __SK_REDIRECT)
+-				sk_mem_charge(sk, msg->sg.size);
++				sk_mem_charge(sk, tosend - sent);
+ 			goto more_data;
+ 		}
+ 	}
+diff --git a/net/ipv6/addrlabel.c b/net/ipv6/addrlabel.c
+index 8a22486cf2702..17ac45aa7194c 100644
+--- a/net/ipv6/addrlabel.c
++++ b/net/ipv6/addrlabel.c
+@@ -437,6 +437,7 @@ static void ip6addrlbl_putmsg(struct nlmsghdr *nlh,
+ {
+ 	struct ifaddrlblmsg *ifal = nlmsg_data(nlh);
+ 	ifal->ifal_family = AF_INET6;
++	ifal->__ifal_reserved = 0;
+ 	ifal->ifal_prefixlen = prefixlen;
+ 	ifal->ifal_flags = 0;
+ 	ifal->ifal_index = ifindex;
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index 49e8933136526..2d62932b59878 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -877,7 +877,7 @@ static int tipc_nl_compat_name_table_dump_header(struct tipc_nl_compat_msg *msg)
+ 	};
+ 
+ 	ntq = (struct tipc_name_table_query *)TLV_DATA(msg->req);
+-	if (TLV_GET_DATA_LEN(msg->req) < sizeof(struct tipc_name_table_query))
++	if (TLV_GET_DATA_LEN(msg->req) < (int)sizeof(struct tipc_name_table_query))
+ 		return -EINVAL;
+ 
+ 	depth = ntohl(ntq->depth);
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index fd848609e656a..a1e64d967bd38 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -1064,6 +1064,8 @@ MODULE_FIRMWARE("regulatory.db");
+ 
+ static int query_regdb_file(const char *alpha2)
+ {
++	int err;
++
+ 	ASSERT_RTNL();
+ 
+ 	if (regdb)
+@@ -1073,9 +1075,13 @@ static int query_regdb_file(const char *alpha2)
+ 	if (!alpha2)
+ 		return -ENOMEM;
+ 
+-	return request_firmware_nowait(THIS_MODULE, true, "regulatory.db",
+-				       &reg_pdev->dev, GFP_KERNEL,
+-				       (void *)alpha2, regdb_fw_cb);
++	err = request_firmware_nowait(THIS_MODULE, true, "regulatory.db",
++				      &reg_pdev->dev, GFP_KERNEL,
++				      (void *)alpha2, regdb_fw_cb);
++	if (err)
++		kfree(alpha2);
++
++	return err;
+ }
+ 
+ int reg_reload_regdb(void)
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 22d169923261f..15119c49c0934 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1669,7 +1669,9 @@ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
+ 		if (old == rcu_access_pointer(known->pub.ies))
+ 			rcu_assign_pointer(known->pub.ies, new->pub.beacon_ies);
+ 
+-		cfg80211_update_hidden_bsses(known, new->pub.beacon_ies, old);
++		cfg80211_update_hidden_bsses(known,
++					     rcu_access_pointer(new->pub.beacon_ies),
++					     old);
+ 
+ 		if (old)
+ 			kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
+diff --git a/scripts/extract-cert.c b/scripts/extract-cert.c
+index 3bc48c726c41c..79ecbbfe37cd7 100644
+--- a/scripts/extract-cert.c
++++ b/scripts/extract-cert.c
+@@ -23,6 +23,13 @@
+ #include <openssl/err.h>
+ #include <openssl/engine.h>
+ 
++/*
++ * OpenSSL 3.0 deprecates the OpenSSL's ENGINE API.
++ *
++ * Remove this if/when that API is no longer used
++ */
++#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
++
+ #define PKEY_ID_PKCS7 2
+ 
+ static __attribute__((noreturn))
+diff --git a/scripts/sign-file.c b/scripts/sign-file.c
+index fbd34b8e8f578..7434e9ea926e2 100644
+--- a/scripts/sign-file.c
++++ b/scripts/sign-file.c
+@@ -29,6 +29,13 @@
+ #include <openssl/err.h>
+ #include <openssl/engine.h>
+ 
++/*
++ * OpenSSL 3.0 deprecates the OpenSSL's ENGINE API.
++ *
++ * Remove this if/when that API is no longer used
++ */
++#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
++
+ /*
+  * Use CMS if we have openssl-1.0.0 or newer available - otherwise we have to
+  * assume that it's not available and its header file is missing and that we
+diff --git a/sound/hda/hdac_sysfs.c b/sound/hda/hdac_sysfs.c
+index e56e833259031..bcf302f5115ac 100644
+--- a/sound/hda/hdac_sysfs.c
++++ b/sound/hda/hdac_sysfs.c
+@@ -346,8 +346,10 @@ static int add_widget_node(struct kobject *parent, hda_nid_t nid,
+ 		return -ENOMEM;
+ 	kobject_init(kobj, &widget_ktype);
+ 	err = kobject_add(kobj, parent, "%02x", nid);
+-	if (err < 0)
++	if (err < 0) {
++		kobject_put(kobj);
+ 		return err;
++	}
+ 	err = sysfs_create_group(kobj, group);
+ 	if (err < 0) {
+ 		kobject_put(kobj);
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 26dfa8558792f..494bfd2135a9e 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2749,6 +2749,9 @@ static const struct pci_device_id azx_ids[] = {
+ 	{ PCI_DEVICE(0x1002, 0xab28),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ 	  AZX_DCAPS_PM_RUNTIME },
++	{ PCI_DEVICE(0x1002, 0xab30),
++	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
++	  AZX_DCAPS_PM_RUNTIME },
+ 	{ PCI_DEVICE(0x1002, 0xab38),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ 	  AZX_DCAPS_PM_RUNTIME },
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index f774b2ac9720c..82f14c3f642bd 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -1272,6 +1272,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
+ 	SND_PCI_QUIRK(0x1458, 0xA026, "Gigabyte G1.Sniper Z97", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x1458, 0xA036, "Gigabyte GA-Z170X-Gaming 7", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x3842, 0x1038, "EVGA X99 Classified", QUIRK_R3DI),
++	SND_PCI_QUIRK(0x3842, 0x1055, "EVGA Z390 DARK", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x1102, 0x0013, "Recon3D", QUIRK_R3D),
+ 	SND_PCI_QUIRK(0x1102, 0x0018, "Recon3D", QUIRK_R3D),
+ 	SND_PCI_QUIRK(0x1102, 0x0051, "Sound Blaster AE-5", QUIRK_AE5),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 60e3bc1248363..e3f6b930ad4a1 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9161,6 +9161,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ 	SND_PCI_QUIRK(0x1849, 0x1233, "ASRock NUC Box 1100", ALC233_FIXUP_NO_AUDIO_JACK),
++	SND_PCI_QUIRK(0x1849, 0xa233, "Positivo Master C6300", ALC269_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
+ 	SND_PCI_QUIRK(0x19e5, 0x320f, "Huawei WRT-WX9 ", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1b35, 0x1235, "CZC B20", ALC269_FIXUP_CZC_B20),
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index a51591f68ae68..6a78813b63f53 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2028,6 +2028,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 		}
+ 	}
+ },
++{
++	/* M-Audio Micro */
++	USB_DEVICE_VENDOR_SPEC(0x0763, 0x201a),
++},
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x2030),
+ 	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 04a691bc560cb..752422147fb38 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1744,6 +1744,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ 	/* XMOS based USB DACs */
+ 	switch (chip->usb_id) {
+ 	case USB_ID(0x1511, 0x0037): /* AURALiC VEGA */
++	case USB_ID(0x21ed, 0xd75a): /* Accuphase DAC-60 option card */
+ 	case USB_ID(0x2522, 0x0012): /* LH Labs VI DAC Infinity */
+ 	case USB_ID(0x2772, 0x0230): /* Pro-Ject Pre Box S2 Digital */
+ 		if (fp->altsetting == 2)
+diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h
+index 144dc164b7596..8fb9256768134 100644
+--- a/tools/arch/x86/include/asm/msr-index.h
++++ b/tools/arch/x86/include/asm/msr-index.h
+@@ -489,6 +489,11 @@
+ #define MSR_AMD64_CPUID_FN_1		0xc0011004
+ #define MSR_AMD64_LS_CFG		0xc0011020
+ #define MSR_AMD64_DC_CFG		0xc0011022
++
++#define MSR_AMD64_DE_CFG		0xc0011029
++#define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT	1
++#define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE	BIT_ULL(MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT)
++
+ #define MSR_AMD64_BU_CFG2		0xc001102a
+ #define MSR_AMD64_IBSFETCHCTL		0xc0011030
+ #define MSR_AMD64_IBSFETCHLINAD		0xc0011031
+@@ -565,9 +570,6 @@
+ #define FAM10H_MMIO_CONF_BASE_MASK	0xfffffffULL
+ #define FAM10H_MMIO_CONF_BASE_SHIFT	20
+ #define MSR_FAM10H_NODE_ID		0xc001100c
+-#define MSR_F10H_DECFG			0xc0011029
+-#define MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT	1
+-#define MSR_F10H_DECFG_LFENCE_SERIALIZE		BIT_ULL(MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT)
+ 
+ /* K8 MSRs */
+ #define MSR_K8_TOP_MEM1			0xc001001a
+diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
+index 6ebf2b215ef49..eefa2b34e641a 100644
+--- a/tools/bpf/bpftool/common.c
++++ b/tools/bpf/bpftool/common.c
+@@ -271,6 +271,9 @@ int do_pin_any(int argc, char **argv, int (*get_fd)(int *, char ***))
+ 	int err;
+ 	int fd;
+ 
++	if (!REQ_ARGS(3))
++		return -EINVAL;
++
+ 	fd = get_fd(&argc, &argv);
+ 	if (fd < 0)
+ 		return fd;
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index 96fe9c1af3364..4688e39de52af 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -203,7 +203,7 @@ static void new_line_csv(struct perf_stat_config *config, void *ctx)
+ 
+ 	fputc('\n', os->fh);
+ 	if (os->prefix)
+-		fprintf(os->fh, "%s%s", os->prefix, config->csv_sep);
++		fprintf(os->fh, "%s", os->prefix);
+ 	aggr_printout(config, os->evsel, os->id, os->nr);
+ 	for (i = 0; i < os->nfields; i++)
+ 		fputs(config->csv_sep, os->fh);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-11-25 17:06 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-11-25 17:06 UTC (permalink / raw
  To: gentoo-commits

commit:     fa8da87e013364419b6f7e59b4f020258e3fce80
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 25 17:06:12 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 25 17:06:12 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fa8da87e

Linux patch 5.10.156

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1155_linux-5.10.156.patch | 4082 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4086 insertions(+)

diff --git a/0000_README b/0000_README
index 3e23190c..99f78f7a 100644
--- a/0000_README
+++ b/0000_README
@@ -663,6 +663,10 @@ Patch:  1154_linux-5.10.155.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.155
 
+Patch:  1155_linux-5.10.156.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.156
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1155_linux-5.10.156.patch b/1155_linux-5.10.156.patch
new file mode 100644
index 00000000..86b5092d
--- /dev/null
+++ b/1155_linux-5.10.156.patch
@@ -0,0 +1,4082 @@
+diff --git a/Documentation/process/code-of-conduct-interpretation.rst b/Documentation/process/code-of-conduct-interpretation.rst
+index 4f8a06b00f608..43da2cc2e3b9b 100644
+--- a/Documentation/process/code-of-conduct-interpretation.rst
++++ b/Documentation/process/code-of-conduct-interpretation.rst
+@@ -51,7 +51,7 @@ the Technical Advisory Board (TAB) or other maintainers if you're
+ uncertain how to handle situations that come up.  It will not be
+ considered a violation report unless you want it to be.  If you are
+ uncertain about approaching the TAB or any other maintainers, please
+-reach out to our conflict mediator, Joanna Lee <joanna.lee@gesmer.com>.
++reach out to our conflict mediator, Joanna Lee <jlee@linuxfoundation.org>.
+ 
+ In the end, "be kind to each other" is really what the end goal is for
+ everybody.  We know everyone is human and we all fail at times, but the
+diff --git a/Makefile b/Makefile
+index 8ccf902b3609f..166f87bdc1905 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 155
++SUBLEVEL = 156
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index 9e1b0af0aa43f..e4ff47110a960 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -1221,10 +1221,10 @@
+ 			clocks = <&clks IMX7D_NAND_USDHC_BUS_RAWNAND_CLK>;
+ 		};
+ 
+-		gpmi: nand-controller@33002000{
++		gpmi: nand-controller@33002000 {
+ 			compatible = "fsl,imx7d-gpmi-nand";
+ 			#address-cells = <1>;
+-			#size-cells = <1>;
++			#size-cells = <0>;
+ 			reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
+ 			reg-names = "gpmi-nand", "bch";
+ 			interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+index f4d7bb75707df..3490619a9ba96 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+@@ -939,10 +939,10 @@
+ 			clocks = <&clk IMX8MM_CLK_NAND_USDHC_BUS_RAWNAND_CLK>;
+ 		};
+ 
+-		gpmi: nand-controller@33002000{
++		gpmi: nand-controller@33002000 {
+ 			compatible = "fsl,imx8mm-gpmi-nand", "fsl,imx7d-gpmi-nand";
+ 			#address-cells = <1>;
+-			#size-cells = <1>;
++			#size-cells = <0>;
+ 			reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
+ 			reg-names = "gpmi-nand", "bch";
+ 			interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn.dtsi b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+index aea723eb2ba3f..7dba83041264c 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+@@ -809,7 +809,7 @@
+ 		gpmi: nand-controller@33002000 {
+ 			compatible = "fsl,imx8mn-gpmi-nand", "fsl,imx7d-gpmi-nand";
+ 			#address-cells = <1>;
+-			#size-cells = <1>;
++			#size-cells = <0>;
+ 			reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
+ 			reg-names = "gpmi-nand", "bch";
+ 			interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 457b6bb276bb2..9cf5d9551e991 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -41,7 +41,7 @@
+ 	(((midr) & MIDR_IMPLEMENTOR_MASK) >> MIDR_IMPLEMENTOR_SHIFT)
+ 
+ #define MIDR_CPU_MODEL(imp, partnum) \
+-	(((imp)			<< MIDR_IMPLEMENTOR_SHIFT) | \
++	((_AT(u32, imp)		<< MIDR_IMPLEMENTOR_SHIFT) | \
+ 	(0xf			<< MIDR_ARCHITECTURE_SHIFT) | \
+ 	((partnum)		<< MIDR_PARTNUM_SHIFT))
+ 
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 95234f46b0fb9..d87421acddc39 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -1247,6 +1247,15 @@ static int pt_buffer_try_single(struct pt_buffer *buf, int nr_pages)
+ 	if (1 << order != nr_pages)
+ 		goto out;
+ 
++	/*
++	 * Some processors cannot always support single range for more than
++	 * 4KB - refer errata TGL052, ADL037 and RPL017. Future processors might
++	 * also be affected, so for now rather than trying to keep track of
++	 * which ones, just disable it for all.
++	 */
++	if (nr_pages > 1)
++		goto out;
++
+ 	buf->single = true;
+ 	buf->nr_pages = nr_pages;
+ 	ret = 0;
+diff --git a/block/sed-opal.c b/block/sed-opal.c
+index daafadbb88cae..0ac5a4f3f2261 100644
+--- a/block/sed-opal.c
++++ b/block/sed-opal.c
+@@ -88,8 +88,8 @@ struct opal_dev {
+ 	u64 lowest_lba;
+ 
+ 	size_t pos;
+-	u8 cmd[IO_BUFFER_LENGTH];
+-	u8 resp[IO_BUFFER_LENGTH];
++	u8 *cmd;
++	u8 *resp;
+ 
+ 	struct parsed_resp parsed;
+ 	size_t prev_d_len;
+@@ -2134,6 +2134,8 @@ void free_opal_dev(struct opal_dev *dev)
+ 		return;
+ 
+ 	clean_opal_dev(dev);
++	kfree(dev->resp);
++	kfree(dev->cmd);
+ 	kfree(dev);
+ }
+ EXPORT_SYMBOL(free_opal_dev);
+@@ -2146,17 +2148,39 @@ struct opal_dev *init_opal_dev(void *data, sec_send_recv *send_recv)
+ 	if (!dev)
+ 		return NULL;
+ 
++	/*
++	 * Presumably DMA-able buffers must be cache-aligned. Kmalloc makes
++	 * sure the allocated buffer is DMA-safe in that regard.
++	 */
++	dev->cmd = kmalloc(IO_BUFFER_LENGTH, GFP_KERNEL);
++	if (!dev->cmd)
++		goto err_free_dev;
++
++	dev->resp = kmalloc(IO_BUFFER_LENGTH, GFP_KERNEL);
++	if (!dev->resp)
++		goto err_free_cmd;
++
+ 	INIT_LIST_HEAD(&dev->unlk_lst);
+ 	mutex_init(&dev->dev_lock);
+ 	dev->data = data;
+ 	dev->send_recv = send_recv;
+ 	if (check_opal_support(dev) != 0) {
+ 		pr_debug("Opal is not supported on this device\n");
+-		kfree(dev);
+-		return NULL;
++		goto err_free_resp;
+ 	}
+ 
+ 	return dev;
++
++err_free_resp:
++	kfree(dev->resp);
++
++err_free_cmd:
++	kfree(dev->cmd);
++
++err_free_dev:
++	kfree(dev);
++
++	return NULL;
+ }
+ EXPORT_SYMBOL(init_opal_dev);
+ 
+diff --git a/drivers/accessibility/speakup/main.c b/drivers/accessibility/speakup/main.c
+index 48019660a0967..63c5444f0f1ae 100644
+--- a/drivers/accessibility/speakup/main.c
++++ b/drivers/accessibility/speakup/main.c
+@@ -1780,7 +1780,7 @@ static void speakup_con_update(struct vc_data *vc)
+ {
+ 	unsigned long flags;
+ 
+-	if (!speakup_console[vc->vc_num] || spk_parked)
++	if (!speakup_console[vc->vc_num] || spk_parked || !synth)
+ 		return;
+ 	if (!spin_trylock_irqsave(&speakup_info.spinlock, flags))
+ 		/* Speakup output, discard */
+diff --git a/drivers/ata/libata-transport.c b/drivers/ata/libata-transport.c
+index b33772df9bc60..31a66fc0c31dc 100644
+--- a/drivers/ata/libata-transport.c
++++ b/drivers/ata/libata-transport.c
+@@ -301,7 +301,9 @@ int ata_tport_add(struct device *parent,
+ 	pm_runtime_enable(dev);
+ 	pm_runtime_forbid(dev);
+ 
+-	transport_add_device(dev);
++	error = transport_add_device(dev);
++	if (error)
++		goto tport_transport_add_err;
+ 	transport_configure_device(dev);
+ 
+ 	error = ata_tlink_add(&ap->link);
+@@ -312,12 +314,12 @@ int ata_tport_add(struct device *parent,
+ 
+  tport_link_err:
+ 	transport_remove_device(dev);
++ tport_transport_add_err:
+ 	device_del(dev);
+ 
+  tport_err:
+ 	transport_destroy_device(dev);
+ 	put_device(dev);
+-	ata_host_put(ap->host);
+ 	return error;
+ }
+ 
+@@ -426,7 +428,9 @@ int ata_tlink_add(struct ata_link *link)
+ 		goto tlink_err;
+ 	}
+ 
+-	transport_add_device(dev);
++	error = transport_add_device(dev);
++	if (error)
++		goto tlink_transport_err;
+ 	transport_configure_device(dev);
+ 
+ 	ata_for_each_dev(ata_dev, link, ALL) {
+@@ -441,6 +445,7 @@ int ata_tlink_add(struct ata_link *link)
+ 		ata_tdev_delete(ata_dev);
+ 	}
+ 	transport_remove_device(dev);
++  tlink_transport_err:
+ 	device_del(dev);
+   tlink_err:
+ 	transport_destroy_device(dev);
+@@ -678,7 +683,13 @@ static int ata_tdev_add(struct ata_device *ata_dev)
+ 		return error;
+ 	}
+ 
+-	transport_add_device(dev);
++	error = transport_add_device(dev);
++	if (error) {
++		device_del(dev);
++		ata_tdev_free(ata_dev);
++		return error;
++	}
++
+ 	transport_configure_device(dev);
+ 	return 0;
+ }
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 407527ff6b1f6..51450f7c81afe 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -2720,7 +2720,7 @@ static int init_submitter(struct drbd_device *device)
+ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsigned int minor)
+ {
+ 	struct drbd_resource *resource = adm_ctx->resource;
+-	struct drbd_connection *connection;
++	struct drbd_connection *connection, *n;
+ 	struct drbd_device *device;
+ 	struct drbd_peer_device *peer_device, *tmp_peer_device;
+ 	struct gendisk *disk;
+@@ -2839,7 +2839,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
+ out_idr_remove_vol:
+ 	idr_remove(&connection->peer_devices, vnr);
+ out_idr_remove_from_resource:
+-	for_each_connection(connection, resource) {
++	for_each_connection_safe(connection, n, resource) {
+ 		peer_device = idr_remove(&connection->peer_devices, vnr);
+ 		if (peer_device)
+ 			kref_put(&connection->kref, drbd_destroy_connection);
+diff --git a/drivers/firmware/google/coreboot_table.c b/drivers/firmware/google/coreboot_table.c
+index 0205987a4fd4f..568074148f62b 100644
+--- a/drivers/firmware/google/coreboot_table.c
++++ b/drivers/firmware/google/coreboot_table.c
+@@ -152,12 +152,8 @@ static int coreboot_table_probe(struct platform_device *pdev)
+ 	if (!ptr)
+ 		return -ENOMEM;
+ 
+-	ret = bus_register(&coreboot_bus_type);
+-	if (!ret) {
+-		ret = coreboot_table_populate(dev, ptr);
+-		if (ret)
+-			bus_unregister(&coreboot_bus_type);
+-	}
++	ret = coreboot_table_populate(dev, ptr);
++
+ 	memunmap(ptr);
+ 
+ 	return ret;
+@@ -172,7 +168,6 @@ static int __cb_dev_unregister(struct device *dev, void *dummy)
+ static int coreboot_table_remove(struct platform_device *pdev)
+ {
+ 	bus_for_each_dev(&coreboot_bus_type, NULL, NULL, __cb_dev_unregister);
+-	bus_unregister(&coreboot_bus_type);
+ 	return 0;
+ }
+ 
+@@ -202,6 +197,32 @@ static struct platform_driver coreboot_table_driver = {
+ 		.of_match_table = of_match_ptr(coreboot_of_match),
+ 	},
+ };
+-module_platform_driver(coreboot_table_driver);
++
++static int __init coreboot_table_driver_init(void)
++{
++	int ret;
++
++	ret = bus_register(&coreboot_bus_type);
++	if (ret)
++		return ret;
++
++	ret = platform_driver_register(&coreboot_table_driver);
++	if (ret) {
++		bus_unregister(&coreboot_bus_type);
++		return ret;
++	}
++
++	return 0;
++}
++
++static void __exit coreboot_table_driver_exit(void)
++{
++	platform_driver_unregister(&coreboot_table_driver);
++	bus_unregister(&coreboot_bus_type);
++}
++
++module_init(coreboot_table_driver_init);
++module_exit(coreboot_table_driver_exit);
++
+ MODULE_AUTHOR("Google, Inc.");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index 8f66eef0c6837..c6c4888c66651 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1746,7 +1746,7 @@ void dcn20_post_unlock_program_front_end(
+ 
+ 			for (j = 0; j < TIMEOUT_FOR_PIPE_ENABLE_MS*1000
+ 					&& hubp->funcs->hubp_is_flip_pending(hubp); j++)
+-				mdelay(1);
++				udelay(1);
+ 		}
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index 1c526cb239e03..3a31058b029e3 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -379,16 +379,31 @@ static int arcturus_set_default_dpm_table(struct smu_context *smu)
+ 	return 0;
+ }
+ 
+-static int arcturus_check_powerplay_table(struct smu_context *smu)
++static void arcturus_check_bxco_support(struct smu_context *smu)
+ {
+ 	struct smu_table_context *table_context = &smu->smu_table;
+ 	struct smu_11_0_powerplay_table *powerplay_table =
+ 		table_context->power_play_table;
+ 	struct smu_baco_context *smu_baco = &smu->smu_baco;
++	struct amdgpu_device *adev = smu->adev;
++	uint32_t val;
+ 
+ 	if (powerplay_table->platform_caps & SMU_11_0_PP_PLATFORM_CAP_BACO ||
+-	    powerplay_table->platform_caps & SMU_11_0_PP_PLATFORM_CAP_MACO)
+-		smu_baco->platform_support = true;
++	    powerplay_table->platform_caps & SMU_11_0_PP_PLATFORM_CAP_MACO) {
++		val = RREG32_SOC15(NBIO, 0, mmRCC_BIF_STRAP0);
++		smu_baco->platform_support =
++			(val & RCC_BIF_STRAP0__STRAP_PX_CAPABLE_MASK) ? true :
++									false;
++	}
++}
++
++static int arcturus_check_powerplay_table(struct smu_context *smu)
++{
++	struct smu_table_context *table_context = &smu->smu_table;
++	struct smu_11_0_powerplay_table *powerplay_table =
++		table_context->power_play_table;
++
++	arcturus_check_bxco_support(smu);
+ 
+ 	table_context->thermal_controller_type =
+ 		powerplay_table->thermal_controller_type;
+@@ -2131,13 +2146,11 @@ static void arcturus_get_unique_id(struct smu_context *smu)
+ static bool arcturus_is_baco_supported(struct smu_context *smu)
+ {
+ 	struct amdgpu_device *adev = smu->adev;
+-	uint32_t val;
+ 
+ 	if (!smu_v11_0_baco_is_support(smu) || amdgpu_sriov_vf(adev))
+ 		return false;
+ 
+-	val = RREG32_SOC15(NBIO, 0, mmRCC_BIF_STRAP0);
+-	return (val & RCC_BIF_STRAP0__STRAP_PX_CAPABLE_MASK) ? true : false;
++	return true;
+ }
+ 
+ static int arcturus_set_df_cstate(struct smu_context *smu,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index 2937784bc8249..a7773b6453d53 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -338,19 +338,34 @@ navi10_get_allowed_feature_mask(struct smu_context *smu,
+ 	return 0;
+ }
+ 
+-static int navi10_check_powerplay_table(struct smu_context *smu)
++static void navi10_check_bxco_support(struct smu_context *smu)
+ {
+ 	struct smu_table_context *table_context = &smu->smu_table;
+ 	struct smu_11_0_powerplay_table *powerplay_table =
+ 		table_context->power_play_table;
+ 	struct smu_baco_context *smu_baco = &smu->smu_baco;
++	struct amdgpu_device *adev = smu->adev;
++	uint32_t val;
++
++	if (powerplay_table->platform_caps & SMU_11_0_PP_PLATFORM_CAP_BACO ||
++	    powerplay_table->platform_caps & SMU_11_0_PP_PLATFORM_CAP_MACO) {
++		val = RREG32_SOC15(NBIO, 0, mmRCC_BIF_STRAP0);
++		smu_baco->platform_support =
++			(val & RCC_BIF_STRAP0__STRAP_PX_CAPABLE_MASK) ? true :
++									false;
++	}
++}
++
++static int navi10_check_powerplay_table(struct smu_context *smu)
++{
++	struct smu_table_context *table_context = &smu->smu_table;
++	struct smu_11_0_powerplay_table *powerplay_table =
++		table_context->power_play_table;
+ 
+ 	if (powerplay_table->platform_caps & SMU_11_0_PP_PLATFORM_CAP_HARDWAREDC)
+ 		smu->dc_controlled_by_gpio = true;
+ 
+-	if (powerplay_table->platform_caps & SMU_11_0_PP_PLATFORM_CAP_BACO ||
+-	    powerplay_table->platform_caps & SMU_11_0_PP_PLATFORM_CAP_MACO)
+-		smu_baco->platform_support = true;
++	navi10_check_bxco_support(smu);
+ 
+ 	table_context->thermal_controller_type =
+ 		powerplay_table->thermal_controller_type;
+@@ -1948,13 +1963,11 @@ static int navi10_overdrive_get_gfx_clk_base_voltage(struct smu_context *smu,
+ static bool navi10_is_baco_supported(struct smu_context *smu)
+ {
+ 	struct amdgpu_device *adev = smu->adev;
+-	uint32_t val;
+ 
+ 	if (amdgpu_sriov_vf(adev) || (!smu_v11_0_baco_is_support(smu)))
+ 		return false;
+ 
+-	val = RREG32_SOC15(NBIO, 0, mmRCC_BIF_STRAP0);
+-	return (val & RCC_BIF_STRAP0__STRAP_PX_CAPABLE_MASK) ? true : false;
++	return true;
+ }
+ 
+ static int navi10_set_default_od_settings(struct smu_context *smu)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 49d7fa1d08427..45c8152622008 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -294,16 +294,47 @@ sienna_cichlid_get_allowed_feature_mask(struct smu_context *smu,
+ 	return 0;
+ }
+ 
+-static int sienna_cichlid_check_powerplay_table(struct smu_context *smu)
++static void sienna_cichlid_check_bxco_support(struct smu_context *smu)
+ {
+ 	struct smu_table_context *table_context = &smu->smu_table;
+ 	struct smu_11_0_7_powerplay_table *powerplay_table =
+ 		table_context->power_play_table;
+ 	struct smu_baco_context *smu_baco = &smu->smu_baco;
++	struct amdgpu_device *adev = smu->adev;
++	uint32_t val;
+ 
+ 	if (powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_BACO ||
+-	    powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_MACO)
+-		smu_baco->platform_support = true;
++	    powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_MACO) {
++		val = RREG32_SOC15(NBIO, 0, mmRCC_BIF_STRAP0);
++		smu_baco->platform_support =
++			(val & RCC_BIF_STRAP0__STRAP_PX_CAPABLE_MASK) ? true :
++									false;
++
++		/*
++		 * Disable BACO entry/exit completely on below SKUs to
++		 * avoid hardware intermittent failures.
++		 */
++		if (((adev->pdev->device == 0x73A1) &&
++		    (adev->pdev->revision == 0x00)) ||
++		    ((adev->pdev->device == 0x73BF) &&
++		    (adev->pdev->revision == 0xCF)) ||
++		    ((adev->pdev->device == 0x7422) &&
++		    (adev->pdev->revision == 0x00)))
++			smu_baco->platform_support = false;
++
++	}
++}
++
++static int sienna_cichlid_check_powerplay_table(struct smu_context *smu)
++{
++	struct smu_table_context *table_context = &smu->smu_table;
++	struct smu_11_0_7_powerplay_table *powerplay_table =
++		table_context->power_play_table;
++
++	if (powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_HARDWAREDC)
++		smu->dc_controlled_by_gpio = true;
++
++	sienna_cichlid_check_bxco_support(smu);
+ 
+ 	table_context->thermal_controller_type =
+ 		powerplay_table->thermal_controller_type;
+@@ -1736,13 +1767,11 @@ static int sienna_cichlid_run_btc(struct smu_context *smu)
+ static bool sienna_cichlid_is_baco_supported(struct smu_context *smu)
+ {
+ 	struct amdgpu_device *adev = smu->adev;
+-	uint32_t val;
+ 
+ 	if (amdgpu_sriov_vf(adev) || (!smu_v11_0_baco_is_support(smu)))
+ 		return false;
+ 
+-	val = RREG32_SOC15(NBIO, 0, mmRCC_BIF_STRAP0);
+-	return (val & RCC_BIF_STRAP0__STRAP_PX_CAPABLE_MASK) ? true : false;
++	return true;
+ }
+ 
+ static bool sienna_cichlid_is_mode1_reset_supported(struct smu_context *smu)
+@@ -2806,6 +2835,7 @@ static const struct pptable_funcs sienna_cichlid_ppt_funcs = {
+ 	.get_dpm_ultimate_freq = sienna_cichlid_get_dpm_ultimate_freq,
+ 	.set_soft_freq_limited_range = smu_v11_0_set_soft_freq_limited_range,
+ 	.run_btc = sienna_cichlid_run_btc,
++	.set_power_source = smu_v11_0_set_power_source,
+ 	.get_pp_feature_mask = smu_cmn_get_pp_feature_mask,
+ 	.set_pp_feature_mask = smu_cmn_set_pp_feature_mask,
+ 	.get_gpu_metrics = sienna_cichlid_get_gpu_metrics,
+diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
+index 006e3b896caea..4ca995ce19af6 100644
+--- a/drivers/gpu/drm/drm_drv.c
++++ b/drivers/gpu/drm/drm_drv.c
+@@ -610,7 +610,7 @@ static int drm_dev_init(struct drm_device *dev,
+ 	mutex_init(&dev->clientlist_mutex);
+ 	mutex_init(&dev->master_mutex);
+ 
+-	ret = drmm_add_action(dev, drm_dev_init_release, NULL);
++	ret = drmm_add_action_or_reset(dev, drm_dev_init_release, NULL);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
+index f80e0f28087d1..41efe40bc70f6 100644
+--- a/drivers/gpu/drm/drm_internal.h
++++ b/drivers/gpu/drm/drm_internal.h
+@@ -116,7 +116,8 @@ static inline void drm_vblank_flush_worker(struct drm_vblank_crtc *vblank)
+ 
+ static inline void drm_vblank_destroy_worker(struct drm_vblank_crtc *vblank)
+ {
+-	kthread_destroy_worker(vblank->worker);
++	if (vblank->worker)
++		kthread_destroy_worker(vblank->worker);
+ }
+ 
+ int drm_vblank_worker_init(struct drm_vblank_crtc *vblank);
+diff --git a/drivers/gpu/drm/imx/imx-tve.c b/drivers/gpu/drm/imx/imx-tve.c
+index 2a8d2e32e7b42..9fe6a47331067 100644
+--- a/drivers/gpu/drm/imx/imx-tve.c
++++ b/drivers/gpu/drm/imx/imx-tve.c
+@@ -212,8 +212,9 @@ static int imx_tve_connector_get_modes(struct drm_connector *connector)
+ 	return ret;
+ }
+ 
+-static int imx_tve_connector_mode_valid(struct drm_connector *connector,
+-					struct drm_display_mode *mode)
++static enum drm_mode_status
++imx_tve_connector_mode_valid(struct drm_connector *connector,
++			     struct drm_display_mode *mode)
+ {
+ 	struct imx_tve *tve = con_to_tve(connector);
+ 	unsigned long rate;
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index b7b37082a9d72..1a87cc445b5e1 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2655,6 +2655,7 @@ static const struct display_timing logictechno_lt161010_2nh_timing = {
+ static const struct panel_desc logictechno_lt161010_2nh = {
+ 	.timings = &logictechno_lt161010_2nh_timing,
+ 	.num_timings = 1,
++	.bpc = 6,
+ 	.size = {
+ 		.width = 154,
+ 		.height = 86,
+@@ -2684,6 +2685,7 @@ static const struct display_timing logictechno_lt170410_2whc_timing = {
+ static const struct panel_desc logictechno_lt170410_2whc = {
+ 	.timings = &logictechno_lt170410_2whc_timing,
+ 	.num_timings = 1,
++	.bpc = 8,
+ 	.size = {
+ 		.width = 217,
+ 		.height = 136,
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 5618c1ff34dc3..45682d30d7056 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1275,6 +1275,7 @@ static const struct {
+ 	 */
+ 	{ "Latitude 5480",      0x29 },
+ 	{ "Vostro V131",        0x1d },
++	{ "Vostro 5568",        0x29 },
+ };
+ 
+ static void register_dell_lis3lv02d_i2c_device(struct i801_priv *priv)
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index 8b113ae32dc71..42f1db60ad6f8 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -283,6 +283,7 @@ struct tegra_i2c_dev {
+ 	struct dma_chan *tx_dma_chan;
+ 	struct dma_chan *rx_dma_chan;
+ 	unsigned int dma_buf_size;
++	struct device *dma_dev;
+ 	dma_addr_t dma_phys;
+ 	void *dma_buf;
+ 
+@@ -419,7 +420,7 @@ static int tegra_i2c_dma_submit(struct tegra_i2c_dev *i2c_dev, size_t len)
+ static void tegra_i2c_release_dma(struct tegra_i2c_dev *i2c_dev)
+ {
+ 	if (i2c_dev->dma_buf) {
+-		dma_free_coherent(i2c_dev->dev, i2c_dev->dma_buf_size,
++		dma_free_coherent(i2c_dev->dma_dev, i2c_dev->dma_buf_size,
+ 				  i2c_dev->dma_buf, i2c_dev->dma_phys);
+ 		i2c_dev->dma_buf = NULL;
+ 	}
+@@ -466,10 +467,13 @@ static int tegra_i2c_init_dma(struct tegra_i2c_dev *i2c_dev)
+ 
+ 	i2c_dev->tx_dma_chan = chan;
+ 
++	WARN_ON(i2c_dev->tx_dma_chan->device != i2c_dev->rx_dma_chan->device);
++	i2c_dev->dma_dev = chan->device->dev;
++
+ 	i2c_dev->dma_buf_size = i2c_dev->hw->quirks->max_write_len +
+ 				I2C_PACKET_HEADER_SIZE;
+ 
+-	dma_buf = dma_alloc_coherent(i2c_dev->dev, i2c_dev->dma_buf_size,
++	dma_buf = dma_alloc_coherent(i2c_dev->dma_dev, i2c_dev->dma_buf_size,
+ 				     &dma_phys, GFP_KERNEL | __GFP_NOWARN);
+ 	if (!dma_buf) {
+ 		dev_err(i2c_dev->dev, "failed to allocate DMA buffer\n");
+@@ -1255,7 +1259,7 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev,
+ 
+ 	if (i2c_dev->dma_mode) {
+ 		if (i2c_dev->msg_read) {
+-			dma_sync_single_for_device(i2c_dev->dev,
++			dma_sync_single_for_device(i2c_dev->dma_dev,
+ 						   i2c_dev->dma_phys,
+ 						   xfer_size, DMA_FROM_DEVICE);
+ 
+@@ -1263,7 +1267,7 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev,
+ 			if (err)
+ 				return err;
+ 		} else {
+-			dma_sync_single_for_cpu(i2c_dev->dev,
++			dma_sync_single_for_cpu(i2c_dev->dma_dev,
+ 						i2c_dev->dma_phys,
+ 						xfer_size, DMA_TO_DEVICE);
+ 		}
+@@ -1276,7 +1280,7 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev,
+ 			memcpy(i2c_dev->dma_buf + I2C_PACKET_HEADER_SIZE,
+ 			       msg->buf, msg->len);
+ 
+-			dma_sync_single_for_device(i2c_dev->dev,
++			dma_sync_single_for_device(i2c_dev->dma_dev,
+ 						   i2c_dev->dma_phys,
+ 						   xfer_size, DMA_TO_DEVICE);
+ 
+@@ -1327,7 +1331,7 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev,
+ 		}
+ 
+ 		if (i2c_dev->msg_read && i2c_dev->msg_err == I2C_ERR_NONE) {
+-			dma_sync_single_for_cpu(i2c_dev->dev,
++			dma_sync_single_for_cpu(i2c_dev->dma_dev,
+ 						i2c_dev->dma_phys,
+ 						xfer_size, DMA_FROM_DEVICE);
+ 
+diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
+index 0a793e7cd53ee..38d4a910bc525 100644
+--- a/drivers/iio/adc/at91_adc.c
++++ b/drivers/iio/adc/at91_adc.c
+@@ -616,8 +616,10 @@ static struct iio_trigger *at91_adc_allocate_trigger(struct iio_dev *idev,
+ 	trig->ops = &at91_adc_trigger_ops;
+ 
+ 	ret = iio_trigger_register(trig);
+-	if (ret)
++	if (ret) {
++		iio_trigger_free(trig);
+ 		return NULL;
++	}
+ 
+ 	return trig;
+ }
+diff --git a/drivers/iio/adc/mp2629_adc.c b/drivers/iio/adc/mp2629_adc.c
+index 331a9a7282170..acd9420c04162 100644
+--- a/drivers/iio/adc/mp2629_adc.c
++++ b/drivers/iio/adc/mp2629_adc.c
+@@ -56,7 +56,8 @@ static struct iio_map mp2629_adc_maps[] = {
+ 	MP2629_MAP(SYSTEM_VOLT, "system-volt"),
+ 	MP2629_MAP(INPUT_VOLT, "input-volt"),
+ 	MP2629_MAP(BATT_CURRENT, "batt-current"),
+-	MP2629_MAP(INPUT_CURRENT, "input-current")
++	MP2629_MAP(INPUT_CURRENT, "input-current"),
++	{ }
+ };
+ 
+ static int mp2629_read_raw(struct iio_dev *indio_dev,
+@@ -73,7 +74,7 @@ static int mp2629_read_raw(struct iio_dev *indio_dev,
+ 		if (ret)
+ 			return ret;
+ 
+-		if (chan->address == MP2629_INPUT_VOLT)
++		if (chan->channel == MP2629_INPUT_VOLT)
+ 			rval &= GENMASK(6, 0);
+ 		*val = rval;
+ 		return IIO_VAL_INT;
+diff --git a/drivers/iio/pressure/ms5611_spi.c b/drivers/iio/pressure/ms5611_spi.c
+index 45d3a7d5be8e4..f7743ee3318f8 100644
+--- a/drivers/iio/pressure/ms5611_spi.c
++++ b/drivers/iio/pressure/ms5611_spi.c
+@@ -94,7 +94,7 @@ static int ms5611_spi_probe(struct spi_device *spi)
+ 	spi_set_drvdata(spi, indio_dev);
+ 
+ 	spi->mode = SPI_MODE_0;
+-	spi->max_speed_hz = 20000000;
++	spi->max_speed_hz = min(spi->max_speed_hz, 20000000U);
+ 	spi->bits_per_word = 8;
+ 	ret = spi_setup(spi);
+ 	if (ret < 0)
+diff --git a/drivers/iio/trigger/iio-trig-sysfs.c b/drivers/iio/trigger/iio-trig-sysfs.c
+index 2277d6336ac06..9ed5b9405ade0 100644
+--- a/drivers/iio/trigger/iio-trig-sysfs.c
++++ b/drivers/iio/trigger/iio-trig-sysfs.c
+@@ -209,9 +209,13 @@ static int iio_sysfs_trigger_remove(int id)
+ 
+ static int __init iio_sysfs_trig_init(void)
+ {
++	int ret;
+ 	device_initialize(&iio_sysfs_trig_dev);
+ 	dev_set_name(&iio_sysfs_trig_dev, "iio_sysfs_trigger");
+-	return device_add(&iio_sysfs_trig_dev);
++	ret = device_add(&iio_sysfs_trig_dev);
++	if (ret)
++		put_device(&iio_sysfs_trig_dev);
++	return ret;
+ }
+ module_init(iio_sysfs_trig_init);
+ 
+diff --git a/drivers/input/joystick/iforce/iforce-main.c b/drivers/input/joystick/iforce/iforce-main.c
+index b86de1312512b..84b87526b7ba3 100644
+--- a/drivers/input/joystick/iforce/iforce-main.c
++++ b/drivers/input/joystick/iforce/iforce-main.c
+@@ -273,22 +273,22 @@ int iforce_init_device(struct device *parent, u16 bustype,
+  * Get device info.
+  */
+ 
+-	if (!iforce_get_id_packet(iforce, 'M', buf, &len) || len < 3)
++	if (!iforce_get_id_packet(iforce, 'M', buf, &len) && len >= 3)
+ 		input_dev->id.vendor = get_unaligned_le16(buf + 1);
+ 	else
+ 		dev_warn(&iforce->dev->dev, "Device does not respond to id packet M\n");
+ 
+-	if (!iforce_get_id_packet(iforce, 'P', buf, &len) || len < 3)
++	if (!iforce_get_id_packet(iforce, 'P', buf, &len) && len >= 3)
+ 		input_dev->id.product = get_unaligned_le16(buf + 1);
+ 	else
+ 		dev_warn(&iforce->dev->dev, "Device does not respond to id packet P\n");
+ 
+-	if (!iforce_get_id_packet(iforce, 'B', buf, &len) || len < 3)
++	if (!iforce_get_id_packet(iforce, 'B', buf, &len) && len >= 3)
+ 		iforce->device_memory.end = get_unaligned_le16(buf + 1);
+ 	else
+ 		dev_warn(&iforce->dev->dev, "Device does not respond to id packet B\n");
+ 
+-	if (!iforce_get_id_packet(iforce, 'N', buf, &len) || len < 2)
++	if (!iforce_get_id_packet(iforce, 'N', buf, &len) && len >= 2)
+ 		ff_effects = buf[1];
+ 	else
+ 		dev_warn(&iforce->dev->dev, "Device does not respond to id packet N\n");
+diff --git a/drivers/input/serio/i8042.c b/drivers/input/serio/i8042.c
+index a9f68f535b727..8648b4c46138b 100644
+--- a/drivers/input/serio/i8042.c
++++ b/drivers/input/serio/i8042.c
+@@ -1543,8 +1543,6 @@ static int i8042_probe(struct platform_device *dev)
+ {
+ 	int error;
+ 
+-	i8042_platform_device = dev;
+-
+ 	if (i8042_reset == I8042_RESET_ALWAYS) {
+ 		error = i8042_controller_selftest();
+ 		if (error)
+@@ -1582,7 +1580,6 @@ static int i8042_probe(struct platform_device *dev)
+ 	i8042_free_aux_ports();	/* in case KBD failed but AUX not */
+ 	i8042_free_irqs();
+ 	i8042_controller_reset(false);
+-	i8042_platform_device = NULL;
+ 
+ 	return error;
+ }
+@@ -1592,7 +1589,6 @@ static int i8042_remove(struct platform_device *dev)
+ 	i8042_unregister_ports();
+ 	i8042_free_irqs();
+ 	i8042_controller_reset(false);
+-	i8042_platform_device = NULL;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index fb911b6c418f2..86fd49ae7f612 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -669,7 +669,7 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
+ 	 * Since it is a second level only translation setup, we should
+ 	 * set SRE bit as well (addresses are expected to be GPAs).
+ 	 */
+-	if (pasid != PASID_RID2PASID)
++	if (pasid != PASID_RID2PASID && ecap_srs(iommu->ecap))
+ 		pasid_set_sre(pte);
+ 	pasid_set_present(pte);
+ 	pasid_flush_caches(iommu, pte, pasid, did);
+@@ -704,7 +704,8 @@ int intel_pasid_setup_pass_through(struct intel_iommu *iommu,
+ 	 * We should set SRE bit as well since the addresses are expected
+ 	 * to be GPAs.
+ 	 */
+-	pasid_set_sre(pte);
++	if (ecap_srs(iommu->ecap))
++		pasid_set_sre(pte);
+ 	pasid_set_present(pte);
+ 	pasid_flush_caches(iommu, pte, pasid, did);
+ 
+diff --git a/drivers/isdn/mISDN/core.c b/drivers/isdn/mISDN/core.c
+index 7ea0100f218a0..90ee56d07a6e9 100644
+--- a/drivers/isdn/mISDN/core.c
++++ b/drivers/isdn/mISDN/core.c
+@@ -222,7 +222,7 @@ mISDN_register_device(struct mISDNdevice *dev,
+ 
+ 	err = get_free_devid();
+ 	if (err < 0)
+-		goto error1;
++		return err;
+ 	dev->id = err;
+ 
+ 	device_initialize(&dev->dev);
+diff --git a/drivers/isdn/mISDN/dsp_pipeline.c b/drivers/isdn/mISDN/dsp_pipeline.c
+index c3b2c99b5cd5c..cfbcd9e973c2e 100644
+--- a/drivers/isdn/mISDN/dsp_pipeline.c
++++ b/drivers/isdn/mISDN/dsp_pipeline.c
+@@ -77,6 +77,7 @@ int mISDN_dsp_element_register(struct mISDN_dsp_element *elem)
+ 	if (!entry)
+ 		return -ENOMEM;
+ 
++	INIT_LIST_HEAD(&entry->list);
+ 	entry->elem = elem;
+ 
+ 	entry->dev.class = elements_class;
+@@ -107,7 +108,7 @@ err2:
+ 	device_unregister(&entry->dev);
+ 	return ret;
+ err1:
+-	kfree(entry);
++	put_device(&entry->dev);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(mISDN_dsp_element_register);
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index b839705654d4e..20171c9d8952e 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -573,7 +573,7 @@ static void list_version_get_needed(struct target_type *tt, void *needed_param)
+     size_t *needed = needed_param;
+ 
+     *needed += sizeof(struct dm_target_versions);
+-    *needed += strlen(tt->name);
++    *needed += strlen(tt->name) + 1;
+     *needed += ALIGN_MASK;
+ }
+ 
+@@ -638,7 +638,7 @@ static int __list_versions(struct dm_ioctl *param, size_t param_size, const char
+ 	iter_info.old_vers = NULL;
+ 	iter_info.vers = vers;
+ 	iter_info.flags = 0;
+-	iter_info.end = (char *)vers+len;
++	iter_info.end = (char *)vers + needed;
+ 
+ 	/*
+ 	 * Now loop through filling out the names & versions.
+diff --git a/drivers/mfd/lpc_ich.c b/drivers/mfd/lpc_ich.c
+index 3bbb29a7e7a57..2411b7a2e6f47 100644
+--- a/drivers/mfd/lpc_ich.c
++++ b/drivers/mfd/lpc_ich.c
+@@ -63,6 +63,8 @@
+ #define SPIBASE_BYT		0x54
+ #define SPIBASE_BYT_SZ		512
+ #define SPIBASE_BYT_EN		BIT(1)
++#define BYT_BCR			0xfc
++#define BYT_BCR_WPD		BIT(0)
+ 
+ #define SPIBASE_LPT		0x3800
+ #define SPIBASE_LPT_SZ		512
+@@ -1083,12 +1085,57 @@ wdt_done:
+ 	return ret;
+ }
+ 
++static bool lpc_ich_byt_set_writeable(void __iomem *base, void *data)
++{
++	u32 val;
++
++	val = readl(base + BYT_BCR);
++	if (!(val & BYT_BCR_WPD)) {
++		val |= BYT_BCR_WPD;
++		writel(val, base + BYT_BCR);
++		val = readl(base + BYT_BCR);
++	}
++
++	return val & BYT_BCR_WPD;
++}
++
++static bool lpc_ich_lpt_set_writeable(void __iomem *base, void *data)
++{
++	struct pci_dev *pdev = data;
++	u32 bcr;
++
++	pci_read_config_dword(pdev, BCR, &bcr);
++	if (!(bcr & BCR_WPD)) {
++		bcr |= BCR_WPD;
++		pci_write_config_dword(pdev, BCR, bcr);
++		pci_read_config_dword(pdev, BCR, &bcr);
++	}
++
++	return bcr & BCR_WPD;
++}
++
++static bool lpc_ich_bxt_set_writeable(void __iomem *base, void *data)
++{
++	unsigned int spi = PCI_DEVFN(13, 2);
++	struct pci_bus *bus = data;
++	u32 bcr;
++
++	pci_bus_read_config_dword(bus, spi, BCR, &bcr);
++	if (!(bcr & BCR_WPD)) {
++		bcr |= BCR_WPD;
++		pci_bus_write_config_dword(bus, spi, BCR, bcr);
++		pci_bus_read_config_dword(bus, spi, BCR, &bcr);
++	}
++
++	return bcr & BCR_WPD;
++}
++
+ static int lpc_ich_init_spi(struct pci_dev *dev)
+ {
+ 	struct lpc_ich_priv *priv = pci_get_drvdata(dev);
+ 	struct resource *res = &intel_spi_res[0];
+ 	struct intel_spi_boardinfo *info;
+-	u32 spi_base, rcba, bcr;
++	u32 spi_base, rcba;
+ 
+ 	info = devm_kzalloc(&dev->dev, sizeof(*info), GFP_KERNEL);
+ 	if (!info)
+@@ -1102,6 +1149,8 @@ static int lpc_ich_init_spi(struct pci_dev *dev)
+ 		if (spi_base & SPIBASE_BYT_EN) {
+ 			res->start = spi_base & ~(SPIBASE_BYT_SZ - 1);
+ 			res->end = res->start + SPIBASE_BYT_SZ - 1;
++
++			info->set_writeable = lpc_ich_byt_set_writeable;
+ 		}
+ 		break;
+ 
+@@ -1112,8 +1161,8 @@ static int lpc_ich_init_spi(struct pci_dev *dev)
+ 			res->start = spi_base + SPIBASE_LPT;
+ 			res->end = res->start + SPIBASE_LPT_SZ - 1;
+ 
+-			pci_read_config_dword(dev, BCR, &bcr);
+-			info->writeable = !!(bcr & BCR_WPD);
++			info->set_writeable = lpc_ich_lpt_set_writeable;
++			info->data = dev;
+ 		}
+ 		break;
+ 
+@@ -1134,8 +1183,8 @@ static int lpc_ich_init_spi(struct pci_dev *dev)
+ 			res->start = spi_base & 0xfffffff0;
+ 			res->end = res->start + SPIBASE_APL_SZ - 1;
+ 
+-			pci_bus_read_config_dword(bus, spi, BCR, &bcr);
+-			info->writeable = !!(bcr & BCR_WPD);
++			info->set_writeable = lpc_ich_bxt_set_writeable;
++			info->data = bus;
+ 		}
+ 
+ 		pci_bus_write_config_byte(bus, p2sb, 0xe1, 0x1);
+diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+index a49782dd903cd..d4d388f021ccd 100644
+--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
++++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+@@ -852,6 +852,7 @@ static int qp_notify_peer_local(bool attach, struct vmci_handle handle)
+ 	u32 context_id = vmci_get_context_id();
+ 	struct vmci_event_qp ev;
+ 
++	memset(&ev, 0, sizeof(ev));
+ 	ev.msg.hdr.dst = vmci_make_handle(context_id, VMCI_EVENT_HANDLER);
+ 	ev.msg.hdr.src = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+ 					  VMCI_CONTEXT_RESOURCE_ID);
+@@ -1465,6 +1466,7 @@ static int qp_notify_peer(bool attach,
+ 	 * kernel.
+ 	 */
+ 
++	memset(&ev, 0, sizeof(ev));
+ 	ev.msg.hdr.dst = vmci_make_handle(peer_id, VMCI_EVENT_HANDLER);
+ 	ev.msg.hdr.src = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+ 					  VMCI_CONTEXT_RESOURCE_ID);
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index eb82f6aac951f..7d9ec91e081b2 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -1128,7 +1128,13 @@ u32 mmc_select_voltage(struct mmc_host *host, u32 ocr)
+ 		mmc_power_cycle(host, ocr);
+ 	} else {
+ 		bit = fls(ocr) - 1;
+-		ocr &= 3 << bit;
++		/*
++		 * The bit variable represents the highest voltage bit set in
++		 * the OCR register.
++		 * To keep a range of 2 values (e.g. 3.2V/3.3V and 3.3V/3.4V),
++		 * we must shift the mask '3' with (bit - 1).
++		 */
++		ocr &= 3 << (bit - 1);
+ 		if (bit != host->ios.vdd)
+ 			dev_warn(mmc_dev(host), "exceeding card's volts\n");
+ 	}
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 449562122adc1..1f1bdd34dd554 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -1621,14 +1621,14 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
+ 	if (imx_data->socdata->flags & ESDHC_FLAG_ERR004536)
+ 		host->quirks |= SDHCI_QUIRK_BROKEN_ADMA;
+ 
+-	if (host->caps & MMC_CAP_8_BIT_DATA &&
++	if (host->mmc->caps & MMC_CAP_8_BIT_DATA &&
+ 	    imx_data->socdata->flags & ESDHC_FLAG_HS400)
+ 		host->quirks2 |= SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400;
+ 
+ 	if (imx_data->socdata->flags & ESDHC_FLAG_BROKEN_AUTO_CMD23)
+ 		host->quirks2 |= SDHCI_QUIRK2_ACMD23_BROKEN;
+ 
+-	if (host->caps & MMC_CAP_8_BIT_DATA &&
++	if (host->mmc->caps & MMC_CAP_8_BIT_DATA &&
+ 	    imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {
+ 		host->mmc->caps2 |= MMC_CAP2_HS400_ES;
+ 		host->mmc_host_ops.hs400_enhanced_strobe =
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 7eb9a62ee0743..8b02fe3916d12 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -1799,6 +1799,8 @@ static int amd_probe(struct sdhci_pci_chip *chip)
+ 		}
+ 	}
+ 
++	pci_dev_put(smbus_dev);
++
+ 	if (gen == AMD_CHIPSET_BEFORE_ML || gen == AMD_CHIPSET_CZ)
+ 		chip->quirks2 |= SDHCI_QUIRK2_CLEAR_TRANSFERMODE_REG_BEFORE_CMD;
+ 
+diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
+index 8c357e3b78d7c..72234790a310b 100644
+--- a/drivers/mmc/host/sdhci-pci-o2micro.c
++++ b/drivers/mmc/host/sdhci-pci-o2micro.c
+@@ -31,6 +31,7 @@
+ #define O2_SD_CAPS		0xE0
+ #define O2_SD_ADMA1		0xE2
+ #define O2_SD_ADMA2		0xE7
++#define O2_SD_MISC_CTRL2	0xF0
+ #define O2_SD_INF_MOD		0xF1
+ #define O2_SD_MISC_CTRL4	0xFC
+ #define O2_SD_MISC_CTRL		0x1C0
+@@ -822,6 +823,12 @@ static int sdhci_pci_o2_probe(struct sdhci_pci_chip *chip)
+ 		/* Set Tuning Windows to 5 */
+ 		pci_write_config_byte(chip->pdev,
+ 				O2_SD_TUNING_CTRL, 0x55);
++		//Adjust 1st and 2nd CD debounce time
++		pci_read_config_dword(chip->pdev, O2_SD_MISC_CTRL2, &scratch_32);
++		scratch_32 &= 0xFFE7FFFF;
++		scratch_32 |= 0x00180000;
++		pci_write_config_dword(chip->pdev, O2_SD_MISC_CTRL2, scratch_32);
++		pci_write_config_dword(chip->pdev, O2_SD_DETECT_SETTING, 1);
+ 		/* Lock WP */
+ 		ret = pci_read_config_byte(chip->pdev,
+ 					   O2_SD_LOCK_WP, &scratch);
+diff --git a/drivers/mtd/spi-nor/controllers/intel-spi-pci.c b/drivers/mtd/spi-nor/controllers/intel-spi-pci.c
+index 555fe55d14aee..8a3c1f3c2d2e8 100644
+--- a/drivers/mtd/spi-nor/controllers/intel-spi-pci.c
++++ b/drivers/mtd/spi-nor/controllers/intel-spi-pci.c
+@@ -16,12 +16,30 @@
+ #define BCR		0xdc
+ #define BCR_WPD		BIT(0)
+ 
++static bool intel_spi_pci_set_writeable(void __iomem *base, void *data)
++{
++	struct pci_dev *pdev = data;
++	u32 bcr;
++
++	/* Try to make the chip read/write */
++	pci_read_config_dword(pdev, BCR, &bcr);
++	if (!(bcr & BCR_WPD)) {
++		bcr |= BCR_WPD;
++		pci_write_config_dword(pdev, BCR, bcr);
++		pci_read_config_dword(pdev, BCR, &bcr);
++	}
++
++	return bcr & BCR_WPD;
++}
++
+ static const struct intel_spi_boardinfo bxt_info = {
+ 	.type = INTEL_SPI_BXT,
++	.set_writeable = intel_spi_pci_set_writeable,
+ };
+ 
+ static const struct intel_spi_boardinfo cnl_info = {
+ 	.type = INTEL_SPI_CNL,
++	.set_writeable = intel_spi_pci_set_writeable,
+ };
+ 
+ static int intel_spi_pci_probe(struct pci_dev *pdev,
+@@ -29,7 +47,6 @@ static int intel_spi_pci_probe(struct pci_dev *pdev,
+ {
+ 	struct intel_spi_boardinfo *info;
+ 	struct intel_spi *ispi;
+-	u32 bcr;
+ 	int ret;
+ 
+ 	ret = pcim_enable_device(pdev);
+@@ -41,15 +58,7 @@ static int intel_spi_pci_probe(struct pci_dev *pdev,
+ 	if (!info)
+ 		return -ENOMEM;
+ 
+-	/* Try to make the chip read/write */
+-	pci_read_config_dword(pdev, BCR, &bcr);
+-	if (!(bcr & BCR_WPD)) {
+-		bcr |= BCR_WPD;
+-		pci_write_config_dword(pdev, BCR, bcr);
+-		pci_read_config_dword(pdev, BCR, &bcr);
+-	}
+-	info->writeable = !!(bcr & BCR_WPD);
+-
++	info->data = pdev;
+ 	ispi = intel_spi_probe(&pdev->dev, &pdev->resource[0], info);
+ 	if (IS_ERR(ispi))
+ 		return PTR_ERR(ispi);
+diff --git a/drivers/mtd/spi-nor/controllers/intel-spi.c b/drivers/mtd/spi-nor/controllers/intel-spi.c
+index b54a56a68100e..6c802db6b4af0 100644
+--- a/drivers/mtd/spi-nor/controllers/intel-spi.c
++++ b/drivers/mtd/spi-nor/controllers/intel-spi.c
+@@ -53,17 +53,17 @@
+ #define FRACC				0x50
+ 
+ #define FREG(n)				(0x54 + ((n) * 4))
+-#define FREG_BASE_MASK			0x3fff
++#define FREG_BASE_MASK			GENMASK(14, 0)
+ #define FREG_LIMIT_SHIFT		16
+-#define FREG_LIMIT_MASK			(0x03fff << FREG_LIMIT_SHIFT)
++#define FREG_LIMIT_MASK			GENMASK(30, 16)
+ 
+ /* Offset is from @ispi->pregs */
+ #define PR(n)				((n) * 4)
+ #define PR_WPE				BIT(31)
+ #define PR_LIMIT_SHIFT			16
+-#define PR_LIMIT_MASK			(0x3fff << PR_LIMIT_SHIFT)
++#define PR_LIMIT_MASK			GENMASK(30, 16)
+ #define PR_RPE				BIT(15)
+-#define PR_BASE_MASK			0x3fff
++#define PR_BASE_MASK			GENMASK(14, 0)
+ 
+ /* Offsets are from @ispi->sregs */
+ #define SSFSTS_CTL			0x00
+@@ -117,7 +117,7 @@
+ #define ERASE_OPCODE_SHIFT		8
+ #define ERASE_OPCODE_MASK		(0xff << ERASE_OPCODE_SHIFT)
+ #define ERASE_64K_OPCODE_SHIFT		16
+-#define ERASE_64K_OPCODE_MASK		(0xff << ERASE_OPCODE_SHIFT)
++#define ERASE_64K_OPCODE_MASK		(0xff << ERASE_64K_OPCODE_SHIFT)
+ 
+ #define INTEL_SPI_TIMEOUT		5000 /* ms */
+ #define INTEL_SPI_FIFO_SZ		64
+@@ -132,7 +132,6 @@
+  * @sregs: Start of software sequencer registers
+  * @nregions: Maximum number of regions
+  * @pr_num: Maximum number of protected range registers
+- * @writeable: Is the chip writeable
+  * @locked: Is SPI setting locked
+  * @swseq_reg: Use SW sequencer in register reads/writes
+  * @swseq_erase: Use SW sequencer in erase operation
+@@ -150,7 +149,6 @@ struct intel_spi {
+ 	void __iomem *sregs;
+ 	size_t nregions;
+ 	size_t pr_num;
+-	bool writeable;
+ 	bool locked;
+ 	bool swseq_reg;
+ 	bool swseq_erase;
+@@ -305,6 +303,14 @@ static int intel_spi_wait_sw_busy(struct intel_spi *ispi)
+ 				  INTEL_SPI_TIMEOUT * 1000);
+ }
+ 
++static bool intel_spi_set_writeable(struct intel_spi *ispi)
++{
++	if (!ispi->info->set_writeable)
++		return false;
++
++	return ispi->info->set_writeable(ispi->base, ispi->info->data);
++}
++
+ static int intel_spi_init(struct intel_spi *ispi)
+ {
+ 	u32 opmenu0, opmenu1, lvscc, uvscc, val;
+@@ -317,19 +323,6 @@ static int intel_spi_init(struct intel_spi *ispi)
+ 		ispi->nregions = BYT_FREG_NUM;
+ 		ispi->pr_num = BYT_PR_NUM;
+ 		ispi->swseq_reg = true;
+-
+-		if (writeable) {
+-			/* Disable write protection */
+-			val = readl(ispi->base + BYT_BCR);
+-			if (!(val & BYT_BCR_WPD)) {
+-				val |= BYT_BCR_WPD;
+-				writel(val, ispi->base + BYT_BCR);
+-				val = readl(ispi->base + BYT_BCR);
+-			}
+-
+-			ispi->writeable = !!(val & BYT_BCR_WPD);
+-		}
+-
+ 		break;
+ 
+ 	case INTEL_SPI_LPT:
+@@ -359,6 +352,12 @@ static int intel_spi_init(struct intel_spi *ispi)
+ 		return -EINVAL;
+ 	}
+ 
++	/* Try to disable write protection if user asked to do so */
++	if (writeable && !intel_spi_set_writeable(ispi)) {
++		dev_warn(ispi->dev, "can't disable chip write protection\n");
++		writeable = false;
++	}
++
+ 	/* Disable #SMI generation from HW sequencer */
+ 	val = readl(ispi->base + HSFSTS_CTL);
+ 	val &= ~HSFSTS_CTL_FSMIE;
+@@ -885,9 +884,12 @@ static void intel_spi_fill_partition(struct intel_spi *ispi,
+ 		/*
+ 		 * If any of the regions have protection bits set, make the
+ 		 * whole partition read-only to be on the safe side.
++		 *
++		 * Also if the user did not ask the chip to be writeable
++		 * mask the bit too.
+ 		 */
+-		if (intel_spi_is_protected(ispi, base, limit))
+-			ispi->writeable = false;
++		if (!writeable || intel_spi_is_protected(ispi, base, limit))
++			part->mask_flags |= MTD_WRITEABLE;
+ 
+ 		end = (limit << 12) + 4096;
+ 		if (end > part->size)
+@@ -928,7 +930,6 @@ struct intel_spi *intel_spi_probe(struct device *dev,
+ 
+ 	ispi->dev = dev;
+ 	ispi->info = info;
+-	ispi->writeable = info->writeable;
+ 
+ 	ret = intel_spi_init(ispi);
+ 	if (ret)
+@@ -946,10 +947,6 @@ struct intel_spi *intel_spi_probe(struct device *dev,
+ 
+ 	intel_spi_fill_partition(ispi, &part);
+ 
+-	/* Prevent writes if not explicitly enabled */
+-	if (!ispi->writeable || !writeable)
+-		ispi->nor.mtd.flags &= ~MTD_WRITEABLE;
+-
+ 	ret = mtd_device_register(&ispi->nor.mtd, &part, 1);
+ 	if (ret)
+ 		return ERR_PTR(ret);
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index 52414ac2c901a..1722d4091ea3f 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -4488,13 +4488,19 @@ static struct pci_driver ena_pci_driver = {
+ 
+ static int __init ena_init(void)
+ {
++	int ret;
++
+ 	ena_wq = create_singlethread_workqueue(DRV_MODULE_NAME);
+ 	if (!ena_wq) {
+ 		pr_err("Failed to create workqueue\n");
+ 		return -ENOMEM;
+ 	}
+ 
+-	return pci_register_driver(&ena_pci_driver);
++	ret = pci_register_driver(&ena_pci_driver);
++	if (ret)
++		destroy_workqueue(ena_wq);
++
++	return ret;
+ }
+ 
+ static void __exit ena_cleanup(void)
+diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c
+index c26c9b0c00d8f..fe3ca3af431a4 100644
+--- a/drivers/net/ethernet/atheros/ag71xx.c
++++ b/drivers/net/ethernet/atheros/ag71xx.c
+@@ -1468,7 +1468,7 @@ static int ag71xx_open(struct net_device *ndev)
+ 	if (ret) {
+ 		netif_err(ag, link, ndev, "phylink_of_phy_connect filed with err: %i\n",
+ 			  ret);
+-		goto err;
++		return ret;
+ 	}
+ 
+ 	max_frame_len = ag71xx_max_frame_len(ndev->mtu);
+@@ -1489,6 +1489,7 @@ static int ag71xx_open(struct net_device *ndev)
+ 
+ err:
+ 	ag71xx_rings_cleanup(ag);
++	phylink_disconnect_phy(ag->phylink);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig
+index 2b6d929d462f4..7b79528d6eed2 100644
+--- a/drivers/net/ethernet/broadcom/Kconfig
++++ b/drivers/net/ethernet/broadcom/Kconfig
+@@ -69,7 +69,7 @@ config BCMGENET
+ 	select BCM7XXX_PHY
+ 	select MDIO_BCM_UNIMAC
+ 	select DIMLIB
+-	select BROADCOM_PHY if (ARCH_BCM2835 && PTP_1588_CLOCK_OPTIONAL)
++	select BROADCOM_PHY if ARCH_BCM2835
+ 	help
+ 	  This driver supports the built-in Ethernet MACs found in the
+ 	  Broadcom BCM7xxx Set Top Box family chipset.
+diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
+index 6290d8bedc92e..9960127f612ea 100644
+--- a/drivers/net/ethernet/broadcom/bgmac.c
++++ b/drivers/net/ethernet/broadcom/bgmac.c
+@@ -1568,7 +1568,6 @@ void bgmac_enet_remove(struct bgmac *bgmac)
+ 	phy_disconnect(bgmac->net_dev->phydev);
+ 	netif_napi_del(&bgmac->napi);
+ 	bgmac_dma_free(bgmac);
+-	free_netdev(bgmac->net_dev);
+ }
+ EXPORT_SYMBOL_GPL(bgmac_enet_remove);
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 8311473d537bd..92f54e3333958 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -13111,8 +13111,16 @@ static struct pci_driver bnxt_pci_driver = {
+ 
+ static int __init bnxt_init(void)
+ {
++	int err;
++
+ 	bnxt_debug_init();
+-	return pci_register_driver(&bnxt_pci_driver);
++	err = pci_register_driver(&bnxt_pci_driver);
++	if (err) {
++		bnxt_debug_exit();
++		return err;
++	}
++
++	return 0;
+ }
+ 
+ static void __exit bnxt_exit(void)
+diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c
+index e0d18e9171080..c4dc6e2ccd6b7 100644
+--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
++++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
+@@ -1798,13 +1798,10 @@ static int liquidio_open(struct net_device *netdev)
+ 
+ 	ifstate_set(lio, LIO_IFSTATE_RUNNING);
+ 
+-	if (OCTEON_CN23XX_PF(oct)) {
+-		if (!oct->msix_on)
+-			if (setup_tx_poll_fn(netdev))
+-				return -1;
+-	} else {
+-		if (setup_tx_poll_fn(netdev))
+-			return -1;
++	if (!OCTEON_CN23XX_PF(oct) || (OCTEON_CN23XX_PF(oct) && !oct->msix_on)) {
++		ret = setup_tx_poll_fn(netdev);
++		if (ret)
++			goto err_poll;
+ 	}
+ 
+ 	netif_tx_start_all_queues(netdev);
+@@ -1817,7 +1814,7 @@ static int liquidio_open(struct net_device *netdev)
+ 	/* tell Octeon to start forwarding packets to host */
+ 	ret = send_rx_ctrl_cmd(lio, 1);
+ 	if (ret)
+-		return ret;
++		goto err_rx_ctrl;
+ 
+ 	/* start periodical statistics fetch */
+ 	INIT_DELAYED_WORK(&lio->stats_wk.work, lio_fetch_stats);
+@@ -1828,6 +1825,27 @@ static int liquidio_open(struct net_device *netdev)
+ 	dev_info(&oct->pci_dev->dev, "%s interface is opened\n",
+ 		 netdev->name);
+ 
++	return 0;
++
++err_rx_ctrl:
++	if (!OCTEON_CN23XX_PF(oct) || (OCTEON_CN23XX_PF(oct) && !oct->msix_on))
++		cleanup_tx_poll_fn(netdev);
++err_poll:
++	if (lio->ptp_clock) {
++		ptp_clock_unregister(lio->ptp_clock);
++		lio->ptp_clock = NULL;
++	}
++
++	if (oct->props[lio->ifidx].napi_enabled == 1) {
++		list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list)
++			napi_disable(napi);
++
++		oct->props[lio->ifidx].napi_enabled = 0;
++
++		if (OCTEON_CN23XX_PF(oct))
++			oct->droq[0]->ops.poll_mode = 0;
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+index 4f1d585485d7a..6ec042d48cd1f 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+@@ -1502,8 +1502,15 @@ static struct pci_driver hinic_driver = {
+ 
+ static int __init hinic_module_init(void)
+ {
++	int ret;
++
+ 	hinic_dbg_register_debugfs(HINIC_DRV_NAME);
+-	return pci_register_driver(&hinic_driver);
++
++	ret = pci_register_driver(&hinic_driver);
++	if (ret)
++		hinic_dbg_unregister_debugfs();
++
++	return ret;
+ }
+ 
+ static void __exit hinic_module_exit(void)
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+index f60ffef33e0ce..00b6985edea04 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+@@ -569,8 +569,14 @@ int ionic_port_reset(struct ionic *ionic)
+ 
+ static int __init ionic_init_module(void)
+ {
++	int ret;
++
+ 	ionic_debugfs_create();
+-	return ionic_bus_register_driver();
++	ret = ionic_bus_register_driver();
++	if (ret)
++		ionic_debugfs_destroy();
++
++	return ret;
+ }
+ 
+ static void __exit ionic_cleanup_module(void)
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index 6b269a72388b8..5869bc2c3aa79 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -139,7 +139,7 @@ static struct macvlan_source_entry *macvlan_hash_lookup_source(
+ 	u32 idx = macvlan_eth_hash(addr);
+ 	struct hlist_head *h = &vlan->port->vlan_source_hash[idx];
+ 
+-	hlist_for_each_entry_rcu(entry, h, hlist) {
++	hlist_for_each_entry_rcu(entry, h, hlist, lockdep_rtnl_is_held()) {
+ 		if (ether_addr_equal_64bits(entry->addr, addr) &&
+ 		    entry->vlan == vlan)
+ 			return entry;
+@@ -1176,7 +1176,7 @@ void macvlan_common_setup(struct net_device *dev)
+ {
+ 	ether_setup(dev);
+ 
+-	dev->min_mtu		= 0;
++	/* ether_setup() has set dev->min_mtu to ETH_MIN_MTU. */
+ 	dev->max_mtu		= ETH_MAX_MTU;
+ 	dev->priv_flags	       &= ~IFF_TX_SKB_SHARING;
+ 	netif_keep_dst(dev);
+@@ -1614,7 +1614,7 @@ static int macvlan_fill_info_macaddr(struct sk_buff *skb,
+ 	struct hlist_head *h = &vlan->port->vlan_source_hash[i];
+ 	struct macvlan_source_entry *entry;
+ 
+-	hlist_for_each_entry_rcu(entry, h, hlist) {
++	hlist_for_each_entry_rcu(entry, h, hlist, lockdep_rtnl_is_held()) {
+ 		if (entry->vlan != vlan)
+ 			continue;
+ 		if (nla_put(skb, IFLA_MACVLAN_MACADDR, ETH_ALEN, entry->addr))
+diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c
+index 3160443ef3b9e..5d96dc1b00b36 100644
+--- a/drivers/net/thunderbolt.c
++++ b/drivers/net/thunderbolt.c
+@@ -1343,12 +1343,21 @@ static int __init tbnet_init(void)
+ 				  TBNET_MATCH_FRAGS_ID);
+ 
+ 	ret = tb_register_property_dir("network", tbnet_dir);
+-	if (ret) {
+-		tb_property_free_dir(tbnet_dir);
+-		return ret;
+-	}
++	if (ret)
++		goto err_free_dir;
++
++	ret = tb_register_service_driver(&tbnet_driver);
++	if (ret)
++		goto err_unregister;
+ 
+-	return tb_register_service_driver(&tbnet_driver);
++	return 0;
++
++err_unregister:
++	tb_unregister_property_dir("network", tbnet_dir);
++err_free_dir:
++	tb_property_free_dir(tbnet_dir);
++
++	return ret;
+ }
+ module_init(tbnet_init);
+ 
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index 65d42f5d42a3c..e1cd4c2de2d30 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -61,6 +61,7 @@ struct smsc95xx_priv {
+ 	u8 suspend_flags;
+ 	struct mii_bus *mdiobus;
+ 	struct phy_device *phydev;
++	struct task_struct *pm_task;
+ };
+ 
+ static bool turbo_mode = true;
+@@ -70,13 +71,14 @@ MODULE_PARM_DESC(turbo_mode, "Enable multiple frames per Rx transaction");
+ static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
+ 					    u32 *data, int in_pm)
+ {
++	struct smsc95xx_priv *pdata = dev->driver_priv;
+ 	u32 buf;
+ 	int ret;
+ 	int (*fn)(struct usbnet *, u8, u8, u16, u16, void *, u16);
+ 
+ 	BUG_ON(!dev);
+ 
+-	if (!in_pm)
++	if (current != pdata->pm_task)
+ 		fn = usbnet_read_cmd;
+ 	else
+ 		fn = usbnet_read_cmd_nopm;
+@@ -100,13 +102,14 @@ static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
+ static int __must_check __smsc95xx_write_reg(struct usbnet *dev, u32 index,
+ 					     u32 data, int in_pm)
+ {
++	struct smsc95xx_priv *pdata = dev->driver_priv;
+ 	u32 buf;
+ 	int ret;
+ 	int (*fn)(struct usbnet *, u8, u8, u16, u16, const void *, u16);
+ 
+ 	BUG_ON(!dev);
+ 
+-	if (!in_pm)
++	if (current != pdata->pm_task)
+ 		fn = usbnet_write_cmd;
+ 	else
+ 		fn = usbnet_write_cmd_nopm;
+@@ -1468,9 +1471,12 @@ static int smsc95xx_suspend(struct usb_interface *intf, pm_message_t message)
+ 	u32 val, link_up;
+ 	int ret;
+ 
++	pdata->pm_task = current;
++
+ 	ret = usbnet_suspend(intf, message);
+ 	if (ret < 0) {
+ 		netdev_warn(dev->net, "usbnet_suspend error\n");
++		pdata->pm_task = NULL;
+ 		return ret;
+ 	}
+ 
+@@ -1717,6 +1723,7 @@ done:
+ 	if (ret && PMSG_IS_AUTO(message))
+ 		usbnet_resume(intf);
+ 
++	pdata->pm_task = NULL;
+ 	return ret;
+ }
+ 
+@@ -1737,29 +1744,31 @@ static int smsc95xx_resume(struct usb_interface *intf)
+ 	/* do this first to ensure it's cleared even in error case */
+ 	pdata->suspend_flags = 0;
+ 
++	pdata->pm_task = current;
++
+ 	if (suspend_flags & SUSPEND_ALLMODES) {
+ 		/* clear wake-up sources */
+ 		ret = smsc95xx_read_reg_nopm(dev, WUCSR, &val);
+ 		if (ret < 0)
+-			return ret;
++			goto done;
+ 
+ 		val &= ~(WUCSR_WAKE_EN_ | WUCSR_MPEN_);
+ 
+ 		ret = smsc95xx_write_reg_nopm(dev, WUCSR, val);
+ 		if (ret < 0)
+-			return ret;
++			goto done;
+ 
+ 		/* clear wake-up status */
+ 		ret = smsc95xx_read_reg_nopm(dev, PM_CTRL, &val);
+ 		if (ret < 0)
+-			return ret;
++			goto done;
+ 
+ 		val &= ~PM_CTL_WOL_EN_;
+ 		val |= PM_CTL_WUPS_;
+ 
+ 		ret = smsc95xx_write_reg_nopm(dev, PM_CTRL, val);
+ 		if (ret < 0)
+-			return ret;
++			goto done;
+ 	}
+ 
+ 	ret = usbnet_resume(intf);
+@@ -1767,15 +1776,21 @@ static int smsc95xx_resume(struct usb_interface *intf)
+ 		netdev_warn(dev->net, "usbnet_resume error\n");
+ 
+ 	phy_init_hw(pdata->phydev);
++
++done:
++	pdata->pm_task = NULL;
+ 	return ret;
+ }
+ 
+ static int smsc95xx_reset_resume(struct usb_interface *intf)
+ {
+ 	struct usbnet *dev = usb_get_intfdata(intf);
++	struct smsc95xx_priv *pdata = dev->driver_priv;
+ 	int ret;
+ 
++	pdata->pm_task = current;
+ 	ret = smsc95xx_reset(dev);
++	pdata->pm_task = NULL;
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 3f106771d15be..d9c78fe85cb38 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3330,11 +3330,17 @@ static long nvme_dev_ioctl(struct file *file, unsigned int cmd,
+ 	case NVME_IOCTL_IO_CMD:
+ 		return nvme_dev_user_cmd(ctrl, argp);
+ 	case NVME_IOCTL_RESET:
++		if (!capable(CAP_SYS_ADMIN))
++			return -EACCES;
+ 		dev_warn(ctrl->device, "resetting controller\n");
+ 		return nvme_reset_ctrl_sync(ctrl);
+ 	case NVME_IOCTL_SUBSYS_RESET:
++		if (!capable(CAP_SYS_ADMIN))
++			return -EACCES;
+ 		return nvme_reset_subsystem(ctrl);
+ 	case NVME_IOCTL_RESCAN:
++		if (!capable(CAP_SYS_ADMIN))
++			return -EACCES;
+ 		nvme_queue_scan(ctrl);
+ 		return 0;
+ 	default:
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index abae7ef2ac511..86336496c65ce 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -544,11 +544,23 @@ static inline void nvme_fault_inject_fini(struct nvme_fault_inject *fault_inj)
+ static inline void nvme_should_fail(struct request *req) {}
+ #endif
+ 
++bool nvme_wait_reset(struct nvme_ctrl *ctrl);
++int nvme_try_sched_reset(struct nvme_ctrl *ctrl);
++
+ static inline int nvme_reset_subsystem(struct nvme_ctrl *ctrl)
+ {
++	int ret;
++
+ 	if (!ctrl->subsystem)
+ 		return -ENOTTY;
+-	return ctrl->ops->reg_write32(ctrl, NVME_REG_NSSR, 0x4E564D65);
++	if (!nvme_wait_reset(ctrl))
++		return -EBUSY;
++
++	ret = ctrl->ops->reg_write32(ctrl, NVME_REG_NSSR, 0x4E564D65);
++	if (ret)
++		return ret;
++
++	return nvme_try_sched_reset(ctrl);
+ }
+ 
+ /*
+@@ -635,7 +647,6 @@ void nvme_cancel_tagset(struct nvme_ctrl *ctrl);
+ void nvme_cancel_admin_tagset(struct nvme_ctrl *ctrl);
+ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
+ 		enum nvme_ctrl_state new_state);
+-bool nvme_wait_reset(struct nvme_ctrl *ctrl);
+ int nvme_disable_ctrl(struct nvme_ctrl *ctrl);
+ int nvme_enable_ctrl(struct nvme_ctrl *ctrl);
+ int nvme_shutdown_ctrl(struct nvme_ctrl *ctrl);
+@@ -688,7 +699,6 @@ int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count);
+ void nvme_stop_keep_alive(struct nvme_ctrl *ctrl);
+ int nvme_reset_ctrl(struct nvme_ctrl *ctrl);
+ int nvme_reset_ctrl_sync(struct nvme_ctrl *ctrl);
+-int nvme_try_sched_reset(struct nvme_ctrl *ctrl);
+ int nvme_delete_ctrl(struct nvme_ctrl *ctrl);
+ 
+ int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp, u8 csi,
+diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c
+index eda4ded4d5e52..925be41eeebec 100644
+--- a/drivers/parport/parport_pc.c
++++ b/drivers/parport/parport_pc.c
+@@ -468,7 +468,7 @@ static size_t parport_pc_fifo_write_block_pio(struct parport *port,
+ 	const unsigned char *bufp = buf;
+ 	size_t left = length;
+ 	unsigned long expire = jiffies + port->physport->cad->timeout;
+-	const int fifo = FIFO(port);
++	const unsigned long fifo = FIFO(port);
+ 	int poll_for = 8; /* 80 usecs */
+ 	const struct parport_pc_private *priv = port->physport->private_data;
+ 	const int fifo_depth = priv->fifo_depth;
+diff --git a/drivers/pinctrl/devicetree.c b/drivers/pinctrl/devicetree.c
+index 3fb2387147189..eac55fee5281c 100644
+--- a/drivers/pinctrl/devicetree.c
++++ b/drivers/pinctrl/devicetree.c
+@@ -220,6 +220,8 @@ int pinctrl_dt_to_map(struct pinctrl *p, struct pinctrl_dev *pctldev)
+ 	for (state = 0; ; state++) {
+ 		/* Retrieve the pinctrl-* property */
+ 		propname = kasprintf(GFP_KERNEL, "pinctrl-%d", state);
++		if (!propname)
++			return -ENOMEM;
+ 		prop = of_find_property(np, propname, &size);
+ 		kfree(propname);
+ 		if (!prop) {
+diff --git a/drivers/platform/x86/intel_pmc_core_pltdrv.c b/drivers/platform/x86/intel_pmc_core_pltdrv.c
+index 15ca8afdd973d..ddfba38c21044 100644
+--- a/drivers/platform/x86/intel_pmc_core_pltdrv.c
++++ b/drivers/platform/x86/intel_pmc_core_pltdrv.c
+@@ -18,6 +18,8 @@
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
+ 
++#include <xen/xen.h>
++
+ static void intel_pmc_core_release(struct device *dev)
+ {
+ 	kfree(dev);
+@@ -53,6 +55,13 @@ static int __init pmc_core_platform_init(void)
+ 	if (acpi_dev_present("INT33A1", NULL, -1))
+ 		return -ENODEV;
+ 
++	/*
++	 * Skip forcefully attaching the device for VMs. Make an exception for
++	 * Xen dom0, which does have full hardware access.
++	 */
++	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR) && !xen_initial_domain())
++		return -ENODEV;
++
+ 	if (!x86_match_cpu(intel_pmc_core_platform_ids))
+ 		return -ENODEV;
+ 
+diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
+index 8401c42db5419..524947bf21b97 100644
+--- a/drivers/s390/scsi/zfcp_fsf.c
++++ b/drivers/s390/scsi/zfcp_fsf.c
+@@ -866,7 +866,7 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *req)
+ 	const bool is_srb = zfcp_fsf_req_is_status_read_buffer(req);
+ 	struct zfcp_adapter *adapter = req->adapter;
+ 	struct zfcp_qdio *qdio = adapter->qdio;
+-	int req_id = req->req_id;
++	unsigned long req_id = req->req_id;
+ 
+ 	zfcp_reqlist_add(adapter->req_list, req);
+ 
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 5eb959b5f7010..261b915835b40 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -7079,8 +7079,12 @@ static int sdebug_add_host_helper(int per_host_idx)
+ 	dev_set_name(&sdbg_host->dev, "adapter%d", sdebug_num_hosts);
+ 
+ 	error = device_register(&sdbg_host->dev);
+-	if (error)
++	if (error) {
++		spin_lock(&sdebug_host_list_lock);
++		list_del(&sdbg_host->host_list);
++		spin_unlock(&sdebug_host_list_lock);
+ 		goto clean;
++	}
+ 
+ 	++sdebug_num_hosts;
+ 	return 0;
+diff --git a/drivers/siox/siox-core.c b/drivers/siox/siox-core.c
+index f8c08fb9891d7..e0ffef6e93865 100644
+--- a/drivers/siox/siox-core.c
++++ b/drivers/siox/siox-core.c
+@@ -835,6 +835,8 @@ static struct siox_device *siox_device_add(struct siox_master *smaster,
+ 
+ err_device_register:
+ 	/* don't care to make the buffer smaller again */
++	put_device(&sdevice->dev);
++	sdevice = NULL;
+ 
+ err_buf_alloc:
+ 	siox_master_unlock(smaster);
+diff --git a/drivers/slimbus/stream.c b/drivers/slimbus/stream.c
+index 75f87b3d8b953..73a2aa3629572 100644
+--- a/drivers/slimbus/stream.c
++++ b/drivers/slimbus/stream.c
+@@ -67,10 +67,10 @@ static const int slim_presence_rate_table[] = {
+ 	384000,
+ 	768000,
+ 	0, /* Reserved */
+-	110250,
+-	220500,
+-	441000,
+-	882000,
++	11025,
++	22050,
++	44100,
++	88200,
+ 	176400,
+ 	352800,
+ 	705600,
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index a6dfc8fef20cd..651a6510fb544 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -941,6 +941,7 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
+ 		static DEFINE_RATELIMIT_STATE(rs,
+ 					      DEFAULT_RATELIMIT_INTERVAL * 10,
+ 					      1);
++		ratelimit_set_flags(&rs, RATELIMIT_MSG_ON_RELEASE);
+ 		if (__ratelimit(&rs))
+ 			dev_dbg_ratelimited(spi->dev, "Communication suspended\n");
+ 		if (!spi->cur_usedma && (spi->rx_buf && (spi->rx_len > 0)))
+diff --git a/drivers/target/loopback/tcm_loop.c b/drivers/target/loopback/tcm_loop.c
+index 16d5a4e117a27..5ae5d94c5b931 100644
+--- a/drivers/target/loopback/tcm_loop.c
++++ b/drivers/target/loopback/tcm_loop.c
+@@ -394,6 +394,7 @@ static int tcm_loop_setup_hba_bus(struct tcm_loop_hba *tl_hba, int tcm_loop_host
+ 	ret = device_register(&tl_hba->dev);
+ 	if (ret) {
+ 		pr_err("device_register() failed for tl_hba->dev: %d\n", ret);
++		put_device(&tl_hba->dev);
+ 		return -ENODEV;
+ 	}
+ 
+@@ -1072,7 +1073,7 @@ check_len:
+ 	 */
+ 	ret = tcm_loop_setup_hba_bus(tl_hba, tcm_loop_hba_no_cnt);
+ 	if (ret)
+-		goto out;
++		return ERR_PTR(ret);
+ 
+ 	sh = tl_hba->sh;
+ 	tcm_loop_hba_no_cnt++;
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index c91a3004931f1..e852828259735 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -1416,7 +1416,7 @@ static struct gsm_control *gsm_control_send(struct gsm_mux *gsm,
+ 		unsigned int command, u8 *data, int clen)
+ {
+ 	struct gsm_control *ctrl = kzalloc(sizeof(struct gsm_control),
+-						GFP_KERNEL);
++						GFP_ATOMIC);
+ 	unsigned long flags;
+ 	if (ctrl == NULL)
+ 		return NULL;
+diff --git a/drivers/tty/serial/8250/8250_lpss.c b/drivers/tty/serial/8250/8250_lpss.c
+index dfb730b7ea2ae..1349c161c1922 100644
+--- a/drivers/tty/serial/8250/8250_lpss.c
++++ b/drivers/tty/serial/8250/8250_lpss.c
+@@ -268,8 +268,13 @@ static int lpss8250_dma_setup(struct lpss8250 *lpss, struct uart_8250_port *port
+ 	struct dw_dma_slave *rx_param, *tx_param;
+ 	struct device *dev = port->port.dev;
+ 
+-	if (!lpss->dma_param.dma_dev)
++	if (!lpss->dma_param.dma_dev) {
++		dma = port->dma;
++		if (dma)
++			goto out_configuration_only;
++
+ 		return 0;
++	}
+ 
+ 	rx_param = devm_kzalloc(dev, sizeof(*rx_param), GFP_KERNEL);
+ 	if (!rx_param)
+@@ -280,16 +285,18 @@ static int lpss8250_dma_setup(struct lpss8250 *lpss, struct uart_8250_port *port
+ 		return -ENOMEM;
+ 
+ 	*rx_param = lpss->dma_param;
+-	dma->rxconf.src_maxburst = lpss->dma_maxburst;
+-
+ 	*tx_param = lpss->dma_param;
+-	dma->txconf.dst_maxburst = lpss->dma_maxburst;
+ 
+ 	dma->fn = lpss8250_dma_filter;
+ 	dma->rx_param = rx_param;
+ 	dma->tx_param = tx_param;
+ 
+ 	port->dma = dma;
++
++out_configuration_only:
++	dma->rxconf.src_maxburst = lpss->dma_maxburst;
++	dma->txconf.dst_maxburst = lpss->dma_maxburst;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index f3744ac805ecb..3f7379f16a36e 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -157,7 +157,11 @@ static u32 uart_read(struct uart_8250_port *up, u32 reg)
+ 	return readl(up->port.membase + (reg << up->port.regshift));
+ }
+ 
+-static void omap8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
++/*
++ * Called on runtime PM resume path from omap8250_restore_regs(), and
++ * omap8250_set_mctrl().
++ */
++static void __omap8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
+ {
+ 	struct uart_8250_port *up = up_to_u8250p(port);
+ 	struct omap8250_priv *priv = up->port.private_data;
+@@ -181,6 +185,20 @@ static void omap8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
+ 	}
+ }
+ 
++static void omap8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
++{
++	int err;
++
++	err = pm_runtime_resume_and_get(port->dev);
++	if (err)
++		return;
++
++	__omap8250_set_mctrl(port, mctrl);
++
++	pm_runtime_mark_last_busy(port->dev);
++	pm_runtime_put_autosuspend(port->dev);
++}
++
+ /*
+  * Work Around for Errata i202 (2430, 3430, 3630, 4430 and 4460)
+  * The access to uart register after MDR1 Access
+@@ -193,27 +211,10 @@ static void omap8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
+ static void omap_8250_mdr1_errataset(struct uart_8250_port *up,
+ 				     struct omap8250_priv *priv)
+ {
+-	u8 timeout = 255;
+-
+ 	serial_out(up, UART_OMAP_MDR1, priv->mdr1);
+ 	udelay(2);
+ 	serial_out(up, UART_FCR, up->fcr | UART_FCR_CLEAR_XMIT |
+ 			UART_FCR_CLEAR_RCVR);
+-	/*
+-	 * Wait for FIFO to empty: when empty, RX_FIFO_E bit is 0 and
+-	 * TX_FIFO_E bit is 1.
+-	 */
+-	while (UART_LSR_THRE != (serial_in(up, UART_LSR) &
+-				(UART_LSR_THRE | UART_LSR_DR))) {
+-		timeout--;
+-		if (!timeout) {
+-			/* Should *never* happen. we warn and carry on */
+-			dev_crit(up->port.dev, "Errata i202: timedout %x\n",
+-				 serial_in(up, UART_LSR));
+-			break;
+-		}
+-		udelay(1);
+-	}
+ }
+ 
+ static void omap_8250_get_divisor(struct uart_port *port, unsigned int baud,
+@@ -341,7 +342,7 @@ static void omap8250_restore_regs(struct uart_8250_port *up)
+ 
+ 	omap8250_update_mdr1(up, priv);
+ 
+-	up->port.ops->set_mctrl(&up->port, up->port.mctrl);
++	__omap8250_set_mctrl(&up->port, up->port.mctrl);
+ 
+ 	if (up->port.rs485.flags & SER_RS485_ENABLED)
+ 		serial8250_em485_stop_tx(up);
+@@ -1474,9 +1475,15 @@ err:
+ static int omap8250_remove(struct platform_device *pdev)
+ {
+ 	struct omap8250_priv *priv = platform_get_drvdata(pdev);
++	int err;
++
++	err = pm_runtime_resume_and_get(&pdev->dev);
++	if (err)
++		return err;
+ 
+ 	pm_runtime_dont_use_autosuspend(&pdev->dev);
+ 	pm_runtime_put_sync(&pdev->dev);
++	flush_work(&priv->qos_work);
+ 	pm_runtime_disable(&pdev->dev);
+ 	serial8250_unregister_port(priv->line);
+ 	cpu_latency_qos_remove_request(&priv->pm_qos_request);
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index f648fd1d7548e..1f231fcda657b 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -661,13 +661,6 @@ int serial8250_em485_config(struct uart_port *port, struct serial_rs485 *rs485)
+ 		rs485->flags &= ~SER_RS485_RTS_AFTER_SEND;
+ 	}
+ 
+-	/* clamp the delays to [0, 100ms] */
+-	rs485->delay_rts_before_send = min(rs485->delay_rts_before_send, 100U);
+-	rs485->delay_rts_after_send  = min(rs485->delay_rts_after_send, 100U);
+-
+-	memset(rs485->padding, 0, sizeof(rs485->padding));
+-	port->rs485 = *rs485;
+-
+ 	gpiod_set_value(port->rs485_term_gpio,
+ 			rs485->flags & SER_RS485_TERMINATE_BUS);
+ 
+@@ -675,15 +668,8 @@ int serial8250_em485_config(struct uart_port *port, struct serial_rs485 *rs485)
+ 	 * Both serial8250_em485_init() and serial8250_em485_destroy()
+ 	 * are idempotent.
+ 	 */
+-	if (rs485->flags & SER_RS485_ENABLED) {
+-		int ret = serial8250_em485_init(up);
+-
+-		if (ret) {
+-			rs485->flags &= ~SER_RS485_ENABLED;
+-			port->rs485.flags &= ~SER_RS485_ENABLED;
+-		}
+-		return ret;
+-	}
++	if (rs485->flags & SER_RS485_ENABLED)
++		return serial8250_em485_init(up);
+ 
+ 	serial8250_em485_destroy(up);
+ 	return 0;
+@@ -1883,10 +1869,13 @@ EXPORT_SYMBOL_GPL(serial8250_modem_status);
+ static bool handle_rx_dma(struct uart_8250_port *up, unsigned int iir)
+ {
+ 	switch (iir & 0x3f) {
+-	case UART_IIR_RX_TIMEOUT:
+-		serial8250_rx_dma_flush(up);
++	case UART_IIR_RDI:
++		if (!up->dma->rx_running)
++			break;
+ 		fallthrough;
+ 	case UART_IIR_RLSI:
++	case UART_IIR_RX_TIMEOUT:
++		serial8250_rx_dma_flush(up);
+ 		return true;
+ 	}
+ 	return up->dma->rx_dma(up);
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index cf3d531657762..164597e2e0044 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -2626,6 +2626,7 @@ static const struct dev_pm_ops imx_uart_pm_ops = {
+ 	.suspend_noirq = imx_uart_suspend_noirq,
+ 	.resume_noirq = imx_uart_resume_noirq,
+ 	.freeze_noirq = imx_uart_suspend_noirq,
++	.thaw_noirq = imx_uart_resume_noirq,
+ 	.restore_noirq = imx_uart_resume_noirq,
+ 	.suspend = imx_uart_suspend,
+ 	.resume = imx_uart_resume,
+diff --git a/drivers/usb/chipidea/otg_fsm.c b/drivers/usb/chipidea/otg_fsm.c
+index 6ed4b00dba961..7a2a9559693fb 100644
+--- a/drivers/usb/chipidea/otg_fsm.c
++++ b/drivers/usb/chipidea/otg_fsm.c
+@@ -256,8 +256,10 @@ static void ci_otg_del_timer(struct ci_hdrc *ci, enum otg_fsm_timer t)
+ 	ci->enabled_otg_timer_bits &= ~(1 << t);
+ 	if (ci->next_otg_timer == t) {
+ 		if (ci->enabled_otg_timer_bits == 0) {
++			spin_unlock_irqrestore(&ci->lock, flags);
+ 			/* No enabled timers after delete it */
+ 			hrtimer_cancel(&ci->otg_fsm_hrtimer);
++			spin_lock_irqsave(&ci->lock, flags);
+ 			ci->next_otg_timer = NUM_OTG_FSM_TIMERS;
+ 		} else {
+ 			/* Find the next timer */
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index eb3ea45d5d13a..6d24d138cc77e 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -362,6 +362,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x0781, 0x5583), .driver_info = USB_QUIRK_NO_LPM },
+ 	{ USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM },
+ 
++	/* Realforce 87U Keyboard */
++	{ USB_DEVICE(0x0853, 0x011b), .driver_info = USB_QUIRK_NO_LPM },
++
+ 	/* M-Systems Flash Disk Pioneers */
+ 	{ USB_DEVICE(0x08ec, 0x1000), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c
+index 86bc2bec9038d..b06ab85f8187e 100644
+--- a/drivers/usb/dwc3/host.c
++++ b/drivers/usb/dwc3/host.c
+@@ -10,13 +10,8 @@
+ #include <linux/acpi.h>
+ #include <linux/platform_device.h>
+ 
+-#include "../host/xhci-plat.h"
+ #include "core.h"
+ 
+-static const struct xhci_plat_priv dwc3_xhci_plat_priv = {
+-	.quirks = XHCI_SKIP_PHY_INIT,
+-};
+-
+ static int dwc3_host_get_irq(struct dwc3 *dwc)
+ {
+ 	struct platform_device	*dwc3_pdev = to_platform_device(dwc->dev);
+@@ -92,11 +87,6 @@ int dwc3_host_init(struct dwc3 *dwc)
+ 		goto err;
+ 	}
+ 
+-	ret = platform_device_add_data(xhci, &dwc3_xhci_plat_priv,
+-					sizeof(dwc3_xhci_plat_priv));
+-	if (ret)
+-		goto err;
+-
+ 	memset(props, 0, sizeof(struct property_entry) * ARRAY_SIZE(props));
+ 
+ 	if (dwc->usb3_lpm_capable)
+diff --git a/drivers/usb/host/bcma-hcd.c b/drivers/usb/host/bcma-hcd.c
+index 2df52f75f6b3c..7558cc4d90cc6 100644
+--- a/drivers/usb/host/bcma-hcd.c
++++ b/drivers/usb/host/bcma-hcd.c
+@@ -285,7 +285,7 @@ static void bcma_hci_platform_power_gpio(struct bcma_device *dev, bool val)
+ {
+ 	struct bcma_hcd_device *usb_dev = bcma_get_drvdata(dev);
+ 
+-	if (IS_ERR_OR_NULL(usb_dev->gpio_desc))
++	if (!usb_dev->gpio_desc)
+ 		return;
+ 
+ 	gpiod_set_value(usb_dev->gpio_desc, val);
+@@ -406,9 +406,11 @@ static int bcma_hcd_probe(struct bcma_device *core)
+ 		return -ENOMEM;
+ 	usb_dev->core = core;
+ 
+-	if (core->dev.of_node)
+-		usb_dev->gpio_desc = devm_gpiod_get(&core->dev, "vcc",
+-						    GPIOD_OUT_HIGH);
++	usb_dev->gpio_desc = devm_gpiod_get_optional(&core->dev, "vcc",
++						     GPIOD_OUT_HIGH);
++	if (IS_ERR(usb_dev->gpio_desc))
++		return dev_err_probe(&core->dev, PTR_ERR(usb_dev->gpio_desc),
++				     "error obtaining VCC GPIO");
+ 
+ 	switch (core->id.id) {
+ 	case BCMA_CORE_USB20_HOST:
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index eea3dd18a044c..537ef276c78fe 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -162,6 +162,8 @@ static void option_instat_callback(struct urb *urb);
+ #define NOVATELWIRELESS_PRODUCT_G2		0xA010
+ #define NOVATELWIRELESS_PRODUCT_MC551		0xB001
+ 
++#define UBLOX_VENDOR_ID				0x1546
++
+ /* AMOI PRODUCTS */
+ #define AMOI_VENDOR_ID				0x1614
+ #define AMOI_PRODUCT_H01			0x0800
+@@ -240,7 +242,6 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_UC15			0x9090
+ /* These u-blox products use Qualcomm's vendor ID */
+ #define UBLOX_PRODUCT_R410M			0x90b2
+-#define UBLOX_PRODUCT_R6XX			0x90fa
+ /* These Yuga products use Qualcomm's vendor ID */
+ #define YUGA_PRODUCT_CLM920_NC5			0x9625
+ 
+@@ -581,6 +582,9 @@ static void option_instat_callback(struct urb *urb);
+ #define OPPO_VENDOR_ID				0x22d9
+ #define OPPO_PRODUCT_R11			0x276c
+ 
++/* Sierra Wireless products */
++#define SIERRA_VENDOR_ID			0x1199
++#define SIERRA_PRODUCT_EM9191			0x90d3
+ 
+ /* Device flags */
+ 
+@@ -1124,8 +1128,16 @@ static const struct usb_device_id option_ids[] = {
+ 	/* u-blox products using Qualcomm vendor ID */
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R410M),
+ 	  .driver_info = RSVD(1) | RSVD(3) },
+-	{ USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R6XX),
++	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x908b),	/* u-blox LARA-R6 00B */
++	  .driver_info = RSVD(4) },
++	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x90fa),
+ 	  .driver_info = RSVD(3) },
++	/* u-blox products */
++	{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1341) },	/* u-blox LARA-L6 */
++	{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1342),		/* u-blox LARA-L6 (RMNET) */
++	  .driver_info = RSVD(4) },
++	{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1343),		/* u-blox LARA-L6 (ECM) */
++	  .driver_info = RSVD(4) },
+ 	/* Quectel products using Quectel vendor ID */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21, 0xff, 0xff, 0xff),
+ 	  .driver_info = NUMEP2 },
+@@ -2167,6 +2179,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x010a, 0xff) },			/* Fibocom MA510 (ECM mode) */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) },	/* Fibocom FG150 Diag */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) },		/* Fibocom FG150 AT */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0111, 0xff) },			/* Fibocom FM160 (MBIM mode) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) },			/* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a2, 0xff) },			/* Fibocom FM101-GL (laptop MBIM) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a4, 0xff),			/* Fibocom FM101-GL (laptop MBIM) */
+@@ -2176,6 +2189,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) },			/* GosunCn GM500 MBIM */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) },			/* GosunCn GM500 ECM/NCM */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) },
+ 	{ } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c
+index 80daa70e288b0..1276112edeff9 100644
+--- a/drivers/usb/typec/mux/intel_pmc_mux.c
++++ b/drivers/usb/typec/mux/intel_pmc_mux.c
+@@ -339,13 +339,24 @@ pmc_usb_mux_usb4(struct pmc_usb_port *port, struct typec_mux_state *state)
+ 	return pmc_usb_command(port, (void *)&req, sizeof(req));
+ }
+ 
+-static int pmc_usb_mux_safe_state(struct pmc_usb_port *port)
++static int pmc_usb_mux_safe_state(struct pmc_usb_port *port,
++				  struct typec_mux_state *state)
+ {
+ 	u8 msg;
+ 
+ 	if (IOM_PORT_ACTIVITY_IS(port->iom_status, SAFE_MODE))
+ 		return 0;
+ 
++	if ((IOM_PORT_ACTIVITY_IS(port->iom_status, DP) ||
++	     IOM_PORT_ACTIVITY_IS(port->iom_status, DP_MFD)) &&
++	     state->alt && state->alt->svid == USB_TYPEC_DP_SID)
++		return 0;
++
++	if ((IOM_PORT_ACTIVITY_IS(port->iom_status, TBT) ||
++	     IOM_PORT_ACTIVITY_IS(port->iom_status, ALT_MODE_TBT_USB)) &&
++	     state->alt && state->alt->svid == USB_TYPEC_TBT_SID)
++		return 0;
++
+ 	msg = PMC_USB_SAFE_MODE;
+ 	msg |= port->usb3_port << PMC_USB_MSG_USB3_PORT_SHIFT;
+ 
+@@ -413,7 +424,7 @@ pmc_usb_mux_set(struct typec_mux *mux, struct typec_mux_state *state)
+ 		return 0;
+ 
+ 	if (state->mode == TYPEC_STATE_SAFE)
+-		return pmc_usb_mux_safe_state(port);
++		return pmc_usb_mux_safe_state(port, state);
+ 	if (state->mode == TYPEC_STATE_USB)
+ 		return pmc_usb_connect(port, port->role);
+ 
+diff --git a/drivers/xen/pcpu.c b/drivers/xen/pcpu.c
+index cdc6daa7a9f66..9cf7085a260b4 100644
+--- a/drivers/xen/pcpu.c
++++ b/drivers/xen/pcpu.c
+@@ -228,7 +228,7 @@ static int register_pcpu(struct pcpu *pcpu)
+ 
+ 	err = device_register(dev);
+ 	if (err) {
+-		pcpu_release(dev);
++		put_device(dev);
+ 		return err;
+ 	}
+ 
+diff --git a/fs/btrfs/tests/qgroup-tests.c b/fs/btrfs/tests/qgroup-tests.c
+index c4b31dccc1846..289366c98f5b8 100644
+--- a/fs/btrfs/tests/qgroup-tests.c
++++ b/fs/btrfs/tests/qgroup-tests.c
+@@ -230,7 +230,6 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
+ 			false);
+ 	if (ret) {
+-		ulist_free(old_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+ 		return ret;
+ 	}
+@@ -246,7 +245,6 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
+ 			false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+-		ulist_free(new_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+ 		return ret;
+ 	}
+@@ -258,18 +256,19 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
+ 		return ret;
+ 	}
+ 
++	/* btrfs_qgroup_account_extent() always frees the ulists passed to it. */
++	old_roots = NULL;
++	new_roots = NULL;
++
+ 	if (btrfs_verify_qgroup_counts(fs_info, BTRFS_FS_TREE_OBJECTID,
+ 				nodesize, nodesize)) {
+ 		test_err("qgroup counts didn't match expected values");
+ 		return -EINVAL;
+ 	}
+-	old_roots = NULL;
+-	new_roots = NULL;
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
+ 			false);
+ 	if (ret) {
+-		ulist_free(old_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+ 		return ret;
+ 	}
+@@ -284,7 +283,6 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
+ 			false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+-		ulist_free(new_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+ 		return ret;
+ 	}
+@@ -335,7 +333,6 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
+ 			false);
+ 	if (ret) {
+-		ulist_free(old_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+ 		return ret;
+ 	}
+@@ -351,7 +348,6 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 			false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+-		ulist_free(new_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+ 		return ret;
+ 	}
+@@ -372,7 +368,6 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
+ 			false);
+ 	if (ret) {
+-		ulist_free(old_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+ 		return ret;
+ 	}
+@@ -388,7 +383,6 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 			false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+-		ulist_free(new_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+ 		return ret;
+ 	}
+@@ -415,7 +409,6 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
+ 			false);
+ 	if (ret) {
+-		ulist_free(old_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+ 		return ret;
+ 	}
+@@ -431,7 +424,6 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 			false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+-		ulist_free(new_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+ 		return ret;
+ 	}
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 23f645657488b..ee66abadcbc2b 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -2350,7 +2350,7 @@ int generic_cont_expand_simple(struct inode *inode, loff_t size)
+ {
+ 	struct address_space *mapping = inode->i_mapping;
+ 	struct page *page;
+-	void *fsdata;
++	void *fsdata = NULL;
+ 	int err;
+ 
+ 	err = inode_newsize_ok(inode, size);
+@@ -2376,7 +2376,7 @@ static int cont_expand_zero(struct file *file, struct address_space *mapping,
+ 	struct inode *inode = mapping->host;
+ 	unsigned int blocksize = i_blocksize(inode);
+ 	struct page *page;
+-	void *fsdata;
++	void *fsdata = NULL;
+ 	pgoff_t index, curidx;
+ 	loff_t curpos;
+ 	unsigned zerofrom, offset, len;
+diff --git a/fs/cifs/ioctl.c b/fs/cifs/ioctl.c
+index dcde44ff6cf9f..e45598b622427 100644
+--- a/fs/cifs/ioctl.c
++++ b/fs/cifs/ioctl.c
+@@ -193,7 +193,7 @@ long cifs_ioctl(struct file *filep, unsigned int command, unsigned long arg)
+ 					rc = put_user(ExtAttrBits &
+ 						FS_FL_USER_VISIBLE,
+ 						(int __user *)arg);
+-				if (rc != EOPNOTSUPP)
++				if (rc != -EOPNOTSUPP)
+ 					break;
+ 			}
+ #endif /* CONFIG_CIFS_POSIX */
+@@ -222,7 +222,7 @@ long cifs_ioctl(struct file *filep, unsigned int command, unsigned long arg)
+ 			 *		       pSMBFile->fid.netfid,
+ 			 *		       extAttrBits,
+ 			 *		       &ExtAttrMask);
+-			 * if (rc != EOPNOTSUPP)
++			 * if (rc != -EOPNOTSUPP)
+ 			 *	break;
+ 			 */
+ 
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 11efd5289ec43..72368b656b33c 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1256,6 +1256,8 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+ 				COMPOUND_FID, current->tgid,
+ 				FILE_FULL_EA_INFORMATION,
+ 				SMB2_O_INFO_FILE, 0, data, size);
++	if (rc)
++		goto sea_exit;
+ 	smb2_set_next_command(tcon, &rqst[1]);
+ 	smb2_set_related(&rqst[1]);
+ 
+@@ -1266,6 +1268,8 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+ 	rqst[2].rq_nvec = 1;
+ 	rc = SMB2_close_init(tcon, server,
+ 			     &rqst[2], COMPOUND_FID, COMPOUND_FID, false);
++	if (rc)
++		goto sea_exit;
+ 	smb2_set_related(&rqst[2]);
+ 
+ 	rc = compound_send_recv(xid, ses, server,
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index b9ed6a6dbcf51..648f7336043f6 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -182,7 +182,10 @@ static int gfs2_check_sb(struct gfs2_sbd *sdp, int silent)
+ 		pr_warn("Invalid superblock size\n");
+ 		return -EINVAL;
+ 	}
+-
++	if (sb->sb_bsize_shift != ffs(sb->sb_bsize) - 1) {
++		pr_warn("Invalid block size shift\n");
++		return -EINVAL;
++	}
+ 	return 0;
+ }
+ 
+@@ -381,8 +384,10 @@ static int init_names(struct gfs2_sbd *sdp, int silent)
+ 	if (!table[0])
+ 		table = sdp->sd_vfs->s_id;
+ 
+-	strlcpy(sdp->sd_proto_name, proto, GFS2_FSNAME_LEN);
+-	strlcpy(sdp->sd_table_name, table, GFS2_FSNAME_LEN);
++	BUILD_BUG_ON(GFS2_LOCKNAME_LEN > GFS2_FSNAME_LEN);
++
++	strscpy(sdp->sd_proto_name, proto, GFS2_LOCKNAME_LEN);
++	strscpy(sdp->sd_table_name, table, GFS2_LOCKNAME_LEN);
+ 
+ 	table = sdp->sd_table_name;
+ 	while ((table = strchr(table, '/')))
+@@ -1414,13 +1419,13 @@ static int gfs2_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 
+ 	switch (o) {
+ 	case Opt_lockproto:
+-		strlcpy(args->ar_lockproto, param->string, GFS2_LOCKNAME_LEN);
++		strscpy(args->ar_lockproto, param->string, GFS2_LOCKNAME_LEN);
+ 		break;
+ 	case Opt_locktable:
+-		strlcpy(args->ar_locktable, param->string, GFS2_LOCKNAME_LEN);
++		strscpy(args->ar_locktable, param->string, GFS2_LOCKNAME_LEN);
+ 		break;
+ 	case Opt_hostdata:
+-		strlcpy(args->ar_hostdata, param->string, GFS2_LOCKNAME_LEN);
++		strscpy(args->ar_hostdata, param->string, GFS2_LOCKNAME_LEN);
+ 		break;
+ 	case Opt_spectator:
+ 		args->ar_spectator = 1;
+diff --git a/fs/namei.c b/fs/namei.c
+index eba2f13d229df..4375565aca666 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -4633,7 +4633,7 @@ int __page_symlink(struct inode *inode, const char *symname, int len, int nofs)
+ {
+ 	struct address_space *mapping = inode->i_mapping;
+ 	struct page *page;
+-	void *fsdata;
++	void *fsdata = NULL;
+ 	int err;
+ 	unsigned int flags = 0;
+ 	if (nofs)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 03f09399abf4f..36af3734ac870 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -7014,6 +7014,7 @@ static void nfs4_lock_done(struct rpc_task *task, void *calldata)
+ {
+ 	struct nfs4_lockdata *data = calldata;
+ 	struct nfs4_lock_state *lsp = data->lsp;
++	struct nfs_server *server = NFS_SERVER(d_inode(data->ctx->dentry));
+ 
+ 	dprintk("%s: begin!\n", __func__);
+ 
+@@ -7023,8 +7024,7 @@ static void nfs4_lock_done(struct rpc_task *task, void *calldata)
+ 	data->rpc_status = task->tk_status;
+ 	switch (task->tk_status) {
+ 	case 0:
+-		renew_lease(NFS_SERVER(d_inode(data->ctx->dentry)),
+-				data->timestamp);
++		renew_lease(server, data->timestamp);
+ 		if (data->arg.new_lock && !data->cancelled) {
+ 			data->fl.fl_flags &= ~(FL_SLEEP | FL_ACCESS);
+ 			if (locks_lock_inode_wait(lsp->ls_state->inode, &data->fl) < 0)
+@@ -7045,6 +7045,8 @@ static void nfs4_lock_done(struct rpc_task *task, void *calldata)
+ 			if (!nfs4_stateid_match(&data->arg.open_stateid,
+ 						&lsp->ls_state->open_stateid))
+ 				goto out_restart;
++			else if (nfs4_async_handle_error(task, server, lsp->ls_state, NULL) == -EAGAIN)
++				goto out_restart;
+ 		} else if (!nfs4_stateid_match(&data->arg.lock_stateid,
+ 						&lsp->ls_stateid))
+ 				goto out_restart;
+diff --git a/fs/ntfs/attrib.c b/fs/ntfs/attrib.c
+index 914e991731300..c0881d39d36a9 100644
+--- a/fs/ntfs/attrib.c
++++ b/fs/ntfs/attrib.c
+@@ -594,17 +594,37 @@ static int ntfs_attr_find(const ATTR_TYPE type, const ntfschar *name,
+ 	for (;;	a = (ATTR_RECORD*)((u8*)a + le32_to_cpu(a->length))) {
+ 		u8 *mrec_end = (u8 *)ctx->mrec +
+ 		               le32_to_cpu(ctx->mrec->bytes_allocated);
+-		u8 *name_end = (u8 *)a + le16_to_cpu(a->name_offset) +
+-			       a->name_length * sizeof(ntfschar);
+-		if ((u8*)a < (u8*)ctx->mrec || (u8*)a > mrec_end ||
+-		    name_end > mrec_end)
++		u8 *name_end;
++
++		/* check whether ATTR_RECORD wrap */
++		if ((u8 *)a < (u8 *)ctx->mrec)
++			break;
++
++		/* check whether Attribute Record Header is within bounds */
++		if ((u8 *)a > mrec_end ||
++		    (u8 *)a + sizeof(ATTR_RECORD) > mrec_end)
++			break;
++
++		/* check whether ATTR_RECORD's name is within bounds */
++		name_end = (u8 *)a + le16_to_cpu(a->name_offset) +
++			   a->name_length * sizeof(ntfschar);
++		if (name_end > mrec_end)
+ 			break;
++
+ 		ctx->attr = a;
+ 		if (unlikely(le32_to_cpu(a->type) > le32_to_cpu(type) ||
+ 				a->type == AT_END))
+ 			return -ENOENT;
+ 		if (unlikely(!a->length))
+ 			break;
++
++		/* check whether ATTR_RECORD's length wrap */
++		if ((u8 *)a + le32_to_cpu(a->length) < (u8 *)a)
++			break;
++		/* check whether ATTR_RECORD's length is within bounds */
++		if ((u8 *)a + le32_to_cpu(a->length) > mrec_end)
++			break;
++
+ 		if (a->type != type)
+ 			continue;
+ 		/*
+diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c
+index cf222c9225d6d..645c4b1b23de1 100644
+--- a/fs/ntfs/inode.c
++++ b/fs/ntfs/inode.c
+@@ -1829,6 +1829,13 @@ int ntfs_read_inode_mount(struct inode *vi)
+ 		goto err_out;
+ 	}
+ 
++	/* Sanity check offset to the first attribute */
++	if (le16_to_cpu(m->attrs_offset) >= le32_to_cpu(m->bytes_allocated)) {
++		ntfs_error(sb, "Incorrect mft offset to the first attribute %u in superblock.",
++			       le16_to_cpu(m->attrs_offset));
++		goto err_out;
++	}
++
+ 	/* Need this to sanity check attribute list references to $MFT. */
+ 	vi->i_generation = ni->seq_no = le16_to_cpu(m->sequence_number);
+ 
+diff --git a/include/linux/platform_data/intel-spi.h b/include/linux/platform_data/intel-spi.h
+index 7f53a5c6f35e8..7dda3f6904654 100644
+--- a/include/linux/platform_data/intel-spi.h
++++ b/include/linux/platform_data/intel-spi.h
+@@ -19,11 +19,13 @@ enum intel_spi_type {
+ /**
+  * struct intel_spi_boardinfo - Board specific data for Intel SPI driver
+  * @type: Type which this controller is compatible with
+- * @writeable: The chip is writeable
++ * @set_writeable: Try to make the chip writeable (optional)
++ * @data: Data to be passed to @set_writeable can be %NULL
+  */
+ struct intel_spi_boardinfo {
+ 	enum intel_spi_type type;
+-	bool writeable;
++	bool (*set_writeable)(void __iomem *base, void *data);
++	void *data;
+ };
+ 
+ #endif /* INTEL_SPI_PDATA_H */
+diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
+index c9237d30c29b3..7d5a78f49d43d 100644
+--- a/include/linux/ring_buffer.h
++++ b/include/linux/ring_buffer.h
+@@ -99,7 +99,7 @@ __ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *k
+ 
+ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
+ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+-			  struct file *filp, poll_table *poll_table);
++			  struct file *filp, poll_table *poll_table, int full);
+ void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu);
+ 
+ #define RING_BUFFER_ALL_CPUS -1
+diff --git a/include/linux/stddef.h b/include/linux/stddef.h
+index 998a4ba28eba4..938216f8ab7e7 100644
+--- a/include/linux/stddef.h
++++ b/include/linux/stddef.h
+@@ -36,4 +36,52 @@ enum {
+ #define offsetofend(TYPE, MEMBER) \
+ 	(offsetof(TYPE, MEMBER)	+ sizeof_field(TYPE, MEMBER))
+ 
++/**
++ * struct_group() - Wrap a set of declarations in a mirrored struct
++ *
++ * @NAME: The identifier name of the mirrored sub-struct
++ * @MEMBERS: The member declarations for the mirrored structs
++ *
++ * Used to create an anonymous union of two structs with identical
++ * layout and size: one anonymous and one named. The former can be
++ * used normally without sub-struct naming, and the latter can be
++ * used to reason about the start, end, and size of the group of
++ * struct members.
++ */
++#define struct_group(NAME, MEMBERS...)	\
++	__struct_group(/* no tag */, NAME, /* no attrs */, MEMBERS)
++
++/**
++ * struct_group_attr() - Create a struct_group() with trailing attributes
++ *
++ * @NAME: The identifier name of the mirrored sub-struct
++ * @ATTRS: Any struct attributes to apply
++ * @MEMBERS: The member declarations for the mirrored structs
++ *
++ * Used to create an anonymous union of two structs with identical
++ * layout and size: one anonymous and one named. The former can be
++ * used normally without sub-struct naming, and the latter can be
++ * used to reason about the start, end, and size of the group of
++ * struct members. Includes structure attributes argument.
++ */
++#define struct_group_attr(NAME, ATTRS, MEMBERS...) \
++	__struct_group(/* no tag */, NAME, ATTRS, MEMBERS)
++
++/**
++ * struct_group_tagged() - Create a struct_group with a reusable tag
++ *
++ * @TAG: The tag name for the named sub-struct
++ * @NAME: The identifier name of the mirrored sub-struct
++ * @MEMBERS: The member declarations for the mirrored structs
++ *
++ * Used to create an anonymous union of two structs with identical
++ * layout and size: one anonymous and one named. The former can be
++ * used normally without sub-struct naming, and the latter can be
++ * used to reason about the start, end, and size of the group of
++ * struct members. Includes struct tag argument for the named copy,
++ * so the specified layout can be reused later.
++ */
++#define struct_group_tagged(TAG, NAME, MEMBERS...) \
++	__struct_group(TAG, NAME, /* no attrs */, MEMBERS)
++
+ #endif
+diff --git a/include/net/ip.h b/include/net/ip.h
+index c5822d7824cd0..4b775af572688 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -545,7 +545,7 @@ static inline void iph_to_flow_copy_v4addrs(struct flow_keys *flow,
+ 	BUILD_BUG_ON(offsetof(typeof(flow->addrs), v4addrs.dst) !=
+ 		     offsetof(typeof(flow->addrs), v4addrs.src) +
+ 			      sizeof(flow->addrs.v4addrs.src));
+-	memcpy(&flow->addrs.v4addrs, &iph->saddr, sizeof(flow->addrs.v4addrs));
++	memcpy(&flow->addrs.v4addrs, &iph->addrs, sizeof(flow->addrs.v4addrs));
+ 	flow->control.addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
+ }
+ 
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index 60601896d4747..89ce8a50f2363 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -842,7 +842,7 @@ static inline void iph_to_flow_copy_v6addrs(struct flow_keys *flow,
+ 	BUILD_BUG_ON(offsetof(typeof(flow->addrs), v6addrs.dst) !=
+ 		     offsetof(typeof(flow->addrs), v6addrs.src) +
+ 		     sizeof(flow->addrs.v6addrs.src));
+-	memcpy(&flow->addrs.v6addrs, &iph->saddr, sizeof(flow->addrs.v6addrs));
++	memcpy(&flow->addrs.v6addrs, &iph->addrs, sizeof(flow->addrs.v6addrs));
+ 	flow->control.addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+ }
+ 
+diff --git a/include/uapi/linux/ip.h b/include/uapi/linux/ip.h
+index e42d13b55cf3a..d2f143393780c 100644
+--- a/include/uapi/linux/ip.h
++++ b/include/uapi/linux/ip.h
+@@ -100,8 +100,10 @@ struct iphdr {
+ 	__u8	ttl;
+ 	__u8	protocol;
+ 	__sum16	check;
+-	__be32	saddr;
+-	__be32	daddr;
++	__struct_group(/* no tag */, addrs, /* no attrs */,
++		__be32	saddr;
++		__be32	daddr;
++	);
+ 	/*The options start here. */
+ };
+ 
+diff --git a/include/uapi/linux/ipv6.h b/include/uapi/linux/ipv6.h
+index 13e8751bf24a0..766ab5c8ee655 100644
+--- a/include/uapi/linux/ipv6.h
++++ b/include/uapi/linux/ipv6.h
+@@ -130,8 +130,10 @@ struct ipv6hdr {
+ 	__u8			nexthdr;
+ 	__u8			hop_limit;
+ 
+-	struct	in6_addr	saddr;
+-	struct	in6_addr	daddr;
++	__struct_group(/* no tag */, addrs, /* no attrs */,
++		struct	in6_addr	saddr;
++		struct	in6_addr	daddr;
++	);
+ };
+ 
+ 
+diff --git a/include/uapi/linux/stddef.h b/include/uapi/linux/stddef.h
+index ee8220f8dcf5f..c3725b4922632 100644
+--- a/include/uapi/linux/stddef.h
++++ b/include/uapi/linux/stddef.h
+@@ -1,6 +1,31 @@
+ /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
++#ifndef _UAPI_LINUX_STDDEF_H
++#define _UAPI_LINUX_STDDEF_H
++
+ #include <linux/compiler_types.h>
+ 
+ #ifndef __always_inline
+ #define __always_inline inline
+ #endif
++
++/**
++ * __struct_group() - Create a mirrored named and anonyomous struct
++ *
++ * @TAG: The tag name for the named sub-struct (usually empty)
++ * @NAME: The identifier name of the mirrored sub-struct
++ * @ATTRS: Any struct attributes (usually empty)
++ * @MEMBERS: The member declarations for the mirrored structs
++ *
++ * Used to create an anonymous union of two structs with identical layout
++ * and size: one anonymous and one named. The former's members can be used
++ * normally without sub-struct naming, and the latter can be used to
++ * reason about the start, end, and size of the group of struct members.
++ * The named struct can also be explicitly tagged for layer reuse, as well
++ * as both having struct attributes appended.
++ */
++#define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \
++	union { \
++		struct { MEMBERS } ATTRS; \
++		struct TAG { MEMBERS } ATTRS NAME; \
++	}
++#endif
+diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
+index 3d897de890612..bbab8bb4b2fda 100644
+--- a/kernel/bpf/percpu_freelist.c
++++ b/kernel/bpf/percpu_freelist.c
+@@ -102,22 +102,21 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
+ 			    u32 nr_elems)
+ {
+ 	struct pcpu_freelist_head *head;
+-	int i, cpu, pcpu_entries;
++	unsigned int cpu, cpu_idx, i, j, n, m;
+ 
+-	pcpu_entries = nr_elems / num_possible_cpus() + 1;
+-	i = 0;
++	n = nr_elems / num_possible_cpus();
++	m = nr_elems % num_possible_cpus();
+ 
++	cpu_idx = 0;
+ 	for_each_possible_cpu(cpu) {
+-again:
+ 		head = per_cpu_ptr(s->freelist, cpu);
+-		/* No locking required as this is not visible yet. */
+-		pcpu_freelist_push_node(head, buf);
+-		i++;
+-		buf += elem_size;
+-		if (i == nr_elems)
+-			break;
+-		if (i % pcpu_entries)
+-			goto again;
++		j = n + (cpu_idx < m ? 1 : 0);
++		for (i = 0; i < j; i++) {
++			/* No locking required as this is not visible yet. */
++			pcpu_freelist_push_node(head, buf);
++			buf += elem_size;
++		}
++		cpu_idx++;
+ 	}
+ }
+ 
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index b0f444e86487c..75150e7555180 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1841,7 +1841,13 @@ static int __unregister_kprobe_top(struct kprobe *p)
+ 				if ((list_p != p) && (list_p->post_handler))
+ 					goto noclean;
+ 			}
+-			ap->post_handler = NULL;
++			/*
++			 * For the kprobe-on-ftrace case, we keep the
++			 * post_handler setting to identify this aggrprobe
++			 * armed with kprobe_ipmodify_ops.
++			 */
++			if (!kprobe_ftrace(ap))
++				ap->post_handler = NULL;
+ 		}
+ noclean:
+ 		/*
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 8e9ef0f555962..d97c189695cbb 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -1295,6 +1295,7 @@ static int ftrace_add_mod(struct trace_array *tr,
+ 	if (!ftrace_mod)
+ 		return -ENOMEM;
+ 
++	INIT_LIST_HEAD(&ftrace_mod->list);
+ 	ftrace_mod->func = kstrdup(func, GFP_KERNEL);
+ 	ftrace_mod->module = kstrdup(module, GFP_KERNEL);
+ 	ftrace_mod->enable = enable;
+@@ -3178,7 +3179,7 @@ static int ftrace_allocate_records(struct ftrace_page *pg, int count)
+ 		/* if we can't allocate this size, try something smaller */
+ 		if (!order)
+ 			return -ENOMEM;
+-		order >>= 1;
++		order--;
+ 		goto again;
+ 	}
+ 
+@@ -6877,7 +6878,7 @@ void __init ftrace_init(void)
+ 	}
+ 
+ 	pr_info("ftrace: allocating %ld entries in %ld pages\n",
+-		count, count / ENTRIES_PER_PAGE + 1);
++		count, DIV_ROUND_UP(count, ENTRIES_PER_PAGE));
+ 
+ 	last_ftrace_enabled = ftrace_enabled = 1;
+ 
+diff --git a/kernel/trace/kprobe_event_gen_test.c b/kernel/trace/kprobe_event_gen_test.c
+index d81f7c51025c7..c736487fc0e48 100644
+--- a/kernel/trace/kprobe_event_gen_test.c
++++ b/kernel/trace/kprobe_event_gen_test.c
+@@ -73,6 +73,10 @@ static struct trace_event_file *gen_kretprobe_test;
+ #define KPROBE_GEN_TEST_ARG3	NULL
+ #endif
+ 
++static bool trace_event_file_is_valid(struct trace_event_file *input)
++{
++	return input && !IS_ERR(input);
++}
+ 
+ /*
+  * Test to make sure we can create a kprobe event, then add more
+@@ -139,6 +143,8 @@ static int __init test_gen_kprobe_cmd(void)
+ 	kfree(buf);
+ 	return ret;
+  delete:
++	if (trace_event_file_is_valid(gen_kprobe_test))
++		gen_kprobe_test = NULL;
+ 	/* We got an error after creating the event, delete it */
+ 	ret = kprobe_event_delete("gen_kprobe_test");
+ 	goto out;
+@@ -202,6 +208,8 @@ static int __init test_gen_kretprobe_cmd(void)
+ 	kfree(buf);
+ 	return ret;
+  delete:
++	if (trace_event_file_is_valid(gen_kretprobe_test))
++		gen_kretprobe_test = NULL;
+ 	/* We got an error after creating the event, delete it */
+ 	ret = kprobe_event_delete("gen_kretprobe_test");
+ 	goto out;
+@@ -217,10 +225,12 @@ static int __init kprobe_event_gen_test_init(void)
+ 
+ 	ret = test_gen_kretprobe_cmd();
+ 	if (ret) {
+-		WARN_ON(trace_array_set_clr_event(gen_kretprobe_test->tr,
+-						  "kprobes",
+-						  "gen_kretprobe_test", false));
+-		trace_put_event_file(gen_kretprobe_test);
++		if (trace_event_file_is_valid(gen_kretprobe_test)) {
++			WARN_ON(trace_array_set_clr_event(gen_kretprobe_test->tr,
++							  "kprobes",
++							  "gen_kretprobe_test", false));
++			trace_put_event_file(gen_kretprobe_test);
++		}
+ 		WARN_ON(kprobe_event_delete("gen_kretprobe_test"));
+ 	}
+ 
+@@ -229,24 +239,30 @@ static int __init kprobe_event_gen_test_init(void)
+ 
+ static void __exit kprobe_event_gen_test_exit(void)
+ {
+-	/* Disable the event or you can't remove it */
+-	WARN_ON(trace_array_set_clr_event(gen_kprobe_test->tr,
+-					  "kprobes",
+-					  "gen_kprobe_test", false));
++	if (trace_event_file_is_valid(gen_kprobe_test)) {
++		/* Disable the event or you can't remove it */
++		WARN_ON(trace_array_set_clr_event(gen_kprobe_test->tr,
++						  "kprobes",
++						  "gen_kprobe_test", false));
++
++		/* Now give the file and instance back */
++		trace_put_event_file(gen_kprobe_test);
++	}
+ 
+-	/* Now give the file and instance back */
+-	trace_put_event_file(gen_kprobe_test);
+ 
+ 	/* Now unregister and free the event */
+ 	WARN_ON(kprobe_event_delete("gen_kprobe_test"));
+ 
+-	/* Disable the event or you can't remove it */
+-	WARN_ON(trace_array_set_clr_event(gen_kretprobe_test->tr,
+-					  "kprobes",
+-					  "gen_kretprobe_test", false));
++	if (trace_event_file_is_valid(gen_kretprobe_test)) {
++		/* Disable the event or you can't remove it */
++		WARN_ON(trace_array_set_clr_event(gen_kretprobe_test->tr,
++						  "kprobes",
++						  "gen_kretprobe_test", false));
++
++		/* Now give the file and instance back */
++		trace_put_event_file(gen_kretprobe_test);
++	}
+ 
+-	/* Now give the file and instance back */
+-	trace_put_event_file(gen_kretprobe_test);
+ 
+ 	/* Now unregister and free the event */
+ 	WARN_ON(kprobe_event_delete("gen_kretprobe_test"));
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index a12e278155550..49ebb8c662682 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -517,6 +517,7 @@ struct ring_buffer_per_cpu {
+ 	local_t				committing;
+ 	local_t				commits;
+ 	local_t				pages_touched;
++	local_t				pages_lost;
+ 	local_t				pages_read;
+ 	long				last_pages_touch;
+ 	size_t				shortest_full;
+@@ -771,10 +772,18 @@ size_t ring_buffer_nr_pages(struct trace_buffer *buffer, int cpu)
+ size_t ring_buffer_nr_dirty_pages(struct trace_buffer *buffer, int cpu)
+ {
+ 	size_t read;
++	size_t lost;
+ 	size_t cnt;
+ 
+ 	read = local_read(&buffer->buffers[cpu]->pages_read);
++	lost = local_read(&buffer->buffers[cpu]->pages_lost);
+ 	cnt = local_read(&buffer->buffers[cpu]->pages_touched);
++
++	if (WARN_ON_ONCE(cnt < lost))
++		return 0;
++
++	cnt -= lost;
++
+ 	/* The reader can read an empty page, but not more than that */
+ 	if (cnt < read) {
+ 		WARN_ON_ONCE(read > cnt + 1);
+@@ -784,6 +793,21 @@ size_t ring_buffer_nr_dirty_pages(struct trace_buffer *buffer, int cpu)
+ 	return cnt - read;
+ }
+ 
++static __always_inline bool full_hit(struct trace_buffer *buffer, int cpu, int full)
++{
++	struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];
++	size_t nr_pages;
++	size_t dirty;
++
++	nr_pages = cpu_buffer->nr_pages;
++	if (!nr_pages || !full)
++		return true;
++
++	dirty = ring_buffer_nr_dirty_pages(buffer, cpu);
++
++	return (dirty * 100) > (full * nr_pages);
++}
++
+ /*
+  * rb_wake_up_waiters - wake up tasks waiting for ring buffer input
+  *
+@@ -912,22 +936,20 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+ 		    !ring_buffer_empty_cpu(buffer, cpu)) {
+ 			unsigned long flags;
+ 			bool pagebusy;
+-			size_t nr_pages;
+-			size_t dirty;
++			bool done;
+ 
+ 			if (!full)
+ 				break;
+ 
+ 			raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ 			pagebusy = cpu_buffer->reader_page == cpu_buffer->commit_page;
+-			nr_pages = cpu_buffer->nr_pages;
+-			dirty = ring_buffer_nr_dirty_pages(buffer, cpu);
++			done = !pagebusy && full_hit(buffer, cpu, full);
++
+ 			if (!cpu_buffer->shortest_full ||
+ 			    cpu_buffer->shortest_full > full)
+ 				cpu_buffer->shortest_full = full;
+ 			raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+-			if (!pagebusy &&
+-			    (!nr_pages || (dirty * 100) > full * nr_pages))
++			if (done)
+ 				break;
+ 		}
+ 
+@@ -953,6 +975,7 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+  * @cpu: the cpu buffer to wait on
+  * @filp: the file descriptor
+  * @poll_table: The poll descriptor
++ * @full: wait until the percentage of pages are available, if @cpu != RING_BUFFER_ALL_CPUS
+  *
+  * If @cpu == RING_BUFFER_ALL_CPUS then the task will wake up as soon
+  * as data is added to any of the @buffer's cpu buffers. Otherwise
+@@ -962,14 +985,15 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+  * zero otherwise.
+  */
+ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+-			  struct file *filp, poll_table *poll_table)
++			  struct file *filp, poll_table *poll_table, int full)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	struct rb_irq_work *work;
+ 
+-	if (cpu == RING_BUFFER_ALL_CPUS)
++	if (cpu == RING_BUFFER_ALL_CPUS) {
+ 		work = &buffer->irq_work;
+-	else {
++		full = 0;
++	} else {
+ 		if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ 			return -EINVAL;
+ 
+@@ -977,8 +1001,14 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ 		work = &cpu_buffer->irq_work;
+ 	}
+ 
+-	poll_wait(filp, &work->waiters, poll_table);
+-	work->waiters_pending = true;
++	if (full) {
++		poll_wait(filp, &work->full_waiters, poll_table);
++		work->full_waiters_pending = true;
++	} else {
++		poll_wait(filp, &work->waiters, poll_table);
++		work->waiters_pending = true;
++	}
++
+ 	/*
+ 	 * There's a tight race between setting the waiters_pending and
+ 	 * checking if the ring buffer is empty.  Once the waiters_pending bit
+@@ -994,6 +1024,9 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ 	 */
+ 	smp_mb();
+ 
++	if (full)
++		return full_hit(buffer, cpu, full) ? EPOLLIN | EPOLLRDNORM : 0;
++
+ 	if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) ||
+ 	    (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu)))
+ 		return EPOLLIN | EPOLLRDNORM;
+@@ -1635,9 +1668,9 @@ static void rb_free_cpu_buffer(struct ring_buffer_per_cpu *cpu_buffer)
+ 
+ 	free_buffer_page(cpu_buffer->reader_page);
+ 
+-	rb_head_page_deactivate(cpu_buffer);
+-
+ 	if (head) {
++		rb_head_page_deactivate(cpu_buffer);
++
+ 		list_for_each_entry_safe(bpage, tmp, head, list) {
+ 			list_del_init(&bpage->list);
+ 			free_buffer_page(bpage);
+@@ -1873,6 +1906,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
+ 			 */
+ 			local_add(page_entries, &cpu_buffer->overrun);
+ 			local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);
++			local_inc(&cpu_buffer->pages_lost);
+ 		}
+ 
+ 		/*
+@@ -2363,6 +2397,7 @@ rb_handle_head_page(struct ring_buffer_per_cpu *cpu_buffer,
+ 		 */
+ 		local_add(entries, &cpu_buffer->overrun);
+ 		local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);
++		local_inc(&cpu_buffer->pages_lost);
+ 
+ 		/*
+ 		 * The entries will be zeroed out when we move the
+@@ -3033,10 +3068,6 @@ static void rb_commit(struct ring_buffer_per_cpu *cpu_buffer,
+ static __always_inline void
+ rb_wakeups(struct trace_buffer *buffer, struct ring_buffer_per_cpu *cpu_buffer)
+ {
+-	size_t nr_pages;
+-	size_t dirty;
+-	size_t full;
+-
+ 	if (buffer->irq_work.waiters_pending) {
+ 		buffer->irq_work.waiters_pending = false;
+ 		/* irq_work_queue() supplies it's own memory barriers */
+@@ -3060,10 +3091,7 @@ rb_wakeups(struct trace_buffer *buffer, struct ring_buffer_per_cpu *cpu_buffer)
+ 
+ 	cpu_buffer->last_pages_touch = local_read(&cpu_buffer->pages_touched);
+ 
+-	full = cpu_buffer->shortest_full;
+-	nr_pages = cpu_buffer->nr_pages;
+-	dirty = ring_buffer_nr_dirty_pages(buffer, cpu_buffer->cpu);
+-	if (full && nr_pages && (dirty * 100) <= full * nr_pages)
++	if (!full_hit(buffer, cpu_buffer->cpu, cpu_buffer->shortest_full))
+ 		return;
+ 
+ 	cpu_buffer->irq_work.wakeup_full = true;
+@@ -4964,6 +4992,7 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
+ 	local_set(&cpu_buffer->committing, 0);
+ 	local_set(&cpu_buffer->commits, 0);
+ 	local_set(&cpu_buffer->pages_touched, 0);
++	local_set(&cpu_buffer->pages_lost, 0);
+ 	local_set(&cpu_buffer->pages_read, 0);
+ 	cpu_buffer->last_pages_touch = 0;
+ 	cpu_buffer->shortest_full = 0;
+diff --git a/kernel/trace/synth_event_gen_test.c b/kernel/trace/synth_event_gen_test.c
+index edd912cd14aaf..a6a2813afb87f 100644
+--- a/kernel/trace/synth_event_gen_test.c
++++ b/kernel/trace/synth_event_gen_test.c
+@@ -120,15 +120,13 @@ static int __init test_gen_synth_cmd(void)
+ 
+ 	/* Now generate a gen_synth_test event */
+ 	ret = synth_event_trace_array(gen_synth_test, vals, ARRAY_SIZE(vals));
+- out:
++ free:
++	kfree(buf);
+ 	return ret;
+  delete:
+ 	/* We got an error after creating the event, delete it */
+ 	synth_event_delete("gen_synth_test");
+- free:
+-	kfree(buf);
+-
+-	goto out;
++	goto free;
+ }
+ 
+ /*
+@@ -227,15 +225,13 @@ static int __init test_empty_synth_event(void)
+ 
+ 	/* Now trace an empty_synth_test event */
+ 	ret = synth_event_trace_array(empty_synth_test, vals, ARRAY_SIZE(vals));
+- out:
++ free:
++	kfree(buf);
+ 	return ret;
+  delete:
+ 	/* We got an error after creating the event, delete it */
+ 	synth_event_delete("empty_synth_test");
+- free:
+-	kfree(buf);
+-
+-	goto out;
++	goto free;
+ }
+ 
+ static struct synth_field_desc create_synth_test_fields[] = {
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index b7cb9147f0c59..146771d6d0072 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6263,7 +6263,7 @@ trace_poll(struct trace_iterator *iter, struct file *filp, poll_table *poll_tabl
+ 		return EPOLLIN | EPOLLRDNORM;
+ 	else
+ 		return ring_buffer_poll_wait(iter->array_buffer->buffer, iter->cpu_file,
+-					     filp, poll_table);
++					     filp, poll_table, iter->tr->buffer_percent);
+ }
+ 
+ static __poll_t
+diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
+index 881df991742ab..18291ab356570 100644
+--- a/kernel/trace/trace_events_synth.c
++++ b/kernel/trace/trace_events_synth.c
+@@ -791,10 +791,9 @@ static int register_synth_event(struct synth_event *event)
+ 	}
+ 
+ 	ret = set_synth_event_print_fmt(call);
+-	if (ret < 0) {
++	/* unregister_trace_event() will be called inside */
++	if (ret < 0)
+ 		trace_remove_event_call(call);
+-		goto err;
+-	}
+  out:
+ 	return ret;
+  err:
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 125b69f59caad..3a983bc1a71c9 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -3303,7 +3303,7 @@ ssize_t generic_perform_write(struct file *file,
+ 		unsigned long offset;	/* Offset into pagecache page */
+ 		unsigned long bytes;	/* Bytes to write to page */
+ 		size_t copied;		/* Bytes copied from user */
+-		void *fsdata;
++		void *fsdata = NULL;
+ 
+ 		offset = (pos & (PAGE_SIZE - 1));
+ 		bytes = min_t(unsigned long, PAGE_SIZE - offset,
+diff --git a/mm/maccess.c b/mm/maccess.c
+index 3bd70405f2d84..f6ea117a69eb5 100644
+--- a/mm/maccess.c
++++ b/mm/maccess.c
+@@ -83,7 +83,7 @@ long strncpy_from_kernel_nofault(char *dst, const void *unsafe_addr, long count)
+ 	return src - unsafe_addr;
+ Efault:
+ 	pagefault_enable();
+-	dst[-1] = '\0';
++	dst[0] = '\0';
+ 	return -EFAULT;
+ }
+ #else /* HAVE_GET_KERNEL_NOFAULT */
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index 8f528e783a6c5..fec6c800c8988 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -205,6 +205,8 @@ static void p9_conn_cancel(struct p9_conn *m, int err)
+ 		list_move(&req->req_list, &cancel_list);
+ 	}
+ 
++	spin_unlock(&m->client->lock);
++
+ 	list_for_each_entry_safe(req, rtmp, &cancel_list, req_list) {
+ 		p9_debug(P9_DEBUG_ERROR, "call back req %p\n", req);
+ 		list_del(&req->req_list);
+@@ -212,7 +214,6 @@ static void p9_conn_cancel(struct p9_conn *m, int err)
+ 			req->t_err = err;
+ 		p9_client_cb(m->client, req, REQ_STATUS_ERROR);
+ 	}
+-	spin_unlock(&m->client->lock);
+ }
+ 
+ static __poll_t
+@@ -820,11 +821,14 @@ static int p9_fd_open(struct p9_client *client, int rfd, int wfd)
+ 		goto out_free_ts;
+ 	if (!(ts->rd->f_mode & FMODE_READ))
+ 		goto out_put_rd;
++	/* prevent workers from hanging on IO when fd is a pipe */
++	ts->rd->f_flags |= O_NONBLOCK;
+ 	ts->wr = fget(wfd);
+ 	if (!ts->wr)
+ 		goto out_put_rd;
+ 	if (!(ts->wr->f_mode & FMODE_WRITE))
+ 		goto out_put_wr;
++	ts->wr->f_flags |= O_NONBLOCK;
+ 
+ 	client->trans = ts;
+ 	client->status = Connected;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index e69e96ef49276..c5e4d2b8cb0be 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1986,7 +1986,7 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
+ 		if (link_type == LE_LINK && c->src_type == BDADDR_BREDR)
+ 			continue;
+ 
+-		if (c->psm == psm) {
++		if (c->chan_type != L2CAP_CHAN_FIXED && c->psm == psm) {
+ 			int src_match, dst_match;
+ 			int src_any, dst_any;
+ 
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index 2983e926fe3cc..717b01ff9b2ba 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -231,6 +231,7 @@ static void *bpf_test_init(const union bpf_attr *kattr, u32 size,
+ 	if (user_size > size)
+ 		return ERR_PTR(-EMSGSIZE);
+ 
++	size = SKB_DATA_ALIGN(size);
+ 	data = kzalloc(size + headroom + tailroom, GFP_USER);
+ 	if (!data)
+ 		return ERR_PTR(-ENOMEM);
+diff --git a/net/caif/chnl_net.c b/net/caif/chnl_net.c
+index 42dc080a4dbbc..806fb4d84fd3e 100644
+--- a/net/caif/chnl_net.c
++++ b/net/caif/chnl_net.c
+@@ -315,9 +315,6 @@ static int chnl_net_open(struct net_device *dev)
+ 
+ 	if (result == 0) {
+ 		pr_debug("connect timeout\n");
+-		caif_disconnect_client(dev_net(dev), &priv->chnl);
+-		priv->state = CAIF_DISCONNECTED;
+-		pr_debug("state disconnected\n");
+ 		result = -ETIMEDOUT;
+ 		goto error;
+ 	}
+diff --git a/net/ipv4/tcp_cdg.c b/net/ipv4/tcp_cdg.c
+index 709d238018239..56dede4b59d95 100644
+--- a/net/ipv4/tcp_cdg.c
++++ b/net/ipv4/tcp_cdg.c
+@@ -375,6 +375,7 @@ static void tcp_cdg_init(struct sock *sk)
+ 	struct cdg *ca = inet_csk_ca(sk);
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 
++	ca->gradients = NULL;
+ 	/* We silently fall back to window = 1 if allocation fails. */
+ 	if (window > 1)
+ 		ca->gradients = kcalloc(window, sizeof(ca->gradients[0]),
+@@ -388,6 +389,7 @@ static void tcp_cdg_release(struct sock *sk)
+ 	struct cdg *ca = inet_csk_ca(sk);
+ 
+ 	kfree(ca->gradients);
++	ca->gradients = NULL;
+ }
+ 
+ static struct tcp_congestion_ops tcp_cdg __read_mostly = {
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 6b362b362f790..32b516ab9c475 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -221,7 +221,7 @@ static void requeue_rx_msgs(struct kcm_mux *mux, struct sk_buff_head *head)
+ 	struct sk_buff *skb;
+ 	struct kcm_sock *kcm;
+ 
+-	while ((skb = __skb_dequeue(head))) {
++	while ((skb = skb_dequeue(head))) {
+ 		/* Reset destructor to avoid calling kcm_rcv_ready */
+ 		skb->destructor = sock_rfree;
+ 		skb_orphan(skb);
+@@ -1084,53 +1084,18 @@ out_error:
+ 	return err;
+ }
+ 
+-static struct sk_buff *kcm_wait_data(struct sock *sk, int flags,
+-				     long timeo, int *err)
+-{
+-	struct sk_buff *skb;
+-
+-	while (!(skb = skb_peek(&sk->sk_receive_queue))) {
+-		if (sk->sk_err) {
+-			*err = sock_error(sk);
+-			return NULL;
+-		}
+-
+-		if (sock_flag(sk, SOCK_DONE))
+-			return NULL;
+-
+-		if ((flags & MSG_DONTWAIT) || !timeo) {
+-			*err = -EAGAIN;
+-			return NULL;
+-		}
+-
+-		sk_wait_data(sk, &timeo, NULL);
+-
+-		/* Handle signals */
+-		if (signal_pending(current)) {
+-			*err = sock_intr_errno(timeo);
+-			return NULL;
+-		}
+-	}
+-
+-	return skb;
+-}
+-
+ static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
+ 		       size_t len, int flags)
+ {
++	int noblock = flags & MSG_DONTWAIT;
+ 	struct sock *sk = sock->sk;
+ 	struct kcm_sock *kcm = kcm_sk(sk);
+ 	int err = 0;
+-	long timeo;
+ 	struct strp_msg *stm;
+ 	int copied = 0;
+ 	struct sk_buff *skb;
+ 
+-	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
+-
+-	lock_sock(sk);
+-
+-	skb = kcm_wait_data(sk, flags, timeo, &err);
++	skb = skb_recv_datagram(sk, flags, noblock, &err);
+ 	if (!skb)
+ 		goto out;
+ 
+@@ -1161,14 +1126,11 @@ msg_finished:
+ 			/* Finished with message */
+ 			msg->msg_flags |= MSG_EOR;
+ 			KCM_STATS_INCR(kcm->stats.rx_msgs);
+-			skb_unlink(skb, &sk->sk_receive_queue);
+-			kfree_skb(skb);
+ 		}
+ 	}
+ 
+ out:
+-	release_sock(sk);
+-
++	skb_free_datagram(sk, skb);
+ 	return copied ? : err;
+ }
+ 
+@@ -1176,9 +1138,9 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
+ 			       struct pipe_inode_info *pipe, size_t len,
+ 			       unsigned int flags)
+ {
++	int noblock = flags & MSG_DONTWAIT;
+ 	struct sock *sk = sock->sk;
+ 	struct kcm_sock *kcm = kcm_sk(sk);
+-	long timeo;
+ 	struct strp_msg *stm;
+ 	int err = 0;
+ 	ssize_t copied;
+@@ -1186,11 +1148,7 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
+ 
+ 	/* Only support splice for SOCKSEQPACKET */
+ 
+-	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
+-
+-	lock_sock(sk);
+-
+-	skb = kcm_wait_data(sk, flags, timeo, &err);
++	skb = skb_recv_datagram(sk, flags, noblock, &err);
+ 	if (!skb)
+ 		goto err_out;
+ 
+@@ -1218,13 +1176,11 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
+ 	 * finish reading the message.
+ 	 */
+ 
+-	release_sock(sk);
+-
++	skb_free_datagram(sk, skb);
+ 	return copied;
+ 
+ err_out:
+-	release_sock(sk);
+-
++	skb_free_datagram(sk, skb);
+ 	return err;
+ }
+ 
+@@ -1844,10 +1800,10 @@ static int kcm_release(struct socket *sock)
+ 	kcm = kcm_sk(sk);
+ 	mux = kcm->mux;
+ 
++	lock_sock(sk);
+ 	sock_orphan(sk);
+ 	kfree_skb(kcm->seq_skb);
+ 
+-	lock_sock(sk);
+ 	/* Purge queue under lock to avoid race condition with tx_work trying
+ 	 * to act when queue is nonempty. If tx_work runs after this point
+ 	 * it will just return.
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 561b6d67ab8b9..dc8987ed08adb 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -1480,11 +1480,15 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
+ 	tunnel->l2tp_net = net;
+ 	pn = l2tp_pernet(net);
+ 
++	sk = sock->sk;
++	sock_hold(sk);
++	tunnel->sock = sk;
++
+ 	spin_lock_bh(&pn->l2tp_tunnel_list_lock);
+ 	list_for_each_entry(tunnel_walk, &pn->l2tp_tunnel_list, list) {
+ 		if (tunnel_walk->tunnel_id == tunnel->tunnel_id) {
+ 			spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
+-
++			sock_put(sk);
+ 			ret = -EEXIST;
+ 			goto err_sock;
+ 		}
+@@ -1492,10 +1496,6 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
+ 	list_add_rcu(&tunnel->list, &pn->l2tp_tunnel_list);
+ 	spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
+ 
+-	sk = sock->sk;
+-	sock_hold(sk);
+-	tunnel->sock = sk;
+-
+ 	if (tunnel->encap == L2TP_ENCAPTYPE_UDP) {
+ 		struct udp_tunnel_sock_cfg udp_cfg = {
+ 			.sk_user_data = tunnel,
+diff --git a/net/sctp/outqueue.c b/net/sctp/outqueue.c
+index 3fd06a27105dd..83a89dcf75ed0 100644
+--- a/net/sctp/outqueue.c
++++ b/net/sctp/outqueue.c
+@@ -384,6 +384,7 @@ static int sctp_prsctp_prune_unsent(struct sctp_association *asoc,
+ {
+ 	struct sctp_outq *q = &asoc->outqueue;
+ 	struct sctp_chunk *chk, *temp;
++	struct sctp_stream_out *sout;
+ 
+ 	q->sched->unsched_all(&asoc->stream);
+ 
+@@ -398,12 +399,14 @@ static int sctp_prsctp_prune_unsent(struct sctp_association *asoc,
+ 		sctp_sched_dequeue_common(q, chk);
+ 		asoc->sent_cnt_removable--;
+ 		asoc->abandoned_unsent[SCTP_PR_INDEX(PRIO)]++;
+-		if (chk->sinfo.sinfo_stream < asoc->stream.outcnt) {
+-			struct sctp_stream_out *streamout =
+-				SCTP_SO(&asoc->stream, chk->sinfo.sinfo_stream);
+ 
+-			streamout->ext->abandoned_unsent[SCTP_PR_INDEX(PRIO)]++;
+-		}
++		sout = SCTP_SO(&asoc->stream, chk->sinfo.sinfo_stream);
++		sout->ext->abandoned_unsent[SCTP_PR_INDEX(PRIO)]++;
++
++		/* clear out_curr if all frag chunks are pruned */
++		if (asoc->stream.out_curr == sout &&
++		    list_is_last(&chk->frag_list, &chk->msg->chunks))
++			asoc->stream.out_curr = NULL;
+ 
+ 		msg_len -= chk->skb->truesize + sizeof(struct sctp_chunk);
+ 		sctp_chunk_free(chk);
+diff --git a/net/x25/x25_dev.c b/net/x25/x25_dev.c
+index 25bf72ee6cad0..226397add422a 100644
+--- a/net/x25/x25_dev.c
++++ b/net/x25/x25_dev.c
+@@ -117,7 +117,7 @@ int x25_lapb_receive_frame(struct sk_buff *skb, struct net_device *dev,
+ 
+ 	if (!pskb_may_pull(skb, 1)) {
+ 		x25_neigh_put(nb);
+-		return 0;
++		goto drop;
+ 	}
+ 
+ 	switch (skb->data[0]) {
+diff --git a/scripts/kernel-doc b/scripts/kernel-doc
+index 6325bec3f66f8..19af6dd160e6b 100755
+--- a/scripts/kernel-doc
++++ b/scripts/kernel-doc
+@@ -1215,6 +1215,13 @@ sub dump_struct($$) {
+ 	$members =~ s/\s*CRYPTO_MINALIGN_ATTR/ /gos;
+ 	$members =~ s/\s*____cacheline_aligned_in_smp/ /gos;
+ 	$members =~ s/\s*____cacheline_aligned/ /gos;
++	# unwrap struct_group():
++	# - first eat non-declaration parameters and rewrite for final match
++	# - then remove macro, outer parens, and trailing semicolon
++	$members =~ s/\bstruct_group\s*\(([^,]*,)/STRUCT_GROUP(/gos;
++	$members =~ s/\bstruct_group_(attr|tagged)\s*\(([^,]*,){2}/STRUCT_GROUP(/gos;
++	$members =~ s/\b__struct_group\s*\(([^,]*,){3}/STRUCT_GROUP(/gos;
++	$members =~ s/\bSTRUCT_GROUP(\(((?:(?>[^)(]+)|(?1))*)\))[^;]*;/$2/gos;
+ 
+ 	# replace DECLARE_BITMAP
+ 	$members =~ s/__ETHTOOL_DECLARE_LINK_MODE_MASK\s*\(([^\)]+)\)/DECLARE_BITMAP($1, __ETHTOOL_LINK_MODE_MASK_NBITS)/gos;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index e3f6b930ad4a1..8011b451902a8 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6883,6 +6883,7 @@ enum {
+ 	ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME,
+ 	ALC285_FIXUP_LEGION_Y9000X_SPEAKERS,
+ 	ALC285_FIXUP_LEGION_Y9000X_AUTOMUTE,
++	ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED,
+ };
+ 
+ /* A special fixup for Lenovo C940 and Yoga Duet 7;
+@@ -8693,6 +8694,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
+ 	},
++	[ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x19 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x8e11 },
++			 { }
++		},
++		.chained = true,
++		.chain_id = ALC285_FIXUP_HP_MUTE_LED,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8915,6 +8926,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x888d, "HP ZBook Power 15.6 inch G8 Mobile Workstation PC", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8895, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x89aa, "HP EliteBook 630 G9", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+@@ -8995,6 +9007,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_AMP),
++	SND_PCI_QUIRK(0x144d, 0xc1a3, "Samsung Galaxy Book Pro (NP935XDB-KC1SE)", ALC298_FIXUP_SAMSUNG_AMP),
++	SND_PCI_QUIRK(0x144d, 0xc1a6, "Samsung Galaxy Book Pro 360 (NP930QBD)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ 	SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_AMP),
+diff --git a/sound/soc/codecs/jz4725b.c b/sound/soc/codecs/jz4725b.c
+index e49374c72e70a..8a830d0ad9500 100644
+--- a/sound/soc/codecs/jz4725b.c
++++ b/sound/soc/codecs/jz4725b.c
+@@ -136,14 +136,17 @@ enum {
+ #define REG_CGR3_GO1L_OFFSET		0
+ #define REG_CGR3_GO1L_MASK		(0x1f << REG_CGR3_GO1L_OFFSET)
+ 
++#define REG_CGR10_GIL_OFFSET		0
++#define REG_CGR10_GIR_OFFSET		4
++
+ struct jz_icdc {
+ 	struct regmap *regmap;
+ 	void __iomem *base;
+ 	struct clk *clk;
+ };
+ 
+-static const SNDRV_CTL_TLVD_DECLARE_DB_LINEAR(jz4725b_dac_tlv, -2250, 0);
+-static const SNDRV_CTL_TLVD_DECLARE_DB_LINEAR(jz4725b_line_tlv, -1500, 600);
++static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(jz4725b_adc_tlv,     0, 150, 0);
++static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(jz4725b_dac_tlv, -2250, 150, 0);
+ 
+ static const struct snd_kcontrol_new jz4725b_codec_controls[] = {
+ 	SOC_DOUBLE_TLV("Master Playback Volume",
+@@ -151,11 +154,11 @@ static const struct snd_kcontrol_new jz4725b_codec_controls[] = {
+ 		       REG_CGR1_GODL_OFFSET,
+ 		       REG_CGR1_GODR_OFFSET,
+ 		       0xf, 1, jz4725b_dac_tlv),
+-	SOC_DOUBLE_R_TLV("Master Capture Volume",
+-			 JZ4725B_CODEC_REG_CGR3,
+-			 JZ4725B_CODEC_REG_CGR2,
+-			 REG_CGR2_GO1R_OFFSET,
+-			 0x1f, 1, jz4725b_line_tlv),
++	SOC_DOUBLE_TLV("Master Capture Volume",
++		       JZ4725B_CODEC_REG_CGR10,
++		       REG_CGR10_GIL_OFFSET,
++		       REG_CGR10_GIR_OFFSET,
++		       0xf, 0, jz4725b_adc_tlv),
+ 
+ 	SOC_SINGLE("Master Playback Switch", JZ4725B_CODEC_REG_CR1,
+ 		   REG_CR1_DAC_MUTE_OFFSET, 1, 1),
+@@ -180,7 +183,7 @@ static SOC_VALUE_ENUM_SINGLE_DECL(jz4725b_codec_adc_src_enum,
+ 				  jz4725b_codec_adc_src_texts,
+ 				  jz4725b_codec_adc_src_values);
+ static const struct snd_kcontrol_new jz4725b_codec_adc_src_ctrl =
+-			SOC_DAPM_ENUM("Route", jz4725b_codec_adc_src_enum);
++	SOC_DAPM_ENUM("ADC Source Capture Route", jz4725b_codec_adc_src_enum);
+ 
+ static const struct snd_kcontrol_new jz4725b_codec_mixer_controls[] = {
+ 	SOC_DAPM_SINGLE("Line In Bypass", JZ4725B_CODEC_REG_CR1,
+@@ -225,7 +228,7 @@ static const struct snd_soc_dapm_widget jz4725b_codec_dapm_widgets[] = {
+ 	SND_SOC_DAPM_ADC("ADC", "Capture",
+ 			 JZ4725B_CODEC_REG_PMR1, REG_PMR1_SB_ADC_OFFSET, 1),
+ 
+-	SND_SOC_DAPM_MUX("ADC Source", SND_SOC_NOPM, 0, 0,
++	SND_SOC_DAPM_MUX("ADC Source Capture Route", SND_SOC_NOPM, 0, 0,
+ 			 &jz4725b_codec_adc_src_ctrl),
+ 
+ 	/* Mixer */
+@@ -236,7 +239,8 @@ static const struct snd_soc_dapm_widget jz4725b_codec_dapm_widgets[] = {
+ 	SND_SOC_DAPM_MIXER("DAC to Mixer", JZ4725B_CODEC_REG_CR1,
+ 			   REG_CR1_DACSEL_OFFSET, 0, NULL, 0),
+ 
+-	SND_SOC_DAPM_MIXER("Line In", SND_SOC_NOPM, 0, 0, NULL, 0),
++	SND_SOC_DAPM_MIXER("Line In", JZ4725B_CODEC_REG_PMR1,
++			   REG_PMR1_SB_LIN_OFFSET, 1, NULL, 0),
+ 	SND_SOC_DAPM_MIXER("HP Out", JZ4725B_CODEC_REG_CR1,
+ 			   REG_CR1_HP_DIS_OFFSET, 1, NULL, 0),
+ 
+@@ -283,11 +287,11 @@ static const struct snd_soc_dapm_route jz4725b_codec_dapm_routes[] = {
+ 	{"Mixer", NULL, "DAC to Mixer"},
+ 
+ 	{"Mixer to ADC", NULL, "Mixer"},
+-	{"ADC Source", "Mixer", "Mixer to ADC"},
+-	{"ADC Source", "Line In", "Line In"},
+-	{"ADC Source", "Mic 1", "Mic 1"},
+-	{"ADC Source", "Mic 2", "Mic 2"},
+-	{"ADC", NULL, "ADC Source"},
++	{"ADC Source Capture Route", "Mixer", "Mixer to ADC"},
++	{"ADC Source Capture Route", "Line In", "Line In"},
++	{"ADC Source Capture Route", "Mic 1", "Mic 1"},
++	{"ADC Source Capture Route", "Mic 2", "Mic 2"},
++	{"ADC", NULL, "ADC Source Capture Route"},
+ 
+ 	{"Out Stage", NULL, "Mixer"},
+ 	{"HP Out", NULL, "Out Stage"},
+diff --git a/sound/soc/codecs/mt6660.c b/sound/soc/codecs/mt6660.c
+index e18a58868273b..3cee2ea4b85de 100644
+--- a/sound/soc/codecs/mt6660.c
++++ b/sound/soc/codecs/mt6660.c
+@@ -504,14 +504,14 @@ static int mt6660_i2c_probe(struct i2c_client *client,
+ 		dev_err(chip->dev, "read chip revision fail\n");
+ 		goto probe_fail;
+ 	}
++	pm_runtime_set_active(chip->dev);
++	pm_runtime_enable(chip->dev);
+ 
+ 	ret = devm_snd_soc_register_component(chip->dev,
+ 					       &mt6660_component_driver,
+ 					       &mt6660_codec_dai, 1);
+-	if (!ret) {
+-		pm_runtime_set_active(chip->dev);
+-		pm_runtime_enable(chip->dev);
+-	}
++	if (ret)
++		pm_runtime_disable(chip->dev);
+ 
+ 	return ret;
+ 
+diff --git a/sound/soc/codecs/rt1308-sdw.h b/sound/soc/codecs/rt1308-sdw.h
+index c5ce75666dcc8..98293d73ebabc 100644
+--- a/sound/soc/codecs/rt1308-sdw.h
++++ b/sound/soc/codecs/rt1308-sdw.h
+@@ -139,9 +139,11 @@ static const struct reg_default rt1308_reg_defaults[] = {
+ 	{ 0x3005, 0x23 },
+ 	{ 0x3008, 0x02 },
+ 	{ 0x300a, 0x00 },
++	{ 0xc000 | (RT1308_DATA_PATH << 4), 0x00 },
+ 	{ 0xc003 | (RT1308_DAC_SET << 4), 0x00 },
+ 	{ 0xc001 | (RT1308_POWER << 4), 0x00 },
+ 	{ 0xc002 | (RT1308_POWER << 4), 0x00 },
++	{ 0xc000 | (RT1308_POWER_STATUS << 4), 0x00 },
+ };
+ 
+ #define RT1308_SDW_OFFSET 0xc000
+diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c
+index 8b262e7f5275f..c8f6f5122cacb 100644
+--- a/sound/soc/codecs/tas2764.c
++++ b/sound/soc/codecs/tas2764.c
+@@ -386,20 +386,13 @@ static int tas2764_set_dai_tdm_slot(struct snd_soc_dai *dai,
+ 	if (tx_mask == 0 || rx_mask != 0)
+ 		return -EINVAL;
+ 
+-	if (slots == 1) {
+-		if (tx_mask != 1)
+-			return -EINVAL;
+-		left_slot = 0;
+-		right_slot = 0;
++	left_slot = __ffs(tx_mask);
++	tx_mask &= ~(1 << left_slot);
++	if (tx_mask == 0) {
++		right_slot = left_slot;
+ 	} else {
+-		left_slot = __ffs(tx_mask);
+-		tx_mask &= ~(1 << left_slot);
+-		if (tx_mask == 0) {
+-			right_slot = left_slot;
+-		} else {
+-			right_slot = __ffs(tx_mask);
+-			tx_mask &= ~(1 << right_slot);
+-		}
++		right_slot = __ffs(tx_mask);
++		tx_mask &= ~(1 << right_slot);
+ 	}
+ 
+ 	if (tx_mask != 0 || left_slot >= slots || right_slot >= slots)
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index 171bbcc919d55..c213c8096142b 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -395,21 +395,13 @@ static int tas2770_set_dai_tdm_slot(struct snd_soc_dai *dai,
+ 	if (tx_mask == 0 || rx_mask != 0)
+ 		return -EINVAL;
+ 
+-	if (slots == 1) {
+-		if (tx_mask != 1)
+-			return -EINVAL;
+-
+-		left_slot = 0;
+-		right_slot = 0;
++	left_slot = __ffs(tx_mask);
++	tx_mask &= ~(1 << left_slot);
++	if (tx_mask == 0) {
++		right_slot = left_slot;
+ 	} else {
+-		left_slot = __ffs(tx_mask);
+-		tx_mask &= ~(1 << left_slot);
+-		if (tx_mask == 0) {
+-			right_slot = left_slot;
+-		} else {
+-			right_slot = __ffs(tx_mask);
+-			tx_mask &= ~(1 << right_slot);
+-		}
++		right_slot = __ffs(tx_mask);
++		tx_mask &= ~(1 << right_slot);
+ 	}
+ 
+ 	if (tx_mask != 0 || left_slot >= slots || right_slot >= slots)
+diff --git a/sound/soc/codecs/wm5102.c b/sound/soc/codecs/wm5102.c
+index b7f5e5391fdb7..2ed3fa67027d0 100644
+--- a/sound/soc/codecs/wm5102.c
++++ b/sound/soc/codecs/wm5102.c
+@@ -2083,6 +2083,9 @@ static int wm5102_probe(struct platform_device *pdev)
+ 		regmap_update_bits(arizona->regmap, wm5102_digital_vu[i],
+ 				   WM5102_DIG_VU, WM5102_DIG_VU);
+ 
++	pm_runtime_enable(&pdev->dev);
++	pm_runtime_idle(&pdev->dev);
++
+ 	ret = arizona_request_irq(arizona, ARIZONA_IRQ_DSP_IRQ1,
+ 				  "ADSP2 Compressed IRQ", wm5102_adsp2_irq,
+ 				  wm5102);
+@@ -2115,9 +2118,6 @@ static int wm5102_probe(struct platform_device *pdev)
+ 		goto err_spk_irqs;
+ 	}
+ 
+-	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_idle(&pdev->dev);
+-
+ 	return ret;
+ 
+ err_spk_irqs:
+diff --git a/sound/soc/codecs/wm5110.c b/sound/soc/codecs/wm5110.c
+index c158f8b1e8e46..d0cef982215dc 100644
+--- a/sound/soc/codecs/wm5110.c
++++ b/sound/soc/codecs/wm5110.c
+@@ -2452,6 +2452,9 @@ static int wm5110_probe(struct platform_device *pdev)
+ 		regmap_update_bits(arizona->regmap, wm5110_digital_vu[i],
+ 				   WM5110_DIG_VU, WM5110_DIG_VU);
+ 
++	pm_runtime_enable(&pdev->dev);
++	pm_runtime_idle(&pdev->dev);
++
+ 	ret = arizona_request_irq(arizona, ARIZONA_IRQ_DSP_IRQ1,
+ 				  "ADSP2 Compressed IRQ", wm5110_adsp2_irq,
+ 				  wm5110);
+@@ -2484,9 +2487,6 @@ static int wm5110_probe(struct platform_device *pdev)
+ 		goto err_spk_irqs;
+ 	}
+ 
+-	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_idle(&pdev->dev);
+-
+ 	return ret;
+ 
+ err_spk_irqs:
+diff --git a/sound/soc/codecs/wm8962.c b/sound/soc/codecs/wm8962.c
+index 38651022e3d5f..21574447650cd 100644
+--- a/sound/soc/codecs/wm8962.c
++++ b/sound/soc/codecs/wm8962.c
+@@ -1840,6 +1840,49 @@ SOC_SINGLE_TLV("SPKOUTR Mixer DACR Volume", WM8962_SPEAKER_MIXER_5,
+ 	       4, 1, 0, inmix_tlv),
+ };
+ 
++static int tp_event(struct snd_soc_dapm_widget *w,
++		    struct snd_kcontrol *kcontrol, int event)
++{
++	int ret, reg, val, mask;
++	struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
++
++	ret = pm_runtime_resume_and_get(component->dev);
++	if (ret < 0) {
++		dev_err(component->dev, "Failed to resume device: %d\n", ret);
++		return ret;
++	}
++
++	reg = WM8962_ADDITIONAL_CONTROL_4;
++
++	if (!strcmp(w->name, "TEMP_HP")) {
++		mask = WM8962_TEMP_ENA_HP_MASK;
++		val = WM8962_TEMP_ENA_HP;
++	} else if (!strcmp(w->name, "TEMP_SPK")) {
++		mask = WM8962_TEMP_ENA_SPK_MASK;
++		val = WM8962_TEMP_ENA_SPK;
++	} else {
++		pm_runtime_put(component->dev);
++		return -EINVAL;
++	}
++
++	switch (event) {
++	case SND_SOC_DAPM_POST_PMD:
++		val = 0;
++		fallthrough;
++	case SND_SOC_DAPM_POST_PMU:
++		ret = snd_soc_component_update_bits(component, reg, mask, val);
++		break;
++	default:
++		WARN(1, "Invalid event %d\n", event);
++		pm_runtime_put(component->dev);
++		return -EINVAL;
++	}
++
++	pm_runtime_put(component->dev);
++
++	return 0;
++}
++
+ static int cp_event(struct snd_soc_dapm_widget *w,
+ 		    struct snd_kcontrol *kcontrol, int event)
+ {
+@@ -2133,8 +2176,10 @@ SND_SOC_DAPM_SUPPLY("TOCLK", WM8962_ADDITIONAL_CONTROL_1, 0, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY_S("DSP2", 1, WM8962_DSP2_POWER_MANAGEMENT,
+ 		      WM8962_DSP2_ENA_SHIFT, 0, dsp2_event,
+ 		      SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD),
+-SND_SOC_DAPM_SUPPLY("TEMP_HP", WM8962_ADDITIONAL_CONTROL_4, 2, 0, NULL, 0),
+-SND_SOC_DAPM_SUPPLY("TEMP_SPK", WM8962_ADDITIONAL_CONTROL_4, 1, 0, NULL, 0),
++SND_SOC_DAPM_SUPPLY("TEMP_HP", SND_SOC_NOPM, 0, 0, tp_event,
++		SND_SOC_DAPM_POST_PMU|SND_SOC_DAPM_POST_PMD),
++SND_SOC_DAPM_SUPPLY("TEMP_SPK", SND_SOC_NOPM, 0, 0, tp_event,
++		SND_SOC_DAPM_POST_PMU|SND_SOC_DAPM_POST_PMD),
+ 
+ SND_SOC_DAPM_MIXER("INPGAL", WM8962_LEFT_INPUT_PGA_CONTROL, 4, 0,
+ 		   inpgal, ARRAY_SIZE(inpgal)),
+@@ -3760,6 +3805,11 @@ static int wm8962_i2c_probe(struct i2c_client *i2c,
+ 	if (ret < 0)
+ 		goto err_pm_runtime;
+ 
++	regmap_update_bits(wm8962->regmap, WM8962_ADDITIONAL_CONTROL_4,
++			    WM8962_TEMP_ENA_HP_MASK, 0);
++	regmap_update_bits(wm8962->regmap, WM8962_ADDITIONAL_CONTROL_4,
++			    WM8962_TEMP_ENA_SPK_MASK, 0);
++
+ 	regcache_cache_only(wm8962->regmap, true);
+ 
+ 	/* The drivers should power up as needed */
+diff --git a/sound/soc/codecs/wm8997.c b/sound/soc/codecs/wm8997.c
+index 07378714b013a..229f2986cd96b 100644
+--- a/sound/soc/codecs/wm8997.c
++++ b/sound/soc/codecs/wm8997.c
+@@ -1156,6 +1156,9 @@ static int wm8997_probe(struct platform_device *pdev)
+ 		regmap_update_bits(arizona->regmap, wm8997_digital_vu[i],
+ 				   WM8997_DIG_VU, WM8997_DIG_VU);
+ 
++	pm_runtime_enable(&pdev->dev);
++	pm_runtime_idle(&pdev->dev);
++
+ 	arizona_init_common(arizona);
+ 
+ 	ret = arizona_init_vol_limit(arizona);
+@@ -1174,9 +1177,6 @@ static int wm8997_probe(struct platform_device *pdev)
+ 		goto err_spk_irqs;
+ 	}
+ 
+-	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_idle(&pdev->dev);
+-
+ 	return ret;
+ 
+ err_spk_irqs:
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index a6d6d10cd471b..e9da95ebccc83 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -3178,10 +3178,23 @@ EXPORT_SYMBOL_GPL(snd_soc_of_get_dai_link_codecs);
+ 
+ static int __init snd_soc_init(void)
+ {
++	int ret;
++
+ 	snd_soc_debugfs_init();
+-	snd_soc_util_init();
++	ret = snd_soc_util_init();
++	if (ret)
++		goto err_util_init;
+ 
+-	return platform_driver_register(&soc_driver);
++	ret = platform_driver_register(&soc_driver);
++	if (ret)
++		goto err_register;
++	return 0;
++
++err_register:
++	snd_soc_util_exit();
++err_util_init:
++	snd_soc_debugfs_exit();
++	return ret;
+ }
+ module_init(snd_soc_init);
+ 
+diff --git a/sound/soc/soc-utils.c b/sound/soc/soc-utils.c
+index f27f94ca064bc..6b398ffabb02e 100644
+--- a/sound/soc/soc-utils.c
++++ b/sound/soc/soc-utils.c
+@@ -171,7 +171,7 @@ int __init snd_soc_util_init(void)
+ 	return ret;
+ }
+ 
+-void __exit snd_soc_util_exit(void)
++void snd_soc_util_exit(void)
+ {
+ 	platform_driver_unregister(&soc_dummy_driver);
+ 	platform_device_unregister(soc_dummy_dev);
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 93fee6e365a6e..b02e1a33304f0 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1149,10 +1149,8 @@ static int snd_usbmidi_output_open(struct snd_rawmidi_substream *substream)
+ 					port = &umidi->endpoints[i].out->ports[j];
+ 					break;
+ 				}
+-	if (!port) {
+-		snd_BUG();
++	if (!port)
+ 		return -ENXIO;
+-	}
+ 
+ 	substream->runtime->private_data = port;
+ 	port->state = STATE_UNKNOWN;
+diff --git a/tools/testing/selftests/futex/functional/Makefile b/tools/testing/selftests/futex/functional/Makefile
+index 23207829ec752..6a0ed2e7881eb 100644
+--- a/tools/testing/selftests/futex/functional/Makefile
++++ b/tools/testing/selftests/futex/functional/Makefile
+@@ -3,11 +3,11 @@ INCLUDES := -I../include -I../../
+ CFLAGS := $(CFLAGS) -g -O2 -Wall -D_GNU_SOURCE -pthread $(INCLUDES)
+ LDLIBS := -lpthread -lrt
+ 
+-HEADERS := \
++LOCAL_HDRS := \
+ 	../include/futextest.h \
+ 	../include/atomic.h \
+ 	../include/logging.h
+-TEST_GEN_FILES := \
++TEST_GEN_PROGS := \
+ 	futex_wait_timeout \
+ 	futex_wait_wouldblock \
+ 	futex_requeue_pi \
+@@ -21,5 +21,3 @@ TEST_PROGS := run.sh
+ top_srcdir = ../../../../..
+ KSFT_KHDR_INSTALL := 1
+ include ../../lib.mk
+-
+-$(TEST_GEN_FILES): $(HEADERS)
+diff --git a/tools/testing/selftests/intel_pstate/Makefile b/tools/testing/selftests/intel_pstate/Makefile
+index 39f0fa2a8fd63..05d66ef50c977 100644
+--- a/tools/testing/selftests/intel_pstate/Makefile
++++ b/tools/testing/selftests/intel_pstate/Makefile
+@@ -2,10 +2,10 @@
+ CFLAGS := $(CFLAGS) -Wall -D_GNU_SOURCE
+ LDLIBS += -lm
+ 
+-uname_M := $(shell uname -m 2>/dev/null || echo not)
+-ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
++ARCH ?= $(shell uname -m 2>/dev/null || echo not)
++ARCH_PROCESSED := $(shell echo $(ARCH) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
+ 
+-ifeq (x86,$(ARCH))
++ifeq (x86,$(ARCH_PROCESSED))
+ TEST_GEN_FILES := msr aperf
+ endif
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-12-02 17:26 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-12-02 17:26 UTC (permalink / raw
  To: gentoo-commits

commit:     c4f0427ee571892f68f140460e8abd768bc66c3b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Dec  2 17:26:15 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Dec  2 17:26:15 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c4f0427e

Linux patch 5.10.157

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1156_linux-5.10.157.patch | 7754 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7758 insertions(+)

diff --git a/0000_README b/0000_README
index 99f78f7a..47a5e57c 100644
--- a/0000_README
+++ b/0000_README
@@ -667,6 +667,10 @@ Patch:  1155_linux-5.10.156.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.156
 
+Patch:  1156_linux-5.10.157.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.157
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1156_linux-5.10.157.patch b/1156_linux-5.10.157.patch
new file mode 100644
index 00000000..7c8e830d
--- /dev/null
+++ b/1156_linux-5.10.157.patch
@@ -0,0 +1,7754 @@
+diff --git a/Makefile b/Makefile
+index 166f87bdc1905..bf22df29c4d81 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 156
++SUBLEVEL = 157
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/am335x-pcm-953.dtsi b/arch/arm/boot/dts/am335x-pcm-953.dtsi
+index 6c547c83e5ddf..fc465f0d7e18b 100644
+--- a/arch/arm/boot/dts/am335x-pcm-953.dtsi
++++ b/arch/arm/boot/dts/am335x-pcm-953.dtsi
+@@ -12,22 +12,20 @@
+ 	compatible = "phytec,am335x-pcm-953", "phytec,am335x-phycore-som", "ti,am33xx";
+ 
+ 	/* Power */
+-	regulators {
+-		vcc3v3: fixedregulator@1 {
+-			compatible = "regulator-fixed";
+-			regulator-name = "vcc3v3";
+-			regulator-min-microvolt = <3300000>;
+-			regulator-max-microvolt = <3300000>;
+-			regulator-boot-on;
+-		};
++	vcc3v3: fixedregulator1 {
++		compatible = "regulator-fixed";
++		regulator-name = "vcc3v3";
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++		regulator-boot-on;
++	};
+ 
+-		vcc1v8: fixedregulator@2 {
+-			compatible = "regulator-fixed";
+-			regulator-name = "vcc1v8";
+-			regulator-min-microvolt = <1800000>;
+-			regulator-max-microvolt = <1800000>;
+-			regulator-boot-on;
+-		};
++	vcc1v8: fixedregulator2 {
++		compatible = "regulator-fixed";
++		regulator-name = "vcc1v8";
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
++		regulator-boot-on;
+ 	};
+ 
+ 	/* User IO */
+diff --git a/arch/arm/boot/dts/at91sam9g20ek_common.dtsi b/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
+index ca03685f0f086..4783e657b4cb6 100644
+--- a/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
++++ b/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
+@@ -39,6 +39,13 @@
+ 
+ 				};
+ 
++				usb1 {
++					pinctrl_usb1_vbus_gpio: usb1_vbus_gpio {
++						atmel,pins =
++							<AT91_PIOC 5 AT91_PERIPH_GPIO AT91_PINCTRL_DEGLITCH>;	/* PC5 GPIO */
++					};
++				};
++
+ 				mmc0_slot1 {
+ 					pinctrl_board_mmc0_slot1: mmc0_slot1-board {
+ 						atmel,pins =
+@@ -84,6 +91,8 @@
+ 			};
+ 
+ 			usb1: gadget@fffa4000 {
++				pinctrl-0 = <&pinctrl_usb1_vbus_gpio>;
++				pinctrl-names = "default";
+ 				atmel,vbus-gpio = <&pioC 5 GPIO_ACTIVE_HIGH>;
+ 				status = "okay";
+ 			};
+diff --git a/arch/arm/boot/dts/imx6q-prti6q.dts b/arch/arm/boot/dts/imx6q-prti6q.dts
+index b4605edfd2ab8..d8fa83effd638 100644
+--- a/arch/arm/boot/dts/imx6q-prti6q.dts
++++ b/arch/arm/boot/dts/imx6q-prti6q.dts
+@@ -364,8 +364,8 @@
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_wifi>;
+ 		interrupts-extended = <&gpio1 30 IRQ_TYPE_LEVEL_HIGH>;
+-		ref-clock-frequency = "38400000";
+-		tcxo-clock-frequency = "19200000";
++		ref-clock-frequency = <38400000>;
++		tcxo-clock-frequency = <19200000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm/mach-mxs/mach-mxs.c b/arch/arm/mach-mxs/mach-mxs.c
+index c109f47e9cbca..a687e83ad6048 100644
+--- a/arch/arm/mach-mxs/mach-mxs.c
++++ b/arch/arm/mach-mxs/mach-mxs.c
+@@ -387,8 +387,10 @@ static void __init mxs_machine_init(void)
+ 
+ 	root = of_find_node_by_path("/");
+ 	ret = of_property_read_string(root, "model", &soc_dev_attr->machine);
+-	if (ret)
++	if (ret) {
++		kfree(soc_dev_attr);
+ 		return;
++	}
+ 
+ 	soc_dev_attr->family = "Freescale MXS Family";
+ 	soc_dev_attr->soc_id = mxs_get_soc_id();
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts b/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
+index a8d363568fd62..3fc761c8d550a 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
+@@ -203,7 +203,7 @@
+ 	cap-sd-highspeed;
+ 	cd-gpios = <&gpio0 RK_PA7 GPIO_ACTIVE_LOW>;
+ 	disable-wp;
+-	max-frequency = <150000000>;
++	max-frequency = <40000000>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&sdmmc_clk &sdmmc_cmd &sdmmc_cd &sdmmc_bus4>;
+ 	vmmc-supply = <&vcc3v3_baseboard>;
+diff --git a/arch/arm64/include/asm/syscall_wrapper.h b/arch/arm64/include/asm/syscall_wrapper.h
+index b383b4802a7bd..d30217c21eff7 100644
+--- a/arch/arm64/include/asm/syscall_wrapper.h
++++ b/arch/arm64/include/asm/syscall_wrapper.h
+@@ -8,7 +8,7 @@
+ #ifndef __ASM_SYSCALL_WRAPPER_H
+ #define __ASM_SYSCALL_WRAPPER_H
+ 
+-struct pt_regs;
++#include <asm/ptrace.h>
+ 
+ #define SC_ARM64_REGS_TO_ARGS(x, ...)				\
+ 	__MAP(x,__SC_ARGS					\
+diff --git a/arch/mips/include/asm/fw/fw.h b/arch/mips/include/asm/fw/fw.h
+index d0ef8b4892bbe..d0494ce4b3373 100644
+--- a/arch/mips/include/asm/fw/fw.h
++++ b/arch/mips/include/asm/fw/fw.h
+@@ -26,6 +26,6 @@ extern char *fw_getcmdline(void);
+ extern void fw_meminit(void);
+ extern char *fw_getenv(char *name);
+ extern unsigned long fw_getenvl(char *name);
+-extern void fw_init_early_console(char port);
++extern void fw_init_early_console(void);
+ 
+ #endif /* __ASM_FW_H_ */
+diff --git a/arch/mips/pic32/pic32mzda/early_console.c b/arch/mips/pic32/pic32mzda/early_console.c
+index 25372e62783b5..3cd1b408fa1cb 100644
+--- a/arch/mips/pic32/pic32mzda/early_console.c
++++ b/arch/mips/pic32/pic32mzda/early_console.c
+@@ -27,7 +27,7 @@
+ #define U_BRG(x)	(UART_BASE(x) + 0x40)
+ 
+ static void __iomem *uart_base;
+-static char console_port = -1;
++static int console_port = -1;
+ 
+ static int __init configure_uart_pins(int port)
+ {
+@@ -47,7 +47,7 @@ static int __init configure_uart_pins(int port)
+ 	return 0;
+ }
+ 
+-static void __init configure_uart(char port, int baud)
++static void __init configure_uart(int port, int baud)
+ {
+ 	u32 pbclk;
+ 
+@@ -60,7 +60,7 @@ static void __init configure_uart(char port, int baud)
+ 		     uart_base + PIC32_SET(U_STA(port)));
+ }
+ 
+-static void __init setup_early_console(char port, int baud)
++static void __init setup_early_console(int port, int baud)
+ {
+ 	if (configure_uart_pins(port))
+ 		return;
+@@ -130,16 +130,15 @@ _out:
+ 	return baud;
+ }
+ 
+-void __init fw_init_early_console(char port)
++void __init fw_init_early_console(void)
+ {
+ 	char *arch_cmdline = pic32_getcmdline();
+-	int baud = -1;
++	int baud, port;
+ 
+ 	uart_base = ioremap(PIC32_BASE_UART, 0xc00);
+ 
+ 	baud = get_baud_from_cmdline(arch_cmdline);
+-	if (port == -1)
+-		port = get_port_from_cmdline(arch_cmdline);
++	port = get_port_from_cmdline(arch_cmdline);
+ 
+ 	if (port == -1)
+ 		port = EARLY_CONSOLE_PORT;
+diff --git a/arch/mips/pic32/pic32mzda/init.c b/arch/mips/pic32/pic32mzda/init.c
+index f232c77ff5265..488c0bee7ebf5 100644
+--- a/arch/mips/pic32/pic32mzda/init.c
++++ b/arch/mips/pic32/pic32mzda/init.c
+@@ -60,7 +60,7 @@ void __init plat_mem_setup(void)
+ 		strlcpy(arcs_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+ 
+ #ifdef CONFIG_EARLY_PRINTK
+-	fw_init_early_console(-1);
++	fw_init_early_console();
+ #endif
+ 	pic32_config_init();
+ }
+diff --git a/arch/nios2/boot/Makefile b/arch/nios2/boot/Makefile
+index 37dfc7e584bce..0b704c1f379f5 100644
+--- a/arch/nios2/boot/Makefile
++++ b/arch/nios2/boot/Makefile
+@@ -20,7 +20,7 @@ $(obj)/vmlinux.bin: vmlinux FORCE
+ $(obj)/vmlinux.gz: $(obj)/vmlinux.bin FORCE
+ 	$(call if_changed,gzip)
+ 
+-$(obj)/vmImage: $(obj)/vmlinux.gz
++$(obj)/vmImage: $(obj)/vmlinux.gz FORCE
+ 	$(call if_changed,uimage)
+ 	@$(kecho) 'Kernel: $@ is ready'
+ 
+diff --git a/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts b/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
+index 60846e88ae4b1..dddabfbbc7a90 100644
+--- a/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
++++ b/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
+@@ -3,6 +3,8 @@
+ 
+ #include "fu540-c000.dtsi"
+ #include <dt-bindings/gpio/gpio.h>
++#include <dt-bindings/leds/common.h>
++#include <dt-bindings/pwm/pwm.h>
+ 
+ /* Clock frequency (in Hz) of the PCB crystal for rtcclk */
+ #define RTCCLK_FREQ		1000000
+@@ -46,6 +48,42 @@
+ 		compatible = "gpio-restart";
+ 		gpios = <&gpio 10 GPIO_ACTIVE_LOW>;
+ 	};
++
++	led-controller {
++		compatible = "pwm-leds";
++
++		led-d1 {
++			pwms = <&pwm0 0 7812500 PWM_POLARITY_INVERTED>;
++			active-low;
++			color = <LED_COLOR_ID_GREEN>;
++			max-brightness = <255>;
++			label = "d1";
++		};
++
++		led-d2 {
++			pwms = <&pwm0 1 7812500 PWM_POLARITY_INVERTED>;
++			active-low;
++			color = <LED_COLOR_ID_GREEN>;
++			max-brightness = <255>;
++			label = "d2";
++		};
++
++		led-d3 {
++			pwms = <&pwm0 2 7812500 PWM_POLARITY_INVERTED>;
++			active-low;
++			color = <LED_COLOR_ID_GREEN>;
++			max-brightness = <255>;
++			label = "d3";
++		};
++
++		led-d4 {
++			pwms = <&pwm0 3 7812500 PWM_POLARITY_INVERTED>;
++			active-low;
++			color = <LED_COLOR_ID_GREEN>;
++			max-brightness = <255>;
++			label = "d4";
++		};
++	};
+ };
+ 
+ &uart0 {
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index 926ab3960f9e4..c92b55a0ec1cb 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -28,6 +28,9 @@ obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+ 
+ obj-y += vdso.o vdso-syms.o
+ CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
++ifneq ($(filter vgettimeofday, $(vdso-syms)),)
++CPPFLAGS_vdso.lds += -DHAS_VGETTIMEOFDAY
++endif
+ 
+ # Disable -pg to prevent insert call site
+ CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE)
+diff --git a/arch/riscv/kernel/vdso/vdso.lds.S b/arch/riscv/kernel/vdso/vdso.lds.S
+index e6f558bca71bb..b3e58402c3426 100644
+--- a/arch/riscv/kernel/vdso/vdso.lds.S
++++ b/arch/riscv/kernel/vdso/vdso.lds.S
+@@ -64,9 +64,11 @@ VERSION
+ 	LINUX_4.15 {
+ 	global:
+ 		__vdso_rt_sigreturn;
++#ifdef HAS_VGETTIMEOFDAY
+ 		__vdso_gettimeofday;
+ 		__vdso_clock_gettime;
+ 		__vdso_clock_getres;
++#endif
+ 		__vdso_getcpu;
+ 		__vdso_flush_icache;
+ 	local: *;
+diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
+index 76762dc67ca90..f292c3e106710 100644
+--- a/arch/s390/kernel/crash_dump.c
++++ b/arch/s390/kernel/crash_dump.c
+@@ -44,7 +44,7 @@ struct save_area {
+ 	u64 fprs[16];
+ 	u32 fpc;
+ 	u32 prefix;
+-	u64 todpreg;
++	u32 todpreg;
+ 	u64 timer;
+ 	u64 todcmp;
+ 	u64 vxrs_low[16];
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 2b7528821577c..c34ba034ca111 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -305,12 +305,6 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+ 	return 0;
+ }
+ 
+-static int is_external_interrupt(u32 info)
+-{
+-	info &= SVM_EVTINJ_TYPE_MASK | SVM_EVTINJ_VALID;
+-	return info == (SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR);
+-}
+-
+ static u32 svm_get_interrupt_shadow(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+@@ -1357,6 +1351,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
+ 	 */
+ 	svm_clear_current_vmcb(svm->vmcb);
+ 
++	svm_leave_nested(vcpu);
+ 	svm_free_nested(svm);
+ 
+ 	__free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT));
+@@ -3115,15 +3110,6 @@ static int handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
+ 		return 0;
+ 	}
+ 
+-	if (is_external_interrupt(svm->vmcb->control.exit_int_info) &&
+-	    exit_code != SVM_EXIT_EXCP_BASE + PF_VECTOR &&
+-	    exit_code != SVM_EXIT_NPF && exit_code != SVM_EXIT_TASK_SWITCH &&
+-	    exit_code != SVM_EXIT_INTR && exit_code != SVM_EXIT_NMI)
+-		printk(KERN_ERR "%s: unexpected exit_int_info 0x%x "
+-		       "exit_code 0x%x\n",
+-		       __func__, svm->vmcb->control.exit_int_info,
+-		       exit_code);
+-
+ 	if (exit_fastpath != EXIT_FASTPATH_NONE)
+ 		return 1;
+ 
+diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
+index 91e61dbba3e0c..88cb537ccdea1 100644
+--- a/arch/x86/mm/ioremap.c
++++ b/arch/x86/mm/ioremap.c
+@@ -216,9 +216,15 @@ __ioremap_caller(resource_size_t phys_addr, unsigned long size,
+ 	 * Mappings have to be page-aligned
+ 	 */
+ 	offset = phys_addr & ~PAGE_MASK;
+-	phys_addr &= PHYSICAL_PAGE_MASK;
++	phys_addr &= PAGE_MASK;
+ 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
+ 
++	/*
++	 * Mask out any bits not part of the actual physical
++	 * address, like memory encryption bits.
++	 */
++	phys_addr &= PHYSICAL_PAGE_MASK;
++
+ 	retval = memtype_reserve(phys_addr, (u64)phys_addr + size,
+ 						pcm, &new_pcm);
+ 	if (retval) {
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index be6733558b831..badb90352bf33 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -611,6 +611,10 @@ struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio)
+ 	struct bfq_group *bfqg;
+ 
+ 	while (blkg) {
++		if (!blkg->online) {
++			blkg = blkg->parent;
++			continue;
++		}
+ 		bfqg = blkg_to_bfqg(blkg);
+ 		if (bfqg->online) {
+ 			bio_associate_blkg_from_css(bio, &blkg->blkcg->css);
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index cfb1393a0891a..4473adef2f5a4 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -2008,15 +2008,21 @@ static void binder_cleanup_transaction(struct binder_transaction *t,
+ /**
+  * binder_get_object() - gets object and checks for valid metadata
+  * @proc:	binder_proc owning the buffer
++ * @u:		sender's user pointer to base of buffer
+  * @buffer:	binder_buffer that we're parsing.
+  * @offset:	offset in the @buffer at which to validate an object.
+  * @object:	struct binder_object to read into
+  *
+- * Return:	If there's a valid metadata object at @offset in @buffer, the
++ * Copy the binder object at the given offset into @object. If @u is
++ * provided then the copy is from the sender's buffer. If not, then
++ * it is copied from the target's @buffer.
++ *
++ * Return:	If there's a valid metadata object at @offset, the
+  *		size of that object. Otherwise, it returns zero. The object
+  *		is read into the struct binder_object pointed to by @object.
+  */
+ static size_t binder_get_object(struct binder_proc *proc,
++				const void __user *u,
+ 				struct binder_buffer *buffer,
+ 				unsigned long offset,
+ 				struct binder_object *object)
+@@ -2026,10 +2032,16 @@ static size_t binder_get_object(struct binder_proc *proc,
+ 	size_t object_size = 0;
+ 
+ 	read_size = min_t(size_t, sizeof(*object), buffer->data_size - offset);
+-	if (offset > buffer->data_size || read_size < sizeof(*hdr) ||
+-	    binder_alloc_copy_from_buffer(&proc->alloc, object, buffer,
+-					  offset, read_size))
++	if (offset > buffer->data_size || read_size < sizeof(*hdr))
+ 		return 0;
++	if (u) {
++		if (copy_from_user(object, u + offset, read_size))
++			return 0;
++	} else {
++		if (binder_alloc_copy_from_buffer(&proc->alloc, object, buffer,
++						  offset, read_size))
++			return 0;
++	}
+ 
+ 	/* Ok, now see if we read a complete object. */
+ 	hdr = &object->hdr;
+@@ -2102,7 +2114,7 @@ static struct binder_buffer_object *binder_validate_ptr(
+ 					  b, buffer_offset,
+ 					  sizeof(object_offset)))
+ 		return NULL;
+-	object_size = binder_get_object(proc, b, object_offset, object);
++	object_size = binder_get_object(proc, NULL, b, object_offset, object);
+ 	if (!object_size || object->hdr.type != BINDER_TYPE_PTR)
+ 		return NULL;
+ 	if (object_offsetp)
+@@ -2167,7 +2179,8 @@ static bool binder_validate_fixup(struct binder_proc *proc,
+ 		unsigned long buffer_offset;
+ 		struct binder_object last_object;
+ 		struct binder_buffer_object *last_bbo;
+-		size_t object_size = binder_get_object(proc, b, last_obj_offset,
++		size_t object_size = binder_get_object(proc, NULL, b,
++						       last_obj_offset,
+ 						       &last_object);
+ 		if (object_size != sizeof(*last_bbo))
+ 			return false;
+@@ -2282,7 +2295,7 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 		if (!binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
+ 						   buffer, buffer_offset,
+ 						   sizeof(object_offset)))
+-			object_size = binder_get_object(proc, buffer,
++			object_size = binder_get_object(proc, NULL, buffer,
+ 							object_offset, &object);
+ 		if (object_size == 0) {
+ 			pr_err("transaction release %d bad object at offset %lld, size %zd\n",
+@@ -2620,16 +2633,266 @@ err_fd_not_accepted:
+ 	return ret;
+ }
+ 
+-static int binder_translate_fd_array(struct binder_fd_array_object *fda,
++/**
++ * struct binder_ptr_fixup - data to be fixed-up in target buffer
++ * @offset	offset in target buffer to fixup
++ * @skip_size	bytes to skip in copy (fixup will be written later)
++ * @fixup_data	data to write at fixup offset
++ * @node	list node
++ *
++ * This is used for the pointer fixup list (pf) which is created and consumed
++ * during binder_transaction() and is only accessed locally. No
++ * locking is necessary.
++ *
++ * The list is ordered by @offset.
++ */
++struct binder_ptr_fixup {
++	binder_size_t offset;
++	size_t skip_size;
++	binder_uintptr_t fixup_data;
++	struct list_head node;
++};
++
++/**
++ * struct binder_sg_copy - scatter-gather data to be copied
++ * @offset		offset in target buffer
++ * @sender_uaddr	user address in source buffer
++ * @length		bytes to copy
++ * @node		list node
++ *
++ * This is used for the sg copy list (sgc) which is created and consumed
++ * during binder_transaction() and is only accessed locally. No
++ * locking is necessary.
++ *
++ * The list is ordered by @offset.
++ */
++struct binder_sg_copy {
++	binder_size_t offset;
++	const void __user *sender_uaddr;
++	size_t length;
++	struct list_head node;
++};
++
++/**
++ * binder_do_deferred_txn_copies() - copy and fixup scatter-gather data
++ * @alloc:	binder_alloc associated with @buffer
++ * @buffer:	binder buffer in target process
++ * @sgc_head:	list_head of scatter-gather copy list
++ * @pf_head:	list_head of pointer fixup list
++ *
++ * Processes all elements of @sgc_head, applying fixups from @pf_head
++ * and copying the scatter-gather data from the source process' user
++ * buffer to the target's buffer. It is expected that the list creation
++ * and processing all occurs during binder_transaction() so these lists
++ * are only accessed in local context.
++ *
++ * Return: 0=success, else -errno
++ */
++static int binder_do_deferred_txn_copies(struct binder_alloc *alloc,
++					 struct binder_buffer *buffer,
++					 struct list_head *sgc_head,
++					 struct list_head *pf_head)
++{
++	int ret = 0;
++	struct binder_sg_copy *sgc, *tmpsgc;
++	struct binder_ptr_fixup *tmppf;
++	struct binder_ptr_fixup *pf =
++		list_first_entry_or_null(pf_head, struct binder_ptr_fixup,
++					 node);
++
++	list_for_each_entry_safe(sgc, tmpsgc, sgc_head, node) {
++		size_t bytes_copied = 0;
++
++		while (bytes_copied < sgc->length) {
++			size_t copy_size;
++			size_t bytes_left = sgc->length - bytes_copied;
++			size_t offset = sgc->offset + bytes_copied;
++
++			/*
++			 * We copy up to the fixup (pointed to by pf)
++			 */
++			copy_size = pf ? min(bytes_left, (size_t)pf->offset - offset)
++				       : bytes_left;
++			if (!ret && copy_size)
++				ret = binder_alloc_copy_user_to_buffer(
++						alloc, buffer,
++						offset,
++						sgc->sender_uaddr + bytes_copied,
++						copy_size);
++			bytes_copied += copy_size;
++			if (copy_size != bytes_left) {
++				BUG_ON(!pf);
++				/* we stopped at a fixup offset */
++				if (pf->skip_size) {
++					/*
++					 * we are just skipping. This is for
++					 * BINDER_TYPE_FDA where the translated
++					 * fds will be fixed up when we get
++					 * to target context.
++					 */
++					bytes_copied += pf->skip_size;
++				} else {
++					/* apply the fixup indicated by pf */
++					if (!ret)
++						ret = binder_alloc_copy_to_buffer(
++							alloc, buffer,
++							pf->offset,
++							&pf->fixup_data,
++							sizeof(pf->fixup_data));
++					bytes_copied += sizeof(pf->fixup_data);
++				}
++				list_del(&pf->node);
++				kfree(pf);
++				pf = list_first_entry_or_null(pf_head,
++						struct binder_ptr_fixup, node);
++			}
++		}
++		list_del(&sgc->node);
++		kfree(sgc);
++	}
++	list_for_each_entry_safe(pf, tmppf, pf_head, node) {
++		BUG_ON(pf->skip_size == 0);
++		list_del(&pf->node);
++		kfree(pf);
++	}
++	BUG_ON(!list_empty(sgc_head));
++
++	return ret > 0 ? -EINVAL : ret;
++}
++
++/**
++ * binder_cleanup_deferred_txn_lists() - free specified lists
++ * @sgc_head:	list_head of scatter-gather copy list
++ * @pf_head:	list_head of pointer fixup list
++ *
++ * Called to clean up @sgc_head and @pf_head if there is an
++ * error.
++ */
++static void binder_cleanup_deferred_txn_lists(struct list_head *sgc_head,
++					      struct list_head *pf_head)
++{
++	struct binder_sg_copy *sgc, *tmpsgc;
++	struct binder_ptr_fixup *pf, *tmppf;
++
++	list_for_each_entry_safe(sgc, tmpsgc, sgc_head, node) {
++		list_del(&sgc->node);
++		kfree(sgc);
++	}
++	list_for_each_entry_safe(pf, tmppf, pf_head, node) {
++		list_del(&pf->node);
++		kfree(pf);
++	}
++}
++
++/**
++ * binder_defer_copy() - queue a scatter-gather buffer for copy
++ * @sgc_head:		list_head of scatter-gather copy list
++ * @offset:		binder buffer offset in target process
++ * @sender_uaddr:	user address in source process
++ * @length:		bytes to copy
++ *
++ * Specify a scatter-gather block to be copied. The actual copy must
++ * be deferred until all the needed fixups are identified and queued.
++ * Then the copy and fixups are done together so un-translated values
++ * from the source are never visible in the target buffer.
++ *
++ * We are guaranteed that repeated calls to this function will have
++ * monotonically increasing @offset values so the list will naturally
++ * be ordered.
++ *
++ * Return: 0=success, else -errno
++ */
++static int binder_defer_copy(struct list_head *sgc_head, binder_size_t offset,
++			     const void __user *sender_uaddr, size_t length)
++{
++	struct binder_sg_copy *bc = kzalloc(sizeof(*bc), GFP_KERNEL);
++
++	if (!bc)
++		return -ENOMEM;
++
++	bc->offset = offset;
++	bc->sender_uaddr = sender_uaddr;
++	bc->length = length;
++	INIT_LIST_HEAD(&bc->node);
++
++	/*
++	 * We are guaranteed that the deferred copies are in-order
++	 * so just add to the tail.
++	 */
++	list_add_tail(&bc->node, sgc_head);
++
++	return 0;
++}
++
++/**
++ * binder_add_fixup() - queue a fixup to be applied to sg copy
++ * @pf_head:	list_head of binder ptr fixup list
++ * @offset:	binder buffer offset in target process
++ * @fixup:	bytes to be copied for fixup
++ * @skip_size:	bytes to skip when copying (fixup will be applied later)
++ *
++ * Add the specified fixup to a list ordered by @offset. When copying
++ * the scatter-gather buffers, the fixup will be copied instead of
++ * data from the source buffer. For BINDER_TYPE_FDA fixups, the fixup
++ * will be applied later (in target process context), so we just skip
++ * the bytes specified by @skip_size. If @skip_size is 0, we copy the
++ * value in @fixup.
++ *
++ * This function is called *mostly* in @offset order, but there are
++ * exceptions. Since out-of-order inserts are relatively uncommon,
++ * we insert the new element by searching backward from the tail of
++ * the list.
++ *
++ * Return: 0=success, else -errno
++ */
++static int binder_add_fixup(struct list_head *pf_head, binder_size_t offset,
++			    binder_uintptr_t fixup, size_t skip_size)
++{
++	struct binder_ptr_fixup *pf = kzalloc(sizeof(*pf), GFP_KERNEL);
++	struct binder_ptr_fixup *tmppf;
++
++	if (!pf)
++		return -ENOMEM;
++
++	pf->offset = offset;
++	pf->fixup_data = fixup;
++	pf->skip_size = skip_size;
++	INIT_LIST_HEAD(&pf->node);
++
++	/* Fixups are *mostly* added in-order, but there are some
++	 * exceptions. Look backwards through list for insertion point.
++	 */
++	list_for_each_entry_reverse(tmppf, pf_head, node) {
++		if (tmppf->offset < pf->offset) {
++			list_add(&pf->node, &tmppf->node);
++			return 0;
++		}
++	}
++	/*
++	 * if we get here, then the new offset is the lowest so
++	 * insert at the head
++	 */
++	list_add(&pf->node, pf_head);
++	return 0;
++}
++
++static int binder_translate_fd_array(struct list_head *pf_head,
++				     struct binder_fd_array_object *fda,
++				     const void __user *sender_ubuffer,
+ 				     struct binder_buffer_object *parent,
++				     struct binder_buffer_object *sender_uparent,
+ 				     struct binder_transaction *t,
+ 				     struct binder_thread *thread,
+ 				     struct binder_transaction *in_reply_to)
+ {
+ 	binder_size_t fdi, fd_buf_size;
+ 	binder_size_t fda_offset;
++	const void __user *sender_ufda_base;
+ 	struct binder_proc *proc = thread->proc;
+-	struct binder_proc *target_proc = t->to_proc;
++	int ret;
++
++	if (fda->num_fds == 0)
++		return 0;
+ 
+ 	fd_buf_size = sizeof(u32) * fda->num_fds;
+ 	if (fda->num_fds >= SIZE_MAX / sizeof(u32)) {
+@@ -2653,19 +2916,25 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
+ 	 */
+ 	fda_offset = (parent->buffer - (uintptr_t)t->buffer->user_data) +
+ 		fda->parent_offset;
+-	if (!IS_ALIGNED((unsigned long)fda_offset, sizeof(u32))) {
++	sender_ufda_base = (void __user *)(uintptr_t)sender_uparent->buffer +
++				fda->parent_offset;
++
++	if (!IS_ALIGNED((unsigned long)fda_offset, sizeof(u32)) ||
++	    !IS_ALIGNED((unsigned long)sender_ufda_base, sizeof(u32))) {
+ 		binder_user_error("%d:%d parent offset not aligned correctly.\n",
+ 				  proc->pid, thread->pid);
+ 		return -EINVAL;
+ 	}
++	ret = binder_add_fixup(pf_head, fda_offset, 0, fda->num_fds * sizeof(u32));
++	if (ret)
++		return ret;
++
+ 	for (fdi = 0; fdi < fda->num_fds; fdi++) {
+ 		u32 fd;
+-		int ret;
+ 		binder_size_t offset = fda_offset + fdi * sizeof(fd);
++		binder_size_t sender_uoffset = fdi * sizeof(fd);
+ 
+-		ret = binder_alloc_copy_from_buffer(&target_proc->alloc,
+-						    &fd, t->buffer,
+-						    offset, sizeof(fd));
++		ret = copy_from_user(&fd, sender_ufda_base + sender_uoffset, sizeof(fd));
+ 		if (!ret)
+ 			ret = binder_translate_fd(fd, offset, t, thread,
+ 						  in_reply_to);
+@@ -2675,7 +2944,8 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
+ 	return 0;
+ }
+ 
+-static int binder_fixup_parent(struct binder_transaction *t,
++static int binder_fixup_parent(struct list_head *pf_head,
++			       struct binder_transaction *t,
+ 			       struct binder_thread *thread,
+ 			       struct binder_buffer_object *bp,
+ 			       binder_size_t off_start_offset,
+@@ -2721,14 +2991,7 @@ static int binder_fixup_parent(struct binder_transaction *t,
+ 	}
+ 	buffer_offset = bp->parent_offset +
+ 			(uintptr_t)parent->buffer - (uintptr_t)b->user_data;
+-	if (binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset,
+-					&bp->buffer, sizeof(bp->buffer))) {
+-		binder_user_error("%d:%d got transaction with invalid parent offset\n",
+-				  proc->pid, thread->pid);
+-		return -EINVAL;
+-	}
+-
+-	return 0;
++	return binder_add_fixup(pf_head, buffer_offset, bp->buffer, 0);
+ }
+ 
+ /**
+@@ -2848,6 +3111,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 	binder_size_t off_start_offset, off_end_offset;
+ 	binder_size_t off_min;
+ 	binder_size_t sg_buf_offset, sg_buf_end_offset;
++	binder_size_t user_offset = 0;
+ 	struct binder_proc *target_proc = NULL;
+ 	struct binder_thread *target_thread = NULL;
+ 	struct binder_node *target_node = NULL;
+@@ -2862,6 +3126,12 @@ static void binder_transaction(struct binder_proc *proc,
+ 	int t_debug_id = atomic_inc_return(&binder_last_id);
+ 	char *secctx = NULL;
+ 	u32 secctx_sz = 0;
++	struct list_head sgc_head;
++	struct list_head pf_head;
++	const void __user *user_buffer = (const void __user *)
++				(uintptr_t)tr->data.ptr.buffer;
++	INIT_LIST_HEAD(&sgc_head);
++	INIT_LIST_HEAD(&pf_head);
+ 
+ 	e = binder_transaction_log_add(&binder_transaction_log);
+ 	e->debug_id = t_debug_id;
+@@ -3173,19 +3443,6 @@ static void binder_transaction(struct binder_proc *proc,
+ 	t->buffer->clear_on_free = !!(t->flags & TF_CLEAR_BUF);
+ 	trace_binder_transaction_alloc_buf(t->buffer);
+ 
+-	if (binder_alloc_copy_user_to_buffer(
+-				&target_proc->alloc,
+-				t->buffer, 0,
+-				(const void __user *)
+-					(uintptr_t)tr->data.ptr.buffer,
+-				tr->data_size)) {
+-		binder_user_error("%d:%d got transaction with invalid data ptr\n",
+-				proc->pid, thread->pid);
+-		return_error = BR_FAILED_REPLY;
+-		return_error_param = -EFAULT;
+-		return_error_line = __LINE__;
+-		goto err_copy_data_failed;
+-	}
+ 	if (binder_alloc_copy_user_to_buffer(
+ 				&target_proc->alloc,
+ 				t->buffer,
+@@ -3230,6 +3487,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 		size_t object_size;
+ 		struct binder_object object;
+ 		binder_size_t object_offset;
++		binder_size_t copy_size;
+ 
+ 		if (binder_alloc_copy_from_buffer(&target_proc->alloc,
+ 						  &object_offset,
+@@ -3241,8 +3499,27 @@ static void binder_transaction(struct binder_proc *proc,
+ 			return_error_line = __LINE__;
+ 			goto err_bad_offset;
+ 		}
+-		object_size = binder_get_object(target_proc, t->buffer,
+-						object_offset, &object);
++
++		/*
++		 * Copy the source user buffer up to the next object
++		 * that will be processed.
++		 */
++		copy_size = object_offset - user_offset;
++		if (copy_size && (user_offset > object_offset ||
++				binder_alloc_copy_user_to_buffer(
++					&target_proc->alloc,
++					t->buffer, user_offset,
++					user_buffer + user_offset,
++					copy_size))) {
++			binder_user_error("%d:%d got transaction with invalid data ptr\n",
++					proc->pid, thread->pid);
++			return_error = BR_FAILED_REPLY;
++			return_error_param = -EFAULT;
++			return_error_line = __LINE__;
++			goto err_copy_data_failed;
++		}
++		object_size = binder_get_object(target_proc, user_buffer,
++				t->buffer, object_offset, &object);
+ 		if (object_size == 0 || object_offset < off_min) {
+ 			binder_user_error("%d:%d got transaction with invalid offset (%lld, min %lld max %lld) or object.\n",
+ 					  proc->pid, thread->pid,
+@@ -3254,6 +3531,11 @@ static void binder_transaction(struct binder_proc *proc,
+ 			return_error_line = __LINE__;
+ 			goto err_bad_offset;
+ 		}
++		/*
++		 * Set offset to the next buffer fragment to be
++		 * copied
++		 */
++		user_offset = object_offset + object_size;
+ 
+ 		hdr = &object.hdr;
+ 		off_min = object_offset + object_size;
+@@ -3316,6 +3598,8 @@ static void binder_transaction(struct binder_proc *proc,
+ 		case BINDER_TYPE_FDA: {
+ 			struct binder_object ptr_object;
+ 			binder_size_t parent_offset;
++			struct binder_object user_object;
++			size_t user_parent_size;
+ 			struct binder_fd_array_object *fda =
+ 				to_binder_fd_array_object(hdr);
+ 			size_t num_valid = (buffer_offset - off_start_offset) /
+@@ -3347,11 +3631,35 @@ static void binder_transaction(struct binder_proc *proc,
+ 				return_error_line = __LINE__;
+ 				goto err_bad_parent;
+ 			}
+-			ret = binder_translate_fd_array(fda, parent, t, thread,
+-							in_reply_to);
+-			if (ret < 0) {
++			/*
++			 * We need to read the user version of the parent
++			 * object to get the original user offset
++			 */
++			user_parent_size =
++				binder_get_object(proc, user_buffer, t->buffer,
++						  parent_offset, &user_object);
++			if (user_parent_size != sizeof(user_object.bbo)) {
++				binder_user_error("%d:%d invalid ptr object size: %zd vs %zd\n",
++						  proc->pid, thread->pid,
++						  user_parent_size,
++						  sizeof(user_object.bbo));
++				return_error = BR_FAILED_REPLY;
++				return_error_param = -EINVAL;
++				return_error_line = __LINE__;
++				goto err_bad_parent;
++			}
++			ret = binder_translate_fd_array(&pf_head, fda,
++							user_buffer, parent,
++							&user_object.bbo, t,
++							thread, in_reply_to);
++			if (!ret)
++				ret = binder_alloc_copy_to_buffer(&target_proc->alloc,
++								  t->buffer,
++								  object_offset,
++								  fda, sizeof(*fda));
++			if (ret) {
+ 				return_error = BR_FAILED_REPLY;
+-				return_error_param = ret;
++				return_error_param = ret > 0 ? -EINVAL : ret;
+ 				return_error_line = __LINE__;
+ 				goto err_translate_failed;
+ 			}
+@@ -3373,19 +3681,14 @@ static void binder_transaction(struct binder_proc *proc,
+ 				return_error_line = __LINE__;
+ 				goto err_bad_offset;
+ 			}
+-			if (binder_alloc_copy_user_to_buffer(
+-						&target_proc->alloc,
+-						t->buffer,
+-						sg_buf_offset,
+-						(const void __user *)
+-							(uintptr_t)bp->buffer,
+-						bp->length)) {
+-				binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
+-						  proc->pid, thread->pid);
+-				return_error_param = -EFAULT;
++			ret = binder_defer_copy(&sgc_head, sg_buf_offset,
++				(const void __user *)(uintptr_t)bp->buffer,
++				bp->length);
++			if (ret) {
+ 				return_error = BR_FAILED_REPLY;
++				return_error_param = ret;
+ 				return_error_line = __LINE__;
+-				goto err_copy_data_failed;
++				goto err_translate_failed;
+ 			}
+ 			/* Fixup buffer pointer to target proc address space */
+ 			bp->buffer = (uintptr_t)
+@@ -3394,7 +3697,8 @@ static void binder_transaction(struct binder_proc *proc,
+ 
+ 			num_valid = (buffer_offset - off_start_offset) /
+ 					sizeof(binder_size_t);
+-			ret = binder_fixup_parent(t, thread, bp,
++			ret = binder_fixup_parent(&pf_head, t,
++						  thread, bp,
+ 						  off_start_offset,
+ 						  num_valid,
+ 						  last_fixup_obj_off,
+@@ -3421,6 +3725,30 @@ static void binder_transaction(struct binder_proc *proc,
+ 			goto err_bad_object_type;
+ 		}
+ 	}
++	/* Done processing objects, copy the rest of the buffer */
++	if (binder_alloc_copy_user_to_buffer(
++				&target_proc->alloc,
++				t->buffer, user_offset,
++				user_buffer + user_offset,
++				tr->data_size - user_offset)) {
++		binder_user_error("%d:%d got transaction with invalid data ptr\n",
++				proc->pid, thread->pid);
++		return_error = BR_FAILED_REPLY;
++		return_error_param = -EFAULT;
++		return_error_line = __LINE__;
++		goto err_copy_data_failed;
++	}
++
++	ret = binder_do_deferred_txn_copies(&target_proc->alloc, t->buffer,
++					    &sgc_head, &pf_head);
++	if (ret) {
++		binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
++				  proc->pid, thread->pid);
++		return_error = BR_FAILED_REPLY;
++		return_error_param = ret;
++		return_error_line = __LINE__;
++		goto err_copy_data_failed;
++	}
+ 	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
+ 	t->work.type = BINDER_WORK_TRANSACTION;
+ 
+@@ -3487,6 +3815,7 @@ err_bad_object_type:
+ err_bad_offset:
+ err_bad_parent:
+ err_copy_data_failed:
++	binder_cleanup_deferred_txn_lists(&sgc_head, &pf_head);
+ 	binder_free_txn_fixups(t);
+ 	trace_binder_transaction_failed_buffer_release(t->buffer);
+ 	binder_transaction_buffer_release(target_proc, NULL, t->buffer,
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 459ece666c623..f1755efd30a25 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -4032,44 +4032,51 @@ void ata_scsi_dump_cdb(struct ata_port *ap, struct scsi_cmnd *cmd)
+ 
+ int __ata_scsi_queuecmd(struct scsi_cmnd *scmd, struct ata_device *dev)
+ {
++	struct ata_port *ap = dev->link->ap;
+ 	u8 scsi_op = scmd->cmnd[0];
+ 	ata_xlat_func_t xlat_func;
+-	int rc = 0;
++
++	/*
++	 * scsi_queue_rq() will defer commands if scsi_host_in_recovery().
++	 * However, this check is done without holding the ap->lock (a libata
++	 * specific lock), so we can have received an error irq since then,
++	 * therefore we must check if EH is pending, while holding ap->lock.
++	 */
++	if (ap->pflags & (ATA_PFLAG_EH_PENDING | ATA_PFLAG_EH_IN_PROGRESS))
++		return SCSI_MLQUEUE_DEVICE_BUSY;
++
++	if (unlikely(!scmd->cmd_len))
++		goto bad_cdb_len;
+ 
+ 	if (dev->class == ATA_DEV_ATA || dev->class == ATA_DEV_ZAC) {
+-		if (unlikely(!scmd->cmd_len || scmd->cmd_len > dev->cdb_len))
++		if (unlikely(scmd->cmd_len > dev->cdb_len))
+ 			goto bad_cdb_len;
+ 
+ 		xlat_func = ata_get_xlat_func(dev, scsi_op);
+-	} else {
+-		if (unlikely(!scmd->cmd_len))
+-			goto bad_cdb_len;
++	} else if (likely((scsi_op != ATA_16) || !atapi_passthru16)) {
++		/* relay SCSI command to ATAPI device */
++		int len = COMMAND_SIZE(scsi_op);
+ 
+-		xlat_func = NULL;
+-		if (likely((scsi_op != ATA_16) || !atapi_passthru16)) {
+-			/* relay SCSI command to ATAPI device */
+-			int len = COMMAND_SIZE(scsi_op);
+-			if (unlikely(len > scmd->cmd_len ||
+-				     len > dev->cdb_len ||
+-				     scmd->cmd_len > ATAPI_CDB_LEN))
+-				goto bad_cdb_len;
++		if (unlikely(len > scmd->cmd_len ||
++			     len > dev->cdb_len ||
++			     scmd->cmd_len > ATAPI_CDB_LEN))
++			goto bad_cdb_len;
+ 
+-			xlat_func = atapi_xlat;
+-		} else {
+-			/* ATA_16 passthru, treat as an ATA command */
+-			if (unlikely(scmd->cmd_len > 16))
+-				goto bad_cdb_len;
++		xlat_func = atapi_xlat;
++	} else {
++		/* ATA_16 passthru, treat as an ATA command */
++		if (unlikely(scmd->cmd_len > 16))
++			goto bad_cdb_len;
+ 
+-			xlat_func = ata_get_xlat_func(dev, scsi_op);
+-		}
++		xlat_func = ata_get_xlat_func(dev, scsi_op);
+ 	}
+ 
+ 	if (xlat_func)
+-		rc = ata_scsi_translate(dev, scmd, xlat_func);
+-	else
+-		ata_scsi_simulate(dev, scmd);
++		return ata_scsi_translate(dev, scmd, xlat_func);
+ 
+-	return rc;
++	ata_scsi_simulate(dev, scmd);
++
++	return 0;
+ 
+  bad_cdb_len:
+ 	DPRINTK("bad CDB len=%u, scsi_op=0x%02x, max=%u\n",
+diff --git a/drivers/bus/sunxi-rsb.c b/drivers/bus/sunxi-rsb.c
+index 9b1a5e62417cb..f8c29b888e6b4 100644
+--- a/drivers/bus/sunxi-rsb.c
++++ b/drivers/bus/sunxi-rsb.c
+@@ -268,6 +268,9 @@ EXPORT_SYMBOL_GPL(sunxi_rsb_driver_register);
+ /* common code that starts a transfer */
+ static int _sunxi_rsb_run_xfer(struct sunxi_rsb *rsb)
+ {
++	u32 int_mask, status;
++	bool timeout;
++
+ 	if (readl(rsb->regs + RSB_CTRL) & RSB_CTRL_START_TRANS) {
+ 		dev_dbg(rsb->dev, "RSB transfer still in progress\n");
+ 		return -EBUSY;
+@@ -275,13 +278,23 @@ static int _sunxi_rsb_run_xfer(struct sunxi_rsb *rsb)
+ 
+ 	reinit_completion(&rsb->complete);
+ 
+-	writel(RSB_INTS_LOAD_BSY | RSB_INTS_TRANS_ERR | RSB_INTS_TRANS_OVER,
+-	       rsb->regs + RSB_INTE);
++	int_mask = RSB_INTS_LOAD_BSY | RSB_INTS_TRANS_ERR | RSB_INTS_TRANS_OVER;
++	writel(int_mask, rsb->regs + RSB_INTE);
+ 	writel(RSB_CTRL_START_TRANS | RSB_CTRL_GLOBAL_INT_ENB,
+ 	       rsb->regs + RSB_CTRL);
+ 
+-	if (!wait_for_completion_io_timeout(&rsb->complete,
+-					    msecs_to_jiffies(100))) {
++	if (irqs_disabled()) {
++		timeout = readl_poll_timeout_atomic(rsb->regs + RSB_INTS,
++						    status, (status & int_mask),
++						    10, 100000);
++		writel(status, rsb->regs + RSB_INTS);
++	} else {
++		timeout = !wait_for_completion_io_timeout(&rsb->complete,
++							  msecs_to_jiffies(100));
++		status = rsb->status;
++	}
++
++	if (timeout) {
+ 		dev_dbg(rsb->dev, "RSB timeout\n");
+ 
+ 		/* abort the transfer */
+@@ -293,18 +306,18 @@ static int _sunxi_rsb_run_xfer(struct sunxi_rsb *rsb)
+ 		return -ETIMEDOUT;
+ 	}
+ 
+-	if (rsb->status & RSB_INTS_LOAD_BSY) {
++	if (status & RSB_INTS_LOAD_BSY) {
+ 		dev_dbg(rsb->dev, "RSB busy\n");
+ 		return -EBUSY;
+ 	}
+ 
+-	if (rsb->status & RSB_INTS_TRANS_ERR) {
+-		if (rsb->status & RSB_INTS_TRANS_ERR_ACK) {
++	if (status & RSB_INTS_TRANS_ERR) {
++		if (status & RSB_INTS_TRANS_ERR_ACK) {
+ 			dev_dbg(rsb->dev, "RSB slave nack\n");
+ 			return -EINVAL;
+ 		}
+ 
+-		if (rsb->status & RSB_INTS_TRANS_ERR_DATA) {
++		if (status & RSB_INTS_TRANS_ERR_DATA) {
+ 			dev_dbg(rsb->dev, "RSB transfer data error\n");
+ 			return -EIO;
+ 		}
+diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
+index 798f86fcd50fa..dcbb023acc455 100644
+--- a/drivers/dma-buf/dma-heap.c
++++ b/drivers/dma-buf/dma-heap.c
+@@ -209,18 +209,6 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+-	/* check the name is unique */
+-	mutex_lock(&heap_list_lock);
+-	list_for_each_entry(h, &heap_list, list) {
+-		if (!strcmp(h->name, exp_info->name)) {
+-			mutex_unlock(&heap_list_lock);
+-			pr_err("dma_heap: Already registered heap named %s\n",
+-			       exp_info->name);
+-			return ERR_PTR(-EINVAL);
+-		}
+-	}
+-	mutex_unlock(&heap_list_lock);
+-
+ 	heap = kzalloc(sizeof(*heap), GFP_KERNEL);
+ 	if (!heap)
+ 		return ERR_PTR(-ENOMEM);
+@@ -259,13 +247,27 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
+ 		err_ret = ERR_CAST(dev_ret);
+ 		goto err2;
+ 	}
+-	/* Add heap to the list */
++
+ 	mutex_lock(&heap_list_lock);
++	/* check the name is unique */
++	list_for_each_entry(h, &heap_list, list) {
++		if (!strcmp(h->name, exp_info->name)) {
++			mutex_unlock(&heap_list_lock);
++			pr_err("dma_heap: Already registered heap named %s\n",
++			       exp_info->name);
++			err_ret = ERR_PTR(-EINVAL);
++			goto err3;
++		}
++	}
++
++	/* Add heap to the list */
+ 	list_add(&heap->list, &heap_list);
+ 	mutex_unlock(&heap_list_lock);
+ 
+ 	return heap;
+ 
++err3:
++	device_destroy(dma_heap_class, heap->heap_devt);
+ err2:
+ 	cdev_del(&heap->heap_cdev);
+ err1:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+index e8c76bd8c501f..6aa9fd9cb83b4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+@@ -341,11 +341,9 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void *data,
+ 	if (r)
+ 		goto release_object;
+ 
+-	if (args->flags & AMDGPU_GEM_USERPTR_REGISTER) {
+-		r = amdgpu_mn_register(bo, args->addr);
+-		if (r)
+-			goto release_object;
+-	}
++	r = amdgpu_mn_register(bo, args->addr);
++	if (r)
++		goto release_object;
+ 
+ 	if (args->flags & AMDGPU_GEM_USERPTR_VALIDATE) {
+ 		r = amdgpu_ttm_tt_get_user_pages(bo, bo->tbo.ttm->pages);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
+index 8f362e8c17870..be6d43c9979c8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
+@@ -361,7 +361,8 @@ static const struct dce_audio_registers audio_regs[] = {
+ 	audio_regs(2),
+ 	audio_regs(3),
+ 	audio_regs(4),
+-	audio_regs(5)
++	audio_regs(5),
++	audio_regs(6),
+ };
+ 
+ #define DCE120_AUD_COMMON_MASK_SH_LIST(mask_sh)\
+diff --git a/drivers/gpu/drm/drm_dp_dual_mode_helper.c b/drivers/gpu/drm/drm_dp_dual_mode_helper.c
+index 1c9ea9f7fdafe..f2ff0bfdf54d7 100644
+--- a/drivers/gpu/drm/drm_dp_dual_mode_helper.c
++++ b/drivers/gpu/drm/drm_dp_dual_mode_helper.c
+@@ -62,23 +62,45 @@
+ ssize_t drm_dp_dual_mode_read(struct i2c_adapter *adapter,
+ 			      u8 offset, void *buffer, size_t size)
+ {
++	u8 zero = 0;
++	char *tmpbuf = NULL;
++	/*
++	 * As sub-addressing is not supported by all adaptors,
++	 * always explicitly read from the start and discard
++	 * any bytes that come before the requested offset.
++	 * This way, no matter whether the adaptor supports it
++	 * or not, we'll end up reading the proper data.
++	 */
+ 	struct i2c_msg msgs[] = {
+ 		{
+ 			.addr = DP_DUAL_MODE_SLAVE_ADDRESS,
+ 			.flags = 0,
+ 			.len = 1,
+-			.buf = &offset,
++			.buf = &zero,
+ 		},
+ 		{
+ 			.addr = DP_DUAL_MODE_SLAVE_ADDRESS,
+ 			.flags = I2C_M_RD,
+-			.len = size,
++			.len = size + offset,
+ 			.buf = buffer,
+ 		},
+ 	};
+ 	int ret;
+ 
++	if (offset) {
++		tmpbuf = kmalloc(size + offset, GFP_KERNEL);
++		if (!tmpbuf)
++			return -ENOMEM;
++
++		msgs[1].buf = tmpbuf;
++	}
++
+ 	ret = i2c_transfer(adapter, msgs, ARRAY_SIZE(msgs));
++	if (tmpbuf)
++		memcpy(buffer, tmpbuf + offset, size);
++
++	kfree(tmpbuf);
++
+ 	if (ret < 0)
+ 		return ret;
+ 	if (ret != ARRAY_SIZE(msgs))
+@@ -205,18 +227,6 @@ enum drm_dp_dual_mode_type drm_dp_dual_mode_detect(struct i2c_adapter *adapter)
+ 	if (ret)
+ 		return DRM_DP_DUAL_MODE_UNKNOWN;
+ 
+-	/*
+-	 * Sigh. Some (maybe all?) type 1 adaptors are broken and ack
+-	 * the offset but ignore it, and instead they just always return
+-	 * data from the start of the HDMI ID buffer. So for a broken
+-	 * type 1 HDMI adaptor a single byte read will always give us
+-	 * 0x44, and for a type 1 DVI adaptor it should give 0x00
+-	 * (assuming it implements any registers). Fortunately neither
+-	 * of those values will match the type 2 signature of the
+-	 * DP_DUAL_MODE_ADAPTOR_ID register so we can proceed with
+-	 * the type 2 adaptor detection safely even in the presence
+-	 * of broken type 1 adaptors.
+-	 */
+ 	ret = drm_dp_dual_mode_read(adapter, DP_DUAL_MODE_ADAPTOR_ID,
+ 				    &adaptor_id, sizeof(adaptor_id));
+ 	DRM_DEBUG_KMS("DP dual mode adaptor ID: %02x (err %zd)\n",
+@@ -231,11 +241,10 @@ enum drm_dp_dual_mode_type drm_dp_dual_mode_detect(struct i2c_adapter *adapter)
+ 				return DRM_DP_DUAL_MODE_TYPE2_DVI;
+ 		}
+ 		/*
+-		 * If neither a proper type 1 ID nor a broken type 1 adaptor
+-		 * as described above, assume type 1, but let the user know
+-		 * that we may have misdetected the type.
++		 * If not a proper type 1 ID, still assume type 1, but let
++		 * the user know that we may have misdetected the type.
+ 		 */
+-		if (!is_type1_adaptor(adaptor_id) && adaptor_id != hdmi_id[0])
++		if (!is_type1_adaptor(adaptor_id))
+ 			DRM_ERROR("Unexpected DP dual mode adaptor ID %02x\n",
+ 				  adaptor_id);
+ 
+@@ -339,10 +348,8 @@ EXPORT_SYMBOL(drm_dp_dual_mode_get_tmds_output);
+  * @enable: enable (as opposed to disable) the TMDS output buffers
+  *
+  * Set the state of the TMDS output buffers in the adaptor. For
+- * type2 this is set via the DP_DUAL_MODE_TMDS_OEN register. As
+- * some type 1 adaptors have problems with registers (see comments
+- * in drm_dp_dual_mode_detect()) we avoid touching the register,
+- * making this function a no-op on type 1 adaptors.
++ * type2 this is set via the DP_DUAL_MODE_TMDS_OEN register.
++ * Type1 adaptors do not support any register writes.
+  *
+  * Returns:
+  * 0 on success, negative error code on failure
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 083273736c837..ca0fefeaab20b 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -128,6 +128,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "One S1003"),
+ 		},
+ 		.driver_data = (void *)&lcd800x1280_rightside_up,
++	}, {	/* Acer Switch V 10 (SW5-017) */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Acer"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SW5-017"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
+ 	}, {	/* Anbernic Win600 */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Anbernic"),
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
+index a33887f2464fa..5f86d9aacb8a3 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt.c
+@@ -745,6 +745,10 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
+ 		if (!i915_mmio_reg_offset(rb.reg))
+ 			continue;
+ 
++		if (INTEL_GEN(i915) == 12 && (engine->class == VIDEO_DECODE_CLASS ||
++		    engine->class == VIDEO_ENHANCEMENT_CLASS))
++			rb.bit = _MASKED_BIT_ENABLE(rb.bit);
++
+ 		intel_uncore_write_fw(uncore, rb.reg, rb.bit);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
+index 2c6ebc328b24f..318692ad9680f 100644
+--- a/drivers/gpu/drm/tegra/drm.c
++++ b/drivers/gpu/drm/tegra/drm.c
+@@ -1042,6 +1042,10 @@ static bool host1x_drm_wants_iommu(struct host1x_device *dev)
+ 	struct host1x *host1x = dev_get_drvdata(dev->dev.parent);
+ 	struct iommu_domain *domain;
+ 
++	/* Our IOMMU usage policy doesn't currently play well with GART */
++	if (of_machine_is_compatible("nvidia,tegra20"))
++		return false;
++
+ 	/*
+ 	 * If the Tegra DRM clients are backed by an IOMMU, push buffers are
+ 	 * likely to be allocated beyond the 32-bit boundary if sufficient
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index 8659558b518d6..9f674a8d5009d 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -198,6 +198,10 @@ static void host1x_setup_sid_table(struct host1x *host)
+ 
+ static bool host1x_wants_iommu(struct host1x *host1x)
+ {
++	/* Our IOMMU usage policy doesn't currently play well with GART */
++	if (of_machine_is_compatible("nvidia,tegra20"))
++		return false;
++
+ 	/*
+ 	 * If we support addressing a maximum of 32 bits of physical memory
+ 	 * and if the host1x firewall is enabled, there's no need to enable
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 10188b1a6a089..5b902adb0d1bf 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -501,13 +501,17 @@ static void vmbus_add_channel_work(struct work_struct *work)
+ 	 * Add the new device to the bus. This will kick off device-driver
+ 	 * binding which eventually invokes the device driver's AddDevice()
+ 	 * method.
++	 *
++	 * If vmbus_device_register() fails, the 'device_obj' is freed in
++	 * vmbus_device_release() as called by device_unregister() in the
++	 * error path of vmbus_device_register(). In the outside error
++	 * path, there's no need to free it.
+ 	 */
+ 	ret = vmbus_device_register(newchannel->device_obj);
+ 
+ 	if (ret != 0) {
+ 		pr_err("unable to add child device object (relid %d)\n",
+ 			newchannel->offermsg.child_relid);
+-		kfree(newchannel->device_obj);
+ 		goto err_deq_chan;
+ 	}
+ 
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 514279dac7cb5..e99400f3ae1d1 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -2020,6 +2020,7 @@ int vmbus_device_register(struct hv_device *child_device_obj)
+ 	ret = device_register(&child_device_obj->device);
+ 	if (ret) {
+ 		pr_err("Unable to register child device\n");
++		put_device(&child_device_obj->device);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/iio/industrialio-sw-trigger.c b/drivers/iio/industrialio-sw-trigger.c
+index 9ae793a70b8bf..a7714d32a6418 100644
+--- a/drivers/iio/industrialio-sw-trigger.c
++++ b/drivers/iio/industrialio-sw-trigger.c
+@@ -58,8 +58,12 @@ int iio_register_sw_trigger_type(struct iio_sw_trigger_type *t)
+ 
+ 	t->group = configfs_register_default_group(iio_triggers_group, t->name,
+ 						&iio_trigger_type_group_type);
+-	if (IS_ERR(t->group))
++	if (IS_ERR(t->group)) {
++		mutex_lock(&iio_trigger_types_lock);
++		list_del(&t->list);
++		mutex_unlock(&iio_trigger_types_lock);
+ 		ret = PTR_ERR(t->group);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/iio/light/apds9960.c b/drivers/iio/light/apds9960.c
+index 9afb3fcc74e62..4a7ccf268ebf4 100644
+--- a/drivers/iio/light/apds9960.c
++++ b/drivers/iio/light/apds9960.c
+@@ -53,9 +53,6 @@
+ #define APDS9960_REG_CONTROL_PGAIN_MASK_SHIFT	2
+ 
+ #define APDS9960_REG_CONFIG_2	0x90
+-#define APDS9960_REG_CONFIG_2_GGAIN_MASK	0x60
+-#define APDS9960_REG_CONFIG_2_GGAIN_MASK_SHIFT	5
+-
+ #define APDS9960_REG_ID		0x92
+ 
+ #define APDS9960_REG_STATUS	0x93
+@@ -76,6 +73,9 @@
+ #define APDS9960_REG_GCONF_1_GFIFO_THRES_MASK_SHIFT	6
+ 
+ #define APDS9960_REG_GCONF_2	0xa3
++#define APDS9960_REG_GCONF_2_GGAIN_MASK			0x60
++#define APDS9960_REG_GCONF_2_GGAIN_MASK_SHIFT		5
++
+ #define APDS9960_REG_GOFFSET_U	0xa4
+ #define APDS9960_REG_GOFFSET_D	0xa5
+ #define APDS9960_REG_GPULSE	0xa6
+@@ -395,9 +395,9 @@ static int apds9960_set_pxs_gain(struct apds9960_data *data, int val)
+ 			}
+ 
+ 			ret = regmap_update_bits(data->regmap,
+-				APDS9960_REG_CONFIG_2,
+-				APDS9960_REG_CONFIG_2_GGAIN_MASK,
+-				idx << APDS9960_REG_CONFIG_2_GGAIN_MASK_SHIFT);
++				APDS9960_REG_GCONF_2,
++				APDS9960_REG_GCONF_2_GGAIN_MASK,
++				idx << APDS9960_REG_GCONF_2_GGAIN_MASK_SHIFT);
+ 			if (!ret)
+ 				data->pxs_gain = idx;
+ 			mutex_unlock(&data->lock);
+diff --git a/drivers/iio/pressure/ms5611.h b/drivers/iio/pressure/ms5611.h
+index bc06271fa38bc..5e2d2d4d87b56 100644
+--- a/drivers/iio/pressure/ms5611.h
++++ b/drivers/iio/pressure/ms5611.h
+@@ -25,13 +25,6 @@ enum {
+ 	MS5607,
+ };
+ 
+-struct ms5611_chip_info {
+-	u16 prom[MS5611_PROM_WORDS_NB];
+-
+-	int (*temp_and_pressure_compensate)(struct ms5611_chip_info *chip_info,
+-					    s32 *temp, s32 *pressure);
+-};
+-
+ /*
+  * OverSampling Rate descriptor.
+  * Warning: cmd MUST be kept aligned on a word boundary (see
+@@ -50,12 +43,15 @@ struct ms5611_state {
+ 	const struct ms5611_osr *pressure_osr;
+ 	const struct ms5611_osr *temp_osr;
+ 
+-	int (*reset)(struct device *dev);
+-	int (*read_prom_word)(struct device *dev, int index, u16 *word);
+-	int (*read_adc_temp_and_pressure)(struct device *dev,
++	u16 prom[MS5611_PROM_WORDS_NB];
++
++	int (*reset)(struct ms5611_state *st);
++	int (*read_prom_word)(struct ms5611_state *st, int index, u16 *word);
++	int (*read_adc_temp_and_pressure)(struct ms5611_state *st,
+ 					  s32 *temp, s32 *pressure);
+ 
+-	struct ms5611_chip_info *chip_info;
++	int (*compensate_temp_and_pressure)(struct ms5611_state *st, s32 *temp,
++					  s32 *pressure);
+ 	struct regulator *vdd;
+ };
+ 
+diff --git a/drivers/iio/pressure/ms5611_core.c b/drivers/iio/pressure/ms5611_core.c
+index 214b0d25f5980..874a73b3ea9d6 100644
+--- a/drivers/iio/pressure/ms5611_core.c
++++ b/drivers/iio/pressure/ms5611_core.c
+@@ -85,8 +85,7 @@ static int ms5611_read_prom(struct iio_dev *indio_dev)
+ 	struct ms5611_state *st = iio_priv(indio_dev);
+ 
+ 	for (i = 0; i < MS5611_PROM_WORDS_NB; i++) {
+-		ret = st->read_prom_word(&indio_dev->dev,
+-					 i, &st->chip_info->prom[i]);
++		ret = st->read_prom_word(st, i, &st->prom[i]);
+ 		if (ret < 0) {
+ 			dev_err(&indio_dev->dev,
+ 				"failed to read prom at %d\n", i);
+@@ -94,7 +93,7 @@ static int ms5611_read_prom(struct iio_dev *indio_dev)
+ 		}
+ 	}
+ 
+-	if (!ms5611_prom_is_valid(st->chip_info->prom, MS5611_PROM_WORDS_NB)) {
++	if (!ms5611_prom_is_valid(st->prom, MS5611_PROM_WORDS_NB)) {
+ 		dev_err(&indio_dev->dev, "PROM integrity check failed\n");
+ 		return -ENODEV;
+ 	}
+@@ -108,28 +107,27 @@ static int ms5611_read_temp_and_pressure(struct iio_dev *indio_dev,
+ 	int ret;
+ 	struct ms5611_state *st = iio_priv(indio_dev);
+ 
+-	ret = st->read_adc_temp_and_pressure(&indio_dev->dev, temp, pressure);
++	ret = st->read_adc_temp_and_pressure(st, temp, pressure);
+ 	if (ret < 0) {
+ 		dev_err(&indio_dev->dev,
+ 			"failed to read temperature and pressure\n");
+ 		return ret;
+ 	}
+ 
+-	return st->chip_info->temp_and_pressure_compensate(st->chip_info,
+-							   temp, pressure);
++	return st->compensate_temp_and_pressure(st, temp, pressure);
+ }
+ 
+-static int ms5611_temp_and_pressure_compensate(struct ms5611_chip_info *chip_info,
++static int ms5611_temp_and_pressure_compensate(struct ms5611_state *st,
+ 					       s32 *temp, s32 *pressure)
+ {
+ 	s32 t = *temp, p = *pressure;
+ 	s64 off, sens, dt;
+ 
+-	dt = t - (chip_info->prom[5] << 8);
+-	off = ((s64)chip_info->prom[2] << 16) + ((chip_info->prom[4] * dt) >> 7);
+-	sens = ((s64)chip_info->prom[1] << 15) + ((chip_info->prom[3] * dt) >> 8);
++	dt = t - (st->prom[5] << 8);
++	off = ((s64)st->prom[2] << 16) + ((st->prom[4] * dt) >> 7);
++	sens = ((s64)st->prom[1] << 15) + ((st->prom[3] * dt) >> 8);
+ 
+-	t = 2000 + ((chip_info->prom[6] * dt) >> 23);
++	t = 2000 + ((st->prom[6] * dt) >> 23);
+ 	if (t < 2000) {
+ 		s64 off2, sens2, t2;
+ 
+@@ -155,17 +153,17 @@ static int ms5611_temp_and_pressure_compensate(struct ms5611_chip_info *chip_inf
+ 	return 0;
+ }
+ 
+-static int ms5607_temp_and_pressure_compensate(struct ms5611_chip_info *chip_info,
++static int ms5607_temp_and_pressure_compensate(struct ms5611_state *st,
+ 					       s32 *temp, s32 *pressure)
+ {
+ 	s32 t = *temp, p = *pressure;
+ 	s64 off, sens, dt;
+ 
+-	dt = t - (chip_info->prom[5] << 8);
+-	off = ((s64)chip_info->prom[2] << 17) + ((chip_info->prom[4] * dt) >> 6);
+-	sens = ((s64)chip_info->prom[1] << 16) + ((chip_info->prom[3] * dt) >> 7);
++	dt = t - (st->prom[5] << 8);
++	off = ((s64)st->prom[2] << 17) + ((st->prom[4] * dt) >> 6);
++	sens = ((s64)st->prom[1] << 16) + ((st->prom[3] * dt) >> 7);
+ 
+-	t = 2000 + ((chip_info->prom[6] * dt) >> 23);
++	t = 2000 + ((st->prom[6] * dt) >> 23);
+ 	if (t < 2000) {
+ 		s64 off2, sens2, t2, tmp;
+ 
+@@ -196,7 +194,7 @@ static int ms5611_reset(struct iio_dev *indio_dev)
+ 	int ret;
+ 	struct ms5611_state *st = iio_priv(indio_dev);
+ 
+-	ret = st->reset(&indio_dev->dev);
++	ret = st->reset(st);
+ 	if (ret < 0) {
+ 		dev_err(&indio_dev->dev, "failed to reset device\n");
+ 		return ret;
+@@ -343,15 +341,6 @@ static int ms5611_write_raw(struct iio_dev *indio_dev,
+ 
+ static const unsigned long ms5611_scan_masks[] = {0x3, 0};
+ 
+-static struct ms5611_chip_info chip_info_tbl[] = {
+-	[MS5611] = {
+-		.temp_and_pressure_compensate = ms5611_temp_and_pressure_compensate,
+-	},
+-	[MS5607] = {
+-		.temp_and_pressure_compensate = ms5607_temp_and_pressure_compensate,
+-	}
+-};
+-
+ static const struct iio_chan_spec ms5611_channels[] = {
+ 	{
+ 		.type = IIO_PRESSURE,
+@@ -434,7 +423,20 @@ int ms5611_probe(struct iio_dev *indio_dev, struct device *dev,
+ 	struct ms5611_state *st = iio_priv(indio_dev);
+ 
+ 	mutex_init(&st->lock);
+-	st->chip_info = &chip_info_tbl[type];
++
++	switch (type) {
++	case MS5611:
++		st->compensate_temp_and_pressure =
++			ms5611_temp_and_pressure_compensate;
++		break;
++	case MS5607:
++		st->compensate_temp_and_pressure =
++			ms5607_temp_and_pressure_compensate;
++		break;
++	default:
++		return -EINVAL;
++	}
++
+ 	st->temp_osr =
+ 		&ms5611_avail_temp_osr[ARRAY_SIZE(ms5611_avail_temp_osr) - 1];
+ 	st->pressure_osr =
+diff --git a/drivers/iio/pressure/ms5611_i2c.c b/drivers/iio/pressure/ms5611_i2c.c
+index 7c04f730430c7..cccc40f7df0b9 100644
+--- a/drivers/iio/pressure/ms5611_i2c.c
++++ b/drivers/iio/pressure/ms5611_i2c.c
+@@ -20,17 +20,15 @@
+ 
+ #include "ms5611.h"
+ 
+-static int ms5611_i2c_reset(struct device *dev)
++static int ms5611_i2c_reset(struct ms5611_state *st)
+ {
+-	struct ms5611_state *st = iio_priv(dev_to_iio_dev(dev));
+-
+ 	return i2c_smbus_write_byte(st->client, MS5611_RESET);
+ }
+ 
+-static int ms5611_i2c_read_prom_word(struct device *dev, int index, u16 *word)
++static int ms5611_i2c_read_prom_word(struct ms5611_state *st, int index,
++				     u16 *word)
+ {
+ 	int ret;
+-	struct ms5611_state *st = iio_priv(dev_to_iio_dev(dev));
+ 
+ 	ret = i2c_smbus_read_word_swapped(st->client,
+ 			MS5611_READ_PROM_WORD + (index << 1));
+@@ -57,11 +55,10 @@ static int ms5611_i2c_read_adc(struct ms5611_state *st, s32 *val)
+ 	return 0;
+ }
+ 
+-static int ms5611_i2c_read_adc_temp_and_pressure(struct device *dev,
++static int ms5611_i2c_read_adc_temp_and_pressure(struct ms5611_state *st,
+ 						 s32 *temp, s32 *pressure)
+ {
+ 	int ret;
+-	struct ms5611_state *st = iio_priv(dev_to_iio_dev(dev));
+ 	const struct ms5611_osr *osr = st->temp_osr;
+ 
+ 	ret = i2c_smbus_write_byte(st->client, osr->cmd);
+diff --git a/drivers/iio/pressure/ms5611_spi.c b/drivers/iio/pressure/ms5611_spi.c
+index f7743ee3318f8..3039fe8aa2a2d 100644
+--- a/drivers/iio/pressure/ms5611_spi.c
++++ b/drivers/iio/pressure/ms5611_spi.c
+@@ -15,18 +15,17 @@
+ 
+ #include "ms5611.h"
+ 
+-static int ms5611_spi_reset(struct device *dev)
++static int ms5611_spi_reset(struct ms5611_state *st)
+ {
+ 	u8 cmd = MS5611_RESET;
+-	struct ms5611_state *st = iio_priv(dev_to_iio_dev(dev));
+ 
+ 	return spi_write_then_read(st->client, &cmd, 1, NULL, 0);
+ }
+ 
+-static int ms5611_spi_read_prom_word(struct device *dev, int index, u16 *word)
++static int ms5611_spi_read_prom_word(struct ms5611_state *st, int index,
++				     u16 *word)
+ {
+ 	int ret;
+-	struct ms5611_state *st = iio_priv(dev_to_iio_dev(dev));
+ 
+ 	ret = spi_w8r16be(st->client, MS5611_READ_PROM_WORD + (index << 1));
+ 	if (ret < 0)
+@@ -37,11 +36,10 @@ static int ms5611_spi_read_prom_word(struct device *dev, int index, u16 *word)
+ 	return 0;
+ }
+ 
+-static int ms5611_spi_read_adc(struct device *dev, s32 *val)
++static int ms5611_spi_read_adc(struct ms5611_state *st, s32 *val)
+ {
+ 	int ret;
+ 	u8 buf[3] = { MS5611_READ_ADC };
+-	struct ms5611_state *st = iio_priv(dev_to_iio_dev(dev));
+ 
+ 	ret = spi_write_then_read(st->client, buf, 1, buf, 3);
+ 	if (ret < 0)
+@@ -52,11 +50,10 @@ static int ms5611_spi_read_adc(struct device *dev, s32 *val)
+ 	return 0;
+ }
+ 
+-static int ms5611_spi_read_adc_temp_and_pressure(struct device *dev,
++static int ms5611_spi_read_adc_temp_and_pressure(struct ms5611_state *st,
+ 						 s32 *temp, s32 *pressure)
+ {
+ 	int ret;
+-	struct ms5611_state *st = iio_priv(dev_to_iio_dev(dev));
+ 	const struct ms5611_osr *osr = st->temp_osr;
+ 
+ 	/*
+@@ -68,7 +65,7 @@ static int ms5611_spi_read_adc_temp_and_pressure(struct device *dev,
+ 		return ret;
+ 
+ 	usleep_range(osr->conv_usec, osr->conv_usec + (osr->conv_usec / 10UL));
+-	ret = ms5611_spi_read_adc(dev, temp);
++	ret = ms5611_spi_read_adc(st, temp);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -78,7 +75,7 @@ static int ms5611_spi_read_adc_temp_and_pressure(struct device *dev,
+ 		return ret;
+ 
+ 	usleep_range(osr->conv_usec, osr->conv_usec + (osr->conv_usec / 10UL));
+-	return ms5611_spi_read_adc(dev, pressure);
++	return ms5611_spi_read_adc(st, pressure);
+ }
+ 
+ static int ms5611_spi_probe(struct spi_device *spi)
+diff --git a/drivers/input/misc/soc_button_array.c b/drivers/input/misc/soc_button_array.c
+index efffcf0ebd3b4..31c02c2019c1c 100644
+--- a/drivers/input/misc/soc_button_array.c
++++ b/drivers/input/misc/soc_button_array.c
+@@ -18,6 +18,10 @@
+ #include <linux/gpio.h>
+ #include <linux/platform_device.h>
+ 
++static bool use_low_level_irq;
++module_param(use_low_level_irq, bool, 0444);
++MODULE_PARM_DESC(use_low_level_irq, "Use low-level triggered IRQ instead of edge triggered");
++
+ struct soc_button_info {
+ 	const char *name;
+ 	int acpi_index;
+@@ -73,6 +77,13 @@ static const struct dmi_system_id dmi_use_low_level_irq[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire SW5-012"),
+ 		},
+ 	},
++	{
++		/* Acer Switch V 10 SW5-017, same issue as Acer Switch 10 SW5-012. */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "SW5-017"),
++		},
++	},
+ 	{
+ 		/*
+ 		 * Acer One S1003. _LID method messes with power-button GPIO
+@@ -164,7 +175,8 @@ soc_button_device_create(struct platform_device *pdev,
+ 		}
+ 
+ 		/* See dmi_use_low_level_irq[] comment */
+-		if (!autorepeat && dmi_check_system(dmi_use_low_level_irq)) {
++		if (!autorepeat && (use_low_level_irq ||
++				    dmi_check_system(dmi_use_low_level_irq))) {
+ 			irq_set_irq_type(irq, IRQ_TYPE_LEVEL_LOW);
+ 			gpio_keys[n_buttons].irq = irq;
+ 			gpio_keys[n_buttons].gpio = -ENOENT;
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 82577095e175e..f1013b950d579 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -191,6 +191,7 @@ static const char * const smbus_pnp_ids[] = {
+ 	"SYN3221", /* HP 15-ay000 */
+ 	"SYN323d", /* HP Spectre X360 13-w013dx */
+ 	"SYN3257", /* HP Envy 13-ad105ng */
++	"SYN3286", /* HP Laptop 15-da3001TU */
+ 	NULL
+ };
+ 
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index b23abde5d7db3..b7f87ad4b9a95 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -1059,6 +1059,7 @@ static int goodix_configure_dev(struct goodix_ts_data *ts)
+ 	input_set_abs_params(ts->input_dev, ABS_MT_WIDTH_MAJOR, 0, 255, 0, 0);
+ 	input_set_abs_params(ts->input_dev, ABS_MT_TOUCH_MAJOR, 0, 255, 0, 0);
+ 
++retry_read_config:
+ 	/* Read configuration and apply touchscreen parameters */
+ 	goodix_read_config(ts);
+ 
+@@ -1066,6 +1067,16 @@ static int goodix_configure_dev(struct goodix_ts_data *ts)
+ 	touchscreen_parse_properties(ts->input_dev, true, &ts->prop);
+ 
+ 	if (!ts->prop.max_x || !ts->prop.max_y || !ts->max_touch_num) {
++		if (!ts->reset_controller_at_probe &&
++		    ts->irq_pin_access_method != IRQ_PIN_ACCESS_NONE) {
++			dev_info(&ts->client->dev, "Config not set, resetting controller\n");
++			/* Retry after a controller reset */
++			ts->reset_controller_at_probe = true;
++			error = goodix_reset(ts);
++			if (error)
++				return error;
++			goto retry_read_config;
++		}
+ 		dev_err(&ts->client->dev,
+ 			"Invalid config (%d, %d, %d), using defaults\n",
+ 			ts->prop.max_x, ts->prop.max_y, ts->max_touch_num);
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 42b295337bafb..d8cb5bcd6b10e 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -1615,7 +1615,7 @@ static int its_select_cpu(struct irq_data *d,
+ 
+ 		cpu = cpumask_pick_least_loaded(d, tmpmask);
+ 	} else {
+-		cpumask_and(tmpmask, irq_data_get_affinity_mask(d), cpu_online_mask);
++		cpumask_copy(tmpmask, aff_mask);
+ 
+ 		/* If we cannot cross sockets, limit the search to that node */
+ 		if ((its_dev->its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_23144) &&
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 835b1f3464d06..2156a2d5ac70e 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -254,6 +254,7 @@ struct dm_integrity_c {
+ 
+ 	struct completion crypto_backoff;
+ 
++	bool wrote_to_journal;
+ 	bool journal_uptodate;
+ 	bool just_formatted;
+ 	bool recalculate_flag;
+@@ -2256,6 +2257,8 @@ static void integrity_commit(struct work_struct *w)
+ 	if (!commit_sections)
+ 		goto release_flush_bios;
+ 
++	ic->wrote_to_journal = true;
++
+ 	i = commit_start;
+ 	for (n = 0; n < commit_sections; n++) {
+ 		for (j = 0; j < ic->journal_section_entries; j++) {
+@@ -2470,10 +2473,6 @@ static void integrity_writer(struct work_struct *w)
+ 
+ 	unsigned prev_free_sectors;
+ 
+-	/* the following test is not needed, but it tests the replay code */
+-	if (unlikely(dm_post_suspending(ic->ti)) && !ic->meta_dev)
+-		return;
+-
+ 	spin_lock_irq(&ic->endio_wait.lock);
+ 	write_start = ic->committed_section;
+ 	write_sections = ic->n_committed_sections;
+@@ -2980,10 +2979,17 @@ static void dm_integrity_postsuspend(struct dm_target *ti)
+ 	drain_workqueue(ic->commit_wq);
+ 
+ 	if (ic->mode == 'J') {
+-		if (ic->meta_dev)
+-			queue_work(ic->writer_wq, &ic->writer_work);
++		queue_work(ic->writer_wq, &ic->writer_work);
+ 		drain_workqueue(ic->writer_wq);
+ 		dm_integrity_flush_buffers(ic, true);
++		if (ic->wrote_to_journal) {
++			init_journal(ic, ic->free_section,
++				     ic->journal_sections - ic->free_section, ic->commit_seq);
++			if (ic->free_section) {
++				init_journal(ic, 0, ic->free_section,
++					     next_commit_seq(ic->commit_seq));
++			}
++		}
+ 	}
+ 
+ 	if (ic->mode == 'B') {
+@@ -3011,6 +3017,8 @@ static void dm_integrity_resume(struct dm_target *ti)
+ 
+ 	DEBUG_print("resume\n");
+ 
++	ic->wrote_to_journal = false;
++
+ 	if (ic->provided_data_sectors != old_provided_data_sectors) {
+ 		if (ic->provided_data_sectors > old_provided_data_sectors &&
+ 		    ic->mode == 'B' &&
+diff --git a/drivers/mmc/host/sdhci-brcmstb.c b/drivers/mmc/host/sdhci-brcmstb.c
+index f24623aac2dbe..4d42b1810acea 100644
+--- a/drivers/mmc/host/sdhci-brcmstb.c
++++ b/drivers/mmc/host/sdhci-brcmstb.c
+@@ -12,28 +12,55 @@
+ #include <linux/bitops.h>
+ #include <linux/delay.h>
+ 
++#include "sdhci-cqhci.h"
+ #include "sdhci-pltfm.h"
+ #include "cqhci.h"
+ 
+ #define SDHCI_VENDOR 0x78
+ #define  SDHCI_VENDOR_ENHANCED_STRB 0x1
++#define  SDHCI_VENDOR_GATE_SDCLK_EN 0x2
+ 
+-#define BRCMSTB_PRIV_FLAGS_NO_64BIT		BIT(0)
+-#define BRCMSTB_PRIV_FLAGS_BROKEN_TIMEOUT	BIT(1)
++#define BRCMSTB_MATCH_FLAGS_NO_64BIT		BIT(0)
++#define BRCMSTB_MATCH_FLAGS_BROKEN_TIMEOUT	BIT(1)
++#define BRCMSTB_MATCH_FLAGS_HAS_CLOCK_GATE	BIT(2)
++
++#define BRCMSTB_PRIV_FLAGS_HAS_CQE		BIT(0)
++#define BRCMSTB_PRIV_FLAGS_GATE_CLOCK		BIT(1)
+ 
+ #define SDHCI_ARASAN_CQE_BASE_ADDR		0x200
+ 
+ struct sdhci_brcmstb_priv {
+ 	void __iomem *cfg_regs;
+-	bool has_cqe;
++	unsigned int flags;
+ };
+ 
+ struct brcmstb_match_priv {
+ 	void (*hs400es)(struct mmc_host *mmc, struct mmc_ios *ios);
+ 	struct sdhci_ops *ops;
+-	unsigned int flags;
++	const unsigned int flags;
+ };
+ 
++static inline void enable_clock_gating(struct sdhci_host *host)
++{
++	u32 reg;
++
++	reg = sdhci_readl(host, SDHCI_VENDOR);
++	reg |= SDHCI_VENDOR_GATE_SDCLK_EN;
++	sdhci_writel(host, reg, SDHCI_VENDOR);
++}
++
++void brcmstb_reset(struct sdhci_host *host, u8 mask)
++{
++	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
++	struct sdhci_brcmstb_priv *priv = sdhci_pltfm_priv(pltfm_host);
++
++	sdhci_and_cqhci_reset(host, mask);
++
++	/* Reset will clear this, so re-enable it */
++	if (priv->flags & BRCMSTB_PRIV_FLAGS_GATE_CLOCK)
++		enable_clock_gating(host);
++}
++
+ static void sdhci_brcmstb_hs400es(struct mmc_host *mmc, struct mmc_ios *ios)
+ {
+ 	struct sdhci_host *host = mmc_priv(mmc);
+@@ -129,22 +156,23 @@ static struct sdhci_ops sdhci_brcmstb_ops = {
+ static struct sdhci_ops sdhci_brcmstb_ops_7216 = {
+ 	.set_clock = sdhci_brcmstb_set_clock,
+ 	.set_bus_width = sdhci_set_bus_width,
+-	.reset = sdhci_reset,
++	.reset = brcmstb_reset,
+ 	.set_uhs_signaling = sdhci_brcmstb_set_uhs_signaling,
+ };
+ 
+ static struct brcmstb_match_priv match_priv_7425 = {
+-	.flags = BRCMSTB_PRIV_FLAGS_NO_64BIT |
+-	BRCMSTB_PRIV_FLAGS_BROKEN_TIMEOUT,
++	.flags = BRCMSTB_MATCH_FLAGS_NO_64BIT |
++	BRCMSTB_MATCH_FLAGS_BROKEN_TIMEOUT,
+ 	.ops = &sdhci_brcmstb_ops,
+ };
+ 
+ static struct brcmstb_match_priv match_priv_7445 = {
+-	.flags = BRCMSTB_PRIV_FLAGS_BROKEN_TIMEOUT,
++	.flags = BRCMSTB_MATCH_FLAGS_BROKEN_TIMEOUT,
+ 	.ops = &sdhci_brcmstb_ops,
+ };
+ 
+ static const struct brcmstb_match_priv match_priv_7216 = {
++	.flags = BRCMSTB_MATCH_FLAGS_HAS_CLOCK_GATE,
+ 	.hs400es = sdhci_brcmstb_hs400es,
+ 	.ops = &sdhci_brcmstb_ops_7216,
+ };
+@@ -176,7 +204,7 @@ static int sdhci_brcmstb_add_host(struct sdhci_host *host,
+ 	bool dma64;
+ 	int ret;
+ 
+-	if (!priv->has_cqe)
++	if ((priv->flags & BRCMSTB_PRIV_FLAGS_HAS_CQE) == 0)
+ 		return sdhci_add_host(host);
+ 
+ 	dev_dbg(mmc_dev(host->mmc), "CQE is enabled\n");
+@@ -225,7 +253,6 @@ static int sdhci_brcmstb_probe(struct platform_device *pdev)
+ 	struct sdhci_brcmstb_priv *priv;
+ 	struct sdhci_host *host;
+ 	struct resource *iomem;
+-	bool has_cqe = false;
+ 	struct clk *clk;
+ 	int res;
+ 
+@@ -244,10 +271,6 @@ static int sdhci_brcmstb_probe(struct platform_device *pdev)
+ 		return res;
+ 
+ 	memset(&brcmstb_pdata, 0, sizeof(brcmstb_pdata));
+-	if (device_property_read_bool(&pdev->dev, "supports-cqe")) {
+-		has_cqe = true;
+-		match_priv->ops->irq = sdhci_brcmstb_cqhci_irq;
+-	}
+ 	brcmstb_pdata.ops = match_priv->ops;
+ 	host = sdhci_pltfm_init(pdev, &brcmstb_pdata,
+ 				sizeof(struct sdhci_brcmstb_priv));
+@@ -258,7 +281,10 @@ static int sdhci_brcmstb_probe(struct platform_device *pdev)
+ 
+ 	pltfm_host = sdhci_priv(host);
+ 	priv = sdhci_pltfm_priv(pltfm_host);
+-	priv->has_cqe = has_cqe;
++	if (device_property_read_bool(&pdev->dev, "supports-cqe")) {
++		priv->flags |= BRCMSTB_PRIV_FLAGS_HAS_CQE;
++		match_priv->ops->irq = sdhci_brcmstb_cqhci_irq;
++	}
+ 
+ 	/* Map in the non-standard CFG registers */
+ 	iomem = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+@@ -273,6 +299,14 @@ static int sdhci_brcmstb_probe(struct platform_device *pdev)
+ 	if (res)
+ 		goto err;
+ 
++	/*
++	 * Automatic clock gating does not work for SD cards that may
++	 * voltage switch so only enable it for non-removable devices.
++	 */
++	if ((match_priv->flags & BRCMSTB_MATCH_FLAGS_HAS_CLOCK_GATE) &&
++	    (host->mmc->caps & MMC_CAP_NONREMOVABLE))
++		priv->flags |= BRCMSTB_PRIV_FLAGS_GATE_CLOCK;
++
+ 	/*
+ 	 * If the chip has enhanced strobe and it's enabled, add
+ 	 * callback
+@@ -287,14 +321,14 @@ static int sdhci_brcmstb_probe(struct platform_device *pdev)
+ 	 * properties through mmc_of_parse().
+ 	 */
+ 	host->caps = sdhci_readl(host, SDHCI_CAPABILITIES);
+-	if (match_priv->flags & BRCMSTB_PRIV_FLAGS_NO_64BIT)
++	if (match_priv->flags & BRCMSTB_MATCH_FLAGS_NO_64BIT)
+ 		host->caps &= ~SDHCI_CAN_64BIT;
+ 	host->caps1 = sdhci_readl(host, SDHCI_CAPABILITIES_1);
+ 	host->caps1 &= ~(SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_SDR104 |
+ 			 SDHCI_SUPPORT_DDR50);
+ 	host->quirks |= SDHCI_QUIRK_MISSING_CAPS;
+ 
+-	if (match_priv->flags & BRCMSTB_PRIV_FLAGS_BROKEN_TIMEOUT)
++	if (match_priv->flags & BRCMSTB_MATCH_FLAGS_BROKEN_TIMEOUT)
+ 		host->quirks |= SDHCI_QUIRK_BROKEN_TIMEOUT_VAL;
+ 
+ 	res = sdhci_brcmstb_add_host(host, priv);
+diff --git a/drivers/net/arcnet/arc-rimi.c b/drivers/net/arcnet/arc-rimi.c
+index 98df38fe553ce..12d085405bd05 100644
+--- a/drivers/net/arcnet/arc-rimi.c
++++ b/drivers/net/arcnet/arc-rimi.c
+@@ -332,7 +332,7 @@ static int __init arc_rimi_init(void)
+ 		dev->irq = 9;
+ 
+ 	if (arcrimi_probe(dev)) {
+-		free_netdev(dev);
++		free_arcdev(dev);
+ 		return -EIO;
+ 	}
+ 
+@@ -349,7 +349,7 @@ static void __exit arc_rimi_exit(void)
+ 	iounmap(lp->mem_start);
+ 	release_mem_region(dev->mem_start, dev->mem_end - dev->mem_start + 1);
+ 	free_irq(dev->irq, dev);
+-	free_netdev(dev);
++	free_arcdev(dev);
+ }
+ 
+ #ifndef MODULE
+diff --git a/drivers/net/arcnet/arcdevice.h b/drivers/net/arcnet/arcdevice.h
+index 22a49c6d7ae6e..5d4a4c7efbbff 100644
+--- a/drivers/net/arcnet/arcdevice.h
++++ b/drivers/net/arcnet/arcdevice.h
+@@ -298,6 +298,10 @@ struct arcnet_local {
+ 
+ 	int excnak_pending;    /* We just got an excesive nak interrupt */
+ 
++	/* RESET flag handling */
++	int reset_in_progress;
++	struct work_struct reset_work;
++
+ 	struct {
+ 		uint16_t sequence;	/* sequence number (incs with each packet) */
+ 		__be16 aborted_seq;
+@@ -350,7 +354,9 @@ void arcnet_dump_skb(struct net_device *dev, struct sk_buff *skb, char *desc)
+ 
+ void arcnet_unregister_proto(struct ArcProto *proto);
+ irqreturn_t arcnet_interrupt(int irq, void *dev_id);
++
+ struct net_device *alloc_arcdev(const char *name);
++void free_arcdev(struct net_device *dev);
+ 
+ int arcnet_open(struct net_device *dev);
+ int arcnet_close(struct net_device *dev);
+diff --git a/drivers/net/arcnet/arcnet.c b/drivers/net/arcnet/arcnet.c
+index e04efc0a5c977..d76dd7d14299e 100644
+--- a/drivers/net/arcnet/arcnet.c
++++ b/drivers/net/arcnet/arcnet.c
+@@ -387,10 +387,44 @@ static void arcnet_timer(struct timer_list *t)
+ 	struct arcnet_local *lp = from_timer(lp, t, timer);
+ 	struct net_device *dev = lp->dev;
+ 
+-	if (!netif_carrier_ok(dev)) {
++	spin_lock_irq(&lp->lock);
++
++	if (!lp->reset_in_progress && !netif_carrier_ok(dev)) {
+ 		netif_carrier_on(dev);
+ 		netdev_info(dev, "link up\n");
+ 	}
++
++	spin_unlock_irq(&lp->lock);
++}
++
++static void reset_device_work(struct work_struct *work)
++{
++	struct arcnet_local *lp;
++	struct net_device *dev;
++
++	lp = container_of(work, struct arcnet_local, reset_work);
++	dev = lp->dev;
++
++	/* Do not bring the network interface back up if an ifdown
++	 * was already done.
++	 */
++	if (!netif_running(dev) || !lp->reset_in_progress)
++		return;
++
++	rtnl_lock();
++
++	/* Do another check, in case of an ifdown that was triggered in
++	 * the small race window between the exit condition above and
++	 * acquiring RTNL.
++	 */
++	if (!netif_running(dev) || !lp->reset_in_progress)
++		goto out;
++
++	dev_close(dev);
++	dev_open(dev, NULL);
++
++out:
++	rtnl_unlock();
+ }
+ 
+ static void arcnet_reply_tasklet(unsigned long data)
+@@ -452,12 +486,25 @@ struct net_device *alloc_arcdev(const char *name)
+ 		lp->dev = dev;
+ 		spin_lock_init(&lp->lock);
+ 		timer_setup(&lp->timer, arcnet_timer, 0);
++		INIT_WORK(&lp->reset_work, reset_device_work);
+ 	}
+ 
+ 	return dev;
+ }
+ EXPORT_SYMBOL(alloc_arcdev);
+ 
++void free_arcdev(struct net_device *dev)
++{
++	struct arcnet_local *lp = netdev_priv(dev);
++
++	/* Do not cancel this at ->ndo_close(), as the workqueue itself
++	 * indirectly calls the ifdown path through dev_close().
++	 */
++	cancel_work_sync(&lp->reset_work);
++	free_netdev(dev);
++}
++EXPORT_SYMBOL(free_arcdev);
++
+ /* Open/initialize the board.  This is called sometime after booting when
+  * the 'ifconfig' program is run.
+  *
+@@ -587,6 +634,10 @@ int arcnet_close(struct net_device *dev)
+ 
+ 	/* shut down the card */
+ 	lp->hw.close(dev);
++
++	/* reset counters */
++	lp->reset_in_progress = 0;
++
+ 	module_put(lp->hw.owner);
+ 	return 0;
+ }
+@@ -820,6 +871,9 @@ irqreturn_t arcnet_interrupt(int irq, void *dev_id)
+ 
+ 	spin_lock_irqsave(&lp->lock, flags);
+ 
++	if (lp->reset_in_progress)
++		goto out;
++
+ 	/* RESET flag was enabled - if device is not running, we must
+ 	 * clear it right away (but nothing else).
+ 	 */
+@@ -852,11 +906,14 @@ irqreturn_t arcnet_interrupt(int irq, void *dev_id)
+ 		if (status & RESETflag) {
+ 			arc_printk(D_NORMAL, dev, "spurious reset (status=%Xh)\n",
+ 				   status);
+-			arcnet_close(dev);
+-			arcnet_open(dev);
++
++			lp->reset_in_progress = 1;
++			netif_stop_queue(dev);
++			netif_carrier_off(dev);
++			schedule_work(&lp->reset_work);
+ 
+ 			/* get out of the interrupt handler! */
+-			break;
++			goto out;
+ 		}
+ 		/* RX is inhibited - we must have received something.
+ 		 * Prepare to receive into the next buffer.
+@@ -1052,6 +1109,7 @@ irqreturn_t arcnet_interrupt(int irq, void *dev_id)
+ 	udelay(1);
+ 	lp->hw.intmask(dev, lp->intmask);
+ 
++out:
+ 	spin_unlock_irqrestore(&lp->lock, flags);
+ 	return retval;
+ }
+diff --git a/drivers/net/arcnet/com20020-isa.c b/drivers/net/arcnet/com20020-isa.c
+index f983c4ce6b07f..be618e4b9ed5e 100644
+--- a/drivers/net/arcnet/com20020-isa.c
++++ b/drivers/net/arcnet/com20020-isa.c
+@@ -169,7 +169,7 @@ static int __init com20020_init(void)
+ 		dev->irq = 9;
+ 
+ 	if (com20020isa_probe(dev)) {
+-		free_netdev(dev);
++		free_arcdev(dev);
+ 		return -EIO;
+ 	}
+ 
+@@ -182,7 +182,7 @@ static void __exit com20020_exit(void)
+ 	unregister_netdev(my_dev);
+ 	free_irq(my_dev->irq, my_dev);
+ 	release_region(my_dev->base_addr, ARCNET_TOTAL_SIZE);
+-	free_netdev(my_dev);
++	free_arcdev(my_dev);
+ }
+ 
+ #ifndef MODULE
+diff --git a/drivers/net/arcnet/com20020-pci.c b/drivers/net/arcnet/com20020-pci.c
+index 9f44e2e458df1..b4f8798d8c509 100644
+--- a/drivers/net/arcnet/com20020-pci.c
++++ b/drivers/net/arcnet/com20020-pci.c
+@@ -294,7 +294,7 @@ static void com20020pci_remove(struct pci_dev *pdev)
+ 
+ 		unregister_netdev(dev);
+ 		free_irq(dev->irq, dev);
+-		free_netdev(dev);
++		free_arcdev(dev);
+ 	}
+ }
+ 
+diff --git a/drivers/net/arcnet/com20020_cs.c b/drivers/net/arcnet/com20020_cs.c
+index cf607ffcf358e..e0c7720bd5da9 100644
+--- a/drivers/net/arcnet/com20020_cs.c
++++ b/drivers/net/arcnet/com20020_cs.c
+@@ -113,6 +113,7 @@ static int com20020_probe(struct pcmcia_device *p_dev)
+ 	struct com20020_dev *info;
+ 	struct net_device *dev;
+ 	struct arcnet_local *lp;
++	int ret = -ENOMEM;
+ 
+ 	dev_dbg(&p_dev->dev, "com20020_attach()\n");
+ 
+@@ -142,12 +143,18 @@ static int com20020_probe(struct pcmcia_device *p_dev)
+ 	info->dev = dev;
+ 	p_dev->priv = info;
+ 
+-	return com20020_config(p_dev);
++	ret = com20020_config(p_dev);
++	if (ret)
++		goto fail_config;
++
++	return 0;
+ 
++fail_config:
++	free_arcdev(dev);
+ fail_alloc_dev:
+ 	kfree(info);
+ fail_alloc_info:
+-	return -ENOMEM;
++	return ret;
+ } /* com20020_attach */
+ 
+ static void com20020_detach(struct pcmcia_device *link)
+@@ -177,7 +184,7 @@ static void com20020_detach(struct pcmcia_device *link)
+ 		dev = info->dev;
+ 		if (dev) {
+ 			dev_dbg(&link->dev, "kfree...\n");
+-			free_netdev(dev);
++			free_arcdev(dev);
+ 		}
+ 		dev_dbg(&link->dev, "kfree2...\n");
+ 		kfree(info);
+diff --git a/drivers/net/arcnet/com90io.c b/drivers/net/arcnet/com90io.c
+index cf214b7306715..3856b447d38ed 100644
+--- a/drivers/net/arcnet/com90io.c
++++ b/drivers/net/arcnet/com90io.c
+@@ -396,7 +396,7 @@ static int __init com90io_init(void)
+ 	err = com90io_probe(dev);
+ 
+ 	if (err) {
+-		free_netdev(dev);
++		free_arcdev(dev);
+ 		return err;
+ 	}
+ 
+@@ -419,7 +419,7 @@ static void __exit com90io_exit(void)
+ 
+ 	free_irq(dev->irq, dev);
+ 	release_region(dev->base_addr, ARCNET_TOTAL_SIZE);
+-	free_netdev(dev);
++	free_arcdev(dev);
+ }
+ 
+ module_init(com90io_init)
+diff --git a/drivers/net/arcnet/com90xx.c b/drivers/net/arcnet/com90xx.c
+index 3dc3d533cb19a..d8dfb9ea0de89 100644
+--- a/drivers/net/arcnet/com90xx.c
++++ b/drivers/net/arcnet/com90xx.c
+@@ -554,7 +554,7 @@ err_free_irq:
+ err_release_mem:
+ 	release_mem_region(dev->mem_start, dev->mem_end - dev->mem_start + 1);
+ err_free_dev:
+-	free_netdev(dev);
++	free_arcdev(dev);
+ 	return -EIO;
+ }
+ 
+@@ -672,7 +672,7 @@ static void __exit com90xx_exit(void)
+ 		release_region(dev->base_addr, ARCNET_TOTAL_SIZE);
+ 		release_mem_region(dev->mem_start,
+ 				   dev->mem_end - dev->mem_start + 1);
+-		free_netdev(dev);
++		free_arcdev(dev);
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+index 08437eaacbb96..ac327839eed90 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+@@ -795,16 +795,20 @@ static void bnx2x_vf_enable_traffic(struct bnx2x *bp, struct bnx2x_virtf *vf)
+ 
+ static u8 bnx2x_vf_is_pcie_pending(struct bnx2x *bp, u8 abs_vfid)
+ {
+-	struct pci_dev *dev;
+ 	struct bnx2x_virtf *vf = bnx2x_vf_by_abs_fid(bp, abs_vfid);
++	struct pci_dev *dev;
++	bool pending;
+ 
+ 	if (!vf)
+ 		return false;
+ 
+ 	dev = pci_get_domain_bus_and_slot(vf->domain, vf->bus, vf->devfn);
+-	if (dev)
+-		return bnx2x_is_pcie_pending(dev);
+-	return false;
++	if (!dev)
++		return false;
++	pending = bnx2x_is_pcie_pending(dev);
++	pci_dev_put(dev);
++
++	return pending;
+ }
+ 
+ int bnx2x_vf_flr_clnup_epilog(struct bnx2x *bp, u8 abs_vfid)
+diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c
+index c4dc6e2ccd6b7..eefb25bcf57ff 100644
+--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
++++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
+@@ -1798,7 +1798,7 @@ static int liquidio_open(struct net_device *netdev)
+ 
+ 	ifstate_set(lio, LIO_IFSTATE_RUNNING);
+ 
+-	if (!OCTEON_CN23XX_PF(oct) || (OCTEON_CN23XX_PF(oct) && !oct->msix_on)) {
++	if (!OCTEON_CN23XX_PF(oct) || !oct->msix_on) {
+ 		ret = setup_tx_poll_fn(netdev);
+ 		if (ret)
+ 			goto err_poll;
+@@ -1828,7 +1828,7 @@ static int liquidio_open(struct net_device *netdev)
+ 	return 0;
+ 
+ err_rx_ctrl:
+-	if (!OCTEON_CN23XX_PF(oct) || (OCTEON_CN23XX_PF(oct) && !oct->msix_on))
++	if (!OCTEON_CN23XX_PF(oct) || !oct->msix_on)
+ 		cleanup_tx_poll_fn(netdev);
+ err_poll:
+ 	if (lio->ptp_clock) {
+diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+index 8ff28ed04b7fc..f0e48b9373d6d 100644
+--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
++++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+@@ -1438,8 +1438,10 @@ static acpi_status bgx_acpi_match_id(acpi_handle handle, u32 lvl,
+ 		return AE_OK;
+ 	}
+ 
+-	if (strncmp(string.pointer, bgx_sel, 4))
++	if (strncmp(string.pointer, bgx_sel, 4)) {
++		kfree(string.pointer);
+ 		return AE_OK;
++	}
+ 
+ 	acpi_walk_namespace(ACPI_TYPE_DEVICE, handle, 1,
+ 			    bgx_acpi_register_phy, NULL, bgx, NULL);
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index ca62c72eb7729..975762ccb66fd 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -1212,7 +1212,7 @@ static void enetc_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
+ 	/* enable Tx ints by setting pkt thr to 1 */
+ 	enetc_txbdr_wr(hw, idx, ENETC_TBICR0, ENETC_TBICR0_ICEN | 0x1);
+ 
+-	tbmr = ENETC_TBMR_EN;
++	tbmr = ENETC_TBMR_EN | ENETC_TBMR_SET_PRIO(tx_ring->prio);
+ 	if (tx_ring->ndev->features & NETIF_F_HW_VLAN_CTAG_TX)
+ 		tbmr |= ENETC_TBMR_VIH;
+ 
+@@ -1272,13 +1272,14 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
+ 
+ static void enetc_setup_bdrs(struct enetc_ndev_priv *priv)
+ {
++	struct enetc_hw *hw = &priv->si->hw;
+ 	int i;
+ 
+ 	for (i = 0; i < priv->num_tx_rings; i++)
+-		enetc_setup_txbdr(&priv->si->hw, priv->tx_ring[i]);
++		enetc_setup_txbdr(hw, priv->tx_ring[i]);
+ 
+ 	for (i = 0; i < priv->num_rx_rings; i++)
+-		enetc_setup_rxbdr(&priv->si->hw, priv->rx_ring[i]);
++		enetc_setup_rxbdr(hw, priv->rx_ring[i]);
+ }
+ 
+ static void enetc_clear_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
+@@ -1311,13 +1312,14 @@ static void enetc_clear_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
+ 
+ static void enetc_clear_bdrs(struct enetc_ndev_priv *priv)
+ {
++	struct enetc_hw *hw = &priv->si->hw;
+ 	int i;
+ 
+ 	for (i = 0; i < priv->num_tx_rings; i++)
+-		enetc_clear_txbdr(&priv->si->hw, priv->tx_ring[i]);
++		enetc_clear_txbdr(hw, priv->tx_ring[i]);
+ 
+ 	for (i = 0; i < priv->num_rx_rings; i++)
+-		enetc_clear_rxbdr(&priv->si->hw, priv->rx_ring[i]);
++		enetc_clear_rxbdr(hw, priv->rx_ring[i]);
+ 
+ 	udelay(1);
+ }
+@@ -1325,13 +1327,13 @@ static void enetc_clear_bdrs(struct enetc_ndev_priv *priv)
+ static int enetc_setup_irqs(struct enetc_ndev_priv *priv)
+ {
+ 	struct pci_dev *pdev = priv->si->pdev;
++	struct enetc_hw *hw = &priv->si->hw;
+ 	int i, j, err;
+ 
+ 	for (i = 0; i < priv->bdr_int_num; i++) {
+ 		int irq = pci_irq_vector(pdev, ENETC_BDR_INT_BASE_IDX + i);
+ 		struct enetc_int_vector *v = priv->int_vector[i];
+ 		int entry = ENETC_BDR_INT_BASE_IDX + i;
+-		struct enetc_hw *hw = &priv->si->hw;
+ 
+ 		snprintf(v->name, sizeof(v->name), "%s-rxtx%d",
+ 			 priv->ndev->name, i);
+@@ -1419,13 +1421,14 @@ static void enetc_setup_interrupts(struct enetc_ndev_priv *priv)
+ 
+ static void enetc_clear_interrupts(struct enetc_ndev_priv *priv)
+ {
++	struct enetc_hw *hw = &priv->si->hw;
+ 	int i;
+ 
+ 	for (i = 0; i < priv->num_tx_rings; i++)
+-		enetc_txbdr_wr(&priv->si->hw, i, ENETC_TBIER, 0);
++		enetc_txbdr_wr(hw, i, ENETC_TBIER, 0);
+ 
+ 	for (i = 0; i < priv->num_rx_rings; i++)
+-		enetc_rxbdr_wr(&priv->si->hw, i, ENETC_RBIER, 0);
++		enetc_rxbdr_wr(hw, i, ENETC_RBIER, 0);
+ }
+ 
+ static int enetc_phylink_connect(struct net_device *ndev)
+@@ -1565,6 +1568,7 @@ static int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
+ {
+ 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ 	struct tc_mqprio_qopt *mqprio = type_data;
++	struct enetc_hw *hw = &priv->si->hw;
+ 	struct enetc_bdr *tx_ring;
+ 	u8 num_tc;
+ 	int i;
+@@ -1579,7 +1583,8 @@ static int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
+ 		/* Reset all ring priorities to 0 */
+ 		for (i = 0; i < priv->num_tx_rings; i++) {
+ 			tx_ring = priv->tx_ring[i];
+-			enetc_set_bdr_prio(&priv->si->hw, tx_ring->index, 0);
++			tx_ring->prio = 0;
++			enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio);
+ 		}
+ 
+ 		return 0;
+@@ -1598,7 +1603,8 @@ static int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
+ 	 */
+ 	for (i = 0; i < num_tc; i++) {
+ 		tx_ring = priv->tx_ring[i];
+-		enetc_set_bdr_prio(&priv->si->hw, tx_ring->index, i);
++		tx_ring->prio = i;
++		enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio);
+ 	}
+ 
+ 	/* Reset the number of netdev queues based on the TC count */
+@@ -1679,19 +1685,21 @@ static int enetc_set_rss(struct net_device *ndev, int en)
+ static void enetc_enable_rxvlan(struct net_device *ndev, bool en)
+ {
+ 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
++	struct enetc_hw *hw = &priv->si->hw;
+ 	int i;
+ 
+ 	for (i = 0; i < priv->num_rx_rings; i++)
+-		enetc_bdr_enable_rxvlan(&priv->si->hw, i, en);
++		enetc_bdr_enable_rxvlan(hw, i, en);
+ }
+ 
+ static void enetc_enable_txvlan(struct net_device *ndev, bool en)
+ {
+ 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
++	struct enetc_hw *hw = &priv->si->hw;
+ 	int i;
+ 
+ 	for (i = 0; i < priv->num_tx_rings; i++)
+-		enetc_bdr_enable_txvlan(&priv->si->hw, i, en);
++		enetc_bdr_enable_txvlan(hw, i, en);
+ }
+ 
+ void enetc_set_features(struct net_device *ndev, netdev_features_t features)
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
+index 00386c5d3cde9..725c3d1cbb198 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc.h
+@@ -58,6 +58,7 @@ struct enetc_bdr {
+ 		void __iomem *rcir;
+ 	};
+ 	u16 index;
++	u16 prio;
+ 	int bd_count; /* # of BDs */
+ 	int next_to_use;
+ 	int next_to_clean;
+@@ -338,19 +339,20 @@ int enetc_set_psfp(struct net_device *ndev, bool en);
+ 
+ static inline void enetc_get_max_cap(struct enetc_ndev_priv *priv)
+ {
++	struct enetc_hw *hw = &priv->si->hw;
+ 	u32 reg;
+ 
+-	reg = enetc_port_rd(&priv->si->hw, ENETC_PSIDCAPR);
++	reg = enetc_port_rd(hw, ENETC_PSIDCAPR);
+ 	priv->psfp_cap.max_streamid = reg & ENETC_PSIDCAPR_MSK;
+ 	/* Port stream filter capability */
+-	reg = enetc_port_rd(&priv->si->hw, ENETC_PSFCAPR);
++	reg = enetc_port_rd(hw, ENETC_PSFCAPR);
+ 	priv->psfp_cap.max_psfp_filter = reg & ENETC_PSFCAPR_MSK;
+ 	/* Port stream gate capability */
+-	reg = enetc_port_rd(&priv->si->hw, ENETC_PSGCAPR);
++	reg = enetc_port_rd(hw, ENETC_PSGCAPR);
+ 	priv->psfp_cap.max_psfp_gate = (reg & ENETC_PSGCAPR_SGIT_MSK);
+ 	priv->psfp_cap.max_psfp_gatelist = (reg & ENETC_PSGCAPR_GCL_MSK) >> 16;
+ 	/* Port flow meter capability */
+-	reg = enetc_port_rd(&priv->si->hw, ENETC_PFMCAPR);
++	reg = enetc_port_rd(hw, ENETC_PFMCAPR);
+ 	priv->psfp_cap.max_psfp_meter = reg & ENETC_PFMCAPR_MSK;
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index 6904e10dd46b3..515db7e6e6497 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -748,9 +748,6 @@ static void enetc_pf_netdev_setup(struct enetc_si *si, struct net_device *ndev,
+ 
+ 	ndev->priv_flags |= IFF_UNICAST_FLT;
+ 
+-	if (si->hw_features & ENETC_SI_F_QBV)
+-		priv->active_offloads |= ENETC_F_QBV;
+-
+ 	if (si->hw_features & ENETC_SI_F_PSFP && !enetc_psfp_enable(priv)) {
+ 		priv->active_offloads |= ENETC_F_QCI;
+ 		ndev->features |= NETIF_F_HW_TC;
+@@ -996,7 +993,8 @@ static void enetc_pl_mac_link_up(struct phylink_config *config,
+ 	struct enetc_ndev_priv *priv;
+ 
+ 	priv = netdev_priv(pf->si->ndev);
+-	if (priv->active_offloads & ENETC_F_QBV)
++
++	if (pf->si->hw_features & ENETC_SI_F_QBV)
+ 		enetc_sched_speed_set(priv, speed);
+ 
+ 	if (!phylink_autoneg_inband(mode) &&
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+index 62efe1aebf86a..5841721c81190 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+@@ -17,8 +17,9 @@ static u16 enetc_get_max_gcl_len(struct enetc_hw *hw)
+ 
+ void enetc_sched_speed_set(struct enetc_ndev_priv *priv, int speed)
+ {
++	struct enetc_hw *hw = &priv->si->hw;
+ 	u32 old_speed = priv->speed;
+-	u32 pspeed;
++	u32 pspeed, tmp;
+ 
+ 	if (speed == old_speed)
+ 		return;
+@@ -39,16 +40,15 @@ void enetc_sched_speed_set(struct enetc_ndev_priv *priv, int speed)
+ 	}
+ 
+ 	priv->speed = speed;
+-	enetc_port_wr(&priv->si->hw, ENETC_PMR,
+-		      (enetc_port_rd(&priv->si->hw, ENETC_PMR)
+-		      & (~ENETC_PMR_PSPEED_MASK))
+-		      | pspeed);
++	tmp = enetc_port_rd(hw, ENETC_PMR);
++	enetc_port_wr(hw, ENETC_PMR, (tmp & ~ENETC_PMR_PSPEED_MASK) | pspeed);
+ }
+ 
+ static int enetc_setup_taprio(struct net_device *ndev,
+ 			      struct tc_taprio_qopt_offload *admin_conf)
+ {
+ 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
++	struct enetc_hw *hw = &priv->si->hw;
+ 	struct enetc_cbd cbd = {.cmd = 0};
+ 	struct tgs_gcl_conf *gcl_config;
+ 	struct tgs_gcl_data *gcl_data;
+@@ -60,15 +60,16 @@ static int enetc_setup_taprio(struct net_device *ndev,
+ 	int err;
+ 	int i;
+ 
+-	if (admin_conf->num_entries > enetc_get_max_gcl_len(&priv->si->hw))
++	if (admin_conf->num_entries > enetc_get_max_gcl_len(hw))
+ 		return -EINVAL;
+ 	gcl_len = admin_conf->num_entries;
+ 
+-	tge = enetc_rd(&priv->si->hw, ENETC_QBV_PTGCR_OFFSET);
++	tge = enetc_rd(hw, ENETC_QBV_PTGCR_OFFSET);
+ 	if (!admin_conf->enable) {
+-		enetc_wr(&priv->si->hw,
+-			 ENETC_QBV_PTGCR_OFFSET,
+-			 tge & (~ENETC_QBV_TGE));
++		enetc_wr(hw, ENETC_QBV_PTGCR_OFFSET, tge & ~ENETC_QBV_TGE);
++
++		priv->active_offloads &= ~ENETC_F_QBV;
++
+ 		return 0;
+ 	}
+ 
+@@ -123,18 +124,18 @@ static int enetc_setup_taprio(struct net_device *ndev,
+ 	cbd.cls = BDCR_CMD_PORT_GCL;
+ 	cbd.status_flags = 0;
+ 
+-	enetc_wr(&priv->si->hw, ENETC_QBV_PTGCR_OFFSET,
+-		 tge | ENETC_QBV_TGE);
++	enetc_wr(hw, ENETC_QBV_PTGCR_OFFSET, tge | ENETC_QBV_TGE);
+ 
+ 	err = enetc_send_cmd(priv->si, &cbd);
+ 	if (err)
+-		enetc_wr(&priv->si->hw,
+-			 ENETC_QBV_PTGCR_OFFSET,
+-			 tge & (~ENETC_QBV_TGE));
++		enetc_wr(hw, ENETC_QBV_PTGCR_OFFSET, tge & ~ENETC_QBV_TGE);
+ 
+ 	dma_unmap_single(&priv->si->pdev->dev, dma, data_size, DMA_TO_DEVICE);
+ 	kfree(gcl_data);
+ 
++	if (!err)
++		priv->active_offloads |= ENETC_F_QBV;
++
+ 	return err;
+ }
+ 
+@@ -142,6 +143,8 @@ int enetc_setup_tc_taprio(struct net_device *ndev, void *type_data)
+ {
+ 	struct tc_taprio_qopt_offload *taprio = type_data;
+ 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
++	struct enetc_hw *hw = &priv->si->hw;
++	struct enetc_bdr *tx_ring;
+ 	int err;
+ 	int i;
+ 
+@@ -150,18 +153,20 @@ int enetc_setup_tc_taprio(struct net_device *ndev, void *type_data)
+ 		if (priv->tx_ring[i]->tsd_enable)
+ 			return -EBUSY;
+ 
+-	for (i = 0; i < priv->num_tx_rings; i++)
+-		enetc_set_bdr_prio(&priv->si->hw,
+-				   priv->tx_ring[i]->index,
+-				   taprio->enable ? i : 0);
++	for (i = 0; i < priv->num_tx_rings; i++) {
++		tx_ring = priv->tx_ring[i];
++		tx_ring->prio = taprio->enable ? i : 0;
++		enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio);
++	}
+ 
+ 	err = enetc_setup_taprio(ndev, taprio);
+-
+-	if (err)
+-		for (i = 0; i < priv->num_tx_rings; i++)
+-			enetc_set_bdr_prio(&priv->si->hw,
+-					   priv->tx_ring[i]->index,
+-					   taprio->enable ? 0 : i);
++	if (err) {
++		for (i = 0; i < priv->num_tx_rings; i++) {
++			tx_ring = priv->tx_ring[i];
++			tx_ring->prio = taprio->enable ? 0 : i;
++			enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio);
++		}
++	}
+ 
+ 	return err;
+ }
+@@ -182,7 +187,7 @@ int enetc_setup_tc_cbs(struct net_device *ndev, void *type_data)
+ 	struct tc_cbs_qopt_offload *cbs = type_data;
+ 	u32 port_transmit_rate = priv->speed;
+ 	u8 tc_nums = netdev_get_num_tc(ndev);
+-	struct enetc_si *si = priv->si;
++	struct enetc_hw *hw = &priv->si->hw;
+ 	u32 hi_credit_bit, hi_credit_reg;
+ 	u32 max_interference_size;
+ 	u32 port_frame_max_size;
+@@ -203,15 +208,15 @@ int enetc_setup_tc_cbs(struct net_device *ndev, void *type_data)
+ 		 * lower than this TC have been disabled.
+ 		 */
+ 		if (tc == prio_top &&
+-		    enetc_get_cbs_enable(&si->hw, prio_next)) {
++		    enetc_get_cbs_enable(hw, prio_next)) {
+ 			dev_err(&ndev->dev,
+ 				"Disable TC%d before disable TC%d\n",
+ 				prio_next, tc);
+ 			return -EINVAL;
+ 		}
+ 
+-		enetc_port_wr(&si->hw, ENETC_PTCCBSR1(tc), 0);
+-		enetc_port_wr(&si->hw, ENETC_PTCCBSR0(tc), 0);
++		enetc_port_wr(hw, ENETC_PTCCBSR1(tc), 0);
++		enetc_port_wr(hw, ENETC_PTCCBSR0(tc), 0);
+ 
+ 		return 0;
+ 	}
+@@ -228,13 +233,13 @@ int enetc_setup_tc_cbs(struct net_device *ndev, void *type_data)
+ 	 * higher than this TC have been enabled.
+ 	 */
+ 	if (tc == prio_next) {
+-		if (!enetc_get_cbs_enable(&si->hw, prio_top)) {
++		if (!enetc_get_cbs_enable(hw, prio_top)) {
+ 			dev_err(&ndev->dev,
+ 				"Enable TC%d first before enable TC%d\n",
+ 				prio_top, prio_next);
+ 			return -EINVAL;
+ 		}
+-		bw_sum += enetc_get_cbs_bw(&si->hw, prio_top);
++		bw_sum += enetc_get_cbs_bw(hw, prio_top);
+ 	}
+ 
+ 	if (bw_sum + bw >= 100) {
+@@ -243,7 +248,7 @@ int enetc_setup_tc_cbs(struct net_device *ndev, void *type_data)
+ 		return -EINVAL;
+ 	}
+ 
+-	enetc_port_rd(&si->hw, ENETC_PTCMSDUR(tc));
++	enetc_port_rd(hw, ENETC_PTCMSDUR(tc));
+ 
+ 	/* For top prio TC, the max_interfrence_size is maxSizedFrame.
+ 	 *
+@@ -263,8 +268,8 @@ int enetc_setup_tc_cbs(struct net_device *ndev, void *type_data)
+ 		u32 m0, ma, r0, ra;
+ 
+ 		m0 = port_frame_max_size * 8;
+-		ma = enetc_port_rd(&si->hw, ENETC_PTCMSDUR(prio_top)) * 8;
+-		ra = enetc_get_cbs_bw(&si->hw, prio_top) *
++		ma = enetc_port_rd(hw, ENETC_PTCMSDUR(prio_top)) * 8;
++		ra = enetc_get_cbs_bw(hw, prio_top) *
+ 			port_transmit_rate * 10000ULL;
+ 		r0 = port_transmit_rate * 1000000ULL;
+ 		max_interference_size = m0 + ma +
+@@ -284,10 +289,10 @@ int enetc_setup_tc_cbs(struct net_device *ndev, void *type_data)
+ 	hi_credit_reg = (u32)div_u64((ENETC_CLK * 100ULL) * hi_credit_bit,
+ 				     port_transmit_rate * 1000000ULL);
+ 
+-	enetc_port_wr(&si->hw, ENETC_PTCCBSR1(tc), hi_credit_reg);
++	enetc_port_wr(hw, ENETC_PTCCBSR1(tc), hi_credit_reg);
+ 
+ 	/* Set bw register and enable this traffic class */
+-	enetc_port_wr(&si->hw, ENETC_PTCCBSR0(tc), bw | ENETC_CBSE);
++	enetc_port_wr(hw, ENETC_PTCCBSR0(tc), bw | ENETC_CBSE);
+ 
+ 	return 0;
+ }
+@@ -297,6 +302,7 @@ int enetc_setup_tc_txtime(struct net_device *ndev, void *type_data)
+ 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ 	struct tc_etf_qopt_offload *qopt = type_data;
+ 	u8 tc_nums = netdev_get_num_tc(ndev);
++	struct enetc_hw *hw = &priv->si->hw;
+ 	int tc;
+ 
+ 	if (!tc_nums)
+@@ -312,12 +318,11 @@ int enetc_setup_tc_txtime(struct net_device *ndev, void *type_data)
+ 		return -EBUSY;
+ 
+ 	/* TSD and Qbv are mutually exclusive in hardware */
+-	if (enetc_rd(&priv->si->hw, ENETC_QBV_PTGCR_OFFSET) & ENETC_QBV_TGE)
++	if (enetc_rd(hw, ENETC_QBV_PTGCR_OFFSET) & ENETC_QBV_TGE)
+ 		return -EBUSY;
+ 
+ 	priv->tx_ring[tc]->tsd_enable = qopt->enable;
+-	enetc_port_wr(&priv->si->hw, ENETC_PTCTSDR(tc),
+-		      qopt->enable ? ENETC_TSDE : 0);
++	enetc_port_wr(hw, ENETC_PTCTSDR(tc), qopt->enable ? ENETC_TSDE : 0);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index c7aff89141e17..217dc67c48fa2 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -2299,7 +2299,10 @@ static int mtk_open(struct net_device *dev)
+ 		int err = mtk_start_dma(eth);
+ 
+ 		if (err)
++		if (err) {
++			phylink_disconnect_phy(mac->phylink);
+ 			return err;
++		}
+ 
+ 		mtk_gdm_config(eth, MTK_GDMA_TO_PDMA);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/qp.c b/drivers/net/ethernet/mellanox/mlx4/qp.c
+index 427e7a31862c2..d7f2890c254fe 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/qp.c
++++ b/drivers/net/ethernet/mellanox/mlx4/qp.c
+@@ -697,7 +697,8 @@ static int mlx4_create_zones(struct mlx4_dev *dev,
+ 			err = mlx4_bitmap_init(*bitmap + k, 1,
+ 					       MLX4_QP_TABLE_RAW_ETH_SIZE - 1, 0,
+ 					       0);
+-			mlx4_bitmap_alloc_range(*bitmap + k, 1, 1, 0);
++			if (!err)
++				mlx4_bitmap_alloc_range(*bitmap + k, 1, 1, 0);
+ 		}
+ 
+ 		if (err)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index cf07318048df1..c838d8698eab4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -959,6 +959,7 @@ static void cmd_work_handler(struct work_struct *work)
+ 		cmd_ent_get(ent);
+ 	set_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state);
+ 
++	cmd_ent_get(ent); /* for the _real_ FW event on completion */
+ 	/* Skip sending command to fw if internal error */
+ 	if (mlx5_cmd_is_down(dev) || !opcode_allowed(&dev->cmd, ent->op)) {
+ 		u8 status = 0;
+@@ -972,7 +973,6 @@ static void cmd_work_handler(struct work_struct *work)
+ 		return;
+ 	}
+ 
+-	cmd_ent_get(ent); /* for the _real_ FW event on completion */
+ 	/* ring doorbell after the descriptor is valid */
+ 	mlx5_core_dbg(dev, "writing 0x%x to command doorbell\n", 1 << ent->idx);
+ 	wmb();
+@@ -1586,8 +1586,8 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
+ 				cmd_ent_put(ent); /* timeout work was canceled */
+ 
+ 			if (!forced || /* Real FW completion */
+-			    pci_channel_offline(dev->pdev) || /* FW is inaccessible */
+-			    dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
++			     mlx5_cmd_is_down(dev) || /* No real FW completion is expected */
++			     !opcode_allowed(cmd, ent->op))
+ 				cmd_ent_put(ent);
+ 
+ 			ent->ts2 = ktime_get_ns();
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index e8a4adccd2b26..f800e1ca5ba62 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -638,7 +638,7 @@ static void mlx5_tracer_handle_timestamp_trace(struct mlx5_fw_tracer *tracer,
+ 			trace_timestamp = (timestamp_event.timestamp & MASK_52_7) |
+ 					  (str_frmt->timestamp & MASK_6_0);
+ 		else
+-			trace_timestamp = ((timestamp_event.timestamp & MASK_52_7) - 1) |
++			trace_timestamp = ((timestamp_event.timestamp - 1) & MASK_52_7) |
+ 					  (str_frmt->timestamp & MASK_6_0);
+ 
+ 		mlx5_tracer_print_trace(str_frmt, dev, trace_timestamp);
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+index 7a8187458724d..24578c48f075b 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+@@ -363,7 +363,7 @@ int nfp_devlink_port_register(struct nfp_app *app, struct nfp_port *port)
+ 		return ret;
+ 
+ 	attrs.split = eth_port.is_split;
+-	attrs.splittable = !attrs.split;
++	attrs.splittable = eth_port.port_lanes > 1 && !attrs.split;
+ 	attrs.lanes = eth_port.port_lanes;
+ 	attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
+ 	attrs.phys.port_number = eth_port.label_port;
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index 3977aa2f59bd1..311873ff57e33 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -1225,6 +1225,9 @@ nfp_port_get_module_info(struct net_device *netdev,
+ 	u8 data;
+ 
+ 	port = nfp_port_from_netdev(netdev);
++	if (!port)
++		return -EOPNOTSUPP;
++
+ 	/* update port state to get latest interface */
+ 	set_bit(NFP_PORT_CHANGED, &port->flags);
+ 	eth_port = nfp_port_get_eth_port(port);
+diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+index 2942102efd488..bde32f0845ca5 100644
+--- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
++++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+@@ -1166,6 +1166,7 @@ static void pch_gbe_tx_queue(struct pch_gbe_adapter *adapter,
+ 		buffer_info->dma = 0;
+ 		buffer_info->time_stamp = 0;
+ 		tx_ring->next_to_use = ring_num;
++		dev_kfree_skb_any(skb);
+ 		return;
+ 	}
+ 	buffer_info->mapped = true;
+@@ -2481,6 +2482,7 @@ static void pch_gbe_remove(struct pci_dev *pdev)
+ 	unregister_netdev(netdev);
+ 
+ 	pch_gbe_phy_hw_reset(&adapter->hw);
++	pci_dev_put(adapter->ptp_pdev);
+ 
+ 	free_netdev(netdev);
+ }
+@@ -2562,7 +2564,7 @@ static int pch_gbe_probe(struct pci_dev *pdev,
+ 	/* setup the private structure */
+ 	ret = pch_gbe_sw_init(adapter);
+ 	if (ret)
+-		goto err_free_netdev;
++		goto err_put_dev;
+ 
+ 	/* Initialize PHY */
+ 	ret = pch_gbe_init_phy(adapter);
+@@ -2620,6 +2622,8 @@ static int pch_gbe_probe(struct pci_dev *pdev,
+ 
+ err_free_adapter:
+ 	pch_gbe_phy_hw_reset(&adapter->hw);
++err_put_dev:
++	pci_dev_put(adapter->ptp_pdev);
+ err_free_netdev:
+ 	free_netdev(netdev);
+ 	return ret;
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index 2219e4c59ae60..99fd35a8ca750 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -2475,6 +2475,7 @@ static netdev_tx_t ql3xxx_send(struct sk_buff *skb,
+ 					     skb_shinfo(skb)->nr_frags);
+ 	if (tx_cb->seg_count == -1) {
+ 		netdev_err(ndev, "%s: invalid segment count!\n", __func__);
++		dev_kfree_skb_any(skb);
+ 		return NETDEV_TX_OK;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/sfc/ef100_netdev.c b/drivers/net/ethernet/sfc/ef100_netdev.c
+index 67fe44db6b612..63a44ee763be7 100644
+--- a/drivers/net/ethernet/sfc/ef100_netdev.c
++++ b/drivers/net/ethernet/sfc/ef100_netdev.c
+@@ -200,6 +200,7 @@ static netdev_tx_t ef100_hard_start_xmit(struct sk_buff *skb,
+ 		   skb->len, skb->data_len, channel->channel);
+ 	if (!efx->n_channels || !efx->n_tx_channels || !channel) {
+ 		netif_stop_queue(net_dev);
++		dev_kfree_skb_any(skb);
+ 		goto err;
+ 	}
+ 
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index f84e3cc0d3ec2..3e564158c401b 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -2648,11 +2648,6 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
+ 	if (ret)
+ 		goto rollback;
+ 
+-	/* Force features update, since they are different for SW MACSec and
+-	 * HW offloading cases.
+-	 */
+-	netdev_update_features(dev);
+-
+ 	rtnl_unlock();
+ 	return 0;
+ 
+@@ -3420,16 +3415,9 @@ static netdev_tx_t macsec_start_xmit(struct sk_buff *skb,
+ 	return ret;
+ }
+ 
+-#define SW_MACSEC_FEATURES \
++#define MACSEC_FEATURES \
+ 	(NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST)
+ 
+-/* If h/w offloading is enabled, use real device features save for
+- *   VLAN_FEATURES - they require additional ops
+- *   HW_MACSEC - no reason to report it
+- */
+-#define REAL_DEV_FEATURES(dev) \
+-	((dev)->features & ~(NETIF_F_VLAN_FEATURES | NETIF_F_HW_MACSEC))
+-
+ static int macsec_dev_init(struct net_device *dev)
+ {
+ 	struct macsec_dev *macsec = macsec_priv(dev);
+@@ -3446,12 +3434,8 @@ static int macsec_dev_init(struct net_device *dev)
+ 		return err;
+ 	}
+ 
+-	if (macsec_is_offloaded(macsec)) {
+-		dev->features = REAL_DEV_FEATURES(real_dev);
+-	} else {
+-		dev->features = real_dev->features & SW_MACSEC_FEATURES;
+-		dev->features |= NETIF_F_LLTX | NETIF_F_GSO_SOFTWARE;
+-	}
++	dev->features = real_dev->features & MACSEC_FEATURES;
++	dev->features |= NETIF_F_LLTX | NETIF_F_GSO_SOFTWARE;
+ 
+ 	dev->needed_headroom = real_dev->needed_headroom +
+ 			       MACSEC_NEEDED_HEADROOM;
+@@ -3480,10 +3464,7 @@ static netdev_features_t macsec_fix_features(struct net_device *dev,
+ 	struct macsec_dev *macsec = macsec_priv(dev);
+ 	struct net_device *real_dev = macsec->real_dev;
+ 
+-	if (macsec_is_offloaded(macsec))
+-		return REAL_DEV_FEATURES(real_dev);
+-
+-	features &= (real_dev->features & SW_MACSEC_FEATURES) |
++	features &= (real_dev->features & MACSEC_FEATURES) |
+ 		    NETIF_F_GSO_SOFTWARE | NETIF_F_SOFT_FEATURES;
+ 	features |= NETIF_F_LLTX;
+ 
+@@ -3832,7 +3813,6 @@ static int macsec_changelink(struct net_device *dev, struct nlattr *tb[],
+ 	if (macsec_is_offloaded(macsec)) {
+ 		const struct macsec_ops *ops;
+ 		struct macsec_context ctx;
+-		int ret;
+ 
+ 		ops = macsec_get_ops(netdev_priv(dev), &ctx);
+ 		if (!ops) {
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index a1c9233e264d9..7313e6e03c125 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1292,6 +1292,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x2357, 0x0201, 4)},	/* TP-LINK HSUPA Modem MA180 */
+ 	{QMI_FIXED_INTF(0x2357, 0x9000, 4)},	/* TP-LINK MA260 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)}, /* Telit LE910C1-EUX */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x103a, 0)}, /* Telit LE910C4-WWX */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)},	/* Telit LE922A */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)},	/* Telit FN980 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)},	/* Telit LN920 */
+diff --git a/drivers/net/wireless/cisco/airo.c b/drivers/net/wireless/cisco/airo.c
+index 0569f37e9ed59..8c9c6bfbaeee7 100644
+--- a/drivers/net/wireless/cisco/airo.c
++++ b/drivers/net/wireless/cisco/airo.c
+@@ -5236,7 +5236,7 @@ static int get_wep_tx_idx(struct airo_info *ai)
+ 	return -1;
+ }
+ 
+-static int set_wep_key(struct airo_info *ai, u16 index, const char *key,
++static int set_wep_key(struct airo_info *ai, u16 index, const u8 *key,
+ 		       u16 keylen, int perm, int lock)
+ {
+ 	static const unsigned char macaddr[ETH_ALEN] = { 0x01, 0, 0, 0, 0, 0 };
+@@ -5287,7 +5287,7 @@ static void proc_wepkey_on_close(struct inode *inode, struct file *file)
+ 	struct net_device *dev = PDE_DATA(inode);
+ 	struct airo_info *ai = dev->ml_priv;
+ 	int i, rc;
+-	char key[16];
++	u8 key[16];
+ 	u16 index = 0;
+ 	int j = 0;
+ 
+@@ -5315,12 +5315,22 @@ static void proc_wepkey_on_close(struct inode *inode, struct file *file)
+ 	}
+ 
+ 	for (i = 0; i < 16*3 && data->wbuffer[i+j]; i++) {
++		int val;
++
++		if (i % 3 == 2)
++			continue;
++
++		val = hex_to_bin(data->wbuffer[i+j]);
++		if (val < 0) {
++			airo_print_err(ai->dev->name, "WebKey passed invalid key hex");
++			return;
++		}
+ 		switch(i%3) {
+ 		case 0:
+-			key[i/3] = hex_to_bin(data->wbuffer[i+j])<<4;
++			key[i/3] = (u8)val << 4;
+ 			break;
+ 		case 1:
+-			key[i/3] |= hex_to_bin(data->wbuffer[i+j]);
++			key[i/3] |= (u8)val;
+ 			break;
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index a6d4ff4760ad1..255286b2324e2 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -775,6 +775,7 @@ static void hwsim_send_nullfunc(struct mac80211_hwsim_data *data, u8 *mac,
+ 	struct hwsim_vif_priv *vp = (void *)vif->drv_priv;
+ 	struct sk_buff *skb;
+ 	struct ieee80211_hdr *hdr;
++	struct ieee80211_tx_info *cb;
+ 
+ 	if (!vp->assoc)
+ 		return;
+@@ -796,6 +797,10 @@ static void hwsim_send_nullfunc(struct mac80211_hwsim_data *data, u8 *mac,
+ 	memcpy(hdr->addr2, mac, ETH_ALEN);
+ 	memcpy(hdr->addr3, vp->bssid, ETH_ALEN);
+ 
++	cb = IEEE80211_SKB_CB(skb);
++	cb->control.rates[0].count = 1;
++	cb->control.rates[1].idx = -1;
++
+ 	rcu_read_lock();
+ 	mac80211_hwsim_tx_frame(data->hw, skb,
+ 				rcu_dereference(vif->chanctx_conf)->def.chan);
+diff --git a/drivers/net/wireless/microchip/wilc1000/cfg80211.c b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+index 6be5ac8ba518d..dd26f20861807 100644
+--- a/drivers/net/wireless/microchip/wilc1000/cfg80211.c
++++ b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+@@ -939,30 +939,52 @@ static inline void wilc_wfi_cfg_parse_ch_attr(u8 *buf, u32 len, u8 sta_ch)
+ 		return;
+ 
+ 	while (index + sizeof(*e) <= len) {
++		u16 attr_size;
++
+ 		e = (struct wilc_attr_entry *)&buf[index];
+-		if (e->attr_type == IEEE80211_P2P_ATTR_CHANNEL_LIST)
++		attr_size = le16_to_cpu(e->attr_len);
++
++		if (index + sizeof(*e) + attr_size > len)
++			return;
++
++		if (e->attr_type == IEEE80211_P2P_ATTR_CHANNEL_LIST &&
++		    attr_size >= (sizeof(struct wilc_attr_ch_list) - sizeof(*e)))
+ 			ch_list_idx = index;
+-		else if (e->attr_type == IEEE80211_P2P_ATTR_OPER_CHANNEL)
++		else if (e->attr_type == IEEE80211_P2P_ATTR_OPER_CHANNEL &&
++			 attr_size == (sizeof(struct wilc_attr_oper_ch) - sizeof(*e)))
+ 			op_ch_idx = index;
++
+ 		if (ch_list_idx && op_ch_idx)
+ 			break;
+-		index += le16_to_cpu(e->attr_len) + sizeof(*e);
++
++		index += sizeof(*e) + attr_size;
+ 	}
+ 
+ 	if (ch_list_idx) {
+-		u16 attr_size;
+-		struct wilc_ch_list_elem *e;
+-		int i;
++		unsigned int i;
++		u16 elem_size;
+ 
+ 		ch_list = (struct wilc_attr_ch_list *)&buf[ch_list_idx];
+-		attr_size = le16_to_cpu(ch_list->attr_len);
+-		for (i = 0; i < attr_size;) {
++		/* the number of bytes following the final 'elem' member */
++		elem_size = le16_to_cpu(ch_list->attr_len) -
++			(sizeof(*ch_list) - sizeof(struct wilc_attr_entry));
++		for (i = 0; i < elem_size;) {
++			struct wilc_ch_list_elem *e;
++
+ 			e = (struct wilc_ch_list_elem *)(ch_list->elem + i);
++
++			i += sizeof(*e);
++			if (i > elem_size)
++				break;
++
++			i += e->no_of_channels;
++			if (i > elem_size)
++				break;
++
+ 			if (e->op_class == WILC_WLAN_OPERATING_CLASS_2_4GHZ) {
+ 				memset(e->ch_list, sta_ch, e->no_of_channels);
+ 				break;
+ 			}
+-			i += e->no_of_channels;
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/wireless/microchip/wilc1000/hif.c b/drivers/net/wireless/microchip/wilc1000/hif.c
+index d025a30930157..b25847799138b 100644
+--- a/drivers/net/wireless/microchip/wilc1000/hif.c
++++ b/drivers/net/wireless/microchip/wilc1000/hif.c
+@@ -467,14 +467,25 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 
+ 	rsn_ie = cfg80211_find_ie(WLAN_EID_RSN, ies->data, ies->len);
+ 	if (rsn_ie) {
++		int rsn_ie_len = sizeof(struct element) + rsn_ie[1];
+ 		int offset = 8;
+ 
+-		param->mode_802_11i = 2;
+-		param->rsn_found = true;
+ 		/* extract RSN capabilities */
+-		offset += (rsn_ie[offset] * 4) + 2;
+-		offset += (rsn_ie[offset] * 4) + 2;
+-		memcpy(param->rsn_cap, &rsn_ie[offset], 2);
++		if (offset < rsn_ie_len) {
++			/* skip over pairwise suites */
++			offset += (rsn_ie[offset] * 4) + 2;
++
++			if (offset < rsn_ie_len) {
++				/* skip over authentication suites */
++				offset += (rsn_ie[offset] * 4) + 2;
++
++				if (offset + 1 < rsn_ie_len) {
++					param->mode_802_11i = 2;
++					param->rsn_found = true;
++					memcpy(param->rsn_cap, &rsn_ie[offset], 2);
++				}
++			}
++		}
+ 	}
+ 
+ 	if (param->rsn_found) {
+diff --git a/drivers/nfc/st-nci/se.c b/drivers/nfc/st-nci/se.c
+index 807eae04c1e34..37d397aae9b9d 100644
+--- a/drivers/nfc/st-nci/se.c
++++ b/drivers/nfc/st-nci/se.c
+@@ -327,7 +327,7 @@ static int st_nci_hci_connectivity_event_received(struct nci_dev *ndev,
+ 		 * AID          81      5 to 16
+ 		 * PARAMETERS   82      0 to 255
+ 		 */
+-		if (skb->len < NFC_MIN_AID_LENGTH + 2 &&
++		if (skb->len < NFC_MIN_AID_LENGTH + 2 ||
+ 		    skb->data[0] != NFC_EVT_TRANSACTION_AID_TAG)
+ 			return -EPROTO;
+ 
+@@ -340,8 +340,10 @@ static int st_nci_hci_connectivity_event_received(struct nci_dev *ndev,
+ 
+ 		/* Check next byte is PARAMETERS tag (82) */
+ 		if (skb->data[transaction->aid_len + 2] !=
+-		    NFC_EVT_TRANSACTION_PARAMS_TAG)
++		    NFC_EVT_TRANSACTION_PARAMS_TAG) {
++			devm_kfree(dev, transaction);
+ 			return -EPROTO;
++		}
+ 
+ 		transaction->params_len = skb->data[transaction->aid_len + 3];
+ 		memcpy(transaction->params, skb->data +
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 65f4bf8806087..089f391035848 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3252,6 +3252,10 @@ static const struct pci_device_id nvme_id_table[] = {
+ 	{ PCI_DEVICE(0x1cc1, 0x8201),   /* ADATA SX8200PNP 512GB */
+ 		.driver_data = NVME_QUIRK_NO_DEEPEST_PS |
+ 				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
++	 { PCI_DEVICE(0x1344, 0x5407), /* Micron Technology Inc NVMe SSD */
++		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN },
++	 { PCI_DEVICE(0x1344, 0x6001),   /* Micron Nitro NVMe */
++		 .driver_data = NVME_QUIRK_BOGUS_NID, },
+ 	{ PCI_DEVICE(0x1c5c, 0x1504),   /* SK Hynix PC400 */
+ 		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ 	{ PCI_DEVICE(0x15b7, 0x2001),   /*  Sandisk Skyhawk */
+diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c
+index 8e696262215fc..ebec49957ed09 100644
+--- a/drivers/platform/x86/acer-wmi.c
++++ b/drivers/platform/x86/acer-wmi.c
+@@ -536,6 +536,15 @@ static const struct dmi_system_id acer_quirks[] __initconst = {
+ 		},
+ 		.driver_data = (void *)ACER_CAP_KBD_DOCK,
+ 	},
++	{
++		.callback = set_force_caps,
++		.ident = "Acer Aspire Switch V 10 SW5-017",
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SW5-017"),
++		},
++		.driver_data = (void *)ACER_CAP_KBD_DOCK,
++	},
+ 	{
+ 		.callback = set_force_caps,
+ 		.ident = "Acer One 10 (S1003)",
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index 39e1a6396e08d..db369cf261119 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -1212,6 +1212,8 @@ static void asus_wmi_set_xusb2pr(struct asus_wmi *asus)
+ 	pci_write_config_dword(xhci_pdev, USB_INTEL_XUSB2PR,
+ 				cpu_to_le32(ports_available));
+ 
++	pci_dev_put(xhci_pdev);
++
+ 	pr_info("set USB_INTEL_XUSB2PR old: 0x%04x, new: 0x%04x\n",
+ 			orig_ports_available, ports_available);
+ }
+diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
+index 519b2ab84a63f..6642d09b17b55 100644
+--- a/drivers/platform/x86/hp-wmi.c
++++ b/drivers/platform/x86/hp-wmi.c
+@@ -63,6 +63,7 @@ enum hp_wmi_event_ids {
+ 	HPWMI_PEAKSHIFT_PERIOD		= 0x0F,
+ 	HPWMI_BATTERY_CHARGE_PERIOD	= 0x10,
+ 	HPWMI_SANITIZATION_MODE		= 0x17,
++	HPWMI_SMART_EXPERIENCE_APP	= 0x21,
+ };
+ 
+ struct bios_args {
+@@ -632,6 +633,8 @@ static void hp_wmi_notify(u32 value, void *context)
+ 		break;
+ 	case HPWMI_SANITIZATION_MODE:
+ 		break;
++	case HPWMI_SMART_EXPERIENCE_APP:
++		break;
+ 	default:
+ 		pr_info("Unknown event_id - %d - 0x%x\n", event_id, event_data);
+ 		break;
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index ab6a9369649db..110ff1e6ef81f 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -756,6 +756,22 @@ static const struct ts_dmi_data predia_basic_data = {
+ 	.properties	= predia_basic_props,
+ };
+ 
++static const struct property_entry rca_cambio_w101_v2_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 4),
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 20),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1644),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 874),
++	PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-rca-cambio-w101-v2.fw"),
++	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
++	{ }
++};
++
++static const struct ts_dmi_data rca_cambio_w101_v2_data = {
++	.acpi_name = "MSSL1680:00",
++	.properties = rca_cambio_w101_v2_props,
++};
++
+ static const struct property_entry rwc_nanote_p8_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-min-y", 46),
+ 	PROPERTY_ENTRY_U32("touchscreen-size-x", 1728),
+@@ -1341,6 +1357,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_EXACT_MATCH(DMI_BOARD_NAME, "0E57"),
+ 		},
+ 	},
++	{
++		/* RCA Cambio W101 v2 */
++		/* https://github.com/onitake/gsl-firmware/discussions/193 */
++		.driver_data = (void *)&rca_cambio_w101_v2_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "RCA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "W101SA23T1"),
++		},
++	},
+ 	{
+ 		/* RWC NANOTE P8 */
+ 		.driver_data = (void *)&rwc_nanote_p8_data,
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index bf8ba73d6c7c7..eb083b26ab4f6 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -4928,6 +4928,7 @@ static void regulator_dev_release(struct device *dev)
+ {
+ 	struct regulator_dev *rdev = dev_get_drvdata(dev);
+ 
++	debugfs_remove_recursive(rdev->debugfs);
+ 	kfree(rdev->constraints);
+ 	of_node_put(rdev->dev.of_node);
+ 	kfree(rdev);
+@@ -5401,11 +5402,15 @@ wash:
+ 	mutex_lock(&regulator_list_mutex);
+ 	regulator_ena_gpio_free(rdev);
+ 	mutex_unlock(&regulator_list_mutex);
++	put_device(&rdev->dev);
++	rdev = NULL;
+ clean:
+ 	if (dangling_of_gpiod)
+ 		gpiod_put(config->ena_gpiod);
++	if (rdev && rdev->dev.of_node)
++		of_node_put(rdev->dev.of_node);
++	kfree(rdev);
+ 	kfree(config);
+-	put_device(&rdev->dev);
+ rinse:
+ 	if (dangling_cfg_gpiod)
+ 		gpiod_put(cfg->ena_gpiod);
+@@ -5434,7 +5439,6 @@ void regulator_unregister(struct regulator_dev *rdev)
+ 
+ 	mutex_lock(&regulator_list_mutex);
+ 
+-	debugfs_remove_recursive(rdev->debugfs);
+ 	WARN_ON(rdev->open_count);
+ 	regulator_remove_coupling(rdev);
+ 	unset_regulator_supplies(rdev);
+diff --git a/drivers/regulator/twl6030-regulator.c b/drivers/regulator/twl6030-regulator.c
+index 430265c404d65..7c7e3648ea4bf 100644
+--- a/drivers/regulator/twl6030-regulator.c
++++ b/drivers/regulator/twl6030-regulator.c
+@@ -530,6 +530,7 @@ static const struct twlreg_info TWL6030_INFO_##label = { \
+ #define TWL6032_ADJUSTABLE_LDO(label, offset) \
+ static const struct twlreg_info TWL6032_INFO_##label = { \
+ 	.base = offset, \
++	.features = TWL6032_SUBCLASS, \
+ 	.desc = { \
+ 		.name = #label, \
+ 		.id = TWL6032_REG_##label, \
+@@ -562,6 +563,7 @@ static const struct twlreg_info TWLFIXED_INFO_##label = { \
+ #define TWL6032_ADJUSTABLE_SMPS(label, offset) \
+ static const struct twlreg_info TWLSMPS_INFO_##label = { \
+ 	.base = offset, \
++	.features = TWL6032_SUBCLASS, \
+ 	.desc = { \
+ 		.name = #label, \
+ 		.id = TWL6032_REG_##label, \
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index 7749deb614d75..53d22975a32fd 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -4627,7 +4627,6 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_raw(struct dasd_device *startdev,
+ 	struct dasd_device *basedev;
+ 	struct req_iterator iter;
+ 	struct dasd_ccw_req *cqr;
+-	unsigned int first_offs;
+ 	unsigned int trkcount;
+ 	unsigned long *idaws;
+ 	unsigned int size;
+@@ -4661,7 +4660,6 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_raw(struct dasd_device *startdev,
+ 	last_trk = (blk_rq_pos(req) + blk_rq_sectors(req) - 1) /
+ 		DASD_RAW_SECTORS_PER_TRACK;
+ 	trkcount = last_trk - first_trk + 1;
+-	first_offs = 0;
+ 
+ 	if (rq_data_dir(req) == READ)
+ 		cmd = DASD_ECKD_CCW_READ_TRACK;
+@@ -4705,13 +4703,13 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_raw(struct dasd_device *startdev,
+ 
+ 	if (use_prefix) {
+ 		prefix_LRE(ccw++, data, first_trk, last_trk, cmd, basedev,
+-			   startdev, 1, first_offs + 1, trkcount, 0, 0);
++			   startdev, 1, 0, trkcount, 0, 0);
+ 	} else {
+ 		define_extent(ccw++, data, first_trk, last_trk, cmd, basedev, 0);
+ 		ccw[-1].flags |= CCW_FLAG_CC;
+ 
+ 		data += sizeof(struct DE_eckd_data);
+-		locate_record_ext(ccw++, data, first_trk, first_offs + 1,
++		locate_record_ext(ccw++, data, first_trk, 0,
+ 				  trkcount, cmd, basedev, 0, 0);
+ 	}
+ 
+diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
+index f6d6539c657f0..b793e342ab7c6 100644
+--- a/drivers/scsi/ibmvscsi/ibmvfc.c
++++ b/drivers/scsi/ibmvscsi/ibmvfc.c
+@@ -635,8 +635,13 @@ static void ibmvfc_init_host(struct ibmvfc_host *vhost)
+ 		memset(vhost->async_crq.msgs, 0, PAGE_SIZE);
+ 		vhost->async_crq.cur = 0;
+ 
+-		list_for_each_entry(tgt, &vhost->targets, queue)
+-			ibmvfc_del_tgt(tgt);
++		list_for_each_entry(tgt, &vhost->targets, queue) {
++			if (vhost->client_migrated)
++				tgt->need_login = 1;
++			else
++				ibmvfc_del_tgt(tgt);
++		}
++
+ 		scsi_block_requests(vhost->host);
+ 		ibmvfc_set_host_action(vhost, IBMVFC_HOST_ACTION_INIT);
+ 		vhost->job_step = ibmvfc_npiv_login;
+@@ -2822,9 +2827,12 @@ static void ibmvfc_handle_crq(struct ibmvfc_crq *crq, struct ibmvfc_host *vhost)
+ 			/* We need to re-setup the interpartition connection */
+ 			dev_info(vhost->dev, "Partition migrated, Re-enabling adapter\n");
+ 			vhost->client_migrated = 1;
++
++			scsi_block_requests(vhost->host);
+ 			ibmvfc_purge_requests(vhost, DID_REQUEUE);
+-			ibmvfc_link_down(vhost, IBMVFC_LINK_DOWN);
++			ibmvfc_set_host_state(vhost, IBMVFC_LINK_DOWN);
+ 			ibmvfc_set_host_action(vhost, IBMVFC_HOST_ACTION_REENABLE);
++			wake_up(&vhost->work_wait_q);
+ 		} else if (crq->format == IBMVFC_PARTNER_FAILED || crq->format == IBMVFC_PARTNER_DEREGISTER) {
+ 			dev_err(vhost->dev, "Host partner adapter deregistered or failed (rc=%d)\n", crq->format);
+ 			ibmvfc_purge_requests(vhost, DID_ERROR);
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 261b915835b40..cc20621bb49da 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -1878,6 +1878,13 @@ static int resp_readcap16(struct scsi_cmnd *scp,
+ 			arr[14] |= 0x40;
+ 	}
+ 
++	/*
++	 * Since the scsi_debug READ CAPACITY implementation always reports the
++	 * total disk capacity, set RC BASIS = 1 for host-managed ZBC devices.
++	 */
++	if (devip->zmodel == BLK_ZONED_HM)
++		arr[12] |= 1 << 4;
++
+ 	arr[15] = sdebug_lowest_aligned & 0xff;
+ 
+ 	if (have_dif_prot) {
+diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
+index 4a96fb05731d2..c6256fdc24b10 100644
+--- a/drivers/scsi/scsi_transport_sas.c
++++ b/drivers/scsi/scsi_transport_sas.c
+@@ -716,12 +716,17 @@ int sas_phy_add(struct sas_phy *phy)
+ 	int error;
+ 
+ 	error = device_add(&phy->dev);
+-	if (!error) {
+-		transport_add_device(&phy->dev);
+-		transport_configure_device(&phy->dev);
++	if (error)
++		return error;
++
++	error = transport_add_device(&phy->dev);
++	if (error) {
++		device_del(&phy->dev);
++		return error;
+ 	}
++	transport_configure_device(&phy->dev);
+ 
+-	return error;
++	return 0;
+ }
+ EXPORT_SYMBOL(sas_phy_add);
+ 
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 7ac1090d4379c..3fa8a0c94bdc1 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -356,16 +356,21 @@ enum storvsc_request_type {
+ };
+ 
+ /*
+- * SRB status codes and masks; a subset of the codes used here.
++ * SRB status codes and masks. In the 8-bit field, the two high order bits
++ * are flags, while the remaining 6 bits are an integer status code.  The
++ * definitions here include only the subset of the integer status codes that
++ * are tested for in this driver.
+  */
+-
+ #define SRB_STATUS_AUTOSENSE_VALID	0x80
+ #define SRB_STATUS_QUEUE_FROZEN		0x40
+-#define SRB_STATUS_INVALID_LUN	0x20
+-#define SRB_STATUS_SUCCESS	0x01
+-#define SRB_STATUS_ABORTED	0x02
+-#define SRB_STATUS_ERROR	0x04
+-#define SRB_STATUS_DATA_OVERRUN	0x12
++
++/* SRB status integer codes */
++#define SRB_STATUS_SUCCESS		0x01
++#define SRB_STATUS_ABORTED		0x02
++#define SRB_STATUS_ERROR		0x04
++#define SRB_STATUS_INVALID_REQUEST	0x06
++#define SRB_STATUS_DATA_OVERRUN		0x12
++#define SRB_STATUS_INVALID_LUN		0x20
+ 
+ #define SRB_STATUS(status) \
+ 	(status & ~(SRB_STATUS_AUTOSENSE_VALID | SRB_STATUS_QUEUE_FROZEN))
+@@ -995,38 +1000,25 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb,
+ 	void (*process_err_fn)(struct work_struct *work);
+ 	struct hv_host_device *host_dev = shost_priv(host);
+ 
+-	/*
+-	 * In some situations, Hyper-V sets multiple bits in the
+-	 * srb_status, such as ABORTED and ERROR. So process them
+-	 * individually, with the most specific bits first.
+-	 */
+-
+-	if (vm_srb->srb_status & SRB_STATUS_INVALID_LUN) {
+-		set_host_byte(scmnd, DID_NO_CONNECT);
+-		process_err_fn = storvsc_remove_lun;
+-		goto do_work;
+-	}
++	switch (SRB_STATUS(vm_srb->srb_status)) {
++	case SRB_STATUS_ERROR:
++	case SRB_STATUS_ABORTED:
++	case SRB_STATUS_INVALID_REQUEST:
++		if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID) {
++			/* Check for capacity change */
++			if ((asc == 0x2a) && (ascq == 0x9)) {
++				process_err_fn = storvsc_device_scan;
++				/* Retry the I/O that triggered this. */
++				set_host_byte(scmnd, DID_REQUEUE);
++				goto do_work;
++			}
+ 
+-	if (vm_srb->srb_status & SRB_STATUS_ABORTED) {
+-		if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID &&
+-		    /* Capacity data has changed */
+-		    (asc == 0x2a) && (ascq == 0x9)) {
+-			process_err_fn = storvsc_device_scan;
+ 			/*
+-			 * Retry the I/O that triggered this.
++			 * Otherwise, let upper layer deal with the
++			 * error when sense message is present
+ 			 */
+-			set_host_byte(scmnd, DID_REQUEUE);
+-			goto do_work;
+-		}
+-	}
+-
+-	if (vm_srb->srb_status & SRB_STATUS_ERROR) {
+-		/*
+-		 * Let upper layer deal with error when
+-		 * sense message is present.
+-		 */
+-		if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID)
+ 			return;
++		}
+ 
+ 		/*
+ 		 * If there is an error; offline the device since all
+@@ -1049,6 +1041,13 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb,
+ 		default:
+ 			set_host_byte(scmnd, DID_ERROR);
+ 		}
++		return;
++
++	case SRB_STATUS_INVALID_LUN:
++		set_host_byte(scmnd, DID_NO_CONNECT);
++		process_err_fn = storvsc_remove_lun;
++		goto do_work;
++
+ 	}
+ 	return;
+ 
+diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
+index a09831c62192a..32ac8f9068e87 100644
+--- a/drivers/spi/spi-dw-dma.c
++++ b/drivers/spi/spi-dw-dma.c
+@@ -127,12 +127,15 @@ static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
+ 
+ 	dw_spi_dma_sg_burst_init(dws);
+ 
++	pci_dev_put(dma_dev);
++
+ 	return 0;
+ 
+ free_rxchan:
+ 	dma_release_channel(dws->rxchan);
+ 	dws->rxchan = NULL;
+ err_exit:
++	pci_dev_put(dma_dev);
+ 	return -EBUSY;
+ }
+ 
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 651a6510fb544..9ec37cf10c010 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -443,7 +443,7 @@ static int stm32_spi_prepare_mbr(struct stm32_spi *spi, u32 speed_hz,
+ 	u32 div, mbrdiv;
+ 
+ 	/* Ensure spi->clk_rate is even */
+-	div = DIV_ROUND_UP(spi->clk_rate & ~0x1, speed_hz);
++	div = DIV_ROUND_CLOSEST(spi->clk_rate & ~0x1, speed_hz);
+ 
+ 	/*
+ 	 * SPI framework set xfer->speed_hz to master->max_speed_hz if
+diff --git a/drivers/tee/optee/device.c b/drivers/tee/optee/device.c
+index 031806468af48..60ffc54da0033 100644
+--- a/drivers/tee/optee/device.c
++++ b/drivers/tee/optee/device.c
+@@ -80,7 +80,7 @@ static int optee_register_device(const uuid_t *device_uuid)
+ 	rc = device_register(&optee_device->dev);
+ 	if (rc) {
+ 		pr_err("device registration failed, err: %d\n", rc);
+-		kfree(optee_device);
++		put_device(&optee_device->dev);
+ 	}
+ 
+ 	return rc;
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 3f7379f16a36e..483fff3a95c9e 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -293,6 +293,7 @@ static void omap8250_restore_regs(struct uart_8250_port *up)
+ {
+ 	struct omap8250_priv *priv = up->port.private_data;
+ 	struct uart_8250_dma	*dma = up->dma;
++	u8 mcr = serial8250_in_MCR(up);
+ 
+ 	if (dma && dma->tx_running) {
+ 		/*
+@@ -309,7 +310,7 @@ static void omap8250_restore_regs(struct uart_8250_port *up)
+ 	serial_out(up, UART_EFR, UART_EFR_ECB);
+ 
+ 	serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
+-	serial8250_out_MCR(up, UART_MCR_TCRTLR);
++	serial8250_out_MCR(up, mcr | UART_MCR_TCRTLR);
+ 	serial_out(up, UART_FCR, up->fcr);
+ 
+ 	omap8250_update_scr(up, priv);
+@@ -325,7 +326,8 @@ static void omap8250_restore_regs(struct uart_8250_port *up)
+ 	serial_out(up, UART_LCR, 0);
+ 
+ 	/* drop TCR + TLR access, we setup XON/XOFF later */
+-	serial8250_out_MCR(up, up->mcr);
++	serial8250_out_MCR(up, mcr);
++
+ 	serial_out(up, UART_IER, up->ier);
+ 
+ 	serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+@@ -684,7 +686,6 @@ static int omap_8250_startup(struct uart_port *port)
+ 
+ 	pm_runtime_get_sync(port->dev);
+ 
+-	up->mcr = 0;
+ 	serial_out(up, UART_FCR, UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
+ 
+ 	serial_out(up, UART_LCR, UART_LCR_WLEN8);
+diff --git a/drivers/usb/cdns3/core.c b/drivers/usb/cdns3/core.c
+index 6eeb7ed8e91f3..8fe7420de033d 100644
+--- a/drivers/usb/cdns3/core.c
++++ b/drivers/usb/cdns3/core.c
+@@ -97,13 +97,23 @@ static int cdns3_core_init_role(struct cdns3 *cdns)
+ 	 * can be restricted later depending on strap pin configuration.
+ 	 */
+ 	if (dr_mode == USB_DR_MODE_UNKNOWN) {
+-		if (IS_ENABLED(CONFIG_USB_CDNS3_HOST) &&
+-		    IS_ENABLED(CONFIG_USB_CDNS3_GADGET))
+-			dr_mode = USB_DR_MODE_OTG;
+-		else if (IS_ENABLED(CONFIG_USB_CDNS3_HOST))
+-			dr_mode = USB_DR_MODE_HOST;
+-		else if (IS_ENABLED(CONFIG_USB_CDNS3_GADGET))
+-			dr_mode = USB_DR_MODE_PERIPHERAL;
++		if (cdns->version == CDNSP_CONTROLLER_V2) {
++			if (IS_ENABLED(CONFIG_USB_CDNSP_HOST) &&
++			    IS_ENABLED(CONFIG_USB_CDNSP_GADGET))
++				dr_mode = USB_DR_MODE_OTG;
++			else if (IS_ENABLED(CONFIG_USB_CDNSP_HOST))
++				dr_mode = USB_DR_MODE_HOST;
++			else if (IS_ENABLED(CONFIG_USB_CDNSP_GADGET))
++				dr_mode = USB_DR_MODE_PERIPHERAL;
++		} else {
++			if (IS_ENABLED(CONFIG_USB_CDNS3_HOST) &&
++			    IS_ENABLED(CONFIG_USB_CDNS3_GADGET))
++				dr_mode = USB_DR_MODE_OTG;
++			else if (IS_ENABLED(CONFIG_USB_CDNS3_HOST))
++				dr_mode = USB_DR_MODE_HOST;
++			else if (IS_ENABLED(CONFIG_USB_CDNS3_GADGET))
++				dr_mode = USB_DR_MODE_PERIPHERAL;
++		}
+ 	}
+ 
+ 	/*
+diff --git a/drivers/usb/cdns3/core.h b/drivers/usb/cdns3/core.h
+index 3176f924293a1..0d87871499eaa 100644
+--- a/drivers/usb/cdns3/core.h
++++ b/drivers/usb/cdns3/core.h
+@@ -55,7 +55,9 @@ struct cdns3_platform_data {
+  * @otg_res: the resource for otg
+  * @otg_v0_regs: pointer to base of v0 otg registers
+  * @otg_v1_regs: pointer to base of v1 otg registers
++ * @otg_cdnsp_regs: pointer to base of CDNSP otg registers
+  * @otg_regs: pointer to base of otg registers
++ * @otg_irq_regs: pointer to interrupt registers
+  * @otg_irq: irq number for otg controller
+  * @dev_irq: irq number for device controller
+  * @wakeup_irq: irq number for wakeup event, it is optional
+@@ -86,9 +88,12 @@ struct cdns3 {
+ 	struct resource			otg_res;
+ 	struct cdns3_otg_legacy_regs	*otg_v0_regs;
+ 	struct cdns3_otg_regs		*otg_v1_regs;
++	struct cdnsp_otg_regs		*otg_cdnsp_regs;
+ 	struct cdns3_otg_common_regs	*otg_regs;
++	struct cdns3_otg_irq_regs	*otg_irq_regs;
+ #define CDNS3_CONTROLLER_V0	0
+ #define CDNS3_CONTROLLER_V1	1
++#define CDNSP_CONTROLLER_V2	2
+ 	u32				version;
+ 	bool				phyrst_a_enable;
+ 
+diff --git a/drivers/usb/cdns3/drd.c b/drivers/usb/cdns3/drd.c
+index 38ccd29e4cdef..95863d44e3e09 100644
+--- a/drivers/usb/cdns3/drd.c
++++ b/drivers/usb/cdns3/drd.c
+@@ -2,13 +2,12 @@
+ /*
+  * Cadence USBSS DRD Driver.
+  *
+- * Copyright (C) 2018-2019 Cadence.
++ * Copyright (C) 2018-2020 Cadence.
+  * Copyright (C) 2019 Texas Instruments
+  *
+  * Author: Pawel Laszczak <pawell@cadence.com>
+  *         Roger Quadros <rogerq@ti.com>
+  *
+- *
+  */
+ #include <linux/kernel.h>
+ #include <linux/interrupt.h>
+@@ -28,8 +27,9 @@
+  *
+  * Returns 0 on success otherwise negative errno
+  */
+-int cdns3_set_mode(struct cdns3 *cdns, enum usb_dr_mode mode)
++static int cdns3_set_mode(struct cdns3 *cdns, enum usb_dr_mode mode)
+ {
++	u32 __iomem *override_reg;
+ 	u32 reg;
+ 
+ 	switch (mode) {
+@@ -39,11 +39,24 @@ int cdns3_set_mode(struct cdns3 *cdns, enum usb_dr_mode mode)
+ 		break;
+ 	case USB_DR_MODE_OTG:
+ 		dev_dbg(cdns->dev, "Set controller to OTG mode\n");
+-		if (cdns->version == CDNS3_CONTROLLER_V1) {
+-			reg = readl(&cdns->otg_v1_regs->override);
++
++		if (cdns->version == CDNSP_CONTROLLER_V2)
++			override_reg = &cdns->otg_cdnsp_regs->override;
++		else if (cdns->version == CDNS3_CONTROLLER_V1)
++			override_reg = &cdns->otg_v1_regs->override;
++		else
++			override_reg = &cdns->otg_v0_regs->ctrl1;
++
++		reg = readl(override_reg);
++
++		if (cdns->version != CDNS3_CONTROLLER_V0)
+ 			reg |= OVERRIDE_IDPULLUP;
+-			writel(reg, &cdns->otg_v1_regs->override);
++		else
++			reg |= OVERRIDE_IDPULLUP_V0;
+ 
++		writel(reg, override_reg);
++
++		if (cdns->version == CDNS3_CONTROLLER_V1) {
+ 			/*
+ 			 * Enable work around feature built into the
+ 			 * controller to address issue with RX Sensitivity
+@@ -55,10 +68,6 @@ int cdns3_set_mode(struct cdns3 *cdns, enum usb_dr_mode mode)
+ 				reg |= PHYRST_CFG_PHYRST_A_ENABLE;
+ 				writel(reg, &cdns->otg_v1_regs->phyrst_cfg);
+ 			}
+-		} else {
+-			reg = readl(&cdns->otg_v0_regs->ctrl1);
+-			reg |= OVERRIDE_IDPULLUP_V0;
+-			writel(reg, &cdns->otg_v0_regs->ctrl1);
+ 		}
+ 
+ 		/*
+@@ -123,7 +132,7 @@ bool cdns3_is_device(struct cdns3 *cdns)
+  */
+ static void cdns3_otg_disable_irq(struct cdns3 *cdns)
+ {
+-	writel(0, &cdns->otg_regs->ien);
++	writel(0, &cdns->otg_irq_regs->ien);
+ }
+ 
+ /**
+@@ -133,7 +142,7 @@ static void cdns3_otg_disable_irq(struct cdns3 *cdns)
+ static void cdns3_otg_enable_irq(struct cdns3 *cdns)
+ {
+ 	writel(OTGIEN_ID_CHANGE_INT | OTGIEN_VBUSVALID_RISE_INT |
+-	       OTGIEN_VBUSVALID_FALL_INT, &cdns->otg_regs->ien);
++	       OTGIEN_VBUSVALID_FALL_INT, &cdns->otg_irq_regs->ien);
+ }
+ 
+ /**
+@@ -144,16 +153,21 @@ static void cdns3_otg_enable_irq(struct cdns3 *cdns)
+  */
+ int cdns3_drd_host_on(struct cdns3 *cdns)
+ {
+-	u32 val;
++	u32 val, ready_bit;
+ 	int ret;
+ 
+ 	/* Enable host mode. */
+ 	writel(OTGCMD_HOST_BUS_REQ | OTGCMD_OTG_DIS,
+ 	       &cdns->otg_regs->cmd);
+ 
++	if (cdns->version == CDNSP_CONTROLLER_V2)
++		ready_bit = OTGSTS_CDNSP_XHCI_READY;
++	else
++		ready_bit = OTGSTS_CDNS3_XHCI_READY;
++
+ 	dev_dbg(cdns->dev, "Waiting till Host mode is turned on\n");
+ 	ret = readl_poll_timeout_atomic(&cdns->otg_regs->sts, val,
+-					val & OTGSTS_XHCI_READY, 1, 100000);
++					val & ready_bit, 1, 100000);
+ 
+ 	if (ret)
+ 		dev_err(cdns->dev, "timeout waiting for xhci_ready\n");
+@@ -189,17 +203,22 @@ void cdns3_drd_host_off(struct cdns3 *cdns)
+  */
+ int cdns3_drd_gadget_on(struct cdns3 *cdns)
+ {
+-	int ret, val;
+ 	u32 reg = OTGCMD_OTG_DIS;
++	u32 ready_bit;
++	int ret, val;
+ 
+ 	/* switch OTG core */
+ 	writel(OTGCMD_DEV_BUS_REQ | reg, &cdns->otg_regs->cmd);
+ 
+ 	dev_dbg(cdns->dev, "Waiting till Device mode is turned on\n");
+ 
++	if (cdns->version == CDNSP_CONTROLLER_V2)
++		ready_bit = OTGSTS_CDNSP_DEV_READY;
++	else
++		ready_bit = OTGSTS_CDNS3_DEV_READY;
++
+ 	ret = readl_poll_timeout_atomic(&cdns->otg_regs->sts, val,
+-					val & OTGSTS_DEV_READY,
+-					1, 100000);
++					val & ready_bit, 1, 100000);
+ 	if (ret) {
+ 		dev_err(cdns->dev, "timeout waiting for dev_ready\n");
+ 		return ret;
+@@ -244,7 +263,7 @@ static int cdns3_init_otg_mode(struct cdns3 *cdns)
+ 
+ 	cdns3_otg_disable_irq(cdns);
+ 	/* clear all interrupts */
+-	writel(~0, &cdns->otg_regs->ivect);
++	writel(~0, &cdns->otg_irq_regs->ivect);
+ 
+ 	ret = cdns3_set_mode(cdns, USB_DR_MODE_OTG);
+ 	if (ret)
+@@ -313,7 +332,7 @@ static irqreturn_t cdns3_drd_irq(int irq, void *data)
+ 	if (cdns->in_lpm)
+ 		return ret;
+ 
+-	reg = readl(&cdns->otg_regs->ivect);
++	reg = readl(&cdns->otg_irq_regs->ivect);
+ 
+ 	if (!reg)
+ 		return IRQ_NONE;
+@@ -332,7 +351,7 @@ static irqreturn_t cdns3_drd_irq(int irq, void *data)
+ 		ret = IRQ_WAKE_THREAD;
+ 	}
+ 
+-	writel(~0, &cdns->otg_regs->ivect);
++	writel(~0, &cdns->otg_irq_regs->ivect);
+ 	return ret;
+ }
+ 
+@@ -347,28 +366,43 @@ int cdns3_drd_init(struct cdns3 *cdns)
+ 		return PTR_ERR(regs);
+ 
+ 	/* Detection of DRD version. Controller has been released
+-	 * in two versions. Both are similar, but they have same changes
+-	 * in register maps.
+-	 * The first register in old version is command register and it's read
+-	 * only, so driver should read 0 from it. On the other hand, in v1
+-	 * the first register contains device ID number which is not set to 0.
+-	 * Driver uses this fact to detect the proper version of
++	 * in three versions. All are very similar and are software compatible,
++	 * but they have same changes in register maps.
++	 * The first register in oldest version is command register and it's
++	 * read only. Driver should read 0 from it. On the other hand, in v1
++	 * and v2 the first register contains device ID number which is not
++	 * set to 0. Driver uses this fact to detect the proper version of
+ 	 * controller.
+ 	 */
+ 	cdns->otg_v0_regs = regs;
+ 	if (!readl(&cdns->otg_v0_regs->cmd)) {
+ 		cdns->version  = CDNS3_CONTROLLER_V0;
+ 		cdns->otg_v1_regs = NULL;
++		cdns->otg_cdnsp_regs = NULL;
+ 		cdns->otg_regs = regs;
++		cdns->otg_irq_regs = (struct cdns3_otg_irq_regs *)
++				     &cdns->otg_v0_regs->ien;
+ 		writel(1, &cdns->otg_v0_regs->simulate);
+ 		dev_dbg(cdns->dev, "DRD version v0 (%08x)\n",
+ 			 readl(&cdns->otg_v0_regs->version));
+ 	} else {
+ 		cdns->otg_v0_regs = NULL;
+ 		cdns->otg_v1_regs = regs;
++		cdns->otg_cdnsp_regs = regs;
++
+ 		cdns->otg_regs = (void *)&cdns->otg_v1_regs->cmd;
+-		cdns->version  = CDNS3_CONTROLLER_V1;
+-		writel(1, &cdns->otg_v1_regs->simulate);
++
++		if (cdns->otg_cdnsp_regs->did == OTG_CDNSP_DID) {
++			cdns->otg_irq_regs = (struct cdns3_otg_irq_regs *)
++					      &cdns->otg_cdnsp_regs->ien;
++			cdns->version  = CDNSP_CONTROLLER_V2;
++		} else {
++			cdns->otg_irq_regs = (struct cdns3_otg_irq_regs *)
++					      &cdns->otg_v1_regs->ien;
++			writel(1, &cdns->otg_v1_regs->simulate);
++			cdns->version  = CDNS3_CONTROLLER_V1;
++		}
++
+ 		dev_dbg(cdns->dev, "DRD version v1 (ID: %08x, rev: %08x)\n",
+ 			 readl(&cdns->otg_v1_regs->did),
+ 			 readl(&cdns->otg_v1_regs->rid));
+@@ -378,10 +412,17 @@ int cdns3_drd_init(struct cdns3 *cdns)
+ 
+ 	/* Update dr_mode according to STRAP configuration. */
+ 	cdns->dr_mode = USB_DR_MODE_OTG;
+-	if (state == OTGSTS_STRAP_HOST) {
++
++	if ((cdns->version == CDNSP_CONTROLLER_V2 &&
++	     state == OTGSTS_CDNSP_STRAP_HOST) ||
++	    (cdns->version != CDNSP_CONTROLLER_V2 &&
++	     state == OTGSTS_STRAP_HOST)) {
+ 		dev_dbg(cdns->dev, "Controller strapped to HOST\n");
+ 		cdns->dr_mode = USB_DR_MODE_HOST;
+-	} else if (state == OTGSTS_STRAP_GADGET) {
++	} else if ((cdns->version == CDNSP_CONTROLLER_V2 &&
++		    state == OTGSTS_CDNSP_STRAP_GADGET) ||
++		   (cdns->version != CDNSP_CONTROLLER_V2 &&
++		    state == OTGSTS_STRAP_GADGET)) {
+ 		dev_dbg(cdns->dev, "Controller strapped to PERIPHERAL\n");
+ 		cdns->dr_mode = USB_DR_MODE_PERIPHERAL;
+ 	}
+diff --git a/drivers/usb/cdns3/drd.h b/drivers/usb/cdns3/drd.h
+index f1ccae285a16d..a767b6893938c 100644
+--- a/drivers/usb/cdns3/drd.h
++++ b/drivers/usb/cdns3/drd.h
+@@ -1,8 +1,8 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ /*
+- * Cadence USB3 DRD header file.
++ * Cadence USB3 and USBSSP DRD header file.
+  *
+- * Copyright (C) 2018-2019 Cadence.
++ * Copyright (C) 2018-2020 Cadence.
+  *
+  * Author: Pawel Laszczak <pawell@cadence.com>
+  */
+@@ -13,7 +13,7 @@
+ #include <linux/phy/phy.h>
+ #include "core.h"
+ 
+-/*  DRD register interface for version v1. */
++/*  DRD register interface for version v1 of cdns3 driver. */
+ struct cdns3_otg_regs {
+ 	__le32 did;
+ 	__le32 rid;
+@@ -38,7 +38,7 @@ struct cdns3_otg_regs {
+ 	__le32 ctrl2;
+ };
+ 
+-/*  DRD register interface for version v0. */
++/*  DRD register interface for version v0 of cdns3 driver. */
+ struct cdns3_otg_legacy_regs {
+ 	__le32 cmd;
+ 	__le32 sts;
+@@ -57,14 +57,45 @@ struct cdns3_otg_legacy_regs {
+ 	__le32 ctrl1;
+ };
+ 
++/* DRD register interface for cdnsp driver */
++struct cdnsp_otg_regs {
++	__le32 did;
++	__le32 rid;
++	__le32 cfgs1;
++	__le32 cfgs2;
++	__le32 cmd;
++	__le32 sts;
++	__le32 state;
++	__le32 ien;
++	__le32 ivect;
++	__le32 tmr;
++	__le32 simulate;
++	__le32 adpbc_sts;
++	__le32 adp_ramp_time;
++	__le32 adpbc_ctrl1;
++	__le32 adpbc_ctrl2;
++	__le32 override;
++	__le32 vbusvalid_dbnc_cfg;
++	__le32 sessvalid_dbnc_cfg;
++	__le32 susp_timing_ctrl;
++};
++
++#define OTG_CDNSP_DID	0x0004034E
++
+ /*
+- * Common registers interface for both version of DRD.
++ * Common registers interface for both CDNS3 and CDNSP version of DRD.
+  */
+ struct cdns3_otg_common_regs {
+ 	__le32 cmd;
+ 	__le32 sts;
+ 	__le32 state;
+-	__le32 different1;
++};
++
++/*
++ * Interrupt related registers. This registers are mapped in different
++ * location for CDNSP controller.
++ */
++struct cdns3_otg_irq_regs {
+ 	__le32 ien;
+ 	__le32 ivect;
+ };
+@@ -92,9 +123,9 @@ struct cdns3_otg_common_regs {
+ #define OTGCMD_DEV_BUS_DROP		BIT(8)
+ /* Drop the bus for Host mode*/
+ #define OTGCMD_HOST_BUS_DROP		BIT(9)
+-/* Power Down USBSS-DEV. */
++/* Power Down USBSS-DEV - only for CDNS3.*/
+ #define OTGCMD_DEV_POWER_OFF		BIT(11)
+-/* Power Down CDNSXHCI. */
++/* Power Down CDNSXHCI - only for CDNS3. */
+ #define OTGCMD_HOST_POWER_OFF		BIT(12)
+ 
+ /* OTGIEN - bitmasks */
+@@ -123,20 +154,31 @@ struct cdns3_otg_common_regs {
+ #define OTGSTS_OTG_NRDY_MASK		BIT(11)
+ #define OTGSTS_OTG_NRDY(p)		((p) & OTGSTS_OTG_NRDY_MASK)
+ /*
+- * Value of the strap pins.
++ * Value of the strap pins for:
++ * CDNS3:
+  * 000 - no default configuration
+  * 010 - Controller initiall configured as Host
+  * 100 - Controller initially configured as Device
++ * CDNSP:
++ * 000 - No default configuration.
++ * 010 - Controller initiall configured as Host.
++ * 100 - Controller initially configured as Device.
+  */
+ #define OTGSTS_STRAP(p)			(((p) & GENMASK(14, 12)) >> 12)
+ #define OTGSTS_STRAP_NO_DEFAULT_CFG	0x00
+ #define OTGSTS_STRAP_HOST_OTG		0x01
+ #define OTGSTS_STRAP_HOST		0x02
+ #define OTGSTS_STRAP_GADGET		0x04
++#define OTGSTS_CDNSP_STRAP_HOST		0x01
++#define OTGSTS_CDNSP_STRAP_GADGET	0x02
++
+ /* Host mode is turned on. */
+-#define OTGSTS_XHCI_READY		BIT(26)
++#define OTGSTS_CDNS3_XHCI_READY		BIT(26)
++#define OTGSTS_CDNSP_XHCI_READY		BIT(27)
++
+ /* "Device mode is turned on .*/
+-#define OTGSTS_DEV_READY		BIT(27)
++#define OTGSTS_CDNS3_DEV_READY		BIT(27)
++#define OTGSTS_CDNSP_DEV_READY		BIT(26)
+ 
+ /* OTGSTATE- bitmasks */
+ #define OTGSTATE_DEV_STATE_MASK		GENMASK(2, 0)
+@@ -152,6 +194,8 @@ struct cdns3_otg_common_regs {
+ #define OVERRIDE_IDPULLUP		BIT(0)
+ /* Only for CDNS3_CONTROLLER_V0 version */
+ #define OVERRIDE_IDPULLUP_V0		BIT(24)
++/* Vbusvalid/Sesvalid override select. */
++#define OVERRIDE_SESS_VLD_SEL		BIT(10)
+ 
+ /* PHYRST_CFG - bitmasks */
+ #define PHYRST_CFG_PHYRST_A_ENABLE     BIT(0)
+@@ -170,6 +214,5 @@ int cdns3_drd_gadget_on(struct cdns3 *cdns);
+ void cdns3_drd_gadget_off(struct cdns3 *cdns);
+ int cdns3_drd_host_on(struct cdns3 *cdns);
+ void cdns3_drd_host_off(struct cdns3 *cdns);
+-int cdns3_set_mode(struct cdns3 *cdns, enum usb_dr_mode mode);
+ 
+ #endif /* __LINUX_CDNS3_DRD */
+diff --git a/drivers/usb/dwc3/dwc3-exynos.c b/drivers/usb/dwc3/dwc3-exynos.c
+index 90bb022737da8..ee7b71827216c 100644
+--- a/drivers/usb/dwc3/dwc3-exynos.c
++++ b/drivers/usb/dwc3/dwc3-exynos.c
+@@ -37,15 +37,6 @@ struct dwc3_exynos {
+ 	struct regulator	*vdd10;
+ };
+ 
+-static int dwc3_exynos_remove_child(struct device *dev, void *unused)
+-{
+-	struct platform_device *pdev = to_platform_device(dev);
+-
+-	platform_device_unregister(pdev);
+-
+-	return 0;
+-}
+-
+ static int dwc3_exynos_probe(struct platform_device *pdev)
+ {
+ 	struct dwc3_exynos	*exynos;
+@@ -142,7 +133,7 @@ static int dwc3_exynos_remove(struct platform_device *pdev)
+ 	struct dwc3_exynos	*exynos = platform_get_drvdata(pdev);
+ 	int i;
+ 
+-	device_for_each_child(&pdev->dev, NULL, dwc3_exynos_remove_child);
++	of_platform_depopulate(&pdev->dev);
+ 
+ 	for (i = exynos->num_clks - 1; i >= 0; i--)
+ 		clk_disable_unprepare(exynos->clks[i]);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 347ba7e4bd81a..a9a43d6494782 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -752,7 +752,7 @@ out:
+ 	return 0;
+ }
+ 
+-static void dwc3_remove_requests(struct dwc3 *dwc, struct dwc3_ep *dep)
++static void dwc3_remove_requests(struct dwc3 *dwc, struct dwc3_ep *dep, int status)
+ {
+ 	struct dwc3_request		*req;
+ 
+@@ -762,19 +762,19 @@ static void dwc3_remove_requests(struct dwc3 *dwc, struct dwc3_ep *dep)
+ 	while (!list_empty(&dep->started_list)) {
+ 		req = next_request(&dep->started_list);
+ 
+-		dwc3_gadget_giveback(dep, req, -ESHUTDOWN);
++		dwc3_gadget_giveback(dep, req, status);
+ 	}
+ 
+ 	while (!list_empty(&dep->pending_list)) {
+ 		req = next_request(&dep->pending_list);
+ 
+-		dwc3_gadget_giveback(dep, req, -ESHUTDOWN);
++		dwc3_gadget_giveback(dep, req, status);
+ 	}
+ 
+ 	while (!list_empty(&dep->cancelled_list)) {
+ 		req = next_request(&dep->cancelled_list);
+ 
+-		dwc3_gadget_giveback(dep, req, -ESHUTDOWN);
++		dwc3_gadget_giveback(dep, req, status);
+ 	}
+ }
+ 
+@@ -803,18 +803,18 @@ static int __dwc3_gadget_ep_disable(struct dwc3_ep *dep)
+ 	reg &= ~DWC3_DALEPENA_EP(dep->number);
+ 	dwc3_writel(dwc->regs, DWC3_DALEPENA, reg);
+ 
++	dwc3_remove_requests(dwc, dep, -ESHUTDOWN);
++
++	dep->stream_capable = false;
++	dep->type = 0;
++	dep->flags = 0;
++
+ 	/* Clear out the ep descriptors for non-ep0 */
+ 	if (dep->number > 1) {
+ 		dep->endpoint.comp_desc = NULL;
+ 		dep->endpoint.desc = NULL;
+ 	}
+ 
+-	dwc3_remove_requests(dwc, dep);
+-
+-	dep->stream_capable = false;
+-	dep->type = 0;
+-	dep->flags = 0;
+-
+ 	return 0;
+ }
+ 
+@@ -2067,7 +2067,7 @@ static void dwc3_stop_active_transfers(struct dwc3 *dwc)
+ 		if (!dep)
+ 			continue;
+ 
+-		dwc3_remove_requests(dwc, dep);
++		dwc3_remove_requests(dwc, dep, -ESHUTDOWN);
+ 	}
+ }
+ 
+diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
+index 9db557b76511b..804d8f4d0e73d 100644
+--- a/drivers/xen/platform-pci.c
++++ b/drivers/xen/platform-pci.c
+@@ -137,7 +137,7 @@ static int platform_pci_probe(struct pci_dev *pdev,
+ 		if (ret) {
+ 			dev_warn(&pdev->dev, "Unable to set the evtchn callback "
+ 					 "err=%d\n", ret);
+-			goto out;
++			goto irq_out;
+ 		}
+ 	}
+ 
+@@ -145,13 +145,16 @@ static int platform_pci_probe(struct pci_dev *pdev,
+ 	grant_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
+ 	ret = gnttab_setup_auto_xlat_frames(grant_frames);
+ 	if (ret)
+-		goto out;
++		goto irq_out;
+ 	ret = gnttab_init();
+ 	if (ret)
+ 		goto grant_out;
+ 	return 0;
+ grant_out:
+ 	gnttab_free_auto_xlat_frames();
++irq_out:
++	if (!xen_have_vector_callback)
++		free_irq(pdev->irq, pdev);
+ out:
+ 	pci_release_region(pdev, 0);
+ mem_out:
+diff --git a/drivers/xen/xen-pciback/conf_space_capability.c b/drivers/xen/xen-pciback/conf_space_capability.c
+index 5e53b4817f167..097316a741268 100644
+--- a/drivers/xen/xen-pciback/conf_space_capability.c
++++ b/drivers/xen/xen-pciback/conf_space_capability.c
+@@ -190,13 +190,16 @@ static const struct config_field caplist_pm[] = {
+ };
+ 
+ static struct msi_msix_field_config {
+-	u16          enable_bit; /* bit for enabling MSI/MSI-X */
+-	unsigned int int_type;   /* interrupt type for exclusiveness check */
++	u16          enable_bit;   /* bit for enabling MSI/MSI-X */
++	u16          allowed_bits; /* bits allowed to be changed */
++	unsigned int int_type;     /* interrupt type for exclusiveness check */
+ } msi_field_config = {
+ 	.enable_bit	= PCI_MSI_FLAGS_ENABLE,
++	.allowed_bits	= PCI_MSI_FLAGS_ENABLE,
+ 	.int_type	= INTERRUPT_TYPE_MSI,
+ }, msix_field_config = {
+ 	.enable_bit	= PCI_MSIX_FLAGS_ENABLE,
++	.allowed_bits	= PCI_MSIX_FLAGS_ENABLE | PCI_MSIX_FLAGS_MASKALL,
+ 	.int_type	= INTERRUPT_TYPE_MSIX,
+ };
+ 
+@@ -229,7 +232,7 @@ static int msi_msix_flags_write(struct pci_dev *dev, int offset, u16 new_value,
+ 		return 0;
+ 
+ 	if (!dev_data->allow_interrupt_control ||
+-	    (new_value ^ old_value) & ~field_config->enable_bit)
++	    (new_value ^ old_value) & ~field_config->allowed_bits)
+ 		return PCIBIOS_SET_FAILED;
+ 
+ 	if (new_value & field_config->enable_bit) {
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index b5e9bfe884c4b..d0c31651ec80d 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -2811,6 +2811,8 @@ static int btrfs_ioctl_get_subvol_info(struct file *file, void __user *argp)
+ 		}
+ 	}
+ 
++	btrfs_free_path(path);
++	path = NULL;
+ 	if (copy_to_user(argp, subvol_info, sizeof(*subvol_info)))
+ 		ret = -EFAULT;
+ 
+@@ -2903,6 +2905,8 @@ static int btrfs_ioctl_get_subvol_rootref(struct file *file, void __user *argp)
+ 	}
+ 
+ out:
++	btrfs_free_path(path);
++
+ 	if (!ret || ret == -EOVERFLOW) {
+ 		rootrefs->num_items = found;
+ 		/* update min_treeid for next search */
+@@ -2914,7 +2918,6 @@ out:
+ 	}
+ 
+ 	kfree(rootrefs);
+-	btrfs_free_path(path);
+ 
+ 	return ret;
+ }
+@@ -3878,6 +3881,8 @@ static long btrfs_ioctl_ino_to_path(struct btrfs_root *root, void __user *arg)
+ 		ipath->fspath->val[i] = rel_ptr;
+ 	}
+ 
++	btrfs_free_path(path);
++	path = NULL;
+ 	ret = copy_to_user((void __user *)(unsigned long)ipa->fspath,
+ 			   ipath->fspath, size);
+ 	if (ret) {
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index 3bb6b688ece52..ecf190286377c 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -1767,8 +1767,11 @@ int __init btrfs_init_sysfs(void)
+ 
+ #ifdef CONFIG_BTRFS_DEBUG
+ 	ret = sysfs_create_group(&btrfs_kset->kobj, &btrfs_debug_feature_attr_group);
+-	if (ret)
+-		goto out2;
++	if (ret) {
++		sysfs_unmerge_group(&btrfs_kset->kobj,
++				    &btrfs_static_feature_attr_group);
++		goto out_remove_group;
++	}
+ #endif
+ 
+ 	return 0;
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 76e43a487bc63..51562d36fa83d 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -2294,6 +2294,7 @@ static int caps_are_flushed(struct inode *inode, u64 flush_tid)
+  */
+ static int unsafe_request_wait(struct inode *inode)
+ {
++	struct ceph_mds_client *mdsc = ceph_sb_to_client(inode->i_sb)->mdsc;
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	struct ceph_mds_request *req1 = NULL, *req2 = NULL;
+ 	int ret, err = 0;
+@@ -2313,6 +2314,76 @@ static int unsafe_request_wait(struct inode *inode)
+ 	}
+ 	spin_unlock(&ci->i_unsafe_lock);
+ 
++	/*
++	 * Trigger to flush the journal logs in all the relevant MDSes
++	 * manually, or in the worst case we must wait at most 5 seconds
++	 * to wait the journal logs to be flushed by the MDSes periodically.
++	 */
++	if (req1 || req2) {
++		struct ceph_mds_request *req;
++		struct ceph_mds_session **sessions;
++		struct ceph_mds_session *s;
++		unsigned int max_sessions;
++		int i;
++
++		mutex_lock(&mdsc->mutex);
++		max_sessions = mdsc->max_sessions;
++
++		sessions = kcalloc(max_sessions, sizeof(s), GFP_KERNEL);
++		if (!sessions) {
++			mutex_unlock(&mdsc->mutex);
++			err = -ENOMEM;
++			goto out;
++		}
++
++		spin_lock(&ci->i_unsafe_lock);
++		if (req1) {
++			list_for_each_entry(req, &ci->i_unsafe_dirops,
++					    r_unsafe_dir_item) {
++				s = req->r_session;
++				if (!s)
++					continue;
++				if (!sessions[s->s_mds]) {
++					s = ceph_get_mds_session(s);
++					sessions[s->s_mds] = s;
++				}
++			}
++		}
++		if (req2) {
++			list_for_each_entry(req, &ci->i_unsafe_iops,
++					    r_unsafe_target_item) {
++				s = req->r_session;
++				if (!s)
++					continue;
++				if (!sessions[s->s_mds]) {
++					s = ceph_get_mds_session(s);
++					sessions[s->s_mds] = s;
++				}
++			}
++		}
++		spin_unlock(&ci->i_unsafe_lock);
++
++		/* the auth MDS */
++		spin_lock(&ci->i_ceph_lock);
++		if (ci->i_auth_cap) {
++			s = ci->i_auth_cap->session;
++			if (!sessions[s->s_mds])
++				sessions[s->s_mds] = ceph_get_mds_session(s);
++		}
++		spin_unlock(&ci->i_ceph_lock);
++		mutex_unlock(&mdsc->mutex);
++
++		/* send flush mdlog request to MDSes */
++		for (i = 0; i < max_sessions; i++) {
++			s = sessions[i];
++			if (s) {
++				send_flush_mdlog(s);
++				ceph_put_mds_session(s);
++			}
++		}
++		kfree(sessions);
++	}
++
+ 	dout("unsafe_request_wait %p wait on tid %llu %llu\n",
+ 	     inode, req1 ? req1->r_tid : 0ULL, req2 ? req2->r_tid : 0ULL);
+ 	if (req1) {
+@@ -2320,15 +2391,19 @@ static int unsafe_request_wait(struct inode *inode)
+ 					ceph_timeout_jiffies(req1->r_timeout));
+ 		if (ret)
+ 			err = -EIO;
+-		ceph_mdsc_put_request(req1);
+ 	}
+ 	if (req2) {
+ 		ret = !wait_for_completion_timeout(&req2->r_safe_completion,
+ 					ceph_timeout_jiffies(req2->r_timeout));
+ 		if (ret)
+ 			err = -EIO;
+-		ceph_mdsc_put_request(req2);
+ 	}
++
++out:
++	if (req1)
++		ceph_mdsc_put_request(req1);
++	if (req2)
++		ceph_mdsc_put_request(req2);
+ 	return err;
+ }
+ 
+@@ -4310,33 +4385,9 @@ static void flush_dirty_session_caps(struct ceph_mds_session *s)
+ 	dout("flush_dirty_caps done\n");
+ }
+ 
+-static void iterate_sessions(struct ceph_mds_client *mdsc,
+-			     void (*cb)(struct ceph_mds_session *))
+-{
+-	int mds;
+-
+-	mutex_lock(&mdsc->mutex);
+-	for (mds = 0; mds < mdsc->max_sessions; ++mds) {
+-		struct ceph_mds_session *s;
+-
+-		if (!mdsc->sessions[mds])
+-			continue;
+-
+-		s = ceph_get_mds_session(mdsc->sessions[mds]);
+-		if (!s)
+-			continue;
+-
+-		mutex_unlock(&mdsc->mutex);
+-		cb(s);
+-		ceph_put_mds_session(s);
+-		mutex_lock(&mdsc->mutex);
+-	}
+-	mutex_unlock(&mdsc->mutex);
+-}
+-
+ void ceph_flush_dirty_caps(struct ceph_mds_client *mdsc)
+ {
+-	iterate_sessions(mdsc, flush_dirty_session_caps);
++	ceph_mdsc_iterate_sessions(mdsc, flush_dirty_session_caps, true);
+ }
+ 
+ void __ceph_touch_fmode(struct ceph_inode_info *ci,
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 6859967df2b19..fa51872ff8504 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -809,6 +809,33 @@ static void put_request_session(struct ceph_mds_request *req)
+ 	}
+ }
+ 
++void ceph_mdsc_iterate_sessions(struct ceph_mds_client *mdsc,
++				void (*cb)(struct ceph_mds_session *),
++				bool check_state)
++{
++	int mds;
++
++	mutex_lock(&mdsc->mutex);
++	for (mds = 0; mds < mdsc->max_sessions; ++mds) {
++		struct ceph_mds_session *s;
++
++		s = __ceph_lookup_mds_session(mdsc, mds);
++		if (!s)
++			continue;
++
++		if (check_state && !check_session_state(s)) {
++			ceph_put_mds_session(s);
++			continue;
++		}
++
++		mutex_unlock(&mdsc->mutex);
++		cb(s);
++		ceph_put_mds_session(s);
++		mutex_lock(&mdsc->mutex);
++	}
++	mutex_unlock(&mdsc->mutex);
++}
++
+ void ceph_mdsc_release_request(struct kref *kref)
+ {
+ 	struct ceph_mds_request *req = container_of(kref,
+@@ -1157,7 +1184,7 @@ random:
+ /*
+  * session messages
+  */
+-static struct ceph_msg *create_session_msg(u32 op, u64 seq)
++struct ceph_msg *ceph_create_session_msg(u32 op, u64 seq)
+ {
+ 	struct ceph_msg *msg;
+ 	struct ceph_mds_session_head *h;
+@@ -1165,7 +1192,8 @@ static struct ceph_msg *create_session_msg(u32 op, u64 seq)
+ 	msg = ceph_msg_new(CEPH_MSG_CLIENT_SESSION, sizeof(*h), GFP_NOFS,
+ 			   false);
+ 	if (!msg) {
+-		pr_err("create_session_msg ENOMEM creating msg\n");
++		pr_err("ENOMEM creating session %s msg\n",
++		       ceph_session_op_name(op));
+ 		return NULL;
+ 	}
+ 	h = msg->front.iov_base;
+@@ -1299,7 +1327,7 @@ static struct ceph_msg *create_session_open_msg(struct ceph_mds_client *mdsc, u6
+ 	msg = ceph_msg_new(CEPH_MSG_CLIENT_SESSION, sizeof(*h) + extra_bytes,
+ 			   GFP_NOFS, false);
+ 	if (!msg) {
+-		pr_err("create_session_msg ENOMEM creating msg\n");
++		pr_err("ENOMEM creating session open msg\n");
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 	p = msg->front.iov_base;
+@@ -1833,8 +1861,8 @@ static int send_renew_caps(struct ceph_mds_client *mdsc,
+ 
+ 	dout("send_renew_caps to mds%d (%s)\n", session->s_mds,
+ 		ceph_mds_state_name(state));
+-	msg = create_session_msg(CEPH_SESSION_REQUEST_RENEWCAPS,
+-				 ++session->s_renew_seq);
++	msg = ceph_create_session_msg(CEPH_SESSION_REQUEST_RENEWCAPS,
++				      ++session->s_renew_seq);
+ 	if (!msg)
+ 		return -ENOMEM;
+ 	ceph_con_send(&session->s_con, msg);
+@@ -1848,7 +1876,7 @@ static int send_flushmsg_ack(struct ceph_mds_client *mdsc,
+ 
+ 	dout("send_flushmsg_ack to mds%d (%s)s seq %lld\n",
+ 	     session->s_mds, ceph_session_state_name(session->s_state), seq);
+-	msg = create_session_msg(CEPH_SESSION_FLUSHMSG_ACK, seq);
++	msg = ceph_create_session_msg(CEPH_SESSION_FLUSHMSG_ACK, seq);
+ 	if (!msg)
+ 		return -ENOMEM;
+ 	ceph_con_send(&session->s_con, msg);
+@@ -1900,7 +1928,8 @@ static int request_close_session(struct ceph_mds_session *session)
+ 	dout("request_close_session mds%d state %s seq %lld\n",
+ 	     session->s_mds, ceph_session_state_name(session->s_state),
+ 	     session->s_seq);
+-	msg = create_session_msg(CEPH_SESSION_REQUEST_CLOSE, session->s_seq);
++	msg = ceph_create_session_msg(CEPH_SESSION_REQUEST_CLOSE,
++				      session->s_seq);
+ 	if (!msg)
+ 		return -ENOMEM;
+ 	ceph_con_send(&session->s_con, msg);
+@@ -4375,24 +4404,12 @@ void ceph_mdsc_lease_send_msg(struct ceph_mds_session *session,
+ }
+ 
+ /*
+- * lock unlock sessions, to wait ongoing session activities
++ * lock unlock the session, to wait ongoing session activities
+  */
+-static void lock_unlock_sessions(struct ceph_mds_client *mdsc)
++static void lock_unlock_session(struct ceph_mds_session *s)
+ {
+-	int i;
+-
+-	mutex_lock(&mdsc->mutex);
+-	for (i = 0; i < mdsc->max_sessions; i++) {
+-		struct ceph_mds_session *s = __ceph_lookup_mds_session(mdsc, i);
+-		if (!s)
+-			continue;
+-		mutex_unlock(&mdsc->mutex);
+-		mutex_lock(&s->s_mutex);
+-		mutex_unlock(&s->s_mutex);
+-		ceph_put_mds_session(s);
+-		mutex_lock(&mdsc->mutex);
+-	}
+-	mutex_unlock(&mdsc->mutex);
++	mutex_lock(&s->s_mutex);
++	mutex_unlock(&s->s_mutex);
+ }
+ 
+ static void maybe_recover_session(struct ceph_mds_client *mdsc)
+@@ -4647,6 +4664,30 @@ static void wait_requests(struct ceph_mds_client *mdsc)
+ 	dout("wait_requests done\n");
+ }
+ 
++void send_flush_mdlog(struct ceph_mds_session *s)
++{
++	struct ceph_msg *msg;
++
++	/*
++	 * Pre-luminous MDS crashes when it sees an unknown session request
++	 */
++	if (!CEPH_HAVE_FEATURE(s->s_con.peer_features, SERVER_LUMINOUS))
++		return;
++
++	mutex_lock(&s->s_mutex);
++	dout("request mdlog flush to mds%d (%s)s seq %lld\n", s->s_mds,
++	     ceph_session_state_name(s->s_state), s->s_seq);
++	msg = ceph_create_session_msg(CEPH_SESSION_REQUEST_FLUSH_MDLOG,
++				      s->s_seq);
++	if (!msg) {
++		pr_err("failed to request mdlog flush to mds%d (%s) seq %lld\n",
++		       s->s_mds, ceph_session_state_name(s->s_state), s->s_seq);
++	} else {
++		ceph_con_send(&s->s_con, msg);
++	}
++	mutex_unlock(&s->s_mutex);
++}
++
+ /*
+  * called before mount is ro, and before dentries are torn down.
+  * (hmm, does this still race with new lookups?)
+@@ -4656,7 +4697,8 @@ void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc)
+ 	dout("pre_umount\n");
+ 	mdsc->stopping = 1;
+ 
+-	lock_unlock_sessions(mdsc);
++	ceph_mdsc_iterate_sessions(mdsc, send_flush_mdlog, true);
++	ceph_mdsc_iterate_sessions(mdsc, lock_unlock_session, false);
+ 	ceph_flush_dirty_caps(mdsc);
+ 	wait_requests(mdsc);
+ 
+diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
+index acf33d7192bb6..a92e42e8a9f82 100644
+--- a/fs/ceph/mds_client.h
++++ b/fs/ceph/mds_client.h
+@@ -518,6 +518,11 @@ static inline void ceph_mdsc_put_request(struct ceph_mds_request *req)
+ 	kref_put(&req->r_kref, ceph_mdsc_release_request);
+ }
+ 
++extern void send_flush_mdlog(struct ceph_mds_session *s);
++extern void ceph_mdsc_iterate_sessions(struct ceph_mds_client *mdsc,
++				       void (*cb)(struct ceph_mds_session *),
++				       bool check_state);
++extern struct ceph_msg *ceph_create_session_msg(u32 op, u64 seq);
+ extern void __ceph_queue_cap_release(struct ceph_mds_session *session,
+ 				    struct ceph_cap *cap);
+ extern void ceph_flush_cap_releases(struct ceph_mds_client *mdsc,
+diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
+index 0369f672a76fb..734873be56a74 100644
+--- a/fs/ceph/snap.c
++++ b/fs/ceph/snap.c
+@@ -697,9 +697,10 @@ int ceph_update_snap_trace(struct ceph_mds_client *mdsc,
+ 	struct ceph_mds_snap_realm *ri;    /* encoded */
+ 	__le64 *snaps;                     /* encoded */
+ 	__le64 *prior_parent_snaps;        /* encoded */
+-	struct ceph_snap_realm *realm = NULL;
++	struct ceph_snap_realm *realm;
+ 	struct ceph_snap_realm *first_realm = NULL;
+-	int invalidate = 0;
++	struct ceph_snap_realm *realm_to_rebuild = NULL;
++	int rebuild_snapcs;
+ 	int err = -ENOMEM;
+ 	LIST_HEAD(dirty_realms);
+ 
+@@ -707,6 +708,8 @@ int ceph_update_snap_trace(struct ceph_mds_client *mdsc,
+ 
+ 	dout("update_snap_trace deletion=%d\n", deletion);
+ more:
++	realm = NULL;
++	rebuild_snapcs = 0;
+ 	ceph_decode_need(&p, e, sizeof(*ri), bad);
+ 	ri = p;
+ 	p += sizeof(*ri);
+@@ -730,7 +733,7 @@ more:
+ 	err = adjust_snap_realm_parent(mdsc, realm, le64_to_cpu(ri->parent));
+ 	if (err < 0)
+ 		goto fail;
+-	invalidate += err;
++	rebuild_snapcs += err;
+ 
+ 	if (le64_to_cpu(ri->seq) > realm->seq) {
+ 		dout("update_snap_trace updating %llx %p %lld -> %lld\n",
+@@ -755,22 +758,30 @@ more:
+ 		if (realm->seq > mdsc->last_snap_seq)
+ 			mdsc->last_snap_seq = realm->seq;
+ 
+-		invalidate = 1;
++		rebuild_snapcs = 1;
+ 	} else if (!realm->cached_context) {
+ 		dout("update_snap_trace %llx %p seq %lld new\n",
+ 		     realm->ino, realm, realm->seq);
+-		invalidate = 1;
++		rebuild_snapcs = 1;
+ 	} else {
+ 		dout("update_snap_trace %llx %p seq %lld unchanged\n",
+ 		     realm->ino, realm, realm->seq);
+ 	}
+ 
+-	dout("done with %llx %p, invalidated=%d, %p %p\n", realm->ino,
+-	     realm, invalidate, p, e);
++	dout("done with %llx %p, rebuild_snapcs=%d, %p %p\n", realm->ino,
++	     realm, rebuild_snapcs, p, e);
++
++	/*
++	 * this will always track the uppest parent realm from which
++	 * we need to rebuild the snapshot contexts _downward_ in
++	 * hierarchy.
++	 */
++	if (rebuild_snapcs)
++		realm_to_rebuild = realm;
+ 
+-	/* invalidate when we reach the _end_ (root) of the trace */
+-	if (invalidate && p >= e)
+-		rebuild_snap_realms(realm, &dirty_realms);
++	/* rebuild_snapcs when we reach the _end_ (root) of the trace */
++	if (realm_to_rebuild && p >= e)
++		rebuild_snap_realms(realm_to_rebuild, &dirty_realms);
+ 
+ 	if (!first_realm)
+ 		first_realm = realm;
+diff --git a/fs/ceph/strings.c b/fs/ceph/strings.c
+index 4a79f3632260e..573bb9556fb56 100644
+--- a/fs/ceph/strings.c
++++ b/fs/ceph/strings.c
+@@ -46,6 +46,7 @@ const char *ceph_session_op_name(int op)
+ 	case CEPH_SESSION_FLUSHMSG_ACK: return "flushmsg_ack";
+ 	case CEPH_SESSION_FORCE_RO: return "force_ro";
+ 	case CEPH_SESSION_REJECT: return "reject";
++	case CEPH_SESSION_REQUEST_FLUSH_MDLOG: return "flush_mdlog";
+ 	}
+ 	return "???";
+ }
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 30add5a3df3df..54750b7c162d2 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -5182,6 +5182,7 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ 	 * and it is decreased till we reach start.
+ 	 */
+ again:
++	ret = 0;
+ 	if (SHIFT == SHIFT_LEFT)
+ 		iterator = &start;
+ 	else
+@@ -5225,14 +5226,21 @@ again:
+ 					ext4_ext_get_actual_len(extent);
+ 		} else {
+ 			extent = EXT_FIRST_EXTENT(path[depth].p_hdr);
+-			if (le32_to_cpu(extent->ee_block) > 0)
++			if (le32_to_cpu(extent->ee_block) > start)
+ 				*iterator = le32_to_cpu(extent->ee_block) - 1;
+-			else
+-				/* Beginning is reached, end of the loop */
++			else if (le32_to_cpu(extent->ee_block) == start)
+ 				iterator = NULL;
+-			/* Update path extent in case we need to stop */
+-			while (le32_to_cpu(extent->ee_block) < start)
++			else {
++				extent = EXT_LAST_EXTENT(path[depth].p_hdr);
++				while (le32_to_cpu(extent->ee_block) >= start)
++					extent--;
++
++				if (extent == EXT_LAST_EXTENT(path[depth].p_hdr))
++					break;
++
+ 				extent++;
++				iterator = NULL;
++			}
+ 			path[depth].p_ext = extent;
+ 		}
+ 		ret = ext4_ext_shift_path_extents(path, shift, inode,
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 253308fcb0478..504389568dac5 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -3275,10 +3275,9 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
+ 		.mode = mode
+ 	};
+ 	int err;
+-	bool lock_inode = !(mode & FALLOC_FL_KEEP_SIZE) ||
+-			   (mode & FALLOC_FL_PUNCH_HOLE);
+-
+-	bool block_faults = FUSE_IS_DAX(inode) && lock_inode;
++	bool block_faults = FUSE_IS_DAX(inode) &&
++		(!(mode & FALLOC_FL_KEEP_SIZE) ||
++		 (mode & FALLOC_FL_PUNCH_HOLE));
+ 
+ 	if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
+ 		return -EOPNOTSUPP;
+@@ -3286,22 +3285,20 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
+ 	if (fm->fc->no_fallocate)
+ 		return -EOPNOTSUPP;
+ 
+-	if (lock_inode) {
+-		inode_lock(inode);
+-		if (block_faults) {
+-			down_write(&fi->i_mmap_sem);
+-			err = fuse_dax_break_layouts(inode, 0, 0);
+-			if (err)
+-				goto out;
+-		}
++	inode_lock(inode);
++	if (block_faults) {
++		down_write(&fi->i_mmap_sem);
++		err = fuse_dax_break_layouts(inode, 0, 0);
++		if (err)
++			goto out;
++	}
+ 
+-		if (mode & FALLOC_FL_PUNCH_HOLE) {
+-			loff_t endbyte = offset + length - 1;
++	if (mode & FALLOC_FL_PUNCH_HOLE) {
++		loff_t endbyte = offset + length - 1;
+ 
+-			err = fuse_writeback_range(inode, offset, endbyte);
+-			if (err)
+-				goto out;
+-		}
++		err = fuse_writeback_range(inode, offset, endbyte);
++		if (err)
++			goto out;
+ 	}
+ 
+ 	if (!(mode & FALLOC_FL_KEEP_SIZE) &&
+@@ -3351,8 +3348,7 @@ out:
+ 	if (block_faults)
+ 		up_write(&fi->i_mmap_sem);
+ 
+-	if (lock_inode)
+-		inode_unlock(inode);
++	inode_unlock(inode);
+ 
+ 	fuse_flush_time_update(inode);
+ 
+diff --git a/fs/nilfs2/sufile.c b/fs/nilfs2/sufile.c
+index 63722475e17e1..51f4cb060231f 100644
+--- a/fs/nilfs2/sufile.c
++++ b/fs/nilfs2/sufile.c
+@@ -495,14 +495,22 @@ void nilfs_sufile_do_free(struct inode *sufile, __u64 segnum,
+ int nilfs_sufile_mark_dirty(struct inode *sufile, __u64 segnum)
+ {
+ 	struct buffer_head *bh;
++	void *kaddr;
++	struct nilfs_segment_usage *su;
+ 	int ret;
+ 
++	down_write(&NILFS_MDT(sufile)->mi_sem);
+ 	ret = nilfs_sufile_get_segment_usage_block(sufile, segnum, 0, &bh);
+ 	if (!ret) {
+ 		mark_buffer_dirty(bh);
+ 		nilfs_mdt_mark_dirty(sufile);
++		kaddr = kmap_atomic(bh->b_page);
++		su = nilfs_sufile_block_get_segment_usage(sufile, segnum, bh, kaddr);
++		nilfs_segment_usage_set_dirty(su);
++		kunmap_atomic(kaddr);
+ 		brelse(bh);
+ 	}
++	up_write(&NILFS_MDT(sufile)->mi_sem);
+ 	return ret;
+ }
+ 
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index bf5cb6efb8c09..475d23a4f8da2 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -440,14 +440,22 @@ static void __zonefs_io_error(struct inode *inode, bool write)
+ 	struct super_block *sb = inode->i_sb;
+ 	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+ 	unsigned int noio_flag;
+-	unsigned int nr_zones =
+-		zi->i_zone_size >> (sbi->s_zone_sectors_shift + SECTOR_SHIFT);
++	unsigned int nr_zones = 1;
+ 	struct zonefs_ioerr_data err = {
+ 		.inode = inode,
+ 		.write = write,
+ 	};
+ 	int ret;
+ 
++	/*
++	 * The only files that have more than one zone are conventional zone
++	 * files with aggregated conventional zones, for which the inode zone
++	 * size is always larger than the device zone size.
++	 */
++	if (zi->i_zone_size > bdev_zone_sectors(sb->s_bdev))
++		nr_zones = zi->i_zone_size >>
++			(sbi->s_zone_sectors_shift + SECTOR_SHIFT);
++
+ 	/*
+ 	 * Memory allocations in blkdev_report_zones() can trigger a memory
+ 	 * reclaim which may in turn cause a recursion into zonefs as well as
+@@ -1364,6 +1372,14 @@ static int zonefs_init_file_inode(struct inode *inode, struct blk_zone *zone,
+ 	zi->i_ztype = type;
+ 	zi->i_zsector = zone->start;
+ 	zi->i_zone_size = zone->len << SECTOR_SHIFT;
++	if (zi->i_zone_size > bdev_zone_sectors(sb->s_bdev) << SECTOR_SHIFT &&
++	    !(sbi->s_features & ZONEFS_F_AGGRCNV)) {
++		zonefs_err(sb,
++			   "zone size %llu doesn't match device's zone sectors %llu\n",
++			   zi->i_zone_size,
++			   bdev_zone_sectors(sb->s_bdev) << SECTOR_SHIFT);
++		return -EINVAL;
++	}
+ 
+ 	zi->i_max_size = min_t(loff_t, MAX_LFS_FILESIZE,
+ 			       zone->capacity << SECTOR_SHIFT);
+@@ -1406,11 +1422,11 @@ static struct dentry *zonefs_create_inode(struct dentry *parent,
+ 	struct inode *dir = d_inode(parent);
+ 	struct dentry *dentry;
+ 	struct inode *inode;
+-	int ret;
++	int ret = -ENOMEM;
+ 
+ 	dentry = d_alloc_name(parent, name);
+ 	if (!dentry)
+-		return NULL;
++		return ERR_PTR(ret);
+ 
+ 	inode = new_inode(parent->d_sb);
+ 	if (!inode)
+@@ -1435,7 +1451,7 @@ static struct dentry *zonefs_create_inode(struct dentry *parent,
+ dput:
+ 	dput(dentry);
+ 
+-	return NULL;
++	return ERR_PTR(ret);
+ }
+ 
+ struct zonefs_zone_data {
+@@ -1455,7 +1471,7 @@ static int zonefs_create_zgroup(struct zonefs_zone_data *zd,
+ 	struct blk_zone *zone, *next, *end;
+ 	const char *zgroup_name;
+ 	char *file_name;
+-	struct dentry *dir;
++	struct dentry *dir, *dent;
+ 	unsigned int n = 0;
+ 	int ret;
+ 
+@@ -1473,8 +1489,8 @@ static int zonefs_create_zgroup(struct zonefs_zone_data *zd,
+ 		zgroup_name = "seq";
+ 
+ 	dir = zonefs_create_inode(sb->s_root, zgroup_name, NULL, type);
+-	if (!dir) {
+-		ret = -ENOMEM;
++	if (IS_ERR(dir)) {
++		ret = PTR_ERR(dir);
+ 		goto free;
+ 	}
+ 
+@@ -1520,8 +1536,9 @@ static int zonefs_create_zgroup(struct zonefs_zone_data *zd,
+ 		 * Use the file number within its group as file name.
+ 		 */
+ 		snprintf(file_name, ZONEFS_NAME_MAX - 1, "%u", n);
+-		if (!zonefs_create_inode(dir, file_name, zone, type)) {
+-			ret = -ENOMEM;
++		dent = zonefs_create_inode(dir, file_name, zone, type);
++		if (IS_ERR(dent)) {
++			ret = PTR_ERR(dent);
+ 			goto free;
+ 		}
+ 
+diff --git a/include/linux/ceph/ceph_fs.h b/include/linux/ceph/ceph_fs.h
+index 455e9b9e2adf5..8287382d3d1db 100644
+--- a/include/linux/ceph/ceph_fs.h
++++ b/include/linux/ceph/ceph_fs.h
+@@ -288,6 +288,7 @@ enum {
+ 	CEPH_SESSION_FLUSHMSG_ACK,
+ 	CEPH_SESSION_FORCE_RO,
+ 	CEPH_SESSION_REJECT,
++	CEPH_SESSION_REQUEST_FLUSH_MDLOG,
+ };
+ 
+ extern const char *ceph_session_op_name(int op);
+diff --git a/include/linux/netfilter/ipset/ip_set.h b/include/linux/netfilter/ipset/ip_set.h
+index ab192720e2d66..53c9a17ecb3e3 100644
+--- a/include/linux/netfilter/ipset/ip_set.h
++++ b/include/linux/netfilter/ipset/ip_set.h
+@@ -198,6 +198,9 @@ struct ip_set_region {
+ 	u32 elements;		/* Number of elements vs timeout */
+ };
+ 
++/* Max range where every element is added/deleted in one step */
++#define IPSET_MAX_RANGE		(1<<20)
++
+ /* The core set type structure */
+ struct ip_set_type {
+ 	struct list_head list;
+diff --git a/include/net/switchdev.h b/include/net/switchdev.h
+index 8528015590e44..afdf8bd1b4fe5 100644
+--- a/include/net/switchdev.h
++++ b/include/net/switchdev.h
+@@ -38,6 +38,7 @@ enum switchdev_attr_id {
+ 	SWITCHDEV_ATTR_ID_PORT_MROUTER,
+ 	SWITCHDEV_ATTR_ID_BRIDGE_AGEING_TIME,
+ 	SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING,
++	SWITCHDEV_ATTR_ID_BRIDGE_VLAN_PROTOCOL,
+ 	SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED,
+ 	SWITCHDEV_ATTR_ID_BRIDGE_MROUTER,
+ #if IS_ENABLED(CONFIG_BRIDGE_MRP)
+@@ -57,6 +58,7 @@ struct switchdev_attr {
+ 		bool mrouter;				/* PORT_MROUTER */
+ 		clock_t ageing_time;			/* BRIDGE_AGEING_TIME */
+ 		bool vlan_filtering;			/* BRIDGE_VLAN_FILTERING */
++		u16 vlan_protocol;			/* BRIDGE_VLAN_PROTOCOL */
+ 		bool mc_disabled;			/* MC_DISABLED */
+ #if IS_ENABLED(CONFIG_BRIDGE_MRP)
+ 		u8 mrp_port_role;			/* MRP_PORT_ROLE */
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 1c714336b8635..221856f2d295c 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -583,7 +583,7 @@ TRACE_EVENT(rxrpc_client,
+ 	    TP_fast_assign(
+ 		    __entry->conn = conn ? conn->debug_id : 0;
+ 		    __entry->channel = channel;
+-		    __entry->usage = conn ? atomic_read(&conn->usage) : -2;
++		    __entry->usage = conn ? refcount_read(&conn->ref) : -2;
+ 		    __entry->op = op;
+ 		    __entry->cid = conn ? conn->proto.cid : 0;
+ 			   ),
+diff --git a/include/uapi/linux/audit.h b/include/uapi/linux/audit.h
+index cd2d8279a5e44..cb4e8e6e86a90 100644
+--- a/include/uapi/linux/audit.h
++++ b/include/uapi/linux/audit.h
+@@ -182,7 +182,7 @@
+ #define AUDIT_MAX_KEY_LEN  256
+ #define AUDIT_BITMASK_SIZE 64
+ #define AUDIT_WORD(nr) ((__u32)((nr)/32))
+-#define AUDIT_BIT(nr)  (1 << ((nr) - AUDIT_WORD(nr)*32))
++#define AUDIT_BIT(nr)  (1U << ((nr) - AUDIT_WORD(nr)*32))
+ 
+ #define AUDIT_SYSCALL_CLASSES 16
+ #define AUDIT_CLASS_DIR_WRITE 0
+diff --git a/init/Kconfig b/init/Kconfig
+index 22912631d79b4..eba883d6d9ed5 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -71,7 +71,7 @@ config CC_HAS_ASM_GOTO_OUTPUT
+ config CC_HAS_ASM_GOTO_TIED_OUTPUT
+ 	depends on CC_HAS_ASM_GOTO_OUTPUT
+ 	# Detect buggy gcc and clang, fixed in gcc-11 clang-14.
+-	def_bool $(success,echo 'int foo(int *x) { asm goto (".long (%l[bar]) - .\n": "+m"(*x) ::: bar); return *x; bar: return 0; }' | $CC -x c - -c -o /dev/null)
++	def_bool $(success,echo 'int foo(int *x) { asm goto (".long (%l[bar]) - .": "+m"(*x) ::: bar); return *x; bar: return 0; }' | $CC -x c - -c -o /dev/null)
+ 
+ config TOOLS_SUPPORT_RELR
+ 	def_bool $(success,env "CC=$(CC)" "LD=$(LD)" "NM=$(NM)" "OBJCOPY=$(OBJCOPY)" $(srctree)/scripts/tools-support-relr.sh)
+diff --git a/kernel/gcov/clang.c b/kernel/gcov/clang.c
+index c466c7fbdece5..ea6b45d0fa0d6 100644
+--- a/kernel/gcov/clang.c
++++ b/kernel/gcov/clang.c
+@@ -327,6 +327,8 @@ void gcov_info_add(struct gcov_info *dst, struct gcov_info *src)
+ 
+ 		for (i = 0; i < sfn_ptr->num_counters; i++)
+ 			dfn_ptr->counters[i] += sfn_ptr->counters[i];
++
++		sfn_ptr = list_next_entry(sfn_ptr, head);
+ 	}
+ }
+ 
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 92d94615cbbbd..3cb29835632f2 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -223,11 +223,16 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
+ {
+ 	struct irq_desc *desc = irq_data_to_desc(data);
+ 	struct irq_chip *chip = irq_data_get_irq_chip(data);
++	const struct cpumask  *prog_mask;
+ 	int ret;
+ 
++	static DEFINE_RAW_SPINLOCK(tmp_mask_lock);
++	static struct cpumask tmp_mask;
++
+ 	if (!chip || !chip->irq_set_affinity)
+ 		return -EINVAL;
+ 
++	raw_spin_lock(&tmp_mask_lock);
+ 	/*
+ 	 * If this is a managed interrupt and housekeeping is enabled on
+ 	 * it check whether the requested affinity mask intersects with
+@@ -249,24 +254,34 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
+ 	 */
+ 	if (irqd_affinity_is_managed(data) &&
+ 	    housekeeping_enabled(HK_FLAG_MANAGED_IRQ)) {
+-		const struct cpumask *hk_mask, *prog_mask;
+-
+-		static DEFINE_RAW_SPINLOCK(tmp_mask_lock);
+-		static struct cpumask tmp_mask;
++		const struct cpumask *hk_mask;
+ 
+ 		hk_mask = housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
+ 
+-		raw_spin_lock(&tmp_mask_lock);
+ 		cpumask_and(&tmp_mask, mask, hk_mask);
+ 		if (!cpumask_intersects(&tmp_mask, cpu_online_mask))
+ 			prog_mask = mask;
+ 		else
+ 			prog_mask = &tmp_mask;
+-		ret = chip->irq_set_affinity(data, prog_mask, force);
+-		raw_spin_unlock(&tmp_mask_lock);
+ 	} else {
+-		ret = chip->irq_set_affinity(data, mask, force);
++		prog_mask = mask;
+ 	}
++
++	/*
++	 * Make sure we only provide online CPUs to the irqchip,
++	 * unless we are being asked to force the affinity (in which
++	 * case we do as we are told).
++	 */
++	cpumask_and(&tmp_mask, prog_mask, cpu_online_mask);
++	if (!force && !cpumask_empty(&tmp_mask))
++		ret = chip->irq_set_affinity(data, &tmp_mask, force);
++	else if (force)
++		ret = chip->irq_set_affinity(data, mask, force);
++	else
++		ret = -EINVAL;
++
++	raw_spin_unlock(&tmp_mask_lock);
++
+ 	switch (ret) {
+ 	case IRQ_SET_MASK_OK:
+ 	case IRQ_SET_MASK_OK_DONE:
+diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
+index d217acc9f71b6..b47d95b68ac1a 100644
+--- a/kernel/irq/msi.c
++++ b/kernel/irq/msi.c
+@@ -456,6 +456,13 @@ int __msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
+ 			irqd_clr_can_reserve(irq_data);
+ 			if (domain->flags & IRQ_DOMAIN_MSI_NOMASK_QUIRK)
+ 				irqd_set_msi_nomask_quirk(irq_data);
++			if ((info->flags & MSI_FLAG_ACTIVATE_EARLY) &&
++				irqd_affinity_is_managed(irq_data) &&
++				!cpumask_intersects(irq_data_get_affinity_mask(irq_data),
++						    cpu_online_mask)) {
++				irqd_set_managed_shutdown(irq_data);
++				continue;
++			}
+ 		}
+ 		ret = irq_domain_activate_irq(irq_data, can_reserve);
+ 		if (ret)
+diff --git a/lib/vdso/Makefile b/lib/vdso/Makefile
+index c415a685d61bb..e814061d6aa01 100644
+--- a/lib/vdso/Makefile
++++ b/lib/vdso/Makefile
+@@ -17,6 +17,6 @@ $(error ARCH_REL_TYPE_ABS is not set)
+ endif
+ 
+ quiet_cmd_vdso_check = VDSOCHK $@
+-      cmd_vdso_check = if $(OBJDUMP) -R $@ | egrep -h "$(ARCH_REL_TYPE_ABS)"; \
++      cmd_vdso_check = if $(OBJDUMP) -R $@ | grep -E -h "$(ARCH_REL_TYPE_ABS)"; \
+ 		       then (echo >&2 "$@: dynamic relocations are not supported"; \
+ 			     rm -f $@; /bin/false); fi
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index f2817e80a1ab3..51ccd80e70b6a 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2439,8 +2439,8 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
+ 	enum lru_list lru;
+ 	unsigned long nr_reclaimed = 0;
+ 	unsigned long nr_to_reclaim = sc->nr_to_reclaim;
++	bool proportional_reclaim;
+ 	struct blk_plug plug;
+-	bool scan_adjusted;
+ 
+ 	get_scan_count(lruvec, sc, nr);
+ 
+@@ -2458,8 +2458,8 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
+ 	 * abort proportional reclaim if either the file or anon lru has already
+ 	 * dropped to zero at the first pass.
+ 	 */
+-	scan_adjusted = (!cgroup_reclaim(sc) && !current_is_kswapd() &&
+-			 sc->priority == DEF_PRIORITY);
++	proportional_reclaim = (!cgroup_reclaim(sc) && !current_is_kswapd() &&
++				sc->priority == DEF_PRIORITY);
+ 
+ 	blk_start_plug(&plug);
+ 	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
+@@ -2479,7 +2479,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
+ 
+ 		cond_resched();
+ 
+-		if (nr_reclaimed < nr_to_reclaim || scan_adjusted)
++		if (nr_reclaimed < nr_to_reclaim || proportional_reclaim)
+ 			continue;
+ 
+ 		/*
+@@ -2530,8 +2530,6 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
+ 		nr_scanned = targets[lru] - nr[lru];
+ 		nr[lru] = targets[lru] * (100 - percentage) / 100;
+ 		nr[lru] -= min(nr[lru], nr_scanned);
+-
+-		scan_adjusted = true;
+ 	}
+ 	blk_finish_plug(&plug);
+ 	sc->nr_reclaimed += nr_reclaimed;
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index fec6c800c8988..400219801e63b 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -200,9 +200,11 @@ static void p9_conn_cancel(struct p9_conn *m, int err)
+ 
+ 	list_for_each_entry_safe(req, rtmp, &m->req_list, req_list) {
+ 		list_move(&req->req_list, &cancel_list);
++		req->status = REQ_STATUS_ERROR;
+ 	}
+ 	list_for_each_entry_safe(req, rtmp, &m->unsent_req_list, req_list) {
+ 		list_move(&req->req_list, &cancel_list);
++		req->status = REQ_STATUS_ERROR;
+ 	}
+ 
+ 	spin_unlock(&m->client->lock);
+diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c
+index 852f4b54e8811..1dc5db07650c9 100644
+--- a/net/bridge/br_vlan.c
++++ b/net/bridge/br_vlan.c
+@@ -855,26 +855,37 @@ EXPORT_SYMBOL_GPL(br_vlan_get_proto);
+ 
+ int __br_vlan_set_proto(struct net_bridge *br, __be16 proto)
+ {
++	struct switchdev_attr attr = {
++		.orig_dev = br->dev,
++		.id = SWITCHDEV_ATTR_ID_BRIDGE_VLAN_PROTOCOL,
++		.flags = SWITCHDEV_F_SKIP_EOPNOTSUPP,
++		.u.vlan_protocol = ntohs(proto),
++	};
+ 	int err = 0;
+ 	struct net_bridge_port *p;
+ 	struct net_bridge_vlan *vlan;
+ 	struct net_bridge_vlan_group *vg;
+-	__be16 oldproto;
++	__be16 oldproto = br->vlan_proto;
+ 
+ 	if (br->vlan_proto == proto)
+ 		return 0;
+ 
++	err = switchdev_port_attr_set(br->dev, &attr);
++	if (err && err != -EOPNOTSUPP)
++		return err;
++
+ 	/* Add VLANs for the new proto to the device filter. */
+ 	list_for_each_entry(p, &br->port_list, list) {
+ 		vg = nbp_vlan_group(p);
+ 		list_for_each_entry(vlan, &vg->vlan_list, vlist) {
++			if (vlan->priv_flags & BR_VLFLAG_ADDED_BY_SWITCHDEV)
++				continue;
+ 			err = vlan_vid_add(p->dev, proto, vlan->vid);
+ 			if (err)
+ 				goto err_filt;
+ 		}
+ 	}
+ 
+-	oldproto = br->vlan_proto;
+ 	br->vlan_proto = proto;
+ 
+ 	recalculate_group_addr(br);
+@@ -883,20 +894,32 @@ int __br_vlan_set_proto(struct net_bridge *br, __be16 proto)
+ 	/* Delete VLANs for the old proto from the device filter. */
+ 	list_for_each_entry(p, &br->port_list, list) {
+ 		vg = nbp_vlan_group(p);
+-		list_for_each_entry(vlan, &vg->vlan_list, vlist)
++		list_for_each_entry(vlan, &vg->vlan_list, vlist) {
++			if (vlan->priv_flags & BR_VLFLAG_ADDED_BY_SWITCHDEV)
++				continue;
+ 			vlan_vid_del(p->dev, oldproto, vlan->vid);
++		}
+ 	}
+ 
+ 	return 0;
+ 
+ err_filt:
+-	list_for_each_entry_continue_reverse(vlan, &vg->vlan_list, vlist)
++	attr.u.vlan_protocol = ntohs(oldproto);
++	switchdev_port_attr_set(br->dev, &attr);
++
++	list_for_each_entry_continue_reverse(vlan, &vg->vlan_list, vlist) {
++		if (vlan->priv_flags & BR_VLFLAG_ADDED_BY_SWITCHDEV)
++			continue;
+ 		vlan_vid_del(p->dev, proto, vlan->vid);
++	}
+ 
+ 	list_for_each_entry_continue_reverse(p, &br->port_list, list) {
+ 		vg = nbp_vlan_group(p);
+-		list_for_each_entry(vlan, &vg->vlan_list, vlist)
++		list_for_each_entry(vlan, &vg->vlan_list, vlist) {
++			if (vlan->priv_flags & BR_VLFLAG_ADDED_BY_SWITCHDEV)
++				continue;
+ 			vlan_vid_del(p->dev, proto, vlan->vid);
++		}
+ 	}
+ 
+ 	return err;
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index ed120828c7e21..b8d082f557183 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -263,7 +263,7 @@ skb_flow_dissect_ct(const struct sk_buff *skb,
+ 	key->ct_zone = ct->zone.id;
+ #endif
+ #if IS_ENABLED(CONFIG_NF_CONNTRACK_MARK)
+-	key->ct_mark = ct->mark;
++	key->ct_mark = READ_ONCE(ct->mark);
+ #endif
+ 
+ 	cl = nf_ct_labels_find(ct);
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index 2455b0c0e4866..a2a8b952b3c55 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -130,6 +130,8 @@ failure:
+ 	 * This unhashes the socket and releases the local port, if necessary.
+ 	 */
+ 	dccp_set_state(sk, DCCP_CLOSED);
++	if (!(sk->sk_userlocks & SOCK_BINDADDR_LOCK))
++		inet_reset_saddr(sk);
+ 	ip_rt_put(rt);
+ 	sk->sk_route_caps = 0;
+ 	inet->inet_dport = 0;
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 2be5c69824f94..21c61a9c3b152 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -957,6 +957,8 @@ static int dccp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ 
+ late_failure:
+ 	dccp_set_state(sk, DCCP_CLOSED);
++	if (!(sk->sk_userlocks & SOCK_BINDADDR_LOCK))
++		inet_reset_saddr(sk);
+ 	__sk_dst_reset(sk);
+ failure:
+ 	inet->inet_dport = 0;
+diff --git a/net/ipv4/Kconfig b/net/ipv4/Kconfig
+index 87983e70f03f3..23b06063e1a51 100644
+--- a/net/ipv4/Kconfig
++++ b/net/ipv4/Kconfig
+@@ -403,6 +403,16 @@ config INET_IPCOMP
+ 
+ 	  If unsure, say Y.
+ 
++config INET_TABLE_PERTURB_ORDER
++	int "INET: Source port perturbation table size (as power of 2)" if EXPERT
++	default 16
++	help
++	  Source port perturbation table size (as power of 2) for
++	  RFC 6056 3.3.4.  Algorithm 4: Double-Hash Port Selection Algorithm.
++
++	  The default is almost always what you want.
++	  Only change this if you know what you are doing.
++
+ config INET_XFRM_TUNNEL
+ 	tristate
+ 	select INET_TUNNEL
+diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
+index 3450c9ba2728c..84257678160a3 100644
+--- a/net/ipv4/esp4_offload.c
++++ b/net/ipv4/esp4_offload.c
+@@ -312,6 +312,9 @@ static int esp_xmit(struct xfrm_state *x, struct sk_buff *skb,  netdev_features_
+ 			xo->seq.low += skb_shinfo(skb)->gso_segs;
+ 	}
+ 
++	if (xo->seq.low < seq)
++		xo->seq.hi++;
++
+ 	esp.seqno = cpu_to_be64(seq + ((u64)xo->seq.hi << 32));
+ 
+ 	ip_hdr(skb)->tot_len = htons(skb->len);
+diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
+index a28f525e2c474..d11fb16234a6a 100644
+--- a/net/ipv4/fib_trie.c
++++ b/net/ipv4/fib_trie.c
+@@ -1331,8 +1331,10 @@ int fib_table_insert(struct net *net, struct fib_table *tb,
+ 
+ 	/* The alias was already inserted, so the node must exist. */
+ 	l = l ? l : fib_find_node(t, &tp, key);
+-	if (WARN_ON_ONCE(!l))
++	if (WARN_ON_ONCE(!l)) {
++		err = -ENOENT;
+ 		goto out_free_new_fa;
++	}
+ 
+ 	if (fib_find_alias(&l->leaf, new_fa->fa_slen, 0, 0, tb->tb_id, true) ==
+ 	    new_fa) {
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index c0de655fffd74..c68a1dae25ca3 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -721,13 +721,13 @@ EXPORT_SYMBOL_GPL(inet_unhash);
+  * Note that we use 32bit integers (vs RFC 'short integers')
+  * because 2^16 is not a multiple of num_ephemeral and this
+  * property might be used by clever attacker.
++ *
+  * RFC claims using TABLE_LENGTH=10 buckets gives an improvement, though
+- * attacks were since demonstrated, thus we use 65536 instead to really
+- * give more isolation and privacy, at the expense of 256kB of kernel
+- * memory.
++ * attacks were since demonstrated, thus we use 65536 by default instead
++ * to really give more isolation and privacy, at the expense of 256kB
++ * of kernel memory.
+  */
+-#define INET_TABLE_PERTURB_SHIFT 16
+-#define INET_TABLE_PERTURB_SIZE (1 << INET_TABLE_PERTURB_SHIFT)
++#define INET_TABLE_PERTURB_SIZE (1 << CONFIG_INET_TABLE_PERTURB_ORDER)
+ static u32 *table_perturb;
+ 
+ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+diff --git a/net/ipv4/ip_input.c b/net/ipv4/ip_input.c
+index f6b3237e88cab..eccd7897e7aa6 100644
+--- a/net/ipv4/ip_input.c
++++ b/net/ipv4/ip_input.c
+@@ -361,6 +361,11 @@ static int ip_rcv_finish_core(struct net *net, struct sock *sk,
+ 					   iph->tos, dev);
+ 		if (unlikely(err))
+ 			goto drop_error;
++	} else {
++		struct in_device *in_dev = __in_dev_get_rcu(dev);
++
++		if (in_dev && IN_DEV_ORCONF(in_dev, NOPOLICY))
++			IPCB(skb)->flags |= IPSKB_NOPOLICY;
+ 	}
+ 
+ #ifdef CONFIG_IP_ROUTE_CLASSID
+diff --git a/net/ipv4/netfilter/ipt_CLUSTERIP.c b/net/ipv4/netfilter/ipt_CLUSTERIP.c
+index 1088564d4dbcb..77e3b67e8790b 100644
+--- a/net/ipv4/netfilter/ipt_CLUSTERIP.c
++++ b/net/ipv4/netfilter/ipt_CLUSTERIP.c
+@@ -424,7 +424,7 @@ clusterip_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ 
+ 	switch (ctinfo) {
+ 	case IP_CT_NEW:
+-		ct->mark = hash;
++		WRITE_ONCE(ct->mark, hash);
+ 		break;
+ 	case IP_CT_RELATED:
+ 	case IP_CT_RELATED_REPLY:
+@@ -441,7 +441,7 @@ clusterip_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ #ifdef DEBUG
+ 	nf_ct_dump_tuple_ip(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple);
+ #endif
+-	pr_debug("hash=%u ct_hash=%u ", hash, ct->mark);
++	pr_debug("hash=%u ct_hash=%u ", hash, READ_ONCE(ct->mark));
+ 	if (!clusterip_responsible(cipinfo->config, hash)) {
+ 		pr_debug("not responsible\n");
+ 		return NF_DROP;
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 31a8009f74eea..8bd7b1ec3b6a3 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -322,6 +322,8 @@ failure:
+ 	 * if necessary.
+ 	 */
+ 	tcp_set_state(sk, TCP_CLOSE);
++	if (!(sk->sk_userlocks & SOCK_BINDADDR_LOCK))
++		inet_reset_saddr(sk);
+ 	ip_rt_put(rt);
+ 	sk->sk_route_caps = 0;
+ 	inet->inet_dport = 0;
+diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
+index 1c3f02d05d2bf..7608be04d0f58 100644
+--- a/net/ipv6/esp6_offload.c
++++ b/net/ipv6/esp6_offload.c
+@@ -343,6 +343,9 @@ static int esp6_xmit(struct xfrm_state *x, struct sk_buff *skb,  netdev_features
+ 			xo->seq.low += skb_shinfo(skb)->gso_segs;
+ 	}
+ 
++	if (xo->seq.low < seq)
++		xo->seq.hi++;
++
+ 	esp.seqno = cpu_to_be64(xo->seq.low + ((u64)xo->seq.hi << 32));
+ 
+ 	len = skb->len - sizeof(struct ipv6hdr);
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index a558dd9d177b0..c599e14be414d 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -339,6 +339,8 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ 
+ late_failure:
+ 	tcp_set_state(sk, TCP_CLOSE);
++	if (!(sk->sk_userlocks & SOCK_BINDADDR_LOCK))
++		inet_reset_saddr(sk);
+ failure:
+ 	inet->inet_dport = 0;
+ 	sk->sk_route_caps = 0;
+diff --git a/net/ipv6/xfrm6_policy.c b/net/ipv6/xfrm6_policy.c
+index af7a4b8b1e9c4..247296e3294bd 100644
+--- a/net/ipv6/xfrm6_policy.c
++++ b/net/ipv6/xfrm6_policy.c
+@@ -289,9 +289,13 @@ int __init xfrm6_init(void)
+ 	if (ret)
+ 		goto out_state;
+ 
+-	register_pernet_subsys(&xfrm6_net_ops);
++	ret = register_pernet_subsys(&xfrm6_net_ops);
++	if (ret)
++		goto out_protocol;
+ out:
+ 	return ret;
++out_protocol:
++	xfrm6_protocol_fini();
+ out_state:
+ 	xfrm6_state_fini();
+ out_policy:
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 05e2710988883..8bc7d399987b2 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2909,7 +2909,7 @@ static int count_ah_combs(const struct xfrm_tmpl *t)
+ 			break;
+ 		if (!aalg->pfkey_supported)
+ 			continue;
+-		if (aalg_tmpl_set(t, aalg) && aalg->available)
++		if (aalg_tmpl_set(t, aalg))
+ 			sz += sizeof(struct sadb_comb);
+ 	}
+ 	return sz + sizeof(struct sadb_prop);
+@@ -2927,7 +2927,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
+ 		if (!ealg->pfkey_supported)
+ 			continue;
+ 
+-		if (!(ealg_tmpl_set(t, ealg) && ealg->available))
++		if (!(ealg_tmpl_set(t, ealg)))
+ 			continue;
+ 
+ 		for (k = 1; ; k++) {
+@@ -2938,16 +2938,17 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
+ 			if (!aalg->pfkey_supported)
+ 				continue;
+ 
+-			if (aalg_tmpl_set(t, aalg) && aalg->available)
++			if (aalg_tmpl_set(t, aalg))
+ 				sz += sizeof(struct sadb_comb);
+ 		}
+ 	}
+ 	return sz + sizeof(struct sadb_prop);
+ }
+ 
+-static void dump_ah_combs(struct sk_buff *skb, const struct xfrm_tmpl *t)
++static int dump_ah_combs(struct sk_buff *skb, const struct xfrm_tmpl *t)
+ {
+ 	struct sadb_prop *p;
++	int sz = 0;
+ 	int i;
+ 
+ 	p = skb_put(skb, sizeof(struct sadb_prop));
+@@ -2975,13 +2976,17 @@ static void dump_ah_combs(struct sk_buff *skb, const struct xfrm_tmpl *t)
+ 			c->sadb_comb_soft_addtime = 20*60*60;
+ 			c->sadb_comb_hard_usetime = 8*60*60;
+ 			c->sadb_comb_soft_usetime = 7*60*60;
++			sz += sizeof(*c);
+ 		}
+ 	}
++
++	return sz + sizeof(*p);
+ }
+ 
+-static void dump_esp_combs(struct sk_buff *skb, const struct xfrm_tmpl *t)
++static int dump_esp_combs(struct sk_buff *skb, const struct xfrm_tmpl *t)
+ {
+ 	struct sadb_prop *p;
++	int sz = 0;
+ 	int i, k;
+ 
+ 	p = skb_put(skb, sizeof(struct sadb_prop));
+@@ -3023,8 +3028,11 @@ static void dump_esp_combs(struct sk_buff *skb, const struct xfrm_tmpl *t)
+ 			c->sadb_comb_soft_addtime = 20*60*60;
+ 			c->sadb_comb_hard_usetime = 8*60*60;
+ 			c->sadb_comb_soft_usetime = 7*60*60;
++			sz += sizeof(*c);
+ 		}
+ 	}
++
++	return sz + sizeof(*p);
+ }
+ 
+ static int key_notify_policy_expire(struct xfrm_policy *xp, const struct km_event *c)
+@@ -3154,6 +3162,7 @@ static int pfkey_send_acquire(struct xfrm_state *x, struct xfrm_tmpl *t, struct
+ 	struct sadb_x_sec_ctx *sec_ctx;
+ 	struct xfrm_sec_ctx *xfrm_ctx;
+ 	int ctx_size = 0;
++	int alg_size = 0;
+ 
+ 	sockaddr_size = pfkey_sockaddr_size(x->props.family);
+ 	if (!sockaddr_size)
+@@ -3165,16 +3174,16 @@ static int pfkey_send_acquire(struct xfrm_state *x, struct xfrm_tmpl *t, struct
+ 		sizeof(struct sadb_x_policy);
+ 
+ 	if (x->id.proto == IPPROTO_AH)
+-		size += count_ah_combs(t);
++		alg_size = count_ah_combs(t);
+ 	else if (x->id.proto == IPPROTO_ESP)
+-		size += count_esp_combs(t);
++		alg_size = count_esp_combs(t);
+ 
+ 	if ((xfrm_ctx = x->security)) {
+ 		ctx_size = PFKEY_ALIGN8(xfrm_ctx->ctx_len);
+ 		size +=  sizeof(struct sadb_x_sec_ctx) + ctx_size;
+ 	}
+ 
+-	skb =  alloc_skb(size + 16, GFP_ATOMIC);
++	skb =  alloc_skb(size + alg_size + 16, GFP_ATOMIC);
+ 	if (skb == NULL)
+ 		return -ENOMEM;
+ 
+@@ -3228,10 +3237,13 @@ static int pfkey_send_acquire(struct xfrm_state *x, struct xfrm_tmpl *t, struct
+ 	pol->sadb_x_policy_priority = xp->priority;
+ 
+ 	/* Set sadb_comb's. */
++	alg_size = 0;
+ 	if (x->id.proto == IPPROTO_AH)
+-		dump_ah_combs(skb, t);
++		alg_size = dump_ah_combs(skb, t);
+ 	else if (x->id.proto == IPPROTO_ESP)
+-		dump_esp_combs(skb, t);
++		alg_size = dump_esp_combs(skb, t);
++
++	hdr->sadb_msg_len += alg_size / 8;
+ 
+ 	/* security context */
+ 	if (xfrm_ctx) {
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 73893025922fd..ae90ac3be59aa 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -1349,8 +1349,10 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 	ieee80211_led_exit(local);
+ 	destroy_workqueue(local->workqueue);
+  fail_workqueue:
+-	if (local->wiphy_ciphers_allocated)
++	if (local->wiphy_ciphers_allocated) {
+ 		kfree(local->hw.wiphy->cipher_suites);
++		local->wiphy_ciphers_allocated = false;
++	}
+ 	kfree(local->int_scan_req);
+ 	return result;
+ }
+@@ -1420,8 +1422,10 @@ void ieee80211_free_hw(struct ieee80211_hw *hw)
+ 	mutex_destroy(&local->iflist_mtx);
+ 	mutex_destroy(&local->mtx);
+ 
+-	if (local->wiphy_ciphers_allocated)
++	if (local->wiphy_ciphers_allocated) {
+ 		kfree(local->hw.wiphy->cipher_suites);
++		local->wiphy_ciphers_allocated = false;
++	}
+ 
+ 	idr_for_each(&local->ack_status_frames,
+ 		     ieee80211_free_ack_frame, NULL);
+diff --git a/net/mac80211/mesh_pathtbl.c b/net/mac80211/mesh_pathtbl.c
+index 870c8eafef929..c2b051e0610ab 100644
+--- a/net/mac80211/mesh_pathtbl.c
++++ b/net/mac80211/mesh_pathtbl.c
+@@ -718,7 +718,7 @@ int mesh_path_send_to_gates(struct mesh_path *mpath)
+ void mesh_path_discard_frame(struct ieee80211_sub_if_data *sdata,
+ 			     struct sk_buff *skb)
+ {
+-	kfree_skb(skb);
++	ieee80211_free_txskb(&sdata->local->hw, skb);
+ 	sdata->u.mesh.mshstats.dropped_frames_no_route++;
+ }
+ 
+diff --git a/net/netfilter/ipset/ip_set_hash_ip.c b/net/netfilter/ipset/ip_set_hash_ip.c
+index 5d6d68eaf6a9c..d7a81b2250e71 100644
+--- a/net/netfilter/ipset/ip_set_hash_ip.c
++++ b/net/netfilter/ipset/ip_set_hash_ip.c
+@@ -131,8 +131,11 @@ hash_ip4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP_TO], &ip_to);
+ 		if (ret)
+ 			return ret;
+-		if (ip > ip_to)
++		if (ip > ip_to) {
++			if (ip_to == 0)
++				return -IPSET_ERR_HASH_ELEM;
+ 			swap(ip, ip_to);
++		}
+ 	} else if (tb[IPSET_ATTR_CIDR]) {
+ 		u8 cidr = nla_get_u8(tb[IPSET_ATTR_CIDR]);
+ 
+@@ -143,18 +146,20 @@ hash_ip4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 
+ 	hosts = h->netmask == 32 ? 1 : 2 << (32 - h->netmask - 1);
+ 
+-	if (retried) {
++	/* 64bit division is not allowed on 32bit */
++	if (((u64)ip_to - ip + 1) >> (32 - h->netmask) > IPSET_MAX_RANGE)
++		return -ERANGE;
++
++	if (retried)
+ 		ip = ntohl(h->next.ip);
+-		e.ip = htonl(ip);
+-	}
+ 	for (; ip <= ip_to;) {
++		e.ip = htonl(ip);
+ 		ret = adtfn(set, &e, &ext, &ext, flags);
+ 		if (ret && !ip_set_eexist(ret, flags))
+ 			return ret;
+ 
+ 		ip += hosts;
+-		e.ip = htonl(ip);
+-		if (e.ip == 0)
++		if (ip == 0)
+ 			return 0;
+ 
+ 		ret = 0;
+diff --git a/net/netfilter/ipset/ip_set_hash_ipmark.c b/net/netfilter/ipset/ip_set_hash_ipmark.c
+index aba1df617d6e6..eefce34a34f08 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipmark.c
++++ b/net/netfilter/ipset/ip_set_hash_ipmark.c
+@@ -120,6 +120,8 @@ hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 
+ 	e.mark = ntohl(nla_get_be32(tb[IPSET_ATTR_MARK]));
+ 	e.mark &= h->markmask;
++	if (e.mark == 0 && e.ip == 0)
++		return -IPSET_ERR_HASH_ELEM;
+ 
+ 	if (adt == IPSET_TEST ||
+ 	    !(tb[IPSET_ATTR_IP_TO] || tb[IPSET_ATTR_CIDR])) {
+@@ -132,8 +134,11 @@ hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP_TO], &ip_to);
+ 		if (ret)
+ 			return ret;
+-		if (ip > ip_to)
++		if (ip > ip_to) {
++			if (e.mark == 0 && ip_to == 0)
++				return -IPSET_ERR_HASH_ELEM;
+ 			swap(ip, ip_to);
++		}
+ 	} else if (tb[IPSET_ATTR_CIDR]) {
+ 		u8 cidr = nla_get_u8(tb[IPSET_ATTR_CIDR]);
+ 
+@@ -142,6 +147,9 @@ hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		ip_set_mask_from_to(ip, ip_to, cidr);
+ 	}
+ 
++	if (((u64)ip_to - ip + 1) > IPSET_MAX_RANGE)
++		return -ERANGE;
++
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+ 	for (; ip <= ip_to; ip++) {
+diff --git a/net/netfilter/ipset/ip_set_hash_ipport.c b/net/netfilter/ipset/ip_set_hash_ipport.c
+index 1ff228717e298..4a54e9e8ae59f 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipport.c
++++ b/net/netfilter/ipset/ip_set_hash_ipport.c
+@@ -172,6 +172,9 @@ hash_ipport4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 			swap(port, port_to);
+ 	}
+ 
++	if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE)
++		return -ERANGE;
++
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+ 	for (; ip <= ip_to; ip++) {
+diff --git a/net/netfilter/ipset/ip_set_hash_ipportip.c b/net/netfilter/ipset/ip_set_hash_ipportip.c
+index fa88afd812fa6..09737de5ecc34 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipportip.c
++++ b/net/netfilter/ipset/ip_set_hash_ipportip.c
+@@ -179,6 +179,9 @@ hash_ipportip4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 			swap(port, port_to);
+ 	}
+ 
++	if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE)
++		return -ERANGE;
++
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+ 	for (; ip <= ip_to; ip++) {
+diff --git a/net/netfilter/ipset/ip_set_hash_ipportnet.c b/net/netfilter/ipset/ip_set_hash_ipportnet.c
+index eef6ecfcb4099..02685371a6828 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipportnet.c
++++ b/net/netfilter/ipset/ip_set_hash_ipportnet.c
+@@ -252,6 +252,9 @@ hash_ipportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 			swap(port, port_to);
+ 	}
+ 
++	if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE)
++		return -ERANGE;
++
+ 	ip2_to = ip2_from;
+ 	if (tb[IPSET_ATTR_IP2_TO]) {
+ 		ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP2_TO], &ip2_to);
+diff --git a/net/netfilter/ipset/ip_set_hash_net.c b/net/netfilter/ipset/ip_set_hash_net.c
+index 136cf0781d3a4..9d1beaacb9730 100644
+--- a/net/netfilter/ipset/ip_set_hash_net.c
++++ b/net/netfilter/ipset/ip_set_hash_net.c
+@@ -139,7 +139,7 @@ hash_net4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_net4_elem e = { .cidr = HOST_MASK };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 ip = 0, ip_to = 0;
++	u32 ip = 0, ip_to = 0, ipn, n = 0;
+ 	int ret;
+ 
+ 	if (tb[IPSET_ATTR_LINENO])
+@@ -187,6 +187,15 @@ hash_net4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		if (ip + UINT_MAX == ip_to)
+ 			return -IPSET_ERR_HASH_RANGE;
+ 	}
++	ipn = ip;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr);
++		n++;
++	} while (ipn++ < ip_to);
++
++	if (n > IPSET_MAX_RANGE)
++		return -ERANGE;
++
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+ 	do {
+diff --git a/net/netfilter/ipset/ip_set_hash_netiface.c b/net/netfilter/ipset/ip_set_hash_netiface.c
+index be5e95a0d8762..c3ada9c63fa38 100644
+--- a/net/netfilter/ipset/ip_set_hash_netiface.c
++++ b/net/netfilter/ipset/ip_set_hash_netiface.c
+@@ -201,7 +201,7 @@ hash_netiface4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_netiface4_elem e = { .cidr = HOST_MASK, .elem = 1 };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 ip = 0, ip_to = 0;
++	u32 ip = 0, ip_to = 0, ipn, n = 0;
+ 	int ret;
+ 
+ 	if (tb[IPSET_ATTR_LINENO])
+@@ -255,6 +255,14 @@ hash_netiface4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	} else {
+ 		ip_set_mask_from_to(ip, ip_to, e.cidr);
+ 	}
++	ipn = ip;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr);
++		n++;
++	} while (ipn++ < ip_to);
++
++	if (n > IPSET_MAX_RANGE)
++		return -ERANGE;
+ 
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+diff --git a/net/netfilter/ipset/ip_set_hash_netnet.c b/net/netfilter/ipset/ip_set_hash_netnet.c
+index da4ef910b12d1..b1411bc91a404 100644
+--- a/net/netfilter/ipset/ip_set_hash_netnet.c
++++ b/net/netfilter/ipset/ip_set_hash_netnet.c
+@@ -167,7 +167,8 @@ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	struct hash_netnet4_elem e = { };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+ 	u32 ip = 0, ip_to = 0;
+-	u32 ip2 = 0, ip2_from = 0, ip2_to = 0;
++	u32 ip2 = 0, ip2_from = 0, ip2_to = 0, ipn;
++	u64 n = 0, m = 0;
+ 	int ret;
+ 
+ 	if (tb[IPSET_ATTR_LINENO])
+@@ -243,6 +244,19 @@ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	} else {
+ 		ip_set_mask_from_to(ip2_from, ip2_to, e.cidr[1]);
+ 	}
++	ipn = ip;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr[0]);
++		n++;
++	} while (ipn++ < ip_to);
++	ipn = ip2_from;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip2_to, &e.cidr[1]);
++		m++;
++	} while (ipn++ < ip2_to);
++
++	if (n*m > IPSET_MAX_RANGE)
++		return -ERANGE;
+ 
+ 	if (retried) {
+ 		ip = ntohl(h->next.ip[0]);
+diff --git a/net/netfilter/ipset/ip_set_hash_netport.c b/net/netfilter/ipset/ip_set_hash_netport.c
+index 34448df80fb98..d26d13528fe8b 100644
+--- a/net/netfilter/ipset/ip_set_hash_netport.c
++++ b/net/netfilter/ipset/ip_set_hash_netport.c
+@@ -157,7 +157,8 @@ hash_netport4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_netport4_elem e = { .cidr = HOST_MASK - 1 };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 port, port_to, p = 0, ip = 0, ip_to = 0;
++	u32 port, port_to, p = 0, ip = 0, ip_to = 0, ipn;
++	u64 n = 0;
+ 	bool with_ports = false;
+ 	u8 cidr;
+ 	int ret;
+@@ -234,6 +235,14 @@ hash_netport4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	} else {
+ 		ip_set_mask_from_to(ip, ip_to, e.cidr + 1);
+ 	}
++	ipn = ip;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip_to, &cidr);
++		n++;
++	} while (ipn++ < ip_to);
++
++	if (n*(port_to - port + 1) > IPSET_MAX_RANGE)
++		return -ERANGE;
+ 
+ 	if (retried) {
+ 		ip = ntohl(h->next.ip);
+diff --git a/net/netfilter/ipset/ip_set_hash_netportnet.c b/net/netfilter/ipset/ip_set_hash_netportnet.c
+index 934c1712cba85..6446f4fccc729 100644
+--- a/net/netfilter/ipset/ip_set_hash_netportnet.c
++++ b/net/netfilter/ipset/ip_set_hash_netportnet.c
+@@ -181,7 +181,8 @@ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	struct hash_netportnet4_elem e = { };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+ 	u32 ip = 0, ip_to = 0, p = 0, port, port_to;
+-	u32 ip2_from = 0, ip2_to = 0, ip2;
++	u32 ip2_from = 0, ip2_to = 0, ip2, ipn;
++	u64 n = 0, m = 0;
+ 	bool with_ports = false;
+ 	int ret;
+ 
+@@ -283,6 +284,19 @@ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	} else {
+ 		ip_set_mask_from_to(ip2_from, ip2_to, e.cidr[1]);
+ 	}
++	ipn = ip;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr[0]);
++		n++;
++	} while (ipn++ < ip_to);
++	ipn = ip2_from;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip2_to, &e.cidr[1]);
++		m++;
++	} while (ipn++ < ip2_to);
++
++	if (n*m*(port_to - port + 1) > IPSET_MAX_RANGE)
++		return -ERANGE;
+ 
+ 	if (retried) {
+ 		ip = ntohl(h->next.ip[0]);
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 8369af0c50eab..193a18bfddc0a 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -1598,7 +1598,7 @@ init_conntrack(struct net *net, struct nf_conn *tmpl,
+ 			}
+ 
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+-			ct->mark = exp->master->mark;
++			ct->mark = READ_ONCE(exp->master->mark);
+ #endif
+ #ifdef CONFIG_NF_CONNTRACK_SECMARK
+ 			ct->secmark = exp->master->secmark;
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 9e6898164199b..c402283e7545b 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -317,9 +317,9 @@ nla_put_failure:
+ }
+ 
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+-static int ctnetlink_dump_mark(struct sk_buff *skb, const struct nf_conn *ct)
++static int ctnetlink_dump_mark(struct sk_buff *skb, u32 mark)
+ {
+-	if (nla_put_be32(skb, CTA_MARK, htonl(ct->mark)))
++	if (nla_put_be32(skb, CTA_MARK, htonl(mark)))
+ 		goto nla_put_failure;
+ 	return 0;
+ 
+@@ -532,7 +532,7 @@ static int ctnetlink_dump_extinfo(struct sk_buff *skb,
+ static int ctnetlink_dump_info(struct sk_buff *skb, struct nf_conn *ct)
+ {
+ 	if (ctnetlink_dump_status(skb, ct) < 0 ||
+-	    ctnetlink_dump_mark(skb, ct) < 0 ||
++	    ctnetlink_dump_mark(skb, READ_ONCE(ct->mark)) < 0 ||
+ 	    ctnetlink_dump_secctx(skb, ct) < 0 ||
+ 	    ctnetlink_dump_id(skb, ct) < 0 ||
+ 	    ctnetlink_dump_use(skb, ct) < 0 ||
+@@ -711,6 +711,7 @@ ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item)
+ 	struct sk_buff *skb;
+ 	unsigned int type;
+ 	unsigned int flags = 0, group;
++	u32 mark;
+ 	int err;
+ 
+ 	if (events & (1 << IPCT_DESTROY)) {
+@@ -811,8 +812,9 @@ ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item)
+ 	}
+ 
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+-	if ((events & (1 << IPCT_MARK) || ct->mark)
+-	    && ctnetlink_dump_mark(skb, ct) < 0)
++	mark = READ_ONCE(ct->mark);
++	if ((events & (1 << IPCT_MARK) || mark) &&
++	    ctnetlink_dump_mark(skb, mark) < 0)
+ 		goto nla_put_failure;
+ #endif
+ 	nlmsg_end(skb, nlh);
+@@ -1099,7 +1101,7 @@ static int ctnetlink_filter_match(struct nf_conn *ct, void *data)
+ 	}
+ 
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+-	if ((ct->mark & filter->mark.mask) != filter->mark.val)
++	if ((READ_ONCE(ct->mark) & filter->mark.mask) != filter->mark.val)
+ 		goto ignore_entry;
+ #endif
+ 
+@@ -1979,9 +1981,9 @@ static void ctnetlink_change_mark(struct nf_conn *ct,
+ 		mask = ~ntohl(nla_get_be32(cda[CTA_MARK_MASK]));
+ 
+ 	mark = ntohl(nla_get_be32(cda[CTA_MARK]));
+-	newmark = (ct->mark & mask) ^ mark;
+-	if (newmark != ct->mark)
+-		ct->mark = newmark;
++	newmark = (READ_ONCE(ct->mark) & mask) ^ mark;
++	if (newmark != READ_ONCE(ct->mark))
++		WRITE_ONCE(ct->mark, newmark);
+ }
+ #endif
+ 
+@@ -2669,6 +2671,7 @@ static int __ctnetlink_glue_build(struct sk_buff *skb, struct nf_conn *ct)
+ {
+ 	const struct nf_conntrack_zone *zone;
+ 	struct nlattr *nest_parms;
++	u32 mark;
+ 
+ 	zone = nf_ct_zone(ct);
+ 
+@@ -2726,7 +2729,8 @@ static int __ctnetlink_glue_build(struct sk_buff *skb, struct nf_conn *ct)
+ 		goto nla_put_failure;
+ 
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+-	if (ct->mark && ctnetlink_dump_mark(skb, ct) < 0)
++	mark = READ_ONCE(ct->mark);
++	if (mark && ctnetlink_dump_mark(skb, mark) < 0)
+ 		goto nla_put_failure;
+ #endif
+ 	if (ctnetlink_dump_labels(skb, ct) < 0)
+diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c
+index 313d1c8ff066a..a7f88cdf3f87c 100644
+--- a/net/netfilter/nf_conntrack_standalone.c
++++ b/net/netfilter/nf_conntrack_standalone.c
+@@ -360,7 +360,7 @@ static int ct_seq_show(struct seq_file *s, void *v)
+ 		goto release;
+ 
+ #if defined(CONFIG_NF_CONNTRACK_MARK)
+-	seq_printf(s, "mark=%u ", ct->mark);
++	seq_printf(s, "mark=%u ", READ_ONCE(ct->mark));
+ #endif
+ 
+ 	ct_show_secctx(s, ct);
+diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
+index d1862782be450..28306cb667190 100644
+--- a/net/netfilter/nf_flow_table_offload.c
++++ b/net/netfilter/nf_flow_table_offload.c
+@@ -910,6 +910,7 @@ static int nf_flow_table_block_setup(struct nf_flowtable *flowtable,
+ 	struct flow_block_cb *block_cb, *next;
+ 	int err = 0;
+ 
++	down_write(&flowtable->flow_block_lock);
+ 	switch (cmd) {
+ 	case FLOW_BLOCK_BIND:
+ 		list_splice(&bo->cb_list, &flowtable->flow_block.cb_list);
+@@ -924,6 +925,7 @@ static int nf_flow_table_block_setup(struct nf_flowtable *flowtable,
+ 		WARN_ON_ONCE(1);
+ 		err = -EOPNOTSUPP;
+ 	}
++	up_write(&flowtable->flow_block_lock);
+ 
+ 	return err;
+ }
+@@ -980,7 +982,9 @@ static int nf_flow_table_offload_cmd(struct flow_block_offload *bo,
+ 
+ 	nf_flow_table_block_offload_init(bo, dev_net(dev), cmd, flowtable,
+ 					 extack);
++	down_write(&flowtable->flow_block_lock);
+ 	err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_FT, bo);
++	up_write(&flowtable->flow_block_lock);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index 781118465d466..14093d86e6823 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -97,7 +97,7 @@ static void nft_ct_get_eval(const struct nft_expr *expr,
+ 		return;
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+ 	case NFT_CT_MARK:
+-		*dest = ct->mark;
++		*dest = READ_ONCE(ct->mark);
+ 		return;
+ #endif
+ #ifdef CONFIG_NF_CONNTRACK_SECMARK
+@@ -294,8 +294,8 @@ static void nft_ct_set_eval(const struct nft_expr *expr,
+ 	switch (priv->key) {
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+ 	case NFT_CT_MARK:
+-		if (ct->mark != value) {
+-			ct->mark = value;
++		if (READ_ONCE(ct->mark) != value) {
++			WRITE_ONCE(ct->mark, value);
+ 			nf_conntrack_event_cache(IPCT_MARK, ct);
+ 		}
+ 		break;
+diff --git a/net/netfilter/xt_connmark.c b/net/netfilter/xt_connmark.c
+index e5ebc0810675a..ad3c033db64e7 100644
+--- a/net/netfilter/xt_connmark.c
++++ b/net/netfilter/xt_connmark.c
+@@ -30,6 +30,7 @@ connmark_tg_shift(struct sk_buff *skb, const struct xt_connmark_tginfo2 *info)
+ 	u_int32_t new_targetmark;
+ 	struct nf_conn *ct;
+ 	u_int32_t newmark;
++	u_int32_t oldmark;
+ 
+ 	ct = nf_ct_get(skb, &ctinfo);
+ 	if (ct == NULL)
+@@ -37,14 +38,15 @@ connmark_tg_shift(struct sk_buff *skb, const struct xt_connmark_tginfo2 *info)
+ 
+ 	switch (info->mode) {
+ 	case XT_CONNMARK_SET:
+-		newmark = (ct->mark & ~info->ctmask) ^ info->ctmark;
++		oldmark = READ_ONCE(ct->mark);
++		newmark = (oldmark & ~info->ctmask) ^ info->ctmark;
+ 		if (info->shift_dir == D_SHIFT_RIGHT)
+ 			newmark >>= info->shift_bits;
+ 		else
+ 			newmark <<= info->shift_bits;
+ 
+-		if (ct->mark != newmark) {
+-			ct->mark = newmark;
++		if (READ_ONCE(ct->mark) != newmark) {
++			WRITE_ONCE(ct->mark, newmark);
+ 			nf_conntrack_event_cache(IPCT_MARK, ct);
+ 		}
+ 		break;
+@@ -55,15 +57,15 @@ connmark_tg_shift(struct sk_buff *skb, const struct xt_connmark_tginfo2 *info)
+ 		else
+ 			new_targetmark <<= info->shift_bits;
+ 
+-		newmark = (ct->mark & ~info->ctmask) ^
++		newmark = (READ_ONCE(ct->mark) & ~info->ctmask) ^
+ 			  new_targetmark;
+-		if (ct->mark != newmark) {
+-			ct->mark = newmark;
++		if (READ_ONCE(ct->mark) != newmark) {
++			WRITE_ONCE(ct->mark, newmark);
+ 			nf_conntrack_event_cache(IPCT_MARK, ct);
+ 		}
+ 		break;
+ 	case XT_CONNMARK_RESTORE:
+-		new_targetmark = (ct->mark & info->ctmask);
++		new_targetmark = (READ_ONCE(ct->mark) & info->ctmask);
+ 		if (info->shift_dir == D_SHIFT_RIGHT)
+ 			new_targetmark >>= info->shift_bits;
+ 		else
+@@ -126,7 +128,7 @@ connmark_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ 	if (ct == NULL)
+ 		return false;
+ 
+-	return ((ct->mark & info->mask) == info->mark) ^ info->invert;
++	return ((READ_ONCE(ct->mark) & info->mask) == info->mark) ^ info->invert;
+ }
+ 
+ static int connmark_mt_check(const struct xt_mtchk_param *par)
+diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
+index 2cfff70f70e06..ed9019d807c78 100644
+--- a/net/nfc/nci/core.c
++++ b/net/nfc/nci/core.c
+@@ -530,7 +530,7 @@ static int nci_open_device(struct nci_dev *ndev)
+ 		skb_queue_purge(&ndev->tx_q);
+ 
+ 		ndev->ops->close(ndev);
+-		ndev->flags = 0;
++		ndev->flags &= BIT(NCI_UNREG);
+ 	}
+ 
+ done:
+diff --git a/net/nfc/nci/data.c b/net/nfc/nci/data.c
+index b002e18f38c81..b4548d8874899 100644
+--- a/net/nfc/nci/data.c
++++ b/net/nfc/nci/data.c
+@@ -279,8 +279,10 @@ void nci_rx_data_packet(struct nci_dev *ndev, struct sk_buff *skb)
+ 		 nci_plen(skb->data));
+ 
+ 	conn_info = nci_get_conn_info_by_conn_id(ndev, nci_conn_id(skb->data));
+-	if (!conn_info)
++	if (!conn_info) {
++		kfree_skb(skb);
+ 		return;
++	}
+ 
+ 	/* strip the nci data header */
+ 	skb_pull(skb, NCI_DATA_HDR_SIZE);
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index 41f248895a871..0f0f380e81a40 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -150,7 +150,7 @@ static u8 ovs_ct_get_state(enum ip_conntrack_info ctinfo)
+ static u32 ovs_ct_get_mark(const struct nf_conn *ct)
+ {
+ #if IS_ENABLED(CONFIG_NF_CONNTRACK_MARK)
+-	return ct ? ct->mark : 0;
++	return ct ? READ_ONCE(ct->mark) : 0;
+ #else
+ 	return 0;
+ #endif
+@@ -336,9 +336,9 @@ static int ovs_ct_set_mark(struct nf_conn *ct, struct sw_flow_key *key,
+ #if IS_ENABLED(CONFIG_NF_CONNTRACK_MARK)
+ 	u32 new_mark;
+ 
+-	new_mark = ct_mark | (ct->mark & ~(mask));
+-	if (ct->mark != new_mark) {
+-		ct->mark = new_mark;
++	new_mark = ct_mark | (READ_ONCE(ct->mark) & ~(mask));
++	if (READ_ONCE(ct->mark) != new_mark) {
++		WRITE_ONCE(ct->mark, new_mark);
+ 		if (nf_ct_is_confirmed(ct))
+ 			nf_conntrack_event_cache(IPCT_MARK, ct);
+ 		key->ct.mark = new_mark;
+diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
+index 41671af6b33f9..0354f90dc93aa 100644
+--- a/net/rxrpc/af_rxrpc.c
++++ b/net/rxrpc/af_rxrpc.c
+@@ -351,7 +351,7 @@ static void rxrpc_dummy_notify_rx(struct sock *sk, struct rxrpc_call *rxcall,
+  */
+ void rxrpc_kernel_end_call(struct socket *sock, struct rxrpc_call *call)
+ {
+-	_enter("%d{%d}", call->debug_id, atomic_read(&call->usage));
++	_enter("%d{%d}", call->debug_id, refcount_read(&call->ref));
+ 
+ 	mutex_lock(&call->user_mutex);
+ 	rxrpc_release_call(rxrpc_sk(sock->sk), call);
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index ccb65412b6704..d86894a1c35d3 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -14,14 +14,6 @@
+ #include <net/af_rxrpc.h>
+ #include "protocol.h"
+ 
+-#if 0
+-#define CHECK_SLAB_OKAY(X)				     \
+-	BUG_ON(atomic_read((X)) >> (sizeof(atomic_t) - 2) == \
+-	       (POISON_FREE << 8 | POISON_FREE))
+-#else
+-#define CHECK_SLAB_OKAY(X) do {} while (0)
+-#endif
+-
+ #define FCRYPT_BSIZE 8
+ struct rxrpc_crypt {
+ 	union {
+@@ -86,7 +78,7 @@ struct rxrpc_net {
+ 	struct work_struct	client_conn_reaper;
+ 	struct timer_list	client_conn_reap_timer;
+ 
+-	struct list_head	local_endpoints;
++	struct hlist_head	local_endpoints;
+ 	struct mutex		local_mutex;	/* Lock for ->local_endpoints */
+ 
+ 	DECLARE_HASHTABLE	(peer_hash, 10);
+@@ -264,9 +256,9 @@ struct rxrpc_security {
+ struct rxrpc_local {
+ 	struct rcu_head		rcu;
+ 	atomic_t		active_users;	/* Number of users of the local endpoint */
+-	atomic_t		usage;		/* Number of references to the structure */
++	refcount_t		ref;		/* Number of references to the structure */
+ 	struct rxrpc_net	*rxnet;		/* The network ns in which this resides */
+-	struct list_head	link;
++	struct hlist_node	link;
+ 	struct socket		*socket;	/* my UDP socket */
+ 	struct work_struct	processor;
+ 	struct rxrpc_sock __rcu	*service;	/* Service(s) listening on this endpoint */
+@@ -289,7 +281,7 @@ struct rxrpc_local {
+  */
+ struct rxrpc_peer {
+ 	struct rcu_head		rcu;		/* This must be first */
+-	atomic_t		usage;
++	refcount_t		ref;
+ 	unsigned long		hash_key;
+ 	struct hlist_node	hash_link;
+ 	struct rxrpc_local	*local;
+@@ -391,7 +383,8 @@ enum rxrpc_conn_proto_state {
+  */
+ struct rxrpc_bundle {
+ 	struct rxrpc_conn_parameters params;
+-	atomic_t		usage;
++	refcount_t		ref;
++	atomic_t		active;		/* Number of active users */
+ 	unsigned int		debug_id;
+ 	bool			try_upgrade;	/* True if the bundle is attempting upgrade */
+ 	bool			alloc_conn;	/* True if someone's getting a conn */
+@@ -412,7 +405,7 @@ struct rxrpc_connection {
+ 	struct rxrpc_conn_proto	proto;
+ 	struct rxrpc_conn_parameters params;
+ 
+-	atomic_t		usage;
++	refcount_t		ref;
+ 	struct rcu_head		rcu;
+ 	struct list_head	cache_link;
+ 
+@@ -592,7 +585,7 @@ struct rxrpc_call {
+ 	int			error;		/* Local error incurred */
+ 	enum rxrpc_call_state	state;		/* current state of call */
+ 	enum rxrpc_call_completion completion;	/* Call completion condition */
+-	atomic_t		usage;
++	refcount_t		ref;
+ 	u16			service_id;	/* service ID */
+ 	u8			security_ix;	/* Security type */
+ 	enum rxrpc_interruptibility interruptibility; /* At what point call may be interrupted */
+@@ -1001,6 +994,7 @@ void rxrpc_put_peer_locked(struct rxrpc_peer *);
+ extern const struct seq_operations rxrpc_call_seq_ops;
+ extern const struct seq_operations rxrpc_connection_seq_ops;
+ extern const struct seq_operations rxrpc_peer_seq_ops;
++extern const struct seq_operations rxrpc_local_seq_ops;
+ 
+ /*
+  * recvmsg.c
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index a0b033954ceac..2a14d69b171f3 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -91,7 +91,7 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
+ 				  (head + 1) & (size - 1));
+ 
+ 		trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_service,
+-				 atomic_read(&conn->usage), here);
++				 refcount_read(&conn->ref), here);
+ 	}
+ 
+ 	/* Now it gets complicated, because calls get registered with the
+@@ -104,7 +104,7 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
+ 	call->state = RXRPC_CALL_SERVER_PREALLOC;
+ 
+ 	trace_rxrpc_call(call->debug_id, rxrpc_call_new_service,
+-			 atomic_read(&call->usage),
++			 refcount_read(&call->ref),
+ 			 here, (const void *)user_call_ID);
+ 
+ 	write_lock(&rx->call_lock);
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index 150cd7b2154c8..10dad2834d5b6 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -112,7 +112,7 @@ struct rxrpc_call *rxrpc_find_call_by_user_ID(struct rxrpc_sock *rx,
+ found_extant_call:
+ 	rxrpc_get_call(call, rxrpc_call_got);
+ 	read_unlock(&rx->call_lock);
+-	_leave(" = %p [%d]", call, atomic_read(&call->usage));
++	_leave(" = %p [%d]", call, refcount_read(&call->ref));
+ 	return call;
+ }
+ 
+@@ -160,7 +160,7 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp,
+ 	spin_lock_init(&call->notify_lock);
+ 	spin_lock_init(&call->input_lock);
+ 	rwlock_init(&call->state_lock);
+-	atomic_set(&call->usage, 1);
++	refcount_set(&call->ref, 1);
+ 	call->debug_id = debug_id;
+ 	call->tx_total_len = -1;
+ 	call->next_rx_timo = 20 * HZ;
+@@ -301,7 +301,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
+ 	call->interruptibility = p->interruptibility;
+ 	call->tx_total_len = p->tx_total_len;
+ 	trace_rxrpc_call(call->debug_id, rxrpc_call_new_client,
+-			 atomic_read(&call->usage),
++			 refcount_read(&call->ref),
+ 			 here, (const void *)p->user_call_ID);
+ 	if (p->kernel)
+ 		__set_bit(RXRPC_CALL_KERNEL, &call->flags);
+@@ -354,7 +354,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
+ 		goto error_attached_to_socket;
+ 
+ 	trace_rxrpc_call(call->debug_id, rxrpc_call_connected,
+-			 atomic_read(&call->usage), here, NULL);
++			 refcount_read(&call->ref), here, NULL);
+ 
+ 	rxrpc_start_call_timer(call);
+ 
+@@ -374,7 +374,7 @@ error_dup_user_ID:
+ 	__rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR,
+ 				    RX_CALL_DEAD, -EEXIST);
+ 	trace_rxrpc_call(call->debug_id, rxrpc_call_error,
+-			 atomic_read(&call->usage), here, ERR_PTR(-EEXIST));
++			 refcount_read(&call->ref), here, ERR_PTR(-EEXIST));
+ 	rxrpc_release_call(rx, call);
+ 	mutex_unlock(&call->user_mutex);
+ 	rxrpc_put_call(call, rxrpc_call_put);
+@@ -388,7 +388,7 @@ error_dup_user_ID:
+ 	 */
+ error_attached_to_socket:
+ 	trace_rxrpc_call(call->debug_id, rxrpc_call_error,
+-			 atomic_read(&call->usage), here, ERR_PTR(ret));
++			 refcount_read(&call->ref), here, ERR_PTR(ret));
+ 	set_bit(RXRPC_CALL_DISCONNECTED, &call->flags);
+ 	__rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR,
+ 				    RX_CALL_DEAD, ret);
+@@ -444,8 +444,9 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
+ bool rxrpc_queue_call(struct rxrpc_call *call)
+ {
+ 	const void *here = __builtin_return_address(0);
+-	int n = atomic_fetch_add_unless(&call->usage, 1, 0);
+-	if (n == 0)
++	int n;
++
++	if (!__refcount_inc_not_zero(&call->ref, &n))
+ 		return false;
+ 	if (rxrpc_queue_work(&call->processor))
+ 		trace_rxrpc_call(call->debug_id, rxrpc_call_queued, n + 1,
+@@ -461,7 +462,7 @@ bool rxrpc_queue_call(struct rxrpc_call *call)
+ bool __rxrpc_queue_call(struct rxrpc_call *call)
+ {
+ 	const void *here = __builtin_return_address(0);
+-	int n = atomic_read(&call->usage);
++	int n = refcount_read(&call->ref);
+ 	ASSERTCMP(n, >=, 1);
+ 	if (rxrpc_queue_work(&call->processor))
+ 		trace_rxrpc_call(call->debug_id, rxrpc_call_queued_ref, n,
+@@ -478,7 +479,7 @@ void rxrpc_see_call(struct rxrpc_call *call)
+ {
+ 	const void *here = __builtin_return_address(0);
+ 	if (call) {
+-		int n = atomic_read(&call->usage);
++		int n = refcount_read(&call->ref);
+ 
+ 		trace_rxrpc_call(call->debug_id, rxrpc_call_seen, n,
+ 				 here, NULL);
+@@ -488,11 +489,11 @@ void rxrpc_see_call(struct rxrpc_call *call)
+ bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+ {
+ 	const void *here = __builtin_return_address(0);
+-	int n = atomic_fetch_add_unless(&call->usage, 1, 0);
++	int n;
+ 
+-	if (n == 0)
++	if (!__refcount_inc_not_zero(&call->ref, &n))
+ 		return false;
+-	trace_rxrpc_call(call->debug_id, op, n, here, NULL);
++	trace_rxrpc_call(call->debug_id, op, n + 1, here, NULL);
+ 	return true;
+ }
+ 
+@@ -502,9 +503,10 @@ bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+ void rxrpc_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+ {
+ 	const void *here = __builtin_return_address(0);
+-	int n = atomic_inc_return(&call->usage);
++	int n;
+ 
+-	trace_rxrpc_call(call->debug_id, op, n, here, NULL);
++	__refcount_inc(&call->ref, &n);
++	trace_rxrpc_call(call->debug_id, op, n + 1, here, NULL);
+ }
+ 
+ /*
+@@ -529,10 +531,10 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
+ 	struct rxrpc_connection *conn = call->conn;
+ 	bool put = false;
+ 
+-	_enter("{%d,%d}", call->debug_id, atomic_read(&call->usage));
++	_enter("{%d,%d}", call->debug_id, refcount_read(&call->ref));
+ 
+ 	trace_rxrpc_call(call->debug_id, rxrpc_call_release,
+-			 atomic_read(&call->usage),
++			 refcount_read(&call->ref),
+ 			 here, (const void *)call->flags);
+ 
+ 	ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
+@@ -621,14 +623,14 @@ void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+ 	struct rxrpc_net *rxnet = call->rxnet;
+ 	const void *here = __builtin_return_address(0);
+ 	unsigned int debug_id = call->debug_id;
++	bool dead;
+ 	int n;
+ 
+ 	ASSERT(call != NULL);
+ 
+-	n = atomic_dec_return(&call->usage);
++	dead = __refcount_dec_and_test(&call->ref, &n);
+ 	trace_rxrpc_call(debug_id, op, n, here, NULL);
+-	ASSERTCMP(n, >=, 0);
+-	if (n == 0) {
++	if (dead) {
+ 		_debug("call %d dead", call->debug_id);
+ 		ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
+ 
+@@ -718,7 +720,7 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet)
+ 			list_del_init(&call->link);
+ 
+ 			pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n",
+-			       call, atomic_read(&call->usage),
++			       call, refcount_read(&call->ref),
+ 			       rxrpc_call_states[call->state],
+ 			       call->flags, call->events);
+ 
+diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
+index f5fb223aba82a..f5fa5f3083bdd 100644
+--- a/net/rxrpc/conn_client.c
++++ b/net/rxrpc/conn_client.c
+@@ -40,6 +40,8 @@ __read_mostly unsigned long rxrpc_conn_idle_client_fast_expiry = 2 * HZ;
+ DEFINE_IDR(rxrpc_client_conn_ids);
+ static DEFINE_SPINLOCK(rxrpc_conn_id_lock);
+ 
++static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle);
++
+ /*
+  * Get a connection ID and epoch for a client connection from the global pool.
+  * The connection struct pointer is then recorded in the idr radix tree.  The
+@@ -102,7 +104,7 @@ void rxrpc_destroy_client_conn_ids(void)
+ 	if (!idr_is_empty(&rxrpc_client_conn_ids)) {
+ 		idr_for_each_entry(&rxrpc_client_conn_ids, conn, id) {
+ 			pr_err("AF_RXRPC: Leaked client conn %p {%d}\n",
+-			       conn, atomic_read(&conn->usage));
++			       conn, refcount_read(&conn->ref));
+ 		}
+ 		BUG();
+ 	}
+@@ -122,7 +124,8 @@ static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_conn_parameters *cp,
+ 	if (bundle) {
+ 		bundle->params = *cp;
+ 		rxrpc_get_peer(bundle->params.peer);
+-		atomic_set(&bundle->usage, 1);
++		refcount_set(&bundle->ref, 1);
++		atomic_set(&bundle->active, 1);
+ 		spin_lock_init(&bundle->channel_lock);
+ 		INIT_LIST_HEAD(&bundle->waiting_calls);
+ 	}
+@@ -131,7 +134,7 @@ static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_conn_parameters *cp,
+ 
+ struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle)
+ {
+-	atomic_inc(&bundle->usage);
++	refcount_inc(&bundle->ref);
+ 	return bundle;
+ }
+ 
+@@ -144,10 +147,13 @@ static void rxrpc_free_bundle(struct rxrpc_bundle *bundle)
+ void rxrpc_put_bundle(struct rxrpc_bundle *bundle)
+ {
+ 	unsigned int d = bundle->debug_id;
+-	unsigned int u = atomic_dec_return(&bundle->usage);
++	bool dead;
++	int r;
++
++	dead = __refcount_dec_and_test(&bundle->ref, &r);
+ 
+-	_debug("PUT B=%x %u", d, u);
+-	if (u == 0)
++	_debug("PUT B=%x %d", d, r - 1);
++	if (dead)
+ 		rxrpc_free_bundle(bundle);
+ }
+ 
+@@ -169,7 +175,7 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp)
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
+-	atomic_set(&conn->usage, 1);
++	refcount_set(&conn->ref, 1);
+ 	conn->bundle		= bundle;
+ 	conn->params		= bundle->params;
+ 	conn->out_clientflag	= RXRPC_CLIENT_INITIATED;
+@@ -199,7 +205,7 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp)
+ 	key_get(conn->params.key);
+ 
+ 	trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_client,
+-			 atomic_read(&conn->usage),
++			 refcount_read(&conn->ref),
+ 			 __builtin_return_address(0));
+ 
+ 	atomic_inc(&rxnet->nr_client_conns);
+@@ -341,6 +347,7 @@ found_bundle_free:
+ 	rxrpc_free_bundle(candidate);
+ found_bundle:
+ 	rxrpc_get_bundle(bundle);
++	atomic_inc(&bundle->active);
+ 	spin_unlock(&local->client_bundles_lock);
+ 	_leave(" = %u [found]", bundle->debug_id);
+ 	return bundle;
+@@ -438,6 +445,7 @@ static void rxrpc_add_conn_to_bundle(struct rxrpc_bundle *bundle, gfp_t gfp)
+ 			if (old)
+ 				trace_rxrpc_client(old, -1, rxrpc_client_replace);
+ 			candidate->bundle_shift = shift;
++			atomic_inc(&bundle->active);
+ 			bundle->conns[i] = candidate;
+ 			for (j = 0; j < RXRPC_MAXCALLS; j++)
+ 				set_bit(shift + j, &bundle->avail_chans);
+@@ -728,6 +736,7 @@ granted_channel:
+ 	smp_rmb();
+ 
+ out_put_bundle:
++	rxrpc_deactivate_bundle(bundle);
+ 	rxrpc_put_bundle(bundle);
+ out:
+ 	_leave(" = %d", ret);
+@@ -903,9 +912,8 @@ out:
+ static void rxrpc_unbundle_conn(struct rxrpc_connection *conn)
+ {
+ 	struct rxrpc_bundle *bundle = conn->bundle;
+-	struct rxrpc_local *local = bundle->params.local;
+ 	unsigned int bindex;
+-	bool need_drop = false, need_put = false;
++	bool need_drop = false;
+ 	int i;
+ 
+ 	_enter("C=%x", conn->debug_id);
+@@ -924,15 +932,22 @@ static void rxrpc_unbundle_conn(struct rxrpc_connection *conn)
+ 	}
+ 	spin_unlock(&bundle->channel_lock);
+ 
+-	/* If there are no more connections, remove the bundle */
+-	if (!bundle->avail_chans) {
+-		_debug("maybe unbundle");
+-		spin_lock(&local->client_bundles_lock);
++	if (need_drop) {
++		rxrpc_deactivate_bundle(bundle);
++		rxrpc_put_connection(conn);
++	}
++}
+ 
+-		for (i = 0; i < ARRAY_SIZE(bundle->conns); i++)
+-			if (bundle->conns[i])
+-				break;
+-		if (i == ARRAY_SIZE(bundle->conns) && !bundle->params.exclusive) {
++/*
++ * Drop the active count on a bundle.
++ */
++static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle)
++{
++	struct rxrpc_local *local = bundle->params.local;
++	bool need_put = false;
++
++	if (atomic_dec_and_lock(&bundle->active, &local->client_bundles_lock)) {
++		if (!bundle->params.exclusive) {
+ 			_debug("erase bundle");
+ 			rb_erase(&bundle->local_node, &local->client_bundles);
+ 			need_put = true;
+@@ -942,10 +957,6 @@ static void rxrpc_unbundle_conn(struct rxrpc_connection *conn)
+ 		if (need_put)
+ 			rxrpc_put_bundle(bundle);
+ 	}
+-
+-	if (need_drop)
+-		rxrpc_put_connection(conn);
+-	_leave("");
+ }
+ 
+ /*
+@@ -972,14 +983,13 @@ void rxrpc_put_client_conn(struct rxrpc_connection *conn)
+ {
+ 	const void *here = __builtin_return_address(0);
+ 	unsigned int debug_id = conn->debug_id;
+-	int n;
++	bool dead;
++	int r;
+ 
+-	n = atomic_dec_return(&conn->usage);
+-	trace_rxrpc_conn(debug_id, rxrpc_conn_put_client, n, here);
+-	if (n <= 0) {
+-		ASSERTCMP(n, >=, 0);
++	dead = __refcount_dec_and_test(&conn->ref, &r);
++	trace_rxrpc_conn(debug_id, rxrpc_conn_put_client, r - 1, here);
++	if (dead)
+ 		rxrpc_kill_client_conn(conn);
+-	}
+ }
+ 
+ /*
+diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
+index 3ef05a0e90ad0..d829b97550cc2 100644
+--- a/net/rxrpc/conn_object.c
++++ b/net/rxrpc/conn_object.c
+@@ -105,7 +105,7 @@ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local,
+ 			goto not_found;
+ 		*_peer = peer;
+ 		conn = rxrpc_find_service_conn_rcu(peer, skb);
+-		if (!conn || atomic_read(&conn->usage) == 0)
++		if (!conn || refcount_read(&conn->ref) == 0)
+ 			goto not_found;
+ 		_leave(" = %p", conn);
+ 		return conn;
+@@ -115,7 +115,7 @@ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local,
+ 		 */
+ 		conn = idr_find(&rxrpc_client_conn_ids,
+ 				sp->hdr.cid >> RXRPC_CIDSHIFT);
+-		if (!conn || atomic_read(&conn->usage) == 0) {
++		if (!conn || refcount_read(&conn->ref) == 0) {
+ 			_debug("no conn");
+ 			goto not_found;
+ 		}
+@@ -264,11 +264,12 @@ void rxrpc_kill_connection(struct rxrpc_connection *conn)
+ bool rxrpc_queue_conn(struct rxrpc_connection *conn)
+ {
+ 	const void *here = __builtin_return_address(0);
+-	int n = atomic_fetch_add_unless(&conn->usage, 1, 0);
+-	if (n == 0)
++	int r;
++
++	if (!__refcount_inc_not_zero(&conn->ref, &r))
+ 		return false;
+ 	if (rxrpc_queue_work(&conn->processor))
+-		trace_rxrpc_conn(conn->debug_id, rxrpc_conn_queued, n + 1, here);
++		trace_rxrpc_conn(conn->debug_id, rxrpc_conn_queued, r + 1, here);
+ 	else
+ 		rxrpc_put_connection(conn);
+ 	return true;
+@@ -281,7 +282,7 @@ void rxrpc_see_connection(struct rxrpc_connection *conn)
+ {
+ 	const void *here = __builtin_return_address(0);
+ 	if (conn) {
+-		int n = atomic_read(&conn->usage);
++		int n = refcount_read(&conn->ref);
+ 
+ 		trace_rxrpc_conn(conn->debug_id, rxrpc_conn_seen, n, here);
+ 	}
+@@ -293,9 +294,10 @@ void rxrpc_see_connection(struct rxrpc_connection *conn)
+ struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *conn)
+ {
+ 	const void *here = __builtin_return_address(0);
+-	int n = atomic_inc_return(&conn->usage);
++	int r;
+ 
+-	trace_rxrpc_conn(conn->debug_id, rxrpc_conn_got, n, here);
++	__refcount_inc(&conn->ref, &r);
++	trace_rxrpc_conn(conn->debug_id, rxrpc_conn_got, r, here);
+ 	return conn;
+ }
+ 
+@@ -306,11 +308,11 @@ struct rxrpc_connection *
+ rxrpc_get_connection_maybe(struct rxrpc_connection *conn)
+ {
+ 	const void *here = __builtin_return_address(0);
++	int r;
+ 
+ 	if (conn) {
+-		int n = atomic_fetch_add_unless(&conn->usage, 1, 0);
+-		if (n > 0)
+-			trace_rxrpc_conn(conn->debug_id, rxrpc_conn_got, n + 1, here);
++		if (__refcount_inc_not_zero(&conn->ref, &r))
++			trace_rxrpc_conn(conn->debug_id, rxrpc_conn_got, r + 1, here);
+ 		else
+ 			conn = NULL;
+ 	}
+@@ -334,12 +336,11 @@ void rxrpc_put_service_conn(struct rxrpc_connection *conn)
+ {
+ 	const void *here = __builtin_return_address(0);
+ 	unsigned int debug_id = conn->debug_id;
+-	int n;
++	int r;
+ 
+-	n = atomic_dec_return(&conn->usage);
+-	trace_rxrpc_conn(debug_id, rxrpc_conn_put_service, n, here);
+-	ASSERTCMP(n, >=, 0);
+-	if (n == 1)
++	__refcount_dec(&conn->ref, &r);
++	trace_rxrpc_conn(debug_id, rxrpc_conn_put_service, r - 1, here);
++	if (r - 1 == 1)
+ 		rxrpc_set_service_reap_timer(conn->params.local->rxnet,
+ 					     jiffies + rxrpc_connection_expiry);
+ }
+@@ -352,9 +353,9 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu)
+ 	struct rxrpc_connection *conn =
+ 		container_of(rcu, struct rxrpc_connection, rcu);
+ 
+-	_enter("{%d,u=%d}", conn->debug_id, atomic_read(&conn->usage));
++	_enter("{%d,u=%d}", conn->debug_id, refcount_read(&conn->ref));
+ 
+-	ASSERTCMP(atomic_read(&conn->usage), ==, 0);
++	ASSERTCMP(refcount_read(&conn->ref), ==, 0);
+ 
+ 	_net("DESTROY CONN %d", conn->debug_id);
+ 
+@@ -394,8 +395,8 @@ void rxrpc_service_connection_reaper(struct work_struct *work)
+ 
+ 	write_lock(&rxnet->conn_lock);
+ 	list_for_each_entry_safe(conn, _p, &rxnet->service_conns, link) {
+-		ASSERTCMP(atomic_read(&conn->usage), >, 0);
+-		if (likely(atomic_read(&conn->usage) > 1))
++		ASSERTCMP(refcount_read(&conn->ref), >, 0);
++		if (likely(refcount_read(&conn->ref) > 1))
+ 			continue;
+ 		if (conn->state == RXRPC_CONN_SERVICE_PREALLOC)
+ 			continue;
+@@ -407,7 +408,7 @@ void rxrpc_service_connection_reaper(struct work_struct *work)
+ 				expire_at = idle_timestamp + rxrpc_closed_conn_expiry * HZ;
+ 
+ 			_debug("reap CONN %d { u=%d,t=%ld }",
+-			       conn->debug_id, atomic_read(&conn->usage),
++			       conn->debug_id, refcount_read(&conn->ref),
+ 			       (long)expire_at - (long)now);
+ 
+ 			if (time_before(now, expire_at)) {
+@@ -420,7 +421,7 @@ void rxrpc_service_connection_reaper(struct work_struct *work)
+ 		/* The usage count sits at 1 whilst the object is unused on the
+ 		 * list; we reduce that to 0 to make the object unavailable.
+ 		 */
+-		if (atomic_cmpxchg(&conn->usage, 1, 0) != 1)
++		if (!refcount_dec_if_one(&conn->ref))
+ 			continue;
+ 		trace_rxrpc_conn(conn->debug_id, rxrpc_conn_reap_service, 0, NULL);
+ 
+@@ -444,7 +445,7 @@ void rxrpc_service_connection_reaper(struct work_struct *work)
+ 				  link);
+ 		list_del_init(&conn->link);
+ 
+-		ASSERTCMP(atomic_read(&conn->usage), ==, 0);
++		ASSERTCMP(refcount_read(&conn->ref), ==, 0);
+ 		rxrpc_kill_connection(conn);
+ 	}
+ 
+@@ -472,7 +473,7 @@ void rxrpc_destroy_all_connections(struct rxrpc_net *rxnet)
+ 	write_lock(&rxnet->conn_lock);
+ 	list_for_each_entry_safe(conn, _p, &rxnet->service_conns, link) {
+ 		pr_err("AF_RXRPC: Leaked conn %p {%d}\n",
+-		       conn, atomic_read(&conn->usage));
++		       conn, refcount_read(&conn->ref));
+ 		leak = true;
+ 	}
+ 	write_unlock(&rxnet->conn_lock);
+diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c
+index 6c847720494fa..68508166bbc0b 100644
+--- a/net/rxrpc/conn_service.c
++++ b/net/rxrpc/conn_service.c
+@@ -9,7 +9,7 @@
+ #include "ar-internal.h"
+ 
+ static struct rxrpc_bundle rxrpc_service_dummy_bundle = {
+-	.usage		= ATOMIC_INIT(1),
++	.ref		= REFCOUNT_INIT(1),
+ 	.debug_id	= UINT_MAX,
+ 	.channel_lock	= __SPIN_LOCK_UNLOCKED(&rxrpc_service_dummy_bundle.channel_lock),
+ };
+@@ -99,7 +99,7 @@ conn_published:
+ 	return;
+ 
+ found_extant_conn:
+-	if (atomic_read(&cursor->usage) == 0)
++	if (refcount_read(&cursor->ref) == 0)
+ 		goto replace_old_connection;
+ 	write_sequnlock_bh(&peer->service_conn_lock);
+ 	/* We should not be able to get here.  rxrpc_incoming_connection() is
+@@ -132,7 +132,7 @@ struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *rxn
+ 		 * the rxrpc_connections list.
+ 		 */
+ 		conn->state = RXRPC_CONN_SERVICE_PREALLOC;
+-		atomic_set(&conn->usage, 2);
++		refcount_set(&conn->ref, 2);
+ 		conn->bundle = rxrpc_get_bundle(&rxrpc_service_dummy_bundle);
+ 
+ 		atomic_inc(&rxnet->nr_conns);
+@@ -142,7 +142,7 @@ struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *rxn
+ 		write_unlock(&rxnet->conn_lock);
+ 
+ 		trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_service,
+-				 atomic_read(&conn->usage),
++				 refcount_read(&conn->ref),
+ 				 __builtin_return_address(0));
+ 	}
+ 
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 1145cb14d86f8..e9178115a7449 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -1163,8 +1163,6 @@ static void rxrpc_post_packet_to_local(struct rxrpc_local *local,
+  */
+ static void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
+ {
+-	CHECK_SLAB_OKAY(&local->usage);
+-
+ 	if (rxrpc_get_local_maybe(local)) {
+ 		skb_queue_tail(&local->reject_queue, skb);
+ 		rxrpc_queue_local(local);
+@@ -1422,7 +1420,7 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
+ 		}
+ 	}
+ 
+-	if (!call || atomic_read(&call->usage) == 0) {
++	if (!call || refcount_read(&call->ref) == 0) {
+ 		if (rxrpc_to_client(sp) ||
+ 		    sp->hdr.type != RXRPC_PACKET_TYPE_DATA)
+ 			goto bad_message;
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index ebbf1b03b62cf..2c66ee981f395 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -78,10 +78,10 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet,
+ 
+ 	local = kzalloc(sizeof(struct rxrpc_local), GFP_KERNEL);
+ 	if (local) {
+-		atomic_set(&local->usage, 1);
++		refcount_set(&local->ref, 1);
+ 		atomic_set(&local->active_users, 1);
+ 		local->rxnet = rxnet;
+-		INIT_LIST_HEAD(&local->link);
++		INIT_HLIST_NODE(&local->link);
+ 		INIT_WORK(&local->processor, rxrpc_local_processor);
+ 		init_rwsem(&local->defrag_sem);
+ 		skb_queue_head_init(&local->reject_queue);
+@@ -199,7 +199,7 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net,
+ {
+ 	struct rxrpc_local *local;
+ 	struct rxrpc_net *rxnet = rxrpc_net(net);
+-	struct list_head *cursor;
++	struct hlist_node *cursor;
+ 	const char *age;
+ 	long diff;
+ 	int ret;
+@@ -209,16 +209,12 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net,
+ 
+ 	mutex_lock(&rxnet->local_mutex);
+ 
+-	for (cursor = rxnet->local_endpoints.next;
+-	     cursor != &rxnet->local_endpoints;
+-	     cursor = cursor->next) {
+-		local = list_entry(cursor, struct rxrpc_local, link);
++	hlist_for_each(cursor, &rxnet->local_endpoints) {
++		local = hlist_entry(cursor, struct rxrpc_local, link);
+ 
+ 		diff = rxrpc_local_cmp_key(local, srx);
+-		if (diff < 0)
++		if (diff != 0)
+ 			continue;
+-		if (diff > 0)
+-			break;
+ 
+ 		/* Services aren't allowed to share transport sockets, so
+ 		 * reject that here.  It is possible that the object is dying -
+@@ -230,9 +226,10 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net,
+ 			goto addr_in_use;
+ 		}
+ 
+-		/* Found a match.  We replace a dying object.  Attempting to
+-		 * bind the transport socket may still fail if we're attempting
+-		 * to use a local address that the dying object is still using.
++		/* Found a match.  We want to replace a dying object.
++		 * Attempting to bind the transport socket may still fail if
++		 * we're attempting to use a local address that the dying
++		 * object is still using.
+ 		 */
+ 		if (!rxrpc_use_local(local))
+ 			break;
+@@ -249,10 +246,12 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net,
+ 	if (ret < 0)
+ 		goto sock_error;
+ 
+-	if (cursor != &rxnet->local_endpoints)
+-		list_replace_init(cursor, &local->link);
+-	else
+-		list_add_tail(&local->link, cursor);
++	if (cursor) {
++		hlist_replace_rcu(cursor, &local->link);
++		cursor->pprev = NULL;
++	} else {
++		hlist_add_head_rcu(&local->link, &rxnet->local_endpoints);
++	}
+ 	age = "new";
+ 
+ found:
+@@ -285,10 +284,10 @@ addr_in_use:
+ struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *local)
+ {
+ 	const void *here = __builtin_return_address(0);
+-	int n;
++	int r;
+ 
+-	n = atomic_inc_return(&local->usage);
+-	trace_rxrpc_local(local->debug_id, rxrpc_local_got, n, here);
++	__refcount_inc(&local->ref, &r);
++	trace_rxrpc_local(local->debug_id, rxrpc_local_got, r + 1, here);
+ 	return local;
+ }
+ 
+@@ -298,12 +297,12 @@ struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *local)
+ struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *local)
+ {
+ 	const void *here = __builtin_return_address(0);
++	int r;
+ 
+ 	if (local) {
+-		int n = atomic_fetch_add_unless(&local->usage, 1, 0);
+-		if (n > 0)
++		if (__refcount_inc_not_zero(&local->ref, &r))
+ 			trace_rxrpc_local(local->debug_id, rxrpc_local_got,
+-					  n + 1, here);
++					  r + 1, here);
+ 		else
+ 			local = NULL;
+ 	}
+@@ -317,10 +316,10 @@ void rxrpc_queue_local(struct rxrpc_local *local)
+ {
+ 	const void *here = __builtin_return_address(0);
+ 	unsigned int debug_id = local->debug_id;
+-	int n = atomic_read(&local->usage);
++	int r = refcount_read(&local->ref);
+ 
+ 	if (rxrpc_queue_work(&local->processor))
+-		trace_rxrpc_local(debug_id, rxrpc_local_queued, n, here);
++		trace_rxrpc_local(debug_id, rxrpc_local_queued, r + 1, here);
+ 	else
+ 		rxrpc_put_local(local);
+ }
+@@ -332,15 +331,16 @@ void rxrpc_put_local(struct rxrpc_local *local)
+ {
+ 	const void *here = __builtin_return_address(0);
+ 	unsigned int debug_id;
+-	int n;
++	bool dead;
++	int r;
+ 
+ 	if (local) {
+ 		debug_id = local->debug_id;
+ 
+-		n = atomic_dec_return(&local->usage);
+-		trace_rxrpc_local(debug_id, rxrpc_local_put, n, here);
++		dead = __refcount_dec_and_test(&local->ref, &r);
++		trace_rxrpc_local(debug_id, rxrpc_local_put, r, here);
+ 
+-		if (n == 0)
++		if (dead)
+ 			call_rcu(&local->rcu, rxrpc_local_rcu);
+ 	}
+ }
+@@ -393,7 +393,7 @@ static void rxrpc_local_destroyer(struct rxrpc_local *local)
+ 	local->dead = true;
+ 
+ 	mutex_lock(&rxnet->local_mutex);
+-	list_del_init(&local->link);
++	hlist_del_init_rcu(&local->link);
+ 	mutex_unlock(&rxnet->local_mutex);
+ 
+ 	rxrpc_clean_up_local_conns(local);
+@@ -428,7 +428,7 @@ static void rxrpc_local_processor(struct work_struct *work)
+ 		return;
+ 
+ 	trace_rxrpc_local(local->debug_id, rxrpc_local_processing,
+-			  atomic_read(&local->usage), NULL);
++			  refcount_read(&local->ref), NULL);
+ 
+ 	do {
+ 		again = false;
+@@ -480,11 +480,11 @@ void rxrpc_destroy_all_locals(struct rxrpc_net *rxnet)
+ 
+ 	flush_workqueue(rxrpc_workqueue);
+ 
+-	if (!list_empty(&rxnet->local_endpoints)) {
++	if (!hlist_empty(&rxnet->local_endpoints)) {
+ 		mutex_lock(&rxnet->local_mutex);
+-		list_for_each_entry(local, &rxnet->local_endpoints, link) {
++		hlist_for_each_entry(local, &rxnet->local_endpoints, link) {
+ 			pr_err("AF_RXRPC: Leaked local %p {%d}\n",
+-			       local, atomic_read(&local->usage));
++			       local, refcount_read(&local->ref));
+ 		}
+ 		mutex_unlock(&rxnet->local_mutex);
+ 		BUG();
+diff --git a/net/rxrpc/net_ns.c b/net/rxrpc/net_ns.c
+index cc7e30733feb0..34f389975a7d0 100644
+--- a/net/rxrpc/net_ns.c
++++ b/net/rxrpc/net_ns.c
+@@ -72,7 +72,7 @@ static __net_init int rxrpc_init_net(struct net *net)
+ 	timer_setup(&rxnet->client_conn_reap_timer,
+ 		    rxrpc_client_conn_reap_timeout, 0);
+ 
+-	INIT_LIST_HEAD(&rxnet->local_endpoints);
++	INIT_HLIST_HEAD(&rxnet->local_endpoints);
+ 	mutex_init(&rxnet->local_mutex);
+ 
+ 	hash_init(rxnet->peer_hash);
+@@ -98,6 +98,9 @@ static __net_init int rxrpc_init_net(struct net *net)
+ 	proc_create_net("peers", 0444, rxnet->proc_net,
+ 			&rxrpc_peer_seq_ops,
+ 			sizeof(struct seq_net_private));
++	proc_create_net("locals", 0444, rxnet->proc_net,
++			&rxrpc_local_seq_ops,
++			sizeof(struct seq_net_private));
+ 	return 0;
+ 
+ err_proc:
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index 0298fe2ad6d32..26d2ae9baaf2c 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -121,7 +121,7 @@ static struct rxrpc_peer *__rxrpc_lookup_peer_rcu(
+ 
+ 	hash_for_each_possible_rcu(rxnet->peer_hash, peer, hash_link, hash_key) {
+ 		if (rxrpc_peer_cmp_key(peer, local, srx, hash_key) == 0 &&
+-		    atomic_read(&peer->usage) > 0)
++		    refcount_read(&peer->ref) > 0)
+ 			return peer;
+ 	}
+ 
+@@ -140,7 +140,7 @@ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *local,
+ 	peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key);
+ 	if (peer) {
+ 		_net("PEER %d {%pISp}", peer->debug_id, &peer->srx.transport);
+-		_leave(" = %p {u=%d}", peer, atomic_read(&peer->usage));
++		_leave(" = %p {u=%d}", peer, refcount_read(&peer->ref));
+ 	}
+ 	return peer;
+ }
+@@ -216,7 +216,7 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp)
+ 
+ 	peer = kzalloc(sizeof(struct rxrpc_peer), gfp);
+ 	if (peer) {
+-		atomic_set(&peer->usage, 1);
++		refcount_set(&peer->ref, 1);
+ 		peer->local = rxrpc_get_local(local);
+ 		INIT_HLIST_HEAD(&peer->error_targets);
+ 		peer->service_conns = RB_ROOT;
+@@ -378,7 +378,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx,
+ 
+ 	_net("PEER %d {%pISp}", peer->debug_id, &peer->srx.transport);
+ 
+-	_leave(" = %p {u=%d}", peer, atomic_read(&peer->usage));
++	_leave(" = %p {u=%d}", peer, refcount_read(&peer->ref));
+ 	return peer;
+ }
+ 
+@@ -388,10 +388,10 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx,
+ struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *peer)
+ {
+ 	const void *here = __builtin_return_address(0);
+-	int n;
++	int r;
+ 
+-	n = atomic_inc_return(&peer->usage);
+-	trace_rxrpc_peer(peer->debug_id, rxrpc_peer_got, n, here);
++	__refcount_inc(&peer->ref, &r);
++	trace_rxrpc_peer(peer->debug_id, rxrpc_peer_got, r + 1, here);
+ 	return peer;
+ }
+ 
+@@ -401,11 +401,11 @@ struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *peer)
+ struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *peer)
+ {
+ 	const void *here = __builtin_return_address(0);
++	int r;
+ 
+ 	if (peer) {
+-		int n = atomic_fetch_add_unless(&peer->usage, 1, 0);
+-		if (n > 0)
+-			trace_rxrpc_peer(peer->debug_id, rxrpc_peer_got, n + 1, here);
++		if (__refcount_inc_not_zero(&peer->ref, &r))
++			trace_rxrpc_peer(peer->debug_id, rxrpc_peer_got, r + 1, here);
+ 		else
+ 			peer = NULL;
+ 	}
+@@ -436,13 +436,14 @@ void rxrpc_put_peer(struct rxrpc_peer *peer)
+ {
+ 	const void *here = __builtin_return_address(0);
+ 	unsigned int debug_id;
+-	int n;
++	bool dead;
++	int r;
+ 
+ 	if (peer) {
+ 		debug_id = peer->debug_id;
+-		n = atomic_dec_return(&peer->usage);
+-		trace_rxrpc_peer(debug_id, rxrpc_peer_put, n, here);
+-		if (n == 0)
++		dead = __refcount_dec_and_test(&peer->ref, &r);
++		trace_rxrpc_peer(debug_id, rxrpc_peer_put, r - 1, here);
++		if (dead)
+ 			__rxrpc_put_peer(peer);
+ 	}
+ }
+@@ -455,11 +456,12 @@ void rxrpc_put_peer_locked(struct rxrpc_peer *peer)
+ {
+ 	const void *here = __builtin_return_address(0);
+ 	unsigned int debug_id = peer->debug_id;
+-	int n;
++	bool dead;
++	int r;
+ 
+-	n = atomic_dec_return(&peer->usage);
+-	trace_rxrpc_peer(debug_id, rxrpc_peer_put, n, here);
+-	if (n == 0) {
++	dead = __refcount_dec_and_test(&peer->ref, &r);
++	trace_rxrpc_peer(debug_id, rxrpc_peer_put, r - 1, here);
++	if (dead) {
+ 		hash_del_rcu(&peer->hash_link);
+ 		list_del_init(&peer->keepalive_link);
+ 		rxrpc_free_peer(peer);
+@@ -481,7 +483,7 @@ void rxrpc_destroy_all_peers(struct rxrpc_net *rxnet)
+ 		hlist_for_each_entry(peer, &rxnet->peer_hash[i], hash_link) {
+ 			pr_err("Leaked peer %u {%u} %pISp\n",
+ 			       peer->debug_id,
+-			       atomic_read(&peer->usage),
++			       refcount_read(&peer->ref),
+ 			       &peer->srx.transport);
+ 		}
+ 	}
+diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c
+index e2f990754f882..8967201fd8e54 100644
+--- a/net/rxrpc/proc.c
++++ b/net/rxrpc/proc.c
+@@ -107,7 +107,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
+ 		   call->cid,
+ 		   call->call_id,
+ 		   rxrpc_is_service_call(call) ? "Svc" : "Clt",
+-		   atomic_read(&call->usage),
++		   refcount_read(&call->ref),
+ 		   rxrpc_call_states[call->state],
+ 		   call->abort_code,
+ 		   call->debug_id,
+@@ -189,7 +189,7 @@ print:
+ 		   conn->service_id,
+ 		   conn->proto.cid,
+ 		   rxrpc_conn_is_service(conn) ? "Svc" : "Clt",
+-		   atomic_read(&conn->usage),
++		   refcount_read(&conn->ref),
+ 		   rxrpc_conn_states[conn->state],
+ 		   key_serial(conn->params.key),
+ 		   atomic_read(&conn->serial),
+@@ -239,7 +239,7 @@ static int rxrpc_peer_seq_show(struct seq_file *seq, void *v)
+ 		   " %3u %5u %6llus %8u %8u\n",
+ 		   lbuff,
+ 		   rbuff,
+-		   atomic_read(&peer->usage),
++		   refcount_read(&peer->ref),
+ 		   peer->cong_cwnd,
+ 		   peer->mtu,
+ 		   now - peer->last_tx_at,
+@@ -334,3 +334,72 @@ const struct seq_operations rxrpc_peer_seq_ops = {
+ 	.stop   = rxrpc_peer_seq_stop,
+ 	.show   = rxrpc_peer_seq_show,
+ };
++
++/*
++ * Generate a list of extant virtual local endpoints in /proc/net/rxrpc/locals
++ */
++static int rxrpc_local_seq_show(struct seq_file *seq, void *v)
++{
++	struct rxrpc_local *local;
++	char lbuff[50];
++
++	if (v == SEQ_START_TOKEN) {
++		seq_puts(seq,
++			 "Proto Local                                          "
++			 " Use Act\n");
++		return 0;
++	}
++
++	local = hlist_entry(v, struct rxrpc_local, link);
++
++	sprintf(lbuff, "%pISpc", &local->srx.transport);
++
++	seq_printf(seq,
++		   "UDP   %-47.47s %3u %3u\n",
++		   lbuff,
++		   refcount_read(&local->ref),
++		   atomic_read(&local->active_users));
++
++	return 0;
++}
++
++static void *rxrpc_local_seq_start(struct seq_file *seq, loff_t *_pos)
++	__acquires(rcu)
++{
++	struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
++	unsigned int n;
++
++	rcu_read_lock();
++
++	if (*_pos >= UINT_MAX)
++		return NULL;
++
++	n = *_pos;
++	if (n == 0)
++		return SEQ_START_TOKEN;
++
++	return seq_hlist_start_rcu(&rxnet->local_endpoints, n - 1);
++}
++
++static void *rxrpc_local_seq_next(struct seq_file *seq, void *v, loff_t *_pos)
++{
++	struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
++
++	if (*_pos >= UINT_MAX)
++		return NULL;
++
++	return seq_hlist_next_rcu(v, &rxnet->local_endpoints, _pos);
++}
++
++static void rxrpc_local_seq_stop(struct seq_file *seq, void *v)
++	__releases(rcu)
++{
++	rcu_read_unlock();
++}
++
++const struct seq_operations rxrpc_local_seq_ops = {
++	.start  = rxrpc_local_seq_start,
++	.next   = rxrpc_local_seq_next,
++	.stop   = rxrpc_local_seq_stop,
++	.show   = rxrpc_local_seq_show,
++};
+diff --git a/net/rxrpc/skbuff.c b/net/rxrpc/skbuff.c
+index 0348d2bf6f7d8..580a5acffee71 100644
+--- a/net/rxrpc/skbuff.c
++++ b/net/rxrpc/skbuff.c
+@@ -71,7 +71,6 @@ void rxrpc_free_skb(struct sk_buff *skb, enum rxrpc_skb_trace op)
+ 	const void *here = __builtin_return_address(0);
+ 	if (skb) {
+ 		int n;
+-		CHECK_SLAB_OKAY(&skb->users);
+ 		n = atomic_dec_return(select_skb_count(skb));
+ 		trace_rxrpc_skb(skb, op, refcount_read(&skb->users), n,
+ 				rxrpc_skb(skb)->rx_flags, here);
+diff --git a/net/sched/Kconfig b/net/sched/Kconfig
+index d762e89ab74f7..bc4e5da76fa6f 100644
+--- a/net/sched/Kconfig
++++ b/net/sched/Kconfig
+@@ -976,7 +976,7 @@ config NET_ACT_TUNNEL_KEY
+ 
+ config NET_ACT_CT
+ 	tristate "connection tracking tc action"
+-	depends on NET_CLS_ACT && NF_CONNTRACK && NF_NAT && NF_FLOW_TABLE
++	depends on NET_CLS_ACT && NF_CONNTRACK && (!NF_NAT || NF_NAT) && NF_FLOW_TABLE
+ 	help
+ 	  Say Y here to allow sending the packets to conntrack module.
+ 
+diff --git a/net/sched/act_connmark.c b/net/sched/act_connmark.c
+index e19885d7fe2cb..31d268eedf3f9 100644
+--- a/net/sched/act_connmark.c
++++ b/net/sched/act_connmark.c
+@@ -62,7 +62,7 @@ static int tcf_connmark_act(struct sk_buff *skb, const struct tc_action *a,
+ 
+ 	c = nf_ct_get(skb, &ctinfo);
+ 	if (c) {
+-		skb->mark = c->mark;
++		skb->mark = READ_ONCE(c->mark);
+ 		/* using overlimits stats to count how many packets marked */
+ 		ca->tcf_qstats.overlimits++;
+ 		goto out;
+@@ -82,7 +82,7 @@ static int tcf_connmark_act(struct sk_buff *skb, const struct tc_action *a,
+ 	c = nf_ct_tuplehash_to_ctrack(thash);
+ 	/* using overlimits stats to count how many packets marked */
+ 	ca->tcf_qstats.overlimits++;
+-	skb->mark = c->mark;
++	skb->mark = READ_ONCE(c->mark);
+ 	nf_ct_put(c);
+ 
+ out:
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index f7e88d7466c30..2d41d866de3e3 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -177,7 +177,7 @@ static void tcf_ct_flow_table_add_action_meta(struct nf_conn *ct,
+ 	entry = tcf_ct_flow_table_flow_action_get_next(action);
+ 	entry->id = FLOW_ACTION_CT_METADATA;
+ #if IS_ENABLED(CONFIG_NF_CONNTRACK_MARK)
+-	entry->ct_metadata.mark = ct->mark;
++	entry->ct_metadata.mark = READ_ONCE(ct->mark);
+ #endif
+ 	ctinfo = dir == IP_CT_DIR_ORIGINAL ? IP_CT_ESTABLISHED :
+ 					     IP_CT_ESTABLISHED_REPLY;
+@@ -843,9 +843,9 @@ static void tcf_ct_act_set_mark(struct nf_conn *ct, u32 mark, u32 mask)
+ 	if (!mask)
+ 		return;
+ 
+-	new_mark = mark | (ct->mark & ~(mask));
+-	if (ct->mark != new_mark) {
+-		ct->mark = new_mark;
++	new_mark = mark | (READ_ONCE(ct->mark) & ~(mask));
++	if (READ_ONCE(ct->mark) != new_mark) {
++		WRITE_ONCE(ct->mark, new_mark);
+ 		if (nf_ct_is_confirmed(ct))
+ 			nf_conntrack_event_cache(IPCT_MARK, ct);
+ 	}
+diff --git a/net/sched/act_ctinfo.c b/net/sched/act_ctinfo.c
+index b20c8ce59905b..06c74f22ab98b 100644
+--- a/net/sched/act_ctinfo.c
++++ b/net/sched/act_ctinfo.c
+@@ -33,7 +33,7 @@ static void tcf_ctinfo_dscp_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ {
+ 	u8 dscp, newdscp;
+ 
+-	newdscp = (((ct->mark & cp->dscpmask) >> cp->dscpmaskshift) << 2) &
++	newdscp = (((READ_ONCE(ct->mark) & cp->dscpmask) >> cp->dscpmaskshift) << 2) &
+ 		     ~INET_ECN_MASK;
+ 
+ 	switch (proto) {
+@@ -73,7 +73,7 @@ static void tcf_ctinfo_cpmark_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ 				  struct sk_buff *skb)
+ {
+ 	ca->stats_cpmark_set++;
+-	skb->mark = ct->mark & cp->cpmarkmask;
++	skb->mark = READ_ONCE(ct->mark) & cp->cpmarkmask;
+ }
+ 
+ static int tcf_ctinfo_act(struct sk_buff *skb, const struct tc_action *a,
+@@ -131,7 +131,7 @@ static int tcf_ctinfo_act(struct sk_buff *skb, const struct tc_action *a,
+ 	}
+ 
+ 	if (cp->mode & CTINFO_MODE_DSCP)
+-		if (!cp->dscpstatemask || (ct->mark & cp->dscpstatemask))
++		if (!cp->dscpstatemask || (READ_ONCE(ct->mark) & cp->dscpstatemask))
+ 			tcf_ctinfo_dscp_set(ct, ca, cp, skb, wlen, proto);
+ 
+ 	if (cp->mode & CTINFO_MODE_CPMARK)
+diff --git a/net/tipc/discover.c b/net/tipc/discover.c
+index 2ae268b674650..2730310249e3c 100644
+--- a/net/tipc/discover.c
++++ b/net/tipc/discover.c
+@@ -210,7 +210,10 @@ void tipc_disc_rcv(struct net *net, struct sk_buff *skb,
+ 	u32 self;
+ 	int err;
+ 
+-	skb_linearize(skb);
++	if (skb_linearize(skb)) {
++		kfree_skb(skb);
++		return;
++	}
+ 	hdr = buf_msg(skb);
+ 
+ 	if (caps & TIPC_NODE_ID128)
+diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
+index 561e709ae06ab..89d8a2bd30cd0 100644
+--- a/net/tipc/topsrv.c
++++ b/net/tipc/topsrv.c
+@@ -176,7 +176,7 @@ static void tipc_conn_close(struct tipc_conn *con)
+ 	conn_put(con);
+ }
+ 
+-static struct tipc_conn *tipc_conn_alloc(struct tipc_topsrv *s)
++static struct tipc_conn *tipc_conn_alloc(struct tipc_topsrv *s, struct socket *sock)
+ {
+ 	struct tipc_conn *con;
+ 	int ret;
+@@ -202,10 +202,12 @@ static struct tipc_conn *tipc_conn_alloc(struct tipc_topsrv *s)
+ 	}
+ 	con->conid = ret;
+ 	s->idr_in_use++;
+-	spin_unlock_bh(&s->idr_lock);
+ 
+ 	set_bit(CF_CONNECTED, &con->flags);
+ 	con->server = s;
++	con->sock = sock;
++	conn_get(con);
++	spin_unlock_bh(&s->idr_lock);
+ 
+ 	return con;
+ }
+@@ -467,7 +469,7 @@ static void tipc_topsrv_accept(struct work_struct *work)
+ 		ret = kernel_accept(lsock, &newsock, O_NONBLOCK);
+ 		if (ret < 0)
+ 			return;
+-		con = tipc_conn_alloc(srv);
++		con = tipc_conn_alloc(srv, newsock);
+ 		if (IS_ERR(con)) {
+ 			ret = PTR_ERR(con);
+ 			sock_release(newsock);
+@@ -479,11 +481,11 @@ static void tipc_topsrv_accept(struct work_struct *work)
+ 		newsk->sk_data_ready = tipc_conn_data_ready;
+ 		newsk->sk_write_space = tipc_conn_write_space;
+ 		newsk->sk_user_data = con;
+-		con->sock = newsock;
+ 		write_unlock_bh(&newsk->sk_callback_lock);
+ 
+ 		/* Wake up receive process in case of 'SYN+' message */
+ 		newsk->sk_data_ready(newsk);
++		conn_put(con);
+ 	}
+ }
+ 
+@@ -577,17 +579,17 @@ bool tipc_topsrv_kern_subscr(struct net *net, u32 port, u32 type, u32 lower,
+ 	sub.filter = filter;
+ 	*(u64 *)&sub.usr_handle = (u64)port;
+ 
+-	con = tipc_conn_alloc(tipc_topsrv(net));
++	con = tipc_conn_alloc(tipc_topsrv(net), NULL);
+ 	if (IS_ERR(con))
+ 		return false;
+ 
+ 	*conid = con->conid;
+-	con->sock = NULL;
+ 	rc = tipc_conn_rcv_sub(tipc_topsrv(net), con, &sub);
+-	if (rc >= 0)
+-		return true;
++	if (rc)
++		conn_put(con);
++
+ 	conn_put(con);
+-	return false;
++	return !rc;
+ }
+ 
+ void tipc_topsrv_kern_unsubscr(struct net *net, int conid)
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index c255aac6b816b..8b8e957a69c36 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -97,6 +97,18 @@ static void xfrm_outer_mode_prep(struct xfrm_state *x, struct sk_buff *skb)
+ 	}
+ }
+ 
++static inline bool xmit_xfrm_check_overflow(struct sk_buff *skb)
++{
++	struct xfrm_offload *xo = xfrm_offload(skb);
++	__u32 seq = xo->seq.low;
++
++	seq += skb_shinfo(skb)->gso_segs;
++	if (unlikely(seq < xo->seq.low))
++		return true;
++
++	return false;
++}
++
+ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t features, bool *again)
+ {
+ 	int err;
+@@ -134,7 +146,8 @@ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t featur
+ 		return skb;
+ 	}
+ 
+-	if (skb_is_gso(skb) && unlikely(x->xso.dev != dev)) {
++	if (skb_is_gso(skb) && (unlikely(x->xso.dev != dev) ||
++				unlikely(xmit_xfrm_check_overflow(skb)))) {
+ 		struct sk_buff *segs;
+ 
+ 		/* Packet got rerouted, fixup features and segment it. */
+diff --git a/net/xfrm/xfrm_replay.c b/net/xfrm/xfrm_replay.c
+index c6a4338a0d08e..65d009e3b6bbe 100644
+--- a/net/xfrm/xfrm_replay.c
++++ b/net/xfrm/xfrm_replay.c
+@@ -657,7 +657,7 @@ static int xfrm_replay_overflow_offload_esn(struct xfrm_state *x, struct sk_buff
+ 			oseq += skb_shinfo(skb)->gso_segs;
+ 		}
+ 
+-		if (unlikely(oseq < replay_esn->oseq)) {
++		if (unlikely(xo->seq.low < replay_esn->oseq)) {
+ 			XFRM_SKB_CB(skb)->seq.output.hi = ++oseq_hi;
+ 			xo->seq.hi = oseq_hi;
+ 			replay_esn->oseq_hi = oseq_hi;
+diff --git a/sound/soc/codecs/hdac_hda.h b/sound/soc/codecs/hdac_hda.h
+index d0efc5e254ae9..da0ed74758b05 100644
+--- a/sound/soc/codecs/hdac_hda.h
++++ b/sound/soc/codecs/hdac_hda.h
+@@ -14,7 +14,7 @@ enum {
+ 	HDAC_HDMI_1_DAI_ID,
+ 	HDAC_HDMI_2_DAI_ID,
+ 	HDAC_HDMI_3_DAI_ID,
+-	HDAC_LAST_DAI_ID = HDAC_HDMI_3_DAI_ID,
++	HDAC_DAI_ID_NUM
+ };
+ 
+ struct hdac_hda_pcm {
+@@ -24,7 +24,7 @@ struct hdac_hda_pcm {
+ 
+ struct hdac_hda_priv {
+ 	struct hda_codec codec;
+-	struct hdac_hda_pcm pcm[HDAC_LAST_DAI_ID];
++	struct hdac_hda_pcm pcm[HDAC_DAI_ID_NUM];
+ 	bool need_display_power;
+ };
+ 
+diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c
+index f066e016a874a..edde0323799aa 100644
+--- a/sound/soc/codecs/sgtl5000.c
++++ b/sound/soc/codecs/sgtl5000.c
+@@ -1797,6 +1797,7 @@ static int sgtl5000_i2c_remove(struct i2c_client *client)
+ {
+ 	struct sgtl5000_priv *sgtl5000 = i2c_get_clientdata(client);
+ 
++	regmap_write(sgtl5000->regmap, SGTL5000_CHIP_CLK_CTRL, SGTL5000_CHIP_CLK_CTRL_DEFAULT);
+ 	regmap_write(sgtl5000->regmap, SGTL5000_CHIP_DIG_POWER, SGTL5000_DIG_POWER_DEFAULT);
+ 	regmap_write(sgtl5000->regmap, SGTL5000_CHIP_ANA_POWER, SGTL5000_ANA_POWER_DEFAULT);
+ 
+diff --git a/sound/soc/intel/boards/bytcht_es8316.c b/sound/soc/intel/boards/bytcht_es8316.c
+index 7ed869bf1a926..81269ed5a2aaa 100644
+--- a/sound/soc/intel/boards/bytcht_es8316.c
++++ b/sound/soc/intel/boards/bytcht_es8316.c
+@@ -450,6 +450,13 @@ static const struct dmi_system_id byt_cht_es8316_quirk_table[] = {
+ 					| BYT_CHT_ES8316_INTMIC_IN2_MAP
+ 					| BYT_CHT_ES8316_JD_INVERTED),
+ 	},
++	{	/* Nanote UMPC-01 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "RWC CO.,LTD"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "UMPC-01"),
++		},
++		.driver_data = (void *)BYT_CHT_ES8316_INTMIC_IN1_MAP,
++	},
+ 	{	/* Teclast X98 Plus II */
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "TECLAST"),
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 8b8a9aca2912f..0e2261ee07b67 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -723,11 +723,6 @@ static int soc_pcm_open(struct snd_pcm_substream *substream)
+ 		ret = snd_soc_dai_startup(dai, substream);
+ 		if (ret < 0)
+ 			goto err;
+-
+-		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-			dai->tx_mask = 0;
+-		else
+-			dai->rx_mask = 0;
+ 	}
+ 
+ 	/* Dynamic PCM DAI links compat checks use dynamic capabilities */
+diff --git a/tools/testing/selftests/bpf/verifier/ref_tracking.c b/tools/testing/selftests/bpf/verifier/ref_tracking.c
+index 006b5bd99c089..525d810b10b81 100644
+--- a/tools/testing/selftests/bpf/verifier/ref_tracking.c
++++ b/tools/testing/selftests/bpf/verifier/ref_tracking.c
+@@ -901,3 +901,39 @@
+ 	.result_unpriv = REJECT,
+ 	.errstr_unpriv = "unknown func",
+ },
++{
++	"reference tracking: try to leak released ptr reg",
++	.insns = {
++		BPF_MOV64_IMM(BPF_REG_0, 0),
++		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4),
++		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
++		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
++		BPF_LD_MAP_FD(BPF_REG_1, 0),
++		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
++		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
++		BPF_EXIT_INSN(),
++		BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
++
++		BPF_MOV64_IMM(BPF_REG_0, 0),
++		BPF_LD_MAP_FD(BPF_REG_1, 0),
++		BPF_MOV64_IMM(BPF_REG_2, 8),
++		BPF_MOV64_IMM(BPF_REG_3, 0),
++		BPF_EMIT_CALL(BPF_FUNC_ringbuf_reserve),
++		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
++		BPF_EXIT_INSN(),
++		BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
++
++		BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
++		BPF_MOV64_IMM(BPF_REG_2, 0),
++		BPF_EMIT_CALL(BPF_FUNC_ringbuf_discard),
++		BPF_MOV64_IMM(BPF_REG_0, 0),
++
++		BPF_STX_MEM(BPF_DW, BPF_REG_9, BPF_REG_8, 0),
++		BPF_EXIT_INSN()
++	},
++	.fixup_map_array_48b = { 4 },
++	.fixup_map_ringbuf = { 11 },
++	.result = ACCEPT,
++	.result_unpriv = REJECT,
++	.errstr_unpriv = "R8 !read_ok"
++},


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-12-08 11:51 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2022-12-08 11:51 UTC (permalink / raw
  To: gentoo-commits

commit:     4f58ea188191f32ad926d6c45ce26d40a837b1ae
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Dec  8 11:49:54 2022 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Dec  8 11:49:54 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4f58ea18

Linux patch 5.10.158

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |    4 +
 1157_linux-5.10.158.patch | 3021 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3025 insertions(+)

diff --git a/0000_README b/0000_README
index 47a5e57c..48ded094 100644
--- a/0000_README
+++ b/0000_README
@@ -671,6 +671,10 @@ Patch:  1156_linux-5.10.157.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.157
 
+Patch:  1157_linux-5.10.158.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.158
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1157_linux-5.10.158.patch b/1157_linux-5.10.158.patch
new file mode 100644
index 00000000..bfd5b507
--- /dev/null
+++ b/1157_linux-5.10.158.patch
@@ -0,0 +1,3021 @@
+diff --git a/Makefile b/Makefile
+index bf22df29c4d81..f3d1f07b6a6fc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 157
++SUBLEVEL = 158
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/at91rm9200.dtsi b/arch/arm/boot/dts/at91rm9200.dtsi
+index d1181ead18e5a..21344fbc89e5e 100644
+--- a/arch/arm/boot/dts/at91rm9200.dtsi
++++ b/arch/arm/boot/dts/at91rm9200.dtsi
+@@ -660,7 +660,7 @@
+ 				compatible = "atmel,at91rm9200-udc";
+ 				reg = <0xfffb0000 0x4000>;
+ 				interrupts = <11 IRQ_TYPE_LEVEL_HIGH 2>;
+-				clocks = <&pmc PMC_TYPE_PERIPHERAL 11>, <&pmc PMC_TYPE_SYSTEM 2>;
++				clocks = <&pmc PMC_TYPE_PERIPHERAL 11>, <&pmc PMC_TYPE_SYSTEM 1>;
+ 				clock-names = "pclk", "hclk";
+ 				status = "disabled";
+ 			};
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index c92b55a0ec1cb..f4ac7ff56bcea 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -17,6 +17,7 @@ vdso-syms += flush_icache
+ obj-vdso = $(patsubst %, %.o, $(vdso-syms)) note.o
+ 
+ ccflags-y := -fno-stack-protector
++ccflags-y += -DDISABLE_BRANCH_PROFILING
+ 
+ ifneq ($(c-gettimeofday-y),)
+   CFLAGS_vgettimeofday.o += -fPIC -include $(c-gettimeofday-y)
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index f507ad7c7fd7b..1fcda82635546 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -300,6 +300,7 @@
+ #define X86_FEATURE_UNRET		(11*32+15) /* "" AMD BTB untrain return */
+ #define X86_FEATURE_USE_IBPB_FW		(11*32+16) /* "" Use IBPB during runtime firmware calls */
+ #define X86_FEATURE_RSB_VMEXIT_LITE	(11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */
++#define X86_FEATURE_MSR_TSX_CTRL	(11*32+18) /* "" MSR IA32_TSX_CTRL (Intel) implemented */
+ 
+ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
+ #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 07f5030073bbc..f14cdf9512493 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -310,7 +310,7 @@ static inline void indirect_branch_prediction_barrier(void)
+ /* The Intel SPEC CTRL MSR base value cache */
+ extern u64 x86_spec_ctrl_base;
+ DECLARE_PER_CPU(u64, x86_spec_ctrl_current);
+-extern void write_spec_ctrl_current(u64 val, bool force);
++extern void update_spec_ctrl_cond(u64 val);
+ extern u64 spec_ctrl_current(void);
+ 
+ /*
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index a300a19255b66..e2e22a5740a4d 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -59,11 +59,18 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_current);
+ 
+ static DEFINE_MUTEX(spec_ctrl_mutex);
+ 
++/* Update SPEC_CTRL MSR and its cached copy unconditionally */
++static void update_spec_ctrl(u64 val)
++{
++	this_cpu_write(x86_spec_ctrl_current, val);
++	wrmsrl(MSR_IA32_SPEC_CTRL, val);
++}
++
+ /*
+  * Keep track of the SPEC_CTRL MSR value for the current task, which may differ
+  * from x86_spec_ctrl_base due to STIBP/SSB in __speculation_ctrl_update().
+  */
+-void write_spec_ctrl_current(u64 val, bool force)
++void update_spec_ctrl_cond(u64 val)
+ {
+ 	if (this_cpu_read(x86_spec_ctrl_current) == val)
+ 		return;
+@@ -74,7 +81,7 @@ void write_spec_ctrl_current(u64 val, bool force)
+ 	 * When KERNEL_IBRS this MSR is written on return-to-user, unless
+ 	 * forced the update can be delayed until that time.
+ 	 */
+-	if (force || !cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS))
++	if (!cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS))
+ 		wrmsrl(MSR_IA32_SPEC_CTRL, val);
+ }
+ 
+@@ -1291,7 +1298,7 @@ static void __init spec_ctrl_disable_kernel_rrsba(void)
+ 
+ 	if (ia32_cap & ARCH_CAP_RRSBA) {
+ 		x86_spec_ctrl_base |= SPEC_CTRL_RRSBA_DIS_S;
+-		write_spec_ctrl_current(x86_spec_ctrl_base, true);
++		update_spec_ctrl(x86_spec_ctrl_base);
+ 	}
+ }
+ 
+@@ -1413,7 +1420,7 @@ static void __init spectre_v2_select_mitigation(void)
+ 
+ 	if (spectre_v2_in_ibrs_mode(mode)) {
+ 		x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+-		write_spec_ctrl_current(x86_spec_ctrl_base, true);
++		update_spec_ctrl(x86_spec_ctrl_base);
+ 	}
+ 
+ 	switch (mode) {
+@@ -1527,7 +1534,7 @@ static void __init spectre_v2_select_mitigation(void)
+ static void update_stibp_msr(void * __unused)
+ {
+ 	u64 val = spec_ctrl_current() | (x86_spec_ctrl_base & SPEC_CTRL_STIBP);
+-	write_spec_ctrl_current(val, true);
++	update_spec_ctrl(val);
+ }
+ 
+ /* Update x86_spec_ctrl_base in case SMT state changed. */
+@@ -1760,7 +1767,7 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
+ 			x86_amd_ssb_disable();
+ 		} else {
+ 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
+-			write_spec_ctrl_current(x86_spec_ctrl_base, true);
++			update_spec_ctrl(x86_spec_ctrl_base);
+ 		}
+ 	}
+ 
+@@ -1978,7 +1985,7 @@ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
+ void x86_spec_ctrl_setup_ap(void)
+ {
+ 	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
+-		write_spec_ctrl_current(x86_spec_ctrl_base, true);
++		update_spec_ctrl(x86_spec_ctrl_base);
+ 
+ 	if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
+ 		x86_amd_ssb_disable();
+diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
+index e2ad30e474f82..da06bbb5d68e1 100644
+--- a/arch/x86/kernel/cpu/tsx.c
++++ b/arch/x86/kernel/cpu/tsx.c
+@@ -58,24 +58,6 @@ void tsx_enable(void)
+ 	wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+ }
+ 
+-static bool __init tsx_ctrl_is_supported(void)
+-{
+-	u64 ia32_cap = x86_read_arch_cap_msr();
+-
+-	/*
+-	 * TSX is controlled via MSR_IA32_TSX_CTRL.  However, support for this
+-	 * MSR is enumerated by ARCH_CAP_TSX_MSR bit in MSR_IA32_ARCH_CAPABILITIES.
+-	 *
+-	 * TSX control (aka MSR_IA32_TSX_CTRL) is only available after a
+-	 * microcode update on CPUs that have their MSR_IA32_ARCH_CAPABILITIES
+-	 * bit MDS_NO=1. CPUs with MDS_NO=0 are not planned to get
+-	 * MSR_IA32_TSX_CTRL support even after a microcode update. Thus,
+-	 * tsx= cmdline requests will do nothing on CPUs without
+-	 * MSR_IA32_TSX_CTRL support.
+-	 */
+-	return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
+-}
+-
+ static enum tsx_ctrl_states x86_get_tsx_auto_mode(void)
+ {
+ 	if (boot_cpu_has_bug(X86_BUG_TAA))
+@@ -89,9 +71,22 @@ void __init tsx_init(void)
+ 	char arg[5] = {};
+ 	int ret;
+ 
+-	if (!tsx_ctrl_is_supported())
++	/*
++	 * TSX is controlled via MSR_IA32_TSX_CTRL.  However, support for this
++	 * MSR is enumerated by ARCH_CAP_TSX_MSR bit in MSR_IA32_ARCH_CAPABILITIES.
++	 *
++	 * TSX control (aka MSR_IA32_TSX_CTRL) is only available after a
++	 * microcode update on CPUs that have their MSR_IA32_ARCH_CAPABILITIES
++	 * bit MDS_NO=1. CPUs with MDS_NO=0 are not planned to get
++	 * MSR_IA32_TSX_CTRL support even after a microcode update. Thus,
++	 * tsx= cmdline requests will do nothing on CPUs without
++	 * MSR_IA32_TSX_CTRL support.
++	 */
++	if (!(x86_read_arch_cap_msr() & ARCH_CAP_TSX_CTRL_MSR))
+ 		return;
+ 
++	setup_force_cpu_cap(X86_FEATURE_MSR_TSX_CTRL);
++
+ 	ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
+ 	if (ret >= 0) {
+ 		if (!strcmp(arg, "on")) {
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 4505d845daba6..383afcc1098bf 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -556,7 +556,7 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
+ 	}
+ 
+ 	if (updmsr)
+-		write_spec_ctrl_current(msr, false);
++		update_spec_ctrl_cond(msr);
+ }
+ 
+ static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk)
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index 61581c45788e3..4e4e76ecd3ecd 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -516,16 +516,23 @@ static int pm_cpu_check(const struct x86_cpu_id *c)
+ 
+ static void pm_save_spec_msr(void)
+ {
+-	u32 spec_msr_id[] = {
+-		MSR_IA32_SPEC_CTRL,
+-		MSR_IA32_TSX_CTRL,
+-		MSR_TSX_FORCE_ABORT,
+-		MSR_IA32_MCU_OPT_CTRL,
+-		MSR_AMD64_LS_CFG,
+-		MSR_AMD64_DE_CFG,
++	struct msr_enumeration {
++		u32 msr_no;
++		u32 feature;
++	} msr_enum[] = {
++		{ MSR_IA32_SPEC_CTRL,	 X86_FEATURE_MSR_SPEC_CTRL },
++		{ MSR_IA32_TSX_CTRL,	 X86_FEATURE_MSR_TSX_CTRL },
++		{ MSR_TSX_FORCE_ABORT,	 X86_FEATURE_TSX_FORCE_ABORT },
++		{ MSR_IA32_MCU_OPT_CTRL, X86_FEATURE_SRBDS_CTRL },
++		{ MSR_AMD64_LS_CFG,	 X86_FEATURE_LS_CFG_SSBD },
++		{ MSR_AMD64_DE_CFG,	 X86_FEATURE_LFENCE_RDTSC },
+ 	};
++	int i;
+ 
+-	msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
++	for (i = 0; i < ARRAY_SIZE(msr_enum); i++) {
++		if (boot_cpu_has(msr_enum[i].feature))
++			msr_build_context(&msr_enum[i].msr_no, 1);
++	}
+ }
+ 
+ static int pm_check_save_msr(void)
+diff --git a/block/partitions/core.c b/block/partitions/core.c
+index a02e224115943..e3d61ec4a5a64 100644
+--- a/block/partitions/core.c
++++ b/block/partitions/core.c
+@@ -329,6 +329,7 @@ void delete_partition(struct hd_struct *part)
+ 	struct gendisk *disk = part_to_disk(part);
+ 	struct disk_part_tbl *ptbl =
+ 		rcu_dereference_protected(disk->part_tbl, 1);
++	struct block_device *bdev;
+ 
+ 	/*
+ 	 * ->part_tbl is referenced in this part's release handler, so
+@@ -346,6 +347,12 @@ void delete_partition(struct hd_struct *part)
+ 	 * "in-use" until we really free the gendisk.
+ 	 */
+ 	blk_invalidate_devt(part_devt(part));
++
++	bdev = bdget_part(part);
++	if (bdev) {
++		remove_inode_hash(bdev->bd_inode);
++		bdput(bdev);
++	}
+ 	percpu_ref_kill(&part->ref);
+ }
+ 
+diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
+index 137a5dd880c26..26453a945da44 100644
+--- a/drivers/acpi/numa/hmat.c
++++ b/drivers/acpi/numa/hmat.c
+@@ -563,17 +563,26 @@ static int initiator_cmp(void *priv, const struct list_head *a,
+ {
+ 	struct memory_initiator *ia;
+ 	struct memory_initiator *ib;
+-	unsigned long *p_nodes = priv;
+ 
+ 	ia = list_entry(a, struct memory_initiator, node);
+ 	ib = list_entry(b, struct memory_initiator, node);
+ 
+-	set_bit(ia->processor_pxm, p_nodes);
+-	set_bit(ib->processor_pxm, p_nodes);
+-
+ 	return ia->processor_pxm - ib->processor_pxm;
+ }
+ 
++static int initiators_to_nodemask(unsigned long *p_nodes)
++{
++	struct memory_initiator *initiator;
++
++	if (list_empty(&initiators))
++		return -ENXIO;
++
++	list_for_each_entry(initiator, &initiators, node)
++		set_bit(initiator->processor_pxm, p_nodes);
++
++	return 0;
++}
++
+ static void hmat_register_target_initiators(struct memory_target *target)
+ {
+ 	static DECLARE_BITMAP(p_nodes, MAX_NUMNODES);
+@@ -610,7 +619,10 @@ static void hmat_register_target_initiators(struct memory_target *target)
+ 	 * initiators.
+ 	 */
+ 	bitmap_zero(p_nodes, MAX_NUMNODES);
+-	list_sort(p_nodes, &initiators, initiator_cmp);
++	list_sort(NULL, &initiators, initiator_cmp);
++	if (initiators_to_nodemask(p_nodes) < 0)
++		return;
++
+ 	if (!access0done) {
+ 		for (i = WRITE_LATENCY; i <= READ_BANDWIDTH; i++) {
+ 			loc = localities_types[i];
+@@ -644,8 +656,9 @@ static void hmat_register_target_initiators(struct memory_target *target)
+ 
+ 	/* Access 1 ignores Generic Initiators */
+ 	bitmap_zero(p_nodes, MAX_NUMNODES);
+-	list_sort(p_nodes, &initiators, initiator_cmp);
+-	best = 0;
++	if (initiators_to_nodemask(p_nodes) < 0)
++		return;
++
+ 	for (i = WRITE_LATENCY; i <= READ_BANDWIDTH; i++) {
+ 		loc = localities_types[i];
+ 		if (!loc)
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 1621ce8187052..d69905233aff2 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -401,13 +401,14 @@ int tpm_pm_suspend(struct device *dev)
+ 	    !pm_suspend_via_firmware())
+ 		goto suspended;
+ 
+-	if (!tpm_chip_start(chip)) {
++	rc = tpm_try_get_ops(chip);
++	if (!rc) {
+ 		if (chip->flags & TPM_CHIP_FLAG_TPM2)
+ 			tpm2_shutdown(chip, TPM2_SU_STATE);
+ 		else
+ 			rc = tpm1_pm_suspend(chip, tpm_suspend_pcr);
+ 
+-		tpm_chip_stop(chip);
++		tpm_put_ops(chip);
+ 	}
+ 
+ suspended:
+diff --git a/drivers/clk/at91/at91rm9200.c b/drivers/clk/at91/at91rm9200.c
+index 2c3d8e6ca63ca..7cc20c0f88655 100644
+--- a/drivers/clk/at91/at91rm9200.c
++++ b/drivers/clk/at91/at91rm9200.c
+@@ -38,7 +38,7 @@ static const struct clk_pll_characteristics rm9200_pll_characteristics = {
+ };
+ 
+ static const struct sck at91rm9200_systemck[] = {
+-	{ .n = "udpck", .p = "usbck",    .id = 2 },
++	{ .n = "udpck", .p = "usbck",    .id = 1 },
+ 	{ .n = "uhpck", .p = "usbck",    .id = 4 },
+ 	{ .n = "pck0",  .p = "prog0",    .id = 8 },
+ 	{ .n = "pck1",  .p = "prog1",    .id = 9 },
+diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
+index 0e7748df4be30..c51c5ed15aa75 100644
+--- a/drivers/clocksource/timer-riscv.c
++++ b/drivers/clocksource/timer-riscv.c
+@@ -32,7 +32,7 @@ static int riscv_clock_next_event(unsigned long delta,
+ static unsigned int riscv_clock_event_irq;
+ static DEFINE_PER_CPU(struct clock_event_device, riscv_clock_event) = {
+ 	.name			= "riscv_timer_clockevent",
+-	.features		= CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_C3STOP,
++	.features		= CLOCK_EVT_FEAT_ONESHOT,
+ 	.rating			= 100,
+ 	.set_next_event		= riscv_clock_next_event,
+ };
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+index 98d3661336a46..aabfe5705bb8a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+@@ -315,8 +315,10 @@ static void amdgpu_connector_get_edid(struct drm_connector *connector)
+ 	if (!amdgpu_connector->edid) {
+ 		/* some laptops provide a hardcoded edid in rom for LCDs */
+ 		if (((connector->connector_type == DRM_MODE_CONNECTOR_LVDS) ||
+-		     (connector->connector_type == DRM_MODE_CONNECTOR_eDP)))
++		     (connector->connector_type == DRM_MODE_CONNECTOR_eDP))) {
+ 			amdgpu_connector->edid = amdgpu_connector_get_hardcoded_edid(adev);
++			drm_connector_update_edid_property(connector, amdgpu_connector->edid);
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/Kconfig b/drivers/gpu/drm/amd/display/Kconfig
+index f3274eb6b3418..6c4cba09d23b6 100644
+--- a/drivers/gpu/drm/amd/display/Kconfig
++++ b/drivers/gpu/drm/amd/display/Kconfig
+@@ -5,6 +5,7 @@ menu "Display Engine Configuration"
+ config DRM_AMD_DC
+ 	bool "AMD DC - Enable new display engine"
+ 	default y
++	depends on BROKEN || !CC_IS_CLANG || X86_64 || SPARC64 || ARM64
+ 	select SND_HDA_COMPONENT if SND_HDA_CORE
+ 	select DRM_AMD_DC_DCN if (X86 || PPC64) && !(KCOV_INSTRUMENT_ALL && KCOV_ENABLE_COMPARISONS)
+ 	help
+@@ -12,6 +13,12 @@ config DRM_AMD_DC
+ 	  support for AMDGPU. This adds required support for Vega and
+ 	  Raven ASICs.
+ 
++	  calculate_bandwidth() is presently broken on all !(X86_64 || SPARC64 || ARM64)
++	  architectures built with Clang (all released versions), whereby the stack
++	  frame gets blown up to well over 5k.  This would cause an immediate kernel
++	  panic on most architectures.  We'll revert this when the following bug report
++	  has been resolved: https://github.com/llvm/llvm-project/issues/41896.
++
+ config DRM_AMD_DC_DCN
+ 	def_bool n
+ 	help
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 55ecc67592ebc..167a1ee518a8f 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2348,13 +2348,12 @@ void amdgpu_dm_update_connector_after_detect(
+ 			aconnector->edid =
+ 				(struct edid *)sink->dc_edid.raw_edid;
+ 
+-			drm_connector_update_edid_property(connector,
+-							   aconnector->edid);
+ 			if (aconnector->dc_link->aux_mode)
+ 				drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
+ 						    aconnector->edid);
+ 		}
+ 
++		drm_connector_update_edid_property(connector, aconnector->edid);
+ 		amdgpu_dm_update_freesync_caps(connector, aconnector->edid);
+ 		update_connector_ext_caps(aconnector);
+ 	} else {
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 4272cd3622f8b..0feeac52e4eb3 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -5238,7 +5238,7 @@ int drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state, struct drm
+ 	mst_state = drm_atomic_get_mst_topology_state(state, mgr);
+ 
+ 	if (IS_ERR(mst_state))
+-		return -EINVAL;
++		return PTR_ERR(mst_state);
+ 
+ 	list_for_each_entry(pos, &mst_state->vcpis, next) {
+ 
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
+index 66fcbf9d0fdd3..cca285185dc44 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
+@@ -200,7 +200,7 @@ out_active:	spin_lock(&timelines->lock);
+ 	if (flush_submission(gt, timeout)) /* Wait, there's more! */
+ 		active_count++;
+ 
+-	return active_count ? timeout : 0;
++	return active_count ? timeout ?: -ETIME : 0;
+ }
+ 
+ int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout)
+diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
+index 032129292957e..42b84ebff0579 100644
+--- a/drivers/hwmon/coretemp.c
++++ b/drivers/hwmon/coretemp.c
+@@ -242,10 +242,13 @@ static int adjust_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
+ 	 */
+ 	if (host_bridge && host_bridge->vendor == PCI_VENDOR_ID_INTEL) {
+ 		for (i = 0; i < ARRAY_SIZE(tjmax_pci_table); i++) {
+-			if (host_bridge->device == tjmax_pci_table[i].device)
++			if (host_bridge->device == tjmax_pci_table[i].device) {
++				pci_dev_put(host_bridge);
+ 				return tjmax_pci_table[i].tjmax;
++			}
+ 		}
+ 	}
++	pci_dev_put(host_bridge);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(tjmax_table); i++) {
+ 		if (strstr(c->x86_model_id, tjmax_table[i].id))
+@@ -533,6 +536,10 @@ static void coretemp_remove_core(struct platform_data *pdata, int indx)
+ {
+ 	struct temp_data *tdata = pdata->core_data[indx];
+ 
++	/* if we errored on add then this is already gone */
++	if (!tdata)
++		return;
++
+ 	/* Remove the sysfs attributes */
+ 	sysfs_remove_group(&pdata->hwmon_dev->kobj, &tdata->attr_group);
+ 
+diff --git a/drivers/hwmon/i5500_temp.c b/drivers/hwmon/i5500_temp.c
+index 360f5aee13947..d4be03f43fb45 100644
+--- a/drivers/hwmon/i5500_temp.c
++++ b/drivers/hwmon/i5500_temp.c
+@@ -108,7 +108,7 @@ static int i5500_temp_probe(struct pci_dev *pdev,
+ 	u32 tstimer;
+ 	s8 tsfsc;
+ 
+-	err = pci_enable_device(pdev);
++	err = pcim_enable_device(pdev);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "Failed to enable device\n");
+ 		return err;
+diff --git a/drivers/hwmon/ibmpex.c b/drivers/hwmon/ibmpex.c
+index b2ab83c9fd9a8..fe90f0536d76c 100644
+--- a/drivers/hwmon/ibmpex.c
++++ b/drivers/hwmon/ibmpex.c
+@@ -502,6 +502,7 @@ static void ibmpex_register_bmc(int iface, struct device *dev)
+ 	return;
+ 
+ out_register:
++	list_del(&data->list);
+ 	hwmon_device_unregister(data->hwmon_dev);
+ out_user:
+ 	ipmi_destroy_user(data->user);
+diff --git a/drivers/hwmon/ina3221.c b/drivers/hwmon/ina3221.c
+index ad11cbddc3a7b..d3c98115042b5 100644
+--- a/drivers/hwmon/ina3221.c
++++ b/drivers/hwmon/ina3221.c
+@@ -230,7 +230,7 @@ static int ina3221_read_value(struct ina3221_data *ina, unsigned int reg,
+ 	 * Shunt Voltage Sum register has 14-bit value with 1-bit shift
+ 	 * Other Shunt Voltage registers have 12 bits with 3-bit shift
+ 	 */
+-	if (reg == INA3221_SHUNT_SUM)
++	if (reg == INA3221_SHUNT_SUM || reg == INA3221_CRIT_SUM)
+ 		*val = sign_extend32(regval >> 1, 14);
+ 	else
+ 		*val = sign_extend32(regval >> 3, 12);
+@@ -465,7 +465,7 @@ static int ina3221_write_curr(struct device *dev, u32 attr,
+ 	 *     SHUNT_SUM: (1 / 40uV) << 1 = 1 / 20uV
+ 	 *     SHUNT[1-3]: (1 / 40uV) << 3 = 1 / 5uV
+ 	 */
+-	if (reg == INA3221_SHUNT_SUM)
++	if (reg == INA3221_SHUNT_SUM || reg == INA3221_CRIT_SUM)
+ 		regval = DIV_ROUND_CLOSEST(voltage_uv, 20) & 0xfffe;
+ 	else
+ 		regval = DIV_ROUND_CLOSEST(voltage_uv, 5) & 0xfff8;
+diff --git a/drivers/hwmon/ltc2947-core.c b/drivers/hwmon/ltc2947-core.c
+index 5423466de697a..e918490f3ff75 100644
+--- a/drivers/hwmon/ltc2947-core.c
++++ b/drivers/hwmon/ltc2947-core.c
+@@ -396,7 +396,7 @@ static int ltc2947_read_temp(struct device *dev, const u32 attr, long *val,
+ 		return ret;
+ 
+ 	/* in milidegrees celcius, temp is given by: */
+-	*val = (__val * 204) + 550;
++	*val = (__val * 204) + 5500;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index be4ad516293b0..b4fb4336b4e8b 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -843,7 +843,8 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs,
+ 	int i, result;
+ 	unsigned int temp;
+ 	int block_data = msgs->flags & I2C_M_RECV_LEN;
+-	int use_dma = i2c_imx->dma && msgs->len >= DMA_THRESHOLD && !block_data;
++	int use_dma = i2c_imx->dma && msgs->flags & I2C_M_DMA_SAFE &&
++		msgs->len >= DMA_THRESHOLD && !block_data;
+ 
+ 	dev_dbg(&i2c_imx->adapter.dev,
+ 		"<%s> write slave address: addr=0x%x\n",
+@@ -1011,7 +1012,8 @@ static int i2c_imx_xfer_common(struct i2c_adapter *adapter,
+ 			result = i2c_imx_read(i2c_imx, &msgs[i], is_lastmsg, atomic);
+ 		} else {
+ 			if (!atomic &&
+-			    i2c_imx->dma && msgs[i].len >= DMA_THRESHOLD)
++			    i2c_imx->dma && msgs[i].len >= DMA_THRESHOLD &&
++				msgs[i].flags & I2C_M_DMA_SAFE)
+ 				result = i2c_imx_dma_write(i2c_imx, &msgs[i]);
+ 			else
+ 				result = i2c_imx_write(i2c_imx, &msgs[i], atomic);
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index 31e3d2c9d6bc5..c1b6797372409 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -2362,8 +2362,17 @@ static struct platform_driver npcm_i2c_bus_driver = {
+ 
+ static int __init npcm_i2c_init(void)
+ {
++	int ret;
++
+ 	npcm_i2c_debugfs_dir = debugfs_create_dir("npcm_i2c", NULL);
+-	return platform_driver_register(&npcm_i2c_bus_driver);
++
++	ret = platform_driver_register(&npcm_i2c_bus_driver);
++	if (ret) {
++		debugfs_remove_recursive(npcm_i2c_debugfs_dir);
++		return ret;
++	}
++
++	return 0;
+ }
+ module_init(npcm_i2c_init);
+ 
+diff --git a/drivers/iio/health/afe4403.c b/drivers/iio/health/afe4403.c
+index 38734e4ce3605..82d01ac36128b 100644
+--- a/drivers/iio/health/afe4403.c
++++ b/drivers/iio/health/afe4403.c
+@@ -245,14 +245,14 @@ static int afe4403_read_raw(struct iio_dev *indio_dev,
+ 			    int *val, int *val2, long mask)
+ {
+ 	struct afe4403_data *afe = iio_priv(indio_dev);
+-	unsigned int reg = afe4403_channel_values[chan->address];
+-	unsigned int field = afe4403_channel_leds[chan->address];
++	unsigned int reg, field;
+ 	int ret;
+ 
+ 	switch (chan->type) {
+ 	case IIO_INTENSITY:
+ 		switch (mask) {
+ 		case IIO_CHAN_INFO_RAW:
++			reg = afe4403_channel_values[chan->address];
+ 			ret = afe4403_read(afe, reg, val);
+ 			if (ret)
+ 				return ret;
+@@ -262,6 +262,7 @@ static int afe4403_read_raw(struct iio_dev *indio_dev,
+ 	case IIO_CURRENT:
+ 		switch (mask) {
+ 		case IIO_CHAN_INFO_RAW:
++			field = afe4403_channel_leds[chan->address];
+ 			ret = regmap_field_read(afe->fields[field], val);
+ 			if (ret)
+ 				return ret;
+diff --git a/drivers/iio/health/afe4404.c b/drivers/iio/health/afe4404.c
+index 61fe4932d81d0..0eaa34da59a87 100644
+--- a/drivers/iio/health/afe4404.c
++++ b/drivers/iio/health/afe4404.c
+@@ -250,20 +250,20 @@ static int afe4404_read_raw(struct iio_dev *indio_dev,
+ 			    int *val, int *val2, long mask)
+ {
+ 	struct afe4404_data *afe = iio_priv(indio_dev);
+-	unsigned int value_reg = afe4404_channel_values[chan->address];
+-	unsigned int led_field = afe4404_channel_leds[chan->address];
+-	unsigned int offdac_field = afe4404_channel_offdacs[chan->address];
++	unsigned int value_reg, led_field, offdac_field;
+ 	int ret;
+ 
+ 	switch (chan->type) {
+ 	case IIO_INTENSITY:
+ 		switch (mask) {
+ 		case IIO_CHAN_INFO_RAW:
++			value_reg = afe4404_channel_values[chan->address];
+ 			ret = regmap_read(afe->regmap, value_reg, val);
+ 			if (ret)
+ 				return ret;
+ 			return IIO_VAL_INT;
+ 		case IIO_CHAN_INFO_OFFSET:
++			offdac_field = afe4404_channel_offdacs[chan->address];
+ 			ret = regmap_field_read(afe->fields[offdac_field], val);
+ 			if (ret)
+ 				return ret;
+@@ -273,6 +273,7 @@ static int afe4404_read_raw(struct iio_dev *indio_dev,
+ 	case IIO_CURRENT:
+ 		switch (mask) {
+ 		case IIO_CHAN_INFO_RAW:
++			led_field = afe4404_channel_leds[chan->address];
+ 			ret = regmap_field_read(afe->fields[led_field], val);
+ 			if (ret)
+ 				return ret;
+@@ -295,19 +296,20 @@ static int afe4404_write_raw(struct iio_dev *indio_dev,
+ 			     int val, int val2, long mask)
+ {
+ 	struct afe4404_data *afe = iio_priv(indio_dev);
+-	unsigned int led_field = afe4404_channel_leds[chan->address];
+-	unsigned int offdac_field = afe4404_channel_offdacs[chan->address];
++	unsigned int led_field, offdac_field;
+ 
+ 	switch (chan->type) {
+ 	case IIO_INTENSITY:
+ 		switch (mask) {
+ 		case IIO_CHAN_INFO_OFFSET:
++			offdac_field = afe4404_channel_offdacs[chan->address];
+ 			return regmap_field_write(afe->fields[offdac_field], val);
+ 		}
+ 		break;
+ 	case IIO_CURRENT:
+ 		switch (mask) {
+ 		case IIO_CHAN_INFO_RAW:
++			led_field = afe4404_channel_leds[chan->address];
+ 			return regmap_field_write(afe->fields[led_field], val);
+ 		}
+ 		break;
+diff --git a/drivers/iio/light/Kconfig b/drivers/iio/light/Kconfig
+index 917f9becf9c75..dd52eff9ba2a3 100644
+--- a/drivers/iio/light/Kconfig
++++ b/drivers/iio/light/Kconfig
+@@ -294,6 +294,8 @@ config RPR0521
+ 	tristate "ROHM RPR0521 ALS and proximity sensor driver"
+ 	depends on I2C
+ 	select REGMAP_I2C
++	select IIO_BUFFER
++	select IIO_TRIGGERED_BUFFER
+ 	help
+ 	  Say Y here if you want to build support for ROHM's RPR0521
+ 	  ambient light and proximity sensor device.
+diff --git a/drivers/input/touchscreen/raydium_i2c_ts.c b/drivers/input/touchscreen/raydium_i2c_ts.c
+index 4d2d22a869773..bdb3e2c3ab797 100644
+--- a/drivers/input/touchscreen/raydium_i2c_ts.c
++++ b/drivers/input/touchscreen/raydium_i2c_ts.c
+@@ -210,12 +210,14 @@ static int raydium_i2c_send(struct i2c_client *client,
+ 
+ 		error = raydium_i2c_xfer(client, addr, xfer, ARRAY_SIZE(xfer));
+ 		if (likely(!error))
+-			return 0;
++			goto out;
+ 
+ 		msleep(RM_RETRY_DELAY_MS);
+ 	} while (++tries < RM_MAX_RETRIES);
+ 
+ 	dev_err(&client->dev, "%s failed: %d\n", __func__, error);
++out:
++	kfree(tx_buf);
+ 	return error;
+ }
+ 
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index 0bc497f4cb9f0..a27765a7f6b75 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -816,6 +816,7 @@ int __init dmar_dev_scope_init(void)
+ 			info = dmar_alloc_pci_notify_info(dev,
+ 					BUS_NOTIFY_ADD_DEVICE);
+ 			if (!info) {
++				pci_dev_put(dev);
+ 				return dmar_dev_scope_status;
+ 			} else {
+ 				dmar_pci_bus_add_dev(info);
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index f23329b7f97cd..47666c9b4ba11 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -4893,8 +4893,10 @@ static inline bool has_external_pci(void)
+ 	struct pci_dev *pdev = NULL;
+ 
+ 	for_each_pci_dev(pdev)
+-		if (pdev->external_facing)
++		if (pdev->external_facing) {
++			pci_dev_put(pdev);
+ 			return true;
++		}
+ 
+ 	return false;
+ }
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index 7d9ec91e081b2..8f24653942536 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -1539,6 +1539,11 @@ void mmc_init_erase(struct mmc_card *card)
+ 		card->pref_erase = 0;
+ }
+ 
++static bool is_trim_arg(unsigned int arg)
++{
++	return (arg & MMC_TRIM_OR_DISCARD_ARGS) && arg != MMC_DISCARD_ARG;
++}
++
+ static unsigned int mmc_mmc_erase_timeout(struct mmc_card *card,
+ 				          unsigned int arg, unsigned int qty)
+ {
+@@ -1837,7 +1842,7 @@ int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr,
+ 	    !(card->ext_csd.sec_feature_support & EXT_CSD_SEC_ER_EN))
+ 		return -EOPNOTSUPP;
+ 
+-	if (mmc_card_mmc(card) && (arg & MMC_TRIM_ARGS) &&
++	if (mmc_card_mmc(card) && is_trim_arg(arg) &&
+ 	    !(card->ext_csd.sec_feature_support & EXT_CSD_SEC_GB_CL_EN))
+ 		return -EOPNOTSUPP;
+ 
+@@ -1867,7 +1872,7 @@ int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr,
+ 	 * identified by the card->eg_boundary flag.
+ 	 */
+ 	rem = card->erase_size - (from % card->erase_size);
+-	if ((arg & MMC_TRIM_ARGS) && (card->eg_boundary) && (nr > rem)) {
++	if ((arg & MMC_TRIM_OR_DISCARD_ARGS) && card->eg_boundary && nr > rem) {
+ 		err = mmc_do_erase(card, from, from + rem - 1, arg);
+ 		from += rem;
+ 		if ((err) || (to <= from))
+diff --git a/drivers/mmc/core/mmc_test.c b/drivers/mmc/core/mmc_test.c
+index 152e7525ed338..b9b6f000154bb 100644
+--- a/drivers/mmc/core/mmc_test.c
++++ b/drivers/mmc/core/mmc_test.c
+@@ -3195,7 +3195,8 @@ static int __mmc_test_register_dbgfs_file(struct mmc_card *card,
+ 	struct mmc_test_dbgfs_file *df;
+ 
+ 	if (card->debugfs_root)
+-		debugfs_create_file(name, mode, card->debugfs_root, card, fops);
++		file = debugfs_create_file(name, mode, card->debugfs_root,
++					   card, fops);
+ 
+ 	df = kmalloc(sizeof(*df), GFP_KERNEL);
+ 	if (!df) {
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 1f1bdd34dd554..9e827bfe19ff0 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -1449,7 +1449,7 @@ static void esdhc_cqe_enable(struct mmc_host *mmc)
+ 	 * system resume back.
+ 	 */
+ 	cqhci_writel(cq_host, 0, CQHCI_CTL);
+-	if (cqhci_readl(cq_host, CQHCI_CTL) && CQHCI_HALT)
++	if (cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT)
+ 		dev_err(mmc_dev(host->mmc),
+ 			"failed to exit halt state when enable CQE\n");
+ 
+diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c
+index 8575f4537e57b..110ee0c804c8a 100644
+--- a/drivers/mmc/host/sdhci-sprd.c
++++ b/drivers/mmc/host/sdhci-sprd.c
+@@ -457,7 +457,7 @@ static int sdhci_sprd_voltage_switch(struct mmc_host *mmc, struct mmc_ios *ios)
+ 	}
+ 
+ 	if (IS_ERR(sprd_host->pinctrl))
+-		return 0;
++		goto reset;
+ 
+ 	switch (ios->signal_voltage) {
+ 	case MMC_SIGNAL_VOLTAGE_180:
+@@ -485,6 +485,8 @@ static int sdhci_sprd_voltage_switch(struct mmc_host *mmc, struct mmc_ios *ios)
+ 
+ 	/* Wait for 300 ~ 500 us for pin state stable */
+ 	usleep_range(300, 500);
++
++reset:
+ 	sdhci_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA);
+ 
+ 	return 0;
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index d42e86cdff12e..133f0d3764804 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -339,6 +339,7 @@ static void sdhci_init(struct sdhci_host *host, int soft)
+ 	if (soft) {
+ 		/* force clock reconfiguration */
+ 		host->clock = 0;
++		host->reinit_uhs = true;
+ 		mmc->ops->set_ios(mmc, &mmc->ios);
+ 	}
+ }
+@@ -2258,11 +2259,46 @@ void sdhci_set_uhs_signaling(struct sdhci_host *host, unsigned timing)
+ }
+ EXPORT_SYMBOL_GPL(sdhci_set_uhs_signaling);
+ 
++static bool sdhci_timing_has_preset(unsigned char timing)
++{
++	switch (timing) {
++	case MMC_TIMING_UHS_SDR12:
++	case MMC_TIMING_UHS_SDR25:
++	case MMC_TIMING_UHS_SDR50:
++	case MMC_TIMING_UHS_SDR104:
++	case MMC_TIMING_UHS_DDR50:
++	case MMC_TIMING_MMC_DDR52:
++		return true;
++	};
++	return false;
++}
++
++static bool sdhci_preset_needed(struct sdhci_host *host, unsigned char timing)
++{
++	return !(host->quirks2 & SDHCI_QUIRK2_PRESET_VALUE_BROKEN) &&
++	       sdhci_timing_has_preset(timing);
++}
++
++static bool sdhci_presetable_values_change(struct sdhci_host *host, struct mmc_ios *ios)
++{
++	/*
++	 * Preset Values are: Driver Strength, Clock Generator and SDCLK/RCLK
++	 * Frequency. Check if preset values need to be enabled, or the Driver
++	 * Strength needs updating. Note, clock changes are handled separately.
++	 */
++	return !host->preset_enabled &&
++	       (sdhci_preset_needed(host, ios->timing) || host->drv_type != ios->drv_type);
++}
++
+ void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ {
+ 	struct sdhci_host *host = mmc_priv(mmc);
++	bool reinit_uhs = host->reinit_uhs;
++	bool turning_on_clk = false;
+ 	u8 ctrl;
+ 
++	host->reinit_uhs = false;
++
+ 	if (ios->power_mode == MMC_POWER_UNDEFINED)
+ 		return;
+ 
+@@ -2288,6 +2324,8 @@ void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 		sdhci_enable_preset_value(host, false);
+ 
+ 	if (!ios->clock || ios->clock != host->clock) {
++		turning_on_clk = ios->clock && !host->clock;
++
+ 		host->ops->set_clock(host, ios->clock);
+ 		host->clock = ios->clock;
+ 
+@@ -2314,6 +2352,17 @@ void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 
+ 	host->ops->set_bus_width(host, ios->bus_width);
+ 
++	/*
++	 * Special case to avoid multiple clock changes during voltage
++	 * switching.
++	 */
++	if (!reinit_uhs &&
++	    turning_on_clk &&
++	    host->timing == ios->timing &&
++	    host->version >= SDHCI_SPEC_300 &&
++	    !sdhci_presetable_values_change(host, ios))
++		return;
++
+ 	ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
+ 
+ 	if (!(host->quirks & SDHCI_QUIRK_NO_HISPD_BIT)) {
+@@ -2357,6 +2406,7 @@ void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 			}
+ 
+ 			sdhci_writew(host, ctrl_2, SDHCI_HOST_CONTROL2);
++			host->drv_type = ios->drv_type;
+ 		} else {
+ 			/*
+ 			 * According to SDHC Spec v3.00, if the Preset Value
+@@ -2384,19 +2434,14 @@ void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 		host->ops->set_uhs_signaling(host, ios->timing);
+ 		host->timing = ios->timing;
+ 
+-		if (!(host->quirks2 & SDHCI_QUIRK2_PRESET_VALUE_BROKEN) &&
+-				((ios->timing == MMC_TIMING_UHS_SDR12) ||
+-				 (ios->timing == MMC_TIMING_UHS_SDR25) ||
+-				 (ios->timing == MMC_TIMING_UHS_SDR50) ||
+-				 (ios->timing == MMC_TIMING_UHS_SDR104) ||
+-				 (ios->timing == MMC_TIMING_UHS_DDR50) ||
+-				 (ios->timing == MMC_TIMING_MMC_DDR52))) {
++		if (sdhci_preset_needed(host, ios->timing)) {
+ 			u16 preset;
+ 
+ 			sdhci_enable_preset_value(host, true);
+ 			preset = sdhci_get_preset_value(host);
+ 			ios->drv_type = FIELD_GET(SDHCI_PRESET_DRV_MASK,
+ 						  preset);
++			host->drv_type = ios->drv_type;
+ 		}
+ 
+ 		/* Re-enable SD Clock */
+@@ -3707,6 +3752,7 @@ int sdhci_resume_host(struct sdhci_host *host)
+ 		sdhci_init(host, 0);
+ 		host->pwr = 0;
+ 		host->clock = 0;
++		host->reinit_uhs = true;
+ 		mmc->ops->set_ios(mmc, &mmc->ios);
+ 	} else {
+ 		sdhci_init(host, (host->mmc->pm_flags & MMC_PM_KEEP_POWER));
+@@ -3769,6 +3815,7 @@ int sdhci_runtime_resume_host(struct sdhci_host *host, int soft_reset)
+ 		/* Force clock and power re-program */
+ 		host->pwr = 0;
+ 		host->clock = 0;
++		host->reinit_uhs = true;
+ 		mmc->ops->start_signal_voltage_switch(mmc, &mmc->ios);
+ 		mmc->ops->set_ios(mmc, &mmc->ios);
+ 
+diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
+index 8b1650f37fbba..4db57c3a8cd4b 100644
+--- a/drivers/mmc/host/sdhci.h
++++ b/drivers/mmc/host/sdhci.h
+@@ -520,6 +520,8 @@ struct sdhci_host {
+ 
+ 	unsigned int clock;	/* Current clock (MHz) */
+ 	u8 pwr;			/* Current voltage */
++	u8 drv_type;		/* Current UHS-I driver type */
++	bool reinit_uhs;	/* Force UHS-related re-initialization */
+ 
+ 	bool runtime_suspended;	/* Host is runtime suspended */
+ 	bool bus_on;		/* Bus power prevents runtime suspend */
+diff --git a/drivers/net/can/cc770/cc770_isa.c b/drivers/net/can/cc770/cc770_isa.c
+index 194c86e0f340f..8f6dccd5a5879 100644
+--- a/drivers/net/can/cc770/cc770_isa.c
++++ b/drivers/net/can/cc770/cc770_isa.c
+@@ -264,22 +264,24 @@ static int cc770_isa_probe(struct platform_device *pdev)
+ 	if (err) {
+ 		dev_err(&pdev->dev,
+ 			"couldn't register device (err=%d)\n", err);
+-		goto exit_unmap;
++		goto exit_free;
+ 	}
+ 
+ 	dev_info(&pdev->dev, "device registered (reg_base=0x%p, irq=%d)\n",
+ 		 priv->reg_base, dev->irq);
+ 	return 0;
+ 
+- exit_unmap:
++exit_free:
++	free_cc770dev(dev);
++exit_unmap:
+ 	if (mem[idx])
+ 		iounmap(base);
+- exit_release:
++exit_release:
+ 	if (mem[idx])
+ 		release_mem_region(mem[idx], iosize);
+ 	else
+ 		release_region(port[idx], iosize);
+- exit:
++exit:
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/can/sja1000/sja1000_isa.c b/drivers/net/can/sja1000/sja1000_isa.c
+index d513fac507185..db3e767d5320f 100644
+--- a/drivers/net/can/sja1000/sja1000_isa.c
++++ b/drivers/net/can/sja1000/sja1000_isa.c
+@@ -202,22 +202,24 @@ static int sja1000_isa_probe(struct platform_device *pdev)
+ 	if (err) {
+ 		dev_err(&pdev->dev, "registering %s failed (err=%d)\n",
+ 			DRV_NAME, err);
+-		goto exit_unmap;
++		goto exit_free;
+ 	}
+ 
+ 	dev_info(&pdev->dev, "%s device registered (reg_base=0x%p, irq=%d)\n",
+ 		 DRV_NAME, priv->reg_base, dev->irq);
+ 	return 0;
+ 
+- exit_unmap:
++exit_free:
++	free_sja1000dev(dev);
++exit_unmap:
+ 	if (mem[idx])
+ 		iounmap(base);
+- exit_release:
++exit_release:
+ 	if (mem[idx])
+ 		release_mem_region(mem[idx], iosize);
+ 	else
+ 		release_region(port[idx], iosize);
+- exit:
++exit:
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
+index 2044d440d7de4..c79bb8cf962ce 100644
+--- a/drivers/net/dsa/lan9303-core.c
++++ b/drivers/net/dsa/lan9303-core.c
+@@ -958,7 +958,7 @@ static const struct lan9303_mib_desc lan9303_mib[] = {
+ 	{ .offset = LAN9303_MAC_TX_BRDCST_CNT_0, .name = "TxBroad", },
+ 	{ .offset = LAN9303_MAC_TX_PAUSE_CNT_0, .name = "TxPause", },
+ 	{ .offset = LAN9303_MAC_TX_MULCST_CNT_0, .name = "TxMulti", },
+-	{ .offset = LAN9303_MAC_RX_UNDSZE_CNT_0, .name = "TxUnderRun", },
++	{ .offset = LAN9303_MAC_RX_UNDSZE_CNT_0, .name = "RxShort", },
+ 	{ .offset = LAN9303_MAC_TX_64_CNT_0, .name = "Tx64Byte", },
+ 	{ .offset = LAN9303_MAC_TX_127_CNT_0, .name = "Tx128Byte", },
+ 	{ .offset = LAN9303_MAC_TX_255_CNT_0, .name = "Tx256Byte", },
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
+index de2a9348bc3f8..1d512e6a89f5c 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
+@@ -13,6 +13,7 @@
+ #include "aq_ptp.h"
+ #include "aq_filters.h"
+ #include "aq_macsec.h"
++#include "aq_main.h"
+ 
+ #include <linux/ptp_clock_kernel.h>
+ 
+@@ -841,7 +842,7 @@ static int aq_set_ringparam(struct net_device *ndev,
+ 
+ 	if (netif_running(ndev)) {
+ 		ndev_running = true;
+-		dev_close(ndev);
++		aq_ndev_close(ndev);
+ 	}
+ 
+ 	cfg->rxds = max(ring->rx_pending, hw_caps->rxds_min);
+@@ -857,7 +858,7 @@ static int aq_set_ringparam(struct net_device *ndev,
+ 		goto err_exit;
+ 
+ 	if (ndev_running)
+-		err = dev_open(ndev, NULL);
++		err = aq_ndev_open(ndev);
+ 
+ err_exit:
+ 	return err;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_main.c b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+index ff245f75fa3d1..1401fc4632b51 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_main.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+@@ -53,7 +53,7 @@ struct net_device *aq_ndev_alloc(void)
+ 	return ndev;
+ }
+ 
+-static int aq_ndev_open(struct net_device *ndev)
++int aq_ndev_open(struct net_device *ndev)
+ {
+ 	struct aq_nic_s *aq_nic = netdev_priv(ndev);
+ 	int err = 0;
+@@ -83,7 +83,7 @@ err_exit:
+ 	return err;
+ }
+ 
+-static int aq_ndev_close(struct net_device *ndev)
++int aq_ndev_close(struct net_device *ndev)
+ {
+ 	struct aq_nic_s *aq_nic = netdev_priv(ndev);
+ 	int err = 0;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_main.h b/drivers/net/ethernet/aquantia/atlantic/aq_main.h
+index a5a624b9ce733..2a562ab7a5afd 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_main.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_main.h
+@@ -14,5 +14,7 @@
+ 
+ void aq_ndev_schedule_work(struct work_struct *work);
+ struct net_device *aq_ndev_alloc(void);
++int aq_ndev_open(struct net_device *ndev);
++int aq_ndev_close(struct net_device *ndev);
+ 
+ #endif /* AQ_MAIN_H */
+diff --git a/drivers/net/ethernet/intel/e100.c b/drivers/net/ethernet/intel/e100.c
+index 9295a9a1efc73..001850d578e8f 100644
+--- a/drivers/net/ethernet/intel/e100.c
++++ b/drivers/net/ethernet/intel/e100.c
+@@ -1739,14 +1739,11 @@ static int e100_xmit_prepare(struct nic *nic, struct cb *cb,
+ 	dma_addr_t dma_addr;
+ 	cb->command = nic->tx_command;
+ 
+-	dma_addr = pci_map_single(nic->pdev,
+-				  skb->data, skb->len, PCI_DMA_TODEVICE);
++	dma_addr = dma_map_single(&nic->pdev->dev, skb->data, skb->len,
++				  DMA_TO_DEVICE);
+ 	/* If we can't map the skb, have the upper layer try later */
+-	if (pci_dma_mapping_error(nic->pdev, dma_addr)) {
+-		dev_kfree_skb_any(skb);
+-		skb = NULL;
++	if (dma_mapping_error(&nic->pdev->dev, dma_addr))
+ 		return -ENOMEM;
+-	}
+ 
+ 	/*
+ 	 * Use the last 4 bytes of the SKB payload packet as the CRC, used for
+@@ -1828,10 +1825,10 @@ static int e100_tx_clean(struct nic *nic)
+ 			dev->stats.tx_packets++;
+ 			dev->stats.tx_bytes += cb->skb->len;
+ 
+-			pci_unmap_single(nic->pdev,
+-				le32_to_cpu(cb->u.tcb.tbd.buf_addr),
+-				le16_to_cpu(cb->u.tcb.tbd.size),
+-				PCI_DMA_TODEVICE);
++			dma_unmap_single(&nic->pdev->dev,
++					 le32_to_cpu(cb->u.tcb.tbd.buf_addr),
++					 le16_to_cpu(cb->u.tcb.tbd.size),
++					 DMA_TO_DEVICE);
+ 			dev_kfree_skb_any(cb->skb);
+ 			cb->skb = NULL;
+ 			tx_cleaned = 1;
+@@ -1855,10 +1852,10 @@ static void e100_clean_cbs(struct nic *nic)
+ 		while (nic->cbs_avail != nic->params.cbs.count) {
+ 			struct cb *cb = nic->cb_to_clean;
+ 			if (cb->skb) {
+-				pci_unmap_single(nic->pdev,
+-					le32_to_cpu(cb->u.tcb.tbd.buf_addr),
+-					le16_to_cpu(cb->u.tcb.tbd.size),
+-					PCI_DMA_TODEVICE);
++				dma_unmap_single(&nic->pdev->dev,
++						 le32_to_cpu(cb->u.tcb.tbd.buf_addr),
++						 le16_to_cpu(cb->u.tcb.tbd.size),
++						 DMA_TO_DEVICE);
+ 				dev_kfree_skb(cb->skb);
+ 			}
+ 			nic->cb_to_clean = nic->cb_to_clean->next;
+@@ -1925,10 +1922,10 @@ static int e100_rx_alloc_skb(struct nic *nic, struct rx *rx)
+ 
+ 	/* Init, and map the RFD. */
+ 	skb_copy_to_linear_data(rx->skb, &nic->blank_rfd, sizeof(struct rfd));
+-	rx->dma_addr = pci_map_single(nic->pdev, rx->skb->data,
+-		RFD_BUF_LEN, PCI_DMA_BIDIRECTIONAL);
++	rx->dma_addr = dma_map_single(&nic->pdev->dev, rx->skb->data,
++				      RFD_BUF_LEN, DMA_BIDIRECTIONAL);
+ 
+-	if (pci_dma_mapping_error(nic->pdev, rx->dma_addr)) {
++	if (dma_mapping_error(&nic->pdev->dev, rx->dma_addr)) {
+ 		dev_kfree_skb_any(rx->skb);
+ 		rx->skb = NULL;
+ 		rx->dma_addr = 0;
+@@ -1941,8 +1938,10 @@ static int e100_rx_alloc_skb(struct nic *nic, struct rx *rx)
+ 	if (rx->prev->skb) {
+ 		struct rfd *prev_rfd = (struct rfd *)rx->prev->skb->data;
+ 		put_unaligned_le32(rx->dma_addr, &prev_rfd->link);
+-		pci_dma_sync_single_for_device(nic->pdev, rx->prev->dma_addr,
+-			sizeof(struct rfd), PCI_DMA_BIDIRECTIONAL);
++		dma_sync_single_for_device(&nic->pdev->dev,
++					   rx->prev->dma_addr,
++					   sizeof(struct rfd),
++					   DMA_BIDIRECTIONAL);
+ 	}
+ 
+ 	return 0;
+@@ -1961,8 +1960,8 @@ static int e100_rx_indicate(struct nic *nic, struct rx *rx,
+ 		return -EAGAIN;
+ 
+ 	/* Need to sync before taking a peek at cb_complete bit */
+-	pci_dma_sync_single_for_cpu(nic->pdev, rx->dma_addr,
+-		sizeof(struct rfd), PCI_DMA_BIDIRECTIONAL);
++	dma_sync_single_for_cpu(&nic->pdev->dev, rx->dma_addr,
++				sizeof(struct rfd), DMA_BIDIRECTIONAL);
+ 	rfd_status = le16_to_cpu(rfd->status);
+ 
+ 	netif_printk(nic, rx_status, KERN_DEBUG, nic->netdev,
+@@ -1981,9 +1980,9 @@ static int e100_rx_indicate(struct nic *nic, struct rx *rx,
+ 
+ 			if (ioread8(&nic->csr->scb.status) & rus_no_res)
+ 				nic->ru_running = RU_SUSPENDED;
+-		pci_dma_sync_single_for_device(nic->pdev, rx->dma_addr,
+-					       sizeof(struct rfd),
+-					       PCI_DMA_FROMDEVICE);
++		dma_sync_single_for_device(&nic->pdev->dev, rx->dma_addr,
++					   sizeof(struct rfd),
++					   DMA_FROM_DEVICE);
+ 		return -ENODATA;
+ 	}
+ 
+@@ -1995,8 +1994,8 @@ static int e100_rx_indicate(struct nic *nic, struct rx *rx,
+ 		actual_size = RFD_BUF_LEN - sizeof(struct rfd);
+ 
+ 	/* Get data */
+-	pci_unmap_single(nic->pdev, rx->dma_addr,
+-		RFD_BUF_LEN, PCI_DMA_BIDIRECTIONAL);
++	dma_unmap_single(&nic->pdev->dev, rx->dma_addr, RFD_BUF_LEN,
++			 DMA_BIDIRECTIONAL);
+ 
+ 	/* If this buffer has the el bit, but we think the receiver
+ 	 * is still running, check to see if it really stopped while
+@@ -2097,22 +2096,25 @@ static void e100_rx_clean(struct nic *nic, unsigned int *work_done,
+ 			(struct rfd *)new_before_last_rx->skb->data;
+ 		new_before_last_rfd->size = 0;
+ 		new_before_last_rfd->command |= cpu_to_le16(cb_el);
+-		pci_dma_sync_single_for_device(nic->pdev,
+-			new_before_last_rx->dma_addr, sizeof(struct rfd),
+-			PCI_DMA_BIDIRECTIONAL);
++		dma_sync_single_for_device(&nic->pdev->dev,
++					   new_before_last_rx->dma_addr,
++					   sizeof(struct rfd),
++					   DMA_BIDIRECTIONAL);
+ 
+ 		/* Now that we have a new stopping point, we can clear the old
+ 		 * stopping point.  We must sync twice to get the proper
+ 		 * ordering on the hardware side of things. */
+ 		old_before_last_rfd->command &= ~cpu_to_le16(cb_el);
+-		pci_dma_sync_single_for_device(nic->pdev,
+-			old_before_last_rx->dma_addr, sizeof(struct rfd),
+-			PCI_DMA_BIDIRECTIONAL);
++		dma_sync_single_for_device(&nic->pdev->dev,
++					   old_before_last_rx->dma_addr,
++					   sizeof(struct rfd),
++					   DMA_BIDIRECTIONAL);
+ 		old_before_last_rfd->size = cpu_to_le16(VLAN_ETH_FRAME_LEN
+ 							+ ETH_FCS_LEN);
+-		pci_dma_sync_single_for_device(nic->pdev,
+-			old_before_last_rx->dma_addr, sizeof(struct rfd),
+-			PCI_DMA_BIDIRECTIONAL);
++		dma_sync_single_for_device(&nic->pdev->dev,
++					   old_before_last_rx->dma_addr,
++					   sizeof(struct rfd),
++					   DMA_BIDIRECTIONAL);
+ 	}
+ 
+ 	if (restart_required) {
+@@ -2134,8 +2136,9 @@ static void e100_rx_clean_list(struct nic *nic)
+ 	if (nic->rxs) {
+ 		for (rx = nic->rxs, i = 0; i < count; rx++, i++) {
+ 			if (rx->skb) {
+-				pci_unmap_single(nic->pdev, rx->dma_addr,
+-					RFD_BUF_LEN, PCI_DMA_BIDIRECTIONAL);
++				dma_unmap_single(&nic->pdev->dev,
++						 rx->dma_addr, RFD_BUF_LEN,
++						 DMA_BIDIRECTIONAL);
+ 				dev_kfree_skb(rx->skb);
+ 			}
+ 		}
+@@ -2177,8 +2180,8 @@ static int e100_rx_alloc_list(struct nic *nic)
+ 	before_last = (struct rfd *)rx->skb->data;
+ 	before_last->command |= cpu_to_le16(cb_el);
+ 	before_last->size = 0;
+-	pci_dma_sync_single_for_device(nic->pdev, rx->dma_addr,
+-		sizeof(struct rfd), PCI_DMA_BIDIRECTIONAL);
++	dma_sync_single_for_device(&nic->pdev->dev, rx->dma_addr,
++				   sizeof(struct rfd), DMA_BIDIRECTIONAL);
+ 
+ 	nic->rx_to_use = nic->rx_to_clean = nic->rxs;
+ 	nic->ru_running = RU_SUSPENDED;
+@@ -2377,8 +2380,8 @@ static int e100_loopback_test(struct nic *nic, enum loopback loopback_mode)
+ 
+ 	msleep(10);
+ 
+-	pci_dma_sync_single_for_cpu(nic->pdev, nic->rx_to_clean->dma_addr,
+-			RFD_BUF_LEN, PCI_DMA_BIDIRECTIONAL);
++	dma_sync_single_for_cpu(&nic->pdev->dev, nic->rx_to_clean->dma_addr,
++				RFD_BUF_LEN, DMA_BIDIRECTIONAL);
+ 
+ 	if (memcmp(nic->rx_to_clean->skb->data + sizeof(struct rfd),
+ 	   skb->data, ETH_DATA_LEN))
+@@ -2759,16 +2762,16 @@ static int e100_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
+ 
+ static int e100_alloc(struct nic *nic)
+ {
+-	nic->mem = pci_alloc_consistent(nic->pdev, sizeof(struct mem),
+-		&nic->dma_addr);
++	nic->mem = dma_alloc_coherent(&nic->pdev->dev, sizeof(struct mem),
++				      &nic->dma_addr, GFP_KERNEL);
+ 	return nic->mem ? 0 : -ENOMEM;
+ }
+ 
+ static void e100_free(struct nic *nic)
+ {
+ 	if (nic->mem) {
+-		pci_free_consistent(nic->pdev, sizeof(struct mem),
+-			nic->mem, nic->dma_addr);
++		dma_free_coherent(&nic->pdev->dev, sizeof(struct mem),
++				  nic->mem, nic->dma_addr);
+ 		nic->mem = NULL;
+ 	}
+ }
+@@ -2861,7 +2864,7 @@ static int e100_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		goto err_out_disable_pdev;
+ 	}
+ 
+-	if ((err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)))) {
++	if ((err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)))) {
+ 		netif_err(nic, probe, nic->netdev, "No usable DMA configuration, aborting\n");
+ 		goto err_out_free_res;
+ 	}
+diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_main.c b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
+index 99b8252eb969e..a388a0fcbeed3 100644
+--- a/drivers/net/ethernet/intel/fm10k/fm10k_main.c
++++ b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
+@@ -32,6 +32,8 @@ struct workqueue_struct *fm10k_workqueue;
+  **/
+ static int __init fm10k_init_module(void)
+ {
++	int ret;
++
+ 	pr_info("%s\n", fm10k_driver_string);
+ 	pr_info("%s\n", fm10k_copyright);
+ 
+@@ -43,7 +45,13 @@ static int __init fm10k_init_module(void)
+ 
+ 	fm10k_dbg_init();
+ 
+-	return fm10k_register_pci_driver();
++	ret = fm10k_register_pci_driver();
++	if (ret) {
++		fm10k_dbg_exit();
++		destroy_workqueue(fm10k_workqueue);
++	}
++
++	return ret;
+ }
+ module_init(fm10k_init_module);
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index ea6a984c6d12b..d7ddf9239e512 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -15972,6 +15972,8 @@ static struct pci_driver i40e_driver = {
+  **/
+ static int __init i40e_init_module(void)
+ {
++	int err;
++
+ 	pr_info("%s: %s\n", i40e_driver_name, i40e_driver_string);
+ 	pr_info("%s: %s\n", i40e_driver_name, i40e_copyright);
+ 
+@@ -15989,7 +15991,14 @@ static int __init i40e_init_module(void)
+ 	}
+ 
+ 	i40e_dbg_init();
+-	return pci_register_driver(&i40e_driver);
++	err = pci_register_driver(&i40e_driver);
++	if (err) {
++		destroy_workqueue(i40e_wq);
++		i40e_dbg_exit();
++		return err;
++	}
++
++	return 0;
+ }
+ module_init(i40e_init_module);
+ 
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index a9cea7ccdd865..ae96b552a3bb3 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1318,7 +1318,6 @@ static void iavf_fill_rss_lut(struct iavf_adapter *adapter)
+ static int iavf_init_rss(struct iavf_adapter *adapter)
+ {
+ 	struct iavf_hw *hw = &adapter->hw;
+-	int ret;
+ 
+ 	if (!RSS_PF(adapter)) {
+ 		/* Enable PCTYPES for RSS, TCP/UDP with IPv4/IPv6 */
+@@ -1334,9 +1333,8 @@ static int iavf_init_rss(struct iavf_adapter *adapter)
+ 
+ 	iavf_fill_rss_lut(adapter);
+ 	netdev_rss_key_fill((void *)adapter->rss_key, adapter->rss_key_size);
+-	ret = iavf_config_rss(adapter);
+ 
+-	return ret;
++	return iavf_config_rss(adapter);
+ }
+ 
+ /**
+@@ -4040,7 +4038,11 @@ static int __init iavf_init_module(void)
+ 		pr_err("%s: Failed to create workqueue\n", iavf_driver_name);
+ 		return -ENOMEM;
+ 	}
++
+ 	ret = pci_register_driver(&iavf_driver);
++	if (ret)
++		destroy_workqueue(iavf_wq);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+index 2d6ac61d7a3e6..4510a84514fa4 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+@@ -4878,6 +4878,8 @@ static struct pci_driver ixgbevf_driver = {
+  **/
+ static int __init ixgbevf_init_module(void)
+ {
++	int err;
++
+ 	pr_info("%s\n", ixgbevf_driver_string);
+ 	pr_info("%s\n", ixgbevf_copyright);
+ 	ixgbevf_wq = create_singlethread_workqueue(ixgbevf_driver_name);
+@@ -4886,7 +4888,13 @@ static int __init ixgbevf_init_module(void)
+ 		return -ENOMEM;
+ 	}
+ 
+-	return pci_register_driver(&ixgbevf_driver);
++	err = pci_register_driver(&ixgbevf_driver);
++	if (err) {
++		destroy_workqueue(ixgbevf_wq);
++		return err;
++	}
++
++	return 0;
+ }
+ 
+ module_init(ixgbevf_init_module);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index c838d8698eab4..39c17e9039157 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -1422,8 +1422,8 @@ static ssize_t outlen_write(struct file *filp, const char __user *buf,
+ 		return -EFAULT;
+ 
+ 	err = sscanf(outlen_str, "%d", &outlen);
+-	if (err < 0)
+-		return err;
++	if (err != 1)
++		return -EINVAL;
+ 
+ 	ptr = kzalloc(outlen, GFP_KERNEL);
+ 	if (!ptr)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+index 6c865cb7f445d..132ea9997676a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+@@ -308,6 +308,8 @@ revert_changes:
+ 	for (curr_dest = 0; curr_dest < num_vport_dests; curr_dest++) {
+ 		struct mlx5_termtbl_handle *tt = attr->dests[curr_dest].termtbl;
+ 
++		attr->dests[curr_dest].termtbl = NULL;
++
+ 		/* search for the destination associated with the
+ 		 * current term table
+ 		 */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_table.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_table.c
+index b599b6beb5b95..6a4b997c258a7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_table.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_table.c
+@@ -9,7 +9,7 @@ int mlx5dr_table_set_miss_action(struct mlx5dr_table *tbl,
+ 	struct mlx5dr_matcher *last_matcher = NULL;
+ 	struct mlx5dr_htbl_connect_info info;
+ 	struct mlx5dr_ste_htbl *last_htbl;
+-	int ret;
++	int ret = -EOPNOTSUPP;
+ 
+ 	if (action && action->action_type != DR_ACTION_TYP_FT)
+ 		return -EOPNOTSUPP;
+@@ -68,6 +68,9 @@ int mlx5dr_table_set_miss_action(struct mlx5dr_table *tbl,
+ 		}
+ 	}
+ 
++	if (ret)
++		goto out;
++
+ 	/* Release old action */
+ 	if (tbl->miss_action)
+ 		refcount_dec(&tbl->miss_action->refcount);
+diff --git a/drivers/net/ethernet/ni/nixge.c b/drivers/net/ethernet/ni/nixge.c
+index 9c48fd85c418a..07fbd329fe93c 100644
+--- a/drivers/net/ethernet/ni/nixge.c
++++ b/drivers/net/ethernet/ni/nixge.c
+@@ -249,25 +249,26 @@ static void nixge_hw_dma_bd_release(struct net_device *ndev)
+ 	struct sk_buff *skb;
+ 	int i;
+ 
+-	for (i = 0; i < RX_BD_NUM; i++) {
+-		phys_addr = nixge_hw_dma_bd_get_addr(&priv->rx_bd_v[i],
+-						     phys);
+-
+-		dma_unmap_single(ndev->dev.parent, phys_addr,
+-				 NIXGE_MAX_JUMBO_FRAME_SIZE,
+-				 DMA_FROM_DEVICE);
+-
+-		skb = (struct sk_buff *)(uintptr_t)
+-			nixge_hw_dma_bd_get_addr(&priv->rx_bd_v[i],
+-						 sw_id_offset);
+-		dev_kfree_skb(skb);
+-	}
++	if (priv->rx_bd_v) {
++		for (i = 0; i < RX_BD_NUM; i++) {
++			phys_addr = nixge_hw_dma_bd_get_addr(&priv->rx_bd_v[i],
++							     phys);
++
++			dma_unmap_single(ndev->dev.parent, phys_addr,
++					 NIXGE_MAX_JUMBO_FRAME_SIZE,
++					 DMA_FROM_DEVICE);
++
++			skb = (struct sk_buff *)(uintptr_t)
++				nixge_hw_dma_bd_get_addr(&priv->rx_bd_v[i],
++							 sw_id_offset);
++			dev_kfree_skb(skb);
++		}
+ 
+-	if (priv->rx_bd_v)
+ 		dma_free_coherent(ndev->dev.parent,
+ 				  sizeof(*priv->rx_bd_v) * RX_BD_NUM,
+ 				  priv->rx_bd_v,
+ 				  priv->rx_bd_p);
++	}
+ 
+ 	if (priv->tx_skb)
+ 		devm_kfree(ndev->dev.parent, priv->tx_skb);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+index bd06076803295..2fd5c6fdb5003 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+@@ -2991,7 +2991,7 @@ static void qlcnic_83xx_recover_driver_lock(struct qlcnic_adapter *adapter)
+ 		QLCWRX(adapter->ahw, QLC_83XX_RECOVER_DRV_LOCK, val);
+ 		dev_info(&adapter->pdev->dev,
+ 			 "%s: lock recovery initiated\n", __func__);
+-		msleep(QLC_83XX_DRV_LOCK_RECOVERY_DELAY);
++		mdelay(QLC_83XX_DRV_LOCK_RECOVERY_DELAY);
+ 		val = QLCRDX(adapter->ahw, QLC_83XX_RECOVER_DRV_LOCK);
+ 		id = ((val >> 2) & 0xF);
+ 		if (id == adapter->portnum) {
+@@ -3027,7 +3027,7 @@ int qlcnic_83xx_lock_driver(struct qlcnic_adapter *adapter)
+ 		if (status)
+ 			break;
+ 
+-		msleep(QLC_83XX_DRV_LOCK_WAIT_DELAY);
++		mdelay(QLC_83XX_DRV_LOCK_WAIT_DELAY);
+ 		i++;
+ 
+ 		if (i == 1)
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index f96eed67e1a2b..9e7b85e178fd2 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -2364,6 +2364,7 @@ static int __maybe_unused ravb_resume(struct device *dev)
+ 		ret = ravb_open(ndev);
+ 		if (ret < 0)
+ 			return ret;
++		ravb_set_rx_mode(ndev);
+ 		netif_device_attach(ndev);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index 2e71e510e127d..5b052fdd2696e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -720,6 +720,8 @@ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
+ 	if (fc & FLOW_RX) {
+ 		pr_debug("\tReceive Flow-Control ON\n");
+ 		flow |= GMAC_RX_FLOW_CTRL_RFE;
++	} else {
++		pr_debug("\tReceive Flow-Control OFF\n");
+ 	}
+ 	writel(flow, ioaddr + GMAC_RX_FLOW_CTRL);
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 41e71a26b1ade..14ea0168b548d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -1043,8 +1043,16 @@ static void stmmac_mac_link_up(struct phylink_config *config,
+ 		ctrl |= priv->hw->link.duplex;
+ 
+ 	/* Flow Control operation */
+-	if (tx_pause && rx_pause)
+-		stmmac_mac_flow_ctrl(priv, duplex);
++	if (rx_pause && tx_pause)
++		priv->flow_ctrl = FLOW_AUTO;
++	else if (rx_pause && !tx_pause)
++		priv->flow_ctrl = FLOW_RX;
++	else if (!rx_pause && tx_pause)
++		priv->flow_ctrl = FLOW_TX;
++	else
++		priv->flow_ctrl = FLOW_OFF;
++
++	stmmac_mac_flow_ctrl(priv, duplex);
+ 
+ 	writel(ctrl, priv->ioaddr + MAC_CTRL_REG);
+ 
+diff --git a/drivers/net/ntb_netdev.c b/drivers/net/ntb_netdev.c
+index a5bab614ff845..1b7d588ff3c5c 100644
+--- a/drivers/net/ntb_netdev.c
++++ b/drivers/net/ntb_netdev.c
+@@ -484,7 +484,14 @@ static int __init ntb_netdev_init_module(void)
+ 	rc = ntb_transport_register_client_dev(KBUILD_MODNAME);
+ 	if (rc)
+ 		return rc;
+-	return ntb_transport_register_client(&ntb_netdev_client);
++
++	rc = ntb_transport_register_client(&ntb_netdev_client);
++	if (rc) {
++		ntb_transport_unregister_client_dev(KBUILD_MODNAME);
++		return rc;
++	}
++
++	return 0;
+ }
+ module_init(ntb_netdev_init_module);
+ 
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index d2f6d8107595a..3ef5aa6b72a7e 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1423,6 +1423,7 @@ error:
+ 
+ error_module_put:
+ 	module_put(d->driver->owner);
++	d->driver = NULL;
+ error_put_device:
+ 	put_device(d);
+ 	if (ndev_owner != bus->owner)
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index cb42fdbfeb326..67ce7b779af61 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -698,7 +698,6 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
+ 		if (tun)
+ 			xdp_rxq_info_unreg(&tfile->xdp_rxq);
+ 		ptr_ring_cleanup(&tfile->tx_ring, tun_ptr_free);
+-		sock_put(&tfile->sk);
+ 	}
+ }
+ 
+@@ -714,6 +713,9 @@ static void tun_detach(struct tun_file *tfile, bool clean)
+ 	if (dev)
+ 		netdev_state_change(dev);
+ 	rtnl_unlock();
++
++	if (clean)
++		sock_put(&tfile->sk);
+ }
+ 
+ static void tun_detach_all(struct net_device *dev)
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index 1d7d24e7094b0..8f998351bf4fb 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -956,8 +956,10 @@ of_fwnode_get_reference_args(const struct fwnode_handle *fwnode,
+ 						       nargs, index, &of_args);
+ 	if (ret < 0)
+ 		return ret;
+-	if (!args)
++	if (!args) {
++		of_node_put(of_args.np);
+ 		return 0;
++	}
+ 
+ 	args->nargs = of_args.args_count;
+ 	args->fwnode = of_fwnode_handle(of_args.np);
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index 4de832ac47d38..db9087c129c0d 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -426,9 +426,14 @@ static void __intel_gpio_set_direction(void __iomem *padcfg0, bool input)
+ 	writel(value, padcfg0);
+ }
+ 
++static int __intel_gpio_get_gpio_mode(u32 value)
++{
++	return (value & PADCFG0_PMODE_MASK) >> PADCFG0_PMODE_SHIFT;
++}
++
+ static int intel_gpio_get_gpio_mode(void __iomem *padcfg0)
+ {
+-	return (readl(padcfg0) & PADCFG0_PMODE_MASK) >> PADCFG0_PMODE_SHIFT;
++	return __intel_gpio_get_gpio_mode(readl(padcfg0));
+ }
+ 
+ static void intel_gpio_set_gpio_mode(void __iomem *padcfg0)
+@@ -1604,6 +1609,7 @@ EXPORT_SYMBOL_GPL(intel_pinctrl_get_soc_data);
+ static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int pin)
+ {
+ 	const struct pin_desc *pd = pin_desc_get(pctrl->pctldev, pin);
++	u32 value;
+ 
+ 	if (!pd || !intel_pad_usable(pctrl, pin))
+ 		return false;
+@@ -1618,6 +1624,25 @@ static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int
+ 	    gpiochip_line_is_irq(&pctrl->chip, intel_pin_to_gpio(pctrl, pin)))
+ 		return true;
+ 
++	/*
++	 * The firmware on some systems may configure GPIO pins to be
++	 * an interrupt source in so called "direct IRQ" mode. In such
++	 * cases the GPIO controller driver has no idea if those pins
++	 * are being used or not. At the same time, there is a known bug
++	 * in the firmwares that don't restore the pin settings correctly
++	 * after suspend, i.e. by an unknown reason the Rx value becomes
++	 * inverted.
++	 *
++	 * Hence, let's save and restore the pins that are configured
++	 * as GPIOs in the input mode with GPIROUTIOXAPIC bit set.
++	 *
++	 * See https://bugzilla.kernel.org/show_bug.cgi?id=214749.
++	 */
++	value = readl(intel_get_padcfg(pctrl, pin, PADCFG0));
++	if ((value & PADCFG0_GPIROUTIOXAPIC) && (value & PADCFG0_GPIOTXDIS) &&
++	    (__intel_gpio_get_gpio_mode(value) == PADCFG0_PMODE_GPIO))
++		return true;
++
+ 	return false;
+ }
+ 
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index 17aa0d542d925..d139cd9e6d130 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -726,7 +726,7 @@ static int pcs_allocate_pin_table(struct pcs_device *pcs)
+ 
+ 	mux_bytes = pcs->width / BITS_PER_BYTE;
+ 
+-	if (pcs->bits_per_mux) {
++	if (pcs->bits_per_mux && pcs->fmask) {
+ 		pcs->bits_per_pin = fls(pcs->fmask);
+ 		nr_pins = (pcs->size * BITS_PER_BYTE) / pcs->bits_per_pin;
+ 		num_pins_in_register = pcs->width / pcs->bits_per_pin;
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 0e3bc0b0a5265..74b3b6ca15efb 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -434,8 +434,7 @@ static unsigned int mx51_ecspi_clkdiv(struct spi_imx_data *spi_imx,
+ 	unsigned int pre, post;
+ 	unsigned int fin = spi_imx->spi_clk;
+ 
+-	if (unlikely(fspi > fin))
+-		return 0;
++	fspi = min(fspi, fin);
+ 
+ 	post = fls(fin) - fls(fspi);
+ 	if (fin > fspi << post)
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index e852828259735..f5063499f9cf6 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -235,7 +235,7 @@ struct gsm_mux {
+ 	int old_c_iflag;		/* termios c_iflag value before attach */
+ 	bool constipated;		/* Asked by remote to shut up */
+ 
+-	struct mutex tx_mutex;
++	spinlock_t tx_lock;
+ 	unsigned int tx_bytes;		/* TX data outstanding */
+ #define TX_THRESH_HI		8192
+ #define TX_THRESH_LO		2048
+@@ -820,14 +820,15 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+  *
+  *	Add data to the transmit queue and try and get stuff moving
+  *	out of the mux tty if not already doing so. Take the
+- *	the gsm tx mutex and dlci lock.
++ *	the gsm tx lock and dlci lock.
+  */
+ 
+ static void gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+ {
+-	mutex_lock(&dlci->gsm->tx_mutex);
++	unsigned long flags;
++	spin_lock_irqsave(&dlci->gsm->tx_lock, flags);
+ 	__gsm_data_queue(dlci, msg);
+-	mutex_unlock(&dlci->gsm->tx_mutex);
++	spin_unlock_irqrestore(&dlci->gsm->tx_lock, flags);
+ }
+ 
+ /**
+@@ -839,7 +840,7 @@ static void gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+  *	is data. Keep to the MRU of the mux. This path handles the usual tty
+  *	interface which is a byte stream with optional modem data.
+  *
+- *	Caller must hold the tx_mutex of the mux.
++ *	Caller must hold the tx_lock of the mux.
+  */
+ 
+ static int gsm_dlci_data_output(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+@@ -902,7 +903,7 @@ static int gsm_dlci_data_output(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+  *	is data. Keep to the MRU of the mux. This path handles framed data
+  *	queued as skbuffs to the DLCI.
+  *
+- *	Caller must hold the tx_mutex of the mux.
++ *	Caller must hold the tx_lock of the mux.
+  */
+ 
+ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
+@@ -918,7 +919,7 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
+ 	if (dlci->adaption == 4)
+ 		overhead = 1;
+ 
+-	/* dlci->skb is locked by tx_mutex */
++	/* dlci->skb is locked by tx_lock */
+ 	if (dlci->skb == NULL) {
+ 		dlci->skb = skb_dequeue_tail(&dlci->skb_list);
+ 		if (dlci->skb == NULL)
+@@ -1018,12 +1019,13 @@ static void gsm_dlci_data_sweep(struct gsm_mux *gsm)
+ 
+ static void gsm_dlci_data_kick(struct gsm_dlci *dlci)
+ {
++	unsigned long flags;
+ 	int sweep;
+ 
+ 	if (dlci->constipated)
+ 		return;
+ 
+-	mutex_lock(&dlci->gsm->tx_mutex);
++	spin_lock_irqsave(&dlci->gsm->tx_lock, flags);
+ 	/* If we have nothing running then we need to fire up */
+ 	sweep = (dlci->gsm->tx_bytes < TX_THRESH_LO);
+ 	if (dlci->gsm->tx_bytes == 0) {
+@@ -1034,7 +1036,7 @@ static void gsm_dlci_data_kick(struct gsm_dlci *dlci)
+ 	}
+ 	if (sweep)
+ 		gsm_dlci_data_sweep(dlci->gsm);
+-	mutex_unlock(&dlci->gsm->tx_mutex);
++	spin_unlock_irqrestore(&dlci->gsm->tx_lock, flags);
+ }
+ 
+ /*
+@@ -1256,6 +1258,7 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
+ 						const u8 *data, int clen)
+ {
+ 	u8 buf[1];
++	unsigned long flags;
+ 
+ 	switch (command) {
+ 	case CMD_CLD: {
+@@ -1277,9 +1280,9 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
+ 		gsm->constipated = false;
+ 		gsm_control_reply(gsm, CMD_FCON, NULL, 0);
+ 		/* Kick the link in case it is idling */
+-		mutex_lock(&gsm->tx_mutex);
++		spin_lock_irqsave(&gsm->tx_lock, flags);
+ 		gsm_data_kick(gsm, NULL);
+-		mutex_unlock(&gsm->tx_mutex);
++		spin_unlock_irqrestore(&gsm->tx_lock, flags);
+ 		break;
+ 	case CMD_FCOFF:
+ 		/* Modem wants us to STFU */
+@@ -2225,7 +2228,6 @@ static void gsm_free_mux(struct gsm_mux *gsm)
+ 			break;
+ 		}
+ 	}
+-	mutex_destroy(&gsm->tx_mutex);
+ 	mutex_destroy(&gsm->mutex);
+ 	kfree(gsm->txframe);
+ 	kfree(gsm->buf);
+@@ -2297,12 +2299,12 @@ static struct gsm_mux *gsm_alloc_mux(void)
+ 	}
+ 	spin_lock_init(&gsm->lock);
+ 	mutex_init(&gsm->mutex);
+-	mutex_init(&gsm->tx_mutex);
+ 	kref_init(&gsm->ref);
+ 	INIT_LIST_HEAD(&gsm->tx_list);
+ 	timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0);
+ 	init_waitqueue_head(&gsm->event);
+ 	spin_lock_init(&gsm->control_lock);
++	spin_lock_init(&gsm->tx_lock);
+ 
+ 	gsm->t1 = T1;
+ 	gsm->t2 = T2;
+@@ -2327,7 +2329,6 @@ static struct gsm_mux *gsm_alloc_mux(void)
+ 	}
+ 	spin_unlock(&gsm_mux_lock);
+ 	if (i == MAX_MUX) {
+-		mutex_destroy(&gsm->tx_mutex);
+ 		mutex_destroy(&gsm->mutex);
+ 		kfree(gsm->txframe);
+ 		kfree(gsm->buf);
+@@ -2652,15 +2653,16 @@ static int gsmld_open(struct tty_struct *tty)
+ static void gsmld_write_wakeup(struct tty_struct *tty)
+ {
+ 	struct gsm_mux *gsm = tty->disc_data;
++	unsigned long flags;
+ 
+ 	/* Queue poll */
+ 	clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+-	mutex_lock(&gsm->tx_mutex);
++	spin_lock_irqsave(&gsm->tx_lock, flags);
+ 	gsm_data_kick(gsm, NULL);
+ 	if (gsm->tx_bytes < TX_THRESH_LO) {
+ 		gsm_dlci_data_sweep(gsm);
+ 	}
+-	mutex_unlock(&gsm->tx_mutex);
++	spin_unlock_irqrestore(&gsm->tx_lock, flags);
+ }
+ 
+ /**
+@@ -2703,6 +2705,7 @@ static ssize_t gsmld_write(struct tty_struct *tty, struct file *file,
+ 			   const unsigned char *buf, size_t nr)
+ {
+ 	struct gsm_mux *gsm = tty->disc_data;
++	unsigned long flags;
+ 	int space;
+ 	int ret;
+ 
+@@ -2710,13 +2713,13 @@ static ssize_t gsmld_write(struct tty_struct *tty, struct file *file,
+ 		return -ENODEV;
+ 
+ 	ret = -ENOBUFS;
+-	mutex_lock(&gsm->tx_mutex);
++	spin_lock_irqsave(&gsm->tx_lock, flags);
+ 	space = tty_write_room(tty);
+ 	if (space >= nr)
+ 		ret = tty->ops->write(tty, buf, nr);
+ 	else
+ 		set_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+-	mutex_unlock(&gsm->tx_mutex);
++	spin_unlock_irqrestore(&gsm->tx_lock, flags);
+ 
+ 	return ret;
+ }
+diff --git a/fs/afs/fs_probe.c b/fs/afs/fs_probe.c
+index e7e98ad63a91a..04d42e49fc599 100644
+--- a/fs/afs/fs_probe.c
++++ b/fs/afs/fs_probe.c
+@@ -161,8 +161,8 @@ responded:
+ 		}
+ 	}
+ 
+-	if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) &&
+-	    rtt_us < server->probe.rtt) {
++	rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us);
++	if (rtt_us < server->probe.rtt) {
+ 		server->probe.rtt = rtt_us;
+ 		server->rtt = rtt_us;
+ 		alist->preferred = index;
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index 6942707f8b034..7208ba22e734a 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -2060,10 +2060,29 @@ out:
+ 	return ret;
+ }
+ 
++static int build_ino_list(u64 inum, u64 offset, u64 root, void *ctx)
++{
++	struct btrfs_data_container *inodes = ctx;
++	const size_t c = 3 * sizeof(u64);
++
++	if (inodes->bytes_left >= c) {
++		inodes->bytes_left -= c;
++		inodes->val[inodes->elem_cnt] = inum;
++		inodes->val[inodes->elem_cnt + 1] = offset;
++		inodes->val[inodes->elem_cnt + 2] = root;
++		inodes->elem_cnt += 3;
++	} else {
++		inodes->bytes_missing += c - inodes->bytes_left;
++		inodes->bytes_left = 0;
++		inodes->elem_missed += 3;
++	}
++
++	return 0;
++}
++
+ int iterate_inodes_from_logical(u64 logical, struct btrfs_fs_info *fs_info,
+ 				struct btrfs_path *path,
+-				iterate_extent_inodes_t *iterate, void *ctx,
+-				bool ignore_offset)
++				void *ctx, bool ignore_offset)
+ {
+ 	int ret;
+ 	u64 extent_item_pos;
+@@ -2081,7 +2100,7 @@ int iterate_inodes_from_logical(u64 logical, struct btrfs_fs_info *fs_info,
+ 	extent_item_pos = logical - found_key.objectid;
+ 	ret = iterate_extent_inodes(fs_info, found_key.objectid,
+ 					extent_item_pos, search_commit_root,
+-					iterate, ctx, ignore_offset);
++					build_ino_list, ctx, ignore_offset);
+ 
+ 	return ret;
+ }
+diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
+index 17abde7f794ce..6ed18b807b640 100644
+--- a/fs/btrfs/backref.h
++++ b/fs/btrfs/backref.h
+@@ -35,8 +35,7 @@ int iterate_extent_inodes(struct btrfs_fs_info *fs_info,
+ 				bool ignore_offset);
+ 
+ int iterate_inodes_from_logical(u64 logical, struct btrfs_fs_info *fs_info,
+-				struct btrfs_path *path,
+-				iterate_extent_inodes_t *iterate, void *ctx,
++				struct btrfs_path *path, void *ctx,
+ 				bool ignore_offset);
+ 
+ int paths_from_inode(u64 inum, struct inode_fs_paths *ipath);
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index d0c31651ec80d..a17076a05c4d9 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3898,26 +3898,6 @@ out:
+ 	return ret;
+ }
+ 
+-static int build_ino_list(u64 inum, u64 offset, u64 root, void *ctx)
+-{
+-	struct btrfs_data_container *inodes = ctx;
+-	const size_t c = 3 * sizeof(u64);
+-
+-	if (inodes->bytes_left >= c) {
+-		inodes->bytes_left -= c;
+-		inodes->val[inodes->elem_cnt] = inum;
+-		inodes->val[inodes->elem_cnt + 1] = offset;
+-		inodes->val[inodes->elem_cnt + 2] = root;
+-		inodes->elem_cnt += 3;
+-	} else {
+-		inodes->bytes_missing += c - inodes->bytes_left;
+-		inodes->bytes_left = 0;
+-		inodes->elem_missed += 3;
+-	}
+-
+-	return 0;
+-}
+-
+ static long btrfs_ioctl_logical_to_ino(struct btrfs_fs_info *fs_info,
+ 					void __user *arg, int version)
+ {
+@@ -3953,21 +3933,20 @@ static long btrfs_ioctl_logical_to_ino(struct btrfs_fs_info *fs_info,
+ 		size = min_t(u32, loi->size, SZ_16M);
+ 	}
+ 
+-	path = btrfs_alloc_path();
+-	if (!path) {
+-		ret = -ENOMEM;
+-		goto out;
+-	}
+-
+ 	inodes = init_data_container(size);
+ 	if (IS_ERR(inodes)) {
+ 		ret = PTR_ERR(inodes);
+-		inodes = NULL;
+-		goto out;
++		goto out_loi;
+ 	}
+ 
++	path = btrfs_alloc_path();
++	if (!path) {
++		ret = -ENOMEM;
++		goto out;
++	}
+ 	ret = iterate_inodes_from_logical(loi->logical, fs_info, path,
+-					  build_ino_list, inodes, ignore_offset);
++					  inodes, ignore_offset);
++	btrfs_free_path(path);
+ 	if (ret == -EINVAL)
+ 		ret = -ENOENT;
+ 	if (ret < 0)
+@@ -3979,7 +3958,6 @@ static long btrfs_ioctl_logical_to_ino(struct btrfs_fs_info *fs_info,
+ 		ret = -EFAULT;
+ 
+ out:
+-	btrfs_free_path(path);
+ 	kvfree(inodes);
+ out_loi:
+ 	kfree(loi);
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 36da775340768..74cbbb5d8897f 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2913,14 +2913,7 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid,
+ 		dstgroup->rsv_rfer = inherit->lim.rsv_rfer;
+ 		dstgroup->rsv_excl = inherit->lim.rsv_excl;
+ 
+-		ret = update_qgroup_limit_item(trans, dstgroup);
+-		if (ret) {
+-			fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
+-			btrfs_info(fs_info,
+-				   "unable to update quota limit for %llu",
+-				   dstgroup->qgroupid);
+-			goto unlock;
+-		}
++		qgroup_dirty(fs_info, dstgroup);
+ 	}
+ 
+ 	if (srcid) {
+@@ -3290,7 +3283,8 @@ out:
+ static bool rescan_should_stop(struct btrfs_fs_info *fs_info)
+ {
+ 	return btrfs_fs_closing(fs_info) ||
+-		test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state);
++		test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state) ||
++		!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
+ }
+ 
+ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
+@@ -3320,11 +3314,9 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
+ 			err = PTR_ERR(trans);
+ 			break;
+ 		}
+-		if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) {
+-			err = -EINTR;
+-		} else {
+-			err = qgroup_rescan_leaf(trans, path);
+-		}
++
++		err = qgroup_rescan_leaf(trans, path);
++
+ 		if (err > 0)
+ 			btrfs_commit_transaction(trans);
+ 		else
+@@ -3338,7 +3330,7 @@ out:
+ 	if (err > 0 &&
+ 	    fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT) {
+ 		fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
+-	} else if (err < 0) {
++	} else if (err < 0 || stopped) {
+ 		fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
+ 	}
+ 	mutex_unlock(&fs_info->qgroup_rescan_lock);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index d1cb1addea965..c5c22b067cd81 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -217,6 +217,7 @@ struct fixed_file_data {
+ 	struct completion		done;
+ 	struct list_head		ref_list;
+ 	spinlock_t			lock;
++	bool				quiesce;
+ };
+ 
+ struct io_buffer {
+@@ -7105,41 +7106,79 @@ static void io_sqe_files_set_node(struct fixed_file_data *file_data,
+ 	percpu_ref_get(&file_data->refs);
+ }
+ 
+-static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
+-{
+-	struct fixed_file_data *data = ctx->file_data;
+-	struct fixed_file_ref_node *backup_node, *ref_node = NULL;
+-	unsigned nr_tables, i;
+-	int ret;
+ 
+-	if (!data)
+-		return -ENXIO;
+-	backup_node = alloc_fixed_file_ref_node(ctx);
+-	if (!backup_node)
+-		return -ENOMEM;
++static void io_sqe_files_kill_node(struct fixed_file_data *data)
++{
++	struct fixed_file_ref_node *ref_node = NULL;
+ 
+ 	spin_lock_bh(&data->lock);
+ 	ref_node = data->node;
+ 	spin_unlock_bh(&data->lock);
+ 	if (ref_node)
+ 		percpu_ref_kill(&ref_node->refs);
++}
++
++static int io_file_ref_quiesce(struct fixed_file_data *data,
++			       struct io_ring_ctx *ctx)
++{
++	int ret;
++	struct fixed_file_ref_node *backup_node;
+ 
+-	percpu_ref_kill(&data->refs);
++	if (data->quiesce)
++		return -ENXIO;
+ 
+-	/* wait for all refs nodes to complete */
+-	flush_delayed_work(&ctx->file_put_work);
++	data->quiesce = true;
+ 	do {
++		backup_node = alloc_fixed_file_ref_node(ctx);
++		if (!backup_node)
++			break;
++
++		io_sqe_files_kill_node(data);
++		percpu_ref_kill(&data->refs);
++		flush_delayed_work(&ctx->file_put_work);
++
+ 		ret = wait_for_completion_interruptible(&data->done);
+ 		if (!ret)
+ 			break;
++
++		percpu_ref_resurrect(&data->refs);
++		io_sqe_files_set_node(data, backup_node);
++		backup_node = NULL;
++		reinit_completion(&data->done);
++		mutex_unlock(&ctx->uring_lock);
+ 		ret = io_run_task_work_sig();
+-		if (ret < 0) {
+-			percpu_ref_resurrect(&data->refs);
+-			reinit_completion(&data->done);
+-			io_sqe_files_set_node(data, backup_node);
+-			return ret;
+-		}
++		mutex_lock(&ctx->uring_lock);
++
++		if (ret < 0)
++			break;
++		backup_node = alloc_fixed_file_ref_node(ctx);
++		ret = -ENOMEM;
++		if (!backup_node)
++			break;
+ 	} while (1);
++	data->quiesce = false;
++
++	if (backup_node)
++		destroy_fixed_file_ref_node(backup_node);
++	return ret;
++}
++
++static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
++{
++	struct fixed_file_data *data = ctx->file_data;
++	unsigned nr_tables, i;
++	int ret;
++
++	/*
++	 * percpu_ref_is_dying() is to stop parallel files unregister
++	 * Since we possibly drop uring lock later in this function to
++	 * run task work.
++	 */
++	if (!data || percpu_ref_is_dying(&data->refs))
++		return -ENXIO;
++	ret = io_file_ref_quiesce(data, ctx);
++	if (ret)
++		return ret;
+ 
+ 	__io_sqe_files_unregister(ctx);
+ 	nr_tables = DIV_ROUND_UP(ctx->nr_user_files, IORING_MAX_FILES_TABLE);
+@@ -7150,7 +7189,6 @@ static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
+ 	kfree(data);
+ 	ctx->file_data = NULL;
+ 	ctx->nr_user_files = 0;
+-	destroy_fixed_file_ref_node(backup_node);
+ 	return 0;
+ }
+ 
+@@ -8444,7 +8482,9 @@ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ 		css_put(ctx->sqo_blkcg_css);
+ #endif
+ 
++	mutex_lock(&ctx->uring_lock);
+ 	io_sqe_files_unregister(ctx);
++	mutex_unlock(&ctx->uring_lock);
+ 	io_eventfd_unregister(ctx);
+ 	io_destroy_buffers(ctx);
+ 
+diff --git a/fs/nilfs2/dat.c b/fs/nilfs2/dat.c
+index 1a3d183027b9e..8fedc7104320d 100644
+--- a/fs/nilfs2/dat.c
++++ b/fs/nilfs2/dat.c
+@@ -111,6 +111,13 @@ static void nilfs_dat_commit_free(struct inode *dat,
+ 	kunmap_atomic(kaddr);
+ 
+ 	nilfs_dat_commit_entry(dat, req);
++
++	if (unlikely(req->pr_desc_bh == NULL || req->pr_bitmap_bh == NULL)) {
++		nilfs_error(dat->i_sb,
++			    "state inconsistency probably due to duplicate use of vblocknr = %llu",
++			    (unsigned long long)req->pr_entry_nr);
++		return;
++	}
+ 	nilfs_palloc_commit_free_entry(dat, req);
+ }
+ 
+diff --git a/include/linux/mmc/mmc.h b/include/linux/mmc/mmc.h
+index d9a65c6a8816f..545578fb814b0 100644
+--- a/include/linux/mmc/mmc.h
++++ b/include/linux/mmc/mmc.h
+@@ -445,7 +445,7 @@ static inline bool mmc_ready_for_data(u32 status)
+ #define MMC_SECURE_TRIM1_ARG		0x80000001
+ #define MMC_SECURE_TRIM2_ARG		0x80008000
+ #define MMC_SECURE_ARGS			0x80000000
+-#define MMC_TRIM_ARGS			0x00008001
++#define MMC_TRIM_OR_DISCARD_ARGS	0x00008003
+ 
+ #define mmc_driver_type_mask(n)		(1 << (n))
+ 
+diff --git a/include/net/sctp/stream_sched.h b/include/net/sctp/stream_sched.h
+index 01a70b27e026b..65058faea4db1 100644
+--- a/include/net/sctp/stream_sched.h
++++ b/include/net/sctp/stream_sched.h
+@@ -26,6 +26,8 @@ struct sctp_sched_ops {
+ 	int (*init)(struct sctp_stream *stream);
+ 	/* Init a stream */
+ 	int (*init_sid)(struct sctp_stream *stream, __u16 sid, gfp_t gfp);
++	/* free a stream */
++	void (*free_sid)(struct sctp_stream *stream, __u16 sid);
+ 	/* Frees the entire thing */
+ 	void (*free)(struct sctp_stream *stream);
+ 
+diff --git a/ipc/sem.c b/ipc/sem.c
+index 2cb6515ef1dd1..916f7a90be31c 100644
+--- a/ipc/sem.c
++++ b/ipc/sem.c
+@@ -2190,14 +2190,15 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops,
+ 		 * scenarios where we were awakened externally, during the
+ 		 * window between wake_q_add() and wake_up_q().
+ 		 */
++		rcu_read_lock();
+ 		error = READ_ONCE(queue.status);
+ 		if (error != -EINTR) {
+ 			/* see SEM_BARRIER_2 for purpose/pairing */
+ 			smp_acquire__after_ctrl_dep();
++			rcu_read_unlock();
+ 			goto out_free;
+ 		}
+ 
+-		rcu_read_lock();
+ 		locknum = sem_lock(sma, sops, nsops);
+ 
+ 		if (!ipc_valid_object(&sma->sem_perm))
+diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
+index 5d3a7af9ba9ba..8aaaaef99f09f 100644
+--- a/kernel/bpf/bpf_local_storage.c
++++ b/kernel/bpf/bpf_local_storage.c
+@@ -70,7 +70,7 @@ bpf_selem_alloc(struct bpf_local_storage_map *smap, void *owner,
+ 	selem = kzalloc(smap->elem_size, GFP_ATOMIC | __GFP_NOWARN);
+ 	if (selem) {
+ 		if (value)
+-			memcpy(SDATA(selem)->data, value, smap->map.value_size);
++			copy_map_value(&smap->map, SDATA(selem)->data, value);
+ 		return selem;
+ 	}
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 0e01216f4e5af..e9b354d521a38 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -8740,7 +8740,7 @@ static void perf_event_bpf_emit_ksymbols(struct bpf_prog *prog,
+ 				PERF_RECORD_KSYMBOL_TYPE_BPF,
+ 				(u64)(unsigned long)subprog->bpf_func,
+ 				subprog->jited_len, unregister,
+-				prog->aux->ksym.name);
++				subprog->aux->ksym.name);
+ 		}
+ 	}
+ }
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index f0dd1a3b66eb9..3eb527f8a269c 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -391,13 +391,14 @@ int proc_dostring(struct ctl_table *table, int write,
+ 			ppos);
+ }
+ 
+-static size_t proc_skip_spaces(char **buf)
++static void proc_skip_spaces(char **buf, size_t *size)
+ {
+-	size_t ret;
+-	char *tmp = skip_spaces(*buf);
+-	ret = tmp - *buf;
+-	*buf = tmp;
+-	return ret;
++	while (*size) {
++		if (!isspace(**buf))
++			break;
++		(*size)--;
++		(*buf)++;
++	}
+ }
+ 
+ static void proc_skip_char(char **buf, size_t *size, const char v)
+@@ -466,13 +467,12 @@ static int proc_get_long(char **buf, size_t *size,
+ 			  unsigned long *val, bool *neg,
+ 			  const char *perm_tr, unsigned perm_tr_len, char *tr)
+ {
+-	int len;
+ 	char *p, tmp[TMPBUFLEN];
++	ssize_t len = *size;
+ 
+-	if (!*size)
++	if (len <= 0)
+ 		return -EINVAL;
+ 
+-	len = *size;
+ 	if (len > TMPBUFLEN - 1)
+ 		len = TMPBUFLEN - 1;
+ 
+@@ -630,7 +630,7 @@ static int __do_proc_dointvec(void *tbl_data, struct ctl_table *table,
+ 		bool neg;
+ 
+ 		if (write) {
+-			left -= proc_skip_spaces(&p);
++			proc_skip_spaces(&p, &left);
+ 
+ 			if (!left)
+ 				break;
+@@ -657,7 +657,7 @@ static int __do_proc_dointvec(void *tbl_data, struct ctl_table *table,
+ 	if (!write && !first && left && !err)
+ 		proc_put_char(&buffer, &left, '\n');
+ 	if (write && !err && left)
+-		left -= proc_skip_spaces(&p);
++		proc_skip_spaces(&p, &left);
+ 	if (write && first)
+ 		return err ? : -EINVAL;
+ 	*lenp -= left;
+@@ -699,7 +699,7 @@ static int do_proc_douintvec_w(unsigned int *tbl_data,
+ 	if (left > PAGE_SIZE - 1)
+ 		left = PAGE_SIZE - 1;
+ 
+-	left -= proc_skip_spaces(&p);
++	proc_skip_spaces(&p, &left);
+ 	if (!left) {
+ 		err = -EINVAL;
+ 		goto out_free;
+@@ -719,7 +719,7 @@ static int do_proc_douintvec_w(unsigned int *tbl_data,
+ 	}
+ 
+ 	if (!err && left)
+-		left -= proc_skip_spaces(&p);
++		proc_skip_spaces(&p, &left);
+ 
+ out_free:
+ 	if (err)
+@@ -1177,7 +1177,7 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table,
+ 		if (write) {
+ 			bool neg;
+ 
+-			left -= proc_skip_spaces(&p);
++			proc_skip_spaces(&p, &left);
+ 			if (!left)
+ 				break;
+ 
+@@ -1205,7 +1205,7 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table,
+ 	if (!write && !first && left && !err)
+ 		proc_put_char(&buffer, &left, '\n');
+ 	if (write && !err)
+-		left -= proc_skip_spaces(&p);
++		proc_skip_spaces(&p, &left);
+ 	if (write && first)
+ 		return err ? : -EINVAL;
+ 	*lenp -= left;
+diff --git a/kernel/trace/trace_dynevent.c b/kernel/trace/trace_dynevent.c
+index 5fa49cfd2bb6f..d312a52a10a5b 100644
+--- a/kernel/trace/trace_dynevent.c
++++ b/kernel/trace/trace_dynevent.c
+@@ -70,6 +70,7 @@ int dyn_event_release(int argc, char **argv, struct dyn_event_operations *type)
+ 		if (ret)
+ 			break;
+ 	}
++	tracing_reset_all_online_cpus();
+ 	mutex_unlock(&event_mutex);
+ 
+ 	return ret;
+@@ -165,6 +166,7 @@ int dyn_events_release_all(struct dyn_event_operations *type)
+ 			break;
+ 	}
+ out:
++	tracing_reset_all_online_cpus();
+ 	mutex_unlock(&event_mutex);
+ 
+ 	return ret;
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 826ecf01e380c..bac13f24a96e5 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -2572,7 +2572,10 @@ static int probe_remove_event_call(struct trace_event_call *call)
+ 		 * TRACE_REG_UNREGISTER.
+ 		 */
+ 		if (file->flags & EVENT_FILE_FL_ENABLED)
+-			return -EBUSY;
++			goto busy;
++
++		if (file->flags & EVENT_FILE_FL_WAS_ENABLED)
++			tr->clear_trace = true;
+ 		/*
+ 		 * The do_for_each_event_file_safe() is
+ 		 * a double loop. After finding the call for this
+@@ -2585,6 +2588,12 @@ static int probe_remove_event_call(struct trace_event_call *call)
+ 	__trace_remove_event_call(call);
+ 
+ 	return 0;
++ busy:
++	/* No need to clear the trace now */
++	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
++		tr->clear_trace = false;
++	}
++	return -EBUSY;
+ }
+ 
+ /* Remove an event_call */
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index ce796ca869c22..4aed8abb2022e 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -298,8 +298,10 @@ config FRAME_WARN
+ 	int "Warn for stack frames larger than"
+ 	range 0 8192
+ 	default 2048 if GCC_PLUGIN_LATENT_ENTROPY
+-	default 1280 if (!64BIT && PARISC)
+-	default 1024 if (!64BIT && !PARISC)
++	default 2048 if PARISC
++	default 1536 if (!64BIT && XTENSA)
++	default 1280 if KASAN && !64BIT
++	default 1024 if !64BIT
+ 	default 2048 if 64BIT
+ 	help
+ 	  Tell gcc to warn at build time for stack frames larger than this.
+@@ -1801,8 +1803,14 @@ config NETDEV_NOTIFIER_ERROR_INJECT
+ 	  If unsure, say N.
+ 
+ config FUNCTION_ERROR_INJECTION
+-	def_bool y
++	bool "Fault-injections of functions"
+ 	depends on HAVE_FUNCTION_ERROR_INJECTION && KPROBES
++	help
++	  Add fault injections into various functions that are annotated with
++	  ALLOW_ERROR_INJECTION() in the kernel. BPF may also modify the return
++	  value of theses functions. This is useful to test error paths of code.
++
++	  If unsure, say N
+ 
+ config FAULT_INJECTION
+ 	bool "Fault-injection framework"
+diff --git a/mm/frame_vector.c b/mm/frame_vector.c
+index 10f82d5643b6d..0e589a9a88012 100644
+--- a/mm/frame_vector.c
++++ b/mm/frame_vector.c
+@@ -37,7 +37,6 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
+ 	struct mm_struct *mm = current->mm;
+ 	struct vm_area_struct *vma;
+ 	int ret = 0;
+-	int err;
+ 	int locked;
+ 
+ 	if (nr_frames == 0)
+@@ -74,32 +73,14 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
+ 		vec->is_pfns = false;
+ 		ret = pin_user_pages_locked(start, nr_frames,
+ 			gup_flags, (struct page **)(vec->ptrs), &locked);
+-		goto out;
++		if (likely(ret > 0))
++			goto out;
+ 	}
+ 
+-	vec->got_ref = false;
+-	vec->is_pfns = true;
+-	do {
+-		unsigned long *nums = frame_vector_pfns(vec);
+-
+-		while (ret < nr_frames && start + PAGE_SIZE <= vma->vm_end) {
+-			err = follow_pfn(vma, start, &nums[ret]);
+-			if (err) {
+-				if (ret == 0)
+-					ret = err;
+-				goto out;
+-			}
+-			start += PAGE_SIZE;
+-			ret++;
+-		}
+-		/*
+-		 * We stop if we have enough pages or if VMA doesn't completely
+-		 * cover the tail page.
+-		 */
+-		if (ret >= nr_frames || start < vma->vm_end)
+-			break;
+-		vma = find_vma_intersection(mm, start, start + 1);
+-	} while (vma && vma->vm_flags & (VM_IO | VM_PFNMAP));
++	/* This used to (racily) return non-refcounted pfns. Let people know */
++	WARN_ONCE(1, "get_vaddr_frames() cannot follow VM_IO mapping");
++	vec->nr_frames = 0;
++
+ out:
+ 	if (locked)
+ 		mmap_read_unlock(mm);
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index 400219801e63b..deb66635f0f3b 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -852,8 +852,10 @@ static int p9_socket_open(struct p9_client *client, struct socket *csocket)
+ 	struct file *file;
+ 
+ 	p = kzalloc(sizeof(struct p9_trans_fd), GFP_KERNEL);
+-	if (!p)
++	if (!p) {
++		sock_release(csocket);
+ 		return -ENOMEM;
++	}
+ 
+ 	csocket->sk->sk_allocation = GFP_NOIO;
+ 	file = sock_alloc_file(csocket, 0, NULL);
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index 908324b46328f..cb9b54a7abd24 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -303,17 +303,18 @@ static void hsr_deliver_master(struct sk_buff *skb, struct net_device *dev,
+ 			       struct hsr_node *node_src)
+ {
+ 	bool was_multicast_frame;
+-	int res;
++	int res, recv_len;
+ 
+ 	was_multicast_frame = (skb->pkt_type == PACKET_MULTICAST);
+ 	hsr_addr_subst_source(node_src, skb);
+ 	skb_pull(skb, ETH_HLEN);
++	recv_len = skb->len;
+ 	res = netif_rx(skb);
+ 	if (res == NET_RX_DROP) {
+ 		dev->stats.rx_dropped++;
+ 	} else {
+ 		dev->stats.rx_packets++;
+-		dev->stats.rx_bytes += skb->len;
++		dev->stats.rx_bytes += recv_len;
+ 		if (was_multicast_frame)
+ 			dev->stats.multicast++;
+ 	}
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 3824b7abecf7e..52ec0c43e6b81 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -887,13 +887,15 @@ int fib_nh_match(struct net *net, struct fib_config *cfg, struct fib_info *fi,
+ 		return 1;
+ 	}
+ 
++	if (fi->nh) {
++		if (cfg->fc_oif || cfg->fc_gw_family || cfg->fc_mp)
++			return 1;
++		return 0;
++	}
++
+ 	if (cfg->fc_oif || cfg->fc_gw_family) {
+ 		struct fib_nh *nh;
+ 
+-		/* cannot match on nexthop object attributes */
+-		if (fi->nh)
+-			return 1;
+-
+ 		nh = fib_info_nh(fi, 0);
+ 		if (cfg->fc_encap) {
+ 			if (fib_encap_match(net, cfg->fc_encap_type,
+diff --git a/net/mac80211/airtime.c b/net/mac80211/airtime.c
+index 26d2f8ba70297..758ef63669e7b 100644
+--- a/net/mac80211/airtime.c
++++ b/net/mac80211/airtime.c
+@@ -457,6 +457,9 @@ static u32 ieee80211_get_rate_duration(struct ieee80211_hw *hw,
+ 			 (status->encoding == RX_ENC_HE && streams > 8)))
+ 		return 0;
+ 
++	if (idx >= MCS_GROUP_RATES)
++		return 0;
++
+ 	duration = airtime_mcs_groups[group].duration[idx];
+ 	duration <<= airtime_mcs_groups[group].shift;
+ 	*overhead = 36 + (streams << 2);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index b70b06e312bd0..eaa030e2ad55a 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2243,8 +2243,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	if (skb->ip_summed == CHECKSUM_PARTIAL)
+ 		status |= TP_STATUS_CSUMNOTREADY;
+ 	else if (skb->pkt_type != PACKET_OUTGOING &&
+-		 (skb->ip_summed == CHECKSUM_COMPLETE ||
+-		  skb_csum_unnecessary(skb)))
++		 skb_csum_unnecessary(skb))
+ 		status |= TP_STATUS_CSUM_VALID;
+ 
+ 	if (snaplen > res)
+@@ -3480,8 +3479,7 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 		if (skb->ip_summed == CHECKSUM_PARTIAL)
+ 			aux.tp_status |= TP_STATUS_CSUMNOTREADY;
+ 		else if (skb->pkt_type != PACKET_OUTGOING &&
+-			 (skb->ip_summed == CHECKSUM_COMPLETE ||
+-			  skb_csum_unnecessary(skb)))
++			 skb_csum_unnecessary(skb))
+ 			aux.tp_status |= TP_STATUS_CSUM_VALID;
+ 
+ 		aux.tp_len = origlen;
+diff --git a/net/sctp/stream.c b/net/sctp/stream.c
+index ef9fceadef8d5..ee6514af830f7 100644
+--- a/net/sctp/stream.c
++++ b/net/sctp/stream.c
+@@ -52,6 +52,19 @@ static void sctp_stream_shrink_out(struct sctp_stream *stream, __u16 outcnt)
+ 	}
+ }
+ 
++static void sctp_stream_free_ext(struct sctp_stream *stream, __u16 sid)
++{
++	struct sctp_sched_ops *sched;
++
++	if (!SCTP_SO(stream, sid)->ext)
++		return;
++
++	sched = sctp_sched_ops_from_stream(stream);
++	sched->free_sid(stream, sid);
++	kfree(SCTP_SO(stream, sid)->ext);
++	SCTP_SO(stream, sid)->ext = NULL;
++}
++
+ /* Migrates chunks from stream queues to new stream queues if needed,
+  * but not across associations. Also, removes those chunks to streams
+  * higher than the new max.
+@@ -70,16 +83,14 @@ static void sctp_stream_outq_migrate(struct sctp_stream *stream,
+ 		 * sctp_stream_update will swap ->out pointers.
+ 		 */
+ 		for (i = 0; i < outcnt; i++) {
+-			kfree(SCTP_SO(new, i)->ext);
++			sctp_stream_free_ext(new, i);
+ 			SCTP_SO(new, i)->ext = SCTP_SO(stream, i)->ext;
+ 			SCTP_SO(stream, i)->ext = NULL;
+ 		}
+ 	}
+ 
+-	for (i = outcnt; i < stream->outcnt; i++) {
+-		kfree(SCTP_SO(stream, i)->ext);
+-		SCTP_SO(stream, i)->ext = NULL;
+-	}
++	for (i = outcnt; i < stream->outcnt; i++)
++		sctp_stream_free_ext(stream, i);
+ }
+ 
+ static int sctp_stream_alloc_out(struct sctp_stream *stream, __u16 outcnt,
+@@ -174,9 +185,9 @@ void sctp_stream_free(struct sctp_stream *stream)
+ 	struct sctp_sched_ops *sched = sctp_sched_ops_from_stream(stream);
+ 	int i;
+ 
+-	sched->free(stream);
++	sched->unsched_all(stream);
+ 	for (i = 0; i < stream->outcnt; i++)
+-		kfree(SCTP_SO(stream, i)->ext);
++		sctp_stream_free_ext(stream, i);
+ 	genradix_free(&stream->out);
+ 	genradix_free(&stream->in);
+ }
+diff --git a/net/sctp/stream_sched.c b/net/sctp/stream_sched.c
+index a2e1d34f52c5b..33c2630c2496b 100644
+--- a/net/sctp/stream_sched.c
++++ b/net/sctp/stream_sched.c
+@@ -46,6 +46,10 @@ static int sctp_sched_fcfs_init_sid(struct sctp_stream *stream, __u16 sid,
+ 	return 0;
+ }
+ 
++static void sctp_sched_fcfs_free_sid(struct sctp_stream *stream, __u16 sid)
++{
++}
++
+ static void sctp_sched_fcfs_free(struct sctp_stream *stream)
+ {
+ }
+@@ -96,6 +100,7 @@ static struct sctp_sched_ops sctp_sched_fcfs = {
+ 	.get = sctp_sched_fcfs_get,
+ 	.init = sctp_sched_fcfs_init,
+ 	.init_sid = sctp_sched_fcfs_init_sid,
++	.free_sid = sctp_sched_fcfs_free_sid,
+ 	.free = sctp_sched_fcfs_free,
+ 	.enqueue = sctp_sched_fcfs_enqueue,
+ 	.dequeue = sctp_sched_fcfs_dequeue,
+diff --git a/net/sctp/stream_sched_prio.c b/net/sctp/stream_sched_prio.c
+index 80b5a2c4cbc7b..4fc9f2923ed11 100644
+--- a/net/sctp/stream_sched_prio.c
++++ b/net/sctp/stream_sched_prio.c
+@@ -204,6 +204,24 @@ static int sctp_sched_prio_init_sid(struct sctp_stream *stream, __u16 sid,
+ 	return sctp_sched_prio_set(stream, sid, 0, gfp);
+ }
+ 
++static void sctp_sched_prio_free_sid(struct sctp_stream *stream, __u16 sid)
++{
++	struct sctp_stream_priorities *prio = SCTP_SO(stream, sid)->ext->prio_head;
++	int i;
++
++	if (!prio)
++		return;
++
++	SCTP_SO(stream, sid)->ext->prio_head = NULL;
++	for (i = 0; i < stream->outcnt; i++) {
++		if (SCTP_SO(stream, i)->ext &&
++		    SCTP_SO(stream, i)->ext->prio_head == prio)
++			return;
++	}
++
++	kfree(prio);
++}
++
+ static void sctp_sched_prio_free(struct sctp_stream *stream)
+ {
+ 	struct sctp_stream_priorities *prio, *n;
+@@ -323,6 +341,7 @@ static struct sctp_sched_ops sctp_sched_prio = {
+ 	.get = sctp_sched_prio_get,
+ 	.init = sctp_sched_prio_init,
+ 	.init_sid = sctp_sched_prio_init_sid,
++	.free_sid = sctp_sched_prio_free_sid,
+ 	.free = sctp_sched_prio_free,
+ 	.enqueue = sctp_sched_prio_enqueue,
+ 	.dequeue = sctp_sched_prio_dequeue,
+diff --git a/net/sctp/stream_sched_rr.c b/net/sctp/stream_sched_rr.c
+index ff425aed62c7f..cc444fe0d67c2 100644
+--- a/net/sctp/stream_sched_rr.c
++++ b/net/sctp/stream_sched_rr.c
+@@ -90,6 +90,10 @@ static int sctp_sched_rr_init_sid(struct sctp_stream *stream, __u16 sid,
+ 	return 0;
+ }
+ 
++static void sctp_sched_rr_free_sid(struct sctp_stream *stream, __u16 sid)
++{
++}
++
+ static void sctp_sched_rr_free(struct sctp_stream *stream)
+ {
+ 	sctp_sched_rr_unsched_all(stream);
+@@ -177,6 +181,7 @@ static struct sctp_sched_ops sctp_sched_rr = {
+ 	.get = sctp_sched_rr_get,
+ 	.init = sctp_sched_rr_init,
+ 	.init_sid = sctp_sched_rr_init_sid,
++	.free_sid = sctp_sched_rr_free_sid,
+ 	.free = sctp_sched_rr_free,
+ 	.enqueue = sctp_sched_rr_enqueue,
+ 	.dequeue = sctp_sched_rr_dequeue,
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 6f91b9a306dc3..de63d6d41645c 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -1975,6 +1975,9 @@ rcv:
+ 	/* Ok, everything's fine, try to synch own keys according to peers' */
+ 	tipc_crypto_key_synch(rx, *skb);
+ 
++	/* Re-fetch skb cb as skb might be changed in tipc_msg_validate */
++	skb_cb = TIPC_SKB_CB(*skb);
++
+ 	/* Mark skb decrypted */
+ 	skb_cb->decrypted = 1;
+ 
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 15119c49c0934..d09dabae56271 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -330,7 +330,8 @@ static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
+ 			 * determine if they are the same ie.
+ 			 */
+ 			if (tmp_old[0] == WLAN_EID_VENDOR_SPECIFIC) {
+-				if (!memcmp(tmp_old + 2, tmp + 2, 5)) {
++				if (tmp_old[1] >= 5 && tmp[1] >= 5 &&
++				    !memcmp(tmp_old + 2, tmp + 2, 5)) {
+ 					/* same vendor ie, copy from
+ 					 * subelement
+ 					 */
+@@ -2466,10 +2467,15 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 	const struct cfg80211_bss_ies *ies1, *ies2;
+ 	size_t ielen = len - offsetof(struct ieee80211_mgmt,
+ 				      u.probe_resp.variable);
+-	struct cfg80211_non_tx_bss non_tx_data;
++	struct cfg80211_non_tx_bss non_tx_data = {};
+ 
+ 	res = cfg80211_inform_single_bss_frame_data(wiphy, data, mgmt,
+ 						    len, gfp);
++
++	/* don't do any further MBSSID handling for S1G */
++	if (ieee80211_is_s1g_beacon(mgmt->frame_control))
++		return res;
++
+ 	if (!res || !wiphy->support_mbssid ||
+ 	    !cfg80211_find_ie(WLAN_EID_MULTIPLE_BSSID, ie, ielen))
+ 		return res;
+diff --git a/scripts/faddr2line b/scripts/faddr2line
+index 57099687e5e1d..9e730b805e87c 100755
+--- a/scripts/faddr2line
++++ b/scripts/faddr2line
+@@ -73,7 +73,8 @@ command -v ${ADDR2LINE} >/dev/null 2>&1 || die "${ADDR2LINE} isn't installed"
+ find_dir_prefix() {
+ 	local objfile=$1
+ 
+-	local start_kernel_addr=$(${READELF} --symbols --wide $objfile | ${AWK} '$8 == "start_kernel" {printf "0x%s", $2}')
++	local start_kernel_addr=$(${READELF} --symbols --wide $objfile | sed 's/\[.*\]//' |
++		${AWK} '$8 == "start_kernel" {printf "0x%s", $2}')
+ 	[[ -z $start_kernel_addr ]] && return
+ 
+ 	local file_line=$(${ADDR2LINE} -e $objfile $start_kernel_addr)
+@@ -177,7 +178,7 @@ __faddr2line() {
+ 				found=2
+ 				break
+ 			fi
+-		done < <(${READELF} --symbols --wide $objfile | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2)
++		done < <(${READELF} --symbols --wide $objfile | sed 's/\[.*\]//' | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2)
+ 
+ 		if [[ $found = 0 ]]; then
+ 			warn "can't find symbol: sym_name: $sym_name sym_sec: $sym_sec sym_addr: $sym_addr sym_elf_size: $sym_elf_size"
+@@ -258,7 +259,7 @@ __faddr2line() {
+ 
+ 		DONE=1
+ 
+-	done < <(${READELF} --symbols --wide $objfile | ${AWK} -v fn=$sym_name '$4 == "FUNC" && $8 == fn')
++	done < <(${READELF} --symbols --wide $objfile | sed 's/\[.*\]//' | ${AWK} -v fn=$sym_name '$4 == "FUNC" && $8 == fn')
+ }
+ 
+ [[ $# -lt 2 ]] && usage
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 0f26d6c31ce50..5fdd96e77ef3b 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -432,7 +432,7 @@ int snd_soc_put_volsw_sx(struct snd_kcontrol *kcontrol,
+ 	val = ucontrol->value.integer.value[0];
+ 	if (mc->platform_max && val > mc->platform_max)
+ 		return -EINVAL;
+-	if (val > max - min)
++	if (val > max)
+ 		return -EINVAL;
+ 	if (val < 0)
+ 		return -EINVAL;
+diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
+index 86c31c787fb91..5e242be45206a 100644
+--- a/tools/lib/bpf/ringbuf.c
++++ b/tools/lib/bpf/ringbuf.c
+@@ -59,6 +59,7 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
+ 	__u32 len = sizeof(info);
+ 	struct epoll_event *e;
+ 	struct ring *r;
++	__u64 mmap_sz;
+ 	void *tmp;
+ 	int err;
+ 
+@@ -97,8 +98,7 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
+ 	r->mask = info.max_entries - 1;
+ 
+ 	/* Map writable consumer page */
+-	tmp = mmap(NULL, rb->page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
+-		   map_fd, 0);
++	tmp = mmap(NULL, rb->page_size, PROT_READ | PROT_WRITE, MAP_SHARED, map_fd, 0);
+ 	if (tmp == MAP_FAILED) {
+ 		err = -errno;
+ 		pr_warn("ringbuf: failed to mmap consumer page for map fd=%d: %d\n",
+@@ -111,8 +111,12 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
+ 	 * data size to allow simple reading of samples that wrap around the
+ 	 * end of a ring buffer. See kernel implementation for details.
+ 	 * */
+-	tmp = mmap(NULL, rb->page_size + 2 * info.max_entries, PROT_READ,
+-		   MAP_SHARED, map_fd, rb->page_size);
++	mmap_sz = rb->page_size + 2 * (__u64)info.max_entries;
++	if (mmap_sz != (__u64)(size_t)mmap_sz) {
++		pr_warn("ringbuf: ring buffer size (%u) is too big\n", info.max_entries);
++		return -E2BIG;
++	}
++	tmp = mmap(NULL, (size_t)mmap_sz, PROT_READ, MAP_SHARED, map_fd, rb->page_size);
+ 	if (tmp == MAP_FAILED) {
+ 		err = -errno;
+ 		ringbuf_unmap_ring(rb, r);
+diff --git a/tools/testing/selftests/net/fib_nexthops.sh b/tools/testing/selftests/net/fib_nexthops.sh
+index 4c7d33618437c..7ece4131dc6fc 100755
+--- a/tools/testing/selftests/net/fib_nexthops.sh
++++ b/tools/testing/selftests/net/fib_nexthops.sh
+@@ -931,6 +931,36 @@ ipv4_fcnal()
+ 	set +e
+ 	check_nexthop "dev veth1" ""
+ 	log_test $? 0 "Nexthops removed on admin down"
++
++	# nexthop route delete warning: route add with nhid and delete
++	# using device
++	run_cmd "$IP li set dev veth1 up"
++	run_cmd "$IP nexthop add id 12 via 172.16.1.3 dev veth1"
++	out1=`dmesg | grep "WARNING:.*fib_nh_match.*" | wc -l`
++	run_cmd "$IP route add 172.16.101.1/32 nhid 12"
++	run_cmd "$IP route delete 172.16.101.1/32 dev veth1"
++	out2=`dmesg | grep "WARNING:.*fib_nh_match.*" | wc -l`
++	[ $out1 -eq $out2 ]
++	rc=$?
++	log_test $rc 0 "Delete nexthop route warning"
++	run_cmd "$IP route delete 172.16.101.1/32 nhid 12"
++	run_cmd "$IP nexthop del id 12"
++
++	run_cmd "$IP nexthop add id 21 via 172.16.1.6 dev veth1"
++	run_cmd "$IP ro add 172.16.101.0/24 nhid 21"
++	run_cmd "$IP ro del 172.16.101.0/24 nexthop via 172.16.1.7 dev veth1 nexthop via 172.16.1.8 dev veth1"
++	log_test $? 2 "Delete multipath route with only nh id based entry"
++
++	run_cmd "$IP nexthop add id 22 via 172.16.1.6 dev veth1"
++	run_cmd "$IP ro add 172.16.102.0/24 nhid 22"
++	run_cmd "$IP ro del 172.16.102.0/24 dev veth1"
++	log_test $? 2 "Delete route when specifying only nexthop device"
++
++	run_cmd "$IP ro del 172.16.102.0/24 via 172.16.1.6"
++	log_test $? 2 "Delete route when specifying only gateway"
++
++	run_cmd "$IP ro del 172.16.102.0/24"
++	log_test $? 0 "Delete route when not specifying nexthop attributes"
+ }
+ 
+ ipv4_grp_fcnal()
+diff --git a/tools/vm/slabinfo-gnuplot.sh b/tools/vm/slabinfo-gnuplot.sh
+index 26e193ffd2a2f..873a892147e57 100644
+--- a/tools/vm/slabinfo-gnuplot.sh
++++ b/tools/vm/slabinfo-gnuplot.sh
+@@ -150,7 +150,7 @@ do_preprocess()
+ 	let lines=3
+ 	out=`basename "$in"`"-slabs-by-loss"
+ 	`cat "$in" | grep -A "$lines" 'Slabs sorted by loss' |\
+-		egrep -iv '\-\-|Name|Slabs'\
++		grep -E -iv '\-\-|Name|Slabs'\
+ 		| awk '{print $1" "$4+$2*$3" "$4}' > "$out"`
+ 	if [ $? -eq 0 ]; then
+ 		do_slabs_plotting "$out"
+@@ -159,7 +159,7 @@ do_preprocess()
+ 	let lines=3
+ 	out=`basename "$in"`"-slabs-by-size"
+ 	`cat "$in" | grep -A "$lines" 'Slabs sorted by size' |\
+-		egrep -iv '\-\-|Name|Slabs'\
++		grep -E -iv '\-\-|Name|Slabs'\
+ 		| awk '{print $1" "$4" "$4-$2*$3}' > "$out"`
+ 	if [ $? -eq 0 ]; then
+ 		do_slabs_plotting "$out"


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-12-14 12:14 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2022-12-14 12:14 UTC (permalink / raw
  To: gentoo-commits

commit:     1f9192297c331dac19647519edd7ab6b20858846
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 14 12:14:13 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec 14 12:14:13 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1f919229

Linux patch 5.10.159

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1158_linux-5.10.159.patch | 3519 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3523 insertions(+)

diff --git a/0000_README b/0000_README
index 48ded094..a20d23b7 100644
--- a/0000_README
+++ b/0000_README
@@ -675,6 +675,10 @@ Patch:  1157_linux-5.10.158.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.158
 
+Patch:  1158_linux-5.10.159.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.159
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1158_linux-5.10.159.patch b/1158_linux-5.10.159.patch
new file mode 100644
index 00000000..8c72d24c
--- /dev/null
+++ b/1158_linux-5.10.159.patch
@@ -0,0 +1,3519 @@
+diff --git a/Makefile b/Makefile
+index f3d1f07b6a6fc..bb9fab281555a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 158
++SUBLEVEL = 159
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/alpha/kernel/rtc.c b/arch/alpha/kernel/rtc.c
+index 1b1d5963ac550..48ffbfbd06240 100644
+--- a/arch/alpha/kernel/rtc.c
++++ b/arch/alpha/kernel/rtc.c
+@@ -80,7 +80,12 @@ init_rtc_epoch(void)
+ static int
+ alpha_rtc_read_time(struct device *dev, struct rtc_time *tm)
+ {
+-	mc146818_get_time(tm);
++	int ret = mc146818_get_time(tm);
++
++	if (ret < 0) {
++		dev_err_ratelimited(dev, "unable to read current time\n");
++		return ret;
++	}
+ 
+ 	/* Adjust for non-default epochs.  It's easier to depend on the
+ 	   generic __get_rtc_time and adjust the epoch here than create
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index e4ff47110a960..9e1b0af0aa43f 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -1221,10 +1221,10 @@
+ 			clocks = <&clks IMX7D_NAND_USDHC_BUS_RAWNAND_CLK>;
+ 		};
+ 
+-		gpmi: nand-controller@33002000 {
++		gpmi: nand-controller@33002000{
+ 			compatible = "fsl,imx7d-gpmi-nand";
+ 			#address-cells = <1>;
+-			#size-cells = <0>;
++			#size-cells = <1>;
+ 			reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
+ 			reg-names = "gpmi-nand", "bch";
+ 			interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/rk3036-evb.dts b/arch/arm/boot/dts/rk3036-evb.dts
+index 2a7e6624efb93..ea23ba98625e7 100644
+--- a/arch/arm/boot/dts/rk3036-evb.dts
++++ b/arch/arm/boot/dts/rk3036-evb.dts
+@@ -31,7 +31,7 @@
+ &i2c1 {
+ 	status = "okay";
+ 
+-	hym8563: hym8563@51 {
++	hym8563: rtc@51 {
+ 		compatible = "haoyu,hym8563";
+ 		reg = <0x51>;
+ 		#clock-cells = <0>;
+diff --git a/arch/arm/boot/dts/rk3188-radxarock.dts b/arch/arm/boot/dts/rk3188-radxarock.dts
+index b0fef82c0a71b..39b913f8d7018 100644
+--- a/arch/arm/boot/dts/rk3188-radxarock.dts
++++ b/arch/arm/boot/dts/rk3188-radxarock.dts
+@@ -67,7 +67,7 @@
+ 		#sound-dai-cells = <0>;
+ 	};
+ 
+-	ir_recv: gpio-ir-receiver {
++	ir_recv: ir-receiver {
+ 		compatible = "gpio-ir-receiver";
+ 		gpios = <&gpio0 RK_PB2 GPIO_ACTIVE_LOW>;
+ 		pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/rk3188.dtsi b/arch/arm/boot/dts/rk3188.dtsi
+index b6bde9d12c2be..ddf23748ead4c 100644
+--- a/arch/arm/boot/dts/rk3188.dtsi
++++ b/arch/arm/boot/dts/rk3188.dtsi
+@@ -402,7 +402,7 @@
+ 				rockchip,pins = <2 RK_PD3 1 &pcfg_pull_none>;
+ 			};
+ 
+-			lcdc1_rgb24: ldcd1-rgb24 {
++			lcdc1_rgb24: lcdc1-rgb24 {
+ 				rockchip,pins = <2 RK_PA0 1 &pcfg_pull_none>,
+ 						<2 RK_PA1 1 &pcfg_pull_none>,
+ 						<2 RK_PA2 1 &pcfg_pull_none>,
+@@ -630,7 +630,6 @@
+ 
+ &global_timer {
+ 	interrupts = <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
+-	status = "disabled";
+ };
+ 
+ &local_timer {
+diff --git a/arch/arm/boot/dts/rk3288-evb-act8846.dts b/arch/arm/boot/dts/rk3288-evb-act8846.dts
+index be695b8c1f672..8a635c2431274 100644
+--- a/arch/arm/boot/dts/rk3288-evb-act8846.dts
++++ b/arch/arm/boot/dts/rk3288-evb-act8846.dts
+@@ -54,7 +54,7 @@
+ 		vin-supply = <&vcc_sys>;
+ 	};
+ 
+-	hym8563@51 {
++	rtc@51 {
+ 		compatible = "haoyu,hym8563";
+ 		reg = <0x51>;
+ 
+diff --git a/arch/arm/boot/dts/rk3288-firefly.dtsi b/arch/arm/boot/dts/rk3288-firefly.dtsi
+index 7fb582302b326..c560afe3af780 100644
+--- a/arch/arm/boot/dts/rk3288-firefly.dtsi
++++ b/arch/arm/boot/dts/rk3288-firefly.dtsi
+@@ -233,7 +233,7 @@
+ 		vin-supply = <&vcc_sys>;
+ 	};
+ 
+-	hym8563: hym8563@51 {
++	hym8563: rtc@51 {
+ 		compatible = "haoyu,hym8563";
+ 		reg = <0x51>;
+ 		#clock-cells = <0>;
+diff --git a/arch/arm/boot/dts/rk3288-miqi.dts b/arch/arm/boot/dts/rk3288-miqi.dts
+index cf54d5ffff2f9..fe265a834e8ea 100644
+--- a/arch/arm/boot/dts/rk3288-miqi.dts
++++ b/arch/arm/boot/dts/rk3288-miqi.dts
+@@ -157,7 +157,7 @@
+ 		vin-supply = <&vcc_sys>;
+ 	};
+ 
+-	hym8563: hym8563@51 {
++	hym8563: rtc@51 {
+ 		compatible = "haoyu,hym8563";
+ 		reg = <0x51>;
+ 		#clock-cells = <0>;
+diff --git a/arch/arm/boot/dts/rk3288-rock2-square.dts b/arch/arm/boot/dts/rk3288-rock2-square.dts
+index c4d1d142d8c68..d5ef99ebbddce 100644
+--- a/arch/arm/boot/dts/rk3288-rock2-square.dts
++++ b/arch/arm/boot/dts/rk3288-rock2-square.dts
+@@ -165,7 +165,7 @@
+ };
+ 
+ &i2c0 {
+-	hym8563: hym8563@51 {
++	hym8563: rtc@51 {
+ 		compatible = "haoyu,hym8563";
+ 		reg = <0x51>;
+ 		#clock-cells = <0>;
+diff --git a/arch/arm/boot/dts/rk3xxx.dtsi b/arch/arm/boot/dts/rk3xxx.dtsi
+index 859a7477909f1..5edc46a5585cb 100644
+--- a/arch/arm/boot/dts/rk3xxx.dtsi
++++ b/arch/arm/boot/dts/rk3xxx.dtsi
+@@ -111,6 +111,13 @@
+ 		reg = <0x1013c200 0x20>;
+ 		interrupts = <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_EDGE_RISING)>;
+ 		clocks = <&cru CORE_PERI>;
++		status = "disabled";
++		/* The clock source and the sched_clock provided by the arm_global_timer
++		 * on Rockchip rk3066a/rk3188 are quite unstable because their rates
++		 * depend on the CPU frequency.
++		 * Keep the arm_global_timer disabled in order to have the
++		 * DW_APB_TIMER (rk3066a) or ROCKCHIP_TIMER (rk3188) selected by default.
++		 */
+ 	};
+ 
+ 	local_timer: local-timer@1013c600 {
+diff --git a/arch/arm/include/asm/perf_event.h b/arch/arm/include/asm/perf_event.h
+index fe87397c3d8c6..bdbc1e590891e 100644
+--- a/arch/arm/include/asm/perf_event.h
++++ b/arch/arm/include/asm/perf_event.h
+@@ -17,7 +17,7 @@ extern unsigned long perf_misc_flags(struct pt_regs *regs);
+ 
+ #define perf_arch_fetch_caller_regs(regs, __ip) { \
+ 	(regs)->ARM_pc = (__ip); \
+-	(regs)->ARM_fp = (unsigned long) __builtin_frame_address(0); \
++	frame_pointer((regs)) = (unsigned long) __builtin_frame_address(0); \
+ 	(regs)->ARM_sp = current_stack_pointer; \
+ 	(regs)->ARM_cpsr = SVC_MODE; \
+ }
+diff --git a/arch/arm/include/asm/pgtable-nommu.h b/arch/arm/include/asm/pgtable-nommu.h
+index d16aba48fa0a4..090011394477f 100644
+--- a/arch/arm/include/asm/pgtable-nommu.h
++++ b/arch/arm/include/asm/pgtable-nommu.h
+@@ -44,12 +44,6 @@
+ 
+ typedef pte_t *pte_addr_t;
+ 
+-/*
+- * ZERO_PAGE is a global shared page that is always zero: used
+- * for zero-mapped memory areas etc..
+- */
+-#define ZERO_PAGE(vaddr)	(virt_to_page(0))
+-
+ /*
+  * Mark the prot value as uncacheable and unbufferable.
+  */
+diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
+index c02f24400369b..d38d503493cb2 100644
+--- a/arch/arm/include/asm/pgtable.h
++++ b/arch/arm/include/asm/pgtable.h
+@@ -10,6 +10,15 @@
+ #include <linux/const.h>
+ #include <asm/proc-fns.h>
+ 
++#ifndef __ASSEMBLY__
++/*
++ * ZERO_PAGE is a global shared page that is always zero: used
++ * for zero-mapped memory areas etc..
++ */
++extern struct page *empty_zero_page;
++#define ZERO_PAGE(vaddr)	(empty_zero_page)
++#endif
++
+ #ifndef CONFIG_MMU
+ 
+ #include <asm-generic/pgtable-nopud.h>
+@@ -156,13 +165,6 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
+ #define __S111  __PAGE_SHARED_EXEC
+ 
+ #ifndef __ASSEMBLY__
+-/*
+- * ZERO_PAGE is a global shared page that is always zero: used
+- * for zero-mapped memory areas etc..
+- */
+-extern struct page *empty_zero_page;
+-#define ZERO_PAGE(vaddr)	(empty_zero_page)
+-
+ 
+ extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+ 
+diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
+index 8b3d7191e2b88..959f057017384 100644
+--- a/arch/arm/mm/nommu.c
++++ b/arch/arm/mm/nommu.c
+@@ -26,6 +26,13 @@
+ 
+ unsigned long vectors_base;
+ 
++/*
++ * empty_zero_page is a special page that is used for
++ * zero-initialized data and COW.
++ */
++struct page *empty_zero_page;
++EXPORT_SYMBOL(empty_zero_page);
++
+ #ifdef CONFIG_ARM_MPU
+ struct mpu_rgn_info mpu_rgn_info;
+ #endif
+@@ -148,9 +155,21 @@ void __init adjust_lowmem_bounds(void)
+  */
+ void __init paging_init(const struct machine_desc *mdesc)
+ {
++	void *zero_page;
++
+ 	early_trap_init((void *)vectors_base);
+ 	mpu_setup();
++
++	/* allocate the zero page. */
++	zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
++	if (!zero_page)
++		panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
++		      __func__, PAGE_SIZE, PAGE_SIZE);
++
+ 	bootmem_init();
++
++	empty_zero_page = virt_to_page(zero_page);
++	flush_dcache_page(empty_zero_page);
+ }
+ 
+ /*
+diff --git a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+index fbcb9531cc70d..213c0759c4b85 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+@@ -13,7 +13,7 @@
+ 		stdout-path = "serial2:1500000n8";
+ 	};
+ 
+-	ir_rx {
++	ir-receiver {
+ 		compatible = "gpio-ir-receiver";
+ 		gpios = <&gpio0 RK_PC0 GPIO_ACTIVE_HIGH>;
+ 		pinctrl-names = "default";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
+index f121203081b97..64df643391194 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
+@@ -448,7 +448,6 @@
+ &i2s1 {
+ 	rockchip,playback-channels = <2>;
+ 	rockchip,capture-channels = <2>;
+-	status = "okay";
+ };
+ 
+ &i2s2 {
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index 3fbf7081c000c..ff58decfef5e8 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -535,8 +535,10 @@ static int shadow_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 	if (test_kvm_cpu_feat(vcpu->kvm, KVM_S390_VM_CPU_FEAT_CEI))
+ 		scb_s->eca |= scb_o->eca & ECA_CEI;
+ 	/* Epoch Extension */
+-	if (test_kvm_facility(vcpu->kvm, 139))
++	if (test_kvm_facility(vcpu->kvm, 139)) {
+ 		scb_s->ecd |= scb_o->ecd & ECD_MEF;
++		scb_s->epdx = scb_o->epdx;
++	}
+ 
+ 	/* etoken */
+ 	if (test_kvm_facility(vcpu->kvm, 156))
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 4ab7a9757e521..574df24a8e5a4 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -1325,8 +1325,12 @@ irqreturn_t hpet_rtc_interrupt(int irq, void *dev_id)
+ 	hpet_rtc_timer_reinit();
+ 	memset(&curr_time, 0, sizeof(struct rtc_time));
+ 
+-	if (hpet_rtc_flags & (RTC_UIE | RTC_AIE))
+-		mc146818_get_time(&curr_time);
++	if (hpet_rtc_flags & (RTC_UIE | RTC_AIE)) {
++		if (unlikely(mc146818_get_time(&curr_time) < 0)) {
++			pr_err_ratelimited("unable to read current time from RTC\n");
++			return IRQ_HANDLED;
++		}
++	}
+ 
+ 	if (hpet_rtc_flags & RTC_UIE &&
+ 	    curr_time.tm_sec != hpet_prev_update_sec) {
+diff --git a/drivers/base/power/trace.c b/drivers/base/power/trace.c
+index 94665037f4a35..72b7a92337b18 100644
+--- a/drivers/base/power/trace.c
++++ b/drivers/base/power/trace.c
+@@ -120,7 +120,11 @@ static unsigned int read_magic_time(void)
+ 	struct rtc_time time;
+ 	unsigned int val;
+ 
+-	mc146818_get_time(&time);
++	if (mc146818_get_time(&time) < 0) {
++		pr_err("Unable to read current time from RTC\n");
++		return 0;
++	}
++
+ 	pr_info("RTC time: %ptRt, date: %ptRd\n", &time, &time);
+ 	val = time.tm_year;				/* 100 years */
+ 	if (val > 100)
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 6efd981979bd3..54001ad5de9f5 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -1833,6 +1833,11 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ 
+ 	rp = (struct hci_rp_read_local_version *)skb->data;
+ 
++	bt_dev_info(hdev, "CSR: Setting up dongle with HCI ver=%u rev=%04x; LMP ver=%u subver=%04x; manufacturer=%u",
++		le16_to_cpu(rp->hci_ver), le16_to_cpu(rp->hci_rev),
++		le16_to_cpu(rp->lmp_ver), le16_to_cpu(rp->lmp_subver),
++		le16_to_cpu(rp->manufacturer));
++
+ 	/* Detect a wide host of Chinese controllers that aren't CSR.
+ 	 *
+ 	 * Known fake bcdDevices: 0x0100, 0x0134, 0x1915, 0x2520, 0x7558, 0x8891
+diff --git a/drivers/gpio/gpio-amd8111.c b/drivers/gpio/gpio-amd8111.c
+index fdcebe59510dd..68d95051dd0e6 100644
+--- a/drivers/gpio/gpio-amd8111.c
++++ b/drivers/gpio/gpio-amd8111.c
+@@ -231,7 +231,10 @@ found:
+ 		ioport_unmap(gp.pm);
+ 		goto out;
+ 	}
++	return 0;
++
+ out:
++	pci_dev_put(pdev);
+ 	return err;
+ }
+ 
+@@ -239,6 +242,7 @@ static void __exit amd_gpio_exit(void)
+ {
+ 	gpiochip_remove(&gp.chip);
+ 	ioport_unmap(gp.pm);
++	pci_dev_put(gp.pdev);
+ }
+ 
+ module_init(amd_gpio_init);
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+index 356c7d0bd035f..2c3c743df9505 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+@@ -2609,6 +2609,9 @@ static u32 *dw_hdmi_bridge_atomic_get_output_bus_fmts(struct drm_bridge *bridge,
+ 	 * if supported. In any case the default RGB888 format is added
+ 	 */
+ 
++	/* Default 8bit RGB fallback */
++	output_fmts[i++] = MEDIA_BUS_FMT_RGB888_1X24;
++
+ 	if (max_bpc >= 16 && info->bpc == 16) {
+ 		if (info->color_formats & DRM_COLOR_FORMAT_YCRCB444)
+ 			output_fmts[i++] = MEDIA_BUS_FMT_YUV16_1X48;
+@@ -2642,9 +2645,6 @@ static u32 *dw_hdmi_bridge_atomic_get_output_bus_fmts(struct drm_bridge *bridge,
+ 	if (info->color_formats & DRM_COLOR_FORMAT_YCRCB444)
+ 		output_fmts[i++] = MEDIA_BUS_FMT_YUV8_1X24;
+ 
+-	/* Default 8bit RGB fallback */
+-	output_fmts[i++] = MEDIA_BUS_FMT_RGB888_1X24;
+-
+ 	*num_output_fmts = i;
+ 
+ 	return output_fmts;
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index 1a58481037b3f..77a447a3fb1d1 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -621,9 +621,9 @@ static void ti_sn_bridge_set_video_timings(struct ti_sn_bridge *pdata)
+ 		&pdata->bridge.encoder->crtc->state->adjusted_mode;
+ 	u8 hsync_polarity = 0, vsync_polarity = 0;
+ 
+-	if (mode->flags & DRM_MODE_FLAG_PHSYNC)
++	if (mode->flags & DRM_MODE_FLAG_NHSYNC)
+ 		hsync_polarity = CHA_HSYNC_POLARITY;
+-	if (mode->flags & DRM_MODE_FLAG_PVSYNC)
++	if (mode->flags & DRM_MODE_FLAG_NVSYNC)
+ 		vsync_polarity = CHA_VSYNC_POLARITY;
+ 
+ 	ti_sn_bridge_write_u16(pdata, SN_CHA_ACTIVE_LINE_LENGTH_LOW_REG,
+diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
+index cfacce0418a49..c56656a95cf99 100644
+--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
++++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
+@@ -563,12 +563,20 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
+ {
+ 	struct drm_gem_object *obj = vma->vm_private_data;
+ 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+-	int ret;
+ 
+ 	WARN_ON(shmem->base.import_attach);
+ 
+-	ret = drm_gem_shmem_get_pages(shmem);
+-	WARN_ON_ONCE(ret != 0);
++	mutex_lock(&shmem->pages_lock);
++
++	/*
++	 * We should have already pinned the pages when the buffer was first
++	 * mmap'd, vm_open() just grabs an additional reference for the new
++	 * mm the vma is getting copied into (ie. on fork()).
++	 */
++	if (!WARN_ON_ONCE(!shmem->pages_use_count))
++		shmem->pages_use_count++;
++
++	mutex_unlock(&shmem->pages_lock);
+ 
+ 	drm_gem_vm_open(vma);
+ }
+@@ -616,10 +624,8 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+ 	shmem = to_drm_gem_shmem_obj(obj);
+ 
+ 	ret = drm_gem_shmem_get_pages(shmem);
+-	if (ret) {
+-		drm_gem_vm_close(vma);
++	if (ret)
+ 		return ret;
+-	}
+ 
+ 	vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
+ 	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
+index 4bf0f5ec4fc2d..2b6590344468d 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
+@@ -949,6 +949,10 @@ int vmw_kms_sou_init_display(struct vmw_private *dev_priv)
+ 	struct drm_device *dev = dev_priv->dev;
+ 	int i, ret;
+ 
++	/* Screen objects won't work if GMR's aren't available */
++	if (!dev_priv->has_gmr)
++		return -ENOSYS;
++
+ 	if (!(dev_priv->capabilities & SVGA_CAP_SCREEN_OBJECT_2)) {
+ 		DRM_INFO("Not using screen objects,"
+ 			 " missing cap SCREEN_OBJECT_2\n");
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 5550c943f9855..eaaf732f06304 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1310,6 +1310,9 @@ static s32 snto32(__u32 value, unsigned n)
+ 	if (!value || !n)
+ 		return 0;
+ 
++	if (n > 32)
++		n = 32;
++
+ 	switch (n) {
+ 	case 8:  return ((__s8)value);
+ 	case 16: return ((__s16)value);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 3350a41d7dce1..70a693f8f0343 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -257,6 +257,7 @@
+ #define USB_DEVICE_ID_CH_AXIS_295	0x001c
+ 
+ #define USB_VENDOR_ID_CHERRY		0x046a
++#define USB_DEVICE_ID_CHERRY_MOUSE_000C	0x000c
+ #define USB_DEVICE_ID_CHERRY_CYMOTION	0x0023
+ #define USB_DEVICE_ID_CHERRY_CYMOTION_SOLAR	0x0027
+ 
+@@ -874,6 +875,7 @@
+ #define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER	0x02fd
+ #define USB_DEVICE_ID_MS_PIXART_MOUSE    0x00cb
+ #define USB_DEVICE_ID_8BITDO_SN30_PRO_PLUS      0x02e0
++#define USB_DEVICE_ID_MS_MOUSE_0783      0x0783
+ 
+ #define USB_VENDOR_ID_MOJO		0x8282
+ #define USB_DEVICE_ID_RETRO_ADAPTER	0x3201
+@@ -1302,6 +1304,7 @@
+ 
+ #define USB_VENDOR_ID_PRIMAX	0x0461
+ #define USB_DEVICE_ID_PRIMAX_MOUSE_4D22	0x4d22
++#define USB_DEVICE_ID_PRIMAX_MOUSE_4E2A	0x4e2a
+ #define USB_DEVICE_ID_PRIMAX_KEYBOARD	0x4e05
+ #define USB_DEVICE_ID_PRIMAX_REZEL	0x4e72
+ #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F	0x4d0f
+diff --git a/drivers/hid/hid-lg4ff.c b/drivers/hid/hid-lg4ff.c
+index 5e6a0cef2a06d..e3fcf1353fb3b 100644
+--- a/drivers/hid/hid-lg4ff.c
++++ b/drivers/hid/hid-lg4ff.c
+@@ -872,6 +872,12 @@ static ssize_t lg4ff_alternate_modes_store(struct device *dev, struct device_att
+ 		return -ENOMEM;
+ 
+ 	i = strlen(lbuf);
++
++	if (i == 0) {
++		kfree(lbuf);
++		return -EINVAL;
++	}
++
+ 	if (lbuf[i-1] == '\n') {
+ 		if (i == 1) {
+ 			kfree(lbuf);
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 4a8014e9a511c..1efde40e51364 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -54,6 +54,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FLIGHT_SIM_YOKE), HID_QUIRK_NOGET },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_PRO_PEDALS), HID_QUIRK_NOGET },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_PRO_THROTTLE), HID_QUIRK_NOGET },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_CHERRY, USB_DEVICE_ID_CHERRY_MOUSE_000C), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K65RGB), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K65RGB_RAPIDFIRE), HID_QUIRK_NO_INIT_REPORTS | HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K70RGB), HID_QUIRK_NO_INIT_REPORTS },
+@@ -122,6 +123,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C05A), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C06A), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MCS, USB_DEVICE_ID_MCS_GAMEPADBLOCK), HID_QUIRK_MULTI_INPUT },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_MOUSE_0783), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PIXART_MOUSE), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE3_COVER), HID_QUIRK_NO_INIT_REPORTS },
+@@ -146,6 +148,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4D22), HID_QUIRK_ALWAYS_POLL },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4E2A), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D65), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22), HID_QUIRK_ALWAYS_POLL },
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index 72350343a56a6..3bafde87a1257 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -787,7 +787,13 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
+ 	num_buffers = max_t(unsigned int, *count, q->min_buffers_needed);
+ 	num_buffers = min_t(unsigned int, num_buffers, VB2_MAX_FRAME);
+ 	memset(q->alloc_devs, 0, sizeof(q->alloc_devs));
++	/*
++	 * Set this now to ensure that drivers see the correct q->memory value
++	 * in the queue_setup op.
++	 */
++	mutex_lock(&q->mmap_lock);
+ 	q->memory = memory;
++	mutex_unlock(&q->mmap_lock);
+ 
+ 	/*
+ 	 * Ask the driver how many buffers and planes per buffer it requires.
+@@ -796,22 +802,27 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
+ 	ret = call_qop(q, queue_setup, q, &num_buffers, &num_planes,
+ 		       plane_sizes, q->alloc_devs);
+ 	if (ret)
+-		return ret;
++		goto error;
+ 
+ 	/* Check that driver has set sane values */
+-	if (WARN_ON(!num_planes))
+-		return -EINVAL;
++	if (WARN_ON(!num_planes)) {
++		ret = -EINVAL;
++		goto error;
++	}
+ 
+ 	for (i = 0; i < num_planes; i++)
+-		if (WARN_ON(!plane_sizes[i]))
+-			return -EINVAL;
++		if (WARN_ON(!plane_sizes[i])) {
++			ret = -EINVAL;
++			goto error;
++		}
+ 
+ 	/* Finally, allocate buffers and video memory */
+ 	allocated_buffers =
+ 		__vb2_queue_alloc(q, memory, num_buffers, num_planes, plane_sizes);
+ 	if (allocated_buffers == 0) {
+ 		dprintk(q, 1, "memory allocation failed\n");
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto error;
+ 	}
+ 
+ 	/*
+@@ -852,7 +863,8 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
+ 	if (ret < 0) {
+ 		/*
+ 		 * Note: __vb2_queue_free() will subtract 'allocated_buffers'
+-		 * from q->num_buffers.
++		 * from q->num_buffers and it will reset q->memory to
++		 * VB2_MEMORY_UNKNOWN.
+ 		 */
+ 		__vb2_queue_free(q, allocated_buffers);
+ 		mutex_unlock(&q->mmap_lock);
+@@ -868,6 +880,12 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
+ 	q->waiting_for_buffers = !q->is_output;
+ 
+ 	return 0;
++
++error:
++	mutex_lock(&q->mmap_lock);
++	q->memory = VB2_MEMORY_UNKNOWN;
++	mutex_unlock(&q->mmap_lock);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(vb2_core_reqbufs);
+ 
+@@ -878,6 +896,7 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
+ {
+ 	unsigned int num_planes = 0, num_buffers, allocated_buffers;
+ 	unsigned plane_sizes[VB2_MAX_PLANES] = { };
++	bool no_previous_buffers = !q->num_buffers;
+ 	int ret;
+ 
+ 	if (q->num_buffers == VB2_MAX_FRAME) {
+@@ -885,13 +904,19 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
+ 		return -ENOBUFS;
+ 	}
+ 
+-	if (!q->num_buffers) {
++	if (no_previous_buffers) {
+ 		if (q->waiting_in_dqbuf && *count) {
+ 			dprintk(q, 1, "another dup()ped fd is waiting for a buffer\n");
+ 			return -EBUSY;
+ 		}
+ 		memset(q->alloc_devs, 0, sizeof(q->alloc_devs));
++		/*
++		 * Set this now to ensure that drivers see the correct q->memory
++		 * value in the queue_setup op.
++		 */
++		mutex_lock(&q->mmap_lock);
+ 		q->memory = memory;
++		mutex_unlock(&q->mmap_lock);
+ 		q->waiting_for_buffers = !q->is_output;
+ 	} else {
+ 		if (q->memory != memory) {
+@@ -914,14 +939,15 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
+ 	ret = call_qop(q, queue_setup, q, &num_buffers,
+ 		       &num_planes, plane_sizes, q->alloc_devs);
+ 	if (ret)
+-		return ret;
++		goto error;
+ 
+ 	/* Finally, allocate buffers and video memory */
+ 	allocated_buffers = __vb2_queue_alloc(q, memory, num_buffers,
+ 				num_planes, plane_sizes);
+ 	if (allocated_buffers == 0) {
+ 		dprintk(q, 1, "memory allocation failed\n");
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto error;
+ 	}
+ 
+ 	/*
+@@ -952,7 +978,8 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
+ 	if (ret < 0) {
+ 		/*
+ 		 * Note: __vb2_queue_free() will subtract 'allocated_buffers'
+-		 * from q->num_buffers.
++		 * from q->num_buffers and it will reset q->memory to
++		 * VB2_MEMORY_UNKNOWN.
+ 		 */
+ 		__vb2_queue_free(q, allocated_buffers);
+ 		mutex_unlock(&q->mmap_lock);
+@@ -967,6 +994,14 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
+ 	*count = allocated_buffers;
+ 
+ 	return 0;
++
++error:
++	if (no_previous_buffers) {
++		mutex_lock(&q->mmap_lock);
++		q->memory = VB2_MEMORY_UNKNOWN;
++		mutex_unlock(&q->mmap_lock);
++	}
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(vb2_core_create_bufs);
+ 
+@@ -2120,6 +2155,22 @@ static int __find_plane_by_offset(struct vb2_queue *q, unsigned long off,
+ 	struct vb2_buffer *vb;
+ 	unsigned int buffer, plane;
+ 
++	/*
++	 * Sanity checks to ensure the lock is held, MEMORY_MMAP is
++	 * used and fileio isn't active.
++	 */
++	lockdep_assert_held(&q->mmap_lock);
++
++	if (q->memory != VB2_MEMORY_MMAP) {
++		dprintk(q, 1, "queue is not currently set up for mmap\n");
++		return -EINVAL;
++	}
++
++	if (vb2_fileio_is_active(q)) {
++		dprintk(q, 1, "file io in progress\n");
++		return -EBUSY;
++	}
++
+ 	/*
+ 	 * Go over all buffers and their planes, comparing the given offset
+ 	 * with an offset assigned to each plane. If a match is found,
+@@ -2219,11 +2270,6 @@ int vb2_mmap(struct vb2_queue *q, struct vm_area_struct *vma)
+ 	int ret;
+ 	unsigned long length;
+ 
+-	if (q->memory != VB2_MEMORY_MMAP) {
+-		dprintk(q, 1, "queue is not currently set up for mmap\n");
+-		return -EINVAL;
+-	}
+-
+ 	/*
+ 	 * Check memory area access mode.
+ 	 */
+@@ -2245,14 +2291,9 @@ int vb2_mmap(struct vb2_queue *q, struct vm_area_struct *vma)
+ 
+ 	mutex_lock(&q->mmap_lock);
+ 
+-	if (vb2_fileio_is_active(q)) {
+-		dprintk(q, 1, "mmap: file io in progress\n");
+-		ret = -EBUSY;
+-		goto unlock;
+-	}
+-
+ 	/*
+-	 * Find the plane corresponding to the offset passed by userspace.
++	 * Find the plane corresponding to the offset passed by userspace. This
++	 * will return an error if not MEMORY_MMAP or file I/O is in progress.
+ 	 */
+ 	ret = __find_plane_by_offset(q, off, &buffer, &plane);
+ 	if (ret)
+@@ -2305,22 +2346,25 @@ unsigned long vb2_get_unmapped_area(struct vb2_queue *q,
+ 	void *vaddr;
+ 	int ret;
+ 
+-	if (q->memory != VB2_MEMORY_MMAP) {
+-		dprintk(q, 1, "queue is not currently set up for mmap\n");
+-		return -EINVAL;
+-	}
++	mutex_lock(&q->mmap_lock);
+ 
+ 	/*
+-	 * Find the plane corresponding to the offset passed by userspace.
++	 * Find the plane corresponding to the offset passed by userspace. This
++	 * will return an error if not MEMORY_MMAP or file I/O is in progress.
+ 	 */
+ 	ret = __find_plane_by_offset(q, off, &buffer, &plane);
+ 	if (ret)
+-		return ret;
++		goto unlock;
+ 
+ 	vb = q->bufs[buffer];
+ 
+ 	vaddr = vb2_plane_vaddr(vb, plane);
++	mutex_unlock(&q->mmap_lock);
+ 	return vaddr ? (unsigned long)vaddr : -EINVAL;
++
++unlock:
++	mutex_unlock(&q->mmap_lock);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(vb2_get_unmapped_area);
+ #endif
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index 003c32fed3f75..942d0005c55e8 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -145,6 +145,8 @@ bool v4l2_valid_dv_timings(const struct v4l2_dv_timings *t,
+ 	const struct v4l2_bt_timings *bt = &t->bt;
+ 	const struct v4l2_bt_timings_cap *cap = &dvcap->bt;
+ 	u32 caps = cap->capabilities;
++	const u32 max_vert = 10240;
++	u32 max_hor = 3 * bt->width;
+ 
+ 	if (t->type != V4L2_DV_BT_656_1120)
+ 		return false;
+@@ -166,14 +168,20 @@ bool v4l2_valid_dv_timings(const struct v4l2_dv_timings *t,
+ 	if (!bt->interlaced &&
+ 	    (bt->il_vbackporch || bt->il_vsync || bt->il_vfrontporch))
+ 		return false;
+-	if (bt->hfrontporch > 2 * bt->width ||
+-	    bt->hsync > 1024 || bt->hbackporch > 1024)
++	/*
++	 * Some video receivers cannot properly separate the frontporch,
++	 * backporch and sync values, and instead they only have the total
++	 * blanking. That can be assigned to any of these three fields.
++	 * So just check that none of these are way out of range.
++	 */
++	if (bt->hfrontporch > max_hor ||
++	    bt->hsync > max_hor || bt->hbackporch > max_hor)
+ 		return false;
+-	if (bt->vfrontporch > 4096 ||
+-	    bt->vsync > 128 || bt->vbackporch > 4096)
++	if (bt->vfrontporch > max_vert ||
++	    bt->vsync > max_vert || bt->vbackporch > max_vert)
+ 		return false;
+-	if (bt->interlaced && (bt->il_vfrontporch > 4096 ||
+-	    bt->il_vsync > 128 || bt->il_vbackporch > 4096))
++	if (bt->interlaced && (bt->il_vfrontporch > max_vert ||
++	    bt->il_vsync > max_vert || bt->il_vbackporch > max_vert))
+ 		return false;
+ 	return fnc == NULL || fnc(t, fnc_handle);
+ }
+diff --git a/drivers/net/can/usb/esd_usb2.c b/drivers/net/can/usb/esd_usb2.c
+index 8847942a8d97e..73c5343e609bc 100644
+--- a/drivers/net/can/usb/esd_usb2.c
++++ b/drivers/net/can/usb/esd_usb2.c
+@@ -227,6 +227,10 @@ static void esd_usb2_rx_event(struct esd_usb2_net_priv *priv,
+ 		u8 rxerr = msg->msg.rx.data[2];
+ 		u8 txerr = msg->msg.rx.data[3];
+ 
++		netdev_dbg(priv->netdev,
++			   "CAN_ERR_EV_EXT: dlc=%#02x state=%02x ecc=%02x rec=%02x tec=%02x\n",
++			   msg->msg.rx.dlc, state, ecc, rxerr, txerr);
++
+ 		skb = alloc_can_err_skb(priv->netdev, &cf);
+ 		if (skb == NULL) {
+ 			stats->rx_dropped++;
+@@ -253,6 +257,8 @@ static void esd_usb2_rx_event(struct esd_usb2_net_priv *priv,
+ 				break;
+ 			default:
+ 				priv->can.state = CAN_STATE_ERROR_ACTIVE;
++				txerr = 0;
++				rxerr = 0;
+ 				break;
+ 			}
+ 		} else {
+diff --git a/drivers/net/dsa/sja1105/sja1105_devlink.c b/drivers/net/dsa/sja1105/sja1105_devlink.c
+index ec2ac91abcfa4..8e3d185c84601 100644
+--- a/drivers/net/dsa/sja1105/sja1105_devlink.c
++++ b/drivers/net/dsa/sja1105/sja1105_devlink.c
+@@ -95,6 +95,8 @@ static int sja1105_setup_devlink_regions(struct dsa_switch *ds)
+ 		if (IS_ERR(region)) {
+ 			while (--i >= 0)
+ 				dsa_devlink_region_destroy(priv->regions[i]);
++
++			kfree(priv->regions);
+ 			return PTR_ERR(region);
+ 		}
+ 
+diff --git a/drivers/net/ethernet/aeroflex/greth.c b/drivers/net/ethernet/aeroflex/greth.c
+index f4f50b3a472e1..0d56cb4f5dd9b 100644
+--- a/drivers/net/ethernet/aeroflex/greth.c
++++ b/drivers/net/ethernet/aeroflex/greth.c
+@@ -258,6 +258,7 @@ static int greth_init_rings(struct greth_private *greth)
+ 			if (dma_mapping_error(greth->dev, dma_addr)) {
+ 				if (netif_msg_ifup(greth))
+ 					dev_err(greth->dev, "Could not create initial DMA mapping\n");
++				dev_kfree_skb(skb);
+ 				goto cleanup;
+ 			}
+ 			greth->rx_skbuff[i] = skb;
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+index c00f1a7ffc15f..488da767cfdf3 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+@@ -2258,7 +2258,7 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	err = register_netdev(netdev);
+ 	if (err) {
+ 		dev_err(dev, "Failed to register netdevice\n");
+-		goto err_unregister_interrupts;
++		goto err_destroy_workqueue;
+ 	}
+ 
+ 	nic->msg_enable = debug;
+@@ -2267,6 +2267,8 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	return 0;
+ 
++err_destroy_workqueue:
++	destroy_workqueue(nic->nicvf_rx_mode_wq);
+ err_unregister_interrupts:
+ 	nicvf_unregister_interrupts(nic);
+ err_free_netdev:
+diff --git a/drivers/net/ethernet/hisilicon/hisi_femac.c b/drivers/net/ethernet/hisilicon/hisi_femac.c
+index 57c3bc4f70895..c16dfd8693639 100644
+--- a/drivers/net/ethernet/hisilicon/hisi_femac.c
++++ b/drivers/net/ethernet/hisilicon/hisi_femac.c
+@@ -283,7 +283,7 @@ static int hisi_femac_rx(struct net_device *dev, int limit)
+ 		skb->protocol = eth_type_trans(skb, dev);
+ 		napi_gro_receive(&priv->napi, skb);
+ 		dev->stats.rx_packets++;
+-		dev->stats.rx_bytes += skb->len;
++		dev->stats.rx_bytes += len;
+ next:
+ 		pos = (pos + 1) % rxq->num;
+ 		if (rx_pkts_num >= limit)
+diff --git a/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c b/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c
+index 8b2bf85039f16..43f3146caf07e 100644
+--- a/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c
++++ b/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c
+@@ -550,7 +550,7 @@ static int hix5hd2_rx(struct net_device *dev, int limit)
+ 		skb->protocol = eth_type_trans(skb, dev);
+ 		napi_gro_receive(&priv->napi, skb);
+ 		dev->stats.rx_packets++;
+-		dev->stats.rx_bytes += skb->len;
++		dev->stats.rx_bytes += len;
+ next:
+ 		pos = dma_ring_incr(pos, RX_DESC_NUM);
+ 	}
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index d0c4de0231120..ae0c9aaab48db 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -5937,9 +5937,9 @@ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb,
+ 		e1000_tx_queue(tx_ring, tx_flags, count);
+ 		/* Make sure there is space in the ring for the next send. */
+ 		e1000_maybe_stop_tx(tx_ring,
+-				    (MAX_SKB_FRAGS *
++				    ((MAX_SKB_FRAGS + 1) *
+ 				     DIV_ROUND_UP(PAGE_SIZE,
+-						  adapter->tx_fifo_limit) + 2));
++						  adapter->tx_fifo_limit) + 4));
+ 
+ 		if (!netdev_xmit_more() ||
+ 		    netif_xmit_stopped(netdev_get_tx_queue(netdev, 0))) {
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 144c4824b5e80..520929f4d535f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -4234,11 +4234,7 @@ static int i40e_check_fdir_input_set(struct i40e_vsi *vsi,
+ 			return -EOPNOTSUPP;
+ 
+ 		/* First 4 bytes of L4 header */
+-		if (usr_ip4_spec->l4_4_bytes == htonl(0xFFFFFFFF))
+-			new_mask |= I40E_L4_SRC_MASK | I40E_L4_DST_MASK;
+-		else if (!usr_ip4_spec->l4_4_bytes)
+-			new_mask &= ~(I40E_L4_SRC_MASK | I40E_L4_DST_MASK);
+-		else
++		if (usr_ip4_spec->l4_4_bytes)
+ 			return -EOPNOTSUPP;
+ 
+ 		/* Filtering on Type of Service is not supported. */
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index d7ddf9239e512..2c60d2a933308 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -10065,6 +10065,21 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi)
+ 	return 0;
+ }
+ 
++/**
++ * i40e_clean_xps_state - clean xps state for every tx_ring
++ * @vsi: ptr to the VSI
++ **/
++static void i40e_clean_xps_state(struct i40e_vsi *vsi)
++{
++	int i;
++
++	if (vsi->tx_rings)
++		for (i = 0; i < vsi->num_queue_pairs; i++)
++			if (vsi->tx_rings[i])
++				clear_bit(__I40E_TX_XPS_INIT_DONE,
++					  vsi->tx_rings[i]->state);
++}
++
+ /**
+  * i40e_prep_for_reset - prep for the core to reset
+  * @pf: board private structure
+@@ -10096,8 +10111,10 @@ static void i40e_prep_for_reset(struct i40e_pf *pf, bool lock_acquired)
+ 		rtnl_unlock();
+ 
+ 	for (v = 0; v < pf->num_alloc_vsi; v++) {
+-		if (pf->vsi[v])
++		if (pf->vsi[v]) {
++			i40e_clean_xps_state(pf->vsi[v]);
+ 			pf->vsi[v]->seid = 0;
++		}
+ 	}
+ 
+ 	i40e_shutdown_adminq(&pf->hw);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 381b28a087467..bb2a79b70c3ae 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1525,6 +1525,7 @@ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
+ 	i40e_cleanup_reset_vf(vf);
+ 
+ 	i40e_flush(hw);
++	usleep_range(20000, 40000);
+ 	clear_bit(I40E_VF_STATE_RESETTING, &vf->vf_states);
+ 
+ 	return true;
+@@ -1648,6 +1649,7 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
+ 	}
+ 
+ 	i40e_flush(hw);
++	usleep_range(20000, 40000);
+ 	clear_bit(__I40E_VF_DISABLE, pf->state);
+ 
+ 	return true;
+diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+index 28baf203459a8..5e3b0a5843a8e 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
++++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+@@ -1413,6 +1413,8 @@ static int igb_intr_test(struct igb_adapter *adapter, u64 *data)
+ 			*data = 1;
+ 			return -1;
+ 		}
++		wr32(E1000_IVAR_MISC, E1000_IVAR_VALID << 8);
++		wr32(E1000_EIMS, BIT(0));
+ 	} else if (adapter->flags & IGB_FLAG_HAS_MSI) {
+ 		shared_int = false;
+ 		if (request_irq(irq,
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 74e266c0b8e10..f5567d485e91a 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -4140,7 +4140,7 @@ static void mvneta_percpu_elect(struct mvneta_port *pp)
+ 	/* Use the cpu associated to the rxq when it is online, in all
+ 	 * the other cases, use the cpu 0 which can't be offline.
+ 	 */
+-	if (cpu_online(pp->rxq_def))
++	if (pp->rxq_def < nr_cpu_ids && cpu_online(pp->rxq_def))
+ 		elected_cpu = pp->rxq_def;
+ 
+ 	max_cpu = num_present_cpus();
+diff --git a/drivers/net/ethernet/microchip/encx24j600-regmap.c b/drivers/net/ethernet/microchip/encx24j600-regmap.c
+index 81a8ccca7e5e0..5693784eec5bc 100644
+--- a/drivers/net/ethernet/microchip/encx24j600-regmap.c
++++ b/drivers/net/ethernet/microchip/encx24j600-regmap.c
+@@ -359,7 +359,7 @@ static int regmap_encx24j600_phy_reg_read(void *context, unsigned int reg,
+ 		goto err_out;
+ 
+ 	usleep_range(26, 100);
+-	while ((ret = regmap_read(ctx->regmap, MISTAT, &mistat) != 0) &&
++	while (((ret = regmap_read(ctx->regmap, MISTAT, &mistat)) == 0) &&
+ 	       (mistat & BUSY))
+ 		cpu_relax();
+ 
+@@ -397,7 +397,7 @@ static int regmap_encx24j600_phy_reg_write(void *context, unsigned int reg,
+ 		goto err_out;
+ 
+ 	usleep_range(26, 100);
+-	while ((ret = regmap_read(ctx->regmap, MISTAT, &mistat) != 0) &&
++	while (((ret = regmap_read(ctx->regmap, MISTAT, &mistat)) == 0) &&
+ 	       (mistat & BUSY))
+ 		cpu_relax();
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index f70d8d1ce3298..1ed74cfb61fc5 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -108,10 +108,10 @@ static struct stmmac_axi *stmmac_axi_setup(struct platform_device *pdev)
+ 
+ 	axi->axi_lpi_en = of_property_read_bool(np, "snps,lpi_en");
+ 	axi->axi_xit_frm = of_property_read_bool(np, "snps,xit_frm");
+-	axi->axi_kbbe = of_property_read_bool(np, "snps,axi_kbbe");
+-	axi->axi_fb = of_property_read_bool(np, "snps,axi_fb");
+-	axi->axi_mb = of_property_read_bool(np, "snps,axi_mb");
+-	axi->axi_rb =  of_property_read_bool(np, "snps,axi_rb");
++	axi->axi_kbbe = of_property_read_bool(np, "snps,kbbe");
++	axi->axi_fb = of_property_read_bool(np, "snps,fb");
++	axi->axi_mb = of_property_read_bool(np, "snps,mb");
++	axi->axi_rb =  of_property_read_bool(np, "snps,rb");
+ 
+ 	if (of_property_read_u32(np, "snps,wr_osr_lmt", &axi->axi_wr_osr_lmt))
+ 		axi->axi_wr_osr_lmt = 1;
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index fd9f33c833fa3..95ef3b6f98dd3 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -926,7 +926,7 @@ static int ca8210_spi_transfer(
+ 
+ 	dev_dbg(&spi->dev, "%s called\n", __func__);
+ 
+-	cas_ctl = kmalloc(sizeof(*cas_ctl), GFP_ATOMIC);
++	cas_ctl = kzalloc(sizeof(*cas_ctl), GFP_ATOMIC);
+ 	if (!cas_ctl)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/net/ieee802154/cc2520.c b/drivers/net/ieee802154/cc2520.c
+index 4517517215f2b..a8369bfa4050b 100644
+--- a/drivers/net/ieee802154/cc2520.c
++++ b/drivers/net/ieee802154/cc2520.c
+@@ -970,7 +970,7 @@ static int cc2520_hw_init(struct cc2520_private *priv)
+ 
+ 		if (timeout-- <= 0) {
+ 			dev_err(&priv->spi->dev, "oscillator start failed!\n");
+-			return ret;
++			return -ETIMEDOUT;
+ 		}
+ 		udelay(1);
+ 	} while (!(status & CC2520_STATUS_XOSC32M_STABLE));
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 3e564158c401b..eb029456b5946 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -3680,6 +3680,7 @@ static const struct nla_policy macsec_rtnl_policy[IFLA_MACSEC_MAX + 1] = {
+ 	[IFLA_MACSEC_SCB] = { .type = NLA_U8 },
+ 	[IFLA_MACSEC_REPLAY_PROTECT] = { .type = NLA_U8 },
+ 	[IFLA_MACSEC_VALIDATION] = { .type = NLA_U8 },
++	[IFLA_MACSEC_OFFLOAD] = { .type = NLA_U8 },
+ };
+ 
+ static void macsec_free_netdev(struct net_device *dev)
+diff --git a/drivers/net/plip/plip.c b/drivers/net/plip/plip.c
+index 5a0e5a8a8917b..22f7db87ed21a 100644
+--- a/drivers/net/plip/plip.c
++++ b/drivers/net/plip/plip.c
+@@ -444,12 +444,12 @@ plip_bh_timeout_error(struct net_device *dev, struct net_local *nl,
+ 	}
+ 	rcv->state = PLIP_PK_DONE;
+ 	if (rcv->skb) {
+-		kfree_skb(rcv->skb);
++		dev_kfree_skb_irq(rcv->skb);
+ 		rcv->skb = NULL;
+ 	}
+ 	snd->state = PLIP_PK_DONE;
+ 	if (snd->skb) {
+-		dev_kfree_skb(snd->skb);
++		dev_consume_skb_irq(snd->skb);
+ 		snd->skb = NULL;
+ 	}
+ 	spin_unlock_irq(&nl->lock);
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 7313e6e03c125..bce151e3706a0 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1352,6 +1352,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x0489, 0xe0b4, 0)},	/* Foxconn T77W968 LTE */
+ 	{QMI_FIXED_INTF(0x0489, 0xe0b5, 0)},	/* Foxconn T77W968 LTE with eSIM support*/
+ 	{QMI_FIXED_INTF(0x2692, 0x9025, 4)},    /* Cellient MPL200 (rebranded Qualcomm 05c6:9025) */
++	{QMI_QUIRK_SET_DTR(0x1546, 0x1342, 4)},	/* u-blox LARA-L6 */
+ 
+ 	/* 4. Gobi 1000 devices */
+ 	{QMI_GOBI1K_DEVICE(0x05c6, 0x9212)},	/* Acer Gobi Modem Device */
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index 6678a734cc4d3..43a4bcdd92c1d 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -1356,6 +1356,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
+ 	};
+ 	u32 num_pkts = 0;
+ 	bool skip_page_frags = false;
++	bool encap_lro = false;
+ 	struct Vmxnet3_RxCompDesc *rcd;
+ 	struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx;
+ 	u16 segCnt = 0, mss = 0;
+@@ -1496,13 +1497,18 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
+ 			if (VMXNET3_VERSION_GE_2(adapter) &&
+ 			    rcd->type == VMXNET3_CDTYPE_RXCOMP_LRO) {
+ 				struct Vmxnet3_RxCompDescExt *rcdlro;
++				union Vmxnet3_GenericDesc *gdesc;
++
+ 				rcdlro = (struct Vmxnet3_RxCompDescExt *)rcd;
++				gdesc = (union Vmxnet3_GenericDesc *)rcd;
+ 
+ 				segCnt = rcdlro->segCnt;
+ 				WARN_ON_ONCE(segCnt == 0);
+ 				mss = rcdlro->mss;
+ 				if (unlikely(segCnt <= 1))
+ 					segCnt = 0;
++				encap_lro = (le32_to_cpu(gdesc->dword[0]) &
++					(1UL << VMXNET3_RCD_HDR_INNER_SHIFT));
+ 			} else {
+ 				segCnt = 0;
+ 			}
+@@ -1570,7 +1576,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
+ 			vmxnet3_rx_csum(adapter, skb,
+ 					(union Vmxnet3_GenericDesc *)rcd);
+ 			skb->protocol = eth_type_trans(skb, adapter->netdev);
+-			if (!rcd->tcp ||
++			if ((!rcd->tcp && !encap_lro) ||
+ 			    !(adapter->netdev->features & NETIF_F_LRO))
+ 				goto not_lro;
+ 
+@@ -1579,7 +1585,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
+ 					SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
+ 				skb_shinfo(skb)->gso_size = mss;
+ 				skb_shinfo(skb)->gso_segs = segCnt;
+-			} else if (segCnt != 0 || skb->len > mtu) {
++			} else if ((segCnt != 0 || skb->len > mtu) && !encap_lro) {
+ 				u32 hlen;
+ 
+ 				hlen = vmxnet3_get_hdr_len(adapter, skb,
+@@ -1608,6 +1614,7 @@ not_lro:
+ 				napi_gro_receive(&rq->napi, skb);
+ 
+ 			ctx->skb = NULL;
++			encap_lro = false;
+ 			num_pkts++;
+ 		}
+ 
+diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
+index 6a9178896c909..1ba9749692164 100644
+--- a/drivers/net/xen-netback/common.h
++++ b/drivers/net/xen-netback/common.h
+@@ -48,7 +48,6 @@
+ #include <linux/debugfs.h>
+ 
+ typedef unsigned int pending_ring_idx_t;
+-#define INVALID_PENDING_RING_IDX (~0U)
+ 
+ struct pending_tx_info {
+ 	struct xen_netif_tx_request req; /* tx request */
+@@ -82,8 +81,6 @@ struct xenvif_rx_meta {
+ /* Discriminate from any valid pending_idx value. */
+ #define INVALID_PENDING_IDX 0xFFFF
+ 
+-#define MAX_BUFFER_OFFSET XEN_PAGE_SIZE
+-
+ #define MAX_PENDING_REQS XEN_NETIF_TX_RING_SIZE
+ 
+ /* The maximum number of frags is derived from the size of a grant (same
+@@ -367,11 +364,6 @@ void xenvif_free(struct xenvif *vif);
+ int xenvif_xenbus_init(void);
+ void xenvif_xenbus_fini(void);
+ 
+-int xenvif_schedulable(struct xenvif *vif);
+-
+-int xenvif_queue_stopped(struct xenvif_queue *queue);
+-void xenvif_wake_queue(struct xenvif_queue *queue);
+-
+ /* (Un)Map communication rings. */
+ void xenvif_unmap_frontend_data_rings(struct xenvif_queue *queue);
+ int xenvif_map_frontend_data_rings(struct xenvif_queue *queue,
+@@ -394,17 +386,13 @@ int xenvif_dealloc_kthread(void *data);
+ irqreturn_t xenvif_ctrl_irq_fn(int irq, void *data);
+ 
+ bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread);
+-void xenvif_rx_action(struct xenvif_queue *queue);
+-void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb);
++bool xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb);
+ 
+ void xenvif_carrier_on(struct xenvif *vif);
+ 
+ /* Callback from stack when TX packet can be released */
+ void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
+ 
+-/* Unmap a pending page and release it back to the guest */
+-void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);
+-
+ static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
+ {
+ 	return MAX_PENDING_REQS -
+diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
+index 7ce9807fc24c5..97cf5bc48902a 100644
+--- a/drivers/net/xen-netback/interface.c
++++ b/drivers/net/xen-netback/interface.c
+@@ -70,7 +70,7 @@ void xenvif_skb_zerocopy_complete(struct xenvif_queue *queue)
+ 	wake_up(&queue->dealloc_wq);
+ }
+ 
+-int xenvif_schedulable(struct xenvif *vif)
++static int xenvif_schedulable(struct xenvif *vif)
+ {
+ 	return netif_running(vif->dev) &&
+ 		test_bit(VIF_STATUS_CONNECTED, &vif->status) &&
+@@ -178,20 +178,6 @@ irqreturn_t xenvif_interrupt(int irq, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
+-int xenvif_queue_stopped(struct xenvif_queue *queue)
+-{
+-	struct net_device *dev = queue->vif->dev;
+-	unsigned int id = queue->id;
+-	return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
+-}
+-
+-void xenvif_wake_queue(struct xenvif_queue *queue)
+-{
+-	struct net_device *dev = queue->vif->dev;
+-	unsigned int id = queue->id;
+-	netif_tx_wake_queue(netdev_get_tx_queue(dev, id));
+-}
+-
+ static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
+ 			       struct net_device *sb_dev)
+ {
+@@ -269,14 +255,16 @@ xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	if (vif->hash.alg == XEN_NETIF_CTRL_HASH_ALGORITHM_NONE)
+ 		skb_clear_hash(skb);
+ 
+-	xenvif_rx_queue_tail(queue, skb);
++	if (!xenvif_rx_queue_tail(queue, skb))
++		goto drop;
++
+ 	xenvif_kick_thread(queue);
+ 
+ 	return NETDEV_TX_OK;
+ 
+  drop:
+ 	vif->dev->stats.tx_dropped++;
+-	dev_kfree_skb(skb);
++	dev_kfree_skb_any(skb);
+ 	return NETDEV_TX_OK;
+ }
+ 
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index b0cbc7fead745..f9373a88cf37c 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -112,6 +112,8 @@ static void make_tx_response(struct xenvif_queue *queue,
+ 			     s8       st);
+ static void push_tx_responses(struct xenvif_queue *queue);
+ 
++static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);
++
+ static inline int tx_work_todo(struct xenvif_queue *queue);
+ 
+ static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
+@@ -330,10 +332,13 @@ static int xenvif_count_requests(struct xenvif_queue *queue,
+ 
+ 
+ struct xenvif_tx_cb {
+-	u16 pending_idx;
++	u16 copy_pending_idx[XEN_NETBK_LEGACY_SLOTS_MAX + 1];
++	u8 copy_count;
+ };
+ 
+ #define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb)
++#define copy_pending_idx(skb, i) (XENVIF_TX_CB(skb)->copy_pending_idx[i])
++#define copy_count(skb) (XENVIF_TX_CB(skb)->copy_count)
+ 
+ static inline void xenvif_tx_create_map_op(struct xenvif_queue *queue,
+ 					   u16 pending_idx,
+@@ -368,31 +373,93 @@ static inline struct sk_buff *xenvif_alloc_skb(unsigned int size)
+ 	return skb;
+ }
+ 
+-static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif_queue *queue,
+-							struct sk_buff *skb,
+-							struct xen_netif_tx_request *txp,
+-							struct gnttab_map_grant_ref *gop,
+-							unsigned int frag_overflow,
+-							struct sk_buff *nskb)
++static void xenvif_get_requests(struct xenvif_queue *queue,
++				struct sk_buff *skb,
++				struct xen_netif_tx_request *first,
++				struct xen_netif_tx_request *txfrags,
++			        unsigned *copy_ops,
++			        unsigned *map_ops,
++				unsigned int frag_overflow,
++				struct sk_buff *nskb,
++				unsigned int extra_count,
++				unsigned int data_len)
+ {
+ 	struct skb_shared_info *shinfo = skb_shinfo(skb);
+ 	skb_frag_t *frags = shinfo->frags;
+-	u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx;
+-	int start;
++	u16 pending_idx;
+ 	pending_ring_idx_t index;
+ 	unsigned int nr_slots;
++	struct gnttab_copy *cop = queue->tx_copy_ops + *copy_ops;
++	struct gnttab_map_grant_ref *gop = queue->tx_map_ops + *map_ops;
++	struct xen_netif_tx_request *txp = first;
++
++	nr_slots = shinfo->nr_frags + 1;
++
++	copy_count(skb) = 0;
+ 
+-	nr_slots = shinfo->nr_frags;
++	/* Create copy ops for exactly data_len bytes into the skb head. */
++	__skb_put(skb, data_len);
++	while (data_len > 0) {
++		int amount = data_len > txp->size ? txp->size : data_len;
+ 
+-	/* Skip first skb fragment if it is on same page as header fragment. */
+-	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
++		cop->source.u.ref = txp->gref;
++		cop->source.domid = queue->vif->domid;
++		cop->source.offset = txp->offset;
+ 
+-	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
+-	     shinfo->nr_frags++, txp++, gop++) {
++		cop->dest.domid = DOMID_SELF;
++		cop->dest.offset = (offset_in_page(skb->data +
++						   skb_headlen(skb) -
++						   data_len)) & ~XEN_PAGE_MASK;
++		cop->dest.u.gmfn = virt_to_gfn(skb->data + skb_headlen(skb)
++				               - data_len);
++
++		cop->len = amount;
++		cop->flags = GNTCOPY_source_gref;
++
++		index = pending_index(queue->pending_cons);
++		pending_idx = queue->pending_ring[index];
++		callback_param(queue, pending_idx).ctx = NULL;
++		copy_pending_idx(skb, copy_count(skb)) = pending_idx;
++		copy_count(skb)++;
++
++		cop++;
++		data_len -= amount;
++
++		if (amount == txp->size) {
++			/* The copy op covered the full tx_request */
++
++			memcpy(&queue->pending_tx_info[pending_idx].req,
++			       txp, sizeof(*txp));
++			queue->pending_tx_info[pending_idx].extra_count =
++				(txp == first) ? extra_count : 0;
++
++			if (txp == first)
++				txp = txfrags;
++			else
++				txp++;
++			queue->pending_cons++;
++			nr_slots--;
++		} else {
++			/* The copy op partially covered the tx_request.
++			 * The remainder will be mapped.
++			 */
++			txp->offset += amount;
++			txp->size -= amount;
++		}
++	}
++
++	for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots;
++	     shinfo->nr_frags++, gop++) {
+ 		index = pending_index(queue->pending_cons++);
+ 		pending_idx = queue->pending_ring[index];
+-		xenvif_tx_create_map_op(queue, pending_idx, txp, 0, gop);
++		xenvif_tx_create_map_op(queue, pending_idx, txp,
++				        txp == first ? extra_count : 0, gop);
+ 		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
++
++		if (txp == first)
++			txp = txfrags;
++		else
++			txp++;
+ 	}
+ 
+ 	if (frag_overflow) {
+@@ -413,7 +480,8 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif_queue *que
+ 		skb_shinfo(skb)->frag_list = nskb;
+ 	}
+ 
+-	return gop;
++	(*copy_ops) = cop - queue->tx_copy_ops;
++	(*map_ops) = gop - queue->tx_map_ops;
+ }
+ 
+ static inline void xenvif_grant_handle_set(struct xenvif_queue *queue,
+@@ -449,7 +517,7 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
+ 			       struct gnttab_copy **gopp_copy)
+ {
+ 	struct gnttab_map_grant_ref *gop_map = *gopp_map;
+-	u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx;
++	u16 pending_idx;
+ 	/* This always points to the shinfo of the skb being checked, which
+ 	 * could be either the first or the one on the frag_list
+ 	 */
+@@ -460,24 +528,37 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
+ 	struct skb_shared_info *first_shinfo = NULL;
+ 	int nr_frags = shinfo->nr_frags;
+ 	const bool sharedslot = nr_frags &&
+-				frag_get_pending_idx(&shinfo->frags[0]) == pending_idx;
+-	int i, err;
++				frag_get_pending_idx(&shinfo->frags[0]) ==
++				    copy_pending_idx(skb, copy_count(skb) - 1);
++	int i, err = 0;
+ 
+-	/* Check status of header. */
+-	err = (*gopp_copy)->status;
+-	if (unlikely(err)) {
+-		if (net_ratelimit())
+-			netdev_dbg(queue->vif->dev,
+-				   "Grant copy of header failed! status: %d pending_idx: %u ref: %u\n",
+-				   (*gopp_copy)->status,
+-				   pending_idx,
+-				   (*gopp_copy)->source.u.ref);
+-		/* The first frag might still have this slot mapped */
+-		if (!sharedslot)
+-			xenvif_idx_release(queue, pending_idx,
+-					   XEN_NETIF_RSP_ERROR);
++	for (i = 0; i < copy_count(skb); i++) {
++		int newerr;
++
++		/* Check status of header. */
++		pending_idx = copy_pending_idx(skb, i);
++
++		newerr = (*gopp_copy)->status;
++		if (likely(!newerr)) {
++			/* The first frag might still have this slot mapped */
++			if (i < copy_count(skb) - 1 || !sharedslot)
++				xenvif_idx_release(queue, pending_idx,
++						   XEN_NETIF_RSP_OKAY);
++		} else {
++			err = newerr;
++			if (net_ratelimit())
++				netdev_dbg(queue->vif->dev,
++					   "Grant copy of header failed! status: %d pending_idx: %u ref: %u\n",
++					   (*gopp_copy)->status,
++					   pending_idx,
++					   (*gopp_copy)->source.u.ref);
++			/* The first frag might still have this slot mapped */
++			if (i < copy_count(skb) - 1 || !sharedslot)
++				xenvif_idx_release(queue, pending_idx,
++						   XEN_NETIF_RSP_ERROR);
++		}
++		(*gopp_copy)++;
+ 	}
+-	(*gopp_copy)++;
+ 
+ check_frags:
+ 	for (i = 0; i < nr_frags; i++, gop_map++) {
+@@ -524,14 +605,6 @@ check_frags:
+ 		if (err)
+ 			continue;
+ 
+-		/* First error: if the header haven't shared a slot with the
+-		 * first frag, release it as well.
+-		 */
+-		if (!sharedslot)
+-			xenvif_idx_release(queue,
+-					   XENVIF_TX_CB(skb)->pending_idx,
+-					   XEN_NETIF_RSP_OKAY);
+-
+ 		/* Invalidate preceding fragments of this skb. */
+ 		for (j = 0; j < i; j++) {
+ 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
+@@ -801,7 +874,6 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
+ 				     unsigned *copy_ops,
+ 				     unsigned *map_ops)
+ {
+-	struct gnttab_map_grant_ref *gop = queue->tx_map_ops;
+ 	struct sk_buff *skb, *nskb;
+ 	int ret;
+ 	unsigned int frag_overflow;
+@@ -883,8 +955,12 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
+ 			continue;
+ 		}
+ 
++		data_len = (txreq.size > XEN_NETBACK_TX_COPY_LEN) ?
++			XEN_NETBACK_TX_COPY_LEN : txreq.size;
++
+ 		ret = xenvif_count_requests(queue, &txreq, extra_count,
+ 					    txfrags, work_to_do);
++
+ 		if (unlikely(ret < 0))
+ 			break;
+ 
+@@ -910,9 +986,8 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
+ 		index = pending_index(queue->pending_cons);
+ 		pending_idx = queue->pending_ring[index];
+ 
+-		data_len = (txreq.size > XEN_NETBACK_TX_COPY_LEN &&
+-			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
+-			XEN_NETBACK_TX_COPY_LEN : txreq.size;
++		if (ret >= XEN_NETBK_LEGACY_SLOTS_MAX - 1 && data_len < txreq.size)
++			data_len = txreq.size;
+ 
+ 		skb = xenvif_alloc_skb(data_len);
+ 		if (unlikely(skb == NULL)) {
+@@ -923,8 +998,6 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
+ 		}
+ 
+ 		skb_shinfo(skb)->nr_frags = ret;
+-		if (data_len < txreq.size)
+-			skb_shinfo(skb)->nr_frags++;
+ 		/* At this point shinfo->nr_frags is in fact the number of
+ 		 * slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
+ 		 */
+@@ -986,54 +1059,19 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
+ 					     type);
+ 		}
+ 
+-		XENVIF_TX_CB(skb)->pending_idx = pending_idx;
+-
+-		__skb_put(skb, data_len);
+-		queue->tx_copy_ops[*copy_ops].source.u.ref = txreq.gref;
+-		queue->tx_copy_ops[*copy_ops].source.domid = queue->vif->domid;
+-		queue->tx_copy_ops[*copy_ops].source.offset = txreq.offset;
+-
+-		queue->tx_copy_ops[*copy_ops].dest.u.gmfn =
+-			virt_to_gfn(skb->data);
+-		queue->tx_copy_ops[*copy_ops].dest.domid = DOMID_SELF;
+-		queue->tx_copy_ops[*copy_ops].dest.offset =
+-			offset_in_page(skb->data) & ~XEN_PAGE_MASK;
+-
+-		queue->tx_copy_ops[*copy_ops].len = data_len;
+-		queue->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref;
+-
+-		(*copy_ops)++;
+-
+-		if (data_len < txreq.size) {
+-			frag_set_pending_idx(&skb_shinfo(skb)->frags[0],
+-					     pending_idx);
+-			xenvif_tx_create_map_op(queue, pending_idx, &txreq,
+-						extra_count, gop);
+-			gop++;
+-		} else {
+-			frag_set_pending_idx(&skb_shinfo(skb)->frags[0],
+-					     INVALID_PENDING_IDX);
+-			memcpy(&queue->pending_tx_info[pending_idx].req,
+-			       &txreq, sizeof(txreq));
+-			queue->pending_tx_info[pending_idx].extra_count =
+-				extra_count;
+-		}
+-
+-		queue->pending_cons++;
+-
+-		gop = xenvif_get_requests(queue, skb, txfrags, gop,
+-				          frag_overflow, nskb);
++		xenvif_get_requests(queue, skb, &txreq, txfrags, copy_ops,
++				    map_ops, frag_overflow, nskb, extra_count,
++				    data_len);
+ 
+ 		__skb_queue_tail(&queue->tx_queue, skb);
+ 
+ 		queue->tx.req_cons = idx;
+ 
+-		if (((gop-queue->tx_map_ops) >= ARRAY_SIZE(queue->tx_map_ops)) ||
++		if ((*map_ops >= ARRAY_SIZE(queue->tx_map_ops)) ||
+ 		    (*copy_ops >= ARRAY_SIZE(queue->tx_copy_ops)))
+ 			break;
+ 	}
+ 
+-	(*map_ops) = gop - queue->tx_map_ops;
+ 	return;
+ }
+ 
+@@ -1112,9 +1150,8 @@ static int xenvif_tx_submit(struct xenvif_queue *queue)
+ 	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
+ 		struct xen_netif_tx_request *txp;
+ 		u16 pending_idx;
+-		unsigned data_len;
+ 
+-		pending_idx = XENVIF_TX_CB(skb)->pending_idx;
++		pending_idx = copy_pending_idx(skb, 0);
+ 		txp = &queue->pending_tx_info[pending_idx].req;
+ 
+ 		/* Check the remap error code. */
+@@ -1133,18 +1170,6 @@ static int xenvif_tx_submit(struct xenvif_queue *queue)
+ 			continue;
+ 		}
+ 
+-		data_len = skb->len;
+-		callback_param(queue, pending_idx).ctx = NULL;
+-		if (data_len < txp->size) {
+-			/* Append the packet payload as a fragment. */
+-			txp->offset += data_len;
+-			txp->size -= data_len;
+-		} else {
+-			/* Schedule a response immediately. */
+-			xenvif_idx_release(queue, pending_idx,
+-					   XEN_NETIF_RSP_OKAY);
+-		}
+-
+ 		if (txp->flags & XEN_NETTXF_csum_blank)
+ 			skb->ip_summed = CHECKSUM_PARTIAL;
+ 		else if (txp->flags & XEN_NETTXF_data_validated)
+@@ -1330,7 +1355,7 @@ static inline void xenvif_tx_dealloc_action(struct xenvif_queue *queue)
+ /* Called after netfront has transmitted */
+ int xenvif_tx_action(struct xenvif_queue *queue, int budget)
+ {
+-	unsigned nr_mops, nr_cops = 0;
++	unsigned nr_mops = 0, nr_cops = 0;
+ 	int work_done, ret;
+ 
+ 	if (unlikely(!tx_work_todo(queue)))
+@@ -1417,7 +1442,7 @@ static void push_tx_responses(struct xenvif_queue *queue)
+ 		notify_remote_via_irq(queue->tx_irq);
+ }
+ 
+-void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
++static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
+ {
+ 	int ret;
+ 	struct gnttab_unmap_grant_ref tx_unmap_op;
+diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
+index a0335407be423..0ba754ebc5baa 100644
+--- a/drivers/net/xen-netback/rx.c
++++ b/drivers/net/xen-netback/rx.c
+@@ -82,9 +82,10 @@ static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
+ 	return false;
+ }
+ 
+-void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
++bool xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
+ {
+ 	unsigned long flags;
++	bool ret = true;
+ 
+ 	spin_lock_irqsave(&queue->rx_queue.lock, flags);
+ 
+@@ -92,8 +93,7 @@ void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
+ 		struct net_device *dev = queue->vif->dev;
+ 
+ 		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+-		kfree_skb(skb);
+-		queue->vif->dev->stats.rx_dropped++;
++		ret = false;
+ 	} else {
+ 		if (skb_queue_empty(&queue->rx_queue))
+ 			xenvif_update_needed_slots(queue, skb);
+@@ -104,6 +104,8 @@ void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
+ 	}
+ 
+ 	spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
++
++	return ret;
+ }
+ 
+ static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
+@@ -486,7 +488,7 @@ static void xenvif_rx_skb(struct xenvif_queue *queue)
+ 
+ #define RX_BATCH_SIZE 64
+ 
+-void xenvif_rx_action(struct xenvif_queue *queue)
++static void xenvif_rx_action(struct xenvif_queue *queue)
+ {
+ 	struct sk_buff_head completed_skbs;
+ 	unsigned int work_done = 0;
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 569f3c8e7b756..3d149890fa36e 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -1868,6 +1868,12 @@ static int netfront_resume(struct xenbus_device *dev)
+ 	netif_tx_unlock_bh(info->netdev);
+ 
+ 	xennet_disconnect_backend(info);
++
++	rtnl_lock();
++	if (info->queues)
++		xennet_destroy_queues(info);
++	rtnl_unlock();
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index d9c78fe85cb38..e162f1dfbafe9 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3092,10 +3092,6 @@ int nvme_init_identify(struct nvme_ctrl *ctrl)
+ 	if (!ctrl->identified) {
+ 		int i;
+ 
+-		ret = nvme_init_subsystem(ctrl, id);
+-		if (ret)
+-			goto out_free;
+-
+ 		/*
+ 		 * Check for quirks.  Quirk can depend on firmware version,
+ 		 * so, in principle, the set of quirks present can change
+@@ -3108,6 +3104,10 @@ int nvme_init_identify(struct nvme_ctrl *ctrl)
+ 			if (quirk_matches(id, &core_quirks[i]))
+ 				ctrl->quirks |= core_quirks[i].quirks;
+ 		}
++
++		ret = nvme_init_subsystem(ctrl, id);
++		if (ret)
++			goto out_free;
+ 	}
+ 	memcpy(ctrl->subsys->firmware_rev, id->fr,
+ 	       sizeof(ctrl->subsys->firmware_rev));
+diff --git a/drivers/regulator/slg51000-regulator.c b/drivers/regulator/slg51000-regulator.c
+index 75a941fb3c2bd..1b2eee95ad3f9 100644
+--- a/drivers/regulator/slg51000-regulator.c
++++ b/drivers/regulator/slg51000-regulator.c
+@@ -457,6 +457,8 @@ static int slg51000_i2c_probe(struct i2c_client *client)
+ 		chip->cs_gpiod = cs_gpiod;
+ 	}
+ 
++	usleep_range(10000, 11000);
++
+ 	i2c_set_clientdata(client, chip);
+ 	chip->chip_irq = client->irq;
+ 	chip->dev = dev;
+diff --git a/drivers/regulator/twl6030-regulator.c b/drivers/regulator/twl6030-regulator.c
+index 7c7e3648ea4bf..f3856750944f4 100644
+--- a/drivers/regulator/twl6030-regulator.c
++++ b/drivers/regulator/twl6030-regulator.c
+@@ -67,6 +67,7 @@ struct twlreg_info {
+ #define TWL6030_CFG_STATE_SLEEP	0x03
+ #define TWL6030_CFG_STATE_GRP_SHIFT	5
+ #define TWL6030_CFG_STATE_APP_SHIFT	2
++#define TWL6030_CFG_STATE_MASK		0x03
+ #define TWL6030_CFG_STATE_APP_MASK	(0x03 << TWL6030_CFG_STATE_APP_SHIFT)
+ #define TWL6030_CFG_STATE_APP(v)	(((v) & TWL6030_CFG_STATE_APP_MASK) >>\
+ 						TWL6030_CFG_STATE_APP_SHIFT)
+@@ -128,13 +129,14 @@ static int twl6030reg_is_enabled(struct regulator_dev *rdev)
+ 		if (grp < 0)
+ 			return grp;
+ 		grp &= P1_GRP_6030;
++		val = twlreg_read(info, TWL_MODULE_PM_RECEIVER, VREG_STATE);
++		val = TWL6030_CFG_STATE_APP(val);
+ 	} else {
++		val = twlreg_read(info, TWL_MODULE_PM_RECEIVER, VREG_STATE);
++		val &= TWL6030_CFG_STATE_MASK;
+ 		grp = 1;
+ 	}
+ 
+-	val = twlreg_read(info, TWL_MODULE_PM_RECEIVER, VREG_STATE);
+-	val = TWL6030_CFG_STATE_APP(val);
+-
+ 	return grp && (val == TWL6030_CFG_STATE_ON);
+ }
+ 
+@@ -187,7 +189,12 @@ static int twl6030reg_get_status(struct regulator_dev *rdev)
+ 
+ 	val = twlreg_read(info, TWL_MODULE_PM_RECEIVER, VREG_STATE);
+ 
+-	switch (TWL6030_CFG_STATE_APP(val)) {
++	if (info->features & TWL6032_SUBCLASS)
++		val &= TWL6030_CFG_STATE_MASK;
++	else
++		val = TWL6030_CFG_STATE_APP(val);
++
++	switch (val) {
+ 	case TWL6030_CFG_STATE_ON:
+ 		return REGULATOR_STATUS_NORMAL;
+ 
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index 58c6382a2807c..d4f6c4dd42c47 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -222,6 +222,8 @@ static inline void cmos_write_bank2(unsigned char val, unsigned char addr)
+ 
+ static int cmos_read_time(struct device *dev, struct rtc_time *t)
+ {
++	int ret;
++
+ 	/*
+ 	 * If pm_trace abused the RTC for storage, set the timespec to 0,
+ 	 * which tells the caller that this RTC value is unusable.
+@@ -229,29 +231,64 @@ static int cmos_read_time(struct device *dev, struct rtc_time *t)
+ 	if (!pm_trace_rtc_valid())
+ 		return -EIO;
+ 
+-	/* REVISIT:  if the clock has a "century" register, use
+-	 * that instead of the heuristic in mc146818_get_time().
+-	 * That'll make Y3K compatility (year > 2070) easy!
+-	 */
+-	mc146818_get_time(t);
++	ret = mc146818_get_time(t);
++	if (ret < 0) {
++		dev_err_ratelimited(dev, "unable to read current time\n");
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+ static int cmos_set_time(struct device *dev, struct rtc_time *t)
+ {
+-	/* REVISIT:  set the "century" register if available
+-	 *
+-	 * NOTE: this ignores the issue whereby updating the seconds
++	/* NOTE: this ignores the issue whereby updating the seconds
+ 	 * takes effect exactly 500ms after we write the register.
+ 	 * (Also queueing and other delays before we get this far.)
+ 	 */
+ 	return mc146818_set_time(t);
+ }
+ 
++struct cmos_read_alarm_callback_param {
++	struct cmos_rtc *cmos;
++	struct rtc_time *time;
++	unsigned char	rtc_control;
++};
++
++static void cmos_read_alarm_callback(unsigned char __always_unused seconds,
++				     void *param_in)
++{
++	struct cmos_read_alarm_callback_param *p =
++		(struct cmos_read_alarm_callback_param *)param_in;
++	struct rtc_time *time = p->time;
++
++	time->tm_sec = CMOS_READ(RTC_SECONDS_ALARM);
++	time->tm_min = CMOS_READ(RTC_MINUTES_ALARM);
++	time->tm_hour = CMOS_READ(RTC_HOURS_ALARM);
++
++	if (p->cmos->day_alrm) {
++		/* ignore upper bits on readback per ACPI spec */
++		time->tm_mday = CMOS_READ(p->cmos->day_alrm) & 0x3f;
++		if (!time->tm_mday)
++			time->tm_mday = -1;
++
++		if (p->cmos->mon_alrm) {
++			time->tm_mon = CMOS_READ(p->cmos->mon_alrm);
++			if (!time->tm_mon)
++				time->tm_mon = -1;
++		}
++	}
++
++	p->rtc_control = CMOS_READ(RTC_CONTROL);
++}
++
+ static int cmos_read_alarm(struct device *dev, struct rtc_wkalrm *t)
+ {
+ 	struct cmos_rtc	*cmos = dev_get_drvdata(dev);
+-	unsigned char	rtc_control;
++	struct cmos_read_alarm_callback_param p = {
++		.cmos = cmos,
++		.time = &t->time,
++	};
+ 
+ 	/* This not only a rtc_op, but also called directly */
+ 	if (!is_valid_irq(cmos->irq))
+@@ -262,28 +299,18 @@ static int cmos_read_alarm(struct device *dev, struct rtc_wkalrm *t)
+ 	 * the future.
+ 	 */
+ 
+-	spin_lock_irq(&rtc_lock);
+-	t->time.tm_sec = CMOS_READ(RTC_SECONDS_ALARM);
+-	t->time.tm_min = CMOS_READ(RTC_MINUTES_ALARM);
+-	t->time.tm_hour = CMOS_READ(RTC_HOURS_ALARM);
+-
+-	if (cmos->day_alrm) {
+-		/* ignore upper bits on readback per ACPI spec */
+-		t->time.tm_mday = CMOS_READ(cmos->day_alrm) & 0x3f;
+-		if (!t->time.tm_mday)
+-			t->time.tm_mday = -1;
+-
+-		if (cmos->mon_alrm) {
+-			t->time.tm_mon = CMOS_READ(cmos->mon_alrm);
+-			if (!t->time.tm_mon)
+-				t->time.tm_mon = -1;
+-		}
+-	}
+-
+-	rtc_control = CMOS_READ(RTC_CONTROL);
+-	spin_unlock_irq(&rtc_lock);
++	/* Some Intel chipsets disconnect the alarm registers when the clock
++	 * update is in progress - during this time reads return bogus values
++	 * and writes may fail silently. See for example "7th Generation Intel®
++	 * Processor Family I/O for U/Y Platforms [...] Datasheet", section
++	 * 27.7.1
++	 *
++	 * Use the mc146818_avoid_UIP() function to avoid this.
++	 */
++	if (!mc146818_avoid_UIP(cmos_read_alarm_callback, &p))
++		return -EIO;
+ 
+-	if (!(rtc_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
++	if (!(p.rtc_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ 		if (((unsigned)t->time.tm_sec) < 0x60)
+ 			t->time.tm_sec = bcd2bin(t->time.tm_sec);
+ 		else
+@@ -312,7 +339,7 @@ static int cmos_read_alarm(struct device *dev, struct rtc_wkalrm *t)
+ 		}
+ 	}
+ 
+-	t->enabled = !!(rtc_control & RTC_AIE);
++	t->enabled = !!(p.rtc_control & RTC_AIE);
+ 	t->pending = 0;
+ 
+ 	return 0;
+@@ -443,10 +470,57 @@ static int cmos_validate_alarm(struct device *dev, struct rtc_wkalrm *t)
+ 	return 0;
+ }
+ 
++struct cmos_set_alarm_callback_param {
++	struct cmos_rtc *cmos;
++	unsigned char mon, mday, hrs, min, sec;
++	struct rtc_wkalrm *t;
++};
++
++/* Note: this function may be executed by mc146818_avoid_UIP() more then
++ *	 once
++ */
++static void cmos_set_alarm_callback(unsigned char __always_unused seconds,
++				    void *param_in)
++{
++	struct cmos_set_alarm_callback_param *p =
++		(struct cmos_set_alarm_callback_param *)param_in;
++
++	/* next rtc irq must not be from previous alarm setting */
++	cmos_irq_disable(p->cmos, RTC_AIE);
++
++	/* update alarm */
++	CMOS_WRITE(p->hrs, RTC_HOURS_ALARM);
++	CMOS_WRITE(p->min, RTC_MINUTES_ALARM);
++	CMOS_WRITE(p->sec, RTC_SECONDS_ALARM);
++
++	/* the system may support an "enhanced" alarm */
++	if (p->cmos->day_alrm) {
++		CMOS_WRITE(p->mday, p->cmos->day_alrm);
++		if (p->cmos->mon_alrm)
++			CMOS_WRITE(p->mon, p->cmos->mon_alrm);
++	}
++
++	if (use_hpet_alarm()) {
++		/*
++		 * FIXME the HPET alarm glue currently ignores day_alrm
++		 * and mon_alrm ...
++		 */
++		hpet_set_alarm_time(p->t->time.tm_hour, p->t->time.tm_min,
++				    p->t->time.tm_sec);
++	}
++
++	if (p->t->enabled)
++		cmos_irq_enable(p->cmos, RTC_AIE);
++}
++
+ static int cmos_set_alarm(struct device *dev, struct rtc_wkalrm *t)
+ {
+ 	struct cmos_rtc	*cmos = dev_get_drvdata(dev);
+-	unsigned char mon, mday, hrs, min, sec, rtc_control;
++	struct cmos_set_alarm_callback_param p = {
++		.cmos = cmos,
++		.t = t
++	};
++	unsigned char rtc_control;
+ 	int ret;
+ 
+ 	/* This not only a rtc_op, but also called directly */
+@@ -457,11 +531,11 @@ static int cmos_set_alarm(struct device *dev, struct rtc_wkalrm *t)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	mon = t->time.tm_mon + 1;
+-	mday = t->time.tm_mday;
+-	hrs = t->time.tm_hour;
+-	min = t->time.tm_min;
+-	sec = t->time.tm_sec;
++	p.mon = t->time.tm_mon + 1;
++	p.mday = t->time.tm_mday;
++	p.hrs = t->time.tm_hour;
++	p.min = t->time.tm_min;
++	p.sec = t->time.tm_sec;
+ 
+ 	spin_lock_irq(&rtc_lock);
+ 	rtc_control = CMOS_READ(RTC_CONTROL);
+@@ -469,43 +543,21 @@ static int cmos_set_alarm(struct device *dev, struct rtc_wkalrm *t)
+ 
+ 	if (!(rtc_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ 		/* Writing 0xff means "don't care" or "match all".  */
+-		mon = (mon <= 12) ? bin2bcd(mon) : 0xff;
+-		mday = (mday >= 1 && mday <= 31) ? bin2bcd(mday) : 0xff;
+-		hrs = (hrs < 24) ? bin2bcd(hrs) : 0xff;
+-		min = (min < 60) ? bin2bcd(min) : 0xff;
+-		sec = (sec < 60) ? bin2bcd(sec) : 0xff;
+-	}
+-
+-	spin_lock_irq(&rtc_lock);
+-
+-	/* next rtc irq must not be from previous alarm setting */
+-	cmos_irq_disable(cmos, RTC_AIE);
+-
+-	/* update alarm */
+-	CMOS_WRITE(hrs, RTC_HOURS_ALARM);
+-	CMOS_WRITE(min, RTC_MINUTES_ALARM);
+-	CMOS_WRITE(sec, RTC_SECONDS_ALARM);
+-
+-	/* the system may support an "enhanced" alarm */
+-	if (cmos->day_alrm) {
+-		CMOS_WRITE(mday, cmos->day_alrm);
+-		if (cmos->mon_alrm)
+-			CMOS_WRITE(mon, cmos->mon_alrm);
++		p.mon = (p.mon <= 12) ? bin2bcd(p.mon) : 0xff;
++		p.mday = (p.mday >= 1 && p.mday <= 31) ? bin2bcd(p.mday) : 0xff;
++		p.hrs = (p.hrs < 24) ? bin2bcd(p.hrs) : 0xff;
++		p.min = (p.min < 60) ? bin2bcd(p.min) : 0xff;
++		p.sec = (p.sec < 60) ? bin2bcd(p.sec) : 0xff;
+ 	}
+ 
+-	if (use_hpet_alarm()) {
+-		/*
+-		 * FIXME the HPET alarm glue currently ignores day_alrm
+-		 * and mon_alrm ...
+-		 */
+-		hpet_set_alarm_time(t->time.tm_hour, t->time.tm_min,
+-				    t->time.tm_sec);
+-	}
+-
+-	if (t->enabled)
+-		cmos_irq_enable(cmos, RTC_AIE);
+-
+-	spin_unlock_irq(&rtc_lock);
++	/*
++	 * Some Intel chipsets disconnect the alarm registers when the clock
++	 * update is in progress - during this time writes fail silently.
++	 *
++	 * Use mc146818_avoid_UIP() to avoid this.
++	 */
++	if (!mc146818_avoid_UIP(cmos_set_alarm_callback, &p))
++		return -EIO;
+ 
+ 	cmos->alarm_expires = rtc_tm_to_time64(&t->time);
+ 
+@@ -652,11 +704,10 @@ static struct cmos_rtc	cmos_rtc;
+ 
+ static irqreturn_t cmos_interrupt(int irq, void *p)
+ {
+-	unsigned long	flags;
+ 	u8		irqstat;
+ 	u8		rtc_control;
+ 
+-	spin_lock_irqsave(&rtc_lock, flags);
++	spin_lock(&rtc_lock);
+ 
+ 	/* When the HPET interrupt handler calls us, the interrupt
+ 	 * status is passed as arg1 instead of the irq number.  But
+@@ -690,7 +741,7 @@ static irqreturn_t cmos_interrupt(int irq, void *p)
+ 			hpet_mask_rtc_irq_bit(RTC_AIE);
+ 		CMOS_READ(RTC_INTR_FLAGS);
+ 	}
+-	spin_unlock_irqrestore(&rtc_lock, flags);
++	spin_unlock(&rtc_lock);
+ 
+ 	if (is_intr(irqstat)) {
+ 		rtc_update_irq(p, 1, irqstat);
+@@ -806,6 +857,12 @@ cmos_do_probe(struct device *dev, struct resource *ports, int rtc_irq)
+ 
+ 	rename_region(ports, dev_name(&cmos_rtc.rtc->dev));
+ 
++	if (!mc146818_does_rtc_work()) {
++		dev_warn(dev, "broken or not accessible\n");
++		retval = -ENXIO;
++		goto cleanup1;
++	}
++
+ 	spin_lock_irq(&rtc_lock);
+ 
+ 	if (!(flags & CMOS_RTC_FLAGS_NOFREQ)) {
+@@ -1054,7 +1111,9 @@ static void cmos_check_wkalrm(struct device *dev)
+ 	 * ACK the rtc irq here
+ 	 */
+ 	if (t_now >= cmos->alarm_expires && cmos_use_acpi_alarm()) {
++		local_irq_disable();
+ 		cmos_interrupt(0, (void *)cmos->rtc);
++		local_irq_enable();
+ 		return;
+ 	}
+ 
+diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
+index b036ff33fbe61..347655d24b5d3 100644
+--- a/drivers/rtc/rtc-mc146818-lib.c
++++ b/drivers/rtc/rtc-mc146818-lib.c
+@@ -9,40 +9,143 @@
+ #endif
+ 
+ /*
+- * Returns true if a clock update is in progress
++ * Execute a function while the UIP (Update-in-progress) bit of the RTC is
++ * unset.
++ *
++ * Warning: callback may be executed more then once.
+  */
+-static inline unsigned char mc146818_is_updating(void)
++bool mc146818_avoid_UIP(void (*callback)(unsigned char seconds, void *param),
++			void *param)
+ {
+-	unsigned char uip;
++	int i;
+ 	unsigned long flags;
++	unsigned char seconds;
+ 
+-	spin_lock_irqsave(&rtc_lock, flags);
+-	uip = (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP);
+-	spin_unlock_irqrestore(&rtc_lock, flags);
+-	return uip;
++	for (i = 0; i < 10; i++) {
++		spin_lock_irqsave(&rtc_lock, flags);
++
++		/*
++		 * Check whether there is an update in progress during which the
++		 * readout is unspecified. The maximum update time is ~2ms. Poll
++		 * every msec for completion.
++		 *
++		 * Store the second value before checking UIP so a long lasting
++		 * NMI which happens to hit after the UIP check cannot make
++		 * an update cycle invisible.
++		 */
++		seconds = CMOS_READ(RTC_SECONDS);
++
++		if (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP) {
++			spin_unlock_irqrestore(&rtc_lock, flags);
++			mdelay(1);
++			continue;
++		}
++
++		/* Revalidate the above readout */
++		if (seconds != CMOS_READ(RTC_SECONDS)) {
++			spin_unlock_irqrestore(&rtc_lock, flags);
++			continue;
++		}
++
++		if (callback)
++			callback(seconds, param);
++
++		/*
++		 * Check for the UIP bit again. If it is set now then
++		 * the above values may contain garbage.
++		 */
++		if (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP) {
++			spin_unlock_irqrestore(&rtc_lock, flags);
++			mdelay(1);
++			continue;
++		}
++
++		/*
++		 * A NMI might have interrupted the above sequence so check
++		 * whether the seconds value has changed which indicates that
++		 * the NMI took longer than the UIP bit was set. Unlikely, but
++		 * possible and there is also virt...
++		 */
++		if (seconds != CMOS_READ(RTC_SECONDS)) {
++			spin_unlock_irqrestore(&rtc_lock, flags);
++			continue;
++		}
++		spin_unlock_irqrestore(&rtc_lock, flags);
++
++		return true;
++	}
++	return false;
+ }
++EXPORT_SYMBOL_GPL(mc146818_avoid_UIP);
+ 
+-unsigned int mc146818_get_time(struct rtc_time *time)
++/*
++ * If the UIP (Update-in-progress) bit of the RTC is set for more then
++ * 10ms, the RTC is apparently broken or not present.
++ */
++bool mc146818_does_rtc_work(void)
++{
++	int i;
++	unsigned char val;
++	unsigned long flags;
++
++	for (i = 0; i < 10; i++) {
++		spin_lock_irqsave(&rtc_lock, flags);
++		val = CMOS_READ(RTC_FREQ_SELECT);
++		spin_unlock_irqrestore(&rtc_lock, flags);
++
++		if ((val & RTC_UIP) == 0)
++			return true;
++
++		mdelay(1);
++	}
++
++	return false;
++}
++EXPORT_SYMBOL_GPL(mc146818_does_rtc_work);
++
++int mc146818_get_time(struct rtc_time *time)
+ {
+ 	unsigned char ctrl;
+ 	unsigned long flags;
++	unsigned int iter_count = 0;
+ 	unsigned char century = 0;
++	bool retry;
+ 
+ #ifdef CONFIG_MACH_DECSTATION
+ 	unsigned int real_year;
+ #endif
+ 
++again:
++	if (iter_count > 10) {
++		memset(time, 0, sizeof(*time));
++		return -EIO;
++	}
++	iter_count++;
++
++	spin_lock_irqsave(&rtc_lock, flags);
++
+ 	/*
+-	 * read RTC once any update in progress is done. The update
+-	 * can take just over 2ms. We wait 20ms. There is no need to
+-	 * to poll-wait (up to 1s - eeccch) for the falling edge of RTC_UIP.
+-	 * If you need to know *exactly* when a second has started, enable
+-	 * periodic update complete interrupts, (via ioctl) and then
+-	 * immediately read /dev/rtc which will block until you get the IRQ.
+-	 * Once the read clears, read the RTC time (again via ioctl). Easy.
++	 * Check whether there is an update in progress during which the
++	 * readout is unspecified. The maximum update time is ~2ms. Poll
++	 * every msec for completion.
++	 *
++	 * Store the second value before checking UIP so a long lasting NMI
++	 * which happens to hit after the UIP check cannot make an update
++	 * cycle invisible.
+ 	 */
+-	if (mc146818_is_updating())
+-		mdelay(20);
++	time->tm_sec = CMOS_READ(RTC_SECONDS);
++
++	if (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP) {
++		spin_unlock_irqrestore(&rtc_lock, flags);
++		mdelay(1);
++		goto again;
++	}
++
++	/* Revalidate the above readout */
++	if (time->tm_sec != CMOS_READ(RTC_SECONDS)) {
++		spin_unlock_irqrestore(&rtc_lock, flags);
++		goto again;
++	}
+ 
+ 	/*
+ 	 * Only the values that we read from the RTC are set. We leave
+@@ -50,8 +153,6 @@ unsigned int mc146818_get_time(struct rtc_time *time)
+ 	 * RTC has RTC_DAY_OF_WEEK, we ignore it, as it is only updated
+ 	 * by the RTC when initially set to a non-zero value.
+ 	 */
+-	spin_lock_irqsave(&rtc_lock, flags);
+-	time->tm_sec = CMOS_READ(RTC_SECONDS);
+ 	time->tm_min = CMOS_READ(RTC_MINUTES);
+ 	time->tm_hour = CMOS_READ(RTC_HOURS);
+ 	time->tm_mday = CMOS_READ(RTC_DAY_OF_MONTH);
+@@ -66,8 +167,24 @@ unsigned int mc146818_get_time(struct rtc_time *time)
+ 		century = CMOS_READ(acpi_gbl_FADT.century);
+ #endif
+ 	ctrl = CMOS_READ(RTC_CONTROL);
++	/*
++	 * Check for the UIP bit again. If it is set now then
++	 * the above values may contain garbage.
++	 */
++	retry = CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP;
++	/*
++	 * A NMI might have interrupted the above sequence so check whether
++	 * the seconds value has changed which indicates that the NMI took
++	 * longer than the UIP bit was set. Unlikely, but possible and
++	 * there is also virt...
++	 */
++	retry |= time->tm_sec != CMOS_READ(RTC_SECONDS);
++
+ 	spin_unlock_irqrestore(&rtc_lock, flags);
+ 
++	if (retry)
++		goto again;
++
+ 	if (!(ctrl & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
+ 	{
+ 		time->tm_sec = bcd2bin(time->tm_sec);
+@@ -95,7 +212,7 @@ unsigned int mc146818_get_time(struct rtc_time *time)
+ 
+ 	time->tm_mon--;
+ 
+-	return RTC_24H;
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(mc146818_get_time);
+ 
+@@ -132,7 +249,6 @@ int mc146818_set_time(struct rtc_time *time)
+ 	if (yrs > 255)	/* They are unsigned */
+ 		return -EINVAL;
+ 
+-	spin_lock_irqsave(&rtc_lock, flags);
+ #ifdef CONFIG_MACH_DECSTATION
+ 	real_yrs = yrs;
+ 	leap_yr = ((!((yrs + 1900) % 4) && ((yrs + 1900) % 100)) ||
+@@ -161,16 +277,16 @@ int mc146818_set_time(struct rtc_time *time)
+ 	/* These limits and adjustments are independent of
+ 	 * whether the chip is in binary mode or not.
+ 	 */
+-	if (yrs > 169) {
+-		spin_unlock_irqrestore(&rtc_lock, flags);
++	if (yrs > 169)
+ 		return -EINVAL;
+-	}
+ 
+ 	if (yrs >= 100)
+ 		yrs -= 100;
+ 
+-	if (!(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY)
+-	    || RTC_ALWAYS_BCD) {
++	spin_lock_irqsave(&rtc_lock, flags);
++	save_control = CMOS_READ(RTC_CONTROL);
++	spin_unlock_irqrestore(&rtc_lock, flags);
++	if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ 		sec = bin2bcd(sec);
+ 		min = bin2bcd(min);
+ 		hrs = bin2bcd(hrs);
+@@ -180,6 +296,7 @@ int mc146818_set_time(struct rtc_time *time)
+ 		century = bin2bcd(century);
+ 	}
+ 
++	spin_lock_irqsave(&rtc_lock, flags);
+ 	save_control = CMOS_READ(RTC_CONTROL);
+ 	CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
+ 	save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index a9a43d6494782..28a1194f849fc 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -291,7 +291,8 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
+ 	 *
+ 	 * DWC_usb3 3.30a and DWC_usb31 1.90a programming guide section 3.2.2
+ 	 */
+-	if (dwc->gadget->speed <= USB_SPEED_HIGH) {
++	if (dwc->gadget->speed <= USB_SPEED_HIGH ||
++	    DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_ENDTRANSFER) {
+ 		reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+ 		if (unlikely(reg & DWC3_GUSB2PHYCFG_SUSPHY)) {
+ 			saved_config |= DWC3_GUSB2PHYCFG_SUSPHY;
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 2618d3beef649..27828435dd4fc 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -609,7 +609,7 @@ static void fbcon_prepare_logo(struct vc_data *vc, struct fb_info *info,
+ 		if (scr_readw(r) != vc->vc_video_erase_char)
+ 			break;
+ 	if (r != q && new_rows >= rows + logo_lines) {
+-		save = kmalloc(array3_size(logo_lines, new_cols, 2),
++		save = kzalloc(array3_size(logo_lines, new_cols, 2),
+ 			       GFP_KERNEL);
+ 		if (save) {
+ 			int i = cols < new_cols ? cols : new_cols;
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 6b80dee17f49d..4a6ba0997e399 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5398,6 +5398,7 @@ static int clone_range(struct send_ctx *sctx,
+ 		u64 ext_len;
+ 		u64 clone_len;
+ 		u64 clone_data_offset;
++		bool crossed_src_i_size = false;
+ 
+ 		if (slot >= btrfs_header_nritems(leaf)) {
+ 			ret = btrfs_next_leaf(clone_root->root, path);
+@@ -5454,8 +5455,10 @@ static int clone_range(struct send_ctx *sctx,
+ 		if (key.offset >= clone_src_i_size)
+ 			break;
+ 
+-		if (key.offset + ext_len > clone_src_i_size)
++		if (key.offset + ext_len > clone_src_i_size) {
+ 			ext_len = clone_src_i_size - key.offset;
++			crossed_src_i_size = true;
++		}
+ 
+ 		clone_data_offset = btrfs_file_extent_offset(leaf, ei);
+ 		if (btrfs_file_extent_disk_bytenr(leaf, ei) == disk_byte) {
+@@ -5515,6 +5518,25 @@ static int clone_range(struct send_ctx *sctx,
+ 				ret = send_clone(sctx, offset, clone_len,
+ 						 clone_root);
+ 			}
++		} else if (crossed_src_i_size && clone_len < len) {
++			/*
++			 * If we are at i_size of the clone source inode and we
++			 * can not clone from it, terminate the loop. This is
++			 * to avoid sending two write operations, one with a
++			 * length matching clone_len and the final one after
++			 * this loop with a length of len - clone_len.
++			 *
++			 * When using encoded writes (BTRFS_SEND_FLAG_COMPRESSED
++			 * was passed to the send ioctl), this helps avoid
++			 * sending an encoded write for an offset that is not
++			 * sector size aligned, in case the i_size of the source
++			 * inode is not sector size aligned. That will make the
++			 * receiver fallback to decompression of the data and
++			 * writing it using regular buffered IO, therefore while
++			 * not incorrect, it's not optimal due decompression and
++			 * possible re-compression at the receiver.
++			 */
++			break;
+ 		} else {
+ 			ret = send_extent_data(sctx, offset, clone_len);
+ 		}
+diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
+index a0c4b99d28994..f40c9534f20be 100644
+--- a/include/asm-generic/tlb.h
++++ b/include/asm-generic/tlb.h
+@@ -205,12 +205,16 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
+ #define tlb_needs_table_invalidate() (true)
+ #endif
+ 
++void tlb_remove_table_sync_one(void);
++
+ #else
+ 
+ #ifdef tlb_needs_table_invalidate
+ #error tlb_needs_table_invalidate() requires MMU_GATHER_RCU_TABLE_FREE
+ #endif
+ 
++static inline void tlb_remove_table_sync_one(void) { }
++
+ #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
+ 
+ 
+diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
+index 618838c48313c..959b370733f09 100644
+--- a/include/linux/cgroup.h
++++ b/include/linux/cgroup.h
+@@ -68,6 +68,7 @@ struct css_task_iter {
+ 	struct list_head		iters_node;	/* css_set->task_iters */
+ };
+ 
++extern struct file_system_type cgroup_fs_type;
+ extern struct cgroup_root cgrp_dfl_root;
+ extern struct css_set init_css_set;
+ 
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index b9fbb6d4150e2..955b19dc28a82 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -174,8 +174,8 @@ struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
+ struct page *follow_huge_pd(struct vm_area_struct *vma,
+ 			    unsigned long address, hugepd_t hpd,
+ 			    int flags, int pdshift);
+-struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+-				pmd_t *pmd, int flags);
++struct page *follow_huge_pmd_pte(struct vm_area_struct *vma, unsigned long address,
++				 int flags);
+ struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
+ 				pud_t *pud, int flags);
+ struct page *follow_huge_pgd(struct mm_struct *mm, unsigned long address,
+@@ -261,8 +261,8 @@ static inline struct page *follow_huge_pd(struct vm_area_struct *vma,
+ 	return NULL;
+ }
+ 
+-static inline struct page *follow_huge_pmd(struct mm_struct *mm,
+-				unsigned long address, pmd_t *pmd, int flags)
++static inline struct page *follow_huge_pmd_pte(struct vm_area_struct *vma,
++				unsigned long address, int flags)
+ {
+ 	return NULL;
+ }
+diff --git a/include/linux/mc146818rtc.h b/include/linux/mc146818rtc.h
+index 1e02058113944..b0da04fe087bb 100644
+--- a/include/linux/mc146818rtc.h
++++ b/include/linux/mc146818rtc.h
+@@ -125,7 +125,11 @@ struct cmos_rtc_board_info {
+ #define RTC_IO_EXTENT_USED      RTC_IO_EXTENT
+ #endif /* ARCH_RTC_LOCATION */
+ 
+-unsigned int mc146818_get_time(struct rtc_time *time);
++bool mc146818_does_rtc_work(void);
++int mc146818_get_time(struct rtc_time *time);
+ int mc146818_set_time(struct rtc_time *time);
+ 
++bool mc146818_avoid_UIP(void (*callback)(unsigned char seconds, void *param),
++			void *param);
++
+ #endif /* _MC146818RTC_H */
+diff --git a/kernel/cgroup/cgroup-internal.h b/kernel/cgroup/cgroup-internal.h
+index 6e36e854b5124..d8fcc139ac05d 100644
+--- a/kernel/cgroup/cgroup-internal.h
++++ b/kernel/cgroup/cgroup-internal.h
+@@ -169,7 +169,6 @@ extern struct mutex cgroup_mutex;
+ extern spinlock_t css_set_lock;
+ extern struct cgroup_subsys *cgroup_subsys[];
+ extern struct list_head cgroup_roots;
+-extern struct file_system_type cgroup_fs_type;
+ 
+ /* iterate across the hierarchies */
+ #define for_each_root(root)						\
+diff --git a/mm/gup.c b/mm/gup.c
+index b47c751df069a..6d5e4fd55d320 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -405,6 +405,18 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
+ 	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
+ 			 (FOLL_PIN | FOLL_GET)))
+ 		return ERR_PTR(-EINVAL);
++
++	/*
++	 * Considering PTE level hugetlb, like continuous-PTE hugetlb on
++	 * ARM64 architecture.
++	 */
++	if (is_vm_hugetlb_page(vma)) {
++		page = follow_huge_pmd_pte(vma, address, flags);
++		if (page)
++			return page;
++		return no_page_table(vma, flags);
++	}
++
+ retry:
+ 	if (unlikely(pmd_bad(*pmd)))
+ 		return no_page_table(vma, flags);
+@@ -560,7 +572,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
+ 	if (pmd_none(pmdval))
+ 		return no_page_table(vma, flags);
+ 	if (pmd_huge(pmdval) && is_vm_hugetlb_page(vma)) {
+-		page = follow_huge_pmd(mm, address, pmd, flags);
++		page = follow_huge_pmd_pte(vma, address, flags);
+ 		if (page)
+ 			return page;
+ 		return no_page_table(vma, flags);
+@@ -2564,7 +2576,7 @@ static int gup_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr, unsigned lo
+ 		next = pud_addr_end(addr, end);
+ 		if (unlikely(!pud_present(pud)))
+ 			return 0;
+-		if (unlikely(pud_huge(pud))) {
++		if (unlikely(pud_huge(pud) || pud_devmap(pud))) {
+ 			if (!gup_huge_pud(pud, pudp, addr, next, flags,
+ 					  pages, nr))
+ 				return 0;
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index d8c63d79af206..3499b3803384b 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5585,12 +5585,13 @@ follow_huge_pd(struct vm_area_struct *vma,
+ }
+ 
+ struct page * __weak
+-follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+-		pmd_t *pmd, int flags)
++follow_huge_pmd_pte(struct vm_area_struct *vma, unsigned long address, int flags)
+ {
++	struct hstate *h = hstate_vma(vma);
++	struct mm_struct *mm = vma->vm_mm;
+ 	struct page *page = NULL;
+ 	spinlock_t *ptl;
+-	pte_t pte;
++	pte_t *ptep, pte;
+ 
+ 	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
+ 	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
+@@ -5598,17 +5599,15 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+ 		return NULL;
+ 
+ retry:
+-	ptl = pmd_lockptr(mm, pmd);
+-	spin_lock(ptl);
+-	/*
+-	 * make sure that the address range covered by this pmd is not
+-	 * unmapped from other threads.
+-	 */
+-	if (!pmd_huge(*pmd))
+-		goto out;
+-	pte = huge_ptep_get((pte_t *)pmd);
++	ptep = huge_pte_offset(mm, address, huge_page_size(h));
++	if (!ptep)
++		return NULL;
++
++	ptl = huge_pte_lock(h, mm, ptep);
++	pte = huge_ptep_get(ptep);
+ 	if (pte_present(pte)) {
+-		page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
++		page = pte_page(pte) +
++			((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+ 		/*
+ 		 * try_grab_page() should always succeed here, because: a) we
+ 		 * hold the pmd (ptl) lock, and b) we've just checked that the
+@@ -5624,7 +5623,7 @@ retry:
+ 	} else {
+ 		if (is_hugetlb_entry_migration(pte)) {
+ 			spin_unlock(ptl);
+-			__migration_entry_wait(mm, (pte_t *)pmd, ptl);
++			__migration_entry_wait(mm, ptep, ptl);
+ 			goto retry;
+ 		}
+ 		/*
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index cf4dceb9682bf..0eb3adf4ff68c 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1154,6 +1154,7 @@ static void collapse_huge_page(struct mm_struct *mm,
+ 	_pmd = pmdp_collapse_flush(vma, address, pmd);
+ 	spin_unlock(pmd_ptl);
+ 	mmu_notifier_invalidate_range_end(&range);
++	tlb_remove_table_sync_one();
+ 
+ 	spin_lock(pte_ptl);
+ 	isolated = __collapse_huge_page_isolate(vma, address, pte,
+@@ -1443,6 +1444,7 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ 	spinlock_t *ptl;
+ 	int count = 0;
+ 	int i;
++	struct mmu_notifier_range range;
+ 
+ 	if (!vma || !vma->vm_file ||
+ 	    vma->vm_start > haddr || vma->vm_end < haddr + HPAGE_PMD_SIZE)
+@@ -1457,6 +1459,14 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ 	if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE))
+ 		return;
+ 
++	/*
++	 * Symmetry with retract_page_tables(): Exclude MAP_PRIVATE mappings
++	 * that got written to. Without this, we'd have to also lock the
++	 * anon_vma if one exists.
++	 */
++	if (vma->anon_vma)
++		return;
++
+ 	hpage = find_lock_page(vma->vm_file->f_mapping,
+ 			       linear_page_index(vma, haddr));
+ 	if (!hpage)
+@@ -1469,6 +1479,19 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ 	if (!pmd)
+ 		goto drop_hpage;
+ 
++	/*
++	 * We need to lock the mapping so that from here on, only GUP-fast and
++	 * hardware page walks can access the parts of the page tables that
++	 * we're operating on.
++	 */
++	i_mmap_lock_write(vma->vm_file->f_mapping);
++
++	/*
++	 * This spinlock should be unnecessary: Nobody else should be accessing
++	 * the page tables under spinlock protection here, only
++	 * lockless_pages_from_mm() and the hardware page walker can access page
++	 * tables while all the high-level locks are held in write mode.
++	 */
+ 	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
+ 
+ 	/* step 1: check all mapped PTEs are to the right huge page */
+@@ -1515,12 +1538,17 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ 	}
+ 
+ 	/* step 4: collapse pmd */
+-	ptl = pmd_lock(vma->vm_mm, pmd);
++	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm, haddr,
++				haddr + HPAGE_PMD_SIZE);
++	mmu_notifier_invalidate_range_start(&range);
+ 	_pmd = pmdp_collapse_flush(vma, haddr, pmd);
+-	spin_unlock(ptl);
+ 	mm_dec_nr_ptes(mm);
++	tlb_remove_table_sync_one();
++	mmu_notifier_invalidate_range_end(&range);
+ 	pte_free(mm, pmd_pgtable(_pmd));
+ 
++	i_mmap_unlock_write(vma->vm_file->f_mapping);
++
+ drop_hpage:
+ 	unlock_page(hpage);
+ 	put_page(hpage);
+@@ -1528,6 +1556,7 @@ drop_hpage:
+ 
+ abort:
+ 	pte_unmap_unlock(start_pte, ptl);
++	i_mmap_unlock_write(vma->vm_file->f_mapping);
+ 	goto drop_hpage;
+ }
+ 
+@@ -1577,7 +1606,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
+ 		 * An alternative would be drop the check, but check that page
+ 		 * table is clear before calling pmdp_collapse_flush() under
+ 		 * ptl. It has higher chance to recover THP for the VMA, but
+-		 * has higher cost too.
++		 * has higher cost too. It would also probably require locking
++		 * the anon_vma.
+ 		 */
+ 		if (vma->anon_vma)
+ 			continue;
+@@ -1599,12 +1629,19 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
+ 		 */
+ 		if (mmap_write_trylock(mm)) {
+ 			if (!khugepaged_test_exit(mm)) {
+-				spinlock_t *ptl = pmd_lock(mm, pmd);
++				struct mmu_notifier_range range;
++
++				mmu_notifier_range_init(&range,
++							MMU_NOTIFY_CLEAR, 0,
++							NULL, mm, addr,
++							addr + HPAGE_PMD_SIZE);
++				mmu_notifier_invalidate_range_start(&range);
+ 				/* assume page table is clear */
+ 				_pmd = pmdp_collapse_flush(vma, addr, pmd);
+-				spin_unlock(ptl);
+ 				mm_dec_nr_ptes(mm);
++				tlb_remove_table_sync_one();
+ 				pte_free(mm, pmd_pgtable(_pmd));
++				mmu_notifier_invalidate_range_end(&range);
+ 			}
+ 			mmap_write_unlock(mm);
+ 		} else {
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 92ab008777183..c62d997c8ca1d 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -4866,6 +4866,7 @@ static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
+ 	unsigned int efd, cfd;
+ 	struct fd efile;
+ 	struct fd cfile;
++	struct dentry *cdentry;
+ 	const char *name;
+ 	char *endp;
+ 	int ret;
+@@ -4916,6 +4917,16 @@ static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
+ 	if (ret < 0)
+ 		goto out_put_cfile;
+ 
++	/*
++	 * The control file must be a regular cgroup1 file. As a regular cgroup
++	 * file can't be renamed, it's safe to access its name afterwards.
++	 */
++	cdentry = cfile.file->f_path.dentry;
++	if (cdentry->d_sb->s_type != &cgroup_fs_type || !d_is_reg(cdentry)) {
++		ret = -EINVAL;
++		goto out_put_cfile;
++	}
++
+ 	/*
+ 	 * Determine the event callbacks and set them in @event.  This used
+ 	 * to be done via struct cftype but cgroup core no longer knows
+@@ -4924,7 +4935,7 @@ static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
+ 	 *
+ 	 * DO NOT ADD NEW FILES.
+ 	 */
+-	name = cfile.file->f_path.dentry->d_name.name;
++	name = cdentry->d_name.name;
+ 
+ 	if (!strcmp(name, "memory.usage_in_bytes")) {
+ 		event->register_event = mem_cgroup_usage_register_event;
+@@ -4948,7 +4959,7 @@ static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
+ 	 * automatically removed on cgroup destruction but the removal is
+ 	 * asynchronous, so take an extra ref on @css.
+ 	 */
+-	cfile_css = css_tryget_online_from_dir(cfile.file->f_path.dentry->d_parent,
++	cfile_css = css_tryget_online_from_dir(cdentry->d_parent,
+ 					       &memory_cgrp_subsys);
+ 	ret = -EINVAL;
+ 	if (IS_ERR(cfile_css))
+diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
+index 03c33c93a582b..205fdbb5792a9 100644
+--- a/mm/mmu_gather.c
++++ b/mm/mmu_gather.c
+@@ -139,7 +139,7 @@ static void tlb_remove_table_smp_sync(void *arg)
+ 	/* Simply deliver the interrupt */
+ }
+ 
+-static void tlb_remove_table_sync_one(void)
++void tlb_remove_table_sync_one(void)
+ {
+ 	/*
+ 	 * This isn't an RCU grace period and hence the page-tables cannot be
+@@ -163,8 +163,6 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
+ 
+ #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
+ 
+-static void tlb_remove_table_sync_one(void) { }
+-
+ static void tlb_remove_table_free(struct mmu_table_batch *batch)
+ {
+ 	__tlb_remove_table_free(batch);
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index deb66635f0f3b..e070a0b8e5ca3 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -118,7 +118,7 @@ struct p9_conn {
+ 	struct list_head unsent_req_list;
+ 	struct p9_req_t *rreq;
+ 	struct p9_req_t *wreq;
+-	char tmp_buf[7];
++	char tmp_buf[P9_HDRSZ];
+ 	struct p9_fcall rc;
+ 	int wpos;
+ 	int wsize;
+@@ -291,7 +291,7 @@ static void p9_read_work(struct work_struct *work)
+ 	if (!m->rc.sdata) {
+ 		m->rc.sdata = m->tmp_buf;
+ 		m->rc.offset = 0;
+-		m->rc.capacity = 7; /* start by reading header */
++		m->rc.capacity = P9_HDRSZ; /* start by reading header */
+ 	}
+ 
+ 	clear_bit(Rpending, &m->wsched);
+@@ -314,7 +314,7 @@ static void p9_read_work(struct work_struct *work)
+ 		p9_debug(P9_DEBUG_TRANS, "got new header\n");
+ 
+ 		/* Header size */
+-		m->rc.size = 7;
++		m->rc.size = P9_HDRSZ;
+ 		err = p9_parse_header(&m->rc, &m->rc.size, NULL, NULL, 0);
+ 		if (err) {
+ 			p9_debug(P9_DEBUG_ERROR,
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index 432ac5a16f2e0..6c8a33f98f093 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -231,6 +231,14 @@ static void p9_xen_response(struct work_struct *work)
+ 			continue;
+ 		}
+ 
++		if (h.size > req->rc.capacity) {
++			dev_warn(&priv->dev->dev,
++				 "requested packet size too big: %d for tag %d with capacity %zd\n",
++				 h.size, h.tag, req->rc.capacity);
++			req->status = REQ_STATUS_ERROR;
++			goto recv_error;
++		}
++
+ 		memcpy(&req->rc, &h, sizeof(h));
+ 		req->rc.offset = 0;
+ 
+@@ -240,6 +248,7 @@ static void p9_xen_response(struct work_struct *work)
+ 				     masked_prod, &masked_cons,
+ 				     XEN_9PFS_RING_SIZE(ring));
+ 
++recv_error:
+ 		virt_mb();
+ 		cons += h.size;
+ 		ring->intf->in_cons = cons;
+diff --git a/net/bluetooth/6lowpan.c b/net/bluetooth/6lowpan.c
+index cff4944d5b663..7601ce9143c18 100644
+--- a/net/bluetooth/6lowpan.c
++++ b/net/bluetooth/6lowpan.c
+@@ -1010,6 +1010,7 @@ static int get_l2cap_conn(char *buf, bdaddr_t *addr, u8 *addr_type,
+ 	hci_dev_lock(hdev);
+ 	hcon = hci_conn_hash_lookup_le(hdev, addr, *addr_type);
+ 	hci_dev_unlock(hdev);
++	hci_dev_put(hdev);
+ 
+ 	if (!hcon)
+ 		return -ENOENT;
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index 4ef6a54403aa2..2f87f57e7a4fd 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -736,7 +736,7 @@ static int __init bt_init(void)
+ 
+ 	err = bt_sysfs_init();
+ 	if (err < 0)
+-		return err;
++		goto cleanup_led;
+ 
+ 	err = sock_register(&bt_sock_family_ops);
+ 	if (err)
+@@ -772,6 +772,8 @@ unregister_socket:
+ 	sock_unregister(PF_BLUETOOTH);
+ cleanup_sysfs:
+ 	bt_sysfs_cleanup();
++cleanup_led:
++	bt_leds_cleanup();
+ 	return err;
+ }
+ 
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 866eb22432de2..f8aab38ab5953 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3799,7 +3799,8 @@ int hci_register_dev(struct hci_dev *hdev)
+ 	hci_sock_dev_event(hdev, HCI_DEV_REG);
+ 	hci_dev_hold(hdev);
+ 
+-	if (!test_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks)) {
++	if (!hdev->suspend_notifier.notifier_call &&
++	    !test_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks)) {
+ 		hdev->suspend_notifier.notifier_call = hci_suspend_notifier;
+ 		error = register_pm_notifier(&hdev->suspend_notifier);
+ 		if (error)
+diff --git a/net/can/af_can.c b/net/can/af_can.c
+index cf554e8555214..79f24c6f43c8c 100644
+--- a/net/can/af_can.c
++++ b/net/can/af_can.c
+@@ -680,7 +680,7 @@ static int can_rcv(struct sk_buff *skb, struct net_device *dev,
+ {
+ 	struct canfd_frame *cfd = (struct canfd_frame *)skb->data;
+ 
+-	if (unlikely(dev->type != ARPHRD_CAN || skb->len != CAN_MTU)) {
++	if (unlikely(dev->type != ARPHRD_CAN || !can_get_ml_priv(dev) || skb->len != CAN_MTU)) {
+ 		pr_warn_once("PF_CAN: dropped non conform CAN skbuff: dev type %d, len %d\n",
+ 			     dev->type, skb->len);
+ 		goto free_skb;
+@@ -706,7 +706,7 @@ static int canfd_rcv(struct sk_buff *skb, struct net_device *dev,
+ {
+ 	struct canfd_frame *cfd = (struct canfd_frame *)skb->data;
+ 
+-	if (unlikely(dev->type != ARPHRD_CAN || skb->len != CANFD_MTU)) {
++	if (unlikely(dev->type != ARPHRD_CAN || !can_get_ml_priv(dev) || skb->len != CANFD_MTU)) {
+ 		pr_warn_once("PF_CAN: dropped non conform CAN FD skbuff: dev type %d, len %d\n",
+ 			     dev->type, skb->len);
+ 		goto free_skb;
+diff --git a/net/dsa/tag_ksz.c b/net/dsa/tag_ksz.c
+index 4820dbcedfa2d..230ddf45dff0d 100644
+--- a/net/dsa/tag_ksz.c
++++ b/net/dsa/tag_ksz.c
+@@ -22,7 +22,8 @@ static struct sk_buff *ksz_common_rcv(struct sk_buff *skb,
+ 	if (!skb->dev)
+ 		return NULL;
+ 
+-	pskb_trim_rcsum(skb, skb->len - len);
++	if (pskb_trim_rcsum(skb, skb->len - len))
++		return NULL;
+ 
+ 	skb->offload_fwd_mark = true;
+ 
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index af8a4255cf1ba..5f786ef662ead 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -830,6 +830,9 @@ static int rtm_to_fib_config(struct net *net, struct sk_buff *skb,
+ 		return -EINVAL;
+ 	}
+ 
++	if (!cfg->fc_table)
++		cfg->fc_table = RT_TABLE_MAIN;
++
+ 	return 0;
+ errout:
+ 	return err;
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 52ec0c43e6b81..ab9fcc6231b86 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -423,6 +423,7 @@ static struct fib_info *fib_find_info(struct fib_info *nfi)
+ 		    nfi->fib_prefsrc == fi->fib_prefsrc &&
+ 		    nfi->fib_priority == fi->fib_priority &&
+ 		    nfi->fib_type == fi->fib_type &&
++		    nfi->fib_tb_id == fi->fib_tb_id &&
+ 		    memcmp(nfi->fib_metrics, fi->fib_metrics,
+ 			   sizeof(u32) * RTAX_MAX) == 0 &&
+ 		    !((nfi->fib_flags ^ fi->fib_flags) & ~RTNH_COMPARE_MASK) &&
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 6ab5c50aa7a87..65ead8a749337 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -1493,24 +1493,6 @@ static int ipgre_fill_info(struct sk_buff *skb, const struct net_device *dev)
+ 	struct ip_tunnel_parm *p = &t->parms;
+ 	__be16 o_flags = p->o_flags;
+ 
+-	if (t->erspan_ver <= 2) {
+-		if (t->erspan_ver != 0 && !t->collect_md)
+-			o_flags |= TUNNEL_KEY;
+-
+-		if (nla_put_u8(skb, IFLA_GRE_ERSPAN_VER, t->erspan_ver))
+-			goto nla_put_failure;
+-
+-		if (t->erspan_ver == 1) {
+-			if (nla_put_u32(skb, IFLA_GRE_ERSPAN_INDEX, t->index))
+-				goto nla_put_failure;
+-		} else if (t->erspan_ver == 2) {
+-			if (nla_put_u8(skb, IFLA_GRE_ERSPAN_DIR, t->dir))
+-				goto nla_put_failure;
+-			if (nla_put_u16(skb, IFLA_GRE_ERSPAN_HWID, t->hwid))
+-				goto nla_put_failure;
+-		}
+-	}
+-
+ 	if (nla_put_u32(skb, IFLA_GRE_LINK, p->link) ||
+ 	    nla_put_be16(skb, IFLA_GRE_IFLAGS,
+ 			 gre_tnl_flags_to_gre_flags(p->i_flags)) ||
+@@ -1551,6 +1533,34 @@ nla_put_failure:
+ 	return -EMSGSIZE;
+ }
+ 
++static int erspan_fill_info(struct sk_buff *skb, const struct net_device *dev)
++{
++	struct ip_tunnel *t = netdev_priv(dev);
++
++	if (t->erspan_ver <= 2) {
++		if (t->erspan_ver != 0 && !t->collect_md)
++			t->parms.o_flags |= TUNNEL_KEY;
++
++		if (nla_put_u8(skb, IFLA_GRE_ERSPAN_VER, t->erspan_ver))
++			goto nla_put_failure;
++
++		if (t->erspan_ver == 1) {
++			if (nla_put_u32(skb, IFLA_GRE_ERSPAN_INDEX, t->index))
++				goto nla_put_failure;
++		} else if (t->erspan_ver == 2) {
++			if (nla_put_u8(skb, IFLA_GRE_ERSPAN_DIR, t->dir))
++				goto nla_put_failure;
++			if (nla_put_u16(skb, IFLA_GRE_ERSPAN_HWID, t->hwid))
++				goto nla_put_failure;
++		}
++	}
++
++	return ipgre_fill_info(skb, dev);
++
++nla_put_failure:
++	return -EMSGSIZE;
++}
++
+ static void erspan_setup(struct net_device *dev)
+ {
+ 	struct ip_tunnel *t = netdev_priv(dev);
+@@ -1629,7 +1639,7 @@ static struct rtnl_link_ops erspan_link_ops __read_mostly = {
+ 	.changelink	= erspan_changelink,
+ 	.dellink	= ip_tunnel_dellink,
+ 	.get_size	= ipgre_get_size,
+-	.fill_info	= ipgre_fill_info,
++	.fill_info	= erspan_fill_info,
+ 	.get_link_net	= ip_tunnel_get_link_net,
+ };
+ 
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index fadad8e83521d..e427f5040a08e 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -919,6 +919,9 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 		if (err < 0)
+ 			goto fail;
+ 
++		/* We prevent @rt from being freed. */
++		rcu_read_lock();
++
+ 		for (;;) {
+ 			/* Prepare header of the next frame,
+ 			 * before previous one went down. */
+@@ -942,6 +945,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 		if (err == 0) {
+ 			IP6_INC_STATS(net, ip6_dst_idev(&rt->dst),
+ 				      IPSTATS_MIB_FRAGOKS);
++			rcu_read_unlock();
+ 			return 0;
+ 		}
+ 
+@@ -949,6 +953,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 
+ 		IP6_INC_STATS(net, ip6_dst_idev(&rt->dst),
+ 			      IPSTATS_MIB_FRAGFAILS);
++		rcu_read_unlock();
+ 		return err;
+ 
+ slow_path_clean:
+diff --git a/net/mac802154/iface.c b/net/mac802154/iface.c
+index 1cf5ac09edcbc..a08240fe68a74 100644
+--- a/net/mac802154/iface.c
++++ b/net/mac802154/iface.c
+@@ -661,6 +661,7 @@ ieee802154_if_add(struct ieee802154_local *local, const char *name,
+ 	sdata->dev = ndev;
+ 	sdata->wpan_dev.wpan_phy = local->hw.phy;
+ 	sdata->local = local;
++	INIT_LIST_HEAD(&sdata->wpan_dev.list);
+ 
+ 	/* setup type-dependent data */
+ 	ret = ieee802154_setup_sdata(sdata, type);
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index c402283e7545b..2efdc50f978b0 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -317,8 +317,13 @@ nla_put_failure:
+ }
+ 
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+-static int ctnetlink_dump_mark(struct sk_buff *skb, u32 mark)
++static int ctnetlink_dump_mark(struct sk_buff *skb, const struct nf_conn *ct)
+ {
++	u32 mark = READ_ONCE(ct->mark);
++
++	if (!mark)
++		return 0;
++
+ 	if (nla_put_be32(skb, CTA_MARK, htonl(mark)))
+ 		goto nla_put_failure;
+ 	return 0;
+@@ -532,7 +537,7 @@ static int ctnetlink_dump_extinfo(struct sk_buff *skb,
+ static int ctnetlink_dump_info(struct sk_buff *skb, struct nf_conn *ct)
+ {
+ 	if (ctnetlink_dump_status(skb, ct) < 0 ||
+-	    ctnetlink_dump_mark(skb, READ_ONCE(ct->mark)) < 0 ||
++	    ctnetlink_dump_mark(skb, ct) < 0 ||
+ 	    ctnetlink_dump_secctx(skb, ct) < 0 ||
+ 	    ctnetlink_dump_id(skb, ct) < 0 ||
+ 	    ctnetlink_dump_use(skb, ct) < 0 ||
+@@ -711,7 +716,6 @@ ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item)
+ 	struct sk_buff *skb;
+ 	unsigned int type;
+ 	unsigned int flags = 0, group;
+-	u32 mark;
+ 	int err;
+ 
+ 	if (events & (1 << IPCT_DESTROY)) {
+@@ -812,9 +816,8 @@ ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item)
+ 	}
+ 
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+-	mark = READ_ONCE(ct->mark);
+-	if ((events & (1 << IPCT_MARK) || mark) &&
+-	    ctnetlink_dump_mark(skb, mark) < 0)
++	if (events & (1 << IPCT_MARK) &&
++	    ctnetlink_dump_mark(skb, ct) < 0)
+ 		goto nla_put_failure;
+ #endif
+ 	nlmsg_end(skb, nlh);
+@@ -2671,7 +2674,6 @@ static int __ctnetlink_glue_build(struct sk_buff *skb, struct nf_conn *ct)
+ {
+ 	const struct nf_conntrack_zone *zone;
+ 	struct nlattr *nest_parms;
+-	u32 mark;
+ 
+ 	zone = nf_ct_zone(ct);
+ 
+@@ -2729,8 +2731,7 @@ static int __ctnetlink_glue_build(struct sk_buff *skb, struct nf_conn *ct)
+ 		goto nla_put_failure;
+ 
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+-	mark = READ_ONCE(ct->mark);
+-	if (mark && ctnetlink_dump_mark(skb, mark) < 0)
++	if (ctnetlink_dump_mark(skb, ct) < 0)
+ 		goto nla_put_failure;
+ #endif
+ 	if (ctnetlink_dump_labels(skb, ct) < 0)
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 949da87dbb063..30cf0673d6c19 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1162,6 +1162,7 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 	struct nft_pipapo_match *m = priv->clone;
+ 	u8 genmask = nft_genmask_next(net);
+ 	struct nft_pipapo_field *f;
++	const u8 *start_p, *end_p;
+ 	int i, bsize_max, err = 0;
+ 
+ 	if (nft_set_ext_exists(ext, NFT_SET_EXT_KEY_END))
+@@ -1202,9 +1203,9 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 	}
+ 
+ 	/* Validate */
++	start_p = start;
++	end_p = end;
+ 	nft_pipapo_for_each_field(f, i, m) {
+-		const u8 *start_p = start, *end_p = end;
+-
+ 		if (f->rules >= (unsigned long)NFT_PIPAPO_RULE0_MAX)
+ 			return -ENOSPC;
+ 
+diff --git a/net/nfc/nci/ntf.c b/net/nfc/nci/ntf.c
+index 33e1170817f0f..f8b20cddd5c96 100644
+--- a/net/nfc/nci/ntf.c
++++ b/net/nfc/nci/ntf.c
+@@ -218,6 +218,8 @@ static int nci_add_new_protocol(struct nci_dev *ndev,
+ 		target->sens_res = nfca_poll->sens_res;
+ 		target->sel_res = nfca_poll->sel_res;
+ 		target->nfcid1_len = nfca_poll->nfcid1_len;
++		if (target->nfcid1_len > ARRAY_SIZE(target->nfcid1))
++			return -EPROTO;
+ 		if (target->nfcid1_len > 0) {
+ 			memcpy(target->nfcid1, nfca_poll->nfcid1,
+ 			       target->nfcid1_len);
+@@ -226,6 +228,8 @@ static int nci_add_new_protocol(struct nci_dev *ndev,
+ 		nfcb_poll = (struct rf_tech_specific_params_nfcb_poll *)params;
+ 
+ 		target->sensb_res_len = nfcb_poll->sensb_res_len;
++		if (target->sensb_res_len > ARRAY_SIZE(target->sensb_res))
++			return -EPROTO;
+ 		if (target->sensb_res_len > 0) {
+ 			memcpy(target->sensb_res, nfcb_poll->sensb_res,
+ 			       target->sensb_res_len);
+@@ -234,6 +238,8 @@ static int nci_add_new_protocol(struct nci_dev *ndev,
+ 		nfcf_poll = (struct rf_tech_specific_params_nfcf_poll *)params;
+ 
+ 		target->sensf_res_len = nfcf_poll->sensf_res_len;
++		if (target->sensf_res_len > ARRAY_SIZE(target->sensf_res))
++			return -EPROTO;
+ 		if (target->sensf_res_len > 0) {
+ 			memcpy(target->sensf_res, nfcf_poll->sensf_res,
+ 			       target->sensf_res_len);
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index 064fdb8e50e19..c1e56d1f21b38 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -2188,7 +2188,9 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 	if (tipc_own_addr(l->net) > msg_prevnode(hdr))
+ 		l->net_plane = msg_net_plane(hdr);
+ 
+-	skb_linearize(skb);
++	if (skb_linearize(skb))
++		goto exit;
++
+ 	hdr = buf_msg(skb);
+ 	data = msg_data(hdr);
+ 
+diff --git a/net/tipc/node.c b/net/tipc/node.c
+index 60059827563ae..7589f2ac6fd04 100644
+--- a/net/tipc/node.c
++++ b/net/tipc/node.c
+@@ -1660,6 +1660,7 @@ int tipc_node_xmit(struct net *net, struct sk_buff_head *list,
+ 	struct tipc_node *n;
+ 	struct sk_buff_head xmitq;
+ 	bool node_up = false;
++	struct net *peer_net;
+ 	int bearer_id;
+ 	int rc;
+ 
+@@ -1676,18 +1677,23 @@ int tipc_node_xmit(struct net *net, struct sk_buff_head *list,
+ 		return -EHOSTUNREACH;
+ 	}
+ 
++	rcu_read_lock();
+ 	tipc_node_read_lock(n);
+ 	node_up = node_is_up(n);
+-	if (node_up && n->peer_net && check_net(n->peer_net)) {
++	peer_net = n->peer_net;
++	tipc_node_read_unlock(n);
++	if (node_up && peer_net && check_net(peer_net)) {
+ 		/* xmit inner linux container */
+-		tipc_lxc_xmit(n->peer_net, list);
++		tipc_lxc_xmit(peer_net, list);
+ 		if (likely(skb_queue_empty(list))) {
+-			tipc_node_read_unlock(n);
++			rcu_read_unlock();
+ 			tipc_node_put(n);
+ 			return 0;
+ 		}
+ 	}
++	rcu_read_unlock();
+ 
++	tipc_node_read_lock(n);
+ 	bearer_id = n->active_links[selector & 1];
+ 	if (unlikely(bearer_id == INVALID_BEARER_ID)) {
+ 		tipc_node_read_unlock(n);
+diff --git a/net/unix/diag.c b/net/unix/diag.c
+index 9ff64f9df1f3b..951b33fa8f5cf 100644
+--- a/net/unix/diag.c
++++ b/net/unix/diag.c
+@@ -113,14 +113,16 @@ static int sk_diag_show_rqlen(struct sock *sk, struct sk_buff *nlskb)
+ 	return nla_put(nlskb, UNIX_DIAG_RQLEN, sizeof(rql), &rql);
+ }
+ 
+-static int sk_diag_dump_uid(struct sock *sk, struct sk_buff *nlskb)
++static int sk_diag_dump_uid(struct sock *sk, struct sk_buff *nlskb,
++			    struct user_namespace *user_ns)
+ {
+-	uid_t uid = from_kuid_munged(sk_user_ns(nlskb->sk), sock_i_uid(sk));
++	uid_t uid = from_kuid_munged(user_ns, sock_i_uid(sk));
+ 	return nla_put(nlskb, UNIX_DIAG_UID, sizeof(uid_t), &uid);
+ }
+ 
+ static int sk_diag_fill(struct sock *sk, struct sk_buff *skb, struct unix_diag_req *req,
+-		u32 portid, u32 seq, u32 flags, int sk_ino)
++			struct user_namespace *user_ns,
++			u32 portid, u32 seq, u32 flags, int sk_ino)
+ {
+ 	struct nlmsghdr *nlh;
+ 	struct unix_diag_msg *rep;
+@@ -166,7 +168,7 @@ static int sk_diag_fill(struct sock *sk, struct sk_buff *skb, struct unix_diag_r
+ 		goto out_nlmsg_trim;
+ 
+ 	if ((req->udiag_show & UDIAG_SHOW_UID) &&
+-	    sk_diag_dump_uid(sk, skb))
++	    sk_diag_dump_uid(sk, skb, user_ns))
+ 		goto out_nlmsg_trim;
+ 
+ 	nlmsg_end(skb, nlh);
+@@ -178,7 +180,8 @@ out_nlmsg_trim:
+ }
+ 
+ static int sk_diag_dump(struct sock *sk, struct sk_buff *skb, struct unix_diag_req *req,
+-		u32 portid, u32 seq, u32 flags)
++			struct user_namespace *user_ns,
++			u32 portid, u32 seq, u32 flags)
+ {
+ 	int sk_ino;
+ 
+@@ -189,7 +192,7 @@ static int sk_diag_dump(struct sock *sk, struct sk_buff *skb, struct unix_diag_r
+ 	if (!sk_ino)
+ 		return 0;
+ 
+-	return sk_diag_fill(sk, skb, req, portid, seq, flags, sk_ino);
++	return sk_diag_fill(sk, skb, req, user_ns, portid, seq, flags, sk_ino);
+ }
+ 
+ static int unix_diag_dump(struct sk_buff *skb, struct netlink_callback *cb)
+@@ -217,7 +220,7 @@ static int unix_diag_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ 				goto next;
+ 			if (!(req->udiag_states & (1 << sk->sk_state)))
+ 				goto next;
+-			if (sk_diag_dump(sk, skb, req,
++			if (sk_diag_dump(sk, skb, req, sk_user_ns(skb->sk),
+ 					 NETLINK_CB(cb->skb).portid,
+ 					 cb->nlh->nlmsg_seq,
+ 					 NLM_F_MULTI) < 0)
+@@ -285,7 +288,8 @@ again:
+ 	if (!rep)
+ 		goto out;
+ 
+-	err = sk_diag_fill(sk, rep, req, NETLINK_CB(in_skb).portid,
++	err = sk_diag_fill(sk, rep, req, sk_user_ns(NETLINK_CB(in_skb).sk),
++			   NETLINK_CB(in_skb).portid,
+ 			   nlh->nlmsg_seq, 0, req->udiag_ino);
+ 	if (err < 0) {
+ 		nlmsg_free(rep);
+diff --git a/sound/core/seq/seq_memory.c b/sound/core/seq/seq_memory.c
+index 65db1a7c77b76..bb76a2dd0a2ff 100644
+--- a/sound/core/seq/seq_memory.c
++++ b/sound/core/seq/seq_memory.c
+@@ -112,15 +112,19 @@ EXPORT_SYMBOL(snd_seq_dump_var_event);
+  * expand the variable length event to linear buffer space.
+  */
+ 
+-static int seq_copy_in_kernel(char **bufptr, const void *src, int size)
++static int seq_copy_in_kernel(void *ptr, void *src, int size)
+ {
++	char **bufptr = ptr;
++
+ 	memcpy(*bufptr, src, size);
+ 	*bufptr += size;
+ 	return 0;
+ }
+ 
+-static int seq_copy_in_user(char __user **bufptr, const void *src, int size)
++static int seq_copy_in_user(void *ptr, void *src, int size)
+ {
++	char __user **bufptr = ptr;
++
+ 	if (copy_to_user(*bufptr, src, size))
+ 		return -EFAULT;
+ 	*bufptr += size;
+@@ -149,8 +153,7 @@ int snd_seq_expand_var_event(const struct snd_seq_event *event, int count, char
+ 		return newlen;
+ 	}
+ 	err = snd_seq_dump_var_event(event,
+-				     in_kernel ? (snd_seq_dump_func_t)seq_copy_in_kernel :
+-				     (snd_seq_dump_func_t)seq_copy_in_user,
++				     in_kernel ? seq_copy_in_kernel : seq_copy_in_user,
+ 				     &buf);
+ 	return err < 0 ? err : newlen;
+ }
+diff --git a/sound/soc/codecs/wm8962.c b/sound/soc/codecs/wm8962.c
+index 21574447650cd..57aeded978c28 100644
+--- a/sound/soc/codecs/wm8962.c
++++ b/sound/soc/codecs/wm8962.c
+@@ -2489,6 +2489,14 @@ static void wm8962_configure_bclk(struct snd_soc_component *component)
+ 		snd_soc_component_update_bits(component, WM8962_CLOCKING2,
+ 				WM8962_SYSCLK_ENA_MASK, WM8962_SYSCLK_ENA);
+ 
++	/* DSPCLK_DIV field in WM8962_CLOCKING1 register is used to generate
++	 * correct frequency of LRCLK and BCLK. Sometimes the read-only value
++	 * can't be updated timely after enabling SYSCLK. This results in wrong
++	 * calculation values. Delay is introduced here to wait for newest
++	 * value from register. The time of the delay should be at least
++	 * 500~1000us according to test.
++	 */
++	usleep_range(500, 1000);
+ 	dspclk = snd_soc_component_read(component, WM8962_CLOCKING1);
+ 
+ 	if (snd_soc_component_get_bias_level(component) != SND_SOC_BIAS_ON)
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 0e2261ee07b67..fb874f924bbe3 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1154,6 +1154,8 @@ static void dpcm_be_reparent(struct snd_soc_pcm_runtime *fe,
+ 		return;
+ 
+ 	be_substream = snd_soc_dpcm_get_substream(be, stream);
++	if (!be_substream)
++		return;
+ 
+ 	for_each_dpcm_fe(be, stream, dpcm) {
+ 		if (dpcm->fe == fe)
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index a7f53c2a9580a..0f3bf90e04d36 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -1622,13 +1622,21 @@ ipv4_del_addr_test()
+ 
+ 	$IP addr add dev dummy1 172.16.104.1/24
+ 	$IP addr add dev dummy1 172.16.104.11/24
++	$IP addr add dev dummy1 172.16.104.12/24
++	$IP addr add dev dummy1 172.16.104.13/24
+ 	$IP addr add dev dummy2 172.16.104.1/24
+ 	$IP addr add dev dummy2 172.16.104.11/24
++	$IP addr add dev dummy2 172.16.104.12/24
+ 	$IP route add 172.16.105.0/24 via 172.16.104.2 src 172.16.104.11
++	$IP route add 172.16.106.0/24 dev lo src 172.16.104.12
++	$IP route add table 0 172.16.107.0/24 via 172.16.104.2 src 172.16.104.13
+ 	$IP route add vrf red 172.16.105.0/24 via 172.16.104.2 src 172.16.104.11
++	$IP route add vrf red 172.16.106.0/24 dev lo src 172.16.104.12
+ 	set +e
+ 
+ 	# removing address from device in vrf should only remove route from vrf table
++	echo "    Regular FIB info"
++
+ 	$IP addr del dev dummy2 172.16.104.11/24
+ 	$IP ro ls vrf red | grep -q 172.16.105.0/24
+ 	log_test $? 1 "Route removed from VRF when source address deleted"
+@@ -1646,6 +1654,35 @@ ipv4_del_addr_test()
+ 	$IP ro ls vrf red | grep -q 172.16.105.0/24
+ 	log_test $? 0 "Route in VRF is not removed by address delete"
+ 
++	# removing address from device in vrf should only remove route from vrf
++	# table even when the associated fib info only differs in table ID
++	echo "    Identical FIB info with different table ID"
++
++	$IP addr del dev dummy2 172.16.104.12/24
++	$IP ro ls vrf red | grep -q 172.16.106.0/24
++	log_test $? 1 "Route removed from VRF when source address deleted"
++
++	$IP ro ls | grep -q 172.16.106.0/24
++	log_test $? 0 "Route in default VRF not removed"
++
++	$IP addr add dev dummy2 172.16.104.12/24
++	$IP route add vrf red 172.16.106.0/24 dev lo src 172.16.104.12
++
++	$IP addr del dev dummy1 172.16.104.12/24
++	$IP ro ls | grep -q 172.16.106.0/24
++	log_test $? 1 "Route removed in default VRF when source address deleted"
++
++	$IP ro ls vrf red | grep -q 172.16.106.0/24
++	log_test $? 0 "Route in VRF is not removed by address delete"
++
++	# removing address from device in default vrf should remove route from
++	# the default vrf even when route was inserted with a table ID of 0.
++	echo "    Table ID 0"
++
++	$IP addr del dev dummy1 172.16.104.13/24
++	$IP ro ls | grep -q 172.16.107.0/24
++	log_test $? 1 "Route removed in default VRF when source address deleted"
++
+ 	$IP li del dummy1
+ 	$IP li del dummy2
+ 	cleanup
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index c9ce3dfa42ee7..c3a905923ef29 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -782,7 +782,7 @@ kci_test_ipsec_offload()
+ 	    tmpl proto esp src $srcip dst $dstip spi 9 \
+ 	    mode transport reqid 42
+ 	check_err $?
+-	ip x p add dir out src $dstip/24 dst $srcip/24 \
++	ip x p add dir in src $dstip/24 dst $srcip/24 \
+ 	    tmpl proto esp src $dstip dst $srcip spi 9 \
+ 	    mode transport reqid 42
+ 	check_err $?


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-12-19 12:33 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2022-12-19 12:33 UTC (permalink / raw
  To: gentoo-commits

commit:     98f372e307134f481931352a10ae2cb9c681202d
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 19 12:25:43 2022 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Mon Dec 19 12:26:24 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=98f372e3

Linux patch 5.10.160

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |   4 +
 1159_linux-5.10.160.patch | 430 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 434 insertions(+)

diff --git a/0000_README b/0000_README
index a20d23b7..95418f32 100644
--- a/0000_README
+++ b/0000_README
@@ -679,6 +679,10 @@ Patch:  1158_linux-5.10.159.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.159
 
+Patch:  1159_linux-5.10.160.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.160
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1159_linux-5.10.160.patch b/1159_linux-5.10.160.patch
new file mode 100644
index 00000000..d33cffbb
--- /dev/null
+++ b/1159_linux-5.10.160.patch
@@ -0,0 +1,430 @@
+diff --git a/Makefile b/Makefile
+index bb9fab281555a..6f7dae2f1a4eb 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 159
++SUBLEVEL = 160
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
+index 6a80f36b5d598..5f436cb4f7c49 100644
+--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
++++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
+@@ -794,8 +794,6 @@ void mtrr_ap_init(void)
+ 	if (!use_intel() || mtrr_aps_delayed_init)
+ 		return;
+ 
+-	rcu_cpu_starting(smp_processor_id());
+-
+ 	/*
+ 	 * Ideally we should hold mtrr_mutex here to avoid mtrr entries
+ 	 * changed, but this routine will be called in cpu boot time,
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 8baff500914ea..e8e5515fb7e9c 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -229,6 +229,7 @@ static void notrace start_secondary(void *unused)
+ #endif
+ 	cpu_init_exception_handling();
+ 	cpu_init();
++	rcu_cpu_starting(raw_smp_processor_id());
+ 	x86_cpuinit.early_percpu_clock_init();
+ 	smp_callin();
+ 
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index 21063335ab599..c07e327929ba5 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -47,6 +47,10 @@
+ #define MCBA_VER_REQ_USB 1
+ #define MCBA_VER_REQ_CAN 2
+ 
++/* Drive the CAN_RES signal LOW "0" to activate R24 and R25 */
++#define MCBA_VER_TERMINATION_ON 0
++#define MCBA_VER_TERMINATION_OFF 1
++
+ #define MCBA_SIDL_EXID_MASK 0x8
+ #define MCBA_DLC_MASK 0xf
+ #define MCBA_DLC_RTR_MASK 0x40
+@@ -469,7 +473,7 @@ static void mcba_usb_process_ka_usb(struct mcba_priv *priv,
+ 		priv->usb_ka_first_pass = false;
+ 	}
+ 
+-	if (msg->termination_state)
++	if (msg->termination_state == MCBA_VER_TERMINATION_ON)
+ 		priv->can.termination = MCBA_TERMINATION_ENABLED;
+ 	else
+ 		priv->can.termination = MCBA_TERMINATION_DISABLED;
+@@ -789,9 +793,9 @@ static int mcba_set_termination(struct net_device *netdev, u16 term)
+ 	};
+ 
+ 	if (term == MCBA_TERMINATION_ENABLED)
+-		usb_msg.termination = 1;
++		usb_msg.termination = MCBA_VER_TERMINATION_ON;
+ 	else
+-		usb_msg.termination = 0;
++		usb_msg.termination = MCBA_VER_TERMINATION_OFF;
+ 
+ 	mcba_usb_xmit_cmd(priv, (struct mcba_usb_msg *)&usb_msg);
+ 
+diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c
+index 6ef48eb3a77d4..b163489489e95 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c
++++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c
+@@ -874,7 +874,6 @@ area_cache_get(struct nfp_cpp *cpp, u32 id,
+ 	}
+ 
+ 	/* Adjust the start address to be cache size aligned */
+-	cache->id = id;
+ 	cache->addr = addr & ~(u64)(cache->size - 1);
+ 
+ 	/* Re-init to the new ID and address */
+@@ -894,6 +893,8 @@ area_cache_get(struct nfp_cpp *cpp, u32 id,
+ 		return NULL;
+ 	}
+ 
++	cache->id = id;
++
+ exit:
+ 	/* Adjust offset */
+ 	*offset = addr - cache->addr;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 089f391035848..c222d7bf6ce19 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -817,6 +817,8 @@ static blk_status_t nvme_setup_prp_simple(struct nvme_dev *dev,
+ 	cmnd->dptr.prp1 = cpu_to_le64(iod->first_dma);
+ 	if (bv->bv_len > first_prp_len)
+ 		cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma + first_prp_len);
++	else
++		cmnd->dptr.prp2 = 0;
+ 	return BLK_STS_OK;
+ }
+ 
+diff --git a/drivers/pinctrl/mediatek/mtk-eint.c b/drivers/pinctrl/mediatek/mtk-eint.c
+index 22736f60c16ca..64a32d3ca4813 100644
+--- a/drivers/pinctrl/mediatek/mtk-eint.c
++++ b/drivers/pinctrl/mediatek/mtk-eint.c
+@@ -278,12 +278,15 @@ static struct irq_chip mtk_eint_irq_chip = {
+ 
+ static unsigned int mtk_eint_hw_init(struct mtk_eint *eint)
+ {
+-	void __iomem *reg = eint->base + eint->regs->dom_en;
++	void __iomem *dom_en = eint->base + eint->regs->dom_en;
++	void __iomem *mask_set = eint->base + eint->regs->mask_set;
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < eint->hw->ap_num; i += 32) {
+-		writel(0xffffffff, reg);
+-		reg += 4;
++		writel(0xffffffff, dom_en);
++		writel(0xffffffff, mask_set);
++		dom_en += 4;
++		mask_set += 4;
+ 	}
+ 
+ 	return 0;
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 8e95a75a4559c..80a9e50392a09 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -205,7 +205,7 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags)
+ 	if (inode && fuse_is_bad(inode))
+ 		goto invalid;
+ 	else if (time_before64(fuse_dentry_time(entry), get_jiffies_64()) ||
+-		 (flags & LOOKUP_REVAL)) {
++		 (flags & (LOOKUP_EXCL | LOOKUP_REVAL))) {
+ 		struct fuse_entry_out outarg;
+ 		FUSE_ARGS(args);
+ 		struct fuse_forget_link *forget;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index c5c22b067cd81..84758e512a045 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -936,7 +936,7 @@ static const struct io_op_def io_op_defs[] = {
+ 		.needs_file		= 1,
+ 		.hash_reg_file		= 1,
+ 		.unbound_nonreg_file	= 1,
+-		.work_flags		= IO_WQ_WORK_BLKCG,
++		.work_flags		= IO_WQ_WORK_BLKCG | IO_WQ_WORK_FILES,
+ 	},
+ 	[IORING_OP_PROVIDE_BUFFERS] = {},
+ 	[IORING_OP_REMOVE_BUFFERS] = {},
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index a4ae1fcd2ab1e..b09ead06a2490 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -570,6 +570,7 @@ out_err:
+ ssize_t nfsd_copy_file_range(struct file *src, u64 src_pos, struct file *dst,
+ 			     u64 dst_pos, u64 count)
+ {
++	ssize_t ret;
+ 
+ 	/*
+ 	 * Limit copy to 4MB to prevent indefinitely blocking an nfsd
+@@ -580,7 +581,12 @@ ssize_t nfsd_copy_file_range(struct file *src, u64 src_pos, struct file *dst,
+ 	 * limit like this and pipeline multiple COPY requests.
+ 	 */
+ 	count = min_t(u64, count, 1 << 22);
+-	return vfs_copy_file_range(src, src_pos, dst, dst_pos, count, 0);
++	ret = vfs_copy_file_range(src, src_pos, dst, dst_pos, count, 0);
++
++	if (ret == -EOPNOTSUPP || ret == -EXDEV)
++		ret = vfs_copy_file_range(src, src_pos, dst, dst_pos, count,
++					  COPY_FILE_SPLICE);
++	return ret;
+ }
+ 
+ __be32 nfsd4_vfs_fallocate(struct svc_rqst *rqstp, struct svc_fh *fhp,
+diff --git a/fs/read_write.c b/fs/read_write.c
+index 75f764b434184..0066acb6b380d 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -1388,28 +1388,6 @@ ssize_t generic_copy_file_range(struct file *file_in, loff_t pos_in,
+ }
+ EXPORT_SYMBOL(generic_copy_file_range);
+ 
+-static ssize_t do_copy_file_range(struct file *file_in, loff_t pos_in,
+-				  struct file *file_out, loff_t pos_out,
+-				  size_t len, unsigned int flags)
+-{
+-	/*
+-	 * Although we now allow filesystems to handle cross sb copy, passing
+-	 * a file of the wrong filesystem type to filesystem driver can result
+-	 * in an attempt to dereference the wrong type of ->private_data, so
+-	 * avoid doing that until we really have a good reason.  NFS defines
+-	 * several different file_system_type structures, but they all end up
+-	 * using the same ->copy_file_range() function pointer.
+-	 */
+-	if (file_out->f_op->copy_file_range &&
+-	    file_out->f_op->copy_file_range == file_in->f_op->copy_file_range)
+-		return file_out->f_op->copy_file_range(file_in, pos_in,
+-						       file_out, pos_out,
+-						       len, flags);
+-
+-	return generic_copy_file_range(file_in, pos_in, file_out, pos_out, len,
+-				       flags);
+-}
+-
+ /*
+  * Performs necessary checks before doing a file copy
+  *
+@@ -1431,6 +1409,26 @@ static int generic_copy_file_checks(struct file *file_in, loff_t pos_in,
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * We allow some filesystems to handle cross sb copy, but passing
++	 * a file of the wrong filesystem type to filesystem driver can result
++	 * in an attempt to dereference the wrong type of ->private_data, so
++	 * avoid doing that until we really have a good reason.
++	 *
++	 * nfs and cifs define several different file_system_type structures
++	 * and several different sets of file_operations, but they all end up
++	 * using the same ->copy_file_range() function pointer.
++	 */
++	if (flags & COPY_FILE_SPLICE) {
++		/* cross sb splice is allowed */
++	} else if (file_out->f_op->copy_file_range) {
++		if (file_in->f_op->copy_file_range !=
++		    file_out->f_op->copy_file_range)
++			return -EXDEV;
++	} else if (file_inode(file_in)->i_sb != file_inode(file_out)->i_sb) {
++		return -EXDEV;
++	}
++
+ 	/* Don't touch certain kinds of inodes */
+ 	if (IS_IMMUTABLE(inode_out))
+ 		return -EPERM;
+@@ -1473,8 +1471,9 @@ ssize_t vfs_copy_file_range(struct file *file_in, loff_t pos_in,
+ 			    size_t len, unsigned int flags)
+ {
+ 	ssize_t ret;
++	bool splice = flags & COPY_FILE_SPLICE;
+ 
+-	if (flags != 0)
++	if (flags & ~COPY_FILE_SPLICE)
+ 		return -EINVAL;
+ 
+ 	ret = generic_copy_file_checks(file_in, pos_in, file_out, pos_out, &len,
+@@ -1496,26 +1495,43 @@ ssize_t vfs_copy_file_range(struct file *file_in, loff_t pos_in,
+ 	file_start_write(file_out);
+ 
+ 	/*
+-	 * Try cloning first, this is supported by more file systems, and
+-	 * more efficient if both clone and copy are supported (e.g. NFS).
++	 * Cloning is supported by more file systems, so we implement copy on
++	 * same sb using clone, but for filesystems where both clone and copy
++	 * are supported (e.g. nfs,cifs), we only call the copy method.
+ 	 */
+-	if (file_in->f_op->remap_file_range &&
+-	    file_inode(file_in)->i_sb == file_inode(file_out)->i_sb) {
+-		loff_t cloned;
++	if (!splice && file_out->f_op->copy_file_range) {
++		ret = file_out->f_op->copy_file_range(file_in, pos_in,
++						      file_out, pos_out,
++						      len, flags);
++		goto done;
++	}
+ 
+-		cloned = file_in->f_op->remap_file_range(file_in, pos_in,
++	if (!splice && file_in->f_op->remap_file_range &&
++	    file_inode(file_in)->i_sb == file_inode(file_out)->i_sb) {
++		ret = file_in->f_op->remap_file_range(file_in, pos_in,
+ 				file_out, pos_out,
+ 				min_t(loff_t, MAX_RW_COUNT, len),
+ 				REMAP_FILE_CAN_SHORTEN);
+-		if (cloned > 0) {
+-			ret = cloned;
++		if (ret > 0)
+ 			goto done;
+-		}
+ 	}
+ 
+-	ret = do_copy_file_range(file_in, pos_in, file_out, pos_out, len,
+-				flags);
+-	WARN_ON_ONCE(ret == -EOPNOTSUPP);
++	/*
++	 * We can get here for same sb copy of filesystems that do not implement
++	 * ->copy_file_range() in case filesystem does not support clone or in
++	 * case filesystem supports clone but rejected the clone request (e.g.
++	 * because it was not block aligned).
++	 *
++	 * In both cases, fall back to kernel copy so we are able to maintain a
++	 * consistent story about which filesystems support copy_file_range()
++	 * and which filesystems do not, that will allow userspace tools to
++	 * make consistent desicions w.r.t using copy_file_range().
++	 *
++	 * We also get here if caller (e.g. nfsd) requested COPY_FILE_SPLICE.
++	 */
++	ret = generic_copy_file_range(file_in, pos_in, file_out, pos_out, len,
++				      flags);
++
+ done:
+ 	if (ret > 0) {
+ 		fsnotify_access(file_in);
+@@ -1566,6 +1582,10 @@ SYSCALL_DEFINE6(copy_file_range, int, fd_in, loff_t __user *, off_in,
+ 		pos_out = f_out.file->f_pos;
+ 	}
+ 
++	ret = -EINVAL;
++	if (flags != 0)
++		goto out;
++
+ 	ret = vfs_copy_file_range(f_in.file, pos_in, f_out.file, pos_out, len,
+ 				  flags);
+ 	if (ret > 0) {
+diff --git a/include/linux/can/platform/sja1000.h b/include/linux/can/platform/sja1000.h
+index 5755ae5a47122..6a869682c1207 100644
+--- a/include/linux/can/platform/sja1000.h
++++ b/include/linux/can/platform/sja1000.h
+@@ -14,7 +14,7 @@
+ #define OCR_MODE_TEST     0x01
+ #define OCR_MODE_NORMAL   0x02
+ #define OCR_MODE_CLOCK    0x03
+-#define OCR_MODE_MASK     0x07
++#define OCR_MODE_MASK     0x03
+ #define OCR_TX0_INVERT    0x04
+ #define OCR_TX0_PULLDOWN  0x08
+ #define OCR_TX0_PULLUP    0x10
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index df54acdd35549..ebfc0b2b4969e 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1817,6 +1817,14 @@ struct dir_context {
+  */
+ #define REMAP_FILE_ADVISORY		(REMAP_FILE_CAN_SHORTEN)
+ 
++/*
++ * These flags control the behavior of vfs_copy_file_range().
++ * They are not available to the user via syscall.
++ *
++ * COPY_FILE_SPLICE: call splice direct instead of fs clone/copy ops
++ */
++#define COPY_FILE_SPLICE		(1 << 0)
++
+ struct iov_iter;
+ 
+ struct file_operations {
+diff --git a/sound/soc/codecs/cs42l51.c b/sound/soc/codecs/cs42l51.c
+index fc6a2bc311b4f..c61b17dc2af87 100644
+--- a/sound/soc/codecs/cs42l51.c
++++ b/sound/soc/codecs/cs42l51.c
+@@ -146,7 +146,7 @@ static const struct snd_kcontrol_new cs42l51_snd_controls[] = {
+ 			0, 0xA0, 96, adc_att_tlv),
+ 	SOC_DOUBLE_R_SX_TLV("PGA Volume",
+ 			CS42L51_ALC_PGA_CTL, CS42L51_ALC_PGB_CTL,
+-			0, 0x19, 30, pga_tlv),
++			0, 0x1A, 30, pga_tlv),
+ 	SOC_SINGLE("Playback Deemphasis Switch", CS42L51_DAC_CTL, 3, 1, 0),
+ 	SOC_SINGLE("Auto-Mute Switch", CS42L51_DAC_CTL, 2, 1, 0),
+ 	SOC_SINGLE("Soft Ramp Switch", CS42L51_DAC_CTL, 1, 1, 0),
+diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
+index efc5daf53bbae..6c794605e33c9 100644
+--- a/sound/soc/fsl/fsl_micfil.c
++++ b/sound/soc/fsl/fsl_micfil.c
+@@ -190,6 +190,25 @@ static int fsl_micfil_reset(struct device *dev)
+ 		return ret;
+ 	}
+ 
++	/*
++	 * SRES is self-cleared bit, but REG_MICFIL_CTRL1 is defined
++	 * as non-volatile register, so SRES still remain in regmap
++	 * cache after set, that every update of REG_MICFIL_CTRL1,
++	 * software reset happens. so clear it explicitly.
++	 */
++	ret = regmap_clear_bits(micfil->regmap, REG_MICFIL_CTRL1,
++				MICFIL_CTRL1_SRES);
++	if (ret)
++		return ret;
++
++	/*
++	 * Set SRES should clear CHnF flags, But even add delay here
++	 * the CHnF may not be cleared sometimes, so clear CHnF explicitly.
++	 */
++	ret = regmap_write_bits(micfil->regmap, REG_MICFIL_STAT, 0xFF, 0xFF);
++	if (ret)
++		return ret;
++
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 5fdd96e77ef3b..daecd386d5ec8 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -445,8 +445,15 @@ int snd_soc_put_volsw_sx(struct snd_kcontrol *kcontrol,
+ 		return err;
+ 
+ 	if (snd_soc_volsw_is_stereo(mc)) {
++		val2 = ucontrol->value.integer.value[1];
++
++		if (mc->platform_max && val2 > mc->platform_max)
++			return -EINVAL;
++		if (val2 > max)
++			return -EINVAL;
++
+ 		val_mask = mask << rshift;
+-		val2 = (ucontrol->value.integer.value[1] + min) & mask;
++		val2 = (val2 + min) & mask;
+ 		val2 = val2 << rshift;
+ 
+ 		err = snd_soc_component_update_bits(component, reg2, val_mask,
+diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
+index d38284a3aaf0b..13393f0eab25c 100644
+--- a/tools/lib/bpf/libbpf_probes.c
++++ b/tools/lib/bpf/libbpf_probes.c
+@@ -244,7 +244,7 @@ bool bpf_probe_map_type(enum bpf_map_type map_type, __u32 ifindex)
+ 	case BPF_MAP_TYPE_RINGBUF:
+ 		key_size = 0;
+ 		value_size = 0;
+-		max_entries = 4096;
++		max_entries = sysconf(_SC_PAGE_SIZE);
+ 		break;
+ 	case BPF_MAP_TYPE_UNSPEC:
+ 	case BPF_MAP_TYPE_HASH:


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2022-12-21 19:00 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2022-12-21 19:00 UTC (permalink / raw
  To: gentoo-commits

commit:     cd463cfca7d376d6569ee22f3691892ba6d47b64
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 21 18:52:58 2022 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Dec 21 18:53:37 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cd463cfc

Linux patch 5.10.161

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |   4 +
 1160_linux-5.10.161.patch | 507 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 511 insertions(+)

diff --git a/0000_README b/0000_README
index 95418f32..775d0ab8 100644
--- a/0000_README
+++ b/0000_README
@@ -683,6 +683,10 @@ Patch:  1159_linux-5.10.160.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.160
 
+Patch:  1160_linux-5.10.161.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.161
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1160_linux-5.10.161.patch b/1160_linux-5.10.161.patch
new file mode 100644
index 00000000..82dcb9b2
--- /dev/null
+++ b/1160_linux-5.10.161.patch
@@ -0,0 +1,507 @@
+diff --git a/Makefile b/Makefile
+index 6f7dae2f1a4eb..68f8efa0cc301 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 160
++SUBLEVEL = 161
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 70a693f8f0343..2001566be3f5e 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -1150,7 +1150,9 @@
+ #define USB_DEVICE_ID_SYNAPTICS_DELL_K12A	0x2819
+ #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012	0x2968
+ #define USB_DEVICE_ID_SYNAPTICS_TP_V103	0x5710
++#define USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1002	0x73f4
+ #define USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1003	0x73f5
++#define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_017	0x73f6
+ #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5	0x81a7
+ 
+ #define USB_VENDOR_ID_TEXAS_INSTRUMENTS	0x2047
+diff --git a/drivers/hid/hid-ite.c b/drivers/hid/hid-ite.c
+index 742c052b0110a..b8cce9c196d8c 100644
+--- a/drivers/hid/hid-ite.c
++++ b/drivers/hid/hid-ite.c
+@@ -18,10 +18,21 @@ static __u8 *ite_report_fixup(struct hid_device *hdev, __u8 *rdesc, unsigned int
+ 	unsigned long quirks = (unsigned long)hid_get_drvdata(hdev);
+ 
+ 	if (quirks & QUIRK_TOUCHPAD_ON_OFF_REPORT) {
++		/* For Acer Aspire Switch 10 SW5-012 keyboard-dock */
+ 		if (*rsize == 188 && rdesc[162] == 0x81 && rdesc[163] == 0x02) {
+-			hid_info(hdev, "Fixing up ITE keyboard report descriptor\n");
++			hid_info(hdev, "Fixing up Acer Sw5-012 ITE keyboard report descriptor\n");
+ 			rdesc[163] = HID_MAIN_ITEM_RELATIVE;
+ 		}
++		/* For Acer One S1002/S1003 keyboard-dock */
++		if (*rsize == 188 && rdesc[185] == 0x81 && rdesc[186] == 0x02) {
++			hid_info(hdev, "Fixing up Acer S1002/S1003 ITE keyboard report descriptor\n");
++			rdesc[186] = HID_MAIN_ITEM_RELATIVE;
++		}
++		/* For Acer Aspire Switch 10E (SW3-016) keyboard-dock */
++		if (*rsize == 210 && rdesc[184] == 0x81 && rdesc[185] == 0x02) {
++			hid_info(hdev, "Fixing up Acer Aspire Switch 10E (SW3-016) ITE keyboard report descriptor\n");
++			rdesc[185] = HID_MAIN_ITEM_RELATIVE;
++		}
+ 	}
+ 
+ 	return rdesc;
+@@ -103,7 +114,18 @@ static const struct hid_device_id ite_devices[] = {
+ 	/* ITE8910 USB kbd ctlr, with Synaptics touchpad connected to it. */
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ 		     USB_VENDOR_ID_SYNAPTICS,
+-		     USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1003) },
++		     USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1002),
++	  .driver_data = QUIRK_TOUCHPAD_ON_OFF_REPORT },
++	/* ITE8910 USB kbd ctlr, with Synaptics touchpad connected to it. */
++	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
++		     USB_VENDOR_ID_SYNAPTICS,
++		     USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1003),
++	  .driver_data = QUIRK_TOUCHPAD_ON_OFF_REPORT },
++	/* ITE8910 USB kbd ctlr, with Synaptics touchpad connected to it. */
++	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
++		     USB_VENDOR_ID_SYNAPTICS,
++		     USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_017),
++	  .driver_data = QUIRK_TOUCHPAD_ON_OFF_REPORT },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(hid, ite_devices);
+diff --git a/drivers/hid/hid-uclogic-core.c b/drivers/hid/hid-uclogic-core.c
+index 4edb241957040..e4811d37ca775 100644
+--- a/drivers/hid/hid-uclogic-core.c
++++ b/drivers/hid/hid-uclogic-core.c
+@@ -172,6 +172,7 @@ static int uclogic_probe(struct hid_device *hdev,
+ 	 * than the pen, so use QUIRK_MULTI_INPUT for all tablets.
+ 	 */
+ 	hdev->quirks |= HID_QUIRK_MULTI_INPUT;
++	hdev->quirks |= HID_QUIRK_HIDINPUT_FORCE;
+ 
+ 	/* Allocate and assign driver data */
+ 	drvdata = devm_kzalloc(&hdev->dev, sizeof(*drvdata), GFP_KERNEL);
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 327196d15a6ae..f24f1a8ec2fbc 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -7416,7 +7416,7 @@ static void igb_vf_reset_msg(struct igb_adapter *adapter, u32 vf)
+ {
+ 	struct e1000_hw *hw = &adapter->hw;
+ 	unsigned char *vf_mac = adapter->vf_data[vf].vf_mac_addresses;
+-	u32 reg, msgbuf[3];
++	u32 reg, msgbuf[3] = {};
+ 	u8 *addr = (u8 *)(&msgbuf[1]);
+ 
+ 	/* process all the same items cleared in a function level reset */
+diff --git a/drivers/net/loopback.c b/drivers/net/loopback.c
+index a1c77cc004165..498e5c8013efb 100644
+--- a/drivers/net/loopback.c
++++ b/drivers/net/loopback.c
+@@ -208,7 +208,7 @@ static __net_init int loopback_net_init(struct net *net)
+ 	int err;
+ 
+ 	err = -ENOMEM;
+-	dev = alloc_netdev(0, "lo", NET_NAME_UNKNOWN, loopback_setup);
++	dev = alloc_netdev(0, "lo", NET_NAME_PREDICTABLE, loopback_setup);
+ 	if (!dev)
+ 		goto out;
+ 
+diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
+index fecdba85ab27f..5d39aff263f00 100644
+--- a/drivers/usb/gadget/function/f_uvc.c
++++ b/drivers/usb/gadget/function/f_uvc.c
+@@ -213,8 +213,9 @@ uvc_function_ep0_complete(struct usb_ep *ep, struct usb_request *req)
+ 
+ 		memset(&v4l2_event, 0, sizeof(v4l2_event));
+ 		v4l2_event.type = UVC_EVENT_DATA;
+-		uvc_event->data.length = req->actual;
+-		memcpy(&uvc_event->data.data, req->buf, req->actual);
++		uvc_event->data.length = min_t(unsigned int, req->actual,
++			sizeof(uvc_event->data.data));
++		memcpy(&uvc_event->data.data, req->buf, uvc_event->data.length);
+ 		v4l2_event_queue(&uvc->vdev, &v4l2_event);
+ 	}
+ }
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 0ee11a9370116..9168b492c02b7 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -59,6 +59,7 @@
+ #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI		0x9a13
+ #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI		0x1138
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI		0x51ed
++#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_PCH_XHCI	0x54ed
+ 
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_4			0x43b9
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3			0x43ba
+@@ -242,7 +243,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 		xhci->quirks |= XHCI_MISSING_CAS;
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+-	    pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI)
++	    (pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_PCH_XHCI))
+ 		xhci->quirks |= XHCI_RESET_TO_DEFAULT;
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 6b5ba6180c307..8a4a0d4dbc139 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -199,6 +199,8 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x16DC, 0x0015) }, /* W-IE-NE-R Plein & Baus GmbH CML Control, Monitoring and Data Logger */
+ 	{ USB_DEVICE(0x17A8, 0x0001) }, /* Kamstrup Optical Eye/3-wire */
+ 	{ USB_DEVICE(0x17A8, 0x0005) }, /* Kamstrup M-Bus Master MultiPort 250D */
++	{ USB_DEVICE(0x17A8, 0x0011) }, /* Kamstrup 444 MHz RF sniffer */
++	{ USB_DEVICE(0x17A8, 0x0013) }, /* Kamstrup 870 MHz RF sniffer */
+ 	{ USB_DEVICE(0x17A8, 0x0101) }, /* Kamstrup 868 MHz wM-Bus C-Mode Meter Reader (Int Ant) */
+ 	{ USB_DEVICE(0x17A8, 0x0102) }, /* Kamstrup 868 MHz wM-Bus C-Mode Meter Reader (Ext Ant) */
+ 	{ USB_DEVICE(0x17F4, 0xAAAA) }, /* Wavesense Jazz blood glucose meter */
+diff --git a/drivers/usb/serial/f81232.c b/drivers/usb/serial/f81232.c
+index 0c7eacc630e0e..11fe49543f26a 100644
+--- a/drivers/usb/serial/f81232.c
++++ b/drivers/usb/serial/f81232.c
+@@ -130,9 +130,6 @@ static u8 const clock_table[] = { F81232_CLK_1_846_MHZ, F81232_CLK_14_77_MHZ,
+ 
+ static int calc_baud_divisor(speed_t baudrate, speed_t clockrate)
+ {
+-	if (!baudrate)
+-		return 0;
+-
+ 	return DIV_ROUND_CLOSEST(clockrate, baudrate);
+ }
+ 
+@@ -523,9 +520,14 @@ static void f81232_set_baudrate(struct tty_struct *tty,
+ 	speed_t baud_list[] = { baudrate, old_baudrate, F81232_DEF_BAUDRATE };
+ 
+ 	for (i = 0; i < ARRAY_SIZE(baud_list); ++i) {
+-		idx = f81232_find_clk(baud_list[i]);
++		baudrate = baud_list[i];
++		if (baudrate == 0) {
++			tty_encode_baud_rate(tty, 0, 0);
++			return;
++		}
++
++		idx = f81232_find_clk(baudrate);
+ 		if (idx >= 0) {
+-			baudrate = baud_list[i];
+ 			tty_encode_baud_rate(tty, baudrate, baudrate);
+ 			break;
+ 		}
+diff --git a/drivers/usb/serial/f81534.c b/drivers/usb/serial/f81534.c
+index 5661fd03e5456..e952be683c63a 100644
+--- a/drivers/usb/serial/f81534.c
++++ b/drivers/usb/serial/f81534.c
+@@ -538,9 +538,6 @@ static int f81534_submit_writer(struct usb_serial_port *port, gfp_t mem_flags)
+ 
+ static u32 f81534_calc_baud_divisor(u32 baudrate, u32 clockrate)
+ {
+-	if (!baudrate)
+-		return 0;
+-
+ 	/* Round to nearest divisor */
+ 	return DIV_ROUND_CLOSEST(clockrate, baudrate);
+ }
+@@ -570,9 +567,14 @@ static int f81534_set_port_config(struct usb_serial_port *port,
+ 	u32 baud_list[] = {baudrate, old_baudrate, F81534_DEFAULT_BAUD_RATE};
+ 
+ 	for (i = 0; i < ARRAY_SIZE(baud_list); ++i) {
+-		idx = f81534_find_clk(baud_list[i]);
++		baudrate = baud_list[i];
++		if (baudrate == 0) {
++			tty_encode_baud_rate(tty, 0, 0);
++			return 0;
++		}
++
++		idx = f81534_find_clk(baudrate);
+ 		if (idx >= 0) {
+-			baudrate = baud_list[i];
+ 			tty_encode_baud_rate(tty, baudrate, baudrate);
+ 			break;
+ 		}
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 537ef276c78fe..5636b8f522167 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -255,6 +255,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EP06			0x0306
+ #define QUECTEL_PRODUCT_EM05G			0x030a
+ #define QUECTEL_PRODUCT_EM060K			0x030b
++#define QUECTEL_PRODUCT_EM05G_SG		0x0311
+ #define QUECTEL_PRODUCT_EM12			0x0512
+ #define QUECTEL_PRODUCT_RM500Q			0x0800
+ #define QUECTEL_PRODUCT_RM520N			0x0801
+@@ -1160,6 +1161,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G, 0xff),
+ 	  .driver_info = RSVD(6) | ZLP },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_SG, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x40) },
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index d32b836f6ca74..e94a18bb7f99f 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -438,6 +438,12 @@ static int udf_get_block(struct inode *inode, sector_t block,
+ 		iinfo->i_next_alloc_goal++;
+ 	}
+ 
++	/*
++	 * Block beyond EOF and prealloc extents? Just discard preallocation
++	 * as it is not useful and complicates things.
++	 */
++	if (((loff_t)block) << inode->i_blkbits > iinfo->i_lenExtents)
++		udf_discard_prealloc(inode);
+ 	udf_clear_extent_cache(inode);
+ 	phys = inode_getblk(inode, block, &err, &new);
+ 	if (!phys)
+@@ -487,8 +493,6 @@ static int udf_do_extend_file(struct inode *inode,
+ 	uint32_t add;
+ 	int count = 0, fake = !(last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
+ 	struct super_block *sb = inode->i_sb;
+-	struct kernel_lb_addr prealloc_loc = {};
+-	uint32_t prealloc_len = 0;
+ 	struct udf_inode_info *iinfo;
+ 	int err;
+ 
+@@ -509,19 +513,6 @@ static int udf_do_extend_file(struct inode *inode,
+ 			~(sb->s_blocksize - 1);
+ 	}
+ 
+-	/* Last extent are just preallocated blocks? */
+-	if ((last_ext->extLength & UDF_EXTENT_FLAG_MASK) ==
+-						EXT_NOT_RECORDED_ALLOCATED) {
+-		/* Save the extent so that we can reattach it to the end */
+-		prealloc_loc = last_ext->extLocation;
+-		prealloc_len = last_ext->extLength;
+-		/* Mark the extent as a hole */
+-		last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
+-			(last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
+-		last_ext->extLocation.logicalBlockNum = 0;
+-		last_ext->extLocation.partitionReferenceNum = 0;
+-	}
+-
+ 	/* Can we merge with the previous extent? */
+ 	if ((last_ext->extLength & UDF_EXTENT_FLAG_MASK) ==
+ 					EXT_NOT_RECORDED_NOT_ALLOCATED) {
+@@ -549,7 +540,7 @@ static int udf_do_extend_file(struct inode *inode,
+ 		 * more extents, we may need to enter possible following
+ 		 * empty indirect extent.
+ 		 */
+-		if (new_block_bytes || prealloc_len)
++		if (new_block_bytes)
+ 			udf_next_aext(inode, last_pos, &tmploc, &tmplen, 0);
+ 	}
+ 
+@@ -583,17 +574,6 @@ static int udf_do_extend_file(struct inode *inode,
+ 	}
+ 
+ out:
+-	/* Do we have some preallocated blocks saved? */
+-	if (prealloc_len) {
+-		err = udf_add_aext(inode, last_pos, &prealloc_loc,
+-				   prealloc_len, 1);
+-		if (err)
+-			return err;
+-		last_ext->extLocation = prealloc_loc;
+-		last_ext->extLength = prealloc_len;
+-		count++;
+-	}
+-
+ 	/* last_pos should point to the last written extent... */
+ 	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
+ 		last_pos->offset -= sizeof(struct short_ad);
+@@ -609,13 +589,17 @@ out:
+ static void udf_do_extend_final_block(struct inode *inode,
+ 				      struct extent_position *last_pos,
+ 				      struct kernel_long_ad *last_ext,
+-				      uint32_t final_block_len)
++				      uint32_t new_elen)
+ {
+-	struct super_block *sb = inode->i_sb;
+ 	uint32_t added_bytes;
+ 
+-	added_bytes = final_block_len -
+-		      (last_ext->extLength & (sb->s_blocksize - 1));
++	/*
++	 * Extent already large enough? It may be already rounded up to block
++	 * size...
++	 */
++	if (new_elen <= (last_ext->extLength & UDF_EXTENT_LENGTH_MASK))
++		return;
++	added_bytes = (last_ext->extLength & UDF_EXTENT_LENGTH_MASK) - new_elen;
+ 	last_ext->extLength += added_bytes;
+ 	UDF_I(inode)->i_lenExtents += added_bytes;
+ 
+@@ -632,12 +616,12 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
+ 	int8_t etype;
+ 	struct super_block *sb = inode->i_sb;
+ 	sector_t first_block = newsize >> sb->s_blocksize_bits, offset;
+-	unsigned long partial_final_block;
++	loff_t new_elen;
+ 	int adsize;
+ 	struct udf_inode_info *iinfo = UDF_I(inode);
+ 	struct kernel_long_ad extent;
+ 	int err = 0;
+-	int within_final_block;
++	bool within_last_ext;
+ 
+ 	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
+ 		adsize = sizeof(struct short_ad);
+@@ -646,8 +630,17 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
+ 	else
+ 		BUG();
+ 
++	/*
++	 * When creating hole in file, just don't bother with preserving
++	 * preallocation. It likely won't be very useful anyway.
++	 */
++	udf_discard_prealloc(inode);
++
+ 	etype = inode_bmap(inode, first_block, &epos, &eloc, &elen, &offset);
+-	within_final_block = (etype != -1);
++	within_last_ext = (etype != -1);
++	/* We don't expect extents past EOF... */
++	WARN_ON_ONCE(within_last_ext &&
++		     elen > ((loff_t)offset + 1) << inode->i_blkbits);
+ 
+ 	if ((!epos.bh && epos.offset == udf_file_entry_alloc_offset(inode)) ||
+ 	    (epos.bh && epos.offset == sizeof(struct allocExtDesc))) {
+@@ -663,19 +656,17 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
+ 		extent.extLength |= etype << 30;
+ 	}
+ 
+-	partial_final_block = newsize & (sb->s_blocksize - 1);
++	new_elen = ((loff_t)offset << inode->i_blkbits) |
++					(newsize & (sb->s_blocksize - 1));
+ 
+ 	/* File has extent covering the new size (could happen when extending
+ 	 * inside a block)?
+ 	 */
+-	if (within_final_block) {
++	if (within_last_ext) {
+ 		/* Extending file within the last file block */
+-		udf_do_extend_final_block(inode, &epos, &extent,
+-					  partial_final_block);
++		udf_do_extend_final_block(inode, &epos, &extent, new_elen);
+ 	} else {
+-		loff_t add = ((loff_t)offset << sb->s_blocksize_bits) |
+-			     partial_final_block;
+-		err = udf_do_extend_file(inode, &epos, &extent, add);
++		err = udf_do_extend_file(inode, &epos, &extent, new_elen);
+ 	}
+ 
+ 	if (err < 0)
+@@ -776,10 +767,11 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
+ 		goto out_free;
+ 	}
+ 
+-	/* Are we beyond EOF? */
++	/* Are we beyond EOF and preallocated extent? */
+ 	if (etype == -1) {
+ 		int ret;
+ 		loff_t hole_len;
++
+ 		isBeyondEOF = true;
+ 		if (count) {
+ 			if (c)
+diff --git a/fs/udf/truncate.c b/fs/udf/truncate.c
+index 532cda99644ee..036ebd892b852 100644
+--- a/fs/udf/truncate.c
++++ b/fs/udf/truncate.c
+@@ -120,60 +120,42 @@ void udf_truncate_tail_extent(struct inode *inode)
+ 
+ void udf_discard_prealloc(struct inode *inode)
+ {
+-	struct extent_position epos = { NULL, 0, {0, 0} };
++	struct extent_position epos = {};
++	struct extent_position prev_epos = {};
+ 	struct kernel_lb_addr eloc;
+ 	uint32_t elen;
+ 	uint64_t lbcount = 0;
+ 	int8_t etype = -1, netype;
+-	int adsize;
+ 	struct udf_inode_info *iinfo = UDF_I(inode);
++	int bsize = 1 << inode->i_blkbits;
+ 
+ 	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB ||
+-	    inode->i_size == iinfo->i_lenExtents)
++	    ALIGN(inode->i_size, bsize) == ALIGN(iinfo->i_lenExtents, bsize))
+ 		return;
+ 
+-	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
+-		adsize = sizeof(struct short_ad);
+-	else if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_LONG)
+-		adsize = sizeof(struct long_ad);
+-	else
+-		adsize = 0;
+-
+ 	epos.block = iinfo->i_location;
+ 
+ 	/* Find the last extent in the file */
+-	while ((netype = udf_next_aext(inode, &epos, &eloc, &elen, 1)) != -1) {
+-		etype = netype;
++	while ((netype = udf_next_aext(inode, &epos, &eloc, &elen, 0)) != -1) {
++		brelse(prev_epos.bh);
++		prev_epos = epos;
++		if (prev_epos.bh)
++			get_bh(prev_epos.bh);
++
++		etype = udf_next_aext(inode, &epos, &eloc, &elen, 1);
+ 		lbcount += elen;
+ 	}
+ 	if (etype == (EXT_NOT_RECORDED_ALLOCATED >> 30)) {
+-		epos.offset -= adsize;
+ 		lbcount -= elen;
+-		extent_trunc(inode, &epos, &eloc, etype, elen, 0);
+-		if (!epos.bh) {
+-			iinfo->i_lenAlloc =
+-				epos.offset -
+-				udf_file_entry_alloc_offset(inode);
+-			mark_inode_dirty(inode);
+-		} else {
+-			struct allocExtDesc *aed =
+-				(struct allocExtDesc *)(epos.bh->b_data);
+-			aed->lengthAllocDescs =
+-				cpu_to_le32(epos.offset -
+-					    sizeof(struct allocExtDesc));
+-			if (!UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_STRICT) ||
+-			    UDF_SB(inode->i_sb)->s_udfrev >= 0x0201)
+-				udf_update_tag(epos.bh->b_data, epos.offset);
+-			else
+-				udf_update_tag(epos.bh->b_data,
+-					       sizeof(struct allocExtDesc));
+-			mark_buffer_dirty_inode(epos.bh, inode);
+-		}
++		udf_delete_aext(inode, prev_epos);
++		udf_free_blocks(inode->i_sb, inode, &eloc, 0,
++				DIV_ROUND_UP(elen, 1 << inode->i_blkbits));
+ 	}
+ 	/* This inode entry is in-memory only and thus we don't have to mark
+ 	 * the inode dirty */
+ 	iinfo->i_lenExtents = lbcount;
+ 	brelse(epos.bh);
++	brelse(prev_epos.bh);
+ }
+ 
+ static void udf_update_alloc_ext_desc(struct inode *inode,
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index c5e4d2b8cb0be..cf56582d298ad 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -4449,7 +4449,8 @@ static inline int l2cap_config_req(struct l2cap_conn *conn,
+ 
+ 	chan->ident = cmd->ident;
+ 	l2cap_send_cmd(conn, cmd->ident, L2CAP_CONF_RSP, len, rsp);
+-	chan->num_conf_rsp++;
++	if (chan->num_conf_rsp < L2CAP_CONF_MAX_CONF_RSP)
++		chan->num_conf_rsp++;
+ 
+ 	/* Reset config buffer. */
+ 	chan->conf_len = 0;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-01-04 11:39 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-01-04 11:39 UTC (permalink / raw
  To: gentoo-commits

commit:     4704709ed2b8d1f2b46eeba285feb8abcbc442c4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jan  4 11:39:03 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jan  4 11:39:03 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4704709e

Linux patch 5.10.162

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1161_linux-5.10.162.patch | 27492 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 27496 insertions(+)

diff --git a/0000_README b/0000_README
index 775d0ab8..d2b8c150 100644
--- a/0000_README
+++ b/0000_README
@@ -687,6 +687,10 @@ Patch:  1160_linux-5.10.161.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.161
 
+Patch:  1161_linux-5.10.162.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.162
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1161_linux-5.10.162.patch b/1161_linux-5.10.162.patch
new file mode 100644
index 00000000..65ea8db0
--- /dev/null
+++ b/1161_linux-5.10.162.patch
@@ -0,0 +1,27492 @@
+diff --git a/Makefile b/Makefile
+index 68f8efa0cc301..33422c7d149e1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 161
++SUBLEVEL = 162
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -1128,7 +1128,7 @@ export MODORDER := $(extmod-prefix)modules.order
+ export MODULES_NSDEPS := $(extmod-prefix)modules.nsdeps
+ 
+ ifeq ($(KBUILD_EXTMOD),)
+-core-y		+= kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ block/
++core-y		+= kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ block/ io_uring/
+ 
+ vmlinux-dirs	:= $(patsubst %/,%,$(filter %/, \
+ 		     $(core-y) $(core-m) $(drivers-y) $(drivers-m) \
+diff --git a/arch/alpha/include/asm/thread_info.h b/arch/alpha/include/asm/thread_info.h
+index 807d7b9a18604..0ce1eee0924b1 100644
+--- a/arch/alpha/include/asm/thread_info.h
++++ b/arch/alpha/include/asm/thread_info.h
+@@ -62,6 +62,7 @@ register struct thread_info *__current_thread_info __asm__("$8");
+ #define TIF_SIGPENDING		2	/* signal pending */
+ #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
+ #define TIF_SYSCALL_AUDIT	4	/* syscall audit active */
++#define TIF_NOTIFY_SIGNAL	5	/* signal notifications exist */
+ #define TIF_DIE_IF_KERNEL	9	/* dik recursion lock */
+ #define TIF_MEMDIE		13	/* is terminating due to OOM killer */
+ #define TIF_POLLING_NRFLAG	14	/* idle is polling for TIF_NEED_RESCHED */
+@@ -71,11 +72,12 @@ register struct thread_info *__current_thread_info __asm__("$8");
+ #define _TIF_NEED_RESCHED	(1<<TIF_NEED_RESCHED)
+ #define _TIF_NOTIFY_RESUME	(1<<TIF_NOTIFY_RESUME)
+ #define _TIF_SYSCALL_AUDIT	(1<<TIF_SYSCALL_AUDIT)
++#define _TIF_NOTIFY_SIGNAL	(1<<TIF_NOTIFY_SIGNAL)
+ #define _TIF_POLLING_NRFLAG	(1<<TIF_POLLING_NRFLAG)
+ 
+ /* Work to do on interrupt/exception return.  */
+ #define _TIF_WORK_MASK		(_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
+-				 _TIF_NOTIFY_RESUME)
++				 _TIF_NOTIFY_RESUME | _TIF_NOTIFY_SIGNAL)
+ 
+ /* Work to do on any return to userspace.  */
+ #define _TIF_ALLWORK_MASK	(_TIF_WORK_MASK		\
+diff --git a/arch/alpha/kernel/entry.S b/arch/alpha/kernel/entry.S
+index 2e09248f83242..e227f3a29a43c 100644
+--- a/arch/alpha/kernel/entry.S
++++ b/arch/alpha/kernel/entry.S
+@@ -544,7 +544,7 @@ $ret_success:
+ 	.align	4
+ 	.type	work_pending, @function
+ work_pending:
+-	and	$17, _TIF_NOTIFY_RESUME | _TIF_SIGPENDING, $2
++	and	$17, _TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL, $2
+ 	bne	$2, $work_notifysig
+ 
+ $work_resched:
+diff --git a/arch/alpha/kernel/process.c b/arch/alpha/kernel/process.c
+index 4c7b0414a3ff3..08335b2294b3c 100644
+--- a/arch/alpha/kernel/process.c
++++ b/arch/alpha/kernel/process.c
+@@ -249,7 +249,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
+ 	childti->pcb.ksp = (unsigned long) childstack;
+ 	childti->pcb.flags = 1;	/* set FEN, clear everything else */
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		/* kernel thread */
+ 		memset(childstack, 0,
+ 			sizeof(struct switch_stack) + sizeof(struct pt_regs));
+diff --git a/arch/alpha/kernel/signal.c b/arch/alpha/kernel/signal.c
+index 3739efce1ec02..948b89789da82 100644
+--- a/arch/alpha/kernel/signal.c
++++ b/arch/alpha/kernel/signal.c
+@@ -527,7 +527,7 @@ do_work_pending(struct pt_regs *regs, unsigned long thread_flags,
+ 			schedule();
+ 		} else {
+ 			local_irq_enable();
+-			if (thread_flags & _TIF_SIGPENDING) {
++			if (thread_flags & (_TIF_SIGPENDING|_TIF_NOTIFY_SIGNAL)) {
+ 				do_signal(regs, r0, r19);
+ 				r0 = 0;
+ 			} else {
+diff --git a/arch/arc/include/asm/thread_info.h b/arch/arc/include/asm/thread_info.h
+index f9eef0e8f0b77..c0942c24d4015 100644
+--- a/arch/arc/include/asm/thread_info.h
++++ b/arch/arc/include/asm/thread_info.h
+@@ -79,6 +79,7 @@ static inline __attribute_const__ struct thread_info *current_thread_info(void)
+ #define TIF_SIGPENDING		2	/* signal pending */
+ #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
+ #define TIF_SYSCALL_AUDIT	4	/* syscall auditing active */
++#define TIF_NOTIFY_SIGNAL	5	/* signal notifications exist */
+ #define TIF_SYSCALL_TRACE	15	/* syscall trace active */
+ 
+ /* true if poll_idle() is polling TIF_NEED_RESCHED */
+@@ -89,11 +90,12 @@ static inline __attribute_const__ struct thread_info *current_thread_info(void)
+ #define _TIF_SIGPENDING		(1<<TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	(1<<TIF_NEED_RESCHED)
+ #define _TIF_SYSCALL_AUDIT	(1<<TIF_SYSCALL_AUDIT)
++#define _TIF_NOTIFY_SIGNAL	(1<<TIF_NOTIFY_SIGNAL)
+ #define _TIF_MEMDIE		(1<<TIF_MEMDIE)
+ 
+ /* work to do on interrupt/exception return */
+ #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
+-				 _TIF_NOTIFY_RESUME)
++				 _TIF_NOTIFY_RESUME | _TIF_NOTIFY_SIGNAL)
+ 
+ /*
+  * _TIF_ALLWORK_MASK includes SYSCALL_TRACE, but we don't need it.
+diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S
+index 301ade4d0b943..6ee9cb5598085 100644
+--- a/arch/arc/kernel/entry.S
++++ b/arch/arc/kernel/entry.S
+@@ -308,7 +308,8 @@ resume_user_mode_begin:
+ 	mov r0, sp	; pt_regs for arg to do_signal()/do_notify_resume()
+ 
+ 	GET_CURR_THR_INFO_FLAGS   r9
+-	bbit0  r9, TIF_SIGPENDING, .Lchk_notify_resume
++	and.f  0,  r9, _TIF_SIGPENDING|_TIF_NOTIFY_SIGNAL
++	bz .Lchk_notify_resume
+ 
+ 	; Normal Trap/IRQ entry only saves Scratch (caller-saved) regs
+ 	; in pt_reg since the "C" ABI (kernel code) will automatically
+diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c
+index a85e9c625ab50..8cf2caae93f1a 100644
+--- a/arch/arc/kernel/process.c
++++ b/arch/arc/kernel/process.c
+@@ -191,7 +191,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
+ 	childksp[0] = 0;			/* fp */
+ 	childksp[1] = (unsigned long)ret_from_fork; /* blink */
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		memset(c_regs, 0, sizeof(struct pt_regs));
+ 
+ 		c_callee->r13 = kthread_arg;
+diff --git a/arch/arc/kernel/signal.c b/arch/arc/kernel/signal.c
+index 9d5996e014c01..4868bdebf586d 100644
+--- a/arch/arc/kernel/signal.c
++++ b/arch/arc/kernel/signal.c
+@@ -405,7 +405,7 @@ void do_signal(struct pt_regs *regs)
+ 
+ 	restart_scall = in_syscall(regs) && syscall_restartable(regs);
+ 
+-	if (get_signal(&ksig)) {
++	if (test_thread_flag(TIF_SIGPENDING) && get_signal(&ksig)) {
+ 		if (restart_scall) {
+ 			arc_restart_syscall(&ksig.ka, regs);
+ 			syscall_wont_restart(regs);	/* No more restarts */
+diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
+index 536b6b979f634..eb7ce2747eb00 100644
+--- a/arch/arm/include/asm/thread_info.h
++++ b/arch/arm/include/asm/thread_info.h
+@@ -126,6 +126,8 @@ extern int vfp_restore_user_hwstate(struct user_vfp *,
+  * thread information flags:
+  *  TIF_USEDFPU		- FPU was used by this task this quantum (SMP)
+  *  TIF_POLLING_NRFLAG	- true if poll_idle() is polling TIF_NEED_RESCHED
++ *
++ * Any bit in the range of 0..15 will cause do_work_pending() to be invoked.
+  */
+ #define TIF_SIGPENDING		0	/* signal pending */
+ #define TIF_NEED_RESCHED	1	/* rescheduling necessary */
+@@ -135,6 +137,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp *,
+ #define TIF_SYSCALL_AUDIT	5	/* syscall auditing active */
+ #define TIF_SYSCALL_TRACEPOINT	6	/* syscall tracepoint instrumentation */
+ #define TIF_SECCOMP		7	/* seccomp syscall filtering active */
++#define TIF_NOTIFY_SIGNAL	8	/* signal notifications exist */
+ 
+ #define TIF_USING_IWMMXT	17
+ #define TIF_MEMDIE		18	/* is terminating due to OOM killer */
+@@ -148,6 +151,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp *,
+ #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
+ #define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
+ #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ #define _TIF_USING_IWMMXT	(1 << TIF_USING_IWMMXT)
+ 
+ /* Checks for any syscall work in entry-common.S */
+@@ -158,7 +162,8 @@ extern int vfp_restore_user_hwstate(struct user_vfp *,
+  * Change these and you break ASM code in entry-common.S
+  */
+ #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
+-				 _TIF_NOTIFY_RESUME | _TIF_UPROBE)
++				 _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
++				 _TIF_NOTIFY_SIGNAL)
+ 
+ #endif /* __KERNEL__ */
+ #endif /* __ASM_ARM_THREAD_INFO_H */
+diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
+index bd619da73c84e..9b3c737575e91 100644
+--- a/arch/arm/kernel/entry-common.S
++++ b/arch/arm/kernel/entry-common.S
+@@ -53,7 +53,7 @@ __ret_fast_syscall:
+ 	cmp	r2, #TASK_SIZE
+ 	blne	addr_limit_check_failed
+ 	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
+-	tst	r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
++	movs	r1, r1, lsl #16
+ 	bne	fast_work_pending
+ 
+ 
+@@ -90,7 +90,7 @@ __ret_fast_syscall:
+ 	cmp	r2, #TASK_SIZE
+ 	blne	addr_limit_check_failed
+ 	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
+-	tst	r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
++	movs	r1, r1, lsl #16
+ 	beq	no_work_pending
+  UNWIND(.fnend		)
+ ENDPROC(ret_fast_syscall)
+@@ -131,7 +131,7 @@ ENTRY(ret_to_user_from_irq)
+ 	cmp	r2, #TASK_SIZE
+ 	blne	addr_limit_check_failed
+ 	ldr	r1, [tsk, #TI_FLAGS]
+-	tst	r1, #_TIF_WORK_MASK
++	movs	r1, r1, lsl #16
+ 	bne	slow_work_pending
+ no_work_pending:
+ 	asm_trace_hardirqs_on save = 0
+diff --git a/arch/arm/kernel/entry-v7m.S b/arch/arm/kernel/entry-v7m.S
+index de1f20624be15..d0e898608d303 100644
+--- a/arch/arm/kernel/entry-v7m.S
++++ b/arch/arm/kernel/entry-v7m.S
+@@ -59,7 +59,7 @@ __irq_entry:
+ 
+ 	get_thread_info tsk
+ 	ldr	r2, [tsk, #TI_FLAGS]
+-	tst	r2, #_TIF_WORK_MASK
++	movs	r2, r2, lsl #16
+ 	beq	2f			@ no work pending
+ 	mov	r0, #V7M_SCB_ICSR_PENDSVSET
+ 	str	r0, [r1, V7M_SCB_ICSR]	@ raise PendSV
+diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
+index 9f199b1e83839..2647e48c537e6 100644
+--- a/arch/arm/kernel/process.c
++++ b/arch/arm/kernel/process.c
+@@ -243,7 +243,7 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
+ 	thread->cpu_domain = get_domain();
+ #endif
+ 
+-	if (likely(!(p->flags & PF_KTHREAD))) {
++	if (likely(!(p->flags & (PF_KTHREAD | PF_IO_WORKER)))) {
+ 		*childregs = *current_pt_regs();
+ 		childregs->ARM_r0 = 0;
+ 		if (stack_start)
+diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
+index 2f81d3af5f9af..a3a38d0a4c853 100644
+--- a/arch/arm/kernel/signal.c
++++ b/arch/arm/kernel/signal.c
+@@ -655,7 +655,7 @@ do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall)
+ 			if (unlikely(!user_mode(regs)))
+ 				return 0;
+ 			local_irq_enable();
+-			if (thread_flags & _TIF_SIGPENDING) {
++			if (thread_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL)) {
+ 				int restart = do_signal(regs, syscall);
+ 				if (unlikely(restart)) {
+ 					/*
+diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
+index 1fbab854a51b0..cdcf307764aad 100644
+--- a/arch/arm64/include/asm/thread_info.h
++++ b/arch/arm64/include/asm/thread_info.h
+@@ -68,6 +68,7 @@ void arch_release_task_struct(struct task_struct *tsk);
+ #define TIF_UPROBE		4	/* uprobe breakpoint or singlestep */
+ #define TIF_FSCHECK		5	/* Check FS is USER_DS on return */
+ #define TIF_MTE_ASYNC_FAULT	6	/* MTE Asynchronous Tag Check Fault */
++#define TIF_NOTIFY_SIGNAL	7	/* signal notifications exist */
+ #define TIF_SYSCALL_TRACE	8	/* syscall trace active */
+ #define TIF_SYSCALL_AUDIT	9	/* syscall auditing */
+ #define TIF_SYSCALL_TRACEPOINT	10	/* syscall tracepoint for ftrace */
+@@ -98,10 +99,12 @@ void arch_release_task_struct(struct task_struct *tsk);
+ #define _TIF_32BIT		(1 << TIF_32BIT)
+ #define _TIF_SVE		(1 << TIF_SVE)
+ #define _TIF_MTE_ASYNC_FAULT	(1 << TIF_MTE_ASYNC_FAULT)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ 
+ #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
+ 				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
+-				 _TIF_UPROBE | _TIF_FSCHECK | _TIF_MTE_ASYNC_FAULT)
++				 _TIF_UPROBE | _TIF_FSCHECK | _TIF_MTE_ASYNC_FAULT | \
++				 _TIF_NOTIFY_SIGNAL)
+ 
+ #define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
+ 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 22275d8518eb3..3696dbcbfa80c 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -398,7 +398,7 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
+ 
+ 	ptrauth_thread_init_kernel(p);
+ 
+-	if (likely(!(p->flags & PF_KTHREAD))) {
++	if (likely(!(p->flags & (PF_KTHREAD | PF_IO_WORKER)))) {
+ 		*childregs = *current_pt_regs();
+ 		childregs->regs[0] = 0;
+ 
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index 0dab5679a97d5..b6fbbd527dd79 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -938,7 +938,7 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
+ 					       (void __user *)NULL, current);
+ 			}
+ 
+-			if (thread_flags & _TIF_SIGPENDING)
++			if (thread_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+ 				do_signal(regs);
+ 
+ 			if (thread_flags & _TIF_NOTIFY_RESUME) {
+diff --git a/arch/c6x/include/asm/thread_info.h b/arch/c6x/include/asm/thread_info.h
+index f70382844b965..dd8913d57189d 100644
+--- a/arch/c6x/include/asm/thread_info.h
++++ b/arch/c6x/include/asm/thread_info.h
+@@ -82,6 +82,7 @@ struct thread_info *current_thread_info(void)
+ #define TIF_SIGPENDING		2	/* signal pending */
+ #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
+ #define TIF_RESTORE_SIGMASK	4	/* restore signal mask in do_signal() */
++#define TIF_NOTIFY_SIGNAL	5	/* signal notifications exist */
+ 
+ #define TIF_MEMDIE		17	/* OOM killer killed process */
+ 
+diff --git a/arch/c6x/kernel/asm-offsets.c b/arch/c6x/kernel/asm-offsets.c
+index 0f8fde494875e..4a264ef87dcb8 100644
+--- a/arch/c6x/kernel/asm-offsets.c
++++ b/arch/c6x/kernel/asm-offsets.c
+@@ -116,6 +116,7 @@ void foo(void)
+ 	DEFINE(_TIF_NOTIFY_RESUME, (1<<TIF_NOTIFY_RESUME));
+ 	DEFINE(_TIF_SIGPENDING, (1<<TIF_SIGPENDING));
+ 	DEFINE(_TIF_NEED_RESCHED, (1<<TIF_NEED_RESCHED));
++	DEFINE(_TIF_NOTIFY_SIGNAL, (1<<TIF_NOTIFY_SIGNAL));
+ 
+ 	DEFINE(_TIF_ALLWORK_MASK, TIF_ALLWORK_MASK);
+ 	DEFINE(_TIF_WORK_MASK, TIF_WORK_MASK);
+diff --git a/arch/c6x/kernel/signal.c b/arch/c6x/kernel/signal.c
+index a3f15b9a79dae..862460c3b1830 100644
+--- a/arch/c6x/kernel/signal.c
++++ b/arch/c6x/kernel/signal.c
+@@ -13,6 +13,7 @@
+ #include <linux/syscalls.h>
+ #include <linux/tracehook.h>
+ 
++#include <asm/asm-offsets.h>
+ #include <asm/ucontext.h>
+ #include <asm/cacheflush.h>
+ 
+@@ -313,7 +314,7 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, u32 thread_info_flags,
+ 				 int syscall)
+ {
+ 	/* deal with pending signal delivery */
+-	if (thread_info_flags & (1 << TIF_SIGPENDING))
++	if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs, syscall);
+ 
+ 	if (thread_info_flags & (1 << TIF_NOTIFY_RESUME))
+diff --git a/arch/csky/include/asm/thread_info.h b/arch/csky/include/asm/thread_info.h
+index 68e7a1227170c..21456a3737c2a 100644
+--- a/arch/csky/include/asm/thread_info.h
++++ b/arch/csky/include/asm/thread_info.h
+@@ -64,6 +64,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_SYSCALL_TRACE	4	/* syscall trace active */
+ #define TIF_SYSCALL_TRACEPOINT	5       /* syscall tracepoint instrumentation */
+ #define TIF_SYSCALL_AUDIT	6	/* syscall auditing */
++#define TIF_NOTIFY_SIGNAL	7	/* signal notifications exist */
+ #define TIF_POLLING_NRFLAG	16	/* poll_idle() is TIF_NEED_RESCHED */
+ #define TIF_MEMDIE		18      /* is terminating due to OOM killer */
+ #define TIF_RESTORE_SIGMASK	20	/* restore signal mask in do_signal() */
+@@ -75,6 +76,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+ #define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
+ #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ #define _TIF_UPROBE		(1 << TIF_UPROBE)
+ #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
+ #define _TIF_MEMDIE		(1 << TIF_MEMDIE)
+@@ -82,7 +84,8 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+ 
+ #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
+-				 _TIF_NOTIFY_RESUME | _TIF_UPROBE)
++				 _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
++				 _TIF_NOTIFY_SIGNAL)
+ 
+ #define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
+ 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP)
+diff --git a/arch/csky/kernel/process.c b/arch/csky/kernel/process.c
+index 69af6bc87e647..3d0ca22cd0e2e 100644
+--- a/arch/csky/kernel/process.c
++++ b/arch/csky/kernel/process.c
+@@ -49,7 +49,7 @@ int copy_thread(unsigned long clone_flags,
+ 	/* setup thread.sp for switch_to !!! */
+ 	p->thread.sp = (unsigned long)childstack;
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		memset(childregs, 0, sizeof(struct pt_regs));
+ 		childstack->r15 = (unsigned long) ret_from_kernel_thread;
+ 		childstack->r10 = kthread_arg;
+diff --git a/arch/csky/kernel/signal.c b/arch/csky/kernel/signal.c
+index 243228b0aa075..f7c1677e59719 100644
+--- a/arch/csky/kernel/signal.c
++++ b/arch/csky/kernel/signal.c
+@@ -261,7 +261,7 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
+ 		uprobe_notify_resume(regs);
+ 
+ 	/* Handle pending signal delivery */
+-	if (thread_info_flags & _TIF_SIGPENDING)
++	if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs);
+ 
+ 	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+diff --git a/arch/h8300/include/asm/thread_info.h b/arch/h8300/include/asm/thread_info.h
+index 0cdaa302d3d28..a518214d4ddd8 100644
+--- a/arch/h8300/include/asm/thread_info.h
++++ b/arch/h8300/include/asm/thread_info.h
+@@ -73,6 +73,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
+ #define TIF_SYSCALL_TRACEPOINT	8	/* for ftrace syscall instrumentation */
+ #define TIF_POLLING_NRFLAG	9	/* true if poll_idle() is polling TIF_NEED_RESCHED */
++#define TIF_NOTIFY_SIGNAL	10	/* signal notifications exist */
+ 
+ /* as above, but as bit values */
+ #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+@@ -83,6 +84,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
+ #define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
+ #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ 
+ /* work to do in syscall trace */
+ #define _TIF_WORK_SYSCALL_MASK	(_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP | \
+@@ -92,7 +94,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_ALLWORK_MASK	(_TIF_SYSCALL_TRACE | _TIF_SIGPENDING      | \
+ 				 _TIF_NEED_RESCHED  | _TIF_SYSCALL_AUDIT   | \
+ 				 _TIF_SINGLESTEP    | _TIF_NOTIFY_RESUME   | \
+-				 _TIF_SYSCALL_TRACEPOINT)
++				 _TIF_SYSCALL_TRACEPOINT | _TIF_NOTIFY_SIGNAL)
+ 
+ /* work to do on interrupt/exception return */
+ #define _TIF_WORK_MASK		(_TIF_ALLWORK_MASK & ~(_TIF_SYSCALL_TRACE | \
+diff --git a/arch/h8300/kernel/process.c b/arch/h8300/kernel/process.c
+index bc1364db58feb..46b1342ce515b 100644
+--- a/arch/h8300/kernel/process.c
++++ b/arch/h8300/kernel/process.c
+@@ -112,7 +112,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
+ 
+ 	childregs = (struct pt_regs *) (THREAD_SIZE + task_stack_page(p)) - 1;
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		memset(childregs, 0, sizeof(struct pt_regs));
+ 		childregs->retpc = (unsigned long) ret_from_kernel_thread;
+ 		childregs->er4 = topstk; /* arg */
+diff --git a/arch/h8300/kernel/signal.c b/arch/h8300/kernel/signal.c
+index 75d9b7e626b2f..75a1c36b105a1 100644
+--- a/arch/h8300/kernel/signal.c
++++ b/arch/h8300/kernel/signal.c
+@@ -279,7 +279,7 @@ static void do_signal(struct pt_regs *regs)
+ 
+ asmlinkage void do_notify_resume(struct pt_regs *regs, u32 thread_info_flags)
+ {
+-	if (thread_info_flags & _TIF_SIGPENDING)
++	if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs);
+ 
+ 	if (thread_info_flags & _TIF_NOTIFY_RESUME)
+diff --git a/arch/hexagon/include/asm/thread_info.h b/arch/hexagon/include/asm/thread_info.h
+index 563da19864645..535976665bf00 100644
+--- a/arch/hexagon/include/asm/thread_info.h
++++ b/arch/hexagon/include/asm/thread_info.h
+@@ -95,6 +95,7 @@ register struct thread_info *__current_thread_info asm(QUOTED_THREADINFO_REG);
+ #define TIF_NEED_RESCHED        3       /* rescheduling necessary */
+ #define TIF_SINGLESTEP          4       /* restore ss @ return to usr mode */
+ #define TIF_RESTORE_SIGMASK     6       /* restore sig mask in do_signal() */
++#define TIF_NOTIFY_SIGNAL	7       /* signal notifications exist */
+ /* true if poll_idle() is polling TIF_NEED_RESCHED */
+ #define TIF_MEMDIE              17      /* OOM killer killed process */
+ 
+@@ -103,6 +104,7 @@ register struct thread_info *__current_thread_info asm(QUOTED_THREADINFO_REG);
+ #define _TIF_SIGPENDING         (1 << TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED       (1 << TIF_NEED_RESCHED)
+ #define _TIF_SINGLESTEP         (1 << TIF_SINGLESTEP)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ 
+ /* work to do on interrupt/exception return - All but TIF_SYSCALL_TRACE */
+ #define _TIF_WORK_MASK          (0x0000FFFF & ~_TIF_SYSCALL_TRACE)
+diff --git a/arch/hexagon/kernel/process.c b/arch/hexagon/kernel/process.c
+index 67767c5ed98c3..c61165c99ae0b 100644
+--- a/arch/hexagon/kernel/process.c
++++ b/arch/hexagon/kernel/process.c
+@@ -73,7 +73,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
+ 						    sizeof(*ss));
+ 	ss->lr = (unsigned long)ret_from_fork;
+ 	p->thread.switch_sp = ss;
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		memset(childregs, 0, sizeof(struct pt_regs));
+ 		/* r24 <- fn, r25 <- arg */
+ 		ss->r24 = usp;
+@@ -174,7 +174,7 @@ int do_work_pending(struct pt_regs *regs, u32 thread_info_flags)
+ 		return 1;
+ 	}
+ 
+-	if (thread_info_flags & _TIF_SIGPENDING) {
++	if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL)) {
+ 		do_signal(regs);
+ 		return 1;
+ 	}
+diff --git a/arch/ia64/include/asm/thread_info.h b/arch/ia64/include/asm/thread_info.h
+index 64a1011f68121..51d20cb377062 100644
+--- a/arch/ia64/include/asm/thread_info.h
++++ b/arch/ia64/include/asm/thread_info.h
+@@ -103,6 +103,7 @@ struct thread_info {
+ #define TIF_SYSCALL_TRACE	2	/* syscall trace active */
+ #define TIF_SYSCALL_AUDIT	3	/* syscall auditing active */
+ #define TIF_SINGLESTEP		4	/* restore singlestep on return to user mode */
++#define TIF_NOTIFY_SIGNAL	5	/* signal notification exist */
+ #define TIF_NOTIFY_RESUME	6	/* resumption notification requested */
+ #define TIF_MEMDIE		17	/* is terminating due to OOM killer */
+ #define TIF_MCA_INIT		18	/* this task is processing MCA or INIT */
+@@ -115,6 +116,7 @@ struct thread_info {
+ #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
+ #define _TIF_SYSCALL_TRACEAUDIT	(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT|_TIF_SINGLESTEP)
+ #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
+ #define _TIF_MCA_INIT		(1 << TIF_MCA_INIT)
+@@ -124,7 +126,7 @@ struct thread_info {
+ 
+ /* "work to do on user-return" bits */
+ #define TIF_ALLWORK_MASK	(_TIF_SIGPENDING|_TIF_NOTIFY_RESUME|_TIF_SYSCALL_AUDIT|\
+-				 _TIF_NEED_RESCHED|_TIF_SYSCALL_TRACE)
++				 _TIF_NEED_RESCHED|_TIF_SYSCALL_TRACE|_TIF_NOTIFY_SIGNAL)
+ /* like TIF_ALLWORK_BITS but sans TIF_SYSCALL_TRACE or TIF_SYSCALL_AUDIT */
+ #define TIF_WORK_MASK		(TIF_ALLWORK_MASK&~(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT))
+ 
+diff --git a/arch/ia64/kernel/process.c b/arch/ia64/kernel/process.c
+index c9ff8796b5097..8159b7af5509f 100644
+--- a/arch/ia64/kernel/process.c
++++ b/arch/ia64/kernel/process.c
+@@ -171,7 +171,8 @@ do_notify_resume_user(sigset_t *unused, struct sigscratch *scr, long in_syscall)
+ 	}
+ 
+ 	/* deal with pending signal delivery */
+-	if (test_thread_flag(TIF_SIGPENDING)) {
++	if (test_thread_flag(TIF_SIGPENDING) ||
++	    test_thread_flag(TIF_NOTIFY_SIGNAL)) {
+ 		local_irq_enable();	/* force interrupt enable */
+ 		ia64_do_signal(scr, in_syscall);
+ 	}
+@@ -337,7 +338,7 @@ copy_thread(unsigned long clone_flags, unsigned long user_stack_base,
+ 
+ 	ia64_drop_fpu(p);	/* don't pick up stale state from a CPU's fph */
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		if (unlikely(!user_stack_base)) {
+ 			/* fork_idle() called us */
+ 			return 0;
+diff --git a/arch/ia64/kernel/signal.c b/arch/ia64/kernel/signal.c
+index e67b22fc3c60b..c1b299760bf7a 100644
+--- a/arch/ia64/kernel/signal.c
++++ b/arch/ia64/kernel/signal.c
+@@ -341,7 +341,8 @@ ia64_do_signal (struct sigscratch *scr, long in_syscall)
+ 	 * need to push through a forced SIGSEGV.
+ 	 */
+ 	while (1) {
+-		get_signal(&ksig);
++		if (!get_signal(&ksig))
++			break;
+ 
+ 		/*
+ 		 * get_signal() may have run a debugger (via notify_parent())
+diff --git a/arch/m68k/include/asm/thread_info.h b/arch/m68k/include/asm/thread_info.h
+index 3689c6718c883..15a757073fa58 100644
+--- a/arch/m68k/include/asm/thread_info.h
++++ b/arch/m68k/include/asm/thread_info.h
+@@ -60,6 +60,7 @@ static inline struct thread_info *current_thread_info(void)
+  * bits 0-7 are tested at every exception exit
+  * bits 8-15 are also tested at syscall exit
+  */
++#define TIF_NOTIFY_SIGNAL	4
+ #define TIF_NOTIFY_RESUME	5	/* callback before returning to user */
+ #define TIF_SIGPENDING		6	/* signal pending */
+ #define TIF_NEED_RESCHED	7	/* rescheduling necessary */
+diff --git a/arch/m68k/kernel/process.c b/arch/m68k/kernel/process.c
+index 08359a6e058f6..da83cc83e7912 100644
+--- a/arch/m68k/kernel/process.c
++++ b/arch/m68k/kernel/process.c
+@@ -157,7 +157,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
+ 	 */
+ 	p->thread.fs = get_fs().seg;
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		/* kernel thread */
+ 		memset(frame, 0, sizeof(struct fork_frame));
+ 		frame->regs.sr = PS_S;
+diff --git a/arch/m68k/kernel/signal.c b/arch/m68k/kernel/signal.c
+index fd916844a683f..5d12736b4b281 100644
+--- a/arch/m68k/kernel/signal.c
++++ b/arch/m68k/kernel/signal.c
+@@ -1129,7 +1129,8 @@ static void do_signal(struct pt_regs *regs)
+ 
+ void do_notify_resume(struct pt_regs *regs)
+ {
+-	if (test_thread_flag(TIF_SIGPENDING))
++	if (test_thread_flag(TIF_NOTIFY_SIGNAL) ||
++	    test_thread_flag(TIF_SIGPENDING))
+ 		do_signal(regs);
+ 
+ 	if (test_thread_flag(TIF_NOTIFY_RESUME))
+diff --git a/arch/microblaze/include/asm/thread_info.h b/arch/microblaze/include/asm/thread_info.h
+index ad8e8fcb90d3b..44f5ca3318625 100644
+--- a/arch/microblaze/include/asm/thread_info.h
++++ b/arch/microblaze/include/asm/thread_info.h
+@@ -107,6 +107,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_NEED_RESCHED	3 /* rescheduling necessary */
+ /* restore singlestep on return to user mode */
+ #define TIF_SINGLESTEP		4
++#define TIF_NOTIFY_SIGNAL	5	/* signal notifications exist */
+ #define TIF_MEMDIE		6	/* is terminating due to OOM killer */
+ #define TIF_SYSCALL_AUDIT	9       /* syscall auditing active */
+ #define TIF_SECCOMP		10      /* secure computing */
+@@ -119,6 +120,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
+ #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
+ #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
+ #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+diff --git a/arch/microblaze/kernel/process.c b/arch/microblaze/kernel/process.c
+index f99860771ff48..ee000ae17e395 100644
+--- a/arch/microblaze/kernel/process.c
++++ b/arch/microblaze/kernel/process.c
+@@ -59,7 +59,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
+ 	struct pt_regs *childregs = task_pt_regs(p);
+ 	struct thread_info *ti = task_thread_info(p);
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		/* if we're creating a new kernel thread then just zeroing all
+ 		 * the registers. That's OK for a brand new thread.*/
+ 		memset(childregs, 0, sizeof(struct pt_regs));
+diff --git a/arch/microblaze/kernel/signal.c b/arch/microblaze/kernel/signal.c
+index f11a0ccccabc4..5a8d173d7b75c 100644
+--- a/arch/microblaze/kernel/signal.c
++++ b/arch/microblaze/kernel/signal.c
+@@ -313,7 +313,8 @@ static void do_signal(struct pt_regs *regs, int in_syscall)
+ 
+ asmlinkage void do_notify_resume(struct pt_regs *regs, int in_syscall)
+ {
+-	if (test_thread_flag(TIF_SIGPENDING))
++	if (test_thread_flag(TIF_SIGPENDING) ||
++	    test_thread_flag(TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs, in_syscall);
+ 
+ 	if (test_thread_flag(TIF_NOTIFY_RESUME))
+diff --git a/arch/mips/include/asm/thread_info.h b/arch/mips/include/asm/thread_info.h
+index ee26f9a4575df..e2c352da3877a 100644
+--- a/arch/mips/include/asm/thread_info.h
++++ b/arch/mips/include/asm/thread_info.h
+@@ -115,6 +115,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_SECCOMP		4	/* secure computing */
+ #define TIF_NOTIFY_RESUME	5	/* callback before returning to user */
+ #define TIF_UPROBE		6	/* breakpointed or singlestepping */
++#define TIF_NOTIFY_SIGNAL	7	/* signal notifications exist */
+ #define TIF_RESTORE_SIGMASK	9	/* restore signal mask in do_signal() */
+ #define TIF_USEDFPU		16	/* FPU was used by this task this quantum (SMP) */
+ #define TIF_MEMDIE		18	/* is terminating due to OOM killer */
+@@ -139,6 +140,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_SECCOMP		(1<<TIF_SECCOMP)
+ #define _TIF_NOTIFY_RESUME	(1<<TIF_NOTIFY_RESUME)
+ #define _TIF_UPROBE		(1<<TIF_UPROBE)
++#define _TIF_NOTIFY_SIGNAL	(1<<TIF_NOTIFY_SIGNAL)
+ #define _TIF_USEDFPU		(1<<TIF_USEDFPU)
+ #define _TIF_NOHZ		(1<<TIF_NOHZ)
+ #define _TIF_FIXADE		(1<<TIF_FIXADE)
+@@ -164,7 +166,7 @@ static inline struct thread_info *current_thread_info(void)
+ /* work to do on interrupt/exception return */
+ #define _TIF_WORK_MASK		\
+ 	(_TIF_SIGPENDING | _TIF_NEED_RESCHED | _TIF_NOTIFY_RESUME |	\
+-	 _TIF_UPROBE)
++	 _TIF_UPROBE | _TIF_NOTIFY_SIGNAL)
+ /* work to do on any return to u-space */
+ #define _TIF_ALLWORK_MASK	(_TIF_NOHZ | _TIF_WORK_MASK |		\
+ 				 _TIF_WORK_SYSCALL_EXIT |		\
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index 75ebd8d7bd5d0..98ecaf6f3edb0 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -135,7 +135,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
+ 	/*  Put the stack after the struct pt_regs.  */
+ 	childksp = (unsigned long) childregs;
+ 	p->thread.cp0_status = (read_c0_status() & ~(ST0_CU2|ST0_CU1)) | ST0_KERNEL_CUMASK;
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		/* kernel thread */
+ 		unsigned long status = p->thread.cp0_status;
+ 		memset(childregs, 0, sizeof(struct pt_regs));
+diff --git a/arch/mips/kernel/signal.c b/arch/mips/kernel/signal.c
+index 50d0515bea21f..f1e985109da01 100644
+--- a/arch/mips/kernel/signal.c
++++ b/arch/mips/kernel/signal.c
+@@ -903,7 +903,7 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, void *unused,
+ 		uprobe_notify_resume(regs);
+ 
+ 	/* deal with pending signal delivery */
+-	if (thread_info_flags & _TIF_SIGPENDING)
++	if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs);
+ 
+ 	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+diff --git a/arch/nds32/include/asm/thread_info.h b/arch/nds32/include/asm/thread_info.h
+index c135111ec44eb..d3967ad184f03 100644
+--- a/arch/nds32/include/asm/thread_info.h
++++ b/arch/nds32/include/asm/thread_info.h
+@@ -48,6 +48,7 @@ struct thread_info {
+ #define TIF_NEED_RESCHED	2
+ #define TIF_SINGLESTEP		3
+ #define TIF_NOTIFY_RESUME	4	/* callback before returning to user */
++#define TIF_NOTIFY_SIGNAL	5	/* signal notifications exist */
+ #define TIF_SYSCALL_TRACE	8
+ #define TIF_POLLING_NRFLAG	17
+ #define TIF_MEMDIE		18
+@@ -57,6 +58,7 @@ struct thread_info {
+ #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
+ #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
+ #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+ #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
+diff --git a/arch/nds32/kernel/ex-exit.S b/arch/nds32/kernel/ex-exit.S
+index 6a2966c2d8c8f..b30699911b81e 100644
+--- a/arch/nds32/kernel/ex-exit.S
++++ b/arch/nds32/kernel/ex-exit.S
+@@ -120,7 +120,7 @@ work_pending:
+ 	andi	$p1, $r1, #_TIF_NEED_RESCHED
+ 	bnez	$p1, work_resched
+ 
+-	andi	$p1, $r1, #_TIF_SIGPENDING|#_TIF_NOTIFY_RESUME
++	andi	$p1, $r1, #_TIF_SIGPENDING|#_TIF_NOTIFY_RESUME|#_TIF_NOTIFY_SIGNAL
+ 	beqz	$p1, no_work_pending
+ 
+ 	move	$r0, $sp			! 'regs'
+diff --git a/arch/nds32/kernel/process.c b/arch/nds32/kernel/process.c
+index e01ad5d172245..c1327e552ec6c 100644
+--- a/arch/nds32/kernel/process.c
++++ b/arch/nds32/kernel/process.c
+@@ -156,7 +156,7 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
+ 
+ 	memset(&p->thread.cpu_context, 0, sizeof(struct cpu_context));
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		memset(childregs, 0, sizeof(struct pt_regs));
+ 		/* kernel thread fn */
+ 		p->thread.cpu_context.r6 = stack_start;
+diff --git a/arch/nds32/kernel/signal.c b/arch/nds32/kernel/signal.c
+index 2acb94812af98..7e3ca430a2233 100644
+--- a/arch/nds32/kernel/signal.c
++++ b/arch/nds32/kernel/signal.c
+@@ -376,7 +376,7 @@ static void do_signal(struct pt_regs *regs)
+ asmlinkage void
+ do_notify_resume(struct pt_regs *regs, unsigned int thread_flags)
+ {
+-	if (thread_flags & _TIF_SIGPENDING)
++	if (thread_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs);
+ 
+ 	if (thread_flags & _TIF_NOTIFY_RESUME)
+diff --git a/arch/nios2/include/asm/thread_info.h b/arch/nios2/include/asm/thread_info.h
+index 7349a4fa635be..272d2c72a7275 100644
+--- a/arch/nios2/include/asm/thread_info.h
++++ b/arch/nios2/include/asm/thread_info.h
+@@ -86,6 +86,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_MEMDIE		4	/* is terminating due to OOM killer */
+ #define TIF_SECCOMP		5	/* secure computing */
+ #define TIF_SYSCALL_AUDIT	6	/* syscall auditing active */
++#define TIF_NOTIFY_SIGNAL	7	/* signal notifications exist */
+ #define TIF_RESTORE_SIGMASK	9	/* restore signal mask in do_signal() */
+ 
+ #define TIF_POLLING_NRFLAG	16	/* true if poll_idle() is polling
+@@ -97,6 +98,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
+ #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+ #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ #define _TIF_RESTORE_SIGMASK	(1 << TIF_RESTORE_SIGMASK)
+ #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
+ 
+diff --git a/arch/nios2/kernel/process.c b/arch/nios2/kernel/process.c
+index 50b4eb19a6cc9..c5f916ca6845f 100644
+--- a/arch/nios2/kernel/process.c
++++ b/arch/nios2/kernel/process.c
+@@ -109,7 +109,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
+ 	struct switch_stack *childstack =
+ 		((struct switch_stack *)childregs) - 1;
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		memset(childstack, 0,
+ 			sizeof(struct switch_stack) + sizeof(struct pt_regs));
+ 
+diff --git a/arch/nios2/kernel/signal.c b/arch/nios2/kernel/signal.c
+index 916180e4a9978..68d626c4f1ba7 100644
+--- a/arch/nios2/kernel/signal.c
++++ b/arch/nios2/kernel/signal.c
+@@ -309,7 +309,8 @@ asmlinkage int do_notify_resume(struct pt_regs *regs)
+ 	if (!user_mode(regs))
+ 		return 0;
+ 
+-	if (test_thread_flag(TIF_SIGPENDING)) {
++	if (test_thread_flag(TIF_SIGPENDING) ||
++	    test_thread_flag(TIF_NOTIFY_SIGNAL)) {
+ 		int restart = do_signal(regs);
+ 
+ 		if (unlikely(restart)) {
+diff --git a/arch/openrisc/include/asm/thread_info.h b/arch/openrisc/include/asm/thread_info.h
+index 9afe68bc423b3..4f9d2a2614558 100644
+--- a/arch/openrisc/include/asm/thread_info.h
++++ b/arch/openrisc/include/asm/thread_info.h
+@@ -98,6 +98,7 @@ register struct thread_info *current_thread_info_reg asm("r10");
+ #define TIF_SINGLESTEP		4	/* restore singlestep on return to user
+ 					 * mode
+ 					 */
++#define TIF_NOTIFY_SIGNAL	5	/* signal notifications exist */
+ #define TIF_SYSCALL_TRACEPOINT  8       /* for ftrace syscall instrumentation */
+ #define TIF_RESTORE_SIGMASK     9
+ #define TIF_POLLING_NRFLAG	16	/* true if poll_idle() is polling						 * TIF_NEED_RESCHED
+@@ -109,6 +110,7 @@ register struct thread_info *current_thread_info_reg asm("r10");
+ #define _TIF_SIGPENDING		(1<<TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	(1<<TIF_NEED_RESCHED)
+ #define _TIF_SINGLESTEP		(1<<TIF_SINGLESTEP)
++#define _TIF_NOTIFY_SIGNAL	(1<<TIF_NOTIFY_SIGNAL)
+ #define _TIF_POLLING_NRFLAG	(1<<TIF_POLLING_NRFLAG)
+ 
+ 
+diff --git a/arch/openrisc/kernel/process.c b/arch/openrisc/kernel/process.c
+index 3c98728cce249..83fba4ee44535 100644
+--- a/arch/openrisc/kernel/process.c
++++ b/arch/openrisc/kernel/process.c
+@@ -167,7 +167,7 @@ copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
+ 	sp -= sizeof(struct pt_regs);
+ 	kregs = (struct pt_regs *)sp;
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		memset(kregs, 0, sizeof(struct pt_regs));
+ 		kregs->gpr[20] = usp; /* fn, kernel thread */
+ 		kregs->gpr[22] = arg;
+diff --git a/arch/openrisc/kernel/signal.c b/arch/openrisc/kernel/signal.c
+index af66f968dd459..1ebcff2710965 100644
+--- a/arch/openrisc/kernel/signal.c
++++ b/arch/openrisc/kernel/signal.c
+@@ -299,7 +299,7 @@ do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall)
+ 			if (unlikely(!user_mode(regs)))
+ 				return 0;
+ 			local_irq_enable();
+-			if (thread_flags & _TIF_SIGPENDING) {
++			if (thread_flags & (_TIF_SIGPENDING|_TIF_NOTIFY_SIGNAL)) {
+ 				int restart = do_signal(regs, syscall);
+ 				if (unlikely(restart)) {
+ 					/*
+diff --git a/arch/parisc/include/asm/thread_info.h b/arch/parisc/include/asm/thread_info.h
+index 285757544cca2..0bd38a972ceab 100644
+--- a/arch/parisc/include/asm/thread_info.h
++++ b/arch/parisc/include/asm/thread_info.h
+@@ -52,6 +52,7 @@ struct thread_info {
+ #define TIF_POLLING_NRFLAG	3	/* true if poll_idle() is polling TIF_NEED_RESCHED */
+ #define TIF_32BIT               4       /* 32 bit binary */
+ #define TIF_MEMDIE		5	/* is terminating due to OOM killer */
++#define TIF_NOTIFY_SIGNAL	6	/* signal notifications exist */
+ #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
+ #define TIF_NOTIFY_RESUME	8	/* callback before returning to user */
+ #define TIF_SINGLESTEP		9	/* single stepping? */
+@@ -61,6 +62,7 @@ struct thread_info {
+ 
+ #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+ #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
+ #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
+ #define _TIF_32BIT		(1 << TIF_32BIT)
+@@ -72,7 +74,7 @@ struct thread_info {
+ #define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
+ 
+ #define _TIF_USER_WORK_MASK     (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | \
+-                                 _TIF_NEED_RESCHED)
++                                 _TIF_NEED_RESCHED | _TIF_NOTIFY_SIGNAL)
+ #define _TIF_SYSCALL_TRACE_MASK (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP |	\
+ 				 _TIF_BLOCKSTEP | _TIF_SYSCALL_AUDIT | \
+ 				 _TIF_SECCOMP | _TIF_SYSCALL_TRACEPOINT)
+diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
+index a92a23d6acd93..5e4381280c97b 100644
+--- a/arch/parisc/kernel/process.c
++++ b/arch/parisc/kernel/process.c
+@@ -200,7 +200,7 @@ copy_thread(unsigned long clone_flags, unsigned long usp,
+ 	extern void * const ret_from_kernel_thread;
+ 	extern void * const child_return;
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		/* kernel thread */
+ 		memset(cregs, 0, sizeof(struct pt_regs));
+ 		if (!usp) /* idle thread */
+diff --git a/arch/parisc/kernel/signal.c b/arch/parisc/kernel/signal.c
+index 8d6c9b88eb3f2..db1a47cf424dd 100644
+--- a/arch/parisc/kernel/signal.c
++++ b/arch/parisc/kernel/signal.c
+@@ -609,7 +609,8 @@ do_signal(struct pt_regs *regs, long in_syscall)
+ 
+ void do_notify_resume(struct pt_regs *regs, long in_syscall)
+ {
+-	if (test_thread_flag(TIF_SIGPENDING))
++	if (test_thread_flag(TIF_SIGPENDING) ||
++	    test_thread_flag(TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs, in_syscall);
+ 
+ 	if (test_thread_flag(TIF_NOTIFY_RESUME))
+diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
+index 6de3517bea94e..ff31d2fa2140f 100644
+--- a/arch/powerpc/include/asm/thread_info.h
++++ b/arch/powerpc/include/asm/thread_info.h
+@@ -96,6 +96,7 @@ void arch_setup_new_exec(void);
+ #define TIF_SYSCALL_TRACE	0	/* syscall trace active */
+ #define TIF_SIGPENDING		1	/* signal pending */
+ #define TIF_NEED_RESCHED	2	/* rescheduling necessary */
++#define TIF_NOTIFY_SIGNAL	3	/* signal notifications exist */
+ #define TIF_SYSCALL_EMU		4	/* syscall emulation active */
+ #define TIF_RESTORE_TM		5	/* need to restore TM FP/VEC/VSX */
+ #define TIF_PATCH_PENDING	6	/* pending live patching update */
+@@ -121,6 +122,7 @@ void arch_setup_new_exec(void);
+ #define _TIF_SYSCALL_TRACE	(1<<TIF_SYSCALL_TRACE)
+ #define _TIF_SIGPENDING		(1<<TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	(1<<TIF_NEED_RESCHED)
++#define _TIF_NOTIFY_SIGNAL	(1<<TIF_NOTIFY_SIGNAL)
+ #define _TIF_POLLING_NRFLAG	(1<<TIF_POLLING_NRFLAG)
+ #define _TIF_32BIT		(1<<TIF_32BIT)
+ #define _TIF_RESTORE_TM		(1<<TIF_RESTORE_TM)
+@@ -142,7 +144,8 @@ void arch_setup_new_exec(void);
+ 
+ #define _TIF_USER_WORK_MASK	(_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
+ 				 _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
+-				 _TIF_RESTORE_TM | _TIF_PATCH_PENDING)
++				 _TIF_RESTORE_TM | _TIF_PATCH_PENDING | \
++				 _TIF_NOTIFY_SIGNAL)
+ #define _TIF_PERSYSCALL_MASK	(_TIF_RESTOREALL|_TIF_NOERROR)
+ 
+ /* Bits in local_flags */
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index c43cc26bde5db..cf375d67eacbd 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -1684,7 +1684,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
+ 	/* Copy registers */
+ 	sp -= sizeof(struct pt_regs);
+ 	childregs = (struct pt_regs *) sp;
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		/* kernel thread */
+ 		memset(childregs, 0, sizeof(struct pt_regs));
+ 		childregs->gpr[1] = sp + sizeof(struct pt_regs);
+diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
+index d2c356f370775..a8bb0aca1d021 100644
+--- a/arch/powerpc/kernel/signal.c
++++ b/arch/powerpc/kernel/signal.c
+@@ -318,7 +318,7 @@ void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)
+ 	if (thread_info_flags & _TIF_PATCH_PENDING)
+ 		klp_update_patch_state(current);
+ 
+-	if (thread_info_flags & _TIF_SIGPENDING) {
++	if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL)) {
+ 		BUG_ON(regs != current->thread.regs);
+ 		do_signal(current);
+ 	}
+diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
+index d79ae9d98999f..c55e0a1f07a0f 100644
+--- a/arch/riscv/include/asm/thread_info.h
++++ b/arch/riscv/include/asm/thread_info.h
+@@ -80,6 +80,7 @@ struct thread_info {
+ #define TIF_SYSCALL_TRACEPOINT  6       /* syscall tracepoint instrumentation */
+ #define TIF_SYSCALL_AUDIT	7	/* syscall auditing */
+ #define TIF_SECCOMP		8	/* syscall secure computing */
++#define TIF_NOTIFY_SIGNAL	9	/* signal notifications exist */
+ 
+ #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+ #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
+@@ -88,9 +89,11 @@ struct thread_info {
+ #define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
+ #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
+ #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ 
+ #define _TIF_WORK_MASK \
+-	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED)
++	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED | \
++	 _TIF_NOTIFY_SIGNAL)
+ 
+ #define _TIF_SYSCALL_WORK \
+ 	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_TRACEPOINT | _TIF_SYSCALL_AUDIT | \
+diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
+index 9a8b2e60adcf1..7868050ff426d 100644
+--- a/arch/riscv/kernel/process.c
++++ b/arch/riscv/kernel/process.c
+@@ -114,7 +114,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
+ 	memset(&p->thread.s, 0, sizeof(p->thread.s));
+ 
+ 	/* p->thread holds context to be restored by __switch_to() */
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		/* Kernel thread */
+ 		memset(childregs, 0, sizeof(struct pt_regs));
+ 		childregs->gp = gp_in_global;
+diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
+index 529c123cf0a47..50a8225c58bca 100644
+--- a/arch/riscv/kernel/signal.c
++++ b/arch/riscv/kernel/signal.c
+@@ -312,7 +312,7 @@ asmlinkage __visible void do_notify_resume(struct pt_regs *regs,
+ 					   unsigned long thread_info_flags)
+ {
+ 	/* Handle pending signal delivery */
+-	if (thread_info_flags & _TIF_SIGPENDING)
++	if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs);
+ 
+ 	if (thread_info_flags & _TIF_NOTIFY_RESUME)
+diff --git a/arch/s390/include/asm/thread_info.h b/arch/s390/include/asm/thread_info.h
+index 13a04fcf77625..0045341ade486 100644
+--- a/arch/s390/include/asm/thread_info.h
++++ b/arch/s390/include/asm/thread_info.h
+@@ -65,6 +65,7 @@ void arch_setup_new_exec(void);
+ #define TIF_GUARDED_STORAGE	4	/* load guarded storage control block */
+ #define TIF_PATCH_PENDING	5	/* pending live patching update */
+ #define TIF_PGSTE		6	/* New mm's will use 4K page tables */
++#define TIF_NOTIFY_SIGNAL	7	/* signal notifications exist */
+ #define TIF_ISOLATE_BP		8	/* Run process with isolated BP */
+ #define TIF_ISOLATE_BP_GUEST	9	/* Run KVM guests with isolated BP */
+ 
+@@ -82,6 +83,7 @@ void arch_setup_new_exec(void);
+ #define TIF_SYSCALL_TRACEPOINT	27	/* syscall tracepoint instrumentation */
+ 
+ #define _TIF_NOTIFY_RESUME	BIT(TIF_NOTIFY_RESUME)
++#define _TIF_NOTIFY_SIGNAL	BIT(TIF_NOTIFY_SIGNAL)
+ #define _TIF_SIGPENDING		BIT(TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	BIT(TIF_NEED_RESCHED)
+ #define _TIF_UPROBE		BIT(TIF_UPROBE)
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 963e8cb936e28..88ecbcf097a36 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -52,7 +52,8 @@ STACK_SIZE  = 1 << STACK_SHIFT
+ STACK_INIT = STACK_SIZE - STACK_FRAME_OVERHEAD - __PT_SIZE
+ 
+ _TIF_WORK	= (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED | \
+-		   _TIF_UPROBE | _TIF_GUARDED_STORAGE | _TIF_PATCH_PENDING)
++		   _TIF_UPROBE | _TIF_GUARDED_STORAGE | _TIF_PATCH_PENDING | \
++		   _TIF_NOTIFY_SIGNAL)
+ _TIF_TRACE	= (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | _TIF_SECCOMP | \
+ 		   _TIF_SYSCALL_TRACEPOINT)
+ _CIF_WORK	= (_CIF_ASCE_PRIMARY | _CIF_ASCE_SECONDARY | _CIF_FPU)
+@@ -481,8 +482,8 @@ ENTRY(system_call)
+ #endif
+ 	TSTMSK	__PT_FLAGS(%r11),_PIF_SYSCALL_RESTART
+ 	jo	.Lsysc_syscall_restart
+-	TSTMSK	__TI_flags(%r12),_TIF_SIGPENDING
+-	jo	.Lsysc_sigpending
++	TSTMSK	__TI_flags(%r12),(_TIF_SIGPENDING|_TIF_NOTIFY_SIGNAL)
++	jnz	.Lsysc_sigpending
+ 	TSTMSK	__TI_flags(%r12),_TIF_NOTIFY_RESUME
+ 	jo	.Lsysc_notify_resume
+ 	TSTMSK	__LC_CPU_FLAGS,(_CIF_ASCE_PRIMARY|_CIF_ASCE_SECONDARY)
+@@ -863,8 +864,8 @@ ENTRY(io_int_handler)
+ 	TSTMSK	__TI_flags(%r12),_TIF_PATCH_PENDING
+ 	jo	.Lio_patch_pending
+ #endif
+-	TSTMSK	__TI_flags(%r12),_TIF_SIGPENDING
+-	jo	.Lio_sigpending
++	TSTMSK	__TI_flags(%r12),(_TIF_SIGPENDING|_TIF_NOTIFY_SIGNAL)
++	jnz	.Lio_sigpending
+ 	TSTMSK	__TI_flags(%r12),_TIF_NOTIFY_RESUME
+ 	jo	.Lio_notify_resume
+ 	TSTMSK	__TI_flags(%r12),_TIF_GUARDED_STORAGE
+diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
+index 137a170f47d4f..bd7da4049707c 100644
+--- a/arch/s390/kernel/process.c
++++ b/arch/s390/kernel/process.c
+@@ -127,7 +127,7 @@ int copy_thread(unsigned long clone_flags, unsigned long new_stackp,
+ 	frame->sf.gprs[9] = (unsigned long) frame;
+ 
+ 	/* Store access registers to kernel stack of new process. */
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		/* kernel thread */
+ 		memset(&frame->childregs, 0, sizeof(struct pt_regs));
+ 		frame->childregs.psw.mask = PSW_KERNEL_BITS | PSW_MASK_DAT |
+diff --git a/arch/s390/kernel/signal.c b/arch/s390/kernel/signal.c
+index 9e900a8977bd2..b27b6c1f058d0 100644
+--- a/arch/s390/kernel/signal.c
++++ b/arch/s390/kernel/signal.c
+@@ -472,7 +472,7 @@ void do_signal(struct pt_regs *regs)
+ 	current->thread.system_call =
+ 		test_pt_regs_flag(regs, PIF_SYSCALL) ? regs->int_code : 0;
+ 
+-	if (get_signal(&ksig)) {
++	if (test_thread_flag(TIF_SIGPENDING) && get_signal(&ksig)) {
+ 		/* Whee!  Actually deliver the signal.  */
+ 		if (current->thread.system_call) {
+ 			regs->int_code = current->thread.system_call;
+diff --git a/arch/sh/include/asm/thread_info.h b/arch/sh/include/asm/thread_info.h
+index 243ea5150aa00..598d0184ffeae 100644
+--- a/arch/sh/include/asm/thread_info.h
++++ b/arch/sh/include/asm/thread_info.h
+@@ -105,6 +105,7 @@ extern void init_thread_xstate(void);
+ #define TIF_SYSCALL_TRACE	0	/* syscall trace active */
+ #define TIF_SIGPENDING		1	/* signal pending */
+ #define TIF_NEED_RESCHED	2	/* rescheduling necessary */
++#define TIF_NOTIFY_SIGNAL	3	/* signal notifications exist */
+ #define TIF_SINGLESTEP		4	/* singlestepping active */
+ #define TIF_SYSCALL_AUDIT	5	/* syscall auditing active */
+ #define TIF_SECCOMP		6	/* secure computing */
+@@ -116,6 +117,7 @@ extern void init_thread_xstate(void);
+ #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+ #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
+ #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
+ #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+@@ -132,7 +134,7 @@ extern void init_thread_xstate(void);
+ #define _TIF_ALLWORK_MASK	(_TIF_SYSCALL_TRACE | _TIF_SIGPENDING      | \
+ 				 _TIF_NEED_RESCHED  | _TIF_SYSCALL_AUDIT   | \
+ 				 _TIF_SINGLESTEP    | _TIF_NOTIFY_RESUME   | \
+-				 _TIF_SYSCALL_TRACEPOINT)
++				 _TIF_SYSCALL_TRACEPOINT | _TIF_NOTIFY_SIGNAL)
+ 
+ /* work to do on interrupt/exception return */
+ #define _TIF_WORK_MASK		(_TIF_ALLWORK_MASK & ~(_TIF_SYSCALL_TRACE | \
+diff --git a/arch/sh/kernel/process_32.c b/arch/sh/kernel/process_32.c
+index 80a5d1c66a517..1aa508eb0823a 100644
+--- a/arch/sh/kernel/process_32.c
++++ b/arch/sh/kernel/process_32.c
+@@ -114,7 +114,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
+ 
+ 	childregs = task_pt_regs(p);
+ 	p->thread.sp = (unsigned long) childregs;
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		memset(childregs, 0, sizeof(struct pt_regs));
+ 		p->thread.pc = (unsigned long) ret_from_kernel_thread;
+ 		childregs->regs[4] = arg;
+diff --git a/arch/sh/kernel/signal_32.c b/arch/sh/kernel/signal_32.c
+index 1add47fd31f62..dd3092911efad 100644
+--- a/arch/sh/kernel/signal_32.c
++++ b/arch/sh/kernel/signal_32.c
+@@ -499,7 +499,7 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, unsigned int save_r0,
+ 				 unsigned long thread_info_flags)
+ {
+ 	/* deal with pending signal delivery */
+-	if (thread_info_flags & _TIF_SIGPENDING)
++	if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs, save_r0);
+ 
+ 	if (thread_info_flags & _TIF_NOTIFY_RESUME)
+diff --git a/arch/sparc/include/asm/thread_info_32.h b/arch/sparc/include/asm/thread_info_32.h
+index 548b366165dd1..45b4955b253f2 100644
+--- a/arch/sparc/include/asm/thread_info_32.h
++++ b/arch/sparc/include/asm/thread_info_32.h
+@@ -104,6 +104,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
+ #define TIF_SIGPENDING		2	/* signal pending */
+ #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
+ #define TIF_RESTORE_SIGMASK	4	/* restore signal mask in do_signal() */
++#define TIF_NOTIFY_SIGNAL	5	/* signal notifications exist */
+ #define TIF_USEDFPU		8	/* FPU was used by this task
+ 					 * this quantum (SMP) */
+ #define TIF_POLLING_NRFLAG	9	/* true if poll_idle() is polling
+@@ -115,11 +116,12 @@ register struct thread_info *current_thread_info_reg asm("g6");
+ #define _TIF_NOTIFY_RESUME	(1<<TIF_NOTIFY_RESUME)
+ #define _TIF_SIGPENDING		(1<<TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	(1<<TIF_NEED_RESCHED)
++#define _TIF_NOTIFY_SIGNAL	(1<<TIF_NOTIFY_SIGNAL)
+ #define _TIF_USEDFPU		(1<<TIF_USEDFPU)
+ #define _TIF_POLLING_NRFLAG	(1<<TIF_POLLING_NRFLAG)
+ 
+ #define _TIF_DO_NOTIFY_RESUME_MASK	(_TIF_NOTIFY_RESUME | \
+-					 _TIF_SIGPENDING)
++					 _TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL)
+ 
+ #define is_32bit_task()	(1)
+ 
+diff --git a/arch/sparc/include/asm/thread_info_64.h b/arch/sparc/include/asm/thread_info_64.h
+index 20255471e653d..42cd4cd3892e3 100644
+--- a/arch/sparc/include/asm/thread_info_64.h
++++ b/arch/sparc/include/asm/thread_info_64.h
+@@ -180,7 +180,7 @@ extern struct thread_info *current_thread_info(void);
+ #define TIF_NOTIFY_RESUME	1	/* callback before returning to user */
+ #define TIF_SIGPENDING		2	/* signal pending */
+ #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
+-/* flag bit 4 is available */
++#define TIF_NOTIFY_SIGNAL	4	/* signal notifications exist */
+ #define TIF_UNALIGNED		5	/* allowed to do unaligned accesses */
+ #define TIF_UPROBE		6	/* breakpointed or singlestepped */
+ #define TIF_32BIT		7	/* 32-bit binary */
+@@ -200,6 +200,7 @@ extern struct thread_info *current_thread_info(void);
+ #define _TIF_NOTIFY_RESUME	(1<<TIF_NOTIFY_RESUME)
+ #define _TIF_SIGPENDING		(1<<TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	(1<<TIF_NEED_RESCHED)
++#define _TIF_NOTIFY_SIGNAL	(1<<TIF_NOTIFY_SIGNAL)
+ #define _TIF_UNALIGNED		(1<<TIF_UNALIGNED)
+ #define _TIF_UPROBE		(1<<TIF_UPROBE)
+ #define _TIF_32BIT		(1<<TIF_32BIT)
+@@ -213,7 +214,8 @@ extern struct thread_info *current_thread_info(void);
+ 				 _TIF_DO_NOTIFY_RESUME_MASK | \
+ 				 _TIF_NEED_RESCHED)
+ #define _TIF_DO_NOTIFY_RESUME_MASK	(_TIF_NOTIFY_RESUME | \
+-					 _TIF_SIGPENDING | _TIF_UPROBE)
++					 _TIF_SIGPENDING | _TIF_UPROBE | \
++					 _TIF_NOTIFY_SIGNAL)
+ 
+ #define is_32bit_task()	(test_thread_flag(TIF_32BIT))
+ 
+diff --git a/arch/sparc/kernel/process_32.c b/arch/sparc/kernel/process_32.c
+index a023637359154..0f9c606e1e78e 100644
+--- a/arch/sparc/kernel/process_32.c
++++ b/arch/sparc/kernel/process_32.c
+@@ -309,7 +309,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
+ 	ti->ksp = (unsigned long) new_stack;
+ 	p->thread.kregs = childregs;
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		extern int nwindows;
+ 		unsigned long psr;
+ 		memset(new_stack, 0, STACKFRAME_SZ + TRACEREG_SZ);
+diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
+index 6f8c7822fc065..7afd0a859a78c 100644
+--- a/arch/sparc/kernel/process_64.c
++++ b/arch/sparc/kernel/process_64.c
+@@ -597,7 +597,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
+ 				       sizeof(struct sparc_stackf));
+ 	t->fpsaved[0] = 0;
+ 
+-	if (unlikely(p->flags & PF_KTHREAD)) {
++	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		memset(child_trap_frame, 0, child_stack_sz);
+ 		__thread_flag_byte_ptr(t)[TI_FLAG_BYTE_CWP] = 
+ 			(current_pt_regs()->tstate + 1) & TSTATE_CWP;
+diff --git a/arch/sparc/kernel/signal_32.c b/arch/sparc/kernel/signal_32.c
+index 1da36dd34990b..4d478510550c3 100644
+--- a/arch/sparc/kernel/signal_32.c
++++ b/arch/sparc/kernel/signal_32.c
+@@ -521,7 +521,7 @@ static void do_signal(struct pt_regs *regs, unsigned long orig_i0)
+ void do_notify_resume(struct pt_regs *regs, unsigned long orig_i0,
+ 		      unsigned long thread_info_flags)
+ {
+-	if (thread_info_flags & _TIF_SIGPENDING)
++	if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs, orig_i0);
+ 	if (thread_info_flags & _TIF_NOTIFY_RESUME)
+ 		tracehook_notify_resume(regs);
+diff --git a/arch/sparc/kernel/signal_64.c b/arch/sparc/kernel/signal_64.c
+index f7ef7edcd5c1a..a0eec62c825df 100644
+--- a/arch/sparc/kernel/signal_64.c
++++ b/arch/sparc/kernel/signal_64.c
+@@ -549,7 +549,7 @@ void do_notify_resume(struct pt_regs *regs, unsigned long orig_i0, unsigned long
+ 	user_exit();
+ 	if (thread_info_flags & _TIF_UPROBE)
+ 		uprobe_notify_resume(regs);
+-	if (thread_info_flags & _TIF_SIGPENDING)
++	if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs, orig_i0);
+ 	if (thread_info_flags & _TIF_NOTIFY_RESUME)
+ 		tracehook_notify_resume(regs);
+diff --git a/arch/um/include/asm/thread_info.h b/arch/um/include/asm/thread_info.h
+index 66ab6a07330b2..e610e932cfe1e 100644
+--- a/arch/um/include/asm/thread_info.h
++++ b/arch/um/include/asm/thread_info.h
+@@ -57,6 +57,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_SYSCALL_TRACE	0	/* syscall trace active */
+ #define TIF_SIGPENDING		1	/* signal pending */
+ #define TIF_NEED_RESCHED	2	/* rescheduling necessary */
++#define TIF_NOTIFY_SIGNAL	3	/* signal notifications exist */
+ #define TIF_RESTART_BLOCK	4
+ #define TIF_MEMDIE		5	/* is terminating due to OOM killer */
+ #define TIF_SYSCALL_AUDIT	6
+@@ -68,6 +69,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+ #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ #define _TIF_MEMDIE		(1 << TIF_MEMDIE)
+ #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
+ #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index 8eb8b736abc18..e6c9b11b20334 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -99,7 +99,8 @@ void interrupt_end(void)
+ 
+ 	if (need_resched())
+ 		schedule();
+-	if (test_thread_flag(TIF_SIGPENDING))
++	if (test_thread_flag(TIF_SIGPENDING) ||
++	    test_thread_flag(TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs);
+ 	if (test_thread_flag(TIF_NOTIFY_RESUME))
+ 		tracehook_notify_resume(regs);
+@@ -156,7 +157,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
+ 		unsigned long arg, struct task_struct * p, unsigned long tls)
+ {
+ 	void (*handler)(void);
+-	int kthread = current->flags & PF_KTHREAD;
++	int kthread = current->flags & (PF_KTHREAD | PF_IO_WORKER);
+ 	int ret = 0;
+ 
+ 	p->thread = (struct thread_struct) INIT_THREAD;
+diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
+index e701f29b48817..012c8ee93b67f 100644
+--- a/arch/x86/include/asm/thread_info.h
++++ b/arch/x86/include/asm/thread_info.h
+@@ -93,6 +93,7 @@ struct thread_info {
+ #define TIF_NOTSC		16	/* TSC is not accessible in userland */
+ #define TIF_IA32		17	/* IA32 compatibility process */
+ #define TIF_SLD			18	/* Restore split lock detection on context switch */
++#define TIF_NOTIFY_SIGNAL	19	/* signal notifications exist */
+ #define TIF_MEMDIE		20	/* is terminating due to OOM killer */
+ #define TIF_POLLING_NRFLAG	21	/* idle is polling for TIF_NEED_RESCHED */
+ #define TIF_IO_BITMAP		22	/* uses I/O bitmap */
+@@ -121,6 +122,7 @@ struct thread_info {
+ #define _TIF_NOCPUID		(1 << TIF_NOCPUID)
+ #define _TIF_NOTSC		(1 << TIF_NOTSC)
+ #define _TIF_IA32		(1 << TIF_IA32)
++#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ #define _TIF_SLD		(1 << TIF_SLD)
+ #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
+ #define _TIF_IO_BITMAP		(1 << TIF_IO_BITMAP)
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 383afcc1098bf..5e17c3939dd1c 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -178,6 +178,23 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
+ 	task_user_gs(p) = get_user_gs(current_pt_regs());
+ #endif
+ 
++	if (unlikely(p->flags & PF_IO_WORKER)) {
++		/*
++		 * An IO thread is a user space thread, but it doesn't
++		 * return to ret_after_fork().
++		 *
++		 * In order to indicate that to tools like gdb,
++		 * we reset the stack and instruction pointers.
++		 *
++		 * It does the same kernel frame setup to return to a kernel
++		 * function that a kernel thread does.
++		 */
++		childregs->sp = 0;
++		childregs->ip = 0;
++		kthread_frame_init(frame, sp, arg);
++		return 0;
++	}
++
+ 	/* Set a new TLS for the child thread? */
+ 	if (clone_flags & CLONE_SETTLS)
+ 		ret = set_new_tls(p, tls);
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index b001ba811cabb..9eff481715320 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -798,11 +798,11 @@ static inline unsigned long get_nr_restart_syscall(const struct pt_regs *regs)
+  * want to handle. Thus you cannot kill init even with a SIGKILL even by
+  * mistake.
+  */
+-void arch_do_signal(struct pt_regs *regs)
++void arch_do_signal_or_restart(struct pt_regs *regs, bool has_signal)
+ {
+ 	struct ksignal ksig;
+ 
+-	if (get_signal(&ksig)) {
++	if (has_signal && get_signal(&ksig)) {
+ 		/* Whee! Actually deliver the signal.  */
+ 		handle_signal(&ksig, regs);
+ 		return;
+diff --git a/arch/xtensa/include/asm/thread_info.h b/arch/xtensa/include/asm/thread_info.h
+index 6acbbe0d87d3d..a312333a9addc 100644
+--- a/arch/xtensa/include/asm/thread_info.h
++++ b/arch/xtensa/include/asm/thread_info.h
+@@ -111,18 +111,21 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_NEED_RESCHED	2	/* rescheduling necessary */
+ #define TIF_SINGLESTEP		3	/* restore singlestep on return to user mode */
+ #define TIF_SYSCALL_TRACEPOINT	4	/* syscall tracepoint instrumentation */
+-#define TIF_MEMDIE		5	/* is terminating due to OOM killer */
++#define TIF_NOTIFY_SIGNAL	5	/* signal notifications exist */
+ #define TIF_RESTORE_SIGMASK	6	/* restore signal mask in do_signal() */
+ #define TIF_NOTIFY_RESUME	7	/* callback before returning to user */
+ #define TIF_DB_DISABLED		8	/* debug trap disabled for syscall */
+ #define TIF_SYSCALL_AUDIT	9	/* syscall auditing active */
+ #define TIF_SECCOMP		10	/* secure computing */
++#define TIF_MEMDIE		11	/* is terminating due to OOM killer */
+ 
+ #define _TIF_SYSCALL_TRACE	(1<<TIF_SYSCALL_TRACE)
+ #define _TIF_SIGPENDING		(1<<TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	(1<<TIF_NEED_RESCHED)
+ #define _TIF_SINGLESTEP		(1<<TIF_SINGLESTEP)
+ #define _TIF_SYSCALL_TRACEPOINT	(1<<TIF_SYSCALL_TRACEPOINT)
++#define _TIF_NOTIFY_SIGNAL	(1<<TIF_NOTIFY_SIGNAL)
++#define _TIF_NOTIFY_RESUME	(1<<TIF_NOTIFY_RESUME)
+ #define _TIF_SYSCALL_AUDIT	(1<<TIF_SYSCALL_AUDIT)
+ #define _TIF_SECCOMP		(1<<TIF_SECCOMP)
+ 
+diff --git a/arch/xtensa/kernel/entry.S b/arch/xtensa/kernel/entry.S
+index 703cf6205efec..647b162f959b4 100644
+--- a/arch/xtensa/kernel/entry.S
++++ b/arch/xtensa/kernel/entry.S
+@@ -500,8 +500,8 @@ common_exception_return:
+ 	 */
+ 
+ 	_bbsi.l	a4, TIF_NEED_RESCHED, 3f
+-	_bbsi.l	a4, TIF_NOTIFY_RESUME, 2f
+-	_bbci.l	a4, TIF_SIGPENDING, 5f
++	movi	a2, _TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NOTIFY_SIGNAL
++	bnone	a4, a2, 5f
+ 
+ 2:	l32i	a4, a1, PT_DEPC
+ 	bgeui	a4, VALID_DOUBLE_EXCEPTION_ADDRESS, 4f
+diff --git a/arch/xtensa/kernel/process.c b/arch/xtensa/kernel/process.c
+index 397a7de563776..9534ef515d748 100644
+--- a/arch/xtensa/kernel/process.c
++++ b/arch/xtensa/kernel/process.c
+@@ -217,7 +217,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp_thread_fn,
+ 
+ 	p->thread.sp = (unsigned long)childregs;
+ 
+-	if (!(p->flags & PF_KTHREAD)) {
++	if (!(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		struct pt_regs *regs = current_pt_regs();
+ 		unsigned long usp = usp_thread_fn ?
+ 			usp_thread_fn : regs->areg[1];
+diff --git a/arch/xtensa/kernel/signal.c b/arch/xtensa/kernel/signal.c
+index 1cb230fafdf2b..f2b00f43cf236 100644
+--- a/arch/xtensa/kernel/signal.c
++++ b/arch/xtensa/kernel/signal.c
+@@ -498,7 +498,8 @@ static void do_signal(struct pt_regs *regs)
+ 
+ void do_notify_resume(struct pt_regs *regs)
+ {
+-	if (test_thread_flag(TIF_SIGPENDING))
++	if (test_thread_flag(TIF_SIGPENDING) ||
++	    test_thread_flag(TIF_NOTIFY_SIGNAL))
+ 		do_signal(regs);
+ 
+ 	if (test_thread_flag(TIF_NOTIFY_RESUME))
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 4473adef2f5a4..b403c7f063b00 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -2255,7 +2255,7 @@ static void binder_deferred_fd_close(int fd)
+ 	if (!twcb)
+ 		return;
+ 	init_task_work(&twcb->twork, binder_do_fd_close);
+-	__close_fd_get_file(fd, &twcb->file);
++	close_fd_get_file(fd, &twcb->file);
+ 	if (twcb->file) {
+ 		filp_close(twcb->file, current->files);
+ 		task_work_add(current, &twcb->twork, TWA_RESUME);
+diff --git a/fs/Makefile b/fs/Makefile
+index 999d1a23f036c..c660ce28f1498 100644
+--- a/fs/Makefile
++++ b/fs/Makefile
+@@ -32,8 +32,6 @@ obj-$(CONFIG_TIMERFD)		+= timerfd.o
+ obj-$(CONFIG_EVENTFD)		+= eventfd.o
+ obj-$(CONFIG_USERFAULTFD)	+= userfaultfd.o
+ obj-$(CONFIG_AIO)               += aio.o
+-obj-$(CONFIG_IO_URING)		+= io_uring.o
+-obj-$(CONFIG_IO_WQ)		+= io-wq.o
+ obj-$(CONFIG_FS_DAX)		+= dax.o
+ obj-$(CONFIG_FS_ENCRYPTION)	+= crypto/
+ obj-$(CONFIG_FS_VERITY)		+= verity/
+diff --git a/fs/coredump.c b/fs/coredump.c
+index edbaf61125c9c..9d91e831ed0b2 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -523,7 +523,7 @@ static bool dump_interrupted(void)
+ 	 * but then we need to teach dump_write() to restart and clear
+ 	 * TIF_SIGPENDING.
+ 	 */
+-	return signal_pending(current);
++	return fatal_signal_pending(current) || freezing(current);
+ }
+ 
+ static void wait_for_dump_helpers(struct file *file)
+diff --git a/fs/eventfd.c b/fs/eventfd.c
+index df466ef81dddf..4a14295cffe0d 100644
+--- a/fs/eventfd.c
++++ b/fs/eventfd.c
+@@ -45,21 +45,7 @@ struct eventfd_ctx {
+ 	int id;
+ };
+ 
+-/**
+- * eventfd_signal - Adds @n to the eventfd counter.
+- * @ctx: [in] Pointer to the eventfd context.
+- * @n: [in] Value of the counter to be added to the eventfd internal counter.
+- *          The value cannot be negative.
+- *
+- * This function is supposed to be called by the kernel in paths that do not
+- * allow sleeping. In this function we allow the counter to reach the ULLONG_MAX
+- * value, and we signal this as overflow condition by returning a EPOLLERR
+- * to poll(2).
+- *
+- * Returns the amount by which the counter was incremented.  This will be less
+- * than @n if the counter has overflowed.
+- */
+-__u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
++__u64 eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n, unsigned mask)
+ {
+ 	unsigned long flags;
+ 
+@@ -80,12 +66,31 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
+ 		n = ULLONG_MAX - ctx->count;
+ 	ctx->count += n;
+ 	if (waitqueue_active(&ctx->wqh))
+-		wake_up_locked_poll(&ctx->wqh, EPOLLIN);
++		wake_up_locked_poll(&ctx->wqh, EPOLLIN | mask);
+ 	this_cpu_dec(eventfd_wake_count);
+ 	spin_unlock_irqrestore(&ctx->wqh.lock, flags);
+ 
+ 	return n;
+ }
++
++/**
++ * eventfd_signal - Adds @n to the eventfd counter.
++ * @ctx: [in] Pointer to the eventfd context.
++ * @n: [in] Value of the counter to be added to the eventfd internal counter.
++ *          The value cannot be negative.
++ *
++ * This function is supposed to be called by the kernel in paths that do not
++ * allow sleeping. In this function we allow the counter to reach the ULLONG_MAX
++ * value, and we signal this as overflow condition by returning a EPOLLERR
++ * to poll(2).
++ *
++ * Returns the amount by which the counter was incremented.  This will be less
++ * than @n if the counter has overflowed.
++ */
++__u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
++{
++	return eventfd_signal_mask(ctx, n, 0);
++}
+ EXPORT_SYMBOL_GPL(eventfd_signal);
+ 
+ static void eventfd_free_ctx(struct eventfd_ctx *ctx)
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 2f1f053157090..13d4c3d504126 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -548,7 +548,8 @@ out_unlock:
+  */
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+ 
+-static void ep_poll_safewake(struct eventpoll *ep, struct epitem *epi)
++static void ep_poll_safewake(struct eventpoll *ep, struct epitem *epi,
++			     unsigned pollflags)
+ {
+ 	struct eventpoll *ep_src;
+ 	unsigned long flags;
+@@ -579,16 +580,17 @@ static void ep_poll_safewake(struct eventpoll *ep, struct epitem *epi)
+ 	}
+ 	spin_lock_irqsave_nested(&ep->poll_wait.lock, flags, nests);
+ 	ep->nests = nests + 1;
+-	wake_up_locked_poll(&ep->poll_wait, EPOLLIN);
++	wake_up_locked_poll(&ep->poll_wait, EPOLLIN | pollflags);
+ 	ep->nests = 0;
+ 	spin_unlock_irqrestore(&ep->poll_wait.lock, flags);
+ }
+ 
+ #else
+ 
+-static void ep_poll_safewake(struct eventpoll *ep, struct epitem *epi)
++static void ep_poll_safewake(struct eventpoll *ep, struct epitem *epi,
++			     unsigned pollflags)
+ {
+-	wake_up_poll(&ep->poll_wait, EPOLLIN);
++	wake_up_poll(&ep->poll_wait, EPOLLIN | pollflags);
+ }
+ 
+ #endif
+@@ -815,7 +817,7 @@ static void ep_free(struct eventpoll *ep)
+ 
+ 	/* We need to release all tasks waiting for these file */
+ 	if (waitqueue_active(&ep->poll_wait))
+-		ep_poll_safewake(ep, NULL);
++		ep_poll_safewake(ep, NULL, 0);
+ 
+ 	/*
+ 	 * We need to lock this because we could be hit by
+@@ -1284,7 +1286,7 @@ out_unlock:
+ 
+ 	/* We have to call this outside the lock */
+ 	if (pwake)
+-		ep_poll_safewake(ep, epi);
++		ep_poll_safewake(ep, epi, pollflags & EPOLL_URING_WAKE);
+ 
+ 	if (!(epi->event.events & EPOLLEXCLUSIVE))
+ 		ewake = 1;
+@@ -1589,7 +1591,7 @@ static int ep_insert(struct eventpoll *ep, const struct epoll_event *event,
+ 
+ 	/* We have to call this outside the lock */
+ 	if (pwake)
+-		ep_poll_safewake(ep, NULL);
++		ep_poll_safewake(ep, NULL, 0);
+ 
+ 	return 0;
+ 
+@@ -1692,7 +1694,7 @@ static int ep_modify(struct eventpoll *ep, struct epitem *epi,
+ 
+ 	/* We have to call this outside the lock */
+ 	if (pwake)
+-		ep_poll_safewake(ep, NULL);
++		ep_poll_safewake(ep, NULL, 0);
+ 
+ 	return 0;
+ }
+diff --git a/fs/file.c b/fs/file.c
+index 8431dfde036cc..97a0cd31faec4 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -22,6 +22,8 @@
+ #include <linux/close_range.h>
+ #include <net/sock.h>
+ 
++#include "internal.h"
++
+ unsigned int sysctl_nr_open __read_mostly = 1024*1024;
+ unsigned int sysctl_nr_open_min = BITS_PER_LONG;
+ /* our min() is unusable in constant expressions ;-/ */
+@@ -780,9 +782,8 @@ int __close_range(unsigned fd, unsigned max_fd, unsigned int flags)
+ }
+ 
+ /*
+- * variant of __close_fd that gets a ref on the file for later fput.
+- * The caller must ensure that filp_close() called on the file, and then
+- * an fput().
++ * See close_fd_get_file() below, this variant assumes current->files->file_lock
++ * is held.
+  */
+ int __close_fd_get_file(unsigned int fd, struct file **res)
+ {
+@@ -790,26 +791,39 @@ int __close_fd_get_file(unsigned int fd, struct file **res)
+ 	struct file *file;
+ 	struct fdtable *fdt;
+ 
+-	spin_lock(&files->file_lock);
+ 	fdt = files_fdtable(files);
+ 	if (fd >= fdt->max_fds)
+-		goto out_unlock;
++		goto out_err;
+ 	file = fdt->fd[fd];
+ 	if (!file)
+-		goto out_unlock;
++		goto out_err;
+ 	rcu_assign_pointer(fdt->fd[fd], NULL);
+ 	__put_unused_fd(files, fd);
+-	spin_unlock(&files->file_lock);
+ 	get_file(file);
+ 	*res = file;
+ 	return 0;
+-
+-out_unlock:
+-	spin_unlock(&files->file_lock);
++out_err:
+ 	*res = NULL;
+ 	return -ENOENT;
+ }
+ 
++/*
++ * variant of close_fd that gets a ref on the file for later fput.
++ * The caller must ensure that filp_close() called on the file, and then
++ * an fput().
++ */
++int close_fd_get_file(unsigned int fd, struct file **res)
++{
++	struct files_struct *files = current->files;
++	int ret;
++
++	spin_lock(&files->file_lock);
++	ret = __close_fd_get_file(fd, res);
++	spin_unlock(&files->file_lock);
++
++	return ret;
++}
++
+ void do_close_on_exec(struct files_struct *files)
+ {
+ 	unsigned i;
+diff --git a/fs/internal.h b/fs/internal.h
+index 5155f6ce95c79..06d313b9beecb 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -77,6 +77,8 @@ extern int vfs_path_lookup(struct dentry *, struct vfsmount *,
+ long do_rmdir(int dfd, struct filename *name);
+ long do_unlinkat(int dfd, struct filename *name);
+ int may_linkat(struct path *link);
++int do_renameat2(int olddfd, struct filename *oldname, int newdfd,
++		 struct filename *newname, unsigned int flags);
+ 
+ /*
+  * namespace.c
+@@ -132,6 +134,7 @@ extern struct file *do_file_open_root(struct dentry *, struct vfsmount *,
+ 		const char *, const struct open_flags *);
+ extern struct open_how build_open_how(int flags, umode_t mode);
+ extern int build_open_flags(const struct open_how *how, struct open_flags *op);
++extern int __close_fd_get_file(unsigned int fd, struct file **res);
+ 
+ long do_sys_ftruncate(unsigned int fd, loff_t length, int small);
+ int chmod_common(const struct path *path, umode_t mode);
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+deleted file mode 100644
+index 3d5fc76b92d01..0000000000000
+--- a/fs/io-wq.c
++++ /dev/null
+@@ -1,1242 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Basic worker thread pool for io_uring
+- *
+- * Copyright (C) 2019 Jens Axboe
+- *
+- */
+-#include <linux/kernel.h>
+-#include <linux/init.h>
+-#include <linux/errno.h>
+-#include <linux/sched/signal.h>
+-#include <linux/mm.h>
+-#include <linux/sched/mm.h>
+-#include <linux/percpu.h>
+-#include <linux/slab.h>
+-#include <linux/kthread.h>
+-#include <linux/rculist_nulls.h>
+-#include <linux/fs_struct.h>
+-#include <linux/task_work.h>
+-#include <linux/blk-cgroup.h>
+-#include <linux/audit.h>
+-#include <linux/cpu.h>
+-
+-#include "../kernel/sched/sched.h"
+-#include "io-wq.h"
+-
+-#define WORKER_IDLE_TIMEOUT	(5 * HZ)
+-
+-enum {
+-	IO_WORKER_F_UP		= 1,	/* up and active */
+-	IO_WORKER_F_RUNNING	= 2,	/* account as running */
+-	IO_WORKER_F_FREE	= 4,	/* worker on free list */
+-	IO_WORKER_F_FIXED	= 8,	/* static idle worker */
+-	IO_WORKER_F_BOUND	= 16,	/* is doing bounded work */
+-};
+-
+-enum {
+-	IO_WQ_BIT_EXIT		= 0,	/* wq exiting */
+-	IO_WQ_BIT_CANCEL	= 1,	/* cancel work on list */
+-	IO_WQ_BIT_ERROR		= 2,	/* error on setup */
+-};
+-
+-enum {
+-	IO_WQE_FLAG_STALLED	= 1,	/* stalled on hash */
+-};
+-
+-/*
+- * One for each thread in a wqe pool
+- */
+-struct io_worker {
+-	refcount_t ref;
+-	unsigned flags;
+-	struct hlist_nulls_node nulls_node;
+-	struct list_head all_list;
+-	struct task_struct *task;
+-	struct io_wqe *wqe;
+-
+-	struct io_wq_work *cur_work;
+-	spinlock_t lock;
+-
+-	struct rcu_head rcu;
+-	struct mm_struct *mm;
+-#ifdef CONFIG_BLK_CGROUP
+-	struct cgroup_subsys_state *blkcg_css;
+-#endif
+-	const struct cred *cur_creds;
+-	const struct cred *saved_creds;
+-	struct files_struct *restore_files;
+-	struct nsproxy *restore_nsproxy;
+-	struct fs_struct *restore_fs;
+-};
+-
+-#if BITS_PER_LONG == 64
+-#define IO_WQ_HASH_ORDER	6
+-#else
+-#define IO_WQ_HASH_ORDER	5
+-#endif
+-
+-#define IO_WQ_NR_HASH_BUCKETS	(1u << IO_WQ_HASH_ORDER)
+-
+-struct io_wqe_acct {
+-	unsigned nr_workers;
+-	unsigned max_workers;
+-	atomic_t nr_running;
+-};
+-
+-enum {
+-	IO_WQ_ACCT_BOUND,
+-	IO_WQ_ACCT_UNBOUND,
+-};
+-
+-/*
+- * Per-node worker thread pool
+- */
+-struct io_wqe {
+-	struct {
+-		raw_spinlock_t lock;
+-		struct io_wq_work_list work_list;
+-		unsigned long hash_map;
+-		unsigned flags;
+-	} ____cacheline_aligned_in_smp;
+-
+-	int node;
+-	struct io_wqe_acct acct[2];
+-
+-	struct hlist_nulls_head free_list;
+-	struct list_head all_list;
+-
+-	struct io_wq *wq;
+-	struct io_wq_work *hash_tail[IO_WQ_NR_HASH_BUCKETS];
+-};
+-
+-/*
+- * Per io_wq state
+-  */
+-struct io_wq {
+-	struct io_wqe **wqes;
+-	unsigned long state;
+-
+-	free_work_fn *free_work;
+-	io_wq_work_fn *do_work;
+-
+-	struct task_struct *manager;
+-	struct user_struct *user;
+-	refcount_t refs;
+-	struct completion done;
+-
+-	struct hlist_node cpuhp_node;
+-
+-	refcount_t use_refs;
+-};
+-
+-static enum cpuhp_state io_wq_online;
+-
+-static bool io_worker_get(struct io_worker *worker)
+-{
+-	return refcount_inc_not_zero(&worker->ref);
+-}
+-
+-static void io_worker_release(struct io_worker *worker)
+-{
+-	if (refcount_dec_and_test(&worker->ref))
+-		wake_up_process(worker->task);
+-}
+-
+-/*
+- * Note: drops the wqe->lock if returning true! The caller must re-acquire
+- * the lock in that case. Some callers need to restart handling if this
+- * happens, so we can't just re-acquire the lock on behalf of the caller.
+- */
+-static bool __io_worker_unuse(struct io_wqe *wqe, struct io_worker *worker)
+-{
+-	bool dropped_lock = false;
+-
+-	if (worker->saved_creds) {
+-		revert_creds(worker->saved_creds);
+-		worker->cur_creds = worker->saved_creds = NULL;
+-	}
+-
+-	if (current->files != worker->restore_files) {
+-		__acquire(&wqe->lock);
+-		raw_spin_unlock_irq(&wqe->lock);
+-		dropped_lock = true;
+-
+-		task_lock(current);
+-		current->files = worker->restore_files;
+-		current->nsproxy = worker->restore_nsproxy;
+-		task_unlock(current);
+-	}
+-
+-	if (current->fs != worker->restore_fs)
+-		current->fs = worker->restore_fs;
+-
+-	/*
+-	 * If we have an active mm, we need to drop the wq lock before unusing
+-	 * it. If we do, return true and let the caller retry the idle loop.
+-	 */
+-	if (worker->mm) {
+-		if (!dropped_lock) {
+-			__acquire(&wqe->lock);
+-			raw_spin_unlock_irq(&wqe->lock);
+-			dropped_lock = true;
+-		}
+-		__set_current_state(TASK_RUNNING);
+-		kthread_unuse_mm(worker->mm);
+-		mmput(worker->mm);
+-		worker->mm = NULL;
+-	}
+-
+-#ifdef CONFIG_BLK_CGROUP
+-	if (worker->blkcg_css) {
+-		kthread_associate_blkcg(NULL);
+-		worker->blkcg_css = NULL;
+-	}
+-#endif
+-	if (current->signal->rlim[RLIMIT_FSIZE].rlim_cur != RLIM_INFINITY)
+-		current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;
+-	return dropped_lock;
+-}
+-
+-static inline struct io_wqe_acct *io_work_get_acct(struct io_wqe *wqe,
+-						   struct io_wq_work *work)
+-{
+-	if (work->flags & IO_WQ_WORK_UNBOUND)
+-		return &wqe->acct[IO_WQ_ACCT_UNBOUND];
+-
+-	return &wqe->acct[IO_WQ_ACCT_BOUND];
+-}
+-
+-static inline struct io_wqe_acct *io_wqe_get_acct(struct io_wqe *wqe,
+-						  struct io_worker *worker)
+-{
+-	if (worker->flags & IO_WORKER_F_BOUND)
+-		return &wqe->acct[IO_WQ_ACCT_BOUND];
+-
+-	return &wqe->acct[IO_WQ_ACCT_UNBOUND];
+-}
+-
+-static void io_worker_exit(struct io_worker *worker)
+-{
+-	struct io_wqe *wqe = worker->wqe;
+-	struct io_wqe_acct *acct = io_wqe_get_acct(wqe, worker);
+-
+-	/*
+-	 * If we're not at zero, someone else is holding a brief reference
+-	 * to the worker. Wait for that to go away.
+-	 */
+-	set_current_state(TASK_INTERRUPTIBLE);
+-	if (!refcount_dec_and_test(&worker->ref))
+-		schedule();
+-	__set_current_state(TASK_RUNNING);
+-
+-	preempt_disable();
+-	current->flags &= ~PF_IO_WORKER;
+-	if (worker->flags & IO_WORKER_F_RUNNING)
+-		atomic_dec(&acct->nr_running);
+-	if (!(worker->flags & IO_WORKER_F_BOUND))
+-		atomic_dec(&wqe->wq->user->processes);
+-	worker->flags = 0;
+-	preempt_enable();
+-
+-	raw_spin_lock_irq(&wqe->lock);
+-	hlist_nulls_del_rcu(&worker->nulls_node);
+-	list_del_rcu(&worker->all_list);
+-	if (__io_worker_unuse(wqe, worker)) {
+-		__release(&wqe->lock);
+-		raw_spin_lock_irq(&wqe->lock);
+-	}
+-	acct->nr_workers--;
+-	raw_spin_unlock_irq(&wqe->lock);
+-
+-	kfree_rcu(worker, rcu);
+-	if (refcount_dec_and_test(&wqe->wq->refs))
+-		complete(&wqe->wq->done);
+-}
+-
+-static inline bool io_wqe_run_queue(struct io_wqe *wqe)
+-	__must_hold(wqe->lock)
+-{
+-	if (!wq_list_empty(&wqe->work_list) &&
+-	    !(wqe->flags & IO_WQE_FLAG_STALLED))
+-		return true;
+-	return false;
+-}
+-
+-/*
+- * Check head of free list for an available worker. If one isn't available,
+- * caller must wake up the wq manager to create one.
+- */
+-static bool io_wqe_activate_free_worker(struct io_wqe *wqe)
+-	__must_hold(RCU)
+-{
+-	struct hlist_nulls_node *n;
+-	struct io_worker *worker;
+-
+-	n = rcu_dereference(hlist_nulls_first_rcu(&wqe->free_list));
+-	if (is_a_nulls(n))
+-		return false;
+-
+-	worker = hlist_nulls_entry(n, struct io_worker, nulls_node);
+-	if (io_worker_get(worker)) {
+-		wake_up_process(worker->task);
+-		io_worker_release(worker);
+-		return true;
+-	}
+-
+-	return false;
+-}
+-
+-/*
+- * We need a worker. If we find a free one, we're good. If not, and we're
+- * below the max number of workers, wake up the manager to create one.
+- */
+-static void io_wqe_wake_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
+-{
+-	bool ret;
+-
+-	/*
+-	 * Most likely an attempt to queue unbounded work on an io_wq that
+-	 * wasn't setup with any unbounded workers.
+-	 */
+-	if (unlikely(!acct->max_workers))
+-		pr_warn_once("io-wq is not configured for unbound workers");
+-
+-	rcu_read_lock();
+-	ret = io_wqe_activate_free_worker(wqe);
+-	rcu_read_unlock();
+-
+-	if (!ret && acct->nr_workers < acct->max_workers)
+-		wake_up_process(wqe->wq->manager);
+-}
+-
+-static void io_wqe_inc_running(struct io_wqe *wqe, struct io_worker *worker)
+-{
+-	struct io_wqe_acct *acct = io_wqe_get_acct(wqe, worker);
+-
+-	atomic_inc(&acct->nr_running);
+-}
+-
+-static void io_wqe_dec_running(struct io_wqe *wqe, struct io_worker *worker)
+-	__must_hold(wqe->lock)
+-{
+-	struct io_wqe_acct *acct = io_wqe_get_acct(wqe, worker);
+-
+-	if (atomic_dec_and_test(&acct->nr_running) && io_wqe_run_queue(wqe))
+-		io_wqe_wake_worker(wqe, acct);
+-}
+-
+-static void io_worker_start(struct io_wqe *wqe, struct io_worker *worker)
+-{
+-	allow_kernel_signal(SIGINT);
+-
+-	current->flags |= PF_IO_WORKER;
+-
+-	worker->flags |= (IO_WORKER_F_UP | IO_WORKER_F_RUNNING);
+-	worker->restore_files = current->files;
+-	worker->restore_nsproxy = current->nsproxy;
+-	worker->restore_fs = current->fs;
+-	io_wqe_inc_running(wqe, worker);
+-}
+-
+-/*
+- * Worker will start processing some work. Move it to the busy list, if
+- * it's currently on the freelist
+- */
+-static void __io_worker_busy(struct io_wqe *wqe, struct io_worker *worker,
+-			     struct io_wq_work *work)
+-	__must_hold(wqe->lock)
+-{
+-	bool worker_bound, work_bound;
+-
+-	if (worker->flags & IO_WORKER_F_FREE) {
+-		worker->flags &= ~IO_WORKER_F_FREE;
+-		hlist_nulls_del_init_rcu(&worker->nulls_node);
+-	}
+-
+-	/*
+-	 * If worker is moving from bound to unbound (or vice versa), then
+-	 * ensure we update the running accounting.
+-	 */
+-	worker_bound = (worker->flags & IO_WORKER_F_BOUND) != 0;
+-	work_bound = (work->flags & IO_WQ_WORK_UNBOUND) == 0;
+-	if (worker_bound != work_bound) {
+-		io_wqe_dec_running(wqe, worker);
+-		if (work_bound) {
+-			worker->flags |= IO_WORKER_F_BOUND;
+-			wqe->acct[IO_WQ_ACCT_UNBOUND].nr_workers--;
+-			wqe->acct[IO_WQ_ACCT_BOUND].nr_workers++;
+-			atomic_dec(&wqe->wq->user->processes);
+-		} else {
+-			worker->flags &= ~IO_WORKER_F_BOUND;
+-			wqe->acct[IO_WQ_ACCT_UNBOUND].nr_workers++;
+-			wqe->acct[IO_WQ_ACCT_BOUND].nr_workers--;
+-			atomic_inc(&wqe->wq->user->processes);
+-		}
+-		io_wqe_inc_running(wqe, worker);
+-	 }
+-}
+-
+-/*
+- * No work, worker going to sleep. Move to freelist, and unuse mm if we
+- * have one attached. Dropping the mm may potentially sleep, so we drop
+- * the lock in that case and return success. Since the caller has to
+- * retry the loop in that case (we changed task state), we don't regrab
+- * the lock if we return success.
+- */
+-static bool __io_worker_idle(struct io_wqe *wqe, struct io_worker *worker)
+-	__must_hold(wqe->lock)
+-{
+-	if (!(worker->flags & IO_WORKER_F_FREE)) {
+-		worker->flags |= IO_WORKER_F_FREE;
+-		hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
+-	}
+-
+-	return __io_worker_unuse(wqe, worker);
+-}
+-
+-static inline unsigned int io_get_work_hash(struct io_wq_work *work)
+-{
+-	return work->flags >> IO_WQ_HASH_SHIFT;
+-}
+-
+-static struct io_wq_work *io_get_next_work(struct io_wqe *wqe)
+-	__must_hold(wqe->lock)
+-{
+-	struct io_wq_work_node *node, *prev;
+-	struct io_wq_work *work, *tail;
+-	unsigned int hash;
+-
+-	wq_list_for_each(node, prev, &wqe->work_list) {
+-		work = container_of(node, struct io_wq_work, list);
+-
+-		/* not hashed, can run anytime */
+-		if (!io_wq_is_hashed(work)) {
+-			wq_list_del(&wqe->work_list, node, prev);
+-			return work;
+-		}
+-
+-		/* hashed, can run if not already running */
+-		hash = io_get_work_hash(work);
+-		if (!(wqe->hash_map & BIT(hash))) {
+-			wqe->hash_map |= BIT(hash);
+-			/* all items with this hash lie in [work, tail] */
+-			tail = wqe->hash_tail[hash];
+-			wqe->hash_tail[hash] = NULL;
+-			wq_list_cut(&wqe->work_list, &tail->list, prev);
+-			return work;
+-		}
+-	}
+-
+-	return NULL;
+-}
+-
+-static void io_wq_switch_mm(struct io_worker *worker, struct io_wq_work *work)
+-{
+-	if (worker->mm) {
+-		kthread_unuse_mm(worker->mm);
+-		mmput(worker->mm);
+-		worker->mm = NULL;
+-	}
+-
+-	if (mmget_not_zero(work->identity->mm)) {
+-		kthread_use_mm(work->identity->mm);
+-		worker->mm = work->identity->mm;
+-		return;
+-	}
+-
+-	/* failed grabbing mm, ensure work gets cancelled */
+-	work->flags |= IO_WQ_WORK_CANCEL;
+-}
+-
+-static inline void io_wq_switch_blkcg(struct io_worker *worker,
+-				      struct io_wq_work *work)
+-{
+-#ifdef CONFIG_BLK_CGROUP
+-	if (!(work->flags & IO_WQ_WORK_BLKCG))
+-		return;
+-	if (work->identity->blkcg_css != worker->blkcg_css) {
+-		kthread_associate_blkcg(work->identity->blkcg_css);
+-		worker->blkcg_css = work->identity->blkcg_css;
+-	}
+-#endif
+-}
+-
+-static void io_wq_switch_creds(struct io_worker *worker,
+-			       struct io_wq_work *work)
+-{
+-	const struct cred *old_creds = override_creds(work->identity->creds);
+-
+-	worker->cur_creds = work->identity->creds;
+-	if (worker->saved_creds)
+-		put_cred(old_creds); /* creds set by previous switch */
+-	else
+-		worker->saved_creds = old_creds;
+-}
+-
+-static void io_impersonate_work(struct io_worker *worker,
+-				struct io_wq_work *work)
+-{
+-	if ((work->flags & IO_WQ_WORK_FILES) &&
+-	    current->files != work->identity->files) {
+-		task_lock(current);
+-		current->files = work->identity->files;
+-		current->nsproxy = work->identity->nsproxy;
+-		task_unlock(current);
+-		if (!work->identity->files) {
+-			/* failed grabbing files, ensure work gets cancelled */
+-			work->flags |= IO_WQ_WORK_CANCEL;
+-		}
+-	}
+-	if ((work->flags & IO_WQ_WORK_FS) && current->fs != work->identity->fs)
+-		current->fs = work->identity->fs;
+-	if ((work->flags & IO_WQ_WORK_MM) && work->identity->mm != worker->mm)
+-		io_wq_switch_mm(worker, work);
+-	if ((work->flags & IO_WQ_WORK_CREDS) &&
+-	    worker->cur_creds != work->identity->creds)
+-		io_wq_switch_creds(worker, work);
+-	if (work->flags & IO_WQ_WORK_FSIZE)
+-		current->signal->rlim[RLIMIT_FSIZE].rlim_cur = work->identity->fsize;
+-	else if (current->signal->rlim[RLIMIT_FSIZE].rlim_cur != RLIM_INFINITY)
+-		current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;
+-	io_wq_switch_blkcg(worker, work);
+-#ifdef CONFIG_AUDIT
+-	current->loginuid = work->identity->loginuid;
+-	current->sessionid = work->identity->sessionid;
+-#endif
+-}
+-
+-static void io_assign_current_work(struct io_worker *worker,
+-				   struct io_wq_work *work)
+-{
+-	if (work) {
+-		/* flush pending signals before assigning new work */
+-		if (signal_pending(current))
+-			flush_signals(current);
+-		cond_resched();
+-	}
+-
+-#ifdef CONFIG_AUDIT
+-	current->loginuid = KUIDT_INIT(AUDIT_UID_UNSET);
+-	current->sessionid = AUDIT_SID_UNSET;
+-#endif
+-
+-	spin_lock_irq(&worker->lock);
+-	worker->cur_work = work;
+-	spin_unlock_irq(&worker->lock);
+-}
+-
+-static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work);
+-
+-static void io_worker_handle_work(struct io_worker *worker)
+-	__releases(wqe->lock)
+-{
+-	struct io_wqe *wqe = worker->wqe;
+-	struct io_wq *wq = wqe->wq;
+-
+-	do {
+-		struct io_wq_work *work;
+-get_next:
+-		/*
+-		 * If we got some work, mark us as busy. If we didn't, but
+-		 * the list isn't empty, it means we stalled on hashed work.
+-		 * Mark us stalled so we don't keep looking for work when we
+-		 * can't make progress, any work completion or insertion will
+-		 * clear the stalled flag.
+-		 */
+-		work = io_get_next_work(wqe);
+-		if (work)
+-			__io_worker_busy(wqe, worker, work);
+-		else if (!wq_list_empty(&wqe->work_list))
+-			wqe->flags |= IO_WQE_FLAG_STALLED;
+-
+-		raw_spin_unlock_irq(&wqe->lock);
+-		if (!work)
+-			break;
+-		io_assign_current_work(worker, work);
+-
+-		/* handle a whole dependent link */
+-		do {
+-			struct io_wq_work *old_work, *next_hashed, *linked;
+-			unsigned int hash = io_get_work_hash(work);
+-
+-			next_hashed = wq_next_work(work);
+-			io_impersonate_work(worker, work);
+-			/*
+-			 * OK to set IO_WQ_WORK_CANCEL even for uncancellable
+-			 * work, the worker function will do the right thing.
+-			 */
+-			if (test_bit(IO_WQ_BIT_CANCEL, &wq->state))
+-				work->flags |= IO_WQ_WORK_CANCEL;
+-
+-			old_work = work;
+-			linked = wq->do_work(work);
+-
+-			work = next_hashed;
+-			if (!work && linked && !io_wq_is_hashed(linked)) {
+-				work = linked;
+-				linked = NULL;
+-			}
+-			io_assign_current_work(worker, work);
+-			wq->free_work(old_work);
+-
+-			if (linked)
+-				io_wqe_enqueue(wqe, linked);
+-
+-			if (hash != -1U && !next_hashed) {
+-				raw_spin_lock_irq(&wqe->lock);
+-				wqe->hash_map &= ~BIT_ULL(hash);
+-				wqe->flags &= ~IO_WQE_FLAG_STALLED;
+-				/* skip unnecessary unlock-lock wqe->lock */
+-				if (!work)
+-					goto get_next;
+-				raw_spin_unlock_irq(&wqe->lock);
+-			}
+-		} while (work);
+-
+-		raw_spin_lock_irq(&wqe->lock);
+-	} while (1);
+-}
+-
+-static int io_wqe_worker(void *data)
+-{
+-	struct io_worker *worker = data;
+-	struct io_wqe *wqe = worker->wqe;
+-	struct io_wq *wq = wqe->wq;
+-
+-	io_worker_start(wqe, worker);
+-
+-	while (!test_bit(IO_WQ_BIT_EXIT, &wq->state)) {
+-		set_current_state(TASK_INTERRUPTIBLE);
+-loop:
+-		raw_spin_lock_irq(&wqe->lock);
+-		if (io_wqe_run_queue(wqe)) {
+-			__set_current_state(TASK_RUNNING);
+-			io_worker_handle_work(worker);
+-			goto loop;
+-		}
+-		/* drops the lock on success, retry */
+-		if (__io_worker_idle(wqe, worker)) {
+-			__release(&wqe->lock);
+-			goto loop;
+-		}
+-		raw_spin_unlock_irq(&wqe->lock);
+-		if (signal_pending(current))
+-			flush_signals(current);
+-		if (schedule_timeout(WORKER_IDLE_TIMEOUT))
+-			continue;
+-		/* timed out, exit unless we're the fixed worker */
+-		if (test_bit(IO_WQ_BIT_EXIT, &wq->state) ||
+-		    !(worker->flags & IO_WORKER_F_FIXED))
+-			break;
+-	}
+-
+-	if (test_bit(IO_WQ_BIT_EXIT, &wq->state)) {
+-		raw_spin_lock_irq(&wqe->lock);
+-		if (!wq_list_empty(&wqe->work_list))
+-			io_worker_handle_work(worker);
+-		else
+-			raw_spin_unlock_irq(&wqe->lock);
+-	}
+-
+-	io_worker_exit(worker);
+-	return 0;
+-}
+-
+-/*
+- * Called when a worker is scheduled in. Mark us as currently running.
+- */
+-void io_wq_worker_running(struct task_struct *tsk)
+-{
+-	struct io_worker *worker = kthread_data(tsk);
+-	struct io_wqe *wqe = worker->wqe;
+-
+-	if (!(worker->flags & IO_WORKER_F_UP))
+-		return;
+-	if (worker->flags & IO_WORKER_F_RUNNING)
+-		return;
+-	worker->flags |= IO_WORKER_F_RUNNING;
+-	io_wqe_inc_running(wqe, worker);
+-}
+-
+-/*
+- * Called when worker is going to sleep. If there are no workers currently
+- * running and we have work pending, wake up a free one or have the manager
+- * set one up.
+- */
+-void io_wq_worker_sleeping(struct task_struct *tsk)
+-{
+-	struct io_worker *worker = kthread_data(tsk);
+-	struct io_wqe *wqe = worker->wqe;
+-
+-	if (!(worker->flags & IO_WORKER_F_UP))
+-		return;
+-	if (!(worker->flags & IO_WORKER_F_RUNNING))
+-		return;
+-
+-	worker->flags &= ~IO_WORKER_F_RUNNING;
+-
+-	raw_spin_lock_irq(&wqe->lock);
+-	io_wqe_dec_running(wqe, worker);
+-	raw_spin_unlock_irq(&wqe->lock);
+-}
+-
+-static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
+-{
+-	struct io_wqe_acct *acct = &wqe->acct[index];
+-	struct io_worker *worker;
+-
+-	worker = kzalloc_node(sizeof(*worker), GFP_KERNEL, wqe->node);
+-	if (!worker)
+-		return false;
+-
+-	refcount_set(&worker->ref, 1);
+-	worker->nulls_node.pprev = NULL;
+-	worker->wqe = wqe;
+-	spin_lock_init(&worker->lock);
+-
+-	worker->task = kthread_create_on_node(io_wqe_worker, worker, wqe->node,
+-				"io_wqe_worker-%d/%d", index, wqe->node);
+-	if (IS_ERR(worker->task)) {
+-		kfree(worker);
+-		return false;
+-	}
+-	kthread_bind_mask(worker->task, cpumask_of_node(wqe->node));
+-
+-	raw_spin_lock_irq(&wqe->lock);
+-	hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
+-	list_add_tail_rcu(&worker->all_list, &wqe->all_list);
+-	worker->flags |= IO_WORKER_F_FREE;
+-	if (index == IO_WQ_ACCT_BOUND)
+-		worker->flags |= IO_WORKER_F_BOUND;
+-	if (!acct->nr_workers && (worker->flags & IO_WORKER_F_BOUND))
+-		worker->flags |= IO_WORKER_F_FIXED;
+-	acct->nr_workers++;
+-	raw_spin_unlock_irq(&wqe->lock);
+-
+-	if (index == IO_WQ_ACCT_UNBOUND)
+-		atomic_inc(&wq->user->processes);
+-
+-	refcount_inc(&wq->refs);
+-	wake_up_process(worker->task);
+-	return true;
+-}
+-
+-static inline bool io_wqe_need_worker(struct io_wqe *wqe, int index)
+-	__must_hold(wqe->lock)
+-{
+-	struct io_wqe_acct *acct = &wqe->acct[index];
+-
+-	/* if we have available workers or no work, no need */
+-	if (!hlist_nulls_empty(&wqe->free_list) || !io_wqe_run_queue(wqe))
+-		return false;
+-	return acct->nr_workers < acct->max_workers;
+-}
+-
+-static bool io_wqe_worker_send_sig(struct io_worker *worker, void *data)
+-{
+-	send_sig(SIGINT, worker->task, 1);
+-	return false;
+-}
+-
+-/*
+- * Iterate the passed in list and call the specific function for each
+- * worker that isn't exiting
+- */
+-static bool io_wq_for_each_worker(struct io_wqe *wqe,
+-				  bool (*func)(struct io_worker *, void *),
+-				  void *data)
+-{
+-	struct io_worker *worker;
+-	bool ret = false;
+-
+-	list_for_each_entry_rcu(worker, &wqe->all_list, all_list) {
+-		if (io_worker_get(worker)) {
+-			/* no task if node is/was offline */
+-			if (worker->task)
+-				ret = func(worker, data);
+-			io_worker_release(worker);
+-			if (ret)
+-				break;
+-		}
+-	}
+-
+-	return ret;
+-}
+-
+-static bool io_wq_worker_wake(struct io_worker *worker, void *data)
+-{
+-	wake_up_process(worker->task);
+-	return false;
+-}
+-
+-/*
+- * Manager thread. Tasked with creating new workers, if we need them.
+- */
+-static int io_wq_manager(void *data)
+-{
+-	struct io_wq *wq = data;
+-	int node;
+-
+-	/* create fixed workers */
+-	refcount_set(&wq->refs, 1);
+-	for_each_node(node) {
+-		if (!node_online(node))
+-			continue;
+-		if (create_io_worker(wq, wq->wqes[node], IO_WQ_ACCT_BOUND))
+-			continue;
+-		set_bit(IO_WQ_BIT_ERROR, &wq->state);
+-		set_bit(IO_WQ_BIT_EXIT, &wq->state);
+-		goto out;
+-	}
+-
+-	complete(&wq->done);
+-
+-	while (!kthread_should_stop()) {
+-		if (current->task_works)
+-			task_work_run();
+-
+-		for_each_node(node) {
+-			struct io_wqe *wqe = wq->wqes[node];
+-			bool fork_worker[2] = { false, false };
+-
+-			if (!node_online(node))
+-				continue;
+-
+-			raw_spin_lock_irq(&wqe->lock);
+-			if (io_wqe_need_worker(wqe, IO_WQ_ACCT_BOUND))
+-				fork_worker[IO_WQ_ACCT_BOUND] = true;
+-			if (io_wqe_need_worker(wqe, IO_WQ_ACCT_UNBOUND))
+-				fork_worker[IO_WQ_ACCT_UNBOUND] = true;
+-			raw_spin_unlock_irq(&wqe->lock);
+-			if (fork_worker[IO_WQ_ACCT_BOUND])
+-				create_io_worker(wq, wqe, IO_WQ_ACCT_BOUND);
+-			if (fork_worker[IO_WQ_ACCT_UNBOUND])
+-				create_io_worker(wq, wqe, IO_WQ_ACCT_UNBOUND);
+-		}
+-		set_current_state(TASK_INTERRUPTIBLE);
+-		schedule_timeout(HZ);
+-	}
+-
+-	if (current->task_works)
+-		task_work_run();
+-
+-out:
+-	if (refcount_dec_and_test(&wq->refs)) {
+-		complete(&wq->done);
+-		return 0;
+-	}
+-	/* if ERROR is set and we get here, we have workers to wake */
+-	if (test_bit(IO_WQ_BIT_ERROR, &wq->state)) {
+-		rcu_read_lock();
+-		for_each_node(node)
+-			io_wq_for_each_worker(wq->wqes[node], io_wq_worker_wake, NULL);
+-		rcu_read_unlock();
+-	}
+-	return 0;
+-}
+-
+-static bool io_wq_can_queue(struct io_wqe *wqe, struct io_wqe_acct *acct,
+-			    struct io_wq_work *work)
+-{
+-	bool free_worker;
+-
+-	if (!(work->flags & IO_WQ_WORK_UNBOUND))
+-		return true;
+-	if (atomic_read(&acct->nr_running))
+-		return true;
+-
+-	rcu_read_lock();
+-	free_worker = !hlist_nulls_empty(&wqe->free_list);
+-	rcu_read_unlock();
+-	if (free_worker)
+-		return true;
+-
+-	if (atomic_read(&wqe->wq->user->processes) >= acct->max_workers &&
+-	    !(capable(CAP_SYS_RESOURCE) || capable(CAP_SYS_ADMIN)))
+-		return false;
+-
+-	return true;
+-}
+-
+-static void io_run_cancel(struct io_wq_work *work, struct io_wqe *wqe)
+-{
+-	struct io_wq *wq = wqe->wq;
+-
+-	do {
+-		struct io_wq_work *old_work = work;
+-
+-		work->flags |= IO_WQ_WORK_CANCEL;
+-		work = wq->do_work(work);
+-		wq->free_work(old_work);
+-	} while (work);
+-}
+-
+-static void io_wqe_insert_work(struct io_wqe *wqe, struct io_wq_work *work)
+-{
+-	unsigned int hash;
+-	struct io_wq_work *tail;
+-
+-	if (!io_wq_is_hashed(work)) {
+-append:
+-		wq_list_add_tail(&work->list, &wqe->work_list);
+-		return;
+-	}
+-
+-	hash = io_get_work_hash(work);
+-	tail = wqe->hash_tail[hash];
+-	wqe->hash_tail[hash] = work;
+-	if (!tail)
+-		goto append;
+-
+-	wq_list_add_after(&work->list, &tail->list, &wqe->work_list);
+-}
+-
+-static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
+-{
+-	struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
+-	bool do_wake;
+-	unsigned long flags;
+-
+-	/*
+-	 * Do early check to see if we need a new unbound worker, and if we do,
+-	 * if we're allowed to do so. This isn't 100% accurate as there's a
+-	 * gap between this check and incrementing the value, but that's OK.
+-	 * It's close enough to not be an issue, fork() has the same delay.
+-	 */
+-	if (unlikely(!io_wq_can_queue(wqe, acct, work))) {
+-		io_run_cancel(work, wqe);
+-		return;
+-	}
+-
+-	raw_spin_lock_irqsave(&wqe->lock, flags);
+-	io_wqe_insert_work(wqe, work);
+-	wqe->flags &= ~IO_WQE_FLAG_STALLED;
+-	do_wake = (work->flags & IO_WQ_WORK_CONCURRENT) ||
+-			!atomic_read(&acct->nr_running);
+-	raw_spin_unlock_irqrestore(&wqe->lock, flags);
+-
+-	if (do_wake)
+-		io_wqe_wake_worker(wqe, acct);
+-}
+-
+-void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work)
+-{
+-	struct io_wqe *wqe = wq->wqes[numa_node_id()];
+-
+-	io_wqe_enqueue(wqe, work);
+-}
+-
+-/*
+- * Work items that hash to the same value will not be done in parallel.
+- * Used to limit concurrent writes, generally hashed by inode.
+- */
+-void io_wq_hash_work(struct io_wq_work *work, void *val)
+-{
+-	unsigned int bit;
+-
+-	bit = hash_ptr(val, IO_WQ_HASH_ORDER);
+-	work->flags |= (IO_WQ_WORK_HASHED | (bit << IO_WQ_HASH_SHIFT));
+-}
+-
+-void io_wq_cancel_all(struct io_wq *wq)
+-{
+-	int node;
+-
+-	set_bit(IO_WQ_BIT_CANCEL, &wq->state);
+-
+-	rcu_read_lock();
+-	for_each_node(node) {
+-		struct io_wqe *wqe = wq->wqes[node];
+-
+-		io_wq_for_each_worker(wqe, io_wqe_worker_send_sig, NULL);
+-	}
+-	rcu_read_unlock();
+-}
+-
+-struct io_cb_cancel_data {
+-	work_cancel_fn *fn;
+-	void *data;
+-	int nr_running;
+-	int nr_pending;
+-	bool cancel_all;
+-};
+-
+-static bool io_wq_worker_cancel(struct io_worker *worker, void *data)
+-{
+-	struct io_cb_cancel_data *match = data;
+-	unsigned long flags;
+-
+-	/*
+-	 * Hold the lock to avoid ->cur_work going out of scope, caller
+-	 * may dereference the passed in work.
+-	 */
+-	spin_lock_irqsave(&worker->lock, flags);
+-	if (worker->cur_work &&
+-	    !(worker->cur_work->flags & IO_WQ_WORK_NO_CANCEL) &&
+-	    match->fn(worker->cur_work, match->data)) {
+-		send_sig(SIGINT, worker->task, 1);
+-		match->nr_running++;
+-	}
+-	spin_unlock_irqrestore(&worker->lock, flags);
+-
+-	return match->nr_running && !match->cancel_all;
+-}
+-
+-static inline void io_wqe_remove_pending(struct io_wqe *wqe,
+-					 struct io_wq_work *work,
+-					 struct io_wq_work_node *prev)
+-{
+-	unsigned int hash = io_get_work_hash(work);
+-	struct io_wq_work *prev_work = NULL;
+-
+-	if (io_wq_is_hashed(work) && work == wqe->hash_tail[hash]) {
+-		if (prev)
+-			prev_work = container_of(prev, struct io_wq_work, list);
+-		if (prev_work && io_get_work_hash(prev_work) == hash)
+-			wqe->hash_tail[hash] = prev_work;
+-		else
+-			wqe->hash_tail[hash] = NULL;
+-	}
+-	wq_list_del(&wqe->work_list, &work->list, prev);
+-}
+-
+-static void io_wqe_cancel_pending_work(struct io_wqe *wqe,
+-				       struct io_cb_cancel_data *match)
+-{
+-	struct io_wq_work_node *node, *prev;
+-	struct io_wq_work *work;
+-	unsigned long flags;
+-
+-retry:
+-	raw_spin_lock_irqsave(&wqe->lock, flags);
+-	wq_list_for_each(node, prev, &wqe->work_list) {
+-		work = container_of(node, struct io_wq_work, list);
+-		if (!match->fn(work, match->data))
+-			continue;
+-		io_wqe_remove_pending(wqe, work, prev);
+-		raw_spin_unlock_irqrestore(&wqe->lock, flags);
+-		io_run_cancel(work, wqe);
+-		match->nr_pending++;
+-		if (!match->cancel_all)
+-			return;
+-
+-		/* not safe to continue after unlock */
+-		goto retry;
+-	}
+-	raw_spin_unlock_irqrestore(&wqe->lock, flags);
+-}
+-
+-static void io_wqe_cancel_running_work(struct io_wqe *wqe,
+-				       struct io_cb_cancel_data *match)
+-{
+-	rcu_read_lock();
+-	io_wq_for_each_worker(wqe, io_wq_worker_cancel, match);
+-	rcu_read_unlock();
+-}
+-
+-enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
+-				  void *data, bool cancel_all)
+-{
+-	struct io_cb_cancel_data match = {
+-		.fn		= cancel,
+-		.data		= data,
+-		.cancel_all	= cancel_all,
+-	};
+-	int node;
+-
+-	/*
+-	 * First check pending list, if we're lucky we can just remove it
+-	 * from there. CANCEL_OK means that the work is returned as-new,
+-	 * no completion will be posted for it.
+-	 */
+-	for_each_node(node) {
+-		struct io_wqe *wqe = wq->wqes[node];
+-
+-		io_wqe_cancel_pending_work(wqe, &match);
+-		if (match.nr_pending && !match.cancel_all)
+-			return IO_WQ_CANCEL_OK;
+-	}
+-
+-	/*
+-	 * Now check if a free (going busy) or busy worker has the work
+-	 * currently running. If we find it there, we'll return CANCEL_RUNNING
+-	 * as an indication that we attempt to signal cancellation. The
+-	 * completion will run normally in this case.
+-	 */
+-	for_each_node(node) {
+-		struct io_wqe *wqe = wq->wqes[node];
+-
+-		io_wqe_cancel_running_work(wqe, &match);
+-		if (match.nr_running && !match.cancel_all)
+-			return IO_WQ_CANCEL_RUNNING;
+-	}
+-
+-	if (match.nr_running)
+-		return IO_WQ_CANCEL_RUNNING;
+-	if (match.nr_pending)
+-		return IO_WQ_CANCEL_OK;
+-	return IO_WQ_CANCEL_NOTFOUND;
+-}
+-
+-struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
+-{
+-	int ret = -ENOMEM, node;
+-	struct io_wq *wq;
+-
+-	if (WARN_ON_ONCE(!data->free_work || !data->do_work))
+-		return ERR_PTR(-EINVAL);
+-	if (WARN_ON_ONCE(!bounded))
+-		return ERR_PTR(-EINVAL);
+-
+-	wq = kzalloc(sizeof(*wq), GFP_KERNEL);
+-	if (!wq)
+-		return ERR_PTR(-ENOMEM);
+-
+-	wq->wqes = kcalloc(nr_node_ids, sizeof(struct io_wqe *), GFP_KERNEL);
+-	if (!wq->wqes)
+-		goto err_wq;
+-
+-	ret = cpuhp_state_add_instance_nocalls(io_wq_online, &wq->cpuhp_node);
+-	if (ret)
+-		goto err_wqes;
+-
+-	wq->free_work = data->free_work;
+-	wq->do_work = data->do_work;
+-
+-	/* caller must already hold a reference to this */
+-	wq->user = data->user;
+-
+-	ret = -ENOMEM;
+-	for_each_node(node) {
+-		struct io_wqe *wqe;
+-		int alloc_node = node;
+-
+-		if (!node_online(alloc_node))
+-			alloc_node = NUMA_NO_NODE;
+-		wqe = kzalloc_node(sizeof(struct io_wqe), GFP_KERNEL, alloc_node);
+-		if (!wqe)
+-			goto err;
+-		wq->wqes[node] = wqe;
+-		wqe->node = alloc_node;
+-		wqe->acct[IO_WQ_ACCT_BOUND].max_workers = bounded;
+-		atomic_set(&wqe->acct[IO_WQ_ACCT_BOUND].nr_running, 0);
+-		if (wq->user) {
+-			wqe->acct[IO_WQ_ACCT_UNBOUND].max_workers =
+-					task_rlimit(current, RLIMIT_NPROC);
+-		}
+-		atomic_set(&wqe->acct[IO_WQ_ACCT_UNBOUND].nr_running, 0);
+-		wqe->wq = wq;
+-		raw_spin_lock_init(&wqe->lock);
+-		INIT_WQ_LIST(&wqe->work_list);
+-		INIT_HLIST_NULLS_HEAD(&wqe->free_list, 0);
+-		INIT_LIST_HEAD(&wqe->all_list);
+-	}
+-
+-	init_completion(&wq->done);
+-
+-	wq->manager = kthread_create(io_wq_manager, wq, "io_wq_manager");
+-	if (!IS_ERR(wq->manager)) {
+-		wake_up_process(wq->manager);
+-		wait_for_completion(&wq->done);
+-		if (test_bit(IO_WQ_BIT_ERROR, &wq->state)) {
+-			ret = -ENOMEM;
+-			goto err;
+-		}
+-		refcount_set(&wq->use_refs, 1);
+-		reinit_completion(&wq->done);
+-		return wq;
+-	}
+-
+-	ret = PTR_ERR(wq->manager);
+-	complete(&wq->done);
+-err:
+-	cpuhp_state_remove_instance_nocalls(io_wq_online, &wq->cpuhp_node);
+-	for_each_node(node)
+-		kfree(wq->wqes[node]);
+-err_wqes:
+-	kfree(wq->wqes);
+-err_wq:
+-	kfree(wq);
+-	return ERR_PTR(ret);
+-}
+-
+-bool io_wq_get(struct io_wq *wq, struct io_wq_data *data)
+-{
+-	if (data->free_work != wq->free_work || data->do_work != wq->do_work)
+-		return false;
+-
+-	return refcount_inc_not_zero(&wq->use_refs);
+-}
+-
+-static void __io_wq_destroy(struct io_wq *wq)
+-{
+-	int node;
+-
+-	cpuhp_state_remove_instance_nocalls(io_wq_online, &wq->cpuhp_node);
+-
+-	set_bit(IO_WQ_BIT_EXIT, &wq->state);
+-	if (wq->manager)
+-		kthread_stop(wq->manager);
+-
+-	rcu_read_lock();
+-	for_each_node(node)
+-		io_wq_for_each_worker(wq->wqes[node], io_wq_worker_wake, NULL);
+-	rcu_read_unlock();
+-
+-	wait_for_completion(&wq->done);
+-
+-	for_each_node(node)
+-		kfree(wq->wqes[node]);
+-	kfree(wq->wqes);
+-	kfree(wq);
+-}
+-
+-void io_wq_destroy(struct io_wq *wq)
+-{
+-	if (refcount_dec_and_test(&wq->use_refs))
+-		__io_wq_destroy(wq);
+-}
+-
+-struct task_struct *io_wq_get_task(struct io_wq *wq)
+-{
+-	return wq->manager;
+-}
+-
+-static bool io_wq_worker_affinity(struct io_worker *worker, void *data)
+-{
+-	struct task_struct *task = worker->task;
+-	struct rq_flags rf;
+-	struct rq *rq;
+-
+-	rq = task_rq_lock(task, &rf);
+-	do_set_cpus_allowed(task, cpumask_of_node(worker->wqe->node));
+-	task->flags |= PF_NO_SETAFFINITY;
+-	task_rq_unlock(rq, task, &rf);
+-	return false;
+-}
+-
+-static int io_wq_cpu_online(unsigned int cpu, struct hlist_node *node)
+-{
+-	struct io_wq *wq = hlist_entry_safe(node, struct io_wq, cpuhp_node);
+-	int i;
+-
+-	rcu_read_lock();
+-	for_each_node(i)
+-		io_wq_for_each_worker(wq->wqes[i], io_wq_worker_affinity, NULL);
+-	rcu_read_unlock();
+-	return 0;
+-}
+-
+-static __init int io_wq_init(void)
+-{
+-	int ret;
+-
+-	ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "io-wq/online",
+-					io_wq_cpu_online, NULL);
+-	if (ret < 0)
+-		return ret;
+-	io_wq_online = ret;
+-	return 0;
+-}
+-subsys_initcall(io_wq_init);
+diff --git a/fs/io-wq.h b/fs/io-wq.h
+deleted file mode 100644
+index 75113bcd5889f..0000000000000
+--- a/fs/io-wq.h
++++ /dev/null
+@@ -1,157 +0,0 @@
+-#ifndef INTERNAL_IO_WQ_H
+-#define INTERNAL_IO_WQ_H
+-
+-#include <linux/io_uring.h>
+-
+-struct io_wq;
+-
+-enum {
+-	IO_WQ_WORK_CANCEL	= 1,
+-	IO_WQ_WORK_HASHED	= 2,
+-	IO_WQ_WORK_UNBOUND	= 4,
+-	IO_WQ_WORK_NO_CANCEL	= 8,
+-	IO_WQ_WORK_CONCURRENT	= 16,
+-
+-	IO_WQ_WORK_FILES	= 32,
+-	IO_WQ_WORK_FS		= 64,
+-	IO_WQ_WORK_MM		= 128,
+-	IO_WQ_WORK_CREDS	= 256,
+-	IO_WQ_WORK_BLKCG	= 512,
+-	IO_WQ_WORK_FSIZE	= 1024,
+-
+-	IO_WQ_HASH_SHIFT	= 24,	/* upper 8 bits are used for hash key */
+-};
+-
+-enum io_wq_cancel {
+-	IO_WQ_CANCEL_OK,	/* cancelled before started */
+-	IO_WQ_CANCEL_RUNNING,	/* found, running, and attempted cancelled */
+-	IO_WQ_CANCEL_NOTFOUND,	/* work not found */
+-};
+-
+-struct io_wq_work_node {
+-	struct io_wq_work_node *next;
+-};
+-
+-struct io_wq_work_list {
+-	struct io_wq_work_node *first;
+-	struct io_wq_work_node *last;
+-};
+-
+-static inline void wq_list_add_after(struct io_wq_work_node *node,
+-				     struct io_wq_work_node *pos,
+-				     struct io_wq_work_list *list)
+-{
+-	struct io_wq_work_node *next = pos->next;
+-
+-	pos->next = node;
+-	node->next = next;
+-	if (!next)
+-		list->last = node;
+-}
+-
+-static inline void wq_list_add_tail(struct io_wq_work_node *node,
+-				    struct io_wq_work_list *list)
+-{
+-	if (!list->first) {
+-		list->last = node;
+-		WRITE_ONCE(list->first, node);
+-	} else {
+-		list->last->next = node;
+-		list->last = node;
+-	}
+-	node->next = NULL;
+-}
+-
+-static inline void wq_list_cut(struct io_wq_work_list *list,
+-			       struct io_wq_work_node *last,
+-			       struct io_wq_work_node *prev)
+-{
+-	/* first in the list, if prev==NULL */
+-	if (!prev)
+-		WRITE_ONCE(list->first, last->next);
+-	else
+-		prev->next = last->next;
+-
+-	if (last == list->last)
+-		list->last = prev;
+-	last->next = NULL;
+-}
+-
+-static inline void wq_list_del(struct io_wq_work_list *list,
+-			       struct io_wq_work_node *node,
+-			       struct io_wq_work_node *prev)
+-{
+-	wq_list_cut(list, node, prev);
+-}
+-
+-#define wq_list_for_each(pos, prv, head)			\
+-	for (pos = (head)->first, prv = NULL; pos; prv = pos, pos = (pos)->next)
+-
+-#define wq_list_empty(list)	(READ_ONCE((list)->first) == NULL)
+-#define INIT_WQ_LIST(list)	do {				\
+-	(list)->first = NULL;					\
+-	(list)->last = NULL;					\
+-} while (0)
+-
+-struct io_wq_work {
+-	struct io_wq_work_node list;
+-	struct io_identity *identity;
+-	unsigned flags;
+-};
+-
+-static inline struct io_wq_work *wq_next_work(struct io_wq_work *work)
+-{
+-	if (!work->list.next)
+-		return NULL;
+-
+-	return container_of(work->list.next, struct io_wq_work, list);
+-}
+-
+-typedef void (free_work_fn)(struct io_wq_work *);
+-typedef struct io_wq_work *(io_wq_work_fn)(struct io_wq_work *);
+-
+-struct io_wq_data {
+-	struct user_struct *user;
+-
+-	io_wq_work_fn *do_work;
+-	free_work_fn *free_work;
+-};
+-
+-struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data);
+-bool io_wq_get(struct io_wq *wq, struct io_wq_data *data);
+-void io_wq_destroy(struct io_wq *wq);
+-
+-void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work);
+-void io_wq_hash_work(struct io_wq_work *work, void *val);
+-
+-static inline bool io_wq_is_hashed(struct io_wq_work *work)
+-{
+-	return work->flags & IO_WQ_WORK_HASHED;
+-}
+-
+-void io_wq_cancel_all(struct io_wq *wq);
+-
+-typedef bool (work_cancel_fn)(struct io_wq_work *, void *);
+-
+-enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
+-					void *data, bool cancel_all);
+-
+-struct task_struct *io_wq_get_task(struct io_wq *wq);
+-
+-#if defined(CONFIG_IO_WQ)
+-extern void io_wq_worker_sleeping(struct task_struct *);
+-extern void io_wq_worker_running(struct task_struct *);
+-#else
+-static inline void io_wq_worker_sleeping(struct task_struct *tsk)
+-{
+-}
+-static inline void io_wq_worker_running(struct task_struct *tsk)
+-{
+-}
+-#endif
+-
+-static inline bool io_wq_current_is_worker(void)
+-{
+-	return in_task() && (current->flags & PF_IO_WORKER);
+-}
+-#endif
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+deleted file mode 100644
+index 84758e512a045..0000000000000
+--- a/fs/io_uring.c
++++ /dev/null
+@@ -1,9971 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Shared application/kernel submission and completion ring pairs, for
+- * supporting fast/efficient IO.
+- *
+- * A note on the read/write ordering memory barriers that are matched between
+- * the application and kernel side.
+- *
+- * After the application reads the CQ ring tail, it must use an
+- * appropriate smp_rmb() to pair with the smp_wmb() the kernel uses
+- * before writing the tail (using smp_load_acquire to read the tail will
+- * do). It also needs a smp_mb() before updating CQ head (ordering the
+- * entry load(s) with the head store), pairing with an implicit barrier
+- * through a control-dependency in io_get_cqring (smp_store_release to
+- * store head will do). Failure to do so could lead to reading invalid
+- * CQ entries.
+- *
+- * Likewise, the application must use an appropriate smp_wmb() before
+- * writing the SQ tail (ordering SQ entry stores with the tail store),
+- * which pairs with smp_load_acquire in io_get_sqring (smp_store_release
+- * to store the tail will do). And it needs a barrier ordering the SQ
+- * head load before writing new SQ entries (smp_load_acquire to read
+- * head will do).
+- *
+- * When using the SQ poll thread (IORING_SETUP_SQPOLL), the application
+- * needs to check the SQ flags for IORING_SQ_NEED_WAKEUP *after*
+- * updating the SQ tail; a full memory barrier smp_mb() is needed
+- * between.
+- *
+- * Also see the examples in the liburing library:
+- *
+- *	git://git.kernel.dk/liburing
+- *
+- * io_uring also uses READ/WRITE_ONCE() for _any_ store or load that happens
+- * from data shared between the kernel and application. This is done both
+- * for ordering purposes, but also to ensure that once a value is loaded from
+- * data that the application could potentially modify, it remains stable.
+- *
+- * Copyright (C) 2018-2019 Jens Axboe
+- * Copyright (c) 2018-2019 Christoph Hellwig
+- */
+-#include <linux/kernel.h>
+-#include <linux/init.h>
+-#include <linux/errno.h>
+-#include <linux/syscalls.h>
+-#include <linux/compat.h>
+-#include <net/compat.h>
+-#include <linux/refcount.h>
+-#include <linux/uio.h>
+-#include <linux/bits.h>
+-
+-#include <linux/sched/signal.h>
+-#include <linux/fs.h>
+-#include <linux/file.h>
+-#include <linux/fdtable.h>
+-#include <linux/mm.h>
+-#include <linux/mman.h>
+-#include <linux/percpu.h>
+-#include <linux/slab.h>
+-#include <linux/kthread.h>
+-#include <linux/blkdev.h>
+-#include <linux/bvec.h>
+-#include <linux/net.h>
+-#include <net/sock.h>
+-#include <net/af_unix.h>
+-#include <net/scm.h>
+-#include <linux/anon_inodes.h>
+-#include <linux/sched/mm.h>
+-#include <linux/uaccess.h>
+-#include <linux/nospec.h>
+-#include <linux/sizes.h>
+-#include <linux/hugetlb.h>
+-#include <linux/highmem.h>
+-#include <linux/namei.h>
+-#include <linux/fsnotify.h>
+-#include <linux/fadvise.h>
+-#include <linux/eventpoll.h>
+-#include <linux/fs_struct.h>
+-#include <linux/splice.h>
+-#include <linux/task_work.h>
+-#include <linux/pagemap.h>
+-#include <linux/io_uring.h>
+-#include <linux/blk-cgroup.h>
+-#include <linux/audit.h>
+-
+-#define CREATE_TRACE_POINTS
+-#include <trace/events/io_uring.h>
+-
+-#include <uapi/linux/io_uring.h>
+-
+-#include "internal.h"
+-#include "io-wq.h"
+-
+-#define IORING_MAX_ENTRIES	32768
+-#define IORING_MAX_CQ_ENTRIES	(2 * IORING_MAX_ENTRIES)
+-
+-/*
+- * Shift of 9 is 512 entries, or exactly one page on 64-bit archs
+- */
+-#define IORING_FILE_TABLE_SHIFT	9
+-#define IORING_MAX_FILES_TABLE	(1U << IORING_FILE_TABLE_SHIFT)
+-#define IORING_FILE_TABLE_MASK	(IORING_MAX_FILES_TABLE - 1)
+-#define IORING_MAX_FIXED_FILES	(64 * IORING_MAX_FILES_TABLE)
+-#define IORING_MAX_RESTRICTIONS	(IORING_RESTRICTION_LAST + \
+-				 IORING_REGISTER_LAST + IORING_OP_LAST)
+-
+-struct io_uring {
+-	u32 head ____cacheline_aligned_in_smp;
+-	u32 tail ____cacheline_aligned_in_smp;
+-};
+-
+-/*
+- * This data is shared with the application through the mmap at offsets
+- * IORING_OFF_SQ_RING and IORING_OFF_CQ_RING.
+- *
+- * The offsets to the member fields are published through struct
+- * io_sqring_offsets when calling io_uring_setup.
+- */
+-struct io_rings {
+-	/*
+-	 * Head and tail offsets into the ring; the offsets need to be
+-	 * masked to get valid indices.
+-	 *
+-	 * The kernel controls head of the sq ring and the tail of the cq ring,
+-	 * and the application controls tail of the sq ring and the head of the
+-	 * cq ring.
+-	 */
+-	struct io_uring		sq, cq;
+-	/*
+-	 * Bitmasks to apply to head and tail offsets (constant, equals
+-	 * ring_entries - 1)
+-	 */
+-	u32			sq_ring_mask, cq_ring_mask;
+-	/* Ring sizes (constant, power of 2) */
+-	u32			sq_ring_entries, cq_ring_entries;
+-	/*
+-	 * Number of invalid entries dropped by the kernel due to
+-	 * invalid index stored in array
+-	 *
+-	 * Written by the kernel, shouldn't be modified by the
+-	 * application (i.e. get number of "new events" by comparing to
+-	 * cached value).
+-	 *
+-	 * After a new SQ head value was read by the application this
+-	 * counter includes all submissions that were dropped reaching
+-	 * the new SQ head (and possibly more).
+-	 */
+-	u32			sq_dropped;
+-	/*
+-	 * Runtime SQ flags
+-	 *
+-	 * Written by the kernel, shouldn't be modified by the
+-	 * application.
+-	 *
+-	 * The application needs a full memory barrier before checking
+-	 * for IORING_SQ_NEED_WAKEUP after updating the sq tail.
+-	 */
+-	u32			sq_flags;
+-	/*
+-	 * Runtime CQ flags
+-	 *
+-	 * Written by the application, shouldn't be modified by the
+-	 * kernel.
+-	 */
+-	u32                     cq_flags;
+-	/*
+-	 * Number of completion events lost because the queue was full;
+-	 * this should be avoided by the application by making sure
+-	 * there are not more requests pending than there is space in
+-	 * the completion queue.
+-	 *
+-	 * Written by the kernel, shouldn't be modified by the
+-	 * application (i.e. get number of "new events" by comparing to
+-	 * cached value).
+-	 *
+-	 * As completion events come in out of order this counter is not
+-	 * ordered with any other data.
+-	 */
+-	u32			cq_overflow;
+-	/*
+-	 * Ring buffer of completion events.
+-	 *
+-	 * The kernel writes completion events fresh every time they are
+-	 * produced, so the application is allowed to modify pending
+-	 * entries.
+-	 */
+-	struct io_uring_cqe	cqes[] ____cacheline_aligned_in_smp;
+-};
+-
+-struct io_mapped_ubuf {
+-	u64		ubuf;
+-	size_t		len;
+-	struct		bio_vec *bvec;
+-	unsigned int	nr_bvecs;
+-	unsigned long	acct_pages;
+-};
+-
+-struct fixed_file_table {
+-	struct file		**files;
+-};
+-
+-struct fixed_file_ref_node {
+-	struct percpu_ref		refs;
+-	struct list_head		node;
+-	struct list_head		file_list;
+-	struct fixed_file_data		*file_data;
+-	struct llist_node		llist;
+-	bool				done;
+-};
+-
+-struct fixed_file_data {
+-	struct fixed_file_table		*table;
+-	struct io_ring_ctx		*ctx;
+-
+-	struct fixed_file_ref_node	*node;
+-	struct percpu_ref		refs;
+-	struct completion		done;
+-	struct list_head		ref_list;
+-	spinlock_t			lock;
+-	bool				quiesce;
+-};
+-
+-struct io_buffer {
+-	struct list_head list;
+-	__u64 addr;
+-	__u32 len;
+-	__u16 bid;
+-};
+-
+-struct io_restriction {
+-	DECLARE_BITMAP(register_op, IORING_REGISTER_LAST);
+-	DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
+-	u8 sqe_flags_allowed;
+-	u8 sqe_flags_required;
+-	bool registered;
+-};
+-
+-struct io_sq_data {
+-	refcount_t		refs;
+-	struct mutex		lock;
+-
+-	/* ctx's that are using this sqd */
+-	struct list_head	ctx_list;
+-	struct list_head	ctx_new_list;
+-	struct mutex		ctx_lock;
+-
+-	struct task_struct	*thread;
+-	struct wait_queue_head	wait;
+-};
+-
+-struct io_ring_ctx {
+-	struct {
+-		struct percpu_ref	refs;
+-	} ____cacheline_aligned_in_smp;
+-
+-	struct {
+-		unsigned int		flags;
+-		unsigned int		compat: 1;
+-		unsigned int		limit_mem: 1;
+-		unsigned int		cq_overflow_flushed: 1;
+-		unsigned int		drain_next: 1;
+-		unsigned int		eventfd_async: 1;
+-		unsigned int		restricted: 1;
+-		unsigned int		sqo_dead: 1;
+-
+-		/*
+-		 * Ring buffer of indices into array of io_uring_sqe, which is
+-		 * mmapped by the application using the IORING_OFF_SQES offset.
+-		 *
+-		 * This indirection could e.g. be used to assign fixed
+-		 * io_uring_sqe entries to operations and only submit them to
+-		 * the queue when needed.
+-		 *
+-		 * The kernel modifies neither the indices array nor the entries
+-		 * array.
+-		 */
+-		u32			*sq_array;
+-		unsigned		cached_sq_head;
+-		unsigned		sq_entries;
+-		unsigned		sq_mask;
+-		unsigned		sq_thread_idle;
+-		unsigned		cached_sq_dropped;
+-		unsigned		cached_cq_overflow;
+-		unsigned long		sq_check_overflow;
+-
+-		struct list_head	defer_list;
+-		struct list_head	timeout_list;
+-		struct list_head	cq_overflow_list;
+-
+-		struct io_uring_sqe	*sq_sqes;
+-	} ____cacheline_aligned_in_smp;
+-
+-	struct io_rings	*rings;
+-
+-	/* IO offload */
+-	struct io_wq		*io_wq;
+-
+-	/*
+-	 * For SQPOLL usage - we hold a reference to the parent task, so we
+-	 * have access to the ->files
+-	 */
+-	struct task_struct	*sqo_task;
+-
+-	/* Only used for accounting purposes */
+-	struct mm_struct	*mm_account;
+-
+-#ifdef CONFIG_BLK_CGROUP
+-	struct cgroup_subsys_state	*sqo_blkcg_css;
+-#endif
+-
+-	struct io_sq_data	*sq_data;	/* if using sq thread polling */
+-
+-	struct wait_queue_head	sqo_sq_wait;
+-	struct wait_queue_entry	sqo_wait_entry;
+-	struct list_head	sqd_list;
+-
+-	/*
+-	 * If used, fixed file set. Writers must ensure that ->refs is dead,
+-	 * readers must ensure that ->refs is alive as long as the file* is
+-	 * used. Only updated through io_uring_register(2).
+-	 */
+-	struct fixed_file_data	*file_data;
+-	unsigned		nr_user_files;
+-
+-	/* if used, fixed mapped user buffers */
+-	unsigned		nr_user_bufs;
+-	struct io_mapped_ubuf	*user_bufs;
+-
+-	struct user_struct	*user;
+-
+-	const struct cred	*creds;
+-
+-#ifdef CONFIG_AUDIT
+-	kuid_t			loginuid;
+-	unsigned int		sessionid;
+-#endif
+-
+-	struct completion	ref_comp;
+-	struct completion	sq_thread_comp;
+-
+-	/* if all else fails... */
+-	struct io_kiocb		*fallback_req;
+-
+-#if defined(CONFIG_UNIX)
+-	struct socket		*ring_sock;
+-#endif
+-
+-	struct xarray		io_buffers;
+-
+-	struct xarray		personalities;
+-	u32			pers_next;
+-
+-	struct {
+-		unsigned		cached_cq_tail;
+-		unsigned		cq_entries;
+-		unsigned		cq_mask;
+-		atomic_t		cq_timeouts;
+-		unsigned		cq_last_tm_flush;
+-		unsigned long		cq_check_overflow;
+-		struct wait_queue_head	cq_wait;
+-		struct fasync_struct	*cq_fasync;
+-		struct eventfd_ctx	*cq_ev_fd;
+-	} ____cacheline_aligned_in_smp;
+-
+-	struct {
+-		struct mutex		uring_lock;
+-		wait_queue_head_t	wait;
+-	} ____cacheline_aligned_in_smp;
+-
+-	struct {
+-		spinlock_t		completion_lock;
+-
+-		/*
+-		 * ->iopoll_list is protected by the ctx->uring_lock for
+-		 * io_uring instances that don't use IORING_SETUP_SQPOLL.
+-		 * For SQPOLL, only the single threaded io_sq_thread() will
+-		 * manipulate the list, hence no extra locking is needed there.
+-		 */
+-		struct list_head	iopoll_list;
+-		struct hlist_head	*cancel_hash;
+-		unsigned		cancel_hash_bits;
+-		bool			poll_multi_file;
+-
+-		spinlock_t		inflight_lock;
+-		struct list_head	inflight_list;
+-	} ____cacheline_aligned_in_smp;
+-
+-	struct delayed_work		file_put_work;
+-	struct llist_head		file_put_llist;
+-
+-	struct work_struct		exit_work;
+-	struct io_restriction		restrictions;
+-};
+-
+-/*
+- * First field must be the file pointer in all the
+- * iocb unions! See also 'struct kiocb' in <linux/fs.h>
+- */
+-struct io_poll_iocb {
+-	struct file			*file;
+-	union {
+-		struct wait_queue_head	*head;
+-		u64			addr;
+-	};
+-	__poll_t			events;
+-	bool				done;
+-	bool				canceled;
+-	struct wait_queue_entry		wait;
+-};
+-
+-struct io_close {
+-	struct file			*file;
+-	struct file			*put_file;
+-	int				fd;
+-};
+-
+-struct io_timeout_data {
+-	struct io_kiocb			*req;
+-	struct hrtimer			timer;
+-	struct timespec64		ts;
+-	enum hrtimer_mode		mode;
+-};
+-
+-struct io_accept {
+-	struct file			*file;
+-	struct sockaddr __user		*addr;
+-	int __user			*addr_len;
+-	int				flags;
+-	unsigned long			nofile;
+-};
+-
+-struct io_sync {
+-	struct file			*file;
+-	loff_t				len;
+-	loff_t				off;
+-	int				flags;
+-	int				mode;
+-};
+-
+-struct io_cancel {
+-	struct file			*file;
+-	u64				addr;
+-};
+-
+-struct io_timeout {
+-	struct file			*file;
+-	u32				off;
+-	u32				target_seq;
+-	struct list_head		list;
+-};
+-
+-struct io_timeout_rem {
+-	struct file			*file;
+-	u64				addr;
+-};
+-
+-struct io_rw {
+-	/* NOTE: kiocb has the file as the first member, so don't do it here */
+-	struct kiocb			kiocb;
+-	u64				addr;
+-	u64				len;
+-};
+-
+-struct io_connect {
+-	struct file			*file;
+-	struct sockaddr __user		*addr;
+-	int				addr_len;
+-};
+-
+-struct io_sr_msg {
+-	struct file			*file;
+-	union {
+-		struct user_msghdr __user *umsg;
+-		void __user		*buf;
+-	};
+-	int				msg_flags;
+-	int				bgid;
+-	size_t				len;
+-	struct io_buffer		*kbuf;
+-};
+-
+-struct io_open {
+-	struct file			*file;
+-	int				dfd;
+-	bool				ignore_nonblock;
+-	struct filename			*filename;
+-	struct open_how			how;
+-	unsigned long			nofile;
+-};
+-
+-struct io_files_update {
+-	struct file			*file;
+-	u64				arg;
+-	u32				nr_args;
+-	u32				offset;
+-};
+-
+-struct io_fadvise {
+-	struct file			*file;
+-	u64				offset;
+-	u32				len;
+-	u32				advice;
+-};
+-
+-struct io_madvise {
+-	struct file			*file;
+-	u64				addr;
+-	u32				len;
+-	u32				advice;
+-};
+-
+-struct io_epoll {
+-	struct file			*file;
+-	int				epfd;
+-	int				op;
+-	int				fd;
+-	struct epoll_event		event;
+-};
+-
+-struct io_splice {
+-	struct file			*file_out;
+-	struct file			*file_in;
+-	loff_t				off_out;
+-	loff_t				off_in;
+-	u64				len;
+-	unsigned int			flags;
+-};
+-
+-struct io_provide_buf {
+-	struct file			*file;
+-	__u64				addr;
+-	__u32				len;
+-	__u32				bgid;
+-	__u16				nbufs;
+-	__u16				bid;
+-};
+-
+-struct io_statx {
+-	struct file			*file;
+-	int				dfd;
+-	unsigned int			mask;
+-	unsigned int			flags;
+-	const char __user		*filename;
+-	struct statx __user		*buffer;
+-};
+-
+-struct io_completion {
+-	struct file			*file;
+-	struct list_head		list;
+-	u32				cflags;
+-};
+-
+-struct io_async_connect {
+-	struct sockaddr_storage		address;
+-};
+-
+-struct io_async_msghdr {
+-	struct iovec			fast_iov[UIO_FASTIOV];
+-	struct iovec			*iov;
+-	struct sockaddr __user		*uaddr;
+-	struct msghdr			msg;
+-	struct sockaddr_storage		addr;
+-};
+-
+-struct io_async_rw {
+-	struct iovec			fast_iov[UIO_FASTIOV];
+-	const struct iovec		*free_iovec;
+-	struct iov_iter			iter;
+-	size_t				bytes_done;
+-	struct wait_page_queue		wpq;
+-};
+-
+-enum {
+-	REQ_F_FIXED_FILE_BIT	= IOSQE_FIXED_FILE_BIT,
+-	REQ_F_IO_DRAIN_BIT	= IOSQE_IO_DRAIN_BIT,
+-	REQ_F_LINK_BIT		= IOSQE_IO_LINK_BIT,
+-	REQ_F_HARDLINK_BIT	= IOSQE_IO_HARDLINK_BIT,
+-	REQ_F_FORCE_ASYNC_BIT	= IOSQE_ASYNC_BIT,
+-	REQ_F_BUFFER_SELECT_BIT	= IOSQE_BUFFER_SELECT_BIT,
+-
+-	REQ_F_LINK_HEAD_BIT,
+-	REQ_F_FAIL_LINK_BIT,
+-	REQ_F_INFLIGHT_BIT,
+-	REQ_F_CUR_POS_BIT,
+-	REQ_F_NOWAIT_BIT,
+-	REQ_F_LINK_TIMEOUT_BIT,
+-	REQ_F_ISREG_BIT,
+-	REQ_F_NEED_CLEANUP_BIT,
+-	REQ_F_POLLED_BIT,
+-	REQ_F_BUFFER_SELECTED_BIT,
+-	REQ_F_NO_FILE_TABLE_BIT,
+-	REQ_F_WORK_INITIALIZED_BIT,
+-	REQ_F_LTIMEOUT_ACTIVE_BIT,
+-
+-	/* not a real bit, just to check we're not overflowing the space */
+-	__REQ_F_LAST_BIT,
+-};
+-
+-enum {
+-	/* ctx owns file */
+-	REQ_F_FIXED_FILE	= BIT(REQ_F_FIXED_FILE_BIT),
+-	/* drain existing IO first */
+-	REQ_F_IO_DRAIN		= BIT(REQ_F_IO_DRAIN_BIT),
+-	/* linked sqes */
+-	REQ_F_LINK		= BIT(REQ_F_LINK_BIT),
+-	/* doesn't sever on completion < 0 */
+-	REQ_F_HARDLINK		= BIT(REQ_F_HARDLINK_BIT),
+-	/* IOSQE_ASYNC */
+-	REQ_F_FORCE_ASYNC	= BIT(REQ_F_FORCE_ASYNC_BIT),
+-	/* IOSQE_BUFFER_SELECT */
+-	REQ_F_BUFFER_SELECT	= BIT(REQ_F_BUFFER_SELECT_BIT),
+-
+-	/* head of a link */
+-	REQ_F_LINK_HEAD		= BIT(REQ_F_LINK_HEAD_BIT),
+-	/* fail rest of links */
+-	REQ_F_FAIL_LINK		= BIT(REQ_F_FAIL_LINK_BIT),
+-	/* on inflight list */
+-	REQ_F_INFLIGHT		= BIT(REQ_F_INFLIGHT_BIT),
+-	/* read/write uses file position */
+-	REQ_F_CUR_POS		= BIT(REQ_F_CUR_POS_BIT),
+-	/* must not punt to workers */
+-	REQ_F_NOWAIT		= BIT(REQ_F_NOWAIT_BIT),
+-	/* has or had linked timeout */
+-	REQ_F_LINK_TIMEOUT	= BIT(REQ_F_LINK_TIMEOUT_BIT),
+-	/* regular file */
+-	REQ_F_ISREG		= BIT(REQ_F_ISREG_BIT),
+-	/* needs cleanup */
+-	REQ_F_NEED_CLEANUP	= BIT(REQ_F_NEED_CLEANUP_BIT),
+-	/* already went through poll handler */
+-	REQ_F_POLLED		= BIT(REQ_F_POLLED_BIT),
+-	/* buffer already selected */
+-	REQ_F_BUFFER_SELECTED	= BIT(REQ_F_BUFFER_SELECTED_BIT),
+-	/* doesn't need file table for this request */
+-	REQ_F_NO_FILE_TABLE	= BIT(REQ_F_NO_FILE_TABLE_BIT),
+-	/* io_wq_work is initialized */
+-	REQ_F_WORK_INITIALIZED	= BIT(REQ_F_WORK_INITIALIZED_BIT),
+-	/* linked timeout is active, i.e. prepared by link's head */
+-	REQ_F_LTIMEOUT_ACTIVE	= BIT(REQ_F_LTIMEOUT_ACTIVE_BIT),
+-};
+-
+-struct async_poll {
+-	struct io_poll_iocb	poll;
+-	struct io_poll_iocb	*double_poll;
+-};
+-
+-/*
+- * NOTE! Each of the iocb union members has the file pointer
+- * as the first entry in their struct definition. So you can
+- * access the file pointer through any of the sub-structs,
+- * or directly as just 'ki_filp' in this struct.
+- */
+-struct io_kiocb {
+-	union {
+-		struct file		*file;
+-		struct io_rw		rw;
+-		struct io_poll_iocb	poll;
+-		struct io_accept	accept;
+-		struct io_sync		sync;
+-		struct io_cancel	cancel;
+-		struct io_timeout	timeout;
+-		struct io_timeout_rem	timeout_rem;
+-		struct io_connect	connect;
+-		struct io_sr_msg	sr_msg;
+-		struct io_open		open;
+-		struct io_close		close;
+-		struct io_files_update	files_update;
+-		struct io_fadvise	fadvise;
+-		struct io_madvise	madvise;
+-		struct io_epoll		epoll;
+-		struct io_splice	splice;
+-		struct io_provide_buf	pbuf;
+-		struct io_statx		statx;
+-		/* use only after cleaning per-op data, see io_clean_op() */
+-		struct io_completion	compl;
+-	};
+-
+-	/* opcode allocated if it needs to store data for async defer */
+-	void				*async_data;
+-	u8				opcode;
+-	/* polled IO has completed */
+-	u8				iopoll_completed;
+-
+-	u16				buf_index;
+-	u32				result;
+-
+-	struct io_ring_ctx		*ctx;
+-	unsigned int			flags;
+-	refcount_t			refs;
+-	struct task_struct		*task;
+-	u64				user_data;
+-
+-	struct list_head		link_list;
+-
+-	/*
+-	 * 1. used with ctx->iopoll_list with reads/writes
+-	 * 2. to track reqs with ->files (see io_op_def::file_table)
+-	 */
+-	struct list_head		inflight_entry;
+-
+-	struct list_head		iopoll_entry;
+-
+-	struct percpu_ref		*fixed_file_refs;
+-	struct callback_head		task_work;
+-	/* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
+-	struct hlist_node		hash_node;
+-	struct async_poll		*apoll;
+-	struct io_wq_work		work;
+-};
+-
+-struct io_defer_entry {
+-	struct list_head	list;
+-	struct io_kiocb		*req;
+-	u32			seq;
+-};
+-
+-#define IO_IOPOLL_BATCH			8
+-
+-struct io_comp_state {
+-	unsigned int		nr;
+-	struct list_head	list;
+-	struct io_ring_ctx	*ctx;
+-};
+-
+-struct io_submit_state {
+-	struct blk_plug		plug;
+-
+-	/*
+-	 * io_kiocb alloc cache
+-	 */
+-	void			*reqs[IO_IOPOLL_BATCH];
+-	unsigned int		free_reqs;
+-
+-	/*
+-	 * Batch completion logic
+-	 */
+-	struct io_comp_state	comp;
+-
+-	/*
+-	 * File reference cache
+-	 */
+-	struct file		*file;
+-	unsigned int		fd;
+-	unsigned int		has_refs;
+-	unsigned int		ios_left;
+-};
+-
+-struct io_op_def {
+-	/* needs req->file assigned */
+-	unsigned		needs_file : 1;
+-	/* don't fail if file grab fails */
+-	unsigned		needs_file_no_error : 1;
+-	/* hash wq insertion if file is a regular file */
+-	unsigned		hash_reg_file : 1;
+-	/* unbound wq insertion if file is a non-regular file */
+-	unsigned		unbound_nonreg_file : 1;
+-	/* opcode is not supported by this kernel */
+-	unsigned		not_supported : 1;
+-	/* set if opcode supports polled "wait" */
+-	unsigned		pollin : 1;
+-	unsigned		pollout : 1;
+-	/* op supports buffer selection */
+-	unsigned		buffer_select : 1;
+-	/* must always have async data allocated */
+-	unsigned		needs_async_data : 1;
+-	/* size of async data needed, if any */
+-	unsigned short		async_size;
+-	unsigned		work_flags;
+-};
+-
+-static const struct io_op_def io_op_defs[] = {
+-	[IORING_OP_NOP] = {},
+-	[IORING_OP_READV] = {
+-		.needs_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.pollin			= 1,
+-		.buffer_select		= 1,
+-		.needs_async_data	= 1,
+-		.async_size		= sizeof(struct io_async_rw),
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
+-					  IO_WQ_WORK_FILES,
+-	},
+-	[IORING_OP_WRITEV] = {
+-		.needs_file		= 1,
+-		.hash_reg_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.pollout		= 1,
+-		.needs_async_data	= 1,
+-		.async_size		= sizeof(struct io_async_rw),
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
+-					  IO_WQ_WORK_FSIZE | IO_WQ_WORK_FILES,
+-	},
+-	[IORING_OP_FSYNC] = {
+-		.needs_file		= 1,
+-		.work_flags		= IO_WQ_WORK_BLKCG,
+-	},
+-	[IORING_OP_READ_FIXED] = {
+-		.needs_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.pollin			= 1,
+-		.async_size		= sizeof(struct io_async_rw),
+-		.work_flags		= IO_WQ_WORK_BLKCG | IO_WQ_WORK_MM |
+-					  IO_WQ_WORK_FILES,
+-	},
+-	[IORING_OP_WRITE_FIXED] = {
+-		.needs_file		= 1,
+-		.hash_reg_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.pollout		= 1,
+-		.async_size		= sizeof(struct io_async_rw),
+-		.work_flags		= IO_WQ_WORK_BLKCG | IO_WQ_WORK_FSIZE |
+-					  IO_WQ_WORK_MM | IO_WQ_WORK_FILES,
+-	},
+-	[IORING_OP_POLL_ADD] = {
+-		.needs_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-	},
+-	[IORING_OP_POLL_REMOVE] = {},
+-	[IORING_OP_SYNC_FILE_RANGE] = {
+-		.needs_file		= 1,
+-		.work_flags		= IO_WQ_WORK_BLKCG,
+-	},
+-	[IORING_OP_SENDMSG] = {
+-		.needs_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.pollout		= 1,
+-		.needs_async_data	= 1,
+-		.async_size		= sizeof(struct io_async_msghdr),
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
+-						IO_WQ_WORK_FS,
+-	},
+-	[IORING_OP_RECVMSG] = {
+-		.needs_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.pollin			= 1,
+-		.buffer_select		= 1,
+-		.needs_async_data	= 1,
+-		.async_size		= sizeof(struct io_async_msghdr),
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
+-						IO_WQ_WORK_FS,
+-	},
+-	[IORING_OP_TIMEOUT] = {
+-		.needs_async_data	= 1,
+-		.async_size		= sizeof(struct io_timeout_data),
+-		.work_flags		= IO_WQ_WORK_MM,
+-	},
+-	[IORING_OP_TIMEOUT_REMOVE] = {},
+-	[IORING_OP_ACCEPT] = {
+-		.needs_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.pollin			= 1,
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_FILES,
+-	},
+-	[IORING_OP_ASYNC_CANCEL] = {},
+-	[IORING_OP_LINK_TIMEOUT] = {
+-		.needs_async_data	= 1,
+-		.async_size		= sizeof(struct io_timeout_data),
+-		.work_flags		= IO_WQ_WORK_MM,
+-	},
+-	[IORING_OP_CONNECT] = {
+-		.needs_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.pollout		= 1,
+-		.needs_async_data	= 1,
+-		.async_size		= sizeof(struct io_async_connect),
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_FS,
+-	},
+-	[IORING_OP_FALLOCATE] = {
+-		.needs_file		= 1,
+-		.work_flags		= IO_WQ_WORK_BLKCG | IO_WQ_WORK_FSIZE,
+-	},
+-	[IORING_OP_OPENAT] = {
+-		.work_flags		= IO_WQ_WORK_FILES | IO_WQ_WORK_BLKCG |
+-						IO_WQ_WORK_FS,
+-	},
+-	[IORING_OP_CLOSE] = {
+-		.needs_file		= 1,
+-		.needs_file_no_error	= 1,
+-		.work_flags		= IO_WQ_WORK_FILES | IO_WQ_WORK_BLKCG,
+-	},
+-	[IORING_OP_FILES_UPDATE] = {
+-		.work_flags		= IO_WQ_WORK_FILES | IO_WQ_WORK_MM,
+-	},
+-	[IORING_OP_STATX] = {
+-		.work_flags		= IO_WQ_WORK_FILES | IO_WQ_WORK_MM |
+-						IO_WQ_WORK_FS | IO_WQ_WORK_BLKCG,
+-	},
+-	[IORING_OP_READ] = {
+-		.needs_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.pollin			= 1,
+-		.buffer_select		= 1,
+-		.async_size		= sizeof(struct io_async_rw),
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
+-					  IO_WQ_WORK_FILES,
+-	},
+-	[IORING_OP_WRITE] = {
+-		.needs_file		= 1,
+-		.hash_reg_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.pollout		= 1,
+-		.async_size		= sizeof(struct io_async_rw),
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
+-					  IO_WQ_WORK_FSIZE | IO_WQ_WORK_FILES,
+-	},
+-	[IORING_OP_FADVISE] = {
+-		.needs_file		= 1,
+-		.work_flags		= IO_WQ_WORK_BLKCG,
+-	},
+-	[IORING_OP_MADVISE] = {
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG,
+-	},
+-	[IORING_OP_SEND] = {
+-		.needs_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.pollout		= 1,
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
+-					  IO_WQ_WORK_FS,
+-	},
+-	[IORING_OP_RECV] = {
+-		.needs_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.pollin			= 1,
+-		.buffer_select		= 1,
+-		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG |
+-					  IO_WQ_WORK_FS,
+-	},
+-	[IORING_OP_OPENAT2] = {
+-		.work_flags		= IO_WQ_WORK_FILES | IO_WQ_WORK_FS |
+-						IO_WQ_WORK_BLKCG,
+-	},
+-	[IORING_OP_EPOLL_CTL] = {
+-		.unbound_nonreg_file	= 1,
+-		.work_flags		= IO_WQ_WORK_FILES,
+-	},
+-	[IORING_OP_SPLICE] = {
+-		.needs_file		= 1,
+-		.hash_reg_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-		.work_flags		= IO_WQ_WORK_BLKCG | IO_WQ_WORK_FILES,
+-	},
+-	[IORING_OP_PROVIDE_BUFFERS] = {},
+-	[IORING_OP_REMOVE_BUFFERS] = {},
+-	[IORING_OP_TEE] = {
+-		.needs_file		= 1,
+-		.hash_reg_file		= 1,
+-		.unbound_nonreg_file	= 1,
+-	},
+-};
+-
+-enum io_mem_account {
+-	ACCT_LOCKED,
+-	ACCT_PINNED,
+-};
+-
+-static void destroy_fixed_file_ref_node(struct fixed_file_ref_node *ref_node);
+-static struct fixed_file_ref_node *alloc_fixed_file_ref_node(
+-			struct io_ring_ctx *ctx);
+-
+-static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
+-			     struct io_comp_state *cs);
+-static void io_cqring_fill_event(struct io_kiocb *req, long res);
+-static void io_put_req(struct io_kiocb *req);
+-static void io_put_req_deferred(struct io_kiocb *req, int nr);
+-static void io_double_put_req(struct io_kiocb *req);
+-static struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req);
+-static void __io_queue_linked_timeout(struct io_kiocb *req);
+-static void io_queue_linked_timeout(struct io_kiocb *req);
+-static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+-				 struct io_uring_files_update *ip,
+-				 unsigned nr_args);
+-static void __io_clean_op(struct io_kiocb *req);
+-static struct file *io_file_get(struct io_submit_state *state,
+-				struct io_kiocb *req, int fd, bool fixed);
+-static void __io_queue_sqe(struct io_kiocb *req, struct io_comp_state *cs);
+-static void io_file_put_work(struct work_struct *work);
+-
+-static ssize_t io_import_iovec(int rw, struct io_kiocb *req,
+-			       struct iovec **iovec, struct iov_iter *iter,
+-			       bool needs_lock);
+-static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
+-			     const struct iovec *fast_iov,
+-			     struct iov_iter *iter, bool force);
+-static void io_req_drop_files(struct io_kiocb *req);
+-static void io_req_task_queue(struct io_kiocb *req);
+-
+-static struct kmem_cache *req_cachep;
+-
+-static const struct file_operations io_uring_fops;
+-
+-struct sock *io_uring_get_socket(struct file *file)
+-{
+-#if defined(CONFIG_UNIX)
+-	if (file->f_op == &io_uring_fops) {
+-		struct io_ring_ctx *ctx = file->private_data;
+-
+-		return ctx->ring_sock->sk;
+-	}
+-#endif
+-	return NULL;
+-}
+-EXPORT_SYMBOL(io_uring_get_socket);
+-
+-static inline void io_clean_op(struct io_kiocb *req)
+-{
+-	if (req->flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED))
+-		__io_clean_op(req);
+-}
+-
+-static inline bool __io_match_files(struct io_kiocb *req,
+-				    struct files_struct *files)
+-{
+-	if (req->file && req->file->f_op == &io_uring_fops)
+-		return true;
+-
+-	return ((req->flags & REQ_F_WORK_INITIALIZED) &&
+-	        (req->work.flags & IO_WQ_WORK_FILES)) &&
+-		req->work.identity->files == files;
+-}
+-
+-static void io_refs_resurrect(struct percpu_ref *ref, struct completion *compl)
+-{
+-	bool got = percpu_ref_tryget(ref);
+-
+-	/* already at zero, wait for ->release() */
+-	if (!got)
+-		wait_for_completion(compl);
+-	percpu_ref_resurrect(ref);
+-	if (got)
+-		percpu_ref_put(ref);
+-}
+-
+-static bool io_match_task(struct io_kiocb *head,
+-			  struct task_struct *task,
+-			  struct files_struct *files)
+-{
+-	struct io_kiocb *link;
+-
+-	if (task && head->task != task) {
+-		/* in terms of cancelation, always match if req task is dead */
+-		if (head->task->flags & PF_EXITING)
+-			return true;
+-		return false;
+-	}
+-	if (!files)
+-		return true;
+-	if (__io_match_files(head, files))
+-		return true;
+-	if (head->flags & REQ_F_LINK_HEAD) {
+-		list_for_each_entry(link, &head->link_list, link_list) {
+-			if (__io_match_files(link, files))
+-				return true;
+-		}
+-	}
+-	return false;
+-}
+-
+-
+-static void io_sq_thread_drop_mm(void)
+-{
+-	struct mm_struct *mm = current->mm;
+-
+-	if (mm) {
+-		kthread_unuse_mm(mm);
+-		mmput(mm);
+-		current->mm = NULL;
+-	}
+-}
+-
+-static int __io_sq_thread_acquire_mm(struct io_ring_ctx *ctx)
+-{
+-	struct mm_struct *mm;
+-
+-	if (current->flags & PF_EXITING)
+-		return -EFAULT;
+-	if (current->mm)
+-		return 0;
+-
+-	/* Should never happen */
+-	if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL)))
+-		return -EFAULT;
+-
+-	task_lock(ctx->sqo_task);
+-	mm = ctx->sqo_task->mm;
+-	if (unlikely(!mm || !mmget_not_zero(mm)))
+-		mm = NULL;
+-	task_unlock(ctx->sqo_task);
+-
+-	if (mm) {
+-		kthread_use_mm(mm);
+-		return 0;
+-	}
+-
+-	return -EFAULT;
+-}
+-
+-static int io_sq_thread_acquire_mm(struct io_ring_ctx *ctx,
+-				   struct io_kiocb *req)
+-{
+-	if (!(io_op_defs[req->opcode].work_flags & IO_WQ_WORK_MM))
+-		return 0;
+-	return __io_sq_thread_acquire_mm(ctx);
+-}
+-
+-static void io_sq_thread_associate_blkcg(struct io_ring_ctx *ctx,
+-					 struct cgroup_subsys_state **cur_css)
+-
+-{
+-#ifdef CONFIG_BLK_CGROUP
+-	/* puts the old one when swapping */
+-	if (*cur_css != ctx->sqo_blkcg_css) {
+-		kthread_associate_blkcg(ctx->sqo_blkcg_css);
+-		*cur_css = ctx->sqo_blkcg_css;
+-	}
+-#endif
+-}
+-
+-static void io_sq_thread_unassociate_blkcg(void)
+-{
+-#ifdef CONFIG_BLK_CGROUP
+-	kthread_associate_blkcg(NULL);
+-#endif
+-}
+-
+-static inline void req_set_fail_links(struct io_kiocb *req)
+-{
+-	if ((req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) == REQ_F_LINK)
+-		req->flags |= REQ_F_FAIL_LINK;
+-}
+-
+-/*
+- * None of these are dereferenced, they are simply used to check if any of
+- * them have changed. If we're under current and check they are still the
+- * same, we're fine to grab references to them for actual out-of-line use.
+- */
+-static void io_init_identity(struct io_identity *id)
+-{
+-	id->files = current->files;
+-	id->mm = current->mm;
+-#ifdef CONFIG_BLK_CGROUP
+-	rcu_read_lock();
+-	id->blkcg_css = blkcg_css();
+-	rcu_read_unlock();
+-#endif
+-	id->creds = current_cred();
+-	id->nsproxy = current->nsproxy;
+-	id->fs = current->fs;
+-	id->fsize = rlimit(RLIMIT_FSIZE);
+-#ifdef CONFIG_AUDIT
+-	id->loginuid = current->loginuid;
+-	id->sessionid = current->sessionid;
+-#endif
+-	refcount_set(&id->count, 1);
+-}
+-
+-static inline void __io_req_init_async(struct io_kiocb *req)
+-{
+-	memset(&req->work, 0, sizeof(req->work));
+-	req->flags |= REQ_F_WORK_INITIALIZED;
+-}
+-
+-/*
+- * Note: must call io_req_init_async() for the first time you
+- * touch any members of io_wq_work.
+- */
+-static inline void io_req_init_async(struct io_kiocb *req)
+-{
+-	struct io_uring_task *tctx = req->task->io_uring;
+-
+-	if (req->flags & REQ_F_WORK_INITIALIZED)
+-		return;
+-
+-	__io_req_init_async(req);
+-
+-	/* Grab a ref if this isn't our static identity */
+-	req->work.identity = tctx->identity;
+-	if (tctx->identity != &tctx->__identity)
+-		refcount_inc(&req->work.identity->count);
+-}
+-
+-static inline bool io_async_submit(struct io_ring_ctx *ctx)
+-{
+-	return ctx->flags & IORING_SETUP_SQPOLL;
+-}
+-
+-static void io_ring_ctx_ref_free(struct percpu_ref *ref)
+-{
+-	struct io_ring_ctx *ctx = container_of(ref, struct io_ring_ctx, refs);
+-
+-	complete(&ctx->ref_comp);
+-}
+-
+-static inline bool io_is_timeout_noseq(struct io_kiocb *req)
+-{
+-	return !req->timeout.off;
+-}
+-
+-static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+-{
+-	struct io_ring_ctx *ctx;
+-	int hash_bits;
+-
+-	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+-	if (!ctx)
+-		return NULL;
+-
+-	ctx->fallback_req = kmem_cache_alloc(req_cachep, GFP_KERNEL);
+-	if (!ctx->fallback_req)
+-		goto err;
+-
+-	/*
+-	 * Use 5 bits less than the max cq entries, that should give us around
+-	 * 32 entries per hash list if totally full and uniformly spread.
+-	 */
+-	hash_bits = ilog2(p->cq_entries);
+-	hash_bits -= 5;
+-	if (hash_bits <= 0)
+-		hash_bits = 1;
+-	ctx->cancel_hash_bits = hash_bits;
+-	ctx->cancel_hash = kmalloc((1U << hash_bits) * sizeof(struct hlist_head),
+-					GFP_KERNEL);
+-	if (!ctx->cancel_hash)
+-		goto err;
+-	__hash_init(ctx->cancel_hash, 1U << hash_bits);
+-
+-	if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
+-			    PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
+-		goto err;
+-
+-	ctx->flags = p->flags;
+-	init_waitqueue_head(&ctx->sqo_sq_wait);
+-	INIT_LIST_HEAD(&ctx->sqd_list);
+-	init_waitqueue_head(&ctx->cq_wait);
+-	INIT_LIST_HEAD(&ctx->cq_overflow_list);
+-	init_completion(&ctx->ref_comp);
+-	init_completion(&ctx->sq_thread_comp);
+-	xa_init_flags(&ctx->io_buffers, XA_FLAGS_ALLOC1);
+-	xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
+-	mutex_init(&ctx->uring_lock);
+-	init_waitqueue_head(&ctx->wait);
+-	spin_lock_init(&ctx->completion_lock);
+-	INIT_LIST_HEAD(&ctx->iopoll_list);
+-	INIT_LIST_HEAD(&ctx->defer_list);
+-	INIT_LIST_HEAD(&ctx->timeout_list);
+-	spin_lock_init(&ctx->inflight_lock);
+-	INIT_LIST_HEAD(&ctx->inflight_list);
+-	INIT_DELAYED_WORK(&ctx->file_put_work, io_file_put_work);
+-	init_llist_head(&ctx->file_put_llist);
+-	return ctx;
+-err:
+-	if (ctx->fallback_req)
+-		kmem_cache_free(req_cachep, ctx->fallback_req);
+-	kfree(ctx->cancel_hash);
+-	kfree(ctx);
+-	return NULL;
+-}
+-
+-static bool req_need_defer(struct io_kiocb *req, u32 seq)
+-{
+-	if (unlikely(req->flags & REQ_F_IO_DRAIN)) {
+-		struct io_ring_ctx *ctx = req->ctx;
+-
+-		return seq != ctx->cached_cq_tail
+-				+ READ_ONCE(ctx->cached_cq_overflow);
+-	}
+-
+-	return false;
+-}
+-
+-static void __io_commit_cqring(struct io_ring_ctx *ctx)
+-{
+-	struct io_rings *rings = ctx->rings;
+-
+-	/* order cqe stores with ring update */
+-	smp_store_release(&rings->cq.tail, ctx->cached_cq_tail);
+-}
+-
+-static void io_put_identity(struct io_uring_task *tctx, struct io_kiocb *req)
+-{
+-	if (req->work.identity == &tctx->__identity)
+-		return;
+-	if (refcount_dec_and_test(&req->work.identity->count))
+-		kfree(req->work.identity);
+-}
+-
+-static void io_req_clean_work(struct io_kiocb *req)
+-{
+-	if (!(req->flags & REQ_F_WORK_INITIALIZED))
+-		return;
+-
+-	req->flags &= ~REQ_F_WORK_INITIALIZED;
+-
+-	if (req->work.flags & IO_WQ_WORK_MM) {
+-		mmdrop(req->work.identity->mm);
+-		req->work.flags &= ~IO_WQ_WORK_MM;
+-	}
+-#ifdef CONFIG_BLK_CGROUP
+-	if (req->work.flags & IO_WQ_WORK_BLKCG) {
+-		css_put(req->work.identity->blkcg_css);
+-		req->work.flags &= ~IO_WQ_WORK_BLKCG;
+-	}
+-#endif
+-	if (req->work.flags & IO_WQ_WORK_CREDS) {
+-		put_cred(req->work.identity->creds);
+-		req->work.flags &= ~IO_WQ_WORK_CREDS;
+-	}
+-	if (req->work.flags & IO_WQ_WORK_FS) {
+-		struct fs_struct *fs = req->work.identity->fs;
+-
+-		spin_lock(&req->work.identity->fs->lock);
+-		if (--fs->users)
+-			fs = NULL;
+-		spin_unlock(&req->work.identity->fs->lock);
+-		if (fs)
+-			free_fs_struct(fs);
+-		req->work.flags &= ~IO_WQ_WORK_FS;
+-	}
+-	if (req->flags & REQ_F_INFLIGHT)
+-		io_req_drop_files(req);
+-
+-	io_put_identity(req->task->io_uring, req);
+-}
+-
+-/*
+- * Create a private copy of io_identity, since some fields don't match
+- * the current context.
+- */
+-static bool io_identity_cow(struct io_kiocb *req)
+-{
+-	struct io_uring_task *tctx = req->task->io_uring;
+-	const struct cred *creds = NULL;
+-	struct io_identity *id;
+-
+-	if (req->work.flags & IO_WQ_WORK_CREDS)
+-		creds = req->work.identity->creds;
+-
+-	id = kmemdup(req->work.identity, sizeof(*id), GFP_KERNEL);
+-	if (unlikely(!id)) {
+-		req->work.flags |= IO_WQ_WORK_CANCEL;
+-		return false;
+-	}
+-
+-	/*
+-	 * We can safely just re-init the creds we copied  Either the field
+-	 * matches the current one, or we haven't grabbed it yet. The only
+-	 * exception is ->creds, through registered personalities, so handle
+-	 * that one separately.
+-	 */
+-	io_init_identity(id);
+-	if (creds)
+-		id->creds = creds;
+-
+-	/* add one for this request */
+-	refcount_inc(&id->count);
+-
+-	/* drop tctx and req identity references, if needed */
+-	if (tctx->identity != &tctx->__identity &&
+-	    refcount_dec_and_test(&tctx->identity->count))
+-		kfree(tctx->identity);
+-	if (req->work.identity != &tctx->__identity &&
+-	    refcount_dec_and_test(&req->work.identity->count))
+-		kfree(req->work.identity);
+-
+-	req->work.identity = id;
+-	tctx->identity = id;
+-	return true;
+-}
+-
+-static bool io_grab_identity(struct io_kiocb *req)
+-{
+-	const struct io_op_def *def = &io_op_defs[req->opcode];
+-	struct io_identity *id = req->work.identity;
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	if (def->work_flags & IO_WQ_WORK_FSIZE) {
+-		if (id->fsize != rlimit(RLIMIT_FSIZE))
+-			return false;
+-		req->work.flags |= IO_WQ_WORK_FSIZE;
+-	}
+-#ifdef CONFIG_BLK_CGROUP
+-	if (!(req->work.flags & IO_WQ_WORK_BLKCG) &&
+-	    (def->work_flags & IO_WQ_WORK_BLKCG)) {
+-		rcu_read_lock();
+-		if (id->blkcg_css != blkcg_css()) {
+-			rcu_read_unlock();
+-			return false;
+-		}
+-		/*
+-		 * This should be rare, either the cgroup is dying or the task
+-		 * is moving cgroups. Just punt to root for the handful of ios.
+-		 */
+-		if (css_tryget_online(id->blkcg_css))
+-			req->work.flags |= IO_WQ_WORK_BLKCG;
+-		rcu_read_unlock();
+-	}
+-#endif
+-	if (!(req->work.flags & IO_WQ_WORK_CREDS)) {
+-		if (id->creds != current_cred())
+-			return false;
+-		get_cred(id->creds);
+-		req->work.flags |= IO_WQ_WORK_CREDS;
+-	}
+-#ifdef CONFIG_AUDIT
+-	if (!uid_eq(current->loginuid, id->loginuid) ||
+-	    current->sessionid != id->sessionid)
+-		return false;
+-#endif
+-	if (!(req->work.flags & IO_WQ_WORK_FS) &&
+-	    (def->work_flags & IO_WQ_WORK_FS)) {
+-		if (current->fs != id->fs)
+-			return false;
+-		spin_lock(&id->fs->lock);
+-		if (!id->fs->in_exec) {
+-			id->fs->users++;
+-			req->work.flags |= IO_WQ_WORK_FS;
+-		} else {
+-			req->work.flags |= IO_WQ_WORK_CANCEL;
+-		}
+-		spin_unlock(&current->fs->lock);
+-	}
+-	if (!(req->work.flags & IO_WQ_WORK_FILES) &&
+-	    (def->work_flags & IO_WQ_WORK_FILES) &&
+-	    !(req->flags & REQ_F_NO_FILE_TABLE)) {
+-		if (id->files != current->files ||
+-		    id->nsproxy != current->nsproxy)
+-			return false;
+-		atomic_inc(&id->files->count);
+-		get_nsproxy(id->nsproxy);
+-
+-		if (!(req->flags & REQ_F_INFLIGHT)) {
+-			req->flags |= REQ_F_INFLIGHT;
+-
+-			spin_lock_irq(&ctx->inflight_lock);
+-			list_add(&req->inflight_entry, &ctx->inflight_list);
+-			spin_unlock_irq(&ctx->inflight_lock);
+-		}
+-		req->work.flags |= IO_WQ_WORK_FILES;
+-	}
+-	if (!(req->work.flags & IO_WQ_WORK_MM) &&
+-	    (def->work_flags & IO_WQ_WORK_MM)) {
+-		if (id->mm != current->mm)
+-			return false;
+-		mmgrab(id->mm);
+-		req->work.flags |= IO_WQ_WORK_MM;
+-	}
+-
+-	return true;
+-}
+-
+-static void io_prep_async_work(struct io_kiocb *req)
+-{
+-	const struct io_op_def *def = &io_op_defs[req->opcode];
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_identity *id;
+-
+-	io_req_init_async(req);
+-	id = req->work.identity;
+-
+-	if (req->flags & REQ_F_FORCE_ASYNC)
+-		req->work.flags |= IO_WQ_WORK_CONCURRENT;
+-
+-	if (req->flags & REQ_F_ISREG) {
+-		if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
+-			io_wq_hash_work(&req->work, file_inode(req->file));
+-	} else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
+-		if (def->unbound_nonreg_file)
+-			req->work.flags |= IO_WQ_WORK_UNBOUND;
+-	}
+-
+-	/* if we fail grabbing identity, we must COW, regrab, and retry */
+-	if (io_grab_identity(req))
+-		return;
+-
+-	if (!io_identity_cow(req))
+-		return;
+-
+-	/* can't fail at this point */
+-	if (!io_grab_identity(req))
+-		WARN_ON(1);
+-}
+-
+-static void io_prep_async_link(struct io_kiocb *req)
+-{
+-	struct io_kiocb *cur;
+-
+-	io_prep_async_work(req);
+-	if (req->flags & REQ_F_LINK_HEAD)
+-		list_for_each_entry(cur, &req->link_list, link_list)
+-			io_prep_async_work(cur);
+-}
+-
+-static struct io_kiocb *__io_queue_async_work(struct io_kiocb *req)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_kiocb *link = io_prep_linked_timeout(req);
+-
+-	trace_io_uring_queue_async_work(ctx, io_wq_is_hashed(&req->work), req,
+-					&req->work, req->flags);
+-	io_wq_enqueue(ctx->io_wq, &req->work);
+-	return link;
+-}
+-
+-static void io_queue_async_work(struct io_kiocb *req)
+-{
+-	struct io_kiocb *link;
+-
+-	/* init ->work of the whole link before punting */
+-	io_prep_async_link(req);
+-	link = __io_queue_async_work(req);
+-
+-	if (link)
+-		io_queue_linked_timeout(link);
+-}
+-
+-static void io_kill_timeout(struct io_kiocb *req, int status)
+-{
+-	struct io_timeout_data *io = req->async_data;
+-	int ret;
+-
+-	ret = hrtimer_try_to_cancel(&io->timer);
+-	if (ret != -1) {
+-		if (status)
+-			req_set_fail_links(req);
+-		atomic_set(&req->ctx->cq_timeouts,
+-			atomic_read(&req->ctx->cq_timeouts) + 1);
+-		list_del_init(&req->timeout.list);
+-		io_cqring_fill_event(req, status);
+-		io_put_req_deferred(req, 1);
+-	}
+-}
+-
+-/*
+- * Returns true if we found and killed one or more timeouts
+- */
+-static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
+-			     struct files_struct *files)
+-{
+-	struct io_kiocb *req, *tmp;
+-	int canceled = 0;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
+-		if (io_match_task(req, tsk, files)) {
+-			io_kill_timeout(req, -ECANCELED);
+-			canceled++;
+-		}
+-	}
+-	spin_unlock_irq(&ctx->completion_lock);
+-	return canceled != 0;
+-}
+-
+-static void __io_queue_deferred(struct io_ring_ctx *ctx)
+-{
+-	do {
+-		struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
+-						struct io_defer_entry, list);
+-
+-		if (req_need_defer(de->req, de->seq))
+-			break;
+-		list_del_init(&de->list);
+-		io_req_task_queue(de->req);
+-		kfree(de);
+-	} while (!list_empty(&ctx->defer_list));
+-}
+-
+-static void io_flush_timeouts(struct io_ring_ctx *ctx)
+-{
+-	struct io_kiocb *req, *tmp;
+-	u32 seq;
+-
+-	if (list_empty(&ctx->timeout_list))
+-		return;
+-
+-	seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
+-
+-	list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
+-		u32 events_needed, events_got;
+-
+-		if (io_is_timeout_noseq(req))
+-			break;
+-
+-		/*
+-		 * Since seq can easily wrap around over time, subtract
+-		 * the last seq at which timeouts were flushed before comparing.
+-		 * Assuming not more than 2^31-1 events have happened since,
+-		 * these subtractions won't have wrapped, so we can check if
+-		 * target is in [last_seq, current_seq] by comparing the two.
+-		 */
+-		events_needed = req->timeout.target_seq - ctx->cq_last_tm_flush;
+-		events_got = seq - ctx->cq_last_tm_flush;
+-		if (events_got < events_needed)
+-			break;
+-
+-		io_kill_timeout(req, 0);
+-	}
+-
+-	ctx->cq_last_tm_flush = seq;
+-}
+-
+-static void io_commit_cqring(struct io_ring_ctx *ctx)
+-{
+-	io_flush_timeouts(ctx);
+-	__io_commit_cqring(ctx);
+-
+-	if (unlikely(!list_empty(&ctx->defer_list)))
+-		__io_queue_deferred(ctx);
+-}
+-
+-static inline bool io_sqring_full(struct io_ring_ctx *ctx)
+-{
+-	struct io_rings *r = ctx->rings;
+-
+-	return READ_ONCE(r->sq.tail) - ctx->cached_sq_head == r->sq_ring_entries;
+-}
+-
+-static struct io_uring_cqe *io_get_cqring(struct io_ring_ctx *ctx)
+-{
+-	struct io_rings *rings = ctx->rings;
+-	unsigned tail;
+-
+-	tail = ctx->cached_cq_tail;
+-	/*
+-	 * writes to the cq entry need to come after reading head; the
+-	 * control dependency is enough as we're using WRITE_ONCE to
+-	 * fill the cq entry
+-	 */
+-	if (tail - READ_ONCE(rings->cq.head) == rings->cq_ring_entries)
+-		return NULL;
+-
+-	ctx->cached_cq_tail++;
+-	return &rings->cqes[tail & ctx->cq_mask];
+-}
+-
+-static inline bool io_should_trigger_evfd(struct io_ring_ctx *ctx)
+-{
+-	if (!ctx->cq_ev_fd)
+-		return false;
+-	if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
+-		return false;
+-	if (!ctx->eventfd_async)
+-		return true;
+-	return io_wq_current_is_worker();
+-}
+-
+-static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
+-{
+-	if (wq_has_sleeper(&ctx->cq_wait)) {
+-		wake_up_interruptible(&ctx->cq_wait);
+-		kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
+-	}
+-	if (waitqueue_active(&ctx->wait))
+-		wake_up(&ctx->wait);
+-	if (ctx->sq_data && waitqueue_active(&ctx->sq_data->wait))
+-		wake_up(&ctx->sq_data->wait);
+-	if (io_should_trigger_evfd(ctx))
+-		eventfd_signal(ctx->cq_ev_fd, 1);
+-}
+-
+-static void io_cqring_mark_overflow(struct io_ring_ctx *ctx)
+-{
+-	if (list_empty(&ctx->cq_overflow_list)) {
+-		clear_bit(0, &ctx->sq_check_overflow);
+-		clear_bit(0, &ctx->cq_check_overflow);
+-		ctx->rings->sq_flags &= ~IORING_SQ_CQ_OVERFLOW;
+-	}
+-}
+-
+-/* Returns true if there are no backlogged entries after the flush */
+-static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+-				       struct task_struct *tsk,
+-				       struct files_struct *files)
+-{
+-	struct io_rings *rings = ctx->rings;
+-	struct io_kiocb *req, *tmp;
+-	struct io_uring_cqe *cqe;
+-	unsigned long flags;
+-	LIST_HEAD(list);
+-
+-	if (!force) {
+-		if ((ctx->cached_cq_tail - READ_ONCE(rings->cq.head) ==
+-		    rings->cq_ring_entries))
+-			return false;
+-	}
+-
+-	spin_lock_irqsave(&ctx->completion_lock, flags);
+-
+-	cqe = NULL;
+-	list_for_each_entry_safe(req, tmp, &ctx->cq_overflow_list, compl.list) {
+-		if (!io_match_task(req, tsk, files))
+-			continue;
+-
+-		cqe = io_get_cqring(ctx);
+-		if (!cqe && !force)
+-			break;
+-
+-		list_move(&req->compl.list, &list);
+-		if (cqe) {
+-			WRITE_ONCE(cqe->user_data, req->user_data);
+-			WRITE_ONCE(cqe->res, req->result);
+-			WRITE_ONCE(cqe->flags, req->compl.cflags);
+-		} else {
+-			ctx->cached_cq_overflow++;
+-			WRITE_ONCE(ctx->rings->cq_overflow,
+-				   ctx->cached_cq_overflow);
+-		}
+-	}
+-
+-	io_commit_cqring(ctx);
+-	io_cqring_mark_overflow(ctx);
+-
+-	spin_unlock_irqrestore(&ctx->completion_lock, flags);
+-	io_cqring_ev_posted(ctx);
+-
+-	while (!list_empty(&list)) {
+-		req = list_first_entry(&list, struct io_kiocb, compl.list);
+-		list_del(&req->compl.list);
+-		io_put_req(req);
+-	}
+-
+-	return cqe != NULL;
+-}
+-
+-static void io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+-				     struct task_struct *tsk,
+-				     struct files_struct *files)
+-{
+-	if (test_bit(0, &ctx->cq_check_overflow)) {
+-		/* iopoll syncs against uring_lock, not completion_lock */
+-		if (ctx->flags & IORING_SETUP_IOPOLL)
+-			mutex_lock(&ctx->uring_lock);
+-		__io_cqring_overflow_flush(ctx, force, tsk, files);
+-		if (ctx->flags & IORING_SETUP_IOPOLL)
+-			mutex_unlock(&ctx->uring_lock);
+-	}
+-}
+-
+-static void __io_cqring_fill_event(struct io_kiocb *req, long res,
+-				   unsigned int cflags)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_uring_cqe *cqe;
+-
+-	trace_io_uring_complete(ctx, req->user_data, res);
+-
+-	/*
+-	 * If we can't get a cq entry, userspace overflowed the
+-	 * submission (by quite a lot). Increment the overflow count in
+-	 * the ring.
+-	 */
+-	cqe = io_get_cqring(ctx);
+-	if (likely(cqe)) {
+-		WRITE_ONCE(cqe->user_data, req->user_data);
+-		WRITE_ONCE(cqe->res, res);
+-		WRITE_ONCE(cqe->flags, cflags);
+-	} else if (ctx->cq_overflow_flushed ||
+-		   atomic_read(&req->task->io_uring->in_idle)) {
+-		/*
+-		 * If we're in ring overflow flush mode, or in task cancel mode,
+-		 * then we cannot store the request for later flushing, we need
+-		 * to drop it on the floor.
+-		 */
+-		ctx->cached_cq_overflow++;
+-		WRITE_ONCE(ctx->rings->cq_overflow, ctx->cached_cq_overflow);
+-	} else {
+-		if (list_empty(&ctx->cq_overflow_list)) {
+-			set_bit(0, &ctx->sq_check_overflow);
+-			set_bit(0, &ctx->cq_check_overflow);
+-			ctx->rings->sq_flags |= IORING_SQ_CQ_OVERFLOW;
+-		}
+-		io_clean_op(req);
+-		req->result = res;
+-		req->compl.cflags = cflags;
+-		refcount_inc(&req->refs);
+-		list_add_tail(&req->compl.list, &ctx->cq_overflow_list);
+-	}
+-}
+-
+-static void io_cqring_fill_event(struct io_kiocb *req, long res)
+-{
+-	__io_cqring_fill_event(req, res, 0);
+-}
+-
+-static void io_cqring_add_event(struct io_kiocb *req, long res, long cflags)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&ctx->completion_lock, flags);
+-	__io_cqring_fill_event(req, res, cflags);
+-	io_commit_cqring(ctx);
+-	spin_unlock_irqrestore(&ctx->completion_lock, flags);
+-
+-	io_cqring_ev_posted(ctx);
+-}
+-
+-static void io_submit_flush_completions(struct io_comp_state *cs)
+-{
+-	struct io_ring_ctx *ctx = cs->ctx;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	while (!list_empty(&cs->list)) {
+-		struct io_kiocb *req;
+-
+-		req = list_first_entry(&cs->list, struct io_kiocb, compl.list);
+-		list_del(&req->compl.list);
+-		__io_cqring_fill_event(req, req->result, req->compl.cflags);
+-
+-		/*
+-		 * io_free_req() doesn't care about completion_lock unless one
+-		 * of these flags is set. REQ_F_WORK_INITIALIZED is in the list
+-		 * because of a potential deadlock with req->work.fs->lock
+-		 */
+-		if (req->flags & (REQ_F_FAIL_LINK|REQ_F_LINK_TIMEOUT
+-				 |REQ_F_WORK_INITIALIZED)) {
+-			spin_unlock_irq(&ctx->completion_lock);
+-			io_put_req(req);
+-			spin_lock_irq(&ctx->completion_lock);
+-		} else {
+-			io_put_req(req);
+-		}
+-	}
+-	io_commit_cqring(ctx);
+-	spin_unlock_irq(&ctx->completion_lock);
+-
+-	io_cqring_ev_posted(ctx);
+-	cs->nr = 0;
+-}
+-
+-static void __io_req_complete(struct io_kiocb *req, long res, unsigned cflags,
+-			      struct io_comp_state *cs)
+-{
+-	if (!cs) {
+-		io_cqring_add_event(req, res, cflags);
+-		io_put_req(req);
+-	} else {
+-		io_clean_op(req);
+-		req->result = res;
+-		req->compl.cflags = cflags;
+-		list_add_tail(&req->compl.list, &cs->list);
+-		if (++cs->nr >= 32)
+-			io_submit_flush_completions(cs);
+-	}
+-}
+-
+-static void io_req_complete(struct io_kiocb *req, long res)
+-{
+-	__io_req_complete(req, res, 0, NULL);
+-}
+-
+-static inline bool io_is_fallback_req(struct io_kiocb *req)
+-{
+-	return req == (struct io_kiocb *)
+-			((unsigned long) req->ctx->fallback_req & ~1UL);
+-}
+-
+-static struct io_kiocb *io_get_fallback_req(struct io_ring_ctx *ctx)
+-{
+-	struct io_kiocb *req;
+-
+-	req = ctx->fallback_req;
+-	if (!test_and_set_bit_lock(0, (unsigned long *) &ctx->fallback_req))
+-		return req;
+-
+-	return NULL;
+-}
+-
+-static struct io_kiocb *io_alloc_req(struct io_ring_ctx *ctx,
+-				     struct io_submit_state *state)
+-{
+-	if (!state->free_reqs) {
+-		gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
+-		size_t sz;
+-		int ret;
+-
+-		sz = min_t(size_t, state->ios_left, ARRAY_SIZE(state->reqs));
+-		ret = kmem_cache_alloc_bulk(req_cachep, gfp, sz, state->reqs);
+-
+-		/*
+-		 * Bulk alloc is all-or-nothing. If we fail to get a batch,
+-		 * retry single alloc to be on the safe side.
+-		 */
+-		if (unlikely(ret <= 0)) {
+-			state->reqs[0] = kmem_cache_alloc(req_cachep, gfp);
+-			if (!state->reqs[0])
+-				goto fallback;
+-			ret = 1;
+-		}
+-		state->free_reqs = ret;
+-	}
+-
+-	state->free_reqs--;
+-	return state->reqs[state->free_reqs];
+-fallback:
+-	return io_get_fallback_req(ctx);
+-}
+-
+-static inline void io_put_file(struct io_kiocb *req, struct file *file,
+-			  bool fixed)
+-{
+-	if (fixed)
+-		percpu_ref_put(req->fixed_file_refs);
+-	else
+-		fput(file);
+-}
+-
+-static void io_dismantle_req(struct io_kiocb *req)
+-{
+-	io_clean_op(req);
+-
+-	if (req->async_data)
+-		kfree(req->async_data);
+-	if (req->file)
+-		io_put_file(req, req->file, (req->flags & REQ_F_FIXED_FILE));
+-
+-	io_req_clean_work(req);
+-}
+-
+-static void __io_free_req(struct io_kiocb *req)
+-{
+-	struct io_uring_task *tctx = req->task->io_uring;
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	io_dismantle_req(req);
+-
+-	percpu_counter_dec(&tctx->inflight);
+-	if (atomic_read(&tctx->in_idle))
+-		wake_up(&tctx->wait);
+-	put_task_struct(req->task);
+-
+-	if (likely(!io_is_fallback_req(req)))
+-		kmem_cache_free(req_cachep, req);
+-	else
+-		clear_bit_unlock(0, (unsigned long *) &ctx->fallback_req);
+-	percpu_ref_put(&ctx->refs);
+-}
+-
+-static void io_kill_linked_timeout(struct io_kiocb *req)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_kiocb *link;
+-	bool cancelled = false;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&ctx->completion_lock, flags);
+-	link = list_first_entry_or_null(&req->link_list, struct io_kiocb,
+-					link_list);
+-	/*
+-	 * Can happen if a linked timeout fired and link had been like
+-	 * req -> link t-out -> link t-out [-> ...]
+-	 */
+-	if (link && (link->flags & REQ_F_LTIMEOUT_ACTIVE)) {
+-		struct io_timeout_data *io = link->async_data;
+-		int ret;
+-
+-		list_del_init(&link->link_list);
+-		ret = hrtimer_try_to_cancel(&io->timer);
+-		if (ret != -1) {
+-			io_cqring_fill_event(link, -ECANCELED);
+-			io_commit_cqring(ctx);
+-			cancelled = true;
+-		}
+-	}
+-	req->flags &= ~REQ_F_LINK_TIMEOUT;
+-	spin_unlock_irqrestore(&ctx->completion_lock, flags);
+-
+-	if (cancelled) {
+-		io_cqring_ev_posted(ctx);
+-		io_put_req(link);
+-	}
+-}
+-
+-static struct io_kiocb *io_req_link_next(struct io_kiocb *req)
+-{
+-	struct io_kiocb *nxt;
+-
+-	/*
+-	 * The list should never be empty when we are called here. But could
+-	 * potentially happen if the chain is messed up, check to be on the
+-	 * safe side.
+-	 */
+-	if (unlikely(list_empty(&req->link_list)))
+-		return NULL;
+-
+-	nxt = list_first_entry(&req->link_list, struct io_kiocb, link_list);
+-	list_del_init(&req->link_list);
+-	if (!list_empty(&nxt->link_list))
+-		nxt->flags |= REQ_F_LINK_HEAD;
+-	return nxt;
+-}
+-
+-/*
+- * Called if REQ_F_LINK_HEAD is set, and we fail the head request
+- */
+-static void io_fail_links(struct io_kiocb *req)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&ctx->completion_lock, flags);
+-	while (!list_empty(&req->link_list)) {
+-		struct io_kiocb *link = list_first_entry(&req->link_list,
+-						struct io_kiocb, link_list);
+-
+-		list_del_init(&link->link_list);
+-		trace_io_uring_fail_link(req, link);
+-
+-		io_cqring_fill_event(link, -ECANCELED);
+-
+-		/*
+-		 * It's ok to free under spinlock as they're not linked anymore,
+-		 * but avoid REQ_F_WORK_INITIALIZED because it may deadlock on
+-		 * work.fs->lock.
+-		 */
+-		if (link->flags & REQ_F_WORK_INITIALIZED)
+-			io_put_req_deferred(link, 2);
+-		else
+-			io_double_put_req(link);
+-	}
+-
+-	io_commit_cqring(ctx);
+-	spin_unlock_irqrestore(&ctx->completion_lock, flags);
+-
+-	io_cqring_ev_posted(ctx);
+-}
+-
+-static struct io_kiocb *__io_req_find_next(struct io_kiocb *req)
+-{
+-	req->flags &= ~REQ_F_LINK_HEAD;
+-	if (req->flags & REQ_F_LINK_TIMEOUT)
+-		io_kill_linked_timeout(req);
+-
+-	/*
+-	 * If LINK is set, we have dependent requests in this chain. If we
+-	 * didn't fail this request, queue the first one up, moving any other
+-	 * dependencies to the next request. In case of failure, fail the rest
+-	 * of the chain.
+-	 */
+-	if (likely(!(req->flags & REQ_F_FAIL_LINK)))
+-		return io_req_link_next(req);
+-	io_fail_links(req);
+-	return NULL;
+-}
+-
+-static struct io_kiocb *io_req_find_next(struct io_kiocb *req)
+-{
+-	if (likely(!(req->flags & REQ_F_LINK_HEAD)))
+-		return NULL;
+-	return __io_req_find_next(req);
+-}
+-
+-static int io_req_task_work_add(struct io_kiocb *req, bool twa_signal_ok)
+-{
+-	struct task_struct *tsk = req->task;
+-	struct io_ring_ctx *ctx = req->ctx;
+-	enum task_work_notify_mode notify;
+-	int ret;
+-
+-	if (tsk->flags & PF_EXITING)
+-		return -ESRCH;
+-
+-	/*
+-	 * SQPOLL kernel thread doesn't need notification, just a wakeup. For
+-	 * all other cases, use TWA_SIGNAL unconditionally to ensure we're
+-	 * processing task_work. There's no reliable way to tell if TWA_RESUME
+-	 * will do the job.
+-	 */
+-	notify = TWA_NONE;
+-	if (!(ctx->flags & IORING_SETUP_SQPOLL) && twa_signal_ok)
+-		notify = TWA_SIGNAL;
+-
+-	ret = task_work_add(tsk, &req->task_work, notify);
+-	if (!ret)
+-		wake_up_process(tsk);
+-
+-	return ret;
+-}
+-
+-static void __io_req_task_cancel(struct io_kiocb *req, int error)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	io_cqring_fill_event(req, error);
+-	io_commit_cqring(ctx);
+-	spin_unlock_irq(&ctx->completion_lock);
+-
+-	io_cqring_ev_posted(ctx);
+-	req_set_fail_links(req);
+-	io_double_put_req(req);
+-}
+-
+-static void io_req_task_cancel(struct callback_head *cb)
+-{
+-	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	mutex_lock(&ctx->uring_lock);
+-	__io_req_task_cancel(req, -ECANCELED);
+-	mutex_unlock(&ctx->uring_lock);
+-	percpu_ref_put(&ctx->refs);
+-}
+-
+-static void __io_req_task_submit(struct io_kiocb *req)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	mutex_lock(&ctx->uring_lock);
+-	if (!ctx->sqo_dead && !__io_sq_thread_acquire_mm(ctx))
+-		__io_queue_sqe(req, NULL);
+-	else
+-		__io_req_task_cancel(req, -EFAULT);
+-	mutex_unlock(&ctx->uring_lock);
+-
+-	if (ctx->flags & IORING_SETUP_SQPOLL)
+-		io_sq_thread_drop_mm();
+-}
+-
+-static void io_req_task_submit(struct callback_head *cb)
+-{
+-	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	__io_req_task_submit(req);
+-	percpu_ref_put(&ctx->refs);
+-}
+-
+-static void io_req_task_queue(struct io_kiocb *req)
+-{
+-	int ret;
+-
+-	init_task_work(&req->task_work, io_req_task_submit);
+-	percpu_ref_get(&req->ctx->refs);
+-
+-	ret = io_req_task_work_add(req, true);
+-	if (unlikely(ret)) {
+-		struct task_struct *tsk;
+-
+-		init_task_work(&req->task_work, io_req_task_cancel);
+-		tsk = io_wq_get_task(req->ctx->io_wq);
+-		task_work_add(tsk, &req->task_work, TWA_NONE);
+-		wake_up_process(tsk);
+-	}
+-}
+-
+-static void io_queue_next(struct io_kiocb *req)
+-{
+-	struct io_kiocb *nxt = io_req_find_next(req);
+-
+-	if (nxt)
+-		io_req_task_queue(nxt);
+-}
+-
+-static void io_free_req(struct io_kiocb *req)
+-{
+-	io_queue_next(req);
+-	__io_free_req(req);
+-}
+-
+-struct req_batch {
+-	void *reqs[IO_IOPOLL_BATCH];
+-	int to_free;
+-
+-	struct task_struct	*task;
+-	int			task_refs;
+-};
+-
+-static inline void io_init_req_batch(struct req_batch *rb)
+-{
+-	rb->to_free = 0;
+-	rb->task_refs = 0;
+-	rb->task = NULL;
+-}
+-
+-static void __io_req_free_batch_flush(struct io_ring_ctx *ctx,
+-				      struct req_batch *rb)
+-{
+-	kmem_cache_free_bulk(req_cachep, rb->to_free, rb->reqs);
+-	percpu_ref_put_many(&ctx->refs, rb->to_free);
+-	rb->to_free = 0;
+-}
+-
+-static void io_req_free_batch_finish(struct io_ring_ctx *ctx,
+-				     struct req_batch *rb)
+-{
+-	if (rb->to_free)
+-		__io_req_free_batch_flush(ctx, rb);
+-	if (rb->task) {
+-		struct io_uring_task *tctx = rb->task->io_uring;
+-
+-		percpu_counter_sub(&tctx->inflight, rb->task_refs);
+-		if (atomic_read(&tctx->in_idle))
+-			wake_up(&tctx->wait);
+-		put_task_struct_many(rb->task, rb->task_refs);
+-		rb->task = NULL;
+-	}
+-}
+-
+-static void io_req_free_batch(struct req_batch *rb, struct io_kiocb *req)
+-{
+-	if (unlikely(io_is_fallback_req(req))) {
+-		io_free_req(req);
+-		return;
+-	}
+-	if (req->flags & REQ_F_LINK_HEAD)
+-		io_queue_next(req);
+-
+-	if (req->task != rb->task) {
+-		if (rb->task) {
+-			struct io_uring_task *tctx = rb->task->io_uring;
+-
+-			percpu_counter_sub(&tctx->inflight, rb->task_refs);
+-			if (atomic_read(&tctx->in_idle))
+-				wake_up(&tctx->wait);
+-			put_task_struct_many(rb->task, rb->task_refs);
+-		}
+-		rb->task = req->task;
+-		rb->task_refs = 0;
+-	}
+-	rb->task_refs++;
+-
+-	io_dismantle_req(req);
+-	rb->reqs[rb->to_free++] = req;
+-	if (unlikely(rb->to_free == ARRAY_SIZE(rb->reqs)))
+-		__io_req_free_batch_flush(req->ctx, rb);
+-}
+-
+-/*
+- * Drop reference to request, return next in chain (if there is one) if this
+- * was the last reference to this request.
+- */
+-static struct io_kiocb *io_put_req_find_next(struct io_kiocb *req)
+-{
+-	struct io_kiocb *nxt = NULL;
+-
+-	if (refcount_dec_and_test(&req->refs)) {
+-		nxt = io_req_find_next(req);
+-		__io_free_req(req);
+-	}
+-	return nxt;
+-}
+-
+-static void io_put_req(struct io_kiocb *req)
+-{
+-	if (refcount_dec_and_test(&req->refs))
+-		io_free_req(req);
+-}
+-
+-static void io_put_req_deferred_cb(struct callback_head *cb)
+-{
+-	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+-
+-	io_free_req(req);
+-}
+-
+-static void io_free_req_deferred(struct io_kiocb *req)
+-{
+-	int ret;
+-
+-	init_task_work(&req->task_work, io_put_req_deferred_cb);
+-	ret = io_req_task_work_add(req, true);
+-	if (unlikely(ret)) {
+-		struct task_struct *tsk;
+-
+-		tsk = io_wq_get_task(req->ctx->io_wq);
+-		task_work_add(tsk, &req->task_work, TWA_NONE);
+-		wake_up_process(tsk);
+-	}
+-}
+-
+-static inline void io_put_req_deferred(struct io_kiocb *req, int refs)
+-{
+-	if (refcount_sub_and_test(refs, &req->refs))
+-		io_free_req_deferred(req);
+-}
+-
+-static struct io_wq_work *io_steal_work(struct io_kiocb *req)
+-{
+-	struct io_kiocb *nxt;
+-
+-	/*
+-	 * A ref is owned by io-wq in which context we're. So, if that's the
+-	 * last one, it's safe to steal next work. False negatives are Ok,
+-	 * it just will be re-punted async in io_put_work()
+-	 */
+-	if (refcount_read(&req->refs) != 1)
+-		return NULL;
+-
+-	nxt = io_req_find_next(req);
+-	return nxt ? &nxt->work : NULL;
+-}
+-
+-static void io_double_put_req(struct io_kiocb *req)
+-{
+-	/* drop both submit and complete references */
+-	if (refcount_sub_and_test(2, &req->refs))
+-		io_free_req(req);
+-}
+-
+-static unsigned io_cqring_events(struct io_ring_ctx *ctx)
+-{
+-	struct io_rings *rings = ctx->rings;
+-
+-	/* See comment at the top of this file */
+-	smp_rmb();
+-	return ctx->cached_cq_tail - READ_ONCE(rings->cq.head);
+-}
+-
+-static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx)
+-{
+-	struct io_rings *rings = ctx->rings;
+-
+-	/* make sure SQ entry isn't read before tail */
+-	return smp_load_acquire(&rings->sq.tail) - ctx->cached_sq_head;
+-}
+-
+-static unsigned int io_put_kbuf(struct io_kiocb *req, struct io_buffer *kbuf)
+-{
+-	unsigned int cflags;
+-
+-	cflags = kbuf->bid << IORING_CQE_BUFFER_SHIFT;
+-	cflags |= IORING_CQE_F_BUFFER;
+-	req->flags &= ~REQ_F_BUFFER_SELECTED;
+-	kfree(kbuf);
+-	return cflags;
+-}
+-
+-static inline unsigned int io_put_rw_kbuf(struct io_kiocb *req)
+-{
+-	struct io_buffer *kbuf;
+-
+-	kbuf = (struct io_buffer *) (unsigned long) req->rw.addr;
+-	return io_put_kbuf(req, kbuf);
+-}
+-
+-static inline bool io_run_task_work(void)
+-{
+-	/*
+-	 * Not safe to run on exiting task, and the task_work handling will
+-	 * not add work to such a task.
+-	 */
+-	if (unlikely(current->flags & PF_EXITING))
+-		return false;
+-	if (current->task_works) {
+-		__set_current_state(TASK_RUNNING);
+-		task_work_run();
+-		return true;
+-	}
+-
+-	return false;
+-}
+-
+-static void io_iopoll_queue(struct list_head *again)
+-{
+-	struct io_kiocb *req;
+-
+-	do {
+-		req = list_first_entry(again, struct io_kiocb, iopoll_entry);
+-		list_del(&req->iopoll_entry);
+-		__io_complete_rw(req, -EAGAIN, 0, NULL);
+-	} while (!list_empty(again));
+-}
+-
+-/*
+- * Find and free completed poll iocbs
+- */
+-static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+-			       struct list_head *done)
+-{
+-	struct req_batch rb;
+-	struct io_kiocb *req;
+-	LIST_HEAD(again);
+-
+-	/* order with ->result store in io_complete_rw_iopoll() */
+-	smp_rmb();
+-
+-	io_init_req_batch(&rb);
+-	while (!list_empty(done)) {
+-		int cflags = 0;
+-
+-		req = list_first_entry(done, struct io_kiocb, iopoll_entry);
+-		if (READ_ONCE(req->result) == -EAGAIN) {
+-			req->result = 0;
+-			req->iopoll_completed = 0;
+-			list_move_tail(&req->iopoll_entry, &again);
+-			continue;
+-		}
+-		list_del(&req->iopoll_entry);
+-
+-		if (req->flags & REQ_F_BUFFER_SELECTED)
+-			cflags = io_put_rw_kbuf(req);
+-
+-		__io_cqring_fill_event(req, req->result, cflags);
+-		(*nr_events)++;
+-
+-		if (refcount_dec_and_test(&req->refs))
+-			io_req_free_batch(&rb, req);
+-	}
+-
+-	io_commit_cqring(ctx);
+-	if (ctx->flags & IORING_SETUP_SQPOLL)
+-		io_cqring_ev_posted(ctx);
+-	io_req_free_batch_finish(ctx, &rb);
+-
+-	if (!list_empty(&again))
+-		io_iopoll_queue(&again);
+-}
+-
+-static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+-			long min)
+-{
+-	struct io_kiocb *req, *tmp;
+-	LIST_HEAD(done);
+-	bool spin;
+-	int ret;
+-
+-	/*
+-	 * Only spin for completions if we don't have multiple devices hanging
+-	 * off our complete list, and we're under the requested amount.
+-	 */
+-	spin = !ctx->poll_multi_file && *nr_events < min;
+-
+-	ret = 0;
+-	list_for_each_entry_safe(req, tmp, &ctx->iopoll_list, iopoll_entry) {
+-		struct kiocb *kiocb = &req->rw.kiocb;
+-
+-		/*
+-		 * Move completed and retryable entries to our local lists.
+-		 * If we find a request that requires polling, break out
+-		 * and complete those lists first, if we have entries there.
+-		 */
+-		if (READ_ONCE(req->iopoll_completed)) {
+-			list_move_tail(&req->iopoll_entry, &done);
+-			continue;
+-		}
+-		if (!list_empty(&done))
+-			break;
+-
+-		ret = kiocb->ki_filp->f_op->iopoll(kiocb, spin);
+-		if (ret < 0)
+-			break;
+-
+-		/* iopoll may have completed current req */
+-		if (READ_ONCE(req->iopoll_completed))
+-			list_move_tail(&req->iopoll_entry, &done);
+-
+-		if (ret && spin)
+-			spin = false;
+-		ret = 0;
+-	}
+-
+-	if (!list_empty(&done))
+-		io_iopoll_complete(ctx, nr_events, &done);
+-
+-	return ret;
+-}
+-
+-/*
+- * Poll for a minimum of 'min' events. Note that if min == 0 we consider that a
+- * non-spinning poll check - we'll still enter the driver poll loop, but only
+- * as a non-spinning completion check.
+- */
+-static int io_iopoll_getevents(struct io_ring_ctx *ctx, unsigned int *nr_events,
+-				long min)
+-{
+-	while (!list_empty(&ctx->iopoll_list) && !need_resched()) {
+-		int ret;
+-
+-		ret = io_do_iopoll(ctx, nr_events, min);
+-		if (ret < 0)
+-			return ret;
+-		if (*nr_events >= min)
+-			return 0;
+-	}
+-
+-	return 1;
+-}
+-
+-/*
+- * We can't just wait for polled events to come to us, we have to actively
+- * find and complete them.
+- */
+-static void io_iopoll_try_reap_events(struct io_ring_ctx *ctx)
+-{
+-	if (!(ctx->flags & IORING_SETUP_IOPOLL))
+-		return;
+-
+-	mutex_lock(&ctx->uring_lock);
+-	while (!list_empty(&ctx->iopoll_list)) {
+-		unsigned int nr_events = 0;
+-
+-		io_do_iopoll(ctx, &nr_events, 0);
+-
+-		/* let it sleep and repeat later if can't complete a request */
+-		if (nr_events == 0)
+-			break;
+-		/*
+-		 * Ensure we allow local-to-the-cpu processing to take place,
+-		 * in this case we need to ensure that we reap all events.
+-		 * Also let task_work, etc. to progress by releasing the mutex
+-		 */
+-		if (need_resched()) {
+-			mutex_unlock(&ctx->uring_lock);
+-			cond_resched();
+-			mutex_lock(&ctx->uring_lock);
+-		}
+-	}
+-	mutex_unlock(&ctx->uring_lock);
+-}
+-
+-static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
+-{
+-	unsigned int nr_events = 0;
+-	int iters = 0, ret = 0;
+-
+-	/*
+-	 * We disallow the app entering submit/complete with polling, but we
+-	 * still need to lock the ring to prevent racing with polled issue
+-	 * that got punted to a workqueue.
+-	 */
+-	mutex_lock(&ctx->uring_lock);
+-	do {
+-		/*
+-		 * Don't enter poll loop if we already have events pending.
+-		 * If we do, we can potentially be spinning for commands that
+-		 * already triggered a CQE (eg in error).
+-		 */
+-		if (test_bit(0, &ctx->cq_check_overflow))
+-			__io_cqring_overflow_flush(ctx, false, NULL, NULL);
+-		if (io_cqring_events(ctx))
+-			break;
+-
+-		/*
+-		 * If a submit got punted to a workqueue, we can have the
+-		 * application entering polling for a command before it gets
+-		 * issued. That app will hold the uring_lock for the duration
+-		 * of the poll right here, so we need to take a breather every
+-		 * now and then to ensure that the issue has a chance to add
+-		 * the poll to the issued list. Otherwise we can spin here
+-		 * forever, while the workqueue is stuck trying to acquire the
+-		 * very same mutex.
+-		 */
+-		if (!(++iters & 7)) {
+-			mutex_unlock(&ctx->uring_lock);
+-			io_run_task_work();
+-			mutex_lock(&ctx->uring_lock);
+-		}
+-
+-		ret = io_iopoll_getevents(ctx, &nr_events, min);
+-		if (ret <= 0)
+-			break;
+-		ret = 0;
+-	} while (min && !nr_events && !need_resched());
+-
+-	mutex_unlock(&ctx->uring_lock);
+-	return ret;
+-}
+-
+-static void kiocb_end_write(struct io_kiocb *req)
+-{
+-	/*
+-	 * Tell lockdep we inherited freeze protection from submission
+-	 * thread.
+-	 */
+-	if (req->flags & REQ_F_ISREG) {
+-		struct inode *inode = file_inode(req->file);
+-
+-		__sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
+-	}
+-	file_end_write(req->file);
+-}
+-
+-static void io_complete_rw_common(struct kiocb *kiocb, long res,
+-				  struct io_comp_state *cs)
+-{
+-	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
+-	int cflags = 0;
+-
+-	if (kiocb->ki_flags & IOCB_WRITE)
+-		kiocb_end_write(req);
+-
+-	if (res != req->result)
+-		req_set_fail_links(req);
+-	if (req->flags & REQ_F_BUFFER_SELECTED)
+-		cflags = io_put_rw_kbuf(req);
+-	__io_req_complete(req, res, cflags, cs);
+-}
+-
+-#ifdef CONFIG_BLOCK
+-static bool io_resubmit_prep(struct io_kiocb *req, int error)
+-{
+-	req_set_fail_links(req);
+-	return false;
+-}
+-#endif
+-
+-static bool io_rw_reissue(struct io_kiocb *req, long res)
+-{
+-#ifdef CONFIG_BLOCK
+-	umode_t mode = file_inode(req->file)->i_mode;
+-	int ret;
+-
+-	if (!S_ISBLK(mode) && !S_ISREG(mode))
+-		return false;
+-	if ((res != -EAGAIN && res != -EOPNOTSUPP) || io_wq_current_is_worker())
+-		return false;
+-	/*
+-	 * If ref is dying, we might be running poll reap from the exit work.
+-	 * Don't attempt to reissue from that path, just let it fail with
+-	 * -EAGAIN.
+-	 */
+-	if (percpu_ref_is_dying(&req->ctx->refs))
+-		return false;
+-
+-	ret = io_sq_thread_acquire_mm(req->ctx, req);
+-
+-	if (io_resubmit_prep(req, ret)) {
+-		refcount_inc(&req->refs);
+-		io_queue_async_work(req);
+-		return true;
+-	}
+-
+-#endif
+-	return false;
+-}
+-
+-static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
+-			     struct io_comp_state *cs)
+-{
+-	if (!io_rw_reissue(req, res))
+-		io_complete_rw_common(&req->rw.kiocb, res, cs);
+-}
+-
+-static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
+-{
+-	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
+-
+-	__io_complete_rw(req, res, res2, NULL);
+-}
+-
+-static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
+-{
+-	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
+-
+-	if (kiocb->ki_flags & IOCB_WRITE)
+-		kiocb_end_write(req);
+-
+-	if (res != -EAGAIN && res != req->result)
+-		req_set_fail_links(req);
+-
+-	WRITE_ONCE(req->result, res);
+-	/* order with io_poll_complete() checking ->result */
+-	smp_wmb();
+-	WRITE_ONCE(req->iopoll_completed, 1);
+-}
+-
+-/*
+- * After the iocb has been issued, it's safe to be found on the poll list.
+- * Adding the kiocb to the list AFTER submission ensures that we don't
+- * find it from a io_iopoll_getevents() thread before the issuer is done
+- * accessing the kiocb cookie.
+- */
+-static void io_iopoll_req_issued(struct io_kiocb *req)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	/*
+-	 * Track whether we have multiple files in our lists. This will impact
+-	 * how we do polling eventually, not spinning if we're on potentially
+-	 * different devices.
+-	 */
+-	if (list_empty(&ctx->iopoll_list)) {
+-		ctx->poll_multi_file = false;
+-	} else if (!ctx->poll_multi_file) {
+-		struct io_kiocb *list_req;
+-
+-		list_req = list_first_entry(&ctx->iopoll_list, struct io_kiocb,
+-						iopoll_entry);
+-		if (list_req->file != req->file)
+-			ctx->poll_multi_file = true;
+-	}
+-
+-	/*
+-	 * For fast devices, IO may have already completed. If it has, add
+-	 * it to the front so we find it first.
+-	 */
+-	if (READ_ONCE(req->iopoll_completed))
+-		list_add(&req->iopoll_entry, &ctx->iopoll_list);
+-	else
+-		list_add_tail(&req->iopoll_entry, &ctx->iopoll_list);
+-
+-	if ((ctx->flags & IORING_SETUP_SQPOLL) &&
+-	    wq_has_sleeper(&ctx->sq_data->wait))
+-		wake_up(&ctx->sq_data->wait);
+-}
+-
+-static void __io_state_file_put(struct io_submit_state *state)
+-{
+-	if (state->has_refs)
+-		fput_many(state->file, state->has_refs);
+-	state->file = NULL;
+-}
+-
+-static inline void io_state_file_put(struct io_submit_state *state)
+-{
+-	if (state->file)
+-		__io_state_file_put(state);
+-}
+-
+-/*
+- * Get as many references to a file as we have IOs left in this submission,
+- * assuming most submissions are for one file, or at least that each file
+- * has more than one submission.
+- */
+-static struct file *__io_file_get(struct io_submit_state *state, int fd)
+-{
+-	if (!state)
+-		return fget(fd);
+-
+-	if (state->file) {
+-		if (state->fd == fd) {
+-			state->has_refs--;
+-			return state->file;
+-		}
+-		__io_state_file_put(state);
+-	}
+-	state->file = fget_many(fd, state->ios_left);
+-	if (!state->file)
+-		return NULL;
+-
+-	state->fd = fd;
+-	state->has_refs = state->ios_left - 1;
+-	return state->file;
+-}
+-
+-static bool io_bdev_nowait(struct block_device *bdev)
+-{
+-#ifdef CONFIG_BLOCK
+-	return !bdev || blk_queue_nowait(bdev_get_queue(bdev));
+-#else
+-	return true;
+-#endif
+-}
+-
+-/*
+- * If we tracked the file through the SCM inflight mechanism, we could support
+- * any file. For now, just ensure that anything potentially problematic is done
+- * inline.
+- */
+-static bool io_file_supports_async(struct file *file, int rw)
+-{
+-	umode_t mode = file_inode(file)->i_mode;
+-
+-	if (S_ISBLK(mode)) {
+-		if (io_bdev_nowait(file->f_inode->i_bdev))
+-			return true;
+-		return false;
+-	}
+-	if (S_ISSOCK(mode))
+-		return true;
+-	if (S_ISREG(mode)) {
+-		if (io_bdev_nowait(file->f_inode->i_sb->s_bdev) &&
+-		    file->f_op != &io_uring_fops)
+-			return true;
+-		return false;
+-	}
+-
+-	/* any ->read/write should understand O_NONBLOCK */
+-	if (file->f_flags & O_NONBLOCK)
+-		return true;
+-
+-	if (!(file->f_mode & FMODE_NOWAIT))
+-		return false;
+-
+-	if (rw == READ)
+-		return file->f_op->read_iter != NULL;
+-
+-	return file->f_op->write_iter != NULL;
+-}
+-
+-static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct kiocb *kiocb = &req->rw.kiocb;
+-	unsigned ioprio;
+-	int ret;
+-
+-	if (S_ISREG(file_inode(req->file)->i_mode))
+-		req->flags |= REQ_F_ISREG;
+-
+-	kiocb->ki_pos = READ_ONCE(sqe->off);
+-	if (kiocb->ki_pos == -1 && !(req->file->f_mode & FMODE_STREAM)) {
+-		req->flags |= REQ_F_CUR_POS;
+-		kiocb->ki_pos = req->file->f_pos;
+-	}
+-	kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
+-	kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
+-	ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags));
+-	if (unlikely(ret))
+-		return ret;
+-
+-	ioprio = READ_ONCE(sqe->ioprio);
+-	if (ioprio) {
+-		ret = ioprio_check_cap(ioprio);
+-		if (ret)
+-			return ret;
+-
+-		kiocb->ki_ioprio = ioprio;
+-	} else
+-		kiocb->ki_ioprio = get_current_ioprio();
+-
+-	/* don't allow async punt if RWF_NOWAIT was requested */
+-	if (kiocb->ki_flags & IOCB_NOWAIT)
+-		req->flags |= REQ_F_NOWAIT;
+-
+-	if (ctx->flags & IORING_SETUP_IOPOLL) {
+-		if (!(kiocb->ki_flags & IOCB_DIRECT) ||
+-		    !kiocb->ki_filp->f_op->iopoll)
+-			return -EOPNOTSUPP;
+-
+-		kiocb->ki_flags |= IOCB_HIPRI;
+-		kiocb->ki_complete = io_complete_rw_iopoll;
+-		req->iopoll_completed = 0;
+-	} else {
+-		if (kiocb->ki_flags & IOCB_HIPRI)
+-			return -EINVAL;
+-		kiocb->ki_complete = io_complete_rw;
+-	}
+-
+-	req->rw.addr = READ_ONCE(sqe->addr);
+-	req->rw.len = READ_ONCE(sqe->len);
+-	req->buf_index = READ_ONCE(sqe->buf_index);
+-	return 0;
+-}
+-
+-static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret)
+-{
+-	switch (ret) {
+-	case -EIOCBQUEUED:
+-		break;
+-	case -ERESTARTSYS:
+-	case -ERESTARTNOINTR:
+-	case -ERESTARTNOHAND:
+-	case -ERESTART_RESTARTBLOCK:
+-		/*
+-		 * We can't just restart the syscall, since previously
+-		 * submitted sqes may already be in progress. Just fail this
+-		 * IO with EINTR.
+-		 */
+-		ret = -EINTR;
+-		fallthrough;
+-	default:
+-		kiocb->ki_complete(kiocb, ret, 0);
+-	}
+-}
+-
+-static void kiocb_done(struct kiocb *kiocb, ssize_t ret,
+-		       struct io_comp_state *cs)
+-{
+-	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
+-	struct io_async_rw *io = req->async_data;
+-
+-	/* add previously done IO, if any */
+-	if (io && io->bytes_done > 0) {
+-		if (ret < 0)
+-			ret = io->bytes_done;
+-		else
+-			ret += io->bytes_done;
+-	}
+-
+-	if (req->flags & REQ_F_CUR_POS)
+-		req->file->f_pos = kiocb->ki_pos;
+-	if (ret >= 0 && kiocb->ki_complete == io_complete_rw)
+-		__io_complete_rw(req, ret, 0, cs);
+-	else
+-		io_rw_done(kiocb, ret);
+-}
+-
+-static ssize_t io_import_fixed(struct io_kiocb *req, int rw,
+-			       struct iov_iter *iter)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	size_t len = req->rw.len;
+-	struct io_mapped_ubuf *imu;
+-	u16 index, buf_index = req->buf_index;
+-	size_t offset;
+-	u64 buf_addr;
+-
+-	if (unlikely(buf_index >= ctx->nr_user_bufs))
+-		return -EFAULT;
+-	index = array_index_nospec(buf_index, ctx->nr_user_bufs);
+-	imu = &ctx->user_bufs[index];
+-	buf_addr = req->rw.addr;
+-
+-	/* overflow */
+-	if (buf_addr + len < buf_addr)
+-		return -EFAULT;
+-	/* not inside the mapped region */
+-	if (buf_addr < imu->ubuf || buf_addr + len > imu->ubuf + imu->len)
+-		return -EFAULT;
+-
+-	/*
+-	 * May not be a start of buffer, set size appropriately
+-	 * and advance us to the beginning.
+-	 */
+-	offset = buf_addr - imu->ubuf;
+-	iov_iter_bvec(iter, rw, imu->bvec, imu->nr_bvecs, offset + len);
+-
+-	if (offset) {
+-		/*
+-		 * Don't use iov_iter_advance() here, as it's really slow for
+-		 * using the latter parts of a big fixed buffer - it iterates
+-		 * over each segment manually. We can cheat a bit here, because
+-		 * we know that:
+-		 *
+-		 * 1) it's a BVEC iter, we set it up
+-		 * 2) all bvecs are PAGE_SIZE in size, except potentially the
+-		 *    first and last bvec
+-		 *
+-		 * So just find our index, and adjust the iterator afterwards.
+-		 * If the offset is within the first bvec (or the whole first
+-		 * bvec, just use iov_iter_advance(). This makes it easier
+-		 * since we can just skip the first segment, which may not
+-		 * be PAGE_SIZE aligned.
+-		 */
+-		const struct bio_vec *bvec = imu->bvec;
+-
+-		if (offset <= bvec->bv_len) {
+-			iov_iter_advance(iter, offset);
+-		} else {
+-			unsigned long seg_skip;
+-
+-			/* skip first vec */
+-			offset -= bvec->bv_len;
+-			seg_skip = 1 + (offset >> PAGE_SHIFT);
+-
+-			iter->bvec = bvec + seg_skip;
+-			iter->nr_segs -= seg_skip;
+-			iter->count -= bvec->bv_len + offset;
+-			iter->iov_offset = offset & ~PAGE_MASK;
+-		}
+-	}
+-
+-	return len;
+-}
+-
+-static void io_ring_submit_unlock(struct io_ring_ctx *ctx, bool needs_lock)
+-{
+-	if (needs_lock)
+-		mutex_unlock(&ctx->uring_lock);
+-}
+-
+-static void io_ring_submit_lock(struct io_ring_ctx *ctx, bool needs_lock)
+-{
+-	/*
+-	 * "Normal" inline submissions always hold the uring_lock, since we
+-	 * grab it from the system call. Same is true for the SQPOLL offload.
+-	 * The only exception is when we've detached the request and issue it
+-	 * from an async worker thread, grab the lock for that case.
+-	 */
+-	if (needs_lock)
+-		mutex_lock(&ctx->uring_lock);
+-}
+-
+-static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
+-					  int bgid, struct io_buffer *kbuf,
+-					  bool needs_lock)
+-{
+-	struct io_buffer *head;
+-
+-	if (req->flags & REQ_F_BUFFER_SELECTED)
+-		return kbuf;
+-
+-	io_ring_submit_lock(req->ctx, needs_lock);
+-
+-	lockdep_assert_held(&req->ctx->uring_lock);
+-
+-	head = xa_load(&req->ctx->io_buffers, bgid);
+-	if (head) {
+-		if (!list_empty(&head->list)) {
+-			kbuf = list_last_entry(&head->list, struct io_buffer,
+-							list);
+-			list_del(&kbuf->list);
+-		} else {
+-			kbuf = head;
+-			xa_erase(&req->ctx->io_buffers, bgid);
+-		}
+-		if (*len > kbuf->len)
+-			*len = kbuf->len;
+-	} else {
+-		kbuf = ERR_PTR(-ENOBUFS);
+-	}
+-
+-	io_ring_submit_unlock(req->ctx, needs_lock);
+-
+-	return kbuf;
+-}
+-
+-static void __user *io_rw_buffer_select(struct io_kiocb *req, size_t *len,
+-					bool needs_lock)
+-{
+-	struct io_buffer *kbuf;
+-	u16 bgid;
+-
+-	kbuf = (struct io_buffer *) (unsigned long) req->rw.addr;
+-	bgid = req->buf_index;
+-	kbuf = io_buffer_select(req, len, bgid, kbuf, needs_lock);
+-	if (IS_ERR(kbuf))
+-		return kbuf;
+-	req->rw.addr = (u64) (unsigned long) kbuf;
+-	req->flags |= REQ_F_BUFFER_SELECTED;
+-	return u64_to_user_ptr(kbuf->addr);
+-}
+-
+-#ifdef CONFIG_COMPAT
+-static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
+-				bool needs_lock)
+-{
+-	struct compat_iovec __user *uiov;
+-	compat_ssize_t clen;
+-	void __user *buf;
+-	ssize_t len;
+-
+-	uiov = u64_to_user_ptr(req->rw.addr);
+-	if (!access_ok(uiov, sizeof(*uiov)))
+-		return -EFAULT;
+-	if (__get_user(clen, &uiov->iov_len))
+-		return -EFAULT;
+-	if (clen < 0)
+-		return -EINVAL;
+-
+-	len = clen;
+-	buf = io_rw_buffer_select(req, &len, needs_lock);
+-	if (IS_ERR(buf))
+-		return PTR_ERR(buf);
+-	iov[0].iov_base = buf;
+-	iov[0].iov_len = (compat_size_t) len;
+-	return 0;
+-}
+-#endif
+-
+-static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
+-				      bool needs_lock)
+-{
+-	struct iovec __user *uiov = u64_to_user_ptr(req->rw.addr);
+-	void __user *buf;
+-	ssize_t len;
+-
+-	if (copy_from_user(iov, uiov, sizeof(*uiov)))
+-		return -EFAULT;
+-
+-	len = iov[0].iov_len;
+-	if (len < 0)
+-		return -EINVAL;
+-	buf = io_rw_buffer_select(req, &len, needs_lock);
+-	if (IS_ERR(buf))
+-		return PTR_ERR(buf);
+-	iov[0].iov_base = buf;
+-	iov[0].iov_len = len;
+-	return 0;
+-}
+-
+-static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
+-				    bool needs_lock)
+-{
+-	if (req->flags & REQ_F_BUFFER_SELECTED) {
+-		struct io_buffer *kbuf;
+-
+-		kbuf = (struct io_buffer *) (unsigned long) req->rw.addr;
+-		iov[0].iov_base = u64_to_user_ptr(kbuf->addr);
+-		iov[0].iov_len = kbuf->len;
+-		return 0;
+-	}
+-	if (req->rw.len != 1)
+-		return -EINVAL;
+-
+-#ifdef CONFIG_COMPAT
+-	if (req->ctx->compat)
+-		return io_compat_import(req, iov, needs_lock);
+-#endif
+-
+-	return __io_iov_buffer_select(req, iov, needs_lock);
+-}
+-
+-static ssize_t __io_import_iovec(int rw, struct io_kiocb *req,
+-				 struct iovec **iovec, struct iov_iter *iter,
+-				 bool needs_lock)
+-{
+-	void __user *buf = u64_to_user_ptr(req->rw.addr);
+-	size_t sqe_len = req->rw.len;
+-	ssize_t ret;
+-	u8 opcode;
+-
+-	opcode = req->opcode;
+-	if (opcode == IORING_OP_READ_FIXED || opcode == IORING_OP_WRITE_FIXED) {
+-		*iovec = NULL;
+-		return io_import_fixed(req, rw, iter);
+-	}
+-
+-	/* buffer index only valid with fixed read/write, or buffer select  */
+-	if (req->buf_index && !(req->flags & REQ_F_BUFFER_SELECT))
+-		return -EINVAL;
+-
+-	if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) {
+-		if (req->flags & REQ_F_BUFFER_SELECT) {
+-			buf = io_rw_buffer_select(req, &sqe_len, needs_lock);
+-			if (IS_ERR(buf))
+-				return PTR_ERR(buf);
+-			req->rw.len = sqe_len;
+-		}
+-
+-		ret = import_single_range(rw, buf, sqe_len, *iovec, iter);
+-		*iovec = NULL;
+-		return ret;
+-	}
+-
+-	if (req->flags & REQ_F_BUFFER_SELECT) {
+-		ret = io_iov_buffer_select(req, *iovec, needs_lock);
+-		if (!ret) {
+-			ret = (*iovec)->iov_len;
+-			iov_iter_init(iter, rw, *iovec, 1, ret);
+-		}
+-		*iovec = NULL;
+-		return ret;
+-	}
+-
+-	return __import_iovec(rw, buf, sqe_len, UIO_FASTIOV, iovec, iter,
+-			      req->ctx->compat);
+-}
+-
+-static ssize_t io_import_iovec(int rw, struct io_kiocb *req,
+-			       struct iovec **iovec, struct iov_iter *iter,
+-			       bool needs_lock)
+-{
+-	struct io_async_rw *iorw = req->async_data;
+-
+-	if (!iorw)
+-		return __io_import_iovec(rw, req, iovec, iter, needs_lock);
+-	*iovec = NULL;
+-	return 0;
+-}
+-
+-static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
+-{
+-	return (kiocb->ki_filp->f_mode & FMODE_STREAM) ? NULL : &kiocb->ki_pos;
+-}
+-
+-/*
+- * For files that don't have ->read_iter() and ->write_iter(), handle them
+- * by looping over ->read() or ->write() manually.
+- */
+-static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
+-{
+-	struct kiocb *kiocb = &req->rw.kiocb;
+-	struct file *file = req->file;
+-	ssize_t ret = 0;
+-
+-	/*
+-	 * Don't support polled IO through this interface, and we can't
+-	 * support non-blocking either. For the latter, this just causes
+-	 * the kiocb to be handled from an async context.
+-	 */
+-	if (kiocb->ki_flags & IOCB_HIPRI)
+-		return -EOPNOTSUPP;
+-	if (kiocb->ki_flags & IOCB_NOWAIT)
+-		return -EAGAIN;
+-
+-	while (iov_iter_count(iter)) {
+-		struct iovec iovec;
+-		ssize_t nr;
+-
+-		if (!iov_iter_is_bvec(iter)) {
+-			iovec = iov_iter_iovec(iter);
+-		} else {
+-			iovec.iov_base = u64_to_user_ptr(req->rw.addr);
+-			iovec.iov_len = req->rw.len;
+-		}
+-
+-		if (rw == READ) {
+-			nr = file->f_op->read(file, iovec.iov_base,
+-					      iovec.iov_len, io_kiocb_ppos(kiocb));
+-		} else {
+-			nr = file->f_op->write(file, iovec.iov_base,
+-					       iovec.iov_len, io_kiocb_ppos(kiocb));
+-		}
+-
+-		if (nr < 0) {
+-			if (!ret)
+-				ret = nr;
+-			break;
+-		}
+-		ret += nr;
+-		if (!iov_iter_is_bvec(iter)) {
+-			iov_iter_advance(iter, nr);
+-		} else {
+-			req->rw.addr += nr;
+-			req->rw.len -= nr;
+-			if (!req->rw.len)
+-				break;
+-		}
+-		if (nr != iovec.iov_len)
+-			break;
+-	}
+-
+-	return ret;
+-}
+-
+-static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
+-			  const struct iovec *fast_iov, struct iov_iter *iter)
+-{
+-	struct io_async_rw *rw = req->async_data;
+-
+-	memcpy(&rw->iter, iter, sizeof(*iter));
+-	rw->free_iovec = iovec;
+-	rw->bytes_done = 0;
+-	/* can only be fixed buffers, no need to do anything */
+-	if (iov_iter_is_bvec(iter))
+-		return;
+-	if (!iovec) {
+-		unsigned iov_off = 0;
+-
+-		rw->iter.iov = rw->fast_iov;
+-		if (iter->iov != fast_iov) {
+-			iov_off = iter->iov - fast_iov;
+-			rw->iter.iov += iov_off;
+-		}
+-		if (rw->fast_iov != fast_iov)
+-			memcpy(rw->fast_iov + iov_off, fast_iov + iov_off,
+-			       sizeof(struct iovec) * iter->nr_segs);
+-	} else {
+-		req->flags |= REQ_F_NEED_CLEANUP;
+-	}
+-}
+-
+-static inline int __io_alloc_async_data(struct io_kiocb *req)
+-{
+-	WARN_ON_ONCE(!io_op_defs[req->opcode].async_size);
+-	req->async_data = kmalloc(io_op_defs[req->opcode].async_size, GFP_KERNEL);
+-	return req->async_data == NULL;
+-}
+-
+-static int io_alloc_async_data(struct io_kiocb *req)
+-{
+-	if (!io_op_defs[req->opcode].needs_async_data)
+-		return 0;
+-
+-	return  __io_alloc_async_data(req);
+-}
+-
+-static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
+-			     const struct iovec *fast_iov,
+-			     struct iov_iter *iter, bool force)
+-{
+-	if (!force && !io_op_defs[req->opcode].needs_async_data)
+-		return 0;
+-	if (!req->async_data) {
+-		if (__io_alloc_async_data(req))
+-			return -ENOMEM;
+-
+-		io_req_map_rw(req, iovec, fast_iov, iter);
+-	}
+-	return 0;
+-}
+-
+-static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
+-{
+-	struct io_async_rw *iorw = req->async_data;
+-	struct iovec *iov = iorw->fast_iov;
+-	ssize_t ret;
+-
+-	ret = __io_import_iovec(rw, req, &iov, &iorw->iter, false);
+-	if (unlikely(ret < 0))
+-		return ret;
+-
+-	iorw->bytes_done = 0;
+-	iorw->free_iovec = iov;
+-	if (iov)
+-		req->flags |= REQ_F_NEED_CLEANUP;
+-	return 0;
+-}
+-
+-static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	ssize_t ret;
+-
+-	ret = io_prep_rw(req, sqe);
+-	if (ret)
+-		return ret;
+-
+-	if (unlikely(!(req->file->f_mode & FMODE_READ)))
+-		return -EBADF;
+-
+-	/* either don't need iovec imported or already have it */
+-	if (!req->async_data)
+-		return 0;
+-	return io_rw_prep_async(req, READ);
+-}
+-
+-/*
+- * This is our waitqueue callback handler, registered through lock_page_async()
+- * when we initially tried to do the IO with the iocb armed our waitqueue.
+- * This gets called when the page is unlocked, and we generally expect that to
+- * happen when the page IO is completed and the page is now uptodate. This will
+- * queue a task_work based retry of the operation, attempting to copy the data
+- * again. If the latter fails because the page was NOT uptodate, then we will
+- * do a thread based blocking retry of the operation. That's the unexpected
+- * slow path.
+- */
+-static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
+-			     int sync, void *arg)
+-{
+-	struct wait_page_queue *wpq;
+-	struct io_kiocb *req = wait->private;
+-	struct wait_page_key *key = arg;
+-	int ret;
+-
+-	wpq = container_of(wait, struct wait_page_queue, wait);
+-
+-	if (!wake_page_match(wpq, key))
+-		return 0;
+-
+-	req->rw.kiocb.ki_flags &= ~IOCB_WAITQ;
+-	list_del_init(&wait->entry);
+-
+-	init_task_work(&req->task_work, io_req_task_submit);
+-	percpu_ref_get(&req->ctx->refs);
+-
+-	/* submit ref gets dropped, acquire a new one */
+-	refcount_inc(&req->refs);
+-	ret = io_req_task_work_add(req, true);
+-	if (unlikely(ret)) {
+-		struct task_struct *tsk;
+-
+-		/* queue just for cancelation */
+-		init_task_work(&req->task_work, io_req_task_cancel);
+-		tsk = io_wq_get_task(req->ctx->io_wq);
+-		task_work_add(tsk, &req->task_work, TWA_NONE);
+-		wake_up_process(tsk);
+-	}
+-	return 1;
+-}
+-
+-/*
+- * This controls whether a given IO request should be armed for async page
+- * based retry. If we return false here, the request is handed to the async
+- * worker threads for retry. If we're doing buffered reads on a regular file,
+- * we prepare a private wait_page_queue entry and retry the operation. This
+- * will either succeed because the page is now uptodate and unlocked, or it
+- * will register a callback when the page is unlocked at IO completion. Through
+- * that callback, io_uring uses task_work to setup a retry of the operation.
+- * That retry will attempt the buffered read again. The retry will generally
+- * succeed, or in rare cases where it fails, we then fall back to using the
+- * async worker threads for a blocking retry.
+- */
+-static bool io_rw_should_retry(struct io_kiocb *req)
+-{
+-	struct io_async_rw *rw = req->async_data;
+-	struct wait_page_queue *wait = &rw->wpq;
+-	struct kiocb *kiocb = &req->rw.kiocb;
+-
+-	/* never retry for NOWAIT, we just complete with -EAGAIN */
+-	if (req->flags & REQ_F_NOWAIT)
+-		return false;
+-
+-	/* Only for buffered IO */
+-	if (kiocb->ki_flags & (IOCB_DIRECT | IOCB_HIPRI))
+-		return false;
+-
+-	/*
+-	 * just use poll if we can, and don't attempt if the fs doesn't
+-	 * support callback based unlocks
+-	 */
+-	if (file_can_poll(req->file) || !(req->file->f_mode & FMODE_BUF_RASYNC))
+-		return false;
+-
+-	wait->wait.func = io_async_buf_func;
+-	wait->wait.private = req;
+-	wait->wait.flags = 0;
+-	INIT_LIST_HEAD(&wait->wait.entry);
+-	kiocb->ki_flags |= IOCB_WAITQ;
+-	kiocb->ki_flags &= ~IOCB_NOWAIT;
+-	kiocb->ki_waitq = wait;
+-	return true;
+-}
+-
+-static int io_iter_do_read(struct io_kiocb *req, struct iov_iter *iter)
+-{
+-	if (req->file->f_op->read_iter)
+-		return call_read_iter(req->file, &req->rw.kiocb, iter);
+-	else if (req->file->f_op->read)
+-		return loop_rw_iter(READ, req, iter);
+-	else
+-		return -EINVAL;
+-}
+-
+-static int io_read(struct io_kiocb *req, bool force_nonblock,
+-		   struct io_comp_state *cs)
+-{
+-	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+-	struct kiocb *kiocb = &req->rw.kiocb;
+-	struct iov_iter __iter, *iter = &__iter;
+-	struct iov_iter iter_cp;
+-	struct io_async_rw *rw = req->async_data;
+-	ssize_t io_size, ret, ret2;
+-	bool no_async;
+-
+-	if (rw)
+-		iter = &rw->iter;
+-
+-	ret = io_import_iovec(READ, req, &iovec, iter, !force_nonblock);
+-	if (ret < 0)
+-		return ret;
+-	iter_cp = *iter;
+-	io_size = iov_iter_count(iter);
+-	req->result = io_size;
+-	ret = 0;
+-
+-	/* Ensure we clear previously set non-block flag */
+-	if (!force_nonblock)
+-		kiocb->ki_flags &= ~IOCB_NOWAIT;
+-	else
+-		kiocb->ki_flags |= IOCB_NOWAIT;
+-
+-
+-	/* If the file doesn't support async, just async punt */
+-	no_async = force_nonblock && !io_file_supports_async(req->file, READ);
+-	if (no_async)
+-		goto copy_iov;
+-
+-	ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), io_size);
+-	if (unlikely(ret))
+-		goto out_free;
+-
+-	ret = io_iter_do_read(req, iter);
+-
+-	if (!ret) {
+-		goto done;
+-	} else if (ret == -EIOCBQUEUED) {
+-		ret = 0;
+-		goto out_free;
+-	} else if (ret == -EAGAIN) {
+-		/* IOPOLL retry should happen for io-wq threads */
+-		if (!force_nonblock && !(req->ctx->flags & IORING_SETUP_IOPOLL))
+-			goto done;
+-		/* no retry on NONBLOCK marked file */
+-		if (req->file->f_flags & O_NONBLOCK)
+-			goto done;
+-		/* some cases will consume bytes even on error returns */
+-		*iter = iter_cp;
+-		ret = 0;
+-		goto copy_iov;
+-	} else if (ret < 0) {
+-		/* make sure -ERESTARTSYS -> -EINTR is done */
+-		goto done;
+-	}
+-
+-	/* read it all, or we did blocking attempt. no retry. */
+-	if (!iov_iter_count(iter) || !force_nonblock ||
+-	    (req->file->f_flags & O_NONBLOCK) || !(req->flags & REQ_F_ISREG))
+-		goto done;
+-
+-	io_size -= ret;
+-copy_iov:
+-	ret2 = io_setup_async_rw(req, iovec, inline_vecs, iter, true);
+-	if (ret2) {
+-		ret = ret2;
+-		goto out_free;
+-	}
+-	if (no_async)
+-		return -EAGAIN;
+-	rw = req->async_data;
+-	/* it's copied and will be cleaned with ->io */
+-	iovec = NULL;
+-	/* now use our persistent iterator, if we aren't already */
+-	iter = &rw->iter;
+-retry:
+-	rw->bytes_done += ret;
+-	/* if we can retry, do so with the callbacks armed */
+-	if (!io_rw_should_retry(req)) {
+-		kiocb->ki_flags &= ~IOCB_WAITQ;
+-		return -EAGAIN;
+-	}
+-
+-	/*
+-	 * Now retry read with the IOCB_WAITQ parts set in the iocb. If we
+-	 * get -EIOCBQUEUED, then we'll get a notification when the desired
+-	 * page gets unlocked. We can also get a partial read here, and if we
+-	 * do, then just retry at the new offset.
+-	 */
+-	ret = io_iter_do_read(req, iter);
+-	if (ret == -EIOCBQUEUED) {
+-		ret = 0;
+-		goto out_free;
+-	} else if (ret > 0 && ret < io_size) {
+-		/* we got some bytes, but not all. retry. */
+-		kiocb->ki_flags &= ~IOCB_WAITQ;
+-		goto retry;
+-	}
+-done:
+-	kiocb_done(kiocb, ret, cs);
+-	ret = 0;
+-out_free:
+-	/* it's reportedly faster than delegating the null check to kfree() */
+-	if (iovec)
+-		kfree(iovec);
+-	return ret;
+-}
+-
+-static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	ssize_t ret;
+-
+-	ret = io_prep_rw(req, sqe);
+-	if (ret)
+-		return ret;
+-
+-	if (unlikely(!(req->file->f_mode & FMODE_WRITE)))
+-		return -EBADF;
+-
+-	/* either don't need iovec imported or already have it */
+-	if (!req->async_data)
+-		return 0;
+-	return io_rw_prep_async(req, WRITE);
+-}
+-
+-static int io_write(struct io_kiocb *req, bool force_nonblock,
+-		    struct io_comp_state *cs)
+-{
+-	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+-	struct kiocb *kiocb = &req->rw.kiocb;
+-	struct iov_iter __iter, *iter = &__iter;
+-	struct iov_iter iter_cp;
+-	struct io_async_rw *rw = req->async_data;
+-	ssize_t ret, ret2, io_size;
+-
+-	if (rw)
+-		iter = &rw->iter;
+-
+-	ret = io_import_iovec(WRITE, req, &iovec, iter, !force_nonblock);
+-	if (ret < 0)
+-		return ret;
+-	iter_cp = *iter;
+-	io_size = iov_iter_count(iter);
+-	req->result = io_size;
+-
+-	/* Ensure we clear previously set non-block flag */
+-	if (!force_nonblock)
+-		kiocb->ki_flags &= ~IOCB_NOWAIT;
+-	else
+-		kiocb->ki_flags |= IOCB_NOWAIT;
+-
+-	/* If the file doesn't support async, just async punt */
+-	if (force_nonblock && !io_file_supports_async(req->file, WRITE))
+-		goto copy_iov;
+-
+-	/* file path doesn't support NOWAIT for non-direct_IO */
+-	if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
+-	    (req->flags & REQ_F_ISREG))
+-		goto copy_iov;
+-
+-	ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), io_size);
+-	if (unlikely(ret))
+-		goto out_free;
+-
+-	/*
+-	 * Open-code file_start_write here to grab freeze protection,
+-	 * which will be released by another thread in
+-	 * io_complete_rw().  Fool lockdep by telling it the lock got
+-	 * released so that it doesn't complain about the held lock when
+-	 * we return to userspace.
+-	 */
+-	if (req->flags & REQ_F_ISREG) {
+-		sb_start_write(file_inode(req->file)->i_sb);
+-		__sb_writers_release(file_inode(req->file)->i_sb,
+-					SB_FREEZE_WRITE);
+-	}
+-	kiocb->ki_flags |= IOCB_WRITE;
+-
+-	if (req->file->f_op->write_iter)
+-		ret2 = call_write_iter(req->file, kiocb, iter);
+-	else if (req->file->f_op->write)
+-		ret2 = loop_rw_iter(WRITE, req, iter);
+-	else
+-		ret2 = -EINVAL;
+-
+-	/*
+-	 * Raw bdev writes will return -EOPNOTSUPP for IOCB_NOWAIT. Just
+-	 * retry them without IOCB_NOWAIT.
+-	 */
+-	if (ret2 == -EOPNOTSUPP && (kiocb->ki_flags & IOCB_NOWAIT))
+-		ret2 = -EAGAIN;
+-	/* no retry on NONBLOCK marked file */
+-	if (ret2 == -EAGAIN && (req->file->f_flags & O_NONBLOCK))
+-		goto done;
+-	if (!force_nonblock || ret2 != -EAGAIN) {
+-		/* IOPOLL retry should happen for io-wq threads */
+-		if ((req->ctx->flags & IORING_SETUP_IOPOLL) && ret2 == -EAGAIN)
+-			goto copy_iov;
+-done:
+-		kiocb_done(kiocb, ret2, cs);
+-	} else {
+-copy_iov:
+-		/* some cases will consume bytes even on error returns */
+-		*iter = iter_cp;
+-		ret = io_setup_async_rw(req, iovec, inline_vecs, iter, false);
+-		if (!ret)
+-			return -EAGAIN;
+-	}
+-out_free:
+-	/* it's reportedly faster than delegating the null check to kfree() */
+-	if (iovec)
+-		kfree(iovec);
+-	return ret;
+-}
+-
+-static int __io_splice_prep(struct io_kiocb *req,
+-			    const struct io_uring_sqe *sqe)
+-{
+-	struct io_splice* sp = &req->splice;
+-	unsigned int valid_flags = SPLICE_F_FD_IN_FIXED | SPLICE_F_ALL;
+-
+-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-
+-	sp->file_in = NULL;
+-	sp->len = READ_ONCE(sqe->len);
+-	sp->flags = READ_ONCE(sqe->splice_flags);
+-
+-	if (unlikely(sp->flags & ~valid_flags))
+-		return -EINVAL;
+-
+-	sp->file_in = io_file_get(NULL, req, READ_ONCE(sqe->splice_fd_in),
+-				  (sp->flags & SPLICE_F_FD_IN_FIXED));
+-	if (!sp->file_in)
+-		return -EBADF;
+-	req->flags |= REQ_F_NEED_CLEANUP;
+-
+-	if (!S_ISREG(file_inode(sp->file_in)->i_mode)) {
+-		/*
+-		 * Splice operation will be punted aync, and here need to
+-		 * modify io_wq_work.flags, so initialize io_wq_work firstly.
+-		 */
+-		io_req_init_async(req);
+-		req->work.flags |= IO_WQ_WORK_UNBOUND;
+-	}
+-
+-	return 0;
+-}
+-
+-static int io_tee_prep(struct io_kiocb *req,
+-		       const struct io_uring_sqe *sqe)
+-{
+-	if (READ_ONCE(sqe->splice_off_in) || READ_ONCE(sqe->off))
+-		return -EINVAL;
+-	return __io_splice_prep(req, sqe);
+-}
+-
+-static int io_tee(struct io_kiocb *req, bool force_nonblock)
+-{
+-	struct io_splice *sp = &req->splice;
+-	struct file *in = sp->file_in;
+-	struct file *out = sp->file_out;
+-	unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
+-	long ret = 0;
+-
+-	if (force_nonblock)
+-		return -EAGAIN;
+-	if (sp->len)
+-		ret = do_tee(in, out, sp->len, flags);
+-
+-	io_put_file(req, in, (sp->flags & SPLICE_F_FD_IN_FIXED));
+-	req->flags &= ~REQ_F_NEED_CLEANUP;
+-
+-	if (ret != sp->len)
+-		req_set_fail_links(req);
+-	io_req_complete(req, ret);
+-	return 0;
+-}
+-
+-static int io_splice_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	struct io_splice* sp = &req->splice;
+-
+-	sp->off_in = READ_ONCE(sqe->splice_off_in);
+-	sp->off_out = READ_ONCE(sqe->off);
+-	return __io_splice_prep(req, sqe);
+-}
+-
+-static int io_splice(struct io_kiocb *req, bool force_nonblock)
+-{
+-	struct io_splice *sp = &req->splice;
+-	struct file *in = sp->file_in;
+-	struct file *out = sp->file_out;
+-	unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
+-	loff_t *poff_in, *poff_out;
+-	long ret = 0;
+-
+-	if (force_nonblock)
+-		return -EAGAIN;
+-
+-	poff_in = (sp->off_in == -1) ? NULL : &sp->off_in;
+-	poff_out = (sp->off_out == -1) ? NULL : &sp->off_out;
+-
+-	if (sp->len)
+-		ret = do_splice(in, poff_in, out, poff_out, sp->len, flags);
+-
+-	io_put_file(req, in, (sp->flags & SPLICE_F_FD_IN_FIXED));
+-	req->flags &= ~REQ_F_NEED_CLEANUP;
+-
+-	if (ret != sp->len)
+-		req_set_fail_links(req);
+-	io_req_complete(req, ret);
+-	return 0;
+-}
+-
+-/*
+- * IORING_OP_NOP just posts a completion event, nothing else.
+- */
+-static int io_nop(struct io_kiocb *req, struct io_comp_state *cs)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-
+-	__io_req_complete(req, 0, 0, cs);
+-	return 0;
+-}
+-
+-static int io_prep_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	if (!req->file)
+-		return -EBADF;
+-
+-	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
+-		     sqe->splice_fd_in))
+-		return -EINVAL;
+-
+-	req->sync.flags = READ_ONCE(sqe->fsync_flags);
+-	if (unlikely(req->sync.flags & ~IORING_FSYNC_DATASYNC))
+-		return -EINVAL;
+-
+-	req->sync.off = READ_ONCE(sqe->off);
+-	req->sync.len = READ_ONCE(sqe->len);
+-	return 0;
+-}
+-
+-static int io_fsync(struct io_kiocb *req, bool force_nonblock)
+-{
+-	loff_t end = req->sync.off + req->sync.len;
+-	int ret;
+-
+-	/* fsync always requires a blocking context */
+-	if (force_nonblock)
+-		return -EAGAIN;
+-
+-	ret = vfs_fsync_range(req->file, req->sync.off,
+-				end > 0 ? end : LLONG_MAX,
+-				req->sync.flags & IORING_FSYNC_DATASYNC);
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	io_req_complete(req, ret);
+-	return 0;
+-}
+-
+-static int io_fallocate_prep(struct io_kiocb *req,
+-			     const struct io_uring_sqe *sqe)
+-{
+-	if (sqe->ioprio || sqe->buf_index || sqe->rw_flags ||
+-	    sqe->splice_fd_in)
+-		return -EINVAL;
+-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-
+-	req->sync.off = READ_ONCE(sqe->off);
+-	req->sync.len = READ_ONCE(sqe->addr);
+-	req->sync.mode = READ_ONCE(sqe->len);
+-	return 0;
+-}
+-
+-static int io_fallocate(struct io_kiocb *req, bool force_nonblock)
+-{
+-	int ret;
+-
+-	/* fallocate always requiring blocking context */
+-	if (force_nonblock)
+-		return -EAGAIN;
+-	ret = vfs_fallocate(req->file, req->sync.mode, req->sync.off,
+-				req->sync.len);
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	io_req_complete(req, ret);
+-	return 0;
+-}
+-
+-static int __io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	const char __user *fname;
+-	int ret;
+-
+-	if (unlikely(sqe->ioprio || sqe->buf_index || sqe->splice_fd_in))
+-		return -EINVAL;
+-	if (unlikely(req->flags & REQ_F_FIXED_FILE))
+-		return -EBADF;
+-
+-	/* open.how should be already initialised */
+-	if (!(req->open.how.flags & O_PATH) && force_o_largefile())
+-		req->open.how.flags |= O_LARGEFILE;
+-
+-	req->open.dfd = READ_ONCE(sqe->fd);
+-	fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
+-	req->open.filename = getname(fname);
+-	if (IS_ERR(req->open.filename)) {
+-		ret = PTR_ERR(req->open.filename);
+-		req->open.filename = NULL;
+-		return ret;
+-	}
+-	req->open.nofile = rlimit(RLIMIT_NOFILE);
+-	req->open.ignore_nonblock = false;
+-	req->flags |= REQ_F_NEED_CLEANUP;
+-	return 0;
+-}
+-
+-static int io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	u64 flags, mode;
+-
+-	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+-		return -EINVAL;
+-	mode = READ_ONCE(sqe->len);
+-	flags = READ_ONCE(sqe->open_flags);
+-	req->open.how = build_open_how(flags, mode);
+-	return __io_openat_prep(req, sqe);
+-}
+-
+-static int io_openat2_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	struct open_how __user *how;
+-	size_t len;
+-	int ret;
+-
+-	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+-		return -EINVAL;
+-	how = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+-	len = READ_ONCE(sqe->len);
+-	if (len < OPEN_HOW_SIZE_VER0)
+-		return -EINVAL;
+-
+-	ret = copy_struct_from_user(&req->open.how, sizeof(req->open.how), how,
+-					len);
+-	if (ret)
+-		return ret;
+-
+-	return __io_openat_prep(req, sqe);
+-}
+-
+-static int io_openat2(struct io_kiocb *req, bool force_nonblock)
+-{
+-	struct open_flags op;
+-	struct file *file;
+-	int ret;
+-
+-	if (force_nonblock && !req->open.ignore_nonblock)
+-		return -EAGAIN;
+-
+-	ret = build_open_flags(&req->open.how, &op);
+-	if (ret)
+-		goto err;
+-
+-	ret = __get_unused_fd_flags(req->open.how.flags, req->open.nofile);
+-	if (ret < 0)
+-		goto err;
+-
+-	file = do_filp_open(req->open.dfd, req->open.filename, &op);
+-	if (IS_ERR(file)) {
+-		put_unused_fd(ret);
+-		ret = PTR_ERR(file);
+-		/*
+-		 * A work-around to ensure that /proc/self works that way
+-		 * that it should - if we get -EOPNOTSUPP back, then assume
+-		 * that proc_self_get_link() failed us because we're in async
+-		 * context. We should be safe to retry this from the task
+-		 * itself with force_nonblock == false set, as it should not
+-		 * block on lookup. Would be nice to know this upfront and
+-		 * avoid the async dance, but doesn't seem feasible.
+-		 */
+-		if (ret == -EOPNOTSUPP && io_wq_current_is_worker()) {
+-			req->open.ignore_nonblock = true;
+-			refcount_inc(&req->refs);
+-			io_req_task_queue(req);
+-			return 0;
+-		}
+-	} else {
+-		fsnotify_open(file);
+-		fd_install(ret, file);
+-	}
+-err:
+-	putname(req->open.filename);
+-	req->flags &= ~REQ_F_NEED_CLEANUP;
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	io_req_complete(req, ret);
+-	return 0;
+-}
+-
+-static int io_openat(struct io_kiocb *req, bool force_nonblock)
+-{
+-	return io_openat2(req, force_nonblock);
+-}
+-
+-static int io_remove_buffers_prep(struct io_kiocb *req,
+-				  const struct io_uring_sqe *sqe)
+-{
+-	struct io_provide_buf *p = &req->pbuf;
+-	u64 tmp;
+-
+-	if (sqe->ioprio || sqe->rw_flags || sqe->addr || sqe->len || sqe->off ||
+-	    sqe->splice_fd_in)
+-		return -EINVAL;
+-
+-	tmp = READ_ONCE(sqe->fd);
+-	if (!tmp || tmp > USHRT_MAX)
+-		return -EINVAL;
+-
+-	memset(p, 0, sizeof(*p));
+-	p->nbufs = tmp;
+-	p->bgid = READ_ONCE(sqe->buf_group);
+-	return 0;
+-}
+-
+-static int __io_remove_buffers(struct io_ring_ctx *ctx, struct io_buffer *buf,
+-			       int bgid, unsigned nbufs)
+-{
+-	unsigned i = 0;
+-
+-	/* shouldn't happen */
+-	if (!nbufs)
+-		return 0;
+-
+-	/* the head kbuf is the list itself */
+-	while (!list_empty(&buf->list)) {
+-		struct io_buffer *nxt;
+-
+-		nxt = list_first_entry(&buf->list, struct io_buffer, list);
+-		list_del(&nxt->list);
+-		kfree(nxt);
+-		if (++i == nbufs)
+-			return i;
+-	}
+-	i++;
+-	kfree(buf);
+-	xa_erase(&ctx->io_buffers, bgid);
+-
+-	return i;
+-}
+-
+-static int io_remove_buffers(struct io_kiocb *req, bool force_nonblock,
+-			     struct io_comp_state *cs)
+-{
+-	struct io_provide_buf *p = &req->pbuf;
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_buffer *head;
+-	int ret = 0;
+-
+-	io_ring_submit_lock(ctx, !force_nonblock);
+-
+-	lockdep_assert_held(&ctx->uring_lock);
+-
+-	ret = -ENOENT;
+-	head = xa_load(&ctx->io_buffers, p->bgid);
+-	if (head)
+-		ret = __io_remove_buffers(ctx, head, p->bgid, p->nbufs);
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-
+-	/* need to hold the lock to complete IOPOLL requests */
+-	if (ctx->flags & IORING_SETUP_IOPOLL) {
+-		__io_req_complete(req, ret, 0, cs);
+-		io_ring_submit_unlock(ctx, !force_nonblock);
+-	} else {
+-		io_ring_submit_unlock(ctx, !force_nonblock);
+-		__io_req_complete(req, ret, 0, cs);
+-	}
+-	return 0;
+-}
+-
+-static int io_provide_buffers_prep(struct io_kiocb *req,
+-				   const struct io_uring_sqe *sqe)
+-{
+-	unsigned long size, tmp_check;
+-	struct io_provide_buf *p = &req->pbuf;
+-	u64 tmp;
+-
+-	if (sqe->ioprio || sqe->rw_flags || sqe->splice_fd_in)
+-		return -EINVAL;
+-
+-	tmp = READ_ONCE(sqe->fd);
+-	if (!tmp || tmp > USHRT_MAX)
+-		return -E2BIG;
+-	p->nbufs = tmp;
+-	p->addr = READ_ONCE(sqe->addr);
+-	p->len = READ_ONCE(sqe->len);
+-
+-	if (check_mul_overflow((unsigned long)p->len, (unsigned long)p->nbufs,
+-				&size))
+-		return -EOVERFLOW;
+-	if (check_add_overflow((unsigned long)p->addr, size, &tmp_check))
+-		return -EOVERFLOW;
+-
+-	size = (unsigned long)p->len * p->nbufs;
+-	if (!access_ok(u64_to_user_ptr(p->addr), size))
+-		return -EFAULT;
+-
+-	p->bgid = READ_ONCE(sqe->buf_group);
+-	tmp = READ_ONCE(sqe->off);
+-	if (tmp > USHRT_MAX)
+-		return -E2BIG;
+-	p->bid = tmp;
+-	return 0;
+-}
+-
+-static int io_add_buffers(struct io_provide_buf *pbuf, struct io_buffer **head)
+-{
+-	struct io_buffer *buf;
+-	u64 addr = pbuf->addr;
+-	int i, bid = pbuf->bid;
+-
+-	for (i = 0; i < pbuf->nbufs; i++) {
+-		buf = kmalloc(sizeof(*buf), GFP_KERNEL_ACCOUNT);
+-		if (!buf)
+-			break;
+-
+-		buf->addr = addr;
+-		buf->len = min_t(__u32, pbuf->len, MAX_RW_COUNT);
+-		buf->bid = bid;
+-		addr += pbuf->len;
+-		bid++;
+-		if (!*head) {
+-			INIT_LIST_HEAD(&buf->list);
+-			*head = buf;
+-		} else {
+-			list_add_tail(&buf->list, &(*head)->list);
+-		}
+-		cond_resched();
+-	}
+-
+-	return i ? i : -ENOMEM;
+-}
+-
+-static int io_provide_buffers(struct io_kiocb *req, bool force_nonblock,
+-			      struct io_comp_state *cs)
+-{
+-	struct io_provide_buf *p = &req->pbuf;
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_buffer *head, *list;
+-	int ret = 0;
+-
+-	io_ring_submit_lock(ctx, !force_nonblock);
+-
+-	lockdep_assert_held(&ctx->uring_lock);
+-
+-	list = head = xa_load(&ctx->io_buffers, p->bgid);
+-
+-	ret = io_add_buffers(p, &head);
+-	if (ret >= 0 && !list) {
+-		ret = xa_insert(&ctx->io_buffers, p->bgid, head, GFP_KERNEL);
+-		if (ret < 0)
+-			__io_remove_buffers(ctx, head, p->bgid, -1U);
+-	}
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-
+-	/* need to hold the lock to complete IOPOLL requests */
+-	if (ctx->flags & IORING_SETUP_IOPOLL) {
+-		__io_req_complete(req, ret, 0, cs);
+-		io_ring_submit_unlock(ctx, !force_nonblock);
+-	} else {
+-		io_ring_submit_unlock(ctx, !force_nonblock);
+-		__io_req_complete(req, ret, 0, cs);
+-	}
+-	return 0;
+-}
+-
+-static int io_epoll_ctl_prep(struct io_kiocb *req,
+-			     const struct io_uring_sqe *sqe)
+-{
+-#if defined(CONFIG_EPOLL)
+-	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
+-		return -EINVAL;
+-	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL)))
+-		return -EINVAL;
+-
+-	req->epoll.epfd = READ_ONCE(sqe->fd);
+-	req->epoll.op = READ_ONCE(sqe->len);
+-	req->epoll.fd = READ_ONCE(sqe->off);
+-
+-	if (ep_op_has_event(req->epoll.op)) {
+-		struct epoll_event __user *ev;
+-
+-		ev = u64_to_user_ptr(READ_ONCE(sqe->addr));
+-		if (copy_from_user(&req->epoll.event, ev, sizeof(*ev)))
+-			return -EFAULT;
+-	}
+-
+-	return 0;
+-#else
+-	return -EOPNOTSUPP;
+-#endif
+-}
+-
+-static int io_epoll_ctl(struct io_kiocb *req, bool force_nonblock,
+-			struct io_comp_state *cs)
+-{
+-#if defined(CONFIG_EPOLL)
+-	struct io_epoll *ie = &req->epoll;
+-	int ret;
+-
+-	ret = do_epoll_ctl(ie->epfd, ie->op, ie->fd, &ie->event, force_nonblock);
+-	if (force_nonblock && ret == -EAGAIN)
+-		return -EAGAIN;
+-
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	__io_req_complete(req, ret, 0, cs);
+-	return 0;
+-#else
+-	return -EOPNOTSUPP;
+-#endif
+-}
+-
+-static int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-#if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
+-	if (sqe->ioprio || sqe->buf_index || sqe->off || sqe->splice_fd_in)
+-		return -EINVAL;
+-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-
+-	req->madvise.addr = READ_ONCE(sqe->addr);
+-	req->madvise.len = READ_ONCE(sqe->len);
+-	req->madvise.advice = READ_ONCE(sqe->fadvise_advice);
+-	return 0;
+-#else
+-	return -EOPNOTSUPP;
+-#endif
+-}
+-
+-static int io_madvise(struct io_kiocb *req, bool force_nonblock)
+-{
+-#if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
+-	struct io_madvise *ma = &req->madvise;
+-	int ret;
+-
+-	if (force_nonblock)
+-		return -EAGAIN;
+-
+-	ret = do_madvise(current->mm, ma->addr, ma->len, ma->advice);
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	io_req_complete(req, ret);
+-	return 0;
+-#else
+-	return -EOPNOTSUPP;
+-#endif
+-}
+-
+-static int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	if (sqe->ioprio || sqe->buf_index || sqe->addr || sqe->splice_fd_in)
+-		return -EINVAL;
+-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-
+-	req->fadvise.offset = READ_ONCE(sqe->off);
+-	req->fadvise.len = READ_ONCE(sqe->len);
+-	req->fadvise.advice = READ_ONCE(sqe->fadvise_advice);
+-	return 0;
+-}
+-
+-static int io_fadvise(struct io_kiocb *req, bool force_nonblock)
+-{
+-	struct io_fadvise *fa = &req->fadvise;
+-	int ret;
+-
+-	if (force_nonblock) {
+-		switch (fa->advice) {
+-		case POSIX_FADV_NORMAL:
+-		case POSIX_FADV_RANDOM:
+-		case POSIX_FADV_SEQUENTIAL:
+-			break;
+-		default:
+-			return -EAGAIN;
+-		}
+-	}
+-
+-	ret = vfs_fadvise(req->file, fa->offset, fa->len, fa->advice);
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	io_req_complete(req, ret);
+-	return 0;
+-}
+-
+-static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL)))
+-		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
+-		return -EINVAL;
+-	if (req->flags & REQ_F_FIXED_FILE)
+-		return -EBADF;
+-
+-	req->statx.dfd = READ_ONCE(sqe->fd);
+-	req->statx.mask = READ_ONCE(sqe->len);
+-	req->statx.filename = u64_to_user_ptr(READ_ONCE(sqe->addr));
+-	req->statx.buffer = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+-	req->statx.flags = READ_ONCE(sqe->statx_flags);
+-
+-	return 0;
+-}
+-
+-static int io_statx(struct io_kiocb *req, bool force_nonblock)
+-{
+-	struct io_statx *ctx = &req->statx;
+-	int ret;
+-
+-	if (force_nonblock)
+-		return -EAGAIN;
+-
+-	ret = do_statx(ctx->dfd, ctx->filename, ctx->flags, ctx->mask,
+-		       ctx->buffer);
+-
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	io_req_complete(req, ret);
+-	return 0;
+-}
+-
+-static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	/*
+-	 * If we queue this for async, it must not be cancellable. That would
+-	 * leave the 'file' in an undeterminate state, and here need to modify
+-	 * io_wq_work.flags, so initialize io_wq_work firstly.
+-	 */
+-	io_req_init_async(req);
+-
+-	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+-		return -EINVAL;
+-	if (sqe->ioprio || sqe->off || sqe->addr || sqe->len ||
+-	    sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
+-		return -EINVAL;
+-	if (req->flags & REQ_F_FIXED_FILE)
+-		return -EBADF;
+-
+-	req->close.fd = READ_ONCE(sqe->fd);
+-	if ((req->file && req->file->f_op == &io_uring_fops))
+-		return -EBADF;
+-
+-	req->close.put_file = NULL;
+-	return 0;
+-}
+-
+-static int io_close(struct io_kiocb *req, bool force_nonblock,
+-		    struct io_comp_state *cs)
+-{
+-	struct io_close *close = &req->close;
+-	int ret;
+-
+-	/* might be already done during nonblock submission */
+-	if (!close->put_file) {
+-		ret = __close_fd_get_file(close->fd, &close->put_file);
+-		if (ret < 0)
+-			return (ret == -ENOENT) ? -EBADF : ret;
+-	}
+-
+-	/* if the file has a flush method, be safe and punt to async */
+-	if (close->put_file->f_op->flush && force_nonblock) {
+-		/* not safe to cancel at this point */
+-		req->work.flags |= IO_WQ_WORK_NO_CANCEL;
+-		/* was never set, but play safe */
+-		req->flags &= ~REQ_F_NOWAIT;
+-		/* avoid grabbing files - we don't need the files */
+-		req->flags |= REQ_F_NO_FILE_TABLE;
+-		return -EAGAIN;
+-	}
+-
+-	/* No ->flush() or already async, safely close from here */
+-	ret = filp_close(close->put_file, req->work.identity->files);
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	fput(close->put_file);
+-	close->put_file = NULL;
+-	__io_req_complete(req, ret, 0, cs);
+-	return 0;
+-}
+-
+-static int io_prep_sfr(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	if (!req->file)
+-		return -EBADF;
+-
+-	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
+-		     sqe->splice_fd_in))
+-		return -EINVAL;
+-
+-	req->sync.off = READ_ONCE(sqe->off);
+-	req->sync.len = READ_ONCE(sqe->len);
+-	req->sync.flags = READ_ONCE(sqe->sync_range_flags);
+-	return 0;
+-}
+-
+-static int io_sync_file_range(struct io_kiocb *req, bool force_nonblock)
+-{
+-	int ret;
+-
+-	/* sync_file_range always requires a blocking context */
+-	if (force_nonblock)
+-		return -EAGAIN;
+-
+-	ret = sync_file_range(req->file, req->sync.off, req->sync.len,
+-				req->sync.flags);
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	io_req_complete(req, ret);
+-	return 0;
+-}
+-
+-#if defined(CONFIG_NET)
+-static int io_setup_async_msg(struct io_kiocb *req,
+-			      struct io_async_msghdr *kmsg)
+-{
+-	struct io_async_msghdr *async_msg = req->async_data;
+-
+-	if (async_msg)
+-		return -EAGAIN;
+-	if (io_alloc_async_data(req)) {
+-		if (kmsg->iov != kmsg->fast_iov)
+-			kfree(kmsg->iov);
+-		return -ENOMEM;
+-	}
+-	async_msg = req->async_data;
+-	req->flags |= REQ_F_NEED_CLEANUP;
+-	memcpy(async_msg, kmsg, sizeof(*kmsg));
+-	return -EAGAIN;
+-}
+-
+-static int io_sendmsg_copy_hdr(struct io_kiocb *req,
+-			       struct io_async_msghdr *iomsg)
+-{
+-	iomsg->iov = iomsg->fast_iov;
+-	iomsg->msg.msg_name = &iomsg->addr;
+-	return sendmsg_copy_msghdr(&iomsg->msg, req->sr_msg.umsg,
+-				   req->sr_msg.msg_flags, &iomsg->iov);
+-}
+-
+-static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	struct io_async_msghdr *async_msg = req->async_data;
+-	struct io_sr_msg *sr = &req->sr_msg;
+-	int ret;
+-
+-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-	if (unlikely(sqe->addr2 || sqe->splice_fd_in || sqe->ioprio))
+-		return -EINVAL;
+-
+-	sr->msg_flags = READ_ONCE(sqe->msg_flags);
+-	sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
+-	sr->len = READ_ONCE(sqe->len);
+-
+-#ifdef CONFIG_COMPAT
+-	if (req->ctx->compat)
+-		sr->msg_flags |= MSG_CMSG_COMPAT;
+-#endif
+-
+-	if (!async_msg || !io_op_defs[req->opcode].needs_async_data)
+-		return 0;
+-	ret = io_sendmsg_copy_hdr(req, async_msg);
+-	if (!ret)
+-		req->flags |= REQ_F_NEED_CLEANUP;
+-	return ret;
+-}
+-
+-static int io_sendmsg(struct io_kiocb *req, bool force_nonblock,
+-		      struct io_comp_state *cs)
+-{
+-	struct io_async_msghdr iomsg, *kmsg;
+-	struct socket *sock;
+-	unsigned flags;
+-	int min_ret = 0;
+-	int ret;
+-
+-	sock = sock_from_file(req->file, &ret);
+-	if (unlikely(!sock))
+-		return ret;
+-
+-	if (req->async_data) {
+-		kmsg = req->async_data;
+-		kmsg->msg.msg_name = &kmsg->addr;
+-		/* if iov is set, it's allocated already */
+-		if (!kmsg->iov)
+-			kmsg->iov = kmsg->fast_iov;
+-		kmsg->msg.msg_iter.iov = kmsg->iov;
+-	} else {
+-		ret = io_sendmsg_copy_hdr(req, &iomsg);
+-		if (ret)
+-			return ret;
+-		kmsg = &iomsg;
+-	}
+-
+-	flags = req->sr_msg.msg_flags | MSG_NOSIGNAL;
+-	if (flags & MSG_DONTWAIT)
+-		req->flags |= REQ_F_NOWAIT;
+-	else if (force_nonblock)
+-		flags |= MSG_DONTWAIT;
+-
+-	if (flags & MSG_WAITALL)
+-		min_ret = iov_iter_count(&kmsg->msg.msg_iter);
+-
+-	ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags);
+-	if (force_nonblock && ret == -EAGAIN)
+-		return io_setup_async_msg(req, kmsg);
+-	if (ret == -ERESTARTSYS)
+-		ret = -EINTR;
+-
+-	if (kmsg->iov != kmsg->fast_iov)
+-		kfree(kmsg->iov);
+-	req->flags &= ~REQ_F_NEED_CLEANUP;
+-	if (ret < min_ret)
+-		req_set_fail_links(req);
+-	__io_req_complete(req, ret, 0, cs);
+-	return 0;
+-}
+-
+-static int io_send(struct io_kiocb *req, bool force_nonblock,
+-		   struct io_comp_state *cs)
+-{
+-	struct io_sr_msg *sr = &req->sr_msg;
+-	struct msghdr msg;
+-	struct iovec iov;
+-	struct socket *sock;
+-	unsigned flags;
+-	int min_ret = 0;
+-	int ret;
+-
+-	sock = sock_from_file(req->file, &ret);
+-	if (unlikely(!sock))
+-		return ret;
+-
+-	ret = import_single_range(WRITE, sr->buf, sr->len, &iov, &msg.msg_iter);
+-	if (unlikely(ret))
+-		return ret;
+-
+-	msg.msg_name = NULL;
+-	msg.msg_control = NULL;
+-	msg.msg_controllen = 0;
+-	msg.msg_namelen = 0;
+-
+-	flags = req->sr_msg.msg_flags | MSG_NOSIGNAL;
+-	if (flags & MSG_DONTWAIT)
+-		req->flags |= REQ_F_NOWAIT;
+-	else if (force_nonblock)
+-		flags |= MSG_DONTWAIT;
+-
+-	if (flags & MSG_WAITALL)
+-		min_ret = iov_iter_count(&msg.msg_iter);
+-
+-	msg.msg_flags = flags;
+-	ret = sock_sendmsg(sock, &msg);
+-	if (force_nonblock && ret == -EAGAIN)
+-		return -EAGAIN;
+-	if (ret == -ERESTARTSYS)
+-		ret = -EINTR;
+-
+-	if (ret < min_ret)
+-		req_set_fail_links(req);
+-	__io_req_complete(req, ret, 0, cs);
+-	return 0;
+-}
+-
+-static int __io_recvmsg_copy_hdr(struct io_kiocb *req,
+-				 struct io_async_msghdr *iomsg)
+-{
+-	struct io_sr_msg *sr = &req->sr_msg;
+-	struct iovec __user *uiov;
+-	size_t iov_len;
+-	int ret;
+-
+-	ret = __copy_msghdr_from_user(&iomsg->msg, sr->umsg,
+-					&iomsg->uaddr, &uiov, &iov_len);
+-	if (ret)
+-		return ret;
+-
+-	if (req->flags & REQ_F_BUFFER_SELECT) {
+-		if (iov_len > 1)
+-			return -EINVAL;
+-		if (copy_from_user(iomsg->iov, uiov, sizeof(*uiov)))
+-			return -EFAULT;
+-		sr->len = iomsg->iov[0].iov_len;
+-		iov_iter_init(&iomsg->msg.msg_iter, READ, iomsg->iov, 1,
+-				sr->len);
+-		iomsg->iov = NULL;
+-	} else {
+-		ret = __import_iovec(READ, uiov, iov_len, UIO_FASTIOV,
+-				     &iomsg->iov, &iomsg->msg.msg_iter,
+-				     false);
+-		if (ret > 0)
+-			ret = 0;
+-	}
+-
+-	return ret;
+-}
+-
+-#ifdef CONFIG_COMPAT
+-static int __io_compat_recvmsg_copy_hdr(struct io_kiocb *req,
+-					struct io_async_msghdr *iomsg)
+-{
+-	struct compat_msghdr __user *msg_compat;
+-	struct io_sr_msg *sr = &req->sr_msg;
+-	struct compat_iovec __user *uiov;
+-	compat_uptr_t ptr;
+-	compat_size_t len;
+-	int ret;
+-
+-	msg_compat = (struct compat_msghdr __user *) sr->umsg;
+-	ret = __get_compat_msghdr(&iomsg->msg, msg_compat, &iomsg->uaddr,
+-					&ptr, &len);
+-	if (ret)
+-		return ret;
+-
+-	uiov = compat_ptr(ptr);
+-	if (req->flags & REQ_F_BUFFER_SELECT) {
+-		compat_ssize_t clen;
+-
+-		if (len > 1)
+-			return -EINVAL;
+-		if (!access_ok(uiov, sizeof(*uiov)))
+-			return -EFAULT;
+-		if (__get_user(clen, &uiov->iov_len))
+-			return -EFAULT;
+-		if (clen < 0)
+-			return -EINVAL;
+-		sr->len = clen;
+-		iomsg->iov[0].iov_len = clen;
+-		iomsg->iov = NULL;
+-	} else {
+-		ret = __import_iovec(READ, (struct iovec __user *)uiov, len,
+-				   UIO_FASTIOV, &iomsg->iov,
+-				   &iomsg->msg.msg_iter, true);
+-		if (ret < 0)
+-			return ret;
+-	}
+-
+-	return 0;
+-}
+-#endif
+-
+-static int io_recvmsg_copy_hdr(struct io_kiocb *req,
+-			       struct io_async_msghdr *iomsg)
+-{
+-	iomsg->msg.msg_name = &iomsg->addr;
+-	iomsg->iov = iomsg->fast_iov;
+-
+-#ifdef CONFIG_COMPAT
+-	if (req->ctx->compat)
+-		return __io_compat_recvmsg_copy_hdr(req, iomsg);
+-#endif
+-
+-	return __io_recvmsg_copy_hdr(req, iomsg);
+-}
+-
+-static struct io_buffer *io_recv_buffer_select(struct io_kiocb *req,
+-					       bool needs_lock)
+-{
+-	struct io_sr_msg *sr = &req->sr_msg;
+-	struct io_buffer *kbuf;
+-
+-	kbuf = io_buffer_select(req, &sr->len, sr->bgid, sr->kbuf, needs_lock);
+-	if (IS_ERR(kbuf))
+-		return kbuf;
+-
+-	sr->kbuf = kbuf;
+-	req->flags |= REQ_F_BUFFER_SELECTED;
+-	return kbuf;
+-}
+-
+-static inline unsigned int io_put_recv_kbuf(struct io_kiocb *req)
+-{
+-	return io_put_kbuf(req, req->sr_msg.kbuf);
+-}
+-
+-static int io_recvmsg_prep(struct io_kiocb *req,
+-			   const struct io_uring_sqe *sqe)
+-{
+-	struct io_async_msghdr *async_msg = req->async_data;
+-	struct io_sr_msg *sr = &req->sr_msg;
+-	int ret;
+-
+-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-	if (unlikely(sqe->addr2 || sqe->splice_fd_in || sqe->ioprio))
+-		return -EINVAL;
+-
+-	sr->msg_flags = READ_ONCE(sqe->msg_flags);
+-	sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
+-	sr->len = READ_ONCE(sqe->len);
+-	sr->bgid = READ_ONCE(sqe->buf_group);
+-
+-#ifdef CONFIG_COMPAT
+-	if (req->ctx->compat)
+-		sr->msg_flags |= MSG_CMSG_COMPAT;
+-#endif
+-
+-	if (!async_msg || !io_op_defs[req->opcode].needs_async_data)
+-		return 0;
+-	ret = io_recvmsg_copy_hdr(req, async_msg);
+-	if (!ret)
+-		req->flags |= REQ_F_NEED_CLEANUP;
+-	return ret;
+-}
+-
+-static int io_recvmsg(struct io_kiocb *req, bool force_nonblock,
+-		      struct io_comp_state *cs)
+-{
+-	struct io_async_msghdr iomsg, *kmsg;
+-	struct socket *sock;
+-	struct io_buffer *kbuf;
+-	unsigned flags;
+-	int min_ret = 0;
+-	int ret, cflags = 0;
+-
+-	sock = sock_from_file(req->file, &ret);
+-	if (unlikely(!sock))
+-		return ret;
+-
+-	if (req->async_data) {
+-		kmsg = req->async_data;
+-		kmsg->msg.msg_name = &kmsg->addr;
+-		/* if iov is set, it's allocated already */
+-		if (!kmsg->iov)
+-			kmsg->iov = kmsg->fast_iov;
+-		kmsg->msg.msg_iter.iov = kmsg->iov;
+-	} else {
+-		ret = io_recvmsg_copy_hdr(req, &iomsg);
+-		if (ret)
+-			return ret;
+-		kmsg = &iomsg;
+-	}
+-
+-	if (req->flags & REQ_F_BUFFER_SELECT) {
+-		kbuf = io_recv_buffer_select(req, !force_nonblock);
+-		if (IS_ERR(kbuf))
+-			return PTR_ERR(kbuf);
+-		kmsg->fast_iov[0].iov_base = u64_to_user_ptr(kbuf->addr);
+-		iov_iter_init(&kmsg->msg.msg_iter, READ, kmsg->iov,
+-				1, req->sr_msg.len);
+-	}
+-
+-	flags = req->sr_msg.msg_flags | MSG_NOSIGNAL;
+-	if (flags & MSG_DONTWAIT)
+-		req->flags |= REQ_F_NOWAIT;
+-	else if (force_nonblock)
+-		flags |= MSG_DONTWAIT;
+-
+-	if (flags & MSG_WAITALL)
+-		min_ret = iov_iter_count(&kmsg->msg.msg_iter);
+-
+-	ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.umsg,
+-					kmsg->uaddr, flags);
+-	if (force_nonblock && ret == -EAGAIN)
+-		return io_setup_async_msg(req, kmsg);
+-	if (ret == -ERESTARTSYS)
+-		ret = -EINTR;
+-
+-	if (req->flags & REQ_F_BUFFER_SELECTED)
+-		cflags = io_put_recv_kbuf(req);
+-	if (kmsg->iov != kmsg->fast_iov)
+-		kfree(kmsg->iov);
+-	req->flags &= ~REQ_F_NEED_CLEANUP;
+-	if (ret < min_ret || ((flags & MSG_WAITALL) && (kmsg->msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
+-		req_set_fail_links(req);
+-	__io_req_complete(req, ret, cflags, cs);
+-	return 0;
+-}
+-
+-static int io_recv(struct io_kiocb *req, bool force_nonblock,
+-		   struct io_comp_state *cs)
+-{
+-	struct io_buffer *kbuf;
+-	struct io_sr_msg *sr = &req->sr_msg;
+-	struct msghdr msg;
+-	void __user *buf = sr->buf;
+-	struct socket *sock;
+-	struct iovec iov;
+-	unsigned flags;
+-	int min_ret = 0;
+-	int ret, cflags = 0;
+-
+-	sock = sock_from_file(req->file, &ret);
+-	if (unlikely(!sock))
+-		return ret;
+-
+-	if (req->flags & REQ_F_BUFFER_SELECT) {
+-		kbuf = io_recv_buffer_select(req, !force_nonblock);
+-		if (IS_ERR(kbuf))
+-			return PTR_ERR(kbuf);
+-		buf = u64_to_user_ptr(kbuf->addr);
+-	}
+-
+-	ret = import_single_range(READ, buf, sr->len, &iov, &msg.msg_iter);
+-	if (unlikely(ret))
+-		goto out_free;
+-
+-	msg.msg_name = NULL;
+-	msg.msg_control = NULL;
+-	msg.msg_controllen = 0;
+-	msg.msg_namelen = 0;
+-	msg.msg_iocb = NULL;
+-	msg.msg_flags = 0;
+-
+-	flags = req->sr_msg.msg_flags | MSG_NOSIGNAL;
+-	if (flags & MSG_DONTWAIT)
+-		req->flags |= REQ_F_NOWAIT;
+-	else if (force_nonblock)
+-		flags |= MSG_DONTWAIT;
+-
+-	if (flags & MSG_WAITALL)
+-		min_ret = iov_iter_count(&msg.msg_iter);
+-
+-	ret = sock_recvmsg(sock, &msg, flags);
+-	if (force_nonblock && ret == -EAGAIN)
+-		return -EAGAIN;
+-	if (ret == -ERESTARTSYS)
+-		ret = -EINTR;
+-out_free:
+-	if (req->flags & REQ_F_BUFFER_SELECTED)
+-		cflags = io_put_recv_kbuf(req);
+-	if (ret < min_ret || ((flags & MSG_WAITALL) && (msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
+-		req_set_fail_links(req);
+-	__io_req_complete(req, ret, cflags, cs);
+-	return 0;
+-}
+-
+-static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	struct io_accept *accept = &req->accept;
+-
+-	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+-		return -EINVAL;
+-	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->splice_fd_in)
+-		return -EINVAL;
+-
+-	accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
+-	accept->addr_len = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+-	accept->flags = READ_ONCE(sqe->accept_flags);
+-	accept->nofile = rlimit(RLIMIT_NOFILE);
+-	return 0;
+-}
+-
+-static int io_accept(struct io_kiocb *req, bool force_nonblock,
+-		     struct io_comp_state *cs)
+-{
+-	struct io_accept *accept = &req->accept;
+-	unsigned int file_flags = force_nonblock ? O_NONBLOCK : 0;
+-	int ret;
+-
+-	if (req->file->f_flags & O_NONBLOCK)
+-		req->flags |= REQ_F_NOWAIT;
+-
+-	ret = __sys_accept4_file(req->file, file_flags, accept->addr,
+-					accept->addr_len, accept->flags,
+-					accept->nofile);
+-	if (ret == -EAGAIN && force_nonblock)
+-		return -EAGAIN;
+-	if (ret < 0) {
+-		if (ret == -ERESTARTSYS)
+-			ret = -EINTR;
+-		req_set_fail_links(req);
+-	}
+-	__io_req_complete(req, ret, 0, cs);
+-	return 0;
+-}
+-
+-static int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	struct io_connect *conn = &req->connect;
+-	struct io_async_connect *io = req->async_data;
+-
+-	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+-		return -EINVAL;
+-	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags ||
+-	    sqe->splice_fd_in)
+-		return -EINVAL;
+-
+-	conn->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
+-	conn->addr_len =  READ_ONCE(sqe->addr2);
+-
+-	if (!io)
+-		return 0;
+-
+-	return move_addr_to_kernel(conn->addr, conn->addr_len,
+-					&io->address);
+-}
+-
+-static int io_connect(struct io_kiocb *req, bool force_nonblock,
+-		      struct io_comp_state *cs)
+-{
+-	struct io_async_connect __io, *io;
+-	unsigned file_flags;
+-	int ret;
+-
+-	if (req->async_data) {
+-		io = req->async_data;
+-	} else {
+-		ret = move_addr_to_kernel(req->connect.addr,
+-						req->connect.addr_len,
+-						&__io.address);
+-		if (ret)
+-			goto out;
+-		io = &__io;
+-	}
+-
+-	file_flags = force_nonblock ? O_NONBLOCK : 0;
+-
+-	ret = __sys_connect_file(req->file, &io->address,
+-					req->connect.addr_len, file_flags);
+-	if ((ret == -EAGAIN || ret == -EINPROGRESS) && force_nonblock) {
+-		if (req->async_data)
+-			return -EAGAIN;
+-		if (io_alloc_async_data(req)) {
+-			ret = -ENOMEM;
+-			goto out;
+-		}
+-		io = req->async_data;
+-		memcpy(req->async_data, &__io, sizeof(__io));
+-		return -EAGAIN;
+-	}
+-	if (ret == -ERESTARTSYS)
+-		ret = -EINTR;
+-out:
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	__io_req_complete(req, ret, 0, cs);
+-	return 0;
+-}
+-#else /* !CONFIG_NET */
+-static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	return -EOPNOTSUPP;
+-}
+-
+-static int io_sendmsg(struct io_kiocb *req, bool force_nonblock,
+-		      struct io_comp_state *cs)
+-{
+-	return -EOPNOTSUPP;
+-}
+-
+-static int io_send(struct io_kiocb *req, bool force_nonblock,
+-		   struct io_comp_state *cs)
+-{
+-	return -EOPNOTSUPP;
+-}
+-
+-static int io_recvmsg_prep(struct io_kiocb *req,
+-			   const struct io_uring_sqe *sqe)
+-{
+-	return -EOPNOTSUPP;
+-}
+-
+-static int io_recvmsg(struct io_kiocb *req, bool force_nonblock,
+-		      struct io_comp_state *cs)
+-{
+-	return -EOPNOTSUPP;
+-}
+-
+-static int io_recv(struct io_kiocb *req, bool force_nonblock,
+-		   struct io_comp_state *cs)
+-{
+-	return -EOPNOTSUPP;
+-}
+-
+-static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	return -EOPNOTSUPP;
+-}
+-
+-static int io_accept(struct io_kiocb *req, bool force_nonblock,
+-		     struct io_comp_state *cs)
+-{
+-	return -EOPNOTSUPP;
+-}
+-
+-static int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	return -EOPNOTSUPP;
+-}
+-
+-static int io_connect(struct io_kiocb *req, bool force_nonblock,
+-		      struct io_comp_state *cs)
+-{
+-	return -EOPNOTSUPP;
+-}
+-#endif /* CONFIG_NET */
+-
+-struct io_poll_table {
+-	struct poll_table_struct pt;
+-	struct io_kiocb *req;
+-	int nr_entries;
+-	int error;
+-};
+-
+-static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
+-			   __poll_t mask, task_work_func_t func)
+-{
+-	bool twa_signal_ok;
+-	int ret;
+-
+-	/* for instances that support it check for an event match first: */
+-	if (mask && !(mask & poll->events))
+-		return 0;
+-
+-	trace_io_uring_task_add(req->ctx, req->opcode, req->user_data, mask);
+-
+-	list_del_init(&poll->wait.entry);
+-
+-	req->result = mask;
+-	init_task_work(&req->task_work, func);
+-	percpu_ref_get(&req->ctx->refs);
+-
+-	/*
+-	 * If we using the signalfd wait_queue_head for this wakeup, then
+-	 * it's not safe to use TWA_SIGNAL as we could be recursing on the
+-	 * tsk->sighand->siglock on doing the wakeup. Should not be needed
+-	 * either, as the normal wakeup will suffice.
+-	 */
+-	twa_signal_ok = (poll->head != &req->task->sighand->signalfd_wqh);
+-
+-	/*
+-	 * If this fails, then the task is exiting. When a task exits, the
+-	 * work gets canceled, so just cancel this request as well instead
+-	 * of executing it. We can't safely execute it anyway, as we may not
+-	 * have the needed state needed for it anyway.
+-	 */
+-	ret = io_req_task_work_add(req, twa_signal_ok);
+-	if (unlikely(ret)) {
+-		struct task_struct *tsk;
+-
+-		WRITE_ONCE(poll->canceled, true);
+-		tsk = io_wq_get_task(req->ctx->io_wq);
+-		task_work_add(tsk, &req->task_work, TWA_NONE);
+-		wake_up_process(tsk);
+-	}
+-	return 1;
+-}
+-
+-static bool io_poll_rewait(struct io_kiocb *req, struct io_poll_iocb *poll)
+-	__acquires(&req->ctx->completion_lock)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	if (!req->result && !READ_ONCE(poll->canceled)) {
+-		struct poll_table_struct pt = { ._key = poll->events };
+-
+-		req->result = vfs_poll(req->file, &pt) & poll->events;
+-	}
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	if (!req->result && !READ_ONCE(poll->canceled)) {
+-		add_wait_queue(poll->head, &poll->wait);
+-		return true;
+-	}
+-
+-	return false;
+-}
+-
+-static struct io_poll_iocb *io_poll_get_double(struct io_kiocb *req)
+-{
+-	/* pure poll stashes this in ->async_data, poll driven retry elsewhere */
+-	if (req->opcode == IORING_OP_POLL_ADD)
+-		return req->async_data;
+-	return req->apoll->double_poll;
+-}
+-
+-static struct io_poll_iocb *io_poll_get_single(struct io_kiocb *req)
+-{
+-	if (req->opcode == IORING_OP_POLL_ADD)
+-		return &req->poll;
+-	return &req->apoll->poll;
+-}
+-
+-static void io_poll_remove_double(struct io_kiocb *req)
+-{
+-	struct io_poll_iocb *poll = io_poll_get_double(req);
+-
+-	lockdep_assert_held(&req->ctx->completion_lock);
+-
+-	if (poll && poll->head) {
+-		struct wait_queue_head *head = poll->head;
+-
+-		spin_lock(&head->lock);
+-		list_del_init(&poll->wait.entry);
+-		if (poll->wait.private)
+-			refcount_dec(&req->refs);
+-		poll->head = NULL;
+-		spin_unlock(&head->lock);
+-	}
+-}
+-
+-static void io_poll_complete(struct io_kiocb *req, __poll_t mask, int error)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	io_poll_remove_double(req);
+-	req->poll.done = true;
+-	io_cqring_fill_event(req, error ? error : mangle_poll(mask));
+-	io_commit_cqring(ctx);
+-}
+-
+-static void io_poll_task_func(struct callback_head *cb)
+-{
+-	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_kiocb *nxt;
+-
+-	if (io_poll_rewait(req, &req->poll)) {
+-		spin_unlock_irq(&ctx->completion_lock);
+-	} else {
+-		hash_del(&req->hash_node);
+-		io_poll_complete(req, req->result, 0);
+-		spin_unlock_irq(&ctx->completion_lock);
+-
+-		nxt = io_put_req_find_next(req);
+-		io_cqring_ev_posted(ctx);
+-		if (nxt)
+-			__io_req_task_submit(nxt);
+-	}
+-
+-	percpu_ref_put(&ctx->refs);
+-}
+-
+-static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
+-			       int sync, void *key)
+-{
+-	struct io_kiocb *req = wait->private;
+-	struct io_poll_iocb *poll = io_poll_get_single(req);
+-	__poll_t mask = key_to_poll(key);
+-
+-	/* for instances that support it check for an event match first: */
+-	if (mask && !(mask & poll->events))
+-		return 0;
+-
+-	list_del_init(&wait->entry);
+-
+-	if (poll && poll->head) {
+-		bool done;
+-
+-		spin_lock(&poll->head->lock);
+-		done = list_empty(&poll->wait.entry);
+-		if (!done)
+-			list_del_init(&poll->wait.entry);
+-		/* make sure double remove sees this as being gone */
+-		wait->private = NULL;
+-		spin_unlock(&poll->head->lock);
+-		if (!done) {
+-			/* use wait func handler, so it matches the rq type */
+-			poll->wait.func(&poll->wait, mode, sync, key);
+-		}
+-	}
+-	refcount_dec(&req->refs);
+-	return 1;
+-}
+-
+-static void io_init_poll_iocb(struct io_poll_iocb *poll, __poll_t events,
+-			      wait_queue_func_t wake_func)
+-{
+-	poll->head = NULL;
+-	poll->done = false;
+-	poll->canceled = false;
+-	poll->events = events;
+-	INIT_LIST_HEAD(&poll->wait.entry);
+-	init_waitqueue_func_entry(&poll->wait, wake_func);
+-}
+-
+-static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+-			    struct wait_queue_head *head,
+-			    struct io_poll_iocb **poll_ptr)
+-{
+-	struct io_kiocb *req = pt->req;
+-
+-	/*
+-	 * The file being polled uses multiple waitqueues for poll handling
+-	 * (e.g. one for read, one for write). Setup a separate io_poll_iocb
+-	 * if this happens.
+-	 */
+-	if (unlikely(pt->nr_entries)) {
+-		struct io_poll_iocb *poll_one = poll;
+-
+-		/* already have a 2nd entry, fail a third attempt */
+-		if (*poll_ptr) {
+-			pt->error = -EINVAL;
+-			return;
+-		}
+-		/* double add on the same waitqueue head, ignore */
+-		if (poll->head == head)
+-			return;
+-		poll = kmalloc(sizeof(*poll), GFP_ATOMIC);
+-		if (!poll) {
+-			pt->error = -ENOMEM;
+-			return;
+-		}
+-		io_init_poll_iocb(poll, poll_one->events, io_poll_double_wake);
+-		refcount_inc(&req->refs);
+-		poll->wait.private = req;
+-		*poll_ptr = poll;
+-	}
+-
+-	pt->nr_entries++;
+-	poll->head = head;
+-
+-	if (poll->events & EPOLLEXCLUSIVE)
+-		add_wait_queue_exclusive(head, &poll->wait);
+-	else
+-		add_wait_queue(head, &poll->wait);
+-}
+-
+-static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
+-			       struct poll_table_struct *p)
+-{
+-	struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
+-	struct async_poll *apoll = pt->req->apoll;
+-
+-	__io_queue_proc(&apoll->poll, pt, head, &apoll->double_poll);
+-}
+-
+-static void io_async_task_func(struct callback_head *cb)
+-{
+-	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+-	struct async_poll *apoll = req->apoll;
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	trace_io_uring_task_run(req->ctx, req->opcode, req->user_data);
+-
+-	if (io_poll_rewait(req, &apoll->poll)) {
+-		spin_unlock_irq(&ctx->completion_lock);
+-		percpu_ref_put(&ctx->refs);
+-		return;
+-	}
+-
+-	/* If req is still hashed, it cannot have been canceled. Don't check. */
+-	if (hash_hashed(&req->hash_node))
+-		hash_del(&req->hash_node);
+-
+-	io_poll_remove_double(req);
+-	spin_unlock_irq(&ctx->completion_lock);
+-
+-	if (!READ_ONCE(apoll->poll.canceled))
+-		__io_req_task_submit(req);
+-	else
+-		__io_req_task_cancel(req, -ECANCELED);
+-
+-	percpu_ref_put(&ctx->refs);
+-	kfree(apoll->double_poll);
+-	kfree(apoll);
+-}
+-
+-static int io_async_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+-			void *key)
+-{
+-	struct io_kiocb *req = wait->private;
+-	struct io_poll_iocb *poll = &req->apoll->poll;
+-
+-	trace_io_uring_poll_wake(req->ctx, req->opcode, req->user_data,
+-					key_to_poll(key));
+-
+-	return __io_async_wake(req, poll, key_to_poll(key), io_async_task_func);
+-}
+-
+-static void io_poll_req_insert(struct io_kiocb *req)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct hlist_head *list;
+-
+-	list = &ctx->cancel_hash[hash_long(req->user_data, ctx->cancel_hash_bits)];
+-	hlist_add_head(&req->hash_node, list);
+-}
+-
+-static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
+-				      struct io_poll_iocb *poll,
+-				      struct io_poll_table *ipt, __poll_t mask,
+-				      wait_queue_func_t wake_func)
+-	__acquires(&ctx->completion_lock)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	bool cancel = false;
+-
+-	if (req->file->f_op->may_pollfree) {
+-		spin_lock_irq(&ctx->completion_lock);
+-		return -EOPNOTSUPP;
+-	}
+-
+-	INIT_HLIST_NODE(&req->hash_node);
+-	io_init_poll_iocb(poll, mask, wake_func);
+-	poll->file = req->file;
+-	poll->wait.private = req;
+-
+-	ipt->pt._key = mask;
+-	ipt->req = req;
+-	ipt->error = 0;
+-	ipt->nr_entries = 0;
+-
+-	mask = vfs_poll(req->file, &ipt->pt) & poll->events;
+-	if (unlikely(!ipt->nr_entries) && !ipt->error)
+-		ipt->error = -EINVAL;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	if (ipt->error)
+-		io_poll_remove_double(req);
+-	if (likely(poll->head)) {
+-		spin_lock(&poll->head->lock);
+-		if (unlikely(list_empty(&poll->wait.entry))) {
+-			if (ipt->error)
+-				cancel = true;
+-			ipt->error = 0;
+-			mask = 0;
+-		}
+-		if (mask || ipt->error)
+-			list_del_init(&poll->wait.entry);
+-		else if (cancel)
+-			WRITE_ONCE(poll->canceled, true);
+-		else if (!poll->done) /* actually waiting for an event */
+-			io_poll_req_insert(req);
+-		spin_unlock(&poll->head->lock);
+-	}
+-
+-	return mask;
+-}
+-
+-static bool io_arm_poll_handler(struct io_kiocb *req)
+-{
+-	const struct io_op_def *def = &io_op_defs[req->opcode];
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct async_poll *apoll;
+-	struct io_poll_table ipt;
+-	__poll_t mask, ret;
+-	int rw;
+-
+-	if (!req->file || !file_can_poll(req->file))
+-		return false;
+-	if (req->flags & REQ_F_POLLED)
+-		return false;
+-	if (def->pollin)
+-		rw = READ;
+-	else if (def->pollout)
+-		rw = WRITE;
+-	else
+-		return false;
+-	/* if we can't nonblock try, then no point in arming a poll handler */
+-	if (!io_file_supports_async(req->file, rw))
+-		return false;
+-
+-	apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
+-	if (unlikely(!apoll))
+-		return false;
+-	apoll->double_poll = NULL;
+-
+-	req->flags |= REQ_F_POLLED;
+-	req->apoll = apoll;
+-
+-	mask = 0;
+-	if (def->pollin)
+-		mask |= POLLIN | POLLRDNORM;
+-	if (def->pollout)
+-		mask |= POLLOUT | POLLWRNORM;
+-
+-	/* If reading from MSG_ERRQUEUE using recvmsg, ignore POLLIN */
+-	if ((req->opcode == IORING_OP_RECVMSG) &&
+-	    (req->sr_msg.msg_flags & MSG_ERRQUEUE))
+-		mask &= ~POLLIN;
+-
+-	mask |= POLLERR | POLLPRI;
+-
+-	ipt.pt._qproc = io_async_queue_proc;
+-
+-	ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask,
+-					io_async_wake);
+-	if (ret || ipt.error) {
+-		io_poll_remove_double(req);
+-		spin_unlock_irq(&ctx->completion_lock);
+-		kfree(apoll->double_poll);
+-		kfree(apoll);
+-		return false;
+-	}
+-	spin_unlock_irq(&ctx->completion_lock);
+-	trace_io_uring_poll_arm(ctx, req->opcode, req->user_data, mask,
+-					apoll->poll.events);
+-	return true;
+-}
+-
+-static bool __io_poll_remove_one(struct io_kiocb *req,
+-				 struct io_poll_iocb *poll)
+-{
+-	bool do_complete = false;
+-
+-	spin_lock(&poll->head->lock);
+-	WRITE_ONCE(poll->canceled, true);
+-	if (!list_empty(&poll->wait.entry)) {
+-		list_del_init(&poll->wait.entry);
+-		do_complete = true;
+-	}
+-	spin_unlock(&poll->head->lock);
+-	hash_del(&req->hash_node);
+-	return do_complete;
+-}
+-
+-static bool io_poll_remove_one(struct io_kiocb *req)
+-{
+-	bool do_complete;
+-
+-	io_poll_remove_double(req);
+-
+-	if (req->opcode == IORING_OP_POLL_ADD) {
+-		do_complete = __io_poll_remove_one(req, &req->poll);
+-	} else {
+-		struct async_poll *apoll = req->apoll;
+-
+-		/* non-poll requests have submit ref still */
+-		do_complete = __io_poll_remove_one(req, &apoll->poll);
+-		if (do_complete) {
+-			io_put_req(req);
+-			kfree(apoll->double_poll);
+-			kfree(apoll);
+-		}
+-	}
+-
+-	if (do_complete) {
+-		io_cqring_fill_event(req, -ECANCELED);
+-		io_commit_cqring(req->ctx);
+-		req_set_fail_links(req);
+-		io_put_req_deferred(req, 1);
+-	}
+-
+-	return do_complete;
+-}
+-
+-/*
+- * Returns true if we found and killed one or more poll requests
+- */
+-static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk,
+-			       struct files_struct *files)
+-{
+-	struct hlist_node *tmp;
+-	struct io_kiocb *req;
+-	int posted = 0, i;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
+-		struct hlist_head *list;
+-
+-		list = &ctx->cancel_hash[i];
+-		hlist_for_each_entry_safe(req, tmp, list, hash_node) {
+-			if (io_match_task(req, tsk, files))
+-				posted += io_poll_remove_one(req);
+-		}
+-	}
+-	spin_unlock_irq(&ctx->completion_lock);
+-
+-	if (posted)
+-		io_cqring_ev_posted(ctx);
+-
+-	return posted != 0;
+-}
+-
+-static int io_poll_cancel(struct io_ring_ctx *ctx, __u64 sqe_addr)
+-{
+-	struct hlist_head *list;
+-	struct io_kiocb *req;
+-
+-	list = &ctx->cancel_hash[hash_long(sqe_addr, ctx->cancel_hash_bits)];
+-	hlist_for_each_entry(req, list, hash_node) {
+-		if (sqe_addr != req->user_data)
+-			continue;
+-		if (io_poll_remove_one(req))
+-			return 0;
+-		return -EALREADY;
+-	}
+-
+-	return -ENOENT;
+-}
+-
+-static int io_poll_remove_prep(struct io_kiocb *req,
+-			       const struct io_uring_sqe *sqe)
+-{
+-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-	if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index ||
+-	    sqe->poll_events)
+-		return -EINVAL;
+-
+-	req->poll.addr = READ_ONCE(sqe->addr);
+-	return 0;
+-}
+-
+-/*
+- * Find a running poll command that matches one specified in sqe->addr,
+- * and remove it if found.
+- */
+-static int io_poll_remove(struct io_kiocb *req)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	u64 addr;
+-	int ret;
+-
+-	addr = req->poll.addr;
+-	spin_lock_irq(&ctx->completion_lock);
+-	ret = io_poll_cancel(ctx, addr);
+-	spin_unlock_irq(&ctx->completion_lock);
+-
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	io_req_complete(req, ret);
+-	return 0;
+-}
+-
+-static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+-			void *key)
+-{
+-	struct io_kiocb *req = wait->private;
+-	struct io_poll_iocb *poll = &req->poll;
+-
+-	return __io_async_wake(req, poll, key_to_poll(key), io_poll_task_func);
+-}
+-
+-static void io_poll_queue_proc(struct file *file, struct wait_queue_head *head,
+-			       struct poll_table_struct *p)
+-{
+-	struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
+-
+-	__io_queue_proc(&pt->req->poll, pt, head, (struct io_poll_iocb **) &pt->req->async_data);
+-}
+-
+-static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	struct io_poll_iocb *poll = &req->poll;
+-	u32 events;
+-
+-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-	if (sqe->addr || sqe->ioprio || sqe->off || sqe->len || sqe->buf_index)
+-		return -EINVAL;
+-
+-	events = READ_ONCE(sqe->poll32_events);
+-#ifdef __BIG_ENDIAN
+-	events = swahw32(events);
+-#endif
+-	poll->events = demangle_poll(events) | EPOLLERR | EPOLLHUP |
+-		       (events & EPOLLEXCLUSIVE);
+-	return 0;
+-}
+-
+-static int io_poll_add(struct io_kiocb *req)
+-{
+-	struct io_poll_iocb *poll = &req->poll;
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_poll_table ipt;
+-	__poll_t mask;
+-
+-	ipt.pt._qproc = io_poll_queue_proc;
+-
+-	mask = __io_arm_poll_handler(req, &req->poll, &ipt, poll->events,
+-					io_poll_wake);
+-
+-	if (mask) { /* no async, we'd stolen it */
+-		ipt.error = 0;
+-		io_poll_complete(req, mask, 0);
+-	}
+-	spin_unlock_irq(&ctx->completion_lock);
+-
+-	if (mask) {
+-		io_cqring_ev_posted(ctx);
+-		io_put_req(req);
+-	}
+-	return ipt.error;
+-}
+-
+-static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
+-{
+-	struct io_timeout_data *data = container_of(timer,
+-						struct io_timeout_data, timer);
+-	struct io_kiocb *req = data->req;
+-	struct io_ring_ctx *ctx = req->ctx;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&ctx->completion_lock, flags);
+-	list_del_init(&req->timeout.list);
+-	atomic_set(&req->ctx->cq_timeouts,
+-		atomic_read(&req->ctx->cq_timeouts) + 1);
+-
+-	io_cqring_fill_event(req, -ETIME);
+-	io_commit_cqring(ctx);
+-	spin_unlock_irqrestore(&ctx->completion_lock, flags);
+-
+-	io_cqring_ev_posted(ctx);
+-	req_set_fail_links(req);
+-	io_put_req(req);
+-	return HRTIMER_NORESTART;
+-}
+-
+-static int __io_timeout_cancel(struct io_kiocb *req)
+-{
+-	struct io_timeout_data *io = req->async_data;
+-	int ret;
+-
+-	ret = hrtimer_try_to_cancel(&io->timer);
+-	if (ret == -1)
+-		return -EALREADY;
+-	list_del_init(&req->timeout.list);
+-
+-	req_set_fail_links(req);
+-	io_cqring_fill_event(req, -ECANCELED);
+-	io_put_req_deferred(req, 1);
+-	return 0;
+-}
+-
+-static int io_timeout_cancel(struct io_ring_ctx *ctx, __u64 user_data)
+-{
+-	struct io_kiocb *req;
+-	int ret = -ENOENT;
+-
+-	list_for_each_entry(req, &ctx->timeout_list, timeout.list) {
+-		if (user_data == req->user_data) {
+-			ret = 0;
+-			break;
+-		}
+-	}
+-
+-	if (ret == -ENOENT)
+-		return ret;
+-
+-	return __io_timeout_cancel(req);
+-}
+-
+-static int io_timeout_remove_prep(struct io_kiocb *req,
+-				  const struct io_uring_sqe *sqe)
+-{
+-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+-		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index || sqe->len || sqe->timeout_flags ||
+-	    sqe->splice_fd_in)
+-		return -EINVAL;
+-
+-	req->timeout_rem.addr = READ_ONCE(sqe->addr);
+-	return 0;
+-}
+-
+-/*
+- * Remove or update an existing timeout command
+- */
+-static int io_timeout_remove(struct io_kiocb *req)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	int ret;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	ret = io_timeout_cancel(ctx, req->timeout_rem.addr);
+-
+-	io_cqring_fill_event(req, ret);
+-	io_commit_cqring(ctx);
+-	spin_unlock_irq(&ctx->completion_lock);
+-	io_cqring_ev_posted(ctx);
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	io_put_req(req);
+-	return 0;
+-}
+-
+-static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+-			   bool is_timeout_link)
+-{
+-	struct io_timeout_data *data;
+-	unsigned flags;
+-	u32 off = READ_ONCE(sqe->off);
+-
+-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index || sqe->len != 1 ||
+-	    sqe->splice_fd_in)
+-		return -EINVAL;
+-	if (off && is_timeout_link)
+-		return -EINVAL;
+-	flags = READ_ONCE(sqe->timeout_flags);
+-	if (flags & ~IORING_TIMEOUT_ABS)
+-		return -EINVAL;
+-
+-	req->timeout.off = off;
+-
+-	if (!req->async_data && io_alloc_async_data(req))
+-		return -ENOMEM;
+-
+-	data = req->async_data;
+-	data->req = req;
+-
+-	if (get_timespec64(&data->ts, u64_to_user_ptr(sqe->addr)))
+-		return -EFAULT;
+-
+-	if (flags & IORING_TIMEOUT_ABS)
+-		data->mode = HRTIMER_MODE_ABS;
+-	else
+-		data->mode = HRTIMER_MODE_REL;
+-
+-	INIT_LIST_HEAD(&req->timeout.list);
+-	hrtimer_init(&data->timer, CLOCK_MONOTONIC, data->mode);
+-	return 0;
+-}
+-
+-static int io_timeout(struct io_kiocb *req)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_timeout_data *data = req->async_data;
+-	struct list_head *entry;
+-	u32 tail, off = req->timeout.off;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-
+-	/*
+-	 * sqe->off holds how many events that need to occur for this
+-	 * timeout event to be satisfied. If it isn't set, then this is
+-	 * a pure timeout request, sequence isn't used.
+-	 */
+-	if (io_is_timeout_noseq(req)) {
+-		entry = ctx->timeout_list.prev;
+-		goto add;
+-	}
+-
+-	tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
+-	req->timeout.target_seq = tail + off;
+-
+-	/* Update the last seq here in case io_flush_timeouts() hasn't.
+-	 * This is safe because ->completion_lock is held, and submissions
+-	 * and completions are never mixed in the same ->completion_lock section.
+-	 */
+-	ctx->cq_last_tm_flush = tail;
+-
+-	/*
+-	 * Insertion sort, ensuring the first entry in the list is always
+-	 * the one we need first.
+-	 */
+-	list_for_each_prev(entry, &ctx->timeout_list) {
+-		struct io_kiocb *nxt = list_entry(entry, struct io_kiocb,
+-						  timeout.list);
+-
+-		if (io_is_timeout_noseq(nxt))
+-			continue;
+-		/* nxt.seq is behind @tail, otherwise would've been completed */
+-		if (off >= nxt->timeout.target_seq - tail)
+-			break;
+-	}
+-add:
+-	list_add(&req->timeout.list, entry);
+-	data->timer.function = io_timeout_fn;
+-	hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), data->mode);
+-	spin_unlock_irq(&ctx->completion_lock);
+-	return 0;
+-}
+-
+-static bool io_cancel_cb(struct io_wq_work *work, void *data)
+-{
+-	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+-
+-	return req->user_data == (unsigned long) data;
+-}
+-
+-static int io_async_cancel_one(struct io_ring_ctx *ctx, void *sqe_addr)
+-{
+-	enum io_wq_cancel cancel_ret;
+-	int ret = 0;
+-
+-	cancel_ret = io_wq_cancel_cb(ctx->io_wq, io_cancel_cb, sqe_addr, false);
+-	switch (cancel_ret) {
+-	case IO_WQ_CANCEL_OK:
+-		ret = 0;
+-		break;
+-	case IO_WQ_CANCEL_RUNNING:
+-		ret = -EALREADY;
+-		break;
+-	case IO_WQ_CANCEL_NOTFOUND:
+-		ret = -ENOENT;
+-		break;
+-	}
+-
+-	return ret;
+-}
+-
+-static void io_async_find_and_cancel(struct io_ring_ctx *ctx,
+-				     struct io_kiocb *req, __u64 sqe_addr,
+-				     int success_ret)
+-{
+-	unsigned long flags;
+-	int ret;
+-
+-	ret = io_async_cancel_one(ctx, (void *) (unsigned long) sqe_addr);
+-	if (ret != -ENOENT) {
+-		spin_lock_irqsave(&ctx->completion_lock, flags);
+-		goto done;
+-	}
+-
+-	spin_lock_irqsave(&ctx->completion_lock, flags);
+-	ret = io_timeout_cancel(ctx, sqe_addr);
+-	if (ret != -ENOENT)
+-		goto done;
+-	ret = io_poll_cancel(ctx, sqe_addr);
+-done:
+-	if (!ret)
+-		ret = success_ret;
+-	io_cqring_fill_event(req, ret);
+-	io_commit_cqring(ctx);
+-	spin_unlock_irqrestore(&ctx->completion_lock, flags);
+-	io_cqring_ev_posted(ctx);
+-
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	io_put_req(req);
+-}
+-
+-static int io_async_cancel_prep(struct io_kiocb *req,
+-				const struct io_uring_sqe *sqe)
+-{
+-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+-		return -EINVAL;
+-	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+-		return -EINVAL;
+-	if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags ||
+-	    sqe->splice_fd_in)
+-		return -EINVAL;
+-
+-	req->cancel.addr = READ_ONCE(sqe->addr);
+-	return 0;
+-}
+-
+-static int io_async_cancel(struct io_kiocb *req)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	io_async_find_and_cancel(ctx, req, req->cancel.addr, 0);
+-	return 0;
+-}
+-
+-static int io_files_update_prep(struct io_kiocb *req,
+-				const struct io_uring_sqe *sqe)
+-{
+-	if (unlikely(req->ctx->flags & IORING_SETUP_SQPOLL))
+-		return -EINVAL;
+-	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+-		return -EINVAL;
+-	if (sqe->ioprio || sqe->rw_flags)
+-		return -EINVAL;
+-
+-	req->files_update.offset = READ_ONCE(sqe->off);
+-	req->files_update.nr_args = READ_ONCE(sqe->len);
+-	if (!req->files_update.nr_args)
+-		return -EINVAL;
+-	req->files_update.arg = READ_ONCE(sqe->addr);
+-	return 0;
+-}
+-
+-static int io_files_update(struct io_kiocb *req, bool force_nonblock,
+-			   struct io_comp_state *cs)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_uring_files_update up;
+-	int ret;
+-
+-	if (force_nonblock)
+-		return -EAGAIN;
+-
+-	up.offset = req->files_update.offset;
+-	up.fds = req->files_update.arg;
+-
+-	mutex_lock(&ctx->uring_lock);
+-	ret = __io_sqe_files_update(ctx, &up, req->files_update.nr_args);
+-	mutex_unlock(&ctx->uring_lock);
+-
+-	if (ret < 0)
+-		req_set_fail_links(req);
+-	__io_req_complete(req, ret, 0, cs);
+-	return 0;
+-}
+-
+-static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	switch (req->opcode) {
+-	case IORING_OP_NOP:
+-		return 0;
+-	case IORING_OP_READV:
+-	case IORING_OP_READ_FIXED:
+-	case IORING_OP_READ:
+-		return io_read_prep(req, sqe);
+-	case IORING_OP_WRITEV:
+-	case IORING_OP_WRITE_FIXED:
+-	case IORING_OP_WRITE:
+-		return io_write_prep(req, sqe);
+-	case IORING_OP_POLL_ADD:
+-		return io_poll_add_prep(req, sqe);
+-	case IORING_OP_POLL_REMOVE:
+-		return io_poll_remove_prep(req, sqe);
+-	case IORING_OP_FSYNC:
+-		return io_prep_fsync(req, sqe);
+-	case IORING_OP_SYNC_FILE_RANGE:
+-		return io_prep_sfr(req, sqe);
+-	case IORING_OP_SENDMSG:
+-	case IORING_OP_SEND:
+-		return io_sendmsg_prep(req, sqe);
+-	case IORING_OP_RECVMSG:
+-	case IORING_OP_RECV:
+-		return io_recvmsg_prep(req, sqe);
+-	case IORING_OP_CONNECT:
+-		return io_connect_prep(req, sqe);
+-	case IORING_OP_TIMEOUT:
+-		return io_timeout_prep(req, sqe, false);
+-	case IORING_OP_TIMEOUT_REMOVE:
+-		return io_timeout_remove_prep(req, sqe);
+-	case IORING_OP_ASYNC_CANCEL:
+-		return io_async_cancel_prep(req, sqe);
+-	case IORING_OP_LINK_TIMEOUT:
+-		return io_timeout_prep(req, sqe, true);
+-	case IORING_OP_ACCEPT:
+-		return io_accept_prep(req, sqe);
+-	case IORING_OP_FALLOCATE:
+-		return io_fallocate_prep(req, sqe);
+-	case IORING_OP_OPENAT:
+-		return io_openat_prep(req, sqe);
+-	case IORING_OP_CLOSE:
+-		return io_close_prep(req, sqe);
+-	case IORING_OP_FILES_UPDATE:
+-		return io_files_update_prep(req, sqe);
+-	case IORING_OP_STATX:
+-		return io_statx_prep(req, sqe);
+-	case IORING_OP_FADVISE:
+-		return io_fadvise_prep(req, sqe);
+-	case IORING_OP_MADVISE:
+-		return io_madvise_prep(req, sqe);
+-	case IORING_OP_OPENAT2:
+-		return io_openat2_prep(req, sqe);
+-	case IORING_OP_EPOLL_CTL:
+-		return io_epoll_ctl_prep(req, sqe);
+-	case IORING_OP_SPLICE:
+-		return io_splice_prep(req, sqe);
+-	case IORING_OP_PROVIDE_BUFFERS:
+-		return io_provide_buffers_prep(req, sqe);
+-	case IORING_OP_REMOVE_BUFFERS:
+-		return io_remove_buffers_prep(req, sqe);
+-	case IORING_OP_TEE:
+-		return io_tee_prep(req, sqe);
+-	}
+-
+-	printk_once(KERN_WARNING "io_uring: unhandled opcode %d\n",
+-			req->opcode);
+-	return-EINVAL;
+-}
+-
+-static int io_req_defer_prep(struct io_kiocb *req,
+-			     const struct io_uring_sqe *sqe)
+-{
+-	if (!sqe)
+-		return 0;
+-	if (io_alloc_async_data(req))
+-		return -EAGAIN;
+-	return io_req_prep(req, sqe);
+-}
+-
+-static u32 io_get_sequence(struct io_kiocb *req)
+-{
+-	struct io_kiocb *pos;
+-	struct io_ring_ctx *ctx = req->ctx;
+-	u32 total_submitted, nr_reqs = 1;
+-
+-	if (req->flags & REQ_F_LINK_HEAD)
+-		list_for_each_entry(pos, &req->link_list, link_list)
+-			nr_reqs++;
+-
+-	total_submitted = ctx->cached_sq_head - ctx->cached_sq_dropped;
+-	return total_submitted - nr_reqs;
+-}
+-
+-static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_defer_entry *de;
+-	int ret;
+-	u32 seq;
+-
+-	/* Still need defer if there is pending req in defer list. */
+-	if (likely(list_empty_careful(&ctx->defer_list) &&
+-		!(req->flags & REQ_F_IO_DRAIN)))
+-		return 0;
+-
+-	seq = io_get_sequence(req);
+-	/* Still a chance to pass the sequence check */
+-	if (!req_need_defer(req, seq) && list_empty_careful(&ctx->defer_list))
+-		return 0;
+-
+-	if (!req->async_data) {
+-		ret = io_req_defer_prep(req, sqe);
+-		if (ret)
+-			return ret;
+-	}
+-	io_prep_async_link(req);
+-	de = kmalloc(sizeof(*de), GFP_KERNEL);
+-	if (!de)
+-		return -ENOMEM;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	if (!req_need_defer(req, seq) && list_empty(&ctx->defer_list)) {
+-		spin_unlock_irq(&ctx->completion_lock);
+-		kfree(de);
+-		io_queue_async_work(req);
+-		return -EIOCBQUEUED;
+-	}
+-
+-	trace_io_uring_defer(ctx, req, req->user_data);
+-	de->req = req;
+-	de->seq = seq;
+-	list_add_tail(&de->list, &ctx->defer_list);
+-	spin_unlock_irq(&ctx->completion_lock);
+-	return -EIOCBQUEUED;
+-}
+-
+-static void io_req_drop_files(struct io_kiocb *req)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_uring_task *tctx = req->task->io_uring;
+-	unsigned long flags;
+-
+-	if (req->work.flags & IO_WQ_WORK_FILES) {
+-		put_files_struct(req->work.identity->files);
+-		put_nsproxy(req->work.identity->nsproxy);
+-	}
+-	spin_lock_irqsave(&ctx->inflight_lock, flags);
+-	list_del(&req->inflight_entry);
+-	spin_unlock_irqrestore(&ctx->inflight_lock, flags);
+-	req->flags &= ~REQ_F_INFLIGHT;
+-	req->work.flags &= ~IO_WQ_WORK_FILES;
+-	if (atomic_read(&tctx->in_idle))
+-		wake_up(&tctx->wait);
+-}
+-
+-static void __io_clean_op(struct io_kiocb *req)
+-{
+-	if (req->flags & REQ_F_BUFFER_SELECTED) {
+-		switch (req->opcode) {
+-		case IORING_OP_READV:
+-		case IORING_OP_READ_FIXED:
+-		case IORING_OP_READ:
+-			kfree((void *)(unsigned long)req->rw.addr);
+-			break;
+-		case IORING_OP_RECVMSG:
+-		case IORING_OP_RECV:
+-			kfree(req->sr_msg.kbuf);
+-			break;
+-		}
+-		req->flags &= ~REQ_F_BUFFER_SELECTED;
+-	}
+-
+-	if (req->flags & REQ_F_NEED_CLEANUP) {
+-		switch (req->opcode) {
+-		case IORING_OP_READV:
+-		case IORING_OP_READ_FIXED:
+-		case IORING_OP_READ:
+-		case IORING_OP_WRITEV:
+-		case IORING_OP_WRITE_FIXED:
+-		case IORING_OP_WRITE: {
+-			struct io_async_rw *io = req->async_data;
+-			if (io->free_iovec)
+-				kfree(io->free_iovec);
+-			break;
+-			}
+-		case IORING_OP_RECVMSG:
+-		case IORING_OP_SENDMSG: {
+-			struct io_async_msghdr *io = req->async_data;
+-			if (io->iov != io->fast_iov)
+-				kfree(io->iov);
+-			break;
+-			}
+-		case IORING_OP_SPLICE:
+-		case IORING_OP_TEE:
+-			io_put_file(req, req->splice.file_in,
+-				    (req->splice.flags & SPLICE_F_FD_IN_FIXED));
+-			break;
+-		case IORING_OP_OPENAT:
+-		case IORING_OP_OPENAT2:
+-			if (req->open.filename)
+-				putname(req->open.filename);
+-			break;
+-		}
+-		req->flags &= ~REQ_F_NEED_CLEANUP;
+-	}
+-}
+-
+-static int io_issue_sqe(struct io_kiocb *req, bool force_nonblock,
+-			struct io_comp_state *cs)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	int ret;
+-
+-	switch (req->opcode) {
+-	case IORING_OP_NOP:
+-		ret = io_nop(req, cs);
+-		break;
+-	case IORING_OP_READV:
+-	case IORING_OP_READ_FIXED:
+-	case IORING_OP_READ:
+-		ret = io_read(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_WRITEV:
+-	case IORING_OP_WRITE_FIXED:
+-	case IORING_OP_WRITE:
+-		ret = io_write(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_FSYNC:
+-		ret = io_fsync(req, force_nonblock);
+-		break;
+-	case IORING_OP_POLL_ADD:
+-		ret = io_poll_add(req);
+-		break;
+-	case IORING_OP_POLL_REMOVE:
+-		ret = io_poll_remove(req);
+-		break;
+-	case IORING_OP_SYNC_FILE_RANGE:
+-		ret = io_sync_file_range(req, force_nonblock);
+-		break;
+-	case IORING_OP_SENDMSG:
+-		ret = io_sendmsg(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_SEND:
+-		ret = io_send(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_RECVMSG:
+-		ret = io_recvmsg(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_RECV:
+-		ret = io_recv(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_TIMEOUT:
+-		ret = io_timeout(req);
+-		break;
+-	case IORING_OP_TIMEOUT_REMOVE:
+-		ret = io_timeout_remove(req);
+-		break;
+-	case IORING_OP_ACCEPT:
+-		ret = io_accept(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_CONNECT:
+-		ret = io_connect(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_ASYNC_CANCEL:
+-		ret = io_async_cancel(req);
+-		break;
+-	case IORING_OP_FALLOCATE:
+-		ret = io_fallocate(req, force_nonblock);
+-		break;
+-	case IORING_OP_OPENAT:
+-		ret = io_openat(req, force_nonblock);
+-		break;
+-	case IORING_OP_CLOSE:
+-		ret = io_close(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_FILES_UPDATE:
+-		ret = io_files_update(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_STATX:
+-		ret = io_statx(req, force_nonblock);
+-		break;
+-	case IORING_OP_FADVISE:
+-		ret = io_fadvise(req, force_nonblock);
+-		break;
+-	case IORING_OP_MADVISE:
+-		ret = io_madvise(req, force_nonblock);
+-		break;
+-	case IORING_OP_OPENAT2:
+-		ret = io_openat2(req, force_nonblock);
+-		break;
+-	case IORING_OP_EPOLL_CTL:
+-		ret = io_epoll_ctl(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_SPLICE:
+-		ret = io_splice(req, force_nonblock);
+-		break;
+-	case IORING_OP_PROVIDE_BUFFERS:
+-		ret = io_provide_buffers(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_REMOVE_BUFFERS:
+-		ret = io_remove_buffers(req, force_nonblock, cs);
+-		break;
+-	case IORING_OP_TEE:
+-		ret = io_tee(req, force_nonblock);
+-		break;
+-	default:
+-		ret = -EINVAL;
+-		break;
+-	}
+-
+-	if (ret)
+-		return ret;
+-
+-	/* If the op doesn't have a file, we're not polling for it */
+-	if ((ctx->flags & IORING_SETUP_IOPOLL) && req->file) {
+-		const bool in_async = io_wq_current_is_worker();
+-
+-		/* workqueue context doesn't hold uring_lock, grab it now */
+-		if (in_async)
+-			mutex_lock(&ctx->uring_lock);
+-
+-		io_iopoll_req_issued(req);
+-
+-		if (in_async)
+-			mutex_unlock(&ctx->uring_lock);
+-	}
+-
+-	return 0;
+-}
+-
+-static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
+-{
+-	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+-	struct io_kiocb *timeout;
+-	int ret = 0;
+-
+-	timeout = io_prep_linked_timeout(req);
+-	if (timeout)
+-		io_queue_linked_timeout(timeout);
+-
+-	/* if NO_CANCEL is set, we must still run the work */
+-	if ((work->flags & (IO_WQ_WORK_CANCEL|IO_WQ_WORK_NO_CANCEL)) ==
+-				IO_WQ_WORK_CANCEL) {
+-		ret = -ECANCELED;
+-	}
+-
+-	if (!ret) {
+-		do {
+-			ret = io_issue_sqe(req, false, NULL);
+-			/*
+-			 * We can get EAGAIN for polled IO even though we're
+-			 * forcing a sync submission from here, since we can't
+-			 * wait for request slots on the block side.
+-			 */
+-			if (ret != -EAGAIN)
+-				break;
+-			cond_resched();
+-		} while (1);
+-	}
+-
+-	if (ret) {
+-		struct io_ring_ctx *lock_ctx = NULL;
+-
+-		if (req->ctx->flags & IORING_SETUP_IOPOLL)
+-			lock_ctx = req->ctx;
+-
+-		/*
+-		 * io_iopoll_complete() does not hold completion_lock to
+-		 * complete polled io, so here for polled io, we can not call
+-		 * io_req_complete() directly, otherwise there maybe concurrent
+-		 * access to cqring, defer_list, etc, which is not safe. Given
+-		 * that io_iopoll_complete() is always called under uring_lock,
+-		 * so here for polled io, we also get uring_lock to complete
+-		 * it.
+-		 */
+-		if (lock_ctx)
+-			mutex_lock(&lock_ctx->uring_lock);
+-
+-		req_set_fail_links(req);
+-		io_req_complete(req, ret);
+-
+-		if (lock_ctx)
+-			mutex_unlock(&lock_ctx->uring_lock);
+-	}
+-
+-	return io_steal_work(req);
+-}
+-
+-static inline struct file *io_file_from_index(struct io_ring_ctx *ctx,
+-					      int index)
+-{
+-	struct fixed_file_table *table;
+-
+-	table = &ctx->file_data->table[index >> IORING_FILE_TABLE_SHIFT];
+-	return table->files[index & IORING_FILE_TABLE_MASK];
+-}
+-
+-static struct file *io_file_get(struct io_submit_state *state,
+-				struct io_kiocb *req, int fd, bool fixed)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct file *file;
+-
+-	if (fixed) {
+-		if (unlikely((unsigned int)fd >= ctx->nr_user_files))
+-			return NULL;
+-		fd = array_index_nospec(fd, ctx->nr_user_files);
+-		file = io_file_from_index(ctx, fd);
+-		if (file) {
+-			req->fixed_file_refs = &ctx->file_data->node->refs;
+-			percpu_ref_get(req->fixed_file_refs);
+-		}
+-	} else {
+-		trace_io_uring_file_get(ctx, fd);
+-		file = __io_file_get(state, fd);
+-	}
+-
+-	if (file && file->f_op == &io_uring_fops &&
+-	    !(req->flags & REQ_F_INFLIGHT)) {
+-		io_req_init_async(req);
+-		req->flags |= REQ_F_INFLIGHT;
+-
+-		spin_lock_irq(&ctx->inflight_lock);
+-		list_add(&req->inflight_entry, &ctx->inflight_list);
+-		spin_unlock_irq(&ctx->inflight_lock);
+-	}
+-
+-	return file;
+-}
+-
+-static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req,
+-			   int fd)
+-{
+-	bool fixed;
+-
+-	fixed = (req->flags & REQ_F_FIXED_FILE) != 0;
+-	if (unlikely(!fixed && io_async_submit(req->ctx)))
+-		return -EBADF;
+-
+-	req->file = io_file_get(state, req, fd, fixed);
+-	if (req->file || io_op_defs[req->opcode].needs_file_no_error)
+-		return 0;
+-	return -EBADF;
+-}
+-
+-static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
+-{
+-	struct io_timeout_data *data = container_of(timer,
+-						struct io_timeout_data, timer);
+-	struct io_kiocb *req = data->req;
+-	struct io_ring_ctx *ctx = req->ctx;
+-	struct io_kiocb *prev = NULL;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&ctx->completion_lock, flags);
+-
+-	/*
+-	 * We don't expect the list to be empty, that will only happen if we
+-	 * race with the completion of the linked work.
+-	 */
+-	if (!list_empty(&req->link_list)) {
+-		prev = list_entry(req->link_list.prev, struct io_kiocb,
+-				  link_list);
+-		list_del_init(&req->link_list);
+-		if (!refcount_inc_not_zero(&prev->refs))
+-			prev = NULL;
+-	}
+-
+-	list_del(&req->timeout.list);
+-	spin_unlock_irqrestore(&ctx->completion_lock, flags);
+-
+-	if (prev) {
+-		io_async_find_and_cancel(ctx, req, prev->user_data, -ETIME);
+-		io_put_req_deferred(prev, 1);
+-	} else {
+-		io_cqring_add_event(req, -ETIME, 0);
+-		io_put_req_deferred(req, 1);
+-	}
+-	return HRTIMER_NORESTART;
+-}
+-
+-static void __io_queue_linked_timeout(struct io_kiocb *req)
+-{
+-	/*
+-	 * If the list is now empty, then our linked request finished before
+-	 * we got a chance to setup the timer
+-	 */
+-	if (!list_empty(&req->link_list)) {
+-		struct io_timeout_data *data = req->async_data;
+-
+-		data->timer.function = io_link_timeout_fn;
+-		hrtimer_start(&data->timer, timespec64_to_ktime(data->ts),
+-				data->mode);
+-	}
+-}
+-
+-static void io_queue_linked_timeout(struct io_kiocb *req)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	__io_queue_linked_timeout(req);
+-	spin_unlock_irq(&ctx->completion_lock);
+-
+-	/* drop submission reference */
+-	io_put_req(req);
+-}
+-
+-static struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
+-{
+-	struct io_kiocb *nxt;
+-
+-	if (!(req->flags & REQ_F_LINK_HEAD))
+-		return NULL;
+-	if (req->flags & REQ_F_LINK_TIMEOUT)
+-		return NULL;
+-
+-	nxt = list_first_entry_or_null(&req->link_list, struct io_kiocb,
+-					link_list);
+-	if (!nxt || nxt->opcode != IORING_OP_LINK_TIMEOUT)
+-		return NULL;
+-
+-	nxt->flags |= REQ_F_LTIMEOUT_ACTIVE;
+-	req->flags |= REQ_F_LINK_TIMEOUT;
+-	return nxt;
+-}
+-
+-static void __io_queue_sqe(struct io_kiocb *req, struct io_comp_state *cs)
+-{
+-	struct io_kiocb *linked_timeout;
+-	const struct cred *old_creds = NULL;
+-	int ret;
+-
+-again:
+-	linked_timeout = io_prep_linked_timeout(req);
+-
+-	if ((req->flags & REQ_F_WORK_INITIALIZED) &&
+-	    (req->work.flags & IO_WQ_WORK_CREDS) &&
+-	    req->work.identity->creds != current_cred()) {
+-		if (old_creds)
+-			revert_creds(old_creds);
+-		if (old_creds == req->work.identity->creds)
+-			old_creds = NULL; /* restored original creds */
+-		else
+-			old_creds = override_creds(req->work.identity->creds);
+-	}
+-
+-	ret = io_issue_sqe(req, true, cs);
+-
+-	/*
+-	 * We async punt it if the file wasn't marked NOWAIT, or if the file
+-	 * doesn't support non-blocking read/write attempts
+-	 */
+-	if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
+-		if (!io_arm_poll_handler(req)) {
+-			/*
+-			 * Queued up for async execution, worker will release
+-			 * submit reference when the iocb is actually submitted.
+-			 */
+-			io_queue_async_work(req);
+-		}
+-
+-		if (linked_timeout)
+-			io_queue_linked_timeout(linked_timeout);
+-	} else if (likely(!ret)) {
+-		/* drop submission reference */
+-		req = io_put_req_find_next(req);
+-		if (linked_timeout)
+-			io_queue_linked_timeout(linked_timeout);
+-
+-		if (req) {
+-			if (!(req->flags & REQ_F_FORCE_ASYNC))
+-				goto again;
+-			io_queue_async_work(req);
+-		}
+-	} else {
+-		/* un-prep timeout, so it'll be killed as any other linked */
+-		req->flags &= ~REQ_F_LINK_TIMEOUT;
+-		req_set_fail_links(req);
+-		io_put_req(req);
+-		io_req_complete(req, ret);
+-	}
+-
+-	if (old_creds)
+-		revert_creds(old_creds);
+-}
+-
+-static void io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+-			 struct io_comp_state *cs)
+-{
+-	int ret;
+-
+-	ret = io_req_defer(req, sqe);
+-	if (ret) {
+-		if (ret != -EIOCBQUEUED) {
+-fail_req:
+-			req_set_fail_links(req);
+-			io_put_req(req);
+-			io_req_complete(req, ret);
+-		}
+-	} else if (req->flags & REQ_F_FORCE_ASYNC) {
+-		if (!req->async_data) {
+-			ret = io_req_defer_prep(req, sqe);
+-			if (unlikely(ret))
+-				goto fail_req;
+-		}
+-		io_queue_async_work(req);
+-	} else {
+-		if (sqe) {
+-			ret = io_req_prep(req, sqe);
+-			if (unlikely(ret))
+-				goto fail_req;
+-		}
+-		__io_queue_sqe(req, cs);
+-	}
+-}
+-
+-static inline void io_queue_link_head(struct io_kiocb *req,
+-				      struct io_comp_state *cs)
+-{
+-	if (unlikely(req->flags & REQ_F_FAIL_LINK)) {
+-		io_put_req(req);
+-		io_req_complete(req, -ECANCELED);
+-	} else
+-		io_queue_sqe(req, NULL, cs);
+-}
+-
+-static int io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+-			 struct io_kiocb **link, struct io_comp_state *cs)
+-{
+-	struct io_ring_ctx *ctx = req->ctx;
+-	int ret;
+-
+-	/*
+-	 * If we already have a head request, queue this one for async
+-	 * submittal once the head completes. If we don't have a head but
+-	 * IOSQE_IO_LINK is set in the sqe, start a new head. This one will be
+-	 * submitted sync once the chain is complete. If none of those
+-	 * conditions are true (normal request), then just queue it.
+-	 */
+-	if (*link) {
+-		struct io_kiocb *head = *link;
+-
+-		/*
+-		 * Taking sequential execution of a link, draining both sides
+-		 * of the link also fullfils IOSQE_IO_DRAIN semantics for all
+-		 * requests in the link. So, it drains the head and the
+-		 * next after the link request. The last one is done via
+-		 * drain_next flag to persist the effect across calls.
+-		 */
+-		if (req->flags & REQ_F_IO_DRAIN) {
+-			head->flags |= REQ_F_IO_DRAIN;
+-			ctx->drain_next = 1;
+-		}
+-		ret = io_req_defer_prep(req, sqe);
+-		if (unlikely(ret)) {
+-			/* fail even hard links since we don't submit */
+-			head->flags |= REQ_F_FAIL_LINK;
+-			return ret;
+-		}
+-		trace_io_uring_link(ctx, req, head);
+-		list_add_tail(&req->link_list, &head->link_list);
+-
+-		/* last request of a link, enqueue the link */
+-		if (!(req->flags & (REQ_F_LINK | REQ_F_HARDLINK))) {
+-			io_queue_link_head(head, cs);
+-			*link = NULL;
+-		}
+-	} else {
+-		if (unlikely(ctx->drain_next)) {
+-			req->flags |= REQ_F_IO_DRAIN;
+-			ctx->drain_next = 0;
+-		}
+-		if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) {
+-			req->flags |= REQ_F_LINK_HEAD;
+-			INIT_LIST_HEAD(&req->link_list);
+-
+-			ret = io_req_defer_prep(req, sqe);
+-			if (unlikely(ret))
+-				req->flags |= REQ_F_FAIL_LINK;
+-			*link = req;
+-		} else {
+-			io_queue_sqe(req, sqe, cs);
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+-/*
+- * Batched submission is done, ensure local IO is flushed out.
+- */
+-static void io_submit_state_end(struct io_submit_state *state)
+-{
+-	if (!list_empty(&state->comp.list))
+-		io_submit_flush_completions(&state->comp);
+-	blk_finish_plug(&state->plug);
+-	io_state_file_put(state);
+-	if (state->free_reqs)
+-		kmem_cache_free_bulk(req_cachep, state->free_reqs, state->reqs);
+-}
+-
+-/*
+- * Start submission side cache.
+- */
+-static void io_submit_state_start(struct io_submit_state *state,
+-				  struct io_ring_ctx *ctx, unsigned int max_ios)
+-{
+-	blk_start_plug(&state->plug);
+-	state->comp.nr = 0;
+-	INIT_LIST_HEAD(&state->comp.list);
+-	state->comp.ctx = ctx;
+-	state->free_reqs = 0;
+-	state->file = NULL;
+-	state->ios_left = max_ios;
+-}
+-
+-static void io_commit_sqring(struct io_ring_ctx *ctx)
+-{
+-	struct io_rings *rings = ctx->rings;
+-
+-	/*
+-	 * Ensure any loads from the SQEs are done at this point,
+-	 * since once we write the new head, the application could
+-	 * write new data to them.
+-	 */
+-	smp_store_release(&rings->sq.head, ctx->cached_sq_head);
+-}
+-
+-/*
+- * Fetch an sqe, if one is available. Note that sqe_ptr will point to memory
+- * that is mapped by userspace. This means that care needs to be taken to
+- * ensure that reads are stable, as we cannot rely on userspace always
+- * being a good citizen. If members of the sqe are validated and then later
+- * used, it's important that those reads are done through READ_ONCE() to
+- * prevent a re-load down the line.
+- */
+-static const struct io_uring_sqe *io_get_sqe(struct io_ring_ctx *ctx)
+-{
+-	u32 *sq_array = ctx->sq_array;
+-	unsigned head;
+-
+-	/*
+-	 * The cached sq head (or cq tail) serves two purposes:
+-	 *
+-	 * 1) allows us to batch the cost of updating the user visible
+-	 *    head updates.
+-	 * 2) allows the kernel side to track the head on its own, even
+-	 *    though the application is the one updating it.
+-	 */
+-	head = READ_ONCE(sq_array[ctx->cached_sq_head & ctx->sq_mask]);
+-	if (likely(head < ctx->sq_entries))
+-		return &ctx->sq_sqes[head];
+-
+-	/* drop invalid entries */
+-	ctx->cached_sq_dropped++;
+-	WRITE_ONCE(ctx->rings->sq_dropped, ctx->cached_sq_dropped);
+-	return NULL;
+-}
+-
+-static inline void io_consume_sqe(struct io_ring_ctx *ctx)
+-{
+-	ctx->cached_sq_head++;
+-}
+-
+-/*
+- * Check SQE restrictions (opcode and flags).
+- *
+- * Returns 'true' if SQE is allowed, 'false' otherwise.
+- */
+-static inline bool io_check_restriction(struct io_ring_ctx *ctx,
+-					struct io_kiocb *req,
+-					unsigned int sqe_flags)
+-{
+-	if (!ctx->restricted)
+-		return true;
+-
+-	if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
+-		return false;
+-
+-	if ((sqe_flags & ctx->restrictions.sqe_flags_required) !=
+-	    ctx->restrictions.sqe_flags_required)
+-		return false;
+-
+-	if (sqe_flags & ~(ctx->restrictions.sqe_flags_allowed |
+-			  ctx->restrictions.sqe_flags_required))
+-		return false;
+-
+-	return true;
+-}
+-
+-#define SQE_VALID_FLAGS	(IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK|	\
+-				IOSQE_IO_HARDLINK | IOSQE_ASYNC | \
+-				IOSQE_BUFFER_SELECT)
+-
+-static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+-		       const struct io_uring_sqe *sqe,
+-		       struct io_submit_state *state)
+-{
+-	unsigned int sqe_flags;
+-	int id, ret;
+-
+-	req->opcode = READ_ONCE(sqe->opcode);
+-	req->user_data = READ_ONCE(sqe->user_data);
+-	req->async_data = NULL;
+-	req->file = NULL;
+-	req->ctx = ctx;
+-	req->flags = 0;
+-	/* one is dropped after submission, the other at completion */
+-	refcount_set(&req->refs, 2);
+-	req->task = current;
+-	req->result = 0;
+-
+-	if (unlikely(req->opcode >= IORING_OP_LAST))
+-		return -EINVAL;
+-
+-	if (unlikely(io_sq_thread_acquire_mm(ctx, req)))
+-		return -EFAULT;
+-
+-	sqe_flags = READ_ONCE(sqe->flags);
+-	/* enforce forwards compatibility on users */
+-	if (unlikely(sqe_flags & ~SQE_VALID_FLAGS))
+-		return -EINVAL;
+-
+-	if (unlikely(!io_check_restriction(ctx, req, sqe_flags)))
+-		return -EACCES;
+-
+-	if ((sqe_flags & IOSQE_BUFFER_SELECT) &&
+-	    !io_op_defs[req->opcode].buffer_select)
+-		return -EOPNOTSUPP;
+-
+-	id = READ_ONCE(sqe->personality);
+-	if (id) {
+-		struct io_identity *iod;
+-
+-		iod = xa_load(&ctx->personalities, id);
+-		if (unlikely(!iod))
+-			return -EINVAL;
+-		refcount_inc(&iod->count);
+-
+-		__io_req_init_async(req);
+-		get_cred(iod->creds);
+-		req->work.identity = iod;
+-		req->work.flags |= IO_WQ_WORK_CREDS;
+-	}
+-
+-	/* same numerical values with corresponding REQ_F_*, safe to copy */
+-	req->flags |= sqe_flags;
+-
+-	if (!io_op_defs[req->opcode].needs_file)
+-		return 0;
+-
+-	ret = io_req_set_file(state, req, READ_ONCE(sqe->fd));
+-	state->ios_left--;
+-	return ret;
+-}
+-
+-static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
+-{
+-	struct io_submit_state state;
+-	struct io_kiocb *link = NULL;
+-	int i, submitted = 0;
+-
+-	/* if we have a backlog and couldn't flush it all, return BUSY */
+-	if (test_bit(0, &ctx->sq_check_overflow)) {
+-		if (!__io_cqring_overflow_flush(ctx, false, NULL, NULL))
+-			return -EBUSY;
+-	}
+-
+-	/* make sure SQ entry isn't read before tail */
+-	nr = min3(nr, ctx->sq_entries, io_sqring_entries(ctx));
+-
+-	if (!percpu_ref_tryget_many(&ctx->refs, nr))
+-		return -EAGAIN;
+-
+-	percpu_counter_add(&current->io_uring->inflight, nr);
+-	refcount_add(nr, &current->usage);
+-
+-	io_submit_state_start(&state, ctx, nr);
+-
+-	for (i = 0; i < nr; i++) {
+-		const struct io_uring_sqe *sqe;
+-		struct io_kiocb *req;
+-		int err;
+-
+-		sqe = io_get_sqe(ctx);
+-		if (unlikely(!sqe)) {
+-			io_consume_sqe(ctx);
+-			break;
+-		}
+-		req = io_alloc_req(ctx, &state);
+-		if (unlikely(!req)) {
+-			if (!submitted)
+-				submitted = -EAGAIN;
+-			break;
+-		}
+-		io_consume_sqe(ctx);
+-		/* will complete beyond this point, count as submitted */
+-		submitted++;
+-
+-		err = io_init_req(ctx, req, sqe, &state);
+-		if (unlikely(err)) {
+-fail_req:
+-			io_put_req(req);
+-			io_req_complete(req, err);
+-			break;
+-		}
+-
+-		trace_io_uring_submit_sqe(ctx, req->opcode, req->user_data,
+-						true, io_async_submit(ctx));
+-		err = io_submit_sqe(req, sqe, &link, &state.comp);
+-		if (err)
+-			goto fail_req;
+-	}
+-
+-	if (unlikely(submitted != nr)) {
+-		int ref_used = (submitted == -EAGAIN) ? 0 : submitted;
+-		struct io_uring_task *tctx = current->io_uring;
+-		int unused = nr - ref_used;
+-
+-		percpu_ref_put_many(&ctx->refs, unused);
+-		percpu_counter_sub(&tctx->inflight, unused);
+-		put_task_struct_many(current, unused);
+-	}
+-	if (link)
+-		io_queue_link_head(link, &state.comp);
+-	io_submit_state_end(&state);
+-
+-	 /* Commit SQ ring head once we've consumed and submitted all SQEs */
+-	io_commit_sqring(ctx);
+-
+-	return submitted;
+-}
+-
+-static inline void io_ring_set_wakeup_flag(struct io_ring_ctx *ctx)
+-{
+-	/* Tell userspace we may need a wakeup call */
+-	spin_lock_irq(&ctx->completion_lock);
+-	ctx->rings->sq_flags |= IORING_SQ_NEED_WAKEUP;
+-	spin_unlock_irq(&ctx->completion_lock);
+-}
+-
+-static inline void io_ring_clear_wakeup_flag(struct io_ring_ctx *ctx)
+-{
+-	spin_lock_irq(&ctx->completion_lock);
+-	ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
+-	spin_unlock_irq(&ctx->completion_lock);
+-}
+-
+-static int io_sq_wake_function(struct wait_queue_entry *wqe, unsigned mode,
+-			       int sync, void *key)
+-{
+-	struct io_ring_ctx *ctx = container_of(wqe, struct io_ring_ctx, sqo_wait_entry);
+-	int ret;
+-
+-	ret = autoremove_wake_function(wqe, mode, sync, key);
+-	if (ret) {
+-		unsigned long flags;
+-
+-		spin_lock_irqsave(&ctx->completion_lock, flags);
+-		ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
+-		spin_unlock_irqrestore(&ctx->completion_lock, flags);
+-	}
+-	return ret;
+-}
+-
+-enum sq_ret {
+-	SQT_IDLE	= 1,
+-	SQT_SPIN	= 2,
+-	SQT_DID_WORK	= 4,
+-};
+-
+-static enum sq_ret __io_sq_thread(struct io_ring_ctx *ctx,
+-				  unsigned long start_jiffies, bool cap_entries)
+-{
+-	unsigned long timeout = start_jiffies + ctx->sq_thread_idle;
+-	struct io_sq_data *sqd = ctx->sq_data;
+-	unsigned int to_submit;
+-	int ret = 0;
+-
+-again:
+-	if (!list_empty(&ctx->iopoll_list)) {
+-		unsigned nr_events = 0;
+-
+-		mutex_lock(&ctx->uring_lock);
+-		if (!list_empty(&ctx->iopoll_list) && !need_resched())
+-			io_do_iopoll(ctx, &nr_events, 0);
+-		mutex_unlock(&ctx->uring_lock);
+-	}
+-
+-	to_submit = io_sqring_entries(ctx);
+-
+-	/*
+-	 * If submit got -EBUSY, flag us as needing the application
+-	 * to enter the kernel to reap and flush events.
+-	 */
+-	if (!to_submit || ret == -EBUSY || need_resched()) {
+-		/*
+-		 * Drop cur_mm before scheduling, we can't hold it for
+-		 * long periods (or over schedule()). Do this before
+-		 * adding ourselves to the waitqueue, as the unuse/drop
+-		 * may sleep.
+-		 */
+-		io_sq_thread_drop_mm();
+-
+-		/*
+-		 * We're polling. If we're within the defined idle
+-		 * period, then let us spin without work before going
+-		 * to sleep. The exception is if we got EBUSY doing
+-		 * more IO, we should wait for the application to
+-		 * reap events and wake us up.
+-		 */
+-		if (!list_empty(&ctx->iopoll_list) || need_resched() ||
+-		    (!time_after(jiffies, timeout) && ret != -EBUSY &&
+-		    !percpu_ref_is_dying(&ctx->refs)))
+-			return SQT_SPIN;
+-
+-		prepare_to_wait(&sqd->wait, &ctx->sqo_wait_entry,
+-					TASK_INTERRUPTIBLE);
+-
+-		/*
+-		 * While doing polled IO, before going to sleep, we need
+-		 * to check if there are new reqs added to iopoll_list,
+-		 * it is because reqs may have been punted to io worker
+-		 * and will be added to iopoll_list later, hence check
+-		 * the iopoll_list again.
+-		 */
+-		if ((ctx->flags & IORING_SETUP_IOPOLL) &&
+-		    !list_empty_careful(&ctx->iopoll_list)) {
+-			finish_wait(&sqd->wait, &ctx->sqo_wait_entry);
+-			goto again;
+-		}
+-
+-		to_submit = io_sqring_entries(ctx);
+-		if (!to_submit || ret == -EBUSY)
+-			return SQT_IDLE;
+-	}
+-
+-	finish_wait(&sqd->wait, &ctx->sqo_wait_entry);
+-	io_ring_clear_wakeup_flag(ctx);
+-
+-	/* if we're handling multiple rings, cap submit size for fairness */
+-	if (cap_entries && to_submit > 8)
+-		to_submit = 8;
+-
+-	mutex_lock(&ctx->uring_lock);
+-	if (likely(!percpu_ref_is_dying(&ctx->refs) && !ctx->sqo_dead))
+-		ret = io_submit_sqes(ctx, to_submit);
+-	mutex_unlock(&ctx->uring_lock);
+-
+-	if (!io_sqring_full(ctx) && wq_has_sleeper(&ctx->sqo_sq_wait))
+-		wake_up(&ctx->sqo_sq_wait);
+-
+-	return SQT_DID_WORK;
+-}
+-
+-static void io_sqd_init_new(struct io_sq_data *sqd)
+-{
+-	struct io_ring_ctx *ctx;
+-
+-	while (!list_empty(&sqd->ctx_new_list)) {
+-		ctx = list_first_entry(&sqd->ctx_new_list, struct io_ring_ctx, sqd_list);
+-		init_wait(&ctx->sqo_wait_entry);
+-		ctx->sqo_wait_entry.func = io_sq_wake_function;
+-		list_move_tail(&ctx->sqd_list, &sqd->ctx_list);
+-		complete(&ctx->sq_thread_comp);
+-	}
+-}
+-
+-static int io_sq_thread(void *data)
+-{
+-	struct cgroup_subsys_state *cur_css = NULL;
+-	const struct cred *old_cred = NULL;
+-	struct io_sq_data *sqd = data;
+-	struct io_ring_ctx *ctx;
+-	unsigned long start_jiffies;
+-
+-	start_jiffies = jiffies;
+-	while (!kthread_should_stop()) {
+-		enum sq_ret ret = 0;
+-		bool cap_entries;
+-
+-		/*
+-		 * Any changes to the sqd lists are synchronized through the
+-		 * kthread parking. This synchronizes the thread vs users,
+-		 * the users are synchronized on the sqd->ctx_lock.
+-		 */
+-		if (kthread_should_park()) {
+-			kthread_parkme();
+-			/*
+-			 * When sq thread is unparked, in case the previous park operation
+-			 * comes from io_put_sq_data(), which means that sq thread is going
+-			 * to be stopped, so here needs to have a check.
+-			 */
+-			if (kthread_should_stop())
+-				break;
+-		}
+-
+-		if (unlikely(!list_empty(&sqd->ctx_new_list)))
+-			io_sqd_init_new(sqd);
+-
+-		cap_entries = !list_is_singular(&sqd->ctx_list);
+-
+-		list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
+-			if (current->cred != ctx->creds) {
+-				if (old_cred)
+-					revert_creds(old_cred);
+-				old_cred = override_creds(ctx->creds);
+-			}
+-			io_sq_thread_associate_blkcg(ctx, &cur_css);
+-#ifdef CONFIG_AUDIT
+-			current->loginuid = ctx->loginuid;
+-			current->sessionid = ctx->sessionid;
+-#endif
+-
+-			ret |= __io_sq_thread(ctx, start_jiffies, cap_entries);
+-
+-			io_sq_thread_drop_mm();
+-		}
+-
+-		if (ret & SQT_SPIN) {
+-			io_run_task_work();
+-			io_sq_thread_drop_mm();
+-			cond_resched();
+-		} else if (ret == SQT_IDLE) {
+-			if (kthread_should_park())
+-				continue;
+-			list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
+-				io_ring_set_wakeup_flag(ctx);
+-			schedule();
+-			start_jiffies = jiffies;
+-			list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
+-				io_ring_clear_wakeup_flag(ctx);
+-		}
+-	}
+-
+-	io_run_task_work();
+-	io_sq_thread_drop_mm();
+-
+-	if (cur_css)
+-		io_sq_thread_unassociate_blkcg();
+-	if (old_cred)
+-		revert_creds(old_cred);
+-
+-	kthread_parkme();
+-
+-	return 0;
+-}
+-
+-struct io_wait_queue {
+-	struct wait_queue_entry wq;
+-	struct io_ring_ctx *ctx;
+-	unsigned to_wait;
+-	unsigned nr_timeouts;
+-};
+-
+-static inline bool io_should_wake(struct io_wait_queue *iowq)
+-{
+-	struct io_ring_ctx *ctx = iowq->ctx;
+-
+-	/*
+-	 * Wake up if we have enough events, or if a timeout occurred since we
+-	 * started waiting. For timeouts, we always want to return to userspace,
+-	 * regardless of event count.
+-	 */
+-	return io_cqring_events(ctx) >= iowq->to_wait ||
+-			atomic_read(&ctx->cq_timeouts) != iowq->nr_timeouts;
+-}
+-
+-static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
+-			    int wake_flags, void *key)
+-{
+-	struct io_wait_queue *iowq = container_of(curr, struct io_wait_queue,
+-							wq);
+-
+-	/*
+-	 * Cannot safely flush overflowed CQEs from here, ensure we wake up
+-	 * the task, and the next invocation will do it.
+-	 */
+-	if (io_should_wake(iowq) || test_bit(0, &iowq->ctx->cq_check_overflow))
+-		return autoremove_wake_function(curr, mode, wake_flags, key);
+-	return -1;
+-}
+-
+-static int io_run_task_work_sig(void)
+-{
+-	if (io_run_task_work())
+-		return 1;
+-	if (!signal_pending(current))
+-		return 0;
+-	if (current->jobctl & JOBCTL_TASK_WORK) {
+-		spin_lock_irq(&current->sighand->siglock);
+-		current->jobctl &= ~JOBCTL_TASK_WORK;
+-		recalc_sigpending();
+-		spin_unlock_irq(&current->sighand->siglock);
+-		return 1;
+-	}
+-	return -EINTR;
+-}
+-
+-/*
+- * Wait until events become available, if we don't already have some. The
+- * application must reap them itself, as they reside on the shared cq ring.
+- */
+-static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+-			  const sigset_t __user *sig, size_t sigsz)
+-{
+-	struct io_wait_queue iowq = {
+-		.wq = {
+-			.private	= current,
+-			.func		= io_wake_function,
+-			.entry		= LIST_HEAD_INIT(iowq.wq.entry),
+-		},
+-		.ctx		= ctx,
+-		.to_wait	= min_events,
+-	};
+-	struct io_rings *rings = ctx->rings;
+-	int ret = 0;
+-
+-	do {
+-		io_cqring_overflow_flush(ctx, false, NULL, NULL);
+-		if (io_cqring_events(ctx) >= min_events)
+-			return 0;
+-		if (!io_run_task_work())
+-			break;
+-	} while (1);
+-
+-	if (sig) {
+-#ifdef CONFIG_COMPAT
+-		if (in_compat_syscall())
+-			ret = set_compat_user_sigmask((const compat_sigset_t __user *)sig,
+-						      sigsz);
+-		else
+-#endif
+-			ret = set_user_sigmask(sig, sigsz);
+-
+-		if (ret)
+-			return ret;
+-	}
+-
+-	iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
+-	trace_io_uring_cqring_wait(ctx, min_events);
+-	do {
+-		io_cqring_overflow_flush(ctx, false, NULL, NULL);
+-		prepare_to_wait_exclusive(&ctx->wait, &iowq.wq,
+-						TASK_INTERRUPTIBLE);
+-		/* make sure we run task_work before checking for signals */
+-		ret = io_run_task_work_sig();
+-		if (ret > 0) {
+-			finish_wait(&ctx->wait, &iowq.wq);
+-			continue;
+-		}
+-		else if (ret < 0)
+-			break;
+-		if (io_should_wake(&iowq))
+-			break;
+-		if (test_bit(0, &ctx->cq_check_overflow)) {
+-			finish_wait(&ctx->wait, &iowq.wq);
+-			continue;
+-		}
+-		schedule();
+-	} while (1);
+-	finish_wait(&ctx->wait, &iowq.wq);
+-
+-	restore_saved_sigmask_unless(ret == -EINTR);
+-
+-	return READ_ONCE(rings->cq.head) == READ_ONCE(rings->cq.tail) ? ret : 0;
+-}
+-
+-static void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
+-{
+-#if defined(CONFIG_UNIX)
+-	if (ctx->ring_sock) {
+-		struct sock *sock = ctx->ring_sock->sk;
+-		struct sk_buff *skb;
+-
+-		while ((skb = skb_dequeue(&sock->sk_receive_queue)) != NULL)
+-			kfree_skb(skb);
+-	}
+-#else
+-	int i;
+-
+-	for (i = 0; i < ctx->nr_user_files; i++) {
+-		struct file *file;
+-
+-		file = io_file_from_index(ctx, i);
+-		if (file)
+-			fput(file);
+-	}
+-#endif
+-}
+-
+-static void io_file_ref_kill(struct percpu_ref *ref)
+-{
+-	struct fixed_file_data *data;
+-
+-	data = container_of(ref, struct fixed_file_data, refs);
+-	complete(&data->done);
+-}
+-
+-static void io_sqe_files_set_node(struct fixed_file_data *file_data,
+-				  struct fixed_file_ref_node *ref_node)
+-{
+-	spin_lock_bh(&file_data->lock);
+-	file_data->node = ref_node;
+-	list_add_tail(&ref_node->node, &file_data->ref_list);
+-	spin_unlock_bh(&file_data->lock);
+-	percpu_ref_get(&file_data->refs);
+-}
+-
+-
+-static void io_sqe_files_kill_node(struct fixed_file_data *data)
+-{
+-	struct fixed_file_ref_node *ref_node = NULL;
+-
+-	spin_lock_bh(&data->lock);
+-	ref_node = data->node;
+-	spin_unlock_bh(&data->lock);
+-	if (ref_node)
+-		percpu_ref_kill(&ref_node->refs);
+-}
+-
+-static int io_file_ref_quiesce(struct fixed_file_data *data,
+-			       struct io_ring_ctx *ctx)
+-{
+-	int ret;
+-	struct fixed_file_ref_node *backup_node;
+-
+-	if (data->quiesce)
+-		return -ENXIO;
+-
+-	data->quiesce = true;
+-	do {
+-		backup_node = alloc_fixed_file_ref_node(ctx);
+-		if (!backup_node)
+-			break;
+-
+-		io_sqe_files_kill_node(data);
+-		percpu_ref_kill(&data->refs);
+-		flush_delayed_work(&ctx->file_put_work);
+-
+-		ret = wait_for_completion_interruptible(&data->done);
+-		if (!ret)
+-			break;
+-
+-		percpu_ref_resurrect(&data->refs);
+-		io_sqe_files_set_node(data, backup_node);
+-		backup_node = NULL;
+-		reinit_completion(&data->done);
+-		mutex_unlock(&ctx->uring_lock);
+-		ret = io_run_task_work_sig();
+-		mutex_lock(&ctx->uring_lock);
+-
+-		if (ret < 0)
+-			break;
+-		backup_node = alloc_fixed_file_ref_node(ctx);
+-		ret = -ENOMEM;
+-		if (!backup_node)
+-			break;
+-	} while (1);
+-	data->quiesce = false;
+-
+-	if (backup_node)
+-		destroy_fixed_file_ref_node(backup_node);
+-	return ret;
+-}
+-
+-static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
+-{
+-	struct fixed_file_data *data = ctx->file_data;
+-	unsigned nr_tables, i;
+-	int ret;
+-
+-	/*
+-	 * percpu_ref_is_dying() is to stop parallel files unregister
+-	 * Since we possibly drop uring lock later in this function to
+-	 * run task work.
+-	 */
+-	if (!data || percpu_ref_is_dying(&data->refs))
+-		return -ENXIO;
+-	ret = io_file_ref_quiesce(data, ctx);
+-	if (ret)
+-		return ret;
+-
+-	__io_sqe_files_unregister(ctx);
+-	nr_tables = DIV_ROUND_UP(ctx->nr_user_files, IORING_MAX_FILES_TABLE);
+-	for (i = 0; i < nr_tables; i++)
+-		kfree(data->table[i].files);
+-	kfree(data->table);
+-	percpu_ref_exit(&data->refs);
+-	kfree(data);
+-	ctx->file_data = NULL;
+-	ctx->nr_user_files = 0;
+-	return 0;
+-}
+-
+-static void io_put_sq_data(struct io_sq_data *sqd)
+-{
+-	if (refcount_dec_and_test(&sqd->refs)) {
+-		/*
+-		 * The park is a bit of a work-around, without it we get
+-		 * warning spews on shutdown with SQPOLL set and affinity
+-		 * set to a single CPU.
+-		 */
+-		if (sqd->thread) {
+-			kthread_park(sqd->thread);
+-			kthread_stop(sqd->thread);
+-		}
+-
+-		kfree(sqd);
+-	}
+-}
+-
+-static struct io_sq_data *io_attach_sq_data(struct io_uring_params *p)
+-{
+-	struct io_ring_ctx *ctx_attach;
+-	struct io_sq_data *sqd;
+-	struct fd f;
+-
+-	f = fdget(p->wq_fd);
+-	if (!f.file)
+-		return ERR_PTR(-ENXIO);
+-	if (f.file->f_op != &io_uring_fops) {
+-		fdput(f);
+-		return ERR_PTR(-EINVAL);
+-	}
+-
+-	ctx_attach = f.file->private_data;
+-	sqd = ctx_attach->sq_data;
+-	if (!sqd) {
+-		fdput(f);
+-		return ERR_PTR(-EINVAL);
+-	}
+-
+-	refcount_inc(&sqd->refs);
+-	fdput(f);
+-	return sqd;
+-}
+-
+-static struct io_sq_data *io_get_sq_data(struct io_uring_params *p)
+-{
+-	struct io_sq_data *sqd;
+-
+-	if (p->flags & IORING_SETUP_ATTACH_WQ)
+-		return io_attach_sq_data(p);
+-
+-	sqd = kzalloc(sizeof(*sqd), GFP_KERNEL);
+-	if (!sqd)
+-		return ERR_PTR(-ENOMEM);
+-
+-	refcount_set(&sqd->refs, 1);
+-	INIT_LIST_HEAD(&sqd->ctx_list);
+-	INIT_LIST_HEAD(&sqd->ctx_new_list);
+-	mutex_init(&sqd->ctx_lock);
+-	mutex_init(&sqd->lock);
+-	init_waitqueue_head(&sqd->wait);
+-	return sqd;
+-}
+-
+-static void io_sq_thread_unpark(struct io_sq_data *sqd)
+-	__releases(&sqd->lock)
+-{
+-	if (!sqd->thread)
+-		return;
+-	kthread_unpark(sqd->thread);
+-	mutex_unlock(&sqd->lock);
+-}
+-
+-static void io_sq_thread_park(struct io_sq_data *sqd)
+-	__acquires(&sqd->lock)
+-{
+-	if (!sqd->thread)
+-		return;
+-	mutex_lock(&sqd->lock);
+-	kthread_park(sqd->thread);
+-}
+-
+-static void io_sq_thread_stop(struct io_ring_ctx *ctx)
+-{
+-	struct io_sq_data *sqd = ctx->sq_data;
+-
+-	if (sqd) {
+-		if (sqd->thread) {
+-			/*
+-			 * We may arrive here from the error branch in
+-			 * io_sq_offload_create() where the kthread is created
+-			 * without being waked up, thus wake it up now to make
+-			 * sure the wait will complete.
+-			 */
+-			wake_up_process(sqd->thread);
+-			wait_for_completion(&ctx->sq_thread_comp);
+-
+-			io_sq_thread_park(sqd);
+-		}
+-
+-		mutex_lock(&sqd->ctx_lock);
+-		list_del(&ctx->sqd_list);
+-		mutex_unlock(&sqd->ctx_lock);
+-
+-		if (sqd->thread) {
+-			finish_wait(&sqd->wait, &ctx->sqo_wait_entry);
+-			io_sq_thread_unpark(sqd);
+-		}
+-
+-		io_put_sq_data(sqd);
+-		ctx->sq_data = NULL;
+-	}
+-}
+-
+-static void io_finish_async(struct io_ring_ctx *ctx)
+-{
+-	io_sq_thread_stop(ctx);
+-
+-	if (ctx->io_wq) {
+-		io_wq_destroy(ctx->io_wq);
+-		ctx->io_wq = NULL;
+-	}
+-}
+-
+-#if defined(CONFIG_UNIX)
+-/*
+- * Ensure the UNIX gc is aware of our file set, so we are certain that
+- * the io_uring can be safely unregistered on process exit, even if we have
+- * loops in the file referencing.
+- */
+-static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
+-{
+-	struct sock *sk = ctx->ring_sock->sk;
+-	struct scm_fp_list *fpl;
+-	struct sk_buff *skb;
+-	int i, nr_files;
+-
+-	fpl = kzalloc(sizeof(*fpl), GFP_KERNEL);
+-	if (!fpl)
+-		return -ENOMEM;
+-
+-	skb = alloc_skb(0, GFP_KERNEL);
+-	if (!skb) {
+-		kfree(fpl);
+-		return -ENOMEM;
+-	}
+-
+-	skb->sk = sk;
+-	skb->scm_io_uring = 1;
+-
+-	nr_files = 0;
+-	fpl->user = get_uid(ctx->user);
+-	for (i = 0; i < nr; i++) {
+-		struct file *file = io_file_from_index(ctx, i + offset);
+-
+-		if (!file)
+-			continue;
+-		fpl->fp[nr_files] = get_file(file);
+-		unix_inflight(fpl->user, fpl->fp[nr_files]);
+-		nr_files++;
+-	}
+-
+-	if (nr_files) {
+-		fpl->max = SCM_MAX_FD;
+-		fpl->count = nr_files;
+-		UNIXCB(skb).fp = fpl;
+-		skb->destructor = unix_destruct_scm;
+-		refcount_add(skb->truesize, &sk->sk_wmem_alloc);
+-		skb_queue_head(&sk->sk_receive_queue, skb);
+-
+-		for (i = 0; i < nr; i++) {
+-			struct file *file = io_file_from_index(ctx, i + offset);
+-
+-			if (file)
+-				fput(file);
+-		}
+-	} else {
+-		kfree_skb(skb);
+-		free_uid(fpl->user);
+-		kfree(fpl);
+-	}
+-
+-	return 0;
+-}
+-
+-/*
+- * If UNIX sockets are enabled, fd passing can cause a reference cycle which
+- * causes regular reference counting to break down. We rely on the UNIX
+- * garbage collection to take care of this problem for us.
+- */
+-static int io_sqe_files_scm(struct io_ring_ctx *ctx)
+-{
+-	unsigned left, total;
+-	int ret = 0;
+-
+-	total = 0;
+-	left = ctx->nr_user_files;
+-	while (left) {
+-		unsigned this_files = min_t(unsigned, left, SCM_MAX_FD);
+-
+-		ret = __io_sqe_files_scm(ctx, this_files, total);
+-		if (ret)
+-			break;
+-		left -= this_files;
+-		total += this_files;
+-	}
+-
+-	if (!ret)
+-		return 0;
+-
+-	while (total < ctx->nr_user_files) {
+-		struct file *file = io_file_from_index(ctx, total);
+-
+-		if (file)
+-			fput(file);
+-		total++;
+-	}
+-
+-	return ret;
+-}
+-#else
+-static int io_sqe_files_scm(struct io_ring_ctx *ctx)
+-{
+-	return 0;
+-}
+-#endif
+-
+-static int io_sqe_alloc_file_tables(struct fixed_file_data *file_data,
+-				    unsigned nr_tables, unsigned nr_files)
+-{
+-	int i;
+-
+-	for (i = 0; i < nr_tables; i++) {
+-		struct fixed_file_table *table = &file_data->table[i];
+-		unsigned this_files;
+-
+-		this_files = min(nr_files, IORING_MAX_FILES_TABLE);
+-		table->files = kcalloc(this_files, sizeof(struct file *),
+-					GFP_KERNEL_ACCOUNT);
+-		if (!table->files)
+-			break;
+-		nr_files -= this_files;
+-	}
+-
+-	if (i == nr_tables)
+-		return 0;
+-
+-	for (i = 0; i < nr_tables; i++) {
+-		struct fixed_file_table *table = &file_data->table[i];
+-		kfree(table->files);
+-	}
+-	return 1;
+-}
+-
+-static void io_ring_file_put(struct io_ring_ctx *ctx, struct file *file)
+-{
+-#if defined(CONFIG_UNIX)
+-	struct sock *sock = ctx->ring_sock->sk;
+-	struct sk_buff_head list, *head = &sock->sk_receive_queue;
+-	struct sk_buff *skb;
+-	int i;
+-
+-	__skb_queue_head_init(&list);
+-
+-	/*
+-	 * Find the skb that holds this file in its SCM_RIGHTS. When found,
+-	 * remove this entry and rearrange the file array.
+-	 */
+-	skb = skb_dequeue(head);
+-	while (skb) {
+-		struct scm_fp_list *fp;
+-
+-		fp = UNIXCB(skb).fp;
+-		for (i = 0; i < fp->count; i++) {
+-			int left;
+-
+-			if (fp->fp[i] != file)
+-				continue;
+-
+-			unix_notinflight(fp->user, fp->fp[i]);
+-			left = fp->count - 1 - i;
+-			if (left) {
+-				memmove(&fp->fp[i], &fp->fp[i + 1],
+-						left * sizeof(struct file *));
+-			}
+-			fp->count--;
+-			if (!fp->count) {
+-				kfree_skb(skb);
+-				skb = NULL;
+-			} else {
+-				__skb_queue_tail(&list, skb);
+-			}
+-			fput(file);
+-			file = NULL;
+-			break;
+-		}
+-
+-		if (!file)
+-			break;
+-
+-		__skb_queue_tail(&list, skb);
+-
+-		skb = skb_dequeue(head);
+-	}
+-
+-	if (skb_peek(&list)) {
+-		spin_lock_irq(&head->lock);
+-		while ((skb = __skb_dequeue(&list)) != NULL)
+-			__skb_queue_tail(head, skb);
+-		spin_unlock_irq(&head->lock);
+-	}
+-#else
+-	fput(file);
+-#endif
+-}
+-
+-struct io_file_put {
+-	struct list_head list;
+-	struct file *file;
+-};
+-
+-static void __io_file_put_work(struct fixed_file_ref_node *ref_node)
+-{
+-	struct fixed_file_data *file_data = ref_node->file_data;
+-	struct io_ring_ctx *ctx = file_data->ctx;
+-	struct io_file_put *pfile, *tmp;
+-
+-	list_for_each_entry_safe(pfile, tmp, &ref_node->file_list, list) {
+-		list_del(&pfile->list);
+-		io_ring_file_put(ctx, pfile->file);
+-		kfree(pfile);
+-	}
+-
+-	percpu_ref_exit(&ref_node->refs);
+-	kfree(ref_node);
+-	percpu_ref_put(&file_data->refs);
+-}
+-
+-static void io_file_put_work(struct work_struct *work)
+-{
+-	struct io_ring_ctx *ctx;
+-	struct llist_node *node;
+-
+-	ctx = container_of(work, struct io_ring_ctx, file_put_work.work);
+-	node = llist_del_all(&ctx->file_put_llist);
+-
+-	while (node) {
+-		struct fixed_file_ref_node *ref_node;
+-		struct llist_node *next = node->next;
+-
+-		ref_node = llist_entry(node, struct fixed_file_ref_node, llist);
+-		__io_file_put_work(ref_node);
+-		node = next;
+-	}
+-}
+-
+-static void io_file_data_ref_zero(struct percpu_ref *ref)
+-{
+-	struct fixed_file_ref_node *ref_node;
+-	struct fixed_file_data *data;
+-	struct io_ring_ctx *ctx;
+-	bool first_add = false;
+-	int delay = HZ;
+-
+-	ref_node = container_of(ref, struct fixed_file_ref_node, refs);
+-	data = ref_node->file_data;
+-	ctx = data->ctx;
+-
+-	spin_lock_bh(&data->lock);
+-	ref_node->done = true;
+-
+-	while (!list_empty(&data->ref_list)) {
+-		ref_node = list_first_entry(&data->ref_list,
+-					struct fixed_file_ref_node, node);
+-		/* recycle ref nodes in order */
+-		if (!ref_node->done)
+-			break;
+-		list_del(&ref_node->node);
+-		first_add |= llist_add(&ref_node->llist, &ctx->file_put_llist);
+-	}
+-	spin_unlock_bh(&data->lock);
+-
+-	if (percpu_ref_is_dying(&data->refs))
+-		delay = 0;
+-
+-	if (!delay)
+-		mod_delayed_work(system_wq, &ctx->file_put_work, 0);
+-	else if (first_add)
+-		queue_delayed_work(system_wq, &ctx->file_put_work, delay);
+-}
+-
+-static struct fixed_file_ref_node *alloc_fixed_file_ref_node(
+-			struct io_ring_ctx *ctx)
+-{
+-	struct fixed_file_ref_node *ref_node;
+-
+-	ref_node = kzalloc(sizeof(*ref_node), GFP_KERNEL);
+-	if (!ref_node)
+-		return NULL;
+-
+-	if (percpu_ref_init(&ref_node->refs, io_file_data_ref_zero,
+-			    0, GFP_KERNEL)) {
+-		kfree(ref_node);
+-		return NULL;
+-	}
+-	INIT_LIST_HEAD(&ref_node->node);
+-	INIT_LIST_HEAD(&ref_node->file_list);
+-	ref_node->file_data = ctx->file_data;
+-	ref_node->done = false;
+-	return ref_node;
+-}
+-
+-static void destroy_fixed_file_ref_node(struct fixed_file_ref_node *ref_node)
+-{
+-	percpu_ref_exit(&ref_node->refs);
+-	kfree(ref_node);
+-}
+-
+-static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+-				 unsigned nr_args)
+-{
+-	__s32 __user *fds = (__s32 __user *) arg;
+-	unsigned nr_tables, i;
+-	struct file *file;
+-	int fd, ret = -ENOMEM;
+-	struct fixed_file_ref_node *ref_node;
+-	struct fixed_file_data *file_data;
+-
+-	if (ctx->file_data)
+-		return -EBUSY;
+-	if (!nr_args)
+-		return -EINVAL;
+-	if (nr_args > IORING_MAX_FIXED_FILES)
+-		return -EMFILE;
+-	if (nr_args > rlimit(RLIMIT_NOFILE))
+-		return -EMFILE;
+-
+-	file_data = kzalloc(sizeof(*ctx->file_data), GFP_KERNEL_ACCOUNT);
+-	if (!file_data)
+-		return -ENOMEM;
+-	file_data->ctx = ctx;
+-	init_completion(&file_data->done);
+-	INIT_LIST_HEAD(&file_data->ref_list);
+-	spin_lock_init(&file_data->lock);
+-
+-	nr_tables = DIV_ROUND_UP(nr_args, IORING_MAX_FILES_TABLE);
+-	file_data->table = kcalloc(nr_tables, sizeof(*file_data->table),
+-				   GFP_KERNEL_ACCOUNT);
+-	if (!file_data->table)
+-		goto out_free;
+-
+-	if (percpu_ref_init(&file_data->refs, io_file_ref_kill,
+-				PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
+-		goto out_free;
+-
+-	if (io_sqe_alloc_file_tables(file_data, nr_tables, nr_args))
+-		goto out_ref;
+-	ctx->file_data = file_data;
+-
+-	for (i = 0; i < nr_args; i++, ctx->nr_user_files++) {
+-		struct fixed_file_table *table;
+-		unsigned index;
+-
+-		if (copy_from_user(&fd, &fds[i], sizeof(fd))) {
+-			ret = -EFAULT;
+-			goto out_fput;
+-		}
+-		/* allow sparse sets */
+-		if (fd == -1)
+-			continue;
+-
+-		file = fget(fd);
+-		ret = -EBADF;
+-		if (!file)
+-			goto out_fput;
+-
+-		/*
+-		 * Don't allow io_uring instances to be registered. If UNIX
+-		 * isn't enabled, then this causes a reference cycle and this
+-		 * instance can never get freed. If UNIX is enabled we'll
+-		 * handle it just fine, but there's still no point in allowing
+-		 * a ring fd as it doesn't support regular read/write anyway.
+-		 */
+-		if (file->f_op == &io_uring_fops) {
+-			fput(file);
+-			goto out_fput;
+-		}
+-		table = &file_data->table[i >> IORING_FILE_TABLE_SHIFT];
+-		index = i & IORING_FILE_TABLE_MASK;
+-		table->files[index] = file;
+-	}
+-
+-	ret = io_sqe_files_scm(ctx);
+-	if (ret) {
+-		io_sqe_files_unregister(ctx);
+-		return ret;
+-	}
+-
+-	ref_node = alloc_fixed_file_ref_node(ctx);
+-	if (!ref_node) {
+-		io_sqe_files_unregister(ctx);
+-		return -ENOMEM;
+-	}
+-
+-	io_sqe_files_set_node(file_data, ref_node);
+-	return ret;
+-out_fput:
+-	for (i = 0; i < ctx->nr_user_files; i++) {
+-		file = io_file_from_index(ctx, i);
+-		if (file)
+-			fput(file);
+-	}
+-	for (i = 0; i < nr_tables; i++)
+-		kfree(file_data->table[i].files);
+-	ctx->nr_user_files = 0;
+-out_ref:
+-	percpu_ref_exit(&file_data->refs);
+-out_free:
+-	kfree(file_data->table);
+-	kfree(file_data);
+-	ctx->file_data = NULL;
+-	return ret;
+-}
+-
+-static int io_sqe_file_register(struct io_ring_ctx *ctx, struct file *file,
+-				int index)
+-{
+-#if defined(CONFIG_UNIX)
+-	struct sock *sock = ctx->ring_sock->sk;
+-	struct sk_buff_head *head = &sock->sk_receive_queue;
+-	struct sk_buff *skb;
+-
+-	/*
+-	 * See if we can merge this file into an existing skb SCM_RIGHTS
+-	 * file set. If there's no room, fall back to allocating a new skb
+-	 * and filling it in.
+-	 */
+-	spin_lock_irq(&head->lock);
+-	skb = skb_peek(head);
+-	if (skb) {
+-		struct scm_fp_list *fpl = UNIXCB(skb).fp;
+-
+-		if (fpl->count < SCM_MAX_FD) {
+-			__skb_unlink(skb, head);
+-			spin_unlock_irq(&head->lock);
+-			fpl->fp[fpl->count] = get_file(file);
+-			unix_inflight(fpl->user, fpl->fp[fpl->count]);
+-			fpl->count++;
+-			spin_lock_irq(&head->lock);
+-			__skb_queue_head(head, skb);
+-		} else {
+-			skb = NULL;
+-		}
+-	}
+-	spin_unlock_irq(&head->lock);
+-
+-	if (skb) {
+-		fput(file);
+-		return 0;
+-	}
+-
+-	return __io_sqe_files_scm(ctx, 1, index);
+-#else
+-	return 0;
+-#endif
+-}
+-
+-static int io_queue_file_removal(struct fixed_file_data *data,
+-				 struct file *file)
+-{
+-	struct io_file_put *pfile;
+-	struct fixed_file_ref_node *ref_node = data->node;
+-
+-	pfile = kzalloc(sizeof(*pfile), GFP_KERNEL);
+-	if (!pfile)
+-		return -ENOMEM;
+-
+-	pfile->file = file;
+-	list_add(&pfile->list, &ref_node->file_list);
+-
+-	return 0;
+-}
+-
+-static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+-				 struct io_uring_files_update *up,
+-				 unsigned nr_args)
+-{
+-	struct fixed_file_data *data = ctx->file_data;
+-	struct fixed_file_ref_node *ref_node;
+-	struct file *file;
+-	__s32 __user *fds;
+-	int fd, i, err;
+-	__u32 done;
+-	bool needs_switch = false;
+-
+-	if (check_add_overflow(up->offset, nr_args, &done))
+-		return -EOVERFLOW;
+-	if (done > ctx->nr_user_files)
+-		return -EINVAL;
+-
+-	ref_node = alloc_fixed_file_ref_node(ctx);
+-	if (!ref_node)
+-		return -ENOMEM;
+-
+-	done = 0;
+-	fds = u64_to_user_ptr(up->fds);
+-	while (nr_args) {
+-		struct fixed_file_table *table;
+-		unsigned index;
+-
+-		err = 0;
+-		if (copy_from_user(&fd, &fds[done], sizeof(fd))) {
+-			err = -EFAULT;
+-			break;
+-		}
+-		i = array_index_nospec(up->offset, ctx->nr_user_files);
+-		table = &ctx->file_data->table[i >> IORING_FILE_TABLE_SHIFT];
+-		index = i & IORING_FILE_TABLE_MASK;
+-		if (table->files[index]) {
+-			file = table->files[index];
+-			err = io_queue_file_removal(data, file);
+-			if (err)
+-				break;
+-			table->files[index] = NULL;
+-			needs_switch = true;
+-		}
+-		if (fd != -1) {
+-			file = fget(fd);
+-			if (!file) {
+-				err = -EBADF;
+-				break;
+-			}
+-			/*
+-			 * Don't allow io_uring instances to be registered. If
+-			 * UNIX isn't enabled, then this causes a reference
+-			 * cycle and this instance can never get freed. If UNIX
+-			 * is enabled we'll handle it just fine, but there's
+-			 * still no point in allowing a ring fd as it doesn't
+-			 * support regular read/write anyway.
+-			 */
+-			if (file->f_op == &io_uring_fops) {
+-				fput(file);
+-				err = -EBADF;
+-				break;
+-			}
+-			table->files[index] = file;
+-			err = io_sqe_file_register(ctx, file, i);
+-			if (err) {
+-				table->files[index] = NULL;
+-				fput(file);
+-				break;
+-			}
+-		}
+-		nr_args--;
+-		done++;
+-		up->offset++;
+-	}
+-
+-	if (needs_switch) {
+-		percpu_ref_kill(&data->node->refs);
+-		io_sqe_files_set_node(data, ref_node);
+-	} else
+-		destroy_fixed_file_ref_node(ref_node);
+-
+-	return done ? done : err;
+-}
+-
+-static int io_sqe_files_update(struct io_ring_ctx *ctx, void __user *arg,
+-			       unsigned nr_args)
+-{
+-	struct io_uring_files_update up;
+-
+-	if (!ctx->file_data)
+-		return -ENXIO;
+-	if (!nr_args)
+-		return -EINVAL;
+-	if (copy_from_user(&up, arg, sizeof(up)))
+-		return -EFAULT;
+-	if (up.resv)
+-		return -EINVAL;
+-
+-	return __io_sqe_files_update(ctx, &up, nr_args);
+-}
+-
+-static void io_free_work(struct io_wq_work *work)
+-{
+-	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+-
+-	/* Consider that io_steal_work() relies on this ref */
+-	io_put_req(req);
+-}
+-
+-static int io_init_wq_offload(struct io_ring_ctx *ctx,
+-			      struct io_uring_params *p)
+-{
+-	struct io_wq_data data;
+-	struct fd f;
+-	struct io_ring_ctx *ctx_attach;
+-	unsigned int concurrency;
+-	int ret = 0;
+-
+-	data.user = ctx->user;
+-	data.free_work = io_free_work;
+-	data.do_work = io_wq_submit_work;
+-
+-	if (!(p->flags & IORING_SETUP_ATTACH_WQ)) {
+-		/* Do QD, or 4 * CPUS, whatever is smallest */
+-		concurrency = min(ctx->sq_entries, 4 * num_online_cpus());
+-
+-		ctx->io_wq = io_wq_create(concurrency, &data);
+-		if (IS_ERR(ctx->io_wq)) {
+-			ret = PTR_ERR(ctx->io_wq);
+-			ctx->io_wq = NULL;
+-		}
+-		return ret;
+-	}
+-
+-	f = fdget(p->wq_fd);
+-	if (!f.file)
+-		return -EBADF;
+-
+-	if (f.file->f_op != &io_uring_fops) {
+-		ret = -EINVAL;
+-		goto out_fput;
+-	}
+-
+-	ctx_attach = f.file->private_data;
+-	/* @io_wq is protected by holding the fd */
+-	if (!io_wq_get(ctx_attach->io_wq, &data)) {
+-		ret = -EINVAL;
+-		goto out_fput;
+-	}
+-
+-	ctx->io_wq = ctx_attach->io_wq;
+-out_fput:
+-	fdput(f);
+-	return ret;
+-}
+-
+-static int io_uring_alloc_task_context(struct task_struct *task)
+-{
+-	struct io_uring_task *tctx;
+-	int ret;
+-
+-	tctx = kmalloc(sizeof(*tctx), GFP_KERNEL);
+-	if (unlikely(!tctx))
+-		return -ENOMEM;
+-
+-	ret = percpu_counter_init(&tctx->inflight, 0, GFP_KERNEL);
+-	if (unlikely(ret)) {
+-		kfree(tctx);
+-		return ret;
+-	}
+-
+-	xa_init(&tctx->xa);
+-	init_waitqueue_head(&tctx->wait);
+-	tctx->last = NULL;
+-	atomic_set(&tctx->in_idle, 0);
+-	tctx->sqpoll = false;
+-	io_init_identity(&tctx->__identity);
+-	tctx->identity = &tctx->__identity;
+-	task->io_uring = tctx;
+-	return 0;
+-}
+-
+-void __io_uring_free(struct task_struct *tsk)
+-{
+-	struct io_uring_task *tctx = tsk->io_uring;
+-
+-	WARN_ON_ONCE(!xa_empty(&tctx->xa));
+-	WARN_ON_ONCE(refcount_read(&tctx->identity->count) != 1);
+-	if (tctx->identity != &tctx->__identity)
+-		kfree(tctx->identity);
+-	percpu_counter_destroy(&tctx->inflight);
+-	kfree(tctx);
+-	tsk->io_uring = NULL;
+-}
+-
+-static int io_sq_offload_create(struct io_ring_ctx *ctx,
+-				struct io_uring_params *p)
+-{
+-	int ret;
+-
+-	if (ctx->flags & IORING_SETUP_SQPOLL) {
+-		struct io_sq_data *sqd;
+-
+-		ret = -EPERM;
+-		if (!capable(CAP_SYS_ADMIN))
+-			goto err;
+-
+-		sqd = io_get_sq_data(p);
+-		if (IS_ERR(sqd)) {
+-			ret = PTR_ERR(sqd);
+-			goto err;
+-		}
+-
+-		ctx->sq_data = sqd;
+-		io_sq_thread_park(sqd);
+-		mutex_lock(&sqd->ctx_lock);
+-		list_add(&ctx->sqd_list, &sqd->ctx_new_list);
+-		mutex_unlock(&sqd->ctx_lock);
+-		io_sq_thread_unpark(sqd);
+-
+-		ctx->sq_thread_idle = msecs_to_jiffies(p->sq_thread_idle);
+-		if (!ctx->sq_thread_idle)
+-			ctx->sq_thread_idle = HZ;
+-
+-		if (sqd->thread)
+-			goto done;
+-
+-		if (p->flags & IORING_SETUP_SQ_AFF) {
+-			int cpu = p->sq_thread_cpu;
+-
+-			ret = -EINVAL;
+-			if (cpu >= nr_cpu_ids)
+-				goto err;
+-			if (!cpu_online(cpu))
+-				goto err;
+-
+-			sqd->thread = kthread_create_on_cpu(io_sq_thread, sqd,
+-							cpu, "io_uring-sq");
+-		} else {
+-			sqd->thread = kthread_create(io_sq_thread, sqd,
+-							"io_uring-sq");
+-		}
+-		if (IS_ERR(sqd->thread)) {
+-			ret = PTR_ERR(sqd->thread);
+-			sqd->thread = NULL;
+-			goto err;
+-		}
+-		ret = io_uring_alloc_task_context(sqd->thread);
+-		if (ret)
+-			goto err;
+-	} else if (p->flags & IORING_SETUP_SQ_AFF) {
+-		/* Can't have SQ_AFF without SQPOLL */
+-		ret = -EINVAL;
+-		goto err;
+-	}
+-
+-done:
+-	ret = io_init_wq_offload(ctx, p);
+-	if (ret)
+-		goto err;
+-
+-	return 0;
+-err:
+-	io_finish_async(ctx);
+-	return ret;
+-}
+-
+-static void io_sq_offload_start(struct io_ring_ctx *ctx)
+-{
+-	struct io_sq_data *sqd = ctx->sq_data;
+-
+-	ctx->flags &= ~IORING_SETUP_R_DISABLED;
+-	if ((ctx->flags & IORING_SETUP_SQPOLL) && sqd && sqd->thread)
+-		wake_up_process(sqd->thread);
+-}
+-
+-static inline void __io_unaccount_mem(struct user_struct *user,
+-				      unsigned long nr_pages)
+-{
+-	atomic_long_sub(nr_pages, &user->locked_vm);
+-}
+-
+-static inline int __io_account_mem(struct user_struct *user,
+-				   unsigned long nr_pages)
+-{
+-	unsigned long page_limit, cur_pages, new_pages;
+-
+-	/* Don't allow more pages than we can safely lock */
+-	page_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+-
+-	do {
+-		cur_pages = atomic_long_read(&user->locked_vm);
+-		new_pages = cur_pages + nr_pages;
+-		if (new_pages > page_limit)
+-			return -ENOMEM;
+-	} while (atomic_long_cmpxchg(&user->locked_vm, cur_pages,
+-					new_pages) != cur_pages);
+-
+-	return 0;
+-}
+-
+-static void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages,
+-			     enum io_mem_account acct)
+-{
+-	if (ctx->limit_mem)
+-		__io_unaccount_mem(ctx->user, nr_pages);
+-
+-	if (ctx->mm_account) {
+-		if (acct == ACCT_LOCKED)
+-			ctx->mm_account->locked_vm -= nr_pages;
+-		else if (acct == ACCT_PINNED)
+-			atomic64_sub(nr_pages, &ctx->mm_account->pinned_vm);
+-	}
+-}
+-
+-static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages,
+-			  enum io_mem_account acct)
+-{
+-	int ret;
+-
+-	if (ctx->limit_mem) {
+-		ret = __io_account_mem(ctx->user, nr_pages);
+-		if (ret)
+-			return ret;
+-	}
+-
+-	if (ctx->mm_account) {
+-		if (acct == ACCT_LOCKED)
+-			ctx->mm_account->locked_vm += nr_pages;
+-		else if (acct == ACCT_PINNED)
+-			atomic64_add(nr_pages, &ctx->mm_account->pinned_vm);
+-	}
+-
+-	return 0;
+-}
+-
+-static void io_mem_free(void *ptr)
+-{
+-	struct page *page;
+-
+-	if (!ptr)
+-		return;
+-
+-	page = virt_to_head_page(ptr);
+-	if (put_page_testzero(page))
+-		free_compound_page(page);
+-}
+-
+-static void *io_mem_alloc(size_t size)
+-{
+-	gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP |
+-				__GFP_NORETRY;
+-
+-	return (void *) __get_free_pages(gfp_flags, get_order(size));
+-}
+-
+-static unsigned long rings_size(unsigned sq_entries, unsigned cq_entries,
+-				size_t *sq_offset)
+-{
+-	struct io_rings *rings;
+-	size_t off, sq_array_size;
+-
+-	off = struct_size(rings, cqes, cq_entries);
+-	if (off == SIZE_MAX)
+-		return SIZE_MAX;
+-
+-#ifdef CONFIG_SMP
+-	off = ALIGN(off, SMP_CACHE_BYTES);
+-	if (off == 0)
+-		return SIZE_MAX;
+-#endif
+-
+-	if (sq_offset)
+-		*sq_offset = off;
+-
+-	sq_array_size = array_size(sizeof(u32), sq_entries);
+-	if (sq_array_size == SIZE_MAX)
+-		return SIZE_MAX;
+-
+-	if (check_add_overflow(off, sq_array_size, &off))
+-		return SIZE_MAX;
+-
+-	return off;
+-}
+-
+-static unsigned long ring_pages(unsigned sq_entries, unsigned cq_entries)
+-{
+-	size_t pages;
+-
+-	pages = (size_t)1 << get_order(
+-		rings_size(sq_entries, cq_entries, NULL));
+-	pages += (size_t)1 << get_order(
+-		array_size(sizeof(struct io_uring_sqe), sq_entries));
+-
+-	return pages;
+-}
+-
+-static int io_sqe_buffer_unregister(struct io_ring_ctx *ctx)
+-{
+-	int i, j;
+-
+-	if (!ctx->user_bufs)
+-		return -ENXIO;
+-
+-	for (i = 0; i < ctx->nr_user_bufs; i++) {
+-		struct io_mapped_ubuf *imu = &ctx->user_bufs[i];
+-
+-		for (j = 0; j < imu->nr_bvecs; j++)
+-			unpin_user_page(imu->bvec[j].bv_page);
+-
+-		if (imu->acct_pages)
+-			io_unaccount_mem(ctx, imu->acct_pages, ACCT_PINNED);
+-		kvfree(imu->bvec);
+-		imu->nr_bvecs = 0;
+-	}
+-
+-	kfree(ctx->user_bufs);
+-	ctx->user_bufs = NULL;
+-	ctx->nr_user_bufs = 0;
+-	return 0;
+-}
+-
+-static int io_copy_iov(struct io_ring_ctx *ctx, struct iovec *dst,
+-		       void __user *arg, unsigned index)
+-{
+-	struct iovec __user *src;
+-
+-#ifdef CONFIG_COMPAT
+-	if (ctx->compat) {
+-		struct compat_iovec __user *ciovs;
+-		struct compat_iovec ciov;
+-
+-		ciovs = (struct compat_iovec __user *) arg;
+-		if (copy_from_user(&ciov, &ciovs[index], sizeof(ciov)))
+-			return -EFAULT;
+-
+-		dst->iov_base = u64_to_user_ptr((u64)ciov.iov_base);
+-		dst->iov_len = ciov.iov_len;
+-		return 0;
+-	}
+-#endif
+-	src = (struct iovec __user *) arg;
+-	if (copy_from_user(dst, &src[index], sizeof(*dst)))
+-		return -EFAULT;
+-	return 0;
+-}
+-
+-/*
+- * Not super efficient, but this is just a registration time. And we do cache
+- * the last compound head, so generally we'll only do a full search if we don't
+- * match that one.
+- *
+- * We check if the given compound head page has already been accounted, to
+- * avoid double accounting it. This allows us to account the full size of the
+- * page, not just the constituent pages of a huge page.
+- */
+-static bool headpage_already_acct(struct io_ring_ctx *ctx, struct page **pages,
+-				  int nr_pages, struct page *hpage)
+-{
+-	int i, j;
+-
+-	/* check current page array */
+-	for (i = 0; i < nr_pages; i++) {
+-		if (!PageCompound(pages[i]))
+-			continue;
+-		if (compound_head(pages[i]) == hpage)
+-			return true;
+-	}
+-
+-	/* check previously registered pages */
+-	for (i = 0; i < ctx->nr_user_bufs; i++) {
+-		struct io_mapped_ubuf *imu = &ctx->user_bufs[i];
+-
+-		for (j = 0; j < imu->nr_bvecs; j++) {
+-			if (!PageCompound(imu->bvec[j].bv_page))
+-				continue;
+-			if (compound_head(imu->bvec[j].bv_page) == hpage)
+-				return true;
+-		}
+-	}
+-
+-	return false;
+-}
+-
+-static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
+-				 int nr_pages, struct io_mapped_ubuf *imu,
+-				 struct page **last_hpage)
+-{
+-	int i, ret;
+-
+-	for (i = 0; i < nr_pages; i++) {
+-		if (!PageCompound(pages[i])) {
+-			imu->acct_pages++;
+-		} else {
+-			struct page *hpage;
+-
+-			hpage = compound_head(pages[i]);
+-			if (hpage == *last_hpage)
+-				continue;
+-			*last_hpage = hpage;
+-			if (headpage_already_acct(ctx, pages, i, hpage))
+-				continue;
+-			imu->acct_pages += page_size(hpage) >> PAGE_SHIFT;
+-		}
+-	}
+-
+-	if (!imu->acct_pages)
+-		return 0;
+-
+-	ret = io_account_mem(ctx, imu->acct_pages, ACCT_PINNED);
+-	if (ret)
+-		imu->acct_pages = 0;
+-	return ret;
+-}
+-
+-static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg,
+-				  unsigned nr_args)
+-{
+-	struct vm_area_struct **vmas = NULL;
+-	struct page **pages = NULL;
+-	struct page *last_hpage = NULL;
+-	int i, j, got_pages = 0;
+-	int ret = -EINVAL;
+-
+-	if (ctx->user_bufs)
+-		return -EBUSY;
+-	if (!nr_args || nr_args > UIO_MAXIOV)
+-		return -EINVAL;
+-
+-	ctx->user_bufs = kcalloc(nr_args, sizeof(struct io_mapped_ubuf),
+-					GFP_KERNEL);
+-	if (!ctx->user_bufs)
+-		return -ENOMEM;
+-
+-	for (i = 0; i < nr_args; i++) {
+-		struct io_mapped_ubuf *imu = &ctx->user_bufs[i];
+-		unsigned long off, start, end, ubuf;
+-		int pret, nr_pages;
+-		struct iovec iov;
+-		size_t size;
+-
+-		ret = io_copy_iov(ctx, &iov, arg, i);
+-		if (ret)
+-			goto err;
+-
+-		/*
+-		 * Don't impose further limits on the size and buffer
+-		 * constraints here, we'll -EINVAL later when IO is
+-		 * submitted if they are wrong.
+-		 */
+-		ret = -EFAULT;
+-		if (!iov.iov_base || !iov.iov_len)
+-			goto err;
+-
+-		/* arbitrary limit, but we need something */
+-		if (iov.iov_len > SZ_1G)
+-			goto err;
+-
+-		ubuf = (unsigned long) iov.iov_base;
+-		end = (ubuf + iov.iov_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+-		start = ubuf >> PAGE_SHIFT;
+-		nr_pages = end - start;
+-
+-		ret = 0;
+-		if (!pages || nr_pages > got_pages) {
+-			kvfree(vmas);
+-			kvfree(pages);
+-			pages = kvmalloc_array(nr_pages, sizeof(struct page *),
+-						GFP_KERNEL);
+-			vmas = kvmalloc_array(nr_pages,
+-					sizeof(struct vm_area_struct *),
+-					GFP_KERNEL);
+-			if (!pages || !vmas) {
+-				ret = -ENOMEM;
+-				goto err;
+-			}
+-			got_pages = nr_pages;
+-		}
+-
+-		imu->bvec = kvmalloc_array(nr_pages, sizeof(struct bio_vec),
+-						GFP_KERNEL);
+-		ret = -ENOMEM;
+-		if (!imu->bvec)
+-			goto err;
+-
+-		ret = 0;
+-		mmap_read_lock(current->mm);
+-		pret = pin_user_pages(ubuf, nr_pages,
+-				      FOLL_WRITE | FOLL_LONGTERM,
+-				      pages, vmas);
+-		if (pret == nr_pages) {
+-			/* don't support file backed memory */
+-			for (j = 0; j < nr_pages; j++) {
+-				struct vm_area_struct *vma = vmas[j];
+-
+-				if (vma->vm_file &&
+-				    !is_file_hugepages(vma->vm_file)) {
+-					ret = -EOPNOTSUPP;
+-					break;
+-				}
+-			}
+-		} else {
+-			ret = pret < 0 ? pret : -EFAULT;
+-		}
+-		mmap_read_unlock(current->mm);
+-		if (ret) {
+-			/*
+-			 * if we did partial map, or found file backed vmas,
+-			 * release any pages we did get
+-			 */
+-			if (pret > 0)
+-				unpin_user_pages(pages, pret);
+-			kvfree(imu->bvec);
+-			goto err;
+-		}
+-
+-		ret = io_buffer_account_pin(ctx, pages, pret, imu, &last_hpage);
+-		if (ret) {
+-			unpin_user_pages(pages, pret);
+-			kvfree(imu->bvec);
+-			goto err;
+-		}
+-
+-		off = ubuf & ~PAGE_MASK;
+-		size = iov.iov_len;
+-		for (j = 0; j < nr_pages; j++) {
+-			size_t vec_len;
+-
+-			vec_len = min_t(size_t, size, PAGE_SIZE - off);
+-			imu->bvec[j].bv_page = pages[j];
+-			imu->bvec[j].bv_len = vec_len;
+-			imu->bvec[j].bv_offset = off;
+-			off = 0;
+-			size -= vec_len;
+-		}
+-		/* store original address for later verification */
+-		imu->ubuf = ubuf;
+-		imu->len = iov.iov_len;
+-		imu->nr_bvecs = nr_pages;
+-
+-		ctx->nr_user_bufs++;
+-	}
+-	kvfree(pages);
+-	kvfree(vmas);
+-	return 0;
+-err:
+-	kvfree(pages);
+-	kvfree(vmas);
+-	io_sqe_buffer_unregister(ctx);
+-	return ret;
+-}
+-
+-static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg)
+-{
+-	__s32 __user *fds = arg;
+-	int fd;
+-
+-	if (ctx->cq_ev_fd)
+-		return -EBUSY;
+-
+-	if (copy_from_user(&fd, fds, sizeof(*fds)))
+-		return -EFAULT;
+-
+-	ctx->cq_ev_fd = eventfd_ctx_fdget(fd);
+-	if (IS_ERR(ctx->cq_ev_fd)) {
+-		int ret = PTR_ERR(ctx->cq_ev_fd);
+-		ctx->cq_ev_fd = NULL;
+-		return ret;
+-	}
+-
+-	return 0;
+-}
+-
+-static int io_eventfd_unregister(struct io_ring_ctx *ctx)
+-{
+-	if (ctx->cq_ev_fd) {
+-		eventfd_ctx_put(ctx->cq_ev_fd);
+-		ctx->cq_ev_fd = NULL;
+-		return 0;
+-	}
+-
+-	return -ENXIO;
+-}
+-
+-static void io_destroy_buffers(struct io_ring_ctx *ctx)
+-{
+-	struct io_buffer *buf;
+-	unsigned long index;
+-
+-	xa_for_each(&ctx->io_buffers, index, buf)
+-		__io_remove_buffers(ctx, buf, index, -1U);
+-}
+-
+-static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+-{
+-	io_finish_async(ctx);
+-	io_sqe_buffer_unregister(ctx);
+-
+-	if (ctx->sqo_task) {
+-		put_task_struct(ctx->sqo_task);
+-		ctx->sqo_task = NULL;
+-	}
+-
+-#ifdef CONFIG_BLK_CGROUP
+-	if (ctx->sqo_blkcg_css)
+-		css_put(ctx->sqo_blkcg_css);
+-#endif
+-
+-	mutex_lock(&ctx->uring_lock);
+-	io_sqe_files_unregister(ctx);
+-	mutex_unlock(&ctx->uring_lock);
+-	io_eventfd_unregister(ctx);
+-	io_destroy_buffers(ctx);
+-
+-#if defined(CONFIG_UNIX)
+-	if (ctx->ring_sock) {
+-		ctx->ring_sock->file = NULL; /* so that iput() is called */
+-		sock_release(ctx->ring_sock);
+-	}
+-#endif
+-
+-	if (ctx->mm_account) {
+-		mmdrop(ctx->mm_account);
+-		ctx->mm_account = NULL;
+-	}
+-
+-	io_mem_free(ctx->rings);
+-	io_mem_free(ctx->sq_sqes);
+-
+-	percpu_ref_exit(&ctx->refs);
+-	free_uid(ctx->user);
+-	put_cred(ctx->creds);
+-	kfree(ctx->cancel_hash);
+-	kmem_cache_free(req_cachep, ctx->fallback_req);
+-	kfree(ctx);
+-}
+-
+-static __poll_t io_uring_poll(struct file *file, poll_table *wait)
+-{
+-	struct io_ring_ctx *ctx = file->private_data;
+-	__poll_t mask = 0;
+-
+-	poll_wait(file, &ctx->cq_wait, wait);
+-	/*
+-	 * synchronizes with barrier from wq_has_sleeper call in
+-	 * io_commit_cqring
+-	 */
+-	smp_rmb();
+-	if (!io_sqring_full(ctx))
+-		mask |= EPOLLOUT | EPOLLWRNORM;
+-
+-	/*
+-	 * Don't flush cqring overflow list here, just do a simple check.
+-	 * Otherwise there could possible be ABBA deadlock:
+-	 *      CPU0                    CPU1
+-	 *      ----                    ----
+-	 * lock(&ctx->uring_lock);
+-	 *                              lock(&ep->mtx);
+-	 *                              lock(&ctx->uring_lock);
+-	 * lock(&ep->mtx);
+-	 *
+-	 * Users may get EPOLLIN meanwhile seeing nothing in cqring, this
+-	 * pushs them to do the flush.
+-	 */
+-	if (io_cqring_events(ctx) || test_bit(0, &ctx->cq_check_overflow))
+-		mask |= EPOLLIN | EPOLLRDNORM;
+-
+-	return mask;
+-}
+-
+-static int io_uring_fasync(int fd, struct file *file, int on)
+-{
+-	struct io_ring_ctx *ctx = file->private_data;
+-
+-	return fasync_helper(fd, file, on, &ctx->cq_fasync);
+-}
+-
+-static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
+-{
+-	struct io_identity *iod;
+-
+-	iod = xa_erase(&ctx->personalities, id);
+-	if (iod) {
+-		put_cred(iod->creds);
+-		if (refcount_dec_and_test(&iod->count))
+-			kfree(iod);
+-		return 0;
+-	}
+-
+-	return -EINVAL;
+-}
+-
+-static void io_ring_exit_work(struct work_struct *work)
+-{
+-	struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
+-					       exit_work);
+-
+-	/*
+-	 * If we're doing polled IO and end up having requests being
+-	 * submitted async (out-of-line), then completions can come in while
+-	 * we're waiting for refs to drop. We need to reap these manually,
+-	 * as nobody else will be looking for them.
+-	 */
+-	do {
+-		io_iopoll_try_reap_events(ctx);
+-	} while (!wait_for_completion_timeout(&ctx->ref_comp, HZ/20));
+-	io_ring_ctx_free(ctx);
+-}
+-
+-static bool io_cancel_ctx_cb(struct io_wq_work *work, void *data)
+-{
+-	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+-
+-	return req->ctx == data;
+-}
+-
+-static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+-{
+-	unsigned long index;
+-	struct io_identify *iod;
+-
+-	mutex_lock(&ctx->uring_lock);
+-	percpu_ref_kill(&ctx->refs);
+-	/* if force is set, the ring is going away. always drop after that */
+-
+-	if (WARN_ON_ONCE((ctx->flags & IORING_SETUP_SQPOLL) && !ctx->sqo_dead))
+-		ctx->sqo_dead = 1;
+-
+-	ctx->cq_overflow_flushed = 1;
+-	if (ctx->rings)
+-		__io_cqring_overflow_flush(ctx, true, NULL, NULL);
+-	mutex_unlock(&ctx->uring_lock);
+-
+-	io_kill_timeouts(ctx, NULL, NULL);
+-	io_poll_remove_all(ctx, NULL, NULL);
+-
+-	if (ctx->io_wq)
+-		io_wq_cancel_cb(ctx->io_wq, io_cancel_ctx_cb, ctx, true);
+-
+-	/* if we failed setting up the ctx, we might not have any rings */
+-	io_iopoll_try_reap_events(ctx);
+-	xa_for_each(&ctx->personalities, index, iod)
+-		 io_unregister_personality(ctx, index);
+-
+-	/*
+-	 * Do this upfront, so we won't have a grace period where the ring
+-	 * is closed but resources aren't reaped yet. This can cause
+-	 * spurious failure in setting up a new ring.
+-	 */
+-	io_unaccount_mem(ctx, ring_pages(ctx->sq_entries, ctx->cq_entries),
+-			 ACCT_LOCKED);
+-
+-	INIT_WORK(&ctx->exit_work, io_ring_exit_work);
+-	/*
+-	 * Use system_unbound_wq to avoid spawning tons of event kworkers
+-	 * if we're exiting a ton of rings at the same time. It just adds
+-	 * noise and overhead, there's no discernable change in runtime
+-	 * over using system_wq.
+-	 */
+-	queue_work(system_unbound_wq, &ctx->exit_work);
+-}
+-
+-static int io_uring_release(struct inode *inode, struct file *file)
+-{
+-	struct io_ring_ctx *ctx = file->private_data;
+-
+-	file->private_data = NULL;
+-	io_ring_ctx_wait_and_kill(ctx);
+-	return 0;
+-}
+-
+-struct io_task_cancel {
+-	struct task_struct *task;
+-	struct files_struct *files;
+-};
+-
+-static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
+-{
+-	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+-	struct io_task_cancel *cancel = data;
+-	bool ret;
+-
+-	if (cancel->files && (req->flags & REQ_F_LINK_TIMEOUT)) {
+-		unsigned long flags;
+-		struct io_ring_ctx *ctx = req->ctx;
+-
+-		/* protect against races with linked timeouts */
+-		spin_lock_irqsave(&ctx->completion_lock, flags);
+-		ret = io_match_task(req, cancel->task, cancel->files);
+-		spin_unlock_irqrestore(&ctx->completion_lock, flags);
+-	} else {
+-		ret = io_match_task(req, cancel->task, cancel->files);
+-	}
+-	return ret;
+-}
+-
+-static void io_cancel_defer_files(struct io_ring_ctx *ctx,
+-				  struct task_struct *task,
+-				  struct files_struct *files)
+-{
+-	struct io_defer_entry *de = NULL;
+-	LIST_HEAD(list);
+-
+-	spin_lock_irq(&ctx->completion_lock);
+-	list_for_each_entry_reverse(de, &ctx->defer_list, list) {
+-		if (io_match_task(de->req, task, files)) {
+-			list_cut_position(&list, &ctx->defer_list, &de->list);
+-			break;
+-		}
+-	}
+-	spin_unlock_irq(&ctx->completion_lock);
+-
+-	while (!list_empty(&list)) {
+-		de = list_first_entry(&list, struct io_defer_entry, list);
+-		list_del_init(&de->list);
+-		req_set_fail_links(de->req);
+-		io_put_req(de->req);
+-		io_req_complete(de->req, -ECANCELED);
+-		kfree(de);
+-	}
+-}
+-
+-static int io_uring_count_inflight(struct io_ring_ctx *ctx,
+-				   struct task_struct *task,
+-				   struct files_struct *files)
+-{
+-	struct io_kiocb *req;
+-	int cnt = 0;
+-
+-	spin_lock_irq(&ctx->inflight_lock);
+-	list_for_each_entry(req, &ctx->inflight_list, inflight_entry)
+-		cnt += io_match_task(req, task, files);
+-	spin_unlock_irq(&ctx->inflight_lock);
+-	return cnt;
+-}
+-
+-static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+-				  struct task_struct *task,
+-				  struct files_struct *files)
+-{
+-	while (!list_empty_careful(&ctx->inflight_list)) {
+-		struct io_task_cancel cancel = { .task = task, .files = files };
+-		DEFINE_WAIT(wait);
+-		int inflight;
+-
+-		inflight = io_uring_count_inflight(ctx, task, files);
+-		if (!inflight)
+-			break;
+-
+-		io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, &cancel, true);
+-		io_poll_remove_all(ctx, task, files);
+-		io_kill_timeouts(ctx, task, files);
+-		/* cancellations _may_ trigger task work */
+-		io_run_task_work();
+-
+-		prepare_to_wait(&task->io_uring->wait, &wait,
+-				TASK_UNINTERRUPTIBLE);
+-		if (inflight == io_uring_count_inflight(ctx, task, files))
+-			schedule();
+-		finish_wait(&task->io_uring->wait, &wait);
+-	}
+-}
+-
+-static void __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+-					    struct task_struct *task)
+-{
+-	while (1) {
+-		struct io_task_cancel cancel = { .task = task, .files = NULL, };
+-		enum io_wq_cancel cret;
+-		bool ret = false;
+-
+-		cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, &cancel, true);
+-		if (cret != IO_WQ_CANCEL_NOTFOUND)
+-			ret = true;
+-
+-		/* SQPOLL thread does its own polling */
+-		if (!(ctx->flags & IORING_SETUP_SQPOLL)) {
+-			while (!list_empty_careful(&ctx->iopoll_list)) {
+-				io_iopoll_try_reap_events(ctx);
+-				ret = true;
+-			}
+-		}
+-
+-		ret |= io_poll_remove_all(ctx, task, NULL);
+-		ret |= io_kill_timeouts(ctx, task, NULL);
+-		if (!ret)
+-			break;
+-		io_run_task_work();
+-		cond_resched();
+-	}
+-}
+-
+-static void io_disable_sqo_submit(struct io_ring_ctx *ctx)
+-{
+-	mutex_lock(&ctx->uring_lock);
+-	ctx->sqo_dead = 1;
+-	if (ctx->flags & IORING_SETUP_R_DISABLED)
+-		io_sq_offload_start(ctx);
+-	mutex_unlock(&ctx->uring_lock);
+-
+-	/* make sure callers enter the ring to get error */
+-	if (ctx->rings)
+-		io_ring_set_wakeup_flag(ctx);
+-}
+-
+-/*
+- * We need to iteratively cancel requests, in case a request has dependent
+- * hard links. These persist even for failure of cancelations, hence keep
+- * looping until none are found.
+- */
+-static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+-					  struct files_struct *files)
+-{
+-	struct task_struct *task = current;
+-
+-	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
+-		io_disable_sqo_submit(ctx);
+-		task = ctx->sq_data->thread;
+-		atomic_inc(&task->io_uring->in_idle);
+-		io_sq_thread_park(ctx->sq_data);
+-	}
+-
+-	io_cancel_defer_files(ctx, task, files);
+-	io_cqring_overflow_flush(ctx, true, task, files);
+-
+-	if (!files)
+-		__io_uring_cancel_task_requests(ctx, task);
+-	else
+-		io_uring_cancel_files(ctx, task, files);
+-
+-	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
+-		atomic_dec(&task->io_uring->in_idle);
+-		io_sq_thread_unpark(ctx->sq_data);
+-	}
+-}
+-
+-/*
+- * Note that this task has used io_uring. We use it for cancelation purposes.
+- */
+-static int io_uring_add_task_file(struct io_ring_ctx *ctx, struct file *file)
+-{
+-	struct io_uring_task *tctx = current->io_uring;
+-	int ret;
+-
+-	if (unlikely(!tctx)) {
+-		ret = io_uring_alloc_task_context(current);
+-		if (unlikely(ret))
+-			return ret;
+-		tctx = current->io_uring;
+-	}
+-	if (tctx->last != file) {
+-		void *old = xa_load(&tctx->xa, (unsigned long)file);
+-
+-		if (!old) {
+-			get_file(file);
+-			ret = xa_err(xa_store(&tctx->xa, (unsigned long)file,
+-						file, GFP_KERNEL));
+-			if (ret) {
+-				fput(file);
+-				return ret;
+-			}
+-		}
+-		tctx->last = file;
+-	}
+-
+-	/*
+-	 * This is race safe in that the task itself is doing this, hence it
+-	 * cannot be going through the exit/cancel paths at the same time.
+-	 * This cannot be modified while exit/cancel is running.
+-	 */
+-	if (!tctx->sqpoll && (ctx->flags & IORING_SETUP_SQPOLL))
+-		tctx->sqpoll = true;
+-
+-	return 0;
+-}
+-
+-/*
+- * Remove this io_uring_file -> task mapping.
+- */
+-static void io_uring_del_task_file(struct file *file)
+-{
+-	struct io_uring_task *tctx = current->io_uring;
+-
+-	if (tctx->last == file)
+-		tctx->last = NULL;
+-	file = xa_erase(&tctx->xa, (unsigned long)file);
+-	if (file)
+-		fput(file);
+-}
+-
+-static void io_uring_remove_task_files(struct io_uring_task *tctx)
+-{
+-	struct file *file;
+-	unsigned long index;
+-
+-	xa_for_each(&tctx->xa, index, file)
+-		io_uring_del_task_file(file);
+-}
+-
+-void __io_uring_files_cancel(struct files_struct *files)
+-{
+-	struct io_uring_task *tctx = current->io_uring;
+-	struct file *file;
+-	unsigned long index;
+-
+-	/* make sure overflow events are dropped */
+-	atomic_inc(&tctx->in_idle);
+-	xa_for_each(&tctx->xa, index, file)
+-		io_uring_cancel_task_requests(file->private_data, files);
+-	atomic_dec(&tctx->in_idle);
+-
+-	if (files)
+-		io_uring_remove_task_files(tctx);
+-}
+-
+-static s64 tctx_inflight(struct io_uring_task *tctx)
+-{
+-	unsigned long index;
+-	struct file *file;
+-	s64 inflight;
+-
+-	inflight = percpu_counter_sum(&tctx->inflight);
+-	if (!tctx->sqpoll)
+-		return inflight;
+-
+-	/*
+-	 * If we have SQPOLL rings, then we need to iterate and find them, and
+-	 * add the pending count for those.
+-	 */
+-	xa_for_each(&tctx->xa, index, file) {
+-		struct io_ring_ctx *ctx = file->private_data;
+-
+-		if (ctx->flags & IORING_SETUP_SQPOLL) {
+-			struct io_uring_task *__tctx = ctx->sqo_task->io_uring;
+-
+-			inflight += percpu_counter_sum(&__tctx->inflight);
+-		}
+-	}
+-
+-	return inflight;
+-}
+-
+-/*
+- * Find any io_uring fd that this task has registered or done IO on, and cancel
+- * requests.
+- */
+-void __io_uring_task_cancel(void)
+-{
+-	struct io_uring_task *tctx = current->io_uring;
+-	DEFINE_WAIT(wait);
+-	s64 inflight;
+-
+-	/* make sure overflow events are dropped */
+-	atomic_inc(&tctx->in_idle);
+-
+-	/* trigger io_disable_sqo_submit() */
+-	if (tctx->sqpoll)
+-		__io_uring_files_cancel(NULL);
+-
+-	do {
+-		/* read completions before cancelations */
+-		inflight = tctx_inflight(tctx);
+-		if (!inflight)
+-			break;
+-		__io_uring_files_cancel(NULL);
+-
+-		prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
+-
+-		/*
+-		 * If we've seen completions, retry without waiting. This
+-		 * avoids a race where a completion comes in before we did
+-		 * prepare_to_wait().
+-		 */
+-		if (inflight == tctx_inflight(tctx))
+-			schedule();
+-		finish_wait(&tctx->wait, &wait);
+-	} while (1);
+-
+-	atomic_dec(&tctx->in_idle);
+-
+-	io_uring_remove_task_files(tctx);
+-}
+-
+-static int io_uring_flush(struct file *file, void *data)
+-{
+-	struct io_uring_task *tctx = current->io_uring;
+-	struct io_ring_ctx *ctx = file->private_data;
+-
+-	if (fatal_signal_pending(current) || (current->flags & PF_EXITING))
+-		io_uring_cancel_task_requests(ctx, NULL);
+-
+-	if (!tctx)
+-		return 0;
+-
+-	/* we should have cancelled and erased it before PF_EXITING */
+-	WARN_ON_ONCE((current->flags & PF_EXITING) &&
+-		     xa_load(&tctx->xa, (unsigned long)file));
+-
+-	/*
+-	 * fput() is pending, will be 2 if the only other ref is our potential
+-	 * task file note. If the task is exiting, drop regardless of count.
+-	 */
+-	if (atomic_long_read(&file->f_count) != 2)
+-		return 0;
+-
+-	if (ctx->flags & IORING_SETUP_SQPOLL) {
+-		/* there is only one file note, which is owned by sqo_task */
+-		WARN_ON_ONCE(ctx->sqo_task != current &&
+-			     xa_load(&tctx->xa, (unsigned long)file));
+-		/* sqo_dead check is for when this happens after cancellation */
+-		WARN_ON_ONCE(ctx->sqo_task == current && !ctx->sqo_dead &&
+-			     !xa_load(&tctx->xa, (unsigned long)file));
+-
+-		io_disable_sqo_submit(ctx);
+-	}
+-
+-	if (!(ctx->flags & IORING_SETUP_SQPOLL) || ctx->sqo_task == current)
+-		io_uring_del_task_file(file);
+-	return 0;
+-}
+-
+-static void *io_uring_validate_mmap_request(struct file *file,
+-					    loff_t pgoff, size_t sz)
+-{
+-	struct io_ring_ctx *ctx = file->private_data;
+-	loff_t offset = pgoff << PAGE_SHIFT;
+-	struct page *page;
+-	void *ptr;
+-
+-	switch (offset) {
+-	case IORING_OFF_SQ_RING:
+-	case IORING_OFF_CQ_RING:
+-		ptr = ctx->rings;
+-		break;
+-	case IORING_OFF_SQES:
+-		ptr = ctx->sq_sqes;
+-		break;
+-	default:
+-		return ERR_PTR(-EINVAL);
+-	}
+-
+-	page = virt_to_head_page(ptr);
+-	if (sz > page_size(page))
+-		return ERR_PTR(-EINVAL);
+-
+-	return ptr;
+-}
+-
+-#ifdef CONFIG_MMU
+-
+-static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
+-{
+-	size_t sz = vma->vm_end - vma->vm_start;
+-	unsigned long pfn;
+-	void *ptr;
+-
+-	ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff, sz);
+-	if (IS_ERR(ptr))
+-		return PTR_ERR(ptr);
+-
+-	pfn = virt_to_phys(ptr) >> PAGE_SHIFT;
+-	return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
+-}
+-
+-#else /* !CONFIG_MMU */
+-
+-static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
+-{
+-	return vma->vm_flags & (VM_SHARED | VM_MAYSHARE) ? 0 : -EINVAL;
+-}
+-
+-static unsigned int io_uring_nommu_mmap_capabilities(struct file *file)
+-{
+-	return NOMMU_MAP_DIRECT | NOMMU_MAP_READ | NOMMU_MAP_WRITE;
+-}
+-
+-static unsigned long io_uring_nommu_get_unmapped_area(struct file *file,
+-	unsigned long addr, unsigned long len,
+-	unsigned long pgoff, unsigned long flags)
+-{
+-	void *ptr;
+-
+-	ptr = io_uring_validate_mmap_request(file, pgoff, len);
+-	if (IS_ERR(ptr))
+-		return PTR_ERR(ptr);
+-
+-	return (unsigned long) ptr;
+-}
+-
+-#endif /* !CONFIG_MMU */
+-
+-static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
+-{
+-	int ret = 0;
+-	DEFINE_WAIT(wait);
+-
+-	do {
+-		if (!io_sqring_full(ctx))
+-			break;
+-
+-		prepare_to_wait(&ctx->sqo_sq_wait, &wait, TASK_INTERRUPTIBLE);
+-
+-		if (unlikely(ctx->sqo_dead)) {
+-			ret = -EOWNERDEAD;
+-			break;
+-		}
+-
+-		if (!io_sqring_full(ctx))
+-			break;
+-
+-		schedule();
+-	} while (!signal_pending(current));
+-
+-	finish_wait(&ctx->sqo_sq_wait, &wait);
+-	return ret;
+-}
+-
+-SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+-		u32, min_complete, u32, flags, const sigset_t __user *, sig,
+-		size_t, sigsz)
+-{
+-	struct io_ring_ctx *ctx;
+-	long ret = -EBADF;
+-	int submitted = 0;
+-	struct fd f;
+-
+-	io_run_task_work();
+-
+-	if (flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
+-			IORING_ENTER_SQ_WAIT))
+-		return -EINVAL;
+-
+-	f = fdget(fd);
+-	if (!f.file)
+-		return -EBADF;
+-
+-	ret = -EOPNOTSUPP;
+-	if (f.file->f_op != &io_uring_fops)
+-		goto out_fput;
+-
+-	ret = -ENXIO;
+-	ctx = f.file->private_data;
+-	if (!percpu_ref_tryget(&ctx->refs))
+-		goto out_fput;
+-
+-	ret = -EBADFD;
+-	if (ctx->flags & IORING_SETUP_R_DISABLED)
+-		goto out;
+-
+-	/*
+-	 * For SQ polling, the thread will do all submissions and completions.
+-	 * Just return the requested submit count, and wake the thread if
+-	 * we were asked to.
+-	 */
+-	ret = 0;
+-	if (ctx->flags & IORING_SETUP_SQPOLL) {
+-		io_cqring_overflow_flush(ctx, false, NULL, NULL);
+-
+-		if (unlikely(ctx->sqo_dead)) {
+-			ret = -EOWNERDEAD;
+-			goto out;
+-		}
+-		if (flags & IORING_ENTER_SQ_WAKEUP)
+-			wake_up(&ctx->sq_data->wait);
+-		if (flags & IORING_ENTER_SQ_WAIT) {
+-			ret = io_sqpoll_wait_sq(ctx);
+-			if (ret)
+-				goto out;
+-		}
+-		submitted = to_submit;
+-	} else if (to_submit) {
+-		ret = io_uring_add_task_file(ctx, f.file);
+-		if (unlikely(ret))
+-			goto out;
+-		mutex_lock(&ctx->uring_lock);
+-		submitted = io_submit_sqes(ctx, to_submit);
+-		mutex_unlock(&ctx->uring_lock);
+-
+-		if (submitted != to_submit)
+-			goto out;
+-	}
+-	if (flags & IORING_ENTER_GETEVENTS) {
+-		min_complete = min(min_complete, ctx->cq_entries);
+-
+-		/*
+-		 * When SETUP_IOPOLL and SETUP_SQPOLL are both enabled, user
+-		 * space applications don't need to do io completion events
+-		 * polling again, they can rely on io_sq_thread to do polling
+-		 * work, which can reduce cpu usage and uring_lock contention.
+-		 */
+-		if (ctx->flags & IORING_SETUP_IOPOLL &&
+-		    !(ctx->flags & IORING_SETUP_SQPOLL)) {
+-			ret = io_iopoll_check(ctx, min_complete);
+-		} else {
+-			ret = io_cqring_wait(ctx, min_complete, sig, sigsz);
+-		}
+-	}
+-
+-out:
+-	percpu_ref_put(&ctx->refs);
+-out_fput:
+-	fdput(f);
+-	return submitted ? submitted : ret;
+-}
+-
+-#ifdef CONFIG_PROC_FS
+-static int io_uring_show_cred(struct seq_file *m, unsigned int id,
+-		const struct io_identity *iod)
+-{
+-	const struct cred *cred = iod->creds;
+-	struct user_namespace *uns = seq_user_ns(m);
+-	struct group_info *gi;
+-	kernel_cap_t cap;
+-	unsigned __capi;
+-	int g;
+-
+-	seq_printf(m, "%5d\n", id);
+-	seq_put_decimal_ull(m, "\tUid:\t", from_kuid_munged(uns, cred->uid));
+-	seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->euid));
+-	seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->suid));
+-	seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->fsuid));
+-	seq_put_decimal_ull(m, "\n\tGid:\t", from_kgid_munged(uns, cred->gid));
+-	seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->egid));
+-	seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->sgid));
+-	seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->fsgid));
+-	seq_puts(m, "\n\tGroups:\t");
+-	gi = cred->group_info;
+-	for (g = 0; g < gi->ngroups; g++) {
+-		seq_put_decimal_ull(m, g ? " " : "",
+-					from_kgid_munged(uns, gi->gid[g]));
+-	}
+-	seq_puts(m, "\n\tCapEff:\t");
+-	cap = cred->cap_effective;
+-	CAP_FOR_EACH_U32(__capi)
+-		seq_put_hex_ll(m, NULL, cap.cap[CAP_LAST_U32 - __capi], 8);
+-	seq_putc(m, '\n');
+-	return 0;
+-}
+-
+-static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
+-{
+-	struct io_sq_data *sq = NULL;
+-	bool has_lock;
+-	int i;
+-
+-	/*
+-	 * Avoid ABBA deadlock between the seq lock and the io_uring mutex,
+-	 * since fdinfo case grabs it in the opposite direction of normal use
+-	 * cases. If we fail to get the lock, we just don't iterate any
+-	 * structures that could be going away outside the io_uring mutex.
+-	 */
+-	has_lock = mutex_trylock(&ctx->uring_lock);
+-
+-	if (has_lock && (ctx->flags & IORING_SETUP_SQPOLL))
+-		sq = ctx->sq_data;
+-
+-	seq_printf(m, "SqThread:\t%d\n", sq ? task_pid_nr(sq->thread) : -1);
+-	seq_printf(m, "SqThreadCpu:\t%d\n", sq ? task_cpu(sq->thread) : -1);
+-	seq_printf(m, "UserFiles:\t%u\n", ctx->nr_user_files);
+-	for (i = 0; has_lock && i < ctx->nr_user_files; i++) {
+-		struct fixed_file_table *table;
+-		struct file *f;
+-
+-		table = &ctx->file_data->table[i >> IORING_FILE_TABLE_SHIFT];
+-		f = table->files[i & IORING_FILE_TABLE_MASK];
+-		if (f)
+-			seq_printf(m, "%5u: %s\n", i, file_dentry(f)->d_iname);
+-		else
+-			seq_printf(m, "%5u: <none>\n", i);
+-	}
+-	seq_printf(m, "UserBufs:\t%u\n", ctx->nr_user_bufs);
+-	for (i = 0; has_lock && i < ctx->nr_user_bufs; i++) {
+-		struct io_mapped_ubuf *buf = &ctx->user_bufs[i];
+-
+-		seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->ubuf,
+-						(unsigned int) buf->len);
+-	}
+-	if (has_lock && !xa_empty(&ctx->personalities)) {
+-		unsigned long index;
+-		const struct io_identity *iod;
+-
+-		seq_printf(m, "Personalities:\n");
+-		xa_for_each(&ctx->personalities, index, iod)
+-			io_uring_show_cred(m, index, iod);
+-	}
+-	seq_printf(m, "PollList:\n");
+-	spin_lock_irq(&ctx->completion_lock);
+-	for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
+-		struct hlist_head *list = &ctx->cancel_hash[i];
+-		struct io_kiocb *req;
+-
+-		hlist_for_each_entry(req, list, hash_node)
+-			seq_printf(m, "  op=%d, task_works=%d\n", req->opcode,
+-					req->task->task_works != NULL);
+-	}
+-	spin_unlock_irq(&ctx->completion_lock);
+-	if (has_lock)
+-		mutex_unlock(&ctx->uring_lock);
+-}
+-
+-static void io_uring_show_fdinfo(struct seq_file *m, struct file *f)
+-{
+-	struct io_ring_ctx *ctx = f->private_data;
+-
+-	if (percpu_ref_tryget(&ctx->refs)) {
+-		__io_uring_show_fdinfo(ctx, m);
+-		percpu_ref_put(&ctx->refs);
+-	}
+-}
+-#endif
+-
+-static const struct file_operations io_uring_fops = {
+-	.release	= io_uring_release,
+-	.flush		= io_uring_flush,
+-	.mmap		= io_uring_mmap,
+-#ifndef CONFIG_MMU
+-	.get_unmapped_area = io_uring_nommu_get_unmapped_area,
+-	.mmap_capabilities = io_uring_nommu_mmap_capabilities,
+-#endif
+-	.poll		= io_uring_poll,
+-	.fasync		= io_uring_fasync,
+-#ifdef CONFIG_PROC_FS
+-	.show_fdinfo	= io_uring_show_fdinfo,
+-#endif
+-};
+-
+-static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
+-				  struct io_uring_params *p)
+-{
+-	struct io_rings *rings;
+-	size_t size, sq_array_offset;
+-
+-	/* make sure these are sane, as we already accounted them */
+-	ctx->sq_entries = p->sq_entries;
+-	ctx->cq_entries = p->cq_entries;
+-
+-	size = rings_size(p->sq_entries, p->cq_entries, &sq_array_offset);
+-	if (size == SIZE_MAX)
+-		return -EOVERFLOW;
+-
+-	rings = io_mem_alloc(size);
+-	if (!rings)
+-		return -ENOMEM;
+-
+-	ctx->rings = rings;
+-	ctx->sq_array = (u32 *)((char *)rings + sq_array_offset);
+-	rings->sq_ring_mask = p->sq_entries - 1;
+-	rings->cq_ring_mask = p->cq_entries - 1;
+-	rings->sq_ring_entries = p->sq_entries;
+-	rings->cq_ring_entries = p->cq_entries;
+-	ctx->sq_mask = rings->sq_ring_mask;
+-	ctx->cq_mask = rings->cq_ring_mask;
+-
+-	size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
+-	if (size == SIZE_MAX) {
+-		io_mem_free(ctx->rings);
+-		ctx->rings = NULL;
+-		return -EOVERFLOW;
+-	}
+-
+-	ctx->sq_sqes = io_mem_alloc(size);
+-	if (!ctx->sq_sqes) {
+-		io_mem_free(ctx->rings);
+-		ctx->rings = NULL;
+-		return -ENOMEM;
+-	}
+-
+-	return 0;
+-}
+-
+-static int io_uring_install_fd(struct io_ring_ctx *ctx, struct file *file)
+-{
+-	int ret, fd;
+-
+-	fd = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
+-	if (fd < 0)
+-		return fd;
+-
+-	ret = io_uring_add_task_file(ctx, file);
+-	if (ret) {
+-		put_unused_fd(fd);
+-		return ret;
+-	}
+-	fd_install(fd, file);
+-	return fd;
+-}
+-
+-/*
+- * Allocate an anonymous fd, this is what constitutes the application
+- * visible backing of an io_uring instance. The application mmaps this
+- * fd to gain access to the SQ/CQ ring details. If UNIX sockets are enabled,
+- * we have to tie this fd to a socket for file garbage collection purposes.
+- */
+-static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
+-{
+-	struct file *file;
+-#if defined(CONFIG_UNIX)
+-	int ret;
+-
+-	ret = sock_create_kern(&init_net, PF_UNIX, SOCK_RAW, IPPROTO_IP,
+-				&ctx->ring_sock);
+-	if (ret)
+-		return ERR_PTR(ret);
+-#endif
+-
+-	file = anon_inode_getfile("[io_uring]", &io_uring_fops, ctx,
+-					O_RDWR | O_CLOEXEC);
+-#if defined(CONFIG_UNIX)
+-	if (IS_ERR(file)) {
+-		sock_release(ctx->ring_sock);
+-		ctx->ring_sock = NULL;
+-	} else {
+-		ctx->ring_sock->file = file;
+-	}
+-#endif
+-	return file;
+-}
+-
+-static int io_uring_create(unsigned entries, struct io_uring_params *p,
+-			   struct io_uring_params __user *params)
+-{
+-	struct user_struct *user = NULL;
+-	struct io_ring_ctx *ctx;
+-	struct file *file;
+-	bool limit_mem;
+-	int ret;
+-
+-	if (!entries)
+-		return -EINVAL;
+-	if (entries > IORING_MAX_ENTRIES) {
+-		if (!(p->flags & IORING_SETUP_CLAMP))
+-			return -EINVAL;
+-		entries = IORING_MAX_ENTRIES;
+-	}
+-
+-	/*
+-	 * Use twice as many entries for the CQ ring. It's possible for the
+-	 * application to drive a higher depth than the size of the SQ ring,
+-	 * since the sqes are only used at submission time. This allows for
+-	 * some flexibility in overcommitting a bit. If the application has
+-	 * set IORING_SETUP_CQSIZE, it will have passed in the desired number
+-	 * of CQ ring entries manually.
+-	 */
+-	p->sq_entries = roundup_pow_of_two(entries);
+-	if (p->flags & IORING_SETUP_CQSIZE) {
+-		/*
+-		 * If IORING_SETUP_CQSIZE is set, we do the same roundup
+-		 * to a power-of-two, if it isn't already. We do NOT impose
+-		 * any cq vs sq ring sizing.
+-		 */
+-		if (!p->cq_entries)
+-			return -EINVAL;
+-		if (p->cq_entries > IORING_MAX_CQ_ENTRIES) {
+-			if (!(p->flags & IORING_SETUP_CLAMP))
+-				return -EINVAL;
+-			p->cq_entries = IORING_MAX_CQ_ENTRIES;
+-		}
+-		p->cq_entries = roundup_pow_of_two(p->cq_entries);
+-		if (p->cq_entries < p->sq_entries)
+-			return -EINVAL;
+-	} else {
+-		p->cq_entries = 2 * p->sq_entries;
+-	}
+-
+-	user = get_uid(current_user());
+-	limit_mem = !capable(CAP_IPC_LOCK);
+-
+-	if (limit_mem) {
+-		ret = __io_account_mem(user,
+-				ring_pages(p->sq_entries, p->cq_entries));
+-		if (ret) {
+-			free_uid(user);
+-			return ret;
+-		}
+-	}
+-
+-	ctx = io_ring_ctx_alloc(p);
+-	if (!ctx) {
+-		if (limit_mem)
+-			__io_unaccount_mem(user, ring_pages(p->sq_entries,
+-								p->cq_entries));
+-		free_uid(user);
+-		return -ENOMEM;
+-	}
+-	ctx->compat = in_compat_syscall();
+-	ctx->user = user;
+-	ctx->creds = get_current_cred();
+-#ifdef CONFIG_AUDIT
+-	ctx->loginuid = current->loginuid;
+-	ctx->sessionid = current->sessionid;
+-#endif
+-	ctx->sqo_task = get_task_struct(current);
+-
+-	/*
+-	 * This is just grabbed for accounting purposes. When a process exits,
+-	 * the mm is exited and dropped before the files, hence we need to hang
+-	 * on to this mm purely for the purposes of being able to unaccount
+-	 * memory (locked/pinned vm). It's not used for anything else.
+-	 */
+-	mmgrab(current->mm);
+-	ctx->mm_account = current->mm;
+-
+-#ifdef CONFIG_BLK_CGROUP
+-	/*
+-	 * The sq thread will belong to the original cgroup it was inited in.
+-	 * If the cgroup goes offline (e.g. disabling the io controller), then
+-	 * issued bios will be associated with the closest cgroup later in the
+-	 * block layer.
+-	 */
+-	rcu_read_lock();
+-	ctx->sqo_blkcg_css = blkcg_css();
+-	ret = css_tryget_online(ctx->sqo_blkcg_css);
+-	rcu_read_unlock();
+-	if (!ret) {
+-		/* don't init against a dying cgroup, have the user try again */
+-		ctx->sqo_blkcg_css = NULL;
+-		ret = -ENODEV;
+-		goto err;
+-	}
+-#endif
+-
+-	/*
+-	 * Account memory _before_ installing the file descriptor. Once
+-	 * the descriptor is installed, it can get closed at any time. Also
+-	 * do this before hitting the general error path, as ring freeing
+-	 * will un-account as well.
+-	 */
+-	io_account_mem(ctx, ring_pages(p->sq_entries, p->cq_entries),
+-		       ACCT_LOCKED);
+-	ctx->limit_mem = limit_mem;
+-
+-	ret = io_allocate_scq_urings(ctx, p);
+-	if (ret)
+-		goto err;
+-
+-	ret = io_sq_offload_create(ctx, p);
+-	if (ret)
+-		goto err;
+-
+-	if (!(p->flags & IORING_SETUP_R_DISABLED))
+-		io_sq_offload_start(ctx);
+-
+-	memset(&p->sq_off, 0, sizeof(p->sq_off));
+-	p->sq_off.head = offsetof(struct io_rings, sq.head);
+-	p->sq_off.tail = offsetof(struct io_rings, sq.tail);
+-	p->sq_off.ring_mask = offsetof(struct io_rings, sq_ring_mask);
+-	p->sq_off.ring_entries = offsetof(struct io_rings, sq_ring_entries);
+-	p->sq_off.flags = offsetof(struct io_rings, sq_flags);
+-	p->sq_off.dropped = offsetof(struct io_rings, sq_dropped);
+-	p->sq_off.array = (char *)ctx->sq_array - (char *)ctx->rings;
+-
+-	memset(&p->cq_off, 0, sizeof(p->cq_off));
+-	p->cq_off.head = offsetof(struct io_rings, cq.head);
+-	p->cq_off.tail = offsetof(struct io_rings, cq.tail);
+-	p->cq_off.ring_mask = offsetof(struct io_rings, cq_ring_mask);
+-	p->cq_off.ring_entries = offsetof(struct io_rings, cq_ring_entries);
+-	p->cq_off.overflow = offsetof(struct io_rings, cq_overflow);
+-	p->cq_off.cqes = offsetof(struct io_rings, cqes);
+-	p->cq_off.flags = offsetof(struct io_rings, cq_flags);
+-
+-	p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP |
+-			IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS |
+-			IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL |
+-			IORING_FEAT_POLL_32BITS;
+-
+-	if (copy_to_user(params, p, sizeof(*p))) {
+-		ret = -EFAULT;
+-		goto err;
+-	}
+-
+-	file = io_uring_get_file(ctx);
+-	if (IS_ERR(file)) {
+-		ret = PTR_ERR(file);
+-		goto err;
+-	}
+-
+-	/*
+-	 * Install ring fd as the very last thing, so we don't risk someone
+-	 * having closed it before we finish setup
+-	 */
+-	ret = io_uring_install_fd(ctx, file);
+-	if (ret < 0) {
+-		io_disable_sqo_submit(ctx);
+-		/* fput will clean it up */
+-		fput(file);
+-		return ret;
+-	}
+-
+-	trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
+-	return ret;
+-err:
+-	io_disable_sqo_submit(ctx);
+-	io_ring_ctx_wait_and_kill(ctx);
+-	return ret;
+-}
+-
+-/*
+- * Sets up an aio uring context, and returns the fd. Applications asks for a
+- * ring size, we return the actual sq/cq ring sizes (among other things) in the
+- * params structure passed in.
+- */
+-static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
+-{
+-	struct io_uring_params p;
+-	int i;
+-
+-	if (copy_from_user(&p, params, sizeof(p)))
+-		return -EFAULT;
+-	for (i = 0; i < ARRAY_SIZE(p.resv); i++) {
+-		if (p.resv[i])
+-			return -EINVAL;
+-	}
+-
+-	if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
+-			IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
+-			IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ |
+-			IORING_SETUP_R_DISABLED))
+-		return -EINVAL;
+-
+-	return  io_uring_create(entries, &p, params);
+-}
+-
+-SYSCALL_DEFINE2(io_uring_setup, u32, entries,
+-		struct io_uring_params __user *, params)
+-{
+-	return io_uring_setup(entries, params);
+-}
+-
+-static int io_probe(struct io_ring_ctx *ctx, void __user *arg, unsigned nr_args)
+-{
+-	struct io_uring_probe *p;
+-	size_t size;
+-	int i, ret;
+-
+-	size = struct_size(p, ops, nr_args);
+-	if (size == SIZE_MAX)
+-		return -EOVERFLOW;
+-	p = kzalloc(size, GFP_KERNEL);
+-	if (!p)
+-		return -ENOMEM;
+-
+-	ret = -EFAULT;
+-	if (copy_from_user(p, arg, size))
+-		goto out;
+-	ret = -EINVAL;
+-	if (memchr_inv(p, 0, size))
+-		goto out;
+-
+-	p->last_op = IORING_OP_LAST - 1;
+-	if (nr_args > IORING_OP_LAST)
+-		nr_args = IORING_OP_LAST;
+-
+-	for (i = 0; i < nr_args; i++) {
+-		p->ops[i].op = i;
+-		if (!io_op_defs[i].not_supported)
+-			p->ops[i].flags = IO_URING_OP_SUPPORTED;
+-	}
+-	p->ops_len = i;
+-
+-	ret = 0;
+-	if (copy_to_user(arg, p, size))
+-		ret = -EFAULT;
+-out:
+-	kfree(p);
+-	return ret;
+-}
+-
+-static int io_register_personality(struct io_ring_ctx *ctx)
+-{
+-	struct io_identity *iod;
+-	u32 id;
+-	int ret;
+-
+-	iod = kmalloc(sizeof(*iod), GFP_KERNEL);
+-	if (unlikely(!iod))
+-		return -ENOMEM;
+-
+-	io_init_identity(iod);
+-	iod->creds = get_current_cred();
+-
+-	ret = xa_alloc_cyclic(&ctx->personalities, &id, (void *)iod,
+-			XA_LIMIT(0, USHRT_MAX), &ctx->pers_next, GFP_KERNEL);
+-	if (ret < 0) {
+-		put_cred(iod->creds);
+-		kfree(iod);
+-		return ret;
+-	}
+-	return id;
+-}
+-
+-static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
+-				    unsigned int nr_args)
+-{
+-	struct io_uring_restriction *res;
+-	size_t size;
+-	int i, ret;
+-
+-	/* Restrictions allowed only if rings started disabled */
+-	if (!(ctx->flags & IORING_SETUP_R_DISABLED))
+-		return -EBADFD;
+-
+-	/* We allow only a single restrictions registration */
+-	if (ctx->restrictions.registered)
+-		return -EBUSY;
+-
+-	if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
+-		return -EINVAL;
+-
+-	size = array_size(nr_args, sizeof(*res));
+-	if (size == SIZE_MAX)
+-		return -EOVERFLOW;
+-
+-	res = memdup_user(arg, size);
+-	if (IS_ERR(res))
+-		return PTR_ERR(res);
+-
+-	ret = 0;
+-
+-	for (i = 0; i < nr_args; i++) {
+-		switch (res[i].opcode) {
+-		case IORING_RESTRICTION_REGISTER_OP:
+-			if (res[i].register_op >= IORING_REGISTER_LAST) {
+-				ret = -EINVAL;
+-				goto out;
+-			}
+-
+-			__set_bit(res[i].register_op,
+-				  ctx->restrictions.register_op);
+-			break;
+-		case IORING_RESTRICTION_SQE_OP:
+-			if (res[i].sqe_op >= IORING_OP_LAST) {
+-				ret = -EINVAL;
+-				goto out;
+-			}
+-
+-			__set_bit(res[i].sqe_op, ctx->restrictions.sqe_op);
+-			break;
+-		case IORING_RESTRICTION_SQE_FLAGS_ALLOWED:
+-			ctx->restrictions.sqe_flags_allowed = res[i].sqe_flags;
+-			break;
+-		case IORING_RESTRICTION_SQE_FLAGS_REQUIRED:
+-			ctx->restrictions.sqe_flags_required = res[i].sqe_flags;
+-			break;
+-		default:
+-			ret = -EINVAL;
+-			goto out;
+-		}
+-	}
+-
+-out:
+-	/* Reset all restrictions if an error happened */
+-	if (ret != 0)
+-		memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
+-	else
+-		ctx->restrictions.registered = true;
+-
+-	kfree(res);
+-	return ret;
+-}
+-
+-static int io_register_enable_rings(struct io_ring_ctx *ctx)
+-{
+-	if (!(ctx->flags & IORING_SETUP_R_DISABLED))
+-		return -EBADFD;
+-
+-	if (ctx->restrictions.registered)
+-		ctx->restricted = 1;
+-
+-	io_sq_offload_start(ctx);
+-	return 0;
+-}
+-
+-static bool io_register_op_must_quiesce(int op)
+-{
+-	switch (op) {
+-	case IORING_UNREGISTER_FILES:
+-	case IORING_REGISTER_FILES_UPDATE:
+-	case IORING_REGISTER_PROBE:
+-	case IORING_REGISTER_PERSONALITY:
+-	case IORING_UNREGISTER_PERSONALITY:
+-		return false;
+-	default:
+-		return true;
+-	}
+-}
+-
+-static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
+-			       void __user *arg, unsigned nr_args)
+-	__releases(ctx->uring_lock)
+-	__acquires(ctx->uring_lock)
+-{
+-	int ret;
+-
+-	/*
+-	 * We're inside the ring mutex, if the ref is already dying, then
+-	 * someone else killed the ctx or is already going through
+-	 * io_uring_register().
+-	 */
+-	if (percpu_ref_is_dying(&ctx->refs))
+-		return -ENXIO;
+-
+-	if (io_register_op_must_quiesce(opcode)) {
+-		percpu_ref_kill(&ctx->refs);
+-
+-		/*
+-		 * Drop uring mutex before waiting for references to exit. If
+-		 * another thread is currently inside io_uring_enter() it might
+-		 * need to grab the uring_lock to make progress. If we hold it
+-		 * here across the drain wait, then we can deadlock. It's safe
+-		 * to drop the mutex here, since no new references will come in
+-		 * after we've killed the percpu ref.
+-		 */
+-		mutex_unlock(&ctx->uring_lock);
+-		do {
+-			ret = wait_for_completion_interruptible(&ctx->ref_comp);
+-			if (!ret)
+-				break;
+-			ret = io_run_task_work_sig();
+-			if (ret < 0)
+-				break;
+-		} while (1);
+-		mutex_lock(&ctx->uring_lock);
+-
+-		if (ret) {
+-			io_refs_resurrect(&ctx->refs, &ctx->ref_comp);
+-			return ret;
+-		}
+-	}
+-
+-	if (ctx->restricted) {
+-		if (opcode >= IORING_REGISTER_LAST) {
+-			ret = -EINVAL;
+-			goto out;
+-		}
+-
+-		if (!test_bit(opcode, ctx->restrictions.register_op)) {
+-			ret = -EACCES;
+-			goto out;
+-		}
+-	}
+-
+-	switch (opcode) {
+-	case IORING_REGISTER_BUFFERS:
+-		ret = io_sqe_buffer_register(ctx, arg, nr_args);
+-		break;
+-	case IORING_UNREGISTER_BUFFERS:
+-		ret = -EINVAL;
+-		if (arg || nr_args)
+-			break;
+-		ret = io_sqe_buffer_unregister(ctx);
+-		break;
+-	case IORING_REGISTER_FILES:
+-		ret = io_sqe_files_register(ctx, arg, nr_args);
+-		break;
+-	case IORING_UNREGISTER_FILES:
+-		ret = -EINVAL;
+-		if (arg || nr_args)
+-			break;
+-		ret = io_sqe_files_unregister(ctx);
+-		break;
+-	case IORING_REGISTER_FILES_UPDATE:
+-		ret = io_sqe_files_update(ctx, arg, nr_args);
+-		break;
+-	case IORING_REGISTER_EVENTFD:
+-	case IORING_REGISTER_EVENTFD_ASYNC:
+-		ret = -EINVAL;
+-		if (nr_args != 1)
+-			break;
+-		ret = io_eventfd_register(ctx, arg);
+-		if (ret)
+-			break;
+-		if (opcode == IORING_REGISTER_EVENTFD_ASYNC)
+-			ctx->eventfd_async = 1;
+-		else
+-			ctx->eventfd_async = 0;
+-		break;
+-	case IORING_UNREGISTER_EVENTFD:
+-		ret = -EINVAL;
+-		if (arg || nr_args)
+-			break;
+-		ret = io_eventfd_unregister(ctx);
+-		break;
+-	case IORING_REGISTER_PROBE:
+-		ret = -EINVAL;
+-		if (!arg || nr_args > 256)
+-			break;
+-		ret = io_probe(ctx, arg, nr_args);
+-		break;
+-	case IORING_REGISTER_PERSONALITY:
+-		ret = -EINVAL;
+-		if (arg || nr_args)
+-			break;
+-		ret = io_register_personality(ctx);
+-		break;
+-	case IORING_UNREGISTER_PERSONALITY:
+-		ret = -EINVAL;
+-		if (arg)
+-			break;
+-		ret = io_unregister_personality(ctx, nr_args);
+-		break;
+-	case IORING_REGISTER_ENABLE_RINGS:
+-		ret = -EINVAL;
+-		if (arg || nr_args)
+-			break;
+-		ret = io_register_enable_rings(ctx);
+-		break;
+-	case IORING_REGISTER_RESTRICTIONS:
+-		ret = io_register_restrictions(ctx, arg, nr_args);
+-		break;
+-	default:
+-		ret = -EINVAL;
+-		break;
+-	}
+-
+-out:
+-	if (io_register_op_must_quiesce(opcode)) {
+-		/* bring the ctx back to life */
+-		percpu_ref_reinit(&ctx->refs);
+-		reinit_completion(&ctx->ref_comp);
+-	}
+-	return ret;
+-}
+-
+-SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
+-		void __user *, arg, unsigned int, nr_args)
+-{
+-	struct io_ring_ctx *ctx;
+-	long ret = -EBADF;
+-	struct fd f;
+-
+-	f = fdget(fd);
+-	if (!f.file)
+-		return -EBADF;
+-
+-	ret = -EOPNOTSUPP;
+-	if (f.file->f_op != &io_uring_fops)
+-		goto out_fput;
+-
+-	ctx = f.file->private_data;
+-
+-	mutex_lock(&ctx->uring_lock);
+-	ret = __io_uring_register(ctx, opcode, arg, nr_args);
+-	mutex_unlock(&ctx->uring_lock);
+-	trace_io_uring_register(ctx, opcode, ctx->nr_user_files, ctx->nr_user_bufs,
+-							ctx->cq_ev_fd != NULL, ret);
+-out_fput:
+-	fdput(f);
+-	return ret;
+-}
+-
+-static int __init io_uring_init(void)
+-{
+-#define __BUILD_BUG_VERIFY_ELEMENT(stype, eoffset, etype, ename) do { \
+-	BUILD_BUG_ON(offsetof(stype, ename) != eoffset); \
+-	BUILD_BUG_ON(sizeof(etype) != sizeof_field(stype, ename)); \
+-} while (0)
+-
+-#define BUILD_BUG_SQE_ELEM(eoffset, etype, ename) \
+-	__BUILD_BUG_VERIFY_ELEMENT(struct io_uring_sqe, eoffset, etype, ename)
+-	BUILD_BUG_ON(sizeof(struct io_uring_sqe) != 64);
+-	BUILD_BUG_SQE_ELEM(0,  __u8,   opcode);
+-	BUILD_BUG_SQE_ELEM(1,  __u8,   flags);
+-	BUILD_BUG_SQE_ELEM(2,  __u16,  ioprio);
+-	BUILD_BUG_SQE_ELEM(4,  __s32,  fd);
+-	BUILD_BUG_SQE_ELEM(8,  __u64,  off);
+-	BUILD_BUG_SQE_ELEM(8,  __u64,  addr2);
+-	BUILD_BUG_SQE_ELEM(16, __u64,  addr);
+-	BUILD_BUG_SQE_ELEM(16, __u64,  splice_off_in);
+-	BUILD_BUG_SQE_ELEM(24, __u32,  len);
+-	BUILD_BUG_SQE_ELEM(28,     __kernel_rwf_t, rw_flags);
+-	BUILD_BUG_SQE_ELEM(28, /* compat */   int, rw_flags);
+-	BUILD_BUG_SQE_ELEM(28, /* compat */ __u32, rw_flags);
+-	BUILD_BUG_SQE_ELEM(28, __u32,  fsync_flags);
+-	BUILD_BUG_SQE_ELEM(28, /* compat */ __u16,  poll_events);
+-	BUILD_BUG_SQE_ELEM(28, __u32,  poll32_events);
+-	BUILD_BUG_SQE_ELEM(28, __u32,  sync_range_flags);
+-	BUILD_BUG_SQE_ELEM(28, __u32,  msg_flags);
+-	BUILD_BUG_SQE_ELEM(28, __u32,  timeout_flags);
+-	BUILD_BUG_SQE_ELEM(28, __u32,  accept_flags);
+-	BUILD_BUG_SQE_ELEM(28, __u32,  cancel_flags);
+-	BUILD_BUG_SQE_ELEM(28, __u32,  open_flags);
+-	BUILD_BUG_SQE_ELEM(28, __u32,  statx_flags);
+-	BUILD_BUG_SQE_ELEM(28, __u32,  fadvise_advice);
+-	BUILD_BUG_SQE_ELEM(28, __u32,  splice_flags);
+-	BUILD_BUG_SQE_ELEM(32, __u64,  user_data);
+-	BUILD_BUG_SQE_ELEM(40, __u16,  buf_index);
+-	BUILD_BUG_SQE_ELEM(42, __u16,  personality);
+-	BUILD_BUG_SQE_ELEM(44, __s32,  splice_fd_in);
+-
+-	BUILD_BUG_ON(ARRAY_SIZE(io_op_defs) != IORING_OP_LAST);
+-	BUILD_BUG_ON(__REQ_F_LAST_BIT >= 8 * sizeof(int));
+-	req_cachep = KMEM_CACHE(io_kiocb, SLAB_HWCACHE_ALIGN | SLAB_PANIC);
+-	return 0;
+-};
+-__initcall(io_uring_init);
+diff --git a/fs/namei.c b/fs/namei.c
+index 4375565aca666..4159c140fa473 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -529,6 +529,8 @@ static void set_nameidata(struct nameidata *p, int dfd, struct filename *name)
+ 	p->stack = p->internal;
+ 	p->dfd = dfd;
+ 	p->name = name;
++	p->path.mnt = NULL;
++	p->path.dentry = NULL;
+ 	p->total_link_count = old ? old->total_link_count : 0;
+ 	p->saved = old;
+ 	current->nameidata = p;
+@@ -602,6 +604,8 @@ static void terminate_walk(struct nameidata *nd)
+ 		rcu_read_unlock();
+ 	}
+ 	nd->depth = 0;
++	nd->path.mnt = NULL;
++	nd->path.dentry = NULL;
+ }
+ 
+ /* path_put is needed afterwards regardless of success or failure */
+@@ -630,6 +634,11 @@ static inline bool legitimize_path(struct nameidata *nd,
+ static bool legitimize_links(struct nameidata *nd)
+ {
+ 	int i;
++	if (unlikely(nd->flags & LOOKUP_CACHED)) {
++		drop_links(nd);
++		nd->depth = 0;
++		return false;
++	}
+ 	for (i = 0; i < nd->depth; i++) {
+ 		struct saved *last = nd->stack + i;
+ 		if (unlikely(!legitimize_path(nd, &last->link, last->seq))) {
+@@ -705,19 +714,19 @@ out:
+ }
+ 
+ /**
+- * unlazy_child - try to switch to ref-walk mode.
++ * try_to_unlazy_next - try to switch to ref-walk mode.
+  * @nd: nameidata pathwalk data
+- * @dentry: child of nd->path.dentry
+- * @seq: seq number to check dentry against
+- * Returns: 0 on success, -ECHILD on failure
++ * @dentry: next dentry to step into
++ * @seq: seq number to check @dentry against
++ * Returns: true on success, false on failure
+  *
+- * unlazy_child attempts to legitimize the current nd->path, nd->root and dentry
+- * for ref-walk mode.  @dentry must be a path found by a do_lookup call on
+- * @nd.  Must be called from rcu-walk context.
+- * Nothing should touch nameidata between unlazy_child() failure and
++ * Similar to to try_to_unlazy(), but here we have the next dentry already
++ * picked by rcu-walk and want to legitimize that in addition to the current
++ * nd->path and nd->root for ref-walk mode.  Must be called from rcu-walk context.
++ * Nothing should touch nameidata between try_to_unlazy_next() failure and
+  * terminate_walk().
+  */
+-static int unlazy_child(struct nameidata *nd, struct dentry *dentry, unsigned seq)
++static bool try_to_unlazy_next(struct nameidata *nd, struct dentry *dentry, unsigned seq)
+ {
+ 	BUG_ON(!(nd->flags & LOOKUP_RCU));
+ 
+@@ -747,7 +756,7 @@ static int unlazy_child(struct nameidata *nd, struct dentry *dentry, unsigned se
+ 	if (unlikely(!legitimize_root(nd)))
+ 		goto out_dput;
+ 	rcu_read_unlock();
+-	return 0;
++	return true;
+ 
+ out2:
+ 	nd->path.mnt = NULL;
+@@ -755,11 +764,11 @@ out1:
+ 	nd->path.dentry = NULL;
+ out:
+ 	rcu_read_unlock();
+-	return -ECHILD;
++	return false;
+ out_dput:
+ 	rcu_read_unlock();
+ 	dput(dentry);
+-	return -ECHILD;
++	return false;
+ }
+ 
+ static inline int d_revalidate(struct dentry *dentry, unsigned int flags)
+@@ -792,6 +801,7 @@ static int complete_walk(struct nameidata *nd)
+ 		 */
+ 		if (!(nd->flags & (LOOKUP_ROOT | LOOKUP_IS_SCOPED)))
+ 			nd->root.mnt = NULL;
++		nd->flags &= ~LOOKUP_CACHED;
+ 		if (!try_to_unlazy(nd))
+ 			return -ECHILD;
+ 	}
+@@ -1374,7 +1384,7 @@ static inline int handle_mounts(struct nameidata *nd, struct dentry *dentry,
+ 			return -ENOENT;
+ 		if (likely(__follow_mount_rcu(nd, path, inode, seqp)))
+ 			return 0;
+-		if (unlazy_child(nd, dentry, seq))
++		if (!try_to_unlazy_next(nd, dentry, seq))
+ 			return -ECHILD;
+ 		// *path might've been clobbered by __follow_mount_rcu()
+ 		path->mnt = nd->path.mnt;
+@@ -1495,7 +1505,7 @@ static struct dentry *lookup_fast(struct nameidata *nd,
+ 		status = d_revalidate(dentry, nd->flags);
+ 		if (likely(status > 0))
+ 			return dentry;
+-		if (unlazy_child(nd, dentry, seq))
++		if (!try_to_unlazy_next(nd, dentry, seq))
+ 			return ERR_PTR(-ECHILD);
+ 		if (unlikely(status == -ECHILD))
+ 			/* we'd been told to redo it in non-rcu mode */
+@@ -2204,6 +2214,10 @@ static const char *path_init(struct nameidata *nd, unsigned flags)
+ 	int error;
+ 	const char *s = nd->name->name;
+ 
++	/* LOOKUP_CACHED requires RCU, ask caller to retry */
++	if ((flags & (LOOKUP_RCU | LOOKUP_CACHED)) == LOOKUP_CACHED)
++		return ERR_PTR(-EAGAIN);
++
+ 	if (!*s)
+ 		flags &= ~LOOKUP_RCU;
+ 	if (flags & LOOKUP_RCU)
+@@ -2233,8 +2247,6 @@ static const char *path_init(struct nameidata *nd, unsigned flags)
+ 	}
+ 
+ 	nd->root.mnt = NULL;
+-	nd->path.mnt = NULL;
+-	nd->path.dentry = NULL;
+ 
+ 	/* Absolute pathname -- fetch the root (LOOKUP_IN_ROOT uses nd->dfd). */
+ 	if (*s == '/' && !(flags & LOOKUP_IN_ROOT)) {
+@@ -4341,8 +4353,8 @@ out:
+ }
+ EXPORT_SYMBOL(vfs_rename);
+ 
+-static int do_renameat2(int olddfd, const char __user *oldname, int newdfd,
+-			const char __user *newname, unsigned int flags)
++int do_renameat2(int olddfd, struct filename *from, int newdfd,
++		 struct filename *to, unsigned int flags)
+ {
+ 	struct dentry *old_dentry, *new_dentry;
+ 	struct dentry *trap;
+@@ -4350,32 +4362,30 @@ static int do_renameat2(int olddfd, const char __user *oldname, int newdfd,
+ 	struct qstr old_last, new_last;
+ 	int old_type, new_type;
+ 	struct inode *delegated_inode = NULL;
+-	struct filename *from;
+-	struct filename *to;
+ 	unsigned int lookup_flags = 0, target_flags = LOOKUP_RENAME_TARGET;
+ 	bool should_retry = false;
+-	int error;
++	int error = -EINVAL;
+ 
+ 	if (flags & ~(RENAME_NOREPLACE | RENAME_EXCHANGE | RENAME_WHITEOUT))
+-		return -EINVAL;
++		goto put_both;
+ 
+ 	if ((flags & (RENAME_NOREPLACE | RENAME_WHITEOUT)) &&
+ 	    (flags & RENAME_EXCHANGE))
+-		return -EINVAL;
++		goto put_both;
+ 
+ 	if (flags & RENAME_EXCHANGE)
+ 		target_flags = 0;
+ 
+ retry:
+-	from = filename_parentat(olddfd, getname(oldname), lookup_flags,
+-				&old_path, &old_last, &old_type);
++	from = filename_parentat(olddfd, from, lookup_flags, &old_path,
++					&old_last, &old_type);
+ 	if (IS_ERR(from)) {
+ 		error = PTR_ERR(from);
+-		goto exit;
++		goto put_new;
+ 	}
+ 
+-	to = filename_parentat(newdfd, getname(newname), lookup_flags,
+-				&new_path, &new_last, &new_type);
++	to = filename_parentat(newdfd, to, lookup_flags, &new_path, &new_last,
++				&new_type);
+ 	if (IS_ERR(to)) {
+ 		error = PTR_ERR(to);
+ 		goto exit1;
+@@ -4468,34 +4478,40 @@ exit2:
+ 	if (retry_estale(error, lookup_flags))
+ 		should_retry = true;
+ 	path_put(&new_path);
+-	putname(to);
+ exit1:
+ 	path_put(&old_path);
+-	putname(from);
+ 	if (should_retry) {
+ 		should_retry = false;
+ 		lookup_flags |= LOOKUP_REVAL;
+ 		goto retry;
+ 	}
+-exit:
++put_both:
++	if (!IS_ERR(from))
++		putname(from);
++put_new:
++	if (!IS_ERR(to))
++		putname(to);
+ 	return error;
+ }
+ 
+ SYSCALL_DEFINE5(renameat2, int, olddfd, const char __user *, oldname,
+ 		int, newdfd, const char __user *, newname, unsigned int, flags)
+ {
+-	return do_renameat2(olddfd, oldname, newdfd, newname, flags);
++	return do_renameat2(olddfd, getname(oldname), newdfd, getname(newname),
++				flags);
+ }
+ 
+ SYSCALL_DEFINE4(renameat, int, olddfd, const char __user *, oldname,
+ 		int, newdfd, const char __user *, newname)
+ {
+-	return do_renameat2(olddfd, oldname, newdfd, newname, 0);
++	return do_renameat2(olddfd, getname(oldname), newdfd, getname(newname),
++				0);
+ }
+ 
+ SYSCALL_DEFINE2(rename, const char __user *, oldname, const char __user *, newname)
+ {
+-	return do_renameat2(AT_FDCWD, oldname, AT_FDCWD, newname, 0);
++	return do_renameat2(AT_FDCWD, getname(oldname), AT_FDCWD,
++				getname(newname), 0);
+ }
+ 
+ int readlink_copy(char __user *buffer, int buflen, const char *link)
+diff --git a/fs/open.c b/fs/open.c
+index 3aaaad47d9cac..b3fbb4300fc96 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -1099,6 +1099,12 @@ inline int build_open_flags(const struct open_how *how, struct open_flags *op)
+ 		lookup_flags |= LOOKUP_BENEATH;
+ 	if (how->resolve & RESOLVE_IN_ROOT)
+ 		lookup_flags |= LOOKUP_IN_ROOT;
++	if (how->resolve & RESOLVE_CACHED) {
++		/* Don't bother even trying for create/truncate/tmpfile open */
++		if (flags & (O_TRUNC | O_CREAT | O_TMPFILE))
++			return -EAGAIN;
++		lookup_flags |= LOOKUP_CACHED;
++	}
+ 
+ 	op->lookup_flags = lookup_flags;
+ 	return 0;
+diff --git a/fs/proc/self.c b/fs/proc/self.c
+index a4012154e1096..72cd69bcaf4ad 100644
+--- a/fs/proc/self.c
++++ b/fs/proc/self.c
+@@ -16,13 +16,6 @@ static const char *proc_self_get_link(struct dentry *dentry,
+ 	pid_t tgid = task_tgid_nr_ns(current, ns);
+ 	char *name;
+ 
+-	/*
+-	 * Not currently supported. Once we can inherit all of struct pid,
+-	 * we can allow this.
+-	 */
+-	if (current->flags & PF_IO_WORKER)
+-		return ERR_PTR(-EOPNOTSUPP);
+-
+ 	if (!tgid)
+ 		return ERR_PTR(-ENOENT);
+ 	/* max length of unsigned int in decimal + NULL term */
+diff --git a/fs/proc/thread_self.c b/fs/proc/thread_self.c
+index d56681d86d28a..a553273fbd417 100644
+--- a/fs/proc/thread_self.c
++++ b/fs/proc/thread_self.c
+@@ -17,13 +17,6 @@ static const char *proc_thread_self_get_link(struct dentry *dentry,
+ 	pid_t pid = task_pid_nr_ns(current, ns);
+ 	char *name;
+ 
+-	/*
+-	 * Not currently supported. Once we can inherit all of struct pid,
+-	 * we can allow this.
+-	 */
+-	if (current->flags & PF_IO_WORKER)
+-		return ERR_PTR(-EOPNOTSUPP);
+-
+ 	if (!pid)
+ 		return ERR_PTR(-ENOENT);
+ 	name = kmalloc(10 + 6 + 10 + 1, dentry ? GFP_KERNEL : GFP_ATOMIC);
+diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
+index 7dff07713a073..46c42479f9501 100644
+--- a/include/linux/entry-common.h
++++ b/include/linux/entry-common.h
+@@ -69,7 +69,7 @@
+ 
+ #define EXIT_TO_USER_MODE_WORK						\
+ 	(_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_UPROBE |		\
+-	 _TIF_NEED_RESCHED | _TIF_PATCH_PENDING |			\
++	 _TIF_NEED_RESCHED | _TIF_PATCH_PENDING | _TIF_NOTIFY_SIGNAL |	\
+ 	 ARCH_EXIT_TO_USER_MODE_WORK)
+ 
+ /**
+@@ -259,12 +259,13 @@ static __always_inline void arch_exit_to_user_mode(void) { }
+ #endif
+ 
+ /**
+- * arch_do_signal -  Architecture specific signal delivery function
++ * arch_do_signal_or_restart -  Architecture specific signal delivery function
+  * @regs:	Pointer to currents pt_regs
++ * @has_signal:	actual signal to handle
+  *
+  * Invoked from exit_to_user_mode_loop().
+  */
+-void arch_do_signal(struct pt_regs *regs);
++void arch_do_signal_or_restart(struct pt_regs *regs, bool has_signal);
+ 
+ /**
+  * arch_syscall_exit_tracehook - Wrapper around tracehook_report_syscall_exit()
+diff --git a/include/linux/entry-kvm.h b/include/linux/entry-kvm.h
+index 0cef17afb41a6..9b93f8584ff7d 100644
+--- a/include/linux/entry-kvm.h
++++ b/include/linux/entry-kvm.h
+@@ -11,8 +11,8 @@
+ # define ARCH_XFER_TO_GUEST_MODE_WORK	(0)
+ #endif
+ 
+-#define XFER_TO_GUEST_MODE_WORK					\
+-	(_TIF_NEED_RESCHED | _TIF_SIGPENDING |			\
++#define XFER_TO_GUEST_MODE_WORK						\
++	(_TIF_NEED_RESCHED | _TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL |	\
+ 	 _TIF_NOTIFY_RESUME | ARCH_XFER_TO_GUEST_MODE_WORK)
+ 
+ struct kvm_vcpu;
+diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
+index dc4fd8a6644dd..ce1cf42740bf4 100644
+--- a/include/linux/eventfd.h
++++ b/include/linux/eventfd.h
+@@ -39,6 +39,7 @@ struct file *eventfd_fget(int fd);
+ struct eventfd_ctx *eventfd_ctx_fdget(int fd);
+ struct eventfd_ctx *eventfd_ctx_fileget(struct file *file);
+ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n);
++__u64 eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n, unsigned mask);
+ int eventfd_ctx_remove_wait_queue(struct eventfd_ctx *ctx, wait_queue_entry_t *wait,
+ 				  __u64 *cnt);
+ 
+@@ -66,6 +67,12 @@ static inline int eventfd_signal(struct eventfd_ctx *ctx, int n)
+ 	return -ENOSYS;
+ }
+ 
++static inline int eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n,
++				      unsigned mask)
++{
++	return -ENOSYS;
++}
++
+ static inline void eventfd_ctx_put(struct eventfd_ctx *ctx)
+ {
+ 
+diff --git a/include/linux/fcntl.h b/include/linux/fcntl.h
+index 921e750843e66..766fcd973beba 100644
+--- a/include/linux/fcntl.h
++++ b/include/linux/fcntl.h
+@@ -19,7 +19,7 @@
+ /* List of all valid flags for the how->resolve argument: */
+ #define VALID_RESOLVE_FLAGS \
+ 	(RESOLVE_NO_XDEV | RESOLVE_NO_MAGICLINKS | RESOLVE_NO_SYMLINKS | \
+-	 RESOLVE_BENEATH | RESOLVE_IN_ROOT)
++	 RESOLVE_BENEATH | RESOLVE_IN_ROOT | RESOLVE_CACHED)
+ 
+ /* List of all open_how "versions". */
+ #define OPEN_HOW_SIZE_VER0	24 /* sizeof first published struct */
+diff --git a/include/linux/fdtable.h b/include/linux/fdtable.h
+index a32bf47c593e7..f1a99d3e55707 100644
+--- a/include/linux/fdtable.h
++++ b/include/linux/fdtable.h
+@@ -123,7 +123,7 @@ extern void __fd_install(struct files_struct *files,
+ extern int __close_fd(struct files_struct *files,
+ 		      unsigned int fd);
+ extern int __close_range(unsigned int fd, unsigned int max_fd, unsigned int flags);
+-extern int __close_fd_get_file(unsigned int fd, struct file **res);
++extern int close_fd_get_file(unsigned int fd, struct file **res);
+ extern int unshare_fd(unsigned long unshare_flags, unsigned int max_fds,
+ 		      struct files_struct **new_fdp);
+ 
+diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
+index 35b2d845704d9..649a4d7c241bc 100644
+--- a/include/linux/io_uring.h
++++ b/include/linux/io_uring.h
+@@ -5,50 +5,20 @@
+ #include <linux/sched.h>
+ #include <linux/xarray.h>
+ 
+-struct io_identity {
+-	struct files_struct		*files;
+-	struct mm_struct		*mm;
+-#ifdef CONFIG_BLK_CGROUP
+-	struct cgroup_subsys_state	*blkcg_css;
+-#endif
+-	const struct cred		*creds;
+-	struct nsproxy			*nsproxy;
+-	struct fs_struct		*fs;
+-	unsigned long			fsize;
+-#ifdef CONFIG_AUDIT
+-	kuid_t				loginuid;
+-	unsigned int			sessionid;
+-#endif
+-	refcount_t			count;
+-};
+-
+-struct io_uring_task {
+-	/* submission side */
+-	struct xarray		xa;
+-	struct wait_queue_head	wait;
+-	struct file		*last;
+-	struct percpu_counter	inflight;
+-	struct io_identity	__identity;
+-	struct io_identity	*identity;
+-	atomic_t		in_idle;
+-	bool			sqpoll;
+-};
+-
+ #if defined(CONFIG_IO_URING)
+ struct sock *io_uring_get_socket(struct file *file);
+-void __io_uring_task_cancel(void);
+-void __io_uring_files_cancel(struct files_struct *files);
++void __io_uring_cancel(bool cancel_all);
+ void __io_uring_free(struct task_struct *tsk);
+ 
+-static inline void io_uring_task_cancel(void)
++static inline void io_uring_files_cancel(void)
+ {
+-	if (current->io_uring && !xa_empty(&current->io_uring->xa))
+-		__io_uring_task_cancel();
++	if (current->io_uring)
++		__io_uring_cancel(false);
+ }
+-static inline void io_uring_files_cancel(struct files_struct *files)
++static inline void io_uring_task_cancel(void)
+ {
+-	if (current->io_uring && !xa_empty(&current->io_uring->xa))
+-		__io_uring_files_cancel(files);
++	if (current->io_uring)
++		__io_uring_cancel(true);
+ }
+ static inline void io_uring_free(struct task_struct *tsk)
+ {
+@@ -63,7 +33,7 @@ static inline struct sock *io_uring_get_socket(struct file *file)
+ static inline void io_uring_task_cancel(void)
+ {
+ }
+-static inline void io_uring_files_cancel(struct files_struct *files)
++static inline void io_uring_files_cancel(void)
+ {
+ }
+ static inline void io_uring_free(struct task_struct *tsk)
+diff --git a/include/linux/namei.h b/include/linux/namei.h
+index a4bb992623c41..b9605b2b46e71 100644
+--- a/include/linux/namei.h
++++ b/include/linux/namei.h
+@@ -46,6 +46,7 @@ enum {LAST_NORM, LAST_ROOT, LAST_DOT, LAST_DOTDOT};
+ #define LOOKUP_NO_XDEV		0x040000 /* No mountpoint crossing. */
+ #define LOOKUP_BENEATH		0x080000 /* No escaping from starting point. */
+ #define LOOKUP_IN_ROOT		0x100000 /* Treat dirfd as fs root. */
++#define LOOKUP_CACHED		0x200000 /* Only do cached lookup */
+ /* LOOKUP_* flags which do scope-related checks based on the dirfd. */
+ #define LOOKUP_IS_SCOPED (LOOKUP_BENEATH | LOOKUP_IN_ROOT)
+ 
+diff --git a/include/linux/net.h b/include/linux/net.h
+index 0dcd51feef02d..ae713c8513425 100644
+--- a/include/linux/net.h
++++ b/include/linux/net.h
+@@ -42,8 +42,6 @@ struct net;
+ #define SOCK_PASSCRED		3
+ #define SOCK_PASSSEC		4
+ 
+-#define PROTO_CMSG_DATA_ONLY	0x0001
+-
+ #ifndef ARCH_HAS_SOCKET_TYPES
+ /**
+  * enum sock_type - Socket types
+@@ -138,7 +136,6 @@ typedef int (*sk_read_actor_t)(read_descriptor_t *, struct sk_buff *,
+ 
+ struct proto_ops {
+ 	int		family;
+-	unsigned int	flags;
+ 	struct module	*owner;
+ 	int		(*release)   (struct socket *sock);
+ 	int		(*bind)	     (struct socket *sock,
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index b055c217eb0be..5da4b3c89f636 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -885,6 +885,9 @@ struct task_struct {
+ 	/* CLONE_CHILD_CLEARTID: */
+ 	int __user			*clear_child_tid;
+ 
++	/* PF_IO_WORKER */
++	void				*pf_io_worker;
++
+ 	u64				utime;
+ 	u64				stime;
+ #ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME
+diff --git a/include/linux/sched/jobctl.h b/include/linux/sched/jobctl.h
+index d2b4204ba4d34..fa067de9f1a94 100644
+--- a/include/linux/sched/jobctl.h
++++ b/include/linux/sched/jobctl.h
+@@ -19,7 +19,6 @@ struct task_struct;
+ #define JOBCTL_TRAPPING_BIT	21	/* switching to TRACED */
+ #define JOBCTL_LISTENING_BIT	22	/* ptracer is listening for events */
+ #define JOBCTL_TRAP_FREEZE_BIT	23	/* trap for cgroup freezer */
+-#define JOBCTL_TASK_WORK_BIT	24	/* set by TWA_SIGNAL */
+ 
+ #define JOBCTL_STOP_DEQUEUED	(1UL << JOBCTL_STOP_DEQUEUED_BIT)
+ #define JOBCTL_STOP_PENDING	(1UL << JOBCTL_STOP_PENDING_BIT)
+@@ -29,10 +28,9 @@ struct task_struct;
+ #define JOBCTL_TRAPPING		(1UL << JOBCTL_TRAPPING_BIT)
+ #define JOBCTL_LISTENING	(1UL << JOBCTL_LISTENING_BIT)
+ #define JOBCTL_TRAP_FREEZE	(1UL << JOBCTL_TRAP_FREEZE_BIT)
+-#define JOBCTL_TASK_WORK	(1UL << JOBCTL_TASK_WORK_BIT)
+ 
+ #define JOBCTL_TRAP_MASK	(JOBCTL_TRAP_STOP | JOBCTL_TRAP_NOTIFY)
+-#define JOBCTL_PENDING_MASK	(JOBCTL_STOP_PENDING | JOBCTL_TRAP_MASK | JOBCTL_TASK_WORK)
++#define JOBCTL_PENDING_MASK	(JOBCTL_STOP_PENDING | JOBCTL_TRAP_MASK)
+ 
+ extern bool task_set_jobctl_pending(struct task_struct *task, unsigned long mask);
+ extern void task_clear_jobctl_trapping(struct task_struct *task);
+diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
+index 657640015b335..ae60f838ebb92 100644
+--- a/include/linux/sched/signal.h
++++ b/include/linux/sched/signal.h
+@@ -354,11 +354,23 @@ static inline int restart_syscall(void)
+ 	return -ERESTARTNOINTR;
+ }
+ 
+-static inline int signal_pending(struct task_struct *p)
++static inline int task_sigpending(struct task_struct *p)
+ {
+ 	return unlikely(test_tsk_thread_flag(p,TIF_SIGPENDING));
+ }
+ 
++static inline int signal_pending(struct task_struct *p)
++{
++	/*
++	 * TIF_NOTIFY_SIGNAL isn't really a signal, but it requires the same
++	 * behavior in terms of ensuring that we break out of wait loops
++	 * so that notify signal callbacks can be processed.
++	 */
++	if (unlikely(test_tsk_thread_flag(p, TIF_NOTIFY_SIGNAL)))
++		return 1;
++	return task_sigpending(p);
++}
++
+ static inline int __fatal_signal_pending(struct task_struct *p)
+ {
+ 	return unlikely(sigismember(&p->pending.signal, SIGKILL));
+@@ -366,7 +378,7 @@ static inline int __fatal_signal_pending(struct task_struct *p)
+ 
+ static inline int fatal_signal_pending(struct task_struct *p)
+ {
+-	return signal_pending(p) && __fatal_signal_pending(p);
++	return task_sigpending(p) && __fatal_signal_pending(p);
+ }
+ 
+ static inline int signal_pending_state(long state, struct task_struct *p)
+@@ -503,7 +515,7 @@ extern int set_user_sigmask(const sigset_t __user *umask, size_t sigsetsize);
+ static inline void restore_saved_sigmask_unless(bool interrupted)
+ {
+ 	if (interrupted)
+-		WARN_ON(!test_thread_flag(TIF_SIGPENDING));
++		WARN_ON(!signal_pending(current));
+ 	else
+ 		restore_saved_sigmask();
+ }
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index eeacb4a16fe3f..4ce511437a8aa 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -31,6 +31,7 @@ struct kernel_clone_args {
+ 	/* Number of elements in *set_tid */
+ 	size_t set_tid_size;
+ 	int cgroup;
++	int io_thread;
+ 	struct cgroup *cgrp;
+ 	struct css_set *cset;
+ };
+@@ -85,6 +86,7 @@ extern void exit_files(struct task_struct *);
+ extern void exit_itimers(struct task_struct *);
+ 
+ extern pid_t kernel_clone(struct kernel_clone_args *kargs);
++struct task_struct *create_io_thread(int (*fn)(void *), void *arg, int node);
+ struct task_struct *fork_idle(int);
+ struct mm_struct *copy_init_mm(void);
+ extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
+diff --git a/include/linux/socket.h b/include/linux/socket.h
+index 9aa530d497da8..c3b35d18bcd30 100644
+--- a/include/linux/socket.h
++++ b/include/linux/socket.h
+@@ -421,6 +421,9 @@ extern int __sys_accept4_file(struct file *file, unsigned file_flags,
+ 			struct sockaddr __user *upeer_sockaddr,
+ 			 int __user *upeer_addrlen, int flags,
+ 			 unsigned long nofile);
++extern struct file *do_accept(struct file *file, unsigned file_flags,
++			      struct sockaddr __user *upeer_sockaddr,
++			      int __user *upeer_addrlen, int flags);
+ extern int __sys_accept4(int fd, struct sockaddr __user *upeer_sockaddr,
+ 			 int __user *upeer_addrlen, int flags);
+ extern int __sys_socket(int family, int type, int protocol);
+@@ -436,5 +439,6 @@ extern int __sys_getpeername(int fd, struct sockaddr __user *usockaddr,
+ 			     int __user *usockaddr_len);
+ extern int __sys_socketpair(int family, int type, int protocol,
+ 			    int __user *usockvec);
++extern int __sys_shutdown_sock(struct socket *sock, int how);
+ extern int __sys_shutdown(int fd, int how);
+ #endif /* _LINUX_SOCKET_H */
+diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
+index aea0ce9f3b745..a058c96cf2138 100644
+--- a/include/linux/syscalls.h
++++ b/include/linux/syscalls.h
+@@ -341,7 +341,7 @@ asmlinkage long sys_io_uring_setup(u32 entries,
+ 				struct io_uring_params __user *p);
+ asmlinkage long sys_io_uring_enter(unsigned int fd, u32 to_submit,
+ 				u32 min_complete, u32 flags,
+-				const sigset_t __user *sig, size_t sigsz);
++				const void __user *argp, size_t argsz);
+ asmlinkage long sys_io_uring_register(unsigned int fd, unsigned int op,
+ 				void __user *arg, unsigned int nr_args);
+ 
+diff --git a/include/linux/task_work.h b/include/linux/task_work.h
+index 0d848a1e9e62f..5b8a93f288bb4 100644
+--- a/include/linux/task_work.h
++++ b/include/linux/task_work.h
+@@ -22,6 +22,8 @@ enum task_work_notify_mode {
+ int task_work_add(struct task_struct *task, struct callback_head *twork,
+ 			enum task_work_notify_mode mode);
+ 
++struct callback_head *task_work_cancel_match(struct task_struct *task,
++	bool (*match)(struct callback_head *, void *data), void *data);
+ struct callback_head *task_work_cancel(struct task_struct *, task_work_func_t);
+ void task_work_run(void);
+ 
+diff --git a/include/linux/tracehook.h b/include/linux/tracehook.h
+index b480e1a07ed85..ee9ab7dbc8c35 100644
+--- a/include/linux/tracehook.h
++++ b/include/linux/tracehook.h
+@@ -198,4 +198,27 @@ static inline void tracehook_notify_resume(struct pt_regs *regs)
+ 	blkcg_maybe_throttle_current();
+ }
+ 
++/*
++ * called by exit_to_user_mode_loop() if ti_work & _TIF_NOTIFY_SIGNAL. This
++ * is currently used by TWA_SIGNAL based task_work, which requires breaking
++ * wait loops to ensure that task_work is noticed and run.
++ */
++static inline void tracehook_notify_signal(void)
++{
++	clear_thread_flag(TIF_NOTIFY_SIGNAL);
++	smp_mb__after_atomic();
++	if (current->task_works)
++		task_work_run();
++}
++
++/*
++ * Called when we have work to process from exit_to_user_mode_loop()
++ */
++static inline void set_notify_signal(struct task_struct *task)
++{
++	if (!test_and_set_tsk_thread_flag(task, TIF_NOTIFY_SIGNAL) &&
++	    !wake_up_state(task, TASK_INTERRUPTIBLE))
++		kick_process(task);
++}
++
+ #endif	/* <linux/tracehook.h> */
+diff --git a/include/linux/uio.h b/include/linux/uio.h
+index 27ff8eb786dc3..cedb68e49e4f9 100644
+--- a/include/linux/uio.h
++++ b/include/linux/uio.h
+@@ -26,6 +26,12 @@ enum iter_type {
+ 	ITER_DISCARD = 64,
+ };
+ 
++struct iov_iter_state {
++	size_t iov_offset;
++	size_t count;
++	unsigned long nr_segs;
++};
++
+ struct iov_iter {
+ 	/*
+ 	 * Bit 0 is the read/write bit, set if we're writing.
+@@ -55,6 +61,14 @@ static inline enum iter_type iov_iter_type(const struct iov_iter *i)
+ 	return i->type & ~(READ | WRITE);
+ }
+ 
++static inline void iov_iter_save_state(struct iov_iter *iter,
++				       struct iov_iter_state *state)
++{
++	state->iov_offset = iter->iov_offset;
++	state->count = iter->count;
++	state->nr_segs = iter->nr_segs;
++}
++
+ static inline bool iter_is_iovec(const struct iov_iter *i)
+ {
+ 	return iov_iter_type(i) == ITER_IOVEC;
+@@ -226,6 +240,7 @@ ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages,
+ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, struct page ***pages,
+ 			size_t maxsize, size_t *start);
+ int iov_iter_npages(const struct iov_iter *i, int maxpages);
++void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state);
+ 
+ const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags);
+ 
+diff --git a/include/trace/events/io_uring.h b/include/trace/events/io_uring.h
+index 9f0d3b7d56b0f..0dd30de00e5b4 100644
+--- a/include/trace/events/io_uring.h
++++ b/include/trace/events/io_uring.h
+@@ -12,11 +12,11 @@ struct io_wq_work;
+ /**
+  * io_uring_create - called after a new io_uring context was prepared
+  *
+- * @fd:			corresponding file descriptor
+- * @ctx:		pointer to a ring context structure
++ * @fd:		corresponding file descriptor
++ * @ctx:	pointer to a ring context structure
+  * @sq_entries:	actual SQ size
+  * @cq_entries:	actual CQ size
+- * @flags:		SQ ring flags, provided to io_uring_setup(2)
++ * @flags:	SQ ring flags, provided to io_uring_setup(2)
+  *
+  * Allows to trace io_uring creation and provide pointer to a context, that can
+  * be used later to find correlated events.
+@@ -49,15 +49,15 @@ TRACE_EVENT(io_uring_create,
+ );
+ 
+ /**
+- * io_uring_register - called after a buffer/file/eventfd was succesfully
++ * io_uring_register - called after a buffer/file/eventfd was successfully
+  * 					   registered for a ring
+  *
+- * @ctx:			pointer to a ring context structure
+- * @opcode:			describes which operation to perform
++ * @ctx:		pointer to a ring context structure
++ * @opcode:		describes which operation to perform
+  * @nr_user_files:	number of registered files
+  * @nr_user_bufs:	number of registered buffers
+  * @cq_ev_fd:		whether eventfs registered or not
+- * @ret:			return code
++ * @ret:		return code
+  *
+  * Allows to trace fixed files/buffers/eventfds, that could be registered to
+  * avoid an overhead of getting references to them for every operation. This
+@@ -142,16 +142,16 @@ TRACE_EVENT(io_uring_queue_async_work,
+ 	TP_ARGS(ctx, rw, req, work, flags),
+ 
+ 	TP_STRUCT__entry (
+-		__field(  void *,				ctx		)
+-		__field(  int,					rw		)
+-		__field(  void *,				req		)
++		__field(  void *,			ctx	)
++		__field(  int,				rw	)
++		__field(  void *,			req	)
+ 		__field(  struct io_wq_work *,		work	)
+ 		__field(  unsigned int,			flags	)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->ctx	= ctx;
+-		__entry->rw		= rw;
++		__entry->rw	= rw;
+ 		__entry->req	= req;
+ 		__entry->work	= work;
+ 		__entry->flags	= flags;
+@@ -196,10 +196,10 @@ TRACE_EVENT(io_uring_defer,
+ 
+ /**
+  * io_uring_link - called before the io_uring request added into link_list of
+- * 				   another request
++ * 		   another request
+  *
+- * @ctx:			pointer to a ring context structure
+- * @req:			pointer to a linked request
++ * @ctx:		pointer to a ring context structure
++ * @req:		pointer to a linked request
+  * @target_req:		pointer to a previous request, that would contain @req
+  *
+  * Allows to track linked requests, to understand dependencies between requests
+@@ -212,8 +212,8 @@ TRACE_EVENT(io_uring_link,
+ 	TP_ARGS(ctx, req, target_req),
+ 
+ 	TP_STRUCT__entry (
+-		__field(  void *,	ctx			)
+-		__field(  void *,	req			)
++		__field(  void *,	ctx		)
++		__field(  void *,	req		)
+ 		__field(  void *,	target_req	)
+ 	),
+ 
+@@ -244,7 +244,7 @@ TRACE_EVENT(io_uring_cqring_wait,
+ 	TP_ARGS(ctx, min_events),
+ 
+ 	TP_STRUCT__entry (
+-		__field(  void *,	ctx			)
++		__field(  void *,	ctx		)
+ 		__field(  int,		min_events	)
+ 	),
+ 
+@@ -272,7 +272,7 @@ TRACE_EVENT(io_uring_fail_link,
+ 	TP_ARGS(req, link),
+ 
+ 	TP_STRUCT__entry (
+-		__field(  void *,	req		)
++		__field(  void *,	req	)
+ 		__field(  void *,	link	)
+ 	),
+ 
+@@ -290,38 +290,42 @@ TRACE_EVENT(io_uring_fail_link,
+  * @ctx:		pointer to a ring context structure
+  * @user_data:		user data associated with the request
+  * @res:		result of the request
++ * @cflags:		completion flags
+  *
+  */
+ TRACE_EVENT(io_uring_complete,
+ 
+-	TP_PROTO(void *ctx, u64 user_data, long res),
++	TP_PROTO(void *ctx, u64 user_data, int res, unsigned cflags),
+ 
+-	TP_ARGS(ctx, user_data, res),
++	TP_ARGS(ctx, user_data, res, cflags),
+ 
+ 	TP_STRUCT__entry (
+ 		__field(  void *,	ctx		)
+ 		__field(  u64,		user_data	)
+-		__field(  long,		res		)
++		__field(  int,		res		)
++		__field(  unsigned,	cflags		)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->ctx		= ctx;
+ 		__entry->user_data	= user_data;
+ 		__entry->res		= res;
++		__entry->cflags		= cflags;
+ 	),
+ 
+-	TP_printk("ring %p, user_data 0x%llx, result %ld",
++	TP_printk("ring %p, user_data 0x%llx, result %d, cflags %x",
+ 			  __entry->ctx, (unsigned long long)__entry->user_data,
+-			  __entry->res)
++			  __entry->res, __entry->cflags)
+ );
+ 
+-
+ /**
+  * io_uring_submit_sqe - called before submitting one SQE
+  *
+  * @ctx:		pointer to a ring context structure
++ * @req:		pointer to a submitted request
+  * @opcode:		opcode of request
+  * @user_data:		user data associated with the request
++ * @flags		request flags
+  * @force_nonblock:	whether a context blocking or not
+  * @sq_thread:		true if sq_thread has submitted this SQE
+  *
+@@ -330,41 +334,60 @@ TRACE_EVENT(io_uring_complete,
+  */
+ TRACE_EVENT(io_uring_submit_sqe,
+ 
+-	TP_PROTO(void *ctx, u8 opcode, u64 user_data, bool force_nonblock,
+-		 bool sq_thread),
++	TP_PROTO(void *ctx, void *req, u8 opcode, u64 user_data, u32 flags,
++		 bool force_nonblock, bool sq_thread),
+ 
+-	TP_ARGS(ctx, opcode, user_data, force_nonblock, sq_thread),
++	TP_ARGS(ctx, req, opcode, user_data, flags, force_nonblock, sq_thread),
+ 
+ 	TP_STRUCT__entry (
+ 		__field(  void *,	ctx		)
++		__field(  void *,	req		)
+ 		__field(  u8,		opcode		)
+ 		__field(  u64,		user_data	)
++		__field(  u32,		flags		)
+ 		__field(  bool,		force_nonblock	)
+ 		__field(  bool,		sq_thread	)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->ctx		= ctx;
++		__entry->req		= req;
+ 		__entry->opcode		= opcode;
+ 		__entry->user_data	= user_data;
++		__entry->flags		= flags;
+ 		__entry->force_nonblock	= force_nonblock;
+ 		__entry->sq_thread	= sq_thread;
+ 	),
+ 
+-	TP_printk("ring %p, op %d, data 0x%llx, non block %d, sq_thread %d",
+-			  __entry->ctx, __entry->opcode,
+-			  (unsigned long long) __entry->user_data,
+-			  __entry->force_nonblock, __entry->sq_thread)
++	TP_printk("ring %p, req %p, op %d, data 0x%llx, flags %u, "
++		  "non block %d, sq_thread %d", __entry->ctx, __entry->req,
++		  __entry->opcode, (unsigned long long)__entry->user_data,
++		  __entry->flags, __entry->force_nonblock, __entry->sq_thread)
+ );
+ 
++/*
++ * io_uring_poll_arm - called after arming a poll wait if successful
++ *
++ * @ctx:		pointer to a ring context structure
++ * @req:		pointer to the armed request
++ * @opcode:		opcode of request
++ * @user_data:		user data associated with the request
++ * @mask:		request poll events mask
++ * @events:		registered events of interest
++ *
++ * Allows to track which fds are waiting for and what are the events of
++ * interest.
++ */
+ TRACE_EVENT(io_uring_poll_arm,
+ 
+-	TP_PROTO(void *ctx, u8 opcode, u64 user_data, int mask, int events),
++	TP_PROTO(void *ctx, void *req, u8 opcode, u64 user_data,
++		 int mask, int events),
+ 
+-	TP_ARGS(ctx, opcode, user_data, mask, events),
++	TP_ARGS(ctx, req, opcode, user_data, mask, events),
+ 
+ 	TP_STRUCT__entry (
+ 		__field(  void *,	ctx		)
++		__field(  void *,	req		)
+ 		__field(  u8,		opcode		)
+ 		__field(  u64,		user_data	)
+ 		__field(  int,		mask		)
+@@ -373,16 +396,17 @@ TRACE_EVENT(io_uring_poll_arm,
+ 
+ 	TP_fast_assign(
+ 		__entry->ctx		= ctx;
++		__entry->req		= req;
+ 		__entry->opcode		= opcode;
+ 		__entry->user_data	= user_data;
+ 		__entry->mask		= mask;
+ 		__entry->events		= events;
+ 	),
+ 
+-	TP_printk("ring %p, op %d, data 0x%llx, mask 0x%x, events 0x%x",
+-			  __entry->ctx, __entry->opcode,
+-			  (unsigned long long) __entry->user_data,
+-			  __entry->mask, __entry->events)
++	TP_printk("ring %p, req %p, op %d, data 0x%llx, mask 0x%x, events 0x%x",
++		  __entry->ctx, __entry->req, __entry->opcode,
++		  (unsigned long long) __entry->user_data,
++		  __entry->mask, __entry->events)
+ );
+ 
+ TRACE_EVENT(io_uring_poll_wake,
+@@ -437,27 +461,40 @@ TRACE_EVENT(io_uring_task_add,
+ 			  __entry->mask)
+ );
+ 
++/*
++ * io_uring_task_run - called when task_work_run() executes the poll events
++ *                     notification callbacks
++ *
++ * @ctx:		pointer to a ring context structure
++ * @req:		pointer to the armed request
++ * @opcode:		opcode of request
++ * @user_data:		user data associated with the request
++ *
++ * Allows to track when notified poll events are processed
++ */
+ TRACE_EVENT(io_uring_task_run,
+ 
+-	TP_PROTO(void *ctx, u8 opcode, u64 user_data),
++	TP_PROTO(void *ctx, void *req, u8 opcode, u64 user_data),
+ 
+-	TP_ARGS(ctx, opcode, user_data),
++	TP_ARGS(ctx, req, opcode, user_data),
+ 
+ 	TP_STRUCT__entry (
+ 		__field(  void *,	ctx		)
++		__field(  void *,	req		)
+ 		__field(  u8,		opcode		)
+ 		__field(  u64,		user_data	)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->ctx		= ctx;
++		__entry->req		= req;
+ 		__entry->opcode		= opcode;
+ 		__entry->user_data	= user_data;
+ 	),
+ 
+-	TP_printk("ring %p, op %d, data 0x%llx",
+-			  __entry->ctx, __entry->opcode,
+-			  (unsigned long long) __entry->user_data)
++	TP_printk("ring %p, req %p, op %d, data 0x%llx",
++		  __entry->ctx, __entry->req, __entry->opcode,
++		  (unsigned long long) __entry->user_data)
+ );
+ 
+ #endif /* _TRACE_IO_URING_H */
+diff --git a/include/uapi/linux/eventpoll.h b/include/uapi/linux/eventpoll.h
+index 8a3432d0f0dcb..e687658843b1c 100644
+--- a/include/uapi/linux/eventpoll.h
++++ b/include/uapi/linux/eventpoll.h
+@@ -41,6 +41,12 @@
+ #define EPOLLMSG	(__force __poll_t)0x00000400
+ #define EPOLLRDHUP	(__force __poll_t)0x00002000
+ 
++/*
++ * Internal flag - wakeup generated by io_uring, used to detect recursion back
++ * into the io_uring poll handler.
++ */
++#define EPOLL_URING_WAKE	((__force __poll_t)(1U << 27))
++
+ /* Set exclusive wakeup mode for the target file descriptor */
+ #define EPOLLEXCLUSIVE	((__force __poll_t)(1U << 28))
+ 
+diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
+index 98d8e06dea220..6481db9370028 100644
+--- a/include/uapi/linux/io_uring.h
++++ b/include/uapi/linux/io_uring.h
+@@ -42,23 +42,25 @@ struct io_uring_sqe {
+ 		__u32		statx_flags;
+ 		__u32		fadvise_advice;
+ 		__u32		splice_flags;
++		__u32		rename_flags;
++		__u32		unlink_flags;
++		__u32		hardlink_flags;
+ 	};
+ 	__u64	user_data;	/* data to be passed back at completion time */
++	/* pack this to avoid bogus arm OABI complaints */
+ 	union {
+-		struct {
+-			/* pack this to avoid bogus arm OABI complaints */
+-			union {
+-				/* index into fixed buffers, if used */
+-				__u16	buf_index;
+-				/* for grouped buffer selection */
+-				__u16	buf_group;
+-			} __attribute__((packed));
+-			/* personality to use, if used */
+-			__u16	personality;
+-			__s32	splice_fd_in;
+-		};
+-		__u64	__pad2[3];
++		/* index into fixed buffers, if used */
++		__u16	buf_index;
++		/* for grouped buffer selection */
++		__u16	buf_group;
++	} __attribute__((packed));
++	/* personality to use, if used */
++	__u16	personality;
++	union {
++		__s32	splice_fd_in;
++		__u32	file_index;
+ 	};
++	__u64	__pad2[2];
+ };
+ 
+ enum {
+@@ -132,6 +134,9 @@ enum {
+ 	IORING_OP_PROVIDE_BUFFERS,
+ 	IORING_OP_REMOVE_BUFFERS,
+ 	IORING_OP_TEE,
++	IORING_OP_SHUTDOWN,
++	IORING_OP_RENAMEAT,
++	IORING_OP_UNLINKAT,
+ 
+ 	/* this goes last, obviously */
+ 	IORING_OP_LAST,
+@@ -145,14 +150,34 @@ enum {
+ /*
+  * sqe->timeout_flags
+  */
+-#define IORING_TIMEOUT_ABS	(1U << 0)
+-
++#define IORING_TIMEOUT_ABS		(1U << 0)
++#define IORING_TIMEOUT_UPDATE		(1U << 1)
++#define IORING_TIMEOUT_BOOTTIME		(1U << 2)
++#define IORING_TIMEOUT_REALTIME		(1U << 3)
++#define IORING_LINK_TIMEOUT_UPDATE	(1U << 4)
++#define IORING_TIMEOUT_CLOCK_MASK	(IORING_TIMEOUT_BOOTTIME | IORING_TIMEOUT_REALTIME)
++#define IORING_TIMEOUT_UPDATE_MASK	(IORING_TIMEOUT_UPDATE | IORING_LINK_TIMEOUT_UPDATE)
+ /*
+  * sqe->splice_flags
+  * extends splice(2) flags
+  */
+ #define SPLICE_F_FD_IN_FIXED	(1U << 31) /* the last bit of __u32 */
+ 
++/*
++ * POLL_ADD flags. Note that since sqe->poll_events is the flag space, the
++ * command flags for POLL_ADD are stored in sqe->len.
++ *
++ * IORING_POLL_ADD_MULTI	Multishot poll. Sets IORING_CQE_F_MORE if
++ *				the poll handler will continue to report
++ *				CQEs on behalf of the same SQE.
++ *
++ * IORING_POLL_UPDATE		Update existing poll request, matching
++ *				sqe->addr as the old user_data field.
++ */
++#define IORING_POLL_ADD_MULTI	(1U << 0)
++#define IORING_POLL_UPDATE_EVENTS	(1U << 1)
++#define IORING_POLL_UPDATE_USER_DATA	(1U << 2)
++
+ /*
+  * IO completion data structure (Completion Queue Entry)
+  */
+@@ -166,8 +191,10 @@ struct io_uring_cqe {
+  * cqe->flags
+  *
+  * IORING_CQE_F_BUFFER	If set, the upper 16 bits are the buffer ID
++ * IORING_CQE_F_MORE	If set, parent SQE will generate more CQE entries
+  */
+ #define IORING_CQE_F_BUFFER		(1U << 0)
++#define IORING_CQE_F_MORE		(1U << 1)
+ 
+ enum {
+ 	IORING_CQE_BUFFER_SHIFT		= 16,
+@@ -226,6 +253,7 @@ struct io_cqring_offsets {
+ #define IORING_ENTER_GETEVENTS	(1U << 0)
+ #define IORING_ENTER_SQ_WAKEUP	(1U << 1)
+ #define IORING_ENTER_SQ_WAIT	(1U << 2)
++#define IORING_ENTER_EXT_ARG	(1U << 3)
+ 
+ /*
+  * Passed in for io_uring_setup(2). Copied back with updated info on success
+@@ -253,6 +281,10 @@ struct io_uring_params {
+ #define IORING_FEAT_CUR_PERSONALITY	(1U << 4)
+ #define IORING_FEAT_FAST_POLL		(1U << 5)
+ #define IORING_FEAT_POLL_32BITS 	(1U << 6)
++#define IORING_FEAT_SQPOLL_NONFIXED	(1U << 7)
++#define IORING_FEAT_EXT_ARG		(1U << 8)
++#define IORING_FEAT_NATIVE_WORKERS	(1U << 9)
++#define IORING_FEAT_RSRC_TAGS		(1U << 10)
+ 
+ /*
+  * io_uring_register(2) opcodes and arguments
+@@ -272,16 +304,62 @@ enum {
+ 	IORING_REGISTER_RESTRICTIONS		= 11,
+ 	IORING_REGISTER_ENABLE_RINGS		= 12,
+ 
++	/* extended with tagging */
++	IORING_REGISTER_FILES2			= 13,
++	IORING_REGISTER_FILES_UPDATE2		= 14,
++	IORING_REGISTER_BUFFERS2		= 15,
++	IORING_REGISTER_BUFFERS_UPDATE		= 16,
++
++	/* set/clear io-wq thread affinities */
++	IORING_REGISTER_IOWQ_AFF		= 17,
++	IORING_UNREGISTER_IOWQ_AFF		= 18,
++
++	/* set/get max number of io-wq workers */
++	IORING_REGISTER_IOWQ_MAX_WORKERS	= 19,
++
+ 	/* this goes last */
+ 	IORING_REGISTER_LAST
+ };
+ 
++/* io-wq worker categories */
++enum {
++	IO_WQ_BOUND,
++	IO_WQ_UNBOUND,
++};
++
++/* deprecated, see struct io_uring_rsrc_update */
+ struct io_uring_files_update {
+ 	__u32 offset;
+ 	__u32 resv;
+ 	__aligned_u64 /* __s32 * */ fds;
+ };
+ 
++struct io_uring_rsrc_register {
++	__u32 nr;
++	__u32 resv;
++	__u64 resv2;
++	__aligned_u64 data;
++	__aligned_u64 tags;
++};
++
++struct io_uring_rsrc_update {
++	__u32 offset;
++	__u32 resv;
++	__aligned_u64 data;
++};
++
++struct io_uring_rsrc_update2 {
++	__u32 offset;
++	__u32 resv;
++	__aligned_u64 data;
++	__aligned_u64 tags;
++	__u32 nr;
++	__u32 resv2;
++};
++
++/* Skip updating fd indexes set to this value in the fd table */
++#define IORING_REGISTER_FILES_SKIP	(-2)
++
+ #define IO_URING_OP_SUPPORTED	(1U << 0)
+ 
+ struct io_uring_probe_op {
+@@ -329,4 +407,11 @@ enum {
+ 	IORING_RESTRICTION_LAST
+ };
+ 
++struct io_uring_getevents_arg {
++	__u64	sigmask;
++	__u32	sigmask_sz;
++	__u32	pad;
++	__u64	ts;
++};
++
+ #endif
+diff --git a/include/uapi/linux/openat2.h b/include/uapi/linux/openat2.h
+index 58b1eb7113600..a5feb76049487 100644
+--- a/include/uapi/linux/openat2.h
++++ b/include/uapi/linux/openat2.h
+@@ -35,5 +35,9 @@ struct open_how {
+ #define RESOLVE_IN_ROOT		0x10 /* Make all jumps to "/" and ".."
+ 					be scoped inside the dirfd
+ 					(similar to chroot(2)). */
++#define RESOLVE_CACHED		0x20 /* Only complete if resolution can be
++					completed through cached lookup. May
++					return -EAGAIN if that's not
++					possible. */
+ 
+ #endif /* _UAPI_LINUX_OPENAT2_H */
+diff --git a/io_uring/Makefile b/io_uring/Makefile
+new file mode 100644
+index 0000000000000..3680425df9478
+--- /dev/null
++++ b/io_uring/Makefile
+@@ -0,0 +1,6 @@
++# SPDX-License-Identifier: GPL-2.0
++#
++# Makefile for io_uring
++
++obj-$(CONFIG_IO_URING)		+= io_uring.o
++obj-$(CONFIG_IO_WQ)		+= io-wq.o
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+new file mode 100644
+index 0000000000000..6031fb319d878
+--- /dev/null
++++ b/io_uring/io-wq.c
+@@ -0,0 +1,1398 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Basic worker thread pool for io_uring
++ *
++ * Copyright (C) 2019 Jens Axboe
++ *
++ */
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/errno.h>
++#include <linux/sched/signal.h>
++#include <linux/percpu.h>
++#include <linux/slab.h>
++#include <linux/rculist_nulls.h>
++#include <linux/cpu.h>
++#include <linux/tracehook.h>
++#include <uapi/linux/io_uring.h>
++
++#include "io-wq.h"
++
++#define WORKER_IDLE_TIMEOUT	(5 * HZ)
++
++enum {
++	IO_WORKER_F_UP		= 1,	/* up and active */
++	IO_WORKER_F_RUNNING	= 2,	/* account as running */
++	IO_WORKER_F_FREE	= 4,	/* worker on free list */
++	IO_WORKER_F_BOUND	= 8,	/* is doing bounded work */
++};
++
++enum {
++	IO_WQ_BIT_EXIT		= 0,	/* wq exiting */
++};
++
++enum {
++	IO_ACCT_STALLED_BIT	= 0,	/* stalled on hash */
++};
++
++/*
++ * One for each thread in a wqe pool
++ */
++struct io_worker {
++	refcount_t ref;
++	unsigned flags;
++	struct hlist_nulls_node nulls_node;
++	struct list_head all_list;
++	struct task_struct *task;
++	struct io_wqe *wqe;
++
++	struct io_wq_work *cur_work;
++	spinlock_t lock;
++
++	struct completion ref_done;
++
++	unsigned long create_state;
++	struct callback_head create_work;
++	int create_index;
++
++	union {
++		struct rcu_head rcu;
++		struct work_struct work;
++	};
++};
++
++#if BITS_PER_LONG == 64
++#define IO_WQ_HASH_ORDER	6
++#else
++#define IO_WQ_HASH_ORDER	5
++#endif
++
++#define IO_WQ_NR_HASH_BUCKETS	(1u << IO_WQ_HASH_ORDER)
++
++struct io_wqe_acct {
++	unsigned nr_workers;
++	unsigned max_workers;
++	int index;
++	atomic_t nr_running;
++	struct io_wq_work_list work_list;
++	unsigned long flags;
++};
++
++enum {
++	IO_WQ_ACCT_BOUND,
++	IO_WQ_ACCT_UNBOUND,
++	IO_WQ_ACCT_NR,
++};
++
++/*
++ * Per-node worker thread pool
++ */
++struct io_wqe {
++	raw_spinlock_t lock;
++	struct io_wqe_acct acct[2];
++
++	int node;
++
++	struct hlist_nulls_head free_list;
++	struct list_head all_list;
++
++	struct wait_queue_entry wait;
++
++	struct io_wq *wq;
++	struct io_wq_work *hash_tail[IO_WQ_NR_HASH_BUCKETS];
++
++	cpumask_var_t cpu_mask;
++};
++
++/*
++ * Per io_wq state
++  */
++struct io_wq {
++	unsigned long state;
++
++	free_work_fn *free_work;
++	io_wq_work_fn *do_work;
++
++	struct io_wq_hash *hash;
++
++	atomic_t worker_refs;
++	struct completion worker_done;
++
++	struct hlist_node cpuhp_node;
++
++	struct task_struct *task;
++
++	struct io_wqe *wqes[];
++};
++
++static enum cpuhp_state io_wq_online;
++
++struct io_cb_cancel_data {
++	work_cancel_fn *fn;
++	void *data;
++	int nr_running;
++	int nr_pending;
++	bool cancel_all;
++};
++
++static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index);
++static void io_wqe_dec_running(struct io_worker *worker);
++static bool io_acct_cancel_pending_work(struct io_wqe *wqe,
++					struct io_wqe_acct *acct,
++					struct io_cb_cancel_data *match);
++static void create_worker_cb(struct callback_head *cb);
++static void io_wq_cancel_tw_create(struct io_wq *wq);
++
++static bool io_worker_get(struct io_worker *worker)
++{
++	return refcount_inc_not_zero(&worker->ref);
++}
++
++static void io_worker_release(struct io_worker *worker)
++{
++	if (refcount_dec_and_test(&worker->ref))
++		complete(&worker->ref_done);
++}
++
++static inline struct io_wqe_acct *io_get_acct(struct io_wqe *wqe, bool bound)
++{
++	return &wqe->acct[bound ? IO_WQ_ACCT_BOUND : IO_WQ_ACCT_UNBOUND];
++}
++
++static inline struct io_wqe_acct *io_work_get_acct(struct io_wqe *wqe,
++						   struct io_wq_work *work)
++{
++	return io_get_acct(wqe, !(work->flags & IO_WQ_WORK_UNBOUND));
++}
++
++static inline struct io_wqe_acct *io_wqe_get_acct(struct io_worker *worker)
++{
++	return io_get_acct(worker->wqe, worker->flags & IO_WORKER_F_BOUND);
++}
++
++static void io_worker_ref_put(struct io_wq *wq)
++{
++	if (atomic_dec_and_test(&wq->worker_refs))
++		complete(&wq->worker_done);
++}
++
++static void io_worker_cancel_cb(struct io_worker *worker)
++{
++	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++	struct io_wqe *wqe = worker->wqe;
++	struct io_wq *wq = wqe->wq;
++
++	atomic_dec(&acct->nr_running);
++	raw_spin_lock(&worker->wqe->lock);
++	acct->nr_workers--;
++	raw_spin_unlock(&worker->wqe->lock);
++	io_worker_ref_put(wq);
++	clear_bit_unlock(0, &worker->create_state);
++	io_worker_release(worker);
++}
++
++static bool io_task_worker_match(struct callback_head *cb, void *data)
++{
++	struct io_worker *worker;
++
++	if (cb->func != create_worker_cb)
++		return false;
++	worker = container_of(cb, struct io_worker, create_work);
++	return worker == data;
++}
++
++static void io_worker_exit(struct io_worker *worker)
++{
++	struct io_wqe *wqe = worker->wqe;
++	struct io_wq *wq = wqe->wq;
++
++	while (1) {
++		struct callback_head *cb = task_work_cancel_match(wq->task,
++						io_task_worker_match, worker);
++
++		if (!cb)
++			break;
++		io_worker_cancel_cb(worker);
++	}
++
++	if (refcount_dec_and_test(&worker->ref))
++		complete(&worker->ref_done);
++	wait_for_completion(&worker->ref_done);
++
++	raw_spin_lock(&wqe->lock);
++	if (worker->flags & IO_WORKER_F_FREE)
++		hlist_nulls_del_rcu(&worker->nulls_node);
++	list_del_rcu(&worker->all_list);
++	preempt_disable();
++	io_wqe_dec_running(worker);
++	worker->flags = 0;
++	current->flags &= ~PF_IO_WORKER;
++	preempt_enable();
++	raw_spin_unlock(&wqe->lock);
++
++	kfree_rcu(worker, rcu);
++	io_worker_ref_put(wqe->wq);
++	do_exit(0);
++}
++
++static inline bool io_acct_run_queue(struct io_wqe_acct *acct)
++{
++	if (!wq_list_empty(&acct->work_list) &&
++	    !test_bit(IO_ACCT_STALLED_BIT, &acct->flags))
++		return true;
++	return false;
++}
++
++/*
++ * Check head of free list for an available worker. If one isn't available,
++ * caller must create one.
++ */
++static bool io_wqe_activate_free_worker(struct io_wqe *wqe,
++					struct io_wqe_acct *acct)
++	__must_hold(RCU)
++{
++	struct hlist_nulls_node *n;
++	struct io_worker *worker;
++
++	/*
++	 * Iterate free_list and see if we can find an idle worker to
++	 * activate. If a given worker is on the free_list but in the process
++	 * of exiting, keep trying.
++	 */
++	hlist_nulls_for_each_entry_rcu(worker, n, &wqe->free_list, nulls_node) {
++		if (!io_worker_get(worker))
++			continue;
++		if (io_wqe_get_acct(worker) != acct) {
++			io_worker_release(worker);
++			continue;
++		}
++		if (wake_up_process(worker->task)) {
++			io_worker_release(worker);
++			return true;
++		}
++		io_worker_release(worker);
++	}
++
++	return false;
++}
++
++/*
++ * We need a worker. If we find a free one, we're good. If not, and we're
++ * below the max number of workers, create one.
++ */
++static bool io_wqe_create_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
++{
++	/*
++	 * Most likely an attempt to queue unbounded work on an io_wq that
++	 * wasn't setup with any unbounded workers.
++	 */
++	if (unlikely(!acct->max_workers))
++		pr_warn_once("io-wq is not configured for unbound workers");
++
++	raw_spin_lock(&wqe->lock);
++	if (acct->nr_workers >= acct->max_workers) {
++		raw_spin_unlock(&wqe->lock);
++		return true;
++	}
++	acct->nr_workers++;
++	raw_spin_unlock(&wqe->lock);
++	atomic_inc(&acct->nr_running);
++	atomic_inc(&wqe->wq->worker_refs);
++	return create_io_worker(wqe->wq, wqe, acct->index);
++}
++
++static void io_wqe_inc_running(struct io_worker *worker)
++{
++	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++
++	atomic_inc(&acct->nr_running);
++}
++
++static void create_worker_cb(struct callback_head *cb)
++{
++	struct io_worker *worker;
++	struct io_wq *wq;
++	struct io_wqe *wqe;
++	struct io_wqe_acct *acct;
++	bool do_create = false;
++
++	worker = container_of(cb, struct io_worker, create_work);
++	wqe = worker->wqe;
++	wq = wqe->wq;
++	acct = &wqe->acct[worker->create_index];
++	raw_spin_lock(&wqe->lock);
++	if (acct->nr_workers < acct->max_workers) {
++		acct->nr_workers++;
++		do_create = true;
++	}
++	raw_spin_unlock(&wqe->lock);
++	if (do_create) {
++		create_io_worker(wq, wqe, worker->create_index);
++	} else {
++		atomic_dec(&acct->nr_running);
++		io_worker_ref_put(wq);
++	}
++	clear_bit_unlock(0, &worker->create_state);
++	io_worker_release(worker);
++}
++
++static bool io_queue_worker_create(struct io_worker *worker,
++				   struct io_wqe_acct *acct,
++				   task_work_func_t func)
++{
++	struct io_wqe *wqe = worker->wqe;
++	struct io_wq *wq = wqe->wq;
++
++	/* raced with exit, just ignore create call */
++	if (test_bit(IO_WQ_BIT_EXIT, &wq->state))
++		goto fail;
++	if (!io_worker_get(worker))
++		goto fail;
++	/*
++	 * create_state manages ownership of create_work/index. We should
++	 * only need one entry per worker, as the worker going to sleep
++	 * will trigger the condition, and waking will clear it once it
++	 * runs the task_work.
++	 */
++	if (test_bit(0, &worker->create_state) ||
++	    test_and_set_bit_lock(0, &worker->create_state))
++		goto fail_release;
++
++	atomic_inc(&wq->worker_refs);
++	init_task_work(&worker->create_work, func);
++	worker->create_index = acct->index;
++	if (!task_work_add(wq->task, &worker->create_work, TWA_SIGNAL)) {
++		/*
++		 * EXIT may have been set after checking it above, check after
++		 * adding the task_work and remove any creation item if it is
++		 * now set. wq exit does that too, but we can have added this
++		 * work item after we canceled in io_wq_exit_workers().
++		 */
++		if (test_bit(IO_WQ_BIT_EXIT, &wq->state))
++			io_wq_cancel_tw_create(wq);
++		io_worker_ref_put(wq);
++		return true;
++	}
++	io_worker_ref_put(wq);
++	clear_bit_unlock(0, &worker->create_state);
++fail_release:
++	io_worker_release(worker);
++fail:
++	atomic_dec(&acct->nr_running);
++	io_worker_ref_put(wq);
++	return false;
++}
++
++static void io_wqe_dec_running(struct io_worker *worker)
++	__must_hold(wqe->lock)
++{
++	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++	struct io_wqe *wqe = worker->wqe;
++
++	if (!(worker->flags & IO_WORKER_F_UP))
++		return;
++
++	if (atomic_dec_and_test(&acct->nr_running) && io_acct_run_queue(acct)) {
++		atomic_inc(&acct->nr_running);
++		atomic_inc(&wqe->wq->worker_refs);
++		raw_spin_unlock(&wqe->lock);
++		io_queue_worker_create(worker, acct, create_worker_cb);
++		raw_spin_lock(&wqe->lock);
++	}
++}
++
++/*
++ * Worker will start processing some work. Move it to the busy list, if
++ * it's currently on the freelist
++ */
++static void __io_worker_busy(struct io_wqe *wqe, struct io_worker *worker,
++			     struct io_wq_work *work)
++	__must_hold(wqe->lock)
++{
++	if (worker->flags & IO_WORKER_F_FREE) {
++		worker->flags &= ~IO_WORKER_F_FREE;
++		hlist_nulls_del_init_rcu(&worker->nulls_node);
++	}
++}
++
++/*
++ * No work, worker going to sleep. Move to freelist, and unuse mm if we
++ * have one attached. Dropping the mm may potentially sleep, so we drop
++ * the lock in that case and return success. Since the caller has to
++ * retry the loop in that case (we changed task state), we don't regrab
++ * the lock if we return success.
++ */
++static void __io_worker_idle(struct io_wqe *wqe, struct io_worker *worker)
++	__must_hold(wqe->lock)
++{
++	if (!(worker->flags & IO_WORKER_F_FREE)) {
++		worker->flags |= IO_WORKER_F_FREE;
++		hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
++	}
++}
++
++static inline unsigned int io_get_work_hash(struct io_wq_work *work)
++{
++	return work->flags >> IO_WQ_HASH_SHIFT;
++}
++
++static bool io_wait_on_hash(struct io_wqe *wqe, unsigned int hash)
++{
++	struct io_wq *wq = wqe->wq;
++	bool ret = false;
++
++	spin_lock_irq(&wq->hash->wait.lock);
++	if (list_empty(&wqe->wait.entry)) {
++		__add_wait_queue(&wq->hash->wait, &wqe->wait);
++		if (!test_bit(hash, &wq->hash->map)) {
++			__set_current_state(TASK_RUNNING);
++			list_del_init(&wqe->wait.entry);
++			ret = true;
++		}
++	}
++	spin_unlock_irq(&wq->hash->wait.lock);
++	return ret;
++}
++
++static struct io_wq_work *io_get_next_work(struct io_wqe_acct *acct,
++					   struct io_worker *worker)
++	__must_hold(wqe->lock)
++{
++	struct io_wq_work_node *node, *prev;
++	struct io_wq_work *work, *tail;
++	unsigned int stall_hash = -1U;
++	struct io_wqe *wqe = worker->wqe;
++
++	wq_list_for_each(node, prev, &acct->work_list) {
++		unsigned int hash;
++
++		work = container_of(node, struct io_wq_work, list);
++
++		/* not hashed, can run anytime */
++		if (!io_wq_is_hashed(work)) {
++			wq_list_del(&acct->work_list, node, prev);
++			return work;
++		}
++
++		hash = io_get_work_hash(work);
++		/* all items with this hash lie in [work, tail] */
++		tail = wqe->hash_tail[hash];
++
++		/* hashed, can run if not already running */
++		if (!test_and_set_bit(hash, &wqe->wq->hash->map)) {
++			wqe->hash_tail[hash] = NULL;
++			wq_list_cut(&acct->work_list, &tail->list, prev);
++			return work;
++		}
++		if (stall_hash == -1U)
++			stall_hash = hash;
++		/* fast forward to a next hash, for-each will fix up @prev */
++		node = &tail->list;
++	}
++
++	if (stall_hash != -1U) {
++		bool unstalled;
++
++		/*
++		 * Set this before dropping the lock to avoid racing with new
++		 * work being added and clearing the stalled bit.
++		 */
++		set_bit(IO_ACCT_STALLED_BIT, &acct->flags);
++		raw_spin_unlock(&wqe->lock);
++		unstalled = io_wait_on_hash(wqe, stall_hash);
++		raw_spin_lock(&wqe->lock);
++		if (unstalled) {
++			clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
++			if (wq_has_sleeper(&wqe->wq->hash->wait))
++				wake_up(&wqe->wq->hash->wait);
++		}
++	}
++
++	return NULL;
++}
++
++static bool io_flush_signals(void)
++{
++	if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL))) {
++		__set_current_state(TASK_RUNNING);
++		tracehook_notify_signal();
++		return true;
++	}
++	return false;
++}
++
++static void io_assign_current_work(struct io_worker *worker,
++				   struct io_wq_work *work)
++{
++	if (work) {
++		io_flush_signals();
++		cond_resched();
++	}
++
++	spin_lock(&worker->lock);
++	worker->cur_work = work;
++	spin_unlock(&worker->lock);
++}
++
++static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work);
++
++static void io_worker_handle_work(struct io_worker *worker)
++	__releases(wqe->lock)
++{
++	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++	struct io_wqe *wqe = worker->wqe;
++	struct io_wq *wq = wqe->wq;
++	bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state);
++
++	do {
++		struct io_wq_work *work;
++get_next:
++		/*
++		 * If we got some work, mark us as busy. If we didn't, but
++		 * the list isn't empty, it means we stalled on hashed work.
++		 * Mark us stalled so we don't keep looking for work when we
++		 * can't make progress, any work completion or insertion will
++		 * clear the stalled flag.
++		 */
++		work = io_get_next_work(acct, worker);
++		if (work)
++			__io_worker_busy(wqe, worker, work);
++
++		raw_spin_unlock(&wqe->lock);
++		if (!work)
++			break;
++		io_assign_current_work(worker, work);
++		__set_current_state(TASK_RUNNING);
++
++		/* handle a whole dependent link */
++		do {
++			struct io_wq_work *next_hashed, *linked;
++			unsigned int hash = io_get_work_hash(work);
++
++			next_hashed = wq_next_work(work);
++
++			if (unlikely(do_kill) && (work->flags & IO_WQ_WORK_UNBOUND))
++				work->flags |= IO_WQ_WORK_CANCEL;
++			wq->do_work(work);
++			io_assign_current_work(worker, NULL);
++
++			linked = wq->free_work(work);
++			work = next_hashed;
++			if (!work && linked && !io_wq_is_hashed(linked)) {
++				work = linked;
++				linked = NULL;
++			}
++			io_assign_current_work(worker, work);
++			if (linked)
++				io_wqe_enqueue(wqe, linked);
++
++			if (hash != -1U && !next_hashed) {
++				/* serialize hash clear with wake_up() */
++				spin_lock_irq(&wq->hash->wait.lock);
++				clear_bit(hash, &wq->hash->map);
++				clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
++				spin_unlock_irq(&wq->hash->wait.lock);
++				if (wq_has_sleeper(&wq->hash->wait))
++					wake_up(&wq->hash->wait);
++				raw_spin_lock(&wqe->lock);
++				/* skip unnecessary unlock-lock wqe->lock */
++				if (!work)
++					goto get_next;
++				raw_spin_unlock(&wqe->lock);
++			}
++		} while (work);
++
++		raw_spin_lock(&wqe->lock);
++	} while (1);
++}
++
++static int io_wqe_worker(void *data)
++{
++	struct io_worker *worker = data;
++	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++	struct io_wqe *wqe = worker->wqe;
++	struct io_wq *wq = wqe->wq;
++	bool last_timeout = false;
++	char buf[TASK_COMM_LEN];
++
++	worker->flags |= (IO_WORKER_F_UP | IO_WORKER_F_RUNNING);
++
++	snprintf(buf, sizeof(buf), "iou-wrk-%d", wq->task->pid);
++	set_task_comm(current, buf);
++
++	while (!test_bit(IO_WQ_BIT_EXIT, &wq->state)) {
++		long ret;
++
++		set_current_state(TASK_INTERRUPTIBLE);
++loop:
++		raw_spin_lock(&wqe->lock);
++		if (io_acct_run_queue(acct)) {
++			io_worker_handle_work(worker);
++			goto loop;
++		}
++		/* timed out, exit unless we're the last worker */
++		if (last_timeout && acct->nr_workers > 1) {
++			acct->nr_workers--;
++			raw_spin_unlock(&wqe->lock);
++			__set_current_state(TASK_RUNNING);
++			break;
++		}
++		last_timeout = false;
++		__io_worker_idle(wqe, worker);
++		raw_spin_unlock(&wqe->lock);
++		if (io_flush_signals())
++			continue;
++		ret = schedule_timeout(WORKER_IDLE_TIMEOUT);
++		if (signal_pending(current)) {
++			struct ksignal ksig;
++
++			if (!get_signal(&ksig))
++				continue;
++			break;
++		}
++		last_timeout = !ret;
++	}
++
++	if (test_bit(IO_WQ_BIT_EXIT, &wq->state)) {
++		raw_spin_lock(&wqe->lock);
++		io_worker_handle_work(worker);
++	}
++
++	io_worker_exit(worker);
++	return 0;
++}
++
++/*
++ * Called when a worker is scheduled in. Mark us as currently running.
++ */
++void io_wq_worker_running(struct task_struct *tsk)
++{
++	struct io_worker *worker = tsk->pf_io_worker;
++
++	if (!worker)
++		return;
++	if (!(worker->flags & IO_WORKER_F_UP))
++		return;
++	if (worker->flags & IO_WORKER_F_RUNNING)
++		return;
++	worker->flags |= IO_WORKER_F_RUNNING;
++	io_wqe_inc_running(worker);
++}
++
++/*
++ * Called when worker is going to sleep. If there are no workers currently
++ * running and we have work pending, wake up a free one or create a new one.
++ */
++void io_wq_worker_sleeping(struct task_struct *tsk)
++{
++	struct io_worker *worker = tsk->pf_io_worker;
++
++	if (!worker)
++		return;
++	if (!(worker->flags & IO_WORKER_F_UP))
++		return;
++	if (!(worker->flags & IO_WORKER_F_RUNNING))
++		return;
++
++	worker->flags &= ~IO_WORKER_F_RUNNING;
++
++	raw_spin_lock(&worker->wqe->lock);
++	io_wqe_dec_running(worker);
++	raw_spin_unlock(&worker->wqe->lock);
++}
++
++static void io_init_new_worker(struct io_wqe *wqe, struct io_worker *worker,
++			       struct task_struct *tsk)
++{
++	tsk->pf_io_worker = worker;
++	worker->task = tsk;
++	set_cpus_allowed_ptr(tsk, wqe->cpu_mask);
++	tsk->flags |= PF_NO_SETAFFINITY;
++
++	raw_spin_lock(&wqe->lock);
++	hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
++	list_add_tail_rcu(&worker->all_list, &wqe->all_list);
++	worker->flags |= IO_WORKER_F_FREE;
++	raw_spin_unlock(&wqe->lock);
++	wake_up_new_task(tsk);
++}
++
++static bool io_wq_work_match_all(struct io_wq_work *work, void *data)
++{
++	return true;
++}
++
++static inline bool io_should_retry_thread(long err)
++{
++	/*
++	 * Prevent perpetual task_work retry, if the task (or its group) is
++	 * exiting.
++	 */
++	if (fatal_signal_pending(current))
++		return false;
++
++	switch (err) {
++	case -EAGAIN:
++	case -ERESTARTSYS:
++	case -ERESTARTNOINTR:
++	case -ERESTARTNOHAND:
++		return true;
++	default:
++		return false;
++	}
++}
++
++static void create_worker_cont(struct callback_head *cb)
++{
++	struct io_worker *worker;
++	struct task_struct *tsk;
++	struct io_wqe *wqe;
++
++	worker = container_of(cb, struct io_worker, create_work);
++	clear_bit_unlock(0, &worker->create_state);
++	wqe = worker->wqe;
++	tsk = create_io_thread(io_wqe_worker, worker, wqe->node);
++	if (!IS_ERR(tsk)) {
++		io_init_new_worker(wqe, worker, tsk);
++		io_worker_release(worker);
++		return;
++	} else if (!io_should_retry_thread(PTR_ERR(tsk))) {
++		struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++
++		atomic_dec(&acct->nr_running);
++		raw_spin_lock(&wqe->lock);
++		acct->nr_workers--;
++		if (!acct->nr_workers) {
++			struct io_cb_cancel_data match = {
++				.fn		= io_wq_work_match_all,
++				.cancel_all	= true,
++			};
++
++			while (io_acct_cancel_pending_work(wqe, acct, &match))
++				raw_spin_lock(&wqe->lock);
++		}
++		raw_spin_unlock(&wqe->lock);
++		io_worker_ref_put(wqe->wq);
++		kfree(worker);
++		return;
++	}
++
++	/* re-create attempts grab a new worker ref, drop the existing one */
++	io_worker_release(worker);
++	schedule_work(&worker->work);
++}
++
++static void io_workqueue_create(struct work_struct *work)
++{
++	struct io_worker *worker = container_of(work, struct io_worker, work);
++	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++
++	if (!io_queue_worker_create(worker, acct, create_worker_cont))
++		kfree(worker);
++}
++
++static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
++{
++	struct io_wqe_acct *acct = &wqe->acct[index];
++	struct io_worker *worker;
++	struct task_struct *tsk;
++
++	__set_current_state(TASK_RUNNING);
++
++	worker = kzalloc_node(sizeof(*worker), GFP_KERNEL, wqe->node);
++	if (!worker) {
++fail:
++		atomic_dec(&acct->nr_running);
++		raw_spin_lock(&wqe->lock);
++		acct->nr_workers--;
++		raw_spin_unlock(&wqe->lock);
++		io_worker_ref_put(wq);
++		return false;
++	}
++
++	refcount_set(&worker->ref, 1);
++	worker->wqe = wqe;
++	spin_lock_init(&worker->lock);
++	init_completion(&worker->ref_done);
++
++	if (index == IO_WQ_ACCT_BOUND)
++		worker->flags |= IO_WORKER_F_BOUND;
++
++	tsk = create_io_thread(io_wqe_worker, worker, wqe->node);
++	if (!IS_ERR(tsk)) {
++		io_init_new_worker(wqe, worker, tsk);
++	} else if (!io_should_retry_thread(PTR_ERR(tsk))) {
++		kfree(worker);
++		goto fail;
++	} else {
++		INIT_WORK(&worker->work, io_workqueue_create);
++		schedule_work(&worker->work);
++	}
++
++	return true;
++}
++
++/*
++ * Iterate the passed in list and call the specific function for each
++ * worker that isn't exiting
++ */
++static bool io_wq_for_each_worker(struct io_wqe *wqe,
++				  bool (*func)(struct io_worker *, void *),
++				  void *data)
++{
++	struct io_worker *worker;
++	bool ret = false;
++
++	list_for_each_entry_rcu(worker, &wqe->all_list, all_list) {
++		if (io_worker_get(worker)) {
++			/* no task if node is/was offline */
++			if (worker->task)
++				ret = func(worker, data);
++			io_worker_release(worker);
++			if (ret)
++				break;
++		}
++	}
++
++	return ret;
++}
++
++static bool io_wq_worker_wake(struct io_worker *worker, void *data)
++{
++	set_notify_signal(worker->task);
++	wake_up_process(worker->task);
++	return false;
++}
++
++static void io_run_cancel(struct io_wq_work *work, struct io_wqe *wqe)
++{
++	struct io_wq *wq = wqe->wq;
++
++	do {
++		work->flags |= IO_WQ_WORK_CANCEL;
++		wq->do_work(work);
++		work = wq->free_work(work);
++	} while (work);
++}
++
++static void io_wqe_insert_work(struct io_wqe *wqe, struct io_wq_work *work)
++{
++	struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
++	unsigned int hash;
++	struct io_wq_work *tail;
++
++	if (!io_wq_is_hashed(work)) {
++append:
++		wq_list_add_tail(&work->list, &acct->work_list);
++		return;
++	}
++
++	hash = io_get_work_hash(work);
++	tail = wqe->hash_tail[hash];
++	wqe->hash_tail[hash] = work;
++	if (!tail)
++		goto append;
++
++	wq_list_add_after(&work->list, &tail->list, &acct->work_list);
++}
++
++static bool io_wq_work_match_item(struct io_wq_work *work, void *data)
++{
++	return work == data;
++}
++
++static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
++{
++	struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
++	unsigned work_flags = work->flags;
++	bool do_create;
++
++	/*
++	 * If io-wq is exiting for this task, or if the request has explicitly
++	 * been marked as one that should not get executed, cancel it here.
++	 */
++	if (test_bit(IO_WQ_BIT_EXIT, &wqe->wq->state) ||
++	    (work->flags & IO_WQ_WORK_CANCEL)) {
++		io_run_cancel(work, wqe);
++		return;
++	}
++
++	raw_spin_lock(&wqe->lock);
++	io_wqe_insert_work(wqe, work);
++	clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
++
++	rcu_read_lock();
++	do_create = !io_wqe_activate_free_worker(wqe, acct);
++	rcu_read_unlock();
++
++	raw_spin_unlock(&wqe->lock);
++
++	if (do_create && ((work_flags & IO_WQ_WORK_CONCURRENT) ||
++	    !atomic_read(&acct->nr_running))) {
++		bool did_create;
++
++		did_create = io_wqe_create_worker(wqe, acct);
++		if (likely(did_create))
++			return;
++
++		raw_spin_lock(&wqe->lock);
++		/* fatal condition, failed to create the first worker */
++		if (!acct->nr_workers) {
++			struct io_cb_cancel_data match = {
++				.fn		= io_wq_work_match_item,
++				.data		= work,
++				.cancel_all	= false,
++			};
++
++			if (io_acct_cancel_pending_work(wqe, acct, &match))
++				raw_spin_lock(&wqe->lock);
++		}
++		raw_spin_unlock(&wqe->lock);
++	}
++}
++
++void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work)
++{
++	struct io_wqe *wqe = wq->wqes[numa_node_id()];
++
++	io_wqe_enqueue(wqe, work);
++}
++
++/*
++ * Work items that hash to the same value will not be done in parallel.
++ * Used to limit concurrent writes, generally hashed by inode.
++ */
++void io_wq_hash_work(struct io_wq_work *work, void *val)
++{
++	unsigned int bit;
++
++	bit = hash_ptr(val, IO_WQ_HASH_ORDER);
++	work->flags |= (IO_WQ_WORK_HASHED | (bit << IO_WQ_HASH_SHIFT));
++}
++
++static bool io_wq_worker_cancel(struct io_worker *worker, void *data)
++{
++	struct io_cb_cancel_data *match = data;
++
++	/*
++	 * Hold the lock to avoid ->cur_work going out of scope, caller
++	 * may dereference the passed in work.
++	 */
++	spin_lock(&worker->lock);
++	if (worker->cur_work &&
++	    match->fn(worker->cur_work, match->data)) {
++		set_notify_signal(worker->task);
++		match->nr_running++;
++	}
++	spin_unlock(&worker->lock);
++
++	return match->nr_running && !match->cancel_all;
++}
++
++static inline void io_wqe_remove_pending(struct io_wqe *wqe,
++					 struct io_wq_work *work,
++					 struct io_wq_work_node *prev)
++{
++	struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
++	unsigned int hash = io_get_work_hash(work);
++	struct io_wq_work *prev_work = NULL;
++
++	if (io_wq_is_hashed(work) && work == wqe->hash_tail[hash]) {
++		if (prev)
++			prev_work = container_of(prev, struct io_wq_work, list);
++		if (prev_work && io_get_work_hash(prev_work) == hash)
++			wqe->hash_tail[hash] = prev_work;
++		else
++			wqe->hash_tail[hash] = NULL;
++	}
++	wq_list_del(&acct->work_list, &work->list, prev);
++}
++
++static bool io_acct_cancel_pending_work(struct io_wqe *wqe,
++					struct io_wqe_acct *acct,
++					struct io_cb_cancel_data *match)
++	__releases(wqe->lock)
++{
++	struct io_wq_work_node *node, *prev;
++	struct io_wq_work *work;
++
++	wq_list_for_each(node, prev, &acct->work_list) {
++		work = container_of(node, struct io_wq_work, list);
++		if (!match->fn(work, match->data))
++			continue;
++		io_wqe_remove_pending(wqe, work, prev);
++		raw_spin_unlock(&wqe->lock);
++		io_run_cancel(work, wqe);
++		match->nr_pending++;
++		/* not safe to continue after unlock */
++		return true;
++	}
++
++	return false;
++}
++
++static void io_wqe_cancel_pending_work(struct io_wqe *wqe,
++				       struct io_cb_cancel_data *match)
++{
++	int i;
++retry:
++	raw_spin_lock(&wqe->lock);
++	for (i = 0; i < IO_WQ_ACCT_NR; i++) {
++		struct io_wqe_acct *acct = io_get_acct(wqe, i == 0);
++
++		if (io_acct_cancel_pending_work(wqe, acct, match)) {
++			if (match->cancel_all)
++				goto retry;
++			return;
++		}
++	}
++	raw_spin_unlock(&wqe->lock);
++}
++
++static void io_wqe_cancel_running_work(struct io_wqe *wqe,
++				       struct io_cb_cancel_data *match)
++{
++	rcu_read_lock();
++	io_wq_for_each_worker(wqe, io_wq_worker_cancel, match);
++	rcu_read_unlock();
++}
++
++enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
++				  void *data, bool cancel_all)
++{
++	struct io_cb_cancel_data match = {
++		.fn		= cancel,
++		.data		= data,
++		.cancel_all	= cancel_all,
++	};
++	int node;
++
++	/*
++	 * First check pending list, if we're lucky we can just remove it
++	 * from there. CANCEL_OK means that the work is returned as-new,
++	 * no completion will be posted for it.
++	 */
++	for_each_node(node) {
++		struct io_wqe *wqe = wq->wqes[node];
++
++		io_wqe_cancel_pending_work(wqe, &match);
++		if (match.nr_pending && !match.cancel_all)
++			return IO_WQ_CANCEL_OK;
++	}
++
++	/*
++	 * Now check if a free (going busy) or busy worker has the work
++	 * currently running. If we find it there, we'll return CANCEL_RUNNING
++	 * as an indication that we attempt to signal cancellation. The
++	 * completion will run normally in this case.
++	 */
++	for_each_node(node) {
++		struct io_wqe *wqe = wq->wqes[node];
++
++		io_wqe_cancel_running_work(wqe, &match);
++		if (match.nr_running && !match.cancel_all)
++			return IO_WQ_CANCEL_RUNNING;
++	}
++
++	if (match.nr_running)
++		return IO_WQ_CANCEL_RUNNING;
++	if (match.nr_pending)
++		return IO_WQ_CANCEL_OK;
++	return IO_WQ_CANCEL_NOTFOUND;
++}
++
++static int io_wqe_hash_wake(struct wait_queue_entry *wait, unsigned mode,
++			    int sync, void *key)
++{
++	struct io_wqe *wqe = container_of(wait, struct io_wqe, wait);
++	int i;
++
++	list_del_init(&wait->entry);
++
++	rcu_read_lock();
++	for (i = 0; i < IO_WQ_ACCT_NR; i++) {
++		struct io_wqe_acct *acct = &wqe->acct[i];
++
++		if (test_and_clear_bit(IO_ACCT_STALLED_BIT, &acct->flags))
++			io_wqe_activate_free_worker(wqe, acct);
++	}
++	rcu_read_unlock();
++	return 1;
++}
++
++struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
++{
++	int ret, node, i;
++	struct io_wq *wq;
++
++	if (WARN_ON_ONCE(!data->free_work || !data->do_work))
++		return ERR_PTR(-EINVAL);
++	if (WARN_ON_ONCE(!bounded))
++		return ERR_PTR(-EINVAL);
++
++	wq = kzalloc(struct_size(wq, wqes, nr_node_ids), GFP_KERNEL);
++	if (!wq)
++		return ERR_PTR(-ENOMEM);
++	ret = cpuhp_state_add_instance_nocalls(io_wq_online, &wq->cpuhp_node);
++	if (ret)
++		goto err_wq;
++
++	refcount_inc(&data->hash->refs);
++	wq->hash = data->hash;
++	wq->free_work = data->free_work;
++	wq->do_work = data->do_work;
++
++	ret = -ENOMEM;
++	for_each_node(node) {
++		struct io_wqe *wqe;
++		int alloc_node = node;
++
++		if (!node_online(alloc_node))
++			alloc_node = NUMA_NO_NODE;
++		wqe = kzalloc_node(sizeof(struct io_wqe), GFP_KERNEL, alloc_node);
++		if (!wqe)
++			goto err;
++		wq->wqes[node] = wqe;
++		if (!alloc_cpumask_var(&wqe->cpu_mask, GFP_KERNEL))
++			goto err;
++		cpumask_copy(wqe->cpu_mask, cpumask_of_node(node));
++		wqe->node = alloc_node;
++		wqe->acct[IO_WQ_ACCT_BOUND].max_workers = bounded;
++		wqe->acct[IO_WQ_ACCT_UNBOUND].max_workers =
++					task_rlimit(current, RLIMIT_NPROC);
++		INIT_LIST_HEAD(&wqe->wait.entry);
++		wqe->wait.func = io_wqe_hash_wake;
++		for (i = 0; i < IO_WQ_ACCT_NR; i++) {
++			struct io_wqe_acct *acct = &wqe->acct[i];
++
++			acct->index = i;
++			atomic_set(&acct->nr_running, 0);
++			INIT_WQ_LIST(&acct->work_list);
++		}
++		wqe->wq = wq;
++		raw_spin_lock_init(&wqe->lock);
++		INIT_HLIST_NULLS_HEAD(&wqe->free_list, 0);
++		INIT_LIST_HEAD(&wqe->all_list);
++	}
++
++	wq->task = get_task_struct(data->task);
++	atomic_set(&wq->worker_refs, 1);
++	init_completion(&wq->worker_done);
++	return wq;
++err:
++	io_wq_put_hash(data->hash);
++	cpuhp_state_remove_instance_nocalls(io_wq_online, &wq->cpuhp_node);
++	for_each_node(node) {
++		if (!wq->wqes[node])
++			continue;
++		free_cpumask_var(wq->wqes[node]->cpu_mask);
++		kfree(wq->wqes[node]);
++	}
++err_wq:
++	kfree(wq);
++	return ERR_PTR(ret);
++}
++
++static bool io_task_work_match(struct callback_head *cb, void *data)
++{
++	struct io_worker *worker;
++
++	if (cb->func != create_worker_cb && cb->func != create_worker_cont)
++		return false;
++	worker = container_of(cb, struct io_worker, create_work);
++	return worker->wqe->wq == data;
++}
++
++void io_wq_exit_start(struct io_wq *wq)
++{
++	set_bit(IO_WQ_BIT_EXIT, &wq->state);
++}
++
++static void io_wq_cancel_tw_create(struct io_wq *wq)
++{
++	struct callback_head *cb;
++
++	while ((cb = task_work_cancel_match(wq->task, io_task_work_match, wq)) != NULL) {
++		struct io_worker *worker;
++
++		worker = container_of(cb, struct io_worker, create_work);
++		io_worker_cancel_cb(worker);
++	}
++}
++
++static void io_wq_exit_workers(struct io_wq *wq)
++{
++	int node;
++
++	if (!wq->task)
++		return;
++
++	io_wq_cancel_tw_create(wq);
++
++	rcu_read_lock();
++	for_each_node(node) {
++		struct io_wqe *wqe = wq->wqes[node];
++
++		io_wq_for_each_worker(wqe, io_wq_worker_wake, NULL);
++	}
++	rcu_read_unlock();
++	io_worker_ref_put(wq);
++	wait_for_completion(&wq->worker_done);
++
++	for_each_node(node) {
++		spin_lock_irq(&wq->hash->wait.lock);
++		list_del_init(&wq->wqes[node]->wait.entry);
++		spin_unlock_irq(&wq->hash->wait.lock);
++	}
++	put_task_struct(wq->task);
++	wq->task = NULL;
++}
++
++static void io_wq_destroy(struct io_wq *wq)
++{
++	int node;
++
++	cpuhp_state_remove_instance_nocalls(io_wq_online, &wq->cpuhp_node);
++
++	for_each_node(node) {
++		struct io_wqe *wqe = wq->wqes[node];
++		struct io_cb_cancel_data match = {
++			.fn		= io_wq_work_match_all,
++			.cancel_all	= true,
++		};
++		io_wqe_cancel_pending_work(wqe, &match);
++		free_cpumask_var(wqe->cpu_mask);
++		kfree(wqe);
++	}
++	io_wq_put_hash(wq->hash);
++	kfree(wq);
++}
++
++void io_wq_put_and_exit(struct io_wq *wq)
++{
++	WARN_ON_ONCE(!test_bit(IO_WQ_BIT_EXIT, &wq->state));
++
++	io_wq_exit_workers(wq);
++	io_wq_destroy(wq);
++}
++
++struct online_data {
++	unsigned int cpu;
++	bool online;
++};
++
++static bool io_wq_worker_affinity(struct io_worker *worker, void *data)
++{
++	struct online_data *od = data;
++
++	if (od->online)
++		cpumask_set_cpu(od->cpu, worker->wqe->cpu_mask);
++	else
++		cpumask_clear_cpu(od->cpu, worker->wqe->cpu_mask);
++	return false;
++}
++
++static int __io_wq_cpu_online(struct io_wq *wq, unsigned int cpu, bool online)
++{
++	struct online_data od = {
++		.cpu = cpu,
++		.online = online
++	};
++	int i;
++
++	rcu_read_lock();
++	for_each_node(i)
++		io_wq_for_each_worker(wq->wqes[i], io_wq_worker_affinity, &od);
++	rcu_read_unlock();
++	return 0;
++}
++
++static int io_wq_cpu_online(unsigned int cpu, struct hlist_node *node)
++{
++	struct io_wq *wq = hlist_entry_safe(node, struct io_wq, cpuhp_node);
++
++	return __io_wq_cpu_online(wq, cpu, true);
++}
++
++static int io_wq_cpu_offline(unsigned int cpu, struct hlist_node *node)
++{
++	struct io_wq *wq = hlist_entry_safe(node, struct io_wq, cpuhp_node);
++
++	return __io_wq_cpu_online(wq, cpu, false);
++}
++
++int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask)
++{
++	int i;
++
++	rcu_read_lock();
++	for_each_node(i) {
++		struct io_wqe *wqe = wq->wqes[i];
++
++		if (mask)
++			cpumask_copy(wqe->cpu_mask, mask);
++		else
++			cpumask_copy(wqe->cpu_mask, cpumask_of_node(i));
++	}
++	rcu_read_unlock();
++	return 0;
++}
++
++/*
++ * Set max number of unbounded workers, returns old value. If new_count is 0,
++ * then just return the old value.
++ */
++int io_wq_max_workers(struct io_wq *wq, int *new_count)
++{
++	int prev[IO_WQ_ACCT_NR];
++	bool first_node = true;
++	int i, node;
++
++	BUILD_BUG_ON((int) IO_WQ_ACCT_BOUND   != (int) IO_WQ_BOUND);
++	BUILD_BUG_ON((int) IO_WQ_ACCT_UNBOUND != (int) IO_WQ_UNBOUND);
++	BUILD_BUG_ON((int) IO_WQ_ACCT_NR      != 2);
++
++	for (i = 0; i < 2; i++) {
++		if (new_count[i] > task_rlimit(current, RLIMIT_NPROC))
++			new_count[i] = task_rlimit(current, RLIMIT_NPROC);
++	}
++
++	for (i = 0; i < IO_WQ_ACCT_NR; i++)
++		prev[i] = 0;
++
++	rcu_read_lock();
++	for_each_node(node) {
++		struct io_wqe *wqe = wq->wqes[node];
++		struct io_wqe_acct *acct;
++
++		raw_spin_lock(&wqe->lock);
++		for (i = 0; i < IO_WQ_ACCT_NR; i++) {
++			acct = &wqe->acct[i];
++			if (first_node)
++				prev[i] = max_t(int, acct->max_workers, prev[i]);
++			if (new_count[i])
++				acct->max_workers = new_count[i];
++		}
++		raw_spin_unlock(&wqe->lock);
++		first_node = false;
++	}
++	rcu_read_unlock();
++
++	for (i = 0; i < IO_WQ_ACCT_NR; i++)
++		new_count[i] = prev[i];
++
++	return 0;
++}
++
++static __init int io_wq_init(void)
++{
++	int ret;
++
++	ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "io-wq/online",
++					io_wq_cpu_online, io_wq_cpu_offline);
++	if (ret < 0)
++		return ret;
++	io_wq_online = ret;
++	return 0;
++}
++subsys_initcall(io_wq_init);
+diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
+new file mode 100644
+index 0000000000000..bf5c4c5337605
+--- /dev/null
++++ b/io_uring/io-wq.h
+@@ -0,0 +1,160 @@
++#ifndef INTERNAL_IO_WQ_H
++#define INTERNAL_IO_WQ_H
++
++#include <linux/refcount.h>
++
++struct io_wq;
++
++enum {
++	IO_WQ_WORK_CANCEL	= 1,
++	IO_WQ_WORK_HASHED	= 2,
++	IO_WQ_WORK_UNBOUND	= 4,
++	IO_WQ_WORK_CONCURRENT	= 16,
++
++	IO_WQ_HASH_SHIFT	= 24,	/* upper 8 bits are used for hash key */
++};
++
++enum io_wq_cancel {
++	IO_WQ_CANCEL_OK,	/* cancelled before started */
++	IO_WQ_CANCEL_RUNNING,	/* found, running, and attempted cancelled */
++	IO_WQ_CANCEL_NOTFOUND,	/* work not found */
++};
++
++struct io_wq_work_node {
++	struct io_wq_work_node *next;
++};
++
++struct io_wq_work_list {
++	struct io_wq_work_node *first;
++	struct io_wq_work_node *last;
++};
++
++static inline void wq_list_add_after(struct io_wq_work_node *node,
++				     struct io_wq_work_node *pos,
++				     struct io_wq_work_list *list)
++{
++	struct io_wq_work_node *next = pos->next;
++
++	pos->next = node;
++	node->next = next;
++	if (!next)
++		list->last = node;
++}
++
++static inline void wq_list_add_tail(struct io_wq_work_node *node,
++				    struct io_wq_work_list *list)
++{
++	node->next = NULL;
++	if (!list->first) {
++		list->last = node;
++		WRITE_ONCE(list->first, node);
++	} else {
++		list->last->next = node;
++		list->last = node;
++	}
++}
++
++static inline void wq_list_cut(struct io_wq_work_list *list,
++			       struct io_wq_work_node *last,
++			       struct io_wq_work_node *prev)
++{
++	/* first in the list, if prev==NULL */
++	if (!prev)
++		WRITE_ONCE(list->first, last->next);
++	else
++		prev->next = last->next;
++
++	if (last == list->last)
++		list->last = prev;
++	last->next = NULL;
++}
++
++static inline void wq_list_del(struct io_wq_work_list *list,
++			       struct io_wq_work_node *node,
++			       struct io_wq_work_node *prev)
++{
++	wq_list_cut(list, node, prev);
++}
++
++#define wq_list_for_each(pos, prv, head)			\
++	for (pos = (head)->first, prv = NULL; pos; prv = pos, pos = (pos)->next)
++
++#define wq_list_empty(list)	(READ_ONCE((list)->first) == NULL)
++#define INIT_WQ_LIST(list)	do {				\
++	(list)->first = NULL;					\
++	(list)->last = NULL;					\
++} while (0)
++
++struct io_wq_work {
++	struct io_wq_work_node list;
++	unsigned flags;
++};
++
++static inline struct io_wq_work *wq_next_work(struct io_wq_work *work)
++{
++	if (!work->list.next)
++		return NULL;
++
++	return container_of(work->list.next, struct io_wq_work, list);
++}
++
++typedef struct io_wq_work *(free_work_fn)(struct io_wq_work *);
++typedef void (io_wq_work_fn)(struct io_wq_work *);
++
++struct io_wq_hash {
++	refcount_t refs;
++	unsigned long map;
++	struct wait_queue_head wait;
++};
++
++static inline void io_wq_put_hash(struct io_wq_hash *hash)
++{
++	if (refcount_dec_and_test(&hash->refs))
++		kfree(hash);
++}
++
++struct io_wq_data {
++	struct io_wq_hash *hash;
++	struct task_struct *task;
++	io_wq_work_fn *do_work;
++	free_work_fn *free_work;
++};
++
++struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data);
++void io_wq_exit_start(struct io_wq *wq);
++void io_wq_put_and_exit(struct io_wq *wq);
++
++void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work);
++void io_wq_hash_work(struct io_wq_work *work, void *val);
++
++int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask);
++int io_wq_max_workers(struct io_wq *wq, int *new_count);
++
++static inline bool io_wq_is_hashed(struct io_wq_work *work)
++{
++	return work->flags & IO_WQ_WORK_HASHED;
++}
++
++typedef bool (work_cancel_fn)(struct io_wq_work *, void *);
++
++enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
++					void *data, bool cancel_all);
++
++#if defined(CONFIG_IO_WQ)
++extern void io_wq_worker_sleeping(struct task_struct *);
++extern void io_wq_worker_running(struct task_struct *);
++#else
++static inline void io_wq_worker_sleeping(struct task_struct *tsk)
++{
++}
++static inline void io_wq_worker_running(struct task_struct *tsk)
++{
++}
++#endif
++
++static inline bool io_wq_current_is_worker(void)
++{
++	return in_task() && (current->flags & PF_IO_WORKER) &&
++		current->pf_io_worker;
++}
++#endif
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+new file mode 100644
+index 0000000000000..945faf036ad0f
+--- /dev/null
++++ b/io_uring/io_uring.c
+@@ -0,0 +1,10958 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Shared application/kernel submission and completion ring pairs, for
++ * supporting fast/efficient IO.
++ *
++ * A note on the read/write ordering memory barriers that are matched between
++ * the application and kernel side.
++ *
++ * After the application reads the CQ ring tail, it must use an
++ * appropriate smp_rmb() to pair with the smp_wmb() the kernel uses
++ * before writing the tail (using smp_load_acquire to read the tail will
++ * do). It also needs a smp_mb() before updating CQ head (ordering the
++ * entry load(s) with the head store), pairing with an implicit barrier
++ * through a control-dependency in io_get_cqe (smp_store_release to
++ * store head will do). Failure to do so could lead to reading invalid
++ * CQ entries.
++ *
++ * Likewise, the application must use an appropriate smp_wmb() before
++ * writing the SQ tail (ordering SQ entry stores with the tail store),
++ * which pairs with smp_load_acquire in io_get_sqring (smp_store_release
++ * to store the tail will do). And it needs a barrier ordering the SQ
++ * head load before writing new SQ entries (smp_load_acquire to read
++ * head will do).
++ *
++ * When using the SQ poll thread (IORING_SETUP_SQPOLL), the application
++ * needs to check the SQ flags for IORING_SQ_NEED_WAKEUP *after*
++ * updating the SQ tail; a full memory barrier smp_mb() is needed
++ * between.
++ *
++ * Also see the examples in the liburing library:
++ *
++ *	git://git.kernel.dk/liburing
++ *
++ * io_uring also uses READ/WRITE_ONCE() for _any_ store or load that happens
++ * from data shared between the kernel and application. This is done both
++ * for ordering purposes, but also to ensure that once a value is loaded from
++ * data that the application could potentially modify, it remains stable.
++ *
++ * Copyright (C) 2018-2019 Jens Axboe
++ * Copyright (c) 2018-2019 Christoph Hellwig
++ */
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/errno.h>
++#include <linux/syscalls.h>
++#include <linux/compat.h>
++#include <net/compat.h>
++#include <linux/refcount.h>
++#include <linux/uio.h>
++#include <linux/bits.h>
++
++#include <linux/sched/signal.h>
++#include <linux/fs.h>
++#include <linux/file.h>
++#include <linux/fdtable.h>
++#include <linux/mm.h>
++#include <linux/mman.h>
++#include <linux/percpu.h>
++#include <linux/slab.h>
++#include <linux/blkdev.h>
++#include <linux/bvec.h>
++#include <linux/net.h>
++#include <net/sock.h>
++#include <net/af_unix.h>
++#include <net/scm.h>
++#include <linux/anon_inodes.h>
++#include <linux/sched/mm.h>
++#include <linux/uaccess.h>
++#include <linux/nospec.h>
++#include <linux/sizes.h>
++#include <linux/hugetlb.h>
++#include <linux/highmem.h>
++#include <linux/namei.h>
++#include <linux/fsnotify.h>
++#include <linux/fadvise.h>
++#include <linux/eventpoll.h>
++#include <linux/splice.h>
++#include <linux/task_work.h>
++#include <linux/pagemap.h>
++#include <linux/io_uring.h>
++#include <linux/tracehook.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/io_uring.h>
++
++#include <uapi/linux/io_uring.h>
++
++#include "../fs/internal.h"
++#include "io-wq.h"
++
++#define IORING_MAX_ENTRIES	32768
++#define IORING_MAX_CQ_ENTRIES	(2 * IORING_MAX_ENTRIES)
++#define IORING_SQPOLL_CAP_ENTRIES_VALUE 8
++
++/* only define max */
++#define IORING_MAX_FIXED_FILES	(1U << 15)
++#define IORING_MAX_RESTRICTIONS	(IORING_RESTRICTION_LAST + \
++				 IORING_REGISTER_LAST + IORING_OP_LAST)
++
++#define IO_RSRC_TAG_TABLE_SHIFT	(PAGE_SHIFT - 3)
++#define IO_RSRC_TAG_TABLE_MAX	(1U << IO_RSRC_TAG_TABLE_SHIFT)
++#define IO_RSRC_TAG_TABLE_MASK	(IO_RSRC_TAG_TABLE_MAX - 1)
++
++#define IORING_MAX_REG_BUFFERS	(1U << 14)
++
++#define SQE_VALID_FLAGS	(IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK|	\
++				IOSQE_IO_HARDLINK | IOSQE_ASYNC | \
++				IOSQE_BUFFER_SELECT)
++#define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
++				REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS)
++
++#define IO_TCTX_REFS_CACHE_NR	(1U << 10)
++
++struct io_uring {
++	u32 head ____cacheline_aligned_in_smp;
++	u32 tail ____cacheline_aligned_in_smp;
++};
++
++/*
++ * This data is shared with the application through the mmap at offsets
++ * IORING_OFF_SQ_RING and IORING_OFF_CQ_RING.
++ *
++ * The offsets to the member fields are published through struct
++ * io_sqring_offsets when calling io_uring_setup.
++ */
++struct io_rings {
++	/*
++	 * Head and tail offsets into the ring; the offsets need to be
++	 * masked to get valid indices.
++	 *
++	 * The kernel controls head of the sq ring and the tail of the cq ring,
++	 * and the application controls tail of the sq ring and the head of the
++	 * cq ring.
++	 */
++	struct io_uring		sq, cq;
++	/*
++	 * Bitmasks to apply to head and tail offsets (constant, equals
++	 * ring_entries - 1)
++	 */
++	u32			sq_ring_mask, cq_ring_mask;
++	/* Ring sizes (constant, power of 2) */
++	u32			sq_ring_entries, cq_ring_entries;
++	/*
++	 * Number of invalid entries dropped by the kernel due to
++	 * invalid index stored in array
++	 *
++	 * Written by the kernel, shouldn't be modified by the
++	 * application (i.e. get number of "new events" by comparing to
++	 * cached value).
++	 *
++	 * After a new SQ head value was read by the application this
++	 * counter includes all submissions that were dropped reaching
++	 * the new SQ head (and possibly more).
++	 */
++	u32			sq_dropped;
++	/*
++	 * Runtime SQ flags
++	 *
++	 * Written by the kernel, shouldn't be modified by the
++	 * application.
++	 *
++	 * The application needs a full memory barrier before checking
++	 * for IORING_SQ_NEED_WAKEUP after updating the sq tail.
++	 */
++	u32			sq_flags;
++	/*
++	 * Runtime CQ flags
++	 *
++	 * Written by the application, shouldn't be modified by the
++	 * kernel.
++	 */
++	u32			cq_flags;
++	/*
++	 * Number of completion events lost because the queue was full;
++	 * this should be avoided by the application by making sure
++	 * there are not more requests pending than there is space in
++	 * the completion queue.
++	 *
++	 * Written by the kernel, shouldn't be modified by the
++	 * application (i.e. get number of "new events" by comparing to
++	 * cached value).
++	 *
++	 * As completion events come in out of order this counter is not
++	 * ordered with any other data.
++	 */
++	u32			cq_overflow;
++	/*
++	 * Ring buffer of completion events.
++	 *
++	 * The kernel writes completion events fresh every time they are
++	 * produced, so the application is allowed to modify pending
++	 * entries.
++	 */
++	struct io_uring_cqe	cqes[] ____cacheline_aligned_in_smp;
++};
++
++enum io_uring_cmd_flags {
++	IO_URING_F_NONBLOCK		= 1,
++	IO_URING_F_COMPLETE_DEFER	= 2,
++};
++
++struct io_mapped_ubuf {
++	u64		ubuf;
++	u64		ubuf_end;
++	unsigned int	nr_bvecs;
++	unsigned long	acct_pages;
++	struct bio_vec	bvec[];
++};
++
++struct io_ring_ctx;
++
++struct io_overflow_cqe {
++	struct io_uring_cqe cqe;
++	struct list_head list;
++};
++
++struct io_fixed_file {
++	/* file * with additional FFS_* flags */
++	unsigned long file_ptr;
++};
++
++struct io_rsrc_put {
++	struct list_head list;
++	u64 tag;
++	union {
++		void *rsrc;
++		struct file *file;
++		struct io_mapped_ubuf *buf;
++	};
++};
++
++struct io_file_table {
++	struct io_fixed_file *files;
++};
++
++struct io_rsrc_node {
++	struct percpu_ref		refs;
++	struct list_head		node;
++	struct list_head		rsrc_list;
++	struct io_rsrc_data		*rsrc_data;
++	struct llist_node		llist;
++	bool				done;
++};
++
++typedef void (rsrc_put_fn)(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc);
++
++struct io_rsrc_data {
++	struct io_ring_ctx		*ctx;
++
++	u64				**tags;
++	unsigned int			nr;
++	rsrc_put_fn			*do_put;
++	atomic_t			refs;
++	struct completion		done;
++	bool				quiesce;
++};
++
++struct io_buffer {
++	struct list_head list;
++	__u64 addr;
++	__u32 len;
++	__u16 bid;
++};
++
++struct io_restriction {
++	DECLARE_BITMAP(register_op, IORING_REGISTER_LAST);
++	DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
++	u8 sqe_flags_allowed;
++	u8 sqe_flags_required;
++	bool registered;
++};
++
++enum {
++	IO_SQ_THREAD_SHOULD_STOP = 0,
++	IO_SQ_THREAD_SHOULD_PARK,
++};
++
++struct io_sq_data {
++	refcount_t		refs;
++	atomic_t		park_pending;
++	struct mutex		lock;
++
++	/* ctx's that are using this sqd */
++	struct list_head	ctx_list;
++
++	struct task_struct	*thread;
++	struct wait_queue_head	wait;
++
++	unsigned		sq_thread_idle;
++	int			sq_cpu;
++	pid_t			task_pid;
++	pid_t			task_tgid;
++
++	unsigned long		state;
++	struct completion	exited;
++};
++
++#define IO_COMPL_BATCH			32
++#define IO_REQ_CACHE_SIZE		32
++#define IO_REQ_ALLOC_BATCH		8
++
++struct io_submit_link {
++	struct io_kiocb		*head;
++	struct io_kiocb		*last;
++};
++
++struct io_submit_state {
++	struct blk_plug		plug;
++	struct io_submit_link	link;
++
++	/*
++	 * io_kiocb alloc cache
++	 */
++	void			*reqs[IO_REQ_CACHE_SIZE];
++	unsigned int		free_reqs;
++
++	bool			plug_started;
++
++	/*
++	 * Batch completion logic
++	 */
++	struct io_kiocb		*compl_reqs[IO_COMPL_BATCH];
++	unsigned int		compl_nr;
++	/* inline/task_work completion list, under ->uring_lock */
++	struct list_head	free_list;
++
++	unsigned int		ios_left;
++};
++
++struct io_ring_ctx {
++	/* const or read-mostly hot data */
++	struct {
++		struct percpu_ref	refs;
++
++		struct io_rings		*rings;
++		unsigned int		flags;
++		unsigned int		compat: 1;
++		unsigned int		drain_next: 1;
++		unsigned int		eventfd_async: 1;
++		unsigned int		restricted: 1;
++		unsigned int		off_timeout_used: 1;
++		unsigned int		drain_active: 1;
++	} ____cacheline_aligned_in_smp;
++
++	/* submission data */
++	struct {
++		struct mutex		uring_lock;
++
++		/*
++		 * Ring buffer of indices into array of io_uring_sqe, which is
++		 * mmapped by the application using the IORING_OFF_SQES offset.
++		 *
++		 * This indirection could e.g. be used to assign fixed
++		 * io_uring_sqe entries to operations and only submit them to
++		 * the queue when needed.
++		 *
++		 * The kernel modifies neither the indices array nor the entries
++		 * array.
++		 */
++		u32			*sq_array;
++		struct io_uring_sqe	*sq_sqes;
++		unsigned		cached_sq_head;
++		unsigned		sq_entries;
++		struct list_head	defer_list;
++
++		/*
++		 * Fixed resources fast path, should be accessed only under
++		 * uring_lock, and updated through io_uring_register(2)
++		 */
++		struct io_rsrc_node	*rsrc_node;
++		struct io_file_table	file_table;
++		unsigned		nr_user_files;
++		unsigned		nr_user_bufs;
++		struct io_mapped_ubuf	**user_bufs;
++
++		struct io_submit_state	submit_state;
++		struct list_head	timeout_list;
++		struct list_head	ltimeout_list;
++		struct list_head	cq_overflow_list;
++		struct xarray		io_buffers;
++		struct xarray		personalities;
++		u32			pers_next;
++		unsigned		sq_thread_idle;
++	} ____cacheline_aligned_in_smp;
++
++	/* IRQ completion list, under ->completion_lock */
++	struct list_head	locked_free_list;
++	unsigned int		locked_free_nr;
++
++	const struct cred	*sq_creds;	/* cred used for __io_sq_thread() */
++	struct io_sq_data	*sq_data;	/* if using sq thread polling */
++
++	struct wait_queue_head	sqo_sq_wait;
++	struct list_head	sqd_list;
++
++	unsigned long		check_cq_overflow;
++
++	struct {
++		unsigned		cached_cq_tail;
++		unsigned		cq_entries;
++		struct eventfd_ctx	*cq_ev_fd;
++		struct wait_queue_head	poll_wait;
++		struct wait_queue_head	cq_wait;
++		unsigned		cq_extra;
++		atomic_t		cq_timeouts;
++		unsigned		cq_last_tm_flush;
++	} ____cacheline_aligned_in_smp;
++
++	struct {
++		spinlock_t		completion_lock;
++
++		spinlock_t		timeout_lock;
++
++		/*
++		 * ->iopoll_list is protected by the ctx->uring_lock for
++		 * io_uring instances that don't use IORING_SETUP_SQPOLL.
++		 * For SQPOLL, only the single threaded io_sq_thread() will
++		 * manipulate the list, hence no extra locking is needed there.
++		 */
++		struct list_head	iopoll_list;
++		struct hlist_head	*cancel_hash;
++		unsigned		cancel_hash_bits;
++		bool			poll_multi_queue;
++	} ____cacheline_aligned_in_smp;
++
++	struct io_restriction		restrictions;
++
++	/* slow path rsrc auxilary data, used by update/register */
++	struct {
++		struct io_rsrc_node		*rsrc_backup_node;
++		struct io_mapped_ubuf		*dummy_ubuf;
++		struct io_rsrc_data		*file_data;
++		struct io_rsrc_data		*buf_data;
++
++		struct delayed_work		rsrc_put_work;
++		struct llist_head		rsrc_put_llist;
++		struct list_head		rsrc_ref_list;
++		spinlock_t			rsrc_ref_lock;
++	};
++
++	/* Keep this last, we don't need it for the fast path */
++	struct {
++		#if defined(CONFIG_UNIX)
++			struct socket		*ring_sock;
++		#endif
++		/* hashed buffered write serialization */
++		struct io_wq_hash		*hash_map;
++
++		/* Only used for accounting purposes */
++		struct user_struct		*user;
++		struct mm_struct		*mm_account;
++
++		/* ctx exit and cancelation */
++		struct llist_head		fallback_llist;
++		struct delayed_work		fallback_work;
++		struct work_struct		exit_work;
++		struct list_head		tctx_list;
++		struct completion		ref_comp;
++		u32				iowq_limits[2];
++		bool				iowq_limits_set;
++	};
++};
++
++struct io_uring_task {
++	/* submission side */
++	int			cached_refs;
++	struct xarray		xa;
++	struct wait_queue_head	wait;
++	const struct io_ring_ctx *last;
++	struct io_wq		*io_wq;
++	struct percpu_counter	inflight;
++	atomic_t		inflight_tracked;
++	atomic_t		in_idle;
++
++	spinlock_t		task_lock;
++	struct io_wq_work_list	task_list;
++	struct callback_head	task_work;
++	bool			task_running;
++};
++
++/*
++ * First field must be the file pointer in all the
++ * iocb unions! See also 'struct kiocb' in <linux/fs.h>
++ */
++struct io_poll_iocb {
++	struct file			*file;
++	struct wait_queue_head		*head;
++	__poll_t			events;
++	struct wait_queue_entry		wait;
++};
++
++struct io_poll_update {
++	struct file			*file;
++	u64				old_user_data;
++	u64				new_user_data;
++	__poll_t			events;
++	bool				update_events;
++	bool				update_user_data;
++};
++
++struct io_close {
++	struct file			*file;
++	int				fd;
++	u32				file_slot;
++};
++
++struct io_timeout_data {
++	struct io_kiocb			*req;
++	struct hrtimer			timer;
++	struct timespec64		ts;
++	enum hrtimer_mode		mode;
++	u32				flags;
++};
++
++struct io_accept {
++	struct file			*file;
++	struct sockaddr __user		*addr;
++	int __user			*addr_len;
++	int				flags;
++	u32				file_slot;
++	unsigned long			nofile;
++};
++
++struct io_sync {
++	struct file			*file;
++	loff_t				len;
++	loff_t				off;
++	int				flags;
++	int				mode;
++};
++
++struct io_cancel {
++	struct file			*file;
++	u64				addr;
++};
++
++struct io_timeout {
++	struct file			*file;
++	u32				off;
++	u32				target_seq;
++	struct list_head		list;
++	/* head of the link, used by linked timeouts only */
++	struct io_kiocb			*head;
++	/* for linked completions */
++	struct io_kiocb			*prev;
++};
++
++struct io_timeout_rem {
++	struct file			*file;
++	u64				addr;
++
++	/* timeout update */
++	struct timespec64		ts;
++	u32				flags;
++	bool				ltimeout;
++};
++
++struct io_rw {
++	/* NOTE: kiocb has the file as the first member, so don't do it here */
++	struct kiocb			kiocb;
++	u64				addr;
++	u64				len;
++};
++
++struct io_connect {
++	struct file			*file;
++	struct sockaddr __user		*addr;
++	int				addr_len;
++};
++
++struct io_sr_msg {
++	struct file			*file;
++	union {
++		struct compat_msghdr __user	*umsg_compat;
++		struct user_msghdr __user	*umsg;
++		void __user			*buf;
++	};
++	int				msg_flags;
++	int				bgid;
++	size_t				len;
++	struct io_buffer		*kbuf;
++};
++
++struct io_open {
++	struct file			*file;
++	int				dfd;
++	u32				file_slot;
++	struct filename			*filename;
++	struct open_how			how;
++	unsigned long			nofile;
++};
++
++struct io_rsrc_update {
++	struct file			*file;
++	u64				arg;
++	u32				nr_args;
++	u32				offset;
++};
++
++struct io_fadvise {
++	struct file			*file;
++	u64				offset;
++	u32				len;
++	u32				advice;
++};
++
++struct io_madvise {
++	struct file			*file;
++	u64				addr;
++	u32				len;
++	u32				advice;
++};
++
++struct io_epoll {
++	struct file			*file;
++	int				epfd;
++	int				op;
++	int				fd;
++	struct epoll_event		event;
++};
++
++struct io_splice {
++	struct file			*file_out;
++	loff_t				off_out;
++	loff_t				off_in;
++	u64				len;
++	int				splice_fd_in;
++	unsigned int			flags;
++};
++
++struct io_provide_buf {
++	struct file			*file;
++	__u64				addr;
++	__u32				len;
++	__u32				bgid;
++	__u16				nbufs;
++	__u16				bid;
++};
++
++struct io_statx {
++	struct file			*file;
++	int				dfd;
++	unsigned int			mask;
++	unsigned int			flags;
++	const char __user		*filename;
++	struct statx __user		*buffer;
++};
++
++struct io_shutdown {
++	struct file			*file;
++	int				how;
++};
++
++struct io_rename {
++	struct file			*file;
++	int				old_dfd;
++	int				new_dfd;
++	struct filename			*oldpath;
++	struct filename			*newpath;
++	int				flags;
++};
++
++struct io_unlink {
++	struct file			*file;
++	int				dfd;
++	int				flags;
++	struct filename			*filename;
++};
++
++struct io_mkdir {
++	struct file			*file;
++	int				dfd;
++	umode_t				mode;
++	struct filename			*filename;
++};
++
++struct io_symlink {
++	struct file			*file;
++	int				new_dfd;
++	struct filename			*oldpath;
++	struct filename			*newpath;
++};
++
++struct io_hardlink {
++	struct file			*file;
++	int				old_dfd;
++	int				new_dfd;
++	struct filename			*oldpath;
++	struct filename			*newpath;
++	int				flags;
++};
++
++struct io_completion {
++	struct file			*file;
++	u32				cflags;
++};
++
++struct io_async_connect {
++	struct sockaddr_storage		address;
++};
++
++struct io_async_msghdr {
++	struct iovec			fast_iov[UIO_FASTIOV];
++	/* points to an allocated iov, if NULL we use fast_iov instead */
++	struct iovec			*free_iov;
++	struct sockaddr __user		*uaddr;
++	struct msghdr			msg;
++	struct sockaddr_storage		addr;
++};
++
++struct io_async_rw {
++	struct iovec			fast_iov[UIO_FASTIOV];
++	const struct iovec		*free_iovec;
++	struct iov_iter			iter;
++	struct iov_iter_state		iter_state;
++	size_t				bytes_done;
++	struct wait_page_queue		wpq;
++};
++
++enum {
++	REQ_F_FIXED_FILE_BIT	= IOSQE_FIXED_FILE_BIT,
++	REQ_F_IO_DRAIN_BIT	= IOSQE_IO_DRAIN_BIT,
++	REQ_F_LINK_BIT		= IOSQE_IO_LINK_BIT,
++	REQ_F_HARDLINK_BIT	= IOSQE_IO_HARDLINK_BIT,
++	REQ_F_FORCE_ASYNC_BIT	= IOSQE_ASYNC_BIT,
++	REQ_F_BUFFER_SELECT_BIT	= IOSQE_BUFFER_SELECT_BIT,
++
++	/* first byte is taken by user flags, shift it to not overlap */
++	REQ_F_FAIL_BIT		= 8,
++	REQ_F_INFLIGHT_BIT,
++	REQ_F_CUR_POS_BIT,
++	REQ_F_NOWAIT_BIT,
++	REQ_F_LINK_TIMEOUT_BIT,
++	REQ_F_NEED_CLEANUP_BIT,
++	REQ_F_POLLED_BIT,
++	REQ_F_BUFFER_SELECTED_BIT,
++	REQ_F_COMPLETE_INLINE_BIT,
++	REQ_F_REISSUE_BIT,
++	REQ_F_CREDS_BIT,
++	REQ_F_REFCOUNT_BIT,
++	REQ_F_ARM_LTIMEOUT_BIT,
++	/* keep async read/write and isreg together and in order */
++	REQ_F_NOWAIT_READ_BIT,
++	REQ_F_NOWAIT_WRITE_BIT,
++	REQ_F_ISREG_BIT,
++
++	/* not a real bit, just to check we're not overflowing the space */
++	__REQ_F_LAST_BIT,
++};
++
++enum {
++	/* ctx owns file */
++	REQ_F_FIXED_FILE	= BIT(REQ_F_FIXED_FILE_BIT),
++	/* drain existing IO first */
++	REQ_F_IO_DRAIN		= BIT(REQ_F_IO_DRAIN_BIT),
++	/* linked sqes */
++	REQ_F_LINK		= BIT(REQ_F_LINK_BIT),
++	/* doesn't sever on completion < 0 */
++	REQ_F_HARDLINK		= BIT(REQ_F_HARDLINK_BIT),
++	/* IOSQE_ASYNC */
++	REQ_F_FORCE_ASYNC	= BIT(REQ_F_FORCE_ASYNC_BIT),
++	/* IOSQE_BUFFER_SELECT */
++	REQ_F_BUFFER_SELECT	= BIT(REQ_F_BUFFER_SELECT_BIT),
++
++	/* fail rest of links */
++	REQ_F_FAIL		= BIT(REQ_F_FAIL_BIT),
++	/* on inflight list, should be cancelled and waited on exit reliably */
++	REQ_F_INFLIGHT		= BIT(REQ_F_INFLIGHT_BIT),
++	/* read/write uses file position */
++	REQ_F_CUR_POS		= BIT(REQ_F_CUR_POS_BIT),
++	/* must not punt to workers */
++	REQ_F_NOWAIT		= BIT(REQ_F_NOWAIT_BIT),
++	/* has or had linked timeout */
++	REQ_F_LINK_TIMEOUT	= BIT(REQ_F_LINK_TIMEOUT_BIT),
++	/* needs cleanup */
++	REQ_F_NEED_CLEANUP	= BIT(REQ_F_NEED_CLEANUP_BIT),
++	/* already went through poll handler */
++	REQ_F_POLLED		= BIT(REQ_F_POLLED_BIT),
++	/* buffer already selected */
++	REQ_F_BUFFER_SELECTED	= BIT(REQ_F_BUFFER_SELECTED_BIT),
++	/* completion is deferred through io_comp_state */
++	REQ_F_COMPLETE_INLINE	= BIT(REQ_F_COMPLETE_INLINE_BIT),
++	/* caller should reissue async */
++	REQ_F_REISSUE		= BIT(REQ_F_REISSUE_BIT),
++	/* supports async reads */
++	REQ_F_NOWAIT_READ	= BIT(REQ_F_NOWAIT_READ_BIT),
++	/* supports async writes */
++	REQ_F_NOWAIT_WRITE	= BIT(REQ_F_NOWAIT_WRITE_BIT),
++	/* regular file */
++	REQ_F_ISREG		= BIT(REQ_F_ISREG_BIT),
++	/* has creds assigned */
++	REQ_F_CREDS		= BIT(REQ_F_CREDS_BIT),
++	/* skip refcounting if not set */
++	REQ_F_REFCOUNT		= BIT(REQ_F_REFCOUNT_BIT),
++	/* there is a linked timeout that has to be armed */
++	REQ_F_ARM_LTIMEOUT	= BIT(REQ_F_ARM_LTIMEOUT_BIT),
++};
++
++struct async_poll {
++	struct io_poll_iocb	poll;
++	struct io_poll_iocb	*double_poll;
++};
++
++typedef void (*io_req_tw_func_t)(struct io_kiocb *req, bool *locked);
++
++struct io_task_work {
++	union {
++		struct io_wq_work_node	node;
++		struct llist_node	fallback_node;
++	};
++	io_req_tw_func_t		func;
++};
++
++enum {
++	IORING_RSRC_FILE		= 0,
++	IORING_RSRC_BUFFER		= 1,
++};
++
++/*
++ * NOTE! Each of the iocb union members has the file pointer
++ * as the first entry in their struct definition. So you can
++ * access the file pointer through any of the sub-structs,
++ * or directly as just 'ki_filp' in this struct.
++ */
++struct io_kiocb {
++	union {
++		struct file		*file;
++		struct io_rw		rw;
++		struct io_poll_iocb	poll;
++		struct io_poll_update	poll_update;
++		struct io_accept	accept;
++		struct io_sync		sync;
++		struct io_cancel	cancel;
++		struct io_timeout	timeout;
++		struct io_timeout_rem	timeout_rem;
++		struct io_connect	connect;
++		struct io_sr_msg	sr_msg;
++		struct io_open		open;
++		struct io_close		close;
++		struct io_rsrc_update	rsrc_update;
++		struct io_fadvise	fadvise;
++		struct io_madvise	madvise;
++		struct io_epoll		epoll;
++		struct io_splice	splice;
++		struct io_provide_buf	pbuf;
++		struct io_statx		statx;
++		struct io_shutdown	shutdown;
++		struct io_rename	rename;
++		struct io_unlink	unlink;
++		struct io_mkdir		mkdir;
++		struct io_symlink	symlink;
++		struct io_hardlink	hardlink;
++		/* use only after cleaning per-op data, see io_clean_op() */
++		struct io_completion	compl;
++	};
++
++	/* opcode allocated if it needs to store data for async defer */
++	void				*async_data;
++	u8				opcode;
++	/* polled IO has completed */
++	u8				iopoll_completed;
++
++	u16				buf_index;
++	u32				result;
++
++	struct io_ring_ctx		*ctx;
++	unsigned int			flags;
++	atomic_t			refs;
++	struct task_struct		*task;
++	u64				user_data;
++
++	struct io_kiocb			*link;
++	struct percpu_ref		*fixed_rsrc_refs;
++
++	/* used with ctx->iopoll_list with reads/writes */
++	struct list_head		inflight_entry;
++	struct io_task_work		io_task_work;
++	/* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
++	struct hlist_node		hash_node;
++	struct async_poll		*apoll;
++	struct io_wq_work		work;
++	const struct cred		*creds;
++
++	/* store used ubuf, so we can prevent reloading */
++	struct io_mapped_ubuf		*imu;
++	/* stores selected buf, valid IFF REQ_F_BUFFER_SELECTED is set */
++	struct io_buffer		*kbuf;
++	atomic_t			poll_refs;
++};
++
++struct io_tctx_node {
++	struct list_head	ctx_node;
++	struct task_struct	*task;
++	struct io_ring_ctx	*ctx;
++};
++
++struct io_defer_entry {
++	struct list_head	list;
++	struct io_kiocb		*req;
++	u32			seq;
++};
++
++struct io_op_def {
++	/* needs req->file assigned */
++	unsigned		needs_file : 1;
++	/* hash wq insertion if file is a regular file */
++	unsigned		hash_reg_file : 1;
++	/* unbound wq insertion if file is a non-regular file */
++	unsigned		unbound_nonreg_file : 1;
++	/* opcode is not supported by this kernel */
++	unsigned		not_supported : 1;
++	/* set if opcode supports polled "wait" */
++	unsigned		pollin : 1;
++	unsigned		pollout : 1;
++	/* op supports buffer selection */
++	unsigned		buffer_select : 1;
++	/* do prep async if is going to be punted */
++	unsigned		needs_async_setup : 1;
++	/* should block plug */
++	unsigned		plug : 1;
++	/* size of async data needed, if any */
++	unsigned short		async_size;
++};
++
++static const struct io_op_def io_op_defs[] = {
++	[IORING_OP_NOP] = {},
++	[IORING_OP_READV] = {
++		.needs_file		= 1,
++		.unbound_nonreg_file	= 1,
++		.pollin			= 1,
++		.buffer_select		= 1,
++		.needs_async_setup	= 1,
++		.plug			= 1,
++		.async_size		= sizeof(struct io_async_rw),
++	},
++	[IORING_OP_WRITEV] = {
++		.needs_file		= 1,
++		.hash_reg_file		= 1,
++		.unbound_nonreg_file	= 1,
++		.pollout		= 1,
++		.needs_async_setup	= 1,
++		.plug			= 1,
++		.async_size		= sizeof(struct io_async_rw),
++	},
++	[IORING_OP_FSYNC] = {
++		.needs_file		= 1,
++	},
++	[IORING_OP_READ_FIXED] = {
++		.needs_file		= 1,
++		.unbound_nonreg_file	= 1,
++		.pollin			= 1,
++		.plug			= 1,
++		.async_size		= sizeof(struct io_async_rw),
++	},
++	[IORING_OP_WRITE_FIXED] = {
++		.needs_file		= 1,
++		.hash_reg_file		= 1,
++		.unbound_nonreg_file	= 1,
++		.pollout		= 1,
++		.plug			= 1,
++		.async_size		= sizeof(struct io_async_rw),
++	},
++	[IORING_OP_POLL_ADD] = {
++		.needs_file		= 1,
++		.unbound_nonreg_file	= 1,
++	},
++	[IORING_OP_POLL_REMOVE] = {},
++	[IORING_OP_SYNC_FILE_RANGE] = {
++		.needs_file		= 1,
++	},
++	[IORING_OP_SENDMSG] = {
++		.needs_file		= 1,
++		.unbound_nonreg_file	= 1,
++		.pollout		= 1,
++		.needs_async_setup	= 1,
++		.async_size		= sizeof(struct io_async_msghdr),
++	},
++	[IORING_OP_RECVMSG] = {
++		.needs_file		= 1,
++		.unbound_nonreg_file	= 1,
++		.pollin			= 1,
++		.buffer_select		= 1,
++		.needs_async_setup	= 1,
++		.async_size		= sizeof(struct io_async_msghdr),
++	},
++	[IORING_OP_TIMEOUT] = {
++		.async_size		= sizeof(struct io_timeout_data),
++	},
++	[IORING_OP_TIMEOUT_REMOVE] = {
++		/* used by timeout updates' prep() */
++	},
++	[IORING_OP_ACCEPT] = {
++		.needs_file		= 1,
++		.unbound_nonreg_file	= 1,
++		.pollin			= 1,
++	},
++	[IORING_OP_ASYNC_CANCEL] = {},
++	[IORING_OP_LINK_TIMEOUT] = {
++		.async_size		= sizeof(struct io_timeout_data),
++	},
++	[IORING_OP_CONNECT] = {
++		.needs_file		= 1,
++		.unbound_nonreg_file	= 1,
++		.pollout		= 1,
++		.needs_async_setup	= 1,
++		.async_size		= sizeof(struct io_async_connect),
++	},
++	[IORING_OP_FALLOCATE] = {
++		.needs_file		= 1,
++	},
++	[IORING_OP_OPENAT] = {},
++	[IORING_OP_CLOSE] = {},
++	[IORING_OP_FILES_UPDATE] = {},
++	[IORING_OP_STATX] = {},
++	[IORING_OP_READ] = {
++		.needs_file		= 1,
++		.unbound_nonreg_file	= 1,
++		.pollin			= 1,
++		.buffer_select		= 1,
++		.plug			= 1,
++		.async_size		= sizeof(struct io_async_rw),
++	},
++	[IORING_OP_WRITE] = {
++		.needs_file		= 1,
++		.hash_reg_file		= 1,
++		.unbound_nonreg_file	= 1,
++		.pollout		= 1,
++		.plug			= 1,
++		.async_size		= sizeof(struct io_async_rw),
++	},
++	[IORING_OP_FADVISE] = {
++		.needs_file		= 1,
++	},
++	[IORING_OP_MADVISE] = {},
++	[IORING_OP_SEND] = {
++		.needs_file		= 1,
++		.unbound_nonreg_file	= 1,
++		.pollout		= 1,
++	},
++	[IORING_OP_RECV] = {
++		.needs_file		= 1,
++		.unbound_nonreg_file	= 1,
++		.pollin			= 1,
++		.buffer_select		= 1,
++	},
++	[IORING_OP_OPENAT2] = {
++	},
++	[IORING_OP_EPOLL_CTL] = {
++		.unbound_nonreg_file	= 1,
++	},
++	[IORING_OP_SPLICE] = {
++		.needs_file		= 1,
++		.hash_reg_file		= 1,
++		.unbound_nonreg_file	= 1,
++	},
++	[IORING_OP_PROVIDE_BUFFERS] = {},
++	[IORING_OP_REMOVE_BUFFERS] = {},
++	[IORING_OP_TEE] = {
++		.needs_file		= 1,
++		.hash_reg_file		= 1,
++		.unbound_nonreg_file	= 1,
++	},
++	[IORING_OP_SHUTDOWN] = {
++		.needs_file		= 1,
++	},
++	[IORING_OP_RENAMEAT] = {},
++	[IORING_OP_UNLINKAT] = {},
++};
++
++/* requests with any of those set should undergo io_disarm_next() */
++#define IO_DISARM_MASK (REQ_F_ARM_LTIMEOUT | REQ_F_LINK_TIMEOUT | REQ_F_FAIL)
++
++static bool io_disarm_next(struct io_kiocb *req);
++static void io_uring_del_tctx_node(unsigned long index);
++static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
++					 struct task_struct *task,
++					 bool cancel_all);
++static void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd);
++
++static void io_fill_cqe_req(struct io_kiocb *req, s32 res, u32 cflags);
++
++static void io_put_req(struct io_kiocb *req);
++static void io_put_req_deferred(struct io_kiocb *req);
++static void io_dismantle_req(struct io_kiocb *req);
++static void io_queue_linked_timeout(struct io_kiocb *req);
++static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
++				     struct io_uring_rsrc_update2 *up,
++				     unsigned nr_args);
++static void io_clean_op(struct io_kiocb *req);
++static struct file *io_file_get(struct io_ring_ctx *ctx,
++				struct io_kiocb *req, int fd, bool fixed);
++static void __io_queue_sqe(struct io_kiocb *req);
++static void io_rsrc_put_work(struct work_struct *work);
++
++static void io_req_task_queue(struct io_kiocb *req);
++static void io_submit_flush_completions(struct io_ring_ctx *ctx);
++static int io_req_prep_async(struct io_kiocb *req);
++
++static int io_install_fixed_file(struct io_kiocb *req, struct file *file,
++				 unsigned int issue_flags, u32 slot_index);
++static int io_close_fixed(struct io_kiocb *req, unsigned int issue_flags);
++
++static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer);
++
++static struct kmem_cache *req_cachep;
++
++static const struct file_operations io_uring_fops;
++
++struct sock *io_uring_get_socket(struct file *file)
++{
++#if defined(CONFIG_UNIX)
++	if (file->f_op == &io_uring_fops) {
++		struct io_ring_ctx *ctx = file->private_data;
++
++		return ctx->ring_sock->sk;
++	}
++#endif
++	return NULL;
++}
++EXPORT_SYMBOL(io_uring_get_socket);
++
++static inline void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
++{
++	if (!*locked) {
++		mutex_lock(&ctx->uring_lock);
++		*locked = true;
++	}
++}
++
++#define io_for_each_link(pos, head) \
++	for (pos = (head); pos; pos = pos->link)
++
++/*
++ * Shamelessly stolen from the mm implementation of page reference checking,
++ * see commit f958d7b528b1 for details.
++ */
++#define req_ref_zero_or_close_to_overflow(req)	\
++	((unsigned int) atomic_read(&(req->refs)) + 127u <= 127u)
++
++static inline bool req_ref_inc_not_zero(struct io_kiocb *req)
++{
++	WARN_ON_ONCE(!(req->flags & REQ_F_REFCOUNT));
++	return atomic_inc_not_zero(&req->refs);
++}
++
++static inline bool req_ref_put_and_test(struct io_kiocb *req)
++{
++	if (likely(!(req->flags & REQ_F_REFCOUNT)))
++		return true;
++
++	WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
++	return atomic_dec_and_test(&req->refs);
++}
++
++static inline void req_ref_get(struct io_kiocb *req)
++{
++	WARN_ON_ONCE(!(req->flags & REQ_F_REFCOUNT));
++	WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
++	atomic_inc(&req->refs);
++}
++
++static inline void __io_req_set_refcount(struct io_kiocb *req, int nr)
++{
++	if (!(req->flags & REQ_F_REFCOUNT)) {
++		req->flags |= REQ_F_REFCOUNT;
++		atomic_set(&req->refs, nr);
++	}
++}
++
++static inline void io_req_set_refcount(struct io_kiocb *req)
++{
++	__io_req_set_refcount(req, 1);
++}
++
++static inline void io_req_set_rsrc_node(struct io_kiocb *req)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++
++	if (!req->fixed_rsrc_refs) {
++		req->fixed_rsrc_refs = &ctx->rsrc_node->refs;
++		percpu_ref_get(req->fixed_rsrc_refs);
++	}
++}
++
++static void io_refs_resurrect(struct percpu_ref *ref, struct completion *compl)
++{
++	bool got = percpu_ref_tryget(ref);
++
++	/* already at zero, wait for ->release() */
++	if (!got)
++		wait_for_completion(compl);
++	percpu_ref_resurrect(ref);
++	if (got)
++		percpu_ref_put(ref);
++}
++
++static bool io_match_task(struct io_kiocb *head, struct task_struct *task,
++			  bool cancel_all)
++	__must_hold(&req->ctx->timeout_lock)
++{
++	struct io_kiocb *req;
++
++	if (task && head->task != task)
++		return false;
++	if (cancel_all)
++		return true;
++
++	io_for_each_link(req, head) {
++		if (req->flags & REQ_F_INFLIGHT)
++			return true;
++	}
++	return false;
++}
++
++static bool io_match_linked(struct io_kiocb *head)
++{
++	struct io_kiocb *req;
++
++	io_for_each_link(req, head) {
++		if (req->flags & REQ_F_INFLIGHT)
++			return true;
++	}
++	return false;
++}
++
++/*
++ * As io_match_task() but protected against racing with linked timeouts.
++ * User must not hold timeout_lock.
++ */
++static bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
++			       bool cancel_all)
++{
++	bool matched;
++
++	if (task && head->task != task)
++		return false;
++	if (cancel_all)
++		return true;
++
++	if (head->flags & REQ_F_LINK_TIMEOUT) {
++		struct io_ring_ctx *ctx = head->ctx;
++
++		/* protect against races with linked timeouts */
++		spin_lock_irq(&ctx->timeout_lock);
++		matched = io_match_linked(head);
++		spin_unlock_irq(&ctx->timeout_lock);
++	} else {
++		matched = io_match_linked(head);
++	}
++	return matched;
++}
++
++static inline void req_set_fail(struct io_kiocb *req)
++{
++	req->flags |= REQ_F_FAIL;
++}
++
++static inline void req_fail_link_node(struct io_kiocb *req, int res)
++{
++	req_set_fail(req);
++	req->result = res;
++}
++
++static void io_ring_ctx_ref_free(struct percpu_ref *ref)
++{
++	struct io_ring_ctx *ctx = container_of(ref, struct io_ring_ctx, refs);
++
++	complete(&ctx->ref_comp);
++}
++
++static inline bool io_is_timeout_noseq(struct io_kiocb *req)
++{
++	return !req->timeout.off;
++}
++
++static void io_fallback_req_func(struct work_struct *work)
++{
++	struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
++						fallback_work.work);
++	struct llist_node *node = llist_del_all(&ctx->fallback_llist);
++	struct io_kiocb *req, *tmp;
++	bool locked = false;
++
++	percpu_ref_get(&ctx->refs);
++	llist_for_each_entry_safe(req, tmp, node, io_task_work.fallback_node)
++		req->io_task_work.func(req, &locked);
++
++	if (locked) {
++		if (ctx->submit_state.compl_nr)
++			io_submit_flush_completions(ctx);
++		mutex_unlock(&ctx->uring_lock);
++	}
++	percpu_ref_put(&ctx->refs);
++
++}
++
++static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
++{
++	struct io_ring_ctx *ctx;
++	int hash_bits;
++
++	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
++	if (!ctx)
++		return NULL;
++
++	/*
++	 * Use 5 bits less than the max cq entries, that should give us around
++	 * 32 entries per hash list if totally full and uniformly spread.
++	 */
++	hash_bits = ilog2(p->cq_entries);
++	hash_bits -= 5;
++	if (hash_bits <= 0)
++		hash_bits = 1;
++	ctx->cancel_hash_bits = hash_bits;
++	ctx->cancel_hash = kmalloc((1U << hash_bits) * sizeof(struct hlist_head),
++					GFP_KERNEL);
++	if (!ctx->cancel_hash)
++		goto err;
++	__hash_init(ctx->cancel_hash, 1U << hash_bits);
++
++	ctx->dummy_ubuf = kzalloc(sizeof(*ctx->dummy_ubuf), GFP_KERNEL);
++	if (!ctx->dummy_ubuf)
++		goto err;
++	/* set invalid range, so io_import_fixed() fails meeting it */
++	ctx->dummy_ubuf->ubuf = -1UL;
++
++	if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
++			    PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
++		goto err;
++
++	ctx->flags = p->flags;
++	init_waitqueue_head(&ctx->sqo_sq_wait);
++	INIT_LIST_HEAD(&ctx->sqd_list);
++	init_waitqueue_head(&ctx->poll_wait);
++	INIT_LIST_HEAD(&ctx->cq_overflow_list);
++	init_completion(&ctx->ref_comp);
++	xa_init_flags(&ctx->io_buffers, XA_FLAGS_ALLOC1);
++	xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
++	mutex_init(&ctx->uring_lock);
++	init_waitqueue_head(&ctx->cq_wait);
++	spin_lock_init(&ctx->completion_lock);
++	spin_lock_init(&ctx->timeout_lock);
++	INIT_LIST_HEAD(&ctx->iopoll_list);
++	INIT_LIST_HEAD(&ctx->defer_list);
++	INIT_LIST_HEAD(&ctx->timeout_list);
++	INIT_LIST_HEAD(&ctx->ltimeout_list);
++	spin_lock_init(&ctx->rsrc_ref_lock);
++	INIT_LIST_HEAD(&ctx->rsrc_ref_list);
++	INIT_DELAYED_WORK(&ctx->rsrc_put_work, io_rsrc_put_work);
++	init_llist_head(&ctx->rsrc_put_llist);
++	INIT_LIST_HEAD(&ctx->tctx_list);
++	INIT_LIST_HEAD(&ctx->submit_state.free_list);
++	INIT_LIST_HEAD(&ctx->locked_free_list);
++	INIT_DELAYED_WORK(&ctx->fallback_work, io_fallback_req_func);
++	return ctx;
++err:
++	kfree(ctx->dummy_ubuf);
++	kfree(ctx->cancel_hash);
++	kfree(ctx);
++	return NULL;
++}
++
++static void io_account_cq_overflow(struct io_ring_ctx *ctx)
++{
++	struct io_rings *r = ctx->rings;
++
++	WRITE_ONCE(r->cq_overflow, READ_ONCE(r->cq_overflow) + 1);
++	ctx->cq_extra--;
++}
++
++static bool req_need_defer(struct io_kiocb *req, u32 seq)
++{
++	if (unlikely(req->flags & REQ_F_IO_DRAIN)) {
++		struct io_ring_ctx *ctx = req->ctx;
++
++		return seq + READ_ONCE(ctx->cq_extra) != ctx->cached_cq_tail;
++	}
++
++	return false;
++}
++
++#define FFS_ASYNC_READ		0x1UL
++#define FFS_ASYNC_WRITE		0x2UL
++#ifdef CONFIG_64BIT
++#define FFS_ISREG		0x4UL
++#else
++#define FFS_ISREG		0x0UL
++#endif
++#define FFS_MASK		~(FFS_ASYNC_READ|FFS_ASYNC_WRITE|FFS_ISREG)
++
++static inline bool io_req_ffs_set(struct io_kiocb *req)
++{
++	return IS_ENABLED(CONFIG_64BIT) && (req->flags & REQ_F_FIXED_FILE);
++}
++
++static void io_req_track_inflight(struct io_kiocb *req)
++{
++	if (!(req->flags & REQ_F_INFLIGHT)) {
++		req->flags |= REQ_F_INFLIGHT;
++		atomic_inc(&req->task->io_uring->inflight_tracked);
++	}
++}
++
++static struct io_kiocb *__io_prep_linked_timeout(struct io_kiocb *req)
++{
++	if (WARN_ON_ONCE(!req->link))
++		return NULL;
++
++	req->flags &= ~REQ_F_ARM_LTIMEOUT;
++	req->flags |= REQ_F_LINK_TIMEOUT;
++
++	/* linked timeouts should have two refs once prep'ed */
++	io_req_set_refcount(req);
++	__io_req_set_refcount(req->link, 2);
++	return req->link;
++}
++
++static inline struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
++{
++	if (likely(!(req->flags & REQ_F_ARM_LTIMEOUT)))
++		return NULL;
++	return __io_prep_linked_timeout(req);
++}
++
++static void io_prep_async_work(struct io_kiocb *req)
++{
++	const struct io_op_def *def = &io_op_defs[req->opcode];
++	struct io_ring_ctx *ctx = req->ctx;
++
++	if (!(req->flags & REQ_F_CREDS)) {
++		req->flags |= REQ_F_CREDS;
++		req->creds = get_current_cred();
++	}
++
++	req->work.list.next = NULL;
++	req->work.flags = 0;
++	if (req->flags & REQ_F_FORCE_ASYNC)
++		req->work.flags |= IO_WQ_WORK_CONCURRENT;
++
++	if (req->flags & REQ_F_ISREG) {
++		if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
++			io_wq_hash_work(&req->work, file_inode(req->file));
++	} else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
++		if (def->unbound_nonreg_file)
++			req->work.flags |= IO_WQ_WORK_UNBOUND;
++	}
++}
++
++static void io_prep_async_link(struct io_kiocb *req)
++{
++	struct io_kiocb *cur;
++
++	if (req->flags & REQ_F_LINK_TIMEOUT) {
++		struct io_ring_ctx *ctx = req->ctx;
++
++		spin_lock_irq(&ctx->timeout_lock);
++		io_for_each_link(cur, req)
++			io_prep_async_work(cur);
++		spin_unlock_irq(&ctx->timeout_lock);
++	} else {
++		io_for_each_link(cur, req)
++			io_prep_async_work(cur);
++	}
++}
++
++static void io_queue_async_work(struct io_kiocb *req, bool *locked)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	struct io_kiocb *link = io_prep_linked_timeout(req);
++	struct io_uring_task *tctx = req->task->io_uring;
++
++	/* must not take the lock, NULL it as a precaution */
++	locked = NULL;
++
++	BUG_ON(!tctx);
++	BUG_ON(!tctx->io_wq);
++
++	/* init ->work of the whole link before punting */
++	io_prep_async_link(req);
++
++	/*
++	 * Not expected to happen, but if we do have a bug where this _can_
++	 * happen, catch it here and ensure the request is marked as
++	 * canceled. That will make io-wq go through the usual work cancel
++	 * procedure rather than attempt to run this request (or create a new
++	 * worker for it).
++	 */
++	if (WARN_ON_ONCE(!same_thread_group(req->task, current)))
++		req->work.flags |= IO_WQ_WORK_CANCEL;
++
++	trace_io_uring_queue_async_work(ctx, io_wq_is_hashed(&req->work), req,
++					&req->work, req->flags);
++	io_wq_enqueue(tctx->io_wq, &req->work);
++	if (link)
++		io_queue_linked_timeout(link);
++}
++
++static void io_kill_timeout(struct io_kiocb *req, int status)
++	__must_hold(&req->ctx->completion_lock)
++	__must_hold(&req->ctx->timeout_lock)
++{
++	struct io_timeout_data *io = req->async_data;
++
++	if (hrtimer_try_to_cancel(&io->timer) != -1) {
++		if (status)
++			req_set_fail(req);
++		atomic_set(&req->ctx->cq_timeouts,
++			atomic_read(&req->ctx->cq_timeouts) + 1);
++		list_del_init(&req->timeout.list);
++		io_fill_cqe_req(req, status, 0);
++		io_put_req_deferred(req);
++	}
++}
++
++static void io_queue_deferred(struct io_ring_ctx *ctx)
++{
++	while (!list_empty(&ctx->defer_list)) {
++		struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
++						struct io_defer_entry, list);
++
++		if (req_need_defer(de->req, de->seq))
++			break;
++		list_del_init(&de->list);
++		io_req_task_queue(de->req);
++		kfree(de);
++	}
++}
++
++static void io_flush_timeouts(struct io_ring_ctx *ctx)
++	__must_hold(&ctx->completion_lock)
++{
++	u32 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
++	struct io_kiocb *req, *tmp;
++
++	spin_lock_irq(&ctx->timeout_lock);
++	list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
++		u32 events_needed, events_got;
++
++		if (io_is_timeout_noseq(req))
++			break;
++
++		/*
++		 * Since seq can easily wrap around over time, subtract
++		 * the last seq at which timeouts were flushed before comparing.
++		 * Assuming not more than 2^31-1 events have happened since,
++		 * these subtractions won't have wrapped, so we can check if
++		 * target is in [last_seq, current_seq] by comparing the two.
++		 */
++		events_needed = req->timeout.target_seq - ctx->cq_last_tm_flush;
++		events_got = seq - ctx->cq_last_tm_flush;
++		if (events_got < events_needed)
++			break;
++
++		io_kill_timeout(req, 0);
++	}
++	ctx->cq_last_tm_flush = seq;
++	spin_unlock_irq(&ctx->timeout_lock);
++}
++
++static void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
++{
++	if (ctx->off_timeout_used)
++		io_flush_timeouts(ctx);
++	if (ctx->drain_active)
++		io_queue_deferred(ctx);
++}
++
++static inline void io_commit_cqring(struct io_ring_ctx *ctx)
++{
++	if (unlikely(ctx->off_timeout_used || ctx->drain_active))
++		__io_commit_cqring_flush(ctx);
++	/* order cqe stores with ring update */
++	smp_store_release(&ctx->rings->cq.tail, ctx->cached_cq_tail);
++}
++
++static inline bool io_sqring_full(struct io_ring_ctx *ctx)
++{
++	struct io_rings *r = ctx->rings;
++
++	return READ_ONCE(r->sq.tail) - ctx->cached_sq_head == ctx->sq_entries;
++}
++
++static inline unsigned int __io_cqring_events(struct io_ring_ctx *ctx)
++{
++	return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
++}
++
++static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
++{
++	struct io_rings *rings = ctx->rings;
++	unsigned tail, mask = ctx->cq_entries - 1;
++
++	/*
++	 * writes to the cq entry need to come after reading head; the
++	 * control dependency is enough as we're using WRITE_ONCE to
++	 * fill the cq entry
++	 */
++	if (__io_cqring_events(ctx) == ctx->cq_entries)
++		return NULL;
++
++	tail = ctx->cached_cq_tail++;
++	return &rings->cqes[tail & mask];
++}
++
++static inline bool io_should_trigger_evfd(struct io_ring_ctx *ctx)
++{
++	if (likely(!ctx->cq_ev_fd))
++		return false;
++	if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
++		return false;
++	return !ctx->eventfd_async || io_wq_current_is_worker();
++}
++
++/*
++ * This should only get called when at least one event has been posted.
++ * Some applications rely on the eventfd notification count only changing
++ * IFF a new CQE has been added to the CQ ring. There's no depedency on
++ * 1:1 relationship between how many times this function is called (and
++ * hence the eventfd count) and number of CQEs posted to the CQ ring.
++ */
++static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
++{
++	/*
++	 * wake_up_all() may seem excessive, but io_wake_function() and
++	 * io_should_wake() handle the termination of the loop and only
++	 * wake as many waiters as we need to.
++	 */
++	if (wq_has_sleeper(&ctx->cq_wait))
++		__wake_up(&ctx->cq_wait, TASK_NORMAL, 0,
++				poll_to_key(EPOLL_URING_WAKE | EPOLLIN));
++	if (ctx->sq_data && waitqueue_active(&ctx->sq_data->wait))
++		wake_up(&ctx->sq_data->wait);
++	if (io_should_trigger_evfd(ctx))
++		eventfd_signal_mask(ctx->cq_ev_fd, 1, EPOLL_URING_WAKE);
++	if (waitqueue_active(&ctx->poll_wait))
++		__wake_up(&ctx->poll_wait, TASK_INTERRUPTIBLE, 0,
++				poll_to_key(EPOLL_URING_WAKE | EPOLLIN));
++}
++
++static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
++{
++	/* see waitqueue_active() comment */
++	smp_mb();
++
++	if (ctx->flags & IORING_SETUP_SQPOLL) {
++		if (waitqueue_active(&ctx->cq_wait))
++			__wake_up(&ctx->cq_wait, TASK_NORMAL, 0,
++				  poll_to_key(EPOLL_URING_WAKE | EPOLLIN));
++	}
++	if (io_should_trigger_evfd(ctx))
++		eventfd_signal_mask(ctx->cq_ev_fd, 1, EPOLL_URING_WAKE);
++	if (waitqueue_active(&ctx->poll_wait))
++		__wake_up(&ctx->poll_wait, TASK_INTERRUPTIBLE, 0,
++				poll_to_key(EPOLL_URING_WAKE | EPOLLIN));
++}
++
++/* Returns true if there are no backlogged entries after the flush */
++static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
++{
++	bool all_flushed, posted;
++
++	if (!force && __io_cqring_events(ctx) == ctx->cq_entries)
++		return false;
++
++	posted = false;
++	spin_lock(&ctx->completion_lock);
++	while (!list_empty(&ctx->cq_overflow_list)) {
++		struct io_uring_cqe *cqe = io_get_cqe(ctx);
++		struct io_overflow_cqe *ocqe;
++
++		if (!cqe && !force)
++			break;
++		ocqe = list_first_entry(&ctx->cq_overflow_list,
++					struct io_overflow_cqe, list);
++		if (cqe)
++			memcpy(cqe, &ocqe->cqe, sizeof(*cqe));
++		else
++			io_account_cq_overflow(ctx);
++
++		posted = true;
++		list_del(&ocqe->list);
++		kfree(ocqe);
++	}
++
++	all_flushed = list_empty(&ctx->cq_overflow_list);
++	if (all_flushed) {
++		clear_bit(0, &ctx->check_cq_overflow);
++		WRITE_ONCE(ctx->rings->sq_flags,
++			   ctx->rings->sq_flags & ~IORING_SQ_CQ_OVERFLOW);
++	}
++
++	if (posted)
++		io_commit_cqring(ctx);
++	spin_unlock(&ctx->completion_lock);
++	if (posted)
++		io_cqring_ev_posted(ctx);
++	return all_flushed;
++}
++
++static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx)
++{
++	bool ret = true;
++
++	if (test_bit(0, &ctx->check_cq_overflow)) {
++		/* iopoll syncs against uring_lock, not completion_lock */
++		if (ctx->flags & IORING_SETUP_IOPOLL)
++			mutex_lock(&ctx->uring_lock);
++		ret = __io_cqring_overflow_flush(ctx, false);
++		if (ctx->flags & IORING_SETUP_IOPOLL)
++			mutex_unlock(&ctx->uring_lock);
++	}
++
++	return ret;
++}
++
++/* must to be called somewhat shortly after putting a request */
++static inline void io_put_task(struct task_struct *task, int nr)
++{
++	struct io_uring_task *tctx = task->io_uring;
++
++	if (likely(task == current)) {
++		tctx->cached_refs += nr;
++	} else {
++		percpu_counter_sub(&tctx->inflight, nr);
++		if (unlikely(atomic_read(&tctx->in_idle)))
++			wake_up(&tctx->wait);
++		put_task_struct_many(task, nr);
++	}
++}
++
++static void io_task_refs_refill(struct io_uring_task *tctx)
++{
++	unsigned int refill = -tctx->cached_refs + IO_TCTX_REFS_CACHE_NR;
++
++	percpu_counter_add(&tctx->inflight, refill);
++	refcount_add(refill, &current->usage);
++	tctx->cached_refs += refill;
++}
++
++static inline void io_get_task_refs(int nr)
++{
++	struct io_uring_task *tctx = current->io_uring;
++
++	tctx->cached_refs -= nr;
++	if (unlikely(tctx->cached_refs < 0))
++		io_task_refs_refill(tctx);
++}
++
++static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
++{
++	struct io_uring_task *tctx = task->io_uring;
++	unsigned int refs = tctx->cached_refs;
++
++	if (refs) {
++		tctx->cached_refs = 0;
++		percpu_counter_sub(&tctx->inflight, refs);
++		put_task_struct_many(task, refs);
++	}
++}
++
++static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
++				     s32 res, u32 cflags)
++{
++	struct io_overflow_cqe *ocqe;
++
++	ocqe = kmalloc(sizeof(*ocqe), GFP_ATOMIC | __GFP_ACCOUNT);
++	if (!ocqe) {
++		/*
++		 * If we're in ring overflow flush mode, or in task cancel mode,
++		 * or cannot allocate an overflow entry, then we need to drop it
++		 * on the floor.
++		 */
++		io_account_cq_overflow(ctx);
++		return false;
++	}
++	if (list_empty(&ctx->cq_overflow_list)) {
++		set_bit(0, &ctx->check_cq_overflow);
++		WRITE_ONCE(ctx->rings->sq_flags,
++			   ctx->rings->sq_flags | IORING_SQ_CQ_OVERFLOW);
++
++	}
++	ocqe->cqe.user_data = user_data;
++	ocqe->cqe.res = res;
++	ocqe->cqe.flags = cflags;
++	list_add_tail(&ocqe->list, &ctx->cq_overflow_list);
++	return true;
++}
++
++static inline bool __io_fill_cqe(struct io_ring_ctx *ctx, u64 user_data,
++				 s32 res, u32 cflags)
++{
++	struct io_uring_cqe *cqe;
++
++	trace_io_uring_complete(ctx, user_data, res, cflags);
++
++	/*
++	 * If we can't get a cq entry, userspace overflowed the
++	 * submission (by quite a lot). Increment the overflow count in
++	 * the ring.
++	 */
++	cqe = io_get_cqe(ctx);
++	if (likely(cqe)) {
++		WRITE_ONCE(cqe->user_data, user_data);
++		WRITE_ONCE(cqe->res, res);
++		WRITE_ONCE(cqe->flags, cflags);
++		return true;
++	}
++	return io_cqring_event_overflow(ctx, user_data, res, cflags);
++}
++
++static noinline void io_fill_cqe_req(struct io_kiocb *req, s32 res, u32 cflags)
++{
++	__io_fill_cqe(req->ctx, req->user_data, res, cflags);
++}
++
++static noinline bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data,
++				     s32 res, u32 cflags)
++{
++	ctx->cq_extra++;
++	return __io_fill_cqe(ctx, user_data, res, cflags);
++}
++
++static void io_req_complete_post(struct io_kiocb *req, s32 res,
++				 u32 cflags)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++
++	spin_lock(&ctx->completion_lock);
++	__io_fill_cqe(ctx, req->user_data, res, cflags);
++	/*
++	 * If we're the last reference to this request, add to our locked
++	 * free_list cache.
++	 */
++	if (req_ref_put_and_test(req)) {
++		if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) {
++			if (req->flags & IO_DISARM_MASK)
++				io_disarm_next(req);
++			if (req->link) {
++				io_req_task_queue(req->link);
++				req->link = NULL;
++			}
++		}
++		io_dismantle_req(req);
++		io_put_task(req->task, 1);
++		list_add(&req->inflight_entry, &ctx->locked_free_list);
++		ctx->locked_free_nr++;
++	} else {
++		if (!percpu_ref_tryget(&ctx->refs))
++			req = NULL;
++	}
++	io_commit_cqring(ctx);
++	spin_unlock(&ctx->completion_lock);
++
++	if (req) {
++		io_cqring_ev_posted(ctx);
++		percpu_ref_put(&ctx->refs);
++	}
++}
++
++static inline bool io_req_needs_clean(struct io_kiocb *req)
++{
++	return req->flags & IO_REQ_CLEAN_FLAGS;
++}
++
++static inline void io_req_complete_state(struct io_kiocb *req, s32 res,
++					 u32 cflags)
++{
++	if (io_req_needs_clean(req))
++		io_clean_op(req);
++	req->result = res;
++	req->compl.cflags = cflags;
++	req->flags |= REQ_F_COMPLETE_INLINE;
++}
++
++static inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags,
++				     s32 res, u32 cflags)
++{
++	if (issue_flags & IO_URING_F_COMPLETE_DEFER)
++		io_req_complete_state(req, res, cflags);
++	else
++		io_req_complete_post(req, res, cflags);
++}
++
++static inline void io_req_complete(struct io_kiocb *req, s32 res)
++{
++	__io_req_complete(req, 0, res, 0);
++}
++
++static void io_req_complete_failed(struct io_kiocb *req, s32 res)
++{
++	req_set_fail(req);
++	io_req_complete_post(req, res, 0);
++}
++
++static void io_req_complete_fail_submit(struct io_kiocb *req)
++{
++	/*
++	 * We don't submit, fail them all, for that replace hardlinks with
++	 * normal links. Extra REQ_F_LINK is tolerated.
++	 */
++	req->flags &= ~REQ_F_HARDLINK;
++	req->flags |= REQ_F_LINK;
++	io_req_complete_failed(req, req->result);
++}
++
++/*
++ * Don't initialise the fields below on every allocation, but do that in
++ * advance and keep them valid across allocations.
++ */
++static void io_preinit_req(struct io_kiocb *req, struct io_ring_ctx *ctx)
++{
++	req->ctx = ctx;
++	req->link = NULL;
++	req->async_data = NULL;
++	/* not necessary, but safer to zero */
++	req->result = 0;
++}
++
++static void io_flush_cached_locked_reqs(struct io_ring_ctx *ctx,
++					struct io_submit_state *state)
++{
++	spin_lock(&ctx->completion_lock);
++	list_splice_init(&ctx->locked_free_list, &state->free_list);
++	ctx->locked_free_nr = 0;
++	spin_unlock(&ctx->completion_lock);
++}
++
++/* Returns true IFF there are requests in the cache */
++static bool io_flush_cached_reqs(struct io_ring_ctx *ctx)
++{
++	struct io_submit_state *state = &ctx->submit_state;
++	int nr;
++
++	/*
++	 * If we have more than a batch's worth of requests in our IRQ side
++	 * locked cache, grab the lock and move them over to our submission
++	 * side cache.
++	 */
++	if (READ_ONCE(ctx->locked_free_nr) > IO_COMPL_BATCH)
++		io_flush_cached_locked_reqs(ctx, state);
++
++	nr = state->free_reqs;
++	while (!list_empty(&state->free_list)) {
++		struct io_kiocb *req = list_first_entry(&state->free_list,
++					struct io_kiocb, inflight_entry);
++
++		list_del(&req->inflight_entry);
++		state->reqs[nr++] = req;
++		if (nr == ARRAY_SIZE(state->reqs))
++			break;
++	}
++
++	state->free_reqs = nr;
++	return nr != 0;
++}
++
++/*
++ * A request might get retired back into the request caches even before opcode
++ * handlers and io_issue_sqe() are done with it, e.g. inline completion path.
++ * Because of that, io_alloc_req() should be called only under ->uring_lock
++ * and with extra caution to not get a request that is still worked on.
++ */
++static struct io_kiocb *io_alloc_req(struct io_ring_ctx *ctx)
++	__must_hold(&ctx->uring_lock)
++{
++	struct io_submit_state *state = &ctx->submit_state;
++	gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
++	int ret, i;
++
++	BUILD_BUG_ON(ARRAY_SIZE(state->reqs) < IO_REQ_ALLOC_BATCH);
++
++	if (likely(state->free_reqs || io_flush_cached_reqs(ctx)))
++		goto got_req;
++
++	ret = kmem_cache_alloc_bulk(req_cachep, gfp, IO_REQ_ALLOC_BATCH,
++				    state->reqs);
++
++	/*
++	 * Bulk alloc is all-or-nothing. If we fail to get a batch,
++	 * retry single alloc to be on the safe side.
++	 */
++	if (unlikely(ret <= 0)) {
++		state->reqs[0] = kmem_cache_alloc(req_cachep, gfp);
++		if (!state->reqs[0])
++			return NULL;
++		ret = 1;
++	}
++
++	for (i = 0; i < ret; i++)
++		io_preinit_req(state->reqs[i], ctx);
++	state->free_reqs = ret;
++got_req:
++	state->free_reqs--;
++	return state->reqs[state->free_reqs];
++}
++
++static inline void io_put_file(struct file *file)
++{
++	if (file)
++		fput(file);
++}
++
++static void io_dismantle_req(struct io_kiocb *req)
++{
++	unsigned int flags = req->flags;
++
++	if (io_req_needs_clean(req))
++		io_clean_op(req);
++	if (!(flags & REQ_F_FIXED_FILE))
++		io_put_file(req->file);
++	if (req->fixed_rsrc_refs)
++		percpu_ref_put(req->fixed_rsrc_refs);
++	if (req->async_data) {
++		kfree(req->async_data);
++		req->async_data = NULL;
++	}
++}
++
++static void __io_free_req(struct io_kiocb *req)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++
++	io_dismantle_req(req);
++	io_put_task(req->task, 1);
++
++	spin_lock(&ctx->completion_lock);
++	list_add(&req->inflight_entry, &ctx->locked_free_list);
++	ctx->locked_free_nr++;
++	spin_unlock(&ctx->completion_lock);
++
++	percpu_ref_put(&ctx->refs);
++}
++
++static inline void io_remove_next_linked(struct io_kiocb *req)
++{
++	struct io_kiocb *nxt = req->link;
++
++	req->link = nxt->link;
++	nxt->link = NULL;
++}
++
++static bool io_kill_linked_timeout(struct io_kiocb *req)
++	__must_hold(&req->ctx->completion_lock)
++	__must_hold(&req->ctx->timeout_lock)
++{
++	struct io_kiocb *link = req->link;
++
++	if (link && link->opcode == IORING_OP_LINK_TIMEOUT) {
++		struct io_timeout_data *io = link->async_data;
++
++		io_remove_next_linked(req);
++		link->timeout.head = NULL;
++		if (hrtimer_try_to_cancel(&io->timer) != -1) {
++			list_del(&link->timeout.list);
++			io_fill_cqe_req(link, -ECANCELED, 0);
++			io_put_req_deferred(link);
++			return true;
++		}
++	}
++	return false;
++}
++
++static void io_fail_links(struct io_kiocb *req)
++	__must_hold(&req->ctx->completion_lock)
++{
++	struct io_kiocb *nxt, *link = req->link;
++
++	req->link = NULL;
++	while (link) {
++		long res = -ECANCELED;
++
++		if (link->flags & REQ_F_FAIL)
++			res = link->result;
++
++		nxt = link->link;
++		link->link = NULL;
++
++		trace_io_uring_fail_link(req, link);
++		io_fill_cqe_req(link, res, 0);
++		io_put_req_deferred(link);
++		link = nxt;
++	}
++}
++
++static bool io_disarm_next(struct io_kiocb *req)
++	__must_hold(&req->ctx->completion_lock)
++{
++	bool posted = false;
++
++	if (req->flags & REQ_F_ARM_LTIMEOUT) {
++		struct io_kiocb *link = req->link;
++
++		req->flags &= ~REQ_F_ARM_LTIMEOUT;
++		if (link && link->opcode == IORING_OP_LINK_TIMEOUT) {
++			io_remove_next_linked(req);
++			io_fill_cqe_req(link, -ECANCELED, 0);
++			io_put_req_deferred(link);
++			posted = true;
++		}
++	} else if (req->flags & REQ_F_LINK_TIMEOUT) {
++		struct io_ring_ctx *ctx = req->ctx;
++
++		spin_lock_irq(&ctx->timeout_lock);
++		posted = io_kill_linked_timeout(req);
++		spin_unlock_irq(&ctx->timeout_lock);
++	}
++	if (unlikely((req->flags & REQ_F_FAIL) &&
++		     !(req->flags & REQ_F_HARDLINK))) {
++		posted |= (req->link != NULL);
++		io_fail_links(req);
++	}
++	return posted;
++}
++
++static struct io_kiocb *__io_req_find_next(struct io_kiocb *req)
++{
++	struct io_kiocb *nxt;
++
++	/*
++	 * If LINK is set, we have dependent requests in this chain. If we
++	 * didn't fail this request, queue the first one up, moving any other
++	 * dependencies to the next request. In case of failure, fail the rest
++	 * of the chain.
++	 */
++	if (req->flags & IO_DISARM_MASK) {
++		struct io_ring_ctx *ctx = req->ctx;
++		bool posted;
++
++		spin_lock(&ctx->completion_lock);
++		posted = io_disarm_next(req);
++		if (posted)
++			io_commit_cqring(req->ctx);
++		spin_unlock(&ctx->completion_lock);
++		if (posted)
++			io_cqring_ev_posted(ctx);
++	}
++	nxt = req->link;
++	req->link = NULL;
++	return nxt;
++}
++
++static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
++{
++	if (likely(!(req->flags & (REQ_F_LINK|REQ_F_HARDLINK))))
++		return NULL;
++	return __io_req_find_next(req);
++}
++
++static void ctx_flush_and_put(struct io_ring_ctx *ctx, bool *locked)
++{
++	if (!ctx)
++		return;
++	if (*locked) {
++		if (ctx->submit_state.compl_nr)
++			io_submit_flush_completions(ctx);
++		mutex_unlock(&ctx->uring_lock);
++		*locked = false;
++	}
++	percpu_ref_put(&ctx->refs);
++}
++
++static void tctx_task_work(struct callback_head *cb)
++{
++	bool locked = false;
++	struct io_ring_ctx *ctx = NULL;
++	struct io_uring_task *tctx = container_of(cb, struct io_uring_task,
++						  task_work);
++
++	while (1) {
++		struct io_wq_work_node *node;
++
++		if (!tctx->task_list.first && locked && ctx->submit_state.compl_nr)
++			io_submit_flush_completions(ctx);
++
++		spin_lock_irq(&tctx->task_lock);
++		node = tctx->task_list.first;
++		INIT_WQ_LIST(&tctx->task_list);
++		if (!node)
++			tctx->task_running = false;
++		spin_unlock_irq(&tctx->task_lock);
++		if (!node)
++			break;
++
++		do {
++			struct io_wq_work_node *next = node->next;
++			struct io_kiocb *req = container_of(node, struct io_kiocb,
++							    io_task_work.node);
++
++			if (req->ctx != ctx) {
++				ctx_flush_and_put(ctx, &locked);
++				ctx = req->ctx;
++				/* if not contended, grab and improve batching */
++				locked = mutex_trylock(&ctx->uring_lock);
++				percpu_ref_get(&ctx->refs);
++			}
++			req->io_task_work.func(req, &locked);
++			node = next;
++		} while (node);
++
++		cond_resched();
++	}
++
++	ctx_flush_and_put(ctx, &locked);
++
++	/* relaxed read is enough as only the task itself sets ->in_idle */
++	if (unlikely(atomic_read(&tctx->in_idle)))
++		io_uring_drop_tctx_refs(current);
++}
++
++static void io_req_task_work_add(struct io_kiocb *req)
++{
++	struct task_struct *tsk = req->task;
++	struct io_uring_task *tctx = tsk->io_uring;
++	enum task_work_notify_mode notify;
++	struct io_wq_work_node *node;
++	unsigned long flags;
++	bool running;
++
++	WARN_ON_ONCE(!tctx);
++
++	spin_lock_irqsave(&tctx->task_lock, flags);
++	wq_list_add_tail(&req->io_task_work.node, &tctx->task_list);
++	running = tctx->task_running;
++	if (!running)
++		tctx->task_running = true;
++	spin_unlock_irqrestore(&tctx->task_lock, flags);
++
++	/* task_work already pending, we're done */
++	if (running)
++		return;
++
++	/*
++	 * SQPOLL kernel thread doesn't need notification, just a wakeup. For
++	 * all other cases, use TWA_SIGNAL unconditionally to ensure we're
++	 * processing task_work. There's no reliable way to tell if TWA_RESUME
++	 * will do the job.
++	 */
++	notify = (req->ctx->flags & IORING_SETUP_SQPOLL) ? TWA_NONE : TWA_SIGNAL;
++	if (!task_work_add(tsk, &tctx->task_work, notify)) {
++		wake_up_process(tsk);
++		return;
++	}
++
++	spin_lock_irqsave(&tctx->task_lock, flags);
++	tctx->task_running = false;
++	node = tctx->task_list.first;
++	INIT_WQ_LIST(&tctx->task_list);
++	spin_unlock_irqrestore(&tctx->task_lock, flags);
++
++	while (node) {
++		req = container_of(node, struct io_kiocb, io_task_work.node);
++		node = node->next;
++		if (llist_add(&req->io_task_work.fallback_node,
++			      &req->ctx->fallback_llist))
++			schedule_delayed_work(&req->ctx->fallback_work, 1);
++	}
++}
++
++static void io_req_task_cancel(struct io_kiocb *req, bool *locked)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++
++	/* not needed for normal modes, but SQPOLL depends on it */
++	io_tw_lock(ctx, locked);
++	io_req_complete_failed(req, req->result);
++}
++
++static void io_req_task_submit(struct io_kiocb *req, bool *locked)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++
++	io_tw_lock(ctx, locked);
++	/* req->task == current here, checking PF_EXITING is safe */
++	if (likely(!(req->task->flags & PF_EXITING)))
++		__io_queue_sqe(req);
++	else
++		io_req_complete_failed(req, -EFAULT);
++}
++
++static void io_req_task_queue_fail(struct io_kiocb *req, int ret)
++{
++	req->result = ret;
++	req->io_task_work.func = io_req_task_cancel;
++	io_req_task_work_add(req);
++}
++
++static void io_req_task_queue(struct io_kiocb *req)
++{
++	req->io_task_work.func = io_req_task_submit;
++	io_req_task_work_add(req);
++}
++
++static void io_req_task_queue_reissue(struct io_kiocb *req)
++{
++	req->io_task_work.func = io_queue_async_work;
++	io_req_task_work_add(req);
++}
++
++static inline void io_queue_next(struct io_kiocb *req)
++{
++	struct io_kiocb *nxt = io_req_find_next(req);
++
++	if (nxt)
++		io_req_task_queue(nxt);
++}
++
++static void io_free_req(struct io_kiocb *req)
++{
++	io_queue_next(req);
++	__io_free_req(req);
++}
++
++static void io_free_req_work(struct io_kiocb *req, bool *locked)
++{
++	io_free_req(req);
++}
++
++struct req_batch {
++	struct task_struct	*task;
++	int			task_refs;
++	int			ctx_refs;
++};
++
++static inline void io_init_req_batch(struct req_batch *rb)
++{
++	rb->task_refs = 0;
++	rb->ctx_refs = 0;
++	rb->task = NULL;
++}
++
++static void io_req_free_batch_finish(struct io_ring_ctx *ctx,
++				     struct req_batch *rb)
++{
++	if (rb->ctx_refs)
++		percpu_ref_put_many(&ctx->refs, rb->ctx_refs);
++	if (rb->task)
++		io_put_task(rb->task, rb->task_refs);
++}
++
++static void io_req_free_batch(struct req_batch *rb, struct io_kiocb *req,
++			      struct io_submit_state *state)
++{
++	io_queue_next(req);
++	io_dismantle_req(req);
++
++	if (req->task != rb->task) {
++		if (rb->task)
++			io_put_task(rb->task, rb->task_refs);
++		rb->task = req->task;
++		rb->task_refs = 0;
++	}
++	rb->task_refs++;
++	rb->ctx_refs++;
++
++	if (state->free_reqs != ARRAY_SIZE(state->reqs))
++		state->reqs[state->free_reqs++] = req;
++	else
++		list_add(&req->inflight_entry, &state->free_list);
++}
++
++static void io_submit_flush_completions(struct io_ring_ctx *ctx)
++	__must_hold(&ctx->uring_lock)
++{
++	struct io_submit_state *state = &ctx->submit_state;
++	int i, nr = state->compl_nr;
++	struct req_batch rb;
++
++	spin_lock(&ctx->completion_lock);
++	for (i = 0; i < nr; i++) {
++		struct io_kiocb *req = state->compl_reqs[i];
++
++		__io_fill_cqe(ctx, req->user_data, req->result,
++			      req->compl.cflags);
++	}
++	io_commit_cqring(ctx);
++	spin_unlock(&ctx->completion_lock);
++	io_cqring_ev_posted(ctx);
++
++	io_init_req_batch(&rb);
++	for (i = 0; i < nr; i++) {
++		struct io_kiocb *req = state->compl_reqs[i];
++
++		if (req_ref_put_and_test(req))
++			io_req_free_batch(&rb, req, &ctx->submit_state);
++	}
++
++	io_req_free_batch_finish(ctx, &rb);
++	state->compl_nr = 0;
++}
++
++/*
++ * Drop reference to request, return next in chain (if there is one) if this
++ * was the last reference to this request.
++ */
++static inline struct io_kiocb *io_put_req_find_next(struct io_kiocb *req)
++{
++	struct io_kiocb *nxt = NULL;
++
++	if (req_ref_put_and_test(req)) {
++		nxt = io_req_find_next(req);
++		__io_free_req(req);
++	}
++	return nxt;
++}
++
++static inline void io_put_req(struct io_kiocb *req)
++{
++	if (req_ref_put_and_test(req))
++		io_free_req(req);
++}
++
++static inline void io_put_req_deferred(struct io_kiocb *req)
++{
++	if (req_ref_put_and_test(req)) {
++		req->io_task_work.func = io_free_req_work;
++		io_req_task_work_add(req);
++	}
++}
++
++static unsigned io_cqring_events(struct io_ring_ctx *ctx)
++{
++	/* See comment at the top of this file */
++	smp_rmb();
++	return __io_cqring_events(ctx);
++}
++
++static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx)
++{
++	struct io_rings *rings = ctx->rings;
++
++	/* make sure SQ entry isn't read before tail */
++	return smp_load_acquire(&rings->sq.tail) - ctx->cached_sq_head;
++}
++
++static unsigned int io_put_kbuf(struct io_kiocb *req, struct io_buffer *kbuf)
++{
++	unsigned int cflags;
++
++	cflags = kbuf->bid << IORING_CQE_BUFFER_SHIFT;
++	cflags |= IORING_CQE_F_BUFFER;
++	req->flags &= ~REQ_F_BUFFER_SELECTED;
++	kfree(kbuf);
++	return cflags;
++}
++
++static inline unsigned int io_put_rw_kbuf(struct io_kiocb *req)
++{
++	struct io_buffer *kbuf;
++
++	if (likely(!(req->flags & REQ_F_BUFFER_SELECTED)))
++		return 0;
++	kbuf = (struct io_buffer *) (unsigned long) req->rw.addr;
++	return io_put_kbuf(req, kbuf);
++}
++
++static inline bool io_run_task_work(void)
++{
++	if (test_thread_flag(TIF_NOTIFY_SIGNAL) || current->task_works) {
++		__set_current_state(TASK_RUNNING);
++		tracehook_notify_signal();
++		return true;
++	}
++
++	return false;
++}
++
++/*
++ * Find and free completed poll iocbs
++ */
++static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
++			       struct list_head *done)
++{
++	struct req_batch rb;
++	struct io_kiocb *req;
++
++	/* order with ->result store in io_complete_rw_iopoll() */
++	smp_rmb();
++
++	io_init_req_batch(&rb);
++	while (!list_empty(done)) {
++		req = list_first_entry(done, struct io_kiocb, inflight_entry);
++		list_del(&req->inflight_entry);
++
++		io_fill_cqe_req(req, req->result, io_put_rw_kbuf(req));
++		(*nr_events)++;
++
++		if (req_ref_put_and_test(req))
++			io_req_free_batch(&rb, req, &ctx->submit_state);
++	}
++
++	io_commit_cqring(ctx);
++	io_cqring_ev_posted_iopoll(ctx);
++	io_req_free_batch_finish(ctx, &rb);
++}
++
++static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
++			long min)
++{
++	struct io_kiocb *req, *tmp;
++	LIST_HEAD(done);
++	bool spin;
++
++	/*
++	 * Only spin for completions if we don't have multiple devices hanging
++	 * off our complete list, and we're under the requested amount.
++	 */
++	spin = !ctx->poll_multi_queue && *nr_events < min;
++
++	list_for_each_entry_safe(req, tmp, &ctx->iopoll_list, inflight_entry) {
++		struct kiocb *kiocb = &req->rw.kiocb;
++		int ret;
++
++		/*
++		 * Move completed and retryable entries to our local lists.
++		 * If we find a request that requires polling, break out
++		 * and complete those lists first, if we have entries there.
++		 */
++		if (READ_ONCE(req->iopoll_completed)) {
++			list_move_tail(&req->inflight_entry, &done);
++			continue;
++		}
++		if (!list_empty(&done))
++			break;
++
++		ret = kiocb->ki_filp->f_op->iopoll(kiocb, spin);
++		if (unlikely(ret < 0))
++			return ret;
++		else if (ret)
++			spin = false;
++
++		/* iopoll may have completed current req */
++		if (READ_ONCE(req->iopoll_completed))
++			list_move_tail(&req->inflight_entry, &done);
++	}
++
++	if (!list_empty(&done))
++		io_iopoll_complete(ctx, nr_events, &done);
++
++	return 0;
++}
++
++/*
++ * We can't just wait for polled events to come to us, we have to actively
++ * find and complete them.
++ */
++static void io_iopoll_try_reap_events(struct io_ring_ctx *ctx)
++{
++	if (!(ctx->flags & IORING_SETUP_IOPOLL))
++		return;
++
++	mutex_lock(&ctx->uring_lock);
++	while (!list_empty(&ctx->iopoll_list)) {
++		unsigned int nr_events = 0;
++
++		io_do_iopoll(ctx, &nr_events, 0);
++
++		/* let it sleep and repeat later if can't complete a request */
++		if (nr_events == 0)
++			break;
++		/*
++		 * Ensure we allow local-to-the-cpu processing to take place,
++		 * in this case we need to ensure that we reap all events.
++		 * Also let task_work, etc. to progress by releasing the mutex
++		 */
++		if (need_resched()) {
++			mutex_unlock(&ctx->uring_lock);
++			cond_resched();
++			mutex_lock(&ctx->uring_lock);
++		}
++	}
++	mutex_unlock(&ctx->uring_lock);
++}
++
++static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
++{
++	unsigned int nr_events = 0;
++	int ret = 0;
++
++	/*
++	 * We disallow the app entering submit/complete with polling, but we
++	 * still need to lock the ring to prevent racing with polled issue
++	 * that got punted to a workqueue.
++	 */
++	mutex_lock(&ctx->uring_lock);
++	/*
++	 * Don't enter poll loop if we already have events pending.
++	 * If we do, we can potentially be spinning for commands that
++	 * already triggered a CQE (eg in error).
++	 */
++	if (test_bit(0, &ctx->check_cq_overflow))
++		__io_cqring_overflow_flush(ctx, false);
++	if (io_cqring_events(ctx))
++		goto out;
++	do {
++		/*
++		 * If a submit got punted to a workqueue, we can have the
++		 * application entering polling for a command before it gets
++		 * issued. That app will hold the uring_lock for the duration
++		 * of the poll right here, so we need to take a breather every
++		 * now and then to ensure that the issue has a chance to add
++		 * the poll to the issued list. Otherwise we can spin here
++		 * forever, while the workqueue is stuck trying to acquire the
++		 * very same mutex.
++		 */
++		if (list_empty(&ctx->iopoll_list)) {
++			u32 tail = ctx->cached_cq_tail;
++
++			mutex_unlock(&ctx->uring_lock);
++			io_run_task_work();
++			mutex_lock(&ctx->uring_lock);
++
++			/* some requests don't go through iopoll_list */
++			if (tail != ctx->cached_cq_tail ||
++			    list_empty(&ctx->iopoll_list))
++				break;
++		}
++		ret = io_do_iopoll(ctx, &nr_events, min);
++	} while (!ret && nr_events < min && !need_resched());
++out:
++	mutex_unlock(&ctx->uring_lock);
++	return ret;
++}
++
++static void kiocb_end_write(struct io_kiocb *req)
++{
++	/*
++	 * Tell lockdep we inherited freeze protection from submission
++	 * thread.
++	 */
++	if (req->flags & REQ_F_ISREG) {
++		struct super_block *sb = file_inode(req->file)->i_sb;
++
++		__sb_writers_acquired(sb, SB_FREEZE_WRITE);
++		sb_end_write(sb);
++	}
++}
++
++#ifdef CONFIG_BLOCK
++static bool io_resubmit_prep(struct io_kiocb *req)
++{
++	struct io_async_rw *rw = req->async_data;
++
++	if (!rw)
++		return !io_req_prep_async(req);
++	iov_iter_restore(&rw->iter, &rw->iter_state);
++	return true;
++}
++
++static bool io_rw_should_reissue(struct io_kiocb *req)
++{
++	umode_t mode = file_inode(req->file)->i_mode;
++	struct io_ring_ctx *ctx = req->ctx;
++
++	if (!S_ISBLK(mode) && !S_ISREG(mode))
++		return false;
++	if ((req->flags & REQ_F_NOWAIT) || (io_wq_current_is_worker() &&
++	    !(ctx->flags & IORING_SETUP_IOPOLL)))
++		return false;
++	/*
++	 * If ref is dying, we might be running poll reap from the exit work.
++	 * Don't attempt to reissue from that path, just let it fail with
++	 * -EAGAIN.
++	 */
++	if (percpu_ref_is_dying(&ctx->refs))
++		return false;
++	/*
++	 * Play it safe and assume not safe to re-import and reissue if we're
++	 * not in the original thread group (or in task context).
++	 */
++	if (!same_thread_group(req->task, current) || !in_task())
++		return false;
++	return true;
++}
++#else
++static bool io_resubmit_prep(struct io_kiocb *req)
++{
++	return false;
++}
++static bool io_rw_should_reissue(struct io_kiocb *req)
++{
++	return false;
++}
++#endif
++
++static bool __io_complete_rw_common(struct io_kiocb *req, long res)
++{
++	if (req->rw.kiocb.ki_flags & IOCB_WRITE) {
++		kiocb_end_write(req);
++		fsnotify_modify(req->file);
++	} else {
++		fsnotify_access(req->file);
++	}
++	if (res != req->result) {
++		if ((res == -EAGAIN || res == -EOPNOTSUPP) &&
++		    io_rw_should_reissue(req)) {
++			req->flags |= REQ_F_REISSUE;
++			return true;
++		}
++		req_set_fail(req);
++		req->result = res;
++	}
++	return false;
++}
++
++static inline int io_fixup_rw_res(struct io_kiocb *req, unsigned res)
++{
++	struct io_async_rw *io = req->async_data;
++
++	/* add previously done IO, if any */
++	if (io && io->bytes_done > 0) {
++		if (res < 0)
++			res = io->bytes_done;
++		else
++			res += io->bytes_done;
++	}
++	return res;
++}
++
++static void io_req_task_complete(struct io_kiocb *req, bool *locked)
++{
++	unsigned int cflags = io_put_rw_kbuf(req);
++	int res = req->result;
++
++	if (*locked) {
++		struct io_ring_ctx *ctx = req->ctx;
++		struct io_submit_state *state = &ctx->submit_state;
++
++		io_req_complete_state(req, res, cflags);
++		state->compl_reqs[state->compl_nr++] = req;
++		if (state->compl_nr == ARRAY_SIZE(state->compl_reqs))
++			io_submit_flush_completions(ctx);
++	} else {
++		io_req_complete_post(req, res, cflags);
++	}
++}
++
++static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
++			     unsigned int issue_flags)
++{
++	if (__io_complete_rw_common(req, res))
++		return;
++	__io_req_complete(req, issue_flags, io_fixup_rw_res(req, res), io_put_rw_kbuf(req));
++}
++
++static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
++{
++	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
++
++	if (__io_complete_rw_common(req, res))
++		return;
++	req->result = io_fixup_rw_res(req, res);
++	req->io_task_work.func = io_req_task_complete;
++	io_req_task_work_add(req);
++}
++
++static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
++{
++	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
++
++	if (kiocb->ki_flags & IOCB_WRITE)
++		kiocb_end_write(req);
++	if (unlikely(res != req->result)) {
++		if (res == -EAGAIN && io_rw_should_reissue(req)) {
++			req->flags |= REQ_F_REISSUE;
++			return;
++		}
++	}
++
++	WRITE_ONCE(req->result, res);
++	/* order with io_iopoll_complete() checking ->result */
++	smp_wmb();
++	WRITE_ONCE(req->iopoll_completed, 1);
++}
++
++/*
++ * After the iocb has been issued, it's safe to be found on the poll list.
++ * Adding the kiocb to the list AFTER submission ensures that we don't
++ * find it from a io_do_iopoll() thread before the issuer is done
++ * accessing the kiocb cookie.
++ */
++static void io_iopoll_req_issued(struct io_kiocb *req)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	const bool in_async = io_wq_current_is_worker();
++
++	/* workqueue context doesn't hold uring_lock, grab it now */
++	if (unlikely(in_async))
++		mutex_lock(&ctx->uring_lock);
++
++	/*
++	 * Track whether we have multiple files in our lists. This will impact
++	 * how we do polling eventually, not spinning if we're on potentially
++	 * different devices.
++	 */
++	if (list_empty(&ctx->iopoll_list)) {
++		ctx->poll_multi_queue = false;
++	} else if (!ctx->poll_multi_queue) {
++		struct io_kiocb *list_req;
++		unsigned int queue_num0, queue_num1;
++
++		list_req = list_first_entry(&ctx->iopoll_list, struct io_kiocb,
++						inflight_entry);
++
++		if (list_req->file != req->file) {
++			ctx->poll_multi_queue = true;
++		} else {
++			queue_num0 = blk_qc_t_to_queue_num(list_req->rw.kiocb.ki_cookie);
++			queue_num1 = blk_qc_t_to_queue_num(req->rw.kiocb.ki_cookie);
++			if (queue_num0 != queue_num1)
++				ctx->poll_multi_queue = true;
++		}
++	}
++
++	/*
++	 * For fast devices, IO may have already completed. If it has, add
++	 * it to the front so we find it first.
++	 */
++	if (READ_ONCE(req->iopoll_completed))
++		list_add(&req->inflight_entry, &ctx->iopoll_list);
++	else
++		list_add_tail(&req->inflight_entry, &ctx->iopoll_list);
++
++	if (unlikely(in_async)) {
++		/*
++		 * If IORING_SETUP_SQPOLL is enabled, sqes are either handle
++		 * in sq thread task context or in io worker task context. If
++		 * current task context is sq thread, we don't need to check
++		 * whether should wake up sq thread.
++		 */
++		if ((ctx->flags & IORING_SETUP_SQPOLL) &&
++		    wq_has_sleeper(&ctx->sq_data->wait))
++			wake_up(&ctx->sq_data->wait);
++
++		mutex_unlock(&ctx->uring_lock);
++	}
++}
++
++static bool io_bdev_nowait(struct block_device *bdev)
++{
++	return !bdev || blk_queue_nowait(bdev_get_queue(bdev));
++}
++
++/*
++ * If we tracked the file through the SCM inflight mechanism, we could support
++ * any file. For now, just ensure that anything potentially problematic is done
++ * inline.
++ */
++static bool __io_file_supports_nowait(struct file *file, int rw)
++{
++	umode_t mode = file_inode(file)->i_mode;
++
++	if (S_ISBLK(mode)) {
++		if (IS_ENABLED(CONFIG_BLOCK) &&
++		    io_bdev_nowait(I_BDEV(file->f_mapping->host)))
++			return true;
++		return false;
++	}
++	if (S_ISSOCK(mode))
++		return true;
++	if (S_ISREG(mode)) {
++		if (IS_ENABLED(CONFIG_BLOCK) &&
++		    io_bdev_nowait(file->f_inode->i_sb->s_bdev) &&
++		    file->f_op != &io_uring_fops)
++			return true;
++		return false;
++	}
++
++	/* any ->read/write should understand O_NONBLOCK */
++	if (file->f_flags & O_NONBLOCK)
++		return true;
++
++	if (!(file->f_mode & FMODE_NOWAIT))
++		return false;
++
++	if (rw == READ)
++		return file->f_op->read_iter != NULL;
++
++	return file->f_op->write_iter != NULL;
++}
++
++static bool io_file_supports_nowait(struct io_kiocb *req, int rw)
++{
++	if (rw == READ && (req->flags & REQ_F_NOWAIT_READ))
++		return true;
++	else if (rw == WRITE && (req->flags & REQ_F_NOWAIT_WRITE))
++		return true;
++
++	return __io_file_supports_nowait(req->file, rw);
++}
++
++static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
++		      int rw)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	struct kiocb *kiocb = &req->rw.kiocb;
++	struct file *file = req->file;
++	unsigned ioprio;
++	int ret;
++
++	if (!io_req_ffs_set(req) && S_ISREG(file_inode(file)->i_mode))
++		req->flags |= REQ_F_ISREG;
++
++	kiocb->ki_pos = READ_ONCE(sqe->off);
++	if (kiocb->ki_pos == -1) {
++		if (!(file->f_mode & FMODE_STREAM)) {
++			req->flags |= REQ_F_CUR_POS;
++			kiocb->ki_pos = file->f_pos;
++		} else {
++			kiocb->ki_pos = 0;
++		}
++	}
++	kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
++	kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
++	ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags));
++	if (unlikely(ret))
++		return ret;
++
++	/*
++	 * If the file is marked O_NONBLOCK, still allow retry for it if it
++	 * supports async. Otherwise it's impossible to use O_NONBLOCK files
++	 * reliably. If not, or it IOCB_NOWAIT is set, don't retry.
++	 */
++	if ((kiocb->ki_flags & IOCB_NOWAIT) ||
++	    ((file->f_flags & O_NONBLOCK) && !io_file_supports_nowait(req, rw)))
++		req->flags |= REQ_F_NOWAIT;
++
++	ioprio = READ_ONCE(sqe->ioprio);
++	if (ioprio) {
++		ret = ioprio_check_cap(ioprio);
++		if (ret)
++			return ret;
++
++		kiocb->ki_ioprio = ioprio;
++	} else
++		kiocb->ki_ioprio = get_current_ioprio();
++
++	if (ctx->flags & IORING_SETUP_IOPOLL) {
++		if (!(kiocb->ki_flags & IOCB_DIRECT) ||
++		    !kiocb->ki_filp->f_op->iopoll)
++			return -EOPNOTSUPP;
++
++		kiocb->ki_flags |= IOCB_HIPRI;
++		kiocb->ki_complete = io_complete_rw_iopoll;
++		req->iopoll_completed = 0;
++	} else {
++		if (kiocb->ki_flags & IOCB_HIPRI)
++			return -EINVAL;
++		kiocb->ki_complete = io_complete_rw;
++	}
++
++	/* used for fixed read/write too - just read unconditionally */
++	req->buf_index = READ_ONCE(sqe->buf_index);
++	req->imu = NULL;
++
++	if (req->opcode == IORING_OP_READ_FIXED ||
++	    req->opcode == IORING_OP_WRITE_FIXED) {
++		struct io_ring_ctx *ctx = req->ctx;
++		u16 index;
++
++		if (unlikely(req->buf_index >= ctx->nr_user_bufs))
++			return -EFAULT;
++		index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
++		req->imu = ctx->user_bufs[index];
++		io_req_set_rsrc_node(req);
++	}
++
++	req->rw.addr = READ_ONCE(sqe->addr);
++	req->rw.len = READ_ONCE(sqe->len);
++	return 0;
++}
++
++static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret)
++{
++	switch (ret) {
++	case -EIOCBQUEUED:
++		break;
++	case -ERESTARTSYS:
++	case -ERESTARTNOINTR:
++	case -ERESTARTNOHAND:
++	case -ERESTART_RESTARTBLOCK:
++		/*
++		 * We can't just restart the syscall, since previously
++		 * submitted sqes may already be in progress. Just fail this
++		 * IO with EINTR.
++		 */
++		ret = -EINTR;
++		fallthrough;
++	default:
++		kiocb->ki_complete(kiocb, ret, 0);
++	}
++}
++
++static void kiocb_done(struct kiocb *kiocb, ssize_t ret,
++		       unsigned int issue_flags)
++{
++	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
++
++	if (req->flags & REQ_F_CUR_POS)
++		req->file->f_pos = kiocb->ki_pos;
++	if (ret >= 0 && (kiocb->ki_complete == io_complete_rw))
++		__io_complete_rw(req, ret, 0, issue_flags);
++	else
++		io_rw_done(kiocb, ret);
++
++	if (req->flags & REQ_F_REISSUE) {
++		req->flags &= ~REQ_F_REISSUE;
++		if (io_resubmit_prep(req)) {
++			io_req_task_queue_reissue(req);
++		} else {
++			unsigned int cflags = io_put_rw_kbuf(req);
++			struct io_ring_ctx *ctx = req->ctx;
++
++			ret = io_fixup_rw_res(req, ret);
++			req_set_fail(req);
++			if (!(issue_flags & IO_URING_F_NONBLOCK)) {
++				mutex_lock(&ctx->uring_lock);
++				__io_req_complete(req, issue_flags, ret, cflags);
++				mutex_unlock(&ctx->uring_lock);
++			} else {
++				__io_req_complete(req, issue_flags, ret, cflags);
++			}
++		}
++	}
++}
++
++static int __io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter,
++			     struct io_mapped_ubuf *imu)
++{
++	size_t len = req->rw.len;
++	u64 buf_end, buf_addr = req->rw.addr;
++	size_t offset;
++
++	if (unlikely(check_add_overflow(buf_addr, (u64)len, &buf_end)))
++		return -EFAULT;
++	/* not inside the mapped region */
++	if (unlikely(buf_addr < imu->ubuf || buf_end > imu->ubuf_end))
++		return -EFAULT;
++
++	/*
++	 * May not be a start of buffer, set size appropriately
++	 * and advance us to the beginning.
++	 */
++	offset = buf_addr - imu->ubuf;
++	iov_iter_bvec(iter, rw, imu->bvec, imu->nr_bvecs, offset + len);
++
++	if (offset) {
++		/*
++		 * Don't use iov_iter_advance() here, as it's really slow for
++		 * using the latter parts of a big fixed buffer - it iterates
++		 * over each segment manually. We can cheat a bit here, because
++		 * we know that:
++		 *
++		 * 1) it's a BVEC iter, we set it up
++		 * 2) all bvecs are PAGE_SIZE in size, except potentially the
++		 *    first and last bvec
++		 *
++		 * So just find our index, and adjust the iterator afterwards.
++		 * If the offset is within the first bvec (or the whole first
++		 * bvec, just use iov_iter_advance(). This makes it easier
++		 * since we can just skip the first segment, which may not
++		 * be PAGE_SIZE aligned.
++		 */
++		const struct bio_vec *bvec = imu->bvec;
++
++		if (offset <= bvec->bv_len) {
++			iov_iter_advance(iter, offset);
++		} else {
++			unsigned long seg_skip;
++
++			/* skip first vec */
++			offset -= bvec->bv_len;
++			seg_skip = 1 + (offset >> PAGE_SHIFT);
++
++			iter->bvec = bvec + seg_skip;
++			iter->nr_segs -= seg_skip;
++			iter->count -= bvec->bv_len + offset;
++			iter->iov_offset = offset & ~PAGE_MASK;
++		}
++	}
++
++	return 0;
++}
++
++static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter)
++{
++	if (WARN_ON_ONCE(!req->imu))
++		return -EFAULT;
++	return __io_import_fixed(req, rw, iter, req->imu);
++}
++
++static void io_ring_submit_unlock(struct io_ring_ctx *ctx, bool needs_lock)
++{
++	if (needs_lock)
++		mutex_unlock(&ctx->uring_lock);
++}
++
++static void io_ring_submit_lock(struct io_ring_ctx *ctx, bool needs_lock)
++{
++	/*
++	 * "Normal" inline submissions always hold the uring_lock, since we
++	 * grab it from the system call. Same is true for the SQPOLL offload.
++	 * The only exception is when we've detached the request and issue it
++	 * from an async worker thread, grab the lock for that case.
++	 */
++	if (needs_lock)
++		mutex_lock(&ctx->uring_lock);
++}
++
++static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
++					  int bgid, struct io_buffer *kbuf,
++					  bool needs_lock)
++{
++	struct io_buffer *head;
++
++	if (req->flags & REQ_F_BUFFER_SELECTED)
++		return kbuf;
++
++	io_ring_submit_lock(req->ctx, needs_lock);
++
++	lockdep_assert_held(&req->ctx->uring_lock);
++
++	head = xa_load(&req->ctx->io_buffers, bgid);
++	if (head) {
++		if (!list_empty(&head->list)) {
++			kbuf = list_last_entry(&head->list, struct io_buffer,
++							list);
++			list_del(&kbuf->list);
++		} else {
++			kbuf = head;
++			xa_erase(&req->ctx->io_buffers, bgid);
++		}
++		if (*len > kbuf->len)
++			*len = kbuf->len;
++	} else {
++		kbuf = ERR_PTR(-ENOBUFS);
++	}
++
++	io_ring_submit_unlock(req->ctx, needs_lock);
++
++	return kbuf;
++}
++
++static void __user *io_rw_buffer_select(struct io_kiocb *req, size_t *len,
++					bool needs_lock)
++{
++	struct io_buffer *kbuf;
++	u16 bgid;
++
++	kbuf = (struct io_buffer *) (unsigned long) req->rw.addr;
++	bgid = req->buf_index;
++	kbuf = io_buffer_select(req, len, bgid, kbuf, needs_lock);
++	if (IS_ERR(kbuf))
++		return kbuf;
++	req->rw.addr = (u64) (unsigned long) kbuf;
++	req->flags |= REQ_F_BUFFER_SELECTED;
++	return u64_to_user_ptr(kbuf->addr);
++}
++
++#ifdef CONFIG_COMPAT
++static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
++				bool needs_lock)
++{
++	struct compat_iovec __user *uiov;
++	compat_ssize_t clen;
++	void __user *buf;
++	ssize_t len;
++
++	uiov = u64_to_user_ptr(req->rw.addr);
++	if (!access_ok(uiov, sizeof(*uiov)))
++		return -EFAULT;
++	if (__get_user(clen, &uiov->iov_len))
++		return -EFAULT;
++	if (clen < 0)
++		return -EINVAL;
++
++	len = clen;
++	buf = io_rw_buffer_select(req, &len, needs_lock);
++	if (IS_ERR(buf))
++		return PTR_ERR(buf);
++	iov[0].iov_base = buf;
++	iov[0].iov_len = (compat_size_t) len;
++	return 0;
++}
++#endif
++
++static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
++				      bool needs_lock)
++{
++	struct iovec __user *uiov = u64_to_user_ptr(req->rw.addr);
++	void __user *buf;
++	ssize_t len;
++
++	if (copy_from_user(iov, uiov, sizeof(*uiov)))
++		return -EFAULT;
++
++	len = iov[0].iov_len;
++	if (len < 0)
++		return -EINVAL;
++	buf = io_rw_buffer_select(req, &len, needs_lock);
++	if (IS_ERR(buf))
++		return PTR_ERR(buf);
++	iov[0].iov_base = buf;
++	iov[0].iov_len = len;
++	return 0;
++}
++
++static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
++				    bool needs_lock)
++{
++	if (req->flags & REQ_F_BUFFER_SELECTED) {
++		struct io_buffer *kbuf;
++
++		kbuf = (struct io_buffer *) (unsigned long) req->rw.addr;
++		iov[0].iov_base = u64_to_user_ptr(kbuf->addr);
++		iov[0].iov_len = kbuf->len;
++		return 0;
++	}
++	if (req->rw.len != 1)
++		return -EINVAL;
++
++#ifdef CONFIG_COMPAT
++	if (req->ctx->compat)
++		return io_compat_import(req, iov, needs_lock);
++#endif
++
++	return __io_iov_buffer_select(req, iov, needs_lock);
++}
++
++static int io_import_iovec(int rw, struct io_kiocb *req, struct iovec **iovec,
++			   struct iov_iter *iter, bool needs_lock)
++{
++	void __user *buf = u64_to_user_ptr(req->rw.addr);
++	size_t sqe_len = req->rw.len;
++	u8 opcode = req->opcode;
++	ssize_t ret;
++
++	if (opcode == IORING_OP_READ_FIXED || opcode == IORING_OP_WRITE_FIXED) {
++		*iovec = NULL;
++		return io_import_fixed(req, rw, iter);
++	}
++
++	/* buffer index only valid with fixed read/write, or buffer select  */
++	if (req->buf_index && !(req->flags & REQ_F_BUFFER_SELECT))
++		return -EINVAL;
++
++	if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) {
++		if (req->flags & REQ_F_BUFFER_SELECT) {
++			buf = io_rw_buffer_select(req, &sqe_len, needs_lock);
++			if (IS_ERR(buf))
++				return PTR_ERR(buf);
++			req->rw.len = sqe_len;
++		}
++
++		ret = import_single_range(rw, buf, sqe_len, *iovec, iter);
++		*iovec = NULL;
++		return ret;
++	}
++
++	if (req->flags & REQ_F_BUFFER_SELECT) {
++		ret = io_iov_buffer_select(req, *iovec, needs_lock);
++		if (!ret)
++			iov_iter_init(iter, rw, *iovec, 1, (*iovec)->iov_len);
++		*iovec = NULL;
++		return ret;
++	}
++
++	return __import_iovec(rw, buf, sqe_len, UIO_FASTIOV, iovec, iter,
++			      req->ctx->compat);
++}
++
++static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
++{
++	return (kiocb->ki_filp->f_mode & FMODE_STREAM) ? NULL : &kiocb->ki_pos;
++}
++
++/*
++ * For files that don't have ->read_iter() and ->write_iter(), handle them
++ * by looping over ->read() or ->write() manually.
++ */
++static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
++{
++	struct kiocb *kiocb = &req->rw.kiocb;
++	struct file *file = req->file;
++	ssize_t ret = 0;
++
++	/*
++	 * Don't support polled IO through this interface, and we can't
++	 * support non-blocking either. For the latter, this just causes
++	 * the kiocb to be handled from an async context.
++	 */
++	if (kiocb->ki_flags & IOCB_HIPRI)
++		return -EOPNOTSUPP;
++	if (kiocb->ki_flags & IOCB_NOWAIT)
++		return -EAGAIN;
++
++	while (iov_iter_count(iter)) {
++		struct iovec iovec;
++		ssize_t nr;
++
++		if (!iov_iter_is_bvec(iter)) {
++			iovec = iov_iter_iovec(iter);
++		} else {
++			iovec.iov_base = u64_to_user_ptr(req->rw.addr);
++			iovec.iov_len = req->rw.len;
++		}
++
++		if (rw == READ) {
++			nr = file->f_op->read(file, iovec.iov_base,
++					      iovec.iov_len, io_kiocb_ppos(kiocb));
++		} else {
++			nr = file->f_op->write(file, iovec.iov_base,
++					       iovec.iov_len, io_kiocb_ppos(kiocb));
++		}
++
++		if (nr < 0) {
++			if (!ret)
++				ret = nr;
++			break;
++		}
++		ret += nr;
++		if (!iov_iter_is_bvec(iter)) {
++			iov_iter_advance(iter, nr);
++		} else {
++			req->rw.addr += nr;
++			req->rw.len -= nr;
++			if (!req->rw.len)
++				break;
++		}
++		if (nr != iovec.iov_len)
++			break;
++	}
++
++	return ret;
++}
++
++static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
++			  const struct iovec *fast_iov, struct iov_iter *iter)
++{
++	struct io_async_rw *rw = req->async_data;
++
++	memcpy(&rw->iter, iter, sizeof(*iter));
++	rw->free_iovec = iovec;
++	rw->bytes_done = 0;
++	/* can only be fixed buffers, no need to do anything */
++	if (iov_iter_is_bvec(iter))
++		return;
++	if (!iovec) {
++		unsigned iov_off = 0;
++
++		rw->iter.iov = rw->fast_iov;
++		if (iter->iov != fast_iov) {
++			iov_off = iter->iov - fast_iov;
++			rw->iter.iov += iov_off;
++		}
++		if (rw->fast_iov != fast_iov)
++			memcpy(rw->fast_iov + iov_off, fast_iov + iov_off,
++			       sizeof(struct iovec) * iter->nr_segs);
++	} else {
++		req->flags |= REQ_F_NEED_CLEANUP;
++	}
++}
++
++static inline int io_alloc_async_data(struct io_kiocb *req)
++{
++	WARN_ON_ONCE(!io_op_defs[req->opcode].async_size);
++	req->async_data = kmalloc(io_op_defs[req->opcode].async_size, GFP_KERNEL);
++	return req->async_data == NULL;
++}
++
++static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
++			     const struct iovec *fast_iov,
++			     struct iov_iter *iter, bool force)
++{
++	if (!force && !io_op_defs[req->opcode].needs_async_setup)
++		return 0;
++	if (!req->async_data) {
++		struct io_async_rw *iorw;
++
++		if (io_alloc_async_data(req)) {
++			kfree(iovec);
++			return -ENOMEM;
++		}
++
++		io_req_map_rw(req, iovec, fast_iov, iter);
++		iorw = req->async_data;
++		/* we've copied and mapped the iter, ensure state is saved */
++		iov_iter_save_state(&iorw->iter, &iorw->iter_state);
++	}
++	return 0;
++}
++
++static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
++{
++	struct io_async_rw *iorw = req->async_data;
++	struct iovec *iov = iorw->fast_iov;
++	int ret;
++
++	ret = io_import_iovec(rw, req, &iov, &iorw->iter, false);
++	if (unlikely(ret < 0))
++		return ret;
++
++	iorw->bytes_done = 0;
++	iorw->free_iovec = iov;
++	if (iov)
++		req->flags |= REQ_F_NEED_CLEANUP;
++	iov_iter_save_state(&iorw->iter, &iorw->iter_state);
++	return 0;
++}
++
++static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	if (unlikely(!(req->file->f_mode & FMODE_READ)))
++		return -EBADF;
++	return io_prep_rw(req, sqe, READ);
++}
++
++/*
++ * This is our waitqueue callback handler, registered through lock_page_async()
++ * when we initially tried to do the IO with the iocb armed our waitqueue.
++ * This gets called when the page is unlocked, and we generally expect that to
++ * happen when the page IO is completed and the page is now uptodate. This will
++ * queue a task_work based retry of the operation, attempting to copy the data
++ * again. If the latter fails because the page was NOT uptodate, then we will
++ * do a thread based blocking retry of the operation. That's the unexpected
++ * slow path.
++ */
++static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
++			     int sync, void *arg)
++{
++	struct wait_page_queue *wpq;
++	struct io_kiocb *req = wait->private;
++	struct wait_page_key *key = arg;
++
++	wpq = container_of(wait, struct wait_page_queue, wait);
++
++	if (!wake_page_match(wpq, key))
++		return 0;
++
++	req->rw.kiocb.ki_flags &= ~IOCB_WAITQ;
++	list_del_init(&wait->entry);
++	io_req_task_queue(req);
++	return 1;
++}
++
++/*
++ * This controls whether a given IO request should be armed for async page
++ * based retry. If we return false here, the request is handed to the async
++ * worker threads for retry. If we're doing buffered reads on a regular file,
++ * we prepare a private wait_page_queue entry and retry the operation. This
++ * will either succeed because the page is now uptodate and unlocked, or it
++ * will register a callback when the page is unlocked at IO completion. Through
++ * that callback, io_uring uses task_work to setup a retry of the operation.
++ * That retry will attempt the buffered read again. The retry will generally
++ * succeed, or in rare cases where it fails, we then fall back to using the
++ * async worker threads for a blocking retry.
++ */
++static bool io_rw_should_retry(struct io_kiocb *req)
++{
++	struct io_async_rw *rw = req->async_data;
++	struct wait_page_queue *wait = &rw->wpq;
++	struct kiocb *kiocb = &req->rw.kiocb;
++
++	/* never retry for NOWAIT, we just complete with -EAGAIN */
++	if (req->flags & REQ_F_NOWAIT)
++		return false;
++
++	/* Only for buffered IO */
++	if (kiocb->ki_flags & (IOCB_DIRECT | IOCB_HIPRI))
++		return false;
++
++	/*
++	 * just use poll if we can, and don't attempt if the fs doesn't
++	 * support callback based unlocks
++	 */
++	if (file_can_poll(req->file) || !(req->file->f_mode & FMODE_BUF_RASYNC))
++		return false;
++
++	wait->wait.func = io_async_buf_func;
++	wait->wait.private = req;
++	wait->wait.flags = 0;
++	INIT_LIST_HEAD(&wait->wait.entry);
++	kiocb->ki_flags |= IOCB_WAITQ;
++	kiocb->ki_flags &= ~IOCB_NOWAIT;
++	kiocb->ki_waitq = wait;
++	return true;
++}
++
++static inline int io_iter_do_read(struct io_kiocb *req, struct iov_iter *iter)
++{
++	if (req->file->f_op->read_iter)
++		return call_read_iter(req->file, &req->rw.kiocb, iter);
++	else if (req->file->f_op->read)
++		return loop_rw_iter(READ, req, iter);
++	else
++		return -EINVAL;
++}
++
++static bool need_read_all(struct io_kiocb *req)
++{
++	return req->flags & REQ_F_ISREG ||
++		S_ISBLK(file_inode(req->file)->i_mode);
++}
++
++static int io_read(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
++	struct kiocb *kiocb = &req->rw.kiocb;
++	struct iov_iter __iter, *iter = &__iter;
++	struct io_async_rw *rw = req->async_data;
++	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++	struct iov_iter_state __state, *state;
++	ssize_t ret, ret2;
++
++	if (rw) {
++		iter = &rw->iter;
++		state = &rw->iter_state;
++		/*
++		 * We come here from an earlier attempt, restore our state to
++		 * match in case it doesn't. It's cheap enough that we don't
++		 * need to make this conditional.
++		 */
++		iov_iter_restore(iter, state);
++		iovec = NULL;
++	} else {
++		ret = io_import_iovec(READ, req, &iovec, iter, !force_nonblock);
++		if (ret < 0)
++			return ret;
++		state = &__state;
++		iov_iter_save_state(iter, state);
++	}
++	req->result = iov_iter_count(iter);
++
++	/* Ensure we clear previously set non-block flag */
++	if (!force_nonblock)
++		kiocb->ki_flags &= ~IOCB_NOWAIT;
++	else
++		kiocb->ki_flags |= IOCB_NOWAIT;
++
++	/* If the file doesn't support async, just async punt */
++	if (force_nonblock && !io_file_supports_nowait(req, READ)) {
++		ret = io_setup_async_rw(req, iovec, inline_vecs, iter, true);
++		return ret ?: -EAGAIN;
++	}
++
++	ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), req->result);
++	if (unlikely(ret)) {
++		kfree(iovec);
++		return ret;
++	}
++
++	ret = io_iter_do_read(req, iter);
++
++	if (ret == -EAGAIN || (req->flags & REQ_F_REISSUE)) {
++		req->flags &= ~REQ_F_REISSUE;
++		/* IOPOLL retry should happen for io-wq threads */
++		if (!force_nonblock && !(req->ctx->flags & IORING_SETUP_IOPOLL))
++			goto done;
++		/* no retry on NONBLOCK nor RWF_NOWAIT */
++		if (req->flags & REQ_F_NOWAIT)
++			goto done;
++		ret = 0;
++	} else if (ret == -EIOCBQUEUED) {
++		goto out_free;
++	} else if (ret <= 0 || ret == req->result || !force_nonblock ||
++		   (req->flags & REQ_F_NOWAIT) || !need_read_all(req)) {
++		/* read all, failed, already did sync or don't want to retry */
++		goto done;
++	}
++
++	/*
++	 * Don't depend on the iter state matching what was consumed, or being
++	 * untouched in case of error. Restore it and we'll advance it
++	 * manually if we need to.
++	 */
++	iov_iter_restore(iter, state);
++
++	ret2 = io_setup_async_rw(req, iovec, inline_vecs, iter, true);
++	if (ret2)
++		return ret2;
++
++	iovec = NULL;
++	rw = req->async_data;
++	/*
++	 * Now use our persistent iterator and state, if we aren't already.
++	 * We've restored and mapped the iter to match.
++	 */
++	if (iter != &rw->iter) {
++		iter = &rw->iter;
++		state = &rw->iter_state;
++	}
++
++	do {
++		/*
++		 * We end up here because of a partial read, either from
++		 * above or inside this loop. Advance the iter by the bytes
++		 * that were consumed.
++		 */
++		iov_iter_advance(iter, ret);
++		if (!iov_iter_count(iter))
++			break;
++		rw->bytes_done += ret;
++		iov_iter_save_state(iter, state);
++
++		/* if we can retry, do so with the callbacks armed */
++		if (!io_rw_should_retry(req)) {
++			kiocb->ki_flags &= ~IOCB_WAITQ;
++			return -EAGAIN;
++		}
++
++		req->result = iov_iter_count(iter);
++		/*
++		 * Now retry read with the IOCB_WAITQ parts set in the iocb. If
++		 * we get -EIOCBQUEUED, then we'll get a notification when the
++		 * desired page gets unlocked. We can also get a partial read
++		 * here, and if we do, then just retry at the new offset.
++		 */
++		ret = io_iter_do_read(req, iter);
++		if (ret == -EIOCBQUEUED)
++			return 0;
++		/* we got some bytes, but not all. retry. */
++		kiocb->ki_flags &= ~IOCB_WAITQ;
++		iov_iter_restore(iter, state);
++	} while (ret > 0);
++done:
++	kiocb_done(kiocb, ret, issue_flags);
++out_free:
++	/* it's faster to check here then delegate to kfree */
++	if (iovec)
++		kfree(iovec);
++	return 0;
++}
++
++static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	if (unlikely(!(req->file->f_mode & FMODE_WRITE)))
++		return -EBADF;
++	return io_prep_rw(req, sqe, WRITE);
++}
++
++static int io_write(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
++	struct kiocb *kiocb = &req->rw.kiocb;
++	struct iov_iter __iter, *iter = &__iter;
++	struct io_async_rw *rw = req->async_data;
++	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++	struct iov_iter_state __state, *state;
++	ssize_t ret, ret2;
++
++	if (rw) {
++		iter = &rw->iter;
++		state = &rw->iter_state;
++		iov_iter_restore(iter, state);
++		iovec = NULL;
++	} else {
++		ret = io_import_iovec(WRITE, req, &iovec, iter, !force_nonblock);
++		if (ret < 0)
++			return ret;
++		state = &__state;
++		iov_iter_save_state(iter, state);
++	}
++	req->result = iov_iter_count(iter);
++
++	/* Ensure we clear previously set non-block flag */
++	if (!force_nonblock)
++		kiocb->ki_flags &= ~IOCB_NOWAIT;
++	else
++		kiocb->ki_flags |= IOCB_NOWAIT;
++
++	/* If the file doesn't support async, just async punt */
++	if (force_nonblock && !io_file_supports_nowait(req, WRITE))
++		goto copy_iov;
++
++	/* file path doesn't support NOWAIT for non-direct_IO */
++	if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
++	    (req->flags & REQ_F_ISREG))
++		goto copy_iov;
++
++	ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), req->result);
++	if (unlikely(ret))
++		goto out_free;
++
++	/*
++	 * Open-code file_start_write here to grab freeze protection,
++	 * which will be released by another thread in
++	 * io_complete_rw().  Fool lockdep by telling it the lock got
++	 * released so that it doesn't complain about the held lock when
++	 * we return to userspace.
++	 */
++	if (req->flags & REQ_F_ISREG) {
++		sb_start_write(file_inode(req->file)->i_sb);
++		__sb_writers_release(file_inode(req->file)->i_sb,
++					SB_FREEZE_WRITE);
++	}
++	kiocb->ki_flags |= IOCB_WRITE;
++
++	if (req->file->f_op->write_iter)
++		ret2 = call_write_iter(req->file, kiocb, iter);
++	else if (req->file->f_op->write)
++		ret2 = loop_rw_iter(WRITE, req, iter);
++	else
++		ret2 = -EINVAL;
++
++	if (req->flags & REQ_F_REISSUE) {
++		req->flags &= ~REQ_F_REISSUE;
++		ret2 = -EAGAIN;
++	}
++
++	/*
++	 * Raw bdev writes will return -EOPNOTSUPP for IOCB_NOWAIT. Just
++	 * retry them without IOCB_NOWAIT.
++	 */
++	if (ret2 == -EOPNOTSUPP && (kiocb->ki_flags & IOCB_NOWAIT))
++		ret2 = -EAGAIN;
++	/* no retry on NONBLOCK nor RWF_NOWAIT */
++	if (ret2 == -EAGAIN && (req->flags & REQ_F_NOWAIT))
++		goto done;
++	if (!force_nonblock || ret2 != -EAGAIN) {
++		/* IOPOLL retry should happen for io-wq threads */
++		if ((req->ctx->flags & IORING_SETUP_IOPOLL) && ret2 == -EAGAIN)
++			goto copy_iov;
++done:
++		kiocb_done(kiocb, ret2, issue_flags);
++	} else {
++copy_iov:
++		iov_iter_restore(iter, state);
++		ret = io_setup_async_rw(req, iovec, inline_vecs, iter, false);
++		if (!ret) {
++			if (kiocb->ki_flags & IOCB_WRITE)
++				kiocb_end_write(req);
++			return -EAGAIN;
++		}
++		return ret;
++	}
++out_free:
++	/* it's reportedly faster than delegating the null check to kfree() */
++	if (iovec)
++		kfree(iovec);
++	return ret;
++}
++
++static int io_renameat_prep(struct io_kiocb *req,
++			    const struct io_uring_sqe *sqe)
++{
++	struct io_rename *ren = &req->rename;
++	const char __user *oldf, *newf;
++
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
++		return -EINVAL;
++	if (unlikely(req->flags & REQ_F_FIXED_FILE))
++		return -EBADF;
++
++	ren->old_dfd = READ_ONCE(sqe->fd);
++	oldf = u64_to_user_ptr(READ_ONCE(sqe->addr));
++	newf = u64_to_user_ptr(READ_ONCE(sqe->addr2));
++	ren->new_dfd = READ_ONCE(sqe->len);
++	ren->flags = READ_ONCE(sqe->rename_flags);
++
++	ren->oldpath = getname(oldf);
++	if (IS_ERR(ren->oldpath))
++		return PTR_ERR(ren->oldpath);
++
++	ren->newpath = getname(newf);
++	if (IS_ERR(ren->newpath)) {
++		putname(ren->oldpath);
++		return PTR_ERR(ren->newpath);
++	}
++
++	req->flags |= REQ_F_NEED_CLEANUP;
++	return 0;
++}
++
++static int io_renameat(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_rename *ren = &req->rename;
++	int ret;
++
++	if (issue_flags & IO_URING_F_NONBLOCK)
++		return -EAGAIN;
++
++	ret = do_renameat2(ren->old_dfd, ren->oldpath, ren->new_dfd,
++				ren->newpath, ren->flags);
++
++	req->flags &= ~REQ_F_NEED_CLEANUP;
++	if (ret < 0)
++		req_set_fail(req);
++	io_req_complete(req, ret);
++	return 0;
++}
++
++static int io_unlinkat_prep(struct io_kiocb *req,
++			    const struct io_uring_sqe *sqe)
++{
++	struct io_unlink *un = &req->unlink;
++	const char __user *fname;
++
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index ||
++	    sqe->splice_fd_in)
++		return -EINVAL;
++	if (unlikely(req->flags & REQ_F_FIXED_FILE))
++		return -EBADF;
++
++	un->dfd = READ_ONCE(sqe->fd);
++
++	un->flags = READ_ONCE(sqe->unlink_flags);
++	if (un->flags & ~AT_REMOVEDIR)
++		return -EINVAL;
++
++	fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
++	un->filename = getname(fname);
++	if (IS_ERR(un->filename))
++		return PTR_ERR(un->filename);
++
++	req->flags |= REQ_F_NEED_CLEANUP;
++	return 0;
++}
++
++static int io_unlinkat(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_unlink *un = &req->unlink;
++	int ret;
++
++	if (issue_flags & IO_URING_F_NONBLOCK)
++		return -EAGAIN;
++
++	if (un->flags & AT_REMOVEDIR)
++		ret = do_rmdir(un->dfd, un->filename);
++	else
++		ret = do_unlinkat(un->dfd, un->filename);
++
++	req->flags &= ~REQ_F_NEED_CLEANUP;
++	if (ret < 0)
++		req_set_fail(req);
++	io_req_complete(req, ret);
++	return 0;
++}
++
++static int io_shutdown_prep(struct io_kiocb *req,
++			    const struct io_uring_sqe *sqe)
++{
++#if defined(CONFIG_NET)
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (unlikely(sqe->ioprio || sqe->off || sqe->addr || sqe->rw_flags ||
++		     sqe->buf_index || sqe->splice_fd_in))
++		return -EINVAL;
++
++	req->shutdown.how = READ_ONCE(sqe->len);
++	return 0;
++#else
++	return -EOPNOTSUPP;
++#endif
++}
++
++static int io_shutdown(struct io_kiocb *req, unsigned int issue_flags)
++{
++#if defined(CONFIG_NET)
++	struct socket *sock;
++	int ret;
++
++	if (issue_flags & IO_URING_F_NONBLOCK)
++		return -EAGAIN;
++
++	sock = sock_from_file(req->file, &ret);
++	if (unlikely(!sock))
++		return ret;
++
++	ret = __sys_shutdown_sock(sock, req->shutdown.how);
++	if (ret < 0)
++		req_set_fail(req);
++	io_req_complete(req, ret);
++	return 0;
++#else
++	return -EOPNOTSUPP;
++#endif
++}
++
++static int __io_splice_prep(struct io_kiocb *req,
++			    const struct io_uring_sqe *sqe)
++{
++	struct io_splice *sp = &req->splice;
++	unsigned int valid_flags = SPLICE_F_FD_IN_FIXED | SPLICE_F_ALL;
++
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++
++	sp->len = READ_ONCE(sqe->len);
++	sp->flags = READ_ONCE(sqe->splice_flags);
++	if (unlikely(sp->flags & ~valid_flags))
++		return -EINVAL;
++	sp->splice_fd_in = READ_ONCE(sqe->splice_fd_in);
++	return 0;
++}
++
++static int io_tee_prep(struct io_kiocb *req,
++		       const struct io_uring_sqe *sqe)
++{
++	if (READ_ONCE(sqe->splice_off_in) || READ_ONCE(sqe->off))
++		return -EINVAL;
++	return __io_splice_prep(req, sqe);
++}
++
++static int io_tee(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_splice *sp = &req->splice;
++	struct file *out = sp->file_out;
++	unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
++	struct file *in;
++	long ret = 0;
++
++	if (issue_flags & IO_URING_F_NONBLOCK)
++		return -EAGAIN;
++
++	in = io_file_get(req->ctx, req, sp->splice_fd_in,
++				  (sp->flags & SPLICE_F_FD_IN_FIXED));
++	if (!in) {
++		ret = -EBADF;
++		goto done;
++	}
++
++	if (sp->len)
++		ret = do_tee(in, out, sp->len, flags);
++
++	if (!(sp->flags & SPLICE_F_FD_IN_FIXED))
++		io_put_file(in);
++done:
++	if (ret != sp->len)
++		req_set_fail(req);
++	io_req_complete(req, ret);
++	return 0;
++}
++
++static int io_splice_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	struct io_splice *sp = &req->splice;
++
++	sp->off_in = READ_ONCE(sqe->splice_off_in);
++	sp->off_out = READ_ONCE(sqe->off);
++	return __io_splice_prep(req, sqe);
++}
++
++static int io_splice(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_splice *sp = &req->splice;
++	struct file *out = sp->file_out;
++	unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
++	loff_t *poff_in, *poff_out;
++	struct file *in;
++	long ret = 0;
++
++	if (issue_flags & IO_URING_F_NONBLOCK)
++		return -EAGAIN;
++
++	in = io_file_get(req->ctx, req, sp->splice_fd_in,
++				  (sp->flags & SPLICE_F_FD_IN_FIXED));
++	if (!in) {
++		ret = -EBADF;
++		goto done;
++	}
++
++	poff_in = (sp->off_in == -1) ? NULL : &sp->off_in;
++	poff_out = (sp->off_out == -1) ? NULL : &sp->off_out;
++
++	if (sp->len)
++		ret = do_splice(in, poff_in, out, poff_out, sp->len, flags);
++
++	if (!(sp->flags & SPLICE_F_FD_IN_FIXED))
++		io_put_file(in);
++done:
++	if (ret != sp->len)
++		req_set_fail(req);
++	io_req_complete(req, ret);
++	return 0;
++}
++
++/*
++ * IORING_OP_NOP just posts a completion event, nothing else.
++ */
++static int io_nop(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++
++	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++
++	__io_req_complete(req, issue_flags, 0, 0);
++	return 0;
++}
++
++static int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++
++	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
++		     sqe->splice_fd_in))
++		return -EINVAL;
++
++	req->sync.flags = READ_ONCE(sqe->fsync_flags);
++	if (unlikely(req->sync.flags & ~IORING_FSYNC_DATASYNC))
++		return -EINVAL;
++
++	req->sync.off = READ_ONCE(sqe->off);
++	req->sync.len = READ_ONCE(sqe->len);
++	return 0;
++}
++
++static int io_fsync(struct io_kiocb *req, unsigned int issue_flags)
++{
++	loff_t end = req->sync.off + req->sync.len;
++	int ret;
++
++	/* fsync always requires a blocking context */
++	if (issue_flags & IO_URING_F_NONBLOCK)
++		return -EAGAIN;
++
++	ret = vfs_fsync_range(req->file, req->sync.off,
++				end > 0 ? end : LLONG_MAX,
++				req->sync.flags & IORING_FSYNC_DATASYNC);
++	if (ret < 0)
++		req_set_fail(req);
++	io_req_complete(req, ret);
++	return 0;
++}
++
++static int io_fallocate_prep(struct io_kiocb *req,
++			     const struct io_uring_sqe *sqe)
++{
++	if (sqe->ioprio || sqe->buf_index || sqe->rw_flags ||
++	    sqe->splice_fd_in)
++		return -EINVAL;
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++
++	req->sync.off = READ_ONCE(sqe->off);
++	req->sync.len = READ_ONCE(sqe->addr);
++	req->sync.mode = READ_ONCE(sqe->len);
++	return 0;
++}
++
++static int io_fallocate(struct io_kiocb *req, unsigned int issue_flags)
++{
++	int ret;
++
++	/* fallocate always requiring blocking context */
++	if (issue_flags & IO_URING_F_NONBLOCK)
++		return -EAGAIN;
++	ret = vfs_fallocate(req->file, req->sync.mode, req->sync.off,
++				req->sync.len);
++	if (ret < 0)
++		req_set_fail(req);
++	else
++		fsnotify_modify(req->file);
++	io_req_complete(req, ret);
++	return 0;
++}
++
++static int __io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	const char __user *fname;
++	int ret;
++
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (unlikely(sqe->ioprio || sqe->buf_index))
++		return -EINVAL;
++	if (unlikely(req->flags & REQ_F_FIXED_FILE))
++		return -EBADF;
++
++	/* open.how should be already initialised */
++	if (!(req->open.how.flags & O_PATH) && force_o_largefile())
++		req->open.how.flags |= O_LARGEFILE;
++
++	req->open.dfd = READ_ONCE(sqe->fd);
++	fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
++	req->open.filename = getname(fname);
++	if (IS_ERR(req->open.filename)) {
++		ret = PTR_ERR(req->open.filename);
++		req->open.filename = NULL;
++		return ret;
++	}
++
++	req->open.file_slot = READ_ONCE(sqe->file_index);
++	if (req->open.file_slot && (req->open.how.flags & O_CLOEXEC))
++		return -EINVAL;
++
++	req->open.nofile = rlimit(RLIMIT_NOFILE);
++	req->flags |= REQ_F_NEED_CLEANUP;
++	return 0;
++}
++
++static int io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	u64 mode = READ_ONCE(sqe->len);
++	u64 flags = READ_ONCE(sqe->open_flags);
++
++	req->open.how = build_open_how(flags, mode);
++	return __io_openat_prep(req, sqe);
++}
++
++static int io_openat2_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	struct open_how __user *how;
++	size_t len;
++	int ret;
++
++	how = u64_to_user_ptr(READ_ONCE(sqe->addr2));
++	len = READ_ONCE(sqe->len);
++	if (len < OPEN_HOW_SIZE_VER0)
++		return -EINVAL;
++
++	ret = copy_struct_from_user(&req->open.how, sizeof(req->open.how), how,
++					len);
++	if (ret)
++		return ret;
++
++	return __io_openat_prep(req, sqe);
++}
++
++static int io_openat2(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct open_flags op;
++	struct file *file;
++	bool resolve_nonblock, nonblock_set;
++	bool fixed = !!req->open.file_slot;
++	int ret;
++
++	ret = build_open_flags(&req->open.how, &op);
++	if (ret)
++		goto err;
++	nonblock_set = op.open_flag & O_NONBLOCK;
++	resolve_nonblock = req->open.how.resolve & RESOLVE_CACHED;
++	if (issue_flags & IO_URING_F_NONBLOCK) {
++		/*
++		 * Don't bother trying for O_TRUNC, O_CREAT, or O_TMPFILE open,
++		 * it'll always -EAGAIN
++		 */
++		if (req->open.how.flags & (O_TRUNC | O_CREAT | O_TMPFILE))
++			return -EAGAIN;
++		op.lookup_flags |= LOOKUP_CACHED;
++		op.open_flag |= O_NONBLOCK;
++	}
++
++	if (!fixed) {
++		ret = __get_unused_fd_flags(req->open.how.flags, req->open.nofile);
++		if (ret < 0)
++			goto err;
++	}
++
++	file = do_filp_open(req->open.dfd, req->open.filename, &op);
++	if (IS_ERR(file)) {
++		/*
++		 * We could hang on to this 'fd' on retrying, but seems like
++		 * marginal gain for something that is now known to be a slower
++		 * path. So just put it, and we'll get a new one when we retry.
++		 */
++		if (!fixed)
++			put_unused_fd(ret);
++
++		ret = PTR_ERR(file);
++		/* only retry if RESOLVE_CACHED wasn't already set by application */
++		if (ret == -EAGAIN &&
++		    (!resolve_nonblock && (issue_flags & IO_URING_F_NONBLOCK)))
++			return -EAGAIN;
++		goto err;
++	}
++
++	if ((issue_flags & IO_URING_F_NONBLOCK) && !nonblock_set)
++		file->f_flags &= ~O_NONBLOCK;
++	fsnotify_open(file);
++
++	if (!fixed)
++		fd_install(ret, file);
++	else
++		ret = io_install_fixed_file(req, file, issue_flags,
++					    req->open.file_slot - 1);
++err:
++	putname(req->open.filename);
++	req->flags &= ~REQ_F_NEED_CLEANUP;
++	if (ret < 0)
++		req_set_fail(req);
++	__io_req_complete(req, issue_flags, ret, 0);
++	return 0;
++}
++
++static int io_openat(struct io_kiocb *req, unsigned int issue_flags)
++{
++	return io_openat2(req, issue_flags);
++}
++
++static int io_remove_buffers_prep(struct io_kiocb *req,
++				  const struct io_uring_sqe *sqe)
++{
++	struct io_provide_buf *p = &req->pbuf;
++	u64 tmp;
++
++	if (sqe->ioprio || sqe->rw_flags || sqe->addr || sqe->len || sqe->off ||
++	    sqe->splice_fd_in)
++		return -EINVAL;
++
++	tmp = READ_ONCE(sqe->fd);
++	if (!tmp || tmp > USHRT_MAX)
++		return -EINVAL;
++
++	memset(p, 0, sizeof(*p));
++	p->nbufs = tmp;
++	p->bgid = READ_ONCE(sqe->buf_group);
++	return 0;
++}
++
++static int __io_remove_buffers(struct io_ring_ctx *ctx, struct io_buffer *buf,
++			       int bgid, unsigned nbufs)
++{
++	unsigned i = 0;
++
++	/* shouldn't happen */
++	if (!nbufs)
++		return 0;
++
++	/* the head kbuf is the list itself */
++	while (!list_empty(&buf->list)) {
++		struct io_buffer *nxt;
++
++		nxt = list_first_entry(&buf->list, struct io_buffer, list);
++		list_del(&nxt->list);
++		kfree(nxt);
++		if (++i == nbufs)
++			return i;
++		cond_resched();
++	}
++	i++;
++	kfree(buf);
++	xa_erase(&ctx->io_buffers, bgid);
++
++	return i;
++}
++
++static int io_remove_buffers(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_provide_buf *p = &req->pbuf;
++	struct io_ring_ctx *ctx = req->ctx;
++	struct io_buffer *head;
++	int ret = 0;
++	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++
++	io_ring_submit_lock(ctx, !force_nonblock);
++
++	lockdep_assert_held(&ctx->uring_lock);
++
++	ret = -ENOENT;
++	head = xa_load(&ctx->io_buffers, p->bgid);
++	if (head)
++		ret = __io_remove_buffers(ctx, head, p->bgid, p->nbufs);
++	if (ret < 0)
++		req_set_fail(req);
++
++	/* complete before unlock, IOPOLL may need the lock */
++	__io_req_complete(req, issue_flags, ret, 0);
++	io_ring_submit_unlock(ctx, !force_nonblock);
++	return 0;
++}
++
++static int io_provide_buffers_prep(struct io_kiocb *req,
++				   const struct io_uring_sqe *sqe)
++{
++	unsigned long size, tmp_check;
++	struct io_provide_buf *p = &req->pbuf;
++	u64 tmp;
++
++	if (sqe->ioprio || sqe->rw_flags || sqe->splice_fd_in)
++		return -EINVAL;
++
++	tmp = READ_ONCE(sqe->fd);
++	if (!tmp || tmp > USHRT_MAX)
++		return -E2BIG;
++	p->nbufs = tmp;
++	p->addr = READ_ONCE(sqe->addr);
++	p->len = READ_ONCE(sqe->len);
++
++	if (check_mul_overflow((unsigned long)p->len, (unsigned long)p->nbufs,
++				&size))
++		return -EOVERFLOW;
++	if (check_add_overflow((unsigned long)p->addr, size, &tmp_check))
++		return -EOVERFLOW;
++
++	size = (unsigned long)p->len * p->nbufs;
++	if (!access_ok(u64_to_user_ptr(p->addr), size))
++		return -EFAULT;
++
++	p->bgid = READ_ONCE(sqe->buf_group);
++	tmp = READ_ONCE(sqe->off);
++	if (tmp > USHRT_MAX)
++		return -E2BIG;
++	p->bid = tmp;
++	return 0;
++}
++
++static int io_add_buffers(struct io_provide_buf *pbuf, struct io_buffer **head)
++{
++	struct io_buffer *buf;
++	u64 addr = pbuf->addr;
++	int i, bid = pbuf->bid;
++
++	for (i = 0; i < pbuf->nbufs; i++) {
++		buf = kmalloc(sizeof(*buf), GFP_KERNEL_ACCOUNT);
++		if (!buf)
++			break;
++
++		buf->addr = addr;
++		buf->len = min_t(__u32, pbuf->len, MAX_RW_COUNT);
++		buf->bid = bid;
++		addr += pbuf->len;
++		bid++;
++		if (!*head) {
++			INIT_LIST_HEAD(&buf->list);
++			*head = buf;
++		} else {
++			list_add_tail(&buf->list, &(*head)->list);
++		}
++		cond_resched();
++	}
++
++	return i ? i : -ENOMEM;
++}
++
++static int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_provide_buf *p = &req->pbuf;
++	struct io_ring_ctx *ctx = req->ctx;
++	struct io_buffer *head, *list;
++	int ret = 0;
++	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++
++	io_ring_submit_lock(ctx, !force_nonblock);
++
++	lockdep_assert_held(&ctx->uring_lock);
++
++	list = head = xa_load(&ctx->io_buffers, p->bgid);
++
++	ret = io_add_buffers(p, &head);
++	if (ret >= 0 && !list) {
++		ret = xa_insert(&ctx->io_buffers, p->bgid, head,
++				GFP_KERNEL_ACCOUNT);
++		if (ret < 0)
++			__io_remove_buffers(ctx, head, p->bgid, -1U);
++	}
++	if (ret < 0)
++		req_set_fail(req);
++	/* complete before unlock, IOPOLL may need the lock */
++	__io_req_complete(req, issue_flags, ret, 0);
++	io_ring_submit_unlock(ctx, !force_nonblock);
++	return 0;
++}
++
++static int io_epoll_ctl_prep(struct io_kiocb *req,
++			     const struct io_uring_sqe *sqe)
++{
++#if defined(CONFIG_EPOLL)
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
++		return -EINVAL;
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++
++	req->epoll.epfd = READ_ONCE(sqe->fd);
++	req->epoll.op = READ_ONCE(sqe->len);
++	req->epoll.fd = READ_ONCE(sqe->off);
++
++	if (ep_op_has_event(req->epoll.op)) {
++		struct epoll_event __user *ev;
++
++		ev = u64_to_user_ptr(READ_ONCE(sqe->addr));
++		if (copy_from_user(&req->epoll.event, ev, sizeof(*ev)))
++			return -EFAULT;
++	}
++
++	return 0;
++#else
++	return -EOPNOTSUPP;
++#endif
++}
++
++static int io_epoll_ctl(struct io_kiocb *req, unsigned int issue_flags)
++{
++#if defined(CONFIG_EPOLL)
++	struct io_epoll *ie = &req->epoll;
++	int ret;
++	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++
++	ret = do_epoll_ctl(ie->epfd, ie->op, ie->fd, &ie->event, force_nonblock);
++	if (force_nonblock && ret == -EAGAIN)
++		return -EAGAIN;
++
++	if (ret < 0)
++		req_set_fail(req);
++	__io_req_complete(req, issue_flags, ret, 0);
++	return 0;
++#else
++	return -EOPNOTSUPP;
++#endif
++}
++
++static int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++#if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
++	if (sqe->ioprio || sqe->buf_index || sqe->off || sqe->splice_fd_in)
++		return -EINVAL;
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++
++	req->madvise.addr = READ_ONCE(sqe->addr);
++	req->madvise.len = READ_ONCE(sqe->len);
++	req->madvise.advice = READ_ONCE(sqe->fadvise_advice);
++	return 0;
++#else
++	return -EOPNOTSUPP;
++#endif
++}
++
++static int io_madvise(struct io_kiocb *req, unsigned int issue_flags)
++{
++#if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
++	struct io_madvise *ma = &req->madvise;
++	int ret;
++
++	if (issue_flags & IO_URING_F_NONBLOCK)
++		return -EAGAIN;
++
++	ret = do_madvise(current->mm, ma->addr, ma->len, ma->advice);
++	if (ret < 0)
++		req_set_fail(req);
++	io_req_complete(req, ret);
++	return 0;
++#else
++	return -EOPNOTSUPP;
++#endif
++}
++
++static int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	if (sqe->ioprio || sqe->buf_index || sqe->addr || sqe->splice_fd_in)
++		return -EINVAL;
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++
++	req->fadvise.offset = READ_ONCE(sqe->off);
++	req->fadvise.len = READ_ONCE(sqe->len);
++	req->fadvise.advice = READ_ONCE(sqe->fadvise_advice);
++	return 0;
++}
++
++static int io_fadvise(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_fadvise *fa = &req->fadvise;
++	int ret;
++
++	if (issue_flags & IO_URING_F_NONBLOCK) {
++		switch (fa->advice) {
++		case POSIX_FADV_NORMAL:
++		case POSIX_FADV_RANDOM:
++		case POSIX_FADV_SEQUENTIAL:
++			break;
++		default:
++			return -EAGAIN;
++		}
++	}
++
++	ret = vfs_fadvise(req->file, fa->offset, fa->len, fa->advice);
++	if (ret < 0)
++		req_set_fail(req);
++	__io_req_complete(req, issue_flags, ret, 0);
++	return 0;
++}
++
++static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
++		return -EINVAL;
++	if (req->flags & REQ_F_FIXED_FILE)
++		return -EBADF;
++
++	req->statx.dfd = READ_ONCE(sqe->fd);
++	req->statx.mask = READ_ONCE(sqe->len);
++	req->statx.filename = u64_to_user_ptr(READ_ONCE(sqe->addr));
++	req->statx.buffer = u64_to_user_ptr(READ_ONCE(sqe->addr2));
++	req->statx.flags = READ_ONCE(sqe->statx_flags);
++
++	return 0;
++}
++
++static int io_statx(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_statx *ctx = &req->statx;
++	int ret;
++
++	if (issue_flags & IO_URING_F_NONBLOCK)
++		return -EAGAIN;
++
++	ret = do_statx(ctx->dfd, ctx->filename, ctx->flags, ctx->mask,
++		       ctx->buffer);
++
++	if (ret < 0)
++		req_set_fail(req);
++	io_req_complete(req, ret);
++	return 0;
++}
++
++static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->off || sqe->addr || sqe->len ||
++	    sqe->rw_flags || sqe->buf_index)
++		return -EINVAL;
++	if (req->flags & REQ_F_FIXED_FILE)
++		return -EBADF;
++
++	req->close.fd = READ_ONCE(sqe->fd);
++	req->close.file_slot = READ_ONCE(sqe->file_index);
++	if (req->close.file_slot && req->close.fd)
++		return -EINVAL;
++
++	return 0;
++}
++
++static int io_close(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct files_struct *files = current->files;
++	struct io_close *close = &req->close;
++	struct fdtable *fdt;
++	struct file *file = NULL;
++	int ret = -EBADF;
++
++	if (req->close.file_slot) {
++		ret = io_close_fixed(req, issue_flags);
++		goto err;
++	}
++
++	spin_lock(&files->file_lock);
++	fdt = files_fdtable(files);
++	if (close->fd >= fdt->max_fds) {
++		spin_unlock(&files->file_lock);
++		goto err;
++	}
++	file = fdt->fd[close->fd];
++	if (!file || file->f_op == &io_uring_fops) {
++		spin_unlock(&files->file_lock);
++		file = NULL;
++		goto err;
++	}
++
++	/* if the file has a flush method, be safe and punt to async */
++	if (file->f_op->flush && (issue_flags & IO_URING_F_NONBLOCK)) {
++		spin_unlock(&files->file_lock);
++		return -EAGAIN;
++	}
++
++	ret = __close_fd_get_file(close->fd, &file);
++	spin_unlock(&files->file_lock);
++	if (ret < 0) {
++		if (ret == -ENOENT)
++			ret = -EBADF;
++		goto err;
++	}
++
++	/* No ->flush() or already async, safely close from here */
++	ret = filp_close(file, current->files);
++err:
++	if (ret < 0)
++		req_set_fail(req);
++	if (file)
++		fput(file);
++	__io_req_complete(req, issue_flags, ret, 0);
++	return 0;
++}
++
++static int io_sfr_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++
++	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
++		     sqe->splice_fd_in))
++		return -EINVAL;
++
++	req->sync.off = READ_ONCE(sqe->off);
++	req->sync.len = READ_ONCE(sqe->len);
++	req->sync.flags = READ_ONCE(sqe->sync_range_flags);
++	return 0;
++}
++
++static int io_sync_file_range(struct io_kiocb *req, unsigned int issue_flags)
++{
++	int ret;
++
++	/* sync_file_range always requires a blocking context */
++	if (issue_flags & IO_URING_F_NONBLOCK)
++		return -EAGAIN;
++
++	ret = sync_file_range(req->file, req->sync.off, req->sync.len,
++				req->sync.flags);
++	if (ret < 0)
++		req_set_fail(req);
++	io_req_complete(req, ret);
++	return 0;
++}
++
++#if defined(CONFIG_NET)
++static int io_setup_async_msg(struct io_kiocb *req,
++			      struct io_async_msghdr *kmsg)
++{
++	struct io_async_msghdr *async_msg = req->async_data;
++
++	if (async_msg)
++		return -EAGAIN;
++	if (io_alloc_async_data(req)) {
++		kfree(kmsg->free_iov);
++		return -ENOMEM;
++	}
++	async_msg = req->async_data;
++	req->flags |= REQ_F_NEED_CLEANUP;
++	memcpy(async_msg, kmsg, sizeof(*kmsg));
++	if (async_msg->msg.msg_name)
++		async_msg->msg.msg_name = &async_msg->addr;
++	/* if were using fast_iov, set it to the new one */
++	if (!async_msg->free_iov)
++		async_msg->msg.msg_iter.iov = async_msg->fast_iov;
++
++	return -EAGAIN;
++}
++
++static int io_sendmsg_copy_hdr(struct io_kiocb *req,
++			       struct io_async_msghdr *iomsg)
++{
++	iomsg->msg.msg_name = &iomsg->addr;
++	iomsg->free_iov = iomsg->fast_iov;
++	return sendmsg_copy_msghdr(&iomsg->msg, req->sr_msg.umsg,
++				   req->sr_msg.msg_flags, &iomsg->free_iov);
++}
++
++static int io_sendmsg_prep_async(struct io_kiocb *req)
++{
++	int ret;
++
++	ret = io_sendmsg_copy_hdr(req, req->async_data);
++	if (!ret)
++		req->flags |= REQ_F_NEED_CLEANUP;
++	return ret;
++}
++
++static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	struct io_sr_msg *sr = &req->sr_msg;
++
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (unlikely(sqe->addr2 || sqe->file_index))
++		return -EINVAL;
++	if (unlikely(sqe->addr2 || sqe->file_index || sqe->ioprio))
++		return -EINVAL;
++
++	sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
++	sr->len = READ_ONCE(sqe->len);
++	sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
++	if (sr->msg_flags & MSG_DONTWAIT)
++		req->flags |= REQ_F_NOWAIT;
++
++#ifdef CONFIG_COMPAT
++	if (req->ctx->compat)
++		sr->msg_flags |= MSG_CMSG_COMPAT;
++#endif
++	return 0;
++}
++
++static int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_async_msghdr iomsg, *kmsg;
++	struct socket *sock;
++	unsigned flags;
++	int min_ret = 0;
++	int ret;
++
++	sock = sock_from_file(req->file, &ret);
++	if (unlikely(!sock))
++		return ret;
++
++	kmsg = req->async_data;
++	if (!kmsg) {
++		ret = io_sendmsg_copy_hdr(req, &iomsg);
++		if (ret)
++			return ret;
++		kmsg = &iomsg;
++	}
++
++	flags = req->sr_msg.msg_flags;
++	if (issue_flags & IO_URING_F_NONBLOCK)
++		flags |= MSG_DONTWAIT;
++	if (flags & MSG_WAITALL)
++		min_ret = iov_iter_count(&kmsg->msg.msg_iter);
++
++	ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags);
++	if ((issue_flags & IO_URING_F_NONBLOCK) && ret == -EAGAIN)
++		return io_setup_async_msg(req, kmsg);
++	if (ret == -ERESTARTSYS)
++		ret = -EINTR;
++
++	/* fast path, check for non-NULL to avoid function call */
++	if (kmsg->free_iov)
++		kfree(kmsg->free_iov);
++	req->flags &= ~REQ_F_NEED_CLEANUP;
++	if (ret < min_ret)
++		req_set_fail(req);
++	__io_req_complete(req, issue_flags, ret, 0);
++	return 0;
++}
++
++static int io_send(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_sr_msg *sr = &req->sr_msg;
++	struct msghdr msg;
++	struct iovec iov;
++	struct socket *sock;
++	unsigned flags;
++	int min_ret = 0;
++	int ret;
++
++	sock = sock_from_file(req->file, &ret);
++	if (unlikely(!sock))
++		return ret;
++
++	ret = import_single_range(WRITE, sr->buf, sr->len, &iov, &msg.msg_iter);
++	if (unlikely(ret))
++		return ret;
++
++	msg.msg_name = NULL;
++	msg.msg_control = NULL;
++	msg.msg_controllen = 0;
++	msg.msg_namelen = 0;
++
++	flags = req->sr_msg.msg_flags;
++	if (issue_flags & IO_URING_F_NONBLOCK)
++		flags |= MSG_DONTWAIT;
++	if (flags & MSG_WAITALL)
++		min_ret = iov_iter_count(&msg.msg_iter);
++
++	msg.msg_flags = flags;
++	ret = sock_sendmsg(sock, &msg);
++	if ((issue_flags & IO_URING_F_NONBLOCK) && ret == -EAGAIN)
++		return -EAGAIN;
++	if (ret == -ERESTARTSYS)
++		ret = -EINTR;
++
++	if (ret < min_ret)
++		req_set_fail(req);
++	__io_req_complete(req, issue_flags, ret, 0);
++	return 0;
++}
++
++static int __io_recvmsg_copy_hdr(struct io_kiocb *req,
++				 struct io_async_msghdr *iomsg)
++{
++	struct io_sr_msg *sr = &req->sr_msg;
++	struct iovec __user *uiov;
++	size_t iov_len;
++	int ret;
++
++	ret = __copy_msghdr_from_user(&iomsg->msg, sr->umsg,
++					&iomsg->uaddr, &uiov, &iov_len);
++	if (ret)
++		return ret;
++
++	if (req->flags & REQ_F_BUFFER_SELECT) {
++		if (iov_len > 1)
++			return -EINVAL;
++		if (copy_from_user(iomsg->fast_iov, uiov, sizeof(*uiov)))
++			return -EFAULT;
++		sr->len = iomsg->fast_iov[0].iov_len;
++		iomsg->free_iov = NULL;
++	} else {
++		iomsg->free_iov = iomsg->fast_iov;
++		ret = __import_iovec(READ, uiov, iov_len, UIO_FASTIOV,
++				     &iomsg->free_iov, &iomsg->msg.msg_iter,
++				     false);
++		if (ret > 0)
++			ret = 0;
++	}
++
++	return ret;
++}
++
++#ifdef CONFIG_COMPAT
++static int __io_compat_recvmsg_copy_hdr(struct io_kiocb *req,
++					struct io_async_msghdr *iomsg)
++{
++	struct io_sr_msg *sr = &req->sr_msg;
++	struct compat_iovec __user *uiov;
++	compat_uptr_t ptr;
++	compat_size_t len;
++	int ret;
++
++	ret = __get_compat_msghdr(&iomsg->msg, sr->umsg_compat, &iomsg->uaddr,
++				  &ptr, &len);
++	if (ret)
++		return ret;
++
++	uiov = compat_ptr(ptr);
++	if (req->flags & REQ_F_BUFFER_SELECT) {
++		compat_ssize_t clen;
++
++		if (len > 1)
++			return -EINVAL;
++		if (!access_ok(uiov, sizeof(*uiov)))
++			return -EFAULT;
++		if (__get_user(clen, &uiov->iov_len))
++			return -EFAULT;
++		if (clen < 0)
++			return -EINVAL;
++		sr->len = clen;
++		iomsg->free_iov = NULL;
++	} else {
++		iomsg->free_iov = iomsg->fast_iov;
++		ret = __import_iovec(READ, (struct iovec __user *)uiov, len,
++				   UIO_FASTIOV, &iomsg->free_iov,
++				   &iomsg->msg.msg_iter, true);
++		if (ret < 0)
++			return ret;
++	}
++
++	return 0;
++}
++#endif
++
++static int io_recvmsg_copy_hdr(struct io_kiocb *req,
++			       struct io_async_msghdr *iomsg)
++{
++	iomsg->msg.msg_name = &iomsg->addr;
++
++#ifdef CONFIG_COMPAT
++	if (req->ctx->compat)
++		return __io_compat_recvmsg_copy_hdr(req, iomsg);
++#endif
++
++	return __io_recvmsg_copy_hdr(req, iomsg);
++}
++
++static struct io_buffer *io_recv_buffer_select(struct io_kiocb *req,
++					       bool needs_lock)
++{
++	struct io_sr_msg *sr = &req->sr_msg;
++	struct io_buffer *kbuf;
++
++	kbuf = io_buffer_select(req, &sr->len, sr->bgid, sr->kbuf, needs_lock);
++	if (IS_ERR(kbuf))
++		return kbuf;
++
++	sr->kbuf = kbuf;
++	req->flags |= REQ_F_BUFFER_SELECTED;
++	return kbuf;
++}
++
++static inline unsigned int io_put_recv_kbuf(struct io_kiocb *req)
++{
++	return io_put_kbuf(req, req->sr_msg.kbuf);
++}
++
++static int io_recvmsg_prep_async(struct io_kiocb *req)
++{
++	int ret;
++
++	ret = io_recvmsg_copy_hdr(req, req->async_data);
++	if (!ret)
++		req->flags |= REQ_F_NEED_CLEANUP;
++	return ret;
++}
++
++static int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	struct io_sr_msg *sr = &req->sr_msg;
++
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (unlikely(sqe->addr2 || sqe->file_index))
++		return -EINVAL;
++	if (unlikely(sqe->addr2 || sqe->file_index || sqe->ioprio))
++		return -EINVAL;
++
++	sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
++	sr->len = READ_ONCE(sqe->len);
++	sr->bgid = READ_ONCE(sqe->buf_group);
++	sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
++	if (sr->msg_flags & MSG_DONTWAIT)
++		req->flags |= REQ_F_NOWAIT;
++
++#ifdef CONFIG_COMPAT
++	if (req->ctx->compat)
++		sr->msg_flags |= MSG_CMSG_COMPAT;
++#endif
++	return 0;
++}
++
++static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_async_msghdr iomsg, *kmsg;
++	struct socket *sock;
++	struct io_buffer *kbuf;
++	unsigned flags;
++	int min_ret = 0;
++	int ret, cflags = 0;
++	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++
++	sock = sock_from_file(req->file, &ret);
++	if (unlikely(!sock))
++		return ret;
++
++	kmsg = req->async_data;
++	if (!kmsg) {
++		ret = io_recvmsg_copy_hdr(req, &iomsg);
++		if (ret)
++			return ret;
++		kmsg = &iomsg;
++	}
++
++	if (req->flags & REQ_F_BUFFER_SELECT) {
++		kbuf = io_recv_buffer_select(req, !force_nonblock);
++		if (IS_ERR(kbuf))
++			return PTR_ERR(kbuf);
++		kmsg->fast_iov[0].iov_base = u64_to_user_ptr(kbuf->addr);
++		kmsg->fast_iov[0].iov_len = req->sr_msg.len;
++		iov_iter_init(&kmsg->msg.msg_iter, READ, kmsg->fast_iov,
++				1, req->sr_msg.len);
++	}
++
++	flags = req->sr_msg.msg_flags;
++	if (force_nonblock)
++		flags |= MSG_DONTWAIT;
++	if (flags & MSG_WAITALL)
++		min_ret = iov_iter_count(&kmsg->msg.msg_iter);
++
++	ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.umsg,
++					kmsg->uaddr, flags);
++	if (force_nonblock && ret == -EAGAIN)
++		return io_setup_async_msg(req, kmsg);
++	if (ret == -ERESTARTSYS)
++		ret = -EINTR;
++
++	if (req->flags & REQ_F_BUFFER_SELECTED)
++		cflags = io_put_recv_kbuf(req);
++	/* fast path, check for non-NULL to avoid function call */
++	if (kmsg->free_iov)
++		kfree(kmsg->free_iov);
++	req->flags &= ~REQ_F_NEED_CLEANUP;
++	if (ret < min_ret || ((flags & MSG_WAITALL) && (kmsg->msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
++		req_set_fail(req);
++	__io_req_complete(req, issue_flags, ret, cflags);
++	return 0;
++}
++
++static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_buffer *kbuf;
++	struct io_sr_msg *sr = &req->sr_msg;
++	struct msghdr msg;
++	void __user *buf = sr->buf;
++	struct socket *sock;
++	struct iovec iov;
++	unsigned flags;
++	int min_ret = 0;
++	int ret, cflags = 0;
++	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++
++	sock = sock_from_file(req->file, &ret);
++	if (unlikely(!sock))
++		return ret;
++
++	if (req->flags & REQ_F_BUFFER_SELECT) {
++		kbuf = io_recv_buffer_select(req, !force_nonblock);
++		if (IS_ERR(kbuf))
++			return PTR_ERR(kbuf);
++		buf = u64_to_user_ptr(kbuf->addr);
++	}
++
++	ret = import_single_range(READ, buf, sr->len, &iov, &msg.msg_iter);
++	if (unlikely(ret))
++		goto out_free;
++
++	msg.msg_name = NULL;
++	msg.msg_control = NULL;
++	msg.msg_controllen = 0;
++	msg.msg_namelen = 0;
++	msg.msg_iocb = NULL;
++	msg.msg_flags = 0;
++
++	flags = req->sr_msg.msg_flags;
++	if (force_nonblock)
++		flags |= MSG_DONTWAIT;
++	if (flags & MSG_WAITALL)
++		min_ret = iov_iter_count(&msg.msg_iter);
++
++	ret = sock_recvmsg(sock, &msg, flags);
++	if (force_nonblock && ret == -EAGAIN)
++		return -EAGAIN;
++	if (ret == -ERESTARTSYS)
++		ret = -EINTR;
++out_free:
++	if (req->flags & REQ_F_BUFFER_SELECTED)
++		cflags = io_put_recv_kbuf(req);
++	if (ret < min_ret || ((flags & MSG_WAITALL) && (msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
++		req_set_fail(req);
++	__io_req_complete(req, issue_flags, ret, cflags);
++	return 0;
++}
++
++static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	struct io_accept *accept = &req->accept;
++
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->len || sqe->buf_index)
++		return -EINVAL;
++
++	accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
++	accept->addr_len = u64_to_user_ptr(READ_ONCE(sqe->addr2));
++	accept->flags = READ_ONCE(sqe->accept_flags);
++	accept->nofile = rlimit(RLIMIT_NOFILE);
++
++	accept->file_slot = READ_ONCE(sqe->file_index);
++	if (accept->file_slot && (accept->flags & SOCK_CLOEXEC))
++		return -EINVAL;
++	if (accept->flags & ~(SOCK_CLOEXEC | SOCK_NONBLOCK))
++		return -EINVAL;
++	if (SOCK_NONBLOCK != O_NONBLOCK && (accept->flags & SOCK_NONBLOCK))
++		accept->flags = (accept->flags & ~SOCK_NONBLOCK) | O_NONBLOCK;
++	return 0;
++}
++
++static int io_accept(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_accept *accept = &req->accept;
++	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++	unsigned int file_flags = force_nonblock ? O_NONBLOCK : 0;
++	bool fixed = !!accept->file_slot;
++	struct file *file;
++	int ret, fd;
++
++	if (req->file->f_flags & O_NONBLOCK)
++		req->flags |= REQ_F_NOWAIT;
++
++	if (!fixed) {
++		fd = __get_unused_fd_flags(accept->flags, accept->nofile);
++		if (unlikely(fd < 0))
++			return fd;
++	}
++	file = do_accept(req->file, file_flags, accept->addr, accept->addr_len,
++			 accept->flags);
++
++	if (IS_ERR(file)) {
++		if (!fixed)
++			put_unused_fd(fd);
++		ret = PTR_ERR(file);
++		if (ret == -EAGAIN && force_nonblock)
++			return -EAGAIN;
++		if (ret == -ERESTARTSYS)
++			ret = -EINTR;
++		req_set_fail(req);
++	} else if (!fixed) {
++		fd_install(fd, file);
++		ret = fd;
++	} else {
++		ret = io_install_fixed_file(req, file, issue_flags,
++					    accept->file_slot - 1);
++	}
++	__io_req_complete(req, issue_flags, ret, 0);
++	return 0;
++}
++
++static int io_connect_prep_async(struct io_kiocb *req)
++{
++	struct io_async_connect *io = req->async_data;
++	struct io_connect *conn = &req->connect;
++
++	return move_addr_to_kernel(conn->addr, conn->addr_len, &io->address);
++}
++
++static int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	struct io_connect *conn = &req->connect;
++
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags ||
++	    sqe->splice_fd_in)
++		return -EINVAL;
++
++	conn->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
++	conn->addr_len =  READ_ONCE(sqe->addr2);
++	return 0;
++}
++
++static int io_connect(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_async_connect __io, *io;
++	unsigned file_flags;
++	int ret;
++	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++
++	if (req->async_data) {
++		io = req->async_data;
++	} else {
++		ret = move_addr_to_kernel(req->connect.addr,
++						req->connect.addr_len,
++						&__io.address);
++		if (ret)
++			goto out;
++		io = &__io;
++	}
++
++	file_flags = force_nonblock ? O_NONBLOCK : 0;
++
++	ret = __sys_connect_file(req->file, &io->address,
++					req->connect.addr_len, file_flags);
++	if ((ret == -EAGAIN || ret == -EINPROGRESS) && force_nonblock) {
++		if (req->async_data)
++			return -EAGAIN;
++		if (io_alloc_async_data(req)) {
++			ret = -ENOMEM;
++			goto out;
++		}
++		memcpy(req->async_data, &__io, sizeof(__io));
++		return -EAGAIN;
++	}
++	if (ret == -ERESTARTSYS)
++		ret = -EINTR;
++out:
++	if (ret < 0)
++		req_set_fail(req);
++	__io_req_complete(req, issue_flags, ret, 0);
++	return 0;
++}
++#else /* !CONFIG_NET */
++#define IO_NETOP_FN(op)							\
++static int io_##op(struct io_kiocb *req, unsigned int issue_flags)	\
++{									\
++	return -EOPNOTSUPP;						\
++}
++
++#define IO_NETOP_PREP(op)						\
++IO_NETOP_FN(op)								\
++static int io_##op##_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) \
++{									\
++	return -EOPNOTSUPP;						\
++}									\
++
++#define IO_NETOP_PREP_ASYNC(op)						\
++IO_NETOP_PREP(op)							\
++static int io_##op##_prep_async(struct io_kiocb *req)			\
++{									\
++	return -EOPNOTSUPP;						\
++}
++
++IO_NETOP_PREP_ASYNC(sendmsg);
++IO_NETOP_PREP_ASYNC(recvmsg);
++IO_NETOP_PREP_ASYNC(connect);
++IO_NETOP_PREP(accept);
++IO_NETOP_FN(send);
++IO_NETOP_FN(recv);
++#endif /* CONFIG_NET */
++
++struct io_poll_table {
++	struct poll_table_struct pt;
++	struct io_kiocb *req;
++	int nr_entries;
++	int error;
++};
++
++#define IO_POLL_CANCEL_FLAG	BIT(31)
++#define IO_POLL_RETRY_FLAG	BIT(30)
++#define IO_POLL_REF_MASK	GENMASK(29, 0)
++
++/*
++ * We usually have 1-2 refs taken, 128 is more than enough and we want to
++ * maximise the margin between this amount and the moment when it overflows.
++ */
++#define IO_POLL_REF_BIAS       128
++
++static bool io_poll_get_ownership_slowpath(struct io_kiocb *req)
++{
++	int v;
++
++	/*
++	 * poll_refs are already elevated and we don't have much hope for
++	 * grabbing the ownership. Instead of incrementing set a retry flag
++	 * to notify the loop that there might have been some change.
++	 */
++	v = atomic_fetch_or(IO_POLL_RETRY_FLAG, &req->poll_refs);
++	if (v & IO_POLL_REF_MASK)
++		return false;
++	return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
++}
++
++/*
++ * If refs part of ->poll_refs (see IO_POLL_REF_MASK) is 0, it's free. We can
++ * bump it and acquire ownership. It's disallowed to modify requests while not
++ * owning it, that prevents from races for enqueueing task_work's and b/w
++ * arming poll and wakeups.
++ */
++static inline bool io_poll_get_ownership(struct io_kiocb *req)
++{
++	if (unlikely(atomic_read(&req->poll_refs) >= IO_POLL_REF_BIAS))
++		return io_poll_get_ownership_slowpath(req);
++	return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
++}
++
++static void io_poll_mark_cancelled(struct io_kiocb *req)
++{
++	atomic_or(IO_POLL_CANCEL_FLAG, &req->poll_refs);
++}
++
++static struct io_poll_iocb *io_poll_get_double(struct io_kiocb *req)
++{
++	/* pure poll stashes this in ->async_data, poll driven retry elsewhere */
++	if (req->opcode == IORING_OP_POLL_ADD)
++		return req->async_data;
++	return req->apoll->double_poll;
++}
++
++static struct io_poll_iocb *io_poll_get_single(struct io_kiocb *req)
++{
++	if (req->opcode == IORING_OP_POLL_ADD)
++		return &req->poll;
++	return &req->apoll->poll;
++}
++
++static void io_poll_req_insert(struct io_kiocb *req)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	struct hlist_head *list;
++
++	list = &ctx->cancel_hash[hash_long(req->user_data, ctx->cancel_hash_bits)];
++	hlist_add_head(&req->hash_node, list);
++}
++
++static void io_init_poll_iocb(struct io_poll_iocb *poll, __poll_t events,
++			      wait_queue_func_t wake_func)
++{
++	poll->head = NULL;
++#define IO_POLL_UNMASK	(EPOLLERR|EPOLLHUP|EPOLLNVAL|EPOLLRDHUP)
++	/* mask in events that we always want/need */
++	poll->events = events | IO_POLL_UNMASK;
++	INIT_LIST_HEAD(&poll->wait.entry);
++	init_waitqueue_func_entry(&poll->wait, wake_func);
++}
++
++static inline void io_poll_remove_entry(struct io_poll_iocb *poll)
++{
++	struct wait_queue_head *head = smp_load_acquire(&poll->head);
++
++	if (head) {
++		spin_lock_irq(&head->lock);
++		list_del_init(&poll->wait.entry);
++		poll->head = NULL;
++		spin_unlock_irq(&head->lock);
++	}
++}
++
++static void io_poll_remove_entries(struct io_kiocb *req)
++{
++	struct io_poll_iocb *poll = io_poll_get_single(req);
++	struct io_poll_iocb *poll_double = io_poll_get_double(req);
++
++	/*
++	 * While we hold the waitqueue lock and the waitqueue is nonempty,
++	 * wake_up_pollfree() will wait for us.  However, taking the waitqueue
++	 * lock in the first place can race with the waitqueue being freed.
++	 *
++	 * We solve this as eventpoll does: by taking advantage of the fact that
++	 * all users of wake_up_pollfree() will RCU-delay the actual free.  If
++	 * we enter rcu_read_lock() and see that the pointer to the queue is
++	 * non-NULL, we can then lock it without the memory being freed out from
++	 * under us.
++	 *
++	 * Keep holding rcu_read_lock() as long as we hold the queue lock, in
++	 * case the caller deletes the entry from the queue, leaving it empty.
++	 * In that case, only RCU prevents the queue memory from being freed.
++	 */
++	rcu_read_lock();
++	io_poll_remove_entry(poll);
++	if (poll_double)
++		io_poll_remove_entry(poll_double);
++	rcu_read_unlock();
++}
++
++/*
++ * All poll tw should go through this. Checks for poll events, manages
++ * references, does rewait, etc.
++ *
++ * Returns a negative error on failure. >0 when no action require, which is
++ * either spurious wakeup or multishot CQE is served. 0 when it's done with
++ * the request, then the mask is stored in req->result.
++ */
++static int io_poll_check_events(struct io_kiocb *req)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	struct io_poll_iocb *poll = io_poll_get_single(req);
++	int v;
++
++	/* req->task == current here, checking PF_EXITING is safe */
++	if (unlikely(req->task->flags & PF_EXITING))
++		io_poll_mark_cancelled(req);
++
++	do {
++		v = atomic_read(&req->poll_refs);
++
++		/* tw handler should be the owner, and so have some references */
++		if (WARN_ON_ONCE(!(v & IO_POLL_REF_MASK)))
++			return 0;
++		if (v & IO_POLL_CANCEL_FLAG)
++			return -ECANCELED;
++		/*
++		 * cqe.res contains only events of the first wake up
++		 * and all others are be lost. Redo vfs_poll() to get
++		 * up to date state.
++		 */
++		if ((v & IO_POLL_REF_MASK) != 1)
++			req->result = 0;
++		if (v & IO_POLL_RETRY_FLAG) {
++			req->result = 0;
++			/*
++			 * We won't find new events that came in between
++			 * vfs_poll and the ref put unless we clear the
++			 * flag in advance.
++			 */
++			atomic_andnot(IO_POLL_RETRY_FLAG, &req->poll_refs);
++			v &= ~IO_POLL_RETRY_FLAG;
++		}
++
++		if (!req->result) {
++			struct poll_table_struct pt = { ._key = poll->events };
++
++			req->result = vfs_poll(req->file, &pt) & poll->events;
++		}
++
++		/* multishot, just fill an CQE and proceed */
++		if (req->result && !(poll->events & EPOLLONESHOT)) {
++			__poll_t mask = mangle_poll(req->result & poll->events);
++			bool filled;
++
++			spin_lock(&ctx->completion_lock);
++			filled = io_fill_cqe_aux(ctx, req->user_data, mask,
++						 IORING_CQE_F_MORE);
++			io_commit_cqring(ctx);
++			spin_unlock(&ctx->completion_lock);
++			if (unlikely(!filled))
++				return -ECANCELED;
++			io_cqring_ev_posted(ctx);
++		} else if (req->result) {
++			return 0;
++		}
++
++		/* force the next iteration to vfs_poll() */
++		req->result = 0;
++
++		/*
++		 * Release all references, retry if someone tried to restart
++		 * task_work while we were executing it.
++		 */
++	} while (atomic_sub_return(v & IO_POLL_REF_MASK, &req->poll_refs) &
++					IO_POLL_REF_MASK);
++
++	return 1;
++}
++
++static void io_poll_task_func(struct io_kiocb *req, bool *locked)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	int ret;
++
++	ret = io_poll_check_events(req);
++	if (ret > 0)
++		return;
++
++	if (!ret) {
++		req->result = mangle_poll(req->result & req->poll.events);
++	} else {
++		req->result = ret;
++		req_set_fail(req);
++	}
++
++	io_poll_remove_entries(req);
++	spin_lock(&ctx->completion_lock);
++	hash_del(&req->hash_node);
++	spin_unlock(&ctx->completion_lock);
++	io_req_complete_post(req, req->result, 0);
++}
++
++static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	int ret;
++
++	ret = io_poll_check_events(req);
++	if (ret > 0)
++		return;
++
++	io_poll_remove_entries(req);
++	spin_lock(&ctx->completion_lock);
++	hash_del(&req->hash_node);
++	spin_unlock(&ctx->completion_lock);
++
++	if (!ret)
++		io_req_task_submit(req, locked);
++	else
++		io_req_complete_failed(req, ret);
++}
++
++static void __io_poll_execute(struct io_kiocb *req, int mask)
++{
++	req->result = mask;
++	if (req->opcode == IORING_OP_POLL_ADD)
++		req->io_task_work.func = io_poll_task_func;
++	else
++		req->io_task_work.func = io_apoll_task_func;
++
++	trace_io_uring_task_add(req->ctx, req->opcode, req->user_data, mask);
++	io_req_task_work_add(req);
++}
++
++static inline void io_poll_execute(struct io_kiocb *req, int res)
++{
++	if (io_poll_get_ownership(req))
++		__io_poll_execute(req, res);
++}
++
++static void io_poll_cancel_req(struct io_kiocb *req)
++{
++	io_poll_mark_cancelled(req);
++	/* kick tw, which should complete the request */
++	io_poll_execute(req, 0);
++}
++
++static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
++			void *key)
++{
++	struct io_kiocb *req = wait->private;
++	struct io_poll_iocb *poll = container_of(wait, struct io_poll_iocb,
++						 wait);
++	__poll_t mask = key_to_poll(key);
++
++	if (unlikely(mask & POLLFREE)) {
++		io_poll_mark_cancelled(req);
++		/* we have to kick tw in case it's not already */
++		io_poll_execute(req, 0);
++
++		/*
++		 * If the waitqueue is being freed early but someone is already
++		 * holds ownership over it, we have to tear down the request as
++		 * best we can. That means immediately removing the request from
++		 * its waitqueue and preventing all further accesses to the
++		 * waitqueue via the request.
++		 */
++		list_del_init(&poll->wait.entry);
++
++		/*
++		 * Careful: this *must* be the last step, since as soon
++		 * as req->head is NULL'ed out, the request can be
++		 * completed and freed, since aio_poll_complete_work()
++		 * will no longer need to take the waitqueue lock.
++		 */
++		smp_store_release(&poll->head, NULL);
++		return 1;
++	}
++
++	/* for instances that support it check for an event match first */
++	if (mask && !(mask & poll->events))
++		return 0;
++
++	if (io_poll_get_ownership(req)) {
++		/*
++		 * If we trigger a multishot poll off our own wakeup path,
++		 * disable multishot as there is a circular dependency between
++		 * CQ posting and triggering the event.
++		 */
++		if (mask & EPOLL_URING_WAKE)
++			poll->events |= EPOLLONESHOT;
++
++		__io_poll_execute(req, mask);
++	}
++	return 1;
++}
++
++static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
++			    struct wait_queue_head *head,
++			    struct io_poll_iocb **poll_ptr)
++{
++	struct io_kiocb *req = pt->req;
++
++	/*
++	 * The file being polled uses multiple waitqueues for poll handling
++	 * (e.g. one for read, one for write). Setup a separate io_poll_iocb
++	 * if this happens.
++	 */
++	if (unlikely(pt->nr_entries)) {
++		struct io_poll_iocb *first = poll;
++
++		/* double add on the same waitqueue head, ignore */
++		if (first->head == head)
++			return;
++		/* already have a 2nd entry, fail a third attempt */
++		if (*poll_ptr) {
++			if ((*poll_ptr)->head == head)
++				return;
++			pt->error = -EINVAL;
++			return;
++		}
++
++		poll = kmalloc(sizeof(*poll), GFP_ATOMIC);
++		if (!poll) {
++			pt->error = -ENOMEM;
++			return;
++		}
++		io_init_poll_iocb(poll, first->events, first->wait.func);
++		*poll_ptr = poll;
++	}
++
++	pt->nr_entries++;
++	poll->head = head;
++	poll->wait.private = req;
++
++	if (poll->events & EPOLLEXCLUSIVE)
++		add_wait_queue_exclusive(head, &poll->wait);
++	else
++		add_wait_queue(head, &poll->wait);
++}
++
++static void io_poll_queue_proc(struct file *file, struct wait_queue_head *head,
++			       struct poll_table_struct *p)
++{
++	struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
++
++	__io_queue_proc(&pt->req->poll, pt, head,
++			(struct io_poll_iocb **) &pt->req->async_data);
++}
++
++static int __io_arm_poll_handler(struct io_kiocb *req,
++				 struct io_poll_iocb *poll,
++				 struct io_poll_table *ipt, __poll_t mask)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++
++	INIT_HLIST_NODE(&req->hash_node);
++	io_init_poll_iocb(poll, mask, io_poll_wake);
++	poll->file = req->file;
++	poll->wait.private = req;
++
++	ipt->pt._key = mask;
++	ipt->req = req;
++	ipt->error = 0;
++	ipt->nr_entries = 0;
++
++	/*
++	 * Take the ownership to delay any tw execution up until we're done
++	 * with poll arming. see io_poll_get_ownership().
++	 */
++	atomic_set(&req->poll_refs, 1);
++	mask = vfs_poll(req->file, &ipt->pt) & poll->events;
++
++	if (mask && (poll->events & EPOLLONESHOT)) {
++		io_poll_remove_entries(req);
++		/* no one else has access to the req, forget about the ref */
++		return mask;
++	}
++	if (!mask && unlikely(ipt->error || !ipt->nr_entries)) {
++		io_poll_remove_entries(req);
++		if (!ipt->error)
++			ipt->error = -EINVAL;
++		return 0;
++	}
++
++	spin_lock(&ctx->completion_lock);
++	io_poll_req_insert(req);
++	spin_unlock(&ctx->completion_lock);
++
++	if (mask) {
++		/* can't multishot if failed, just queue the event we've got */
++		if (unlikely(ipt->error || !ipt->nr_entries)) {
++			poll->events |= EPOLLONESHOT;
++			ipt->error = 0;
++		}
++		__io_poll_execute(req, mask);
++		return 0;
++	}
++
++	/*
++	 * Try to release ownership. If we see a change of state, e.g.
++	 * poll was waken up, queue up a tw, it'll deal with it.
++	 */
++	if (atomic_cmpxchg(&req->poll_refs, 1, 0) != 1)
++		__io_poll_execute(req, 0);
++	return 0;
++}
++
++static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
++			       struct poll_table_struct *p)
++{
++	struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
++	struct async_poll *apoll = pt->req->apoll;
++
++	__io_queue_proc(&apoll->poll, pt, head, &apoll->double_poll);
++}
++
++enum {
++	IO_APOLL_OK,
++	IO_APOLL_ABORTED,
++	IO_APOLL_READY
++};
++
++static int io_arm_poll_handler(struct io_kiocb *req)
++{
++	const struct io_op_def *def = &io_op_defs[req->opcode];
++	struct io_ring_ctx *ctx = req->ctx;
++	struct async_poll *apoll;
++	struct io_poll_table ipt;
++	__poll_t mask = EPOLLONESHOT | POLLERR | POLLPRI;
++	int ret;
++
++	if (!req->file || !file_can_poll(req->file))
++		return IO_APOLL_ABORTED;
++	if (req->flags & REQ_F_POLLED)
++		return IO_APOLL_ABORTED;
++	if (!def->pollin && !def->pollout)
++		return IO_APOLL_ABORTED;
++
++	if (def->pollin) {
++		mask |= POLLIN | POLLRDNORM;
++
++		/* If reading from MSG_ERRQUEUE using recvmsg, ignore POLLIN */
++		if ((req->opcode == IORING_OP_RECVMSG) &&
++		    (req->sr_msg.msg_flags & MSG_ERRQUEUE))
++			mask &= ~POLLIN;
++	} else {
++		mask |= POLLOUT | POLLWRNORM;
++	}
++
++	apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
++	if (unlikely(!apoll))
++		return IO_APOLL_ABORTED;
++	apoll->double_poll = NULL;
++	req->apoll = apoll;
++	req->flags |= REQ_F_POLLED;
++	ipt.pt._qproc = io_async_queue_proc;
++
++	ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask);
++	if (ret || ipt.error)
++		return ret ? IO_APOLL_READY : IO_APOLL_ABORTED;
++
++	trace_io_uring_poll_arm(ctx, req, req->opcode, req->user_data,
++				mask, apoll->poll.events);
++	return IO_APOLL_OK;
++}
++
++/*
++ * Returns true if we found and killed one or more poll requests
++ */
++static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk,
++			       bool cancel_all)
++{
++	struct hlist_node *tmp;
++	struct io_kiocb *req;
++	bool found = false;
++	int i;
++
++	spin_lock(&ctx->completion_lock);
++	for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
++		struct hlist_head *list;
++
++		list = &ctx->cancel_hash[i];
++		hlist_for_each_entry_safe(req, tmp, list, hash_node) {
++			if (io_match_task_safe(req, tsk, cancel_all)) {
++				hlist_del_init(&req->hash_node);
++				io_poll_cancel_req(req);
++				found = true;
++			}
++		}
++	}
++	spin_unlock(&ctx->completion_lock);
++	return found;
++}
++
++static struct io_kiocb *io_poll_find(struct io_ring_ctx *ctx, __u64 sqe_addr,
++				     bool poll_only)
++	__must_hold(&ctx->completion_lock)
++{
++	struct hlist_head *list;
++	struct io_kiocb *req;
++
++	list = &ctx->cancel_hash[hash_long(sqe_addr, ctx->cancel_hash_bits)];
++	hlist_for_each_entry(req, list, hash_node) {
++		if (sqe_addr != req->user_data)
++			continue;
++		if (poll_only && req->opcode != IORING_OP_POLL_ADD)
++			continue;
++		return req;
++	}
++	return NULL;
++}
++
++static bool io_poll_disarm(struct io_kiocb *req)
++	__must_hold(&ctx->completion_lock)
++{
++	if (!io_poll_get_ownership(req))
++		return false;
++	io_poll_remove_entries(req);
++	hash_del(&req->hash_node);
++	return true;
++}
++
++static int io_poll_cancel(struct io_ring_ctx *ctx, __u64 sqe_addr,
++			  bool poll_only)
++	__must_hold(&ctx->completion_lock)
++{
++	struct io_kiocb *req = io_poll_find(ctx, sqe_addr, poll_only);
++
++	if (!req)
++		return -ENOENT;
++	io_poll_cancel_req(req);
++	return 0;
++}
++
++static __poll_t io_poll_parse_events(const struct io_uring_sqe *sqe,
++				     unsigned int flags)
++{
++	u32 events;
++
++	events = READ_ONCE(sqe->poll32_events);
++#ifdef __BIG_ENDIAN
++	events = swahw32(events);
++#endif
++	if (!(flags & IORING_POLL_ADD_MULTI))
++		events |= EPOLLONESHOT;
++	return demangle_poll(events) | (events & (EPOLLEXCLUSIVE|EPOLLONESHOT));
++}
++
++static int io_poll_update_prep(struct io_kiocb *req,
++			       const struct io_uring_sqe *sqe)
++{
++	struct io_poll_update *upd = &req->poll_update;
++	u32 flags;
++
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
++		return -EINVAL;
++	flags = READ_ONCE(sqe->len);
++	if (flags & ~(IORING_POLL_UPDATE_EVENTS | IORING_POLL_UPDATE_USER_DATA |
++		      IORING_POLL_ADD_MULTI))
++		return -EINVAL;
++	/* meaningless without update */
++	if (flags == IORING_POLL_ADD_MULTI)
++		return -EINVAL;
++
++	upd->old_user_data = READ_ONCE(sqe->addr);
++	upd->update_events = flags & IORING_POLL_UPDATE_EVENTS;
++	upd->update_user_data = flags & IORING_POLL_UPDATE_USER_DATA;
++
++	upd->new_user_data = READ_ONCE(sqe->off);
++	if (!upd->update_user_data && upd->new_user_data)
++		return -EINVAL;
++	if (upd->update_events)
++		upd->events = io_poll_parse_events(sqe, flags);
++	else if (sqe->poll32_events)
++		return -EINVAL;
++
++	return 0;
++}
++
++static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	struct io_poll_iocb *poll = &req->poll;
++	u32 flags;
++
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->buf_index || sqe->off || sqe->addr)
++		return -EINVAL;
++	flags = READ_ONCE(sqe->len);
++	if (flags & ~IORING_POLL_ADD_MULTI)
++		return -EINVAL;
++
++	io_req_set_refcount(req);
++	poll->events = io_poll_parse_events(sqe, flags);
++	return 0;
++}
++
++static int io_poll_add(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_poll_iocb *poll = &req->poll;
++	struct io_poll_table ipt;
++	int ret;
++
++	ipt.pt._qproc = io_poll_queue_proc;
++
++	ret = __io_arm_poll_handler(req, &req->poll, &ipt, poll->events);
++	if (!ret && ipt.error)
++		req_set_fail(req);
++	ret = ret ?: ipt.error;
++	if (ret)
++		__io_req_complete(req, issue_flags, ret, 0);
++	return 0;
++}
++
++static int io_poll_update(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	struct io_kiocb *preq;
++	int ret2, ret = 0;
++
++	spin_lock(&ctx->completion_lock);
++	preq = io_poll_find(ctx, req->poll_update.old_user_data, true);
++	if (!preq || !io_poll_disarm(preq)) {
++		spin_unlock(&ctx->completion_lock);
++		ret = preq ? -EALREADY : -ENOENT;
++		goto out;
++	}
++	spin_unlock(&ctx->completion_lock);
++
++	if (req->poll_update.update_events || req->poll_update.update_user_data) {
++		/* only mask one event flags, keep behavior flags */
++		if (req->poll_update.update_events) {
++			preq->poll.events &= ~0xffff;
++			preq->poll.events |= req->poll_update.events & 0xffff;
++			preq->poll.events |= IO_POLL_UNMASK;
++		}
++		if (req->poll_update.update_user_data)
++			preq->user_data = req->poll_update.new_user_data;
++
++		ret2 = io_poll_add(preq, issue_flags);
++		/* successfully updated, don't complete poll request */
++		if (!ret2)
++			goto out;
++	}
++	req_set_fail(preq);
++	io_req_complete(preq, -ECANCELED);
++out:
++	if (ret < 0)
++		req_set_fail(req);
++	/* complete update request, we're done with it */
++	io_req_complete(req, ret);
++	return 0;
++}
++
++static void io_req_task_timeout(struct io_kiocb *req, bool *locked)
++{
++	req_set_fail(req);
++	io_req_complete_post(req, -ETIME, 0);
++}
++
++static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
++{
++	struct io_timeout_data *data = container_of(timer,
++						struct io_timeout_data, timer);
++	struct io_kiocb *req = data->req;
++	struct io_ring_ctx *ctx = req->ctx;
++	unsigned long flags;
++
++	spin_lock_irqsave(&ctx->timeout_lock, flags);
++	list_del_init(&req->timeout.list);
++	atomic_set(&req->ctx->cq_timeouts,
++		atomic_read(&req->ctx->cq_timeouts) + 1);
++	spin_unlock_irqrestore(&ctx->timeout_lock, flags);
++
++	req->io_task_work.func = io_req_task_timeout;
++	io_req_task_work_add(req);
++	return HRTIMER_NORESTART;
++}
++
++static struct io_kiocb *io_timeout_extract(struct io_ring_ctx *ctx,
++					   __u64 user_data)
++	__must_hold(&ctx->timeout_lock)
++{
++	struct io_timeout_data *io;
++	struct io_kiocb *req;
++	bool found = false;
++
++	list_for_each_entry(req, &ctx->timeout_list, timeout.list) {
++		found = user_data == req->user_data;
++		if (found)
++			break;
++	}
++	if (!found)
++		return ERR_PTR(-ENOENT);
++
++	io = req->async_data;
++	if (hrtimer_try_to_cancel(&io->timer) == -1)
++		return ERR_PTR(-EALREADY);
++	list_del_init(&req->timeout.list);
++	return req;
++}
++
++static int io_timeout_cancel(struct io_ring_ctx *ctx, __u64 user_data)
++	__must_hold(&ctx->completion_lock)
++	__must_hold(&ctx->timeout_lock)
++{
++	struct io_kiocb *req = io_timeout_extract(ctx, user_data);
++
++	if (IS_ERR(req))
++		return PTR_ERR(req);
++
++	req_set_fail(req);
++	io_fill_cqe_req(req, -ECANCELED, 0);
++	io_put_req_deferred(req);
++	return 0;
++}
++
++static clockid_t io_timeout_get_clock(struct io_timeout_data *data)
++{
++	switch (data->flags & IORING_TIMEOUT_CLOCK_MASK) {
++	case IORING_TIMEOUT_BOOTTIME:
++		return CLOCK_BOOTTIME;
++	case IORING_TIMEOUT_REALTIME:
++		return CLOCK_REALTIME;
++	default:
++		/* can't happen, vetted at prep time */
++		WARN_ON_ONCE(1);
++		fallthrough;
++	case 0:
++		return CLOCK_MONOTONIC;
++	}
++}
++
++static int io_linked_timeout_update(struct io_ring_ctx *ctx, __u64 user_data,
++				    struct timespec64 *ts, enum hrtimer_mode mode)
++	__must_hold(&ctx->timeout_lock)
++{
++	struct io_timeout_data *io;
++	struct io_kiocb *req;
++	bool found = false;
++
++	list_for_each_entry(req, &ctx->ltimeout_list, timeout.list) {
++		found = user_data == req->user_data;
++		if (found)
++			break;
++	}
++	if (!found)
++		return -ENOENT;
++
++	io = req->async_data;
++	if (hrtimer_try_to_cancel(&io->timer) == -1)
++		return -EALREADY;
++	hrtimer_init(&io->timer, io_timeout_get_clock(io), mode);
++	io->timer.function = io_link_timeout_fn;
++	hrtimer_start(&io->timer, timespec64_to_ktime(*ts), mode);
++	return 0;
++}
++
++static int io_timeout_update(struct io_ring_ctx *ctx, __u64 user_data,
++			     struct timespec64 *ts, enum hrtimer_mode mode)
++	__must_hold(&ctx->timeout_lock)
++{
++	struct io_kiocb *req = io_timeout_extract(ctx, user_data);
++	struct io_timeout_data *data;
++
++	if (IS_ERR(req))
++		return PTR_ERR(req);
++
++	req->timeout.off = 0; /* noseq */
++	data = req->async_data;
++	list_add_tail(&req->timeout.list, &ctx->timeout_list);
++	hrtimer_init(&data->timer, io_timeout_get_clock(data), mode);
++	data->timer.function = io_timeout_fn;
++	hrtimer_start(&data->timer, timespec64_to_ktime(*ts), mode);
++	return 0;
++}
++
++static int io_timeout_remove_prep(struct io_kiocb *req,
++				  const struct io_uring_sqe *sqe)
++{
++	struct io_timeout_rem *tr = &req->timeout_rem;
++
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->buf_index || sqe->len || sqe->splice_fd_in)
++		return -EINVAL;
++
++	tr->ltimeout = false;
++	tr->addr = READ_ONCE(sqe->addr);
++	tr->flags = READ_ONCE(sqe->timeout_flags);
++	if (tr->flags & IORING_TIMEOUT_UPDATE_MASK) {
++		if (hweight32(tr->flags & IORING_TIMEOUT_CLOCK_MASK) > 1)
++			return -EINVAL;
++		if (tr->flags & IORING_LINK_TIMEOUT_UPDATE)
++			tr->ltimeout = true;
++		if (tr->flags & ~(IORING_TIMEOUT_UPDATE_MASK|IORING_TIMEOUT_ABS))
++			return -EINVAL;
++		if (get_timespec64(&tr->ts, u64_to_user_ptr(sqe->addr2)))
++			return -EFAULT;
++	} else if (tr->flags) {
++		/* timeout removal doesn't support flags */
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
++static inline enum hrtimer_mode io_translate_timeout_mode(unsigned int flags)
++{
++	return (flags & IORING_TIMEOUT_ABS) ? HRTIMER_MODE_ABS
++					    : HRTIMER_MODE_REL;
++}
++
++/*
++ * Remove or update an existing timeout command
++ */
++static int io_timeout_remove(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_timeout_rem *tr = &req->timeout_rem;
++	struct io_ring_ctx *ctx = req->ctx;
++	int ret;
++
++	if (!(req->timeout_rem.flags & IORING_TIMEOUT_UPDATE)) {
++		spin_lock(&ctx->completion_lock);
++		spin_lock_irq(&ctx->timeout_lock);
++		ret = io_timeout_cancel(ctx, tr->addr);
++		spin_unlock_irq(&ctx->timeout_lock);
++		spin_unlock(&ctx->completion_lock);
++	} else {
++		enum hrtimer_mode mode = io_translate_timeout_mode(tr->flags);
++
++		spin_lock_irq(&ctx->timeout_lock);
++		if (tr->ltimeout)
++			ret = io_linked_timeout_update(ctx, tr->addr, &tr->ts, mode);
++		else
++			ret = io_timeout_update(ctx, tr->addr, &tr->ts, mode);
++		spin_unlock_irq(&ctx->timeout_lock);
++	}
++
++	if (ret < 0)
++		req_set_fail(req);
++	io_req_complete_post(req, ret, 0);
++	return 0;
++}
++
++static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
++			   bool is_timeout_link)
++{
++	struct io_timeout_data *data;
++	unsigned flags;
++	u32 off = READ_ONCE(sqe->off);
++
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->buf_index || sqe->len != 1 ||
++	    sqe->splice_fd_in)
++		return -EINVAL;
++	if (off && is_timeout_link)
++		return -EINVAL;
++	flags = READ_ONCE(sqe->timeout_flags);
++	if (flags & ~(IORING_TIMEOUT_ABS | IORING_TIMEOUT_CLOCK_MASK))
++		return -EINVAL;
++	/* more than one clock specified is invalid, obviously */
++	if (hweight32(flags & IORING_TIMEOUT_CLOCK_MASK) > 1)
++		return -EINVAL;
++
++	INIT_LIST_HEAD(&req->timeout.list);
++	req->timeout.off = off;
++	if (unlikely(off && !req->ctx->off_timeout_used))
++		req->ctx->off_timeout_used = true;
++
++	if (!req->async_data && io_alloc_async_data(req))
++		return -ENOMEM;
++
++	data = req->async_data;
++	data->req = req;
++	data->flags = flags;
++
++	if (get_timespec64(&data->ts, u64_to_user_ptr(sqe->addr)))
++		return -EFAULT;
++
++	INIT_LIST_HEAD(&req->timeout.list);
++	data->mode = io_translate_timeout_mode(flags);
++	hrtimer_init(&data->timer, io_timeout_get_clock(data), data->mode);
++
++	if (is_timeout_link) {
++		struct io_submit_link *link = &req->ctx->submit_state.link;
++
++		if (!link->head)
++			return -EINVAL;
++		if (link->last->opcode == IORING_OP_LINK_TIMEOUT)
++			return -EINVAL;
++		req->timeout.head = link->last;
++		link->last->flags |= REQ_F_ARM_LTIMEOUT;
++	}
++	return 0;
++}
++
++static int io_timeout(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	struct io_timeout_data *data = req->async_data;
++	struct list_head *entry;
++	u32 tail, off = req->timeout.off;
++
++	spin_lock_irq(&ctx->timeout_lock);
++
++	/*
++	 * sqe->off holds how many events that need to occur for this
++	 * timeout event to be satisfied. If it isn't set, then this is
++	 * a pure timeout request, sequence isn't used.
++	 */
++	if (io_is_timeout_noseq(req)) {
++		entry = ctx->timeout_list.prev;
++		goto add;
++	}
++
++	tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
++	req->timeout.target_seq = tail + off;
++
++	/* Update the last seq here in case io_flush_timeouts() hasn't.
++	 * This is safe because ->completion_lock is held, and submissions
++	 * and completions are never mixed in the same ->completion_lock section.
++	 */
++	ctx->cq_last_tm_flush = tail;
++
++	/*
++	 * Insertion sort, ensuring the first entry in the list is always
++	 * the one we need first.
++	 */
++	list_for_each_prev(entry, &ctx->timeout_list) {
++		struct io_kiocb *nxt = list_entry(entry, struct io_kiocb,
++						  timeout.list);
++
++		if (io_is_timeout_noseq(nxt))
++			continue;
++		/* nxt.seq is behind @tail, otherwise would've been completed */
++		if (off >= nxt->timeout.target_seq - tail)
++			break;
++	}
++add:
++	list_add(&req->timeout.list, entry);
++	data->timer.function = io_timeout_fn;
++	hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), data->mode);
++	spin_unlock_irq(&ctx->timeout_lock);
++	return 0;
++}
++
++struct io_cancel_data {
++	struct io_ring_ctx *ctx;
++	u64 user_data;
++};
++
++static bool io_cancel_cb(struct io_wq_work *work, void *data)
++{
++	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++	struct io_cancel_data *cd = data;
++
++	return req->ctx == cd->ctx && req->user_data == cd->user_data;
++}
++
++static int io_async_cancel_one(struct io_uring_task *tctx, u64 user_data,
++			       struct io_ring_ctx *ctx)
++{
++	struct io_cancel_data data = { .ctx = ctx, .user_data = user_data, };
++	enum io_wq_cancel cancel_ret;
++	int ret = 0;
++
++	if (!tctx || !tctx->io_wq)
++		return -ENOENT;
++
++	cancel_ret = io_wq_cancel_cb(tctx->io_wq, io_cancel_cb, &data, false);
++	switch (cancel_ret) {
++	case IO_WQ_CANCEL_OK:
++		ret = 0;
++		break;
++	case IO_WQ_CANCEL_RUNNING:
++		ret = -EALREADY;
++		break;
++	case IO_WQ_CANCEL_NOTFOUND:
++		ret = -ENOENT;
++		break;
++	}
++
++	return ret;
++}
++
++static int io_try_cancel_userdata(struct io_kiocb *req, u64 sqe_addr)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	int ret;
++
++	WARN_ON_ONCE(!io_wq_current_is_worker() && req->task != current);
++
++	ret = io_async_cancel_one(req->task->io_uring, sqe_addr, ctx);
++	if (ret != -ENOENT)
++		return ret;
++
++	spin_lock(&ctx->completion_lock);
++	spin_lock_irq(&ctx->timeout_lock);
++	ret = io_timeout_cancel(ctx, sqe_addr);
++	spin_unlock_irq(&ctx->timeout_lock);
++	if (ret != -ENOENT)
++		goto out;
++	ret = io_poll_cancel(ctx, sqe_addr, false);
++out:
++	spin_unlock(&ctx->completion_lock);
++	return ret;
++}
++
++static int io_async_cancel_prep(struct io_kiocb *req,
++				const struct io_uring_sqe *sqe)
++{
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags ||
++	    sqe->splice_fd_in)
++		return -EINVAL;
++
++	req->cancel.addr = READ_ONCE(sqe->addr);
++	return 0;
++}
++
++static int io_async_cancel(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	u64 sqe_addr = req->cancel.addr;
++	struct io_tctx_node *node;
++	int ret;
++
++	ret = io_try_cancel_userdata(req, sqe_addr);
++	if (ret != -ENOENT)
++		goto done;
++
++	/* slow path, try all io-wq's */
++	io_ring_submit_lock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
++	ret = -ENOENT;
++	list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
++		struct io_uring_task *tctx = node->task->io_uring;
++
++		ret = io_async_cancel_one(tctx, req->cancel.addr, ctx);
++		if (ret != -ENOENT)
++			break;
++	}
++	io_ring_submit_unlock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
++done:
++	if (ret < 0)
++		req_set_fail(req);
++	io_req_complete_post(req, ret, 0);
++	return 0;
++}
++
++static int io_rsrc_update_prep(struct io_kiocb *req,
++				const struct io_uring_sqe *sqe)
++{
++	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->rw_flags || sqe->splice_fd_in)
++		return -EINVAL;
++
++	req->rsrc_update.offset = READ_ONCE(sqe->off);
++	req->rsrc_update.nr_args = READ_ONCE(sqe->len);
++	if (!req->rsrc_update.nr_args)
++		return -EINVAL;
++	req->rsrc_update.arg = READ_ONCE(sqe->addr);
++	return 0;
++}
++
++static int io_files_update(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	struct io_uring_rsrc_update2 up;
++	int ret;
++
++	up.offset = req->rsrc_update.offset;
++	up.data = req->rsrc_update.arg;
++	up.nr = 0;
++	up.tags = 0;
++	up.resv = 0;
++	up.resv2 = 0;
++
++	io_ring_submit_lock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
++	ret = __io_register_rsrc_update(ctx, IORING_RSRC_FILE,
++					&up, req->rsrc_update.nr_args);
++	io_ring_submit_unlock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
++
++	if (ret < 0)
++		req_set_fail(req);
++	__io_req_complete(req, issue_flags, ret, 0);
++	return 0;
++}
++
++static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++	switch (req->opcode) {
++	case IORING_OP_NOP:
++		return 0;
++	case IORING_OP_READV:
++	case IORING_OP_READ_FIXED:
++	case IORING_OP_READ:
++		return io_read_prep(req, sqe);
++	case IORING_OP_WRITEV:
++	case IORING_OP_WRITE_FIXED:
++	case IORING_OP_WRITE:
++		return io_write_prep(req, sqe);
++	case IORING_OP_POLL_ADD:
++		return io_poll_add_prep(req, sqe);
++	case IORING_OP_POLL_REMOVE:
++		return io_poll_update_prep(req, sqe);
++	case IORING_OP_FSYNC:
++		return io_fsync_prep(req, sqe);
++	case IORING_OP_SYNC_FILE_RANGE:
++		return io_sfr_prep(req, sqe);
++	case IORING_OP_SENDMSG:
++	case IORING_OP_SEND:
++		return io_sendmsg_prep(req, sqe);
++	case IORING_OP_RECVMSG:
++	case IORING_OP_RECV:
++		return io_recvmsg_prep(req, sqe);
++	case IORING_OP_CONNECT:
++		return io_connect_prep(req, sqe);
++	case IORING_OP_TIMEOUT:
++		return io_timeout_prep(req, sqe, false);
++	case IORING_OP_TIMEOUT_REMOVE:
++		return io_timeout_remove_prep(req, sqe);
++	case IORING_OP_ASYNC_CANCEL:
++		return io_async_cancel_prep(req, sqe);
++	case IORING_OP_LINK_TIMEOUT:
++		return io_timeout_prep(req, sqe, true);
++	case IORING_OP_ACCEPT:
++		return io_accept_prep(req, sqe);
++	case IORING_OP_FALLOCATE:
++		return io_fallocate_prep(req, sqe);
++	case IORING_OP_OPENAT:
++		return io_openat_prep(req, sqe);
++	case IORING_OP_CLOSE:
++		return io_close_prep(req, sqe);
++	case IORING_OP_FILES_UPDATE:
++		return io_rsrc_update_prep(req, sqe);
++	case IORING_OP_STATX:
++		return io_statx_prep(req, sqe);
++	case IORING_OP_FADVISE:
++		return io_fadvise_prep(req, sqe);
++	case IORING_OP_MADVISE:
++		return io_madvise_prep(req, sqe);
++	case IORING_OP_OPENAT2:
++		return io_openat2_prep(req, sqe);
++	case IORING_OP_EPOLL_CTL:
++		return io_epoll_ctl_prep(req, sqe);
++	case IORING_OP_SPLICE:
++		return io_splice_prep(req, sqe);
++	case IORING_OP_PROVIDE_BUFFERS:
++		return io_provide_buffers_prep(req, sqe);
++	case IORING_OP_REMOVE_BUFFERS:
++		return io_remove_buffers_prep(req, sqe);
++	case IORING_OP_TEE:
++		return io_tee_prep(req, sqe);
++	case IORING_OP_SHUTDOWN:
++		return io_shutdown_prep(req, sqe);
++	case IORING_OP_RENAMEAT:
++		return io_renameat_prep(req, sqe);
++	case IORING_OP_UNLINKAT:
++		return io_unlinkat_prep(req, sqe);
++	}
++
++	printk_once(KERN_WARNING "io_uring: unhandled opcode %d\n",
++			req->opcode);
++	return -EINVAL;
++}
++
++static int io_req_prep_async(struct io_kiocb *req)
++{
++	if (!io_op_defs[req->opcode].needs_async_setup)
++		return 0;
++	if (WARN_ON_ONCE(req->async_data))
++		return -EFAULT;
++	if (io_alloc_async_data(req))
++		return -EAGAIN;
++
++	switch (req->opcode) {
++	case IORING_OP_READV:
++		return io_rw_prep_async(req, READ);
++	case IORING_OP_WRITEV:
++		return io_rw_prep_async(req, WRITE);
++	case IORING_OP_SENDMSG:
++		return io_sendmsg_prep_async(req);
++	case IORING_OP_RECVMSG:
++		return io_recvmsg_prep_async(req);
++	case IORING_OP_CONNECT:
++		return io_connect_prep_async(req);
++	}
++	printk_once(KERN_WARNING "io_uring: prep_async() bad opcode %d\n",
++		    req->opcode);
++	return -EFAULT;
++}
++
++static u32 io_get_sequence(struct io_kiocb *req)
++{
++	u32 seq = req->ctx->cached_sq_head;
++
++	/* need original cached_sq_head, but it was increased for each req */
++	io_for_each_link(req, req)
++		seq--;
++	return seq;
++}
++
++static bool io_drain_req(struct io_kiocb *req)
++{
++	struct io_kiocb *pos;
++	struct io_ring_ctx *ctx = req->ctx;
++	struct io_defer_entry *de;
++	int ret;
++	u32 seq;
++
++	if (req->flags & REQ_F_FAIL) {
++		io_req_complete_fail_submit(req);
++		return true;
++	}
++
++	/*
++	 * If we need to drain a request in the middle of a link, drain the
++	 * head request and the next request/link after the current link.
++	 * Considering sequential execution of links, IOSQE_IO_DRAIN will be
++	 * maintained for every request of our link.
++	 */
++	if (ctx->drain_next) {
++		req->flags |= REQ_F_IO_DRAIN;
++		ctx->drain_next = false;
++	}
++	/* not interested in head, start from the first linked */
++	io_for_each_link(pos, req->link) {
++		if (pos->flags & REQ_F_IO_DRAIN) {
++			ctx->drain_next = true;
++			req->flags |= REQ_F_IO_DRAIN;
++			break;
++		}
++	}
++
++	/* Still need defer if there is pending req in defer list. */
++	spin_lock(&ctx->completion_lock);
++	if (likely(list_empty_careful(&ctx->defer_list) &&
++		!(req->flags & REQ_F_IO_DRAIN))) {
++		spin_unlock(&ctx->completion_lock);
++		ctx->drain_active = false;
++		return false;
++	}
++	spin_unlock(&ctx->completion_lock);
++
++	seq = io_get_sequence(req);
++	/* Still a chance to pass the sequence check */
++	if (!req_need_defer(req, seq) && list_empty_careful(&ctx->defer_list))
++		return false;
++
++	ret = io_req_prep_async(req);
++	if (ret)
++		goto fail;
++	io_prep_async_link(req);
++	de = kmalloc(sizeof(*de), GFP_KERNEL);
++	if (!de) {
++		ret = -ENOMEM;
++fail:
++		io_req_complete_failed(req, ret);
++		return true;
++	}
++
++	spin_lock(&ctx->completion_lock);
++	if (!req_need_defer(req, seq) && list_empty(&ctx->defer_list)) {
++		spin_unlock(&ctx->completion_lock);
++		kfree(de);
++		io_queue_async_work(req, NULL);
++		return true;
++	}
++
++	trace_io_uring_defer(ctx, req, req->user_data);
++	de->req = req;
++	de->seq = seq;
++	list_add_tail(&de->list, &ctx->defer_list);
++	spin_unlock(&ctx->completion_lock);
++	return true;
++}
++
++static void io_clean_op(struct io_kiocb *req)
++{
++	if (req->flags & REQ_F_BUFFER_SELECTED) {
++		switch (req->opcode) {
++		case IORING_OP_READV:
++		case IORING_OP_READ_FIXED:
++		case IORING_OP_READ:
++			kfree((void *)(unsigned long)req->rw.addr);
++			break;
++		case IORING_OP_RECVMSG:
++		case IORING_OP_RECV:
++			kfree(req->sr_msg.kbuf);
++			break;
++		}
++	}
++
++	if (req->flags & REQ_F_NEED_CLEANUP) {
++		switch (req->opcode) {
++		case IORING_OP_READV:
++		case IORING_OP_READ_FIXED:
++		case IORING_OP_READ:
++		case IORING_OP_WRITEV:
++		case IORING_OP_WRITE_FIXED:
++		case IORING_OP_WRITE: {
++			struct io_async_rw *io = req->async_data;
++
++			kfree(io->free_iovec);
++			break;
++			}
++		case IORING_OP_RECVMSG:
++		case IORING_OP_SENDMSG: {
++			struct io_async_msghdr *io = req->async_data;
++
++			kfree(io->free_iov);
++			break;
++			}
++		case IORING_OP_OPENAT:
++		case IORING_OP_OPENAT2:
++			if (req->open.filename)
++				putname(req->open.filename);
++			break;
++		case IORING_OP_RENAMEAT:
++			putname(req->rename.oldpath);
++			putname(req->rename.newpath);
++			break;
++		case IORING_OP_UNLINKAT:
++			putname(req->unlink.filename);
++			break;
++		}
++	}
++	if ((req->flags & REQ_F_POLLED) && req->apoll) {
++		kfree(req->apoll->double_poll);
++		kfree(req->apoll);
++		req->apoll = NULL;
++	}
++	if (req->flags & REQ_F_INFLIGHT) {
++		struct io_uring_task *tctx = req->task->io_uring;
++
++		atomic_dec(&tctx->inflight_tracked);
++	}
++	if (req->flags & REQ_F_CREDS)
++		put_cred(req->creds);
++
++	req->flags &= ~IO_REQ_CLEAN_FLAGS;
++}
++
++static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	const struct cred *creds = NULL;
++	int ret;
++
++	if ((req->flags & REQ_F_CREDS) && req->creds != current_cred())
++		creds = override_creds(req->creds);
++
++	switch (req->opcode) {
++	case IORING_OP_NOP:
++		ret = io_nop(req, issue_flags);
++		break;
++	case IORING_OP_READV:
++	case IORING_OP_READ_FIXED:
++	case IORING_OP_READ:
++		ret = io_read(req, issue_flags);
++		break;
++	case IORING_OP_WRITEV:
++	case IORING_OP_WRITE_FIXED:
++	case IORING_OP_WRITE:
++		ret = io_write(req, issue_flags);
++		break;
++	case IORING_OP_FSYNC:
++		ret = io_fsync(req, issue_flags);
++		break;
++	case IORING_OP_POLL_ADD:
++		ret = io_poll_add(req, issue_flags);
++		break;
++	case IORING_OP_POLL_REMOVE:
++		ret = io_poll_update(req, issue_flags);
++		break;
++	case IORING_OP_SYNC_FILE_RANGE:
++		ret = io_sync_file_range(req, issue_flags);
++		break;
++	case IORING_OP_SENDMSG:
++		ret = io_sendmsg(req, issue_flags);
++		break;
++	case IORING_OP_SEND:
++		ret = io_send(req, issue_flags);
++		break;
++	case IORING_OP_RECVMSG:
++		ret = io_recvmsg(req, issue_flags);
++		break;
++	case IORING_OP_RECV:
++		ret = io_recv(req, issue_flags);
++		break;
++	case IORING_OP_TIMEOUT:
++		ret = io_timeout(req, issue_flags);
++		break;
++	case IORING_OP_TIMEOUT_REMOVE:
++		ret = io_timeout_remove(req, issue_flags);
++		break;
++	case IORING_OP_ACCEPT:
++		ret = io_accept(req, issue_flags);
++		break;
++	case IORING_OP_CONNECT:
++		ret = io_connect(req, issue_flags);
++		break;
++	case IORING_OP_ASYNC_CANCEL:
++		ret = io_async_cancel(req, issue_flags);
++		break;
++	case IORING_OP_FALLOCATE:
++		ret = io_fallocate(req, issue_flags);
++		break;
++	case IORING_OP_OPENAT:
++		ret = io_openat(req, issue_flags);
++		break;
++	case IORING_OP_CLOSE:
++		ret = io_close(req, issue_flags);
++		break;
++	case IORING_OP_FILES_UPDATE:
++		ret = io_files_update(req, issue_flags);
++		break;
++	case IORING_OP_STATX:
++		ret = io_statx(req, issue_flags);
++		break;
++	case IORING_OP_FADVISE:
++		ret = io_fadvise(req, issue_flags);
++		break;
++	case IORING_OP_MADVISE:
++		ret = io_madvise(req, issue_flags);
++		break;
++	case IORING_OP_OPENAT2:
++		ret = io_openat2(req, issue_flags);
++		break;
++	case IORING_OP_EPOLL_CTL:
++		ret = io_epoll_ctl(req, issue_flags);
++		break;
++	case IORING_OP_SPLICE:
++		ret = io_splice(req, issue_flags);
++		break;
++	case IORING_OP_PROVIDE_BUFFERS:
++		ret = io_provide_buffers(req, issue_flags);
++		break;
++	case IORING_OP_REMOVE_BUFFERS:
++		ret = io_remove_buffers(req, issue_flags);
++		break;
++	case IORING_OP_TEE:
++		ret = io_tee(req, issue_flags);
++		break;
++	case IORING_OP_SHUTDOWN:
++		ret = io_shutdown(req, issue_flags);
++		break;
++	case IORING_OP_RENAMEAT:
++		ret = io_renameat(req, issue_flags);
++		break;
++	case IORING_OP_UNLINKAT:
++		ret = io_unlinkat(req, issue_flags);
++		break;
++	default:
++		ret = -EINVAL;
++		break;
++	}
++
++	if (creds)
++		revert_creds(creds);
++	if (ret)
++		return ret;
++	/* If the op doesn't have a file, we're not polling for it */
++	if ((ctx->flags & IORING_SETUP_IOPOLL) && req->file)
++		io_iopoll_req_issued(req);
++
++	return 0;
++}
++
++static struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
++{
++	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++
++	req = io_put_req_find_next(req);
++	return req ? &req->work : NULL;
++}
++
++static void io_wq_submit_work(struct io_wq_work *work)
++{
++	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++	struct io_kiocb *timeout;
++	int ret = 0;
++
++	/* one will be dropped by ->io_free_work() after returning to io-wq */
++	if (!(req->flags & REQ_F_REFCOUNT))
++		__io_req_set_refcount(req, 2);
++	else
++		req_ref_get(req);
++
++	timeout = io_prep_linked_timeout(req);
++	if (timeout)
++		io_queue_linked_timeout(timeout);
++
++	/* either cancelled or io-wq is dying, so don't touch tctx->iowq */
++	if (work->flags & IO_WQ_WORK_CANCEL)
++		ret = -ECANCELED;
++
++	if (!ret) {
++		do {
++			ret = io_issue_sqe(req, 0);
++			/*
++			 * We can get EAGAIN for polled IO even though we're
++			 * forcing a sync submission from here, since we can't
++			 * wait for request slots on the block side.
++			 */
++			if (ret != -EAGAIN || !(req->ctx->flags & IORING_SETUP_IOPOLL))
++				break;
++			cond_resched();
++		} while (1);
++	}
++
++	/* avoid locking problems by failing it from a clean context */
++	if (ret)
++		io_req_task_queue_fail(req, ret);
++}
++
++static inline struct io_fixed_file *io_fixed_file_slot(struct io_file_table *table,
++						       unsigned i)
++{
++	return &table->files[i];
++}
++
++static inline struct file *io_file_from_index(struct io_ring_ctx *ctx,
++					      int index)
++{
++	struct io_fixed_file *slot = io_fixed_file_slot(&ctx->file_table, index);
++
++	return (struct file *) (slot->file_ptr & FFS_MASK);
++}
++
++static void io_fixed_file_set(struct io_fixed_file *file_slot, struct file *file)
++{
++	unsigned long file_ptr = (unsigned long) file;
++
++	if (__io_file_supports_nowait(file, READ))
++		file_ptr |= FFS_ASYNC_READ;
++	if (__io_file_supports_nowait(file, WRITE))
++		file_ptr |= FFS_ASYNC_WRITE;
++	if (S_ISREG(file_inode(file)->i_mode))
++		file_ptr |= FFS_ISREG;
++	file_slot->file_ptr = file_ptr;
++}
++
++static inline struct file *io_file_get_fixed(struct io_ring_ctx *ctx,
++					     struct io_kiocb *req, int fd)
++{
++	struct file *file;
++	unsigned long file_ptr;
++
++	if (unlikely((unsigned int)fd >= ctx->nr_user_files))
++		return NULL;
++	fd = array_index_nospec(fd, ctx->nr_user_files);
++	file_ptr = io_fixed_file_slot(&ctx->file_table, fd)->file_ptr;
++	file = (struct file *) (file_ptr & FFS_MASK);
++	file_ptr &= ~FFS_MASK;
++	/* mask in overlapping REQ_F and FFS bits */
++	req->flags |= (file_ptr << REQ_F_NOWAIT_READ_BIT);
++	io_req_set_rsrc_node(req);
++	return file;
++}
++
++static struct file *io_file_get_normal(struct io_ring_ctx *ctx,
++				       struct io_kiocb *req, int fd)
++{
++	struct file *file = fget(fd);
++
++	trace_io_uring_file_get(ctx, fd);
++
++	/* we don't allow fixed io_uring files */
++	if (file && unlikely(file->f_op == &io_uring_fops))
++		io_req_track_inflight(req);
++	return file;
++}
++
++static inline struct file *io_file_get(struct io_ring_ctx *ctx,
++				       struct io_kiocb *req, int fd, bool fixed)
++{
++	if (fixed)
++		return io_file_get_fixed(ctx, req, fd);
++	else
++		return io_file_get_normal(ctx, req, fd);
++}
++
++static void io_req_task_link_timeout(struct io_kiocb *req, bool *locked)
++{
++	struct io_kiocb *prev = req->timeout.prev;
++	int ret = -ENOENT;
++
++	if (prev) {
++		if (!(req->task->flags & PF_EXITING))
++			ret = io_try_cancel_userdata(req, prev->user_data);
++		io_req_complete_post(req, ret ?: -ETIME, 0);
++		io_put_req(prev);
++	} else {
++		io_req_complete_post(req, -ETIME, 0);
++	}
++}
++
++static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
++{
++	struct io_timeout_data *data = container_of(timer,
++						struct io_timeout_data, timer);
++	struct io_kiocb *prev, *req = data->req;
++	struct io_ring_ctx *ctx = req->ctx;
++	unsigned long flags;
++
++	spin_lock_irqsave(&ctx->timeout_lock, flags);
++	prev = req->timeout.head;
++	req->timeout.head = NULL;
++
++	/*
++	 * We don't expect the list to be empty, that will only happen if we
++	 * race with the completion of the linked work.
++	 */
++	if (prev) {
++		io_remove_next_linked(prev);
++		if (!req_ref_inc_not_zero(prev))
++			prev = NULL;
++	}
++	list_del(&req->timeout.list);
++	req->timeout.prev = prev;
++	spin_unlock_irqrestore(&ctx->timeout_lock, flags);
++
++	req->io_task_work.func = io_req_task_link_timeout;
++	io_req_task_work_add(req);
++	return HRTIMER_NORESTART;
++}
++
++static void io_queue_linked_timeout(struct io_kiocb *req)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++
++	spin_lock_irq(&ctx->timeout_lock);
++	/*
++	 * If the back reference is NULL, then our linked request finished
++	 * before we got a chance to setup the timer
++	 */
++	if (req->timeout.head) {
++		struct io_timeout_data *data = req->async_data;
++
++		data->timer.function = io_link_timeout_fn;
++		hrtimer_start(&data->timer, timespec64_to_ktime(data->ts),
++				data->mode);
++		list_add_tail(&req->timeout.list, &ctx->ltimeout_list);
++	}
++	spin_unlock_irq(&ctx->timeout_lock);
++	/* drop submission reference */
++	io_put_req(req);
++}
++
++static void __io_queue_sqe(struct io_kiocb *req)
++	__must_hold(&req->ctx->uring_lock)
++{
++	struct io_kiocb *linked_timeout;
++	int ret;
++
++issue_sqe:
++	ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
++
++	/*
++	 * We async punt it if the file wasn't marked NOWAIT, or if the file
++	 * doesn't support non-blocking read/write attempts
++	 */
++	if (likely(!ret)) {
++		if (req->flags & REQ_F_COMPLETE_INLINE) {
++			struct io_ring_ctx *ctx = req->ctx;
++			struct io_submit_state *state = &ctx->submit_state;
++
++			state->compl_reqs[state->compl_nr++] = req;
++			if (state->compl_nr == ARRAY_SIZE(state->compl_reqs))
++				io_submit_flush_completions(ctx);
++			return;
++		}
++
++		linked_timeout = io_prep_linked_timeout(req);
++		if (linked_timeout)
++			io_queue_linked_timeout(linked_timeout);
++	} else if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
++		linked_timeout = io_prep_linked_timeout(req);
++
++		switch (io_arm_poll_handler(req)) {
++		case IO_APOLL_READY:
++			if (linked_timeout)
++				io_queue_linked_timeout(linked_timeout);
++			goto issue_sqe;
++		case IO_APOLL_ABORTED:
++			/*
++			 * Queued up for async execution, worker will release
++			 * submit reference when the iocb is actually submitted.
++			 */
++			io_queue_async_work(req, NULL);
++			break;
++		}
++
++		if (linked_timeout)
++			io_queue_linked_timeout(linked_timeout);
++	} else {
++		io_req_complete_failed(req, ret);
++	}
++}
++
++static inline void io_queue_sqe(struct io_kiocb *req)
++	__must_hold(&req->ctx->uring_lock)
++{
++	if (unlikely(req->ctx->drain_active) && io_drain_req(req))
++		return;
++
++	if (likely(!(req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
++		__io_queue_sqe(req);
++	} else if (req->flags & REQ_F_FAIL) {
++		io_req_complete_fail_submit(req);
++	} else {
++		int ret = io_req_prep_async(req);
++
++		if (unlikely(ret))
++			io_req_complete_failed(req, ret);
++		else
++			io_queue_async_work(req, NULL);
++	}
++}
++
++/*
++ * Check SQE restrictions (opcode and flags).
++ *
++ * Returns 'true' if SQE is allowed, 'false' otherwise.
++ */
++static inline bool io_check_restriction(struct io_ring_ctx *ctx,
++					struct io_kiocb *req,
++					unsigned int sqe_flags)
++{
++	if (likely(!ctx->restricted))
++		return true;
++
++	if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
++		return false;
++
++	if ((sqe_flags & ctx->restrictions.sqe_flags_required) !=
++	    ctx->restrictions.sqe_flags_required)
++		return false;
++
++	if (sqe_flags & ~(ctx->restrictions.sqe_flags_allowed |
++			  ctx->restrictions.sqe_flags_required))
++		return false;
++
++	return true;
++}
++
++static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
++		       const struct io_uring_sqe *sqe)
++	__must_hold(&ctx->uring_lock)
++{
++	struct io_submit_state *state;
++	unsigned int sqe_flags;
++	int personality, ret = 0;
++
++	/* req is partially pre-initialised, see io_preinit_req() */
++	req->opcode = READ_ONCE(sqe->opcode);
++	/* same numerical values with corresponding REQ_F_*, safe to copy */
++	req->flags = sqe_flags = READ_ONCE(sqe->flags);
++	req->user_data = READ_ONCE(sqe->user_data);
++	req->file = NULL;
++	req->fixed_rsrc_refs = NULL;
++	req->task = current;
++
++	/* enforce forwards compatibility on users */
++	if (unlikely(sqe_flags & ~SQE_VALID_FLAGS))
++		return -EINVAL;
++	if (unlikely(req->opcode >= IORING_OP_LAST))
++		return -EINVAL;
++	if (!io_check_restriction(ctx, req, sqe_flags))
++		return -EACCES;
++
++	if ((sqe_flags & IOSQE_BUFFER_SELECT) &&
++	    !io_op_defs[req->opcode].buffer_select)
++		return -EOPNOTSUPP;
++	if (unlikely(sqe_flags & IOSQE_IO_DRAIN))
++		ctx->drain_active = true;
++
++	personality = READ_ONCE(sqe->personality);
++	if (personality) {
++		req->creds = xa_load(&ctx->personalities, personality);
++		if (!req->creds)
++			return -EINVAL;
++		get_cred(req->creds);
++		req->flags |= REQ_F_CREDS;
++	}
++	state = &ctx->submit_state;
++
++	/*
++	 * Plug now if we have more than 1 IO left after this, and the target
++	 * is potentially a read/write to block based storage.
++	 */
++	if (!state->plug_started && state->ios_left > 1 &&
++	    io_op_defs[req->opcode].plug) {
++		blk_start_plug(&state->plug);
++		state->plug_started = true;
++	}
++
++	if (io_op_defs[req->opcode].needs_file) {
++		req->file = io_file_get(ctx, req, READ_ONCE(sqe->fd),
++					(sqe_flags & IOSQE_FIXED_FILE));
++		if (unlikely(!req->file))
++			ret = -EBADF;
++	}
++
++	state->ios_left--;
++	return ret;
++}
++
++static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
++			 const struct io_uring_sqe *sqe)
++	__must_hold(&ctx->uring_lock)
++{
++	struct io_submit_link *link = &ctx->submit_state.link;
++	int ret;
++
++	ret = io_init_req(ctx, req, sqe);
++	if (unlikely(ret)) {
++fail_req:
++		/* fail even hard links since we don't submit */
++		if (link->head) {
++			/*
++			 * we can judge a link req is failed or cancelled by if
++			 * REQ_F_FAIL is set, but the head is an exception since
++			 * it may be set REQ_F_FAIL because of other req's failure
++			 * so let's leverage req->result to distinguish if a head
++			 * is set REQ_F_FAIL because of its failure or other req's
++			 * failure so that we can set the correct ret code for it.
++			 * init result here to avoid affecting the normal path.
++			 */
++			if (!(link->head->flags & REQ_F_FAIL))
++				req_fail_link_node(link->head, -ECANCELED);
++		} else if (!(req->flags & (REQ_F_LINK | REQ_F_HARDLINK))) {
++			/*
++			 * the current req is a normal req, we should return
++			 * error and thus break the submittion loop.
++			 */
++			io_req_complete_failed(req, ret);
++			return ret;
++		}
++		req_fail_link_node(req, ret);
++	} else {
++		ret = io_req_prep(req, sqe);
++		if (unlikely(ret))
++			goto fail_req;
++	}
++
++	/* don't need @sqe from now on */
++	trace_io_uring_submit_sqe(ctx, req, req->opcode, req->user_data,
++				  req->flags, true,
++				  ctx->flags & IORING_SETUP_SQPOLL);
++
++	/*
++	 * If we already have a head request, queue this one for async
++	 * submittal once the head completes. If we don't have a head but
++	 * IOSQE_IO_LINK is set in the sqe, start a new head. This one will be
++	 * submitted sync once the chain is complete. If none of those
++	 * conditions are true (normal request), then just queue it.
++	 */
++	if (link->head) {
++		struct io_kiocb *head = link->head;
++
++		if (!(req->flags & REQ_F_FAIL)) {
++			ret = io_req_prep_async(req);
++			if (unlikely(ret)) {
++				req_fail_link_node(req, ret);
++				if (!(head->flags & REQ_F_FAIL))
++					req_fail_link_node(head, -ECANCELED);
++			}
++		}
++		trace_io_uring_link(ctx, req, head);
++		link->last->link = req;
++		link->last = req;
++
++		/* last request of a link, enqueue the link */
++		if (!(req->flags & (REQ_F_LINK | REQ_F_HARDLINK))) {
++			link->head = NULL;
++			io_queue_sqe(head);
++		}
++	} else {
++		if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) {
++			link->head = req;
++			link->last = req;
++		} else {
++			io_queue_sqe(req);
++		}
++	}
++
++	return 0;
++}
++
++/*
++ * Batched submission is done, ensure local IO is flushed out.
++ */
++static void io_submit_state_end(struct io_submit_state *state,
++				struct io_ring_ctx *ctx)
++{
++	if (state->link.head)
++		io_queue_sqe(state->link.head);
++	if (state->compl_nr)
++		io_submit_flush_completions(ctx);
++	if (state->plug_started)
++		blk_finish_plug(&state->plug);
++}
++
++/*
++ * Start submission side cache.
++ */
++static void io_submit_state_start(struct io_submit_state *state,
++				  unsigned int max_ios)
++{
++	state->plug_started = false;
++	state->ios_left = max_ios;
++	/* set only head, no need to init link_last in advance */
++	state->link.head = NULL;
++}
++
++static void io_commit_sqring(struct io_ring_ctx *ctx)
++{
++	struct io_rings *rings = ctx->rings;
++
++	/*
++	 * Ensure any loads from the SQEs are done at this point,
++	 * since once we write the new head, the application could
++	 * write new data to them.
++	 */
++	smp_store_release(&rings->sq.head, ctx->cached_sq_head);
++}
++
++/*
++ * Fetch an sqe, if one is available. Note this returns a pointer to memory
++ * that is mapped by userspace. This means that care needs to be taken to
++ * ensure that reads are stable, as we cannot rely on userspace always
++ * being a good citizen. If members of the sqe are validated and then later
++ * used, it's important that those reads are done through READ_ONCE() to
++ * prevent a re-load down the line.
++ */
++static const struct io_uring_sqe *io_get_sqe(struct io_ring_ctx *ctx)
++{
++	unsigned head, mask = ctx->sq_entries - 1;
++	unsigned sq_idx = ctx->cached_sq_head++ & mask;
++
++	/*
++	 * The cached sq head (or cq tail) serves two purposes:
++	 *
++	 * 1) allows us to batch the cost of updating the user visible
++	 *    head updates.
++	 * 2) allows the kernel side to track the head on its own, even
++	 *    though the application is the one updating it.
++	 */
++	head = READ_ONCE(ctx->sq_array[sq_idx]);
++	if (likely(head < ctx->sq_entries))
++		return &ctx->sq_sqes[head];
++
++	/* drop invalid entries */
++	ctx->cq_extra--;
++	WRITE_ONCE(ctx->rings->sq_dropped,
++		   READ_ONCE(ctx->rings->sq_dropped) + 1);
++	return NULL;
++}
++
++static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
++	__must_hold(&ctx->uring_lock)
++{
++	int submitted = 0;
++
++	/* make sure SQ entry isn't read before tail */
++	nr = min3(nr, ctx->sq_entries, io_sqring_entries(ctx));
++	if (!percpu_ref_tryget_many(&ctx->refs, nr))
++		return -EAGAIN;
++	io_get_task_refs(nr);
++
++	io_submit_state_start(&ctx->submit_state, nr);
++	while (submitted < nr) {
++		const struct io_uring_sqe *sqe;
++		struct io_kiocb *req;
++
++		req = io_alloc_req(ctx);
++		if (unlikely(!req)) {
++			if (!submitted)
++				submitted = -EAGAIN;
++			break;
++		}
++		sqe = io_get_sqe(ctx);
++		if (unlikely(!sqe)) {
++			list_add(&req->inflight_entry, &ctx->submit_state.free_list);
++			break;
++		}
++		/* will complete beyond this point, count as submitted */
++		submitted++;
++		if (io_submit_sqe(ctx, req, sqe))
++			break;
++	}
++
++	if (unlikely(submitted != nr)) {
++		int ref_used = (submitted == -EAGAIN) ? 0 : submitted;
++		int unused = nr - ref_used;
++
++		current->io_uring->cached_refs += unused;
++		percpu_ref_put_many(&ctx->refs, unused);
++	}
++
++	io_submit_state_end(&ctx->submit_state, ctx);
++	 /* Commit SQ ring head once we've consumed and submitted all SQEs */
++	io_commit_sqring(ctx);
++
++	return submitted;
++}
++
++static inline bool io_sqd_events_pending(struct io_sq_data *sqd)
++{
++	return READ_ONCE(sqd->state);
++}
++
++static inline void io_ring_set_wakeup_flag(struct io_ring_ctx *ctx)
++{
++	/* Tell userspace we may need a wakeup call */
++	spin_lock(&ctx->completion_lock);
++	WRITE_ONCE(ctx->rings->sq_flags,
++		   ctx->rings->sq_flags | IORING_SQ_NEED_WAKEUP);
++	spin_unlock(&ctx->completion_lock);
++}
++
++static inline void io_ring_clear_wakeup_flag(struct io_ring_ctx *ctx)
++{
++	spin_lock(&ctx->completion_lock);
++	WRITE_ONCE(ctx->rings->sq_flags,
++		   ctx->rings->sq_flags & ~IORING_SQ_NEED_WAKEUP);
++	spin_unlock(&ctx->completion_lock);
++}
++
++static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
++{
++	unsigned int to_submit;
++	int ret = 0;
++
++	to_submit = io_sqring_entries(ctx);
++	/* if we're handling multiple rings, cap submit size for fairness */
++	if (cap_entries && to_submit > IORING_SQPOLL_CAP_ENTRIES_VALUE)
++		to_submit = IORING_SQPOLL_CAP_ENTRIES_VALUE;
++
++	if (!list_empty(&ctx->iopoll_list) || to_submit) {
++		unsigned nr_events = 0;
++		const struct cred *creds = NULL;
++
++		if (ctx->sq_creds != current_cred())
++			creds = override_creds(ctx->sq_creds);
++
++		mutex_lock(&ctx->uring_lock);
++		if (!list_empty(&ctx->iopoll_list))
++			io_do_iopoll(ctx, &nr_events, 0);
++
++		/*
++		 * Don't submit if refs are dying, good for io_uring_register(),
++		 * but also it is relied upon by io_ring_exit_work()
++		 */
++		if (to_submit && likely(!percpu_ref_is_dying(&ctx->refs)) &&
++		    !(ctx->flags & IORING_SETUP_R_DISABLED))
++			ret = io_submit_sqes(ctx, to_submit);
++		mutex_unlock(&ctx->uring_lock);
++
++		if (to_submit && wq_has_sleeper(&ctx->sqo_sq_wait))
++			wake_up(&ctx->sqo_sq_wait);
++		if (creds)
++			revert_creds(creds);
++	}
++
++	return ret;
++}
++
++static void io_sqd_update_thread_idle(struct io_sq_data *sqd)
++{
++	struct io_ring_ctx *ctx;
++	unsigned sq_thread_idle = 0;
++
++	list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
++		sq_thread_idle = max(sq_thread_idle, ctx->sq_thread_idle);
++	sqd->sq_thread_idle = sq_thread_idle;
++}
++
++static bool io_sqd_handle_event(struct io_sq_data *sqd)
++{
++	bool did_sig = false;
++	struct ksignal ksig;
++
++	if (test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state) ||
++	    signal_pending(current)) {
++		mutex_unlock(&sqd->lock);
++		if (signal_pending(current))
++			did_sig = get_signal(&ksig);
++		cond_resched();
++		mutex_lock(&sqd->lock);
++	}
++	return did_sig || test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
++}
++
++static int io_sq_thread(void *data)
++{
++	struct io_sq_data *sqd = data;
++	struct io_ring_ctx *ctx;
++	unsigned long timeout = 0;
++	char buf[TASK_COMM_LEN];
++	DEFINE_WAIT(wait);
++
++	snprintf(buf, sizeof(buf), "iou-sqp-%d", sqd->task_pid);
++	set_task_comm(current, buf);
++
++	if (sqd->sq_cpu != -1)
++		set_cpus_allowed_ptr(current, cpumask_of(sqd->sq_cpu));
++	else
++		set_cpus_allowed_ptr(current, cpu_online_mask);
++	current->flags |= PF_NO_SETAFFINITY;
++
++	mutex_lock(&sqd->lock);
++	while (1) {
++		bool cap_entries, sqt_spin = false;
++
++		if (io_sqd_events_pending(sqd) || signal_pending(current)) {
++			if (io_sqd_handle_event(sqd))
++				break;
++			timeout = jiffies + sqd->sq_thread_idle;
++		}
++
++		cap_entries = !list_is_singular(&sqd->ctx_list);
++		list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
++			int ret = __io_sq_thread(ctx, cap_entries);
++
++			if (!sqt_spin && (ret > 0 || !list_empty(&ctx->iopoll_list)))
++				sqt_spin = true;
++		}
++		if (io_run_task_work())
++			sqt_spin = true;
++
++		if (sqt_spin || !time_after(jiffies, timeout)) {
++			cond_resched();
++			if (sqt_spin)
++				timeout = jiffies + sqd->sq_thread_idle;
++			continue;
++		}
++
++		prepare_to_wait(&sqd->wait, &wait, TASK_INTERRUPTIBLE);
++		if (!io_sqd_events_pending(sqd) && !current->task_works) {
++			bool needs_sched = true;
++
++			list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
++				io_ring_set_wakeup_flag(ctx);
++
++				if ((ctx->flags & IORING_SETUP_IOPOLL) &&
++				    !list_empty_careful(&ctx->iopoll_list)) {
++					needs_sched = false;
++					break;
++				}
++				if (io_sqring_entries(ctx)) {
++					needs_sched = false;
++					break;
++				}
++			}
++
++			if (needs_sched) {
++				mutex_unlock(&sqd->lock);
++				schedule();
++				mutex_lock(&sqd->lock);
++			}
++			list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
++				io_ring_clear_wakeup_flag(ctx);
++		}
++
++		finish_wait(&sqd->wait, &wait);
++		timeout = jiffies + sqd->sq_thread_idle;
++	}
++
++	io_uring_cancel_generic(true, sqd);
++	sqd->thread = NULL;
++	list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
++		io_ring_set_wakeup_flag(ctx);
++	io_run_task_work();
++	mutex_unlock(&sqd->lock);
++
++	complete(&sqd->exited);
++	do_exit(0);
++}
++
++struct io_wait_queue {
++	struct wait_queue_entry wq;
++	struct io_ring_ctx *ctx;
++	unsigned cq_tail;
++	unsigned nr_timeouts;
++};
++
++static inline bool io_should_wake(struct io_wait_queue *iowq)
++{
++	struct io_ring_ctx *ctx = iowq->ctx;
++	int dist = ctx->cached_cq_tail - (int) iowq->cq_tail;
++
++	/*
++	 * Wake up if we have enough events, or if a timeout occurred since we
++	 * started waiting. For timeouts, we always want to return to userspace,
++	 * regardless of event count.
++	 */
++	return dist >= 0 || atomic_read(&ctx->cq_timeouts) != iowq->nr_timeouts;
++}
++
++static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
++			    int wake_flags, void *key)
++{
++	struct io_wait_queue *iowq = container_of(curr, struct io_wait_queue,
++							wq);
++
++	/*
++	 * Cannot safely flush overflowed CQEs from here, ensure we wake up
++	 * the task, and the next invocation will do it.
++	 */
++	if (io_should_wake(iowq) || test_bit(0, &iowq->ctx->check_cq_overflow))
++		return autoremove_wake_function(curr, mode, wake_flags, key);
++	return -1;
++}
++
++static int io_run_task_work_sig(void)
++{
++	if (io_run_task_work())
++		return 1;
++	if (!signal_pending(current))
++		return 0;
++	if (test_thread_flag(TIF_NOTIFY_SIGNAL))
++		return -ERESTARTSYS;
++	return -EINTR;
++}
++
++/* when returns >0, the caller should retry */
++static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
++					  struct io_wait_queue *iowq,
++					  ktime_t timeout)
++{
++	int ret;
++
++	/* make sure we run task_work before checking for signals */
++	ret = io_run_task_work_sig();
++	if (ret || io_should_wake(iowq))
++		return ret;
++	/* let the caller flush overflows, retry */
++	if (test_bit(0, &ctx->check_cq_overflow))
++		return 1;
++
++	if (!schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS))
++		return -ETIME;
++	return 1;
++}
++
++/*
++ * Wait until events become available, if we don't already have some. The
++ * application must reap them itself, as they reside on the shared cq ring.
++ */
++static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
++			  const sigset_t __user *sig, size_t sigsz,
++			  struct __kernel_timespec __user *uts)
++{
++	struct io_wait_queue iowq;
++	struct io_rings *rings = ctx->rings;
++	ktime_t timeout = KTIME_MAX;
++	int ret;
++
++	do {
++		io_cqring_overflow_flush(ctx);
++		if (io_cqring_events(ctx) >= min_events)
++			return 0;
++		if (!io_run_task_work())
++			break;
++	} while (1);
++
++	if (uts) {
++		struct timespec64 ts;
++
++		if (get_timespec64(&ts, uts))
++			return -EFAULT;
++		timeout = ktime_add_ns(timespec64_to_ktime(ts), ktime_get_ns());
++	}
++
++	if (sig) {
++#ifdef CONFIG_COMPAT
++		if (in_compat_syscall())
++			ret = set_compat_user_sigmask((const compat_sigset_t __user *)sig,
++						      sigsz);
++		else
++#endif
++			ret = set_user_sigmask(sig, sigsz);
++
++		if (ret)
++			return ret;
++	}
++
++	init_waitqueue_func_entry(&iowq.wq, io_wake_function);
++	iowq.wq.private = current;
++	INIT_LIST_HEAD(&iowq.wq.entry);
++	iowq.ctx = ctx;
++	iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
++	iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events;
++
++	trace_io_uring_cqring_wait(ctx, min_events);
++	do {
++		/* if we can't even flush overflow, don't wait for more */
++		if (!io_cqring_overflow_flush(ctx)) {
++			ret = -EBUSY;
++			break;
++		}
++		prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
++						TASK_INTERRUPTIBLE);
++		ret = io_cqring_wait_schedule(ctx, &iowq, timeout);
++		finish_wait(&ctx->cq_wait, &iowq.wq);
++		cond_resched();
++	} while (ret > 0);
++
++	restore_saved_sigmask_unless(ret == -EINTR);
++
++	return READ_ONCE(rings->cq.head) == READ_ONCE(rings->cq.tail) ? ret : 0;
++}
++
++static void io_free_page_table(void **table, size_t size)
++{
++	unsigned i, nr_tables = DIV_ROUND_UP(size, PAGE_SIZE);
++
++	for (i = 0; i < nr_tables; i++)
++		kfree(table[i]);
++	kfree(table);
++}
++
++static void **io_alloc_page_table(size_t size)
++{
++	unsigned i, nr_tables = DIV_ROUND_UP(size, PAGE_SIZE);
++	size_t init_size = size;
++	void **table;
++
++	table = kcalloc(nr_tables, sizeof(*table), GFP_KERNEL_ACCOUNT);
++	if (!table)
++		return NULL;
++
++	for (i = 0; i < nr_tables; i++) {
++		unsigned int this_size = min_t(size_t, size, PAGE_SIZE);
++
++		table[i] = kzalloc(this_size, GFP_KERNEL_ACCOUNT);
++		if (!table[i]) {
++			io_free_page_table(table, init_size);
++			return NULL;
++		}
++		size -= this_size;
++	}
++	return table;
++}
++
++static void io_rsrc_node_destroy(struct io_rsrc_node *ref_node)
++{
++	percpu_ref_exit(&ref_node->refs);
++	kfree(ref_node);
++}
++
++static void io_rsrc_node_ref_zero(struct percpu_ref *ref)
++{
++	struct io_rsrc_node *node = container_of(ref, struct io_rsrc_node, refs);
++	struct io_ring_ctx *ctx = node->rsrc_data->ctx;
++	unsigned long flags;
++	bool first_add = false;
++	unsigned long delay = HZ;
++
++	spin_lock_irqsave(&ctx->rsrc_ref_lock, flags);
++	node->done = true;
++
++	/* if we are mid-quiesce then do not delay */
++	if (node->rsrc_data->quiesce)
++		delay = 0;
++
++	while (!list_empty(&ctx->rsrc_ref_list)) {
++		node = list_first_entry(&ctx->rsrc_ref_list,
++					    struct io_rsrc_node, node);
++		/* recycle ref nodes in order */
++		if (!node->done)
++			break;
++		list_del(&node->node);
++		first_add |= llist_add(&node->llist, &ctx->rsrc_put_llist);
++	}
++	spin_unlock_irqrestore(&ctx->rsrc_ref_lock, flags);
++
++	if (first_add)
++		mod_delayed_work(system_wq, &ctx->rsrc_put_work, delay);
++}
++
++static struct io_rsrc_node *io_rsrc_node_alloc(struct io_ring_ctx *ctx)
++{
++	struct io_rsrc_node *ref_node;
++
++	ref_node = kzalloc(sizeof(*ref_node), GFP_KERNEL);
++	if (!ref_node)
++		return NULL;
++
++	if (percpu_ref_init(&ref_node->refs, io_rsrc_node_ref_zero,
++			    0, GFP_KERNEL)) {
++		kfree(ref_node);
++		return NULL;
++	}
++	INIT_LIST_HEAD(&ref_node->node);
++	INIT_LIST_HEAD(&ref_node->rsrc_list);
++	ref_node->done = false;
++	return ref_node;
++}
++
++static void io_rsrc_node_switch(struct io_ring_ctx *ctx,
++				struct io_rsrc_data *data_to_kill)
++{
++	WARN_ON_ONCE(!ctx->rsrc_backup_node);
++	WARN_ON_ONCE(data_to_kill && !ctx->rsrc_node);
++
++	if (data_to_kill) {
++		struct io_rsrc_node *rsrc_node = ctx->rsrc_node;
++
++		rsrc_node->rsrc_data = data_to_kill;
++		spin_lock_irq(&ctx->rsrc_ref_lock);
++		list_add_tail(&rsrc_node->node, &ctx->rsrc_ref_list);
++		spin_unlock_irq(&ctx->rsrc_ref_lock);
++
++		atomic_inc(&data_to_kill->refs);
++		percpu_ref_kill(&rsrc_node->refs);
++		ctx->rsrc_node = NULL;
++	}
++
++	if (!ctx->rsrc_node) {
++		ctx->rsrc_node = ctx->rsrc_backup_node;
++		ctx->rsrc_backup_node = NULL;
++	}
++}
++
++static int io_rsrc_node_switch_start(struct io_ring_ctx *ctx)
++{
++	if (ctx->rsrc_backup_node)
++		return 0;
++	ctx->rsrc_backup_node = io_rsrc_node_alloc(ctx);
++	return ctx->rsrc_backup_node ? 0 : -ENOMEM;
++}
++
++static int io_rsrc_ref_quiesce(struct io_rsrc_data *data, struct io_ring_ctx *ctx)
++{
++	int ret;
++
++	/* As we may drop ->uring_lock, other task may have started quiesce */
++	if (data->quiesce)
++		return -ENXIO;
++
++	data->quiesce = true;
++	do {
++		ret = io_rsrc_node_switch_start(ctx);
++		if (ret)
++			break;
++		io_rsrc_node_switch(ctx, data);
++
++		/* kill initial ref, already quiesced if zero */
++		if (atomic_dec_and_test(&data->refs))
++			break;
++		mutex_unlock(&ctx->uring_lock);
++		flush_delayed_work(&ctx->rsrc_put_work);
++		ret = wait_for_completion_interruptible(&data->done);
++		if (!ret) {
++			mutex_lock(&ctx->uring_lock);
++			if (atomic_read(&data->refs) > 0) {
++				/*
++				 * it has been revived by another thread while
++				 * we were unlocked
++				 */
++				mutex_unlock(&ctx->uring_lock);
++			} else {
++				break;
++			}
++		}
++
++		atomic_inc(&data->refs);
++		/* wait for all works potentially completing data->done */
++		flush_delayed_work(&ctx->rsrc_put_work);
++		reinit_completion(&data->done);
++
++		ret = io_run_task_work_sig();
++		mutex_lock(&ctx->uring_lock);
++	} while (ret >= 0);
++	data->quiesce = false;
++
++	return ret;
++}
++
++static u64 *io_get_tag_slot(struct io_rsrc_data *data, unsigned int idx)
++{
++	unsigned int off = idx & IO_RSRC_TAG_TABLE_MASK;
++	unsigned int table_idx = idx >> IO_RSRC_TAG_TABLE_SHIFT;
++
++	return &data->tags[table_idx][off];
++}
++
++static void io_rsrc_data_free(struct io_rsrc_data *data)
++{
++	size_t size = data->nr * sizeof(data->tags[0][0]);
++
++	if (data->tags)
++		io_free_page_table((void **)data->tags, size);
++	kfree(data);
++}
++
++static int io_rsrc_data_alloc(struct io_ring_ctx *ctx, rsrc_put_fn *do_put,
++			      u64 __user *utags, unsigned nr,
++			      struct io_rsrc_data **pdata)
++{
++	struct io_rsrc_data *data;
++	int ret = -ENOMEM;
++	unsigned i;
++
++	data = kzalloc(sizeof(*data), GFP_KERNEL);
++	if (!data)
++		return -ENOMEM;
++	data->tags = (u64 **)io_alloc_page_table(nr * sizeof(data->tags[0][0]));
++	if (!data->tags) {
++		kfree(data);
++		return -ENOMEM;
++	}
++
++	data->nr = nr;
++	data->ctx = ctx;
++	data->do_put = do_put;
++	if (utags) {
++		ret = -EFAULT;
++		for (i = 0; i < nr; i++) {
++			u64 *tag_slot = io_get_tag_slot(data, i);
++
++			if (copy_from_user(tag_slot, &utags[i],
++					   sizeof(*tag_slot)))
++				goto fail;
++		}
++	}
++
++	atomic_set(&data->refs, 1);
++	init_completion(&data->done);
++	*pdata = data;
++	return 0;
++fail:
++	io_rsrc_data_free(data);
++	return ret;
++}
++
++static bool io_alloc_file_tables(struct io_file_table *table, unsigned nr_files)
++{
++	table->files = kvcalloc(nr_files, sizeof(table->files[0]),
++				GFP_KERNEL_ACCOUNT);
++	return !!table->files;
++}
++
++static void io_free_file_tables(struct io_file_table *table)
++{
++	kvfree(table->files);
++	table->files = NULL;
++}
++
++static void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
++{
++#if defined(CONFIG_UNIX)
++	if (ctx->ring_sock) {
++		struct sock *sock = ctx->ring_sock->sk;
++		struct sk_buff *skb;
++
++		while ((skb = skb_dequeue(&sock->sk_receive_queue)) != NULL)
++			kfree_skb(skb);
++	}
++#else
++	int i;
++
++	for (i = 0; i < ctx->nr_user_files; i++) {
++		struct file *file;
++
++		file = io_file_from_index(ctx, i);
++		if (file)
++			fput(file);
++	}
++#endif
++	io_free_file_tables(&ctx->file_table);
++	io_rsrc_data_free(ctx->file_data);
++	ctx->file_data = NULL;
++	ctx->nr_user_files = 0;
++}
++
++static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
++{
++	unsigned nr = ctx->nr_user_files;
++	int ret;
++
++	if (!ctx->file_data)
++		return -ENXIO;
++
++	/*
++	 * Quiesce may unlock ->uring_lock, and while it's not held
++	 * prevent new requests using the table.
++	 */
++	ctx->nr_user_files = 0;
++	ret = io_rsrc_ref_quiesce(ctx->file_data, ctx);
++	ctx->nr_user_files = nr;
++	if (!ret)
++		__io_sqe_files_unregister(ctx);
++	return ret;
++}
++
++static void io_sq_thread_unpark(struct io_sq_data *sqd)
++	__releases(&sqd->lock)
++{
++	WARN_ON_ONCE(sqd->thread == current);
++
++	/*
++	 * Do the dance but not conditional clear_bit() because it'd race with
++	 * other threads incrementing park_pending and setting the bit.
++	 */
++	clear_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
++	if (atomic_dec_return(&sqd->park_pending))
++		set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
++	mutex_unlock(&sqd->lock);
++}
++
++static void io_sq_thread_park(struct io_sq_data *sqd)
++	__acquires(&sqd->lock)
++{
++	WARN_ON_ONCE(sqd->thread == current);
++
++	atomic_inc(&sqd->park_pending);
++	set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
++	mutex_lock(&sqd->lock);
++	if (sqd->thread)
++		wake_up_process(sqd->thread);
++}
++
++static void io_sq_thread_stop(struct io_sq_data *sqd)
++{
++	WARN_ON_ONCE(sqd->thread == current);
++	WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state));
++
++	set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
++	mutex_lock(&sqd->lock);
++	if (sqd->thread)
++		wake_up_process(sqd->thread);
++	mutex_unlock(&sqd->lock);
++	wait_for_completion(&sqd->exited);
++}
++
++static void io_put_sq_data(struct io_sq_data *sqd)
++{
++	if (refcount_dec_and_test(&sqd->refs)) {
++		WARN_ON_ONCE(atomic_read(&sqd->park_pending));
++
++		io_sq_thread_stop(sqd);
++		kfree(sqd);
++	}
++}
++
++static void io_sq_thread_finish(struct io_ring_ctx *ctx)
++{
++	struct io_sq_data *sqd = ctx->sq_data;
++
++	if (sqd) {
++		io_sq_thread_park(sqd);
++		list_del_init(&ctx->sqd_list);
++		io_sqd_update_thread_idle(sqd);
++		io_sq_thread_unpark(sqd);
++
++		io_put_sq_data(sqd);
++		ctx->sq_data = NULL;
++	}
++}
++
++static struct io_sq_data *io_attach_sq_data(struct io_uring_params *p)
++{
++	struct io_ring_ctx *ctx_attach;
++	struct io_sq_data *sqd;
++	struct fd f;
++
++	f = fdget(p->wq_fd);
++	if (!f.file)
++		return ERR_PTR(-ENXIO);
++	if (f.file->f_op != &io_uring_fops) {
++		fdput(f);
++		return ERR_PTR(-EINVAL);
++	}
++
++	ctx_attach = f.file->private_data;
++	sqd = ctx_attach->sq_data;
++	if (!sqd) {
++		fdput(f);
++		return ERR_PTR(-EINVAL);
++	}
++	if (sqd->task_tgid != current->tgid) {
++		fdput(f);
++		return ERR_PTR(-EPERM);
++	}
++
++	refcount_inc(&sqd->refs);
++	fdput(f);
++	return sqd;
++}
++
++static struct io_sq_data *io_get_sq_data(struct io_uring_params *p,
++					 bool *attached)
++{
++	struct io_sq_data *sqd;
++
++	*attached = false;
++	if (p->flags & IORING_SETUP_ATTACH_WQ) {
++		sqd = io_attach_sq_data(p);
++		if (!IS_ERR(sqd)) {
++			*attached = true;
++			return sqd;
++		}
++		/* fall through for EPERM case, setup new sqd/task */
++		if (PTR_ERR(sqd) != -EPERM)
++			return sqd;
++	}
++
++	sqd = kzalloc(sizeof(*sqd), GFP_KERNEL);
++	if (!sqd)
++		return ERR_PTR(-ENOMEM);
++
++	atomic_set(&sqd->park_pending, 0);
++	refcount_set(&sqd->refs, 1);
++	INIT_LIST_HEAD(&sqd->ctx_list);
++	mutex_init(&sqd->lock);
++	init_waitqueue_head(&sqd->wait);
++	init_completion(&sqd->exited);
++	return sqd;
++}
++
++#if defined(CONFIG_UNIX)
++/*
++ * Ensure the UNIX gc is aware of our file set, so we are certain that
++ * the io_uring can be safely unregistered on process exit, even if we have
++ * loops in the file referencing.
++ */
++static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
++{
++	struct sock *sk = ctx->ring_sock->sk;
++	struct scm_fp_list *fpl;
++	struct sk_buff *skb;
++	int i, nr_files;
++
++	fpl = kzalloc(sizeof(*fpl), GFP_KERNEL);
++	if (!fpl)
++		return -ENOMEM;
++
++	skb = alloc_skb(0, GFP_KERNEL);
++	if (!skb) {
++		kfree(fpl);
++		return -ENOMEM;
++	}
++
++	skb->sk = sk;
++	skb->scm_io_uring = 1;
++
++	nr_files = 0;
++	fpl->user = get_uid(current_user());
++	for (i = 0; i < nr; i++) {
++		struct file *file = io_file_from_index(ctx, i + offset);
++
++		if (!file)
++			continue;
++		fpl->fp[nr_files] = get_file(file);
++		unix_inflight(fpl->user, fpl->fp[nr_files]);
++		nr_files++;
++	}
++
++	if (nr_files) {
++		fpl->max = SCM_MAX_FD;
++		fpl->count = nr_files;
++		UNIXCB(skb).fp = fpl;
++		skb->destructor = unix_destruct_scm;
++		refcount_add(skb->truesize, &sk->sk_wmem_alloc);
++		skb_queue_head(&sk->sk_receive_queue, skb);
++
++		for (i = 0; i < nr; i++) {
++			struct file *file = io_file_from_index(ctx, i + offset);
++
++			if (file)
++				fput(file);
++		}
++	} else {
++		kfree_skb(skb);
++		free_uid(fpl->user);
++		kfree(fpl);
++	}
++
++	return 0;
++}
++
++/*
++ * If UNIX sockets are enabled, fd passing can cause a reference cycle which
++ * causes regular reference counting to break down. We rely on the UNIX
++ * garbage collection to take care of this problem for us.
++ */
++static int io_sqe_files_scm(struct io_ring_ctx *ctx)
++{
++	unsigned left, total;
++	int ret = 0;
++
++	total = 0;
++	left = ctx->nr_user_files;
++	while (left) {
++		unsigned this_files = min_t(unsigned, left, SCM_MAX_FD);
++
++		ret = __io_sqe_files_scm(ctx, this_files, total);
++		if (ret)
++			break;
++		left -= this_files;
++		total += this_files;
++	}
++
++	if (!ret)
++		return 0;
++
++	while (total < ctx->nr_user_files) {
++		struct file *file = io_file_from_index(ctx, total);
++
++		if (file)
++			fput(file);
++		total++;
++	}
++
++	return ret;
++}
++#else
++static int io_sqe_files_scm(struct io_ring_ctx *ctx)
++{
++	return 0;
++}
++#endif
++
++static void io_rsrc_file_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc)
++{
++	struct file *file = prsrc->file;
++#if defined(CONFIG_UNIX)
++	struct sock *sock = ctx->ring_sock->sk;
++	struct sk_buff_head list, *head = &sock->sk_receive_queue;
++	struct sk_buff *skb;
++	int i;
++
++	__skb_queue_head_init(&list);
++
++	/*
++	 * Find the skb that holds this file in its SCM_RIGHTS. When found,
++	 * remove this entry and rearrange the file array.
++	 */
++	skb = skb_dequeue(head);
++	while (skb) {
++		struct scm_fp_list *fp;
++
++		fp = UNIXCB(skb).fp;
++		for (i = 0; i < fp->count; i++) {
++			int left;
++
++			if (fp->fp[i] != file)
++				continue;
++
++			unix_notinflight(fp->user, fp->fp[i]);
++			left = fp->count - 1 - i;
++			if (left) {
++				memmove(&fp->fp[i], &fp->fp[i + 1],
++						left * sizeof(struct file *));
++			}
++			fp->count--;
++			if (!fp->count) {
++				kfree_skb(skb);
++				skb = NULL;
++			} else {
++				__skb_queue_tail(&list, skb);
++			}
++			fput(file);
++			file = NULL;
++			break;
++		}
++
++		if (!file)
++			break;
++
++		__skb_queue_tail(&list, skb);
++
++		skb = skb_dequeue(head);
++	}
++
++	if (skb_peek(&list)) {
++		spin_lock_irq(&head->lock);
++		while ((skb = __skb_dequeue(&list)) != NULL)
++			__skb_queue_tail(head, skb);
++		spin_unlock_irq(&head->lock);
++	}
++#else
++	fput(file);
++#endif
++}
++
++static void __io_rsrc_put_work(struct io_rsrc_node *ref_node)
++{
++	struct io_rsrc_data *rsrc_data = ref_node->rsrc_data;
++	struct io_ring_ctx *ctx = rsrc_data->ctx;
++	struct io_rsrc_put *prsrc, *tmp;
++
++	list_for_each_entry_safe(prsrc, tmp, &ref_node->rsrc_list, list) {
++		list_del(&prsrc->list);
++
++		if (prsrc->tag) {
++			bool lock_ring = ctx->flags & IORING_SETUP_IOPOLL;
++
++			io_ring_submit_lock(ctx, lock_ring);
++			spin_lock(&ctx->completion_lock);
++			io_fill_cqe_aux(ctx, prsrc->tag, 0, 0);
++			io_commit_cqring(ctx);
++			spin_unlock(&ctx->completion_lock);
++			io_cqring_ev_posted(ctx);
++			io_ring_submit_unlock(ctx, lock_ring);
++		}
++
++		rsrc_data->do_put(ctx, prsrc);
++		kfree(prsrc);
++	}
++
++	io_rsrc_node_destroy(ref_node);
++	if (atomic_dec_and_test(&rsrc_data->refs))
++		complete(&rsrc_data->done);
++}
++
++static void io_rsrc_put_work(struct work_struct *work)
++{
++	struct io_ring_ctx *ctx;
++	struct llist_node *node;
++
++	ctx = container_of(work, struct io_ring_ctx, rsrc_put_work.work);
++	node = llist_del_all(&ctx->rsrc_put_llist);
++
++	while (node) {
++		struct io_rsrc_node *ref_node;
++		struct llist_node *next = node->next;
++
++		ref_node = llist_entry(node, struct io_rsrc_node, llist);
++		__io_rsrc_put_work(ref_node);
++		node = next;
++	}
++}
++
++static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
++				 unsigned nr_args, u64 __user *tags)
++{
++	__s32 __user *fds = (__s32 __user *) arg;
++	struct file *file;
++	int fd, ret;
++	unsigned i;
++
++	if (ctx->file_data)
++		return -EBUSY;
++	if (!nr_args)
++		return -EINVAL;
++	if (nr_args > IORING_MAX_FIXED_FILES)
++		return -EMFILE;
++	if (nr_args > rlimit(RLIMIT_NOFILE))
++		return -EMFILE;
++	ret = io_rsrc_node_switch_start(ctx);
++	if (ret)
++		return ret;
++	ret = io_rsrc_data_alloc(ctx, io_rsrc_file_put, tags, nr_args,
++				 &ctx->file_data);
++	if (ret)
++		return ret;
++
++	ret = -ENOMEM;
++	if (!io_alloc_file_tables(&ctx->file_table, nr_args))
++		goto out_free;
++
++	for (i = 0; i < nr_args; i++, ctx->nr_user_files++) {
++		if (copy_from_user(&fd, &fds[i], sizeof(fd))) {
++			ret = -EFAULT;
++			goto out_fput;
++		}
++		/* allow sparse sets */
++		if (fd == -1) {
++			ret = -EINVAL;
++			if (unlikely(*io_get_tag_slot(ctx->file_data, i)))
++				goto out_fput;
++			continue;
++		}
++
++		file = fget(fd);
++		ret = -EBADF;
++		if (unlikely(!file))
++			goto out_fput;
++
++		/*
++		 * Don't allow io_uring instances to be registered. If UNIX
++		 * isn't enabled, then this causes a reference cycle and this
++		 * instance can never get freed. If UNIX is enabled we'll
++		 * handle it just fine, but there's still no point in allowing
++		 * a ring fd as it doesn't support regular read/write anyway.
++		 */
++		if (file->f_op == &io_uring_fops) {
++			fput(file);
++			goto out_fput;
++		}
++		io_fixed_file_set(io_fixed_file_slot(&ctx->file_table, i), file);
++	}
++
++	ret = io_sqe_files_scm(ctx);
++	if (ret) {
++		__io_sqe_files_unregister(ctx);
++		return ret;
++	}
++
++	io_rsrc_node_switch(ctx, NULL);
++	return ret;
++out_fput:
++	for (i = 0; i < ctx->nr_user_files; i++) {
++		file = io_file_from_index(ctx, i);
++		if (file)
++			fput(file);
++	}
++	io_free_file_tables(&ctx->file_table);
++	ctx->nr_user_files = 0;
++out_free:
++	io_rsrc_data_free(ctx->file_data);
++	ctx->file_data = NULL;
++	return ret;
++}
++
++static int io_sqe_file_register(struct io_ring_ctx *ctx, struct file *file,
++				int index)
++{
++#if defined(CONFIG_UNIX)
++	struct sock *sock = ctx->ring_sock->sk;
++	struct sk_buff_head *head = &sock->sk_receive_queue;
++	struct sk_buff *skb;
++
++	/*
++	 * See if we can merge this file into an existing skb SCM_RIGHTS
++	 * file set. If there's no room, fall back to allocating a new skb
++	 * and filling it in.
++	 */
++	spin_lock_irq(&head->lock);
++	skb = skb_peek(head);
++	if (skb) {
++		struct scm_fp_list *fpl = UNIXCB(skb).fp;
++
++		if (fpl->count < SCM_MAX_FD) {
++			__skb_unlink(skb, head);
++			spin_unlock_irq(&head->lock);
++			fpl->fp[fpl->count] = get_file(file);
++			unix_inflight(fpl->user, fpl->fp[fpl->count]);
++			fpl->count++;
++			spin_lock_irq(&head->lock);
++			__skb_queue_head(head, skb);
++		} else {
++			skb = NULL;
++		}
++	}
++	spin_unlock_irq(&head->lock);
++
++	if (skb) {
++		fput(file);
++		return 0;
++	}
++
++	return __io_sqe_files_scm(ctx, 1, index);
++#else
++	return 0;
++#endif
++}
++
++static int io_queue_rsrc_removal(struct io_rsrc_data *data, unsigned idx,
++				 struct io_rsrc_node *node, void *rsrc)
++{
++	u64 *tag_slot = io_get_tag_slot(data, idx);
++	struct io_rsrc_put *prsrc;
++
++	prsrc = kzalloc(sizeof(*prsrc), GFP_KERNEL);
++	if (!prsrc)
++		return -ENOMEM;
++
++	prsrc->tag = *tag_slot;
++	*tag_slot = 0;
++	prsrc->rsrc = rsrc;
++	list_add(&prsrc->list, &node->rsrc_list);
++	return 0;
++}
++
++static int io_install_fixed_file(struct io_kiocb *req, struct file *file,
++				 unsigned int issue_flags, u32 slot_index)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++	bool needs_switch = false;
++	struct io_fixed_file *file_slot;
++	int ret = -EBADF;
++
++	io_ring_submit_lock(ctx, !force_nonblock);
++	if (file->f_op == &io_uring_fops)
++		goto err;
++	ret = -ENXIO;
++	if (!ctx->file_data)
++		goto err;
++	ret = -EINVAL;
++	if (slot_index >= ctx->nr_user_files)
++		goto err;
++
++	slot_index = array_index_nospec(slot_index, ctx->nr_user_files);
++	file_slot = io_fixed_file_slot(&ctx->file_table, slot_index);
++
++	if (file_slot->file_ptr) {
++		struct file *old_file;
++
++		ret = io_rsrc_node_switch_start(ctx);
++		if (ret)
++			goto err;
++
++		old_file = (struct file *)(file_slot->file_ptr & FFS_MASK);
++		ret = io_queue_rsrc_removal(ctx->file_data, slot_index,
++					    ctx->rsrc_node, old_file);
++		if (ret)
++			goto err;
++		file_slot->file_ptr = 0;
++		needs_switch = true;
++	}
++
++	*io_get_tag_slot(ctx->file_data, slot_index) = 0;
++	io_fixed_file_set(file_slot, file);
++	ret = io_sqe_file_register(ctx, file, slot_index);
++	if (ret) {
++		file_slot->file_ptr = 0;
++		goto err;
++	}
++
++	ret = 0;
++err:
++	if (needs_switch)
++		io_rsrc_node_switch(ctx, ctx->file_data);
++	io_ring_submit_unlock(ctx, !force_nonblock);
++	if (ret)
++		fput(file);
++	return ret;
++}
++
++static int io_close_fixed(struct io_kiocb *req, unsigned int issue_flags)
++{
++	unsigned int offset = req->close.file_slot - 1;
++	struct io_ring_ctx *ctx = req->ctx;
++	struct io_fixed_file *file_slot;
++	struct file *file;
++	int ret;
++
++	io_ring_submit_lock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
++	ret = -ENXIO;
++	if (unlikely(!ctx->file_data))
++		goto out;
++	ret = -EINVAL;
++	if (offset >= ctx->nr_user_files)
++		goto out;
++	ret = io_rsrc_node_switch_start(ctx);
++	if (ret)
++		goto out;
++
++	offset = array_index_nospec(offset, ctx->nr_user_files);
++	file_slot = io_fixed_file_slot(&ctx->file_table, offset);
++	ret = -EBADF;
++	if (!file_slot->file_ptr)
++		goto out;
++
++	file = (struct file *)(file_slot->file_ptr & FFS_MASK);
++	ret = io_queue_rsrc_removal(ctx->file_data, offset, ctx->rsrc_node, file);
++	if (ret)
++		goto out;
++
++	file_slot->file_ptr = 0;
++	io_rsrc_node_switch(ctx, ctx->file_data);
++	ret = 0;
++out:
++	io_ring_submit_unlock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
++	return ret;
++}
++
++static int __io_sqe_files_update(struct io_ring_ctx *ctx,
++				 struct io_uring_rsrc_update2 *up,
++				 unsigned nr_args)
++{
++	u64 __user *tags = u64_to_user_ptr(up->tags);
++	__s32 __user *fds = u64_to_user_ptr(up->data);
++	struct io_rsrc_data *data = ctx->file_data;
++	struct io_fixed_file *file_slot;
++	struct file *file;
++	int fd, i, err = 0;
++	unsigned int done;
++	bool needs_switch = false;
++
++	if (!ctx->file_data)
++		return -ENXIO;
++	if (up->offset + nr_args > ctx->nr_user_files)
++		return -EINVAL;
++
++	for (done = 0; done < nr_args; done++) {
++		u64 tag = 0;
++
++		if ((tags && copy_from_user(&tag, &tags[done], sizeof(tag))) ||
++		    copy_from_user(&fd, &fds[done], sizeof(fd))) {
++			err = -EFAULT;
++			break;
++		}
++		if ((fd == IORING_REGISTER_FILES_SKIP || fd == -1) && tag) {
++			err = -EINVAL;
++			break;
++		}
++		if (fd == IORING_REGISTER_FILES_SKIP)
++			continue;
++
++		i = array_index_nospec(up->offset + done, ctx->nr_user_files);
++		file_slot = io_fixed_file_slot(&ctx->file_table, i);
++
++		if (file_slot->file_ptr) {
++			file = (struct file *)(file_slot->file_ptr & FFS_MASK);
++			err = io_queue_rsrc_removal(data, i, ctx->rsrc_node, file);
++			if (err)
++				break;
++			file_slot->file_ptr = 0;
++			needs_switch = true;
++		}
++		if (fd != -1) {
++			file = fget(fd);
++			if (!file) {
++				err = -EBADF;
++				break;
++			}
++			/*
++			 * Don't allow io_uring instances to be registered. If
++			 * UNIX isn't enabled, then this causes a reference
++			 * cycle and this instance can never get freed. If UNIX
++			 * is enabled we'll handle it just fine, but there's
++			 * still no point in allowing a ring fd as it doesn't
++			 * support regular read/write anyway.
++			 */
++			if (file->f_op == &io_uring_fops) {
++				fput(file);
++				err = -EBADF;
++				break;
++			}
++			*io_get_tag_slot(data, i) = tag;
++			io_fixed_file_set(file_slot, file);
++			err = io_sqe_file_register(ctx, file, i);
++			if (err) {
++				file_slot->file_ptr = 0;
++				fput(file);
++				break;
++			}
++		}
++	}
++
++	if (needs_switch)
++		io_rsrc_node_switch(ctx, data);
++	return done ? done : err;
++}
++
++static struct io_wq *io_init_wq_offload(struct io_ring_ctx *ctx,
++					struct task_struct *task)
++{
++	struct io_wq_hash *hash;
++	struct io_wq_data data;
++	unsigned int concurrency;
++
++	mutex_lock(&ctx->uring_lock);
++	hash = ctx->hash_map;
++	if (!hash) {
++		hash = kzalloc(sizeof(*hash), GFP_KERNEL);
++		if (!hash) {
++			mutex_unlock(&ctx->uring_lock);
++			return ERR_PTR(-ENOMEM);
++		}
++		refcount_set(&hash->refs, 1);
++		init_waitqueue_head(&hash->wait);
++		ctx->hash_map = hash;
++	}
++	mutex_unlock(&ctx->uring_lock);
++
++	data.hash = hash;
++	data.task = task;
++	data.free_work = io_wq_free_work;
++	data.do_work = io_wq_submit_work;
++
++	/* Do QD, or 4 * CPUS, whatever is smallest */
++	concurrency = min(ctx->sq_entries, 4 * num_online_cpus());
++
++	return io_wq_create(concurrency, &data);
++}
++
++static int io_uring_alloc_task_context(struct task_struct *task,
++				       struct io_ring_ctx *ctx)
++{
++	struct io_uring_task *tctx;
++	int ret;
++
++	tctx = kzalloc(sizeof(*tctx), GFP_KERNEL);
++	if (unlikely(!tctx))
++		return -ENOMEM;
++
++	ret = percpu_counter_init(&tctx->inflight, 0, GFP_KERNEL);
++	if (unlikely(ret)) {
++		kfree(tctx);
++		return ret;
++	}
++
++	tctx->io_wq = io_init_wq_offload(ctx, task);
++	if (IS_ERR(tctx->io_wq)) {
++		ret = PTR_ERR(tctx->io_wq);
++		percpu_counter_destroy(&tctx->inflight);
++		kfree(tctx);
++		return ret;
++	}
++
++	xa_init(&tctx->xa);
++	init_waitqueue_head(&tctx->wait);
++	atomic_set(&tctx->in_idle, 0);
++	atomic_set(&tctx->inflight_tracked, 0);
++	task->io_uring = tctx;
++	spin_lock_init(&tctx->task_lock);
++	INIT_WQ_LIST(&tctx->task_list);
++	init_task_work(&tctx->task_work, tctx_task_work);
++	return 0;
++}
++
++void __io_uring_free(struct task_struct *tsk)
++{
++	struct io_uring_task *tctx = tsk->io_uring;
++
++	WARN_ON_ONCE(!xa_empty(&tctx->xa));
++	WARN_ON_ONCE(tctx->io_wq);
++	WARN_ON_ONCE(tctx->cached_refs);
++
++	percpu_counter_destroy(&tctx->inflight);
++	kfree(tctx);
++	tsk->io_uring = NULL;
++}
++
++static int io_sq_offload_create(struct io_ring_ctx *ctx,
++				struct io_uring_params *p)
++{
++	int ret;
++
++	/* Retain compatibility with failing for an invalid attach attempt */
++	if ((ctx->flags & (IORING_SETUP_ATTACH_WQ | IORING_SETUP_SQPOLL)) ==
++				IORING_SETUP_ATTACH_WQ) {
++		struct fd f;
++
++		f = fdget(p->wq_fd);
++		if (!f.file)
++			return -ENXIO;
++		if (f.file->f_op != &io_uring_fops) {
++			fdput(f);
++			return -EINVAL;
++		}
++		fdput(f);
++	}
++	if (ctx->flags & IORING_SETUP_SQPOLL) {
++		struct task_struct *tsk;
++		struct io_sq_data *sqd;
++		bool attached;
++
++		sqd = io_get_sq_data(p, &attached);
++		if (IS_ERR(sqd)) {
++			ret = PTR_ERR(sqd);
++			goto err;
++		}
++
++		ctx->sq_creds = get_current_cred();
++		ctx->sq_data = sqd;
++		ctx->sq_thread_idle = msecs_to_jiffies(p->sq_thread_idle);
++		if (!ctx->sq_thread_idle)
++			ctx->sq_thread_idle = HZ;
++
++		io_sq_thread_park(sqd);
++		list_add(&ctx->sqd_list, &sqd->ctx_list);
++		io_sqd_update_thread_idle(sqd);
++		/* don't attach to a dying SQPOLL thread, would be racy */
++		ret = (attached && !sqd->thread) ? -ENXIO : 0;
++		io_sq_thread_unpark(sqd);
++
++		if (ret < 0)
++			goto err;
++		if (attached)
++			return 0;
++
++		if (p->flags & IORING_SETUP_SQ_AFF) {
++			int cpu = p->sq_thread_cpu;
++
++			ret = -EINVAL;
++			if (cpu >= nr_cpu_ids || !cpu_online(cpu))
++				goto err_sqpoll;
++			sqd->sq_cpu = cpu;
++		} else {
++			sqd->sq_cpu = -1;
++		}
++
++		sqd->task_pid = current->pid;
++		sqd->task_tgid = current->tgid;
++		tsk = create_io_thread(io_sq_thread, sqd, NUMA_NO_NODE);
++		if (IS_ERR(tsk)) {
++			ret = PTR_ERR(tsk);
++			goto err_sqpoll;
++		}
++
++		sqd->thread = tsk;
++		ret = io_uring_alloc_task_context(tsk, ctx);
++		wake_up_new_task(tsk);
++		if (ret)
++			goto err;
++	} else if (p->flags & IORING_SETUP_SQ_AFF) {
++		/* Can't have SQ_AFF without SQPOLL */
++		ret = -EINVAL;
++		goto err;
++	}
++
++	return 0;
++err_sqpoll:
++	complete(&ctx->sq_data->exited);
++err:
++	io_sq_thread_finish(ctx);
++	return ret;
++}
++
++static inline void __io_unaccount_mem(struct user_struct *user,
++				      unsigned long nr_pages)
++{
++	atomic_long_sub(nr_pages, &user->locked_vm);
++}
++
++static inline int __io_account_mem(struct user_struct *user,
++				   unsigned long nr_pages)
++{
++	unsigned long page_limit, cur_pages, new_pages;
++
++	/* Don't allow more pages than we can safely lock */
++	page_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
++
++	do {
++		cur_pages = atomic_long_read(&user->locked_vm);
++		new_pages = cur_pages + nr_pages;
++		if (new_pages > page_limit)
++			return -ENOMEM;
++	} while (atomic_long_cmpxchg(&user->locked_vm, cur_pages,
++					new_pages) != cur_pages);
++
++	return 0;
++}
++
++static void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
++{
++	if (ctx->user)
++		__io_unaccount_mem(ctx->user, nr_pages);
++
++	if (ctx->mm_account)
++		atomic64_sub(nr_pages, &ctx->mm_account->pinned_vm);
++}
++
++static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
++{
++	int ret;
++
++	if (ctx->user) {
++		ret = __io_account_mem(ctx->user, nr_pages);
++		if (ret)
++			return ret;
++	}
++
++	if (ctx->mm_account)
++		atomic64_add(nr_pages, &ctx->mm_account->pinned_vm);
++
++	return 0;
++}
++
++static void io_mem_free(void *ptr)
++{
++	struct page *page;
++
++	if (!ptr)
++		return;
++
++	page = virt_to_head_page(ptr);
++	if (put_page_testzero(page))
++		free_compound_page(page);
++}
++
++static void *io_mem_alloc(size_t size)
++{
++	gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP;
++
++	return (void *) __get_free_pages(gfp, get_order(size));
++}
++
++static unsigned long rings_size(unsigned sq_entries, unsigned cq_entries,
++				size_t *sq_offset)
++{
++	struct io_rings *rings;
++	size_t off, sq_array_size;
++
++	off = struct_size(rings, cqes, cq_entries);
++	if (off == SIZE_MAX)
++		return SIZE_MAX;
++
++#ifdef CONFIG_SMP
++	off = ALIGN(off, SMP_CACHE_BYTES);
++	if (off == 0)
++		return SIZE_MAX;
++#endif
++
++	if (sq_offset)
++		*sq_offset = off;
++
++	sq_array_size = array_size(sizeof(u32), sq_entries);
++	if (sq_array_size == SIZE_MAX)
++		return SIZE_MAX;
++
++	if (check_add_overflow(off, sq_array_size, &off))
++		return SIZE_MAX;
++
++	return off;
++}
++
++static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_mapped_ubuf **slot)
++{
++	struct io_mapped_ubuf *imu = *slot;
++	unsigned int i;
++
++	if (imu != ctx->dummy_ubuf) {
++		for (i = 0; i < imu->nr_bvecs; i++)
++			unpin_user_page(imu->bvec[i].bv_page);
++		if (imu->acct_pages)
++			io_unaccount_mem(ctx, imu->acct_pages);
++		kvfree(imu);
++	}
++	*slot = NULL;
++}
++
++static void io_rsrc_buf_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc)
++{
++	io_buffer_unmap(ctx, &prsrc->buf);
++	prsrc->buf = NULL;
++}
++
++static void __io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
++{
++	unsigned int i;
++
++	for (i = 0; i < ctx->nr_user_bufs; i++)
++		io_buffer_unmap(ctx, &ctx->user_bufs[i]);
++	kfree(ctx->user_bufs);
++	io_rsrc_data_free(ctx->buf_data);
++	ctx->user_bufs = NULL;
++	ctx->buf_data = NULL;
++	ctx->nr_user_bufs = 0;
++}
++
++static int io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
++{
++	unsigned nr = ctx->nr_user_bufs;
++	int ret;
++
++	if (!ctx->buf_data)
++		return -ENXIO;
++
++	/*
++	 * Quiesce may unlock ->uring_lock, and while it's not held
++	 * prevent new requests using the table.
++	 */
++	ctx->nr_user_bufs = 0;
++	ret = io_rsrc_ref_quiesce(ctx->buf_data, ctx);
++	ctx->nr_user_bufs = nr;
++	if (!ret)
++		__io_sqe_buffers_unregister(ctx);
++	return ret;
++}
++
++static int io_copy_iov(struct io_ring_ctx *ctx, struct iovec *dst,
++		       void __user *arg, unsigned index)
++{
++	struct iovec __user *src;
++
++#ifdef CONFIG_COMPAT
++	if (ctx->compat) {
++		struct compat_iovec __user *ciovs;
++		struct compat_iovec ciov;
++
++		ciovs = (struct compat_iovec __user *) arg;
++		if (copy_from_user(&ciov, &ciovs[index], sizeof(ciov)))
++			return -EFAULT;
++
++		dst->iov_base = u64_to_user_ptr((u64)ciov.iov_base);
++		dst->iov_len = ciov.iov_len;
++		return 0;
++	}
++#endif
++	src = (struct iovec __user *) arg;
++	if (copy_from_user(dst, &src[index], sizeof(*dst)))
++		return -EFAULT;
++	return 0;
++}
++
++/*
++ * Not super efficient, but this is just a registration time. And we do cache
++ * the last compound head, so generally we'll only do a full search if we don't
++ * match that one.
++ *
++ * We check if the given compound head page has already been accounted, to
++ * avoid double accounting it. This allows us to account the full size of the
++ * page, not just the constituent pages of a huge page.
++ */
++static bool headpage_already_acct(struct io_ring_ctx *ctx, struct page **pages,
++				  int nr_pages, struct page *hpage)
++{
++	int i, j;
++
++	/* check current page array */
++	for (i = 0; i < nr_pages; i++) {
++		if (!PageCompound(pages[i]))
++			continue;
++		if (compound_head(pages[i]) == hpage)
++			return true;
++	}
++
++	/* check previously registered pages */
++	for (i = 0; i < ctx->nr_user_bufs; i++) {
++		struct io_mapped_ubuf *imu = ctx->user_bufs[i];
++
++		for (j = 0; j < imu->nr_bvecs; j++) {
++			if (!PageCompound(imu->bvec[j].bv_page))
++				continue;
++			if (compound_head(imu->bvec[j].bv_page) == hpage)
++				return true;
++		}
++	}
++
++	return false;
++}
++
++static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
++				 int nr_pages, struct io_mapped_ubuf *imu,
++				 struct page **last_hpage)
++{
++	int i, ret;
++
++	imu->acct_pages = 0;
++	for (i = 0; i < nr_pages; i++) {
++		if (!PageCompound(pages[i])) {
++			imu->acct_pages++;
++		} else {
++			struct page *hpage;
++
++			hpage = compound_head(pages[i]);
++			if (hpage == *last_hpage)
++				continue;
++			*last_hpage = hpage;
++			if (headpage_already_acct(ctx, pages, i, hpage))
++				continue;
++			imu->acct_pages += page_size(hpage) >> PAGE_SHIFT;
++		}
++	}
++
++	if (!imu->acct_pages)
++		return 0;
++
++	ret = io_account_mem(ctx, imu->acct_pages);
++	if (ret)
++		imu->acct_pages = 0;
++	return ret;
++}
++
++static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
++				  struct io_mapped_ubuf **pimu,
++				  struct page **last_hpage)
++{
++	struct io_mapped_ubuf *imu = NULL;
++	struct vm_area_struct **vmas = NULL;
++	struct page **pages = NULL;
++	unsigned long off, start, end, ubuf;
++	size_t size;
++	int ret, pret, nr_pages, i;
++
++	if (!iov->iov_base) {
++		*pimu = ctx->dummy_ubuf;
++		return 0;
++	}
++
++	ubuf = (unsigned long) iov->iov_base;
++	end = (ubuf + iov->iov_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
++	start = ubuf >> PAGE_SHIFT;
++	nr_pages = end - start;
++
++	*pimu = NULL;
++	ret = -ENOMEM;
++
++	pages = kvmalloc_array(nr_pages, sizeof(struct page *), GFP_KERNEL);
++	if (!pages)
++		goto done;
++
++	vmas = kvmalloc_array(nr_pages, sizeof(struct vm_area_struct *),
++			      GFP_KERNEL);
++	if (!vmas)
++		goto done;
++
++	imu = kvmalloc(struct_size(imu, bvec, nr_pages), GFP_KERNEL);
++	if (!imu)
++		goto done;
++
++	ret = 0;
++	mmap_read_lock(current->mm);
++	pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
++			      pages, vmas);
++	if (pret == nr_pages) {
++		/* don't support file backed memory */
++		for (i = 0; i < nr_pages; i++) {
++			struct vm_area_struct *vma = vmas[i];
++
++			if (vma_is_shmem(vma))
++				continue;
++			if (vma->vm_file &&
++			    !is_file_hugepages(vma->vm_file)) {
++				ret = -EOPNOTSUPP;
++				break;
++			}
++		}
++	} else {
++		ret = pret < 0 ? pret : -EFAULT;
++	}
++	mmap_read_unlock(current->mm);
++	if (ret) {
++		/*
++		 * if we did partial map, or found file backed vmas,
++		 * release any pages we did get
++		 */
++		if (pret > 0)
++			unpin_user_pages(pages, pret);
++		goto done;
++	}
++
++	ret = io_buffer_account_pin(ctx, pages, pret, imu, last_hpage);
++	if (ret) {
++		unpin_user_pages(pages, pret);
++		goto done;
++	}
++
++	off = ubuf & ~PAGE_MASK;
++	size = iov->iov_len;
++	for (i = 0; i < nr_pages; i++) {
++		size_t vec_len;
++
++		vec_len = min_t(size_t, size, PAGE_SIZE - off);
++		imu->bvec[i].bv_page = pages[i];
++		imu->bvec[i].bv_len = vec_len;
++		imu->bvec[i].bv_offset = off;
++		off = 0;
++		size -= vec_len;
++	}
++	/* store original address for later verification */
++	imu->ubuf = ubuf;
++	imu->ubuf_end = ubuf + iov->iov_len;
++	imu->nr_bvecs = nr_pages;
++	*pimu = imu;
++	ret = 0;
++done:
++	if (ret)
++		kvfree(imu);
++	kvfree(pages);
++	kvfree(vmas);
++	return ret;
++}
++
++static int io_buffers_map_alloc(struct io_ring_ctx *ctx, unsigned int nr_args)
++{
++	ctx->user_bufs = kcalloc(nr_args, sizeof(*ctx->user_bufs), GFP_KERNEL);
++	return ctx->user_bufs ? 0 : -ENOMEM;
++}
++
++static int io_buffer_validate(struct iovec *iov)
++{
++	unsigned long tmp, acct_len = iov->iov_len + (PAGE_SIZE - 1);
++
++	/*
++	 * Don't impose further limits on the size and buffer
++	 * constraints here, we'll -EINVAL later when IO is
++	 * submitted if they are wrong.
++	 */
++	if (!iov->iov_base)
++		return iov->iov_len ? -EFAULT : 0;
++	if (!iov->iov_len)
++		return -EFAULT;
++
++	/* arbitrary limit, but we need something */
++	if (iov->iov_len > SZ_1G)
++		return -EFAULT;
++
++	if (check_add_overflow((unsigned long)iov->iov_base, acct_len, &tmp))
++		return -EOVERFLOW;
++
++	return 0;
++}
++
++static int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
++				   unsigned int nr_args, u64 __user *tags)
++{
++	struct page *last_hpage = NULL;
++	struct io_rsrc_data *data;
++	int i, ret;
++	struct iovec iov;
++
++	if (ctx->user_bufs)
++		return -EBUSY;
++	if (!nr_args || nr_args > IORING_MAX_REG_BUFFERS)
++		return -EINVAL;
++	ret = io_rsrc_node_switch_start(ctx);
++	if (ret)
++		return ret;
++	ret = io_rsrc_data_alloc(ctx, io_rsrc_buf_put, tags, nr_args, &data);
++	if (ret)
++		return ret;
++	ret = io_buffers_map_alloc(ctx, nr_args);
++	if (ret) {
++		io_rsrc_data_free(data);
++		return ret;
++	}
++
++	for (i = 0; i < nr_args; i++, ctx->nr_user_bufs++) {
++		ret = io_copy_iov(ctx, &iov, arg, i);
++		if (ret)
++			break;
++		ret = io_buffer_validate(&iov);
++		if (ret)
++			break;
++		if (!iov.iov_base && *io_get_tag_slot(data, i)) {
++			ret = -EINVAL;
++			break;
++		}
++
++		ret = io_sqe_buffer_register(ctx, &iov, &ctx->user_bufs[i],
++					     &last_hpage);
++		if (ret)
++			break;
++	}
++
++	WARN_ON_ONCE(ctx->buf_data);
++
++	ctx->buf_data = data;
++	if (ret)
++		__io_sqe_buffers_unregister(ctx);
++	else
++		io_rsrc_node_switch(ctx, NULL);
++	return ret;
++}
++
++static int __io_sqe_buffers_update(struct io_ring_ctx *ctx,
++				   struct io_uring_rsrc_update2 *up,
++				   unsigned int nr_args)
++{
++	u64 __user *tags = u64_to_user_ptr(up->tags);
++	struct iovec iov, __user *iovs = u64_to_user_ptr(up->data);
++	struct page *last_hpage = NULL;
++	bool needs_switch = false;
++	__u32 done;
++	int i, err;
++
++	if (!ctx->buf_data)
++		return -ENXIO;
++	if (up->offset + nr_args > ctx->nr_user_bufs)
++		return -EINVAL;
++
++	for (done = 0; done < nr_args; done++) {
++		struct io_mapped_ubuf *imu;
++		int offset = up->offset + done;
++		u64 tag = 0;
++
++		err = io_copy_iov(ctx, &iov, iovs, done);
++		if (err)
++			break;
++		if (tags && copy_from_user(&tag, &tags[done], sizeof(tag))) {
++			err = -EFAULT;
++			break;
++		}
++		err = io_buffer_validate(&iov);
++		if (err)
++			break;
++		if (!iov.iov_base && tag) {
++			err = -EINVAL;
++			break;
++		}
++		err = io_sqe_buffer_register(ctx, &iov, &imu, &last_hpage);
++		if (err)
++			break;
++
++		i = array_index_nospec(offset, ctx->nr_user_bufs);
++		if (ctx->user_bufs[i] != ctx->dummy_ubuf) {
++			err = io_queue_rsrc_removal(ctx->buf_data, i,
++						    ctx->rsrc_node, ctx->user_bufs[i]);
++			if (unlikely(err)) {
++				io_buffer_unmap(ctx, &imu);
++				break;
++			}
++			ctx->user_bufs[i] = NULL;
++			needs_switch = true;
++		}
++
++		ctx->user_bufs[i] = imu;
++		*io_get_tag_slot(ctx->buf_data, offset) = tag;
++	}
++
++	if (needs_switch)
++		io_rsrc_node_switch(ctx, ctx->buf_data);
++	return done ? done : err;
++}
++
++static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg)
++{
++	__s32 __user *fds = arg;
++	int fd;
++
++	if (ctx->cq_ev_fd)
++		return -EBUSY;
++
++	if (copy_from_user(&fd, fds, sizeof(*fds)))
++		return -EFAULT;
++
++	ctx->cq_ev_fd = eventfd_ctx_fdget(fd);
++	if (IS_ERR(ctx->cq_ev_fd)) {
++		int ret = PTR_ERR(ctx->cq_ev_fd);
++
++		ctx->cq_ev_fd = NULL;
++		return ret;
++	}
++
++	return 0;
++}
++
++static int io_eventfd_unregister(struct io_ring_ctx *ctx)
++{
++	if (ctx->cq_ev_fd) {
++		eventfd_ctx_put(ctx->cq_ev_fd);
++		ctx->cq_ev_fd = NULL;
++		return 0;
++	}
++
++	return -ENXIO;
++}
++
++static void io_destroy_buffers(struct io_ring_ctx *ctx)
++{
++	struct io_buffer *buf;
++	unsigned long index;
++
++	xa_for_each(&ctx->io_buffers, index, buf)
++		__io_remove_buffers(ctx, buf, index, -1U);
++}
++
++static void io_req_cache_free(struct list_head *list)
++{
++	struct io_kiocb *req, *nxt;
++
++	list_for_each_entry_safe(req, nxt, list, inflight_entry) {
++		list_del(&req->inflight_entry);
++		kmem_cache_free(req_cachep, req);
++	}
++}
++
++static void io_req_caches_free(struct io_ring_ctx *ctx)
++{
++	struct io_submit_state *state = &ctx->submit_state;
++
++	mutex_lock(&ctx->uring_lock);
++
++	if (state->free_reqs) {
++		kmem_cache_free_bulk(req_cachep, state->free_reqs, state->reqs);
++		state->free_reqs = 0;
++	}
++
++	io_flush_cached_locked_reqs(ctx, state);
++	io_req_cache_free(&state->free_list);
++	mutex_unlock(&ctx->uring_lock);
++}
++
++static void io_wait_rsrc_data(struct io_rsrc_data *data)
++{
++	if (data && !atomic_dec_and_test(&data->refs))
++		wait_for_completion(&data->done);
++}
++
++static void io_ring_ctx_free(struct io_ring_ctx *ctx)
++{
++	io_sq_thread_finish(ctx);
++
++	/* __io_rsrc_put_work() may need uring_lock to progress, wait w/o it */
++	io_wait_rsrc_data(ctx->buf_data);
++	io_wait_rsrc_data(ctx->file_data);
++
++	mutex_lock(&ctx->uring_lock);
++	if (ctx->buf_data)
++		__io_sqe_buffers_unregister(ctx);
++	if (ctx->file_data)
++		__io_sqe_files_unregister(ctx);
++	if (ctx->rings)
++		__io_cqring_overflow_flush(ctx, true);
++	mutex_unlock(&ctx->uring_lock);
++	io_eventfd_unregister(ctx);
++	io_destroy_buffers(ctx);
++	if (ctx->sq_creds)
++		put_cred(ctx->sq_creds);
++
++	/* there are no registered resources left, nobody uses it */
++	if (ctx->rsrc_node)
++		io_rsrc_node_destroy(ctx->rsrc_node);
++	if (ctx->rsrc_backup_node)
++		io_rsrc_node_destroy(ctx->rsrc_backup_node);
++	flush_delayed_work(&ctx->rsrc_put_work);
++
++	WARN_ON_ONCE(!list_empty(&ctx->rsrc_ref_list));
++	WARN_ON_ONCE(!llist_empty(&ctx->rsrc_put_llist));
++
++#if defined(CONFIG_UNIX)
++	if (ctx->ring_sock) {
++		ctx->ring_sock->file = NULL; /* so that iput() is called */
++		sock_release(ctx->ring_sock);
++	}
++#endif
++	WARN_ON_ONCE(!list_empty(&ctx->ltimeout_list));
++
++	if (ctx->mm_account) {
++		mmdrop(ctx->mm_account);
++		ctx->mm_account = NULL;
++	}
++
++	io_mem_free(ctx->rings);
++	io_mem_free(ctx->sq_sqes);
++
++	percpu_ref_exit(&ctx->refs);
++	free_uid(ctx->user);
++	io_req_caches_free(ctx);
++	if (ctx->hash_map)
++		io_wq_put_hash(ctx->hash_map);
++	kfree(ctx->cancel_hash);
++	kfree(ctx->dummy_ubuf);
++	kfree(ctx);
++}
++
++static __poll_t io_uring_poll(struct file *file, poll_table *wait)
++{
++	struct io_ring_ctx *ctx = file->private_data;
++	__poll_t mask = 0;
++
++	poll_wait(file, &ctx->poll_wait, wait);
++	/*
++	 * synchronizes with barrier from wq_has_sleeper call in
++	 * io_commit_cqring
++	 */
++	smp_rmb();
++	if (!io_sqring_full(ctx))
++		mask |= EPOLLOUT | EPOLLWRNORM;
++
++	/*
++	 * Don't flush cqring overflow list here, just do a simple check.
++	 * Otherwise there could possible be ABBA deadlock:
++	 *      CPU0                    CPU1
++	 *      ----                    ----
++	 * lock(&ctx->uring_lock);
++	 *                              lock(&ep->mtx);
++	 *                              lock(&ctx->uring_lock);
++	 * lock(&ep->mtx);
++	 *
++	 * Users may get EPOLLIN meanwhile seeing nothing in cqring, this
++	 * pushs them to do the flush.
++	 */
++	if (io_cqring_events(ctx) || test_bit(0, &ctx->check_cq_overflow))
++		mask |= EPOLLIN | EPOLLRDNORM;
++
++	return mask;
++}
++
++static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
++{
++	const struct cred *creds;
++
++	creds = xa_erase(&ctx->personalities, id);
++	if (creds) {
++		put_cred(creds);
++		return 0;
++	}
++
++	return -EINVAL;
++}
++
++struct io_tctx_exit {
++	struct callback_head		task_work;
++	struct completion		completion;
++	struct io_ring_ctx		*ctx;
++};
++
++static void io_tctx_exit_cb(struct callback_head *cb)
++{
++	struct io_uring_task *tctx = current->io_uring;
++	struct io_tctx_exit *work;
++
++	work = container_of(cb, struct io_tctx_exit, task_work);
++	/*
++	 * When @in_idle, we're in cancellation and it's racy to remove the
++	 * node. It'll be removed by the end of cancellation, just ignore it.
++	 * tctx can be NULL if the queueing of this task_work raced with
++	 * work cancelation off the exec path.
++	 */
++	if (tctx && !atomic_read(&tctx->in_idle))
++		io_uring_del_tctx_node((unsigned long)work->ctx);
++	complete(&work->completion);
++}
++
++static bool io_cancel_ctx_cb(struct io_wq_work *work, void *data)
++{
++	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++
++	return req->ctx == data;
++}
++
++static void io_ring_exit_work(struct work_struct *work)
++{
++	struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx, exit_work);
++	unsigned long timeout = jiffies + HZ * 60 * 5;
++	unsigned long interval = HZ / 20;
++	struct io_tctx_exit exit;
++	struct io_tctx_node *node;
++	int ret;
++
++	/*
++	 * If we're doing polled IO and end up having requests being
++	 * submitted async (out-of-line), then completions can come in while
++	 * we're waiting for refs to drop. We need to reap these manually,
++	 * as nobody else will be looking for them.
++	 */
++	do {
++		io_uring_try_cancel_requests(ctx, NULL, true);
++		if (ctx->sq_data) {
++			struct io_sq_data *sqd = ctx->sq_data;
++			struct task_struct *tsk;
++
++			io_sq_thread_park(sqd);
++			tsk = sqd->thread;
++			if (tsk && tsk->io_uring && tsk->io_uring->io_wq)
++				io_wq_cancel_cb(tsk->io_uring->io_wq,
++						io_cancel_ctx_cb, ctx, true);
++			io_sq_thread_unpark(sqd);
++		}
++
++		if (WARN_ON_ONCE(time_after(jiffies, timeout))) {
++			/* there is little hope left, don't run it too often */
++			interval = HZ * 60;
++		}
++	} while (!wait_for_completion_timeout(&ctx->ref_comp, interval));
++
++	init_completion(&exit.completion);
++	init_task_work(&exit.task_work, io_tctx_exit_cb);
++	exit.ctx = ctx;
++	/*
++	 * Some may use context even when all refs and requests have been put,
++	 * and they are free to do so while still holding uring_lock or
++	 * completion_lock, see io_req_task_submit(). Apart from other work,
++	 * this lock/unlock section also waits them to finish.
++	 */
++	mutex_lock(&ctx->uring_lock);
++	while (!list_empty(&ctx->tctx_list)) {
++		WARN_ON_ONCE(time_after(jiffies, timeout));
++
++		node = list_first_entry(&ctx->tctx_list, struct io_tctx_node,
++					ctx_node);
++		/* don't spin on a single task if cancellation failed */
++		list_rotate_left(&ctx->tctx_list);
++		ret = task_work_add(node->task, &exit.task_work, TWA_SIGNAL);
++		if (WARN_ON_ONCE(ret))
++			continue;
++		wake_up_process(node->task);
++
++		mutex_unlock(&ctx->uring_lock);
++		wait_for_completion(&exit.completion);
++		mutex_lock(&ctx->uring_lock);
++	}
++	mutex_unlock(&ctx->uring_lock);
++	spin_lock(&ctx->completion_lock);
++	spin_unlock(&ctx->completion_lock);
++
++	io_ring_ctx_free(ctx);
++}
++
++/* Returns true if we found and killed one or more timeouts */
++static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
++			     bool cancel_all)
++{
++	struct io_kiocb *req, *tmp;
++	int canceled = 0;
++
++	spin_lock(&ctx->completion_lock);
++	spin_lock_irq(&ctx->timeout_lock);
++	list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
++		if (io_match_task(req, tsk, cancel_all)) {
++			io_kill_timeout(req, -ECANCELED);
++			canceled++;
++		}
++	}
++	spin_unlock_irq(&ctx->timeout_lock);
++	if (canceled != 0)
++		io_commit_cqring(ctx);
++	spin_unlock(&ctx->completion_lock);
++	if (canceled != 0)
++		io_cqring_ev_posted(ctx);
++	return canceled != 0;
++}
++
++static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
++{
++	unsigned long index;
++	struct creds *creds;
++
++	mutex_lock(&ctx->uring_lock);
++	percpu_ref_kill(&ctx->refs);
++	if (ctx->rings)
++		__io_cqring_overflow_flush(ctx, true);
++	xa_for_each(&ctx->personalities, index, creds)
++		io_unregister_personality(ctx, index);
++	mutex_unlock(&ctx->uring_lock);
++
++	io_kill_timeouts(ctx, NULL, true);
++	io_poll_remove_all(ctx, NULL, true);
++
++	/* if we failed setting up the ctx, we might not have any rings */
++	io_iopoll_try_reap_events(ctx);
++
++	INIT_WORK(&ctx->exit_work, io_ring_exit_work);
++	/*
++	 * Use system_unbound_wq to avoid spawning tons of event kworkers
++	 * if we're exiting a ton of rings at the same time. It just adds
++	 * noise and overhead, there's no discernable change in runtime
++	 * over using system_wq.
++	 */
++	queue_work(system_unbound_wq, &ctx->exit_work);
++}
++
++static int io_uring_release(struct inode *inode, struct file *file)
++{
++	struct io_ring_ctx *ctx = file->private_data;
++
++	file->private_data = NULL;
++	io_ring_ctx_wait_and_kill(ctx);
++	return 0;
++}
++
++struct io_task_cancel {
++	struct task_struct *task;
++	bool all;
++};
++
++static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
++{
++	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++	struct io_task_cancel *cancel = data;
++
++	return io_match_task_safe(req, cancel->task, cancel->all);
++}
++
++static bool io_cancel_defer_files(struct io_ring_ctx *ctx,
++				  struct task_struct *task, bool cancel_all)
++{
++	struct io_defer_entry *de;
++	LIST_HEAD(list);
++
++	spin_lock(&ctx->completion_lock);
++	list_for_each_entry_reverse(de, &ctx->defer_list, list) {
++		if (io_match_task_safe(de->req, task, cancel_all)) {
++			list_cut_position(&list, &ctx->defer_list, &de->list);
++			break;
++		}
++	}
++	spin_unlock(&ctx->completion_lock);
++	if (list_empty(&list))
++		return false;
++
++	while (!list_empty(&list)) {
++		de = list_first_entry(&list, struct io_defer_entry, list);
++		list_del_init(&de->list);
++		io_req_complete_failed(de->req, -ECANCELED);
++		kfree(de);
++	}
++	return true;
++}
++
++static bool io_uring_try_cancel_iowq(struct io_ring_ctx *ctx)
++{
++	struct io_tctx_node *node;
++	enum io_wq_cancel cret;
++	bool ret = false;
++
++	mutex_lock(&ctx->uring_lock);
++	list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
++		struct io_uring_task *tctx = node->task->io_uring;
++
++		/*
++		 * io_wq will stay alive while we hold uring_lock, because it's
++		 * killed after ctx nodes, which requires to take the lock.
++		 */
++		if (!tctx || !tctx->io_wq)
++			continue;
++		cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_ctx_cb, ctx, true);
++		ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
++	}
++	mutex_unlock(&ctx->uring_lock);
++
++	return ret;
++}
++
++static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
++					 struct task_struct *task,
++					 bool cancel_all)
++{
++	struct io_task_cancel cancel = { .task = task, .all = cancel_all, };
++	struct io_uring_task *tctx = task ? task->io_uring : NULL;
++
++	while (1) {
++		enum io_wq_cancel cret;
++		bool ret = false;
++
++		if (!task) {
++			ret |= io_uring_try_cancel_iowq(ctx);
++		} else if (tctx && tctx->io_wq) {
++			/*
++			 * Cancels requests of all rings, not only @ctx, but
++			 * it's fine as the task is in exit/exec.
++			 */
++			cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
++					       &cancel, true);
++			ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
++		}
++
++		/* SQPOLL thread does its own polling */
++		if ((!(ctx->flags & IORING_SETUP_SQPOLL) && cancel_all) ||
++		    (ctx->sq_data && ctx->sq_data->thread == current)) {
++			while (!list_empty_careful(&ctx->iopoll_list)) {
++				io_iopoll_try_reap_events(ctx);
++				ret = true;
++			}
++		}
++
++		ret |= io_cancel_defer_files(ctx, task, cancel_all);
++		ret |= io_poll_remove_all(ctx, task, cancel_all);
++		ret |= io_kill_timeouts(ctx, task, cancel_all);
++		if (task)
++			ret |= io_run_task_work();
++		if (!ret)
++			break;
++		cond_resched();
++	}
++}
++
++static int __io_uring_add_tctx_node(struct io_ring_ctx *ctx)
++{
++	struct io_uring_task *tctx = current->io_uring;
++	struct io_tctx_node *node;
++	int ret;
++
++	if (unlikely(!tctx)) {
++		ret = io_uring_alloc_task_context(current, ctx);
++		if (unlikely(ret))
++			return ret;
++
++		tctx = current->io_uring;
++		if (ctx->iowq_limits_set) {
++			unsigned int limits[2] = { ctx->iowq_limits[0],
++						   ctx->iowq_limits[1], };
++
++			ret = io_wq_max_workers(tctx->io_wq, limits);
++			if (ret)
++				return ret;
++		}
++	}
++	if (!xa_load(&tctx->xa, (unsigned long)ctx)) {
++		node = kmalloc(sizeof(*node), GFP_KERNEL);
++		if (!node)
++			return -ENOMEM;
++		node->ctx = ctx;
++		node->task = current;
++
++		ret = xa_err(xa_store(&tctx->xa, (unsigned long)ctx,
++					node, GFP_KERNEL));
++		if (ret) {
++			kfree(node);
++			return ret;
++		}
++
++		mutex_lock(&ctx->uring_lock);
++		list_add(&node->ctx_node, &ctx->tctx_list);
++		mutex_unlock(&ctx->uring_lock);
++	}
++	tctx->last = ctx;
++	return 0;
++}
++
++/*
++ * Note that this task has used io_uring. We use it for cancelation purposes.
++ */
++static inline int io_uring_add_tctx_node(struct io_ring_ctx *ctx)
++{
++	struct io_uring_task *tctx = current->io_uring;
++
++	if (likely(tctx && tctx->last == ctx))
++		return 0;
++	return __io_uring_add_tctx_node(ctx);
++}
++
++/*
++ * Remove this io_uring_file -> task mapping.
++ */
++static void io_uring_del_tctx_node(unsigned long index)
++{
++	struct io_uring_task *tctx = current->io_uring;
++	struct io_tctx_node *node;
++
++	if (!tctx)
++		return;
++	node = xa_erase(&tctx->xa, index);
++	if (!node)
++		return;
++
++	WARN_ON_ONCE(current != node->task);
++	WARN_ON_ONCE(list_empty(&node->ctx_node));
++
++	mutex_lock(&node->ctx->uring_lock);
++	list_del(&node->ctx_node);
++	mutex_unlock(&node->ctx->uring_lock);
++
++	if (tctx->last == node->ctx)
++		tctx->last = NULL;
++	kfree(node);
++}
++
++static void io_uring_clean_tctx(struct io_uring_task *tctx)
++{
++	struct io_wq *wq = tctx->io_wq;
++	struct io_tctx_node *node;
++	unsigned long index;
++
++	xa_for_each(&tctx->xa, index, node) {
++		io_uring_del_tctx_node(index);
++		cond_resched();
++	}
++	if (wq) {
++		/*
++		 * Must be after io_uring_del_task_file() (removes nodes under
++		 * uring_lock) to avoid race with io_uring_try_cancel_iowq().
++		 */
++		io_wq_put_and_exit(wq);
++		tctx->io_wq = NULL;
++	}
++}
++
++static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
++{
++	if (tracked)
++		return atomic_read(&tctx->inflight_tracked);
++	return percpu_counter_sum(&tctx->inflight);
++}
++
++/*
++ * Find any io_uring ctx that this task has registered or done IO on, and cancel
++ * requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation.
++ */
++static void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
++{
++	struct io_uring_task *tctx = current->io_uring;
++	struct io_ring_ctx *ctx;
++	s64 inflight;
++	DEFINE_WAIT(wait);
++
++	WARN_ON_ONCE(sqd && sqd->thread != current);
++
++	if (!current->io_uring)
++		return;
++	if (tctx->io_wq)
++		io_wq_exit_start(tctx->io_wq);
++
++	atomic_inc(&tctx->in_idle);
++	do {
++		io_uring_drop_tctx_refs(current);
++		/* read completions before cancelations */
++		inflight = tctx_inflight(tctx, !cancel_all);
++		if (!inflight)
++			break;
++
++		if (!sqd) {
++			struct io_tctx_node *node;
++			unsigned long index;
++
++			xa_for_each(&tctx->xa, index, node) {
++				/* sqpoll task will cancel all its requests */
++				if (node->ctx->sq_data)
++					continue;
++				io_uring_try_cancel_requests(node->ctx, current,
++							     cancel_all);
++			}
++		} else {
++			list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
++				io_uring_try_cancel_requests(ctx, current,
++							     cancel_all);
++		}
++
++		prepare_to_wait(&tctx->wait, &wait, TASK_INTERRUPTIBLE);
++		io_run_task_work();
++		io_uring_drop_tctx_refs(current);
++
++		/*
++		 * If we've seen completions, retry without waiting. This
++		 * avoids a race where a completion comes in before we did
++		 * prepare_to_wait().
++		 */
++		if (inflight == tctx_inflight(tctx, !cancel_all))
++			schedule();
++		finish_wait(&tctx->wait, &wait);
++	} while (1);
++
++	io_uring_clean_tctx(tctx);
++	if (cancel_all) {
++		/*
++		 * We shouldn't run task_works after cancel, so just leave
++		 * ->in_idle set for normal exit.
++		 */
++		atomic_dec(&tctx->in_idle);
++		/* for exec all current's requests should be gone, kill tctx */
++		__io_uring_free(current);
++	}
++}
++
++void __io_uring_cancel(bool cancel_all)
++{
++	io_uring_cancel_generic(cancel_all, NULL);
++}
++
++static void *io_uring_validate_mmap_request(struct file *file,
++					    loff_t pgoff, size_t sz)
++{
++	struct io_ring_ctx *ctx = file->private_data;
++	loff_t offset = pgoff << PAGE_SHIFT;
++	struct page *page;
++	void *ptr;
++
++	switch (offset) {
++	case IORING_OFF_SQ_RING:
++	case IORING_OFF_CQ_RING:
++		ptr = ctx->rings;
++		break;
++	case IORING_OFF_SQES:
++		ptr = ctx->sq_sqes;
++		break;
++	default:
++		return ERR_PTR(-EINVAL);
++	}
++
++	page = virt_to_head_page(ptr);
++	if (sz > page_size(page))
++		return ERR_PTR(-EINVAL);
++
++	return ptr;
++}
++
++#ifdef CONFIG_MMU
++
++static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
++{
++	size_t sz = vma->vm_end - vma->vm_start;
++	unsigned long pfn;
++	void *ptr;
++
++	ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff, sz);
++	if (IS_ERR(ptr))
++		return PTR_ERR(ptr);
++
++	pfn = virt_to_phys(ptr) >> PAGE_SHIFT;
++	return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
++}
++
++#else /* !CONFIG_MMU */
++
++static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
++{
++	return vma->vm_flags & (VM_SHARED | VM_MAYSHARE) ? 0 : -EINVAL;
++}
++
++static unsigned int io_uring_nommu_mmap_capabilities(struct file *file)
++{
++	return NOMMU_MAP_DIRECT | NOMMU_MAP_READ | NOMMU_MAP_WRITE;
++}
++
++static unsigned long io_uring_nommu_get_unmapped_area(struct file *file,
++	unsigned long addr, unsigned long len,
++	unsigned long pgoff, unsigned long flags)
++{
++	void *ptr;
++
++	ptr = io_uring_validate_mmap_request(file, pgoff, len);
++	if (IS_ERR(ptr))
++		return PTR_ERR(ptr);
++
++	return (unsigned long) ptr;
++}
++
++#endif /* !CONFIG_MMU */
++
++static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
++{
++	DEFINE_WAIT(wait);
++
++	do {
++		if (!io_sqring_full(ctx))
++			break;
++		prepare_to_wait(&ctx->sqo_sq_wait, &wait, TASK_INTERRUPTIBLE);
++
++		if (!io_sqring_full(ctx))
++			break;
++		schedule();
++	} while (!signal_pending(current));
++
++	finish_wait(&ctx->sqo_sq_wait, &wait);
++	return 0;
++}
++
++static int io_get_ext_arg(unsigned flags, const void __user *argp, size_t *argsz,
++			  struct __kernel_timespec __user **ts,
++			  const sigset_t __user **sig)
++{
++	struct io_uring_getevents_arg arg;
++
++	/*
++	 * If EXT_ARG isn't set, then we have no timespec and the argp pointer
++	 * is just a pointer to the sigset_t.
++	 */
++	if (!(flags & IORING_ENTER_EXT_ARG)) {
++		*sig = (const sigset_t __user *) argp;
++		*ts = NULL;
++		return 0;
++	}
++
++	/*
++	 * EXT_ARG is set - ensure we agree on the size of it and copy in our
++	 * timespec and sigset_t pointers if good.
++	 */
++	if (*argsz != sizeof(arg))
++		return -EINVAL;
++	if (copy_from_user(&arg, argp, sizeof(arg)))
++		return -EFAULT;
++	if (arg.pad)
++		return -EINVAL;
++	*sig = u64_to_user_ptr(arg.sigmask);
++	*argsz = arg.sigmask_sz;
++	*ts = u64_to_user_ptr(arg.ts);
++	return 0;
++}
++
++SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
++		u32, min_complete, u32, flags, const void __user *, argp,
++		size_t, argsz)
++{
++	struct io_ring_ctx *ctx;
++	int submitted = 0;
++	struct fd f;
++	long ret;
++
++	io_run_task_work();
++
++	if (unlikely(flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
++			       IORING_ENTER_SQ_WAIT | IORING_ENTER_EXT_ARG)))
++		return -EINVAL;
++
++	f = fdget(fd);
++	if (unlikely(!f.file))
++		return -EBADF;
++
++	ret = -EOPNOTSUPP;
++	if (unlikely(f.file->f_op != &io_uring_fops))
++		goto out_fput;
++
++	ret = -ENXIO;
++	ctx = f.file->private_data;
++	if (unlikely(!percpu_ref_tryget(&ctx->refs)))
++		goto out_fput;
++
++	ret = -EBADFD;
++	if (unlikely(ctx->flags & IORING_SETUP_R_DISABLED))
++		goto out;
++
++	/*
++	 * For SQ polling, the thread will do all submissions and completions.
++	 * Just return the requested submit count, and wake the thread if
++	 * we were asked to.
++	 */
++	ret = 0;
++	if (ctx->flags & IORING_SETUP_SQPOLL) {
++		io_cqring_overflow_flush(ctx);
++
++		if (unlikely(ctx->sq_data->thread == NULL)) {
++			ret = -EOWNERDEAD;
++			goto out;
++		}
++		if (flags & IORING_ENTER_SQ_WAKEUP)
++			wake_up(&ctx->sq_data->wait);
++		if (flags & IORING_ENTER_SQ_WAIT) {
++			ret = io_sqpoll_wait_sq(ctx);
++			if (ret)
++				goto out;
++		}
++		submitted = to_submit;
++	} else if (to_submit) {
++		ret = io_uring_add_tctx_node(ctx);
++		if (unlikely(ret))
++			goto out;
++		mutex_lock(&ctx->uring_lock);
++		submitted = io_submit_sqes(ctx, to_submit);
++		mutex_unlock(&ctx->uring_lock);
++
++		if (submitted != to_submit)
++			goto out;
++	}
++	if (flags & IORING_ENTER_GETEVENTS) {
++		const sigset_t __user *sig;
++		struct __kernel_timespec __user *ts;
++
++		ret = io_get_ext_arg(flags, argp, &argsz, &ts, &sig);
++		if (unlikely(ret))
++			goto out;
++
++		min_complete = min(min_complete, ctx->cq_entries);
++
++		/*
++		 * When SETUP_IOPOLL and SETUP_SQPOLL are both enabled, user
++		 * space applications don't need to do io completion events
++		 * polling again, they can rely on io_sq_thread to do polling
++		 * work, which can reduce cpu usage and uring_lock contention.
++		 */
++		if (ctx->flags & IORING_SETUP_IOPOLL &&
++		    !(ctx->flags & IORING_SETUP_SQPOLL)) {
++			ret = io_iopoll_check(ctx, min_complete);
++		} else {
++			ret = io_cqring_wait(ctx, min_complete, sig, argsz, ts);
++		}
++	}
++
++out:
++	percpu_ref_put(&ctx->refs);
++out_fput:
++	fdput(f);
++	return submitted ? submitted : ret;
++}
++
++#ifdef CONFIG_PROC_FS
++static int io_uring_show_cred(struct seq_file *m, unsigned int id,
++		const struct cred *cred)
++{
++	struct user_namespace *uns = seq_user_ns(m);
++	struct group_info *gi;
++	kernel_cap_t cap;
++	unsigned __capi;
++	int g;
++
++	seq_printf(m, "%5d\n", id);
++	seq_put_decimal_ull(m, "\tUid:\t", from_kuid_munged(uns, cred->uid));
++	seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->euid));
++	seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->suid));
++	seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->fsuid));
++	seq_put_decimal_ull(m, "\n\tGid:\t", from_kgid_munged(uns, cred->gid));
++	seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->egid));
++	seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->sgid));
++	seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->fsgid));
++	seq_puts(m, "\n\tGroups:\t");
++	gi = cred->group_info;
++	for (g = 0; g < gi->ngroups; g++) {
++		seq_put_decimal_ull(m, g ? " " : "",
++					from_kgid_munged(uns, gi->gid[g]));
++	}
++	seq_puts(m, "\n\tCapEff:\t");
++	cap = cred->cap_effective;
++	CAP_FOR_EACH_U32(__capi)
++		seq_put_hex_ll(m, NULL, cap.cap[CAP_LAST_U32 - __capi], 8);
++	seq_putc(m, '\n');
++	return 0;
++}
++
++static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
++{
++	struct io_sq_data *sq = NULL;
++	bool has_lock;
++	int i;
++
++	/*
++	 * Avoid ABBA deadlock between the seq lock and the io_uring mutex,
++	 * since fdinfo case grabs it in the opposite direction of normal use
++	 * cases. If we fail to get the lock, we just don't iterate any
++	 * structures that could be going away outside the io_uring mutex.
++	 */
++	has_lock = mutex_trylock(&ctx->uring_lock);
++
++	if (has_lock && (ctx->flags & IORING_SETUP_SQPOLL)) {
++		sq = ctx->sq_data;
++		if (!sq->thread)
++			sq = NULL;
++	}
++
++	seq_printf(m, "SqThread:\t%d\n", sq ? task_pid_nr(sq->thread) : -1);
++	seq_printf(m, "SqThreadCpu:\t%d\n", sq ? task_cpu(sq->thread) : -1);
++	seq_printf(m, "UserFiles:\t%u\n", ctx->nr_user_files);
++	for (i = 0; has_lock && i < ctx->nr_user_files; i++) {
++		struct file *f = io_file_from_index(ctx, i);
++
++		if (f)
++			seq_printf(m, "%5u: %s\n", i, file_dentry(f)->d_iname);
++		else
++			seq_printf(m, "%5u: <none>\n", i);
++	}
++	seq_printf(m, "UserBufs:\t%u\n", ctx->nr_user_bufs);
++	for (i = 0; has_lock && i < ctx->nr_user_bufs; i++) {
++		struct io_mapped_ubuf *buf = ctx->user_bufs[i];
++		unsigned int len = buf->ubuf_end - buf->ubuf;
++
++		seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->ubuf, len);
++	}
++	if (has_lock && !xa_empty(&ctx->personalities)) {
++		unsigned long index;
++		const struct cred *cred;
++
++		seq_printf(m, "Personalities:\n");
++		xa_for_each(&ctx->personalities, index, cred)
++			io_uring_show_cred(m, index, cred);
++	}
++	seq_printf(m, "PollList:\n");
++	spin_lock(&ctx->completion_lock);
++	for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
++		struct hlist_head *list = &ctx->cancel_hash[i];
++		struct io_kiocb *req;
++
++		hlist_for_each_entry(req, list, hash_node)
++			seq_printf(m, "  op=%d, task_works=%d\n", req->opcode,
++					req->task->task_works != NULL);
++	}
++	spin_unlock(&ctx->completion_lock);
++	if (has_lock)
++		mutex_unlock(&ctx->uring_lock);
++}
++
++static void io_uring_show_fdinfo(struct seq_file *m, struct file *f)
++{
++	struct io_ring_ctx *ctx = f->private_data;
++
++	if (percpu_ref_tryget(&ctx->refs)) {
++		__io_uring_show_fdinfo(ctx, m);
++		percpu_ref_put(&ctx->refs);
++	}
++}
++#endif
++
++static const struct file_operations io_uring_fops = {
++	.release	= io_uring_release,
++	.mmap		= io_uring_mmap,
++#ifndef CONFIG_MMU
++	.get_unmapped_area = io_uring_nommu_get_unmapped_area,
++	.mmap_capabilities = io_uring_nommu_mmap_capabilities,
++#endif
++	.poll		= io_uring_poll,
++#ifdef CONFIG_PROC_FS
++	.show_fdinfo	= io_uring_show_fdinfo,
++#endif
++};
++
++static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
++				  struct io_uring_params *p)
++{
++	struct io_rings *rings;
++	size_t size, sq_array_offset;
++
++	/* make sure these are sane, as we already accounted them */
++	ctx->sq_entries = p->sq_entries;
++	ctx->cq_entries = p->cq_entries;
++
++	size = rings_size(p->sq_entries, p->cq_entries, &sq_array_offset);
++	if (size == SIZE_MAX)
++		return -EOVERFLOW;
++
++	rings = io_mem_alloc(size);
++	if (!rings)
++		return -ENOMEM;
++
++	ctx->rings = rings;
++	ctx->sq_array = (u32 *)((char *)rings + sq_array_offset);
++	rings->sq_ring_mask = p->sq_entries - 1;
++	rings->cq_ring_mask = p->cq_entries - 1;
++	rings->sq_ring_entries = p->sq_entries;
++	rings->cq_ring_entries = p->cq_entries;
++
++	size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
++	if (size == SIZE_MAX) {
++		io_mem_free(ctx->rings);
++		ctx->rings = NULL;
++		return -EOVERFLOW;
++	}
++
++	ctx->sq_sqes = io_mem_alloc(size);
++	if (!ctx->sq_sqes) {
++		io_mem_free(ctx->rings);
++		ctx->rings = NULL;
++		return -ENOMEM;
++	}
++
++	return 0;
++}
++
++static int io_uring_install_fd(struct io_ring_ctx *ctx, struct file *file)
++{
++	int ret, fd;
++
++	fd = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
++	if (fd < 0)
++		return fd;
++
++	ret = io_uring_add_tctx_node(ctx);
++	if (ret) {
++		put_unused_fd(fd);
++		return ret;
++	}
++	fd_install(fd, file);
++	return fd;
++}
++
++/*
++ * Allocate an anonymous fd, this is what constitutes the application
++ * visible backing of an io_uring instance. The application mmaps this
++ * fd to gain access to the SQ/CQ ring details. If UNIX sockets are enabled,
++ * we have to tie this fd to a socket for file garbage collection purposes.
++ */
++static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
++{
++	struct file *file;
++#if defined(CONFIG_UNIX)
++	int ret;
++
++	ret = sock_create_kern(&init_net, PF_UNIX, SOCK_RAW, IPPROTO_IP,
++				&ctx->ring_sock);
++	if (ret)
++		return ERR_PTR(ret);
++#endif
++
++	file = anon_inode_getfile("[io_uring]", &io_uring_fops, ctx,
++					O_RDWR | O_CLOEXEC);
++#if defined(CONFIG_UNIX)
++	if (IS_ERR(file)) {
++		sock_release(ctx->ring_sock);
++		ctx->ring_sock = NULL;
++	} else {
++		ctx->ring_sock->file = file;
++	}
++#endif
++	return file;
++}
++
++static int io_uring_create(unsigned entries, struct io_uring_params *p,
++			   struct io_uring_params __user *params)
++{
++	struct io_ring_ctx *ctx;
++	struct file *file;
++	int ret;
++
++	if (!entries)
++		return -EINVAL;
++	if (entries > IORING_MAX_ENTRIES) {
++		if (!(p->flags & IORING_SETUP_CLAMP))
++			return -EINVAL;
++		entries = IORING_MAX_ENTRIES;
++	}
++
++	/*
++	 * Use twice as many entries for the CQ ring. It's possible for the
++	 * application to drive a higher depth than the size of the SQ ring,
++	 * since the sqes are only used at submission time. This allows for
++	 * some flexibility in overcommitting a bit. If the application has
++	 * set IORING_SETUP_CQSIZE, it will have passed in the desired number
++	 * of CQ ring entries manually.
++	 */
++	p->sq_entries = roundup_pow_of_two(entries);
++	if (p->flags & IORING_SETUP_CQSIZE) {
++		/*
++		 * If IORING_SETUP_CQSIZE is set, we do the same roundup
++		 * to a power-of-two, if it isn't already. We do NOT impose
++		 * any cq vs sq ring sizing.
++		 */
++		if (!p->cq_entries)
++			return -EINVAL;
++		if (p->cq_entries > IORING_MAX_CQ_ENTRIES) {
++			if (!(p->flags & IORING_SETUP_CLAMP))
++				return -EINVAL;
++			p->cq_entries = IORING_MAX_CQ_ENTRIES;
++		}
++		p->cq_entries = roundup_pow_of_two(p->cq_entries);
++		if (p->cq_entries < p->sq_entries)
++			return -EINVAL;
++	} else {
++		p->cq_entries = 2 * p->sq_entries;
++	}
++
++	ctx = io_ring_ctx_alloc(p);
++	if (!ctx)
++		return -ENOMEM;
++	ctx->compat = in_compat_syscall();
++	if (!capable(CAP_IPC_LOCK))
++		ctx->user = get_uid(current_user());
++
++	/*
++	 * This is just grabbed for accounting purposes. When a process exits,
++	 * the mm is exited and dropped before the files, hence we need to hang
++	 * on to this mm purely for the purposes of being able to unaccount
++	 * memory (locked/pinned vm). It's not used for anything else.
++	 */
++	mmgrab(current->mm);
++	ctx->mm_account = current->mm;
++
++	ret = io_allocate_scq_urings(ctx, p);
++	if (ret)
++		goto err;
++
++	ret = io_sq_offload_create(ctx, p);
++	if (ret)
++		goto err;
++	/* always set a rsrc node */
++	ret = io_rsrc_node_switch_start(ctx);
++	if (ret)
++		goto err;
++	io_rsrc_node_switch(ctx, NULL);
++
++	memset(&p->sq_off, 0, sizeof(p->sq_off));
++	p->sq_off.head = offsetof(struct io_rings, sq.head);
++	p->sq_off.tail = offsetof(struct io_rings, sq.tail);
++	p->sq_off.ring_mask = offsetof(struct io_rings, sq_ring_mask);
++	p->sq_off.ring_entries = offsetof(struct io_rings, sq_ring_entries);
++	p->sq_off.flags = offsetof(struct io_rings, sq_flags);
++	p->sq_off.dropped = offsetof(struct io_rings, sq_dropped);
++	p->sq_off.array = (char *)ctx->sq_array - (char *)ctx->rings;
++
++	memset(&p->cq_off, 0, sizeof(p->cq_off));
++	p->cq_off.head = offsetof(struct io_rings, cq.head);
++	p->cq_off.tail = offsetof(struct io_rings, cq.tail);
++	p->cq_off.ring_mask = offsetof(struct io_rings, cq_ring_mask);
++	p->cq_off.ring_entries = offsetof(struct io_rings, cq_ring_entries);
++	p->cq_off.overflow = offsetof(struct io_rings, cq_overflow);
++	p->cq_off.cqes = offsetof(struct io_rings, cqes);
++	p->cq_off.flags = offsetof(struct io_rings, cq_flags);
++
++	p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP |
++			IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS |
++			IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL |
++			IORING_FEAT_POLL_32BITS | IORING_FEAT_SQPOLL_NONFIXED |
++			IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
++			IORING_FEAT_RSRC_TAGS;
++
++	if (copy_to_user(params, p, sizeof(*p))) {
++		ret = -EFAULT;
++		goto err;
++	}
++
++	file = io_uring_get_file(ctx);
++	if (IS_ERR(file)) {
++		ret = PTR_ERR(file);
++		goto err;
++	}
++
++	/*
++	 * Install ring fd as the very last thing, so we don't risk someone
++	 * having closed it before we finish setup
++	 */
++	ret = io_uring_install_fd(ctx, file);
++	if (ret < 0) {
++		/* fput will clean it up */
++		fput(file);
++		return ret;
++	}
++
++	trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
++	return ret;
++err:
++	io_ring_ctx_wait_and_kill(ctx);
++	return ret;
++}
++
++/*
++ * Sets up an aio uring context, and returns the fd. Applications asks for a
++ * ring size, we return the actual sq/cq ring sizes (among other things) in the
++ * params structure passed in.
++ */
++static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
++{
++	struct io_uring_params p;
++	int i;
++
++	if (copy_from_user(&p, params, sizeof(p)))
++		return -EFAULT;
++	for (i = 0; i < ARRAY_SIZE(p.resv); i++) {
++		if (p.resv[i])
++			return -EINVAL;
++	}
++
++	if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
++			IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
++			IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ |
++			IORING_SETUP_R_DISABLED))
++		return -EINVAL;
++
++	return  io_uring_create(entries, &p, params);
++}
++
++SYSCALL_DEFINE2(io_uring_setup, u32, entries,
++		struct io_uring_params __user *, params)
++{
++	return io_uring_setup(entries, params);
++}
++
++static int io_probe(struct io_ring_ctx *ctx, void __user *arg, unsigned nr_args)
++{
++	struct io_uring_probe *p;
++	size_t size;
++	int i, ret;
++
++	size = struct_size(p, ops, nr_args);
++	if (size == SIZE_MAX)
++		return -EOVERFLOW;
++	p = kzalloc(size, GFP_KERNEL);
++	if (!p)
++		return -ENOMEM;
++
++	ret = -EFAULT;
++	if (copy_from_user(p, arg, size))
++		goto out;
++	ret = -EINVAL;
++	if (memchr_inv(p, 0, size))
++		goto out;
++
++	p->last_op = IORING_OP_LAST - 1;
++	if (nr_args > IORING_OP_LAST)
++		nr_args = IORING_OP_LAST;
++
++	for (i = 0; i < nr_args; i++) {
++		p->ops[i].op = i;
++		if (!io_op_defs[i].not_supported)
++			p->ops[i].flags = IO_URING_OP_SUPPORTED;
++	}
++	p->ops_len = i;
++
++	ret = 0;
++	if (copy_to_user(arg, p, size))
++		ret = -EFAULT;
++out:
++	kfree(p);
++	return ret;
++}
++
++static int io_register_personality(struct io_ring_ctx *ctx)
++{
++	const struct cred *creds;
++	u32 id;
++	int ret;
++
++	creds = get_current_cred();
++
++	ret = xa_alloc_cyclic(&ctx->personalities, &id, (void *)creds,
++			XA_LIMIT(0, USHRT_MAX), &ctx->pers_next, GFP_KERNEL);
++	if (ret < 0) {
++		put_cred(creds);
++		return ret;
++	}
++	return id;
++}
++
++static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
++				    unsigned int nr_args)
++{
++	struct io_uring_restriction *res;
++	size_t size;
++	int i, ret;
++
++	/* Restrictions allowed only if rings started disabled */
++	if (!(ctx->flags & IORING_SETUP_R_DISABLED))
++		return -EBADFD;
++
++	/* We allow only a single restrictions registration */
++	if (ctx->restrictions.registered)
++		return -EBUSY;
++
++	if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
++		return -EINVAL;
++
++	size = array_size(nr_args, sizeof(*res));
++	if (size == SIZE_MAX)
++		return -EOVERFLOW;
++
++	res = memdup_user(arg, size);
++	if (IS_ERR(res))
++		return PTR_ERR(res);
++
++	ret = 0;
++
++	for (i = 0; i < nr_args; i++) {
++		switch (res[i].opcode) {
++		case IORING_RESTRICTION_REGISTER_OP:
++			if (res[i].register_op >= IORING_REGISTER_LAST) {
++				ret = -EINVAL;
++				goto out;
++			}
++
++			__set_bit(res[i].register_op,
++				  ctx->restrictions.register_op);
++			break;
++		case IORING_RESTRICTION_SQE_OP:
++			if (res[i].sqe_op >= IORING_OP_LAST) {
++				ret = -EINVAL;
++				goto out;
++			}
++
++			__set_bit(res[i].sqe_op, ctx->restrictions.sqe_op);
++			break;
++		case IORING_RESTRICTION_SQE_FLAGS_ALLOWED:
++			ctx->restrictions.sqe_flags_allowed = res[i].sqe_flags;
++			break;
++		case IORING_RESTRICTION_SQE_FLAGS_REQUIRED:
++			ctx->restrictions.sqe_flags_required = res[i].sqe_flags;
++			break;
++		default:
++			ret = -EINVAL;
++			goto out;
++		}
++	}
++
++out:
++	/* Reset all restrictions if an error happened */
++	if (ret != 0)
++		memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
++	else
++		ctx->restrictions.registered = true;
++
++	kfree(res);
++	return ret;
++}
++
++static int io_register_enable_rings(struct io_ring_ctx *ctx)
++{
++	if (!(ctx->flags & IORING_SETUP_R_DISABLED))
++		return -EBADFD;
++
++	if (ctx->restrictions.registered)
++		ctx->restricted = 1;
++
++	ctx->flags &= ~IORING_SETUP_R_DISABLED;
++	if (ctx->sq_data && wq_has_sleeper(&ctx->sq_data->wait))
++		wake_up(&ctx->sq_data->wait);
++	return 0;
++}
++
++static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
++				     struct io_uring_rsrc_update2 *up,
++				     unsigned nr_args)
++{
++	__u32 tmp;
++	int err;
++
++	if (check_add_overflow(up->offset, nr_args, &tmp))
++		return -EOVERFLOW;
++	err = io_rsrc_node_switch_start(ctx);
++	if (err)
++		return err;
++
++	switch (type) {
++	case IORING_RSRC_FILE:
++		return __io_sqe_files_update(ctx, up, nr_args);
++	case IORING_RSRC_BUFFER:
++		return __io_sqe_buffers_update(ctx, up, nr_args);
++	}
++	return -EINVAL;
++}
++
++static int io_register_files_update(struct io_ring_ctx *ctx, void __user *arg,
++				    unsigned nr_args)
++{
++	struct io_uring_rsrc_update2 up;
++
++	if (!nr_args)
++		return -EINVAL;
++	memset(&up, 0, sizeof(up));
++	if (copy_from_user(&up, arg, sizeof(struct io_uring_rsrc_update)))
++		return -EFAULT;
++	if (up.resv || up.resv2)
++		return -EINVAL;
++	return __io_register_rsrc_update(ctx, IORING_RSRC_FILE, &up, nr_args);
++}
++
++static int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg,
++				   unsigned size, unsigned type)
++{
++	struct io_uring_rsrc_update2 up;
++
++	if (size != sizeof(up))
++		return -EINVAL;
++	if (copy_from_user(&up, arg, sizeof(up)))
++		return -EFAULT;
++	if (!up.nr || up.resv || up.resv2)
++		return -EINVAL;
++	return __io_register_rsrc_update(ctx, type, &up, up.nr);
++}
++
++static int io_register_rsrc(struct io_ring_ctx *ctx, void __user *arg,
++			    unsigned int size, unsigned int type)
++{
++	struct io_uring_rsrc_register rr;
++
++	/* keep it extendible */
++	if (size != sizeof(rr))
++		return -EINVAL;
++
++	memset(&rr, 0, sizeof(rr));
++	if (copy_from_user(&rr, arg, size))
++		return -EFAULT;
++	if (!rr.nr || rr.resv || rr.resv2)
++		return -EINVAL;
++
++	switch (type) {
++	case IORING_RSRC_FILE:
++		return io_sqe_files_register(ctx, u64_to_user_ptr(rr.data),
++					     rr.nr, u64_to_user_ptr(rr.tags));
++	case IORING_RSRC_BUFFER:
++		return io_sqe_buffers_register(ctx, u64_to_user_ptr(rr.data),
++					       rr.nr, u64_to_user_ptr(rr.tags));
++	}
++	return -EINVAL;
++}
++
++static int io_register_iowq_aff(struct io_ring_ctx *ctx, void __user *arg,
++				unsigned len)
++{
++	struct io_uring_task *tctx = current->io_uring;
++	cpumask_var_t new_mask;
++	int ret;
++
++	if (!tctx || !tctx->io_wq)
++		return -EINVAL;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	cpumask_clear(new_mask);
++	if (len > cpumask_size())
++		len = cpumask_size();
++
++#ifdef CONFIG_COMPAT
++	if (in_compat_syscall()) {
++		ret = compat_get_bitmap(cpumask_bits(new_mask),
++					(const compat_ulong_t __user *)arg,
++					len * 8 /* CHAR_BIT */);
++	} else {
++		ret = copy_from_user(new_mask, arg, len);
++	}
++#else
++	ret = copy_from_user(new_mask, arg, len);
++#endif
++
++	if (ret) {
++		free_cpumask_var(new_mask);
++		return -EFAULT;
++	}
++
++	ret = io_wq_cpu_affinity(tctx->io_wq, new_mask);
++	free_cpumask_var(new_mask);
++	return ret;
++}
++
++static int io_unregister_iowq_aff(struct io_ring_ctx *ctx)
++{
++	struct io_uring_task *tctx = current->io_uring;
++
++	if (!tctx || !tctx->io_wq)
++		return -EINVAL;
++
++	return io_wq_cpu_affinity(tctx->io_wq, NULL);
++}
++
++static int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
++					void __user *arg)
++	__must_hold(&ctx->uring_lock)
++{
++	struct io_tctx_node *node;
++	struct io_uring_task *tctx = NULL;
++	struct io_sq_data *sqd = NULL;
++	__u32 new_count[2];
++	int i, ret;
++
++	if (copy_from_user(new_count, arg, sizeof(new_count)))
++		return -EFAULT;
++	for (i = 0; i < ARRAY_SIZE(new_count); i++)
++		if (new_count[i] > INT_MAX)
++			return -EINVAL;
++
++	if (ctx->flags & IORING_SETUP_SQPOLL) {
++		sqd = ctx->sq_data;
++		if (sqd) {
++			/*
++			 * Observe the correct sqd->lock -> ctx->uring_lock
++			 * ordering. Fine to drop uring_lock here, we hold
++			 * a ref to the ctx.
++			 */
++			refcount_inc(&sqd->refs);
++			mutex_unlock(&ctx->uring_lock);
++			mutex_lock(&sqd->lock);
++			mutex_lock(&ctx->uring_lock);
++			if (sqd->thread)
++				tctx = sqd->thread->io_uring;
++		}
++	} else {
++		tctx = current->io_uring;
++	}
++
++	BUILD_BUG_ON(sizeof(new_count) != sizeof(ctx->iowq_limits));
++
++	for (i = 0; i < ARRAY_SIZE(new_count); i++)
++		if (new_count[i])
++			ctx->iowq_limits[i] = new_count[i];
++	ctx->iowq_limits_set = true;
++
++	ret = -EINVAL;
++	if (tctx && tctx->io_wq) {
++		ret = io_wq_max_workers(tctx->io_wq, new_count);
++		if (ret)
++			goto err;
++	} else {
++		memset(new_count, 0, sizeof(new_count));
++	}
++
++	if (sqd) {
++		mutex_unlock(&sqd->lock);
++		io_put_sq_data(sqd);
++	}
++
++	if (copy_to_user(arg, new_count, sizeof(new_count)))
++		return -EFAULT;
++
++	/* that's it for SQPOLL, only the SQPOLL task creates requests */
++	if (sqd)
++		return 0;
++
++	/* now propagate the restriction to all registered users */
++	list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
++		struct io_uring_task *tctx = node->task->io_uring;
++
++		if (WARN_ON_ONCE(!tctx->io_wq))
++			continue;
++
++		for (i = 0; i < ARRAY_SIZE(new_count); i++)
++			new_count[i] = ctx->iowq_limits[i];
++		/* ignore errors, it always returns zero anyway */
++		(void)io_wq_max_workers(tctx->io_wq, new_count);
++	}
++	return 0;
++err:
++	if (sqd) {
++		mutex_unlock(&sqd->lock);
++		io_put_sq_data(sqd);
++	}
++	return ret;
++}
++
++static bool io_register_op_must_quiesce(int op)
++{
++	switch (op) {
++	case IORING_REGISTER_BUFFERS:
++	case IORING_UNREGISTER_BUFFERS:
++	case IORING_REGISTER_FILES:
++	case IORING_UNREGISTER_FILES:
++	case IORING_REGISTER_FILES_UPDATE:
++	case IORING_REGISTER_PROBE:
++	case IORING_REGISTER_PERSONALITY:
++	case IORING_UNREGISTER_PERSONALITY:
++	case IORING_REGISTER_FILES2:
++	case IORING_REGISTER_FILES_UPDATE2:
++	case IORING_REGISTER_BUFFERS2:
++	case IORING_REGISTER_BUFFERS_UPDATE:
++	case IORING_REGISTER_IOWQ_AFF:
++	case IORING_UNREGISTER_IOWQ_AFF:
++	case IORING_REGISTER_IOWQ_MAX_WORKERS:
++		return false;
++	default:
++		return true;
++	}
++}
++
++static int io_ctx_quiesce(struct io_ring_ctx *ctx)
++{
++	long ret;
++
++	percpu_ref_kill(&ctx->refs);
++
++	/*
++	 * Drop uring mutex before waiting for references to exit. If another
++	 * thread is currently inside io_uring_enter() it might need to grab the
++	 * uring_lock to make progress. If we hold it here across the drain
++	 * wait, then we can deadlock. It's safe to drop the mutex here, since
++	 * no new references will come in after we've killed the percpu ref.
++	 */
++	mutex_unlock(&ctx->uring_lock);
++	do {
++		ret = wait_for_completion_interruptible(&ctx->ref_comp);
++		if (!ret)
++			break;
++		ret = io_run_task_work_sig();
++	} while (ret >= 0);
++	mutex_lock(&ctx->uring_lock);
++
++	if (ret)
++		io_refs_resurrect(&ctx->refs, &ctx->ref_comp);
++	return ret;
++}
++
++static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
++			       void __user *arg, unsigned nr_args)
++	__releases(ctx->uring_lock)
++	__acquires(ctx->uring_lock)
++{
++	int ret;
++
++	/*
++	 * We're inside the ring mutex, if the ref is already dying, then
++	 * someone else killed the ctx or is already going through
++	 * io_uring_register().
++	 */
++	if (percpu_ref_is_dying(&ctx->refs))
++		return -ENXIO;
++
++	if (ctx->restricted) {
++		if (opcode >= IORING_REGISTER_LAST)
++			return -EINVAL;
++		opcode = array_index_nospec(opcode, IORING_REGISTER_LAST);
++		if (!test_bit(opcode, ctx->restrictions.register_op))
++			return -EACCES;
++	}
++
++	if (io_register_op_must_quiesce(opcode)) {
++		ret = io_ctx_quiesce(ctx);
++		if (ret)
++			return ret;
++	}
++
++	switch (opcode) {
++	case IORING_REGISTER_BUFFERS:
++		ret = io_sqe_buffers_register(ctx, arg, nr_args, NULL);
++		break;
++	case IORING_UNREGISTER_BUFFERS:
++		ret = -EINVAL;
++		if (arg || nr_args)
++			break;
++		ret = io_sqe_buffers_unregister(ctx);
++		break;
++	case IORING_REGISTER_FILES:
++		ret = io_sqe_files_register(ctx, arg, nr_args, NULL);
++		break;
++	case IORING_UNREGISTER_FILES:
++		ret = -EINVAL;
++		if (arg || nr_args)
++			break;
++		ret = io_sqe_files_unregister(ctx);
++		break;
++	case IORING_REGISTER_FILES_UPDATE:
++		ret = io_register_files_update(ctx, arg, nr_args);
++		break;
++	case IORING_REGISTER_EVENTFD:
++	case IORING_REGISTER_EVENTFD_ASYNC:
++		ret = -EINVAL;
++		if (nr_args != 1)
++			break;
++		ret = io_eventfd_register(ctx, arg);
++		if (ret)
++			break;
++		if (opcode == IORING_REGISTER_EVENTFD_ASYNC)
++			ctx->eventfd_async = 1;
++		else
++			ctx->eventfd_async = 0;
++		break;
++	case IORING_UNREGISTER_EVENTFD:
++		ret = -EINVAL;
++		if (arg || nr_args)
++			break;
++		ret = io_eventfd_unregister(ctx);
++		break;
++	case IORING_REGISTER_PROBE:
++		ret = -EINVAL;
++		if (!arg || nr_args > 256)
++			break;
++		ret = io_probe(ctx, arg, nr_args);
++		break;
++	case IORING_REGISTER_PERSONALITY:
++		ret = -EINVAL;
++		if (arg || nr_args)
++			break;
++		ret = io_register_personality(ctx);
++		break;
++	case IORING_UNREGISTER_PERSONALITY:
++		ret = -EINVAL;
++		if (arg)
++			break;
++		ret = io_unregister_personality(ctx, nr_args);
++		break;
++	case IORING_REGISTER_ENABLE_RINGS:
++		ret = -EINVAL;
++		if (arg || nr_args)
++			break;
++		ret = io_register_enable_rings(ctx);
++		break;
++	case IORING_REGISTER_RESTRICTIONS:
++		ret = io_register_restrictions(ctx, arg, nr_args);
++		break;
++	case IORING_REGISTER_FILES2:
++		ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_FILE);
++		break;
++	case IORING_REGISTER_FILES_UPDATE2:
++		ret = io_register_rsrc_update(ctx, arg, nr_args,
++					      IORING_RSRC_FILE);
++		break;
++	case IORING_REGISTER_BUFFERS2:
++		ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_BUFFER);
++		break;
++	case IORING_REGISTER_BUFFERS_UPDATE:
++		ret = io_register_rsrc_update(ctx, arg, nr_args,
++					      IORING_RSRC_BUFFER);
++		break;
++	case IORING_REGISTER_IOWQ_AFF:
++		ret = -EINVAL;
++		if (!arg || !nr_args)
++			break;
++		ret = io_register_iowq_aff(ctx, arg, nr_args);
++		break;
++	case IORING_UNREGISTER_IOWQ_AFF:
++		ret = -EINVAL;
++		if (arg || nr_args)
++			break;
++		ret = io_unregister_iowq_aff(ctx);
++		break;
++	case IORING_REGISTER_IOWQ_MAX_WORKERS:
++		ret = -EINVAL;
++		if (!arg || nr_args != 2)
++			break;
++		ret = io_register_iowq_max_workers(ctx, arg);
++		break;
++	default:
++		ret = -EINVAL;
++		break;
++	}
++
++	if (io_register_op_must_quiesce(opcode)) {
++		/* bring the ctx back to life */
++		percpu_ref_reinit(&ctx->refs);
++		reinit_completion(&ctx->ref_comp);
++	}
++	return ret;
++}
++
++SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
++		void __user *, arg, unsigned int, nr_args)
++{
++	struct io_ring_ctx *ctx;
++	long ret = -EBADF;
++	struct fd f;
++
++	f = fdget(fd);
++	if (!f.file)
++		return -EBADF;
++
++	ret = -EOPNOTSUPP;
++	if (f.file->f_op != &io_uring_fops)
++		goto out_fput;
++
++	ctx = f.file->private_data;
++
++	io_run_task_work();
++
++	mutex_lock(&ctx->uring_lock);
++	ret = __io_uring_register(ctx, opcode, arg, nr_args);
++	mutex_unlock(&ctx->uring_lock);
++	trace_io_uring_register(ctx, opcode, ctx->nr_user_files, ctx->nr_user_bufs,
++							ctx->cq_ev_fd != NULL, ret);
++out_fput:
++	fdput(f);
++	return ret;
++}
++
++static int __init io_uring_init(void)
++{
++#define __BUILD_BUG_VERIFY_ELEMENT(stype, eoffset, etype, ename) do { \
++	BUILD_BUG_ON(offsetof(stype, ename) != eoffset); \
++	BUILD_BUG_ON(sizeof(etype) != sizeof_field(stype, ename)); \
++} while (0)
++
++#define BUILD_BUG_SQE_ELEM(eoffset, etype, ename) \
++	__BUILD_BUG_VERIFY_ELEMENT(struct io_uring_sqe, eoffset, etype, ename)
++	BUILD_BUG_ON(sizeof(struct io_uring_sqe) != 64);
++	BUILD_BUG_SQE_ELEM(0,  __u8,   opcode);
++	BUILD_BUG_SQE_ELEM(1,  __u8,   flags);
++	BUILD_BUG_SQE_ELEM(2,  __u16,  ioprio);
++	BUILD_BUG_SQE_ELEM(4,  __s32,  fd);
++	BUILD_BUG_SQE_ELEM(8,  __u64,  off);
++	BUILD_BUG_SQE_ELEM(8,  __u64,  addr2);
++	BUILD_BUG_SQE_ELEM(16, __u64,  addr);
++	BUILD_BUG_SQE_ELEM(16, __u64,  splice_off_in);
++	BUILD_BUG_SQE_ELEM(24, __u32,  len);
++	BUILD_BUG_SQE_ELEM(28,     __kernel_rwf_t, rw_flags);
++	BUILD_BUG_SQE_ELEM(28, /* compat */   int, rw_flags);
++	BUILD_BUG_SQE_ELEM(28, /* compat */ __u32, rw_flags);
++	BUILD_BUG_SQE_ELEM(28, __u32,  fsync_flags);
++	BUILD_BUG_SQE_ELEM(28, /* compat */ __u16,  poll_events);
++	BUILD_BUG_SQE_ELEM(28, __u32,  poll32_events);
++	BUILD_BUG_SQE_ELEM(28, __u32,  sync_range_flags);
++	BUILD_BUG_SQE_ELEM(28, __u32,  msg_flags);
++	BUILD_BUG_SQE_ELEM(28, __u32,  timeout_flags);
++	BUILD_BUG_SQE_ELEM(28, __u32,  accept_flags);
++	BUILD_BUG_SQE_ELEM(28, __u32,  cancel_flags);
++	BUILD_BUG_SQE_ELEM(28, __u32,  open_flags);
++	BUILD_BUG_SQE_ELEM(28, __u32,  statx_flags);
++	BUILD_BUG_SQE_ELEM(28, __u32,  fadvise_advice);
++	BUILD_BUG_SQE_ELEM(28, __u32,  splice_flags);
++	BUILD_BUG_SQE_ELEM(32, __u64,  user_data);
++	BUILD_BUG_SQE_ELEM(40, __u16,  buf_index);
++	BUILD_BUG_SQE_ELEM(40, __u16,  buf_group);
++	BUILD_BUG_SQE_ELEM(42, __u16,  personality);
++	BUILD_BUG_SQE_ELEM(44, __s32,  splice_fd_in);
++	BUILD_BUG_SQE_ELEM(44, __u32,  file_index);
++
++	BUILD_BUG_ON(sizeof(struct io_uring_files_update) !=
++		     sizeof(struct io_uring_rsrc_update));
++	BUILD_BUG_ON(sizeof(struct io_uring_rsrc_update) >
++		     sizeof(struct io_uring_rsrc_update2));
++
++	/* ->buf_index is u16 */
++	BUILD_BUG_ON(IORING_MAX_REG_BUFFERS >= (1u << 16));
++
++	/* should fit into one byte */
++	BUILD_BUG_ON(SQE_VALID_FLAGS >= (1 << 8));
++
++	BUILD_BUG_ON(ARRAY_SIZE(io_op_defs) != IORING_OP_LAST);
++	BUILD_BUG_ON(__REQ_F_LAST_BIT > 8 * sizeof(int));
++
++	req_cachep = KMEM_CACHE(io_kiocb, SLAB_HWCACHE_ALIGN | SLAB_PANIC |
++				SLAB_ACCOUNT);
++	return 0;
++};
++__initcall(io_uring_init);
+diff --git a/kernel/entry/common.c b/kernel/entry/common.c
+index e289e67732926..09f58853f6927 100644
+--- a/kernel/entry/common.c
++++ b/kernel/entry/common.c
+@@ -135,7 +135,15 @@ static __always_inline void exit_to_user_mode(void)
+ }
+ 
+ /* Workaround to allow gradual conversion of architecture code */
+-void __weak arch_do_signal(struct pt_regs *regs) { }
++void __weak arch_do_signal_or_restart(struct pt_regs *regs, bool has_signal) { }
++
++static void handle_signal_work(struct pt_regs *regs, unsigned long ti_work)
++{
++	if (ti_work & _TIF_NOTIFY_SIGNAL)
++		tracehook_notify_signal();
++
++	arch_do_signal_or_restart(regs, ti_work & _TIF_SIGPENDING);
++}
+ 
+ static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
+ 					    unsigned long ti_work)
+@@ -157,8 +165,8 @@ static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
+ 		if (ti_work & _TIF_PATCH_PENDING)
+ 			klp_update_patch_state(current);
+ 
+-		if (ti_work & _TIF_SIGPENDING)
+-			arch_do_signal(regs);
++		if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
++			handle_signal_work(regs, ti_work);
+ 
+ 		if (ti_work & _TIF_NOTIFY_RESUME) {
+ 			tracehook_notify_resume(regs);
+diff --git a/kernel/entry/kvm.c b/kernel/entry/kvm.c
+index 2a3139dab109e..7b946847be783 100644
+--- a/kernel/entry/kvm.c
++++ b/kernel/entry/kvm.c
+@@ -8,7 +8,7 @@ static int xfer_to_guest_mode_work(struct kvm_vcpu *vcpu, unsigned long ti_work)
+ 	do {
+ 		int ret;
+ 
+-		if (ti_work & _TIF_SIGPENDING) {
++		if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL)) {
+ 			kvm_handle_signal_exit(vcpu);
+ 			return -EINTR;
+ 		}
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index e1bbb3b92921d..826a2355da1ed 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -1973,7 +1973,7 @@ bool uprobe_deny_signal(void)
+ 
+ 	WARN_ON_ONCE(utask->state != UTASK_SSTEP);
+ 
+-	if (signal_pending(t)) {
++	if (task_sigpending(t)) {
+ 		spin_lock_irq(&t->sighand->siglock);
+ 		clear_tsk_thread_flag(t, TIF_SIGPENDING);
+ 		spin_unlock_irq(&t->sighand->siglock);
+diff --git a/kernel/exit.c b/kernel/exit.c
+index ab900b661867f..8989e1d1f79b7 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -763,7 +763,7 @@ void __noreturn do_exit(long code)
+ 		schedule();
+ 	}
+ 
+-	io_uring_files_cancel(tsk->files);
++	io_uring_files_cancel();
+ 	exit_signals(tsk);  /* sets PF_EXITING */
+ 
+ 	/* sync mm's RSS info before statistics gathering */
+diff --git a/kernel/fork.c b/kernel/fork.c
+index b877480c901f0..68efe2a0b4fbc 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -926,6 +926,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
+ 	tsk->splice_pipe = NULL;
+ 	tsk->task_frag.page = NULL;
+ 	tsk->wake_q.next = NULL;
++	tsk->pf_io_worker = NULL;
+ 
+ 	account_kernel_stack(tsk, 1);
+ 
+@@ -1942,13 +1943,21 @@ static __latent_entropy struct task_struct *copy_process(
+ 	recalc_sigpending();
+ 	spin_unlock_irq(&current->sighand->siglock);
+ 	retval = -ERESTARTNOINTR;
+-	if (signal_pending(current))
++	if (task_sigpending(current))
+ 		goto fork_out;
+ 
+ 	retval = -ENOMEM;
+ 	p = dup_task_struct(current, node);
+ 	if (!p)
+ 		goto fork_out;
++	if (args->io_thread) {
++		/*
++		 * Mark us an IO worker, and block any signal that isn't
++		 * fatal or STOP
++		 */
++		p->flags |= PF_IO_WORKER;
++		siginitsetinv(&p->blocked, sigmask(SIGKILL)|sigmask(SIGSTOP));
++	}
+ 
+ 	/*
+ 	 * This _must_ happen before we call free_task(), i.e. before we jump
+@@ -2415,6 +2424,28 @@ struct mm_struct *copy_init_mm(void)
+ 	return dup_mm(NULL, &init_mm);
+ }
+ 
++/*
++ * This is like kernel_clone(), but shaved down and tailored to just
++ * creating io_uring workers. It returns a created task, or an error pointer.
++ * The returned task is inactive, and the caller must fire it up through
++ * wake_up_new_task(p). All signals are blocked in the created task.
++ */
++struct task_struct *create_io_thread(int (*fn)(void *), void *arg, int node)
++{
++	unsigned long flags = CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|
++				CLONE_IO;
++	struct kernel_clone_args args = {
++		.flags		= ((lower_32_bits(flags) | CLONE_VM |
++				    CLONE_UNTRACED) & ~CSIGNAL),
++		.exit_signal	= (lower_32_bits(flags) & CSIGNAL),
++		.stack		= (unsigned long)fn,
++		.stack_size	= (unsigned long)arg,
++		.io_thread	= 1,
++	};
++
++	return copy_process(NULL, 0, node, &args);
++}
++
+ /*
+  *  Ok, this is the main fork-routine.
+  *
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index da96a309eefed..a875bc59804eb 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -21,7 +21,7 @@
+ #include <asm/tlb.h>
+ 
+ #include "../workqueue_internal.h"
+-#include "../../fs/io-wq.h"
++#include "../../io_uring/io-wq.h"
+ #include "../smpboot.h"
+ 
+ #include "pelt.h"
+diff --git a/kernel/signal.c b/kernel/signal.c
+index d05f783d5a5e6..e487c4660921d 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -984,7 +984,7 @@ static inline bool wants_signal(int sig, struct task_struct *p)
+ 	if (task_is_stopped_or_traced(p))
+ 		return false;
+ 
+-	return task_curr(p) || !signal_pending(p);
++	return task_curr(p) || !task_sigpending(p);
+ }
+ 
+ static void complete_signal(int sig, struct task_struct *p, enum pid_type type)
+@@ -2520,6 +2520,21 @@ bool get_signal(struct ksignal *ksig)
+ 	struct signal_struct *signal = current->signal;
+ 	int signr;
+ 
++	if (unlikely(current->task_works))
++		task_work_run();
++
++	/*
++	 * For non-generic architectures, check for TIF_NOTIFY_SIGNAL so
++	 * that the arch handlers don't all have to do it. If we get here
++	 * without TIF_SIGPENDING, just exit after running signal work.
++	 */
++	if (!IS_ENABLED(CONFIG_GENERIC_ENTRY)) {
++		if (test_thread_flag(TIF_NOTIFY_SIGNAL))
++			tracehook_notify_signal();
++		if (!task_sigpending(current))
++			return false;
++	}
++
+ 	if (unlikely(uprobe_deny_signal()))
+ 		return false;
+ 
+@@ -2532,26 +2547,6 @@ bool get_signal(struct ksignal *ksig)
+ 
+ relock:
+ 	spin_lock_irq(&sighand->siglock);
+-	/*
+-	 * Make sure we can safely read ->jobctl() in task_work add. As Oleg
+-	 * states:
+-	 *
+-	 * It pairs with mb (implied by cmpxchg) before READ_ONCE. So we
+-	 * roughly have
+-	 *
+-	 *	task_work_add:				get_signal:
+-	 *	STORE(task->task_works, new_work);	STORE(task->jobctl);
+-	 *	mb();					mb();
+-	 *	LOAD(task->jobctl);			LOAD(task->task_works);
+-	 *
+-	 * and we can rely on STORE-MB-LOAD [ in task_work_add].
+-	 */
+-	smp_store_mb(current->jobctl, current->jobctl & ~JOBCTL_TASK_WORK);
+-	if (unlikely(current->task_works)) {
+-		spin_unlock_irq(&sighand->siglock);
+-		task_work_run();
+-		goto relock;
+-	}
+ 
+ 	/*
+ 	 * Every stopped thread goes here after wakeup. Check to see if
+@@ -2742,6 +2737,14 @@ relock:
+ 			do_coredump(&ksig->info);
+ 		}
+ 
++		/*
++		 * PF_IO_WORKER threads will catch and exit on fatal signals
++		 * themselves. They have cleanup that must be performed, so
++		 * we cannot call do_exit() on their behalf.
++		 */
++		if (current->flags & PF_IO_WORKER)
++			goto out;
++
+ 		/*
+ 		 * Death signals, no core dump.
+ 		 */
+@@ -2749,7 +2752,7 @@ relock:
+ 		/* NOTREACHED */
+ 	}
+ 	spin_unlock_irq(&sighand->siglock);
+-
++out:
+ 	ksig->sig = signr;
+ 	return ksig->sig > 0;
+ }
+@@ -2813,7 +2816,7 @@ static void retarget_shared_pending(struct task_struct *tsk, sigset_t *which)
+ 		/* Remove the signals this thread can handle. */
+ 		sigandsets(&retarget, &retarget, &t->blocked);
+ 
+-		if (!signal_pending(t))
++		if (!task_sigpending(t))
+ 			signal_wake_up(t, 0);
+ 
+ 		if (sigisemptyset(&retarget))
+@@ -2847,7 +2850,7 @@ void exit_signals(struct task_struct *tsk)
+ 
+ 	cgroup_threadgroup_change_end(tsk);
+ 
+-	if (!signal_pending(tsk))
++	if (!task_sigpending(tsk))
+ 		goto out;
+ 
+ 	unblocked = tsk->blocked;
+@@ -2891,7 +2894,7 @@ long do_no_restart_syscall(struct restart_block *param)
+ 
+ static void __set_task_blocked(struct task_struct *tsk, const sigset_t *newset)
+ {
+-	if (signal_pending(tsk) && !thread_group_empty(tsk)) {
++	if (task_sigpending(tsk) && !thread_group_empty(tsk)) {
+ 		sigset_t newblocked;
+ 		/* A set of now blocked but previously unblocked signals. */
+ 		sigandnsets(&newblocked, newset, &current->blocked);
+diff --git a/kernel/task_work.c b/kernel/task_work.c
+index 8d6e1217c451c..e9316198c64bf 100644
+--- a/kernel/task_work.c
++++ b/kernel/task_work.c
+@@ -33,7 +33,6 @@ int task_work_add(struct task_struct *task, struct callback_head *work,
+ 		  enum task_work_notify_mode notify)
+ {
+ 	struct callback_head *head;
+-	unsigned long flags;
+ 
+ 	do {
+ 		head = READ_ONCE(task->task_works);
+@@ -49,17 +48,7 @@ int task_work_add(struct task_struct *task, struct callback_head *work,
+ 		set_notify_resume(task);
+ 		break;
+ 	case TWA_SIGNAL:
+-		/*
+-		 * Only grab the sighand lock if we don't already have some
+-		 * task_work pending. This pairs with the smp_store_mb()
+-		 * in get_signal(), see comment there.
+-		 */
+-		if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
+-		    lock_task_sighand(task, &flags)) {
+-			task->jobctl |= JOBCTL_TASK_WORK;
+-			signal_wake_up(task, 0);
+-			unlock_task_sighand(task, &flags);
+-		}
++		set_notify_signal(task);
+ 		break;
+ 	default:
+ 		WARN_ON_ONCE(1);
+@@ -70,18 +59,17 @@ int task_work_add(struct task_struct *task, struct callback_head *work,
+ }
+ 
+ /**
+- * task_work_cancel - cancel a pending work added by task_work_add()
++ * task_work_cancel_match - cancel a pending work added by task_work_add()
+  * @task: the task which should execute the work
+- * @func: identifies the work to remove
+- *
+- * Find the last queued pending work with ->func == @func and remove
+- * it from queue.
++ * @match: match function to call
+  *
+  * RETURNS:
+  * The found work or NULL if not found.
+  */
+ struct callback_head *
+-task_work_cancel(struct task_struct *task, task_work_func_t func)
++task_work_cancel_match(struct task_struct *task,
++		       bool (*match)(struct callback_head *, void *data),
++		       void *data)
+ {
+ 	struct callback_head **pprev = &task->task_works;
+ 	struct callback_head *work;
+@@ -97,7 +85,7 @@ task_work_cancel(struct task_struct *task, task_work_func_t func)
+ 	 */
+ 	raw_spin_lock_irqsave(&task->pi_lock, flags);
+ 	while ((work = READ_ONCE(*pprev))) {
+-		if (work->func != func)
++		if (!match(work, data))
+ 			pprev = &work->next;
+ 		else if (cmpxchg(pprev, work, work->next) == work)
+ 			break;
+@@ -107,6 +95,28 @@ task_work_cancel(struct task_struct *task, task_work_func_t func)
+ 	return work;
+ }
+ 
++static bool task_work_func_match(struct callback_head *cb, void *data)
++{
++	return cb->func == data;
++}
++
++/**
++ * task_work_cancel - cancel a pending work added by task_work_add()
++ * @task: the task which should execute the work
++ * @func: identifies the work to remove
++ *
++ * Find the last queued pending work with ->func == @func and remove
++ * it from queue.
++ *
++ * RETURNS:
++ * The found work or NULL if not found.
++ */
++struct callback_head *
++task_work_cancel(struct task_struct *task, task_work_func_t func)
++{
++	return task_work_cancel_match(task, task_work_func_match, func);
++}
++
+ /**
+  * task_work_run - execute the works added by task_work_add()
+  *
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 1b0a349fbcd92..650554964f181 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -1836,24 +1836,38 @@ int import_single_range(int rw, void __user *buf, size_t len,
+ }
+ EXPORT_SYMBOL(import_single_range);
+ 
+-int iov_iter_for_each_range(struct iov_iter *i, size_t bytes,
+-			    int (*f)(struct kvec *vec, void *context),
+-			    void *context)
++/**
++ * iov_iter_restore() - Restore a &struct iov_iter to the same state as when
++ *     iov_iter_save_state() was called.
++ *
++ * @i: &struct iov_iter to restore
++ * @state: state to restore from
++ *
++ * Used after iov_iter_save_state() to bring restore @i, if operations may
++ * have advanced it.
++ *
++ * Note: only works on ITER_IOVEC, ITER_BVEC, and ITER_KVEC
++ */
++void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state)
+ {
+-	struct kvec w;
+-	int err = -EINVAL;
+-	if (!bytes)
+-		return 0;
+-
+-	iterate_all_kinds(i, bytes, v, -EINVAL, ({
+-		w.iov_base = kmap(v.bv_page) + v.bv_offset;
+-		w.iov_len = v.bv_len;
+-		err = f(&w, context);
+-		kunmap(v.bv_page);
+-		err;}), ({
+-		w = v;
+-		err = f(&w, context);})
+-	)
+-	return err;
++	if (WARN_ON_ONCE(!iov_iter_is_bvec(i) && !iter_is_iovec(i)) &&
++			 !iov_iter_is_kvec(i))
++		return;
++	i->iov_offset = state->iov_offset;
++	i->count = state->count;
++	/*
++	 * For the *vec iters, nr_segs + iov is constant - if we increment
++	 * the vec, then we also decrement the nr_segs count. Hence we don't
++	 * need to track both of these, just one is enough and we can deduct
++	 * the other from that. ITER_KVEC and ITER_IOVEC are the same struct
++	 * size, so we can just increment the iov pointer as they are unionzed.
++	 * ITER_BVEC _may_ be the same size on some archs, but on others it is
++	 * not. Be safe and handle it separately.
++	 */
++	BUILD_BUG_ON(sizeof(struct iovec) != sizeof(struct kvec));
++	if (iov_iter_is_bvec(i))
++		i->bvec -= state->nr_segs - i->nr_segs;
++	else
++		i->iov -= state->nr_segs - i->nr_segs;
++	i->nr_segs = state->nr_segs;
+ }
+-EXPORT_SYMBOL(iov_iter_for_each_range);
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 48223c264991b..8dab0d311aba3 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1017,7 +1017,6 @@ static int inet_compat_ioctl(struct socket *sock, unsigned int cmd, unsigned lon
+ 
+ const struct proto_ops inet_stream_ops = {
+ 	.family		   = PF_INET,
+-	.flags		   = PROTO_CMSG_DATA_ONLY,
+ 	.owner		   = THIS_MODULE,
+ 	.release	   = inet_release,
+ 	.bind		   = inet_bind,
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index d30c9d949c1b5..4df9dc9375c8e 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -661,7 +661,6 @@ int inet6_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 
+ const struct proto_ops inet6_stream_ops = {
+ 	.family		   = PF_INET6,
+-	.flags		   = PROTO_CMSG_DATA_ONLY,
+ 	.owner		   = THIS_MODULE,
+ 	.release	   = inet6_release,
+ 	.bind		   = inet6_bind,
+diff --git a/net/socket.c b/net/socket.c
+index bcf68b150fe29..8657112a687a4 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -1688,30 +1688,22 @@ SYSCALL_DEFINE2(listen, int, fd, int, backlog)
+ 	return __sys_listen(fd, backlog);
+ }
+ 
+-int __sys_accept4_file(struct file *file, unsigned file_flags,
++struct file *do_accept(struct file *file, unsigned file_flags,
+ 		       struct sockaddr __user *upeer_sockaddr,
+-		       int __user *upeer_addrlen, int flags,
+-		       unsigned long nofile)
++		       int __user *upeer_addrlen, int flags)
+ {
+ 	struct socket *sock, *newsock;
+ 	struct file *newfile;
+-	int err, len, newfd;
++	int err, len;
+ 	struct sockaddr_storage address;
+ 
+-	if (flags & ~(SOCK_CLOEXEC | SOCK_NONBLOCK))
+-		return -EINVAL;
+-
+-	if (SOCK_NONBLOCK != O_NONBLOCK && (flags & SOCK_NONBLOCK))
+-		flags = (flags & ~SOCK_NONBLOCK) | O_NONBLOCK;
+-
+ 	sock = sock_from_file(file, &err);
+ 	if (!sock)
+-		goto out;
++		return ERR_PTR(err);
+ 
+-	err = -ENFILE;
+ 	newsock = sock_alloc();
+ 	if (!newsock)
+-		goto out;
++		return ERR_PTR(-ENFILE);
+ 
+ 	newsock->type = sock->type;
+ 	newsock->ops = sock->ops;
+@@ -1722,18 +1714,9 @@ int __sys_accept4_file(struct file *file, unsigned file_flags,
+ 	 */
+ 	__module_get(newsock->ops->owner);
+ 
+-	newfd = __get_unused_fd_flags(flags, nofile);
+-	if (unlikely(newfd < 0)) {
+-		err = newfd;
+-		sock_release(newsock);
+-		goto out;
+-	}
+ 	newfile = sock_alloc_file(newsock, flags, sock->sk->sk_prot_creator->name);
+-	if (IS_ERR(newfile)) {
+-		err = PTR_ERR(newfile);
+-		put_unused_fd(newfd);
+-		goto out;
+-	}
++	if (IS_ERR(newfile))
++		return newfile;
+ 
+ 	err = security_socket_accept(sock, newsock);
+ 	if (err)
+@@ -1758,16 +1741,38 @@ int __sys_accept4_file(struct file *file, unsigned file_flags,
+ 	}
+ 
+ 	/* File flags are not inherited via accept() unlike another OSes. */
+-
+-	fd_install(newfd, newfile);
+-	err = newfd;
+-out:
+-	return err;
++	return newfile;
+ out_fd:
+ 	fput(newfile);
+-	put_unused_fd(newfd);
+-	goto out;
++	return ERR_PTR(err);
++}
+ 
++int __sys_accept4_file(struct file *file, unsigned file_flags,
++		       struct sockaddr __user *upeer_sockaddr,
++		       int __user *upeer_addrlen, int flags,
++		       unsigned long nofile)
++{
++	struct file *newfile;
++	int newfd;
++
++	if (flags & ~(SOCK_CLOEXEC | SOCK_NONBLOCK))
++		return -EINVAL;
++
++	if (SOCK_NONBLOCK != O_NONBLOCK && (flags & SOCK_NONBLOCK))
++		flags = (flags & ~SOCK_NONBLOCK) | O_NONBLOCK;
++
++	newfd = __get_unused_fd_flags(flags, nofile);
++	if (unlikely(newfd < 0))
++		return newfd;
++
++	newfile = do_accept(file, file_flags, upeer_sockaddr, upeer_addrlen,
++			    flags);
++	if (IS_ERR(newfile)) {
++		put_unused_fd(newfd);
++		return PTR_ERR(newfile);
++	}
++	fd_install(newfd, newfile);
++	return newfd;
+ }
+ 
+ /*
+@@ -2181,6 +2186,17 @@ SYSCALL_DEFINE5(getsockopt, int, fd, int, level, int, optname,
+  *	Shutdown a socket.
+  */
+ 
++int __sys_shutdown_sock(struct socket *sock, int how)
++{
++	int err;
++
++	err = security_socket_shutdown(sock, how);
++	if (!err)
++		err = sock->ops->shutdown(sock, how);
++
++	return err;
++}
++
+ int __sys_shutdown(int fd, int how)
+ {
+ 	int err, fput_needed;
+@@ -2188,9 +2204,7 @@ int __sys_shutdown(int fd, int how)
+ 
+ 	sock = sockfd_lookup_light(fd, &err, &fput_needed);
+ 	if (sock != NULL) {
+-		err = security_socket_shutdown(sock, how);
+-		if (!err)
+-			err = sock->ops->shutdown(sock, how);
++		err = __sys_shutdown_sock(sock, how);
+ 		fput_light(sock->file, fput_needed);
+ 	}
+ 	return err;
+@@ -2405,10 +2419,6 @@ static int ___sys_sendmsg(struct socket *sock, struct user_msghdr __user *msg,
+ long __sys_sendmsg_sock(struct socket *sock, struct msghdr *msg,
+ 			unsigned int flags)
+ {
+-	/* disallow ancillary data requests from this path */
+-	if (msg->msg_control || msg->msg_controllen)
+-		return -EINVAL;
+-
+ 	return ____sys_sendmsg(sock, msg, flags, NULL, 0);
+ }
+ 
+@@ -2617,12 +2627,6 @@ long __sys_recvmsg_sock(struct socket *sock, struct msghdr *msg,
+ 			struct user_msghdr __user *umsg,
+ 			struct sockaddr __user *uaddr, unsigned int flags)
+ {
+-	if (msg->msg_control || msg->msg_controllen) {
+-		/* disallow ancillary data reqs unless cmsg is plain data */
+-		if (!(sock->ops->flags & PROTO_CMSG_DATA_ONLY))
+-			return -EINVAL;
+-	}
+-
+ 	return ____sys_recvmsg(sock, msg, umsg, uaddr, flags, 0);
+ }
+ 
+diff --git a/tools/include/uapi/linux/openat2.h b/tools/include/uapi/linux/openat2.h
+index 58b1eb7113600..a5feb76049487 100644
+--- a/tools/include/uapi/linux/openat2.h
++++ b/tools/include/uapi/linux/openat2.h
+@@ -35,5 +35,9 @@ struct open_how {
+ #define RESOLVE_IN_ROOT		0x10 /* Make all jumps to "/" and ".."
+ 					be scoped inside the dirfd
+ 					(similar to chroot(2)). */
++#define RESOLVE_CACHED		0x20 /* Only complete if resolution can be
++					completed through cached lookup. May
++					return -EAGAIN if that's not
++					possible. */
+ 
+ #endif /* _UAPI_LINUX_OPENAT2_H */


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-01-14 13:52 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-01-14 13:52 UTC (permalink / raw
  To: gentoo-commits

commit:     178e06aa262efb3ff4aba373f287ed0281f6eafa
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jan 14 13:52:12 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jan 14 13:52:12 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=178e06aa

Linux patch 5.10.163

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1162_linux-5.10.163.patch | 37903 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 37907 insertions(+)

diff --git a/0000_README b/0000_README
index d2b8c150..fd63814b 100644
--- a/0000_README
+++ b/0000_README
@@ -691,6 +691,10 @@ Patch:  1161_linux-5.10.162.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.162
 
+Patch:  1162_linux-5.10.163.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.163
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1162_linux-5.10.163.patch b/1162_linux-5.10.163.patch
new file mode 100644
index 00000000..69dc7879
--- /dev/null
+++ b/1162_linux-5.10.163.patch
@@ -0,0 +1,37903 @@
+diff --git a/Documentation/devicetree/bindings/sound/qcom,wcd9335.txt b/Documentation/devicetree/bindings/sound/qcom,wcd9335.txt
+index 5d6ea66a863fe..1f75feec3dec6 100644
+--- a/Documentation/devicetree/bindings/sound/qcom,wcd9335.txt
++++ b/Documentation/devicetree/bindings/sound/qcom,wcd9335.txt
+@@ -109,7 +109,7 @@ audio-codec@1{
+ 	reg  = <1 0>;
+ 	interrupts = <&msmgpio 54 IRQ_TYPE_LEVEL_HIGH>;
+ 	interrupt-names = "intr2"
+-	reset-gpios = <&msmgpio 64 0>;
++	reset-gpios = <&msmgpio 64 GPIO_ACTIVE_LOW>;
+ 	slim-ifc-dev  = <&wc9335_ifd>;
+ 	clock-names = "mclk", "native";
+ 	clocks = <&rpmcc RPM_SMD_DIV_CLK1>,
+diff --git a/Documentation/driver-api/spi.rst b/Documentation/driver-api/spi.rst
+index f64cb666498aa..f28887045049d 100644
+--- a/Documentation/driver-api/spi.rst
++++ b/Documentation/driver-api/spi.rst
+@@ -25,8 +25,8 @@ hardware, which may be as simple as a set of GPIO pins or as complex as
+ a pair of FIFOs connected to dual DMA engines on the other side of the
+ SPI shift register (maximizing throughput). Such drivers bridge between
+ whatever bus they sit on (often the platform bus) and SPI, and expose
+-the SPI side of their device as a :c:type:`struct spi_master
+-<spi_master>`. SPI devices are children of that master,
++the SPI side of their device as a :c:type:`struct spi_controller
++<spi_controller>`. SPI devices are children of that master,
+ represented as a :c:type:`struct spi_device <spi_device>` and
+ manufactured from :c:type:`struct spi_board_info
+ <spi_board_info>` descriptors which are usually provided by
+diff --git a/Documentation/fault-injection/fault-injection.rst b/Documentation/fault-injection/fault-injection.rst
+index 31ecfe44e5b45..47de5006f6456 100644
+--- a/Documentation/fault-injection/fault-injection.rst
++++ b/Documentation/fault-injection/fault-injection.rst
+@@ -78,8 +78,8 @@ configuration of fault-injection capabilities.
+ 
+ - /sys/kernel/debug/fail*/times:
+ 
+-	specifies how many times failures may happen at most.
+-	A value of -1 means "no limit".
++	specifies how many times failures may happen at most. A value of -1
++	means "no limit".
+ 
+ - /sys/kernel/debug/fail*/space:
+ 
+@@ -167,11 +167,13 @@ configuration of fault-injection capabilities.
+ 	- ERRNO: retval must be -1 to -MAX_ERRNO (-4096).
+ 	- ERR_NULL: retval must be 0 or -1 to -MAX_ERRNO (-4096).
+ 
+-- /sys/kernel/debug/fail_function/<functiuon-name>/retval:
++- /sys/kernel/debug/fail_function/<function-name>/retval:
+ 
+-	specifies the "error" return value to inject to the given
+-	function for given function. This will be created when
+-	user specifies new injection entry.
++	specifies the "error" return value to inject to the given function.
++	This will be created when the user specifies a new injection entry.
++	Note that this file only accepts unsigned values. So, if you want to
++	use a negative errno, you better use 'printf' instead of 'echo', e.g.:
++	$ printf %#x -12 > retval
+ 
+ Boot option
+ ^^^^^^^^^^^
+@@ -336,7 +338,7 @@ Application Examples
+     FAILTYPE=fail_function
+     FAILFUNC=open_ctree
+     echo $FAILFUNC > /sys/kernel/debug/$FAILTYPE/inject
+-    echo -12 > /sys/kernel/debug/$FAILTYPE/$FAILFUNC/retval
++    printf %#x -12 > /sys/kernel/debug/$FAILTYPE/$FAILFUNC/retval
+     echo N > /sys/kernel/debug/$FAILTYPE/task-filter
+     echo 100 > /sys/kernel/debug/$FAILTYPE/probability
+     echo 0 > /sys/kernel/debug/$FAILTYPE/interval
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 4d10e79030a9c..f6c6b403a1b7c 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -7280,7 +7280,7 @@ F:	Documentation/locking/*futex*
+ F:	include/asm-generic/futex.h
+ F:	include/linux/futex.h
+ F:	include/uapi/linux/futex.h
+-F:	kernel/futex.c
++F:	kernel/futex/*
+ F:	tools/perf/bench/futex*
+ F:	tools/testing/selftests/futex/
+ 
+diff --git a/Makefile b/Makefile
+index 33422c7d149e1..98fc6e7fd41df 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 162
++SUBLEVEL = 163
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -1425,7 +1425,6 @@ endif
+ 
+ PHONY += modules
+ modules: $(if $(KBUILD_BUILTIN),vmlinux) modules_check modules_prepare
+-	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost
+ 
+ PHONY += modules_check
+ modules_check: modules.order
+@@ -1443,12 +1442,9 @@ PHONY += modules_prepare
+ modules_prepare: prepare
+ 	$(Q)$(MAKE) $(build)=scripts scripts/module.lds
+ 
+-# Target to install modules
+-PHONY += modules_install
+-modules_install: _modinst_ _modinst_post
+-
+-PHONY += _modinst_
+-_modinst_:
++modules_install: __modinst_pre
++PHONY += __modinst_pre
++__modinst_pre:
+ 	@rm -rf $(MODLIB)/kernel
+ 	@rm -f $(MODLIB)/source
+ 	@mkdir -p $(MODLIB)/kernel
+@@ -1460,14 +1456,6 @@ _modinst_:
+ 	@sed 's:^:kernel/:' modules.order > $(MODLIB)/modules.order
+ 	@cp -f modules.builtin $(MODLIB)/
+ 	@cp -f $(objtree)/modules.builtin.modinfo $(MODLIB)/
+-	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modinst
+-
+-# This depmod is only for convenience to give the initial
+-# boot a modules.dep even before / is mounted read-write.  However the
+-# boot script depmod is the master version.
+-PHONY += _modinst_post
+-_modinst_post: _modinst_
+-	$(call cmd,depmod)
+ 
+ ifeq ($(CONFIG_MODULE_SIG), y)
+ PHONY += modules_sign
+@@ -1475,20 +1463,6 @@ modules_sign:
+ 	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modsign
+ endif
+ 
+-else # CONFIG_MODULES
+-
+-# Modules not configured
+-# ---------------------------------------------------------------------------
+-
+-PHONY += modules modules_install
+-modules modules_install:
+-	@echo >&2
+-	@echo >&2 "The present kernel configuration has modules disabled."
+-	@echo >&2 "Type 'make config' and enable loadable module support."
+-	@echo >&2 "Then build a kernel with module support enabled."
+-	@echo >&2
+-	@exit 1
+-
+ endif # CONFIG_MODULES
+ 
+ ###
+@@ -1736,26 +1710,9 @@ KBUILD_BUILTIN :=
+ KBUILD_MODULES := 1
+ 
+ build-dirs := $(KBUILD_EXTMOD)
+-PHONY += modules
+-modules: $(MODORDER)
+-	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost
+-
+ $(MODORDER): descend
+ 	@:
+ 
+-PHONY += modules_install
+-modules_install: _emodinst_ _emodinst_post
+-
+-install-dir := $(if $(INSTALL_MOD_DIR),$(INSTALL_MOD_DIR),extra)
+-PHONY += _emodinst_
+-_emodinst_:
+-	$(Q)mkdir -p $(MODLIB)/$(install-dir)
+-	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modinst
+-
+-PHONY += _emodinst_post
+-_emodinst_post: _emodinst_
+-	$(call cmd,depmod)
+-
+ compile_commands.json: $(extmod-prefix)compile_commands.json
+ PHONY += compile_commands.json
+ 
+@@ -1778,6 +1735,41 @@ PHONY += prepare modules_prepare
+ 
+ endif # KBUILD_EXTMOD
+ 
++# ---------------------------------------------------------------------------
++# Modules
++
++PHONY += modules modules_install
++
++ifdef CONFIG_MODULES
++
++modules: $(MODORDER)
++	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost
++
++quiet_cmd_depmod = DEPMOD  $(KERNELRELEASE)
++      cmd_depmod = $(CONFIG_SHELL) $(srctree)/scripts/depmod.sh $(DEPMOD) \
++                   $(KERNELRELEASE)
++
++modules_install:
++	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modinst
++	$(call cmd,depmod)
++
++else # CONFIG_MODULES
++
++# Modules not configured
++# ---------------------------------------------------------------------------
++
++modules modules_install:
++	@echo >&2 '***'
++	@echo >&2 '*** The present kernel configuration has modules disabled.'
++	@echo >&2 '*** To use the module feature, please run "make menuconfig" etc.'
++	@echo >&2 '*** to enable CONFIG_MODULES.'
++	@echo >&2 '***'
++	@exit 1
++
++KBUILD_MODULES :=
++
++endif # CONFIG_MODULES
++
+ # Single targets
+ # ---------------------------------------------------------------------------
+ # To build individual files in subdirectories, you can do like this:
+@@ -1801,18 +1793,12 @@ $(single-ko): single_modpost
+ $(single-no-ko): descend
+ 	@:
+ 
+-ifeq ($(KBUILD_EXTMOD),)
+-# For the single build of in-tree modules, use a temporary file to avoid
+-# the situation of modules_install installing an invalid modules.order.
+-MODORDER := .modules.tmp
+-endif
+-
++# Remove MODORDER when done because it is not the real one.
+ PHONY += single_modpost
+ single_modpost: $(single-no-ko) modules_prepare
+ 	$(Q){ $(foreach m, $(single-ko), echo $(extmod-prefix)$m;) } > $(MODORDER)
+ 	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost
+-
+-KBUILD_MODULES := 1
++	$(Q)rm -f $(MODORDER)
+ 
+ export KBUILD_SINGLE_TARGETS := $(addprefix $(extmod-prefix), $(single-no-ko))
+ 
+@@ -1822,10 +1808,6 @@ build-dirs := $(foreach d, $(build-dirs), \
+ 
+ endif
+ 
+-ifndef CONFIG_MODULES
+-KBUILD_MODULES :=
+-endif
+-
+ # Handle descending into subdirectories listed in $(build-dirs)
+ # Preset locale variables to speed up the build process. Limit locale
+ # tweaks to this spot to avoid wrong language settings when running
+@@ -1965,11 +1947,6 @@ tools/%: FORCE
+ quiet_cmd_rmfiles = $(if $(wildcard $(rm-files)),CLEAN   $(wildcard $(rm-files)))
+       cmd_rmfiles = rm -rf $(rm-files)
+ 
+-# Run depmod only if we have System.map and depmod is executable
+-quiet_cmd_depmod = DEPMOD  $(KERNELRELEASE)
+-      cmd_depmod = $(CONFIG_SHELL) $(srctree)/scripts/depmod.sh $(DEPMOD) \
+-                   $(KERNELRELEASE)
+-
+ # read saved command lines for existing targets
+ existing-targets := $(wildcard $(sort $(targets)))
+ 
+diff --git a/arch/alpha/kernel/entry.S b/arch/alpha/kernel/entry.S
+index e227f3a29a43c..c41a5a9c3b9f2 100644
+--- a/arch/alpha/kernel/entry.S
++++ b/arch/alpha/kernel/entry.S
+@@ -469,8 +469,10 @@ entSys:
+ #ifdef CONFIG_AUDITSYSCALL
+ 	lda     $6, _TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT
+ 	and     $3, $6, $3
+-#endif
+ 	bne     $3, strace
++#else
++	blbs    $3, strace		/* check for SYSCALL_TRACE in disguise */
++#endif
+ 	beq	$4, 1f
+ 	ldq	$27, 0($5)
+ 1:	jsr	$26, ($27), sys_ni_syscall
+diff --git a/arch/arm/boot/dts/armada-370.dtsi b/arch/arm/boot/dts/armada-370.dtsi
+index 46e6d3ed8f35a..c042c416a94a3 100644
+--- a/arch/arm/boot/dts/armada-370.dtsi
++++ b/arch/arm/boot/dts/armada-370.dtsi
+@@ -74,7 +74,7 @@
+ 
+ 			pcie2: pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x80000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-375.dtsi b/arch/arm/boot/dts/armada-375.dtsi
+index 9805e507c695c..d117fc4ae6d93 100644
+--- a/arch/arm/boot/dts/armada-375.dtsi
++++ b/arch/arm/boot/dts/armada-375.dtsi
+@@ -582,7 +582,7 @@
+ 
+ 			pcie1: pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x44000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-380.dtsi b/arch/arm/boot/dts/armada-380.dtsi
+index cff1269f3fbfd..7146cc8f082af 100644
+--- a/arch/arm/boot/dts/armada-380.dtsi
++++ b/arch/arm/boot/dts/armada-380.dtsi
+@@ -79,7 +79,7 @@
+ 			/* x1 port */
+ 			pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x40000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x40000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -98,7 +98,7 @@
+ 			/* x1 port */
+ 			pcie@3,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++				assigned-addresses = <0x82001800 0 0x44000 0 0x2000>;
+ 				reg = <0x1800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-385-turris-omnia.dts b/arch/arm/boot/dts/armada-385-turris-omnia.dts
+index 92e08486ec81f..320c759b4090f 100644
+--- a/arch/arm/boot/dts/armada-385-turris-omnia.dts
++++ b/arch/arm/boot/dts/armada-385-turris-omnia.dts
+@@ -22,6 +22,12 @@
+ 		stdout-path = &uart0;
+ 	};
+ 
++	aliases {
++		ethernet0 = &eth0;
++		ethernet1 = &eth1;
++		ethernet2 = &eth2;
++	};
++
+ 	memory {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x40000000>; /* 1024 MB */
+@@ -291,7 +297,17 @@
+ 				};
+ 			};
+ 
+-			/* port 6 is connected to eth0 */
++			ports@6 {
++				reg = <6>;
++				label = "cpu";
++				ethernet = <&eth0>;
++				phy-mode = "rgmii-id";
++
++				fixed-link {
++					speed = <1000>;
++					full-duplex;
++				};
++			};
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/armada-385.dtsi b/arch/arm/boot/dts/armada-385.dtsi
+index f0022d10c7159..f081f7cb66e5f 100644
+--- a/arch/arm/boot/dts/armada-385.dtsi
++++ b/arch/arm/boot/dts/armada-385.dtsi
+@@ -84,7 +84,7 @@
+ 			/* x1 port */
+ 			pcie2: pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x40000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x40000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -103,7 +103,7 @@
+ 			/* x1 port */
+ 			pcie3: pcie@3,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++				assigned-addresses = <0x82001800 0 0x44000 0 0x2000>;
+ 				reg = <0x1800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -125,7 +125,7 @@
+ 			 */
+ 			pcie4: pcie@4,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x48000 0 0x2000>;
++				assigned-addresses = <0x82002000 0 0x48000 0 0x2000>;
+ 				reg = <0x2000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-39x.dtsi b/arch/arm/boot/dts/armada-39x.dtsi
+index e0b7c20998312..9525e7b7f4360 100644
+--- a/arch/arm/boot/dts/armada-39x.dtsi
++++ b/arch/arm/boot/dts/armada-39x.dtsi
+@@ -453,7 +453,7 @@
+ 			/* x1 port */
+ 			pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x40000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x40000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -472,7 +472,7 @@
+ 			/* x1 port */
+ 			pcie@3,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++				assigned-addresses = <0x82001800 0 0x44000 0 0x2000>;
+ 				reg = <0x1800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -494,7 +494,7 @@
+ 			 */
+ 			pcie@4,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x48000 0 0x2000>;
++				assigned-addresses = <0x82002000 0 0x48000 0 0x2000>;
+ 				reg = <0x2000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-xp-mv78230.dtsi b/arch/arm/boot/dts/armada-xp-mv78230.dtsi
+index 8558bf6bb54c6..d55fe162fc7f0 100644
+--- a/arch/arm/boot/dts/armada-xp-mv78230.dtsi
++++ b/arch/arm/boot/dts/armada-xp-mv78230.dtsi
+@@ -97,7 +97,7 @@
+ 
+ 			pcie2: pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x44000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -115,7 +115,7 @@
+ 
+ 			pcie3: pcie@3,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x48000 0 0x2000>;
++				assigned-addresses = <0x82001800 0 0x48000 0 0x2000>;
+ 				reg = <0x1800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -133,7 +133,7 @@
+ 
+ 			pcie4: pcie@4,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x4c000 0 0x2000>;
++				assigned-addresses = <0x82002000 0 0x4c000 0 0x2000>;
+ 				reg = <0x2000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -151,7 +151,7 @@
+ 
+ 			pcie5: pcie@5,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x80000 0 0x2000>;
++				assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
+ 				reg = <0x2800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-xp-mv78260.dtsi b/arch/arm/boot/dts/armada-xp-mv78260.dtsi
+index 2d85fe8ac3272..fdcc818199401 100644
+--- a/arch/arm/boot/dts/armada-xp-mv78260.dtsi
++++ b/arch/arm/boot/dts/armada-xp-mv78260.dtsi
+@@ -112,7 +112,7 @@
+ 
+ 			pcie2: pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x44000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -130,7 +130,7 @@
+ 
+ 			pcie3: pcie@3,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x48000 0 0x2000>;
++				assigned-addresses = <0x82001800 0 0x48000 0 0x2000>;
+ 				reg = <0x1800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -148,7 +148,7 @@
+ 
+ 			pcie4: pcie@4,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x4c000 0 0x2000>;
++				assigned-addresses = <0x82002000 0 0x4c000 0 0x2000>;
+ 				reg = <0x2000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -166,7 +166,7 @@
+ 
+ 			pcie5: pcie@5,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x80000 0 0x2000>;
++				assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
+ 				reg = <0x2800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -184,7 +184,7 @@
+ 
+ 			pcie6: pcie@6,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x84000 0 0x2000>;
++				assigned-addresses = <0x82003000 0 0x84000 0 0x2000>;
+ 				reg = <0x3000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -202,7 +202,7 @@
+ 
+ 			pcie7: pcie@7,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x88000 0 0x2000>;
++				assigned-addresses = <0x82003800 0 0x88000 0 0x2000>;
+ 				reg = <0x3800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -220,7 +220,7 @@
+ 
+ 			pcie8: pcie@8,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x8c000 0 0x2000>;
++				assigned-addresses = <0x82004000 0 0x8c000 0 0x2000>;
+ 				reg = <0x4000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -238,7 +238,7 @@
+ 
+ 			pcie9: pcie@9,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x42000 0 0x2000>;
++				assigned-addresses = <0x82004800 0 0x42000 0 0x2000>;
+ 				reg = <0x4800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/dove.dtsi b/arch/arm/boot/dts/dove.dtsi
+index 89e0bdaf3a85f..726d353eda686 100644
+--- a/arch/arm/boot/dts/dove.dtsi
++++ b/arch/arm/boot/dts/dove.dtsi
+@@ -129,7 +129,7 @@
+ 			pcie1: pcie@2 {
+ 				device_type = "pci";
+ 				status = "disabled";
+-				assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x80000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				clocks = <&gate_clk 5>;
+ 				marvell,pcie-port = <1>;
+diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
+index 72c4a9fc41a20..fb25ede1ce9f9 100644
+--- a/arch/arm/boot/dts/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
+@@ -1571,7 +1571,7 @@
+ 		};
+ 
+ 		etb@1a01000 {
+-			compatible = "coresight-etb10", "arm,primecell";
++			compatible = "arm,coresight-etb10", "arm,primecell";
+ 			reg = <0x1a01000 0x1000>;
+ 
+ 			clocks = <&rpmcc RPM_QDSS_CLK>;
+diff --git a/arch/arm/boot/dts/spear600.dtsi b/arch/arm/boot/dts/spear600.dtsi
+index fd41243a0b2c0..9d5a04a46b14e 100644
+--- a/arch/arm/boot/dts/spear600.dtsi
++++ b/arch/arm/boot/dts/spear600.dtsi
+@@ -47,7 +47,7 @@
+ 			compatible = "arm,pl110", "arm,primecell";
+ 			reg = <0xfc200000 0x1000>;
+ 			interrupt-parent = <&vic1>;
+-			interrupts = <12>;
++			interrupts = <13>;
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/stm32mp157a-dhcor-avenger96.dts b/arch/arm/boot/dts/stm32mp157a-dhcor-avenger96.dts
+index 2e3c9fbb4eb36..275167f26fd9d 100644
+--- a/arch/arm/boot/dts/stm32mp157a-dhcor-avenger96.dts
++++ b/arch/arm/boot/dts/stm32mp157a-dhcor-avenger96.dts
+@@ -13,7 +13,6 @@
+ /dts-v1/;
+ 
+ #include "stm32mp157.dtsi"
+-#include "stm32mp15xc.dtsi"
+ #include "stm32mp15xx-dhcor-som.dtsi"
+ #include "stm32mp15xx-dhcor-avenger96.dtsi"
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+index f3e0c790a4b19..723b39bb2129c 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+@@ -100,7 +100,7 @@
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
+ 
+-		gpios = <&gpioz 3 GPIO_ACTIVE_HIGH>;
++		gpio = <&gpioz 3 GPIO_ACTIVE_HIGH>;
+ 		enable-active-high;
+ 	};
+ };
+diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
+index eb7ce2747eb00..fcccf35f5cf96 100644
+--- a/arch/arm/include/asm/thread_info.h
++++ b/arch/arm/include/asm/thread_info.h
+@@ -133,15 +133,16 @@ extern int vfp_restore_user_hwstate(struct user_vfp *,
+ #define TIF_NEED_RESCHED	1	/* rescheduling necessary */
+ #define TIF_NOTIFY_RESUME	2	/* callback before returning to user */
+ #define TIF_UPROBE		3	/* breakpointed or singlestepping */
+-#define TIF_SYSCALL_TRACE	4	/* syscall trace active */
+-#define TIF_SYSCALL_AUDIT	5	/* syscall auditing active */
+-#define TIF_SYSCALL_TRACEPOINT	6	/* syscall tracepoint instrumentation */
+-#define TIF_SECCOMP		7	/* seccomp syscall filtering active */
+-#define TIF_NOTIFY_SIGNAL	8	/* signal notifications exist */
++#define TIF_NOTIFY_SIGNAL	4	/* signal notifications exist */
+ 
+ #define TIF_USING_IWMMXT	17
+ #define TIF_MEMDIE		18	/* is terminating due to OOM killer */
+-#define TIF_RESTORE_SIGMASK	20
++#define TIF_RESTORE_SIGMASK	19
++#define TIF_SYSCALL_TRACE	20	/* syscall trace active */
++#define TIF_SYSCALL_AUDIT	21	/* syscall auditing active */
++#define TIF_SYSCALL_TRACEPOINT	22	/* syscall tracepoint instrumentation */
++#define TIF_SECCOMP		23	/* seccomp syscall filtering active */
++
+ 
+ #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
+diff --git a/arch/arm/mach-mmp/time.c b/arch/arm/mach-mmp/time.c
+index 41b2e8abc9e69..708816caf859c 100644
+--- a/arch/arm/mach-mmp/time.c
++++ b/arch/arm/mach-mmp/time.c
+@@ -43,18 +43,21 @@
+ static void __iomem *mmp_timer_base = TIMERS_VIRT_BASE;
+ 
+ /*
+- * FIXME: the timer needs some delay to stablize the counter capture
++ * Read the timer through the CVWR register. Delay is required after requesting
++ * a read. The CR register cannot be directly read due to metastability issues
++ * documented in the PXA168 software manual.
+  */
+ static inline uint32_t timer_read(void)
+ {
+-	int delay = 100;
++	uint32_t val;
++	int delay = 3;
+ 
+ 	__raw_writel(1, mmp_timer_base + TMR_CVWR(1));
+ 
+ 	while (delay--)
+-		cpu_relax();
++		val = __raw_readl(mmp_timer_base + TMR_CVWR(1));
+ 
+-	return __raw_readl(mmp_timer_base + TMR_CVWR(1));
++	return val;
+ }
+ 
+ static u64 notrace mmp_read_sched_clock(void)
+diff --git a/arch/arm/nwfpe/Makefile b/arch/arm/nwfpe/Makefile
+index 303400fa2cdf7..2aec85ab1e8b9 100644
+--- a/arch/arm/nwfpe/Makefile
++++ b/arch/arm/nwfpe/Makefile
+@@ -11,3 +11,9 @@ nwfpe-y				+= fpa11.o fpa11_cpdo.o fpa11_cpdt.o \
+ 				   entry.o
+ 
+ nwfpe-$(CONFIG_FPE_NWFPE_XP)	+= extended_cpdo.o
++
++# Try really hard to avoid generating calls to __aeabi_uldivmod() from
++# float64_rem() due to loop elision.
++ifdef CONFIG_CC_IS_CLANG
++CFLAGS_softfloat.o	+= -mllvm -replexitval=never
++endif
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index 00e5dbf4b8236..eea8d23683dc1 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -124,9 +124,12 @@
+ 	/delete-property/ mrvl,i2c-fast-mode;
+ 	status = "okay";
+ 
++	/* MCP7940MT-I/MNY RTC */
+ 	rtc@6f {
+ 		compatible = "microchip,mcp7940x";
+ 		reg = <0x6f>;
++		interrupt-parent = <&gpiosb>;
++		interrupts = <5 0>; /* GPIO2_5 */
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt2712-evb.dts b/arch/arm64/boot/dts/mediatek/mt2712-evb.dts
+index 7d369fdd3117f..9d20cabf4f699 100644
+--- a/arch/arm64/boot/dts/mediatek/mt2712-evb.dts
++++ b/arch/arm64/boot/dts/mediatek/mt2712-evb.dts
+@@ -26,14 +26,14 @@
+ 		stdout-path = "serial0:921600n8";
+ 	};
+ 
+-	cpus_fixed_vproc0: fixedregulator@0 {
++	cpus_fixed_vproc0: regulator-vproc-buck0 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vproc_buck0";
+ 		regulator-min-microvolt = <1000000>;
+ 		regulator-max-microvolt = <1000000>;
+ 	};
+ 
+-	cpus_fixed_vproc1: fixedregulator@1 {
++	cpus_fixed_vproc1: regulator-vproc-buck1 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vproc_buck1";
+ 		regulator-min-microvolt = <1000000>;
+@@ -50,7 +50,7 @@
+ 		id-gpio = <&pio 14 GPIO_ACTIVE_HIGH>;
+ 	};
+ 
+-	usb_p0_vbus: regulator@2 {
++	usb_p0_vbus: regulator-usb-p0-vbus {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "p0_vbus";
+ 		regulator-min-microvolt = <5000000>;
+@@ -59,7 +59,7 @@
+ 		enable-active-high;
+ 	};
+ 
+-	usb_p1_vbus: regulator@3 {
++	usb_p1_vbus: regulator-usb-p1-vbus {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "p1_vbus";
+ 		regulator-min-microvolt = <5000000>;
+@@ -68,7 +68,7 @@
+ 		enable-active-high;
+ 	};
+ 
+-	usb_p2_vbus: regulator@4 {
++	usb_p2_vbus: regulator-usb-p2-vbus {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "p2_vbus";
+ 		regulator-min-microvolt = <5000000>;
+@@ -77,7 +77,7 @@
+ 		enable-active-high;
+ 	};
+ 
+-	usb_p3_vbus: regulator@5 {
++	usb_p3_vbus: regulator-usb-p3-vbus {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "p3_vbus";
+ 		regulator-min-microvolt = <5000000>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
+index db17d0a4ed57b..cc3d1c99517d1 100644
+--- a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
+@@ -160,70 +160,70 @@
+ 		#clock-cells = <0>;
+ 	};
+ 
+-	clk26m: oscillator@0 {
++	clk26m: oscillator-26m {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <26000000>;
+ 		clock-output-names = "clk26m";
+ 	};
+ 
+-	clk32k: oscillator@1 {
++	clk32k: oscillator-32k {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <32768>;
+ 		clock-output-names = "clk32k";
+ 	};
+ 
+-	clkfpc: oscillator@2 {
++	clkfpc: oscillator-50m {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <50000000>;
+ 		clock-output-names = "clkfpc";
+ 	};
+ 
+-	clkaud_ext_i_0: oscillator@3 {
++	clkaud_ext_i_0: oscillator-aud0 {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <6500000>;
+ 		clock-output-names = "clkaud_ext_i_0";
+ 	};
+ 
+-	clkaud_ext_i_1: oscillator@4 {
++	clkaud_ext_i_1: oscillator-aud1 {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <196608000>;
+ 		clock-output-names = "clkaud_ext_i_1";
+ 	};
+ 
+-	clkaud_ext_i_2: oscillator@5 {
++	clkaud_ext_i_2: oscillator-aud2 {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <180633600>;
+ 		clock-output-names = "clkaud_ext_i_2";
+ 	};
+ 
+-	clki2si0_mck_i: oscillator@6 {
++	clki2si0_mck_i: oscillator-i2s0 {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <30000000>;
+ 		clock-output-names = "clki2si0_mck_i";
+ 	};
+ 
+-	clki2si1_mck_i: oscillator@7 {
++	clki2si1_mck_i: oscillator-i2s1 {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <30000000>;
+ 		clock-output-names = "clki2si1_mck_i";
+ 	};
+ 
+-	clki2si2_mck_i: oscillator@8 {
++	clki2si2_mck_i: oscillator-i2s2 {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <30000000>;
+ 		clock-output-names = "clki2si2_mck_i";
+ 	};
+ 
+-	clktdmin_mclk_i: oscillator@9 {
++	clktdmin_mclk_i: oscillator-mclk {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <30000000>;
+@@ -266,7 +266,7 @@
+ 		reg = <0 0x10005000 0 0x1000>;
+ 	};
+ 
+-	pio: pinctrl@10005000 {
++	pio: pinctrl@1000b000 {
+ 		compatible = "mediatek,mt2712-pinctrl";
+ 		reg = <0 0x1000b000 0 0x1000>;
+ 		mediatek,pctl-regmap = <&syscfg_pctl_a>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt6797.dtsi b/arch/arm64/boot/dts/mediatek/mt6797.dtsi
+index 15616231022a2..c3677d77e0a45 100644
+--- a/arch/arm64/boot/dts/mediatek/mt6797.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt6797.dtsi
+@@ -95,7 +95,7 @@
+ 		};
+ 	};
+ 
+-	clk26m: oscillator@0 {
++	clk26m: oscillator-26m {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <26000000>;
+diff --git a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
+index 99c2d6fd6304a..d5059735c5940 100644
+--- a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
++++ b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
+@@ -17,7 +17,7 @@
+ 	};
+ 
+ 	firmware {
+-		optee: optee@4fd00000 {
++		optee: optee {
+ 			compatible = "linaro,optee-tz";
+ 			method = "smc";
+ 		};
+@@ -209,7 +209,7 @@
+ 		};
+ 	};
+ 
+-	i2c0_pins_a: i2c0@0 {
++	i2c0_pins_a: i2c0 {
+ 		pins1 {
+ 			pinmux = <MT8516_PIN_58_SDA0__FUNC_SDA0_0>,
+ 				 <MT8516_PIN_59_SCL0__FUNC_SCL0_0>;
+@@ -217,7 +217,7 @@
+ 		};
+ 	};
+ 
+-	i2c2_pins_a: i2c2@0 {
++	i2c2_pins_a: i2c2 {
+ 		pins1 {
+ 			pinmux = <MT8516_PIN_60_SDA2__FUNC_SDA2_0>,
+ 				 <MT8516_PIN_61_SCL2__FUNC_SCL2_0>;
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018-cp01-c1.dts b/arch/arm64/boot/dts/qcom/ipq6018-cp01-c1.dts
+index e8eaa958c1992..b867506bc7e1c 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018-cp01-c1.dts
++++ b/arch/arm64/boot/dts/qcom/ipq6018-cp01-c1.dts
+@@ -37,6 +37,8 @@
+ 
+ &spi_0 {
+ 	cs-select = <0>;
++	pinctrl-0 = <&spi_0_pins>;
++	pinctrl-names = "default";
+ 	status = "okay";
+ 
+ 	m25p80@0 {
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 291276a38d7cd..c32e4a3833f23 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -1249,7 +1249,7 @@
+ 		};
+ 
+ 		mpss: remoteproc@4080000 {
+-			compatible = "qcom,msm8916-mss-pil", "qcom,q6v5-pil";
++			compatible = "qcom,msm8916-mss-pil";
+ 			reg = <0x04080000 0x100>,
+ 			      <0x04020000 0x040>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index ef5d03a150693..bc140269e4cc5 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -651,17 +651,17 @@
+ 				compatible  ="operating-points-v2";
+ 
+ 				/*
+-				 * 624Mhz and 560Mhz are only available on speed
+-				 * bin (1 << 0). All the rest are available on
+-				 * all bins of the hardware
++				 * 624Mhz is only available on speed bins 0 and 3.
++				 * 560Mhz is only available on speed bins 0, 2 and 3.
++				 * All the rest are available on all bins of the hardware.
+ 				 */
+ 				opp-624000000 {
+ 					opp-hz = /bits/ 64 <624000000>;
+-					opp-supported-hw = <0x01>;
++					opp-supported-hw = <0x09>;
+ 				};
+ 				opp-560000000 {
+ 					opp-hz = /bits/ 64 <560000000>;
+-					opp-supported-hw = <0x01>;
++					opp-supported-hw = <0x0d>;
+ 				};
+ 				opp-510000000 {
+ 					opp-hz = /bits/ 64 <510000000>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm630.dtsi b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+index f87054575ce7f..79d260c2b3c32 100644
+--- a/arch/arm64/boot/dts/qcom/sdm630.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+@@ -593,7 +593,7 @@
+ 					pins = "gpio17", "gpio18", "gpio19";
+ 					function = "gpio";
+ 					drive-strength = <2>;
+-					bias-no-pull;
++					bias-disable;
+ 				};
+ 			};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi b/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi
+index 64fc1bfd66fad..26f6f193bd1b6 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi
+@@ -1292,7 +1292,7 @@ ap_ts_i2c: &i2c14 {
+ 		config {
+ 			pins = "gpio126";
+ 			function = "gpio";
+-			bias-no-pull;
++			bias-disable;
+ 			drive-strength = <2>;
+ 			output-low;
+ 		};
+@@ -1302,7 +1302,7 @@ ap_ts_i2c: &i2c14 {
+ 		config {
+ 			pins = "gpio126";
+ 			function = "gpio";
+-			bias-no-pull;
++			bias-disable;
+ 			drive-strength = <2>;
+ 			output-high;
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+index 96d36b38f2696..c6691bdc81002 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+@@ -1045,7 +1045,10 @@
+ 
+ /* PINCTRL - additions to nodes defined in sdm845.dtsi */
+ &qup_spi2_default {
+-	drive-strength = <16>;
++	pinconf {
++		pins = "gpio27", "gpio28", "gpio29", "gpio30";
++		drive-strength = <16>;
++	};
+ };
+ 
+ &qup_uart3_default{
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index e080c317b5e3d..4d67f49827388 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -322,8 +322,10 @@
+ };
+ 
+ &qup_i2c12_default {
+-	drive-strength = <2>;
+-	bias-disable;
++	pinmux {
++		drive-strength = <2>;
++		bias-disable;
++	};
+ };
+ 
+ &qup_uart6_default {
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index d04189771c773..4265f627ca167 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -127,7 +127,6 @@
+ 		dmas = <&main_udmap 0xc000>, <&main_udmap 0x4000>,
+ 				<&main_udmap 0x4001>;
+ 		dma-names = "tx", "rx1", "rx2";
+-		dma-coherent;
+ 
+ 		rng: rng@4e10000 {
+ 			compatible = "inside-secure,safexcel-eip76";
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+index 0350ddfe2c723..691d73f0f1e07 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+@@ -367,7 +367,6 @@
+ 		dmas = <&main_udmap 0xc000>, <&main_udmap 0x4000>,
+ 				<&main_udmap 0x4001>;
+ 		dma-names = "tx", "rx1", "rx2";
+-		dma-coherent;
+ 
+ 		rng: rng@4e10000 {
+ 			compatible = "inside-secure,safexcel-eip76";
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index 7c546c3487c9f..c628d8e3a403a 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -230,13 +230,13 @@ static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
+ }
+ #endif
+ 
+-static inline bool is_ttbr0_addr(unsigned long addr)
++static __always_inline bool is_ttbr0_addr(unsigned long addr)
+ {
+ 	/* entry assembly clears tags for TTBR0 addrs */
+ 	return addr < TASK_SIZE;
+ }
+ 
+-static inline bool is_ttbr1_addr(unsigned long addr)
++static __always_inline bool is_ttbr1_addr(unsigned long addr)
+ {
+ 	/* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
+ 	return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
+diff --git a/arch/mips/bcm63xx/clk.c b/arch/mips/bcm63xx/clk.c
+index dcfa0ea912fe1..f183c45503ce1 100644
+--- a/arch/mips/bcm63xx/clk.c
++++ b/arch/mips/bcm63xx/clk.c
+@@ -361,6 +361,8 @@ static struct clk clk_periph = {
+  */
+ int clk_enable(struct clk *clk)
+ {
++	if (!clk)
++		return 0;
+ 	mutex_lock(&clocks_mutex);
+ 	clk_enable_unlocked(clk);
+ 	mutex_unlock(&clocks_mutex);
+diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper-board.c b/arch/mips/cavium-octeon/executive/cvmx-helper-board.c
+index abd11b7af22f2..9b791ccf874f9 100644
+--- a/arch/mips/cavium-octeon/executive/cvmx-helper-board.c
++++ b/arch/mips/cavium-octeon/executive/cvmx-helper-board.c
+@@ -211,7 +211,7 @@ union cvmx_helper_link_info __cvmx_helper_board_link_get(int ipd_port)
+ {
+ 	union cvmx_helper_link_info result;
+ 
+-	WARN(!octeon_is_simulation(),
++	WARN_ONCE(!octeon_is_simulation(),
+ 	     "Using deprecated link status - please update your DT");
+ 
+ 	/* Unless we fix it later, all links are defaulted to down */
+diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper.c b/arch/mips/cavium-octeon/executive/cvmx-helper.c
+index 6044ff4710022..a18ad2daf0052 100644
+--- a/arch/mips/cavium-octeon/executive/cvmx-helper.c
++++ b/arch/mips/cavium-octeon/executive/cvmx-helper.c
+@@ -1100,7 +1100,7 @@ union cvmx_helper_link_info cvmx_helper_link_get(int ipd_port)
+ 		if (index == 0)
+ 			result = __cvmx_helper_rgmii_link_get(ipd_port);
+ 		else {
+-			WARN(1, "Using deprecated link status - please update your DT");
++			WARN_ONCE(1, "Using deprecated link status - please update your DT");
+ 			result.s.full_duplex = 1;
+ 			result.s.link_up = 1;
+ 			result.s.speed = 1000;
+diff --git a/arch/mips/kernel/vpe-cmp.c b/arch/mips/kernel/vpe-cmp.c
+index 9268ebc0f61e6..903c07bdc92d9 100644
+--- a/arch/mips/kernel/vpe-cmp.c
++++ b/arch/mips/kernel/vpe-cmp.c
+@@ -75,7 +75,6 @@ ATTRIBUTE_GROUPS(vpe);
+ 
+ static void vpe_device_release(struct device *cd)
+ {
+-	kfree(cd);
+ }
+ 
+ static struct class vpe_class = {
+@@ -157,6 +156,7 @@ out_dev:
+ 	device_del(&vpe_device);
+ 
+ out_class:
++	put_device(&vpe_device);
+ 	class_unregister(&vpe_class);
+ 
+ out_chrdev:
+@@ -169,7 +169,7 @@ void __exit vpe_module_exit(void)
+ {
+ 	struct vpe *v, *n;
+ 
+-	device_del(&vpe_device);
++	device_unregister(&vpe_device);
+ 	class_unregister(&vpe_class);
+ 	unregister_chrdev(major, VPE_MODULE_NAME);
+ 
+diff --git a/arch/mips/kernel/vpe-mt.c b/arch/mips/kernel/vpe-mt.c
+index 2e003b11a098f..9fd7cd48ea1d2 100644
+--- a/arch/mips/kernel/vpe-mt.c
++++ b/arch/mips/kernel/vpe-mt.c
+@@ -313,7 +313,6 @@ ATTRIBUTE_GROUPS(vpe);
+ 
+ static void vpe_device_release(struct device *cd)
+ {
+-	kfree(cd);
+ }
+ 
+ static struct class vpe_class = {
+@@ -497,6 +496,7 @@ out_dev:
+ 	device_del(&vpe_device);
+ 
+ out_class:
++	put_device(&vpe_device);
+ 	class_unregister(&vpe_class);
+ 
+ out_chrdev:
+@@ -509,7 +509,7 @@ void __exit vpe_module_exit(void)
+ {
+ 	struct vpe *v, *n;
+ 
+-	device_del(&vpe_device);
++	device_unregister(&vpe_device);
+ 	class_unregister(&vpe_class);
+ 	unregister_chrdev(major, VPE_MODULE_NAME);
+ 
+diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h
+index ab78cba446ed3..ef9b53b80b9d1 100644
+--- a/arch/parisc/include/uapi/asm/mman.h
++++ b/arch/parisc/include/uapi/asm/mman.h
+@@ -49,28 +49,27 @@
+ #define MADV_DONTFORK	10		/* don't inherit across fork */
+ #define MADV_DOFORK	11		/* do inherit across fork */
+ 
+-#define MADV_COLD	20		/* deactivate these pages */
+-#define MADV_PAGEOUT	21		/* reclaim these pages */
+-
+-#define MADV_MERGEABLE   65		/* KSM may merge identical pages */
+-#define MADV_UNMERGEABLE 66		/* KSM may not merge identical pages */
++#define MADV_MERGEABLE   12		/* KSM may merge identical pages */
++#define MADV_UNMERGEABLE 13		/* KSM may not merge identical pages */
+ 
+-#define MADV_HUGEPAGE	67		/* Worth backing with hugepages */
+-#define MADV_NOHUGEPAGE	68		/* Not worth backing with hugepages */
++#define MADV_HUGEPAGE	14		/* Worth backing with hugepages */
++#define MADV_NOHUGEPAGE 15		/* Not worth backing with hugepages */
+ 
+-#define MADV_DONTDUMP   69		/* Explicity exclude from the core dump,
++#define MADV_DONTDUMP   16		/* Explicity exclude from the core dump,
+ 					   overrides the coredump filter bits */
+-#define MADV_DODUMP	70		/* Clear the MADV_NODUMP flag */
++#define MADV_DODUMP	17		/* Clear the MADV_NODUMP flag */
+ 
+-#define MADV_WIPEONFORK 71		/* Zero memory on fork, child only */
+-#define MADV_KEEPONFORK 72		/* Undo MADV_WIPEONFORK */
++#define MADV_WIPEONFORK 18		/* Zero memory on fork, child only */
++#define MADV_KEEPONFORK 19		/* Undo MADV_WIPEONFORK */
++
++#define MADV_COLD	20		/* deactivate these pages */
++#define MADV_PAGEOUT	21		/* reclaim these pages */
+ 
+ #define MADV_HWPOISON     100		/* poison a page for testing */
+ #define MADV_SOFT_OFFLINE 101		/* soft offline page for testing */
+ 
+ /* compatibility flags */
+ #define MAP_FILE	0
+-#define MAP_VARIABLE	0
+ 
+ #define PKEY_DISABLE_ACCESS	0x1
+ #define PKEY_DISABLE_WRITE	0x2
+diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
+index 9549496f55230..40f881f729e4b 100644
+--- a/arch/parisc/kernel/sys_parisc.c
++++ b/arch/parisc/kernel/sys_parisc.c
+@@ -444,3 +444,30 @@ asmlinkage long parisc_inotify_init1(int flags)
+ 	flags = FIX_O_NONBLOCK(flags);
+ 	return sys_inotify_init1(flags);
+ }
++
++/*
++ * madvise() wrapper
++ *
++ * Up to kernel v6.1 parisc has different values than all other
++ * platforms for the MADV_xxx flags listed below.
++ * To keep binary compatibility with existing userspace programs
++ * translate the former values to the new values.
++ *
++ * XXX: Remove this wrapper in year 2025 (or later)
++ */
++
++asmlinkage notrace long parisc_madvise(unsigned long start, size_t len_in, int behavior)
++{
++	switch (behavior) {
++	case 65: behavior = MADV_MERGEABLE;	break;
++	case 66: behavior = MADV_UNMERGEABLE;	break;
++	case 67: behavior = MADV_HUGEPAGE;	break;
++	case 68: behavior = MADV_NOHUGEPAGE;	break;
++	case 69: behavior = MADV_DONTDUMP;	break;
++	case 70: behavior = MADV_DODUMP;	break;
++	case 71: behavior = MADV_WIPEONFORK;	break;
++	case 72: behavior = MADV_KEEPONFORK;	break;
++	}
++
++	return sys_madvise(start, len_in, behavior);
++}
+diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
+index d526ebfa58e50..dfe9254ee74b6 100644
+--- a/arch/parisc/kernel/syscalls/syscall.tbl
++++ b/arch/parisc/kernel/syscalls/syscall.tbl
+@@ -131,7 +131,7 @@
+ 116	common	sysinfo			sys_sysinfo			compat_sys_sysinfo
+ 117	common	shutdown		sys_shutdown
+ 118	common	fsync			sys_fsync
+-119	common	madvise			sys_madvise
++119	common	madvise			parisc_madvise
+ 120	common	clone			sys_clone_wrapper
+ 121	common	setdomainname		sys_setdomainname
+ 122	common	sendfile		sys_sendfile			compat_sys_sendfile
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index bf962051af0a0..014229c40435a 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -715,6 +715,7 @@ void __noreturn rtas_halt(void)
+ 
+ /* Must be in the RMO region, so we place it here */
+ static char rtas_os_term_buf[2048];
++static s32 ibm_os_term_token = RTAS_UNKNOWN_SERVICE;
+ 
+ void rtas_os_term(char *str)
+ {
+@@ -726,16 +727,20 @@ void rtas_os_term(char *str)
+ 	 * this property may terminate the partition which we want to avoid
+ 	 * since it interferes with panic_timeout.
+ 	 */
+-	if (RTAS_UNKNOWN_SERVICE == rtas_token("ibm,os-term") ||
+-	    RTAS_UNKNOWN_SERVICE == rtas_token("ibm,extended-os-term"))
++	if (ibm_os_term_token == RTAS_UNKNOWN_SERVICE)
+ 		return;
+ 
+ 	snprintf(rtas_os_term_buf, 2048, "OS panic: %s", str);
+ 
++	/*
++	 * Keep calling as long as RTAS returns a "try again" status,
++	 * but don't use rtas_busy_delay(), which potentially
++	 * schedules.
++	 */
+ 	do {
+-		status = rtas_call(rtas_token("ibm,os-term"), 1, 1, NULL,
++		status = rtas_call(ibm_os_term_token, 1, 1, NULL,
+ 				   __pa(rtas_os_term_buf));
+-	} while (rtas_busy_delay(status));
++	} while (rtas_busy_delay_time(status));
+ 
+ 	if (status != 0)
+ 		printk(KERN_EMERG "ibm,os-term call failed %d\n", status);
+@@ -1267,6 +1272,13 @@ void __init rtas_initialize(void)
+ 	no_entry = of_property_read_u32(rtas.dev, "linux,rtas-entry", &entry);
+ 	rtas.entry = no_entry ? rtas.base : entry;
+ 
++	/*
++	 * Discover these now to avoid device tree lookups in the
++	 * panic path.
++	 */
++	if (of_property_read_bool(rtas.dev, "ibm,extended-os-term"))
++		ibm_os_term_token = rtas_token("ibm,os-term");
++
+ 	/* If RTAS was found, allocate the RMO buffer for it and look for
+ 	 * the stop-self token if any
+ 	 */
+diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
+index 6c028ee513c0d..99f3c4fc21cb0 100644
+--- a/arch/powerpc/perf/callchain.c
++++ b/arch/powerpc/perf/callchain.c
+@@ -61,6 +61,7 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
+ 		next_sp = fp[0];
+ 
+ 		if (next_sp == sp + STACK_INT_FRAME_SIZE &&
++		    validate_sp(sp, current, STACK_INT_FRAME_SIZE) &&
+ 		    fp[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
+ 			/*
+ 			 * This looks like an interrupt frame for an
+diff --git a/arch/powerpc/perf/hv-gpci-requests.h b/arch/powerpc/perf/hv-gpci-requests.h
+index 8965b4463d433..5e86371a20c78 100644
+--- a/arch/powerpc/perf/hv-gpci-requests.h
++++ b/arch/powerpc/perf/hv-gpci-requests.h
+@@ -79,6 +79,7 @@ REQUEST(__field(0,	8,	partition_id)
+ )
+ #include I(REQUEST_END)
+ 
++#ifdef ENABLE_EVENTS_COUNTERINFO_V6
+ /*
+  * Not available for counter_info_version >= 0x8, use
+  * run_instruction_cycles_by_partition(0x100) instead.
+@@ -92,6 +93,7 @@ REQUEST(__field(0,	8,	partition_id)
+ 	__count(0x10,	8,	cycles)
+ )
+ #include I(REQUEST_END)
++#endif
+ 
+ #define REQUEST_NAME system_performance_capabilities
+ #define REQUEST_NUM 0x40
+@@ -103,6 +105,7 @@ REQUEST(__field(0,	1,	perf_collect_privileged)
+ )
+ #include I(REQUEST_END)
+ 
++#ifdef ENABLE_EVENTS_COUNTERINFO_V6
+ #define REQUEST_NAME processor_bus_utilization_abc_links
+ #define REQUEST_NUM 0x50
+ #define REQUEST_IDX_KIND "hw_chip_id=?"
+@@ -194,6 +197,7 @@ REQUEST(__field(0,	4,	phys_processor_idx)
+ 	__count(0x28,	8,	instructions_completed)
+ )
+ #include I(REQUEST_END)
++#endif
+ 
+ /* Processor_core_power_mode (0x95) skipped, no counters */
+ /* Affinity_domain_information_by_virtual_processor (0xA0) skipped,
+diff --git a/arch/powerpc/perf/hv-gpci.c b/arch/powerpc/perf/hv-gpci.c
+index c756228a081fb..28b770bbc10b4 100644
+--- a/arch/powerpc/perf/hv-gpci.c
++++ b/arch/powerpc/perf/hv-gpci.c
+@@ -72,7 +72,7 @@ static struct attribute_group format_group = {
+ 
+ static struct attribute_group event_group = {
+ 	.name  = "events",
+-	.attrs = hv_gpci_event_attrs,
++	/* .attrs is set in init */
+ };
+ 
+ #define HV_CAPS_ATTR(_name, _format)				\
+@@ -330,6 +330,7 @@ static int hv_gpci_init(void)
+ 	int r;
+ 	unsigned long hret;
+ 	struct hv_perf_caps caps;
++	struct hv_gpci_request_buffer *arg;
+ 
+ 	hv_gpci_assert_offsets_correct();
+ 
+@@ -353,6 +354,36 @@ static int hv_gpci_init(void)
+ 	/* sampling not supported */
+ 	h_gpci_pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;
+ 
++	arg = (void *)get_cpu_var(hv_gpci_reqb);
++	memset(arg, 0, HGPCI_REQ_BUFFER_SIZE);
++
++	/*
++	 * hcall H_GET_PERF_COUNTER_INFO populates the output
++	 * counter_info_version value based on the system hypervisor.
++	 * Pass the counter request 0x10 corresponds to request type
++	 * 'Dispatch_timebase_by_processor', to get the supported
++	 * counter_info_version.
++	 */
++	arg->params.counter_request = cpu_to_be32(0x10);
++
++	r = plpar_hcall_norets(H_GET_PERF_COUNTER_INFO,
++			virt_to_phys(arg), HGPCI_REQ_BUFFER_SIZE);
++	if (r) {
++		pr_devel("hcall failed, can't get supported counter_info_version: 0x%x\n", r);
++		arg->params.counter_info_version_out = 0x8;
++	}
++
++	/*
++	 * Use counter_info_version_out value to assign
++	 * required hv-gpci event list.
++	 */
++	if (arg->params.counter_info_version_out >= 0x8)
++		event_group.attrs = hv_gpci_event_attrs;
++	else
++		event_group.attrs = hv_gpci_event_attrs_v6;
++
++	put_cpu_var(hv_gpci_reqb);
++
+ 	r = perf_pmu_register(&h_gpci_pmu, h_gpci_pmu.name, -1);
+ 	if (r)
+ 		return r;
+diff --git a/arch/powerpc/perf/hv-gpci.h b/arch/powerpc/perf/hv-gpci.h
+index 4d108262bed79..c72020912dea5 100644
+--- a/arch/powerpc/perf/hv-gpci.h
++++ b/arch/powerpc/perf/hv-gpci.h
+@@ -26,6 +26,7 @@ enum {
+ #define REQUEST_FILE "../hv-gpci-requests.h"
+ #define NAME_LOWER hv_gpci
+ #define NAME_UPPER HV_GPCI
++#define ENABLE_EVENTS_COUNTERINFO_V6
+ #include "req-gen/perf.h"
+ #undef REQUEST_FILE
+ #undef NAME_LOWER
+diff --git a/arch/powerpc/perf/req-gen/perf.h b/arch/powerpc/perf/req-gen/perf.h
+index fa9bc804e67af..6b2a59fefffa7 100644
+--- a/arch/powerpc/perf/req-gen/perf.h
++++ b/arch/powerpc/perf/req-gen/perf.h
+@@ -139,6 +139,26 @@ PMU_EVENT_ATTR_STRING(							\
+ #define REQUEST_(r_name, r_value, r_idx_1, r_fields)			\
+ 	r_fields
+ 
++/* Generate event list for platforms with counter_info_version 0x6 or below */
++static __maybe_unused struct attribute *hv_gpci_event_attrs_v6[] = {
++#include REQUEST_FILE
++	NULL
++};
++
++/*
++ * Based on getPerfCountInfo v1.018 documentation, some of the hv-gpci
++ * events were deprecated for platform firmware that supports
++ * counter_info_version 0x8 or above.
++ * Those deprecated events are still part of platform firmware that
++ * support counter_info_version 0x6 and below. As per the getPerfCountInfo
++ * v1.018 documentation there is no counter_info_version 0x7.
++ * Undefining macro ENABLE_EVENTS_COUNTERINFO_V6, to disable the addition of
++ * deprecated events in "hv_gpci_event_attrs" attribute group, for platforms
++ * that supports counter_info_version 0x8 or above.
++ */
++#undef ENABLE_EVENTS_COUNTERINFO_V6
++
++/* Generate event list for platforms with counter_info_version 0x8 or above*/
+ static __maybe_unused struct attribute *hv_gpci_event_attrs[] = {
+ #include REQUEST_FILE
+ 	NULL
+diff --git a/arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c b/arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c
+index 05e19470d523c..22e264bd3ed21 100644
+--- a/arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c
++++ b/arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c
+@@ -530,6 +530,7 @@ static int mpc52xx_lpbfifo_probe(struct platform_device *op)
+  err_bcom_rx_irq:
+ 	bcom_gen_bd_rx_release(lpbfifo.bcom_rx_task);
+  err_bcom_rx:
++	free_irq(lpbfifo.irq, &lpbfifo);
+  err_irq:
+ 	iounmap(lpbfifo.regs);
+ 	lpbfifo.regs = NULL;
+diff --git a/arch/powerpc/platforms/83xx/mpc832x_rdb.c b/arch/powerpc/platforms/83xx/mpc832x_rdb.c
+index 622c625d5ce4b..1114b6a11b3f8 100644
+--- a/arch/powerpc/platforms/83xx/mpc832x_rdb.c
++++ b/arch/powerpc/platforms/83xx/mpc832x_rdb.c
+@@ -106,7 +106,7 @@ static int __init of_fsl_spi_probe(char *type, char *compatible, u32 sysclk,
+ 
+ 		goto next;
+ unreg:
+-		platform_device_del(pdev);
++		platform_device_put(pdev);
+ err:
+ 		pr_err("%pOF: registration failed\n", np);
+ next:
+diff --git a/arch/powerpc/platforms/pseries/eeh_pseries.c b/arch/powerpc/platforms/pseries/eeh_pseries.c
+index 7ed38ebd0c7b6..4601ad10ca7b4 100644
+--- a/arch/powerpc/platforms/pseries/eeh_pseries.c
++++ b/arch/powerpc/platforms/pseries/eeh_pseries.c
+@@ -846,18 +846,8 @@ static int __init eeh_pseries_init(void)
+ 		return -EINVAL;
+ 	}
+ 
+-	/* Initialize error log lock and size */
+-	spin_lock_init(&slot_errbuf_lock);
+-	eeh_error_buf_size = rtas_token("rtas-error-log-max");
+-	if (eeh_error_buf_size == RTAS_UNKNOWN_SERVICE) {
+-		pr_info("%s: unknown EEH error log size\n",
+-			__func__);
+-		eeh_error_buf_size = 1024;
+-	} else if (eeh_error_buf_size > RTAS_ERROR_LOG_MAX) {
+-		pr_info("%s: EEH error log size %d exceeds the maximal %d\n",
+-			__func__, eeh_error_buf_size, RTAS_ERROR_LOG_MAX);
+-		eeh_error_buf_size = RTAS_ERROR_LOG_MAX;
+-	}
++	/* Initialize error log size */
++	eeh_error_buf_size = rtas_get_error_log_max();
+ 
+ 	/* Set EEH probe mode */
+ 	eeh_add_flag(EEH_PROBE_MODE_DEVTREE | EEH_ENABLE_IO_FOR_LOG);
+diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
+index 38e8b98961744..53cf14349d5e9 100644
+--- a/arch/powerpc/sysdev/xive/spapr.c
++++ b/arch/powerpc/sysdev/xive/spapr.c
+@@ -425,6 +425,7 @@ static int xive_spapr_populate_irq_data(u32 hw_irq, struct xive_irq_data *data)
+ 
+ 	data->trig_mmio = ioremap(data->trig_page, 1u << data->esb_shift);
+ 	if (!data->trig_mmio) {
++		iounmap(data->eoi_mmio);
+ 		pr_err("Failed to map trigger page for irq 0x%x\n", hw_irq);
+ 		return -ENOMEM;
+ 	}
+diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
+index 5559edf36756c..2872b66d9fec7 100644
+--- a/arch/powerpc/xmon/xmon.c
++++ b/arch/powerpc/xmon/xmon.c
+@@ -1383,7 +1383,6 @@ static long check_bp_loc(unsigned long addr)
+ 	return 1;
+ }
+ 
+-#ifndef CONFIG_PPC_8xx
+ static int find_free_data_bpt(void)
+ {
+ 	int i;
+@@ -1395,7 +1394,6 @@ static int find_free_data_bpt(void)
+ 	printf("Couldn't find free breakpoint register\n");
+ 	return -1;
+ }
+-#endif
+ 
+ static void print_data_bpts(void)
+ {
+@@ -1435,10 +1433,9 @@ bpt_cmds(void)
+ 	cmd = inchar();
+ 
+ 	switch (cmd) {
+-#ifndef CONFIG_PPC_8xx
+-	static const char badaddr[] = "Only kernel addresses are permitted for breakpoints\n";
+-	int mode;
+-	case 'd':	/* bd - hardware data breakpoint */
++	case 'd': {	/* bd - hardware data breakpoint */
++		static const char badaddr[] = "Only kernel addresses are permitted for breakpoints\n";
++		int mode;
+ 		if (xmon_is_ro) {
+ 			printf(xmon_ro_msg);
+ 			break;
+@@ -1471,6 +1468,7 @@ bpt_cmds(void)
+ 
+ 		force_enable_xmon();
+ 		break;
++	}
+ 
+ 	case 'i':	/* bi - hardware instr breakpoint */
+ 		if (xmon_is_ro) {
+@@ -1497,7 +1495,6 @@ bpt_cmds(void)
+ 			force_enable_xmon();
+ 		}
+ 		break;
+-#endif
+ 
+ 	case 'c':
+ 		if (!scanhex(&a)) {
+diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
+index a5c2ca1d1cd8b..ec19d6afc8965 100644
+--- a/arch/riscv/include/asm/hugetlb.h
++++ b/arch/riscv/include/asm/hugetlb.h
+@@ -5,4 +5,10 @@
+ #include <asm-generic/hugetlb.h>
+ #include <asm/page.h>
+ 
++static inline void arch_clear_hugepage_flags(struct page *page)
++{
++	clear_bit(PG_dcache_clean, &page->flags);
++}
++#define arch_clear_hugepage_flags arch_clear_hugepage_flags
++
+ #endif /* _ASM_RISCV_HUGETLB_H */
+diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
+index f944062c9d990..66af6abfe8af4 100644
+--- a/arch/riscv/include/asm/uaccess.h
++++ b/arch/riscv/include/asm/uaccess.h
+@@ -216,7 +216,7 @@ do {								\
+ 	might_fault();						\
+ 	access_ok(__p, sizeof(*__p)) ?		\
+ 		__get_user((x), __p) :				\
+-		((x) = 0, -EFAULT);				\
++		((x) = (__force __typeof__(x))0, -EFAULT);	\
+ })
+ 
+ #define __put_user_asm(insn, x, ptr, err)			\
+diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
+index 595342910c3f6..1e53fbe5eb783 100644
+--- a/arch/riscv/kernel/stacktrace.c
++++ b/arch/riscv/kernel/stacktrace.c
+@@ -57,9 +57,15 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
+ 		/* Unwind stack frame */
+ 		frame = (struct stackframe *)fp - 1;
+ 		sp = fp;
+-		fp = frame->fp;
+-		pc = ftrace_graph_ret_addr(current, NULL, frame->ra,
+-					   (unsigned long *)(fp - 8));
++		if (regs && (regs->epc == pc) && (frame->fp & 0x7)) {
++			fp = frame->ra;
++			pc = regs->ra;
++		} else {
++			fp = frame->fp;
++			pc = ftrace_graph_ret_addr(current, NULL, frame->ra,
++						   &frame->ra);
++		}
++
+ 	}
+ }
+ 
+diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
+index 9efea154349d3..4e2953a9eff09 100644
+--- a/arch/x86/events/intel/uncore.h
++++ b/arch/x86/events/intel/uncore.h
+@@ -84,6 +84,7 @@ struct intel_uncore_type {
+ 	/*
+ 	 * Optional callbacks for managing mapping of Uncore units to PMONs
+ 	 */
++	int (*get_topology)(struct intel_uncore_type *type);
+ 	int (*set_mapping)(struct intel_uncore_type *type);
+ 	void (*cleanup_mapping)(struct intel_uncore_type *type);
+ };
+diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
+index fa9289718147a..a4c20e37bec21 100644
+--- a/arch/x86/events/intel/uncore_snb.c
++++ b/arch/x86/events/intel/uncore_snb.c
+@@ -1274,6 +1274,7 @@ static void tgl_uncore_imc_freerunning_init_box(struct intel_uncore_box *box)
+ 	/* MCHBAR is disabled */
+ 	if (!(mch_bar & BIT(0))) {
+ 		pr_warn("perf uncore: MCHBAR is disabled. Failed to map IMC free-running counters.\n");
++		pci_dev_put(pdev);
+ 		return;
+ 	}
+ 	mch_bar &= ~BIT(0);
+@@ -1287,6 +1288,8 @@ static void tgl_uncore_imc_freerunning_init_box(struct intel_uncore_box *box)
+ 	box->io_addr = ioremap(addr, type->mmio_map_size);
+ 	if (!box->io_addr)
+ 		pr_warn("perf uncore: Failed to ioremap for %s.\n", type->name);
++
++	pci_dev_put(pdev);
+ }
+ 
+ static struct intel_uncore_ops tgl_uncore_imc_freerunning_ops = {
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 03c8047bebb38..ad084a5a1463b 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -2828,6 +2828,7 @@ static bool hswep_has_limit_sbox(unsigned int device)
+ 		return false;
+ 
+ 	pci_read_config_dword(dev, HSWEP_PCU_CAPID4_OFFET, &capid4);
++	pci_dev_put(dev);
+ 	if (!hswep_get_chop(capid4))
+ 		return true;
+ 
+@@ -3642,12 +3643,19 @@ static inline u8 skx_iio_stack(struct intel_uncore_pmu *pmu, int die)
+ }
+ 
+ static umode_t
+-skx_iio_mapping_visible(struct kobject *kobj, struct attribute *attr, int die)
++pmu_iio_mapping_visible(struct kobject *kobj, struct attribute *attr,
++			 int die, int zero_bus_pmu)
+ {
+ 	struct intel_uncore_pmu *pmu = dev_to_uncore_pmu(kobj_to_dev(kobj));
+ 
+-	/* Root bus 0x00 is valid only for die 0 AND pmu_idx = 0. */
+-	return (!skx_iio_stack(pmu, die) && pmu->pmu_idx) ? 0 : attr->mode;
++	return (!skx_iio_stack(pmu, die) && pmu->pmu_idx != zero_bus_pmu) ? 0 : attr->mode;
++}
++
++static umode_t
++skx_iio_mapping_visible(struct kobject *kobj, struct attribute *attr, int die)
++{
++	/* Root bus 0x00 is valid only for pmu_idx = 0. */
++	return pmu_iio_mapping_visible(kobj, attr, die, 0);
+ }
+ 
+ static ssize_t skx_iio_mapping_show(struct device *dev,
+@@ -3739,7 +3747,23 @@ static const struct attribute_group *skx_iio_attr_update[] = {
+ 	NULL,
+ };
+ 
+-static int skx_iio_set_mapping(struct intel_uncore_type *type)
++static void pmu_clear_mapping_attr(const struct attribute_group **groups,
++				   struct attribute_group *ag)
++{
++	int i;
++
++	for (i = 0; groups[i]; i++) {
++		if (groups[i] == ag) {
++			for (i++; groups[i]; i++)
++				groups[i - 1] = groups[i];
++			groups[i - 1] = NULL;
++			break;
++		}
++	}
++}
++
++static int
++pmu_iio_set_mapping(struct intel_uncore_type *type, struct attribute_group *ag)
+ {
+ 	char buf[64];
+ 	int ret;
+@@ -3747,8 +3771,8 @@ static int skx_iio_set_mapping(struct intel_uncore_type *type)
+ 	struct attribute **attrs = NULL;
+ 	struct dev_ext_attribute *eas = NULL;
+ 
+-	ret = skx_iio_get_topology(type);
+-	if (ret)
++	ret = type->get_topology(type);
++	if (ret < 0)
+ 		goto clear_attr_update;
+ 
+ 	ret = -ENOMEM;
+@@ -3774,7 +3798,7 @@ static int skx_iio_set_mapping(struct intel_uncore_type *type)
+ 		eas[die].var = (void *)die;
+ 		attrs[die] = &eas[die].attr.attr;
+ 	}
+-	skx_iio_mapping_group.attrs = attrs;
++	ag->attrs = attrs;
+ 
+ 	return 0;
+ err:
+@@ -3786,10 +3810,15 @@ clear_attrs:
+ clear_topology:
+ 	kfree(type->topology);
+ clear_attr_update:
+-	type->attr_update = NULL;
++	pmu_clear_mapping_attr(type->attr_update, ag);
+ 	return ret;
+ }
+ 
++static int skx_iio_set_mapping(struct intel_uncore_type *type)
++{
++	return pmu_iio_set_mapping(type, &skx_iio_mapping_group);
++}
++
+ static void skx_iio_cleanup_mapping(struct intel_uncore_type *type)
+ {
+ 	struct attribute **attr = skx_iio_mapping_group.attrs;
+@@ -3820,6 +3849,7 @@ static struct intel_uncore_type skx_uncore_iio = {
+ 	.ops			= &skx_uncore_iio_ops,
+ 	.format_group		= &skx_uncore_iio_format_group,
+ 	.attr_update		= skx_iio_attr_update,
++	.get_topology		= skx_iio_get_topology,
+ 	.set_mapping		= skx_iio_set_mapping,
+ 	.cleanup_mapping	= skx_iio_cleanup_mapping,
+ };
+@@ -4680,6 +4710,8 @@ static void __snr_uncore_mmio_init_box(struct intel_uncore_box *box,
+ 
+ 	addr += box_ctl;
+ 
++	pci_dev_put(pdev);
++
+ 	box->io_addr = ioremap(addr, type->mmio_map_size);
+ 	if (!box->io_addr) {
+ 		pr_warn("perf uncore: Failed to ioremap for %s.\n", type->name);
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index 01860c0d324d7..70fd21ebb9d55 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -453,8 +453,6 @@ void hyperv_cleanup(void)
+ {
+ 	union hv_x64_msr_hypercall_contents hypercall_msr;
+ 
+-	unregister_syscore_ops(&hv_syscore_ops);
+-
+ 	/* Reset our OS id */
+ 	wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
+ 
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index e2e22a5740a4d..a2a087a797ae5 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1896,6 +1896,8 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
+ 		if (ctrl == PR_SPEC_FORCE_DISABLE)
+ 			task_set_spec_ib_force_disable(task);
+ 		task_update_spec_tif(task);
++		if (task == current)
++			indirect_branch_prediction_barrier();
+ 		break;
+ 	default:
+ 		return -ERANGE;
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index 09f7c652346a9..4f9b7c1cfc36f 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -513,7 +513,7 @@ static u32 get_block_address(u32 current_addr, u32 low, u32 high,
+ 	/* Fall back to method we used for older processors: */
+ 	switch (block) {
+ 	case 0:
+-		addr = msr_ops.misc(bank);
++		addr = mca_msr_reg(bank, MCA_MISC);
+ 		break;
+ 	case 1:
+ 		offset = ((low & MASK_BLKPTR_LO) >> 21);
+@@ -952,6 +952,24 @@ _log_error_bank(unsigned int bank, u32 msr_stat, u32 msr_addr, u64 misc)
+ 	return status & MCI_STATUS_DEFERRED;
+ }
+ 
++static bool _log_error_deferred(unsigned int bank, u32 misc)
++{
++	if (!_log_error_bank(bank, mca_msr_reg(bank, MCA_STATUS),
++			     mca_msr_reg(bank, MCA_ADDR), misc))
++		return false;
++
++	/*
++	 * Non-SMCA systems don't have MCA_DESTAT/MCA_DEADDR registers.
++	 * Return true here to avoid accessing these registers.
++	 */
++	if (!mce_flags.smca)
++		return true;
++
++	/* Clear MCA_DESTAT if the deferred error was logged from MCA_STATUS. */
++	wrmsrl(MSR_AMD64_SMCA_MCx_DESTAT(bank), 0);
++	return true;
++}
++
+ /*
+  * We have three scenarios for checking for Deferred errors:
+  *
+@@ -963,19 +981,8 @@ _log_error_bank(unsigned int bank, u32 msr_stat, u32 msr_addr, u64 misc)
+  */
+ static void log_error_deferred(unsigned int bank)
+ {
+-	bool defrd;
+-
+-	defrd = _log_error_bank(bank, msr_ops.status(bank),
+-					msr_ops.addr(bank), 0);
+-
+-	if (!mce_flags.smca)
+-		return;
+-
+-	/* Clear MCA_DESTAT if we logged the deferred error from MCA_STATUS. */
+-	if (defrd) {
+-		wrmsrl(MSR_AMD64_SMCA_MCx_DESTAT(bank), 0);
++	if (_log_error_deferred(bank, 0))
+ 		return;
+-	}
+ 
+ 	/*
+ 	 * Only deferred errors are logged in MCA_DE{STAT,ADDR} so just check
+@@ -996,7 +1003,7 @@ static void amd_deferred_error_interrupt(void)
+ 
+ static void log_error_thresholding(unsigned int bank, u64 misc)
+ {
+-	_log_error_bank(bank, msr_ops.status(bank), msr_ops.addr(bank), misc);
++	_log_error_deferred(bank, misc);
+ }
+ 
+ static void log_and_reset_block(struct threshold_block *block)
+@@ -1384,7 +1391,7 @@ static int threshold_create_bank(struct threshold_bank **bp, unsigned int cpu,
+ 		}
+ 	}
+ 
+-	err = allocate_threshold_blocks(cpu, b, bank, 0, msr_ops.misc(bank));
++	err = allocate_threshold_blocks(cpu, b, bank, 0, mca_msr_reg(bank, MCA_MISC));
+ 	if (err)
+ 		goto out_kobj;
+ 
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 5cf1a024408bf..1906387a0faf4 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -176,53 +176,27 @@ void mce_unregister_decode_chain(struct notifier_block *nb)
+ }
+ EXPORT_SYMBOL_GPL(mce_unregister_decode_chain);
+ 
+-static inline u32 ctl_reg(int bank)
++u32 mca_msr_reg(int bank, enum mca_msr reg)
+ {
+-	return MSR_IA32_MCx_CTL(bank);
+-}
+-
+-static inline u32 status_reg(int bank)
+-{
+-	return MSR_IA32_MCx_STATUS(bank);
+-}
+-
+-static inline u32 addr_reg(int bank)
+-{
+-	return MSR_IA32_MCx_ADDR(bank);
+-}
+-
+-static inline u32 misc_reg(int bank)
+-{
+-	return MSR_IA32_MCx_MISC(bank);
+-}
+-
+-static inline u32 smca_ctl_reg(int bank)
+-{
+-	return MSR_AMD64_SMCA_MCx_CTL(bank);
+-}
+-
+-static inline u32 smca_status_reg(int bank)
+-{
+-	return MSR_AMD64_SMCA_MCx_STATUS(bank);
+-}
++	if (mce_flags.smca) {
++		switch (reg) {
++		case MCA_CTL:	 return MSR_AMD64_SMCA_MCx_CTL(bank);
++		case MCA_ADDR:	 return MSR_AMD64_SMCA_MCx_ADDR(bank);
++		case MCA_MISC:	 return MSR_AMD64_SMCA_MCx_MISC(bank);
++		case MCA_STATUS: return MSR_AMD64_SMCA_MCx_STATUS(bank);
++		}
++	}
+ 
+-static inline u32 smca_addr_reg(int bank)
+-{
+-	return MSR_AMD64_SMCA_MCx_ADDR(bank);
+-}
++	switch (reg) {
++	case MCA_CTL:	 return MSR_IA32_MCx_CTL(bank);
++	case MCA_ADDR:	 return MSR_IA32_MCx_ADDR(bank);
++	case MCA_MISC:	 return MSR_IA32_MCx_MISC(bank);
++	case MCA_STATUS: return MSR_IA32_MCx_STATUS(bank);
++	}
+ 
+-static inline u32 smca_misc_reg(int bank)
+-{
+-	return MSR_AMD64_SMCA_MCx_MISC(bank);
++	return 0;
+ }
+ 
+-struct mca_msr_regs msr_ops = {
+-	.ctl	= ctl_reg,
+-	.status	= status_reg,
+-	.addr	= addr_reg,
+-	.misc	= misc_reg
+-};
+-
+ static void __print_mce(struct mce *m)
+ {
+ 	pr_emerg(HW_ERR "CPU %d: Machine Check%s: %Lx Bank %d: %016Lx\n",
+@@ -371,11 +345,11 @@ static int msr_to_offset(u32 msr)
+ 
+ 	if (msr == mca_cfg.rip_msr)
+ 		return offsetof(struct mce, ip);
+-	if (msr == msr_ops.status(bank))
++	if (msr == mca_msr_reg(bank, MCA_STATUS))
+ 		return offsetof(struct mce, status);
+-	if (msr == msr_ops.addr(bank))
++	if (msr == mca_msr_reg(bank, MCA_ADDR))
+ 		return offsetof(struct mce, addr);
+-	if (msr == msr_ops.misc(bank))
++	if (msr == mca_msr_reg(bank, MCA_MISC))
+ 		return offsetof(struct mce, misc);
+ 	if (msr == MSR_IA32_MCG_STATUS)
+ 		return offsetof(struct mce, mcgstatus);
+@@ -694,10 +668,10 @@ static struct notifier_block mce_default_nb = {
+ static noinstr void mce_read_aux(struct mce *m, int i)
+ {
+ 	if (m->status & MCI_STATUS_MISCV)
+-		m->misc = mce_rdmsrl(msr_ops.misc(i));
++		m->misc = mce_rdmsrl(mca_msr_reg(i, MCA_MISC));
+ 
+ 	if (m->status & MCI_STATUS_ADDRV) {
+-		m->addr = mce_rdmsrl(msr_ops.addr(i));
++		m->addr = mce_rdmsrl(mca_msr_reg(i, MCA_ADDR));
+ 
+ 		/*
+ 		 * Mask the reported address by the reported granularity.
+@@ -767,7 +741,7 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
+ 		m.bank = i;
+ 
+ 		barrier();
+-		m.status = mce_rdmsrl(msr_ops.status(i));
++		m.status = mce_rdmsrl(mca_msr_reg(i, MCA_STATUS));
+ 
+ 		/* If this entry is not valid, ignore it */
+ 		if (!(m.status & MCI_STATUS_VAL))
+@@ -835,7 +809,7 @@ clear_it:
+ 		/*
+ 		 * Clear state for this bank.
+ 		 */
+-		mce_wrmsrl(msr_ops.status(i), 0);
++		mce_wrmsrl(mca_msr_reg(i, MCA_STATUS), 0);
+ 	}
+ 
+ 	/*
+@@ -860,7 +834,7 @@ static int mce_no_way_out(struct mce *m, char **msg, unsigned long *validp,
+ 	int i;
+ 
+ 	for (i = 0; i < this_cpu_read(mce_num_banks); i++) {
+-		m->status = mce_rdmsrl(msr_ops.status(i));
++		m->status = mce_rdmsrl(mca_msr_reg(i, MCA_STATUS));
+ 		if (!(m->status & MCI_STATUS_VAL))
+ 			continue;
+ 
+@@ -1149,7 +1123,7 @@ static void mce_clear_state(unsigned long *toclear)
+ 
+ 	for (i = 0; i < this_cpu_read(mce_num_banks); i++) {
+ 		if (test_bit(i, toclear))
+-			mce_wrmsrl(msr_ops.status(i), 0);
++			mce_wrmsrl(mca_msr_reg(i, MCA_STATUS), 0);
+ 	}
+ }
+ 
+@@ -1208,7 +1182,7 @@ static void __mc_scan_banks(struct mce *m, struct pt_regs *regs, struct mce *fin
+ 		m->addr = 0;
+ 		m->bank = i;
+ 
+-		m->status = mce_rdmsrl(msr_ops.status(i));
++		m->status = mce_rdmsrl(mca_msr_reg(i, MCA_STATUS));
+ 		if (!(m->status & MCI_STATUS_VAL))
+ 			continue;
+ 
+@@ -1704,8 +1678,8 @@ static void __mcheck_cpu_init_clear_banks(void)
+ 
+ 		if (!b->init)
+ 			continue;
+-		wrmsrl(msr_ops.ctl(i), b->ctl);
+-		wrmsrl(msr_ops.status(i), 0);
++		wrmsrl(mca_msr_reg(i, MCA_CTL), b->ctl);
++		wrmsrl(mca_msr_reg(i, MCA_STATUS), 0);
+ 	}
+ }
+ 
+@@ -1731,7 +1705,7 @@ static void __mcheck_cpu_check_banks(void)
+ 		if (!b->init)
+ 			continue;
+ 
+-		rdmsrl(msr_ops.ctl(i), msrval);
++		rdmsrl(mca_msr_reg(i, MCA_CTL), msrval);
+ 		b->init = !!msrval;
+ 	}
+ }
+@@ -1890,13 +1864,6 @@ static void __mcheck_cpu_init_early(struct cpuinfo_x86 *c)
+ 		mce_flags.succor	 = !!cpu_has(c, X86_FEATURE_SUCCOR);
+ 		mce_flags.smca		 = !!cpu_has(c, X86_FEATURE_SMCA);
+ 		mce_flags.amd_threshold	 = 1;
+-
+-		if (mce_flags.smca) {
+-			msr_ops.ctl	= smca_ctl_reg;
+-			msr_ops.status	= smca_status_reg;
+-			msr_ops.addr	= smca_addr_reg;
+-			msr_ops.misc	= smca_misc_reg;
+-		}
+ 	}
+ }
+ 
+@@ -2272,7 +2239,7 @@ static void mce_disable_error_reporting(void)
+ 		struct mce_bank *b = &mce_banks[i];
+ 
+ 		if (b->init)
+-			wrmsrl(msr_ops.ctl(i), 0);
++			wrmsrl(mca_msr_reg(i, MCA_CTL), 0);
+ 	}
+ 	return;
+ }
+@@ -2624,7 +2591,7 @@ static void mce_reenable_cpu(void)
+ 		struct mce_bank *b = &mce_banks[i];
+ 
+ 		if (b->init)
+-			wrmsrl(msr_ops.ctl(i), b->ctl);
++			wrmsrl(mca_msr_reg(i, MCA_CTL), b->ctl);
+ 	}
+ }
+ 
+diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
+index 88dcc79cfb07d..3a485c0d5791b 100644
+--- a/arch/x86/kernel/cpu/mce/internal.h
++++ b/arch/x86/kernel/cpu/mce/internal.h
+@@ -168,14 +168,14 @@ struct mce_vendor_flags {
+ 
+ extern struct mce_vendor_flags mce_flags;
+ 
+-struct mca_msr_regs {
+-	u32 (*ctl)	(int bank);
+-	u32 (*status)	(int bank);
+-	u32 (*addr)	(int bank);
+-	u32 (*misc)	(int bank);
++enum mca_msr {
++	MCA_CTL,
++	MCA_STATUS,
++	MCA_ADDR,
++	MCA_MISC,
+ };
+ 
+-extern struct mca_msr_regs msr_ops;
++u32 mca_msr_reg(int bank, enum mca_msr reg);
+ 
+ /* Decide whether to add MCE record to MCE event pool or filter it out. */
+ extern bool filter_mce(struct mce *m);
+diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
+index 7e8e07bddd5fe..1ba590e6ef7bb 100644
+--- a/arch/x86/kernel/cpu/microcode/intel.c
++++ b/arch/x86/kernel/cpu/microcode/intel.c
+@@ -659,7 +659,6 @@ void load_ucode_intel_ap(void)
+ 	else
+ 		iup = &intel_ucode_patch;
+ 
+-reget:
+ 	if (!*iup) {
+ 		patch = __load_ucode_intel(&uci);
+ 		if (!patch)
+@@ -670,12 +669,7 @@ reget:
+ 
+ 	uci.mc = *iup;
+ 
+-	if (apply_microcode_early(&uci, true)) {
+-		/* Mixed-silicon system? Try to refetch the proper patch: */
+-		*iup = NULL;
+-
+-		goto reget;
+-	}
++	apply_microcode_early(&uci, true);
+ }
+ 
+ static struct microcode_intel *find_patch(struct ucode_cpu_info *uci)
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index d096b5a1dbebe..6d546f4426ac3 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -219,7 +219,9 @@ void ftrace_replace_code(int enable)
+ 
+ 		ret = ftrace_verify_code(rec->ip, old);
+ 		if (ret) {
++			ftrace_expected = old;
+ 			ftrace_bug(ret, rec);
++			ftrace_expected = NULL;
+ 			return;
+ 		}
+ 	}
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index ee85f1b258d0a..5de757099186c 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -37,6 +37,7 @@
+ #include <linux/extable.h>
+ #include <linux/kdebug.h>
+ #include <linux/kallsyms.h>
++#include <linux/kgdb.h>
+ #include <linux/ftrace.h>
+ #include <linux/kasan.h>
+ #include <linux/moduleloader.h>
+@@ -292,6 +293,8 @@ static int can_probe(unsigned long paddr)
+ 	/* Decode instructions */
+ 	addr = paddr - offset;
+ 	while (addr < paddr) {
++		int ret;
++
+ 		/*
+ 		 * Check if the instruction has been modified by another
+ 		 * kprobe, in which case we replace the breakpoint by the
+@@ -303,15 +306,20 @@ static int can_probe(unsigned long paddr)
+ 		__addr = recover_probed_instruction(buf, addr);
+ 		if (!__addr)
+ 			return 0;
+-		kernel_insn_init(&insn, (void *)__addr, MAX_INSN_SIZE);
+-		insn_get_length(&insn);
+ 
++		ret = insn_decode(&insn, (void *)__addr, MAX_INSN_SIZE, INSN_MODE_KERN);
++		if (ret < 0)
++			return 0;
++
++#ifdef CONFIG_KGDB
+ 		/*
+-		 * Another debugging subsystem might insert this breakpoint.
+-		 * In that case, we can't recover it.
++		 * If there is a dynamically installed kgdb sw breakpoint,
++		 * this function should not be probed.
+ 		 */
+-		if (insn.opcode.bytes[0] == INT3_INSN_OPCODE)
++		if (insn.opcode.bytes[0] == INT3_INSN_OPCODE &&
++		    kgdb_has_hit_break(addr))
+ 			return 0;
++#endif
+ 		addr += insn.length;
+ 	}
+ 
+@@ -347,8 +355,8 @@ static int is_IF_modifier(kprobe_opcode_t *insn)
+ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
+ {
+ 	kprobe_opcode_t buf[MAX_INSN_SIZE];
+-	unsigned long recovered_insn =
+-		recover_probed_instruction(buf, (unsigned long)src);
++	unsigned long recovered_insn = recover_probed_instruction(buf, (unsigned long)src);
++	int ret;
+ 
+ 	if (!recovered_insn || !insn)
+ 		return 0;
+@@ -358,8 +366,9 @@ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
+ 			MAX_INSN_SIZE))
+ 		return 0;
+ 
+-	kernel_insn_init(insn, dest, MAX_INSN_SIZE);
+-	insn_get_length(insn);
++	ret = insn_decode(insn, dest, MAX_INSN_SIZE, INSN_MODE_KERN);
++	if (ret < 0)
++		return 0;
+ 
+ 	/* We can not probe force emulate prefixed instruction */
+ 	if (insn_has_emulate_prefix(insn))
+diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
+index 08eb23074f92f..3d62014920064 100644
+--- a/arch/x86/kernel/kprobes/opt.c
++++ b/arch/x86/kernel/kprobes/opt.c
+@@ -15,6 +15,7 @@
+ #include <linux/extable.h>
+ #include <linux/kdebug.h>
+ #include <linux/kallsyms.h>
++#include <linux/kgdb.h>
+ #include <linux/ftrace.h>
+ #include <linux/objtool.h>
+ #include <linux/pgtable.h>
+@@ -272,19 +273,6 @@ static int insn_is_indirect_jump(struct insn *insn)
+ 	return ret;
+ }
+ 
+-static bool is_padding_int3(unsigned long addr, unsigned long eaddr)
+-{
+-	unsigned char ops;
+-
+-	for (; addr < eaddr; addr++) {
+-		if (get_kernel_nofault(ops, (void *)addr) < 0 ||
+-		    ops != INT3_INSN_OPCODE)
+-			return false;
+-	}
+-
+-	return true;
+-}
+-
+ /* Decode whole function to ensure any instructions don't jump into target */
+ static int can_optimize(unsigned long paddr)
+ {
+@@ -312,6 +300,8 @@ static int can_optimize(unsigned long paddr)
+ 	addr = paddr - offset;
+ 	while (addr < paddr - offset + size) { /* Decode until function end */
+ 		unsigned long recovered_insn;
++		int ret;
++
+ 		if (search_exception_tables(addr))
+ 			/*
+ 			 * Since some fixup code will jumps into this function,
+@@ -321,16 +311,19 @@ static int can_optimize(unsigned long paddr)
+ 		recovered_insn = recover_probed_instruction(buf, addr);
+ 		if (!recovered_insn)
+ 			return 0;
+-		kernel_insn_init(&insn, (void *)recovered_insn, MAX_INSN_SIZE);
+-		insn_get_length(&insn);
++
++		ret = insn_decode(&insn, (void *)recovered_insn, MAX_INSN_SIZE, INSN_MODE_KERN);
++		if (ret < 0)
++			return 0;
++#ifdef CONFIG_KGDB
+ 		/*
+-		 * In the case of detecting unknown breakpoint, this could be
+-		 * a padding INT3 between functions. Let's check that all the
+-		 * rest of the bytes are also INT3.
++		 * If there is a dynamically installed kgdb sw breakpoint,
++		 * this function should not be probed.
+ 		 */
+-		if (insn.opcode.bytes[0] == INT3_INSN_OPCODE)
+-			return is_padding_int3(addr, paddr - offset + size) ? 1 : 0;
+-
++		if (insn.opcode.bytes[0] == INT3_INSN_OPCODE &&
++		    kgdb_has_hit_break(addr))
++			return 0;
++#endif
+ 		/* Recover address */
+ 		insn.kaddr = (void *)addr;
+ 		insn.next_byte = (void *)(addr + insn.length);
+diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
+index 138bdb1fd1360..9f948b2d26f6b 100644
+--- a/arch/x86/kernel/uprobes.c
++++ b/arch/x86/kernel/uprobes.c
+@@ -722,8 +722,9 @@ static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
+ 	switch (opc1) {
+ 	case 0xeb:	/* jmp 8 */
+ 	case 0xe9:	/* jmp 32 */
+-	case 0x90:	/* prefix* + nop; same as jmp with .offs = 0 */
+ 		break;
++	case 0x90:	/* prefix* + nop; same as jmp with .offs = 0 */
++		goto setup;
+ 
+ 	case 0xe8:	/* call relative */
+ 		branch_clear_offset(auprobe, insn);
+@@ -753,6 +754,7 @@ static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
+ 			return -ENOTSUPP;
+ 	}
+ 
++setup:
+ 	auprobe->branch.opc1 = opc1;
+ 	auprobe->branch.ilen = insn->length;
+ 	auprobe->branch.offs = insn->immediate.value;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 498fed0dda98c..f15ddf58a5bcd 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -4901,24 +4901,35 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
+ 		| FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX;
+ 
+ 	/*
+-	 * Note, KVM cannot rely on hardware to perform the CR0/CR4 #UD checks
+-	 * that have higher priority than VM-Exit (see Intel SDM's pseudocode
+-	 * for VMXON), as KVM must load valid CR0/CR4 values into hardware while
+-	 * running the guest, i.e. KVM needs to check the _guest_ values.
++	 * Manually check CR4.VMXE checks, KVM must force CR4.VMXE=1 to enter
++	 * the guest and so cannot rely on hardware to perform the check,
++	 * which has higher priority than VM-Exit (see Intel SDM's pseudocode
++	 * for VMXON).
+ 	 *
+-	 * Rely on hardware for the other two pre-VM-Exit checks, !VM86 and
+-	 * !COMPATIBILITY modes.  KVM may run the guest in VM86 to emulate Real
+-	 * Mode, but KVM will never take the guest out of those modes.
++	 * Rely on hardware for the other pre-VM-Exit checks, CR0.PE=1, !VM86
++	 * and !COMPATIBILITY modes.  For an unrestricted guest, KVM doesn't
++	 * force any of the relevant guest state.  For a restricted guest, KVM
++	 * does force CR0.PE=1, but only to also force VM86 in order to emulate
++	 * Real Mode, and so there's no need to check CR0.PE manually.
+ 	 */
+-	if (!nested_host_cr0_valid(vcpu, kvm_read_cr0(vcpu)) ||
+-	    !nested_host_cr4_valid(vcpu, kvm_read_cr4(vcpu))) {
++	if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE)) {
+ 		kvm_queue_exception(vcpu, UD_VECTOR);
+ 		return 1;
+ 	}
+ 
+ 	/*
+-	 * CPL=0 and all other checks that are lower priority than VM-Exit must
+-	 * be checked manually.
++	 * The CPL is checked for "not in VMX operation" and for "in VMX root",
++	 * and has higher priority than the VM-Fail due to being post-VMXON,
++	 * i.e. VMXON #GPs outside of VMX non-root if CPL!=0.  In VMX non-root,
++	 * VMXON causes VM-Exit and KVM unconditionally forwards VMXON VM-Exits
++	 * from L2 to L1, i.e. there's no need to check for the vCPU being in
++	 * VMX non-root.
++	 *
++	 * Forwarding the VM-Exit unconditionally, i.e. without performing the
++	 * #UD checks (see above), is functionally ok because KVM doesn't allow
++	 * L1 to run L2 without CR4.VMXE=0, and because KVM never modifies L2's
++	 * CR0 or CR4, i.e. it's L2's responsibility to emulate #UDs that are
++	 * missed by hardware due to shadowing CR0 and/or CR4.
+ 	 */
+ 	if (vmx_get_cpl(vcpu)) {
+ 		kvm_inject_gp(vcpu, 0);
+@@ -4928,6 +4939,17 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
+ 	if (vmx->nested.vmxon)
+ 		return nested_vmx_fail(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
+ 
++	/*
++	 * Invalid CR0/CR4 generates #GP.  These checks are performed if and
++	 * only if the vCPU isn't already in VMX operation, i.e. effectively
++	 * have lower priority than the VM-Fail above.
++	 */
++	if (!nested_host_cr0_valid(vcpu, kvm_read_cr0(vcpu)) ||
++	    !nested_host_cr4_valid(vcpu, kvm_read_cr4(vcpu))) {
++		kvm_inject_gp(vcpu, 0);
++		return 1;
++	}
++
+ 	if ((vmx->msr_ia32_feature_control & VMXON_NEEDED_FEATURES)
+ 			!= VMXON_NEEDED_FEATURES) {
+ 		kvm_inject_gp(vcpu, 0);
+diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
+index c1b2f764b29a2..cdec892b28e2e 100644
+--- a/arch/x86/xen/smp.c
++++ b/arch/x86/xen/smp.c
+@@ -32,30 +32,30 @@ static irqreturn_t xen_reschedule_interrupt(int irq, void *dev_id)
+ 
+ void xen_smp_intr_free(unsigned int cpu)
+ {
++	kfree(per_cpu(xen_resched_irq, cpu).name);
++	per_cpu(xen_resched_irq, cpu).name = NULL;
+ 	if (per_cpu(xen_resched_irq, cpu).irq >= 0) {
+ 		unbind_from_irqhandler(per_cpu(xen_resched_irq, cpu).irq, NULL);
+ 		per_cpu(xen_resched_irq, cpu).irq = -1;
+-		kfree(per_cpu(xen_resched_irq, cpu).name);
+-		per_cpu(xen_resched_irq, cpu).name = NULL;
+ 	}
++	kfree(per_cpu(xen_callfunc_irq, cpu).name);
++	per_cpu(xen_callfunc_irq, cpu).name = NULL;
+ 	if (per_cpu(xen_callfunc_irq, cpu).irq >= 0) {
+ 		unbind_from_irqhandler(per_cpu(xen_callfunc_irq, cpu).irq, NULL);
+ 		per_cpu(xen_callfunc_irq, cpu).irq = -1;
+-		kfree(per_cpu(xen_callfunc_irq, cpu).name);
+-		per_cpu(xen_callfunc_irq, cpu).name = NULL;
+ 	}
++	kfree(per_cpu(xen_debug_irq, cpu).name);
++	per_cpu(xen_debug_irq, cpu).name = NULL;
+ 	if (per_cpu(xen_debug_irq, cpu).irq >= 0) {
+ 		unbind_from_irqhandler(per_cpu(xen_debug_irq, cpu).irq, NULL);
+ 		per_cpu(xen_debug_irq, cpu).irq = -1;
+-		kfree(per_cpu(xen_debug_irq, cpu).name);
+-		per_cpu(xen_debug_irq, cpu).name = NULL;
+ 	}
++	kfree(per_cpu(xen_callfuncsingle_irq, cpu).name);
++	per_cpu(xen_callfuncsingle_irq, cpu).name = NULL;
+ 	if (per_cpu(xen_callfuncsingle_irq, cpu).irq >= 0) {
+ 		unbind_from_irqhandler(per_cpu(xen_callfuncsingle_irq, cpu).irq,
+ 				       NULL);
+ 		per_cpu(xen_callfuncsingle_irq, cpu).irq = -1;
+-		kfree(per_cpu(xen_callfuncsingle_irq, cpu).name);
+-		per_cpu(xen_callfuncsingle_irq, cpu).name = NULL;
+ 	}
+ }
+ 
+@@ -65,6 +65,7 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	char *resched_name, *callfunc_name, *debug_name;
+ 
+ 	resched_name = kasprintf(GFP_KERNEL, "resched%d", cpu);
++	per_cpu(xen_resched_irq, cpu).name = resched_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_RESCHEDULE_VECTOR,
+ 				    cpu,
+ 				    xen_reschedule_interrupt,
+@@ -74,9 +75,9 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	if (rc < 0)
+ 		goto fail;
+ 	per_cpu(xen_resched_irq, cpu).irq = rc;
+-	per_cpu(xen_resched_irq, cpu).name = resched_name;
+ 
+ 	callfunc_name = kasprintf(GFP_KERNEL, "callfunc%d", cpu);
++	per_cpu(xen_callfunc_irq, cpu).name = callfunc_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_VECTOR,
+ 				    cpu,
+ 				    xen_call_function_interrupt,
+@@ -86,10 +87,10 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	if (rc < 0)
+ 		goto fail;
+ 	per_cpu(xen_callfunc_irq, cpu).irq = rc;
+-	per_cpu(xen_callfunc_irq, cpu).name = callfunc_name;
+ 
+ 	if (!xen_fifo_events) {
+ 		debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
++		per_cpu(xen_debug_irq, cpu).name = debug_name;
+ 		rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu,
+ 					     xen_debug_interrupt,
+ 					     IRQF_PERCPU | IRQF_NOBALANCING,
+@@ -97,10 +98,10 @@ int xen_smp_intr_init(unsigned int cpu)
+ 		if (rc < 0)
+ 			goto fail;
+ 		per_cpu(xen_debug_irq, cpu).irq = rc;
+-		per_cpu(xen_debug_irq, cpu).name = debug_name;
+ 	}
+ 
+ 	callfunc_name = kasprintf(GFP_KERNEL, "callfuncsingle%d", cpu);
++	per_cpu(xen_callfuncsingle_irq, cpu).name = callfunc_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_SINGLE_VECTOR,
+ 				    cpu,
+ 				    xen_call_function_single_interrupt,
+@@ -110,7 +111,6 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	if (rc < 0)
+ 		goto fail;
+ 	per_cpu(xen_callfuncsingle_irq, cpu).irq = rc;
+-	per_cpu(xen_callfuncsingle_irq, cpu).name = callfunc_name;
+ 
+ 	return 0;
+ 
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index 35b6d15d874d0..64873937cd1d7 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -98,18 +98,18 @@ asmlinkage __visible void cpu_bringup_and_idle(void)
+ 
+ void xen_smp_intr_free_pv(unsigned int cpu)
+ {
++	kfree(per_cpu(xen_irq_work, cpu).name);
++	per_cpu(xen_irq_work, cpu).name = NULL;
+ 	if (per_cpu(xen_irq_work, cpu).irq >= 0) {
+ 		unbind_from_irqhandler(per_cpu(xen_irq_work, cpu).irq, NULL);
+ 		per_cpu(xen_irq_work, cpu).irq = -1;
+-		kfree(per_cpu(xen_irq_work, cpu).name);
+-		per_cpu(xen_irq_work, cpu).name = NULL;
+ 	}
+ 
++	kfree(per_cpu(xen_pmu_irq, cpu).name);
++	per_cpu(xen_pmu_irq, cpu).name = NULL;
+ 	if (per_cpu(xen_pmu_irq, cpu).irq >= 0) {
+ 		unbind_from_irqhandler(per_cpu(xen_pmu_irq, cpu).irq, NULL);
+ 		per_cpu(xen_pmu_irq, cpu).irq = -1;
+-		kfree(per_cpu(xen_pmu_irq, cpu).name);
+-		per_cpu(xen_pmu_irq, cpu).name = NULL;
+ 	}
+ }
+ 
+@@ -119,6 +119,7 @@ int xen_smp_intr_init_pv(unsigned int cpu)
+ 	char *callfunc_name, *pmu_name;
+ 
+ 	callfunc_name = kasprintf(GFP_KERNEL, "irqwork%d", cpu);
++	per_cpu(xen_irq_work, cpu).name = callfunc_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_IRQ_WORK_VECTOR,
+ 				    cpu,
+ 				    xen_irq_work_interrupt,
+@@ -128,10 +129,10 @@ int xen_smp_intr_init_pv(unsigned int cpu)
+ 	if (rc < 0)
+ 		goto fail;
+ 	per_cpu(xen_irq_work, cpu).irq = rc;
+-	per_cpu(xen_irq_work, cpu).name = callfunc_name;
+ 
+ 	if (is_xen_pmu) {
+ 		pmu_name = kasprintf(GFP_KERNEL, "pmu%d", cpu);
++		per_cpu(xen_pmu_irq, cpu).name = pmu_name;
+ 		rc = bind_virq_to_irqhandler(VIRQ_XENPMU, cpu,
+ 					     xen_pmu_irq_handler,
+ 					     IRQF_PERCPU|IRQF_NOBALANCING,
+@@ -139,7 +140,6 @@ int xen_smp_intr_init_pv(unsigned int cpu)
+ 		if (rc < 0)
+ 			goto fail;
+ 		per_cpu(xen_pmu_irq, cpu).irq = rc;
+-		per_cpu(xen_pmu_irq, cpu).name = pmu_name;
+ 	}
+ 
+ 	return 0;
+diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
+index 043c73dfd2c98..5c6fc16e4b925 100644
+--- a/arch/x86/xen/spinlock.c
++++ b/arch/x86/xen/spinlock.c
+@@ -75,6 +75,7 @@ void xen_init_lock_cpu(int cpu)
+ 	     cpu, per_cpu(lock_kicker_irq, cpu));
+ 
+ 	name = kasprintf(GFP_KERNEL, "spinlock%d", cpu);
++	per_cpu(irq_name, cpu) = name;
+ 	irq = bind_ipi_to_irqhandler(XEN_SPIN_UNLOCK_VECTOR,
+ 				     cpu,
+ 				     dummy_handler,
+@@ -85,7 +86,6 @@ void xen_init_lock_cpu(int cpu)
+ 	if (irq >= 0) {
+ 		disable_irq(irq); /* make sure it's never delivered */
+ 		per_cpu(lock_kicker_irq, cpu) = irq;
+-		per_cpu(irq_name, cpu) = name;
+ 	}
+ 
+ 	printk("cpu %d spinlock event irq %d\n", cpu, irq);
+@@ -98,6 +98,8 @@ void xen_uninit_lock_cpu(int cpu)
+ 	if (!xen_pvspin)
+ 		return;
+ 
++	kfree(per_cpu(irq_name, cpu));
++	per_cpu(irq_name, cpu) = NULL;
+ 	/*
+ 	 * When booting the kernel with 'mitigations=auto,nosmt', the secondary
+ 	 * CPUs are not activated, and lock_kicker_irq is not initialized.
+@@ -108,8 +110,6 @@ void xen_uninit_lock_cpu(int cpu)
+ 
+ 	unbind_from_irqhandler(irq, NULL);
+ 	per_cpu(lock_kicker_irq, cpu) = -1;
+-	kfree(per_cpu(irq_name, cpu));
+-	per_cpu(irq_name, cpu) = NULL;
+ }
+ 
+ PV_CALLEE_SAVE_REGS_THUNK(xen_vcpu_stolen);
+diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
+index 7b52e7657b2d1..f0bc3398f3ed2 100644
+--- a/block/blk-mq-sysfs.c
++++ b/block/blk-mq-sysfs.c
+@@ -242,7 +242,7 @@ static int blk_mq_register_hctx(struct blk_mq_hw_ctx *hctx)
+ {
+ 	struct request_queue *q = hctx->queue;
+ 	struct blk_mq_ctx *ctx;
+-	int i, ret;
++	int i, j, ret;
+ 
+ 	if (!hctx->nr_ctx)
+ 		return 0;
+@@ -254,9 +254,16 @@ static int blk_mq_register_hctx(struct blk_mq_hw_ctx *hctx)
+ 	hctx_for_each_ctx(hctx, ctx, i) {
+ 		ret = kobject_add(&ctx->kobj, &hctx->kobj, "cpu%u", ctx->cpu);
+ 		if (ret)
+-			break;
++			goto out;
+ 	}
+ 
++	return 0;
++out:
++	hctx_for_each_ctx(hctx, ctx, j) {
++		if (j < i)
++			kobject_del(&ctx->kobj);
++	}
++	kobject_del(&hctx->kobj);
+ 	return ret;
+ }
+ 
+diff --git a/crypto/cryptd.c b/crypto/cryptd.c
+index 668095eca0faf..ca3a40fc7da91 100644
+--- a/crypto/cryptd.c
++++ b/crypto/cryptd.c
+@@ -68,11 +68,12 @@ struct aead_instance_ctx {
+ 
+ struct cryptd_skcipher_ctx {
+ 	refcount_t refcnt;
+-	struct crypto_sync_skcipher *child;
++	struct crypto_skcipher *child;
+ };
+ 
+ struct cryptd_skcipher_request_ctx {
+ 	crypto_completion_t complete;
++	struct skcipher_request req;
+ };
+ 
+ struct cryptd_hash_ctx {
+@@ -227,13 +228,13 @@ static int cryptd_skcipher_setkey(struct crypto_skcipher *parent,
+ 				  const u8 *key, unsigned int keylen)
+ {
+ 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(parent);
+-	struct crypto_sync_skcipher *child = ctx->child;
++	struct crypto_skcipher *child = ctx->child;
+ 
+-	crypto_sync_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+-	crypto_sync_skcipher_set_flags(child,
+-				       crypto_skcipher_get_flags(parent) &
+-					 CRYPTO_TFM_REQ_MASK);
+-	return crypto_sync_skcipher_setkey(child, key, keylen);
++	crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
++	crypto_skcipher_set_flags(child,
++				  crypto_skcipher_get_flags(parent) &
++				  CRYPTO_TFM_REQ_MASK);
++	return crypto_skcipher_setkey(child, key, keylen);
+ }
+ 
+ static void cryptd_skcipher_complete(struct skcipher_request *req, int err)
+@@ -258,13 +259,13 @@ static void cryptd_skcipher_encrypt(struct crypto_async_request *base,
+ 	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct crypto_sync_skcipher *child = ctx->child;
+-	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);
++	struct skcipher_request *subreq = &rctx->req;
++	struct crypto_skcipher *child = ctx->child;
+ 
+ 	if (unlikely(err == -EINPROGRESS))
+ 		goto out;
+ 
+-	skcipher_request_set_sync_tfm(subreq, child);
++	skcipher_request_set_tfm(subreq, child);
+ 	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
+ 				      NULL, NULL);
+ 	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+@@ -286,13 +287,13 @@ static void cryptd_skcipher_decrypt(struct crypto_async_request *base,
+ 	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct crypto_sync_skcipher *child = ctx->child;
+-	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);
++	struct skcipher_request *subreq = &rctx->req;
++	struct crypto_skcipher *child = ctx->child;
+ 
+ 	if (unlikely(err == -EINPROGRESS))
+ 		goto out;
+ 
+-	skcipher_request_set_sync_tfm(subreq, child);
++	skcipher_request_set_tfm(subreq, child);
+ 	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
+ 				      NULL, NULL);
+ 	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+@@ -343,9 +344,10 @@ static int cryptd_skcipher_init_tfm(struct crypto_skcipher *tfm)
+ 	if (IS_ERR(cipher))
+ 		return PTR_ERR(cipher);
+ 
+-	ctx->child = (struct crypto_sync_skcipher *)cipher;
++	ctx->child = cipher;
+ 	crypto_skcipher_set_reqsize(
+-		tfm, sizeof(struct cryptd_skcipher_request_ctx));
++		tfm, sizeof(struct cryptd_skcipher_request_ctx) +
++		     crypto_skcipher_reqsize(cipher));
+ 	return 0;
+ }
+ 
+@@ -353,7 +355,7 @@ static void cryptd_skcipher_exit_tfm(struct crypto_skcipher *tfm)
+ {
+ 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+ 
+-	crypto_free_sync_skcipher(ctx->child);
++	crypto_free_skcipher(ctx->child);
+ }
+ 
+ static void cryptd_skcipher_free(struct skcipher_instance *inst)
+@@ -931,7 +933,7 @@ struct crypto_skcipher *cryptd_skcipher_child(struct cryptd_skcipher *tfm)
+ {
+ 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(&tfm->base);
+ 
+-	return &ctx->child->base;
++	return ctx->child;
+ }
+ EXPORT_SYMBOL_GPL(cryptd_skcipher_child);
+ 
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index 8609174e036e8..7972d2784b3b5 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -1282,15 +1282,6 @@ static void test_mb_skcipher_speed(const char *algo, int enc, int secs,
+ 			goto out_free_tfm;
+ 		}
+ 
+-
+-	for (i = 0; i < num_mb; ++i)
+-		if (testmgr_alloc_buf(data[i].xbuf)) {
+-			while (i--)
+-				testmgr_free_buf(data[i].xbuf);
+-			goto out_free_tfm;
+-		}
+-
+-
+ 	for (i = 0; i < num_mb; ++i) {
+ 		data[i].req = skcipher_request_alloc(tfm, GFP_KERNEL);
+ 		if (!data[i].req) {
+diff --git a/drivers/acpi/acpica/dsmethod.c b/drivers/acpi/acpica/dsmethod.c
+index cf67caff878ab..97971c79c5f56 100644
+--- a/drivers/acpi/acpica/dsmethod.c
++++ b/drivers/acpi/acpica/dsmethod.c
+@@ -517,7 +517,7 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
+ 	info = ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_evaluate_info));
+ 	if (!info) {
+ 		status = AE_NO_MEMORY;
+-		goto cleanup;
++		goto pop_walk_state;
+ 	}
+ 
+ 	info->parameters = &this_walk_state->operands[0];
+@@ -529,7 +529,7 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
+ 
+ 	ACPI_FREE(info);
+ 	if (ACPI_FAILURE(status)) {
+-		goto cleanup;
++		goto pop_walk_state;
+ 	}
+ 
+ 	next_walk_state->method_nesting_depth =
+@@ -575,6 +575,12 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
+ 
+ 	return_ACPI_STATUS(status);
+ 
++pop_walk_state:
++
++	/* On error, pop the walk state to be deleted from thread */
++
++	acpi_ds_pop_walk_state(thread);
++
+ cleanup:
+ 
+ 	/* On error, we must terminate the method properly */
+diff --git a/drivers/acpi/acpica/utcopy.c b/drivers/acpi/acpica/utcopy.c
+index 41bdd0278dd8e..9a7cc679e5447 100644
+--- a/drivers/acpi/acpica/utcopy.c
++++ b/drivers/acpi/acpica/utcopy.c
+@@ -916,13 +916,6 @@ acpi_ut_copy_ipackage_to_ipackage(union acpi_operand_object *source_obj,
+ 	status = acpi_ut_walk_package_tree(source_obj, dest_obj,
+ 					   acpi_ut_copy_ielement_to_ielement,
+ 					   walk_state);
+-	if (ACPI_FAILURE(status)) {
+-
+-		/* On failure, delete the destination package object */
+-
+-		acpi_ut_remove_reference(dest_obj);
+-	}
+-
+ 	return_ACPI_STATUS(status);
+ }
+ 
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index ff2add0101fe5..7ca9fa9a75e24 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -83,6 +83,7 @@ enum board_ids {
+ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
+ static void ahci_remove_one(struct pci_dev *dev);
+ static void ahci_shutdown_one(struct pci_dev *dev);
++static void ahci_intel_pcs_quirk(struct pci_dev *pdev, struct ahci_host_priv *hpriv);
+ static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class,
+ 				 unsigned long deadline);
+ static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
+@@ -664,6 +665,25 @@ static void ahci_pci_save_initial_config(struct pci_dev *pdev,
+ 	ahci_save_initial_config(&pdev->dev, hpriv);
+ }
+ 
++static int ahci_pci_reset_controller(struct ata_host *host)
++{
++	struct pci_dev *pdev = to_pci_dev(host->dev);
++	struct ahci_host_priv *hpriv = host->private_data;
++	int rc;
++
++	rc = ahci_reset_controller(host);
++	if (rc)
++		return rc;
++
++	/*
++	 * If platform firmware failed to enable ports, try to enable
++	 * them here.
++	 */
++	ahci_intel_pcs_quirk(pdev, hpriv);
++
++	return 0;
++}
++
+ static void ahci_pci_init_controller(struct ata_host *host)
+ {
+ 	struct ahci_host_priv *hpriv = host->private_data;
+@@ -865,7 +885,7 @@ static int ahci_pci_device_runtime_resume(struct device *dev)
+ 	struct ata_host *host = pci_get_drvdata(pdev);
+ 	int rc;
+ 
+-	rc = ahci_reset_controller(host);
++	rc = ahci_pci_reset_controller(host);
+ 	if (rc)
+ 		return rc;
+ 	ahci_pci_init_controller(host);
+@@ -900,7 +920,7 @@ static int ahci_pci_device_resume(struct device *dev)
+ 		ahci_mcp89_apple_enable(pdev);
+ 
+ 	if (pdev->dev.power.power_state.event == PM_EVENT_SUSPEND) {
+-		rc = ahci_reset_controller(host);
++		rc = ahci_pci_reset_controller(host);
+ 		if (rc)
+ 			return rc;
+ 
+@@ -1785,12 +1805,6 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* save initial config */
+ 	ahci_pci_save_initial_config(pdev, hpriv);
+ 
+-	/*
+-	 * If platform firmware failed to enable ports, try to enable
+-	 * them here.
+-	 */
+-	ahci_intel_pcs_quirk(pdev, hpriv);
+-
+ 	/* prepare host */
+ 	if (hpriv->cap & HOST_CAP_NCQ) {
+ 		pi.flags |= ATA_FLAG_NCQ;
+@@ -1900,7 +1914,7 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (rc)
+ 		return rc;
+ 
+-	rc = ahci_reset_controller(host);
++	rc = ahci_pci_reset_controller(host);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/drivers/ata/pata_ixp4xx_cf.c b/drivers/ata/pata_ixp4xx_cf.c
+index abc0e87ca1a8b..43215a4c1e540 100644
+--- a/drivers/ata/pata_ixp4xx_cf.c
++++ b/drivers/ata/pata_ixp4xx_cf.c
+@@ -135,12 +135,12 @@ static void ixp4xx_setup_port(struct ata_port *ap,
+ 
+ static int ixp4xx_pata_probe(struct platform_device *pdev)
+ {
+-	unsigned int irq;
+ 	struct resource *cs0, *cs1;
+ 	struct ata_host *host;
+ 	struct ata_port *ap;
+ 	struct ixp4xx_pata_data *data = dev_get_platdata(&pdev->dev);
+ 	int ret;
++	int irq;
+ 
+ 	cs0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	cs1 = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+diff --git a/drivers/base/class.c b/drivers/base/class.c
+index c3451481194e8..ef7ff822bc082 100644
+--- a/drivers/base/class.c
++++ b/drivers/base/class.c
+@@ -192,6 +192,11 @@ int __class_register(struct class *cls, struct lock_class_key *key)
+ 	}
+ 	error = class_add_groups(class_get(cls), cls->class_groups);
+ 	class_put(cls);
++	if (error) {
++		kobject_del(&cp->subsys.kobj);
++		kfree_const(cp->subsys.kobj.name);
++		kfree(cp);
++	}
+ 	return error;
+ }
+ EXPORT_SYMBOL_GPL(__class_register);
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 72ef9e83a84b2..497e3d4255c41 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -1088,7 +1088,11 @@ static int __driver_attach(struct device *dev, void *data)
+ 		return 0;
+ 	} else if (ret < 0) {
+ 		dev_dbg(dev, "Bus failed to match device: %d\n", ret);
+-		return ret;
++		/*
++		 * Driver could not match with device, but may match with
++		 * another device on the bus.
++		 */
++		return 0;
+ 	} /* ret > 0 means positive match */
+ 
+ 	if (driver_allows_async_probing(drv)) {
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 835a39e84c1dd..360094692d29e 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -464,7 +464,10 @@ static int rpm_idle(struct device *dev, int rpmflags)
+ 	/* Pending requests need to be canceled. */
+ 	dev->power.request = RPM_REQ_NONE;
+ 
+-	if (dev->power.no_callbacks)
++	callback = RPM_GET_CALLBACK(dev, runtime_idle);
++
++	/* If no callback assume success. */
++	if (!callback || dev->power.no_callbacks)
+ 		goto out;
+ 
+ 	/* Carry out an asynchronous or a synchronous idle notification. */
+@@ -480,10 +483,17 @@ static int rpm_idle(struct device *dev, int rpmflags)
+ 
+ 	dev->power.idle_notification = true;
+ 
+-	callback = RPM_GET_CALLBACK(dev, runtime_idle);
++	if (dev->power.irq_safe)
++		spin_unlock(&dev->power.lock);
++	else
++		spin_unlock_irq(&dev->power.lock);
++
++	retval = callback(dev);
+ 
+-	if (callback)
+-		retval = __rpm_callback(callback, dev);
++	if (dev->power.irq_safe)
++		spin_lock(&dev->power.lock);
++	else
++		spin_lock_irq(&dev->power.lock);
+ 
+ 	dev->power.idle_notification = false;
+ 	wake_up_all(&dev->power.wait_queue);
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 51450f7c81afe..420bdaf8c356b 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -2819,7 +2819,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
+ 
+ 	if (init_submitter(device)) {
+ 		err = ERR_NOMEM;
+-		goto out_idr_remove_vol;
++		goto out_idr_remove_from_resource;
+ 	}
+ 
+ 	add_disk(disk);
+@@ -2836,8 +2836,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
+ 	drbd_debugfs_device_add(device);
+ 	return NO_ERROR;
+ 
+-out_idr_remove_vol:
+-	idr_remove(&connection->peer_devices, vnr);
+ out_idr_remove_from_resource:
+ 	for_each_connection_safe(connection, n, resource) {
+ 		peer_device = idr_remove(&connection->peer_devices, vnr);
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 54001ad5de9f5..3d905fda9b29a 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -661,13 +661,13 @@ static inline void btusb_free_frags(struct btusb_data *data)
+ 
+ 	spin_lock_irqsave(&data->rxlock, flags);
+ 
+-	kfree_skb(data->evt_skb);
++	dev_kfree_skb_irq(data->evt_skb);
+ 	data->evt_skb = NULL;
+ 
+-	kfree_skb(data->acl_skb);
++	dev_kfree_skb_irq(data->acl_skb);
+ 	data->acl_skb = NULL;
+ 
+-	kfree_skb(data->sco_skb);
++	dev_kfree_skb_irq(data->sco_skb);
+ 	data->sco_skb = NULL;
+ 
+ 	spin_unlock_irqrestore(&data->rxlock, flags);
+diff --git a/drivers/bluetooth/hci_bcsp.c b/drivers/bluetooth/hci_bcsp.c
+index cf4a560958173..8055f63603f45 100644
+--- a/drivers/bluetooth/hci_bcsp.c
++++ b/drivers/bluetooth/hci_bcsp.c
+@@ -378,7 +378,7 @@ static void bcsp_pkt_cull(struct bcsp_struct *bcsp)
+ 		i++;
+ 
+ 		__skb_unlink(skb, &bcsp->unack);
+-		kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 	}
+ 
+ 	if (skb_queue_empty(&bcsp->unack))
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index 996729e78105a..7f70a677b92b9 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -299,7 +299,7 @@ static void h5_pkt_cull(struct h5 *h5)
+ 			break;
+ 
+ 		__skb_unlink(skb, &h5->unack);
+-		kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 	}
+ 
+ 	if (skb_queue_empty(&h5->unack))
+diff --git a/drivers/bluetooth/hci_ll.c b/drivers/bluetooth/hci_ll.c
+index 8bfe024d1fcd8..7495ca34c9e7e 100644
+--- a/drivers/bluetooth/hci_ll.c
++++ b/drivers/bluetooth/hci_ll.c
+@@ -345,7 +345,7 @@ static int ll_enqueue(struct hci_uart *hu, struct sk_buff *skb)
+ 	default:
+ 		BT_ERR("illegal hcill state: %ld (losing packet)",
+ 		       ll->hcill_state);
+-		kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 		break;
+ 	}
+ 
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index eea18aed17f8a..60b0e13bb9fc7 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -905,7 +905,7 @@ static int qca_enqueue(struct hci_uart *hu, struct sk_buff *skb)
+ 	default:
+ 		BT_ERR("Illegal tx state: %d (losing packet)",
+ 		       qca->tx_ibs_state);
+-		kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 		break;
+ 	}
+ 
+diff --git a/drivers/char/hw_random/amd-rng.c b/drivers/char/hw_random/amd-rng.c
+index 9959c762da2f8..db3dd467194c2 100644
+--- a/drivers/char/hw_random/amd-rng.c
++++ b/drivers/char/hw_random/amd-rng.c
+@@ -143,15 +143,19 @@ static int __init mod_init(void)
+ found:
+ 	err = pci_read_config_dword(pdev, 0x58, &pmbase);
+ 	if (err)
+-		return err;
++		goto put_dev;
+ 
+ 	pmbase &= 0x0000FF00;
+-	if (pmbase == 0)
+-		return -EIO;
++	if (pmbase == 0) {
++		err = -EIO;
++		goto put_dev;
++	}
+ 
+ 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+-	if (!priv)
+-		return -ENOMEM;
++	if (!priv) {
++		err = -ENOMEM;
++		goto put_dev;
++	}
+ 
+ 	if (!request_region(pmbase + PMBASE_OFFSET, PMBASE_SIZE, DRV_NAME)) {
+ 		dev_err(&pdev->dev, DRV_NAME " region 0x%x already in use!\n",
+@@ -185,6 +189,8 @@ err_iomap:
+ 	release_region(pmbase + PMBASE_OFFSET, PMBASE_SIZE);
+ out:
+ 	kfree(priv);
++put_dev:
++	pci_dev_put(pdev);
+ 	return err;
+ }
+ 
+@@ -200,6 +206,8 @@ static void __exit mod_exit(void)
+ 
+ 	release_region(priv->pmbase + PMBASE_OFFSET, PMBASE_SIZE);
+ 
++	pci_dev_put(priv->pcidev);
++
+ 	kfree(priv);
+ }
+ 
+diff --git a/drivers/char/hw_random/geode-rng.c b/drivers/char/hw_random/geode-rng.c
+index e1d421a36a138..207272979f233 100644
+--- a/drivers/char/hw_random/geode-rng.c
++++ b/drivers/char/hw_random/geode-rng.c
+@@ -51,6 +51,10 @@ static const struct pci_device_id pci_tbl[] = {
+ };
+ MODULE_DEVICE_TABLE(pci, pci_tbl);
+ 
++struct amd_geode_priv {
++	struct pci_dev *pcidev;
++	void __iomem *membase;
++};
+ 
+ static int geode_rng_data_read(struct hwrng *rng, u32 *data)
+ {
+@@ -90,6 +94,7 @@ static int __init mod_init(void)
+ 	const struct pci_device_id *ent;
+ 	void __iomem *mem;
+ 	unsigned long rng_base;
++	struct amd_geode_priv *priv;
+ 
+ 	for_each_pci_dev(pdev) {
+ 		ent = pci_match_id(pci_tbl, pdev);
+@@ -97,17 +102,26 @@ static int __init mod_init(void)
+ 			goto found;
+ 	}
+ 	/* Device not found. */
+-	goto out;
++	return err;
+ 
+ found:
++	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
++	if (!priv) {
++		err = -ENOMEM;
++		goto put_dev;
++	}
++
+ 	rng_base = pci_resource_start(pdev, 0);
+ 	if (rng_base == 0)
+-		goto out;
++		goto free_priv;
+ 	err = -ENOMEM;
+ 	mem = ioremap(rng_base, 0x58);
+ 	if (!mem)
+-		goto out;
+-	geode_rng.priv = (unsigned long)mem;
++		goto free_priv;
++
++	geode_rng.priv = (unsigned long)priv;
++	priv->membase = mem;
++	priv->pcidev = pdev;
+ 
+ 	pr_info("AMD Geode RNG detected\n");
+ 	err = hwrng_register(&geode_rng);
+@@ -116,20 +130,26 @@ found:
+ 		       err);
+ 		goto err_unmap;
+ 	}
+-out:
+ 	return err;
+ 
+ err_unmap:
+ 	iounmap(mem);
+-	goto out;
++free_priv:
++	kfree(priv);
++put_dev:
++	pci_dev_put(pdev);
++	return err;
+ }
+ 
+ static void __exit mod_exit(void)
+ {
+-	void __iomem *mem = (void __iomem *)geode_rng.priv;
++	struct amd_geode_priv *priv;
+ 
++	priv = (struct amd_geode_priv *)geode_rng.priv;
+ 	hwrng_unregister(&geode_rng);
+-	iounmap(mem);
++	iounmap(priv->membase);
++	pci_dev_put(priv->pcidev);
++	kfree(priv);
+ }
+ 
+ module_init(mod_init);
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index 05e7339752ac3..b89f300751b1b 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -1284,6 +1284,7 @@ static void _ipmi_destroy_user(struct ipmi_user *user)
+ 	unsigned long    flags;
+ 	struct cmd_rcvr  *rcvr;
+ 	struct cmd_rcvr  *rcvrs = NULL;
++	struct module    *owner;
+ 
+ 	if (!acquire_ipmi_user(user, &i)) {
+ 		/*
+@@ -1345,8 +1346,9 @@ static void _ipmi_destroy_user(struct ipmi_user *user)
+ 		kfree(rcvr);
+ 	}
+ 
++	owner = intf->owner;
+ 	kref_put(&intf->refcount, intf_free);
+-	module_put(intf->owner);
++	module_put(owner);
+ }
+ 
+ int ipmi_destroy_user(struct ipmi_user *user)
+@@ -3540,12 +3542,16 @@ static void deliver_smi_err_response(struct ipmi_smi *intf,
+ 				     struct ipmi_smi_msg *msg,
+ 				     unsigned char err)
+ {
++	int rv;
+ 	msg->rsp[0] = msg->data[0] | 4;
+ 	msg->rsp[1] = msg->data[1];
+ 	msg->rsp[2] = err;
+ 	msg->rsp_size = 3;
+-	/* It's an error, so it will never requeue, no need to check return. */
+-	handle_one_recv_msg(intf, msg);
++
++	/* This will never requeue, but it may ask us to free the message. */
++	rv = handle_one_recv_msg(intf, msg);
++	if (rv == 0)
++		ipmi_free_smi_msg(msg);
+ }
+ 
+ static void cleanup_smi_msgs(struct ipmi_smi *intf)
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index 5eac94cf4ff89..7fe68c680d3ee 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -2160,6 +2160,20 @@ skip_fallback_noirq:
+ }
+ module_init(init_ipmi_si);
+ 
++static void wait_msg_processed(struct smi_info *smi_info)
++{
++	unsigned long jiffies_now;
++	long time_diff;
++
++	while (smi_info->curr_msg || (smi_info->si_state != SI_NORMAL)) {
++		jiffies_now = jiffies;
++		time_diff = (((long)jiffies_now - (long)smi_info->last_timeout_jiffies)
++		     * SI_USEC_PER_JIFFY);
++		smi_event_handler(smi_info, time_diff);
++		schedule_timeout_uninterruptible(1);
++	}
++}
++
+ static void shutdown_smi(void *send_info)
+ {
+ 	struct smi_info *smi_info = send_info;
+@@ -2194,16 +2208,13 @@ static void shutdown_smi(void *send_info)
+ 	 * in the BMC.  Note that timers and CPU interrupts are off,
+ 	 * so no need for locks.
+ 	 */
+-	while (smi_info->curr_msg || (smi_info->si_state != SI_NORMAL)) {
+-		poll(smi_info);
+-		schedule_timeout_uninterruptible(1);
+-	}
++	wait_msg_processed(smi_info);
++
+ 	if (smi_info->handlers)
+ 		disable_si_irq(smi_info);
+-	while (smi_info->curr_msg || (smi_info->si_state != SI_NORMAL)) {
+-		poll(smi_info);
+-		schedule_timeout_uninterruptible(1);
+-	}
++
++	wait_msg_processed(smi_info);
++
+ 	if (smi_info->handlers)
+ 		smi_info->handlers->cleanup(smi_info->si_sm);
+ 
+diff --git a/drivers/char/tpm/eventlog/acpi.c b/drivers/char/tpm/eventlog/acpi.c
+index 1b18ce5ebab1e..0913d3eb8d518 100644
+--- a/drivers/char/tpm/eventlog/acpi.c
++++ b/drivers/char/tpm/eventlog/acpi.c
+@@ -90,16 +90,21 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
+ 			return -ENODEV;
+ 
+ 		if (tbl->header.length <
+-				sizeof(*tbl) + sizeof(struct acpi_tpm2_phy))
++				sizeof(*tbl) + sizeof(struct acpi_tpm2_phy)) {
++			acpi_put_table((struct acpi_table_header *)tbl);
+ 			return -ENODEV;
++		}
+ 
+ 		tpm2_phy = (void *)tbl + sizeof(*tbl);
+ 		len = tpm2_phy->log_area_minimum_length;
+ 
+ 		start = tpm2_phy->log_area_start_address;
+-		if (!start || !len)
++		if (!start || !len) {
++			acpi_put_table((struct acpi_table_header *)tbl);
+ 			return -ENODEV;
++		}
+ 
++		acpi_put_table((struct acpi_table_header *)tbl);
+ 		format = EFI_TCG2_EVENT_LOG_FORMAT_TCG_2;
+ 	} else {
+ 		/* Find TCPA entry in RSDT (ACPI_LOGICAL_ADDRESSING) */
+@@ -120,8 +125,10 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
+ 			break;
+ 		}
+ 
++		acpi_put_table((struct acpi_table_header *)buff);
+ 		format = EFI_TCG2_EVENT_LOG_FORMAT_TCG_1_2;
+ 	}
++
+ 	if (!len) {
+ 		dev_warn(&chip->dev, "%s: TCPA log area empty\n", __func__);
+ 		return -EIO;
+@@ -156,5 +163,4 @@ err:
+ 	kfree(log->bios_event_log);
+ 	log->bios_event_log = NULL;
+ 	return ret;
+-
+ }
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index a9dcf31eadd21..5fe52a6839b53 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -252,7 +252,7 @@ static int __crb_relinquish_locality(struct device *dev,
+ 	iowrite32(CRB_LOC_CTRL_RELINQUISH, &priv->regs_h->loc_ctrl);
+ 	if (!crb_wait_for_reg_32(&priv->regs_h->loc_state, mask, value,
+ 				 TPM2_TIMEOUT_C)) {
+-		dev_warn(dev, "TPM_LOC_STATE_x.requestAccess timed out\n");
++		dev_warn(dev, "TPM_LOC_STATE_x.Relinquish timed out\n");
+ 		return -ETIME;
+ 	}
+ 
+@@ -676,12 +676,16 @@ static int crb_acpi_add(struct acpi_device *device)
+ 
+ 	/* Should the FIFO driver handle this? */
+ 	sm = buf->start_method;
+-	if (sm == ACPI_TPM2_MEMORY_MAPPED)
+-		return -ENODEV;
++	if (sm == ACPI_TPM2_MEMORY_MAPPED) {
++		rc = -ENODEV;
++		goto out;
++	}
+ 
+ 	priv = devm_kzalloc(dev, sizeof(struct crb_priv), GFP_KERNEL);
+-	if (!priv)
+-		return -ENOMEM;
++	if (!priv) {
++		rc = -ENOMEM;
++		goto out;
++	}
+ 
+ 	if (sm == ACPI_TPM2_COMMAND_BUFFER_WITH_ARM_SMC) {
+ 		if (buf->header.length < (sizeof(*buf) + sizeof(*crb_smc))) {
+@@ -689,7 +693,8 @@ static int crb_acpi_add(struct acpi_device *device)
+ 				FW_BUG "TPM2 ACPI table has wrong size %u for start method type %d\n",
+ 				buf->header.length,
+ 				ACPI_TPM2_COMMAND_BUFFER_WITH_ARM_SMC);
+-			return -EINVAL;
++			rc = -EINVAL;
++			goto out;
+ 		}
+ 		crb_smc = ACPI_ADD_PTR(struct tpm2_crb_smc, buf, sizeof(*buf));
+ 		priv->smc_func_id = crb_smc->smc_func_id;
+@@ -700,17 +705,23 @@ static int crb_acpi_add(struct acpi_device *device)
+ 
+ 	rc = crb_map_io(device, priv, buf);
+ 	if (rc)
+-		return rc;
++		goto out;
+ 
+ 	chip = tpmm_chip_alloc(dev, &tpm_crb);
+-	if (IS_ERR(chip))
+-		return PTR_ERR(chip);
++	if (IS_ERR(chip)) {
++		rc = PTR_ERR(chip);
++		goto out;
++	}
+ 
+ 	dev_set_drvdata(&chip->dev, priv);
+ 	chip->acpi_dev_handle = device->handle;
+ 	chip->flags = TPM_CHIP_FLAG_TPM2;
+ 
+-	return tpm_chip_register(chip);
++	rc = tpm_chip_register(chip);
++
++out:
++	acpi_put_table((struct acpi_table_header *)buf);
++	return rc;
+ }
+ 
+ static int crb_acpi_remove(struct acpi_device *device)
+diff --git a/drivers/char/tpm/tpm_ftpm_tee.c b/drivers/char/tpm/tpm_ftpm_tee.c
+index 6e3235565a4d8..d9daaafdd295c 100644
+--- a/drivers/char/tpm/tpm_ftpm_tee.c
++++ b/drivers/char/tpm/tpm_ftpm_tee.c
+@@ -397,7 +397,13 @@ static int __init ftpm_mod_init(void)
+ 	if (rc)
+ 		return rc;
+ 
+-	return driver_register(&ftpm_tee_driver.driver);
++	rc = driver_register(&ftpm_tee_driver.driver);
++	if (rc) {
++		platform_driver_unregister(&ftpm_tee_plat_driver);
++		return rc;
++	}
++
++	return 0;
+ }
+ 
+ static void __exit ftpm_mod_exit(void)
+diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c
+index 4ed6e660273a4..14fad16d371fb 100644
+--- a/drivers/char/tpm/tpm_tis.c
++++ b/drivers/char/tpm/tpm_tis.c
+@@ -125,6 +125,7 @@ static int check_acpi_tpm2(struct device *dev)
+ 	const struct acpi_device_id *aid = acpi_match_device(tpm_acpi_tbl, dev);
+ 	struct acpi_table_tpm2 *tbl;
+ 	acpi_status st;
++	int ret = 0;
+ 
+ 	if (!aid || aid->driver_data != DEVICE_IS_TPM2)
+ 		return 0;
+@@ -132,8 +133,7 @@ static int check_acpi_tpm2(struct device *dev)
+ 	/* If the ACPI TPM2 signature is matched then a global ACPI_SIG_TPM2
+ 	 * table is mandatory
+ 	 */
+-	st =
+-	    acpi_get_table(ACPI_SIG_TPM2, 1, (struct acpi_table_header **)&tbl);
++	st = acpi_get_table(ACPI_SIG_TPM2, 1, (struct acpi_table_header **)&tbl);
+ 	if (ACPI_FAILURE(st) || tbl->header.length < sizeof(*tbl)) {
+ 		dev_err(dev, FW_BUG "failed to get TPM2 ACPI table\n");
+ 		return -EINVAL;
+@@ -141,9 +141,10 @@ static int check_acpi_tpm2(struct device *dev)
+ 
+ 	/* The tpm2_crb driver handles this device */
+ 	if (tbl->start_method != ACPI_TPM2_MEMORY_MAPPED)
+-		return -ENODEV;
++		ret = -ENODEV;
+ 
+-	return 0;
++	acpi_put_table((struct acpi_table_header *)tbl);
++	return ret;
+ }
+ #else
+ static int check_acpi_tpm2(struct device *dev)
+diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
+index db122d94db583..8a49e072d6e86 100644
+--- a/drivers/clk/imx/clk-imx8mn.c
++++ b/drivers/clk/imx/clk-imx8mn.c
+@@ -105,27 +105,27 @@ static const char * const imx8mn_disp_pixel_sels[] = {"osc_24m", "video_pll1_out
+ 						      "sys_pll3_out", "clk_ext4", };
+ 
+ static const char * const imx8mn_sai2_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
+-						"video_pll1_out", "sys_pll1_133m", "osc_hdmi",
++						"video_pll1_out", "sys_pll1_133m", "dummy",
+ 						"clk_ext3", "clk_ext4", };
+ 
+ static const char * const imx8mn_sai3_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
+-						"video_pll1_out", "sys_pll1_133m", "osc_hdmi",
++						"video_pll1_out", "sys_pll1_133m", "dummy",
+ 						"clk_ext3", "clk_ext4", };
+ 
+ static const char * const imx8mn_sai5_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
+-						"video_pll1_out", "sys_pll1_133m", "osc_hdmi",
++						"video_pll1_out", "sys_pll1_133m", "dummy",
+ 						"clk_ext2", "clk_ext3", };
+ 
+ static const char * const imx8mn_sai6_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
+-						"video_pll1_out", "sys_pll1_133m", "osc_hdmi",
++						"video_pll1_out", "sys_pll1_133m", "dummy",
+ 						"clk_ext3", "clk_ext4", };
+ 
+ static const char * const imx8mn_sai7_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
+-						"video_pll1_out", "sys_pll1_133m", "osc_hdmi",
++						"video_pll1_out", "sys_pll1_133m", "dummy",
+ 						"clk_ext3", "clk_ext4", };
+ 
+ static const char * const imx8mn_spdif1_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
+-						  "video_pll1_out", "sys_pll1_133m", "osc_hdmi",
++						  "video_pll1_out", "sys_pll1_133m", "dummy",
+ 						  "clk_ext2", "clk_ext3", };
+ 
+ static const char * const imx8mn_enet_ref_sels[] = {"osc_24m", "sys_pll2_125m", "sys_pll2_50m",
+diff --git a/drivers/clk/qcom/clk-krait.c b/drivers/clk/qcom/clk-krait.c
+index 90046428693c2..e74fc81a14d00 100644
+--- a/drivers/clk/qcom/clk-krait.c
++++ b/drivers/clk/qcom/clk-krait.c
+@@ -98,6 +98,8 @@ static int krait_div2_set_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ 	if (d->lpl)
+ 		mask = mask << (d->shift + LPL_SHIFT) | mask << d->shift;
++	else
++		mask <<= d->shift;
+ 
+ 	spin_lock_irqsave(&krait_clock_reg_lock, flags);
+ 	val = krait_get_l2_indirect_reg(d->offset);
+diff --git a/drivers/clk/qcom/gcc-sm8250.c b/drivers/clk/qcom/gcc-sm8250.c
+index ab594a0f0c408..7ec11acc82984 100644
+--- a/drivers/clk/qcom/gcc-sm8250.c
++++ b/drivers/clk/qcom/gcc-sm8250.c
+@@ -3268,7 +3268,7 @@ static struct gdsc usb30_prim_gdsc = {
+ 	.pd = {
+ 		.name = "usb30_prim_gdsc",
+ 	},
+-	.pwrsts = PWRSTS_OFF_ON,
++	.pwrsts = PWRSTS_RET_ON,
+ };
+ 
+ static struct gdsc usb30_sec_gdsc = {
+@@ -3276,7 +3276,7 @@ static struct gdsc usb30_sec_gdsc = {
+ 	.pd = {
+ 		.name = "usb30_sec_gdsc",
+ 	},
+-	.pwrsts = PWRSTS_OFF_ON,
++	.pwrsts = PWRSTS_RET_ON,
+ };
+ 
+ static struct gdsc hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc = {
+diff --git a/drivers/clk/renesas/r9a06g032-clocks.c b/drivers/clk/renesas/r9a06g032-clocks.c
+index 245150a5484a2..285f6ac25372d 100644
+--- a/drivers/clk/renesas/r9a06g032-clocks.c
++++ b/drivers/clk/renesas/r9a06g032-clocks.c
+@@ -386,7 +386,7 @@ static int r9a06g032_attach_dev(struct generic_pm_domain *pd,
+ 	int error;
+ 	int index;
+ 
+-	while (!of_parse_phandle_with_args(np, "clocks", "#clock-cells", i,
++	while (!of_parse_phandle_with_args(np, "clocks", "#clock-cells", i++,
+ 					   &clkspec)) {
+ 		if (clkspec.np != pd->dev.of_node)
+ 			continue;
+@@ -399,7 +399,6 @@ static int r9a06g032_attach_dev(struct generic_pm_domain *pd,
+ 			if (error)
+ 				return error;
+ 		}
+-		i++;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/clk/rockchip/clk-pll.c b/drivers/clk/rockchip/clk-pll.c
+index bbbf9ce428672..d0bd513ff3c33 100644
+--- a/drivers/clk/rockchip/clk-pll.c
++++ b/drivers/clk/rockchip/clk-pll.c
+@@ -981,6 +981,7 @@ struct clk *rockchip_clk_register_pll(struct rockchip_clk_provider *ctx,
+ 	return mux_clk;
+ 
+ err_pll:
++	kfree(pll->rate_table);
+ 	clk_unregister(mux_clk);
+ 	mux_clk = pll_clk;
+ err_mux:
+diff --git a/drivers/clk/samsung/clk-pll.c b/drivers/clk/samsung/clk-pll.c
+index ac70ad785d8ed..33df20f813d59 100644
+--- a/drivers/clk/samsung/clk-pll.c
++++ b/drivers/clk/samsung/clk-pll.c
+@@ -1390,6 +1390,7 @@ static void __init _samsung_clk_register_pll(struct samsung_clk_provider *ctx,
+ 	if (ret) {
+ 		pr_err("%s: failed to register pll clock %s : %d\n",
+ 			__func__, pll_clk->name, ret);
++		kfree(pll->rate_table);
+ 		kfree(pll);
+ 		return;
+ 	}
+diff --git a/drivers/clk/socfpga/clk-gate.c b/drivers/clk/socfpga/clk-gate.c
+index cf94a12459ea4..ee2a2d284113c 100644
+--- a/drivers/clk/socfpga/clk-gate.c
++++ b/drivers/clk/socfpga/clk-gate.c
+@@ -174,21 +174,24 @@ void __init socfpga_gate_init(struct device_node *node)
+ 	u32 div_reg[3];
+ 	u32 clk_phase[2];
+ 	u32 fixed_div;
+-	struct clk *clk;
++	struct clk_hw *hw_clk;
+ 	struct socfpga_gate_clk *socfpga_clk;
+ 	const char *clk_name = node->name;
+ 	const char *parent_name[SOCFPGA_MAX_PARENTS];
+ 	struct clk_init_data init;
+ 	struct clk_ops *ops;
+ 	int rc;
++	int err;
+ 
+ 	socfpga_clk = kzalloc(sizeof(*socfpga_clk), GFP_KERNEL);
+ 	if (WARN_ON(!socfpga_clk))
+ 		return;
+ 
+ 	ops = kmemdup(&gateclk_ops, sizeof(gateclk_ops), GFP_KERNEL);
+-	if (WARN_ON(!ops))
++	if (WARN_ON(!ops)) {
++		kfree(socfpga_clk);
+ 		return;
++	}
+ 
+ 	rc = of_property_read_u32_array(node, "clk-gate", clk_gate, 2);
+ 	if (rc)
+@@ -238,12 +241,15 @@ void __init socfpga_gate_init(struct device_node *node)
+ 	init.parent_names = parent_name;
+ 	socfpga_clk->hw.hw.init = &init;
+ 
+-	clk = clk_register(NULL, &socfpga_clk->hw.hw);
+-	if (WARN_ON(IS_ERR(clk))) {
++	hw_clk = &socfpga_clk->hw.hw;
++
++	err = clk_hw_register(NULL, hw_clk);
++	if (err) {
++		kfree(ops);
+ 		kfree(socfpga_clk);
+ 		return;
+ 	}
+-	rc = of_clk_add_provider(node, of_clk_src_simple_get, clk);
++	rc = of_clk_add_provider(node, of_clk_src_simple_get, hw_clk);
+ 	if (WARN_ON(rc))
+ 		return;
+ }
+diff --git a/drivers/clk/socfpga/clk-periph.c b/drivers/clk/socfpga/clk-periph.c
+index 5e0c4b45f77f7..43707e2d72484 100644
+--- a/drivers/clk/socfpga/clk-periph.c
++++ b/drivers/clk/socfpga/clk-periph.c
+@@ -51,7 +51,7 @@ static __init void __socfpga_periph_init(struct device_node *node,
+ 	const struct clk_ops *ops)
+ {
+ 	u32 reg;
+-	struct clk *clk;
++	struct clk_hw *hw_clk;
+ 	struct socfpga_periph_clk *periph_clk;
+ 	const char *clk_name = node->name;
+ 	const char *parent_name[SOCFPGA_MAX_PARENTS];
+@@ -94,13 +94,13 @@ static __init void __socfpga_periph_init(struct device_node *node,
+ 	init.parent_names = parent_name;
+ 
+ 	periph_clk->hw.hw.init = &init;
++	hw_clk = &periph_clk->hw.hw;
+ 
+-	clk = clk_register(NULL, &periph_clk->hw.hw);
+-	if (WARN_ON(IS_ERR(clk))) {
++	if (clk_hw_register(NULL, hw_clk)) {
+ 		kfree(periph_clk);
+ 		return;
+ 	}
+-	rc = of_clk_add_provider(node, of_clk_src_simple_get, clk);
++	rc = of_clk_add_provider(node, of_clk_src_simple_get, hw_clk);
+ }
+ 
+ void __init socfpga_periph_init(struct device_node *node)
+diff --git a/drivers/clk/socfpga/clk-pll.c b/drivers/clk/socfpga/clk-pll.c
+index e5fb786843f39..dcb573d440346 100644
+--- a/drivers/clk/socfpga/clk-pll.c
++++ b/drivers/clk/socfpga/clk-pll.c
+@@ -70,17 +70,18 @@ static const struct clk_ops clk_pll_ops = {
+ 	.get_parent = clk_pll_get_parent,
+ };
+ 
+-static __init struct clk *__socfpga_pll_init(struct device_node *node,
++static __init struct clk_hw *__socfpga_pll_init(struct device_node *node,
+ 	const struct clk_ops *ops)
+ {
+ 	u32 reg;
+-	struct clk *clk;
++	struct clk_hw *hw_clk;
+ 	struct socfpga_pll *pll_clk;
+ 	const char *clk_name = node->name;
+ 	const char *parent_name[SOCFPGA_MAX_PARENTS];
+ 	struct clk_init_data init;
+ 	struct device_node *clkmgr_np;
+ 	int rc;
++	int err;
+ 
+ 	of_property_read_u32(node, "reg", &reg);
+ 
+@@ -106,13 +107,15 @@ static __init struct clk *__socfpga_pll_init(struct device_node *node,
+ 
+ 	pll_clk->hw.bit_idx = SOCFPGA_PLL_EXT_ENA;
+ 
+-	clk = clk_register(NULL, &pll_clk->hw.hw);
+-	if (WARN_ON(IS_ERR(clk))) {
++	hw_clk = &pll_clk->hw.hw;
++
++	err = clk_hw_register(NULL, hw_clk);
++	if (err) {
+ 		kfree(pll_clk);
+-		return NULL;
++		return ERR_PTR(err);
+ 	}
+-	rc = of_clk_add_provider(node, of_clk_src_simple_get, clk);
+-	return clk;
++	rc = of_clk_add_provider(node, of_clk_src_simple_get, hw_clk);
++	return hw_clk;
+ }
+ 
+ void __init socfpga_pll_init(struct device_node *node)
+diff --git a/drivers/clk/st/clkgen-fsyn.c b/drivers/clk/st/clkgen-fsyn.c
+index f1adc858b5907..0e58a7cda427f 100644
+--- a/drivers/clk/st/clkgen-fsyn.c
++++ b/drivers/clk/st/clkgen-fsyn.c
+@@ -942,9 +942,10 @@ static void __init st_of_quadfs_setup(struct device_node *np,
+ 
+ 	clk = st_clk_register_quadfs_pll(pll_name, clk_parent_name, data,
+ 			reg, lock);
+-	if (IS_ERR(clk))
++	if (IS_ERR(clk)) {
++		kfree(lock);
+ 		goto err_exit;
+-	else
++	} else
+ 		pr_debug("%s: parent %s rate %u\n",
+ 			__clk_get_name(clk),
+ 			__clk_get_name(clk_get_parent(clk)),
+diff --git a/drivers/clocksource/sh_cmt.c b/drivers/clocksource/sh_cmt.c
+index 2acfcc966bb54..66e4872ab34f9 100644
+--- a/drivers/clocksource/sh_cmt.c
++++ b/drivers/clocksource/sh_cmt.c
+@@ -13,6 +13,7 @@
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/ioport.h>
+ #include <linux/irq.h>
+ #include <linux/module.h>
+@@ -116,6 +117,7 @@ struct sh_cmt_device {
+ 	void __iomem *mapbase;
+ 	struct clk *clk;
+ 	unsigned long rate;
++	unsigned int reg_delay;
+ 
+ 	raw_spinlock_t lock; /* Protect the shared start/stop register */
+ 
+@@ -235,6 +237,8 @@ static const struct sh_cmt_info sh_cmt_info[] = {
+ #define CMCNT 1 /* channel register */
+ #define CMCOR 2 /* channel register */
+ 
++#define CMCLKE	0x1000	/* CLK Enable Register (R-Car Gen2) */
++
+ static inline u32 sh_cmt_read_cmstr(struct sh_cmt_channel *ch)
+ {
+ 	if (ch->iostart)
+@@ -245,10 +249,17 @@ static inline u32 sh_cmt_read_cmstr(struct sh_cmt_channel *ch)
+ 
+ static inline void sh_cmt_write_cmstr(struct sh_cmt_channel *ch, u32 value)
+ {
+-	if (ch->iostart)
+-		ch->cmt->info->write_control(ch->iostart, 0, value);
+-	else
+-		ch->cmt->info->write_control(ch->cmt->mapbase, 0, value);
++	u32 old_value = sh_cmt_read_cmstr(ch);
++
++	if (value != old_value) {
++		if (ch->iostart) {
++			ch->cmt->info->write_control(ch->iostart, 0, value);
++			udelay(ch->cmt->reg_delay);
++		} else {
++			ch->cmt->info->write_control(ch->cmt->mapbase, 0, value);
++			udelay(ch->cmt->reg_delay);
++		}
++	}
+ }
+ 
+ static inline u32 sh_cmt_read_cmcsr(struct sh_cmt_channel *ch)
+@@ -258,7 +269,12 @@ static inline u32 sh_cmt_read_cmcsr(struct sh_cmt_channel *ch)
+ 
+ static inline void sh_cmt_write_cmcsr(struct sh_cmt_channel *ch, u32 value)
+ {
+-	ch->cmt->info->write_control(ch->ioctrl, CMCSR, value);
++	u32 old_value = sh_cmt_read_cmcsr(ch);
++
++	if (value != old_value) {
++		ch->cmt->info->write_control(ch->ioctrl, CMCSR, value);
++		udelay(ch->cmt->reg_delay);
++	}
+ }
+ 
+ static inline u32 sh_cmt_read_cmcnt(struct sh_cmt_channel *ch)
+@@ -266,14 +282,33 @@ static inline u32 sh_cmt_read_cmcnt(struct sh_cmt_channel *ch)
+ 	return ch->cmt->info->read_count(ch->ioctrl, CMCNT);
+ }
+ 
+-static inline void sh_cmt_write_cmcnt(struct sh_cmt_channel *ch, u32 value)
++static inline int sh_cmt_write_cmcnt(struct sh_cmt_channel *ch, u32 value)
+ {
++	/* Tests showed that we need to wait 3 clocks here */
++	unsigned int cmcnt_delay = DIV_ROUND_UP(3 * ch->cmt->reg_delay, 2);
++	u32 reg;
++
++	if (ch->cmt->info->model > SH_CMT_16BIT) {
++		int ret = read_poll_timeout_atomic(sh_cmt_read_cmcsr, reg,
++						   !(reg & SH_CMT32_CMCSR_WRFLG),
++						   1, cmcnt_delay, false, ch);
++		if (ret < 0)
++			return ret;
++	}
++
+ 	ch->cmt->info->write_count(ch->ioctrl, CMCNT, value);
++	udelay(cmcnt_delay);
++	return 0;
+ }
+ 
+ static inline void sh_cmt_write_cmcor(struct sh_cmt_channel *ch, u32 value)
+ {
+-	ch->cmt->info->write_count(ch->ioctrl, CMCOR, value);
++	u32 old_value = ch->cmt->info->read_count(ch->ioctrl, CMCOR);
++
++	if (value != old_value) {
++		ch->cmt->info->write_count(ch->ioctrl, CMCOR, value);
++		udelay(ch->cmt->reg_delay);
++	}
+ }
+ 
+ static u32 sh_cmt_get_counter(struct sh_cmt_channel *ch, u32 *has_wrapped)
+@@ -317,7 +352,7 @@ static void sh_cmt_start_stop_ch(struct sh_cmt_channel *ch, int start)
+ 
+ static int sh_cmt_enable(struct sh_cmt_channel *ch)
+ {
+-	int k, ret;
++	int ret;
+ 
+ 	pm_runtime_get_sync(&ch->cmt->pdev->dev);
+ 	dev_pm_syscore_device(&ch->cmt->pdev->dev, true);
+@@ -345,26 +380,9 @@ static int sh_cmt_enable(struct sh_cmt_channel *ch)
+ 	}
+ 
+ 	sh_cmt_write_cmcor(ch, 0xffffffff);
+-	sh_cmt_write_cmcnt(ch, 0);
++	ret = sh_cmt_write_cmcnt(ch, 0);
+ 
+-	/*
+-	 * According to the sh73a0 user's manual, as CMCNT can be operated
+-	 * only by the RCLK (Pseudo 32 kHz), there's one restriction on
+-	 * modifying CMCNT register; two RCLK cycles are necessary before
+-	 * this register is either read or any modification of the value
+-	 * it holds is reflected in the LSI's actual operation.
+-	 *
+-	 * While at it, we're supposed to clear out the CMCNT as of this
+-	 * moment, so make sure it's processed properly here.  This will
+-	 * take RCLKx2 at maximum.
+-	 */
+-	for (k = 0; k < 100; k++) {
+-		if (!sh_cmt_read_cmcnt(ch))
+-			break;
+-		udelay(1);
+-	}
+-
+-	if (sh_cmt_read_cmcnt(ch)) {
++	if (ret || sh_cmt_read_cmcnt(ch)) {
+ 		dev_err(&ch->cmt->pdev->dev, "ch%u: cannot clear CMCNT\n",
+ 			ch->index);
+ 		ret = -ETIMEDOUT;
+@@ -849,6 +867,7 @@ static int sh_cmt_setup_channel(struct sh_cmt_channel *ch, unsigned int index,
+ 				unsigned int hwidx, bool clockevent,
+ 				bool clocksource, struct sh_cmt_device *cmt)
+ {
++	u32 value;
+ 	int ret;
+ 
+ 	/* Skip unused channels. */
+@@ -878,6 +897,11 @@ static int sh_cmt_setup_channel(struct sh_cmt_channel *ch, unsigned int index,
+ 		ch->iostart = cmt->mapbase + ch->hwidx * 0x100;
+ 		ch->ioctrl = ch->iostart + 0x10;
+ 		ch->timer_bit = 0;
++
++		/* Enable the clock supply to the channel */
++		value = ioread32(cmt->mapbase + CMCLKE);
++		value |= BIT(hwidx);
++		iowrite32(value, cmt->mapbase + CMCLKE);
+ 		break;
+ 	}
+ 
+@@ -968,8 +992,8 @@ MODULE_DEVICE_TABLE(of, sh_cmt_of_table);
+ 
+ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
+ {
+-	unsigned int mask;
+-	unsigned int i;
++	unsigned int mask, i;
++	unsigned long rate;
+ 	int ret;
+ 
+ 	cmt->pdev = pdev;
+@@ -1005,17 +1029,21 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
+ 	if (ret < 0)
+ 		goto err_clk_unprepare;
+ 
+-	if (cmt->info->width == 16)
+-		cmt->rate = clk_get_rate(cmt->clk) / 512;
+-	else
+-		cmt->rate = clk_get_rate(cmt->clk) / 8;
++	rate = clk_get_rate(cmt->clk);
++	if (!rate) {
++		ret = -EINVAL;
++		goto err_clk_disable;
++	}
+ 
+-	clk_disable(cmt->clk);
++	/* We shall wait 2 input clks after register writes */
++	if (cmt->info->model >= SH_CMT_48BIT)
++		cmt->reg_delay = DIV_ROUND_UP(2UL * USEC_PER_SEC, rate);
++	cmt->rate = rate / (cmt->info->width == 16 ? 512 : 8);
+ 
+ 	/* Map the memory resource(s). */
+ 	ret = sh_cmt_map_memory(cmt);
+ 	if (ret < 0)
+-		goto err_clk_unprepare;
++		goto err_clk_disable;
+ 
+ 	/* Allocate and setup the channels. */
+ 	cmt->num_channels = hweight8(cmt->hw_channels);
+@@ -1043,6 +1071,8 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
+ 		mask &= ~(1 << hwidx);
+ 	}
+ 
++	clk_disable(cmt->clk);
++
+ 	platform_set_drvdata(pdev, cmt);
+ 
+ 	return 0;
+@@ -1050,6 +1080,8 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
+ err_unmap:
+ 	kfree(cmt->channels);
+ 	iounmap(cmt->mapbase);
++err_clk_disable:
++	clk_disable(cmt->clk);
+ err_clk_unprepare:
+ 	clk_unprepare(cmt->clk);
+ err_clk_put:
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index 2737407ff0698..632523c1232f6 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -345,8 +345,10 @@ static int __init dmtimer_systimer_init_clock(struct dmtimer_systimer *t,
+ 		return error;
+ 
+ 	r = clk_get_rate(clock);
+-	if (!r)
++	if (!r) {
++		clk_disable_unprepare(clock);
+ 		return -ENODEV;
++	}
+ 
+ 	if (is_ick)
+ 		t->ick = clock;
+diff --git a/drivers/counter/stm32-lptimer-cnt.c b/drivers/counter/stm32-lptimer-cnt.c
+index 937439635d53f..b084e971a4934 100644
+--- a/drivers/counter/stm32-lptimer-cnt.c
++++ b/drivers/counter/stm32-lptimer-cnt.c
+@@ -69,7 +69,7 @@ static int stm32_lptim_set_enable_state(struct stm32_lptim_cnt *priv,
+ 
+ 	/* ensure CMP & ARR registers are properly written */
+ 	ret = regmap_read_poll_timeout(priv->regmap, STM32_LPTIM_ISR, val,
+-				       (val & STM32_LPTIM_CMPOK_ARROK),
++				       (val & STM32_LPTIM_CMPOK_ARROK) == STM32_LPTIM_CMPOK_ARROK,
+ 				       100, 1000);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/cpufreq/amd_freq_sensitivity.c b/drivers/cpufreq/amd_freq_sensitivity.c
+index d0b10baf039ab..151771129c7ba 100644
+--- a/drivers/cpufreq/amd_freq_sensitivity.c
++++ b/drivers/cpufreq/amd_freq_sensitivity.c
+@@ -124,6 +124,8 @@ static int __init amd_freq_sensitivity_init(void)
+ 	if (!pcidev) {
+ 		if (!boot_cpu_has(X86_FEATURE_PROC_FEEDBACK))
+ 			return -ENODEV;
++	} else {
++		pci_dev_put(pcidev);
+ 	}
+ 
+ 	if (rdmsrl_safe(MSR_AMD64_FREQ_SENSITIVITY_ACTUAL, &val))
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 30dafe8fc5054..58342390966b7 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1211,6 +1211,7 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
+ 	if (!zalloc_cpumask_var(&policy->real_cpus, GFP_KERNEL))
+ 		goto err_free_rcpumask;
+ 
++	init_completion(&policy->kobj_unregister);
+ 	ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq,
+ 				   cpufreq_global_kobject, "policy%u", cpu);
+ 	if (ret) {
+@@ -1249,7 +1250,6 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
+ 	init_rwsem(&policy->rwsem);
+ 	spin_lock_init(&policy->transition_lock);
+ 	init_waitqueue_head(&policy->transition_wait);
+-	init_completion(&policy->kobj_unregister);
+ 	INIT_WORK(&policy->update, handle_update);
+ 
+ 	policy->cpu = cpu;
+diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
+index 6de07556665b1..a9880998f8bad 100644
+--- a/drivers/cpufreq/qcom-cpufreq-hw.c
++++ b/drivers/cpufreq/qcom-cpufreq-hw.c
+@@ -158,6 +158,7 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
+ 		}
+ 	} else if (ret != -ENODEV) {
+ 		dev_err(cpu_dev, "Invalid opp table in device tree\n");
++		kfree(table);
+ 		return ret;
+ 	} else {
+ 		policy->fast_switch_possible = true;
+diff --git a/drivers/cpuidle/dt_idle_states.c b/drivers/cpuidle/dt_idle_states.c
+index 252f2a9686a62..448bc796b0b40 100644
+--- a/drivers/cpuidle/dt_idle_states.c
++++ b/drivers/cpuidle/dt_idle_states.c
+@@ -223,6 +223,6 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
+ 	 * also be 0 on platforms with missing DT idle states or legacy DT
+ 	 * configuration predating the DT idle states bindings.
+ 	 */
+-	return i;
++	return state_idx - start_idx;
+ }
+ EXPORT_SYMBOL_GPL(dt_init_idle_driver);
+diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
+index ff5e85eefbf69..0a3dd0793f30e 100644
+--- a/drivers/crypto/Kconfig
++++ b/drivers/crypto/Kconfig
+@@ -749,7 +749,12 @@ config CRYPTO_DEV_IMGTEC_HASH
+ config CRYPTO_DEV_ROCKCHIP
+ 	tristate "Rockchip's Cryptographic Engine driver"
+ 	depends on OF && ARCH_ROCKCHIP
++	depends on PM
++	select CRYPTO_ECB
++	select CRYPTO_CBC
++	select CRYPTO_DES
+ 	select CRYPTO_AES
++	select CRYPTO_ENGINE
+ 	select CRYPTO_LIB_DES
+ 	select CRYPTO_MD5
+ 	select CRYPTO_SHA1
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index d0954993e2e37..49c7a8b464ddf 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -105,7 +105,7 @@ static int sun8i_ss_setup_ivs(struct skcipher_request *areq)
+ 	unsigned int ivsize = crypto_skcipher_ivsize(tfm);
+ 	struct sun8i_ss_flow *sf = &ss->flows[rctx->flow];
+ 	int i = 0;
+-	u32 a;
++	dma_addr_t a;
+ 	int err;
+ 
+ 	rctx->ivlen = ivsize;
+diff --git a/drivers/crypto/amlogic/amlogic-gxl-core.c b/drivers/crypto/amlogic/amlogic-gxl-core.c
+index 5bbeff433c8c0..7a5cf1122af9e 100644
+--- a/drivers/crypto/amlogic/amlogic-gxl-core.c
++++ b/drivers/crypto/amlogic/amlogic-gxl-core.c
+@@ -240,7 +240,6 @@ static int meson_crypto_probe(struct platform_device *pdev)
+ 		return err;
+ 	}
+ 
+-	mc->irqs = devm_kcalloc(mc->dev, MAXFLOW, sizeof(int), GFP_KERNEL);
+ 	for (i = 0; i < MAXFLOW; i++) {
+ 		mc->irqs[i] = platform_get_irq(pdev, i);
+ 		if (mc->irqs[i] < 0)
+diff --git a/drivers/crypto/amlogic/amlogic-gxl.h b/drivers/crypto/amlogic/amlogic-gxl.h
+index dc0f142324a3c..8c0746a1d6d43 100644
+--- a/drivers/crypto/amlogic/amlogic-gxl.h
++++ b/drivers/crypto/amlogic/amlogic-gxl.h
+@@ -95,7 +95,7 @@ struct meson_dev {
+ 	struct device *dev;
+ 	struct meson_flow *chanlist;
+ 	atomic_t flow;
+-	int *irqs;
++	int irqs[MAXFLOW];
+ #ifdef CONFIG_CRYPTO_DEV_AMLOGIC_GXL_DEBUG
+ 	struct dentry *dbgfs_dir;
+ #endif
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_mbx.c b/drivers/crypto/cavium/nitrox/nitrox_mbx.c
+index b51b0449b4783..a131dbbbcb869 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_mbx.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_mbx.c
+@@ -190,6 +190,7 @@ int nitrox_mbox_init(struct nitrox_device *ndev)
+ 	ndev->iov.pf2vf_wq = alloc_workqueue("nitrox_pf2vf", 0, 0);
+ 	if (!ndev->iov.pf2vf_wq) {
+ 		kfree(ndev->iov.vfdev);
++		ndev->iov.vfdev = NULL;
+ 		return -ENOMEM;
+ 	}
+ 	/* enable pf2vf mailbox interrupts */
+diff --git a/drivers/crypto/ccree/cc_debugfs.c b/drivers/crypto/ccree/cc_debugfs.c
+index 7083767602fcf..8f008f024f8f1 100644
+--- a/drivers/crypto/ccree/cc_debugfs.c
++++ b/drivers/crypto/ccree/cc_debugfs.c
+@@ -55,7 +55,7 @@ void __init cc_debugfs_global_init(void)
+ 	cc_debugfs_dir = debugfs_create_dir("ccree", NULL);
+ }
+ 
+-void __exit cc_debugfs_global_fini(void)
++void cc_debugfs_global_fini(void)
+ {
+ 	debugfs_remove(cc_debugfs_dir);
+ }
+diff --git a/drivers/crypto/ccree/cc_driver.c b/drivers/crypto/ccree/cc_driver.c
+index 6f519d3e896ca..7924693f58e09 100644
+--- a/drivers/crypto/ccree/cc_driver.c
++++ b/drivers/crypto/ccree/cc_driver.c
+@@ -614,9 +614,17 @@ static struct platform_driver ccree_driver = {
+ 
+ static int __init ccree_init(void)
+ {
++	int rc;
++
+ 	cc_debugfs_global_init();
+ 
+-	return platform_driver_register(&ccree_driver);
++	rc = platform_driver_register(&ccree_driver);
++	if (rc) {
++		cc_debugfs_global_fini();
++		return rc;
++	}
++
++	return 0;
+ }
+ module_init(ccree_init);
+ 
+diff --git a/drivers/crypto/hisilicon/qm.h b/drivers/crypto/hisilicon/qm.h
+index 0420f4ce71979..aaad3d76dc04e 100644
+--- a/drivers/crypto/hisilicon/qm.h
++++ b/drivers/crypto/hisilicon/qm.h
+@@ -289,14 +289,14 @@ struct hisi_qp {
+ static inline int q_num_set(const char *val, const struct kernel_param *kp,
+ 			    unsigned int device)
+ {
+-	struct pci_dev *pdev = pci_get_device(PCI_VENDOR_ID_HUAWEI,
+-					      device, NULL);
++	struct pci_dev *pdev;
+ 	u32 n, q_num;
+ 	int ret;
+ 
+ 	if (!val)
+ 		return -EINVAL;
+ 
++	pdev = pci_get_device(PCI_VENDOR_ID_HUAWEI, device, NULL);
+ 	if (!pdev) {
+ 		q_num = min_t(u32, QM_QNUM_V1, QM_QNUM_V2);
+ 		pr_info("No device found currently, suppose queue number is %d\n",
+@@ -306,6 +306,8 @@ static inline int q_num_set(const char *val, const struct kernel_param *kp,
+ 			q_num = QM_QNUM_V1;
+ 		else
+ 			q_num = QM_QNUM_V2;
++
++		pci_dev_put(pdev);
+ 	}
+ 
+ 	ret = kstrtou32(val, 10, &n);
+diff --git a/drivers/crypto/img-hash.c b/drivers/crypto/img-hash.c
+index 91f555ccbb319..cecae50d0f58d 100644
+--- a/drivers/crypto/img-hash.c
++++ b/drivers/crypto/img-hash.c
+@@ -357,12 +357,16 @@ static int img_hash_dma_init(struct img_hash_dev *hdev)
+ static void img_hash_dma_task(unsigned long d)
+ {
+ 	struct img_hash_dev *hdev = (struct img_hash_dev *)d;
+-	struct img_hash_request_ctx *ctx = ahash_request_ctx(hdev->req);
++	struct img_hash_request_ctx *ctx;
+ 	u8 *addr;
+ 	size_t nbytes, bleft, wsend, len, tbc;
+ 	struct scatterlist tsg;
+ 
+-	if (!hdev->req || !ctx->sg)
++	if (!hdev->req)
++		return;
++
++	ctx = ahash_request_ctx(hdev->req);
++	if (!ctx->sg)
+ 		return;
+ 
+ 	addr = sg_virt(ctx->sg);
+diff --git a/drivers/crypto/n2_core.c b/drivers/crypto/n2_core.c
+index 3642bf83d8094..8c4149c7d7bef 100644
+--- a/drivers/crypto/n2_core.c
++++ b/drivers/crypto/n2_core.c
+@@ -1228,6 +1228,7 @@ struct n2_hash_tmpl {
+ 	const u8	*hash_init;
+ 	u8		hw_op_hashsz;
+ 	u8		digest_size;
++	u8		statesize;
+ 	u8		block_size;
+ 	u8		auth_type;
+ 	u8		hmac_type;
+@@ -1259,6 +1260,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
+ 	  .hmac_type	= AUTH_TYPE_HMAC_MD5,
+ 	  .hw_op_hashsz	= MD5_DIGEST_SIZE,
+ 	  .digest_size	= MD5_DIGEST_SIZE,
++	  .statesize	= sizeof(struct md5_state),
+ 	  .block_size	= MD5_HMAC_BLOCK_SIZE },
+ 	{ .name		= "sha1",
+ 	  .hash_zero	= sha1_zero_message_hash,
+@@ -1267,6 +1269,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
+ 	  .hmac_type	= AUTH_TYPE_HMAC_SHA1,
+ 	  .hw_op_hashsz	= SHA1_DIGEST_SIZE,
+ 	  .digest_size	= SHA1_DIGEST_SIZE,
++	  .statesize	= sizeof(struct sha1_state),
+ 	  .block_size	= SHA1_BLOCK_SIZE },
+ 	{ .name		= "sha256",
+ 	  .hash_zero	= sha256_zero_message_hash,
+@@ -1275,6 +1278,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
+ 	  .hmac_type	= AUTH_TYPE_HMAC_SHA256,
+ 	  .hw_op_hashsz	= SHA256_DIGEST_SIZE,
+ 	  .digest_size	= SHA256_DIGEST_SIZE,
++	  .statesize	= sizeof(struct sha256_state),
+ 	  .block_size	= SHA256_BLOCK_SIZE },
+ 	{ .name		= "sha224",
+ 	  .hash_zero	= sha224_zero_message_hash,
+@@ -1283,6 +1287,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
+ 	  .hmac_type	= AUTH_TYPE_RESERVED,
+ 	  .hw_op_hashsz	= SHA256_DIGEST_SIZE,
+ 	  .digest_size	= SHA224_DIGEST_SIZE,
++	  .statesize	= sizeof(struct sha256_state),
+ 	  .block_size	= SHA224_BLOCK_SIZE },
+ };
+ #define NUM_HASH_TMPLS ARRAY_SIZE(hash_tmpls)
+@@ -1423,6 +1428,7 @@ static int __n2_register_one_ahash(const struct n2_hash_tmpl *tmpl)
+ 
+ 	halg = &ahash->halg;
+ 	halg->digestsize = tmpl->digest_size;
++	halg->statesize = tmpl->statesize;
+ 
+ 	base = &halg->base;
+ 	snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "%s", tmpl->name);
+diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
+index 48f78e34cf8dd..5a57617441b81 100644
+--- a/drivers/crypto/omap-sham.c
++++ b/drivers/crypto/omap-sham.c
+@@ -2130,7 +2130,7 @@ static int omap_sham_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(dev);
+ 	pm_runtime_irq_safe(dev);
+ 
+-	err = pm_runtime_get_sync(dev);
++	err = pm_runtime_resume_and_get(dev);
+ 	if (err < 0) {
+ 		dev_err(dev, "failed to get sync: %d\n", err);
+ 		goto err_pm;
+diff --git a/drivers/crypto/rockchip/rk3288_crypto.c b/drivers/crypto/rockchip/rk3288_crypto.c
+index 35d73061d1569..14a0aef18ab13 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto.c
++++ b/drivers/crypto/rockchip/rk3288_crypto.c
+@@ -65,186 +65,24 @@ static void rk_crypto_disable_clk(struct rk_crypto_info *dev)
+ 	clk_disable_unprepare(dev->sclk);
+ }
+ 
+-static int check_alignment(struct scatterlist *sg_src,
+-			   struct scatterlist *sg_dst,
+-			   int align_mask)
+-{
+-	int in, out, align;
+-
+-	in = IS_ALIGNED((uint32_t)sg_src->offset, 4) &&
+-	     IS_ALIGNED((uint32_t)sg_src->length, align_mask);
+-	if (!sg_dst)
+-		return in;
+-	out = IS_ALIGNED((uint32_t)sg_dst->offset, 4) &&
+-	      IS_ALIGNED((uint32_t)sg_dst->length, align_mask);
+-	align = in && out;
+-
+-	return (align && (sg_src->length == sg_dst->length));
+-}
+-
+-static int rk_load_data(struct rk_crypto_info *dev,
+-			struct scatterlist *sg_src,
+-			struct scatterlist *sg_dst)
+-{
+-	unsigned int count;
+-
+-	dev->aligned = dev->aligned ?
+-		check_alignment(sg_src, sg_dst, dev->align_size) :
+-		dev->aligned;
+-	if (dev->aligned) {
+-		count = min(dev->left_bytes, sg_src->length);
+-		dev->left_bytes -= count;
+-
+-		if (!dma_map_sg(dev->dev, sg_src, 1, DMA_TO_DEVICE)) {
+-			dev_err(dev->dev, "[%s:%d] dma_map_sg(src)  error\n",
+-				__func__, __LINE__);
+-			return -EINVAL;
+-		}
+-		dev->addr_in = sg_dma_address(sg_src);
+-
+-		if (sg_dst) {
+-			if (!dma_map_sg(dev->dev, sg_dst, 1, DMA_FROM_DEVICE)) {
+-				dev_err(dev->dev,
+-					"[%s:%d] dma_map_sg(dst)  error\n",
+-					__func__, __LINE__);
+-				dma_unmap_sg(dev->dev, sg_src, 1,
+-					     DMA_TO_DEVICE);
+-				return -EINVAL;
+-			}
+-			dev->addr_out = sg_dma_address(sg_dst);
+-		}
+-	} else {
+-		count = (dev->left_bytes > PAGE_SIZE) ?
+-			PAGE_SIZE : dev->left_bytes;
+-
+-		if (!sg_pcopy_to_buffer(dev->first, dev->src_nents,
+-					dev->addr_vir, count,
+-					dev->total - dev->left_bytes)) {
+-			dev_err(dev->dev, "[%s:%d] pcopy err\n",
+-				__func__, __LINE__);
+-			return -EINVAL;
+-		}
+-		dev->left_bytes -= count;
+-		sg_init_one(&dev->sg_tmp, dev->addr_vir, count);
+-		if (!dma_map_sg(dev->dev, &dev->sg_tmp, 1, DMA_TO_DEVICE)) {
+-			dev_err(dev->dev, "[%s:%d] dma_map_sg(sg_tmp)  error\n",
+-				__func__, __LINE__);
+-			return -ENOMEM;
+-		}
+-		dev->addr_in = sg_dma_address(&dev->sg_tmp);
+-
+-		if (sg_dst) {
+-			if (!dma_map_sg(dev->dev, &dev->sg_tmp, 1,
+-					DMA_FROM_DEVICE)) {
+-				dev_err(dev->dev,
+-					"[%s:%d] dma_map_sg(sg_tmp)  error\n",
+-					__func__, __LINE__);
+-				dma_unmap_sg(dev->dev, &dev->sg_tmp, 1,
+-					     DMA_TO_DEVICE);
+-				return -ENOMEM;
+-			}
+-			dev->addr_out = sg_dma_address(&dev->sg_tmp);
+-		}
+-	}
+-	dev->count = count;
+-	return 0;
+-}
+-
+-static void rk_unload_data(struct rk_crypto_info *dev)
+-{
+-	struct scatterlist *sg_in, *sg_out;
+-
+-	sg_in = dev->aligned ? dev->sg_src : &dev->sg_tmp;
+-	dma_unmap_sg(dev->dev, sg_in, 1, DMA_TO_DEVICE);
+-
+-	if (dev->sg_dst) {
+-		sg_out = dev->aligned ? dev->sg_dst : &dev->sg_tmp;
+-		dma_unmap_sg(dev->dev, sg_out, 1, DMA_FROM_DEVICE);
+-	}
+-}
+-
+ static irqreturn_t rk_crypto_irq_handle(int irq, void *dev_id)
+ {
+ 	struct rk_crypto_info *dev  = platform_get_drvdata(dev_id);
+ 	u32 interrupt_status;
+ 
+-	spin_lock(&dev->lock);
+ 	interrupt_status = CRYPTO_READ(dev, RK_CRYPTO_INTSTS);
+ 	CRYPTO_WRITE(dev, RK_CRYPTO_INTSTS, interrupt_status);
+ 
++	dev->status = 1;
+ 	if (interrupt_status & 0x0a) {
+ 		dev_warn(dev->dev, "DMA Error\n");
+-		dev->err = -EFAULT;
++		dev->status = 0;
+ 	}
+-	tasklet_schedule(&dev->done_task);
++	complete(&dev->complete);
+ 
+-	spin_unlock(&dev->lock);
+ 	return IRQ_HANDLED;
+ }
+ 
+-static int rk_crypto_enqueue(struct rk_crypto_info *dev,
+-			      struct crypto_async_request *async_req)
+-{
+-	unsigned long flags;
+-	int ret;
+-
+-	spin_lock_irqsave(&dev->lock, flags);
+-	ret = crypto_enqueue_request(&dev->queue, async_req);
+-	if (dev->busy) {
+-		spin_unlock_irqrestore(&dev->lock, flags);
+-		return ret;
+-	}
+-	dev->busy = true;
+-	spin_unlock_irqrestore(&dev->lock, flags);
+-	tasklet_schedule(&dev->queue_task);
+-
+-	return ret;
+-}
+-
+-static void rk_crypto_queue_task_cb(unsigned long data)
+-{
+-	struct rk_crypto_info *dev = (struct rk_crypto_info *)data;
+-	struct crypto_async_request *async_req, *backlog;
+-	unsigned long flags;
+-	int err = 0;
+-
+-	dev->err = 0;
+-	spin_lock_irqsave(&dev->lock, flags);
+-	backlog   = crypto_get_backlog(&dev->queue);
+-	async_req = crypto_dequeue_request(&dev->queue);
+-
+-	if (!async_req) {
+-		dev->busy = false;
+-		spin_unlock_irqrestore(&dev->lock, flags);
+-		return;
+-	}
+-	spin_unlock_irqrestore(&dev->lock, flags);
+-
+-	if (backlog) {
+-		backlog->complete(backlog, -EINPROGRESS);
+-		backlog = NULL;
+-	}
+-
+-	dev->async_req = async_req;
+-	err = dev->start(dev);
+-	if (err)
+-		dev->complete(dev->async_req, err);
+-}
+-
+-static void rk_crypto_done_task_cb(unsigned long data)
+-{
+-	struct rk_crypto_info *dev = (struct rk_crypto_info *)data;
+-
+-	if (dev->err) {
+-		dev->complete(dev->async_req, dev->err);
+-		return;
+-	}
+-
+-	dev->err = dev->update(dev);
+-	if (dev->err)
+-		dev->complete(dev->async_req, dev->err);
+-}
+-
+ static struct rk_crypto_tmp *rk_cipher_algs[] = {
+ 	&rk_ecb_aes_alg,
+ 	&rk_cbc_aes_alg,
+@@ -337,8 +175,6 @@ static int rk_crypto_probe(struct platform_device *pdev)
+ 	if (err)
+ 		goto err_crypto;
+ 
+-	spin_lock_init(&crypto_info->lock);
+-
+ 	crypto_info->reg = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(crypto_info->reg)) {
+ 		err = PTR_ERR(crypto_info->reg);
+@@ -389,18 +225,11 @@ static int rk_crypto_probe(struct platform_device *pdev)
+ 	crypto_info->dev = &pdev->dev;
+ 	platform_set_drvdata(pdev, crypto_info);
+ 
+-	tasklet_init(&crypto_info->queue_task,
+-		     rk_crypto_queue_task_cb, (unsigned long)crypto_info);
+-	tasklet_init(&crypto_info->done_task,
+-		     rk_crypto_done_task_cb, (unsigned long)crypto_info);
+-	crypto_init_queue(&crypto_info->queue, 50);
++	crypto_info->engine = crypto_engine_alloc_init(&pdev->dev, true);
++	crypto_engine_start(crypto_info->engine);
++	init_completion(&crypto_info->complete);
+ 
+-	crypto_info->enable_clk = rk_crypto_enable_clk;
+-	crypto_info->disable_clk = rk_crypto_disable_clk;
+-	crypto_info->load_data = rk_load_data;
+-	crypto_info->unload_data = rk_unload_data;
+-	crypto_info->enqueue = rk_crypto_enqueue;
+-	crypto_info->busy = false;
++	rk_crypto_enable_clk(crypto_info);
+ 
+ 	err = rk_crypto_register(crypto_info);
+ 	if (err) {
+@@ -412,9 +241,9 @@ static int rk_crypto_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_register_alg:
+-	tasklet_kill(&crypto_info->queue_task);
+-	tasklet_kill(&crypto_info->done_task);
++	crypto_engine_exit(crypto_info->engine);
+ err_crypto:
++	dev_err(dev, "Crypto Accelerator not successfully registered\n");
+ 	return err;
+ }
+ 
+@@ -423,8 +252,8 @@ static int rk_crypto_remove(struct platform_device *pdev)
+ 	struct rk_crypto_info *crypto_tmp = platform_get_drvdata(pdev);
+ 
+ 	rk_crypto_unregister();
+-	tasklet_kill(&crypto_tmp->done_task);
+-	tasklet_kill(&crypto_tmp->queue_task);
++	rk_crypto_disable_clk(crypto_tmp);
++	crypto_engine_exit(crypto_tmp->engine);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/crypto/rockchip/rk3288_crypto.h b/drivers/crypto/rockchip/rk3288_crypto.h
+index 3db595570c9c2..6b1413c0359b1 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto.h
++++ b/drivers/crypto/rockchip/rk3288_crypto.h
+@@ -5,9 +5,11 @@
+ #include <crypto/aes.h>
+ #include <crypto/internal/des.h>
+ #include <crypto/algapi.h>
++#include <linux/dma-mapping.h>
+ #include <linux/interrupt.h>
+ #include <linux/delay.h>
+ #include <linux/scatterlist.h>
++#include <crypto/engine.h>
+ #include <crypto/internal/hash.h>
+ #include <crypto/internal/skcipher.h>
+ 
+@@ -192,45 +194,15 @@ struct rk_crypto_info {
+ 	struct reset_control		*rst;
+ 	void __iomem			*reg;
+ 	int				irq;
+-	struct crypto_queue		queue;
+-	struct tasklet_struct		queue_task;
+-	struct tasklet_struct		done_task;
+-	struct crypto_async_request	*async_req;
+-	int 				err;
+-	/* device lock */
+-	spinlock_t			lock;
+-
+-	/* the public variable */
+-	struct scatterlist		*sg_src;
+-	struct scatterlist		*sg_dst;
+-	struct scatterlist		sg_tmp;
+-	struct scatterlist		*first;
+-	unsigned int			left_bytes;
+-	void				*addr_vir;
+-	int				aligned;
+-	int				align_size;
+-	size_t				src_nents;
+-	size_t				dst_nents;
+-	unsigned int			total;
+-	unsigned int			count;
+-	dma_addr_t			addr_in;
+-	dma_addr_t			addr_out;
+-	bool				busy;
+-	int (*start)(struct rk_crypto_info *dev);
+-	int (*update)(struct rk_crypto_info *dev);
+-	void (*complete)(struct crypto_async_request *base, int err);
+-	int (*enable_clk)(struct rk_crypto_info *dev);
+-	void (*disable_clk)(struct rk_crypto_info *dev);
+-	int (*load_data)(struct rk_crypto_info *dev,
+-			 struct scatterlist *sg_src,
+-			 struct scatterlist *sg_dst);
+-	void (*unload_data)(struct rk_crypto_info *dev);
+-	int (*enqueue)(struct rk_crypto_info *dev,
+-		       struct crypto_async_request *async_req);
++
++	struct crypto_engine *engine;
++	struct completion complete;
++	int status;
+ };
+ 
+ /* the private variable of hash */
+ struct rk_ahash_ctx {
++	struct crypto_engine_ctx enginectx;
+ 	struct rk_crypto_info		*dev;
+ 	/* for fallback */
+ 	struct crypto_ahash		*fallback_tfm;
+@@ -240,14 +212,23 @@ struct rk_ahash_ctx {
+ struct rk_ahash_rctx {
+ 	struct ahash_request		fallback_req;
+ 	u32				mode;
++	int nrsg;
+ };
+ 
+ /* the private variable of cipher */
+ struct rk_cipher_ctx {
++	struct crypto_engine_ctx enginectx;
+ 	struct rk_crypto_info		*dev;
+ 	unsigned int			keylen;
+-	u32				mode;
++	u8				key[AES_MAX_KEY_SIZE];
+ 	u8				iv[AES_BLOCK_SIZE];
++	struct crypto_skcipher *fallback_tfm;
++};
++
++struct rk_cipher_rctx {
++	u8 backup_iv[AES_BLOCK_SIZE];
++	u32				mode;
++	struct skcipher_request fallback_req;   // keep at the end
+ };
+ 
+ enum alg_type {
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_ahash.c b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+index 81befe7febaa4..edd40e16a3f0a 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+@@ -9,6 +9,7 @@
+  * Some ideas are from marvell/cesa.c and s5p-sss.c driver.
+  */
+ #include <linux/device.h>
++#include <asm/unaligned.h>
+ #include "rk3288_crypto.h"
+ 
+ /*
+@@ -16,6 +17,40 @@
+  * so we put the fixed hash out when met zero message.
+  */
+ 
++static bool rk_ahash_need_fallback(struct ahash_request *req)
++{
++	struct scatterlist *sg;
++
++	sg = req->src;
++	while (sg) {
++		if (!IS_ALIGNED(sg->offset, sizeof(u32))) {
++			return true;
++		}
++		if (sg->length % 4) {
++			return true;
++		}
++		sg = sg_next(sg);
++	}
++	return false;
++}
++
++static int rk_ahash_digest_fb(struct ahash_request *areq)
++{
++	struct rk_ahash_rctx *rctx = ahash_request_ctx(areq);
++	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
++	struct rk_ahash_ctx *tfmctx = crypto_ahash_ctx(tfm);
++
++	ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
++	rctx->fallback_req.base.flags = areq->base.flags &
++					CRYPTO_TFM_REQ_MAY_SLEEP;
++
++	rctx->fallback_req.nbytes = areq->nbytes;
++	rctx->fallback_req.src = areq->src;
++	rctx->fallback_req.result = areq->result;
++
++	return crypto_ahash_digest(&rctx->fallback_req);
++}
++
+ static int zero_message_process(struct ahash_request *req)
+ {
+ 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+@@ -38,17 +73,13 @@ static int zero_message_process(struct ahash_request *req)
+ 	return 0;
+ }
+ 
+-static void rk_ahash_crypto_complete(struct crypto_async_request *base, int err)
+-{
+-	if (base->complete)
+-		base->complete(base, err);
+-}
+-
+-static void rk_ahash_reg_init(struct rk_crypto_info *dev)
++static void rk_ahash_reg_init(struct ahash_request *req)
+ {
+-	struct ahash_request *req = ahash_request_cast(dev->async_req);
+ 	struct rk_ahash_rctx *rctx = ahash_request_ctx(req);
+-	int reg_status = 0;
++	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
++	struct rk_ahash_ctx *tctx = crypto_ahash_ctx(tfm);
++	struct rk_crypto_info *dev = tctx->dev;
++	int reg_status;
+ 
+ 	reg_status = CRYPTO_READ(dev, RK_CRYPTO_CTRL) |
+ 		     RK_CRYPTO_HASH_FLUSH | _SBF(0xffff, 16);
+@@ -74,7 +105,7 @@ static void rk_ahash_reg_init(struct rk_crypto_info *dev)
+ 					  RK_CRYPTO_BYTESWAP_BRFIFO |
+ 					  RK_CRYPTO_BYTESWAP_BTFIFO);
+ 
+-	CRYPTO_WRITE(dev, RK_CRYPTO_HASH_MSG_LEN, dev->total);
++	CRYPTO_WRITE(dev, RK_CRYPTO_HASH_MSG_LEN, req->nbytes);
+ }
+ 
+ static int rk_ahash_init(struct ahash_request *req)
+@@ -167,48 +198,64 @@ static int rk_ahash_digest(struct ahash_request *req)
+ 	struct rk_ahash_ctx *tctx = crypto_tfm_ctx(req->base.tfm);
+ 	struct rk_crypto_info *dev = tctx->dev;
+ 
++	if (rk_ahash_need_fallback(req))
++		return rk_ahash_digest_fb(req);
++
+ 	if (!req->nbytes)
+ 		return zero_message_process(req);
+-	else
+-		return dev->enqueue(dev, &req->base);
++
++	return crypto_transfer_hash_request_to_engine(dev->engine, req);
+ }
+ 
+-static void crypto_ahash_dma_start(struct rk_crypto_info *dev)
++static void crypto_ahash_dma_start(struct rk_crypto_info *dev, struct scatterlist *sg)
+ {
+-	CRYPTO_WRITE(dev, RK_CRYPTO_HRDMAS, dev->addr_in);
+-	CRYPTO_WRITE(dev, RK_CRYPTO_HRDMAL, (dev->count + 3) / 4);
++	CRYPTO_WRITE(dev, RK_CRYPTO_HRDMAS, sg_dma_address(sg));
++	CRYPTO_WRITE(dev, RK_CRYPTO_HRDMAL, sg_dma_len(sg) / 4);
+ 	CRYPTO_WRITE(dev, RK_CRYPTO_CTRL, RK_CRYPTO_HASH_START |
+ 					  (RK_CRYPTO_HASH_START << 16));
+ }
+ 
+-static int rk_ahash_set_data_start(struct rk_crypto_info *dev)
++static int rk_hash_prepare(struct crypto_engine *engine, void *breq)
++{
++	struct ahash_request *areq = container_of(breq, struct ahash_request, base);
++	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
++	struct rk_ahash_rctx *rctx = ahash_request_ctx(areq);
++	struct rk_ahash_ctx *tctx = crypto_ahash_ctx(tfm);
++	int ret;
++
++	ret = dma_map_sg(tctx->dev->dev, areq->src, sg_nents(areq->src), DMA_TO_DEVICE);
++	if (ret <= 0)
++		return -EINVAL;
++
++	rctx->nrsg = ret;
++
++	return 0;
++}
++
++static int rk_hash_unprepare(struct crypto_engine *engine, void *breq)
+ {
+-	int err;
++	struct ahash_request *areq = container_of(breq, struct ahash_request, base);
++	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
++	struct rk_ahash_rctx *rctx = ahash_request_ctx(areq);
++	struct rk_ahash_ctx *tctx = crypto_ahash_ctx(tfm);
+ 
+-	err = dev->load_data(dev, dev->sg_src, NULL);
+-	if (!err)
+-		crypto_ahash_dma_start(dev);
+-	return err;
++	dma_unmap_sg(tctx->dev->dev, areq->src, rctx->nrsg, DMA_TO_DEVICE);
++	return 0;
+ }
+ 
+-static int rk_ahash_start(struct rk_crypto_info *dev)
++static int rk_hash_run(struct crypto_engine *engine, void *breq)
+ {
+-	struct ahash_request *req = ahash_request_cast(dev->async_req);
+-	struct crypto_ahash *tfm;
+-	struct rk_ahash_rctx *rctx;
+-
+-	dev->total = req->nbytes;
+-	dev->left_bytes = req->nbytes;
+-	dev->aligned = 0;
+-	dev->align_size = 4;
+-	dev->sg_dst = NULL;
+-	dev->sg_src = req->src;
+-	dev->first = req->src;
+-	dev->src_nents = sg_nents(req->src);
+-	rctx = ahash_request_ctx(req);
++	struct ahash_request *areq = container_of(breq, struct ahash_request, base);
++	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
++	struct rk_ahash_rctx *rctx = ahash_request_ctx(areq);
++	struct rk_ahash_ctx *tctx = crypto_ahash_ctx(tfm);
++	struct scatterlist *sg = areq->src;
++	int err = 0;
++	int i;
++	u32 v;
++
+ 	rctx->mode = 0;
+ 
+-	tfm = crypto_ahash_reqtfm(req);
+ 	switch (crypto_ahash_digestsize(tfm)) {
+ 	case SHA1_DIGEST_SIZE:
+ 		rctx->mode = RK_CRYPTO_HASH_SHA1;
+@@ -220,32 +267,26 @@ static int rk_ahash_start(struct rk_crypto_info *dev)
+ 		rctx->mode = RK_CRYPTO_HASH_MD5;
+ 		break;
+ 	default:
+-		return -EINVAL;
++		err =  -EINVAL;
++		goto theend;
+ 	}
+ 
+-	rk_ahash_reg_init(dev);
+-	return rk_ahash_set_data_start(dev);
+-}
+-
+-static int rk_ahash_crypto_rx(struct rk_crypto_info *dev)
+-{
+-	int err = 0;
+-	struct ahash_request *req = ahash_request_cast(dev->async_req);
+-	struct crypto_ahash *tfm;
+-
+-	dev->unload_data(dev);
+-	if (dev->left_bytes) {
+-		if (dev->aligned) {
+-			if (sg_is_last(dev->sg_src)) {
+-				dev_warn(dev->dev, "[%s:%d], Lack of data\n",
+-					 __func__, __LINE__);
+-				err = -ENOMEM;
+-				goto out_rx;
+-			}
+-			dev->sg_src = sg_next(dev->sg_src);
++	rk_ahash_reg_init(areq);
++
++	while (sg) {
++		reinit_completion(&tctx->dev->complete);
++		tctx->dev->status = 0;
++		crypto_ahash_dma_start(tctx->dev, sg);
++		wait_for_completion_interruptible_timeout(&tctx->dev->complete,
++							  msecs_to_jiffies(2000));
++		if (!tctx->dev->status) {
++			dev_err(tctx->dev->dev, "DMA timeout\n");
++			err = -EFAULT;
++			goto theend;
+ 		}
+-		err = rk_ahash_set_data_start(dev);
+-	} else {
++		sg = sg_next(sg);
++	}
++
+ 		/*
+ 		 * it will take some time to process date after last dma
+ 		 * transmission.
+@@ -256,18 +297,20 @@ static int rk_ahash_crypto_rx(struct rk_crypto_info *dev)
+ 		 * efficiency, and make it response quickly when dma
+ 		 * complete.
+ 		 */
+-		while (!CRYPTO_READ(dev, RK_CRYPTO_HASH_STS))
+-			udelay(10);
+-
+-		tfm = crypto_ahash_reqtfm(req);
+-		memcpy_fromio(req->result, dev->reg + RK_CRYPTO_HASH_DOUT_0,
+-			      crypto_ahash_digestsize(tfm));
+-		dev->complete(dev->async_req, 0);
+-		tasklet_schedule(&dev->queue_task);
++	while (!CRYPTO_READ(tctx->dev, RK_CRYPTO_HASH_STS))
++		udelay(10);
++
++	for (i = 0; i < crypto_ahash_digestsize(tfm) / 4; i++) {
++		v = readl(tctx->dev->reg + RK_CRYPTO_HASH_DOUT_0 + i * 4);
++		put_unaligned_le32(v, areq->result + i * 4);
+ 	}
+ 
+-out_rx:
+-	return err;
++theend:
++	local_bh_disable();
++	crypto_finalize_hash_request(engine, breq, err);
++	local_bh_enable();
++
++	return 0;
+ }
+ 
+ static int rk_cra_hash_init(struct crypto_tfm *tfm)
+@@ -281,14 +324,6 @@ static int rk_cra_hash_init(struct crypto_tfm *tfm)
+ 	algt = container_of(alg, struct rk_crypto_tmp, alg.hash);
+ 
+ 	tctx->dev = algt->dev;
+-	tctx->dev->addr_vir = (void *)__get_free_page(GFP_KERNEL);
+-	if (!tctx->dev->addr_vir) {
+-		dev_err(tctx->dev->dev, "failed to kmalloc for addr_vir\n");
+-		return -ENOMEM;
+-	}
+-	tctx->dev->start = rk_ahash_start;
+-	tctx->dev->update = rk_ahash_crypto_rx;
+-	tctx->dev->complete = rk_ahash_crypto_complete;
+ 
+ 	/* for fallback */
+ 	tctx->fallback_tfm = crypto_alloc_ahash(alg_name, 0,
+@@ -297,19 +332,23 @@ static int rk_cra_hash_init(struct crypto_tfm *tfm)
+ 		dev_err(tctx->dev->dev, "Could not load fallback driver.\n");
+ 		return PTR_ERR(tctx->fallback_tfm);
+ 	}
++
+ 	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
+ 				 sizeof(struct rk_ahash_rctx) +
+ 				 crypto_ahash_reqsize(tctx->fallback_tfm));
+ 
+-	return tctx->dev->enable_clk(tctx->dev);
++	tctx->enginectx.op.do_one_request = rk_hash_run;
++	tctx->enginectx.op.prepare_request = rk_hash_prepare;
++	tctx->enginectx.op.unprepare_request = rk_hash_unprepare;
++
++	return 0;
+ }
+ 
+ static void rk_cra_hash_exit(struct crypto_tfm *tfm)
+ {
+ 	struct rk_ahash_ctx *tctx = crypto_tfm_ctx(tfm);
+ 
+-	free_page((unsigned long)tctx->dev->addr_vir);
+-	return tctx->dev->disable_clk(tctx->dev);
++	crypto_free_ahash(tctx->fallback_tfm);
+ }
+ 
+ struct rk_crypto_tmp rk_ahash_sha1 = {
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
+index 5bbf0d2722e11..67a7e05d5ae31 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
+@@ -9,23 +9,77 @@
+  * Some ideas are from marvell-cesa.c and s5p-sss.c driver.
+  */
+ #include <linux/device.h>
++#include <crypto/scatterwalk.h>
+ #include "rk3288_crypto.h"
+ 
+ #define RK_CRYPTO_DEC			BIT(0)
+ 
+-static void rk_crypto_complete(struct crypto_async_request *base, int err)
++static int rk_cipher_need_fallback(struct skcipher_request *req)
+ {
+-	if (base->complete)
+-		base->complete(base, err);
++	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
++	unsigned int bs = crypto_skcipher_blocksize(tfm);
++	struct scatterlist *sgs, *sgd;
++	unsigned int stodo, dtodo, len;
++
++	if (!req->cryptlen)
++		return true;
++
++	len = req->cryptlen;
++	sgs = req->src;
++	sgd = req->dst;
++	while (sgs && sgd) {
++		if (!IS_ALIGNED(sgs->offset, sizeof(u32))) {
++			return true;
++		}
++		if (!IS_ALIGNED(sgd->offset, sizeof(u32))) {
++			return true;
++		}
++		stodo = min(len, sgs->length);
++		if (stodo % bs) {
++			return true;
++		}
++		dtodo = min(len, sgd->length);
++		if (dtodo % bs) {
++			return true;
++		}
++		if (stodo != dtodo) {
++			return true;
++		}
++		len -= stodo;
++		sgs = sg_next(sgs);
++		sgd = sg_next(sgd);
++	}
++	return false;
++}
++
++static int rk_cipher_fallback(struct skcipher_request *areq)
++{
++	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
++	struct rk_cipher_ctx *op = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(areq);
++	int err;
++
++	skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm);
++	skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags,
++				      areq->base.complete, areq->base.data);
++	skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst,
++				   areq->cryptlen, areq->iv);
++	if (rctx->mode & RK_CRYPTO_DEC)
++		err = crypto_skcipher_decrypt(&rctx->fallback_req);
++	else
++		err = crypto_skcipher_encrypt(&rctx->fallback_req);
++	return err;
+ }
+ 
+ static int rk_handle_req(struct rk_crypto_info *dev,
+ 			 struct skcipher_request *req)
+ {
+-	if (!IS_ALIGNED(req->cryptlen, dev->align_size))
+-		return -EINVAL;
+-	else
+-		return dev->enqueue(dev, &req->base);
++	struct crypto_engine *engine = dev->engine;
++
++	if (rk_cipher_need_fallback(req))
++		return rk_cipher_fallback(req);
++
++	return crypto_transfer_skcipher_request_to_engine(engine, req);
+ }
+ 
+ static int rk_aes_setkey(struct crypto_skcipher *cipher,
+@@ -38,8 +92,9 @@ static int rk_aes_setkey(struct crypto_skcipher *cipher,
+ 	    keylen != AES_KEYSIZE_256)
+ 		return -EINVAL;
+ 	ctx->keylen = keylen;
+-	memcpy_toio(ctx->dev->reg + RK_CRYPTO_AES_KEY_0, key, keylen);
+-	return 0;
++	memcpy(ctx->key, key, keylen);
++
++	return crypto_skcipher_setkey(ctx->fallback_tfm, key, keylen);
+ }
+ 
+ static int rk_des_setkey(struct crypto_skcipher *cipher,
+@@ -53,8 +108,9 @@ static int rk_des_setkey(struct crypto_skcipher *cipher,
+ 		return err;
+ 
+ 	ctx->keylen = keylen;
+-	memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, key, keylen);
+-	return 0;
++	memcpy(ctx->key, key, keylen);
++
++	return crypto_skcipher_setkey(ctx->fallback_tfm, key, keylen);
+ }
+ 
+ static int rk_tdes_setkey(struct crypto_skcipher *cipher,
+@@ -68,17 +124,19 @@ static int rk_tdes_setkey(struct crypto_skcipher *cipher,
+ 		return err;
+ 
+ 	ctx->keylen = keylen;
+-	memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, key, keylen);
+-	return 0;
++	memcpy(ctx->key, key, keylen);
++
++	return crypto_skcipher_setkey(ctx->fallback_tfm, key, keylen);
+ }
+ 
+ static int rk_aes_ecb_encrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_crypto_info *dev = ctx->dev;
+ 
+-	ctx->mode = RK_CRYPTO_AES_ECB_MODE;
++	rctx->mode = RK_CRYPTO_AES_ECB_MODE;
+ 	return rk_handle_req(dev, req);
+ }
+ 
+@@ -86,9 +144,10 @@ static int rk_aes_ecb_decrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_crypto_info *dev = ctx->dev;
+ 
+-	ctx->mode = RK_CRYPTO_AES_ECB_MODE | RK_CRYPTO_DEC;
++	rctx->mode = RK_CRYPTO_AES_ECB_MODE | RK_CRYPTO_DEC;
+ 	return rk_handle_req(dev, req);
+ }
+ 
+@@ -96,9 +155,10 @@ static int rk_aes_cbc_encrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_crypto_info *dev = ctx->dev;
+ 
+-	ctx->mode = RK_CRYPTO_AES_CBC_MODE;
++	rctx->mode = RK_CRYPTO_AES_CBC_MODE;
+ 	return rk_handle_req(dev, req);
+ }
+ 
+@@ -106,9 +166,10 @@ static int rk_aes_cbc_decrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_crypto_info *dev = ctx->dev;
+ 
+-	ctx->mode = RK_CRYPTO_AES_CBC_MODE | RK_CRYPTO_DEC;
++	rctx->mode = RK_CRYPTO_AES_CBC_MODE | RK_CRYPTO_DEC;
+ 	return rk_handle_req(dev, req);
+ }
+ 
+@@ -116,9 +177,10 @@ static int rk_des_ecb_encrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_crypto_info *dev = ctx->dev;
+ 
+-	ctx->mode = 0;
++	rctx->mode = 0;
+ 	return rk_handle_req(dev, req);
+ }
+ 
+@@ -126,9 +188,10 @@ static int rk_des_ecb_decrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_crypto_info *dev = ctx->dev;
+ 
+-	ctx->mode = RK_CRYPTO_DEC;
++	rctx->mode = RK_CRYPTO_DEC;
+ 	return rk_handle_req(dev, req);
+ }
+ 
+@@ -136,9 +199,10 @@ static int rk_des_cbc_encrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_crypto_info *dev = ctx->dev;
+ 
+-	ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC;
++	rctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC;
+ 	return rk_handle_req(dev, req);
+ }
+ 
+@@ -146,9 +210,10 @@ static int rk_des_cbc_decrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_crypto_info *dev = ctx->dev;
+ 
+-	ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC | RK_CRYPTO_DEC;
++	rctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC | RK_CRYPTO_DEC;
+ 	return rk_handle_req(dev, req);
+ }
+ 
+@@ -156,9 +221,10 @@ static int rk_des3_ede_ecb_encrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_crypto_info *dev = ctx->dev;
+ 
+-	ctx->mode = RK_CRYPTO_TDES_SELECT;
++	rctx->mode = RK_CRYPTO_TDES_SELECT;
+ 	return rk_handle_req(dev, req);
+ }
+ 
+@@ -166,9 +232,10 @@ static int rk_des3_ede_ecb_decrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_crypto_info *dev = ctx->dev;
+ 
+-	ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_DEC;
++	rctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_DEC;
+ 	return rk_handle_req(dev, req);
+ }
+ 
+@@ -176,9 +243,10 @@ static int rk_des3_ede_cbc_encrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_crypto_info *dev = ctx->dev;
+ 
+-	ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC;
++	rctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC;
+ 	return rk_handle_req(dev, req);
+ }
+ 
+@@ -186,43 +254,42 @@ static int rk_des3_ede_cbc_decrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_crypto_info *dev = ctx->dev;
+ 
+-	ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC |
++	rctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC |
+ 		    RK_CRYPTO_DEC;
+ 	return rk_handle_req(dev, req);
+ }
+ 
+-static void rk_ablk_hw_init(struct rk_crypto_info *dev)
++static void rk_ablk_hw_init(struct rk_crypto_info *dev, struct skcipher_request *req)
+ {
+-	struct skcipher_request *req =
+-		skcipher_request_cast(dev->async_req);
+ 	struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req);
+ 	struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher);
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(cipher);
+-	u32 ivsize, block, conf_reg = 0;
++	u32 block, conf_reg = 0;
+ 
+ 	block = crypto_tfm_alg_blocksize(tfm);
+-	ivsize = crypto_skcipher_ivsize(cipher);
+ 
+ 	if (block == DES_BLOCK_SIZE) {
+-		ctx->mode |= RK_CRYPTO_TDES_FIFO_MODE |
++		rctx->mode |= RK_CRYPTO_TDES_FIFO_MODE |
+ 			     RK_CRYPTO_TDES_BYTESWAP_KEY |
+ 			     RK_CRYPTO_TDES_BYTESWAP_IV;
+-		CRYPTO_WRITE(dev, RK_CRYPTO_TDES_CTRL, ctx->mode);
+-		memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, req->iv, ivsize);
++		CRYPTO_WRITE(dev, RK_CRYPTO_TDES_CTRL, rctx->mode);
++		memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, ctx->key, ctx->keylen);
+ 		conf_reg = RK_CRYPTO_DESSEL;
+ 	} else {
+-		ctx->mode |= RK_CRYPTO_AES_FIFO_MODE |
++		rctx->mode |= RK_CRYPTO_AES_FIFO_MODE |
+ 			     RK_CRYPTO_AES_KEY_CHANGE |
+ 			     RK_CRYPTO_AES_BYTESWAP_KEY |
+ 			     RK_CRYPTO_AES_BYTESWAP_IV;
+ 		if (ctx->keylen == AES_KEYSIZE_192)
+-			ctx->mode |= RK_CRYPTO_AES_192BIT_key;
++			rctx->mode |= RK_CRYPTO_AES_192BIT_key;
+ 		else if (ctx->keylen == AES_KEYSIZE_256)
+-			ctx->mode |= RK_CRYPTO_AES_256BIT_key;
+-		CRYPTO_WRITE(dev, RK_CRYPTO_AES_CTRL, ctx->mode);
+-		memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, req->iv, ivsize);
++			rctx->mode |= RK_CRYPTO_AES_256BIT_key;
++		CRYPTO_WRITE(dev, RK_CRYPTO_AES_CTRL, rctx->mode);
++		memcpy_toio(ctx->dev->reg + RK_CRYPTO_AES_KEY_0, ctx->key, ctx->keylen);
+ 	}
+ 	conf_reg |= RK_CRYPTO_BYTESWAP_BTFIFO |
+ 		    RK_CRYPTO_BYTESWAP_BRFIFO;
+@@ -231,146 +298,138 @@ static void rk_ablk_hw_init(struct rk_crypto_info *dev)
+ 		     RK_CRYPTO_BCDMA_ERR_ENA | RK_CRYPTO_BCDMA_DONE_ENA);
+ }
+ 
+-static void crypto_dma_start(struct rk_crypto_info *dev)
++static void crypto_dma_start(struct rk_crypto_info *dev,
++			     struct scatterlist *sgs,
++			     struct scatterlist *sgd, unsigned int todo)
+ {
+-	CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAS, dev->addr_in);
+-	CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAL, dev->count / 4);
+-	CRYPTO_WRITE(dev, RK_CRYPTO_BTDMAS, dev->addr_out);
++	CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAS, sg_dma_address(sgs));
++	CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAL, todo);
++	CRYPTO_WRITE(dev, RK_CRYPTO_BTDMAS, sg_dma_address(sgd));
+ 	CRYPTO_WRITE(dev, RK_CRYPTO_CTRL, RK_CRYPTO_BLOCK_START |
+ 		     _SBF(RK_CRYPTO_BLOCK_START, 16));
+ }
+ 
+-static int rk_set_data_start(struct rk_crypto_info *dev)
++static int rk_cipher_run(struct crypto_engine *engine, void *async_req)
+ {
+-	int err;
+-	struct skcipher_request *req =
+-		skcipher_request_cast(dev->async_req);
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
++	struct skcipher_request *areq = container_of(async_req, struct skcipher_request, base);
++	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	u32 ivsize = crypto_skcipher_ivsize(tfm);
+-	u8 *src_last_blk = page_address(sg_page(dev->sg_src)) +
+-		dev->sg_src->offset + dev->sg_src->length - ivsize;
+-
+-	/* Store the iv that need to be updated in chain mode.
+-	 * And update the IV buffer to contain the next IV for decryption mode.
+-	 */
+-	if (ctx->mode & RK_CRYPTO_DEC) {
+-		memcpy(ctx->iv, src_last_blk, ivsize);
+-		sg_pcopy_to_buffer(dev->first, dev->src_nents, req->iv,
+-				   ivsize, dev->total - ivsize);
+-	}
+-
+-	err = dev->load_data(dev, dev->sg_src, dev->sg_dst);
+-	if (!err)
+-		crypto_dma_start(dev);
+-	return err;
+-}
+-
+-static int rk_ablk_start(struct rk_crypto_info *dev)
+-{
+-	struct skcipher_request *req =
+-		skcipher_request_cast(dev->async_req);
+-	unsigned long flags;
++	struct rk_cipher_rctx *rctx = skcipher_request_ctx(areq);
++	struct scatterlist *sgs, *sgd;
+ 	int err = 0;
++	int ivsize = crypto_skcipher_ivsize(tfm);
++	int offset;
++	u8 iv[AES_BLOCK_SIZE];
++	u8 biv[AES_BLOCK_SIZE];
++	u8 *ivtouse = areq->iv;
++	unsigned int len = areq->cryptlen;
++	unsigned int todo;
++
++	ivsize = crypto_skcipher_ivsize(tfm);
++	if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) {
++		if (rctx->mode & RK_CRYPTO_DEC) {
++			offset = areq->cryptlen - ivsize;
++			scatterwalk_map_and_copy(rctx->backup_iv, areq->src,
++						 offset, ivsize, 0);
++		}
++	}
+ 
+-	dev->left_bytes = req->cryptlen;
+-	dev->total = req->cryptlen;
+-	dev->sg_src = req->src;
+-	dev->first = req->src;
+-	dev->src_nents = sg_nents(req->src);
+-	dev->sg_dst = req->dst;
+-	dev->dst_nents = sg_nents(req->dst);
+-	dev->aligned = 1;
+-
+-	spin_lock_irqsave(&dev->lock, flags);
+-	rk_ablk_hw_init(dev);
+-	err = rk_set_data_start(dev);
+-	spin_unlock_irqrestore(&dev->lock, flags);
+-	return err;
+-}
++	sgs = areq->src;
++	sgd = areq->dst;
+ 
+-static void rk_iv_copyback(struct rk_crypto_info *dev)
+-{
+-	struct skcipher_request *req =
+-		skcipher_request_cast(dev->async_req);
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	u32 ivsize = crypto_skcipher_ivsize(tfm);
+-
+-	/* Update the IV buffer to contain the next IV for encryption mode. */
+-	if (!(ctx->mode & RK_CRYPTO_DEC)) {
+-		if (dev->aligned) {
+-			memcpy(req->iv, sg_virt(dev->sg_dst) +
+-				dev->sg_dst->length - ivsize, ivsize);
++	while (sgs && sgd && len) {
++		if (!sgs->length) {
++			sgs = sg_next(sgs);
++			sgd = sg_next(sgd);
++			continue;
++		}
++		if (rctx->mode & RK_CRYPTO_DEC) {
++			/* we backup last block of source to be used as IV at next step */
++			offset = sgs->length - ivsize;
++			scatterwalk_map_and_copy(biv, sgs, offset, ivsize, 0);
++		}
++		if (sgs == sgd) {
++			err = dma_map_sg(ctx->dev->dev, sgs, 1, DMA_BIDIRECTIONAL);
++			if (err <= 0) {
++				err = -EINVAL;
++				goto theend_iv;
++			}
++		} else {
++			err = dma_map_sg(ctx->dev->dev, sgs, 1, DMA_TO_DEVICE);
++			if (err <= 0) {
++				err = -EINVAL;
++				goto theend_iv;
++			}
++			err = dma_map_sg(ctx->dev->dev, sgd, 1, DMA_FROM_DEVICE);
++			if (err <= 0) {
++				err = -EINVAL;
++				goto theend_sgs;
++			}
++		}
++		err = 0;
++		rk_ablk_hw_init(ctx->dev, areq);
++		if (ivsize) {
++			if (ivsize == DES_BLOCK_SIZE)
++				memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_IV_0, ivtouse, ivsize);
++			else
++				memcpy_toio(ctx->dev->reg + RK_CRYPTO_AES_IV_0, ivtouse, ivsize);
++		}
++		reinit_completion(&ctx->dev->complete);
++		ctx->dev->status = 0;
++
++		todo = min(sg_dma_len(sgs), len);
++		len -= todo;
++		crypto_dma_start(ctx->dev, sgs, sgd, todo / 4);
++		wait_for_completion_interruptible_timeout(&ctx->dev->complete,
++							  msecs_to_jiffies(2000));
++		if (!ctx->dev->status) {
++			dev_err(ctx->dev->dev, "DMA timeout\n");
++			err = -EFAULT;
++			goto theend;
++		}
++		if (sgs == sgd) {
++			dma_unmap_sg(ctx->dev->dev, sgs, 1, DMA_BIDIRECTIONAL);
++		} else {
++			dma_unmap_sg(ctx->dev->dev, sgs, 1, DMA_TO_DEVICE);
++			dma_unmap_sg(ctx->dev->dev, sgd, 1, DMA_FROM_DEVICE);
++		}
++		if (rctx->mode & RK_CRYPTO_DEC) {
++			memcpy(iv, biv, ivsize);
++			ivtouse = iv;
+ 		} else {
+-			memcpy(req->iv, dev->addr_vir +
+-				dev->count - ivsize, ivsize);
++			offset = sgd->length - ivsize;
++			scatterwalk_map_and_copy(iv, sgd, offset, ivsize, 0);
++			ivtouse = iv;
+ 		}
++		sgs = sg_next(sgs);
++		sgd = sg_next(sgd);
+ 	}
+-}
+-
+-static void rk_update_iv(struct rk_crypto_info *dev)
+-{
+-	struct skcipher_request *req =
+-		skcipher_request_cast(dev->async_req);
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	u32 ivsize = crypto_skcipher_ivsize(tfm);
+-	u8 *new_iv = NULL;
+ 
+-	if (ctx->mode & RK_CRYPTO_DEC) {
+-		new_iv = ctx->iv;
+-	} else {
+-		new_iv = page_address(sg_page(dev->sg_dst)) +
+-			 dev->sg_dst->offset + dev->sg_dst->length - ivsize;
++	if (areq->iv && ivsize > 0) {
++		offset = areq->cryptlen - ivsize;
++		if (rctx->mode & RK_CRYPTO_DEC) {
++			memcpy(areq->iv, rctx->backup_iv, ivsize);
++			memzero_explicit(rctx->backup_iv, ivsize);
++		} else {
++			scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
++						 ivsize, 0);
++		}
+ 	}
+ 
+-	if (ivsize == DES_BLOCK_SIZE)
+-		memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, new_iv, ivsize);
+-	else if (ivsize == AES_BLOCK_SIZE)
+-		memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, new_iv, ivsize);
+-}
++theend:
++	local_bh_disable();
++	crypto_finalize_skcipher_request(engine, areq, err);
++	local_bh_enable();
++	return 0;
+ 
+-/* return:
+- *	true	some err was occurred
+- *	fault	no err, continue
+- */
+-static int rk_ablk_rx(struct rk_crypto_info *dev)
+-{
+-	int err = 0;
+-	struct skcipher_request *req =
+-		skcipher_request_cast(dev->async_req);
+-
+-	dev->unload_data(dev);
+-	if (!dev->aligned) {
+-		if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents,
+-					  dev->addr_vir, dev->count,
+-					  dev->total - dev->left_bytes -
+-					  dev->count)) {
+-			err = -EINVAL;
+-			goto out_rx;
+-		}
+-	}
+-	if (dev->left_bytes) {
+-		rk_update_iv(dev);
+-		if (dev->aligned) {
+-			if (sg_is_last(dev->sg_src)) {
+-				dev_err(dev->dev, "[%s:%d] Lack of data\n",
+-					__func__, __LINE__);
+-				err = -ENOMEM;
+-				goto out_rx;
+-			}
+-			dev->sg_src = sg_next(dev->sg_src);
+-			dev->sg_dst = sg_next(dev->sg_dst);
+-		}
+-		err = rk_set_data_start(dev);
++theend_sgs:
++	if (sgs == sgd) {
++		dma_unmap_sg(ctx->dev->dev, sgs, 1, DMA_BIDIRECTIONAL);
+ 	} else {
+-		rk_iv_copyback(dev);
+-		/* here show the calculation is over without any err */
+-		dev->complete(dev->async_req, 0);
+-		tasklet_schedule(&dev->queue_task);
++		dma_unmap_sg(ctx->dev->dev, sgs, 1, DMA_TO_DEVICE);
++		dma_unmap_sg(ctx->dev->dev, sgd, 1, DMA_FROM_DEVICE);
+ 	}
+-out_rx:
++theend_iv:
+ 	return err;
+ }
+ 
+@@ -378,26 +437,34 @@ static int rk_ablk_init_tfm(struct crypto_skcipher *tfm)
+ {
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+ 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
++	const char *name = crypto_tfm_alg_name(&tfm->base);
+ 	struct rk_crypto_tmp *algt;
+ 
+ 	algt = container_of(alg, struct rk_crypto_tmp, alg.skcipher);
+ 
+ 	ctx->dev = algt->dev;
+-	ctx->dev->align_size = crypto_tfm_alg_alignmask(crypto_skcipher_tfm(tfm)) + 1;
+-	ctx->dev->start = rk_ablk_start;
+-	ctx->dev->update = rk_ablk_rx;
+-	ctx->dev->complete = rk_crypto_complete;
+-	ctx->dev->addr_vir = (char *)__get_free_page(GFP_KERNEL);
+ 
+-	return ctx->dev->addr_vir ? ctx->dev->enable_clk(ctx->dev) : -ENOMEM;
++	ctx->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
++	if (IS_ERR(ctx->fallback_tfm)) {
++		dev_err(ctx->dev->dev, "ERROR: Cannot allocate fallback for %s %ld\n",
++			name, PTR_ERR(ctx->fallback_tfm));
++		return PTR_ERR(ctx->fallback_tfm);
++	}
++
++	tfm->reqsize = sizeof(struct rk_cipher_rctx) +
++		crypto_skcipher_reqsize(ctx->fallback_tfm);
++
++	ctx->enginectx.op.do_one_request = rk_cipher_run;
++
++	return 0;
+ }
+ 
+ static void rk_ablk_exit_tfm(struct crypto_skcipher *tfm)
+ {
+ 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+ 
+-	free_page((unsigned long)ctx->dev->addr_vir);
+-	ctx->dev->disable_clk(ctx->dev);
++	memzero_explicit(ctx->key, ctx->keylen);
++	crypto_free_skcipher(ctx->fallback_tfm);
+ }
+ 
+ struct rk_crypto_tmp rk_ecb_aes_alg = {
+@@ -406,7 +473,7 @@ struct rk_crypto_tmp rk_ecb_aes_alg = {
+ 		.base.cra_name		= "ecb(aes)",
+ 		.base.cra_driver_name	= "ecb-aes-rk",
+ 		.base.cra_priority	= 300,
+-		.base.cra_flags		= CRYPTO_ALG_ASYNC,
++		.base.cra_flags		= CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
+ 		.base.cra_blocksize	= AES_BLOCK_SIZE,
+ 		.base.cra_ctxsize	= sizeof(struct rk_cipher_ctx),
+ 		.base.cra_alignmask	= 0x0f,
+@@ -428,7 +495,7 @@ struct rk_crypto_tmp rk_cbc_aes_alg = {
+ 		.base.cra_name		= "cbc(aes)",
+ 		.base.cra_driver_name	= "cbc-aes-rk",
+ 		.base.cra_priority	= 300,
+-		.base.cra_flags		= CRYPTO_ALG_ASYNC,
++		.base.cra_flags		= CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
+ 		.base.cra_blocksize	= AES_BLOCK_SIZE,
+ 		.base.cra_ctxsize	= sizeof(struct rk_cipher_ctx),
+ 		.base.cra_alignmask	= 0x0f,
+@@ -451,7 +518,7 @@ struct rk_crypto_tmp rk_ecb_des_alg = {
+ 		.base.cra_name		= "ecb(des)",
+ 		.base.cra_driver_name	= "ecb-des-rk",
+ 		.base.cra_priority	= 300,
+-		.base.cra_flags		= CRYPTO_ALG_ASYNC,
++		.base.cra_flags		= CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
+ 		.base.cra_blocksize	= DES_BLOCK_SIZE,
+ 		.base.cra_ctxsize	= sizeof(struct rk_cipher_ctx),
+ 		.base.cra_alignmask	= 0x07,
+@@ -473,7 +540,7 @@ struct rk_crypto_tmp rk_cbc_des_alg = {
+ 		.base.cra_name		= "cbc(des)",
+ 		.base.cra_driver_name	= "cbc-des-rk",
+ 		.base.cra_priority	= 300,
+-		.base.cra_flags		= CRYPTO_ALG_ASYNC,
++		.base.cra_flags		= CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
+ 		.base.cra_blocksize	= DES_BLOCK_SIZE,
+ 		.base.cra_ctxsize	= sizeof(struct rk_cipher_ctx),
+ 		.base.cra_alignmask	= 0x07,
+@@ -496,7 +563,7 @@ struct rk_crypto_tmp rk_ecb_des3_ede_alg = {
+ 		.base.cra_name		= "ecb(des3_ede)",
+ 		.base.cra_driver_name	= "ecb-des3-ede-rk",
+ 		.base.cra_priority	= 300,
+-		.base.cra_flags		= CRYPTO_ALG_ASYNC,
++		.base.cra_flags		= CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
+ 		.base.cra_blocksize	= DES_BLOCK_SIZE,
+ 		.base.cra_ctxsize	= sizeof(struct rk_cipher_ctx),
+ 		.base.cra_alignmask	= 0x07,
+@@ -518,7 +585,7 @@ struct rk_crypto_tmp rk_cbc_des3_ede_alg = {
+ 		.base.cra_name		= "cbc(des3_ede)",
+ 		.base.cra_driver_name	= "cbc-des3-ede-rk",
+ 		.base.cra_priority	= 300,
+-		.base.cra_flags		= CRYPTO_ALG_ASYNC,
++		.base.cra_flags		= CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
+ 		.base.cra_blocksize	= DES_BLOCK_SIZE,
+ 		.base.cra_ctxsize	= sizeof(struct rk_cipher_ctx),
+ 		.base.cra_alignmask	= 0x07,
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 829128c0cc68c..c6f460550f5e9 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -740,8 +740,7 @@ static void devfreq_dev_release(struct device *dev)
+  * @dev:	the device to add devfreq feature.
+  * @profile:	device-specific profile to run devfreq.
+  * @governor_name:	name of the policy to choose frequency.
+- * @data:	private data for the governor. The devfreq framework does not
+- *		touch this value.
++ * @data:	devfreq driver pass to governors, governor should not change it.
+  */
+ struct devfreq *devfreq_add_device(struct device *dev,
+ 				   struct devfreq_dev_profile *profile,
+@@ -953,8 +952,7 @@ static void devm_devfreq_dev_release(struct device *dev, void *res)
+  * @dev:	the device to add devfreq feature.
+  * @profile:	device-specific profile to run devfreq.
+  * @governor_name:	name of the policy to choose frequency.
+- * @data:	private data for the governor. The devfreq framework does not
+- *		touch this value.
++ * @data:	 devfreq driver pass to governors, governor should not change it.
+  *
+  * This function manages automatically the memory of devfreq device using device
+  * resource management and simplify the free operation for memory of devfreq
+diff --git a/drivers/devfreq/governor_userspace.c b/drivers/devfreq/governor_userspace.c
+index 0fd6c48510711..8a9cf8220808e 100644
+--- a/drivers/devfreq/governor_userspace.c
++++ b/drivers/devfreq/governor_userspace.c
+@@ -21,7 +21,7 @@ struct userspace_data {
+ 
+ static int devfreq_userspace_func(struct devfreq *df, unsigned long *freq)
+ {
+-	struct userspace_data *data = df->data;
++	struct userspace_data *data = df->governor_data;
+ 
+ 	if (data->valid)
+ 		*freq = data->user_frequency;
+@@ -40,7 +40,7 @@ static ssize_t store_freq(struct device *dev, struct device_attribute *attr,
+ 	int err = 0;
+ 
+ 	mutex_lock(&devfreq->lock);
+-	data = devfreq->data;
++	data = devfreq->governor_data;
+ 
+ 	sscanf(buf, "%lu", &wanted);
+ 	data->user_frequency = wanted;
+@@ -60,7 +60,7 @@ static ssize_t show_freq(struct device *dev, struct device_attribute *attr,
+ 	int err = 0;
+ 
+ 	mutex_lock(&devfreq->lock);
+-	data = devfreq->data;
++	data = devfreq->governor_data;
+ 
+ 	if (data->valid)
+ 		err = sprintf(buf, "%lu\n", data->user_frequency);
+@@ -91,7 +91,7 @@ static int userspace_init(struct devfreq *devfreq)
+ 		goto out;
+ 	}
+ 	data->valid = false;
+-	devfreq->data = data;
++	devfreq->governor_data = data;
+ 
+ 	err = sysfs_create_group(&devfreq->dev.kobj, &dev_attr_group);
+ out:
+@@ -107,8 +107,8 @@ static void userspace_exit(struct devfreq *devfreq)
+ 	if (devfreq->dev.kobj.sd)
+ 		sysfs_remove_group(&devfreq->dev.kobj, &dev_attr_group);
+ 
+-	kfree(devfreq->data);
+-	devfreq->data = NULL;
++	kfree(devfreq->governor_data);
++	devfreq->governor_data = NULL;
+ }
+ 
+ static int devfreq_userspace_handler(struct devfreq *devfreq,
+diff --git a/drivers/dio/dio.c b/drivers/dio/dio.c
+index 193b40e7aec03..1414a1c81834a 100644
+--- a/drivers/dio/dio.c
++++ b/drivers/dio/dio.c
+@@ -110,6 +110,12 @@ static char dio_no_name[] = { 0 };
+ 
+ #endif /* CONFIG_DIO_CONSTANTS */
+ 
++static void dio_dev_release(struct device *dev)
++{
++	struct dio_dev *ddev = container_of(dev, typeof(struct dio_dev), dev);
++	kfree(ddev);
++}
++
+ int __init dio_find(int deviceid)
+ {
+ 	/* Called to find a DIO device before the full bus scan has run.
+@@ -224,6 +230,7 @@ static int __init dio_init(void)
+ 		dev->bus = &dio_bus;
+ 		dev->dev.parent = &dio_bus.dev;
+ 		dev->dev.bus = &dio_bus_type;
++		dev->dev.release = dio_dev_release;
+ 		dev->scode = scode;
+ 		dev->resource.start = pa;
+ 		dev->resource.end = pa + DIO_SIZE(scode, va);
+@@ -251,6 +258,7 @@ static int __init dio_init(void)
+ 		if (error) {
+ 			pr_err("DIO: Error registering device %s\n",
+ 			       dev->name);
++			put_device(&dev->dev);
+ 			continue;
+ 		}
+ 		error = dio_create_sysfs_dev_files(dev);
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index 3a7362f968c9f..43dbea6b6e30a 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -53,11 +53,10 @@ static struct pci_dev *pci_get_dev_wrapper(int dom, unsigned int bus,
+ 	if (unlikely(pci_enable_device(pdev) < 0)) {
+ 		edac_dbg(2, "Failed to enable device %02x:%02x.%x\n",
+ 			 bus, dev, fun);
++		pci_dev_put(pdev);
+ 		return NULL;
+ 	}
+ 
+-	pci_dev_get(pdev);
+-
+ 	return pdev;
+ }
+ 
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 70be9c87fb673..ba03f5a4b30ce 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -590,7 +590,7 @@ int __init efi_config_parse_tables(const efi_config_table_t *config_tables,
+ 
+ 		seed = early_memremap(efi_rng_seed, sizeof(*seed));
+ 		if (seed != NULL) {
+-			size = min(seed->size, EFI_RANDOM_SEED_SIZE);
++			size = min_t(u32, seed->size, SZ_1K); // sanity check
+ 			early_memunmap(seed, sizeof(*seed));
+ 		} else {
+ 			pr_err("Could not map UEFI random seed!\n");
+@@ -599,8 +599,8 @@ int __init efi_config_parse_tables(const efi_config_table_t *config_tables,
+ 			seed = early_memremap(efi_rng_seed,
+ 					      sizeof(*seed) + size);
+ 			if (seed != NULL) {
+-				pr_notice("seeding entropy pool\n");
+ 				add_bootloader_randomness(seed->bits, size);
++				memzero_explicit(seed->bits, size);
+ 				early_memunmap(seed, sizeof(*seed) + size);
+ 			} else {
+ 				pr_err("Could not map UEFI random seed!\n");
+diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
+index 2d7abcd99de9b..c5e0c6e99d20c 100644
+--- a/drivers/firmware/efi/libstub/efistub.h
++++ b/drivers/firmware/efi/libstub/efistub.h
+@@ -767,6 +767,8 @@ efi_status_t efi_get_random_bytes(unsigned long size, u8 *out);
+ efi_status_t efi_random_alloc(unsigned long size, unsigned long align,
+ 			      unsigned long *addr, unsigned long random_seed);
+ 
++efi_status_t efi_random_get_seed(void);
++
+ efi_status_t check_platform_features(void);
+ 
+ void *get_efi_config_table(efi_guid_t guid);
+diff --git a/drivers/firmware/efi/libstub/random.c b/drivers/firmware/efi/libstub/random.c
+index 33ab567695951..f85d2c0668777 100644
+--- a/drivers/firmware/efi/libstub/random.c
++++ b/drivers/firmware/efi/libstub/random.c
+@@ -67,27 +67,43 @@ efi_status_t efi_random_get_seed(void)
+ 	efi_guid_t rng_proto = EFI_RNG_PROTOCOL_GUID;
+ 	efi_guid_t rng_algo_raw = EFI_RNG_ALGORITHM_RAW;
+ 	efi_guid_t rng_table_guid = LINUX_EFI_RANDOM_SEED_TABLE_GUID;
++	struct linux_efi_random_seed *prev_seed, *seed = NULL;
++	int prev_seed_size = 0, seed_size = EFI_RANDOM_SEED_SIZE;
+ 	efi_rng_protocol_t *rng = NULL;
+-	struct linux_efi_random_seed *seed = NULL;
+ 	efi_status_t status;
+ 
+ 	status = efi_bs_call(locate_protocol, &rng_proto, NULL, (void **)&rng);
+ 	if (status != EFI_SUCCESS)
+ 		return status;
+ 
++	/*
++	 * Check whether a seed was provided by a prior boot stage. In that
++	 * case, instead of overwriting it, let's create a new buffer that can
++	 * hold both, and concatenate the existing and the new seeds.
++	 * Note that we should read the seed size with caution, in case the
++	 * table got corrupted in memory somehow.
++	 */
++	prev_seed = get_efi_config_table(LINUX_EFI_RANDOM_SEED_TABLE_GUID);
++	if (prev_seed && prev_seed->size <= 512U) {
++		prev_seed_size = prev_seed->size;
++		seed_size += prev_seed_size;
++	}
++
+ 	/*
+ 	 * Use EFI_ACPI_RECLAIM_MEMORY here so that it is guaranteed that the
+ 	 * allocation will survive a kexec reboot (although we refresh the seed
+ 	 * beforehand)
+ 	 */
+ 	status = efi_bs_call(allocate_pool, EFI_ACPI_RECLAIM_MEMORY,
+-			     sizeof(*seed) + EFI_RANDOM_SEED_SIZE,
++			     struct_size(seed, bits, seed_size),
+ 			     (void **)&seed);
+-	if (status != EFI_SUCCESS)
+-		return status;
++	if (status != EFI_SUCCESS) {
++		efi_warn("Failed to allocate memory for RNG seed.\n");
++		goto err_warn;
++	}
+ 
+ 	status = efi_call_proto(rng, get_rng, &rng_algo_raw,
+-				 EFI_RANDOM_SEED_SIZE, seed->bits);
++				EFI_RANDOM_SEED_SIZE, seed->bits);
+ 
+ 	if (status == EFI_UNSUPPORTED)
+ 		/*
+@@ -100,14 +116,28 @@ efi_status_t efi_random_get_seed(void)
+ 	if (status != EFI_SUCCESS)
+ 		goto err_freepool;
+ 
+-	seed->size = EFI_RANDOM_SEED_SIZE;
++	seed->size = seed_size;
++	if (prev_seed_size)
++		memcpy(seed->bits + EFI_RANDOM_SEED_SIZE, prev_seed->bits,
++		       prev_seed_size);
++
+ 	status = efi_bs_call(install_configuration_table, &rng_table_guid, seed);
+ 	if (status != EFI_SUCCESS)
+ 		goto err_freepool;
+ 
++	if (prev_seed_size) {
++		/* wipe and free the old seed if we managed to install the new one */
++		memzero_explicit(prev_seed->bits, prev_seed_size);
++		efi_bs_call(free_pool, prev_seed);
++	}
+ 	return EFI_SUCCESS;
+ 
+ err_freepool:
++	memzero_explicit(seed, struct_size(seed, bits, seed_size));
+ 	efi_bs_call(free_pool, seed);
++	efi_warn("Failed to obtain seed from EFI_RNG_PROTOCOL\n");
++err_warn:
++	if (prev_seed)
++		efi_warn("Retaining bootloader-supplied seed only");
+ 	return status;
+ }
+diff --git a/drivers/firmware/raspberrypi.c b/drivers/firmware/raspberrypi.c
+index 1d965c1252cac..9eef49da47e04 100644
+--- a/drivers/firmware/raspberrypi.c
++++ b/drivers/firmware/raspberrypi.c
+@@ -265,6 +265,7 @@ static int rpi_firmware_probe(struct platform_device *pdev)
+ 		int ret = PTR_ERR(fw->chan);
+ 		if (ret != -EPROBE_DEFER)
+ 			dev_err(dev, "Failed to get mbox channel: %d\n", ret);
++		kfree(fw);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/gpio/gpio-sifive.c b/drivers/gpio/gpio-sifive.c
+index 4f28fa73450c1..a42ffb9f30574 100644
+--- a/drivers/gpio/gpio-sifive.c
++++ b/drivers/gpio/gpio-sifive.c
+@@ -195,6 +195,7 @@ static int sifive_gpio_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 	parent = irq_find_host(irq_parent);
++	of_node_put(irq_parent);
+ 	if (!parent) {
+ 		dev_err(dev, "no IRQ parent domain\n");
+ 		return -ENODEV;
+diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
+index 381cfa26a4a1a..40d0196d8bdcc 100644
+--- a/drivers/gpio/gpiolib-cdev.c
++++ b/drivers/gpio/gpiolib-cdev.c
+@@ -197,16 +197,18 @@ static long linehandle_ioctl(struct file *file, unsigned int cmd,
+ 	void __user *ip = (void __user *)arg;
+ 	struct gpiohandle_data ghd;
+ 	DECLARE_BITMAP(vals, GPIOHANDLES_MAX);
+-	int i;
++	unsigned int i;
++	int ret;
+ 
+-	if (cmd == GPIOHANDLE_GET_LINE_VALUES_IOCTL) {
+-		/* NOTE: It's ok to read values of output lines. */
+-		int ret = gpiod_get_array_value_complex(false,
+-							true,
+-							lh->num_descs,
+-							lh->descs,
+-							NULL,
+-							vals);
++	if (!lh->gdev->chip)
++		return -ENODEV;
++
++	switch (cmd) {
++	case GPIOHANDLE_GET_LINE_VALUES_IOCTL:
++		/* NOTE: It's okay to read values of output lines */
++		ret = gpiod_get_array_value_complex(false, true,
++						    lh->num_descs, lh->descs,
++						    NULL, vals);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -218,7 +220,7 @@ static long linehandle_ioctl(struct file *file, unsigned int cmd,
+ 			return -EFAULT;
+ 
+ 		return 0;
+-	} else if (cmd == GPIOHANDLE_SET_LINE_VALUES_IOCTL) {
++	case GPIOHANDLE_SET_LINE_VALUES_IOCTL:
+ 		/*
+ 		 * All line descriptors were created at once with the same
+ 		 * flags so just check if the first one is really output.
+@@ -240,10 +242,11 @@ static long linehandle_ioctl(struct file *file, unsigned int cmd,
+ 						     lh->descs,
+ 						     NULL,
+ 						     vals);
+-	} else if (cmd == GPIOHANDLE_SET_CONFIG_IOCTL) {
++	case GPIOHANDLE_SET_CONFIG_IOCTL:
+ 		return linehandle_set_config(lh, ip);
++	default:
++		return -EINVAL;
+ 	}
+-	return -EINVAL;
+ }
+ 
+ #ifdef CONFIG_COMPAT
+@@ -1165,14 +1168,19 @@ static long linereq_ioctl(struct file *file, unsigned int cmd,
+ 	struct linereq *lr = file->private_data;
+ 	void __user *ip = (void __user *)arg;
+ 
+-	if (cmd == GPIO_V2_LINE_GET_VALUES_IOCTL)
++	if (!lr->gdev->chip)
++		return -ENODEV;
++
++	switch (cmd) {
++	case GPIO_V2_LINE_GET_VALUES_IOCTL:
+ 		return linereq_get_values(lr, ip);
+-	else if (cmd == GPIO_V2_LINE_SET_VALUES_IOCTL)
++	case GPIO_V2_LINE_SET_VALUES_IOCTL:
+ 		return linereq_set_values(lr, ip);
+-	else if (cmd == GPIO_V2_LINE_SET_CONFIG_IOCTL)
++	case GPIO_V2_LINE_SET_CONFIG_IOCTL:
+ 		return linereq_set_config(lr, ip);
+-
+-	return -EINVAL;
++	default:
++		return -EINVAL;
++	}
+ }
+ 
+ #ifdef CONFIG_COMPAT
+@@ -1189,6 +1197,9 @@ static __poll_t linereq_poll(struct file *file,
+ 	struct linereq *lr = file->private_data;
+ 	__poll_t events = 0;
+ 
++	if (!lr->gdev->chip)
++		return EPOLLHUP | EPOLLERR;
++
+ 	poll_wait(file, &lr->wait, wait);
+ 
+ 	if (!kfifo_is_empty_spinlocked_noirqsave(&lr->events,
+@@ -1208,6 +1219,9 @@ static ssize_t linereq_read(struct file *file,
+ 	ssize_t bytes_read = 0;
+ 	int ret;
+ 
++	if (!lr->gdev->chip)
++		return -ENODEV;
++
+ 	if (count < sizeof(le))
+ 		return -EINVAL;
+ 
+@@ -1473,6 +1487,9 @@ static __poll_t lineevent_poll(struct file *file,
+ 	struct lineevent_state *le = file->private_data;
+ 	__poll_t events = 0;
+ 
++	if (!le->gdev->chip)
++		return EPOLLHUP | EPOLLERR;
++
+ 	poll_wait(file, &le->wait, wait);
+ 
+ 	if (!kfifo_is_empty_spinlocked_noirqsave(&le->events, &le->wait.lock))
+@@ -1508,6 +1525,9 @@ static ssize_t lineevent_read(struct file *file,
+ 	ssize_t ge_size;
+ 	int ret;
+ 
++	if (!le->gdev->chip)
++		return -ENODEV;
++
+ 	/*
+ 	 * When compatible system call is being used the struct gpioevent_data,
+ 	 * in case of at least ia32, has different size due to the alignment
+@@ -1586,6 +1606,9 @@ static long lineevent_ioctl(struct file *file, unsigned int cmd,
+ 	void __user *ip = (void __user *)arg;
+ 	struct gpiohandle_data ghd;
+ 
++	if (!le->gdev->chip)
++		return -ENODEV;
++
+ 	/*
+ 	 * We can get the value for an event line but not set it,
+ 	 * because it is input by definition.
+@@ -2095,28 +2118,30 @@ static long gpio_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 		return -ENODEV;
+ 
+ 	/* Fill in the struct and pass to userspace */
+-	if (cmd == GPIO_GET_CHIPINFO_IOCTL) {
++	switch (cmd) {
++	case GPIO_GET_CHIPINFO_IOCTL:
+ 		return chipinfo_get(cdev, ip);
+ #ifdef CONFIG_GPIO_CDEV_V1
+-	} else if (cmd == GPIO_GET_LINEHANDLE_IOCTL) {
++	case GPIO_GET_LINEHANDLE_IOCTL:
+ 		return linehandle_create(gdev, ip);
+-	} else if (cmd == GPIO_GET_LINEEVENT_IOCTL) {
++	case GPIO_GET_LINEEVENT_IOCTL:
+ 		return lineevent_create(gdev, ip);
+-	} else if (cmd == GPIO_GET_LINEINFO_IOCTL ||
+-		   cmd == GPIO_GET_LINEINFO_WATCH_IOCTL) {
+-		return lineinfo_get_v1(cdev, ip,
+-				       cmd == GPIO_GET_LINEINFO_WATCH_IOCTL);
++	case GPIO_GET_LINEINFO_IOCTL:
++		return lineinfo_get_v1(cdev, ip, false);
++	case GPIO_GET_LINEINFO_WATCH_IOCTL:
++		return lineinfo_get_v1(cdev, ip, true);
+ #endif /* CONFIG_GPIO_CDEV_V1 */
+-	} else if (cmd == GPIO_V2_GET_LINEINFO_IOCTL ||
+-		   cmd == GPIO_V2_GET_LINEINFO_WATCH_IOCTL) {
+-		return lineinfo_get(cdev, ip,
+-				    cmd == GPIO_V2_GET_LINEINFO_WATCH_IOCTL);
+-	} else if (cmd == GPIO_V2_GET_LINE_IOCTL) {
++	case GPIO_V2_GET_LINEINFO_IOCTL:
++		return lineinfo_get(cdev, ip, false);
++	case GPIO_V2_GET_LINEINFO_WATCH_IOCTL:
++		return lineinfo_get(cdev, ip, true);
++	case GPIO_V2_GET_LINE_IOCTL:
+ 		return linereq_create(gdev, ip);
+-	} else if (cmd == GPIO_GET_LINEINFO_UNWATCH_IOCTL) {
++	case GPIO_GET_LINEINFO_UNWATCH_IOCTL:
+ 		return lineinfo_unwatch(cdev, ip);
++	default:
++		return -EINVAL;
+ 	}
+-	return -EINVAL;
+ }
+ 
+ #ifdef CONFIG_COMPAT
+@@ -2164,6 +2189,9 @@ static __poll_t lineinfo_watch_poll(struct file *file,
+ 	struct gpio_chardev_data *cdev = file->private_data;
+ 	__poll_t events = 0;
+ 
++	if (!cdev->gdev->chip)
++		return EPOLLHUP | EPOLLERR;
++
+ 	poll_wait(file, &cdev->wait, pollt);
+ 
+ 	if (!kfifo_is_empty_spinlocked_noirqsave(&cdev->events,
+@@ -2182,6 +2210,9 @@ static ssize_t lineinfo_watch_read(struct file *file, char __user *buf,
+ 	int ret;
+ 	size_t event_size;
+ 
++	if (!cdev->gdev->chip)
++		return -ENODEV;
++
+ #ifndef CONFIG_GPIO_CDEV_V1
+ 	event_size = sizeof(struct gpio_v2_line_info_changed);
+ 	if (count < event_size)
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 59d8affad343a..3e01a3ac652d1 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -186,9 +186,8 @@ static int gpiochip_find_base(int ngpio)
+ 		/* found a free space? */
+ 		if (gdev->base + gdev->ngpio <= base)
+ 			break;
+-		else
+-			/* nope, check the space right before the chip */
+-			base = gdev->base - ngpio;
++		/* nope, check the space right before the chip */
++		base = gdev->base - ngpio;
+ 	}
+ 
+ 	if (gpio_is_valid(base)) {
+@@ -2481,8 +2480,7 @@ int gpiod_direction_output(struct gpio_desc *desc, int value)
+ 			ret = gpiod_direction_input(desc);
+ 			goto set_output_flag;
+ 		}
+-	}
+-	else if (test_bit(FLAG_OPEN_SOURCE, &desc->flags)) {
++	} else if (test_bit(FLAG_OPEN_SOURCE, &desc->flags)) {
+ 		ret = gpio_set_config(desc, PIN_CONFIG_DRIVE_OPEN_SOURCE);
+ 		if (!ret)
+ 			goto set_output_value;
+@@ -2656,9 +2654,9 @@ static int gpiod_get_raw_value_commit(const struct gpio_desc *desc)
+ static int gpio_chip_get_multiple(struct gpio_chip *gc,
+ 				  unsigned long *mask, unsigned long *bits)
+ {
+-	if (gc->get_multiple) {
++	if (gc->get_multiple)
+ 		return gc->get_multiple(gc, mask, bits);
+-	} else if (gc->get) {
++	if (gc->get) {
+ 		int i, value;
+ 
+ 		for_each_set_bit(i, mask, gc->ngpio) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
+index 6333cada1e096..4b568ee932435 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
+@@ -313,6 +313,7 @@ static bool amdgpu_atrm_get_bios(struct amdgpu_device *adev)
+ 
+ 	if (!found)
+ 		return false;
++	pci_dev_put(pdev);
+ 
+ 	adev->bios = kmalloc(size, GFP_KERNEL);
+ 	if (!adev->bios) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index bde0496d2f153..8bd887fb6e631 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4443,6 +4443,8 @@ static void amdgpu_device_resume_display_audio(struct amdgpu_device *adev)
+ 		pm_runtime_enable(&(p->dev));
+ 		pm_runtime_resume(&(p->dev));
+ 	}
++
++	pci_dev_put(p);
+ }
+ 
+ static int amdgpu_device_suspend_display_audio(struct amdgpu_device *adev)
+@@ -4481,6 +4483,7 @@ static int amdgpu_device_suspend_display_audio(struct amdgpu_device *adev)
+ 
+ 		if (expires < ktime_get_mono_fast_ns()) {
+ 			dev_warn(adev->dev, "failed to suspend display audio\n");
++			pci_dev_put(p);
+ 			/* TODO: abort the succeeding gpu reset? */
+ 			return -ETIMEDOUT;
+ 		}
+@@ -4488,6 +4491,7 @@ static int amdgpu_device_suspend_display_audio(struct amdgpu_device *adev)
+ 
+ 	pm_runtime_disable(&(p->dev));
+ 
++	pci_dev_put(p);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 30659c1776e81..0d8c1c318ce89 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1121,6 +1121,15 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
+ 			 "See modparam exp_hw_support\n");
+ 		return -ENODEV;
+ 	}
++	/* differentiate between P10 and P11 asics with the same DID */
++	if (pdev->device == 0x67FF &&
++	    (pdev->revision == 0xE3 ||
++	     pdev->revision == 0xE7 ||
++	     pdev->revision == 0xF3 ||
++	     pdev->revision == 0xF7)) {
++		flags &= ~AMD_ASIC_MASK;
++		flags |= CHIP_POLARIS10;
++	}
+ 
+ 	/* Due to hardware bugs, S/G Display on raven requires a 1:1 IOMMU mapping,
+ 	 * however, SME requires an indirect IOMMU mapping because the encryption
+@@ -1190,12 +1199,12 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
+ 	ddev->pdev = pdev;
+ 	pci_set_drvdata(pdev, ddev);
+ 
+-	ret = amdgpu_driver_load_kms(adev, ent->driver_data);
++	ret = amdgpu_driver_load_kms(adev, flags);
+ 	if (ret)
+ 		goto err_pci;
+ 
+ retry_init:
+-	ret = drm_dev_register(ddev, ent->driver_data);
++	ret = drm_dev_register(ddev, flags);
+ 	if (ret == -EAGAIN && ++retry <= 3) {
+ 		DRM_INFO("retry init %d\n", retry);
+ 		/* Don't request EX mode too frequently which is attacking */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 6937f81340084..b6ce64b87f48f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -1531,7 +1531,8 @@ u64 amdgpu_bo_gpu_offset_no_check(struct amdgpu_bo *bo)
+ uint32_t amdgpu_bo_get_preferred_pin_domain(struct amdgpu_device *adev,
+ 					    uint32_t domain)
+ {
+-	if (domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) {
++	if ((domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) &&
++	    ((adev->asic_type == CHIP_CARRIZO) || (adev->asic_type == CHIP_STONEY))) {
+ 		domain = AMDGPU_GEM_DOMAIN_VRAM;
+ 		if (adev->gmc.real_vram_size <= AMDGPU_SG_THRESHOLD)
+ 			domain = AMDGPU_GEM_DOMAIN_GTT;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+index aea49bad914fa..fbd92fff8b06c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+@@ -62,6 +62,8 @@ struct amdgpu_vf_error_buffer {
+ 	uint64_t data[AMDGPU_VF_ERROR_ENTRY_SIZE];
+ };
+ 
++enum idh_request;
++
+ /**
+  * struct amdgpu_virt_ops - amdgpu device virt operations
+  */
+@@ -71,7 +73,8 @@ struct amdgpu_virt_ops {
+ 	int (*req_init_data)(struct amdgpu_device *adev);
+ 	int (*reset_gpu)(struct amdgpu_device *adev);
+ 	int (*wait_reset)(struct amdgpu_device *adev);
+-	void (*trans_msg)(struct amdgpu_device *adev, u32 req, u32 data1, u32 data2, u32 data3);
++	void (*trans_msg)(struct amdgpu_device *adev, enum idh_request req,
++			  u32 data1, u32 data2, u32 data3);
+ };
+ 
+ /*
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+index 29d64e7e304f4..930d2b7d34489 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+@@ -352,6 +352,7 @@ static enum bp_result get_gpio_i2c_info(
+ 	uint32_t count = 0;
+ 	unsigned int table_index = 0;
+ 	bool find_valid = false;
++	struct atom_gpio_pin_assignment *pin;
+ 
+ 	if (!info)
+ 		return BP_RESULT_BADINPUT;
+@@ -379,20 +380,17 @@ static enum bp_result get_gpio_i2c_info(
+ 			- sizeof(struct atom_common_table_header))
+ 				/ sizeof(struct atom_gpio_pin_assignment);
+ 
++	pin = (struct atom_gpio_pin_assignment *) header->gpio_pin;
++
+ 	for (table_index = 0; table_index < count; table_index++) {
+-		if (((record->i2c_id & I2C_HW_CAP) == (
+-		header->gpio_pin[table_index].gpio_id &
+-						I2C_HW_CAP)) &&
+-		((record->i2c_id & I2C_HW_ENGINE_ID_MASK)  ==
+-		(header->gpio_pin[table_index].gpio_id &
+-					I2C_HW_ENGINE_ID_MASK)) &&
+-		((record->i2c_id & I2C_HW_LANE_MUX) ==
+-		(header->gpio_pin[table_index].gpio_id &
+-						I2C_HW_LANE_MUX))) {
++		if (((record->i2c_id & I2C_HW_CAP) 				== (pin->gpio_id & I2C_HW_CAP)) &&
++		    ((record->i2c_id & I2C_HW_ENGINE_ID_MASK)	== (pin->gpio_id & I2C_HW_ENGINE_ID_MASK)) &&
++		    ((record->i2c_id & I2C_HW_LANE_MUX) 		== (pin->gpio_id & I2C_HW_LANE_MUX))) {
+ 			/* still valid */
+ 			find_valid = true;
+ 			break;
+ 		}
++		pin = (struct atom_gpio_pin_assignment *)((uint8_t *)pin + sizeof(struct atom_gpio_pin_assignment));
+ 	}
+ 
+ 	/* If we don't find the entry that we are looking for then
+diff --git a/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c b/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c
+index 5a5a9cb77acbf..bcdd8a958fc01 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c
+@@ -1132,6 +1132,7 @@ struct resource_pool *dce60_create_resource_pool(
+ 	if (dce60_construct(num_virtual_links, dc, pool))
+ 		return &pool->base;
+ 
++	kfree(pool);
+ 	BREAK_TO_DEBUGGER();
+ 	return NULL;
+ }
+@@ -1329,6 +1330,7 @@ struct resource_pool *dce61_create_resource_pool(
+ 	if (dce61_construct(num_virtual_links, dc, pool))
+ 		return &pool->base;
+ 
++	kfree(pool);
+ 	BREAK_TO_DEBUGGER();
+ 	return NULL;
+ }
+@@ -1522,6 +1524,7 @@ struct resource_pool *dce64_create_resource_pool(
+ 	if (dce64_construct(num_virtual_links, dc, pool))
+ 		return &pool->base;
+ 
++	kfree(pool);
+ 	BREAK_TO_DEBUGGER();
+ 	return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
+index a19be9de2df7d..2eefa07762ae1 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
+@@ -1141,6 +1141,7 @@ struct resource_pool *dce80_create_resource_pool(
+ 	if (dce80_construct(num_virtual_links, dc, pool))
+ 		return &pool->base;
+ 
++	kfree(pool);
+ 	BREAK_TO_DEBUGGER();
+ 	return NULL;
+ }
+@@ -1338,6 +1339,7 @@ struct resource_pool *dce81_create_resource_pool(
+ 	if (dce81_construct(num_virtual_links, dc, pool))
+ 		return &pool->base;
+ 
++	kfree(pool);
+ 	BREAK_TO_DEBUGGER();
+ 	return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
+index 94132c70d7afd..5e8876ad1a1b0 100644
+--- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
++++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
+@@ -282,7 +282,8 @@ struct amd_pm_funcs {
+ 	int (*get_power_profile_mode)(void *handle, char *buf);
+ 	int (*set_power_profile_mode)(void *handle, long *input, uint32_t size);
+ 	int (*set_fine_grain_clk_vol)(void *handle, uint32_t type, long *input, uint32_t size);
+-	int (*odn_edit_dpm_table)(void *handle, uint32_t type, long *input, uint32_t size);
++	int (*odn_edit_dpm_table)(void *handle, enum PP_OD_DPM_TABLE_COMMAND type,
++				  long *input, uint32_t size);
+ 	int (*set_mp1_state)(void *handle, enum pp_mp1_state mp1_state);
+ 	int (*smu_i2c_bus_access)(void *handle, bool acquire);
+ /* export to DC */
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+index eab9768029c11..a98ea29b2fd52 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+@@ -924,7 +924,8 @@ static int pp_set_fine_grain_clk_vol(void *handle, uint32_t type, long *input, u
+ 	return hwmgr->hwmgr_func->set_fine_grain_clk_vol(hwmgr, type, input, size);
+ }
+ 
+-static int pp_odn_edit_dpm_table(void *handle, uint32_t type, long *input, uint32_t size)
++static int pp_odn_edit_dpm_table(void *handle, enum PP_OD_DPM_TABLE_COMMAND type,
++				 long *input, uint32_t size)
+ {
+ 	struct pp_hwmgr *hwmgr = handle;
+ 
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
+index 60cde0c528257..57a354a03e8ae 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
+@@ -2962,7 +2962,8 @@ static int vega20_odn_edit_dpm_table(struct pp_hwmgr *hwmgr,
+ 			data->od8_settings.od8_settings_array;
+ 	OverDriveTable_t *od_table =
+ 			&(data->smc_state_table.overdrive_table);
+-	int32_t input_index, input_clk, input_vol, i;
++	int32_t input_clk, input_vol, i;
++	uint32_t input_index;
+ 	int od8_id;
+ 	int ret;
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+index e646f5931d795..89f20497c14f5 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+@@ -1476,6 +1476,10 @@ bool smu_v11_0_baco_is_support(struct smu_context *smu)
+ 	if (!smu_baco->platform_support)
+ 		return false;
+ 
++	/* return true if ASIC is in BACO state already */
++	if (smu_v11_0_baco_get_state(smu) == SMU_BACO_STATE_ENTER)
++		return true;
++
+ 	/* Arcturus does not support this bit mask */
+ 	if (smu_cmn_feature_is_supported(smu, SMU_FEATURE_BACO_BIT) &&
+ 	   !smu_cmn_feature_is_enabled(smu, SMU_FEATURE_BACO_BIT))
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511.h b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+index 711061bf3eb7e..e95abeb64b934 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511.h
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+@@ -394,7 +394,8 @@ static inline int adv7511_cec_init(struct device *dev, struct adv7511 *adv7511)
+ 
+ void adv7533_dsi_power_on(struct adv7511 *adv);
+ void adv7533_dsi_power_off(struct adv7511 *adv);
+-void adv7533_mode_set(struct adv7511 *adv, const struct drm_display_mode *mode);
++enum drm_mode_status adv7533_mode_valid(struct adv7511 *adv,
++					const struct drm_display_mode *mode);
+ int adv7533_patch_registers(struct adv7511 *adv);
+ int adv7533_patch_cec_registers(struct adv7511 *adv);
+ int adv7533_attach_dsi(struct adv7511 *adv);
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index 430c5e8f0388e..6ba860a16e96c 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -697,7 +697,7 @@ adv7511_detect(struct adv7511 *adv7511, struct drm_connector *connector)
+ }
+ 
+ static enum drm_mode_status adv7511_mode_valid(struct adv7511 *adv7511,
+-			      struct drm_display_mode *mode)
++			      const struct drm_display_mode *mode)
+ {
+ 	if (mode->clock > 165000)
+ 		return MODE_CLOCK_HIGH;
+@@ -791,9 +791,6 @@ static void adv7511_mode_set(struct adv7511 *adv7511,
+ 	regmap_update_bits(adv7511->regmap, 0x17,
+ 		0x60, (vsync_polarity << 6) | (hsync_polarity << 5));
+ 
+-	if (adv7511->type == ADV7533 || adv7511->type == ADV7535)
+-		adv7533_mode_set(adv7511, adj_mode);
+-
+ 	drm_mode_copy(&adv7511->curr_mode, adj_mode);
+ 
+ 	/*
+@@ -913,6 +910,18 @@ static void adv7511_bridge_mode_set(struct drm_bridge *bridge,
+ 	adv7511_mode_set(adv, mode, adj_mode);
+ }
+ 
++static enum drm_mode_status adv7511_bridge_mode_valid(struct drm_bridge *bridge,
++						      const struct drm_display_info *info,
++		const struct drm_display_mode *mode)
++{
++	struct adv7511 *adv = bridge_to_adv7511(bridge);
++
++	if (adv->type == ADV7533 || adv->type == ADV7535)
++		return adv7533_mode_valid(adv, mode);
++	else
++		return adv7511_mode_valid(adv, mode);
++}
++
+ static int adv7511_bridge_attach(struct drm_bridge *bridge,
+ 				 enum drm_bridge_attach_flags flags)
+ {
+@@ -963,6 +972,7 @@ static const struct drm_bridge_funcs adv7511_bridge_funcs = {
+ 	.enable = adv7511_bridge_enable,
+ 	.disable = adv7511_bridge_disable,
+ 	.mode_set = adv7511_bridge_mode_set,
++	.mode_valid = adv7511_bridge_mode_valid,
+ 	.attach = adv7511_bridge_attach,
+ 	.detect = adv7511_bridge_detect,
+ 	.get_edid = adv7511_bridge_get_edid,
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7533.c b/drivers/gpu/drm/bridge/adv7511/adv7533.c
+index aa19d5a40e319..f304a5ff8e596 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7533.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7533.c
+@@ -100,26 +100,27 @@ void adv7533_dsi_power_off(struct adv7511 *adv)
+ 	regmap_write(adv->regmap_cec, 0x27, 0x0b);
+ }
+ 
+-void adv7533_mode_set(struct adv7511 *adv, const struct drm_display_mode *mode)
++enum drm_mode_status adv7533_mode_valid(struct adv7511 *adv,
++					const struct drm_display_mode *mode)
+ {
++	int lanes;
+ 	struct mipi_dsi_device *dsi = adv->dsi;
+-	int lanes, ret;
+-
+-	if (adv->num_dsi_lanes != 4)
+-		return;
+ 
+ 	if (mode->clock > 80000)
+ 		lanes = 4;
+ 	else
+ 		lanes = 3;
+ 
+-	if (lanes != dsi->lanes) {
+-		mipi_dsi_detach(dsi);
+-		dsi->lanes = lanes;
+-		ret = mipi_dsi_attach(dsi);
+-		if (ret)
+-			dev_err(&dsi->dev, "failed to change host lanes\n");
+-	}
++	/*
++	 * TODO: add support for dynamic switching of lanes
++	 * by using the bridge pre_enable() op . Till then filter
++	 * out the modes which shall need different number of lanes
++	 * than what was configured in the device tree.
++	 */
++	if (lanes != dsi->lanes)
++		return MODE_BAD;
++
++	return MODE_OK;
+ }
+ 
+ int adv7533_patch_registers(struct adv7511 *adv)
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index 5163433ac561b..9c3bbe2c3e6f9 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -484,6 +484,9 @@ void drm_connector_cleanup(struct drm_connector *connector)
+ 	mutex_destroy(&connector->mutex);
+ 
+ 	memset(connector, 0, sizeof(*connector));
++
++	if (dev->registered)
++		drm_sysfs_hotplug_event(dev);
+ }
+ EXPORT_SYMBOL(drm_connector_cleanup);
+ 
+diff --git a/drivers/gpu/drm/drm_fourcc.c b/drivers/gpu/drm/drm_fourcc.c
+index 722c7ebe4e889..92152c06b75b7 100644
+--- a/drivers/gpu/drm/drm_fourcc.c
++++ b/drivers/gpu/drm/drm_fourcc.c
+@@ -280,12 +280,15 @@ const struct drm_format_info *__drm_format_info(u32 format)
+ 		  .vsub = 2, .is_yuv = true },
+ 		{ .format = DRM_FORMAT_Q410,		.depth = 0,
+ 		  .num_planes = 3, .char_per_block = { 2, 2, 2 },
+-		  .block_w = { 1, 1, 1 }, .block_h = { 1, 1, 1 }, .hsub = 0,
+-		  .vsub = 0, .is_yuv = true },
++		  .block_w = { 1, 1, 1 }, .block_h = { 1, 1, 1 }, .hsub = 1,
++		  .vsub = 1, .is_yuv = true },
+ 		{ .format = DRM_FORMAT_Q401,		.depth = 0,
+ 		  .num_planes = 3, .char_per_block = { 2, 2, 2 },
+-		  .block_w = { 1, 1, 1 }, .block_h = { 1, 1, 1 }, .hsub = 0,
+-		  .vsub = 0, .is_yuv = true },
++		  .block_w = { 1, 1, 1 }, .block_h = { 1, 1, 1 }, .hsub = 1,
++		  .vsub = 1, .is_yuv = true },
++		{ .format = DRM_FORMAT_P030,            .depth = 0,  .num_planes = 2,
++		  .char_per_block = { 4, 8, 0 }, .block_w = { 3, 3, 0 }, .block_h = { 1, 1, 0 },
++		  .hsub = 2, .vsub = 2, .is_yuv = true},
+ 	};
+ 
+ 	unsigned int i;
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index 2520b7dad6ce7..f3281d56b1d82 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -408,6 +408,12 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu)
+ 	if (gpu->identity.model == chipModel_GC700)
+ 		gpu->identity.features &= ~chipFeatures_FAST_CLEAR;
+ 
++	/* These models/revisions don't have the 2D pipe bit */
++	if ((gpu->identity.model == chipModel_GC500 &&
++	     gpu->identity.revision <= 2) ||
++	    gpu->identity.model == chipModel_GC300)
++		gpu->identity.features |= chipFeatures_PIPE_2D;
++
+ 	if ((gpu->identity.model == chipModel_GC500 &&
+ 	     gpu->identity.revision < 2) ||
+ 	    (gpu->identity.model == chipModel_GC300 &&
+@@ -441,8 +447,9 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu)
+ 				gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_5);
+ 	}
+ 
+-	/* GC600 idle register reports zero bits where modules aren't present */
+-	if (gpu->identity.model == chipModel_GC600)
++	/* GC600/300 idle register reports zero bits where modules aren't present */
++	if (gpu->identity.model == chipModel_GC600 ||
++	    gpu->identity.model == chipModel_GC300)
+ 		gpu->idle_mask = VIVS_HI_IDLE_STATE_TX |
+ 				 VIVS_HI_IDLE_STATE_RA |
+ 				 VIVS_HI_IDLE_STATE_SE |
+diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c
+index 4d4a715b429d1..2c2b92324a2e9 100644
+--- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c
++++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c
+@@ -60,8 +60,9 @@ static int fsl_dcu_drm_connector_get_modes(struct drm_connector *connector)
+ 	return drm_panel_get_modes(fsl_connector->panel, connector);
+ }
+ 
+-static int fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector,
+-					    struct drm_display_mode *mode)
++static enum drm_mode_status
++fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector,
++				 struct drm_display_mode *mode)
+ {
+ 	if (mode->hdisplay & 0xf)
+ 		return MODE_ERROR;
+diff --git a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+index 8777d35ac7a73..117ea44fd8be7 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
++++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+@@ -133,9 +133,9 @@ static enum port intel_dsi_seq_port_to_port(struct intel_dsi *intel_dsi,
+ 		return ffs(intel_dsi->ports) - 1;
+ 
+ 	if (seq_port) {
+-		if (intel_dsi->ports & PORT_B)
++		if (intel_dsi->ports & BIT(PORT_B))
+ 			return PORT_B;
+-		else if (intel_dsi->ports & PORT_C)
++		else if (intel_dsi->ports & BIT(PORT_C))
+ 			return PORT_C;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/i915/gvt/debugfs.c b/drivers/gpu/drm/i915/gvt/debugfs.c
+index 9f1c209d92511..e08ed0e9f1653 100644
+--- a/drivers/gpu/drm/i915/gvt/debugfs.c
++++ b/drivers/gpu/drm/i915/gvt/debugfs.c
+@@ -175,8 +175,13 @@ void intel_gvt_debugfs_add_vgpu(struct intel_vgpu *vgpu)
+  */
+ void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu)
+ {
+-	debugfs_remove_recursive(vgpu->debugfs);
+-	vgpu->debugfs = NULL;
++	struct intel_gvt *gvt = vgpu->gvt;
++	struct drm_minor *minor = gvt->gt->i915->drm.primary;
++
++	if (minor->debugfs_root && gvt->debugfs_root) {
++		debugfs_remove_recursive(vgpu->debugfs);
++		vgpu->debugfs = NULL;
++	}
+ }
+ 
+ /**
+@@ -199,6 +204,10 @@ void intel_gvt_debugfs_init(struct intel_gvt *gvt)
+  */
+ void intel_gvt_debugfs_clean(struct intel_gvt *gvt)
+ {
+-	debugfs_remove_recursive(gvt->debugfs_root);
+-	gvt->debugfs_root = NULL;
++	struct drm_minor *minor = gvt->gt->i915->drm.primary;
++
++	if (minor->debugfs_root) {
++		debugfs_remove_recursive(gvt->debugfs_root);
++		gvt->debugfs_root = NULL;
++	}
+ }
+diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
+index aed2ef6466a2d..2bb6203298bcd 100644
+--- a/drivers/gpu/drm/i915/gvt/scheduler.c
++++ b/drivers/gpu/drm/i915/gvt/scheduler.c
+@@ -647,6 +647,7 @@ intel_vgpu_shadow_mm_pin(struct intel_vgpu_workload *workload)
+ 
+ 	if (workload->shadow_mm->type != INTEL_GVT_MM_PPGTT ||
+ 	    !workload->shadow_mm->ppgtt_mm.shadowed) {
++		intel_vgpu_unpin_mm(workload->shadow_mm);
+ 		gvt_vgpu_err("workload shadow ppgtt isn't ready\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+index e34718cf5c2e9..5ec9770e401e4 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
++++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+@@ -1120,7 +1120,11 @@ static int ingenic_drm_init(void)
+ 			return err;
+ 	}
+ 
+-	return platform_driver_register(&ingenic_drm_driver);
++	err = platform_driver_register(&ingenic_drm_driver);
++	if (IS_ENABLED(CONFIG_DRM_INGENIC_IPU) && err)
++		platform_driver_unregister(ingenic_ipu_driver_ptr);
++
++	return err;
+ }
+ module_init(ingenic_drm_init);
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
+index c1ae336df6833..aa3d472c79d77 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
+@@ -367,9 +367,6 @@ static void mtk_dpi_power_off(struct mtk_dpi *dpi)
+ 	if (--dpi->refcount != 0)
+ 		return;
+ 
+-	if (dpi->pinctrl && dpi->pins_gpio)
+-		pinctrl_select_state(dpi->pinctrl, dpi->pins_gpio);
+-
+ 	mtk_dpi_disable(dpi);
+ 	clk_disable_unprepare(dpi->pixel_clk);
+ 	clk_disable_unprepare(dpi->engine_clk);
+@@ -394,9 +391,6 @@ static int mtk_dpi_power_on(struct mtk_dpi *dpi)
+ 		goto err_pixel;
+ 	}
+ 
+-	if (dpi->pinctrl && dpi->pins_dpi)
+-		pinctrl_select_state(dpi->pinctrl, dpi->pins_dpi);
+-
+ 	return 0;
+ 
+ err_pixel:
+@@ -525,12 +519,18 @@ static void mtk_dpi_bridge_disable(struct drm_bridge *bridge)
+ 	struct mtk_dpi *dpi = bridge_to_dpi(bridge);
+ 
+ 	mtk_dpi_power_off(dpi);
++
++	if (dpi->pinctrl && dpi->pins_gpio)
++		pinctrl_select_state(dpi->pinctrl, dpi->pins_gpio);
+ }
+ 
+ static void mtk_dpi_bridge_enable(struct drm_bridge *bridge)
+ {
+ 	struct mtk_dpi *dpi = bridge_to_dpi(bridge);
+ 
++	if (dpi->pinctrl && dpi->pins_dpi)
++		pinctrl_select_state(dpi->pinctrl, dpi->pins_dpi);
++
+ 	mtk_dpi_power_on(dpi);
+ 	mtk_dpi_set_display_mode(dpi, &dpi->mode);
+ 	mtk_dpi_enable(dpi);
+diff --git a/drivers/gpu/drm/meson/meson_viu.c b/drivers/gpu/drm/meson/meson_viu.c
+index d4b907889a21d..cd399b0b71814 100644
+--- a/drivers/gpu/drm/meson/meson_viu.c
++++ b/drivers/gpu/drm/meson/meson_viu.c
+@@ -436,15 +436,14 @@ void meson_viu_init(struct meson_drm *priv)
+ 
+ 	/* Initialize OSD1 fifo control register */
+ 	reg = VIU_OSD_DDR_PRIORITY_URGENT |
+-		VIU_OSD_HOLD_FIFO_LINES(31) |
+ 		VIU_OSD_FIFO_DEPTH_VAL(32) | /* fifo_depth_val: 32*8=256 */
+ 		VIU_OSD_WORDS_PER_BURST(4) | /* 4 words in 1 burst */
+ 		VIU_OSD_FIFO_LIMITS(2);      /* fifo_lim: 2*16=32 */
+ 
+ 	if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A))
+-		reg |= VIU_OSD_BURST_LENGTH_32;
++		reg |= (VIU_OSD_BURST_LENGTH_32 | VIU_OSD_HOLD_FIFO_LINES(31));
+ 	else
+-		reg |= VIU_OSD_BURST_LENGTH_64;
++		reg |= (VIU_OSD_BURST_LENGTH_64 | VIU_OSD_HOLD_FIFO_LINES(4));
+ 
+ 	writel_relaxed(reg, priv->io_base + _REG(VIU_OSD1_FIFO_CTRL_STAT));
+ 	writel_relaxed(reg, priv->io_base + _REG(VIU_OSD2_FIFO_CTRL_STAT));
+diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
+index 340682cd0f320..2457ef9851bb3 100644
+--- a/drivers/gpu/drm/msm/Makefile
++++ b/drivers/gpu/drm/msm/Makefile
+@@ -19,7 +19,7 @@ msm-y := \
+ 	hdmi/hdmi.o \
+ 	hdmi/hdmi_audio.o \
+ 	hdmi/hdmi_bridge.o \
+-	hdmi/hdmi_connector.o \
++	hdmi/hdmi_hpd.o \
+ 	hdmi/hdmi_i2c.o \
+ 	hdmi/hdmi_phy.o \
+ 	hdmi/hdmi_phy_8960.o \
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 5a152d505dfb9..1c3dcbc6cce8c 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -848,7 +848,7 @@ static int dp_display_set_mode(struct msm_dp *dp_display,
+ 
+ 	dp = container_of(dp_display, struct dp_display_private, dp_display);
+ 
+-	dp->panel->dp_mode.drm_mode = mode->drm_mode;
++	drm_mode_copy(&dp->panel->dp_mode.drm_mode, &mode->drm_mode);
+ 	dp->panel->dp_mode.bpp = mode->bpp;
+ 	dp->panel->dp_mode.capabilities = mode->capabilities;
+ 	dp_panel_init_panel_info(dp->panel);
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
+index bd65dc9b88923..efb14043a6ec4 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
+@@ -8,6 +8,8 @@
+ #include <linux/of_irq.h>
+ #include <linux/of_gpio.h>
+ 
++#include <drm/drm_bridge_connector.h>
++
+ #include <sound/hdmi-codec.h>
+ #include "hdmi.h"
+ 
+@@ -41,7 +43,7 @@ static irqreturn_t msm_hdmi_irq(int irq, void *dev_id)
+ 	struct hdmi *hdmi = dev_id;
+ 
+ 	/* Process HPD: */
+-	msm_hdmi_connector_irq(hdmi->connector);
++	msm_hdmi_hpd_irq(hdmi->bridge);
+ 
+ 	/* Process DDC: */
+ 	msm_hdmi_i2c_irq(hdmi->i2c);
+@@ -245,6 +247,20 @@ static struct hdmi *msm_hdmi_init(struct platform_device *pdev)
+ 		hdmi->pwr_clks[i] = clk;
+ 	}
+ 
++	hdmi->hpd_gpiod = devm_gpiod_get_optional(&pdev->dev, "hpd", GPIOD_IN);
++	/* This will catch e.g. -EPROBE_DEFER */
++	if (IS_ERR(hdmi->hpd_gpiod)) {
++		ret = PTR_ERR(hdmi->hpd_gpiod);
++		DRM_DEV_ERROR(&pdev->dev, "failed to get hpd gpio: (%d)\n", ret);
++		goto fail;
++	}
++
++	if (!hdmi->hpd_gpiod)
++		DBG("failed to get HPD gpio");
++
++	if (hdmi->hpd_gpiod)
++		gpiod_set_consumer_name(hdmi->hpd_gpiod, "HDMI_HPD");
++
+ 	pm_runtime_enable(&pdev->dev);
+ 
+ 	hdmi->workq = alloc_ordered_workqueue("msm_hdmi", 0);
+@@ -311,7 +327,7 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
+ 		goto fail;
+ 	}
+ 
+-	hdmi->connector = msm_hdmi_connector_init(hdmi);
++	hdmi->connector = drm_bridge_connector_init(hdmi->dev, encoder);
+ 	if (IS_ERR(hdmi->connector)) {
+ 		ret = PTR_ERR(hdmi->connector);
+ 		DRM_DEV_ERROR(dev->dev, "failed to create HDMI connector: %d\n", ret);
+@@ -319,6 +335,8 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
+ 		goto fail;
+ 	}
+ 
++	drm_connector_attach_encoder(hdmi->connector, hdmi->encoder);
++
+ 	hdmi->irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+ 	if (!hdmi->irq) {
+ 		ret = -EINVAL;
+@@ -335,7 +353,9 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
+ 		goto fail;
+ 	}
+ 
+-	ret = msm_hdmi_hpd_enable(hdmi->connector);
++	drm_bridge_connector_enable_hpd(hdmi->connector);
++
++	ret = msm_hdmi_hpd_enable(hdmi->bridge);
+ 	if (ret < 0) {
+ 		DRM_DEV_ERROR(&hdmi->pdev->dev, "failed to enable HPD: %d\n", ret);
+ 		goto fail;
+@@ -423,20 +443,6 @@ static struct hdmi_platform_config hdmi_tx_8996_config = {
+ 		.hpd_freq      = hpd_clk_freq_8x74,
+ };
+ 
+-static const struct {
+-	const char *name;
+-	const bool output;
+-	const int value;
+-	const char *label;
+-} msm_hdmi_gpio_pdata[] = {
+-	{ "qcom,hdmi-tx-ddc-clk", true, 1, "HDMI_DDC_CLK" },
+-	{ "qcom,hdmi-tx-ddc-data", true, 1, "HDMI_DDC_DATA" },
+-	{ "qcom,hdmi-tx-hpd", false, 1, "HDMI_HPD" },
+-	{ "qcom,hdmi-tx-mux-en", true, 1, "HDMI_MUX_EN" },
+-	{ "qcom,hdmi-tx-mux-sel", true, 0, "HDMI_MUX_SEL" },
+-	{ "qcom,hdmi-tx-mux-lpm", true, 1, "HDMI_MUX_LPM" },
+-};
+-
+ /*
+  * HDMI audio codec callbacks
+  */
+@@ -549,7 +555,7 @@ static int msm_hdmi_bind(struct device *dev, struct device *master, void *data)
+ 	struct hdmi_platform_config *hdmi_cfg;
+ 	struct hdmi *hdmi;
+ 	struct device_node *of_node = dev->of_node;
+-	int i, err;
++	int err;
+ 
+ 	hdmi_cfg = (struct hdmi_platform_config *)
+ 			of_device_get_match_data(dev);
+@@ -561,42 +567,6 @@ static int msm_hdmi_bind(struct device *dev, struct device *master, void *data)
+ 	hdmi_cfg->mmio_name     = "core_physical";
+ 	hdmi_cfg->qfprom_mmio_name = "qfprom_physical";
+ 
+-	for (i = 0; i < HDMI_MAX_NUM_GPIO; i++) {
+-		const char *name = msm_hdmi_gpio_pdata[i].name;
+-		struct gpio_desc *gpiod;
+-
+-		/*
+-		 * We are fetching the GPIO lines "as is" since the connector
+-		 * code is enabling and disabling the lines. Until that point
+-		 * the power-on default value will be kept.
+-		 */
+-		gpiod = devm_gpiod_get_optional(dev, name, GPIOD_ASIS);
+-		/* This will catch e.g. -PROBE_DEFER */
+-		if (IS_ERR(gpiod))
+-			return PTR_ERR(gpiod);
+-		if (!gpiod) {
+-			/* Try a second time, stripping down the name */
+-			char name3[32];
+-
+-			/*
+-			 * Try again after stripping out the "qcom,hdmi-tx"
+-			 * prefix. This is mainly to match "hpd-gpios" used
+-			 * in the upstream bindings.
+-			 */
+-			if (sscanf(name, "qcom,hdmi-tx-%s", name3))
+-				gpiod = devm_gpiod_get_optional(dev, name3, GPIOD_ASIS);
+-			if (IS_ERR(gpiod))
+-				return PTR_ERR(gpiod);
+-			if (!gpiod)
+-				DBG("failed to get gpio: %s", name);
+-		}
+-		hdmi_cfg->gpios[i].gpiod = gpiod;
+-		if (gpiod)
+-			gpiod_set_consumer_name(gpiod, msm_hdmi_gpio_pdata[i].label);
+-		hdmi_cfg->gpios[i].output = msm_hdmi_gpio_pdata[i].output;
+-		hdmi_cfg->gpios[i].value = msm_hdmi_gpio_pdata[i].value;
+-	}
+-
+ 	dev->platform_data = hdmi_cfg;
+ 
+ 	hdmi = msm_hdmi_init(to_platform_device(dev));
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.h b/drivers/gpu/drm/msm/hdmi/hdmi.h
+index d0b84f0abee17..20f554312b17c 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi.h
++++ b/drivers/gpu/drm/msm/hdmi/hdmi.h
+@@ -19,17 +19,9 @@
+ #include "msm_drv.h"
+ #include "hdmi.xml.h"
+ 
+-#define HDMI_MAX_NUM_GPIO	6
+-
+ struct hdmi_phy;
+ struct hdmi_platform_config;
+ 
+-struct hdmi_gpio_data {
+-	struct gpio_desc *gpiod;
+-	bool output;
+-	int value;
+-};
+-
+ struct hdmi_audio {
+ 	bool enabled;
+ 	struct hdmi_audio_infoframe infoframe;
+@@ -61,6 +53,8 @@ struct hdmi {
+ 	struct clk **hpd_clks;
+ 	struct clk **pwr_clks;
+ 
++	struct gpio_desc *hpd_gpiod;
++
+ 	struct hdmi_phy *phy;
+ 	struct device *phy_dev;
+ 
+@@ -109,10 +103,14 @@ struct hdmi_platform_config {
+ 	/* clks that need to be on for screen pwr (ie pixel clk): */
+ 	const char **pwr_clk_names;
+ 	int pwr_clk_cnt;
++};
+ 
+-	/* gpio's: */
+-	struct hdmi_gpio_data gpios[HDMI_MAX_NUM_GPIO];
++struct hdmi_bridge {
++	struct drm_bridge base;
++	struct hdmi *hdmi;
++	struct work_struct hpd_work;
+ };
++#define to_hdmi_bridge(x) container_of(x, struct hdmi_bridge, base)
+ 
+ void msm_hdmi_set_mode(struct hdmi *hdmi, bool power_on);
+ 
+@@ -230,13 +228,11 @@ void msm_hdmi_audio_set_sample_rate(struct hdmi *hdmi, int rate);
+ struct drm_bridge *msm_hdmi_bridge_init(struct hdmi *hdmi);
+ void msm_hdmi_bridge_destroy(struct drm_bridge *bridge);
+ 
+-/*
+- * hdmi connector:
+- */
+-
+-void msm_hdmi_connector_irq(struct drm_connector *connector);
+-struct drm_connector *msm_hdmi_connector_init(struct hdmi *hdmi);
+-int msm_hdmi_hpd_enable(struct drm_connector *connector);
++void msm_hdmi_hpd_irq(struct drm_bridge *bridge);
++enum drm_connector_status msm_hdmi_bridge_detect(
++		struct drm_bridge *bridge);
++int msm_hdmi_hpd_enable(struct drm_bridge *bridge);
++void msm_hdmi_hpd_disable(struct hdmi_bridge *hdmi_bridge);
+ 
+ /*
+  * i2c adapter for ddc:
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c b/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c
+index 6e380db9287ba..efcfdd70a02e0 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c
+@@ -5,17 +5,16 @@
+  */
+ 
+ #include <linux/delay.h>
++#include <drm/drm_bridge_connector.h>
+ 
++#include "msm_kms.h"
+ #include "hdmi.h"
+ 
+-struct hdmi_bridge {
+-	struct drm_bridge base;
+-	struct hdmi *hdmi;
+-};
+-#define to_hdmi_bridge(x) container_of(x, struct hdmi_bridge, base)
+-
+ void msm_hdmi_bridge_destroy(struct drm_bridge *bridge)
+ {
++	struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
++
++	msm_hdmi_hpd_disable(hdmi_bridge);
+ }
+ 
+ static void msm_hdmi_power_on(struct drm_bridge *bridge)
+@@ -259,14 +258,76 @@ static void msm_hdmi_bridge_mode_set(struct drm_bridge *bridge,
+ 		msm_hdmi_audio_update(hdmi);
+ }
+ 
++static struct edid *msm_hdmi_bridge_get_edid(struct drm_bridge *bridge,
++		struct drm_connector *connector)
++{
++	struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
++	struct hdmi *hdmi = hdmi_bridge->hdmi;
++	struct edid *edid;
++	uint32_t hdmi_ctrl;
++
++	hdmi_ctrl = hdmi_read(hdmi, REG_HDMI_CTRL);
++	hdmi_write(hdmi, REG_HDMI_CTRL, hdmi_ctrl | HDMI_CTRL_ENABLE);
++
++	edid = drm_get_edid(connector, hdmi->i2c);
++
++	hdmi_write(hdmi, REG_HDMI_CTRL, hdmi_ctrl);
++
++	hdmi->hdmi_mode = drm_detect_hdmi_monitor(edid);
++
++	return edid;
++}
++
++static enum drm_mode_status msm_hdmi_bridge_mode_valid(struct drm_bridge *bridge,
++		const struct drm_display_info *info,
++		const struct drm_display_mode *mode)
++{
++	struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
++	struct hdmi *hdmi = hdmi_bridge->hdmi;
++	const struct hdmi_platform_config *config = hdmi->config;
++	struct msm_drm_private *priv = bridge->dev->dev_private;
++	struct msm_kms *kms = priv->kms;
++	long actual, requested;
++
++	requested = 1000 * mode->clock;
++	actual = kms->funcs->round_pixclk(kms,
++			requested, hdmi_bridge->hdmi->encoder);
++
++	/* for mdp5/apq8074, we manage our own pixel clk (as opposed to
++	 * mdp4/dtv stuff where pixel clk is assigned to mdp/encoder
++	 * instead):
++	 */
++	if (config->pwr_clk_cnt > 0)
++		actual = clk_round_rate(hdmi->pwr_clks[0], actual);
++
++	DBG("requested=%ld, actual=%ld", requested, actual);
++
++	if (actual != requested)
++		return MODE_CLOCK_RANGE;
++
++	return 0;
++}
++
+ static const struct drm_bridge_funcs msm_hdmi_bridge_funcs = {
+ 		.pre_enable = msm_hdmi_bridge_pre_enable,
+ 		.enable = msm_hdmi_bridge_enable,
+ 		.disable = msm_hdmi_bridge_disable,
+ 		.post_disable = msm_hdmi_bridge_post_disable,
+ 		.mode_set = msm_hdmi_bridge_mode_set,
++		.mode_valid = msm_hdmi_bridge_mode_valid,
++		.get_edid = msm_hdmi_bridge_get_edid,
++		.detect = msm_hdmi_bridge_detect,
+ };
+ 
++static void
++msm_hdmi_hotplug_work(struct work_struct *work)
++{
++	struct hdmi_bridge *hdmi_bridge =
++		container_of(work, struct hdmi_bridge, hpd_work);
++	struct drm_bridge *bridge = &hdmi_bridge->base;
++
++	drm_bridge_hpd_notify(bridge, drm_bridge_detect(bridge));
++}
+ 
+ /* initialize bridge */
+ struct drm_bridge *msm_hdmi_bridge_init(struct hdmi *hdmi)
+@@ -283,11 +344,17 @@ struct drm_bridge *msm_hdmi_bridge_init(struct hdmi *hdmi)
+ 	}
+ 
+ 	hdmi_bridge->hdmi = hdmi;
++	INIT_WORK(&hdmi_bridge->hpd_work, msm_hdmi_hotplug_work);
+ 
+ 	bridge = &hdmi_bridge->base;
+ 	bridge->funcs = &msm_hdmi_bridge_funcs;
++	bridge->ddc = hdmi->i2c;
++	bridge->type = DRM_MODE_CONNECTOR_HDMIA;
++	bridge->ops = DRM_BRIDGE_OP_HPD |
++		DRM_BRIDGE_OP_DETECT |
++		DRM_BRIDGE_OP_EDID;
+ 
+-	ret = drm_bridge_attach(hdmi->encoder, bridge, NULL, 0);
++	ret = drm_bridge_attach(hdmi->encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR);
+ 	if (ret)
+ 		goto fail;
+ 
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_connector.c b/drivers/gpu/drm/msm/hdmi/hdmi_connector.c
+deleted file mode 100644
+index 58707a1f3878f..0000000000000
+--- a/drivers/gpu/drm/msm/hdmi/hdmi_connector.c
++++ /dev/null
+@@ -1,451 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * Copyright (C) 2013 Red Hat
+- * Author: Rob Clark <robdclark@gmail.com>
+- */
+-
+-#include <linux/delay.h>
+-#include <linux/gpio/consumer.h>
+-#include <linux/pinctrl/consumer.h>
+-
+-#include "msm_kms.h"
+-#include "hdmi.h"
+-
+-struct hdmi_connector {
+-	struct drm_connector base;
+-	struct hdmi *hdmi;
+-	struct work_struct hpd_work;
+-};
+-#define to_hdmi_connector(x) container_of(x, struct hdmi_connector, base)
+-
+-static void msm_hdmi_phy_reset(struct hdmi *hdmi)
+-{
+-	unsigned int val;
+-
+-	val = hdmi_read(hdmi, REG_HDMI_PHY_CTRL);
+-
+-	if (val & HDMI_PHY_CTRL_SW_RESET_LOW) {
+-		/* pull low */
+-		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
+-				val & ~HDMI_PHY_CTRL_SW_RESET);
+-	} else {
+-		/* pull high */
+-		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
+-				val | HDMI_PHY_CTRL_SW_RESET);
+-	}
+-
+-	if (val & HDMI_PHY_CTRL_SW_RESET_PLL_LOW) {
+-		/* pull low */
+-		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
+-				val & ~HDMI_PHY_CTRL_SW_RESET_PLL);
+-	} else {
+-		/* pull high */
+-		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
+-				val | HDMI_PHY_CTRL_SW_RESET_PLL);
+-	}
+-
+-	msleep(100);
+-
+-	if (val & HDMI_PHY_CTRL_SW_RESET_LOW) {
+-		/* pull high */
+-		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
+-				val | HDMI_PHY_CTRL_SW_RESET);
+-	} else {
+-		/* pull low */
+-		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
+-				val & ~HDMI_PHY_CTRL_SW_RESET);
+-	}
+-
+-	if (val & HDMI_PHY_CTRL_SW_RESET_PLL_LOW) {
+-		/* pull high */
+-		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
+-				val | HDMI_PHY_CTRL_SW_RESET_PLL);
+-	} else {
+-		/* pull low */
+-		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
+-				val & ~HDMI_PHY_CTRL_SW_RESET_PLL);
+-	}
+-}
+-
+-static int gpio_config(struct hdmi *hdmi, bool on)
+-{
+-	const struct hdmi_platform_config *config = hdmi->config;
+-	int i;
+-
+-	if (on) {
+-		for (i = 0; i < HDMI_MAX_NUM_GPIO; i++) {
+-			struct hdmi_gpio_data gpio = config->gpios[i];
+-
+-			if (gpio.gpiod) {
+-				if (gpio.output) {
+-					gpiod_direction_output(gpio.gpiod,
+-							       gpio.value);
+-				} else {
+-					gpiod_direction_input(gpio.gpiod);
+-					gpiod_set_value_cansleep(gpio.gpiod,
+-								 gpio.value);
+-				}
+-			}
+-		}
+-
+-		DBG("gpio on");
+-	} else {
+-		for (i = 0; i < HDMI_MAX_NUM_GPIO; i++) {
+-			struct hdmi_gpio_data gpio = config->gpios[i];
+-
+-			if (!gpio.gpiod)
+-				continue;
+-
+-			if (gpio.output) {
+-				int value = gpio.value ? 0 : 1;
+-
+-				gpiod_set_value_cansleep(gpio.gpiod, value);
+-			}
+-		}
+-
+-		DBG("gpio off");
+-	}
+-
+-	return 0;
+-}
+-
+-static void enable_hpd_clocks(struct hdmi *hdmi, bool enable)
+-{
+-	const struct hdmi_platform_config *config = hdmi->config;
+-	struct device *dev = &hdmi->pdev->dev;
+-	int i, ret;
+-
+-	if (enable) {
+-		for (i = 0; i < config->hpd_clk_cnt; i++) {
+-			if (config->hpd_freq && config->hpd_freq[i]) {
+-				ret = clk_set_rate(hdmi->hpd_clks[i],
+-						   config->hpd_freq[i]);
+-				if (ret)
+-					dev_warn(dev,
+-						 "failed to set clk %s (%d)\n",
+-						 config->hpd_clk_names[i], ret);
+-			}
+-
+-			ret = clk_prepare_enable(hdmi->hpd_clks[i]);
+-			if (ret) {
+-				DRM_DEV_ERROR(dev,
+-					"failed to enable hpd clk: %s (%d)\n",
+-					config->hpd_clk_names[i], ret);
+-			}
+-		}
+-	} else {
+-		for (i = config->hpd_clk_cnt - 1; i >= 0; i--)
+-			clk_disable_unprepare(hdmi->hpd_clks[i]);
+-	}
+-}
+-
+-int msm_hdmi_hpd_enable(struct drm_connector *connector)
+-{
+-	struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector);
+-	struct hdmi *hdmi = hdmi_connector->hdmi;
+-	const struct hdmi_platform_config *config = hdmi->config;
+-	struct device *dev = &hdmi->pdev->dev;
+-	uint32_t hpd_ctrl;
+-	int i, ret;
+-	unsigned long flags;
+-
+-	for (i = 0; i < config->hpd_reg_cnt; i++) {
+-		ret = regulator_enable(hdmi->hpd_regs[i]);
+-		if (ret) {
+-			DRM_DEV_ERROR(dev, "failed to enable hpd regulator: %s (%d)\n",
+-					config->hpd_reg_names[i], ret);
+-			goto fail;
+-		}
+-	}
+-
+-	ret = pinctrl_pm_select_default_state(dev);
+-	if (ret) {
+-		DRM_DEV_ERROR(dev, "pinctrl state chg failed: %d\n", ret);
+-		goto fail;
+-	}
+-
+-	ret = gpio_config(hdmi, true);
+-	if (ret) {
+-		DRM_DEV_ERROR(dev, "failed to configure GPIOs: %d\n", ret);
+-		goto fail;
+-	}
+-
+-	pm_runtime_get_sync(dev);
+-	enable_hpd_clocks(hdmi, true);
+-
+-	msm_hdmi_set_mode(hdmi, false);
+-	msm_hdmi_phy_reset(hdmi);
+-	msm_hdmi_set_mode(hdmi, true);
+-
+-	hdmi_write(hdmi, REG_HDMI_USEC_REFTIMER, 0x0001001b);
+-
+-	/* enable HPD events: */
+-	hdmi_write(hdmi, REG_HDMI_HPD_INT_CTRL,
+-			HDMI_HPD_INT_CTRL_INT_CONNECT |
+-			HDMI_HPD_INT_CTRL_INT_EN);
+-
+-	/* set timeout to 4.1ms (max) for hardware debounce */
+-	spin_lock_irqsave(&hdmi->reg_lock, flags);
+-	hpd_ctrl = hdmi_read(hdmi, REG_HDMI_HPD_CTRL);
+-	hpd_ctrl |= HDMI_HPD_CTRL_TIMEOUT(0x1fff);
+-
+-	/* Toggle HPD circuit to trigger HPD sense */
+-	hdmi_write(hdmi, REG_HDMI_HPD_CTRL,
+-			~HDMI_HPD_CTRL_ENABLE & hpd_ctrl);
+-	hdmi_write(hdmi, REG_HDMI_HPD_CTRL,
+-			HDMI_HPD_CTRL_ENABLE | hpd_ctrl);
+-	spin_unlock_irqrestore(&hdmi->reg_lock, flags);
+-
+-	return 0;
+-
+-fail:
+-	return ret;
+-}
+-
+-static void hdp_disable(struct hdmi_connector *hdmi_connector)
+-{
+-	struct hdmi *hdmi = hdmi_connector->hdmi;
+-	const struct hdmi_platform_config *config = hdmi->config;
+-	struct device *dev = &hdmi->pdev->dev;
+-	int i, ret = 0;
+-
+-	/* Disable HPD interrupt */
+-	hdmi_write(hdmi, REG_HDMI_HPD_INT_CTRL, 0);
+-
+-	msm_hdmi_set_mode(hdmi, false);
+-
+-	enable_hpd_clocks(hdmi, false);
+-	pm_runtime_put_autosuspend(dev);
+-
+-	ret = gpio_config(hdmi, false);
+-	if (ret)
+-		dev_warn(dev, "failed to unconfigure GPIOs: %d\n", ret);
+-
+-	ret = pinctrl_pm_select_sleep_state(dev);
+-	if (ret)
+-		dev_warn(dev, "pinctrl state chg failed: %d\n", ret);
+-
+-	for (i = 0; i < config->hpd_reg_cnt; i++) {
+-		ret = regulator_disable(hdmi->hpd_regs[i]);
+-		if (ret)
+-			dev_warn(dev, "failed to disable hpd regulator: %s (%d)\n",
+-					config->hpd_reg_names[i], ret);
+-	}
+-}
+-
+-static void
+-msm_hdmi_hotplug_work(struct work_struct *work)
+-{
+-	struct hdmi_connector *hdmi_connector =
+-		container_of(work, struct hdmi_connector, hpd_work);
+-	struct drm_connector *connector = &hdmi_connector->base;
+-	drm_helper_hpd_irq_event(connector->dev);
+-}
+-
+-void msm_hdmi_connector_irq(struct drm_connector *connector)
+-{
+-	struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector);
+-	struct hdmi *hdmi = hdmi_connector->hdmi;
+-	uint32_t hpd_int_status, hpd_int_ctrl;
+-
+-	/* Process HPD: */
+-	hpd_int_status = hdmi_read(hdmi, REG_HDMI_HPD_INT_STATUS);
+-	hpd_int_ctrl   = hdmi_read(hdmi, REG_HDMI_HPD_INT_CTRL);
+-
+-	if ((hpd_int_ctrl & HDMI_HPD_INT_CTRL_INT_EN) &&
+-			(hpd_int_status & HDMI_HPD_INT_STATUS_INT)) {
+-		bool detected = !!(hpd_int_status & HDMI_HPD_INT_STATUS_CABLE_DETECTED);
+-
+-		/* ack & disable (temporarily) HPD events: */
+-		hdmi_write(hdmi, REG_HDMI_HPD_INT_CTRL,
+-			HDMI_HPD_INT_CTRL_INT_ACK);
+-
+-		DBG("status=%04x, ctrl=%04x", hpd_int_status, hpd_int_ctrl);
+-
+-		/* detect disconnect if we are connected or visa versa: */
+-		hpd_int_ctrl = HDMI_HPD_INT_CTRL_INT_EN;
+-		if (!detected)
+-			hpd_int_ctrl |= HDMI_HPD_INT_CTRL_INT_CONNECT;
+-		hdmi_write(hdmi, REG_HDMI_HPD_INT_CTRL, hpd_int_ctrl);
+-
+-		queue_work(hdmi->workq, &hdmi_connector->hpd_work);
+-	}
+-}
+-
+-static enum drm_connector_status detect_reg(struct hdmi *hdmi)
+-{
+-	uint32_t hpd_int_status;
+-
+-	pm_runtime_get_sync(&hdmi->pdev->dev);
+-	enable_hpd_clocks(hdmi, true);
+-
+-	hpd_int_status = hdmi_read(hdmi, REG_HDMI_HPD_INT_STATUS);
+-
+-	enable_hpd_clocks(hdmi, false);
+-	pm_runtime_put_autosuspend(&hdmi->pdev->dev);
+-
+-	return (hpd_int_status & HDMI_HPD_INT_STATUS_CABLE_DETECTED) ?
+-			connector_status_connected : connector_status_disconnected;
+-}
+-
+-#define HPD_GPIO_INDEX	2
+-static enum drm_connector_status detect_gpio(struct hdmi *hdmi)
+-{
+-	const struct hdmi_platform_config *config = hdmi->config;
+-	struct hdmi_gpio_data hpd_gpio = config->gpios[HPD_GPIO_INDEX];
+-
+-	return gpiod_get_value(hpd_gpio.gpiod) ?
+-			connector_status_connected :
+-			connector_status_disconnected;
+-}
+-
+-static enum drm_connector_status hdmi_connector_detect(
+-		struct drm_connector *connector, bool force)
+-{
+-	struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector);
+-	struct hdmi *hdmi = hdmi_connector->hdmi;
+-	const struct hdmi_platform_config *config = hdmi->config;
+-	struct hdmi_gpio_data hpd_gpio = config->gpios[HPD_GPIO_INDEX];
+-	enum drm_connector_status stat_gpio, stat_reg;
+-	int retry = 20;
+-
+-	/*
+-	 * some platforms may not have hpd gpio. Rely only on the status
+-	 * provided by REG_HDMI_HPD_INT_STATUS in this case.
+-	 */
+-	if (!hpd_gpio.gpiod)
+-		return detect_reg(hdmi);
+-
+-	do {
+-		stat_gpio = detect_gpio(hdmi);
+-		stat_reg  = detect_reg(hdmi);
+-
+-		if (stat_gpio == stat_reg)
+-			break;
+-
+-		mdelay(10);
+-	} while (--retry);
+-
+-	/* the status we get from reading gpio seems to be more reliable,
+-	 * so trust that one the most if we didn't manage to get hdmi and
+-	 * gpio status to agree:
+-	 */
+-	if (stat_gpio != stat_reg) {
+-		DBG("HDMI_HPD_INT_STATUS tells us: %d", stat_reg);
+-		DBG("hpd gpio tells us: %d", stat_gpio);
+-	}
+-
+-	return stat_gpio;
+-}
+-
+-static void hdmi_connector_destroy(struct drm_connector *connector)
+-{
+-	struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector);
+-
+-	hdp_disable(hdmi_connector);
+-
+-	drm_connector_cleanup(connector);
+-
+-	kfree(hdmi_connector);
+-}
+-
+-static int msm_hdmi_connector_get_modes(struct drm_connector *connector)
+-{
+-	struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector);
+-	struct hdmi *hdmi = hdmi_connector->hdmi;
+-	struct edid *edid;
+-	uint32_t hdmi_ctrl;
+-	int ret = 0;
+-
+-	hdmi_ctrl = hdmi_read(hdmi, REG_HDMI_CTRL);
+-	hdmi_write(hdmi, REG_HDMI_CTRL, hdmi_ctrl | HDMI_CTRL_ENABLE);
+-
+-	edid = drm_get_edid(connector, hdmi->i2c);
+-
+-	hdmi_write(hdmi, REG_HDMI_CTRL, hdmi_ctrl);
+-
+-	hdmi->hdmi_mode = drm_detect_hdmi_monitor(edid);
+-	drm_connector_update_edid_property(connector, edid);
+-
+-	if (edid) {
+-		ret = drm_add_edid_modes(connector, edid);
+-		kfree(edid);
+-	}
+-
+-	return ret;
+-}
+-
+-static int msm_hdmi_connector_mode_valid(struct drm_connector *connector,
+-				 struct drm_display_mode *mode)
+-{
+-	struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector);
+-	struct hdmi *hdmi = hdmi_connector->hdmi;
+-	const struct hdmi_platform_config *config = hdmi->config;
+-	struct msm_drm_private *priv = connector->dev->dev_private;
+-	struct msm_kms *kms = priv->kms;
+-	long actual, requested;
+-
+-	requested = 1000 * mode->clock;
+-	actual = kms->funcs->round_pixclk(kms,
+-			requested, hdmi_connector->hdmi->encoder);
+-
+-	/* for mdp5/apq8074, we manage our own pixel clk (as opposed to
+-	 * mdp4/dtv stuff where pixel clk is assigned to mdp/encoder
+-	 * instead):
+-	 */
+-	if (config->pwr_clk_cnt > 0)
+-		actual = clk_round_rate(hdmi->pwr_clks[0], actual);
+-
+-	DBG("requested=%ld, actual=%ld", requested, actual);
+-
+-	if (actual != requested)
+-		return MODE_CLOCK_RANGE;
+-
+-	return 0;
+-}
+-
+-static const struct drm_connector_funcs hdmi_connector_funcs = {
+-	.detect = hdmi_connector_detect,
+-	.fill_modes = drm_helper_probe_single_connector_modes,
+-	.destroy = hdmi_connector_destroy,
+-	.reset = drm_atomic_helper_connector_reset,
+-	.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+-	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+-};
+-
+-static const struct drm_connector_helper_funcs msm_hdmi_connector_helper_funcs = {
+-	.get_modes = msm_hdmi_connector_get_modes,
+-	.mode_valid = msm_hdmi_connector_mode_valid,
+-};
+-
+-/* initialize connector */
+-struct drm_connector *msm_hdmi_connector_init(struct hdmi *hdmi)
+-{
+-	struct drm_connector *connector = NULL;
+-	struct hdmi_connector *hdmi_connector;
+-
+-	hdmi_connector = kzalloc(sizeof(*hdmi_connector), GFP_KERNEL);
+-	if (!hdmi_connector)
+-		return ERR_PTR(-ENOMEM);
+-
+-	hdmi_connector->hdmi = hdmi;
+-	INIT_WORK(&hdmi_connector->hpd_work, msm_hdmi_hotplug_work);
+-
+-	connector = &hdmi_connector->base;
+-
+-	drm_connector_init_with_ddc(hdmi->dev, connector,
+-				    &hdmi_connector_funcs,
+-				    DRM_MODE_CONNECTOR_HDMIA,
+-				    hdmi->i2c);
+-	drm_connector_helper_add(connector, &msm_hdmi_connector_helper_funcs);
+-
+-	connector->polled = DRM_CONNECTOR_POLL_CONNECT |
+-			DRM_CONNECTOR_POLL_DISCONNECT;
+-
+-	connector->interlace_allowed = 0;
+-	connector->doublescan_allowed = 0;
+-
+-	drm_connector_attach_encoder(connector, hdmi->encoder);
+-
+-	return connector;
+-}
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_hpd.c b/drivers/gpu/drm/msm/hdmi/hdmi_hpd.c
+new file mode 100644
+index 0000000000000..52ebe562ca9be
+--- /dev/null
++++ b/drivers/gpu/drm/msm/hdmi/hdmi_hpd.c
+@@ -0,0 +1,269 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * Copyright (C) 2013 Red Hat
++ * Author: Rob Clark <robdclark@gmail.com>
++ */
++
++#include <linux/delay.h>
++#include <linux/gpio/consumer.h>
++#include <linux/pinctrl/consumer.h>
++
++#include "msm_kms.h"
++#include "hdmi.h"
++
++static void msm_hdmi_phy_reset(struct hdmi *hdmi)
++{
++	unsigned int val;
++
++	val = hdmi_read(hdmi, REG_HDMI_PHY_CTRL);
++
++	if (val & HDMI_PHY_CTRL_SW_RESET_LOW) {
++		/* pull low */
++		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
++				val & ~HDMI_PHY_CTRL_SW_RESET);
++	} else {
++		/* pull high */
++		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
++				val | HDMI_PHY_CTRL_SW_RESET);
++	}
++
++	if (val & HDMI_PHY_CTRL_SW_RESET_PLL_LOW) {
++		/* pull low */
++		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
++				val & ~HDMI_PHY_CTRL_SW_RESET_PLL);
++	} else {
++		/* pull high */
++		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
++				val | HDMI_PHY_CTRL_SW_RESET_PLL);
++	}
++
++	msleep(100);
++
++	if (val & HDMI_PHY_CTRL_SW_RESET_LOW) {
++		/* pull high */
++		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
++				val | HDMI_PHY_CTRL_SW_RESET);
++	} else {
++		/* pull low */
++		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
++				val & ~HDMI_PHY_CTRL_SW_RESET);
++	}
++
++	if (val & HDMI_PHY_CTRL_SW_RESET_PLL_LOW) {
++		/* pull high */
++		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
++				val | HDMI_PHY_CTRL_SW_RESET_PLL);
++	} else {
++		/* pull low */
++		hdmi_write(hdmi, REG_HDMI_PHY_CTRL,
++				val & ~HDMI_PHY_CTRL_SW_RESET_PLL);
++	}
++}
++
++static void enable_hpd_clocks(struct hdmi *hdmi, bool enable)
++{
++	const struct hdmi_platform_config *config = hdmi->config;
++	struct device *dev = &hdmi->pdev->dev;
++	int i, ret;
++
++	if (enable) {
++		for (i = 0; i < config->hpd_clk_cnt; i++) {
++			if (config->hpd_freq && config->hpd_freq[i]) {
++				ret = clk_set_rate(hdmi->hpd_clks[i],
++						   config->hpd_freq[i]);
++				if (ret)
++					dev_warn(dev,
++						 "failed to set clk %s (%d)\n",
++						 config->hpd_clk_names[i], ret);
++			}
++
++			ret = clk_prepare_enable(hdmi->hpd_clks[i]);
++			if (ret) {
++				DRM_DEV_ERROR(dev,
++					"failed to enable hpd clk: %s (%d)\n",
++					config->hpd_clk_names[i], ret);
++			}
++		}
++	} else {
++		for (i = config->hpd_clk_cnt - 1; i >= 0; i--)
++			clk_disable_unprepare(hdmi->hpd_clks[i]);
++	}
++}
++
++int msm_hdmi_hpd_enable(struct drm_bridge *bridge)
++{
++	struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
++	struct hdmi *hdmi = hdmi_bridge->hdmi;
++	const struct hdmi_platform_config *config = hdmi->config;
++	struct device *dev = &hdmi->pdev->dev;
++	uint32_t hpd_ctrl;
++	int i, ret;
++	unsigned long flags;
++
++	for (i = 0; i < config->hpd_reg_cnt; i++) {
++		ret = regulator_enable(hdmi->hpd_regs[i]);
++		if (ret) {
++			DRM_DEV_ERROR(dev, "failed to enable hpd regulator: %s (%d)\n",
++					config->hpd_reg_names[i], ret);
++			goto fail;
++		}
++	}
++
++	ret = pinctrl_pm_select_default_state(dev);
++	if (ret) {
++		DRM_DEV_ERROR(dev, "pinctrl state chg failed: %d\n", ret);
++		goto fail;
++	}
++
++	if (hdmi->hpd_gpiod)
++		gpiod_set_value_cansleep(hdmi->hpd_gpiod, 1);
++
++	pm_runtime_get_sync(dev);
++	enable_hpd_clocks(hdmi, true);
++
++	msm_hdmi_set_mode(hdmi, false);
++	msm_hdmi_phy_reset(hdmi);
++	msm_hdmi_set_mode(hdmi, true);
++
++	hdmi_write(hdmi, REG_HDMI_USEC_REFTIMER, 0x0001001b);
++
++	/* enable HPD events: */
++	hdmi_write(hdmi, REG_HDMI_HPD_INT_CTRL,
++			HDMI_HPD_INT_CTRL_INT_CONNECT |
++			HDMI_HPD_INT_CTRL_INT_EN);
++
++	/* set timeout to 4.1ms (max) for hardware debounce */
++	spin_lock_irqsave(&hdmi->reg_lock, flags);
++	hpd_ctrl = hdmi_read(hdmi, REG_HDMI_HPD_CTRL);
++	hpd_ctrl |= HDMI_HPD_CTRL_TIMEOUT(0x1fff);
++
++	/* Toggle HPD circuit to trigger HPD sense */
++	hdmi_write(hdmi, REG_HDMI_HPD_CTRL,
++			~HDMI_HPD_CTRL_ENABLE & hpd_ctrl);
++	hdmi_write(hdmi, REG_HDMI_HPD_CTRL,
++			HDMI_HPD_CTRL_ENABLE | hpd_ctrl);
++	spin_unlock_irqrestore(&hdmi->reg_lock, flags);
++
++	return 0;
++
++fail:
++	return ret;
++}
++
++void msm_hdmi_hpd_disable(struct hdmi_bridge *hdmi_bridge)
++{
++	struct hdmi *hdmi = hdmi_bridge->hdmi;
++	const struct hdmi_platform_config *config = hdmi->config;
++	struct device *dev = &hdmi->pdev->dev;
++	int i, ret = 0;
++
++	/* Disable HPD interrupt */
++	hdmi_write(hdmi, REG_HDMI_HPD_INT_CTRL, 0);
++
++	msm_hdmi_set_mode(hdmi, false);
++
++	enable_hpd_clocks(hdmi, false);
++	pm_runtime_put_autosuspend(dev);
++
++	ret = pinctrl_pm_select_sleep_state(dev);
++	if (ret)
++		dev_warn(dev, "pinctrl state chg failed: %d\n", ret);
++
++	for (i = 0; i < config->hpd_reg_cnt; i++) {
++		ret = regulator_disable(hdmi->hpd_regs[i]);
++		if (ret)
++			dev_warn(dev, "failed to disable hpd regulator: %s (%d)\n",
++					config->hpd_reg_names[i], ret);
++	}
++}
++
++void msm_hdmi_hpd_irq(struct drm_bridge *bridge)
++{
++	struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
++	struct hdmi *hdmi = hdmi_bridge->hdmi;
++	uint32_t hpd_int_status, hpd_int_ctrl;
++
++	/* Process HPD: */
++	hpd_int_status = hdmi_read(hdmi, REG_HDMI_HPD_INT_STATUS);
++	hpd_int_ctrl   = hdmi_read(hdmi, REG_HDMI_HPD_INT_CTRL);
++
++	if ((hpd_int_ctrl & HDMI_HPD_INT_CTRL_INT_EN) &&
++			(hpd_int_status & HDMI_HPD_INT_STATUS_INT)) {
++		bool detected = !!(hpd_int_status & HDMI_HPD_INT_STATUS_CABLE_DETECTED);
++
++		/* ack & disable (temporarily) HPD events: */
++		hdmi_write(hdmi, REG_HDMI_HPD_INT_CTRL,
++			HDMI_HPD_INT_CTRL_INT_ACK);
++
++		DBG("status=%04x, ctrl=%04x", hpd_int_status, hpd_int_ctrl);
++
++		/* detect disconnect if we are connected or visa versa: */
++		hpd_int_ctrl = HDMI_HPD_INT_CTRL_INT_EN;
++		if (!detected)
++			hpd_int_ctrl |= HDMI_HPD_INT_CTRL_INT_CONNECT;
++		hdmi_write(hdmi, REG_HDMI_HPD_INT_CTRL, hpd_int_ctrl);
++
++		queue_work(hdmi->workq, &hdmi_bridge->hpd_work);
++	}
++}
++
++static enum drm_connector_status detect_reg(struct hdmi *hdmi)
++{
++	uint32_t hpd_int_status;
++
++	pm_runtime_get_sync(&hdmi->pdev->dev);
++	enable_hpd_clocks(hdmi, true);
++
++	hpd_int_status = hdmi_read(hdmi, REG_HDMI_HPD_INT_STATUS);
++
++	enable_hpd_clocks(hdmi, false);
++	pm_runtime_put_autosuspend(&hdmi->pdev->dev);
++
++	return (hpd_int_status & HDMI_HPD_INT_STATUS_CABLE_DETECTED) ?
++			connector_status_connected : connector_status_disconnected;
++}
++
++#define HPD_GPIO_INDEX	2
++static enum drm_connector_status detect_gpio(struct hdmi *hdmi)
++{
++	return gpiod_get_value(hdmi->hpd_gpiod) ?
++			connector_status_connected :
++			connector_status_disconnected;
++}
++
++enum drm_connector_status msm_hdmi_bridge_detect(
++		struct drm_bridge *bridge)
++{
++	struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
++	struct hdmi *hdmi = hdmi_bridge->hdmi;
++	enum drm_connector_status stat_gpio, stat_reg;
++	int retry = 20;
++
++	/*
++	 * some platforms may not have hpd gpio. Rely only on the status
++	 * provided by REG_HDMI_HPD_INT_STATUS in this case.
++	 */
++	if (!hdmi->hpd_gpiod)
++		return detect_reg(hdmi);
++
++	do {
++		stat_gpio = detect_gpio(hdmi);
++		stat_reg  = detect_reg(hdmi);
++
++		if (stat_gpio == stat_reg)
++			break;
++
++		mdelay(10);
++	} while (--retry);
++
++	/* the status we get from reading gpio seems to be more reliable,
++	 * so trust that one the most if we didn't manage to get hdmi and
++	 * gpio status to agree:
++	 */
++	if (stat_gpio != stat_reg) {
++		DBG("HDMI_HPD_INT_STATUS tells us: %d", stat_reg);
++		DBG("hpd gpio tells us: %d", stat_gpio);
++	}
++
++	return stat_gpio;
++}
+diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7701.c b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
+index 4d2a149b202cb..cd9f01940b173 100644
+--- a/drivers/gpu/drm/panel/panel-sitronix-st7701.c
++++ b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
+@@ -384,7 +384,15 @@ static int st7701_dsi_probe(struct mipi_dsi_device *dsi)
+ 	st7701->dsi = dsi;
+ 	st7701->desc = desc;
+ 
+-	return mipi_dsi_attach(dsi);
++	ret = mipi_dsi_attach(dsi);
++	if (ret)
++		goto err_attach;
++
++	return 0;
++
++err_attach:
++	drm_panel_remove(&st7701->panel);
++	return ret;
+ }
+ 
+ static int st7701_dsi_remove(struct mipi_dsi_device *dsi)
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index 1dfc457bbefc8..4af25c0b6570f 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -81,6 +81,7 @@ static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
+ 	struct panfrost_gem_object *bo;
+ 	struct drm_panfrost_create_bo *args = data;
+ 	struct panfrost_gem_mapping *mapping;
++	int ret;
+ 
+ 	if (!args->size || args->pad ||
+ 	    (args->flags & ~(PANFROST_BO_NOEXEC | PANFROST_BO_HEAP)))
+@@ -91,21 +92,29 @@ static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
+ 	    !(args->flags & PANFROST_BO_NOEXEC))
+ 		return -EINVAL;
+ 
+-	bo = panfrost_gem_create_with_handle(file, dev, args->size, args->flags,
+-					     &args->handle);
++	bo = panfrost_gem_create(dev, args->size, args->flags);
+ 	if (IS_ERR(bo))
+ 		return PTR_ERR(bo);
+ 
++	ret = drm_gem_handle_create(file, &bo->base.base, &args->handle);
++	if (ret)
++		goto out;
++
+ 	mapping = panfrost_gem_mapping_get(bo, priv);
+-	if (!mapping) {
+-		drm_gem_object_put(&bo->base.base);
+-		return -EINVAL;
++	if (mapping) {
++		args->offset = mapping->mmnode.start << PAGE_SHIFT;
++		panfrost_gem_mapping_put(mapping);
++	} else {
++		/* This can only happen if the handle from
++		 * drm_gem_handle_create() has already been guessed and freed
++		 * by user space
++		 */
++		ret = -EINVAL;
+ 	}
+ 
+-	args->offset = mapping->mmnode.start << PAGE_SHIFT;
+-	panfrost_gem_mapping_put(mapping);
+-
+-	return 0;
++out:
++	drm_gem_object_put(&bo->base.base);
++	return ret;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
+index 1d917cea5ceb4..c843fbfdb878e 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
+@@ -232,12 +232,8 @@ struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t
+ }
+ 
+ struct panfrost_gem_object *
+-panfrost_gem_create_with_handle(struct drm_file *file_priv,
+-				struct drm_device *dev, size_t size,
+-				u32 flags,
+-				uint32_t *handle)
++panfrost_gem_create(struct drm_device *dev, size_t size, u32 flags)
+ {
+-	int ret;
+ 	struct drm_gem_shmem_object *shmem;
+ 	struct panfrost_gem_object *bo;
+ 
+@@ -253,16 +249,6 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv,
+ 	bo->noexec = !!(flags & PANFROST_BO_NOEXEC);
+ 	bo->is_heap = !!(flags & PANFROST_BO_HEAP);
+ 
+-	/*
+-	 * Allocate an id of idr table where the obj is registered
+-	 * and handle has the id what user can see.
+-	 */
+-	ret = drm_gem_handle_create(file_priv, &shmem->base, handle);
+-	/* drop reference from allocate - handle holds it now. */
+-	drm_gem_object_put(&shmem->base);
+-	if (ret)
+-		return ERR_PTR(ret);
+-
+ 	return bo;
+ }
+ 
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
+index 8088d5fd8480e..ad2877eeeccdf 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
++++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
+@@ -69,10 +69,7 @@ panfrost_gem_prime_import_sg_table(struct drm_device *dev,
+ 				   struct sg_table *sgt);
+ 
+ struct panfrost_gem_object *
+-panfrost_gem_create_with_handle(struct drm_file *file_priv,
+-				struct drm_device *dev, size_t size,
+-				u32 flags,
+-				uint32_t *handle);
++panfrost_gem_create(struct drm_device *dev, size_t size, u32 flags);
+ 
+ int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv);
+ void panfrost_gem_close(struct drm_gem_object *obj,
+diff --git a/drivers/gpu/drm/radeon/radeon_bios.c b/drivers/gpu/drm/radeon/radeon_bios.c
+index bb29cf02974d1..0c94147f76256 100644
+--- a/drivers/gpu/drm/radeon/radeon_bios.c
++++ b/drivers/gpu/drm/radeon/radeon_bios.c
+@@ -227,6 +227,7 @@ static bool radeon_atrm_get_bios(struct radeon_device *rdev)
+ 
+ 	if (!found)
+ 		return false;
++	pci_dev_put(pdev);
+ 
+ 	rdev->bios = kmalloc(size, GFP_KERNEL);
+ 	if (!rdev->bios) {
+@@ -612,13 +613,14 @@ static bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
+ 	acpi_size tbl_size;
+ 	UEFI_ACPI_VFCT *vfct;
+ 	unsigned offset;
++	bool r = false;
+ 
+ 	if (!ACPI_SUCCESS(acpi_get_table("VFCT", 1, &hdr)))
+ 		return false;
+ 	tbl_size = hdr->length;
+ 	if (tbl_size < sizeof(UEFI_ACPI_VFCT)) {
+ 		DRM_ERROR("ACPI VFCT table present but broken (too short #1)\n");
+-		return false;
++		goto out;
+ 	}
+ 
+ 	vfct = (UEFI_ACPI_VFCT *)hdr;
+@@ -631,13 +633,13 @@ static bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
+ 		offset += sizeof(VFCT_IMAGE_HEADER);
+ 		if (offset > tbl_size) {
+ 			DRM_ERROR("ACPI VFCT image header truncated\n");
+-			return false;
++			goto out;
+ 		}
+ 
+ 		offset += vhdr->ImageLength;
+ 		if (offset > tbl_size) {
+ 			DRM_ERROR("ACPI VFCT image truncated\n");
+-			return false;
++			goto out;
+ 		}
+ 
+ 		if (vhdr->ImageLength &&
+@@ -649,15 +651,18 @@ static bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
+ 			rdev->bios = kmemdup(&vbios->VbiosContent,
+ 					     vhdr->ImageLength,
+ 					     GFP_KERNEL);
++			if (rdev->bios)
++				r = true;
+ 
+-			if (!rdev->bios)
+-				return false;
+-			return true;
++			goto out;
+ 		}
+ 	}
+ 
+ 	DRM_ERROR("ACPI VFCT table present but broken (too short #2)\n");
+-	return false;
++
++out:
++	acpi_put_table(hdr);
++	return r;
+ }
+ #else
+ static inline bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+index 857c47c69ef15..adeaa0140f0f7 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+@@ -564,7 +564,7 @@ static void cdn_dp_encoder_mode_set(struct drm_encoder *encoder,
+ 	video->v_sync_polarity = !!(mode->flags & DRM_MODE_FLAG_NVSYNC);
+ 	video->h_sync_polarity = !!(mode->flags & DRM_MODE_FLAG_NHSYNC);
+ 
+-	memcpy(&dp->mode, adjusted, sizeof(*mode));
++	drm_mode_copy(&dp->mode, adjusted);
+ }
+ 
+ static bool cdn_dp_check_link_status(struct cdn_dp_device *dp)
+diff --git a/drivers/gpu/drm/rockchip/inno_hdmi.c b/drivers/gpu/drm/rockchip/inno_hdmi.c
+index 7afdc54eb3ec1..78120da5e63aa 100644
+--- a/drivers/gpu/drm/rockchip/inno_hdmi.c
++++ b/drivers/gpu/drm/rockchip/inno_hdmi.c
+@@ -488,7 +488,7 @@ static void inno_hdmi_encoder_mode_set(struct drm_encoder *encoder,
+ 	inno_hdmi_setup(hdmi, adj_mode);
+ 
+ 	/* Store the display mode for plugin/DPMS poweron events */
+-	memcpy(&hdmi->previous_mode, adj_mode, sizeof(hdmi->previous_mode));
++	drm_mode_copy(&hdmi->previous_mode, adj_mode);
+ }
+ 
+ static void inno_hdmi_encoder_enable(struct drm_encoder *encoder)
+diff --git a/drivers/gpu/drm/rockchip/rk3066_hdmi.c b/drivers/gpu/drm/rockchip/rk3066_hdmi.c
+index 1c546c3a89984..17e7c40a9e7b9 100644
+--- a/drivers/gpu/drm/rockchip/rk3066_hdmi.c
++++ b/drivers/gpu/drm/rockchip/rk3066_hdmi.c
+@@ -383,7 +383,7 @@ rk3066_hdmi_encoder_mode_set(struct drm_encoder *encoder,
+ 	struct rk3066_hdmi *hdmi = to_rk3066_hdmi(encoder);
+ 
+ 	/* Store the display mode for plugin/DPMS poweron events. */
+-	memcpy(&hdmi->previous_mode, adj_mode, sizeof(hdmi->previous_mode));
++	drm_mode_copy(&hdmi->previous_mode, adj_mode);
+ }
+ 
+ static void rk3066_hdmi_encoder_enable(struct drm_encoder *encoder)
+diff --git a/drivers/gpu/drm/rockchip/rockchip_lvds.c b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+index 7c20b4a24a7e2..e2487937c4e3d 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_lvds.c
++++ b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+@@ -145,7 +145,7 @@ static int rk3288_lvds_poweron(struct rockchip_lvds *lvds)
+ 		DRM_DEV_ERROR(lvds->dev, "failed to enable lvds pclk %d\n", ret);
+ 		return ret;
+ 	}
+-	ret = pm_runtime_get_sync(lvds->dev);
++	ret = pm_runtime_resume_and_get(lvds->dev);
+ 	if (ret < 0) {
+ 		DRM_DEV_ERROR(lvds->dev, "failed to get pm runtime: %d\n", ret);
+ 		clk_disable(lvds->pclk);
+@@ -329,16 +329,20 @@ static int px30_lvds_poweron(struct rockchip_lvds *lvds)
+ {
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(lvds->dev);
++	ret = pm_runtime_resume_and_get(lvds->dev);
+ 	if (ret < 0) {
+ 		DRM_DEV_ERROR(lvds->dev, "failed to get pm runtime: %d\n", ret);
+ 		return ret;
+ 	}
+ 
+ 	/* Enable LVDS mode */
+-	return regmap_update_bits(lvds->grf, PX30_LVDS_GRF_PD_VO_CON1,
++	ret = regmap_update_bits(lvds->grf, PX30_LVDS_GRF_PD_VO_CON1,
+ 				  PX30_LVDS_MODE_EN(1) | PX30_LVDS_P2S_EN(1),
+ 				  PX30_LVDS_MODE_EN(1) | PX30_LVDS_P2S_EN(1));
++	if (ret)
++		pm_runtime_put(lvds->dev);
++
++	return ret;
+ }
+ 
+ static void px30_lvds_poweroff(struct rockchip_lvds *lvds)
+diff --git a/drivers/gpu/drm/sti/sti_dvo.c b/drivers/gpu/drm/sti/sti_dvo.c
+index ddb4184f07264..11225ac213e13 100644
+--- a/drivers/gpu/drm/sti/sti_dvo.c
++++ b/drivers/gpu/drm/sti/sti_dvo.c
+@@ -288,7 +288,7 @@ static void sti_dvo_set_mode(struct drm_bridge *bridge,
+ 
+ 	DRM_DEBUG_DRIVER("\n");
+ 
+-	memcpy(&dvo->mode, mode, sizeof(struct drm_display_mode));
++	drm_mode_copy(&dvo->mode, mode);
+ 
+ 	/* According to the path used (main or aux), the dvo clocks should
+ 	 * have a different parent clock. */
+@@ -346,8 +346,9 @@ static int sti_dvo_connector_get_modes(struct drm_connector *connector)
+ 
+ #define CLK_TOLERANCE_HZ 50
+ 
+-static int sti_dvo_connector_mode_valid(struct drm_connector *connector,
+-					struct drm_display_mode *mode)
++static enum drm_mode_status
++sti_dvo_connector_mode_valid(struct drm_connector *connector,
++			     struct drm_display_mode *mode)
+ {
+ 	int target = mode->clock * 1000;
+ 	int target_min = target - CLK_TOLERANCE_HZ;
+diff --git a/drivers/gpu/drm/sti/sti_hda.c b/drivers/gpu/drm/sti/sti_hda.c
+index 5c2b650b561d5..418dfccc2faf3 100644
+--- a/drivers/gpu/drm/sti/sti_hda.c
++++ b/drivers/gpu/drm/sti/sti_hda.c
+@@ -523,7 +523,7 @@ static void sti_hda_set_mode(struct drm_bridge *bridge,
+ 
+ 	DRM_DEBUG_DRIVER("\n");
+ 
+-	memcpy(&hda->mode, mode, sizeof(struct drm_display_mode));
++	drm_mode_copy(&hda->mode, mode);
+ 
+ 	if (!hda_get_mode_idx(hda->mode, &mode_idx)) {
+ 		DRM_ERROR("Undefined mode\n");
+@@ -600,8 +600,9 @@ static int sti_hda_connector_get_modes(struct drm_connector *connector)
+ 
+ #define CLK_TOLERANCE_HZ 50
+ 
+-static int sti_hda_connector_mode_valid(struct drm_connector *connector,
+-					struct drm_display_mode *mode)
++static enum drm_mode_status
++sti_hda_connector_mode_valid(struct drm_connector *connector,
++			     struct drm_display_mode *mode)
+ {
+ 	int target = mode->clock * 1000;
+ 	int target_min = target - CLK_TOLERANCE_HZ;
+diff --git a/drivers/gpu/drm/sti/sti_hdmi.c b/drivers/gpu/drm/sti/sti_hdmi.c
+index 38a558768e531..1bcee73f51144 100644
+--- a/drivers/gpu/drm/sti/sti_hdmi.c
++++ b/drivers/gpu/drm/sti/sti_hdmi.c
+@@ -934,7 +934,7 @@ static void sti_hdmi_set_mode(struct drm_bridge *bridge,
+ 	DRM_DEBUG_DRIVER("\n");
+ 
+ 	/* Copy the drm display mode in the connector local structure */
+-	memcpy(&hdmi->mode, mode, sizeof(struct drm_display_mode));
++	drm_mode_copy(&hdmi->mode, mode);
+ 
+ 	/* Update clock framerate according to the selected mode */
+ 	ret = clk_set_rate(hdmi->clk_pix, mode->clock * 1000);
+@@ -997,8 +997,9 @@ fail:
+ 
+ #define CLK_TOLERANCE_HZ 50
+ 
+-static int sti_hdmi_connector_mode_valid(struct drm_connector *connector,
+-					struct drm_display_mode *mode)
++static enum drm_mode_status
++sti_hdmi_connector_mode_valid(struct drm_connector *connector,
++			      struct drm_display_mode *mode)
+ {
+ 	int target = mode->clock * 1000;
+ 	int target_min = target - CLK_TOLERANCE_HZ;
+diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
+index ceb86338c0039..958d12da902d3 100644
+--- a/drivers/gpu/drm/tegra/dc.c
++++ b/drivers/gpu/drm/tegra/dc.c
+@@ -2564,8 +2564,10 @@ static int tegra_dc_probe(struct platform_device *pdev)
+ 	usleep_range(2000, 4000);
+ 
+ 	err = reset_control_assert(dc->rst);
+-	if (err < 0)
++	if (err < 0) {
++		clk_disable_unprepare(dc->clk);
+ 		return err;
++	}
+ 
+ 	usleep_range(2000, 4000);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index e58112997c881..0e963fd7db17e 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -182,7 +182,8 @@ void vmw_kms_cursor_snoop(struct vmw_surface *srf,
+ 	if (cmd->dma.guest.ptr.offset % PAGE_SIZE ||
+ 	    box->x != 0    || box->y != 0    || box->z != 0    ||
+ 	    box->srcx != 0 || box->srcy != 0 || box->srcz != 0 ||
+-	    box->d != 1    || box_count != 1) {
++	    box->d != 1    || box_count != 1 ||
++	    box->w > 64 || box->h > 64) {
+ 		/* TODO handle none page aligned offsets */
+ 		/* TODO handle more dst & src != 0 */
+ 		/* TODO handle more then one copy */
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 2001566be3f5e..09c3f30f10d37 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -948,7 +948,10 @@
+ #define USB_DEVICE_ID_ORTEK_IHOME_IMAC_A210S	0x8003
+ 
+ #define USB_VENDOR_ID_PLANTRONICS	0x047f
++#define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3210_SERIES	0xc055
+ #define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES	0xc056
++#define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3215_SERIES	0xc057
++#define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3225_SERIES	0xc058
+ 
+ #define USB_VENDOR_ID_PANASONIC		0x04da
+ #define USB_DEVICE_ID_PANABOARD_UBT780	0x1044
+diff --git a/drivers/hid/hid-mcp2221.c b/drivers/hid/hid-mcp2221.c
+index de52e9f7bb8cb..560eeec4035aa 100644
+--- a/drivers/hid/hid-mcp2221.c
++++ b/drivers/hid/hid-mcp2221.c
+@@ -840,12 +840,19 @@ static int mcp2221_probe(struct hid_device *hdev,
+ 		return ret;
+ 	}
+ 
+-	ret = hid_hw_start(hdev, HID_CONNECT_HIDRAW);
++	/*
++	 * This driver uses the .raw_event callback and therefore does not need any
++	 * HID_CONNECT_xxx flags.
++	 */
++	ret = hid_hw_start(hdev, 0);
+ 	if (ret) {
+ 		hid_err(hdev, "can't start hardware\n");
+ 		return ret;
+ 	}
+ 
++	hid_info(hdev, "USB HID v%x.%02x Device [%s] on %s\n", hdev->version >> 8,
++			hdev->version & 0xff, hdev->name, hdev->phys);
++
+ 	ret = hid_hw_open(hdev);
+ 	if (ret) {
+ 		hid_err(hdev, "can't open device\n");
+@@ -870,8 +877,7 @@ static int mcp2221_probe(struct hid_device *hdev,
+ 	mcp->adapter.retries = 1;
+ 	mcp->adapter.dev.parent = &hdev->dev;
+ 	snprintf(mcp->adapter.name, sizeof(mcp->adapter.name),
+-			"MCP2221 usb-i2c bridge on hidraw%d",
+-			((struct hidraw *)hdev->hidraw)->minor);
++			"MCP2221 usb-i2c bridge");
+ 
+ 	ret = i2c_add_adapter(&mcp->adapter);
+ 	if (ret) {
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index a78ce16d4782d..ea8c52f0aa783 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1912,6 +1912,10 @@ static const struct hid_device_id mt_devices[] = {
+ 		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+ 			USB_VENDOR_ID_ELAN, 0x313a) },
+ 
++	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++			USB_VENDOR_ID_ELAN, 0x3148) },
++
+ 	/* Elitegroup panel */
+ 	{ .driver_data = MT_CLS_SERIAL,
+ 		MT_USB_DEVICE(USB_VENDOR_ID_ELITEGROUP,
+diff --git a/drivers/hid/hid-plantronics.c b/drivers/hid/hid-plantronics.c
+index e81b7cec2d124..3d414ae194acb 100644
+--- a/drivers/hid/hid-plantronics.c
++++ b/drivers/hid/hid-plantronics.c
+@@ -198,9 +198,18 @@ err:
+ }
+ 
+ static const struct hid_device_id plantronics_devices[] = {
++	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
++					 USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3210_SERIES),
++		.driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+ 					 USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES),
+ 		.driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
++					 USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3215_SERIES),
++		.driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
++					 USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3225_SERIES),
++		.driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, HID_ANY_ID) },
+ 	{ }
+ };
+diff --git a/drivers/hid/hid-sensor-custom.c b/drivers/hid/hid-sensor-custom.c
+index 4d25577a8573f..971600a6397a4 100644
+--- a/drivers/hid/hid-sensor-custom.c
++++ b/drivers/hid/hid-sensor-custom.c
+@@ -59,7 +59,7 @@ struct hid_sensor_sample {
+ 	u32 raw_len;
+ } __packed;
+ 
+-static struct attribute hid_custom_attrs[] = {
++static struct attribute hid_custom_attrs[HID_CUSTOM_TOTAL_ATTRS] = {
+ 	{.name = "name", .mode = S_IRUGO},
+ 	{.name = "units", .mode = S_IRUGO},
+ 	{.name = "unit-expo", .mode = S_IRUGO},
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 4dbf69078387f..b42785fdf7ed5 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -160,6 +160,9 @@ static int wacom_raw_event(struct hid_device *hdev, struct hid_report *report,
+ {
+ 	struct wacom *wacom = hid_get_drvdata(hdev);
+ 
++	if (wacom->wacom_wac.features.type == BOOTLOADER)
++		return 0;
++
+ 	if (size > WACOM_PKGLEN_MAX)
+ 		return 1;
+ 
+@@ -2786,6 +2789,11 @@ static int wacom_probe(struct hid_device *hdev,
+ 		return error;
+ 	}
+ 
++	if (features->type == BOOTLOADER) {
++		hid_warn(hdev, "Using device in hidraw-only mode");
++		return hid_hw_start(hdev, HID_CONNECT_HIDRAW);
++	}
++
+ 	error = wacom_parse_and_register(wacom, false);
+ 	if (error)
+ 		return error;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index d8d127fcc82a8..afb94b89fc4d4 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -4782,6 +4782,9 @@ static const struct wacom_features wacom_features_0x3c8 =
+ static const struct wacom_features wacom_features_HID_ANY_ID =
+ 	{ "Wacom HID", .type = HID_GENERIC, .oVid = HID_ANY_ID, .oPid = HID_ANY_ID };
+ 
++static const struct wacom_features wacom_features_0x94 =
++	{ "Wacom Bootloader", .type = BOOTLOADER };
++
+ #define USB_DEVICE_WACOM(prod)						\
+ 	HID_DEVICE(BUS_USB, HID_GROUP_WACOM, USB_VENDOR_ID_WACOM, prod),\
+ 	.driver_data = (kernel_ulong_t)&wacom_features_##prod
+@@ -4855,6 +4858,7 @@ const struct hid_device_id wacom_ids[] = {
+ 	{ USB_DEVICE_WACOM(0x84) },
+ 	{ USB_DEVICE_WACOM(0x90) },
+ 	{ USB_DEVICE_WACOM(0x93) },
++	{ USB_DEVICE_WACOM(0x94) },
+ 	{ USB_DEVICE_WACOM(0x97) },
+ 	{ USB_DEVICE_WACOM(0x9A) },
+ 	{ USB_DEVICE_WACOM(0x9F) },
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index 8dea7cb298e69..ca172efcf072f 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -242,6 +242,7 @@ enum {
+ 	MTTPC,
+ 	MTTPC_B,
+ 	HID_GENERIC,
++	BOOTLOADER,
+ 	MAX_TYPE
+ };
+ 
+diff --git a/drivers/hsi/controllers/omap_ssi_core.c b/drivers/hsi/controllers/omap_ssi_core.c
+index eb98201583185..26f2c3c012978 100644
+--- a/drivers/hsi/controllers/omap_ssi_core.c
++++ b/drivers/hsi/controllers/omap_ssi_core.c
+@@ -502,8 +502,10 @@ static int ssi_probe(struct platform_device *pd)
+ 	platform_set_drvdata(pd, ssi);
+ 
+ 	err = ssi_add_controller(ssi, pd);
+-	if (err < 0)
++	if (err < 0) {
++		hsi_put_controller(ssi);
+ 		goto out1;
++	}
+ 
+ 	pm_runtime_enable(&pd->dev);
+ 
+@@ -536,9 +538,9 @@ out3:
+ 	device_for_each_child(&pd->dev, NULL, ssi_remove_ports);
+ out2:
+ 	ssi_remove_controller(ssi);
++	pm_runtime_disable(&pd->dev);
+ out1:
+ 	platform_set_drvdata(pd, NULL);
+-	pm_runtime_disable(&pd->dev);
+ 
+ 	return err;
+ }
+@@ -629,7 +631,13 @@ static int __init ssi_init(void) {
+ 	if (ret)
+ 		return ret;
+ 
+-	return platform_driver_register(&ssi_port_pdriver);
++	ret = platform_driver_register(&ssi_port_pdriver);
++	if (ret) {
++		platform_driver_unregister(&ssi_pdriver);
++		return ret;
++	}
++
++	return 0;
+ }
+ module_init(ssi_init);
+ 
+diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
+index 769851b6e74c5..7ed6fad3fa8ff 100644
+--- a/drivers/hv/ring_buffer.c
++++ b/drivers/hv/ring_buffer.c
+@@ -246,6 +246,19 @@ void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info)
+ 	mutex_unlock(&ring_info->ring_buffer_mutex);
+ }
+ 
++/*
++ * Check if the ring buffer spinlock is available to take or not; used on
++ * atomic contexts, like panic path (see the Hyper-V framebuffer driver).
++ */
++
++bool hv_ringbuffer_spinlock_busy(struct vmbus_channel *channel)
++{
++	struct hv_ring_buffer_info *rinfo = &channel->outbound;
++
++	return spin_is_locked(&rinfo->ring_lock);
++}
++EXPORT_SYMBOL_GPL(hv_ringbuffer_spinlock_busy);
++
+ /* Write to the ring buffer. */
+ int hv_ringbuffer_write(struct vmbus_channel *channel,
+ 			const struct kvec *kv_list, u32 kv_count)
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index f741c7492ee40..8a427467a8427 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -766,6 +766,7 @@ config SENSORS_IT87
+ config SENSORS_JC42
+ 	tristate "JEDEC JC42.4 compliant memory module temperature sensors"
+ 	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  If you say yes here, you get support for JEDEC JC42.4 compliant
+ 	  temperature sensors, which are used on many DDR3 memory modules for
+diff --git a/drivers/hwmon/jc42.c b/drivers/hwmon/jc42.c
+index 4a03d010ec5a8..52f341d46029b 100644
+--- a/drivers/hwmon/jc42.c
++++ b/drivers/hwmon/jc42.c
+@@ -19,6 +19,7 @@
+ #include <linux/err.h>
+ #include <linux/mutex.h>
+ #include <linux/of.h>
++#include <linux/regmap.h>
+ 
+ /* Addresses to scan */
+ static const unsigned short normal_i2c[] = {
+@@ -189,31 +190,14 @@ static struct jc42_chips jc42_chips[] = {
+ 	{ STM_MANID, STTS3000_DEVID, STTS3000_DEVID_MASK },
+ };
+ 
+-enum temp_index {
+-	t_input = 0,
+-	t_crit,
+-	t_min,
+-	t_max,
+-	t_num_temp
+-};
+-
+-static const u8 temp_regs[t_num_temp] = {
+-	[t_input] = JC42_REG_TEMP,
+-	[t_crit] = JC42_REG_TEMP_CRITICAL,
+-	[t_min] = JC42_REG_TEMP_LOWER,
+-	[t_max] = JC42_REG_TEMP_UPPER,
+-};
+-
+ /* Each client has this additional data */
+ struct jc42_data {
+-	struct i2c_client *client;
+ 	struct mutex	update_lock;	/* protect register access */
++	struct regmap	*regmap;
+ 	bool		extended;	/* true if extended range supported */
+ 	bool		valid;
+-	unsigned long	last_updated;	/* In jiffies */
+ 	u16		orig_config;	/* original configuration */
+ 	u16		config;		/* current configuration */
+-	u16		temp[t_num_temp];/* Temperatures */
+ };
+ 
+ #define JC42_TEMP_MIN_EXTENDED	(-40000)
+@@ -238,85 +222,102 @@ static int jc42_temp_from_reg(s16 reg)
+ 	return reg * 125 / 2;
+ }
+ 
+-static struct jc42_data *jc42_update_device(struct device *dev)
+-{
+-	struct jc42_data *data = dev_get_drvdata(dev);
+-	struct i2c_client *client = data->client;
+-	struct jc42_data *ret = data;
+-	int i, val;
+-
+-	mutex_lock(&data->update_lock);
+-
+-	if (time_after(jiffies, data->last_updated + HZ) || !data->valid) {
+-		for (i = 0; i < t_num_temp; i++) {
+-			val = i2c_smbus_read_word_swapped(client, temp_regs[i]);
+-			if (val < 0) {
+-				ret = ERR_PTR(val);
+-				goto abort;
+-			}
+-			data->temp[i] = val;
+-		}
+-		data->last_updated = jiffies;
+-		data->valid = true;
+-	}
+-abort:
+-	mutex_unlock(&data->update_lock);
+-	return ret;
+-}
+-
+ static int jc42_read(struct device *dev, enum hwmon_sensor_types type,
+ 		     u32 attr, int channel, long *val)
+ {
+-	struct jc42_data *data = jc42_update_device(dev);
+-	int temp, hyst;
++	struct jc42_data *data = dev_get_drvdata(dev);
++	unsigned int regval;
++	int ret, temp, hyst;
+ 
+-	if (IS_ERR(data))
+-		return PTR_ERR(data);
++	mutex_lock(&data->update_lock);
+ 
+ 	switch (attr) {
+ 	case hwmon_temp_input:
+-		*val = jc42_temp_from_reg(data->temp[t_input]);
+-		return 0;
++		ret = regmap_read(data->regmap, JC42_REG_TEMP, &regval);
++		if (ret)
++			break;
++
++		*val = jc42_temp_from_reg(regval);
++		break;
+ 	case hwmon_temp_min:
+-		*val = jc42_temp_from_reg(data->temp[t_min]);
+-		return 0;
++		ret = regmap_read(data->regmap, JC42_REG_TEMP_LOWER, &regval);
++		if (ret)
++			break;
++
++		*val = jc42_temp_from_reg(regval);
++		break;
+ 	case hwmon_temp_max:
+-		*val = jc42_temp_from_reg(data->temp[t_max]);
+-		return 0;
++		ret = regmap_read(data->regmap, JC42_REG_TEMP_UPPER, &regval);
++		if (ret)
++			break;
++
++		*val = jc42_temp_from_reg(regval);
++		break;
+ 	case hwmon_temp_crit:
+-		*val = jc42_temp_from_reg(data->temp[t_crit]);
+-		return 0;
++		ret = regmap_read(data->regmap, JC42_REG_TEMP_CRITICAL,
++				  &regval);
++		if (ret)
++			break;
++
++		*val = jc42_temp_from_reg(regval);
++		break;
+ 	case hwmon_temp_max_hyst:
+-		temp = jc42_temp_from_reg(data->temp[t_max]);
++		ret = regmap_read(data->regmap, JC42_REG_TEMP_UPPER, &regval);
++		if (ret)
++			break;
++
++		temp = jc42_temp_from_reg(regval);
+ 		hyst = jc42_hysteresis[(data->config & JC42_CFG_HYST_MASK)
+ 						>> JC42_CFG_HYST_SHIFT];
+ 		*val = temp - hyst;
+-		return 0;
++		break;
+ 	case hwmon_temp_crit_hyst:
+-		temp = jc42_temp_from_reg(data->temp[t_crit]);
++		ret = regmap_read(data->regmap, JC42_REG_TEMP_CRITICAL,
++				  &regval);
++		if (ret)
++			break;
++
++		temp = jc42_temp_from_reg(regval);
+ 		hyst = jc42_hysteresis[(data->config & JC42_CFG_HYST_MASK)
+ 						>> JC42_CFG_HYST_SHIFT];
+ 		*val = temp - hyst;
+-		return 0;
++		break;
+ 	case hwmon_temp_min_alarm:
+-		*val = (data->temp[t_input] >> JC42_ALARM_MIN_BIT) & 1;
+-		return 0;
++		ret = regmap_read(data->regmap, JC42_REG_TEMP, &regval);
++		if (ret)
++			break;
++
++		*val = (regval >> JC42_ALARM_MIN_BIT) & 1;
++		break;
+ 	case hwmon_temp_max_alarm:
+-		*val = (data->temp[t_input] >> JC42_ALARM_MAX_BIT) & 1;
+-		return 0;
++		ret = regmap_read(data->regmap, JC42_REG_TEMP, &regval);
++		if (ret)
++			break;
++
++		*val = (regval >> JC42_ALARM_MAX_BIT) & 1;
++		break;
+ 	case hwmon_temp_crit_alarm:
+-		*val = (data->temp[t_input] >> JC42_ALARM_CRIT_BIT) & 1;
+-		return 0;
++		ret = regmap_read(data->regmap, JC42_REG_TEMP, &regval);
++		if (ret)
++			break;
++
++		*val = (regval >> JC42_ALARM_CRIT_BIT) & 1;
++		break;
+ 	default:
+-		return -EOPNOTSUPP;
++		ret = -EOPNOTSUPP;
++		break;
+ 	}
++
++	mutex_unlock(&data->update_lock);
++
++	return ret;
+ }
+ 
+ static int jc42_write(struct device *dev, enum hwmon_sensor_types type,
+ 		      u32 attr, int channel, long val)
+ {
+ 	struct jc42_data *data = dev_get_drvdata(dev);
+-	struct i2c_client *client = data->client;
++	unsigned int regval;
+ 	int diff, hyst;
+ 	int ret;
+ 
+@@ -324,21 +325,23 @@ static int jc42_write(struct device *dev, enum hwmon_sensor_types type,
+ 
+ 	switch (attr) {
+ 	case hwmon_temp_min:
+-		data->temp[t_min] = jc42_temp_to_reg(val, data->extended);
+-		ret = i2c_smbus_write_word_swapped(client, temp_regs[t_min],
+-						   data->temp[t_min]);
++		ret = regmap_write(data->regmap, JC42_REG_TEMP_LOWER,
++				   jc42_temp_to_reg(val, data->extended));
+ 		break;
+ 	case hwmon_temp_max:
+-		data->temp[t_max] = jc42_temp_to_reg(val, data->extended);
+-		ret = i2c_smbus_write_word_swapped(client, temp_regs[t_max],
+-						   data->temp[t_max]);
++		ret = regmap_write(data->regmap, JC42_REG_TEMP_UPPER,
++				   jc42_temp_to_reg(val, data->extended));
+ 		break;
+ 	case hwmon_temp_crit:
+-		data->temp[t_crit] = jc42_temp_to_reg(val, data->extended);
+-		ret = i2c_smbus_write_word_swapped(client, temp_regs[t_crit],
+-						   data->temp[t_crit]);
++		ret = regmap_write(data->regmap, JC42_REG_TEMP_CRITICAL,
++				   jc42_temp_to_reg(val, data->extended));
+ 		break;
+ 	case hwmon_temp_crit_hyst:
++		ret = regmap_read(data->regmap, JC42_REG_TEMP_CRITICAL,
++				  &regval);
++		if (ret)
++			break;
++
+ 		/*
+ 		 * JC42.4 compliant chips only support four hysteresis values.
+ 		 * Pick best choice and go from there.
+@@ -346,7 +349,7 @@ static int jc42_write(struct device *dev, enum hwmon_sensor_types type,
+ 		val = clamp_val(val, (data->extended ? JC42_TEMP_MIN_EXTENDED
+ 						     : JC42_TEMP_MIN) - 6000,
+ 				JC42_TEMP_MAX);
+-		diff = jc42_temp_from_reg(data->temp[t_crit]) - val;
++		diff = jc42_temp_from_reg(regval) - val;
+ 		hyst = 0;
+ 		if (diff > 0) {
+ 			if (diff < 2250)
+@@ -358,9 +361,8 @@ static int jc42_write(struct device *dev, enum hwmon_sensor_types type,
+ 		}
+ 		data->config = (data->config & ~JC42_CFG_HYST_MASK) |
+ 				(hyst << JC42_CFG_HYST_SHIFT);
+-		ret = i2c_smbus_write_word_swapped(data->client,
+-						   JC42_REG_CONFIG,
+-						   data->config);
++		ret = regmap_write(data->regmap, JC42_REG_CONFIG,
++				   data->config);
+ 		break;
+ 	default:
+ 		ret = -EOPNOTSUPP;
+@@ -458,51 +460,80 @@ static const struct hwmon_chip_info jc42_chip_info = {
+ 	.info = jc42_info,
+ };
+ 
++static bool jc42_readable_reg(struct device *dev, unsigned int reg)
++{
++	return (reg >= JC42_REG_CAP && reg <= JC42_REG_DEVICEID) ||
++		reg == JC42_REG_SMBUS;
++}
++
++static bool jc42_writable_reg(struct device *dev, unsigned int reg)
++{
++	return (reg >= JC42_REG_CONFIG && reg <= JC42_REG_TEMP_CRITICAL) ||
++		reg == JC42_REG_SMBUS;
++}
++
++static bool jc42_volatile_reg(struct device *dev, unsigned int reg)
++{
++	return reg == JC42_REG_CONFIG || reg == JC42_REG_TEMP;
++}
++
++static const struct regmap_config jc42_regmap_config = {
++	.reg_bits = 8,
++	.val_bits = 16,
++	.val_format_endian = REGMAP_ENDIAN_BIG,
++	.max_register = JC42_REG_SMBUS,
++	.writeable_reg = jc42_writable_reg,
++	.readable_reg = jc42_readable_reg,
++	.volatile_reg = jc42_volatile_reg,
++	.cache_type = REGCACHE_RBTREE,
++};
++
+ static int jc42_probe(struct i2c_client *client)
+ {
+ 	struct device *dev = &client->dev;
+ 	struct device *hwmon_dev;
++	unsigned int config, cap;
+ 	struct jc42_data *data;
+-	int config, cap;
++	int ret;
+ 
+ 	data = devm_kzalloc(dev, sizeof(struct jc42_data), GFP_KERNEL);
+ 	if (!data)
+ 		return -ENOMEM;
+ 
+-	data->client = client;
++	data->regmap = devm_regmap_init_i2c(client, &jc42_regmap_config);
++	if (IS_ERR(data->regmap))
++		return PTR_ERR(data->regmap);
++
+ 	i2c_set_clientdata(client, data);
+ 	mutex_init(&data->update_lock);
+ 
+-	cap = i2c_smbus_read_word_swapped(client, JC42_REG_CAP);
+-	if (cap < 0)
+-		return cap;
++	ret = regmap_read(data->regmap, JC42_REG_CAP, &cap);
++	if (ret)
++		return ret;
+ 
+ 	data->extended = !!(cap & JC42_CAP_RANGE);
+ 
+ 	if (device_property_read_bool(dev, "smbus-timeout-disable")) {
+-		int smbus;
+-
+ 		/*
+ 		 * Not all chips support this register, but from a
+ 		 * quick read of various datasheets no chip appears
+ 		 * incompatible with the below attempt to disable
+ 		 * the timeout. And the whole thing is opt-in...
+ 		 */
+-		smbus = i2c_smbus_read_word_swapped(client, JC42_REG_SMBUS);
+-		if (smbus < 0)
+-			return smbus;
+-		i2c_smbus_write_word_swapped(client, JC42_REG_SMBUS,
+-					     smbus | SMBUS_STMOUT);
++		ret = regmap_set_bits(data->regmap, JC42_REG_SMBUS,
++				      SMBUS_STMOUT);
++		if (ret)
++			return ret;
+ 	}
+ 
+-	config = i2c_smbus_read_word_swapped(client, JC42_REG_CONFIG);
+-	if (config < 0)
+-		return config;
++	ret = regmap_read(data->regmap, JC42_REG_CONFIG, &config);
++	if (ret)
++		return ret;
+ 
+ 	data->orig_config = config;
+ 	if (config & JC42_CFG_SHUTDOWN) {
+ 		config &= ~JC42_CFG_SHUTDOWN;
+-		i2c_smbus_write_word_swapped(client, JC42_REG_CONFIG, config);
++		regmap_write(data->regmap, JC42_REG_CONFIG, config);
+ 	}
+ 	data->config = config;
+ 
+@@ -523,7 +554,7 @@ static int jc42_remove(struct i2c_client *client)
+ 
+ 		config = (data->orig_config & ~JC42_CFG_HYST_MASK)
+ 		  | (data->config & JC42_CFG_HYST_MASK);
+-		i2c_smbus_write_word_swapped(client, JC42_REG_CONFIG, config);
++		regmap_write(data->regmap, JC42_REG_CONFIG, config);
+ 	}
+ 	return 0;
+ }
+@@ -535,8 +566,11 @@ static int jc42_suspend(struct device *dev)
+ 	struct jc42_data *data = dev_get_drvdata(dev);
+ 
+ 	data->config |= JC42_CFG_SHUTDOWN;
+-	i2c_smbus_write_word_swapped(data->client, JC42_REG_CONFIG,
+-				     data->config);
++	regmap_write(data->regmap, JC42_REG_CONFIG, data->config);
++
++	regcache_cache_only(data->regmap, true);
++	regcache_mark_dirty(data->regmap);
++
+ 	return 0;
+ }
+ 
+@@ -544,10 +578,13 @@ static int jc42_resume(struct device *dev)
+ {
+ 	struct jc42_data *data = dev_get_drvdata(dev);
+ 
++	regcache_cache_only(data->regmap, false);
++
+ 	data->config &= ~JC42_CFG_SHUTDOWN;
+-	i2c_smbus_write_word_swapped(data->client, JC42_REG_CONFIG,
+-				     data->config);
+-	return 0;
++	regmap_write(data->regmap, JC42_REG_CONFIG, data->config);
++
++	/* Restore cached register values to hardware */
++	return regcache_sync(data->regmap);
+ }
+ 
+ static const struct dev_pm_ops jc42_dev_pm_ops = {
+diff --git a/drivers/i2c/busses/i2c-ismt.c b/drivers/i2c/busses/i2c-ismt.c
+index 3d2d92640651e..cec2b2ae7684f 100644
+--- a/drivers/i2c/busses/i2c-ismt.c
++++ b/drivers/i2c/busses/i2c-ismt.c
+@@ -507,6 +507,9 @@ static int ismt_access(struct i2c_adapter *adap, u16 addr,
+ 		if (read_write == I2C_SMBUS_WRITE) {
+ 			/* Block Write */
+ 			dev_dbg(dev, "I2C_SMBUS_BLOCK_DATA:  WRITE\n");
++			if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX)
++				return -EINVAL;
++
+ 			dma_size = data->block[0] + 1;
+ 			dma_direction = DMA_TO_DEVICE;
+ 			desc->wr_len_cmd = dma_size;
+diff --git a/drivers/i2c/busses/i2c-pxa-pci.c b/drivers/i2c/busses/i2c-pxa-pci.c
+index f614cade432bb..30e38bc8b6db8 100644
+--- a/drivers/i2c/busses/i2c-pxa-pci.c
++++ b/drivers/i2c/busses/i2c-pxa-pci.c
+@@ -105,7 +105,7 @@ static int ce4100_i2c_probe(struct pci_dev *dev,
+ 	int i;
+ 	struct ce4100_devices *sds;
+ 
+-	ret = pci_enable_device_mem(dev);
++	ret = pcim_enable_device(dev);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -114,10 +114,8 @@ static int ce4100_i2c_probe(struct pci_dev *dev,
+ 		return -EINVAL;
+ 	}
+ 	sds = kzalloc(sizeof(*sds), GFP_KERNEL);
+-	if (!sds) {
+-		ret = -ENOMEM;
+-		goto err_mem;
+-	}
++	if (!sds)
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(sds->pdev); i++) {
+ 		sds->pdev[i] = add_i2c_device(dev, i);
+@@ -133,8 +131,6 @@ static int ce4100_i2c_probe(struct pci_dev *dev,
+ 
+ err_dev_add:
+ 	kfree(sds);
+-err_mem:
+-	pci_disable_device(dev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/i2c/muxes/i2c-mux-reg.c b/drivers/i2c/muxes/i2c-mux-reg.c
+index 0e0679f65cf77..30a6de1694e07 100644
+--- a/drivers/i2c/muxes/i2c-mux-reg.c
++++ b/drivers/i2c/muxes/i2c-mux-reg.c
+@@ -183,13 +183,12 @@ static int i2c_mux_reg_probe(struct platform_device *pdev)
+ 	if (!mux->data.reg) {
+ 		dev_info(&pdev->dev,
+ 			"Register not set, using platform resource\n");
+-		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-		mux->data.reg_size = resource_size(res);
+-		mux->data.reg = devm_ioremap_resource(&pdev->dev, res);
++		mux->data.reg = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ 		if (IS_ERR(mux->data.reg)) {
+ 			ret = PTR_ERR(mux->data.reg);
+ 			goto err_put_parent;
+ 		}
++		mux->data.reg_size = resource_size(res);
+ 	}
+ 
+ 	if (mux->data.reg_size != 4 && mux->data.reg_size != 2 &&
+diff --git a/drivers/iio/accel/adis16201.c b/drivers/iio/accel/adis16201.c
+index 84bbdfd2f2ba3..b4ae4f86da3e1 100644
+--- a/drivers/iio/accel/adis16201.c
++++ b/drivers/iio/accel/adis16201.c
+@@ -304,3 +304,4 @@ MODULE_AUTHOR("Barry Song <21cnbao@gmail.com>");
+ MODULE_DESCRIPTION("Analog Devices ADIS16201 Dual-Axis Digital Inclinometer and Accelerometer");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("spi:adis16201");
++MODULE_IMPORT_NS(IIO_ADISLIB);
+diff --git a/drivers/iio/accel/adis16209.c b/drivers/iio/accel/adis16209.c
+index 4a841aec6268b..e6e465f397d9d 100644
+--- a/drivers/iio/accel/adis16209.c
++++ b/drivers/iio/accel/adis16209.c
+@@ -314,3 +314,4 @@ MODULE_AUTHOR("Barry Song <21cnbao@gmail.com>");
+ MODULE_DESCRIPTION("Analog Devices ADIS16209 Dual-Axis Digital Inclinometer and Accelerometer");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("spi:adis16209");
++MODULE_IMPORT_NS(IIO_ADISLIB);
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index 3a6f239d4acca..496cb2b26bfda 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -280,10 +280,10 @@ int ad_sigma_delta_single_conversion(struct iio_dev *indio_dev,
+ 	unsigned int data_reg;
+ 	int ret = 0;
+ 
+-	if (iio_buffer_enabled(indio_dev))
+-		return -EBUSY;
++	ret = iio_device_claim_direct_mode(indio_dev);
++	if (ret)
++		return ret;
+ 
+-	mutex_lock(&indio_dev->mlock);
+ 	ad_sigma_delta_set_channel(sigma_delta, chan->address);
+ 
+ 	spi_bus_lock(sigma_delta->spi->master);
+@@ -322,7 +322,7 @@ out:
+ 	ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_IDLE);
+ 	sigma_delta->bus_locked = false;
+ 	spi_bus_unlock(sigma_delta->spi->master);
+-	mutex_unlock(&indio_dev->mlock);
++	iio_device_release_direct_mode(indio_dev);
+ 
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/iio/adc/ti-adc128s052.c b/drivers/iio/adc/ti-adc128s052.c
+index 83c1ae07b3e9a..8618ae7bc0671 100644
+--- a/drivers/iio/adc/ti-adc128s052.c
++++ b/drivers/iio/adc/ti-adc128s052.c
+@@ -193,13 +193,13 @@ static int adc128_remove(struct spi_device *spi)
+ }
+ 
+ static const struct of_device_id adc128_of_match[] = {
+-	{ .compatible = "ti,adc128s052", },
+-	{ .compatible = "ti,adc122s021", },
+-	{ .compatible = "ti,adc122s051", },
+-	{ .compatible = "ti,adc122s101", },
+-	{ .compatible = "ti,adc124s021", },
+-	{ .compatible = "ti,adc124s051", },
+-	{ .compatible = "ti,adc124s101", },
++	{ .compatible = "ti,adc128s052", .data = (void*)0L, },
++	{ .compatible = "ti,adc122s021", .data = (void*)1L, },
++	{ .compatible = "ti,adc122s051", .data = (void*)1L, },
++	{ .compatible = "ti,adc122s101", .data = (void*)1L, },
++	{ .compatible = "ti,adc124s021", .data = (void*)2L, },
++	{ .compatible = "ti,adc124s051", .data = (void*)2L, },
++	{ .compatible = "ti,adc124s101", .data = (void*)2L, },
+ 	{ /* sentinel */ },
+ };
+ MODULE_DEVICE_TABLE(of, adc128_of_match);
+diff --git a/drivers/iio/gyro/adis16136.c b/drivers/iio/gyro/adis16136.c
+index a11ae9db0d11b..74db8edb42834 100644
+--- a/drivers/iio/gyro/adis16136.c
++++ b/drivers/iio/gyro/adis16136.c
+@@ -599,3 +599,4 @@ module_spi_driver(adis16136_driver);
+ MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>");
+ MODULE_DESCRIPTION("Analog Devices ADIS16133/ADIS16135/ADIS16136 gyroscope driver");
+ MODULE_LICENSE("GPL v2");
++MODULE_IMPORT_NS(IIO_ADISLIB);
+diff --git a/drivers/iio/gyro/adis16260.c b/drivers/iio/gyro/adis16260.c
+index e7c9a3e31c45a..1e45d93de5b7c 100644
+--- a/drivers/iio/gyro/adis16260.c
++++ b/drivers/iio/gyro/adis16260.c
+@@ -438,3 +438,4 @@ module_spi_driver(adis16260_driver);
+ MODULE_AUTHOR("Barry Song <21cnbao@gmail.com>");
+ MODULE_DESCRIPTION("Analog Devices ADIS16260/5 Digital Gyroscope Sensor");
+ MODULE_LICENSE("GPL v2");
++MODULE_IMPORT_NS(IIO_ADISLIB);
+diff --git a/drivers/iio/imu/adis.c b/drivers/iio/imu/adis.c
+index 715eef81bc248..e9821814afec1 100644
+--- a/drivers/iio/imu/adis.c
++++ b/drivers/iio/imu/adis.c
+@@ -34,8 +34,8 @@
+  * @value: The value to write to device (up to 4 bytes)
+  * @size: The size of the @value (in bytes)
+  */
+-int __adis_write_reg(struct adis *adis, unsigned int reg,
+-	unsigned int value, unsigned int size)
++int __adis_write_reg(struct adis *adis, unsigned int reg, unsigned int value,
++		     unsigned int size)
+ {
+ 	unsigned int page = reg / ADIS_PAGE_SIZE;
+ 	int ret, i;
+@@ -118,14 +118,14 @@ int __adis_write_reg(struct adis *adis, unsigned int reg,
+ 	ret = spi_sync(adis->spi, &msg);
+ 	if (ret) {
+ 		dev_err(&adis->spi->dev, "Failed to write register 0x%02X: %d\n",
+-				reg, ret);
++			reg, ret);
+ 	} else {
+ 		adis->current_page = page;
+ 	}
+ 
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(__adis_write_reg);
++EXPORT_SYMBOL_NS_GPL(__adis_write_reg, IIO_ADISLIB);
+ 
+ /**
+  * __adis_read_reg() - read N bytes from register (unlocked version)
+@@ -134,8 +134,8 @@ EXPORT_SYMBOL_GPL(__adis_write_reg);
+  * @val: The value read back from the device
+  * @size: The size of the @val buffer
+  */
+-int __adis_read_reg(struct adis *adis, unsigned int reg,
+-	unsigned int *val, unsigned int size)
++int __adis_read_reg(struct adis *adis, unsigned int reg, unsigned int *val,
++		    unsigned int size)
+ {
+ 	unsigned int page = reg / ADIS_PAGE_SIZE;
+ 	struct spi_message msg;
+@@ -205,12 +205,12 @@ int __adis_read_reg(struct adis *adis, unsigned int reg,
+ 	ret = spi_sync(adis->spi, &msg);
+ 	if (ret) {
+ 		dev_err(&adis->spi->dev, "Failed to read register 0x%02X: %d\n",
+-				reg, ret);
++			reg, ret);
+ 		return ret;
+-	} else {
+-		adis->current_page = page;
+ 	}
+ 
++	adis->current_page = page;
++
+ 	switch (size) {
+ 	case 4:
+ 		*val = get_unaligned_be32(adis->rx);
+@@ -222,7 +222,7 @@ int __adis_read_reg(struct adis *adis, unsigned int reg,
+ 
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(__adis_read_reg);
++EXPORT_SYMBOL_NS_GPL(__adis_read_reg, IIO_ADISLIB);
+ /**
+  * __adis_update_bits_base() - ADIS Update bits function - Unlocked version
+  * @adis: The adis device
+@@ -247,17 +247,17 @@ int __adis_update_bits_base(struct adis *adis, unsigned int reg, const u32 mask,
+ 
+ 	return __adis_write_reg(adis, reg, __val, size);
+ }
+-EXPORT_SYMBOL_GPL(__adis_update_bits_base);
++EXPORT_SYMBOL_NS_GPL(__adis_update_bits_base, IIO_ADISLIB);
+ 
+ #ifdef CONFIG_DEBUG_FS
+ 
+-int adis_debugfs_reg_access(struct iio_dev *indio_dev,
+-	unsigned int reg, unsigned int writeval, unsigned int *readval)
++int adis_debugfs_reg_access(struct iio_dev *indio_dev, unsigned int reg,
++			    unsigned int writeval, unsigned int *readval)
+ {
+ 	struct adis *adis = iio_device_get_drvdata(indio_dev);
+ 
+ 	if (readval) {
+-		uint16_t val16;
++		u16 val16;
+ 		int ret;
+ 
+ 		ret = adis_read_reg_16(adis, reg, &val16);
+@@ -265,36 +265,41 @@ int adis_debugfs_reg_access(struct iio_dev *indio_dev,
+ 			*readval = val16;
+ 
+ 		return ret;
+-	} else {
+-		return adis_write_reg_16(adis, reg, writeval);
+ 	}
++
++	return adis_write_reg_16(adis, reg, writeval);
+ }
+-EXPORT_SYMBOL(adis_debugfs_reg_access);
++EXPORT_SYMBOL_NS(adis_debugfs_reg_access, IIO_ADISLIB);
+ 
+ #endif
+ 
+ /**
+- * adis_enable_irq() - Enable or disable data ready IRQ
++ * __adis_enable_irq() - Enable or disable data ready IRQ (unlocked)
+  * @adis: The adis device
+  * @enable: Whether to enable the IRQ
+  *
+  * Returns 0 on success, negative error code otherwise
+  */
+-int adis_enable_irq(struct adis *adis, bool enable)
++int __adis_enable_irq(struct adis *adis, bool enable)
+ {
+-	int ret = 0;
+-	uint16_t msc;
++	int ret;
++	u16 msc;
+ 
+-	mutex_lock(&adis->state_lock);
++	if (adis->data->enable_irq)
++		return adis->data->enable_irq(adis, enable);
+ 
+-	if (adis->data->enable_irq) {
+-		ret = adis->data->enable_irq(adis, enable);
+-		goto out_unlock;
++	if (adis->data->unmasked_drdy) {
++		if (enable)
++			enable_irq(adis->spi->irq);
++		else
++			disable_irq(adis->spi->irq);
++
++		return 0;
+ 	}
+ 
+ 	ret = __adis_read_reg_16(adis, adis->data->msc_ctrl_reg, &msc);
+ 	if (ret)
+-		goto out_unlock;
++		return ret;
+ 
+ 	msc |= ADIS_MSC_CTRL_DATA_RDY_POL_HIGH;
+ 	msc &= ~ADIS_MSC_CTRL_DATA_RDY_DIO2;
+@@ -303,13 +308,9 @@ int adis_enable_irq(struct adis *adis, bool enable)
+ 	else
+ 		msc &= ~ADIS_MSC_CTRL_DATA_RDY_EN;
+ 
+-	ret = __adis_write_reg_16(adis, adis->data->msc_ctrl_reg, msc);
+-
+-out_unlock:
+-	mutex_unlock(&adis->state_lock);
+-	return ret;
++	return __adis_write_reg_16(adis, adis->data->msc_ctrl_reg, msc);
+ }
+-EXPORT_SYMBOL(adis_enable_irq);
++EXPORT_SYMBOL_NS(__adis_enable_irq, IIO_ADISLIB);
+ 
+ /**
+  * __adis_check_status() - Check the device for error conditions (unlocked)
+@@ -319,7 +320,7 @@ EXPORT_SYMBOL(adis_enable_irq);
+  */
+ int __adis_check_status(struct adis *adis)
+ {
+-	uint16_t status;
++	u16 status;
+ 	int ret;
+ 	int i;
+ 
+@@ -341,7 +342,7 @@ int __adis_check_status(struct adis *adis)
+ 
+ 	return -EIO;
+ }
+-EXPORT_SYMBOL_GPL(__adis_check_status);
++EXPORT_SYMBOL_NS_GPL(__adis_check_status, IIO_ADISLIB);
+ 
+ /**
+  * __adis_reset() - Reset the device (unlocked version)
+@@ -355,7 +356,7 @@ int __adis_reset(struct adis *adis)
+ 	const struct adis_timeout *timeouts = adis->data->timeouts;
+ 
+ 	ret = __adis_write_reg_8(adis, adis->data->glob_cmd_reg,
+-			ADIS_GLOB_CMD_SW_RESET);
++				 ADIS_GLOB_CMD_SW_RESET);
+ 	if (ret) {
+ 		dev_err(&adis->spi->dev, "Failed to reset device: %d\n", ret);
+ 		return ret;
+@@ -365,7 +366,7 @@ int __adis_reset(struct adis *adis)
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(__adis_reset);
++EXPORT_SYMBOL_NS_GPL(__adis_reset, IIO_ADIS_LIB);
+ 
+ static int adis_self_test(struct adis *adis)
+ {
+@@ -411,7 +412,7 @@ int __adis_initial_startup(struct adis *adis)
+ {
+ 	const struct adis_timeout *timeouts = adis->data->timeouts;
+ 	struct gpio_desc *gpio;
+-	uint16_t prod_id;
++	u16 prod_id;
+ 	int ret;
+ 
+ 	/* check if the device has rst pin low */
+@@ -420,7 +421,7 @@ int __adis_initial_startup(struct adis *adis)
+ 		return PTR_ERR(gpio);
+ 
+ 	if (gpio) {
+-		msleep(10);
++		usleep_range(10, 12);
+ 		/* bring device out of reset */
+ 		gpiod_set_value_cansleep(gpio, 0);
+ 		msleep(timeouts->reset_ms);
+@@ -434,7 +435,13 @@ int __adis_initial_startup(struct adis *adis)
+ 	if (ret)
+ 		return ret;
+ 
+-	adis_enable_irq(adis, false);
++	/*
++	 * don't bother calling this if we can't unmask the IRQ as in this case
++	 * the IRQ is most likely not yet requested and we will request it
++	 * with 'IRQF_NO_AUTOEN' anyways.
++	 */
++	if (!adis->data->unmasked_drdy)
++		__adis_enable_irq(adis, false);
+ 
+ 	if (!adis->data->prod_id_reg)
+ 		return 0;
+@@ -450,7 +457,7 @@ int __adis_initial_startup(struct adis *adis)
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(__adis_initial_startup);
++EXPORT_SYMBOL_NS_GPL(__adis_initial_startup, IIO_ADISLIB);
+ 
+ /**
+  * adis_single_conversion() - Performs a single sample conversion
+@@ -468,7 +475,8 @@ EXPORT_SYMBOL_GPL(__adis_initial_startup);
+  * a error bit in the channels raw value set error_mask to 0.
+  */
+ int adis_single_conversion(struct iio_dev *indio_dev,
+-	const struct iio_chan_spec *chan, unsigned int error_mask, int *val)
++			   const struct iio_chan_spec *chan,
++			   unsigned int error_mask, int *val)
+ {
+ 	struct adis *adis = iio_device_get_drvdata(indio_dev);
+ 	unsigned int uval;
+@@ -477,7 +485,7 @@ int adis_single_conversion(struct iio_dev *indio_dev,
+ 	mutex_lock(&adis->state_lock);
+ 
+ 	ret = __adis_read_reg(adis, chan->address, &uval,
+-			chan->scan_type.storagebits / 8);
++			      chan->scan_type.storagebits / 8);
+ 	if (ret)
+ 		goto err_unlock;
+ 
+@@ -497,7 +505,7 @@ err_unlock:
+ 	mutex_unlock(&adis->state_lock);
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(adis_single_conversion);
++EXPORT_SYMBOL_NS_GPL(adis_single_conversion, IIO_ADISLIB);
+ 
+ /**
+  * adis_init() - Initialize adis device structure
+@@ -512,7 +520,7 @@ EXPORT_SYMBOL_GPL(adis_single_conversion);
+  * called.
+  */
+ int adis_init(struct adis *adis, struct iio_dev *indio_dev,
+-	struct spi_device *spi, const struct adis_data *data)
++	      struct spi_device *spi, const struct adis_data *data)
+ {
+ 	if (!data || !data->timeouts) {
+ 		dev_err(&spi->dev, "No config data or timeouts not defined!\n");
+@@ -534,7 +542,7 @@ int adis_init(struct adis *adis, struct iio_dev *indio_dev,
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(adis_init);
++EXPORT_SYMBOL_NS_GPL(adis_init, IIO_ADISLIB);
+ 
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>");
+diff --git a/drivers/iio/imu/adis16400.c b/drivers/iio/imu/adis16400.c
+index 4aff16466da02..c5255116954a7 100644
+--- a/drivers/iio/imu/adis16400.c
++++ b/drivers/iio/imu/adis16400.c
+@@ -1252,3 +1252,4 @@ module_spi_driver(adis16400_driver);
+ MODULE_AUTHOR("Manuel Stahl <manuel.stahl@iis.fraunhofer.de>");
+ MODULE_DESCRIPTION("Analog Devices ADIS16400/5 IMU SPI driver");
+ MODULE_LICENSE("GPL v2");
++MODULE_IMPORT_NS(IIO_ADISLIB);
+diff --git a/drivers/iio/imu/adis16460.c b/drivers/iio/imu/adis16460.c
+index 74a161e397332..a28143a19d3a4 100644
+--- a/drivers/iio/imu/adis16460.c
++++ b/drivers/iio/imu/adis16460.c
+@@ -403,12 +403,12 @@ static int adis16460_probe(struct spi_device *spi)
+ 	if (ret)
+ 		return ret;
+ 
++	/* We cannot mask the interrupt, so ensure it isn't auto enabled */
++	st->adis.irq_flag |= IRQF_NO_AUTOEN;
+ 	ret = devm_adis_setup_buffer_and_trigger(&st->adis, indio_dev, NULL);
+ 	if (ret)
+ 		return ret;
+ 
+-	adis16460_enable_irq(&st->adis, 0);
+-
+ 	ret = __adis_initial_startup(&st->adis);
+ 	if (ret)
+ 		return ret;
+@@ -447,3 +447,4 @@ module_spi_driver(adis16460_driver);
+ MODULE_AUTHOR("Dragos Bogdan <dragos.bogdan@analog.com>");
+ MODULE_DESCRIPTION("Analog Devices ADIS16460 IMU driver");
+ MODULE_LICENSE("GPL");
++MODULE_IMPORT_NS(IIO_ADISLIB);
+diff --git a/drivers/iio/imu/adis16475.c b/drivers/iio/imu/adis16475.c
+index 3c4e4deb87608..aed1cf3bfa13c 100644
+--- a/drivers/iio/imu/adis16475.c
++++ b/drivers/iio/imu/adis16475.c
+@@ -1196,6 +1196,9 @@ static int adis16475_config_irq_pin(struct adis16475 *st)
+ 		return -EINVAL;
+ 	}
+ 
++	/* We cannot mask the interrupt so ensure it's not enabled at request */
++	st->adis.irq_flag |= IRQF_NO_AUTOEN;
++
+ 	val = ADIS16475_MSG_CTRL_DR_POL(polarity);
+ 	ret = __adis_update_bits(&st->adis, ADIS16475_REG_MSG_CTRL,
+ 				 ADIS16475_MSG_CTRL_DR_POL_MASK, val);
+@@ -1300,8 +1303,6 @@ static int adis16475_probe(struct spi_device *spi)
+ 	if (ret)
+ 		return ret;
+ 
+-	adis16475_enable_irq(&st->adis, false);
+-
+ 	ret = devm_iio_device_register(&spi->dev, indio_dev);
+ 	if (ret)
+ 		return ret;
+@@ -1323,3 +1324,4 @@ module_spi_driver(adis16475_driver);
+ MODULE_AUTHOR("Nuno Sa <nuno.sa@analog.com>");
+ MODULE_DESCRIPTION("Analog Devices ADIS16475 IMU driver");
+ MODULE_LICENSE("GPL");
++MODULE_IMPORT_NS(IIO_ADISLIB);
+diff --git a/drivers/iio/imu/adis16480.c b/drivers/iio/imu/adis16480.c
+index dfe86c5893254..c6a3d9a04fce3 100644
+--- a/drivers/iio/imu/adis16480.c
++++ b/drivers/iio/imu/adis16480.c
+@@ -1340,3 +1340,4 @@ module_spi_driver(adis16480_driver);
+ MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>");
+ MODULE_DESCRIPTION("Analog Devices ADIS16480 IMU driver");
+ MODULE_LICENSE("GPL v2");
++MODULE_IMPORT_NS(IIO_ADISLIB);
+diff --git a/drivers/iio/imu/adis_buffer.c b/drivers/iio/imu/adis_buffer.c
+index 175af154e4437..7cc1145910f68 100644
+--- a/drivers/iio/imu/adis_buffer.c
++++ b/drivers/iio/imu/adis_buffer.c
+@@ -20,7 +20,7 @@
+ #include <linux/iio/imu/adis.h>
+ 
+ static int adis_update_scan_mode_burst(struct iio_dev *indio_dev,
+-	const unsigned long *scan_mask)
++				       const unsigned long *scan_mask)
+ {
+ 	struct adis *adis = iio_device_get_drvdata(indio_dev);
+ 	unsigned int burst_length, burst_max_length;
+@@ -63,7 +63,7 @@ static int adis_update_scan_mode_burst(struct iio_dev *indio_dev,
+ }
+ 
+ int adis_update_scan_mode(struct iio_dev *indio_dev,
+-	const unsigned long *scan_mask)
++			  const unsigned long *scan_mask)
+ {
+ 	struct adis *adis = iio_device_get_drvdata(indio_dev);
+ 	const struct iio_chan_spec *chan;
+@@ -120,7 +120,7 @@ int adis_update_scan_mode(struct iio_dev *indio_dev,
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(adis_update_scan_mode);
++EXPORT_SYMBOL_NS_GPL(adis_update_scan_mode, IIO_ADISLIB);
+ 
+ static irqreturn_t adis_trigger_handler(int irq, void *p)
+ {
+@@ -149,7 +149,7 @@ static irqreturn_t adis_trigger_handler(int irq, void *p)
+ 	}
+ 
+ 	iio_push_to_buffers_with_timestamp(indio_dev, adis->buffer,
+-		pf->timestamp);
++					   pf->timestamp);
+ 
+ 	iio_trigger_notify_done(indio_dev->trig);
+ 
+@@ -202,5 +202,5 @@ devm_adis_setup_buffer_and_trigger(struct adis *adis, struct iio_dev *indio_dev,
+ 	return devm_add_action_or_reset(&adis->spi->dev, adis_buffer_cleanup,
+ 					adis);
+ }
+-EXPORT_SYMBOL_GPL(devm_adis_setup_buffer_and_trigger);
++EXPORT_SYMBOL_NS_GPL(devm_adis_setup_buffer_and_trigger, IIO_ADISLIB);
+ 
+diff --git a/drivers/iio/imu/adis_trigger.c b/drivers/iio/imu/adis_trigger.c
+index 64e0ba51cb18e..80adfa58e50cb 100644
+--- a/drivers/iio/imu/adis_trigger.c
++++ b/drivers/iio/imu/adis_trigger.c
+@@ -15,8 +15,7 @@
+ #include <linux/iio/trigger.h>
+ #include <linux/iio/imu/adis.h>
+ 
+-static int adis_data_rdy_trigger_set_state(struct iio_trigger *trig,
+-						bool state)
++static int adis_data_rdy_trigger_set_state(struct iio_trigger *trig, bool state)
+ {
+ 	struct adis *adis = iio_trigger_get_drvdata(trig);
+ 
+@@ -36,18 +35,23 @@ static void adis_trigger_setup(struct adis *adis)
+ 
+ static int adis_validate_irq_flag(struct adis *adis)
+ {
++	unsigned long direction = adis->irq_flag & IRQF_TRIGGER_MASK;
++
++	/* We cannot mask the interrupt so ensure it's not enabled at request */
++	if (adis->data->unmasked_drdy)
++		adis->irq_flag |= IRQF_NO_AUTOEN;
+ 	/*
+ 	 * Typically this devices have data ready either on the rising edge or
+ 	 * on the falling edge of the data ready pin. This checks enforces that
+ 	 * one of those is set in the drivers... It defaults to
+-	 * IRQF_TRIGGER_RISING for backward compatibility wiht devices that
++	 * IRQF_TRIGGER_RISING for backward compatibility with devices that
+ 	 * don't support changing the pin polarity.
+ 	 */
+-	if (!adis->irq_flag) {
+-		adis->irq_flag = IRQF_TRIGGER_RISING;
++	if (direction == IRQF_TRIGGER_NONE) {
++		adis->irq_flag |= IRQF_TRIGGER_RISING;
+ 		return 0;
+-	} else if (adis->irq_flag != IRQF_TRIGGER_RISING &&
+-		   adis->irq_flag != IRQF_TRIGGER_FALLING) {
++	} else if (direction != IRQF_TRIGGER_RISING &&
++		   direction != IRQF_TRIGGER_FALLING) {
+ 		dev_err(&adis->spi->dev, "Invalid IRQ mask: %08lx\n",
+ 			adis->irq_flag);
+ 		return -EINVAL;
+@@ -88,5 +92,5 @@ int devm_adis_probe_trigger(struct adis *adis, struct iio_dev *indio_dev)
+ 
+ 	return devm_iio_trigger_register(&adis->spi->dev, adis->trig);
+ }
+-EXPORT_SYMBOL_GPL(devm_adis_probe_trigger);
++EXPORT_SYMBOL_NS_GPL(devm_adis_probe_trigger, IIO_ADISLIB);
+ 
+diff --git a/drivers/iio/temperature/ltc2983.c b/drivers/iio/temperature/ltc2983.c
+index 8306daa779081..b2ae2d2c7eefc 100644
+--- a/drivers/iio/temperature/ltc2983.c
++++ b/drivers/iio/temperature/ltc2983.c
+@@ -205,6 +205,7 @@ struct ltc2983_data {
+ 	 * Holds the converted temperature
+ 	 */
+ 	__be32 temp ____cacheline_aligned;
++	__be32 chan_val;
+ };
+ 
+ struct ltc2983_sensor {
+@@ -309,19 +310,18 @@ static int __ltc2983_fault_handler(const struct ltc2983_data *st,
+ 	return 0;
+ }
+ 
+-static int __ltc2983_chan_assign_common(const struct ltc2983_data *st,
++static int __ltc2983_chan_assign_common(struct ltc2983_data *st,
+ 					const struct ltc2983_sensor *sensor,
+ 					u32 chan_val)
+ {
+ 	u32 reg = LTC2983_CHAN_START_ADDR(sensor->chan);
+-	__be32 __chan_val;
+ 
+ 	chan_val |= LTC2983_CHAN_TYPE(sensor->type);
+ 	dev_dbg(&st->spi->dev, "Assign reg:0x%04X, val:0x%08X\n", reg,
+ 		chan_val);
+-	__chan_val = cpu_to_be32(chan_val);
+-	return regmap_bulk_write(st->regmap, reg, &__chan_val,
+-				 sizeof(__chan_val));
++	st->chan_val = cpu_to_be32(chan_val);
++	return regmap_bulk_write(st->regmap, reg, &st->chan_val,
++				 sizeof(st->chan_val));
+ }
+ 
+ static int __ltc2983_chan_custom_sensor_assign(struct ltc2983_data *st,
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index d91892ffe2436..5b7abcf102fe9 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -2793,8 +2793,8 @@ err:
+ static void __exit ib_core_cleanup(void)
+ {
+ 	roce_gid_mgmt_cleanup();
+-	nldev_exit();
+ 	rdma_nl_unregister(RDMA_NL_LS);
++	nldev_exit();
+ 	unregister_pernet_device(&rdma_dev_net_ops);
+ 	unregister_blocking_lsm_notifier(&ibdev_lsm_nb);
+ 	ib_sa_cleanup();
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index c90f6378d8396..f7f80707af4b6 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -502,7 +502,7 @@ static int fill_res_qp_entry(struct sk_buff *msg, bool has_cap_net_admin,
+ 
+ 	/* In create_qp() port is not set yet */
+ 	if (qp->port && nla_put_u32(msg, RDMA_NLDEV_ATTR_PORT_INDEX, qp->port))
+-		return -EINVAL;
++		return -EMSGSIZE;
+ 
+ 	ret = nla_put_u32(msg, RDMA_NLDEV_ATTR_RES_LQPN, qp->qp_num);
+ 	if (ret)
+@@ -541,7 +541,7 @@ static int fill_res_cm_id_entry(struct sk_buff *msg, bool has_cap_net_admin,
+ 	struct rdma_cm_id *cm_id = &id_priv->id;
+ 
+ 	if (port && port != cm_id->port_num)
+-		return 0;
++		return -EAGAIN;
+ 
+ 	if (cm_id->port_num &&
+ 	    nla_put_u32(msg, RDMA_NLDEV_ATTR_PORT_INDEX, cm_id->port_num))
+@@ -754,6 +754,8 @@ static int fill_stat_counter_qps(struct sk_buff *msg,
+ 	int ret = 0;
+ 
+ 	table_attr = nla_nest_start(msg, RDMA_NLDEV_ATTR_RES_QP);
++	if (!table_attr)
++		return -EMSGSIZE;
+ 
+ 	rt = &counter->device->res[RDMA_RESTRACK_QP];
+ 	xa_lock(&rt->xa);
+diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
+index 04b1e8f021f64..d5a8d0173709a 100644
+--- a/drivers/infiniband/hw/hfi1/affinity.c
++++ b/drivers/infiniband/hw/hfi1/affinity.c
+@@ -219,6 +219,8 @@ out:
+ 	for (node = 0; node < node_affinity.num_possible_nodes; node++)
+ 		hfi1_per_node_cntr[node] = 1;
+ 
++	pci_dev_put(dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/hw/hfi1/firmware.c b/drivers/infiniband/hw/hfi1/firmware.c
+index 2cf102b5abd44..f3e64c850aa51 100644
+--- a/drivers/infiniband/hw/hfi1/firmware.c
++++ b/drivers/infiniband/hw/hfi1/firmware.c
+@@ -1786,6 +1786,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 
+ 	if (!dd->platform_config.data) {
+ 		dd_dev_err(dd, "%s: Missing config file\n", __func__);
++		ret = -EINVAL;
+ 		goto bail;
+ 	}
+ 	ptr = (u32 *)dd->platform_config.data;
+@@ -1794,6 +1795,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 	ptr++;
+ 	if (magic_num != PLATFORM_CONFIG_MAGIC_NUM) {
+ 		dd_dev_err(dd, "%s: Bad config file\n", __func__);
++		ret = -EINVAL;
+ 		goto bail;
+ 	}
+ 
+@@ -1817,6 +1819,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 	if (file_length > dd->platform_config.size) {
+ 		dd_dev_info(dd, "%s:File claims to be larger than read size\n",
+ 			    __func__);
++		ret = -EINVAL;
+ 		goto bail;
+ 	} else if (file_length < dd->platform_config.size) {
+ 		dd_dev_info(dd,
+@@ -1837,6 +1840,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 			dd_dev_err(dd, "%s: Failed validation at offset %ld\n",
+ 				   __func__, (ptr - (u32 *)
+ 					      dd->platform_config.data));
++			ret = -EINVAL;
+ 			goto bail;
+ 		}
+ 
+@@ -1880,6 +1884,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 					   __func__, table_type,
+ 					   (ptr - (u32 *)
+ 					    dd->platform_config.data));
++				ret = -EINVAL;
+ 				goto bail; /* We don't trust this file now */
+ 			}
+ 			pcfgcache->config_tables[table_type].table = ptr;
+@@ -1899,6 +1904,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 					   __func__, table_type,
+ 					   (ptr -
+ 					    (u32 *)dd->platform_config.data));
++				ret = -EINVAL;
+ 				goto bail; /* We don't trust this file now */
+ 			}
+ 			pcfgcache->config_tables[table_type].table_metadata =
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 6dab03b7aca80..76ed547b76ea7 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -154,21 +154,29 @@ static void set_atomic_seg(const struct ib_send_wr *wr,
+ 		       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S, valid_num_sge);
+ }
+ 
++static unsigned int get_std_sge_num(struct hns_roce_qp *qp)
++{
++	if (qp->ibqp.qp_type == IB_QPT_GSI || qp->ibqp.qp_type == IB_QPT_UD)
++		return 0;
++
++	return HNS_ROCE_SGE_IN_WQE;
++}
++
+ static int fill_ext_sge_inl_data(struct hns_roce_qp *qp,
+ 				 const struct ib_send_wr *wr,
+ 				 unsigned int *sge_idx, u32 msg_len)
+ {
+ 	struct ib_device *ibdev = &(to_hr_dev(qp->ibqp.device))->ib_dev;
+-	unsigned int dseg_len = sizeof(struct hns_roce_v2_wqe_data_seg);
+-	unsigned int ext_sge_sz = qp->sq.max_gs * dseg_len;
+ 	unsigned int left_len_in_pg;
+ 	unsigned int idx = *sge_idx;
++	unsigned int std_sge_num;
+ 	unsigned int i = 0;
+ 	unsigned int len;
+ 	void *addr;
+ 	void *dseg;
+ 
+-	if (msg_len > ext_sge_sz) {
++	std_sge_num = get_std_sge_num(qp);
++	if (msg_len > (qp->sq.max_gs - std_sge_num) * HNS_ROCE_SGE_SIZE) {
+ 		ibdev_err(ibdev,
+ 			  "no enough extended sge space for inline data.\n");
+ 		return -EINVAL;
+@@ -188,7 +196,7 @@ static int fill_ext_sge_inl_data(struct hns_roce_qp *qp,
+ 		if (len <= left_len_in_pg) {
+ 			memcpy(dseg, addr, len);
+ 
+-			idx += len / dseg_len;
++			idx += len / HNS_ROCE_SGE_SIZE;
+ 
+ 			i++;
+ 			if (i >= wr->num_sge)
+@@ -203,7 +211,7 @@ static int fill_ext_sge_inl_data(struct hns_roce_qp *qp,
+ 
+ 			len -= left_len_in_pg;
+ 			addr += left_len_in_pg;
+-			idx += left_len_in_pg / dseg_len;
++			idx += left_len_in_pg / HNS_ROCE_SGE_SIZE;
+ 			dseg = hns_roce_get_extend_sge(qp,
+ 						idx & (qp->sge.sge_cnt - 1));
+ 			left_len_in_pg = 1 << HNS_HW_PAGE_SHIFT;
+@@ -2158,6 +2166,9 @@ static int hns_roce_query_pf_caps(struct hns_roce_dev *hr_dev)
+ 	calc_pg_sz(caps->num_idx_segs, caps->idx_entry_sz, caps->idx_hop_num,
+ 		   1, &caps->idx_buf_pg_sz, &caps->idx_ba_pg_sz, HEM_TYPE_IDX);
+ 
++	if (!(caps->page_size_cap & PAGE_SIZE))
++		caps->page_size_cap = HNS_ROCE_V2_PAGE_SIZE_SUPPORTED;
++
+ 	return 0;
+ }
+ 
+@@ -2730,7 +2741,8 @@ static int set_mtpt_pbl(struct hns_roce_dev *hr_dev,
+ 	int i, count;
+ 
+ 	count = hns_roce_mtr_find(hr_dev, &mr->pbl_mtr, 0, pages,
+-				  ARRAY_SIZE(pages), &pbl_ba);
++				  min_t(int, ARRAY_SIZE(pages), mr->npages),
++				  &pbl_ba);
+ 	if (count < 1) {
+ 		ibdev_err(ibdev, "failed to find PBL mtr, count = %d.\n",
+ 			  count);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index 6d7cc724862fa..1c342a7bd7dff 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -456,10 +456,10 @@ struct ib_mr *hns_roce_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type,
+ 
+ 	return &mr->ibmr;
+ 
+-err_key:
+-	free_mr_key(hr_dev, mr);
+ err_pbl:
+ 	free_mr_pbl(hr_dev, mr);
++err_key:
++	free_mr_key(hr_dev, mr);
+ err_free:
+ 	kfree(mr);
+ 	return ERR_PTR(ret);
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 7a2bec0ac0055..0caff276f2c18 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -4258,6 +4258,40 @@ static bool mlx5_ib_modify_qp_allowed(struct mlx5_ib_dev *dev,
+ 	return false;
+ }
+ 
++static int validate_rd_atomic(struct mlx5_ib_dev *dev, struct ib_qp_attr *attr,
++			      int attr_mask, enum ib_qp_type qp_type)
++{
++	int log_max_ra_res;
++	int log_max_ra_req;
++
++	if (qp_type == MLX5_IB_QPT_DCI) {
++		log_max_ra_res = 1 << MLX5_CAP_GEN(dev->mdev,
++						   log_max_ra_res_dc);
++		log_max_ra_req = 1 << MLX5_CAP_GEN(dev->mdev,
++						   log_max_ra_req_dc);
++	} else {
++		log_max_ra_res = 1 << MLX5_CAP_GEN(dev->mdev,
++						   log_max_ra_res_qp);
++		log_max_ra_req = 1 << MLX5_CAP_GEN(dev->mdev,
++						   log_max_ra_req_qp);
++	}
++
++	if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
++	    attr->max_rd_atomic > log_max_ra_res) {
++		mlx5_ib_dbg(dev, "invalid max_rd_atomic value %d\n",
++			    attr->max_rd_atomic);
++		return false;
++	}
++
++	if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
++	    attr->max_dest_rd_atomic > log_max_ra_req) {
++		mlx5_ib_dbg(dev, "invalid max_dest_rd_atomic value %d\n",
++			    attr->max_dest_rd_atomic);
++		return false;
++	}
++	return true;
++}
++
+ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 		      int attr_mask, struct ib_udata *udata)
+ {
+@@ -4352,21 +4386,8 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 		}
+ 	}
+ 
+-	if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
+-	    attr->max_rd_atomic >
+-	    (1 << MLX5_CAP_GEN(dev->mdev, log_max_ra_res_qp))) {
+-		mlx5_ib_dbg(dev, "invalid max_rd_atomic value %d\n",
+-			    attr->max_rd_atomic);
+-		goto out;
+-	}
+-
+-	if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
+-	    attr->max_dest_rd_atomic >
+-	    (1 << MLX5_CAP_GEN(dev->mdev, log_max_ra_req_qp))) {
+-		mlx5_ib_dbg(dev, "invalid max_dest_rd_atomic value %d\n",
+-			    attr->max_dest_rd_atomic);
++	if (!validate_rd_atomic(dev, attr, attr_mask, qp_type))
+ 		goto out;
+-	}
+ 
+ 	if (cur_state == new_state && cur_state == IB_QPS_RESET) {
+ 		err = 0;
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 2e4b008f03870..99c1b3553e6e0 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -812,12 +812,12 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
+ 		qp->resp.mr = NULL;
+ 	}
+ 
+-	if (qp_type(qp) == IB_QPT_RC)
+-		sk_dst_reset(qp->sk->sk);
+-
+ 	free_rd_atomic_resources(qp);
+ 
+ 	if (qp->sk) {
++		if (qp_type(qp) == IB_QPT_RC)
++			sk_dst_reset(qp->sk->sk);
++
+ 		kernel_sock_shutdown(qp->sk, SHUT_RDWR);
+ 		sock_release(qp->sk);
+ 	}
+diff --git a/drivers/infiniband/sw/siw/siw_cq.c b/drivers/infiniband/sw/siw/siw_cq.c
+index d68e37859e73b..403029de6b92d 100644
+--- a/drivers/infiniband/sw/siw/siw_cq.c
++++ b/drivers/infiniband/sw/siw/siw_cq.c
+@@ -56,8 +56,6 @@ int siw_reap_cqe(struct siw_cq *cq, struct ib_wc *wc)
+ 	if (READ_ONCE(cqe->flags) & SIW_WQE_VALID) {
+ 		memset(wc, 0, sizeof(*wc));
+ 		wc->wr_id = cqe->id;
+-		wc->status = map_cqe_status[cqe->status].ib;
+-		wc->opcode = map_wc_opcode[cqe->opcode];
+ 		wc->byte_len = cqe->bytes;
+ 
+ 		/*
+@@ -71,10 +69,32 @@ int siw_reap_cqe(struct siw_cq *cq, struct ib_wc *wc)
+ 				wc->wc_flags = IB_WC_WITH_INVALIDATE;
+ 			}
+ 			wc->qp = cqe->base_qp;
++			wc->opcode = map_wc_opcode[cqe->opcode];
++			wc->status = map_cqe_status[cqe->status].ib;
+ 			siw_dbg_cq(cq,
+ 				   "idx %u, type %d, flags %2x, id 0x%pK\n",
+ 				   cq->cq_get % cq->num_cqe, cqe->opcode,
+ 				   cqe->flags, (void *)(uintptr_t)cqe->id);
++		} else {
++			/*
++			 * A malicious user may set invalid opcode or
++			 * status in the user mmapped CQE array.
++			 * Sanity check and correct values in that case
++			 * to avoid out-of-bounds access to global arrays
++			 * for opcode and status mapping.
++			 */
++			u8 opcode = cqe->opcode;
++			u16 status = cqe->status;
++
++			if (opcode >= SIW_NUM_OPCODES) {
++				opcode = 0;
++				status = SIW_WC_GENERAL_ERR;
++			} else if (status >= SIW_NUM_WC_STATUS) {
++				status = SIW_WC_GENERAL_ERR;
++			}
++			wc->opcode = map_wc_opcode[opcode];
++			wc->status = map_cqe_status[status].ib;
++
+ 		}
+ 		WRITE_ONCE(cqe->flags, 0);
+ 		cq->cq_get++;
+diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
+index 3c3ae5ef29428..df8802b4981cf 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
+@@ -29,7 +29,7 @@ static struct page *siw_get_pblpage(struct siw_mem *mem, u64 addr, int *idx)
+ 	dma_addr_t paddr = siw_pbl_get_buffer(pbl, offset, NULL, idx);
+ 
+ 	if (paddr)
+-		return virt_to_page((void *)paddr);
++		return virt_to_page((void *)(uintptr_t)paddr);
+ 
+ 	return NULL;
+ }
+diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
+index 34e847a91eb80..d043793ff0f53 100644
+--- a/drivers/infiniband/sw/siw/siw_verbs.c
++++ b/drivers/infiniband/sw/siw/siw_verbs.c
+@@ -672,13 +672,45 @@ static int siw_copy_inline_sgl(const struct ib_send_wr *core_wr,
+ static int siw_sq_flush_wr(struct siw_qp *qp, const struct ib_send_wr *wr,
+ 			   const struct ib_send_wr **bad_wr)
+ {
+-	struct siw_sqe sqe = {};
+ 	int rv = 0;
+ 
+ 	while (wr) {
+-		sqe.id = wr->wr_id;
+-		sqe.opcode = wr->opcode;
+-		rv = siw_sqe_complete(qp, &sqe, 0, SIW_WC_WR_FLUSH_ERR);
++		struct siw_sqe sqe = {};
++
++		switch (wr->opcode) {
++		case IB_WR_RDMA_WRITE:
++			sqe.opcode = SIW_OP_WRITE;
++			break;
++		case IB_WR_RDMA_READ:
++			sqe.opcode = SIW_OP_READ;
++			break;
++		case IB_WR_RDMA_READ_WITH_INV:
++			sqe.opcode = SIW_OP_READ_LOCAL_INV;
++			break;
++		case IB_WR_SEND:
++			sqe.opcode = SIW_OP_SEND;
++			break;
++		case IB_WR_SEND_WITH_IMM:
++			sqe.opcode = SIW_OP_SEND_WITH_IMM;
++			break;
++		case IB_WR_SEND_WITH_INV:
++			sqe.opcode = SIW_OP_SEND_REMOTE_INV;
++			break;
++		case IB_WR_LOCAL_INV:
++			sqe.opcode = SIW_OP_INVAL_STAG;
++			break;
++		case IB_WR_REG_MR:
++			sqe.opcode = SIW_OP_REG_MR;
++			break;
++		default:
++			rv = -EINVAL;
++			break;
++		}
++		if (!rv) {
++			sqe.id = wr->wr_id;
++			rv = siw_sqe_complete(qp, &sqe, 0,
++					      SIW_WC_WR_FLUSH_ERR);
++		}
+ 		if (rv) {
+ 			if (bad_wr)
+ 				*bad_wr = wr;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_netlink.c b/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
+index 5b05cf3837da1..28e9b70844e44 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
+@@ -42,6 +42,11 @@ static const struct nla_policy ipoib_policy[IFLA_IPOIB_MAX + 1] = {
+ 	[IFLA_IPOIB_UMCAST]	= { .type = NLA_U16 },
+ };
+ 
++static unsigned int ipoib_get_max_num_queues(void)
++{
++	return min_t(unsigned int, num_possible_cpus(), 128);
++}
++
+ static int ipoib_fill_info(struct sk_buff *skb, const struct net_device *dev)
+ {
+ 	struct ipoib_dev_priv *priv = ipoib_priv(dev);
+@@ -173,6 +178,8 @@ static struct rtnl_link_ops ipoib_link_ops __read_mostly = {
+ 	.changelink	= ipoib_changelink,
+ 	.get_size	= ipoib_get_size,
+ 	.fill_info	= ipoib_fill_info,
++	.get_num_rx_queues = ipoib_get_max_num_queues,
++	.get_num_tx_queues = ipoib_get_max_num_queues,
+ };
+ 
+ struct rtnl_link_ops *ipoib_get_link_ops(void)
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index b4ccb333a8342..adbd56af379ff 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -3397,7 +3397,8 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 			break;
+ 
+ 		case SRP_OPT_PKEY:
+-			if (match_hex(args, &token)) {
++			ret = match_hex(args, &token);
++			if (ret) {
+ 				pr_warn("bad P_Key parameter '%s'\n", p);
+ 				goto out;
+ 			}
+@@ -3457,7 +3458,8 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 			break;
+ 
+ 		case SRP_OPT_MAX_SECT:
+-			if (match_int(args, &token)) {
++			ret = match_int(args, &token);
++			if (ret) {
+ 				pr_warn("bad max sect parameter '%s'\n", p);
+ 				goto out;
+ 			}
+@@ -3465,8 +3467,15 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 			break;
+ 
+ 		case SRP_OPT_QUEUE_SIZE:
+-			if (match_int(args, &token) || token < 1) {
++			ret = match_int(args, &token);
++			if (ret) {
++				pr_warn("match_int() failed for queue_size parameter '%s', Error %d\n",
++					p, ret);
++				goto out;
++			}
++			if (token < 1) {
+ 				pr_warn("bad queue_size parameter '%s'\n", p);
++				ret = -EINVAL;
+ 				goto out;
+ 			}
+ 			target->scsi_host->can_queue = token;
+@@ -3477,25 +3486,40 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 			break;
+ 
+ 		case SRP_OPT_MAX_CMD_PER_LUN:
+-			if (match_int(args, &token) || token < 1) {
++			ret = match_int(args, &token);
++			if (ret) {
++				pr_warn("match_int() failed for max cmd_per_lun parameter '%s', Error %d\n",
++					p, ret);
++				goto out;
++			}
++			if (token < 1) {
+ 				pr_warn("bad max cmd_per_lun parameter '%s'\n",
+ 					p);
++				ret = -EINVAL;
+ 				goto out;
+ 			}
+ 			target->scsi_host->cmd_per_lun = token;
+ 			break;
+ 
+ 		case SRP_OPT_TARGET_CAN_QUEUE:
+-			if (match_int(args, &token) || token < 1) {
++			ret = match_int(args, &token);
++			if (ret) {
++				pr_warn("match_int() failed for max target_can_queue parameter '%s', Error %d\n",
++					p, ret);
++				goto out;
++			}
++			if (token < 1) {
+ 				pr_warn("bad max target_can_queue parameter '%s'\n",
+ 					p);
++				ret = -EINVAL;
+ 				goto out;
+ 			}
+ 			target->target_can_queue = token;
+ 			break;
+ 
+ 		case SRP_OPT_IO_CLASS:
+-			if (match_hex(args, &token)) {
++			ret = match_hex(args, &token);
++			if (ret) {
+ 				pr_warn("bad IO class parameter '%s'\n", p);
+ 				goto out;
+ 			}
+@@ -3504,6 +3528,7 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 				pr_warn("unknown IO class parameter value %x specified (use %x or %x).\n",
+ 					token, SRP_REV10_IB_IO_CLASS,
+ 					SRP_REV16A_IB_IO_CLASS);
++				ret = -EINVAL;
+ 				goto out;
+ 			}
+ 			target->io_class = token;
+@@ -3526,16 +3551,24 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 			break;
+ 
+ 		case SRP_OPT_CMD_SG_ENTRIES:
+-			if (match_int(args, &token) || token < 1 || token > 255) {
++			ret = match_int(args, &token);
++			if (ret) {
++				pr_warn("match_int() failed for max cmd_sg_entries parameter '%s', Error %d\n",
++					p, ret);
++				goto out;
++			}
++			if (token < 1 || token > 255) {
+ 				pr_warn("bad max cmd_sg_entries parameter '%s'\n",
+ 					p);
++				ret = -EINVAL;
+ 				goto out;
+ 			}
+ 			target->cmd_sg_cnt = token;
+ 			break;
+ 
+ 		case SRP_OPT_ALLOW_EXT_SG:
+-			if (match_int(args, &token)) {
++			ret = match_int(args, &token);
++			if (ret) {
+ 				pr_warn("bad allow_ext_sg parameter '%s'\n", p);
+ 				goto out;
+ 			}
+@@ -3543,43 +3576,77 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 			break;
+ 
+ 		case SRP_OPT_SG_TABLESIZE:
+-			if (match_int(args, &token) || token < 1 ||
+-					token > SG_MAX_SEGMENTS) {
++			ret = match_int(args, &token);
++			if (ret) {
++				pr_warn("match_int() failed for max sg_tablesize parameter '%s', Error %d\n",
++					p, ret);
++				goto out;
++			}
++			if (token < 1 || token > SG_MAX_SEGMENTS) {
+ 				pr_warn("bad max sg_tablesize parameter '%s'\n",
+ 					p);
++				ret = -EINVAL;
+ 				goto out;
+ 			}
+ 			target->sg_tablesize = token;
+ 			break;
+ 
+ 		case SRP_OPT_COMP_VECTOR:
+-			if (match_int(args, &token) || token < 0) {
++			ret = match_int(args, &token);
++			if (ret) {
++				pr_warn("match_int() failed for comp_vector parameter '%s', Error %d\n",
++					p, ret);
++				goto out;
++			}
++			if (token < 0) {
+ 				pr_warn("bad comp_vector parameter '%s'\n", p);
++				ret = -EINVAL;
+ 				goto out;
+ 			}
+ 			target->comp_vector = token;
+ 			break;
+ 
+ 		case SRP_OPT_TL_RETRY_COUNT:
+-			if (match_int(args, &token) || token < 2 || token > 7) {
++			ret = match_int(args, &token);
++			if (ret) {
++				pr_warn("match_int() failed for tl_retry_count parameter '%s', Error %d\n",
++					p, ret);
++				goto out;
++			}
++			if (token < 2 || token > 7) {
+ 				pr_warn("bad tl_retry_count parameter '%s' (must be a number between 2 and 7)\n",
+ 					p);
++				ret = -EINVAL;
+ 				goto out;
+ 			}
+ 			target->tl_retry_count = token;
+ 			break;
+ 
+ 		case SRP_OPT_MAX_IT_IU_SIZE:
+-			if (match_int(args, &token) || token < 0) {
++			ret = match_int(args, &token);
++			if (ret) {
++				pr_warn("match_int() failed for max it_iu_size parameter '%s', Error %d\n",
++					p, ret);
++				goto out;
++			}
++			if (token < 0) {
+ 				pr_warn("bad maximum initiator to target IU size '%s'\n", p);
++				ret = -EINVAL;
+ 				goto out;
+ 			}
+ 			target->max_it_iu_size = token;
+ 			break;
+ 
+ 		case SRP_OPT_CH_COUNT:
+-			if (match_int(args, &token) || token < 1) {
++			ret = match_int(args, &token);
++			if (ret) {
++				pr_warn("match_int() failed for channel count parameter '%s', Error %d\n",
++					p, ret);
++				goto out;
++			}
++			if (token < 1) {
+ 				pr_warn("bad channel count %s\n", p);
++				ret = -EINVAL;
+ 				goto out;
+ 			}
+ 			target->ch_count = token;
+@@ -3588,6 +3655,7 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 		default:
+ 			pr_warn("unknown parameter or missing value '%s' in target creation request\n",
+ 				p);
++			ret = -EINVAL;
+ 			goto out;
+ 		}
+ 	}
+diff --git a/drivers/input/joystick/Kconfig b/drivers/input/joystick/Kconfig
+index b080f0cfb068f..d8ec5193a941f 100644
+--- a/drivers/input/joystick/Kconfig
++++ b/drivers/input/joystick/Kconfig
+@@ -45,6 +45,7 @@ config JOYSTICK_A3D
+ config JOYSTICK_ADC
+ 	tristate "Simple joystick connected over ADC"
+ 	depends on IIO
++	select IIO_BUFFER
+ 	select IIO_BUFFER_CB
+ 	help
+ 	  Say Y here if you have a simple joystick connected over ADC.
+diff --git a/drivers/input/touchscreen/elants_i2c.c b/drivers/input/touchscreen/elants_i2c.c
+index c09aefa2661dd..ca9cee5851a05 100644
+--- a/drivers/input/touchscreen/elants_i2c.c
++++ b/drivers/input/touchscreen/elants_i2c.c
+@@ -1219,14 +1219,12 @@ static int elants_i2c_power_on(struct elants_data *ts)
+ 	if (IS_ERR_OR_NULL(ts->reset_gpio))
+ 		return 0;
+ 
+-	gpiod_set_value_cansleep(ts->reset_gpio, 1);
+-
+ 	error = regulator_enable(ts->vcc33);
+ 	if (error) {
+ 		dev_err(&ts->client->dev,
+ 			"failed to enable vcc33 regulator: %d\n",
+ 			error);
+-		goto release_reset_gpio;
++		return error;
+ 	}
+ 
+ 	error = regulator_enable(ts->vccio);
+@@ -1235,7 +1233,7 @@ static int elants_i2c_power_on(struct elants_data *ts)
+ 			"failed to enable vccio regulator: %d\n",
+ 			error);
+ 		regulator_disable(ts->vcc33);
+-		goto release_reset_gpio;
++		return error;
+ 	}
+ 
+ 	/*
+@@ -1244,7 +1242,6 @@ static int elants_i2c_power_on(struct elants_data *ts)
+ 	 */
+ 	udelay(ELAN_POWERON_DELAY_USEC);
+ 
+-release_reset_gpio:
+ 	gpiod_set_value_cansleep(ts->reset_gpio, 0);
+ 	if (error)
+ 		return error;
+@@ -1352,7 +1349,7 @@ static int elants_i2c_probe(struct i2c_client *client,
+ 		return error;
+ 	}
+ 
+-	ts->reset_gpio = devm_gpiod_get(&client->dev, "reset", GPIOD_OUT_LOW);
++	ts->reset_gpio = devm_gpiod_get(&client->dev, "reset", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(ts->reset_gpio)) {
+ 		error = PTR_ERR(ts->reset_gpio);
+ 
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index e988f6f198c5c..310ab24d003a0 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -3126,6 +3126,13 @@ static int __init parse_ivrs_acpihid(char *str)
+ 		return 1;
+ 	}
+ 
++	/*
++	 * Ignore leading zeroes after ':', so e.g., AMDI0095:00
++	 * will match AMDI0095:0 in the second strcmp in acpi_dev_hid_uid_match
++	 */
++	while (*uid == '0' && *(uid + 1))
++		uid++;
++
+ 	i = early_acpihid_map_size++;
+ 	memcpy(early_acpihid_map[i].hid, hid, strlen(hid));
+ 	memcpy(early_acpihid_map[i].uid, uid, strlen(uid));
+diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c
+index fb61bdca4c2c1..16776e3c6eabf 100644
+--- a/drivers/iommu/amd/iommu_v2.c
++++ b/drivers/iommu/amd/iommu_v2.c
+@@ -587,6 +587,7 @@ out_drop_state:
+ 	put_device_state(dev_state);
+ 
+ out:
++	pci_dev_put(pdev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/iommu/fsl_pamu.c b/drivers/iommu/fsl_pamu.c
+index b9a974d978311..25689bdf812e5 100644
+--- a/drivers/iommu/fsl_pamu.c
++++ b/drivers/iommu/fsl_pamu.c
+@@ -1122,7 +1122,7 @@ static int fsl_pamu_probe(struct platform_device *pdev)
+ 		ret = create_csd(ppaact_phys, mem_size, csd_port_id);
+ 		if (ret) {
+ 			dev_err(dev, "could not create coherence subdomain\n");
+-			return ret;
++			goto error;
+ 		}
+ 	}
+ 
+diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-iommu.c
+index ea6db13419165..65aa30d55d3ab 100644
+--- a/drivers/iommu/sun50i-iommu.c
++++ b/drivers/iommu/sun50i-iommu.c
+@@ -28,6 +28,7 @@
+ #include <linux/types.h>
+ 
+ #define IOMMU_RESET_REG			0x010
++#define IOMMU_RESET_RELEASE_ALL			0xffffffff
+ #define IOMMU_ENABLE_REG		0x020
+ #define IOMMU_ENABLE_ENABLE			BIT(0)
+ 
+@@ -271,7 +272,7 @@ static u32 sun50i_mk_pte(phys_addr_t page, int prot)
+ 	enum sun50i_iommu_aci aci;
+ 	u32 flags = 0;
+ 
+-	if (prot & (IOMMU_READ | IOMMU_WRITE))
++	if ((prot & (IOMMU_READ | IOMMU_WRITE)) == (IOMMU_READ | IOMMU_WRITE))
+ 		aci = SUN50I_IOMMU_ACI_RD_WR;
+ 	else if (prot & IOMMU_READ)
+ 		aci = SUN50I_IOMMU_ACI_RD;
+@@ -512,7 +513,7 @@ static u32 *sun50i_dte_get_page_table(struct sun50i_iommu_domain *sun50i_domain,
+ 		sun50i_iommu_free_page_table(iommu, drop_pt);
+ 	}
+ 
+-	sun50i_table_flush(sun50i_domain, page_table, PT_SIZE);
++	sun50i_table_flush(sun50i_domain, page_table, NUM_PT_ENTRIES);
+ 	sun50i_table_flush(sun50i_domain, dte_addr, 1);
+ 
+ 	return page_table;
+@@ -602,7 +603,6 @@ static struct iommu_domain *sun50i_iommu_domain_alloc(unsigned type)
+ 	struct sun50i_iommu_domain *sun50i_domain;
+ 
+ 	if (type != IOMMU_DOMAIN_DMA &&
+-	    type != IOMMU_DOMAIN_IDENTITY &&
+ 	    type != IOMMU_DOMAIN_UNMANAGED)
+ 		return NULL;
+ 
+@@ -880,8 +880,8 @@ static phys_addr_t sun50i_iommu_handle_perm_irq(struct sun50i_iommu *iommu)
+ 
+ static irqreturn_t sun50i_iommu_irq(int irq, void *dev_id)
+ {
++	u32 status, l1_status, l2_status, resets;
+ 	struct sun50i_iommu *iommu = dev_id;
+-	u32 status;
+ 
+ 	spin_lock(&iommu->iommu_lock);
+ 
+@@ -891,6 +891,9 @@ static irqreturn_t sun50i_iommu_irq(int irq, void *dev_id)
+ 		return IRQ_NONE;
+ 	}
+ 
++	l1_status = iommu_read(iommu, IOMMU_L1PG_INT_REG);
++	l2_status = iommu_read(iommu, IOMMU_L2PG_INT_REG);
++
+ 	if (status & IOMMU_INT_INVALID_L2PG)
+ 		sun50i_iommu_handle_pt_irq(iommu,
+ 					    IOMMU_INT_ERR_ADDR_L2_REG,
+@@ -904,8 +907,9 @@ static irqreturn_t sun50i_iommu_irq(int irq, void *dev_id)
+ 
+ 	iommu_write(iommu, IOMMU_INT_CLR_REG, status);
+ 
+-	iommu_write(iommu, IOMMU_RESET_REG, ~status);
+-	iommu_write(iommu, IOMMU_RESET_REG, status);
++	resets = (status | l1_status | l2_status) & IOMMU_INT_MASTER_MASK;
++	iommu_write(iommu, IOMMU_RESET_REG, ~resets);
++	iommu_write(iommu, IOMMU_RESET_REG, IOMMU_RESET_RELEASE_ALL);
+ 
+ 	spin_unlock(&iommu->iommu_lock);
+ 
+diff --git a/drivers/irqchip/irq-gic-pm.c b/drivers/irqchip/irq-gic-pm.c
+index 1337ceceb59b9..8be7d136c3bf8 100644
+--- a/drivers/irqchip/irq-gic-pm.c
++++ b/drivers/irqchip/irq-gic-pm.c
+@@ -104,7 +104,7 @@ static int gic_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(dev);
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		goto rpm_disable;
+ 
+diff --git a/drivers/isdn/hardware/mISDN/hfcmulti.c b/drivers/isdn/hardware/mISDN/hfcmulti.c
+index 7013a3f084299..4c5b6772562dc 100644
+--- a/drivers/isdn/hardware/mISDN/hfcmulti.c
++++ b/drivers/isdn/hardware/mISDN/hfcmulti.c
+@@ -3219,6 +3219,7 @@ static int
+ hfcm_l1callback(struct dchannel *dch, u_int cmd)
+ {
+ 	struct hfc_multi	*hc = dch->hw;
++	struct sk_buff_head	free_queue;
+ 	u_long	flags;
+ 
+ 	switch (cmd) {
+@@ -3247,6 +3248,7 @@ hfcm_l1callback(struct dchannel *dch, u_int cmd)
+ 		l1_event(dch->l1, HW_POWERUP_IND);
+ 		break;
+ 	case HW_DEACT_REQ:
++		__skb_queue_head_init(&free_queue);
+ 		/* start deactivation */
+ 		spin_lock_irqsave(&hc->lock, flags);
+ 		if (hc->ctype == HFC_TYPE_E1) {
+@@ -3266,20 +3268,21 @@ hfcm_l1callback(struct dchannel *dch, u_int cmd)
+ 				plxsd_checksync(hc, 0);
+ 			}
+ 		}
+-		skb_queue_purge(&dch->squeue);
++		skb_queue_splice_init(&dch->squeue, &free_queue);
+ 		if (dch->tx_skb) {
+-			dev_kfree_skb(dch->tx_skb);
++			__skb_queue_tail(&free_queue, dch->tx_skb);
+ 			dch->tx_skb = NULL;
+ 		}
+ 		dch->tx_idx = 0;
+ 		if (dch->rx_skb) {
+-			dev_kfree_skb(dch->rx_skb);
++			__skb_queue_tail(&free_queue, dch->rx_skb);
+ 			dch->rx_skb = NULL;
+ 		}
+ 		test_and_clear_bit(FLG_TX_BUSY, &dch->Flags);
+ 		if (test_and_clear_bit(FLG_BUSY_TIMER, &dch->Flags))
+ 			del_timer(&dch->timer);
+ 		spin_unlock_irqrestore(&hc->lock, flags);
++		__skb_queue_purge(&free_queue);
+ 		break;
+ 	case HW_POWERUP_REQ:
+ 		spin_lock_irqsave(&hc->lock, flags);
+@@ -3386,6 +3389,9 @@ handle_dmsg(struct mISDNchannel *ch, struct sk_buff *skb)
+ 	case PH_DEACTIVATE_REQ:
+ 		test_and_clear_bit(FLG_L2_ACTIVATED, &dch->Flags);
+ 		if (dch->dev.D.protocol != ISDN_P_TE_S0) {
++			struct sk_buff_head free_queue;
++
++			__skb_queue_head_init(&free_queue);
+ 			spin_lock_irqsave(&hc->lock, flags);
+ 			if (debug & DEBUG_HFCMULTI_MSG)
+ 				printk(KERN_DEBUG
+@@ -3407,14 +3413,14 @@ handle_dmsg(struct mISDNchannel *ch, struct sk_buff *skb)
+ 				/* deactivate */
+ 				dch->state = 1;
+ 			}
+-			skb_queue_purge(&dch->squeue);
++			skb_queue_splice_init(&dch->squeue, &free_queue);
+ 			if (dch->tx_skb) {
+-				dev_kfree_skb(dch->tx_skb);
++				__skb_queue_tail(&free_queue, dch->tx_skb);
+ 				dch->tx_skb = NULL;
+ 			}
+ 			dch->tx_idx = 0;
+ 			if (dch->rx_skb) {
+-				dev_kfree_skb(dch->rx_skb);
++				__skb_queue_tail(&free_queue, dch->rx_skb);
+ 				dch->rx_skb = NULL;
+ 			}
+ 			test_and_clear_bit(FLG_TX_BUSY, &dch->Flags);
+@@ -3426,6 +3432,7 @@ handle_dmsg(struct mISDNchannel *ch, struct sk_buff *skb)
+ #endif
+ 			ret = 0;
+ 			spin_unlock_irqrestore(&hc->lock, flags);
++			__skb_queue_purge(&free_queue);
+ 		} else
+ 			ret = l1_event(dch->l1, hh->prim);
+ 		break;
+diff --git a/drivers/isdn/hardware/mISDN/hfcpci.c b/drivers/isdn/hardware/mISDN/hfcpci.c
+index af17459c1a5c0..eba58b99cd29d 100644
+--- a/drivers/isdn/hardware/mISDN/hfcpci.c
++++ b/drivers/isdn/hardware/mISDN/hfcpci.c
+@@ -1617,16 +1617,19 @@ hfcpci_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb)
+ 		test_and_clear_bit(FLG_L2_ACTIVATED, &dch->Flags);
+ 		spin_lock_irqsave(&hc->lock, flags);
+ 		if (hc->hw.protocol == ISDN_P_NT_S0) {
++			struct sk_buff_head free_queue;
++
++			__skb_queue_head_init(&free_queue);
+ 			/* prepare deactivation */
+ 			Write_hfc(hc, HFCPCI_STATES, 0x40);
+-			skb_queue_purge(&dch->squeue);
++			skb_queue_splice_init(&dch->squeue, &free_queue);
+ 			if (dch->tx_skb) {
+-				dev_kfree_skb(dch->tx_skb);
++				__skb_queue_tail(&free_queue, dch->tx_skb);
+ 				dch->tx_skb = NULL;
+ 			}
+ 			dch->tx_idx = 0;
+ 			if (dch->rx_skb) {
+-				dev_kfree_skb(dch->rx_skb);
++				__skb_queue_tail(&free_queue, dch->rx_skb);
+ 				dch->rx_skb = NULL;
+ 			}
+ 			test_and_clear_bit(FLG_TX_BUSY, &dch->Flags);
+@@ -1639,10 +1642,12 @@ hfcpci_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb)
+ 			hc->hw.mst_m &= ~HFCPCI_MASTER;
+ 			Write_hfc(hc, HFCPCI_MST_MODE, hc->hw.mst_m);
+ 			ret = 0;
++			spin_unlock_irqrestore(&hc->lock, flags);
++			__skb_queue_purge(&free_queue);
+ 		} else {
+ 			ret = l1_event(dch->l1, hh->prim);
++			spin_unlock_irqrestore(&hc->lock, flags);
+ 		}
+-		spin_unlock_irqrestore(&hc->lock, flags);
+ 		break;
+ 	}
+ 	if (!ret)
+diff --git a/drivers/isdn/hardware/mISDN/hfcsusb.c b/drivers/isdn/hardware/mISDN/hfcsusb.c
+index cd5642cef01fd..e8b37bd5e34a3 100644
+--- a/drivers/isdn/hardware/mISDN/hfcsusb.c
++++ b/drivers/isdn/hardware/mISDN/hfcsusb.c
+@@ -326,20 +326,24 @@ hfcusb_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb)
+ 		test_and_clear_bit(FLG_L2_ACTIVATED, &dch->Flags);
+ 
+ 		if (hw->protocol == ISDN_P_NT_S0) {
++			struct sk_buff_head free_queue;
++
++			__skb_queue_head_init(&free_queue);
+ 			hfcsusb_ph_command(hw, HFC_L1_DEACTIVATE_NT);
+ 			spin_lock_irqsave(&hw->lock, flags);
+-			skb_queue_purge(&dch->squeue);
++			skb_queue_splice_init(&dch->squeue, &free_queue);
+ 			if (dch->tx_skb) {
+-				dev_kfree_skb(dch->tx_skb);
++				__skb_queue_tail(&free_queue, dch->tx_skb);
+ 				dch->tx_skb = NULL;
+ 			}
+ 			dch->tx_idx = 0;
+ 			if (dch->rx_skb) {
+-				dev_kfree_skb(dch->rx_skb);
++				__skb_queue_tail(&free_queue, dch->rx_skb);
+ 				dch->rx_skb = NULL;
+ 			}
+ 			test_and_clear_bit(FLG_TX_BUSY, &dch->Flags);
+ 			spin_unlock_irqrestore(&hw->lock, flags);
++			__skb_queue_purge(&free_queue);
+ #ifdef FIXME
+ 			if (test_and_clear_bit(FLG_L1_BUSY, &dch->Flags))
+ 				dchannel_sched_event(&hc->dch, D_CLEARBUSY);
+@@ -1330,7 +1334,7 @@ tx_iso_complete(struct urb *urb)
+ 					printk("\n");
+ 				}
+ 
+-				dev_kfree_skb(tx_skb);
++				dev_consume_skb_irq(tx_skb);
+ 				tx_skb = NULL;
+ 				if (fifo->dch && get_next_dframe(fifo->dch))
+ 					tx_skb = fifo->dch->tx_skb;
+diff --git a/drivers/macintosh/macio-adb.c b/drivers/macintosh/macio-adb.c
+index d4759db002c60..defe65f51fa22 100644
+--- a/drivers/macintosh/macio-adb.c
++++ b/drivers/macintosh/macio-adb.c
+@@ -106,6 +106,10 @@ int macio_init(void)
+ 		return -ENXIO;
+ 	}
+ 	adb = ioremap(r.start, sizeof(struct adb_regs));
++	if (!adb) {
++		of_node_put(adbs);
++		return -ENOMEM;
++	}
+ 
+ 	out_8(&adb->ctrl.r, 0);
+ 	out_8(&adb->intr.r, 0);
+diff --git a/drivers/macintosh/macio_asic.c b/drivers/macintosh/macio_asic.c
+index 49af60bdac928..7db2e23a5ac81 100644
+--- a/drivers/macintosh/macio_asic.c
++++ b/drivers/macintosh/macio_asic.c
+@@ -425,7 +425,7 @@ static struct macio_dev * macio_add_one_device(struct macio_chip *chip,
+ 	if (of_device_register(&dev->ofdev) != 0) {
+ 		printk(KERN_DEBUG"macio: device registration error for %s!\n",
+ 		       dev_name(&dev->ofdev.dev));
+-		kfree(dev);
++		put_device(&dev->ofdev.dev);
+ 		return NULL;
+ 	}
+ 
+diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
+index f44079d62b1a7..527204c6d5cd0 100644
+--- a/drivers/mailbox/zynqmp-ipi-mailbox.c
++++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
+@@ -493,6 +493,7 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
+ 	ret = device_register(&ipi_mbox->dev);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register ipi mbox dev.\n");
++		put_device(&ipi_mbox->dev);
+ 		return ret;
+ 	}
+ 	mdev = &ipi_mbox->dev;
+@@ -619,7 +620,8 @@ static void zynqmp_ipi_free_mboxes(struct zynqmp_ipi_pdata *pdata)
+ 		ipi_mbox = &pdata->ipi_mboxes[i];
+ 		if (ipi_mbox->dev.parent) {
+ 			mbox_controller_unregister(&ipi_mbox->mbox);
+-			device_unregister(&ipi_mbox->dev);
++			if (device_is_registered(&ipi_mbox->dev))
++				device_unregister(&ipi_mbox->dev);
+ 		}
+ 	}
+ }
+diff --git a/drivers/mcb/mcb-core.c b/drivers/mcb/mcb-core.c
+index 38cc8340e817d..8b8cd751fe9a9 100644
+--- a/drivers/mcb/mcb-core.c
++++ b/drivers/mcb/mcb-core.c
+@@ -71,8 +71,10 @@ static int mcb_probe(struct device *dev)
+ 
+ 	get_device(dev);
+ 	ret = mdrv->probe(mdev, found_id);
+-	if (ret)
++	if (ret) {
+ 		module_put(carrier_mod);
++		put_device(dev);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/mcb/mcb-parse.c b/drivers/mcb/mcb-parse.c
+index 0266bfddfbe27..aa6938da0db85 100644
+--- a/drivers/mcb/mcb-parse.c
++++ b/drivers/mcb/mcb-parse.c
+@@ -108,7 +108,7 @@ static int chameleon_parse_gdd(struct mcb_bus *bus,
+ 	return 0;
+ 
+ err:
+-	mcb_free_dev(mdev);
++	put_device(&mdev->dev);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index af6d4f898e4c1..2ecd0db0f2945 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -551,11 +551,13 @@ static int __create_persistent_data_objects(struct dm_cache_metadata *cmd,
+ 	return r;
+ }
+ 
+-static void __destroy_persistent_data_objects(struct dm_cache_metadata *cmd)
++static void __destroy_persistent_data_objects(struct dm_cache_metadata *cmd,
++					      bool destroy_bm)
+ {
+ 	dm_sm_destroy(cmd->metadata_sm);
+ 	dm_tm_destroy(cmd->tm);
+-	dm_block_manager_destroy(cmd->bm);
++	if (destroy_bm)
++		dm_block_manager_destroy(cmd->bm);
+ }
+ 
+ typedef unsigned long (*flags_mutator)(unsigned long);
+@@ -826,7 +828,7 @@ static struct dm_cache_metadata *lookup_or_open(struct block_device *bdev,
+ 		cmd2 = lookup(bdev);
+ 		if (cmd2) {
+ 			mutex_unlock(&table_lock);
+-			__destroy_persistent_data_objects(cmd);
++			__destroy_persistent_data_objects(cmd, true);
+ 			kfree(cmd);
+ 			return cmd2;
+ 		}
+@@ -874,7 +876,7 @@ void dm_cache_metadata_close(struct dm_cache_metadata *cmd)
+ 		mutex_unlock(&table_lock);
+ 
+ 		if (!cmd->fail_io)
+-			__destroy_persistent_data_objects(cmd);
++			__destroy_persistent_data_objects(cmd, true);
+ 		kfree(cmd);
+ 	}
+ }
+@@ -1808,14 +1810,52 @@ int dm_cache_metadata_needs_check(struct dm_cache_metadata *cmd, bool *result)
+ 
+ int dm_cache_metadata_abort(struct dm_cache_metadata *cmd)
+ {
+-	int r;
++	int r = -EINVAL;
++	struct dm_block_manager *old_bm = NULL, *new_bm = NULL;
++
++	/* fail_io is double-checked with cmd->root_lock held below */
++	if (unlikely(cmd->fail_io))
++		return r;
++
++	/*
++	 * Replacement block manager (new_bm) is created and old_bm destroyed outside of
++	 * cmd root_lock to avoid ABBA deadlock that would result (due to life-cycle of
++	 * shrinker associated with the block manager's bufio client vs cmd root_lock).
++	 * - must take shrinker_rwsem without holding cmd->root_lock
++	 */
++	new_bm = dm_block_manager_create(cmd->bdev, DM_CACHE_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
++					 CACHE_MAX_CONCURRENT_LOCKS);
+ 
+ 	WRITE_LOCK(cmd);
+-	__destroy_persistent_data_objects(cmd);
+-	r = __create_persistent_data_objects(cmd, false);
++	if (cmd->fail_io) {
++		WRITE_UNLOCK(cmd);
++		goto out;
++	}
++
++	__destroy_persistent_data_objects(cmd, false);
++	old_bm = cmd->bm;
++	if (IS_ERR(new_bm)) {
++		DMERR("could not create block manager during abort");
++		cmd->bm = NULL;
++		r = PTR_ERR(new_bm);
++		goto out_unlock;
++	}
++
++	cmd->bm = new_bm;
++	r = __open_or_format_metadata(cmd, false);
++	if (r) {
++		cmd->bm = NULL;
++		goto out_unlock;
++	}
++	new_bm = NULL;
++out_unlock:
+ 	if (r)
+ 		cmd->fail_io = true;
+ 	WRITE_UNLOCK(cmd);
++	dm_block_manager_destroy(old_bm);
++out:
++	if (new_bm && !IS_ERR(new_bm))
++		dm_block_manager_destroy(new_bm);
+ 
+ 	return r;
+ }
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index 4bc453f5bbaa6..9b2aec3098010 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -985,16 +985,16 @@ static void abort_transaction(struct cache *cache)
+ 	if (get_cache_mode(cache) >= CM_READ_ONLY)
+ 		return;
+ 
+-	if (dm_cache_metadata_set_needs_check(cache->cmd)) {
+-		DMERR("%s: failed to set 'needs_check' flag in metadata", dev_name);
+-		set_cache_mode(cache, CM_FAIL);
+-	}
+-
+ 	DMERR_LIMIT("%s: aborting current metadata transaction", dev_name);
+ 	if (dm_cache_metadata_abort(cache->cmd)) {
+ 		DMERR("%s: failed to abort metadata transaction", dev_name);
+ 		set_cache_mode(cache, CM_FAIL);
+ 	}
++
++	if (dm_cache_metadata_set_needs_check(cache->cmd)) {
++		DMERR("%s: failed to set 'needs_check' flag in metadata", dev_name);
++		set_cache_mode(cache, CM_FAIL);
++	}
+ }
+ 
+ static void metadata_operation_failed(struct cache *cache, const char *op, int r)
+@@ -1965,6 +1965,7 @@ static void destroy(struct cache *cache)
+ 	if (cache->prison)
+ 		dm_bio_prison_destroy_v2(cache->prison);
+ 
++	cancel_delayed_work_sync(&cache->waker);
+ 	if (cache->wq)
+ 		destroy_workqueue(cache->wq);
+ 
+diff --git a/drivers/md/dm-clone-target.c b/drivers/md/dm-clone-target.c
+index bdb255edc2004..0d38ad6235348 100644
+--- a/drivers/md/dm-clone-target.c
++++ b/drivers/md/dm-clone-target.c
+@@ -1966,6 +1966,7 @@ static void clone_dtr(struct dm_target *ti)
+ 
+ 	mempool_exit(&clone->hydration_pool);
+ 	dm_kcopyd_client_destroy(clone->kcopyd_client);
++	cancel_delayed_work_sync(&clone->waker);
+ 	destroy_workqueue(clone->wq);
+ 	hash_table_exit(clone);
+ 	dm_clone_metadata_close(clone->cmd);
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 2156a2d5ac70e..a1c4cc48bf034 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -4388,6 +4388,8 @@ static void dm_integrity_dtr(struct dm_target *ti)
+ 	BUG_ON(!RB_EMPTY_ROOT(&ic->in_progress));
+ 	BUG_ON(!list_empty(&ic->wait_list));
+ 
++	if (ic->mode == 'B')
++		cancel_delayed_work_sync(&ic->bitmap_flush_work);
+ 	if (ic->metadata_wq)
+ 		destroy_workqueue(ic->metadata_wq);
+ 	if (ic->wait_wq)
+diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
+index 842d79e5ea3aa..8f4d149bb99d7 100644
+--- a/drivers/md/dm-thin-metadata.c
++++ b/drivers/md/dm-thin-metadata.c
+@@ -701,6 +701,15 @@ static int __open_metadata(struct dm_pool_metadata *pmd)
+ 		goto bad_cleanup_data_sm;
+ 	}
+ 
++	/*
++	 * For pool metadata opening process, root setting is redundant
++	 * because it will be set again in __begin_transaction(). But dm
++	 * pool aborting process really needs to get last transaction's
++	 * root to avoid accessing broken btree.
++	 */
++	pmd->root = le64_to_cpu(disk_super->data_mapping_root);
++	pmd->details_root = le64_to_cpu(disk_super->device_details_root);
++
+ 	__setup_btree_details(pmd);
+ 	dm_bm_unlock(sblock);
+ 
+@@ -753,13 +762,15 @@ static int __create_persistent_data_objects(struct dm_pool_metadata *pmd, bool f
+ 	return r;
+ }
+ 
+-static void __destroy_persistent_data_objects(struct dm_pool_metadata *pmd)
++static void __destroy_persistent_data_objects(struct dm_pool_metadata *pmd,
++					      bool destroy_bm)
+ {
+ 	dm_sm_destroy(pmd->data_sm);
+ 	dm_sm_destroy(pmd->metadata_sm);
+ 	dm_tm_destroy(pmd->nb_tm);
+ 	dm_tm_destroy(pmd->tm);
+-	dm_block_manager_destroy(pmd->bm);
++	if (destroy_bm)
++		dm_block_manager_destroy(pmd->bm);
+ }
+ 
+ static int __begin_transaction(struct dm_pool_metadata *pmd)
+@@ -966,7 +977,7 @@ int dm_pool_metadata_close(struct dm_pool_metadata *pmd)
+ 	}
+ 	pmd_write_unlock(pmd);
+ 	if (!pmd->fail_io)
+-		__destroy_persistent_data_objects(pmd);
++		__destroy_persistent_data_objects(pmd, true);
+ 
+ 	kfree(pmd);
+ 	return 0;
+@@ -1873,19 +1884,52 @@ static void __set_abort_with_changes_flags(struct dm_pool_metadata *pmd)
+ int dm_pool_abort_metadata(struct dm_pool_metadata *pmd)
+ {
+ 	int r = -EINVAL;
++	struct dm_block_manager *old_bm = NULL, *new_bm = NULL;
++
++	/* fail_io is double-checked with pmd->root_lock held below */
++	if (unlikely(pmd->fail_io))
++		return r;
++
++	/*
++	 * Replacement block manager (new_bm) is created and old_bm destroyed outside of
++	 * pmd root_lock to avoid ABBA deadlock that would result (due to life-cycle of
++	 * shrinker associated with the block manager's bufio client vs pmd root_lock).
++	 * - must take shrinker_rwsem without holding pmd->root_lock
++	 */
++	new_bm = dm_block_manager_create(pmd->bdev, THIN_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
++					 THIN_MAX_CONCURRENT_LOCKS);
+ 
+ 	pmd_write_lock(pmd);
+-	if (pmd->fail_io)
++	if (pmd->fail_io) {
++		pmd_write_unlock(pmd);
+ 		goto out;
++	}
+ 
+ 	__set_abort_with_changes_flags(pmd);
+-	__destroy_persistent_data_objects(pmd);
+-	r = __create_persistent_data_objects(pmd, false);
++	__destroy_persistent_data_objects(pmd, false);
++	old_bm = pmd->bm;
++	if (IS_ERR(new_bm)) {
++		DMERR("could not create block manager during abort");
++		pmd->bm = NULL;
++		r = PTR_ERR(new_bm);
++		goto out_unlock;
++	}
++
++	pmd->bm = new_bm;
++	r = __open_or_format_metadata(pmd, false);
++	if (r) {
++		pmd->bm = NULL;
++		goto out_unlock;
++	}
++	new_bm = NULL;
++out_unlock:
+ 	if (r)
+ 		pmd->fail_io = true;
+-
+-out:
+ 	pmd_write_unlock(pmd);
++	dm_block_manager_destroy(old_bm);
++out:
++	if (new_bm && !IS_ERR(new_bm))
++		dm_block_manager_destroy(new_bm);
+ 
+ 	return r;
+ }
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index a196d7cb51bd2..f3c519e18a12f 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -2907,6 +2907,8 @@ static void __pool_destroy(struct pool *pool)
+ 	dm_bio_prison_destroy(pool->prison);
+ 	dm_kcopyd_client_destroy(pool->copier);
+ 
++	cancel_delayed_work_sync(&pool->waker);
++	cancel_delayed_work_sync(&pool->no_space_timeout);
+ 	if (pool->wq)
+ 		destroy_workqueue(pool->wq);
+ 
+@@ -3566,20 +3568,28 @@ static int pool_preresume(struct dm_target *ti)
+ 	 */
+ 	r = bind_control_target(pool, ti);
+ 	if (r)
+-		return r;
++		goto out;
+ 
+ 	r = maybe_resize_data_dev(ti, &need_commit1);
+ 	if (r)
+-		return r;
++		goto out;
+ 
+ 	r = maybe_resize_metadata_dev(ti, &need_commit2);
+ 	if (r)
+-		return r;
++		goto out;
+ 
+ 	if (need_commit1 || need_commit2)
+ 		(void) commit(pool);
++out:
++	/*
++	 * When a thin-pool is PM_FAIL, it cannot be rebuilt if
++	 * bio is in deferred list. Therefore need to return 0
++	 * to allow pool_resume() to flush IO.
++	 */
++	if (r && get_pool_mode(pool) == PM_FAIL)
++		r = 0;
+ 
+-	return 0;
++	return r;
+ }
+ 
+ static void pool_suspend_active_thins(struct pool *pool)
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index d377ea0609255..20afc0aec1778 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -486,7 +486,7 @@ void md_bitmap_print_sb(struct bitmap *bitmap)
+ 	sb = kmap_atomic(bitmap->storage.sb_page);
+ 	pr_debug("%s: bitmap file superblock:\n", bmname(bitmap));
+ 	pr_debug("         magic: %08x\n", le32_to_cpu(sb->magic));
+-	pr_debug("       version: %d\n", le32_to_cpu(sb->version));
++	pr_debug("       version: %u\n", le32_to_cpu(sb->version));
+ 	pr_debug("          uuid: %08x.%08x.%08x.%08x\n",
+ 		 le32_to_cpu(*(__le32 *)(sb->uuid+0)),
+ 		 le32_to_cpu(*(__le32 *)(sb->uuid+4)),
+@@ -497,11 +497,11 @@ void md_bitmap_print_sb(struct bitmap *bitmap)
+ 	pr_debug("events cleared: %llu\n",
+ 		 (unsigned long long) le64_to_cpu(sb->events_cleared));
+ 	pr_debug("         state: %08x\n", le32_to_cpu(sb->state));
+-	pr_debug("     chunksize: %d B\n", le32_to_cpu(sb->chunksize));
+-	pr_debug("  daemon sleep: %ds\n", le32_to_cpu(sb->daemon_sleep));
++	pr_debug("     chunksize: %u B\n", le32_to_cpu(sb->chunksize));
++	pr_debug("  daemon sleep: %us\n", le32_to_cpu(sb->daemon_sleep));
+ 	pr_debug("     sync size: %llu KB\n",
+ 		 (unsigned long long)le64_to_cpu(sb->sync_size)/2);
+-	pr_debug("max write behind: %d\n", le32_to_cpu(sb->write_behind));
++	pr_debug("max write behind: %u\n", le32_to_cpu(sb->write_behind));
+ 	kunmap_atomic(sb);
+ }
+ 
+@@ -2106,7 +2106,8 @@ int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
+ 			bytes = DIV_ROUND_UP(chunks, 8);
+ 			if (!bitmap->mddev->bitmap_info.external)
+ 				bytes += sizeof(bitmap_super_t);
+-		} while (bytes > (space << 9));
++		} while (bytes > (space << 9) && (chunkshift + BITMAP_BLOCK_SHIFT) <
++			(BITS_PER_BYTE * sizeof(((bitmap_super_t *)0)->chunksize) - 1));
+ 	} else
+ 		chunkshift = ffz(~chunksize) - BITMAP_BLOCK_SHIFT;
+ 
+@@ -2151,7 +2152,7 @@ int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
+ 	bitmap->counts.missing_pages = pages;
+ 	bitmap->counts.chunkshift = chunkshift;
+ 	bitmap->counts.chunks = chunks;
+-	bitmap->mddev->bitmap_info.chunksize = 1 << (chunkshift +
++	bitmap->mddev->bitmap_info.chunksize = 1UL << (chunkshift +
+ 						     BITMAP_BLOCK_SHIFT);
+ 
+ 	blocks = min(old_counts.chunks << old_counts.chunkshift,
+@@ -2177,8 +2178,8 @@ int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
+ 				bitmap->counts.missing_pages = old_counts.pages;
+ 				bitmap->counts.chunkshift = old_counts.chunkshift;
+ 				bitmap->counts.chunks = old_counts.chunks;
+-				bitmap->mddev->bitmap_info.chunksize = 1 << (old_counts.chunkshift +
+-									     BITMAP_BLOCK_SHIFT);
++				bitmap->mddev->bitmap_info.chunksize =
++					1UL << (old_counts.chunkshift + BITMAP_BLOCK_SHIFT);
+ 				blocks = old_counts.chunks << old_counts.chunkshift;
+ 				pr_warn("Could not pre-allocate in-memory bitmap for cluster raid\n");
+ 				break;
+@@ -2196,20 +2197,23 @@ int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
+ 
+ 		if (set) {
+ 			bmc_new = md_bitmap_get_counter(&bitmap->counts, block, &new_blocks, 1);
+-			if (*bmc_new == 0) {
+-				/* need to set on-disk bits too. */
+-				sector_t end = block + new_blocks;
+-				sector_t start = block >> chunkshift;
+-				start <<= chunkshift;
+-				while (start < end) {
+-					md_bitmap_file_set_bit(bitmap, block);
+-					start += 1 << chunkshift;
++			if (bmc_new) {
++				if (*bmc_new == 0) {
++					/* need to set on-disk bits too. */
++					sector_t end = block + new_blocks;
++					sector_t start = block >> chunkshift;
++
++					start <<= chunkshift;
++					while (start < end) {
++						md_bitmap_file_set_bit(bitmap, block);
++						start += 1 << chunkshift;
++					}
++					*bmc_new = 2;
++					md_bitmap_count_page(&bitmap->counts, block, 1);
++					md_bitmap_set_pending(&bitmap->counts, block);
+ 				}
+-				*bmc_new = 2;
+-				md_bitmap_count_page(&bitmap->counts, block, 1);
+-				md_bitmap_set_pending(&bitmap->counts, block);
++				*bmc_new |= NEEDED_MASK;
+ 			}
+-			*bmc_new |= NEEDED_MASK;
+ 			if (new_blocks < old_blocks)
+ 				old_blocks = new_blocks;
+ 		}
+@@ -2516,6 +2520,9 @@ chunksize_store(struct mddev *mddev, const char *buf, size_t len)
+ 	if (csize < 512 ||
+ 	    !is_power_of_2(csize))
+ 		return -EINVAL;
++	if (BITS_PER_LONG > 32 && csize >= (1ULL << (BITS_PER_BYTE *
++		sizeof(((bitmap_super_t *)0)->chunksize))))
++		return -EOVERFLOW;
+ 	mddev->bitmap_info.chunksize = csize;
+ 	return len;
+ }
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 0043dec37a870..3038e7ecb7e16 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -555,13 +555,14 @@ static void md_end_flush(struct bio *bio)
+ 	struct md_rdev *rdev = bio->bi_private;
+ 	struct mddev *mddev = rdev->mddev;
+ 
++	bio_put(bio);
++
+ 	rdev_dec_pending(rdev, mddev);
+ 
+ 	if (atomic_dec_and_test(&mddev->flush_pending)) {
+ 		/* The pre-request flush has finished */
+ 		queue_work(md_wq, &mddev->flush_work);
+ 	}
+-	bio_put(bio);
+ }
+ 
+ static void md_submit_flush_data(struct work_struct *ws);
+@@ -966,10 +967,12 @@ static void super_written(struct bio *bio)
+ 	} else
+ 		clear_bit(LastDev, &rdev->flags);
+ 
++	bio_put(bio);
++
++	rdev_dec_pending(rdev, mddev);
++
+ 	if (atomic_dec_and_test(&mddev->pending_writes))
+ 		wake_up(&mddev->sb_wait);
+-	rdev_dec_pending(rdev, mddev);
+-	bio_put(bio);
+ }
+ 
+ void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index fb31e5dd54a6e..6b5cc3f59fb39 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -3115,6 +3115,7 @@ static int raid1_run(struct mddev *mddev)
+ 	 * RAID1 needs at least one disk in active
+ 	 */
+ 	if (conf->raid_disks - mddev->degraded < 1) {
++		md_unregister_thread(&conf->thread);
+ 		ret = -EINVAL;
+ 		goto abort;
+ 	}
+diff --git a/drivers/media/dvb-core/dmxdev.c b/drivers/media/dvb-core/dmxdev.c
+index e58cb8434dafe..12b7f698f5623 100644
+--- a/drivers/media/dvb-core/dmxdev.c
++++ b/drivers/media/dvb-core/dmxdev.c
+@@ -800,6 +800,11 @@ static int dvb_demux_open(struct inode *inode, struct file *file)
+ 	if (mutex_lock_interruptible(&dmxdev->mutex))
+ 		return -ERESTARTSYS;
+ 
++	if (dmxdev->exit) {
++		mutex_unlock(&dmxdev->mutex);
++		return -ENODEV;
++	}
++
+ 	for (i = 0; i < dmxdev->filternum; i++)
+ 		if (dmxdev->filter[i].state == DMXDEV_STATE_FREE)
+ 			break;
+@@ -1458,7 +1463,10 @@ EXPORT_SYMBOL(dvb_dmxdev_init);
+ 
+ void dvb_dmxdev_release(struct dmxdev *dmxdev)
+ {
++	mutex_lock(&dmxdev->mutex);
+ 	dmxdev->exit = 1;
++	mutex_unlock(&dmxdev->mutex);
++
+ 	if (dmxdev->dvbdev->users > 1) {
+ 		wait_event(dmxdev->dvbdev->wait_queue,
+ 				dmxdev->dvbdev->users == 1);
+diff --git a/drivers/media/dvb-core/dvb_ca_en50221.c b/drivers/media/dvb-core/dvb_ca_en50221.c
+index cfc27629444f3..fd476536d32ed 100644
+--- a/drivers/media/dvb-core/dvb_ca_en50221.c
++++ b/drivers/media/dvb-core/dvb_ca_en50221.c
+@@ -157,7 +157,7 @@ static void dvb_ca_private_free(struct dvb_ca_private *ca)
+ {
+ 	unsigned int i;
+ 
+-	dvb_free_device(ca->dvbdev);
++	dvb_device_put(ca->dvbdev);
+ 	for (i = 0; i < ca->slot_count; i++)
+ 		vfree(ca->slot_info[i].rx_buffer.data);
+ 
+diff --git a/drivers/media/dvb-core/dvb_frontend.c b/drivers/media/dvb-core/dvb_frontend.c
+index 06ea30a689d75..b04638321b75b 100644
+--- a/drivers/media/dvb-core/dvb_frontend.c
++++ b/drivers/media/dvb-core/dvb_frontend.c
+@@ -135,7 +135,7 @@ static void __dvb_frontend_free(struct dvb_frontend *fe)
+ 	struct dvb_frontend_private *fepriv = fe->frontend_priv;
+ 
+ 	if (fepriv)
+-		dvb_free_device(fepriv->dvbdev);
++		dvb_device_put(fepriv->dvbdev);
+ 
+ 	dvb_frontend_invoke_release(fe, fe->ops.release);
+ 
+@@ -2961,6 +2961,7 @@ int dvb_register_frontend(struct dvb_adapter *dvb,
+ 		.name = fe->ops.info.name,
+ #endif
+ 	};
++	int ret;
+ 
+ 	dev_dbg(dvb->device, "%s:\n", __func__);
+ 
+@@ -2994,8 +2995,13 @@ int dvb_register_frontend(struct dvb_adapter *dvb,
+ 		 "DVB: registering adapter %i frontend %i (%s)...\n",
+ 		 fe->dvb->num, fe->id, fe->ops.info.name);
+ 
+-	dvb_register_device(fe->dvb, &fepriv->dvbdev, &dvbdev_template,
++	ret = dvb_register_device(fe->dvb, &fepriv->dvbdev, &dvbdev_template,
+ 			    fe, DVB_DEVICE_FRONTEND, 0);
++	if (ret) {
++		dvb_frontend_put(fe);
++		mutex_unlock(&frontend_mutex);
++		return ret;
++	}
+ 
+ 	/*
+ 	 * Initialize the cache to the proper values according with the
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index ec9ebff28552b..fea2d82300b0d 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -107,7 +107,7 @@ static int dvb_device_open(struct inode *inode, struct file *file)
+ 		new_fops = fops_get(dvbdev->fops);
+ 		if (!new_fops)
+ 			goto fail;
+-		file->private_data = dvbdev;
++		file->private_data = dvb_device_get(dvbdev);
+ 		replace_fops(file, new_fops);
+ 		if (file->f_op->open)
+ 			err = file->f_op->open(inode, file);
+@@ -171,6 +171,9 @@ int dvb_generic_release(struct inode *inode, struct file *file)
+ 	}
+ 
+ 	dvbdev->users++;
++
++	dvb_device_put(dvbdev);
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(dvb_generic_release);
+@@ -342,6 +345,7 @@ static int dvb_create_media_entity(struct dvb_device *dvbdev,
+ 				       GFP_KERNEL);
+ 		if (!dvbdev->pads) {
+ 			kfree(dvbdev->entity);
++			dvbdev->entity = NULL;
+ 			return -ENOMEM;
+ 		}
+ 	}
+@@ -488,6 +492,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 	}
+ 
+ 	memcpy(dvbdev, template, sizeof(struct dvb_device));
++	kref_init(&dvbdev->ref);
+ 	dvbdev->type = type;
+ 	dvbdev->id = id;
+ 	dvbdev->adapter = adap;
+@@ -517,7 +522,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ #endif
+ 
+ 	dvbdev->minor = minor;
+-	dvb_minors[minor] = dvbdev;
++	dvb_minors[minor] = dvb_device_get(dvbdev);
+ 	up_write(&minor_rwsem);
+ 
+ 	ret = dvb_register_media_device(dvbdev, type, minor, demux_sink_pads);
+@@ -557,6 +562,7 @@ void dvb_remove_device(struct dvb_device *dvbdev)
+ 
+ 	down_write(&minor_rwsem);
+ 	dvb_minors[dvbdev->minor] = NULL;
++	dvb_device_put(dvbdev);
+ 	up_write(&minor_rwsem);
+ 
+ 	dvb_media_device_free(dvbdev);
+@@ -568,21 +574,34 @@ void dvb_remove_device(struct dvb_device *dvbdev)
+ EXPORT_SYMBOL(dvb_remove_device);
+ 
+ 
+-void dvb_free_device(struct dvb_device *dvbdev)
++static void dvb_free_device(struct kref *ref)
+ {
+-	if (!dvbdev)
+-		return;
++	struct dvb_device *dvbdev = container_of(ref, struct dvb_device, ref);
+ 
+ 	kfree (dvbdev->fops);
+ 	kfree (dvbdev);
+ }
+-EXPORT_SYMBOL(dvb_free_device);
++
++
++struct dvb_device *dvb_device_get(struct dvb_device *dvbdev)
++{
++	kref_get(&dvbdev->ref);
++	return dvbdev;
++}
++EXPORT_SYMBOL(dvb_device_get);
++
++
++void dvb_device_put(struct dvb_device *dvbdev)
++{
++	if (dvbdev)
++		kref_put(&dvbdev->ref, dvb_free_device);
++}
+ 
+ 
+ void dvb_unregister_device(struct dvb_device *dvbdev)
+ {
+ 	dvb_remove_device(dvbdev);
+-	dvb_free_device(dvbdev);
++	dvb_device_put(dvbdev);
+ }
+ EXPORT_SYMBOL(dvb_unregister_device);
+ 
+diff --git a/drivers/media/dvb-frontends/bcm3510.c b/drivers/media/dvb-frontends/bcm3510.c
+index da0ff7b44da41..68b92b4419cff 100644
+--- a/drivers/media/dvb-frontends/bcm3510.c
++++ b/drivers/media/dvb-frontends/bcm3510.c
+@@ -649,6 +649,7 @@ static int bcm3510_download_firmware(struct dvb_frontend* fe)
+ 		deb_info("firmware chunk, addr: 0x%04x, len: 0x%04x, total length: 0x%04zx\n",addr,len,fw->size);
+ 		if ((ret = bcm3510_write_ram(st,addr,&b[i+4],len)) < 0) {
+ 			err("firmware download failed: %d\n",ret);
++			release_firmware(fw);
+ 			return ret;
+ 		}
+ 		i += 4 + len;
+diff --git a/drivers/media/dvb-frontends/stv0288.c b/drivers/media/dvb-frontends/stv0288.c
+index 3d54a0ec86afd..3ae1f3a2f1420 100644
+--- a/drivers/media/dvb-frontends/stv0288.c
++++ b/drivers/media/dvb-frontends/stv0288.c
+@@ -440,9 +440,8 @@ static int stv0288_set_frontend(struct dvb_frontend *fe)
+ 	struct stv0288_state *state = fe->demodulator_priv;
+ 	struct dtv_frontend_properties *c = &fe->dtv_property_cache;
+ 
+-	char tm;
+-	unsigned char tda[3];
+-	u8 reg, time_out = 0;
++	u8 tda[3], reg, time_out = 0;
++	s8 tm;
+ 
+ 	dprintk("%s : FE_SET_FRONTEND\n", __func__);
+ 
+diff --git a/drivers/media/i2c/ad5820.c b/drivers/media/i2c/ad5820.c
+index 19c74db0649fc..f55322eebf6d0 100644
+--- a/drivers/media/i2c/ad5820.c
++++ b/drivers/media/i2c/ad5820.c
+@@ -329,18 +329,18 @@ static int ad5820_probe(struct i2c_client *client,
+ 
+ 	ret = media_entity_pads_init(&coil->subdev.entity, 0, NULL);
+ 	if (ret < 0)
+-		goto cleanup2;
++		goto clean_mutex;
+ 
+ 	ret = v4l2_async_register_subdev(&coil->subdev);
+ 	if (ret < 0)
+-		goto cleanup;
++		goto clean_entity;
+ 
+ 	return ret;
+ 
+-cleanup2:
+-	mutex_destroy(&coil->power_lock);
+-cleanup:
++clean_entity:
+ 	media_entity_cleanup(&coil->subdev.entity);
++clean_mutex:
++	mutex_destroy(&coil->power_lock);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/pci/saa7164/saa7164-core.c b/drivers/media/pci/saa7164/saa7164-core.c
+index 6c08b77bfd477..3cadfbe60fe6e 100644
+--- a/drivers/media/pci/saa7164/saa7164-core.c
++++ b/drivers/media/pci/saa7164/saa7164-core.c
+@@ -1270,7 +1270,7 @@ static int saa7164_initdev(struct pci_dev *pci_dev,
+ 
+ 	if (saa7164_dev_setup(dev) < 0) {
+ 		err = -EINVAL;
+-		goto fail_free;
++		goto fail_dev;
+ 	}
+ 
+ 	/* print pci info */
+@@ -1438,6 +1438,8 @@ fail_fw:
+ 
+ fail_irq:
+ 	saa7164_dev_unregister(dev);
++fail_dev:
++	pci_disable_device(pci_dev);
+ fail_free:
+ 	v4l2_device_unregister(&dev->v4l2_dev);
+ 	kfree(dev);
+diff --git a/drivers/media/pci/solo6x10/solo6x10-core.c b/drivers/media/pci/solo6x10/solo6x10-core.c
+index d497afc7e7b78..4ebb1e020fad5 100644
+--- a/drivers/media/pci/solo6x10/solo6x10-core.c
++++ b/drivers/media/pci/solo6x10/solo6x10-core.c
+@@ -420,6 +420,7 @@ static int solo_sysfs_init(struct solo_dev *solo_dev)
+ 		     solo_dev->nr_chans);
+ 
+ 	if (device_register(dev)) {
++		put_device(dev);
+ 		dev->parent = NULL;
+ 		return -ENOMEM;
+ 	}
+diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c
+index 159c9de857885..6ffa12e83e42a 100644
+--- a/drivers/media/platform/coda/coda-bit.c
++++ b/drivers/media/platform/coda/coda-bit.c
+@@ -852,7 +852,7 @@ static void coda_setup_iram(struct coda_ctx *ctx)
+ 		/* Only H.264BP and H.263P3 are considered */
+ 		iram_info->buf_dbk_y_use = coda_iram_alloc(iram_info, w64);
+ 		iram_info->buf_dbk_c_use = coda_iram_alloc(iram_info, w64);
+-		if (!iram_info->buf_dbk_c_use)
++		if (!iram_info->buf_dbk_y_use || !iram_info->buf_dbk_c_use)
+ 			goto out;
+ 		iram_info->axi_sram_use |= dbk_bits;
+ 
+@@ -876,7 +876,7 @@ static void coda_setup_iram(struct coda_ctx *ctx)
+ 
+ 		iram_info->buf_dbk_y_use = coda_iram_alloc(iram_info, w128);
+ 		iram_info->buf_dbk_c_use = coda_iram_alloc(iram_info, w128);
+-		if (!iram_info->buf_dbk_c_use)
++		if (!iram_info->buf_dbk_y_use || !iram_info->buf_dbk_c_use)
+ 			goto out;
+ 		iram_info->axi_sram_use |= dbk_bits;
+ 
+@@ -1082,10 +1082,16 @@ static int coda_start_encoding(struct coda_ctx *ctx)
+ 	}
+ 
+ 	if (dst_fourcc == V4L2_PIX_FMT_JPEG) {
+-		if (!ctx->params.jpeg_qmat_tab[0])
++		if (!ctx->params.jpeg_qmat_tab[0]) {
+ 			ctx->params.jpeg_qmat_tab[0] = kmalloc(64, GFP_KERNEL);
+-		if (!ctx->params.jpeg_qmat_tab[1])
++			if (!ctx->params.jpeg_qmat_tab[0])
++				return -ENOMEM;
++		}
++		if (!ctx->params.jpeg_qmat_tab[1]) {
+ 			ctx->params.jpeg_qmat_tab[1] = kmalloc(64, GFP_KERNEL);
++			if (!ctx->params.jpeg_qmat_tab[1])
++				return -ENOMEM;
++		}
+ 		coda_set_jpeg_compression_quality(ctx, ctx->params.jpeg_quality);
+ 	}
+ 
+diff --git a/drivers/media/platform/coda/coda-jpeg.c b/drivers/media/platform/coda/coda-jpeg.c
+index a72f4655e5ad5..b7bf529f18f77 100644
+--- a/drivers/media/platform/coda/coda-jpeg.c
++++ b/drivers/media/platform/coda/coda-jpeg.c
+@@ -1052,10 +1052,16 @@ static int coda9_jpeg_start_encoding(struct coda_ctx *ctx)
+ 		v4l2_err(&dev->v4l2_dev, "error loading Huffman tables\n");
+ 		return ret;
+ 	}
+-	if (!ctx->params.jpeg_qmat_tab[0])
++	if (!ctx->params.jpeg_qmat_tab[0]) {
+ 		ctx->params.jpeg_qmat_tab[0] = kmalloc(64, GFP_KERNEL);
+-	if (!ctx->params.jpeg_qmat_tab[1])
++		if (!ctx->params.jpeg_qmat_tab[0])
++			return -ENOMEM;
++	}
++	if (!ctx->params.jpeg_qmat_tab[1]) {
+ 		ctx->params.jpeg_qmat_tab[1] = kmalloc(64, GFP_KERNEL);
++		if (!ctx->params.jpeg_qmat_tab[1])
++			return -ENOMEM;
++	}
+ 	coda_set_jpeg_compression_quality(ctx, ctx->params.jpeg_quality);
+ 
+ 	return 0;
+diff --git a/drivers/media/platform/exynos4-is/fimc-core.c b/drivers/media/platform/exynos4-is/fimc-core.c
+index 08d1f39a914c5..60b28e6f739e3 100644
+--- a/drivers/media/platform/exynos4-is/fimc-core.c
++++ b/drivers/media/platform/exynos4-is/fimc-core.c
+@@ -1174,7 +1174,7 @@ int __init fimc_register_driver(void)
+ 	return platform_driver_register(&fimc_driver);
+ }
+ 
+-void __exit fimc_unregister_driver(void)
++void fimc_unregister_driver(void)
+ {
+ 	platform_driver_unregister(&fimc_driver);
+ }
+diff --git a/drivers/media/platform/exynos4-is/media-dev.c b/drivers/media/platform/exynos4-is/media-dev.c
+index a9a8f0433fb2c..bd37011fb671e 100644
+--- a/drivers/media/platform/exynos4-is/media-dev.c
++++ b/drivers/media/platform/exynos4-is/media-dev.c
+@@ -401,6 +401,7 @@ static int fimc_md_parse_one_endpoint(struct fimc_md *fmd,
+ 	int index = fmd->num_sensors;
+ 	struct fimc_source_info *pd = &fmd->sensor[index].pdata;
+ 	struct device_node *rem, *np;
++	struct v4l2_async_subdev *asd;
+ 	struct v4l2_fwnode_endpoint endpoint = { .bus_type = 0 };
+ 	int ret;
+ 
+@@ -418,10 +419,10 @@ static int fimc_md_parse_one_endpoint(struct fimc_md *fmd,
+ 	pd->mux_id = (endpoint.base.port - 1) & 0x1;
+ 
+ 	rem = of_graph_get_remote_port_parent(ep);
+-	of_node_put(ep);
+ 	if (rem == NULL) {
+ 		v4l2_info(&fmd->v4l2_dev, "Remote device at %pOF not found\n",
+ 							ep);
++		of_node_put(ep);
+ 		return 0;
+ 	}
+ 
+@@ -450,6 +451,7 @@ static int fimc_md_parse_one_endpoint(struct fimc_md *fmd,
+ 	 * checking parent's node name.
+ 	 */
+ 	np = of_get_parent(rem);
++	of_node_put(rem);
+ 
+ 	if (of_node_name_eq(np, "i2c-isp"))
+ 		pd->fimc_bus_type = FIMC_BUS_TYPE_ISP_WRITEBACK;
+@@ -458,20 +460,19 @@ static int fimc_md_parse_one_endpoint(struct fimc_md *fmd,
+ 	of_node_put(np);
+ 
+ 	if (WARN_ON(index >= ARRAY_SIZE(fmd->sensor))) {
+-		of_node_put(rem);
++		of_node_put(ep);
+ 		return -EINVAL;
+ 	}
+ 
+-	fmd->sensor[index].asd.match_type = V4L2_ASYNC_MATCH_FWNODE;
+-	fmd->sensor[index].asd.match.fwnode = of_fwnode_handle(rem);
++	asd = v4l2_async_notifier_add_fwnode_remote_subdev(
++		&fmd->subdev_notifier, of_fwnode_handle(ep), sizeof(*asd));
+ 
+-	ret = v4l2_async_notifier_add_subdev(&fmd->subdev_notifier,
+-					     &fmd->sensor[index].asd);
+-	if (ret) {
+-		of_node_put(rem);
+-		return ret;
+-	}
++	of_node_put(ep);
++
++	if (IS_ERR(asd))
++		return PTR_ERR(asd);
+ 
++	fmd->sensor[index].asd = asd;
+ 	fmd->num_sensors++;
+ 
+ 	return 0;
+@@ -1377,8 +1378,7 @@ static int subdev_notifier_bound(struct v4l2_async_notifier *notifier,
+ 
+ 	/* Find platform data for this sensor subdev */
+ 	for (i = 0; i < ARRAY_SIZE(fmd->sensor); i++)
+-		if (fmd->sensor[i].asd.match.fwnode ==
+-		    of_fwnode_handle(subdev->dev->of_node))
++		if (fmd->sensor[i].asd == asd)
+ 			si = &fmd->sensor[i];
+ 
+ 	if (si == NULL)
+@@ -1470,7 +1470,7 @@ static int fimc_md_probe(struct platform_device *pdev)
+ 	pinctrl = devm_pinctrl_get(dev);
+ 	if (IS_ERR(pinctrl)) {
+ 		ret = PTR_ERR(pinctrl);
+-		if (ret != EPROBE_DEFER)
++		if (ret != -EPROBE_DEFER)
+ 			dev_err(dev, "Failed to get pinctrl: %d\n", ret);
+ 		goto err_clk;
+ 	}
+@@ -1582,7 +1582,11 @@ static int __init fimc_md_init(void)
+ 	if (ret)
+ 		return ret;
+ 
+-	return platform_driver_register(&fimc_md_driver);
++	ret = platform_driver_register(&fimc_md_driver);
++	if (ret)
++		fimc_unregister_driver();
++
++	return ret;
+ }
+ 
+ static void __exit fimc_md_exit(void)
+diff --git a/drivers/media/platform/exynos4-is/media-dev.h b/drivers/media/platform/exynos4-is/media-dev.h
+index 9447fafe23c6e..a3876d668ea6e 100644
+--- a/drivers/media/platform/exynos4-is/media-dev.h
++++ b/drivers/media/platform/exynos4-is/media-dev.h
+@@ -83,7 +83,7 @@ struct fimc_camclk_info {
+  */
+ struct fimc_sensor_info {
+ 	struct fimc_source_info pdata;
+-	struct v4l2_async_subdev asd;
++	struct v4l2_async_subdev *asd;
+ 	struct v4l2_subdev *subdev;
+ 	struct fimc_dev *host;
+ };
+diff --git a/drivers/media/platform/qcom/camss/camss-video.c b/drivers/media/platform/qcom/camss/camss-video.c
+index 15965e63cb619..9333a7a33d4da 100644
+--- a/drivers/media/platform/qcom/camss/camss-video.c
++++ b/drivers/media/platform/qcom/camss/camss-video.c
+@@ -444,7 +444,7 @@ static int video_start_streaming(struct vb2_queue *q, unsigned int count)
+ 
+ 	ret = media_pipeline_start(&vdev->entity, &video->pipe);
+ 	if (ret < 0)
+-		return ret;
++		goto flush_buffers;
+ 
+ 	ret = video_check_format(video);
+ 	if (ret < 0)
+@@ -473,6 +473,7 @@ static int video_start_streaming(struct vb2_queue *q, unsigned int count)
+ error:
+ 	media_pipeline_stop(&vdev->entity);
+ 
++flush_buffers:
+ 	video->ops->flush_buffers(video, VB2_BUF_STATE_QUEUED);
+ 
+ 	return ret;
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
+index 710f9a2b132b0..f7de02352f1b0 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.c
++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
+@@ -764,8 +764,8 @@ static int vcodec_domains_get(struct venus_core *core)
+ 	for (i = 0; i < res->vcodec_pmdomains_num; i++) {
+ 		pd = dev_pm_domain_attach_by_name(dev,
+ 						  res->vcodec_pmdomains[i]);
+-		if (IS_ERR(pd))
+-			return PTR_ERR(pd);
++		if (IS_ERR_OR_NULL(pd))
++			return PTR_ERR(pd) ? : -ENODATA;
+ 		core->pmdomains[i] = pd;
+ 	}
+ 
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+index f336a95432732..6cbec3bbfce66 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+@@ -1584,8 +1584,18 @@ static struct s5p_mfc_variant mfc_drvdata_v7 = {
+ 	.port_num	= MFC_NUM_PORTS_V7,
+ 	.buf_size	= &buf_size_v7,
+ 	.fw_name[0]     = "s5p-mfc-v7.fw",
+-	.clk_names	= {"mfc", "sclk_mfc"},
+-	.num_clocks	= 2,
++	.clk_names	= {"mfc"},
++	.num_clocks	= 1,
++};
++
++static struct s5p_mfc_variant mfc_drvdata_v7_3250 = {
++	.version        = MFC_VERSION_V7,
++	.version_bit    = MFC_V7_BIT,
++	.port_num       = MFC_NUM_PORTS_V7,
++	.buf_size       = &buf_size_v7,
++	.fw_name[0]     = "s5p-mfc-v7.fw",
++	.clk_names      = {"mfc", "sclk_mfc"},
++	.num_clocks     = 2,
+ };
+ 
+ static struct s5p_mfc_buf_size_v6 mfc_buf_size_v8 = {
+@@ -1655,6 +1665,9 @@ static const struct of_device_id exynos_mfc_match[] = {
+ 	}, {
+ 		.compatible = "samsung,mfc-v7",
+ 		.data = &mfc_drvdata_v7,
++	}, {
++		.compatible = "samsung,exynos3250-mfc",
++		.data = &mfc_drvdata_v7_3250,
+ 	}, {
+ 		.compatible = "samsung,mfc-v8",
+ 		.data = &mfc_drvdata_v8,
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c b/drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c
+index da138c314963a..58822ec5370e2 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c
+@@ -468,8 +468,10 @@ void s5p_mfc_close_mfc_inst(struct s5p_mfc_dev *dev, struct s5p_mfc_ctx *ctx)
+ 	s5p_mfc_hw_call(dev->mfc_ops, try_run, dev);
+ 	/* Wait until instance is returned or timeout occurred */
+ 	if (s5p_mfc_wait_for_done_ctx(ctx,
+-				S5P_MFC_R2H_CMD_CLOSE_INSTANCE_RET, 0))
++				S5P_MFC_R2H_CMD_CLOSE_INSTANCE_RET, 0)){
++		clear_work_bit_irqsave(ctx);
+ 		mfc_err("Err returning instance\n");
++	}
+ 
+ 	/* Free resources */
+ 	s5p_mfc_hw_call(dev->mfc_ops, release_codec_buffers, ctx);
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc_enc.c b/drivers/media/platform/s5p-mfc/s5p_mfc_enc.c
+index acc2217dd7e90..62a1ad347fa75 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc_enc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc_enc.c
+@@ -1218,6 +1218,7 @@ static int enc_post_frame_start(struct s5p_mfc_ctx *ctx)
+ 	unsigned long mb_y_addr, mb_c_addr;
+ 	int slice_type;
+ 	unsigned int strm_size;
++	bool src_ready;
+ 
+ 	slice_type = s5p_mfc_hw_call(dev->mfc_ops, get_enc_slice_type, dev);
+ 	strm_size = s5p_mfc_hw_call(dev->mfc_ops, get_enc_strm_size, dev);
+@@ -1257,7 +1258,8 @@ static int enc_post_frame_start(struct s5p_mfc_ctx *ctx)
+ 			}
+ 		}
+ 	}
+-	if ((ctx->src_queue_cnt > 0) && (ctx->state == MFCINST_RUNNING)) {
++	if (ctx->src_queue_cnt > 0 && (ctx->state == MFCINST_RUNNING ||
++				       ctx->state == MFCINST_FINISHING)) {
+ 		mb_entry = list_entry(ctx->src_queue.next, struct s5p_mfc_buf,
+ 									list);
+ 		if (mb_entry->flags & MFC_BUF_FLAG_USED) {
+@@ -1288,7 +1290,13 @@ static int enc_post_frame_start(struct s5p_mfc_ctx *ctx)
+ 		vb2_set_plane_payload(&mb_entry->b->vb2_buf, 0, strm_size);
+ 		vb2_buffer_done(&mb_entry->b->vb2_buf, VB2_BUF_STATE_DONE);
+ 	}
+-	if ((ctx->src_queue_cnt == 0) || (ctx->dst_queue_cnt == 0))
++
++	src_ready = true;
++	if (ctx->state == MFCINST_RUNNING && ctx->src_queue_cnt == 0)
++		src_ready = false;
++	if (ctx->state == MFCINST_FINISHING && ctx->ref_queue_cnt == 0)
++		src_ready = false;
++	if (!src_ready || ctx->dst_queue_cnt == 0)
+ 		clear_work_bit(ctx);
+ 
+ 	return 0;
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc_opr_v6.c b/drivers/media/platform/s5p-mfc/s5p_mfc_opr_v6.c
+index a1453053e31ab..ef8169f6c428c 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc_opr_v6.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc_opr_v6.c
+@@ -1060,7 +1060,7 @@ static int s5p_mfc_set_enc_params_h264(struct s5p_mfc_ctx *ctx)
+ 	}
+ 
+ 	/* aspect ratio VUI */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x1 << 5);
+ 	reg |= ((p_h264->vui_sar & 0x1) << 5);
+ 	writel(reg, mfc_regs->e_h264_options);
+@@ -1083,7 +1083,7 @@ static int s5p_mfc_set_enc_params_h264(struct s5p_mfc_ctx *ctx)
+ 
+ 	/* intra picture period for H.264 open GOP */
+ 	/* control */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x1 << 4);
+ 	reg |= ((p_h264->open_gop & 0x1) << 4);
+ 	writel(reg, mfc_regs->e_h264_options);
+@@ -1097,23 +1097,23 @@ static int s5p_mfc_set_enc_params_h264(struct s5p_mfc_ctx *ctx)
+ 	}
+ 
+ 	/* 'WEIGHTED_BI_PREDICTION' for B is disable */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x3 << 9);
+ 	writel(reg, mfc_regs->e_h264_options);
+ 
+ 	/* 'CONSTRAINED_INTRA_PRED_ENABLE' is disable */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x1 << 14);
+ 	writel(reg, mfc_regs->e_h264_options);
+ 
+ 	/* ASO */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x1 << 6);
+ 	reg |= ((p_h264->aso & 0x1) << 6);
+ 	writel(reg, mfc_regs->e_h264_options);
+ 
+ 	/* hier qp enable */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x1 << 8);
+ 	reg |= ((p_h264->open_gop & 0x1) << 8);
+ 	writel(reg, mfc_regs->e_h264_options);
+@@ -1134,7 +1134,7 @@ static int s5p_mfc_set_enc_params_h264(struct s5p_mfc_ctx *ctx)
+ 	writel(reg, mfc_regs->e_h264_num_t_layer);
+ 
+ 	/* frame packing SEI generation */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x1 << 25);
+ 	reg |= ((p_h264->sei_frame_packing & 0x1) << 25);
+ 	writel(reg, mfc_regs->e_h264_options);
+diff --git a/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c b/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c
+index dbe7788083a4b..b7e0ec265b70c 100644
+--- a/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c
++++ b/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c
+@@ -937,6 +937,7 @@ static int configure_channels(struct c8sectpfei *fei)
+ 		if (ret) {
+ 			dev_err(fei->dev,
+ 				"configure_memdma_and_inputblock failed\n");
++			of_node_put(child);
+ 			goto err_unmap;
+ 		}
+ 		index++;
+diff --git a/drivers/media/radio/si470x/radio-si470x-usb.c b/drivers/media/radio/si470x/radio-si470x-usb.c
+index 3f8634a465730..1365ae732b799 100644
+--- a/drivers/media/radio/si470x/radio-si470x-usb.c
++++ b/drivers/media/radio/si470x/radio-si470x-usb.c
+@@ -733,8 +733,10 @@ static int si470x_usb_driver_probe(struct usb_interface *intf,
+ 
+ 	/* start radio */
+ 	retval = si470x_start_usb(radio);
+-	if (retval < 0)
++	if (retval < 0 && !radio->int_in_running)
+ 		goto err_buf;
++	else if (retval < 0)	/* in case of radio->int_in_running == 1 */
++		goto err_all;
+ 
+ 	/* set initial frequency */
+ 	si470x_set_freq(radio, 87.5 * FREQ_MUL); /* available in all regions */
+diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c
+index bc9ac6002e259..98a38755c694e 100644
+--- a/drivers/media/rc/imon.c
++++ b/drivers/media/rc/imon.c
+@@ -646,15 +646,14 @@ static int send_packet(struct imon_context *ictx)
+ 		pr_err_ratelimited("error submitting urb(%d)\n", retval);
+ 	} else {
+ 		/* Wait for transmission to complete (or abort) */
+-		mutex_unlock(&ictx->lock);
+ 		retval = wait_for_completion_interruptible(
+ 				&ictx->tx.finished);
+ 		if (retval) {
+ 			usb_kill_urb(ictx->tx_urb);
+ 			pr_err_ratelimited("task interrupted\n");
+ 		}
+-		mutex_lock(&ictx->lock);
+ 
++		ictx->tx.busy = false;
+ 		retval = ictx->tx.status;
+ 		if (retval)
+ 			pr_err_ratelimited("packet tx failed (%d)\n", retval);
+@@ -958,7 +957,8 @@ static ssize_t vfd_write(struct file *file, const char __user *buf,
+ 	if (ictx->disconnected)
+ 		return -ENODEV;
+ 
+-	mutex_lock(&ictx->lock);
++	if (mutex_lock_interruptible(&ictx->lock))
++		return -ERESTARTSYS;
+ 
+ 	if (!ictx->dev_present_intf0) {
+ 		pr_err_ratelimited("no iMON device present\n");
+diff --git a/drivers/media/test-drivers/vidtv/vidtv_bridge.c b/drivers/media/test-drivers/vidtv/vidtv_bridge.c
+index fc64d0c8492a9..3c281265a9ecc 100644
+--- a/drivers/media/test-drivers/vidtv/vidtv_bridge.c
++++ b/drivers/media/test-drivers/vidtv/vidtv_bridge.c
+@@ -456,26 +456,20 @@ fail_dmx_conn:
+ 	for (j = j - 1; j >= 0; --j)
+ 		dvb->demux.dmx.remove_frontend(&dvb->demux.dmx,
+ 					       &dvb->dmx_fe[j]);
+-fail_dmx_dev:
+ 	dvb_dmxdev_release(&dvb->dmx_dev);
+-fail_dmx:
++fail_dmx_dev:
+ 	dvb_dmx_release(&dvb->demux);
++fail_dmx:
++fail_demod_probe:
++	for (i = i - 1; i >= 0; --i) {
++		dvb_unregister_frontend(dvb->fe[i]);
+ fail_fe:
+-	for (j = i; j >= 0; --j)
+-		dvb_unregister_frontend(dvb->fe[j]);
++		dvb_module_release(dvb->i2c_client_tuner[i]);
+ fail_tuner_probe:
+-	for (j = i; j >= 0; --j)
+-		if (dvb->i2c_client_tuner[j])
+-			dvb_module_release(dvb->i2c_client_tuner[j]);
+-
+-fail_demod_probe:
+-	for (j = i; j >= 0; --j)
+-		if (dvb->i2c_client_demod[j])
+-			dvb_module_release(dvb->i2c_client_demod[j]);
+-
++		dvb_module_release(dvb->i2c_client_demod[i]);
++	}
+ fail_adapter:
+ 	dvb_unregister_adapter(&dvb->adapter);
+-
+ fail_i2c:
+ 	i2c_del_adapter(&dvb->i2c_adapter);
+ 
+diff --git a/drivers/media/test-drivers/vimc/vimc-core.c b/drivers/media/test-drivers/vimc/vimc-core.c
+index 4b0ae6f51d765..857529ce3638a 100644
+--- a/drivers/media/test-drivers/vimc/vimc-core.c
++++ b/drivers/media/test-drivers/vimc/vimc-core.c
+@@ -357,7 +357,7 @@ static int __init vimc_init(void)
+ 	if (ret) {
+ 		dev_err(&vimc_pdev.dev,
+ 			"platform driver registration failed (err=%d)\n", ret);
+-		platform_driver_unregister(&vimc_pdrv);
++		platform_device_unregister(&vimc_pdev);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+index d493bd17481b0..437889e51ca05 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+@@ -961,6 +961,7 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
+ 			if (dev->has_compose_cap) {
+ 				v4l2_rect_set_min_size(compose, &min_rect);
+ 				v4l2_rect_set_max_size(compose, &max_rect);
++				v4l2_rect_map_inside(compose, &fmt);
+ 			}
+ 			dev->fmt_cap_rect = fmt;
+ 			tpg_s_buf_height(&dev->tpg, fmt.height);
+diff --git a/drivers/media/usb/dvb-usb/az6027.c b/drivers/media/usb/dvb-usb/az6027.c
+index 86788771175b7..32b4ee65c2802 100644
+--- a/drivers/media/usb/dvb-usb/az6027.c
++++ b/drivers/media/usb/dvb-usb/az6027.c
+@@ -975,6 +975,10 @@ static int az6027_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int n
+ 		if (msg[i].addr == 0x99) {
+ 			req = 0xBE;
+ 			index = 0;
++			if (msg[i].len < 1) {
++				i = -EOPNOTSUPP;
++				break;
++			}
+ 			value = msg[i].buf[0] & 0x00ff;
+ 			length = 1;
+ 			az6027_usb_out_op(d, req, value, index, data, length);
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+index 61439c8f33cab..58eea8ab54779 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-init.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+@@ -81,7 +81,7 @@ static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs)
+ 
+ 		ret = dvb_usb_adapter_stream_init(adap);
+ 		if (ret)
+-			return ret;
++			goto stream_init_err;
+ 
+ 		ret = dvb_usb_adapter_dvb_init(adap, adapter_nrs);
+ 		if (ret)
+@@ -114,6 +114,8 @@ frontend_init_err:
+ 	dvb_usb_adapter_dvb_exit(adap);
+ dvb_init_err:
+ 	dvb_usb_adapter_stream_exit(adap);
++stream_init_err:
++	kfree(adap->priv);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/v4l2-core/videobuf-dma-contig.c b/drivers/media/v4l2-core/videobuf-dma-contig.c
+index 52312ce2ba056..f2c4393595574 100644
+--- a/drivers/media/v4l2-core/videobuf-dma-contig.c
++++ b/drivers/media/v4l2-core/videobuf-dma-contig.c
+@@ -36,12 +36,11 @@ struct videobuf_dma_contig_memory {
+ 
+ static int __videobuf_dc_alloc(struct device *dev,
+ 			       struct videobuf_dma_contig_memory *mem,
+-			       unsigned long size, gfp_t flags)
++			       unsigned long size)
+ {
+ 	mem->size = size;
+-	mem->vaddr = dma_alloc_coherent(dev, mem->size,
+-					&mem->dma_handle, flags);
+-
++	mem->vaddr = dma_alloc_coherent(dev, mem->size, &mem->dma_handle,
++					GFP_KERNEL);
+ 	if (!mem->vaddr) {
+ 		dev_err(dev, "memory alloc size %ld failed\n", mem->size);
+ 		return -ENOMEM;
+@@ -258,8 +257,7 @@ static int __videobuf_iolock(struct videobuf_queue *q,
+ 			return videobuf_dma_contig_user_get(mem, vb);
+ 
+ 		/* allocate memory for the read() method */
+-		if (__videobuf_dc_alloc(q->dev, mem, PAGE_ALIGN(vb->size),
+-					GFP_KERNEL))
++		if (__videobuf_dc_alloc(q->dev, mem, PAGE_ALIGN(vb->size)))
+ 			return -ENOMEM;
+ 		break;
+ 	case V4L2_MEMORY_OVERLAY:
+@@ -295,22 +293,18 @@ static int __videobuf_mmap_mapper(struct videobuf_queue *q,
+ 	BUG_ON(!mem);
+ 	MAGIC_CHECK(mem->magic, MAGIC_DC_MEM);
+ 
+-	if (__videobuf_dc_alloc(q->dev, mem, PAGE_ALIGN(buf->bsize),
+-				GFP_KERNEL | __GFP_COMP))
++	if (__videobuf_dc_alloc(q->dev, mem, PAGE_ALIGN(buf->bsize)))
+ 		goto error;
+ 
+-	/* Try to remap memory */
+-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+-
+ 	/* the "vm_pgoff" is just used in v4l2 to find the
+ 	 * corresponding buffer data structure which is allocated
+ 	 * earlier and it does not mean the offset from the physical
+ 	 * buffer start address as usual. So set it to 0 to pass
+-	 * the sanity check in vm_iomap_memory().
++	 * the sanity check in dma_mmap_coherent().
+ 	 */
+ 	vma->vm_pgoff = 0;
+-
+-	retval = vm_iomap_memory(vma, mem->dma_handle, mem->size);
++	retval = dma_mmap_coherent(q->dev, vma, mem->vaddr, mem->dma_handle,
++				   mem->size);
+ 	if (retval) {
+ 		dev_err(q->dev, "mmap: remap failed with error %d. ",
+ 			retval);
+diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
+index 186308f1f8eba..6334376826a92 100644
+--- a/drivers/misc/cxl/guest.c
++++ b/drivers/misc/cxl/guest.c
+@@ -959,10 +959,10 @@ int cxl_guest_init_afu(struct cxl *adapter, int slice, struct device_node *afu_n
+ 	 * if it returns an error!
+ 	 */
+ 	if ((rc = cxl_register_afu(afu)))
+-		goto err_put1;
++		goto err_put_dev;
+ 
+ 	if ((rc = cxl_sysfs_afu_add(afu)))
+-		goto err_put1;
++		goto err_del_dev;
+ 
+ 	/*
+ 	 * pHyp doesn't expose the programming models supported by the
+@@ -978,7 +978,7 @@ int cxl_guest_init_afu(struct cxl *adapter, int slice, struct device_node *afu_n
+ 		afu->modes_supported = CXL_MODE_DIRECTED;
+ 
+ 	if ((rc = cxl_afu_select_best_mode(afu)))
+-		goto err_put2;
++		goto err_remove_sysfs;
+ 
+ 	adapter->afu[afu->slice] = afu;
+ 
+@@ -998,10 +998,12 @@ int cxl_guest_init_afu(struct cxl *adapter, int slice, struct device_node *afu_n
+ 
+ 	return 0;
+ 
+-err_put2:
++err_remove_sysfs:
+ 	cxl_sysfs_afu_remove(afu);
+-err_put1:
+-	device_unregister(&afu->dev);
++err_del_dev:
++	device_del(&afu->dev);
++err_put_dev:
++	put_device(&afu->dev);
+ 	free = false;
+ 	guest_release_serr_irq(afu);
+ err2:
+@@ -1135,18 +1137,20 @@ struct cxl *cxl_guest_init_adapter(struct device_node *np, struct platform_devic
+ 	 * even if it returns an error!
+ 	 */
+ 	if ((rc = cxl_register_adapter(adapter)))
+-		goto err_put1;
++		goto err_put_dev;
+ 
+ 	if ((rc = cxl_sysfs_adapter_add(adapter)))
+-		goto err_put1;
++		goto err_del_dev;
+ 
+ 	/* release the context lock as the adapter is configured */
+ 	cxl_adapter_context_unlock(adapter);
+ 
+ 	return adapter;
+ 
+-err_put1:
+-	device_unregister(&adapter->dev);
++err_del_dev:
++	device_del(&adapter->dev);
++err_put_dev:
++	put_device(&adapter->dev);
+ 	free = false;
+ 	cxl_guest_remove_chardev(adapter);
+ err1:
+diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
+index 2ba899f5659ff..d183836d80e3f 100644
+--- a/drivers/misc/cxl/pci.c
++++ b/drivers/misc/cxl/pci.c
+@@ -387,6 +387,7 @@ int cxl_calc_capp_routing(struct pci_dev *dev, u64 *chipid,
+ 	rc = get_phb_index(np, phb_index);
+ 	if (rc) {
+ 		pr_err("cxl: invalid phb index\n");
++		of_node_put(np);
+ 		return rc;
+ 	}
+ 
+@@ -1164,10 +1165,10 @@ static int pci_init_afu(struct cxl *adapter, int slice, struct pci_dev *dev)
+ 	 * if it returns an error!
+ 	 */
+ 	if ((rc = cxl_register_afu(afu)))
+-		goto err_put1;
++		goto err_put_dev;
+ 
+ 	if ((rc = cxl_sysfs_afu_add(afu)))
+-		goto err_put1;
++		goto err_del_dev;
+ 
+ 	adapter->afu[afu->slice] = afu;
+ 
+@@ -1176,10 +1177,12 @@ static int pci_init_afu(struct cxl *adapter, int slice, struct pci_dev *dev)
+ 
+ 	return 0;
+ 
+-err_put1:
++err_del_dev:
++	device_del(&afu->dev);
++err_put_dev:
+ 	pci_deconfigure_afu(afu);
+ 	cxl_debugfs_afu_remove(afu);
+-	device_unregister(&afu->dev);
++	put_device(&afu->dev);
+ 	return rc;
+ 
+ err_free_native:
+@@ -1667,23 +1670,25 @@ static struct cxl *cxl_pci_init_adapter(struct pci_dev *dev)
+ 	 * even if it returns an error!
+ 	 */
+ 	if ((rc = cxl_register_adapter(adapter)))
+-		goto err_put1;
++		goto err_put_dev;
+ 
+ 	if ((rc = cxl_sysfs_adapter_add(adapter)))
+-		goto err_put1;
++		goto err_del_dev;
+ 
+ 	/* Release the context lock as adapter is configured */
+ 	cxl_adapter_context_unlock(adapter);
+ 
+ 	return adapter;
+ 
+-err_put1:
++err_del_dev:
++	device_del(&adapter->dev);
++err_put_dev:
+ 	/* This should mirror cxl_remove_adapter, except without the
+ 	 * sysfs parts
+ 	 */
+ 	cxl_debugfs_adapter_remove(adapter);
+ 	cxl_deconfigure_adapter(adapter);
+-	device_unregister(&adapter->dev);
++	put_device(&adapter->dev);
+ 	return ERR_PTR(rc);
+ 
+ err_release:
+diff --git a/drivers/misc/ocxl/config.c b/drivers/misc/ocxl/config.c
+index 4d490b92d951f..3ced98b506f43 100644
+--- a/drivers/misc/ocxl/config.c
++++ b/drivers/misc/ocxl/config.c
+@@ -204,6 +204,18 @@ static int read_dvsec_vendor(struct pci_dev *dev)
+ 	return 0;
+ }
+ 
++/**
++ * get_dvsec_vendor0() - Find a related PCI device (function 0)
++ * @dev: PCI device to match
++ * @dev0: The PCI device (function 0) found
++ * @out_pos: The position of PCI device (function 0)
++ *
++ * Returns 0 on success, negative on failure.
++ *
++ * NOTE: If it's successful, the reference of dev0 is increased,
++ * so after using it, the callers must call pci_dev_put() to give
++ * up the reference.
++ */
+ static int get_dvsec_vendor0(struct pci_dev *dev, struct pci_dev **dev0,
+ 			     int *out_pos)
+ {
+@@ -213,10 +225,14 @@ static int get_dvsec_vendor0(struct pci_dev *dev, struct pci_dev **dev0,
+ 		dev = get_function_0(dev);
+ 		if (!dev)
+ 			return -1;
++	} else {
++		dev = pci_dev_get(dev);
+ 	}
+ 	pos = find_dvsec(dev, OCXL_DVSEC_VENDOR_ID);
+-	if (!pos)
++	if (!pos) {
++		pci_dev_put(dev);
+ 		return -1;
++	}
+ 	*dev0 = dev;
+ 	*out_pos = pos;
+ 	return 0;
+@@ -233,6 +249,7 @@ int ocxl_config_get_reset_reload(struct pci_dev *dev, int *val)
+ 
+ 	pci_read_config_dword(dev0, pos + OCXL_DVSEC_VENDOR_RESET_RELOAD,
+ 			      &reset_reload);
++	pci_dev_put(dev0);
+ 	*val = !!(reset_reload & BIT(0));
+ 	return 0;
+ }
+@@ -254,6 +271,7 @@ int ocxl_config_set_reset_reload(struct pci_dev *dev, int val)
+ 		reset_reload &= ~BIT(0);
+ 	pci_write_config_dword(dev0, pos + OCXL_DVSEC_VENDOR_RESET_RELOAD,
+ 			       reset_reload);
++	pci_dev_put(dev0);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c
+index e094809b54ff5..524ded87964d1 100644
+--- a/drivers/misc/ocxl/file.c
++++ b/drivers/misc/ocxl/file.c
+@@ -543,8 +543,11 @@ int ocxl_file_register_afu(struct ocxl_afu *afu)
+ 		goto err_put;
+ 
+ 	rc = device_register(&info->dev);
+-	if (rc)
+-		goto err_put;
++	if (rc) {
++		free_minor(info);
++		put_device(&info->dev);
++		return rc;
++	}
+ 
+ 	rc = ocxl_sysfs_register_afu(info);
+ 	if (rc)
+diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
+index 723825524ea0c..9c7d475d18901 100644
+--- a/drivers/misc/sgi-gru/grufault.c
++++ b/drivers/misc/sgi-gru/grufault.c
+@@ -648,6 +648,7 @@ int gru_handle_user_call_os(unsigned long cb)
+ 	if ((cb & (GRU_HANDLE_STRIDE - 1)) || ucbnum >= GRU_NUM_CB)
+ 		return -EINVAL;
+ 
++again:
+ 	gts = gru_find_lock_gts(cb);
+ 	if (!gts)
+ 		return -EINVAL;
+@@ -656,7 +657,11 @@ int gru_handle_user_call_os(unsigned long cb)
+ 	if (ucbnum >= gts->ts_cbr_au_count * GRU_CBR_AU_SIZE)
+ 		goto exit;
+ 
+-	gru_check_context_placement(gts);
++	if (gru_check_context_placement(gts)) {
++		gru_unlock_gts(gts);
++		gru_unload_context(gts, 1);
++		goto again;
++	}
+ 
+ 	/*
+ 	 * CCH may contain stale data if ts_force_cch_reload is set.
+@@ -874,7 +879,11 @@ int gru_set_context_option(unsigned long arg)
+ 		} else {
+ 			gts->ts_user_blade_id = req.val1;
+ 			gts->ts_user_chiplet_id = req.val0;
+-			gru_check_context_placement(gts);
++			if (gru_check_context_placement(gts)) {
++				gru_unlock_gts(gts);
++				gru_unload_context(gts, 1);
++				return ret;
++			}
+ 		}
+ 		break;
+ 	case sco_gseg_owner:
+diff --git a/drivers/misc/sgi-gru/grumain.c b/drivers/misc/sgi-gru/grumain.c
+index 40ac59dd018c9..e2325e3d077ea 100644
+--- a/drivers/misc/sgi-gru/grumain.c
++++ b/drivers/misc/sgi-gru/grumain.c
+@@ -716,9 +716,10 @@ static int gru_check_chiplet_assignment(struct gru_state *gru,
+  * chiplet. Misassignment can occur if the process migrates to a different
+  * blade or if the user changes the selected blade/chiplet.
+  */
+-void gru_check_context_placement(struct gru_thread_state *gts)
++int gru_check_context_placement(struct gru_thread_state *gts)
+ {
+ 	struct gru_state *gru;
++	int ret = 0;
+ 
+ 	/*
+ 	 * If the current task is the context owner, verify that the
+@@ -726,15 +727,23 @@ void gru_check_context_placement(struct gru_thread_state *gts)
+ 	 * references. Pthread apps use non-owner references to the CBRs.
+ 	 */
+ 	gru = gts->ts_gru;
++	/*
++	 * If gru or gts->ts_tgid_owner isn't initialized properly, return
++	 * success to indicate that the caller does not need to unload the
++	 * gru context.The caller is responsible for their inspection and
++	 * reinitialization if needed.
++	 */
+ 	if (!gru || gts->ts_tgid_owner != current->tgid)
+-		return;
++		return ret;
+ 
+ 	if (!gru_check_chiplet_assignment(gru, gts)) {
+ 		STAT(check_context_unload);
+-		gru_unload_context(gts, 1);
++		ret = -EINVAL;
+ 	} else if (gru_retarget_intr(gts)) {
+ 		STAT(check_context_retarget_intr);
+ 	}
++
++	return ret;
+ }
+ 
+ 
+@@ -934,7 +943,12 @@ again:
+ 	mutex_lock(&gts->ts_ctxlock);
+ 	preempt_disable();
+ 
+-	gru_check_context_placement(gts);
++	if (gru_check_context_placement(gts)) {
++		preempt_enable();
++		mutex_unlock(&gts->ts_ctxlock);
++		gru_unload_context(gts, 1);
++		return VM_FAULT_NOPAGE;
++	}
+ 
+ 	if (!gts->ts_gru) {
+ 		STAT(load_user_context);
+diff --git a/drivers/misc/sgi-gru/grutables.h b/drivers/misc/sgi-gru/grutables.h
+index 5ce8f3081e960..10f0a083b1fab 100644
+--- a/drivers/misc/sgi-gru/grutables.h
++++ b/drivers/misc/sgi-gru/grutables.h
+@@ -637,7 +637,7 @@ extern int gru_user_flush_tlb(unsigned long arg);
+ extern int gru_user_unload_context(unsigned long arg);
+ extern int gru_get_exception_detail(unsigned long arg);
+ extern int gru_set_context_option(unsigned long address);
+-extern void gru_check_context_placement(struct gru_thread_state *gts);
++extern int gru_check_context_placement(struct gru_thread_state *gts);
+ extern int gru_cpu_fault_map_id(void);
+ extern struct vm_area_struct *gru_find_vma(unsigned long vaddr);
+ extern void gru_flush_all_tlb(struct gru_state *gru);
+diff --git a/drivers/misc/tifm_7xx1.c b/drivers/misc/tifm_7xx1.c
+index 228f2eb1d4762..2aebbfda104d8 100644
+--- a/drivers/misc/tifm_7xx1.c
++++ b/drivers/misc/tifm_7xx1.c
+@@ -190,7 +190,7 @@ static void tifm_7xx1_switch_media(struct work_struct *work)
+ 				spin_unlock_irqrestore(&fm->lock, flags);
+ 			}
+ 			if (sock)
+-				tifm_free_device(&sock->dev);
++				put_device(&sock->dev);
+ 		}
+ 		spin_lock_irqsave(&fm->lock, flags);
+ 	}
+diff --git a/drivers/mmc/host/alcor.c b/drivers/mmc/host/alcor.c
+index bfb8efeb7eb80..d01df01d4b4d1 100644
+--- a/drivers/mmc/host/alcor.c
++++ b/drivers/mmc/host/alcor.c
+@@ -1114,7 +1114,10 @@ static int alcor_pci_sdmmc_drv_probe(struct platform_device *pdev)
+ 	alcor_hw_init(host);
+ 
+ 	dev_set_drvdata(&pdev->dev, host);
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto free_host;
++
+ 	return 0;
+ 
+ free_host:
+diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
+index 444bd3a0a9227..af85b32c6c1c8 100644
+--- a/drivers/mmc/host/atmel-mci.c
++++ b/drivers/mmc/host/atmel-mci.c
+@@ -2223,6 +2223,7 @@ static int atmci_init_slot(struct atmel_mci *host,
+ {
+ 	struct mmc_host			*mmc;
+ 	struct atmel_mci_slot		*slot;
++	int ret;
+ 
+ 	mmc = mmc_alloc_host(sizeof(struct atmel_mci_slot), &host->pdev->dev);
+ 	if (!mmc)
+@@ -2306,11 +2307,13 @@ static int atmci_init_slot(struct atmel_mci *host,
+ 
+ 	host->slot[id] = slot;
+ 	mmc_regulator_get_supply(mmc);
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret) {
++		mmc_free_host(mmc);
++		return ret;
++	}
+ 
+ 	if (gpio_is_valid(slot->detect_pin)) {
+-		int ret;
+-
+ 		timer_setup(&slot->detect_timer, atmci_detect_change, 0);
+ 
+ 		ret = request_irq(gpio_to_irq(slot->detect_pin),
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index bccc85b3fc50a..19a6b55e344fe 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -1280,7 +1280,9 @@ static int meson_mmc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	mmc->ops = &meson_mmc_ops;
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto err_free_irq;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
+index b5684e5d79e60..5d83c8e7bf5cf 100644
+--- a/drivers/mmc/host/mmci.c
++++ b/drivers/mmc/host/mmci.c
+@@ -2191,7 +2191,9 @@ static int mmci_probe(struct amba_device *dev,
+ 	pm_runtime_set_autosuspend_delay(&dev->dev, 50);
+ 	pm_runtime_use_autosuspend(&dev->dev);
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto clk_disable;
+ 
+ 	pm_runtime_put(&dev->dev);
+ 	return 0;
+diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
+index c16300b921391..fb96bb76eefbe 100644
+--- a/drivers/mmc/host/moxart-mmc.c
++++ b/drivers/mmc/host/moxart-mmc.c
+@@ -668,7 +668,9 @@ static int moxart_probe(struct platform_device *pdev)
+ 		goto out;
+ 
+ 	dev_set_drvdata(dev, mmc);
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto out;
+ 
+ 	dev_dbg(dev, "IRQ=%d, FIFO is %d bytes\n", irq, host->fifo_width);
+ 
+diff --git a/drivers/mmc/host/mxcmmc.c b/drivers/mmc/host/mxcmmc.c
+index 12ee07285980c..93a105b075645 100644
+--- a/drivers/mmc/host/mxcmmc.c
++++ b/drivers/mmc/host/mxcmmc.c
+@@ -1167,7 +1167,9 @@ static int mxcmci_probe(struct platform_device *pdev)
+ 
+ 	timer_setup(&host->watchdog, mxcmci_watchdog, 0);
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto out_free_dma;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
+index aa9cc49206d15..5b6ede81fc9f2 100644
+--- a/drivers/mmc/host/omap_hsmmc.c
++++ b/drivers/mmc/host/omap_hsmmc.c
+@@ -1987,7 +1987,9 @@ static int omap_hsmmc_probe(struct platform_device *pdev)
+ 	if (!ret)
+ 		mmc->caps |= MMC_CAP_SDIO_IRQ;
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto err_irq;
+ 
+ 	if (mmc_pdata(host)->name != NULL) {
+ 		ret = device_create_file(&mmc->class_dev, &dev_attr_slot_name);
+diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c
+index 55868b6b86583..e25e9bb34eb39 100644
+--- a/drivers/mmc/host/pxamci.c
++++ b/drivers/mmc/host/pxamci.c
+@@ -763,7 +763,12 @@ static int pxamci_probe(struct platform_device *pdev)
+ 			dev_warn(dev, "gpio_ro and get_ro() both defined\n");
+ 	}
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret) {
++		if (host->pdata && host->pdata->exit)
++			host->pdata->exit(dev, mmc);
++		goto out;
++	}
+ 
+ 	return 0;
+ 
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index ac01fb518386a..a49b8fe2a0982 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -537,7 +537,7 @@ static void renesas_sdhi_reset_hs400_mode(struct tmio_mmc_host *host,
+ 			 SH_MOBILE_SDHI_SCC_TMPPORT2_HS400OSEL) &
+ 			sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_TMPPORT2));
+ 
+-	if (priv->adjust_hs400_calib_table)
++	if (priv->quirks && (priv->quirks->hs400_calib_table || priv->quirks->hs400_bad_taps))
+ 		renesas_sdhi_adjust_hs400_mode_disable(host);
+ 
+ 	sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, CLK_CTL_SCLKEN |
+diff --git a/drivers/mmc/host/rtsx_usb_sdmmc.c b/drivers/mmc/host/rtsx_usb_sdmmc.c
+index 5fe4528e296e6..1be3a355f10d5 100644
+--- a/drivers/mmc/host/rtsx_usb_sdmmc.c
++++ b/drivers/mmc/host/rtsx_usb_sdmmc.c
+@@ -1332,6 +1332,7 @@ static int rtsx_usb_sdmmc_drv_probe(struct platform_device *pdev)
+ #ifdef RTSX_USB_USE_LEDS_CLASS
+ 	int err;
+ #endif
++	int ret;
+ 
+ 	ucr = usb_get_intfdata(to_usb_interface(pdev->dev.parent));
+ 	if (!ucr)
+@@ -1368,7 +1369,15 @@ static int rtsx_usb_sdmmc_drv_probe(struct platform_device *pdev)
+ 	INIT_WORK(&host->led_work, rtsx_usb_update_led);
+ 
+ #endif
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret) {
++#ifdef RTSX_USB_USE_LEDS_CLASS
++		led_classdev_unregister(&host->led);
++#endif
++		mmc_free_host(mmc);
++		pm_runtime_disable(&pdev->dev);
++		return ret;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c
+index 110ee0c804c8a..540ebccaa9a35 100644
+--- a/drivers/mmc/host/sdhci-sprd.c
++++ b/drivers/mmc/host/sdhci-sprd.c
+@@ -224,13 +224,15 @@ static inline void _sdhci_sprd_set_clock(struct sdhci_host *host,
+ 	div = ((div & 0x300) >> 2) | ((div & 0xFF) << 8);
+ 	sdhci_enable_clk(host, div);
+ 
+-	/* enable auto gate sdhc_enable_auto_gate */
+-	val = sdhci_readl(host, SDHCI_SPRD_REG_32_BUSY_POSI);
+-	mask = SDHCI_SPRD_BIT_OUTR_CLK_AUTO_EN |
+-	       SDHCI_SPRD_BIT_INNR_CLK_AUTO_EN;
+-	if (mask != (val & mask)) {
+-		val |= mask;
+-		sdhci_writel(host, val, SDHCI_SPRD_REG_32_BUSY_POSI);
++	/* Enable CLK_AUTO when the clock is greater than 400K. */
++	if (clk > 400000) {
++		val = sdhci_readl(host, SDHCI_SPRD_REG_32_BUSY_POSI);
++		mask = SDHCI_SPRD_BIT_OUTR_CLK_AUTO_EN |
++			SDHCI_SPRD_BIT_INNR_CLK_AUTO_EN;
++		if (mask != (val & mask)) {
++			val |= mask;
++			sdhci_writel(host, val, SDHCI_SPRD_REG_32_BUSY_POSI);
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/mmc/host/sdhci_f_sdh30.c b/drivers/mmc/host/sdhci_f_sdh30.c
+index 3f5977979cf25..6c4f43e112826 100644
+--- a/drivers/mmc/host/sdhci_f_sdh30.c
++++ b/drivers/mmc/host/sdhci_f_sdh30.c
+@@ -168,6 +168,9 @@ static int sdhci_f_sdh30_probe(struct platform_device *pdev)
+ 	if (reg & SDHCI_CAN_DO_8BIT)
+ 		priv->vendor_hs200 = F_SDH30_EMMC_HS200;
+ 
++	if (!(reg & SDHCI_TIMEOUT_CLK_MASK))
++		host->quirks |= SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK;
++
+ 	ret = sdhci_add_host(host);
+ 	if (ret)
+ 		goto err_add_host;
+diff --git a/drivers/mmc/host/toshsd.c b/drivers/mmc/host/toshsd.c
+index 8d037c2071abc..497791ffada6d 100644
+--- a/drivers/mmc/host/toshsd.c
++++ b/drivers/mmc/host/toshsd.c
+@@ -651,7 +651,9 @@ static int toshsd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (ret)
+ 		goto unmap;
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto free_irq;
+ 
+ 	base = pci_resource_start(pdev, 0);
+ 	dev_dbg(&pdev->dev, "MMIO %pa, IRQ %d\n", &base, pdev->irq);
+@@ -660,6 +662,8 @@ static int toshsd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	return 0;
+ 
++free_irq:
++	free_irq(pdev->irq, host);
+ unmap:
+ 	pci_iounmap(pdev, host->ioaddr);
+ release:
+diff --git a/drivers/mmc/host/via-sdmmc.c b/drivers/mmc/host/via-sdmmc.c
+index f07c71db3cafe..f6b525fb5c0ed 100644
+--- a/drivers/mmc/host/via-sdmmc.c
++++ b/drivers/mmc/host/via-sdmmc.c
+@@ -1154,7 +1154,9 @@ static int via_sd_probe(struct pci_dev *pcidev,
+ 	    pcidev->subsystem_device == 0x3891)
+ 		sdhost->quirks = VIA_CRDR_QUIRK_300MS_PWRDELAY;
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto unmap;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/mmc/host/vub300.c b/drivers/mmc/host/vub300.c
+index 97beece62fec4..72f65f32abbc7 100644
+--- a/drivers/mmc/host/vub300.c
++++ b/drivers/mmc/host/vub300.c
+@@ -2049,6 +2049,7 @@ static void vub300_enable_sdio_irq(struct mmc_host *mmc, int enable)
+ 		return;
+ 	kref_get(&vub300->kref);
+ 	if (enable) {
++		set_current_state(TASK_RUNNING);
+ 		mutex_lock(&vub300->irq_mutex);
+ 		if (vub300->irqs_queued) {
+ 			vub300->irqs_queued -= 1;
+@@ -2064,6 +2065,7 @@ static void vub300_enable_sdio_irq(struct mmc_host *mmc, int enable)
+ 			vub300_queue_poll_work(vub300, 0);
+ 		}
+ 		mutex_unlock(&vub300->irq_mutex);
++		set_current_state(TASK_INTERRUPTIBLE);
+ 	} else {
+ 		vub300->irq_enabled = 0;
+ 	}
+@@ -2299,14 +2301,14 @@ static int vub300_probe(struct usb_interface *interface,
+ 				0x0000, 0x0000, &vub300->system_port_status,
+ 				sizeof(vub300->system_port_status), 1000);
+ 	if (retval < 0) {
+-		goto error4;
++		goto error5;
+ 	} else if (sizeof(vub300->system_port_status) == retval) {
+ 		vub300->card_present =
+ 			(0x0001 & vub300->system_port_status.port_flags) ? 1 : 0;
+ 		vub300->read_only =
+ 			(0x0010 & vub300->system_port_status.port_flags) ? 1 : 0;
+ 	} else {
+-		goto error4;
++		goto error5;
+ 	}
+ 	usb_set_intfdata(interface, vub300);
+ 	INIT_DELAYED_WORK(&vub300->pollwork, vub300_pollwork_thread);
+@@ -2329,8 +2331,13 @@ static int vub300_probe(struct usb_interface *interface,
+ 			 "USB vub300 remote SDIO host controller[%d]"
+ 			 "connected with no SD/SDIO card inserted\n",
+ 			 interface_to_InterfaceNumber(interface));
+-	mmc_add_host(mmc);
++	retval = mmc_add_host(mmc);
++	if (retval)
++		goto error6;
++
+ 	return 0;
++error6:
++	del_timer_sync(&vub300->inactivity_timer);
+ error5:
+ 	mmc_free_host(mmc);
+ 	/*
+diff --git a/drivers/mmc/host/wbsd.c b/drivers/mmc/host/wbsd.c
+index cd63ea865b77f..f3090216e0dcc 100644
+--- a/drivers/mmc/host/wbsd.c
++++ b/drivers/mmc/host/wbsd.c
+@@ -1703,7 +1703,17 @@ static int wbsd_init(struct device *dev, int base, int irq, int dma,
+ 	 */
+ 	wbsd_init_device(host);
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret) {
++		if (!pnp)
++			wbsd_chip_poweroff(host);
++
++		wbsd_release_resources(host);
++		wbsd_free_mmc(dev);
++
++		mmc_free_host(mmc);
++		return ret;
++	}
+ 
+ 	pr_info("%s: W83L51xD", mmc_hostname(mmc));
+ 	if (host->chip_id != 0)
+diff --git a/drivers/mmc/host/wmt-sdmmc.c b/drivers/mmc/host/wmt-sdmmc.c
+index 8df722ec57edc..3933195488575 100644
+--- a/drivers/mmc/host/wmt-sdmmc.c
++++ b/drivers/mmc/host/wmt-sdmmc.c
+@@ -859,11 +859,15 @@ static int wmt_mci_probe(struct platform_device *pdev)
+ 	/* configure the controller to a known 'ready' state */
+ 	wmt_reset_hardware(mmc);
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto fail7;
+ 
+ 	dev_info(&pdev->dev, "WMT SDHC Controller initialized\n");
+ 
+ 	return 0;
++fail7:
++	clk_disable_unprepare(priv->clk_sdmmc);
+ fail6:
+ 	clk_put(priv->clk_sdmmc);
+ fail5_and_a_half:
+diff --git a/drivers/mtd/lpddr/lpddr2_nvm.c b/drivers/mtd/lpddr/lpddr2_nvm.c
+index 72f5c7b300790..add4386f99f00 100644
+--- a/drivers/mtd/lpddr/lpddr2_nvm.c
++++ b/drivers/mtd/lpddr/lpddr2_nvm.c
+@@ -433,6 +433,8 @@ static int lpddr2_nvm_probe(struct platform_device *pdev)
+ 
+ 	/* lpddr2_nvm address range */
+ 	add_range = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!add_range)
++		return -ENODEV;
+ 
+ 	/* Populate map_info data structure */
+ 	*map = (struct map_info) {
+diff --git a/drivers/mtd/maps/pxa2xx-flash.c b/drivers/mtd/maps/pxa2xx-flash.c
+index 7d96758a8f04e..6e5e557559704 100644
+--- a/drivers/mtd/maps/pxa2xx-flash.c
++++ b/drivers/mtd/maps/pxa2xx-flash.c
+@@ -66,6 +66,7 @@ static int pxa2xx_flash_probe(struct platform_device *pdev)
+ 	if (!info->map.virt) {
+ 		printk(KERN_WARNING "Failed to ioremap %s\n",
+ 		       info->map.name);
++		kfree(info);
+ 		return -ENOMEM;
+ 	}
+ 	info->map.cached = ioremap_cache(info->map.phys, info->map.size);
+@@ -87,6 +88,7 @@ static int pxa2xx_flash_probe(struct platform_device *pdev)
+ 		iounmap((void *)info->map.virt);
+ 		if (info->map.cached)
+ 			iounmap(info->map.cached);
++		kfree(info);
+ 		return -EIO;
+ 	}
+ 	info->mtd->dev.parent = &pdev->dev;
+diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
+index a5197a4819025..b2d88ff90e936 100644
+--- a/drivers/mtd/mtdcore.c
++++ b/drivers/mtd/mtdcore.c
+@@ -667,8 +667,10 @@ int add_mtd_device(struct mtd_info *mtd)
+ 	dev_set_drvdata(&mtd->dev, mtd);
+ 	of_node_get(mtd_get_of_node(mtd));
+ 	error = device_register(&mtd->dev);
+-	if (error)
++	if (error) {
++		put_device(&mtd->dev);
+ 		goto fail_added;
++	}
+ 
+ 	/* Add the nvmem provider */
+ 	error = mtd_nvmem_add(mtd);
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index e8146a47da123..2c256d455c9f3 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -1220,6 +1220,8 @@ spi_nor_find_best_erase_type(const struct spi_nor_erase_map *map,
+ 			continue;
+ 
+ 		erase = &map->erase_type[i];
++		if (!erase->size)
++			continue;
+ 
+ 		/* Alignment is not mandatory for overlaid regions */
+ 		if (region->offset & SNOR_OVERLAID_REGION &&
+diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
+index acb6ff0be5fff..320e5461853f7 100644
+--- a/drivers/net/bonding/bond_3ad.c
++++ b/drivers/net/bonding/bond_3ad.c
+@@ -1520,6 +1520,7 @@ static void ad_port_selection_logic(struct port *port, bool *update_slave_arr)
+ 			slave_err(bond->dev, port->slave->dev,
+ 				  "Port %d did not find a suitable aggregator\n",
+ 				  port->actor_port_number);
++			return;
+ 		}
+ 	}
+ 	/* if all aggregator's ports are READY_N == TRUE, set ready=TRUE
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index f38a6ce5749bb..c40b92f8d16bf 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2393,12 +2393,21 @@ static int bond_slave_info_query(struct net_device *bond_dev, struct ifslave *in
+ /* called with rcu_read_lock() */
+ static int bond_miimon_inspect(struct bonding *bond)
+ {
++	bool ignore_updelay = false;
+ 	int link_state, commit = 0;
+ 	struct list_head *iter;
+ 	struct slave *slave;
+-	bool ignore_updelay;
+ 
+-	ignore_updelay = !rcu_dereference(bond->curr_active_slave);
++	if (BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP) {
++		ignore_updelay = !rcu_dereference(bond->curr_active_slave);
++	} else {
++		struct bond_up_slave *usable_slaves;
++
++		usable_slaves = rcu_dereference(bond->usable_slaves);
++
++		if (usable_slaves && usable_slaves->count == 0)
++			ignore_updelay = true;
++	}
+ 
+ 	bond_for_each_slave_rcu(bond, slave, iter) {
+ 		bond_propose_link_state(slave, BOND_LINK_NOCHANGE);
+diff --git a/drivers/net/can/m_can/tcan4x5x.c b/drivers/net/can/m_can/tcan4x5x.c
+index f169d9090e52f..f903f78af087a 100644
+--- a/drivers/net/can/m_can/tcan4x5x.c
++++ b/drivers/net/can/m_can/tcan4x5x.c
+@@ -294,11 +294,6 @@ static int tcan4x5x_clear_interrupts(struct m_can_classdev *cdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = tcan4x5x_write_tcan_reg(cdev, TCAN4X5X_MCAN_INT_REG,
+-				      TCAN4X5X_ENABLE_MCAN_INT);
+-	if (ret)
+-		return ret;
+-
+ 	ret = tcan4x5x_write_tcan_reg(cdev, TCAN4X5X_INT_FLAGS,
+ 				      TCAN4X5X_CLEAR_ALL_INT);
+ 	if (ret)
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
+index 62958f04a2f20..5699531f87873 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
+@@ -76,6 +76,14 @@ struct kvaser_usb_tx_urb_context {
+ 	int dlc;
+ };
+ 
++struct kvaser_usb_busparams {
++	__le32 bitrate;
++	u8 tseg1;
++	u8 tseg2;
++	u8 sjw;
++	u8 nsamples;
++} __packed;
++
+ struct kvaser_usb {
+ 	struct usb_device *udev;
+ 	struct usb_interface *intf;
+@@ -104,13 +112,19 @@ struct kvaser_usb_net_priv {
+ 	struct can_priv can;
+ 	struct can_berr_counter bec;
+ 
++	/* subdriver-specific data */
++	void *sub_priv;
++
+ 	struct kvaser_usb *dev;
+ 	struct net_device *netdev;
+ 	int channel;
+ 
+-	struct completion start_comp, stop_comp, flush_comp;
++	struct completion start_comp, stop_comp, flush_comp,
++			  get_busparams_comp;
+ 	struct usb_anchor tx_submitted;
+ 
++	struct kvaser_usb_busparams busparams_nominal, busparams_data;
++
+ 	spinlock_t tx_contexts_lock; /* lock for active_tx_contexts */
+ 	int active_tx_contexts;
+ 	struct kvaser_usb_tx_urb_context tx_contexts[];
+@@ -120,11 +134,15 @@ struct kvaser_usb_net_priv {
+  * struct kvaser_usb_dev_ops - Device specific functions
+  * @dev_set_mode:		used for can.do_set_mode
+  * @dev_set_bittiming:		used for can.do_set_bittiming
++ * @dev_get_busparams:		readback arbitration busparams
+  * @dev_set_data_bittiming:	used for can.do_set_data_bittiming
++ * @dev_get_data_busparams:	readback data busparams
+  * @dev_get_berr_counter:	used for can.do_get_berr_counter
+  *
+  * @dev_setup_endpoints:	setup USB in and out endpoints
+  * @dev_init_card:		initialize card
++ * @dev_init_channel:		initialize channel
++ * @dev_remove_channel:		uninitialize channel
+  * @dev_get_software_info:	get software info
+  * @dev_get_software_details:	get software details
+  * @dev_get_card_info:		get card info
+@@ -140,12 +158,18 @@ struct kvaser_usb_net_priv {
+  */
+ struct kvaser_usb_dev_ops {
+ 	int (*dev_set_mode)(struct net_device *netdev, enum can_mode mode);
+-	int (*dev_set_bittiming)(struct net_device *netdev);
+-	int (*dev_set_data_bittiming)(struct net_device *netdev);
++	int (*dev_set_bittiming)(const struct net_device *netdev,
++				 const struct kvaser_usb_busparams *busparams);
++	int (*dev_get_busparams)(struct kvaser_usb_net_priv *priv);
++	int (*dev_set_data_bittiming)(const struct net_device *netdev,
++				      const struct kvaser_usb_busparams *busparams);
++	int (*dev_get_data_busparams)(struct kvaser_usb_net_priv *priv);
+ 	int (*dev_get_berr_counter)(const struct net_device *netdev,
+ 				    struct can_berr_counter *bec);
+ 	int (*dev_setup_endpoints)(struct kvaser_usb *dev);
+ 	int (*dev_init_card)(struct kvaser_usb *dev);
++	int (*dev_init_channel)(struct kvaser_usb_net_priv *priv);
++	void (*dev_remove_channel)(struct kvaser_usb_net_priv *priv);
+ 	int (*dev_get_software_info)(struct kvaser_usb *dev);
+ 	int (*dev_get_software_details)(struct kvaser_usb *dev);
+ 	int (*dev_get_card_info)(struct kvaser_usb *dev);
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index 7491f85e85b30..1f015b496a472 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -416,10 +416,6 @@ static int kvaser_usb_open(struct net_device *netdev)
+ 	if (err)
+ 		return err;
+ 
+-	err = kvaser_usb_setup_rx_urbs(dev);
+-	if (err)
+-		goto error;
+-
+ 	err = ops->dev_set_opt_mode(priv);
+ 	if (err)
+ 		goto error;
+@@ -510,6 +506,93 @@ static int kvaser_usb_close(struct net_device *netdev)
+ 	return 0;
+ }
+ 
++static int kvaser_usb_set_bittiming(struct net_device *netdev)
++{
++	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
++	struct kvaser_usb *dev = priv->dev;
++	const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
++	struct can_bittiming *bt = &priv->can.bittiming;
++
++	struct kvaser_usb_busparams busparams;
++	int tseg1 = bt->prop_seg + bt->phase_seg1;
++	int tseg2 = bt->phase_seg2;
++	int sjw = bt->sjw;
++	int err = -EOPNOTSUPP;
++
++	busparams.bitrate = cpu_to_le32(bt->bitrate);
++	busparams.sjw = (u8)sjw;
++	busparams.tseg1 = (u8)tseg1;
++	busparams.tseg2 = (u8)tseg2;
++	if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
++		busparams.nsamples = 3;
++	else
++		busparams.nsamples = 1;
++
++	err = ops->dev_set_bittiming(netdev, &busparams);
++	if (err)
++		return err;
++
++	err = kvaser_usb_setup_rx_urbs(priv->dev);
++	if (err)
++		return err;
++
++	err = ops->dev_get_busparams(priv);
++	if (err) {
++		/* Treat EOPNOTSUPP as success */
++		if (err == -EOPNOTSUPP)
++			err = 0;
++		return err;
++	}
++
++	if (memcmp(&busparams, &priv->busparams_nominal,
++		   sizeof(priv->busparams_nominal)) != 0)
++		err = -EINVAL;
++
++	return err;
++}
++
++static int kvaser_usb_set_data_bittiming(struct net_device *netdev)
++{
++	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
++	struct kvaser_usb *dev = priv->dev;
++	const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
++	struct can_bittiming *dbt = &priv->can.data_bittiming;
++
++	struct kvaser_usb_busparams busparams;
++	int tseg1 = dbt->prop_seg + dbt->phase_seg1;
++	int tseg2 = dbt->phase_seg2;
++	int sjw = dbt->sjw;
++	int err;
++
++	if (!ops->dev_set_data_bittiming ||
++	    !ops->dev_get_data_busparams)
++		return -EOPNOTSUPP;
++
++	busparams.bitrate = cpu_to_le32(dbt->bitrate);
++	busparams.sjw = (u8)sjw;
++	busparams.tseg1 = (u8)tseg1;
++	busparams.tseg2 = (u8)tseg2;
++	busparams.nsamples = 1;
++
++	err = ops->dev_set_data_bittiming(netdev, &busparams);
++	if (err)
++		return err;
++
++	err = kvaser_usb_setup_rx_urbs(priv->dev);
++	if (err)
++		return err;
++
++	err = ops->dev_get_data_busparams(priv);
++	if (err)
++		return err;
++
++	if (memcmp(&busparams, &priv->busparams_data,
++		   sizeof(priv->busparams_data)) != 0)
++		err = -EINVAL;
++
++	return err;
++}
++
+ static void kvaser_usb_write_bulk_callback(struct urb *urb)
+ {
+ 	struct kvaser_usb_tx_urb_context *context = urb->context;
+@@ -645,6 +728,7 @@ static const struct net_device_ops kvaser_usb_netdev_ops = {
+ 
+ static void kvaser_usb_remove_interfaces(struct kvaser_usb *dev)
+ {
++	const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
+ 	int i;
+ 
+ 	for (i = 0; i < dev->nchannels; i++) {
+@@ -660,6 +744,9 @@ static void kvaser_usb_remove_interfaces(struct kvaser_usb *dev)
+ 		if (!dev->nets[i])
+ 			continue;
+ 
++		if (ops->dev_remove_channel)
++			ops->dev_remove_channel(dev->nets[i]);
++
+ 		free_candev(dev->nets[i]->netdev);
+ 	}
+ }
+@@ -691,6 +778,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ 	init_completion(&priv->start_comp);
+ 	init_completion(&priv->stop_comp);
+ 	init_completion(&priv->flush_comp);
++	init_completion(&priv->get_busparams_comp);
+ 	priv->can.ctrlmode_supported = 0;
+ 
+ 	priv->dev = dev;
+@@ -703,7 +791,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ 	priv->can.state = CAN_STATE_STOPPED;
+ 	priv->can.clock.freq = dev->cfg->clock.freq;
+ 	priv->can.bittiming_const = dev->cfg->bittiming_const;
+-	priv->can.do_set_bittiming = ops->dev_set_bittiming;
++	priv->can.do_set_bittiming = kvaser_usb_set_bittiming;
+ 	priv->can.do_set_mode = ops->dev_set_mode;
+ 	if ((driver_info->quirks & KVASER_USB_QUIRK_HAS_TXRX_ERRORS) ||
+ 	    (priv->dev->card_data.capabilities & KVASER_USB_CAP_BERR_CAP))
+@@ -715,7 +803,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ 
+ 	if (priv->can.ctrlmode_supported & CAN_CTRLMODE_FD) {
+ 		priv->can.data_bittiming_const = dev->cfg->data_bittiming_const;
+-		priv->can.do_set_data_bittiming = ops->dev_set_data_bittiming;
++		priv->can.do_set_data_bittiming = kvaser_usb_set_data_bittiming;
+ 	}
+ 
+ 	netdev->flags |= IFF_ECHO;
+@@ -727,17 +815,26 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ 
+ 	dev->nets[channel] = priv;
+ 
++	if (ops->dev_init_channel) {
++		err = ops->dev_init_channel(priv);
++		if (err)
++			goto err;
++	}
++
+ 	err = register_candev(netdev);
+ 	if (err) {
+ 		dev_err(&dev->intf->dev, "Failed to register CAN device\n");
+-		free_candev(netdev);
+-		dev->nets[channel] = NULL;
+-		return err;
++		goto err;
+ 	}
+ 
+ 	netdev_dbg(netdev, "device registered\n");
+ 
+ 	return 0;
++
++err:
++	free_candev(netdev);
++	dev->nets[channel] = NULL;
++	return err;
+ }
+ 
+ static int kvaser_usb_probe(struct usb_interface *intf,
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+index 45d2787248839..2764fdd7e84b3 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+@@ -43,6 +43,8 @@ static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_flexc;
+ 
+ /* Minihydra command IDs */
+ #define CMD_SET_BUSPARAMS_REQ			16
++#define CMD_GET_BUSPARAMS_REQ			17
++#define CMD_GET_BUSPARAMS_RESP			18
+ #define CMD_GET_CHIP_STATE_REQ			19
+ #define CMD_CHIP_STATE_EVENT			20
+ #define CMD_SET_DRIVERMODE_REQ			21
+@@ -193,21 +195,26 @@ struct kvaser_cmd_chip_state_event {
+ #define KVASER_USB_HYDRA_BUS_MODE_CANFD_ISO	0x01
+ #define KVASER_USB_HYDRA_BUS_MODE_NONISO	0x02
+ struct kvaser_cmd_set_busparams {
+-	__le32 bitrate;
+-	u8 tseg1;
+-	u8 tseg2;
+-	u8 sjw;
+-	u8 nsamples;
++	struct kvaser_usb_busparams busparams_nominal;
+ 	u8 reserved0[4];
+-	__le32 bitrate_d;
+-	u8 tseg1_d;
+-	u8 tseg2_d;
+-	u8 sjw_d;
+-	u8 nsamples_d;
++	struct kvaser_usb_busparams busparams_data;
+ 	u8 canfd_mode;
+ 	u8 reserved1[7];
+ } __packed;
+ 
++/* Busparam type */
++#define KVASER_USB_HYDRA_BUSPARAM_TYPE_CAN	0x00
++#define KVASER_USB_HYDRA_BUSPARAM_TYPE_CANFD	0x01
++struct kvaser_cmd_get_busparams_req {
++	u8 type;
++	u8 reserved[27];
++} __packed;
++
++struct kvaser_cmd_get_busparams_res {
++	struct kvaser_usb_busparams busparams;
++	u8 reserved[20];
++} __packed;
++
+ /* Ctrl modes */
+ #define KVASER_USB_HYDRA_CTRLMODE_NORMAL	0x01
+ #define KVASER_USB_HYDRA_CTRLMODE_LISTEN	0x02
+@@ -278,6 +285,8 @@ struct kvaser_cmd {
+ 		struct kvaser_cmd_error_event error_event;
+ 
+ 		struct kvaser_cmd_set_busparams set_busparams_req;
++		struct kvaser_cmd_get_busparams_req get_busparams_req;
++		struct kvaser_cmd_get_busparams_res get_busparams_res;
+ 
+ 		struct kvaser_cmd_chip_state_event chip_state_event;
+ 
+@@ -293,6 +302,7 @@ struct kvaser_cmd {
+ #define KVASER_USB_HYDRA_CF_FLAG_OVERRUN	BIT(1)
+ #define KVASER_USB_HYDRA_CF_FLAG_REMOTE_FRAME	BIT(4)
+ #define KVASER_USB_HYDRA_CF_FLAG_EXTENDED_ID	BIT(5)
++#define KVASER_USB_HYDRA_CF_FLAG_TX_ACK		BIT(6)
+ /* CAN frame flags. Used in ext_rx_can and ext_tx_can */
+ #define KVASER_USB_HYDRA_CF_FLAG_OSM_NACK	BIT(12)
+ #define KVASER_USB_HYDRA_CF_FLAG_ABL		BIT(13)
+@@ -359,6 +369,10 @@ struct kvaser_cmd_ext {
+ 	} __packed;
+ } __packed;
+ 
++struct kvaser_usb_net_hydra_priv {
++	int pending_get_busparams_type;
++};
++
+ static const struct can_bittiming_const kvaser_usb_hydra_kcan_bittiming_c = {
+ 	.name = "kvaser_usb_kcan",
+ 	.tseg1_min = 1,
+@@ -812,6 +826,39 @@ static void kvaser_usb_hydra_flush_queue_reply(const struct kvaser_usb *dev,
+ 	complete(&priv->flush_comp);
+ }
+ 
++static void kvaser_usb_hydra_get_busparams_reply(const struct kvaser_usb *dev,
++						 const struct kvaser_cmd *cmd)
++{
++	struct kvaser_usb_net_priv *priv;
++	struct kvaser_usb_net_hydra_priv *hydra;
++
++	priv = kvaser_usb_hydra_net_priv_from_cmd(dev, cmd);
++	if (!priv)
++		return;
++
++	hydra = priv->sub_priv;
++	if (!hydra)
++		return;
++
++	switch (hydra->pending_get_busparams_type) {
++	case KVASER_USB_HYDRA_BUSPARAM_TYPE_CAN:
++		memcpy(&priv->busparams_nominal, &cmd->get_busparams_res.busparams,
++		       sizeof(priv->busparams_nominal));
++		break;
++	case KVASER_USB_HYDRA_BUSPARAM_TYPE_CANFD:
++		memcpy(&priv->busparams_data, &cmd->get_busparams_res.busparams,
++		       sizeof(priv->busparams_nominal));
++		break;
++	default:
++		dev_warn(&dev->intf->dev, "Unknown get_busparams_type %d\n",
++			 hydra->pending_get_busparams_type);
++		break;
++	}
++	hydra->pending_get_busparams_type = -1;
++
++	complete(&priv->get_busparams_comp);
++}
++
+ static void
+ kvaser_usb_hydra_bus_status_to_can_state(const struct kvaser_usb_net_priv *priv,
+ 					 u8 bus_status,
+@@ -1099,6 +1146,7 @@ static void kvaser_usb_hydra_tx_acknowledge(const struct kvaser_usb *dev,
+ 	struct kvaser_usb_net_priv *priv;
+ 	unsigned long irq_flags;
+ 	bool one_shot_fail = false;
++	bool is_err_frame = false;
+ 	u16 transid = kvaser_usb_hydra_get_cmd_transid(cmd);
+ 
+ 	priv = kvaser_usb_hydra_net_priv_from_cmd(dev, cmd);
+@@ -1117,10 +1165,13 @@ static void kvaser_usb_hydra_tx_acknowledge(const struct kvaser_usb *dev,
+ 			kvaser_usb_hydra_one_shot_fail(priv, cmd_ext);
+ 			one_shot_fail = true;
+ 		}
++
++		is_err_frame = flags & KVASER_USB_HYDRA_CF_FLAG_TX_ACK &&
++			       flags & KVASER_USB_HYDRA_CF_FLAG_ERROR_FRAME;
+ 	}
+ 
+ 	context = &priv->tx_contexts[transid % dev->max_tx_urbs];
+-	if (!one_shot_fail) {
++	if (!one_shot_fail && !is_err_frame) {
+ 		struct net_device_stats *stats = &priv->netdev->stats;
+ 
+ 		stats->tx_packets++;
+@@ -1294,6 +1345,10 @@ static void kvaser_usb_hydra_handle_cmd_std(const struct kvaser_usb *dev,
+ 		kvaser_usb_hydra_state_event(dev, cmd);
+ 		break;
+ 
++	case CMD_GET_BUSPARAMS_RESP:
++		kvaser_usb_hydra_get_busparams_reply(dev, cmd);
++		break;
++
+ 	case CMD_ERROR_EVENT:
+ 		kvaser_usb_hydra_error_event(dev, cmd);
+ 		break;
+@@ -1494,15 +1549,58 @@ static int kvaser_usb_hydra_set_mode(struct net_device *netdev,
+ 	return err;
+ }
+ 
+-static int kvaser_usb_hydra_set_bittiming(struct net_device *netdev)
++static int kvaser_usb_hydra_get_busparams(struct kvaser_usb_net_priv *priv,
++					  int busparams_type)
++{
++	struct kvaser_usb *dev = priv->dev;
++	struct kvaser_usb_net_hydra_priv *hydra = priv->sub_priv;
++	struct kvaser_cmd *cmd;
++	int err;
++
++	if (!hydra)
++		return -EINVAL;
++
++	cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
++	if (!cmd)
++		return -ENOMEM;
++
++	cmd->header.cmd_no = CMD_GET_BUSPARAMS_REQ;
++	kvaser_usb_hydra_set_cmd_dest_he
++		(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
++	kvaser_usb_hydra_set_cmd_transid
++				(cmd, kvaser_usb_hydra_get_next_transid(dev));
++	cmd->get_busparams_req.type = busparams_type;
++	hydra->pending_get_busparams_type = busparams_type;
++
++	reinit_completion(&priv->get_busparams_comp);
++
++	err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
++	if (err)
++		return err;
++
++	if (!wait_for_completion_timeout(&priv->get_busparams_comp,
++					 msecs_to_jiffies(KVASER_USB_TIMEOUT)))
++		return -ETIMEDOUT;
++
++	return err;
++}
++
++static int kvaser_usb_hydra_get_nominal_busparams(struct kvaser_usb_net_priv *priv)
++{
++	return kvaser_usb_hydra_get_busparams(priv, KVASER_USB_HYDRA_BUSPARAM_TYPE_CAN);
++}
++
++static int kvaser_usb_hydra_get_data_busparams(struct kvaser_usb_net_priv *priv)
++{
++	return kvaser_usb_hydra_get_busparams(priv, KVASER_USB_HYDRA_BUSPARAM_TYPE_CANFD);
++}
++
++static int kvaser_usb_hydra_set_bittiming(const struct net_device *netdev,
++					  const struct kvaser_usb_busparams *busparams)
+ {
+ 	struct kvaser_cmd *cmd;
+ 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
+-	struct can_bittiming *bt = &priv->can.bittiming;
+ 	struct kvaser_usb *dev = priv->dev;
+-	int tseg1 = bt->prop_seg + bt->phase_seg1;
+-	int tseg2 = bt->phase_seg2;
+-	int sjw = bt->sjw;
+ 	int err;
+ 
+ 	cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
+@@ -1510,11 +1608,8 @@ static int kvaser_usb_hydra_set_bittiming(struct net_device *netdev)
+ 		return -ENOMEM;
+ 
+ 	cmd->header.cmd_no = CMD_SET_BUSPARAMS_REQ;
+-	cmd->set_busparams_req.bitrate = cpu_to_le32(bt->bitrate);
+-	cmd->set_busparams_req.sjw = (u8)sjw;
+-	cmd->set_busparams_req.tseg1 = (u8)tseg1;
+-	cmd->set_busparams_req.tseg2 = (u8)tseg2;
+-	cmd->set_busparams_req.nsamples = 1;
++	memcpy(&cmd->set_busparams_req.busparams_nominal, busparams,
++	       sizeof(cmd->set_busparams_req.busparams_nominal));
+ 
+ 	kvaser_usb_hydra_set_cmd_dest_he
+ 		(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
+@@ -1528,15 +1623,12 @@ static int kvaser_usb_hydra_set_bittiming(struct net_device *netdev)
+ 	return err;
+ }
+ 
+-static int kvaser_usb_hydra_set_data_bittiming(struct net_device *netdev)
++static int kvaser_usb_hydra_set_data_bittiming(const struct net_device *netdev,
++					       const struct kvaser_usb_busparams *busparams)
+ {
+ 	struct kvaser_cmd *cmd;
+ 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
+-	struct can_bittiming *dbt = &priv->can.data_bittiming;
+ 	struct kvaser_usb *dev = priv->dev;
+-	int tseg1 = dbt->prop_seg + dbt->phase_seg1;
+-	int tseg2 = dbt->phase_seg2;
+-	int sjw = dbt->sjw;
+ 	int err;
+ 
+ 	cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
+@@ -1544,11 +1636,8 @@ static int kvaser_usb_hydra_set_data_bittiming(struct net_device *netdev)
+ 		return -ENOMEM;
+ 
+ 	cmd->header.cmd_no = CMD_SET_BUSPARAMS_FD_REQ;
+-	cmd->set_busparams_req.bitrate_d = cpu_to_le32(dbt->bitrate);
+-	cmd->set_busparams_req.sjw_d = (u8)sjw;
+-	cmd->set_busparams_req.tseg1_d = (u8)tseg1;
+-	cmd->set_busparams_req.tseg2_d = (u8)tseg2;
+-	cmd->set_busparams_req.nsamples_d = 1;
++	memcpy(&cmd->set_busparams_req.busparams_data, busparams,
++	       sizeof(cmd->set_busparams_req.busparams_data));
+ 
+ 	if (priv->can.ctrlmode & CAN_CTRLMODE_FD) {
+ 		if (priv->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO)
+@@ -1655,6 +1744,19 @@ static int kvaser_usb_hydra_init_card(struct kvaser_usb *dev)
+ 	return 0;
+ }
+ 
++static int kvaser_usb_hydra_init_channel(struct kvaser_usb_net_priv *priv)
++{
++	struct kvaser_usb_net_hydra_priv *hydra;
++
++	hydra = devm_kzalloc(&priv->dev->intf->dev, sizeof(*hydra), GFP_KERNEL);
++	if (!hydra)
++		return -ENOMEM;
++
++	priv->sub_priv = hydra;
++
++	return 0;
++}
++
+ static int kvaser_usb_hydra_get_software_info(struct kvaser_usb *dev)
+ {
+ 	struct kvaser_cmd cmd;
+@@ -1997,10 +2099,13 @@ kvaser_usb_hydra_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
+ const struct kvaser_usb_dev_ops kvaser_usb_hydra_dev_ops = {
+ 	.dev_set_mode = kvaser_usb_hydra_set_mode,
+ 	.dev_set_bittiming = kvaser_usb_hydra_set_bittiming,
++	.dev_get_busparams = kvaser_usb_hydra_get_nominal_busparams,
+ 	.dev_set_data_bittiming = kvaser_usb_hydra_set_data_bittiming,
++	.dev_get_data_busparams = kvaser_usb_hydra_get_data_busparams,
+ 	.dev_get_berr_counter = kvaser_usb_hydra_get_berr_counter,
+ 	.dev_setup_endpoints = kvaser_usb_hydra_setup_endpoints,
+ 	.dev_init_card = kvaser_usb_hydra_init_card,
++	.dev_init_channel = kvaser_usb_hydra_init_channel,
+ 	.dev_get_software_info = kvaser_usb_hydra_get_software_info,
+ 	.dev_get_software_details = kvaser_usb_hydra_get_software_details,
+ 	.dev_get_card_info = kvaser_usb_hydra_get_card_info,
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+index 15380cc08ee69..f06d63db9077b 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+@@ -20,6 +20,7 @@
+ #include <linux/string.h>
+ #include <linux/types.h>
+ #include <linux/usb.h>
++#include <linux/workqueue.h>
+ 
+ #include <linux/can.h>
+ #include <linux/can/dev.h>
+@@ -55,6 +56,9 @@
+ #define CMD_RX_EXT_MESSAGE		14
+ #define CMD_TX_EXT_MESSAGE		15
+ #define CMD_SET_BUS_PARAMS		16
++#define CMD_GET_BUS_PARAMS		17
++#define CMD_GET_BUS_PARAMS_REPLY	18
++#define CMD_GET_CHIP_STATE		19
+ #define CMD_CHIP_STATE_EVENT		20
+ #define CMD_SET_CTRL_MODE		21
+ #define CMD_RESET_CHIP			24
+@@ -69,10 +73,13 @@
+ #define CMD_GET_CARD_INFO_REPLY		35
+ #define CMD_GET_SOFTWARE_INFO		38
+ #define CMD_GET_SOFTWARE_INFO_REPLY	39
++#define CMD_ERROR_EVENT			45
+ #define CMD_FLUSH_QUEUE			48
+ #define CMD_TX_ACKNOWLEDGE		50
+ #define CMD_CAN_ERROR_EVENT		51
+ #define CMD_FLUSH_QUEUE_REPLY		68
++#define CMD_GET_CAPABILITIES_REQ	95
++#define CMD_GET_CAPABILITIES_RESP	96
+ 
+ #define CMD_LEAF_LOG_MESSAGE		106
+ 
+@@ -82,6 +89,8 @@
+ #define KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK BIT(5)
+ #define KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK BIT(6)
+ 
++#define KVASER_USB_LEAF_SWOPTION_EXT_CAP BIT(12)
++
+ /* error factors */
+ #define M16C_EF_ACKE			BIT(0)
+ #define M16C_EF_CRCE			BIT(1)
+@@ -156,11 +165,7 @@ struct usbcan_cmd_softinfo {
+ struct kvaser_cmd_busparams {
+ 	u8 tid;
+ 	u8 channel;
+-	__le32 bitrate;
+-	u8 tseg1;
+-	u8 tseg2;
+-	u8 sjw;
+-	u8 no_samp;
++	struct kvaser_usb_busparams busparams;
+ } __packed;
+ 
+ struct kvaser_cmd_tx_can {
+@@ -229,7 +234,7 @@ struct kvaser_cmd_tx_acknowledge_header {
+ 	u8 tid;
+ } __packed;
+ 
+-struct leaf_cmd_error_event {
++struct leaf_cmd_can_error_event {
+ 	u8 tid;
+ 	u8 flags;
+ 	__le16 time[3];
+@@ -241,7 +246,7 @@ struct leaf_cmd_error_event {
+ 	u8 error_factor;
+ } __packed;
+ 
+-struct usbcan_cmd_error_event {
++struct usbcan_cmd_can_error_event {
+ 	u8 tid;
+ 	u8 padding;
+ 	u8 tx_errors_count_ch0;
+@@ -253,6 +258,28 @@ struct usbcan_cmd_error_event {
+ 	__le16 time;
+ } __packed;
+ 
++/* CMD_ERROR_EVENT error codes */
++#define KVASER_USB_LEAF_ERROR_EVENT_TX_QUEUE_FULL 0x8
++#define KVASER_USB_LEAF_ERROR_EVENT_PARAM 0x9
++
++struct leaf_cmd_error_event {
++	u8 tid;
++	u8 error_code;
++	__le16 timestamp[3];
++	__le16 padding;
++	__le16 info1;
++	__le16 info2;
++} __packed;
++
++struct usbcan_cmd_error_event {
++	u8 tid;
++	u8 error_code;
++	__le16 info1;
++	__le16 info2;
++	__le16 timestamp;
++	__le16 padding;
++} __packed;
++
+ struct kvaser_cmd_ctrl_mode {
+ 	u8 tid;
+ 	u8 channel;
+@@ -277,6 +304,28 @@ struct leaf_cmd_log_message {
+ 	u8 data[8];
+ } __packed;
+ 
++/* Sub commands for cap_req and cap_res */
++#define KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE 0x02
++#define KVASER_USB_LEAF_CAP_CMD_ERR_REPORT 0x05
++struct kvaser_cmd_cap_req {
++	__le16 padding0;
++	__le16 cap_cmd;
++	__le16 padding1;
++	__le16 channel;
++} __packed;
++
++/* Status codes for cap_res */
++#define KVASER_USB_LEAF_CAP_STAT_OK 0x00
++#define KVASER_USB_LEAF_CAP_STAT_NOT_IMPL 0x01
++#define KVASER_USB_LEAF_CAP_STAT_UNAVAIL 0x02
++struct kvaser_cmd_cap_res {
++	__le16 padding;
++	__le16 cap_cmd;
++	__le16 status;
++	__le32 mask;
++	__le32 value;
++} __packed;
++
+ struct kvaser_cmd {
+ 	u8 len;
+ 	u8 id;
+@@ -292,14 +341,18 @@ struct kvaser_cmd {
+ 			struct leaf_cmd_softinfo softinfo;
+ 			struct leaf_cmd_rx_can rx_can;
+ 			struct leaf_cmd_chip_state_event chip_state_event;
+-			struct leaf_cmd_error_event error_event;
++			struct leaf_cmd_can_error_event can_error_event;
+ 			struct leaf_cmd_log_message log_message;
++			struct leaf_cmd_error_event error_event;
++			struct kvaser_cmd_cap_req cap_req;
++			struct kvaser_cmd_cap_res cap_res;
+ 		} __packed leaf;
+ 
+ 		union {
+ 			struct usbcan_cmd_softinfo softinfo;
+ 			struct usbcan_cmd_rx_can rx_can;
+ 			struct usbcan_cmd_chip_state_event chip_state_event;
++			struct usbcan_cmd_can_error_event can_error_event;
+ 			struct usbcan_cmd_error_event error_event;
+ 		} __packed usbcan;
+ 
+@@ -322,7 +375,10 @@ static const u8 kvaser_usb_leaf_cmd_sizes_leaf[] = {
+ 	[CMD_RX_EXT_MESSAGE]		= kvaser_fsize(u.leaf.rx_can),
+ 	[CMD_LEAF_LOG_MESSAGE]		= kvaser_fsize(u.leaf.log_message),
+ 	[CMD_CHIP_STATE_EVENT]		= kvaser_fsize(u.leaf.chip_state_event),
+-	[CMD_CAN_ERROR_EVENT]		= kvaser_fsize(u.leaf.error_event),
++	[CMD_CAN_ERROR_EVENT]		= kvaser_fsize(u.leaf.can_error_event),
++	[CMD_GET_CAPABILITIES_RESP]	= kvaser_fsize(u.leaf.cap_res),
++	[CMD_GET_BUS_PARAMS_REPLY]	= kvaser_fsize(u.busparams),
++	[CMD_ERROR_EVENT]		= kvaser_fsize(u.leaf.error_event),
+ 	/* ignored events: */
+ 	[CMD_FLUSH_QUEUE_REPLY]		= CMD_SIZE_ANY,
+ };
+@@ -336,7 +392,8 @@ static const u8 kvaser_usb_leaf_cmd_sizes_usbcan[] = {
+ 	[CMD_RX_STD_MESSAGE]		= kvaser_fsize(u.usbcan.rx_can),
+ 	[CMD_RX_EXT_MESSAGE]		= kvaser_fsize(u.usbcan.rx_can),
+ 	[CMD_CHIP_STATE_EVENT]		= kvaser_fsize(u.usbcan.chip_state_event),
+-	[CMD_CAN_ERROR_EVENT]		= kvaser_fsize(u.usbcan.error_event),
++	[CMD_CAN_ERROR_EVENT]		= kvaser_fsize(u.usbcan.can_error_event),
++	[CMD_ERROR_EVENT]		= kvaser_fsize(u.usbcan.error_event),
+ 	/* ignored events: */
+ 	[CMD_USBCAN_CLOCK_OVERFLOW_EVENT] = CMD_SIZE_ANY,
+ };
+@@ -364,6 +421,12 @@ struct kvaser_usb_err_summary {
+ 	};
+ };
+ 
++struct kvaser_usb_net_leaf_priv {
++	struct kvaser_usb_net_priv *net;
++
++	struct delayed_work chip_state_req_work;
++};
++
+ static const struct can_bittiming_const kvaser_usb_leaf_m16c_bittiming_const = {
+ 	.name = "kvaser_usb_ucii",
+ 	.tseg1_min = 4,
+@@ -607,6 +670,9 @@ static void kvaser_usb_leaf_get_software_info_leaf(struct kvaser_usb *dev,
+ 	dev->fw_version = le32_to_cpu(softinfo->fw_version);
+ 	dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx);
+ 
++	if (sw_options & KVASER_USB_LEAF_SWOPTION_EXT_CAP)
++		dev->card_data.capabilities |= KVASER_USB_CAP_EXT_CAP;
++
+ 	if (dev->driver_info->quirks & KVASER_USB_QUIRK_IGNORE_CLK_FREQ) {
+ 		/* Firmware expects bittiming parameters calculated for 16MHz
+ 		 * clock, regardless of the actual clock
+@@ -694,6 +760,116 @@ static int kvaser_usb_leaf_get_card_info(struct kvaser_usb *dev)
+ 	return 0;
+ }
+ 
++static int kvaser_usb_leaf_get_single_capability(struct kvaser_usb *dev,
++						 u16 cap_cmd_req, u16 *status)
++{
++	struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
++	struct kvaser_cmd *cmd;
++	u32 value = 0;
++	u32 mask = 0;
++	u16 cap_cmd_res;
++	int err;
++	int i;
++
++	cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
++	if (!cmd)
++		return -ENOMEM;
++
++	cmd->id = CMD_GET_CAPABILITIES_REQ;
++	cmd->u.leaf.cap_req.cap_cmd = cpu_to_le16(cap_cmd_req);
++	cmd->len = CMD_HEADER_LEN + sizeof(struct kvaser_cmd_cap_req);
++
++	err = kvaser_usb_send_cmd(dev, cmd, cmd->len);
++	if (err)
++		goto end;
++
++	err = kvaser_usb_leaf_wait_cmd(dev, CMD_GET_CAPABILITIES_RESP, cmd);
++	if (err)
++		goto end;
++
++	*status = le16_to_cpu(cmd->u.leaf.cap_res.status);
++
++	if (*status != KVASER_USB_LEAF_CAP_STAT_OK)
++		goto end;
++
++	cap_cmd_res = le16_to_cpu(cmd->u.leaf.cap_res.cap_cmd);
++	switch (cap_cmd_res) {
++	case KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE:
++	case KVASER_USB_LEAF_CAP_CMD_ERR_REPORT:
++		value = le32_to_cpu(cmd->u.leaf.cap_res.value);
++		mask = le32_to_cpu(cmd->u.leaf.cap_res.mask);
++		break;
++	default:
++		dev_warn(&dev->intf->dev, "Unknown capability command %u\n",
++			 cap_cmd_res);
++		break;
++	}
++
++	for (i = 0; i < dev->nchannels; i++) {
++		if (BIT(i) & (value & mask)) {
++			switch (cap_cmd_res) {
++			case KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE:
++				card_data->ctrlmode_supported |=
++						CAN_CTRLMODE_LISTENONLY;
++				break;
++			case KVASER_USB_LEAF_CAP_CMD_ERR_REPORT:
++				card_data->capabilities |=
++						KVASER_USB_CAP_BERR_CAP;
++				break;
++			}
++		}
++	}
++
++end:
++	kfree(cmd);
++
++	return err;
++}
++
++static int kvaser_usb_leaf_get_capabilities_leaf(struct kvaser_usb *dev)
++{
++	int err;
++	u16 status;
++
++	if (!(dev->card_data.capabilities & KVASER_USB_CAP_EXT_CAP)) {
++		dev_info(&dev->intf->dev,
++			 "No extended capability support. Upgrade device firmware.\n");
++		return 0;
++	}
++
++	err = kvaser_usb_leaf_get_single_capability(dev,
++						    KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE,
++						    &status);
++	if (err)
++		return err;
++	if (status)
++		dev_info(&dev->intf->dev,
++			 "KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE failed %u\n",
++			 status);
++
++	err = kvaser_usb_leaf_get_single_capability(dev,
++						    KVASER_USB_LEAF_CAP_CMD_ERR_REPORT,
++						    &status);
++	if (err)
++		return err;
++	if (status)
++		dev_info(&dev->intf->dev,
++			 "KVASER_USB_LEAF_CAP_CMD_ERR_REPORT failed %u\n",
++			 status);
++
++	return 0;
++}
++
++static int kvaser_usb_leaf_get_capabilities(struct kvaser_usb *dev)
++{
++	int err = 0;
++
++	if (dev->driver_info->family == KVASER_LEAF)
++		err = kvaser_usb_leaf_get_capabilities_leaf(dev);
++
++	return err;
++}
++
+ static void kvaser_usb_leaf_tx_acknowledge(const struct kvaser_usb *dev,
+ 					   const struct kvaser_cmd *cmd)
+ {
+@@ -722,7 +898,7 @@ static void kvaser_usb_leaf_tx_acknowledge(const struct kvaser_usb *dev,
+ 	context = &priv->tx_contexts[tid % dev->max_tx_urbs];
+ 
+ 	/* Sometimes the state change doesn't come after a bus-off event */
+-	if (priv->can.restart_ms && priv->can.state >= CAN_STATE_BUS_OFF) {
++	if (priv->can.restart_ms && priv->can.state == CAN_STATE_BUS_OFF) {
+ 		struct sk_buff *skb;
+ 		struct can_frame *cf;
+ 
+@@ -778,6 +954,16 @@ static int kvaser_usb_leaf_simple_cmd_async(struct kvaser_usb_net_priv *priv,
+ 	return err;
+ }
+ 
++static void kvaser_usb_leaf_chip_state_req_work(struct work_struct *work)
++{
++	struct kvaser_usb_net_leaf_priv *leaf =
++		container_of(work, struct kvaser_usb_net_leaf_priv,
++			     chip_state_req_work.work);
++	struct kvaser_usb_net_priv *priv = leaf->net;
++
++	kvaser_usb_leaf_simple_cmd_async(priv, CMD_GET_CHIP_STATE);
++}
++
+ static void
+ kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
+ 					const struct kvaser_usb_err_summary *es,
+@@ -796,20 +982,16 @@ kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
+ 		new_state = CAN_STATE_BUS_OFF;
+ 	} else if (es->status & M16C_STATE_BUS_PASSIVE) {
+ 		new_state = CAN_STATE_ERROR_PASSIVE;
+-	} else if (es->status & M16C_STATE_BUS_ERROR) {
++	} else if ((es->status & M16C_STATE_BUS_ERROR) &&
++		   cur_state >= CAN_STATE_BUS_OFF) {
+ 		/* Guard against spurious error events after a busoff */
+-		if (cur_state < CAN_STATE_BUS_OFF) {
+-			if (es->txerr >= 128 || es->rxerr >= 128)
+-				new_state = CAN_STATE_ERROR_PASSIVE;
+-			else if (es->txerr >= 96 || es->rxerr >= 96)
+-				new_state = CAN_STATE_ERROR_WARNING;
+-			else if (cur_state > CAN_STATE_ERROR_ACTIVE)
+-				new_state = CAN_STATE_ERROR_ACTIVE;
+-		}
+-	}
+-
+-	if (!es->status)
++	} else if (es->txerr >= 128 || es->rxerr >= 128) {
++		new_state = CAN_STATE_ERROR_PASSIVE;
++	} else if (es->txerr >= 96 || es->rxerr >= 96) {
++		new_state = CAN_STATE_ERROR_WARNING;
++	} else {
+ 		new_state = CAN_STATE_ERROR_ACTIVE;
++	}
+ 
+ 	if (new_state != cur_state) {
+ 		tx_state = (es->txerr >= es->rxerr) ? new_state : 0;
+@@ -819,7 +1001,7 @@ kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
+ 	}
+ 
+ 	if (priv->can.restart_ms &&
+-	    cur_state >= CAN_STATE_BUS_OFF &&
++	    cur_state == CAN_STATE_BUS_OFF &&
+ 	    new_state < CAN_STATE_BUS_OFF)
+ 		priv->can.can_stats.restarts++;
+ 
+@@ -853,6 +1035,7 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
+ 	struct sk_buff *skb;
+ 	struct net_device_stats *stats;
+ 	struct kvaser_usb_net_priv *priv;
++	struct kvaser_usb_net_leaf_priv *leaf;
+ 	enum can_state old_state, new_state;
+ 
+ 	if (es->channel >= dev->nchannels) {
+@@ -862,8 +1045,13 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
+ 	}
+ 
+ 	priv = dev->nets[es->channel];
++	leaf = priv->sub_priv;
+ 	stats = &priv->netdev->stats;
+ 
++	/* Ignore e.g. state change to bus-off reported just after stopping */
++	if (!netif_running(priv->netdev))
++		return;
++
+ 	/* Update all of the CAN interface's state and error counters before
+ 	 * trying any memory allocation that can actually fail with -ENOMEM.
+ 	 *
+@@ -878,6 +1066,14 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
+ 	kvaser_usb_leaf_rx_error_update_can_state(priv, es, &tmp_cf);
+ 	new_state = priv->can.state;
+ 
++	/* If there are errors, request status updates periodically as we do
++	 * not get automatic notifications of improved state.
++	 */
++	if (new_state < CAN_STATE_BUS_OFF &&
++	    (es->rxerr || es->txerr || new_state == CAN_STATE_ERROR_PASSIVE))
++		schedule_delayed_work(&leaf->chip_state_req_work,
++				      msecs_to_jiffies(500));
++
+ 	skb = alloc_can_err_skb(priv->netdev, &cf);
+ 	if (!skb) {
+ 		stats->rx_dropped++;
+@@ -895,7 +1091,7 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
+ 		}
+ 
+ 		if (priv->can.restart_ms &&
+-		    old_state >= CAN_STATE_BUS_OFF &&
++		    old_state == CAN_STATE_BUS_OFF &&
+ 		    new_state < CAN_STATE_BUS_OFF) {
+ 			cf->can_id |= CAN_ERR_RESTARTED;
+ 			netif_carrier_on(priv->netdev);
+@@ -995,11 +1191,11 @@ static void kvaser_usb_leaf_usbcan_rx_error(const struct kvaser_usb *dev,
+ 
+ 	case CMD_CAN_ERROR_EVENT:
+ 		es.channel = 0;
+-		es.status = cmd->u.usbcan.error_event.status_ch0;
+-		es.txerr = cmd->u.usbcan.error_event.tx_errors_count_ch0;
+-		es.rxerr = cmd->u.usbcan.error_event.rx_errors_count_ch0;
++		es.status = cmd->u.usbcan.can_error_event.status_ch0;
++		es.txerr = cmd->u.usbcan.can_error_event.tx_errors_count_ch0;
++		es.rxerr = cmd->u.usbcan.can_error_event.rx_errors_count_ch0;
+ 		es.usbcan.other_ch_status =
+-			cmd->u.usbcan.error_event.status_ch1;
++			cmd->u.usbcan.can_error_event.status_ch1;
+ 		kvaser_usb_leaf_usbcan_conditionally_rx_error(dev, &es);
+ 
+ 		/* The USBCAN firmware supports up to 2 channels.
+@@ -1007,13 +1203,13 @@ static void kvaser_usb_leaf_usbcan_rx_error(const struct kvaser_usb *dev,
+ 		 */
+ 		if (dev->nchannels == MAX_USBCAN_NET_DEVICES) {
+ 			es.channel = 1;
+-			es.status = cmd->u.usbcan.error_event.status_ch1;
++			es.status = cmd->u.usbcan.can_error_event.status_ch1;
+ 			es.txerr =
+-				cmd->u.usbcan.error_event.tx_errors_count_ch1;
++				cmd->u.usbcan.can_error_event.tx_errors_count_ch1;
+ 			es.rxerr =
+-				cmd->u.usbcan.error_event.rx_errors_count_ch1;
++				cmd->u.usbcan.can_error_event.rx_errors_count_ch1;
+ 			es.usbcan.other_ch_status =
+-				cmd->u.usbcan.error_event.status_ch0;
++				cmd->u.usbcan.can_error_event.status_ch0;
+ 			kvaser_usb_leaf_usbcan_conditionally_rx_error(dev, &es);
+ 		}
+ 		break;
+@@ -1030,11 +1226,11 @@ static void kvaser_usb_leaf_leaf_rx_error(const struct kvaser_usb *dev,
+ 
+ 	switch (cmd->id) {
+ 	case CMD_CAN_ERROR_EVENT:
+-		es.channel = cmd->u.leaf.error_event.channel;
+-		es.status = cmd->u.leaf.error_event.status;
+-		es.txerr = cmd->u.leaf.error_event.tx_errors_count;
+-		es.rxerr = cmd->u.leaf.error_event.rx_errors_count;
+-		es.leaf.error_factor = cmd->u.leaf.error_event.error_factor;
++		es.channel = cmd->u.leaf.can_error_event.channel;
++		es.status = cmd->u.leaf.can_error_event.status;
++		es.txerr = cmd->u.leaf.can_error_event.tx_errors_count;
++		es.rxerr = cmd->u.leaf.can_error_event.rx_errors_count;
++		es.leaf.error_factor = cmd->u.leaf.can_error_event.error_factor;
+ 		break;
+ 	case CMD_LEAF_LOG_MESSAGE:
+ 		es.channel = cmd->u.leaf.log_message.channel;
+@@ -1166,6 +1362,74 @@ static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev,
+ 	netif_rx(skb);
+ }
+ 
++static void kvaser_usb_leaf_error_event_parameter(const struct kvaser_usb *dev,
++						  const struct kvaser_cmd *cmd)
++{
++	u16 info1 = 0;
++
++	switch (dev->driver_info->family) {
++	case KVASER_LEAF:
++		info1 = le16_to_cpu(cmd->u.leaf.error_event.info1);
++		break;
++	case KVASER_USBCAN:
++		info1 = le16_to_cpu(cmd->u.usbcan.error_event.info1);
++		break;
++	}
++
++	/* info1 will contain the offending cmd_no */
++	switch (info1) {
++	case CMD_SET_CTRL_MODE:
++		dev_warn(&dev->intf->dev,
++			 "CMD_SET_CTRL_MODE error in parameter\n");
++		break;
++
++	case CMD_SET_BUS_PARAMS:
++		dev_warn(&dev->intf->dev,
++			 "CMD_SET_BUS_PARAMS error in parameter\n");
++		break;
++
++	default:
++		dev_warn(&dev->intf->dev,
++			 "Unhandled parameter error event cmd_no (%u)\n",
++			 info1);
++		break;
++	}
++}
++
++static void kvaser_usb_leaf_error_event(const struct kvaser_usb *dev,
++					const struct kvaser_cmd *cmd)
++{
++	u8 error_code = 0;
++
++	switch (dev->driver_info->family) {
++	case KVASER_LEAF:
++		error_code = cmd->u.leaf.error_event.error_code;
++		break;
++	case KVASER_USBCAN:
++		error_code = cmd->u.usbcan.error_event.error_code;
++		break;
++	}
++
++	switch (error_code) {
++	case KVASER_USB_LEAF_ERROR_EVENT_TX_QUEUE_FULL:
++		/* Received additional CAN message, when firmware TX queue is
++		 * already full. Something is wrong with the driver.
++		 * This should never happen!
++		 */
++		dev_err(&dev->intf->dev,
++			"Received error event TX_QUEUE_FULL\n");
++		break;
++	case KVASER_USB_LEAF_ERROR_EVENT_PARAM:
++		kvaser_usb_leaf_error_event_parameter(dev, cmd);
++		break;
++
++	default:
++		dev_warn(&dev->intf->dev,
++			 "Unhandled error event (%d)\n", error_code);
++		break;
++	}
++}
++
+ static void kvaser_usb_leaf_start_chip_reply(const struct kvaser_usb *dev,
+ 					     const struct kvaser_cmd *cmd)
+ {
+@@ -1206,6 +1470,25 @@ static void kvaser_usb_leaf_stop_chip_reply(const struct kvaser_usb *dev,
+ 	complete(&priv->stop_comp);
+ }
+ 
++static void kvaser_usb_leaf_get_busparams_reply(const struct kvaser_usb *dev,
++						const struct kvaser_cmd *cmd)
++{
++	struct kvaser_usb_net_priv *priv;
++	u8 channel = cmd->u.busparams.channel;
++
++	if (channel >= dev->nchannels) {
++		dev_err(&dev->intf->dev,
++			"Invalid channel number (%d)\n", channel);
++		return;
++	}
++
++	priv = dev->nets[channel];
++	memcpy(&priv->busparams_nominal, &cmd->u.busparams.busparams,
++	       sizeof(priv->busparams_nominal));
++
++	complete(&priv->get_busparams_comp);
++}
++
+ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
+ 					   const struct kvaser_cmd *cmd)
+ {
+@@ -1244,6 +1527,14 @@ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
+ 		kvaser_usb_leaf_tx_acknowledge(dev, cmd);
+ 		break;
+ 
++	case CMD_ERROR_EVENT:
++		kvaser_usb_leaf_error_event(dev, cmd);
++		break;
++
++	case CMD_GET_BUS_PARAMS_REPLY:
++		kvaser_usb_leaf_get_busparams_reply(dev, cmd);
++		break;
++
+ 	/* Ignored commands */
+ 	case CMD_USBCAN_CLOCK_OVERFLOW_EVENT:
+ 		if (dev->driver_info->family != KVASER_USBCAN)
+@@ -1340,10 +1631,13 @@ static int kvaser_usb_leaf_start_chip(struct kvaser_usb_net_priv *priv)
+ 
+ static int kvaser_usb_leaf_stop_chip(struct kvaser_usb_net_priv *priv)
+ {
++	struct kvaser_usb_net_leaf_priv *leaf = priv->sub_priv;
+ 	int err;
+ 
+ 	reinit_completion(&priv->stop_comp);
+ 
++	cancel_delayed_work(&leaf->chip_state_req_work);
++
+ 	err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_STOP_CHIP,
+ 					      priv->channel);
+ 	if (err)
+@@ -1390,10 +1684,35 @@ static int kvaser_usb_leaf_init_card(struct kvaser_usb *dev)
+ 	return 0;
+ }
+ 
+-static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
++static int kvaser_usb_leaf_init_channel(struct kvaser_usb_net_priv *priv)
++{
++	struct kvaser_usb_net_leaf_priv *leaf;
++
++	leaf = devm_kzalloc(&priv->dev->intf->dev, sizeof(*leaf), GFP_KERNEL);
++	if (!leaf)
++		return -ENOMEM;
++
++	leaf->net = priv;
++	INIT_DELAYED_WORK(&leaf->chip_state_req_work,
++			  kvaser_usb_leaf_chip_state_req_work);
++
++	priv->sub_priv = leaf;
++
++	return 0;
++}
++
++static void kvaser_usb_leaf_remove_channel(struct kvaser_usb_net_priv *priv)
++{
++	struct kvaser_usb_net_leaf_priv *leaf = priv->sub_priv;
++
++	if (leaf)
++		cancel_delayed_work_sync(&leaf->chip_state_req_work);
++}
++
++static int kvaser_usb_leaf_set_bittiming(const struct net_device *netdev,
++					 const struct kvaser_usb_busparams *busparams)
+ {
+ 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
+-	struct can_bittiming *bt = &priv->can.bittiming;
+ 	struct kvaser_usb *dev = priv->dev;
+ 	struct kvaser_cmd *cmd;
+ 	int rc;
+@@ -1406,15 +1725,8 @@ static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
+ 	cmd->len = CMD_HEADER_LEN + sizeof(struct kvaser_cmd_busparams);
+ 	cmd->u.busparams.channel = priv->channel;
+ 	cmd->u.busparams.tid = 0xff;
+-	cmd->u.busparams.bitrate = cpu_to_le32(bt->bitrate);
+-	cmd->u.busparams.sjw = bt->sjw;
+-	cmd->u.busparams.tseg1 = bt->prop_seg + bt->phase_seg1;
+-	cmd->u.busparams.tseg2 = bt->phase_seg2;
+-
+-	if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
+-		cmd->u.busparams.no_samp = 3;
+-	else
+-		cmd->u.busparams.no_samp = 1;
++	memcpy(&cmd->u.busparams.busparams, busparams,
++	       sizeof(cmd->u.busparams.busparams));
+ 
+ 	rc = kvaser_usb_send_cmd(dev, cmd, cmd->len);
+ 
+@@ -1422,6 +1734,27 @@ static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
+ 	return rc;
+ }
+ 
++static int kvaser_usb_leaf_get_busparams(struct kvaser_usb_net_priv *priv)
++{
++	int err;
++
++	if (priv->dev->driver_info->family == KVASER_USBCAN)
++		return -EOPNOTSUPP;
++
++	reinit_completion(&priv->get_busparams_comp);
++
++	err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_GET_BUS_PARAMS,
++					      priv->channel);
++	if (err)
++		return err;
++
++	if (!wait_for_completion_timeout(&priv->get_busparams_comp,
++					 msecs_to_jiffies(KVASER_USB_TIMEOUT)))
++		return -ETIMEDOUT;
++
++	return 0;
++}
++
+ static int kvaser_usb_leaf_set_mode(struct net_device *netdev,
+ 				    enum can_mode mode)
+ {
+@@ -1483,14 +1816,18 @@ static int kvaser_usb_leaf_setup_endpoints(struct kvaser_usb *dev)
+ const struct kvaser_usb_dev_ops kvaser_usb_leaf_dev_ops = {
+ 	.dev_set_mode = kvaser_usb_leaf_set_mode,
+ 	.dev_set_bittiming = kvaser_usb_leaf_set_bittiming,
++	.dev_get_busparams = kvaser_usb_leaf_get_busparams,
+ 	.dev_set_data_bittiming = NULL,
++	.dev_get_data_busparams = NULL,
+ 	.dev_get_berr_counter = kvaser_usb_leaf_get_berr_counter,
+ 	.dev_setup_endpoints = kvaser_usb_leaf_setup_endpoints,
+ 	.dev_init_card = kvaser_usb_leaf_init_card,
++	.dev_init_channel = kvaser_usb_leaf_init_channel,
++	.dev_remove_channel = kvaser_usb_leaf_remove_channel,
+ 	.dev_get_software_info = kvaser_usb_leaf_get_software_info,
+ 	.dev_get_software_details = NULL,
+ 	.dev_get_card_info = kvaser_usb_leaf_get_card_info,
+-	.dev_get_capabilities = NULL,
++	.dev_get_capabilities = kvaser_usb_leaf_get_capabilities,
+ 	.dev_set_opt_mode = kvaser_usb_leaf_set_opt_mode,
+ 	.dev_start_chip = kvaser_usb_leaf_start_chip,
+ 	.dev_stop_chip = kvaser_usb_leaf_stop_chip,
+diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
+index c79bb8cf962ce..deeed50a42c05 100644
+--- a/drivers/net/dsa/lan9303-core.c
++++ b/drivers/net/dsa/lan9303-core.c
+@@ -1002,9 +1002,11 @@ static void lan9303_get_ethtool_stats(struct dsa_switch *ds, int port,
+ 		ret = lan9303_read_switch_port(
+ 			chip, port, lan9303_mib[u].offset, &reg);
+ 
+-		if (ret)
++		if (ret) {
+ 			dev_warn(chip->dev, "Reading status port %d reg %u failed\n",
+ 				 port, lan9303_mib[u].offset);
++			reg = 0;
++		}
+ 		data[u] = reg;
+ 	}
+ }
+diff --git a/drivers/net/ethernet/amd/atarilance.c b/drivers/net/ethernet/amd/atarilance.c
+index 961796abab352..5e8b72db88739 100644
+--- a/drivers/net/ethernet/amd/atarilance.c
++++ b/drivers/net/ethernet/amd/atarilance.c
+@@ -825,7 +825,7 @@ lance_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	lp->memcpy_f( PKTBUF_ADDR(head), (void *)skb->data, skb->len );
+ 	head->flag = TMD1_OWN_CHIP | TMD1_ENP | TMD1_STP;
+ 	dev->stats.tx_bytes += skb->len;
+-	dev_kfree_skb( skb );
++	dev_consume_skb_irq(skb);
+ 	lp->cur_tx++;
+ 	while( lp->cur_tx >= TX_RING_SIZE && lp->dirty_tx >= TX_RING_SIZE ) {
+ 		lp->cur_tx -= TX_RING_SIZE;
+diff --git a/drivers/net/ethernet/amd/lance.c b/drivers/net/ethernet/amd/lance.c
+index aff44241988c4..9dae225b7fd5d 100644
+--- a/drivers/net/ethernet/amd/lance.c
++++ b/drivers/net/ethernet/amd/lance.c
+@@ -997,7 +997,7 @@ static netdev_tx_t lance_start_xmit(struct sk_buff *skb,
+ 		skb_copy_from_linear_data(skb, &lp->tx_bounce_buffs[entry], skb->len);
+ 		lp->tx_ring[entry].base =
+ 			((u32)isa_virt_to_bus((lp->tx_bounce_buffs + entry)) & 0xffffff) | 0x83000000;
+-		dev_kfree_skb(skb);
++		dev_consume_skb_irq(skb);
+ 	} else {
+ 		lp->tx_skbuff[entry] = skb;
+ 		lp->tx_ring[entry].base = ((u32)isa_virt_to_bus(skb->data) & 0xffffff) | 0x83000000;
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+index a816b30bca04c..a5d6faf7b89e1 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+@@ -1064,6 +1064,9 @@ static void xgbe_free_irqs(struct xgbe_prv_data *pdata)
+ 
+ 	devm_free_irq(pdata->dev, pdata->dev_irq, pdata);
+ 
++	tasklet_kill(&pdata->tasklet_dev);
++	tasklet_kill(&pdata->tasklet_ecc);
++
+ 	if (pdata->vdata->ecc_support && (pdata->dev_irq != pdata->ecc_irq))
+ 		devm_free_irq(pdata->dev, pdata->ecc_irq, pdata);
+ 
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c b/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c
+index 22d4fc547a0a3..a9ccc4258ee50 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c
+@@ -447,8 +447,10 @@ static void xgbe_i2c_stop(struct xgbe_prv_data *pdata)
+ 	xgbe_i2c_disable(pdata);
+ 	xgbe_i2c_clear_all_interrupts(pdata);
+ 
+-	if (pdata->dev_irq != pdata->i2c_irq)
++	if (pdata->dev_irq != pdata->i2c_irq) {
+ 		devm_free_irq(pdata->dev, pdata->i2c_irq, pdata);
++		tasklet_kill(&pdata->tasklet_i2c);
++	}
+ }
+ 
+ static int xgbe_i2c_start(struct xgbe_prv_data *pdata)
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+index 4e97b48695220..0c5c1b1556830 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+@@ -1390,8 +1390,10 @@ static void xgbe_phy_stop(struct xgbe_prv_data *pdata)
+ 	/* Disable auto-negotiation */
+ 	xgbe_an_disable_all(pdata);
+ 
+-	if (pdata->dev_irq != pdata->an_irq)
++	if (pdata->dev_irq != pdata->an_irq) {
+ 		devm_free_irq(pdata->dev, pdata->an_irq, pdata);
++		tasklet_kill(&pdata->tasklet_an);
++	}
+ 
+ 	pdata->phy_if.phy_impl.stop(pdata);
+ 
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+index a7166cd1179f2..97e32c0490f8a 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+@@ -189,6 +189,7 @@ enum xgbe_sfp_cable {
+ 	XGBE_SFP_CABLE_UNKNOWN = 0,
+ 	XGBE_SFP_CABLE_ACTIVE,
+ 	XGBE_SFP_CABLE_PASSIVE,
++	XGBE_SFP_CABLE_FIBER,
+ };
+ 
+ enum xgbe_sfp_base {
+@@ -236,10 +237,7 @@ enum xgbe_sfp_speed {
+ 
+ #define XGBE_SFP_BASE_BR			12
+ #define XGBE_SFP_BASE_BR_1GBE_MIN		0x0a
+-#define XGBE_SFP_BASE_BR_1GBE_MAX		0x0d
+ #define XGBE_SFP_BASE_BR_10GBE_MIN		0x64
+-#define XGBE_SFP_BASE_BR_10GBE_MAX		0x68
+-#define XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX	0x78
+ 
+ #define XGBE_SFP_BASE_CU_CABLE_LEN		18
+ 
+@@ -826,29 +824,22 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
+ static bool xgbe_phy_sfp_bit_rate(struct xgbe_sfp_eeprom *sfp_eeprom,
+ 				  enum xgbe_sfp_speed sfp_speed)
+ {
+-	u8 *sfp_base, min, max;
++	u8 *sfp_base, min;
+ 
+ 	sfp_base = sfp_eeprom->base;
+ 
+ 	switch (sfp_speed) {
+ 	case XGBE_SFP_SPEED_1000:
+ 		min = XGBE_SFP_BASE_BR_1GBE_MIN;
+-		max = XGBE_SFP_BASE_BR_1GBE_MAX;
+ 		break;
+ 	case XGBE_SFP_SPEED_10000:
+ 		min = XGBE_SFP_BASE_BR_10GBE_MIN;
+-		if (memcmp(&sfp_eeprom->base[XGBE_SFP_BASE_VENDOR_NAME],
+-			   XGBE_MOLEX_VENDOR, XGBE_SFP_BASE_VENDOR_NAME_LEN) == 0)
+-			max = XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX;
+-		else
+-			max = XGBE_SFP_BASE_BR_10GBE_MAX;
+ 		break;
+ 	default:
+ 		return false;
+ 	}
+ 
+-	return ((sfp_base[XGBE_SFP_BASE_BR] >= min) &&
+-		(sfp_base[XGBE_SFP_BASE_BR] <= max));
++	return sfp_base[XGBE_SFP_BASE_BR] >= min;
+ }
+ 
+ static void xgbe_phy_free_phy_device(struct xgbe_prv_data *pdata)
+@@ -1149,16 +1140,18 @@ static void xgbe_phy_sfp_parse_eeprom(struct xgbe_prv_data *pdata)
+ 	phy_data->sfp_tx_fault = xgbe_phy_check_sfp_tx_fault(phy_data);
+ 	phy_data->sfp_rx_los = xgbe_phy_check_sfp_rx_los(phy_data);
+ 
+-	/* Assume ACTIVE cable unless told it is PASSIVE */
++	/* Assume FIBER cable unless told otherwise */
+ 	if (sfp_base[XGBE_SFP_BASE_CABLE] & XGBE_SFP_BASE_CABLE_PASSIVE) {
+ 		phy_data->sfp_cable = XGBE_SFP_CABLE_PASSIVE;
+ 		phy_data->sfp_cable_len = sfp_base[XGBE_SFP_BASE_CU_CABLE_LEN];
+-	} else {
++	} else if (sfp_base[XGBE_SFP_BASE_CABLE] & XGBE_SFP_BASE_CABLE_ACTIVE) {
+ 		phy_data->sfp_cable = XGBE_SFP_CABLE_ACTIVE;
++	} else {
++		phy_data->sfp_cable = XGBE_SFP_CABLE_FIBER;
+ 	}
+ 
+ 	/* Determine the type of SFP */
+-	if (phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE &&
++	if (phy_data->sfp_cable != XGBE_SFP_CABLE_FIBER &&
+ 	    xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
+ 		phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;
+ 	else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
+diff --git a/drivers/net/ethernet/apple/bmac.c b/drivers/net/ethernet/apple/bmac.c
+index 1e4e402f07d76..dd6c44f5f9256 100644
+--- a/drivers/net/ethernet/apple/bmac.c
++++ b/drivers/net/ethernet/apple/bmac.c
+@@ -1511,7 +1511,7 @@ static void bmac_tx_timeout(struct timer_list *t)
+ 	i = bp->tx_empty;
+ 	++dev->stats.tx_errors;
+ 	if (i != bp->tx_fill) {
+-		dev_kfree_skb(bp->tx_bufs[i]);
++		dev_kfree_skb_irq(bp->tx_bufs[i]);
+ 		bp->tx_bufs[i] = NULL;
+ 		if (++i >= N_TX_RING) i = 0;
+ 		bp->tx_empty = i;
+diff --git a/drivers/net/ethernet/apple/mace.c b/drivers/net/ethernet/apple/mace.c
+index 9e5006e592155..6f6530c291666 100644
+--- a/drivers/net/ethernet/apple/mace.c
++++ b/drivers/net/ethernet/apple/mace.c
+@@ -841,7 +841,7 @@ static void mace_tx_timeout(struct timer_list *t)
+     if (mp->tx_bad_runt) {
+ 	mp->tx_bad_runt = 0;
+     } else if (i != mp->tx_fill) {
+-	dev_kfree_skb(mp->tx_bufs[i]);
++	dev_kfree_skb_irq(mp->tx_bufs[i]);
+ 	if (++i >= N_TX_RING)
+ 	    i = 0;
+ 	mp->tx_empty = i;
+diff --git a/drivers/net/ethernet/dnet.c b/drivers/net/ethernet/dnet.c
+index 48c6eb142dcc0..05a0cc583f8a4 100644
+--- a/drivers/net/ethernet/dnet.c
++++ b/drivers/net/ethernet/dnet.c
+@@ -550,11 +550,11 @@ static netdev_tx_t dnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	skb_tx_timestamp(skb);
+ 
++	spin_unlock_irqrestore(&bp->lock, flags);
++
+ 	/* free the buffer */
+ 	dev_kfree_skb(skb);
+ 
+-	spin_unlock_irqrestore(&bp->lock, flags);
+-
+ 	return NETDEV_TX_OK;
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 975762ccb66fd..5f9603d4c0493 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -5,6 +5,7 @@
+ #include <linux/tcp.h>
+ #include <linux/udp.h>
+ #include <linux/vmalloc.h>
++#include <net/pkt_sched.h>
+ 
+ /* ENETC overhead: optional extension BD + 1 BD gap */
+ #define ENETC_TXBDS_NEEDED(val)	((val) + 2)
+@@ -384,12 +385,7 @@ static void enetc_tstamp_tx(struct sk_buff *skb, u64 tstamp)
+ 	if (skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS) {
+ 		memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+ 		shhwtstamps.hwtstamp = ns_to_ktime(tstamp);
+-		/* Ensure skb_mstamp_ns, which might have been populated with
+-		 * the txtime, is not mistaken for a software timestamp,
+-		 * because this will prevent the dispatch of our hardware
+-		 * timestamp to the socket.
+-		 */
+-		skb->tstamp = ktime_set(0, 0);
++		skb_txtime_consumed(skb);
+ 		skb_tstamp_tx(skb, &shhwtstamps);
+ 	}
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index d6580e942724d..f7f3e4bbc4770 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -3089,7 +3089,8 @@ static int hclgevf_pci_reset(struct hclgevf_dev *hdev)
+ 	struct pci_dev *pdev = hdev->pdev;
+ 	int ret = 0;
+ 
+-	if (hdev->reset_type == HNAE3_VF_FULL_RESET &&
++	if ((hdev->reset_type == HNAE3_VF_FULL_RESET ||
++	     hdev->reset_type == HNAE3_FLR_RESET) &&
+ 	    test_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state)) {
+ 		hclgevf_misc_irq_uninit(hdev);
+ 		hclgevf_uninit_msi(hdev);
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index f24f1a8ec2fbc..0ea8e4024d638 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -1204,8 +1204,12 @@ static int igb_alloc_q_vector(struct igb_adapter *adapter,
+ 	if (!q_vector) {
+ 		q_vector = kzalloc(size, GFP_KERNEL);
+ 	} else if (size > ksize(q_vector)) {
+-		kfree_rcu(q_vector, rcu);
+-		q_vector = kzalloc(size, GFP_KERNEL);
++		struct igb_q_vector *new_q_vector;
++
++		new_q_vector = kzalloc(size, GFP_KERNEL);
++		if (new_q_vector)
++			kfree_rcu(q_vector, rcu);
++		q_vector = new_q_vector;
+ 	} else {
+ 		memset(q_vector, 0, size);
+ 	}
+@@ -5879,7 +5883,7 @@ static void igb_tx_ctxtdesc(struct igb_ring *tx_ring,
+ 	 */
+ 	if (tx_ring->launchtime_enable) {
+ 		ts = ktime_to_timespec64(first->skb->tstamp);
+-		first->skb->tstamp = ktime_set(0, 0);
++		skb_txtime_consumed(first->skb);
+ 		context_desc->seqnum_seed = cpu_to_le32(ts.tv_nsec / 32);
+ 	} else {
+ 		context_desc->seqnum_seed = 0;
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index a97bf7a5f1d67..970dd878d8a76 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -87,6 +87,8 @@ struct igc_ring {
+ 	u8 queue_index;                 /* logical index of the ring*/
+ 	u8 reg_idx;                     /* physical index of the ring */
+ 	bool launchtime_enable;         /* true if LaunchTime is enabled */
++	ktime_t last_tx_cycle;          /* end of the cycle with a launchtime transmission */
++	ktime_t last_ff_cycle;          /* Last cycle with an active first flag */
+ 
+ 	u32 start_time;
+ 	u32 end_time;
+diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h
+index 32f5fd6841398..352b50d3881d0 100644
+--- a/drivers/net/ethernet/intel/igc/igc_defines.h
++++ b/drivers/net/ethernet/intel/igc/igc_defines.h
+@@ -278,6 +278,8 @@
+ #define IGC_ADVTXD_L4LEN_SHIFT	8  /* Adv ctxt L4LEN shift */
+ #define IGC_ADVTXD_MSS_SHIFT	16 /* Adv ctxt MSS shift */
+ 
++#define IGC_ADVTXD_TSN_CNTX_FIRST	0x00000080
++
+ /* Transmit Control */
+ #define IGC_TCTL_EN		0x00000002 /* enable Tx */
+ #define IGC_TCTL_PSP		0x00000008 /* pad short packets */
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index e7ffe63925fd2..1a0aae7b128d8 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -898,25 +898,118 @@ static int igc_write_mc_addr_list(struct net_device *netdev)
+ 	return netdev_mc_count(netdev);
+ }
+ 
+-static __le32 igc_tx_launchtime(struct igc_adapter *adapter, ktime_t txtime)
++static __le32 igc_tx_launchtime(struct igc_ring *ring, ktime_t txtime,
++				bool *first_flag, bool *insert_empty)
+ {
++	struct igc_adapter *adapter = netdev_priv(ring->netdev);
+ 	ktime_t cycle_time = adapter->cycle_time;
+ 	ktime_t base_time = adapter->base_time;
++	ktime_t now = ktime_get_clocktai();
++	ktime_t baset_est, end_of_cycle;
+ 	u32 launchtime;
++	s64 n;
+ 
+-	/* FIXME: when using ETF together with taprio, we may have a
+-	 * case where 'delta' is larger than the cycle_time, this may
+-	 * cause problems if we don't read the current value of
+-	 * IGC_BASET, as the value writen into the launchtime
+-	 * descriptor field may be misinterpreted.
++	n = div64_s64(ktime_sub_ns(now, base_time), cycle_time);
++
++	baset_est = ktime_add_ns(base_time, cycle_time * (n));
++	end_of_cycle = ktime_add_ns(baset_est, cycle_time);
++
++	if (ktime_compare(txtime, end_of_cycle) >= 0) {
++		if (baset_est != ring->last_ff_cycle) {
++			*first_flag = true;
++			ring->last_ff_cycle = baset_est;
++
++			if (ktime_compare(txtime, ring->last_tx_cycle) > 0)
++				*insert_empty = true;
++		}
++	}
++
++	/* Introducing a window at end of cycle on which packets
++	 * potentially not honor launchtime. Window of 5us chosen
++	 * considering software update the tail pointer and packets
++	 * are dma'ed to packet buffer.
+ 	 */
+-	div_s64_rem(ktime_sub_ns(txtime, base_time), cycle_time, &launchtime);
++	if ((ktime_sub_ns(end_of_cycle, now) < 5 * NSEC_PER_USEC))
++		netdev_warn(ring->netdev, "Packet with txtime=%llu may not be honoured\n",
++			    txtime);
++
++	ring->last_tx_cycle = end_of_cycle;
++
++	launchtime = ktime_sub_ns(txtime, baset_est);
++	if (launchtime > 0)
++		div_s64_rem(launchtime, cycle_time, &launchtime);
++	else
++		launchtime = 0;
+ 
+ 	return cpu_to_le32(launchtime);
+ }
+ 
++static int igc_init_empty_frame(struct igc_ring *ring,
++				struct igc_tx_buffer *buffer,
++				struct sk_buff *skb)
++{
++	unsigned int size;
++	dma_addr_t dma;
++
++	size = skb_headlen(skb);
++
++	dma = dma_map_single(ring->dev, skb->data, size, DMA_TO_DEVICE);
++	if (dma_mapping_error(ring->dev, dma)) {
++		netdev_err_once(ring->netdev, "Failed to map DMA for TX\n");
++		return -ENOMEM;
++	}
++
++	buffer->skb = skb;
++	buffer->protocol = 0;
++	buffer->bytecount = skb->len;
++	buffer->gso_segs = 1;
++	buffer->time_stamp = jiffies;
++	dma_unmap_len_set(buffer, len, skb->len);
++	dma_unmap_addr_set(buffer, dma, dma);
++
++	return 0;
++}
++
++static int igc_init_tx_empty_descriptor(struct igc_ring *ring,
++					struct sk_buff *skb,
++					struct igc_tx_buffer *first)
++{
++	union igc_adv_tx_desc *desc;
++	u32 cmd_type, olinfo_status;
++	int err;
++
++	if (!igc_desc_unused(ring))
++		return -EBUSY;
++
++	err = igc_init_empty_frame(ring, first, skb);
++	if (err)
++		return err;
++
++	cmd_type = IGC_ADVTXD_DTYP_DATA | IGC_ADVTXD_DCMD_DEXT |
++		   IGC_ADVTXD_DCMD_IFCS | IGC_TXD_DCMD |
++		   first->bytecount;
++	olinfo_status = first->bytecount << IGC_ADVTXD_PAYLEN_SHIFT;
++
++	desc = IGC_TX_DESC(ring, ring->next_to_use);
++	desc->read.cmd_type_len = cpu_to_le32(cmd_type);
++	desc->read.olinfo_status = cpu_to_le32(olinfo_status);
++	desc->read.buffer_addr = cpu_to_le64(dma_unmap_addr(first, dma));
++
++	netdev_tx_sent_queue(txring_txq(ring), skb->len);
++
++	first->next_to_watch = desc;
++
++	ring->next_to_use++;
++	if (ring->next_to_use == ring->count)
++		ring->next_to_use = 0;
++
++	return 0;
++}
++
++#define IGC_EMPTY_FRAME_SIZE 60
++
+ static void igc_tx_ctxtdesc(struct igc_ring *tx_ring,
+-			    struct igc_tx_buffer *first,
++			    __le32 launch_time, bool first_flag,
+ 			    u32 vlan_macip_lens, u32 type_tucmd,
+ 			    u32 mss_l4len_idx)
+ {
+@@ -935,35 +1028,17 @@ static void igc_tx_ctxtdesc(struct igc_ring *tx_ring,
+ 	if (test_bit(IGC_RING_FLAG_TX_CTX_IDX, &tx_ring->flags))
+ 		mss_l4len_idx |= tx_ring->reg_idx << 4;
+ 
++	if (first_flag)
++		mss_l4len_idx |= IGC_ADVTXD_TSN_CNTX_FIRST;
++
+ 	context_desc->vlan_macip_lens	= cpu_to_le32(vlan_macip_lens);
+ 	context_desc->type_tucmd_mlhl	= cpu_to_le32(type_tucmd);
+ 	context_desc->mss_l4len_idx	= cpu_to_le32(mss_l4len_idx);
+-
+-	/* We assume there is always a valid Tx time available. Invalid times
+-	 * should have been handled by the upper layers.
+-	 */
+-	if (tx_ring->launchtime_enable) {
+-		struct igc_adapter *adapter = netdev_priv(tx_ring->netdev);
+-		ktime_t txtime = first->skb->tstamp;
+-
+-		first->skb->tstamp = ktime_set(0, 0);
+-		context_desc->launch_time = igc_tx_launchtime(adapter,
+-							      txtime);
+-	} else {
+-		context_desc->launch_time = 0;
+-	}
++	context_desc->launch_time	= launch_time;
+ }
+ 
+-static inline bool igc_ipv6_csum_is_sctp(struct sk_buff *skb)
+-{
+-	unsigned int offset = 0;
+-
+-	ipv6_find_hdr(skb, &offset, IPPROTO_SCTP, NULL, NULL);
+-
+-	return offset == skb_checksum_start_offset(skb);
+-}
+-
+-static void igc_tx_csum(struct igc_ring *tx_ring, struct igc_tx_buffer *first)
++static void igc_tx_csum(struct igc_ring *tx_ring, struct igc_tx_buffer *first,
++			__le32 launch_time, bool first_flag)
+ {
+ 	struct sk_buff *skb = first->skb;
+ 	u32 vlan_macip_lens = 0;
+@@ -985,10 +1060,7 @@ csum_failed:
+ 		break;
+ 	case offsetof(struct sctphdr, checksum):
+ 		/* validate that this is actually an SCTP request */
+-		if ((first->protocol == htons(ETH_P_IP) &&
+-		     (ip_hdr(skb)->protocol == IPPROTO_SCTP)) ||
+-		    (first->protocol == htons(ETH_P_IPV6) &&
+-		     igc_ipv6_csum_is_sctp(skb))) {
++		if (skb_csum_is_sctp(skb)) {
+ 			type_tucmd = IGC_ADVTXD_TUCMD_L4T_SCTP;
+ 			break;
+ 		}
+@@ -1006,7 +1078,8 @@ no_csum:
+ 	vlan_macip_lens |= skb_network_offset(skb) << IGC_ADVTXD_MACLEN_SHIFT;
+ 	vlan_macip_lens |= first->tx_flags & IGC_TX_FLAGS_VLAN_MASK;
+ 
+-	igc_tx_ctxtdesc(tx_ring, first, vlan_macip_lens, type_tucmd, 0);
++	igc_tx_ctxtdesc(tx_ring, launch_time, first_flag,
++			vlan_macip_lens, type_tucmd, 0);
+ }
+ 
+ static int __igc_maybe_stop_tx(struct igc_ring *tx_ring, const u16 size)
+@@ -1230,6 +1303,7 @@ dma_error:
+ 
+ static int igc_tso(struct igc_ring *tx_ring,
+ 		   struct igc_tx_buffer *first,
++		   __le32 launch_time, bool first_flag,
+ 		   u8 *hdr_len)
+ {
+ 	u32 vlan_macip_lens, type_tucmd, mss_l4len_idx;
+@@ -1316,8 +1390,8 @@ static int igc_tso(struct igc_ring *tx_ring,
+ 	vlan_macip_lens |= (ip.hdr - skb->data) << IGC_ADVTXD_MACLEN_SHIFT;
+ 	vlan_macip_lens |= first->tx_flags & IGC_TX_FLAGS_VLAN_MASK;
+ 
+-	igc_tx_ctxtdesc(tx_ring, first, vlan_macip_lens,
+-			type_tucmd, mss_l4len_idx);
++	igc_tx_ctxtdesc(tx_ring, launch_time, first_flag,
++			vlan_macip_lens, type_tucmd, mss_l4len_idx);
+ 
+ 	return 1;
+ }
+@@ -1325,11 +1399,14 @@ static int igc_tso(struct igc_ring *tx_ring,
+ static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
+ 				       struct igc_ring *tx_ring)
+ {
++	bool first_flag = false, insert_empty = false;
+ 	u16 count = TXD_USE_COUNT(skb_headlen(skb));
+ 	__be16 protocol = vlan_get_protocol(skb);
+ 	struct igc_tx_buffer *first;
++	__le32 launch_time = 0;
+ 	u32 tx_flags = 0;
+ 	unsigned short f;
++	ktime_t txtime;
+ 	u8 hdr_len = 0;
+ 	int tso = 0;
+ 
+@@ -1343,11 +1420,40 @@ static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
+ 		count += TXD_USE_COUNT(skb_frag_size(
+ 						&skb_shinfo(skb)->frags[f]));
+ 
+-	if (igc_maybe_stop_tx(tx_ring, count + 3)) {
++	if (igc_maybe_stop_tx(tx_ring, count + 5)) {
+ 		/* this is a hard error */
+ 		return NETDEV_TX_BUSY;
+ 	}
+ 
++	if (!tx_ring->launchtime_enable)
++		goto done;
++
++	txtime = skb->tstamp;
++	skb->tstamp = ktime_set(0, 0);
++	launch_time = igc_tx_launchtime(tx_ring, txtime, &first_flag, &insert_empty);
++
++	if (insert_empty) {
++		struct igc_tx_buffer *empty_info;
++		struct sk_buff *empty;
++		void *data;
++
++		empty_info = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
++		empty = alloc_skb(IGC_EMPTY_FRAME_SIZE, GFP_ATOMIC);
++		if (!empty)
++			goto done;
++
++		data = skb_put(empty, IGC_EMPTY_FRAME_SIZE);
++		memset(data, 0, IGC_EMPTY_FRAME_SIZE);
++
++		igc_tx_ctxtdesc(tx_ring, 0, false, 0, 0, 0);
++
++		if (igc_init_tx_empty_descriptor(tx_ring,
++						 empty,
++						 empty_info) < 0)
++			dev_kfree_skb_any(empty);
++	}
++
++done:
+ 	/* record the location of the first descriptor for this packet */
+ 	first = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
+ 	first->skb = skb;
+@@ -1378,11 +1484,11 @@ static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
+ 	first->tx_flags = tx_flags;
+ 	first->protocol = protocol;
+ 
+-	tso = igc_tso(tx_ring, first, &hdr_len);
++	tso = igc_tso(tx_ring, first, launch_time, first_flag, &hdr_len);
+ 	if (tso < 0)
+ 		goto out_drop;
+ 	else if (!tso)
+-		igc_tx_csum(tx_ring, first);
++		igc_tx_csum(tx_ring, first, launch_time, first_flag);
+ 
+ 	igc_tx_map(tx_ring, first, hdr_len);
+ 
+@@ -4756,9 +4862,10 @@ static bool validate_schedule(struct igc_adapter *adapter,
+ 		return false;
+ 
+ 	for (n = 0; n < qopt->num_entries; n++) {
+-		const struct tc_taprio_sched_entry *e;
++		const struct tc_taprio_sched_entry *e, *prev;
+ 		int i;
+ 
++		prev = n ? &qopt->entries[n - 1] : NULL;
+ 		e = &qopt->entries[n];
+ 
+ 		/* i225 only supports "global" frame preemption
+@@ -4771,7 +4878,12 @@ static bool validate_schedule(struct igc_adapter *adapter,
+ 			if (e->gate_mask & BIT(i))
+ 				queue_uses[i]++;
+ 
+-			if (queue_uses[i] > 1)
++			/* There are limitations: A single queue cannot be
++			 * opened and closed multiple times per cycle unless the
++			 * gate stays open. Check for it.
++			 */
++			if (queue_uses[i] > 1 &&
++			    !(prev->gate_mask & BIT(i)))
+ 				return false;
+ 		}
+ 	}
+@@ -4798,14 +4910,19 @@ static int igc_tsn_enable_launchtime(struct igc_adapter *adapter,
+ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
+ 				 struct tc_taprio_qopt_offload *qopt)
+ {
++	bool queue_configured[IGC_MAX_TX_QUEUES] = { };
+ 	u32 start_time = 0, end_time = 0;
+ 	size_t n;
++	int i;
+ 
+ 	if (!qopt->enable) {
+ 		adapter->base_time = 0;
+ 		return 0;
+ 	}
+ 
++	if (qopt->base_time < 0)
++		return -ERANGE;
++
+ 	if (adapter->base_time)
+ 		return -EALREADY;
+ 
+@@ -4815,28 +4932,58 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
+ 	adapter->cycle_time = qopt->cycle_time;
+ 	adapter->base_time = qopt->base_time;
+ 
+-	/* FIXME: be a little smarter about cases when the gate for a
+-	 * queue stays open for more than one entry.
+-	 */
+ 	for (n = 0; n < qopt->num_entries; n++) {
+ 		struct tc_taprio_sched_entry *e = &qopt->entries[n];
+-		int i;
+ 
+ 		end_time += e->interval;
+ 
++		/* If any of the conditions below are true, we need to manually
++		 * control the end time of the cycle.
++		 * 1. Qbv users can specify a cycle time that is not equal
++		 * to the total GCL intervals. Hence, recalculation is
++		 * necessary here to exclude the time interval that
++		 * exceeds the cycle time.
++		 * 2. According to IEEE Std. 802.1Q-2018 section 8.6.9.2,
++		 * once the end of the list is reached, it will switch
++		 * to the END_OF_CYCLE state and leave the gates in the
++		 * same state until the next cycle is started.
++		 */
++		if (end_time > adapter->cycle_time ||
++		    n + 1 == qopt->num_entries)
++			end_time = adapter->cycle_time;
++
+ 		for (i = 0; i < adapter->num_tx_queues; i++) {
+ 			struct igc_ring *ring = adapter->tx_ring[i];
+ 
+ 			if (!(e->gate_mask & BIT(i)))
+ 				continue;
+ 
+-			ring->start_time = start_time;
++			/* Check whether a queue stays open for more than one
++			 * entry. If so, keep the start and advance the end
++			 * time.
++			 */
++			if (!queue_configured[i])
++				ring->start_time = start_time;
+ 			ring->end_time = end_time;
++
++			queue_configured[i] = true;
+ 		}
+ 
+ 		start_time += e->interval;
+ 	}
+ 
++	/* Check whether a queue gets configured.
++	 * If not, set the start and end time to be end time.
++	 */
++	for (i = 0; i < adapter->num_tx_queues; i++) {
++		if (!queue_configured[i]) {
++			struct igc_ring *ring = adapter->tx_ring[i];
++
++			ring->start_time = end_time;
++			ring->end_time = end_time;
++		}
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.c b/drivers/net/ethernet/intel/igc/igc_tsn.c
+index 174103c4bea6d..2d4db2a547b26 100644
+--- a/drivers/net/ethernet/intel/igc/igc_tsn.c
++++ b/drivers/net/ethernet/intel/igc/igc_tsn.c
+@@ -92,15 +92,8 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter)
+ 		wr32(IGC_STQT(i), ring->start_time);
+ 		wr32(IGC_ENDQT(i), ring->end_time);
+ 
+-		if (adapter->base_time) {
+-			/* If we have a base_time we are in "taprio"
+-			 * mode and we need to be strict about the
+-			 * cycles: only transmit a packet if it can be
+-			 * completed during that cycle.
+-			 */
+-			txqctl |= IGC_TXQCTL_STRICT_CYCLE |
+-				IGC_TXQCTL_STRICT_END;
+-		}
++		txqctl |= IGC_TXQCTL_STRICT_CYCLE |
++			IGC_TXQCTL_STRICT_END;
+ 
+ 		if (ring->launchtime_enable)
+ 			txqctl |= IGC_TXQCTL_QUEUE_MODE_LAUNCHT;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index cfc3bfcb04a2f..5673a4113253b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -992,7 +992,7 @@ static int mlx5e_alloc_xdpsq(struct mlx5e_channel *c,
+ 	sq->channel   = c;
+ 	sq->uar_map   = mdev->mlx5e_res.bfreg.map;
+ 	sq->min_inline_mode = params->tx_min_inline_mode;
+-	sq->hw_mtu    = MLX5E_SW2HW_MTU(params, params->sw_mtu);
++	sq->hw_mtu    = MLX5E_SW2HW_MTU(params, params->sw_mtu) - ETH_FCS_LEN;
+ 	sq->xsk_pool  = xsk_pool;
+ 
+ 	sq->stats = sq->xsk_pool ?
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index f1bf7f6758d0d..16846442717dc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1344,9 +1344,9 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
+ 		      struct netlink_ext_ack *extack)
+ {
+ 	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+-	struct net_device *out_dev, *encap_dev = NULL;
+ 	struct mlx5e_tc_flow_parse_attr *parse_attr;
+ 	struct mlx5_flow_attr *attr = flow->attr;
++	struct net_device *encap_dev = NULL;
+ 	struct mlx5_esw_flow_attr *esw_attr;
+ 	struct mlx5_fc *counter = NULL;
+ 	struct mlx5e_rep_priv *rpriv;
+@@ -1391,16 +1391,22 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
+ 	esw_attr = attr->esw_attr;
+ 
+ 	for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) {
++		struct net_device *out_dev;
+ 		int mirred_ifindex;
+ 
+ 		if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP))
+ 			continue;
+ 
+ 		mirred_ifindex = parse_attr->mirred_ifindex[out_index];
+-		out_dev = __dev_get_by_index(dev_net(priv->netdev),
+-					     mirred_ifindex);
++		out_dev = dev_get_by_index(dev_net(priv->netdev), mirred_ifindex);
++		if (!out_dev) {
++			NL_SET_ERR_MSG_MOD(extack, "Requested mirred device not found");
++			err = -ENODEV;
++			return err;
++		}
+ 		err = mlx5e_attach_encap(priv, flow, out_dev, out_index,
+ 					 extack, &encap_dev, &encap_valid);
++		dev_put(out_dev);
+ 		if (err)
+ 			return err;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index 0c32c485eb588..b210545147368 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -618,6 +618,12 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work)
+ 	priv = container_of(health, struct mlx5_priv, health);
+ 	dev = container_of(priv, struct mlx5_core_dev, priv);
+ 
++	mutex_lock(&dev->intf_state_mutex);
++	if (test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags)) {
++		mlx5_core_err(dev, "health works are not permitted at this stage\n");
++		return;
++	}
++	mutex_unlock(&dev->intf_state_mutex);
+ 	enter_error_state(dev, false);
+ 	if (IS_ERR_OR_NULL(health->fw_fatal_reporter)) {
+ 		if (mlx5_health_try_recover(dev))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+index 5c6a376aa62ec..0e7fd200b4268 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+@@ -69,6 +69,10 @@ static void mlx5i_build_nic_params(struct mlx5_core_dev *mdev,
+ 	params->lro_en = false;
+ 	params->hard_mtu = MLX5_IB_GRH_BYTES + MLX5_IPOIB_HARD_LEN;
+ 	params->tunneled_offload_en = false;
++
++	/* CQE compression is not supported for IPoIB */
++	params->rx_cqe_compress_def = false;
++	MLX5E_SET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS, params->rx_cqe_compress_def);
+ }
+ 
+ /* Called directly after IPoIB netdevice was created to initialize SW structs */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 8246b6285d5a4..29bc1df28aeb3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -906,6 +906,8 @@ err_rl_cleanup:
+ err_tables_cleanup:
+ 	mlx5_geneve_destroy(dev->geneve);
+ 	mlx5_vxlan_destroy(dev->vxlan);
++	mlx5_cleanup_clock(dev);
++	mlx5_cleanup_reserved_gids(dev);
+ 	mlx5_cq_debugfs_cleanup(dev);
+ 	mlx5_fw_reset_cleanup(dev);
+ err_events_cleanup:
+diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+index 1664e9184c9ca..5a1ed4818baac 100644
+--- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
++++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+@@ -3920,6 +3920,7 @@ abort_with_slices:
+ 	myri10ge_free_slices(mgp);
+ 
+ abort_with_firmware:
++	kfree(mgp->msix_vectors);
+ 	myri10ge_dummy_rdma(mgp, 0);
+ 
+ abort_with_ioremap:
+diff --git a/drivers/net/ethernet/neterion/s2io.c b/drivers/net/ethernet/neterion/s2io.c
+index 8a30be698f992..ff46b08ac1f42 100644
+--- a/drivers/net/ethernet/neterion/s2io.c
++++ b/drivers/net/ethernet/neterion/s2io.c
+@@ -2384,7 +2384,7 @@ static void free_tx_buffers(struct s2io_nic *nic)
+ 			skb = s2io_txdl_getskb(&mac_control->fifos[i], txdp, j);
+ 			if (skb) {
+ 				swstats->mem_freed += skb->truesize;
+-				dev_kfree_skb(skb);
++				dev_kfree_skb_irq(skb);
+ 				cnt++;
+ 			}
+ 		}
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+index 46dbb49f837c8..5463c8b8e43ce 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+@@ -986,7 +986,7 @@ static int ionic_tx_calc_csum(struct ionic_queue *q, struct sk_buff *skb)
+ 		stats->vlan_inserted++;
+ 	}
+ 
+-	if (skb->csum_not_inet)
++	if (skb_csum_is_sctp(skb))
+ 		stats->crc32_csum++;
+ 	else
+ 		stats->csum++;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+index 6ab3e60d4928c..4b4077cf2d266 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_debug.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+@@ -1796,9 +1796,10 @@ static u32 qed_grc_dump_addr_range(struct qed_hwfn *p_hwfn,
+ 				   u8 split_id)
+ {
+ 	struct dbg_tools_data *dev_data = &p_hwfn->dbg_info;
+-	u8 port_id = 0, pf_id = 0, vf_id = 0, fid = 0;
++	u8 port_id = 0, pf_id = 0, vf_id = 0;
+ 	bool read_using_dmae = false;
+ 	u32 thresh;
++	u16 fid;
+ 
+ 	if (!dump)
+ 		return len;
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
+index d2c190732d3ef..beeeec8516b83 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
+@@ -2505,7 +2505,13 @@ int qlcnic_83xx_init(struct qlcnic_adapter *adapter, int pci_using_dac)
+ 		goto disable_mbx_intr;
+ 
+ 	qlcnic_83xx_clear_function_resources(adapter);
+-	qlcnic_dcb_enable(adapter->dcb);
++
++	err = qlcnic_dcb_enable(adapter->dcb);
++	if (err) {
++		qlcnic_dcb_free(adapter->dcb);
++		goto disable_mbx_intr;
++	}
++
+ 	qlcnic_83xx_initialize_nic(adapter, 1);
+ 	qlcnic_dcb_get_info(adapter->dcb);
+ 
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
+index 7519773eaca6e..22afa2be85fdb 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
+@@ -41,11 +41,6 @@ struct qlcnic_dcb {
+ 	unsigned long			state;
+ };
+ 
+-static inline void qlcnic_clear_dcb_ops(struct qlcnic_dcb *dcb)
+-{
+-	kfree(dcb);
+-}
+-
+ static inline int qlcnic_dcb_get_hw_capability(struct qlcnic_dcb *dcb)
+ {
+ 	if (dcb && dcb->ops->get_hw_capability)
+@@ -112,9 +107,8 @@ static inline void qlcnic_dcb_init_dcbnl_ops(struct qlcnic_dcb *dcb)
+ 		dcb->ops->init_dcbnl_ops(dcb);
+ }
+ 
+-static inline void qlcnic_dcb_enable(struct qlcnic_dcb *dcb)
++static inline int qlcnic_dcb_enable(struct qlcnic_dcb *dcb)
+ {
+-	if (dcb && qlcnic_dcb_attach(dcb))
+-		qlcnic_clear_dcb_ops(dcb);
++	return dcb ? qlcnic_dcb_attach(dcb) : 0;
+ }
+ #endif
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+index 27c07b2412f46..44b745293fd0c 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+@@ -2622,7 +2622,13 @@ qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 			 "Device does not support MSI interrupts\n");
+ 
+ 	if (qlcnic_82xx_check(adapter)) {
+-		qlcnic_dcb_enable(adapter->dcb);
++		err = qlcnic_dcb_enable(adapter->dcb);
++		if (err) {
++			qlcnic_dcb_free(adapter->dcb);
++			dev_err(&pdev->dev, "Failed to enable DCB\n");
++			goto err_out_free_hw;
++		}
++
+ 		qlcnic_dcb_get_info(adapter->dcb);
+ 		err = qlcnic_setup_intr(adapter);
+ 
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
+index 8367891bfb139..e864c453c5e6b 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
+@@ -221,6 +221,8 @@ int qlcnic_sriov_init(struct qlcnic_adapter *adapter, int num_vfs)
+ 	return 0;
+ 
+ qlcnic_destroy_async_wq:
++	while (i--)
++		kfree(sriov->vf_info[i].vp);
+ 	destroy_workqueue(bc->bc_async_wq);
+ 
+ qlcnic_destroy_trans_wq:
+diff --git a/drivers/net/ethernet/rdc/r6040.c b/drivers/net/ethernet/rdc/r6040.c
+index ccdfa930130bc..4cff544f04c2e 100644
+--- a/drivers/net/ethernet/rdc/r6040.c
++++ b/drivers/net/ethernet/rdc/r6040.c
+@@ -1158,10 +1158,12 @@ static int r6040_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	err = register_netdev(dev);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "Failed to register net device\n");
+-		goto err_out_mdio_unregister;
++		goto err_out_phy_disconnect;
+ 	}
+ 	return 0;
+ 
++err_out_phy_disconnect:
++	phy_disconnect(dev->phydev);
+ err_out_mdio_unregister:
+ 	mdiobus_unregister(lp->mii_bus);
+ err_out_mdio:
+@@ -1185,6 +1187,7 @@ static void r6040_remove_one(struct pci_dev *pdev)
+ 	struct r6040_private *lp = netdev_priv(dev);
+ 
+ 	unregister_netdev(dev);
++	phy_disconnect(dev->phydev);
+ 	mdiobus_unregister(lp->mii_bus);
+ 	mdiobus_free(lp->mii_bus);
+ 	netif_napi_del(&lp->napi);
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 9e7b85e178fd2..9ec6d63691aab 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -2253,11 +2253,11 @@ static int ravb_remove(struct platform_device *pdev)
+ 			  priv->desc_bat_dma);
+ 	/* Set reset mode */
+ 	ravb_write(ndev, CCC_OPC_RESET, CCC);
+-	pm_runtime_put_sync(&pdev->dev);
+ 	unregister_netdev(ndev);
+ 	netif_napi_del(&priv->napi[RAVB_NC]);
+ 	netif_napi_del(&priv->napi[RAVB_BE]);
+ 	ravb_mdio_release(priv);
++	pm_runtime_put_sync(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	free_netdev(ndev);
+ 	platform_set_drvdata(pdev, NULL);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+index 53efcc9c40e28..0ad5ce874557c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+@@ -44,7 +44,8 @@ static void config_sub_second_increment(void __iomem *ioaddr,
+ 	if (!(value & PTP_TCR_TSCTRLSSR))
+ 		data = (data * 1000) / 465;
+ 
+-	data &= PTP_SSIR_SSINC_MASK;
++	if (data > PTP_SSIR_SSINC_MAX)
++		data = PTP_SSIR_SSINC_MAX;
+ 
+ 	reg_value = data;
+ 	if (gmac4)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
+index 7abb1d47e7dac..60e6b085e2f6d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
+@@ -61,7 +61,7 @@
+ #define	PTP_TCR_TSENMACADDR	BIT(18)
+ 
+ /* SSIR defines */
+-#define	PTP_SSIR_SSINC_MASK		0xff
++#define	PTP_SSIR_SSINC_MAX		0xff
+ #define	GMAC4_PTP_SSIR_SSINC_SHIFT	16
+ 
+ #endif	/* __STMMAC_PTP_H__ */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
+index dd5c4ef92ef3c..ea7200b7b6477 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
+@@ -1654,12 +1654,16 @@ static int stmmac_test_arpoffload(struct stmmac_priv *priv)
+ 	}
+ 
+ 	ret = stmmac_set_arp_offload(priv, priv->hw, true, ip_addr);
+-	if (ret)
++	if (ret) {
++		kfree_skb(skb);
+ 		goto cleanup;
++	}
+ 
+ 	ret = dev_set_promiscuity(priv->dev, 1);
+-	if (ret)
++	if (ret) {
++		kfree_skb(skb);
+ 		goto cleanup;
++	}
+ 
+ 	ret = dev_direct_xmit(skb, 0);
+ 	if (ret)
+diff --git a/drivers/net/ethernet/ti/netcp_core.c b/drivers/net/ethernet/ti/netcp_core.c
+index dc50e948195de..f145abb77a497 100644
+--- a/drivers/net/ethernet/ti/netcp_core.c
++++ b/drivers/net/ethernet/ti/netcp_core.c
+@@ -1262,7 +1262,7 @@ out:
+ }
+ 
+ /* Submit the packet */
+-static int netcp_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev)
++static netdev_tx_t netcp_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ {
+ 	struct netcp_intf *netcp = netdev_priv(ndev);
+ 	struct netcp_stats *tx_stats = &netcp->stats;
+diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+index f6ea4a0ad5dff..02b95afe25066 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+@@ -541,7 +541,7 @@ static void xemaclite_tx_timeout(struct net_device *dev, unsigned int txqueue)
+ 	xemaclite_enable_interrupts(lp);
+ 
+ 	if (lp->deferred_skb) {
+-		dev_kfree_skb(lp->deferred_skb);
++		dev_kfree_skb_irq(lp->deferred_skb);
+ 		lp->deferred_skb = NULL;
+ 		dev->stats.tx_errors++;
+ 	}
+diff --git a/drivers/net/fddi/defxx.c b/drivers/net/fddi/defxx.c
+index c7ce6d5491afc..442bdc6e8dc4f 100644
+--- a/drivers/net/fddi/defxx.c
++++ b/drivers/net/fddi/defxx.c
+@@ -3844,10 +3844,24 @@ static int dfx_init(void)
+ 	int status;
+ 
+ 	status = pci_register_driver(&dfx_pci_driver);
+-	if (!status)
+-		status = eisa_driver_register(&dfx_eisa_driver);
+-	if (!status)
+-		status = tc_register_driver(&dfx_tc_driver);
++	if (status)
++		goto err_pci_register;
++
++	status = eisa_driver_register(&dfx_eisa_driver);
++	if (status)
++		goto err_eisa_register;
++
++	status = tc_register_driver(&dfx_tc_driver);
++	if (status)
++		goto err_tc_register;
++
++	return 0;
++
++err_tc_register:
++	eisa_driver_unregister(&dfx_eisa_driver);
++err_eisa_register:
++	pci_unregister_driver(&dfx_pci_driver);
++err_pci_register:
+ 	return status;
+ }
+ 
+diff --git a/drivers/net/hamradio/baycom_epp.c b/drivers/net/hamradio/baycom_epp.c
+index e4e4981ac1d29..eea9d47157cfe 100644
+--- a/drivers/net/hamradio/baycom_epp.c
++++ b/drivers/net/hamradio/baycom_epp.c
+@@ -758,7 +758,7 @@ static void epp_bh(struct work_struct *work)
+  * ===================== network driver interface =========================
+  */
+ 
+-static int baycom_send_packet(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t baycom_send_packet(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct baycom_state *bc = netdev_priv(dev);
+ 
+diff --git a/drivers/net/hamradio/scc.c b/drivers/net/hamradio/scc.c
+index 36eeb80406f2b..eeb6c47d81678 100644
+--- a/drivers/net/hamradio/scc.c
++++ b/drivers/net/hamradio/scc.c
+@@ -300,12 +300,12 @@ static inline void scc_discard_buffers(struct scc_channel *scc)
+ 	spin_lock_irqsave(&scc->lock, flags);	
+ 	if (scc->tx_buff != NULL)
+ 	{
+-		dev_kfree_skb(scc->tx_buff);
++		dev_kfree_skb_irq(scc->tx_buff);
+ 		scc->tx_buff = NULL;
+ 	}
+ 	
+ 	while (!skb_queue_empty(&scc->tx_queue))
+-		dev_kfree_skb(skb_dequeue(&scc->tx_queue));
++		dev_kfree_skb_irq(skb_dequeue(&scc->tx_queue));
+ 
+ 	spin_unlock_irqrestore(&scc->lock, flags);
+ }
+@@ -1667,7 +1667,7 @@ static netdev_tx_t scc_net_tx(struct sk_buff *skb, struct net_device *dev)
+ 	if (skb_queue_len(&scc->tx_queue) > scc->dev->tx_queue_len) {
+ 		struct sk_buff *skb_del;
+ 		skb_del = skb_dequeue(&scc->tx_queue);
+-		dev_kfree_skb(skb_del);
++		dev_kfree_skb_irq(skb_del);
+ 	}
+ 	skb_queue_tail(&scc->tx_queue, skb);
+ 	netif_trans_update(dev);
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index eb029456b5946..4fdb970e34823 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -2584,7 +2584,7 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
+ 	const struct macsec_ops *ops;
+ 	struct macsec_context ctx;
+ 	struct macsec_dev *macsec;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (!attrs[MACSEC_ATTR_IFINDEX])
+ 		return -EINVAL;
+@@ -2597,28 +2597,36 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
+ 					macsec_genl_offload_policy, NULL))
+ 		return -EINVAL;
+ 
++	rtnl_lock();
++
+ 	dev = get_dev_from_nl(genl_info_net(info), attrs);
+-	if (IS_ERR(dev))
+-		return PTR_ERR(dev);
++	if (IS_ERR(dev)) {
++		ret = PTR_ERR(dev);
++		goto out;
++	}
+ 	macsec = macsec_priv(dev);
+ 
+-	if (!tb_offload[MACSEC_OFFLOAD_ATTR_TYPE])
+-		return -EINVAL;
++	if (!tb_offload[MACSEC_OFFLOAD_ATTR_TYPE]) {
++		ret = -EINVAL;
++		goto out;
++	}
+ 
+ 	offload = nla_get_u8(tb_offload[MACSEC_OFFLOAD_ATTR_TYPE]);
+ 	if (macsec->offload == offload)
+-		return 0;
++		goto out;
+ 
+ 	/* Check if the offloading mode is supported by the underlying layers */
+ 	if (offload != MACSEC_OFFLOAD_OFF &&
+-	    !macsec_check_offload(offload, macsec))
+-		return -EOPNOTSUPP;
++	    !macsec_check_offload(offload, macsec)) {
++		ret = -EOPNOTSUPP;
++		goto out;
++	}
+ 
+ 	/* Check if the net device is busy. */
+-	if (netif_running(dev))
+-		return -EBUSY;
+-
+-	rtnl_lock();
++	if (netif_running(dev)) {
++		ret = -EBUSY;
++		goto out;
++	}
+ 
+ 	prev_offload = macsec->offload;
+ 	macsec->offload = offload;
+@@ -2653,7 +2661,7 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
+ 
+ rollback:
+ 	macsec->offload = prev_offload;
+-
++out:
+ 	rtnl_unlock();
+ 	return ret;
+ }
+diff --git a/drivers/net/ntb_netdev.c b/drivers/net/ntb_netdev.c
+index 1b7d588ff3c5c..b701ee83e64a8 100644
+--- a/drivers/net/ntb_netdev.c
++++ b/drivers/net/ntb_netdev.c
+@@ -137,7 +137,7 @@ static void ntb_netdev_rx_handler(struct ntb_transport_qp *qp, void *qp_data,
+ enqueue_again:
+ 	rc = ntb_transport_rx_enqueue(qp, skb, skb->data, ndev->mtu + ETH_HLEN);
+ 	if (rc) {
+-		dev_kfree_skb(skb);
++		dev_kfree_skb_any(skb);
+ 		ndev->stats.rx_errors++;
+ 		ndev->stats.rx_fifo_errors++;
+ 	}
+@@ -192,7 +192,7 @@ static void ntb_netdev_tx_handler(struct ntb_transport_qp *qp, void *qp_data,
+ 		ndev->stats.tx_aborted_errors++;
+ 	}
+ 
+-	dev_kfree_skb(skb);
++	dev_kfree_skb_any(skb);
+ 
+ 	if (ntb_transport_tx_free_entry(dev->qp) >= tx_start) {
+ 		/* Make sure anybody stopping the queue after this sees the new
+diff --git a/drivers/net/phy/xilinx_gmii2rgmii.c b/drivers/net/phy/xilinx_gmii2rgmii.c
+index 151c2a3f0b3a3..7a78dfdfa5bdd 100644
+--- a/drivers/net/phy/xilinx_gmii2rgmii.c
++++ b/drivers/net/phy/xilinx_gmii2rgmii.c
+@@ -82,6 +82,7 @@ static int xgmiitorgmii_probe(struct mdio_device *mdiodev)
+ 
+ 	if (!priv->phy_dev->drv) {
+ 		dev_info(dev, "Attached phy not ready\n");
++		put_device(&priv->phy_dev->mdio.dev);
+ 		return -EPROBE_DEFER;
+ 	}
+ 
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index 2b9815ec4a622..b825c6a9b6dde 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -1610,6 +1610,8 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
+ 	int len;
+ 	unsigned char *cp;
+ 
++	skb->dev = ppp->dev;
++
+ 	if (proto < 0x8000) {
+ #ifdef CONFIG_PPP_FILTER
+ 		/* check if we should pass this packet */
+diff --git a/drivers/net/usb/rndis_host.c b/drivers/net/usb/rndis_host.c
+index 1505fe3f87ed3..1ff723e15d523 100644
+--- a/drivers/net/usb/rndis_host.c
++++ b/drivers/net/usb/rndis_host.c
+@@ -255,7 +255,8 @@ static int rndis_query(struct usbnet *dev, struct usb_interface *intf,
+ 
+ 	off = le32_to_cpu(u.get_c->offset);
+ 	len = le32_to_cpu(u.get_c->len);
+-	if (unlikely((8 + off + len) > CONTROL_BUFFER_SIZE))
++	if (unlikely((off > CONTROL_BUFFER_SIZE - 8) ||
++		     (len > CONTROL_BUFFER_SIZE - 8 - off)))
+ 		goto response_error;
+ 
+ 	if (*reply_len != -1 && len != *reply_len)
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index 5be8ed9105535..5aa23a036ed36 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -849,6 +849,9 @@ static int veth_poll(struct napi_struct *napi, int budget)
+ 	xdp_set_return_frame_no_direct();
+ 	done = veth_xdp_rcv(rq, budget, &bq, &stats);
+ 
++	if (stats.xdp_redirect > 0)
++		xdp_do_flush();
++
+ 	if (done < budget && napi_complete_done(napi, done)) {
+ 		/* Write rx_notify_masked before reading ptr_ring */
+ 		smp_store_mb(rq->rx_notify_masked, false);
+@@ -862,8 +865,6 @@ static int veth_poll(struct napi_struct *napi, int budget)
+ 
+ 	if (stats.xdp_tx > 0)
+ 		veth_xdp_flush(rq, &bq);
+-	if (stats.xdp_redirect > 0)
+-		xdp_do_flush();
+ 	xdp_clear_return_frame_no_direct();
+ 
+ 	return done;
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index 43a4bcdd92c1d..3b889fed98826 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -1236,6 +1236,10 @@ vmxnet3_rx_csum(struct vmxnet3_adapter *adapter,
+ 		    (le32_to_cpu(gdesc->dword[3]) &
+ 		     VMXNET3_RCD_CSUM_OK) == VMXNET3_RCD_CSUM_OK) {
+ 			skb->ip_summed = CHECKSUM_UNNECESSARY;
++			if ((le32_to_cpu(gdesc->dword[0]) &
++				     (1UL << VMXNET3_RCD_HDR_INNER_SHIFT))) {
++				skb->csum_level = 1;
++			}
+ 			WARN_ON_ONCE(!(gdesc->rcd.tcp || gdesc->rcd.udp) &&
+ 				     !(le32_to_cpu(gdesc->dword[0]) &
+ 				     (1UL << VMXNET3_RCD_HDR_INNER_SHIFT)));
+@@ -1245,6 +1249,10 @@ vmxnet3_rx_csum(struct vmxnet3_adapter *adapter,
+ 		} else if (gdesc->rcd.v6 && (le32_to_cpu(gdesc->dword[3]) &
+ 					     (1 << VMXNET3_RCD_TUC_SHIFT))) {
+ 			skb->ip_summed = CHECKSUM_UNNECESSARY;
++			if ((le32_to_cpu(gdesc->dword[0]) &
++				     (1UL << VMXNET3_RCD_HDR_INNER_SHIFT))) {
++				skb->csum_level = 1;
++			}
+ 			WARN_ON_ONCE(!(gdesc->rcd.tcp || gdesc->rcd.udp) &&
+ 				     !(le32_to_cpu(gdesc->dword[0]) &
+ 				     (1UL << VMXNET3_RCD_HDR_INNER_SHIFT)));
+diff --git a/drivers/net/wan/farsync.c b/drivers/net/wan/farsync.c
+index b50cf11d197d5..36a6958b6a0b2 100644
+--- a/drivers/net/wan/farsync.c
++++ b/drivers/net/wan/farsync.c
+@@ -2612,6 +2612,7 @@ fst_remove_one(struct pci_dev *pdev)
+ 	for (i = 0; i < card->nports; i++) {
+ 		struct net_device *dev = port_to_dev(&card->ports[i]);
+ 		unregister_hdlc_device(dev);
++		free_netdev(dev);
+ 	}
+ 
+ 	fst_disable_intr(card);
+@@ -2632,6 +2633,7 @@ fst_remove_one(struct pci_dev *pdev)
+ 				  card->tx_dma_handle_card);
+ 	}
+ 	fst_card_array[card->card_no] = NULL;
++	kfree(card);
+ }
+ 
+ static struct pci_driver fst_driver = {
+diff --git a/drivers/net/wireless/ath/ar5523/ar5523.c b/drivers/net/wireless/ath/ar5523/ar5523.c
+index 1baec4b412c8d..efe38b2c1df73 100644
+--- a/drivers/net/wireless/ath/ar5523/ar5523.c
++++ b/drivers/net/wireless/ath/ar5523/ar5523.c
+@@ -241,6 +241,11 @@ static void ar5523_cmd_tx_cb(struct urb *urb)
+ 	}
+ }
+ 
++static void ar5523_cancel_tx_cmd(struct ar5523 *ar)
++{
++	usb_kill_urb(ar->tx_cmd.urb_tx);
++}
++
+ static int ar5523_cmd(struct ar5523 *ar, u32 code, const void *idata,
+ 		      int ilen, void *odata, int olen, int flags)
+ {
+@@ -280,6 +285,7 @@ static int ar5523_cmd(struct ar5523 *ar, u32 code, const void *idata,
+ 	}
+ 
+ 	if (!wait_for_completion_timeout(&cmd->done, 2 * HZ)) {
++		ar5523_cancel_tx_cmd(ar);
+ 		cmd->odata = NULL;
+ 		ar5523_err(ar, "timeout waiting for command %02x reply\n",
+ 			   code);
+diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
+index 86f52bcb3e4db..67e240327fb31 100644
+--- a/drivers/net/wireless/ath/ath10k/pci.c
++++ b/drivers/net/wireless/ath/ath10k/pci.c
+@@ -3799,18 +3799,22 @@ static struct pci_driver ath10k_pci_driver = {
+ 
+ static int __init ath10k_pci_init(void)
+ {
+-	int ret;
++	int ret1, ret2;
+ 
+-	ret = pci_register_driver(&ath10k_pci_driver);
+-	if (ret)
++	ret1 = pci_register_driver(&ath10k_pci_driver);
++	if (ret1)
+ 		printk(KERN_ERR "failed to register ath10k pci driver: %d\n",
+-		       ret);
++		       ret1);
+ 
+-	ret = ath10k_ahb_init();
+-	if (ret)
+-		printk(KERN_ERR "ahb init failed: %d\n", ret);
++	ret2 = ath10k_ahb_init();
++	if (ret2)
++		printk(KERN_ERR "ahb init failed: %d\n", ret2);
+ 
+-	return ret;
++	if (ret1 && ret2)
++		return ret1;
++
++	/* registered to at least one bus */
++	return 0;
+ }
+ module_init(ath10k_pci_init);
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index f06eec99de688..f938ac1a4abd4 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -709,14 +709,13 @@ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ 	struct rx_buf *rx_buf = (struct rx_buf *)urb->context;
+ 	struct hif_device_usb *hif_dev = rx_buf->hif_dev;
+ 	struct sk_buff *skb = rx_buf->skb;
+-	struct sk_buff *nskb;
+ 	int ret;
+ 
+ 	if (!skb)
+ 		return;
+ 
+ 	if (!hif_dev)
+-		goto free;
++		goto free_skb;
+ 
+ 	switch (urb->status) {
+ 	case 0:
+@@ -725,7 +724,7 @@ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ 	case -ECONNRESET:
+ 	case -ENODEV:
+ 	case -ESHUTDOWN:
+-		goto free;
++		goto free_skb;
+ 	default:
+ 		skb_reset_tail_pointer(skb);
+ 		skb_trim(skb, 0);
+@@ -736,25 +735,27 @@ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ 	if (likely(urb->actual_length != 0)) {
+ 		skb_put(skb, urb->actual_length);
+ 
+-		/* Process the command first */
++		/*
++		 * Process the command first.
++		 * skb is either freed here or passed to be
++		 * managed to another callback function.
++		 */
+ 		ath9k_htc_rx_msg(hif_dev->htc_handle, skb,
+ 				 skb->len, USB_REG_IN_PIPE);
+ 
+-
+-		nskb = alloc_skb(MAX_REG_IN_BUF_SIZE, GFP_ATOMIC);
+-		if (!nskb) {
++		skb = alloc_skb(MAX_REG_IN_BUF_SIZE, GFP_ATOMIC);
++		if (!skb) {
+ 			dev_err(&hif_dev->udev->dev,
+ 				"ath9k_htc: REG_IN memory allocation failure\n");
+-			urb->context = NULL;
+-			return;
++			goto free_rx_buf;
+ 		}
+ 
+-		rx_buf->skb = nskb;
++		rx_buf->skb = skb;
+ 
+ 		usb_fill_int_urb(urb, hif_dev->udev,
+ 				 usb_rcvintpipe(hif_dev->udev,
+ 						 USB_REG_IN_PIPE),
+-				 nskb->data, MAX_REG_IN_BUF_SIZE,
++				 skb->data, MAX_REG_IN_BUF_SIZE,
+ 				 ath9k_hif_usb_reg_in_cb, rx_buf, 1);
+ 	}
+ 
+@@ -763,12 +764,13 @@ resubmit:
+ 	ret = usb_submit_urb(urb, GFP_ATOMIC);
+ 	if (ret) {
+ 		usb_unanchor_urb(urb);
+-		goto free;
++		goto free_skb;
+ 	}
+ 
+ 	return;
+-free:
++free_skb:
+ 	kfree_skb(skb);
++free_rx_buf:
+ 	kfree(rx_buf);
+ 	urb->context = NULL;
+ }
+@@ -781,14 +783,10 @@ static void ath9k_hif_usb_dealloc_tx_urbs(struct hif_device_usb *hif_dev)
+ 	spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
+ 	list_for_each_entry_safe(tx_buf, tx_buf_tmp,
+ 				 &hif_dev->tx.tx_buf, list) {
+-		usb_get_urb(tx_buf->urb);
+-		spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+-		usb_kill_urb(tx_buf->urb);
+ 		list_del(&tx_buf->list);
+ 		usb_free_urb(tx_buf->urb);
+ 		kfree(tx_buf->buf);
+ 		kfree(tx_buf);
+-		spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
+ 	}
+ 	spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+ 
+@@ -1330,10 +1328,24 @@ static int send_eject_command(struct usb_interface *interface)
+ static int ath9k_hif_usb_probe(struct usb_interface *interface,
+ 			       const struct usb_device_id *id)
+ {
++	struct usb_endpoint_descriptor *bulk_in, *bulk_out, *int_in, *int_out;
+ 	struct usb_device *udev = interface_to_usbdev(interface);
++	struct usb_host_interface *alt;
+ 	struct hif_device_usb *hif_dev;
+ 	int ret = 0;
+ 
++	/* Verify the expected endpoints are present */
++	alt = interface->cur_altsetting;
++	if (usb_find_common_endpoints(alt, &bulk_in, &bulk_out, &int_in, &int_out) < 0 ||
++	    usb_endpoint_num(bulk_in) != USB_WLAN_RX_PIPE ||
++	    usb_endpoint_num(bulk_out) != USB_WLAN_TX_PIPE ||
++	    usb_endpoint_num(int_in) != USB_REG_IN_PIPE ||
++	    usb_endpoint_num(int_out) != USB_REG_OUT_PIPE) {
++		dev_err(&udev->dev,
++			"ath9k_htc: Device endpoint numbers are not the expected ones\n");
++		return -ENODEV;
++	}
++
+ 	if (id->driver_info == STORAGE_DEVICE)
+ 		return send_eject_command(interface);
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+index a2b8d9171af2a..060889bf6d053 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+@@ -703,6 +703,11 @@ brcmf_fw_alloc_request(u32 chip, u32 chiprev,
+ 	u32 i, j;
+ 	char end = '\0';
+ 
++	if (chiprev >= BITS_PER_TYPE(u32)) {
++		brcmf_err("Invalid chip revision %u\n", chiprev);
++		return NULL;
++	}
++
+ 	for (i = 0; i < table_size; i++) {
+ 		if (mapping_table[i].chipid == chip &&
+ 		    mapping_table[i].revmask & BIT(chiprev))
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index 61febc9bfa14a..6a5621f17bf58 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -618,7 +618,7 @@ static int brcmf_pcie_exit_download_state(struct brcmf_pciedev_info *devinfo,
+ 	}
+ 
+ 	if (!brcmf_chip_set_active(devinfo->ci, resetintr))
+-		return -EINVAL;
++		return -EIO;
+ 	return 0;
+ }
+ 
+@@ -1109,6 +1109,10 @@ static int brcmf_pcie_init_ringbuffers(struct brcmf_pciedev_info *devinfo)
+ 				BRCMF_NROF_H2D_COMMON_MSGRINGS;
+ 		max_completionrings = BRCMF_NROF_D2H_COMMON_MSGRINGS;
+ 	}
++	if (max_flowrings > 256) {
++		brcmf_err(bus, "invalid max_flowrings(%d)\n", max_flowrings);
++		return -EIO;
++	}
+ 
+ 	if (devinfo->dma_idx_sz != 0) {
+ 		bufsz = (max_submissionrings + max_completionrings) *
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 9929e90866f04..3c0d5c68eaca2 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -3401,6 +3401,7 @@ static int brcmf_sdio_download_firmware(struct brcmf_sdio *bus,
+ 	/* Take arm out of reset */
+ 	if (!brcmf_chip_set_active(bus->ci, rstvec)) {
+ 		brcmf_err("error getting out of ARM core reset\n");
++		bcmerror = -EIO;
+ 		goto err;
+ 	}
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 7186e1dbbd6b5..d310337b16251 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1203,6 +1203,7 @@ int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 	struct sk_buff_head mpdus_skbs;
+ 	unsigned int payload_len;
+ 	int ret;
++	struct sk_buff *orig_skb = skb;
+ 
+ 	if (WARN_ON_ONCE(!mvmsta))
+ 		return -1;
+@@ -1235,8 +1236,17 @@ int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 
+ 		ret = iwl_mvm_tx_mpdu(mvm, skb, &info, sta);
+ 		if (ret) {
++			/* Free skbs created as part of TSO logic that have not yet been dequeued */
+ 			__skb_queue_purge(&mpdus_skbs);
+-			return ret;
++			/* skb here is not necessarily same as skb that entered this method,
++			 * so free it explicitly.
++			 */
++			if (skb == orig_skb)
++				ieee80211_free_txskb(mvm->hw, skb);
++			else
++				kfree_skb(skb);
++			/* there was error, but we consumed skb one way or another, so return 0 */
++			return 0;
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index a5be66de1cff1..5a8060790a61f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -884,8 +884,9 @@ static inline bool mt76_is_skb_pktid(u8 pktid)
+ static inline u8 mt76_tx_power_nss_delta(u8 nss)
+ {
+ 	static const u8 nss_delta[4] = { 0, 6, 9, 12 };
++	u8 idx = nss - 1;
+ 
+-	return nss_delta[nss - 1];
++	return (idx < ARRAY_SIZE(nss_delta)) ? nss_delta[idx] : 0;
+ }
+ 
+ static inline bool mt76_testmode_enabled(struct mt76_dev *dev)
+diff --git a/drivers/net/wireless/microchip/wilc1000/sdio.c b/drivers/net/wireless/microchip/wilc1000/sdio.c
+index e14b9fc2c67ac..58d92311c7820 100644
+--- a/drivers/net/wireless/microchip/wilc1000/sdio.c
++++ b/drivers/net/wireless/microchip/wilc1000/sdio.c
+@@ -20,6 +20,7 @@ static const struct sdio_device_id wilc_sdio_ids[] = {
+ 	{ SDIO_DEVICE(SDIO_VENDOR_ID_MICROCHIP_WILC, SDIO_DEVICE_ID_MICROCHIP_WILC1000) },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(sdio, wilc_sdio_ids);
+ 
+ #define WILC_SDIO_BLOCK_SIZE 512
+ 
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+index b28fa0c4d180c..0ed4d67308d78 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+@@ -1190,7 +1190,7 @@ struct rtl8723bu_c2h {
+ 			u8 bw;
+ 		} __packed ra_report;
+ 	};
+-};
++} __packed;
+ 
+ struct rtl8xxxu_fileops;
+ 
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index e34cd6fed7e88..9a12f1d38007b 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -1607,18 +1607,18 @@ static void rtl8xxxu_print_chipinfo(struct rtl8xxxu_priv *priv)
+ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
+ {
+ 	struct device *dev = &priv->udev->dev;
+-	u32 val32, bonding;
++	u32 val32, bonding, sys_cfg;
+ 	u16 val16;
+ 
+-	val32 = rtl8xxxu_read32(priv, REG_SYS_CFG);
+-	priv->chip_cut = (val32 & SYS_CFG_CHIP_VERSION_MASK) >>
++	sys_cfg = rtl8xxxu_read32(priv, REG_SYS_CFG);
++	priv->chip_cut = (sys_cfg & SYS_CFG_CHIP_VERSION_MASK) >>
+ 		SYS_CFG_CHIP_VERSION_SHIFT;
+-	if (val32 & SYS_CFG_TRP_VAUX_EN) {
++	if (sys_cfg & SYS_CFG_TRP_VAUX_EN) {
+ 		dev_info(dev, "Unsupported test chip\n");
+ 		return -ENOTSUPP;
+ 	}
+ 
+-	if (val32 & SYS_CFG_BT_FUNC) {
++	if (sys_cfg & SYS_CFG_BT_FUNC) {
+ 		if (priv->chip_cut >= 3) {
+ 			sprintf(priv->chip_name, "8723BU");
+ 			priv->rtl_chip = RTL8723B;
+@@ -1640,7 +1640,7 @@ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
+ 		if (val32 & MULTI_GPS_FUNC_EN)
+ 			priv->has_gps = 1;
+ 		priv->is_multi_func = 1;
+-	} else if (val32 & SYS_CFG_TYPE_ID) {
++	} else if (sys_cfg & SYS_CFG_TYPE_ID) {
+ 		bonding = rtl8xxxu_read32(priv, REG_HPON_FSM);
+ 		bonding &= HPON_FSM_BONDING_MASK;
+ 		if (priv->fops->tx_desc_size ==
+@@ -1688,7 +1688,7 @@ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
+ 	case RTL8188E:
+ 	case RTL8192E:
+ 	case RTL8723B:
+-		switch (val32 & SYS_CFG_VENDOR_EXT_MASK) {
++		switch (sys_cfg & SYS_CFG_VENDOR_EXT_MASK) {
+ 		case SYS_CFG_VENDOR_ID_TSMC:
+ 			sprintf(priv->chip_vendor, "TSMC");
+ 			break;
+@@ -1705,7 +1705,7 @@ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
+ 		}
+ 		break;
+ 	default:
+-		if (val32 & SYS_CFG_VENDOR_ID) {
++		if (sys_cfg & SYS_CFG_VENDOR_ID) {
+ 			sprintf(priv->chip_vendor, "UMC");
+ 			priv->vendor_umc = 1;
+ 		} else {
+@@ -5520,7 +5520,6 @@ static void rtl8xxxu_c2hcmd_callback(struct work_struct *work)
+ 			rarpt->txrate.flags = 0;
+ 			rate = c2h->ra_report.rate;
+ 			sgi = c2h->ra_report.sgi;
+-			bw = c2h->ra_report.bw;
+ 
+ 			if (rate < DESC_RATE_MCS0) {
+ 				rarpt->txrate.legacy =
+@@ -5537,8 +5536,13 @@ static void rtl8xxxu_c2hcmd_callback(struct work_struct *work)
+ 						RATE_INFO_FLAGS_SHORT_GI;
+ 				}
+ 
+-				if (bw == RATE_INFO_BW_20)
+-					rarpt->txrate.bw |= RATE_INFO_BW_20;
++				if (skb->len >= offsetofend(typeof(*c2h), ra_report.bw)) {
++					if (c2h->ra_report.bw == RTL8XXXU_CHANNEL_WIDTH_40)
++						bw = RATE_INFO_BW_40;
++					else
++						bw = RATE_INFO_BW_20;
++					rarpt->txrate.bw = bw;
++				}
+ 			}
+ 			bit_rate = cfg80211_calculate_bitrate(&rarpt->txrate);
+ 			rarpt->bit_rate = bit_rate;
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
+index e34d33e73e525..d3027f8fbd383 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
+@@ -2385,14 +2385,10 @@ void rtl92d_phy_reload_iqk_setting(struct ieee80211_hw *hw, u8 channel)
+ 			rtl_dbg(rtlpriv, COMP_SCAN, DBG_LOUD,
+ 				"Just Read IQK Matrix reg for channel:%d....\n",
+ 				channel);
+-			if ((rtlphy->iqk_matrix[indexforchannel].
+-			     value[0] != NULL)
+-				/*&&(regea4 != 0) */)
++			if (rtlphy->iqk_matrix[indexforchannel].value[0][0] != 0)
+ 				_rtl92d_phy_patha_fill_iqk_matrix(hw, true,
+-					rtlphy->iqk_matrix[
+-					indexforchannel].value,	0,
+-					(rtlphy->iqk_matrix[
+-					indexforchannel].value[0][2] == 0));
++					rtlphy->iqk_matrix[indexforchannel].value, 0,
++					rtlphy->iqk_matrix[indexforchannel].value[0][2] == 0);
+ 			if (IS_92D_SINGLEPHY(rtlhal->version)) {
+ 				if ((rtlphy->iqk_matrix[
+ 					indexforchannel].value[0][4] != 0)
+diff --git a/drivers/net/wireless/rsi/rsi_91x_core.c b/drivers/net/wireless/rsi/rsi_91x_core.c
+index 9c4c585572488..b7fa03813da11 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_core.c
++++ b/drivers/net/wireless/rsi/rsi_91x_core.c
+@@ -466,7 +466,9 @@ void rsi_core_xmit(struct rsi_common *common, struct sk_buff *skb)
+ 							      tid, 0);
+ 			}
+ 		}
+-		if (skb->protocol == cpu_to_be16(ETH_P_PAE)) {
++
++		if (IEEE80211_SKB_CB(skb)->control.flags &
++		    IEEE80211_TX_CTRL_PORT_CTRL_PROTO) {
+ 			q_num = MGMT_SOFT_Q;
+ 			skb->priority = q_num;
+ 		}
+diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
+index dca81a4bbdd7f..30d2eccbcadd5 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
+@@ -162,12 +162,16 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
+ 	u8 header_size;
+ 	u8 vap_id = 0;
+ 	u8 dword_align_bytes;
++	bool tx_eapol;
+ 	u16 seq_num;
+ 
+ 	info = IEEE80211_SKB_CB(skb);
+ 	vif = info->control.vif;
+ 	tx_params = (struct skb_info *)info->driver_data;
+ 
++	tx_eapol = IEEE80211_SKB_CB(skb)->control.flags &
++		   IEEE80211_TX_CTRL_PORT_CTRL_PROTO;
++
+ 	header_size = FRAME_DESC_SZ + sizeof(struct rsi_xtended_desc);
+ 	if (header_size > skb_headroom(skb)) {
+ 		rsi_dbg(ERR_ZONE, "%s: Unable to send pkt\n", __func__);
+@@ -231,7 +235,7 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
+ 		}
+ 	}
+ 
+-	if (skb->protocol == cpu_to_be16(ETH_P_PAE)) {
++	if (tx_eapol) {
+ 		rsi_dbg(INFO_ZONE, "*** Tx EAPOL ***\n");
+ 
+ 		data_desc->frame_info = cpu_to_le16(RATE_INFO_ENABLE);
+diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
+index 8d7e29d953b7e..87e1296c68381 100644
+--- a/drivers/nfc/pn533/pn533.c
++++ b/drivers/nfc/pn533/pn533.c
+@@ -1319,6 +1319,8 @@ static int pn533_poll_dep_complete(struct pn533 *dev, void *arg,
+ 	if (IS_ERR(resp))
+ 		return PTR_ERR(resp);
+ 
++	memset(&nfc_target, 0, sizeof(struct nfc_target));
++
+ 	rsp = (struct pn533_cmd_jump_dep_response *)resp->data;
+ 
+ 	rc = rsp->status & PN533_CMD_RET_MASK;
+@@ -1960,6 +1962,8 @@ static int pn533_in_dep_link_up_complete(struct pn533 *dev, void *arg,
+ 
+ 		dev_dbg(dev->dev, "Creating new target\n");
+ 
++		memset(&nfc_target, 0, sizeof(struct nfc_target));
++
+ 		nfc_target.supported_protocols = NFC_PROTO_NFC_DEP_MASK;
+ 		nfc_target.nfcid1_len = 10;
+ 		memcpy(nfc_target.nfcid1, rsp->nfcid3t, nfc_target.nfcid1_len);
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 86336496c65ce..c3e4d9b6f9c0d 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -749,7 +749,7 @@ static inline void nvme_trace_bio_complete(struct request *req,
+ {
+ 	struct nvme_ns *ns = req->q->queuedata;
+ 
+-	if (req->cmd_flags & REQ_NVME_MPATH)
++	if ((req->cmd_flags & REQ_NVME_MPATH) && req->bio)
+ 		trace_block_bio_complete(ns->head->disk->queue, req->bio);
+ }
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index c222d7bf6ce19..67dd68462b817 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -33,7 +33,7 @@
+ #define SQ_SIZE(q)	((q)->q_depth << (q)->sqes)
+ #define CQ_SIZE(q)	((q)->q_depth * sizeof(struct nvme_completion))
+ 
+-#define SGES_PER_PAGE	(PAGE_SIZE / sizeof(struct nvme_sgl_desc))
++#define SGES_PER_PAGE	(NVME_CTRL_PAGE_SIZE / sizeof(struct nvme_sgl_desc))
+ 
+ /*
+  * These can be higher, but we need to ensure that any command doesn't
+@@ -139,9 +139,9 @@ struct nvme_dev {
+ 	mempool_t *iod_mempool;
+ 
+ 	/* shadow doorbell buffer support: */
+-	u32 *dbbuf_dbs;
++	__le32 *dbbuf_dbs;
+ 	dma_addr_t dbbuf_dbs_dma_addr;
+-	u32 *dbbuf_eis;
++	__le32 *dbbuf_eis;
+ 	dma_addr_t dbbuf_eis_dma_addr;
+ 
+ 	/* host memory buffer support: */
+@@ -209,10 +209,10 @@ struct nvme_queue {
+ #define NVMEQ_SQ_CMB		1
+ #define NVMEQ_DELETE_ERROR	2
+ #define NVMEQ_POLLED		3
+-	u32 *dbbuf_sq_db;
+-	u32 *dbbuf_cq_db;
+-	u32 *dbbuf_sq_ei;
+-	u32 *dbbuf_cq_ei;
++	__le32 *dbbuf_sq_db;
++	__le32 *dbbuf_cq_db;
++	__le32 *dbbuf_sq_ei;
++	__le32 *dbbuf_cq_ei;
+ 	struct completion delete_done;
+ };
+ 
+@@ -334,11 +334,11 @@ static inline int nvme_dbbuf_need_event(u16 event_idx, u16 new_idx, u16 old)
+ }
+ 
+ /* Update dbbuf and return true if an MMIO is required */
+-static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
+-					      volatile u32 *dbbuf_ei)
++static bool nvme_dbbuf_update_and_check_event(u16 value, __le32 *dbbuf_db,
++					      volatile __le32 *dbbuf_ei)
+ {
+ 	if (dbbuf_db) {
+-		u16 old_value;
++		u16 old_value, event_idx;
+ 
+ 		/*
+ 		 * Ensure that the queue is written before updating
+@@ -346,8 +346,8 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
+ 		 */
+ 		wmb();
+ 
+-		old_value = *dbbuf_db;
+-		*dbbuf_db = value;
++		old_value = le32_to_cpu(*dbbuf_db);
++		*dbbuf_db = cpu_to_le32(value);
+ 
+ 		/*
+ 		 * Ensure that the doorbell is updated before reading the event
+@@ -357,7 +357,8 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
+ 		 */
+ 		mb();
+ 
+-		if (!nvme_dbbuf_need_event(*dbbuf_ei, value, old_value))
++		event_idx = le32_to_cpu(*dbbuf_ei);
++		if (!nvme_dbbuf_need_event(event_idx, value, old_value))
+ 			return false;
+ 	}
+ 
+@@ -371,9 +372,9 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
+  */
+ static int nvme_pci_npages_prp(void)
+ {
+-	unsigned nprps = DIV_ROUND_UP(NVME_MAX_KB_SZ + NVME_CTRL_PAGE_SIZE,
+-				      NVME_CTRL_PAGE_SIZE);
+-	return DIV_ROUND_UP(8 * nprps, PAGE_SIZE - 8);
++	unsigned max_bytes = (NVME_MAX_KB_SZ * 1024) + NVME_CTRL_PAGE_SIZE;
++	unsigned nprps = DIV_ROUND_UP(max_bytes, NVME_CTRL_PAGE_SIZE);
++	return DIV_ROUND_UP(8 * nprps, NVME_CTRL_PAGE_SIZE - 8);
+ }
+ 
+ /*
+@@ -383,7 +384,7 @@ static int nvme_pci_npages_prp(void)
+ static int nvme_pci_npages_sgl(void)
+ {
+ 	return DIV_ROUND_UP(NVME_MAX_SEGS * sizeof(struct nvme_sgl_desc),
+-			PAGE_SIZE);
++			NVME_CTRL_PAGE_SIZE);
+ }
+ 
+ static size_t nvme_pci_iod_alloc_size(void)
+@@ -734,7 +735,7 @@ static void nvme_pci_sgl_set_seg(struct nvme_sgl_desc *sge,
+ 		sge->length = cpu_to_le32(entries * sizeof(*sge));
+ 		sge->type = NVME_SGL_FMT_LAST_SEG_DESC << 4;
+ 	} else {
+-		sge->length = cpu_to_le32(PAGE_SIZE);
++		sge->length = cpu_to_le32(NVME_CTRL_PAGE_SIZE);
+ 		sge->type = NVME_SGL_FMT_SEG_DESC << 4;
+ 	}
+ }
+diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
+index d24251ece5023..f76d01028df0b 100644
+--- a/drivers/nvme/target/passthru.c
++++ b/drivers/nvme/target/passthru.c
+@@ -259,14 +259,13 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
+ 	}
+ 
+ 	/*
+-	 * If there are effects for the command we are about to execute, or
+-	 * an end_req function we need to use nvme_execute_passthru_rq()
+-	 * synchronously in a work item seeing the end_req function and
+-	 * nvme_passthru_end() can't be called in the request done callback
+-	 * which is typically in interrupt context.
++	 * If a command needs post-execution fixups, or there are any
++	 * non-trivial effects, make sure to execute the command synchronously
++	 * in a workqueue so that nvme_passthru_end gets called.
+ 	 */
+ 	effects = nvme_command_effects(ctrl, ns, req->cmd->common.opcode);
+-	if (req->p.use_workqueue || effects) {
++	if (req->p.use_workqueue ||
++	    (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC))) {
+ 		INIT_WORK(&req->p.work, nvmet_passthru_execute_cmd_work);
+ 		req->p.rq = rq;
+ 		schedule_work(&req->p.work);
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index c8a0c0e9dec1c..67b404f36e79c 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -547,7 +547,7 @@ static int find_dup_cset_node_entry(struct overlay_changeset *ovcs,
+ 
+ 		fn_1 = kasprintf(GFP_KERNEL, "%pOF", ce_1->np);
+ 		fn_2 = kasprintf(GFP_KERNEL, "%pOF", ce_2->np);
+-		node_path_match = !strcmp(fn_1, fn_2);
++		node_path_match = !fn_1 || !fn_2 || !strcmp(fn_1, fn_2);
+ 		kfree(fn_1);
+ 		kfree(fn_2);
+ 		if (node_path_match) {
+@@ -582,7 +582,7 @@ static int find_dup_cset_prop(struct overlay_changeset *ovcs,
+ 
+ 		fn_1 = kasprintf(GFP_KERNEL, "%pOF", ce_1->np);
+ 		fn_2 = kasprintf(GFP_KERNEL, "%pOF", ce_2->np);
+-		node_path_match = !strcmp(fn_1, fn_2);
++		node_path_match = !fn_1 || !fn_2 || !strcmp(fn_1, fn_2);
+ 		kfree(fn_1);
+ 		kfree(fn_2);
+ 		if (node_path_match &&
+diff --git a/drivers/parisc/led.c b/drivers/parisc/led.c
+index 36c6613f7a36b..4854120fc0954 100644
+--- a/drivers/parisc/led.c
++++ b/drivers/parisc/led.c
+@@ -137,6 +137,9 @@ static int start_task(void)
+ 
+ 	/* Create the work queue and queue the LED task */
+ 	led_wq = create_singlethread_workqueue("led_wq");	
++	if (!led_wq)
++		return -ENOMEM;
++
+ 	queue_delayed_work(led_wq, &led_task, 0);
+ 
+ 	return 0;
+diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
+index 2b74ff88c5c56..28945351da14f 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.c
++++ b/drivers/pci/controller/dwc/pcie-designware.c
+@@ -589,7 +589,7 @@ void dw_pcie_setup(struct dw_pcie *pci)
+ 	if (pci->n_fts[1]) {
+ 		val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
+ 		val &= ~PORT_LOGIC_N_FTS_MASK;
+-		val |= pci->n_fts[pci->link_gen - 1];
++		val |= pci->n_fts[1];
+ 		dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
+ 	}
+ 
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index ddfeca9016a02..ef52f5097eb37 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -870,7 +870,7 @@ static int pci_epf_test_bind(struct pci_epf *epf)
+ 	if (ret)
+ 		epf_test->dma_supported = false;
+ 
+-	if (linkup_notifier) {
++	if (linkup_notifier || core_init_notifier) {
+ 		epf->nb.notifier_call = pci_epf_test_notifier;
+ 		pci_epc_register_notifier(epc, &epf->nb);
+ 	} else {
+diff --git a/drivers/pci/irq.c b/drivers/pci/irq.c
+index 12ecd0aaa28d6..0050e8f6814ed 100644
+--- a/drivers/pci/irq.c
++++ b/drivers/pci/irq.c
+@@ -44,6 +44,8 @@ int pci_request_irq(struct pci_dev *dev, unsigned int nr, irq_handler_t handler,
+ 	va_start(ap, fmt);
+ 	devname = kvasprintf(GFP_KERNEL, fmt, ap);
+ 	va_end(ap);
++	if (!devname)
++		return -ENOMEM;
+ 
+ 	ret = request_threaded_irq(pci_irq_vector(dev, nr), handler, thread_fn,
+ 				   irqflags, devname, dev_id);
+diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
+index d15c881e2e7e7..361de072dfdca 100644
+--- a/drivers/pci/pci-sysfs.c
++++ b/drivers/pci/pci-sysfs.c
+@@ -1141,11 +1141,9 @@ static int pci_create_attr(struct pci_dev *pdev, int num, int write_combine)
+ 
+ 	sysfs_bin_attr_init(res_attr);
+ 	if (write_combine) {
+-		pdev->res_attr_wc[num] = res_attr;
+ 		sprintf(res_attr_name, "resource%d_wc", num);
+ 		res_attr->mmap = pci_mmap_resource_wc;
+ 	} else {
+-		pdev->res_attr[num] = res_attr;
+ 		sprintf(res_attr_name, "resource%d", num);
+ 		if (pci_resource_flags(pdev, num) & IORESOURCE_IO) {
+ 			res_attr->read = pci_read_resource_io;
+@@ -1161,10 +1159,17 @@ static int pci_create_attr(struct pci_dev *pdev, int num, int write_combine)
+ 	res_attr->size = pci_resource_len(pdev, num);
+ 	res_attr->private = (void *)(unsigned long)num;
+ 	retval = sysfs_create_bin_file(&pdev->dev.kobj, res_attr);
+-	if (retval)
++	if (retval) {
+ 		kfree(res_attr);
++		return retval;
++	}
+ 
+-	return retval;
++	if (write_combine)
++		pdev->res_attr_wc[num] = res_attr;
++	else
++		pdev->res_attr[num] = res_attr;
++
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 1162734546481..262577c81d307 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -6152,6 +6152,8 @@ bool pci_device_is_present(struct pci_dev *pdev)
+ {
+ 	u32 v;
+ 
++	/* Check PF if pdev is a VF, since VF Vendor/Device IDs are 0xffff */
++	pdev = pci_physfn(pdev);
+ 	if (pci_dev_is_disconnected(pdev))
+ 		return false;
+ 	return pci_bus_read_dev_vendor_id(pdev->bus, pdev->devfn, &v, 0);
+diff --git a/drivers/perf/arm_dsu_pmu.c b/drivers/perf/arm_dsu_pmu.c
+index 98e68ed7db85f..1db8eccc9735c 100644
+--- a/drivers/perf/arm_dsu_pmu.c
++++ b/drivers/perf/arm_dsu_pmu.c
+@@ -866,7 +866,11 @@ static int __init dsu_pmu_init(void)
+ 	if (ret < 0)
+ 		return ret;
+ 	dsu_pmu_cpuhp_state = ret;
+-	return platform_driver_register(&dsu_pmu_driver);
++	ret = platform_driver_register(&dsu_pmu_driver);
++	if (ret)
++		cpuhp_remove_multi_state(dsu_pmu_cpuhp_state);
++
++	return ret;
+ }
+ 
+ static void __exit dsu_pmu_exit(void)
+diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
+index afa8efbdad8fa..f5a33dbe7acb9 100644
+--- a/drivers/perf/arm_smmuv3_pmu.c
++++ b/drivers/perf/arm_smmuv3_pmu.c
+@@ -870,6 +870,8 @@ static struct platform_driver smmu_pmu_driver = {
+ 
+ static int __init arm_smmu_pmu_init(void)
+ {
++	int ret;
++
+ 	cpuhp_state_num = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
+ 						  "perf/arm/pmcg:online",
+ 						  NULL,
+@@ -877,7 +879,11 @@ static int __init arm_smmu_pmu_init(void)
+ 	if (cpuhp_state_num < 0)
+ 		return cpuhp_state_num;
+ 
+-	return platform_driver_register(&smmu_pmu_driver);
++	ret = platform_driver_register(&smmu_pmu_driver);
++	if (ret)
++		cpuhp_remove_multi_state(cpuhp_state_num);
++
++	return ret;
+ }
+ module_init(arm_smmu_pmu_init);
+ 
+diff --git a/drivers/phy/broadcom/phy-brcm-usb.c b/drivers/phy/broadcom/phy-brcm-usb.c
+index b901a0d4e2a80..cd2240ea2c9a8 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb.c
++++ b/drivers/phy/broadcom/phy-brcm-usb.c
+@@ -101,9 +101,9 @@ static int brcm_pm_notifier(struct notifier_block *notifier,
+ 
+ static irqreturn_t brcm_usb_phy_wake_isr(int irq, void *dev_id)
+ {
+-	struct phy *gphy = dev_id;
++	struct device *dev = dev_id;
+ 
+-	pm_wakeup_event(&gphy->dev, 0);
++	pm_wakeup_event(dev, 0);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -437,7 +437,7 @@ static int brcm_usb_phy_dvr_init(struct platform_device *pdev,
+ 	if (priv->wake_irq >= 0) {
+ 		err = devm_request_irq(dev, priv->wake_irq,
+ 				       brcm_usb_phy_wake_isr, 0,
+-				       dev_name(dev), gphy);
++				       dev_name(dev), dev);
+ 		if (err < 0)
+ 			return err;
+ 		device_set_wakeup_capable(dev, 1);
+diff --git a/drivers/pinctrl/pinconf-generic.c b/drivers/pinctrl/pinconf-generic.c
+index 42e27dba62e26..762abb0dfebba 100644
+--- a/drivers/pinctrl/pinconf-generic.c
++++ b/drivers/pinctrl/pinconf-generic.c
+@@ -393,8 +393,10 @@ int pinconf_generic_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	for_each_available_child_of_node(np_config, np) {
+ 		ret = pinconf_generic_dt_subnode_to_map(pctldev, np, map,
+ 					&reserved_maps, num_maps, type);
+-		if (ret < 0)
++		if (ret < 0) {
++			of_node_put(np);
+ 			goto exit;
++		}
+ 	}
+ 	return 0;
+ 
+diff --git a/drivers/platform/chrome/cros_usbpd_notify.c b/drivers/platform/chrome/cros_usbpd_notify.c
+index 7f36142ab12a8..19390147ac9d6 100644
+--- a/drivers/platform/chrome/cros_usbpd_notify.c
++++ b/drivers/platform/chrome/cros_usbpd_notify.c
+@@ -284,7 +284,11 @@ static int __init cros_usbpd_notify_init(void)
+ 		return ret;
+ 
+ #ifdef CONFIG_ACPI
+-	platform_driver_register(&cros_usbpd_notify_acpi_driver);
++	ret = platform_driver_register(&cros_usbpd_notify_acpi_driver);
++	if (ret) {
++		platform_driver_unregister(&cros_usbpd_notify_plat_driver);
++		return ret;
++	}
+ #endif
+ 	return 0;
+ }
+diff --git a/drivers/platform/x86/huawei-wmi.c b/drivers/platform/x86/huawei-wmi.c
+index eac3e6b4ea113..935562c870c3d 100644
+--- a/drivers/platform/x86/huawei-wmi.c
++++ b/drivers/platform/x86/huawei-wmi.c
+@@ -760,6 +760,9 @@ static int huawei_wmi_input_setup(struct device *dev,
+ 		const char *guid,
+ 		struct input_dev **idev)
+ {
++	acpi_status status;
++	int err;
++
+ 	*idev = devm_input_allocate_device(dev);
+ 	if (!*idev)
+ 		return -ENOMEM;
+@@ -769,10 +772,19 @@ static int huawei_wmi_input_setup(struct device *dev,
+ 	(*idev)->id.bustype = BUS_HOST;
+ 	(*idev)->dev.parent = dev;
+ 
+-	return sparse_keymap_setup(*idev, huawei_wmi_keymap, NULL) ||
+-		input_register_device(*idev) ||
+-		wmi_install_notify_handler(guid, huawei_wmi_input_notify,
+-				*idev);
++	err = sparse_keymap_setup(*idev, huawei_wmi_keymap, NULL);
++	if (err)
++		return err;
++
++	err = input_register_device(*idev);
++	if (err)
++		return err;
++
++	status = wmi_install_notify_handler(guid, huawei_wmi_input_notify, *idev);
++	if (ACPI_FAILURE(status))
++		return -EIO;
++
++	return 0;
+ }
+ 
+ static void huawei_wmi_input_exit(struct device *dev, const char *guid)
+diff --git a/drivers/platform/x86/intel_scu_ipc.c b/drivers/platform/x86/intel_scu_ipc.c
+index 69d706039cb20..bdeb888c0fea4 100644
+--- a/drivers/platform/x86/intel_scu_ipc.c
++++ b/drivers/platform/x86/intel_scu_ipc.c
+@@ -583,7 +583,6 @@ __intel_scu_ipc_register(struct device *parent,
+ 	scu->dev.parent = parent;
+ 	scu->dev.class = &intel_scu_ipc_class;
+ 	scu->dev.release = intel_scu_ipc_release;
+-	dev_set_name(&scu->dev, "intel_scu_ipc");
+ 
+ 	if (!request_mem_region(scu_data->mem.start, resource_size(&scu_data->mem),
+ 				"intel_scu_ipc")) {
+@@ -612,6 +611,7 @@ __intel_scu_ipc_register(struct device *parent,
+ 	 * After this point intel_scu_ipc_release() takes care of
+ 	 * releasing the SCU IPC resources once refcount drops to zero.
+ 	 */
++	dev_set_name(&scu->dev, "intel_scu_ipc");
+ 	err = device_register(&scu->dev);
+ 	if (err) {
+ 		put_device(&scu->dev);
+diff --git a/drivers/platform/x86/mxm-wmi.c b/drivers/platform/x86/mxm-wmi.c
+index 9a19fbd2f7341..9a457956025a5 100644
+--- a/drivers/platform/x86/mxm-wmi.c
++++ b/drivers/platform/x86/mxm-wmi.c
+@@ -35,13 +35,11 @@ int mxm_wmi_call_mxds(int adapter)
+ 		.xarg = 1,
+ 	};
+ 	struct acpi_buffer input = { (acpi_size)sizeof(args), &args };
+-	struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
+ 	acpi_status status;
+ 
+ 	printk("calling mux switch %d\n", adapter);
+ 
+-	status = wmi_evaluate_method(MXM_WMMX_GUID, 0x0, adapter, &input,
+-				     &output);
++	status = wmi_evaluate_method(MXM_WMMX_GUID, 0x0, adapter, &input, NULL);
+ 
+ 	if (ACPI_FAILURE(status))
+ 		return status;
+@@ -60,13 +58,11 @@ int mxm_wmi_call_mxmx(int adapter)
+ 		.xarg = 1,
+ 	};
+ 	struct acpi_buffer input = { (acpi_size)sizeof(args), &args };
+-	struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
+ 	acpi_status status;
+ 
+ 	printk("calling mux switch %d\n", adapter);
+ 
+-	status = wmi_evaluate_method(MXM_WMMX_GUID, 0x0, adapter, &input,
+-				     &output);
++	status = wmi_evaluate_method(MXM_WMMX_GUID, 0x0, adapter, &input, NULL);
+ 
+ 	if (ACPI_FAILURE(status))
+ 		return status;
+diff --git a/drivers/pnp/core.c b/drivers/pnp/core.c
+index a50ab002e9e41..14bf75ba941d4 100644
+--- a/drivers/pnp/core.c
++++ b/drivers/pnp/core.c
+@@ -160,14 +160,14 @@ struct pnp_dev *pnp_alloc_dev(struct pnp_protocol *protocol, int id,
+ 	dev->dev.coherent_dma_mask = dev->dma_mask;
+ 	dev->dev.release = &pnp_release_device;
+ 
+-	dev_set_name(&dev->dev, "%02x:%02x", dev->protocol->number, dev->number);
+-
+ 	dev_id = pnp_add_id(dev, pnpid);
+ 	if (!dev_id) {
+ 		kfree(dev);
+ 		return NULL;
+ 	}
+ 
++	dev_set_name(&dev->dev, "%02x:%02x", dev->protocol->number, dev->number);
++
+ 	return dev;
+ }
+ 
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index 280c54c23e37e..2b644590fa8e0 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -677,6 +677,11 @@ int power_supply_get_battery_info(struct power_supply *psy,
+ 		int i, tab_len, size;
+ 
+ 		propname = kasprintf(GFP_KERNEL, "ocv-capacity-table-%d", index);
++		if (!propname) {
++			power_supply_put_battery_info(psy, info);
++			err = -ENOMEM;
++			goto out_put_node;
++		}
+ 		list = of_get_property(battery_np, propname, &size);
+ 		if (!list || !size) {
+ 			dev_err(&psy->dev, "failed to get %s\n", propname);
+@@ -1201,8 +1206,8 @@ create_triggers_failed:
+ register_cooler_failed:
+ 	psy_unregister_thermal(psy);
+ register_thermal_failed:
+-	device_del(dev);
+ wakeup_init_failed:
++	device_del(dev);
+ device_add_failed:
+ check_supplies_failed:
+ dev_set_name_failed:
+diff --git a/drivers/pwm/pwm-sifive.c b/drivers/pwm/pwm-sifive.c
+index 9cc0612f08498..12e9e23272ab1 100644
+--- a/drivers/pwm/pwm-sifive.c
++++ b/drivers/pwm/pwm-sifive.c
+@@ -217,8 +217,11 @@ static int pwm_sifive_clock_notifier(struct notifier_block *nb,
+ 	struct pwm_sifive_ddata *ddata =
+ 		container_of(nb, struct pwm_sifive_ddata, notifier);
+ 
+-	if (event == POST_RATE_CHANGE)
++	if (event == POST_RATE_CHANGE) {
++		mutex_lock(&ddata->lock);
+ 		pwm_sifive_update_clock(ddata, ndata->new_rate);
++		mutex_unlock(&ddata->lock);
++	}
+ 
+ 	return NOTIFY_OK;
+ }
+diff --git a/drivers/pwm/pwm-tegra.c b/drivers/pwm/pwm-tegra.c
+index 8c4e6657b61e7..f3528c56e8947 100644
+--- a/drivers/pwm/pwm-tegra.c
++++ b/drivers/pwm/pwm-tegra.c
+@@ -142,8 +142,8 @@ static int tegra_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 		 * source clock rate as required_clk_rate, PWM controller will
+ 		 * be able to configure the requested period.
+ 		 */
+-		required_clk_rate =
+-			(NSEC_PER_SEC / period_ns) << PWM_DUTY_WIDTH;
++		required_clk_rate = DIV_ROUND_UP_ULL((u64)NSEC_PER_SEC << PWM_DUTY_WIDTH,
++						     period_ns);
+ 
+ 		err = clk_set_rate(pc->clk, required_clk_rate);
+ 		if (err < 0)
+diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
+index 94331d999d273..5ac2dc1e2abd8 100644
+--- a/drivers/rapidio/devices/rio_mport_cdev.c
++++ b/drivers/rapidio/devices/rio_mport_cdev.c
+@@ -1803,8 +1803,11 @@ static int rio_mport_add_riodev(struct mport_cdev_priv *priv,
+ 		rio_init_dbell_res(&rdev->riores[RIO_DOORBELL_RESOURCE],
+ 				   0, 0xffff);
+ 	err = rio_add_device(rdev);
+-	if (err)
+-		goto cleanup;
++	if (err) {
++		put_device(&rdev->dev);
++		return err;
++	}
++
+ 	rio_dev_get(rdev);
+ 
+ 	return 0;
+@@ -1900,10 +1903,6 @@ static int mport_cdev_open(struct inode *inode, struct file *filp)
+ 
+ 	priv->md = chdev;
+ 
+-	mutex_lock(&chdev->file_mutex);
+-	list_add_tail(&priv->list, &chdev->file_list);
+-	mutex_unlock(&chdev->file_mutex);
+-
+ 	INIT_LIST_HEAD(&priv->db_filters);
+ 	INIT_LIST_HEAD(&priv->pw_filters);
+ 	spin_lock_init(&priv->fifo_lock);
+@@ -1912,6 +1911,7 @@ static int mport_cdev_open(struct inode *inode, struct file *filp)
+ 			  sizeof(struct rio_event) * MPORT_EVENT_DEPTH,
+ 			  GFP_KERNEL);
+ 	if (ret < 0) {
++		put_device(&chdev->dev);
+ 		dev_err(&chdev->dev, DRV_NAME ": kfifo_alloc failed\n");
+ 		ret = -ENOMEM;
+ 		goto err_fifo;
+@@ -1922,6 +1922,9 @@ static int mport_cdev_open(struct inode *inode, struct file *filp)
+ 	spin_lock_init(&priv->req_lock);
+ 	mutex_init(&priv->dma_lock);
+ #endif
++	mutex_lock(&chdev->file_mutex);
++	list_add_tail(&priv->list, &chdev->file_list);
++	mutex_unlock(&chdev->file_mutex);
+ 
+ 	filp->private_data = priv;
+ 	goto out;
+diff --git a/drivers/rapidio/rio-scan.c b/drivers/rapidio/rio-scan.c
+index 19b0c33f4a62a..fdcf742b2adbc 100644
+--- a/drivers/rapidio/rio-scan.c
++++ b/drivers/rapidio/rio-scan.c
+@@ -454,8 +454,12 @@ static struct rio_dev *rio_setup_device(struct rio_net *net,
+ 				   0, 0xffff);
+ 
+ 	ret = rio_add_device(rdev);
+-	if (ret)
+-		goto cleanup;
++	if (ret) {
++		if (rswitch)
++			kfree(rswitch->route_table);
++		put_device(&rdev->dev);
++		return NULL;
++	}
+ 
+ 	rio_dev_get(rdev);
+ 
+diff --git a/drivers/rapidio/rio.c b/drivers/rapidio/rio.c
+index 606986c5ba2c9..fcab174e58888 100644
+--- a/drivers/rapidio/rio.c
++++ b/drivers/rapidio/rio.c
+@@ -2267,11 +2267,16 @@ int rio_register_mport(struct rio_mport *port)
+ 	atomic_set(&port->state, RIO_DEVICE_RUNNING);
+ 
+ 	res = device_register(&port->dev);
+-	if (res)
++	if (res) {
+ 		dev_err(&port->dev, "RIO: mport%d registration failed ERR=%d\n",
+ 			port->id, res);
+-	else
++		mutex_lock(&rio_mport_list_lock);
++		list_del(&port->node);
++		mutex_unlock(&rio_mport_list_lock);
++		put_device(&port->dev);
++	} else {
+ 		dev_dbg(&port->dev, "RIO: registered mport%d\n", port->id);
++	}
+ 
+ 	return res;
+ }
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index eb083b26ab4f6..6dd698b2d0af3 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -980,7 +980,7 @@ static int drms_uA_update(struct regulator_dev *rdev)
+ 		/* get input voltage */
+ 		input_uV = 0;
+ 		if (rdev->supply)
+-			input_uV = regulator_get_voltage(rdev->supply);
++			input_uV = regulator_get_voltage_rdev(rdev->supply->rdev);
+ 		if (input_uV <= 0)
+ 			input_uV = rdev->constraints->input_uV;
+ 		if (input_uV <= 0) {
+@@ -1428,7 +1428,13 @@ static int set_machine_constraints(struct regulator_dev *rdev)
+ 		if (rdev->supply_name && !rdev->supply)
+ 			return -EPROBE_DEFER;
+ 
+-		if (rdev->supply) {
++		/* If supplying regulator has already been enabled,
++		 * it's not intended to have use_count increment
++		 * when rdev is only boot-on.
++		 */
++		if (rdev->supply &&
++		    (rdev->constraints->always_on ||
++		     !regulator_is_enabled(rdev->supply))) {
+ 			ret = regulator_enable(rdev->supply);
+ 			if (ret < 0) {
+ 				_regulator_put(rdev->supply);
+@@ -1472,6 +1478,7 @@ static int set_supply(struct regulator_dev *rdev,
+ 
+ 	rdev->supply = create_regulator(supply_rdev, &rdev->dev, "SUPPLY");
+ 	if (rdev->supply == NULL) {
++		module_put(supply_rdev->owner);
+ 		err = -ENOMEM;
+ 		return err;
+ 	}
+@@ -1645,7 +1652,7 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ 
+ 	regulator = kzalloc(sizeof(*regulator), GFP_KERNEL);
+ 	if (regulator == NULL) {
+-		kfree(supply_name);
++		kfree_const(supply_name);
+ 		return NULL;
+ 	}
+ 
+@@ -1775,6 +1782,7 @@ static struct regulator_dev *regulator_dev_lookup(struct device *dev,
+ 		node = of_get_regulator(dev, supply);
+ 		if (node) {
+ 			r = of_find_regulator_by_node(node);
++			of_node_put(node);
+ 			if (r)
+ 				return r;
+ 
+@@ -5398,6 +5406,7 @@ unset_supplies:
+ 	regulator_remove_coupling(rdev);
+ 	mutex_unlock(&regulator_list_mutex);
+ wash:
++	regulator_put(rdev->supply);
+ 	kfree(rdev->coupling_desc.coupled_rdevs);
+ 	mutex_lock(&regulator_list_mutex);
+ 	regulator_ena_gpio_free(rdev);
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index 0678b417707ef..1a0d6eb9425bb 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -365,6 +365,7 @@ static int adsp_alloc_memory_region(struct qcom_adsp *adsp)
+ 	}
+ 
+ 	ret = of_address_to_resource(node, 0, &r);
++	of_node_put(node);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -472,6 +473,7 @@ detach_proxy_pds:
+ detach_active_pds:
+ 	adsp_pds_detach(adsp, adsp->active_pds, adsp->active_pd_count);
+ free_rproc:
++	device_init_wakeup(adsp->dev, false);
+ 	rproc_free(rproc);
+ 
+ 	return ret;
+@@ -487,6 +489,8 @@ static int adsp_remove(struct platform_device *pdev)
+ 	qcom_remove_sysmon_subdev(adsp->sysmon);
+ 	qcom_remove_smd_subdev(adsp->rproc, &adsp->smd_subdev);
+ 	qcom_remove_ssr_subdev(adsp->rproc, &adsp->ssr_subdev);
++	adsp_pds_detach(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
++	device_init_wakeup(adsp->dev, false);
+ 	rproc_free(adsp->rproc);
+ 
+ 	return 0;
+diff --git a/drivers/remoteproc/qcom_sysmon.c b/drivers/remoteproc/qcom_sysmon.c
+index a26221a6f6c22..c348ea35e47c3 100644
+--- a/drivers/remoteproc/qcom_sysmon.c
++++ b/drivers/remoteproc/qcom_sysmon.c
+@@ -625,7 +625,9 @@ struct qcom_sysmon *qcom_add_sysmon_subdev(struct rproc *rproc,
+ 		if (sysmon->shutdown_irq != -ENODATA) {
+ 			dev_err(sysmon->dev,
+ 				"failed to retrieve shutdown-ack IRQ\n");
+-			return ERR_PTR(sysmon->shutdown_irq);
++			ret = sysmon->shutdown_irq;
++			kfree(sysmon);
++			return ERR_PTR(ret);
+ 		}
+ 	} else {
+ 		ret = devm_request_threaded_irq(sysmon->dev,
+@@ -636,6 +638,7 @@ struct qcom_sysmon *qcom_add_sysmon_subdev(struct rproc *rproc,
+ 		if (ret) {
+ 			dev_err(sysmon->dev,
+ 				"failed to acquire shutdown-ack IRQ\n");
++			kfree(sysmon);
+ 			return ERR_PTR(ret);
+ 		}
+ 	}
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index 369a97f3eca99..cc55ff0128cf2 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -1741,12 +1741,18 @@ static void rproc_crash_handler_work(struct work_struct *work)
+ 
+ 	mutex_lock(&rproc->lock);
+ 
+-	if (rproc->state == RPROC_CRASHED || rproc->state == RPROC_OFFLINE) {
++	if (rproc->state == RPROC_CRASHED) {
+ 		/* handle only the first crash detected */
+ 		mutex_unlock(&rproc->lock);
+ 		return;
+ 	}
+ 
++	if (rproc->state == RPROC_OFFLINE) {
++		/* Don't recover if the remote processor was stopped */
++		mutex_unlock(&rproc->lock);
++		goto out;
++	}
++
+ 	rproc->state = RPROC_CRASHED;
+ 	dev_err(dev, "handling crash #%u in %s\n", ++rproc->crash_cnt,
+ 		rproc->name);
+@@ -1756,6 +1762,7 @@ static void rproc_crash_handler_work(struct work_struct *work)
+ 	if (!rproc->recovery_disabled)
+ 		rproc_trigger_recovery(rproc);
+ 
++out:
+ 	pm_relax(rproc->dev.parent);
+ }
+ 
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index d4f6c4dd42c47..7f560937bf7cb 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -750,6 +750,168 @@ static irqreturn_t cmos_interrupt(int irq, void *p)
+ 		return IRQ_NONE;
+ }
+ 
++#ifdef	CONFIG_ACPI
++
++#include <linux/acpi.h>
++
++static u32 rtc_handler(void *context)
++{
++	struct device *dev = context;
++	struct cmos_rtc *cmos = dev_get_drvdata(dev);
++	unsigned char rtc_control = 0;
++	unsigned char rtc_intr;
++	unsigned long flags;
++
++
++	/*
++	 * Always update rtc irq when ACPI is used as RTC Alarm.
++	 * Or else, ACPI SCI is enabled during suspend/resume only,
++	 * update rtc irq in that case.
++	 */
++	if (cmos_use_acpi_alarm())
++		cmos_interrupt(0, (void *)cmos->rtc);
++	else {
++		/* Fix me: can we use cmos_interrupt() here as well? */
++		spin_lock_irqsave(&rtc_lock, flags);
++		if (cmos_rtc.suspend_ctrl)
++			rtc_control = CMOS_READ(RTC_CONTROL);
++		if (rtc_control & RTC_AIE) {
++			cmos_rtc.suspend_ctrl &= ~RTC_AIE;
++			CMOS_WRITE(rtc_control, RTC_CONTROL);
++			rtc_intr = CMOS_READ(RTC_INTR_FLAGS);
++			rtc_update_irq(cmos->rtc, 1, rtc_intr);
++		}
++		spin_unlock_irqrestore(&rtc_lock, flags);
++	}
++
++	pm_wakeup_hard_event(dev);
++	acpi_clear_event(ACPI_EVENT_RTC);
++	acpi_disable_event(ACPI_EVENT_RTC, 0);
++	return ACPI_INTERRUPT_HANDLED;
++}
++
++static void acpi_rtc_event_setup(struct device *dev)
++{
++	if (acpi_disabled)
++		return;
++
++	acpi_install_fixed_event_handler(ACPI_EVENT_RTC, rtc_handler, dev);
++	/*
++	 * After the RTC handler is installed, the Fixed_RTC event should
++	 * be disabled. Only when the RTC alarm is set will it be enabled.
++	 */
++	acpi_clear_event(ACPI_EVENT_RTC);
++	acpi_disable_event(ACPI_EVENT_RTC, 0);
++}
++
++static void acpi_rtc_event_cleanup(void)
++{
++	if (acpi_disabled)
++		return;
++
++	acpi_remove_fixed_event_handler(ACPI_EVENT_RTC, rtc_handler);
++}
++
++static void rtc_wake_on(struct device *dev)
++{
++	acpi_clear_event(ACPI_EVENT_RTC);
++	acpi_enable_event(ACPI_EVENT_RTC, 0);
++}
++
++static void rtc_wake_off(struct device *dev)
++{
++	acpi_disable_event(ACPI_EVENT_RTC, 0);
++}
++
++#ifdef CONFIG_X86
++/* Enable use_acpi_alarm mode for Intel platforms no earlier than 2015 */
++static void use_acpi_alarm_quirks(void)
++{
++	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
++		return;
++
++	if (!is_hpet_enabled())
++		return;
++
++	if (dmi_get_bios_year() < 2015)
++		return;
++
++	use_acpi_alarm = true;
++}
++#else
++static inline void use_acpi_alarm_quirks(void) { }
++#endif
++
++static void acpi_cmos_wake_setup(struct device *dev)
++{
++	if (acpi_disabled)
++		return;
++
++	use_acpi_alarm_quirks();
++
++	cmos_rtc.wake_on = rtc_wake_on;
++	cmos_rtc.wake_off = rtc_wake_off;
++
++	/* ACPI tables bug workaround. */
++	if (acpi_gbl_FADT.month_alarm && !acpi_gbl_FADT.day_alarm) {
++		dev_dbg(dev, "bogus FADT month_alarm (%d)\n",
++			acpi_gbl_FADT.month_alarm);
++		acpi_gbl_FADT.month_alarm = 0;
++	}
++
++	cmos_rtc.day_alrm = acpi_gbl_FADT.day_alarm;
++	cmos_rtc.mon_alrm = acpi_gbl_FADT.month_alarm;
++	cmos_rtc.century = acpi_gbl_FADT.century;
++
++	if (acpi_gbl_FADT.flags & ACPI_FADT_S4_RTC_WAKE)
++		dev_info(dev, "RTC can wake from S4\n");
++
++	/* RTC always wakes from S1/S2/S3, and often S4/STD */
++	device_init_wakeup(dev, 1);
++}
++
++static void cmos_check_acpi_rtc_status(struct device *dev,
++					      unsigned char *rtc_control)
++{
++	struct cmos_rtc *cmos = dev_get_drvdata(dev);
++	acpi_event_status rtc_status;
++	acpi_status status;
++
++	if (acpi_gbl_FADT.flags & ACPI_FADT_FIXED_RTC)
++		return;
++
++	status = acpi_get_event_status(ACPI_EVENT_RTC, &rtc_status);
++	if (ACPI_FAILURE(status)) {
++		dev_err(dev, "Could not get RTC status\n");
++	} else if (rtc_status & ACPI_EVENT_FLAG_SET) {
++		unsigned char mask;
++		*rtc_control &= ~RTC_AIE;
++		CMOS_WRITE(*rtc_control, RTC_CONTROL);
++		mask = CMOS_READ(RTC_INTR_FLAGS);
++		rtc_update_irq(cmos->rtc, 1, mask);
++	}
++}
++
++#else /* !CONFIG_ACPI */
++
++static inline void acpi_rtc_event_setup(struct device *dev)
++{
++}
++
++static inline void acpi_rtc_event_cleanup(void)
++{
++}
++
++static inline void acpi_cmos_wake_setup(struct device *dev)
++{
++}
++
++static inline void cmos_check_acpi_rtc_status(struct device *dev,
++					      unsigned char *rtc_control)
++{
++}
++#endif /* CONFIG_ACPI */
++
+ #ifdef	CONFIG_PNP
+ #define	INITSECTION
+ 
+@@ -833,19 +995,27 @@ cmos_do_probe(struct device *dev, struct resource *ports, int rtc_irq)
+ 		if (info->address_space)
+ 			address_space = info->address_space;
+ 
+-		if (info->rtc_day_alarm && info->rtc_day_alarm < 128)
+-			cmos_rtc.day_alrm = info->rtc_day_alarm;
+-		if (info->rtc_mon_alarm && info->rtc_mon_alarm < 128)
+-			cmos_rtc.mon_alrm = info->rtc_mon_alarm;
+-		if (info->rtc_century && info->rtc_century < 128)
+-			cmos_rtc.century = info->rtc_century;
++		cmos_rtc.day_alrm = info->rtc_day_alarm;
++		cmos_rtc.mon_alrm = info->rtc_mon_alarm;
++		cmos_rtc.century = info->rtc_century;
+ 
+ 		if (info->wake_on && info->wake_off) {
+ 			cmos_rtc.wake_on = info->wake_on;
+ 			cmos_rtc.wake_off = info->wake_off;
+ 		}
++	} else {
++		acpi_cmos_wake_setup(dev);
+ 	}
+ 
++	if (cmos_rtc.day_alrm >= 128)
++		cmos_rtc.day_alrm = 0;
++
++	if (cmos_rtc.mon_alrm >= 128)
++		cmos_rtc.mon_alrm = 0;
++
++	if (cmos_rtc.century >= 128)
++		cmos_rtc.century = 0;
++
+ 	cmos_rtc.dev = dev;
+ 	dev_set_drvdata(dev, &cmos_rtc);
+ 
+@@ -933,6 +1103,13 @@ cmos_do_probe(struct device *dev, struct resource *ports, int rtc_irq)
+ 	if (rtc_nvmem_register(cmos_rtc.rtc, &nvmem_cfg))
+ 		dev_err(dev, "nvmem registration failed\n");
+ 
++	/*
++	 * Everything has gone well so far, so by default register a handler for
++	 * the ACPI RTC fixed event.
++	 */
++	if (!info)
++		acpi_rtc_event_setup(dev);
++
+ 	dev_info(dev, "%s%s, %d bytes nvram%s\n",
+ 		 !is_valid_irq(rtc_irq) ? "no alarms" :
+ 		 cmos_rtc.mon_alrm ? "alarms up to one year" :
+@@ -978,6 +1155,9 @@ static void cmos_do_remove(struct device *dev)
+ 			hpet_unregister_irq_handler(cmos_interrupt);
+ 	}
+ 
++	if (!dev_get_platdata(dev))
++		acpi_rtc_event_cleanup();
++
+ 	cmos->rtc = NULL;
+ 
+ 	ports = cmos->iomem;
+@@ -1127,9 +1307,6 @@ static void cmos_check_wkalrm(struct device *dev)
+ 	}
+ }
+ 
+-static void cmos_check_acpi_rtc_status(struct device *dev,
+-				       unsigned char *rtc_control);
+-
+ static int __maybe_unused cmos_resume(struct device *dev)
+ {
+ 	struct cmos_rtc	*cmos = dev_get_drvdata(dev);
+@@ -1196,174 +1373,16 @@ static SIMPLE_DEV_PM_OPS(cmos_pm_ops, cmos_suspend, cmos_resume);
+  * predate even PNPBIOS should set up platform_bus devices.
+  */
+ 
+-#ifdef	CONFIG_ACPI
+-
+-#include <linux/acpi.h>
+-
+-static u32 rtc_handler(void *context)
+-{
+-	struct device *dev = context;
+-	struct cmos_rtc *cmos = dev_get_drvdata(dev);
+-	unsigned char rtc_control = 0;
+-	unsigned char rtc_intr;
+-	unsigned long flags;
+-
+-
+-	/*
+-	 * Always update rtc irq when ACPI is used as RTC Alarm.
+-	 * Or else, ACPI SCI is enabled during suspend/resume only,
+-	 * update rtc irq in that case.
+-	 */
+-	if (cmos_use_acpi_alarm())
+-		cmos_interrupt(0, (void *)cmos->rtc);
+-	else {
+-		/* Fix me: can we use cmos_interrupt() here as well? */
+-		spin_lock_irqsave(&rtc_lock, flags);
+-		if (cmos_rtc.suspend_ctrl)
+-			rtc_control = CMOS_READ(RTC_CONTROL);
+-		if (rtc_control & RTC_AIE) {
+-			cmos_rtc.suspend_ctrl &= ~RTC_AIE;
+-			CMOS_WRITE(rtc_control, RTC_CONTROL);
+-			rtc_intr = CMOS_READ(RTC_INTR_FLAGS);
+-			rtc_update_irq(cmos->rtc, 1, rtc_intr);
+-		}
+-		spin_unlock_irqrestore(&rtc_lock, flags);
+-	}
+-
+-	pm_wakeup_hard_event(dev);
+-	acpi_clear_event(ACPI_EVENT_RTC);
+-	acpi_disable_event(ACPI_EVENT_RTC, 0);
+-	return ACPI_INTERRUPT_HANDLED;
+-}
+-
+-static inline void rtc_wake_setup(struct device *dev)
+-{
+-	acpi_install_fixed_event_handler(ACPI_EVENT_RTC, rtc_handler, dev);
+-	/*
+-	 * After the RTC handler is installed, the Fixed_RTC event should
+-	 * be disabled. Only when the RTC alarm is set will it be enabled.
+-	 */
+-	acpi_clear_event(ACPI_EVENT_RTC);
+-	acpi_disable_event(ACPI_EVENT_RTC, 0);
+-}
+-
+-static void rtc_wake_on(struct device *dev)
+-{
+-	acpi_clear_event(ACPI_EVENT_RTC);
+-	acpi_enable_event(ACPI_EVENT_RTC, 0);
+-}
+-
+-static void rtc_wake_off(struct device *dev)
+-{
+-	acpi_disable_event(ACPI_EVENT_RTC, 0);
+-}
+-
+-#ifdef CONFIG_X86
+-/* Enable use_acpi_alarm mode for Intel platforms no earlier than 2015 */
+-static void use_acpi_alarm_quirks(void)
+-{
+-	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+-		return;
+-
+-	if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
+-		return;
+-
+-	if (!is_hpet_enabled())
+-		return;
+-
+-	if (dmi_get_bios_year() < 2015)
+-		return;
+-
+-	use_acpi_alarm = true;
+-}
+-#else
+-static inline void use_acpi_alarm_quirks(void) { }
+-#endif
+-
+-/* Every ACPI platform has a mc146818 compatible "cmos rtc".  Here we find
+- * its device node and pass extra config data.  This helps its driver use
+- * capabilities that the now-obsolete mc146818 didn't have, and informs it
+- * that this board's RTC is wakeup-capable (per ACPI spec).
+- */
+-static struct cmos_rtc_board_info acpi_rtc_info;
+-
+-static void cmos_wake_setup(struct device *dev)
+-{
+-	if (acpi_disabled)
+-		return;
+-
+-	use_acpi_alarm_quirks();
+-
+-	rtc_wake_setup(dev);
+-	acpi_rtc_info.wake_on = rtc_wake_on;
+-	acpi_rtc_info.wake_off = rtc_wake_off;
+-
+-	/* workaround bug in some ACPI tables */
+-	if (acpi_gbl_FADT.month_alarm && !acpi_gbl_FADT.day_alarm) {
+-		dev_dbg(dev, "bogus FADT month_alarm (%d)\n",
+-			acpi_gbl_FADT.month_alarm);
+-		acpi_gbl_FADT.month_alarm = 0;
+-	}
+-
+-	acpi_rtc_info.rtc_day_alarm = acpi_gbl_FADT.day_alarm;
+-	acpi_rtc_info.rtc_mon_alarm = acpi_gbl_FADT.month_alarm;
+-	acpi_rtc_info.rtc_century = acpi_gbl_FADT.century;
+-
+-	/* NOTE:  S4_RTC_WAKE is NOT currently useful to Linux */
+-	if (acpi_gbl_FADT.flags & ACPI_FADT_S4_RTC_WAKE)
+-		dev_info(dev, "RTC can wake from S4\n");
+-
+-	dev->platform_data = &acpi_rtc_info;
+-
+-	/* RTC always wakes from S1/S2/S3, and often S4/STD */
+-	device_init_wakeup(dev, 1);
+-}
+-
+-static void cmos_check_acpi_rtc_status(struct device *dev,
+-				       unsigned char *rtc_control)
+-{
+-	struct cmos_rtc *cmos = dev_get_drvdata(dev);
+-	acpi_event_status rtc_status;
+-	acpi_status status;
+-
+-	if (acpi_gbl_FADT.flags & ACPI_FADT_FIXED_RTC)
+-		return;
+-
+-	status = acpi_get_event_status(ACPI_EVENT_RTC, &rtc_status);
+-	if (ACPI_FAILURE(status)) {
+-		dev_err(dev, "Could not get RTC status\n");
+-	} else if (rtc_status & ACPI_EVENT_FLAG_SET) {
+-		unsigned char mask;
+-		*rtc_control &= ~RTC_AIE;
+-		CMOS_WRITE(*rtc_control, RTC_CONTROL);
+-		mask = CMOS_READ(RTC_INTR_FLAGS);
+-		rtc_update_irq(cmos->rtc, 1, mask);
+-	}
+-}
+-
+-#else
+-
+-static void cmos_wake_setup(struct device *dev)
+-{
+-}
+-
+-static void cmos_check_acpi_rtc_status(struct device *dev,
+-				       unsigned char *rtc_control)
+-{
+-}
+-
+-#endif
+-
+ #ifdef	CONFIG_PNP
+ 
+ #include <linux/pnp.h>
+ 
+ static int cmos_pnp_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
+ {
+-	cmos_wake_setup(&pnp->dev);
++	int irq;
+ 
+ 	if (pnp_port_start(pnp, 0) == 0x70 && !pnp_irq_valid(pnp, 0)) {
+-		unsigned int irq = 0;
++		irq = 0;
+ #ifdef CONFIG_X86
+ 		/* Some machines contain a PNP entry for the RTC, but
+ 		 * don't define the IRQ. It should always be safe to
+@@ -1372,13 +1391,11 @@ static int cmos_pnp_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
+ 		if (nr_legacy_irqs())
+ 			irq = RTC_IRQ;
+ #endif
+-		return cmos_do_probe(&pnp->dev,
+-				pnp_get_resource(pnp, IORESOURCE_IO, 0), irq);
+ 	} else {
+-		return cmos_do_probe(&pnp->dev,
+-				pnp_get_resource(pnp, IORESOURCE_IO, 0),
+-				pnp_irq(pnp, 0));
++		irq = pnp_irq(pnp, 0);
+ 	}
++
++	return cmos_do_probe(&pnp->dev, pnp_get_resource(pnp, IORESOURCE_IO, 0), irq);
+ }
+ 
+ static void cmos_pnp_remove(struct pnp_dev *pnp)
+@@ -1465,7 +1482,6 @@ static int __init cmos_platform_probe(struct platform_device *pdev)
+ 	int irq;
+ 
+ 	cmos_of_init(pdev);
+-	cmos_wake_setup(&pdev->dev);
+ 
+ 	if (RTC_IOMAPPED)
+ 		resource = platform_get_resource(pdev, IORESOURCE_IO, 0);
+diff --git a/drivers/rtc/rtc-ds1347.c b/drivers/rtc/rtc-ds1347.c
+index 7025cf3fb9f8d..03267ac657474 100644
+--- a/drivers/rtc/rtc-ds1347.c
++++ b/drivers/rtc/rtc-ds1347.c
+@@ -112,7 +112,7 @@ static int ds1347_set_time(struct device *dev, struct rtc_time *dt)
+ 		return err;
+ 
+ 	century = (dt->tm_year / 100) + 19;
+-	err = regmap_write(map, DS1347_CENTURY_REG, century);
++	err = regmap_write(map, DS1347_CENTURY_REG, bin2bcd(century));
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/rtc/rtc-mxc_v2.c b/drivers/rtc/rtc-mxc_v2.c
+index d349cef09cb7c..48595b00ebb39 100644
+--- a/drivers/rtc/rtc-mxc_v2.c
++++ b/drivers/rtc/rtc-mxc_v2.c
+@@ -337,8 +337,10 @@ static int mxc_rtc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	pdata->rtc = devm_rtc_allocate_device(&pdev->dev);
+-	if (IS_ERR(pdata->rtc))
++	if (IS_ERR(pdata->rtc)) {
++		clk_disable_unprepare(pdata->clk);
+ 		return PTR_ERR(pdata->rtc);
++	}
+ 
+ 	pdata->rtc->ops = &mxc_rtc_ops;
+ 	pdata->rtc->range_max = U32_MAX;
+diff --git a/drivers/rtc/rtc-pcf85063.c b/drivers/rtc/rtc-pcf85063.c
+index 62684ca3a665e..449204d84c61d 100644
+--- a/drivers/rtc/rtc-pcf85063.c
++++ b/drivers/rtc/rtc-pcf85063.c
+@@ -167,10 +167,10 @@ static int pcf85063_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm)
+ 	if (ret)
+ 		return ret;
+ 
+-	alrm->time.tm_sec = bcd2bin(buf[0]);
+-	alrm->time.tm_min = bcd2bin(buf[1]);
+-	alrm->time.tm_hour = bcd2bin(buf[2]);
+-	alrm->time.tm_mday = bcd2bin(buf[3]);
++	alrm->time.tm_sec = bcd2bin(buf[0] & 0x7f);
++	alrm->time.tm_min = bcd2bin(buf[1] & 0x7f);
++	alrm->time.tm_hour = bcd2bin(buf[2] & 0x3f);
++	alrm->time.tm_mday = bcd2bin(buf[3] & 0x3f);
+ 
+ 	ret = regmap_read(pcf85063->regmap, PCF85063_REG_CTRL2, &val);
+ 	if (ret)
+@@ -430,7 +430,7 @@ static int pcf85063_clkout_control(struct clk_hw *hw, bool enable)
+ 	unsigned int buf;
+ 	int ret;
+ 
+-	ret = regmap_read(pcf85063->regmap, PCF85063_REG_OFFSET, &buf);
++	ret = regmap_read(pcf85063->regmap, PCF85063_REG_CTRL2, &buf);
+ 	if (ret < 0)
+ 		return ret;
+ 	buf &= PCF85063_REG_CLKO_F_MASK;
+diff --git a/drivers/rtc/rtc-pic32.c b/drivers/rtc/rtc-pic32.c
+index 2b69467446548..7be1ca1633fcf 100644
+--- a/drivers/rtc/rtc-pic32.c
++++ b/drivers/rtc/rtc-pic32.c
+@@ -324,16 +324,16 @@ static int pic32_rtc_probe(struct platform_device *pdev)
+ 
+ 	spin_lock_init(&pdata->alarm_lock);
+ 
++	pdata->rtc = devm_rtc_allocate_device(&pdev->dev);
++	if (IS_ERR(pdata->rtc))
++		return PTR_ERR(pdata->rtc);
++
+ 	clk_prepare_enable(pdata->clk);
+ 
+ 	pic32_rtc_enable(pdata, 1);
+ 
+ 	device_init_wakeup(&pdev->dev, 1);
+ 
+-	pdata->rtc = devm_rtc_allocate_device(&pdev->dev);
+-	if (IS_ERR(pdata->rtc))
+-		return PTR_ERR(pdata->rtc);
+-
+ 	pdata->rtc->ops = &pic32_rtcops;
+ 	pdata->rtc->range_min = RTC_TIMESTAMP_BEGIN_2000;
+ 	pdata->rtc->range_max = RTC_TIMESTAMP_END_2099;
+diff --git a/drivers/rtc/rtc-snvs.c b/drivers/rtc/rtc-snvs.c
+index 0263d996b8a86..cc7f6c4216bc8 100644
+--- a/drivers/rtc/rtc-snvs.c
++++ b/drivers/rtc/rtc-snvs.c
+@@ -32,6 +32,14 @@
+ #define SNVS_LPPGDR_INIT	0x41736166
+ #define CNTR_TO_SECS_SH		15
+ 
++/* The maximum RTC clock cycles that are allowed to pass between two
++ * consecutive clock counter register reads. If the values are corrupted a
++ * bigger difference is expected. The RTC frequency is 32kHz. With 320 cycles
++ * we end at 10ms which should be enough for most cases. If it once takes
++ * longer than expected we do a retry.
++ */
++#define MAX_RTC_READ_DIFF_CYCLES	320
++
+ struct snvs_rtc_data {
+ 	struct rtc_device *rtc;
+ 	struct regmap *regmap;
+@@ -56,6 +64,7 @@ static u64 rtc_read_lpsrt(struct snvs_rtc_data *data)
+ static u32 rtc_read_lp_counter(struct snvs_rtc_data *data)
+ {
+ 	u64 read1, read2;
++	s64 diff;
+ 	unsigned int timeout = 100;
+ 
+ 	/* As expected, the registers might update between the read of the LSB
+@@ -66,7 +75,8 @@ static u32 rtc_read_lp_counter(struct snvs_rtc_data *data)
+ 	do {
+ 		read2 = read1;
+ 		read1 = rtc_read_lpsrt(data);
+-	} while (read1 != read2 && --timeout);
++		diff = read1 - read2;
++	} while (((diff < 0) || (diff > MAX_RTC_READ_DIFF_CYCLES)) && --timeout);
+ 	if (!timeout)
+ 		dev_err(&data->rtc->dev, "Timeout trying to get valid LPSRT Counter read\n");
+ 
+@@ -78,13 +88,15 @@ static u32 rtc_read_lp_counter(struct snvs_rtc_data *data)
+ static int rtc_read_lp_counter_lsb(struct snvs_rtc_data *data, u32 *lsb)
+ {
+ 	u32 count1, count2;
++	s32 diff;
+ 	unsigned int timeout = 100;
+ 
+ 	regmap_read(data->regmap, data->offset + SNVS_LPSRTCLR, &count1);
+ 	do {
+ 		count2 = count1;
+ 		regmap_read(data->regmap, data->offset + SNVS_LPSRTCLR, &count1);
+-	} while (count1 != count2 && --timeout);
++		diff = count1 - count2;
++	} while (((diff < 0) || (diff > MAX_RTC_READ_DIFF_CYCLES)) && --timeout);
+ 	if (!timeout) {
+ 		dev_err(&data->rtc->dev, "Timeout trying to get valid LPSRT Counter read\n");
+ 		return -ETIMEDOUT;
+diff --git a/drivers/rtc/rtc-st-lpc.c b/drivers/rtc/rtc-st-lpc.c
+index 0c65448b85eef..7d53f7e2febcc 100644
+--- a/drivers/rtc/rtc-st-lpc.c
++++ b/drivers/rtc/rtc-st-lpc.c
+@@ -238,6 +238,7 @@ static int st_rtc_probe(struct platform_device *pdev)
+ 
+ 	rtc->clkrate = clk_get_rate(rtc->clk);
+ 	if (!rtc->clkrate) {
++		clk_disable_unprepare(rtc->clk);
+ 		dev_err(&pdev->dev, "Unable to fetch clock rate\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/s390/net/ctcm_main.c b/drivers/s390/net/ctcm_main.c
+index d06809eac16d3..fb0e8f1cabdcb 100644
+--- a/drivers/s390/net/ctcm_main.c
++++ b/drivers/s390/net/ctcm_main.c
+@@ -865,16 +865,9 @@ done:
+ /**
+  * Start transmission of a packet.
+  * Called from generic network device layer.
+- *
+- *  skb		Pointer to buffer containing the packet.
+- *  dev		Pointer to interface struct.
+- *
+- * returns 0 if packet consumed, !0 if packet rejected.
+- *         Note: If we return !0, then the packet is free'd by
+- *               the generic network layer.
+  */
+ /* first merge version - leaving both functions separated */
+-static int ctcm_tx(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t ctcm_tx(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct ctcm_priv *priv = dev->ml_priv;
+ 
+@@ -917,7 +910,7 @@ static int ctcm_tx(struct sk_buff *skb, struct net_device *dev)
+ }
+ 
+ /* unmerged MPC variant of ctcm_tx */
+-static int ctcmpc_tx(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t ctcmpc_tx(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	int len = 0;
+ 	struct ctcm_priv *priv = dev->ml_priv;
+diff --git a/drivers/s390/net/lcs.c b/drivers/s390/net/lcs.c
+index 06a322bdced6d..7e743f4717a91 100644
+--- a/drivers/s390/net/lcs.c
++++ b/drivers/s390/net/lcs.c
+@@ -1518,9 +1518,8 @@ lcs_txbuffer_cb(struct lcs_channel *channel, struct lcs_buffer *buffer)
+ /**
+  * Packet transmit function called by network stack
+  */
+-static int
+-__lcs_start_xmit(struct lcs_card *card, struct sk_buff *skb,
+-		 struct net_device *dev)
++static netdev_tx_t __lcs_start_xmit(struct lcs_card *card, struct sk_buff *skb,
++				    struct net_device *dev)
+ {
+ 	struct lcs_header *header;
+ 	int rc = NETDEV_TX_OK;
+@@ -1581,8 +1580,7 @@ out:
+ 	return rc;
+ }
+ 
+-static int
+-lcs_start_xmit(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t lcs_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct lcs_card *card;
+ 	int rc;
+diff --git a/drivers/s390/net/netiucv.c b/drivers/s390/net/netiucv.c
+index 260860cf3aa18..a2f403c4ec382 100644
+--- a/drivers/s390/net/netiucv.c
++++ b/drivers/s390/net/netiucv.c
+@@ -1260,15 +1260,8 @@ static int netiucv_close(struct net_device *dev)
+ /**
+  * Start transmission of a packet.
+  * Called from generic network device layer.
+- *
+- * @param skb Pointer to buffer containing the packet.
+- * @param dev Pointer to interface struct.
+- *
+- * @return 0 if packet consumed, !0 if packet rejected.
+- *         Note: If we return !0, then the packet is free'd by
+- *               the generic network layer.
+  */
+-static int netiucv_tx(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t netiucv_tx(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct netiucv_priv *privptr = netdev_priv(dev);
+ 	int rc;
+diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
+index 0f9274960dc6b..30afcbbe1f862 100644
+--- a/drivers/scsi/fcoe/fcoe.c
++++ b/drivers/scsi/fcoe/fcoe.c
+@@ -2504,6 +2504,7 @@ static int __init fcoe_init(void)
+ 
+ out_free:
+ 	mutex_unlock(&fcoe_config_mutex);
++	fcoe_transport_detach(&fcoe_sw_transport);
+ out_destroy:
+ 	destroy_workqueue(fcoe_wq);
+ 	return rc;
+diff --git a/drivers/scsi/fcoe/fcoe_sysfs.c b/drivers/scsi/fcoe/fcoe_sysfs.c
+index ffef2c8eddc64..68d8027d5108e 100644
+--- a/drivers/scsi/fcoe/fcoe_sysfs.c
++++ b/drivers/scsi/fcoe/fcoe_sysfs.c
+@@ -830,14 +830,15 @@ struct fcoe_ctlr_device *fcoe_ctlr_device_add(struct device *parent,
+ 
+ 	dev_set_name(&ctlr->dev, "ctlr_%d", ctlr->id);
+ 	error = device_register(&ctlr->dev);
+-	if (error)
+-		goto out_del_q2;
++	if (error) {
++		destroy_workqueue(ctlr->devloss_work_q);
++		destroy_workqueue(ctlr->work_q);
++		put_device(&ctlr->dev);
++		return NULL;
++	}
+ 
+ 	return ctlr;
+ 
+-out_del_q2:
+-	destroy_workqueue(ctlr->devloss_work_q);
+-	ctlr->devloss_work_q = NULL;
+ out_del_q:
+ 	destroy_workqueue(ctlr->work_q);
+ 	ctlr->work_q = NULL;
+@@ -1036,16 +1037,16 @@ struct fcoe_fcf_device *fcoe_fcf_device_add(struct fcoe_ctlr_device *ctlr,
+ 	fcf->selected = new_fcf->selected;
+ 
+ 	error = device_register(&fcf->dev);
+-	if (error)
+-		goto out_del;
++	if (error) {
++		put_device(&fcf->dev);
++		goto out;
++	}
+ 
+ 	fcf->state = FCOE_FCF_STATE_CONNECTED;
+ 	list_add_tail(&fcf->peers, &ctlr->fcfs);
+ 
+ 	return fcf;
+ 
+-out_del:
+-	kfree(fcf);
+ out:
+ 	return NULL;
+ }
+diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
+index 8df70c92911dd..b2d4b6c78b5c3 100644
+--- a/drivers/scsi/hpsa.c
++++ b/drivers/scsi/hpsa.c
+@@ -8904,7 +8904,7 @@ clean1:	/* wq/aer/h */
+ 		destroy_workqueue(h->monitor_ctlr_wq);
+ 		h->monitor_ctlr_wq = NULL;
+ 	}
+-	kfree(h);
++	hpda_free_ctlr_info(h);
+ 	return rc;
+ }
+ 
+@@ -9764,7 +9764,8 @@ static int hpsa_add_sas_host(struct ctlr_info *h)
+ 	return 0;
+ 
+ free_sas_phy:
+-	hpsa_free_sas_phy(hpsa_sas_phy);
++	sas_phy_free(hpsa_sas_phy->phy);
++	kfree(hpsa_sas_phy);
+ free_sas_port:
+ 	hpsa_free_sas_port(hpsa_sas_port);
+ free_sas_node:
+@@ -9800,10 +9801,12 @@ static int hpsa_add_sas_device(struct hpsa_sas_node *hpsa_sas_node,
+ 
+ 	rc = hpsa_sas_port_add_rphy(hpsa_sas_port, rphy);
+ 	if (rc)
+-		goto free_sas_port;
++		goto free_sas_rphy;
+ 
+ 	return 0;
+ 
++free_sas_rphy:
++	sas_rphy_free(rphy);
+ free_sas_port:
+ 	hpsa_free_sas_port(hpsa_sas_port);
+ 	device->sas_port = NULL;
+diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
+index 90e8a538b078b..a5e6fbd86ad45 100644
+--- a/drivers/scsi/ipr.c
++++ b/drivers/scsi/ipr.c
+@@ -10870,11 +10870,19 @@ static struct notifier_block ipr_notifier = {
+  **/
+ static int __init ipr_init(void)
+ {
++	int rc;
++
+ 	ipr_info("IBM Power RAID SCSI Device Driver version: %s %s\n",
+ 		 IPR_DRIVER_VERSION, IPR_DRIVER_DATE);
+ 
+ 	register_reboot_notifier(&ipr_notifier);
+-	return pci_register_driver(&ipr_driver);
++	rc = pci_register_driver(&ipr_driver);
++	if (rc) {
++		unregister_reboot_notifier(&ipr_notifier);
++		return rc;
++	}
++
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_transport.c b/drivers/scsi/mpt3sas/mpt3sas_transport.c
+index 6ec5b7f33dfd7..b58f4d9c296a3 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_transport.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_transport.c
+@@ -712,6 +712,8 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
+ 	if ((sas_rphy_add(rphy))) {
+ 		ioc_err(ioc, "failure at %s:%d/%s()!\n",
+ 			__FILE__, __LINE__, __func__);
++		sas_rphy_free(rphy);
++		rphy = NULL;
+ 	}
+ 
+ 	if (mpt3sas_port->remote_identify.device_type == SAS_END_DEVICE) {
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index cc20621bb49da..7cfc6db817634 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -3619,7 +3619,7 @@ static int resp_write_scat(struct scsi_cmnd *scp,
+ 		mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
+ 		return illegal_condition_result;
+ 	}
+-	lrdp = kzalloc(lbdof_blen, GFP_ATOMIC);
++	lrdp = kzalloc(lbdof_blen, GFP_ATOMIC | __GFP_NOWARN);
+ 	if (lrdp == NULL)
+ 		return SCSI_MLQUEUE_HOST_BUSY;
+ 	if (sdebug_verbose)
+@@ -4275,7 +4275,7 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+ 	if (ret)
+ 		return ret;
+ 
+-	arr = kcalloc(lb_size, vnum, GFP_ATOMIC);
++	arr = kcalloc(lb_size, vnum, GFP_ATOMIC | __GFP_NOWARN);
+ 	if (!arr) {
+ 		mk_sense_buffer(scp, ILLEGAL_REQUEST, INSUFF_RES_ASC,
+ 				INSUFF_RES_ASCQ);
+@@ -4346,7 +4346,7 @@ static int resp_report_zones(struct scsi_cmnd *scp,
+ 	rep_max_zones = min((alloc_len - 64) >> ilog2(RZONES_DESC_HD),
+ 			    max_zones);
+ 
+-	arr = kzalloc(alloc_len, GFP_ATOMIC);
++	arr = kzalloc(alloc_len, GFP_ATOMIC | __GFP_NOWARN);
+ 	if (!arr) {
+ 		mk_sense_buffer(scp, ILLEGAL_REQUEST, INSUFF_RES_ASC,
+ 				INSUFF_RES_ASCQ);
+@@ -7103,7 +7103,10 @@ clean:
+ 		kfree(sdbg_devinfo->zstate);
+ 		kfree(sdbg_devinfo);
+ 	}
+-	kfree(sdbg_host);
++	if (sdbg_host->dev.release)
++		put_device(&sdbg_host->dev);
++	else
++		kfree(sdbg_host);
+ 	pr_warn("%s: failed, errno=%d\n", __func__, -error);
+ 	return error;
+ }
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index f11f51e2465f5..0c4bc42b55c20 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -306,19 +306,11 @@ enum blk_eh_timer_return scsi_times_out(struct request *req)
+ 
+ 	if (rtn == BLK_EH_DONE) {
+ 		/*
+-		 * Set the command to complete first in order to prevent a real
+-		 * completion from releasing the command while error handling
+-		 * is using it. If the command was already completed, then the
+-		 * lower level driver beat the timeout handler, and it is safe
+-		 * to return without escalating error recovery.
+-		 *
+-		 * If timeout handling lost the race to a real completion, the
+-		 * block layer may ignore that due to a fake timeout injection,
+-		 * so return RESET_TIMER to allow error handling another shot
+-		 * at this command.
++		 * If scsi_done() has already set SCMD_STATE_COMPLETE, do not
++		 * modify *scmd.
+ 		 */
+ 		if (test_and_set_bit(SCMD_STATE_COMPLETE, &scmd->state))
+-			return BLK_EH_RESET_TIMER;
++			return BLK_EH_DONE;
+ 		if (scsi_abort_command(scmd) != SUCCESS) {
+ 			set_host_byte(scmd, DID_TIME_OUT);
+ 			scsi_eh_scmd_add(scmd);
+diff --git a/drivers/scsi/snic/snic_disc.c b/drivers/scsi/snic/snic_disc.c
+index e9ccfb97773f1..7cf871323b2c4 100644
+--- a/drivers/scsi/snic/snic_disc.c
++++ b/drivers/scsi/snic/snic_disc.c
+@@ -318,6 +318,9 @@ snic_tgt_create(struct snic *snic, struct snic_tgt_id *tgtid)
+ 			      ret);
+ 
+ 		put_device(&snic->shost->shost_gendev);
++		spin_lock_irqsave(snic->shost->host_lock, flags);
++		list_del(&tgt->list);
++		spin_unlock_irqrestore(snic->shost->host_lock, flags);
+ 		kfree(tgt);
+ 		tgt = NULL;
+ 
+diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
+index d0cf969a8fb5f..479d60abbd14f 100644
+--- a/drivers/soc/qcom/Kconfig
++++ b/drivers/soc/qcom/Kconfig
+@@ -63,6 +63,7 @@ config QCOM_GSBI
+ config QCOM_LLCC
+ 	tristate "Qualcomm Technologies, Inc. LLCC driver"
+ 	depends on ARCH_QCOM || COMPILE_TEST
++	select REGMAP_MMIO
+ 	help
+ 	  Qualcomm Technologies, Inc. platform specific
+ 	  Last Level Cache Controller(LLCC) driver for platforms such as,
+diff --git a/drivers/soc/qcom/apr.c b/drivers/soc/qcom/apr.c
+index f736d208362c9..7063e0d42c5ed 100644
+--- a/drivers/soc/qcom/apr.c
++++ b/drivers/soc/qcom/apr.c
+@@ -15,13 +15,18 @@
+ #include <linux/rpmsg.h>
+ #include <linux/of.h>
+ 
+-struct apr {
++enum {
++	PR_TYPE_APR = 0,
++};
++
++struct packet_router {
+ 	struct rpmsg_endpoint *ch;
+ 	struct device *dev;
+ 	spinlock_t svcs_lock;
+ 	spinlock_t rx_lock;
+ 	struct idr svcs_idr;
+ 	int dest_domain_id;
++	int type;
+ 	struct pdr_handle *pdr;
+ 	struct workqueue_struct *rxwq;
+ 	struct work_struct rx_work;
+@@ -44,21 +49,21 @@ struct apr_rx_buf {
+  */
+ int apr_send_pkt(struct apr_device *adev, struct apr_pkt *pkt)
+ {
+-	struct apr *apr = dev_get_drvdata(adev->dev.parent);
++	struct packet_router *apr = dev_get_drvdata(adev->dev.parent);
+ 	struct apr_hdr *hdr;
+ 	unsigned long flags;
+ 	int ret;
+ 
+-	spin_lock_irqsave(&adev->lock, flags);
++	spin_lock_irqsave(&adev->svc.lock, flags);
+ 
+ 	hdr = &pkt->hdr;
+ 	hdr->src_domain = APR_DOMAIN_APPS;
+-	hdr->src_svc = adev->svc_id;
++	hdr->src_svc = adev->svc.id;
+ 	hdr->dest_domain = adev->domain_id;
+-	hdr->dest_svc = adev->svc_id;
++	hdr->dest_svc = adev->svc.id;
+ 
+ 	ret = rpmsg_trysend(apr->ch, pkt, hdr->pkt_size);
+-	spin_unlock_irqrestore(&adev->lock, flags);
++	spin_unlock_irqrestore(&adev->svc.lock, flags);
+ 
+ 	return ret ? ret : hdr->pkt_size;
+ }
+@@ -74,7 +79,7 @@ static void apr_dev_release(struct device *dev)
+ static int apr_callback(struct rpmsg_device *rpdev, void *buf,
+ 				  int len, void *priv, u32 addr)
+ {
+-	struct apr *apr = dev_get_drvdata(&rpdev->dev);
++	struct packet_router *apr = dev_get_drvdata(&rpdev->dev);
+ 	struct apr_rx_buf *abuf;
+ 	unsigned long flags;
+ 
+@@ -100,11 +105,11 @@ static int apr_callback(struct rpmsg_device *rpdev, void *buf,
+ 	return 0;
+ }
+ 
+-
+-static int apr_do_rx_callback(struct apr *apr, struct apr_rx_buf *abuf)
++static int apr_do_rx_callback(struct packet_router *apr, struct apr_rx_buf *abuf)
+ {
+ 	uint16_t hdr_size, msg_type, ver, svc_id;
+-	struct apr_device *svc = NULL;
++	struct pkt_router_svc *svc;
++	struct apr_device *adev;
+ 	struct apr_driver *adrv = NULL;
+ 	struct apr_resp_pkt resp;
+ 	struct apr_hdr *hdr;
+@@ -145,12 +150,15 @@ static int apr_do_rx_callback(struct apr *apr, struct apr_rx_buf *abuf)
+ 	svc_id = hdr->dest_svc;
+ 	spin_lock_irqsave(&apr->svcs_lock, flags);
+ 	svc = idr_find(&apr->svcs_idr, svc_id);
+-	if (svc && svc->dev.driver)
+-		adrv = to_apr_driver(svc->dev.driver);
++	if (svc && svc->dev->driver) {
++		adev = svc_to_apr_device(svc);
++		adrv = to_apr_driver(adev->dev.driver);
++	}
+ 	spin_unlock_irqrestore(&apr->svcs_lock, flags);
+ 
+-	if (!adrv) {
+-		dev_err(apr->dev, "APR: service is not registered\n");
++	if (!adrv || !adev) {
++		dev_err(apr->dev, "APR: service is not registered (%d)\n",
++			svc_id);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -164,20 +172,26 @@ static int apr_do_rx_callback(struct apr *apr, struct apr_rx_buf *abuf)
+ 	if (resp.payload_size > 0)
+ 		resp.payload = buf + hdr_size;
+ 
+-	adrv->callback(svc, &resp);
++	adrv->callback(adev, &resp);
+ 
+ 	return 0;
+ }
+ 
+ static void apr_rxwq(struct work_struct *work)
+ {
+-	struct apr *apr = container_of(work, struct apr, rx_work);
++	struct packet_router *apr = container_of(work, struct packet_router, rx_work);
+ 	struct apr_rx_buf *abuf, *b;
+ 	unsigned long flags;
+ 
+ 	if (!list_empty(&apr->rx_list)) {
+ 		list_for_each_entry_safe(abuf, b, &apr->rx_list, node) {
+-			apr_do_rx_callback(apr, abuf);
++			switch (apr->type) {
++			case PR_TYPE_APR:
++				apr_do_rx_callback(apr, abuf);
++				break;
++			default:
++				break;
++			}
+ 			spin_lock_irqsave(&apr->rx_lock, flags);
+ 			list_del(&abuf->node);
+ 			spin_unlock_irqrestore(&apr->rx_lock, flags);
+@@ -201,7 +215,7 @@ static int apr_device_match(struct device *dev, struct device_driver *drv)
+ 
+ 	while (id->domain_id != 0 || id->svc_id != 0) {
+ 		if (id->domain_id == adev->domain_id &&
+-		    id->svc_id == adev->svc_id)
++		    id->svc_id == adev->svc.id)
+ 			return 1;
+ 		id++;
+ 	}
+@@ -221,14 +235,14 @@ static int apr_device_remove(struct device *dev)
+ {
+ 	struct apr_device *adev = to_apr_device(dev);
+ 	struct apr_driver *adrv;
+-	struct apr *apr = dev_get_drvdata(adev->dev.parent);
++	struct packet_router *apr = dev_get_drvdata(adev->dev.parent);
+ 
+ 	if (dev->driver) {
+ 		adrv = to_apr_driver(dev->driver);
+ 		if (adrv->remove)
+ 			adrv->remove(adev);
+ 		spin_lock(&apr->svcs_lock);
+-		idr_remove(&apr->svcs_idr, adev->svc_id);
++		idr_remove(&apr->svcs_idr, adev->svc.id);
+ 		spin_unlock(&apr->svcs_lock);
+ 	}
+ 
+@@ -257,28 +271,39 @@ struct bus_type aprbus = {
+ EXPORT_SYMBOL_GPL(aprbus);
+ 
+ static int apr_add_device(struct device *dev, struct device_node *np,
+-			  const struct apr_device_id *id)
++			  u32 svc_id, u32 domain_id)
+ {
+-	struct apr *apr = dev_get_drvdata(dev);
++	struct packet_router *apr = dev_get_drvdata(dev);
+ 	struct apr_device *adev = NULL;
++	struct pkt_router_svc *svc;
+ 	int ret;
+ 
+ 	adev = kzalloc(sizeof(*adev), GFP_KERNEL);
+ 	if (!adev)
+ 		return -ENOMEM;
+ 
+-	spin_lock_init(&adev->lock);
++	adev->svc_id = svc_id;
++	svc = &adev->svc;
++
++	svc->id = svc_id;
++	svc->pr = apr;
++	svc->priv = adev;
++	svc->dev = dev;
++	spin_lock_init(&svc->lock);
++
++	adev->domain_id = domain_id;
+ 
+-	adev->svc_id = id->svc_id;
+-	adev->domain_id = id->domain_id;
+-	adev->version = id->svc_version;
+ 	if (np)
+ 		snprintf(adev->name, APR_NAME_SIZE, "%pOFn", np);
+-	else
+-		strscpy(adev->name, id->name, APR_NAME_SIZE);
+ 
+-	dev_set_name(&adev->dev, "aprsvc:%s:%x:%x", adev->name,
+-		     id->domain_id, id->svc_id);
++	switch (apr->type) {
++	case PR_TYPE_APR:
++		dev_set_name(&adev->dev, "aprsvc:%s:%x:%x", adev->name,
++			     domain_id, svc_id);
++		break;
++	default:
++		break;
++	}
+ 
+ 	adev->dev.bus = &aprbus;
+ 	adev->dev.parent = dev;
+@@ -287,12 +312,19 @@ static int apr_add_device(struct device *dev, struct device_node *np,
+ 	adev->dev.driver = NULL;
+ 
+ 	spin_lock(&apr->svcs_lock);
+-	idr_alloc(&apr->svcs_idr, adev, id->svc_id,
+-		  id->svc_id + 1, GFP_ATOMIC);
++	ret = idr_alloc(&apr->svcs_idr, svc, svc_id, svc_id + 1, GFP_ATOMIC);
+ 	spin_unlock(&apr->svcs_lock);
++	if (ret < 0) {
++		dev_err(dev, "idr_alloc failed: %d\n", ret);
++		goto out;
++	}
+ 
+-	of_property_read_string_index(np, "qcom,protection-domain",
+-				      1, &adev->service_path);
++	ret = of_property_read_string_index(np, "qcom,protection-domain",
++					    1, &adev->service_path);
++	if (ret < 0) {
++		dev_err(dev, "Failed to read second value of qcom,protection-domain\n");
++		goto out;
++	}
+ 
+ 	dev_info(dev, "Adding APR dev: %s\n", dev_name(&adev->dev));
+ 
+@@ -302,13 +334,14 @@ static int apr_add_device(struct device *dev, struct device_node *np,
+ 		put_device(&adev->dev);
+ 	}
+ 
++out:
+ 	return ret;
+ }
+ 
+ static int of_apr_add_pd_lookups(struct device *dev)
+ {
+ 	const char *service_name, *service_path;
+-	struct apr *apr = dev_get_drvdata(dev);
++	struct packet_router *apr = dev_get_drvdata(dev);
+ 	struct device_node *node;
+ 	struct pdr_service *pds;
+ 	int ret;
+@@ -340,13 +373,14 @@ static int of_apr_add_pd_lookups(struct device *dev)
+ 
+ static void of_register_apr_devices(struct device *dev, const char *svc_path)
+ {
+-	struct apr *apr = dev_get_drvdata(dev);
++	struct packet_router *apr = dev_get_drvdata(dev);
+ 	struct device_node *node;
+ 	const char *service_path;
+ 	int ret;
+ 
+ 	for_each_child_of_node(dev->of_node, node) {
+-		struct apr_device_id id = { {0} };
++		u32 svc_id;
++		u32 domain_id;
+ 
+ 		/*
+ 		 * This function is called with svc_path NULL during
+@@ -376,13 +410,13 @@ static void of_register_apr_devices(struct device *dev, const char *svc_path)
+ 				continue;
+ 		}
+ 
+-		if (of_property_read_u32(node, "reg", &id.svc_id))
++		if (of_property_read_u32(node, "reg", &svc_id))
+ 			continue;
+ 
+-		id.domain_id = apr->dest_domain_id;
++		domain_id = apr->dest_domain_id;
+ 
+-		if (apr_add_device(dev, node, &id))
+-			dev_err(dev, "Failed to add apr %d svc\n", id.svc_id);
++		if (apr_add_device(dev, node, svc_id, domain_id))
++			dev_err(dev, "Failed to add apr %d svc\n", svc_id);
+ 	}
+ }
+ 
+@@ -402,7 +436,7 @@ static int apr_remove_device(struct device *dev, void *svc_path)
+ 
+ static void apr_pd_status(int state, char *svc_path, void *priv)
+ {
+-	struct apr *apr = (struct apr *)priv;
++	struct packet_router *apr = (struct packet_router *)priv;
+ 
+ 	switch (state) {
+ 	case SERVREG_SERVICE_STATE_UP:
+@@ -417,16 +451,20 @@ static void apr_pd_status(int state, char *svc_path, void *priv)
+ static int apr_probe(struct rpmsg_device *rpdev)
+ {
+ 	struct device *dev = &rpdev->dev;
+-	struct apr *apr;
++	struct packet_router *apr;
+ 	int ret;
+ 
+ 	apr = devm_kzalloc(dev, sizeof(*apr), GFP_KERNEL);
+ 	if (!apr)
+ 		return -ENOMEM;
+ 
+-	ret = of_property_read_u32(dev->of_node, "qcom,apr-domain", &apr->dest_domain_id);
++	ret = of_property_read_u32(dev->of_node, "qcom,domain", &apr->dest_domain_id);
++	if (ret) /* try deprecated apr-domain property */
++		ret = of_property_read_u32(dev->of_node, "qcom,apr-domain",
++					   &apr->dest_domain_id);
++	apr->type = PR_TYPE_APR;
+ 	if (ret) {
+-		dev_err(dev, "APR Domain ID not specified in DT\n");
++		dev_err(dev, "Domain ID not specified in DT\n");
+ 		return ret;
+ 	}
+ 
+@@ -469,7 +507,7 @@ destroy_wq:
+ 
+ static void apr_remove(struct rpmsg_device *rpdev)
+ {
+-	struct apr *apr = dev_get_drvdata(&rpdev->dev);
++	struct packet_router *apr = dev_get_drvdata(&rpdev->dev);
+ 
+ 	pdr_handle_release(apr->pdr);
+ 	device_for_each_child(&rpdev->dev, NULL, apr_remove_device);
+@@ -506,20 +544,20 @@ void apr_driver_unregister(struct apr_driver *drv)
+ }
+ EXPORT_SYMBOL_GPL(apr_driver_unregister);
+ 
+-static const struct of_device_id apr_of_match[] = {
++static const struct of_device_id pkt_router_of_match[] = {
+ 	{ .compatible = "qcom,apr"},
+ 	{ .compatible = "qcom,apr-v2"},
+ 	{}
+ };
+-MODULE_DEVICE_TABLE(of, apr_of_match);
++MODULE_DEVICE_TABLE(of, pkt_router_of_match);
+ 
+-static struct rpmsg_driver apr_driver = {
++static struct rpmsg_driver packet_router_driver = {
+ 	.probe = apr_probe,
+ 	.remove = apr_remove,
+ 	.callback = apr_callback,
+ 	.drv = {
+ 		.name = "qcom,apr",
+-		.of_match_table = apr_of_match,
++		.of_match_table = pkt_router_of_match,
+ 	},
+ };
+ 
+@@ -529,7 +567,7 @@ static int __init apr_init(void)
+ 
+ 	ret = bus_register(&aprbus);
+ 	if (!ret)
+-		ret = register_rpmsg_driver(&apr_driver);
++		ret = register_rpmsg_driver(&packet_router_driver);
+ 	else
+ 		bus_unregister(&aprbus);
+ 
+@@ -539,7 +577,7 @@ static int __init apr_init(void)
+ static void __exit apr_exit(void)
+ {
+ 	bus_unregister(&aprbus);
+-	unregister_rpmsg_driver(&apr_driver);
++	unregister_rpmsg_driver(&packet_router_driver);
+ }
+ 
+ subsys_initcall(apr_init);
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index 2e06f48d683d1..c60fe98f03e37 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -476,7 +476,7 @@ static int qcom_llcc_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err;
+ 
+-	drv_data->ecc_irq = platform_get_irq(pdev, 0);
++	drv_data->ecc_irq = platform_get_irq_optional(pdev, 0);
+ 	if (drv_data->ecc_irq >= 0) {
+ 		llcc_edac = platform_device_register_data(&pdev->dev,
+ 						"qcom_llcc_edac", -1, drv_data,
+diff --git a/drivers/soc/ti/knav_qmss_queue.c b/drivers/soc/ti/knav_qmss_queue.c
+index 53e36d4328d1e..20c84741639e1 100644
+--- a/drivers/soc/ti/knav_qmss_queue.c
++++ b/drivers/soc/ti/knav_qmss_queue.c
+@@ -67,7 +67,7 @@ static DEFINE_MUTEX(knav_dev_lock);
+  * Newest followed by older ones. Search is done from start of the array
+  * until a firmware file is found.
+  */
+-const char *knav_acc_firmwares[] = {"ks2_qmss_pdsp_acc48.bin"};
++static const char * const knav_acc_firmwares[] = {"ks2_qmss_pdsp_acc48.bin"};
+ 
+ static bool device_ready;
+ bool knav_qmss_device_ready(void)
+@@ -1782,9 +1782,9 @@ static int knav_queue_probe(struct platform_device *pdev)
+ 	INIT_LIST_HEAD(&kdev->pdsps);
+ 
+ 	pm_runtime_enable(&pdev->dev);
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0) {
+-		pm_runtime_put_noidle(&pdev->dev);
++		pm_runtime_disable(&pdev->dev);
+ 		dev_err(dev, "Failed to enable QMSS\n");
+ 		return ret;
+ 	}
+diff --git a/drivers/soc/ti/smartreflex.c b/drivers/soc/ti/smartreflex.c
+index 5376f3d22f31e..1228a0cba1320 100644
+--- a/drivers/soc/ti/smartreflex.c
++++ b/drivers/soc/ti/smartreflex.c
+@@ -942,6 +942,7 @@ static int omap_sr_probe(struct platform_device *pdev)
+ err_debugfs:
+ 	debugfs_remove_recursive(sr_info->dbg_dir);
+ err_list_del:
++	pm_runtime_disable(&pdev->dev);
+ 	list_del(&sr_info->node);
+ 
+ 	pm_runtime_put_sync(&pdev->dev);
+diff --git a/drivers/soc/ux500/ux500-soc-id.c b/drivers/soc/ux500/ux500-soc-id.c
+index a9472e0e5d61c..27d6e25a01153 100644
+--- a/drivers/soc/ux500/ux500-soc-id.c
++++ b/drivers/soc/ux500/ux500-soc-id.c
+@@ -167,20 +167,18 @@ ATTRIBUTE_GROUPS(ux500_soc);
+ static const char *db8500_read_soc_id(struct device_node *backupram)
+ {
+ 	void __iomem *base;
+-	void __iomem *uid;
+ 	const char *retstr;
++	u32 uid[5];
+ 
+ 	base = of_iomap(backupram, 0);
+ 	if (!base)
+ 		return NULL;
+-	uid = base + 0x1fc0;
++	memcpy_fromio(uid, base + 0x1fc0, sizeof(uid));
+ 
+ 	/* Throw these device-specific numbers into the entropy pool */
+-	add_device_randomness(uid, 0x14);
++	add_device_randomness(uid, sizeof(uid));
+ 	retstr = kasprintf(GFP_KERNEL, "%08x%08x%08x%08x%08x",
+-			 readl((u32 *)uid+0),
+-			 readl((u32 *)uid+1), readl((u32 *)uid+2),
+-			 readl((u32 *)uid+3), readl((u32 *)uid+4));
++			   uid[0], uid[1], uid[2], uid[3], uid[4]);
+ 	iounmap(base);
+ 	return retstr;
+ }
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index 942d2fe132181..4487fbb8f2e61 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -1140,8 +1140,8 @@ static const struct snd_soc_dai_ops intel_pcm_dai_ops = {
+ 	.prepare = intel_prepare,
+ 	.hw_free = intel_hw_free,
+ 	.shutdown = intel_shutdown,
+-	.set_sdw_stream = intel_pcm_set_sdw_stream,
+-	.get_sdw_stream = intel_get_sdw_stream,
++	.set_stream = intel_pcm_set_sdw_stream,
++	.get_stream = intel_get_sdw_stream,
+ };
+ 
+ static const struct snd_soc_dai_ops intel_pdm_dai_ops = {
+@@ -1150,8 +1150,8 @@ static const struct snd_soc_dai_ops intel_pdm_dai_ops = {
+ 	.prepare = intel_prepare,
+ 	.hw_free = intel_hw_free,
+ 	.shutdown = intel_shutdown,
+-	.set_sdw_stream = intel_pdm_set_sdw_stream,
+-	.get_sdw_stream = intel_get_sdw_stream,
++	.set_stream = intel_pdm_set_sdw_stream,
++	.get_stream = intel_get_sdw_stream,
+ };
+ 
+ static const struct snd_soc_component_driver dai_component = {
+diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c
+index 6d22df01f3547..ac73258792e6c 100644
+--- a/drivers/soundwire/qcom.c
++++ b/drivers/soundwire/qcom.c
+@@ -649,8 +649,8 @@ static int qcom_swrm_startup(struct snd_pcm_substream *substream,
+ 	ctrl->sruntime[dai->id] = sruntime;
+ 
+ 	for_each_rtd_codec_dais(rtd, i, codec_dai) {
+-		ret = snd_soc_dai_set_sdw_stream(codec_dai, sruntime,
+-						 substream->stream);
++		ret = snd_soc_dai_set_stream(codec_dai, sruntime,
++					     substream->stream);
+ 		if (ret < 0 && ret != -ENOTSUPP) {
+ 			dev_err(dai->dev, "Failed to set sdw stream on %s",
+ 				codec_dai->name);
+@@ -676,8 +676,8 @@ static const struct snd_soc_dai_ops qcom_swrm_pdm_dai_ops = {
+ 	.hw_free = qcom_swrm_hw_free,
+ 	.startup = qcom_swrm_startup,
+ 	.shutdown = qcom_swrm_shutdown,
+-	.set_sdw_stream = qcom_swrm_set_sdw_stream,
+-	.get_sdw_stream = qcom_swrm_get_sdw_stream,
++	.set_stream = qcom_swrm_set_sdw_stream,
++	.get_stream = qcom_swrm_get_sdw_stream,
+ };
+ 
+ static const struct snd_soc_component_driver qcom_swrm_dai_component = {
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index 304ff2ee7d75a..79554d28f08aa 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -1860,7 +1860,7 @@ static int set_stream(struct snd_pcm_substream *substream,
+ 
+ 	/* Set stream pointer on all DAIs */
+ 	for_each_rtd_dais(rtd, i, dai) {
+-		ret = snd_soc_dai_set_sdw_stream(dai, sdw_stream, substream->stream);
++		ret = snd_soc_dai_set_stream(dai, sdw_stream, substream->stream);
+ 		if (ret < 0) {
+ 			dev_err(rtd->dev, "failed to set stream pointer on dai %s", dai->name);
+ 			break;
+@@ -1931,7 +1931,7 @@ void sdw_shutdown_stream(void *sdw_substream)
+ 	/* Find stream from first CPU DAI */
+ 	dai = asoc_rtd_to_cpu(rtd, 0);
+ 
+-	sdw_stream = snd_soc_dai_get_sdw_stream(dai, substream->stream);
++	sdw_stream = snd_soc_dai_get_stream(dai, substream->stream);
+ 
+ 	if (IS_ERR(sdw_stream)) {
+ 		dev_err(rtd->dev, "no stream found for DAI %s", dai->name);
+diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
+index 0584f4d2fde29..3ffdab6caac2a 100644
+--- a/drivers/spi/spi-gpio.c
++++ b/drivers/spi/spi-gpio.c
+@@ -244,9 +244,19 @@ static int spi_gpio_set_direction(struct spi_device *spi, bool output)
+ 	if (output)
+ 		return gpiod_direction_output(spi_gpio->mosi, 1);
+ 
+-	ret = gpiod_direction_input(spi_gpio->mosi);
+-	if (ret)
+-		return ret;
++	/*
++	 * Only change MOSI to an input if using 3WIRE mode.
++	 * Otherwise, MOSI could be left floating if there is
++	 * no pull resistor connected to the I/O pin, or could
++	 * be left logic high if there is a pull-up. Transmitting
++	 * logic high when only clocking MISO data in can put some
++	 * SPI devices in to a bad state.
++	 */
++	if (spi->mode & SPI_3WIRE) {
++		ret = gpiod_direction_input(spi_gpio->mosi);
++		if (ret)
++			return ret;
++	}
+ 	/*
+ 	 * Send a turnaround high impedance cycle when switching
+ 	 * from output to input. Theoretically there should be
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index 859910ec8d9f6..9c5ec99431d2e 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -376,12 +376,23 @@ spidev_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 	switch (cmd) {
+ 	/* read requests */
+ 	case SPI_IOC_RD_MODE:
+-		retval = put_user(spi->mode & SPI_MODE_MASK,
+-					(__u8 __user *)arg);
+-		break;
+ 	case SPI_IOC_RD_MODE32:
+-		retval = put_user(spi->mode & SPI_MODE_MASK,
+-					(__u32 __user *)arg);
++		tmp = spi->mode;
++
++		{
++			struct spi_controller *ctlr = spi->controller;
++
++			if (ctlr->use_gpio_descriptors && ctlr->cs_gpiods &&
++			    ctlr->cs_gpiods[spi->chip_select])
++				tmp &= ~SPI_CS_HIGH;
++		}
++
++		if (cmd == SPI_IOC_RD_MODE)
++			retval = put_user(tmp & SPI_MODE_MASK,
++					  (__u8 __user *)arg);
++		else
++			retval = put_user(tmp & SPI_MODE_MASK,
++					  (__u32 __user *)arg);
+ 		break;
+ 	case SPI_IOC_RD_LSB_FIRST:
+ 		retval = put_user((spi->mode & SPI_LSB_FIRST) ?  1 : 0,
+diff --git a/drivers/staging/iio/accel/adis16203.c b/drivers/staging/iio/accel/adis16203.c
+index b68304da288b5..7be44ff2c943a 100644
+--- a/drivers/staging/iio/accel/adis16203.c
++++ b/drivers/staging/iio/accel/adis16203.c
+@@ -318,3 +318,4 @@ MODULE_AUTHOR("Barry Song <21cnbao@gmail.com>");
+ MODULE_DESCRIPTION("Analog Devices ADIS16203 Programmable 360 Degrees Inclinometer");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("spi:adis16203");
++MODULE_IMPORT_NS(IIO_ADISLIB);
+diff --git a/drivers/staging/iio/accel/adis16240.c b/drivers/staging/iio/accel/adis16240.c
+index 5064adce5f583..dbbbf81207f9c 100644
+--- a/drivers/staging/iio/accel/adis16240.c
++++ b/drivers/staging/iio/accel/adis16240.c
+@@ -445,3 +445,4 @@ MODULE_AUTHOR("Barry Song <21cnbao@gmail.com>");
+ MODULE_DESCRIPTION("Analog Devices Programmable Impact Sensor and Recorder");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("spi:adis16240");
++MODULE_IMPORT_NS(IIO_ADISLIB);
+diff --git a/drivers/staging/media/tegra-video/csi.c b/drivers/staging/media/tegra-video/csi.c
+index a19c85c57fca5..dc5d432a09e8e 100644
+--- a/drivers/staging/media/tegra-video/csi.c
++++ b/drivers/staging/media/tegra-video/csi.c
+@@ -420,7 +420,7 @@ static int tegra_csi_channel_alloc(struct tegra_csi *csi,
+ 	chan->csi = csi;
+ 	chan->csi_port_num = port_num;
+ 	chan->numlanes = lanes;
+-	chan->of_node = node;
++	chan->of_node = of_node_get(node);
+ 	chan->numpads = num_pads;
+ 	if (num_pads & 0x2) {
+ 		chan->pads[0].flags = MEDIA_PAD_FL_SINK;
+@@ -435,6 +435,7 @@ static int tegra_csi_channel_alloc(struct tegra_csi *csi,
+ 	chan->mipi = tegra_mipi_request(csi->dev, node);
+ 	if (IS_ERR(chan->mipi)) {
+ 		ret = PTR_ERR(chan->mipi);
++		chan->mipi = NULL;
+ 		dev_err(csi->dev, "failed to get mipi device: %d\n", ret);
+ 	}
+ 
+@@ -620,6 +621,7 @@ static void tegra_csi_channels_cleanup(struct tegra_csi *csi)
+ 			media_entity_cleanup(&subdev->entity);
+ 		}
+ 
++		of_node_put(chan->of_node);
+ 		list_del(&chan->list);
+ 		kfree(chan);
+ 	}
+diff --git a/drivers/staging/media/tegra-video/csi.h b/drivers/staging/media/tegra-video/csi.h
+index c65ff73b1cdc2..be4ea81e00316 100644
+--- a/drivers/staging/media/tegra-video/csi.h
++++ b/drivers/staging/media/tegra-video/csi.h
+@@ -50,7 +50,7 @@ struct tegra_csi;
+  * @framerate: active framerate for TPG
+  * @h_blank: horizontal blanking for TPG active format
+  * @v_blank: vertical blanking for TPG active format
+- * @mipi: mipi device for corresponding csi channel pads
++ * @mipi: mipi device for corresponding csi channel pads, or NULL if not applicable (TPG, error)
+  * @pixel_rate: active pixel rate from the sensor on this channel
+  */
+ struct tegra_csi_channel {
+diff --git a/drivers/staging/rtl8192e/rtllib_rx.c b/drivers/staging/rtl8192e/rtllib_rx.c
+index 63752233e551f..404794503fb6d 100644
+--- a/drivers/staging/rtl8192e/rtllib_rx.c
++++ b/drivers/staging/rtl8192e/rtllib_rx.c
+@@ -1490,9 +1490,9 @@ static int rtllib_rx_Monitor(struct rtllib_device *ieee, struct sk_buff *skb,
+ 		hdrlen += 4;
+ 	}
+ 
+-	rtllib_monitor_rx(ieee, skb, rx_stats, hdrlen);
+ 	ieee->stats.rx_packets++;
+ 	ieee->stats.rx_bytes += skb->len;
++	rtllib_monitor_rx(ieee, skb, rx_stats, hdrlen);
+ 
+ 	return 1;
+ }
+diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c
+index b6fee7230ce05..3871437f4708e 100644
+--- a/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c
++++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c
+@@ -954,9 +954,11 @@ int ieee80211_rx(struct ieee80211_device *ieee, struct sk_buff *skb,
+ #endif
+ 
+ 	if (ieee->iw_mode == IW_MODE_MONITOR) {
++		unsigned int len = skb->len;
++
+ 		ieee80211_monitor_rx(ieee, skb, rx_stats);
+ 		stats->rx_packets++;
+-		stats->rx_bytes += skb->len;
++		stats->rx_bytes += len;
+ 		return 1;
+ 	}
+ 
+diff --git a/drivers/thermal/imx8mm_thermal.c b/drivers/thermal/imx8mm_thermal.c
+index 0f4cabd2a8c62..6be16e0598b68 100644
+--- a/drivers/thermal/imx8mm_thermal.c
++++ b/drivers/thermal/imx8mm_thermal.c
+@@ -65,8 +65,14 @@ static int imx8mm_tmu_get_temp(void *data, int *temp)
+ 	u32 val;
+ 
+ 	val = readl_relaxed(tmu->base + TRITSR) & TRITSR_TEMP0_VAL_MASK;
++
++	/*
++	 * Do not validate against the V bit (bit 31) due to errata
++	 * ERR051272: TMU: Bit 31 of registers TMU_TSCR/TMU_TRITSR/TMU_TRATSR invalid
++	 */
++
+ 	*temp = val * 1000;
+-	if (*temp < VER1_TEMP_LOW_LIMIT)
++	if (*temp < VER1_TEMP_LOW_LIMIT || *temp > VER2_TEMP_HIGH_LIMIT)
+ 		return -EAGAIN;
+ 
+ 	return 0;
+diff --git a/drivers/tty/serial/altera_uart.c b/drivers/tty/serial/altera_uart.c
+index 0e487ce091ac9..d91f76b1d353e 100644
+--- a/drivers/tty/serial/altera_uart.c
++++ b/drivers/tty/serial/altera_uart.c
+@@ -199,9 +199,8 @@ static void altera_uart_set_termios(struct uart_port *port,
+ 	 */
+ }
+ 
+-static void altera_uart_rx_chars(struct altera_uart *pp)
++static void altera_uart_rx_chars(struct uart_port *port)
+ {
+-	struct uart_port *port = &pp->port;
+ 	unsigned char ch, flag;
+ 	unsigned short status;
+ 
+@@ -248,9 +247,8 @@ static void altera_uart_rx_chars(struct altera_uart *pp)
+ 	spin_lock(&port->lock);
+ }
+ 
+-static void altera_uart_tx_chars(struct altera_uart *pp)
++static void altera_uart_tx_chars(struct uart_port *port)
+ {
+-	struct uart_port *port = &pp->port;
+ 	struct circ_buf *xmit = &port->state->xmit;
+ 
+ 	if (port->x_char) {
+@@ -274,26 +272,25 @@ static void altera_uart_tx_chars(struct altera_uart *pp)
+ 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ 		uart_write_wakeup(port);
+ 
+-	if (xmit->head == xmit->tail) {
+-		pp->imr &= ~ALTERA_UART_CONTROL_TRDY_MSK;
+-		altera_uart_update_ctrl_reg(pp);
+-	}
++	if (uart_circ_empty(xmit))
++		altera_uart_stop_tx(port);
+ }
+ 
+ static irqreturn_t altera_uart_interrupt(int irq, void *data)
+ {
+ 	struct uart_port *port = data;
+ 	struct altera_uart *pp = container_of(port, struct altera_uart, port);
++	unsigned long flags;
+ 	unsigned int isr;
+ 
+ 	isr = altera_uart_readl(port, ALTERA_UART_STATUS_REG) & pp->imr;
+ 
+-	spin_lock(&port->lock);
++	spin_lock_irqsave(&port->lock, flags);
+ 	if (isr & ALTERA_UART_STATUS_RRDY_MSK)
+-		altera_uart_rx_chars(pp);
++		altera_uart_rx_chars(port);
+ 	if (isr & ALTERA_UART_STATUS_TRDY_MSK)
+-		altera_uart_tx_chars(pp);
+-	spin_unlock(&port->lock);
++		altera_uart_tx_chars(port);
++	spin_unlock_irqrestore(&port->lock, flags);
+ 
+ 	return IRQ_RETVAL(isr);
+ }
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 9900ee3f90683..348d4b2a391a3 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -1048,6 +1048,9 @@ static void pl011_dma_rx_callback(void *data)
+  */
+ static inline void pl011_dma_rx_stop(struct uart_amba_port *uap)
+ {
++	if (!uap->using_rx_dma)
++		return;
++
+ 	/* FIXME.  Just disable the DMA enable */
+ 	uap->dmacr &= ~UART011_RXDMAE;
+ 	pl011_write(uap->dmacr, uap, REG_DMACR);
+@@ -1757,8 +1760,17 @@ static void pl011_enable_interrupts(struct uart_amba_port *uap)
+ static void pl011_unthrottle_rx(struct uart_port *port)
+ {
+ 	struct uart_amba_port *uap = container_of(port, struct uart_amba_port, port);
++	unsigned long flags;
+ 
+-	pl011_enable_interrupts(uap);
++	spin_lock_irqsave(&uap->port.lock, flags);
++
++	uap->im = UART011_RTIM;
++	if (!pl011_dma_rx_running(uap))
++		uap->im |= UART011_RXIM;
++
++	pl011_write(uap->im, uap, REG_IMSC);
++
++	spin_unlock_irqrestore(&uap->port.lock, flags);
+ }
+ 
+ static int pl011_startup(struct uart_port *port)
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 43aca5a2ef0f2..223695947b654 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2586,6 +2586,7 @@ static int lpuart_probe(struct platform_device *pdev)
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct lpuart_port *sport;
+ 	struct resource *res;
++	irq_handler_t handler;
+ 	int ret;
+ 
+ 	sport = devm_kzalloc(&pdev->dev, sizeof(*sport), GFP_KERNEL);
+@@ -2658,17 +2659,12 @@ static int lpuart_probe(struct platform_device *pdev)
+ 
+ 	if (lpuart_is_32(sport)) {
+ 		lpuart_reg.cons = LPUART32_CONSOLE;
+-		ret = devm_request_irq(&pdev->dev, sport->port.irq, lpuart32_int, 0,
+-					DRIVER_NAME, sport);
++		handler = lpuart32_int;
+ 	} else {
+ 		lpuart_reg.cons = LPUART_CONSOLE;
+-		ret = devm_request_irq(&pdev->dev, sport->port.irq, lpuart_int, 0,
+-					DRIVER_NAME, sport);
++		handler = lpuart_int;
+ 	}
+ 
+-	if (ret)
+-		goto failed_irq_request;
+-
+ 	ret = uart_get_rs485_mode(&sport->port);
+ 	if (ret)
+ 		goto failed_get_rs485;
+@@ -2684,11 +2680,17 @@ static int lpuart_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto failed_attach_port;
+ 
++	ret = devm_request_irq(&pdev->dev, sport->port.irq, handler, 0,
++				DRIVER_NAME, sport);
++	if (ret)
++		goto failed_irq_request;
++
+ 	return 0;
+ 
++failed_irq_request:
++	uart_remove_one_port(&lpuart_reg, &sport->port);
+ failed_get_rs485:
+ failed_attach_port:
+-failed_irq_request:
+ 	lpuart_disable_clks(sport);
+ 	return ret;
+ }
+diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c
+index 351ad0b020291..fa2061f1cf3d3 100644
+--- a/drivers/tty/serial/pch_uart.c
++++ b/drivers/tty/serial/pch_uart.c
+@@ -711,6 +711,7 @@ static void pch_request_dma(struct uart_port *port)
+ 	if (!chan) {
+ 		dev_err(priv->port.dev, "%s:dma_request_channel FAILS(Tx)\n",
+ 			__func__);
++		pci_dev_put(dma_dev);
+ 		return;
+ 	}
+ 	priv->chan_tx = chan;
+@@ -727,6 +728,7 @@ static void pch_request_dma(struct uart_port *port)
+ 			__func__);
+ 		dma_release_channel(priv->chan_tx);
+ 		priv->chan_tx = NULL;
++		pci_dev_put(dma_dev);
+ 		return;
+ 	}
+ 
+@@ -734,6 +736,8 @@ static void pch_request_dma(struct uart_port *port)
+ 	priv->rx_buf_virt = dma_alloc_coherent(port->dev, port->fifosize,
+ 				    &priv->rx_buf_dma, GFP_KERNEL);
+ 	priv->chan_rx = chan;
++
++	pci_dev_put(dma_dev);
+ }
+ 
+ static void pch_dma_rx_complete(void *arg)
+diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
+index cda71802b6982..62377c831894d 100644
+--- a/drivers/tty/serial/serial-tegra.c
++++ b/drivers/tty/serial/serial-tegra.c
+@@ -614,8 +614,9 @@ static void tegra_uart_stop_tx(struct uart_port *u)
+ 	if (tup->tx_in_progress != TEGRA_UART_TX_DMA)
+ 		return;
+ 
+-	dmaengine_terminate_all(tup->tx_dma_chan);
++	dmaengine_pause(tup->tx_dma_chan);
+ 	dmaengine_tx_status(tup->tx_dma_chan, tup->tx_cookie, &state);
++	dmaengine_terminate_all(tup->tx_dma_chan);
+ 	count = tup->tx_bytes_requested - state.residue;
+ 	async_tx_ack(tup->tx_dma_desc);
+ 	uart_xmit_advance(&tup->uport, count);
+@@ -758,8 +759,9 @@ static void tegra_uart_terminate_rx_dma(struct tegra_uart_port *tup)
+ 		return;
+ 	}
+ 
+-	dmaengine_terminate_all(tup->rx_dma_chan);
++	dmaengine_pause(tup->rx_dma_chan);
+ 	dmaengine_tx_status(tup->rx_dma_chan, tup->rx_cookie, &state);
++	dmaengine_terminate_all(tup->rx_dma_chan);
+ 
+ 	tegra_uart_rx_buffer_push(tup, state.residue);
+ 	tup->rx_dma_active = false;
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 605f928f0636a..40fff38588d4f 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -2254,7 +2254,8 @@ int uart_suspend_port(struct uart_driver *drv, struct uart_port *uport)
+ 
+ 		spin_lock_irq(&uport->lock);
+ 		ops->stop_tx(uport);
+-		ops->set_mctrl(uport, 0);
++		if (!(uport->rs485.flags & SER_RS485_ENABLED))
++			ops->set_mctrl(uport, 0);
+ 		ops->stop_rx(uport);
+ 		spin_unlock_irq(&uport->lock);
+ 
+diff --git a/drivers/tty/serial/sunsab.c b/drivers/tty/serial/sunsab.c
+index bab551f469631..451c7233623f0 100644
+--- a/drivers/tty/serial/sunsab.c
++++ b/drivers/tty/serial/sunsab.c
+@@ -1137,7 +1137,13 @@ static int __init sunsab_init(void)
+ 		}
+ 	}
+ 
+-	return platform_driver_register(&sab_driver);
++	err = platform_driver_register(&sab_driver);
++	if (err) {
++		kfree(sunsab_ports);
++		sunsab_ports = NULL;
++	}
++
++	return err;
+ }
+ 
+ static void __exit sunsab_exit(void)
+diff --git a/drivers/uio/uio_dmem_genirq.c b/drivers/uio/uio_dmem_genirq.c
+index ec7f66f4555a6..92751737bbea4 100644
+--- a/drivers/uio/uio_dmem_genirq.c
++++ b/drivers/uio/uio_dmem_genirq.c
+@@ -110,8 +110,10 @@ static irqreturn_t uio_dmem_genirq_handler(int irq, struct uio_info *dev_info)
+ 	 * remember the state so we can allow user space to enable it later.
+ 	 */
+ 
++	spin_lock(&priv->lock);
+ 	if (!test_and_set_bit(0, &priv->flags))
+ 		disable_irq_nosync(irq);
++	spin_unlock(&priv->lock);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -125,20 +127,19 @@ static int uio_dmem_genirq_irqcontrol(struct uio_info *dev_info, s32 irq_on)
+ 	 * in the interrupt controller, but keep track of the
+ 	 * state to prevent per-irq depth damage.
+ 	 *
+-	 * Serialize this operation to support multiple tasks.
++	 * Serialize this operation to support multiple tasks and concurrency
++	 * with irq handler on SMP systems.
+ 	 */
+ 
+ 	spin_lock_irqsave(&priv->lock, flags);
+ 	if (irq_on) {
+ 		if (test_and_clear_bit(0, &priv->flags))
+ 			enable_irq(dev_info->irq);
+-		spin_unlock_irqrestore(&priv->lock, flags);
+ 	} else {
+-		if (!test_and_set_bit(0, &priv->flags)) {
+-			spin_unlock_irqrestore(&priv->lock, flags);
+-			disable_irq(dev_info->irq);
+-		}
++		if (!test_and_set_bit(0, &priv->flags))
++			disable_irq_nosync(dev_info->irq);
+ 	}
++	spin_unlock_irqrestore(&priv->lock, flags);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 4a0eec1765118..d73f624ed42a3 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -120,21 +120,25 @@ static void __dwc3_set_mode(struct work_struct *work)
+ 	unsigned long flags;
+ 	int ret;
+ 	u32 reg;
++	u32 desired_dr_role;
+ 
+ 	mutex_lock(&dwc->mutex);
++	spin_lock_irqsave(&dwc->lock, flags);
++	desired_dr_role = dwc->desired_dr_role;
++	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
+ 	pm_runtime_get_sync(dwc->dev);
+ 
+ 	if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_OTG)
+ 		dwc3_otg_update(dwc, 0);
+ 
+-	if (!dwc->desired_dr_role)
++	if (!desired_dr_role)
+ 		goto out;
+ 
+-	if (dwc->desired_dr_role == dwc->current_dr_role)
++	if (desired_dr_role == dwc->current_dr_role)
+ 		goto out;
+ 
+-	if (dwc->desired_dr_role == DWC3_GCTL_PRTCAP_OTG && dwc->edev)
++	if (desired_dr_role == DWC3_GCTL_PRTCAP_OTG && dwc->edev)
+ 		goto out;
+ 
+ 	switch (dwc->current_dr_role) {
+@@ -162,7 +166,7 @@ static void __dwc3_set_mode(struct work_struct *work)
+ 	 */
+ 	if (dwc->current_dr_role && ((DWC3_IP_IS(DWC3) ||
+ 			DWC3_VER_IS_PRIOR(DWC31, 190A)) &&
+-			dwc->desired_dr_role != DWC3_GCTL_PRTCAP_OTG)) {
++			desired_dr_role != DWC3_GCTL_PRTCAP_OTG)) {
+ 		reg = dwc3_readl(dwc->regs, DWC3_GCTL);
+ 		reg |= DWC3_GCTL_CORESOFTRESET;
+ 		dwc3_writel(dwc->regs, DWC3_GCTL, reg);
+@@ -182,11 +186,11 @@ static void __dwc3_set_mode(struct work_struct *work)
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 
+-	dwc3_set_prtcap(dwc, dwc->desired_dr_role);
++	dwc3_set_prtcap(dwc, desired_dr_role);
+ 
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
+-	switch (dwc->desired_dr_role) {
++	switch (desired_dr_role) {
+ 	case DWC3_GCTL_PRTCAP_HOST:
+ 		ret = dwc3_host_init(dwc);
+ 		if (ret) {
+@@ -956,8 +960,13 @@ static int dwc3_core_init(struct dwc3 *dwc)
+ 
+ 	if (!dwc->ulpi_ready) {
+ 		ret = dwc3_core_ulpi_init(dwc);
+-		if (ret)
++		if (ret) {
++			if (ret == -ETIMEDOUT) {
++				dwc3_core_soft_reset(dwc);
++				ret = -EPROBE_DEFER;
++			}
+ 			goto err0;
++		}
+ 		dwc->ulpi_ready = true;
+ 	}
+ 
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index ca3a35fd8f746..528e36cc58ead 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -258,7 +258,8 @@ static int dwc3_qcom_interconnect_init(struct dwc3_qcom *qcom)
+ 	if (IS_ERR(qcom->icc_path_apps)) {
+ 		dev_err(dev, "failed to get apps-usb path: %ld\n",
+ 				PTR_ERR(qcom->icc_path_apps));
+-		return PTR_ERR(qcom->icc_path_apps);
++		ret = PTR_ERR(qcom->icc_path_apps);
++		goto put_path_ddr;
+ 	}
+ 
+ 	if (usb_get_maximum_speed(&qcom->dwc3->dev) >= USB_SPEED_SUPER ||
+@@ -271,17 +272,23 @@ static int dwc3_qcom_interconnect_init(struct dwc3_qcom *qcom)
+ 
+ 	if (ret) {
+ 		dev_err(dev, "failed to set bandwidth for usb-ddr path: %d\n", ret);
+-		return ret;
++		goto put_path_apps;
+ 	}
+ 
+ 	ret = icc_set_bw(qcom->icc_path_apps,
+ 		APPS_USB_AVG_BW, APPS_USB_PEAK_BW);
+ 	if (ret) {
+ 		dev_err(dev, "failed to set bandwidth for apps-usb path: %d\n", ret);
+-		return ret;
++		goto put_path_apps;
+ 	}
+ 
+ 	return 0;
++
++put_path_apps:
++	icc_put(qcom->icc_path_apps);
++put_path_ddr:
++	icc_put(qcom->icc_path_ddr);
++	return ret;
+ }
+ 
+ /**
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index 6742271cd6e6a..e7cf56b13c643 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -45,12 +45,25 @@ struct f_hidg {
+ 	unsigned short			report_desc_length;
+ 	char				*report_desc;
+ 	unsigned short			report_length;
++	/*
++	 * use_out_ep - if true, the OUT Endpoint (interrupt out method)
++	 *              will be used to receive reports from the host
++	 *              using functions with the "intout" suffix.
++	 *              Otherwise, the OUT Endpoint will not be configured
++	 *              and the SETUP/SET_REPORT method ("ssreport" suffix)
++	 *              will be used to receive reports.
++	 */
++	bool				use_out_ep;
+ 
+ 	/* recv report */
+-	struct list_head		completed_out_req;
+ 	spinlock_t			read_spinlock;
+ 	wait_queue_head_t		read_queue;
++	/* recv report - interrupt out only (use_out_ep == 1) */
++	struct list_head		completed_out_req;
+ 	unsigned int			qlen;
++	/* recv report - setup set_report only (use_out_ep == 0) */
++	char				*set_report_buf;
++	unsigned int			set_report_length;
+ 
+ 	/* send report */
+ 	spinlock_t			write_spinlock;
+@@ -58,7 +71,7 @@ struct f_hidg {
+ 	wait_queue_head_t		write_queue;
+ 	struct usb_request		*req;
+ 
+-	int				minor;
++	struct device			dev;
+ 	struct cdev			cdev;
+ 	struct usb_function		func;
+ 
+@@ -71,6 +84,14 @@ static inline struct f_hidg *func_to_hidg(struct usb_function *f)
+ 	return container_of(f, struct f_hidg, func);
+ }
+ 
++static void hidg_release(struct device *dev)
++{
++	struct f_hidg *hidg = container_of(dev, struct f_hidg, dev);
++
++	kfree(hidg->set_report_buf);
++	kfree(hidg);
++}
++
+ /*-------------------------------------------------------------------------*/
+ /*                           Static descriptors                            */
+ 
+@@ -79,7 +100,7 @@ static struct usb_interface_descriptor hidg_interface_desc = {
+ 	.bDescriptorType	= USB_DT_INTERFACE,
+ 	/* .bInterfaceNumber	= DYNAMIC */
+ 	.bAlternateSetting	= 0,
+-	.bNumEndpoints		= 2,
++	/* .bNumEndpoints	= DYNAMIC (depends on use_out_ep) */
+ 	.bInterfaceClass	= USB_CLASS_HID,
+ 	/* .bInterfaceSubClass	= DYNAMIC */
+ 	/* .bInterfaceProtocol	= DYNAMIC */
+@@ -140,7 +161,7 @@ static struct usb_ss_ep_comp_descriptor hidg_ss_out_comp_desc = {
+ 	/* .wBytesPerInterval   = DYNAMIC */
+ };
+ 
+-static struct usb_descriptor_header *hidg_ss_descriptors[] = {
++static struct usb_descriptor_header *hidg_ss_descriptors_intout[] = {
+ 	(struct usb_descriptor_header *)&hidg_interface_desc,
+ 	(struct usb_descriptor_header *)&hidg_desc,
+ 	(struct usb_descriptor_header *)&hidg_ss_in_ep_desc,
+@@ -150,6 +171,14 @@ static struct usb_descriptor_header *hidg_ss_descriptors[] = {
+ 	NULL,
+ };
+ 
++static struct usb_descriptor_header *hidg_ss_descriptors_ssreport[] = {
++	(struct usb_descriptor_header *)&hidg_interface_desc,
++	(struct usb_descriptor_header *)&hidg_desc,
++	(struct usb_descriptor_header *)&hidg_ss_in_ep_desc,
++	(struct usb_descriptor_header *)&hidg_ss_in_comp_desc,
++	NULL,
++};
++
+ /* High-Speed Support */
+ 
+ static struct usb_endpoint_descriptor hidg_hs_in_ep_desc = {
+@@ -176,7 +205,7 @@ static struct usb_endpoint_descriptor hidg_hs_out_ep_desc = {
+ 				      */
+ };
+ 
+-static struct usb_descriptor_header *hidg_hs_descriptors[] = {
++static struct usb_descriptor_header *hidg_hs_descriptors_intout[] = {
+ 	(struct usb_descriptor_header *)&hidg_interface_desc,
+ 	(struct usb_descriptor_header *)&hidg_desc,
+ 	(struct usb_descriptor_header *)&hidg_hs_in_ep_desc,
+@@ -184,6 +213,13 @@ static struct usb_descriptor_header *hidg_hs_descriptors[] = {
+ 	NULL,
+ };
+ 
++static struct usb_descriptor_header *hidg_hs_descriptors_ssreport[] = {
++	(struct usb_descriptor_header *)&hidg_interface_desc,
++	(struct usb_descriptor_header *)&hidg_desc,
++	(struct usb_descriptor_header *)&hidg_hs_in_ep_desc,
++	NULL,
++};
++
+ /* Full-Speed Support */
+ 
+ static struct usb_endpoint_descriptor hidg_fs_in_ep_desc = {
+@@ -210,7 +246,7 @@ static struct usb_endpoint_descriptor hidg_fs_out_ep_desc = {
+ 				       */
+ };
+ 
+-static struct usb_descriptor_header *hidg_fs_descriptors[] = {
++static struct usb_descriptor_header *hidg_fs_descriptors_intout[] = {
+ 	(struct usb_descriptor_header *)&hidg_interface_desc,
+ 	(struct usb_descriptor_header *)&hidg_desc,
+ 	(struct usb_descriptor_header *)&hidg_fs_in_ep_desc,
+@@ -218,6 +254,13 @@ static struct usb_descriptor_header *hidg_fs_descriptors[] = {
+ 	NULL,
+ };
+ 
++static struct usb_descriptor_header *hidg_fs_descriptors_ssreport[] = {
++	(struct usb_descriptor_header *)&hidg_interface_desc,
++	(struct usb_descriptor_header *)&hidg_desc,
++	(struct usb_descriptor_header *)&hidg_fs_in_ep_desc,
++	NULL,
++};
++
+ /*-------------------------------------------------------------------------*/
+ /*                                 Strings                                 */
+ 
+@@ -241,8 +284,8 @@ static struct usb_gadget_strings *ct_func_strings[] = {
+ /*-------------------------------------------------------------------------*/
+ /*                              Char Device                                */
+ 
+-static ssize_t f_hidg_read(struct file *file, char __user *buffer,
+-			size_t count, loff_t *ptr)
++static ssize_t f_hidg_intout_read(struct file *file, char __user *buffer,
++				  size_t count, loff_t *ptr)
+ {
+ 	struct f_hidg *hidg = file->private_data;
+ 	struct f_hidg_req_list *list;
+@@ -255,15 +298,15 @@ static ssize_t f_hidg_read(struct file *file, char __user *buffer,
+ 
+ 	spin_lock_irqsave(&hidg->read_spinlock, flags);
+ 
+-#define READ_COND (!list_empty(&hidg->completed_out_req))
++#define READ_COND_INTOUT (!list_empty(&hidg->completed_out_req))
+ 
+ 	/* wait for at least one buffer to complete */
+-	while (!READ_COND) {
++	while (!READ_COND_INTOUT) {
+ 		spin_unlock_irqrestore(&hidg->read_spinlock, flags);
+ 		if (file->f_flags & O_NONBLOCK)
+ 			return -EAGAIN;
+ 
+-		if (wait_event_interruptible(hidg->read_queue, READ_COND))
++		if (wait_event_interruptible(hidg->read_queue, READ_COND_INTOUT))
+ 			return -ERESTARTSYS;
+ 
+ 		spin_lock_irqsave(&hidg->read_spinlock, flags);
+@@ -313,6 +356,60 @@ static ssize_t f_hidg_read(struct file *file, char __user *buffer,
+ 	return count;
+ }
+ 
++#define READ_COND_SSREPORT (hidg->set_report_buf != NULL)
++
++static ssize_t f_hidg_ssreport_read(struct file *file, char __user *buffer,
++				    size_t count, loff_t *ptr)
++{
++	struct f_hidg *hidg = file->private_data;
++	char *tmp_buf = NULL;
++	unsigned long flags;
++
++	if (!count)
++		return 0;
++
++	spin_lock_irqsave(&hidg->read_spinlock, flags);
++
++	while (!READ_COND_SSREPORT) {
++		spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++		if (file->f_flags & O_NONBLOCK)
++			return -EAGAIN;
++
++		if (wait_event_interruptible(hidg->read_queue, READ_COND_SSREPORT))
++			return -ERESTARTSYS;
++
++		spin_lock_irqsave(&hidg->read_spinlock, flags);
++	}
++
++	count = min_t(unsigned int, count, hidg->set_report_length);
++	tmp_buf = hidg->set_report_buf;
++	hidg->set_report_buf = NULL;
++
++	spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++
++	if (tmp_buf != NULL) {
++		count -= copy_to_user(buffer, tmp_buf, count);
++		kfree(tmp_buf);
++	} else {
++		count = -ENOMEM;
++	}
++
++	wake_up(&hidg->read_queue);
++
++	return count;
++}
++
++static ssize_t f_hidg_read(struct file *file, char __user *buffer,
++			   size_t count, loff_t *ptr)
++{
++	struct f_hidg *hidg = file->private_data;
++
++	if (hidg->use_out_ep)
++		return f_hidg_intout_read(file, buffer, count, ptr);
++	else
++		return f_hidg_ssreport_read(file, buffer, count, ptr);
++}
++
+ static void f_hidg_req_complete(struct usb_ep *ep, struct usb_request *req)
+ {
+ 	struct f_hidg *hidg = (struct f_hidg *)ep->driver_data;
+@@ -433,14 +530,20 @@ static __poll_t f_hidg_poll(struct file *file, poll_table *wait)
+ 	if (WRITE_COND)
+ 		ret |= EPOLLOUT | EPOLLWRNORM;
+ 
+-	if (READ_COND)
+-		ret |= EPOLLIN | EPOLLRDNORM;
++	if (hidg->use_out_ep) {
++		if (READ_COND_INTOUT)
++			ret |= EPOLLIN | EPOLLRDNORM;
++	} else {
++		if (READ_COND_SSREPORT)
++			ret |= EPOLLIN | EPOLLRDNORM;
++	}
+ 
+ 	return ret;
+ }
+ 
+ #undef WRITE_COND
+-#undef READ_COND
++#undef READ_COND_SSREPORT
++#undef READ_COND_INTOUT
+ 
+ static int f_hidg_release(struct inode *inode, struct file *fd)
+ {
+@@ -467,7 +570,7 @@ static inline struct usb_request *hidg_alloc_ep_req(struct usb_ep *ep,
+ 	return alloc_ep_req(ep, length);
+ }
+ 
+-static void hidg_set_report_complete(struct usb_ep *ep, struct usb_request *req)
++static void hidg_intout_complete(struct usb_ep *ep, struct usb_request *req)
+ {
+ 	struct f_hidg *hidg = (struct f_hidg *) req->context;
+ 	struct usb_composite_dev *cdev = hidg->func.config->cdev;
+@@ -502,6 +605,37 @@ free_req:
+ 	}
+ }
+ 
++static void hidg_ssreport_complete(struct usb_ep *ep, struct usb_request *req)
++{
++	struct f_hidg *hidg = (struct f_hidg *)req->context;
++	struct usb_composite_dev *cdev = hidg->func.config->cdev;
++	char *new_buf = NULL;
++	unsigned long flags;
++
++	if (req->status != 0 || req->buf == NULL || req->actual == 0) {
++		ERROR(cdev,
++		      "%s FAILED: status=%d, buf=%p, actual=%d\n",
++		      __func__, req->status, req->buf, req->actual);
++		return;
++	}
++
++	spin_lock_irqsave(&hidg->read_spinlock, flags);
++
++	new_buf = krealloc(hidg->set_report_buf, req->actual, GFP_ATOMIC);
++	if (new_buf == NULL) {
++		spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++		return;
++	}
++	hidg->set_report_buf = new_buf;
++
++	hidg->set_report_length = req->actual;
++	memcpy(hidg->set_report_buf, req->buf, req->actual);
++
++	spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++
++	wake_up(&hidg->read_queue);
++}
++
+ static int hidg_setup(struct usb_function *f,
+ 		const struct usb_ctrlrequest *ctrl)
+ {
+@@ -549,7 +683,11 @@ static int hidg_setup(struct usb_function *f,
+ 	case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
+ 		  | HID_REQ_SET_REPORT):
+ 		VDBG(cdev, "set_report | wLength=%d\n", ctrl->wLength);
+-		goto stall;
++		if (hidg->use_out_ep)
++			goto stall;
++		req->complete = hidg_ssreport_complete;
++		req->context  = hidg;
++		goto respond;
+ 		break;
+ 
+ 	case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
+@@ -637,15 +775,18 @@ static void hidg_disable(struct usb_function *f)
+ 	unsigned long flags;
+ 
+ 	usb_ep_disable(hidg->in_ep);
+-	usb_ep_disable(hidg->out_ep);
+ 
+-	spin_lock_irqsave(&hidg->read_spinlock, flags);
+-	list_for_each_entry_safe(list, next, &hidg->completed_out_req, list) {
+-		free_ep_req(hidg->out_ep, list->req);
+-		list_del(&list->list);
+-		kfree(list);
++	if (hidg->out_ep) {
++		usb_ep_disable(hidg->out_ep);
++
++		spin_lock_irqsave(&hidg->read_spinlock, flags);
++		list_for_each_entry_safe(list, next, &hidg->completed_out_req, list) {
++			free_ep_req(hidg->out_ep, list->req);
++			list_del(&list->list);
++			kfree(list);
++		}
++		spin_unlock_irqrestore(&hidg->read_spinlock, flags);
+ 	}
+-	spin_unlock_irqrestore(&hidg->read_spinlock, flags);
+ 
+ 	spin_lock_irqsave(&hidg->write_spinlock, flags);
+ 	if (!hidg->write_pending) {
+@@ -691,8 +832,7 @@ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ 		}
+ 	}
+ 
+-
+-	if (hidg->out_ep != NULL) {
++	if (hidg->use_out_ep && hidg->out_ep != NULL) {
+ 		/* restart endpoint */
+ 		usb_ep_disable(hidg->out_ep);
+ 
+@@ -717,7 +857,7 @@ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ 					hidg_alloc_ep_req(hidg->out_ep,
+ 							  hidg->report_length);
+ 			if (req) {
+-				req->complete = hidg_set_report_complete;
++				req->complete = hidg_intout_complete;
+ 				req->context  = hidg;
+ 				status = usb_ep_queue(hidg->out_ep, req,
+ 						      GFP_ATOMIC);
+@@ -743,7 +883,8 @@ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ 	}
+ 	return 0;
+ disable_out_ep:
+-	usb_ep_disable(hidg->out_ep);
++	if (hidg->out_ep)
++		usb_ep_disable(hidg->out_ep);
+ free_req_in:
+ 	if (req_in)
+ 		free_ep_req(hidg->in_ep, req_in);
+@@ -771,9 +912,7 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 	struct usb_ep		*ep;
+ 	struct f_hidg		*hidg = func_to_hidg(f);
+ 	struct usb_string	*us;
+-	struct device		*device;
+ 	int			status;
+-	dev_t			dev;
+ 
+ 	/* maybe allocate device-global string IDs, and patch descriptors */
+ 	us = usb_gstrings_attach(c->cdev, ct_func_strings,
+@@ -795,14 +934,21 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 		goto fail;
+ 	hidg->in_ep = ep;
+ 
+-	ep = usb_ep_autoconfig(c->cdev->gadget, &hidg_fs_out_ep_desc);
+-	if (!ep)
+-		goto fail;
+-	hidg->out_ep = ep;
++	hidg->out_ep = NULL;
++	if (hidg->use_out_ep) {
++		ep = usb_ep_autoconfig(c->cdev->gadget, &hidg_fs_out_ep_desc);
++		if (!ep)
++			goto fail;
++		hidg->out_ep = ep;
++	}
++
++	/* used only if use_out_ep == 1 */
++	hidg->set_report_buf = NULL;
+ 
+ 	/* set descriptor dynamic values */
+ 	hidg_interface_desc.bInterfaceSubClass = hidg->bInterfaceSubClass;
+ 	hidg_interface_desc.bInterfaceProtocol = hidg->bInterfaceProtocol;
++	hidg_interface_desc.bNumEndpoints = hidg->use_out_ep ? 2 : 1;
+ 	hidg->protocol = HID_REPORT_PROTOCOL;
+ 	hidg->idle = 1;
+ 	hidg_ss_in_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length);
+@@ -833,9 +979,19 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 	hidg_ss_out_ep_desc.bEndpointAddress =
+ 		hidg_fs_out_ep_desc.bEndpointAddress;
+ 
+-	status = usb_assign_descriptors(f, hidg_fs_descriptors,
+-			hidg_hs_descriptors, hidg_ss_descriptors,
+-			hidg_ss_descriptors);
++	if (hidg->use_out_ep)
++		status = usb_assign_descriptors(f,
++			hidg_fs_descriptors_intout,
++			hidg_hs_descriptors_intout,
++			hidg_ss_descriptors_intout,
++			hidg_ss_descriptors_intout);
++	else
++		status = usb_assign_descriptors(f,
++			hidg_fs_descriptors_ssreport,
++			hidg_hs_descriptors_ssreport,
++			hidg_ss_descriptors_ssreport,
++			hidg_ss_descriptors_ssreport);
++
+ 	if (status)
+ 		goto fail;
+ 
+@@ -849,21 +1005,11 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 
+ 	/* create char device */
+ 	cdev_init(&hidg->cdev, &f_hidg_fops);
+-	dev = MKDEV(major, hidg->minor);
+-	status = cdev_add(&hidg->cdev, dev, 1);
++	status = cdev_device_add(&hidg->cdev, &hidg->dev);
+ 	if (status)
+ 		goto fail_free_descs;
+ 
+-	device = device_create(hidg_class, NULL, dev, NULL,
+-			       "%s%d", "hidg", hidg->minor);
+-	if (IS_ERR(device)) {
+-		status = PTR_ERR(device);
+-		goto del;
+-	}
+-
+ 	return 0;
+-del:
+-	cdev_del(&hidg->cdev);
+ fail_free_descs:
+ 	usb_free_all_descriptors(f);
+ fail:
+@@ -950,6 +1096,7 @@ CONFIGFS_ATTR(f_hid_opts_, name)
+ 
+ F_HID_OPT(subclass, 8, 255);
+ F_HID_OPT(protocol, 8, 255);
++F_HID_OPT(no_out_endpoint, 8, 1);
+ F_HID_OPT(report_length, 16, 65535);
+ 
+ static ssize_t f_hid_opts_report_desc_show(struct config_item *item, char *page)
+@@ -1009,6 +1156,7 @@ CONFIGFS_ATTR_RO(f_hid_opts_, dev);
+ static struct configfs_attribute *hid_attrs[] = {
+ 	&f_hid_opts_attr_subclass,
+ 	&f_hid_opts_attr_protocol,
++	&f_hid_opts_attr_no_out_endpoint,
+ 	&f_hid_opts_attr_report_length,
+ 	&f_hid_opts_attr_report_desc,
+ 	&f_hid_opts_attr_dev,
+@@ -1092,8 +1240,7 @@ static void hidg_free(struct usb_function *f)
+ 
+ 	hidg = func_to_hidg(f);
+ 	opts = container_of(f->fi, struct f_hid_opts, func_inst);
+-	kfree(hidg->report_desc);
+-	kfree(hidg);
++	put_device(&hidg->dev);
+ 	mutex_lock(&opts->lock);
+ 	--opts->refcnt;
+ 	mutex_unlock(&opts->lock);
+@@ -1103,8 +1250,7 @@ static void hidg_unbind(struct usb_configuration *c, struct usb_function *f)
+ {
+ 	struct f_hidg *hidg = func_to_hidg(f);
+ 
+-	device_destroy(hidg_class, MKDEV(major, hidg->minor));
+-	cdev_del(&hidg->cdev);
++	cdev_device_del(&hidg->cdev, &hidg->dev);
+ 
+ 	usb_free_all_descriptors(f);
+ }
+@@ -1113,6 +1259,7 @@ static struct usb_function *hidg_alloc(struct usb_function_instance *fi)
+ {
+ 	struct f_hidg *hidg;
+ 	struct f_hid_opts *opts;
++	int ret;
+ 
+ 	/* allocate and initialize one new instance */
+ 	hidg = kzalloc(sizeof(*hidg), GFP_KERNEL);
+@@ -1124,21 +1271,33 @@ static struct usb_function *hidg_alloc(struct usb_function_instance *fi)
+ 	mutex_lock(&opts->lock);
+ 	++opts->refcnt;
+ 
+-	hidg->minor = opts->minor;
++	device_initialize(&hidg->dev);
++	hidg->dev.release = hidg_release;
++	hidg->dev.class = hidg_class;
++	hidg->dev.devt = MKDEV(major, opts->minor);
++	ret = dev_set_name(&hidg->dev, "hidg%d", opts->minor);
++	if (ret) {
++		--opts->refcnt;
++		mutex_unlock(&opts->lock);
++		return ERR_PTR(ret);
++	}
++
+ 	hidg->bInterfaceSubClass = opts->subclass;
+ 	hidg->bInterfaceProtocol = opts->protocol;
+ 	hidg->report_length = opts->report_length;
+ 	hidg->report_desc_length = opts->report_desc_length;
+ 	if (opts->report_desc) {
+-		hidg->report_desc = kmemdup(opts->report_desc,
+-					    opts->report_desc_length,
+-					    GFP_KERNEL);
++		hidg->report_desc = devm_kmemdup(&hidg->dev, opts->report_desc,
++						 opts->report_desc_length,
++						 GFP_KERNEL);
+ 		if (!hidg->report_desc) {
+-			kfree(hidg);
++			put_device(&hidg->dev);
++			--opts->refcnt;
+ 			mutex_unlock(&opts->lock);
+ 			return ERR_PTR(-ENOMEM);
+ 		}
+ 	}
++	hidg->use_out_ep = !opts->no_out_endpoint;
+ 
+ 	mutex_unlock(&opts->lock);
+ 
+diff --git a/drivers/usb/gadget/function/u_hid.h b/drivers/usb/gadget/function/u_hid.h
+index 84e6da302499a..fa631f34bb3d7 100644
+--- a/drivers/usb/gadget/function/u_hid.h
++++ b/drivers/usb/gadget/function/u_hid.h
+@@ -20,6 +20,7 @@ struct f_hid_opts {
+ 	int				minor;
+ 	unsigned char			subclass;
+ 	unsigned char			protocol;
++	unsigned char			no_out_endpoint;
+ 	unsigned short			report_length;
+ 	unsigned short			report_desc_length;
+ 	unsigned char			*report_desc;
+diff --git a/drivers/usb/gadget/udc/fotg210-udc.c b/drivers/usb/gadget/udc/fotg210-udc.c
+index 75bf446f4a666..11712bc896352 100644
+--- a/drivers/usb/gadget/udc/fotg210-udc.c
++++ b/drivers/usb/gadget/udc/fotg210-udc.c
+@@ -629,10 +629,10 @@ static void fotg210_request_error(struct fotg210_udc *fotg210)
+ static void fotg210_set_address(struct fotg210_udc *fotg210,
+ 				struct usb_ctrlrequest *ctrl)
+ {
+-	if (ctrl->wValue >= 0x0100) {
++	if (le16_to_cpu(ctrl->wValue) >= 0x0100) {
+ 		fotg210_request_error(fotg210);
+ 	} else {
+-		fotg210_set_dev_addr(fotg210, ctrl->wValue);
++		fotg210_set_dev_addr(fotg210, le16_to_cpu(ctrl->wValue));
+ 		fotg210_set_cxdone(fotg210);
+ 	}
+ }
+@@ -713,17 +713,17 @@ static void fotg210_get_status(struct fotg210_udc *fotg210,
+ 
+ 	switch (ctrl->bRequestType & USB_RECIP_MASK) {
+ 	case USB_RECIP_DEVICE:
+-		fotg210->ep0_data = 1 << USB_DEVICE_SELF_POWERED;
++		fotg210->ep0_data = cpu_to_le16(1 << USB_DEVICE_SELF_POWERED);
+ 		break;
+ 	case USB_RECIP_INTERFACE:
+-		fotg210->ep0_data = 0;
++		fotg210->ep0_data = cpu_to_le16(0);
+ 		break;
+ 	case USB_RECIP_ENDPOINT:
+ 		epnum = ctrl->wIndex & USB_ENDPOINT_NUMBER_MASK;
+ 		if (epnum)
+ 			fotg210->ep0_data =
+-				fotg210_is_epnstall(fotg210->ep[epnum])
+-				<< USB_ENDPOINT_HALT;
++				cpu_to_le16(fotg210_is_epnstall(fotg210->ep[epnum])
++					    << USB_ENDPOINT_HALT);
+ 		else
+ 			fotg210_request_error(fotg210);
+ 		break;
+diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
+index c273eee35aaa7..8dc657c71541c 100644
+--- a/drivers/usb/musb/musb_gadget.c
++++ b/drivers/usb/musb/musb_gadget.c
+@@ -1628,8 +1628,6 @@ static int musb_gadget_vbus_draw(struct usb_gadget *gadget, unsigned mA)
+ {
+ 	struct musb	*musb = gadget_to_musb(gadget);
+ 
+-	if (!musb->xceiv->set_power)
+-		return -EOPNOTSUPP;
+ 	return usb_phy_set_power(musb->xceiv, mA);
+ }
+ 
+diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
+index 33b637d0d8d99..5cc20275335d1 100644
+--- a/drivers/usb/roles/class.c
++++ b/drivers/usb/roles/class.c
+@@ -106,10 +106,13 @@ usb_role_switch_is_parent(struct fwnode_handle *fwnode)
+ 	struct fwnode_handle *parent = fwnode_get_parent(fwnode);
+ 	struct device *dev;
+ 
+-	if (!parent || !fwnode_property_present(parent, "usb-role-switch"))
++	if (!fwnode_property_present(parent, "usb-role-switch")) {
++		fwnode_handle_put(parent);
+ 		return NULL;
++	}
+ 
+ 	dev = class_find_device_by_fwnode(role_class, parent);
++	fwnode_handle_put(parent);
+ 	return dev ? to_role_switch(dev) : ERR_PTR(-EPROBE_DEFER);
+ }
+ 
+diff --git a/drivers/usb/storage/alauda.c b/drivers/usb/storage/alauda.c
+index 20b857e97e60c..7e4ce0e7e05a7 100644
+--- a/drivers/usb/storage/alauda.c
++++ b/drivers/usb/storage/alauda.c
+@@ -438,6 +438,8 @@ static int alauda_init_media(struct us_data *us)
+ 		+ MEDIA_INFO(us).blockshift + MEDIA_INFO(us).pageshift);
+ 	MEDIA_INFO(us).pba_to_lba = kcalloc(num_zones, sizeof(u16*), GFP_NOIO);
+ 	MEDIA_INFO(us).lba_to_pba = kcalloc(num_zones, sizeof(u16*), GFP_NOIO);
++	if (MEDIA_INFO(us).pba_to_lba == NULL || MEDIA_INFO(us).lba_to_pba == NULL)
++		return USB_STOR_TRANSPORT_ERROR;
+ 
+ 	if (alauda_reset_media(us) != USB_STOR_XFER_GOOD)
+ 		return USB_STOR_TRANSPORT_ERROR;
+diff --git a/drivers/usb/typec/bus.c b/drivers/usb/typec/bus.c
+index e8ddb81cb6df4..f4e7f4d78b565 100644
+--- a/drivers/usb/typec/bus.c
++++ b/drivers/usb/typec/bus.c
+@@ -132,7 +132,7 @@ int typec_altmode_exit(struct typec_altmode *adev)
+ 	if (!adev || !adev->active)
+ 		return 0;
+ 
+-	if (!pdev->ops || !pdev->ops->enter)
++	if (!pdev->ops || !pdev->ops->exit)
+ 		return -EOPNOTSUPP;
+ 
+ 	/* Moving to USB Safe State */
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index 49420e28a1f76..069affa5cb1ee 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -649,8 +649,10 @@ struct tcpci *tcpci_register_port(struct device *dev, struct tcpci_data *data)
+ 		return ERR_PTR(err);
+ 
+ 	tcpci->port = tcpm_register_port(tcpci->dev, &tcpci->tcpc);
+-	if (IS_ERR(tcpci->port))
++	if (IS_ERR(tcpci->port)) {
++		fwnode_handle_put(tcpci->tcpc.fwnode);
+ 		return ERR_CAST(tcpci->port);
++	}
+ 
+ 	return tcpci;
+ }
+@@ -659,6 +661,7 @@ EXPORT_SYMBOL_GPL(tcpci_register_port);
+ void tcpci_unregister_port(struct tcpci *tcpci)
+ {
+ 	tcpm_unregister_port(tcpci->port);
++	fwnode_handle_put(tcpci->tcpc.fwnode);
+ }
+ EXPORT_SYMBOL_GPL(tcpci_unregister_port);
+ 
+diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
+index 6cb5c8e2c8535..4722b7f7a4a2b 100644
+--- a/drivers/usb/typec/tps6598x.c
++++ b/drivers/usb/typec/tps6598x.c
+@@ -564,7 +564,6 @@ static int tps6598x_probe(struct i2c_client *client)
+ 		ret = PTR_ERR(tps->port);
+ 		goto err_role_put;
+ 	}
+-	fwnode_handle_put(fwnode);
+ 
+ 	if (status & TPS_STATUS_PLUG_PRESENT) {
+ 		ret = tps6598x_connect(tps, status);
+@@ -583,6 +582,7 @@ static int tps6598x_probe(struct i2c_client *client)
+ 	}
+ 
+ 	i2c_set_clientdata(client, tps);
++	fwnode_handle_put(fwnode);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
+index e83a7cd15c956..e15ef1a949e00 100644
+--- a/drivers/vfio/platform/vfio_platform_common.c
++++ b/drivers/vfio/platform/vfio_platform_common.c
+@@ -72,12 +72,11 @@ static int vfio_platform_acpi_call_reset(struct vfio_platform_device *vdev,
+ 				  const char **extra_dbg)
+ {
+ #ifdef CONFIG_ACPI
+-	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+ 	struct device *dev = vdev->device;
+ 	acpi_handle handle = ACPI_HANDLE(dev);
+ 	acpi_status acpi_ret;
+ 
+-	acpi_ret = acpi_evaluate_object(handle, "_RST", NULL, &buffer);
++	acpi_ret = acpi_evaluate_object(handle, "_RST", NULL, NULL);
+ 	if (ACPI_FAILURE(acpi_ret)) {
+ 		if (extra_dbg)
+ 			*extra_dbg = acpi_format_exception(acpi_ret);
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index f41463ab4031d..da00a5c57db65 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -2041,7 +2041,7 @@ static int translate_desc(struct vhost_virtqueue *vq, u64 addr, u32 len,
+ 	struct vhost_dev *dev = vq->dev;
+ 	struct vhost_iotlb *umem = dev->iotlb ? dev->iotlb : dev->umem;
+ 	struct iovec *_iov;
+-	u64 s = 0;
++	u64 s = 0, last = addr + len - 1;
+ 	int ret = 0;
+ 
+ 	while ((u64)len > s) {
+@@ -2051,7 +2051,7 @@ static int translate_desc(struct vhost_virtqueue *vq, u64 addr, u32 len,
+ 			break;
+ 		}
+ 
+-		map = vhost_iotlb_itree_first(umem, addr, addr + len - 1);
++		map = vhost_iotlb_itree_first(umem, addr, last);
+ 		if (map == NULL || map->start > addr) {
+ 			if (umem != dev->iotlb) {
+ 				ret = -EFAULT;
+diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
+index 5a0340c85dc6b..48f4ec2ba40a0 100644
+--- a/drivers/vhost/vringh.c
++++ b/drivers/vhost/vringh.c
+@@ -1077,7 +1077,7 @@ static int iotlb_translate(const struct vringh *vrh,
+ 	struct vhost_iotlb_map *map;
+ 	struct vhost_iotlb *iotlb = vrh->iotlb;
+ 	int ret = 0;
+-	u64 s = 0;
++	u64 s = 0, last = addr + len - 1;
+ 
+ 	while (len > s) {
+ 		u64 size, pa, pfn;
+@@ -1087,8 +1087,7 @@ static int iotlb_translate(const struct vringh *vrh,
+ 			break;
+ 		}
+ 
+-		map = vhost_iotlb_itree_first(iotlb, addr,
+-					      addr + len - 1);
++		map = vhost_iotlb_itree_first(iotlb, addr, last);
+ 		if (!map || map->start > addr) {
+ 			ret = -EINVAL;
+ 			break;
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index b0153617fe0e0..7bce5f982e58a 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -854,7 +854,14 @@ static int __init vhost_vsock_init(void)
+ 				  VSOCK_TRANSPORT_F_H2G);
+ 	if (ret < 0)
+ 		return ret;
+-	return misc_register(&vhost_vsock_misc);
++
++	ret = misc_register(&vhost_vsock_misc);
++	if (ret) {
++		vsock_core_unregister(&vhost_transport.transport);
++		return ret;
++	}
++
++	return 0;
+ };
+ 
+ static void __exit vhost_vsock_exit(void)
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index 4f02db65dedec..3ac78db17e466 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -2216,7 +2216,6 @@ config FB_SSD1307
+ 	select FB_SYS_COPYAREA
+ 	select FB_SYS_IMAGEBLIT
+ 	select FB_DEFERRED_IO
+-	select PWM
+ 	select FB_BACKLIGHT
+ 	help
+ 	  This driver implements support for the Solomon SSD1307
+diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
+index 40baa79f8046e..f0a66a344d870 100644
+--- a/drivers/video/fbdev/hyperv_fb.c
++++ b/drivers/video/fbdev/hyperv_fb.c
+@@ -798,12 +798,18 @@ static void hvfb_ondemand_refresh_throttle(struct hvfb_par *par,
+ static int hvfb_on_panic(struct notifier_block *nb,
+ 			 unsigned long e, void *p)
+ {
++	struct hv_device *hdev;
+ 	struct hvfb_par *par;
+ 	struct fb_info *info;
+ 
+ 	par = container_of(nb, struct hvfb_par, hvfb_panic_nb);
+-	par->synchronous_fb = true;
+ 	info = par->info;
++	hdev = device_to_hv_device(info->device);
++
++	if (hv_ringbuffer_spinlock_busy(hdev->channel))
++		return NOTIFY_DONE;
++
++	par->synchronous_fb = true;
+ 	if (par->need_docopy)
+ 		hvfb_docopy(par, 0, dio_fb_size);
+ 	synthvid_update(info, 0, 0, INT_MAX, INT_MAX);
+diff --git a/drivers/video/fbdev/matrox/matroxfb_base.c b/drivers/video/fbdev/matrox/matroxfb_base.c
+index daaa99818d3b7..566f74832bc8a 100644
+--- a/drivers/video/fbdev/matrox/matroxfb_base.c
++++ b/drivers/video/fbdev/matrox/matroxfb_base.c
+@@ -1377,8 +1377,8 @@ static struct video_board vbG200 = {
+ 	.lowlevel = &matrox_G100
+ };
+ static struct video_board vbG200eW = {
+-	.maxvram = 0x100000,
+-	.maxdisplayable = 0x800000,
++	.maxvram = 0x1000000,
++	.maxdisplayable = 0x0800000,
+ 	.accelID = FB_ACCEL_MATROX_MGAG200,
+ 	.lowlevel = &matrox_G100
+ };
+diff --git a/drivers/video/fbdev/pm2fb.c b/drivers/video/fbdev/pm2fb.c
+index c12d46e283598..87b6a929a6b39 100644
+--- a/drivers/video/fbdev/pm2fb.c
++++ b/drivers/video/fbdev/pm2fb.c
+@@ -1529,8 +1529,10 @@ static int pm2fb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	}
+ 
+ 	info = framebuffer_alloc(sizeof(struct pm2fb_par), &pdev->dev);
+-	if (!info)
+-		return -ENOMEM;
++	if (!info) {
++		err = -ENOMEM;
++		goto err_exit_disable;
++	}
+ 	default_par = info->par;
+ 
+ 	switch (pdev->device) {
+@@ -1711,6 +1713,8 @@ static int pm2fb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	release_mem_region(pm2fb_fix.mmio_start, pm2fb_fix.mmio_len);
+  err_exit_neither:
+ 	framebuffer_release(info);
++ err_exit_disable:
++	pci_disable_device(pdev);
+ 	return retval;
+ }
+ 
+@@ -1737,6 +1741,7 @@ static void pm2fb_remove(struct pci_dev *pdev)
+ 	fb_dealloc_cmap(&info->cmap);
+ 	kfree(info->pixmap.addr);
+ 	framebuffer_release(info);
++	pci_disable_device(pdev);
+ }
+ 
+ static const struct pci_device_id pm2fb_id_table[] = {
+diff --git a/drivers/video/fbdev/uvesafb.c b/drivers/video/fbdev/uvesafb.c
+index def14ac0ebe14..661f12742e4f0 100644
+--- a/drivers/video/fbdev/uvesafb.c
++++ b/drivers/video/fbdev/uvesafb.c
+@@ -1756,6 +1756,7 @@ static int uvesafb_probe(struct platform_device *dev)
+ out_unmap:
+ 	iounmap(info->screen_base);
+ out_mem:
++	arch_phys_wc_del(par->mtrr_handle);
+ 	release_mem_region(info->fix.smem_start, info->fix.smem_len);
+ out_reg:
+ 	release_region(0x3c0, 32);
+diff --git a/drivers/video/fbdev/vermilion/vermilion.c b/drivers/video/fbdev/vermilion/vermilion.c
+index ff61605b8764f..a543643ce014d 100644
+--- a/drivers/video/fbdev/vermilion/vermilion.c
++++ b/drivers/video/fbdev/vermilion/vermilion.c
+@@ -277,8 +277,10 @@ static int vmlfb_get_gpu(struct vml_par *par)
+ 
+ 	mutex_unlock(&vml_mutex);
+ 
+-	if (pci_enable_device(par->gpu) < 0)
++	if (pci_enable_device(par->gpu) < 0) {
++		pci_dev_put(par->gpu);
+ 		return -ENODEV;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/video/fbdev/via/via-core.c b/drivers/video/fbdev/via/via-core.c
+index 89d75079b7307..0363b478fa3ef 100644
+--- a/drivers/video/fbdev/via/via-core.c
++++ b/drivers/video/fbdev/via/via-core.c
+@@ -725,7 +725,14 @@ static int __init via_core_init(void)
+ 		return ret;
+ 	viafb_i2c_init();
+ 	viafb_gpio_init();
+-	return pci_register_driver(&via_driver);
++	ret = pci_register_driver(&via_driver);
++	if (ret) {
++		viafb_gpio_exit();
++		viafb_i2c_exit();
++		return ret;
++	}
++
++	return 0;
+ }
+ 
+ static void __exit via_core_exit(void)
+diff --git a/drivers/vme/bridges/vme_fake.c b/drivers/vme/bridges/vme_fake.c
+index 6a1bc284f297c..eae78366eb028 100644
+--- a/drivers/vme/bridges/vme_fake.c
++++ b/drivers/vme/bridges/vme_fake.c
+@@ -1073,6 +1073,8 @@ static int __init fake_init(void)
+ 
+ 	/* We need a fake parent device */
+ 	vme_root = __root_device_register("vme", THIS_MODULE);
++	if (IS_ERR(vme_root))
++		return PTR_ERR(vme_root);
+ 
+ 	/* If we want to support more than one bridge at some point, we need to
+ 	 * dynamically allocate this so we get one per device.
+diff --git a/drivers/vme/bridges/vme_tsi148.c b/drivers/vme/bridges/vme_tsi148.c
+index 50ae26977a027..5ccda1a363ecc 100644
+--- a/drivers/vme/bridges/vme_tsi148.c
++++ b/drivers/vme/bridges/vme_tsi148.c
+@@ -1771,6 +1771,7 @@ static int tsi148_dma_list_add(struct vme_dma_list *list,
+ 	return 0;
+ 
+ err_dma:
++	list_del(&entry->list);
+ err_dest:
+ err_source:
+ err_align:
+diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
+index cd5f2f09468e2..28537a1a0e0b9 100644
+--- a/drivers/xen/privcmd.c
++++ b/drivers/xen/privcmd.c
+@@ -760,7 +760,7 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
+ 		goto out;
+ 	}
+ 
+-	pfns = kcalloc(kdata.num, sizeof(*pfns), GFP_KERNEL);
++	pfns = kcalloc(kdata.num, sizeof(*pfns), GFP_KERNEL | __GFP_NOWARN);
+ 	if (!pfns) {
+ 		rc = -ENOMEM;
+ 		goto out;
+diff --git a/fs/afs/fs_probe.c b/fs/afs/fs_probe.c
+index 04d42e49fc599..def80365fe79d 100644
+--- a/fs/afs/fs_probe.c
++++ b/fs/afs/fs_probe.c
+@@ -360,12 +360,15 @@ void afs_fs_probe_dispatcher(struct work_struct *work)
+ 	unsigned long nowj, timer_at, poll_at;
+ 	bool first_pass = true, set_timer = false;
+ 
+-	if (!net->live)
++	if (!net->live) {
++		afs_dec_servers_outstanding(net);
+ 		return;
++	}
+ 
+ 	_enter("");
+ 
+ 	if (list_empty(&net->fs_probe_fast) && list_empty(&net->fs_probe_slow)) {
++		afs_dec_servers_outstanding(net);
+ 		_leave(" [none]");
+ 		return;
+ 	}
+diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
+index 5764295a3f0ff..8c3b7cc8e2a16 100644
+--- a/fs/binfmt_elf_fdpic.c
++++ b/fs/binfmt_elf_fdpic.c
+@@ -434,8 +434,9 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
+ 	current->mm->start_stack = current->mm->start_brk + stack_size;
+ #endif
+ 
+-	if (create_elf_fdpic_tables(bprm, current->mm,
+-				    &exec_params, &interp_params) < 0)
++	retval = create_elf_fdpic_tables(bprm, current->mm, &exec_params,
++					 &interp_params);
++	if (retval < 0)
+ 		goto error;
+ 
+ 	kdebug("- start_code  %lx", current->mm->start_code);
+diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
+index 11b5bf2419555..ce0047feea729 100644
+--- a/fs/binfmt_misc.c
++++ b/fs/binfmt_misc.c
+@@ -44,10 +44,10 @@ static LIST_HEAD(entries);
+ static int enabled = 1;
+ 
+ enum {Enabled, Magic};
+-#define MISC_FMT_PRESERVE_ARGV0 (1 << 31)
+-#define MISC_FMT_OPEN_BINARY (1 << 30)
+-#define MISC_FMT_CREDENTIALS (1 << 29)
+-#define MISC_FMT_OPEN_FILE (1 << 28)
++#define MISC_FMT_PRESERVE_ARGV0 (1UL << 31)
++#define MISC_FMT_OPEN_BINARY (1UL << 30)
++#define MISC_FMT_CREDENTIALS (1UL << 29)
++#define MISC_FMT_OPEN_FILE (1UL << 28)
+ 
+ typedef struct {
+ 	struct list_head list;
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index 7208ba22e734a..d2f77b1242a13 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -432,6 +432,7 @@ static int add_all_parents(struct btrfs_root *root, struct btrfs_path *path,
+ 	u64 wanted_disk_byte = ref->wanted_disk_byte;
+ 	u64 count = 0;
+ 	u64 data_offset;
++	u8 type;
+ 
+ 	if (level != 0) {
+ 		eb = path->nodes[level];
+@@ -486,6 +487,9 @@ static int add_all_parents(struct btrfs_root *root, struct btrfs_path *path,
+ 			continue;
+ 		}
+ 		fi = btrfs_item_ptr(eb, slot, struct btrfs_file_extent_item);
++		type = btrfs_file_extent_type(eb, fi);
++		if (type == BTRFS_FILE_EXTENT_INLINE)
++			goto next;
+ 		disk_byte = btrfs_file_extent_disk_bytenr(eb, fi);
+ 		data_offset = btrfs_file_extent_offset(eb, fi);
+ 
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index a17076a05c4d9..fc335b5e44df8 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3401,13 +3401,10 @@ static long btrfs_ioctl_dev_info(struct btrfs_fs_info *fs_info,
+ 	di_args->bytes_used = btrfs_device_get_bytes_used(dev);
+ 	di_args->total_bytes = btrfs_device_get_total_bytes(dev);
+ 	memcpy(di_args->uuid, dev->uuid, sizeof(di_args->uuid));
+-	if (dev->name) {
+-		strncpy(di_args->path, rcu_str_deref(dev->name),
+-				sizeof(di_args->path) - 1);
+-		di_args->path[sizeof(di_args->path) - 1] = 0;
+-	} else {
++	if (dev->name)
++		strscpy(di_args->path, rcu_str_deref(dev->name), sizeof(di_args->path));
++	else
+ 		di_args->path[0] = '\0';
+-	}
+ 
+ out:
+ 	rcu_read_unlock();
+diff --git a/fs/btrfs/rcu-string.h b/fs/btrfs/rcu-string.h
+index 5c1a617eb25de..5c2b66d155ef7 100644
+--- a/fs/btrfs/rcu-string.h
++++ b/fs/btrfs/rcu-string.h
+@@ -18,7 +18,11 @@ static inline struct rcu_string *rcu_string_strdup(const char *src, gfp_t mask)
+ 					 (len * sizeof(char)), mask);
+ 	if (!ret)
+ 		return ret;
+-	strncpy(ret->str, src, len);
++	/* Warn if the source got unexpectedly truncated. */
++	if (WARN_ON(strscpy(ret->str, src, len) < 0)) {
++		kfree(ret);
++		return NULL;
++	}
+ 	return ret;
+ }
+ 
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 51562d36fa83d..210496dc2fd49 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -2957,7 +2957,7 @@ int ceph_get_caps(struct file *filp, int need, int want,
+ 
+ 	while (true) {
+ 		flags &= CEPH_FILE_MODE_MASK;
+-		if (atomic_read(&fi->num_locks))
++		if (vfs_inode_has_locks(inode))
+ 			flags |= CHECK_FILELOCK;
+ 		_got = 0;
+ 		ret = try_get_cap_refs(inode, need, want, endoff,
+diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
+index 048a435a29bee..674d6ea89f717 100644
+--- a/fs/ceph/locks.c
++++ b/fs/ceph/locks.c
+@@ -32,18 +32,14 @@ void __init ceph_flock_init(void)
+ 
+ static void ceph_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
+ {
+-	struct ceph_file_info *fi = dst->fl_file->private_data;
+ 	struct inode *inode = file_inode(dst->fl_file);
+ 	atomic_inc(&ceph_inode(inode)->i_filelock_ref);
+-	atomic_inc(&fi->num_locks);
+ }
+ 
+ static void ceph_fl_release_lock(struct file_lock *fl)
+ {
+-	struct ceph_file_info *fi = fl->fl_file->private_data;
+ 	struct inode *inode = file_inode(fl->fl_file);
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+-	atomic_dec(&fi->num_locks);
+ 	if (atomic_dec_and_test(&ci->i_filelock_ref)) {
+ 		/* clear error when all locks are released */
+ 		spin_lock(&ci->i_ceph_lock);
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index 4db305fd2a02a..8716cb618cbb7 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -772,7 +772,6 @@ struct ceph_file_info {
+ 	struct list_head rw_contexts;
+ 
+ 	u32 filp_gen;
+-	atomic_t num_locks;
+ };
+ 
+ struct ceph_dir_file_info {
+diff --git a/fs/char_dev.c b/fs/char_dev.c
+index ba0ded7842a77..3f667292608c0 100644
+--- a/fs/char_dev.c
++++ b/fs/char_dev.c
+@@ -547,7 +547,7 @@ int cdev_device_add(struct cdev *cdev, struct device *dev)
+ 	}
+ 
+ 	rc = device_add(dev);
+-	if (rc)
++	if (rc && dev->devt)
+ 		cdev_del(cdev);
+ 
+ 	return rc;
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index f442ef8b65dad..0d8d7b57636f7 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -619,9 +619,15 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
+ 	seq_printf(s, ",echo_interval=%lu",
+ 			tcon->ses->server->echo_interval / HZ);
+ 
+-	/* Only display max_credits if it was overridden on mount */
++	/* Only display the following if overridden on mount */
+ 	if (tcon->ses->server->max_credits != SMB2_MAX_CREDITS_AVAILABLE)
+ 		seq_printf(s, ",max_credits=%u", tcon->ses->server->max_credits);
++	if (tcon->ses->server->tcp_nodelay)
++		seq_puts(s, ",tcpnodelay");
++	if (tcon->ses->server->noautotune)
++		seq_puts(s, ",noautotune");
++	if (tcon->ses->server->noblocksnd)
++		seq_puts(s, ",noblocksend");
+ 
+ 	if (tcon->snapshot_time)
+ 		seq_printf(s, ",snapshot=%llu", tcon->snapshot_time);
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 196285b0fe46c..92a7628560ccb 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -22,6 +22,8 @@
+ #include <linux/in.h>
+ #include <linux/in6.h>
+ #include <linux/slab.h>
++#include <linux/scatterlist.h>
++#include <linux/mm.h>
+ #include <linux/mempool.h>
+ #include <linux/workqueue.h>
+ #include "cifs_fs_sb.h"
+@@ -30,6 +32,7 @@
+ #include <linux/scatterlist.h>
+ #include <uapi/linux/cifs/cifs_mount.h>
+ #include "smb2pdu.h"
++#include "smb2glob.h"
+ 
+ #define CIFS_MAGIC_NUMBER 0xFF534D42      /* the first four bytes of SMB PDUs */
+ 
+@@ -2046,4 +2049,70 @@ static inline bool is_tcon_dfs(struct cifs_tcon *tcon)
+ 		tcon->share_flags & (SHI1005_FLAGS_DFS | SHI1005_FLAGS_DFS_ROOT);
+ }
+ 
++static inline unsigned int cifs_get_num_sgs(const struct smb_rqst *rqst,
++					    int num_rqst,
++					    const u8 *sig)
++{
++	unsigned int len, skip;
++	unsigned int nents = 0;
++	unsigned long addr;
++	int i, j;
++
++	/* Assumes the first rqst has a transform header as the first iov.
++	 * I.e.
++	 * rqst[0].rq_iov[0]  is transform header
++	 * rqst[0].rq_iov[1+] data to be encrypted/decrypted
++	 * rqst[1+].rq_iov[0+] data to be encrypted/decrypted
++	 */
++	for (i = 0; i < num_rqst; i++) {
++		/*
++		 * The first rqst has a transform header where the
++		 * first 20 bytes are not part of the encrypted blob.
++		 */
++		for (j = 0; j < rqst[i].rq_nvec; j++) {
++			struct kvec *iov = &rqst[i].rq_iov[j];
++
++			skip = (i == 0) && (j == 0) ? 20 : 0;
++			addr = (unsigned long)iov->iov_base + skip;
++			if (unlikely(is_vmalloc_addr((void *)addr))) {
++				len = iov->iov_len - skip;
++				nents += DIV_ROUND_UP(offset_in_page(addr) + len,
++						      PAGE_SIZE);
++			} else {
++				nents++;
++			}
++		}
++		nents += rqst[i].rq_npages;
++	}
++	nents += DIV_ROUND_UP(offset_in_page(sig) + SMB2_SIGNATURE_SIZE, PAGE_SIZE);
++	return nents;
++}
++
++/* We can not use the normal sg_set_buf() as we will sometimes pass a
++ * stack object as buf.
++ */
++static inline struct scatterlist *cifs_sg_set_buf(struct scatterlist *sg,
++						  const void *buf,
++						  unsigned int buflen)
++{
++	unsigned long addr = (unsigned long)buf;
++	unsigned int off = offset_in_page(addr);
++
++	addr &= PAGE_MASK;
++	if (unlikely(is_vmalloc_addr((void *)addr))) {
++		do {
++			unsigned int len = min_t(unsigned int, buflen, PAGE_SIZE - off);
++
++			sg_set_page(sg++, vmalloc_to_page((void *)addr), len, off);
++
++			off = 0;
++			addr += PAGE_SIZE;
++			buflen -= len;
++		} while (buflen);
++	} else {
++		sg_set_page(sg++, virt_to_page(addr), buflen, off);
++	}
++	return sg;
++}
++
+ #endif	/* _CIFS_GLOB_H */
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index a6ca4eda9a5ae..ca34cc1e1931a 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -602,8 +602,8 @@ int cifs_alloc_hash(const char *name, struct crypto_shash **shash,
+ 		    struct sdesc **sdesc);
+ void cifs_free_hash(struct crypto_shash **shash, struct sdesc **sdesc);
+ 
+-extern void rqst_page_get_length(struct smb_rqst *rqst, unsigned int page,
+-				unsigned int *len, unsigned int *offset);
++void rqst_page_get_length(const struct smb_rqst *rqst, unsigned int page,
++			  unsigned int *len, unsigned int *offset);
+ struct cifs_chan *
+ cifs_ses_find_chan(struct cifs_ses *ses, struct TCP_Server_Info *server);
+ int cifs_try_adding_channels(struct cifs_ses *ses);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index d1c3086d7ddd0..164b985407160 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -3038,7 +3038,7 @@ cifs_set_cifscreds(struct smb_vol *vol __attribute__((unused)),
+ struct cifs_ses *
+ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb_vol *volume_info)
+ {
+-	int rc = -ENOMEM;
++	int rc = 0;
+ 	unsigned int xid;
+ 	struct cifs_ses *ses;
+ 	struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr;
+@@ -3080,6 +3080,8 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb_vol *volume_info)
+ 		return ses;
+ 	}
+ 
++	rc = -ENOMEM;
++
+ 	cifs_dbg(FYI, "Existing smb sess not found\n");
+ 	ses = sesInfoAlloc();
+ 	if (ses == NULL)
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 9d740916a8ee5..9044b0fca9a3d 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -974,8 +974,8 @@ cifs_free_hash(struct crypto_shash **shash, struct sdesc **sdesc)
+  * Input: rqst - a smb_rqst, page - a page index for rqst
+  * Output: *len - the length for this page, *offset - the offset for this page
+  */
+-void rqst_page_get_length(struct smb_rqst *rqst, unsigned int page,
+-				unsigned int *len, unsigned int *offset)
++void rqst_page_get_length(const struct smb_rqst *rqst, unsigned int page,
++			  unsigned int *len, unsigned int *offset)
+ {
+ 	*len = rqst->rq_pagesz;
+ 	*offset = (page == 0) ? rqst->rq_offset : 0;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 72368b656b33c..844db4652dd1d 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -4164,69 +4164,82 @@ fill_transform_hdr(struct smb2_transform_hdr *tr_hdr, unsigned int orig_len,
+ 	memcpy(&tr_hdr->SessionId, &shdr->SessionId, 8);
+ }
+ 
+-/* We can not use the normal sg_set_buf() as we will sometimes pass a
+- * stack object as buf.
+- */
+-static inline void smb2_sg_set_buf(struct scatterlist *sg, const void *buf,
+-				   unsigned int buflen)
++static void *smb2_aead_req_alloc(struct crypto_aead *tfm, const struct smb_rqst *rqst,
++				 int num_rqst, const u8 *sig, u8 **iv,
++				 struct aead_request **req, struct scatterlist **sgl,
++				 unsigned int *num_sgs)
+ {
+-	void *addr;
+-	/*
+-	 * VMAP_STACK (at least) puts stack into the vmalloc address space
+-	 */
+-	if (is_vmalloc_addr(buf))
+-		addr = vmalloc_to_page(buf);
+-	else
+-		addr = virt_to_page(buf);
+-	sg_set_page(sg, addr, buflen, offset_in_page(buf));
++	unsigned int req_size = sizeof(**req) + crypto_aead_reqsize(tfm);
++	unsigned int iv_size = crypto_aead_ivsize(tfm);
++	unsigned int len;
++	u8 *p;
++
++	*num_sgs = cifs_get_num_sgs(rqst, num_rqst, sig);
++
++	len = iv_size;
++	len += crypto_aead_alignmask(tfm) & ~(crypto_tfm_ctx_alignment() - 1);
++	len = ALIGN(len, crypto_tfm_ctx_alignment());
++	len += req_size;
++	len = ALIGN(len, __alignof__(struct scatterlist));
++	len += *num_sgs * sizeof(**sgl);
++
++	p = kmalloc(len, GFP_ATOMIC);
++	if (!p)
++		return NULL;
++
++	*iv = (u8 *)PTR_ALIGN(p, crypto_aead_alignmask(tfm) + 1);
++	*req = (struct aead_request *)PTR_ALIGN(*iv + iv_size,
++						crypto_tfm_ctx_alignment());
++	*sgl = (struct scatterlist *)PTR_ALIGN((u8 *)*req + req_size,
++					       __alignof__(struct scatterlist));
++	return p;
+ }
+ 
+-/* Assumes the first rqst has a transform header as the first iov.
+- * I.e.
+- * rqst[0].rq_iov[0]  is transform header
+- * rqst[0].rq_iov[1+] data to be encrypted/decrypted
+- * rqst[1+].rq_iov[0+] data to be encrypted/decrypted
+- */
+-static struct scatterlist *
+-init_sg(int num_rqst, struct smb_rqst *rqst, u8 *sign)
++static void *smb2_get_aead_req(struct crypto_aead *tfm, const struct smb_rqst *rqst,
++			       int num_rqst, const u8 *sig, u8 **iv,
++			       struct aead_request **req, struct scatterlist **sgl)
+ {
+-	unsigned int sg_len;
++	unsigned int off, len, skip;
+ 	struct scatterlist *sg;
+-	unsigned int i;
+-	unsigned int j;
+-	unsigned int idx = 0;
+-	int skip;
+-
+-	sg_len = 1;
+-	for (i = 0; i < num_rqst; i++)
+-		sg_len += rqst[i].rq_nvec + rqst[i].rq_npages;
++	unsigned int num_sgs;
++	unsigned long addr;
++	int i, j;
++	void *p;
+ 
+-	sg = kmalloc_array(sg_len, sizeof(struct scatterlist), GFP_KERNEL);
+-	if (!sg)
++	p = smb2_aead_req_alloc(tfm, rqst, num_rqst, sig, iv, req, sgl, &num_sgs);
++	if (!p)
+ 		return NULL;
+ 
+-	sg_init_table(sg, sg_len);
++	sg_init_table(*sgl, num_sgs);
++	sg = *sgl;
++
++	/* Assumes the first rqst has a transform header as the first iov.
++	 * I.e.
++	 * rqst[0].rq_iov[0]  is transform header
++	 * rqst[0].rq_iov[1+] data to be encrypted/decrypted
++	 * rqst[1+].rq_iov[0+] data to be encrypted/decrypted
++	 */
+ 	for (i = 0; i < num_rqst; i++) {
++		/*
++		 * The first rqst has a transform header where the
++		 * first 20 bytes are not part of the encrypted blob.
++		 */
+ 		for (j = 0; j < rqst[i].rq_nvec; j++) {
+-			/*
+-			 * The first rqst has a transform header where the
+-			 * first 20 bytes are not part of the encrypted blob
+-			 */
+-			skip = (i == 0) && (j == 0) ? 20 : 0;
+-			smb2_sg_set_buf(&sg[idx++],
+-					rqst[i].rq_iov[j].iov_base + skip,
+-					rqst[i].rq_iov[j].iov_len - skip);
+-			}
++			struct kvec *iov = &rqst[i].rq_iov[j];
+ 
++			skip = (i == 0) && (j == 0) ? 20 : 0;
++			addr = (unsigned long)iov->iov_base + skip;
++			len = iov->iov_len - skip;
++			sg = cifs_sg_set_buf(sg, (void *)addr, len);
++		}
+ 		for (j = 0; j < rqst[i].rq_npages; j++) {
+-			unsigned int len, offset;
+-
+-			rqst_page_get_length(&rqst[i], j, &len, &offset);
+-			sg_set_page(&sg[idx++], rqst[i].rq_pages[j], len, offset);
++			rqst_page_get_length(&rqst[i], j, &len, &off);
++			sg_set_page(sg++, rqst[i].rq_pages[j], len, off);
+ 		}
+ 	}
+-	smb2_sg_set_buf(&sg[idx], sign, SMB2_SIGNATURE_SIZE);
+-	return sg;
++	cifs_sg_set_buf(sg, sig, SMB2_SIGNATURE_SIZE);
++
++	return p;
+ }
+ 
+ static int
+@@ -4270,11 +4283,11 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ 	u8 sign[SMB2_SIGNATURE_SIZE] = {};
+ 	u8 key[SMB3_ENC_DEC_KEY_SIZE];
+ 	struct aead_request *req;
+-	char *iv;
+-	unsigned int iv_len;
++	u8 *iv;
+ 	DECLARE_CRYPTO_WAIT(wait);
+ 	struct crypto_aead *tfm;
+ 	unsigned int crypt_len = le32_to_cpu(tr_hdr->OriginalMessageSize);
++	void *creq;
+ 
+ 	rc = smb2_get_enc_key(server, tr_hdr->SessionId, enc, key);
+ 	if (rc) {
+@@ -4309,32 +4322,15 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ 		return rc;
+ 	}
+ 
+-	req = aead_request_alloc(tfm, GFP_KERNEL);
+-	if (!req) {
+-		cifs_server_dbg(VFS, "%s: Failed to alloc aead request\n", __func__);
++	creq = smb2_get_aead_req(tfm, rqst, num_rqst, sign, &iv, &req, &sg);
++	if (unlikely(!creq))
+ 		return -ENOMEM;
+-	}
+ 
+ 	if (!enc) {
+ 		memcpy(sign, &tr_hdr->Signature, SMB2_SIGNATURE_SIZE);
+ 		crypt_len += SMB2_SIGNATURE_SIZE;
+ 	}
+ 
+-	sg = init_sg(num_rqst, rqst, sign);
+-	if (!sg) {
+-		cifs_server_dbg(VFS, "%s: Failed to init sg\n", __func__);
+-		rc = -ENOMEM;
+-		goto free_req;
+-	}
+-
+-	iv_len = crypto_aead_ivsize(tfm);
+-	iv = kzalloc(iv_len, GFP_KERNEL);
+-	if (!iv) {
+-		cifs_server_dbg(VFS, "%s: Failed to alloc iv\n", __func__);
+-		rc = -ENOMEM;
+-		goto free_sg;
+-	}
+-
+ 	if ((server->cipher_type == SMB2_ENCRYPTION_AES128_GCM) ||
+ 	    (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM))
+ 		memcpy(iv, (char *)tr_hdr->Nonce, SMB3_AES_GCM_NONCE);
+@@ -4343,6 +4339,7 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ 		memcpy(iv + 1, (char *)tr_hdr->Nonce, SMB3_AES_CCM_NONCE);
+ 	}
+ 
++	aead_request_set_tfm(req, tfm);
+ 	aead_request_set_crypt(req, sg, sg, crypt_len, iv);
+ 	aead_request_set_ad(req, assoc_data_len);
+ 
+@@ -4355,11 +4352,7 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ 	if (!rc && enc)
+ 		memcpy(&tr_hdr->Signature, sign, SMB2_SIGNATURE_SIZE);
+ 
+-	kfree(iv);
+-free_sg:
+-	kfree(sg);
+-free_req:
+-	kfree(req);
++	kfree_sensitive(creq);
+ 	return rc;
+ }
+ 
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 5ad27e484014f..12388ed4faa59 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -317,6 +317,7 @@ static int configfs_create_dir(struct config_item *item, struct dentry *dentry,
+ 	return 0;
+ 
+ out_remove:
++	configfs_put(dentry->d_fsdata);
+ 	configfs_remove_dirent(dentry);
+ 	return PTR_ERR(inode);
+ }
+@@ -383,6 +384,7 @@ int configfs_create_link(struct configfs_dirent *target, struct dentry *parent,
+ 	return 0;
+ 
+ out_remove:
++	configfs_put(dentry->d_fsdata);
+ 	configfs_remove_dirent(dentry);
+ 	return PTR_ERR(inode);
+ }
+diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
+index 96059af28f508..42bab9270e7d6 100644
+--- a/fs/debugfs/file.c
++++ b/fs/debugfs/file.c
+@@ -378,8 +378,8 @@ ssize_t debugfs_attr_read(struct file *file, char __user *buf,
+ }
+ EXPORT_SYMBOL_GPL(debugfs_attr_read);
+ 
+-ssize_t debugfs_attr_write(struct file *file, const char __user *buf,
+-			 size_t len, loff_t *ppos)
++static ssize_t debugfs_attr_write_xsigned(struct file *file, const char __user *buf,
++			 size_t len, loff_t *ppos, bool is_signed)
+ {
+ 	struct dentry *dentry = F_DENTRY(file);
+ 	ssize_t ret;
+@@ -387,12 +387,28 @@ ssize_t debugfs_attr_write(struct file *file, const char __user *buf,
+ 	ret = debugfs_file_get(dentry);
+ 	if (unlikely(ret))
+ 		return ret;
+-	ret = simple_attr_write(file, buf, len, ppos);
++	if (is_signed)
++		ret = simple_attr_write_signed(file, buf, len, ppos);
++	else
++		ret = simple_attr_write(file, buf, len, ppos);
+ 	debugfs_file_put(dentry);
+ 	return ret;
+ }
++
++ssize_t debugfs_attr_write(struct file *file, const char __user *buf,
++			 size_t len, loff_t *ppos)
++{
++	return debugfs_attr_write_xsigned(file, buf, len, ppos, false);
++}
+ EXPORT_SYMBOL_GPL(debugfs_attr_write);
+ 
++ssize_t debugfs_attr_write_signed(struct file *file, const char __user *buf,
++			 size_t len, loff_t *ppos)
++{
++	return debugfs_attr_write_xsigned(file, buf, len, ppos, true);
++}
++EXPORT_SYMBOL_GPL(debugfs_attr_write_signed);
++
+ static struct dentry *debugfs_create_mode_unsafe(const char *name, umode_t mode,
+ 					struct dentry *parent, void *value,
+ 					const struct file_operations *fops,
+@@ -748,11 +764,11 @@ static int debugfs_atomic_t_get(void *data, u64 *val)
+ 	*val = atomic_read((atomic_t *)data);
+ 	return 0;
+ }
+-DEFINE_DEBUGFS_ATTRIBUTE(fops_atomic_t, debugfs_atomic_t_get,
++DEFINE_DEBUGFS_ATTRIBUTE_SIGNED(fops_atomic_t, debugfs_atomic_t_get,
+ 			debugfs_atomic_t_set, "%lld\n");
+-DEFINE_DEBUGFS_ATTRIBUTE(fops_atomic_t_ro, debugfs_atomic_t_get, NULL,
++DEFINE_DEBUGFS_ATTRIBUTE_SIGNED(fops_atomic_t_ro, debugfs_atomic_t_get, NULL,
+ 			"%lld\n");
+-DEFINE_DEBUGFS_ATTRIBUTE(fops_atomic_t_wo, NULL, debugfs_atomic_t_set,
++DEFINE_DEBUGFS_ATTRIBUTE_SIGNED(fops_atomic_t_wo, NULL, debugfs_atomic_t_set,
+ 			"%lld\n");
+ 
+ /**
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 4ad1c3ce9398a..81dc61f1c557f 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -553,7 +553,7 @@ enum {
+  *
+  * It's not paranoia if the Murphy's Law really *is* out to get you.  :-)
+  */
+-#define TEST_FLAG_VALUE(FLAG) (EXT4_##FLAG##_FL == (1 << EXT4_INODE_##FLAG))
++#define TEST_FLAG_VALUE(FLAG) (EXT4_##FLAG##_FL == (1U << EXT4_INODE_##FLAG))
+ #define CHECK_FLAG_VALUE(FLAG) BUILD_BUG_ON(!TEST_FLAG_VALUE(FLAG))
+ 
+ static inline void ext4_check_flag_values(void)
+@@ -2842,7 +2842,8 @@ int do_journal_get_write_access(handle_t *handle,
+ typedef enum {
+ 	EXT4_IGET_NORMAL =	0,
+ 	EXT4_IGET_SPECIAL =	0x0001, /* OK to iget a system inode */
+-	EXT4_IGET_HANDLE = 	0x0002	/* Inode # is from a handle */
++	EXT4_IGET_HANDLE = 	0x0002,	/* Inode # is from a handle */
++	EXT4_IGET_BAD =		0x0004  /* Allow to iget a bad inode */
+ } ext4_iget_flags;
+ 
+ extern struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+@@ -3485,8 +3486,8 @@ extern int ext4_handle_dirty_dirblock(handle_t *handle, struct inode *inode,
+ extern int ext4_ci_compare(const struct inode *parent,
+ 			   const struct qstr *fname,
+ 			   const struct qstr *entry, bool quick);
+-extern int __ext4_unlink(handle_t *handle, struct inode *dir, const struct qstr *d_name,
+-			 struct inode *inode);
++extern int __ext4_unlink(struct inode *dir, const struct qstr *d_name,
++			 struct inode *inode, struct dentry *dentry);
+ extern int __ext4_link(struct inode *dir, struct inode *inode,
+ 		       struct dentry *dentry);
+ 
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 54750b7c162d2..bf0872bb34f69 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -5802,6 +5802,14 @@ int ext4_clu_mapped(struct inode *inode, ext4_lblk_t lclu)
+ 	struct ext4_extent *extent;
+ 	ext4_lblk_t first_lblk, first_lclu, last_lclu;
+ 
++	/*
++	 * if data can be stored inline, the logical cluster isn't
++	 * mapped - no physical clusters have been allocated, and the
++	 * file has no extents
++	 */
++	if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA))
++		return 0;
++
+ 	/* search for the extent closest to the first block in the cluster */
+ 	path = ext4_find_extent(inode, EXT4_C2B(sbi, lclu), NULL, 0);
+ 	if (IS_ERR(path)) {
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index 9a3a8996aacf7..aa99a3659edfc 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -1372,7 +1372,7 @@ retry:
+ 		if (count_reserved)
+ 			count_rsvd(inode, lblk, orig_es.es_len - len1 - len2,
+ 				   &orig_es, &rc);
+-		goto out;
++		goto out_get_reserved;
+ 	}
+ 
+ 	if (len1 > 0) {
+@@ -1414,6 +1414,7 @@ retry:
+ 		}
+ 	}
+ 
++out_get_reserved:
+ 	if (count_reserved)
+ 		*reserved = get_rsvd(inode, end, es, &rc);
+ out:
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 41dcf21558c4e..be768ef1fd168 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -66,7 +66,7 @@
+  * Fast Commit Ineligibility
+  * -------------------------
+  * Not all operations are supported by fast commits today (e.g extended
+- * attributes). Fast commit ineligiblity is marked by calling one of the
++ * attributes). Fast commit ineligibility is marked by calling one of the
+  * two following functions:
+  *
+  * - ext4_fc_mark_ineligible(): This makes next fast commit operation to fall
+@@ -371,25 +371,33 @@ static int __track_dentry_update(struct inode *inode, void *arg, bool update)
+ 	struct __track_dentry_update_args *dentry_update =
+ 		(struct __track_dentry_update_args *)arg;
+ 	struct dentry *dentry = dentry_update->dentry;
+-	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	struct inode *dir = dentry->d_parent->d_inode;
++	struct super_block *sb = inode->i_sb;
++	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 
+ 	mutex_unlock(&ei->i_fc_lock);
++
++	if (IS_ENCRYPTED(dir)) {
++		ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_ENCRYPTED_FILENAME);
++		mutex_lock(&ei->i_fc_lock);
++		return -EOPNOTSUPP;
++	}
++
+ 	node = kmem_cache_alloc(ext4_fc_dentry_cachep, GFP_NOFS);
+ 	if (!node) {
+-		ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_NOMEM);
++		ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_NOMEM);
+ 		mutex_lock(&ei->i_fc_lock);
+ 		return -ENOMEM;
+ 	}
+ 
+ 	node->fcd_op = dentry_update->op;
+-	node->fcd_parent = dentry->d_parent->d_inode->i_ino;
++	node->fcd_parent = dir->i_ino;
+ 	node->fcd_ino = inode->i_ino;
+ 	if (dentry->d_name.len > DNAME_INLINE_LEN) {
+ 		node->fcd_name.name = kmalloc(dentry->d_name.len, GFP_NOFS);
+ 		if (!node->fcd_name.name) {
+ 			kmem_cache_free(ext4_fc_dentry_cachep, node);
+-			ext4_fc_mark_ineligible(inode->i_sb,
+-				EXT4_FC_REASON_NOMEM);
++			ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_NOMEM);
+ 			mutex_lock(&ei->i_fc_lock);
+ 			return -ENOMEM;
+ 		}
+@@ -628,6 +636,9 @@ static u8 *ext4_fc_reserve_space(struct super_block *sb, int len, u32 *crc)
+ 		*crc = ext4_chksum(sbi, *crc, tl, sizeof(*tl));
+ 	if (pad_len > 0)
+ 		ext4_fc_memzero(sb, tl + 1, pad_len, crc);
++	/* Don't leak uninitialized memory in the unused last byte. */
++	*((u8 *)(tl + 1) + pad_len) = 0;
++
+ 	ext4_fc_submit_bh(sb);
+ 
+ 	ret = jbd2_fc_get_buf(EXT4_SB(sb)->s_journal, &bh);
+@@ -684,6 +695,8 @@ static int ext4_fc_write_tail(struct super_block *sb, u32 crc)
+ 	dst += sizeof(tail.fc_tid);
+ 	tail.fc_crc = cpu_to_le32(crc);
+ 	ext4_fc_memcpy(sb, dst, &tail.fc_crc, sizeof(tail.fc_crc), NULL);
++	dst += sizeof(tail.fc_crc);
++	memset(dst, 0, bsize - off); /* Don't leak uninitialized memory. */
+ 
+ 	ext4_fc_submit_bh(sb);
+ 
+@@ -1287,7 +1300,7 @@ static int ext4_fc_replay_unlink(struct super_block *sb, struct ext4_fc_tl *tl,
+ 		return 0;
+ 	}
+ 
+-	ret = __ext4_unlink(NULL, old_parent, &entry, inode);
++	ret = __ext4_unlink(old_parent, &entry, inode, NULL);
+ 	/* -ENOENT ok coz it might not exist anymore. */
+ 	if (ret == -ENOENT)
+ 		ret = 0;
+@@ -2137,17 +2150,17 @@ void ext4_fc_init(struct super_block *sb, journal_t *journal)
+ 	journal->j_fc_cleanup_callback = ext4_fc_cleanup;
+ }
+ 
+-static const char *fc_ineligible_reasons[] = {
+-	"Extended attributes changed",
+-	"Cross rename",
+-	"Journal flag changed",
+-	"Insufficient memory",
+-	"Swap boot",
+-	"Resize",
+-	"Dir renamed",
+-	"Falloc range op",
+-	"Data journalling",
+-	"FC Commit Failed"
++static const char * const fc_ineligible_reasons[] = {
++	[EXT4_FC_REASON_XATTR] = "Extended attributes changed",
++	[EXT4_FC_REASON_CROSS_RENAME] = "Cross rename",
++	[EXT4_FC_REASON_JOURNAL_FLAG_CHANGE] = "Journal flag changed",
++	[EXT4_FC_REASON_NOMEM] = "Insufficient memory",
++	[EXT4_FC_REASON_SWAP_BOOT] = "Swap boot",
++	[EXT4_FC_REASON_RESIZE] = "Resize",
++	[EXT4_FC_REASON_RENAME_DIR] = "Dir renamed",
++	[EXT4_FC_REASON_FALLOC_RANGE] = "Falloc range op",
++	[EXT4_FC_REASON_INODE_JOURNAL_DATA] = "Data journalling",
++	[EXT4_FC_REASON_ENCRYPTED_FILENAME] = "Encrypted filename",
+ };
+ 
+ int ext4_fc_info_show(struct seq_file *seq, void *v)
+diff --git a/fs/ext4/fast_commit.h b/fs/ext4/fast_commit.h
+index d8d0998a5c163..4a5f96a9c9d72 100644
+--- a/fs/ext4/fast_commit.h
++++ b/fs/ext4/fast_commit.h
+@@ -104,6 +104,7 @@ enum {
+ 	EXT4_FC_REASON_FALLOC_RANGE,
+ 	EXT4_FC_REASON_INODE_JOURNAL_DATA,
+ 	EXT4_FC_COMMIT_FAILED,
++	EXT4_FC_REASON_ENCRYPTED_FILENAME,
+ 	EXT4_FC_REASON_MAX
+ };
+ 
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index 05efa682bc2f9..237983cd8cdc2 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -148,6 +148,7 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth,
+ 	struct super_block *sb = inode->i_sb;
+ 	Indirect *p = chain;
+ 	struct buffer_head *bh;
++	unsigned int key;
+ 	int ret = -EIO;
+ 
+ 	*err = 0;
+@@ -156,7 +157,13 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth,
+ 	if (!p->key)
+ 		goto no_block;
+ 	while (--depth) {
+-		bh = sb_getblk(sb, le32_to_cpu(p->key));
++		key = le32_to_cpu(p->key);
++		if (key > ext4_blocks_count(EXT4_SB(sb)->s_es)) {
++			/* the block was out of range */
++			ret = -EFSCORRUPTED;
++			goto failure;
++		}
++		bh = sb_getblk(sb, key);
+ 		if (unlikely(!bh)) {
+ 			ret = -ENOMEM;
+ 			goto failure;
+@@ -705,7 +712,7 @@ static int ext4_ind_trunc_restart_fn(handle_t *handle, struct inode *inode,
+ 
+ /*
+  * Truncate transactions can be complex and absolutely huge.  So we need to
+- * be able to restart the transaction at a conventient checkpoint to make
++ * be able to restart the transaction at a convenient checkpoint to make
+  * sure we don't overflow the journal.
+  *
+  * Try to extend this transaction for the purposes of truncation.  If
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 88bd1d1cca233..77377befbb1c6 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -799,7 +799,7 @@ ext4_journalled_write_inline_data(struct inode *inode,
+  *    clear the inode state safely.
+  * 2. The inode has inline data, then we need to read the data, make it
+  *    update and dirty so that ext4_da_writepages can handle it. We don't
+- *    need to start the journal since the file's metatdata isn't changed now.
++ *    need to start the journal since the file's metadata isn't changed now.
+  */
+ static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping,
+ 						 struct inode *inode,
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 45f31dc1e66ff..355343cf4609b 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -179,6 +179,8 @@ void ext4_evict_inode(struct inode *inode)
+ 
+ 	trace_ext4_evict_inode(inode);
+ 
++	if (EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)
++		ext4_evict_ea_inode(inode);
+ 	if (inode->i_nlink) {
+ 		/*
+ 		 * When journalling data dirty buffers are tracked only in the
+@@ -223,13 +225,13 @@ void ext4_evict_inode(struct inode *inode)
+ 
+ 	/*
+ 	 * For inodes with journalled data, transaction commit could have
+-	 * dirtied the inode. Flush worker is ignoring it because of I_FREEING
+-	 * flag but we still need to remove the inode from the writeback lists.
++	 * dirtied the inode. And for inodes with dioread_nolock, unwritten
++	 * extents converting worker could merge extents and also have dirtied
++	 * the inode. Flush worker is ignoring it because of I_FREEING flag but
++	 * we still need to remove the inode from the writeback lists.
+ 	 */
+-	if (!list_empty_careful(&inode->i_io_list)) {
+-		WARN_ON_ONCE(!ext4_should_journal_data(inode));
++	if (!list_empty_careful(&inode->i_io_list))
+ 		inode_io_list_del(inode);
+-	}
+ 
+ 	/*
+ 	 * Protect us against freezing - iput() caller didn't have to have any
+@@ -336,6 +338,12 @@ stop_handle:
+ 	ext4_xattr_inode_array_free(ea_inode_array);
+ 	return;
+ no_delete:
++	/*
++	 * Check out some where else accidentally dirty the evicting inode,
++	 * which may probably cause inode use-after-free issues later.
++	 */
++	WARN_ON_ONCE(!list_empty_careful(&inode->i_io_list));
++
+ 	if (!list_empty(&EXT4_I(inode)->i_fc_list))
+ 		ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_NOMEM);
+ 	ext4_clear_inode(inode);	/* We must guarantee clearing of inode... */
+@@ -3879,7 +3887,7 @@ unlock:
+  * starting from file offset 'from'.  The range to be zero'd must
+  * be contained with in one block.  If the specified range exceeds
+  * the end of the block it will be shortened to end of the block
+- * that cooresponds to 'from'
++ * that corresponds to 'from'
+  */
+ static int ext4_block_zero_page_range(handle_t *handle,
+ 		struct address_space *mapping, loff_t from, loff_t length)
+@@ -4285,7 +4293,8 @@ int ext4_truncate(struct inode *inode)
+ 
+ 	/* If we zero-out tail of the page, we have to create jinode for jbd2 */
+ 	if (inode->i_size & (inode->i_sb->s_blocksize - 1)) {
+-		if (ext4_inode_attach_jinode(inode) < 0)
++		err = ext4_inode_attach_jinode(inode);
++		if (err)
+ 			goto out_trace;
+ 	}
+ 
+@@ -4386,9 +4395,17 @@ static int __ext4_get_inode_loc(struct super_block *sb, unsigned long ino,
+ 	inodes_per_block = EXT4_SB(sb)->s_inodes_per_block;
+ 	inode_offset = ((ino - 1) %
+ 			EXT4_INODES_PER_GROUP(sb));
+-	block = ext4_inode_table(sb, gdp) + (inode_offset / inodes_per_block);
+ 	iloc->offset = (inode_offset % inodes_per_block) * EXT4_INODE_SIZE(sb);
+ 
++	block = ext4_inode_table(sb, gdp);
++	if ((block <= le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) ||
++	    (block >= ext4_blocks_count(EXT4_SB(sb)->s_es))) {
++		ext4_error(sb, "Invalid inode table block %llu in "
++			   "block_group %u", block, iloc->block_group);
++		return -EFSCORRUPTED;
++	}
++	block += (inode_offset / inodes_per_block);
++
+ 	bh = sb_getblk(sb, block);
+ 	if (unlikely(!bh))
+ 		return -ENOMEM;
+@@ -4960,8 +4977,14 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 	if (IS_CASEFOLDED(inode) && !ext4_has_feature_casefold(inode->i_sb))
+ 		ext4_error_inode(inode, function, line, 0,
+ 				 "casefold flag without casefold feature");
+-	brelse(iloc.bh);
++	if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD)) {
++		ext4_error_inode(inode, function, line, 0,
++				 "bad inode without EXT4_IGET_BAD flag");
++		ret = -EUCLEAN;
++		goto bad_inode;
++	}
+ 
++	brelse(iloc.bh);
+ 	unlock_new_inode(inode);
+ 	return inode;
+ 
+@@ -5858,6 +5881,14 @@ static int __ext4_expand_extra_isize(struct inode *inode,
+ 		return 0;
+ 	}
+ 
++	/*
++	 * We may need to allocate external xattr block so we need quotas
++	 * initialized. Here we can be called with various locks held so we
++	 * cannot affort to initialize quotas ourselves. So just bail.
++	 */
++	if (dquot_initialize_needed(inode))
++		return -EAGAIN;
++
+ 	/* try to expand with EAs present */
+ 	error = ext4_expand_extra_isize_ea(inode, new_extra_isize,
+ 					   raw_inode, handle);
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 413bf3d2f7844..240d792db9f78 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -121,7 +121,8 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	blkcnt_t blocks;
+ 	unsigned short bytes;
+ 
+-	inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO, EXT4_IGET_SPECIAL);
++	inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO,
++			EXT4_IGET_SPECIAL | EXT4_IGET_BAD);
+ 	if (IS_ERR(inode_bl))
+ 		return PTR_ERR(inode_bl);
+ 	ei_bl = EXT4_I(inode_bl);
+@@ -170,7 +171,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	/* Protect extent tree against block allocations via delalloc */
+ 	ext4_double_down_write_data_sem(inode, inode_bl);
+ 
+-	if (inode_bl->i_nlink == 0) {
++	if (is_bad_inode(inode_bl) || !S_ISREG(inode_bl->i_mode)) {
+ 		/* this inode has never been used as a BOOT_LOADER */
+ 		set_nlink(inode_bl, 1);
+ 		i_uid_write(inode_bl, 0);
+@@ -494,6 +495,10 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
+ 	if (ext4_is_quota_file(inode))
+ 		return err;
+ 
++	err = dquot_initialize(inode);
++	if (err)
++		return err;
++
+ 	err = ext4_get_inode_loc(inode, &iloc);
+ 	if (err)
+ 		return err;
+@@ -509,10 +514,6 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
+ 		brelse(iloc.bh);
+ 	}
+ 
+-	err = dquot_initialize(inode);
+-	if (err)
+-		return err;
+-
+ 	handle = ext4_journal_start(inode, EXT4_HT_QUOTA,
+ 		EXT4_QUOTA_INIT_BLOCKS(sb) +
+ 		EXT4_QUOTA_DEL_BLOCKS(sb) + 3);
+diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
+index e75b4749aa1c2..7be6288e48ec2 100644
+--- a/fs/ext4/mballoc.h
++++ b/fs/ext4/mballoc.h
+@@ -59,7 +59,7 @@
+  * by the stream allocator, which purpose is to pack requests
+  * as close each to other as possible to produce smooth I/O traffic
+  * We use locality group prealloc space for stream request.
+- * We can tune the same via /proc/fs/ext4/<parition>/stream_req
++ * We can tune the same via /proc/fs/ext4/<partition>/stream_req
+  */
+ #define MB_DEFAULT_STREAM_THRESHOLD	16	/* 64K */
+ 
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index 4bfe2252d9a4e..b0ea646454ac8 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -32,7 +32,7 @@ static int finish_range(handle_t *handle, struct inode *inode,
+ 	newext.ee_block = cpu_to_le32(lb->first_block);
+ 	newext.ee_len   = cpu_to_le16(lb->last_block - lb->first_block + 1);
+ 	ext4_ext_store_pblock(&newext, lb->first_pblock);
+-	/* Locking only for convinience since we are operating on temp inode */
++	/* Locking only for convenience since we are operating on temp inode */
+ 	down_write(&EXT4_I(inode)->i_data_sem);
+ 	path = ext4_find_extent(inode, lb->first_block, NULL, 0);
+ 	if (IS_ERR(path)) {
+@@ -43,8 +43,8 @@ static int finish_range(handle_t *handle, struct inode *inode,
+ 
+ 	/*
+ 	 * Calculate the credit needed to inserting this extent
+-	 * Since we are doing this in loop we may accumalate extra
+-	 * credit. But below we try to not accumalate too much
++	 * Since we are doing this in loop we may accumulate extra
++	 * credit. But below we try to not accumulate too much
+ 	 * of them by restarting the journal.
+ 	 */
+ 	needed = ext4_ext_calc_credits_for_single_extent(inode,
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index b2e131d11cf8b..7ec7c9c16a39e 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -995,7 +995,7 @@ static int ext4_htree_next_block(struct inode *dir, __u32 hash,
+ 	 * If the hash is 1, then continue only if the next page has a
+ 	 * continuation hash of any value.  This is used for readdir
+ 	 * handling.  Otherwise, check to see if the hash matches the
+-	 * desired contiuation hash.  If it doesn't, return since
++	 * desired continuation hash.  If it doesn't, return since
+ 	 * there's no point to read in the successive index pages.
+ 	 */
+ 	bhash = dx_get_hash(p->at);
+@@ -3244,14 +3244,20 @@ end_rmdir:
+ 	return retval;
+ }
+ 
+-int __ext4_unlink(handle_t *handle, struct inode *dir, const struct qstr *d_name,
+-		  struct inode *inode)
++int __ext4_unlink(struct inode *dir, const struct qstr *d_name,
++		  struct inode *inode,
++		  struct dentry *dentry /* NULL during fast_commit recovery */)
+ {
+ 	int retval = -ENOENT;
+ 	struct buffer_head *bh;
+ 	struct ext4_dir_entry_2 *de;
++	handle_t *handle;
+ 	int skip_remove_dentry = 0;
+ 
++	/*
++	 * Keep this outside the transaction; it may have to set up the
++	 * directory's encryption key, which isn't GFP_NOFS-safe.
++	 */
+ 	bh = ext4_find_entry(dir, d_name, &de, NULL);
+ 	if (IS_ERR(bh))
+ 		return PTR_ERR(bh);
+@@ -3268,7 +3274,14 @@ int __ext4_unlink(handle_t *handle, struct inode *dir, const struct qstr *d_name
+ 		if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
+ 			skip_remove_dentry = 1;
+ 		else
+-			goto out;
++			goto out_bh;
++	}
++
++	handle = ext4_journal_start(dir, EXT4_HT_DIR,
++				    EXT4_DATA_TRANS_BLOCKS(dir->i_sb));
++	if (IS_ERR(handle)) {
++		retval = PTR_ERR(handle);
++		goto out_bh;
+ 	}
+ 
+ 	if (IS_DIRSYNC(dir))
+@@ -3277,12 +3290,12 @@ int __ext4_unlink(handle_t *handle, struct inode *dir, const struct qstr *d_name
+ 	if (!skip_remove_dentry) {
+ 		retval = ext4_delete_entry(handle, dir, de, bh);
+ 		if (retval)
+-			goto out;
++			goto out_handle;
+ 		dir->i_ctime = dir->i_mtime = current_time(dir);
+ 		ext4_update_dx_flag(dir);
+ 		retval = ext4_mark_inode_dirty(handle, dir);
+ 		if (retval)
+-			goto out;
++			goto out_handle;
+ 	} else {
+ 		retval = 0;
+ 	}
+@@ -3295,15 +3308,17 @@ int __ext4_unlink(handle_t *handle, struct inode *dir, const struct qstr *d_name
+ 		ext4_orphan_add(handle, inode);
+ 	inode->i_ctime = current_time(inode);
+ 	retval = ext4_mark_inode_dirty(handle, inode);
+-
+-out:
++	if (dentry && !retval)
++		ext4_fc_track_unlink(handle, dentry);
++out_handle:
++	ext4_journal_stop(handle);
++out_bh:
+ 	brelse(bh);
+ 	return retval;
+ }
+ 
+ static int ext4_unlink(struct inode *dir, struct dentry *dentry)
+ {
+-	handle_t *handle;
+ 	int retval;
+ 
+ 	if (unlikely(ext4_forced_shutdown(EXT4_SB(dir->i_sb))))
+@@ -3321,16 +3336,7 @@ static int ext4_unlink(struct inode *dir, struct dentry *dentry)
+ 	if (retval)
+ 		goto out_trace;
+ 
+-	handle = ext4_journal_start(dir, EXT4_HT_DIR,
+-				    EXT4_DATA_TRANS_BLOCKS(dir->i_sb));
+-	if (IS_ERR(handle)) {
+-		retval = PTR_ERR(handle);
+-		goto out_trace;
+-	}
+-
+-	retval = __ext4_unlink(handle, dir, &dentry->d_name, d_inode(dentry));
+-	if (!retval)
+-		ext4_fc_track_unlink(handle, dentry);
++	retval = __ext4_unlink(dir, &dentry->d_name, d_inode(dentry), dentry);
+ #ifdef CONFIG_UNICODE
+ 	/* VFS negative dentries are incompatible with Encoding and
+ 	 * Case-insensitiveness. Eventually we'll want avoid
+@@ -3341,8 +3347,6 @@ static int ext4_unlink(struct inode *dir, struct dentry *dentry)
+ 	if (IS_CASEFOLDED(dir))
+ 		d_invalidate(dentry);
+ #endif
+-	if (handle)
+-		ext4_journal_stop(handle);
+ 
+ out_trace:
+ 	trace_ext4_unlink_exit(dentry, retval);
+@@ -3842,6 +3846,9 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		return -EXDEV;
+ 
+ 	retval = dquot_initialize(old.dir);
++	if (retval)
++		return retval;
++	retval = dquot_initialize(old.inode);
+ 	if (retval)
+ 		return retval;
+ 	retval = dquot_initialize(new.dir);
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index c55ba0390021e..51cebc1990eb1 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1545,8 +1545,8 @@ exit_journal:
+ 		int meta_bg = ext4_has_feature_meta_bg(sb);
+ 		sector_t old_gdb = 0;
+ 
+-		update_backups(sb, sbi->s_sbh->b_blocknr, (char *)es,
+-			       sizeof(struct ext4_super_block), 0);
++		update_backups(sb, ext4_group_first_block_no(sb, 0),
++			       (char *)es, sizeof(struct ext4_super_block), 0);
+ 		for (; gdb_num <= gdb_num_end; gdb_num++) {
+ 			struct buffer_head *gdb_bh;
+ 
+@@ -1753,7 +1753,7 @@ errout:
+ 		if (test_opt(sb, DEBUG))
+ 			printk(KERN_DEBUG "EXT4-fs: extended group to %llu "
+ 			       "blocks\n", ext4_blocks_count(es));
+-		update_backups(sb, EXT4_SB(sb)->s_sbh->b_blocknr,
++		update_backups(sb, ext4_group_first_block_no(sb, 0),
+ 			       (char *)es, sizeof(struct ext4_super_block), 0);
+ 	}
+ 	return err;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 9573d493c374b..935589579b8fe 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -417,104 +417,6 @@ static time64_t __ext4_get_tstamp(__le32 *lo, __u8 *hi)
+ #define ext4_get_tstamp(es, tstamp) \
+ 	__ext4_get_tstamp(&(es)->tstamp, &(es)->tstamp ## _hi)
+ 
+-static void __save_error_info(struct super_block *sb, int error,
+-			      __u32 ino, __u64 block,
+-			      const char *func, unsigned int line)
+-{
+-	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
+-	int err;
+-
+-	EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS;
+-	if (bdev_read_only(sb->s_bdev))
+-		return;
+-	es->s_state |= cpu_to_le16(EXT4_ERROR_FS);
+-	ext4_update_tstamp(es, s_last_error_time);
+-	strncpy(es->s_last_error_func, func, sizeof(es->s_last_error_func));
+-	es->s_last_error_line = cpu_to_le32(line);
+-	es->s_last_error_ino = cpu_to_le32(ino);
+-	es->s_last_error_block = cpu_to_le64(block);
+-	switch (error) {
+-	case EIO:
+-		err = EXT4_ERR_EIO;
+-		break;
+-	case ENOMEM:
+-		err = EXT4_ERR_ENOMEM;
+-		break;
+-	case EFSBADCRC:
+-		err = EXT4_ERR_EFSBADCRC;
+-		break;
+-	case 0:
+-	case EFSCORRUPTED:
+-		err = EXT4_ERR_EFSCORRUPTED;
+-		break;
+-	case ENOSPC:
+-		err = EXT4_ERR_ENOSPC;
+-		break;
+-	case ENOKEY:
+-		err = EXT4_ERR_ENOKEY;
+-		break;
+-	case EROFS:
+-		err = EXT4_ERR_EROFS;
+-		break;
+-	case EFBIG:
+-		err = EXT4_ERR_EFBIG;
+-		break;
+-	case EEXIST:
+-		err = EXT4_ERR_EEXIST;
+-		break;
+-	case ERANGE:
+-		err = EXT4_ERR_ERANGE;
+-		break;
+-	case EOVERFLOW:
+-		err = EXT4_ERR_EOVERFLOW;
+-		break;
+-	case EBUSY:
+-		err = EXT4_ERR_EBUSY;
+-		break;
+-	case ENOTDIR:
+-		err = EXT4_ERR_ENOTDIR;
+-		break;
+-	case ENOTEMPTY:
+-		err = EXT4_ERR_ENOTEMPTY;
+-		break;
+-	case ESHUTDOWN:
+-		err = EXT4_ERR_ESHUTDOWN;
+-		break;
+-	case EFAULT:
+-		err = EXT4_ERR_EFAULT;
+-		break;
+-	default:
+-		err = EXT4_ERR_UNKNOWN;
+-	}
+-	es->s_last_error_errcode = err;
+-	if (!es->s_first_error_time) {
+-		es->s_first_error_time = es->s_last_error_time;
+-		es->s_first_error_time_hi = es->s_last_error_time_hi;
+-		strncpy(es->s_first_error_func, func,
+-			sizeof(es->s_first_error_func));
+-		es->s_first_error_line = cpu_to_le32(line);
+-		es->s_first_error_ino = es->s_last_error_ino;
+-		es->s_first_error_block = es->s_last_error_block;
+-		es->s_first_error_errcode = es->s_last_error_errcode;
+-	}
+-	/*
+-	 * Start the daily error reporting function if it hasn't been
+-	 * started already
+-	 */
+-	if (!es->s_error_count)
+-		mod_timer(&EXT4_SB(sb)->s_err_report, jiffies + 24*60*60*HZ);
+-	le32_add_cpu(&es->s_error_count, 1);
+-}
+-
+-static void save_error_info(struct super_block *sb, int error,
+-			    __u32 ino, __u64 block,
+-			    const char *func, unsigned int line)
+-{
+-	__save_error_info(sb, error, ino, block, func, line);
+-	if (!bdev_read_only(sb->s_bdev))
+-		ext4_commit_super(sb, 1);
+-}
+-
+ /*
+  * The del_gendisk() function uninitializes the disk-specific data
+  * structures, including the bdi structure, without telling anyone
+@@ -643,6 +545,89 @@ static bool system_going_down(void)
+ 		|| system_state == SYSTEM_RESTART;
+ }
+ 
++struct ext4_err_translation {
++	int code;
++	int errno;
++};
++
++#define EXT4_ERR_TRANSLATE(err) { .code = EXT4_ERR_##err, .errno = err }
++
++static struct ext4_err_translation err_translation[] = {
++	EXT4_ERR_TRANSLATE(EIO),
++	EXT4_ERR_TRANSLATE(ENOMEM),
++	EXT4_ERR_TRANSLATE(EFSBADCRC),
++	EXT4_ERR_TRANSLATE(EFSCORRUPTED),
++	EXT4_ERR_TRANSLATE(ENOSPC),
++	EXT4_ERR_TRANSLATE(ENOKEY),
++	EXT4_ERR_TRANSLATE(EROFS),
++	EXT4_ERR_TRANSLATE(EFBIG),
++	EXT4_ERR_TRANSLATE(EEXIST),
++	EXT4_ERR_TRANSLATE(ERANGE),
++	EXT4_ERR_TRANSLATE(EOVERFLOW),
++	EXT4_ERR_TRANSLATE(EBUSY),
++	EXT4_ERR_TRANSLATE(ENOTDIR),
++	EXT4_ERR_TRANSLATE(ENOTEMPTY),
++	EXT4_ERR_TRANSLATE(ESHUTDOWN),
++	EXT4_ERR_TRANSLATE(EFAULT),
++};
++
++static int ext4_errno_to_code(int errno)
++{
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(err_translation); i++)
++		if (err_translation[i].errno == errno)
++			return err_translation[i].code;
++	return EXT4_ERR_UNKNOWN;
++}
++
++static void __save_error_info(struct super_block *sb, int error,
++			      __u32 ino, __u64 block,
++			      const char *func, unsigned int line)
++{
++	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
++
++	EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS;
++	if (bdev_read_only(sb->s_bdev))
++		return;
++	/* We default to EFSCORRUPTED error... */
++	if (error == 0)
++		error = EFSCORRUPTED;
++	es->s_state |= cpu_to_le16(EXT4_ERROR_FS);
++	ext4_update_tstamp(es, s_last_error_time);
++	strncpy(es->s_last_error_func, func, sizeof(es->s_last_error_func));
++	es->s_last_error_line = cpu_to_le32(line);
++	es->s_last_error_ino = cpu_to_le32(ino);
++	es->s_last_error_block = cpu_to_le64(block);
++	es->s_last_error_errcode = ext4_errno_to_code(error);
++	if (!es->s_first_error_time) {
++		es->s_first_error_time = es->s_last_error_time;
++		es->s_first_error_time_hi = es->s_last_error_time_hi;
++		strncpy(es->s_first_error_func, func,
++			sizeof(es->s_first_error_func));
++		es->s_first_error_line = cpu_to_le32(line);
++		es->s_first_error_ino = es->s_last_error_ino;
++		es->s_first_error_block = es->s_last_error_block;
++		es->s_first_error_errcode = es->s_last_error_errcode;
++	}
++	/*
++	 * Start the daily error reporting function if it hasn't been
++	 * started already
++	 */
++	if (!es->s_error_count)
++		mod_timer(&EXT4_SB(sb)->s_err_report, jiffies + 24*60*60*HZ);
++	le32_add_cpu(&es->s_error_count, 1);
++}
++
++static void save_error_info(struct super_block *sb, int error,
++			    __u32 ino, __u64 block,
++			    const char *func, unsigned int line)
++{
++	__save_error_info(sb, error, ino, block, func, line);
++	if (!bdev_read_only(sb->s_bdev))
++		ext4_commit_super(sb, 1);
++}
++
+ /* Deal with the reporting of failure conditions on a filesystem such as
+  * inconsistencies detected or read IO failures.
+  *
+@@ -4809,30 +4794,31 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 		   ext4_has_feature_journal_needs_recovery(sb)) {
+ 		ext4_msg(sb, KERN_ERR, "required journal recovery "
+ 		       "suppressed and not mounted read-only");
+-		goto failed_mount_wq;
++		goto failed_mount3a;
+ 	} else {
+ 		/* Nojournal mode, all journal mount options are illegal */
+-		if (test_opt2(sb, EXPLICIT_JOURNAL_CHECKSUM)) {
+-			ext4_msg(sb, KERN_ERR, "can't mount with "
+-				 "journal_checksum, fs mounted w/o journal");
+-			goto failed_mount_wq;
+-		}
+ 		if (test_opt(sb, JOURNAL_ASYNC_COMMIT)) {
+ 			ext4_msg(sb, KERN_ERR, "can't mount with "
+ 				 "journal_async_commit, fs mounted w/o journal");
+-			goto failed_mount_wq;
++			goto failed_mount3a;
++		}
++
++		if (test_opt2(sb, EXPLICIT_JOURNAL_CHECKSUM)) {
++			ext4_msg(sb, KERN_ERR, "can't mount with "
++				 "journal_checksum, fs mounted w/o journal");
++			goto failed_mount3a;
+ 		}
+ 		if (sbi->s_commit_interval != JBD2_DEFAULT_MAX_COMMIT_AGE*HZ) {
+ 			ext4_msg(sb, KERN_ERR, "can't mount with "
+ 				 "commit=%lu, fs mounted w/o journal",
+ 				 sbi->s_commit_interval / HZ);
+-			goto failed_mount_wq;
++			goto failed_mount3a;
+ 		}
+ 		if (EXT4_MOUNT_DATA_FLAGS &
+ 		    (sbi->s_mount_opt ^ sbi->s_def_mount_opt)) {
+ 			ext4_msg(sb, KERN_ERR, "can't mount with "
+ 				 "data=, fs mounted w/o journal");
+-			goto failed_mount_wq;
++			goto failed_mount3a;
+ 		}
+ 		sbi->s_def_mount_opt &= ~EXT4_MOUNT_JOURNAL_CHECKSUM;
+ 		clear_opt(sb, JOURNAL_CHECKSUM);
+@@ -5276,7 +5262,7 @@ static struct inode *ext4_get_journal_inode(struct super_block *sb,
+ 
+ 	jbd_debug(2, "Journal inode found at %p: %lld bytes\n",
+ 		  journal_inode, journal_inode->i_size);
+-	if (!S_ISREG(journal_inode->i_mode)) {
++	if (!S_ISREG(journal_inode->i_mode) || IS_ENCRYPTED(journal_inode)) {
+ 		ext4_msg(sb, KERN_ERR, "invalid journal inode");
+ 		iput(journal_inode);
+ 		return NULL;
+@@ -6385,6 +6371,20 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
+ 	return err;
+ }
+ 
++static inline bool ext4_check_quota_inum(int type, unsigned long qf_inum)
++{
++	switch (type) {
++	case USRQUOTA:
++		return qf_inum == EXT4_USR_QUOTA_INO;
++	case GRPQUOTA:
++		return qf_inum == EXT4_GRP_QUOTA_INO;
++	case PRJQUOTA:
++		return qf_inum >= EXT4_GOOD_OLD_FIRST_INO;
++	default:
++		BUG();
++	}
++}
++
+ static int ext4_quota_enable(struct super_block *sb, int type, int format_id,
+ 			     unsigned int flags)
+ {
+@@ -6401,9 +6401,16 @@ static int ext4_quota_enable(struct super_block *sb, int type, int format_id,
+ 	if (!qf_inums[type])
+ 		return -EPERM;
+ 
++	if (!ext4_check_quota_inum(type, qf_inums[type])) {
++		ext4_error(sb, "Bad quota inum: %lu, type: %d",
++				qf_inums[type], type);
++		return -EUCLEAN;
++	}
++
+ 	qf_inode = ext4_iget(sb, qf_inums[type], EXT4_IGET_SPECIAL);
+ 	if (IS_ERR(qf_inode)) {
+-		ext4_error(sb, "Bad quota inode # %lu", qf_inums[type]);
++		ext4_error(sb, "Bad quota inode: %lu, type: %d",
++				qf_inums[type], type);
+ 		return PTR_ERR(qf_inode);
+ 	}
+ 
+@@ -6442,8 +6449,9 @@ static int ext4_enable_quotas(struct super_block *sb)
+ 			if (err) {
+ 				ext4_warning(sb,
+ 					"Failed to enable quota tracking "
+-					"(type=%d, err=%d). Please run "
+-					"e2fsck to fix.", type, err);
++					"(type=%d, err=%d, ino=%lu). "
++					"Please run e2fsck to fix.", type,
++					err, qf_inums[type]);
+ 				for (type--; type >= 0; type--) {
+ 					struct inode *inode;
+ 
+diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c
+index 35be8e7ec2a04..e3019f920222f 100644
+--- a/fs/ext4/verity.c
++++ b/fs/ext4/verity.c
+@@ -79,8 +79,7 @@ static int pagecache_write(struct inode *inode, const void *buf, size_t count,
+ 		size_t n = min_t(size_t, count,
+ 				 PAGE_SIZE - offset_in_page(pos));
+ 		struct page *page;
+-		void *fsdata;
+-		void *addr;
++		void *fsdata = NULL;
+ 		int res;
+ 
+ 		res = pagecache_write_begin(NULL, inode->i_mapping, pos, n, 0,
+@@ -88,9 +87,7 @@ static int pagecache_write(struct inode *inode, const void *buf, size_t count,
+ 		if (res)
+ 			return res;
+ 
+-		addr = kmap_atomic(page);
+-		memcpy(addr + offset_in_page(pos), buf, n);
+-		kunmap_atomic(addr);
++		memcpy_to_page(page, offset_in_page(pos), buf, n);
+ 
+ 		res = pagecache_write_end(NULL, inode->i_mapping, pos, n, n,
+ 					  page, fsdata);
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 38531c5e16c60..6bf1c62eff04a 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -436,6 +436,21 @@ error:
+ 	return err;
+ }
+ 
++/* Remove entry from mbcache when EA inode is getting evicted */
++void ext4_evict_ea_inode(struct inode *inode)
++{
++	struct mb_cache_entry *oe;
++
++	if (!EA_INODE_CACHE(inode))
++		return;
++	/* Wait for entry to get unused so that we can remove it */
++	while ((oe = mb_cache_entry_delete_or_get(EA_INODE_CACHE(inode),
++			ext4_xattr_inode_get_hash(inode), inode->i_ino))) {
++		mb_cache_entry_wait_unused(oe);
++		mb_cache_entry_put(EA_INODE_CACHE(inode), oe);
++	}
++}
++
+ static int
+ ext4_xattr_inode_verify_hashes(struct inode *ea_inode,
+ 			       struct ext4_xattr_entry *entry, void *buffer,
+@@ -972,10 +987,8 @@ int __ext4_xattr_set_credits(struct super_block *sb, struct inode *inode,
+ static int ext4_xattr_inode_update_ref(handle_t *handle, struct inode *ea_inode,
+ 				       int ref_change)
+ {
+-	struct mb_cache *ea_inode_cache = EA_INODE_CACHE(ea_inode);
+ 	struct ext4_iloc iloc;
+ 	s64 ref_count;
+-	u32 hash;
+ 	int ret;
+ 
+ 	inode_lock(ea_inode);
+@@ -998,14 +1011,6 @@ static int ext4_xattr_inode_update_ref(handle_t *handle, struct inode *ea_inode,
+ 
+ 			set_nlink(ea_inode, 1);
+ 			ext4_orphan_del(handle, ea_inode);
+-
+-			if (ea_inode_cache) {
+-				hash = ext4_xattr_inode_get_hash(ea_inode);
+-				mb_cache_entry_create(ea_inode_cache,
+-						      GFP_NOFS, hash,
+-						      ea_inode->i_ino,
+-						      true /* reusable */);
+-			}
+ 		}
+ 	} else {
+ 		WARN_ONCE(ref_count < 0, "EA inode %lu ref_count=%lld",
+@@ -1018,12 +1023,6 @@ static int ext4_xattr_inode_update_ref(handle_t *handle, struct inode *ea_inode,
+ 
+ 			clear_nlink(ea_inode);
+ 			ext4_orphan_add(handle, ea_inode);
+-
+-			if (ea_inode_cache) {
+-				hash = ext4_xattr_inode_get_hash(ea_inode);
+-				mb_cache_entry_delete(ea_inode_cache, hash,
+-						      ea_inode->i_ino);
+-			}
+ 		}
+ 	}
+ 
+@@ -1231,6 +1230,7 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
+ 	if (error)
+ 		goto out;
+ 
++retry_ref:
+ 	lock_buffer(bh);
+ 	hash = le32_to_cpu(BHDR(bh)->h_hash);
+ 	ref = le32_to_cpu(BHDR(bh)->h_refcount);
+@@ -1240,9 +1240,18 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
+ 		 * This must happen under buffer lock for
+ 		 * ext4_xattr_block_set() to reliably detect freed block
+ 		 */
+-		if (ea_block_cache)
+-			mb_cache_entry_delete(ea_block_cache, hash,
+-					      bh->b_blocknr);
++		if (ea_block_cache) {
++			struct mb_cache_entry *oe;
++
++			oe = mb_cache_entry_delete_or_get(ea_block_cache, hash,
++							  bh->b_blocknr);
++			if (oe) {
++				unlock_buffer(bh);
++				mb_cache_entry_wait_unused(oe);
++				mb_cache_entry_put(ea_block_cache, oe);
++				goto retry_ref;
++			}
++		}
+ 		get_bh(bh);
+ 		unlock_buffer(bh);
+ 
+@@ -1266,7 +1275,7 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
+ 				ce = mb_cache_entry_get(ea_block_cache, hash,
+ 							bh->b_blocknr);
+ 				if (ce) {
+-					ce->e_reusable = 1;
++					set_bit(MBE_REUSABLE_B, &ce->e_flags);
+ 					mb_cache_entry_put(ea_block_cache, ce);
+ 				}
+ 			}
+@@ -1425,6 +1434,9 @@ static struct inode *ext4_xattr_inode_create(handle_t *handle,
+ 		if (!err)
+ 			err = ext4_inode_attach_jinode(ea_inode);
+ 		if (err) {
++			if (ext4_xattr_inode_dec_ref(handle, ea_inode))
++				ext4_warning_inode(ea_inode,
++					"cleanup dec ref error %d", err);
+ 			iput(ea_inode);
+ 			return ERR_PTR(err);
+ 		}
+@@ -1614,7 +1626,7 @@ static int ext4_xattr_set_entry(struct ext4_xattr_info *i,
+ 		 * If storing the value in an external inode is an option,
+ 		 * reserve space for xattr entries/names in the external
+ 		 * attribute block so that a long value does not occupy the
+-		 * whole space and prevent futher entries being added.
++		 * whole space and prevent further entries being added.
+ 		 */
+ 		if (ext4_has_feature_ea_inode(inode->i_sb) &&
+ 		    new_size && is_block &&
+@@ -1851,6 +1863,8 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
+ #define header(x) ((struct ext4_xattr_header *)(x))
+ 
+ 	if (s->base) {
++		int offset = (char *)s->here - bs->bh->b_data;
++
+ 		BUFFER_TRACE(bs->bh, "get_write_access");
+ 		error = ext4_journal_get_write_access(handle, bs->bh);
+ 		if (error)
+@@ -1865,9 +1879,20 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
+ 			 * ext4_xattr_block_set() to reliably detect modified
+ 			 * block
+ 			 */
+-			if (ea_block_cache)
+-				mb_cache_entry_delete(ea_block_cache, hash,
+-						      bs->bh->b_blocknr);
++			if (ea_block_cache) {
++				struct mb_cache_entry *oe;
++
++				oe = mb_cache_entry_delete_or_get(ea_block_cache,
++					hash, bs->bh->b_blocknr);
++				if (oe) {
++					/*
++					 * Xattr block is getting reused. Leave
++					 * it alone.
++					 */
++					mb_cache_entry_put(ea_block_cache, oe);
++					goto clone_block;
++				}
++			}
+ 			ea_bdebug(bs->bh, "modifying in-place");
+ 			error = ext4_xattr_set_entry(i, s, handle, inode,
+ 						     true /* is_block */);
+@@ -1882,50 +1907,47 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
+ 			if (error)
+ 				goto cleanup;
+ 			goto inserted;
+-		} else {
+-			int offset = (char *)s->here - bs->bh->b_data;
++		}
++clone_block:
++		unlock_buffer(bs->bh);
++		ea_bdebug(bs->bh, "cloning");
++		s->base = kmemdup(BHDR(bs->bh), bs->bh->b_size, GFP_NOFS);
++		error = -ENOMEM;
++		if (s->base == NULL)
++			goto cleanup;
++		s->first = ENTRY(header(s->base)+1);
++		header(s->base)->h_refcount = cpu_to_le32(1);
++		s->here = ENTRY(s->base + offset);
++		s->end = s->base + bs->bh->b_size;
+ 
+-			unlock_buffer(bs->bh);
+-			ea_bdebug(bs->bh, "cloning");
+-			s->base = kmalloc(bs->bh->b_size, GFP_NOFS);
+-			error = -ENOMEM;
+-			if (s->base == NULL)
++		/*
++		 * If existing entry points to an xattr inode, we need
++		 * to prevent ext4_xattr_set_entry() from decrementing
++		 * ref count on it because the reference belongs to the
++		 * original block. In this case, make the entry look
++		 * like it has an empty value.
++		 */
++		if (!s->not_found && s->here->e_value_inum) {
++			ea_ino = le32_to_cpu(s->here->e_value_inum);
++			error = ext4_xattr_inode_iget(inode, ea_ino,
++				      le32_to_cpu(s->here->e_hash),
++				      &tmp_inode);
++			if (error)
+ 				goto cleanup;
+-			memcpy(s->base, BHDR(bs->bh), bs->bh->b_size);
+-			s->first = ENTRY(header(s->base)+1);
+-			header(s->base)->h_refcount = cpu_to_le32(1);
+-			s->here = ENTRY(s->base + offset);
+-			s->end = s->base + bs->bh->b_size;
+-
+-			/*
+-			 * If existing entry points to an xattr inode, we need
+-			 * to prevent ext4_xattr_set_entry() from decrementing
+-			 * ref count on it because the reference belongs to the
+-			 * original block. In this case, make the entry look
+-			 * like it has an empty value.
+-			 */
+-			if (!s->not_found && s->here->e_value_inum) {
+-				ea_ino = le32_to_cpu(s->here->e_value_inum);
+-				error = ext4_xattr_inode_iget(inode, ea_ino,
+-					      le32_to_cpu(s->here->e_hash),
+-					      &tmp_inode);
+-				if (error)
+-					goto cleanup;
+ 
+-				if (!ext4_test_inode_state(tmp_inode,
+-						EXT4_STATE_LUSTRE_EA_INODE)) {
+-					/*
+-					 * Defer quota free call for previous
+-					 * inode until success is guaranteed.
+-					 */
+-					old_ea_inode_quota = le32_to_cpu(
+-							s->here->e_value_size);
+-				}
+-				iput(tmp_inode);
+-
+-				s->here->e_value_inum = 0;
+-				s->here->e_value_size = 0;
++			if (!ext4_test_inode_state(tmp_inode,
++					EXT4_STATE_LUSTRE_EA_INODE)) {
++				/*
++				 * Defer quota free call for previous
++				 * inode until success is guaranteed.
++				 */
++				old_ea_inode_quota = le32_to_cpu(
++						s->here->e_value_size);
+ 			}
++			iput(tmp_inode);
++
++			s->here->e_value_inum = 0;
++			s->here->e_value_size = 0;
+ 		}
+ 	} else {
+ 		/* Allocate a buffer where we construct the new block. */
+@@ -1992,18 +2014,13 @@ inserted:
+ 				lock_buffer(new_bh);
+ 				/*
+ 				 * We have to be careful about races with
+-				 * freeing, rehashing or adding references to
+-				 * xattr block. Once we hold buffer lock xattr
+-				 * block's state is stable so we can check
+-				 * whether the block got freed / rehashed or
+-				 * not.  Since we unhash mbcache entry under
+-				 * buffer lock when freeing / rehashing xattr
+-				 * block, checking whether entry is still
+-				 * hashed is reliable. Same rules hold for
+-				 * e_reusable handling.
++				 * adding references to xattr block. Once we
++				 * hold buffer lock xattr block's state is
++				 * stable so we can check the additional
++				 * reference fits.
+ 				 */
+-				if (hlist_bl_unhashed(&ce->e_hash_list) ||
+-				    !ce->e_reusable) {
++				ref = le32_to_cpu(BHDR(new_bh)->h_refcount) + 1;
++				if (ref > EXT4_XATTR_REFCOUNT_MAX) {
+ 					/*
+ 					 * Undo everything and check mbcache
+ 					 * again.
+@@ -2018,10 +2035,9 @@ inserted:
+ 					new_bh = NULL;
+ 					goto inserted;
+ 				}
+-				ref = le32_to_cpu(BHDR(new_bh)->h_refcount) + 1;
+ 				BHDR(new_bh)->h_refcount = cpu_to_le32(ref);
+-				if (ref >= EXT4_XATTR_REFCOUNT_MAX)
+-					ce->e_reusable = 0;
++				if (ref == EXT4_XATTR_REFCOUNT_MAX)
++					clear_bit(MBE_REUSABLE_B, &ce->e_flags);
+ 				ea_bdebug(new_bh, "reusing; refcount now=%d",
+ 					  ref);
+ 				ext4_xattr_block_csum_set(inode, new_bh);
+@@ -2049,19 +2065,11 @@ inserted:
+ 
+ 			goal = ext4_group_first_block_no(sb,
+ 						EXT4_I(inode)->i_block_group);
+-
+-			/* non-extent files can't have physical blocks past 2^32 */
+-			if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+-				goal = goal & EXT4_MAX_BLOCK_FILE_PHYS;
+-
+ 			block = ext4_new_meta_blocks(handle, inode, goal, 0,
+ 						     NULL, &error);
+ 			if (error)
+ 				goto cleanup;
+ 
+-			if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+-				BUG_ON(block > EXT4_MAX_BLOCK_FILE_PHYS);
+-
+ 			ea_idebug(inode, "creating block %llu",
+ 				  (unsigned long long)block);
+ 
+@@ -2556,7 +2564,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 
+ 	is = kzalloc(sizeof(struct ext4_xattr_ibody_find), GFP_NOFS);
+ 	bs = kzalloc(sizeof(struct ext4_xattr_block_find), GFP_NOFS);
+-	buffer = kmalloc(value_size, GFP_NOFS);
++	buffer = kvmalloc(value_size, GFP_NOFS);
+ 	b_entry_name = kmalloc(entry->e_name_len + 1, GFP_NOFS);
+ 	if (!is || !bs || !buffer || !b_entry_name) {
+ 		error = -ENOMEM;
+@@ -2608,7 +2616,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 	error = 0;
+ out:
+ 	kfree(b_entry_name);
+-	kfree(buffer);
++	kvfree(buffer);
+ 	if (is)
+ 		brelse(is->iloc.bh);
+ 	if (bs)
+diff --git a/fs/ext4/xattr.h b/fs/ext4/xattr.h
+index 87e5863bb4931..b357872ab83b4 100644
+--- a/fs/ext4/xattr.h
++++ b/fs/ext4/xattr.h
+@@ -191,6 +191,7 @@ extern void ext4_xattr_inode_array_free(struct ext4_xattr_inode_array *array);
+ 
+ extern int ext4_expand_extra_isize_ea(struct inode *inode, int new_extra_isize,
+ 			    struct ext4_inode *raw_inode, handle_t *handle);
++extern void ext4_evict_ea_inode(struct inode *inode);
+ 
+ extern const struct xattr_handler *ext4_xattr_handlers[];
+ 
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 3baa62ef6e3a3..ce6a2a247804d 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1008,6 +1008,7 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 	if (ofs_in_node >= max_addrs) {
+ 		f2fs_err(sbi, "Inconsistent ofs_in_node:%u in summary, ino:%u, nid:%u, max:%u",
+ 			ofs_in_node, dni->ino, dni->nid, max_addrs);
++		f2fs_put_page(node_page, 1);
+ 		return false;
+ 	}
+ 
+@@ -1649,8 +1650,9 @@ freed:
+ 				get_valid_blocks(sbi, segno, false) == 0)
+ 			seg_freed++;
+ 
+-		if (__is_large_section(sbi) && segno + 1 < end_segno)
+-			sbi->next_victim_seg[gc_type] = segno + 1;
++		if (__is_large_section(sbi))
++			sbi->next_victim_seg[gc_type] =
++				(segno + 1 < end_segno) ? segno + 1 : NULL_SEGNO;
+ skip:
+ 		f2fs_put_page(sum_page, 0);
+ 	}
+@@ -2035,8 +2037,6 @@ out_unlock:
+ 	if (err)
+ 		return err;
+ 
+-	set_sbi_flag(sbi, SBI_IS_RESIZEFS);
+-
+ 	freeze_super(sbi->sb);
+ 	down_write(&sbi->gc_lock);
+ 	mutex_lock(&sbi->cp_mutex);
+@@ -2052,6 +2052,7 @@ out_unlock:
+ 	if (err)
+ 		goto out_err;
+ 
++	set_sbi_flag(sbi, SBI_IS_RESIZEFS);
+ 	err = free_segment_range(sbi, secs, false);
+ 	if (err)
+ 		goto recover_out;
+@@ -2075,6 +2076,7 @@ out_unlock:
+ 		f2fs_commit_super(sbi, false);
+ 	}
+ recover_out:
++	clear_sbi_flag(sbi, SBI_IS_RESIZEFS);
+ 	if (err) {
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 		f2fs_err(sbi, "resize_fs failed, should run fsck to repair!");
+@@ -2087,6 +2089,5 @@ out_err:
+ 	mutex_unlock(&sbi->cp_mutex);
+ 	up_write(&sbi->gc_lock);
+ 	thaw_super(sbi->sb);
+-	clear_sbi_flag(sbi, SBI_IS_RESIZEFS);
+ 	return err;
+ }
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 68774d6198a59..7c90d93f4e435 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -1532,7 +1532,7 @@ retry:
+ 		if (i + 1 < dpolicy->granularity)
+ 			break;
+ 
+-		if (i < DEFAULT_DISCARD_GRANULARITY && dpolicy->ordered)
++		if (i + 1 < DEFAULT_DISCARD_GRANULARITY && dpolicy->ordered)
+ 			return __issue_discard_cmd_orderly(sbi, dpolicy);
+ 
+ 		pend_list = &dcc->pend_list[i];
+diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
+index f35a37c65e5ff..6fbde990e0566 100644
+--- a/fs/hfs/inode.c
++++ b/fs/hfs/inode.c
+@@ -454,14 +454,16 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ 		/* panic? */
+ 		return -EIO;
+ 
++	res = -EIO;
++	if (HFS_I(main_inode)->cat_key.CName.len > HFS_NAMELEN)
++		goto out;
+ 	fd.search_key->cat = HFS_I(main_inode)->cat_key;
+ 	if (hfs_brec_find(&fd))
+-		/* panic? */
+ 		goto out;
+ 
+ 	if (S_ISDIR(main_inode->i_mode)) {
+ 		if (fd.entrylength < sizeof(struct hfs_cat_dir))
+-			/* panic? */;
++			goto out;
+ 		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset,
+ 			   sizeof(struct hfs_cat_dir));
+ 		if (rec.type != HFS_CDR_DIR ||
+@@ -474,6 +476,8 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ 		hfs_bnode_write(fd.bnode, &rec, fd.entryoffset,
+ 			    sizeof(struct hfs_cat_dir));
+ 	} else if (HFS_IS_RSRC(inode)) {
++		if (fd.entrylength < sizeof(struct hfs_cat_file))
++			goto out;
+ 		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset,
+ 			       sizeof(struct hfs_cat_file));
+ 		hfs_inode_write_fork(inode, rec.file.RExtRec,
+@@ -482,7 +486,7 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ 				sizeof(struct hfs_cat_file));
+ 	} else {
+ 		if (fd.entrylength < sizeof(struct hfs_cat_file))
+-			/* panic? */;
++			goto out;
+ 		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset,
+ 			   sizeof(struct hfs_cat_file));
+ 		if (rec.type != HFS_CDR_FIL ||
+@@ -499,9 +503,10 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ 		hfs_bnode_write(fd.bnode, &rec, fd.entryoffset,
+ 			    sizeof(struct hfs_cat_file));
+ 	}
++	res = 0;
+ out:
+ 	hfs_find_exit(&fd);
+-	return 0;
++	return res;
+ }
+ 
+ static struct dentry *hfs_file_lookup(struct inode *dir, struct dentry *dentry,
+diff --git a/fs/hfs/trans.c b/fs/hfs/trans.c
+index 39f5e343bf4d4..fdb0edb8a607d 100644
+--- a/fs/hfs/trans.c
++++ b/fs/hfs/trans.c
+@@ -109,7 +109,7 @@ void hfs_asc2mac(struct super_block *sb, struct hfs_name *out, const struct qstr
+ 	if (nls_io) {
+ 		wchar_t ch;
+ 
+-		while (srclen > 0) {
++		while (srclen > 0 && dstlen > 0) {
+ 			size = nls_io->char2uni(src, srclen, &ch);
+ 			if (size < 0) {
+ 				ch = '?';
+diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h
+index a92de5199ec33..c438680ef9f75 100644
+--- a/fs/hfsplus/hfsplus_fs.h
++++ b/fs/hfsplus/hfsplus_fs.h
+@@ -198,6 +198,8 @@ struct hfsplus_sb_info {
+ #define HFSPLUS_SB_HFSX		3
+ #define HFSPLUS_SB_CASEFOLD	4
+ #define HFSPLUS_SB_NOBARRIER	5
++#define HFSPLUS_SB_UID		6
++#define HFSPLUS_SB_GID		7
+ 
+ static inline struct hfsplus_sb_info *HFSPLUS_SB(struct super_block *sb)
+ {
+diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c
+index e3da9e96b8357..c60d5ceb0d31c 100644
+--- a/fs/hfsplus/inode.c
++++ b/fs/hfsplus/inode.c
+@@ -187,11 +187,11 @@ static void hfsplus_get_perms(struct inode *inode,
+ 	mode = be16_to_cpu(perms->mode);
+ 
+ 	i_uid_write(inode, be32_to_cpu(perms->owner));
+-	if (!i_uid_read(inode) && !mode)
++	if ((test_bit(HFSPLUS_SB_UID, &sbi->flags)) || (!i_uid_read(inode) && !mode))
+ 		inode->i_uid = sbi->uid;
+ 
+ 	i_gid_write(inode, be32_to_cpu(perms->group));
+-	if (!i_gid_read(inode) && !mode)
++	if ((test_bit(HFSPLUS_SB_GID, &sbi->flags)) || (!i_gid_read(inode) && !mode))
+ 		inode->i_gid = sbi->gid;
+ 
+ 	if (dir) {
+@@ -497,8 +497,7 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
+ 	if (type == HFSPLUS_FOLDER) {
+ 		struct hfsplus_cat_folder *folder = &entry.folder;
+ 
+-		if (fd->entrylength < sizeof(struct hfsplus_cat_folder))
+-			/* panic? */;
++		WARN_ON(fd->entrylength < sizeof(struct hfsplus_cat_folder));
+ 		hfs_bnode_read(fd->bnode, &entry, fd->entryoffset,
+ 					sizeof(struct hfsplus_cat_folder));
+ 		hfsplus_get_perms(inode, &folder->permissions, 1);
+@@ -518,8 +517,7 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
+ 	} else if (type == HFSPLUS_FILE) {
+ 		struct hfsplus_cat_file *file = &entry.file;
+ 
+-		if (fd->entrylength < sizeof(struct hfsplus_cat_file))
+-			/* panic? */;
++		WARN_ON(fd->entrylength < sizeof(struct hfsplus_cat_file));
+ 		hfs_bnode_read(fd->bnode, &entry, fd->entryoffset,
+ 					sizeof(struct hfsplus_cat_file));
+ 
+@@ -576,8 +574,7 @@ int hfsplus_cat_write_inode(struct inode *inode)
+ 	if (S_ISDIR(main_inode->i_mode)) {
+ 		struct hfsplus_cat_folder *folder = &entry.folder;
+ 
+-		if (fd.entrylength < sizeof(struct hfsplus_cat_folder))
+-			/* panic? */;
++		WARN_ON(fd.entrylength < sizeof(struct hfsplus_cat_folder));
+ 		hfs_bnode_read(fd.bnode, &entry, fd.entryoffset,
+ 					sizeof(struct hfsplus_cat_folder));
+ 		/* simple node checks? */
+@@ -602,8 +599,7 @@ int hfsplus_cat_write_inode(struct inode *inode)
+ 	} else {
+ 		struct hfsplus_cat_file *file = &entry.file;
+ 
+-		if (fd.entrylength < sizeof(struct hfsplus_cat_file))
+-			/* panic? */;
++		WARN_ON(fd.entrylength < sizeof(struct hfsplus_cat_file));
+ 		hfs_bnode_read(fd.bnode, &entry, fd.entryoffset,
+ 					sizeof(struct hfsplus_cat_file));
+ 		hfsplus_inode_write_fork(inode, &file->data_fork);
+diff --git a/fs/hfsplus/options.c b/fs/hfsplus/options.c
+index 047e05c575601..c94a58762ad6d 100644
+--- a/fs/hfsplus/options.c
++++ b/fs/hfsplus/options.c
+@@ -140,6 +140,8 @@ int hfsplus_parse_options(char *input, struct hfsplus_sb_info *sbi)
+ 			if (!uid_valid(sbi->uid)) {
+ 				pr_err("invalid uid specified\n");
+ 				return 0;
++			} else {
++				set_bit(HFSPLUS_SB_UID, &sbi->flags);
+ 			}
+ 			break;
+ 		case opt_gid:
+@@ -151,6 +153,8 @@ int hfsplus_parse_options(char *input, struct hfsplus_sb_info *sbi)
+ 			if (!gid_valid(sbi->gid)) {
+ 				pr_err("invalid gid specified\n");
+ 				return 0;
++			} else {
++				set_bit(HFSPLUS_SB_GID, &sbi->flags);
+ 			}
+ 			break;
+ 		case opt_part:
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index a2f43f1a85f8d..5181e6d4e18ca 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -1261,7 +1261,7 @@ static int hugetlbfs_parse_param(struct fs_context *fc, struct fs_parameter *par
+ 
+ 	case Opt_size:
+ 		/* memparse() will accept a K/M/G without a digit */
+-		if (!isdigit(param->string[0]))
++		if (!param->string || !isdigit(param->string[0]))
+ 			goto bad_val;
+ 		ctx->max_size_opt = memparse(param->string, &rest);
+ 		ctx->max_val_type = SIZE_STD;
+@@ -1271,7 +1271,7 @@ static int hugetlbfs_parse_param(struct fs_context *fc, struct fs_parameter *par
+ 
+ 	case Opt_nr_inodes:
+ 		/* memparse() will accept a K/M/G without a digit */
+-		if (!isdigit(param->string[0]))
++		if (!param->string || !isdigit(param->string[0]))
+ 			goto bad_val;
+ 		ctx->nr_inodes = memparse(param->string, &rest);
+ 		return 0;
+@@ -1287,7 +1287,7 @@ static int hugetlbfs_parse_param(struct fs_context *fc, struct fs_parameter *par
+ 
+ 	case Opt_min_size:
+ 		/* memparse() will accept a K/M/G without a digit */
+-		if (!isdigit(param->string[0]))
++		if (!param->string || !isdigit(param->string[0]))
+ 			goto bad_val;
+ 		ctx->min_size_opt = memparse(param->string, &rest);
+ 		ctx->min_val_type = SIZE_STD;
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 0ce17ea8fa8a1..2c9493011aec3 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -155,7 +155,7 @@ int dbMount(struct inode *ipbmap)
+ 	struct bmap *bmp;
+ 	struct dbmap_disk *dbmp_le;
+ 	struct metapage *mp;
+-	int i;
++	int i, err;
+ 
+ 	/*
+ 	 * allocate/initialize the in-memory bmap descriptor
+@@ -170,8 +170,8 @@ int dbMount(struct inode *ipbmap)
+ 			   BMAPBLKNO << JFS_SBI(ipbmap->i_sb)->l2nbperpage,
+ 			   PSIZE, 0);
+ 	if (mp == NULL) {
+-		kfree(bmp);
+-		return -EIO;
++		err = -EIO;
++		goto err_kfree_bmp;
+ 	}
+ 
+ 	/* copy the on-disk bmap descriptor to its in-memory version. */
+@@ -181,9 +181,8 @@ int dbMount(struct inode *ipbmap)
+ 	bmp->db_l2nbperpage = le32_to_cpu(dbmp_le->dn_l2nbperpage);
+ 	bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
+ 	if (!bmp->db_numag) {
+-		release_metapage(mp);
+-		kfree(bmp);
+-		return -EINVAL;
++		err = -EINVAL;
++		goto err_release_metapage;
+ 	}
+ 
+ 	bmp->db_maxlevel = le32_to_cpu(dbmp_le->dn_maxlevel);
+@@ -194,6 +193,16 @@ int dbMount(struct inode *ipbmap)
+ 	bmp->db_agwidth = le32_to_cpu(dbmp_le->dn_agwidth);
+ 	bmp->db_agstart = le32_to_cpu(dbmp_le->dn_agstart);
+ 	bmp->db_agl2size = le32_to_cpu(dbmp_le->dn_agl2size);
++	if (bmp->db_agl2size > L2MAXL2SIZE - L2MAXAG) {
++		err = -EINVAL;
++		goto err_release_metapage;
++	}
++
++	if (((bmp->db_mapsize - 1) >> bmp->db_agl2size) > MAXAG) {
++		err = -EINVAL;
++		goto err_release_metapage;
++	}
++
+ 	for (i = 0; i < MAXAG; i++)
+ 		bmp->db_agfree[i] = le64_to_cpu(dbmp_le->dn_agfree[i]);
+ 	bmp->db_agsize = le64_to_cpu(dbmp_le->dn_agsize);
+@@ -214,6 +223,12 @@ int dbMount(struct inode *ipbmap)
+ 	BMAP_LOCK_INIT(bmp);
+ 
+ 	return (0);
++
++err_release_metapage:
++	release_metapage(mp);
++err_kfree_bmp:
++	kfree(bmp);
++	return err;
+ }
+ 
+ 
+diff --git a/fs/libfs.c b/fs/libfs.c
+index 7124c2e8df2f5..aa0fbd720409a 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -955,8 +955,8 @@ out:
+ EXPORT_SYMBOL_GPL(simple_attr_read);
+ 
+ /* interpret the buffer as a number to call the set function with */
+-ssize_t simple_attr_write(struct file *file, const char __user *buf,
+-			  size_t len, loff_t *ppos)
++static ssize_t simple_attr_write_xsigned(struct file *file, const char __user *buf,
++			  size_t len, loff_t *ppos, bool is_signed)
+ {
+ 	struct simple_attr *attr;
+ 	unsigned long long val;
+@@ -977,7 +977,10 @@ ssize_t simple_attr_write(struct file *file, const char __user *buf,
+ 		goto out;
+ 
+ 	attr->set_buf[size] = '\0';
+-	ret = kstrtoull(attr->set_buf, 0, &val);
++	if (is_signed)
++		ret = kstrtoll(attr->set_buf, 0, &val);
++	else
++		ret = kstrtoull(attr->set_buf, 0, &val);
+ 	if (ret)
+ 		goto out;
+ 	ret = attr->set(attr->data, val);
+@@ -987,8 +990,21 @@ out:
+ 	mutex_unlock(&attr->mutex);
+ 	return ret;
+ }
++
++ssize_t simple_attr_write(struct file *file, const char __user *buf,
++			  size_t len, loff_t *ppos)
++{
++	return simple_attr_write_xsigned(file, buf, len, ppos, false);
++}
+ EXPORT_SYMBOL_GPL(simple_attr_write);
+ 
++ssize_t simple_attr_write_signed(struct file *file, const char __user *buf,
++			  size_t len, loff_t *ppos)
++{
++	return simple_attr_write_xsigned(file, buf, len, ppos, true);
++}
++EXPORT_SYMBOL_GPL(simple_attr_write_signed);
++
+ /**
+  * generic_fh_to_dentry - generic helper for the fh_to_dentry export operation
+  * @sb:		filesystem to do the file handle conversion on
+diff --git a/fs/locks.c b/fs/locks.c
+index 32c948fe29448..12d72c3d8756c 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -2813,6 +2813,29 @@ int vfs_cancel_lock(struct file *filp, struct file_lock *fl)
+ }
+ EXPORT_SYMBOL_GPL(vfs_cancel_lock);
+ 
++/**
++ * vfs_inode_has_locks - are any file locks held on @inode?
++ * @inode: inode to check for locks
++ *
++ * Return true if there are any FL_POSIX or FL_FLOCK locks currently
++ * set on @inode.
++ */
++bool vfs_inode_has_locks(struct inode *inode)
++{
++	struct file_lock_context *ctx;
++	bool ret;
++
++	ctx = smp_load_acquire(&inode->i_flctx);
++	if (!ctx)
++		return false;
++
++	spin_lock(&ctx->flc_lock);
++	ret = !list_empty(&ctx->flc_posix) || !list_empty(&ctx->flc_flock);
++	spin_unlock(&ctx->flc_lock);
++	return ret;
++}
++EXPORT_SYMBOL_GPL(vfs_inode_has_locks);
++
+ #ifdef CONFIG_PROC_FS
+ #include <linux/proc_fs.h>
+ #include <linux/seq_file.h>
+diff --git a/fs/mbcache.c b/fs/mbcache.c
+index 97c54d3a22276..95b047256d093 100644
+--- a/fs/mbcache.c
++++ b/fs/mbcache.c
+@@ -11,7 +11,7 @@
+ /*
+  * Mbcache is a simple key-value store. Keys need not be unique, however
+  * key-value pairs are expected to be unique (we use this fact in
+- * mb_cache_entry_delete()).
++ * mb_cache_entry_delete_or_get()).
+  *
+  * Ext2 and ext4 use this cache for deduplication of extended attribute blocks.
+  * Ext4 also uses it for deduplication of xattr values stored in inodes.
+@@ -90,12 +90,19 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
+ 		return -ENOMEM;
+ 
+ 	INIT_LIST_HEAD(&entry->e_list);
+-	/* One ref for hash, one ref returned */
+-	atomic_set(&entry->e_refcnt, 1);
++	/*
++	 * We create entry with two references. One reference is kept by the
++	 * hash table, the other reference is used to protect us from
++	 * mb_cache_entry_delete_or_get() until the entry is fully setup. This
++	 * avoids nesting of cache->c_list_lock into hash table bit locks which
++	 * is problematic for RT.
++	 */
++	atomic_set(&entry->e_refcnt, 2);
+ 	entry->e_key = key;
+ 	entry->e_value = value;
+-	entry->e_reusable = reusable;
+-	entry->e_referenced = 0;
++	entry->e_flags = 0;
++	if (reusable)
++		set_bit(MBE_REUSABLE_B, &entry->e_flags);
+ 	head = mb_cache_entry_head(cache, key);
+ 	hlist_bl_lock(head);
+ 	hlist_bl_for_each_entry(dup, dup_node, head, e_hash_list) {
+@@ -107,24 +114,41 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
+ 	}
+ 	hlist_bl_add_head(&entry->e_hash_list, head);
+ 	hlist_bl_unlock(head);
+-
+ 	spin_lock(&cache->c_list_lock);
+ 	list_add_tail(&entry->e_list, &cache->c_list);
+-	/* Grab ref for LRU list */
+-	atomic_inc(&entry->e_refcnt);
+ 	cache->c_entry_count++;
+ 	spin_unlock(&cache->c_list_lock);
++	mb_cache_entry_put(cache, entry);
+ 
+ 	return 0;
+ }
+ EXPORT_SYMBOL(mb_cache_entry_create);
+ 
+-void __mb_cache_entry_free(struct mb_cache_entry *entry)
++void __mb_cache_entry_free(struct mb_cache *cache, struct mb_cache_entry *entry)
+ {
++	struct hlist_bl_head *head;
++
++	head = mb_cache_entry_head(cache, entry->e_key);
++	hlist_bl_lock(head);
++	hlist_bl_del(&entry->e_hash_list);
++	hlist_bl_unlock(head);
+ 	kmem_cache_free(mb_entry_cache, entry);
+ }
+ EXPORT_SYMBOL(__mb_cache_entry_free);
+ 
++/*
++ * mb_cache_entry_wait_unused - wait to be the last user of the entry
++ *
++ * @entry - entry to work on
++ *
++ * Wait to be the last user of the entry.
++ */
++void mb_cache_entry_wait_unused(struct mb_cache_entry *entry)
++{
++	wait_var_event(&entry->e_refcnt, atomic_read(&entry->e_refcnt) <= 2);
++}
++EXPORT_SYMBOL(mb_cache_entry_wait_unused);
++
+ static struct mb_cache_entry *__entry_find(struct mb_cache *cache,
+ 					   struct mb_cache_entry *entry,
+ 					   u32 key)
+@@ -142,10 +166,10 @@ static struct mb_cache_entry *__entry_find(struct mb_cache *cache,
+ 	while (node) {
+ 		entry = hlist_bl_entry(node, struct mb_cache_entry,
+ 				       e_hash_list);
+-		if (entry->e_key == key && entry->e_reusable) {
+-			atomic_inc(&entry->e_refcnt);
++		if (entry->e_key == key &&
++		    test_bit(MBE_REUSABLE_B, &entry->e_flags) &&
++		    atomic_inc_not_zero(&entry->e_refcnt))
+ 			goto out;
+-		}
+ 		node = node->next;
+ 	}
+ 	entry = NULL;
+@@ -205,10 +229,9 @@ struct mb_cache_entry *mb_cache_entry_get(struct mb_cache *cache, u32 key,
+ 	head = mb_cache_entry_head(cache, key);
+ 	hlist_bl_lock(head);
+ 	hlist_bl_for_each_entry(entry, node, head, e_hash_list) {
+-		if (entry->e_key == key && entry->e_value == value) {
+-			atomic_inc(&entry->e_refcnt);
++		if (entry->e_key == key && entry->e_value == value &&
++		    atomic_inc_not_zero(&entry->e_refcnt))
+ 			goto out;
+-		}
+ 	}
+ 	entry = NULL;
+ out:
+@@ -217,7 +240,7 @@ out:
+ }
+ EXPORT_SYMBOL(mb_cache_entry_get);
+ 
+-/* mb_cache_entry_delete - remove a cache entry
++/* mb_cache_entry_delete - try to remove a cache entry
+  * @cache - cache we work with
+  * @key - key
+  * @value - value
+@@ -254,6 +277,43 @@ void mb_cache_entry_delete(struct mb_cache *cache, u32 key, u64 value)
+ }
+ EXPORT_SYMBOL(mb_cache_entry_delete);
+ 
++/* mb_cache_entry_delete_or_get - remove a cache entry if it has no users
++ * @cache - cache we work with
++ * @key - key
++ * @value - value
++ *
++ * Remove entry from cache @cache with key @key and value @value. The removal
++ * happens only if the entry is unused. The function returns NULL in case the
++ * entry was successfully removed or there's no entry in cache. Otherwise the
++ * function grabs reference of the entry that we failed to delete because it
++ * still has users and return it.
++ */
++struct mb_cache_entry *mb_cache_entry_delete_or_get(struct mb_cache *cache,
++						    u32 key, u64 value)
++{
++	struct mb_cache_entry *entry;
++
++	entry = mb_cache_entry_get(cache, key, value);
++	if (!entry)
++		return NULL;
++
++	/*
++	 * Drop the ref we got from mb_cache_entry_get() and the initial hash
++	 * ref if we are the last user
++	 */
++	if (atomic_cmpxchg(&entry->e_refcnt, 2, 0) != 2)
++		return entry;
++
++	spin_lock(&cache->c_list_lock);
++	if (!list_empty(&entry->e_list))
++		list_del_init(&entry->e_list);
++	cache->c_entry_count--;
++	spin_unlock(&cache->c_list_lock);
++	__mb_cache_entry_free(cache, entry);
++	return NULL;
++}
++EXPORT_SYMBOL(mb_cache_entry_delete_or_get);
++
+ /* mb_cache_entry_touch - cache entry got used
+  * @cache - cache the entry belongs to
+  * @entry - entry that got used
+@@ -263,7 +323,7 @@ EXPORT_SYMBOL(mb_cache_entry_delete);
+ void mb_cache_entry_touch(struct mb_cache *cache,
+ 			  struct mb_cache_entry *entry)
+ {
+-	entry->e_referenced = 1;
++	set_bit(MBE_REFERENCED_B, &entry->e_flags);
+ }
+ EXPORT_SYMBOL(mb_cache_entry_touch);
+ 
+@@ -281,34 +341,24 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
+ 				     unsigned long nr_to_scan)
+ {
+ 	struct mb_cache_entry *entry;
+-	struct hlist_bl_head *head;
+ 	unsigned long shrunk = 0;
+ 
+ 	spin_lock(&cache->c_list_lock);
+ 	while (nr_to_scan-- && !list_empty(&cache->c_list)) {
+ 		entry = list_first_entry(&cache->c_list,
+ 					 struct mb_cache_entry, e_list);
+-		if (entry->e_referenced) {
+-			entry->e_referenced = 0;
++		/* Drop initial hash reference if there is no user */
++		if (test_bit(MBE_REFERENCED_B, &entry->e_flags) ||
++		    atomic_cmpxchg(&entry->e_refcnt, 1, 0) != 1) {
++			clear_bit(MBE_REFERENCED_B, &entry->e_flags);
+ 			list_move_tail(&entry->e_list, &cache->c_list);
+ 			continue;
+ 		}
+ 		list_del_init(&entry->e_list);
+ 		cache->c_entry_count--;
+-		/*
+-		 * We keep LRU list reference so that entry doesn't go away
+-		 * from under us.
+-		 */
+ 		spin_unlock(&cache->c_list_lock);
+-		head = mb_cache_entry_head(cache, entry->e_key);
+-		hlist_bl_lock(head);
+-		if (!hlist_bl_unhashed(&entry->e_hash_list)) {
+-			hlist_bl_del_init(&entry->e_hash_list);
+-			atomic_dec(&entry->e_refcnt);
+-		}
+-		hlist_bl_unlock(head);
+-		if (mb_cache_entry_put(cache, entry))
+-			shrunk++;
++		__mb_cache_entry_free(cache, entry);
++		shrunk++;
+ 		cond_resched();
+ 		spin_lock(&cache->c_list_lock);
+ 	}
+@@ -400,11 +450,6 @@ void mb_cache_destroy(struct mb_cache *cache)
+ 	 * point.
+ 	 */
+ 	list_for_each_entry_safe(entry, next, &cache->c_list, e_list) {
+-		if (!hlist_bl_unhashed(&entry->e_hash_list)) {
+-			hlist_bl_del_init(&entry->e_hash_list);
+-			atomic_dec(&entry->e_refcnt);
+-		} else
+-			WARN_ON(1);
+ 		list_del(&entry->e_list);
+ 		WARN_ON(atomic_read(&entry->e_refcnt) != 1);
+ 		mb_cache_entry_put(cache, entry);
+diff --git a/fs/nfs/namespace.c b/fs/nfs/namespace.c
+index 2bcbe38afe2e7..1f03445b5cb43 100644
+--- a/fs/nfs/namespace.c
++++ b/fs/nfs/namespace.c
+@@ -147,7 +147,7 @@ struct vfsmount *nfs_d_automount(struct path *path)
+ 	struct nfs_fs_context *ctx;
+ 	struct fs_context *fc;
+ 	struct vfsmount *mnt = ERR_PTR(-ENOMEM);
+-	struct nfs_server *server = NFS_SERVER(d_inode(path->dentry));
++	struct nfs_server *server = NFS_SB(path->dentry->d_sb);
+ 	struct nfs_client *client = server->nfs_client;
+ 	int timeout = READ_ONCE(nfs_mountpoint_expiry_timeout);
+ 	int ret;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 36af3734ac870..ee46ab09e3306 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -130,6 +130,11 @@ nfs4_label_init_security(struct inode *dir, struct dentry *dentry,
+ 	if (nfs_server_capable(dir, NFS_CAP_SECURITY_LABEL) == 0)
+ 		return NULL;
+ 
++	label->lfs = 0;
++	label->pi = 0;
++	label->len = 0;
++	label->label = NULL;
++
+ 	err = security_dentry_init_security(dentry, sattr->ia_mode,
+ 				&dentry->d_name, (void **)&label->label, &label->len);
+ 	if (err == 0)
+@@ -2121,18 +2126,18 @@ static struct nfs4_opendata *nfs4_open_recoverdata_alloc(struct nfs_open_context
+ }
+ 
+ static int nfs4_open_recover_helper(struct nfs4_opendata *opendata,
+-		fmode_t fmode)
++				    fmode_t fmode)
+ {
+ 	struct nfs4_state *newstate;
++	struct nfs_server *server = NFS_SB(opendata->dentry->d_sb);
++	int openflags = opendata->o_arg.open_flags;
+ 	int ret;
+ 
+ 	if (!nfs4_mode_match_open_stateid(opendata->state, fmode))
+ 		return 0;
+-	opendata->o_arg.open_flags = 0;
+ 	opendata->o_arg.fmode = fmode;
+-	opendata->o_arg.share_access = nfs4_map_atomic_open_share(
+-			NFS_SB(opendata->dentry->d_sb),
+-			fmode, 0);
++	opendata->o_arg.share_access =
++		nfs4_map_atomic_open_share(server, fmode, openflags);
+ 	memset(&opendata->o_res, 0, sizeof(opendata->o_res));
+ 	memset(&opendata->c_res, 0, sizeof(opendata->c_res));
+ 	nfs4_init_opendata_res(opendata);
+@@ -2708,10 +2713,15 @@ static int _nfs4_open_expired(struct nfs_open_context *ctx, struct nfs4_state *s
+ 	struct nfs4_opendata *opendata;
+ 	int ret;
+ 
+-	opendata = nfs4_open_recoverdata_alloc(ctx, state,
+-			NFS4_OPEN_CLAIM_FH);
++	opendata = nfs4_open_recoverdata_alloc(ctx, state, NFS4_OPEN_CLAIM_FH);
+ 	if (IS_ERR(opendata))
+ 		return PTR_ERR(opendata);
++	/*
++	 * We're not recovering a delegation, so ask for no delegation.
++	 * Otherwise the recovery thread could deadlock with an outstanding
++	 * delegation return.
++	 */
++	opendata->o_arg.open_flags = O_DIRECT;
+ 	ret = nfs4_open_recover(opendata, state);
+ 	if (ret == -ESTALE)
+ 		d_drop(ctx->dentry);
+@@ -3793,7 +3803,7 @@ nfs4_atomic_open(struct inode *dir, struct nfs_open_context *ctx,
+ 		int open_flags, struct iattr *attr, int *opened)
+ {
+ 	struct nfs4_state *state;
+-	struct nfs4_label l = {0, 0, 0, NULL}, *label = NULL;
++	struct nfs4_label l, *label;
+ 
+ 	label = nfs4_label_init_security(dir, ctx->dentry, attr, &l);
+ 
+@@ -4557,7 +4567,7 @@ nfs4_proc_create(struct inode *dir, struct dentry *dentry, struct iattr *sattr,
+ 		 int flags)
+ {
+ 	struct nfs_server *server = NFS_SERVER(dir);
+-	struct nfs4_label l, *ilabel = NULL;
++	struct nfs4_label l, *ilabel;
+ 	struct nfs_open_context *ctx;
+ 	struct nfs4_state *state;
+ 	int status = 0;
+@@ -4916,7 +4926,7 @@ static int nfs4_proc_symlink(struct inode *dir, struct dentry *dentry,
+ 	struct nfs4_exception exception = {
+ 		.interruptible = true,
+ 	};
+-	struct nfs4_label l, *label = NULL;
++	struct nfs4_label l, *label;
+ 	int err;
+ 
+ 	label = nfs4_label_init_security(dir, dentry, sattr, &l);
+@@ -4957,7 +4967,7 @@ static int nfs4_proc_mkdir(struct inode *dir, struct dentry *dentry,
+ 	struct nfs4_exception exception = {
+ 		.interruptible = true,
+ 	};
+-	struct nfs4_label l, *label = NULL;
++	struct nfs4_label l, *label;
+ 	int err;
+ 
+ 	label = nfs4_label_init_security(dir, dentry, sattr, &l);
+@@ -5078,7 +5088,7 @@ static int nfs4_proc_mknod(struct inode *dir, struct dentry *dentry,
+ 	struct nfs4_exception exception = {
+ 		.interruptible = true,
+ 	};
+-	struct nfs4_label l, *label = NULL;
++	struct nfs4_label l, *label;
+ 	int err;
+ 
+ 	label = nfs4_label_init_security(dir, dentry, sattr, &l);
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index a77a3d8c0b3f4..175b2e064003e 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1226,6 +1226,8 @@ void nfs4_schedule_state_manager(struct nfs_client *clp)
+ 	if (IS_ERR(task)) {
+ 		printk(KERN_ERR "%s: kthread_run: %ld\n",
+ 			__func__, PTR_ERR(task));
++		if (!nfs_client_init_is_complete(clp))
++			nfs_mark_client_ready(clp, PTR_ERR(task));
+ 		nfs4_clear_state_manager_bit(clp);
+ 		nfs_put_client(clp);
+ 		module_put(THIS_MODULE);
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index e2f0e3446e22a..f1e599553f2be 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -4166,19 +4166,17 @@ static int decode_attr_security_label(struct xdr_stream *xdr, uint32_t *bitmap,
+ 		p = xdr_inline_decode(xdr, len);
+ 		if (unlikely(!p))
+ 			return -EIO;
++		bitmap[2] &= ~FATTR4_WORD2_SECURITY_LABEL;
+ 		if (len < NFS4_MAXLABELLEN) {
+-			if (label) {
+-				if (label->len) {
+-					if (label->len < len)
+-						return -ERANGE;
+-					memcpy(label->label, p, len);
+-				}
++			if (label && label->len) {
++				if (label->len < len)
++					return -ERANGE;
++				memcpy(label->label, p, len);
+ 				label->len = len;
+ 				label->pi = pi;
+ 				label->lfs = lfs;
+ 				status = NFS_ATTR_FATTR_V4_SECURITY_LABEL;
+ 			}
+-			bitmap[2] &= ~FATTR4_WORD2_SECURITY_LABEL;
+ 		} else
+ 			printk(KERN_WARNING "%s: label too long (%u)!\n",
+ 					__func__, len);
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index 7325592b456e5..af2064e36ac61 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -915,11 +915,8 @@ static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *c
+ 		args.authflavor = clp->cl_cred.cr_flavor;
+ 		clp->cl_cb_ident = conn->cb_ident;
+ 	} else {
+-		if (!conn->cb_xprt) {
+-			trace_nfsd_cb_setup_err(clp, -EINVAL);
++		if (!conn->cb_xprt)
+ 			return -EINVAL;
+-		}
+-		clp->cl_cb_conn.cb_xprt = conn->cb_xprt;
+ 		clp->cl_cb_session = ses;
+ 		args.bc_xprt = conn->cb_xprt;
+ 		args.prognumber = clp->cl_cb_session->se_cb_prog;
+@@ -939,6 +936,9 @@ static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *c
+ 		rpc_shutdown_client(client);
+ 		return -ENOMEM;
+ 	}
++
++	if (clp->cl_minorversion != 0)
++		clp->cl_cb_conn.cb_xprt = conn->cb_xprt;
+ 	clp->cl_cb_client = client;
+ 	clp->cl_cb_cred = cred;
+ 	trace_nfsd_cb_setup(clp);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 665d0eaeb8dbf..9a47cc66963f0 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -507,15 +507,26 @@ find_any_file(struct nfs4_file *f)
+ 	return ret;
+ }
+ 
+-static struct nfsd_file *find_deleg_file(struct nfs4_file *f)
++static struct nfsd_file *find_any_file_locked(struct nfs4_file *f)
+ {
+-	struct nfsd_file *ret = NULL;
++	lockdep_assert_held(&f->fi_lock);
++
++	if (f->fi_fds[O_RDWR])
++		return f->fi_fds[O_RDWR];
++	if (f->fi_fds[O_WRONLY])
++		return f->fi_fds[O_WRONLY];
++	if (f->fi_fds[O_RDONLY])
++		return f->fi_fds[O_RDONLY];
++	return NULL;
++}
++
++static struct nfsd_file *find_deleg_file_locked(struct nfs4_file *f)
++{
++	lockdep_assert_held(&f->fi_lock);
+ 
+-	spin_lock(&f->fi_lock);
+ 	if (f->fi_deleg_file)
+-		ret = nfsd_file_get(f->fi_deleg_file);
+-	spin_unlock(&f->fi_lock);
+-	return ret;
++		return f->fi_deleg_file;
++	return NULL;
+ }
+ 
+ static atomic_long_t num_delegations;
+@@ -2462,9 +2473,11 @@ static int nfs4_show_open(struct seq_file *s, struct nfs4_stid *st)
+ 	ols = openlockstateid(st);
+ 	oo = ols->st_stateowner;
+ 	nf = st->sc_file;
+-	file = find_any_file(nf);
++
++	spin_lock(&nf->fi_lock);
++	file = find_any_file_locked(nf);
+ 	if (!file)
+-		return 0;
++		goto out;
+ 
+ 	seq_printf(s, "- ");
+ 	nfs4_show_stateid(s, &st->sc_stateid);
+@@ -2486,8 +2499,8 @@ static int nfs4_show_open(struct seq_file *s, struct nfs4_stid *st)
+ 	seq_printf(s, ", ");
+ 	nfs4_show_owner(s, oo);
+ 	seq_printf(s, " }\n");
+-	nfsd_file_put(file);
+-
++out:
++	spin_unlock(&nf->fi_lock);
+ 	return 0;
+ }
+ 
+@@ -2501,9 +2514,10 @@ static int nfs4_show_lock(struct seq_file *s, struct nfs4_stid *st)
+ 	ols = openlockstateid(st);
+ 	oo = ols->st_stateowner;
+ 	nf = st->sc_file;
+-	file = find_any_file(nf);
++	spin_lock(&nf->fi_lock);
++	file = find_any_file_locked(nf);
+ 	if (!file)
+-		return 0;
++		goto out;
+ 
+ 	seq_printf(s, "- ");
+ 	nfs4_show_stateid(s, &st->sc_stateid);
+@@ -2523,8 +2537,8 @@ static int nfs4_show_lock(struct seq_file *s, struct nfs4_stid *st)
+ 	seq_printf(s, ", ");
+ 	nfs4_show_owner(s, oo);
+ 	seq_printf(s, " }\n");
+-	nfsd_file_put(file);
+-
++out:
++	spin_unlock(&nf->fi_lock);
+ 	return 0;
+ }
+ 
+@@ -2536,9 +2550,10 @@ static int nfs4_show_deleg(struct seq_file *s, struct nfs4_stid *st)
+ 
+ 	ds = delegstateid(st);
+ 	nf = st->sc_file;
+-	file = find_deleg_file(nf);
++	spin_lock(&nf->fi_lock);
++	file = find_deleg_file_locked(nf);
+ 	if (!file)
+-		return 0;
++		goto out;
+ 
+ 	seq_printf(s, "- ");
+ 	nfs4_show_stateid(s, &st->sc_stateid);
+@@ -2554,8 +2569,8 @@ static int nfs4_show_deleg(struct seq_file *s, struct nfs4_stid *st)
+ 	seq_printf(s, ", ");
+ 	nfs4_show_fname(s, file);
+ 	seq_printf(s, " }\n");
+-	nfsd_file_put(file);
+-
++out:
++	spin_unlock(&nf->fi_lock);
+ 	return 0;
+ }
+ 
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index cc605ee0b2fae..1e3410a9b03eb 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3405,6 +3405,17 @@ nfsd4_encode_dirent(void *ccdv, const char *name, int namlen,
+ 	case nfserr_noent:
+ 		xdr_truncate_encode(xdr, start_offset);
+ 		goto skip_entry;
++	case nfserr_jukebox:
++		/*
++		 * The pseudoroot should only display dentries that lead to
++		 * exports. If we get EJUKEBOX here, then we can't tell whether
++		 * this entry should be included. Just fail the whole READDIR
++		 * with NFS4ERR_DELAY in that case, and hope that the situation
++		 * will resolve itself by the client's next attempt.
++		 */
++		if (cd->rd_fhp->fh_export->ex_flags & NFSEXP_V4ROOT)
++			goto fail;
++		fallthrough;
+ 	default:
+ 		/*
+ 		 * If the client requested the RDATTR_ERROR attribute,
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index 9323e30a7eafe..c7fffe1453bd1 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -426,8 +426,8 @@ static void nfsd_shutdown_net(struct net *net)
+ {
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 
+-	nfsd_file_cache_shutdown_net(net);
+ 	nfs4_state_shutdown_net(net);
++	nfsd_file_cache_shutdown_net(net);
+ 	if (nn->lockd_up) {
+ 		lockd_down(net);
+ 		nn->lockd_up = false;
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index ce103dd39b899..211937054c31f 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -13,6 +13,7 @@
+ #include <linux/blkdev.h>
+ #include <linux/backing-dev.h>
+ #include <linux/random.h>
++#include <linux/log2.h>
+ #include <linux/crc32.h>
+ #include "nilfs.h"
+ #include "segment.h"
+@@ -192,6 +193,34 @@ static int nilfs_store_log_cursor(struct the_nilfs *nilfs,
+ 	return ret;
+ }
+ 
++/**
++ * nilfs_get_blocksize - get block size from raw superblock data
++ * @sb: super block instance
++ * @sbp: superblock raw data buffer
++ * @blocksize: place to store block size
++ *
++ * nilfs_get_blocksize() calculates the block size from the block size
++ * exponent information written in @sbp and stores it in @blocksize,
++ * or aborts with an error message if it's too large.
++ *
++ * Return Value: On success, 0 is returned. If the block size is too
++ * large, -EINVAL is returned.
++ */
++static int nilfs_get_blocksize(struct super_block *sb,
++			       struct nilfs_super_block *sbp, int *blocksize)
++{
++	unsigned int shift_bits = le32_to_cpu(sbp->s_log_block_size);
++
++	if (unlikely(shift_bits >
++		     ilog2(NILFS_MAX_BLOCK_SIZE) - BLOCK_SIZE_BITS)) {
++		nilfs_err(sb, "too large filesystem blocksize: 2 ^ %u KiB",
++			  shift_bits);
++		return -EINVAL;
++	}
++	*blocksize = BLOCK_SIZE << shift_bits;
++	return 0;
++}
++
+ /**
+  * load_nilfs - load and recover the nilfs
+  * @nilfs: the_nilfs structure to be released
+@@ -245,11 +274,15 @@ int load_nilfs(struct the_nilfs *nilfs, struct super_block *sb)
+ 		nilfs->ns_sbwtime = le64_to_cpu(sbp[0]->s_wtime);
+ 
+ 		/* verify consistency between two super blocks */
+-		blocksize = BLOCK_SIZE << le32_to_cpu(sbp[0]->s_log_block_size);
++		err = nilfs_get_blocksize(sb, sbp[0], &blocksize);
++		if (err)
++			goto scan_error;
++
+ 		if (blocksize != nilfs->ns_blocksize) {
+ 			nilfs_warn(sb,
+ 				   "blocksize differs between two super blocks (%d != %d)",
+ 				   blocksize, nilfs->ns_blocksize);
++			err = -EINVAL;
+ 			goto scan_error;
+ 		}
+ 
+@@ -443,11 +476,33 @@ static int nilfs_valid_sb(struct nilfs_super_block *sbp)
+ 	return crc == le32_to_cpu(sbp->s_sum);
+ }
+ 
+-static int nilfs_sb2_bad_offset(struct nilfs_super_block *sbp, u64 offset)
++/**
++ * nilfs_sb2_bad_offset - check the location of the second superblock
++ * @sbp: superblock raw data buffer
++ * @offset: byte offset of second superblock calculated from device size
++ *
++ * nilfs_sb2_bad_offset() checks if the position on the second
++ * superblock is valid or not based on the filesystem parameters
++ * stored in @sbp.  If @offset points to a location within the segment
++ * area, or if the parameters themselves are not normal, it is
++ * determined to be invalid.
++ *
++ * Return Value: true if invalid, false if valid.
++ */
++static bool nilfs_sb2_bad_offset(struct nilfs_super_block *sbp, u64 offset)
+ {
+-	return offset < ((le64_to_cpu(sbp->s_nsegments) *
+-			  le32_to_cpu(sbp->s_blocks_per_segment)) <<
+-			 (le32_to_cpu(sbp->s_log_block_size) + 10));
++	unsigned int shift_bits = le32_to_cpu(sbp->s_log_block_size);
++	u32 blocks_per_segment = le32_to_cpu(sbp->s_blocks_per_segment);
++	u64 nsegments = le64_to_cpu(sbp->s_nsegments);
++	u64 index;
++
++	if (blocks_per_segment < NILFS_SEG_MIN_BLOCKS ||
++	    shift_bits > ilog2(NILFS_MAX_BLOCK_SIZE) - BLOCK_SIZE_BITS)
++		return true;
++
++	index = offset >> (shift_bits + BLOCK_SIZE_BITS);
++	do_div(index, blocks_per_segment);
++	return index < nsegments;
+ }
+ 
+ static void nilfs_release_super_block(struct the_nilfs *nilfs)
+@@ -586,9 +641,11 @@ int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb, char *data)
+ 	if (err)
+ 		goto failed_sbh;
+ 
+-	blocksize = BLOCK_SIZE << le32_to_cpu(sbp->s_log_block_size);
+-	if (blocksize < NILFS_MIN_BLOCK_SIZE ||
+-	    blocksize > NILFS_MAX_BLOCK_SIZE) {
++	err = nilfs_get_blocksize(sb, sbp, &blocksize);
++	if (err)
++		goto failed_sbh;
++
++	if (blocksize < NILFS_MIN_BLOCK_SIZE) {
+ 		nilfs_err(sb,
+ 			  "couldn't mount because of unsupported filesystem blocksize %d",
+ 			  blocksize);
+diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
+index db52e843002a0..0534800a472a1 100644
+--- a/fs/ocfs2/journal.c
++++ b/fs/ocfs2/journal.c
+@@ -159,7 +159,7 @@ static void ocfs2_queue_replay_slots(struct ocfs2_super *osb,
+ 	replay_map->rm_state = REPLAY_DONE;
+ }
+ 
+-static void ocfs2_free_replay_slots(struct ocfs2_super *osb)
++void ocfs2_free_replay_slots(struct ocfs2_super *osb)
+ {
+ 	struct ocfs2_replay_map *replay_map = osb->replay_map;
+ 
+diff --git a/fs/ocfs2/journal.h b/fs/ocfs2/journal.h
+index bfe611ed1b1d7..eb7a21bac71ef 100644
+--- a/fs/ocfs2/journal.h
++++ b/fs/ocfs2/journal.h
+@@ -152,6 +152,7 @@ int ocfs2_recovery_init(struct ocfs2_super *osb);
+ void ocfs2_recovery_exit(struct ocfs2_super *osb);
+ 
+ int ocfs2_compute_replay_slots(struct ocfs2_super *osb);
++void ocfs2_free_replay_slots(struct ocfs2_super *osb);
+ /*
+  *  Journal Control:
+  *  Initialize, Load, Shutdown, Wipe a journal.
+diff --git a/fs/ocfs2/stackglue.c b/fs/ocfs2/stackglue.c
+index 03eacb249f379..5e272e257b0bb 100644
+--- a/fs/ocfs2/stackglue.c
++++ b/fs/ocfs2/stackglue.c
+@@ -705,6 +705,8 @@ static struct ctl_table_header *ocfs2_table_header;
+ 
+ static int __init ocfs2_stack_glue_init(void)
+ {
++	int ret;
++
+ 	strcpy(cluster_stack_name, OCFS2_STACK_PLUGIN_O2CB);
+ 
+ 	ocfs2_table_header = register_sysctl_table(ocfs2_root_table);
+@@ -714,7 +716,11 @@ static int __init ocfs2_stack_glue_init(void)
+ 		return -ENOMEM; /* or something. */
+ 	}
+ 
+-	return ocfs2_sysfs_init();
++	ret = ocfs2_sysfs_init();
++	if (ret)
++		unregister_sysctl_table(ocfs2_table_header);
++
++	return ret;
+ }
+ 
+ static void __exit ocfs2_stack_glue_exit(void)
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index c0e5f1bad499f..3e0b2e3e00ad1 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -984,28 +984,27 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	if (!ocfs2_parse_options(sb, data, &parsed_options, 0)) {
+ 		status = -EINVAL;
+-		goto read_super_error;
++		goto out;
+ 	}
+ 
+ 	/* probe for superblock */
+ 	status = ocfs2_sb_probe(sb, &bh, &sector_size, &stats);
+ 	if (status < 0) {
+ 		mlog(ML_ERROR, "superblock probe failed!\n");
+-		goto read_super_error;
++		goto out;
+ 	}
+ 
+ 	status = ocfs2_initialize_super(sb, bh, sector_size, &stats);
+-	osb = OCFS2_SB(sb);
+-	if (status < 0) {
+-		mlog_errno(status);
+-		goto read_super_error;
+-	}
+ 	brelse(bh);
+ 	bh = NULL;
++	if (status < 0)
++		goto out;
++
++	osb = OCFS2_SB(sb);
+ 
+ 	if (!ocfs2_check_set_options(sb, &parsed_options)) {
+ 		status = -EINVAL;
+-		goto read_super_error;
++		goto out_super;
+ 	}
+ 	osb->s_mount_opt = parsed_options.mount_opt;
+ 	osb->s_atime_quantum = parsed_options.atime_quantum;
+@@ -1022,7 +1021,7 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	status = ocfs2_verify_userspace_stack(osb, &parsed_options);
+ 	if (status)
+-		goto read_super_error;
++		goto out_super;
+ 
+ 	sb->s_magic = OCFS2_SUPER_MAGIC;
+ 
+@@ -1036,7 +1035,7 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 			status = -EACCES;
+ 			mlog(ML_ERROR, "Readonly device detected but readonly "
+ 			     "mount was not specified.\n");
+-			goto read_super_error;
++			goto out_super;
+ 		}
+ 
+ 		/* You should not be able to start a local heartbeat
+@@ -1045,7 +1044,7 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 			status = -EROFS;
+ 			mlog(ML_ERROR, "Local heartbeat specified on readonly "
+ 			     "device.\n");
+-			goto read_super_error;
++			goto out_super;
+ 		}
+ 
+ 		status = ocfs2_check_journals_nolocks(osb);
+@@ -1054,9 +1053,7 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 				mlog(ML_ERROR, "Recovery required on readonly "
+ 				     "file system, but write access is "
+ 				     "unavailable.\n");
+-			else
+-				mlog_errno(status);
+-			goto read_super_error;
++			goto out_super;
+ 		}
+ 
+ 		ocfs2_set_ro_flag(osb, 1);
+@@ -1072,10 +1069,8 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 	}
+ 
+ 	status = ocfs2_verify_heartbeat(osb);
+-	if (status < 0) {
+-		mlog_errno(status);
+-		goto read_super_error;
+-	}
++	if (status < 0)
++		goto out_super;
+ 
+ 	osb->osb_debug_root = debugfs_create_dir(osb->uuid_str,
+ 						 ocfs2_debugfs_root);
+@@ -1089,15 +1084,14 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	status = ocfs2_mount_volume(sb);
+ 	if (status < 0)
+-		goto read_super_error;
++		goto out_debugfs;
+ 
+ 	if (osb->root_inode)
+ 		inode = igrab(osb->root_inode);
+ 
+ 	if (!inode) {
+ 		status = -EIO;
+-		mlog_errno(status);
+-		goto read_super_error;
++		goto out_dismount;
+ 	}
+ 
+ 	osb->osb_dev_kset = kset_create_and_add(sb->s_id, NULL,
+@@ -1105,7 +1099,7 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 	if (!osb->osb_dev_kset) {
+ 		status = -ENOMEM;
+ 		mlog(ML_ERROR, "Unable to create device kset %s.\n", sb->s_id);
+-		goto read_super_error;
++		goto out_dismount;
+ 	}
+ 
+ 	/* Create filecheck sysfs related directories/files at
+@@ -1114,14 +1108,13 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 		status = -ENOMEM;
+ 		mlog(ML_ERROR, "Unable to create filecheck sysfs directory at "
+ 			"/sys/fs/ocfs2/%s/filecheck.\n", sb->s_id);
+-		goto read_super_error;
++		goto out_dismount;
+ 	}
+ 
+ 	root = d_make_root(inode);
+ 	if (!root) {
+ 		status = -ENOMEM;
+-		mlog_errno(status);
+-		goto read_super_error;
++		goto out_dismount;
+ 	}
+ 
+ 	sb->s_root = root;
+@@ -1168,17 +1161,22 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	return status;
+ 
+-read_super_error:
+-	brelse(bh);
+-
+-	if (status)
+-		mlog_errno(status);
++out_dismount:
++	atomic_set(&osb->vol_state, VOLUME_DISABLED);
++	wake_up(&osb->osb_mount_event);
++	ocfs2_free_replay_slots(osb);
++	ocfs2_dismount_volume(sb, 1);
++	goto out;
+ 
+-	if (osb) {
+-		atomic_set(&osb->vol_state, VOLUME_DISABLED);
+-		wake_up(&osb->osb_mount_event);
+-		ocfs2_dismount_volume(sb, 1);
+-	}
++out_debugfs:
++	debugfs_remove_recursive(osb->osb_debug_root);
++out_super:
++	ocfs2_release_system_inodes(osb);
++	kfree(osb->recovery_map);
++	ocfs2_delete_osb(osb);
++	kfree(osb);
++out:
++	mlog_errno(status);
+ 
+ 	return status;
+ }
+@@ -1787,11 +1785,10 @@ static int ocfs2_get_sector(struct super_block *sb,
+ static int ocfs2_mount_volume(struct super_block *sb)
+ {
+ 	int status = 0;
+-	int unlock_super = 0;
+ 	struct ocfs2_super *osb = OCFS2_SB(sb);
+ 
+ 	if (ocfs2_is_hard_readonly(osb))
+-		goto leave;
++		goto out;
+ 
+ 	mutex_init(&osb->obs_trim_fs_mutex);
+ 
+@@ -1801,44 +1798,58 @@ static int ocfs2_mount_volume(struct super_block *sb)
+ 		if (status == -EBADR && ocfs2_userspace_stack(osb))
+ 			mlog(ML_ERROR, "couldn't mount because cluster name on"
+ 			" disk does not match the running cluster name.\n");
+-		goto leave;
++		goto out;
+ 	}
+ 
+ 	status = ocfs2_super_lock(osb, 1);
+ 	if (status < 0) {
+ 		mlog_errno(status);
+-		goto leave;
++		goto out_dlm;
+ 	}
+-	unlock_super = 1;
+ 
+ 	/* This will load up the node map and add ourselves to it. */
+ 	status = ocfs2_find_slot(osb);
+ 	if (status < 0) {
+ 		mlog_errno(status);
+-		goto leave;
++		goto out_super_lock;
+ 	}
+ 
+ 	/* load all node-local system inodes */
+ 	status = ocfs2_init_local_system_inodes(osb);
+ 	if (status < 0) {
+ 		mlog_errno(status);
+-		goto leave;
++		goto out_super_lock;
+ 	}
+ 
+ 	status = ocfs2_check_volume(osb);
+ 	if (status < 0) {
+ 		mlog_errno(status);
+-		goto leave;
++		goto out_system_inodes;
+ 	}
+ 
+ 	status = ocfs2_truncate_log_init(osb);
+-	if (status < 0)
++	if (status < 0) {
+ 		mlog_errno(status);
++		goto out_check_volume;
++	}
+ 
+-leave:
+-	if (unlock_super)
+-		ocfs2_super_unlock(osb, 1);
++	ocfs2_super_unlock(osb, 1);
++	return 0;
+ 
++out_check_volume:
++	ocfs2_free_replay_slots(osb);
++out_system_inodes:
++	if (osb->local_alloc_state == OCFS2_LA_ENABLED)
++		ocfs2_shutdown_local_alloc(osb);
++	ocfs2_release_system_inodes(osb);
++	/* before journal shutdown, we should release slot_info */
++	ocfs2_free_slot_info(osb);
++	ocfs2_journal_shutdown(osb);
++out_super_lock:
++	ocfs2_super_unlock(osb, 1);
++out_dlm:
++	ocfs2_dlm_shutdown(osb, 0);
++out:
+ 	return status;
+ }
+ 
+diff --git a/fs/orangefs/orangefs-debugfs.c b/fs/orangefs/orangefs-debugfs.c
+index 29eaa45443727..1b508f5433846 100644
+--- a/fs/orangefs/orangefs-debugfs.c
++++ b/fs/orangefs/orangefs-debugfs.c
+@@ -194,15 +194,10 @@ void orangefs_debugfs_init(int debug_mask)
+  */
+ static void orangefs_kernel_debug_init(void)
+ {
+-	int rc = -ENOMEM;
+-	char *k_buffer = NULL;
++	static char k_buffer[ORANGEFS_MAX_DEBUG_STRING_LEN] = { };
+ 
+ 	gossip_debug(GOSSIP_DEBUGFS_DEBUG, "%s: start\n", __func__);
+ 
+-	k_buffer = kzalloc(ORANGEFS_MAX_DEBUG_STRING_LEN, GFP_KERNEL);
+-	if (!k_buffer)
+-		goto out;
+-
+ 	if (strlen(kernel_debug_string) + 1 < ORANGEFS_MAX_DEBUG_STRING_LEN) {
+ 		strcpy(k_buffer, kernel_debug_string);
+ 		strcat(k_buffer, "\n");
+@@ -213,15 +208,14 @@ static void orangefs_kernel_debug_init(void)
+ 
+ 	debugfs_create_file(ORANGEFS_KMOD_DEBUG_FILE, 0444, debug_dir, k_buffer,
+ 			    &kernel_debug_fops);
+-
+-out:
+-	gossip_debug(GOSSIP_DEBUGFS_DEBUG, "%s: rc:%d:\n", __func__, rc);
+ }
+ 
+ 
+ void orangefs_debugfs_cleanup(void)
+ {
+ 	debugfs_remove_recursive(debug_dir);
++	kfree(debug_help_string);
++	debug_help_string = NULL;
+ }
+ 
+ /* open ORANGEFS_KMOD_DEBUG_HELP_FILE */
+@@ -297,18 +291,13 @@ static int help_show(struct seq_file *m, void *v)
+ /*
+  * initialize the client-debug file.
+  */
+-static int orangefs_client_debug_init(void)
++static void orangefs_client_debug_init(void)
+ {
+ 
+-	int rc = -ENOMEM;
+-	char *c_buffer = NULL;
++	static char c_buffer[ORANGEFS_MAX_DEBUG_STRING_LEN] = { };
+ 
+ 	gossip_debug(GOSSIP_DEBUGFS_DEBUG, "%s: start\n", __func__);
+ 
+-	c_buffer = kzalloc(ORANGEFS_MAX_DEBUG_STRING_LEN, GFP_KERNEL);
+-	if (!c_buffer)
+-		goto out;
+-
+ 	if (strlen(client_debug_string) + 1 < ORANGEFS_MAX_DEBUG_STRING_LEN) {
+ 		strcpy(c_buffer, client_debug_string);
+ 		strcat(c_buffer, "\n");
+@@ -322,13 +311,6 @@ static int orangefs_client_debug_init(void)
+ 						  debug_dir,
+ 						  c_buffer,
+ 						  &kernel_debug_fops);
+-
+-	rc = 0;
+-
+-out:
+-
+-	gossip_debug(GOSSIP_DEBUGFS_DEBUG, "%s: rc:%d:\n", __func__, rc);
+-	return rc;
+ }
+ 
+ /* open ORANGEFS_KMOD_DEBUG_FILE or ORANGEFS_CLIENT_DEBUG_FILE.*/
+@@ -671,6 +653,7 @@ int orangefs_prepare_debugfs_help_string(int at_boot)
+ 		memset(debug_help_string, 0, DEBUG_HELP_STRING_SIZE);
+ 		strlcat(debug_help_string, new, string_size);
+ 		mutex_unlock(&orangefs_help_file_lock);
++		kfree(new);
+ 	}
+ 
+ 	rc = 0;
+diff --git a/fs/orangefs/orangefs-mod.c b/fs/orangefs/orangefs-mod.c
+index 74a3d6337ef43..ac9c91b83868f 100644
+--- a/fs/orangefs/orangefs-mod.c
++++ b/fs/orangefs/orangefs-mod.c
+@@ -141,7 +141,7 @@ static int __init orangefs_init(void)
+ 		gossip_err("%s: could not initialize device subsystem %d!\n",
+ 			   __func__,
+ 			   ret);
+-		goto cleanup_device;
++		goto cleanup_sysfs;
+ 	}
+ 
+ 	ret = register_filesystem(&orangefs_fs_type);
+@@ -152,11 +152,11 @@ static int __init orangefs_init(void)
+ 		goto out;
+ 	}
+ 
+-	orangefs_sysfs_exit();
+-
+-cleanup_device:
+ 	orangefs_dev_cleanup();
+ 
++cleanup_sysfs:
++	orangefs_sysfs_exit();
++
+ sysfs_init_failed:
+ 	orangefs_debugfs_cleanup();
+ 
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index d0e5cde277022..8ebd9f2b1c95b 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -586,28 +586,42 @@ static int ovl_create_or_link(struct dentry *dentry, struct inode *inode,
+ 			goto out_revert_creds;
+ 	}
+ 
+-	err = -ENOMEM;
+-	override_cred = prepare_creds();
+-	if (override_cred) {
++	if (!attr->hardlink) {
++		err = -ENOMEM;
++		override_cred = prepare_creds();
++		if (!override_cred)
++			goto out_revert_creds;
++		/*
++		 * In the creation cases(create, mkdir, mknod, symlink),
++		 * ovl should transfer current's fs{u,g}id to underlying
++		 * fs. Because underlying fs want to initialize its new
++		 * inode owner using current's fs{u,g}id. And in this
++		 * case, the @inode is a new inode that is initialized
++		 * in inode_init_owner() to current's fs{u,g}id. So use
++		 * the inode's i_{u,g}id to override the cred's fs{u,g}id.
++		 *
++		 * But in the other hardlink case, ovl_link() does not
++		 * create a new inode, so just use the ovl mounter's
++		 * fs{u,g}id.
++		 */
+ 		override_cred->fsuid = inode->i_uid;
+ 		override_cred->fsgid = inode->i_gid;
+-		if (!attr->hardlink) {
+-			err = security_dentry_create_files_as(dentry,
+-					attr->mode, &dentry->d_name, old_cred,
+-					override_cred);
+-			if (err) {
+-				put_cred(override_cred);
+-				goto out_revert_creds;
+-			}
++		err = security_dentry_create_files_as(dentry,
++				attr->mode, &dentry->d_name, old_cred,
++				override_cred);
++		if (err) {
++			put_cred(override_cred);
++			goto out_revert_creds;
+ 		}
+ 		put_cred(override_creds(override_cred));
+ 		put_cred(override_cred);
+-
+-		if (!ovl_dentry_is_whiteout(dentry))
+-			err = ovl_create_upper(dentry, inode, attr);
+-		else
+-			err = ovl_create_over_whiteout(dentry, inode, attr);
+ 	}
++
++	if (!ovl_dentry_is_whiteout(dentry))
++		err = ovl_create_upper(dentry, inode, attr);
++	else
++		err = ovl_create_over_whiteout(dentry, inode, attr);
++
+ out_revert_creds:
+ 	revert_creds(old_cred);
+ 	return err;
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 45c596dfe3a36..e3cd5a00f880d 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -138,11 +138,16 @@ static int ovl_dentry_revalidate_common(struct dentry *dentry,
+ 					unsigned int flags, bool weak)
+ {
+ 	struct ovl_entry *oe = dentry->d_fsdata;
++	struct inode *inode = d_inode_rcu(dentry);
+ 	struct dentry *upper;
+ 	unsigned int i;
+ 	int ret = 1;
+ 
+-	upper = ovl_dentry_upper(dentry);
++	/* Careful in RCU mode */
++	if (!inode)
++		return -ECHILD;
++
++	upper = ovl_i_dentry_upper(inode);
+ 	if (upper)
+ 		ret = ovl_revalidate_real(upper, flags, weak);
+ 
+diff --git a/fs/pnode.c b/fs/pnode.c
+index 1106137c747a3..468e4e65a615d 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -244,7 +244,7 @@ static int propagate_one(struct mount *m)
+ 		}
+ 		do {
+ 			struct mount *parent = last_source->mnt_parent;
+-			if (last_source == first_source)
++			if (peers(last_source, first_source))
+ 				break;
+ 			done = parent->mnt_master == p;
+ 			if (done && peers(n, parent))
+diff --git a/fs/pstore/Kconfig b/fs/pstore/Kconfig
+index 8efe60487b486..71dbe9a2533f0 100644
+--- a/fs/pstore/Kconfig
++++ b/fs/pstore/Kconfig
+@@ -118,6 +118,7 @@ config PSTORE_CONSOLE
+ config PSTORE_PMSG
+ 	bool "Log user space messages"
+ 	depends on PSTORE
++	select RT_MUTEXES
+ 	help
+ 	  When the option is enabled, pstore will export a character
+ 	  interface /dev/pmsg0 to log user space messages. On reboot
+diff --git a/fs/pstore/pmsg.c b/fs/pstore/pmsg.c
+index d8542ec2f38c6..18cf94b597e05 100644
+--- a/fs/pstore/pmsg.c
++++ b/fs/pstore/pmsg.c
+@@ -7,9 +7,10 @@
+ #include <linux/device.h>
+ #include <linux/fs.h>
+ #include <linux/uaccess.h>
++#include <linux/rtmutex.h>
+ #include "internal.h"
+ 
+-static DEFINE_MUTEX(pmsg_lock);
++static DEFINE_RT_MUTEX(pmsg_lock);
+ 
+ static ssize_t write_pmsg(struct file *file, const char __user *buf,
+ 			  size_t count, loff_t *ppos)
+@@ -28,9 +29,9 @@ static ssize_t write_pmsg(struct file *file, const char __user *buf,
+ 	if (!access_ok(buf, count))
+ 		return -EFAULT;
+ 
+-	mutex_lock(&pmsg_lock);
++	rt_mutex_lock(&pmsg_lock);
+ 	ret = psinfo->write_user(&record, buf);
+-	mutex_unlock(&pmsg_lock);
++	rt_mutex_unlock(&pmsg_lock);
+ 	return ret ? ret : count;
+ }
+ 
+diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
+index ca6d8a8672856..98e579ce0d633 100644
+--- a/fs/pstore/ram.c
++++ b/fs/pstore/ram.c
+@@ -730,6 +730,7 @@ static int ramoops_probe(struct platform_device *pdev)
+ 	/* Make sure we didn't get bogus platform data pointer. */
+ 	if (!pdata) {
+ 		pr_err("NULL platform data\n");
++		err = -EINVAL;
+ 		goto fail_out;
+ 	}
+ 
+@@ -737,6 +738,7 @@ static int ramoops_probe(struct platform_device *pdev)
+ 			!pdata->ftrace_size && !pdata->pmsg_size)) {
+ 		pr_err("The memory size and the record/console size must be "
+ 			"non-zero\n");
++		err = -EINVAL;
+ 		goto fail_out;
+ 	}
+ 
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index aa8e0b65ff1ae..184cb97c83bdd 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -425,7 +425,11 @@ static void *persistent_ram_vmap(phys_addr_t start, size_t size,
+ 		phys_addr_t addr = page_start + i * PAGE_SIZE;
+ 		pages[i] = pfn_to_page(addr >> PAGE_SHIFT);
+ 	}
+-	vaddr = vmap(pages, page_count, VM_MAP, prot);
++	/*
++	 * VM_IOREMAP used here to bypass this region during vread()
++	 * and kmap_atomic() (i.e. kcore) to avoid __va() failures.
++	 */
++	vaddr = vmap(pages, page_count, VM_MAP | VM_IOREMAP, prot);
+ 	kfree(pages);
+ 
+ 	/*
+diff --git a/fs/pstore/zone.c b/fs/pstore/zone.c
+index 3ce89216670c9..b50fc33f2ab29 100644
+--- a/fs/pstore/zone.c
++++ b/fs/pstore/zone.c
+@@ -761,7 +761,7 @@ static inline int notrace psz_kmsg_write_record(struct psz_context *cxt,
+ 		/* avoid destroying old data, allocate a new one */
+ 		len = zone->buffer_size + sizeof(*zone->buffer);
+ 		zone->oldbuf = zone->buffer;
+-		zone->buffer = kzalloc(len, GFP_KERNEL);
++		zone->buffer = kzalloc(len, GFP_ATOMIC);
+ 		if (!zone->buffer) {
+ 			zone->buffer = zone->oldbuf;
+ 			return -ENOMEM;
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index 65f123d5809bd..ad255f8ab5c55 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -2319,6 +2319,8 @@ static int vfs_setup_quota_inode(struct inode *inode, int type)
+ 	struct super_block *sb = inode->i_sb;
+ 	struct quota_info *dqopt = sb_dqopt(sb);
+ 
++	if (is_bad_inode(inode))
++		return -EUCLEAN;
+ 	if (!S_ISREG(inode->i_mode))
+ 		return -EACCES;
+ 	if (IS_RDONLY(inode))
+diff --git a/fs/reiserfs/namei.c b/fs/reiserfs/namei.c
+index 1594687582f0c..84cc136d41dd4 100644
+--- a/fs/reiserfs/namei.c
++++ b/fs/reiserfs/namei.c
+@@ -695,6 +695,7 @@ static int reiserfs_create(struct inode *dir, struct dentry *dentry, umode_t mod
+ 
+ out_failed:
+ 	reiserfs_write_unlock(dir->i_sb);
++	reiserfs_security_free(&security);
+ 	return retval;
+ }
+ 
+@@ -778,6 +779,7 @@ static int reiserfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode
+ 
+ out_failed:
+ 	reiserfs_write_unlock(dir->i_sb);
++	reiserfs_security_free(&security);
+ 	return retval;
+ }
+ 
+@@ -876,6 +878,7 @@ static int reiserfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode
+ 	retval = journal_end(&th);
+ out_failed:
+ 	reiserfs_write_unlock(dir->i_sb);
++	reiserfs_security_free(&security);
+ 	return retval;
+ }
+ 
+@@ -1191,6 +1194,7 @@ static int reiserfs_symlink(struct inode *parent_dir,
+ 	retval = journal_end(&th);
+ out_failed:
+ 	reiserfs_write_unlock(parent_dir->i_sb);
++	reiserfs_security_free(&security);
+ 	return retval;
+ }
+ 
+diff --git a/fs/reiserfs/xattr_security.c b/fs/reiserfs/xattr_security.c
+index 20be9a0e5870e..59d87f9f72fb4 100644
+--- a/fs/reiserfs/xattr_security.c
++++ b/fs/reiserfs/xattr_security.c
+@@ -49,6 +49,7 @@ int reiserfs_security_init(struct inode *dir, struct inode *inode,
+ 	int error;
+ 
+ 	sec->name = NULL;
++	sec->value = NULL;
+ 
+ 	/* Don't add selinux attributes on xattrs - they'll never get used */
+ 	if (IS_PRIVATE(dir))
+@@ -94,7 +95,6 @@ int reiserfs_security_write(struct reiserfs_transaction_handle *th,
+ 
+ void reiserfs_security_free(struct reiserfs_security_handle *sec)
+ {
+-	kfree(sec->name);
+ 	kfree(sec->value);
+ 	sec->name = NULL;
+ 	sec->value = NULL;
+diff --git a/fs/sysv/itree.c b/fs/sysv/itree.c
+index bcb67b0cabe7e..31f66053e2393 100644
+--- a/fs/sysv/itree.c
++++ b/fs/sysv/itree.c
+@@ -438,7 +438,7 @@ static unsigned sysv_nblocks(struct super_block *s, loff_t size)
+ 		res += blocks;
+ 		direct = 1;
+ 	}
+-	return blocks;
++	return res;
+ }
+ 
+ int sysv_getattr(const struct path *path, struct kstat *stat,
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index e94a18bb7f99f..2132bfab67f35 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -599,7 +599,7 @@ static void udf_do_extend_final_block(struct inode *inode,
+ 	 */
+ 	if (new_elen <= (last_ext->extLength & UDF_EXTENT_LENGTH_MASK))
+ 		return;
+-	added_bytes = (last_ext->extLength & UDF_EXTENT_LENGTH_MASK) - new_elen;
++	added_bytes = new_elen - (last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
+ 	last_ext->extLength += added_bytes;
+ 	UDF_I(inode)->i_lenExtents += added_bytes;
+ 
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index aff5ca32e4f64..58120d2f265fd 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -1090,8 +1090,9 @@ static int udf_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		return -EINVAL;
+ 
+ 	ofi = udf_find_entry(old_dir, &old_dentry->d_name, &ofibh, &ocfi);
+-	if (IS_ERR(ofi)) {
+-		retval = PTR_ERR(ofi);
++	if (!ofi || IS_ERR(ofi)) {
++		if (IS_ERR(ofi))
++			retval = PTR_ERR(ofi);
+ 		goto end_rename;
+ 	}
+ 
+@@ -1100,8 +1101,7 @@ static int udf_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 
+ 	brelse(ofibh.sbh);
+ 	tloc = lelb_to_cpu(ocfi.icb.extLocation);
+-	if (!ofi || udf_get_lb_pblock(old_dir->i_sb, &tloc, 0)
+-	    != old_inode->i_ino)
++	if (udf_get_lb_pblock(old_dir->i_sb, &tloc, 0) != old_inode->i_ino)
+ 		goto end_rename;
+ 
+ 	nfi = udf_find_entry(new_dir, &new_dentry->d_name, &nfibh, &ncfi);
+diff --git a/fs/xattr.c b/fs/xattr.c
+index cd7a563e8bcd4..5a03eaadf029f 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -1049,7 +1049,7 @@ static int xattr_list_one(char **buffer, ssize_t *remaining_size,
+ ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs,
+ 			  char *buffer, size_t size)
+ {
+-	bool trusted = capable(CAP_SYS_ADMIN);
++	bool trusted = ns_capable_noaudit(&init_user_ns, CAP_SYS_ADMIN);
+ 	struct simple_xattr *xattr;
+ 	ssize_t remaining_size = size;
+ 	int err = 0;
+diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
+index 2357109a8901b..9a87215b5526c 100644
+--- a/include/linux/debugfs.h
++++ b/include/linux/debugfs.h
+@@ -45,7 +45,7 @@ struct debugfs_u32_array {
+ 
+ extern struct dentry *arch_debugfs_dir;
+ 
+-#define DEFINE_DEBUGFS_ATTRIBUTE(__fops, __get, __set, __fmt)		\
++#define DEFINE_DEBUGFS_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, __is_signed)	\
+ static int __fops ## _open(struct inode *inode, struct file *file)	\
+ {									\
+ 	__simple_attr_check_format(__fmt, 0ull);			\
+@@ -56,10 +56,16 @@ static const struct file_operations __fops = {				\
+ 	.open	 = __fops ## _open,					\
+ 	.release = simple_attr_release,					\
+ 	.read	 = debugfs_attr_read,					\
+-	.write	 = debugfs_attr_write,					\
++	.write	 = (__is_signed) ? debugfs_attr_write_signed : debugfs_attr_write,	\
+ 	.llseek  = no_llseek,						\
+ }
+ 
++#define DEFINE_DEBUGFS_ATTRIBUTE(__fops, __get, __set, __fmt)		\
++	DEFINE_DEBUGFS_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, false)
++
++#define DEFINE_DEBUGFS_ATTRIBUTE_SIGNED(__fops, __get, __set, __fmt)	\
++	DEFINE_DEBUGFS_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, true)
++
+ typedef struct vfsmount *(*debugfs_automount_t)(struct dentry *, void *);
+ 
+ #if defined(CONFIG_DEBUG_FS)
+@@ -102,6 +108,8 @@ ssize_t debugfs_attr_read(struct file *file, char __user *buf,
+ 			size_t len, loff_t *ppos);
+ ssize_t debugfs_attr_write(struct file *file, const char __user *buf,
+ 			size_t len, loff_t *ppos);
++ssize_t debugfs_attr_write_signed(struct file *file, const char __user *buf,
++			size_t len, loff_t *ppos);
+ 
+ struct dentry *debugfs_rename(struct dentry *old_dir, struct dentry *old_dentry,
+                 struct dentry *new_dir, const char *new_name);
+@@ -249,6 +257,13 @@ static inline ssize_t debugfs_attr_write(struct file *file,
+ 	return -ENODEV;
+ }
+ 
++static inline ssize_t debugfs_attr_write_signed(struct file *file,
++					const char __user *buf,
++					size_t len, loff_t *ppos)
++{
++	return -ENODEV;
++}
++
+ static inline struct dentry *debugfs_rename(struct dentry *old_dir, struct dentry *old_dentry,
+                 struct dentry *new_dir, char *new_name)
+ {
+diff --git a/include/linux/devfreq.h b/include/linux/devfreq.h
+index 121a2430d7f7e..fc4c7ad49cee3 100644
+--- a/include/linux/devfreq.h
++++ b/include/linux/devfreq.h
+@@ -146,8 +146,8 @@ struct devfreq_stats {
+  * @work:	delayed work for load monitoring.
+  * @previous_freq:	previously configured frequency value.
+  * @last_status:	devfreq user device info, performance statistics
+- * @data:	Private data of the governor. The devfreq framework does not
+- *		touch this.
++ * @data:	devfreq driver pass to governors, governor should not change it.
++ * @governor_data:	private data for governors, devfreq core doesn't touch it.
+  * @user_min_freq_req:	PM QoS minimum frequency request from user (via sysfs)
+  * @user_max_freq_req:	PM QoS maximum frequency request from user (via sysfs)
+  * @scaling_min_freq:	Limit minimum frequency requested by OPP interface
+@@ -183,7 +183,8 @@ struct devfreq {
+ 	unsigned long previous_freq;
+ 	struct devfreq_dev_status last_status;
+ 
+-	void *data; /* private data for governors */
++	void *data;
++	void *governor_data;
+ 
+ 	struct dev_pm_qos_request user_min_freq_req;
+ 	struct dev_pm_qos_request user_max_freq_req;
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index 7feb70d32d955..0849903904209 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -1108,8 +1108,6 @@ void efi_check_for_embedded_firmwares(void);
+ static inline void efi_check_for_embedded_firmwares(void) { }
+ #endif
+ 
+-efi_status_t efi_random_get_seed(void);
+-
+ void efi_retrieve_tpm2_eventlog(void);
+ 
+ /*
+diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
+index ce1cf42740bf4..6cd2a92daf205 100644
+--- a/include/linux/eventfd.h
++++ b/include/linux/eventfd.h
+@@ -62,7 +62,7 @@ static inline struct eventfd_ctx *eventfd_ctx_fdget(int fd)
+ 	return ERR_PTR(-ENOSYS);
+ }
+ 
+-static inline int eventfd_signal(struct eventfd_ctx *ctx, int n)
++static inline int eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
+ {
+ 	return -ENOSYS;
+ }
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index ebfc0b2b4969e..74e19bccbf738 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1145,6 +1145,7 @@ extern int locks_delete_block(struct file_lock *);
+ extern int vfs_test_lock(struct file *, struct file_lock *);
+ extern int vfs_lock_file(struct file *, unsigned int, struct file_lock *, struct file_lock *);
+ extern int vfs_cancel_lock(struct file *filp, struct file_lock *fl);
++bool vfs_inode_has_locks(struct inode *inode);
+ extern int locks_lock_inode_wait(struct inode *inode, struct file_lock *fl);
+ extern int __break_lease(struct inode *inode, unsigned int flags, unsigned int type);
+ extern void lease_get_mtime(struct inode *, struct timespec64 *time);
+@@ -1257,6 +1258,11 @@ static inline int vfs_cancel_lock(struct file *filp, struct file_lock *fl)
+ 	return 0;
+ }
+ 
++static inline bool vfs_inode_has_locks(struct inode *inode)
++{
++	return false;
++}
++
+ static inline int locks_lock_inode_wait(struct inode *inode, struct file_lock *fl)
+ {
+ 	return -ENOLCK;
+@@ -3345,7 +3351,7 @@ void simple_transaction_set(struct file *file, size_t n);
+  * All attributes contain a text representation of a numeric value
+  * that are accessed with the get() and set() functions.
+  */
+-#define DEFINE_SIMPLE_ATTRIBUTE(__fops, __get, __set, __fmt)		\
++#define DEFINE_SIMPLE_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, __is_signed)	\
+ static int __fops ## _open(struct inode *inode, struct file *file)	\
+ {									\
+ 	__simple_attr_check_format(__fmt, 0ull);			\
+@@ -3356,10 +3362,16 @@ static const struct file_operations __fops = {				\
+ 	.open	 = __fops ## _open,					\
+ 	.release = simple_attr_release,					\
+ 	.read	 = simple_attr_read,					\
+-	.write	 = simple_attr_write,					\
++	.write	 = (__is_signed) ? simple_attr_write_signed : simple_attr_write,	\
+ 	.llseek	 = generic_file_llseek,					\
+ }
+ 
++#define DEFINE_SIMPLE_ATTRIBUTE(__fops, __get, __set, __fmt)		\
++	DEFINE_SIMPLE_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, false)
++
++#define DEFINE_SIMPLE_ATTRIBUTE_SIGNED(__fops, __get, __set, __fmt)	\
++	DEFINE_SIMPLE_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, true)
++
+ static inline __printf(1, 2)
+ void __simple_attr_check_format(const char *fmt, ...)
+ {
+@@ -3374,6 +3386,8 @@ ssize_t simple_attr_read(struct file *file, char __user *buf,
+ 			 size_t len, loff_t *ppos);
+ ssize_t simple_attr_write(struct file *file, const char __user *buf,
+ 			  size_t len, loff_t *ppos);
++ssize_t simple_attr_write_signed(struct file *file, const char __user *buf,
++				 size_t len, loff_t *ppos);
+ 
+ struct ctl_table;
+ int proc_nr_files(struct ctl_table *table, int write,
+diff --git a/include/linux/highmem.h b/include/linux/highmem.h
+index 14e6202ce47f1..b25df1f8d48d3 100644
+--- a/include/linux/highmem.h
++++ b/include/linux/highmem.h
+@@ -345,4 +345,22 @@ static inline void copy_highpage(struct page *to, struct page *from)
+ 
+ #endif
+ 
++static inline void memcpy_from_page(char *to, struct page *page,
++				    size_t offset, size_t len)
++{
++	char *from = kmap_atomic(page);
++
++	memcpy(to, from + offset, len);
++	kunmap_atomic(from);
++}
++
++static inline void memcpy_to_page(struct page *page, size_t offset,
++				  const char *from, size_t len)
++{
++	char *to = kmap_atomic(page);
++
++	memcpy(to + offset, from, len);
++	kunmap_atomic(to);
++}
++
+ #endif /* _LINUX_HIGHMEM_H */
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index 1ce131f29f3b4..eada4d8d65879 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -1269,6 +1269,8 @@ struct hv_ring_buffer_debug_info {
+ int hv_ringbuffer_get_debuginfo(struct hv_ring_buffer_info *ring_info,
+ 				struct hv_ring_buffer_debug_info *debug_info);
+ 
++bool hv_ringbuffer_spinlock_busy(struct vmbus_channel *channel);
++
+ /* Vmbus interface */
+ #define vmbus_driver_register(driver)	\
+ 	__vmbus_driver_register(driver, THIS_MODULE, KBUILD_MODNAME)
+diff --git a/include/linux/iio/imu/adis.h b/include/linux/iio/imu/adis.h
+index 04e96d688ba9e..5f45b785e794c 100644
+--- a/include/linux/iio/imu/adis.h
++++ b/include/linux/iio/imu/adis.h
+@@ -32,6 +32,7 @@ struct adis_timeout {
+ 	u16 sw_reset_ms;
+ 	u16 self_test_ms;
+ };
++
+ /**
+  * struct adis_data - ADIS chip variant specific data
+  * @read_delay: SPI delay for read operations in us
+@@ -45,10 +46,11 @@ struct adis_timeout {
+  * @self_test_mask: Bitmask of supported self-test operations
+  * @self_test_reg: Register address to request self test command
+  * @self_test_no_autoclear: True if device's self-test needs clear of ctrl reg
+- * @status_error_msgs: Array of error messgaes
++ * @status_error_msgs: Array of error messages
+  * @status_error_mask: Bitmask of errors supported by the device
+  * @timeouts: Chip specific delays
+  * @enable_irq: Hook for ADIS devices that have a special IRQ enable/disable
++ * @unmasked_drdy: True for devices that cannot mask/unmask the data ready pin
+  * @has_paging: True if ADIS device has paged registers
+  * @burst_reg_cmd:	Register command that triggers burst
+  * @burst_len:		Burst size in the SPI RX buffer. If @burst_max_len is defined,
+@@ -77,6 +79,7 @@ struct adis_data {
+ 	unsigned int status_error_mask;
+ 
+ 	int (*enable_irq)(struct adis *adis, bool enable);
++	bool unmasked_drdy;
+ 
+ 	bool has_paging;
+ 
+@@ -126,12 +129,12 @@ struct adis {
+ 	unsigned long		irq_flag;
+ 	void			*buffer;
+ 
+-	uint8_t			tx[10] ____cacheline_aligned;
+-	uint8_t			rx[4];
++	u8			tx[10] ____cacheline_aligned;
++	u8			rx[4];
+ };
+ 
+ int adis_init(struct adis *adis, struct iio_dev *indio_dev,
+-	struct spi_device *spi, const struct adis_data *data);
++	      struct spi_device *spi, const struct adis_data *data);
+ int __adis_reset(struct adis *adis);
+ 
+ /**
+@@ -152,9 +155,9 @@ static inline int adis_reset(struct adis *adis)
+ }
+ 
+ int __adis_write_reg(struct adis *adis, unsigned int reg,
+-	unsigned int val, unsigned int size);
++		     unsigned int val, unsigned int size);
+ int __adis_read_reg(struct adis *adis, unsigned int reg,
+-	unsigned int *val, unsigned int size);
++		    unsigned int *val, unsigned int size);
+ 
+ /**
+  * __adis_write_reg_8() - Write single byte to a register (unlocked)
+@@ -163,7 +166,7 @@ int __adis_read_reg(struct adis *adis, unsigned int reg,
+  * @value: The value to write
+  */
+ static inline int __adis_write_reg_8(struct adis *adis, unsigned int reg,
+-	uint8_t val)
++				     u8 val)
+ {
+ 	return __adis_write_reg(adis, reg, val, 1);
+ }
+@@ -175,7 +178,7 @@ static inline int __adis_write_reg_8(struct adis *adis, unsigned int reg,
+  * @value: Value to be written
+  */
+ static inline int __adis_write_reg_16(struct adis *adis, unsigned int reg,
+-	uint16_t val)
++				      u16 val)
+ {
+ 	return __adis_write_reg(adis, reg, val, 2);
+ }
+@@ -187,7 +190,7 @@ static inline int __adis_write_reg_16(struct adis *adis, unsigned int reg,
+  * @value: Value to be written
+  */
+ static inline int __adis_write_reg_32(struct adis *adis, unsigned int reg,
+-	uint32_t val)
++				      u32 val)
+ {
+ 	return __adis_write_reg(adis, reg, val, 4);
+ }
+@@ -199,7 +202,7 @@ static inline int __adis_write_reg_32(struct adis *adis, unsigned int reg,
+  * @val: The value read back from the device
+  */
+ static inline int __adis_read_reg_16(struct adis *adis, unsigned int reg,
+-	uint16_t *val)
++				     u16 *val)
+ {
+ 	unsigned int tmp;
+ 	int ret;
+@@ -218,7 +221,7 @@ static inline int __adis_read_reg_16(struct adis *adis, unsigned int reg,
+  * @val: The value read back from the device
+  */
+ static inline int __adis_read_reg_32(struct adis *adis, unsigned int reg,
+-	uint32_t *val)
++				     u32 *val)
+ {
+ 	unsigned int tmp;
+ 	int ret;
+@@ -238,7 +241,7 @@ static inline int __adis_read_reg_32(struct adis *adis, unsigned int reg,
+  * @size: The size of the @value (in bytes)
+  */
+ static inline int adis_write_reg(struct adis *adis, unsigned int reg,
+-	unsigned int val, unsigned int size)
++				 unsigned int val, unsigned int size)
+ {
+ 	int ret;
+ 
+@@ -257,7 +260,7 @@ static inline int adis_write_reg(struct adis *adis, unsigned int reg,
+  * @size: The size of the @val buffer
+  */
+ static int adis_read_reg(struct adis *adis, unsigned int reg,
+-	unsigned int *val, unsigned int size)
++			 unsigned int *val, unsigned int size)
+ {
+ 	int ret;
+ 
+@@ -275,7 +278,7 @@ static int adis_read_reg(struct adis *adis, unsigned int reg,
+  * @value: The value to write
+  */
+ static inline int adis_write_reg_8(struct adis *adis, unsigned int reg,
+-	uint8_t val)
++				   u8 val)
+ {
+ 	return adis_write_reg(adis, reg, val, 1);
+ }
+@@ -287,7 +290,7 @@ static inline int adis_write_reg_8(struct adis *adis, unsigned int reg,
+  * @value: Value to be written
+  */
+ static inline int adis_write_reg_16(struct adis *adis, unsigned int reg,
+-	uint16_t val)
++				    u16 val)
+ {
+ 	return adis_write_reg(adis, reg, val, 2);
+ }
+@@ -299,7 +302,7 @@ static inline int adis_write_reg_16(struct adis *adis, unsigned int reg,
+  * @value: Value to be written
+  */
+ static inline int adis_write_reg_32(struct adis *adis, unsigned int reg,
+-	uint32_t val)
++				    u32 val)
+ {
+ 	return adis_write_reg(adis, reg, val, 4);
+ }
+@@ -311,7 +314,7 @@ static inline int adis_write_reg_32(struct adis *adis, unsigned int reg,
+  * @val: The value read back from the device
+  */
+ static inline int adis_read_reg_16(struct adis *adis, unsigned int reg,
+-	uint16_t *val)
++				   u16 *val)
+ {
+ 	unsigned int tmp;
+ 	int ret;
+@@ -330,7 +333,7 @@ static inline int adis_read_reg_16(struct adis *adis, unsigned int reg,
+  * @val: The value read back from the device
+  */
+ static inline int adis_read_reg_32(struct adis *adis, unsigned int reg,
+-	uint32_t *val)
++				   u32 *val)
+ {
+ 	unsigned int tmp;
+ 	int ret;
+@@ -401,9 +404,20 @@ static inline int adis_update_bits_base(struct adis *adis, unsigned int reg,
+ 		__adis_update_bits_base(adis, reg, mask, val, 2));	\
+ })
+ 
+-int adis_enable_irq(struct adis *adis, bool enable);
+ int __adis_check_status(struct adis *adis);
+ int __adis_initial_startup(struct adis *adis);
++int __adis_enable_irq(struct adis *adis, bool enable);
++
++static inline int adis_enable_irq(struct adis *adis, bool enable)
++{
++	int ret;
++
++	mutex_lock(&adis->state_lock);
++	ret = __adis_enable_irq(adis, enable);
++	mutex_unlock(&adis->state_lock);
++
++	return ret;
++}
+ 
+ static inline int adis_check_status(struct adis *adis)
+ {
+@@ -429,8 +443,8 @@ static inline int adis_initial_startup(struct adis *adis)
+ }
+ 
+ int adis_single_conversion(struct iio_dev *indio_dev,
+-	const struct iio_chan_spec *chan, unsigned int error_mask,
+-	int *val);
++			   const struct iio_chan_spec *chan,
++			   unsigned int error_mask, int *val);
+ 
+ #define ADIS_VOLTAGE_CHAN(addr, si, chan, name, info_all, bits) { \
+ 	.type = IIO_VOLTAGE, \
+@@ -479,7 +493,7 @@ int adis_single_conversion(struct iio_dev *indio_dev,
+ 	.modified = 1, \
+ 	.channel2 = IIO_MOD_ ## mod, \
+ 	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | \
+-		 info_sep, \
++		 (info_sep), \
+ 	.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \
+ 	.info_mask_shared_by_all = info_all, \
+ 	.address = (addr), \
+@@ -513,7 +527,7 @@ devm_adis_setup_buffer_and_trigger(struct adis *adis, struct iio_dev *indio_dev,
+ int devm_adis_probe_trigger(struct adis *adis, struct iio_dev *indio_dev);
+ 
+ int adis_update_scan_mode(struct iio_dev *indio_dev,
+-	const unsigned long *scan_mask);
++			  const unsigned long *scan_mask);
+ 
+ #else /* CONFIG_IIO_BUFFER */
+ 
+@@ -537,7 +551,8 @@ static inline int devm_adis_probe_trigger(struct adis *adis,
+ #ifdef CONFIG_DEBUG_FS
+ 
+ int adis_debugfs_reg_access(struct iio_dev *indio_dev,
+-	unsigned int reg, unsigned int writeval, unsigned int *readval);
++			    unsigned int reg, unsigned int writeval,
++			    unsigned int *readval);
+ 
+ #else
+ 
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index ee8299eb1f524..0652b4858ba62 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -61,6 +61,9 @@
+  *                interrupt handler after suspending interrupts. For system
+  *                wakeup devices users need to implement wakeup detection in
+  *                their interrupt handlers.
++ * IRQF_NO_AUTOEN - Don't enable IRQ or NMI automatically when users request it.
++ *                Users will enable it explicitly by enable_irq() or enable_nmi()
++ *                later.
+  */
+ #define IRQF_SHARED		0x00000080
+ #define IRQF_PROBE_SHARED	0x00000100
+@@ -74,6 +77,7 @@
+ #define IRQF_NO_THREAD		0x00010000
+ #define IRQF_EARLY_RESUME	0x00020000
+ #define IRQF_COND_SUSPEND	0x00040000
++#define IRQF_NO_AUTOEN		0x00080000
+ 
+ #define IRQF_TIMER		(__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
+ 
+diff --git a/include/linux/mbcache.h b/include/linux/mbcache.h
+index 20f1e3ff60130..591bc4cefe1d6 100644
+--- a/include/linux/mbcache.h
++++ b/include/linux/mbcache.h
+@@ -10,16 +10,29 @@
+ 
+ struct mb_cache;
+ 
++/* Cache entry flags */
++enum {
++	MBE_REFERENCED_B = 0,
++	MBE_REUSABLE_B
++};
++
+ struct mb_cache_entry {
+ 	/* List of entries in cache - protected by cache->c_list_lock */
+ 	struct list_head	e_list;
+-	/* Hash table list - protected by hash chain bitlock */
++	/*
++	 * Hash table list - protected by hash chain bitlock. The entry is
++	 * guaranteed to be hashed while e_refcnt > 0.
++	 */
+ 	struct hlist_bl_node	e_hash_list;
++	/*
++	 * Entry refcount. Once it reaches zero, entry is unhashed and freed.
++	 * While refcount > 0, the entry is guaranteed to stay in the hash and
++	 * e.g. mb_cache_entry_try_delete() will fail.
++	 */
+ 	atomic_t		e_refcnt;
+ 	/* Key in hash - stable during lifetime of the entry */
+ 	u32			e_key;
+-	u32			e_referenced:1;
+-	u32			e_reusable:1;
++	unsigned long		e_flags;
+ 	/* User provided value - stable during lifetime of the entry */
+ 	u64			e_value;
+ };
+@@ -29,16 +42,24 @@ void mb_cache_destroy(struct mb_cache *cache);
+ 
+ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
+ 			  u64 value, bool reusable);
+-void __mb_cache_entry_free(struct mb_cache_entry *entry);
+-static inline int mb_cache_entry_put(struct mb_cache *cache,
+-				     struct mb_cache_entry *entry)
++void __mb_cache_entry_free(struct mb_cache *cache,
++			   struct mb_cache_entry *entry);
++void mb_cache_entry_wait_unused(struct mb_cache_entry *entry);
++static inline void mb_cache_entry_put(struct mb_cache *cache,
++				      struct mb_cache_entry *entry)
+ {
+-	if (!atomic_dec_and_test(&entry->e_refcnt))
+-		return 0;
+-	__mb_cache_entry_free(entry);
+-	return 1;
++	unsigned int cnt = atomic_dec_return(&entry->e_refcnt);
++
++	if (cnt > 0) {
++		if (cnt <= 2)
++			wake_up_var(&entry->e_refcnt);
++		return;
++	}
++	__mb_cache_entry_free(cache, entry);
+ }
+ 
++struct mb_cache_entry *mb_cache_entry_delete_or_get(struct mb_cache *cache,
++						    u32 key, u64 value);
+ void mb_cache_entry_delete(struct mb_cache *cache, u32 key, u64 value);
+ struct mb_cache_entry *mb_cache_entry_get(struct mb_cache *cache, u32 key,
+ 					  u64 value);
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index ef75567efd27a..b478a16ef284d 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -166,31 +166,38 @@ static inline bool dev_xmit_complete(int rc)
+  *	(unsigned long) so they can be read and written atomically.
+  */
+ 
++#define NET_DEV_STAT(FIELD)			\
++	union {					\
++		unsigned long FIELD;		\
++		atomic_long_t __##FIELD;	\
++	}
++
+ struct net_device_stats {
+-	unsigned long	rx_packets;
+-	unsigned long	tx_packets;
+-	unsigned long	rx_bytes;
+-	unsigned long	tx_bytes;
+-	unsigned long	rx_errors;
+-	unsigned long	tx_errors;
+-	unsigned long	rx_dropped;
+-	unsigned long	tx_dropped;
+-	unsigned long	multicast;
+-	unsigned long	collisions;
+-	unsigned long	rx_length_errors;
+-	unsigned long	rx_over_errors;
+-	unsigned long	rx_crc_errors;
+-	unsigned long	rx_frame_errors;
+-	unsigned long	rx_fifo_errors;
+-	unsigned long	rx_missed_errors;
+-	unsigned long	tx_aborted_errors;
+-	unsigned long	tx_carrier_errors;
+-	unsigned long	tx_fifo_errors;
+-	unsigned long	tx_heartbeat_errors;
+-	unsigned long	tx_window_errors;
+-	unsigned long	rx_compressed;
+-	unsigned long	tx_compressed;
++	NET_DEV_STAT(rx_packets);
++	NET_DEV_STAT(tx_packets);
++	NET_DEV_STAT(rx_bytes);
++	NET_DEV_STAT(tx_bytes);
++	NET_DEV_STAT(rx_errors);
++	NET_DEV_STAT(tx_errors);
++	NET_DEV_STAT(rx_dropped);
++	NET_DEV_STAT(tx_dropped);
++	NET_DEV_STAT(multicast);
++	NET_DEV_STAT(collisions);
++	NET_DEV_STAT(rx_length_errors);
++	NET_DEV_STAT(rx_over_errors);
++	NET_DEV_STAT(rx_crc_errors);
++	NET_DEV_STAT(rx_frame_errors);
++	NET_DEV_STAT(rx_fifo_errors);
++	NET_DEV_STAT(rx_missed_errors);
++	NET_DEV_STAT(tx_aborted_errors);
++	NET_DEV_STAT(tx_carrier_errors);
++	NET_DEV_STAT(tx_fifo_errors);
++	NET_DEV_STAT(tx_heartbeat_errors);
++	NET_DEV_STAT(tx_window_errors);
++	NET_DEV_STAT(rx_compressed);
++	NET_DEV_STAT(tx_compressed);
+ };
++#undef NET_DEV_STAT
+ 
+ 
+ #include <linux/cache.h>
+@@ -5256,4 +5263,9 @@ do {								\
+ 
+ extern struct net_device *blackhole_netdev;
+ 
++/* Note: Avoid these macros in fast path, prefer per-cpu or per-queue counters. */
++#define DEV_STATS_INC(DEV, FIELD) atomic_long_inc(&(DEV)->stats.__##FIELD)
++#define DEV_STATS_ADD(DEV, FIELD, VAL) 	\
++		atomic_long_add((VAL), &(DEV)->stats.__##FIELD)
++
+ #endif	/* _LINUX_NETDEVICE_H */
+diff --git a/include/linux/netfilter/ipset/ip_set.h b/include/linux/netfilter/ipset/ip_set.h
+index 53c9a17ecb3e3..62f7e7e257c10 100644
+--- a/include/linux/netfilter/ipset/ip_set.h
++++ b/include/linux/netfilter/ipset/ip_set.h
+@@ -199,7 +199,7 @@ struct ip_set_region {
+ };
+ 
+ /* Max range where every element is added/deleted in one step */
+-#define IPSET_MAX_RANGE		(1<<20)
++#define IPSET_MAX_RANGE		(1<<14)
+ 
+ /* The core set type structure */
+ struct ip_set_type {
+diff --git a/include/linux/nvme.h b/include/linux/nvme.h
+index bfed36e342ccb..fe39ed9e9303e 100644
+--- a/include/linux/nvme.h
++++ b/include/linux/nvme.h
+@@ -7,6 +7,7 @@
+ #ifndef _LINUX_NVME_H
+ #define _LINUX_NVME_H
+ 
++#include <linux/bits.h>
+ #include <linux/types.h>
+ #include <linux/uuid.h>
+ 
+@@ -528,7 +529,7 @@ enum {
+ 	NVME_CMD_EFFECTS_NCC		= 1 << 2,
+ 	NVME_CMD_EFFECTS_NIC		= 1 << 3,
+ 	NVME_CMD_EFFECTS_CCC		= 1 << 4,
+-	NVME_CMD_EFFECTS_CSE_MASK	= 3 << 16,
++	NVME_CMD_EFFECTS_CSE_MASK	= GENMASK(18, 16),
+ 	NVME_CMD_EFFECTS_UUID_SEL	= 1 << 19,
+ };
+ 
+diff --git a/include/linux/proc_fs.h b/include/linux/proc_fs.h
+index 000cc0533c336..8c892730a1f15 100644
+--- a/include/linux/proc_fs.h
++++ b/include/linux/proc_fs.h
+@@ -190,8 +190,10 @@ static inline void proc_remove(struct proc_dir_entry *de) {}
+ static inline int remove_proc_subtree(const char *name, struct proc_dir_entry *parent) { return 0; }
+ 
+ #define proc_create_net_data(name, mode, parent, ops, state_size, data) ({NULL;})
++#define proc_create_net_data_write(name, mode, parent, ops, write, state_size, data) ({NULL;})
+ #define proc_create_net(name, mode, parent, state_size, ops) ({NULL;})
+ #define proc_create_net_single(name, mode, parent, show, data) ({NULL;})
++#define proc_create_net_single_write(name, mode, parent, show, write, data) ({NULL;})
+ 
+ static inline struct pid *tgid_pidfd_to_pid(const struct file *file)
+ {
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 462b0e3ef2b27..39636fe7e8f0a 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -702,6 +702,7 @@ typedef unsigned char *sk_buff_data_t;
+  *	@transport_header: Transport layer header
+  *	@network_header: Network layer header
+  *	@mac_header: Link layer header
++ *	@kcov_handle: KCOV remote handle for remote coverage collection
+  *	@tail: Tail pointer
+  *	@end: End pointer
+  *	@head: Head of buffer
+@@ -906,6 +907,10 @@ struct sk_buff {
+ 	__u16			network_header;
+ 	__u16			mac_header;
+ 
++#ifdef CONFIG_KCOV
++	u64			kcov_handle;
++#endif
++
+ 	/* private: */
+ 	__u32			headers_end[0];
+ 	/* public: */
+@@ -4160,9 +4165,6 @@ enum skb_ext_id {
+ #endif
+ #if IS_ENABLED(CONFIG_MPTCP)
+ 	SKB_EXT_MPTCP,
+-#endif
+-#if IS_ENABLED(CONFIG_KCOV)
+-	SKB_EXT_KCOV_HANDLE,
+ #endif
+ 	SKB_EXT_NUM, /* must be last */
+ };
+@@ -4618,35 +4620,27 @@ static inline void skb_reset_redirect(struct sk_buff *skb)
+ #endif
+ }
+ 
+-#if IS_ENABLED(CONFIG_KCOV) && IS_ENABLED(CONFIG_SKB_EXTENSIONS)
++static inline bool skb_csum_is_sctp(struct sk_buff *skb)
++{
++	return skb->csum_not_inet;
++}
++
+ static inline void skb_set_kcov_handle(struct sk_buff *skb,
+ 				       const u64 kcov_handle)
+ {
+-	/* Do not allocate skb extensions only to set kcov_handle to zero
+-	 * (as it is zero by default). However, if the extensions are
+-	 * already allocated, update kcov_handle anyway since
+-	 * skb_set_kcov_handle can be called to zero a previously set
+-	 * value.
+-	 */
+-	if (skb_has_extensions(skb) || kcov_handle) {
+-		u64 *kcov_handle_ptr = skb_ext_add(skb, SKB_EXT_KCOV_HANDLE);
+-
+-		if (kcov_handle_ptr)
+-			*kcov_handle_ptr = kcov_handle;
+-	}
++#ifdef CONFIG_KCOV
++	skb->kcov_handle = kcov_handle;
++#endif
+ }
+ 
+ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
+ {
+-	u64 *kcov_handle = skb_ext_find(skb, SKB_EXT_KCOV_HANDLE);
+-
+-	return kcov_handle ? *kcov_handle : 0;
+-}
++#ifdef CONFIG_KCOV
++	return skb->kcov_handle;
+ #else
+-static inline void skb_set_kcov_handle(struct sk_buff *skb,
+-				       const u64 kcov_handle) { }
+-static inline u64 skb_get_kcov_handle(struct sk_buff *skb) { return 0; }
+-#endif /* CONFIG_KCOV && CONFIG_SKB_EXTENSIONS */
++	return 0;
++#endif
++}
+ 
+ #endif	/* __KERNEL__ */
+ #endif	/* _LINUX_SKBUFF_H */
+diff --git a/include/linux/soc/qcom/apr.h b/include/linux/soc/qcom/apr.h
+index 7f0bc3cf4d610..6374763186c80 100644
+--- a/include/linux/soc/qcom/apr.h
++++ b/include/linux/soc/qcom/apr.h
+@@ -79,6 +79,15 @@ struct apr_resp_pkt {
+ #define APR_SVC_MAJOR_VERSION(v)	((v >> 16) & 0xFF)
+ #define APR_SVC_MINOR_VERSION(v)	(v & 0xFF)
+ 
++struct packet_router;
++struct pkt_router_svc {
++	struct device *dev;
++	struct packet_router *pr;
++	spinlock_t lock;
++	int id;
++	void *priv;
++};
++
+ struct apr_device {
+ 	struct device	dev;
+ 	uint16_t	svc_id;
+@@ -86,11 +95,12 @@ struct apr_device {
+ 	uint32_t	version;
+ 	char name[APR_NAME_SIZE];
+ 	const char *service_path;
+-	spinlock_t	lock;
++	struct pkt_router_svc svc;
+ 	struct list_head node;
+ };
+ 
+ #define to_apr_device(d) container_of(d, struct apr_device, dev)
++#define svc_to_apr_device(d) container_of(d, struct apr_device, svc)
+ 
+ struct apr_driver {
+ 	int	(*probe)(struct apr_device *sl);
+diff --git a/include/linux/sunrpc/rpc_pipe_fs.h b/include/linux/sunrpc/rpc_pipe_fs.h
+index cd188a527d169..3b35b6f6533aa 100644
+--- a/include/linux/sunrpc/rpc_pipe_fs.h
++++ b/include/linux/sunrpc/rpc_pipe_fs.h
+@@ -92,6 +92,11 @@ extern ssize_t rpc_pipe_generic_upcall(struct file *, struct rpc_pipe_msg *,
+ 				       char __user *, size_t);
+ extern int rpc_queue_upcall(struct rpc_pipe *, struct rpc_pipe_msg *);
+ 
++/* returns true if the msg is in-flight, i.e., already eaten by the peer */
++static inline bool rpc_msg_is_inflight(const struct rpc_pipe_msg *msg) {
++	return (msg->copied != 0 && list_empty(&msg->list));
++}
++
+ struct rpc_clnt;
+ extern struct dentry *rpc_create_client_dir(struct dentry *, const char *, struct rpc_clnt *);
+ extern int rpc_remove_client_dir(struct rpc_clnt *);
+diff --git a/include/linux/timerqueue.h b/include/linux/timerqueue.h
+index 93884086f3924..adc80e29168ea 100644
+--- a/include/linux/timerqueue.h
++++ b/include/linux/timerqueue.h
+@@ -35,7 +35,7 @@ struct timerqueue_node *timerqueue_getnext(struct timerqueue_head *head)
+ {
+ 	struct rb_node *leftmost = rb_first_cached(&head->rb_root);
+ 
+-	return rb_entry(leftmost, struct timerqueue_node, node);
++	return rb_entry_safe(leftmost, struct timerqueue_node, node);
+ }
+ 
+ static inline void timerqueue_init(struct timerqueue_node *node)
+diff --git a/include/media/dvbdev.h b/include/media/dvbdev.h
+index e547cbeee4310..c493a3ffe45c8 100644
+--- a/include/media/dvbdev.h
++++ b/include/media/dvbdev.h
+@@ -126,6 +126,7 @@ struct dvb_adapter {
+  * struct dvb_device - represents a DVB device node
+  *
+  * @list_head:	List head with all DVB devices
++ * @ref:	reference counter
+  * @fops:	pointer to struct file_operations
+  * @adapter:	pointer to the adapter that holds this device node
+  * @type:	type of the device, as defined by &enum dvb_device_type.
+@@ -156,6 +157,7 @@ struct dvb_adapter {
+  */
+ struct dvb_device {
+ 	struct list_head list_head;
++	struct kref ref;
+ 	const struct file_operations *fops;
+ 	struct dvb_adapter *adapter;
+ 	enum dvb_device_type type;
+@@ -187,6 +189,20 @@ struct dvb_device {
+ 	void *priv;
+ };
+ 
++/**
++ * dvb_device_get - Increase dvb_device reference
++ *
++ * @dvbdev:	pointer to struct dvb_device
++ */
++struct dvb_device *dvb_device_get(struct dvb_device *dvbdev);
++
++/**
++ * dvb_device_put - Decrease dvb_device reference
++ *
++ * @dvbdev:	pointer to struct dvb_device
++ */
++void dvb_device_put(struct dvb_device *dvbdev);
++
+ /**
+  * dvb_register_adapter - Registers a new DVB adapter
+  *
+@@ -231,29 +247,17 @@ int dvb_register_device(struct dvb_adapter *adap,
+ /**
+  * dvb_remove_device - Remove a registered DVB device
+  *
+- * This does not free memory.  To do that, call dvb_free_device().
++ * This does not free memory. dvb_free_device() will do that when
++ * reference counter is empty
+  *
+  * @dvbdev:	pointer to struct dvb_device
+  */
+ void dvb_remove_device(struct dvb_device *dvbdev);
+ 
+-/**
+- * dvb_free_device - Free memory occupied by a DVB device.
+- *
+- * Call dvb_unregister_device() before calling this function.
+- *
+- * @dvbdev:	pointer to struct dvb_device
+- */
+-void dvb_free_device(struct dvb_device *dvbdev);
+ 
+ /**
+  * dvb_unregister_device - Unregisters a DVB device
+  *
+- * This is a combination of dvb_remove_device() and dvb_free_device().
+- * Using this function is usually a mistake, and is often an indicator
+- * for a use-after-free bug (when a userspace process keeps a file
+- * handle to a detached device).
+- *
+  * @dvbdev:	pointer to struct dvb_device
+  */
+ void dvb_unregister_device(struct dvb_device *dvbdev);
+diff --git a/include/net/dst.h b/include/net/dst.h
+index acd15c544cf37..ae2cf57d796b9 100644
+--- a/include/net/dst.h
++++ b/include/net/dst.h
+@@ -356,9 +356,8 @@ static inline void __skb_tunnel_rx(struct sk_buff *skb, struct net_device *dev,
+ static inline void skb_tunnel_rx(struct sk_buff *skb, struct net_device *dev,
+ 				 struct net *net)
+ {
+-	/* TODO : stats should be SMP safe */
+-	dev->stats.rx_packets++;
+-	dev->stats.rx_bytes += skb->len;
++	DEV_STATS_INC(dev, rx_packets);
++	DEV_STATS_ADD(dev, rx_bytes, skb->len);
+ 	__skb_tunnel_rx(skb, dev, net);
+ }
+ 
+diff --git a/include/net/mptcp.h b/include/net/mptcp.h
+index 753ba7e755d6d..3e529d8fce73a 100644
+--- a/include/net/mptcp.h
++++ b/include/net/mptcp.h
+@@ -58,8 +58,6 @@ struct mptcp_out_options {
+ };
+ 
+ #ifdef CONFIG_MPTCP
+-extern struct request_sock_ops mptcp_subflow_request_sock_ops;
+-
+ void mptcp_init(void);
+ 
+ static inline bool sk_is_mptcp(const struct sock *sk)
+@@ -133,6 +131,9 @@ void mptcp_seq_show(struct seq_file *seq);
+ int mptcp_subflow_init_cookie_req(struct request_sock *req,
+ 				  const struct sock *sk_listener,
+ 				  struct sk_buff *skb);
++struct request_sock *mptcp_subflow_reqsk_alloc(const struct request_sock_ops *ops,
++					       struct sock *sk_listener,
++					       bool attach_listener);
+ #else
+ 
+ static inline void mptcp_init(void)
+@@ -208,6 +209,13 @@ static inline int mptcp_subflow_init_cookie_req(struct request_sock *req,
+ {
+ 	return 0; /* TCP fallback */
+ }
++
++static inline struct request_sock *mptcp_subflow_reqsk_alloc(const struct request_sock_ops *ops,
++							     struct sock *sk_listener,
++							     bool attach_listener)
++{
++	return NULL;
++}
+ #endif /* CONFIG_MPTCP */
+ 
+ #if IS_ENABLED(CONFIG_MPTCP_IPV6)
+diff --git a/include/net/mrp.h b/include/net/mrp.h
+index 1c308c034e1a6..a8102661fd613 100644
+--- a/include/net/mrp.h
++++ b/include/net/mrp.h
+@@ -120,6 +120,7 @@ struct mrp_applicant {
+ 	struct sk_buff		*pdu;
+ 	struct rb_root		mad;
+ 	struct rcu_head		rcu;
++	bool			active;
+ };
+ 
+ struct mrp_port {
+diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
+index 7e58b44705705..50d5ffbad473e 100644
+--- a/include/net/pkt_sched.h
++++ b/include/net/pkt_sched.h
+@@ -179,4 +179,13 @@ struct tc_taprio_qopt_offload *taprio_offload_get(struct tc_taprio_qopt_offload
+ 						  *offload);
+ void taprio_offload_free(struct tc_taprio_qopt_offload *offload);
+ 
++/* Ensure skb_mstamp_ns, which might have been populated with the txtime, is
++ * not mistaken for a software timestamp, because this will otherwise prevent
++ * the dispatch of hardware timestamps to the socket.
++ */
++static inline void skb_txtime_consumed(struct sk_buff *skb)
++{
++	skb->tstamp = ktime_set(0, 0);
++}
++
+ #endif
+diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h
+index 6eed61e6cf8ac..496decb63f800 100644
+--- a/include/sound/hdaudio.h
++++ b/include/sound/hdaudio.h
+@@ -562,6 +562,8 @@ int snd_hdac_stream_set_params(struct hdac_stream *azx_dev,
+ void snd_hdac_stream_start(struct hdac_stream *azx_dev, bool fresh_start);
+ void snd_hdac_stream_clear(struct hdac_stream *azx_dev);
+ void snd_hdac_stream_stop(struct hdac_stream *azx_dev);
++void snd_hdac_stop_streams(struct hdac_bus *bus);
++void snd_hdac_stop_streams_and_chip(struct hdac_bus *bus);
+ void snd_hdac_stream_reset(struct hdac_stream *azx_dev);
+ void snd_hdac_stream_sync_trigger(struct hdac_stream *azx_dev, bool set,
+ 				  unsigned int streams, unsigned int reg);
+diff --git a/include/sound/hdaudio_ext.h b/include/sound/hdaudio_ext.h
+index 75048ea178f62..ddcb5b2f0a8e2 100644
+--- a/include/sound/hdaudio_ext.h
++++ b/include/sound/hdaudio_ext.h
+@@ -92,7 +92,6 @@ void snd_hdac_ext_stream_decouple_locked(struct hdac_bus *bus,
+ 				  struct hdac_ext_stream *azx_dev, bool decouple);
+ void snd_hdac_ext_stream_decouple(struct hdac_bus *bus,
+ 				struct hdac_ext_stream *azx_dev, bool decouple);
+-void snd_hdac_ext_stop_streams(struct hdac_bus *bus);
+ 
+ int snd_hdac_ext_stream_set_spib(struct hdac_bus *bus,
+ 				 struct hdac_ext_stream *stream, u32 value);
+diff --git a/include/sound/pcm.h b/include/sound/pcm.h
+index 5ffc2efedd9f8..6554a9f71c62e 100644
+--- a/include/sound/pcm.h
++++ b/include/sound/pcm.h
+@@ -106,24 +106,24 @@ struct snd_pcm_ops {
+ #define SNDRV_PCM_POS_XRUN		((snd_pcm_uframes_t)-1)
+ 
+ /* If you change this don't forget to change rates[] table in pcm_native.c */
+-#define SNDRV_PCM_RATE_5512		(1<<0)		/* 5512Hz */
+-#define SNDRV_PCM_RATE_8000		(1<<1)		/* 8000Hz */
+-#define SNDRV_PCM_RATE_11025		(1<<2)		/* 11025Hz */
+-#define SNDRV_PCM_RATE_16000		(1<<3)		/* 16000Hz */
+-#define SNDRV_PCM_RATE_22050		(1<<4)		/* 22050Hz */
+-#define SNDRV_PCM_RATE_32000		(1<<5)		/* 32000Hz */
+-#define SNDRV_PCM_RATE_44100		(1<<6)		/* 44100Hz */
+-#define SNDRV_PCM_RATE_48000		(1<<7)		/* 48000Hz */
+-#define SNDRV_PCM_RATE_64000		(1<<8)		/* 64000Hz */
+-#define SNDRV_PCM_RATE_88200		(1<<9)		/* 88200Hz */
+-#define SNDRV_PCM_RATE_96000		(1<<10)		/* 96000Hz */
+-#define SNDRV_PCM_RATE_176400		(1<<11)		/* 176400Hz */
+-#define SNDRV_PCM_RATE_192000		(1<<12)		/* 192000Hz */
+-#define SNDRV_PCM_RATE_352800		(1<<13)		/* 352800Hz */
+-#define SNDRV_PCM_RATE_384000		(1<<14)		/* 384000Hz */
+-
+-#define SNDRV_PCM_RATE_CONTINUOUS	(1<<30)		/* continuous range */
+-#define SNDRV_PCM_RATE_KNOT		(1<<31)		/* supports more non-continuos rates */
++#define SNDRV_PCM_RATE_5512		(1U<<0)		/* 5512Hz */
++#define SNDRV_PCM_RATE_8000		(1U<<1)		/* 8000Hz */
++#define SNDRV_PCM_RATE_11025		(1U<<2)		/* 11025Hz */
++#define SNDRV_PCM_RATE_16000		(1U<<3)		/* 16000Hz */
++#define SNDRV_PCM_RATE_22050		(1U<<4)		/* 22050Hz */
++#define SNDRV_PCM_RATE_32000		(1U<<5)		/* 32000Hz */
++#define SNDRV_PCM_RATE_44100		(1U<<6)		/* 44100Hz */
++#define SNDRV_PCM_RATE_48000		(1U<<7)		/* 48000Hz */
++#define SNDRV_PCM_RATE_64000		(1U<<8)		/* 64000Hz */
++#define SNDRV_PCM_RATE_88200		(1U<<9)		/* 88200Hz */
++#define SNDRV_PCM_RATE_96000		(1U<<10)	/* 96000Hz */
++#define SNDRV_PCM_RATE_176400		(1U<<11)	/* 176400Hz */
++#define SNDRV_PCM_RATE_192000		(1U<<12)	/* 192000Hz */
++#define SNDRV_PCM_RATE_352800		(1U<<13)	/* 352800Hz */
++#define SNDRV_PCM_RATE_384000		(1U<<14)	/* 384000Hz */
++
++#define SNDRV_PCM_RATE_CONTINUOUS	(1U<<30)	/* continuous range */
++#define SNDRV_PCM_RATE_KNOT		(1U<<31)	/* supports more non-continuos rates */
+ 
+ #define SNDRV_PCM_RATE_8000_44100	(SNDRV_PCM_RATE_8000|SNDRV_PCM_RATE_11025|\
+ 					 SNDRV_PCM_RATE_16000|SNDRV_PCM_RATE_22050|\
+diff --git a/include/sound/soc-dai.h b/include/sound/soc-dai.h
+index 2150bd4c7a056..fe86172e86020 100644
+--- a/include/sound/soc-dai.h
++++ b/include/sound/soc-dai.h
+@@ -239,9 +239,9 @@ struct snd_soc_dai_ops {
+ 			unsigned int *rx_num, unsigned int *rx_slot);
+ 	int (*set_tristate)(struct snd_soc_dai *dai, int tristate);
+ 
+-	int (*set_sdw_stream)(struct snd_soc_dai *dai,
+-			void *stream, int direction);
+-	void *(*get_sdw_stream)(struct snd_soc_dai *dai, int direction);
++	int (*set_stream)(struct snd_soc_dai *dai,
++			  void *stream, int direction);
++	void *(*get_stream)(struct snd_soc_dai *dai, int direction);
+ 
+ 	/*
+ 	 * DAI digital mute - optional.
+@@ -446,42 +446,42 @@ static inline void *snd_soc_dai_get_drvdata(struct snd_soc_dai *dai)
+ }
+ 
+ /**
+- * snd_soc_dai_set_sdw_stream() - Configures a DAI for SDW stream operation
++ * snd_soc_dai_set_stream() - Configures a DAI for stream operation
+  * @dai: DAI
+- * @stream: STREAM
++ * @stream: STREAM (opaque structure depending on DAI type)
+  * @direction: Stream direction(Playback/Capture)
+- * SoundWire subsystem doesn't have a notion of direction and we reuse
++ * Some subsystems, such as SoundWire, don't have a notion of direction and we reuse
+  * the ASoC stream direction to configure sink/source ports.
+  * Playback maps to source ports and Capture for sink ports.
+  *
+  * This should be invoked with NULL to clear the stream set previously.
+  * Returns 0 on success, a negative error code otherwise.
+  */
+-static inline int snd_soc_dai_set_sdw_stream(struct snd_soc_dai *dai,
+-				void *stream, int direction)
++static inline int snd_soc_dai_set_stream(struct snd_soc_dai *dai,
++					 void *stream, int direction)
+ {
+-	if (dai->driver->ops->set_sdw_stream)
+-		return dai->driver->ops->set_sdw_stream(dai, stream, direction);
++	if (dai->driver->ops->set_stream)
++		return dai->driver->ops->set_stream(dai, stream, direction);
+ 	else
+ 		return -ENOTSUPP;
+ }
+ 
+ /**
+- * snd_soc_dai_get_sdw_stream() - Retrieves SDW stream from DAI
++ * snd_soc_dai_get_stream() - Retrieves stream from DAI
+  * @dai: DAI
+  * @direction: Stream direction(Playback/Capture)
+  *
+  * This routine only retrieves that was previously configured
+- * with snd_soc_dai_get_sdw_stream()
++ * with snd_soc_dai_get_stream()
+  *
+  * Returns pointer to stream or an ERR_PTR value, e.g.
+  * ERR_PTR(-ENOTSUPP) if callback is not supported;
+  */
+-static inline void *snd_soc_dai_get_sdw_stream(struct snd_soc_dai *dai,
+-					       int direction)
++static inline void *snd_soc_dai_get_stream(struct snd_soc_dai *dai,
++					   int direction)
+ {
+-	if (dai->driver->ops->get_sdw_stream)
+-		return dai->driver->ops->get_sdw_stream(dai, direction);
++	if (dai->driver->ops->get_stream)
++		return dai->driver->ops->get_stream(dai, direction);
+ 	else
+ 		return ERR_PTR(-ENOTSUPP);
+ }
+diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h
+index 4973265655a7f..1a91d5789df3b 100644
+--- a/include/trace/events/ext4.h
++++ b/include/trace/events/ext4.h
+@@ -104,6 +104,7 @@ TRACE_DEFINE_ENUM(EXT4_FC_REASON_RESIZE);
+ TRACE_DEFINE_ENUM(EXT4_FC_REASON_RENAME_DIR);
+ TRACE_DEFINE_ENUM(EXT4_FC_REASON_FALLOC_RANGE);
+ TRACE_DEFINE_ENUM(EXT4_FC_REASON_INODE_JOURNAL_DATA);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_ENCRYPTED_FILENAME);
+ TRACE_DEFINE_ENUM(EXT4_FC_REASON_MAX);
+ 
+ #define show_fc_reason(reason)						\
+@@ -116,7 +117,8 @@ TRACE_DEFINE_ENUM(EXT4_FC_REASON_MAX);
+ 		{ EXT4_FC_REASON_RESIZE,	"RESIZE"},		\
+ 		{ EXT4_FC_REASON_RENAME_DIR,	"RENAME_DIR"},		\
+ 		{ EXT4_FC_REASON_FALLOC_RANGE,	"FALLOC_RANGE"},	\
+-		{ EXT4_FC_REASON_INODE_JOURNAL_DATA,	"INODE_JOURNAL_DATA"})
++		{ EXT4_FC_REASON_INODE_JOURNAL_DATA,	"INODE_JOURNAL_DATA"}, \
++		{ EXT4_FC_REASON_ENCRYPTED_FILENAME,	"ENCRYPTED_FILENAME"})
+ 
+ TRACE_EVENT(ext4_other_inode_update_time,
+ 	TP_PROTO(struct inode *inode, ino_t orig_ino),
+@@ -2940,7 +2942,7 @@ TRACE_EVENT(ext4_fc_stats,
+ 	),
+ 
+ 	TP_printk("dev %d,%d fc ineligible reasons:\n"
+-		  "%s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u "
++		  "%s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u"
+ 		  "num_commits:%lu, ineligible: %lu, numblks: %lu",
+ 		  MAJOR(__entry->dev), MINOR(__entry->dev),
+ 		  FC_REASON_NAME_STAT(EXT4_FC_REASON_XATTR),
+@@ -2952,6 +2954,7 @@ TRACE_EVENT(ext4_fc_stats,
+ 		  FC_REASON_NAME_STAT(EXT4_FC_REASON_RENAME_DIR),
+ 		  FC_REASON_NAME_STAT(EXT4_FC_REASON_FALLOC_RANGE),
+ 		  FC_REASON_NAME_STAT(EXT4_FC_REASON_INODE_JOURNAL_DATA),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_ENCRYPTED_FILENAME),
+ 		  __entry->fc_commits, __entry->fc_ineligible_commits,
+ 		  __entry->fc_numblks)
+ );
+diff --git a/include/trace/events/jbd2.h b/include/trace/events/jbd2.h
+index d16a32867f3a6..b1847e4314b83 100644
+--- a/include/trace/events/jbd2.h
++++ b/include/trace/events/jbd2.h
+@@ -40,7 +40,7 @@ DECLARE_EVENT_CLASS(jbd2_commit,
+ 	TP_STRUCT__entry(
+ 		__field(	dev_t,	dev			)
+ 		__field(	char,	sync_commit		  )
+-		__field(	int,	transaction		  )
++		__field(	tid_t,	transaction		  )
+ 	),
+ 
+ 	TP_fast_assign(
+@@ -49,7 +49,7 @@ DECLARE_EVENT_CLASS(jbd2_commit,
+ 		__entry->transaction	= commit_transaction->t_tid;
+ 	),
+ 
+-	TP_printk("dev %d,%d transaction %d sync %d",
++	TP_printk("dev %d,%d transaction %u sync %d",
+ 		  MAJOR(__entry->dev), MINOR(__entry->dev),
+ 		  __entry->transaction, __entry->sync_commit)
+ );
+@@ -97,8 +97,8 @@ TRACE_EVENT(jbd2_end_commit,
+ 	TP_STRUCT__entry(
+ 		__field(	dev_t,	dev			)
+ 		__field(	char,	sync_commit		  )
+-		__field(	int,	transaction		  )
+-		__field(	int,	head		  	  )
++		__field(	tid_t,	transaction		  )
++		__field(	tid_t,	head		  	  )
+ 	),
+ 
+ 	TP_fast_assign(
+@@ -108,7 +108,7 @@ TRACE_EVENT(jbd2_end_commit,
+ 		__entry->head		= journal->j_tail_sequence;
+ 	),
+ 
+-	TP_printk("dev %d,%d transaction %d sync %d head %d",
++	TP_printk("dev %d,%d transaction %u sync %d head %u",
+ 		  MAJOR(__entry->dev), MINOR(__entry->dev),
+ 		  __entry->transaction, __entry->sync_commit, __entry->head)
+ );
+@@ -134,14 +134,14 @@ TRACE_EVENT(jbd2_submit_inode_data,
+ );
+ 
+ DECLARE_EVENT_CLASS(jbd2_handle_start_class,
+-	TP_PROTO(dev_t dev, unsigned long tid, unsigned int type,
++	TP_PROTO(dev_t dev, tid_t tid, unsigned int type,
+ 		 unsigned int line_no, int requested_blocks),
+ 
+ 	TP_ARGS(dev, tid, type, line_no, requested_blocks),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(		dev_t,	dev		)
+-		__field(	unsigned long,	tid		)
++		__field(		tid_t,	tid		)
+ 		__field(	 unsigned int,	type		)
+ 		__field(	 unsigned int,	line_no		)
+ 		__field(		  int,	requested_blocks)
+@@ -155,28 +155,28 @@ DECLARE_EVENT_CLASS(jbd2_handle_start_class,
+ 		__entry->requested_blocks = requested_blocks;
+ 	),
+ 
+-	TP_printk("dev %d,%d tid %lu type %u line_no %u "
++	TP_printk("dev %d,%d tid %u type %u line_no %u "
+ 		  "requested_blocks %d",
+ 		  MAJOR(__entry->dev), MINOR(__entry->dev), __entry->tid,
+ 		  __entry->type, __entry->line_no, __entry->requested_blocks)
+ );
+ 
+ DEFINE_EVENT(jbd2_handle_start_class, jbd2_handle_start,
+-	TP_PROTO(dev_t dev, unsigned long tid, unsigned int type,
++	TP_PROTO(dev_t dev, tid_t tid, unsigned int type,
+ 		 unsigned int line_no, int requested_blocks),
+ 
+ 	TP_ARGS(dev, tid, type, line_no, requested_blocks)
+ );
+ 
+ DEFINE_EVENT(jbd2_handle_start_class, jbd2_handle_restart,
+-	TP_PROTO(dev_t dev, unsigned long tid, unsigned int type,
++	TP_PROTO(dev_t dev, tid_t tid, unsigned int type,
+ 		 unsigned int line_no, int requested_blocks),
+ 
+ 	TP_ARGS(dev, tid, type, line_no, requested_blocks)
+ );
+ 
+ TRACE_EVENT(jbd2_handle_extend,
+-	TP_PROTO(dev_t dev, unsigned long tid, unsigned int type,
++	TP_PROTO(dev_t dev, tid_t tid, unsigned int type,
+ 		 unsigned int line_no, int buffer_credits,
+ 		 int requested_blocks),
+ 
+@@ -184,7 +184,7 @@ TRACE_EVENT(jbd2_handle_extend,
+ 
+ 	TP_STRUCT__entry(
+ 		__field(		dev_t,	dev		)
+-		__field(	unsigned long,	tid		)
++		__field(		tid_t,	tid		)
+ 		__field(	 unsigned int,	type		)
+ 		__field(	 unsigned int,	line_no		)
+ 		__field(		  int,	buffer_credits  )
+@@ -200,7 +200,7 @@ TRACE_EVENT(jbd2_handle_extend,
+ 		__entry->requested_blocks = requested_blocks;
+ 	),
+ 
+-	TP_printk("dev %d,%d tid %lu type %u line_no %u "
++	TP_printk("dev %d,%d tid %u type %u line_no %u "
+ 		  "buffer_credits %d requested_blocks %d",
+ 		  MAJOR(__entry->dev), MINOR(__entry->dev), __entry->tid,
+ 		  __entry->type, __entry->line_no, __entry->buffer_credits,
+@@ -208,7 +208,7 @@ TRACE_EVENT(jbd2_handle_extend,
+ );
+ 
+ TRACE_EVENT(jbd2_handle_stats,
+-	TP_PROTO(dev_t dev, unsigned long tid, unsigned int type,
++	TP_PROTO(dev_t dev, tid_t tid, unsigned int type,
+ 		 unsigned int line_no, int interval, int sync,
+ 		 int requested_blocks, int dirtied_blocks),
+ 
+@@ -217,7 +217,7 @@ TRACE_EVENT(jbd2_handle_stats,
+ 
+ 	TP_STRUCT__entry(
+ 		__field(		dev_t,	dev		)
+-		__field(	unsigned long,	tid		)
++		__field(		tid_t,	tid		)
+ 		__field(	 unsigned int,	type		)
+ 		__field(	 unsigned int,	line_no		)
+ 		__field(		  int,	interval	)
+@@ -237,7 +237,7 @@ TRACE_EVENT(jbd2_handle_stats,
+ 		__entry->dirtied_blocks	  = dirtied_blocks;
+ 	),
+ 
+-	TP_printk("dev %d,%d tid %lu type %u line_no %u interval %d "
++	TP_printk("dev %d,%d tid %u type %u line_no %u interval %d "
+ 		  "sync %d requested_blocks %d dirtied_blocks %d",
+ 		  MAJOR(__entry->dev), MINOR(__entry->dev), __entry->tid,
+ 		  __entry->type, __entry->line_no, __entry->interval,
+@@ -246,14 +246,14 @@ TRACE_EVENT(jbd2_handle_stats,
+ );
+ 
+ TRACE_EVENT(jbd2_run_stats,
+-	TP_PROTO(dev_t dev, unsigned long tid,
++	TP_PROTO(dev_t dev, tid_t tid,
+ 		 struct transaction_run_stats_s *stats),
+ 
+ 	TP_ARGS(dev, tid, stats),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(		dev_t,	dev		)
+-		__field(	unsigned long,	tid		)
++		__field(		tid_t,	tid		)
+ 		__field(	unsigned long,	wait		)
+ 		__field(	unsigned long,	request_delay	)
+ 		__field(	unsigned long,	running		)
+@@ -279,7 +279,7 @@ TRACE_EVENT(jbd2_run_stats,
+ 		__entry->blocks_logged	= stats->rs_blocks_logged;
+ 	),
+ 
+-	TP_printk("dev %d,%d tid %lu wait %u request_delay %u running %u "
++	TP_printk("dev %d,%d tid %u wait %u request_delay %u running %u "
+ 		  "locked %u flushing %u logging %u handle_count %u "
+ 		  "blocks %u blocks_logged %u",
+ 		  MAJOR(__entry->dev), MINOR(__entry->dev), __entry->tid,
+@@ -294,14 +294,14 @@ TRACE_EVENT(jbd2_run_stats,
+ );
+ 
+ TRACE_EVENT(jbd2_checkpoint_stats,
+-	TP_PROTO(dev_t dev, unsigned long tid,
++	TP_PROTO(dev_t dev, tid_t tid,
+ 		 struct transaction_chp_stats_s *stats),
+ 
+ 	TP_ARGS(dev, tid, stats),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(		dev_t,	dev		)
+-		__field(	unsigned long,	tid		)
++		__field(		tid_t,	tid		)
+ 		__field(	unsigned long,	chp_time	)
+ 		__field(		__u32,	forced_to_close	)
+ 		__field(		__u32,	written		)
+@@ -317,7 +317,7 @@ TRACE_EVENT(jbd2_checkpoint_stats,
+ 		__entry->dropped	= stats->cs_dropped;
+ 	),
+ 
+-	TP_printk("dev %d,%d tid %lu chp_time %u forced_to_close %u "
++	TP_printk("dev %d,%d tid %u chp_time %u forced_to_close %u "
+ 		  "written %u dropped %u",
+ 		  MAJOR(__entry->dev), MINOR(__entry->dev), __entry->tid,
+ 		  jiffies_to_msecs(__entry->chp_time),
+diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
+index 5498d7a6556a7..dad9d3b4a97ad 100644
+--- a/include/uapi/drm/drm_fourcc.h
++++ b/include/uapi/drm/drm_fourcc.h
+@@ -271,6 +271,13 @@ extern "C" {
+  */
+ #define DRM_FORMAT_P016		fourcc_code('P', '0', '1', '6') /* 2x2 subsampled Cr:Cb plane 16 bits per channel */
+ 
++/* 2 plane YCbCr420.
++ * 3 10 bit components and 2 padding bits packed into 4 bytes.
++ * index 0 = Y plane, [31:0] x:Y2:Y1:Y0 2:10:10:10 little endian
++ * index 1 = Cr:Cb plane, [63:0] x:Cr2:Cb2:Cr1:x:Cb1:Cr0:Cb0 [2:10:10:10:2:10:10:10] little endian
++ */
++#define DRM_FORMAT_P030		fourcc_code('P', '0', '3', '0') /* 2x2 subsampled Cr:Cb plane 10 bits per channel packed */
++
+ /* 3 plane non-subsampled (444) YCbCr
+  * 16 bits per component, but only 10 bits are used and 6 bits are padded
+  * index 0: Y plane, [15:0] Y:x [10:6] little endian
+@@ -777,6 +784,10 @@ drm_fourcc_canonicalize_nvidia_format_mod(__u64 modifier)
+  * and UV.  Some SAND-using hardware stores UV in a separate tiled
+  * image from Y to reduce the column height, which is not supported
+  * with these modifiers.
++ *
++ * The DRM_FORMAT_MOD_BROADCOM_SAND128_COL_HEIGHT modifier is also
++ * supported for DRM_FORMAT_P030 where the columns remain as 128 bytes
++ * wide, but as this is a 10 bpp format that translates to 96 pixels.
+  */
+ 
+ #define DRM_FORMAT_MOD_BROADCOM_SAND32_COL_HEIGHT(v) \
+diff --git a/include/uapi/linux/idxd.h b/include/uapi/linux/idxd.h
+index 9d9ecc0f4c383..f086c5579006f 100644
+--- a/include/uapi/linux/idxd.h
++++ b/include/uapi/linux/idxd.h
+@@ -188,7 +188,7 @@ struct dsa_completion_record {
+ 		};
+ 
+ 		uint32_t	delta_rec_size;
+-		uint32_t	crc_val;
++		uint64_t	crc_val;
+ 
+ 		/* DIF check & strip */
+ 		struct {
+diff --git a/include/uapi/linux/swab.h b/include/uapi/linux/swab.h
+index 7272f85d6d6ab..3736f2fe15418 100644
+--- a/include/uapi/linux/swab.h
++++ b/include/uapi/linux/swab.h
+@@ -3,7 +3,7 @@
+ #define _UAPI_LINUX_SWAB_H
+ 
+ #include <linux/types.h>
+-#include <linux/compiler.h>
++#include <linux/stddef.h>
+ #include <asm/bitsperlong.h>
+ #include <asm/swab.h>
+ 
+diff --git a/include/uapi/sound/asequencer.h b/include/uapi/sound/asequencer.h
+index a75e14edc957e..dbd60f48b4b01 100644
+--- a/include/uapi/sound/asequencer.h
++++ b/include/uapi/sound/asequencer.h
+@@ -344,10 +344,10 @@ typedef int __bitwise snd_seq_client_type_t;
+ #define	KERNEL_CLIENT	((__force snd_seq_client_type_t) 2)
+                         
+ 	/* event filter flags */
+-#define SNDRV_SEQ_FILTER_BROADCAST	(1<<0)	/* accept broadcast messages */
+-#define SNDRV_SEQ_FILTER_MULTICAST	(1<<1)	/* accept multicast messages */
+-#define SNDRV_SEQ_FILTER_BOUNCE		(1<<2)	/* accept bounce event in error */
+-#define SNDRV_SEQ_FILTER_USE_EVENT	(1<<31)	/* use event filter */
++#define SNDRV_SEQ_FILTER_BROADCAST	(1U<<0)	/* accept broadcast messages */
++#define SNDRV_SEQ_FILTER_MULTICAST	(1U<<1)	/* accept multicast messages */
++#define SNDRV_SEQ_FILTER_BOUNCE		(1U<<2)	/* accept bounce event in error */
++#define SNDRV_SEQ_FILTER_USE_EVENT	(1U<<31)	/* use event filter */
+ 
+ struct snd_seq_client_info {
+ 	int client;			/* client number to inquire */
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 945faf036ad0f..0c4d16afb9ef8 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -2702,7 +2702,7 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
+ 	return false;
+ }
+ 
+-static inline int io_fixup_rw_res(struct io_kiocb *req, unsigned res)
++static inline int io_fixup_rw_res(struct io_kiocb *req, long res)
+ {
+ 	struct io_async_rw *io = req->async_data;
+ 
+diff --git a/kernel/Makefile b/kernel/Makefile
+index e7905bdf6e972..82e9c843617f0 100644
+--- a/kernel/Makefile
++++ b/kernel/Makefile
+@@ -53,7 +53,7 @@ obj-$(CONFIG_FREEZER) += freezer.o
+ obj-$(CONFIG_PROFILING) += profile.o
+ obj-$(CONFIG_STACKTRACE) += stacktrace.o
+ obj-y += time/
+-obj-$(CONFIG_FUTEX) += futex.o
++obj-$(CONFIG_FUTEX) += futex/
+ obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
+ obj-$(CONFIG_SMP) += smp.o
+ ifneq ($(CONFIG_SMP),y)
+diff --git a/kernel/acct.c b/kernel/acct.c
+index f175df8f6aa4a..12f7dacf560e2 100644
+--- a/kernel/acct.c
++++ b/kernel/acct.c
+@@ -331,6 +331,8 @@ static comp_t encode_comp_t(unsigned long value)
+ 		exp++;
+ 	}
+ 
++	if (exp > (((comp_t) ~0U) >> MANTSIZE))
++		return (comp_t) ~0U;
+ 	/*
+ 	 * Clean it up and polish it off.
+ 	 */
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 9232938e3f960..52e7048607399 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -3675,6 +3675,11 @@ static int btf_func_proto_check(struct btf_verifier_env *env,
+ 			break;
+ 		}
+ 
++		if (btf_type_is_resolve_source_only(arg_type)) {
++			btf_verifier_log_type(env, t, "Invalid arg#%u", i + 1);
++			return -EINVAL;
++		}
++
+ 		if (args[i].name_off &&
+ 		    (!btf_name_offset_valid(btf, args[i].name_off) ||
+ 		     !btf_name_valid_identifier(btf, args[i].name_off))) {
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 50364031eb4d1..232c93357b907 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -562,6 +562,14 @@ const char *kernel_type_name(u32 id)
+ 				  btf_type_by_id(btf_vmlinux, id)->name_off);
+ }
+ 
++/* The reg state of a pointer or a bounded scalar was saved when
++ * it was spilled to the stack.
++ */
++static bool is_spilled_reg(const struct bpf_stack_state *stack)
++{
++	return stack->slot_type[BPF_REG_SIZE - 1] == STACK_SPILL;
++}
++
+ static void print_verifier_state(struct bpf_verifier_env *env,
+ 				 const struct bpf_func_state *state)
+ {
+@@ -666,7 +674,7 @@ static void print_verifier_state(struct bpf_verifier_env *env,
+ 			continue;
+ 		verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE);
+ 		print_liveness(env, state->stack[i].spilled_ptr.live);
+-		if (state->stack[i].slot_type[0] == STACK_SPILL) {
++		if (is_spilled_reg(&state->stack[i])) {
+ 			reg = &state->stack[i].spilled_ptr;
+ 			t = reg->type;
+ 			verbose(env, "=%s", reg_type_str[t]);
+@@ -2009,7 +2017,7 @@ static void mark_all_scalars_precise(struct bpf_verifier_env *env,
+ 				reg->precise = true;
+ 			}
+ 			for (j = 0; j < func->allocated_stack / BPF_REG_SIZE; j++) {
+-				if (func->stack[j].slot_type[0] != STACK_SPILL)
++				if (!is_spilled_reg(&func->stack[j]))
+ 					continue;
+ 				reg = &func->stack[j].spilled_ptr;
+ 				if (reg->type != SCALAR_VALUE)
+@@ -2019,7 +2027,7 @@ static void mark_all_scalars_precise(struct bpf_verifier_env *env,
+ 		}
+ }
+ 
+-static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
++static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int regno,
+ 				  int spi)
+ {
+ 	struct bpf_verifier_state *st = env->cur_state;
+@@ -2036,7 +2044,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
+ 	if (!env->bpf_capable)
+ 		return 0;
+ 
+-	func = st->frame[st->curframe];
++	func = st->frame[frame];
+ 	if (regno >= 0) {
+ 		reg = &func->regs[regno];
+ 		if (reg->type != SCALAR_VALUE) {
+@@ -2051,7 +2059,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
+ 	}
+ 
+ 	while (spi >= 0) {
+-		if (func->stack[spi].slot_type[0] != STACK_SPILL) {
++		if (!is_spilled_reg(&func->stack[spi])) {
+ 			stack_mask = 0;
+ 			break;
+ 		}
+@@ -2117,7 +2125,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
+ 			break;
+ 
+ 		new_marks = false;
+-		func = st->frame[st->curframe];
++		func = st->frame[frame];
+ 		bitmap_from_u64(mask, reg_mask);
+ 		for_each_set_bit(i, mask, 32) {
+ 			reg = &func->regs[i];
+@@ -2150,7 +2158,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
+ 				return 0;
+ 			}
+ 
+-			if (func->stack[i].slot_type[0] != STACK_SPILL) {
++			if (!is_spilled_reg(&func->stack[i])) {
+ 				stack_mask &= ~(1ull << i);
+ 				continue;
+ 			}
+@@ -2183,12 +2191,17 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
+ 
+ static int mark_chain_precision(struct bpf_verifier_env *env, int regno)
+ {
+-	return __mark_chain_precision(env, regno, -1);
++	return __mark_chain_precision(env, env->cur_state->curframe, regno, -1);
+ }
+ 
+-static int mark_chain_precision_stack(struct bpf_verifier_env *env, int spi)
++static int mark_chain_precision_frame(struct bpf_verifier_env *env, int frame, int regno)
+ {
+-	return __mark_chain_precision(env, -1, spi);
++	return __mark_chain_precision(env, frame, regno, -1);
++}
++
++static int mark_chain_precision_stack_frame(struct bpf_verifier_env *env, int frame, int spi)
++{
++	return __mark_chain_precision(env, frame, -1, spi);
+ }
+ 
+ static bool is_spillable_regtype(enum bpf_reg_type type)
+@@ -2348,7 +2361,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 		/* regular write of data into stack destroys any spilled ptr */
+ 		state->stack[spi].spilled_ptr.type = NOT_INIT;
+ 		/* Mark slots as STACK_MISC if they belonged to spilled ptr. */
+-		if (state->stack[spi].slot_type[0] == STACK_SPILL)
++		if (is_spilled_reg(&state->stack[spi]))
+ 			for (i = 0; i < BPF_REG_SIZE; i++)
+ 				state->stack[spi].slot_type[i] = STACK_MISC;
+ 
+@@ -2439,14 +2452,17 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
+ 		spi = slot / BPF_REG_SIZE;
+ 		stype = &state->stack[spi].slot_type[slot % BPF_REG_SIZE];
+ 
+-		if (!env->allow_ptr_leaks
+-				&& *stype != NOT_INIT
+-				&& *stype != SCALAR_VALUE) {
+-			/* Reject the write if there's are spilled pointers in
+-			 * range. If we didn't reject here, the ptr status
+-			 * would be erased below (even though not all slots are
+-			 * actually overwritten), possibly opening the door to
+-			 * leaks.
++		if (!env->allow_ptr_leaks && *stype != STACK_MISC && *stype != STACK_ZERO) {
++			/* Reject the write if range we may write to has not
++			 * been initialized beforehand. If we didn't reject
++			 * here, the ptr status would be erased below (even
++			 * though not all slots are actually overwritten),
++			 * possibly opening the door to leaks.
++			 *
++			 * We do however catch STACK_INVALID case below, and
++			 * only allow reading possibly uninitialized memory
++			 * later for CAP_PERFMON, as the write may not happen to
++			 * that slot.
+ 			 */
+ 			verbose(env, "spilled ptr in range of var-offset stack write; insn %d, ptr off: %d",
+ 				insn_idx, i);
+@@ -2559,7 +2575,7 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
+ 	stype = reg_state->stack[spi].slot_type;
+ 	reg = &reg_state->stack[spi].spilled_ptr;
+ 
+-	if (stype[0] == STACK_SPILL) {
++	if (is_spilled_reg(&reg_state->stack[spi])) {
+ 		if (size != BPF_REG_SIZE) {
+ 			if (reg->type != SCALAR_VALUE) {
+ 				verbose_linfo(env, env->insn_idx, "; ");
+@@ -4078,11 +4094,11 @@ static int check_stack_range_initialized(
+ 			goto mark;
+ 		}
+ 
+-		if (state->stack[spi].slot_type[0] == STACK_SPILL &&
++		if (is_spilled_reg(&state->stack[spi]) &&
+ 		    state->stack[spi].spilled_ptr.type == PTR_TO_BTF_ID)
+ 			goto mark;
+ 
+-		if (state->stack[spi].slot_type[0] == STACK_SPILL &&
++		if (is_spilled_reg(&state->stack[spi]) &&
+ 		    (state->stack[spi].spilled_ptr.type == SCALAR_VALUE ||
+ 		     env->allow_ptr_leaks)) {
+ 			if (clobber) {
+@@ -6989,6 +7005,11 @@ static int adjust_reg_min_max_vals(struct bpf_verifier_env *env,
+ 				return err;
+ 			return adjust_ptr_min_max_vals(env, insn,
+ 						       dst_reg, src_reg);
++		} else if (dst_reg->precise) {
++			/* if dst_reg is precise, src_reg should be precise as well */
++			err = mark_chain_precision(env, insn->src_reg);
++			if (err)
++				return err;
+ 		}
+ 	} else {
+ 		/* Pretend the src is a reg with a known value, since we only
+@@ -9274,9 +9295,9 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
+ 			 * return false to continue verification of this path
+ 			 */
+ 			return false;
+-		if (i % BPF_REG_SIZE)
++		if (i % BPF_REG_SIZE != BPF_REG_SIZE - 1)
+ 			continue;
+-		if (old->stack[spi].slot_type[0] != STACK_SPILL)
++		if (!is_spilled_reg(&old->stack[spi]))
+ 			continue;
+ 		if (!regsafe(env, &old->stack[spi].spilled_ptr,
+ 			     &cur->stack[spi].spilled_ptr, idmap))
+@@ -9467,34 +9488,36 @@ static int propagate_precision(struct bpf_verifier_env *env,
+ {
+ 	struct bpf_reg_state *state_reg;
+ 	struct bpf_func_state *state;
+-	int i, err = 0;
++	int i, err = 0, fr;
+ 
+-	state = old->frame[old->curframe];
+-	state_reg = state->regs;
+-	for (i = 0; i < BPF_REG_FP; i++, state_reg++) {
+-		if (state_reg->type != SCALAR_VALUE ||
+-		    !state_reg->precise)
+-			continue;
+-		if (env->log.level & BPF_LOG_LEVEL2)
+-			verbose(env, "propagating r%d\n", i);
+-		err = mark_chain_precision(env, i);
+-		if (err < 0)
+-			return err;
+-	}
++	for (fr = old->curframe; fr >= 0; fr--) {
++		state = old->frame[fr];
++		state_reg = state->regs;
++		for (i = 0; i < BPF_REG_FP; i++, state_reg++) {
++			if (state_reg->type != SCALAR_VALUE ||
++			    !state_reg->precise)
++				continue;
++			if (env->log.level & BPF_LOG_LEVEL2)
++				verbose(env, "frame %d: propagating r%d\n", i, fr);
++			err = mark_chain_precision_frame(env, fr, i);
++			if (err < 0)
++				return err;
++		}
+ 
+-	for (i = 0; i < state->allocated_stack / BPF_REG_SIZE; i++) {
+-		if (state->stack[i].slot_type[0] != STACK_SPILL)
+-			continue;
+-		state_reg = &state->stack[i].spilled_ptr;
+-		if (state_reg->type != SCALAR_VALUE ||
+-		    !state_reg->precise)
+-			continue;
+-		if (env->log.level & BPF_LOG_LEVEL2)
+-			verbose(env, "propagating fp%d\n",
+-				(-i - 1) * BPF_REG_SIZE);
+-		err = mark_chain_precision_stack(env, i);
+-		if (err < 0)
+-			return err;
++		for (i = 0; i < state->allocated_stack / BPF_REG_SIZE; i++) {
++			if (!is_spilled_reg(&state->stack[i]))
++				continue;
++			state_reg = &state->stack[i].spilled_ptr;
++			if (state_reg->type != SCALAR_VALUE ||
++			    !state_reg->precise)
++				continue;
++			if (env->log.level & BPF_LOG_LEVEL2)
++				verbose(env, "frame %d: propagating fp%d\n",
++					(-i - 1) * BPF_REG_SIZE, fr);
++			err = mark_chain_precision_stack_frame(env, fr, i);
++			if (err < 0)
++				return err;
++		}
+ 	}
+ 	return 0;
+ }
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 3c9ee966c56a5..008b50da22246 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2231,8 +2231,10 @@ static ssize_t write_cpuhp_target(struct device *dev,
+ 
+ 	if (st->state < target)
+ 		ret = cpu_up(dev->id, target);
+-	else
++	else if (st->state > target)
+ 		ret = cpu_down(dev->id, target);
++	else if (WARN_ON(st->target != target))
++		st->target = target;
+ out:
+ 	unlock_device_hotplug();
+ 	return ret ? ret : count;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index e9b354d521a38..d7b61116f15bb 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10810,13 +10810,15 @@ static int pmu_dev_alloc(struct pmu *pmu)
+ 
+ 	pmu->dev->groups = pmu->attr_groups;
+ 	device_initialize(pmu->dev);
+-	ret = dev_set_name(pmu->dev, "%s", pmu->name);
+-	if (ret)
+-		goto free_dev;
+ 
+ 	dev_set_drvdata(pmu->dev, pmu);
+ 	pmu->dev->bus = &pmu_bus;
+ 	pmu->dev->release = pmu_dev_release;
++
++	ret = dev_set_name(pmu->dev, "%s", pmu->name);
++	if (ret)
++		goto free_dev;
++
+ 	ret = device_add(pmu->dev);
+ 	if (ret)
+ 		goto free_dev;
+@@ -11779,12 +11781,12 @@ SYSCALL_DEFINE5(perf_event_open,
+ 	if (flags & ~PERF_FLAG_ALL)
+ 		return -EINVAL;
+ 
+-	/* Do we allow access to perf_event_open(2) ? */
+-	err = security_perf_event_open(&attr, PERF_SECURITY_OPEN);
++	err = perf_copy_attr(attr_uptr, &attr);
+ 	if (err)
+ 		return err;
+ 
+-	err = perf_copy_attr(attr_uptr, &attr);
++	/* Do we allow access to perf_event_open(2) ? */
++	err = security_perf_event_open(&attr, PERF_SECURITY_OPEN);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/kernel/futex.c b/kernel/futex.c
+deleted file mode 100644
+index 98a6e1b80bfe4..0000000000000
+--- a/kernel/futex.c
++++ /dev/null
+@@ -1,4040 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-or-later
+-/*
+- *  Fast Userspace Mutexes (which I call "Futexes!").
+- *  (C) Rusty Russell, IBM 2002
+- *
+- *  Generalized futexes, futex requeueing, misc fixes by Ingo Molnar
+- *  (C) Copyright 2003 Red Hat Inc, All Rights Reserved
+- *
+- *  Removed page pinning, fix privately mapped COW pages and other cleanups
+- *  (C) Copyright 2003, 2004 Jamie Lokier
+- *
+- *  Robust futex support started by Ingo Molnar
+- *  (C) Copyright 2006 Red Hat Inc, All Rights Reserved
+- *  Thanks to Thomas Gleixner for suggestions, analysis and fixes.
+- *
+- *  PI-futex support started by Ingo Molnar and Thomas Gleixner
+- *  Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+- *  Copyright (C) 2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com>
+- *
+- *  PRIVATE futexes by Eric Dumazet
+- *  Copyright (C) 2007 Eric Dumazet <dada1@cosmosbay.com>
+- *
+- *  Requeue-PI support by Darren Hart <dvhltc@us.ibm.com>
+- *  Copyright (C) IBM Corporation, 2009
+- *  Thanks to Thomas Gleixner for conceptual design and careful reviews.
+- *
+- *  Thanks to Ben LaHaise for yelling "hashed waitqueues" loudly
+- *  enough at me, Linus for the original (flawed) idea, Matthew
+- *  Kirkwood for proof-of-concept implementation.
+- *
+- *  "The futexes are also cursed."
+- *  "But they come in a choice of three flavours!"
+- */
+-#include <linux/compat.h>
+-#include <linux/jhash.h>
+-#include <linux/pagemap.h>
+-#include <linux/syscalls.h>
+-#include <linux/freezer.h>
+-#include <linux/memblock.h>
+-#include <linux/fault-inject.h>
+-#include <linux/time_namespace.h>
+-
+-#include <asm/futex.h>
+-
+-#include "locking/rtmutex_common.h"
+-
+-/*
+- * READ this before attempting to hack on futexes!
+- *
+- * Basic futex operation and ordering guarantees
+- * =============================================
+- *
+- * The waiter reads the futex value in user space and calls
+- * futex_wait(). This function computes the hash bucket and acquires
+- * the hash bucket lock. After that it reads the futex user space value
+- * again and verifies that the data has not changed. If it has not changed
+- * it enqueues itself into the hash bucket, releases the hash bucket lock
+- * and schedules.
+- *
+- * The waker side modifies the user space value of the futex and calls
+- * futex_wake(). This function computes the hash bucket and acquires the
+- * hash bucket lock. Then it looks for waiters on that futex in the hash
+- * bucket and wakes them.
+- *
+- * In futex wake up scenarios where no tasks are blocked on a futex, taking
+- * the hb spinlock can be avoided and simply return. In order for this
+- * optimization to work, ordering guarantees must exist so that the waiter
+- * being added to the list is acknowledged when the list is concurrently being
+- * checked by the waker, avoiding scenarios like the following:
+- *
+- * CPU 0                               CPU 1
+- * val = *futex;
+- * sys_futex(WAIT, futex, val);
+- *   futex_wait(futex, val);
+- *   uval = *futex;
+- *                                     *futex = newval;
+- *                                     sys_futex(WAKE, futex);
+- *                                       futex_wake(futex);
+- *                                       if (queue_empty())
+- *                                         return;
+- *   if (uval == val)
+- *      lock(hash_bucket(futex));
+- *      queue();
+- *     unlock(hash_bucket(futex));
+- *     schedule();
+- *
+- * This would cause the waiter on CPU 0 to wait forever because it
+- * missed the transition of the user space value from val to newval
+- * and the waker did not find the waiter in the hash bucket queue.
+- *
+- * The correct serialization ensures that a waiter either observes
+- * the changed user space value before blocking or is woken by a
+- * concurrent waker:
+- *
+- * CPU 0                                 CPU 1
+- * val = *futex;
+- * sys_futex(WAIT, futex, val);
+- *   futex_wait(futex, val);
+- *
+- *   waiters++; (a)
+- *   smp_mb(); (A) <-- paired with -.
+- *                                  |
+- *   lock(hash_bucket(futex));      |
+- *                                  |
+- *   uval = *futex;                 |
+- *                                  |        *futex = newval;
+- *                                  |        sys_futex(WAKE, futex);
+- *                                  |          futex_wake(futex);
+- *                                  |
+- *                                  `--------> smp_mb(); (B)
+- *   if (uval == val)
+- *     queue();
+- *     unlock(hash_bucket(futex));
+- *     schedule();                         if (waiters)
+- *                                           lock(hash_bucket(futex));
+- *   else                                    wake_waiters(futex);
+- *     waiters--; (b)                        unlock(hash_bucket(futex));
+- *
+- * Where (A) orders the waiters increment and the futex value read through
+- * atomic operations (see hb_waiters_inc) and where (B) orders the write
+- * to futex and the waiters read (see hb_waiters_pending()).
+- *
+- * This yields the following case (where X:=waiters, Y:=futex):
+- *
+- *	X = Y = 0
+- *
+- *	w[X]=1		w[Y]=1
+- *	MB		MB
+- *	r[Y]=y		r[X]=x
+- *
+- * Which guarantees that x==0 && y==0 is impossible; which translates back into
+- * the guarantee that we cannot both miss the futex variable change and the
+- * enqueue.
+- *
+- * Note that a new waiter is accounted for in (a) even when it is possible that
+- * the wait call can return error, in which case we backtrack from it in (b).
+- * Refer to the comment in queue_lock().
+- *
+- * Similarly, in order to account for waiters being requeued on another
+- * address we always increment the waiters for the destination bucket before
+- * acquiring the lock. It then decrements them again  after releasing it -
+- * the code that actually moves the futex(es) between hash buckets (requeue_futex)
+- * will do the additional required waiter count housekeeping. This is done for
+- * double_lock_hb() and double_unlock_hb(), respectively.
+- */
+-
+-#ifdef CONFIG_HAVE_FUTEX_CMPXCHG
+-#define futex_cmpxchg_enabled 1
+-#else
+-static int  __read_mostly futex_cmpxchg_enabled;
+-#endif
+-
+-/*
+- * Futex flags used to encode options to functions and preserve them across
+- * restarts.
+- */
+-#ifdef CONFIG_MMU
+-# define FLAGS_SHARED		0x01
+-#else
+-/*
+- * NOMMU does not have per process address space. Let the compiler optimize
+- * code away.
+- */
+-# define FLAGS_SHARED		0x00
+-#endif
+-#define FLAGS_CLOCKRT		0x02
+-#define FLAGS_HAS_TIMEOUT	0x04
+-
+-/*
+- * Priority Inheritance state:
+- */
+-struct futex_pi_state {
+-	/*
+-	 * list of 'owned' pi_state instances - these have to be
+-	 * cleaned up in do_exit() if the task exits prematurely:
+-	 */
+-	struct list_head list;
+-
+-	/*
+-	 * The PI object:
+-	 */
+-	struct rt_mutex pi_mutex;
+-
+-	struct task_struct *owner;
+-	refcount_t refcount;
+-
+-	union futex_key key;
+-} __randomize_layout;
+-
+-/**
+- * struct futex_q - The hashed futex queue entry, one per waiting task
+- * @list:		priority-sorted list of tasks waiting on this futex
+- * @task:		the task waiting on the futex
+- * @lock_ptr:		the hash bucket lock
+- * @key:		the key the futex is hashed on
+- * @pi_state:		optional priority inheritance state
+- * @rt_waiter:		rt_waiter storage for use with requeue_pi
+- * @requeue_pi_key:	the requeue_pi target futex key
+- * @bitset:		bitset for the optional bitmasked wakeup
+- *
+- * We use this hashed waitqueue, instead of a normal wait_queue_entry_t, so
+- * we can wake only the relevant ones (hashed queues may be shared).
+- *
+- * A futex_q has a woken state, just like tasks have TASK_RUNNING.
+- * It is considered woken when plist_node_empty(&q->list) || q->lock_ptr == 0.
+- * The order of wakeup is always to make the first condition true, then
+- * the second.
+- *
+- * PI futexes are typically woken before they are removed from the hash list via
+- * the rt_mutex code. See unqueue_me_pi().
+- */
+-struct futex_q {
+-	struct plist_node list;
+-
+-	struct task_struct *task;
+-	spinlock_t *lock_ptr;
+-	union futex_key key;
+-	struct futex_pi_state *pi_state;
+-	struct rt_mutex_waiter *rt_waiter;
+-	union futex_key *requeue_pi_key;
+-	u32 bitset;
+-} __randomize_layout;
+-
+-static const struct futex_q futex_q_init = {
+-	/* list gets initialized in queue_me()*/
+-	.key = FUTEX_KEY_INIT,
+-	.bitset = FUTEX_BITSET_MATCH_ANY
+-};
+-
+-/*
+- * Hash buckets are shared by all the futex_keys that hash to the same
+- * location.  Each key may have multiple futex_q structures, one for each task
+- * waiting on a futex.
+- */
+-struct futex_hash_bucket {
+-	atomic_t waiters;
+-	spinlock_t lock;
+-	struct plist_head chain;
+-} ____cacheline_aligned_in_smp;
+-
+-/*
+- * The base of the bucket array and its size are always used together
+- * (after initialization only in hash_futex()), so ensure that they
+- * reside in the same cacheline.
+- */
+-static struct {
+-	struct futex_hash_bucket *queues;
+-	unsigned long            hashsize;
+-} __futex_data __read_mostly __aligned(2*sizeof(long));
+-#define futex_queues   (__futex_data.queues)
+-#define futex_hashsize (__futex_data.hashsize)
+-
+-
+-/*
+- * Fault injections for futexes.
+- */
+-#ifdef CONFIG_FAIL_FUTEX
+-
+-static struct {
+-	struct fault_attr attr;
+-
+-	bool ignore_private;
+-} fail_futex = {
+-	.attr = FAULT_ATTR_INITIALIZER,
+-	.ignore_private = false,
+-};
+-
+-static int __init setup_fail_futex(char *str)
+-{
+-	return setup_fault_attr(&fail_futex.attr, str);
+-}
+-__setup("fail_futex=", setup_fail_futex);
+-
+-static bool should_fail_futex(bool fshared)
+-{
+-	if (fail_futex.ignore_private && !fshared)
+-		return false;
+-
+-	return should_fail(&fail_futex.attr, 1);
+-}
+-
+-#ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
+-
+-static int __init fail_futex_debugfs(void)
+-{
+-	umode_t mode = S_IFREG | S_IRUSR | S_IWUSR;
+-	struct dentry *dir;
+-
+-	dir = fault_create_debugfs_attr("fail_futex", NULL,
+-					&fail_futex.attr);
+-	if (IS_ERR(dir))
+-		return PTR_ERR(dir);
+-
+-	debugfs_create_bool("ignore-private", mode, dir,
+-			    &fail_futex.ignore_private);
+-	return 0;
+-}
+-
+-late_initcall(fail_futex_debugfs);
+-
+-#endif /* CONFIG_FAULT_INJECTION_DEBUG_FS */
+-
+-#else
+-static inline bool should_fail_futex(bool fshared)
+-{
+-	return false;
+-}
+-#endif /* CONFIG_FAIL_FUTEX */
+-
+-#ifdef CONFIG_COMPAT
+-static void compat_exit_robust_list(struct task_struct *curr);
+-#else
+-static inline void compat_exit_robust_list(struct task_struct *curr) { }
+-#endif
+-
+-/*
+- * Reflects a new waiter being added to the waitqueue.
+- */
+-static inline void hb_waiters_inc(struct futex_hash_bucket *hb)
+-{
+-#ifdef CONFIG_SMP
+-	atomic_inc(&hb->waiters);
+-	/*
+-	 * Full barrier (A), see the ordering comment above.
+-	 */
+-	smp_mb__after_atomic();
+-#endif
+-}
+-
+-/*
+- * Reflects a waiter being removed from the waitqueue by wakeup
+- * paths.
+- */
+-static inline void hb_waiters_dec(struct futex_hash_bucket *hb)
+-{
+-#ifdef CONFIG_SMP
+-	atomic_dec(&hb->waiters);
+-#endif
+-}
+-
+-static inline int hb_waiters_pending(struct futex_hash_bucket *hb)
+-{
+-#ifdef CONFIG_SMP
+-	/*
+-	 * Full barrier (B), see the ordering comment above.
+-	 */
+-	smp_mb();
+-	return atomic_read(&hb->waiters);
+-#else
+-	return 1;
+-#endif
+-}
+-
+-/**
+- * hash_futex - Return the hash bucket in the global hash
+- * @key:	Pointer to the futex key for which the hash is calculated
+- *
+- * We hash on the keys returned from get_futex_key (see below) and return the
+- * corresponding hash bucket in the global hash.
+- */
+-static struct futex_hash_bucket *hash_futex(union futex_key *key)
+-{
+-	u32 hash = jhash2((u32 *)key, offsetof(typeof(*key), both.offset) / 4,
+-			  key->both.offset);
+-
+-	return &futex_queues[hash & (futex_hashsize - 1)];
+-}
+-
+-
+-/**
+- * match_futex - Check whether two futex keys are equal
+- * @key1:	Pointer to key1
+- * @key2:	Pointer to key2
+- *
+- * Return 1 if two futex_keys are equal, 0 otherwise.
+- */
+-static inline int match_futex(union futex_key *key1, union futex_key *key2)
+-{
+-	return (key1 && key2
+-		&& key1->both.word == key2->both.word
+-		&& key1->both.ptr == key2->both.ptr
+-		&& key1->both.offset == key2->both.offset);
+-}
+-
+-enum futex_access {
+-	FUTEX_READ,
+-	FUTEX_WRITE
+-};
+-
+-/**
+- * futex_setup_timer - set up the sleeping hrtimer.
+- * @time:	ptr to the given timeout value
+- * @timeout:	the hrtimer_sleeper structure to be set up
+- * @flags:	futex flags
+- * @range_ns:	optional range in ns
+- *
+- * Return: Initialized hrtimer_sleeper structure or NULL if no timeout
+- *	   value given
+- */
+-static inline struct hrtimer_sleeper *
+-futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout,
+-		  int flags, u64 range_ns)
+-{
+-	if (!time)
+-		return NULL;
+-
+-	hrtimer_init_sleeper_on_stack(timeout, (flags & FLAGS_CLOCKRT) ?
+-				      CLOCK_REALTIME : CLOCK_MONOTONIC,
+-				      HRTIMER_MODE_ABS);
+-	/*
+-	 * If range_ns is 0, calling hrtimer_set_expires_range_ns() is
+-	 * effectively the same as calling hrtimer_set_expires().
+-	 */
+-	hrtimer_set_expires_range_ns(&timeout->timer, *time, range_ns);
+-
+-	return timeout;
+-}
+-
+-/*
+- * Generate a machine wide unique identifier for this inode.
+- *
+- * This relies on u64 not wrapping in the life-time of the machine; which with
+- * 1ns resolution means almost 585 years.
+- *
+- * This further relies on the fact that a well formed program will not unmap
+- * the file while it has a (shared) futex waiting on it. This mapping will have
+- * a file reference which pins the mount and inode.
+- *
+- * If for some reason an inode gets evicted and read back in again, it will get
+- * a new sequence number and will _NOT_ match, even though it is the exact same
+- * file.
+- *
+- * It is important that match_futex() will never have a false-positive, esp.
+- * for PI futexes that can mess up the state. The above argues that false-negatives
+- * are only possible for malformed programs.
+- */
+-static u64 get_inode_sequence_number(struct inode *inode)
+-{
+-	static atomic64_t i_seq;
+-	u64 old;
+-
+-	/* Does the inode already have a sequence number? */
+-	old = atomic64_read(&inode->i_sequence);
+-	if (likely(old))
+-		return old;
+-
+-	for (;;) {
+-		u64 new = atomic64_add_return(1, &i_seq);
+-		if (WARN_ON_ONCE(!new))
+-			continue;
+-
+-		old = atomic64_cmpxchg_relaxed(&inode->i_sequence, 0, new);
+-		if (old)
+-			return old;
+-		return new;
+-	}
+-}
+-
+-/**
+- * get_futex_key() - Get parameters which are the keys for a futex
+- * @uaddr:	virtual address of the futex
+- * @fshared:	false for a PROCESS_PRIVATE futex, true for PROCESS_SHARED
+- * @key:	address where result is stored.
+- * @rw:		mapping needs to be read/write (values: FUTEX_READ,
+- *              FUTEX_WRITE)
+- *
+- * Return: a negative error code or 0
+- *
+- * The key words are stored in @key on success.
+- *
+- * For shared mappings (when @fshared), the key is:
+- *
+- *   ( inode->i_sequence, page->index, offset_within_page )
+- *
+- * [ also see get_inode_sequence_number() ]
+- *
+- * For private mappings (or when !@fshared), the key is:
+- *
+- *   ( current->mm, address, 0 )
+- *
+- * This allows (cross process, where applicable) identification of the futex
+- * without keeping the page pinned for the duration of the FUTEX_WAIT.
+- *
+- * lock_page() might sleep, the caller should not hold a spinlock.
+- */
+-static int get_futex_key(u32 __user *uaddr, bool fshared, union futex_key *key,
+-			 enum futex_access rw)
+-{
+-	unsigned long address = (unsigned long)uaddr;
+-	struct mm_struct *mm = current->mm;
+-	struct page *page, *tail;
+-	struct address_space *mapping;
+-	int err, ro = 0;
+-
+-	/*
+-	 * The futex address must be "naturally" aligned.
+-	 */
+-	key->both.offset = address % PAGE_SIZE;
+-	if (unlikely((address % sizeof(u32)) != 0))
+-		return -EINVAL;
+-	address -= key->both.offset;
+-
+-	if (unlikely(!access_ok(uaddr, sizeof(u32))))
+-		return -EFAULT;
+-
+-	if (unlikely(should_fail_futex(fshared)))
+-		return -EFAULT;
+-
+-	/*
+-	 * PROCESS_PRIVATE futexes are fast.
+-	 * As the mm cannot disappear under us and the 'key' only needs
+-	 * virtual address, we dont even have to find the underlying vma.
+-	 * Note : We do have to check 'uaddr' is a valid user address,
+-	 *        but access_ok() should be faster than find_vma()
+-	 */
+-	if (!fshared) {
+-		key->private.mm = mm;
+-		key->private.address = address;
+-		return 0;
+-	}
+-
+-again:
+-	/* Ignore any VERIFY_READ mapping (futex common case) */
+-	if (unlikely(should_fail_futex(true)))
+-		return -EFAULT;
+-
+-	err = get_user_pages_fast(address, 1, FOLL_WRITE, &page);
+-	/*
+-	 * If write access is not required (eg. FUTEX_WAIT), try
+-	 * and get read-only access.
+-	 */
+-	if (err == -EFAULT && rw == FUTEX_READ) {
+-		err = get_user_pages_fast(address, 1, 0, &page);
+-		ro = 1;
+-	}
+-	if (err < 0)
+-		return err;
+-	else
+-		err = 0;
+-
+-	/*
+-	 * The treatment of mapping from this point on is critical. The page
+-	 * lock protects many things but in this context the page lock
+-	 * stabilizes mapping, prevents inode freeing in the shared
+-	 * file-backed region case and guards against movement to swap cache.
+-	 *
+-	 * Strictly speaking the page lock is not needed in all cases being
+-	 * considered here and page lock forces unnecessarily serialization
+-	 * From this point on, mapping will be re-verified if necessary and
+-	 * page lock will be acquired only if it is unavoidable
+-	 *
+-	 * Mapping checks require the head page for any compound page so the
+-	 * head page and mapping is looked up now. For anonymous pages, it
+-	 * does not matter if the page splits in the future as the key is
+-	 * based on the address. For filesystem-backed pages, the tail is
+-	 * required as the index of the page determines the key. For
+-	 * base pages, there is no tail page and tail == page.
+-	 */
+-	tail = page;
+-	page = compound_head(page);
+-	mapping = READ_ONCE(page->mapping);
+-
+-	/*
+-	 * If page->mapping is NULL, then it cannot be a PageAnon
+-	 * page; but it might be the ZERO_PAGE or in the gate area or
+-	 * in a special mapping (all cases which we are happy to fail);
+-	 * or it may have been a good file page when get_user_pages_fast
+-	 * found it, but truncated or holepunched or subjected to
+-	 * invalidate_complete_page2 before we got the page lock (also
+-	 * cases which we are happy to fail).  And we hold a reference,
+-	 * so refcount care in invalidate_complete_page's remove_mapping
+-	 * prevents drop_caches from setting mapping to NULL beneath us.
+-	 *
+-	 * The case we do have to guard against is when memory pressure made
+-	 * shmem_writepage move it from filecache to swapcache beneath us:
+-	 * an unlikely race, but we do need to retry for page->mapping.
+-	 */
+-	if (unlikely(!mapping)) {
+-		int shmem_swizzled;
+-
+-		/*
+-		 * Page lock is required to identify which special case above
+-		 * applies. If this is really a shmem page then the page lock
+-		 * will prevent unexpected transitions.
+-		 */
+-		lock_page(page);
+-		shmem_swizzled = PageSwapCache(page) || page->mapping;
+-		unlock_page(page);
+-		put_page(page);
+-
+-		if (shmem_swizzled)
+-			goto again;
+-
+-		return -EFAULT;
+-	}
+-
+-	/*
+-	 * Private mappings are handled in a simple way.
+-	 *
+-	 * If the futex key is stored on an anonymous page, then the associated
+-	 * object is the mm which is implicitly pinned by the calling process.
+-	 *
+-	 * NOTE: When userspace waits on a MAP_SHARED mapping, even if
+-	 * it's a read-only handle, it's expected that futexes attach to
+-	 * the object not the particular process.
+-	 */
+-	if (PageAnon(page)) {
+-		/*
+-		 * A RO anonymous page will never change and thus doesn't make
+-		 * sense for futex operations.
+-		 */
+-		if (unlikely(should_fail_futex(true)) || ro) {
+-			err = -EFAULT;
+-			goto out;
+-		}
+-
+-		key->both.offset |= FUT_OFF_MMSHARED; /* ref taken on mm */
+-		key->private.mm = mm;
+-		key->private.address = address;
+-
+-	} else {
+-		struct inode *inode;
+-
+-		/*
+-		 * The associated futex object in this case is the inode and
+-		 * the page->mapping must be traversed. Ordinarily this should
+-		 * be stabilised under page lock but it's not strictly
+-		 * necessary in this case as we just want to pin the inode, not
+-		 * update the radix tree or anything like that.
+-		 *
+-		 * The RCU read lock is taken as the inode is finally freed
+-		 * under RCU. If the mapping still matches expectations then the
+-		 * mapping->host can be safely accessed as being a valid inode.
+-		 */
+-		rcu_read_lock();
+-
+-		if (READ_ONCE(page->mapping) != mapping) {
+-			rcu_read_unlock();
+-			put_page(page);
+-
+-			goto again;
+-		}
+-
+-		inode = READ_ONCE(mapping->host);
+-		if (!inode) {
+-			rcu_read_unlock();
+-			put_page(page);
+-
+-			goto again;
+-		}
+-
+-		key->both.offset |= FUT_OFF_INODE; /* inode-based key */
+-		key->shared.i_seq = get_inode_sequence_number(inode);
+-		key->shared.pgoff = page_to_pgoff(tail);
+-		rcu_read_unlock();
+-	}
+-
+-out:
+-	put_page(page);
+-	return err;
+-}
+-
+-/**
+- * fault_in_user_writeable() - Fault in user address and verify RW access
+- * @uaddr:	pointer to faulting user space address
+- *
+- * Slow path to fixup the fault we just took in the atomic write
+- * access to @uaddr.
+- *
+- * We have no generic implementation of a non-destructive write to the
+- * user address. We know that we faulted in the atomic pagefault
+- * disabled section so we can as well avoid the #PF overhead by
+- * calling get_user_pages() right away.
+- */
+-static int fault_in_user_writeable(u32 __user *uaddr)
+-{
+-	struct mm_struct *mm = current->mm;
+-	int ret;
+-
+-	mmap_read_lock(mm);
+-	ret = fixup_user_fault(mm, (unsigned long)uaddr,
+-			       FAULT_FLAG_WRITE, NULL);
+-	mmap_read_unlock(mm);
+-
+-	return ret < 0 ? ret : 0;
+-}
+-
+-/**
+- * futex_top_waiter() - Return the highest priority waiter on a futex
+- * @hb:		the hash bucket the futex_q's reside in
+- * @key:	the futex key (to distinguish it from other futex futex_q's)
+- *
+- * Must be called with the hb lock held.
+- */
+-static struct futex_q *futex_top_waiter(struct futex_hash_bucket *hb,
+-					union futex_key *key)
+-{
+-	struct futex_q *this;
+-
+-	plist_for_each_entry(this, &hb->chain, list) {
+-		if (match_futex(&this->key, key))
+-			return this;
+-	}
+-	return NULL;
+-}
+-
+-static int cmpxchg_futex_value_locked(u32 *curval, u32 __user *uaddr,
+-				      u32 uval, u32 newval)
+-{
+-	int ret;
+-
+-	pagefault_disable();
+-	ret = futex_atomic_cmpxchg_inatomic(curval, uaddr, uval, newval);
+-	pagefault_enable();
+-
+-	return ret;
+-}
+-
+-static int get_futex_value_locked(u32 *dest, u32 __user *from)
+-{
+-	int ret;
+-
+-	pagefault_disable();
+-	ret = __get_user(*dest, from);
+-	pagefault_enable();
+-
+-	return ret ? -EFAULT : 0;
+-}
+-
+-
+-/*
+- * PI code:
+- */
+-static int refill_pi_state_cache(void)
+-{
+-	struct futex_pi_state *pi_state;
+-
+-	if (likely(current->pi_state_cache))
+-		return 0;
+-
+-	pi_state = kzalloc(sizeof(*pi_state), GFP_KERNEL);
+-
+-	if (!pi_state)
+-		return -ENOMEM;
+-
+-	INIT_LIST_HEAD(&pi_state->list);
+-	/* pi_mutex gets initialized later */
+-	pi_state->owner = NULL;
+-	refcount_set(&pi_state->refcount, 1);
+-	pi_state->key = FUTEX_KEY_INIT;
+-
+-	current->pi_state_cache = pi_state;
+-
+-	return 0;
+-}
+-
+-static struct futex_pi_state *alloc_pi_state(void)
+-{
+-	struct futex_pi_state *pi_state = current->pi_state_cache;
+-
+-	WARN_ON(!pi_state);
+-	current->pi_state_cache = NULL;
+-
+-	return pi_state;
+-}
+-
+-static void pi_state_update_owner(struct futex_pi_state *pi_state,
+-				  struct task_struct *new_owner)
+-{
+-	struct task_struct *old_owner = pi_state->owner;
+-
+-	lockdep_assert_held(&pi_state->pi_mutex.wait_lock);
+-
+-	if (old_owner) {
+-		raw_spin_lock(&old_owner->pi_lock);
+-		WARN_ON(list_empty(&pi_state->list));
+-		list_del_init(&pi_state->list);
+-		raw_spin_unlock(&old_owner->pi_lock);
+-	}
+-
+-	if (new_owner) {
+-		raw_spin_lock(&new_owner->pi_lock);
+-		WARN_ON(!list_empty(&pi_state->list));
+-		list_add(&pi_state->list, &new_owner->pi_state_list);
+-		pi_state->owner = new_owner;
+-		raw_spin_unlock(&new_owner->pi_lock);
+-	}
+-}
+-
+-static void get_pi_state(struct futex_pi_state *pi_state)
+-{
+-	WARN_ON_ONCE(!refcount_inc_not_zero(&pi_state->refcount));
+-}
+-
+-/*
+- * Drops a reference to the pi_state object and frees or caches it
+- * when the last reference is gone.
+- */
+-static void put_pi_state(struct futex_pi_state *pi_state)
+-{
+-	if (!pi_state)
+-		return;
+-
+-	if (!refcount_dec_and_test(&pi_state->refcount))
+-		return;
+-
+-	/*
+-	 * If pi_state->owner is NULL, the owner is most probably dying
+-	 * and has cleaned up the pi_state already
+-	 */
+-	if (pi_state->owner) {
+-		unsigned long flags;
+-
+-		raw_spin_lock_irqsave(&pi_state->pi_mutex.wait_lock, flags);
+-		pi_state_update_owner(pi_state, NULL);
+-		rt_mutex_proxy_unlock(&pi_state->pi_mutex);
+-		raw_spin_unlock_irqrestore(&pi_state->pi_mutex.wait_lock, flags);
+-	}
+-
+-	if (current->pi_state_cache) {
+-		kfree(pi_state);
+-	} else {
+-		/*
+-		 * pi_state->list is already empty.
+-		 * clear pi_state->owner.
+-		 * refcount is at 0 - put it back to 1.
+-		 */
+-		pi_state->owner = NULL;
+-		refcount_set(&pi_state->refcount, 1);
+-		current->pi_state_cache = pi_state;
+-	}
+-}
+-
+-#ifdef CONFIG_FUTEX_PI
+-
+-/*
+- * This task is holding PI mutexes at exit time => bad.
+- * Kernel cleans up PI-state, but userspace is likely hosed.
+- * (Robust-futex cleanup is separate and might save the day for userspace.)
+- */
+-static void exit_pi_state_list(struct task_struct *curr)
+-{
+-	struct list_head *next, *head = &curr->pi_state_list;
+-	struct futex_pi_state *pi_state;
+-	struct futex_hash_bucket *hb;
+-	union futex_key key = FUTEX_KEY_INIT;
+-
+-	if (!futex_cmpxchg_enabled)
+-		return;
+-	/*
+-	 * We are a ZOMBIE and nobody can enqueue itself on
+-	 * pi_state_list anymore, but we have to be careful
+-	 * versus waiters unqueueing themselves:
+-	 */
+-	raw_spin_lock_irq(&curr->pi_lock);
+-	while (!list_empty(head)) {
+-		next = head->next;
+-		pi_state = list_entry(next, struct futex_pi_state, list);
+-		key = pi_state->key;
+-		hb = hash_futex(&key);
+-
+-		/*
+-		 * We can race against put_pi_state() removing itself from the
+-		 * list (a waiter going away). put_pi_state() will first
+-		 * decrement the reference count and then modify the list, so
+-		 * its possible to see the list entry but fail this reference
+-		 * acquire.
+-		 *
+-		 * In that case; drop the locks to let put_pi_state() make
+-		 * progress and retry the loop.
+-		 */
+-		if (!refcount_inc_not_zero(&pi_state->refcount)) {
+-			raw_spin_unlock_irq(&curr->pi_lock);
+-			cpu_relax();
+-			raw_spin_lock_irq(&curr->pi_lock);
+-			continue;
+-		}
+-		raw_spin_unlock_irq(&curr->pi_lock);
+-
+-		spin_lock(&hb->lock);
+-		raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+-		raw_spin_lock(&curr->pi_lock);
+-		/*
+-		 * We dropped the pi-lock, so re-check whether this
+-		 * task still owns the PI-state:
+-		 */
+-		if (head->next != next) {
+-			/* retain curr->pi_lock for the loop invariant */
+-			raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
+-			spin_unlock(&hb->lock);
+-			put_pi_state(pi_state);
+-			continue;
+-		}
+-
+-		WARN_ON(pi_state->owner != curr);
+-		WARN_ON(list_empty(&pi_state->list));
+-		list_del_init(&pi_state->list);
+-		pi_state->owner = NULL;
+-
+-		raw_spin_unlock(&curr->pi_lock);
+-		raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+-		spin_unlock(&hb->lock);
+-
+-		rt_mutex_futex_unlock(&pi_state->pi_mutex);
+-		put_pi_state(pi_state);
+-
+-		raw_spin_lock_irq(&curr->pi_lock);
+-	}
+-	raw_spin_unlock_irq(&curr->pi_lock);
+-}
+-#else
+-static inline void exit_pi_state_list(struct task_struct *curr) { }
+-#endif
+-
+-/*
+- * We need to check the following states:
+- *
+- *      Waiter | pi_state | pi->owner | uTID      | uODIED | ?
+- *
+- * [1]  NULL   | ---      | ---       | 0         | 0/1    | Valid
+- * [2]  NULL   | ---      | ---       | >0        | 0/1    | Valid
+- *
+- * [3]  Found  | NULL     | --        | Any       | 0/1    | Invalid
+- *
+- * [4]  Found  | Found    | NULL      | 0         | 1      | Valid
+- * [5]  Found  | Found    | NULL      | >0        | 1      | Invalid
+- *
+- * [6]  Found  | Found    | task      | 0         | 1      | Valid
+- *
+- * [7]  Found  | Found    | NULL      | Any       | 0      | Invalid
+- *
+- * [8]  Found  | Found    | task      | ==taskTID | 0/1    | Valid
+- * [9]  Found  | Found    | task      | 0         | 0      | Invalid
+- * [10] Found  | Found    | task      | !=taskTID | 0/1    | Invalid
+- *
+- * [1]	Indicates that the kernel can acquire the futex atomically. We
+- *	came here due to a stale FUTEX_WAITERS/FUTEX_OWNER_DIED bit.
+- *
+- * [2]	Valid, if TID does not belong to a kernel thread. If no matching
+- *      thread is found then it indicates that the owner TID has died.
+- *
+- * [3]	Invalid. The waiter is queued on a non PI futex
+- *
+- * [4]	Valid state after exit_robust_list(), which sets the user space
+- *	value to FUTEX_WAITERS | FUTEX_OWNER_DIED.
+- *
+- * [5]	The user space value got manipulated between exit_robust_list()
+- *	and exit_pi_state_list()
+- *
+- * [6]	Valid state after exit_pi_state_list() which sets the new owner in
+- *	the pi_state but cannot access the user space value.
+- *
+- * [7]	pi_state->owner can only be NULL when the OWNER_DIED bit is set.
+- *
+- * [8]	Owner and user space value match
+- *
+- * [9]	There is no transient state which sets the user space TID to 0
+- *	except exit_robust_list(), but this is indicated by the
+- *	FUTEX_OWNER_DIED bit. See [4]
+- *
+- * [10] There is no transient state which leaves owner and user space
+- *	TID out of sync. Except one error case where the kernel is denied
+- *	write access to the user address, see fixup_pi_state_owner().
+- *
+- *
+- * Serialization and lifetime rules:
+- *
+- * hb->lock:
+- *
+- *	hb -> futex_q, relation
+- *	futex_q -> pi_state, relation
+- *
+- *	(cannot be raw because hb can contain arbitrary amount
+- *	 of futex_q's)
+- *
+- * pi_mutex->wait_lock:
+- *
+- *	{uval, pi_state}
+- *
+- *	(and pi_mutex 'obviously')
+- *
+- * p->pi_lock:
+- *
+- *	p->pi_state_list -> pi_state->list, relation
+- *
+- * pi_state->refcount:
+- *
+- *	pi_state lifetime
+- *
+- *
+- * Lock order:
+- *
+- *   hb->lock
+- *     pi_mutex->wait_lock
+- *       p->pi_lock
+- *
+- */
+-
+-/*
+- * Validate that the existing waiter has a pi_state and sanity check
+- * the pi_state against the user space value. If correct, attach to
+- * it.
+- */
+-static int attach_to_pi_state(u32 __user *uaddr, u32 uval,
+-			      struct futex_pi_state *pi_state,
+-			      struct futex_pi_state **ps)
+-{
+-	pid_t pid = uval & FUTEX_TID_MASK;
+-	u32 uval2;
+-	int ret;
+-
+-	/*
+-	 * Userspace might have messed up non-PI and PI futexes [3]
+-	 */
+-	if (unlikely(!pi_state))
+-		return -EINVAL;
+-
+-	/*
+-	 * We get here with hb->lock held, and having found a
+-	 * futex_top_waiter(). This means that futex_lock_pi() of said futex_q
+-	 * has dropped the hb->lock in between queue_me() and unqueue_me_pi(),
+-	 * which in turn means that futex_lock_pi() still has a reference on
+-	 * our pi_state.
+-	 *
+-	 * The waiter holding a reference on @pi_state also protects against
+-	 * the unlocked put_pi_state() in futex_unlock_pi(), futex_lock_pi()
+-	 * and futex_wait_requeue_pi() as it cannot go to 0 and consequently
+-	 * free pi_state before we can take a reference ourselves.
+-	 */
+-	WARN_ON(!refcount_read(&pi_state->refcount));
+-
+-	/*
+-	 * Now that we have a pi_state, we can acquire wait_lock
+-	 * and do the state validation.
+-	 */
+-	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+-
+-	/*
+-	 * Since {uval, pi_state} is serialized by wait_lock, and our current
+-	 * uval was read without holding it, it can have changed. Verify it
+-	 * still is what we expect it to be, otherwise retry the entire
+-	 * operation.
+-	 */
+-	if (get_futex_value_locked(&uval2, uaddr))
+-		goto out_efault;
+-
+-	if (uval != uval2)
+-		goto out_eagain;
+-
+-	/*
+-	 * Handle the owner died case:
+-	 */
+-	if (uval & FUTEX_OWNER_DIED) {
+-		/*
+-		 * exit_pi_state_list sets owner to NULL and wakes the
+-		 * topmost waiter. The task which acquires the
+-		 * pi_state->rt_mutex will fixup owner.
+-		 */
+-		if (!pi_state->owner) {
+-			/*
+-			 * No pi state owner, but the user space TID
+-			 * is not 0. Inconsistent state. [5]
+-			 */
+-			if (pid)
+-				goto out_einval;
+-			/*
+-			 * Take a ref on the state and return success. [4]
+-			 */
+-			goto out_attach;
+-		}
+-
+-		/*
+-		 * If TID is 0, then either the dying owner has not
+-		 * yet executed exit_pi_state_list() or some waiter
+-		 * acquired the rtmutex in the pi state, but did not
+-		 * yet fixup the TID in user space.
+-		 *
+-		 * Take a ref on the state and return success. [6]
+-		 */
+-		if (!pid)
+-			goto out_attach;
+-	} else {
+-		/*
+-		 * If the owner died bit is not set, then the pi_state
+-		 * must have an owner. [7]
+-		 */
+-		if (!pi_state->owner)
+-			goto out_einval;
+-	}
+-
+-	/*
+-	 * Bail out if user space manipulated the futex value. If pi
+-	 * state exists then the owner TID must be the same as the
+-	 * user space TID. [9/10]
+-	 */
+-	if (pid != task_pid_vnr(pi_state->owner))
+-		goto out_einval;
+-
+-out_attach:
+-	get_pi_state(pi_state);
+-	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+-	*ps = pi_state;
+-	return 0;
+-
+-out_einval:
+-	ret = -EINVAL;
+-	goto out_error;
+-
+-out_eagain:
+-	ret = -EAGAIN;
+-	goto out_error;
+-
+-out_efault:
+-	ret = -EFAULT;
+-	goto out_error;
+-
+-out_error:
+-	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+-	return ret;
+-}
+-
+-/**
+- * wait_for_owner_exiting - Block until the owner has exited
+- * @ret: owner's current futex lock status
+- * @exiting:	Pointer to the exiting task
+- *
+- * Caller must hold a refcount on @exiting.
+- */
+-static void wait_for_owner_exiting(int ret, struct task_struct *exiting)
+-{
+-	if (ret != -EBUSY) {
+-		WARN_ON_ONCE(exiting);
+-		return;
+-	}
+-
+-	if (WARN_ON_ONCE(ret == -EBUSY && !exiting))
+-		return;
+-
+-	mutex_lock(&exiting->futex_exit_mutex);
+-	/*
+-	 * No point in doing state checking here. If the waiter got here
+-	 * while the task was in exec()->exec_futex_release() then it can
+-	 * have any FUTEX_STATE_* value when the waiter has acquired the
+-	 * mutex. OK, if running, EXITING or DEAD if it reached exit()
+-	 * already. Highly unlikely and not a problem. Just one more round
+-	 * through the futex maze.
+-	 */
+-	mutex_unlock(&exiting->futex_exit_mutex);
+-
+-	put_task_struct(exiting);
+-}
+-
+-static int handle_exit_race(u32 __user *uaddr, u32 uval,
+-			    struct task_struct *tsk)
+-{
+-	u32 uval2;
+-
+-	/*
+-	 * If the futex exit state is not yet FUTEX_STATE_DEAD, tell the
+-	 * caller that the alleged owner is busy.
+-	 */
+-	if (tsk && tsk->futex_state != FUTEX_STATE_DEAD)
+-		return -EBUSY;
+-
+-	/*
+-	 * Reread the user space value to handle the following situation:
+-	 *
+-	 * CPU0				CPU1
+-	 *
+-	 * sys_exit()			sys_futex()
+-	 *  do_exit()			 futex_lock_pi()
+-	 *                                futex_lock_pi_atomic()
+-	 *   exit_signals(tsk)		    No waiters:
+-	 *    tsk->flags |= PF_EXITING;	    *uaddr == 0x00000PID
+-	 *  mm_release(tsk)		    Set waiter bit
+-	 *   exit_robust_list(tsk) {	    *uaddr = 0x80000PID;
+-	 *      Set owner died		    attach_to_pi_owner() {
+-	 *    *uaddr = 0xC0000000;	     tsk = get_task(PID);
+-	 *   }				     if (!tsk->flags & PF_EXITING) {
+-	 *  ...				       attach();
+-	 *  tsk->futex_state =               } else {
+-	 *	FUTEX_STATE_DEAD;              if (tsk->futex_state !=
+-	 *					  FUTEX_STATE_DEAD)
+-	 *				         return -EAGAIN;
+-	 *				       return -ESRCH; <--- FAIL
+-	 *				     }
+-	 *
+-	 * Returning ESRCH unconditionally is wrong here because the
+-	 * user space value has been changed by the exiting task.
+-	 *
+-	 * The same logic applies to the case where the exiting task is
+-	 * already gone.
+-	 */
+-	if (get_futex_value_locked(&uval2, uaddr))
+-		return -EFAULT;
+-
+-	/* If the user space value has changed, try again. */
+-	if (uval2 != uval)
+-		return -EAGAIN;
+-
+-	/*
+-	 * The exiting task did not have a robust list, the robust list was
+-	 * corrupted or the user space value in *uaddr is simply bogus.
+-	 * Give up and tell user space.
+-	 */
+-	return -ESRCH;
+-}
+-
+-/*
+- * Lookup the task for the TID provided from user space and attach to
+- * it after doing proper sanity checks.
+- */
+-static int attach_to_pi_owner(u32 __user *uaddr, u32 uval, union futex_key *key,
+-			      struct futex_pi_state **ps,
+-			      struct task_struct **exiting)
+-{
+-	pid_t pid = uval & FUTEX_TID_MASK;
+-	struct futex_pi_state *pi_state;
+-	struct task_struct *p;
+-
+-	/*
+-	 * We are the first waiter - try to look up the real owner and attach
+-	 * the new pi_state to it, but bail out when TID = 0 [1]
+-	 *
+-	 * The !pid check is paranoid. None of the call sites should end up
+-	 * with pid == 0, but better safe than sorry. Let the caller retry
+-	 */
+-	if (!pid)
+-		return -EAGAIN;
+-	p = find_get_task_by_vpid(pid);
+-	if (!p)
+-		return handle_exit_race(uaddr, uval, NULL);
+-
+-	if (unlikely(p->flags & PF_KTHREAD)) {
+-		put_task_struct(p);
+-		return -EPERM;
+-	}
+-
+-	/*
+-	 * We need to look at the task state to figure out, whether the
+-	 * task is exiting. To protect against the change of the task state
+-	 * in futex_exit_release(), we do this protected by p->pi_lock:
+-	 */
+-	raw_spin_lock_irq(&p->pi_lock);
+-	if (unlikely(p->futex_state != FUTEX_STATE_OK)) {
+-		/*
+-		 * The task is on the way out. When the futex state is
+-		 * FUTEX_STATE_DEAD, we know that the task has finished
+-		 * the cleanup:
+-		 */
+-		int ret = handle_exit_race(uaddr, uval, p);
+-
+-		raw_spin_unlock_irq(&p->pi_lock);
+-		/*
+-		 * If the owner task is between FUTEX_STATE_EXITING and
+-		 * FUTEX_STATE_DEAD then store the task pointer and keep
+-		 * the reference on the task struct. The calling code will
+-		 * drop all locks, wait for the task to reach
+-		 * FUTEX_STATE_DEAD and then drop the refcount. This is
+-		 * required to prevent a live lock when the current task
+-		 * preempted the exiting task between the two states.
+-		 */
+-		if (ret == -EBUSY)
+-			*exiting = p;
+-		else
+-			put_task_struct(p);
+-		return ret;
+-	}
+-
+-	/*
+-	 * No existing pi state. First waiter. [2]
+-	 *
+-	 * This creates pi_state, we have hb->lock held, this means nothing can
+-	 * observe this state, wait_lock is irrelevant.
+-	 */
+-	pi_state = alloc_pi_state();
+-
+-	/*
+-	 * Initialize the pi_mutex in locked state and make @p
+-	 * the owner of it:
+-	 */
+-	rt_mutex_init_proxy_locked(&pi_state->pi_mutex, p);
+-
+-	/* Store the key for possible exit cleanups: */
+-	pi_state->key = *key;
+-
+-	WARN_ON(!list_empty(&pi_state->list));
+-	list_add(&pi_state->list, &p->pi_state_list);
+-	/*
+-	 * Assignment without holding pi_state->pi_mutex.wait_lock is safe
+-	 * because there is no concurrency as the object is not published yet.
+-	 */
+-	pi_state->owner = p;
+-	raw_spin_unlock_irq(&p->pi_lock);
+-
+-	put_task_struct(p);
+-
+-	*ps = pi_state;
+-
+-	return 0;
+-}
+-
+-static int lookup_pi_state(u32 __user *uaddr, u32 uval,
+-			   struct futex_hash_bucket *hb,
+-			   union futex_key *key, struct futex_pi_state **ps,
+-			   struct task_struct **exiting)
+-{
+-	struct futex_q *top_waiter = futex_top_waiter(hb, key);
+-
+-	/*
+-	 * If there is a waiter on that futex, validate it and
+-	 * attach to the pi_state when the validation succeeds.
+-	 */
+-	if (top_waiter)
+-		return attach_to_pi_state(uaddr, uval, top_waiter->pi_state, ps);
+-
+-	/*
+-	 * We are the first waiter - try to look up the owner based on
+-	 * @uval and attach to it.
+-	 */
+-	return attach_to_pi_owner(uaddr, uval, key, ps, exiting);
+-}
+-
+-static int lock_pi_update_atomic(u32 __user *uaddr, u32 uval, u32 newval)
+-{
+-	int err;
+-	u32 curval;
+-
+-	if (unlikely(should_fail_futex(true)))
+-		return -EFAULT;
+-
+-	err = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
+-	if (unlikely(err))
+-		return err;
+-
+-	/* If user space value changed, let the caller retry */
+-	return curval != uval ? -EAGAIN : 0;
+-}
+-
+-/**
+- * futex_lock_pi_atomic() - Atomic work required to acquire a pi aware futex
+- * @uaddr:		the pi futex user address
+- * @hb:			the pi futex hash bucket
+- * @key:		the futex key associated with uaddr and hb
+- * @ps:			the pi_state pointer where we store the result of the
+- *			lookup
+- * @task:		the task to perform the atomic lock work for.  This will
+- *			be "current" except in the case of requeue pi.
+- * @exiting:		Pointer to store the task pointer of the owner task
+- *			which is in the middle of exiting
+- * @set_waiters:	force setting the FUTEX_WAITERS bit (1) or not (0)
+- *
+- * Return:
+- *  -  0 - ready to wait;
+- *  -  1 - acquired the lock;
+- *  - <0 - error
+- *
+- * The hb->lock and futex_key refs shall be held by the caller.
+- *
+- * @exiting is only set when the return value is -EBUSY. If so, this holds
+- * a refcount on the exiting task on return and the caller needs to drop it
+- * after waiting for the exit to complete.
+- */
+-static int futex_lock_pi_atomic(u32 __user *uaddr, struct futex_hash_bucket *hb,
+-				union futex_key *key,
+-				struct futex_pi_state **ps,
+-				struct task_struct *task,
+-				struct task_struct **exiting,
+-				int set_waiters)
+-{
+-	u32 uval, newval, vpid = task_pid_vnr(task);
+-	struct futex_q *top_waiter;
+-	int ret;
+-
+-	/*
+-	 * Read the user space value first so we can validate a few
+-	 * things before proceeding further.
+-	 */
+-	if (get_futex_value_locked(&uval, uaddr))
+-		return -EFAULT;
+-
+-	if (unlikely(should_fail_futex(true)))
+-		return -EFAULT;
+-
+-	/*
+-	 * Detect deadlocks.
+-	 */
+-	if ((unlikely((uval & FUTEX_TID_MASK) == vpid)))
+-		return -EDEADLK;
+-
+-	if ((unlikely(should_fail_futex(true))))
+-		return -EDEADLK;
+-
+-	/*
+-	 * Lookup existing state first. If it exists, try to attach to
+-	 * its pi_state.
+-	 */
+-	top_waiter = futex_top_waiter(hb, key);
+-	if (top_waiter)
+-		return attach_to_pi_state(uaddr, uval, top_waiter->pi_state, ps);
+-
+-	/*
+-	 * No waiter and user TID is 0. We are here because the
+-	 * waiters or the owner died bit is set or called from
+-	 * requeue_cmp_pi or for whatever reason something took the
+-	 * syscall.
+-	 */
+-	if (!(uval & FUTEX_TID_MASK)) {
+-		/*
+-		 * We take over the futex. No other waiters and the user space
+-		 * TID is 0. We preserve the owner died bit.
+-		 */
+-		newval = uval & FUTEX_OWNER_DIED;
+-		newval |= vpid;
+-
+-		/* The futex requeue_pi code can enforce the waiters bit */
+-		if (set_waiters)
+-			newval |= FUTEX_WAITERS;
+-
+-		ret = lock_pi_update_atomic(uaddr, uval, newval);
+-		/* If the take over worked, return 1 */
+-		return ret < 0 ? ret : 1;
+-	}
+-
+-	/*
+-	 * First waiter. Set the waiters bit before attaching ourself to
+-	 * the owner. If owner tries to unlock, it will be forced into
+-	 * the kernel and blocked on hb->lock.
+-	 */
+-	newval = uval | FUTEX_WAITERS;
+-	ret = lock_pi_update_atomic(uaddr, uval, newval);
+-	if (ret)
+-		return ret;
+-	/*
+-	 * If the update of the user space value succeeded, we try to
+-	 * attach to the owner. If that fails, no harm done, we only
+-	 * set the FUTEX_WAITERS bit in the user space variable.
+-	 */
+-	return attach_to_pi_owner(uaddr, newval, key, ps, exiting);
+-}
+-
+-/**
+- * __unqueue_futex() - Remove the futex_q from its futex_hash_bucket
+- * @q:	The futex_q to unqueue
+- *
+- * The q->lock_ptr must not be NULL and must be held by the caller.
+- */
+-static void __unqueue_futex(struct futex_q *q)
+-{
+-	struct futex_hash_bucket *hb;
+-
+-	if (WARN_ON_SMP(!q->lock_ptr) || WARN_ON(plist_node_empty(&q->list)))
+-		return;
+-	lockdep_assert_held(q->lock_ptr);
+-
+-	hb = container_of(q->lock_ptr, struct futex_hash_bucket, lock);
+-	plist_del(&q->list, &hb->chain);
+-	hb_waiters_dec(hb);
+-}
+-
+-/*
+- * The hash bucket lock must be held when this is called.
+- * Afterwards, the futex_q must not be accessed. Callers
+- * must ensure to later call wake_up_q() for the actual
+- * wakeups to occur.
+- */
+-static void mark_wake_futex(struct wake_q_head *wake_q, struct futex_q *q)
+-{
+-	struct task_struct *p = q->task;
+-
+-	if (WARN(q->pi_state || q->rt_waiter, "refusing to wake PI futex\n"))
+-		return;
+-
+-	get_task_struct(p);
+-	__unqueue_futex(q);
+-	/*
+-	 * The waiting task can free the futex_q as soon as q->lock_ptr = NULL
+-	 * is written, without taking any locks. This is possible in the event
+-	 * of a spurious wakeup, for example. A memory barrier is required here
+-	 * to prevent the following store to lock_ptr from getting ahead of the
+-	 * plist_del in __unqueue_futex().
+-	 */
+-	smp_store_release(&q->lock_ptr, NULL);
+-
+-	/*
+-	 * Queue the task for later wakeup for after we've released
+-	 * the hb->lock.
+-	 */
+-	wake_q_add_safe(wake_q, p);
+-}
+-
+-/*
+- * Caller must hold a reference on @pi_state.
+- */
+-static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_state)
+-{
+-	u32 curval, newval;
+-	struct task_struct *new_owner;
+-	bool postunlock = false;
+-	DEFINE_WAKE_Q(wake_q);
+-	int ret = 0;
+-
+-	new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
+-	if (WARN_ON_ONCE(!new_owner)) {
+-		/*
+-		 * As per the comment in futex_unlock_pi() this should not happen.
+-		 *
+-		 * When this happens, give up our locks and try again, giving
+-		 * the futex_lock_pi() instance time to complete, either by
+-		 * waiting on the rtmutex or removing itself from the futex
+-		 * queue.
+-		 */
+-		ret = -EAGAIN;
+-		goto out_unlock;
+-	}
+-
+-	/*
+-	 * We pass it to the next owner. The WAITERS bit is always kept
+-	 * enabled while there is PI state around. We cleanup the owner
+-	 * died bit, because we are the owner.
+-	 */
+-	newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
+-
+-	if (unlikely(should_fail_futex(true))) {
+-		ret = -EFAULT;
+-		goto out_unlock;
+-	}
+-
+-	ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
+-	if (!ret && (curval != uval)) {
+-		/*
+-		 * If a unconditional UNLOCK_PI operation (user space did not
+-		 * try the TID->0 transition) raced with a waiter setting the
+-		 * FUTEX_WAITERS flag between get_user() and locking the hash
+-		 * bucket lock, retry the operation.
+-		 */
+-		if ((FUTEX_TID_MASK & curval) == uval)
+-			ret = -EAGAIN;
+-		else
+-			ret = -EINVAL;
+-	}
+-
+-	if (!ret) {
+-		/*
+-		 * This is a point of no return; once we modified the uval
+-		 * there is no going back and subsequent operations must
+-		 * not fail.
+-		 */
+-		pi_state_update_owner(pi_state, new_owner);
+-		postunlock = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q);
+-	}
+-
+-out_unlock:
+-	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+-
+-	if (postunlock)
+-		rt_mutex_postunlock(&wake_q);
+-
+-	return ret;
+-}
+-
+-/*
+- * Express the locking dependencies for lockdep:
+- */
+-static inline void
+-double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
+-{
+-	if (hb1 <= hb2) {
+-		spin_lock(&hb1->lock);
+-		if (hb1 < hb2)
+-			spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
+-	} else { /* hb1 > hb2 */
+-		spin_lock(&hb2->lock);
+-		spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING);
+-	}
+-}
+-
+-static inline void
+-double_unlock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
+-{
+-	spin_unlock(&hb1->lock);
+-	if (hb1 != hb2)
+-		spin_unlock(&hb2->lock);
+-}
+-
+-/*
+- * Wake up waiters matching bitset queued on this futex (uaddr).
+- */
+-static int
+-futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
+-{
+-	struct futex_hash_bucket *hb;
+-	struct futex_q *this, *next;
+-	union futex_key key = FUTEX_KEY_INIT;
+-	int ret;
+-	DEFINE_WAKE_Q(wake_q);
+-
+-	if (!bitset)
+-		return -EINVAL;
+-
+-	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, FUTEX_READ);
+-	if (unlikely(ret != 0))
+-		return ret;
+-
+-	hb = hash_futex(&key);
+-
+-	/* Make sure we really have tasks to wakeup */
+-	if (!hb_waiters_pending(hb))
+-		return ret;
+-
+-	spin_lock(&hb->lock);
+-
+-	plist_for_each_entry_safe(this, next, &hb->chain, list) {
+-		if (match_futex (&this->key, &key)) {
+-			if (this->pi_state || this->rt_waiter) {
+-				ret = -EINVAL;
+-				break;
+-			}
+-
+-			/* Check if one of the bits is set in both bitsets */
+-			if (!(this->bitset & bitset))
+-				continue;
+-
+-			mark_wake_futex(&wake_q, this);
+-			if (++ret >= nr_wake)
+-				break;
+-		}
+-	}
+-
+-	spin_unlock(&hb->lock);
+-	wake_up_q(&wake_q);
+-	return ret;
+-}
+-
+-static int futex_atomic_op_inuser(unsigned int encoded_op, u32 __user *uaddr)
+-{
+-	unsigned int op =	  (encoded_op & 0x70000000) >> 28;
+-	unsigned int cmp =	  (encoded_op & 0x0f000000) >> 24;
+-	int oparg = sign_extend32((encoded_op & 0x00fff000) >> 12, 11);
+-	int cmparg = sign_extend32(encoded_op & 0x00000fff, 11);
+-	int oldval, ret;
+-
+-	if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) {
+-		if (oparg < 0 || oparg > 31) {
+-			char comm[sizeof(current->comm)];
+-			/*
+-			 * kill this print and return -EINVAL when userspace
+-			 * is sane again
+-			 */
+-			pr_info_ratelimited("futex_wake_op: %s tries to shift op by %d; fix this program\n",
+-					get_task_comm(comm, current), oparg);
+-			oparg &= 31;
+-		}
+-		oparg = 1 << oparg;
+-	}
+-
+-	pagefault_disable();
+-	ret = arch_futex_atomic_op_inuser(op, oparg, &oldval, uaddr);
+-	pagefault_enable();
+-	if (ret)
+-		return ret;
+-
+-	switch (cmp) {
+-	case FUTEX_OP_CMP_EQ:
+-		return oldval == cmparg;
+-	case FUTEX_OP_CMP_NE:
+-		return oldval != cmparg;
+-	case FUTEX_OP_CMP_LT:
+-		return oldval < cmparg;
+-	case FUTEX_OP_CMP_GE:
+-		return oldval >= cmparg;
+-	case FUTEX_OP_CMP_LE:
+-		return oldval <= cmparg;
+-	case FUTEX_OP_CMP_GT:
+-		return oldval > cmparg;
+-	default:
+-		return -ENOSYS;
+-	}
+-}
+-
+-/*
+- * Wake up all waiters hashed on the physical page that is mapped
+- * to this virtual address:
+- */
+-static int
+-futex_wake_op(u32 __user *uaddr1, unsigned int flags, u32 __user *uaddr2,
+-	      int nr_wake, int nr_wake2, int op)
+-{
+-	union futex_key key1 = FUTEX_KEY_INIT, key2 = FUTEX_KEY_INIT;
+-	struct futex_hash_bucket *hb1, *hb2;
+-	struct futex_q *this, *next;
+-	int ret, op_ret;
+-	DEFINE_WAKE_Q(wake_q);
+-
+-retry:
+-	ret = get_futex_key(uaddr1, flags & FLAGS_SHARED, &key1, FUTEX_READ);
+-	if (unlikely(ret != 0))
+-		return ret;
+-	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, FUTEX_WRITE);
+-	if (unlikely(ret != 0))
+-		return ret;
+-
+-	hb1 = hash_futex(&key1);
+-	hb2 = hash_futex(&key2);
+-
+-retry_private:
+-	double_lock_hb(hb1, hb2);
+-	op_ret = futex_atomic_op_inuser(op, uaddr2);
+-	if (unlikely(op_ret < 0)) {
+-		double_unlock_hb(hb1, hb2);
+-
+-		if (!IS_ENABLED(CONFIG_MMU) ||
+-		    unlikely(op_ret != -EFAULT && op_ret != -EAGAIN)) {
+-			/*
+-			 * we don't get EFAULT from MMU faults if we don't have
+-			 * an MMU, but we might get them from range checking
+-			 */
+-			ret = op_ret;
+-			return ret;
+-		}
+-
+-		if (op_ret == -EFAULT) {
+-			ret = fault_in_user_writeable(uaddr2);
+-			if (ret)
+-				return ret;
+-		}
+-
+-		if (!(flags & FLAGS_SHARED)) {
+-			cond_resched();
+-			goto retry_private;
+-		}
+-
+-		cond_resched();
+-		goto retry;
+-	}
+-
+-	plist_for_each_entry_safe(this, next, &hb1->chain, list) {
+-		if (match_futex (&this->key, &key1)) {
+-			if (this->pi_state || this->rt_waiter) {
+-				ret = -EINVAL;
+-				goto out_unlock;
+-			}
+-			mark_wake_futex(&wake_q, this);
+-			if (++ret >= nr_wake)
+-				break;
+-		}
+-	}
+-
+-	if (op_ret > 0) {
+-		op_ret = 0;
+-		plist_for_each_entry_safe(this, next, &hb2->chain, list) {
+-			if (match_futex (&this->key, &key2)) {
+-				if (this->pi_state || this->rt_waiter) {
+-					ret = -EINVAL;
+-					goto out_unlock;
+-				}
+-				mark_wake_futex(&wake_q, this);
+-				if (++op_ret >= nr_wake2)
+-					break;
+-			}
+-		}
+-		ret += op_ret;
+-	}
+-
+-out_unlock:
+-	double_unlock_hb(hb1, hb2);
+-	wake_up_q(&wake_q);
+-	return ret;
+-}
+-
+-/**
+- * requeue_futex() - Requeue a futex_q from one hb to another
+- * @q:		the futex_q to requeue
+- * @hb1:	the source hash_bucket
+- * @hb2:	the target hash_bucket
+- * @key2:	the new key for the requeued futex_q
+- */
+-static inline
+-void requeue_futex(struct futex_q *q, struct futex_hash_bucket *hb1,
+-		   struct futex_hash_bucket *hb2, union futex_key *key2)
+-{
+-
+-	/*
+-	 * If key1 and key2 hash to the same bucket, no need to
+-	 * requeue.
+-	 */
+-	if (likely(&hb1->chain != &hb2->chain)) {
+-		plist_del(&q->list, &hb1->chain);
+-		hb_waiters_dec(hb1);
+-		hb_waiters_inc(hb2);
+-		plist_add(&q->list, &hb2->chain);
+-		q->lock_ptr = &hb2->lock;
+-	}
+-	q->key = *key2;
+-}
+-
+-/**
+- * requeue_pi_wake_futex() - Wake a task that acquired the lock during requeue
+- * @q:		the futex_q
+- * @key:	the key of the requeue target futex
+- * @hb:		the hash_bucket of the requeue target futex
+- *
+- * During futex_requeue, with requeue_pi=1, it is possible to acquire the
+- * target futex if it is uncontended or via a lock steal.  Set the futex_q key
+- * to the requeue target futex so the waiter can detect the wakeup on the right
+- * futex, but remove it from the hb and NULL the rt_waiter so it can detect
+- * atomic lock acquisition.  Set the q->lock_ptr to the requeue target hb->lock
+- * to protect access to the pi_state to fixup the owner later.  Must be called
+- * with both q->lock_ptr and hb->lock held.
+- */
+-static inline
+-void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
+-			   struct futex_hash_bucket *hb)
+-{
+-	q->key = *key;
+-
+-	__unqueue_futex(q);
+-
+-	WARN_ON(!q->rt_waiter);
+-	q->rt_waiter = NULL;
+-
+-	q->lock_ptr = &hb->lock;
+-
+-	wake_up_state(q->task, TASK_NORMAL);
+-}
+-
+-/**
+- * futex_proxy_trylock_atomic() - Attempt an atomic lock for the top waiter
+- * @pifutex:		the user address of the to futex
+- * @hb1:		the from futex hash bucket, must be locked by the caller
+- * @hb2:		the to futex hash bucket, must be locked by the caller
+- * @key1:		the from futex key
+- * @key2:		the to futex key
+- * @ps:			address to store the pi_state pointer
+- * @exiting:		Pointer to store the task pointer of the owner task
+- *			which is in the middle of exiting
+- * @set_waiters:	force setting the FUTEX_WAITERS bit (1) or not (0)
+- *
+- * Try and get the lock on behalf of the top waiter if we can do it atomically.
+- * Wake the top waiter if we succeed.  If the caller specified set_waiters,
+- * then direct futex_lock_pi_atomic() to force setting the FUTEX_WAITERS bit.
+- * hb1 and hb2 must be held by the caller.
+- *
+- * @exiting is only set when the return value is -EBUSY. If so, this holds
+- * a refcount on the exiting task on return and the caller needs to drop it
+- * after waiting for the exit to complete.
+- *
+- * Return:
+- *  -  0 - failed to acquire the lock atomically;
+- *  - >0 - acquired the lock, return value is vpid of the top_waiter
+- *  - <0 - error
+- */
+-static int
+-futex_proxy_trylock_atomic(u32 __user *pifutex, struct futex_hash_bucket *hb1,
+-			   struct futex_hash_bucket *hb2, union futex_key *key1,
+-			   union futex_key *key2, struct futex_pi_state **ps,
+-			   struct task_struct **exiting, int set_waiters)
+-{
+-	struct futex_q *top_waiter = NULL;
+-	u32 curval;
+-	int ret, vpid;
+-
+-	if (get_futex_value_locked(&curval, pifutex))
+-		return -EFAULT;
+-
+-	if (unlikely(should_fail_futex(true)))
+-		return -EFAULT;
+-
+-	/*
+-	 * Find the top_waiter and determine if there are additional waiters.
+-	 * If the caller intends to requeue more than 1 waiter to pifutex,
+-	 * force futex_lock_pi_atomic() to set the FUTEX_WAITERS bit now,
+-	 * as we have means to handle the possible fault.  If not, don't set
+-	 * the bit unecessarily as it will force the subsequent unlock to enter
+-	 * the kernel.
+-	 */
+-	top_waiter = futex_top_waiter(hb1, key1);
+-
+-	/* There are no waiters, nothing for us to do. */
+-	if (!top_waiter)
+-		return 0;
+-
+-	/* Ensure we requeue to the expected futex. */
+-	if (!match_futex(top_waiter->requeue_pi_key, key2))
+-		return -EINVAL;
+-
+-	/*
+-	 * Try to take the lock for top_waiter.  Set the FUTEX_WAITERS bit in
+-	 * the contended case or if set_waiters is 1.  The pi_state is returned
+-	 * in ps in contended cases.
+-	 */
+-	vpid = task_pid_vnr(top_waiter->task);
+-	ret = futex_lock_pi_atomic(pifutex, hb2, key2, ps, top_waiter->task,
+-				   exiting, set_waiters);
+-	if (ret == 1) {
+-		requeue_pi_wake_futex(top_waiter, key2, hb2);
+-		return vpid;
+-	}
+-	return ret;
+-}
+-
+-/**
+- * futex_requeue() - Requeue waiters from uaddr1 to uaddr2
+- * @uaddr1:	source futex user address
+- * @flags:	futex flags (FLAGS_SHARED, etc.)
+- * @uaddr2:	target futex user address
+- * @nr_wake:	number of waiters to wake (must be 1 for requeue_pi)
+- * @nr_requeue:	number of waiters to requeue (0-INT_MAX)
+- * @cmpval:	@uaddr1 expected value (or %NULL)
+- * @requeue_pi:	if we are attempting to requeue from a non-pi futex to a
+- *		pi futex (pi to pi requeue is not supported)
+- *
+- * Requeue waiters on uaddr1 to uaddr2. In the requeue_pi case, try to acquire
+- * uaddr2 atomically on behalf of the top waiter.
+- *
+- * Return:
+- *  - >=0 - on success, the number of tasks requeued or woken;
+- *  -  <0 - on error
+- */
+-static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
+-			 u32 __user *uaddr2, int nr_wake, int nr_requeue,
+-			 u32 *cmpval, int requeue_pi)
+-{
+-	union futex_key key1 = FUTEX_KEY_INIT, key2 = FUTEX_KEY_INIT;
+-	int task_count = 0, ret;
+-	struct futex_pi_state *pi_state = NULL;
+-	struct futex_hash_bucket *hb1, *hb2;
+-	struct futex_q *this, *next;
+-	DEFINE_WAKE_Q(wake_q);
+-
+-	if (nr_wake < 0 || nr_requeue < 0)
+-		return -EINVAL;
+-
+-	/*
+-	 * When PI not supported: return -ENOSYS if requeue_pi is true,
+-	 * consequently the compiler knows requeue_pi is always false past
+-	 * this point which will optimize away all the conditional code
+-	 * further down.
+-	 */
+-	if (!IS_ENABLED(CONFIG_FUTEX_PI) && requeue_pi)
+-		return -ENOSYS;
+-
+-	if (requeue_pi) {
+-		/*
+-		 * Requeue PI only works on two distinct uaddrs. This
+-		 * check is only valid for private futexes. See below.
+-		 */
+-		if (uaddr1 == uaddr2)
+-			return -EINVAL;
+-
+-		/*
+-		 * requeue_pi requires a pi_state, try to allocate it now
+-		 * without any locks in case it fails.
+-		 */
+-		if (refill_pi_state_cache())
+-			return -ENOMEM;
+-		/*
+-		 * requeue_pi must wake as many tasks as it can, up to nr_wake
+-		 * + nr_requeue, since it acquires the rt_mutex prior to
+-		 * returning to userspace, so as to not leave the rt_mutex with
+-		 * waiters and no owner.  However, second and third wake-ups
+-		 * cannot be predicted as they involve race conditions with the
+-		 * first wake and a fault while looking up the pi_state.  Both
+-		 * pthread_cond_signal() and pthread_cond_broadcast() should
+-		 * use nr_wake=1.
+-		 */
+-		if (nr_wake != 1)
+-			return -EINVAL;
+-	}
+-
+-retry:
+-	ret = get_futex_key(uaddr1, flags & FLAGS_SHARED, &key1, FUTEX_READ);
+-	if (unlikely(ret != 0))
+-		return ret;
+-	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2,
+-			    requeue_pi ? FUTEX_WRITE : FUTEX_READ);
+-	if (unlikely(ret != 0))
+-		return ret;
+-
+-	/*
+-	 * The check above which compares uaddrs is not sufficient for
+-	 * shared futexes. We need to compare the keys:
+-	 */
+-	if (requeue_pi && match_futex(&key1, &key2))
+-		return -EINVAL;
+-
+-	hb1 = hash_futex(&key1);
+-	hb2 = hash_futex(&key2);
+-
+-retry_private:
+-	hb_waiters_inc(hb2);
+-	double_lock_hb(hb1, hb2);
+-
+-	if (likely(cmpval != NULL)) {
+-		u32 curval;
+-
+-		ret = get_futex_value_locked(&curval, uaddr1);
+-
+-		if (unlikely(ret)) {
+-			double_unlock_hb(hb1, hb2);
+-			hb_waiters_dec(hb2);
+-
+-			ret = get_user(curval, uaddr1);
+-			if (ret)
+-				return ret;
+-
+-			if (!(flags & FLAGS_SHARED))
+-				goto retry_private;
+-
+-			goto retry;
+-		}
+-		if (curval != *cmpval) {
+-			ret = -EAGAIN;
+-			goto out_unlock;
+-		}
+-	}
+-
+-	if (requeue_pi && (task_count - nr_wake < nr_requeue)) {
+-		struct task_struct *exiting = NULL;
+-
+-		/*
+-		 * Attempt to acquire uaddr2 and wake the top waiter. If we
+-		 * intend to requeue waiters, force setting the FUTEX_WAITERS
+-		 * bit.  We force this here where we are able to easily handle
+-		 * faults rather in the requeue loop below.
+-		 */
+-		ret = futex_proxy_trylock_atomic(uaddr2, hb1, hb2, &key1,
+-						 &key2, &pi_state,
+-						 &exiting, nr_requeue);
+-
+-		/*
+-		 * At this point the top_waiter has either taken uaddr2 or is
+-		 * waiting on it.  If the former, then the pi_state will not
+-		 * exist yet, look it up one more time to ensure we have a
+-		 * reference to it. If the lock was taken, ret contains the
+-		 * vpid of the top waiter task.
+-		 * If the lock was not taken, we have pi_state and an initial
+-		 * refcount on it. In case of an error we have nothing.
+-		 */
+-		if (ret > 0) {
+-			WARN_ON(pi_state);
+-			task_count++;
+-			/*
+-			 * If we acquired the lock, then the user space value
+-			 * of uaddr2 should be vpid. It cannot be changed by
+-			 * the top waiter as it is blocked on hb2 lock if it
+-			 * tries to do so. If something fiddled with it behind
+-			 * our back the pi state lookup might unearth it. So
+-			 * we rather use the known value than rereading and
+-			 * handing potential crap to lookup_pi_state.
+-			 *
+-			 * If that call succeeds then we have pi_state and an
+-			 * initial refcount on it.
+-			 */
+-			ret = lookup_pi_state(uaddr2, ret, hb2, &key2,
+-					      &pi_state, &exiting);
+-		}
+-
+-		switch (ret) {
+-		case 0:
+-			/* We hold a reference on the pi state. */
+-			break;
+-
+-			/* If the above failed, then pi_state is NULL */
+-		case -EFAULT:
+-			double_unlock_hb(hb1, hb2);
+-			hb_waiters_dec(hb2);
+-			ret = fault_in_user_writeable(uaddr2);
+-			if (!ret)
+-				goto retry;
+-			return ret;
+-		case -EBUSY:
+-		case -EAGAIN:
+-			/*
+-			 * Two reasons for this:
+-			 * - EBUSY: Owner is exiting and we just wait for the
+-			 *   exit to complete.
+-			 * - EAGAIN: The user space value changed.
+-			 */
+-			double_unlock_hb(hb1, hb2);
+-			hb_waiters_dec(hb2);
+-			/*
+-			 * Handle the case where the owner is in the middle of
+-			 * exiting. Wait for the exit to complete otherwise
+-			 * this task might loop forever, aka. live lock.
+-			 */
+-			wait_for_owner_exiting(ret, exiting);
+-			cond_resched();
+-			goto retry;
+-		default:
+-			goto out_unlock;
+-		}
+-	}
+-
+-	plist_for_each_entry_safe(this, next, &hb1->chain, list) {
+-		if (task_count - nr_wake >= nr_requeue)
+-			break;
+-
+-		if (!match_futex(&this->key, &key1))
+-			continue;
+-
+-		/*
+-		 * FUTEX_WAIT_REQEUE_PI and FUTEX_CMP_REQUEUE_PI should always
+-		 * be paired with each other and no other futex ops.
+-		 *
+-		 * We should never be requeueing a futex_q with a pi_state,
+-		 * which is awaiting a futex_unlock_pi().
+-		 */
+-		if ((requeue_pi && !this->rt_waiter) ||
+-		    (!requeue_pi && this->rt_waiter) ||
+-		    this->pi_state) {
+-			ret = -EINVAL;
+-			break;
+-		}
+-
+-		/*
+-		 * Wake nr_wake waiters.  For requeue_pi, if we acquired the
+-		 * lock, we already woke the top_waiter.  If not, it will be
+-		 * woken by futex_unlock_pi().
+-		 */
+-		if (++task_count <= nr_wake && !requeue_pi) {
+-			mark_wake_futex(&wake_q, this);
+-			continue;
+-		}
+-
+-		/* Ensure we requeue to the expected futex for requeue_pi. */
+-		if (requeue_pi && !match_futex(this->requeue_pi_key, &key2)) {
+-			ret = -EINVAL;
+-			break;
+-		}
+-
+-		/*
+-		 * Requeue nr_requeue waiters and possibly one more in the case
+-		 * of requeue_pi if we couldn't acquire the lock atomically.
+-		 */
+-		if (requeue_pi) {
+-			/*
+-			 * Prepare the waiter to take the rt_mutex. Take a
+-			 * refcount on the pi_state and store the pointer in
+-			 * the futex_q object of the waiter.
+-			 */
+-			get_pi_state(pi_state);
+-			this->pi_state = pi_state;
+-			ret = rt_mutex_start_proxy_lock(&pi_state->pi_mutex,
+-							this->rt_waiter,
+-							this->task);
+-			if (ret == 1) {
+-				/*
+-				 * We got the lock. We do neither drop the
+-				 * refcount on pi_state nor clear
+-				 * this->pi_state because the waiter needs the
+-				 * pi_state for cleaning up the user space
+-				 * value. It will drop the refcount after
+-				 * doing so.
+-				 */
+-				requeue_pi_wake_futex(this, &key2, hb2);
+-				continue;
+-			} else if (ret) {
+-				/*
+-				 * rt_mutex_start_proxy_lock() detected a
+-				 * potential deadlock when we tried to queue
+-				 * that waiter. Drop the pi_state reference
+-				 * which we took above and remove the pointer
+-				 * to the state from the waiters futex_q
+-				 * object.
+-				 */
+-				this->pi_state = NULL;
+-				put_pi_state(pi_state);
+-				/*
+-				 * We stop queueing more waiters and let user
+-				 * space deal with the mess.
+-				 */
+-				break;
+-			}
+-		}
+-		requeue_futex(this, hb1, hb2, &key2);
+-	}
+-
+-	/*
+-	 * We took an extra initial reference to the pi_state either
+-	 * in futex_proxy_trylock_atomic() or in lookup_pi_state(). We
+-	 * need to drop it here again.
+-	 */
+-	put_pi_state(pi_state);
+-
+-out_unlock:
+-	double_unlock_hb(hb1, hb2);
+-	wake_up_q(&wake_q);
+-	hb_waiters_dec(hb2);
+-	return ret ? ret : task_count;
+-}
+-
+-/* The key must be already stored in q->key. */
+-static inline struct futex_hash_bucket *queue_lock(struct futex_q *q)
+-	__acquires(&hb->lock)
+-{
+-	struct futex_hash_bucket *hb;
+-
+-	hb = hash_futex(&q->key);
+-
+-	/*
+-	 * Increment the counter before taking the lock so that
+-	 * a potential waker won't miss a to-be-slept task that is
+-	 * waiting for the spinlock. This is safe as all queue_lock()
+-	 * users end up calling queue_me(). Similarly, for housekeeping,
+-	 * decrement the counter at queue_unlock() when some error has
+-	 * occurred and we don't end up adding the task to the list.
+-	 */
+-	hb_waiters_inc(hb); /* implies smp_mb(); (A) */
+-
+-	q->lock_ptr = &hb->lock;
+-
+-	spin_lock(&hb->lock);
+-	return hb;
+-}
+-
+-static inline void
+-queue_unlock(struct futex_hash_bucket *hb)
+-	__releases(&hb->lock)
+-{
+-	spin_unlock(&hb->lock);
+-	hb_waiters_dec(hb);
+-}
+-
+-static inline void __queue_me(struct futex_q *q, struct futex_hash_bucket *hb)
+-{
+-	int prio;
+-
+-	/*
+-	 * The priority used to register this element is
+-	 * - either the real thread-priority for the real-time threads
+-	 * (i.e. threads with a priority lower than MAX_RT_PRIO)
+-	 * - or MAX_RT_PRIO for non-RT threads.
+-	 * Thus, all RT-threads are woken first in priority order, and
+-	 * the others are woken last, in FIFO order.
+-	 */
+-	prio = min(current->normal_prio, MAX_RT_PRIO);
+-
+-	plist_node_init(&q->list, prio);
+-	plist_add(&q->list, &hb->chain);
+-	q->task = current;
+-}
+-
+-/**
+- * queue_me() - Enqueue the futex_q on the futex_hash_bucket
+- * @q:	The futex_q to enqueue
+- * @hb:	The destination hash bucket
+- *
+- * The hb->lock must be held by the caller, and is released here. A call to
+- * queue_me() is typically paired with exactly one call to unqueue_me().  The
+- * exceptions involve the PI related operations, which may use unqueue_me_pi()
+- * or nothing if the unqueue is done as part of the wake process and the unqueue
+- * state is implicit in the state of woken task (see futex_wait_requeue_pi() for
+- * an example).
+- */
+-static inline void queue_me(struct futex_q *q, struct futex_hash_bucket *hb)
+-	__releases(&hb->lock)
+-{
+-	__queue_me(q, hb);
+-	spin_unlock(&hb->lock);
+-}
+-
+-/**
+- * unqueue_me() - Remove the futex_q from its futex_hash_bucket
+- * @q:	The futex_q to unqueue
+- *
+- * The q->lock_ptr must not be held by the caller. A call to unqueue_me() must
+- * be paired with exactly one earlier call to queue_me().
+- *
+- * Return:
+- *  - 1 - if the futex_q was still queued (and we removed unqueued it);
+- *  - 0 - if the futex_q was already removed by the waking thread
+- */
+-static int unqueue_me(struct futex_q *q)
+-{
+-	spinlock_t *lock_ptr;
+-	int ret = 0;
+-
+-	/* In the common case we don't take the spinlock, which is nice. */
+-retry:
+-	/*
+-	 * q->lock_ptr can change between this read and the following spin_lock.
+-	 * Use READ_ONCE to forbid the compiler from reloading q->lock_ptr and
+-	 * optimizing lock_ptr out of the logic below.
+-	 */
+-	lock_ptr = READ_ONCE(q->lock_ptr);
+-	if (lock_ptr != NULL) {
+-		spin_lock(lock_ptr);
+-		/*
+-		 * q->lock_ptr can change between reading it and
+-		 * spin_lock(), causing us to take the wrong lock.  This
+-		 * corrects the race condition.
+-		 *
+-		 * Reasoning goes like this: if we have the wrong lock,
+-		 * q->lock_ptr must have changed (maybe several times)
+-		 * between reading it and the spin_lock().  It can
+-		 * change again after the spin_lock() but only if it was
+-		 * already changed before the spin_lock().  It cannot,
+-		 * however, change back to the original value.  Therefore
+-		 * we can detect whether we acquired the correct lock.
+-		 */
+-		if (unlikely(lock_ptr != q->lock_ptr)) {
+-			spin_unlock(lock_ptr);
+-			goto retry;
+-		}
+-		__unqueue_futex(q);
+-
+-		BUG_ON(q->pi_state);
+-
+-		spin_unlock(lock_ptr);
+-		ret = 1;
+-	}
+-
+-	return ret;
+-}
+-
+-/*
+- * PI futexes can not be requeued and must remove themself from the
+- * hash bucket. The hash bucket lock (i.e. lock_ptr) is held on entry
+- * and dropped here.
+- */
+-static void unqueue_me_pi(struct futex_q *q)
+-	__releases(q->lock_ptr)
+-{
+-	__unqueue_futex(q);
+-
+-	BUG_ON(!q->pi_state);
+-	put_pi_state(q->pi_state);
+-	q->pi_state = NULL;
+-
+-	spin_unlock(q->lock_ptr);
+-}
+-
+-static int __fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
+-				  struct task_struct *argowner)
+-{
+-	struct futex_pi_state *pi_state = q->pi_state;
+-	struct task_struct *oldowner, *newowner;
+-	u32 uval, curval, newval, newtid;
+-	int err = 0;
+-
+-	oldowner = pi_state->owner;
+-
+-	/*
+-	 * We are here because either:
+-	 *
+-	 *  - we stole the lock and pi_state->owner needs updating to reflect
+-	 *    that (@argowner == current),
+-	 *
+-	 * or:
+-	 *
+-	 *  - someone stole our lock and we need to fix things to point to the
+-	 *    new owner (@argowner == NULL).
+-	 *
+-	 * Either way, we have to replace the TID in the user space variable.
+-	 * This must be atomic as we have to preserve the owner died bit here.
+-	 *
+-	 * Note: We write the user space value _before_ changing the pi_state
+-	 * because we can fault here. Imagine swapped out pages or a fork
+-	 * that marked all the anonymous memory readonly for cow.
+-	 *
+-	 * Modifying pi_state _before_ the user space value would leave the
+-	 * pi_state in an inconsistent state when we fault here, because we
+-	 * need to drop the locks to handle the fault. This might be observed
+-	 * in the PID check in lookup_pi_state.
+-	 */
+-retry:
+-	if (!argowner) {
+-		if (oldowner != current) {
+-			/*
+-			 * We raced against a concurrent self; things are
+-			 * already fixed up. Nothing to do.
+-			 */
+-			return 0;
+-		}
+-
+-		if (__rt_mutex_futex_trylock(&pi_state->pi_mutex)) {
+-			/* We got the lock. pi_state is correct. Tell caller. */
+-			return 1;
+-		}
+-
+-		/*
+-		 * The trylock just failed, so either there is an owner or
+-		 * there is a higher priority waiter than this one.
+-		 */
+-		newowner = rt_mutex_owner(&pi_state->pi_mutex);
+-		/*
+-		 * If the higher priority waiter has not yet taken over the
+-		 * rtmutex then newowner is NULL. We can't return here with
+-		 * that state because it's inconsistent vs. the user space
+-		 * state. So drop the locks and try again. It's a valid
+-		 * situation and not any different from the other retry
+-		 * conditions.
+-		 */
+-		if (unlikely(!newowner)) {
+-			err = -EAGAIN;
+-			goto handle_err;
+-		}
+-	} else {
+-		WARN_ON_ONCE(argowner != current);
+-		if (oldowner == current) {
+-			/*
+-			 * We raced against a concurrent self; things are
+-			 * already fixed up. Nothing to do.
+-			 */
+-			return 1;
+-		}
+-		newowner = argowner;
+-	}
+-
+-	newtid = task_pid_vnr(newowner) | FUTEX_WAITERS;
+-	/* Owner died? */
+-	if (!pi_state->owner)
+-		newtid |= FUTEX_OWNER_DIED;
+-
+-	err = get_futex_value_locked(&uval, uaddr);
+-	if (err)
+-		goto handle_err;
+-
+-	for (;;) {
+-		newval = (uval & FUTEX_OWNER_DIED) | newtid;
+-
+-		err = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
+-		if (err)
+-			goto handle_err;
+-
+-		if (curval == uval)
+-			break;
+-		uval = curval;
+-	}
+-
+-	/*
+-	 * We fixed up user space. Now we need to fix the pi_state
+-	 * itself.
+-	 */
+-	pi_state_update_owner(pi_state, newowner);
+-
+-	return argowner == current;
+-
+-	/*
+-	 * In order to reschedule or handle a page fault, we need to drop the
+-	 * locks here. In the case of a fault, this gives the other task
+-	 * (either the highest priority waiter itself or the task which stole
+-	 * the rtmutex) the chance to try the fixup of the pi_state. So once we
+-	 * are back from handling the fault we need to check the pi_state after
+-	 * reacquiring the locks and before trying to do another fixup. When
+-	 * the fixup has been done already we simply return.
+-	 *
+-	 * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
+-	 * drop hb->lock since the caller owns the hb -> futex_q relation.
+-	 * Dropping the pi_mutex->wait_lock requires the state revalidate.
+-	 */
+-handle_err:
+-	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+-	spin_unlock(q->lock_ptr);
+-
+-	switch (err) {
+-	case -EFAULT:
+-		err = fault_in_user_writeable(uaddr);
+-		break;
+-
+-	case -EAGAIN:
+-		cond_resched();
+-		err = 0;
+-		break;
+-
+-	default:
+-		WARN_ON_ONCE(1);
+-		break;
+-	}
+-
+-	spin_lock(q->lock_ptr);
+-	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+-
+-	/*
+-	 * Check if someone else fixed it for us:
+-	 */
+-	if (pi_state->owner != oldowner)
+-		return argowner == current;
+-
+-	/* Retry if err was -EAGAIN or the fault in succeeded */
+-	if (!err)
+-		goto retry;
+-
+-	/*
+-	 * fault_in_user_writeable() failed so user state is immutable. At
+-	 * best we can make the kernel state consistent but user state will
+-	 * be most likely hosed and any subsequent unlock operation will be
+-	 * rejected due to PI futex rule [10].
+-	 *
+-	 * Ensure that the rtmutex owner is also the pi_state owner despite
+-	 * the user space value claiming something different. There is no
+-	 * point in unlocking the rtmutex if current is the owner as it
+-	 * would need to wait until the next waiter has taken the rtmutex
+-	 * to guarantee consistent state. Keep it simple. Userspace asked
+-	 * for this wreckaged state.
+-	 *
+-	 * The rtmutex has an owner - either current or some other
+-	 * task. See the EAGAIN loop above.
+-	 */
+-	pi_state_update_owner(pi_state, rt_mutex_owner(&pi_state->pi_mutex));
+-
+-	return err;
+-}
+-
+-static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
+-				struct task_struct *argowner)
+-{
+-	struct futex_pi_state *pi_state = q->pi_state;
+-	int ret;
+-
+-	lockdep_assert_held(q->lock_ptr);
+-
+-	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+-	ret = __fixup_pi_state_owner(uaddr, q, argowner);
+-	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+-	return ret;
+-}
+-
+-static long futex_wait_restart(struct restart_block *restart);
+-
+-/**
+- * fixup_owner() - Post lock pi_state and corner case management
+- * @uaddr:	user address of the futex
+- * @q:		futex_q (contains pi_state and access to the rt_mutex)
+- * @locked:	if the attempt to take the rt_mutex succeeded (1) or not (0)
+- *
+- * After attempting to lock an rt_mutex, this function is called to cleanup
+- * the pi_state owner as well as handle race conditions that may allow us to
+- * acquire the lock. Must be called with the hb lock held.
+- *
+- * Return:
+- *  -  1 - success, lock taken;
+- *  -  0 - success, lock not taken;
+- *  - <0 - on error (-EFAULT)
+- */
+-static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
+-{
+-	if (locked) {
+-		/*
+-		 * Got the lock. We might not be the anticipated owner if we
+-		 * did a lock-steal - fix up the PI-state in that case:
+-		 *
+-		 * Speculative pi_state->owner read (we don't hold wait_lock);
+-		 * since we own the lock pi_state->owner == current is the
+-		 * stable state, anything else needs more attention.
+-		 */
+-		if (q->pi_state->owner != current)
+-			return fixup_pi_state_owner(uaddr, q, current);
+-		return 1;
+-	}
+-
+-	/*
+-	 * If we didn't get the lock; check if anybody stole it from us. In
+-	 * that case, we need to fix up the uval to point to them instead of
+-	 * us, otherwise bad things happen. [10]
+-	 *
+-	 * Another speculative read; pi_state->owner == current is unstable
+-	 * but needs our attention.
+-	 */
+-	if (q->pi_state->owner == current)
+-		return fixup_pi_state_owner(uaddr, q, NULL);
+-
+-	/*
+-	 * Paranoia check. If we did not take the lock, then we should not be
+-	 * the owner of the rt_mutex. Warn and establish consistent state.
+-	 */
+-	if (WARN_ON_ONCE(rt_mutex_owner(&q->pi_state->pi_mutex) == current))
+-		return fixup_pi_state_owner(uaddr, q, current);
+-
+-	return 0;
+-}
+-
+-/**
+- * futex_wait_queue_me() - queue_me() and wait for wakeup, timeout, or signal
+- * @hb:		the futex hash bucket, must be locked by the caller
+- * @q:		the futex_q to queue up on
+- * @timeout:	the prepared hrtimer_sleeper, or null for no timeout
+- */
+-static void futex_wait_queue_me(struct futex_hash_bucket *hb, struct futex_q *q,
+-				struct hrtimer_sleeper *timeout)
+-{
+-	/*
+-	 * The task state is guaranteed to be set before another task can
+-	 * wake it. set_current_state() is implemented using smp_store_mb() and
+-	 * queue_me() calls spin_unlock() upon completion, both serializing
+-	 * access to the hash list and forcing another memory barrier.
+-	 */
+-	set_current_state(TASK_INTERRUPTIBLE);
+-	queue_me(q, hb);
+-
+-	/* Arm the timer */
+-	if (timeout)
+-		hrtimer_sleeper_start_expires(timeout, HRTIMER_MODE_ABS);
+-
+-	/*
+-	 * If we have been removed from the hash list, then another task
+-	 * has tried to wake us, and we can skip the call to schedule().
+-	 */
+-	if (likely(!plist_node_empty(&q->list))) {
+-		/*
+-		 * If the timer has already expired, current will already be
+-		 * flagged for rescheduling. Only call schedule if there
+-		 * is no timeout, or if it has yet to expire.
+-		 */
+-		if (!timeout || timeout->task)
+-			freezable_schedule();
+-	}
+-	__set_current_state(TASK_RUNNING);
+-}
+-
+-/**
+- * futex_wait_setup() - Prepare to wait on a futex
+- * @uaddr:	the futex userspace address
+- * @val:	the expected value
+- * @flags:	futex flags (FLAGS_SHARED, etc.)
+- * @q:		the associated futex_q
+- * @hb:		storage for hash_bucket pointer to be returned to caller
+- *
+- * Setup the futex_q and locate the hash_bucket.  Get the futex value and
+- * compare it with the expected value.  Handle atomic faults internally.
+- * Return with the hb lock held and a q.key reference on success, and unlocked
+- * with no q.key reference on failure.
+- *
+- * Return:
+- *  -  0 - uaddr contains val and hb has been locked;
+- *  - <1 - -EFAULT or -EWOULDBLOCK (uaddr does not contain val) and hb is unlocked
+- */
+-static int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
+-			   struct futex_q *q, struct futex_hash_bucket **hb)
+-{
+-	u32 uval;
+-	int ret;
+-
+-	/*
+-	 * Access the page AFTER the hash-bucket is locked.
+-	 * Order is important:
+-	 *
+-	 *   Userspace waiter: val = var; if (cond(val)) futex_wait(&var, val);
+-	 *   Userspace waker:  if (cond(var)) { var = new; futex_wake(&var); }
+-	 *
+-	 * The basic logical guarantee of a futex is that it blocks ONLY
+-	 * if cond(var) is known to be true at the time of blocking, for
+-	 * any cond.  If we locked the hash-bucket after testing *uaddr, that
+-	 * would open a race condition where we could block indefinitely with
+-	 * cond(var) false, which would violate the guarantee.
+-	 *
+-	 * On the other hand, we insert q and release the hash-bucket only
+-	 * after testing *uaddr.  This guarantees that futex_wait() will NOT
+-	 * absorb a wakeup if *uaddr does not match the desired values
+-	 * while the syscall executes.
+-	 */
+-retry:
+-	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &q->key, FUTEX_READ);
+-	if (unlikely(ret != 0))
+-		return ret;
+-
+-retry_private:
+-	*hb = queue_lock(q);
+-
+-	ret = get_futex_value_locked(&uval, uaddr);
+-
+-	if (ret) {
+-		queue_unlock(*hb);
+-
+-		ret = get_user(uval, uaddr);
+-		if (ret)
+-			return ret;
+-
+-		if (!(flags & FLAGS_SHARED))
+-			goto retry_private;
+-
+-		goto retry;
+-	}
+-
+-	if (uval != val) {
+-		queue_unlock(*hb);
+-		ret = -EWOULDBLOCK;
+-	}
+-
+-	return ret;
+-}
+-
+-static int futex_wait(u32 __user *uaddr, unsigned int flags, u32 val,
+-		      ktime_t *abs_time, u32 bitset)
+-{
+-	struct hrtimer_sleeper timeout, *to;
+-	struct restart_block *restart;
+-	struct futex_hash_bucket *hb;
+-	struct futex_q q = futex_q_init;
+-	int ret;
+-
+-	if (!bitset)
+-		return -EINVAL;
+-	q.bitset = bitset;
+-
+-	to = futex_setup_timer(abs_time, &timeout, flags,
+-			       current->timer_slack_ns);
+-retry:
+-	/*
+-	 * Prepare to wait on uaddr. On success, holds hb lock and increments
+-	 * q.key refs.
+-	 */
+-	ret = futex_wait_setup(uaddr, val, flags, &q, &hb);
+-	if (ret)
+-		goto out;
+-
+-	/* queue_me and wait for wakeup, timeout, or a signal. */
+-	futex_wait_queue_me(hb, &q, to);
+-
+-	/* If we were woken (and unqueued), we succeeded, whatever. */
+-	ret = 0;
+-	/* unqueue_me() drops q.key ref */
+-	if (!unqueue_me(&q))
+-		goto out;
+-	ret = -ETIMEDOUT;
+-	if (to && !to->task)
+-		goto out;
+-
+-	/*
+-	 * We expect signal_pending(current), but we might be the
+-	 * victim of a spurious wakeup as well.
+-	 */
+-	if (!signal_pending(current))
+-		goto retry;
+-
+-	ret = -ERESTARTSYS;
+-	if (!abs_time)
+-		goto out;
+-
+-	restart = &current->restart_block;
+-	restart->futex.uaddr = uaddr;
+-	restart->futex.val = val;
+-	restart->futex.time = *abs_time;
+-	restart->futex.bitset = bitset;
+-	restart->futex.flags = flags | FLAGS_HAS_TIMEOUT;
+-
+-	ret = set_restart_fn(restart, futex_wait_restart);
+-
+-out:
+-	if (to) {
+-		hrtimer_cancel(&to->timer);
+-		destroy_hrtimer_on_stack(&to->timer);
+-	}
+-	return ret;
+-}
+-
+-
+-static long futex_wait_restart(struct restart_block *restart)
+-{
+-	u32 __user *uaddr = restart->futex.uaddr;
+-	ktime_t t, *tp = NULL;
+-
+-	if (restart->futex.flags & FLAGS_HAS_TIMEOUT) {
+-		t = restart->futex.time;
+-		tp = &t;
+-	}
+-	restart->fn = do_no_restart_syscall;
+-
+-	return (long)futex_wait(uaddr, restart->futex.flags,
+-				restart->futex.val, tp, restart->futex.bitset);
+-}
+-
+-
+-/*
+- * Userspace tried a 0 -> TID atomic transition of the futex value
+- * and failed. The kernel side here does the whole locking operation:
+- * if there are waiters then it will block as a consequence of relying
+- * on rt-mutexes, it does PI, etc. (Due to races the kernel might see
+- * a 0 value of the futex too.).
+- *
+- * Also serves as futex trylock_pi()'ing, and due semantics.
+- */
+-static int futex_lock_pi(u32 __user *uaddr, unsigned int flags,
+-			 ktime_t *time, int trylock)
+-{
+-	struct hrtimer_sleeper timeout, *to;
+-	struct task_struct *exiting = NULL;
+-	struct rt_mutex_waiter rt_waiter;
+-	struct futex_hash_bucket *hb;
+-	struct futex_q q = futex_q_init;
+-	int res, ret;
+-
+-	if (!IS_ENABLED(CONFIG_FUTEX_PI))
+-		return -ENOSYS;
+-
+-	if (refill_pi_state_cache())
+-		return -ENOMEM;
+-
+-	to = futex_setup_timer(time, &timeout, FLAGS_CLOCKRT, 0);
+-
+-retry:
+-	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &q.key, FUTEX_WRITE);
+-	if (unlikely(ret != 0))
+-		goto out;
+-
+-retry_private:
+-	hb = queue_lock(&q);
+-
+-	ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current,
+-				   &exiting, 0);
+-	if (unlikely(ret)) {
+-		/*
+-		 * Atomic work succeeded and we got the lock,
+-		 * or failed. Either way, we do _not_ block.
+-		 */
+-		switch (ret) {
+-		case 1:
+-			/* We got the lock. */
+-			ret = 0;
+-			goto out_unlock_put_key;
+-		case -EFAULT:
+-			goto uaddr_faulted;
+-		case -EBUSY:
+-		case -EAGAIN:
+-			/*
+-			 * Two reasons for this:
+-			 * - EBUSY: Task is exiting and we just wait for the
+-			 *   exit to complete.
+-			 * - EAGAIN: The user space value changed.
+-			 */
+-			queue_unlock(hb);
+-			/*
+-			 * Handle the case where the owner is in the middle of
+-			 * exiting. Wait for the exit to complete otherwise
+-			 * this task might loop forever, aka. live lock.
+-			 */
+-			wait_for_owner_exiting(ret, exiting);
+-			cond_resched();
+-			goto retry;
+-		default:
+-			goto out_unlock_put_key;
+-		}
+-	}
+-
+-	WARN_ON(!q.pi_state);
+-
+-	/*
+-	 * Only actually queue now that the atomic ops are done:
+-	 */
+-	__queue_me(&q, hb);
+-
+-	if (trylock) {
+-		ret = rt_mutex_futex_trylock(&q.pi_state->pi_mutex);
+-		/* Fixup the trylock return value: */
+-		ret = ret ? 0 : -EWOULDBLOCK;
+-		goto no_block;
+-	}
+-
+-	rt_mutex_init_waiter(&rt_waiter);
+-
+-	/*
+-	 * On PREEMPT_RT_FULL, when hb->lock becomes an rt_mutex, we must not
+-	 * hold it while doing rt_mutex_start_proxy(), because then it will
+-	 * include hb->lock in the blocking chain, even through we'll not in
+-	 * fact hold it while blocking. This will lead it to report -EDEADLK
+-	 * and BUG when futex_unlock_pi() interleaves with this.
+-	 *
+-	 * Therefore acquire wait_lock while holding hb->lock, but drop the
+-	 * latter before calling __rt_mutex_start_proxy_lock(). This
+-	 * interleaves with futex_unlock_pi() -- which does a similar lock
+-	 * handoff -- such that the latter can observe the futex_q::pi_state
+-	 * before __rt_mutex_start_proxy_lock() is done.
+-	 */
+-	raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
+-	spin_unlock(q.lock_ptr);
+-	/*
+-	 * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter
+-	 * such that futex_unlock_pi() is guaranteed to observe the waiter when
+-	 * it sees the futex_q::pi_state.
+-	 */
+-	ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current);
+-	raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock);
+-
+-	if (ret) {
+-		if (ret == 1)
+-			ret = 0;
+-		goto cleanup;
+-	}
+-
+-	if (unlikely(to))
+-		hrtimer_sleeper_start_expires(to, HRTIMER_MODE_ABS);
+-
+-	ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter);
+-
+-cleanup:
+-	spin_lock(q.lock_ptr);
+-	/*
+-	 * If we failed to acquire the lock (deadlock/signal/timeout), we must
+-	 * first acquire the hb->lock before removing the lock from the
+-	 * rt_mutex waitqueue, such that we can keep the hb and rt_mutex wait
+-	 * lists consistent.
+-	 *
+-	 * In particular; it is important that futex_unlock_pi() can not
+-	 * observe this inconsistency.
+-	 */
+-	if (ret && !rt_mutex_cleanup_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter))
+-		ret = 0;
+-
+-no_block:
+-	/*
+-	 * Fixup the pi_state owner and possibly acquire the lock if we
+-	 * haven't already.
+-	 */
+-	res = fixup_owner(uaddr, &q, !ret);
+-	/*
+-	 * If fixup_owner() returned an error, proprogate that.  If it acquired
+-	 * the lock, clear our -ETIMEDOUT or -EINTR.
+-	 */
+-	if (res)
+-		ret = (res < 0) ? res : 0;
+-
+-	/* Unqueue and drop the lock */
+-	unqueue_me_pi(&q);
+-	goto out;
+-
+-out_unlock_put_key:
+-	queue_unlock(hb);
+-
+-out:
+-	if (to) {
+-		hrtimer_cancel(&to->timer);
+-		destroy_hrtimer_on_stack(&to->timer);
+-	}
+-	return ret != -EINTR ? ret : -ERESTARTNOINTR;
+-
+-uaddr_faulted:
+-	queue_unlock(hb);
+-
+-	ret = fault_in_user_writeable(uaddr);
+-	if (ret)
+-		goto out;
+-
+-	if (!(flags & FLAGS_SHARED))
+-		goto retry_private;
+-
+-	goto retry;
+-}
+-
+-/*
+- * Userspace attempted a TID -> 0 atomic transition, and failed.
+- * This is the in-kernel slowpath: we look up the PI state (if any),
+- * and do the rt-mutex unlock.
+- */
+-static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
+-{
+-	u32 curval, uval, vpid = task_pid_vnr(current);
+-	union futex_key key = FUTEX_KEY_INIT;
+-	struct futex_hash_bucket *hb;
+-	struct futex_q *top_waiter;
+-	int ret;
+-
+-	if (!IS_ENABLED(CONFIG_FUTEX_PI))
+-		return -ENOSYS;
+-
+-retry:
+-	if (get_user(uval, uaddr))
+-		return -EFAULT;
+-	/*
+-	 * We release only a lock we actually own:
+-	 */
+-	if ((uval & FUTEX_TID_MASK) != vpid)
+-		return -EPERM;
+-
+-	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, FUTEX_WRITE);
+-	if (ret)
+-		return ret;
+-
+-	hb = hash_futex(&key);
+-	spin_lock(&hb->lock);
+-
+-	/*
+-	 * Check waiters first. We do not trust user space values at
+-	 * all and we at least want to know if user space fiddled
+-	 * with the futex value instead of blindly unlocking.
+-	 */
+-	top_waiter = futex_top_waiter(hb, &key);
+-	if (top_waiter) {
+-		struct futex_pi_state *pi_state = top_waiter->pi_state;
+-
+-		ret = -EINVAL;
+-		if (!pi_state)
+-			goto out_unlock;
+-
+-		/*
+-		 * If current does not own the pi_state then the futex is
+-		 * inconsistent and user space fiddled with the futex value.
+-		 */
+-		if (pi_state->owner != current)
+-			goto out_unlock;
+-
+-		get_pi_state(pi_state);
+-		/*
+-		 * By taking wait_lock while still holding hb->lock, we ensure
+-		 * there is no point where we hold neither; and therefore
+-		 * wake_futex_pi() must observe a state consistent with what we
+-		 * observed.
+-		 *
+-		 * In particular; this forces __rt_mutex_start_proxy() to
+-		 * complete such that we're guaranteed to observe the
+-		 * rt_waiter. Also see the WARN in wake_futex_pi().
+-		 */
+-		raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+-		spin_unlock(&hb->lock);
+-
+-		/* drops pi_state->pi_mutex.wait_lock */
+-		ret = wake_futex_pi(uaddr, uval, pi_state);
+-
+-		put_pi_state(pi_state);
+-
+-		/*
+-		 * Success, we're done! No tricky corner cases.
+-		 */
+-		if (!ret)
+-			goto out_putkey;
+-		/*
+-		 * The atomic access to the futex value generated a
+-		 * pagefault, so retry the user-access and the wakeup:
+-		 */
+-		if (ret == -EFAULT)
+-			goto pi_faulted;
+-		/*
+-		 * A unconditional UNLOCK_PI op raced against a waiter
+-		 * setting the FUTEX_WAITERS bit. Try again.
+-		 */
+-		if (ret == -EAGAIN)
+-			goto pi_retry;
+-		/*
+-		 * wake_futex_pi has detected invalid state. Tell user
+-		 * space.
+-		 */
+-		goto out_putkey;
+-	}
+-
+-	/*
+-	 * We have no kernel internal state, i.e. no waiters in the
+-	 * kernel. Waiters which are about to queue themselves are stuck
+-	 * on hb->lock. So we can safely ignore them. We do neither
+-	 * preserve the WAITERS bit not the OWNER_DIED one. We are the
+-	 * owner.
+-	 */
+-	if ((ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, 0))) {
+-		spin_unlock(&hb->lock);
+-		switch (ret) {
+-		case -EFAULT:
+-			goto pi_faulted;
+-
+-		case -EAGAIN:
+-			goto pi_retry;
+-
+-		default:
+-			WARN_ON_ONCE(1);
+-			goto out_putkey;
+-		}
+-	}
+-
+-	/*
+-	 * If uval has changed, let user space handle it.
+-	 */
+-	ret = (curval == uval) ? 0 : -EAGAIN;
+-
+-out_unlock:
+-	spin_unlock(&hb->lock);
+-out_putkey:
+-	return ret;
+-
+-pi_retry:
+-	cond_resched();
+-	goto retry;
+-
+-pi_faulted:
+-
+-	ret = fault_in_user_writeable(uaddr);
+-	if (!ret)
+-		goto retry;
+-
+-	return ret;
+-}
+-
+-/**
+- * handle_early_requeue_pi_wakeup() - Detect early wakeup on the initial futex
+- * @hb:		the hash_bucket futex_q was original enqueued on
+- * @q:		the futex_q woken while waiting to be requeued
+- * @key2:	the futex_key of the requeue target futex
+- * @timeout:	the timeout associated with the wait (NULL if none)
+- *
+- * Detect if the task was woken on the initial futex as opposed to the requeue
+- * target futex.  If so, determine if it was a timeout or a signal that caused
+- * the wakeup and return the appropriate error code to the caller.  Must be
+- * called with the hb lock held.
+- *
+- * Return:
+- *  -  0 = no early wakeup detected;
+- *  - <0 = -ETIMEDOUT or -ERESTARTNOINTR
+- */
+-static inline
+-int handle_early_requeue_pi_wakeup(struct futex_hash_bucket *hb,
+-				   struct futex_q *q, union futex_key *key2,
+-				   struct hrtimer_sleeper *timeout)
+-{
+-	int ret = 0;
+-
+-	/*
+-	 * With the hb lock held, we avoid races while we process the wakeup.
+-	 * We only need to hold hb (and not hb2) to ensure atomicity as the
+-	 * wakeup code can't change q.key from uaddr to uaddr2 if we hold hb.
+-	 * It can't be requeued from uaddr2 to something else since we don't
+-	 * support a PI aware source futex for requeue.
+-	 */
+-	if (!match_futex(&q->key, key2)) {
+-		WARN_ON(q->lock_ptr && (&hb->lock != q->lock_ptr));
+-		/*
+-		 * We were woken prior to requeue by a timeout or a signal.
+-		 * Unqueue the futex_q and determine which it was.
+-		 */
+-		plist_del(&q->list, &hb->chain);
+-		hb_waiters_dec(hb);
+-
+-		/* Handle spurious wakeups gracefully */
+-		ret = -EWOULDBLOCK;
+-		if (timeout && !timeout->task)
+-			ret = -ETIMEDOUT;
+-		else if (signal_pending(current))
+-			ret = -ERESTARTNOINTR;
+-	}
+-	return ret;
+-}
+-
+-/**
+- * futex_wait_requeue_pi() - Wait on uaddr and take uaddr2
+- * @uaddr:	the futex we initially wait on (non-pi)
+- * @flags:	futex flags (FLAGS_SHARED, FLAGS_CLOCKRT, etc.), they must be
+- *		the same type, no requeueing from private to shared, etc.
+- * @val:	the expected value of uaddr
+- * @abs_time:	absolute timeout
+- * @bitset:	32 bit wakeup bitset set by userspace, defaults to all
+- * @uaddr2:	the pi futex we will take prior to returning to user-space
+- *
+- * The caller will wait on uaddr and will be requeued by futex_requeue() to
+- * uaddr2 which must be PI aware and unique from uaddr.  Normal wakeup will wake
+- * on uaddr2 and complete the acquisition of the rt_mutex prior to returning to
+- * userspace.  This ensures the rt_mutex maintains an owner when it has waiters;
+- * without one, the pi logic would not know which task to boost/deboost, if
+- * there was a need to.
+- *
+- * We call schedule in futex_wait_queue_me() when we enqueue and return there
+- * via the following--
+- * 1) wakeup on uaddr2 after an atomic lock acquisition by futex_requeue()
+- * 2) wakeup on uaddr2 after a requeue
+- * 3) signal
+- * 4) timeout
+- *
+- * If 3, cleanup and return -ERESTARTNOINTR.
+- *
+- * If 2, we may then block on trying to take the rt_mutex and return via:
+- * 5) successful lock
+- * 6) signal
+- * 7) timeout
+- * 8) other lock acquisition failure
+- *
+- * If 6, return -EWOULDBLOCK (restarting the syscall would do the same).
+- *
+- * If 4 or 7, we cleanup and return with -ETIMEDOUT.
+- *
+- * Return:
+- *  -  0 - On success;
+- *  - <0 - On error
+- */
+-static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+-				 u32 val, ktime_t *abs_time, u32 bitset,
+-				 u32 __user *uaddr2)
+-{
+-	struct hrtimer_sleeper timeout, *to;
+-	struct rt_mutex_waiter rt_waiter;
+-	struct futex_hash_bucket *hb;
+-	union futex_key key2 = FUTEX_KEY_INIT;
+-	struct futex_q q = futex_q_init;
+-	int res, ret;
+-
+-	if (!IS_ENABLED(CONFIG_FUTEX_PI))
+-		return -ENOSYS;
+-
+-	if (uaddr == uaddr2)
+-		return -EINVAL;
+-
+-	if (!bitset)
+-		return -EINVAL;
+-
+-	to = futex_setup_timer(abs_time, &timeout, flags,
+-			       current->timer_slack_ns);
+-
+-	/*
+-	 * The waiter is allocated on our stack, manipulated by the requeue
+-	 * code while we sleep on uaddr.
+-	 */
+-	rt_mutex_init_waiter(&rt_waiter);
+-
+-	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, FUTEX_WRITE);
+-	if (unlikely(ret != 0))
+-		goto out;
+-
+-	q.bitset = bitset;
+-	q.rt_waiter = &rt_waiter;
+-	q.requeue_pi_key = &key2;
+-
+-	/*
+-	 * Prepare to wait on uaddr. On success, increments q.key (key1) ref
+-	 * count.
+-	 */
+-	ret = futex_wait_setup(uaddr, val, flags, &q, &hb);
+-	if (ret)
+-		goto out;
+-
+-	/*
+-	 * The check above which compares uaddrs is not sufficient for
+-	 * shared futexes. We need to compare the keys:
+-	 */
+-	if (match_futex(&q.key, &key2)) {
+-		queue_unlock(hb);
+-		ret = -EINVAL;
+-		goto out;
+-	}
+-
+-	/* Queue the futex_q, drop the hb lock, wait for wakeup. */
+-	futex_wait_queue_me(hb, &q, to);
+-
+-	spin_lock(&hb->lock);
+-	ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
+-	spin_unlock(&hb->lock);
+-	if (ret)
+-		goto out;
+-
+-	/*
+-	 * In order for us to be here, we know our q.key == key2, and since
+-	 * we took the hb->lock above, we also know that futex_requeue() has
+-	 * completed and we no longer have to concern ourselves with a wakeup
+-	 * race with the atomic proxy lock acquisition by the requeue code. The
+-	 * futex_requeue dropped our key1 reference and incremented our key2
+-	 * reference count.
+-	 */
+-
+-	/* Check if the requeue code acquired the second futex for us. */
+-	if (!q.rt_waiter) {
+-		/*
+-		 * Got the lock. We might not be the anticipated owner if we
+-		 * did a lock-steal - fix up the PI-state in that case.
+-		 */
+-		if (q.pi_state && (q.pi_state->owner != current)) {
+-			spin_lock(q.lock_ptr);
+-			ret = fixup_pi_state_owner(uaddr2, &q, current);
+-			/*
+-			 * Drop the reference to the pi state which
+-			 * the requeue_pi() code acquired for us.
+-			 */
+-			put_pi_state(q.pi_state);
+-			spin_unlock(q.lock_ptr);
+-			/*
+-			 * Adjust the return value. It's either -EFAULT or
+-			 * success (1) but the caller expects 0 for success.
+-			 */
+-			ret = ret < 0 ? ret : 0;
+-		}
+-	} else {
+-		struct rt_mutex *pi_mutex;
+-
+-		/*
+-		 * We have been woken up by futex_unlock_pi(), a timeout, or a
+-		 * signal.  futex_unlock_pi() will not destroy the lock_ptr nor
+-		 * the pi_state.
+-		 */
+-		WARN_ON(!q.pi_state);
+-		pi_mutex = &q.pi_state->pi_mutex;
+-		ret = rt_mutex_wait_proxy_lock(pi_mutex, to, &rt_waiter);
+-
+-		spin_lock(q.lock_ptr);
+-		if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter))
+-			ret = 0;
+-
+-		debug_rt_mutex_free_waiter(&rt_waiter);
+-		/*
+-		 * Fixup the pi_state owner and possibly acquire the lock if we
+-		 * haven't already.
+-		 */
+-		res = fixup_owner(uaddr2, &q, !ret);
+-		/*
+-		 * If fixup_owner() returned an error, proprogate that.  If it
+-		 * acquired the lock, clear -ETIMEDOUT or -EINTR.
+-		 */
+-		if (res)
+-			ret = (res < 0) ? res : 0;
+-
+-		/* Unqueue and drop the lock. */
+-		unqueue_me_pi(&q);
+-	}
+-
+-	if (ret == -EINTR) {
+-		/*
+-		 * We've already been requeued, but cannot restart by calling
+-		 * futex_lock_pi() directly. We could restart this syscall, but
+-		 * it would detect that the user space "val" changed and return
+-		 * -EWOULDBLOCK.  Save the overhead of the restart and return
+-		 * -EWOULDBLOCK directly.
+-		 */
+-		ret = -EWOULDBLOCK;
+-	}
+-
+-out:
+-	if (to) {
+-		hrtimer_cancel(&to->timer);
+-		destroy_hrtimer_on_stack(&to->timer);
+-	}
+-	return ret;
+-}
+-
+-/*
+- * Support for robust futexes: the kernel cleans up held futexes at
+- * thread exit time.
+- *
+- * Implementation: user-space maintains a per-thread list of locks it
+- * is holding. Upon do_exit(), the kernel carefully walks this list,
+- * and marks all locks that are owned by this thread with the
+- * FUTEX_OWNER_DIED bit, and wakes up a waiter (if any). The list is
+- * always manipulated with the lock held, so the list is private and
+- * per-thread. Userspace also maintains a per-thread 'list_op_pending'
+- * field, to allow the kernel to clean up if the thread dies after
+- * acquiring the lock, but just before it could have added itself to
+- * the list. There can only be one such pending lock.
+- */
+-
+-/**
+- * sys_set_robust_list() - Set the robust-futex list head of a task
+- * @head:	pointer to the list-head
+- * @len:	length of the list-head, as userspace expects
+- */
+-SYSCALL_DEFINE2(set_robust_list, struct robust_list_head __user *, head,
+-		size_t, len)
+-{
+-	if (!futex_cmpxchg_enabled)
+-		return -ENOSYS;
+-	/*
+-	 * The kernel knows only one size for now:
+-	 */
+-	if (unlikely(len != sizeof(*head)))
+-		return -EINVAL;
+-
+-	current->robust_list = head;
+-
+-	return 0;
+-}
+-
+-/**
+- * sys_get_robust_list() - Get the robust-futex list head of a task
+- * @pid:	pid of the process [zero for current task]
+- * @head_ptr:	pointer to a list-head pointer, the kernel fills it in
+- * @len_ptr:	pointer to a length field, the kernel fills in the header size
+- */
+-SYSCALL_DEFINE3(get_robust_list, int, pid,
+-		struct robust_list_head __user * __user *, head_ptr,
+-		size_t __user *, len_ptr)
+-{
+-	struct robust_list_head __user *head;
+-	unsigned long ret;
+-	struct task_struct *p;
+-
+-	if (!futex_cmpxchg_enabled)
+-		return -ENOSYS;
+-
+-	rcu_read_lock();
+-
+-	ret = -ESRCH;
+-	if (!pid)
+-		p = current;
+-	else {
+-		p = find_task_by_vpid(pid);
+-		if (!p)
+-			goto err_unlock;
+-	}
+-
+-	ret = -EPERM;
+-	if (!ptrace_may_access(p, PTRACE_MODE_READ_REALCREDS))
+-		goto err_unlock;
+-
+-	head = p->robust_list;
+-	rcu_read_unlock();
+-
+-	if (put_user(sizeof(*head), len_ptr))
+-		return -EFAULT;
+-	return put_user(head, head_ptr);
+-
+-err_unlock:
+-	rcu_read_unlock();
+-
+-	return ret;
+-}
+-
+-/* Constants for the pending_op argument of handle_futex_death */
+-#define HANDLE_DEATH_PENDING	true
+-#define HANDLE_DEATH_LIST	false
+-
+-/*
+- * Process a futex-list entry, check whether it's owned by the
+- * dying task, and do notification if so:
+- */
+-static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr,
+-			      bool pi, bool pending_op)
+-{
+-	u32 uval, nval, mval;
+-	int err;
+-
+-	/* Futex address must be 32bit aligned */
+-	if ((((unsigned long)uaddr) % sizeof(*uaddr)) != 0)
+-		return -1;
+-
+-retry:
+-	if (get_user(uval, uaddr))
+-		return -1;
+-
+-	/*
+-	 * Special case for regular (non PI) futexes. The unlock path in
+-	 * user space has two race scenarios:
+-	 *
+-	 * 1. The unlock path releases the user space futex value and
+-	 *    before it can execute the futex() syscall to wake up
+-	 *    waiters it is killed.
+-	 *
+-	 * 2. A woken up waiter is killed before it can acquire the
+-	 *    futex in user space.
+-	 *
+-	 * In both cases the TID validation below prevents a wakeup of
+-	 * potential waiters which can cause these waiters to block
+-	 * forever.
+-	 *
+-	 * In both cases the following conditions are met:
+-	 *
+-	 *	1) task->robust_list->list_op_pending != NULL
+-	 *	   @pending_op == true
+-	 *	2) User space futex value == 0
+-	 *	3) Regular futex: @pi == false
+-	 *
+-	 * If these conditions are met, it is safe to attempt waking up a
+-	 * potential waiter without touching the user space futex value and
+-	 * trying to set the OWNER_DIED bit. The user space futex value is
+-	 * uncontended and the rest of the user space mutex state is
+-	 * consistent, so a woken waiter will just take over the
+-	 * uncontended futex. Setting the OWNER_DIED bit would create
+-	 * inconsistent state and malfunction of the user space owner died
+-	 * handling.
+-	 */
+-	if (pending_op && !pi && !uval) {
+-		futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
+-		return 0;
+-	}
+-
+-	if ((uval & FUTEX_TID_MASK) != task_pid_vnr(curr))
+-		return 0;
+-
+-	/*
+-	 * Ok, this dying thread is truly holding a futex
+-	 * of interest. Set the OWNER_DIED bit atomically
+-	 * via cmpxchg, and if the value had FUTEX_WAITERS
+-	 * set, wake up a waiter (if any). (We have to do a
+-	 * futex_wake() even if OWNER_DIED is already set -
+-	 * to handle the rare but possible case of recursive
+-	 * thread-death.) The rest of the cleanup is done in
+-	 * userspace.
+-	 */
+-	mval = (uval & FUTEX_WAITERS) | FUTEX_OWNER_DIED;
+-
+-	/*
+-	 * We are not holding a lock here, but we want to have
+-	 * the pagefault_disable/enable() protection because
+-	 * we want to handle the fault gracefully. If the
+-	 * access fails we try to fault in the futex with R/W
+-	 * verification via get_user_pages. get_user() above
+-	 * does not guarantee R/W access. If that fails we
+-	 * give up and leave the futex locked.
+-	 */
+-	if ((err = cmpxchg_futex_value_locked(&nval, uaddr, uval, mval))) {
+-		switch (err) {
+-		case -EFAULT:
+-			if (fault_in_user_writeable(uaddr))
+-				return -1;
+-			goto retry;
+-
+-		case -EAGAIN:
+-			cond_resched();
+-			goto retry;
+-
+-		default:
+-			WARN_ON_ONCE(1);
+-			return err;
+-		}
+-	}
+-
+-	if (nval != uval)
+-		goto retry;
+-
+-	/*
+-	 * Wake robust non-PI futexes here. The wakeup of
+-	 * PI futexes happens in exit_pi_state():
+-	 */
+-	if (!pi && (uval & FUTEX_WAITERS))
+-		futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
+-
+-	return 0;
+-}
+-
+-/*
+- * Fetch a robust-list pointer. Bit 0 signals PI futexes:
+- */
+-static inline int fetch_robust_entry(struct robust_list __user **entry,
+-				     struct robust_list __user * __user *head,
+-				     unsigned int *pi)
+-{
+-	unsigned long uentry;
+-
+-	if (get_user(uentry, (unsigned long __user *)head))
+-		return -EFAULT;
+-
+-	*entry = (void __user *)(uentry & ~1UL);
+-	*pi = uentry & 1;
+-
+-	return 0;
+-}
+-
+-/*
+- * Walk curr->robust_list (very carefully, it's a userspace list!)
+- * and mark any locks found there dead, and notify any waiters.
+- *
+- * We silently return on any sign of list-walking problem.
+- */
+-static void exit_robust_list(struct task_struct *curr)
+-{
+-	struct robust_list_head __user *head = curr->robust_list;
+-	struct robust_list __user *entry, *next_entry, *pending;
+-	unsigned int limit = ROBUST_LIST_LIMIT, pi, pip;
+-	unsigned int next_pi;
+-	unsigned long futex_offset;
+-	int rc;
+-
+-	if (!futex_cmpxchg_enabled)
+-		return;
+-
+-	/*
+-	 * Fetch the list head (which was registered earlier, via
+-	 * sys_set_robust_list()):
+-	 */
+-	if (fetch_robust_entry(&entry, &head->list.next, &pi))
+-		return;
+-	/*
+-	 * Fetch the relative futex offset:
+-	 */
+-	if (get_user(futex_offset, &head->futex_offset))
+-		return;
+-	/*
+-	 * Fetch any possibly pending lock-add first, and handle it
+-	 * if it exists:
+-	 */
+-	if (fetch_robust_entry(&pending, &head->list_op_pending, &pip))
+-		return;
+-
+-	next_entry = NULL;	/* avoid warning with gcc */
+-	while (entry != &head->list) {
+-		/*
+-		 * Fetch the next entry in the list before calling
+-		 * handle_futex_death:
+-		 */
+-		rc = fetch_robust_entry(&next_entry, &entry->next, &next_pi);
+-		/*
+-		 * A pending lock might already be on the list, so
+-		 * don't process it twice:
+-		 */
+-		if (entry != pending) {
+-			if (handle_futex_death((void __user *)entry + futex_offset,
+-						curr, pi, HANDLE_DEATH_LIST))
+-				return;
+-		}
+-		if (rc)
+-			return;
+-		entry = next_entry;
+-		pi = next_pi;
+-		/*
+-		 * Avoid excessively long or circular lists:
+-		 */
+-		if (!--limit)
+-			break;
+-
+-		cond_resched();
+-	}
+-
+-	if (pending) {
+-		handle_futex_death((void __user *)pending + futex_offset,
+-				   curr, pip, HANDLE_DEATH_PENDING);
+-	}
+-}
+-
+-static void futex_cleanup(struct task_struct *tsk)
+-{
+-	if (unlikely(tsk->robust_list)) {
+-		exit_robust_list(tsk);
+-		tsk->robust_list = NULL;
+-	}
+-
+-#ifdef CONFIG_COMPAT
+-	if (unlikely(tsk->compat_robust_list)) {
+-		compat_exit_robust_list(tsk);
+-		tsk->compat_robust_list = NULL;
+-	}
+-#endif
+-
+-	if (unlikely(!list_empty(&tsk->pi_state_list)))
+-		exit_pi_state_list(tsk);
+-}
+-
+-/**
+- * futex_exit_recursive - Set the tasks futex state to FUTEX_STATE_DEAD
+- * @tsk:	task to set the state on
+- *
+- * Set the futex exit state of the task lockless. The futex waiter code
+- * observes that state when a task is exiting and loops until the task has
+- * actually finished the futex cleanup. The worst case for this is that the
+- * waiter runs through the wait loop until the state becomes visible.
+- *
+- * This is called from the recursive fault handling path in do_exit().
+- *
+- * This is best effort. Either the futex exit code has run already or
+- * not. If the OWNER_DIED bit has been set on the futex then the waiter can
+- * take it over. If not, the problem is pushed back to user space. If the
+- * futex exit code did not run yet, then an already queued waiter might
+- * block forever, but there is nothing which can be done about that.
+- */
+-void futex_exit_recursive(struct task_struct *tsk)
+-{
+-	/* If the state is FUTEX_STATE_EXITING then futex_exit_mutex is held */
+-	if (tsk->futex_state == FUTEX_STATE_EXITING)
+-		mutex_unlock(&tsk->futex_exit_mutex);
+-	tsk->futex_state = FUTEX_STATE_DEAD;
+-}
+-
+-static void futex_cleanup_begin(struct task_struct *tsk)
+-{
+-	/*
+-	 * Prevent various race issues against a concurrent incoming waiter
+-	 * including live locks by forcing the waiter to block on
+-	 * tsk->futex_exit_mutex when it observes FUTEX_STATE_EXITING in
+-	 * attach_to_pi_owner().
+-	 */
+-	mutex_lock(&tsk->futex_exit_mutex);
+-
+-	/*
+-	 * Switch the state to FUTEX_STATE_EXITING under tsk->pi_lock.
+-	 *
+-	 * This ensures that all subsequent checks of tsk->futex_state in
+-	 * attach_to_pi_owner() must observe FUTEX_STATE_EXITING with
+-	 * tsk->pi_lock held.
+-	 *
+-	 * It guarantees also that a pi_state which was queued right before
+-	 * the state change under tsk->pi_lock by a concurrent waiter must
+-	 * be observed in exit_pi_state_list().
+-	 */
+-	raw_spin_lock_irq(&tsk->pi_lock);
+-	tsk->futex_state = FUTEX_STATE_EXITING;
+-	raw_spin_unlock_irq(&tsk->pi_lock);
+-}
+-
+-static void futex_cleanup_end(struct task_struct *tsk, int state)
+-{
+-	/*
+-	 * Lockless store. The only side effect is that an observer might
+-	 * take another loop until it becomes visible.
+-	 */
+-	tsk->futex_state = state;
+-	/*
+-	 * Drop the exit protection. This unblocks waiters which observed
+-	 * FUTEX_STATE_EXITING to reevaluate the state.
+-	 */
+-	mutex_unlock(&tsk->futex_exit_mutex);
+-}
+-
+-void futex_exec_release(struct task_struct *tsk)
+-{
+-	/*
+-	 * The state handling is done for consistency, but in the case of
+-	 * exec() there is no way to prevent futher damage as the PID stays
+-	 * the same. But for the unlikely and arguably buggy case that a
+-	 * futex is held on exec(), this provides at least as much state
+-	 * consistency protection which is possible.
+-	 */
+-	futex_cleanup_begin(tsk);
+-	futex_cleanup(tsk);
+-	/*
+-	 * Reset the state to FUTEX_STATE_OK. The task is alive and about
+-	 * exec a new binary.
+-	 */
+-	futex_cleanup_end(tsk, FUTEX_STATE_OK);
+-}
+-
+-void futex_exit_release(struct task_struct *tsk)
+-{
+-	futex_cleanup_begin(tsk);
+-	futex_cleanup(tsk);
+-	futex_cleanup_end(tsk, FUTEX_STATE_DEAD);
+-}
+-
+-long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
+-		u32 __user *uaddr2, u32 val2, u32 val3)
+-{
+-	int cmd = op & FUTEX_CMD_MASK;
+-	unsigned int flags = 0;
+-
+-	if (!(op & FUTEX_PRIVATE_FLAG))
+-		flags |= FLAGS_SHARED;
+-
+-	if (op & FUTEX_CLOCK_REALTIME) {
+-		flags |= FLAGS_CLOCKRT;
+-		if (cmd != FUTEX_WAIT_BITSET &&	cmd != FUTEX_WAIT_REQUEUE_PI)
+-			return -ENOSYS;
+-	}
+-
+-	switch (cmd) {
+-	case FUTEX_LOCK_PI:
+-	case FUTEX_UNLOCK_PI:
+-	case FUTEX_TRYLOCK_PI:
+-	case FUTEX_WAIT_REQUEUE_PI:
+-	case FUTEX_CMP_REQUEUE_PI:
+-		if (!futex_cmpxchg_enabled)
+-			return -ENOSYS;
+-	}
+-
+-	switch (cmd) {
+-	case FUTEX_WAIT:
+-		val3 = FUTEX_BITSET_MATCH_ANY;
+-		fallthrough;
+-	case FUTEX_WAIT_BITSET:
+-		return futex_wait(uaddr, flags, val, timeout, val3);
+-	case FUTEX_WAKE:
+-		val3 = FUTEX_BITSET_MATCH_ANY;
+-		fallthrough;
+-	case FUTEX_WAKE_BITSET:
+-		return futex_wake(uaddr, flags, val, val3);
+-	case FUTEX_REQUEUE:
+-		return futex_requeue(uaddr, flags, uaddr2, val, val2, NULL, 0);
+-	case FUTEX_CMP_REQUEUE:
+-		return futex_requeue(uaddr, flags, uaddr2, val, val2, &val3, 0);
+-	case FUTEX_WAKE_OP:
+-		return futex_wake_op(uaddr, flags, uaddr2, val, val2, val3);
+-	case FUTEX_LOCK_PI:
+-		return futex_lock_pi(uaddr, flags, timeout, 0);
+-	case FUTEX_UNLOCK_PI:
+-		return futex_unlock_pi(uaddr, flags);
+-	case FUTEX_TRYLOCK_PI:
+-		return futex_lock_pi(uaddr, flags, NULL, 1);
+-	case FUTEX_WAIT_REQUEUE_PI:
+-		val3 = FUTEX_BITSET_MATCH_ANY;
+-		return futex_wait_requeue_pi(uaddr, flags, val, timeout, val3,
+-					     uaddr2);
+-	case FUTEX_CMP_REQUEUE_PI:
+-		return futex_requeue(uaddr, flags, uaddr2, val, val2, &val3, 1);
+-	}
+-	return -ENOSYS;
+-}
+-
+-
+-SYSCALL_DEFINE6(futex, u32 __user *, uaddr, int, op, u32, val,
+-		struct __kernel_timespec __user *, utime, u32 __user *, uaddr2,
+-		u32, val3)
+-{
+-	struct timespec64 ts;
+-	ktime_t t, *tp = NULL;
+-	u32 val2 = 0;
+-	int cmd = op & FUTEX_CMD_MASK;
+-
+-	if (utime && (cmd == FUTEX_WAIT || cmd == FUTEX_LOCK_PI ||
+-		      cmd == FUTEX_WAIT_BITSET ||
+-		      cmd == FUTEX_WAIT_REQUEUE_PI)) {
+-		if (unlikely(should_fail_futex(!(op & FUTEX_PRIVATE_FLAG))))
+-			return -EFAULT;
+-		if (get_timespec64(&ts, utime))
+-			return -EFAULT;
+-		if (!timespec64_valid(&ts))
+-			return -EINVAL;
+-
+-		t = timespec64_to_ktime(ts);
+-		if (cmd == FUTEX_WAIT)
+-			t = ktime_add_safe(ktime_get(), t);
+-		else if (cmd != FUTEX_LOCK_PI && !(op & FUTEX_CLOCK_REALTIME))
+-			t = timens_ktime_to_host(CLOCK_MONOTONIC, t);
+-		tp = &t;
+-	}
+-	/*
+-	 * requeue parameter in 'utime' if cmd == FUTEX_*_REQUEUE_*.
+-	 * number of waiters to wake in 'utime' if cmd == FUTEX_WAKE_OP.
+-	 */
+-	if (cmd == FUTEX_REQUEUE || cmd == FUTEX_CMP_REQUEUE ||
+-	    cmd == FUTEX_CMP_REQUEUE_PI || cmd == FUTEX_WAKE_OP)
+-		val2 = (u32) (unsigned long) utime;
+-
+-	return do_futex(uaddr, op, val, tp, uaddr2, val2, val3);
+-}
+-
+-#ifdef CONFIG_COMPAT
+-/*
+- * Fetch a robust-list pointer. Bit 0 signals PI futexes:
+- */
+-static inline int
+-compat_fetch_robust_entry(compat_uptr_t *uentry, struct robust_list __user **entry,
+-		   compat_uptr_t __user *head, unsigned int *pi)
+-{
+-	if (get_user(*uentry, head))
+-		return -EFAULT;
+-
+-	*entry = compat_ptr((*uentry) & ~1);
+-	*pi = (unsigned int)(*uentry) & 1;
+-
+-	return 0;
+-}
+-
+-static void __user *futex_uaddr(struct robust_list __user *entry,
+-				compat_long_t futex_offset)
+-{
+-	compat_uptr_t base = ptr_to_compat(entry);
+-	void __user *uaddr = compat_ptr(base + futex_offset);
+-
+-	return uaddr;
+-}
+-
+-/*
+- * Walk curr->robust_list (very carefully, it's a userspace list!)
+- * and mark any locks found there dead, and notify any waiters.
+- *
+- * We silently return on any sign of list-walking problem.
+- */
+-static void compat_exit_robust_list(struct task_struct *curr)
+-{
+-	struct compat_robust_list_head __user *head = curr->compat_robust_list;
+-	struct robust_list __user *entry, *next_entry, *pending;
+-	unsigned int limit = ROBUST_LIST_LIMIT, pi, pip;
+-	unsigned int next_pi;
+-	compat_uptr_t uentry, next_uentry, upending;
+-	compat_long_t futex_offset;
+-	int rc;
+-
+-	if (!futex_cmpxchg_enabled)
+-		return;
+-
+-	/*
+-	 * Fetch the list head (which was registered earlier, via
+-	 * sys_set_robust_list()):
+-	 */
+-	if (compat_fetch_robust_entry(&uentry, &entry, &head->list.next, &pi))
+-		return;
+-	/*
+-	 * Fetch the relative futex offset:
+-	 */
+-	if (get_user(futex_offset, &head->futex_offset))
+-		return;
+-	/*
+-	 * Fetch any possibly pending lock-add first, and handle it
+-	 * if it exists:
+-	 */
+-	if (compat_fetch_robust_entry(&upending, &pending,
+-			       &head->list_op_pending, &pip))
+-		return;
+-
+-	next_entry = NULL;	/* avoid warning with gcc */
+-	while (entry != (struct robust_list __user *) &head->list) {
+-		/*
+-		 * Fetch the next entry in the list before calling
+-		 * handle_futex_death:
+-		 */
+-		rc = compat_fetch_robust_entry(&next_uentry, &next_entry,
+-			(compat_uptr_t __user *)&entry->next, &next_pi);
+-		/*
+-		 * A pending lock might already be on the list, so
+-		 * dont process it twice:
+-		 */
+-		if (entry != pending) {
+-			void __user *uaddr = futex_uaddr(entry, futex_offset);
+-
+-			if (handle_futex_death(uaddr, curr, pi,
+-					       HANDLE_DEATH_LIST))
+-				return;
+-		}
+-		if (rc)
+-			return;
+-		uentry = next_uentry;
+-		entry = next_entry;
+-		pi = next_pi;
+-		/*
+-		 * Avoid excessively long or circular lists:
+-		 */
+-		if (!--limit)
+-			break;
+-
+-		cond_resched();
+-	}
+-	if (pending) {
+-		void __user *uaddr = futex_uaddr(pending, futex_offset);
+-
+-		handle_futex_death(uaddr, curr, pip, HANDLE_DEATH_PENDING);
+-	}
+-}
+-
+-COMPAT_SYSCALL_DEFINE2(set_robust_list,
+-		struct compat_robust_list_head __user *, head,
+-		compat_size_t, len)
+-{
+-	if (!futex_cmpxchg_enabled)
+-		return -ENOSYS;
+-
+-	if (unlikely(len != sizeof(*head)))
+-		return -EINVAL;
+-
+-	current->compat_robust_list = head;
+-
+-	return 0;
+-}
+-
+-COMPAT_SYSCALL_DEFINE3(get_robust_list, int, pid,
+-			compat_uptr_t __user *, head_ptr,
+-			compat_size_t __user *, len_ptr)
+-{
+-	struct compat_robust_list_head __user *head;
+-	unsigned long ret;
+-	struct task_struct *p;
+-
+-	if (!futex_cmpxchg_enabled)
+-		return -ENOSYS;
+-
+-	rcu_read_lock();
+-
+-	ret = -ESRCH;
+-	if (!pid)
+-		p = current;
+-	else {
+-		p = find_task_by_vpid(pid);
+-		if (!p)
+-			goto err_unlock;
+-	}
+-
+-	ret = -EPERM;
+-	if (!ptrace_may_access(p, PTRACE_MODE_READ_REALCREDS))
+-		goto err_unlock;
+-
+-	head = p->compat_robust_list;
+-	rcu_read_unlock();
+-
+-	if (put_user(sizeof(*head), len_ptr))
+-		return -EFAULT;
+-	return put_user(ptr_to_compat(head), head_ptr);
+-
+-err_unlock:
+-	rcu_read_unlock();
+-
+-	return ret;
+-}
+-#endif /* CONFIG_COMPAT */
+-
+-#ifdef CONFIG_COMPAT_32BIT_TIME
+-SYSCALL_DEFINE6(futex_time32, u32 __user *, uaddr, int, op, u32, val,
+-		struct old_timespec32 __user *, utime, u32 __user *, uaddr2,
+-		u32, val3)
+-{
+-	struct timespec64 ts;
+-	ktime_t t, *tp = NULL;
+-	int val2 = 0;
+-	int cmd = op & FUTEX_CMD_MASK;
+-
+-	if (utime && (cmd == FUTEX_WAIT || cmd == FUTEX_LOCK_PI ||
+-		      cmd == FUTEX_WAIT_BITSET ||
+-		      cmd == FUTEX_WAIT_REQUEUE_PI)) {
+-		if (get_old_timespec32(&ts, utime))
+-			return -EFAULT;
+-		if (!timespec64_valid(&ts))
+-			return -EINVAL;
+-
+-		t = timespec64_to_ktime(ts);
+-		if (cmd == FUTEX_WAIT)
+-			t = ktime_add_safe(ktime_get(), t);
+-		else if (cmd != FUTEX_LOCK_PI && !(op & FUTEX_CLOCK_REALTIME))
+-			t = timens_ktime_to_host(CLOCK_MONOTONIC, t);
+-		tp = &t;
+-	}
+-	if (cmd == FUTEX_REQUEUE || cmd == FUTEX_CMP_REQUEUE ||
+-	    cmd == FUTEX_CMP_REQUEUE_PI || cmd == FUTEX_WAKE_OP)
+-		val2 = (int) (unsigned long) utime;
+-
+-	return do_futex(uaddr, op, val, tp, uaddr2, val2, val3);
+-}
+-#endif /* CONFIG_COMPAT_32BIT_TIME */
+-
+-static void __init futex_detect_cmpxchg(void)
+-{
+-#ifndef CONFIG_HAVE_FUTEX_CMPXCHG
+-	u32 curval;
+-
+-	/*
+-	 * This will fail and we want it. Some arch implementations do
+-	 * runtime detection of the futex_atomic_cmpxchg_inatomic()
+-	 * functionality. We want to know that before we call in any
+-	 * of the complex code paths. Also we want to prevent
+-	 * registration of robust lists in that case. NULL is
+-	 * guaranteed to fault and we get -EFAULT on functional
+-	 * implementation, the non-functional ones will return
+-	 * -ENOSYS.
+-	 */
+-	if (cmpxchg_futex_value_locked(&curval, NULL, 0, 0) == -EFAULT)
+-		futex_cmpxchg_enabled = 1;
+-#endif
+-}
+-
+-static int __init futex_init(void)
+-{
+-	unsigned int futex_shift;
+-	unsigned long i;
+-
+-#if CONFIG_BASE_SMALL
+-	futex_hashsize = 16;
+-#else
+-	futex_hashsize = roundup_pow_of_two(256 * num_possible_cpus());
+-#endif
+-
+-	futex_queues = alloc_large_system_hash("futex", sizeof(*futex_queues),
+-					       futex_hashsize, 0,
+-					       futex_hashsize < 256 ? HASH_SMALL : 0,
+-					       &futex_shift, NULL,
+-					       futex_hashsize, futex_hashsize);
+-	futex_hashsize = 1UL << futex_shift;
+-
+-	futex_detect_cmpxchg();
+-
+-	for (i = 0; i < futex_hashsize; i++) {
+-		atomic_set(&futex_queues[i].waiters, 0);
+-		plist_head_init(&futex_queues[i].chain);
+-		spin_lock_init(&futex_queues[i].lock);
+-	}
+-
+-	return 0;
+-}
+-core_initcall(futex_init);
+diff --git a/kernel/futex/Makefile b/kernel/futex/Makefile
+new file mode 100644
+index 0000000000000..b89ba3fba3437
+--- /dev/null
++++ b/kernel/futex/Makefile
+@@ -0,0 +1,3 @@
++# SPDX-License-Identifier: GPL-2.0
++
++obj-y += core.o
+diff --git a/kernel/futex/core.c b/kernel/futex/core.c
+new file mode 100644
+index 0000000000000..8dd0bc50ac36d
+--- /dev/null
++++ b/kernel/futex/core.c
+@@ -0,0 +1,4048 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ *  Fast Userspace Mutexes (which I call "Futexes!").
++ *  (C) Rusty Russell, IBM 2002
++ *
++ *  Generalized futexes, futex requeueing, misc fixes by Ingo Molnar
++ *  (C) Copyright 2003 Red Hat Inc, All Rights Reserved
++ *
++ *  Removed page pinning, fix privately mapped COW pages and other cleanups
++ *  (C) Copyright 2003, 2004 Jamie Lokier
++ *
++ *  Robust futex support started by Ingo Molnar
++ *  (C) Copyright 2006 Red Hat Inc, All Rights Reserved
++ *  Thanks to Thomas Gleixner for suggestions, analysis and fixes.
++ *
++ *  PI-futex support started by Ingo Molnar and Thomas Gleixner
++ *  Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
++ *  Copyright (C) 2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com>
++ *
++ *  PRIVATE futexes by Eric Dumazet
++ *  Copyright (C) 2007 Eric Dumazet <dada1@cosmosbay.com>
++ *
++ *  Requeue-PI support by Darren Hart <dvhltc@us.ibm.com>
++ *  Copyright (C) IBM Corporation, 2009
++ *  Thanks to Thomas Gleixner for conceptual design and careful reviews.
++ *
++ *  Thanks to Ben LaHaise for yelling "hashed waitqueues" loudly
++ *  enough at me, Linus for the original (flawed) idea, Matthew
++ *  Kirkwood for proof-of-concept implementation.
++ *
++ *  "The futexes are also cursed."
++ *  "But they come in a choice of three flavours!"
++ */
++#include <linux/compat.h>
++#include <linux/jhash.h>
++#include <linux/pagemap.h>
++#include <linux/syscalls.h>
++#include <linux/freezer.h>
++#include <linux/memblock.h>
++#include <linux/fault-inject.h>
++#include <linux/time_namespace.h>
++
++#include <asm/futex.h>
++
++#include "../locking/rtmutex_common.h"
++
++/*
++ * READ this before attempting to hack on futexes!
++ *
++ * Basic futex operation and ordering guarantees
++ * =============================================
++ *
++ * The waiter reads the futex value in user space and calls
++ * futex_wait(). This function computes the hash bucket and acquires
++ * the hash bucket lock. After that it reads the futex user space value
++ * again and verifies that the data has not changed. If it has not changed
++ * it enqueues itself into the hash bucket, releases the hash bucket lock
++ * and schedules.
++ *
++ * The waker side modifies the user space value of the futex and calls
++ * futex_wake(). This function computes the hash bucket and acquires the
++ * hash bucket lock. Then it looks for waiters on that futex in the hash
++ * bucket and wakes them.
++ *
++ * In futex wake up scenarios where no tasks are blocked on a futex, taking
++ * the hb spinlock can be avoided and simply return. In order for this
++ * optimization to work, ordering guarantees must exist so that the waiter
++ * being added to the list is acknowledged when the list is concurrently being
++ * checked by the waker, avoiding scenarios like the following:
++ *
++ * CPU 0                               CPU 1
++ * val = *futex;
++ * sys_futex(WAIT, futex, val);
++ *   futex_wait(futex, val);
++ *   uval = *futex;
++ *                                     *futex = newval;
++ *                                     sys_futex(WAKE, futex);
++ *                                       futex_wake(futex);
++ *                                       if (queue_empty())
++ *                                         return;
++ *   if (uval == val)
++ *      lock(hash_bucket(futex));
++ *      queue();
++ *     unlock(hash_bucket(futex));
++ *     schedule();
++ *
++ * This would cause the waiter on CPU 0 to wait forever because it
++ * missed the transition of the user space value from val to newval
++ * and the waker did not find the waiter in the hash bucket queue.
++ *
++ * The correct serialization ensures that a waiter either observes
++ * the changed user space value before blocking or is woken by a
++ * concurrent waker:
++ *
++ * CPU 0                                 CPU 1
++ * val = *futex;
++ * sys_futex(WAIT, futex, val);
++ *   futex_wait(futex, val);
++ *
++ *   waiters++; (a)
++ *   smp_mb(); (A) <-- paired with -.
++ *                                  |
++ *   lock(hash_bucket(futex));      |
++ *                                  |
++ *   uval = *futex;                 |
++ *                                  |        *futex = newval;
++ *                                  |        sys_futex(WAKE, futex);
++ *                                  |          futex_wake(futex);
++ *                                  |
++ *                                  `--------> smp_mb(); (B)
++ *   if (uval == val)
++ *     queue();
++ *     unlock(hash_bucket(futex));
++ *     schedule();                         if (waiters)
++ *                                           lock(hash_bucket(futex));
++ *   else                                    wake_waiters(futex);
++ *     waiters--; (b)                        unlock(hash_bucket(futex));
++ *
++ * Where (A) orders the waiters increment and the futex value read through
++ * atomic operations (see hb_waiters_inc) and where (B) orders the write
++ * to futex and the waiters read (see hb_waiters_pending()).
++ *
++ * This yields the following case (where X:=waiters, Y:=futex):
++ *
++ *	X = Y = 0
++ *
++ *	w[X]=1		w[Y]=1
++ *	MB		MB
++ *	r[Y]=y		r[X]=x
++ *
++ * Which guarantees that x==0 && y==0 is impossible; which translates back into
++ * the guarantee that we cannot both miss the futex variable change and the
++ * enqueue.
++ *
++ * Note that a new waiter is accounted for in (a) even when it is possible that
++ * the wait call can return error, in which case we backtrack from it in (b).
++ * Refer to the comment in queue_lock().
++ *
++ * Similarly, in order to account for waiters being requeued on another
++ * address we always increment the waiters for the destination bucket before
++ * acquiring the lock. It then decrements them again  after releasing it -
++ * the code that actually moves the futex(es) between hash buckets (requeue_futex)
++ * will do the additional required waiter count housekeeping. This is done for
++ * double_lock_hb() and double_unlock_hb(), respectively.
++ */
++
++#ifdef CONFIG_HAVE_FUTEX_CMPXCHG
++#define futex_cmpxchg_enabled 1
++#else
++static int  __read_mostly futex_cmpxchg_enabled;
++#endif
++
++/*
++ * Futex flags used to encode options to functions and preserve them across
++ * restarts.
++ */
++#ifdef CONFIG_MMU
++# define FLAGS_SHARED		0x01
++#else
++/*
++ * NOMMU does not have per process address space. Let the compiler optimize
++ * code away.
++ */
++# define FLAGS_SHARED		0x00
++#endif
++#define FLAGS_CLOCKRT		0x02
++#define FLAGS_HAS_TIMEOUT	0x04
++
++/*
++ * Priority Inheritance state:
++ */
++struct futex_pi_state {
++	/*
++	 * list of 'owned' pi_state instances - these have to be
++	 * cleaned up in do_exit() if the task exits prematurely:
++	 */
++	struct list_head list;
++
++	/*
++	 * The PI object:
++	 */
++	struct rt_mutex pi_mutex;
++
++	struct task_struct *owner;
++	refcount_t refcount;
++
++	union futex_key key;
++} __randomize_layout;
++
++/**
++ * struct futex_q - The hashed futex queue entry, one per waiting task
++ * @list:		priority-sorted list of tasks waiting on this futex
++ * @task:		the task waiting on the futex
++ * @lock_ptr:		the hash bucket lock
++ * @key:		the key the futex is hashed on
++ * @pi_state:		optional priority inheritance state
++ * @rt_waiter:		rt_waiter storage for use with requeue_pi
++ * @requeue_pi_key:	the requeue_pi target futex key
++ * @bitset:		bitset for the optional bitmasked wakeup
++ *
++ * We use this hashed waitqueue, instead of a normal wait_queue_entry_t, so
++ * we can wake only the relevant ones (hashed queues may be shared).
++ *
++ * A futex_q has a woken state, just like tasks have TASK_RUNNING.
++ * It is considered woken when plist_node_empty(&q->list) || q->lock_ptr == 0.
++ * The order of wakeup is always to make the first condition true, then
++ * the second.
++ *
++ * PI futexes are typically woken before they are removed from the hash list via
++ * the rt_mutex code. See unqueue_me_pi().
++ */
++struct futex_q {
++	struct plist_node list;
++
++	struct task_struct *task;
++	spinlock_t *lock_ptr;
++	union futex_key key;
++	struct futex_pi_state *pi_state;
++	struct rt_mutex_waiter *rt_waiter;
++	union futex_key *requeue_pi_key;
++	u32 bitset;
++} __randomize_layout;
++
++static const struct futex_q futex_q_init = {
++	/* list gets initialized in queue_me()*/
++	.key = FUTEX_KEY_INIT,
++	.bitset = FUTEX_BITSET_MATCH_ANY
++};
++
++/*
++ * Hash buckets are shared by all the futex_keys that hash to the same
++ * location.  Each key may have multiple futex_q structures, one for each task
++ * waiting on a futex.
++ */
++struct futex_hash_bucket {
++	atomic_t waiters;
++	spinlock_t lock;
++	struct plist_head chain;
++} ____cacheline_aligned_in_smp;
++
++/*
++ * The base of the bucket array and its size are always used together
++ * (after initialization only in hash_futex()), so ensure that they
++ * reside in the same cacheline.
++ */
++static struct {
++	struct futex_hash_bucket *queues;
++	unsigned long            hashsize;
++} __futex_data __read_mostly __aligned(2*sizeof(long));
++#define futex_queues   (__futex_data.queues)
++#define futex_hashsize (__futex_data.hashsize)
++
++
++/*
++ * Fault injections for futexes.
++ */
++#ifdef CONFIG_FAIL_FUTEX
++
++static struct {
++	struct fault_attr attr;
++
++	bool ignore_private;
++} fail_futex = {
++	.attr = FAULT_ATTR_INITIALIZER,
++	.ignore_private = false,
++};
++
++static int __init setup_fail_futex(char *str)
++{
++	return setup_fault_attr(&fail_futex.attr, str);
++}
++__setup("fail_futex=", setup_fail_futex);
++
++static bool should_fail_futex(bool fshared)
++{
++	if (fail_futex.ignore_private && !fshared)
++		return false;
++
++	return should_fail(&fail_futex.attr, 1);
++}
++
++#ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
++
++static int __init fail_futex_debugfs(void)
++{
++	umode_t mode = S_IFREG | S_IRUSR | S_IWUSR;
++	struct dentry *dir;
++
++	dir = fault_create_debugfs_attr("fail_futex", NULL,
++					&fail_futex.attr);
++	if (IS_ERR(dir))
++		return PTR_ERR(dir);
++
++	debugfs_create_bool("ignore-private", mode, dir,
++			    &fail_futex.ignore_private);
++	return 0;
++}
++
++late_initcall(fail_futex_debugfs);
++
++#endif /* CONFIG_FAULT_INJECTION_DEBUG_FS */
++
++#else
++static inline bool should_fail_futex(bool fshared)
++{
++	return false;
++}
++#endif /* CONFIG_FAIL_FUTEX */
++
++#ifdef CONFIG_COMPAT
++static void compat_exit_robust_list(struct task_struct *curr);
++#else
++static inline void compat_exit_robust_list(struct task_struct *curr) { }
++#endif
++
++/*
++ * Reflects a new waiter being added to the waitqueue.
++ */
++static inline void hb_waiters_inc(struct futex_hash_bucket *hb)
++{
++#ifdef CONFIG_SMP
++	atomic_inc(&hb->waiters);
++	/*
++	 * Full barrier (A), see the ordering comment above.
++	 */
++	smp_mb__after_atomic();
++#endif
++}
++
++/*
++ * Reflects a waiter being removed from the waitqueue by wakeup
++ * paths.
++ */
++static inline void hb_waiters_dec(struct futex_hash_bucket *hb)
++{
++#ifdef CONFIG_SMP
++	atomic_dec(&hb->waiters);
++#endif
++}
++
++static inline int hb_waiters_pending(struct futex_hash_bucket *hb)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * Full barrier (B), see the ordering comment above.
++	 */
++	smp_mb();
++	return atomic_read(&hb->waiters);
++#else
++	return 1;
++#endif
++}
++
++/**
++ * hash_futex - Return the hash bucket in the global hash
++ * @key:	Pointer to the futex key for which the hash is calculated
++ *
++ * We hash on the keys returned from get_futex_key (see below) and return the
++ * corresponding hash bucket in the global hash.
++ */
++static struct futex_hash_bucket *hash_futex(union futex_key *key)
++{
++	u32 hash = jhash2((u32 *)key, offsetof(typeof(*key), both.offset) / 4,
++			  key->both.offset);
++
++	return &futex_queues[hash & (futex_hashsize - 1)];
++}
++
++
++/**
++ * match_futex - Check whether two futex keys are equal
++ * @key1:	Pointer to key1
++ * @key2:	Pointer to key2
++ *
++ * Return 1 if two futex_keys are equal, 0 otherwise.
++ */
++static inline int match_futex(union futex_key *key1, union futex_key *key2)
++{
++	return (key1 && key2
++		&& key1->both.word == key2->both.word
++		&& key1->both.ptr == key2->both.ptr
++		&& key1->both.offset == key2->both.offset);
++}
++
++enum futex_access {
++	FUTEX_READ,
++	FUTEX_WRITE
++};
++
++/**
++ * futex_setup_timer - set up the sleeping hrtimer.
++ * @time:	ptr to the given timeout value
++ * @timeout:	the hrtimer_sleeper structure to be set up
++ * @flags:	futex flags
++ * @range_ns:	optional range in ns
++ *
++ * Return: Initialized hrtimer_sleeper structure or NULL if no timeout
++ *	   value given
++ */
++static inline struct hrtimer_sleeper *
++futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout,
++		  int flags, u64 range_ns)
++{
++	if (!time)
++		return NULL;
++
++	hrtimer_init_sleeper_on_stack(timeout, (flags & FLAGS_CLOCKRT) ?
++				      CLOCK_REALTIME : CLOCK_MONOTONIC,
++				      HRTIMER_MODE_ABS);
++	/*
++	 * If range_ns is 0, calling hrtimer_set_expires_range_ns() is
++	 * effectively the same as calling hrtimer_set_expires().
++	 */
++	hrtimer_set_expires_range_ns(&timeout->timer, *time, range_ns);
++
++	return timeout;
++}
++
++/*
++ * Generate a machine wide unique identifier for this inode.
++ *
++ * This relies on u64 not wrapping in the life-time of the machine; which with
++ * 1ns resolution means almost 585 years.
++ *
++ * This further relies on the fact that a well formed program will not unmap
++ * the file while it has a (shared) futex waiting on it. This mapping will have
++ * a file reference which pins the mount and inode.
++ *
++ * If for some reason an inode gets evicted and read back in again, it will get
++ * a new sequence number and will _NOT_ match, even though it is the exact same
++ * file.
++ *
++ * It is important that match_futex() will never have a false-positive, esp.
++ * for PI futexes that can mess up the state. The above argues that false-negatives
++ * are only possible for malformed programs.
++ */
++static u64 get_inode_sequence_number(struct inode *inode)
++{
++	static atomic64_t i_seq;
++	u64 old;
++
++	/* Does the inode already have a sequence number? */
++	old = atomic64_read(&inode->i_sequence);
++	if (likely(old))
++		return old;
++
++	for (;;) {
++		u64 new = atomic64_add_return(1, &i_seq);
++		if (WARN_ON_ONCE(!new))
++			continue;
++
++		old = atomic64_cmpxchg_relaxed(&inode->i_sequence, 0, new);
++		if (old)
++			return old;
++		return new;
++	}
++}
++
++/**
++ * get_futex_key() - Get parameters which are the keys for a futex
++ * @uaddr:	virtual address of the futex
++ * @fshared:	false for a PROCESS_PRIVATE futex, true for PROCESS_SHARED
++ * @key:	address where result is stored.
++ * @rw:		mapping needs to be read/write (values: FUTEX_READ,
++ *              FUTEX_WRITE)
++ *
++ * Return: a negative error code or 0
++ *
++ * The key words are stored in @key on success.
++ *
++ * For shared mappings (when @fshared), the key is:
++ *
++ *   ( inode->i_sequence, page->index, offset_within_page )
++ *
++ * [ also see get_inode_sequence_number() ]
++ *
++ * For private mappings (or when !@fshared), the key is:
++ *
++ *   ( current->mm, address, 0 )
++ *
++ * This allows (cross process, where applicable) identification of the futex
++ * without keeping the page pinned for the duration of the FUTEX_WAIT.
++ *
++ * lock_page() might sleep, the caller should not hold a spinlock.
++ */
++static int get_futex_key(u32 __user *uaddr, bool fshared, union futex_key *key,
++			 enum futex_access rw)
++{
++	unsigned long address = (unsigned long)uaddr;
++	struct mm_struct *mm = current->mm;
++	struct page *page, *tail;
++	struct address_space *mapping;
++	int err, ro = 0;
++
++	/*
++	 * The futex address must be "naturally" aligned.
++	 */
++	key->both.offset = address % PAGE_SIZE;
++	if (unlikely((address % sizeof(u32)) != 0))
++		return -EINVAL;
++	address -= key->both.offset;
++
++	if (unlikely(!access_ok(uaddr, sizeof(u32))))
++		return -EFAULT;
++
++	if (unlikely(should_fail_futex(fshared)))
++		return -EFAULT;
++
++	/*
++	 * PROCESS_PRIVATE futexes are fast.
++	 * As the mm cannot disappear under us and the 'key' only needs
++	 * virtual address, we dont even have to find the underlying vma.
++	 * Note : We do have to check 'uaddr' is a valid user address,
++	 *        but access_ok() should be faster than find_vma()
++	 */
++	if (!fshared) {
++		key->private.mm = mm;
++		key->private.address = address;
++		return 0;
++	}
++
++again:
++	/* Ignore any VERIFY_READ mapping (futex common case) */
++	if (unlikely(should_fail_futex(true)))
++		return -EFAULT;
++
++	err = get_user_pages_fast(address, 1, FOLL_WRITE, &page);
++	/*
++	 * If write access is not required (eg. FUTEX_WAIT), try
++	 * and get read-only access.
++	 */
++	if (err == -EFAULT && rw == FUTEX_READ) {
++		err = get_user_pages_fast(address, 1, 0, &page);
++		ro = 1;
++	}
++	if (err < 0)
++		return err;
++	else
++		err = 0;
++
++	/*
++	 * The treatment of mapping from this point on is critical. The page
++	 * lock protects many things but in this context the page lock
++	 * stabilizes mapping, prevents inode freeing in the shared
++	 * file-backed region case and guards against movement to swap cache.
++	 *
++	 * Strictly speaking the page lock is not needed in all cases being
++	 * considered here and page lock forces unnecessarily serialization
++	 * From this point on, mapping will be re-verified if necessary and
++	 * page lock will be acquired only if it is unavoidable
++	 *
++	 * Mapping checks require the head page for any compound page so the
++	 * head page and mapping is looked up now. For anonymous pages, it
++	 * does not matter if the page splits in the future as the key is
++	 * based on the address. For filesystem-backed pages, the tail is
++	 * required as the index of the page determines the key. For
++	 * base pages, there is no tail page and tail == page.
++	 */
++	tail = page;
++	page = compound_head(page);
++	mapping = READ_ONCE(page->mapping);
++
++	/*
++	 * If page->mapping is NULL, then it cannot be a PageAnon
++	 * page; but it might be the ZERO_PAGE or in the gate area or
++	 * in a special mapping (all cases which we are happy to fail);
++	 * or it may have been a good file page when get_user_pages_fast
++	 * found it, but truncated or holepunched or subjected to
++	 * invalidate_complete_page2 before we got the page lock (also
++	 * cases which we are happy to fail).  And we hold a reference,
++	 * so refcount care in invalidate_complete_page's remove_mapping
++	 * prevents drop_caches from setting mapping to NULL beneath us.
++	 *
++	 * The case we do have to guard against is when memory pressure made
++	 * shmem_writepage move it from filecache to swapcache beneath us:
++	 * an unlikely race, but we do need to retry for page->mapping.
++	 */
++	if (unlikely(!mapping)) {
++		int shmem_swizzled;
++
++		/*
++		 * Page lock is required to identify which special case above
++		 * applies. If this is really a shmem page then the page lock
++		 * will prevent unexpected transitions.
++		 */
++		lock_page(page);
++		shmem_swizzled = PageSwapCache(page) || page->mapping;
++		unlock_page(page);
++		put_page(page);
++
++		if (shmem_swizzled)
++			goto again;
++
++		return -EFAULT;
++	}
++
++	/*
++	 * Private mappings are handled in a simple way.
++	 *
++	 * If the futex key is stored on an anonymous page, then the associated
++	 * object is the mm which is implicitly pinned by the calling process.
++	 *
++	 * NOTE: When userspace waits on a MAP_SHARED mapping, even if
++	 * it's a read-only handle, it's expected that futexes attach to
++	 * the object not the particular process.
++	 */
++	if (PageAnon(page)) {
++		/*
++		 * A RO anonymous page will never change and thus doesn't make
++		 * sense for futex operations.
++		 */
++		if (unlikely(should_fail_futex(true)) || ro) {
++			err = -EFAULT;
++			goto out;
++		}
++
++		key->both.offset |= FUT_OFF_MMSHARED; /* ref taken on mm */
++		key->private.mm = mm;
++		key->private.address = address;
++
++	} else {
++		struct inode *inode;
++
++		/*
++		 * The associated futex object in this case is the inode and
++		 * the page->mapping must be traversed. Ordinarily this should
++		 * be stabilised under page lock but it's not strictly
++		 * necessary in this case as we just want to pin the inode, not
++		 * update the radix tree or anything like that.
++		 *
++		 * The RCU read lock is taken as the inode is finally freed
++		 * under RCU. If the mapping still matches expectations then the
++		 * mapping->host can be safely accessed as being a valid inode.
++		 */
++		rcu_read_lock();
++
++		if (READ_ONCE(page->mapping) != mapping) {
++			rcu_read_unlock();
++			put_page(page);
++
++			goto again;
++		}
++
++		inode = READ_ONCE(mapping->host);
++		if (!inode) {
++			rcu_read_unlock();
++			put_page(page);
++
++			goto again;
++		}
++
++		key->both.offset |= FUT_OFF_INODE; /* inode-based key */
++		key->shared.i_seq = get_inode_sequence_number(inode);
++		key->shared.pgoff = page_to_pgoff(tail);
++		rcu_read_unlock();
++	}
++
++out:
++	put_page(page);
++	return err;
++}
++
++/**
++ * fault_in_user_writeable() - Fault in user address and verify RW access
++ * @uaddr:	pointer to faulting user space address
++ *
++ * Slow path to fixup the fault we just took in the atomic write
++ * access to @uaddr.
++ *
++ * We have no generic implementation of a non-destructive write to the
++ * user address. We know that we faulted in the atomic pagefault
++ * disabled section so we can as well avoid the #PF overhead by
++ * calling get_user_pages() right away.
++ */
++static int fault_in_user_writeable(u32 __user *uaddr)
++{
++	struct mm_struct *mm = current->mm;
++	int ret;
++
++	mmap_read_lock(mm);
++	ret = fixup_user_fault(mm, (unsigned long)uaddr,
++			       FAULT_FLAG_WRITE, NULL);
++	mmap_read_unlock(mm);
++
++	return ret < 0 ? ret : 0;
++}
++
++/**
++ * futex_top_waiter() - Return the highest priority waiter on a futex
++ * @hb:		the hash bucket the futex_q's reside in
++ * @key:	the futex key (to distinguish it from other futex futex_q's)
++ *
++ * Must be called with the hb lock held.
++ */
++static struct futex_q *futex_top_waiter(struct futex_hash_bucket *hb,
++					union futex_key *key)
++{
++	struct futex_q *this;
++
++	plist_for_each_entry(this, &hb->chain, list) {
++		if (match_futex(&this->key, key))
++			return this;
++	}
++	return NULL;
++}
++
++static int cmpxchg_futex_value_locked(u32 *curval, u32 __user *uaddr,
++				      u32 uval, u32 newval)
++{
++	int ret;
++
++	pagefault_disable();
++	ret = futex_atomic_cmpxchg_inatomic(curval, uaddr, uval, newval);
++	pagefault_enable();
++
++	return ret;
++}
++
++static int get_futex_value_locked(u32 *dest, u32 __user *from)
++{
++	int ret;
++
++	pagefault_disable();
++	ret = __get_user(*dest, from);
++	pagefault_enable();
++
++	return ret ? -EFAULT : 0;
++}
++
++
++/*
++ * PI code:
++ */
++static int refill_pi_state_cache(void)
++{
++	struct futex_pi_state *pi_state;
++
++	if (likely(current->pi_state_cache))
++		return 0;
++
++	pi_state = kzalloc(sizeof(*pi_state), GFP_KERNEL);
++
++	if (!pi_state)
++		return -ENOMEM;
++
++	INIT_LIST_HEAD(&pi_state->list);
++	/* pi_mutex gets initialized later */
++	pi_state->owner = NULL;
++	refcount_set(&pi_state->refcount, 1);
++	pi_state->key = FUTEX_KEY_INIT;
++
++	current->pi_state_cache = pi_state;
++
++	return 0;
++}
++
++static struct futex_pi_state *alloc_pi_state(void)
++{
++	struct futex_pi_state *pi_state = current->pi_state_cache;
++
++	WARN_ON(!pi_state);
++	current->pi_state_cache = NULL;
++
++	return pi_state;
++}
++
++static void pi_state_update_owner(struct futex_pi_state *pi_state,
++				  struct task_struct *new_owner)
++{
++	struct task_struct *old_owner = pi_state->owner;
++
++	lockdep_assert_held(&pi_state->pi_mutex.wait_lock);
++
++	if (old_owner) {
++		raw_spin_lock(&old_owner->pi_lock);
++		WARN_ON(list_empty(&pi_state->list));
++		list_del_init(&pi_state->list);
++		raw_spin_unlock(&old_owner->pi_lock);
++	}
++
++	if (new_owner) {
++		raw_spin_lock(&new_owner->pi_lock);
++		WARN_ON(!list_empty(&pi_state->list));
++		list_add(&pi_state->list, &new_owner->pi_state_list);
++		pi_state->owner = new_owner;
++		raw_spin_unlock(&new_owner->pi_lock);
++	}
++}
++
++static void get_pi_state(struct futex_pi_state *pi_state)
++{
++	WARN_ON_ONCE(!refcount_inc_not_zero(&pi_state->refcount));
++}
++
++/*
++ * Drops a reference to the pi_state object and frees or caches it
++ * when the last reference is gone.
++ */
++static void put_pi_state(struct futex_pi_state *pi_state)
++{
++	if (!pi_state)
++		return;
++
++	if (!refcount_dec_and_test(&pi_state->refcount))
++		return;
++
++	/*
++	 * If pi_state->owner is NULL, the owner is most probably dying
++	 * and has cleaned up the pi_state already
++	 */
++	if (pi_state->owner) {
++		unsigned long flags;
++
++		raw_spin_lock_irqsave(&pi_state->pi_mutex.wait_lock, flags);
++		pi_state_update_owner(pi_state, NULL);
++		rt_mutex_proxy_unlock(&pi_state->pi_mutex);
++		raw_spin_unlock_irqrestore(&pi_state->pi_mutex.wait_lock, flags);
++	}
++
++	if (current->pi_state_cache) {
++		kfree(pi_state);
++	} else {
++		/*
++		 * pi_state->list is already empty.
++		 * clear pi_state->owner.
++		 * refcount is at 0 - put it back to 1.
++		 */
++		pi_state->owner = NULL;
++		refcount_set(&pi_state->refcount, 1);
++		current->pi_state_cache = pi_state;
++	}
++}
++
++#ifdef CONFIG_FUTEX_PI
++
++/*
++ * This task is holding PI mutexes at exit time => bad.
++ * Kernel cleans up PI-state, but userspace is likely hosed.
++ * (Robust-futex cleanup is separate and might save the day for userspace.)
++ */
++static void exit_pi_state_list(struct task_struct *curr)
++{
++	struct list_head *next, *head = &curr->pi_state_list;
++	struct futex_pi_state *pi_state;
++	struct futex_hash_bucket *hb;
++	union futex_key key = FUTEX_KEY_INIT;
++
++	if (!futex_cmpxchg_enabled)
++		return;
++	/*
++	 * We are a ZOMBIE and nobody can enqueue itself on
++	 * pi_state_list anymore, but we have to be careful
++	 * versus waiters unqueueing themselves:
++	 */
++	raw_spin_lock_irq(&curr->pi_lock);
++	while (!list_empty(head)) {
++		next = head->next;
++		pi_state = list_entry(next, struct futex_pi_state, list);
++		key = pi_state->key;
++		hb = hash_futex(&key);
++
++		/*
++		 * We can race against put_pi_state() removing itself from the
++		 * list (a waiter going away). put_pi_state() will first
++		 * decrement the reference count and then modify the list, so
++		 * its possible to see the list entry but fail this reference
++		 * acquire.
++		 *
++		 * In that case; drop the locks to let put_pi_state() make
++		 * progress and retry the loop.
++		 */
++		if (!refcount_inc_not_zero(&pi_state->refcount)) {
++			raw_spin_unlock_irq(&curr->pi_lock);
++			cpu_relax();
++			raw_spin_lock_irq(&curr->pi_lock);
++			continue;
++		}
++		raw_spin_unlock_irq(&curr->pi_lock);
++
++		spin_lock(&hb->lock);
++		raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
++		raw_spin_lock(&curr->pi_lock);
++		/*
++		 * We dropped the pi-lock, so re-check whether this
++		 * task still owns the PI-state:
++		 */
++		if (head->next != next) {
++			/* retain curr->pi_lock for the loop invariant */
++			raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
++			spin_unlock(&hb->lock);
++			put_pi_state(pi_state);
++			continue;
++		}
++
++		WARN_ON(pi_state->owner != curr);
++		WARN_ON(list_empty(&pi_state->list));
++		list_del_init(&pi_state->list);
++		pi_state->owner = NULL;
++
++		raw_spin_unlock(&curr->pi_lock);
++		raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
++		spin_unlock(&hb->lock);
++
++		rt_mutex_futex_unlock(&pi_state->pi_mutex);
++		put_pi_state(pi_state);
++
++		raw_spin_lock_irq(&curr->pi_lock);
++	}
++	raw_spin_unlock_irq(&curr->pi_lock);
++}
++#else
++static inline void exit_pi_state_list(struct task_struct *curr) { }
++#endif
++
++/*
++ * We need to check the following states:
++ *
++ *      Waiter | pi_state | pi->owner | uTID      | uODIED | ?
++ *
++ * [1]  NULL   | ---      | ---       | 0         | 0/1    | Valid
++ * [2]  NULL   | ---      | ---       | >0        | 0/1    | Valid
++ *
++ * [3]  Found  | NULL     | --        | Any       | 0/1    | Invalid
++ *
++ * [4]  Found  | Found    | NULL      | 0         | 1      | Valid
++ * [5]  Found  | Found    | NULL      | >0        | 1      | Invalid
++ *
++ * [6]  Found  | Found    | task      | 0         | 1      | Valid
++ *
++ * [7]  Found  | Found    | NULL      | Any       | 0      | Invalid
++ *
++ * [8]  Found  | Found    | task      | ==taskTID | 0/1    | Valid
++ * [9]  Found  | Found    | task      | 0         | 0      | Invalid
++ * [10] Found  | Found    | task      | !=taskTID | 0/1    | Invalid
++ *
++ * [1]	Indicates that the kernel can acquire the futex atomically. We
++ *	came here due to a stale FUTEX_WAITERS/FUTEX_OWNER_DIED bit.
++ *
++ * [2]	Valid, if TID does not belong to a kernel thread. If no matching
++ *      thread is found then it indicates that the owner TID has died.
++ *
++ * [3]	Invalid. The waiter is queued on a non PI futex
++ *
++ * [4]	Valid state after exit_robust_list(), which sets the user space
++ *	value to FUTEX_WAITERS | FUTEX_OWNER_DIED.
++ *
++ * [5]	The user space value got manipulated between exit_robust_list()
++ *	and exit_pi_state_list()
++ *
++ * [6]	Valid state after exit_pi_state_list() which sets the new owner in
++ *	the pi_state but cannot access the user space value.
++ *
++ * [7]	pi_state->owner can only be NULL when the OWNER_DIED bit is set.
++ *
++ * [8]	Owner and user space value match
++ *
++ * [9]	There is no transient state which sets the user space TID to 0
++ *	except exit_robust_list(), but this is indicated by the
++ *	FUTEX_OWNER_DIED bit. See [4]
++ *
++ * [10] There is no transient state which leaves owner and user space
++ *	TID out of sync. Except one error case where the kernel is denied
++ *	write access to the user address, see fixup_pi_state_owner().
++ *
++ *
++ * Serialization and lifetime rules:
++ *
++ * hb->lock:
++ *
++ *	hb -> futex_q, relation
++ *	futex_q -> pi_state, relation
++ *
++ *	(cannot be raw because hb can contain arbitrary amount
++ *	 of futex_q's)
++ *
++ * pi_mutex->wait_lock:
++ *
++ *	{uval, pi_state}
++ *
++ *	(and pi_mutex 'obviously')
++ *
++ * p->pi_lock:
++ *
++ *	p->pi_state_list -> pi_state->list, relation
++ *
++ * pi_state->refcount:
++ *
++ *	pi_state lifetime
++ *
++ *
++ * Lock order:
++ *
++ *   hb->lock
++ *     pi_mutex->wait_lock
++ *       p->pi_lock
++ *
++ */
++
++/*
++ * Validate that the existing waiter has a pi_state and sanity check
++ * the pi_state against the user space value. If correct, attach to
++ * it.
++ */
++static int attach_to_pi_state(u32 __user *uaddr, u32 uval,
++			      struct futex_pi_state *pi_state,
++			      struct futex_pi_state **ps)
++{
++	pid_t pid = uval & FUTEX_TID_MASK;
++	u32 uval2;
++	int ret;
++
++	/*
++	 * Userspace might have messed up non-PI and PI futexes [3]
++	 */
++	if (unlikely(!pi_state))
++		return -EINVAL;
++
++	/*
++	 * We get here with hb->lock held, and having found a
++	 * futex_top_waiter(). This means that futex_lock_pi() of said futex_q
++	 * has dropped the hb->lock in between queue_me() and unqueue_me_pi(),
++	 * which in turn means that futex_lock_pi() still has a reference on
++	 * our pi_state.
++	 *
++	 * The waiter holding a reference on @pi_state also protects against
++	 * the unlocked put_pi_state() in futex_unlock_pi(), futex_lock_pi()
++	 * and futex_wait_requeue_pi() as it cannot go to 0 and consequently
++	 * free pi_state before we can take a reference ourselves.
++	 */
++	WARN_ON(!refcount_read(&pi_state->refcount));
++
++	/*
++	 * Now that we have a pi_state, we can acquire wait_lock
++	 * and do the state validation.
++	 */
++	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
++
++	/*
++	 * Since {uval, pi_state} is serialized by wait_lock, and our current
++	 * uval was read without holding it, it can have changed. Verify it
++	 * still is what we expect it to be, otherwise retry the entire
++	 * operation.
++	 */
++	if (get_futex_value_locked(&uval2, uaddr))
++		goto out_efault;
++
++	if (uval != uval2)
++		goto out_eagain;
++
++	/*
++	 * Handle the owner died case:
++	 */
++	if (uval & FUTEX_OWNER_DIED) {
++		/*
++		 * exit_pi_state_list sets owner to NULL and wakes the
++		 * topmost waiter. The task which acquires the
++		 * pi_state->rt_mutex will fixup owner.
++		 */
++		if (!pi_state->owner) {
++			/*
++			 * No pi state owner, but the user space TID
++			 * is not 0. Inconsistent state. [5]
++			 */
++			if (pid)
++				goto out_einval;
++			/*
++			 * Take a ref on the state and return success. [4]
++			 */
++			goto out_attach;
++		}
++
++		/*
++		 * If TID is 0, then either the dying owner has not
++		 * yet executed exit_pi_state_list() or some waiter
++		 * acquired the rtmutex in the pi state, but did not
++		 * yet fixup the TID in user space.
++		 *
++		 * Take a ref on the state and return success. [6]
++		 */
++		if (!pid)
++			goto out_attach;
++	} else {
++		/*
++		 * If the owner died bit is not set, then the pi_state
++		 * must have an owner. [7]
++		 */
++		if (!pi_state->owner)
++			goto out_einval;
++	}
++
++	/*
++	 * Bail out if user space manipulated the futex value. If pi
++	 * state exists then the owner TID must be the same as the
++	 * user space TID. [9/10]
++	 */
++	if (pid != task_pid_vnr(pi_state->owner))
++		goto out_einval;
++
++out_attach:
++	get_pi_state(pi_state);
++	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
++	*ps = pi_state;
++	return 0;
++
++out_einval:
++	ret = -EINVAL;
++	goto out_error;
++
++out_eagain:
++	ret = -EAGAIN;
++	goto out_error;
++
++out_efault:
++	ret = -EFAULT;
++	goto out_error;
++
++out_error:
++	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
++	return ret;
++}
++
++/**
++ * wait_for_owner_exiting - Block until the owner has exited
++ * @ret: owner's current futex lock status
++ * @exiting:	Pointer to the exiting task
++ *
++ * Caller must hold a refcount on @exiting.
++ */
++static void wait_for_owner_exiting(int ret, struct task_struct *exiting)
++{
++	if (ret != -EBUSY) {
++		WARN_ON_ONCE(exiting);
++		return;
++	}
++
++	if (WARN_ON_ONCE(ret == -EBUSY && !exiting))
++		return;
++
++	mutex_lock(&exiting->futex_exit_mutex);
++	/*
++	 * No point in doing state checking here. If the waiter got here
++	 * while the task was in exec()->exec_futex_release() then it can
++	 * have any FUTEX_STATE_* value when the waiter has acquired the
++	 * mutex. OK, if running, EXITING or DEAD if it reached exit()
++	 * already. Highly unlikely and not a problem. Just one more round
++	 * through the futex maze.
++	 */
++	mutex_unlock(&exiting->futex_exit_mutex);
++
++	put_task_struct(exiting);
++}
++
++static int handle_exit_race(u32 __user *uaddr, u32 uval,
++			    struct task_struct *tsk)
++{
++	u32 uval2;
++
++	/*
++	 * If the futex exit state is not yet FUTEX_STATE_DEAD, tell the
++	 * caller that the alleged owner is busy.
++	 */
++	if (tsk && tsk->futex_state != FUTEX_STATE_DEAD)
++		return -EBUSY;
++
++	/*
++	 * Reread the user space value to handle the following situation:
++	 *
++	 * CPU0				CPU1
++	 *
++	 * sys_exit()			sys_futex()
++	 *  do_exit()			 futex_lock_pi()
++	 *                                futex_lock_pi_atomic()
++	 *   exit_signals(tsk)		    No waiters:
++	 *    tsk->flags |= PF_EXITING;	    *uaddr == 0x00000PID
++	 *  mm_release(tsk)		    Set waiter bit
++	 *   exit_robust_list(tsk) {	    *uaddr = 0x80000PID;
++	 *      Set owner died		    attach_to_pi_owner() {
++	 *    *uaddr = 0xC0000000;	     tsk = get_task(PID);
++	 *   }				     if (!tsk->flags & PF_EXITING) {
++	 *  ...				       attach();
++	 *  tsk->futex_state =               } else {
++	 *	FUTEX_STATE_DEAD;              if (tsk->futex_state !=
++	 *					  FUTEX_STATE_DEAD)
++	 *				         return -EAGAIN;
++	 *				       return -ESRCH; <--- FAIL
++	 *				     }
++	 *
++	 * Returning ESRCH unconditionally is wrong here because the
++	 * user space value has been changed by the exiting task.
++	 *
++	 * The same logic applies to the case where the exiting task is
++	 * already gone.
++	 */
++	if (get_futex_value_locked(&uval2, uaddr))
++		return -EFAULT;
++
++	/* If the user space value has changed, try again. */
++	if (uval2 != uval)
++		return -EAGAIN;
++
++	/*
++	 * The exiting task did not have a robust list, the robust list was
++	 * corrupted or the user space value in *uaddr is simply bogus.
++	 * Give up and tell user space.
++	 */
++	return -ESRCH;
++}
++
++/*
++ * Lookup the task for the TID provided from user space and attach to
++ * it after doing proper sanity checks.
++ */
++static int attach_to_pi_owner(u32 __user *uaddr, u32 uval, union futex_key *key,
++			      struct futex_pi_state **ps,
++			      struct task_struct **exiting)
++{
++	pid_t pid = uval & FUTEX_TID_MASK;
++	struct futex_pi_state *pi_state;
++	struct task_struct *p;
++
++	/*
++	 * We are the first waiter - try to look up the real owner and attach
++	 * the new pi_state to it, but bail out when TID = 0 [1]
++	 *
++	 * The !pid check is paranoid. None of the call sites should end up
++	 * with pid == 0, but better safe than sorry. Let the caller retry
++	 */
++	if (!pid)
++		return -EAGAIN;
++	p = find_get_task_by_vpid(pid);
++	if (!p)
++		return handle_exit_race(uaddr, uval, NULL);
++
++	if (unlikely(p->flags & PF_KTHREAD)) {
++		put_task_struct(p);
++		return -EPERM;
++	}
++
++	/*
++	 * We need to look at the task state to figure out, whether the
++	 * task is exiting. To protect against the change of the task state
++	 * in futex_exit_release(), we do this protected by p->pi_lock:
++	 */
++	raw_spin_lock_irq(&p->pi_lock);
++	if (unlikely(p->futex_state != FUTEX_STATE_OK)) {
++		/*
++		 * The task is on the way out. When the futex state is
++		 * FUTEX_STATE_DEAD, we know that the task has finished
++		 * the cleanup:
++		 */
++		int ret = handle_exit_race(uaddr, uval, p);
++
++		raw_spin_unlock_irq(&p->pi_lock);
++		/*
++		 * If the owner task is between FUTEX_STATE_EXITING and
++		 * FUTEX_STATE_DEAD then store the task pointer and keep
++		 * the reference on the task struct. The calling code will
++		 * drop all locks, wait for the task to reach
++		 * FUTEX_STATE_DEAD and then drop the refcount. This is
++		 * required to prevent a live lock when the current task
++		 * preempted the exiting task between the two states.
++		 */
++		if (ret == -EBUSY)
++			*exiting = p;
++		else
++			put_task_struct(p);
++		return ret;
++	}
++
++	/*
++	 * No existing pi state. First waiter. [2]
++	 *
++	 * This creates pi_state, we have hb->lock held, this means nothing can
++	 * observe this state, wait_lock is irrelevant.
++	 */
++	pi_state = alloc_pi_state();
++
++	/*
++	 * Initialize the pi_mutex in locked state and make @p
++	 * the owner of it:
++	 */
++	rt_mutex_init_proxy_locked(&pi_state->pi_mutex, p);
++
++	/* Store the key for possible exit cleanups: */
++	pi_state->key = *key;
++
++	WARN_ON(!list_empty(&pi_state->list));
++	list_add(&pi_state->list, &p->pi_state_list);
++	/*
++	 * Assignment without holding pi_state->pi_mutex.wait_lock is safe
++	 * because there is no concurrency as the object is not published yet.
++	 */
++	pi_state->owner = p;
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	put_task_struct(p);
++
++	*ps = pi_state;
++
++	return 0;
++}
++
++static int lookup_pi_state(u32 __user *uaddr, u32 uval,
++			   struct futex_hash_bucket *hb,
++			   union futex_key *key, struct futex_pi_state **ps,
++			   struct task_struct **exiting)
++{
++	struct futex_q *top_waiter = futex_top_waiter(hb, key);
++
++	/*
++	 * If there is a waiter on that futex, validate it and
++	 * attach to the pi_state when the validation succeeds.
++	 */
++	if (top_waiter)
++		return attach_to_pi_state(uaddr, uval, top_waiter->pi_state, ps);
++
++	/*
++	 * We are the first waiter - try to look up the owner based on
++	 * @uval and attach to it.
++	 */
++	return attach_to_pi_owner(uaddr, uval, key, ps, exiting);
++}
++
++static int lock_pi_update_atomic(u32 __user *uaddr, u32 uval, u32 newval)
++{
++	int err;
++	u32 curval;
++
++	if (unlikely(should_fail_futex(true)))
++		return -EFAULT;
++
++	err = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
++	if (unlikely(err))
++		return err;
++
++	/* If user space value changed, let the caller retry */
++	return curval != uval ? -EAGAIN : 0;
++}
++
++/**
++ * futex_lock_pi_atomic() - Atomic work required to acquire a pi aware futex
++ * @uaddr:		the pi futex user address
++ * @hb:			the pi futex hash bucket
++ * @key:		the futex key associated with uaddr and hb
++ * @ps:			the pi_state pointer where we store the result of the
++ *			lookup
++ * @task:		the task to perform the atomic lock work for.  This will
++ *			be "current" except in the case of requeue pi.
++ * @exiting:		Pointer to store the task pointer of the owner task
++ *			which is in the middle of exiting
++ * @set_waiters:	force setting the FUTEX_WAITERS bit (1) or not (0)
++ *
++ * Return:
++ *  -  0 - ready to wait;
++ *  -  1 - acquired the lock;
++ *  - <0 - error
++ *
++ * The hb->lock and futex_key refs shall be held by the caller.
++ *
++ * @exiting is only set when the return value is -EBUSY. If so, this holds
++ * a refcount on the exiting task on return and the caller needs to drop it
++ * after waiting for the exit to complete.
++ */
++static int futex_lock_pi_atomic(u32 __user *uaddr, struct futex_hash_bucket *hb,
++				union futex_key *key,
++				struct futex_pi_state **ps,
++				struct task_struct *task,
++				struct task_struct **exiting,
++				int set_waiters)
++{
++	u32 uval, newval, vpid = task_pid_vnr(task);
++	struct futex_q *top_waiter;
++	int ret;
++
++	/*
++	 * Read the user space value first so we can validate a few
++	 * things before proceeding further.
++	 */
++	if (get_futex_value_locked(&uval, uaddr))
++		return -EFAULT;
++
++	if (unlikely(should_fail_futex(true)))
++		return -EFAULT;
++
++	/*
++	 * Detect deadlocks.
++	 */
++	if ((unlikely((uval & FUTEX_TID_MASK) == vpid)))
++		return -EDEADLK;
++
++	if ((unlikely(should_fail_futex(true))))
++		return -EDEADLK;
++
++	/*
++	 * Lookup existing state first. If it exists, try to attach to
++	 * its pi_state.
++	 */
++	top_waiter = futex_top_waiter(hb, key);
++	if (top_waiter)
++		return attach_to_pi_state(uaddr, uval, top_waiter->pi_state, ps);
++
++	/*
++	 * No waiter and user TID is 0. We are here because the
++	 * waiters or the owner died bit is set or called from
++	 * requeue_cmp_pi or for whatever reason something took the
++	 * syscall.
++	 */
++	if (!(uval & FUTEX_TID_MASK)) {
++		/*
++		 * We take over the futex. No other waiters and the user space
++		 * TID is 0. We preserve the owner died bit.
++		 */
++		newval = uval & FUTEX_OWNER_DIED;
++		newval |= vpid;
++
++		/* The futex requeue_pi code can enforce the waiters bit */
++		if (set_waiters)
++			newval |= FUTEX_WAITERS;
++
++		ret = lock_pi_update_atomic(uaddr, uval, newval);
++		/* If the take over worked, return 1 */
++		return ret < 0 ? ret : 1;
++	}
++
++	/*
++	 * First waiter. Set the waiters bit before attaching ourself to
++	 * the owner. If owner tries to unlock, it will be forced into
++	 * the kernel and blocked on hb->lock.
++	 */
++	newval = uval | FUTEX_WAITERS;
++	ret = lock_pi_update_atomic(uaddr, uval, newval);
++	if (ret)
++		return ret;
++	/*
++	 * If the update of the user space value succeeded, we try to
++	 * attach to the owner. If that fails, no harm done, we only
++	 * set the FUTEX_WAITERS bit in the user space variable.
++	 */
++	return attach_to_pi_owner(uaddr, newval, key, ps, exiting);
++}
++
++/**
++ * __unqueue_futex() - Remove the futex_q from its futex_hash_bucket
++ * @q:	The futex_q to unqueue
++ *
++ * The q->lock_ptr must not be NULL and must be held by the caller.
++ */
++static void __unqueue_futex(struct futex_q *q)
++{
++	struct futex_hash_bucket *hb;
++
++	if (WARN_ON_SMP(!q->lock_ptr) || WARN_ON(plist_node_empty(&q->list)))
++		return;
++	lockdep_assert_held(q->lock_ptr);
++
++	hb = container_of(q->lock_ptr, struct futex_hash_bucket, lock);
++	plist_del(&q->list, &hb->chain);
++	hb_waiters_dec(hb);
++}
++
++/*
++ * The hash bucket lock must be held when this is called.
++ * Afterwards, the futex_q must not be accessed. Callers
++ * must ensure to later call wake_up_q() for the actual
++ * wakeups to occur.
++ */
++static void mark_wake_futex(struct wake_q_head *wake_q, struct futex_q *q)
++{
++	struct task_struct *p = q->task;
++
++	if (WARN(q->pi_state || q->rt_waiter, "refusing to wake PI futex\n"))
++		return;
++
++	get_task_struct(p);
++	__unqueue_futex(q);
++	/*
++	 * The waiting task can free the futex_q as soon as q->lock_ptr = NULL
++	 * is written, without taking any locks. This is possible in the event
++	 * of a spurious wakeup, for example. A memory barrier is required here
++	 * to prevent the following store to lock_ptr from getting ahead of the
++	 * plist_del in __unqueue_futex().
++	 */
++	smp_store_release(&q->lock_ptr, NULL);
++
++	/*
++	 * Queue the task for later wakeup for after we've released
++	 * the hb->lock.
++	 */
++	wake_q_add_safe(wake_q, p);
++}
++
++/*
++ * Caller must hold a reference on @pi_state.
++ */
++static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_state)
++{
++	u32 curval, newval;
++	struct task_struct *new_owner;
++	bool postunlock = false;
++	DEFINE_WAKE_Q(wake_q);
++	int ret = 0;
++
++	new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
++	if (WARN_ON_ONCE(!new_owner)) {
++		/*
++		 * As per the comment in futex_unlock_pi() this should not happen.
++		 *
++		 * When this happens, give up our locks and try again, giving
++		 * the futex_lock_pi() instance time to complete, either by
++		 * waiting on the rtmutex or removing itself from the futex
++		 * queue.
++		 */
++		ret = -EAGAIN;
++		goto out_unlock;
++	}
++
++	/*
++	 * We pass it to the next owner. The WAITERS bit is always kept
++	 * enabled while there is PI state around. We cleanup the owner
++	 * died bit, because we are the owner.
++	 */
++	newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
++
++	if (unlikely(should_fail_futex(true))) {
++		ret = -EFAULT;
++		goto out_unlock;
++	}
++
++	ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
++	if (!ret && (curval != uval)) {
++		/*
++		 * If a unconditional UNLOCK_PI operation (user space did not
++		 * try the TID->0 transition) raced with a waiter setting the
++		 * FUTEX_WAITERS flag between get_user() and locking the hash
++		 * bucket lock, retry the operation.
++		 */
++		if ((FUTEX_TID_MASK & curval) == uval)
++			ret = -EAGAIN;
++		else
++			ret = -EINVAL;
++	}
++
++	if (!ret) {
++		/*
++		 * This is a point of no return; once we modified the uval
++		 * there is no going back and subsequent operations must
++		 * not fail.
++		 */
++		pi_state_update_owner(pi_state, new_owner);
++		postunlock = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q);
++	}
++
++out_unlock:
++	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
++
++	if (postunlock)
++		rt_mutex_postunlock(&wake_q);
++
++	return ret;
++}
++
++/*
++ * Express the locking dependencies for lockdep:
++ */
++static inline void
++double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
++{
++	if (hb1 <= hb2) {
++		spin_lock(&hb1->lock);
++		if (hb1 < hb2)
++			spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
++	} else { /* hb1 > hb2 */
++		spin_lock(&hb2->lock);
++		spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING);
++	}
++}
++
++static inline void
++double_unlock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
++{
++	spin_unlock(&hb1->lock);
++	if (hb1 != hb2)
++		spin_unlock(&hb2->lock);
++}
++
++/*
++ * Wake up waiters matching bitset queued on this futex (uaddr).
++ */
++static int
++futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
++{
++	struct futex_hash_bucket *hb;
++	struct futex_q *this, *next;
++	union futex_key key = FUTEX_KEY_INIT;
++	int ret;
++	DEFINE_WAKE_Q(wake_q);
++
++	if (!bitset)
++		return -EINVAL;
++
++	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, FUTEX_READ);
++	if (unlikely(ret != 0))
++		return ret;
++
++	hb = hash_futex(&key);
++
++	/* Make sure we really have tasks to wakeup */
++	if (!hb_waiters_pending(hb))
++		return ret;
++
++	spin_lock(&hb->lock);
++
++	plist_for_each_entry_safe(this, next, &hb->chain, list) {
++		if (match_futex (&this->key, &key)) {
++			if (this->pi_state || this->rt_waiter) {
++				ret = -EINVAL;
++				break;
++			}
++
++			/* Check if one of the bits is set in both bitsets */
++			if (!(this->bitset & bitset))
++				continue;
++
++			mark_wake_futex(&wake_q, this);
++			if (++ret >= nr_wake)
++				break;
++		}
++	}
++
++	spin_unlock(&hb->lock);
++	wake_up_q(&wake_q);
++	return ret;
++}
++
++static int futex_atomic_op_inuser(unsigned int encoded_op, u32 __user *uaddr)
++{
++	unsigned int op =	  (encoded_op & 0x70000000) >> 28;
++	unsigned int cmp =	  (encoded_op & 0x0f000000) >> 24;
++	int oparg = sign_extend32((encoded_op & 0x00fff000) >> 12, 11);
++	int cmparg = sign_extend32(encoded_op & 0x00000fff, 11);
++	int oldval, ret;
++
++	if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) {
++		if (oparg < 0 || oparg > 31) {
++			char comm[sizeof(current->comm)];
++			/*
++			 * kill this print and return -EINVAL when userspace
++			 * is sane again
++			 */
++			pr_info_ratelimited("futex_wake_op: %s tries to shift op by %d; fix this program\n",
++					get_task_comm(comm, current), oparg);
++			oparg &= 31;
++		}
++		oparg = 1 << oparg;
++	}
++
++	pagefault_disable();
++	ret = arch_futex_atomic_op_inuser(op, oparg, &oldval, uaddr);
++	pagefault_enable();
++	if (ret)
++		return ret;
++
++	switch (cmp) {
++	case FUTEX_OP_CMP_EQ:
++		return oldval == cmparg;
++	case FUTEX_OP_CMP_NE:
++		return oldval != cmparg;
++	case FUTEX_OP_CMP_LT:
++		return oldval < cmparg;
++	case FUTEX_OP_CMP_GE:
++		return oldval >= cmparg;
++	case FUTEX_OP_CMP_LE:
++		return oldval <= cmparg;
++	case FUTEX_OP_CMP_GT:
++		return oldval > cmparg;
++	default:
++		return -ENOSYS;
++	}
++}
++
++/*
++ * Wake up all waiters hashed on the physical page that is mapped
++ * to this virtual address:
++ */
++static int
++futex_wake_op(u32 __user *uaddr1, unsigned int flags, u32 __user *uaddr2,
++	      int nr_wake, int nr_wake2, int op)
++{
++	union futex_key key1 = FUTEX_KEY_INIT, key2 = FUTEX_KEY_INIT;
++	struct futex_hash_bucket *hb1, *hb2;
++	struct futex_q *this, *next;
++	int ret, op_ret;
++	DEFINE_WAKE_Q(wake_q);
++
++retry:
++	ret = get_futex_key(uaddr1, flags & FLAGS_SHARED, &key1, FUTEX_READ);
++	if (unlikely(ret != 0))
++		return ret;
++	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, FUTEX_WRITE);
++	if (unlikely(ret != 0))
++		return ret;
++
++	hb1 = hash_futex(&key1);
++	hb2 = hash_futex(&key2);
++
++retry_private:
++	double_lock_hb(hb1, hb2);
++	op_ret = futex_atomic_op_inuser(op, uaddr2);
++	if (unlikely(op_ret < 0)) {
++		double_unlock_hb(hb1, hb2);
++
++		if (!IS_ENABLED(CONFIG_MMU) ||
++		    unlikely(op_ret != -EFAULT && op_ret != -EAGAIN)) {
++			/*
++			 * we don't get EFAULT from MMU faults if we don't have
++			 * an MMU, but we might get them from range checking
++			 */
++			ret = op_ret;
++			return ret;
++		}
++
++		if (op_ret == -EFAULT) {
++			ret = fault_in_user_writeable(uaddr2);
++			if (ret)
++				return ret;
++		}
++
++		if (!(flags & FLAGS_SHARED)) {
++			cond_resched();
++			goto retry_private;
++		}
++
++		cond_resched();
++		goto retry;
++	}
++
++	plist_for_each_entry_safe(this, next, &hb1->chain, list) {
++		if (match_futex (&this->key, &key1)) {
++			if (this->pi_state || this->rt_waiter) {
++				ret = -EINVAL;
++				goto out_unlock;
++			}
++			mark_wake_futex(&wake_q, this);
++			if (++ret >= nr_wake)
++				break;
++		}
++	}
++
++	if (op_ret > 0) {
++		op_ret = 0;
++		plist_for_each_entry_safe(this, next, &hb2->chain, list) {
++			if (match_futex (&this->key, &key2)) {
++				if (this->pi_state || this->rt_waiter) {
++					ret = -EINVAL;
++					goto out_unlock;
++				}
++				mark_wake_futex(&wake_q, this);
++				if (++op_ret >= nr_wake2)
++					break;
++			}
++		}
++		ret += op_ret;
++	}
++
++out_unlock:
++	double_unlock_hb(hb1, hb2);
++	wake_up_q(&wake_q);
++	return ret;
++}
++
++/**
++ * requeue_futex() - Requeue a futex_q from one hb to another
++ * @q:		the futex_q to requeue
++ * @hb1:	the source hash_bucket
++ * @hb2:	the target hash_bucket
++ * @key2:	the new key for the requeued futex_q
++ */
++static inline
++void requeue_futex(struct futex_q *q, struct futex_hash_bucket *hb1,
++		   struct futex_hash_bucket *hb2, union futex_key *key2)
++{
++
++	/*
++	 * If key1 and key2 hash to the same bucket, no need to
++	 * requeue.
++	 */
++	if (likely(&hb1->chain != &hb2->chain)) {
++		plist_del(&q->list, &hb1->chain);
++		hb_waiters_dec(hb1);
++		hb_waiters_inc(hb2);
++		plist_add(&q->list, &hb2->chain);
++		q->lock_ptr = &hb2->lock;
++	}
++	q->key = *key2;
++}
++
++/**
++ * requeue_pi_wake_futex() - Wake a task that acquired the lock during requeue
++ * @q:		the futex_q
++ * @key:	the key of the requeue target futex
++ * @hb:		the hash_bucket of the requeue target futex
++ *
++ * During futex_requeue, with requeue_pi=1, it is possible to acquire the
++ * target futex if it is uncontended or via a lock steal.  Set the futex_q key
++ * to the requeue target futex so the waiter can detect the wakeup on the right
++ * futex, but remove it from the hb and NULL the rt_waiter so it can detect
++ * atomic lock acquisition.  Set the q->lock_ptr to the requeue target hb->lock
++ * to protect access to the pi_state to fixup the owner later.  Must be called
++ * with both q->lock_ptr and hb->lock held.
++ */
++static inline
++void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
++			   struct futex_hash_bucket *hb)
++{
++	q->key = *key;
++
++	__unqueue_futex(q);
++
++	WARN_ON(!q->rt_waiter);
++	q->rt_waiter = NULL;
++
++	q->lock_ptr = &hb->lock;
++
++	wake_up_state(q->task, TASK_NORMAL);
++}
++
++/**
++ * futex_proxy_trylock_atomic() - Attempt an atomic lock for the top waiter
++ * @pifutex:		the user address of the to futex
++ * @hb1:		the from futex hash bucket, must be locked by the caller
++ * @hb2:		the to futex hash bucket, must be locked by the caller
++ * @key1:		the from futex key
++ * @key2:		the to futex key
++ * @ps:			address to store the pi_state pointer
++ * @exiting:		Pointer to store the task pointer of the owner task
++ *			which is in the middle of exiting
++ * @set_waiters:	force setting the FUTEX_WAITERS bit (1) or not (0)
++ *
++ * Try and get the lock on behalf of the top waiter if we can do it atomically.
++ * Wake the top waiter if we succeed.  If the caller specified set_waiters,
++ * then direct futex_lock_pi_atomic() to force setting the FUTEX_WAITERS bit.
++ * hb1 and hb2 must be held by the caller.
++ *
++ * @exiting is only set when the return value is -EBUSY. If so, this holds
++ * a refcount on the exiting task on return and the caller needs to drop it
++ * after waiting for the exit to complete.
++ *
++ * Return:
++ *  -  0 - failed to acquire the lock atomically;
++ *  - >0 - acquired the lock, return value is vpid of the top_waiter
++ *  - <0 - error
++ */
++static int
++futex_proxy_trylock_atomic(u32 __user *pifutex, struct futex_hash_bucket *hb1,
++			   struct futex_hash_bucket *hb2, union futex_key *key1,
++			   union futex_key *key2, struct futex_pi_state **ps,
++			   struct task_struct **exiting, int set_waiters)
++{
++	struct futex_q *top_waiter = NULL;
++	u32 curval;
++	int ret, vpid;
++
++	if (get_futex_value_locked(&curval, pifutex))
++		return -EFAULT;
++
++	if (unlikely(should_fail_futex(true)))
++		return -EFAULT;
++
++	/*
++	 * Find the top_waiter and determine if there are additional waiters.
++	 * If the caller intends to requeue more than 1 waiter to pifutex,
++	 * force futex_lock_pi_atomic() to set the FUTEX_WAITERS bit now,
++	 * as we have means to handle the possible fault.  If not, don't set
++	 * the bit unecessarily as it will force the subsequent unlock to enter
++	 * the kernel.
++	 */
++	top_waiter = futex_top_waiter(hb1, key1);
++
++	/* There are no waiters, nothing for us to do. */
++	if (!top_waiter)
++		return 0;
++
++	/* Ensure we requeue to the expected futex. */
++	if (!match_futex(top_waiter->requeue_pi_key, key2))
++		return -EINVAL;
++
++	/*
++	 * Try to take the lock for top_waiter.  Set the FUTEX_WAITERS bit in
++	 * the contended case or if set_waiters is 1.  The pi_state is returned
++	 * in ps in contended cases.
++	 */
++	vpid = task_pid_vnr(top_waiter->task);
++	ret = futex_lock_pi_atomic(pifutex, hb2, key2, ps, top_waiter->task,
++				   exiting, set_waiters);
++	if (ret == 1) {
++		requeue_pi_wake_futex(top_waiter, key2, hb2);
++		return vpid;
++	}
++	return ret;
++}
++
++/**
++ * futex_requeue() - Requeue waiters from uaddr1 to uaddr2
++ * @uaddr1:	source futex user address
++ * @flags:	futex flags (FLAGS_SHARED, etc.)
++ * @uaddr2:	target futex user address
++ * @nr_wake:	number of waiters to wake (must be 1 for requeue_pi)
++ * @nr_requeue:	number of waiters to requeue (0-INT_MAX)
++ * @cmpval:	@uaddr1 expected value (or %NULL)
++ * @requeue_pi:	if we are attempting to requeue from a non-pi futex to a
++ *		pi futex (pi to pi requeue is not supported)
++ *
++ * Requeue waiters on uaddr1 to uaddr2. In the requeue_pi case, try to acquire
++ * uaddr2 atomically on behalf of the top waiter.
++ *
++ * Return:
++ *  - >=0 - on success, the number of tasks requeued or woken;
++ *  -  <0 - on error
++ */
++static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
++			 u32 __user *uaddr2, int nr_wake, int nr_requeue,
++			 u32 *cmpval, int requeue_pi)
++{
++	union futex_key key1 = FUTEX_KEY_INIT, key2 = FUTEX_KEY_INIT;
++	int task_count = 0, ret;
++	struct futex_pi_state *pi_state = NULL;
++	struct futex_hash_bucket *hb1, *hb2;
++	struct futex_q *this, *next;
++	DEFINE_WAKE_Q(wake_q);
++
++	if (nr_wake < 0 || nr_requeue < 0)
++		return -EINVAL;
++
++	/*
++	 * When PI not supported: return -ENOSYS if requeue_pi is true,
++	 * consequently the compiler knows requeue_pi is always false past
++	 * this point which will optimize away all the conditional code
++	 * further down.
++	 */
++	if (!IS_ENABLED(CONFIG_FUTEX_PI) && requeue_pi)
++		return -ENOSYS;
++
++	if (requeue_pi) {
++		/*
++		 * Requeue PI only works on two distinct uaddrs. This
++		 * check is only valid for private futexes. See below.
++		 */
++		if (uaddr1 == uaddr2)
++			return -EINVAL;
++
++		/*
++		 * requeue_pi requires a pi_state, try to allocate it now
++		 * without any locks in case it fails.
++		 */
++		if (refill_pi_state_cache())
++			return -ENOMEM;
++		/*
++		 * requeue_pi must wake as many tasks as it can, up to nr_wake
++		 * + nr_requeue, since it acquires the rt_mutex prior to
++		 * returning to userspace, so as to not leave the rt_mutex with
++		 * waiters and no owner.  However, second and third wake-ups
++		 * cannot be predicted as they involve race conditions with the
++		 * first wake and a fault while looking up the pi_state.  Both
++		 * pthread_cond_signal() and pthread_cond_broadcast() should
++		 * use nr_wake=1.
++		 */
++		if (nr_wake != 1)
++			return -EINVAL;
++	}
++
++retry:
++	ret = get_futex_key(uaddr1, flags & FLAGS_SHARED, &key1, FUTEX_READ);
++	if (unlikely(ret != 0))
++		return ret;
++	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2,
++			    requeue_pi ? FUTEX_WRITE : FUTEX_READ);
++	if (unlikely(ret != 0))
++		return ret;
++
++	/*
++	 * The check above which compares uaddrs is not sufficient for
++	 * shared futexes. We need to compare the keys:
++	 */
++	if (requeue_pi && match_futex(&key1, &key2))
++		return -EINVAL;
++
++	hb1 = hash_futex(&key1);
++	hb2 = hash_futex(&key2);
++
++retry_private:
++	hb_waiters_inc(hb2);
++	double_lock_hb(hb1, hb2);
++
++	if (likely(cmpval != NULL)) {
++		u32 curval;
++
++		ret = get_futex_value_locked(&curval, uaddr1);
++
++		if (unlikely(ret)) {
++			double_unlock_hb(hb1, hb2);
++			hb_waiters_dec(hb2);
++
++			ret = get_user(curval, uaddr1);
++			if (ret)
++				return ret;
++
++			if (!(flags & FLAGS_SHARED))
++				goto retry_private;
++
++			goto retry;
++		}
++		if (curval != *cmpval) {
++			ret = -EAGAIN;
++			goto out_unlock;
++		}
++	}
++
++	if (requeue_pi && (task_count - nr_wake < nr_requeue)) {
++		struct task_struct *exiting = NULL;
++
++		/*
++		 * Attempt to acquire uaddr2 and wake the top waiter. If we
++		 * intend to requeue waiters, force setting the FUTEX_WAITERS
++		 * bit.  We force this here where we are able to easily handle
++		 * faults rather in the requeue loop below.
++		 */
++		ret = futex_proxy_trylock_atomic(uaddr2, hb1, hb2, &key1,
++						 &key2, &pi_state,
++						 &exiting, nr_requeue);
++
++		/*
++		 * At this point the top_waiter has either taken uaddr2 or is
++		 * waiting on it.  If the former, then the pi_state will not
++		 * exist yet, look it up one more time to ensure we have a
++		 * reference to it. If the lock was taken, ret contains the
++		 * vpid of the top waiter task.
++		 * If the lock was not taken, we have pi_state and an initial
++		 * refcount on it. In case of an error we have nothing.
++		 */
++		if (ret > 0) {
++			WARN_ON(pi_state);
++			task_count++;
++			/*
++			 * If we acquired the lock, then the user space value
++			 * of uaddr2 should be vpid. It cannot be changed by
++			 * the top waiter as it is blocked on hb2 lock if it
++			 * tries to do so. If something fiddled with it behind
++			 * our back the pi state lookup might unearth it. So
++			 * we rather use the known value than rereading and
++			 * handing potential crap to lookup_pi_state.
++			 *
++			 * If that call succeeds then we have pi_state and an
++			 * initial refcount on it.
++			 */
++			ret = lookup_pi_state(uaddr2, ret, hb2, &key2,
++					      &pi_state, &exiting);
++		}
++
++		switch (ret) {
++		case 0:
++			/* We hold a reference on the pi state. */
++			break;
++
++			/* If the above failed, then pi_state is NULL */
++		case -EFAULT:
++			double_unlock_hb(hb1, hb2);
++			hb_waiters_dec(hb2);
++			ret = fault_in_user_writeable(uaddr2);
++			if (!ret)
++				goto retry;
++			return ret;
++		case -EBUSY:
++		case -EAGAIN:
++			/*
++			 * Two reasons for this:
++			 * - EBUSY: Owner is exiting and we just wait for the
++			 *   exit to complete.
++			 * - EAGAIN: The user space value changed.
++			 */
++			double_unlock_hb(hb1, hb2);
++			hb_waiters_dec(hb2);
++			/*
++			 * Handle the case where the owner is in the middle of
++			 * exiting. Wait for the exit to complete otherwise
++			 * this task might loop forever, aka. live lock.
++			 */
++			wait_for_owner_exiting(ret, exiting);
++			cond_resched();
++			goto retry;
++		default:
++			goto out_unlock;
++		}
++	}
++
++	plist_for_each_entry_safe(this, next, &hb1->chain, list) {
++		if (task_count - nr_wake >= nr_requeue)
++			break;
++
++		if (!match_futex(&this->key, &key1))
++			continue;
++
++		/*
++		 * FUTEX_WAIT_REQEUE_PI and FUTEX_CMP_REQUEUE_PI should always
++		 * be paired with each other and no other futex ops.
++		 *
++		 * We should never be requeueing a futex_q with a pi_state,
++		 * which is awaiting a futex_unlock_pi().
++		 */
++		if ((requeue_pi && !this->rt_waiter) ||
++		    (!requeue_pi && this->rt_waiter) ||
++		    this->pi_state) {
++			ret = -EINVAL;
++			break;
++		}
++
++		/*
++		 * Wake nr_wake waiters.  For requeue_pi, if we acquired the
++		 * lock, we already woke the top_waiter.  If not, it will be
++		 * woken by futex_unlock_pi().
++		 */
++		if (++task_count <= nr_wake && !requeue_pi) {
++			mark_wake_futex(&wake_q, this);
++			continue;
++		}
++
++		/* Ensure we requeue to the expected futex for requeue_pi. */
++		if (requeue_pi && !match_futex(this->requeue_pi_key, &key2)) {
++			ret = -EINVAL;
++			break;
++		}
++
++		/*
++		 * Requeue nr_requeue waiters and possibly one more in the case
++		 * of requeue_pi if we couldn't acquire the lock atomically.
++		 */
++		if (requeue_pi) {
++			/*
++			 * Prepare the waiter to take the rt_mutex. Take a
++			 * refcount on the pi_state and store the pointer in
++			 * the futex_q object of the waiter.
++			 */
++			get_pi_state(pi_state);
++			this->pi_state = pi_state;
++			ret = rt_mutex_start_proxy_lock(&pi_state->pi_mutex,
++							this->rt_waiter,
++							this->task);
++			if (ret == 1) {
++				/*
++				 * We got the lock. We do neither drop the
++				 * refcount on pi_state nor clear
++				 * this->pi_state because the waiter needs the
++				 * pi_state for cleaning up the user space
++				 * value. It will drop the refcount after
++				 * doing so.
++				 */
++				requeue_pi_wake_futex(this, &key2, hb2);
++				continue;
++			} else if (ret) {
++				/*
++				 * rt_mutex_start_proxy_lock() detected a
++				 * potential deadlock when we tried to queue
++				 * that waiter. Drop the pi_state reference
++				 * which we took above and remove the pointer
++				 * to the state from the waiters futex_q
++				 * object.
++				 */
++				this->pi_state = NULL;
++				put_pi_state(pi_state);
++				/*
++				 * We stop queueing more waiters and let user
++				 * space deal with the mess.
++				 */
++				break;
++			}
++		}
++		requeue_futex(this, hb1, hb2, &key2);
++	}
++
++	/*
++	 * We took an extra initial reference to the pi_state either
++	 * in futex_proxy_trylock_atomic() or in lookup_pi_state(). We
++	 * need to drop it here again.
++	 */
++	put_pi_state(pi_state);
++
++out_unlock:
++	double_unlock_hb(hb1, hb2);
++	wake_up_q(&wake_q);
++	hb_waiters_dec(hb2);
++	return ret ? ret : task_count;
++}
++
++/* The key must be already stored in q->key. */
++static inline struct futex_hash_bucket *queue_lock(struct futex_q *q)
++	__acquires(&hb->lock)
++{
++	struct futex_hash_bucket *hb;
++
++	hb = hash_futex(&q->key);
++
++	/*
++	 * Increment the counter before taking the lock so that
++	 * a potential waker won't miss a to-be-slept task that is
++	 * waiting for the spinlock. This is safe as all queue_lock()
++	 * users end up calling queue_me(). Similarly, for housekeeping,
++	 * decrement the counter at queue_unlock() when some error has
++	 * occurred and we don't end up adding the task to the list.
++	 */
++	hb_waiters_inc(hb); /* implies smp_mb(); (A) */
++
++	q->lock_ptr = &hb->lock;
++
++	spin_lock(&hb->lock);
++	return hb;
++}
++
++static inline void
++queue_unlock(struct futex_hash_bucket *hb)
++	__releases(&hb->lock)
++{
++	spin_unlock(&hb->lock);
++	hb_waiters_dec(hb);
++}
++
++static inline void __queue_me(struct futex_q *q, struct futex_hash_bucket *hb)
++{
++	int prio;
++
++	/*
++	 * The priority used to register this element is
++	 * - either the real thread-priority for the real-time threads
++	 * (i.e. threads with a priority lower than MAX_RT_PRIO)
++	 * - or MAX_RT_PRIO for non-RT threads.
++	 * Thus, all RT-threads are woken first in priority order, and
++	 * the others are woken last, in FIFO order.
++	 */
++	prio = min(current->normal_prio, MAX_RT_PRIO);
++
++	plist_node_init(&q->list, prio);
++	plist_add(&q->list, &hb->chain);
++	q->task = current;
++}
++
++/**
++ * queue_me() - Enqueue the futex_q on the futex_hash_bucket
++ * @q:	The futex_q to enqueue
++ * @hb:	The destination hash bucket
++ *
++ * The hb->lock must be held by the caller, and is released here. A call to
++ * queue_me() is typically paired with exactly one call to unqueue_me().  The
++ * exceptions involve the PI related operations, which may use unqueue_me_pi()
++ * or nothing if the unqueue is done as part of the wake process and the unqueue
++ * state is implicit in the state of woken task (see futex_wait_requeue_pi() for
++ * an example).
++ */
++static inline void queue_me(struct futex_q *q, struct futex_hash_bucket *hb)
++	__releases(&hb->lock)
++{
++	__queue_me(q, hb);
++	spin_unlock(&hb->lock);
++}
++
++/**
++ * unqueue_me() - Remove the futex_q from its futex_hash_bucket
++ * @q:	The futex_q to unqueue
++ *
++ * The q->lock_ptr must not be held by the caller. A call to unqueue_me() must
++ * be paired with exactly one earlier call to queue_me().
++ *
++ * Return:
++ *  - 1 - if the futex_q was still queued (and we removed unqueued it);
++ *  - 0 - if the futex_q was already removed by the waking thread
++ */
++static int unqueue_me(struct futex_q *q)
++{
++	spinlock_t *lock_ptr;
++	int ret = 0;
++
++	/* In the common case we don't take the spinlock, which is nice. */
++retry:
++	/*
++	 * q->lock_ptr can change between this read and the following spin_lock.
++	 * Use READ_ONCE to forbid the compiler from reloading q->lock_ptr and
++	 * optimizing lock_ptr out of the logic below.
++	 */
++	lock_ptr = READ_ONCE(q->lock_ptr);
++	if (lock_ptr != NULL) {
++		spin_lock(lock_ptr);
++		/*
++		 * q->lock_ptr can change between reading it and
++		 * spin_lock(), causing us to take the wrong lock.  This
++		 * corrects the race condition.
++		 *
++		 * Reasoning goes like this: if we have the wrong lock,
++		 * q->lock_ptr must have changed (maybe several times)
++		 * between reading it and the spin_lock().  It can
++		 * change again after the spin_lock() but only if it was
++		 * already changed before the spin_lock().  It cannot,
++		 * however, change back to the original value.  Therefore
++		 * we can detect whether we acquired the correct lock.
++		 */
++		if (unlikely(lock_ptr != q->lock_ptr)) {
++			spin_unlock(lock_ptr);
++			goto retry;
++		}
++		__unqueue_futex(q);
++
++		BUG_ON(q->pi_state);
++
++		spin_unlock(lock_ptr);
++		ret = 1;
++	}
++
++	return ret;
++}
++
++/*
++ * PI futexes can not be requeued and must remove themself from the
++ * hash bucket. The hash bucket lock (i.e. lock_ptr) is held on entry
++ * and dropped here.
++ */
++static void unqueue_me_pi(struct futex_q *q)
++	__releases(q->lock_ptr)
++{
++	__unqueue_futex(q);
++
++	BUG_ON(!q->pi_state);
++	put_pi_state(q->pi_state);
++	q->pi_state = NULL;
++
++	spin_unlock(q->lock_ptr);
++}
++
++static int __fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
++				  struct task_struct *argowner)
++{
++	struct futex_pi_state *pi_state = q->pi_state;
++	struct task_struct *oldowner, *newowner;
++	u32 uval, curval, newval, newtid;
++	int err = 0;
++
++	oldowner = pi_state->owner;
++
++	/*
++	 * We are here because either:
++	 *
++	 *  - we stole the lock and pi_state->owner needs updating to reflect
++	 *    that (@argowner == current),
++	 *
++	 * or:
++	 *
++	 *  - someone stole our lock and we need to fix things to point to the
++	 *    new owner (@argowner == NULL).
++	 *
++	 * Either way, we have to replace the TID in the user space variable.
++	 * This must be atomic as we have to preserve the owner died bit here.
++	 *
++	 * Note: We write the user space value _before_ changing the pi_state
++	 * because we can fault here. Imagine swapped out pages or a fork
++	 * that marked all the anonymous memory readonly for cow.
++	 *
++	 * Modifying pi_state _before_ the user space value would leave the
++	 * pi_state in an inconsistent state when we fault here, because we
++	 * need to drop the locks to handle the fault. This might be observed
++	 * in the PID check in lookup_pi_state.
++	 */
++retry:
++	if (!argowner) {
++		if (oldowner != current) {
++			/*
++			 * We raced against a concurrent self; things are
++			 * already fixed up. Nothing to do.
++			 */
++			return 0;
++		}
++
++		if (__rt_mutex_futex_trylock(&pi_state->pi_mutex)) {
++			/* We got the lock. pi_state is correct. Tell caller. */
++			return 1;
++		}
++
++		/*
++		 * The trylock just failed, so either there is an owner or
++		 * there is a higher priority waiter than this one.
++		 */
++		newowner = rt_mutex_owner(&pi_state->pi_mutex);
++		/*
++		 * If the higher priority waiter has not yet taken over the
++		 * rtmutex then newowner is NULL. We can't return here with
++		 * that state because it's inconsistent vs. the user space
++		 * state. So drop the locks and try again. It's a valid
++		 * situation and not any different from the other retry
++		 * conditions.
++		 */
++		if (unlikely(!newowner)) {
++			err = -EAGAIN;
++			goto handle_err;
++		}
++	} else {
++		WARN_ON_ONCE(argowner != current);
++		if (oldowner == current) {
++			/*
++			 * We raced against a concurrent self; things are
++			 * already fixed up. Nothing to do.
++			 */
++			return 1;
++		}
++		newowner = argowner;
++	}
++
++	newtid = task_pid_vnr(newowner) | FUTEX_WAITERS;
++	/* Owner died? */
++	if (!pi_state->owner)
++		newtid |= FUTEX_OWNER_DIED;
++
++	err = get_futex_value_locked(&uval, uaddr);
++	if (err)
++		goto handle_err;
++
++	for (;;) {
++		newval = (uval & FUTEX_OWNER_DIED) | newtid;
++
++		err = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
++		if (err)
++			goto handle_err;
++
++		if (curval == uval)
++			break;
++		uval = curval;
++	}
++
++	/*
++	 * We fixed up user space. Now we need to fix the pi_state
++	 * itself.
++	 */
++	pi_state_update_owner(pi_state, newowner);
++
++	return argowner == current;
++
++	/*
++	 * In order to reschedule or handle a page fault, we need to drop the
++	 * locks here. In the case of a fault, this gives the other task
++	 * (either the highest priority waiter itself or the task which stole
++	 * the rtmutex) the chance to try the fixup of the pi_state. So once we
++	 * are back from handling the fault we need to check the pi_state after
++	 * reacquiring the locks and before trying to do another fixup. When
++	 * the fixup has been done already we simply return.
++	 *
++	 * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
++	 * drop hb->lock since the caller owns the hb -> futex_q relation.
++	 * Dropping the pi_mutex->wait_lock requires the state revalidate.
++	 */
++handle_err:
++	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
++	spin_unlock(q->lock_ptr);
++
++	switch (err) {
++	case -EFAULT:
++		err = fault_in_user_writeable(uaddr);
++		break;
++
++	case -EAGAIN:
++		cond_resched();
++		err = 0;
++		break;
++
++	default:
++		WARN_ON_ONCE(1);
++		break;
++	}
++
++	spin_lock(q->lock_ptr);
++	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
++
++	/*
++	 * Check if someone else fixed it for us:
++	 */
++	if (pi_state->owner != oldowner)
++		return argowner == current;
++
++	/* Retry if err was -EAGAIN or the fault in succeeded */
++	if (!err)
++		goto retry;
++
++	/*
++	 * fault_in_user_writeable() failed so user state is immutable. At
++	 * best we can make the kernel state consistent but user state will
++	 * be most likely hosed and any subsequent unlock operation will be
++	 * rejected due to PI futex rule [10].
++	 *
++	 * Ensure that the rtmutex owner is also the pi_state owner despite
++	 * the user space value claiming something different. There is no
++	 * point in unlocking the rtmutex if current is the owner as it
++	 * would need to wait until the next waiter has taken the rtmutex
++	 * to guarantee consistent state. Keep it simple. Userspace asked
++	 * for this wreckaged state.
++	 *
++	 * The rtmutex has an owner - either current or some other
++	 * task. See the EAGAIN loop above.
++	 */
++	pi_state_update_owner(pi_state, rt_mutex_owner(&pi_state->pi_mutex));
++
++	return err;
++}
++
++static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
++				struct task_struct *argowner)
++{
++	struct futex_pi_state *pi_state = q->pi_state;
++	int ret;
++
++	lockdep_assert_held(q->lock_ptr);
++
++	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
++	ret = __fixup_pi_state_owner(uaddr, q, argowner);
++	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
++	return ret;
++}
++
++static long futex_wait_restart(struct restart_block *restart);
++
++/**
++ * fixup_owner() - Post lock pi_state and corner case management
++ * @uaddr:	user address of the futex
++ * @q:		futex_q (contains pi_state and access to the rt_mutex)
++ * @locked:	if the attempt to take the rt_mutex succeeded (1) or not (0)
++ *
++ * After attempting to lock an rt_mutex, this function is called to cleanup
++ * the pi_state owner as well as handle race conditions that may allow us to
++ * acquire the lock. Must be called with the hb lock held.
++ *
++ * Return:
++ *  -  1 - success, lock taken;
++ *  -  0 - success, lock not taken;
++ *  - <0 - on error (-EFAULT)
++ */
++static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
++{
++	if (locked) {
++		/*
++		 * Got the lock. We might not be the anticipated owner if we
++		 * did a lock-steal - fix up the PI-state in that case:
++		 *
++		 * Speculative pi_state->owner read (we don't hold wait_lock);
++		 * since we own the lock pi_state->owner == current is the
++		 * stable state, anything else needs more attention.
++		 */
++		if (q->pi_state->owner != current)
++			return fixup_pi_state_owner(uaddr, q, current);
++		return 1;
++	}
++
++	/*
++	 * If we didn't get the lock; check if anybody stole it from us. In
++	 * that case, we need to fix up the uval to point to them instead of
++	 * us, otherwise bad things happen. [10]
++	 *
++	 * Another speculative read; pi_state->owner == current is unstable
++	 * but needs our attention.
++	 */
++	if (q->pi_state->owner == current)
++		return fixup_pi_state_owner(uaddr, q, NULL);
++
++	/*
++	 * Paranoia check. If we did not take the lock, then we should not be
++	 * the owner of the rt_mutex. Warn and establish consistent state.
++	 */
++	if (WARN_ON_ONCE(rt_mutex_owner(&q->pi_state->pi_mutex) == current))
++		return fixup_pi_state_owner(uaddr, q, current);
++
++	return 0;
++}
++
++/**
++ * futex_wait_queue_me() - queue_me() and wait for wakeup, timeout, or signal
++ * @hb:		the futex hash bucket, must be locked by the caller
++ * @q:		the futex_q to queue up on
++ * @timeout:	the prepared hrtimer_sleeper, or null for no timeout
++ */
++static void futex_wait_queue_me(struct futex_hash_bucket *hb, struct futex_q *q,
++				struct hrtimer_sleeper *timeout)
++{
++	/*
++	 * The task state is guaranteed to be set before another task can
++	 * wake it. set_current_state() is implemented using smp_store_mb() and
++	 * queue_me() calls spin_unlock() upon completion, both serializing
++	 * access to the hash list and forcing another memory barrier.
++	 */
++	set_current_state(TASK_INTERRUPTIBLE);
++	queue_me(q, hb);
++
++	/* Arm the timer */
++	if (timeout)
++		hrtimer_sleeper_start_expires(timeout, HRTIMER_MODE_ABS);
++
++	/*
++	 * If we have been removed from the hash list, then another task
++	 * has tried to wake us, and we can skip the call to schedule().
++	 */
++	if (likely(!plist_node_empty(&q->list))) {
++		/*
++		 * If the timer has already expired, current will already be
++		 * flagged for rescheduling. Only call schedule if there
++		 * is no timeout, or if it has yet to expire.
++		 */
++		if (!timeout || timeout->task)
++			freezable_schedule();
++	}
++	__set_current_state(TASK_RUNNING);
++}
++
++/**
++ * futex_wait_setup() - Prepare to wait on a futex
++ * @uaddr:	the futex userspace address
++ * @val:	the expected value
++ * @flags:	futex flags (FLAGS_SHARED, etc.)
++ * @q:		the associated futex_q
++ * @hb:		storage for hash_bucket pointer to be returned to caller
++ *
++ * Setup the futex_q and locate the hash_bucket.  Get the futex value and
++ * compare it with the expected value.  Handle atomic faults internally.
++ * Return with the hb lock held and a q.key reference on success, and unlocked
++ * with no q.key reference on failure.
++ *
++ * Return:
++ *  -  0 - uaddr contains val and hb has been locked;
++ *  - <1 - -EFAULT or -EWOULDBLOCK (uaddr does not contain val) and hb is unlocked
++ */
++static int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
++			   struct futex_q *q, struct futex_hash_bucket **hb)
++{
++	u32 uval;
++	int ret;
++
++	/*
++	 * Access the page AFTER the hash-bucket is locked.
++	 * Order is important:
++	 *
++	 *   Userspace waiter: val = var; if (cond(val)) futex_wait(&var, val);
++	 *   Userspace waker:  if (cond(var)) { var = new; futex_wake(&var); }
++	 *
++	 * The basic logical guarantee of a futex is that it blocks ONLY
++	 * if cond(var) is known to be true at the time of blocking, for
++	 * any cond.  If we locked the hash-bucket after testing *uaddr, that
++	 * would open a race condition where we could block indefinitely with
++	 * cond(var) false, which would violate the guarantee.
++	 *
++	 * On the other hand, we insert q and release the hash-bucket only
++	 * after testing *uaddr.  This guarantees that futex_wait() will NOT
++	 * absorb a wakeup if *uaddr does not match the desired values
++	 * while the syscall executes.
++	 */
++retry:
++	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &q->key, FUTEX_READ);
++	if (unlikely(ret != 0))
++		return ret;
++
++retry_private:
++	*hb = queue_lock(q);
++
++	ret = get_futex_value_locked(&uval, uaddr);
++
++	if (ret) {
++		queue_unlock(*hb);
++
++		ret = get_user(uval, uaddr);
++		if (ret)
++			return ret;
++
++		if (!(flags & FLAGS_SHARED))
++			goto retry_private;
++
++		goto retry;
++	}
++
++	if (uval != val) {
++		queue_unlock(*hb);
++		ret = -EWOULDBLOCK;
++	}
++
++	return ret;
++}
++
++static int futex_wait(u32 __user *uaddr, unsigned int flags, u32 val,
++		      ktime_t *abs_time, u32 bitset)
++{
++	struct hrtimer_sleeper timeout, *to;
++	struct restart_block *restart;
++	struct futex_hash_bucket *hb;
++	struct futex_q q = futex_q_init;
++	int ret;
++
++	if (!bitset)
++		return -EINVAL;
++	q.bitset = bitset;
++
++	to = futex_setup_timer(abs_time, &timeout, flags,
++			       current->timer_slack_ns);
++retry:
++	/*
++	 * Prepare to wait on uaddr. On success, holds hb lock and increments
++	 * q.key refs.
++	 */
++	ret = futex_wait_setup(uaddr, val, flags, &q, &hb);
++	if (ret)
++		goto out;
++
++	/* queue_me and wait for wakeup, timeout, or a signal. */
++	futex_wait_queue_me(hb, &q, to);
++
++	/* If we were woken (and unqueued), we succeeded, whatever. */
++	ret = 0;
++	/* unqueue_me() drops q.key ref */
++	if (!unqueue_me(&q))
++		goto out;
++	ret = -ETIMEDOUT;
++	if (to && !to->task)
++		goto out;
++
++	/*
++	 * We expect signal_pending(current), but we might be the
++	 * victim of a spurious wakeup as well.
++	 */
++	if (!signal_pending(current))
++		goto retry;
++
++	ret = -ERESTARTSYS;
++	if (!abs_time)
++		goto out;
++
++	restart = &current->restart_block;
++	restart->futex.uaddr = uaddr;
++	restart->futex.val = val;
++	restart->futex.time = *abs_time;
++	restart->futex.bitset = bitset;
++	restart->futex.flags = flags | FLAGS_HAS_TIMEOUT;
++
++	ret = set_restart_fn(restart, futex_wait_restart);
++
++out:
++	if (to) {
++		hrtimer_cancel(&to->timer);
++		destroy_hrtimer_on_stack(&to->timer);
++	}
++	return ret;
++}
++
++
++static long futex_wait_restart(struct restart_block *restart)
++{
++	u32 __user *uaddr = restart->futex.uaddr;
++	ktime_t t, *tp = NULL;
++
++	if (restart->futex.flags & FLAGS_HAS_TIMEOUT) {
++		t = restart->futex.time;
++		tp = &t;
++	}
++	restart->fn = do_no_restart_syscall;
++
++	return (long)futex_wait(uaddr, restart->futex.flags,
++				restart->futex.val, tp, restart->futex.bitset);
++}
++
++
++/*
++ * Userspace tried a 0 -> TID atomic transition of the futex value
++ * and failed. The kernel side here does the whole locking operation:
++ * if there are waiters then it will block as a consequence of relying
++ * on rt-mutexes, it does PI, etc. (Due to races the kernel might see
++ * a 0 value of the futex too.).
++ *
++ * Also serves as futex trylock_pi()'ing, and due semantics.
++ */
++static int futex_lock_pi(u32 __user *uaddr, unsigned int flags,
++			 ktime_t *time, int trylock)
++{
++	struct hrtimer_sleeper timeout, *to;
++	struct task_struct *exiting = NULL;
++	struct rt_mutex_waiter rt_waiter;
++	struct futex_hash_bucket *hb;
++	struct futex_q q = futex_q_init;
++	int res, ret;
++
++	if (!IS_ENABLED(CONFIG_FUTEX_PI))
++		return -ENOSYS;
++
++	if (refill_pi_state_cache())
++		return -ENOMEM;
++
++	to = futex_setup_timer(time, &timeout, FLAGS_CLOCKRT, 0);
++
++retry:
++	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &q.key, FUTEX_WRITE);
++	if (unlikely(ret != 0))
++		goto out;
++
++retry_private:
++	hb = queue_lock(&q);
++
++	ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current,
++				   &exiting, 0);
++	if (unlikely(ret)) {
++		/*
++		 * Atomic work succeeded and we got the lock,
++		 * or failed. Either way, we do _not_ block.
++		 */
++		switch (ret) {
++		case 1:
++			/* We got the lock. */
++			ret = 0;
++			goto out_unlock_put_key;
++		case -EFAULT:
++			goto uaddr_faulted;
++		case -EBUSY:
++		case -EAGAIN:
++			/*
++			 * Two reasons for this:
++			 * - EBUSY: Task is exiting and we just wait for the
++			 *   exit to complete.
++			 * - EAGAIN: The user space value changed.
++			 */
++			queue_unlock(hb);
++			/*
++			 * Handle the case where the owner is in the middle of
++			 * exiting. Wait for the exit to complete otherwise
++			 * this task might loop forever, aka. live lock.
++			 */
++			wait_for_owner_exiting(ret, exiting);
++			cond_resched();
++			goto retry;
++		default:
++			goto out_unlock_put_key;
++		}
++	}
++
++	WARN_ON(!q.pi_state);
++
++	/*
++	 * Only actually queue now that the atomic ops are done:
++	 */
++	__queue_me(&q, hb);
++
++	if (trylock) {
++		ret = rt_mutex_futex_trylock(&q.pi_state->pi_mutex);
++		/* Fixup the trylock return value: */
++		ret = ret ? 0 : -EWOULDBLOCK;
++		goto no_block;
++	}
++
++	rt_mutex_init_waiter(&rt_waiter);
++
++	/*
++	 * On PREEMPT_RT_FULL, when hb->lock becomes an rt_mutex, we must not
++	 * hold it while doing rt_mutex_start_proxy(), because then it will
++	 * include hb->lock in the blocking chain, even through we'll not in
++	 * fact hold it while blocking. This will lead it to report -EDEADLK
++	 * and BUG when futex_unlock_pi() interleaves with this.
++	 *
++	 * Therefore acquire wait_lock while holding hb->lock, but drop the
++	 * latter before calling __rt_mutex_start_proxy_lock(). This
++	 * interleaves with futex_unlock_pi() -- which does a similar lock
++	 * handoff -- such that the latter can observe the futex_q::pi_state
++	 * before __rt_mutex_start_proxy_lock() is done.
++	 */
++	raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
++	spin_unlock(q.lock_ptr);
++	/*
++	 * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter
++	 * such that futex_unlock_pi() is guaranteed to observe the waiter when
++	 * it sees the futex_q::pi_state.
++	 */
++	ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current);
++	raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock);
++
++	if (ret) {
++		if (ret == 1)
++			ret = 0;
++		goto cleanup;
++	}
++
++	if (unlikely(to))
++		hrtimer_sleeper_start_expires(to, HRTIMER_MODE_ABS);
++
++	ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter);
++
++cleanup:
++	spin_lock(q.lock_ptr);
++	/*
++	 * If we failed to acquire the lock (deadlock/signal/timeout), we must
++	 * first acquire the hb->lock before removing the lock from the
++	 * rt_mutex waitqueue, such that we can keep the hb and rt_mutex wait
++	 * lists consistent.
++	 *
++	 * In particular; it is important that futex_unlock_pi() can not
++	 * observe this inconsistency.
++	 */
++	if (ret && !rt_mutex_cleanup_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter))
++		ret = 0;
++
++no_block:
++	/*
++	 * Fixup the pi_state owner and possibly acquire the lock if we
++	 * haven't already.
++	 */
++	res = fixup_owner(uaddr, &q, !ret);
++	/*
++	 * If fixup_owner() returned an error, proprogate that.  If it acquired
++	 * the lock, clear our -ETIMEDOUT or -EINTR.
++	 */
++	if (res)
++		ret = (res < 0) ? res : 0;
++
++	/* Unqueue and drop the lock */
++	unqueue_me_pi(&q);
++	goto out;
++
++out_unlock_put_key:
++	queue_unlock(hb);
++
++out:
++	if (to) {
++		hrtimer_cancel(&to->timer);
++		destroy_hrtimer_on_stack(&to->timer);
++	}
++	return ret != -EINTR ? ret : -ERESTARTNOINTR;
++
++uaddr_faulted:
++	queue_unlock(hb);
++
++	ret = fault_in_user_writeable(uaddr);
++	if (ret)
++		goto out;
++
++	if (!(flags & FLAGS_SHARED))
++		goto retry_private;
++
++	goto retry;
++}
++
++/*
++ * Userspace attempted a TID -> 0 atomic transition, and failed.
++ * This is the in-kernel slowpath: we look up the PI state (if any),
++ * and do the rt-mutex unlock.
++ */
++static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
++{
++	u32 curval, uval, vpid = task_pid_vnr(current);
++	union futex_key key = FUTEX_KEY_INIT;
++	struct futex_hash_bucket *hb;
++	struct futex_q *top_waiter;
++	int ret;
++
++	if (!IS_ENABLED(CONFIG_FUTEX_PI))
++		return -ENOSYS;
++
++retry:
++	if (get_user(uval, uaddr))
++		return -EFAULT;
++	/*
++	 * We release only a lock we actually own:
++	 */
++	if ((uval & FUTEX_TID_MASK) != vpid)
++		return -EPERM;
++
++	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, FUTEX_WRITE);
++	if (ret)
++		return ret;
++
++	hb = hash_futex(&key);
++	spin_lock(&hb->lock);
++
++	/*
++	 * Check waiters first. We do not trust user space values at
++	 * all and we at least want to know if user space fiddled
++	 * with the futex value instead of blindly unlocking.
++	 */
++	top_waiter = futex_top_waiter(hb, &key);
++	if (top_waiter) {
++		struct futex_pi_state *pi_state = top_waiter->pi_state;
++
++		ret = -EINVAL;
++		if (!pi_state)
++			goto out_unlock;
++
++		/*
++		 * If current does not own the pi_state then the futex is
++		 * inconsistent and user space fiddled with the futex value.
++		 */
++		if (pi_state->owner != current)
++			goto out_unlock;
++
++		get_pi_state(pi_state);
++		/*
++		 * By taking wait_lock while still holding hb->lock, we ensure
++		 * there is no point where we hold neither; and therefore
++		 * wake_futex_pi() must observe a state consistent with what we
++		 * observed.
++		 *
++		 * In particular; this forces __rt_mutex_start_proxy() to
++		 * complete such that we're guaranteed to observe the
++		 * rt_waiter. Also see the WARN in wake_futex_pi().
++		 */
++		raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
++		spin_unlock(&hb->lock);
++
++		/* drops pi_state->pi_mutex.wait_lock */
++		ret = wake_futex_pi(uaddr, uval, pi_state);
++
++		put_pi_state(pi_state);
++
++		/*
++		 * Success, we're done! No tricky corner cases.
++		 */
++		if (!ret)
++			goto out_putkey;
++		/*
++		 * The atomic access to the futex value generated a
++		 * pagefault, so retry the user-access and the wakeup:
++		 */
++		if (ret == -EFAULT)
++			goto pi_faulted;
++		/*
++		 * A unconditional UNLOCK_PI op raced against a waiter
++		 * setting the FUTEX_WAITERS bit. Try again.
++		 */
++		if (ret == -EAGAIN)
++			goto pi_retry;
++		/*
++		 * wake_futex_pi has detected invalid state. Tell user
++		 * space.
++		 */
++		goto out_putkey;
++	}
++
++	/*
++	 * We have no kernel internal state, i.e. no waiters in the
++	 * kernel. Waiters which are about to queue themselves are stuck
++	 * on hb->lock. So we can safely ignore them. We do neither
++	 * preserve the WAITERS bit not the OWNER_DIED one. We are the
++	 * owner.
++	 */
++	if ((ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, 0))) {
++		spin_unlock(&hb->lock);
++		switch (ret) {
++		case -EFAULT:
++			goto pi_faulted;
++
++		case -EAGAIN:
++			goto pi_retry;
++
++		default:
++			WARN_ON_ONCE(1);
++			goto out_putkey;
++		}
++	}
++
++	/*
++	 * If uval has changed, let user space handle it.
++	 */
++	ret = (curval == uval) ? 0 : -EAGAIN;
++
++out_unlock:
++	spin_unlock(&hb->lock);
++out_putkey:
++	return ret;
++
++pi_retry:
++	cond_resched();
++	goto retry;
++
++pi_faulted:
++
++	ret = fault_in_user_writeable(uaddr);
++	if (!ret)
++		goto retry;
++
++	return ret;
++}
++
++/**
++ * handle_early_requeue_pi_wakeup() - Detect early wakeup on the initial futex
++ * @hb:		the hash_bucket futex_q was original enqueued on
++ * @q:		the futex_q woken while waiting to be requeued
++ * @key2:	the futex_key of the requeue target futex
++ * @timeout:	the timeout associated with the wait (NULL if none)
++ *
++ * Detect if the task was woken on the initial futex as opposed to the requeue
++ * target futex.  If so, determine if it was a timeout or a signal that caused
++ * the wakeup and return the appropriate error code to the caller.  Must be
++ * called with the hb lock held.
++ *
++ * Return:
++ *  -  0 = no early wakeup detected;
++ *  - <0 = -ETIMEDOUT or -ERESTARTNOINTR
++ */
++static inline
++int handle_early_requeue_pi_wakeup(struct futex_hash_bucket *hb,
++				   struct futex_q *q, union futex_key *key2,
++				   struct hrtimer_sleeper *timeout)
++{
++	int ret = 0;
++
++	/*
++	 * With the hb lock held, we avoid races while we process the wakeup.
++	 * We only need to hold hb (and not hb2) to ensure atomicity as the
++	 * wakeup code can't change q.key from uaddr to uaddr2 if we hold hb.
++	 * It can't be requeued from uaddr2 to something else since we don't
++	 * support a PI aware source futex for requeue.
++	 */
++	if (!match_futex(&q->key, key2)) {
++		WARN_ON(q->lock_ptr && (&hb->lock != q->lock_ptr));
++		/*
++		 * We were woken prior to requeue by a timeout or a signal.
++		 * Unqueue the futex_q and determine which it was.
++		 */
++		plist_del(&q->list, &hb->chain);
++		hb_waiters_dec(hb);
++
++		/* Handle spurious wakeups gracefully */
++		ret = -EWOULDBLOCK;
++		if (timeout && !timeout->task)
++			ret = -ETIMEDOUT;
++		else if (signal_pending(current))
++			ret = -ERESTARTNOINTR;
++	}
++	return ret;
++}
++
++/**
++ * futex_wait_requeue_pi() - Wait on uaddr and take uaddr2
++ * @uaddr:	the futex we initially wait on (non-pi)
++ * @flags:	futex flags (FLAGS_SHARED, FLAGS_CLOCKRT, etc.), they must be
++ *		the same type, no requeueing from private to shared, etc.
++ * @val:	the expected value of uaddr
++ * @abs_time:	absolute timeout
++ * @bitset:	32 bit wakeup bitset set by userspace, defaults to all
++ * @uaddr2:	the pi futex we will take prior to returning to user-space
++ *
++ * The caller will wait on uaddr and will be requeued by futex_requeue() to
++ * uaddr2 which must be PI aware and unique from uaddr.  Normal wakeup will wake
++ * on uaddr2 and complete the acquisition of the rt_mutex prior to returning to
++ * userspace.  This ensures the rt_mutex maintains an owner when it has waiters;
++ * without one, the pi logic would not know which task to boost/deboost, if
++ * there was a need to.
++ *
++ * We call schedule in futex_wait_queue_me() when we enqueue and return there
++ * via the following--
++ * 1) wakeup on uaddr2 after an atomic lock acquisition by futex_requeue()
++ * 2) wakeup on uaddr2 after a requeue
++ * 3) signal
++ * 4) timeout
++ *
++ * If 3, cleanup and return -ERESTARTNOINTR.
++ *
++ * If 2, we may then block on trying to take the rt_mutex and return via:
++ * 5) successful lock
++ * 6) signal
++ * 7) timeout
++ * 8) other lock acquisition failure
++ *
++ * If 6, return -EWOULDBLOCK (restarting the syscall would do the same).
++ *
++ * If 4 or 7, we cleanup and return with -ETIMEDOUT.
++ *
++ * Return:
++ *  -  0 - On success;
++ *  - <0 - On error
++ */
++static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
++				 u32 val, ktime_t *abs_time, u32 bitset,
++				 u32 __user *uaddr2)
++{
++	struct hrtimer_sleeper timeout, *to;
++	struct rt_mutex_waiter rt_waiter;
++	struct futex_hash_bucket *hb;
++	union futex_key key2 = FUTEX_KEY_INIT;
++	struct futex_q q = futex_q_init;
++	int res, ret;
++
++	if (!IS_ENABLED(CONFIG_FUTEX_PI))
++		return -ENOSYS;
++
++	if (uaddr == uaddr2)
++		return -EINVAL;
++
++	if (!bitset)
++		return -EINVAL;
++
++	to = futex_setup_timer(abs_time, &timeout, flags,
++			       current->timer_slack_ns);
++
++	/*
++	 * The waiter is allocated on our stack, manipulated by the requeue
++	 * code while we sleep on uaddr.
++	 */
++	rt_mutex_init_waiter(&rt_waiter);
++
++	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, FUTEX_WRITE);
++	if (unlikely(ret != 0))
++		goto out;
++
++	q.bitset = bitset;
++	q.rt_waiter = &rt_waiter;
++	q.requeue_pi_key = &key2;
++
++	/*
++	 * Prepare to wait on uaddr. On success, increments q.key (key1) ref
++	 * count.
++	 */
++	ret = futex_wait_setup(uaddr, val, flags, &q, &hb);
++	if (ret)
++		goto out;
++
++	/*
++	 * The check above which compares uaddrs is not sufficient for
++	 * shared futexes. We need to compare the keys:
++	 */
++	if (match_futex(&q.key, &key2)) {
++		queue_unlock(hb);
++		ret = -EINVAL;
++		goto out;
++	}
++
++	/* Queue the futex_q, drop the hb lock, wait for wakeup. */
++	futex_wait_queue_me(hb, &q, to);
++
++	spin_lock(&hb->lock);
++	ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
++	spin_unlock(&hb->lock);
++	if (ret)
++		goto out;
++
++	/*
++	 * In order for us to be here, we know our q.key == key2, and since
++	 * we took the hb->lock above, we also know that futex_requeue() has
++	 * completed and we no longer have to concern ourselves with a wakeup
++	 * race with the atomic proxy lock acquisition by the requeue code. The
++	 * futex_requeue dropped our key1 reference and incremented our key2
++	 * reference count.
++	 */
++
++	/* Check if the requeue code acquired the second futex for us. */
++	if (!q.rt_waiter) {
++		/*
++		 * Got the lock. We might not be the anticipated owner if we
++		 * did a lock-steal - fix up the PI-state in that case.
++		 */
++		if (q.pi_state && (q.pi_state->owner != current)) {
++			spin_lock(q.lock_ptr);
++			ret = fixup_pi_state_owner(uaddr2, &q, current);
++			/*
++			 * Drop the reference to the pi state which
++			 * the requeue_pi() code acquired for us.
++			 */
++			put_pi_state(q.pi_state);
++			spin_unlock(q.lock_ptr);
++			/*
++			 * Adjust the return value. It's either -EFAULT or
++			 * success (1) but the caller expects 0 for success.
++			 */
++			ret = ret < 0 ? ret : 0;
++		}
++	} else {
++		struct rt_mutex *pi_mutex;
++
++		/*
++		 * We have been woken up by futex_unlock_pi(), a timeout, or a
++		 * signal.  futex_unlock_pi() will not destroy the lock_ptr nor
++		 * the pi_state.
++		 */
++		WARN_ON(!q.pi_state);
++		pi_mutex = &q.pi_state->pi_mutex;
++		ret = rt_mutex_wait_proxy_lock(pi_mutex, to, &rt_waiter);
++
++		spin_lock(q.lock_ptr);
++		if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter))
++			ret = 0;
++
++		debug_rt_mutex_free_waiter(&rt_waiter);
++		/*
++		 * Fixup the pi_state owner and possibly acquire the lock if we
++		 * haven't already.
++		 */
++		res = fixup_owner(uaddr2, &q, !ret);
++		/*
++		 * If fixup_owner() returned an error, proprogate that.  If it
++		 * acquired the lock, clear -ETIMEDOUT or -EINTR.
++		 */
++		if (res)
++			ret = (res < 0) ? res : 0;
++
++		/* Unqueue and drop the lock. */
++		unqueue_me_pi(&q);
++	}
++
++	if (ret == -EINTR) {
++		/*
++		 * We've already been requeued, but cannot restart by calling
++		 * futex_lock_pi() directly. We could restart this syscall, but
++		 * it would detect that the user space "val" changed and return
++		 * -EWOULDBLOCK.  Save the overhead of the restart and return
++		 * -EWOULDBLOCK directly.
++		 */
++		ret = -EWOULDBLOCK;
++	}
++
++out:
++	if (to) {
++		hrtimer_cancel(&to->timer);
++		destroy_hrtimer_on_stack(&to->timer);
++	}
++	return ret;
++}
++
++/*
++ * Support for robust futexes: the kernel cleans up held futexes at
++ * thread exit time.
++ *
++ * Implementation: user-space maintains a per-thread list of locks it
++ * is holding. Upon do_exit(), the kernel carefully walks this list,
++ * and marks all locks that are owned by this thread with the
++ * FUTEX_OWNER_DIED bit, and wakes up a waiter (if any). The list is
++ * always manipulated with the lock held, so the list is private and
++ * per-thread. Userspace also maintains a per-thread 'list_op_pending'
++ * field, to allow the kernel to clean up if the thread dies after
++ * acquiring the lock, but just before it could have added itself to
++ * the list. There can only be one such pending lock.
++ */
++
++/**
++ * sys_set_robust_list() - Set the robust-futex list head of a task
++ * @head:	pointer to the list-head
++ * @len:	length of the list-head, as userspace expects
++ */
++SYSCALL_DEFINE2(set_robust_list, struct robust_list_head __user *, head,
++		size_t, len)
++{
++	if (!futex_cmpxchg_enabled)
++		return -ENOSYS;
++	/*
++	 * The kernel knows only one size for now:
++	 */
++	if (unlikely(len != sizeof(*head)))
++		return -EINVAL;
++
++	current->robust_list = head;
++
++	return 0;
++}
++
++/**
++ * sys_get_robust_list() - Get the robust-futex list head of a task
++ * @pid:	pid of the process [zero for current task]
++ * @head_ptr:	pointer to a list-head pointer, the kernel fills it in
++ * @len_ptr:	pointer to a length field, the kernel fills in the header size
++ */
++SYSCALL_DEFINE3(get_robust_list, int, pid,
++		struct robust_list_head __user * __user *, head_ptr,
++		size_t __user *, len_ptr)
++{
++	struct robust_list_head __user *head;
++	unsigned long ret;
++	struct task_struct *p;
++
++	if (!futex_cmpxchg_enabled)
++		return -ENOSYS;
++
++	rcu_read_lock();
++
++	ret = -ESRCH;
++	if (!pid)
++		p = current;
++	else {
++		p = find_task_by_vpid(pid);
++		if (!p)
++			goto err_unlock;
++	}
++
++	ret = -EPERM;
++	if (!ptrace_may_access(p, PTRACE_MODE_READ_REALCREDS))
++		goto err_unlock;
++
++	head = p->robust_list;
++	rcu_read_unlock();
++
++	if (put_user(sizeof(*head), len_ptr))
++		return -EFAULT;
++	return put_user(head, head_ptr);
++
++err_unlock:
++	rcu_read_unlock();
++
++	return ret;
++}
++
++/* Constants for the pending_op argument of handle_futex_death */
++#define HANDLE_DEATH_PENDING	true
++#define HANDLE_DEATH_LIST	false
++
++/*
++ * Process a futex-list entry, check whether it's owned by the
++ * dying task, and do notification if so:
++ */
++static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr,
++			      bool pi, bool pending_op)
++{
++	u32 uval, nval, mval;
++	pid_t owner;
++	int err;
++
++	/* Futex address must be 32bit aligned */
++	if ((((unsigned long)uaddr) % sizeof(*uaddr)) != 0)
++		return -1;
++
++retry:
++	if (get_user(uval, uaddr))
++		return -1;
++
++	/*
++	 * Special case for regular (non PI) futexes. The unlock path in
++	 * user space has two race scenarios:
++	 *
++	 * 1. The unlock path releases the user space futex value and
++	 *    before it can execute the futex() syscall to wake up
++	 *    waiters it is killed.
++	 *
++	 * 2. A woken up waiter is killed before it can acquire the
++	 *    futex in user space.
++	 *
++	 * In the second case, the wake up notification could be generated
++	 * by the unlock path in user space after setting the futex value
++	 * to zero or by the kernel after setting the OWNER_DIED bit below.
++	 *
++	 * In both cases the TID validation below prevents a wakeup of
++	 * potential waiters which can cause these waiters to block
++	 * forever.
++	 *
++	 * In both cases the following conditions are met:
++	 *
++	 *	1) task->robust_list->list_op_pending != NULL
++	 *	   @pending_op == true
++	 *	2) The owner part of user space futex value == 0
++	 *	3) Regular futex: @pi == false
++	 *
++	 * If these conditions are met, it is safe to attempt waking up a
++	 * potential waiter without touching the user space futex value and
++	 * trying to set the OWNER_DIED bit. If the futex value is zero,
++	 * the rest of the user space mutex state is consistent, so a woken
++	 * waiter will just take over the uncontended futex. Setting the
++	 * OWNER_DIED bit would create inconsistent state and malfunction
++	 * of the user space owner died handling. Otherwise, the OWNER_DIED
++	 * bit is already set, and the woken waiter is expected to deal with
++	 * this.
++	 */
++	owner = uval & FUTEX_TID_MASK;
++
++	if (pending_op && !pi && !owner) {
++		futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
++		return 0;
++	}
++
++	if (owner != task_pid_vnr(curr))
++		return 0;
++
++	/*
++	 * Ok, this dying thread is truly holding a futex
++	 * of interest. Set the OWNER_DIED bit atomically
++	 * via cmpxchg, and if the value had FUTEX_WAITERS
++	 * set, wake up a waiter (if any). (We have to do a
++	 * futex_wake() even if OWNER_DIED is already set -
++	 * to handle the rare but possible case of recursive
++	 * thread-death.) The rest of the cleanup is done in
++	 * userspace.
++	 */
++	mval = (uval & FUTEX_WAITERS) | FUTEX_OWNER_DIED;
++
++	/*
++	 * We are not holding a lock here, but we want to have
++	 * the pagefault_disable/enable() protection because
++	 * we want to handle the fault gracefully. If the
++	 * access fails we try to fault in the futex with R/W
++	 * verification via get_user_pages. get_user() above
++	 * does not guarantee R/W access. If that fails we
++	 * give up and leave the futex locked.
++	 */
++	if ((err = cmpxchg_futex_value_locked(&nval, uaddr, uval, mval))) {
++		switch (err) {
++		case -EFAULT:
++			if (fault_in_user_writeable(uaddr))
++				return -1;
++			goto retry;
++
++		case -EAGAIN:
++			cond_resched();
++			goto retry;
++
++		default:
++			WARN_ON_ONCE(1);
++			return err;
++		}
++	}
++
++	if (nval != uval)
++		goto retry;
++
++	/*
++	 * Wake robust non-PI futexes here. The wakeup of
++	 * PI futexes happens in exit_pi_state():
++	 */
++	if (!pi && (uval & FUTEX_WAITERS))
++		futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
++
++	return 0;
++}
++
++/*
++ * Fetch a robust-list pointer. Bit 0 signals PI futexes:
++ */
++static inline int fetch_robust_entry(struct robust_list __user **entry,
++				     struct robust_list __user * __user *head,
++				     unsigned int *pi)
++{
++	unsigned long uentry;
++
++	if (get_user(uentry, (unsigned long __user *)head))
++		return -EFAULT;
++
++	*entry = (void __user *)(uentry & ~1UL);
++	*pi = uentry & 1;
++
++	return 0;
++}
++
++/*
++ * Walk curr->robust_list (very carefully, it's a userspace list!)
++ * and mark any locks found there dead, and notify any waiters.
++ *
++ * We silently return on any sign of list-walking problem.
++ */
++static void exit_robust_list(struct task_struct *curr)
++{
++	struct robust_list_head __user *head = curr->robust_list;
++	struct robust_list __user *entry, *next_entry, *pending;
++	unsigned int limit = ROBUST_LIST_LIMIT, pi, pip;
++	unsigned int next_pi;
++	unsigned long futex_offset;
++	int rc;
++
++	if (!futex_cmpxchg_enabled)
++		return;
++
++	/*
++	 * Fetch the list head (which was registered earlier, via
++	 * sys_set_robust_list()):
++	 */
++	if (fetch_robust_entry(&entry, &head->list.next, &pi))
++		return;
++	/*
++	 * Fetch the relative futex offset:
++	 */
++	if (get_user(futex_offset, &head->futex_offset))
++		return;
++	/*
++	 * Fetch any possibly pending lock-add first, and handle it
++	 * if it exists:
++	 */
++	if (fetch_robust_entry(&pending, &head->list_op_pending, &pip))
++		return;
++
++	next_entry = NULL;	/* avoid warning with gcc */
++	while (entry != &head->list) {
++		/*
++		 * Fetch the next entry in the list before calling
++		 * handle_futex_death:
++		 */
++		rc = fetch_robust_entry(&next_entry, &entry->next, &next_pi);
++		/*
++		 * A pending lock might already be on the list, so
++		 * don't process it twice:
++		 */
++		if (entry != pending) {
++			if (handle_futex_death((void __user *)entry + futex_offset,
++						curr, pi, HANDLE_DEATH_LIST))
++				return;
++		}
++		if (rc)
++			return;
++		entry = next_entry;
++		pi = next_pi;
++		/*
++		 * Avoid excessively long or circular lists:
++		 */
++		if (!--limit)
++			break;
++
++		cond_resched();
++	}
++
++	if (pending) {
++		handle_futex_death((void __user *)pending + futex_offset,
++				   curr, pip, HANDLE_DEATH_PENDING);
++	}
++}
++
++static void futex_cleanup(struct task_struct *tsk)
++{
++	if (unlikely(tsk->robust_list)) {
++		exit_robust_list(tsk);
++		tsk->robust_list = NULL;
++	}
++
++#ifdef CONFIG_COMPAT
++	if (unlikely(tsk->compat_robust_list)) {
++		compat_exit_robust_list(tsk);
++		tsk->compat_robust_list = NULL;
++	}
++#endif
++
++	if (unlikely(!list_empty(&tsk->pi_state_list)))
++		exit_pi_state_list(tsk);
++}
++
++/**
++ * futex_exit_recursive - Set the tasks futex state to FUTEX_STATE_DEAD
++ * @tsk:	task to set the state on
++ *
++ * Set the futex exit state of the task lockless. The futex waiter code
++ * observes that state when a task is exiting and loops until the task has
++ * actually finished the futex cleanup. The worst case for this is that the
++ * waiter runs through the wait loop until the state becomes visible.
++ *
++ * This is called from the recursive fault handling path in do_exit().
++ *
++ * This is best effort. Either the futex exit code has run already or
++ * not. If the OWNER_DIED bit has been set on the futex then the waiter can
++ * take it over. If not, the problem is pushed back to user space. If the
++ * futex exit code did not run yet, then an already queued waiter might
++ * block forever, but there is nothing which can be done about that.
++ */
++void futex_exit_recursive(struct task_struct *tsk)
++{
++	/* If the state is FUTEX_STATE_EXITING then futex_exit_mutex is held */
++	if (tsk->futex_state == FUTEX_STATE_EXITING)
++		mutex_unlock(&tsk->futex_exit_mutex);
++	tsk->futex_state = FUTEX_STATE_DEAD;
++}
++
++static void futex_cleanup_begin(struct task_struct *tsk)
++{
++	/*
++	 * Prevent various race issues against a concurrent incoming waiter
++	 * including live locks by forcing the waiter to block on
++	 * tsk->futex_exit_mutex when it observes FUTEX_STATE_EXITING in
++	 * attach_to_pi_owner().
++	 */
++	mutex_lock(&tsk->futex_exit_mutex);
++
++	/*
++	 * Switch the state to FUTEX_STATE_EXITING under tsk->pi_lock.
++	 *
++	 * This ensures that all subsequent checks of tsk->futex_state in
++	 * attach_to_pi_owner() must observe FUTEX_STATE_EXITING with
++	 * tsk->pi_lock held.
++	 *
++	 * It guarantees also that a pi_state which was queued right before
++	 * the state change under tsk->pi_lock by a concurrent waiter must
++	 * be observed in exit_pi_state_list().
++	 */
++	raw_spin_lock_irq(&tsk->pi_lock);
++	tsk->futex_state = FUTEX_STATE_EXITING;
++	raw_spin_unlock_irq(&tsk->pi_lock);
++}
++
++static void futex_cleanup_end(struct task_struct *tsk, int state)
++{
++	/*
++	 * Lockless store. The only side effect is that an observer might
++	 * take another loop until it becomes visible.
++	 */
++	tsk->futex_state = state;
++	/*
++	 * Drop the exit protection. This unblocks waiters which observed
++	 * FUTEX_STATE_EXITING to reevaluate the state.
++	 */
++	mutex_unlock(&tsk->futex_exit_mutex);
++}
++
++void futex_exec_release(struct task_struct *tsk)
++{
++	/*
++	 * The state handling is done for consistency, but in the case of
++	 * exec() there is no way to prevent futher damage as the PID stays
++	 * the same. But for the unlikely and arguably buggy case that a
++	 * futex is held on exec(), this provides at least as much state
++	 * consistency protection which is possible.
++	 */
++	futex_cleanup_begin(tsk);
++	futex_cleanup(tsk);
++	/*
++	 * Reset the state to FUTEX_STATE_OK. The task is alive and about
++	 * exec a new binary.
++	 */
++	futex_cleanup_end(tsk, FUTEX_STATE_OK);
++}
++
++void futex_exit_release(struct task_struct *tsk)
++{
++	futex_cleanup_begin(tsk);
++	futex_cleanup(tsk);
++	futex_cleanup_end(tsk, FUTEX_STATE_DEAD);
++}
++
++long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
++		u32 __user *uaddr2, u32 val2, u32 val3)
++{
++	int cmd = op & FUTEX_CMD_MASK;
++	unsigned int flags = 0;
++
++	if (!(op & FUTEX_PRIVATE_FLAG))
++		flags |= FLAGS_SHARED;
++
++	if (op & FUTEX_CLOCK_REALTIME) {
++		flags |= FLAGS_CLOCKRT;
++		if (cmd != FUTEX_WAIT_BITSET &&	cmd != FUTEX_WAIT_REQUEUE_PI)
++			return -ENOSYS;
++	}
++
++	switch (cmd) {
++	case FUTEX_LOCK_PI:
++	case FUTEX_UNLOCK_PI:
++	case FUTEX_TRYLOCK_PI:
++	case FUTEX_WAIT_REQUEUE_PI:
++	case FUTEX_CMP_REQUEUE_PI:
++		if (!futex_cmpxchg_enabled)
++			return -ENOSYS;
++	}
++
++	switch (cmd) {
++	case FUTEX_WAIT:
++		val3 = FUTEX_BITSET_MATCH_ANY;
++		fallthrough;
++	case FUTEX_WAIT_BITSET:
++		return futex_wait(uaddr, flags, val, timeout, val3);
++	case FUTEX_WAKE:
++		val3 = FUTEX_BITSET_MATCH_ANY;
++		fallthrough;
++	case FUTEX_WAKE_BITSET:
++		return futex_wake(uaddr, flags, val, val3);
++	case FUTEX_REQUEUE:
++		return futex_requeue(uaddr, flags, uaddr2, val, val2, NULL, 0);
++	case FUTEX_CMP_REQUEUE:
++		return futex_requeue(uaddr, flags, uaddr2, val, val2, &val3, 0);
++	case FUTEX_WAKE_OP:
++		return futex_wake_op(uaddr, flags, uaddr2, val, val2, val3);
++	case FUTEX_LOCK_PI:
++		return futex_lock_pi(uaddr, flags, timeout, 0);
++	case FUTEX_UNLOCK_PI:
++		return futex_unlock_pi(uaddr, flags);
++	case FUTEX_TRYLOCK_PI:
++		return futex_lock_pi(uaddr, flags, NULL, 1);
++	case FUTEX_WAIT_REQUEUE_PI:
++		val3 = FUTEX_BITSET_MATCH_ANY;
++		return futex_wait_requeue_pi(uaddr, flags, val, timeout, val3,
++					     uaddr2);
++	case FUTEX_CMP_REQUEUE_PI:
++		return futex_requeue(uaddr, flags, uaddr2, val, val2, &val3, 1);
++	}
++	return -ENOSYS;
++}
++
++
++SYSCALL_DEFINE6(futex, u32 __user *, uaddr, int, op, u32, val,
++		struct __kernel_timespec __user *, utime, u32 __user *, uaddr2,
++		u32, val3)
++{
++	struct timespec64 ts;
++	ktime_t t, *tp = NULL;
++	u32 val2 = 0;
++	int cmd = op & FUTEX_CMD_MASK;
++
++	if (utime && (cmd == FUTEX_WAIT || cmd == FUTEX_LOCK_PI ||
++		      cmd == FUTEX_WAIT_BITSET ||
++		      cmd == FUTEX_WAIT_REQUEUE_PI)) {
++		if (unlikely(should_fail_futex(!(op & FUTEX_PRIVATE_FLAG))))
++			return -EFAULT;
++		if (get_timespec64(&ts, utime))
++			return -EFAULT;
++		if (!timespec64_valid(&ts))
++			return -EINVAL;
++
++		t = timespec64_to_ktime(ts);
++		if (cmd == FUTEX_WAIT)
++			t = ktime_add_safe(ktime_get(), t);
++		else if (cmd != FUTEX_LOCK_PI && !(op & FUTEX_CLOCK_REALTIME))
++			t = timens_ktime_to_host(CLOCK_MONOTONIC, t);
++		tp = &t;
++	}
++	/*
++	 * requeue parameter in 'utime' if cmd == FUTEX_*_REQUEUE_*.
++	 * number of waiters to wake in 'utime' if cmd == FUTEX_WAKE_OP.
++	 */
++	if (cmd == FUTEX_REQUEUE || cmd == FUTEX_CMP_REQUEUE ||
++	    cmd == FUTEX_CMP_REQUEUE_PI || cmd == FUTEX_WAKE_OP)
++		val2 = (u32) (unsigned long) utime;
++
++	return do_futex(uaddr, op, val, tp, uaddr2, val2, val3);
++}
++
++#ifdef CONFIG_COMPAT
++/*
++ * Fetch a robust-list pointer. Bit 0 signals PI futexes:
++ */
++static inline int
++compat_fetch_robust_entry(compat_uptr_t *uentry, struct robust_list __user **entry,
++		   compat_uptr_t __user *head, unsigned int *pi)
++{
++	if (get_user(*uentry, head))
++		return -EFAULT;
++
++	*entry = compat_ptr((*uentry) & ~1);
++	*pi = (unsigned int)(*uentry) & 1;
++
++	return 0;
++}
++
++static void __user *futex_uaddr(struct robust_list __user *entry,
++				compat_long_t futex_offset)
++{
++	compat_uptr_t base = ptr_to_compat(entry);
++	void __user *uaddr = compat_ptr(base + futex_offset);
++
++	return uaddr;
++}
++
++/*
++ * Walk curr->robust_list (very carefully, it's a userspace list!)
++ * and mark any locks found there dead, and notify any waiters.
++ *
++ * We silently return on any sign of list-walking problem.
++ */
++static void compat_exit_robust_list(struct task_struct *curr)
++{
++	struct compat_robust_list_head __user *head = curr->compat_robust_list;
++	struct robust_list __user *entry, *next_entry, *pending;
++	unsigned int limit = ROBUST_LIST_LIMIT, pi, pip;
++	unsigned int next_pi;
++	compat_uptr_t uentry, next_uentry, upending;
++	compat_long_t futex_offset;
++	int rc;
++
++	if (!futex_cmpxchg_enabled)
++		return;
++
++	/*
++	 * Fetch the list head (which was registered earlier, via
++	 * sys_set_robust_list()):
++	 */
++	if (compat_fetch_robust_entry(&uentry, &entry, &head->list.next, &pi))
++		return;
++	/*
++	 * Fetch the relative futex offset:
++	 */
++	if (get_user(futex_offset, &head->futex_offset))
++		return;
++	/*
++	 * Fetch any possibly pending lock-add first, and handle it
++	 * if it exists:
++	 */
++	if (compat_fetch_robust_entry(&upending, &pending,
++			       &head->list_op_pending, &pip))
++		return;
++
++	next_entry = NULL;	/* avoid warning with gcc */
++	while (entry != (struct robust_list __user *) &head->list) {
++		/*
++		 * Fetch the next entry in the list before calling
++		 * handle_futex_death:
++		 */
++		rc = compat_fetch_robust_entry(&next_uentry, &next_entry,
++			(compat_uptr_t __user *)&entry->next, &next_pi);
++		/*
++		 * A pending lock might already be on the list, so
++		 * dont process it twice:
++		 */
++		if (entry != pending) {
++			void __user *uaddr = futex_uaddr(entry, futex_offset);
++
++			if (handle_futex_death(uaddr, curr, pi,
++					       HANDLE_DEATH_LIST))
++				return;
++		}
++		if (rc)
++			return;
++		uentry = next_uentry;
++		entry = next_entry;
++		pi = next_pi;
++		/*
++		 * Avoid excessively long or circular lists:
++		 */
++		if (!--limit)
++			break;
++
++		cond_resched();
++	}
++	if (pending) {
++		void __user *uaddr = futex_uaddr(pending, futex_offset);
++
++		handle_futex_death(uaddr, curr, pip, HANDLE_DEATH_PENDING);
++	}
++}
++
++COMPAT_SYSCALL_DEFINE2(set_robust_list,
++		struct compat_robust_list_head __user *, head,
++		compat_size_t, len)
++{
++	if (!futex_cmpxchg_enabled)
++		return -ENOSYS;
++
++	if (unlikely(len != sizeof(*head)))
++		return -EINVAL;
++
++	current->compat_robust_list = head;
++
++	return 0;
++}
++
++COMPAT_SYSCALL_DEFINE3(get_robust_list, int, pid,
++			compat_uptr_t __user *, head_ptr,
++			compat_size_t __user *, len_ptr)
++{
++	struct compat_robust_list_head __user *head;
++	unsigned long ret;
++	struct task_struct *p;
++
++	if (!futex_cmpxchg_enabled)
++		return -ENOSYS;
++
++	rcu_read_lock();
++
++	ret = -ESRCH;
++	if (!pid)
++		p = current;
++	else {
++		p = find_task_by_vpid(pid);
++		if (!p)
++			goto err_unlock;
++	}
++
++	ret = -EPERM;
++	if (!ptrace_may_access(p, PTRACE_MODE_READ_REALCREDS))
++		goto err_unlock;
++
++	head = p->compat_robust_list;
++	rcu_read_unlock();
++
++	if (put_user(sizeof(*head), len_ptr))
++		return -EFAULT;
++	return put_user(ptr_to_compat(head), head_ptr);
++
++err_unlock:
++	rcu_read_unlock();
++
++	return ret;
++}
++#endif /* CONFIG_COMPAT */
++
++#ifdef CONFIG_COMPAT_32BIT_TIME
++SYSCALL_DEFINE6(futex_time32, u32 __user *, uaddr, int, op, u32, val,
++		struct old_timespec32 __user *, utime, u32 __user *, uaddr2,
++		u32, val3)
++{
++	struct timespec64 ts;
++	ktime_t t, *tp = NULL;
++	int val2 = 0;
++	int cmd = op & FUTEX_CMD_MASK;
++
++	if (utime && (cmd == FUTEX_WAIT || cmd == FUTEX_LOCK_PI ||
++		      cmd == FUTEX_WAIT_BITSET ||
++		      cmd == FUTEX_WAIT_REQUEUE_PI)) {
++		if (get_old_timespec32(&ts, utime))
++			return -EFAULT;
++		if (!timespec64_valid(&ts))
++			return -EINVAL;
++
++		t = timespec64_to_ktime(ts);
++		if (cmd == FUTEX_WAIT)
++			t = ktime_add_safe(ktime_get(), t);
++		else if (cmd != FUTEX_LOCK_PI && !(op & FUTEX_CLOCK_REALTIME))
++			t = timens_ktime_to_host(CLOCK_MONOTONIC, t);
++		tp = &t;
++	}
++	if (cmd == FUTEX_REQUEUE || cmd == FUTEX_CMP_REQUEUE ||
++	    cmd == FUTEX_CMP_REQUEUE_PI || cmd == FUTEX_WAKE_OP)
++		val2 = (int) (unsigned long) utime;
++
++	return do_futex(uaddr, op, val, tp, uaddr2, val2, val3);
++}
++#endif /* CONFIG_COMPAT_32BIT_TIME */
++
++static void __init futex_detect_cmpxchg(void)
++{
++#ifndef CONFIG_HAVE_FUTEX_CMPXCHG
++	u32 curval;
++
++	/*
++	 * This will fail and we want it. Some arch implementations do
++	 * runtime detection of the futex_atomic_cmpxchg_inatomic()
++	 * functionality. We want to know that before we call in any
++	 * of the complex code paths. Also we want to prevent
++	 * registration of robust lists in that case. NULL is
++	 * guaranteed to fault and we get -EFAULT on functional
++	 * implementation, the non-functional ones will return
++	 * -ENOSYS.
++	 */
++	if (cmpxchg_futex_value_locked(&curval, NULL, 0, 0) == -EFAULT)
++		futex_cmpxchg_enabled = 1;
++#endif
++}
++
++static int __init futex_init(void)
++{
++	unsigned int futex_shift;
++	unsigned long i;
++
++#if CONFIG_BASE_SMALL
++	futex_hashsize = 16;
++#else
++	futex_hashsize = roundup_pow_of_two(256 * num_possible_cpus());
++#endif
++
++	futex_queues = alloc_large_system_hash("futex", sizeof(*futex_queues),
++					       futex_hashsize, 0,
++					       futex_hashsize < 256 ? HASH_SMALL : 0,
++					       &futex_shift, NULL,
++					       futex_hashsize, futex_hashsize);
++	futex_hashsize = 1UL << futex_shift;
++
++	futex_detect_cmpxchg();
++
++	for (i = 0; i < futex_hashsize; i++) {
++		atomic_set(&futex_queues[i].waiters, 0);
++		plist_head_init(&futex_queues[i].chain);
++		spin_lock_init(&futex_queues[i].lock);
++	}
++
++	return 0;
++}
++core_initcall(futex_init);
+diff --git a/kernel/gcov/gcc_4_7.c b/kernel/gcov/gcc_4_7.c
+index c699feda21ac0..04880d8fba254 100644
+--- a/kernel/gcov/gcc_4_7.c
++++ b/kernel/gcov/gcc_4_7.c
+@@ -85,6 +85,7 @@ struct gcov_fn_info {
+  * @version: gcov version magic indicating the gcc version used for compilation
+  * @next: list head for a singly-linked list
+  * @stamp: uniquifying time stamp
++ * @checksum: unique object checksum
+  * @filename: name of the associated gcov data file
+  * @merge: merge functions (null for unused counter type)
+  * @n_functions: number of instrumented functions
+@@ -97,6 +98,10 @@ struct gcov_info {
+ 	unsigned int version;
+ 	struct gcov_info *next;
+ 	unsigned int stamp;
++ /* Since GCC 12.1 a checksum field is added. */
++#if (__GNUC__ >= 12)
++	unsigned int checksum;
++#endif
+ 	const char *filename;
+ 	void (*merge[GCOV_COUNTERS])(gcov_type *, unsigned int);
+ 	unsigned int n_functions;
+diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
+index e58342ace11f2..f1d83a8b44171 100644
+--- a/kernel/irq/internals.h
++++ b/kernel/irq/internals.h
+@@ -52,6 +52,7 @@ enum {
+  * IRQS_PENDING			- irq is pending and replayed later
+  * IRQS_SUSPENDED		- irq is suspended
+  * IRQS_NMI			- irq line is used to deliver NMIs
++ * IRQS_SYSFS			- descriptor has been added to sysfs
+  */
+ enum {
+ 	IRQS_AUTODETECT		= 0x00000001,
+@@ -64,6 +65,7 @@ enum {
+ 	IRQS_SUSPENDED		= 0x00000800,
+ 	IRQS_TIMINGS		= 0x00001000,
+ 	IRQS_NMI		= 0x00002000,
++	IRQS_SYSFS		= 0x00004000,
+ };
+ 
+ #include "debug.h"
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index ca36c6179aa76..9b0914a063f90 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -288,22 +288,25 @@ static void irq_sysfs_add(int irq, struct irq_desc *desc)
+ 	if (irq_kobj_base) {
+ 		/*
+ 		 * Continue even in case of failure as this is nothing
+-		 * crucial.
++		 * crucial and failures in the late irq_sysfs_init()
++		 * cannot be rolled back.
+ 		 */
+ 		if (kobject_add(&desc->kobj, irq_kobj_base, "%d", irq))
+ 			pr_warn("Failed to add kobject for irq %d\n", irq);
++		else
++			desc->istate |= IRQS_SYSFS;
+ 	}
+ }
+ 
+ static void irq_sysfs_del(struct irq_desc *desc)
+ {
+ 	/*
+-	 * If irq_sysfs_init() has not yet been invoked (early boot), then
+-	 * irq_kobj_base is NULL and the descriptor was never added.
+-	 * kobject_del() complains about a object with no parent, so make
+-	 * it conditional.
++	 * Only invoke kobject_del() when kobject_add() was successfully
++	 * invoked for the descriptor. This covers both early boot, where
++	 * sysfs is not initialized yet, and the case of a failed
++	 * kobject_add() invocation.
+ 	 */
+-	if (irq_kobj_base)
++	if (desc->istate & IRQS_SYSFS)
+ 		kobject_del(&desc->kobj);
+ }
+ 
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 3cb29835632f2..437b073dc487e 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -1667,7 +1667,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
+ 			irqd_set(&desc->irq_data, IRQD_NO_BALANCING);
+ 		}
+ 
+-		if (irq_settings_can_autoenable(desc)) {
++		if (!(new->flags & IRQF_NO_AUTOEN) &&
++		    irq_settings_can_autoenable(desc)) {
+ 			irq_startup(desc, IRQ_RESEND, IRQ_START_COND);
+ 		} else {
+ 			/*
+@@ -2054,10 +2055,15 @@ int request_threaded_irq(unsigned int irq, irq_handler_t handler,
+ 	 * which interrupt is which (messes up the interrupt freeing
+ 	 * logic etc).
+ 	 *
++	 * Also shared interrupts do not go well with disabling auto enable.
++	 * The sharing interrupt might request it while it's still disabled
++	 * and then wait for interrupts forever.
++	 *
+ 	 * Also IRQF_COND_SUSPEND only makes sense for shared interrupts and
+ 	 * it cannot be set along with IRQF_NO_SUSPEND.
+ 	 */
+ 	if (((irqflags & IRQF_SHARED) && !dev_id) ||
++	    ((irqflags & IRQF_SHARED) && (irqflags & IRQF_NO_AUTOEN)) ||
+ 	    (!(irqflags & IRQF_SHARED) && (irqflags & IRQF_COND_SUSPEND)) ||
+ 	    ((irqflags & IRQF_NO_SUSPEND) && (irqflags & IRQF_COND_SUSPEND)))
+ 		return -EINVAL;
+@@ -2213,7 +2219,8 @@ int request_nmi(unsigned int irq, irq_handler_t handler,
+ 
+ 	desc = irq_to_desc(irq);
+ 
+-	if (!desc || irq_settings_can_autoenable(desc) ||
++	if (!desc || (irq_settings_can_autoenable(desc) &&
++	    !(irqflags & IRQF_NO_AUTOEN)) ||
+ 	    !irq_settings_can_request(desc) ||
+ 	    WARN_ON(irq_settings_is_per_cpu_devid(desc)) ||
+ 	    !irq_supports_nmi(desc))
+diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
+index 23e7acb5c6679..762df6108c589 100644
+--- a/kernel/kcsan/core.c
++++ b/kernel/kcsan/core.c
+@@ -9,10 +9,12 @@
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/list.h>
++#include <linux/minmax.h>
+ #include <linux/moduleparam.h>
+ #include <linux/percpu.h>
+ #include <linux/preempt.h>
+ #include <linux/sched.h>
++#include <linux/string.h>
+ #include <linux/uaccess.h>
+ 
+ #include "atomic.h"
+@@ -1045,3 +1047,51 @@ EXPORT_SYMBOL(__tsan_atomic_thread_fence);
+ void __tsan_atomic_signal_fence(int memorder);
+ void __tsan_atomic_signal_fence(int memorder) { }
+ EXPORT_SYMBOL(__tsan_atomic_signal_fence);
++
++#ifdef __HAVE_ARCH_MEMSET
++void *__tsan_memset(void *s, int c, size_t count);
++noinline void *__tsan_memset(void *s, int c, size_t count)
++{
++	/*
++	 * Instead of not setting up watchpoints where accessed size is greater
++	 * than MAX_ENCODABLE_SIZE, truncate checked size to MAX_ENCODABLE_SIZE.
++	 */
++	size_t check_len = min_t(size_t, count, MAX_ENCODABLE_SIZE);
++
++	check_access(s, check_len, KCSAN_ACCESS_WRITE);
++	return memset(s, c, count);
++}
++#else
++void *__tsan_memset(void *s, int c, size_t count) __alias(memset);
++#endif
++EXPORT_SYMBOL(__tsan_memset);
++
++#ifdef __HAVE_ARCH_MEMMOVE
++void *__tsan_memmove(void *dst, const void *src, size_t len);
++noinline void *__tsan_memmove(void *dst, const void *src, size_t len)
++{
++	size_t check_len = min_t(size_t, len, MAX_ENCODABLE_SIZE);
++
++	check_access(dst, check_len, KCSAN_ACCESS_WRITE);
++	check_access(src, check_len, 0);
++	return memmove(dst, src, len);
++}
++#else
++void *__tsan_memmove(void *dst, const void *src, size_t len) __alias(memmove);
++#endif
++EXPORT_SYMBOL(__tsan_memmove);
++
++#ifdef __HAVE_ARCH_MEMCPY
++void *__tsan_memcpy(void *dst, const void *src, size_t len);
++noinline void *__tsan_memcpy(void *dst, const void *src, size_t len)
++{
++	size_t check_len = min_t(size_t, len, MAX_ENCODABLE_SIZE);
++
++	check_access(dst, check_len, KCSAN_ACCESS_WRITE);
++	check_access(src, check_len, 0);
++	return memcpy(dst, src, len);
++}
++#else
++void *__tsan_memcpy(void *dst, const void *src, size_t len) __alias(memcpy);
++#endif
++EXPORT_SYMBOL(__tsan_memcpy);
+diff --git a/kernel/padata.c b/kernel/padata.c
+index d4d3ba6e1728a..11ca3ebd8b123 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -220,14 +220,16 @@ int padata_do_parallel(struct padata_shell *ps,
+ 	pw = padata_work_alloc();
+ 	spin_unlock(&padata_works_lock);
+ 
++	if (!pw) {
++		/* Maximum works limit exceeded, run in the current task. */
++		padata->parallel(padata);
++	}
++
+ 	rcu_read_unlock_bh();
+ 
+ 	if (pw) {
+ 		padata_work_init(pw, padata_parallel_worker, padata, 0);
+ 		queue_work(pinst->parallel_wq, &pw->pw_work);
+-	} else {
+-		/* Maximum works limit exceeded, run in the current task. */
+-		padata->parallel(padata);
+ 	}
+ 
+ 	return 0;
+@@ -401,13 +403,16 @@ void padata_do_serial(struct padata_priv *padata)
+ 	int hashed_cpu = padata_cpu_hash(pd, padata->seq_nr);
+ 	struct padata_list *reorder = per_cpu_ptr(pd->reorder_list, hashed_cpu);
+ 	struct padata_priv *cur;
++	struct list_head *pos;
+ 
+ 	spin_lock(&reorder->lock);
+ 	/* Sort in ascending order of sequence number. */
+-	list_for_each_entry_reverse(cur, &reorder->list, list)
++	list_for_each_prev(pos, &reorder->list) {
++		cur = list_entry(pos, struct padata_priv, list);
+ 		if (cur->seq_nr < padata->seq_nr)
+ 			break;
+-	list_add(&padata->list, &cur->list);
++	}
++	list_add(&padata->list, pos);
+ 	spin_unlock(&reorder->lock);
+ 
+ 	/*
+diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
+index 1da013f50059a..f5dccd445d360 100644
+--- a/kernel/power/snapshot.c
++++ b/kernel/power/snapshot.c
+@@ -1677,8 +1677,8 @@ static unsigned long minimum_image_size(unsigned long saveable)
+  * /sys/power/reserved_size, respectively).  To make this happen, we compute the
+  * total number of available page frames and allocate at least
+  *
+- * ([page frames total] + PAGES_FOR_IO + [metadata pages]) / 2
+- *  + 2 * DIV_ROUND_UP(reserved_size, PAGE_SIZE)
++ * ([page frames total] - PAGES_FOR_IO - [metadata pages]) / 2
++ *  - 2 * DIV_ROUND_UP(reserved_size, PAGE_SIZE)
+  *
+  * of them, which corresponds to the maximum size of a hibernation image.
+  *
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index b10d6bcea77df..9cce4e13af414 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1157,7 +1157,7 @@ bool rcu_lockdep_current_cpu_online(void)
+ 	preempt_disable_notrace();
+ 	rdp = this_cpu_ptr(&rcu_data);
+ 	rnp = rdp->mynode;
+-	if (rdp->grpmask & rcu_rnp_online_cpus(rnp))
++	if (rdp->grpmask & rcu_rnp_online_cpus(rnp) || READ_ONCE(rnp->ofl_seq) & 0x1)
+ 		ret = true;
+ 	preempt_enable_notrace();
+ 	return ret;
+@@ -1724,6 +1724,7 @@ static void rcu_strict_gp_boundary(void *unused)
+  */
+ static bool rcu_gp_init(void)
+ {
++	unsigned long firstseq;
+ 	unsigned long flags;
+ 	unsigned long oldmask;
+ 	unsigned long mask;
+@@ -1767,6 +1768,12 @@ static bool rcu_gp_init(void)
+ 	 */
+ 	rcu_state.gp_state = RCU_GP_ONOFF;
+ 	rcu_for_each_leaf_node(rnp) {
++		smp_mb(); // Pair with barriers used when updating ->ofl_seq to odd values.
++		firstseq = READ_ONCE(rnp->ofl_seq);
++		if (firstseq & 0x1)
++			while (firstseq == READ_ONCE(rnp->ofl_seq))
++				schedule_timeout_idle(1);  // Can't wake unless RCU is watching.
++		smp_mb(); // Pair with barriers used when updating ->ofl_seq to even values.
+ 		raw_spin_lock(&rcu_state.ofl_lock);
+ 		raw_spin_lock_irq_rcu_node(rnp);
+ 		if (rnp->qsmaskinit == rnp->qsmaskinitnext &&
+@@ -2650,7 +2657,7 @@ void rcu_force_quiescent_state(void)
+ 	struct rcu_node *rnp_old = NULL;
+ 
+ 	/* Funnel through hierarchy to reduce memory contention. */
+-	rnp = __this_cpu_read(rcu_data.mynode);
++	rnp = raw_cpu_read(rcu_data.mynode);
+ 	for (; rnp != NULL; rnp = rnp->parent) {
+ 		ret = (READ_ONCE(rcu_state.gp_flags) & RCU_GP_FLAG_FQS) ||
+ 		       !raw_spin_trylock(&rnp->fqslock);
+@@ -4107,6 +4114,9 @@ void rcu_cpu_starting(unsigned int cpu)
+ 
+ 	rnp = rdp->mynode;
+ 	mask = rdp->grpmask;
++	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
++	WARN_ON_ONCE(!(rnp->ofl_seq & 0x1));
++	smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
+ 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
+ 	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext | mask);
+ 	newcpu = !(rnp->expmaskinitnext & mask);
+@@ -4124,6 +4134,9 @@ void rcu_cpu_starting(unsigned int cpu)
+ 	} else {
+ 		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ 	}
++	smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
++	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
++	WARN_ON_ONCE(rnp->ofl_seq & 0x1);
+ 	smp_mb(); /* Ensure RCU read-side usage follows above initialization. */
+ }
+ 
+@@ -4150,6 +4163,9 @@ void rcu_report_dead(unsigned int cpu)
+ 
+ 	/* Remove outgoing CPU from mask in the leaf rcu_node structure. */
+ 	mask = rdp->grpmask;
++	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
++	WARN_ON_ONCE(!(rnp->ofl_seq & 0x1));
++	smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
+ 	raw_spin_lock(&rcu_state.ofl_lock);
+ 	raw_spin_lock_irqsave_rcu_node(rnp, flags); /* Enforce GP memory-order guarantee. */
+ 	rdp->rcu_ofl_gp_seq = READ_ONCE(rcu_state.gp_seq);
+@@ -4162,6 +4178,9 @@ void rcu_report_dead(unsigned int cpu)
+ 	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext & ~mask);
+ 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ 	raw_spin_unlock(&rcu_state.ofl_lock);
++	smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
++	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
++	WARN_ON_ONCE(rnp->ofl_seq & 0x1);
+ 
+ 	rdp->cpu_started = false;
+ }
+diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
+index e4f66b8f7c470..6e8c77729a470 100644
+--- a/kernel/rcu/tree.h
++++ b/kernel/rcu/tree.h
+@@ -56,6 +56,7 @@ struct rcu_node {
+ 				/*  Initialized from ->qsmaskinitnext at the */
+ 				/*  beginning of each grace period. */
+ 	unsigned long qsmaskinitnext;
++	unsigned long ofl_seq;	/* CPU-hotplug operation sequence count. */
+ 				/* Online CPUs for next grace period. */
+ 	unsigned long expmask;	/* CPUs or groups that need to check in */
+ 				/*  to allow the current expedited GP */
+diff --git a/kernel/relay.c b/kernel/relay.c
+index b08d936d5fa75..067769b80d4ab 100644
+--- a/kernel/relay.c
++++ b/kernel/relay.c
+@@ -163,13 +163,13 @@ static struct rchan_buf *relay_create_buf(struct rchan *chan)
+ {
+ 	struct rchan_buf *buf;
+ 
+-	if (chan->n_subbufs > KMALLOC_MAX_SIZE / sizeof(size_t *))
++	if (chan->n_subbufs > KMALLOC_MAX_SIZE / sizeof(size_t))
+ 		return NULL;
+ 
+ 	buf = kzalloc(sizeof(struct rchan_buf), GFP_KERNEL);
+ 	if (!buf)
+ 		return NULL;
+-	buf->padding = kmalloc_array(chan->n_subbufs, sizeof(size_t *),
++	buf->padding = kmalloc_array(chan->n_subbufs, sizeof(size_t),
+ 				     GFP_KERNEL);
+ 	if (!buf->padding)
+ 		goto free_buf;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index bca0efc03a51d..c39d2fc3f9945 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4074,7 +4074,131 @@ done:
+ 	trace_sched_util_est_se_tp(&p->se);
+ }
+ 
+-static inline int task_fits_capacity(struct task_struct *p, long capacity)
++static inline int util_fits_cpu(unsigned long util,
++				unsigned long uclamp_min,
++				unsigned long uclamp_max,
++				int cpu)
++{
++	unsigned long capacity_orig, capacity_orig_thermal;
++	unsigned long capacity = capacity_of(cpu);
++	bool fits, uclamp_max_fits;
++
++	/*
++	 * Check if the real util fits without any uclamp boost/cap applied.
++	 */
++	fits = fits_capacity(util, capacity);
++
++	if (!uclamp_is_used())
++		return fits;
++
++	/*
++	 * We must use capacity_orig_of() for comparing against uclamp_min and
++	 * uclamp_max. We only care about capacity pressure (by using
++	 * capacity_of()) for comparing against the real util.
++	 *
++	 * If a task is boosted to 1024 for example, we don't want a tiny
++	 * pressure to skew the check whether it fits a CPU or not.
++	 *
++	 * Similarly if a task is capped to capacity_orig_of(little_cpu), it
++	 * should fit a little cpu even if there's some pressure.
++	 *
++	 * Only exception is for thermal pressure since it has a direct impact
++	 * on available OPP of the system.
++	 *
++	 * We honour it for uclamp_min only as a drop in performance level
++	 * could result in not getting the requested minimum performance level.
++	 *
++	 * For uclamp_max, we can tolerate a drop in performance level as the
++	 * goal is to cap the task. So it's okay if it's getting less.
++	 *
++	 * In case of capacity inversion, which is not handled yet, we should
++	 * honour the inverted capacity for both uclamp_min and uclamp_max all
++	 * the time.
++	 */
++	capacity_orig = capacity_orig_of(cpu);
++	capacity_orig_thermal = capacity_orig - arch_scale_thermal_pressure(cpu);
++
++	/*
++	 * We want to force a task to fit a cpu as implied by uclamp_max.
++	 * But we do have some corner cases to cater for..
++	 *
++	 *
++	 *                                 C=z
++	 *   |                             ___
++	 *   |                  C=y       |   |
++	 *   |_ _ _ _ _ _ _ _ _ ___ _ _ _ | _ | _ _ _ _ _  uclamp_max
++	 *   |      C=x        |   |      |   |
++	 *   |      ___        |   |      |   |
++	 *   |     |   |       |   |      |   |    (util somewhere in this region)
++	 *   |     |   |       |   |      |   |
++	 *   |     |   |       |   |      |   |
++	 *   +----------------------------------------
++	 *         cpu0        cpu1       cpu2
++	 *
++	 *   In the above example if a task is capped to a specific performance
++	 *   point, y, then when:
++	 *
++	 *   * util = 80% of x then it does not fit on cpu0 and should migrate
++	 *     to cpu1
++	 *   * util = 80% of y then it is forced to fit on cpu1 to honour
++	 *     uclamp_max request.
++	 *
++	 *   which is what we're enforcing here. A task always fits if
++	 *   uclamp_max <= capacity_orig. But when uclamp_max > capacity_orig,
++	 *   the normal upmigration rules should withhold still.
++	 *
++	 *   Only exception is when we are on max capacity, then we need to be
++	 *   careful not to block overutilized state. This is so because:
++	 *
++	 *     1. There's no concept of capping at max_capacity! We can't go
++	 *        beyond this performance level anyway.
++	 *     2. The system is being saturated when we're operating near
++	 *        max capacity, it doesn't make sense to block overutilized.
++	 */
++	uclamp_max_fits = (capacity_orig == SCHED_CAPACITY_SCALE) && (uclamp_max == SCHED_CAPACITY_SCALE);
++	uclamp_max_fits = !uclamp_max_fits && (uclamp_max <= capacity_orig);
++	fits = fits || uclamp_max_fits;
++
++	/*
++	 *
++	 *                                 C=z
++	 *   |                             ___       (region a, capped, util >= uclamp_max)
++	 *   |                  C=y       |   |
++	 *   |_ _ _ _ _ _ _ _ _ ___ _ _ _ | _ | _ _ _ _ _ uclamp_max
++	 *   |      C=x        |   |      |   |
++	 *   |      ___        |   |      |   |      (region b, uclamp_min <= util <= uclamp_max)
++	 *   |_ _ _|_ _|_ _ _ _| _ | _ _ _| _ | _ _ _ _ _ uclamp_min
++	 *   |     |   |       |   |      |   |
++	 *   |     |   |       |   |      |   |      (region c, boosted, util < uclamp_min)
++	 *   +----------------------------------------
++	 *         cpu0        cpu1       cpu2
++	 *
++	 * a) If util > uclamp_max, then we're capped, we don't care about
++	 *    actual fitness value here. We only care if uclamp_max fits
++	 *    capacity without taking margin/pressure into account.
++	 *    See comment above.
++	 *
++	 * b) If uclamp_min <= util <= uclamp_max, then the normal
++	 *    fits_capacity() rules apply. Except we need to ensure that we
++	 *    enforce we remain within uclamp_max, see comment above.
++	 *
++	 * c) If util < uclamp_min, then we are boosted. Same as (b) but we
++	 *    need to take into account the boosted value fits the CPU without
++	 *    taking margin/pressure into account.
++	 *
++	 * Cases (a) and (b) are handled in the 'fits' variable already. We
++	 * just need to consider an extra check for case (c) after ensuring we
++	 * handle the case uclamp_min > uclamp_max.
++	 */
++	uclamp_min = min(uclamp_min, uclamp_max);
++	if (util < uclamp_min && capacity_orig != SCHED_CAPACITY_SCALE)
++		fits = fits && (uclamp_min <= capacity_orig_thermal);
++
++	return fits;
++}
++
++static inline int task_fits_capacity(struct task_struct *p,
++				     unsigned long capacity)
+ {
+ 	return fits_capacity(uclamp_task_util(p), capacity);
+ }
+@@ -6247,7 +6371,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
+ 	return best_cpu;
+ }
+ 
+-static inline bool asym_fits_capacity(int task_util, int cpu)
++static inline bool asym_fits_capacity(unsigned long task_util, int cpu)
+ {
+ 	if (static_branch_unlikely(&sched_asym_cpucapacity))
+ 		return fits_capacity(task_util, capacity_of(cpu));
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index 15a376f85e09b..ab912cc60760a 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -1592,7 +1592,8 @@ blk_trace_event_print_binary(struct trace_iterator *iter, int flags,
+ 
+ static enum print_line_t blk_tracer_print_line(struct trace_iterator *iter)
+ {
+-	if (!(blk_tracer_flags.val & TRACE_BLK_OPT_CLASSIC))
++	if ((iter->ent->type != TRACE_BLK) ||
++	    !(blk_tracer_flags.val & TRACE_BLK_OPT_CLASSIC))
+ 		return TRACE_TYPE_UNHANDLED;
+ 
+ 	return print_one_line(iter, true);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 146771d6d0072..c7c92b0eed048 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6371,7 +6371,20 @@ waitagain:
+ 
+ 		ret = print_trace_line(iter);
+ 		if (ret == TRACE_TYPE_PARTIAL_LINE) {
+-			/* don't print partial lines */
++			/*
++			 * If one print_trace_line() fills entire trace_seq in one shot,
++			 * trace_seq_to_user() will returns -EBUSY because save_len == 0,
++			 * In this case, we need to consume it, otherwise, loop will peek
++			 * this event next time, resulting in an infinite loop.
++			 */
++			if (save_len == 0) {
++				iter->seq.full = 0;
++				trace_seq_puts(&iter->seq, "[LINE TOO BIG]\n");
++				trace_consume(iter);
++				break;
++			}
++
++			/* In other cases, don't print partial lines */
+ 			iter->seq.seq.len = save_len;
+ 			break;
+ 		}
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index fd54168294456..0ae3e4454ff2c 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -417,7 +417,7 @@ struct action_data {
+ 	 * event param, and is passed to the synthetic event
+ 	 * invocation.
+ 	 */
+-	unsigned int		var_ref_idx[TRACING_MAP_VARS_MAX];
++	unsigned int		var_ref_idx[SYNTH_FIELDS_MAX];
+ 	struct synth_event	*synth_event;
+ 	bool			use_trace_keyword;
+ 	char			*synth_event_name;
+@@ -1846,7 +1846,9 @@ static struct hist_field *create_var_ref(struct hist_trigger_data *hist_data,
+ 			return ref_field;
+ 		}
+ 	}
+-
++	/* Sanity check to avoid out-of-bound write on 'hist_data->var_refs' */
++	if (hist_data->n_var_refs >= TRACING_MAP_VARS_MAX)
++		return NULL;
+ 	ref_field = create_hist_field(var_field->hist_data, NULL, flags, NULL);
+ 	if (ref_field) {
+ 		if (init_var_ref(ref_field, var_field, system, event_name)) {
+@@ -3113,6 +3115,7 @@ static int parse_action_params(struct trace_array *tr, char *params,
+ 	while (params) {
+ 		if (data->n_params >= SYNTH_FIELDS_MAX) {
+ 			hist_err(tr, HIST_ERR_TOO_MANY_PARAMS, 0);
++			ret = -EINVAL;
+ 			goto out;
+ 		}
+ 
+@@ -3449,6 +3452,10 @@ static int trace_action_create(struct hist_trigger_data *hist_data,
+ 
+ 	lockdep_assert_held(&event_mutex);
+ 
++	/* Sanity check to avoid out-of-bound write on 'data->var_ref_idx' */
++	if (data->n_params > SYNTH_FIELDS_MAX)
++		return -EINVAL;
++
+ 	if (data->use_trace_keyword)
+ 		synth_event_name = data->synth_event_name;
+ 	else
+@@ -5827,7 +5834,7 @@ enable:
+ 	/* Just return zero, not the number of registered triggers */
+ 	ret = 0;
+  out:
+-	if (ret == 0)
++	if (ret == 0 && glob[0])
+ 		hist_err_clear();
+ 
+ 	return ret;
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 4aed8abb2022e..19c28a34c5f1d 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1915,7 +1915,6 @@ config KCOV
+ 	depends on CC_HAS_SANCOV_TRACE_PC || GCC_PLUGINS
+ 	select DEBUG_FS
+ 	select GCC_PLUGIN_SANCOV if !CC_HAS_SANCOV_TRACE_PC
+-	select SKB_EXTENSIONS if NET
+ 	help
+ 	  KCOV exposes kernel code coverage information in a form suitable
+ 	  for coverage-guided fuzzing (randomized testing).
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index 9e14ae02306bc..71bdc167a9ee7 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -440,6 +440,7 @@ static int object_cpu_offline(unsigned int cpu)
+ 	struct debug_percpu_free *percpu_pool;
+ 	struct hlist_node *tmp;
+ 	struct debug_obj *obj;
++	unsigned long flags;
+ 
+ 	/* Remote access is safe as the CPU is dead already */
+ 	percpu_pool = per_cpu_ptr(&percpu_obj_pool, cpu);
+@@ -447,6 +448,12 @@ static int object_cpu_offline(unsigned int cpu)
+ 		hlist_del(&obj->node);
+ 		kmem_cache_free(obj_cache, obj);
+ 	}
++
++	raw_spin_lock_irqsave(&pool_lock, flags);
++	obj_pool_used -= percpu_pool->obj_free;
++	debug_objects_freed += percpu_pool->obj_free;
++	raw_spin_unlock_irqrestore(&pool_lock, flags);
++
+ 	percpu_pool->obj_free = 0;
+ 
+ 	return 0;
+@@ -1316,6 +1323,8 @@ static int __init debug_objects_replace_static_objects(void)
+ 		hlist_add_head(&obj->node, &objects);
+ 	}
+ 
++	debug_objects_allocated += i;
++
+ 	/*
+ 	 * debug_objects_mem_init() is now called early that only one CPU is up
+ 	 * and interrupts have been disabled, so it is safe to replace the
+@@ -1384,6 +1393,7 @@ void __init debug_objects_mem_init(void)
+ 		debug_objects_enabled = 0;
+ 		kmem_cache_destroy(obj_cache);
+ 		pr_warn("out of memory.\n");
++		return;
+ 	} else
+ 		debug_objects_selftest();
+ 
+diff --git a/lib/fonts/fonts.c b/lib/fonts/fonts.c
+index 5f4b07b56cd9c..9738664386088 100644
+--- a/lib/fonts/fonts.c
++++ b/lib/fonts/fonts.c
+@@ -135,8 +135,8 @@ const struct font_desc *get_default_font(int xres, int yres, u32 font_w,
+ 		if (res > 20)
+ 			c += 20 - res;
+ 
+-		if ((font_w & (1 << (f->width - 1))) &&
+-		    (font_h & (1 << (f->height - 1))))
++		if ((font_w & (1U << (f->width - 1))) &&
++		    (font_h & (1U << (f->height - 1))))
+ 			c += 1000;
+ 
+ 		if (c > cc) {
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 650554964f181..6e30113303ba6 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -467,20 +467,6 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction,
+ }
+ EXPORT_SYMBOL(iov_iter_init);
+ 
+-static void memcpy_from_page(char *to, struct page *page, size_t offset, size_t len)
+-{
+-	char *from = kmap_atomic(page);
+-	memcpy(to, from + offset, len);
+-	kunmap_atomic(from);
+-}
+-
+-static void memcpy_to_page(struct page *page, size_t offset, const char *from, size_t len)
+-{
+-	char *to = kmap_atomic(page);
+-	memcpy(to + offset, from, len);
+-	kunmap_atomic(to);
+-}
+-
+ static void memzero_page(struct page *page, size_t offset, size_t len)
+ {
+ 	char *addr = kmap_atomic(page);
+diff --git a/lib/notifier-error-inject.c b/lib/notifier-error-inject.c
+index 21016b32d3131..2b24ea6c94979 100644
+--- a/lib/notifier-error-inject.c
++++ b/lib/notifier-error-inject.c
+@@ -15,7 +15,7 @@ static int debugfs_errno_get(void *data, u64 *val)
+ 	return 0;
+ }
+ 
+-DEFINE_SIMPLE_ATTRIBUTE(fops_errno, debugfs_errno_get, debugfs_errno_set,
++DEFINE_SIMPLE_ATTRIBUTE_SIGNED(fops_errno, debugfs_errno_get, debugfs_errno_set,
+ 			"%lld\n");
+ 
+ static struct dentry *debugfs_create_errno(const char *name, umode_t mode,
+diff --git a/lib/test_firmware.c b/lib/test_firmware.c
+index 2baa275a6ddf4..76550d2e2edc7 100644
+--- a/lib/test_firmware.c
++++ b/lib/test_firmware.c
+@@ -1114,6 +1114,7 @@ static int __init test_firmware_init(void)
+ 
+ 	rc = misc_register(&test_fw_misc_device);
+ 	if (rc) {
++		__test_firmware_config_free();
+ 		kfree(test_fw_config);
+ 		pr_err("could not register misc device: %d\n", rc);
+ 		return rc;
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 8dfbe86bd74f7..b58021666e1a3 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -1245,7 +1245,7 @@ move_freelist_tail(struct list_head *freelist, struct page *freepage)
+ }
+ 
+ static void
+-fast_isolate_around(struct compact_control *cc, unsigned long pfn, unsigned long nr_isolated)
++fast_isolate_around(struct compact_control *cc, unsigned long pfn)
+ {
+ 	unsigned long start_pfn, end_pfn;
+ 	struct page *page;
+@@ -1266,21 +1266,13 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn, unsigned long
+ 	if (!page)
+ 		return;
+ 
+-	/* Scan before */
+-	if (start_pfn != pfn) {
+-		isolate_freepages_block(cc, &start_pfn, pfn, &cc->freepages, 1, false);
+-		if (cc->nr_freepages >= cc->nr_migratepages)
+-			return;
+-	}
+-
+-	/* Scan after */
+-	start_pfn = pfn + nr_isolated;
+-	if (start_pfn < end_pfn)
+-		isolate_freepages_block(cc, &start_pfn, end_pfn, &cc->freepages, 1, false);
++	isolate_freepages_block(cc, &start_pfn, end_pfn, &cc->freepages, 1, false);
+ 
+ 	/* Skip this pageblock in the future as it's full or nearly full */
+ 	if (cc->nr_freepages < cc->nr_migratepages)
+ 		set_pageblock_skip(page);
++
++	return;
+ }
+ 
+ /* Search orders in round-robin fashion */
+@@ -1456,7 +1448,7 @@ fast_isolate_freepages(struct compact_control *cc)
+ 		return cc->free_pfn;
+ 
+ 	low_pfn = page_to_pfn(page);
+-	fast_isolate_around(cc, low_pfn, nr_isolated);
++	fast_isolate_around(cc, low_pfn);
+ 	return low_pfn;
+ }
+ 
+diff --git a/net/802/mrp.c b/net/802/mrp.c
+index 35e04cc5390c4..c10a432a5b435 100644
+--- a/net/802/mrp.c
++++ b/net/802/mrp.c
+@@ -606,7 +606,10 @@ static void mrp_join_timer(struct timer_list *t)
+ 	spin_unlock(&app->lock);
+ 
+ 	mrp_queue_xmit(app);
+-	mrp_join_timer_arm(app);
++	spin_lock(&app->lock);
++	if (likely(app->active))
++		mrp_join_timer_arm(app);
++	spin_unlock(&app->lock);
+ }
+ 
+ static void mrp_periodic_timer_arm(struct mrp_applicant *app)
+@@ -620,11 +623,12 @@ static void mrp_periodic_timer(struct timer_list *t)
+ 	struct mrp_applicant *app = from_timer(app, t, periodic_timer);
+ 
+ 	spin_lock(&app->lock);
+-	mrp_mad_event(app, MRP_EVENT_PERIODIC);
+-	mrp_pdu_queue(app);
++	if (likely(app->active)) {
++		mrp_mad_event(app, MRP_EVENT_PERIODIC);
++		mrp_pdu_queue(app);
++		mrp_periodic_timer_arm(app);
++	}
+ 	spin_unlock(&app->lock);
+-
+-	mrp_periodic_timer_arm(app);
+ }
+ 
+ static int mrp_pdu_parse_end_mark(struct sk_buff *skb, int *offset)
+@@ -872,6 +876,7 @@ int mrp_init_applicant(struct net_device *dev, struct mrp_application *appl)
+ 	app->dev = dev;
+ 	app->app = appl;
+ 	app->mad = RB_ROOT;
++	app->active = true;
+ 	spin_lock_init(&app->lock);
+ 	skb_queue_head_init(&app->queue);
+ 	rcu_assign_pointer(dev->mrp_port->applicants[appl->type], app);
+@@ -900,6 +905,9 @@ void mrp_uninit_applicant(struct net_device *dev, struct mrp_application *appl)
+ 
+ 	RCU_INIT_POINTER(port->applicants[appl->type], NULL);
+ 
++	spin_lock_bh(&app->lock);
++	app->active = false;
++	spin_unlock_bh(&app->lock);
+ 	/* Delete timer and generate a final TX event to flush out
+ 	 * all pending messages before the applicant is gone.
+ 	 */
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index f8aab38ab5953..2af1477a05ca6 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -4910,7 +4910,7 @@ void hci_req_cmd_complete(struct hci_dev *hdev, u16 opcode, u8 status,
+ 			*req_complete_skb = bt_cb(skb)->hci.req_complete_skb;
+ 		else
+ 			*req_complete = bt_cb(skb)->hci.req_complete;
+-		kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 	}
+ 	spin_unlock_irqrestore(&hdev->cmd_q.lock, flags);
+ }
+diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c
+index 7324764384b67..8d6fce9005bdd 100644
+--- a/net/bluetooth/rfcomm/core.c
++++ b/net/bluetooth/rfcomm/core.c
+@@ -590,7 +590,7 @@ int rfcomm_dlc_send(struct rfcomm_dlc *d, struct sk_buff *skb)
+ 
+ 		ret = rfcomm_dlc_send_frag(d, frag);
+ 		if (ret < 0) {
+-			kfree_skb(frag);
++			dev_kfree_skb_irq(frag);
+ 			goto unlock;
+ 		}
+ 
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index 717b01ff9b2ba..7df14a0e380cb 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -442,9 +442,6 @@ static int convert___skb_to_skb(struct sk_buff *skb, struct __sk_buff *__skb)
+ {
+ 	struct qdisc_skb_cb *cb = (struct qdisc_skb_cb *)skb->cb;
+ 
+-	if (!skb->len)
+-		return -EINVAL;
+-
+ 	if (!__skb)
+ 		return 0;
+ 
+diff --git a/net/caif/cfctrl.c b/net/caif/cfctrl.c
+index 2809cbd6b7f74..d8cb4b2a076b4 100644
+--- a/net/caif/cfctrl.c
++++ b/net/caif/cfctrl.c
+@@ -269,11 +269,15 @@ int cfctrl_linkup_request(struct cflayer *layer,
+ 	default:
+ 		pr_warn("Request setup of bad link type = %d\n",
+ 			param->linktype);
++		cfpkt_destroy(pkt);
+ 		return -EINVAL;
+ 	}
+ 	req = kzalloc(sizeof(*req), GFP_KERNEL);
+-	if (!req)
++	if (!req) {
++		cfpkt_destroy(pkt);
+ 		return -ENOMEM;
++	}
++
+ 	req->client_layer = user_layer;
+ 	req->cmd = CFCTRL_CMD_LINK_SETUP;
+ 	req->param = *param;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 34b5aab42b912..37bb60a7e97ed 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3631,7 +3631,7 @@ static struct sk_buff *validate_xmit_vlan(struct sk_buff *skb,
+ int skb_csum_hwoffload_help(struct sk_buff *skb,
+ 			    const netdev_features_t features)
+ {
+-	if (unlikely(skb->csum_not_inet))
++	if (unlikely(skb_csum_is_sctp(skb)))
+ 		return !!(features & NETIF_F_SCTP_CRC) ? 0 :
+ 			skb_crc32c_csum_help(skb);
+ 
+@@ -10320,24 +10320,16 @@ void netdev_run_todo(void)
+ void netdev_stats_to_stats64(struct rtnl_link_stats64 *stats64,
+ 			     const struct net_device_stats *netdev_stats)
+ {
+-#if BITS_PER_LONG == 64
+-	BUILD_BUG_ON(sizeof(*stats64) < sizeof(*netdev_stats));
+-	memcpy(stats64, netdev_stats, sizeof(*netdev_stats));
+-	/* zero out counters that only exist in rtnl_link_stats64 */
+-	memset((char *)stats64 + sizeof(*netdev_stats), 0,
+-	       sizeof(*stats64) - sizeof(*netdev_stats));
+-#else
+-	size_t i, n = sizeof(*netdev_stats) / sizeof(unsigned long);
+-	const unsigned long *src = (const unsigned long *)netdev_stats;
++	size_t i, n = sizeof(*netdev_stats) / sizeof(atomic_long_t);
++	const atomic_long_t *src = (atomic_long_t *)netdev_stats;
+ 	u64 *dst = (u64 *)stats64;
+ 
+ 	BUILD_BUG_ON(n > sizeof(*stats64) / sizeof(u64));
+ 	for (i = 0; i < n; i++)
+-		dst[i] = src[i];
++		dst[i] = atomic_long_read(&src[i]);
+ 	/* zero out counters that only exist in rtnl_link_stats64 */
+ 	memset((char *)stats64 + n * sizeof(u64), 0,
+ 	       sizeof(*stats64) - n * sizeof(u64));
+-#endif
+ }
+ EXPORT_SYMBOL(netdev_stats_to_stats64);
+ 
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 4c22e6d1da746..a5df0cf46bbf8 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2125,8 +2125,17 @@ static int __bpf_redirect_no_mac(struct sk_buff *skb, struct net_device *dev,
+ {
+ 	unsigned int mlen = skb_network_offset(skb);
+ 
++	if (unlikely(skb->len <= mlen)) {
++		kfree_skb(skb);
++		return -ERANGE;
++	}
++
+ 	if (mlen) {
+ 		__skb_pull(skb, mlen);
++		if (unlikely(!skb->len)) {
++			kfree_skb(skb);
++			return -ERANGE;
++		}
+ 
+ 		/* At ingress, the mac header has already been pulled once.
+ 		 * At egress, skb_pospull_rcsum has to be done in case that
+@@ -2146,7 +2155,7 @@ static int __bpf_redirect_common(struct sk_buff *skb, struct net_device *dev,
+ 				 u32 flags)
+ {
+ 	/* Verify that a link layer header is carried */
+-	if (unlikely(skb->mac_header >= skb->network_header)) {
++	if (unlikely(skb->mac_header >= skb->network_header || skb->len == 0)) {
+ 		kfree_skb(skb);
+ 		return -ERANGE;
+ 	}
+@@ -3192,15 +3201,18 @@ static int bpf_skb_generic_push(struct sk_buff *skb, u32 off, u32 len)
+ 
+ static int bpf_skb_generic_pop(struct sk_buff *skb, u32 off, u32 len)
+ {
++	void *old_data;
++
+ 	/* skb_ensure_writable() is not needed here, as we're
+ 	 * already working on an uncloned skb.
+ 	 */
+ 	if (unlikely(!pskb_may_pull(skb, off + len)))
+ 		return -ENOMEM;
+ 
+-	skb_postpull_rcsum(skb, skb->data + off, len);
+-	memmove(skb->data + len, skb->data, off);
++	old_data = skb->data;
+ 	__skb_pull(skb, len);
++	skb_postpull_rcsum(skb, old_data + off, len);
++	memmove(skb->data, old_data, off);
+ 
+ 	return 0;
+ }
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 06169889b0ca0..2b12e0730b852 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -2115,6 +2115,9 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta)
+ 				insp = list;
+ 			} else {
+ 				/* Eaten partially. */
++				if (skb_is_gso(skb) && !list->head_frag &&
++				    skb_headlen(list))
++					skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
+ 
+ 				if (skb_shared(list)) {
+ 					/* Sucks! We need to fork list. :-( */
+@@ -4258,9 +4261,6 @@ static const u8 skb_ext_type_len[] = {
+ #if IS_ENABLED(CONFIG_MPTCP)
+ 	[SKB_EXT_MPTCP] = SKB_EXT_CHUNKSIZEOF(struct mptcp_ext),
+ #endif
+-#if IS_ENABLED(CONFIG_KCOV)
+-	[SKB_EXT_KCOV_HANDLE] = SKB_EXT_CHUNKSIZEOF(u64),
+-#endif
+ };
+ 
+ static __always_inline unsigned int skb_ext_total_length(void)
+@@ -4277,9 +4277,6 @@ static __always_inline unsigned int skb_ext_total_length(void)
+ #endif
+ #if IS_ENABLED(CONFIG_MPTCP)
+ 		skb_ext_type_len[SKB_EXT_MPTCP] +
+-#endif
+-#if IS_ENABLED(CONFIG_KCOV)
+-		skb_ext_type_len[SKB_EXT_KCOV_HANDLE] +
+ #endif
+ 		0;
+ }
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index cbf4184fabc98..ee5d3f49b0b5b 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -358,11 +358,13 @@ static void sock_map_free(struct bpf_map *map)
+ 
+ 		sk = xchg(psk, NULL);
+ 		if (sk) {
++			sock_hold(sk);
+ 			lock_sock(sk);
+ 			rcu_read_lock();
+ 			sock_map_unref(sk, psk);
+ 			rcu_read_unlock();
+ 			release_sock(sk);
++			sock_put(sk);
+ 		}
+ 	}
+ 
+diff --git a/net/core/stream.c b/net/core/stream.c
+index a61130504827a..d7c5413d16d57 100644
+--- a/net/core/stream.c
++++ b/net/core/stream.c
+@@ -196,6 +196,12 @@ void sk_stream_kill_queues(struct sock *sk)
+ 	/* First the read buffer. */
+ 	__skb_queue_purge(&sk->sk_receive_queue);
+ 
++	/* Next, the error queue.
++	 * We need to use queue lock, because other threads might
++	 * add packets to the queue without socket lock being held.
++	 */
++	skb_queue_purge(&sk->sk_error_queue);
++
+ 	/* Next, the write queue. */
+ 	WARN_ON(!skb_queue_empty(&sk->sk_write_queue));
+ 
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 80d2a00d30977..47c2dd4a9b9f9 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -1966,7 +1966,8 @@ static int ethtool_phys_id(struct net_device *dev, void __user *useraddr)
+ 	} else {
+ 		/* Driver expects to be called at twice the frequency in rc */
+ 		int n = rc * 2, interval = HZ / n;
+-		u64 count = n * id.data, i = 0;
++		u64 count = mul_u32_u32(n, id.data);
++		u64 i = 0;
+ 
+ 		do {
+ 			rtnl_lock();
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index fec1b014c0a26..84e6ef4f35252 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -219,7 +219,9 @@ static netdev_tx_t hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		skb->dev = master->dev;
+ 		skb_reset_mac_header(skb);
+ 		skb_reset_mac_len(skb);
++		spin_lock_bh(&hsr->seqnr_lock);
+ 		hsr_forward_skb(skb, master);
++		spin_unlock_bh(&hsr->seqnr_lock);
+ 	} else {
+ 		atomic_long_inc(&dev->tx_dropped);
+ 		dev_kfree_skb_any(skb);
+@@ -232,7 +234,7 @@ static const struct header_ops hsr_header_ops = {
+ 	.parse	 = eth_header_parse,
+ };
+ 
+-static struct sk_buff *hsr_init_skb(struct hsr_port *master, u16 proto)
++static struct sk_buff *hsr_init_skb(struct hsr_port *master)
+ {
+ 	struct hsr_priv *hsr = master->hsr;
+ 	struct sk_buff *skb;
+@@ -244,8 +246,7 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master, u16 proto)
+ 	 * being, for PRP it is a trailer and for HSR it is a
+ 	 * header
+ 	 */
+-	skb = dev_alloc_skb(sizeof(struct hsr_tag) +
+-			    sizeof(struct hsr_sup_tag) +
++	skb = dev_alloc_skb(sizeof(struct hsr_sup_tag) +
+ 			    sizeof(struct hsr_sup_payload) + hlen + tlen);
+ 
+ 	if (!skb)
+@@ -253,10 +254,9 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master, u16 proto)
+ 
+ 	skb_reserve(skb, hlen);
+ 	skb->dev = master->dev;
+-	skb->protocol = htons(proto);
+ 	skb->priority = TC_PRIO_CONTROL;
+ 
+-	if (dev_hard_header(skb, skb->dev, proto,
++	if (dev_hard_header(skb, skb->dev, ETH_P_PRP,
+ 			    hsr->sup_multicast_addr,
+ 			    skb->dev->dev_addr, skb->len) <= 0)
+ 		goto out;
+@@ -278,12 +278,9 @@ static void send_hsr_supervision_frame(struct hsr_port *master,
+ {
+ 	struct hsr_priv *hsr = master->hsr;
+ 	__u8 type = HSR_TLV_LIFE_CHECK;
+-	struct hsr_tag *hsr_tag = NULL;
+ 	struct hsr_sup_payload *hsr_sp;
+ 	struct hsr_sup_tag *hsr_stag;
+-	unsigned long irqflags;
+ 	struct sk_buff *skb;
+-	u16 proto;
+ 
+ 	*interval = msecs_to_jiffies(HSR_LIFE_CHECK_INTERVAL);
+ 	if (hsr->announce_count < 3 && hsr->prot_version == 0) {
+@@ -292,39 +289,25 @@ static void send_hsr_supervision_frame(struct hsr_port *master,
+ 		hsr->announce_count++;
+ 	}
+ 
+-	if (!hsr->prot_version)
+-		proto = ETH_P_PRP;
+-	else
+-		proto = ETH_P_HSR;
+-
+-	skb = hsr_init_skb(master, proto);
++	skb = hsr_init_skb(master);
+ 	if (!skb) {
+ 		WARN_ONCE(1, "HSR: Could not send supervision frame\n");
+ 		return;
+ 	}
+ 
+-	if (hsr->prot_version > 0) {
+-		hsr_tag = skb_put(skb, sizeof(struct hsr_tag));
+-		hsr_tag->encap_proto = htons(ETH_P_PRP);
+-		set_hsr_tag_LSDU_size(hsr_tag, HSR_V1_SUP_LSDUSIZE);
+-	}
+-
+ 	hsr_stag = skb_put(skb, sizeof(struct hsr_sup_tag));
+ 	set_hsr_stag_path(hsr_stag, (hsr->prot_version ? 0x0 : 0xf));
+ 	set_hsr_stag_HSR_ver(hsr_stag, hsr->prot_version);
+ 
+ 	/* From HSRv1 on we have separate supervision sequence numbers. */
+-	spin_lock_irqsave(&master->hsr->seqnr_lock, irqflags);
++	spin_lock_bh(&hsr->seqnr_lock);
+ 	if (hsr->prot_version > 0) {
+ 		hsr_stag->sequence_nr = htons(hsr->sup_sequence_nr);
+ 		hsr->sup_sequence_nr++;
+-		hsr_tag->sequence_nr = htons(hsr->sequence_nr);
+-		hsr->sequence_nr++;
+ 	} else {
+ 		hsr_stag->sequence_nr = htons(hsr->sequence_nr);
+ 		hsr->sequence_nr++;
+ 	}
+-	spin_unlock_irqrestore(&master->hsr->seqnr_lock, irqflags);
+ 
+ 	hsr_stag->HSR_TLV_type = type;
+ 	/* TODO: Why 12 in HSRv0? */
+@@ -335,11 +318,13 @@ static void send_hsr_supervision_frame(struct hsr_port *master,
+ 	hsr_sp = skb_put(skb, sizeof(struct hsr_sup_payload));
+ 	ether_addr_copy(hsr_sp->macaddress_A, master->dev->dev_addr);
+ 
+-	if (skb_put_padto(skb, ETH_ZLEN + HSR_HLEN))
++	if (skb_put_padto(skb, ETH_ZLEN)) {
++		spin_unlock_bh(&hsr->seqnr_lock);
+ 		return;
++	}
+ 
+ 	hsr_forward_skb(skb, master);
+-
++	spin_unlock_bh(&hsr->seqnr_lock);
+ 	return;
+ }
+ 
+@@ -349,12 +334,9 @@ static void send_prp_supervision_frame(struct hsr_port *master,
+ 	struct hsr_priv *hsr = master->hsr;
+ 	struct hsr_sup_payload *hsr_sp;
+ 	struct hsr_sup_tag *hsr_stag;
+-	unsigned long irqflags;
+ 	struct sk_buff *skb;
+-	struct prp_rct *rct;
+-	u8 *tail;
+ 
+-	skb = hsr_init_skb(master, ETH_P_PRP);
++	skb = hsr_init_skb(master);
+ 	if (!skb) {
+ 		WARN_ONCE(1, "PRP: Could not send supervision frame\n");
+ 		return;
+@@ -366,7 +348,7 @@ static void send_prp_supervision_frame(struct hsr_port *master,
+ 	set_hsr_stag_HSR_ver(hsr_stag, (hsr->prot_version ? 1 : 0));
+ 
+ 	/* From HSRv1 on we have separate supervision sequence numbers. */
+-	spin_lock_irqsave(&master->hsr->seqnr_lock, irqflags);
++	spin_lock_bh(&hsr->seqnr_lock);
+ 	hsr_stag->sequence_nr = htons(hsr->sup_sequence_nr);
+ 	hsr->sup_sequence_nr++;
+ 	hsr_stag->HSR_TLV_type = PRP_TLV_LIFE_CHECK_DD;
+@@ -376,20 +358,13 @@ static void send_prp_supervision_frame(struct hsr_port *master,
+ 	hsr_sp = skb_put(skb, sizeof(struct hsr_sup_payload));
+ 	ether_addr_copy(hsr_sp->macaddress_A, master->dev->dev_addr);
+ 
+-	if (skb_put_padto(skb, ETH_ZLEN + HSR_HLEN)) {
+-		spin_unlock_irqrestore(&master->hsr->seqnr_lock, irqflags);
++	if (skb_put_padto(skb, ETH_ZLEN)) {
++		spin_unlock_bh(&hsr->seqnr_lock);
+ 		return;
+ 	}
+ 
+-	tail = skb_tail_pointer(skb) - HSR_HLEN;
+-	rct = (struct prp_rct *)tail;
+-	rct->PRP_suffix = htons(ETH_P_PRP);
+-	set_prp_LSDU_size(rct, HSR_V1_SUP_LSDUSIZE);
+-	rct->sequence_nr = htons(hsr->sequence_nr);
+-	hsr->sequence_nr++;
+-	spin_unlock_irqrestore(&master->hsr->seqnr_lock, irqflags);
+-
+ 	hsr_forward_skb(skb, master);
++	spin_unlock_bh(&hsr->seqnr_lock);
+ }
+ 
+ /* Announce (supervision frame) timer function
+@@ -468,7 +443,7 @@ void hsr_dev_setup(struct net_device *dev)
+ 	dev->header_ops = &hsr_header_ops;
+ 	dev->netdev_ops = &hsr_device_ops;
+ 	SET_NETDEV_DEVTYPE(dev, &hsr_type);
+-	dev->priv_flags |= IFF_NO_QUEUE;
++	dev->priv_flags |= IFF_NO_QUEUE | IFF_DISABLE_NETPOLL;
+ 
+ 	dev->needs_free_netdev = true;
+ 
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index cb9b54a7abd24..aec48e670fb69 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -186,6 +186,7 @@ static struct sk_buff *prp_fill_rct(struct sk_buff *skb,
+ 	set_prp_LSDU_size(trailer, lsdu_size);
+ 	trailer->sequence_nr = htons(frame->sequence_nr);
+ 	trailer->PRP_suffix = htons(ETH_P_PRP);
++	skb->protocol = eth_hdr(skb)->h_proto;
+ 
+ 	return skb;
+ }
+@@ -226,6 +227,7 @@ static struct sk_buff *hsr_fill_tag(struct sk_buff *skb,
+ 	hsr_ethhdr->hsr_tag.encap_proto = hsr_ethhdr->ethhdr.h_proto;
+ 	hsr_ethhdr->ethhdr.h_proto = htons(proto_version ?
+ 			ETH_P_HSR : ETH_P_PRP);
++	skb->protocol = hsr_ethhdr->ethhdr.h_proto;
+ 
+ 	return skb;
+ }
+@@ -435,7 +437,6 @@ static void handle_std_frame(struct sk_buff *skb,
+ {
+ 	struct hsr_port *port = frame->port_rcv;
+ 	struct hsr_priv *hsr = port->hsr;
+-	unsigned long irqflags;
+ 
+ 	frame->skb_hsr = NULL;
+ 	frame->skb_prp = NULL;
+@@ -445,17 +446,20 @@ static void handle_std_frame(struct sk_buff *skb,
+ 		frame->is_from_san = true;
+ 	} else {
+ 		/* Sequence nr for the master node */
+-		spin_lock_irqsave(&hsr->seqnr_lock, irqflags);
++		lockdep_assert_held(&hsr->seqnr_lock);
+ 		frame->sequence_nr = hsr->sequence_nr;
+ 		hsr->sequence_nr++;
+-		spin_unlock_irqrestore(&hsr->seqnr_lock, irqflags);
+ 	}
+ }
+ 
+ int hsr_fill_frame_info(__be16 proto, struct sk_buff *skb,
+ 			struct hsr_frame_info *frame)
+ {
+-	if (proto == htons(ETH_P_PRP) ||
++	struct hsr_port *port = frame->port_rcv;
++	struct hsr_priv *hsr = port->hsr;
++
++	/* HSRv0 supervisory frames double as a tag so treat them as tagged. */
++	if ((!hsr->prot_version && proto == htons(ETH_P_PRP)) ||
+ 	    proto == htons(ETH_P_HSR)) {
+ 		/* Check if skb contains hsr_ethhdr */
+ 		if (skb->mac_len < sizeof(struct hsr_ethhdr))
+@@ -545,11 +549,13 @@ void hsr_forward_skb(struct sk_buff *skb, struct hsr_port *port)
+ {
+ 	struct hsr_frame_info frame;
+ 
++	rcu_read_lock();
+ 	if (fill_frame_info(&frame, skb, port) < 0)
+ 		goto out_drop;
+ 
+ 	hsr_register_frame_in(frame.node_src, port, frame.sequence_nr);
+ 	hsr_forward_do(&frame);
++	rcu_read_unlock();
+ 	/* Gets called for ingress frames as well as egress from master port.
+ 	 * So check and increment stats for master port only here.
+ 	 */
+@@ -564,6 +570,7 @@ void hsr_forward_skb(struct sk_buff *skb, struct hsr_port *port)
+ 	return;
+ 
+ out_drop:
++	rcu_read_unlock();
+ 	port->dev->stats.tx_dropped++;
+ 	kfree_skb(skb);
+ }
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index 805f974923b92..20cb6b7dbc694 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -159,6 +159,7 @@ static struct hsr_node *hsr_add_node(struct hsr_priv *hsr,
+ 		return NULL;
+ 
+ 	ether_addr_copy(new_node->macaddress_A, addr);
++	spin_lock_init(&new_node->seq_out_lock);
+ 
+ 	/* We are only interested in time diffs here, so use current jiffies
+ 	 * as initialization. (0 could trigger an spurious ring error warning).
+@@ -311,6 +312,7 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame)
+ 		goto done;
+ 
+ 	ether_addr_copy(node_real->macaddress_B, ethhdr->h_source);
++	spin_lock_bh(&node_real->seq_out_lock);
+ 	for (i = 0; i < HSR_PT_PORTS; i++) {
+ 		if (!node_curr->time_in_stale[i] &&
+ 		    time_after(node_curr->time_in[i], node_real->time_in[i])) {
+@@ -321,6 +323,7 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame)
+ 		if (seq_nr_after(node_curr->seq_out[i], node_real->seq_out[i]))
+ 			node_real->seq_out[i] = node_curr->seq_out[i];
+ 	}
++	spin_unlock_bh(&node_real->seq_out_lock);
+ 	node_real->addr_B_port = port_rcv->type;
+ 
+ 	spin_lock_bh(&hsr->list_lock);
+@@ -413,13 +416,17 @@ void hsr_register_frame_in(struct hsr_node *node, struct hsr_port *port,
+ int hsr_register_frame_out(struct hsr_port *port, struct hsr_node *node,
+ 			   u16 sequence_nr)
+ {
++	spin_lock_bh(&node->seq_out_lock);
+ 	if (seq_nr_before_or_eq(sequence_nr, node->seq_out[port->type]) &&
+ 	    time_is_after_jiffies(node->time_out[port->type] +
+-	    msecs_to_jiffies(HSR_ENTRY_FORGET_TIME)))
++	    msecs_to_jiffies(HSR_ENTRY_FORGET_TIME))) {
++		spin_unlock_bh(&node->seq_out_lock);
+ 		return 1;
++	}
+ 
+ 	node->time_out[port->type] = jiffies;
+ 	node->seq_out[port->type] = sequence_nr;
++	spin_unlock_bh(&node->seq_out_lock);
+ 	return 0;
+ }
+ 
+diff --git a/net/hsr/hsr_framereg.h b/net/hsr/hsr_framereg.h
+index d9628e7a5f051..5a771cb3f0325 100644
+--- a/net/hsr/hsr_framereg.h
++++ b/net/hsr/hsr_framereg.h
+@@ -69,6 +69,8 @@ void prp_update_san_info(struct hsr_node *node, bool is_sup);
+ 
+ struct hsr_node {
+ 	struct list_head	mac_list;
++	/* Protect R/W access to seq_out */
++	spinlock_t		seq_out_lock;
+ 	unsigned char		macaddress_A[ETH_ALEN];
+ 	unsigned char		macaddress_B[ETH_ALEN];
+ 	/* Local slave through which AddrB frames are received from this node */
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 4d97133240036..9ed59147ef66b 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -147,10 +147,14 @@ static int inet_csk_bind_conflict(const struct sock *sk,
+ 	 */
+ 
+ 	sk_for_each_bound(sk2, &tb->owners) {
+-		if (sk != sk2 &&
+-		    (!sk->sk_bound_dev_if ||
+-		     !sk2->sk_bound_dev_if ||
+-		     sk->sk_bound_dev_if == sk2->sk_bound_dev_if)) {
++		int bound_dev_if2;
++
++		if (sk == sk2)
++			continue;
++		bound_dev_if2 = READ_ONCE(sk2->sk_bound_dev_if);
++		if ((!sk->sk_bound_dev_if ||
++		     !bound_dev_if2 ||
++		     sk->sk_bound_dev_if == bound_dev_if2)) {
+ 			if (reuse && sk2->sk_reuse &&
+ 			    sk2->sk_state != TCP_LISTEN) {
+ 				if ((!relax ||
+@@ -912,11 +916,25 @@ void inet_csk_prepare_forced_close(struct sock *sk)
+ }
+ EXPORT_SYMBOL(inet_csk_prepare_forced_close);
+ 
++static int inet_ulp_can_listen(const struct sock *sk)
++{
++	const struct inet_connection_sock *icsk = inet_csk(sk);
++
++	if (icsk->icsk_ulp_ops && !icsk->icsk_ulp_ops->clone)
++		return -EINVAL;
++
++	return 0;
++}
++
+ int inet_csk_listen_start(struct sock *sk, int backlog)
+ {
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+ 	struct inet_sock *inet = inet_sk(sk);
+-	int err = -EADDRINUSE;
++	int err;
++
++	err = inet_ulp_can_listen(sk);
++	if (unlikely(err))
++		return err;
+ 
+ 	reqsk_queue_alloc(&icsk->icsk_accept_queue);
+ 
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index 41afc9155f311..542b66783493b 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -290,12 +290,11 @@ struct request_sock *cookie_tcp_reqsk_alloc(const struct request_sock_ops *ops,
+ 	struct tcp_request_sock *treq;
+ 	struct request_sock *req;
+ 
+-#ifdef CONFIG_MPTCP
+ 	if (sk_is_mptcp(sk))
+-		ops = &mptcp_subflow_request_sock_ops;
+-#endif
++		req = mptcp_subflow_reqsk_alloc(ops, sk, false);
++	else
++		req = inet_reqsk_alloc(ops, sk, false);
+ 
+-	req = inet_reqsk_alloc(ops, sk, false);
+ 	if (!req)
+ 		return NULL;
+ 
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 809ee0f32d598..6a1685461f89b 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -125,8 +125,11 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
+ 		tmp->sg.end = i;
+ 		if (apply) {
+ 			apply_bytes -= size;
+-			if (!apply_bytes)
++			if (!apply_bytes) {
++				if (sge->length)
++					sk_msg_iter_var_prev(i);
+ 				break;
++			}
+ 		}
+ 	} while (i != msg->sg.end);
+ 
+@@ -316,7 +319,7 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ 	bool cork = false, enospc = sk_msg_full(msg);
+ 	struct sock *sk_redir;
+ 	u32 tosend, origsize, sent, delta = 0;
+-	u32 eval = __SK_NONE;
++	u32 eval;
+ 	int ret;
+ 
+ more_data:
+@@ -347,6 +350,7 @@ more_data:
+ 	tosend = msg->sg.size;
+ 	if (psock->apply_bytes && psock->apply_bytes < tosend)
+ 		tosend = psock->apply_bytes;
++	eval = __SK_NONE;
+ 
+ 	switch (psock->eval) {
+ 	case __SK_PASS:
+diff --git a/net/ipv4/tcp_ulp.c b/net/ipv4/tcp_ulp.c
+index 7c27aa629af19..b5d707a5a31b8 100644
+--- a/net/ipv4/tcp_ulp.c
++++ b/net/ipv4/tcp_ulp.c
+@@ -136,6 +136,10 @@ static int __tcp_set_ulp(struct sock *sk, const struct tcp_ulp_ops *ulp_ops)
+ 	if (icsk->icsk_ulp_ops)
+ 		goto out_err;
+ 
++	err = -EINVAL;
++	if (!ulp_ops->clone && sk->sk_state == TCP_LISTEN)
++		goto out_err;
++
+ 	err = ulp_ops->init(sk);
+ 	if (err)
+ 		goto out_err;
+diff --git a/net/ipv4/udp_tunnel_core.c b/net/ipv4/udp_tunnel_core.c
+index 3eecba0874aa2..d70f683d3c495 100644
+--- a/net/ipv4/udp_tunnel_core.c
++++ b/net/ipv4/udp_tunnel_core.c
+@@ -194,6 +194,7 @@ EXPORT_SYMBOL_GPL(udp_tunnel_xmit_skb);
+ void udp_tunnel_sock_release(struct socket *sock)
+ {
+ 	rcu_assign_sk_user_data(sock->sk, NULL);
++	synchronize_rcu();
+ 	kernel_sock_shutdown(sock, SHUT_RDWR);
+ 	sock_release(sock);
+ }
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 3a15ef8dd3228..d04e5a1a7e0e7 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -2013,6 +2013,7 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
+ 
+ 		ret = register_netdevice(ndev);
+ 		if (ret) {
++			ieee80211_if_free(ndev);
+ 			free_netdev(ndev);
+ 			return ret;
+ 		}
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 2e92384909241..3b154ad4945c4 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -40,7 +40,6 @@ static void subflow_req_destructor(struct request_sock *req)
+ 		sock_put((struct sock *)subflow_req->msk);
+ 
+ 	mptcp_token_destroy_request(req);
+-	tcp_request_sock_ops.destructor(req);
+ }
+ 
+ static void subflow_generate_hmac(u64 key1, u64 key2, u32 nonce1, u32 nonce2,
+@@ -359,9 +358,8 @@ do_reset:
+ 	mptcp_subflow_reset(sk);
+ }
+ 
+-struct request_sock_ops mptcp_subflow_request_sock_ops;
+-EXPORT_SYMBOL_GPL(mptcp_subflow_request_sock_ops);
+-static struct tcp_request_sock_ops subflow_request_sock_ipv4_ops;
++static struct request_sock_ops mptcp_subflow_v4_request_sock_ops __ro_after_init;
++static struct tcp_request_sock_ops subflow_request_sock_ipv4_ops __ro_after_init;
+ 
+ static int subflow_v4_conn_request(struct sock *sk, struct sk_buff *skb)
+ {
+@@ -373,7 +371,7 @@ static int subflow_v4_conn_request(struct sock *sk, struct sk_buff *skb)
+ 	if (skb_rtable(skb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))
+ 		goto drop;
+ 
+-	return tcp_conn_request(&mptcp_subflow_request_sock_ops,
++	return tcp_conn_request(&mptcp_subflow_v4_request_sock_ops,
+ 				&subflow_request_sock_ipv4_ops,
+ 				sk, skb);
+ drop:
+@@ -381,10 +379,17 @@ drop:
+ 	return 0;
+ }
+ 
++static void subflow_v4_req_destructor(struct request_sock *req)
++{
++	subflow_req_destructor(req);
++	tcp_request_sock_ops.destructor(req);
++}
++
+ #if IS_ENABLED(CONFIG_MPTCP_IPV6)
+-static struct tcp_request_sock_ops subflow_request_sock_ipv6_ops;
+-static struct inet_connection_sock_af_ops subflow_v6_specific;
+-static struct inet_connection_sock_af_ops subflow_v6m_specific;
++static struct request_sock_ops mptcp_subflow_v6_request_sock_ops __ro_after_init;
++static struct tcp_request_sock_ops subflow_request_sock_ipv6_ops __ro_after_init;
++static struct inet_connection_sock_af_ops subflow_v6_specific __ro_after_init;
++static struct inet_connection_sock_af_ops subflow_v6m_specific __ro_after_init;
+ 
+ static int subflow_v6_conn_request(struct sock *sk, struct sk_buff *skb)
+ {
+@@ -403,15 +408,36 @@ static int subflow_v6_conn_request(struct sock *sk, struct sk_buff *skb)
+ 		return 0;
+ 	}
+ 
+-	return tcp_conn_request(&mptcp_subflow_request_sock_ops,
++	return tcp_conn_request(&mptcp_subflow_v6_request_sock_ops,
+ 				&subflow_request_sock_ipv6_ops, sk, skb);
+ 
+ drop:
+ 	tcp_listendrop(sk);
+ 	return 0; /* don't send reset */
+ }
++
++static void subflow_v6_req_destructor(struct request_sock *req)
++{
++	subflow_req_destructor(req);
++	tcp6_request_sock_ops.destructor(req);
++}
++#endif
++
++struct request_sock *mptcp_subflow_reqsk_alloc(const struct request_sock_ops *ops,
++					       struct sock *sk_listener,
++					       bool attach_listener)
++{
++	if (ops->family == AF_INET)
++		ops = &mptcp_subflow_v4_request_sock_ops;
++#if IS_ENABLED(CONFIG_MPTCP_IPV6)
++	else if (ops->family == AF_INET6)
++		ops = &mptcp_subflow_v6_request_sock_ops;
+ #endif
+ 
++	return inet_reqsk_alloc(ops, sk_listener, attach_listener);
++}
++EXPORT_SYMBOL(mptcp_subflow_reqsk_alloc);
++
+ /* validate hmac received in third ACK */
+ static bool subflow_hmac_valid(const struct request_sock *req,
+ 			       const struct mptcp_options_received *mp_opt)
+@@ -636,7 +662,7 @@ dispose_child:
+ 	return child;
+ }
+ 
+-static struct inet_connection_sock_af_ops subflow_specific;
++static struct inet_connection_sock_af_ops subflow_specific __ro_after_init;
+ 
+ enum mapping_status {
+ 	MAPPING_OK,
+@@ -1017,7 +1043,7 @@ static void subflow_write_space(struct sock *sk)
+ 	}
+ }
+ 
+-static struct inet_connection_sock_af_ops *
++static const struct inet_connection_sock_af_ops *
+ subflow_default_af_ops(struct sock *sk)
+ {
+ #if IS_ENABLED(CONFIG_MPTCP_IPV6)
+@@ -1032,7 +1058,7 @@ void mptcpv6_handle_mapped(struct sock *sk, bool mapped)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+-	struct inet_connection_sock_af_ops *target;
++	const struct inet_connection_sock_af_ops *target;
+ 
+ 	target = mapped ? &subflow_v6m_specific : subflow_default_af_ops(sk);
+ 
+@@ -1377,7 +1403,6 @@ static struct tcp_ulp_ops subflow_ulp_ops __read_mostly = {
+ static int subflow_ops_init(struct request_sock_ops *subflow_ops)
+ {
+ 	subflow_ops->obj_size = sizeof(struct mptcp_subflow_request_sock);
+-	subflow_ops->slab_name = "request_sock_subflow";
+ 
+ 	subflow_ops->slab = kmem_cache_create(subflow_ops->slab_name,
+ 					      subflow_ops->obj_size, 0,
+@@ -1387,16 +1412,17 @@ static int subflow_ops_init(struct request_sock_ops *subflow_ops)
+ 	if (!subflow_ops->slab)
+ 		return -ENOMEM;
+ 
+-	subflow_ops->destructor = subflow_req_destructor;
+-
+ 	return 0;
+ }
+ 
+ void __init mptcp_subflow_init(void)
+ {
+-	mptcp_subflow_request_sock_ops = tcp_request_sock_ops;
+-	if (subflow_ops_init(&mptcp_subflow_request_sock_ops) != 0)
+-		panic("MPTCP: failed to init subflow request sock ops\n");
++	mptcp_subflow_v4_request_sock_ops = tcp_request_sock_ops;
++	mptcp_subflow_v4_request_sock_ops.slab_name = "request_sock_subflow_v4";
++	mptcp_subflow_v4_request_sock_ops.destructor = subflow_v4_req_destructor;
++
++	if (subflow_ops_init(&mptcp_subflow_v4_request_sock_ops) != 0)
++		panic("MPTCP: failed to init subflow v4 request sock ops\n");
+ 
+ 	subflow_request_sock_ipv4_ops = tcp_request_sock_ipv4_ops;
+ 	subflow_request_sock_ipv4_ops.init_req = subflow_v4_init_req;
+@@ -1407,6 +1433,20 @@ void __init mptcp_subflow_init(void)
+ 	subflow_specific.sk_rx_dst_set = subflow_finish_connect;
+ 
+ #if IS_ENABLED(CONFIG_MPTCP_IPV6)
++	/* In struct mptcp_subflow_request_sock, we assume the TCP request sock
++	 * structures for v4 and v6 have the same size. It should not changed in
++	 * the future but better to make sure to be warned if it is no longer
++	 * the case.
++	 */
++	BUILD_BUG_ON(sizeof(struct tcp_request_sock) != sizeof(struct tcp6_request_sock));
++
++	mptcp_subflow_v6_request_sock_ops = tcp6_request_sock_ops;
++	mptcp_subflow_v6_request_sock_ops.slab_name = "request_sock_subflow_v6";
++	mptcp_subflow_v6_request_sock_ops.destructor = subflow_v6_req_destructor;
++
++	if (subflow_ops_init(&mptcp_subflow_v6_request_sock_ops) != 0)
++		panic("MPTCP: failed to init subflow v6 request sock ops\n");
++
+ 	subflow_request_sock_ipv6_ops = tcp_request_sock_ipv6_ops;
+ 	subflow_request_sock_ipv6_ops.init_req = subflow_v6_init_req;
+ 
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index c17a7dda0163f..1bf6ab83644b3 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -1708,9 +1708,10 @@ call_ad(struct sock *ctnl, struct sk_buff *skb, struct ip_set *set,
+ 		ret = set->variant->uadt(set, tb, adt, &lineno, flags, retried);
+ 		ip_set_unlock(set);
+ 		retried = true;
+-	} while (ret == -EAGAIN &&
+-		 set->variant->resize &&
+-		 (ret = set->variant->resize(set, retried)) == 0);
++	} while (ret == -ERANGE ||
++		 (ret == -EAGAIN &&
++		  set->variant->resize &&
++		  (ret = set->variant->resize(set, retried)) == 0));
+ 
+ 	if (!ret || (ret == -IPSET_ERR_EXIST && eexist))
+ 		return 0;
+diff --git a/net/netfilter/ipset/ip_set_hash_ip.c b/net/netfilter/ipset/ip_set_hash_ip.c
+index d7a81b2250e71..8720dc3bb6891 100644
+--- a/net/netfilter/ipset/ip_set_hash_ip.c
++++ b/net/netfilter/ipset/ip_set_hash_ip.c
+@@ -97,11 +97,11 @@ static int
+ hash_ip4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	      enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
+ {
+-	const struct hash_ip4 *h = set->data;
++	struct hash_ip4 *h = set->data;
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_ip4_elem e = { 0 };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 ip = 0, ip_to = 0, hosts;
++	u32 ip = 0, ip_to = 0, hosts, i = 0;
+ 	int ret = 0;
+ 
+ 	if (tb[IPSET_ATTR_LINENO])
+@@ -146,14 +146,14 @@ hash_ip4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 
+ 	hosts = h->netmask == 32 ? 1 : 2 << (32 - h->netmask - 1);
+ 
+-	/* 64bit division is not allowed on 32bit */
+-	if (((u64)ip_to - ip + 1) >> (32 - h->netmask) > IPSET_MAX_RANGE)
+-		return -ERANGE;
+-
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+-	for (; ip <= ip_to;) {
++	for (; ip <= ip_to; i++) {
+ 		e.ip = htonl(ip);
++		if (i > IPSET_MAX_RANGE) {
++			hash_ip4_data_next(&h->next, &e);
++			return -ERANGE;
++		}
+ 		ret = adtfn(set, &e, &ext, &ext, flags);
+ 		if (ret && !ip_set_eexist(ret, flags))
+ 			return ret;
+diff --git a/net/netfilter/ipset/ip_set_hash_ipmark.c b/net/netfilter/ipset/ip_set_hash_ipmark.c
+index eefce34a34f08..cbb05cb188f22 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipmark.c
++++ b/net/netfilter/ipset/ip_set_hash_ipmark.c
+@@ -96,11 +96,11 @@ static int
+ hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		  enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
+ {
+-	const struct hash_ipmark4 *h = set->data;
++	struct hash_ipmark4 *h = set->data;
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_ipmark4_elem e = { };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 ip, ip_to = 0;
++	u32 ip, ip_to = 0, i = 0;
+ 	int ret;
+ 
+ 	if (tb[IPSET_ATTR_LINENO])
+@@ -147,13 +147,14 @@ hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		ip_set_mask_from_to(ip, ip_to, cidr);
+ 	}
+ 
+-	if (((u64)ip_to - ip + 1) > IPSET_MAX_RANGE)
+-		return -ERANGE;
+-
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+-	for (; ip <= ip_to; ip++) {
++	for (; ip <= ip_to; ip++, i++) {
+ 		e.ip = htonl(ip);
++		if (i > IPSET_MAX_RANGE) {
++			hash_ipmark4_data_next(&h->next, &e);
++			return -ERANGE;
++		}
+ 		ret = adtfn(set, &e, &ext, &ext, flags);
+ 
+ 		if (ret && !ip_set_eexist(ret, flags))
+diff --git a/net/netfilter/ipset/ip_set_hash_ipport.c b/net/netfilter/ipset/ip_set_hash_ipport.c
+index 4a54e9e8ae59f..c560f7873ecaf 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipport.c
++++ b/net/netfilter/ipset/ip_set_hash_ipport.c
+@@ -104,11 +104,11 @@ static int
+ hash_ipport4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		  enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
+ {
+-	const struct hash_ipport4 *h = set->data;
++	struct hash_ipport4 *h = set->data;
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_ipport4_elem e = { .ip = 0 };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 ip, ip_to = 0, p = 0, port, port_to;
++	u32 ip, ip_to = 0, p = 0, port, port_to, i = 0;
+ 	bool with_ports = false;
+ 	int ret;
+ 
+@@ -172,17 +172,18 @@ hash_ipport4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 			swap(port, port_to);
+ 	}
+ 
+-	if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE)
+-		return -ERANGE;
+-
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+ 	for (; ip <= ip_to; ip++) {
+ 		p = retried && ip == ntohl(h->next.ip) ? ntohs(h->next.port)
+ 						       : port;
+-		for (; p <= port_to; p++) {
++		for (; p <= port_to; p++, i++) {
+ 			e.ip = htonl(ip);
+ 			e.port = htons(p);
++			if (i > IPSET_MAX_RANGE) {
++				hash_ipport4_data_next(&h->next, &e);
++				return -ERANGE;
++			}
+ 			ret = adtfn(set, &e, &ext, &ext, flags);
+ 
+ 			if (ret && !ip_set_eexist(ret, flags))
+diff --git a/net/netfilter/ipset/ip_set_hash_ipportip.c b/net/netfilter/ipset/ip_set_hash_ipportip.c
+index 09737de5ecc34..b7eb8d1e77d9c 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipportip.c
++++ b/net/netfilter/ipset/ip_set_hash_ipportip.c
+@@ -107,11 +107,11 @@ static int
+ hash_ipportip4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		    enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
+ {
+-	const struct hash_ipportip4 *h = set->data;
++	struct hash_ipportip4 *h = set->data;
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_ipportip4_elem e = { .ip = 0 };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 ip, ip_to = 0, p = 0, port, port_to;
++	u32 ip, ip_to = 0, p = 0, port, port_to, i = 0;
+ 	bool with_ports = false;
+ 	int ret;
+ 
+@@ -179,17 +179,18 @@ hash_ipportip4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 			swap(port, port_to);
+ 	}
+ 
+-	if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE)
+-		return -ERANGE;
+-
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+ 	for (; ip <= ip_to; ip++) {
+ 		p = retried && ip == ntohl(h->next.ip) ? ntohs(h->next.port)
+ 						       : port;
+-		for (; p <= port_to; p++) {
++		for (; p <= port_to; p++, i++) {
+ 			e.ip = htonl(ip);
+ 			e.port = htons(p);
++			if (i > IPSET_MAX_RANGE) {
++				hash_ipportip4_data_next(&h->next, &e);
++				return -ERANGE;
++			}
+ 			ret = adtfn(set, &e, &ext, &ext, flags);
+ 
+ 			if (ret && !ip_set_eexist(ret, flags))
+diff --git a/net/netfilter/ipset/ip_set_hash_ipportnet.c b/net/netfilter/ipset/ip_set_hash_ipportnet.c
+index 02685371a6828..16c5641ced53c 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipportnet.c
++++ b/net/netfilter/ipset/ip_set_hash_ipportnet.c
+@@ -159,12 +159,12 @@ static int
+ hash_ipportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		     enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
+ {
+-	const struct hash_ipportnet4 *h = set->data;
++	struct hash_ipportnet4 *h = set->data;
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_ipportnet4_elem e = { .cidr = HOST_MASK - 1 };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+ 	u32 ip = 0, ip_to = 0, p = 0, port, port_to;
+-	u32 ip2_from = 0, ip2_to = 0, ip2;
++	u32 ip2_from = 0, ip2_to = 0, ip2, i = 0;
+ 	bool with_ports = false;
+ 	u8 cidr;
+ 	int ret;
+@@ -252,9 +252,6 @@ hash_ipportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 			swap(port, port_to);
+ 	}
+ 
+-	if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE)
+-		return -ERANGE;
+-
+ 	ip2_to = ip2_from;
+ 	if (tb[IPSET_ATTR_IP2_TO]) {
+ 		ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP2_TO], &ip2_to);
+@@ -281,9 +278,15 @@ hash_ipportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		for (; p <= port_to; p++) {
+ 			e.port = htons(p);
+ 			do {
++				i++;
+ 				e.ip2 = htonl(ip2);
+ 				ip2 = ip_set_range_to_cidr(ip2, ip2_to, &cidr);
+ 				e.cidr = cidr - 1;
++				if (i > IPSET_MAX_RANGE) {
++					hash_ipportnet4_data_next(&h->next,
++								  &e);
++					return -ERANGE;
++				}
+ 				ret = adtfn(set, &e, &ext, &ext, flags);
+ 
+ 				if (ret && !ip_set_eexist(ret, flags))
+diff --git a/net/netfilter/ipset/ip_set_hash_net.c b/net/netfilter/ipset/ip_set_hash_net.c
+index 9d1beaacb9730..5ab5873d1d162 100644
+--- a/net/netfilter/ipset/ip_set_hash_net.c
++++ b/net/netfilter/ipset/ip_set_hash_net.c
+@@ -135,11 +135,11 @@ static int
+ hash_net4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	       enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
+ {
+-	const struct hash_net4 *h = set->data;
++	struct hash_net4 *h = set->data;
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_net4_elem e = { .cidr = HOST_MASK };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 ip = 0, ip_to = 0, ipn, n = 0;
++	u32 ip = 0, ip_to = 0, i = 0;
+ 	int ret;
+ 
+ 	if (tb[IPSET_ATTR_LINENO])
+@@ -187,19 +187,16 @@ hash_net4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		if (ip + UINT_MAX == ip_to)
+ 			return -IPSET_ERR_HASH_RANGE;
+ 	}
+-	ipn = ip;
+-	do {
+-		ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr);
+-		n++;
+-	} while (ipn++ < ip_to);
+-
+-	if (n > IPSET_MAX_RANGE)
+-		return -ERANGE;
+ 
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+ 	do {
++		i++;
+ 		e.ip = htonl(ip);
++		if (i > IPSET_MAX_RANGE) {
++			hash_net4_data_next(&h->next, &e);
++			return -ERANGE;
++		}
+ 		ip = ip_set_range_to_cidr(ip, ip_to, &e.cidr);
+ 		ret = adtfn(set, &e, &ext, &ext, flags);
+ 		if (ret && !ip_set_eexist(ret, flags))
+diff --git a/net/netfilter/ipset/ip_set_hash_netiface.c b/net/netfilter/ipset/ip_set_hash_netiface.c
+index c3ada9c63fa38..7ef240380a457 100644
+--- a/net/netfilter/ipset/ip_set_hash_netiface.c
++++ b/net/netfilter/ipset/ip_set_hash_netiface.c
+@@ -201,7 +201,7 @@ hash_netiface4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_netiface4_elem e = { .cidr = HOST_MASK, .elem = 1 };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 ip = 0, ip_to = 0, ipn, n = 0;
++	u32 ip = 0, ip_to = 0, i = 0;
+ 	int ret;
+ 
+ 	if (tb[IPSET_ATTR_LINENO])
+@@ -255,19 +255,16 @@ hash_netiface4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	} else {
+ 		ip_set_mask_from_to(ip, ip_to, e.cidr);
+ 	}
+-	ipn = ip;
+-	do {
+-		ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr);
+-		n++;
+-	} while (ipn++ < ip_to);
+-
+-	if (n > IPSET_MAX_RANGE)
+-		return -ERANGE;
+ 
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+ 	do {
++		i++;
+ 		e.ip = htonl(ip);
++		if (i > IPSET_MAX_RANGE) {
++			hash_netiface4_data_next(&h->next, &e);
++			return -ERANGE;
++		}
+ 		ip = ip_set_range_to_cidr(ip, ip_to, &e.cidr);
+ 		ret = adtfn(set, &e, &ext, &ext, flags);
+ 
+diff --git a/net/netfilter/ipset/ip_set_hash_netnet.c b/net/netfilter/ipset/ip_set_hash_netnet.c
+index b1411bc91a404..15f4b0292f0de 100644
+--- a/net/netfilter/ipset/ip_set_hash_netnet.c
++++ b/net/netfilter/ipset/ip_set_hash_netnet.c
+@@ -162,13 +162,12 @@ static int
+ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		  enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
+ {
+-	const struct hash_netnet4 *h = set->data;
++	struct hash_netnet4 *h = set->data;
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_netnet4_elem e = { };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+ 	u32 ip = 0, ip_to = 0;
+-	u32 ip2 = 0, ip2_from = 0, ip2_to = 0, ipn;
+-	u64 n = 0, m = 0;
++	u32 ip2 = 0, ip2_from = 0, ip2_to = 0, i = 0;
+ 	int ret;
+ 
+ 	if (tb[IPSET_ATTR_LINENO])
+@@ -244,19 +243,6 @@ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	} else {
+ 		ip_set_mask_from_to(ip2_from, ip2_to, e.cidr[1]);
+ 	}
+-	ipn = ip;
+-	do {
+-		ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr[0]);
+-		n++;
+-	} while (ipn++ < ip_to);
+-	ipn = ip2_from;
+-	do {
+-		ipn = ip_set_range_to_cidr(ipn, ip2_to, &e.cidr[1]);
+-		m++;
+-	} while (ipn++ < ip2_to);
+-
+-	if (n*m > IPSET_MAX_RANGE)
+-		return -ERANGE;
+ 
+ 	if (retried) {
+ 		ip = ntohl(h->next.ip[0]);
+@@ -269,7 +255,12 @@ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		e.ip[0] = htonl(ip);
+ 		ip = ip_set_range_to_cidr(ip, ip_to, &e.cidr[0]);
+ 		do {
++			i++;
+ 			e.ip[1] = htonl(ip2);
++			if (i > IPSET_MAX_RANGE) {
++				hash_netnet4_data_next(&h->next, &e);
++				return -ERANGE;
++			}
+ 			ip2 = ip_set_range_to_cidr(ip2, ip2_to, &e.cidr[1]);
+ 			ret = adtfn(set, &e, &ext, &ext, flags);
+ 			if (ret && !ip_set_eexist(ret, flags))
+diff --git a/net/netfilter/ipset/ip_set_hash_netport.c b/net/netfilter/ipset/ip_set_hash_netport.c
+index d26d13528fe8b..e73ba50afe96f 100644
+--- a/net/netfilter/ipset/ip_set_hash_netport.c
++++ b/net/netfilter/ipset/ip_set_hash_netport.c
+@@ -153,12 +153,11 @@ static int
+ hash_netport4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		   enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
+ {
+-	const struct hash_netport4 *h = set->data;
++	struct hash_netport4 *h = set->data;
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_netport4_elem e = { .cidr = HOST_MASK - 1 };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 port, port_to, p = 0, ip = 0, ip_to = 0, ipn;
+-	u64 n = 0;
++	u32 port, port_to, p = 0, ip = 0, ip_to = 0, i = 0;
+ 	bool with_ports = false;
+ 	u8 cidr;
+ 	int ret;
+@@ -235,14 +234,6 @@ hash_netport4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	} else {
+ 		ip_set_mask_from_to(ip, ip_to, e.cidr + 1);
+ 	}
+-	ipn = ip;
+-	do {
+-		ipn = ip_set_range_to_cidr(ipn, ip_to, &cidr);
+-		n++;
+-	} while (ipn++ < ip_to);
+-
+-	if (n*(port_to - port + 1) > IPSET_MAX_RANGE)
+-		return -ERANGE;
+ 
+ 	if (retried) {
+ 		ip = ntohl(h->next.ip);
+@@ -254,8 +245,12 @@ hash_netport4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		e.ip = htonl(ip);
+ 		ip = ip_set_range_to_cidr(ip, ip_to, &cidr);
+ 		e.cidr = cidr - 1;
+-		for (; p <= port_to; p++) {
++		for (; p <= port_to; p++, i++) {
+ 			e.port = htons(p);
++			if (i > IPSET_MAX_RANGE) {
++				hash_netport4_data_next(&h->next, &e);
++				return -ERANGE;
++			}
+ 			ret = adtfn(set, &e, &ext, &ext, flags);
+ 			if (ret && !ip_set_eexist(ret, flags))
+ 				return ret;
+diff --git a/net/netfilter/ipset/ip_set_hash_netportnet.c b/net/netfilter/ipset/ip_set_hash_netportnet.c
+index 6446f4fccc729..144346faffc13 100644
+--- a/net/netfilter/ipset/ip_set_hash_netportnet.c
++++ b/net/netfilter/ipset/ip_set_hash_netportnet.c
+@@ -172,17 +172,26 @@ hash_netportnet4_kadt(struct ip_set *set, const struct sk_buff *skb,
+ 	return adtfn(set, &e, &ext, &opt->ext, opt->cmdflags);
+ }
+ 
++static u32
++hash_netportnet4_range_to_cidr(u32 from, u32 to, u8 *cidr)
++{
++	if (from == 0 && to == UINT_MAX) {
++		*cidr = 0;
++		return to;
++	}
++	return ip_set_range_to_cidr(from, to, cidr);
++}
++
+ static int
+ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		      enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
+ {
+-	const struct hash_netportnet4 *h = set->data;
++	struct hash_netportnet4 *h = set->data;
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_netportnet4_elem e = { };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+ 	u32 ip = 0, ip_to = 0, p = 0, port, port_to;
+-	u32 ip2_from = 0, ip2_to = 0, ip2, ipn;
+-	u64 n = 0, m = 0;
++	u32 ip2_from = 0, ip2_to = 0, ip2, i = 0;
+ 	bool with_ports = false;
+ 	int ret;
+ 
+@@ -284,19 +293,6 @@ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	} else {
+ 		ip_set_mask_from_to(ip2_from, ip2_to, e.cidr[1]);
+ 	}
+-	ipn = ip;
+-	do {
+-		ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr[0]);
+-		n++;
+-	} while (ipn++ < ip_to);
+-	ipn = ip2_from;
+-	do {
+-		ipn = ip_set_range_to_cidr(ipn, ip2_to, &e.cidr[1]);
+-		m++;
+-	} while (ipn++ < ip2_to);
+-
+-	if (n*m*(port_to - port + 1) > IPSET_MAX_RANGE)
+-		return -ERANGE;
+ 
+ 	if (retried) {
+ 		ip = ntohl(h->next.ip[0]);
+@@ -309,13 +305,19 @@ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 
+ 	do {
+ 		e.ip[0] = htonl(ip);
+-		ip = ip_set_range_to_cidr(ip, ip_to, &e.cidr[0]);
++		ip = hash_netportnet4_range_to_cidr(ip, ip_to, &e.cidr[0]);
+ 		for (; p <= port_to; p++) {
+ 			e.port = htons(p);
+ 			do {
++				i++;
+ 				e.ip[1] = htonl(ip2);
+-				ip2 = ip_set_range_to_cidr(ip2, ip2_to,
+-							   &e.cidr[1]);
++				if (i > IPSET_MAX_RANGE) {
++					hash_netportnet4_data_next(&h->next,
++								   &e);
++					return -ERANGE;
++				}
++				ip2 = hash_netportnet4_range_to_cidr(ip2,
++							ip2_to, &e.cidr[1]);
+ 				ret = adtfn(set, &e, &ext, &ext, flags);
+ 				if (ret && !ip_set_eexist(ret, flags))
+ 					return ret;
+diff --git a/net/netfilter/nf_conntrack_proto_icmpv6.c b/net/netfilter/nf_conntrack_proto_icmpv6.c
+index facd8c64ec4eb..f1a87de1c60ef 100644
+--- a/net/netfilter/nf_conntrack_proto_icmpv6.c
++++ b/net/netfilter/nf_conntrack_proto_icmpv6.c
+@@ -130,6 +130,56 @@ static void icmpv6_error_log(const struct sk_buff *skb,
+ 			       IPPROTO_ICMPV6, "%s", msg);
+ }
+ 
++static noinline_for_stack int
++nf_conntrack_icmpv6_redirect(struct nf_conn *tmpl, struct sk_buff *skb,
++			     unsigned int dataoff,
++			     const struct nf_hook_state *state)
++{
++	u8 hl = ipv6_hdr(skb)->hop_limit;
++	union nf_inet_addr outer_daddr;
++	union {
++		struct nd_opt_hdr nd_opt;
++		struct rd_msg rd_msg;
++	} tmp;
++	const struct nd_opt_hdr *nd_opt;
++	const struct rd_msg *rd_msg;
++
++	rd_msg = skb_header_pointer(skb, dataoff, sizeof(*rd_msg), &tmp.rd_msg);
++	if (!rd_msg) {
++		icmpv6_error_log(skb, state, "short redirect");
++		return -NF_ACCEPT;
++	}
++
++	if (rd_msg->icmph.icmp6_code != 0)
++		return NF_ACCEPT;
++
++	if (hl != 255 || !(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)) {
++		icmpv6_error_log(skb, state, "invalid saddr or hoplimit for redirect");
++		return -NF_ACCEPT;
++	}
++
++	dataoff += sizeof(*rd_msg);
++
++	/* warning: rd_msg no longer usable after this call */
++	nd_opt = skb_header_pointer(skb, dataoff, sizeof(*nd_opt), &tmp.nd_opt);
++	if (!nd_opt || nd_opt->nd_opt_len == 0) {
++		icmpv6_error_log(skb, state, "redirect without options");
++		return -NF_ACCEPT;
++	}
++
++	/* We could call ndisc_parse_options(), but it would need
++	 * skb_linearize() and a bit more work.
++	 */
++	if (nd_opt->nd_opt_type != ND_OPT_REDIRECT_HDR)
++		return NF_ACCEPT;
++
++	memcpy(&outer_daddr.ip6, &ipv6_hdr(skb)->daddr,
++	       sizeof(outer_daddr.ip6));
++	dataoff += 8;
++	return nf_conntrack_inet_error(tmpl, skb, dataoff, state,
++				       IPPROTO_ICMPV6, &outer_daddr);
++}
++
+ int nf_conntrack_icmpv6_error(struct nf_conn *tmpl,
+ 			      struct sk_buff *skb,
+ 			      unsigned int dataoff,
+@@ -160,6 +210,9 @@ int nf_conntrack_icmpv6_error(struct nf_conn *tmpl,
+ 		return NF_ACCEPT;
+ 	}
+ 
++	if (icmp6h->icmp6_type == NDISC_REDIRECT)
++		return nf_conntrack_icmpv6_redirect(tmpl, skb, dataoff, state);
++
+ 	/* is not error message ? */
+ 	if (icmp6h->icmp6_type >= 128)
+ 		return NF_ACCEPT;
+diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
+index 28306cb667190..746ca77d0aad6 100644
+--- a/net/netfilter/nf_flow_table_offload.c
++++ b/net/netfilter/nf_flow_table_offload.c
+@@ -305,12 +305,12 @@ static void flow_offload_ipv6_mangle(struct nf_flow_rule *flow_rule,
+ 				     const __be32 *addr, const __be32 *mask)
+ {
+ 	struct flow_action_entry *entry;
+-	int i, j;
++	int i;
+ 
+-	for (i = 0, j = 0; i < sizeof(struct in6_addr) / sizeof(u32); i += sizeof(u32), j++) {
++	for (i = 0; i < sizeof(struct in6_addr) / sizeof(u32); i++) {
+ 		entry = flow_action_entry_next(flow_rule);
+ 		flow_offload_mangle(entry, FLOW_ACT_MANGLE_HDR_TYPE_IP6,
+-				    offset + i, &addr[j], mask);
++				    offset + i * sizeof(u32), &addr[i], mask);
+ 	}
+ }
+ 
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index b8939ebaa6d3e..610caea4feec8 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -1497,6 +1497,7 @@ static int nfc_genl_se_io(struct sk_buff *skb, struct genl_info *info)
+ 	u32 dev_idx, se_idx;
+ 	u8 *apdu;
+ 	size_t apdu_len;
++	int rc;
+ 
+ 	if (!info->attrs[NFC_ATTR_DEVICE_INDEX] ||
+ 	    !info->attrs[NFC_ATTR_SE_INDEX] ||
+@@ -1510,25 +1511,37 @@ static int nfc_genl_se_io(struct sk_buff *skb, struct genl_info *info)
+ 	if (!dev)
+ 		return -ENODEV;
+ 
+-	if (!dev->ops || !dev->ops->se_io)
+-		return -ENOTSUPP;
++	if (!dev->ops || !dev->ops->se_io) {
++		rc = -EOPNOTSUPP;
++		goto put_dev;
++	}
+ 
+ 	apdu_len = nla_len(info->attrs[NFC_ATTR_SE_APDU]);
+-	if (apdu_len == 0)
+-		return -EINVAL;
++	if (apdu_len == 0) {
++		rc = -EINVAL;
++		goto put_dev;
++	}
+ 
+ 	apdu = nla_data(info->attrs[NFC_ATTR_SE_APDU]);
+-	if (!apdu)
+-		return -EINVAL;
++	if (!apdu) {
++		rc = -EINVAL;
++		goto put_dev;
++	}
+ 
+ 	ctx = kzalloc(sizeof(struct se_io_ctx), GFP_KERNEL);
+-	if (!ctx)
+-		return -ENOMEM;
++	if (!ctx) {
++		rc = -ENOMEM;
++		goto put_dev;
++	}
+ 
+ 	ctx->dev_idx = dev_idx;
+ 	ctx->se_idx = se_idx;
+ 
+-	return nfc_se_io(dev, se_idx, apdu, apdu_len, se_io_cb, ctx);
++	rc = nfc_se_io(dev, se_idx, apdu, apdu_len, se_io_cb, ctx);
++
++put_dev:
++	nfc_put_device(dev);
++	return rc;
+ }
+ 
+ static int nfc_genl_vendor_cmd(struct sk_buff *skb,
+@@ -1551,14 +1564,21 @@ static int nfc_genl_vendor_cmd(struct sk_buff *skb,
+ 	subcmd = nla_get_u32(info->attrs[NFC_ATTR_VENDOR_SUBCMD]);
+ 
+ 	dev = nfc_get_device(dev_idx);
+-	if (!dev || !dev->vendor_cmds || !dev->n_vendor_cmds)
++	if (!dev)
+ 		return -ENODEV;
+ 
++	if (!dev->vendor_cmds || !dev->n_vendor_cmds) {
++		err = -ENODEV;
++		goto put_dev;
++	}
++
+ 	if (info->attrs[NFC_ATTR_VENDOR_DATA]) {
+ 		data = nla_data(info->attrs[NFC_ATTR_VENDOR_DATA]);
+ 		data_len = nla_len(info->attrs[NFC_ATTR_VENDOR_DATA]);
+-		if (data_len == 0)
+-			return -EINVAL;
++		if (data_len == 0) {
++			err = -EINVAL;
++			goto put_dev;
++		}
+ 	} else {
+ 		data = NULL;
+ 		data_len = 0;
+@@ -1573,10 +1593,14 @@ static int nfc_genl_vendor_cmd(struct sk_buff *skb,
+ 		dev->cur_cmd_info = info;
+ 		err = cmd->doit(dev, data, data_len);
+ 		dev->cur_cmd_info = NULL;
+-		return err;
++		goto put_dev;
+ 	}
+ 
+-	return -EOPNOTSUPP;
++	err = -EOPNOTSUPP;
++
++put_dev:
++	nfc_put_device(dev);
++	return err;
+ }
+ 
+ /* message building helper */
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index 7ed97dc0b5617..435f7f1be6146 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -933,6 +933,7 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ 	struct sw_flow_mask mask;
+ 	struct sk_buff *reply;
+ 	struct datapath *dp;
++	struct sw_flow_key *key;
+ 	struct sw_flow_actions *acts;
+ 	struct sw_flow_match match;
+ 	u32 ufid_flags = ovs_nla_get_ufid_flags(a[OVS_FLOW_ATTR_UFID_FLAGS]);
+@@ -960,24 +961,26 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ 	}
+ 
+ 	/* Extract key. */
+-	ovs_match_init(&match, &new_flow->key, false, &mask);
++	key = kzalloc(sizeof(*key), GFP_KERNEL);
++	if (!key) {
++		error = -ENOMEM;
++		goto err_kfree_key;
++	}
++
++	ovs_match_init(&match, key, false, &mask);
+ 	error = ovs_nla_get_match(net, &match, a[OVS_FLOW_ATTR_KEY],
+ 				  a[OVS_FLOW_ATTR_MASK], log);
+ 	if (error)
+ 		goto err_kfree_flow;
+ 
++	ovs_flow_mask_key(&new_flow->key, key, true, &mask);
++
+ 	/* Extract flow identifier. */
+ 	error = ovs_nla_get_identifier(&new_flow->id, a[OVS_FLOW_ATTR_UFID],
+-				       &new_flow->key, log);
++				       key, log);
+ 	if (error)
+ 		goto err_kfree_flow;
+ 
+-	/* unmasked key is needed to match when ufid is not used. */
+-	if (ovs_identifier_is_key(&new_flow->id))
+-		match.key = new_flow->id.unmasked_key;
+-
+-	ovs_flow_mask_key(&new_flow->key, &new_flow->key, true, &mask);
+-
+ 	/* Validate actions. */
+ 	error = ovs_nla_copy_actions(net, a[OVS_FLOW_ATTR_ACTIONS],
+ 				     &new_flow->key, &acts, log);
+@@ -1004,7 +1007,7 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ 	if (ovs_identifier_is_ufid(&new_flow->id))
+ 		flow = ovs_flow_tbl_lookup_ufid(&dp->table, &new_flow->id);
+ 	if (!flow)
+-		flow = ovs_flow_tbl_lookup(&dp->table, &new_flow->key);
++		flow = ovs_flow_tbl_lookup(&dp->table, key);
+ 	if (likely(!flow)) {
+ 		rcu_assign_pointer(new_flow->sf_acts, acts);
+ 
+@@ -1074,6 +1077,8 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	if (reply)
+ 		ovs_notify(&dp_flow_genl_family, reply, info);
++
++	kfree(key);
+ 	return 0;
+ 
+ err_unlock_ovs:
+@@ -1083,6 +1088,8 @@ err_kfree_acts:
+ 	ovs_nla_free_flow_actions(acts);
+ err_kfree_flow:
+ 	ovs_flow_free(new_flow, false);
++err_kfree_key:
++	kfree(key);
+ error:
+ 	return error;
+ }
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index eaa030e2ad55a..3716797c55643 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1885,12 +1885,22 @@ oom:
+ 
+ static void packet_parse_headers(struct sk_buff *skb, struct socket *sock)
+ {
++	int depth;
++
+ 	if ((!skb->protocol || skb->protocol == htons(ETH_P_ALL)) &&
+ 	    sock->type == SOCK_RAW) {
+ 		skb_reset_mac_header(skb);
+ 		skb->protocol = dev_parse_header_protocol(skb);
+ 	}
+ 
++	/* Move network header to the right position for VLAN tagged packets */
++	if (likely(skb->dev->type == ARPHRD_ETHER) &&
++	    eth_type_vlan(skb->protocol) &&
++	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0) {
++		if (pskb_may_pull(skb, depth))
++			skb_set_network_header(skb, depth);
++	}
++
+ 	skb_probe_transport_header(skb);
+ }
+ 
+@@ -3005,6 +3015,11 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 	skb->mark = sockc.mark;
+ 	skb->tstamp = sockc.transmit_time;
+ 
++	if (unlikely(extra_len == 4))
++		skb->no_fcs = 1;
++
++	packet_parse_headers(skb, sock);
++
+ 	if (has_vnet_hdr) {
+ 		err = virtio_net_hdr_to_skb(skb, &vnet_hdr, vio_le());
+ 		if (err)
+@@ -3013,11 +3028,6 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 		virtio_net_hdr_set_proto(skb, &vnet_hdr);
+ 	}
+ 
+-	packet_parse_headers(skb, sock);
+-
+-	if (unlikely(extra_len == 4))
+-		skb->no_fcs = 1;
+-
+ 	err = po->xmit(skb);
+ 	if (unlikely(err != 0)) {
+ 		if (err > 0)
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index 9683617db7049..08c117bc083ec 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -93,7 +93,7 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn,
+ 	*_hard_ack = hard_ack;
+ 	*_top = top;
+ 
+-	pkt->ack.bufferSpace	= htons(8);
++	pkt->ack.bufferSpace	= htons(0);
+ 	pkt->ack.maxSkew	= htons(0);
+ 	pkt->ack.firstPacket	= htonl(hard_ack + 1);
+ 	pkt->ack.previousPacket	= htonl(call->ackr_highest_seq);
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index eef3c14fd1c18..a670553159abe 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -733,7 +733,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ 			if (call->tx_total_len != -1 ||
+ 			    call->tx_pending ||
+ 			    call->tx_top != 0)
+-				goto error_put;
++				goto out_put_unlock;
+ 			call->tx_total_len = p.call.tx_total_len;
+ 		}
+ 	}
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index e9a8a2c86bbdd..86250221d08d4 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -332,7 +332,7 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 		  struct tcindex_filter_result *r, struct nlattr **tb,
+ 		  struct nlattr *est, bool ovr, struct netlink_ext_ack *extack)
+ {
+-	struct tcindex_filter_result new_filter_result, *old_r = r;
++	struct tcindex_filter_result new_filter_result;
+ 	struct tcindex_data *cp = NULL, *oldp;
+ 	struct tcindex_filter *f = NULL; /* make gcc behave */
+ 	struct tcf_result cr = {};
+@@ -401,7 +401,7 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 	err = tcindex_filter_result_init(&new_filter_result, cp, net);
+ 	if (err < 0)
+ 		goto errout_alloc;
+-	if (old_r)
++	if (r)
+ 		cr = r->res;
+ 
+ 	err = -EBUSY;
+@@ -478,14 +478,6 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 		tcf_bind_filter(tp, &cr, base);
+ 	}
+ 
+-	if (old_r && old_r != r) {
+-		err = tcindex_filter_result_init(old_r, cp, net);
+-		if (err < 0) {
+-			kfree(f);
+-			goto errout_alloc;
+-		}
+-	}
+-
+ 	oldp = p;
+ 	r->res = cr;
+ 	tcf_exts_change(&r->exts, &e);
+diff --git a/net/sched/ematch.c b/net/sched/ematch.c
+index f885bea5b4526..b7154103e9cd3 100644
+--- a/net/sched/ematch.c
++++ b/net/sched/ematch.c
+@@ -255,6 +255,8 @@ static int tcf_em_validate(struct tcf_proto *tp,
+ 			 * the value carried.
+ 			 */
+ 			if (em_hdr->flags & TCF_EM_SIMPLE) {
++				if (em->ops->datalen > 0)
++					goto errout;
+ 				if (data_len < sizeof(u32))
+ 					goto errout;
+ 				em->data = *(u32 *) data;
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index d8ffe41143856..54e2309315eb5 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1114,6 +1114,11 @@ skip:
+ 			return -ENOENT;
+ 		}
+ 
++		if (new && new->ops == &noqueue_qdisc_ops) {
++			NL_SET_ERR_MSG(extack, "Cannot assign noqueue to a class");
++			return -EINVAL;
++		}
++
+ 		err = cops->graft(parent, cl, new, &old, extack);
+ 		if (err)
+ 			return err;
+diff --git a/net/sched/sch_atm.c b/net/sched/sch_atm.c
+index 794c7377cd7e9..95967ce1f370a 100644
+--- a/net/sched/sch_atm.c
++++ b/net/sched/sch_atm.c
+@@ -396,10 +396,13 @@ static int atm_tc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 				result = tcf_classify(skb, fl, &res, true);
+ 				if (result < 0)
+ 					continue;
++				if (result == TC_ACT_SHOT)
++					goto done;
++
+ 				flow = (struct atm_flow_data *)res.class;
+ 				if (!flow)
+ 					flow = lookup_flow(sch, res.classid);
+-				goto done;
++				goto drop;
+ 			}
+ 		}
+ 		flow = NULL;
+diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
+index 9a3dff02b7a2b..3da5eb313c246 100644
+--- a/net/sched/sch_cbq.c
++++ b/net/sched/sch_cbq.c
+@@ -231,6 +231,8 @@ cbq_classify(struct sk_buff *skb, struct Qdisc *sch, int *qerr)
+ 		result = tcf_classify(skb, fl, &res, true);
+ 		if (!fl || result < 0)
+ 			goto fallback;
++		if (result == TC_ACT_SHOT)
++			return NULL;
+ 
+ 		cl = (void *)res.class;
+ 		if (!cl) {
+@@ -251,8 +253,6 @@ cbq_classify(struct sk_buff *skb, struct Qdisc *sch, int *qerr)
+ 		case TC_ACT_TRAP:
+ 			*qerr = NET_XMIT_SUCCESS | __NET_XMIT_STOLEN;
+ 			fallthrough;
+-		case TC_ACT_SHOT:
+-			return NULL;
+ 		case TC_ACT_RECLASSIFY:
+ 			return cbq_reclassify(skb, cl);
+ 		}
+diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c
+index c16c80963e555..e4af050aec1be 100644
+--- a/net/sctp/sysctl.c
++++ b/net/sctp/sysctl.c
+@@ -79,17 +79,18 @@ static struct ctl_table sctp_table[] = {
+ 	{ /* sentinel */ }
+ };
+ 
++/* The following index defines are used in sctp_sysctl_net_register().
++ * If you add new items to the sctp_net_table, please ensure that
++ * the index values of these defines hold the same meaning indicated by
++ * their macro names when they appear in sctp_net_table.
++ */
++#define SCTP_RTO_MIN_IDX       0
++#define SCTP_RTO_MAX_IDX       1
++#define SCTP_PF_RETRANS_IDX    2
++#define SCTP_PS_RETRANS_IDX    3
++
+ static struct ctl_table sctp_net_table[] = {
+-	{
+-		.procname	= "rto_initial",
+-		.data		= &init_net.sctp.rto_initial,
+-		.maxlen		= sizeof(unsigned int),
+-		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
+-		.extra1         = SYSCTL_ONE,
+-		.extra2         = &timer_max
+-	},
+-	{
++	[SCTP_RTO_MIN_IDX] = {
+ 		.procname	= "rto_min",
+ 		.data		= &init_net.sctp.rto_min,
+ 		.maxlen		= sizeof(unsigned int),
+@@ -98,7 +99,7 @@ static struct ctl_table sctp_net_table[] = {
+ 		.extra1         = SYSCTL_ONE,
+ 		.extra2         = &init_net.sctp.rto_max
+ 	},
+-	{
++	[SCTP_RTO_MAX_IDX] =  {
+ 		.procname	= "rto_max",
+ 		.data		= &init_net.sctp.rto_max,
+ 		.maxlen		= sizeof(unsigned int),
+@@ -107,6 +108,33 @@ static struct ctl_table sctp_net_table[] = {
+ 		.extra1         = &init_net.sctp.rto_min,
+ 		.extra2         = &timer_max
+ 	},
++	[SCTP_PF_RETRANS_IDX] = {
++		.procname	= "pf_retrans",
++		.data		= &init_net.sctp.pf_retrans,
++		.maxlen		= sizeof(int),
++		.mode		= 0644,
++		.proc_handler	= proc_dointvec_minmax,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= &init_net.sctp.ps_retrans,
++	},
++	[SCTP_PS_RETRANS_IDX] = {
++		.procname	= "ps_retrans",
++		.data		= &init_net.sctp.ps_retrans,
++		.maxlen		= sizeof(int),
++		.mode		= 0644,
++		.proc_handler	= proc_dointvec_minmax,
++		.extra1		= &init_net.sctp.pf_retrans,
++		.extra2		= &ps_retrans_max,
++	},
++	{
++		.procname	= "rto_initial",
++		.data		= &init_net.sctp.rto_initial,
++		.maxlen		= sizeof(unsigned int),
++		.mode		= 0644,
++		.proc_handler	= proc_dointvec_minmax,
++		.extra1         = SYSCTL_ONE,
++		.extra2         = &timer_max
++	},
+ 	{
+ 		.procname	= "rto_alpha_exp_divisor",
+ 		.data		= &init_net.sctp.rto_alpha,
+@@ -202,24 +230,6 @@ static struct ctl_table sctp_net_table[] = {
+ 		.extra1		= SYSCTL_ONE,
+ 		.extra2		= SYSCTL_INT_MAX,
+ 	},
+-	{
+-		.procname	= "pf_retrans",
+-		.data		= &init_net.sctp.pf_retrans,
+-		.maxlen		= sizeof(int),
+-		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
+-		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &init_net.sctp.ps_retrans,
+-	},
+-	{
+-		.procname	= "ps_retrans",
+-		.data		= &init_net.sctp.ps_retrans,
+-		.maxlen		= sizeof(int),
+-		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
+-		.extra1		= &init_net.sctp.pf_retrans,
+-		.extra2		= &ps_retrans_max,
+-	},
+ 	{
+ 		.procname	= "sndbuf_policy",
+ 		.data		= &init_net.sctp.sndbuf_policy,
+@@ -489,6 +499,11 @@ int sctp_sysctl_net_register(struct net *net)
+ 	for (i = 0; table[i].data; i++)
+ 		table[i].data += (char *)(&net->sctp) - (char *)&init_net.sctp;
+ 
++	table[SCTP_RTO_MIN_IDX].extra2 = &net->sctp.rto_max;
++	table[SCTP_RTO_MAX_IDX].extra1 = &net->sctp.rto_min;
++	table[SCTP_PF_RETRANS_IDX].extra2 = &net->sctp.ps_retrans;
++	table[SCTP_PS_RETRANS_IDX].extra1 = &net->sctp.pf_retrans;
++
+ 	net->sctp.sysctl_header = register_net_sysctl(net, "net/sctp", table);
+ 	if (net->sctp.sysctl_header == NULL) {
+ 		kfree(table);
+diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
+index 5f42aa5fc6128..2ff66a6a7e54c 100644
+--- a/net/sunrpc/auth_gss/auth_gss.c
++++ b/net/sunrpc/auth_gss/auth_gss.c
+@@ -301,7 +301,7 @@ __gss_find_upcall(struct rpc_pipe *pipe, kuid_t uid, const struct gss_auth *auth
+ 	list_for_each_entry(pos, &pipe->in_downcall, list) {
+ 		if (!uid_eq(pos->uid, uid))
+ 			continue;
+-		if (auth && pos->auth->service != auth->service)
++		if (pos->auth->service != auth->service)
+ 			continue;
+ 		refcount_inc(&pos->count);
+ 		return pos;
+@@ -685,6 +685,21 @@ out:
+ 	return err;
+ }
+ 
++static struct gss_upcall_msg *
++gss_find_downcall(struct rpc_pipe *pipe, kuid_t uid)
++{
++	struct gss_upcall_msg *pos;
++	list_for_each_entry(pos, &pipe->in_downcall, list) {
++		if (!uid_eq(pos->uid, uid))
++			continue;
++		if (!rpc_msg_is_inflight(&pos->msg))
++			continue;
++		refcount_inc(&pos->count);
++		return pos;
++	}
++	return NULL;
++}
++
+ #define MSG_BUF_MAXSIZE 1024
+ 
+ static ssize_t
+@@ -731,7 +746,7 @@ gss_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+ 	err = -ENOENT;
+ 	/* Find a matching upcall */
+ 	spin_lock(&pipe->lock);
+-	gss_msg = __gss_find_upcall(pipe, uid, NULL);
++	gss_msg = gss_find_downcall(pipe, uid);
+ 	if (gss_msg == NULL) {
+ 		spin_unlock(&pipe->lock);
+ 		goto err_put_ctx;
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index f5111d62972d3..406ff7f8b156e 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -1156,18 +1156,23 @@ static int gss_read_proxy_verf(struct svc_rqst *rqstp,
+ 		return res;
+ 
+ 	inlen = svc_getnl(argv);
+-	if (inlen > (argv->iov_len + rqstp->rq_arg.page_len))
++	if (inlen > (argv->iov_len + rqstp->rq_arg.page_len)) {
++		kfree(in_handle->data);
+ 		return SVC_DENIED;
++	}
+ 
+ 	pages = DIV_ROUND_UP(inlen, PAGE_SIZE);
+ 	in_token->pages = kcalloc(pages, sizeof(struct page *), GFP_KERNEL);
+-	if (!in_token->pages)
++	if (!in_token->pages) {
++		kfree(in_handle->data);
+ 		return SVC_DENIED;
++	}
+ 	in_token->page_base = 0;
+ 	in_token->page_len = inlen;
+ 	for (i = 0; i < pages; i++) {
+ 		in_token->pages[i] = alloc_page(GFP_KERNEL);
+ 		if (!in_token->pages[i]) {
++			kfree(in_handle->data);
+ 			gss_free_in_token_pages(in_token);
+ 			return SVC_DENIED;
+ 		}
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 78c6648af7827..c478108ca6a65 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1361,7 +1361,7 @@ static int rpc_sockname(struct net *net, struct sockaddr *sap, size_t salen,
+ 		break;
+ 	default:
+ 		err = -EAFNOSUPPORT;
+-		goto out;
++		goto out_release;
+ 	}
+ 	if (err < 0) {
+ 		dprintk("RPC:       can't bind UDP socket (%d)\n", err);
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index dcc1992b14d76..338b06de86d16 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -866,7 +866,7 @@ struct rpcrdma_req *rpcrdma_req_create(struct rpcrdma_xprt *r_xprt, size_t size,
+ 	return req;
+ 
+ out3:
+-	kfree(req->rl_sendbuf);
++	rpcrdma_regbuf_free(req->rl_sendbuf);
+ out2:
+ 	kfree(req);
+ out1:
+diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
+index a9ca95a0fcdda..8c2856cbfeccf 100644
+--- a/net/vmw_vsock/vmci_transport.c
++++ b/net/vmw_vsock/vmci_transport.c
+@@ -1713,7 +1713,11 @@ static int vmci_transport_dgram_enqueue(
+ 	if (!dg)
+ 		return -ENOMEM;
+ 
+-	memcpy_from_msg(VMCI_DG_PAYLOAD(dg), msg, len);
++	err = memcpy_from_msg(VMCI_DG_PAYLOAD(dg), msg, len);
++	if (err) {
++		kfree(dg);
++		return err;
++	}
+ 
+ 	dg->dst = vmci_make_handle(remote_addr->svm_cid,
+ 				   remote_addr->svm_port);
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index a1e64d967bd38..90297264d8aea 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -4185,8 +4185,10 @@ static int __init regulatory_init_db(void)
+ 		return -EINVAL;
+ 
+ 	err = load_builtin_regdb_keys();
+-	if (err)
++	if (err) {
++		platform_device_unregister(reg_pdev);
+ 		return err;
++	}
+ 
+ 	/* We always try to get an update for the static regdomain */
+ 	err = regulatory_hint_core(cfg80211_world_regdom->alpha2);
+diff --git a/samples/vfio-mdev/mdpy-fb.c b/samples/vfio-mdev/mdpy-fb.c
+index 9ec93d90e8a5a..4eb7aa11cfbb2 100644
+--- a/samples/vfio-mdev/mdpy-fb.c
++++ b/samples/vfio-mdev/mdpy-fb.c
+@@ -109,7 +109,7 @@ static int mdpy_fb_probe(struct pci_dev *pdev,
+ 
+ 	ret = pci_request_regions(pdev, "mdpy-fb");
+ 	if (ret < 0)
+-		return ret;
++		goto err_disable_dev;
+ 
+ 	pci_read_config_dword(pdev, MDPY_FORMAT_OFFSET, &format);
+ 	pci_read_config_dword(pdev, MDPY_WIDTH_OFFSET,	&width);
+@@ -191,6 +191,9 @@ err_release_fb:
+ err_release_regions:
+ 	pci_release_regions(pdev);
+ 
++err_disable_dev:
++	pci_disable_device(pdev);
++
+ 	return ret;
+ }
+ 
+@@ -199,7 +202,10 @@ static void mdpy_fb_remove(struct pci_dev *pdev)
+ 	struct fb_info *info = pci_get_drvdata(pdev);
+ 
+ 	unregister_framebuffer(info);
++	iounmap(info->screen_base);
+ 	framebuffer_release(info);
++	pci_release_regions(pdev);
++	pci_disable_device(pdev);
+ }
+ 
+ static struct pci_device_id mdpy_fb_pci_table[] = {
+diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
+index c173f6fd7aeed..49d97b331abca 100644
+--- a/security/apparmor/apparmorfs.c
++++ b/security/apparmor/apparmorfs.c
+@@ -867,8 +867,10 @@ static struct multi_transaction *multi_transaction_new(struct file *file,
+ 	if (!t)
+ 		return ERR_PTR(-ENOMEM);
+ 	kref_init(&t->count);
+-	if (copy_from_user(t->data, buf, size))
++	if (copy_from_user(t->data, buf, size)) {
++		put_multi_transaction(t);
+ 		return ERR_PTR(-EFAULT);
++	}
+ 
+ 	return t;
+ }
+diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
+index ffeaee5ed9683..585edcc6814d2 100644
+--- a/security/apparmor/lsm.c
++++ b/security/apparmor/lsm.c
+@@ -1161,10 +1161,10 @@ static int apparmor_inet_conn_request(struct sock *sk, struct sk_buff *skb,
+ #endif
+ 
+ /*
+- * The cred blob is a pointer to, not an instance of, an aa_task_ctx.
++ * The cred blob is a pointer to, not an instance of, an aa_label.
+  */
+ struct lsm_blob_sizes apparmor_blob_sizes __lsm_ro_after_init = {
+-	.lbs_cred = sizeof(struct aa_task_ctx *),
++	.lbs_cred = sizeof(struct aa_label *),
+ 	.lbs_file = sizeof(struct aa_file_ctx),
+ 	.lbs_task = sizeof(struct aa_task_ctx),
+ };
+diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
+index 4c010c9a6af1d..fcf22577f606c 100644
+--- a/security/apparmor/policy.c
++++ b/security/apparmor/policy.c
+@@ -1125,7 +1125,7 @@ ssize_t aa_remove_profiles(struct aa_ns *policy_ns, struct aa_label *subj,
+ 
+ 	if (!name) {
+ 		/* remove namespace - can only happen if fqname[0] == ':' */
+-		mutex_lock_nested(&ns->parent->lock, ns->level);
++		mutex_lock_nested(&ns->parent->lock, ns->parent->level);
+ 		__aa_bump_ns_revision(ns);
+ 		__aa_remove_ns(ns);
+ 		mutex_unlock(&ns->parent->lock);
+diff --git a/security/apparmor/policy_ns.c b/security/apparmor/policy_ns.c
+index 70921d95fb406..53d24cf638936 100644
+--- a/security/apparmor/policy_ns.c
++++ b/security/apparmor/policy_ns.c
+@@ -121,7 +121,7 @@ static struct aa_ns *alloc_ns(const char *prefix, const char *name)
+ 	return ns;
+ 
+ fail_unconfined:
+-	kfree_sensitive(ns->base.hname);
++	aa_policy_destroy(&ns->base);
+ fail_ns:
+ 	kfree_sensitive(ns);
+ 	return NULL;
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 556ef65ab6ee6..519656e685822 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -964,7 +964,7 @@ static int verify_header(struct aa_ext *e, int required, const char **ns)
+ 	 * if not specified use previous version
+ 	 * Mask off everything that is not kernel abi version
+ 	 */
+-	if (VERSION_LT(e->version, v5) || VERSION_GT(e->version, v7)) {
++	if (VERSION_LT(e->version, v5) || VERSION_GT(e->version, v8)) {
+ 		audit_iface(NULL, NULL, NULL, "unsupported interface version",
+ 			    e, error);
+ 		return error;
+diff --git a/security/device_cgroup.c b/security/device_cgroup.c
+index 04375df52fc9a..fe5cb7696993d 100644
+--- a/security/device_cgroup.c
++++ b/security/device_cgroup.c
+@@ -81,6 +81,17 @@ free_and_exit:
+ 	return -ENOMEM;
+ }
+ 
++static void dev_exceptions_move(struct list_head *dest, struct list_head *orig)
++{
++	struct dev_exception_item *ex, *tmp;
++
++	lockdep_assert_held(&devcgroup_mutex);
++
++	list_for_each_entry_safe(ex, tmp, orig, list) {
++		list_move_tail(&ex->list, dest);
++	}
++}
++
+ /*
+  * called under devcgroup_mutex
+  */
+@@ -603,11 +614,13 @@ static int devcgroup_update_access(struct dev_cgroup *devcgroup,
+ 	int count, rc = 0;
+ 	struct dev_exception_item ex;
+ 	struct dev_cgroup *parent = css_to_devcgroup(devcgroup->css.parent);
++	struct dev_cgroup tmp_devcgrp;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
+ 	memset(&ex, 0, sizeof(ex));
++	memset(&tmp_devcgrp, 0, sizeof(tmp_devcgrp));
+ 	b = buffer;
+ 
+ 	switch (*b) {
+@@ -619,15 +632,27 @@ static int devcgroup_update_access(struct dev_cgroup *devcgroup,
+ 
+ 			if (!may_allow_all(parent))
+ 				return -EPERM;
+-			dev_exception_clean(devcgroup);
+-			devcgroup->behavior = DEVCG_DEFAULT_ALLOW;
+-			if (!parent)
++			if (!parent) {
++				devcgroup->behavior = DEVCG_DEFAULT_ALLOW;
++				dev_exception_clean(devcgroup);
+ 				break;
++			}
+ 
++			INIT_LIST_HEAD(&tmp_devcgrp.exceptions);
++			rc = dev_exceptions_copy(&tmp_devcgrp.exceptions,
++						 &devcgroup->exceptions);
++			if (rc)
++				return rc;
++			dev_exception_clean(devcgroup);
+ 			rc = dev_exceptions_copy(&devcgroup->exceptions,
+ 						 &parent->exceptions);
+-			if (rc)
++			if (rc) {
++				dev_exceptions_move(&devcgroup->exceptions,
++						    &tmp_devcgrp.exceptions);
+ 				return rc;
++			}
++			devcgroup->behavior = DEVCG_DEFAULT_ALLOW;
++			dev_exception_clean(&tmp_devcgrp);
+ 			break;
+ 		case DEVCG_DENY:
+ 			if (css_has_online_children(&devcgroup->css))
+diff --git a/security/integrity/digsig.c b/security/integrity/digsig.c
+index 0f518dcfde052..de442af7b3364 100644
+--- a/security/integrity/digsig.c
++++ b/security/integrity/digsig.c
+@@ -120,6 +120,7 @@ int __init integrity_init_keyring(const unsigned int id)
+ {
+ 	struct key_restriction *restriction;
+ 	key_perm_t perm;
++	int ret;
+ 
+ 	perm = (KEY_POS_ALL & ~KEY_POS_SETATTR) | KEY_USR_VIEW
+ 		| KEY_USR_READ | KEY_USR_SEARCH;
+@@ -140,7 +141,10 @@ int __init integrity_init_keyring(const unsigned int id)
+ 	perm |= KEY_USR_WRITE;
+ 
+ out:
+-	return __integrity_init_keyring(id, perm, restriction);
++	ret = __integrity_init_keyring(id, perm, restriction);
++	if (ret)
++		kfree(restriction);
++	return ret;
+ }
+ 
+ int __init integrity_add_key(const unsigned int id, const void *data,
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 2d1af8899cabd..600b97677085f 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -743,6 +743,7 @@ int ima_load_data(enum kernel_load_data_id id, bool contents)
+ 			pr_err("impossible to appraise a module without a file descriptor. sig_enforce kernel parameter might help\n");
+ 			return -EACCES;	/* INTEGRITY_UNKNOWN */
+ 		}
++		break;
+ 	default:
+ 		break;
+ 	}
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index 18569adcb4fe7..96ecb7d254037 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -370,12 +370,6 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
+ 
+ 		nentry->lsm[i].type = entry->lsm[i].type;
+ 		nentry->lsm[i].args_p = entry->lsm[i].args_p;
+-		/*
+-		 * Remove the reference from entry so that the associated
+-		 * memory will not be freed during a later call to
+-		 * ima_lsm_free_rule(entry).
+-		 */
+-		entry->lsm[i].args_p = NULL;
+ 
+ 		ima_filter_rule_init(nentry->lsm[i].type, Audit_equal,
+ 				     nentry->lsm[i].args_p,
+@@ -389,6 +383,7 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
+ 
+ static int ima_lsm_update_rule(struct ima_rule_entry *entry)
+ {
++	int i;
+ 	struct ima_rule_entry *nentry;
+ 
+ 	nentry = ima_lsm_copy_rule(entry);
+@@ -403,7 +398,8 @@ static int ima_lsm_update_rule(struct ima_rule_entry *entry)
+ 	 * references and the entry itself. All other memory refrences will now
+ 	 * be owned by nentry.
+ 	 */
+-	ima_lsm_free_rule(entry);
++	for (i = 0; i < MAX_LSM_RULES; i++)
++		ima_filter_rule_free(entry->lsm[i].rule);
+ 	kfree(entry);
+ 
+ 	return 0;
+@@ -503,6 +499,9 @@ static bool ima_match_rules(struct ima_rule_entry *rule, struct inode *inode,
+ 			    const char *keyring)
+ {
+ 	int i;
++	bool result = false;
++	struct ima_rule_entry *lsm_rule = rule;
++	bool rule_reinitialized = false;
+ 
+ 	if (func == KEY_CHECK) {
+ 		return (rule->flags & IMA_FUNC) && (rule->func == func) &&
+@@ -545,34 +544,55 @@ static bool ima_match_rules(struct ima_rule_entry *rule, struct inode *inode,
+ 		int rc = 0;
+ 		u32 osid;
+ 
+-		if (!rule->lsm[i].rule) {
+-			if (!rule->lsm[i].args_p)
++		if (!lsm_rule->lsm[i].rule) {
++			if (!lsm_rule->lsm[i].args_p)
+ 				continue;
+ 			else
+ 				return false;
+ 		}
++
++retry:
+ 		switch (i) {
+ 		case LSM_OBJ_USER:
+ 		case LSM_OBJ_ROLE:
+ 		case LSM_OBJ_TYPE:
+ 			security_inode_getsecid(inode, &osid);
+-			rc = ima_filter_rule_match(osid, rule->lsm[i].type,
++			rc = ima_filter_rule_match(osid, lsm_rule->lsm[i].type,
+ 						   Audit_equal,
+-						   rule->lsm[i].rule);
++						   lsm_rule->lsm[i].rule);
+ 			break;
+ 		case LSM_SUBJ_USER:
+ 		case LSM_SUBJ_ROLE:
+ 		case LSM_SUBJ_TYPE:
+-			rc = ima_filter_rule_match(secid, rule->lsm[i].type,
++			rc = ima_filter_rule_match(secid, lsm_rule->lsm[i].type,
+ 						   Audit_equal,
+-						   rule->lsm[i].rule);
++						   lsm_rule->lsm[i].rule);
++			break;
+ 		default:
+ 			break;
+ 		}
+-		if (!rc)
+-			return false;
++
++		if (rc == -ESTALE && !rule_reinitialized) {
++			lsm_rule = ima_lsm_copy_rule(rule);
++			if (lsm_rule) {
++				rule_reinitialized = true;
++				goto retry;
++			}
++		}
++		if (!rc) {
++			result = false;
++			goto out;
++		}
+ 	}
+-	return true;
++	result = true;
++
++out:
++	if (rule_reinitialized) {
++		for (i = 0; i < MAX_LSM_RULES; i++)
++			ima_filter_rule_free(lsm_rule->lsm[i].rule);
++		kfree(lsm_rule);
++	}
++	return result;
+ }
+ 
+ /*
+@@ -802,6 +822,7 @@ void __init ima_init_policy(void)
+ 		add_rules(default_measurement_rules,
+ 			  ARRAY_SIZE(default_measurement_rules),
+ 			  IMA_DEFAULT_POLICY);
++		break;
+ 	default:
+ 		break;
+ 	}
+diff --git a/security/integrity/ima/ima_template.c b/security/integrity/ima/ima_template.c
+index f64c01d53e96a..e5d941f485b19 100644
+--- a/security/integrity/ima/ima_template.c
++++ b/security/integrity/ima/ima_template.c
+@@ -220,11 +220,11 @@ int template_desc_init_fields(const char *template_fmt,
+ 	}
+ 
+ 	if (fields && num_fields) {
+-		*fields = kmalloc_array(i, sizeof(*fields), GFP_KERNEL);
++		*fields = kmalloc_array(i, sizeof(**fields), GFP_KERNEL);
+ 		if (*fields == NULL)
+ 			return -ENOMEM;
+ 
+-		memcpy(*fields, found_fields, i * sizeof(*fields));
++		memcpy(*fields, found_fields, i * sizeof(**fields));
+ 		*num_fields = i;
+ 	}
+ 
+@@ -290,8 +290,11 @@ static struct ima_template_desc *restore_template_fmt(char *template_name)
+ 
+ 	template_desc->name = "";
+ 	template_desc->fmt = kstrdup(template_name, GFP_KERNEL);
+-	if (!template_desc->fmt)
++	if (!template_desc->fmt) {
++		kfree(template_desc);
++		template_desc = NULL;
+ 		goto out;
++	}
+ 
+ 	spin_lock(&template_list);
+ 	list_add_tail_rcu(&template_desc->list, &defined_templates);
+diff --git a/security/integrity/platform_certs/load_uefi.c b/security/integrity/platform_certs/load_uefi.c
+index 185c609c6e380..d2f2c3936277a 100644
+--- a/security/integrity/platform_certs/load_uefi.c
++++ b/security/integrity/platform_certs/load_uefi.c
+@@ -34,6 +34,7 @@ static const struct dmi_system_id uefi_skip_cert[] = {
+ 	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacPro7,1") },
+ 	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,1") },
+ 	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,2") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMacPro1,1") },
+ 	{ }
+ };
+ 
+diff --git a/security/loadpin/loadpin.c b/security/loadpin/loadpin.c
+index b12f7d986b1e3..5fce105a372d3 100644
+--- a/security/loadpin/loadpin.c
++++ b/security/loadpin/loadpin.c
+@@ -118,21 +118,11 @@ static void loadpin_sb_free_security(struct super_block *mnt_sb)
+ 	}
+ }
+ 
+-static int loadpin_read_file(struct file *file, enum kernel_read_file_id id,
+-			     bool contents)
++static int loadpin_check(struct file *file, enum kernel_read_file_id id)
+ {
+ 	struct super_block *load_root;
+ 	const char *origin = kernel_read_file_id_str(id);
+ 
+-	/*
+-	 * If we will not know that we'll be seeing the full contents
+-	 * then we cannot trust a load will be complete and unchanged
+-	 * off disk. Treat all contents=false hooks as if there were
+-	 * no associated file struct.
+-	 */
+-	if (!contents)
+-		file = NULL;
+-
+ 	/* If the file id is excluded, ignore the pinning. */
+ 	if ((unsigned int)id < ARRAY_SIZE(ignore_read_file_id) &&
+ 	    ignore_read_file_id[id]) {
+@@ -187,9 +177,25 @@ static int loadpin_read_file(struct file *file, enum kernel_read_file_id id,
+ 	return 0;
+ }
+ 
++static int loadpin_read_file(struct file *file, enum kernel_read_file_id id,
++			     bool contents)
++{
++	/*
++	 * LoadPin only cares about the _origin_ of a file, not its
++	 * contents, so we can ignore the "are full contents available"
++	 * argument here.
++	 */
++	return loadpin_check(file, id);
++}
++
+ static int loadpin_load_data(enum kernel_load_data_id id, bool contents)
+ {
+-	return loadpin_read_file(NULL, (enum kernel_read_file_id) id, contents);
++	/*
++	 * LoadPin only cares about the _origin_ of a file, not its
++	 * contents, so a NULL file is passed, and we can ignore the
++	 * state of "contents".
++	 */
++	return loadpin_check(NULL, (enum kernel_read_file_id) id);
+ }
+ 
+ static struct security_hook_list loadpin_hooks[] __lsm_ro_after_init = {
+diff --git a/sound/core/control_compat.c b/sound/core/control_compat.c
+index 97467f6a32a13..980ab3580f1b7 100644
+--- a/sound/core/control_compat.c
++++ b/sound/core/control_compat.c
+@@ -304,7 +304,9 @@ static int ctl_elem_read_user(struct snd_card *card,
+ 	err = snd_power_wait(card, SNDRV_CTL_POWER_D0);
+ 	if (err < 0)
+ 		goto error;
++	down_read(&card->controls_rwsem);
+ 	err = snd_ctl_elem_read(card, data);
++	up_read(&card->controls_rwsem);
+ 	if (err < 0)
+ 		goto error;
+ 	err = copy_ctl_value_to_user(userdata, valuep, data, type, count);
+@@ -332,7 +334,9 @@ static int ctl_elem_write_user(struct snd_ctl_file *file,
+ 	err = snd_power_wait(card, SNDRV_CTL_POWER_D0);
+ 	if (err < 0)
+ 		goto error;
++	down_write(&card->controls_rwsem);
+ 	err = snd_ctl_elem_write(card, file, data);
++	up_write(&card->controls_rwsem);
+ 	if (err < 0)
+ 		goto error;
+ 	err = copy_ctl_value_to_user(userdata, valuep, data, type, count);
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 6cc7c2a9fe732..9425fcd30c4c7 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -1413,8 +1413,10 @@ static int snd_pcm_do_start(struct snd_pcm_substream *substream,
+ static void snd_pcm_undo_start(struct snd_pcm_substream *substream,
+ 			       snd_pcm_state_t state)
+ {
+-	if (substream->runtime->trigger_master == substream)
++	if (substream->runtime->trigger_master == substream) {
+ 		substream->ops->trigger(substream, SNDRV_PCM_TRIGGER_STOP);
++		substream->runtime->stop_operating = true;
++	}
+ }
+ 
+ static void snd_pcm_post_start(struct snd_pcm_substream *substream,
+diff --git a/sound/drivers/mts64.c b/sound/drivers/mts64.c
+index 9c708b693cb33..257314920e4d4 100644
+--- a/sound/drivers/mts64.c
++++ b/sound/drivers/mts64.c
+@@ -816,6 +816,9 @@ static void snd_mts64_interrupt(void *private)
+ 	u8 status, data;
+ 	struct snd_rawmidi_substream *substream;
+ 
++	if (!mts)
++		return;
++
+ 	spin_lock(&mts->lock);
+ 	ret = mts64_read(mts->pardev->port);
+ 	data = ret & 0x00ff;
+diff --git a/sound/hda/ext/hdac_ext_stream.c b/sound/hda/ext/hdac_ext_stream.c
+index 1e6e4cf428cda..4276dae2e00af 100644
+--- a/sound/hda/ext/hdac_ext_stream.c
++++ b/sound/hda/ext/hdac_ext_stream.c
+@@ -475,23 +475,6 @@ int snd_hdac_ext_stream_get_spbmaxfifo(struct hdac_bus *bus,
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_ext_stream_get_spbmaxfifo);
+ 
+-
+-/**
+- * snd_hdac_ext_stop_streams - stop all stream if running
+- * @bus: HD-audio core bus
+- */
+-void snd_hdac_ext_stop_streams(struct hdac_bus *bus)
+-{
+-	struct hdac_stream *stream;
+-
+-	if (bus->chip_init) {
+-		list_for_each_entry(stream, &bus->stream_list, list)
+-			snd_hdac_stream_stop(stream);
+-		snd_hdac_bus_stop_chip(bus);
+-	}
+-}
+-EXPORT_SYMBOL_GPL(snd_hdac_ext_stop_streams);
+-
+ /**
+  * snd_hdac_ext_stream_drsm_enable - enable DMA resume for a stream
+  * @bus: HD-audio core bus
+diff --git a/sound/hda/hdac_stream.c b/sound/hda/hdac_stream.c
+index ce77a53201639..1e0f61affd979 100644
+--- a/sound/hda/hdac_stream.c
++++ b/sound/hda/hdac_stream.c
+@@ -142,6 +142,33 @@ void snd_hdac_stream_stop(struct hdac_stream *azx_dev)
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_stream_stop);
+ 
++/**
++ * snd_hdac_stop_streams - stop all streams
++ * @bus: HD-audio core bus
++ */
++void snd_hdac_stop_streams(struct hdac_bus *bus)
++{
++	struct hdac_stream *stream;
++
++	list_for_each_entry(stream, &bus->stream_list, list)
++		snd_hdac_stream_stop(stream);
++}
++EXPORT_SYMBOL_GPL(snd_hdac_stop_streams);
++
++/**
++ * snd_hdac_stop_streams_and_chip - stop all streams and chip if running
++ * @bus: HD-audio core bus
++ */
++void snd_hdac_stop_streams_and_chip(struct hdac_bus *bus)
++{
++
++	if (bus->chip_init) {
++		snd_hdac_stop_streams(bus);
++		snd_hdac_bus_stop_chip(bus);
++	}
++}
++EXPORT_SYMBOL_GPL(snd_hdac_stop_streams_and_chip);
++
+ /**
+  * snd_hdac_stream_reset - reset a stream
+  * @azx_dev: HD-audio core stream to reset
+diff --git a/sound/pci/asihpi/hpioctl.c b/sound/pci/asihpi/hpioctl.c
+index bb31b7fe867d6..477a5b4b50bcb 100644
+--- a/sound/pci/asihpi/hpioctl.c
++++ b/sound/pci/asihpi/hpioctl.c
+@@ -361,7 +361,7 @@ int asihpi_adapter_probe(struct pci_dev *pci_dev,
+ 		pci_dev->device, pci_dev->subsystem_vendor,
+ 		pci_dev->subsystem_device, pci_dev->devfn);
+ 
+-	if (pci_enable_device(pci_dev) < 0) {
++	if (pcim_enable_device(pci_dev) < 0) {
+ 		dev_err(&pci_dev->dev,
+ 			"pci_enable_device failed, disabling device\n");
+ 		return -EIO;
+diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
+index 3de7dc34def24..ea76395d71d38 100644
+--- a/sound/pci/hda/hda_controller.c
++++ b/sound/pci/hda/hda_controller.c
+@@ -1045,10 +1045,8 @@ EXPORT_SYMBOL_GPL(azx_init_chip);
+ void azx_stop_all_streams(struct azx *chip)
+ {
+ 	struct hdac_bus *bus = azx_bus(chip);
+-	struct hdac_stream *s;
+ 
+-	list_for_each_entry(s, &bus->stream_list, list)
+-		snd_hdac_stream_stop(s);
++	snd_hdac_stop_streams(bus);
+ }
+ EXPORT_SYMBOL_GPL(azx_stop_all_streams);
+ 
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index b1c57c65f6cd5..1afe9cddb69eb 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1965,6 +1965,8 @@ static int hdmi_add_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
+ static const struct snd_pci_quirk force_connect_list[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x870f, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x871a, "HP", 1),
++	SND_PCI_QUIRK(0x103c, 0x8711, "HP", 1),
++	SND_PCI_QUIRK(0x103c, 0x8715, "HP", 1),
+ 	SND_PCI_QUIRK(0x1462, 0xec94, "MS-7C94", 1),
+ 	{}
+ };
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8011b451902a8..b2c9cdfb83e62 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6654,6 +6654,34 @@ static void alc256_fixup_mic_no_presence_and_resume(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc295_fixup_dell_inspiron_top_speakers(struct hda_codec *codec,
++					  const struct hda_fixup *fix, int action)
++{
++	static const struct hda_pintbl pincfgs[] = {
++		{ 0x14, 0x90170151 },
++		{ 0x17, 0x90170150 },
++		{ }
++	};
++	static const hda_nid_t conn[] = { 0x02, 0x03 };
++	static const hda_nid_t preferred_pairs[] = {
++		0x14, 0x02,
++		0x17, 0x03,
++		0x21, 0x02,
++		0
++	};
++	struct alc_spec *spec = codec->spec;
++
++	alc_fixup_no_shutup(codec, fix, action);
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		snd_hda_apply_pincfgs(codec, pincfgs);
++		snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn);
++		spec->gen.preferred_dacs = preferred_pairs;
++		break;
++	}
++}
++
+ enum {
+ 	ALC269_FIXUP_GPIO2,
+ 	ALC269_FIXUP_SONY_VAIO,
+@@ -6884,6 +6912,8 @@ enum {
+ 	ALC285_FIXUP_LEGION_Y9000X_SPEAKERS,
+ 	ALC285_FIXUP_LEGION_Y9000X_AUTOMUTE,
+ 	ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED,
++	ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS,
++	ALC236_FIXUP_DELL_DUAL_CODECS,
+ };
+ 
+ /* A special fixup for Lenovo C940 and Yoga Duet 7;
+@@ -8704,6 +8734,18 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC285_FIXUP_HP_MUTE_LED,
+ 	},
++	[ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc295_fixup_dell_inspiron_top_speakers,
++		.chained = true,
++		.chain_id = ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
++	},
++	[ALC236_FIXUP_DELL_DUAL_CODECS] = {
++		.type = HDA_FIXUP_PINS,
++		.v.func = alc1220_fixup_gb_dual_codecs,
++		.chained = true,
++		.chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8803,6 +8845,15 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0b19, "Dell XPS 15 9520", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x0b1a, "Dell Precision 5570", ALC289_FIXUP_DUAL_SPK),
++	SND_PCI_QUIRK(0x1028, 0x0b37, "Dell Inspiron 16 Plus 7620 2-in-1", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS),
++	SND_PCI_QUIRK(0x1028, 0x0b71, "Dell Inspiron 16 Plus 7620", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS),
++	SND_PCI_QUIRK(0x1028, 0x0c03, "Dell Precision 5340", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1028, 0x0c19, "Dell Precision 3340", ALC236_FIXUP_DELL_DUAL_CODECS),
++	SND_PCI_QUIRK(0x1028, 0x0c1a, "Dell Precision 3340", ALC236_FIXUP_DELL_DUAL_CODECS),
++	SND_PCI_QUIRK(0x1028, 0x0c1b, "Dell Precision 3440", ALC236_FIXUP_DELL_DUAL_CODECS),
++	SND_PCI_QUIRK(0x1028, 0x0c1c, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS),
++	SND_PCI_QUIRK(0x1028, 0x0c1d, "Dell Precision 3440", ALC236_FIXUP_DELL_DUAL_CODECS),
++	SND_PCI_QUIRK(0x1028, 0x0c1e, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -10514,6 +10565,17 @@ static void alc897_fixup_lenovo_headset_mic(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc897_fixup_lenovo_headset_mode(struct hda_codec *codec,
++				     const struct hda_fixup *fix, int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++		spec->parse_flags |= HDA_PINCFG_HEADSET_MIC;
++		spec->gen.hp_automute_hook = alc897_hp_automute_hook;
++	}
++}
++
+ static const struct coef_fw alc668_coefs[] = {
+ 	WRITE_COEF(0x01, 0xbebe), WRITE_COEF(0x02, 0xaaaa), WRITE_COEF(0x03,    0x0),
+ 	WRITE_COEF(0x04, 0x0180), WRITE_COEF(0x06,    0x0), WRITE_COEF(0x07, 0x0f80),
+@@ -10597,6 +10659,8 @@ enum {
+ 	ALC897_FIXUP_LENOVO_HEADSET_MIC,
+ 	ALC897_FIXUP_HEADSET_MIC_PIN,
+ 	ALC897_FIXUP_HP_HSMIC_VERB,
++	ALC897_FIXUP_LENOVO_HEADSET_MODE,
++	ALC897_FIXUP_HEADSET_MIC_PIN2,
+ };
+ 
+ static const struct hda_fixup alc662_fixups[] = {
+@@ -11023,6 +11087,19 @@ static const struct hda_fixup alc662_fixups[] = {
+ 			{ }
+ 		},
+ 	},
++	[ALC897_FIXUP_LENOVO_HEADSET_MODE] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc897_fixup_lenovo_headset_mode,
++	},
++	[ALC897_FIXUP_HEADSET_MIC_PIN2] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x1a, 0x01a11140 }, /* use as headset mic, without its own jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC897_FIXUP_LENOVO_HEADSET_MODE
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+@@ -11075,6 +11152,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32f7, "Lenovo ThinkCentre M90", ALC897_FIXUP_HEADSET_MIC_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x3742, "Lenovo TianYi510Pro-14IOB", ALC897_FIXUP_HEADSET_MIC_PIN2),
+ 	SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo Ideapad Y550P", ALC662_FIXUP_IDEAPAD),
+ 	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Ideapad Y550", ALC662_FIXUP_IDEAPAD),
+ 	SND_PCI_QUIRK(0x1849, 0x5892, "ASRock B150M", ALC892_FIXUP_ASROCK_MOBO),
+diff --git a/sound/soc/codecs/hdac_hda.c b/sound/soc/codecs/hdac_hda.c
+index 390dd6c7f6a50..de5955db0a5f0 100644
+--- a/sound/soc/codecs/hdac_hda.c
++++ b/sound/soc/codecs/hdac_hda.c
+@@ -46,9 +46,8 @@ static int hdac_hda_dai_hw_params(struct snd_pcm_substream *substream,
+ 				  struct snd_soc_dai *dai);
+ static int hdac_hda_dai_hw_free(struct snd_pcm_substream *substream,
+ 				struct snd_soc_dai *dai);
+-static int hdac_hda_dai_set_tdm_slot(struct snd_soc_dai *dai,
+-				     unsigned int tx_mask, unsigned int rx_mask,
+-				     int slots, int slot_width);
++static int hdac_hda_dai_set_stream(struct snd_soc_dai *dai, void *stream,
++				   int direction);
+ static struct hda_pcm *snd_soc_find_pcm_from_dai(struct hdac_hda_priv *hda_pvt,
+ 						 struct snd_soc_dai *dai);
+ 
+@@ -58,7 +57,7 @@ static const struct snd_soc_dai_ops hdac_hda_dai_ops = {
+ 	.prepare = hdac_hda_dai_prepare,
+ 	.hw_params = hdac_hda_dai_hw_params,
+ 	.hw_free = hdac_hda_dai_hw_free,
+-	.set_tdm_slot = hdac_hda_dai_set_tdm_slot,
++	.set_stream = hdac_hda_dai_set_stream,
+ };
+ 
+ static struct snd_soc_dai_driver hdac_hda_dais[] = {
+@@ -180,21 +179,22 @@ static struct snd_soc_dai_driver hdac_hda_dais[] = {
+ 
+ };
+ 
+-static int hdac_hda_dai_set_tdm_slot(struct snd_soc_dai *dai,
+-				     unsigned int tx_mask, unsigned int rx_mask,
+-				     int slots, int slot_width)
++static int hdac_hda_dai_set_stream(struct snd_soc_dai *dai,
++				   void *stream, int direction)
+ {
+ 	struct snd_soc_component *component = dai->component;
+ 	struct hdac_hda_priv *hda_pvt;
+ 	struct hdac_hda_pcm *pcm;
++	struct hdac_stream *hstream;
++
++	if (!stream)
++		return -EINVAL;
+ 
+ 	hda_pvt = snd_soc_component_get_drvdata(component);
+ 	pcm = &hda_pvt->pcm[dai->id];
++	hstream = (struct hdac_stream *)stream;
+ 
+-	if (tx_mask)
+-		pcm->stream_tag[SNDRV_PCM_STREAM_PLAYBACK] = tx_mask;
+-	else
+-		pcm->stream_tag[SNDRV_PCM_STREAM_CAPTURE] = rx_mask;
++	pcm->stream_tag[direction] = hstream->stream_tag;
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/codecs/max98373-sdw.c b/sound/soc/codecs/max98373-sdw.c
+index 39afa011f0e27..a2bb93fb830b4 100644
+--- a/sound/soc/codecs/max98373-sdw.c
++++ b/sound/soc/codecs/max98373-sdw.c
+@@ -728,7 +728,7 @@ static int max98373_sdw_set_tdm_slot(struct snd_soc_dai *dai,
+ static const struct snd_soc_dai_ops max98373_dai_sdw_ops = {
+ 	.hw_params = max98373_sdw_dai_hw_params,
+ 	.hw_free = max98373_pcm_hw_free,
+-	.set_sdw_stream = max98373_set_sdw_stream,
++	.set_stream = max98373_set_sdw_stream,
+ 	.shutdown = max98373_shutdown,
+ 	.set_tdm_slot = max98373_sdw_set_tdm_slot,
+ };
+diff --git a/sound/soc/codecs/pcm512x.c b/sound/soc/codecs/pcm512x.c
+index 8153d3d01654f..3677e9029f91e 100644
+--- a/sound/soc/codecs/pcm512x.c
++++ b/sound/soc/codecs/pcm512x.c
+@@ -1599,7 +1599,7 @@ int pcm512x_probe(struct device *dev, struct regmap *regmap)
+ 			if (val > 6) {
+ 				dev_err(dev, "Invalid pll-in\n");
+ 				ret = -EINVAL;
+-				goto err_clk;
++				goto err_pm;
+ 			}
+ 			pcm512x->pll_in = val;
+ 		}
+@@ -1608,7 +1608,7 @@ int pcm512x_probe(struct device *dev, struct regmap *regmap)
+ 			if (val > 6) {
+ 				dev_err(dev, "Invalid pll-out\n");
+ 				ret = -EINVAL;
+-				goto err_clk;
++				goto err_pm;
+ 			}
+ 			pcm512x->pll_out = val;
+ 		}
+@@ -1617,12 +1617,12 @@ int pcm512x_probe(struct device *dev, struct regmap *regmap)
+ 			dev_err(dev,
+ 				"Error: both pll-in and pll-out, or none\n");
+ 			ret = -EINVAL;
+-			goto err_clk;
++			goto err_pm;
+ 		}
+ 		if (pcm512x->pll_in && pcm512x->pll_in == pcm512x->pll_out) {
+ 			dev_err(dev, "Error: pll-in == pll-out\n");
+ 			ret = -EINVAL;
+-			goto err_clk;
++			goto err_pm;
+ 		}
+ 	}
+ #endif
+diff --git a/sound/soc/codecs/rt1308-sdw.c b/sound/soc/codecs/rt1308-sdw.c
+index 31daa749c3db4..a13296edf295c 100644
+--- a/sound/soc/codecs/rt1308-sdw.c
++++ b/sound/soc/codecs/rt1308-sdw.c
+@@ -613,7 +613,7 @@ static const struct snd_soc_component_driver soc_component_sdw_rt1308 = {
+ static const struct snd_soc_dai_ops rt1308_aif_dai_ops = {
+ 	.hw_params = rt1308_sdw_hw_params,
+ 	.hw_free	= rt1308_sdw_pcm_hw_free,
+-	.set_sdw_stream	= rt1308_set_sdw_stream,
++	.set_stream	= rt1308_set_sdw_stream,
+ 	.shutdown	= rt1308_sdw_shutdown,
+ 	.set_tdm_slot	= rt1308_sdw_set_tdm_slot,
+ };
+diff --git a/sound/soc/codecs/rt298.c b/sound/soc/codecs/rt298.c
+index dc0273a5a11f7..1ca06213e3a32 100644
+--- a/sound/soc/codecs/rt298.c
++++ b/sound/soc/codecs/rt298.c
+@@ -1168,6 +1168,13 @@ static const struct dmi_system_id force_combo_jack_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Geminilake")
+ 		}
+ 	},
++	{
++		.ident = "Intel Kabylake R RVP",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Kabylake Client platform")
++		}
++	},
+ 	{ }
+ };
+ 
+diff --git a/sound/soc/codecs/rt5670.c b/sound/soc/codecs/rt5670.c
+index 47ce074289ca9..58227602053fc 100644
+--- a/sound/soc/codecs/rt5670.c
++++ b/sound/soc/codecs/rt5670.c
+@@ -3192,8 +3192,6 @@ static int rt5670_i2c_probe(struct i2c_client *i2c,
+ 	if (ret < 0)
+ 		goto err;
+ 
+-	pm_runtime_put(&i2c->dev);
+-
+ 	return 0;
+ err:
+ 	pm_runtime_disable(&i2c->dev);
+diff --git a/sound/soc/codecs/rt5682-sdw.c b/sound/soc/codecs/rt5682-sdw.c
+index c9868dd096fcd..2fb6b1edf9331 100644
+--- a/sound/soc/codecs/rt5682-sdw.c
++++ b/sound/soc/codecs/rt5682-sdw.c
+@@ -272,7 +272,7 @@ static int rt5682_sdw_hw_free(struct snd_pcm_substream *substream,
+ static struct snd_soc_dai_ops rt5682_sdw_ops = {
+ 	.hw_params	= rt5682_sdw_hw_params,
+ 	.hw_free	= rt5682_sdw_hw_free,
+-	.set_sdw_stream	= rt5682_set_sdw_stream,
++	.set_stream	= rt5682_set_sdw_stream,
+ 	.shutdown	= rt5682_sdw_shutdown,
+ };
+ 
+diff --git a/sound/soc/codecs/rt700.c b/sound/soc/codecs/rt700.c
+index 687ac2153666b..80acf0daabf84 100644
+--- a/sound/soc/codecs/rt700.c
++++ b/sound/soc/codecs/rt700.c
+@@ -1005,7 +1005,7 @@ static int rt700_pcm_hw_free(struct snd_pcm_substream *substream,
+ static struct snd_soc_dai_ops rt700_ops = {
+ 	.hw_params	= rt700_pcm_hw_params,
+ 	.hw_free	= rt700_pcm_hw_free,
+-	.set_sdw_stream	= rt700_set_sdw_stream,
++	.set_stream	= rt700_set_sdw_stream,
+ 	.shutdown	= rt700_shutdown,
+ };
+ 
+diff --git a/sound/soc/codecs/rt711.c b/sound/soc/codecs/rt711.c
+index 93d86f7558e0a..9bdcc7872053b 100644
+--- a/sound/soc/codecs/rt711.c
++++ b/sound/soc/codecs/rt711.c
+@@ -1059,7 +1059,7 @@ static int rt711_pcm_hw_free(struct snd_pcm_substream *substream,
+ static struct snd_soc_dai_ops rt711_ops = {
+ 	.hw_params	= rt711_pcm_hw_params,
+ 	.hw_free	= rt711_pcm_hw_free,
+-	.set_sdw_stream	= rt711_set_sdw_stream,
++	.set_stream	= rt711_set_sdw_stream,
+ 	.shutdown	= rt711_shutdown,
+ };
+ 
+diff --git a/sound/soc/codecs/rt715.c b/sound/soc/codecs/rt715.c
+index 532c5303e7ab0..22bdccf663468 100644
+--- a/sound/soc/codecs/rt715.c
++++ b/sound/soc/codecs/rt715.c
+@@ -686,7 +686,7 @@ static int rt715_pcm_hw_free(struct snd_pcm_substream *substream,
+ static struct snd_soc_dai_ops rt715_ops = {
+ 	.hw_params	= rt715_pcm_hw_params,
+ 	.hw_free	= rt715_pcm_hw_free,
+-	.set_sdw_stream	= rt715_set_sdw_stream,
++	.set_stream	= rt715_set_sdw_stream,
+ 	.shutdown	= rt715_shutdown,
+ };
+ 
+diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
+index f57884113406b..d3a7480fda435 100644
+--- a/sound/soc/codecs/wm8994.c
++++ b/sound/soc/codecs/wm8994.c
+@@ -3853,7 +3853,12 @@ static irqreturn_t wm1811_jackdet_irq(int irq, void *data)
+ 	} else {
+ 		dev_dbg(component->dev, "Jack not detected\n");
+ 
++		/* Release wm8994->accdet_lock to avoid deadlock:
++		 * cancel_delayed_work_sync() takes wm8994->mic_work internal
++		 * lock and wm1811_mic_work takes wm8994->accdet_lock */
++		mutex_unlock(&wm8994->accdet_lock);
+ 		cancel_delayed_work_sync(&wm8994->mic_work);
++		mutex_lock(&wm8994->accdet_lock);
+ 
+ 		snd_soc_component_update_bits(component, WM8958_MICBIAS2,
+ 				    WM8958_MICB2_DISCH, WM8958_MICB2_DISCH);
+diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c
+index 601525c77bbaf..15b3f47fbfa35 100644
+--- a/sound/soc/codecs/wsa881x.c
++++ b/sound/soc/codecs/wsa881x.c
+@@ -1026,7 +1026,7 @@ static struct snd_soc_dai_ops wsa881x_dai_ops = {
+ 	.hw_params = wsa881x_hw_params,
+ 	.hw_free = wsa881x_hw_free,
+ 	.mute_stream = wsa881x_digital_mute,
+-	.set_sdw_stream = wsa881x_set_sdw_stream,
++	.set_stream = wsa881x_set_sdw_stream,
+ };
+ 
+ static struct snd_soc_dai_driver wsa881x_dais[] = {
+diff --git a/sound/soc/generic/audio-graph-card.c b/sound/soc/generic/audio-graph-card.c
+index bfbee2d716f39..84510ca0b8fd6 100644
+--- a/sound/soc/generic/audio-graph-card.c
++++ b/sound/soc/generic/audio-graph-card.c
+@@ -466,8 +466,10 @@ static int graph_for_each_link(struct asoc_simple_priv *priv,
+ 			of_node_put(codec_ep);
+ 			of_node_put(codec_port);
+ 
+-			if (ret < 0)
++			if (ret < 0) {
++				of_node_put(cpu_ep);
+ 				return ret;
++			}
+ 
+ 			codec_port_old = codec_port;
+ 		}
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 3020a993f6ef5..8a99cb6dfcd69 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -430,6 +430,21 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{
++		/* Advantech MICA-071 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Advantech"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "MICA-071"),
++		},
++		/* OVCD Th = 1500uA to reliable detect head-phones vs -set */
++		.driver_data = (void *)(BYT_RT5640_IN3_MAP |
++					BYT_RT5640_JD_SRC_JD2_IN4N |
++					BYT_RT5640_OVCD_TH_1500UA |
++					BYT_RT5640_OVCD_SF_0P75 |
++					BYT_RT5640_MONO_SPEAKER |
++					BYT_RT5640_DIFF_MIC |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ARCHOS"),
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 25548555d8d79..f5d8f7951cfc3 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -231,7 +231,7 @@ int sdw_prepare(struct snd_pcm_substream *substream)
+ 	/* Find stream from first CPU DAI */
+ 	dai = asoc_rtd_to_cpu(rtd, 0);
+ 
+-	sdw_stream = snd_soc_dai_get_sdw_stream(dai, substream->stream);
++	sdw_stream = snd_soc_dai_get_stream(dai, substream->stream);
+ 
+ 	if (IS_ERR(sdw_stream)) {
+ 		dev_err(rtd->dev, "no stream found for DAI %s", dai->name);
+@@ -251,7 +251,7 @@ int sdw_trigger(struct snd_pcm_substream *substream, int cmd)
+ 	/* Find stream from first CPU DAI */
+ 	dai = asoc_rtd_to_cpu(rtd, 0);
+ 
+-	sdw_stream = snd_soc_dai_get_sdw_stream(dai, substream->stream);
++	sdw_stream = snd_soc_dai_get_stream(dai, substream->stream);
+ 
+ 	if (IS_ERR(sdw_stream)) {
+ 		dev_err(rtd->dev, "no stream found for DAI %s", dai->name);
+@@ -290,7 +290,7 @@ int sdw_hw_free(struct snd_pcm_substream *substream)
+ 	/* Find stream from first CPU DAI */
+ 	dai = asoc_rtd_to_cpu(rtd, 0);
+ 
+-	sdw_stream = snd_soc_dai_get_sdw_stream(dai, substream->stream);
++	sdw_stream = snd_soc_dai_get_stream(dai, substream->stream);
+ 
+ 	if (IS_ERR(sdw_stream)) {
+ 		dev_err(rtd->dev, "no stream found for DAI %s", dai->name);
+diff --git a/sound/soc/intel/skylake/skl-pcm.c b/sound/soc/intel/skylake/skl-pcm.c
+index b1897a057397d..b531d9dfc2d64 100644
+--- a/sound/soc/intel/skylake/skl-pcm.c
++++ b/sound/soc/intel/skylake/skl-pcm.c
+@@ -563,11 +563,8 @@ static int skl_link_hw_params(struct snd_pcm_substream *substream,
+ 
+ 	stream_tag = hdac_stream(link_dev)->stream_tag;
+ 
+-	/* set the stream tag in the codec dai dma params  */
+-	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		snd_soc_dai_set_tdm_slot(codec_dai, stream_tag, 0, 0, 0);
+-	else
+-		snd_soc_dai_set_tdm_slot(codec_dai, 0, stream_tag, 0, 0);
++	/* set the hdac_stream in the codec dai */
++	snd_soc_dai_set_stream(codec_dai, hdac_stream(link_dev), substream->stream);
+ 
+ 	p_params.s_fmt = snd_pcm_format_width(params_format(params));
+ 	p_params.ch = params_channels(params);
+diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c
+index 8b993722f74e7..2085e12dc611b 100644
+--- a/sound/soc/intel/skylake/skl.c
++++ b/sound/soc/intel/skylake/skl.c
+@@ -439,7 +439,7 @@ static int skl_free(struct hdac_bus *bus)
+ 
+ 	skl->init_done = 0; /* to be sure */
+ 
+-	snd_hdac_ext_stop_streams(bus);
++	snd_hdac_stop_streams_and_chip(bus);
+ 
+ 	if (bus->irq >= 0)
+ 		free_irq(bus->irq, (void *)bus);
+@@ -1100,7 +1100,10 @@ static void skl_shutdown(struct pci_dev *pci)
+ 	if (!skl->init_done)
+ 		return;
+ 
+-	snd_hdac_ext_stop_streams(bus);
++	snd_hdac_stop_streams(bus);
++	snd_hdac_ext_bus_link_power_down_all(bus);
++	skl_dsp_sleep(skl->dsp);
++
+ 	list_for_each_entry(s, &bus->stream_list, list) {
+ 		stream = stream_to_hdac_ext_stream(s);
+ 		snd_hdac_ext_stream_decouple(bus, stream, false);
+diff --git a/sound/soc/jz4740/jz4740-i2s.c b/sound/soc/jz4740/jz4740-i2s.c
+index 0793e284d0e78..fb6476b153ed3 100644
+--- a/sound/soc/jz4740/jz4740-i2s.c
++++ b/sound/soc/jz4740/jz4740-i2s.c
+@@ -59,7 +59,8 @@
+ #define JZ_AIC_CTRL_MONO_TO_STEREO BIT(11)
+ #define JZ_AIC_CTRL_SWITCH_ENDIANNESS BIT(10)
+ #define JZ_AIC_CTRL_SIGNED_TO_UNSIGNED BIT(9)
+-#define JZ_AIC_CTRL_FLUSH		BIT(8)
++#define JZ_AIC_CTRL_TFLUSH		BIT(8)
++#define JZ_AIC_CTRL_RFLUSH		BIT(7)
+ #define JZ_AIC_CTRL_ENABLE_ROR_INT BIT(6)
+ #define JZ_AIC_CTRL_ENABLE_TUR_INT BIT(5)
+ #define JZ_AIC_CTRL_ENABLE_RFS_INT BIT(4)
+@@ -94,6 +95,8 @@ enum jz47xx_i2s_version {
+ struct i2s_soc_info {
+ 	enum jz47xx_i2s_version version;
+ 	struct snd_soc_dai_driver *dai;
++
++	bool shared_fifo_flush;
+ };
+ 
+ struct jz4740_i2s {
+@@ -122,19 +125,44 @@ static inline void jz4740_i2s_write(const struct jz4740_i2s *i2s,
+ 	writel(value, i2s->base + reg);
+ }
+ 
++static inline void jz4740_i2s_set_bits(const struct jz4740_i2s *i2s,
++	unsigned int reg, uint32_t bits)
++{
++	uint32_t value = jz4740_i2s_read(i2s, reg);
++	value |= bits;
++	jz4740_i2s_write(i2s, reg, value);
++}
++
+ static int jz4740_i2s_startup(struct snd_pcm_substream *substream,
+ 	struct snd_soc_dai *dai)
+ {
+ 	struct jz4740_i2s *i2s = snd_soc_dai_get_drvdata(dai);
+-	uint32_t conf, ctrl;
++	uint32_t conf;
+ 	int ret;
+ 
++	/*
++	 * When we can flush FIFOs independently, only flush the FIFO
++	 * that is starting up. We can do this when the DAI is active
++	 * because it does not disturb other active substreams.
++	 */
++	if (!i2s->soc_info->shared_fifo_flush) {
++		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
++			jz4740_i2s_set_bits(i2s, JZ_REG_AIC_CTRL, JZ_AIC_CTRL_TFLUSH);
++		else
++			jz4740_i2s_set_bits(i2s, JZ_REG_AIC_CTRL, JZ_AIC_CTRL_RFLUSH);
++	}
++
+ 	if (snd_soc_dai_active(dai))
+ 		return 0;
+ 
+-	ctrl = jz4740_i2s_read(i2s, JZ_REG_AIC_CTRL);
+-	ctrl |= JZ_AIC_CTRL_FLUSH;
+-	jz4740_i2s_write(i2s, JZ_REG_AIC_CTRL, ctrl);
++	/*
++	 * When there is a shared flush bit for both FIFOs, the TFLUSH
++	 * bit flushes both FIFOs. Flushing while the DAI is active would
++	 * cause FIFO underruns in other active substreams so we have to
++	 * guard this behind the snd_soc_dai_active() check.
++	 */
++	if (i2s->soc_info->shared_fifo_flush)
++		jz4740_i2s_set_bits(i2s, JZ_REG_AIC_CTRL, JZ_AIC_CTRL_TFLUSH);
+ 
+ 	ret = clk_prepare_enable(i2s->clk_i2s);
+ 	if (ret)
+@@ -467,6 +495,7 @@ static struct snd_soc_dai_driver jz4740_i2s_dai = {
+ static const struct i2s_soc_info jz4740_i2s_soc_info = {
+ 	.version = JZ_I2S_JZ4740,
+ 	.dai = &jz4740_i2s_dai,
++	.shared_fifo_flush = true,
+ };
+ 
+ static const struct i2s_soc_info jz4760_i2s_soc_info = {
+diff --git a/sound/soc/mediatek/common/mtk-btcvsd.c b/sound/soc/mediatek/common/mtk-btcvsd.c
+index 86e982e3209ed..e1f57b0dedd0f 100644
+--- a/sound/soc/mediatek/common/mtk-btcvsd.c
++++ b/sound/soc/mediatek/common/mtk-btcvsd.c
+@@ -1038,11 +1038,9 @@ static int mtk_pcm_btcvsd_copy(struct snd_soc_component *component,
+ 	struct mtk_btcvsd_snd *bt = snd_soc_component_get_drvdata(component);
+ 
+ 	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		mtk_btcvsd_snd_write(bt, buf, count);
++		return mtk_btcvsd_snd_write(bt, buf, count);
+ 	else
+-		mtk_btcvsd_snd_read(bt, buf, count);
+-
+-	return 0;
++		return mtk_btcvsd_snd_read(bt, buf, count);
+ }
+ 
+ /* kcontrol */
+diff --git a/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c b/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
+index 7e7bda70d12e9..619d6733091cd 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
++++ b/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
+@@ -1054,6 +1054,7 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	int irq_id;
+ 	struct mtk_base_afe *afe;
+ 	struct mt8173_afe_private *afe_priv;
++	struct snd_soc_component *comp_pcm, *comp_hdmi;
+ 
+ 	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(33));
+ 	if (ret)
+@@ -1071,16 +1072,6 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
+ 
+ 	afe->dev = &pdev->dev;
+ 
+-	irq_id = platform_get_irq(pdev, 0);
+-	if (irq_id <= 0)
+-		return irq_id < 0 ? irq_id : -ENXIO;
+-	ret = devm_request_irq(afe->dev, irq_id, mt8173_afe_irq_handler,
+-			       0, "Afe_ISR_Handle", (void *)afe);
+-	if (ret) {
+-		dev_err(afe->dev, "could not request_irq\n");
+-		return ret;
+-	}
+-
+ 	afe->base_addr = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(afe->base_addr))
+ 		return PTR_ERR(afe->base_addr);
+@@ -1142,23 +1133,65 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_pm_disable;
+ 
+-	ret = devm_snd_soc_register_component(&pdev->dev,
+-					 &mt8173_afe_pcm_dai_component,
+-					 mt8173_afe_pcm_dais,
+-					 ARRAY_SIZE(mt8173_afe_pcm_dais));
++	comp_pcm = devm_kzalloc(&pdev->dev, sizeof(*comp_pcm), GFP_KERNEL);
++	if (!comp_pcm) {
++		ret = -ENOMEM;
++		goto err_pm_disable;
++	}
++
++	ret = snd_soc_component_initialize(comp_pcm,
++					   &mt8173_afe_pcm_dai_component,
++					   &pdev->dev);
+ 	if (ret)
+ 		goto err_pm_disable;
+ 
+-	ret = devm_snd_soc_register_component(&pdev->dev,
+-					 &mt8173_afe_hdmi_dai_component,
+-					 mt8173_afe_hdmi_dais,
+-					 ARRAY_SIZE(mt8173_afe_hdmi_dais));
++#ifdef CONFIG_DEBUG_FS
++	comp_pcm->debugfs_prefix = "pcm";
++#endif
++
++	ret = snd_soc_add_component(comp_pcm,
++				    mt8173_afe_pcm_dais,
++				    ARRAY_SIZE(mt8173_afe_pcm_dais));
++	if (ret)
++		goto err_pm_disable;
++
++	comp_hdmi = devm_kzalloc(&pdev->dev, sizeof(*comp_hdmi), GFP_KERNEL);
++	if (!comp_hdmi) {
++		ret = -ENOMEM;
++		goto err_pm_disable;
++	}
++
++	ret = snd_soc_component_initialize(comp_hdmi,
++					   &mt8173_afe_hdmi_dai_component,
++					   &pdev->dev);
+ 	if (ret)
+ 		goto err_pm_disable;
+ 
++#ifdef CONFIG_DEBUG_FS
++	comp_hdmi->debugfs_prefix = "hdmi";
++#endif
++
++	ret = snd_soc_add_component(comp_hdmi,
++				    mt8173_afe_hdmi_dais,
++				    ARRAY_SIZE(mt8173_afe_hdmi_dais));
++	if (ret)
++		goto err_cleanup_components;
++
++	irq_id = platform_get_irq(pdev, 0);
++	if (irq_id <= 0)
++		return irq_id < 0 ? irq_id : -ENXIO;
++	ret = devm_request_irq(afe->dev, irq_id, mt8173_afe_irq_handler,
++			       0, "Afe_ISR_Handle", (void *)afe);
++	if (ret) {
++		dev_err(afe->dev, "could not request_irq\n");
++		goto err_pm_disable;
++	}
++
+ 	dev_info(&pdev->dev, "MT8173 AFE driver initialized.\n");
+ 	return 0;
+ 
++err_cleanup_components:
++	snd_soc_unregister_component(&pdev->dev);
+ err_pm_disable:
+ 	pm_runtime_disable(&pdev->dev);
+ 	return ret;
+@@ -1166,6 +1199,8 @@ err_pm_disable:
+ 
+ static int mt8173_afe_pcm_dev_remove(struct platform_device *pdev)
+ {
++	snd_soc_unregister_component(&pdev->dev);
++
+ 	pm_runtime_disable(&pdev->dev);
+ 	if (!pm_runtime_status_suspended(&pdev->dev))
+ 		mt8173_afe_runtime_suspend(&pdev->dev);
+diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
+index 390da5bf727eb..9421b919d4627 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
+@@ -200,14 +200,16 @@ static int mt8173_rt5650_rt5514_dev_probe(struct platform_device *pdev)
+ 	if (!mt8173_rt5650_rt5514_dais[DAI_LINK_CODEC_I2S].codecs[0].of_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 	mt8173_rt5650_rt5514_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node =
+ 		of_parse_phandle(pdev->dev.of_node, "mediatek,audio-codec", 1);
+ 	if (!mt8173_rt5650_rt5514_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 	mt8173_rt5650_rt5514_codec_conf[0].dlc.of_node =
+ 		mt8173_rt5650_rt5514_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node;
+@@ -219,6 +221,7 @@ static int mt8173_rt5650_rt5514_dev_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
+ 
++out:
+ 	of_node_put(platform_node);
+ 	return ret;
+ }
+diff --git a/sound/soc/pxa/mmp-pcm.c b/sound/soc/pxa/mmp-pcm.c
+index 53fc49e32fbc8..0791737c3bf38 100644
+--- a/sound/soc/pxa/mmp-pcm.c
++++ b/sound/soc/pxa/mmp-pcm.c
+@@ -98,7 +98,7 @@ static bool filter(struct dma_chan *chan, void *param)
+ 
+ 	devname = kasprintf(GFP_KERNEL, "%s.%d", dma_data->dma_res->name,
+ 		dma_data->ssp_id);
+-	if ((strcmp(dev_name(chan->device->dev), devname) == 0) &&
++	if (devname && (strcmp(dev_name(chan->device->dev), devname) == 0) &&
+ 		(chan->chan_id == dma_data->dma_res->start)) {
+ 		found = true;
+ 	}
+diff --git a/sound/soc/qcom/lpass-sc7180.c b/sound/soc/qcom/lpass-sc7180.c
+index c647e627897a2..cb4e9017cd778 100644
+--- a/sound/soc/qcom/lpass-sc7180.c
++++ b/sound/soc/qcom/lpass-sc7180.c
+@@ -129,6 +129,9 @@ static int sc7180_lpass_init(struct platform_device *pdev)
+ 
+ 	drvdata->clks = devm_kcalloc(dev, variant->num_clks,
+ 				     sizeof(*drvdata->clks), GFP_KERNEL);
++	if (!drvdata->clks)
++		return -ENOMEM;
++
+ 	drvdata->num_clks = variant->num_clks;
+ 
+ 	for (i = 0; i < drvdata->num_clks; i++)
+diff --git a/sound/soc/qcom/sdm845.c b/sound/soc/qcom/sdm845.c
+index 153e9b2de0b53..6be7a32933ad0 100644
+--- a/sound/soc/qcom/sdm845.c
++++ b/sound/soc/qcom/sdm845.c
+@@ -56,8 +56,8 @@ static int sdm845_slim_snd_hw_params(struct snd_pcm_substream *substream,
+ 	int ret = 0, i;
+ 
+ 	for_each_rtd_codec_dais(rtd, i, codec_dai) {
+-		sruntime = snd_soc_dai_get_sdw_stream(codec_dai,
+-						      substream->stream);
++		sruntime = snd_soc_dai_get_stream(codec_dai,
++						  substream->stream);
+ 		if (sruntime != ERR_PTR(-ENOTSUPP))
+ 			pdata->sruntime[cpu_dai->id] = sruntime;
+ 
+diff --git a/sound/soc/rockchip/rockchip_pdm.c b/sound/soc/rockchip/rockchip_pdm.c
+index 5adb293d0435d..94cfbc90390ba 100644
+--- a/sound/soc/rockchip/rockchip_pdm.c
++++ b/sound/soc/rockchip/rockchip_pdm.c
+@@ -368,6 +368,7 @@ static int rockchip_pdm_runtime_resume(struct device *dev)
+ 
+ 	ret = clk_prepare_enable(pdm->hclk);
+ 	if (ret) {
++		clk_disable_unprepare(pdm->clk);
+ 		dev_err(pdm->dev, "hclock enable failed %d\n", ret);
+ 		return ret;
+ 	}
+diff --git a/sound/soc/rockchip/rockchip_spdif.c b/sound/soc/rockchip/rockchip_spdif.c
+index 674810851fbc6..ccddcd9926af8 100644
+--- a/sound/soc/rockchip/rockchip_spdif.c
++++ b/sound/soc/rockchip/rockchip_spdif.c
+@@ -86,6 +86,7 @@ static int __maybe_unused rk_spdif_runtime_resume(struct device *dev)
+ 
+ 	ret = clk_prepare_enable(spdif->hclk);
+ 	if (ret) {
++		clk_disable_unprepare(spdif->mclk);
+ 		dev_err(spdif->dev, "hclk clock enable failed %d\n", ret);
+ 		return ret;
+ 	}
+diff --git a/sound/soc/sof/intel/hda-dai.c b/sound/soc/sof/intel/hda-dai.c
+index ef316311e959a..de80f1b3d7f25 100644
+--- a/sound/soc/sof/intel/hda-dai.c
++++ b/sound/soc/sof/intel/hda-dai.c
+@@ -236,11 +236,8 @@ static int hda_link_hw_params(struct snd_pcm_substream *substream,
+ 	if (!link)
+ 		return -EINVAL;
+ 
+-	/* set the stream tag in the codec dai dma params */
+-	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		snd_soc_dai_set_tdm_slot(codec_dai, stream_tag, 0, 0, 0);
+-	else
+-		snd_soc_dai_set_tdm_slot(codec_dai, 0, stream_tag, 0, 0);
++	/* set the hdac_stream in the codec dai */
++	snd_soc_dai_set_stream(codec_dai, hdac_stream(link_dev), substream->stream);
+ 
+ 	p_params.s_fmt = snd_pcm_format_width(params_format(params));
+ 	p_params.ch = params_channels(params);
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index 59faa5a9a7141..b67617b68e509 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -304,7 +304,8 @@ static void line6_data_received(struct urb *urb)
+ 		for (;;) {
+ 			done =
+ 				line6_midibuf_read(mb, line6->buffer_message,
+-						LINE6_MIDI_MESSAGE_MAXLEN);
++						   LINE6_MIDI_MESSAGE_MAXLEN,
++						   LINE6_MIDIBUF_READ_RX);
+ 
+ 			if (done <= 0)
+ 				break;
+diff --git a/sound/usb/line6/midi.c b/sound/usb/line6/midi.c
+index ba0e2b7e8fe19..0838632c788e4 100644
+--- a/sound/usb/line6/midi.c
++++ b/sound/usb/line6/midi.c
+@@ -44,7 +44,8 @@ static void line6_midi_transmit(struct snd_rawmidi_substream *substream)
+ 	int req, done;
+ 
+ 	for (;;) {
+-		req = min(line6_midibuf_bytes_free(mb), line6->max_packet_size);
++		req = min3(line6_midibuf_bytes_free(mb), line6->max_packet_size,
++			   LINE6_FALLBACK_MAXPACKETSIZE);
+ 		done = snd_rawmidi_transmit_peek(substream, chunk, req);
+ 
+ 		if (done == 0)
+@@ -56,7 +57,8 @@ static void line6_midi_transmit(struct snd_rawmidi_substream *substream)
+ 
+ 	for (;;) {
+ 		done = line6_midibuf_read(mb, chunk,
+-					  LINE6_FALLBACK_MAXPACKETSIZE);
++					  LINE6_FALLBACK_MAXPACKETSIZE,
++					  LINE6_MIDIBUF_READ_TX);
+ 
+ 		if (done == 0)
+ 			break;
+diff --git a/sound/usb/line6/midibuf.c b/sound/usb/line6/midibuf.c
+index 6a70463f82c4e..e7f830f7526c9 100644
+--- a/sound/usb/line6/midibuf.c
++++ b/sound/usb/line6/midibuf.c
+@@ -9,6 +9,7 @@
+ 
+ #include "midibuf.h"
+ 
++
+ static int midibuf_message_length(unsigned char code)
+ {
+ 	int message_length;
+@@ -20,12 +21,7 @@ static int midibuf_message_length(unsigned char code)
+ 
+ 		message_length = length[(code >> 4) - 8];
+ 	} else {
+-		/*
+-		   Note that according to the MIDI specification 0xf2 is
+-		   the "Song Position Pointer", but this is used by Line 6
+-		   to send sysex messages to the host.
+-		 */
+-		static const int length[] = { -1, 2, -1, 2, -1, -1, 1, 1, 1, 1,
++		static const int length[] = { -1, 2, 2, 2, -1, -1, 1, 1, 1, -1,
+ 			1, 1, 1, -1, 1, 1
+ 		};
+ 		message_length = length[code & 0x0f];
+@@ -125,7 +121,7 @@ int line6_midibuf_write(struct midi_buffer *this, unsigned char *data,
+ }
+ 
+ int line6_midibuf_read(struct midi_buffer *this, unsigned char *data,
+-		       int length)
++		       int length, int read_type)
+ {
+ 	int bytes_used;
+ 	int length1, length2;
+@@ -148,9 +144,22 @@ int line6_midibuf_read(struct midi_buffer *this, unsigned char *data,
+ 
+ 	length1 = this->size - this->pos_read;
+ 
+-	/* check MIDI command length */
+ 	command = this->buf[this->pos_read];
++	/*
++	   PODxt always has status byte lower nibble set to 0010,
++	   when it means to send 0000, so we correct if here so
++	   that control/program changes come on channel 1 and
++	   sysex message status byte is correct
++	 */
++	if (read_type == LINE6_MIDIBUF_READ_RX) {
++		if (command == 0xb2 || command == 0xc2 || command == 0xf2) {
++			unsigned char fixed = command & 0xf0;
++			this->buf[this->pos_read] = fixed;
++			command = fixed;
++		}
++	}
+ 
++	/* check MIDI command length */
+ 	if (command & 0x80) {
+ 		midi_length = midibuf_message_length(command);
+ 		this->command_prev = command;
+diff --git a/sound/usb/line6/midibuf.h b/sound/usb/line6/midibuf.h
+index 124a8f9f7e96c..542e8d836f87d 100644
+--- a/sound/usb/line6/midibuf.h
++++ b/sound/usb/line6/midibuf.h
+@@ -8,6 +8,9 @@
+ #ifndef MIDIBUF_H
+ #define MIDIBUF_H
+ 
++#define LINE6_MIDIBUF_READ_TX 0
++#define LINE6_MIDIBUF_READ_RX 1
++
+ struct midi_buffer {
+ 	unsigned char *buf;
+ 	int size;
+@@ -23,7 +26,7 @@ extern void line6_midibuf_destroy(struct midi_buffer *mb);
+ extern int line6_midibuf_ignore(struct midi_buffer *mb, int length);
+ extern int line6_midibuf_init(struct midi_buffer *mb, int size, int split);
+ extern int line6_midibuf_read(struct midi_buffer *mb, unsigned char *data,
+-			      int length);
++			      int length, int read_type);
+ extern void line6_midibuf_reset(struct midi_buffer *mb);
+ extern int line6_midibuf_write(struct midi_buffer *mb, unsigned char *data,
+ 			       int length);
+diff --git a/sound/usb/line6/pod.c b/sound/usb/line6/pod.c
+index 16e644330c4d6..54be5ac919bfb 100644
+--- a/sound/usb/line6/pod.c
++++ b/sound/usb/line6/pod.c
+@@ -159,8 +159,9 @@ static struct line6_pcm_properties pod_pcm_properties = {
+ 	.bytes_per_channel = 3 /* SNDRV_PCM_FMTBIT_S24_3LE */
+ };
+ 
++
+ static const char pod_version_header[] = {
+-	0xf2, 0x7e, 0x7f, 0x06, 0x02
++	0xf0, 0x7e, 0x7f, 0x06, 0x02
+ };
+ 
+ static char *pod_alloc_sysex_buffer(struct usb_line6_pod *pod, int code,
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 6a78813b63f53..ec74aead0844c 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -76,6 +76,8 @@
+ { USB_DEVICE_VENDOR_SPEC(0x041e, 0x3f0a) },
+ /* E-Mu 0204 USB */
+ { USB_DEVICE_VENDOR_SPEC(0x041e, 0x3f19) },
++/* Ktmicro Usb_audio device */
++{ USB_DEVICE_VENDOR_SPEC(0x31b2, 0x0011) },
+ 
+ /*
+  * Creative Technology, Ltd Live! Cam Sync HD [VF0770]
+diff --git a/tools/arch/parisc/include/uapi/asm/mman.h b/tools/arch/parisc/include/uapi/asm/mman.h
+index 506c06a6536fb..4cc88a642e106 100644
+--- a/tools/arch/parisc/include/uapi/asm/mman.h
++++ b/tools/arch/parisc/include/uapi/asm/mman.h
+@@ -1,20 +1,20 @@
+ /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+ #ifndef TOOLS_ARCH_PARISC_UAPI_ASM_MMAN_FIX_H
+ #define TOOLS_ARCH_PARISC_UAPI_ASM_MMAN_FIX_H
+-#define MADV_DODUMP	70
++#define MADV_DODUMP	17
+ #define MADV_DOFORK	11
+-#define MADV_DONTDUMP   69
++#define MADV_DONTDUMP   16
+ #define MADV_DONTFORK	10
+ #define MADV_DONTNEED   4
+ #define MADV_FREE	8
+-#define MADV_HUGEPAGE	67
+-#define MADV_MERGEABLE   65
+-#define MADV_NOHUGEPAGE	68
++#define MADV_HUGEPAGE	14
++#define MADV_MERGEABLE  12
++#define MADV_NOHUGEPAGE 15
+ #define MADV_NORMAL     0
+ #define MADV_RANDOM     1
+ #define MADV_REMOVE	9
+ #define MADV_SEQUENTIAL 2
+-#define MADV_UNMERGEABLE 66
++#define MADV_UNMERGEABLE 13
+ #define MADV_WILLNEED   3
+ #define MAP_ANONYMOUS	0x10
+ #define MAP_DENYWRITE	0x0800
+diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
+index 875dde20d56e3..92a3eaa154ddd 100644
+--- a/tools/lib/bpf/bpf.h
++++ b/tools/lib/bpf/bpf.h
+@@ -241,8 +241,15 @@ LIBBPF_API int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf,
+ 				 __u32 *buf_len, __u32 *prog_id, __u32 *fd_type,
+ 				 __u64 *probe_offset, __u64 *probe_addr);
+ 
++#ifdef __cplusplus
++/* forward-declaring enums in C++ isn't compatible with pure C enums, so
++ * instead define bpf_enable_stats() as accepting int as an input
++ */
++LIBBPF_API int bpf_enable_stats(int type);
++#else
+ enum bpf_stats_type; /* defined in up-to-date linux/bpf.h */
+ LIBBPF_API int bpf_enable_stats(enum bpf_stats_type type);
++#endif
+ 
+ struct bpf_prog_bind_opts {
+ 	size_t sz; /* size of this struct for forward/backward compatibility */
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index bd22853be4a6b..0e2d63da24e91 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -188,6 +188,17 @@ static int btf_dump_resize(struct btf_dump *d)
+ 	return 0;
+ }
+ 
++static void btf_dump_free_names(struct hashmap *map)
++{
++	size_t bkt;
++	struct hashmap_entry *cur;
++
++	hashmap__for_each_entry(map, cur, bkt)
++		free((void *)cur->key);
++
++	hashmap__free(map);
++}
++
+ void btf_dump__free(struct btf_dump *d)
+ {
+ 	int i;
+@@ -206,8 +217,8 @@ void btf_dump__free(struct btf_dump *d)
+ 	free(d->cached_names);
+ 	free(d->emit_queue);
+ 	free(d->decl_stack);
+-	hashmap__free(d->type_names);
+-	hashmap__free(d->ident_names);
++	btf_dump_free_names(d->type_names);
++	btf_dump_free_names(d->ident_names);
+ 
+ 	free(d);
+ }
+@@ -1392,11 +1403,23 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
+ static size_t btf_dump_name_dups(struct btf_dump *d, struct hashmap *name_map,
+ 				 const char *orig_name)
+ {
++	char *old_name, *new_name;
+ 	size_t dup_cnt = 0;
++	int err;
++
++	new_name = strdup(orig_name);
++	if (!new_name)
++		return 1;
+ 
+ 	hashmap__find(name_map, orig_name, (void **)&dup_cnt);
+ 	dup_cnt++;
+-	hashmap__set(name_map, orig_name, (void *)dup_cnt, NULL, NULL);
++
++	err = hashmap__set(name_map, new_name, (void *)dup_cnt,
++			   (const void **)&old_name, NULL);
++	if (err)
++		free(new_name);
++
++	free(old_name);
+ 
+ 	return dup_cnt;
+ }
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 66d7f8d494dec..015ed8253f739 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -3479,6 +3479,9 @@ static struct bpf_program *find_prog_by_sec_insn(const struct bpf_object *obj,
+ 	int l = 0, r = obj->nr_programs - 1, m;
+ 	struct bpf_program *prog;
+ 
++	if (!obj->nr_programs)
++		return NULL;
++
+ 	while (l < r) {
+ 		m = l + (r - l + 1) / 2;
+ 		prog = &obj->programs[m];
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index ea80b29b99134..700984e7f5ba1 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -196,7 +196,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
+ 		return false;
+ 
+ 	insn = find_insn(file, func->sec, func->offset);
+-	if (!insn->func)
++	if (!insn || !insn->func)
+ 		return false;
+ 
+ 	func_for_each_insn(file, func, insn) {
+@@ -802,6 +802,16 @@ static const char *uaccess_safe_builtin[] = {
+ 	"__tsan_read_write4",
+ 	"__tsan_read_write8",
+ 	"__tsan_read_write16",
++	"__tsan_volatile_read1",
++	"__tsan_volatile_read2",
++	"__tsan_volatile_read4",
++	"__tsan_volatile_read8",
++	"__tsan_volatile_read16",
++	"__tsan_volatile_write1",
++	"__tsan_volatile_write2",
++	"__tsan_volatile_write4",
++	"__tsan_volatile_write8",
++	"__tsan_volatile_write16",
+ 	"__tsan_atomic8_load",
+ 	"__tsan_atomic16_load",
+ 	"__tsan_atomic32_load",
+diff --git a/tools/perf/bench/bench.h b/tools/perf/bench/bench.h
+index eac36afab2b39..9a56a44f87dd4 100644
+--- a/tools/perf/bench/bench.h
++++ b/tools/perf/bench/bench.h
+@@ -10,25 +10,13 @@ extern struct timeval bench__start, bench__end, bench__runtime;
+  * The madvise transparent hugepage constants were added in glibc
+  * 2.13. For compatibility with older versions of glibc, define these
+  * tokens if they are not already defined.
+- *
+- * PA-RISC uses different madvise values from other architectures and
+- * needs to be special-cased.
+  */
+-#ifdef __hppa__
+-# ifndef MADV_HUGEPAGE
+-#  define MADV_HUGEPAGE		67
+-# endif
+-# ifndef MADV_NOHUGEPAGE
+-#  define MADV_NOHUGEPAGE	68
+-# endif
+-#else
+ # ifndef MADV_HUGEPAGE
+ #  define MADV_HUGEPAGE		14
+ # endif
+ # ifndef MADV_NOHUGEPAGE
+ #  define MADV_NOHUGEPAGE	15
+ # endif
+-#endif
+ 
+ int bench_numa(int argc, const char **argv);
+ int bench_sched_messaging(int argc, const char **argv);
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index de80534473afa..8de0d0a740de4 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -87,6 +87,8 @@
+ # define F_LINUX_SPECIFIC_BASE	1024
+ #endif
+ 
++#define RAW_SYSCALL_ARGS_NUM	6
++
+ /*
+  * strtoul: Go from a string to a value, i.e. for msr: MSR_FS_BASE to 0xc0000100
+  */
+@@ -107,7 +109,7 @@ struct syscall_fmt {
+ 		const char *sys_enter,
+ 			   *sys_exit;
+ 	}	   bpf_prog_name;
+-	struct syscall_arg_fmt arg[6];
++	struct syscall_arg_fmt arg[RAW_SYSCALL_ARGS_NUM];
+ 	u8	   nr_args;
+ 	bool	   errpid;
+ 	bool	   timeout;
+@@ -1216,7 +1218,7 @@ struct syscall {
+  */
+ struct bpf_map_syscall_entry {
+ 	bool	enabled;
+-	u16	string_args_len[6];
++	u16	string_args_len[RAW_SYSCALL_ARGS_NUM];
+ };
+ 
+ /*
+@@ -1641,7 +1643,7 @@ static int syscall__alloc_arg_fmts(struct syscall *sc, int nr_args)
+ {
+ 	int idx;
+ 
+-	if (nr_args == 6 && sc->fmt && sc->fmt->nr_args != 0)
++	if (nr_args == RAW_SYSCALL_ARGS_NUM && sc->fmt && sc->fmt->nr_args != 0)
+ 		nr_args = sc->fmt->nr_args;
+ 
+ 	sc->arg_fmt = calloc(nr_args, sizeof(*sc->arg_fmt));
+@@ -1774,11 +1776,11 @@ static int trace__read_syscall_info(struct trace *trace, int id)
+ #endif
+ 	sc = trace->syscalls.table + id;
+ 	if (sc->nonexistent)
+-		return 0;
++		return -EEXIST;
+ 
+ 	if (name == NULL) {
+ 		sc->nonexistent = true;
+-		return 0;
++		return -EEXIST;
+ 	}
+ 
+ 	sc->name = name;
+@@ -1792,11 +1794,18 @@ static int trace__read_syscall_info(struct trace *trace, int id)
+ 		sc->tp_format = trace_event__tp_format("syscalls", tp_name);
+ 	}
+ 
+-	if (syscall__alloc_arg_fmts(sc, IS_ERR(sc->tp_format) ? 6 : sc->tp_format->format.nr_fields))
+-		return -ENOMEM;
+-
+-	if (IS_ERR(sc->tp_format))
++	/*
++	 * Fails to read trace point format via sysfs node, so the trace point
++	 * doesn't exist.  Set the 'nonexistent' flag as true.
++	 */
++	if (IS_ERR(sc->tp_format)) {
++		sc->nonexistent = true;
+ 		return PTR_ERR(sc->tp_format);
++	}
++
++	if (syscall__alloc_arg_fmts(sc, IS_ERR(sc->tp_format) ?
++					RAW_SYSCALL_ARGS_NUM : sc->tp_format->format.nr_fields))
++		return -ENOMEM;
+ 
+ 	sc->args = sc->tp_format->format.fields;
+ 	/*
+@@ -2114,11 +2123,8 @@ static struct syscall *trace__syscall_info(struct trace *trace,
+ 	    (err = trace__read_syscall_info(trace, id)) != 0)
+ 		goto out_cant_read;
+ 
+-	if (trace->syscalls.table[id].name == NULL) {
+-		if (trace->syscalls.table[id].nonexistent)
+-			return NULL;
++	if (trace->syscalls.table && trace->syscalls.table[id].nonexistent)
+ 		goto out_cant_read;
+-	}
+ 
+ 	return &trace->syscalls.table[id];
+ 
+diff --git a/tools/perf/util/data.c b/tools/perf/util/data.c
+index 48754083791d8..29d32ba046b5c 100644
+--- a/tools/perf/util/data.c
++++ b/tools/perf/util/data.c
+@@ -127,6 +127,7 @@ int perf_data__open_dir(struct perf_data *data)
+ 		file->size = st.st_size;
+ 	}
+ 
++	closedir(dir);
+ 	if (!files)
+ 		return -EINVAL;
+ 
+@@ -135,6 +136,7 @@ int perf_data__open_dir(struct perf_data *data)
+ 	return 0;
+ 
+ out_err:
++	closedir(dir);
+ 	close_dir(files, nr);
+ 	return ret;
+ }
+diff --git a/tools/perf/util/debug.c b/tools/perf/util/debug.c
+index 0af163abaa62b..854dd3d2d8de3 100644
+--- a/tools/perf/util/debug.c
++++ b/tools/perf/util/debug.c
+@@ -207,6 +207,10 @@ int perf_quiet_option(void)
+ 		opt++;
+ 	}
+ 
++	/* For debug variables that are used as bool types, set to 0. */
++	redirect_to_stderr = 0;
++	debug_peo_args = 0;
++
+ 	return 0;
+ }
+ 
+diff --git a/tools/perf/util/dwarf-aux.c b/tools/perf/util/dwarf-aux.c
+index 4343356f3cf9a..f8a10d5148f6f 100644
+--- a/tools/perf/util/dwarf-aux.c
++++ b/tools/perf/util/dwarf-aux.c
+@@ -308,26 +308,13 @@ static int die_get_attr_udata(Dwarf_Die *tp_die, unsigned int attr_name,
+ {
+ 	Dwarf_Attribute attr;
+ 
+-	if (dwarf_attr(tp_die, attr_name, &attr) == NULL ||
++	if (dwarf_attr_integrate(tp_die, attr_name, &attr) == NULL ||
+ 	    dwarf_formudata(&attr, result) != 0)
+ 		return -ENOENT;
+ 
+ 	return 0;
+ }
+ 
+-/* Get attribute and translate it as a sdata */
+-static int die_get_attr_sdata(Dwarf_Die *tp_die, unsigned int attr_name,
+-			      Dwarf_Sword *result)
+-{
+-	Dwarf_Attribute attr;
+-
+-	if (dwarf_attr(tp_die, attr_name, &attr) == NULL ||
+-	    dwarf_formsdata(&attr, result) != 0)
+-		return -ENOENT;
+-
+-	return 0;
+-}
+-
+ /**
+  * die_is_signed_type - Check whether a type DIE is signed or not
+  * @tp_die: a DIE of a type
+@@ -467,9 +454,9 @@ int die_get_data_member_location(Dwarf_Die *mb_die, Dwarf_Word *offs)
+ /* Get the call file index number in CU DIE */
+ static int die_get_call_fileno(Dwarf_Die *in_die)
+ {
+-	Dwarf_Sword idx;
++	Dwarf_Word idx;
+ 
+-	if (die_get_attr_sdata(in_die, DW_AT_call_file, &idx) == 0)
++	if (die_get_attr_udata(in_die, DW_AT_call_file, &idx) == 0)
+ 		return (int)idx;
+ 	else
+ 		return -ENOENT;
+@@ -478,9 +465,9 @@ static int die_get_call_fileno(Dwarf_Die *in_die)
+ /* Get the declared file index number in CU DIE */
+ static int die_get_decl_fileno(Dwarf_Die *pdie)
+ {
+-	Dwarf_Sword idx;
++	Dwarf_Word idx;
+ 
+-	if (die_get_attr_sdata(pdie, DW_AT_decl_file, &idx) == 0)
++	if (die_get_attr_udata(pdie, DW_AT_decl_file, &idx) == 0)
+ 		return (int)idx;
+ 	else
+ 		return -ENOENT;
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 3e423a9200151..5221f272f85c6 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -1247,7 +1247,7 @@ int dso__load_sym(struct dso *dso, struct map *map, struct symsrc *syms_ss,
+ 			   (!used_opd && syms_ss->adjust_symbols)) {
+ 			GElf_Phdr phdr;
+ 
+-			if (elf_read_program_header(syms_ss->elf,
++			if (elf_read_program_header(runtime_ss->elf,
+ 						    (u64)sym.st_value, &phdr)) {
+ 				pr_debug4("%s: failed to find program header for "
+ 					   "symbol: %s st_value: %#" PRIx64 "\n",
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index 4e24509645173..8b1e3ae8fe50d 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -1912,7 +1912,7 @@ sub run_scp_mod {
+ 
+ sub _get_grub_index {
+ 
+-    my ($command, $target, $skip) = @_;
++    my ($command, $target, $skip, $submenu) = @_;
+ 
+     return if (defined($grub_number) && defined($last_grub_menu) &&
+ 	       $last_grub_menu eq $grub_menu && defined($last_machine) &&
+@@ -1929,11 +1929,16 @@ sub _get_grub_index {
+ 
+     my $found = 0;
+ 
++    my $submenu_number = 0;
++
+     while (<IN>) {
+ 	if (/$target/) {
+ 	    $grub_number++;
+ 	    $found = 1;
+ 	    last;
++	} elsif (defined($submenu) && /$submenu/) {
++		$submenu_number++;
++		$grub_number = -1;
+ 	} elsif (/$skip/) {
+ 	    $grub_number++;
+ 	}
+@@ -1942,6 +1947,9 @@ sub _get_grub_index {
+ 
+     dodie "Could not find '$grub_menu' through $command on $machine"
+ 	if (!$found);
++    if ($submenu_number > 0) {
++	$grub_number = "$submenu_number>$grub_number";
++    }
+     doprint "$grub_number\n";
+     $last_grub_menu = $grub_menu;
+     $last_machine = $machine;
+@@ -1952,6 +1960,7 @@ sub get_grub_index {
+     my $command;
+     my $target;
+     my $skip;
++    my $submenu;
+     my $grub_menu_qt;
+ 
+     if ($reboot_type !~ /^grub/) {
+@@ -1966,8 +1975,9 @@ sub get_grub_index {
+ 	$skip = '^\s*title\s';
+     } elsif ($reboot_type eq "grub2") {
+ 	$command = "cat $grub_file";
+-	$target = '^menuentry.*' . $grub_menu_qt;
+-	$skip = '^menuentry\s|^submenu\s';
++	$target = '^\s*menuentry.*' . $grub_menu_qt;
++	$skip = '^\s*menuentry';
++	$submenu = '^\s*submenu\s';
+     } elsif ($reboot_type eq "grub2bls") {
+         $command = $grub_bls_get;
+         $target = '^title=.*' . $grub_menu_qt;
+@@ -1976,7 +1986,7 @@ sub get_grub_index {
+ 	return;
+     }
+ 
+-    _get_grub_index($command, $target, $skip);
++    _get_grub_index($command, $target, $skip, $submenu);
+ }
+ 
+ sub wait_for_input
+@@ -2040,7 +2050,7 @@ sub reboot_to {
+     if ($reboot_type eq "grub") {
+ 	run_ssh "'(echo \"savedefault --default=$grub_number --once\" | grub --batch)'";
+     } elsif (($reboot_type eq "grub2") or ($reboot_type eq "grub2bls")) {
+-	run_ssh "$grub_reboot $grub_number";
++	run_ssh "$grub_reboot \"'$grub_number'\"";
+     } elsif ($reboot_type eq "syslinux") {
+ 	run_ssh "$syslinux --once \\\"$syslinux_label\\\" $syslinux_path";
+     } elsif (defined $reboot_script) {
+@@ -3763,9 +3773,10 @@ sub test_this_config {
+     # .config to make sure it is missing the config that
+     # we had before
+     my %configs = %min_configs;
+-    delete $configs{$config};
++    $configs{$config} = "# $config is not set";
+     make_new_config ((values %configs), (values %keep_configs));
+     make_oldconfig;
++    delete $configs{$config};
+     undef %configs;
+     assign_configs \%configs, $output_config;
+ 
+diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
+index d9c2835031590..db1e24d7155fa 100644
+--- a/tools/testing/selftests/Makefile
++++ b/tools/testing/selftests/Makefile
+@@ -103,19 +103,27 @@ ifdef building_out_of_srctree
+ override LDFLAGS =
+ endif
+ 
+-ifneq ($(O),)
+-	BUILD := $(O)/kselftest
++top_srcdir ?= ../../..
++
++ifeq ("$(origin O)", "command line")
++  KBUILD_OUTPUT := $(O)
++endif
++
++ifneq ($(KBUILD_OUTPUT),)
++  # Make's built-in functions such as $(abspath ...), $(realpath ...) cannot
++  # expand a shell special character '~'. We use a somewhat tedious way here.
++  abs_objtree := $(shell cd $(top_srcdir) && mkdir -p $(KBUILD_OUTPUT) && cd $(KBUILD_OUTPUT) && pwd)
++  $(if $(abs_objtree),, \
++    $(error failed to create output directory "$(KBUILD_OUTPUT)"))
++  # $(realpath ...) resolves symlinks
++  abs_objtree := $(realpath $(abs_objtree))
++  BUILD := $(abs_objtree)/kselftest
+ else
+-	ifneq ($(KBUILD_OUTPUT),)
+-		BUILD := $(KBUILD_OUTPUT)/kselftest
+-	else
+-		BUILD := $(shell pwd)
+-		DEFAULT_INSTALL_HDR_PATH := 1
+-	endif
++  BUILD := $(CURDIR)
++  DEFAULT_INSTALL_HDR_PATH := 1
+ endif
+ 
+ # Prepare for headers install
+-top_srcdir ?= ../../..
+ include $(top_srcdir)/scripts/subarch.include
+ ARCH           ?= $(SUBARCH)
+ export KSFT_KHDR_INSTALL_DONE := 1
+diff --git a/tools/testing/selftests/drivers/net/netdevsim/devlink.sh b/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
+index 40909c254365b..16d2de18591d3 100755
+--- a/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
++++ b/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
+@@ -495,8 +495,8 @@ dummy_reporter_test()
+ 
+ 	check_reporter_info dummy healthy 3 3 10 true
+ 
+-	echo 8192> $DEBUGFS_DIR/health/binary_len
+-	check_fail $? "Failed set dummy reporter binary len to 8192"
++	echo 8192 > $DEBUGFS_DIR/health/binary_len
++	check_err $? "Failed set dummy reporter binary len to 8192"
+ 
+ 	local dump=$(devlink health dump show $DL_HANDLE reporter dummy -j)
+ 	check_err $? "Failed show dump of dummy reporter"
+diff --git a/tools/testing/selftests/efivarfs/efivarfs.sh b/tools/testing/selftests/efivarfs/efivarfs.sh
+index a90f394f9aa90..d374878cc0ba9 100755
+--- a/tools/testing/selftests/efivarfs/efivarfs.sh
++++ b/tools/testing/selftests/efivarfs/efivarfs.sh
+@@ -87,6 +87,11 @@ test_create_read()
+ {
+ 	local file=$efivarfs_mount/$FUNCNAME-$test_guid
+ 	./create-read $file
++	if [ $? -ne 0 ]; then
++		echo "create and read $file failed"
++		file_cleanup $file
++		exit 1
++	fi
+ 	file_cleanup $file
+ }
+ 
+diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
+index 3145b0f1835c3..27a68bbe778be 100644
+--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
++++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
+@@ -38,11 +38,18 @@ cnt_trace() {
+ 
+ test_event_enabled() {
+     val=$1
++    check_times=10		# wait for 10 * SLEEP_TIME at most
+ 
+-    e=`cat $EVENT_ENABLE`
+-    if [ "$e" != $val ]; then
+-	fail "Expected $val but found $e"
+-    fi
++    while [ $check_times -ne 0 ]; do
++	e=`cat $EVENT_ENABLE`
++	if [ "$e" == $val ]; then
++	    return 0
++	fi
++	sleep $SLEEP_TIME
++	check_times=$((check_times - 1))
++    done
++
++    fail "Expected $val but found $e"
+ }
+ 
+ run_enable_disable() {
+diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
+index b7217b5251f57..56e360e019ecc 100644
+--- a/tools/testing/selftests/lib.mk
++++ b/tools/testing/selftests/lib.mk
+@@ -128,6 +128,11 @@ endef
+ clean:
+ 	$(CLEAN)
+ 
++# Enables to extend CFLAGS and LDFLAGS from command line, e.g.
++# make USERCFLAGS=-Werror USERLDFLAGS=-static
++CFLAGS += $(USERCFLAGS)
++LDFLAGS += $(USERLDFLAGS)
++
+ # When make O= with kselftest target from main level
+ # the following aren't defined.
+ #
+diff --git a/tools/testing/selftests/netfilter/conntrack_icmp_related.sh b/tools/testing/selftests/netfilter/conntrack_icmp_related.sh
+index b48e1833bc896..76645aaf2b58f 100755
+--- a/tools/testing/selftests/netfilter/conntrack_icmp_related.sh
++++ b/tools/testing/selftests/netfilter/conntrack_icmp_related.sh
+@@ -35,6 +35,8 @@ cleanup() {
+ 	for i in 1 2;do ip netns del nsrouter$i;done
+ }
+ 
++trap cleanup EXIT
++
+ ipv4() {
+     echo -n 192.168.$1.2
+ }
+@@ -146,11 +148,17 @@ ip netns exec nsclient1 nft -f - <<EOF
+ table inet filter {
+ 	counter unknown { }
+ 	counter related { }
++	counter redir4 { }
++	counter redir6 { }
+ 	chain input {
+ 		type filter hook input priority 0; policy accept;
+-		meta l4proto { icmp, icmpv6 } ct state established,untracked accept
+ 
++		icmp type "redirect" ct state "related" counter name "redir4" accept
++		icmpv6 type "nd-redirect" ct state "related" counter name "redir6" accept
++
++		meta l4proto { icmp, icmpv6 } ct state established,untracked accept
+ 		meta l4proto { icmp, icmpv6 } ct state "related" counter name "related" accept
++
+ 		counter name "unknown" drop
+ 	}
+ }
+@@ -279,5 +287,29 @@ else
+ 	echo "ERROR: icmp error RELATED state test has failed"
+ fi
+ 
+-cleanup
++# add 'bad' route,  expect icmp REDIRECT to be generated
++ip netns exec nsclient1 ip route add 192.168.1.42 via 192.168.1.1
++ip netns exec nsclient1 ip route add dead:1::42 via dead:1::1
++
++ip netns exec "nsclient1" ping -q -c 2 192.168.1.42 > /dev/null
++
++expect="packets 1 bytes 112"
++check_counter nsclient1 "redir4" "$expect"
++if [ $? -ne 0 ];then
++	ret=1
++fi
++
++ip netns exec "nsclient1" ping -c 1 dead:1::42 > /dev/null
++expect="packets 1 bytes 192"
++check_counter nsclient1 "redir6" "$expect"
++if [ $? -ne 0 ];then
++	ret=1
++fi
++
++if [ $ret -eq 0 ];then
++	echo "PASS: icmp redirects had RELATED state"
++else
++	echo "ERROR: icmp redirect RELATED state test has failed"
++fi
++
+ exit $ret
+diff --git a/tools/testing/selftests/powerpc/dscr/dscr_sysfs_test.c b/tools/testing/selftests/powerpc/dscr/dscr_sysfs_test.c
+index fbbdffdb2e5d2..f20d1c166d1e4 100644
+--- a/tools/testing/selftests/powerpc/dscr/dscr_sysfs_test.c
++++ b/tools/testing/selftests/powerpc/dscr/dscr_sysfs_test.c
+@@ -24,6 +24,7 @@ static int check_cpu_dscr_default(char *file, unsigned long val)
+ 	rc = read(fd, buf, sizeof(buf));
+ 	if (rc == -1) {
+ 		perror("read() failed");
++		close(fd);
+ 		return 1;
+ 	}
+ 	close(fd);
+@@ -65,8 +66,10 @@ static int check_all_cpu_dscr_defaults(unsigned long val)
+ 		if (access(file, F_OK))
+ 			continue;
+ 
+-		if (check_cpu_dscr_default(file, val))
++		if (check_cpu_dscr_default(file, val)) {
++			closedir(sysfs);
+ 			return 1;
++		}
+ 	}
+ 	closedir(sysfs);
+ 	return 0;
+diff --git a/tools/testing/selftests/proc/proc-uptime-002.c b/tools/testing/selftests/proc/proc-uptime-002.c
+index e7ceabed7f51f..7d0aa22bdc12b 100644
+--- a/tools/testing/selftests/proc/proc-uptime-002.c
++++ b/tools/testing/selftests/proc/proc-uptime-002.c
+@@ -17,6 +17,7 @@
+ // while shifting across CPUs.
+ #undef NDEBUG
+ #include <assert.h>
++#include <errno.h>
+ #include <unistd.h>
+ #include <sys/syscall.h>
+ #include <stdlib.h>
+@@ -54,7 +55,7 @@ int main(void)
+ 		len += sizeof(unsigned long);
+ 		free(m);
+ 		m = malloc(len);
+-	} while (sys_sched_getaffinity(0, len, m) == -EINVAL);
++	} while (sys_sched_getaffinity(0, len, m) == -1 && errno == EINVAL);
+ 
+ 	fd = open("/proc/uptime", O_RDONLY);
+ 	assert(fd >= 0);
+diff --git a/tools/testing/selftests/rcutorture/bin/console-badness.sh b/tools/testing/selftests/rcutorture/bin/console-badness.sh
+index 0e4c0b2eb7f08..80ae7f08b363e 100755
+--- a/tools/testing/selftests/rcutorture/bin/console-badness.sh
++++ b/tools/testing/selftests/rcutorture/bin/console-badness.sh
+@@ -13,4 +13,5 @@
+ egrep 'Badness|WARNING:|Warn|BUG|===========|Call Trace:|Oops:|detected stalls on CPUs/tasks:|self-detected stall on CPU|Stall ended before state dump start|\?\?\? Writer stall state|rcu_.*kthread starved for|!!!' |
+ grep -v 'ODEBUG: ' |
+ grep -v 'This means that this is a DEBUG kernel and it is' |
+-grep -v 'Warning: unable to open an initial console'
++grep -v 'Warning: unable to open an initial console' |
++grep -v 'NOHZ tick-stop error: Non-RCU local softirq work is pending, handler'


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-01-18 11:09 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-01-18 11:09 UTC (permalink / raw
  To: gentoo-commits

commit:     beee7ee96472c4d6609e31905f428b669281a64c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jan 18 11:09:08 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jan 18 11:09:08 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=beee7ee9

Linux patch 5.10.164

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1163_linux-5.10.164.patch | 3690 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3694 insertions(+)

diff --git a/0000_README b/0000_README
index fd63814b..eb86feb6 100644
--- a/0000_README
+++ b/0000_README
@@ -695,6 +695,10 @@ Patch:  1162_linux-5.10.163.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.163
 
+Patch:  1163_linux-5.10.164.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.164
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1163_linux-5.10.164.patch b/1163_linux-5.10.164.patch
new file mode 100644
index 00000000..3030d9d6
--- /dev/null
+++ b/1163_linux-5.10.164.patch
@@ -0,0 +1,3690 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index f577c29f20930..eb437d659f2c4 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -2103,24 +2103,57 @@
+ 
+ 	ivrs_ioapic	[HW,X86-64]
+ 			Provide an override to the IOAPIC-ID<->DEVICE-ID
+-			mapping provided in the IVRS ACPI table. For
+-			example, to map IOAPIC-ID decimal 10 to
+-			PCI device 00:14.0 write the parameter as:
++			mapping provided in the IVRS ACPI table.
++			By default, PCI segment is 0, and can be omitted.
++
++			For example, to map IOAPIC-ID decimal 10 to
++			PCI segment 0x1 and PCI device 00:14.0,
++			write the parameter as:
++				ivrs_ioapic=10@0001:00:14.0
++
++			Deprecated formats:
++			* To map IOAPIC-ID decimal 10 to PCI device 00:14.0
++			  write the parameter as:
+ 				ivrs_ioapic[10]=00:14.0
++			* To map IOAPIC-ID decimal 10 to PCI segment 0x1 and
++			  PCI device 00:14.0 write the parameter as:
++				ivrs_ioapic[10]=0001:00:14.0
+ 
+ 	ivrs_hpet	[HW,X86-64]
+ 			Provide an override to the HPET-ID<->DEVICE-ID
+-			mapping provided in the IVRS ACPI table. For
+-			example, to map HPET-ID decimal 0 to
+-			PCI device 00:14.0 write the parameter as:
++			mapping provided in the IVRS ACPI table.
++			By default, PCI segment is 0, and can be omitted.
++
++			For example, to map HPET-ID decimal 10 to
++			PCI segment 0x1 and PCI device 00:14.0,
++			write the parameter as:
++				ivrs_hpet=10@0001:00:14.0
++
++			Deprecated formats:
++			* To map HPET-ID decimal 0 to PCI device 00:14.0
++			  write the parameter as:
+ 				ivrs_hpet[0]=00:14.0
++			* To map HPET-ID decimal 10 to PCI segment 0x1 and
++			  PCI device 00:14.0 write the parameter as:
++				ivrs_ioapic[10]=0001:00:14.0
+ 
+ 	ivrs_acpihid	[HW,X86-64]
+ 			Provide an override to the ACPI-HID:UID<->DEVICE-ID
+-			mapping provided in the IVRS ACPI table. For
+-			example, to map UART-HID:UID AMD0020:0 to
+-			PCI device 00:14.5 write the parameter as:
++			mapping provided in the IVRS ACPI table.
++			By default, PCI segment is 0, and can be omitted.
++
++			For example, to map UART-HID:UID AMD0020:0 to
++			PCI segment 0x1 and PCI device ID 00:14.5,
++			write the parameter as:
++				ivrs_acpihid=AMD0020:0@0001:00:14.5
++
++			Deprecated formats:
++			* To map UART-HID:UID AMD0020:0 to PCI segment is 0,
++			  PCI device ID 00:14.5, write the parameter as:
+ 				ivrs_acpihid[00:14.5]=AMD0020:0
++			* To map UART-HID:UID AMD0020:0 to PCI segment 0x1 and
++			  PCI device ID 00:14.5, write the parameter as:
++				ivrs_acpihid[0001:00:14.5]=AMD0020:0
+ 
+ 	js=		[HW,JOY] Analog joystick
+ 			See Documentation/input/joydev/joystick.rst.
+diff --git a/Documentation/sphinx/load_config.py b/Documentation/sphinx/load_config.py
+index eeb394b39e2cc..8b416bfd75ac1 100644
+--- a/Documentation/sphinx/load_config.py
++++ b/Documentation/sphinx/load_config.py
+@@ -3,7 +3,7 @@
+ 
+ import os
+ import sys
+-from sphinx.util.pycompat import execfile_
++from sphinx.util.osutil import fs_encoding
+ 
+ # ------------------------------------------------------------------------------
+ def loadConfig(namespace):
+@@ -48,7 +48,9 @@ def loadConfig(namespace):
+             sys.stdout.write("load additional sphinx-config: %s\n" % config_file)
+             config = namespace.copy()
+             config['__file__'] = config_file
+-            execfile_(config_file, config)
++            with open(config_file, 'rb') as f:
++                code = compile(f.read(), fs_encoding, 'exec')
++                exec(code, config)
+             del config['__file__']
+             namespace.update(config)
+         else:
+diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
+index cd8a585568045..2b4b64797191f 100644
+--- a/Documentation/virt/kvm/api.rst
++++ b/Documentation/virt/kvm/api.rst
+@@ -6398,3 +6398,63 @@ When enabled, KVM will disable paravirtual features provided to the
+ guest according to the bits in the KVM_CPUID_FEATURES CPUID leaf
+ (0x40000001). Otherwise, a guest may use the paravirtual features
+ regardless of what has actually been exposed through the CPUID leaf.
++
++9. Known KVM API problems
++=========================
++
++In some cases, KVM's API has some inconsistencies or common pitfalls
++that userspace need to be aware of.  This section details some of
++these issues.
++
++Most of them are architecture specific, so the section is split by
++architecture.
++
++9.1. x86
++--------
++
++``KVM_GET_SUPPORTED_CPUID`` issues
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++In general, ``KVM_GET_SUPPORTED_CPUID`` is designed so that it is possible
++to take its result and pass it directly to ``KVM_SET_CPUID2``.  This section
++documents some cases in which that requires some care.
++
++Local APIC features
++~~~~~~~~~~~~~~~~~~~
++
++CPU[EAX=1]:ECX[21] (X2APIC) is reported by ``KVM_GET_SUPPORTED_CPUID``,
++but it can only be enabled if ``KVM_CREATE_IRQCHIP`` or
++``KVM_ENABLE_CAP(KVM_CAP_IRQCHIP_SPLIT)`` are used to enable in-kernel emulation of
++the local APIC.
++
++The same is true for the ``KVM_FEATURE_PV_UNHALT`` paravirtualized feature.
++
++CPU[EAX=1]:ECX[24] (TSC_DEADLINE) is not reported by ``KVM_GET_SUPPORTED_CPUID``.
++It can be enabled if ``KVM_CAP_TSC_DEADLINE_TIMER`` is present and the kernel
++has enabled in-kernel emulation of the local APIC.
++
++CPU topology
++~~~~~~~~~~~~
++
++Several CPUID values include topology information for the host CPU:
++0x0b and 0x1f for Intel systems, 0x8000001e for AMD systems.  Different
++versions of KVM return different values for this information and userspace
++should not rely on it.  Currently they return all zeroes.
++
++If userspace wishes to set up a guest topology, it should be careful that
++the values of these three leaves differ for each CPU.  In particular,
++the APIC ID is found in EDX for all subleaves of 0x0b and 0x1f, and in EAX
++for 0x8000001e; the latter also encodes the core id and node id in bits
++7:0 of EBX and ECX respectively.
++
++Obsolete ioctls and capabilities
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++KVM_CAP_DISABLE_QUIRKS does not let userspace know which quirks are actually
++available.  Use ``KVM_CHECK_EXTENSION(KVM_CAP_DISABLE_QUIRKS2)`` instead if
++available.
++
++Ordering of KVM_GET_*/KVM_SET_* ioctls
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++TBD
+diff --git a/Makefile b/Makefile
+index 98fc6e7fd41df..68fd49d8d4363 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 163
++SUBLEVEL = 164
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
+index 13869b76b58cd..abd302e521c06 100644
+--- a/arch/arm64/include/asm/atomic_ll_sc.h
++++ b/arch/arm64/include/asm/atomic_ll_sc.h
+@@ -12,19 +12,6 @@
+ 
+ #include <linux/stringify.h>
+ 
+-#ifdef CONFIG_ARM64_LSE_ATOMICS
+-#define __LL_SC_FALLBACK(asm_ops)					\
+-"	b	3f\n"							\
+-"	.subsection	1\n"						\
+-"3:\n"									\
+-asm_ops "\n"								\
+-"	b	4f\n"							\
+-"	.previous\n"							\
+-"4:\n"
+-#else
+-#define __LL_SC_FALLBACK(asm_ops) asm_ops
+-#endif
+-
+ #ifndef CONFIG_CC_HAS_K_CONSTRAINT
+ #define K
+ #endif
+@@ -43,12 +30,11 @@ __ll_sc_atomic_##op(int i, atomic_t *v)					\
+ 	int result;							\
+ 									\
+ 	asm volatile("// atomic_" #op "\n"				\
+-	__LL_SC_FALLBACK(						\
+-"	prfm	pstl1strm, %2\n"					\
+-"1:	ldxr	%w0, %2\n"						\
+-"	" #asm_op "	%w0, %w0, %w3\n"				\
+-"	stxr	%w1, %w0, %2\n"						\
+-"	cbnz	%w1, 1b\n")						\
++	"	prfm	pstl1strm, %2\n"				\
++	"1:	ldxr	%w0, %2\n"					\
++	"	" #asm_op "	%w0, %w0, %w3\n"			\
++	"	stxr	%w1, %w0, %2\n"					\
++	"	cbnz	%w1, 1b\n"					\
+ 	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
+ 	: __stringify(constraint) "r" (i));				\
+ }
+@@ -61,13 +47,12 @@ __ll_sc_atomic_##op##_return##name(int i, atomic_t *v)			\
+ 	int result;							\
+ 									\
+ 	asm volatile("// atomic_" #op "_return" #name "\n"		\
+-	__LL_SC_FALLBACK(						\
+-"	prfm	pstl1strm, %2\n"					\
+-"1:	ld" #acq "xr	%w0, %2\n"					\
+-"	" #asm_op "	%w0, %w0, %w3\n"				\
+-"	st" #rel "xr	%w1, %w0, %2\n"					\
+-"	cbnz	%w1, 1b\n"						\
+-"	" #mb )								\
++	"	prfm	pstl1strm, %2\n"				\
++	"1:	ld" #acq "xr	%w0, %2\n"				\
++	"	" #asm_op "	%w0, %w0, %w3\n"			\
++	"	st" #rel "xr	%w1, %w0, %2\n"				\
++	"	cbnz	%w1, 1b\n"					\
++	"	" #mb							\
+ 	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
+ 	: __stringify(constraint) "r" (i)				\
+ 	: cl);								\
+@@ -83,13 +68,12 @@ __ll_sc_atomic_fetch_##op##name(int i, atomic_t *v)			\
+ 	int val, result;						\
+ 									\
+ 	asm volatile("// atomic_fetch_" #op #name "\n"			\
+-	__LL_SC_FALLBACK(						\
+-"	prfm	pstl1strm, %3\n"					\
+-"1:	ld" #acq "xr	%w0, %3\n"					\
+-"	" #asm_op "	%w1, %w0, %w4\n"				\
+-"	st" #rel "xr	%w2, %w1, %3\n"					\
+-"	cbnz	%w2, 1b\n"						\
+-"	" #mb )								\
++	"	prfm	pstl1strm, %3\n"				\
++	"1:	ld" #acq "xr	%w0, %3\n"				\
++	"	" #asm_op "	%w1, %w0, %w4\n"			\
++	"	st" #rel "xr	%w2, %w1, %3\n"				\
++	"	cbnz	%w2, 1b\n"					\
++	"	" #mb							\
+ 	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter)	\
+ 	: __stringify(constraint) "r" (i)				\
+ 	: cl);								\
+@@ -142,12 +126,11 @@ __ll_sc_atomic64_##op(s64 i, atomic64_t *v)				\
+ 	unsigned long tmp;						\
+ 									\
+ 	asm volatile("// atomic64_" #op "\n"				\
+-	__LL_SC_FALLBACK(						\
+-"	prfm	pstl1strm, %2\n"					\
+-"1:	ldxr	%0, %2\n"						\
+-"	" #asm_op "	%0, %0, %3\n"					\
+-"	stxr	%w1, %0, %2\n"						\
+-"	cbnz	%w1, 1b")						\
++	"	prfm	pstl1strm, %2\n"				\
++	"1:	ldxr	%0, %2\n"					\
++	"	" #asm_op "	%0, %0, %3\n"				\
++	"	stxr	%w1, %0, %2\n"					\
++	"	cbnz	%w1, 1b"					\
+ 	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
+ 	: __stringify(constraint) "r" (i));				\
+ }
+@@ -160,13 +143,12 @@ __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v)		\
+ 	unsigned long tmp;						\
+ 									\
+ 	asm volatile("// atomic64_" #op "_return" #name "\n"		\
+-	__LL_SC_FALLBACK(						\
+-"	prfm	pstl1strm, %2\n"					\
+-"1:	ld" #acq "xr	%0, %2\n"					\
+-"	" #asm_op "	%0, %0, %3\n"					\
+-"	st" #rel "xr	%w1, %0, %2\n"					\
+-"	cbnz	%w1, 1b\n"						\
+-"	" #mb )								\
++	"	prfm	pstl1strm, %2\n"				\
++	"1:	ld" #acq "xr	%0, %2\n"				\
++	"	" #asm_op "	%0, %0, %3\n"				\
++	"	st" #rel "xr	%w1, %0, %2\n"				\
++	"	cbnz	%w1, 1b\n"					\
++	"	" #mb							\
+ 	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
+ 	: __stringify(constraint) "r" (i)				\
+ 	: cl);								\
+@@ -176,19 +158,18 @@ __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v)		\
+ 
+ #define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\
+ static inline long							\
+-__ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v)		\
++__ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v)			\
+ {									\
+ 	s64 result, val;						\
+ 	unsigned long tmp;						\
+ 									\
+ 	asm volatile("// atomic64_fetch_" #op #name "\n"		\
+-	__LL_SC_FALLBACK(						\
+-"	prfm	pstl1strm, %3\n"					\
+-"1:	ld" #acq "xr	%0, %3\n"					\
+-"	" #asm_op "	%1, %0, %4\n"					\
+-"	st" #rel "xr	%w2, %1, %3\n"					\
+-"	cbnz	%w2, 1b\n"						\
+-"	" #mb )								\
++	"	prfm	pstl1strm, %3\n"				\
++	"1:	ld" #acq "xr	%0, %3\n"				\
++	"	" #asm_op "	%1, %0, %4\n"				\
++	"	st" #rel "xr	%w2, %1, %3\n"				\
++	"	cbnz	%w2, 1b\n"					\
++	"	" #mb							\
+ 	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter)	\
+ 	: __stringify(constraint) "r" (i)				\
+ 	: cl);								\
+@@ -240,15 +221,14 @@ __ll_sc_atomic64_dec_if_positive(atomic64_t *v)
+ 	unsigned long tmp;
+ 
+ 	asm volatile("// atomic64_dec_if_positive\n"
+-	__LL_SC_FALLBACK(
+-"	prfm	pstl1strm, %2\n"
+-"1:	ldxr	%0, %2\n"
+-"	subs	%0, %0, #1\n"
+-"	b.lt	2f\n"
+-"	stlxr	%w1, %0, %2\n"
+-"	cbnz	%w1, 1b\n"
+-"	dmb	ish\n"
+-"2:")
++	"	prfm	pstl1strm, %2\n"
++	"1:	ldxr	%0, %2\n"
++	"	subs	%0, %0, #1\n"
++	"	b.lt	2f\n"
++	"	stlxr	%w1, %0, %2\n"
++	"	cbnz	%w1, 1b\n"
++	"	dmb	ish\n"
++	"2:"
+ 	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
+ 	:
+ 	: "cc", "memory");
+@@ -274,7 +254,6 @@ __ll_sc__cmpxchg_case_##name##sz(volatile void *ptr,			\
+ 		old = (u##sz)old;					\
+ 									\
+ 	asm volatile(							\
+-	__LL_SC_FALLBACK(						\
+ 	"	prfm	pstl1strm, %[v]\n"				\
+ 	"1:	ld" #acq "xr" #sfx "\t%" #w "[oldval], %[v]\n"		\
+ 	"	eor	%" #w "[tmp], %" #w "[oldval], %" #w "[old]\n"	\
+@@ -282,7 +261,7 @@ __ll_sc__cmpxchg_case_##name##sz(volatile void *ptr,			\
+ 	"	st" #rel "xr" #sfx "\t%w[tmp], %" #w "[new], %[v]\n"	\
+ 	"	cbnz	%w[tmp], 1b\n"					\
+ 	"	" #mb "\n"						\
+-	"2:")								\
++	"2:"								\
+ 	: [tmp] "=&r" (tmp), [oldval] "=&r" (oldval),			\
+ 	  [v] "+Q" (*(u##sz *)ptr)					\
+ 	: [old] __stringify(constraint) "r" (old), [new] "r" (new)	\
+@@ -326,7 +305,6 @@ __ll_sc__cmpxchg_double##name(unsigned long old1,			\
+ 	unsigned long tmp, ret;						\
+ 									\
+ 	asm volatile("// __cmpxchg_double" #name "\n"			\
+-	__LL_SC_FALLBACK(						\
+ 	"	prfm	pstl1strm, %2\n"				\
+ 	"1:	ldxp	%0, %1, %2\n"					\
+ 	"	eor	%0, %0, %3\n"					\
+@@ -336,8 +314,8 @@ __ll_sc__cmpxchg_double##name(unsigned long old1,			\
+ 	"	st" #rel "xp	%w0, %5, %6, %2\n"			\
+ 	"	cbnz	%w0, 1b\n"					\
+ 	"	" #mb "\n"						\
+-	"2:")								\
+-	: "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr)	\
++	"2:"								\
++	: "=&r" (tmp), "=&r" (ret), "+Q" (*(__uint128_t *)ptr)		\
+ 	: "r" (old1), "r" (old2), "r" (new1), "r" (new2)		\
+ 	: cl);								\
+ 									\
+diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
+index da3280f639cd7..28e96118c1e5a 100644
+--- a/arch/arm64/include/asm/atomic_lse.h
++++ b/arch/arm64/include/asm/atomic_lse.h
+@@ -11,11 +11,11 @@
+ #define __ASM_ATOMIC_LSE_H
+ 
+ #define ATOMIC_OP(op, asm_op)						\
+-static inline void __lse_atomic_##op(int i, atomic_t *v)			\
++static inline void __lse_atomic_##op(int i, atomic_t *v)		\
+ {									\
+ 	asm volatile(							\
+ 	__LSE_PREAMBLE							\
+-"	" #asm_op "	%w[i], %[v]\n"					\
++	"	" #asm_op "	%w[i], %[v]\n"				\
+ 	: [i] "+r" (i), [v] "+Q" (v->counter)				\
+ 	: "r" (v));							\
+ }
+@@ -32,7 +32,7 @@ static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v)	\
+ {									\
+ 	asm volatile(							\
+ 	__LSE_PREAMBLE							\
+-"	" #asm_op #mb "	%w[i], %w[i], %[v]"				\
++	"	" #asm_op #mb "	%w[i], %w[i], %[v]"			\
+ 	: [i] "+r" (i), [v] "+Q" (v->counter)				\
+ 	: "r" (v)							\
+ 	: cl);								\
+@@ -130,7 +130,7 @@ static inline int __lse_atomic_sub_return##name(int i, atomic_t *v)	\
+ 	"	add	%w[i], %w[i], %w[tmp]"				\
+ 	: [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)	\
+ 	: "r" (v)							\
+-	: cl);							\
++	: cl);								\
+ 									\
+ 	return i;							\
+ }
+@@ -168,7 +168,7 @@ static inline void __lse_atomic64_##op(s64 i, atomic64_t *v)		\
+ {									\
+ 	asm volatile(							\
+ 	__LSE_PREAMBLE							\
+-"	" #asm_op "	%[i], %[v]\n"					\
++	"	" #asm_op "	%[i], %[v]\n"				\
+ 	: [i] "+r" (i), [v] "+Q" (v->counter)				\
+ 	: "r" (v));							\
+ }
+@@ -185,7 +185,7 @@ static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\
+ {									\
+ 	asm volatile(							\
+ 	__LSE_PREAMBLE							\
+-"	" #asm_op #mb "	%[i], %[i], %[v]"				\
++	"	" #asm_op #mb "	%[i], %[i], %[v]"			\
+ 	: [i] "+r" (i), [v] "+Q" (v->counter)				\
+ 	: "r" (v)							\
+ 	: cl);								\
+@@ -272,7 +272,7 @@ static inline void __lse_atomic64_sub(s64 i, atomic64_t *v)
+ }
+ 
+ #define ATOMIC64_OP_SUB_RETURN(name, mb, cl...)				\
+-static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v)	\
++static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v)\
+ {									\
+ 	unsigned long tmp;						\
+ 									\
+@@ -403,7 +403,7 @@ __lse__cmpxchg_double##name(unsigned long old1,				\
+ 	"	eor	%[old2], %[old2], %[oldval2]\n"			\
+ 	"	orr	%[old1], %[old1], %[old2]"			\
+ 	: [old1] "+&r" (x0), [old2] "+&r" (x1),				\
+-	  [v] "+Q" (*(unsigned long *)ptr)				\
++	  [v] "+Q" (*(__uint128_t *)ptr)				\
+ 	: [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4),		\
+ 	  [oldval1] "r" (oldval1), [oldval2] "r" (oldval2)		\
+ 	: cl);								\
+diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
+index 472122d731b0d..4bca3d10ab0ef 100644
+--- a/arch/arm64/include/asm/kvm_emulate.h
++++ b/arch/arm64/include/asm/kvm_emulate.h
+@@ -382,8 +382,26 @@ static __always_inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu)
+ 
+ static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
+ {
+-	if (kvm_vcpu_abt_iss1tw(vcpu))
+-		return true;
++	if (kvm_vcpu_abt_iss1tw(vcpu)) {
++		/*
++		 * Only a permission fault on a S1PTW should be
++		 * considered as a write. Otherwise, page tables baked
++		 * in a read-only memslot will result in an exception
++		 * being delivered in the guest.
++		 *
++		 * The drawback is that we end-up faulting twice if the
++		 * guest is using any of HW AF/DB: a translation fault
++		 * to map the page containing the PT (read only at
++		 * first), then a permission fault to allow the flags
++		 * to be set.
++		 */
++		switch (kvm_vcpu_trap_get_fault_type(vcpu)) {
++		case ESR_ELx_FSC_PERM:
++			return true;
++		default:
++			return false;
++		}
++	}
+ 
+ 	if (kvm_vcpu_trap_is_iabt(vcpu))
+ 		return false;
+diff --git a/arch/powerpc/include/asm/imc-pmu.h b/arch/powerpc/include/asm/imc-pmu.h
+index 4f897993b7107..699a88584ae16 100644
+--- a/arch/powerpc/include/asm/imc-pmu.h
++++ b/arch/powerpc/include/asm/imc-pmu.h
+@@ -137,7 +137,7 @@ struct imc_pmu {
+  * are inited.
+  */
+ struct imc_pmu_ref {
+-	struct mutex lock;
++	spinlock_t lock;
+ 	unsigned int id;
+ 	int refc;
+ };
+diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
+index e8074d7f2401b..e42c2fe3dd367 100644
+--- a/arch/powerpc/perf/imc-pmu.c
++++ b/arch/powerpc/perf/imc-pmu.c
+@@ -13,6 +13,7 @@
+ #include <asm/cputhreads.h>
+ #include <asm/smp.h>
+ #include <linux/string.h>
++#include <linux/spinlock.h>
+ 
+ /* Nest IMC data structures and variables */
+ 
+@@ -20,7 +21,7 @@
+  * Used to avoid races in counting the nest-pmu units during hotplug
+  * register and unregister
+  */
+-static DEFINE_MUTEX(nest_init_lock);
++static DEFINE_SPINLOCK(nest_init_lock);
+ static DEFINE_PER_CPU(struct imc_pmu_ref *, local_nest_imc_refc);
+ static struct imc_pmu **per_nest_pmu_arr;
+ static cpumask_t nest_imc_cpumask;
+@@ -49,7 +50,7 @@ static int trace_imc_mem_size;
+  * core and trace-imc
+  */
+ static struct imc_pmu_ref imc_global_refc = {
+-	.lock = __MUTEX_INITIALIZER(imc_global_refc.lock),
++	.lock = __SPIN_LOCK_INITIALIZER(imc_global_refc.lock),
+ 	.id = 0,
+ 	.refc = 0,
+ };
+@@ -393,7 +394,7 @@ static int ppc_nest_imc_cpu_offline(unsigned int cpu)
+ 				       get_hard_smp_processor_id(cpu));
+ 		/*
+ 		 * If this is the last cpu in this chip then, skip the reference
+-		 * count mutex lock and make the reference count on this chip zero.
++		 * count lock and make the reference count on this chip zero.
+ 		 */
+ 		ref = get_nest_pmu_ref(cpu);
+ 		if (!ref)
+@@ -455,15 +456,15 @@ static void nest_imc_counters_release(struct perf_event *event)
+ 	/*
+ 	 * See if we need to disable the nest PMU.
+ 	 * If no events are currently in use, then we have to take a
+-	 * mutex to ensure that we don't race with another task doing
++	 * lock to ensure that we don't race with another task doing
+ 	 * enable or disable the nest counters.
+ 	 */
+ 	ref = get_nest_pmu_ref(event->cpu);
+ 	if (!ref)
+ 		return;
+ 
+-	/* Take the mutex lock for this node and then decrement the reference count */
+-	mutex_lock(&ref->lock);
++	/* Take the lock for this node and then decrement the reference count */
++	spin_lock(&ref->lock);
+ 	if (ref->refc == 0) {
+ 		/*
+ 		 * The scenario where this is true is, when perf session is
+@@ -475,7 +476,7 @@ static void nest_imc_counters_release(struct perf_event *event)
+ 		 * an OPAL call to disable the engine in that node.
+ 		 *
+ 		 */
+-		mutex_unlock(&ref->lock);
++		spin_unlock(&ref->lock);
+ 		return;
+ 	}
+ 	ref->refc--;
+@@ -483,7 +484,7 @@ static void nest_imc_counters_release(struct perf_event *event)
+ 		rc = opal_imc_counters_stop(OPAL_IMC_COUNTERS_NEST,
+ 					    get_hard_smp_processor_id(event->cpu));
+ 		if (rc) {
+-			mutex_unlock(&ref->lock);
++			spin_unlock(&ref->lock);
+ 			pr_err("nest-imc: Unable to stop the counters for core %d\n", node_id);
+ 			return;
+ 		}
+@@ -491,7 +492,7 @@ static void nest_imc_counters_release(struct perf_event *event)
+ 		WARN(1, "nest-imc: Invalid event reference count\n");
+ 		ref->refc = 0;
+ 	}
+-	mutex_unlock(&ref->lock);
++	spin_unlock(&ref->lock);
+ }
+ 
+ static int nest_imc_event_init(struct perf_event *event)
+@@ -550,26 +551,25 @@ static int nest_imc_event_init(struct perf_event *event)
+ 
+ 	/*
+ 	 * Get the imc_pmu_ref struct for this node.
+-	 * Take the mutex lock and then increment the count of nest pmu events
+-	 * inited.
++	 * Take the lock and then increment the count of nest pmu events inited.
+ 	 */
+ 	ref = get_nest_pmu_ref(event->cpu);
+ 	if (!ref)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&ref->lock);
++	spin_lock(&ref->lock);
+ 	if (ref->refc == 0) {
+ 		rc = opal_imc_counters_start(OPAL_IMC_COUNTERS_NEST,
+ 					     get_hard_smp_processor_id(event->cpu));
+ 		if (rc) {
+-			mutex_unlock(&ref->lock);
++			spin_unlock(&ref->lock);
+ 			pr_err("nest-imc: Unable to start the counters for node %d\n",
+ 									node_id);
+ 			return rc;
+ 		}
+ 	}
+ 	++ref->refc;
+-	mutex_unlock(&ref->lock);
++	spin_unlock(&ref->lock);
+ 
+ 	event->destroy = nest_imc_counters_release;
+ 	return 0;
+@@ -605,9 +605,8 @@ static int core_imc_mem_init(int cpu, int size)
+ 		return -ENOMEM;
+ 	mem_info->vbase = page_address(page);
+ 
+-	/* Init the mutex */
+ 	core_imc_refc[core_id].id = core_id;
+-	mutex_init(&core_imc_refc[core_id].lock);
++	spin_lock_init(&core_imc_refc[core_id].lock);
+ 
+ 	rc = opal_imc_counters_init(OPAL_IMC_COUNTERS_CORE,
+ 				__pa((void *)mem_info->vbase),
+@@ -696,9 +695,8 @@ static int ppc_core_imc_cpu_offline(unsigned int cpu)
+ 		perf_pmu_migrate_context(&core_imc_pmu->pmu, cpu, ncpu);
+ 	} else {
+ 		/*
+-		 * If this is the last cpu in this core then, skip taking refernce
+-		 * count mutex lock for this core and directly zero "refc" for
+-		 * this core.
++		 * If this is the last cpu in this core then skip taking reference
++		 * count lock for this core and directly zero "refc" for this core.
+ 		 */
+ 		opal_imc_counters_stop(OPAL_IMC_COUNTERS_CORE,
+ 				       get_hard_smp_processor_id(cpu));
+@@ -713,11 +711,11 @@ static int ppc_core_imc_cpu_offline(unsigned int cpu)
+ 		 * last cpu in this core and core-imc event running
+ 		 * in this cpu.
+ 		 */
+-		mutex_lock(&imc_global_refc.lock);
++		spin_lock(&imc_global_refc.lock);
+ 		if (imc_global_refc.id == IMC_DOMAIN_CORE)
+ 			imc_global_refc.refc--;
+ 
+-		mutex_unlock(&imc_global_refc.lock);
++		spin_unlock(&imc_global_refc.lock);
+ 	}
+ 	return 0;
+ }
+@@ -732,7 +730,7 @@ static int core_imc_pmu_cpumask_init(void)
+ 
+ static void reset_global_refc(struct perf_event *event)
+ {
+-		mutex_lock(&imc_global_refc.lock);
++		spin_lock(&imc_global_refc.lock);
+ 		imc_global_refc.refc--;
+ 
+ 		/*
+@@ -744,7 +742,7 @@ static void reset_global_refc(struct perf_event *event)
+ 			imc_global_refc.refc = 0;
+ 			imc_global_refc.id = 0;
+ 		}
+-		mutex_unlock(&imc_global_refc.lock);
++		spin_unlock(&imc_global_refc.lock);
+ }
+ 
+ static void core_imc_counters_release(struct perf_event *event)
+@@ -757,17 +755,17 @@ static void core_imc_counters_release(struct perf_event *event)
+ 	/*
+ 	 * See if we need to disable the IMC PMU.
+ 	 * If no events are currently in use, then we have to take a
+-	 * mutex to ensure that we don't race with another task doing
++	 * lock to ensure that we don't race with another task doing
+ 	 * enable or disable the core counters.
+ 	 */
+ 	core_id = event->cpu / threads_per_core;
+ 
+-	/* Take the mutex lock and decrement the refernce count for this core */
++	/* Take the lock and decrement the refernce count for this core */
+ 	ref = &core_imc_refc[core_id];
+ 	if (!ref)
+ 		return;
+ 
+-	mutex_lock(&ref->lock);
++	spin_lock(&ref->lock);
+ 	if (ref->refc == 0) {
+ 		/*
+ 		 * The scenario where this is true is, when perf session is
+@@ -779,7 +777,7 @@ static void core_imc_counters_release(struct perf_event *event)
+ 		 * an OPAL call to disable the engine in that core.
+ 		 *
+ 		 */
+-		mutex_unlock(&ref->lock);
++		spin_unlock(&ref->lock);
+ 		return;
+ 	}
+ 	ref->refc--;
+@@ -787,7 +785,7 @@ static void core_imc_counters_release(struct perf_event *event)
+ 		rc = opal_imc_counters_stop(OPAL_IMC_COUNTERS_CORE,
+ 					    get_hard_smp_processor_id(event->cpu));
+ 		if (rc) {
+-			mutex_unlock(&ref->lock);
++			spin_unlock(&ref->lock);
+ 			pr_err("IMC: Unable to stop the counters for core %d\n", core_id);
+ 			return;
+ 		}
+@@ -795,7 +793,7 @@ static void core_imc_counters_release(struct perf_event *event)
+ 		WARN(1, "core-imc: Invalid event reference count\n");
+ 		ref->refc = 0;
+ 	}
+-	mutex_unlock(&ref->lock);
++	spin_unlock(&ref->lock);
+ 
+ 	reset_global_refc(event);
+ }
+@@ -833,7 +831,6 @@ static int core_imc_event_init(struct perf_event *event)
+ 	if ((!pcmi->vbase))
+ 		return -ENODEV;
+ 
+-	/* Get the core_imc mutex for this core */
+ 	ref = &core_imc_refc[core_id];
+ 	if (!ref)
+ 		return -EINVAL;
+@@ -841,22 +838,22 @@ static int core_imc_event_init(struct perf_event *event)
+ 	/*
+ 	 * Core pmu units are enabled only when it is used.
+ 	 * See if this is triggered for the first time.
+-	 * If yes, take the mutex lock and enable the core counters.
++	 * If yes, take the lock and enable the core counters.
+ 	 * If not, just increment the count in core_imc_refc struct.
+ 	 */
+-	mutex_lock(&ref->lock);
++	spin_lock(&ref->lock);
+ 	if (ref->refc == 0) {
+ 		rc = opal_imc_counters_start(OPAL_IMC_COUNTERS_CORE,
+ 					     get_hard_smp_processor_id(event->cpu));
+ 		if (rc) {
+-			mutex_unlock(&ref->lock);
++			spin_unlock(&ref->lock);
+ 			pr_err("core-imc: Unable to start the counters for core %d\n",
+ 									core_id);
+ 			return rc;
+ 		}
+ 	}
+ 	++ref->refc;
+-	mutex_unlock(&ref->lock);
++	spin_unlock(&ref->lock);
+ 
+ 	/*
+ 	 * Since the system can run either in accumulation or trace-mode
+@@ -867,7 +864,7 @@ static int core_imc_event_init(struct perf_event *event)
+ 	 * to know whether any other trace/thread imc
+ 	 * events are running.
+ 	 */
+-	mutex_lock(&imc_global_refc.lock);
++	spin_lock(&imc_global_refc.lock);
+ 	if (imc_global_refc.id == 0 || imc_global_refc.id == IMC_DOMAIN_CORE) {
+ 		/*
+ 		 * No other trace/thread imc events are running in
+@@ -876,10 +873,10 @@ static int core_imc_event_init(struct perf_event *event)
+ 		imc_global_refc.id = IMC_DOMAIN_CORE;
+ 		imc_global_refc.refc++;
+ 	} else {
+-		mutex_unlock(&imc_global_refc.lock);
++		spin_unlock(&imc_global_refc.lock);
+ 		return -EBUSY;
+ 	}
+-	mutex_unlock(&imc_global_refc.lock);
++	spin_unlock(&imc_global_refc.lock);
+ 
+ 	event->hw.event_base = (u64)pcmi->vbase + (config & IMC_EVENT_OFFSET_MASK);
+ 	event->destroy = core_imc_counters_release;
+@@ -951,10 +948,10 @@ static int ppc_thread_imc_cpu_offline(unsigned int cpu)
+ 	mtspr(SPRN_LDBAR, (mfspr(SPRN_LDBAR) & (~(1UL << 63))));
+ 
+ 	/* Reduce the refc if thread-imc event running on this cpu */
+-	mutex_lock(&imc_global_refc.lock);
++	spin_lock(&imc_global_refc.lock);
+ 	if (imc_global_refc.id == IMC_DOMAIN_THREAD)
+ 		imc_global_refc.refc--;
+-	mutex_unlock(&imc_global_refc.lock);
++	spin_unlock(&imc_global_refc.lock);
+ 
+ 	return 0;
+ }
+@@ -994,7 +991,7 @@ static int thread_imc_event_init(struct perf_event *event)
+ 	if (!target)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&imc_global_refc.lock);
++	spin_lock(&imc_global_refc.lock);
+ 	/*
+ 	 * Check if any other trace/core imc events are running in the
+ 	 * system, if not set the global id to thread-imc.
+@@ -1003,10 +1000,10 @@ static int thread_imc_event_init(struct perf_event *event)
+ 		imc_global_refc.id = IMC_DOMAIN_THREAD;
+ 		imc_global_refc.refc++;
+ 	} else {
+-		mutex_unlock(&imc_global_refc.lock);
++		spin_unlock(&imc_global_refc.lock);
+ 		return -EBUSY;
+ 	}
+-	mutex_unlock(&imc_global_refc.lock);
++	spin_unlock(&imc_global_refc.lock);
+ 
+ 	event->pmu->task_ctx_nr = perf_sw_context;
+ 	event->destroy = reset_global_refc;
+@@ -1128,25 +1125,25 @@ static int thread_imc_event_add(struct perf_event *event, int flags)
+ 	/*
+ 	 * imc pmus are enabled only when it is used.
+ 	 * See if this is triggered for the first time.
+-	 * If yes, take the mutex lock and enable the counters.
++	 * If yes, take the lock and enable the counters.
+ 	 * If not, just increment the count in ref count struct.
+ 	 */
+ 	ref = &core_imc_refc[core_id];
+ 	if (!ref)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&ref->lock);
++	spin_lock(&ref->lock);
+ 	if (ref->refc == 0) {
+ 		if (opal_imc_counters_start(OPAL_IMC_COUNTERS_CORE,
+ 		    get_hard_smp_processor_id(smp_processor_id()))) {
+-			mutex_unlock(&ref->lock);
++			spin_unlock(&ref->lock);
+ 			pr_err("thread-imc: Unable to start the counter\
+ 				for core %d\n", core_id);
+ 			return -EINVAL;
+ 		}
+ 	}
+ 	++ref->refc;
+-	mutex_unlock(&ref->lock);
++	spin_unlock(&ref->lock);
+ 	return 0;
+ }
+ 
+@@ -1163,12 +1160,12 @@ static void thread_imc_event_del(struct perf_event *event, int flags)
+ 		return;
+ 	}
+ 
+-	mutex_lock(&ref->lock);
++	spin_lock(&ref->lock);
+ 	ref->refc--;
+ 	if (ref->refc == 0) {
+ 		if (opal_imc_counters_stop(OPAL_IMC_COUNTERS_CORE,
+ 		    get_hard_smp_processor_id(smp_processor_id()))) {
+-			mutex_unlock(&ref->lock);
++			spin_unlock(&ref->lock);
+ 			pr_err("thread-imc: Unable to stop the counters\
+ 				for core %d\n", core_id);
+ 			return;
+@@ -1176,7 +1173,7 @@ static void thread_imc_event_del(struct perf_event *event, int flags)
+ 	} else if (ref->refc < 0) {
+ 		ref->refc = 0;
+ 	}
+-	mutex_unlock(&ref->lock);
++	spin_unlock(&ref->lock);
+ 
+ 	/* Set bit 0 of LDBAR to zero, to stop posting updates to memory */
+ 	mtspr(SPRN_LDBAR, (mfspr(SPRN_LDBAR) & (~(1UL << 63))));
+@@ -1217,9 +1214,8 @@ static int trace_imc_mem_alloc(int cpu_id, int size)
+ 		}
+ 	}
+ 
+-	/* Init the mutex, if not already */
+ 	trace_imc_refc[core_id].id = core_id;
+-	mutex_init(&trace_imc_refc[core_id].lock);
++	spin_lock_init(&trace_imc_refc[core_id].lock);
+ 
+ 	mtspr(SPRN_LDBAR, 0);
+ 	return 0;
+@@ -1239,10 +1235,10 @@ static int ppc_trace_imc_cpu_offline(unsigned int cpu)
+ 	 * Reduce the refc if any trace-imc event running
+ 	 * on this cpu.
+ 	 */
+-	mutex_lock(&imc_global_refc.lock);
++	spin_lock(&imc_global_refc.lock);
+ 	if (imc_global_refc.id == IMC_DOMAIN_TRACE)
+ 		imc_global_refc.refc--;
+-	mutex_unlock(&imc_global_refc.lock);
++	spin_unlock(&imc_global_refc.lock);
+ 
+ 	return 0;
+ }
+@@ -1364,17 +1360,17 @@ static int trace_imc_event_add(struct perf_event *event, int flags)
+ 	}
+ 
+ 	mtspr(SPRN_LDBAR, ldbar_value);
+-	mutex_lock(&ref->lock);
++	spin_lock(&ref->lock);
+ 	if (ref->refc == 0) {
+ 		if (opal_imc_counters_start(OPAL_IMC_COUNTERS_TRACE,
+ 				get_hard_smp_processor_id(smp_processor_id()))) {
+-			mutex_unlock(&ref->lock);
++			spin_unlock(&ref->lock);
+ 			pr_err("trace-imc: Unable to start the counters for core %d\n", core_id);
+ 			return -EINVAL;
+ 		}
+ 	}
+ 	++ref->refc;
+-	mutex_unlock(&ref->lock);
++	spin_unlock(&ref->lock);
+ 	return 0;
+ }
+ 
+@@ -1407,19 +1403,19 @@ static void trace_imc_event_del(struct perf_event *event, int flags)
+ 		return;
+ 	}
+ 
+-	mutex_lock(&ref->lock);
++	spin_lock(&ref->lock);
+ 	ref->refc--;
+ 	if (ref->refc == 0) {
+ 		if (opal_imc_counters_stop(OPAL_IMC_COUNTERS_TRACE,
+ 				get_hard_smp_processor_id(smp_processor_id()))) {
+-			mutex_unlock(&ref->lock);
++			spin_unlock(&ref->lock);
+ 			pr_err("trace-imc: Unable to stop the counters for core %d\n", core_id);
+ 			return;
+ 		}
+ 	} else if (ref->refc < 0) {
+ 		ref->refc = 0;
+ 	}
+-	mutex_unlock(&ref->lock);
++	spin_unlock(&ref->lock);
+ 
+ 	trace_imc_event_stop(event, flags);
+ }
+@@ -1441,7 +1437,7 @@ static int trace_imc_event_init(struct perf_event *event)
+ 	 * no other thread is running any core/thread imc
+ 	 * events
+ 	 */
+-	mutex_lock(&imc_global_refc.lock);
++	spin_lock(&imc_global_refc.lock);
+ 	if (imc_global_refc.id == 0 || imc_global_refc.id == IMC_DOMAIN_TRACE) {
+ 		/*
+ 		 * No core/thread imc events are running in the
+@@ -1450,10 +1446,10 @@ static int trace_imc_event_init(struct perf_event *event)
+ 		imc_global_refc.id = IMC_DOMAIN_TRACE;
+ 		imc_global_refc.refc++;
+ 	} else {
+-		mutex_unlock(&imc_global_refc.lock);
++		spin_unlock(&imc_global_refc.lock);
+ 		return -EBUSY;
+ 	}
+-	mutex_unlock(&imc_global_refc.lock);
++	spin_unlock(&imc_global_refc.lock);
+ 
+ 	event->hw.idx = -1;
+ 
+@@ -1525,10 +1521,10 @@ static int init_nest_pmu_ref(void)
+ 	i = 0;
+ 	for_each_node(nid) {
+ 		/*
+-		 * Mutex lock to avoid races while tracking the number of
++		 * Take the lock to avoid races while tracking the number of
+ 		 * sessions using the chip's nest pmu units.
+ 		 */
+-		mutex_init(&nest_imc_refc[i].lock);
++		spin_lock_init(&nest_imc_refc[i].lock);
+ 
+ 		/*
+ 		 * Loop to init the "id" with the node_id. Variable "i" initialized to
+@@ -1625,7 +1621,7 @@ static void imc_common_mem_free(struct imc_pmu *pmu_ptr)
+ static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr)
+ {
+ 	if (pmu_ptr->domain == IMC_DOMAIN_NEST) {
+-		mutex_lock(&nest_init_lock);
++		spin_lock(&nest_init_lock);
+ 		if (nest_pmus == 1) {
+ 			cpuhp_remove_state(CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE);
+ 			kfree(nest_imc_refc);
+@@ -1635,7 +1631,7 @@ static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr)
+ 
+ 		if (nest_pmus > 0)
+ 			nest_pmus--;
+-		mutex_unlock(&nest_init_lock);
++		spin_unlock(&nest_init_lock);
+ 	}
+ 
+ 	/* Free core_imc memory */
+@@ -1792,11 +1788,11 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
+ 		* rest. To handle the cpuhotplug callback unregister, we track
+ 		* the number of nest pmus in "nest_pmus".
+ 		*/
+-		mutex_lock(&nest_init_lock);
++		spin_lock(&nest_init_lock);
+ 		if (nest_pmus == 0) {
+ 			ret = init_nest_pmu_ref();
+ 			if (ret) {
+-				mutex_unlock(&nest_init_lock);
++				spin_unlock(&nest_init_lock);
+ 				kfree(per_nest_pmu_arr);
+ 				per_nest_pmu_arr = NULL;
+ 				goto err_free_mem;
+@@ -1804,7 +1800,7 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
+ 			/* Register for cpu hotplug notification. */
+ 			ret = nest_pmu_cpumask_init();
+ 			if (ret) {
+-				mutex_unlock(&nest_init_lock);
++				spin_unlock(&nest_init_lock);
+ 				kfree(nest_imc_refc);
+ 				kfree(per_nest_pmu_arr);
+ 				per_nest_pmu_arr = NULL;
+@@ -1812,7 +1808,7 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
+ 			}
+ 		}
+ 		nest_pmus++;
+-		mutex_unlock(&nest_init_lock);
++		spin_unlock(&nest_init_lock);
+ 		break;
+ 	case IMC_DOMAIN_CORE:
+ 		ret = core_imc_pmu_cpumask_init();
+diff --git a/arch/s390/include/asm/cpu_mf.h b/arch/s390/include/asm/cpu_mf.h
+index 0d90cbeb89b43..a0914bc6c9bdd 100644
+--- a/arch/s390/include/asm/cpu_mf.h
++++ b/arch/s390/include/asm/cpu_mf.h
+@@ -128,19 +128,21 @@ struct hws_combined_entry {
+ 	struct hws_diag_entry	diag;	/* Diagnostic-sampling data entry */
+ } __packed;
+ 
+-struct hws_trailer_entry {
+-	union {
+-		struct {
+-			unsigned int f:1;	/* 0 - Block Full Indicator   */
+-			unsigned int a:1;	/* 1 - Alert request control  */
+-			unsigned int t:1;	/* 2 - Timestamp format	      */
+-			unsigned int :29;	/* 3 - 31: Reserved	      */
+-			unsigned int bsdes:16;	/* 32-47: size of basic SDE   */
+-			unsigned int dsdes:16;	/* 48-63: size of diagnostic SDE */
+-		};
+-		unsigned long long flags;	/* 0 - 63: All indicators     */
++union hws_trailer_header {
++	struct {
++		unsigned int f:1;	/* 0 - Block Full Indicator   */
++		unsigned int a:1;	/* 1 - Alert request control  */
++		unsigned int t:1;	/* 2 - Timestamp format	      */
++		unsigned int :29;	/* 3 - 31: Reserved	      */
++		unsigned int bsdes:16;	/* 32-47: size of basic SDE   */
++		unsigned int dsdes:16;	/* 48-63: size of diagnostic SDE */
++		unsigned long long overflow; /* 64 - Overflow Count   */
+ 	};
+-	unsigned long long overflow;	 /* 64 - sample Overflow count	      */
++	__uint128_t val;
++};
++
++struct hws_trailer_entry {
++	union hws_trailer_header header; /* 0 - 15 Flags + Overflow Count     */
+ 	unsigned char timestamp[16];	 /* 16 - 31 timestamp		      */
+ 	unsigned long long reserved1;	 /* 32 -Reserved		      */
+ 	unsigned long long reserved2;	 /*				      */
+@@ -287,14 +289,11 @@ static inline unsigned long sample_rate_to_freq(struct hws_qsi_info_block *qsi,
+ 	return USEC_PER_SEC * qsi->cpu_speed / rate;
+ }
+ 
+-#define SDB_TE_ALERT_REQ_MASK	0x4000000000000000UL
+-#define SDB_TE_BUFFER_FULL_MASK 0x8000000000000000UL
+-
+ /* Return TOD timestamp contained in an trailer entry */
+ static inline unsigned long long trailer_timestamp(struct hws_trailer_entry *te)
+ {
+ 	/* TOD in STCKE format */
+-	if (te->t)
++	if (te->header.t)
+ 		return *((unsigned long long *) &te->timestamp[1]);
+ 
+ 	/* TOD in STCK format */
+diff --git a/arch/s390/include/asm/percpu.h b/arch/s390/include/asm/percpu.h
+index 918f0ba4f4d20..5e26e2c4641b4 100644
+--- a/arch/s390/include/asm/percpu.h
++++ b/arch/s390/include/asm/percpu.h
+@@ -31,7 +31,7 @@
+ 	pcp_op_T__ *ptr__;						\
+ 	preempt_disable_notrace();					\
+ 	ptr__ = raw_cpu_ptr(&(pcp));					\
+-	prev__ = *ptr__;						\
++	prev__ = READ_ONCE(*ptr__);					\
+ 	do {								\
+ 		old__ = prev__;						\
+ 		new__ = old__ op (val);					\
+diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c
+index 53da174754d97..bf0596749ebd3 100644
+--- a/arch/s390/kernel/machine_kexec_file.c
++++ b/arch/s390/kernel/machine_kexec_file.c
+@@ -185,8 +185,6 @@ static int kexec_file_add_ipl_report(struct kimage *image,
+ 
+ 	data->memsz = ALIGN(data->memsz, PAGE_SIZE);
+ 	buf.mem = data->memsz;
+-	if (image->type == KEXEC_TYPE_CRASH)
+-		buf.mem += crashk_res.start;
+ 
+ 	ptr = (void *)ipl_cert_list_addr;
+ 	end = ptr + ipl_cert_list_size;
+@@ -223,6 +221,9 @@ static int kexec_file_add_ipl_report(struct kimage *image,
+ 		data->kernel_buf + offsetof(struct lowcore, ipl_parmblock_ptr);
+ 	*lc_ipl_parmblock_ptr = (__u32)buf.mem;
+ 
++	if (image->type == KEXEC_TYPE_CRASH)
++		buf.mem += crashk_res.start;
++
+ 	ret = kexec_add_buffer(&buf);
+ out:
+ 	return ret;
+diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
+index 19cd7b961c45c..bcd31e0b4edb3 100644
+--- a/arch/s390/kernel/perf_cpum_sf.c
++++ b/arch/s390/kernel/perf_cpum_sf.c
+@@ -163,14 +163,15 @@ static void free_sampling_buffer(struct sf_buffer *sfb)
+ 
+ static int alloc_sample_data_block(unsigned long *sdbt, gfp_t gfp_flags)
+ {
+-	unsigned long sdb, *trailer;
++	struct hws_trailer_entry *te;
++	unsigned long sdb;
+ 
+ 	/* Allocate and initialize sample-data-block */
+ 	sdb = get_zeroed_page(gfp_flags);
+ 	if (!sdb)
+ 		return -ENOMEM;
+-	trailer = trailer_entry_ptr(sdb);
+-	*trailer = SDB_TE_ALERT_REQ_MASK;
++	te = (struct hws_trailer_entry *)trailer_entry_ptr(sdb);
++	te->header.a = 1;
+ 
+ 	/* Link SDB into the sample-data-block-table */
+ 	*sdbt = sdb;
+@@ -1206,7 +1207,7 @@ static void hw_collect_samples(struct perf_event *event, unsigned long *sdbt,
+ 					    "%s: Found unknown"
+ 					    " sampling data entry: te->f %i"
+ 					    " basic.def %#4x (%p)\n", __func__,
+-					    te->f, sample->def, sample);
++					    te->header.f, sample->def, sample);
+ 			/* Sample slot is not yet written or other record.
+ 			 *
+ 			 * This condition can occur if the buffer was reused
+@@ -1217,7 +1218,7 @@ static void hw_collect_samples(struct perf_event *event, unsigned long *sdbt,
+ 			 * that are not full.  Stop processing if the first
+ 			 * invalid format was detected.
+ 			 */
+-			if (!te->f)
++			if (!te->header.f)
+ 				break;
+ 		}
+ 
+@@ -1227,6 +1228,16 @@ static void hw_collect_samples(struct perf_event *event, unsigned long *sdbt,
+ 	}
+ }
+ 
++static inline __uint128_t __cdsg(__uint128_t *ptr, __uint128_t old, __uint128_t new)
++{
++	asm volatile(
++		"	cdsg	%[old],%[new],%[ptr]\n"
++		: [old] "+d" (old), [ptr] "+QS" (*ptr)
++		: [new] "d" (new)
++		: "memory", "cc");
++	return old;
++}
++
+ /* hw_perf_event_update() - Process sampling buffer
+  * @event:	The perf event
+  * @flush_all:	Flag to also flush partially filled sample-data-blocks
+@@ -1243,10 +1254,11 @@ static void hw_collect_samples(struct perf_event *event, unsigned long *sdbt,
+  */
+ static void hw_perf_event_update(struct perf_event *event, int flush_all)
+ {
++	unsigned long long event_overflow, sampl_overflow, num_sdb;
++	union hws_trailer_header old, prev, new;
+ 	struct hw_perf_event *hwc = &event->hw;
+ 	struct hws_trailer_entry *te;
+ 	unsigned long *sdbt;
+-	unsigned long long event_overflow, sampl_overflow, num_sdb, te_flags;
+ 	int done;
+ 
+ 	/*
+@@ -1266,25 +1278,25 @@ static void hw_perf_event_update(struct perf_event *event, int flush_all)
+ 		te = (struct hws_trailer_entry *) trailer_entry_ptr(*sdbt);
+ 
+ 		/* Leave loop if no more work to do (block full indicator) */
+-		if (!te->f) {
++		if (!te->header.f) {
+ 			done = 1;
+ 			if (!flush_all)
+ 				break;
+ 		}
+ 
+ 		/* Check the sample overflow count */
+-		if (te->overflow)
++		if (te->header.overflow)
+ 			/* Account sample overflows and, if a particular limit
+ 			 * is reached, extend the sampling buffer.
+ 			 * For details, see sfb_account_overflows().
+ 			 */
+-			sampl_overflow += te->overflow;
++			sampl_overflow += te->header.overflow;
+ 
+ 		/* Timestamps are valid for full sample-data-blocks only */
+ 		debug_sprintf_event(sfdbg, 6, "%s: sdbt %#lx "
+ 				    "overflow %llu timestamp %#llx\n",
+-				    __func__, (unsigned long)sdbt, te->overflow,
+-				    (te->f) ? trailer_timestamp(te) : 0ULL);
++				    __func__, (unsigned long)sdbt, te->header.overflow,
++				    (te->header.f) ? trailer_timestamp(te) : 0ULL);
+ 
+ 		/* Collect all samples from a single sample-data-block and
+ 		 * flag if an (perf) event overflow happened.  If so, the PMU
+@@ -1294,12 +1306,16 @@ static void hw_perf_event_update(struct perf_event *event, int flush_all)
+ 		num_sdb++;
+ 
+ 		/* Reset trailer (using compare-double-and-swap) */
++		/* READ_ONCE() 16 byte header */
++		prev.val = __cdsg(&te->header.val, 0, 0);
+ 		do {
+-			te_flags = te->flags & ~SDB_TE_BUFFER_FULL_MASK;
+-			te_flags |= SDB_TE_ALERT_REQ_MASK;
+-		} while (!cmpxchg_double(&te->flags, &te->overflow,
+-					 te->flags, te->overflow,
+-					 te_flags, 0ULL));
++			old.val = prev.val;
++			new.val = prev.val;
++			new.f = 0;
++			new.a = 1;
++			new.overflow = 0;
++			prev.val = __cdsg(&te->header.val, old.val, new.val);
++		} while (prev.val != old.val);
+ 
+ 		/* Advance to next sample-data-block */
+ 		sdbt++;
+@@ -1384,7 +1400,7 @@ static void aux_output_end(struct perf_output_handle *handle)
+ 	range_scan = AUX_SDB_NUM_ALERT(aux);
+ 	for (i = 0, idx = aux->head; i < range_scan; i++, idx++) {
+ 		te = aux_sdb_trailer(aux, idx);
+-		if (!(te->flags & SDB_TE_BUFFER_FULL_MASK))
++		if (!te->header.f)
+ 			break;
+ 	}
+ 	/* i is num of SDBs which are full */
+@@ -1392,7 +1408,7 @@ static void aux_output_end(struct perf_output_handle *handle)
+ 
+ 	/* Remove alert indicators in the buffer */
+ 	te = aux_sdb_trailer(aux, aux->alert_mark);
+-	te->flags &= ~SDB_TE_ALERT_REQ_MASK;
++	te->header.a = 0;
+ 
+ 	debug_sprintf_event(sfdbg, 6, "%s: SDBs %ld range %ld head %ld\n",
+ 			    __func__, i, range_scan, aux->head);
+@@ -1437,9 +1453,9 @@ static int aux_output_begin(struct perf_output_handle *handle,
+ 		idx = aux->empty_mark + 1;
+ 		for (i = 0; i < range_scan; i++, idx++) {
+ 			te = aux_sdb_trailer(aux, idx);
+-			te->flags &= ~(SDB_TE_BUFFER_FULL_MASK |
+-				       SDB_TE_ALERT_REQ_MASK);
+-			te->overflow = 0;
++			te->header.f = 0;
++			te->header.a = 0;
++			te->header.overflow = 0;
+ 		}
+ 		/* Save the position of empty SDBs */
+ 		aux->empty_mark = aux->head + range - 1;
+@@ -1448,7 +1464,7 @@ static int aux_output_begin(struct perf_output_handle *handle,
+ 	/* Set alert indicator */
+ 	aux->alert_mark = aux->head + range/2 - 1;
+ 	te = aux_sdb_trailer(aux, aux->alert_mark);
+-	te->flags = te->flags | SDB_TE_ALERT_REQ_MASK;
++	te->header.a = 1;
+ 
+ 	/* Reset hardware buffer head */
+ 	head = AUX_SDB_INDEX(aux, aux->head);
+@@ -1475,14 +1491,17 @@ static int aux_output_begin(struct perf_output_handle *handle,
+ static bool aux_set_alert(struct aux_buffer *aux, unsigned long alert_index,
+ 			  unsigned long long *overflow)
+ {
+-	unsigned long long orig_overflow, orig_flags, new_flags;
++	union hws_trailer_header old, prev, new;
+ 	struct hws_trailer_entry *te;
+ 
+ 	te = aux_sdb_trailer(aux, alert_index);
++	/* READ_ONCE() 16 byte header */
++	prev.val = __cdsg(&te->header.val, 0, 0);
+ 	do {
+-		orig_flags = te->flags;
+-		*overflow = orig_overflow = te->overflow;
+-		if (orig_flags & SDB_TE_BUFFER_FULL_MASK) {
++		old.val = prev.val;
++		new.val = prev.val;
++		*overflow = old.overflow;
++		if (old.f) {
+ 			/*
+ 			 * SDB is already set by hardware.
+ 			 * Abort and try to set somewhere
+@@ -1490,10 +1509,10 @@ static bool aux_set_alert(struct aux_buffer *aux, unsigned long alert_index,
+ 			 */
+ 			return false;
+ 		}
+-		new_flags = orig_flags | SDB_TE_ALERT_REQ_MASK;
+-	} while (!cmpxchg_double(&te->flags, &te->overflow,
+-				 orig_flags, orig_overflow,
+-				 new_flags, 0ULL));
++		new.a = 1;
++		new.overflow = 0;
++		prev.val = __cdsg(&te->header.val, old.val, new.val);
++	} while (prev.val != old.val);
+ 	return true;
+ }
+ 
+@@ -1522,8 +1541,9 @@ static bool aux_set_alert(struct aux_buffer *aux, unsigned long alert_index,
+ static bool aux_reset_buffer(struct aux_buffer *aux, unsigned long range,
+ 			     unsigned long long *overflow)
+ {
+-	unsigned long long orig_overflow, orig_flags, new_flags;
+ 	unsigned long i, range_scan, idx, idx_old;
++	union hws_trailer_header old, prev, new;
++	unsigned long long orig_overflow;
+ 	struct hws_trailer_entry *te;
+ 
+ 	debug_sprintf_event(sfdbg, 6, "%s: range %ld head %ld alert %ld "
+@@ -1554,17 +1574,20 @@ static bool aux_reset_buffer(struct aux_buffer *aux, unsigned long range,
+ 	idx_old = idx = aux->empty_mark + 1;
+ 	for (i = 0; i < range_scan; i++, idx++) {
+ 		te = aux_sdb_trailer(aux, idx);
++		/* READ_ONCE() 16 byte header */
++		prev.val = __cdsg(&te->header.val, 0, 0);
+ 		do {
+-			orig_flags = te->flags;
+-			orig_overflow = te->overflow;
+-			new_flags = orig_flags & ~SDB_TE_BUFFER_FULL_MASK;
++			old.val = prev.val;
++			new.val = prev.val;
++			orig_overflow = old.overflow;
++			new.f = 0;
++			new.overflow = 0;
+ 			if (idx == aux->alert_mark)
+-				new_flags |= SDB_TE_ALERT_REQ_MASK;
++				new.a = 1;
+ 			else
+-				new_flags &= ~SDB_TE_ALERT_REQ_MASK;
+-		} while (!cmpxchg_double(&te->flags, &te->overflow,
+-					 orig_flags, orig_overflow,
+-					 new_flags, 0ULL));
++				new.a = 0;
++			prev.val = __cdsg(&te->header.val, old.val, new.val);
++		} while (prev.val != old.val);
+ 		*overflow += orig_overflow;
+ 	}
+ 
+diff --git a/arch/x86/boot/bioscall.S b/arch/x86/boot/bioscall.S
+index 5521ea12f44e0..aa9b964575843 100644
+--- a/arch/x86/boot/bioscall.S
++++ b/arch/x86/boot/bioscall.S
+@@ -32,7 +32,7 @@ intcall:
+ 	movw	%dx, %si
+ 	movw	%sp, %di
+ 	movw	$11, %cx
+-	rep; movsd
++	rep; movsl
+ 
+ 	/* Pop full state from the stack */
+ 	popal
+@@ -67,7 +67,7 @@ intcall:
+ 	jz	4f
+ 	movw	%sp, %si
+ 	movw	$11, %cx
+-	rep; movsd
++	rep; movsl
+ 4:	addw	$44, %sp
+ 
+ 	/* Restore state and return */
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index 5a59e3315b340..ff26de11b3f15 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -577,8 +577,10 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
+ 	/*
+ 	 * Ensure the task's closid and rmid are written before determining if
+ 	 * the task is current that will decide if it will be interrupted.
++	 * This pairs with the full barrier between the rq->curr update and
++	 * resctrl_sched_in() during context switch.
+ 	 */
+-	barrier();
++	smp_mb();
+ 
+ 	/*
+ 	 * By now, the task's closid and rmid are set. If the task is current
+@@ -2313,19 +2315,23 @@ static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to,
+ 			t->closid = to->closid;
+ 			t->rmid = to->mon.rmid;
+ 
+-#ifdef CONFIG_SMP
+ 			/*
+-			 * This is safe on x86 w/o barriers as the ordering
+-			 * of writing to task_cpu() and t->on_cpu is
+-			 * reverse to the reading here. The detection is
+-			 * inaccurate as tasks might move or schedule
+-			 * before the smp function call takes place. In
+-			 * such a case the function call is pointless, but
++			 * Order the closid/rmid stores above before the loads
++			 * in task_curr(). This pairs with the full barrier
++			 * between the rq->curr update and resctrl_sched_in()
++			 * during context switch.
++			 */
++			smp_mb();
++
++			/*
++			 * If the task is on a CPU, set the CPU in the mask.
++			 * The detection is inaccurate as tasks might move or
++			 * schedule before the smp function call takes place.
++			 * In such a case the function call is pointless, but
+ 			 * there is no other side effect.
+ 			 */
+-			if (mask && t->on_cpu)
++			if (IS_ENABLED(CONFIG_SMP) && mask && task_curr(t))
+ 				cpumask_set_cpu(task_cpu(t), mask);
+-#endif
+ 		}
+ 	}
+ 	read_unlock(&tasklist_lock);
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 06a776fdb90cf..de4b171cb76bc 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -511,16 +511,22 @@ struct kvm_cpuid_array {
+ 	int nent;
+ };
+ 
++static struct kvm_cpuid_entry2 *get_next_cpuid(struct kvm_cpuid_array *array)
++{
++	if (array->nent >= array->maxnent)
++		return NULL;
++
++	return &array->entries[array->nent++];
++}
++
+ static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_array *array,
+ 					      u32 function, u32 index)
+ {
+-	struct kvm_cpuid_entry2 *entry;
++	struct kvm_cpuid_entry2 *entry = get_next_cpuid(array);
+ 
+-	if (array->nent >= array->maxnent)
++	if (!entry)
+ 		return NULL;
+ 
+-	entry = &array->entries[array->nent++];
+-
+ 	entry->function = function;
+ 	entry->index = index;
+ 	entry->flags = 0;
+@@ -698,22 +704,13 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 		entry->edx = edx.full;
+ 		break;
+ 	}
+-	/*
+-	 * Per Intel's SDM, the 0x1f is a superset of 0xb,
+-	 * thus they can be handled by common code.
+-	 */
+ 	case 0x1f:
+ 	case 0xb:
+ 		/*
+-		 * Populate entries until the level type (ECX[15:8]) of the
+-		 * previous entry is zero.  Note, CPUID EAX.{0x1f,0xb}.0 is
+-		 * the starting entry, filled by the primary do_host_cpuid().
++		 * No topology; a valid topology is indicated by the presence
++		 * of subleaf 1.
+ 		 */
+-		for (i = 1; entry->ecx & 0xff00; ++i) {
+-			entry = do_host_cpuid(array, function, i);
+-			if (!entry)
+-				goto out;
+-		}
++		entry->eax = entry->ebx = entry->ecx = 0;
+ 		break;
+ 	case 0xd:
+ 		entry->eax &= supported_xcr0;
+@@ -866,6 +863,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 		entry->ebx = entry->ecx = entry->edx = 0;
+ 		break;
+ 	case 0x8000001e:
++		/* Do not return host topology information.  */
++		entry->eax = entry->ebx = entry->ecx = 0;
++		entry->edx = 0; /* reserved */
+ 		break;
+ 	/* Support memory encryption cpuid if host supports it */
+ 	case 0x8000001F:
+diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
+index 044dcdd723a70..7d69b740b9f93 100644
+--- a/drivers/bus/mhi/core/pm.c
++++ b/drivers/bus/mhi/core/pm.c
+@@ -298,7 +298,8 @@ int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl)
+ 		read_lock_irq(&mhi_chan->lock);
+ 
+ 		/* Only ring DB if ring is not empty */
+-		if (tre_ring->base && tre_ring->wp  != tre_ring->rp)
++		if (tre_ring->base && tre_ring->wp  != tre_ring->rp &&
++		    mhi_chan->ch_state == MHI_CH_STATE_ENABLED)
+ 			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+ 		read_unlock_irq(&mhi_chan->lock);
+ 	}
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index 36e8d619e3348..72592e35836b3 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -17,6 +17,7 @@
+ 
+ static u32 share_count_nand;
+ static u32 share_count_media;
++static u32 share_count_usb;
+ 
+ static const char * const pll_ref_sels[] = { "osc_24m", "dummy", "dummy", "dummy", };
+ static const char * const audio_pll1_bypass_sels[] = {"audio_pll1", "audio_pll1_ref_sel", };
+@@ -362,7 +363,7 @@ static const char * const imx8mp_media_mipi_phy1_ref_sels[] = {"osc_24m", "sys_p
+ 							       "clk_ext2", "audio_pll2_out",
+ 							       "video_pll1_out", };
+ 
+-static const char * const imx8mp_media_disp1_pix_sels[] = {"osc_24m", "video_pll1_out", "audio_pll2_out",
++static const char * const imx8mp_media_disp_pix_sels[] = {"osc_24m", "video_pll1_out", "audio_pll2_out",
+ 							   "audio_pll1_out", "sys_pll1_800m",
+ 							   "sys_pll2_1000m", "sys_pll3_out", "clk_ext4", };
+ 
+@@ -411,6 +412,11 @@ static const char * const imx8mp_sai7_sels[] = {"osc_24m", "audio_pll1_out", "au
+ 
+ static const char * const imx8mp_dram_core_sels[] = {"dram_pll_out", "dram_alt_root", };
+ 
++static const char * const imx8mp_clkout_sels[] = {"audio_pll1_out", "audio_pll2_out", "video_pll1_out",
++						  "dummy", "dummy", "gpu_pll_out", "vpu_pll_out",
++						  "arm_pll_out", "sys_pll1", "sys_pll2", "sys_pll3",
++						  "dummy", "dummy", "osc_24m", "dummy", "osc_32k"};
++
+ static struct clk_hw **hws;
+ static struct clk_hw_onecell_data *clk_hw_data;
+ 
+@@ -532,6 +538,15 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MP_SYS_PLL2_500M] = imx_clk_hw_fixed_factor("sys_pll2_500m", "sys_pll2_500m_cg", 1, 2);
+ 	hws[IMX8MP_SYS_PLL2_1000M] = imx_clk_hw_fixed_factor("sys_pll2_1000m", "sys_pll2_out", 1, 1);
+ 
++	hws[IMX8MP_CLK_CLKOUT1_SEL] = imx_clk_hw_mux2("clkout1_sel", anatop_base + 0x128, 4, 4,
++						      imx8mp_clkout_sels, ARRAY_SIZE(imx8mp_clkout_sels));
++	hws[IMX8MP_CLK_CLKOUT1_DIV] = imx_clk_hw_divider("clkout1_div", "clkout1_sel", anatop_base + 0x128, 0, 4);
++	hws[IMX8MP_CLK_CLKOUT1] = imx_clk_hw_gate("clkout1", "clkout1_div", anatop_base + 0x128, 8);
++	hws[IMX8MP_CLK_CLKOUT2_SEL] = imx_clk_hw_mux2("clkout2_sel", anatop_base + 0x128, 20, 4,
++						      imx8mp_clkout_sels, ARRAY_SIZE(imx8mp_clkout_sels));
++	hws[IMX8MP_CLK_CLKOUT2_DIV] = imx_clk_hw_divider("clkout2_div", "clkout2_sel", anatop_base + 0x128, 16, 4);
++	hws[IMX8MP_CLK_CLKOUT2] = imx_clk_hw_gate("clkout2", "clkout2_div", anatop_base + 0x128, 24);
++
+ 	hws[IMX8MP_CLK_A53_DIV] = imx8m_clk_hw_composite_core("arm_a53_div", imx8mp_a53_sels, ccm_base + 0x8000);
+ 	hws[IMX8MP_CLK_A53_SRC] = hws[IMX8MP_CLK_A53_DIV];
+ 	hws[IMX8MP_CLK_A53_CG] = hws[IMX8MP_CLK_A53_DIV];
+@@ -566,6 +581,7 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MP_CLK_AHB] = imx8m_clk_hw_composite_bus_critical("ahb_root", imx8mp_ahb_sels, ccm_base + 0x9000);
+ 	hws[IMX8MP_CLK_AUDIO_AHB] = imx8m_clk_hw_composite_bus("audio_ahb", imx8mp_audio_ahb_sels, ccm_base + 0x9100);
+ 	hws[IMX8MP_CLK_MIPI_DSI_ESC_RX] = imx8m_clk_hw_composite_bus("mipi_dsi_esc_rx", imx8mp_mipi_dsi_esc_rx_sels, ccm_base + 0x9200);
++	hws[IMX8MP_CLK_MEDIA_DISP2_PIX] = imx8m_clk_hw_composite("media_disp2_pix", imx8mp_media_disp_pix_sels, ccm_base + 0x9300);
+ 
+ 	hws[IMX8MP_CLK_IPG_ROOT] = imx_clk_hw_divider2("ipg_root", "ahb_root", ccm_base + 0x9080, 0, 1);
+ 	hws[IMX8MP_CLK_IPG_AUDIO_ROOT] = imx_clk_hw_divider2("ipg_audio_root", "audio_ahb", ccm_base + 0x9180, 0, 1);
+@@ -630,7 +646,7 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MP_CLK_USDHC3] = imx8m_clk_hw_composite("usdhc3", imx8mp_usdhc3_sels, ccm_base + 0xbc80);
+ 	hws[IMX8MP_CLK_MEDIA_CAM1_PIX] = imx8m_clk_hw_composite("media_cam1_pix", imx8mp_media_cam1_pix_sels, ccm_base + 0xbd00);
+ 	hws[IMX8MP_CLK_MEDIA_MIPI_PHY1_REF] = imx8m_clk_hw_composite("media_mipi_phy1_ref", imx8mp_media_mipi_phy1_ref_sels, ccm_base + 0xbd80);
+-	hws[IMX8MP_CLK_MEDIA_DISP1_PIX] = imx8m_clk_hw_composite("media_disp1_pix", imx8mp_media_disp1_pix_sels, ccm_base + 0xbe00);
++	hws[IMX8MP_CLK_MEDIA_DISP1_PIX] = imx8m_clk_hw_composite("media_disp1_pix", imx8mp_media_disp_pix_sels, ccm_base + 0xbe00);
+ 	hws[IMX8MP_CLK_MEDIA_CAM2_PIX] = imx8m_clk_hw_composite("media_cam2_pix", imx8mp_media_cam2_pix_sels, ccm_base + 0xbe80);
+ 	hws[IMX8MP_CLK_MEDIA_LDB] = imx8m_clk_hw_composite("media_ldb", imx8mp_media_ldb_sels, ccm_base + 0xbf00);
+ 	hws[IMX8MP_CLK_MEMREPAIR] = imx8m_clk_hw_composite_critical("mem_repair", imx8mp_memrepair_sels, ccm_base + 0xbf80);
+@@ -691,7 +707,8 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MP_CLK_UART2_ROOT] = imx_clk_hw_gate4("uart2_root_clk", "uart2", ccm_base + 0x44a0, 0);
+ 	hws[IMX8MP_CLK_UART3_ROOT] = imx_clk_hw_gate4("uart3_root_clk", "uart3", ccm_base + 0x44b0, 0);
+ 	hws[IMX8MP_CLK_UART4_ROOT] = imx_clk_hw_gate4("uart4_root_clk", "uart4", ccm_base + 0x44c0, 0);
+-	hws[IMX8MP_CLK_USB_ROOT] = imx_clk_hw_gate4("usb_root_clk", "hsio_axi", ccm_base + 0x44d0, 0);
++	hws[IMX8MP_CLK_USB_ROOT] = imx_clk_hw_gate2_shared2("usb_root_clk", "hsio_axi", ccm_base + 0x44d0, 0, &share_count_usb);
++	hws[IMX8MP_CLK_USB_SUSP] = imx_clk_hw_gate2_shared2("usb_suspend_clk", "osc_32k", ccm_base + 0x44d0, 0, &share_count_usb);
+ 	hws[IMX8MP_CLK_USB_PHY_ROOT] = imx_clk_hw_gate4("usb_phy_root_clk", "usb_phy_ref", ccm_base + 0x44f0, 0);
+ 	hws[IMX8MP_CLK_USDHC1_ROOT] = imx_clk_hw_gate4("usdhc1_root_clk", "usdhc1", ccm_base + 0x4510, 0);
+ 	hws[IMX8MP_CLK_USDHC2_ROOT] = imx_clk_hw_gate4("usdhc2_root_clk", "usdhc2", ccm_base + 0x4520, 0);
+diff --git a/drivers/edac/edac_device.c b/drivers/edac/edac_device.c
+index 8c4d947fb8486..8220ce5b87ca0 100644
+--- a/drivers/edac/edac_device.c
++++ b/drivers/edac/edac_device.c
+@@ -424,17 +424,16 @@ static void edac_device_workq_teardown(struct edac_device_ctl_info *edac_dev)
+  *	Then restart the workq on the new delay
+  */
+ void edac_device_reset_delay_period(struct edac_device_ctl_info *edac_dev,
+-					unsigned long value)
++				    unsigned long msec)
+ {
+-	unsigned long jiffs = msecs_to_jiffies(value);
+-
+-	if (value == 1000)
+-		jiffs = round_jiffies_relative(value);
+-
+-	edac_dev->poll_msec = value;
+-	edac_dev->delay	    = jiffs;
++	edac_dev->poll_msec = msec;
++	edac_dev->delay	    = msecs_to_jiffies(msec);
+ 
+-	edac_mod_work(&edac_dev->work, jiffs);
++	/* See comment in edac_device_workq_setup() above */
++	if (edac_dev->poll_msec == 1000)
++		edac_mod_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
++	else
++		edac_mod_work(&edac_dev->work, edac_dev->delay);
+ }
+ 
+ int edac_device_alloc_index(void)
+diff --git a/drivers/edac/edac_module.h b/drivers/edac/edac_module.h
+index aa1f91688eb8e..841d238bc3f18 100644
+--- a/drivers/edac/edac_module.h
++++ b/drivers/edac/edac_module.h
+@@ -56,7 +56,7 @@ bool edac_stop_work(struct delayed_work *work);
+ bool edac_mod_work(struct delayed_work *work, unsigned long delay);
+ 
+ extern void edac_device_reset_delay_period(struct edac_device_ctl_info
+-					   *edac_dev, unsigned long value);
++					   *edac_dev, unsigned long msec);
+ extern void edac_mc_reset_delay_period(unsigned long value);
+ 
+ extern void *edac_align_ptr(void **p, unsigned size, int n_elems);
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index ba03f5a4b30ce..a2765d668856e 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -385,8 +385,8 @@ static int __init efisubsys_init(void)
+ 	efi_kobj = kobject_create_and_add("efi", firmware_kobj);
+ 	if (!efi_kobj) {
+ 		pr_err("efi: Firmware registration failed.\n");
+-		destroy_workqueue(efi_rts_wq);
+-		return -ENOMEM;
++		error = -ENOMEM;
++		goto err_destroy_wq;
+ 	}
+ 
+ 	if (efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE |
+@@ -429,7 +429,10 @@ err_unregister:
+ 		generic_ops_unregister();
+ err_put:
+ 	kobject_put(efi_kobj);
+-	destroy_workqueue(efi_rts_wq);
++err_destroy_wq:
++	if (efi_rts_wq)
++		destroy_workqueue(efi_rts_wq);
++
+ 	return error;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+index c3775f79525a7..4656e707fe27d 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+@@ -28,11 +28,9 @@ enum {
+ 	ADRENO_FW_MAX,
+ };
+ 
+-enum adreno_quirks {
+-	ADRENO_QUIRK_TWO_PASS_USE_WFI = 1,
+-	ADRENO_QUIRK_FAULT_DETECT_MASK = 2,
+-	ADRENO_QUIRK_LMLOADKILL_DISABLE = 3,
+-};
++#define ADRENO_QUIRK_TWO_PASS_USE_WFI		BIT(0)
++#define ADRENO_QUIRK_FAULT_DETECT_MASK		BIT(1)
++#define ADRENO_QUIRK_LMLOADKILL_DISABLE		BIT(2)
+ 
+ struct adreno_rev {
+ 	uint8_t  core;
+@@ -62,7 +60,7 @@ struct adreno_info {
+ 	const char *name;
+ 	const char *fw[ADRENO_FW_MAX];
+ 	uint32_t gmem;
+-	enum adreno_quirks quirks;
++	u64 quirks;
+ 	struct msm_gpu *(*init)(struct drm_device *dev);
+ 	const char *zapfw;
+ 	u32 inactive_period;
+diff --git a/drivers/gpu/drm/msm/dp/dp_aux.c b/drivers/gpu/drm/msm/dp/dp_aux.c
+index 19b35ae3e9272..a73a2b28a70a6 100644
+--- a/drivers/gpu/drm/msm/dp/dp_aux.c
++++ b/drivers/gpu/drm/msm/dp/dp_aux.c
+@@ -423,6 +423,10 @@ void dp_aux_isr(struct drm_dp_aux *dp_aux)
+ 
+ 	aux->isr = dp_catalog_aux_get_irq(aux->catalog);
+ 
++	/* no interrupts pending, return immediately */
++	if (!aux->isr)
++		return;
++
+ 	if (!aux->cmd_busy)
+ 		return;
+ 
+diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+index 33b8ebab178a1..36efa273155df 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
++++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+@@ -279,10 +279,18 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
+ 		drm_gem_object_release(obj);
+ 		return ret;
+ 	}
+-	drm_gem_object_put(obj);
+ 
+ 	rc->res_handle = qobj->hw_res_handle; /* similiar to a VM address */
+ 	rc->bo_handle = handle;
++
++	/*
++	 * The handle owns the reference now.  But we must drop our
++	 * remaining reference *after* we no longer need to dereference
++	 * the obj.  Otherwise userspace could guess the handle and
++	 * race closing it from another thread.
++	 */
++	drm_gem_object_put(obj);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 310ab24d003a0..ce822347f7470 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -85,6 +85,10 @@
+ #define ACPI_DEVFLAG_ATSDIS             0x10000000
+ 
+ #define LOOP_TIMEOUT	2000000
++
++#define IVRS_GET_SBDF_ID(seg, bus, dev, fd)	(((seg & 0xffff) << 16) | ((bus & 0xff) << 8) \
++						 | ((dev & 0x1f) << 3) | (fn & 0x7))
++
+ /*
+  * ACPI table definitions
+  *
+@@ -3046,24 +3050,32 @@ static int __init parse_amd_iommu_options(char *str)
+ 
+ static int __init parse_ivrs_ioapic(char *str)
+ {
+-	unsigned int bus, dev, fn;
+-	int ret, id, i;
+-	u16 devid;
++	u32 seg = 0, bus, dev, fn;
++	int id, i;
++	u32 devid;
+ 
+-	ret = sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn);
++	if (sscanf(str, "=%d@%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
++	    sscanf(str, "=%d@%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5)
++		goto found;
+ 
+-	if (ret != 4) {
+-		pr_err("Invalid command line: ivrs_ioapic%s\n", str);
+-		return 1;
++	if (sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
++	    sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5) {
++		pr_warn("ivrs_ioapic%s option format deprecated; use ivrs_ioapic=%d@%04x:%02x:%02x.%d instead\n",
++			str, id, seg, bus, dev, fn);
++		goto found;
+ 	}
+ 
++	pr_err("Invalid command line: ivrs_ioapic%s\n", str);
++	return 1;
++
++found:
+ 	if (early_ioapic_map_size == EARLY_MAP_SIZE) {
+ 		pr_err("Early IOAPIC map overflow - ignoring ivrs_ioapic%s\n",
+ 			str);
+ 		return 1;
+ 	}
+ 
+-	devid = ((bus & 0xff) << 8) | ((dev & 0x1f) << 3) | (fn & 0x7);
++	devid = IVRS_GET_SBDF_ID(seg, bus, dev, fn);
+ 
+ 	cmdline_maps			= true;
+ 	i				= early_ioapic_map_size++;
+@@ -3076,24 +3088,32 @@ static int __init parse_ivrs_ioapic(char *str)
+ 
+ static int __init parse_ivrs_hpet(char *str)
+ {
+-	unsigned int bus, dev, fn;
+-	int ret, id, i;
+-	u16 devid;
++	u32 seg = 0, bus, dev, fn;
++	int id, i;
++	u32 devid;
+ 
+-	ret = sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn);
++	if (sscanf(str, "=%d@%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
++	    sscanf(str, "=%d@%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5)
++		goto found;
+ 
+-	if (ret != 4) {
+-		pr_err("Invalid command line: ivrs_hpet%s\n", str);
+-		return 1;
++	if (sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
++	    sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5) {
++		pr_warn("ivrs_hpet%s option format deprecated; use ivrs_hpet=%d@%04x:%02x:%02x.%d instead\n",
++			str, id, seg, bus, dev, fn);
++		goto found;
+ 	}
+ 
++	pr_err("Invalid command line: ivrs_hpet%s\n", str);
++	return 1;
++
++found:
+ 	if (early_hpet_map_size == EARLY_MAP_SIZE) {
+ 		pr_err("Early HPET map overflow - ignoring ivrs_hpet%s\n",
+ 			str);
+ 		return 1;
+ 	}
+ 
+-	devid = ((bus & 0xff) << 8) | ((dev & 0x1f) << 3) | (fn & 0x7);
++	devid = IVRS_GET_SBDF_ID(seg, bus, dev, fn);
+ 
+ 	cmdline_maps			= true;
+ 	i				= early_hpet_map_size++;
+@@ -3106,17 +3126,37 @@ static int __init parse_ivrs_hpet(char *str)
+ 
+ static int __init parse_ivrs_acpihid(char *str)
+ {
+-	u32 bus, dev, fn;
+-	char *hid, *uid, *p;
++	u32 seg = 0, bus, dev, fn;
++	char *hid, *uid, *p, *addr;
+ 	char acpiid[ACPIHID_UID_LEN + ACPIHID_HID_LEN] = {0};
+-	int ret, i;
++	int i;
+ 
+-	ret = sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid);
+-	if (ret != 4) {
+-		pr_err("Invalid command line: ivrs_acpihid(%s)\n", str);
+-		return 1;
++	addr = strchr(str, '@');
++	if (!addr) {
++		if (sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid) == 4 ||
++		    sscanf(str, "[%x:%x:%x.%x]=%s", &seg, &bus, &dev, &fn, acpiid) == 5) {
++			pr_warn("ivrs_acpihid%s option format deprecated; use ivrs_acpihid=%s@%04x:%02x:%02x.%d instead\n",
++				str, acpiid, seg, bus, dev, fn);
++			goto found;
++		}
++		goto not_found;
+ 	}
+ 
++	/* We have the '@', make it the terminator to get just the acpiid */
++	*addr++ = 0;
++
++	if (sscanf(str, "=%s", acpiid) != 1)
++		goto not_found;
++
++	if (sscanf(addr, "%x:%x.%x", &bus, &dev, &fn) == 3 ||
++	    sscanf(addr, "%x:%x:%x.%x", &seg, &bus, &dev, &fn) == 4)
++		goto found;
++
++not_found:
++	pr_err("Invalid command line: ivrs_acpihid%s\n", str);
++	return 1;
++
++found:
+ 	p = acpiid;
+ 	hid = strsep(&p, ":");
+ 	uid = p;
+@@ -3136,8 +3176,7 @@ static int __init parse_ivrs_acpihid(char *str)
+ 	i = early_acpihid_map_size++;
+ 	memcpy(early_acpihid_map[i].hid, hid, strlen(hid));
+ 	memcpy(early_acpihid_map[i].uid, uid, strlen(uid));
+-	early_acpihid_map[i].devid =
+-		((bus & 0xff) << 8) | ((dev & 0x1f) << 3) | (fn & 0x7);
++	early_acpihid_map[i].devid = IVRS_GET_SBDF_ID(seg, bus, dev, fn);
+ 	early_acpihid_map[i].cmd_line	= true;
+ 
+ 	return 1;
+diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
+index 82ddfe9170d4d..2abbdd71d8d99 100644
+--- a/drivers/iommu/mtk_iommu_v1.c
++++ b/drivers/iommu/mtk_iommu_v1.c
+@@ -618,18 +618,34 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ 	ret = iommu_device_sysfs_add(&data->iommu, &pdev->dev, NULL,
+ 				     dev_name(&pdev->dev));
+ 	if (ret)
+-		return ret;
++		goto out_clk_unprepare;
+ 
+ 	iommu_device_set_ops(&data->iommu, &mtk_iommu_ops);
+ 
+ 	ret = iommu_device_register(&data->iommu);
+ 	if (ret)
+-		return ret;
++		goto out_sysfs_remove;
+ 
+-	if (!iommu_present(&platform_bus_type))
+-		bus_set_iommu(&platform_bus_type,  &mtk_iommu_ops);
++	if (!iommu_present(&platform_bus_type)) {
++		ret = bus_set_iommu(&platform_bus_type,  &mtk_iommu_ops);
++		if (ret)
++			goto out_dev_unreg;
++	}
+ 
+-	return component_master_add_with_match(dev, &mtk_iommu_com_ops, match);
++	ret = component_master_add_with_match(dev, &mtk_iommu_com_ops, match);
++	if (ret)
++		goto out_bus_set_null;
++	return ret;
++
++out_bus_set_null:
++	bus_set_iommu(&platform_bus_type, NULL);
++out_dev_unreg:
++	iommu_device_unregister(&data->iommu);
++out_sysfs_remove:
++	iommu_device_sysfs_remove(&data->iommu);
++out_clk_unprepare:
++	clk_disable_unprepare(data->bclk);
++	return ret;
+ }
+ 
+ static int mtk_iommu_remove(struct platform_device *pdev)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+index fc389eecdd2b8..b0413904b798c 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+@@ -851,9 +851,11 @@ static struct pci_dev *ixgbe_get_first_secondary_devfn(unsigned int devfn)
+ 	rp_pdev = pci_get_domain_bus_and_slot(0, 0, devfn);
+ 	if (rp_pdev && rp_pdev->subordinate) {
+ 		bus = rp_pdev->subordinate->number;
++		pci_dev_put(rp_pdev);
+ 		return pci_get_domain_bus_and_slot(0, bus, 0);
+ 	}
+ 
++	pci_dev_put(rp_pdev);
+ 	return NULL;
+ }
+ 
+@@ -870,6 +872,7 @@ static bool ixgbe_x550em_a_has_mii(struct ixgbe_hw *hw)
+ 	struct ixgbe_adapter *adapter = hw->back;
+ 	struct pci_dev *pdev = adapter->pdev;
+ 	struct pci_dev *func0_pdev;
++	bool has_mii = false;
+ 
+ 	/* For the C3000 family of SoCs (x550em_a) the internal ixgbe devices
+ 	 * are always downstream of root ports @ 0000:00:16.0 & 0000:00:17.0
+@@ -880,15 +883,16 @@ static bool ixgbe_x550em_a_has_mii(struct ixgbe_hw *hw)
+ 	func0_pdev = ixgbe_get_first_secondary_devfn(PCI_DEVFN(0x16, 0));
+ 	if (func0_pdev) {
+ 		if (func0_pdev == pdev)
+-			return true;
+-		else
+-			return false;
++			has_mii = true;
++		goto out;
+ 	}
+ 	func0_pdev = ixgbe_get_first_secondary_devfn(PCI_DEVFN(0x17, 0));
+ 	if (func0_pdev == pdev)
+-		return true;
++		has_mii = true;
+ 
+-	return false;
++out:
++	pci_dev_put(func0_pdev);
++	return has_mii;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index fc27a40202c6d..c0a0a31272cc2 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -145,6 +145,16 @@ int cgx_get_cgxid(void *cgxd)
+ 	return cgx->cgx_id;
+ }
+ 
++u8 cgx_lmac_get_p2x(int cgx_id, int lmac_id)
++{
++	struct cgx *cgx_dev = cgx_get_pdata(cgx_id);
++	u64 cfg;
++
++	cfg = cgx_read(cgx_dev, lmac_id, CGXX_CMRX_CFG);
++
++	return (cfg & CMR_P2X_SEL_MASK) >> CMR_P2X_SEL_SHIFT;
++}
++
+ /* Ensure the required lock for event queue(where asynchronous events are
+  * posted) is acquired before calling this API. Else an asynchronous event(with
+  * latest link status) can reach the destination before this function returns
+@@ -340,9 +350,9 @@ int cgx_lmac_rx_tx_enable(void *cgxd, int lmac_id, bool enable)
+ 
+ 	cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_CFG);
+ 	if (enable)
+-		cfg |= CMR_EN | DATA_PKT_RX_EN | DATA_PKT_TX_EN;
++		cfg |= DATA_PKT_RX_EN | DATA_PKT_TX_EN;
+ 	else
+-		cfg &= ~(CMR_EN | DATA_PKT_RX_EN | DATA_PKT_TX_EN);
++		cfg &= ~(DATA_PKT_RX_EN | DATA_PKT_TX_EN);
+ 	cgx_write(cgx, lmac_id, CGXX_CMRX_CFG, cfg);
+ 	return 0;
+ }
+@@ -814,8 +824,7 @@ static int cgx_lmac_verify_fwi_version(struct cgx *cgx)
+ 	minor_ver = FIELD_GET(RESP_MINOR_VER, resp);
+ 	dev_dbg(dev, "Firmware command interface version = %d.%d\n",
+ 		major_ver, minor_ver);
+-	if (major_ver != CGX_FIRMWARE_MAJOR_VER ||
+-	    minor_ver != CGX_FIRMWARE_MINOR_VER)
++	if (major_ver != CGX_FIRMWARE_MAJOR_VER)
+ 		return -EIO;
+ 	else
+ 		return 0;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
+index 27ca3291682bc..e176a6c654ef2 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
+@@ -27,7 +27,10 @@
+ 
+ /* Registers */
+ #define CGXX_CMRX_CFG			0x00
+-#define CMR_EN				BIT_ULL(55)
++#define CMR_P2X_SEL_MASK		GENMASK_ULL(61, 59)
++#define CMR_P2X_SEL_SHIFT		59ULL
++#define CMR_P2X_SEL_NIX0		1ULL
++#define CMR_P2X_SEL_NIX1		2ULL
+ #define DATA_PKT_TX_EN			BIT_ULL(53)
+ #define DATA_PKT_RX_EN			BIT_ULL(54)
+ #define CGX_LMAC_TYPE_SHIFT		40
+@@ -142,5 +145,6 @@ int cgx_lmac_get_pause_frm(void *cgxd, int lmac_id,
+ int cgx_lmac_set_pause_frm(void *cgxd, int lmac_id,
+ 			   u8 tx_pause, u8 rx_pause);
+ void cgx_lmac_ptp_config(void *cgxd, int lmac_id, bool enable);
++u8 cgx_lmac_get_p2x(int cgx_id, int lmac_id);
+ 
+ #endif /* CGX_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index c26652436c53a..acbc67074f59c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -316,31 +316,36 @@ static void rvu_update_rsrc_map(struct rvu *rvu, struct rvu_pfvf *pfvf,
+ 
+ 	block->fn_map[lf] = attach ? pcifunc : 0;
+ 
+-	switch (block->type) {
+-	case BLKTYPE_NPA:
++	switch (block->addr) {
++	case BLKADDR_NPA:
+ 		pfvf->npalf = attach ? true : false;
+ 		num_lfs = pfvf->npalf;
+ 		break;
+-	case BLKTYPE_NIX:
++	case BLKADDR_NIX0:
++	case BLKADDR_NIX1:
+ 		pfvf->nixlf = attach ? true : false;
+ 		num_lfs = pfvf->nixlf;
+ 		break;
+-	case BLKTYPE_SSO:
++	case BLKADDR_SSO:
+ 		attach ? pfvf->sso++ : pfvf->sso--;
+ 		num_lfs = pfvf->sso;
+ 		break;
+-	case BLKTYPE_SSOW:
++	case BLKADDR_SSOW:
+ 		attach ? pfvf->ssow++ : pfvf->ssow--;
+ 		num_lfs = pfvf->ssow;
+ 		break;
+-	case BLKTYPE_TIM:
++	case BLKADDR_TIM:
+ 		attach ? pfvf->timlfs++ : pfvf->timlfs--;
+ 		num_lfs = pfvf->timlfs;
+ 		break;
+-	case BLKTYPE_CPT:
++	case BLKADDR_CPT0:
+ 		attach ? pfvf->cptlfs++ : pfvf->cptlfs--;
+ 		num_lfs = pfvf->cptlfs;
+ 		break;
++	case BLKADDR_CPT1:
++		attach ? pfvf->cpt1_lfs++ : pfvf->cpt1_lfs--;
++		num_lfs = pfvf->cpt1_lfs;
++		break;
+ 	}
+ 
+ 	reg = is_pf ? block->pf_lfcnt_reg : block->vf_lfcnt_reg;
+@@ -1035,7 +1040,30 @@ int rvu_mbox_handler_ready(struct rvu *rvu, struct msg_req *req,
+ /* Get current count of a RVU block's LF/slots
+  * provisioned to a given RVU func.
+  */
+-static u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blktype)
++u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blkaddr)
++{
++	switch (blkaddr) {
++	case BLKADDR_NPA:
++		return pfvf->npalf ? 1 : 0;
++	case BLKADDR_NIX0:
++	case BLKADDR_NIX1:
++		return pfvf->nixlf ? 1 : 0;
++	case BLKADDR_SSO:
++		return pfvf->sso;
++	case BLKADDR_SSOW:
++		return pfvf->ssow;
++	case BLKADDR_TIM:
++		return pfvf->timlfs;
++	case BLKADDR_CPT0:
++		return pfvf->cptlfs;
++	case BLKADDR_CPT1:
++		return pfvf->cpt1_lfs;
++	}
++	return 0;
++}
++
++/* Return true if LFs of block type are attached to pcifunc */
++static bool is_blktype_attached(struct rvu_pfvf *pfvf, int blktype)
+ {
+ 	switch (blktype) {
+ 	case BLKTYPE_NPA:
+@@ -1043,15 +1071,16 @@ static u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blktype)
+ 	case BLKTYPE_NIX:
+ 		return pfvf->nixlf ? 1 : 0;
+ 	case BLKTYPE_SSO:
+-		return pfvf->sso;
++		return !!pfvf->sso;
+ 	case BLKTYPE_SSOW:
+-		return pfvf->ssow;
++		return !!pfvf->ssow;
+ 	case BLKTYPE_TIM:
+-		return pfvf->timlfs;
++		return !!pfvf->timlfs;
+ 	case BLKTYPE_CPT:
+-		return pfvf->cptlfs;
++		return pfvf->cptlfs || pfvf->cpt1_lfs;
+ 	}
+-	return 0;
++
++	return false;
+ }
+ 
+ bool is_pffunc_map_valid(struct rvu *rvu, u16 pcifunc, int blktype)
+@@ -1064,7 +1093,7 @@ bool is_pffunc_map_valid(struct rvu *rvu, u16 pcifunc, int blktype)
+ 	pfvf = rvu_get_pfvf(rvu, pcifunc);
+ 
+ 	/* Check if this PFFUNC has a LF of type blktype attached */
+-	if (!rvu_get_rsrc_mapcount(pfvf, blktype))
++	if (!is_blktype_attached(pfvf, blktype))
+ 		return false;
+ 
+ 	return true;
+@@ -1105,7 +1134,7 @@ static void rvu_detach_block(struct rvu *rvu, int pcifunc, int blktype)
+ 
+ 	block = &hw->block[blkaddr];
+ 
+-	num_lfs = rvu_get_rsrc_mapcount(pfvf, block->type);
++	num_lfs = rvu_get_rsrc_mapcount(pfvf, block->addr);
+ 	if (!num_lfs)
+ 		return;
+ 
+@@ -1179,6 +1208,58 @@ int rvu_mbox_handler_detach_resources(struct rvu *rvu,
+ 	return rvu_detach_rsrcs(rvu, detach, detach->hdr.pcifunc);
+ }
+ 
++static int rvu_get_nix_blkaddr(struct rvu *rvu, u16 pcifunc)
++{
++	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
++	int blkaddr = BLKADDR_NIX0, vf;
++	struct rvu_pfvf *pf;
++
++	/* All CGX mapped PFs are set with assigned NIX block during init */
++	if (is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc))) {
++		pf = rvu_get_pfvf(rvu, pcifunc & ~RVU_PFVF_FUNC_MASK);
++		blkaddr = pf->nix_blkaddr;
++	} else if (is_afvf(pcifunc)) {
++		vf = pcifunc - 1;
++		/* Assign NIX based on VF number. All even numbered VFs get
++		 * NIX0 and odd numbered gets NIX1
++		 */
++		blkaddr = (vf & 1) ? BLKADDR_NIX1 : BLKADDR_NIX0;
++		/* NIX1 is not present on all silicons */
++		if (!is_block_implemented(rvu->hw, BLKADDR_NIX1))
++			blkaddr = BLKADDR_NIX0;
++	}
++
++	switch (blkaddr) {
++	case BLKADDR_NIX1:
++		pfvf->nix_blkaddr = BLKADDR_NIX1;
++		break;
++	case BLKADDR_NIX0:
++	default:
++		pfvf->nix_blkaddr = BLKADDR_NIX0;
++		break;
++	}
++
++	return pfvf->nix_blkaddr;
++}
++
++static int rvu_get_attach_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc)
++{
++	int blkaddr;
++
++	switch (blktype) {
++	case BLKTYPE_NIX:
++		blkaddr = rvu_get_nix_blkaddr(rvu, pcifunc);
++		break;
++	default:
++		return rvu_get_blkaddr(rvu, blktype, 0);
++	};
++
++	if (is_block_implemented(rvu->hw, blkaddr))
++		return blkaddr;
++
++	return -ENODEV;
++}
++
+ static void rvu_attach_block(struct rvu *rvu, int pcifunc,
+ 			     int blktype, int num_lfs)
+ {
+@@ -1192,7 +1273,7 @@ static void rvu_attach_block(struct rvu *rvu, int pcifunc,
+ 	if (!num_lfs)
+ 		return;
+ 
+-	blkaddr = rvu_get_blkaddr(rvu, blktype, 0);
++	blkaddr = rvu_get_attach_blkaddr(rvu, blktype, pcifunc);
+ 	if (blkaddr < 0)
+ 		return;
+ 
+@@ -1221,12 +1302,12 @@ static int rvu_check_rsrc_availability(struct rvu *rvu,
+ 				       struct rsrc_attach *req, u16 pcifunc)
+ {
+ 	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
++	int free_lfs, mappedlfs, blkaddr;
+ 	struct rvu_hwinfo *hw = rvu->hw;
+ 	struct rvu_block *block;
+-	int free_lfs, mappedlfs;
+ 
+ 	/* Only one NPA LF can be attached */
+-	if (req->npalf && !rvu_get_rsrc_mapcount(pfvf, BLKTYPE_NPA)) {
++	if (req->npalf && !is_blktype_attached(pfvf, BLKTYPE_NPA)) {
+ 		block = &hw->block[BLKADDR_NPA];
+ 		free_lfs = rvu_rsrc_free_count(&block->lf);
+ 		if (!free_lfs)
+@@ -1239,8 +1320,11 @@ static int rvu_check_rsrc_availability(struct rvu *rvu,
+ 	}
+ 
+ 	/* Only one NIX LF can be attached */
+-	if (req->nixlf && !rvu_get_rsrc_mapcount(pfvf, BLKTYPE_NIX)) {
+-		block = &hw->block[BLKADDR_NIX0];
++	if (req->nixlf && !is_blktype_attached(pfvf, BLKTYPE_NIX)) {
++		blkaddr = rvu_get_attach_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
++		if (blkaddr < 0)
++			return blkaddr;
++		block = &hw->block[blkaddr];
+ 		free_lfs = rvu_rsrc_free_count(&block->lf);
+ 		if (!free_lfs)
+ 			goto fail;
+@@ -1260,7 +1344,7 @@ static int rvu_check_rsrc_availability(struct rvu *rvu,
+ 				 pcifunc, req->sso, block->lf.max);
+ 			return -EINVAL;
+ 		}
+-		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type);
++		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->addr);
+ 		free_lfs = rvu_rsrc_free_count(&block->lf);
+ 		/* Check if additional resources are available */
+ 		if (req->sso > mappedlfs &&
+@@ -1276,7 +1360,7 @@ static int rvu_check_rsrc_availability(struct rvu *rvu,
+ 				 pcifunc, req->sso, block->lf.max);
+ 			return -EINVAL;
+ 		}
+-		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type);
++		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->addr);
+ 		free_lfs = rvu_rsrc_free_count(&block->lf);
+ 		if (req->ssow > mappedlfs &&
+ 		    ((req->ssow - mappedlfs) > free_lfs))
+@@ -1291,7 +1375,7 @@ static int rvu_check_rsrc_availability(struct rvu *rvu,
+ 				 pcifunc, req->timlfs, block->lf.max);
+ 			return -EINVAL;
+ 		}
+-		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type);
++		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->addr);
+ 		free_lfs = rvu_rsrc_free_count(&block->lf);
+ 		if (req->timlfs > mappedlfs &&
+ 		    ((req->timlfs - mappedlfs) > free_lfs))
+@@ -1306,7 +1390,7 @@ static int rvu_check_rsrc_availability(struct rvu *rvu,
+ 				 pcifunc, req->cptlfs, block->lf.max);
+ 			return -EINVAL;
+ 		}
+-		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type);
++		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->addr);
+ 		free_lfs = rvu_rsrc_free_count(&block->lf);
+ 		if (req->cptlfs > mappedlfs &&
+ 		    ((req->cptlfs - mappedlfs) > free_lfs))
+@@ -1942,7 +2026,7 @@ static void rvu_blklf_teardown(struct rvu *rvu, u16 pcifunc, u8 blkaddr)
+ 
+ 	block = &rvu->hw->block[blkaddr];
+ 	num_lfs = rvu_get_rsrc_mapcount(rvu_get_pfvf(rvu, pcifunc),
+-					block->type);
++					block->addr);
+ 	if (!num_lfs)
+ 		return;
+ 	for (slot = 0; slot < num_lfs; slot++) {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index 90eed3160915f..fc6d785b98ddd 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -137,6 +137,7 @@ struct rvu_pfvf {
+ 	u16		ssow;
+ 	u16		cptlfs;
+ 	u16		timlfs;
++	u16		cpt1_lfs;
+ 	u8		cgx_lmac;
+ 
+ 	/* Block LF's MSIX vector info */
+@@ -182,6 +183,8 @@ struct rvu_pfvf {
+ 
+ 	bool	cgx_in_use; /* this PF/VF using CGX? */
+ 	int	cgx_users;  /* number of cgx users - used only by PFs */
++
++	u8	nix_blkaddr; /* BLKADDR_NIX0/1 assigned to this PF */
+ };
+ 
+ struct nix_txsch {
+@@ -420,6 +423,7 @@ void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id);
+ int rvu_rsrc_free_count(struct rsrc_bmap *rsrc);
+ int rvu_alloc_rsrc_contig(struct rsrc_bmap *rsrc, int nrsrc);
+ bool rvu_rsrc_check_contig(struct rsrc_bmap *rsrc, int nrsrc);
++u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blkaddr);
+ int rvu_get_pf(u16 pcifunc);
+ struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc);
+ void rvu_get_pf_numvfs(struct rvu *rvu, int pf, int *numvfs, int *hwvf);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index f4ecc755eaff1..6c6b411e78fd8 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -74,6 +74,20 @@ void *rvu_cgx_pdata(u8 cgx_id, struct rvu *rvu)
+ 	return rvu->cgx_idmap[cgx_id];
+ }
+ 
++/* Based on P2X connectivity find mapped NIX block for a PF */
++static void rvu_map_cgx_nix_block(struct rvu *rvu, int pf,
++				  int cgx_id, int lmac_id)
++{
++	struct rvu_pfvf *pfvf = &rvu->pf[pf];
++	u8 p2x;
++
++	p2x = cgx_lmac_get_p2x(cgx_id, lmac_id);
++	/* Firmware sets P2X_SELECT as either NIX0 or NIX1 */
++	pfvf->nix_blkaddr = BLKADDR_NIX0;
++	if (p2x == CMR_P2X_SEL_NIX1)
++		pfvf->nix_blkaddr = BLKADDR_NIX1;
++}
++
+ static int rvu_map_cgx_lmac_pf(struct rvu *rvu)
+ {
+ 	struct npc_pkind *pkind = &rvu->hw->pkind;
+@@ -117,6 +131,7 @@ static int rvu_map_cgx_lmac_pf(struct rvu *rvu)
+ 			rvu->cgxlmac2pf_map[CGX_OFFSET(cgx) + lmac] = 1 << pf;
+ 			free_pkind = rvu_alloc_rsrc(&pkind->rsrc);
+ 			pkind->pfchan_map[free_pkind] = ((pf) & 0x3F) << 16;
++			rvu_map_cgx_nix_block(rvu, pf, cgx, lmac);
+ 			rvu->cgx_mapped_pfs++;
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index f6a3cf3e6f236..9886a30e9723c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -187,8 +187,8 @@ static bool is_valid_txschq(struct rvu *rvu, int blkaddr,
+ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf)
+ {
+ 	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
++	int pkind, pf, vf, lbkid;
+ 	u8 cgx_id, lmac_id;
+-	int pkind, pf, vf;
+ 	int err;
+ 
+ 	pf = rvu_get_pf(pcifunc);
+@@ -221,13 +221,24 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf)
+ 	case NIX_INTF_TYPE_LBK:
+ 		vf = (pcifunc & RVU_PFVF_FUNC_MASK) - 1;
+ 
++		/* If NIX1 block is present on the silicon then NIXes are
++		 * assigned alternatively for lbk interfaces. NIX0 should
++		 * send packets on lbk link 1 channels and NIX1 should send
++		 * on lbk link 0 channels for the communication between
++		 * NIX0 and NIX1.
++		 */
++		lbkid = 0;
++		if (rvu->hw->lbk_links > 1)
++			lbkid = vf & 0x1 ? 0 : 1;
++
+ 		/* Note that AF's VFs work in pairs and talk over consecutive
+ 		 * loopback channels.Therefore if odd number of AF VFs are
+ 		 * enabled then the last VF remains with no pair.
+ 		 */
+-		pfvf->rx_chan_base = NIX_CHAN_LBK_CHX(0, vf);
+-		pfvf->tx_chan_base = vf & 0x1 ? NIX_CHAN_LBK_CHX(0, vf - 1) :
+-						NIX_CHAN_LBK_CHX(0, vf + 1);
++		pfvf->rx_chan_base = NIX_CHAN_LBK_CHX(lbkid, vf);
++		pfvf->tx_chan_base = vf & 0x1 ?
++					NIX_CHAN_LBK_CHX(lbkid, vf - 1) :
++					NIX_CHAN_LBK_CHX(lbkid, vf + 1);
+ 		pfvf->rx_chan_cnt = 1;
+ 		pfvf->tx_chan_cnt = 1;
+ 		rvu_npc_install_promisc_entry(rvu, pcifunc, nixlf,
+@@ -3157,7 +3168,7 @@ int rvu_nix_init(struct rvu *rvu)
+ 	hw->cgx = (cfg >> 12) & 0xF;
+ 	hw->lmac_per_cgx = (cfg >> 8) & 0xF;
+ 	hw->cgx_links = hw->cgx * hw->lmac_per_cgx;
+-	hw->lbk_links = 1;
++	hw->lbk_links = (cfg >> 24) & 0xF;
+ 	hw->sdp_links = 1;
+ 
+ 	/* Initialize admin queue */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
+index 038a0f1cecec6..e44281ae570d3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
+@@ -88,6 +88,8 @@ static int mlx5e_gen_ip_tunnel_header_vxlan(char buf[],
+ 	struct udphdr *udp = (struct udphdr *)(buf);
+ 	struct vxlanhdr *vxh;
+ 
++	if (tun_key->tun_flags & TUNNEL_VXLAN_OPT)
++		return -EOPNOTSUPP;
+ 	vxh = (struct vxlanhdr *)((char *)udp + sizeof(struct udphdr));
+ 	*ip_proto = IPPROTO_UDP;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+index c70c1f0ca0c19..44a434b1178b5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+@@ -440,7 +440,7 @@ static int mlx5_ptp_verify(struct ptp_clock_info *ptp, unsigned int pin,
+ static const struct ptp_clock_info mlx5_ptp_clock_info = {
+ 	.owner		= THIS_MODULE,
+ 	.name		= "mlx5_ptp",
+-	.max_adj	= 100000000,
++	.max_adj	= 50000000,
+ 	.n_alarm	= 0,
+ 	.n_ext_ts	= 0,
+ 	.n_per_out	= 0,
+diff --git a/drivers/nfc/pn533/usb.c b/drivers/nfc/pn533/usb.c
+index 84f2983bf3841..57b07446bb768 100644
+--- a/drivers/nfc/pn533/usb.c
++++ b/drivers/nfc/pn533/usb.c
+@@ -153,10 +153,17 @@ static int pn533_usb_send_ack(struct pn533 *dev, gfp_t flags)
+ 	return usb_submit_urb(phy->ack_urb, flags);
+ }
+ 
++struct pn533_out_arg {
++	struct pn533_usb_phy *phy;
++	struct completion done;
++};
++
+ static int pn533_usb_send_frame(struct pn533 *dev,
+ 				struct sk_buff *out)
+ {
+ 	struct pn533_usb_phy *phy = dev->phy;
++	struct pn533_out_arg arg;
++	void *cntx;
+ 	int rc;
+ 
+ 	if (phy->priv == NULL)
+@@ -168,10 +175,17 @@ static int pn533_usb_send_frame(struct pn533 *dev,
+ 	print_hex_dump_debug("PN533 TX: ", DUMP_PREFIX_NONE, 16, 1,
+ 			     out->data, out->len, false);
+ 
++	init_completion(&arg.done);
++	cntx = phy->out_urb->context;
++	phy->out_urb->context = &arg;
++
+ 	rc = usb_submit_urb(phy->out_urb, GFP_KERNEL);
+ 	if (rc)
+ 		return rc;
+ 
++	wait_for_completion(&arg.done);
++	phy->out_urb->context = cntx;
++
+ 	if (dev->protocol_type == PN533_PROTO_REQ_RESP) {
+ 		/* request for response for sent packet directly */
+ 		rc = pn533_submit_urb_for_response(phy, GFP_KERNEL);
+@@ -412,7 +426,31 @@ static int pn533_acr122_poweron_rdr(struct pn533_usb_phy *phy)
+ 	return arg.rc;
+ }
+ 
+-static void pn533_send_complete(struct urb *urb)
++static void pn533_out_complete(struct urb *urb)
++{
++	struct pn533_out_arg *arg = urb->context;
++	struct pn533_usb_phy *phy = arg->phy;
++
++	switch (urb->status) {
++	case 0:
++		break; /* success */
++	case -ECONNRESET:
++	case -ENOENT:
++		dev_dbg(&phy->udev->dev,
++			"The urb has been stopped (status %d)\n",
++			urb->status);
++		break;
++	case -ESHUTDOWN:
++	default:
++		nfc_err(&phy->udev->dev,
++			"Urb failure (status %d)\n",
++			urb->status);
++	}
++
++	complete(&arg->done);
++}
++
++static void pn533_ack_complete(struct urb *urb)
+ {
+ 	struct pn533_usb_phy *phy = urb->context;
+ 
+@@ -500,10 +538,10 @@ static int pn533_usb_probe(struct usb_interface *interface,
+ 
+ 	usb_fill_bulk_urb(phy->out_urb, phy->udev,
+ 			  usb_sndbulkpipe(phy->udev, out_endpoint),
+-			  NULL, 0, pn533_send_complete, phy);
++			  NULL, 0, pn533_out_complete, phy);
+ 	usb_fill_bulk_urb(phy->ack_urb, phy->udev,
+ 			  usb_sndbulkpipe(phy->udev, out_endpoint),
+-			  NULL, 0, pn533_send_complete, phy);
++			  NULL, 0, pn533_ack_complete, phy);
+ 
+ 	switch (id->driver_info) {
+ 	case PN533_DEVICE_STD:
+diff --git a/drivers/platform/x86/sony-laptop.c b/drivers/platform/x86/sony-laptop.c
+index e5a1b55334081..f070e4eb74f4a 100644
+--- a/drivers/platform/x86/sony-laptop.c
++++ b/drivers/platform/x86/sony-laptop.c
+@@ -1892,14 +1892,21 @@ static int sony_nc_kbd_backlight_setup(struct platform_device *pd,
+ 		break;
+ 	}
+ 
+-	ret = sony_call_snc_handle(handle, probe_base, &result);
+-	if (ret)
+-		return ret;
++	/*
++	 * Only probe if there is a separate probe_base, otherwise the probe call
++	 * is equivalent to __sony_nc_kbd_backlight_mode_set(0), resulting in
++	 * the keyboard backlight being turned off.
++	 */
++	if (probe_base) {
++		ret = sony_call_snc_handle(handle, probe_base, &result);
++		if (ret)
++			return ret;
+ 
+-	if ((handle == 0x0137 && !(result & 0x02)) ||
+-			!(result & 0x01)) {
+-		dprintk("no backlight keyboard found\n");
+-		return 0;
++		if ((handle == 0x0137 && !(result & 0x02)) ||
++				!(result & 0x01)) {
++			dprintk("no backlight keyboard found\n");
++			return 0;
++		}
+ 	}
+ 
+ 	kbdbl_ctl = kzalloc(sizeof(*kbdbl_ctl), GFP_KERNEL);
+diff --git a/drivers/regulator/da9211-regulator.c b/drivers/regulator/da9211-regulator.c
+index e01b32d1fa17d..00828f5baa972 100644
+--- a/drivers/regulator/da9211-regulator.c
++++ b/drivers/regulator/da9211-regulator.c
+@@ -498,6 +498,12 @@ static int da9211_i2c_probe(struct i2c_client *i2c)
+ 
+ 	chip->chip_irq = i2c->irq;
+ 
++	ret = da9211_regulator_init(chip);
++	if (ret < 0) {
++		dev_err(chip->dev, "Failed to initialize regulator: %d\n", ret);
++		return ret;
++	}
++
+ 	if (chip->chip_irq != 0) {
+ 		ret = devm_request_threaded_irq(chip->dev, chip->chip_irq, NULL,
+ 					da9211_irq_handler,
+@@ -512,11 +518,6 @@ static int da9211_i2c_probe(struct i2c_client *i2c)
+ 		dev_warn(chip->dev, "No IRQ configured\n");
+ 	}
+ 
+-	ret = da9211_regulator_init(chip);
+-
+-	if (ret < 0)
+-		dev_err(chip->dev, "Failed to initialize regulator: %d\n", ret);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
+index 7948660e042fd..6f387a4fd96a4 100644
+--- a/drivers/tty/hvc/hvc_xen.c
++++ b/drivers/tty/hvc/hvc_xen.c
+@@ -52,17 +52,22 @@ static DEFINE_SPINLOCK(xencons_lock);
+ 
+ static struct xencons_info *vtermno_to_xencons(int vtermno)
+ {
+-	struct xencons_info *entry, *n, *ret = NULL;
++	struct xencons_info *entry, *ret = NULL;
++	unsigned long flags;
+ 
+-	if (list_empty(&xenconsoles))
+-			return NULL;
++	spin_lock_irqsave(&xencons_lock, flags);
++	if (list_empty(&xenconsoles)) {
++		spin_unlock_irqrestore(&xencons_lock, flags);
++		return NULL;
++	}
+ 
+-	list_for_each_entry_safe(entry, n, &xenconsoles, list) {
++	list_for_each_entry(entry, &xenconsoles, list) {
+ 		if (entry->vtermno == vtermno) {
+ 			ret  = entry;
+ 			break;
+ 		}
+ 	}
++	spin_unlock_irqrestore(&xencons_lock, flags);
+ 
+ 	return ret;
+ }
+@@ -223,7 +228,7 @@ static int xen_hvm_console_init(void)
+ {
+ 	int r;
+ 	uint64_t v = 0;
+-	unsigned long gfn;
++	unsigned long gfn, flags;
+ 	struct xencons_info *info;
+ 
+ 	if (!xen_hvm_domain())
+@@ -258,9 +263,9 @@ static int xen_hvm_console_init(void)
+ 		goto err;
+ 	info->vtermno = HVC_COOKIE;
+ 
+-	spin_lock(&xencons_lock);
++	spin_lock_irqsave(&xencons_lock, flags);
+ 	list_add_tail(&info->list, &xenconsoles);
+-	spin_unlock(&xencons_lock);
++	spin_unlock_irqrestore(&xencons_lock, flags);
+ 
+ 	return 0;
+ err:
+@@ -283,6 +288,7 @@ static int xencons_info_pv_init(struct xencons_info *info, int vtermno)
+ static int xen_pv_console_init(void)
+ {
+ 	struct xencons_info *info;
++	unsigned long flags;
+ 
+ 	if (!xen_pv_domain())
+ 		return -ENODEV;
+@@ -299,9 +305,9 @@ static int xen_pv_console_init(void)
+ 		/* already configured */
+ 		return 0;
+ 	}
+-	spin_lock(&xencons_lock);
++	spin_lock_irqsave(&xencons_lock, flags);
+ 	xencons_info_pv_init(info, HVC_COOKIE);
+-	spin_unlock(&xencons_lock);
++	spin_unlock_irqrestore(&xencons_lock, flags);
+ 
+ 	return 0;
+ }
+@@ -309,6 +315,7 @@ static int xen_pv_console_init(void)
+ static int xen_initial_domain_console_init(void)
+ {
+ 	struct xencons_info *info;
++	unsigned long flags;
+ 
+ 	if (!xen_initial_domain())
+ 		return -ENODEV;
+@@ -323,9 +330,9 @@ static int xen_initial_domain_console_init(void)
+ 	info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0, false);
+ 	info->vtermno = HVC_COOKIE;
+ 
+-	spin_lock(&xencons_lock);
++	spin_lock_irqsave(&xencons_lock, flags);
+ 	list_add_tail(&info->list, &xenconsoles);
+-	spin_unlock(&xencons_lock);
++	spin_unlock_irqrestore(&xencons_lock, flags);
+ 
+ 	return 0;
+ }
+@@ -380,10 +387,12 @@ static void xencons_free(struct xencons_info *info)
+ 
+ static int xen_console_remove(struct xencons_info *info)
+ {
++	unsigned long flags;
++
+ 	xencons_disconnect_backend(info);
+-	spin_lock(&xencons_lock);
++	spin_lock_irqsave(&xencons_lock, flags);
+ 	list_del(&info->list);
+-	spin_unlock(&xencons_lock);
++	spin_unlock_irqrestore(&xencons_lock, flags);
+ 	if (info->xbdev != NULL)
+ 		xencons_free(info);
+ 	else {
+@@ -464,6 +473,7 @@ static int xencons_probe(struct xenbus_device *dev,
+ {
+ 	int ret, devid;
+ 	struct xencons_info *info;
++	unsigned long flags;
+ 
+ 	devid = dev->nodename[strlen(dev->nodename) - 1] - '0';
+ 	if (devid == 0)
+@@ -482,9 +492,9 @@ static int xencons_probe(struct xenbus_device *dev,
+ 	ret = xencons_connect_backend(dev, info);
+ 	if (ret < 0)
+ 		goto error;
+-	spin_lock(&xencons_lock);
++	spin_lock_irqsave(&xencons_lock, flags);
+ 	list_add_tail(&info->list, &xenconsoles);
+-	spin_unlock(&xencons_lock);
++	spin_unlock_irqrestore(&xencons_lock, flags);
+ 
+ 	return 0;
+ 
+@@ -583,10 +593,12 @@ static int __init xen_hvc_init(void)
+ 
+ 	info->hvc = hvc_alloc(HVC_COOKIE, info->irq, ops, 256);
+ 	if (IS_ERR(info->hvc)) {
++		unsigned long flags;
++
+ 		r = PTR_ERR(info->hvc);
+-		spin_lock(&xencons_lock);
++		spin_lock_irqsave(&xencons_lock, flags);
+ 		list_del(&info->list);
+-		spin_unlock(&xencons_lock);
++		spin_unlock_irqrestore(&xencons_lock, flags);
+ 		if (info->irq)
+ 			unbind_from_irqhandler(info->irq, NULL);
+ 		kfree(info);
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index d1a42300ae58f..a8a9addb4d253 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1003,6 +1003,8 @@ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
+ 	if (!dev)
+ 		return 0;
+ 
++	dev->slot_id = slot_id;
++
+ 	/* Allocate the (output) device context that will be used in the HC. */
+ 	dev->out_ctx = xhci_alloc_container_ctx(xhci, XHCI_CTX_TYPE_DEVICE, flags);
+ 	if (!dev->out_ctx)
+@@ -1021,6 +1023,8 @@ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
+ 
+ 	/* Initialize the cancellation list and watchdog timers for each ep */
+ 	for (i = 0; i < 31; i++) {
++		dev->eps[i].ep_index = i;
++		dev->eps[i].vdev = dev;
+ 		xhci_init_endpoint_timer(xhci, &dev->eps[i]);
+ 		INIT_LIST_HEAD(&dev->eps[i].cancelled_td_list);
+ 		INIT_LIST_HEAD(&dev->eps[i].bw_endpoint_list);
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index fa3a7ac15f825..ead42fc3e16d5 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -773,6 +773,101 @@ static void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci,
+ 	seg->bounce_offs = 0;
+ }
+ 
++static int xhci_td_cleanup(struct xhci_hcd *xhci, struct xhci_td *td,
++			   struct xhci_ring *ep_ring, int status)
++{
++	struct urb *urb = NULL;
++
++	/* Clean up the endpoint's TD list */
++	urb = td->urb;
++
++	/* if a bounce buffer was used to align this td then unmap it */
++	xhci_unmap_td_bounce_buffer(xhci, ep_ring, td);
++
++	/* Do one last check of the actual transfer length.
++	 * If the host controller said we transferred more data than the buffer
++	 * length, urb->actual_length will be a very big number (since it's
++	 * unsigned).  Play it safe and say we didn't transfer anything.
++	 */
++	if (urb->actual_length > urb->transfer_buffer_length) {
++		xhci_warn(xhci, "URB req %u and actual %u transfer length mismatch\n",
++			  urb->transfer_buffer_length, urb->actual_length);
++		urb->actual_length = 0;
++		status = 0;
++	}
++	list_del_init(&td->td_list);
++	/* Was this TD slated to be cancelled but completed anyway? */
++	if (!list_empty(&td->cancelled_td_list))
++		list_del_init(&td->cancelled_td_list);
++
++	inc_td_cnt(urb);
++	/* Giveback the urb when all the tds are completed */
++	if (last_td_in_urb(td)) {
++		if ((urb->actual_length != urb->transfer_buffer_length &&
++		     (urb->transfer_flags & URB_SHORT_NOT_OK)) ||
++		    (status != 0 && !usb_endpoint_xfer_isoc(&urb->ep->desc)))
++			xhci_dbg(xhci, "Giveback URB %p, len = %d, expected = %d, status = %d\n",
++				 urb, urb->actual_length,
++				 urb->transfer_buffer_length, status);
++
++		/* set isoc urb status to 0 just as EHCI, UHCI, and OHCI */
++		if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS)
++			status = 0;
++		xhci_giveback_urb_in_irq(xhci, td, status);
++	}
++
++	return 0;
++}
++
++static int xhci_reset_halted_ep(struct xhci_hcd *xhci, unsigned int slot_id,
++				unsigned int ep_index, enum xhci_ep_reset_type reset_type)
++{
++	struct xhci_command *command;
++	int ret = 0;
++
++	command = xhci_alloc_command(xhci, false, GFP_ATOMIC);
++	if (!command) {
++		ret = -ENOMEM;
++		goto done;
++	}
++
++	ret = xhci_queue_reset_ep(xhci, command, slot_id, ep_index, reset_type);
++done:
++	if (ret)
++		xhci_err(xhci, "ERROR queuing reset endpoint for slot %d ep_index %d, %d\n",
++			 slot_id, ep_index, ret);
++	return ret;
++}
++
++static void xhci_handle_halted_endpoint(struct xhci_hcd *xhci,
++				struct xhci_virt_ep *ep, unsigned int stream_id,
++				struct xhci_td *td,
++				enum xhci_ep_reset_type reset_type)
++{
++	unsigned int slot_id = ep->vdev->slot_id;
++	int err;
++
++	/*
++	 * Avoid resetting endpoint if link is inactive. Can cause host hang.
++	 * Device will be reset soon to recover the link so don't do anything
++	 */
++	if (ep->vdev->flags & VDEV_PORT_ERROR)
++		return;
++
++	ep->ep_state |= EP_HALTED;
++
++	err = xhci_reset_halted_ep(xhci, slot_id, ep->ep_index, reset_type);
++	if (err)
++		return;
++
++	if (reset_type == EP_HARD_RESET) {
++		ep->ep_state |= EP_HARD_CLEAR_TOGGLE;
++		xhci_cleanup_stalled_ring(xhci, slot_id, ep->ep_index, stream_id,
++					  td);
++	}
++	xhci_ring_cmd_db(xhci);
++}
++
+ /*
+  * When we get a command completion for a Stop Endpoint Command, we need to
+  * unlink any cancelled TDs from the ring.  There are two ways to do that:
+@@ -1924,37 +2019,6 @@ static void xhci_clear_hub_tt_buffer(struct xhci_hcd *xhci, struct xhci_td *td,
+ 	}
+ }
+ 
+-static void xhci_cleanup_halted_endpoint(struct xhci_hcd *xhci,
+-		unsigned int slot_id, unsigned int ep_index,
+-		unsigned int stream_id, struct xhci_td *td,
+-		enum xhci_ep_reset_type reset_type)
+-{
+-	struct xhci_virt_ep *ep = &xhci->devs[slot_id]->eps[ep_index];
+-	struct xhci_command *command;
+-
+-	/*
+-	 * Avoid resetting endpoint if link is inactive. Can cause host hang.
+-	 * Device will be reset soon to recover the link so don't do anything
+-	 */
+-	if (xhci->devs[slot_id]->flags & VDEV_PORT_ERROR)
+-		return;
+-
+-	command = xhci_alloc_command(xhci, false, GFP_ATOMIC);
+-	if (!command)
+-		return;
+-
+-	ep->ep_state |= EP_HALTED;
+-
+-	xhci_queue_reset_ep(xhci, command, slot_id, ep_index, reset_type);
+-
+-	if (reset_type == EP_HARD_RESET) {
+-		ep->ep_state |= EP_HARD_CLEAR_TOGGLE;
+-		xhci_cleanup_stalled_ring(xhci, slot_id, ep_index, stream_id,
+-					  td);
+-	}
+-	xhci_ring_cmd_db(xhci);
+-}
+-
+ /* Check if an error has halted the endpoint ring.  The class driver will
+  * cleanup the halt for a non-default control endpoint if we indicate a stall.
+  * However, a babble and other errors also halt the endpoint ring, and the class
+@@ -1995,68 +2059,15 @@ int xhci_is_vendor_info_code(struct xhci_hcd *xhci, unsigned int trb_comp_code)
+ 	return 0;
+ }
+ 
+-static int xhci_td_cleanup(struct xhci_hcd *xhci, struct xhci_td *td,
+-		struct xhci_ring *ep_ring, int *status)
+-{
+-	struct urb *urb = NULL;
+-
+-	/* Clean up the endpoint's TD list */
+-	urb = td->urb;
+-
+-	/* if a bounce buffer was used to align this td then unmap it */
+-	xhci_unmap_td_bounce_buffer(xhci, ep_ring, td);
+-
+-	/* Do one last check of the actual transfer length.
+-	 * If the host controller said we transferred more data than the buffer
+-	 * length, urb->actual_length will be a very big number (since it's
+-	 * unsigned).  Play it safe and say we didn't transfer anything.
+-	 */
+-	if (urb->actual_length > urb->transfer_buffer_length) {
+-		xhci_warn(xhci, "URB req %u and actual %u transfer length mismatch\n",
+-			  urb->transfer_buffer_length, urb->actual_length);
+-		urb->actual_length = 0;
+-		*status = 0;
+-	}
+-	list_del_init(&td->td_list);
+-	/* Was this TD slated to be cancelled but completed anyway? */
+-	if (!list_empty(&td->cancelled_td_list))
+-		list_del_init(&td->cancelled_td_list);
+-
+-	inc_td_cnt(urb);
+-	/* Giveback the urb when all the tds are completed */
+-	if (last_td_in_urb(td)) {
+-		if ((urb->actual_length != urb->transfer_buffer_length &&
+-		     (urb->transfer_flags & URB_SHORT_NOT_OK)) ||
+-		    (*status != 0 && !usb_endpoint_xfer_isoc(&urb->ep->desc)))
+-			xhci_dbg(xhci, "Giveback URB %p, len = %d, expected = %d, status = %d\n",
+-				 urb, urb->actual_length,
+-				 urb->transfer_buffer_length, *status);
+-
+-		/* set isoc urb status to 0 just as EHCI, UHCI, and OHCI */
+-		if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS)
+-			*status = 0;
+-		xhci_giveback_urb_in_irq(xhci, td, *status);
+-	}
+-
+-	return 0;
+-}
+-
+ static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td,
+-	struct xhci_transfer_event *event,
+-	struct xhci_virt_ep *ep, int *status)
++	struct xhci_transfer_event *event, struct xhci_virt_ep *ep)
+ {
+-	struct xhci_virt_device *xdev;
+ 	struct xhci_ep_ctx *ep_ctx;
+ 	struct xhci_ring *ep_ring;
+-	unsigned int slot_id;
+ 	u32 trb_comp_code;
+-	int ep_index;
+ 
+-	slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags));
+-	xdev = xhci->devs[slot_id];
+-	ep_index = TRB_TO_EP_ID(le32_to_cpu(event->flags)) - 1;
+ 	ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer));
+-	ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index);
++	ep_ctx = xhci_get_ep_ctx(xhci, ep->vdev->out_ctx, ep->ep_index);
+ 	trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
+ 
+ 	if (trb_comp_code == COMP_STOPPED_LENGTH_INVALID ||
+@@ -2081,10 +2092,11 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 		 * stall later. Hub TT buffer should only be cleared for FS/LS
+ 		 * devices behind HS hubs for functional stalls.
+ 		 */
+-		if ((ep_index != 0) || (trb_comp_code != COMP_STALL_ERROR))
++		if ((ep->ep_index != 0) || (trb_comp_code != COMP_STALL_ERROR))
+ 			xhci_clear_hub_tt_buffer(xhci, td, ep);
+-		xhci_cleanup_halted_endpoint(xhci, slot_id, ep_index,
+-					ep_ring->stream_id, td, EP_HARD_RESET);
++
++		xhci_handle_halted_endpoint(xhci, ep, ep_ring->stream_id, td,
++					     EP_HARD_RESET);
+ 	} else {
+ 		/* Update ring dequeue pointer */
+ 		while (ep_ring->dequeue != td->last_trb)
+@@ -2092,7 +2104,7 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 		inc_deq(xhci, ep_ring);
+ 	}
+ 
+-	return xhci_td_cleanup(xhci, td, ep_ring, status);
++	return xhci_td_cleanup(xhci, td, ep_ring, td->status);
+ }
+ 
+ /* sum trb lengths from ring dequeue up to stop_trb, _excluding_ stop_trb */
+@@ -2115,21 +2127,15 @@ static int sum_trb_lengths(struct xhci_hcd *xhci, struct xhci_ring *ring,
+  */
+ static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 	union xhci_trb *ep_trb, struct xhci_transfer_event *event,
+-	struct xhci_virt_ep *ep, int *status)
++	struct xhci_virt_ep *ep)
+ {
+-	struct xhci_virt_device *xdev;
+-	unsigned int slot_id;
+-	int ep_index;
+ 	struct xhci_ep_ctx *ep_ctx;
+ 	u32 trb_comp_code;
+ 	u32 remaining, requested;
+ 	u32 trb_type;
+ 
+ 	trb_type = TRB_FIELD_TO_TYPE(le32_to_cpu(ep_trb->generic.field[3]));
+-	slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags));
+-	xdev = xhci->devs[slot_id];
+-	ep_index = TRB_TO_EP_ID(le32_to_cpu(event->flags)) - 1;
+-	ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index);
++	ep_ctx = xhci_get_ep_ctx(xhci, ep->vdev->out_ctx, ep->ep_index);
+ 	trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
+ 	requested = td->urb->transfer_buffer_length;
+ 	remaining = EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
+@@ -2139,13 +2145,13 @@ static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 		if (trb_type != TRB_STATUS) {
+ 			xhci_warn(xhci, "WARN: Success on ctrl %s TRB without IOC set?\n",
+ 				  (trb_type == TRB_DATA) ? "data" : "setup");
+-			*status = -ESHUTDOWN;
++			td->status = -ESHUTDOWN;
+ 			break;
+ 		}
+-		*status = 0;
++		td->status = 0;
+ 		break;
+ 	case COMP_SHORT_PACKET:
+-		*status = 0;
++		td->status = 0;
+ 		break;
+ 	case COMP_STOPPED_SHORT_PACKET:
+ 		if (trb_type == TRB_DATA || trb_type == TRB_NORMAL)
+@@ -2177,7 +2183,7 @@ static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 						       ep_ctx, trb_comp_code))
+ 			break;
+ 		xhci_dbg(xhci, "TRB error %u, halted endpoint index = %u\n",
+-			 trb_comp_code, ep_index);
++			 trb_comp_code, ep->ep_index);
+ 		fallthrough;
+ 	case COMP_STALL_ERROR:
+ 		/* Did we transfer part of the data (middle) phase? */
+@@ -2209,7 +2215,7 @@ static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 		td->urb->actual_length = requested;
+ 
+ finish_td:
+-	return finish_td(xhci, td, event, ep, status);
++	return finish_td(xhci, td, event, ep);
+ }
+ 
+ /*
+@@ -2217,9 +2223,8 @@ finish_td:
+  */
+ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 	union xhci_trb *ep_trb, struct xhci_transfer_event *event,
+-	struct xhci_virt_ep *ep, int *status)
++	struct xhci_virt_ep *ep)
+ {
+-	struct xhci_ring *ep_ring;
+ 	struct urb_priv *urb_priv;
+ 	int idx;
+ 	struct usb_iso_packet_descriptor *frame;
+@@ -2228,7 +2233,6 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 	u32 remaining, requested, ep_trb_len;
+ 	int short_framestatus;
+ 
+-	ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer));
+ 	trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
+ 	urb_priv = td->urb->hcpriv;
+ 	idx = urb_priv->num_tds_done;
+@@ -2289,26 +2293,23 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 	}
+ 
+ 	if (sum_trbs_for_length)
+-		frame->actual_length = sum_trb_lengths(xhci, ep_ring, ep_trb) +
++		frame->actual_length = sum_trb_lengths(xhci, ep->ring, ep_trb) +
+ 			ep_trb_len - remaining;
+ 	else
+ 		frame->actual_length = requested;
+ 
+ 	td->urb->actual_length += frame->actual_length;
+ 
+-	return finish_td(xhci, td, event, ep, status);
++	return finish_td(xhci, td, event, ep);
+ }
+ 
+ static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+-			struct xhci_transfer_event *event,
+-			struct xhci_virt_ep *ep, int *status)
++			struct xhci_virt_ep *ep, int status)
+ {
+-	struct xhci_ring *ep_ring;
+ 	struct urb_priv *urb_priv;
+ 	struct usb_iso_packet_descriptor *frame;
+ 	int idx;
+ 
+-	ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer));
+ 	urb_priv = td->urb->hcpriv;
+ 	idx = urb_priv->num_tds_done;
+ 	frame = &td->urb->iso_frame_desc[idx];
+@@ -2320,11 +2321,11 @@ static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 	frame->actual_length = 0;
+ 
+ 	/* Update ring dequeue pointer */
+-	while (ep_ring->dequeue != td->last_trb)
+-		inc_deq(xhci, ep_ring);
+-	inc_deq(xhci, ep_ring);
++	while (ep->ring->dequeue != td->last_trb)
++		inc_deq(xhci, ep->ring);
++	inc_deq(xhci, ep->ring);
+ 
+-	return xhci_td_cleanup(xhci, td, ep_ring, status);
++	return xhci_td_cleanup(xhci, td, ep->ring, status);
+ }
+ 
+ /*
+@@ -2332,18 +2333,14 @@ static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+  */
+ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 	union xhci_trb *ep_trb, struct xhci_transfer_event *event,
+-	struct xhci_virt_ep *ep, int *status)
++	struct xhci_virt_ep *ep)
+ {
+ 	struct xhci_slot_ctx *slot_ctx;
+ 	struct xhci_ring *ep_ring;
+ 	u32 trb_comp_code;
+ 	u32 remaining, requested, ep_trb_len;
+-	unsigned int slot_id;
+-	int ep_index;
+ 
+-	slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags));
+-	slot_ctx = xhci_get_slot_ctx(xhci, xhci->devs[slot_id]->out_ctx);
+-	ep_index = TRB_TO_EP_ID(le32_to_cpu(event->flags)) - 1;
++	slot_ctx = xhci_get_slot_ctx(xhci, ep->vdev->out_ctx);
+ 	ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer));
+ 	trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
+ 	remaining = EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
+@@ -2352,7 +2349,7 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 
+ 	switch (trb_comp_code) {
+ 	case COMP_SUCCESS:
+-		ep_ring->err_count = 0;
++		ep->err_count = 0;
+ 		/* handle success with untransferred data as short packet */
+ 		if (ep_trb != td->last_trb || remaining) {
+ 			xhci_warn(xhci, "WARN Successful completion on short TX\n");
+@@ -2360,13 +2357,13 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 				 td->urb->ep->desc.bEndpointAddress,
+ 				 requested, remaining);
+ 		}
+-		*status = 0;
++		td->status = 0;
+ 		break;
+ 	case COMP_SHORT_PACKET:
+ 		xhci_dbg(xhci, "ep %#x - asked for %d bytes, %d bytes untransferred\n",
+ 			 td->urb->ep->desc.bEndpointAddress,
+ 			 requested, remaining);
+-		*status = 0;
++		td->status = 0;
+ 		break;
+ 	case COMP_STOPPED_SHORT_PACKET:
+ 		td->urb->actual_length = remaining;
+@@ -2378,12 +2375,14 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 		break;
+ 	case COMP_USB_TRANSACTION_ERROR:
+ 		if (xhci->quirks & XHCI_NO_SOFT_RETRY ||
+-		    (ep_ring->err_count++ > MAX_SOFT_RETRY) ||
++		    (ep->err_count++ > MAX_SOFT_RETRY) ||
+ 		    le32_to_cpu(slot_ctx->tt_info) & TT_SLOT)
+ 			break;
+-		*status = 0;
+-		xhci_cleanup_halted_endpoint(xhci, slot_id, ep_index,
+-					ep_ring->stream_id, td, EP_SOFT_RESET);
++
++		td->status = 0;
++
++		xhci_handle_halted_endpoint(xhci, ep, ep_ring->stream_id, td,
++					    EP_SOFT_RESET);
+ 		return 0;
+ 	default:
+ 		/* do nothing */
+@@ -2402,7 +2401,7 @@ finish_td:
+ 			  remaining);
+ 		td->urb->actual_length = 0;
+ 	}
+-	return finish_td(xhci, td, event, ep, status);
++	return finish_td(xhci, td, event, ep);
+ }
+ 
+ /*
+@@ -2458,8 +2457,14 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ 		case COMP_USB_TRANSACTION_ERROR:
+ 		case COMP_INVALID_STREAM_TYPE_ERROR:
+ 		case COMP_INVALID_STREAM_ID_ERROR:
+-			xhci_cleanup_halted_endpoint(xhci, slot_id, ep_index, 0,
+-						     NULL, EP_SOFT_RESET);
++			xhci_dbg(xhci, "Stream transaction error ep %u no id\n",
++				 ep_index);
++			if (ep->err_count++ > MAX_SOFT_RETRY)
++				xhci_handle_halted_endpoint(xhci, ep, 0, NULL,
++							    EP_HARD_RESET);
++			else
++				xhci_handle_halted_endpoint(xhci, ep, 0, NULL,
++							    EP_SOFT_RESET);
+ 			goto cleanup;
+ 		case COMP_RING_UNDERRUN:
+ 		case COMP_RING_OVERRUN:
+@@ -2642,11 +2647,10 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ 			if (trb_comp_code == COMP_STALL_ERROR ||
+ 			    xhci_requires_manual_halt_cleanup(xhci, ep_ctx,
+ 							      trb_comp_code)) {
+-				xhci_cleanup_halted_endpoint(xhci, slot_id,
+-							     ep_index,
+-							     ep_ring->stream_id,
+-							     NULL,
+-							     EP_HARD_RESET);
++				xhci_handle_halted_endpoint(xhci, ep,
++							    ep_ring->stream_id,
++							    NULL,
++							    EP_HARD_RESET);
+ 			}
+ 			goto cleanup;
+ 		}
+@@ -2705,7 +2709,7 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ 				return -ESHUTDOWN;
+ 			}
+ 
+-			skip_isoc_td(xhci, td, event, ep, &status);
++			skip_isoc_td(xhci, td, ep, status);
+ 			goto cleanup;
+ 		}
+ 		if (trb_comp_code == COMP_SHORT_PACKET)
+@@ -2733,25 +2737,26 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ 		 * endpoint. Otherwise, the endpoint remains stalled
+ 		 * indefinitely.
+ 		 */
++
+ 		if (trb_is_noop(ep_trb)) {
+ 			if (trb_comp_code == COMP_STALL_ERROR ||
+ 			    xhci_requires_manual_halt_cleanup(xhci, ep_ctx,
+ 							      trb_comp_code))
+-				xhci_cleanup_halted_endpoint(xhci, slot_id,
+-							     ep_index,
+-							     ep_ring->stream_id,
+-							     td, EP_HARD_RESET);
++				xhci_handle_halted_endpoint(xhci, ep,
++							    ep_ring->stream_id,
++							    td, EP_HARD_RESET);
+ 			goto cleanup;
+ 		}
+ 
++		td->status = status;
++
+ 		/* update the urb's actual_length and give back to the core */
+ 		if (usb_endpoint_xfer_control(&td->urb->ep->desc))
+-			process_ctrl_td(xhci, td, ep_trb, event, ep, &status);
++			process_ctrl_td(xhci, td, ep_trb, event, ep);
+ 		else if (usb_endpoint_xfer_isoc(&td->urb->ep->desc))
+-			process_isoc_td(xhci, td, ep_trb, event, ep, &status);
++			process_isoc_td(xhci, td, ep_trb, event, ep);
+ 		else
+-			process_bulk_intr_td(xhci, td, ep_trb, event, ep,
+-					     &status);
++			process_bulk_intr_td(xhci, td, ep_trb, event, ep);
+ cleanup:
+ 		handling_skipped_tds = ep->skip &&
+ 			trb_comp_code != COMP_MISSED_SERVICE_ERROR &&
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 059050f135225..ac09b171b7832 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -924,6 +924,8 @@ struct xhci_bw_info {
+ #define SS_BW_RESERVED		10
+ 
+ struct xhci_virt_ep {
++	struct xhci_virt_device		*vdev;	/* parent */
++	unsigned int			ep_index;
+ 	struct xhci_ring		*ring;
+ 	/* Related to endpoints that are configured to use stream IDs only */
+ 	struct xhci_stream_info		*stream_info;
+@@ -931,6 +933,7 @@ struct xhci_virt_ep {
+ 	 * have to restore the device state to the previous state
+ 	 */
+ 	struct xhci_ring		*new_ring;
++	unsigned int			err_count;
+ 	unsigned int			ep_state;
+ #define SET_DEQ_PENDING		(1 << 0)
+ #define EP_HALTED		(1 << 1)	/* For stall handling */
+@@ -1002,6 +1005,7 @@ struct xhci_interval_bw_table {
+ #define EP_CTX_PER_DEV		31
+ 
+ struct xhci_virt_device {
++	int				slot_id;
+ 	struct usb_device		*udev;
+ 	/*
+ 	 * Commands to the hardware are passed an "input context" that
+@@ -1541,6 +1545,7 @@ struct xhci_segment {
+ struct xhci_td {
+ 	struct list_head	td_list;
+ 	struct list_head	cancelled_td_list;
++	int			status;
+ 	struct urb		*urb;
+ 	struct xhci_segment	*start_seg;
+ 	union xhci_trb		*first_trb;
+@@ -1612,7 +1617,6 @@ struct xhci_ring {
+ 	 * if we own the TRB (if we are the consumer).  See section 4.9.1.
+ 	 */
+ 	u32			cycle_state;
+-	unsigned int            err_count;
+ 	unsigned int		stream_id;
+ 	unsigned int		num_segs;
+ 	unsigned int		num_trbs_free;
+diff --git a/fs/cifs/link.c b/fs/cifs/link.c
+index 85d30fef98a29..4fc6eb8e786d4 100644
+--- a/fs/cifs/link.c
++++ b/fs/cifs/link.c
+@@ -471,6 +471,7 @@ smb3_create_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.disposition = FILE_CREATE;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
++	oparms.mode = 0644;
+ 
+ 	rc = SMB2_open(xid, &oparms, utf16_path, &oplock, NULL, NULL,
+ 		       NULL, NULL);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 935589579b8fe..e940fb07ef2e9 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1279,6 +1279,7 @@ static struct inode *ext4_alloc_inode(struct super_block *sb)
+ 		return NULL;
+ 
+ 	inode_set_iversion(&ei->vfs_inode, 1);
++	ei->i_flags = 0;
+ 	spin_lock_init(&ei->i_raw_lock);
+ 	INIT_LIST_HEAD(&ei->i_prealloc_list);
+ 	atomic_set(&ei->i_prealloc_active, 0);
+diff --git a/include/dt-bindings/clock/imx8mp-clock.h b/include/dt-bindings/clock/imx8mp-clock.h
+index e8d68fbb6e3f6..d7e513243dd29 100644
+--- a/include/dt-bindings/clock/imx8mp-clock.h
++++ b/include/dt-bindings/clock/imx8mp-clock.h
+@@ -321,8 +321,16 @@
+ #define IMX8MP_CLK_AUDIO_AXI			310
+ #define IMX8MP_CLK_HSIO_AXI			311
+ #define IMX8MP_CLK_MEDIA_ISP			312
++#define IMX8MP_CLK_MEDIA_DISP2_PIX		313
++#define IMX8MP_CLK_CLKOUT1_SEL			314
++#define IMX8MP_CLK_CLKOUT1_DIV			315
++#define IMX8MP_CLK_CLKOUT1			316
++#define IMX8MP_CLK_CLKOUT2_SEL			317
++#define IMX8MP_CLK_CLKOUT2_DIV			318
++#define IMX8MP_CLK_CLKOUT2			319
++#define IMX8MP_CLK_USB_SUSP			320
+ 
+-#define IMX8MP_CLK_END				313
++#define IMX8MP_CLK_END				321
+ 
+ #define IMX8MP_CLK_AUDIOMIX_SAI1_IPG		0
+ #define IMX8MP_CLK_AUDIOMIX_SAI1_MCLK1		1
+diff --git a/include/linux/tpm_eventlog.h b/include/linux/tpm_eventlog.h
+index 20c0ff54b7a0d..7d68a5cc58816 100644
+--- a/include/linux/tpm_eventlog.h
++++ b/include/linux/tpm_eventlog.h
+@@ -198,8 +198,8 @@ static __always_inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *ev
+ 	 * The loop below will unmap these fields if the log is larger than
+ 	 * one page, so save them here for reference:
+ 	 */
+-	count = READ_ONCE(event->count);
+-	event_type = READ_ONCE(event->event_type);
++	count = event->count;
++	event_type = event->event_type;
+ 
+ 	/* Verify that it's the log header */
+ 	if (event_header->pcr_idx != 0 ||
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index 6031fb319d878..87bc38b471037 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -1217,6 +1217,12 @@ static void io_wq_cancel_tw_create(struct io_wq *wq)
+ 
+ 		worker = container_of(cb, struct io_worker, create_work);
+ 		io_worker_cancel_cb(worker);
++		/*
++		 * Only the worker continuation helper has worker allocated and
++		 * hence needs freeing.
++		 */
++		if (cb->func == create_worker_cont)
++			kfree(worker);
+ 	}
+ }
+ 
+diff --git a/mm/memblock.c b/mm/memblock.c
+index f72d539570339..f6a4dffb9a888 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -1597,7 +1597,13 @@ void __init __memblock_free_late(phys_addr_t base, phys_addr_t size)
+ 	end = PFN_DOWN(base + size);
+ 
+ 	for (; cursor < end; cursor++) {
+-		memblock_free_pages(pfn_to_page(cursor), cursor, 0);
++		/*
++		 * Reserved pages are always initialized by the end of
++		 * memblock_free_all() (by memmap_init() and, if deferred
++		 * initialization is enabled, memmap_init_reserved_pages()), so
++		 * these pages can be released directly to the buddy allocator.
++		 */
++		__free_pages_core(pfn_to_page(cursor), 0);
+ 		totalram_pages_inc();
+ 	}
+ }
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 31eb54e92b3f9..110254f44a468 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -539,6 +539,7 @@ csum_copy_err:
+ static int rawv6_push_pending_frames(struct sock *sk, struct flowi6 *fl6,
+ 				     struct raw6_sock *rp)
+ {
++	struct ipv6_txoptions *opt;
+ 	struct sk_buff *skb;
+ 	int err = 0;
+ 	int offset;
+@@ -556,6 +557,9 @@ static int rawv6_push_pending_frames(struct sock *sk, struct flowi6 *fl6,
+ 
+ 	offset = rp->offset;
+ 	total_len = inet_sk(sk)->cork.base.length;
++	opt = inet6_sk(sk)->cork.opt;
++	total_len -= opt ? opt->opt_flen : 0;
++
+ 	if (offset >= total_len - 1) {
+ 		err = -EINVAL;
+ 		ip6_flush_pending_frames(sk);
+diff --git a/net/netfilter/ipset/ip_set_bitmap_ip.c b/net/netfilter/ipset/ip_set_bitmap_ip.c
+index a8ce04a4bb72a..e4fa00abde6a2 100644
+--- a/net/netfilter/ipset/ip_set_bitmap_ip.c
++++ b/net/netfilter/ipset/ip_set_bitmap_ip.c
+@@ -308,8 +308,8 @@ bitmap_ip_create(struct net *net, struct ip_set *set, struct nlattr *tb[],
+ 			return -IPSET_ERR_BITMAP_RANGE;
+ 
+ 		pr_debug("mask_bits %u, netmask %u\n", mask_bits, netmask);
+-		hosts = 2 << (32 - netmask - 1);
+-		elements = 2 << (netmask - mask_bits - 1);
++		hosts = 2U << (32 - netmask - 1);
++		elements = 2UL << (netmask - mask_bits - 1);
+ 	}
+ 	if (elements > IPSET_BITMAP_MAX_RANGE + 1)
+ 		return -IPSET_ERR_BITMAP_RANGE_SIZE;
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index 551e0d6cf63d4..74c220eeec1a8 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -62,7 +62,7 @@ nft_payload_copy_vlan(u32 *d, const struct sk_buff *skb, u8 offset, u8 len)
+ 			return false;
+ 
+ 		if (offset + len > VLAN_ETH_HLEN + vlan_hlen)
+-			ethlen -= offset + len - VLAN_ETH_HLEN + vlan_hlen;
++			ethlen -= offset + len - VLAN_ETH_HLEN - vlan_hlen;
+ 
+ 		memcpy(dst_u8, vlanh + offset - vlan_hlen, ethlen);
+ 
+diff --git a/net/sched/act_mpls.c b/net/sched/act_mpls.c
+index d1486ea496a2c..09799412b2489 100644
+--- a/net/sched/act_mpls.c
++++ b/net/sched/act_mpls.c
+@@ -133,6 +133,11 @@ static int valid_label(const struct nlattr *attr,
+ {
+ 	const u32 *label = nla_data(attr);
+ 
++	if (nla_len(attr) != sizeof(*label)) {
++		NL_SET_ERR_MSG_MOD(extack, "Invalid MPLS label length");
++		return -EINVAL;
++	}
++
+ 	if (*label & ~MPLS_LABEL_MASK || *label == MPLS_LABEL_IMPLNULL) {
+ 		NL_SET_ERR_MSG_MOD(extack, "MPLS label out of range");
+ 		return -EINVAL;
+@@ -144,7 +149,8 @@ static int valid_label(const struct nlattr *attr,
+ static const struct nla_policy mpls_policy[TCA_MPLS_MAX + 1] = {
+ 	[TCA_MPLS_PARMS]	= NLA_POLICY_EXACT_LEN(sizeof(struct tc_mpls)),
+ 	[TCA_MPLS_PROTO]	= { .type = NLA_U16 },
+-	[TCA_MPLS_LABEL]	= NLA_POLICY_VALIDATE_FN(NLA_U32, valid_label),
++	[TCA_MPLS_LABEL]	= NLA_POLICY_VALIDATE_FN(NLA_BINARY,
++							 valid_label),
+ 	[TCA_MPLS_TC]		= NLA_POLICY_RANGE(NLA_U8, 0, 7),
+ 	[TCA_MPLS_TTL]		= NLA_POLICY_MIN(NLA_U8, 1),
+ 	[TCA_MPLS_BOS]		= NLA_POLICY_RANGE(NLA_U8, 0, 1),
+diff --git a/net/tipc/node.c b/net/tipc/node.c
+index 7589f2ac6fd04..38f61dccb8552 100644
+--- a/net/tipc/node.c
++++ b/net/tipc/node.c
+@@ -1152,8 +1152,9 @@ void tipc_node_check_dest(struct net *net, u32 addr,
+ 	bool addr_match = false;
+ 	bool sign_match = false;
+ 	bool link_up = false;
++	bool link_is_reset = false;
+ 	bool accept_addr = false;
+-	bool reset = true;
++	bool reset = false;
+ 	char *if_name;
+ 	unsigned long intv;
+ 	u16 session;
+@@ -1173,14 +1174,14 @@ void tipc_node_check_dest(struct net *net, u32 addr,
+ 	/* Prepare to validate requesting node's signature and media address */
+ 	l = le->link;
+ 	link_up = l && tipc_link_is_up(l);
++	link_is_reset = l && tipc_link_is_reset(l);
+ 	addr_match = l && !memcmp(&le->maddr, maddr, sizeof(*maddr));
+ 	sign_match = (signature == n->signature);
+ 
+ 	/* These three flags give us eight permutations: */
+ 
+ 	if (sign_match && addr_match && link_up) {
+-		/* All is fine. Do nothing. */
+-		reset = false;
++		/* All is fine. Ignore requests. */
+ 		/* Peer node is not a container/local namespace */
+ 		if (!n->peer_hash_mix)
+ 			n->peer_hash_mix = hash_mixes;
+@@ -1205,6 +1206,7 @@ void tipc_node_check_dest(struct net *net, u32 addr,
+ 		 */
+ 		accept_addr = true;
+ 		*respond = true;
++		reset = true;
+ 	} else if (!sign_match && addr_match && link_up) {
+ 		/* Peer node rebooted. Two possibilities:
+ 		 *  - Delayed re-discovery; this link endpoint has already
+@@ -1236,6 +1238,7 @@ void tipc_node_check_dest(struct net *net, u32 addr,
+ 		n->signature = signature;
+ 		accept_addr = true;
+ 		*respond = true;
++		reset = true;
+ 	}
+ 
+ 	if (!accept_addr)
+@@ -1264,6 +1267,7 @@ void tipc_node_check_dest(struct net *net, u32 addr,
+ 		tipc_link_fsm_evt(l, LINK_RESET_EVT);
+ 		if (n->state == NODE_FAILINGOVER)
+ 			tipc_link_fsm_evt(l, LINK_FAILOVER_BEGIN_EVT);
++		link_is_reset = tipc_link_is_reset(l);
+ 		le->link = l;
+ 		n->link_cnt++;
+ 		tipc_node_calculate_timer(n, l);
+@@ -1276,7 +1280,7 @@ void tipc_node_check_dest(struct net *net, u32 addr,
+ 	memcpy(&le->maddr, maddr, sizeof(*maddr));
+ exit:
+ 	tipc_node_write_unlock(n);
+-	if (reset && l && !tipc_link_is_reset(l))
++	if (reset && !link_is_reset)
+ 		tipc_node_link_down(n, b->identity, false);
+ 	tipc_node_put(n);
+ }
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index d9841f44487f2..c6bf3898d1bf0 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -1920,6 +1920,7 @@ static int xfrm_notify_userpolicy(struct net *net)
+ 	int len = NLMSG_ALIGN(sizeof(*up));
+ 	struct nlmsghdr *nlh;
+ 	struct sk_buff *skb;
++	int err;
+ 
+ 	skb = nlmsg_new(len, GFP_ATOMIC);
+ 	if (skb == NULL)
+@@ -1938,7 +1939,11 @@ static int xfrm_notify_userpolicy(struct net *net)
+ 
+ 	nlmsg_end(skb, nlh);
+ 
+-	return xfrm_nlmsg_multicast(net, skb, 0, XFRMNLGRP_POLICY);
++	rcu_read_lock();
++	err = xfrm_nlmsg_multicast(net, skb, 0, XFRMNLGRP_POLICY);
++	rcu_read_unlock();
++
++	return err;
+ }
+ 
+ static bool xfrm_userpolicy_is_valid(__u8 policy)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index b2c9cdfb83e62..eb7dd457ef5a5 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4581,6 +4581,16 @@ static void alc285_fixup_hp_coef_micmute_led(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc285_fixup_hp_gpio_micmute_led(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE)
++		spec->micmute_led_polarity = 1;
++	alc_fixup_hp_gpio_led(codec, action, 0, 0x04);
++}
++
+ static void alc236_fixup_hp_coef_micmute_led(struct hda_codec *codec,
+ 				const struct hda_fixup *fix, int action)
+ {
+@@ -4602,6 +4612,13 @@ static void alc285_fixup_hp_mute_led(struct hda_codec *codec,
+ 	alc285_fixup_hp_coef_micmute_led(codec, fix, action);
+ }
+ 
++static void alc285_fixup_hp_spectre_x360_mute_led(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	alc285_fixup_hp_mute_led_coefbit(codec, fix, action);
++	alc285_fixup_hp_gpio_micmute_led(codec, fix, action);
++}
++
+ static void alc236_fixup_hp_mute_led(struct hda_codec *codec,
+ 				const struct hda_fixup *fix, int action)
+ {
+@@ -6856,6 +6873,7 @@ enum {
+ 	ALC285_FIXUP_ASUS_G533Z_PINS,
+ 	ALC285_FIXUP_HP_GPIO_LED,
+ 	ALC285_FIXUP_HP_MUTE_LED,
++	ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED,
+ 	ALC236_FIXUP_HP_GPIO_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
+@@ -8224,6 +8242,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_hp_mute_led,
+ 	},
++	[ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_hp_spectre_x360_mute_led,
++	},
+ 	[ALC236_FIXUP_HP_GPIO_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc236_fixup_hp_gpio_led,
+@@ -8936,6 +8958,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
+ 	SND_PCI_QUIRK(0x103c, 0x86e7, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ 	SND_PCI_QUIRK(0x103c, 0x86e8, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
++	SND_PCI_QUIRK(0x103c, 0x86f9, "HP Spectre x360 13-aw0xxx", ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8720, "HP EliteBook x360 1040 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
+diff --git a/sound/soc/codecs/wm8904.c b/sound/soc/codecs/wm8904.c
+index 1c360bae5652c..cc96c9bdff41f 100644
+--- a/sound/soc/codecs/wm8904.c
++++ b/sound/soc/codecs/wm8904.c
+@@ -697,6 +697,7 @@ static int out_pga_event(struct snd_soc_dapm_widget *w,
+ 	int dcs_mask;
+ 	int dcs_l, dcs_r;
+ 	int dcs_l_reg, dcs_r_reg;
++	int an_out_reg;
+ 	int timeout;
+ 	int pwr_reg;
+ 
+@@ -712,6 +713,7 @@ static int out_pga_event(struct snd_soc_dapm_widget *w,
+ 		dcs_mask = WM8904_DCS_ENA_CHAN_0 | WM8904_DCS_ENA_CHAN_1;
+ 		dcs_r_reg = WM8904_DC_SERVO_8;
+ 		dcs_l_reg = WM8904_DC_SERVO_9;
++		an_out_reg = WM8904_ANALOGUE_OUT1_LEFT;
+ 		dcs_l = 0;
+ 		dcs_r = 1;
+ 		break;
+@@ -720,6 +722,7 @@ static int out_pga_event(struct snd_soc_dapm_widget *w,
+ 		dcs_mask = WM8904_DCS_ENA_CHAN_2 | WM8904_DCS_ENA_CHAN_3;
+ 		dcs_r_reg = WM8904_DC_SERVO_6;
+ 		dcs_l_reg = WM8904_DC_SERVO_7;
++		an_out_reg = WM8904_ANALOGUE_OUT2_LEFT;
+ 		dcs_l = 2;
+ 		dcs_r = 3;
+ 		break;
+@@ -792,6 +795,10 @@ static int out_pga_event(struct snd_soc_dapm_widget *w,
+ 		snd_soc_component_update_bits(component, reg,
+ 				    WM8904_HPL_ENA_OUTP | WM8904_HPR_ENA_OUTP,
+ 				    WM8904_HPL_ENA_OUTP | WM8904_HPR_ENA_OUTP);
++
++		/* Update volume, requires PGA to be powered */
++		val = snd_soc_component_read(component, an_out_reg);
++		snd_soc_component_write(component, an_out_reg, val);
+ 		break;
+ 
+ 	case SND_SOC_DAPM_POST_PMU:
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index ecd6c049ace24..9e70c193d7f41 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -816,10 +816,11 @@ static void of_lpass_cpu_parse_dai_data(struct device *dev,
+ 					struct lpass_data *data)
+ {
+ 	struct device_node *node;
+-	int ret, id;
++	int ret, i, id;
+ 
+ 	/* Allow all channels by default for backwards compatibility */
+-	for (id = 0; id < data->variant->num_dai; id++) {
++	for (i = 0; i < data->variant->num_dai; i++) {
++		id = data->variant->dai_driver[i].id;
+ 		data->mi2s_playback_sd_mode[id] = LPAIF_I2SCTL_MODE_8CH;
+ 		data->mi2s_capture_sd_mode[id] = LPAIF_I2SCTL_MODE_8CH;
+ 	}
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index d96e86ddd2c53..18452f12510c0 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -2449,7 +2449,7 @@ static int find_dso_sym(struct dso *dso, const char *sym_name, u64 *start,
+ 				*size = sym->start - *start;
+ 			if (idx > 0) {
+ 				if (*size)
+-					return 1;
++					return 0;
+ 			} else if (dso_sym_match(sym, sym_name, &cnt, idx)) {
+ 				print_duplicate_syms(dso, sym_name);
+ 				return -EINVAL;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-01-24  7:13 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2023-01-24  7:13 UTC (permalink / raw
  To: gentoo-commits

commit:     c894b4c76db6e63534f73567b51afb8dcb07ab77
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 24 07:11:13 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Tue Jan 24 07:11:49 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c894b4c7

Linux patch 5.10.165

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |    4 +
 1164_linux-5.10.165.patch | 2890 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2894 insertions(+)

diff --git a/0000_README b/0000_README
index eb86feb6..461b650e 100644
--- a/0000_README
+++ b/0000_README
@@ -699,6 +699,10 @@ Patch:  1163_linux-5.10.164.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.164
 
+Patch:  1164_linux-5.10.165.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.165
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1164_linux-5.10.165.patch b/1164_linux-5.10.165.patch
new file mode 100644
index 00000000..ce9c57f1
--- /dev/null
+++ b/1164_linux-5.10.165.patch
@@ -0,0 +1,2890 @@
+diff --git a/Documentation/devicetree/bindings/phy/amlogic,g12a-usb2-phy.yaml b/Documentation/devicetree/bindings/phy/amlogic,g12a-usb2-phy.yaml
+new file mode 100644
+index 0000000000000..ff86c87309a41
+--- /dev/null
++++ b/Documentation/devicetree/bindings/phy/amlogic,g12a-usb2-phy.yaml
+@@ -0,0 +1,78 @@
++# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
++# Copyright 2019 BayLibre, SAS
++%YAML 1.2
++---
++$id: "http://devicetree.org/schemas/phy/amlogic,g12a-usb2-phy.yaml#"
++$schema: "http://devicetree.org/meta-schemas/core.yaml#"
++
++title: Amlogic G12A USB2 PHY
++
++maintainers:
++  - Neil Armstrong <narmstrong@baylibre.com>
++
++properties:
++  compatible:
++    enum:
++      - amlogic,g12a-usb2-phy
++      - amlogic,a1-usb2-phy
++
++  reg:
++    maxItems: 1
++
++  clocks:
++    maxItems: 1
++
++  clock-names:
++    items:
++      - const: xtal
++
++  resets:
++    maxItems: 1
++
++  reset-names:
++    items:
++      - const: phy
++
++  "#phy-cells":
++    const: 0
++
++  phy-supply:
++    description:
++      Phandle to a regulator that provides power to the PHY. This
++      regulator will be managed during the PHY power on/off sequence.
++
++required:
++  - compatible
++  - reg
++  - clocks
++  - clock-names
++  - resets
++  - reset-names
++  - "#phy-cells"
++
++if:
++  properties:
++    compatible:
++      enum:
++        - amlogic,meson-a1-usb-ctrl
++
++then:
++  properties:
++    power-domains:
++      maxItems: 1
++  required:
++    - power-domains
++
++additionalProperties: false
++
++examples:
++  - |
++    phy@36000 {
++          compatible = "amlogic,g12a-usb2-phy";
++          reg = <0x36000 0x2000>;
++          clocks = <&xtal>;
++          clock-names = "xtal";
++          resets = <&phy_reset>;
++          reset-names = "phy";
++          #phy-cells = <0>;
++    };
+diff --git a/Documentation/devicetree/bindings/phy/amlogic,g12a-usb3-pcie-phy.yaml b/Documentation/devicetree/bindings/phy/amlogic,g12a-usb3-pcie-phy.yaml
+new file mode 100644
+index 0000000000000..84738644e3989
+--- /dev/null
++++ b/Documentation/devicetree/bindings/phy/amlogic,g12a-usb3-pcie-phy.yaml
+@@ -0,0 +1,59 @@
++# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
++# Copyright 2019 BayLibre, SAS
++%YAML 1.2
++---
++$id: "http://devicetree.org/schemas/phy/amlogic,g12a-usb3-pcie-phy.yaml#"
++$schema: "http://devicetree.org/meta-schemas/core.yaml#"
++
++title: Amlogic G12A USB3 + PCIE Combo PHY
++
++maintainers:
++  - Neil Armstrong <narmstrong@baylibre.com>
++
++properties:
++  compatible:
++    enum:
++      - amlogic,g12a-usb3-pcie-phy
++
++  reg:
++    maxItems: 1
++
++  clocks:
++    maxItems: 1
++
++  clock-names:
++    items:
++      - const: ref_clk
++
++  resets:
++    maxItems: 1
++
++  reset-names:
++    items:
++      - const: phy
++
++  "#phy-cells":
++    const: 1
++
++required:
++  - compatible
++  - reg
++  - clocks
++  - clock-names
++  - resets
++  - reset-names
++  - "#phy-cells"
++
++additionalProperties: false
++
++examples:
++  - |
++    phy@46000 {
++          compatible = "amlogic,g12a-usb3-pcie-phy";
++          reg = <0x46000 0x2000>;
++          clocks = <&ref_clk>;
++          clock-names = "ref_clk";
++          resets = <&phy_reset>;
++          reset-names = "phy";
++          #phy-cells = <1>;
++    };
+diff --git a/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb2-phy.yaml b/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb2-phy.yaml
+deleted file mode 100644
+index 399ebde454095..0000000000000
+--- a/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb2-phy.yaml
++++ /dev/null
+@@ -1,78 +0,0 @@
+-# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+-# Copyright 2019 BayLibre, SAS
+-%YAML 1.2
+----
+-$id: "http://devicetree.org/schemas/phy/amlogic,meson-g12a-usb2-phy.yaml#"
+-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+-
+-title: Amlogic G12A USB2 PHY
+-
+-maintainers:
+-  - Neil Armstrong <narmstrong@baylibre.com>
+-
+-properties:
+-  compatible:
+-    enum:
+-      - amlogic,meson-g12a-usb2-phy
+-      - amlogic,meson-a1-usb2-phy
+-
+-  reg:
+-    maxItems: 1
+-
+-  clocks:
+-    maxItems: 1
+-
+-  clock-names:
+-    items:
+-      - const: xtal
+-
+-  resets:
+-    maxItems: 1
+-
+-  reset-names:
+-    items:
+-      - const: phy
+-
+-  "#phy-cells":
+-    const: 0
+-
+-  phy-supply:
+-    description:
+-      Phandle to a regulator that provides power to the PHY. This
+-      regulator will be managed during the PHY power on/off sequence.
+-
+-required:
+-  - compatible
+-  - reg
+-  - clocks
+-  - clock-names
+-  - resets
+-  - reset-names
+-  - "#phy-cells"
+-
+-if:
+-  properties:
+-    compatible:
+-      enum:
+-        - amlogic,meson-a1-usb-ctrl
+-
+-then:
+-  properties:
+-    power-domains:
+-      maxItems: 1
+-  required:
+-    - power-domains
+-
+-additionalProperties: false
+-
+-examples:
+-  - |
+-    phy@36000 {
+-          compatible = "amlogic,meson-g12a-usb2-phy";
+-          reg = <0x36000 0x2000>;
+-          clocks = <&xtal>;
+-          clock-names = "xtal";
+-          resets = <&phy_reset>;
+-          reset-names = "phy";
+-          #phy-cells = <0>;
+-    };
+diff --git a/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb3-pcie-phy.yaml b/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb3-pcie-phy.yaml
+deleted file mode 100644
+index 453c083cf44cb..0000000000000
+--- a/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb3-pcie-phy.yaml
++++ /dev/null
+@@ -1,59 +0,0 @@
+-# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+-# Copyright 2019 BayLibre, SAS
+-%YAML 1.2
+----
+-$id: "http://devicetree.org/schemas/phy/amlogic,meson-g12a-usb3-pcie-phy.yaml#"
+-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+-
+-title: Amlogic G12A USB3 + PCIE Combo PHY
+-
+-maintainers:
+-  - Neil Armstrong <narmstrong@baylibre.com>
+-
+-properties:
+-  compatible:
+-    enum:
+-      - amlogic,meson-g12a-usb3-pcie-phy
+-
+-  reg:
+-    maxItems: 1
+-
+-  clocks:
+-    maxItems: 1
+-
+-  clock-names:
+-    items:
+-      - const: ref_clk
+-
+-  resets:
+-    maxItems: 1
+-
+-  reset-names:
+-    items:
+-      - const: phy
+-
+-  "#phy-cells":
+-    const: 1
+-
+-required:
+-  - compatible
+-  - reg
+-  - clocks
+-  - clock-names
+-  - resets
+-  - reset-names
+-  - "#phy-cells"
+-
+-additionalProperties: false
+-
+-examples:
+-  - |
+-    phy@46000 {
+-          compatible = "amlogic,meson-g12a-usb3-pcie-phy";
+-          reg = <0x46000 0x2000>;
+-          clocks = <&ref_clk>;
+-          clock-names = "ref_clk";
+-          resets = <&phy_reset>;
+-          reset-names = "phy";
+-          #phy-cells = <1>;
+-    };
+diff --git a/Makefile b/Makefile
+index 68fd49d8d4363..5fbff8603f443 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 164
++SUBLEVEL = 165
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
+index 973b144152711..16892f0d05ad6 100644
+--- a/arch/arm64/include/asm/efi.h
++++ b/arch/arm64/include/asm/efi.h
+@@ -25,6 +25,7 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
+ ({									\
+ 	efi_virtmap_load();						\
+ 	__efi_fpsimd_begin();						\
++	spin_lock(&efi_rt_lock);					\
+ })
+ 
+ #define arch_efi_call_virt(p, f, args...)				\
+@@ -36,10 +37,12 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
+ 
+ #define arch_efi_call_virt_teardown()					\
+ ({									\
++	spin_unlock(&efi_rt_lock);					\
+ 	__efi_fpsimd_end();						\
+ 	efi_virtmap_unload();						\
+ })
+ 
++extern spinlock_t efi_rt_lock;
+ efi_status_t __efi_rt_asm_wrapper(void *, const char *, ...);
+ 
+ #define ARCH_EFI_IRQ_FLAGS_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+diff --git a/arch/arm64/kernel/efi-rt-wrapper.S b/arch/arm64/kernel/efi-rt-wrapper.S
+index 75691a2641c1c..2d3c4b02393e4 100644
+--- a/arch/arm64/kernel/efi-rt-wrapper.S
++++ b/arch/arm64/kernel/efi-rt-wrapper.S
+@@ -4,6 +4,7 @@
+  */
+ 
+ #include <linux/linkage.h>
++#include <asm/assembler.h>
+ 
+ SYM_FUNC_START(__efi_rt_asm_wrapper)
+ 	stp	x29, x30, [sp, #-32]!
+@@ -16,6 +17,12 @@ SYM_FUNC_START(__efi_rt_asm_wrapper)
+ 	 */
+ 	stp	x1, x18, [sp, #16]
+ 
++	ldr_l	x16, efi_rt_stack_top
++	mov	sp, x16
++#ifdef CONFIG_SHADOW_CALL_STACK
++	str	x18, [sp, #-16]!
++#endif
++
+ 	/*
+ 	 * We are lucky enough that no EFI runtime services take more than
+ 	 * 5 arguments, so all are passed in registers rather than via the
+@@ -29,6 +36,7 @@ SYM_FUNC_START(__efi_rt_asm_wrapper)
+ 	mov	x4, x6
+ 	blr	x8
+ 
++	mov	sp, x29
+ 	ldp	x1, x2, [sp, #16]
+ 	cmp	x2, x18
+ 	ldp	x29, x30, [sp], #32
+@@ -42,6 +50,10 @@ SYM_FUNC_START(__efi_rt_asm_wrapper)
+ 	 * called with preemption disabled and a separate shadow stack is used
+ 	 * for interrupts.
+ 	 */
+-	mov	x18, x2
++#ifdef CONFIG_SHADOW_CALL_STACK
++	ldr_l	x18, efi_rt_stack_top
++	ldr	x18, [x18, #-16]
++#endif
++
+ 	b	efi_handle_corrupted_x18	// tail call
+ SYM_FUNC_END(__efi_rt_asm_wrapper)
+diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
+index c5685179db5af..72f432d23ec5c 100644
+--- a/arch/arm64/kernel/efi.c
++++ b/arch/arm64/kernel/efi.c
+@@ -143,3 +143,30 @@ asmlinkage efi_status_t efi_handle_corrupted_x18(efi_status_t s, const char *f)
+ 	pr_err_ratelimited(FW_BUG "register x18 corrupted by EFI %s\n", f);
+ 	return s;
+ }
++
++DEFINE_SPINLOCK(efi_rt_lock);
++
++asmlinkage u64 *efi_rt_stack_top __ro_after_init;
++
++/* EFI requires 8 KiB of stack space for runtime services */
++static_assert(THREAD_SIZE >= SZ_8K);
++
++static int __init arm64_efi_rt_init(void)
++{
++	void *p;
++
++	if (!efi_enabled(EFI_RUNTIME_SERVICES))
++		return 0;
++
++	p = __vmalloc_node(THREAD_SIZE, THREAD_ALIGN, GFP_KERNEL,
++			   NUMA_NO_NODE, &&l);
++l:	if (!p) {
++		pr_warn("Failed to allocate EFI runtime stack\n");
++		clear_bit(EFI_RUNTIME_SERVICES, &efi.flags);
++		return -ENOMEM;
++	}
++
++	efi_rt_stack_top = p + THREAD_SIZE;
++	return 0;
++}
++core_initcall(arm64_efi_rt_init);
+diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c
+index 701f196d7c686..3c0a621b97921 100644
+--- a/arch/x86/kernel/fpu/init.c
++++ b/arch/x86/kernel/fpu/init.c
+@@ -138,9 +138,6 @@ static void __init fpu__init_system_generic(void)
+ unsigned int fpu_kernel_xstate_size;
+ EXPORT_SYMBOL_GPL(fpu_kernel_xstate_size);
+ 
+-/* Get alignment of the TYPE. */
+-#define TYPE_ALIGN(TYPE) offsetof(struct { char x; TYPE test; }, test)
+-
+ /*
+  * Enforce that 'MEMBER' is the last field of 'TYPE'.
+  *
+@@ -148,8 +145,8 @@ EXPORT_SYMBOL_GPL(fpu_kernel_xstate_size);
+  * because that's how C aligns structs.
+  */
+ #define CHECK_MEMBER_AT_END_OF(TYPE, MEMBER) \
+-	BUILD_BUG_ON(sizeof(TYPE) != ALIGN(offsetofend(TYPE, MEMBER), \
+-					   TYPE_ALIGN(TYPE)))
++	BUILD_BUG_ON(sizeof(TYPE) !=         \
++		     ALIGN(offsetofend(TYPE, MEMBER), _Alignof(TYPE)))
+ 
+ /*
+  * We append the 'struct fpu' to the task_struct:
+diff --git a/arch/x86/lib/iomap_copy_64.S b/arch/x86/lib/iomap_copy_64.S
+index a1f9416bf67a5..6ff2f56cb0f71 100644
+--- a/arch/x86/lib/iomap_copy_64.S
++++ b/arch/x86/lib/iomap_copy_64.S
+@@ -10,6 +10,6 @@
+  */
+ SYM_FUNC_START(__iowrite32_copy)
+ 	movl %edx,%ecx
+-	rep movsd
++	rep movsl
+ 	RET
+ SYM_FUNC_END(__iowrite32_copy)
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 60b0e13bb9fc7..5347fc465ce89 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -50,6 +50,9 @@
+ #define IBS_HOST_TX_IDLE_TIMEOUT_MS	2000
+ #define CMD_TRANS_TIMEOUT_MS		100
+ #define MEMDUMP_TIMEOUT_MS		8000
++#define IBS_DISABLE_SSR_TIMEOUT_MS \
++	(MEMDUMP_TIMEOUT_MS + FW_DOWNLOAD_TIMEOUT_MS)
++#define FW_DOWNLOAD_TIMEOUT_MS		3000
+ 
+ /* susclk rate */
+ #define SUSCLK_RATE_32KHZ	32768
+@@ -68,12 +71,14 @@
+ #define QCA_MEMDUMP_BYTE		0xFB
+ 
+ enum qca_flags {
+-	QCA_IBS_ENABLED,
++	QCA_IBS_DISABLED,
+ 	QCA_DROP_VENDOR_EVENT,
+ 	QCA_SUSPENDING,
+ 	QCA_MEMDUMP_COLLECTION,
+ 	QCA_HW_ERROR_EVENT,
+-	QCA_SSR_TRIGGERED
++	QCA_SSR_TRIGGERED,
++	QCA_BT_OFF,
++	QCA_ROM_FW
+ };
+ 
+ enum qca_capabilities {
+@@ -870,7 +875,7 @@ static int qca_enqueue(struct hci_uart *hu, struct sk_buff *skb)
+ 	 * Out-Of-Band(GPIOs control) sleep is selected.
+ 	 * Don't wake the device up when suspending.
+ 	 */
+-	if (!test_bit(QCA_IBS_ENABLED, &qca->flags) ||
++	if (test_bit(QCA_IBS_DISABLED, &qca->flags) ||
+ 	    test_bit(QCA_SUSPENDING, &qca->flags)) {
+ 		skb_queue_tail(&qca->txq, skb);
+ 		spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);
+@@ -1015,7 +1020,7 @@ static void qca_controller_memdump(struct work_struct *work)
+ 			 * the controller to send the dump is 8 seconds. let us
+ 			 * start timer to handle this asynchronous activity.
+ 			 */
+-			clear_bit(QCA_IBS_ENABLED, &qca->flags);
++			set_bit(QCA_IBS_DISABLED, &qca->flags);
+ 			set_bit(QCA_MEMDUMP_COLLECTION, &qca->flags);
+ 			dump = (void *) skb->data;
+ 			dump_size = __le32_to_cpu(dump->dump_size);
+@@ -1621,6 +1626,7 @@ static int qca_power_on(struct hci_dev *hdev)
+ 	struct hci_uart *hu = hci_get_drvdata(hdev);
+ 	enum qca_btsoc_type soc_type = qca_soc_type(hu);
+ 	struct qca_serdev *qcadev;
++	struct qca_data *qca = hu->priv;
+ 	int ret = 0;
+ 
+ 	/* Non-serdev device usually is powered by external power
+@@ -1640,6 +1646,7 @@ static int qca_power_on(struct hci_dev *hdev)
+ 		}
+ 	}
+ 
++	clear_bit(QCA_BT_OFF, &qca->flags);
+ 	return ret;
+ }
+ 
+@@ -1658,8 +1665,9 @@ static int qca_setup(struct hci_uart *hu)
+ 	if (ret)
+ 		return ret;
+ 
++	clear_bit(QCA_ROM_FW, &qca->flags);
+ 	/* Patch downloading has to be done without IBS mode */
+-	clear_bit(QCA_IBS_ENABLED, &qca->flags);
++	set_bit(QCA_IBS_DISABLED, &qca->flags);
+ 
+ 	/* Enable controller to do both LE scan and BR/EDR inquiry
+ 	 * simultaneously.
+@@ -1710,18 +1718,20 @@ retry:
+ 	ret = qca_uart_setup(hdev, qca_baudrate, soc_type, soc_ver,
+ 			firmware_name);
+ 	if (!ret) {
+-		set_bit(QCA_IBS_ENABLED, &qca->flags);
++		clear_bit(QCA_IBS_DISABLED, &qca->flags);
+ 		qca_debugfs_init(hdev);
+ 		hu->hdev->hw_error = qca_hw_error;
+ 		hu->hdev->cmd_timeout = qca_cmd_timeout;
+ 	} else if (ret == -ENOENT) {
+ 		/* No patch/nvm-config found, run with original fw/config */
++		set_bit(QCA_ROM_FW, &qca->flags);
+ 		ret = 0;
+ 	} else if (ret == -EAGAIN) {
+ 		/*
+ 		 * Userspace firmware loader will return -EAGAIN in case no
+ 		 * patch/nvm-config is found, so run with original fw/config.
+ 		 */
++		set_bit(QCA_ROM_FW, &qca->flags);
+ 		ret = 0;
+ 	} else {
+ 		if (retries < MAX_INIT_RETRIES) {
+@@ -1814,7 +1824,7 @@ static void qca_power_shutdown(struct hci_uart *hu)
+ 	 * data in skb's.
+ 	 */
+ 	spin_lock_irqsave(&qca->hci_ibs_lock, flags);
+-	clear_bit(QCA_IBS_ENABLED, &qca->flags);
++	set_bit(QCA_IBS_DISABLED, &qca->flags);
+ 	qca_flush(hu);
+ 	spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);
+ 
+@@ -1833,6 +1843,8 @@ static void qca_power_shutdown(struct hci_uart *hu)
+ 	} else if (qcadev->bt_en) {
+ 		gpiod_set_value_cansleep(qcadev->bt_en, 0);
+ 	}
++
++	set_bit(QCA_BT_OFF, &qca->flags);
+ }
+ 
+ static int qca_power_off(struct hci_dev *hdev)
+@@ -2057,10 +2069,17 @@ static void qca_serdev_shutdown(struct device *dev)
+ 	int timeout = msecs_to_jiffies(CMD_TRANS_TIMEOUT_MS);
+ 	struct serdev_device *serdev = to_serdev_device(dev);
+ 	struct qca_serdev *qcadev = serdev_device_get_drvdata(serdev);
++	struct hci_uart *hu = &qcadev->serdev_hu;
++	struct hci_dev *hdev = hu->hdev;
++	struct qca_data *qca = hu->priv;
+ 	const u8 ibs_wake_cmd[] = { 0xFD };
+ 	const u8 edl_reset_soc_cmd[] = { 0x01, 0x00, 0xFC, 0x01, 0x05 };
+ 
+ 	if (qcadev->btsoc_type == QCA_QCA6390) {
++		if (test_bit(QCA_BT_OFF, &qca->flags) ||
++		    !test_bit(HCI_RUNNING, &hdev->flags))
++			return;
++
+ 		serdev_device_write_flush(serdev);
+ 		ret = serdev_device_write_buf(serdev, ibs_wake_cmd,
+ 					      sizeof(ibs_wake_cmd));
+@@ -2093,13 +2112,44 @@ static int __maybe_unused qca_suspend(struct device *dev)
+ 	bool tx_pending = false;
+ 	int ret = 0;
+ 	u8 cmd;
++	u32 wait_timeout = 0;
+ 
+ 	set_bit(QCA_SUSPENDING, &qca->flags);
+ 
+-	/* Device is downloading patch or doesn't support in-band sleep. */
+-	if (!test_bit(QCA_IBS_ENABLED, &qca->flags))
++	/* if BT SoC is running with default firmware then it does not
++	 * support in-band sleep
++	 */
++	if (test_bit(QCA_ROM_FW, &qca->flags))
++		return 0;
++
++	/* During SSR after memory dump collection, controller will be
++	 * powered off and then powered on.If controller is powered off
++	 * during SSR then we should wait until SSR is completed.
++	 */
++	if (test_bit(QCA_BT_OFF, &qca->flags) &&
++	    !test_bit(QCA_SSR_TRIGGERED, &qca->flags))
+ 		return 0;
+ 
++	if (test_bit(QCA_IBS_DISABLED, &qca->flags) ||
++	    test_bit(QCA_SSR_TRIGGERED, &qca->flags)) {
++		wait_timeout = test_bit(QCA_SSR_TRIGGERED, &qca->flags) ?
++					IBS_DISABLE_SSR_TIMEOUT_MS :
++					FW_DOWNLOAD_TIMEOUT_MS;
++
++		/* QCA_IBS_DISABLED flag is set to true, During FW download
++		 * and during memory dump collection. It is reset to false,
++		 * After FW download complete.
++		 */
++		wait_on_bit_timeout(&qca->flags, QCA_IBS_DISABLED,
++			    TASK_UNINTERRUPTIBLE, msecs_to_jiffies(wait_timeout));
++
++		if (test_bit(QCA_IBS_DISABLED, &qca->flags)) {
++			bt_dev_err(hu->hdev, "SSR or FW download time out");
++			ret = -ETIMEDOUT;
++			goto error;
++		}
++	}
++
+ 	cancel_work_sync(&qca->ws_awake_device);
+ 	cancel_work_sync(&qca->ws_awake_rx);
+ 
+diff --git a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+index 14c1ac26f8664..b8b5d91b7c1a2 100644
+--- a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
++++ b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+@@ -551,6 +551,11 @@ static noinline void axi_chan_handle_err(struct axi_dma_chan *chan, u32 status)
+ 
+ 	/* The bad descriptor currently is in the head of vc list */
+ 	vd = vchan_next_desc(&chan->vc);
++	if (!vd) {
++		dev_err(chan2dev(chan), "BUG: %s, IRQ with no descriptors\n",
++			axi_chan_name(chan));
++		goto out;
++	}
+ 	/* Remove the completed descriptor from issued list */
+ 	list_del(&vd->node);
+ 
+@@ -565,6 +570,7 @@ static noinline void axi_chan_handle_err(struct axi_dma_chan *chan, u32 status)
+ 	/* Try to restart the controller */
+ 	axi_chan_start_first_queued(chan);
+ 
++out:
+ 	spin_unlock_irqrestore(&chan->vc.lock, flags);
+ }
+ 
+diff --git a/drivers/dma/tegra210-adma.c b/drivers/dma/tegra210-adma.c
+index c5fa2ef74abc7..d84010c2e4bf1 100644
+--- a/drivers/dma/tegra210-adma.c
++++ b/drivers/dma/tegra210-adma.c
+@@ -224,7 +224,7 @@ static int tegra_adma_init(struct tegra_adma *tdma)
+ 	int ret;
+ 
+ 	/* Clear any interrupts */
+-	tdma_write(tdma, tdma->cdata->global_int_clear, 0x1);
++	tdma_write(tdma, tdma->cdata->ch_base_offset + tdma->cdata->global_int_clear, 0x1);
+ 
+ 	/* Assert soft reset */
+ 	tdma_write(tdma, ADMA_GLOBAL_SOFT_RESET, 0x1);
+diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c
+index f3e54f6616f02..60075e0e4943a 100644
+--- a/drivers/firmware/efi/runtime-wrappers.c
++++ b/drivers/firmware/efi/runtime-wrappers.c
+@@ -62,6 +62,7 @@ struct efi_runtime_work efi_rts_work;
+ 									\
+ 	if (!efi_enabled(EFI_RUNTIME_SERVICES)) {			\
+ 		pr_warn_once("EFI Runtime Services are disabled!\n");	\
++		efi_rts_work.status = EFI_DEVICE_ERROR;			\
+ 		goto exit;						\
+ 	}								\
+ 									\
+diff --git a/drivers/firmware/google/gsmi.c b/drivers/firmware/google/gsmi.c
+index c1cd5ca875caa..407cac71c77de 100644
+--- a/drivers/firmware/google/gsmi.c
++++ b/drivers/firmware/google/gsmi.c
+@@ -360,9 +360,10 @@ static efi_status_t gsmi_get_variable(efi_char16_t *name,
+ 		memcpy(data, gsmi_dev.data_buf->start, *data_size);
+ 
+ 		/* All variables are have the following attributes */
+-		*attr = EFI_VARIABLE_NON_VOLATILE |
+-			EFI_VARIABLE_BOOTSERVICE_ACCESS |
+-			EFI_VARIABLE_RUNTIME_ACCESS;
++		if (attr)
++			*attr = EFI_VARIABLE_NON_VOLATILE |
++				EFI_VARIABLE_BOOTSERVICE_ACCESS |
++				EFI_VARIABLE_RUNTIME_ACCESS;
+ 	}
+ 
+ 	spin_unlock_irqrestore(&gsmi_dev.lock, flags);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index b6ce64b87f48f..6937f81340084 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -1531,8 +1531,7 @@ u64 amdgpu_bo_gpu_offset_no_check(struct amdgpu_bo *bo)
+ uint32_t amdgpu_bo_get_preferred_pin_domain(struct amdgpu_device *adev,
+ 					    uint32_t domain)
+ {
+-	if ((domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) &&
+-	    ((adev->asic_type == CHIP_CARRIZO) || (adev->asic_type == CHIP_STONEY))) {
++	if (domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) {
+ 		domain = AMDGPU_GEM_DOMAIN_VRAM;
+ 		if (adev->gmc.real_vram_size <= AMDGPU_SG_THRESHOLD)
+ 			domain = AMDGPU_GEM_DOMAIN_GTT;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 167a1ee518a8f..fbe15f4b75fd5 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4567,8 +4567,6 @@ static void fill_stream_properties_from_drm_display_mode(
+ 	timing_out->pix_clk_100hz = mode_in->crtc_clock * 10;
+ 	timing_out->aspect_ratio = get_aspect_ratio(mode_in);
+ 
+-	stream->output_color_space = get_output_color_space(timing_out);
+-
+ 	stream->out_transfer_func->type = TF_TYPE_PREDEFINED;
+ 	stream->out_transfer_func->tf = TRANSFER_FUNCTION_SRGB;
+ 	if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A) {
+@@ -4579,6 +4577,8 @@ static void fill_stream_properties_from_drm_display_mode(
+ 			adjust_colour_depth_from_display_info(timing_out, info);
+ 		}
+ 	}
++
++	stream->output_color_space = get_output_color_space(timing_out);
+ }
+ 
+ static void fill_audio_info(struct audio_info *audio_info,
+@@ -8783,8 +8783,8 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 			goto fail;
+ 		}
+ 
+-		if (dm_old_con_state->abm_level !=
+-		    dm_new_con_state->abm_level)
++		if (dm_old_con_state->abm_level != dm_new_con_state->abm_level ||
++		    dm_old_con_state->scaling != dm_new_con_state->scaling)
+ 			new_crtc_state->connectors_changed = true;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+index 2a9080400bdde..86f3ea4edb319 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+@@ -92,8 +92,8 @@ static const struct out_csc_color_matrix_type output_csc_matrix[] = {
+ 		{ 0xE00, 0xF349, 0xFEB7, 0x1000, 0x6CE, 0x16E3,
+ 				0x24F, 0x200, 0xFCCB, 0xF535, 0xE00, 0x1000} },
+ 	{ COLOR_SPACE_YCBCR2020_TYPE,
+-		{ 0x1000, 0xF149, 0xFEB7, 0x0000, 0x0868, 0x15B2,
+-				0x01E6, 0x0000, 0xFB88, 0xF478, 0x1000, 0x0000} },
++		{ 0x1000, 0xF149, 0xFEB7, 0x1004, 0x0868, 0x15B2,
++				0x01E6, 0x201, 0xFB88, 0xF478, 0x1000, 0x1004} },
+ 	{ COLOR_SPACE_YCBCR709_BLACK_TYPE,
+ 		{ 0x0000, 0x0000, 0x0000, 0x1000, 0x0000, 0x0000,
+ 				0x0000, 0x0200, 0x0000, 0x0000, 0x0000, 0x1000} },
+diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
+index ac36b67fb46be..00b5912a88b82 100644
+--- a/drivers/gpu/drm/i915/gt/intel_reset.c
++++ b/drivers/gpu/drm/i915/gt/intel_reset.c
+@@ -289,6 +289,7 @@ out:
+ static int gen6_hw_domain_reset(struct intel_gt *gt, u32 hw_domain_mask)
+ {
+ 	struct intel_uncore *uncore = gt->uncore;
++	int loops = 2;
+ 	int err;
+ 
+ 	/*
+@@ -296,18 +297,39 @@ static int gen6_hw_domain_reset(struct intel_gt *gt, u32 hw_domain_mask)
+ 	 * for fifo space for the write or forcewake the chip for
+ 	 * the read
+ 	 */
+-	intel_uncore_write_fw(uncore, GEN6_GDRST, hw_domain_mask);
++	do {
++		intel_uncore_write_fw(uncore, GEN6_GDRST, hw_domain_mask);
+ 
+-	/* Wait for the device to ack the reset requests */
+-	err = __intel_wait_for_register_fw(uncore,
+-					   GEN6_GDRST, hw_domain_mask, 0,
+-					   500, 0,
+-					   NULL);
++		/*
++		 * Wait for the device to ack the reset requests.
++		 *
++		 * On some platforms, e.g. Jasperlake, we see that the
++		 * engine register state is not cleared until shortly after
++		 * GDRST reports completion, causing a failure as we try
++		 * to immediately resume while the internal state is still
++		 * in flux. If we immediately repeat the reset, the second
++		 * reset appears to serialise with the first, and since
++		 * it is a no-op, the registers should retain their reset
++		 * value. However, there is still a concern that upon
++		 * leaving the second reset, the internal engine state
++		 * is still in flux and not ready for resuming.
++		 */
++		err = __intel_wait_for_register_fw(uncore, GEN6_GDRST,
++						   hw_domain_mask, 0,
++						   2000, 0,
++						   NULL);
++	} while (err == 0 && --loops);
+ 	if (err)
+ 		drm_dbg(&gt->i915->drm,
+ 			"Wait for 0x%08x engines reset failed\n",
+ 			hw_domain_mask);
+ 
++	/*
++	 * As we have observed that the engine state is still volatile
++	 * after GDRST is acked, impose a small delay to let everything settle.
++	 */
++	udelay(50);
++
+ 	return err;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
+index fb5e30de78c2a..63dc0fcfcc8eb 100644
+--- a/drivers/gpu/drm/i915/i915_pci.c
++++ b/drivers/gpu/drm/i915/i915_pci.c
+@@ -403,7 +403,8 @@ static const struct intel_device_info ilk_m_info = {
+ 	.has_coherent_ggtt = true, \
+ 	.has_llc = 1, \
+ 	.has_rc6 = 1, \
+-	.has_rc6p = 1, \
++	/* snb does support rc6p, but enabling it causes various issues */ \
++	.has_rc6p = 0, \
+ 	.has_rps = true, \
+ 	.dma_mask_size = 40, \
+ 	.ppgtt_type = INTEL_PPGTT_ALIASING, \
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
+index 6818cac0a3b78..85bac20d9007d 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.h
++++ b/drivers/infiniband/ulp/srp/ib_srp.h
+@@ -62,9 +62,6 @@ enum {
+ 	SRP_DEFAULT_CMD_SQ_SIZE = SRP_DEFAULT_QUEUE_SIZE - SRP_RSP_SQ_SIZE -
+ 				  SRP_TSK_MGMT_SQ_SIZE,
+ 
+-	SRP_TAG_NO_REQ		= ~0U,
+-	SRP_TAG_TSK_MGMT	= 1U << 31,
+-
+ 	SRP_MAX_PAGES_PER_MR	= 512,
+ 
+ 	SRP_MAX_ADD_CDB_LEN	= 16,
+@@ -79,6 +76,11 @@ enum {
+ 				  sizeof(struct srp_imm_buf),
+ };
+ 
++enum {
++	SRP_TAG_NO_REQ		= ~0U,
++	SRP_TAG_TSK_MGMT	= BIT(31),
++};
++
+ enum srp_target_state {
+ 	SRP_TARGET_SCANNING,
+ 	SRP_TARGET_LIVE,
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 2c3142b4b5dd7..67a51f69cf9aa 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -247,6 +247,13 @@ static void fastrpc_free_map(struct kref *ref)
+ 		dma_buf_put(map->buf);
+ 	}
+ 
++	if (map->fl) {
++		spin_lock(&map->fl->lock);
++		list_del(&map->node);
++		spin_unlock(&map->fl->lock);
++		map->fl = NULL;
++	}
++
+ 	kfree(map);
+ }
+ 
+@@ -256,10 +263,12 @@ static void fastrpc_map_put(struct fastrpc_map *map)
+ 		kref_put(&map->refcount, fastrpc_free_map);
+ }
+ 
+-static void fastrpc_map_get(struct fastrpc_map *map)
++static int fastrpc_map_get(struct fastrpc_map *map)
+ {
+-	if (map)
+-		kref_get(&map->refcount);
++	if (!map)
++		return -ENOENT;
++
++	return kref_get_unless_zero(&map->refcount) ? 0 : -ENOENT;
+ }
+ 
+ static int fastrpc_map_find(struct fastrpc_user *fl, int fd,
+@@ -1112,12 +1121,7 @@ err_invoke:
+ 	fl->init_mem = NULL;
+ 	fastrpc_buf_free(imem);
+ err_alloc:
+-	if (map) {
+-		spin_lock(&fl->lock);
+-		list_del(&map->node);
+-		spin_unlock(&fl->lock);
+-		fastrpc_map_put(map);
+-	}
++	fastrpc_map_put(map);
+ err:
+ 	kfree(args);
+ 
+@@ -1194,10 +1198,8 @@ static int fastrpc_device_release(struct inode *inode, struct file *file)
+ 		fastrpc_context_put(ctx);
+ 	}
+ 
+-	list_for_each_entry_safe(map, m, &fl->maps, node) {
+-		list_del(&map->node);
++	list_for_each_entry_safe(map, m, &fl->maps, node)
+ 		fastrpc_map_put(map);
+-	}
+ 
+ 	list_for_each_entry_safe(buf, b, &fl->mmaps, node) {
+ 		list_del(&buf->node);
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index afb2e78df4d60..eabbdf17b0c6d 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -111,6 +111,8 @@
+ 
+ #define MEI_DEV_ID_RPL_S      0x7A68  /* Raptor Lake Point S */
+ 
++#define MEI_DEV_ID_MTL_M      0x7E70  /* Meteor Lake Point M */
++
+ /*
+  * MEI HW Section
+  */
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 5324b65d0d29a..f2765d6b8c043 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -117,6 +117,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_CFG)},
+ 
++	{MEI_PCI_DEVICE(MEI_DEV_ID_MTL_M, MEI_ME_PCH15_CFG)},
++
+ 	/* required last entry */
+ 	{0, }
+ };
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 9e827bfe19ff0..70f388f83485c 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -103,6 +103,7 @@
+ #define ESDHC_TUNING_START_TAP_DEFAULT	0x1
+ #define ESDHC_TUNING_START_TAP_MASK	0x7f
+ #define ESDHC_TUNING_CMD_CRC_CHECK_DISABLE	(1 << 7)
++#define ESDHC_TUNING_STEP_DEFAULT	0x1
+ #define ESDHC_TUNING_STEP_MASK		0x00070000
+ #define ESDHC_TUNING_STEP_SHIFT		16
+ 
+@@ -1300,7 +1301,7 @@ static void sdhci_esdhc_imx_hwinit(struct sdhci_host *host)
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host);
+ 	struct cqhci_host *cq_host = host->mmc->cqe_private;
+-	int tmp;
++	u32 tmp;
+ 
+ 	if (esdhc_is_usdhc(imx_data)) {
+ 		/*
+@@ -1353,17 +1354,24 @@ static void sdhci_esdhc_imx_hwinit(struct sdhci_host *host)
+ 
+ 		if (imx_data->socdata->flags & ESDHC_FLAG_STD_TUNING) {
+ 			tmp = readl(host->ioaddr + ESDHC_TUNING_CTRL);
+-			tmp |= ESDHC_STD_TUNING_EN |
+-				ESDHC_TUNING_START_TAP_DEFAULT;
+-			if (imx_data->boarddata.tuning_start_tap) {
+-				tmp &= ~ESDHC_TUNING_START_TAP_MASK;
++			tmp |= ESDHC_STD_TUNING_EN;
++
++			/*
++			 * ROM code or bootloader may config the start tap
++			 * and step, unmask them first.
++			 */
++			tmp &= ~(ESDHC_TUNING_START_TAP_MASK | ESDHC_TUNING_STEP_MASK);
++			if (imx_data->boarddata.tuning_start_tap)
+ 				tmp |= imx_data->boarddata.tuning_start_tap;
+-			}
++			else
++				tmp |= ESDHC_TUNING_START_TAP_DEFAULT;
+ 
+ 			if (imx_data->boarddata.tuning_step) {
+-				tmp &= ~ESDHC_TUNING_STEP_MASK;
+ 				tmp |= imx_data->boarddata.tuning_step
+ 					<< ESDHC_TUNING_STEP_SHIFT;
++			} else {
++				tmp |= ESDHC_TUNING_STEP_DEFAULT
++					<< ESDHC_TUNING_STEP_SHIFT;
+ 			}
+ 
+ 			/* Disable the CMD CRC check for tuning, if not, need to
+diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c
+index fc62773602ec8..9215069c61560 100644
+--- a/drivers/mmc/host/sunxi-mmc.c
++++ b/drivers/mmc/host/sunxi-mmc.c
+@@ -1459,9 +1459,11 @@ static int sunxi_mmc_remove(struct platform_device *pdev)
+ 	struct sunxi_mmc_host *host = mmc_priv(mmc);
+ 
+ 	mmc_remove_host(mmc);
+-	pm_runtime_force_suspend(&pdev->dev);
+-	disable_irq(host->irq);
+-	sunxi_mmc_disable(host);
++	pm_runtime_disable(&pdev->dev);
++	if (!pm_runtime_status_suspended(&pdev->dev)) {
++		disable_irq(host->irq);
++		sunxi_mmc_disable(host);
++	}
+ 	dma_free_coherent(&pdev->dev, PAGE_SIZE, host->sg_cpu, host->sg_dma);
+ 	mmc_free_host(mmc);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index b210545147368..f42e118f32901 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -621,6 +621,7 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work)
+ 	mutex_lock(&dev->intf_state_mutex);
+ 	if (test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags)) {
+ 		mlx5_core_err(dev, "health works are not permitted at this stage\n");
++		mutex_unlock(&dev->intf_state_mutex);
+ 		return;
+ 	}
+ 	mutex_unlock(&dev->intf_state_mutex);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index 6a5621f17bf58..721d587425c7a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -1109,7 +1109,7 @@ static int brcmf_pcie_init_ringbuffers(struct brcmf_pciedev_info *devinfo)
+ 				BRCMF_NROF_H2D_COMMON_MSGRINGS;
+ 		max_completionrings = BRCMF_NROF_D2H_COMMON_MSGRINGS;
+ 	}
+-	if (max_flowrings > 256) {
++	if (max_flowrings > 512) {
+ 		brcmf_err(bus, "invalid max_flowrings(%d)\n", max_flowrings);
+ 		return -EIO;
+ 	}
+diff --git a/drivers/soc/qcom/apr.c b/drivers/soc/qcom/apr.c
+index 7063e0d42c5ed..660ee3aea447a 100644
+--- a/drivers/soc/qcom/apr.c
++++ b/drivers/soc/qcom/apr.c
+@@ -319,9 +319,10 @@ static int apr_add_device(struct device *dev, struct device_node *np,
+ 		goto out;
+ 	}
+ 
++	/* Protection domain is optional, it does not exist on older platforms */
+ 	ret = of_property_read_string_index(np, "qcom,protection-domain",
+ 					    1, &adev->service_path);
+-	if (ret < 0) {
++	if (ret < 0 && ret != -EINVAL) {
+ 		dev_err(dev, "Failed to read second value of qcom,protection-domain\n");
+ 		goto out;
+ 	}
+diff --git a/drivers/staging/comedi/drivers/adv_pci1760.c b/drivers/staging/comedi/drivers/adv_pci1760.c
+index 6de8ab97d346c..d6934b6c436d1 100644
+--- a/drivers/staging/comedi/drivers/adv_pci1760.c
++++ b/drivers/staging/comedi/drivers/adv_pci1760.c
+@@ -59,7 +59,7 @@
+ #define PCI1760_CMD_CLR_IMB2		0x00	/* Clears IMB2 */
+ #define PCI1760_CMD_SET_DO		0x01	/* Set output state */
+ #define PCI1760_CMD_GET_DO		0x02	/* Read output status */
+-#define PCI1760_CMD_GET_STATUS		0x03	/* Read current status */
++#define PCI1760_CMD_GET_STATUS		0x07	/* Read current status */
+ #define PCI1760_CMD_GET_FW_VER		0x0e	/* Read firmware version */
+ #define PCI1760_CMD_GET_HW_VER		0x0f	/* Read hardware version */
+ #define PCI1760_CMD_SET_PWM_HI(x)	(0x10 + (x) * 2) /* Set "hi" period */
+diff --git a/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h b/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h
+index fefc664eefcf0..f5e1ae5f5ee27 100644
+--- a/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h
++++ b/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h
+@@ -82,7 +82,7 @@ struct vchiq_service_params_kernel {
+ 
+ struct vchiq_instance;
+ 
+-extern enum vchiq_status vchiq_initialise(struct vchiq_instance **pinstance);
++extern int vchiq_initialise(struct vchiq_instance **pinstance);
+ extern enum vchiq_status vchiq_shutdown(struct vchiq_instance *instance);
+ extern enum vchiq_status vchiq_connect(struct vchiq_instance *instance);
+ extern enum vchiq_status vchiq_open_service(struct vchiq_instance *instance,
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h
+index 0784c5002417d..77d8fb1801739 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h
+@@ -89,10 +89,10 @@ extern struct vchiq_arm_state*
+ vchiq_platform_get_arm_state(struct vchiq_state *state);
+ 
+ 
+-extern enum vchiq_status
++extern int
+ vchiq_use_internal(struct vchiq_state *state, struct vchiq_service *service,
+ 		   enum USE_TYPE_E use_type);
+-extern enum vchiq_status
++extern int
+ vchiq_release_internal(struct vchiq_state *state,
+ 		       struct vchiq_service *service);
+ 
+diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
+index 829b6ccdd5d4f..011ab5fed85b7 100644
+--- a/drivers/thunderbolt/tunnel.c
++++ b/drivers/thunderbolt/tunnel.c
+@@ -956,7 +956,7 @@ static void tb_usb3_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
+ 		return;
+ 	} else if (!ret) {
+ 		/* Use maximum link rate if the link valid is not set */
+-		ret = usb4_usb3_port_max_link_rate(tunnel->src_port);
++		ret = tb_usb3_max_link_rate(tunnel->dst_port, tunnel->src_port);
+ 		if (ret < 0) {
+ 			tb_tunnel_warn(tunnel, "failed to read maximum link rate\n");
+ 			return;
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index b7872ad3e7622..02fd0e79c8f70 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -2633,13 +2633,7 @@ static void __init atmel_console_get_options(struct uart_port *port, int *baud,
+ 	else if (mr == ATMEL_US_PAR_ODD)
+ 		*parity = 'o';
+ 
+-	/*
+-	 * The serial core only rounds down when matching this to a
+-	 * supported baud rate. Make sure we don't end up slightly
+-	 * lower than one of those, as it would make us fall through
+-	 * to a much lower baud rate than we really want.
+-	 */
+-	*baud = port->uartclk / (16 * (quot - 1));
++	*baud = port->uartclk / (16 * quot);
+ }
+ 
+ static int __init atmel_console_setup(struct console *co, char *options)
+diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c
+index fa2061f1cf3d3..cb68a1028090a 100644
+--- a/drivers/tty/serial/pch_uart.c
++++ b/drivers/tty/serial/pch_uart.c
+@@ -769,7 +769,7 @@ static void pch_dma_tx_complete(void *arg)
+ 	}
+ 	xmit->tail &= UART_XMIT_SIZE - 1;
+ 	async_tx_ack(priv->desc_tx);
+-	dma_unmap_sg(port->dev, sg, priv->orig_nent, DMA_TO_DEVICE);
++	dma_unmap_sg(port->dev, priv->sg_tx_p, priv->orig_nent, DMA_TO_DEVICE);
+ 	priv->tx_dma_use = 0;
+ 	priv->nent = 0;
+ 	priv->orig_nent = 0;
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 0d85b55ea8233..f50ffc8076d8b 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -866,9 +866,10 @@ out_unlock:
+ 	return IRQ_HANDLED;
+ }
+ 
+-static void get_tx_fifo_size(struct qcom_geni_serial_port *port)
++static int setup_fifos(struct qcom_geni_serial_port *port)
+ {
+ 	struct uart_port *uport;
++	u32 old_rx_fifo_depth = port->rx_fifo_depth;
+ 
+ 	uport = &port->uport;
+ 	port->tx_fifo_depth = geni_se_get_tx_fifo_depth(&port->se);
+@@ -876,6 +877,16 @@ static void get_tx_fifo_size(struct qcom_geni_serial_port *port)
+ 	port->rx_fifo_depth = geni_se_get_rx_fifo_depth(&port->se);
+ 	uport->fifosize =
+ 		(port->tx_fifo_depth * port->tx_fifo_width) / BITS_PER_BYTE;
++
++	if (port->rx_fifo && (old_rx_fifo_depth != port->rx_fifo_depth) && port->rx_fifo_depth) {
++		port->rx_fifo = devm_krealloc(uport->dev, port->rx_fifo,
++					      port->rx_fifo_depth * sizeof(u32),
++					      GFP_KERNEL);
++		if (!port->rx_fifo)
++			return -ENOMEM;
++	}
++
++	return 0;
+ }
+ 
+ 
+@@ -890,6 +901,7 @@ static int qcom_geni_serial_port_setup(struct uart_port *uport)
+ 	u32 rxstale = DEFAULT_BITS_PER_CHAR * STALE_TIMEOUT;
+ 	u32 proto;
+ 	u32 pin_swap;
++	int ret;
+ 
+ 	proto = geni_se_read_proto(&port->se);
+ 	if (proto != GENI_SE_UART) {
+@@ -899,7 +911,9 @@ static int qcom_geni_serial_port_setup(struct uart_port *uport)
+ 
+ 	qcom_geni_serial_stop_rx(uport);
+ 
+-	get_tx_fifo_size(port);
++	ret = setup_fifos(port);
++	if (ret)
++		return ret;
+ 
+ 	writel(rxstale, uport->membase + SE_UART_RX_STALE_CNT);
+ 
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index f2a3c0b5b535d..5925b8eb9ee38 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -42,6 +42,9 @@
+ #define USB_PRODUCT_USB5534B			0x5534
+ #define USB_VENDOR_CYPRESS			0x04b4
+ #define USB_PRODUCT_CY7C65632			0x6570
++#define USB_VENDOR_TEXAS_INSTRUMENTS		0x0451
++#define USB_PRODUCT_TUSB8041_USB3		0x8140
++#define USB_PRODUCT_TUSB8041_USB2		0x8142
+ #define HUB_QUIRK_CHECK_PORT_AUTOSUSPEND	0x01
+ #define HUB_QUIRK_DISABLE_AUTOSUSPEND		0x02
+ 
+@@ -5715,6 +5718,16 @@ static const struct usb_device_id hub_id_table[] = {
+       .idVendor = USB_VENDOR_GENESYS_LOGIC,
+       .bInterfaceClass = USB_CLASS_HUB,
+       .driver_info = HUB_QUIRK_CHECK_PORT_AUTOSUSPEND},
++    { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
++			| USB_DEVICE_ID_MATCH_PRODUCT,
++      .idVendor = USB_VENDOR_TEXAS_INSTRUMENTS,
++      .idProduct = USB_PRODUCT_TUSB8041_USB2,
++      .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND},
++    { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
++			| USB_DEVICE_ID_MATCH_PRODUCT,
++      .idVendor = USB_VENDOR_TEXAS_INSTRUMENTS,
++      .idProduct = USB_PRODUCT_TUSB8041_USB3,
++      .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND},
+     { .match_flags = USB_DEVICE_ID_MATCH_DEV_CLASS,
+       .bDeviceClass = USB_CLASS_HUB},
+     { .match_flags = USB_DEVICE_ID_MATCH_INT_CLASS,
+diff --git a/drivers/usb/core/usb-acpi.c b/drivers/usb/core/usb-acpi.c
+index 50b2fc7fcc0e3..8751276ef5789 100644
+--- a/drivers/usb/core/usb-acpi.c
++++ b/drivers/usb/core/usb-acpi.c
+@@ -37,6 +37,71 @@ bool usb_acpi_power_manageable(struct usb_device *hdev, int index)
+ }
+ EXPORT_SYMBOL_GPL(usb_acpi_power_manageable);
+ 
++#define UUID_USB_CONTROLLER_DSM "ce2ee385-00e6-48cb-9f05-2edb927c4899"
++#define USB_DSM_DISABLE_U1_U2_FOR_PORT	5
++
++/**
++ * usb_acpi_port_lpm_incapable - check if lpm should be disabled for a port.
++ * @hdev: USB device belonging to the usb hub
++ * @index: zero based port index
++ *
++ * Some USB3 ports may not support USB3 link power management U1/U2 states
++ * due to different retimer setup. ACPI provides _DSM method which returns 0x01
++ * if U1 and U2 states should be disabled. Evaluate _DSM with:
++ * Arg0: UUID = ce2ee385-00e6-48cb-9f05-2edb927c4899
++ * Arg1: Revision ID = 0
++ * Arg2: Function Index = 5
++ * Arg3: (empty)
++ *
++ * Return 1 if USB3 port is LPM incapable, negative on error, otherwise 0
++ */
++
++int usb_acpi_port_lpm_incapable(struct usb_device *hdev, int index)
++{
++	union acpi_object *obj;
++	acpi_handle port_handle;
++	int port1 = index + 1;
++	guid_t guid;
++	int ret;
++
++	ret = guid_parse(UUID_USB_CONTROLLER_DSM, &guid);
++	if (ret)
++		return ret;
++
++	port_handle = usb_get_hub_port_acpi_handle(hdev, port1);
++	if (!port_handle) {
++		dev_dbg(&hdev->dev, "port-%d no acpi handle\n", port1);
++		return -ENODEV;
++	}
++
++	if (!acpi_check_dsm(port_handle, &guid, 0,
++			    BIT(USB_DSM_DISABLE_U1_U2_FOR_PORT))) {
++		dev_dbg(&hdev->dev, "port-%d no _DSM function %d\n",
++			port1, USB_DSM_DISABLE_U1_U2_FOR_PORT);
++		return -ENODEV;
++	}
++
++	obj = acpi_evaluate_dsm(port_handle, &guid, 0,
++				USB_DSM_DISABLE_U1_U2_FOR_PORT, NULL);
++
++	if (!obj)
++		return -ENODEV;
++
++	if (obj->type != ACPI_TYPE_INTEGER) {
++		dev_dbg(&hdev->dev, "evaluate port-%d _DSM failed\n", port1);
++		ACPI_FREE(obj);
++		return -EINVAL;
++	}
++
++	if (obj->integer.value == 0x01)
++		ret = 1;
++
++	ACPI_FREE(obj);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(usb_acpi_port_lpm_incapable);
++
+ /**
+  * usb_acpi_set_power_state - control usb port's power via acpi power
+  * resource
+diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
+index 855127249f242..f56147489835d 100644
+--- a/drivers/usb/gadget/function/f_ncm.c
++++ b/drivers/usb/gadget/function/f_ncm.c
+@@ -85,7 +85,9 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f)
+ /* peak (theoretical) bulk transfer rate in bits-per-second */
+ static inline unsigned ncm_bitrate(struct usb_gadget *g)
+ {
+-	if (gadget_is_superspeed(g) && g->speed >= USB_SPEED_SUPER_PLUS)
++	if (!g)
++		return 0;
++	else if (gadget_is_superspeed(g) && g->speed >= USB_SPEED_SUPER_PLUS)
+ 		return 4250000000U;
+ 	else if (gadget_is_superspeed(g) && g->speed == USB_SPEED_SUPER)
+ 		return 3750000000U;
+diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
+index cd097474b6c39..cbe8016409162 100644
+--- a/drivers/usb/gadget/legacy/inode.c
++++ b/drivers/usb/gadget/legacy/inode.c
+@@ -229,6 +229,7 @@ static void put_ep (struct ep_data *data)
+  */
+ 
+ static const char *CHIP;
++static DEFINE_MUTEX(sb_mutex);		/* Serialize superblock operations */
+ 
+ /*----------------------------------------------------------------------*/
+ 
+@@ -2012,13 +2013,20 @@ gadgetfs_fill_super (struct super_block *sb, struct fs_context *fc)
+ {
+ 	struct inode	*inode;
+ 	struct dev_data	*dev;
++	int		rc;
+ 
+-	if (the_device)
+-		return -ESRCH;
++	mutex_lock(&sb_mutex);
++
++	if (the_device) {
++		rc = -ESRCH;
++		goto Done;
++	}
+ 
+ 	CHIP = usb_get_gadget_udc_name();
+-	if (!CHIP)
+-		return -ENODEV;
++	if (!CHIP) {
++		rc = -ENODEV;
++		goto Done;
++	}
+ 
+ 	/* superblock */
+ 	sb->s_blocksize = PAGE_SIZE;
+@@ -2055,13 +2063,17 @@ gadgetfs_fill_super (struct super_block *sb, struct fs_context *fc)
+ 	 * from binding to a controller.
+ 	 */
+ 	the_device = dev;
+-	return 0;
++	rc = 0;
++	goto Done;
+ 
+-Enomem:
++ Enomem:
+ 	kfree(CHIP);
+ 	CHIP = NULL;
++	rc = -ENOMEM;
+ 
+-	return -ENOMEM;
++ Done:
++	mutex_unlock(&sb_mutex);
++	return rc;
+ }
+ 
+ /* "mount -t gadgetfs path /dev/gadget" ends up here */
+@@ -2083,6 +2095,7 @@ static int gadgetfs_init_fs_context(struct fs_context *fc)
+ static void
+ gadgetfs_kill_sb (struct super_block *sb)
+ {
++	mutex_lock(&sb_mutex);
+ 	kill_litter_super (sb);
+ 	if (the_device) {
+ 		put_dev (the_device);
+@@ -2090,6 +2103,7 @@ gadgetfs_kill_sb (struct super_block *sb)
+ 	}
+ 	kfree(CHIP);
+ 	CHIP = NULL;
++	mutex_unlock(&sb_mutex);
+ }
+ 
+ /*----------------------------------------------------------------------*/
+diff --git a/drivers/usb/gadget/legacy/webcam.c b/drivers/usb/gadget/legacy/webcam.c
+index 2c9eab2b863d2..ff970a9433479 100644
+--- a/drivers/usb/gadget/legacy/webcam.c
++++ b/drivers/usb/gadget/legacy/webcam.c
+@@ -293,6 +293,7 @@ static const struct uvc_descriptor_header * const uvc_fs_streaming_cls[] = {
+ 	(const struct uvc_descriptor_header *) &uvc_format_yuv,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_720p,
++	(const struct uvc_descriptor_header *) &uvc_color_matching,
+ 	(const struct uvc_descriptor_header *) &uvc_format_mjpg,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_720p,
+@@ -305,6 +306,7 @@ static const struct uvc_descriptor_header * const uvc_hs_streaming_cls[] = {
+ 	(const struct uvc_descriptor_header *) &uvc_format_yuv,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_720p,
++	(const struct uvc_descriptor_header *) &uvc_color_matching,
+ 	(const struct uvc_descriptor_header *) &uvc_format_mjpg,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_720p,
+@@ -317,6 +319,7 @@ static const struct uvc_descriptor_header * const uvc_ss_streaming_cls[] = {
+ 	(const struct uvc_descriptor_header *) &uvc_format_yuv,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_720p,
++	(const struct uvc_descriptor_header *) &uvc_color_matching,
+ 	(const struct uvc_descriptor_header *) &uvc_format_mjpg,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_720p,
+diff --git a/drivers/usb/host/ehci-fsl.c b/drivers/usb/host/ehci-fsl.c
+index 1e8b59ab22729..c78f71a5faac4 100644
+--- a/drivers/usb/host/ehci-fsl.c
++++ b/drivers/usb/host/ehci-fsl.c
+@@ -29,7 +29,7 @@
+ #include "ehci-fsl.h"
+ 
+ #define DRIVER_DESC "Freescale EHCI Host controller driver"
+-#define DRV_NAME "ehci-fsl"
++#define DRV_NAME "fsl-ehci"
+ 
+ static struct hc_driver __read_mostly fsl_ehci_hc_driver;
+ 
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 9168b492c02b7..aff65cefead2f 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -77,9 +77,12 @@ static const char hcd_name[] = "xhci_hcd";
+ static struct hc_driver __read_mostly xhci_pci_hc_driver;
+ 
+ static int xhci_pci_setup(struct usb_hcd *hcd);
++static int xhci_pci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
++				      struct usb_tt *tt, gfp_t mem_flags);
+ 
+ static const struct xhci_driver_overrides xhci_pci_overrides __initconst = {
+ 	.reset = xhci_pci_setup,
++	.update_hub_device = xhci_pci_update_hub_device,
+ };
+ 
+ /* called after powerup, by probe or system-pm "wakeup" */
+@@ -348,8 +351,38 @@ static void xhci_pme_acpi_rtd3_enable(struct pci_dev *dev)
+ 				NULL);
+ 	ACPI_FREE(obj);
+ }
++
++static void xhci_find_lpm_incapable_ports(struct usb_hcd *hcd, struct usb_device *hdev)
++{
++	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
++	struct xhci_hub *rhub = &xhci->usb3_rhub;
++	int ret;
++	int i;
++
++	/* This is not the usb3 roothub we are looking for */
++	if (hcd != rhub->hcd)
++		return;
++
++	if (hdev->maxchild > rhub->num_ports) {
++		dev_err(&hdev->dev, "USB3 roothub port number mismatch\n");
++		return;
++	}
++
++	for (i = 0; i < hdev->maxchild; i++) {
++		ret = usb_acpi_port_lpm_incapable(hdev, i);
++
++		dev_dbg(&hdev->dev, "port-%d disable U1/U2 _DSM: %d\n", i + 1, ret);
++
++		if (ret >= 0) {
++			rhub->ports[i]->lpm_incapable = ret;
++			continue;
++		}
++	}
++}
++
+ #else
+ static void xhci_pme_acpi_rtd3_enable(struct pci_dev *dev) { }
++static void xhci_find_lpm_incapable_ports(struct usb_hcd *hcd, struct usb_device *hdev) { }
+ #endif /* CONFIG_ACPI */
+ 
+ /* called during probe() after chip reset completes */
+@@ -382,6 +415,16 @@ static int xhci_pci_setup(struct usb_hcd *hcd)
+ 	return xhci_pci_reinit(xhci, pdev);
+ }
+ 
++static int xhci_pci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
++				      struct usb_tt *tt, gfp_t mem_flags)
++{
++	/* Check if acpi claims some USB3 roothub ports are lpm incapable */
++	if (!hdev->parent)
++		xhci_find_lpm_incapable_ports(hcd, hdev);
++
++	return xhci_update_hub_device(hcd, hdev, tt, mem_flags);
++}
++
+ /*
+  * We need to register our own PCI probe function (instead of the USB core's
+  * function) in order to create a second roothub under xHCI.
+@@ -451,6 +494,8 @@ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 	if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW)
+ 		pm_runtime_allow(&dev->dev);
+ 
++	dma_set_max_seg_size(&dev->dev, UINT_MAX);
++
+ 	return 0;
+ 
+ put_usb3_hcd:
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index ead42fc3e16d5..b69b8c7e7966c 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1044,7 +1044,10 @@ static void xhci_kill_endpoint_urbs(struct xhci_hcd *xhci,
+ 	struct xhci_virt_ep *ep;
+ 	struct xhci_ring *ring;
+ 
+-	ep = &xhci->devs[slot_id]->eps[ep_index];
++	ep = xhci_get_virt_ep(xhci, slot_id, ep_index);
++	if (!ep)
++		return;
++
+ 	if ((ep->ep_state & EP_HAS_STREAMS) ||
+ 			(ep->ep_state & EP_GETTING_NO_STREAMS)) {
+ 		int stream_id;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index c968dd8653140..2967372a99880 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -3919,6 +3919,7 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ 	struct xhci_virt_device *virt_dev;
+ 	struct xhci_slot_ctx *slot_ctx;
++	unsigned long flags;
+ 	int i, ret;
+ 
+ 	/*
+@@ -3947,7 +3948,11 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 	}
+ 	virt_dev->udev = NULL;
+ 	xhci_disable_slot(xhci, udev->slot_id);
++
++	spin_lock_irqsave(&xhci->lock, flags);
+ 	xhci_free_virt_device(xhci, udev->slot_id);
++	spin_unlock_irqrestore(&xhci->lock, flags);
++
+ }
+ 
+ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)
+@@ -5004,6 +5009,7 @@ static int xhci_enable_usb3_lpm_timeout(struct usb_hcd *hcd,
+ 			struct usb_device *udev, enum usb3_link_state state)
+ {
+ 	struct xhci_hcd	*xhci;
++	struct xhci_port *port;
+ 	u16 hub_encoded_timeout;
+ 	int mel;
+ 	int ret;
+@@ -5017,6 +5023,13 @@ static int xhci_enable_usb3_lpm_timeout(struct usb_hcd *hcd,
+ 			!xhci->devs[udev->slot_id])
+ 		return USB3_LPM_DISABLED;
+ 
++	/* If connected to root port then check port can handle lpm */
++	if (udev->parent && !udev->parent->parent) {
++		port = xhci->usb3_rhub.ports[udev->portnum - 1];
++		if (port->lpm_incapable)
++			return USB3_LPM_DISABLED;
++	}
++
+ 	hub_encoded_timeout = xhci_calculate_lpm_timeout(hcd, udev, state);
+ 	mel = calculate_max_exit_latency(udev, state, hub_encoded_timeout);
+ 	if (mel < 0) {
+@@ -5076,7 +5089,7 @@ static int xhci_disable_usb3_lpm_timeout(struct usb_hcd *hcd,
+ /* Once a hub descriptor is fetched for a device, we need to update the xHC's
+  * internal data structures for the device.
+  */
+-static int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
++int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+ 			struct usb_tt *tt, gfp_t mem_flags)
+ {
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+@@ -5176,6 +5189,7 @@ static int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+ 	xhci_free_command(xhci, config_cmd);
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(xhci_update_hub_device);
+ 
+ static int xhci_get_frame(struct usb_hcd *hcd)
+ {
+@@ -5446,6 +5460,8 @@ void xhci_init_driver(struct hc_driver *drv,
+ 			drv->check_bandwidth = over->check_bandwidth;
+ 		if (over->reset_bandwidth)
+ 			drv->reset_bandwidth = over->reset_bandwidth;
++		if (over->update_hub_device)
++			drv->update_hub_device = over->update_hub_device;
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(xhci_init_driver);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index ac09b171b7832..c7749f6e34745 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1728,6 +1728,7 @@ struct xhci_port {
+ 	int			hcd_portnum;
+ 	struct xhci_hub		*rhub;
+ 	struct xhci_port_cap	*port_cap;
++	unsigned int		lpm_incapable:1;
+ };
+ 
+ struct xhci_hub {
+@@ -1933,6 +1934,8 @@ struct xhci_driver_overrides {
+ 	int (*start)(struct usb_hcd *hcd);
+ 	int (*check_bandwidth)(struct usb_hcd *, struct usb_device *);
+ 	void (*reset_bandwidth)(struct usb_hcd *, struct usb_device *);
++	int (*update_hub_device)(struct usb_hcd *hcd, struct usb_device *hdev,
++			    struct usb_tt *tt, gfp_t mem_flags);
+ };
+ 
+ #define	XHCI_CFC_DELAY		10
+@@ -2089,6 +2092,8 @@ void xhci_init_driver(struct hc_driver *drv,
+ 		      const struct xhci_driver_overrides *over);
+ int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+ void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
++int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
++			   struct usb_tt *tt, gfp_t mem_flags);
+ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id);
+ int xhci_ext_cap_init(struct xhci_hcd *xhci);
+ 
+diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
+index 72a06af250812..51a5d626134c3 100644
+--- a/drivers/usb/misc/iowarrior.c
++++ b/drivers/usb/misc/iowarrior.c
+@@ -817,7 +817,7 @@ static int iowarrior_probe(struct usb_interface *interface,
+ 			break;
+ 
+ 		case USB_DEVICE_ID_CODEMERCS_IOW100:
+-			dev->report_size = 13;
++			dev->report_size = 12;
+ 			break;
+ 		}
+ 	}
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 8a4a0d4dbc139..9ee0fa7756121 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -64,6 +64,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x0846, 0x1100) }, /* NetGear Managed Switch M4100 series, M5300 series, M7100 series */
+ 	{ USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
+ 	{ USB_DEVICE(0x08FD, 0x000A) }, /* Digianswer A/S , ZigBee/802.15.4 MAC Device */
++	{ USB_DEVICE(0x0908, 0x0070) }, /* Siemens SCALANCE LPE-9000 USB Serial Console */
+ 	{ USB_DEVICE(0x0908, 0x01FF) }, /* Siemens RUGGEDCOM USB Serial Console */
+ 	{ USB_DEVICE(0x0988, 0x0578) }, /* Teraoka AD2000 */
+ 	{ USB_DEVICE(0x0B00, 0x3070) }, /* Ingenico 3070 */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 5636b8f522167..2fc65cbbfea95 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -255,10 +255,16 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EP06			0x0306
+ #define QUECTEL_PRODUCT_EM05G			0x030a
+ #define QUECTEL_PRODUCT_EM060K			0x030b
++#define QUECTEL_PRODUCT_EM05G_CS		0x030c
++#define QUECTEL_PRODUCT_EM05CN_SG		0x0310
+ #define QUECTEL_PRODUCT_EM05G_SG		0x0311
++#define QUECTEL_PRODUCT_EM05CN			0x0312
++#define QUECTEL_PRODUCT_EM05G_GR		0x0313
++#define QUECTEL_PRODUCT_EM05G_RS		0x0314
+ #define QUECTEL_PRODUCT_EM12			0x0512
+ #define QUECTEL_PRODUCT_RM500Q			0x0800
+ #define QUECTEL_PRODUCT_RM520N			0x0801
++#define QUECTEL_PRODUCT_EC200U			0x0901
+ #define QUECTEL_PRODUCT_EC200S_CN		0x6002
+ #define QUECTEL_PRODUCT_EC200T			0x6026
+ #define QUECTEL_PRODUCT_RM500K			0x7001
+@@ -1159,8 +1165,18 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05CN, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05CN_SG, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
+ 	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G, 0xff),
+ 	  .driver_info = RSVD(6) | ZLP },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_GR, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_CS, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_RS, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
+ 	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_SG, 0xff),
+ 	  .driver_info = RSVD(6) | ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) },
+@@ -1180,6 +1196,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200U, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },
+diff --git a/drivers/usb/storage/uas-detect.h b/drivers/usb/storage/uas-detect.h
+index 3f720faa6f97c..d73282c0ec501 100644
+--- a/drivers/usb/storage/uas-detect.h
++++ b/drivers/usb/storage/uas-detect.h
+@@ -116,6 +116,19 @@ static int uas_use_uas_driver(struct usb_interface *intf,
+ 	if (le16_to_cpu(udev->descriptor.idVendor) == 0x0bc2)
+ 		flags |= US_FL_NO_ATA_1X;
+ 
++	/*
++	 * RTL9210-based enclosure from HIKSEMI, MD202 reportedly have issues
++	 * with UAS.  This isn't distinguishable with just idVendor and
++	 * idProduct, use manufacturer and product too.
++	 *
++	 * Reported-by: Hongling Zeng <zenghongling@kylinos.cn>
++	 */
++	if (le16_to_cpu(udev->descriptor.idVendor) == 0x0bda &&
++			le16_to_cpu(udev->descriptor.idProduct) == 0x9210 &&
++			(udev->manufacturer && !strcmp(udev->manufacturer, "HIKSEMI")) &&
++			(udev->product && !strcmp(udev->product, "MD202")))
++		flags |= US_FL_IGNORE_UAS;
++
+ 	usb_stor_adjust_quirks(udev, &flags);
+ 
+ 	if (flags & US_FL_IGNORE_UAS) {
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index 251778d14e2dd..c7b763d6d1023 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -83,13 +83,6 @@ UNUSUAL_DEV(0x0bc2, 0x331a, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_NO_REPORT_LUNS),
+ 
+-/* Reported-by: Hongling Zeng <zenghongling@kylinos.cn> */
+-UNUSUAL_DEV(0x0bda, 0x9210, 0x0000, 0x9999,
+-		"Hiksemi",
+-		"External HDD",
+-		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+-		US_FL_IGNORE_UAS),
+-
+ /* Reported-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> */
+ UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x9999,
+ 		"Initio Corporation",
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index 5e293ccf0e904..eed719cf55525 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -409,6 +409,18 @@ static const char * const pin_assignments[] = {
+ 	[DP_PIN_ASSIGN_F] = "F",
+ };
+ 
++/*
++ * Helper function to extract a peripheral's currently supported
++ * Pin Assignments from its DisplayPort alternate mode state.
++ */
++static u8 get_current_pin_assignments(struct dp_altmode *dp)
++{
++	if (DP_CONF_CURRENTLY(dp->data.conf) == DP_CONF_DFP_D)
++		return DP_CAP_PIN_ASSIGN_DFP_D(dp->alt->vdo);
++	else
++		return DP_CAP_PIN_ASSIGN_UFP_D(dp->alt->vdo);
++}
++
+ static ssize_t
+ pin_assignment_store(struct device *dev, struct device_attribute *attr,
+ 		     const char *buf, size_t size)
+@@ -435,10 +447,7 @@ pin_assignment_store(struct device *dev, struct device_attribute *attr,
+ 		goto out_unlock;
+ 	}
+ 
+-	if (DP_CONF_CURRENTLY(dp->data.conf) == DP_CONF_DFP_D)
+-		assignments = DP_CAP_UFP_D_PIN_ASSIGN(dp->alt->vdo);
+-	else
+-		assignments = DP_CAP_DFP_D_PIN_ASSIGN(dp->alt->vdo);
++	assignments = get_current_pin_assignments(dp);
+ 
+ 	if (!(DP_CONF_GET_PIN_ASSIGN(conf) & assignments)) {
+ 		ret = -EINVAL;
+@@ -475,10 +484,7 @@ static ssize_t pin_assignment_show(struct device *dev,
+ 
+ 	cur = get_count_order(DP_CONF_GET_PIN_ASSIGN(dp->data.conf));
+ 
+-	if (DP_CONF_CURRENTLY(dp->data.conf) == DP_CONF_DFP_D)
+-		assignments = DP_CAP_UFP_D_PIN_ASSIGN(dp->alt->vdo);
+-	else
+-		assignments = DP_CAP_DFP_D_PIN_ASSIGN(dp->alt->vdo);
++	assignments = get_current_pin_assignments(dp);
+ 
+ 	for (i = 0; assignments; assignments >>= 1, i++) {
+ 		if (assignments & 1) {
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 284294620e9fa..7d9b8050b09cd 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -1684,6 +1684,11 @@ static int run_one_delayed_ref(struct btrfs_trans_handle *trans,
+ 		BUG();
+ 	if (ret && insert_reserved)
+ 		btrfs_pin_extent(trans, node->bytenr, node->num_bytes, 1);
++	if (ret < 0)
++		btrfs_err(trans->fs_info,
++"failed to run delayed ref for logical %llu num_bytes %llu type %u action %u ref_mod %d: %d",
++			  node->bytenr, node->num_bytes, node->type,
++			  node->action, node->ref_mod, ret);
+ 	return ret;
+ }
+ 
+@@ -1935,8 +1940,6 @@ static int btrfs_run_delayed_refs_for_head(struct btrfs_trans_handle *trans,
+ 		if (ret) {
+ 			unselect_delayed_ref_head(delayed_refs, locked_ref);
+ 			btrfs_put_delayed_ref(ref);
+-			btrfs_debug(fs_info, "run_one_delayed_ref returned %d",
+-				    ret);
+ 			return ret;
+ 		}
+ 
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 74cbbb5d8897f..9fe6a01ea8b85 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -3296,6 +3296,7 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
+ 	int err = -ENOMEM;
+ 	int ret = 0;
+ 	bool stopped = false;
++	bool did_leaf_rescans = false;
+ 
+ 	path = btrfs_alloc_path();
+ 	if (!path)
+@@ -3316,6 +3317,7 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
+ 		}
+ 
+ 		err = qgroup_rescan_leaf(trans, path);
++		did_leaf_rescans = true;
+ 
+ 		if (err > 0)
+ 			btrfs_commit_transaction(trans);
+@@ -3336,16 +3338,23 @@ out:
+ 	mutex_unlock(&fs_info->qgroup_rescan_lock);
+ 
+ 	/*
+-	 * only update status, since the previous part has already updated the
+-	 * qgroup info.
++	 * Only update status, since the previous part has already updated the
++	 * qgroup info, and only if we did any actual work. This also prevents
++	 * race with a concurrent quota disable, which has already set
++	 * fs_info->quota_root to NULL and cleared BTRFS_FS_QUOTA_ENABLED at
++	 * btrfs_quota_disable().
+ 	 */
+-	trans = btrfs_start_transaction(fs_info->quota_root, 1);
+-	if (IS_ERR(trans)) {
+-		err = PTR_ERR(trans);
++	if (did_leaf_rescans) {
++		trans = btrfs_start_transaction(fs_info->quota_root, 1);
++		if (IS_ERR(trans)) {
++			err = PTR_ERR(trans);
++			trans = NULL;
++			btrfs_err(fs_info,
++				  "fail to start transaction for status update: %d",
++				  err);
++		}
++	} else {
+ 		trans = NULL;
+-		btrfs_err(fs_info,
+-			  "fail to start transaction for status update: %d",
+-			  err);
+ 	}
+ 
+ 	mutex_lock(&fs_info->qgroup_rescan_lock);
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 0c4a2474e75be..9a80047bc9b7b 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -3925,12 +3925,15 @@ smb2_readv_callback(struct mid_q_entry *mid)
+ 				(struct smb2_sync_hdr *)rdata->iov[0].iov_base;
+ 	struct cifs_credits credits = { .value = 0, .instance = 0 };
+ 	struct smb_rqst rqst = { .rq_iov = &rdata->iov[1],
+-				 .rq_nvec = 1,
+-				 .rq_pages = rdata->pages,
+-				 .rq_offset = rdata->page_offset,
+-				 .rq_npages = rdata->nr_pages,
+-				 .rq_pagesz = rdata->pagesz,
+-				 .rq_tailsz = rdata->tailsz };
++				 .rq_nvec = 1, };
++
++	if (rdata->got_bytes) {
++		rqst.rq_pages = rdata->pages;
++		rqst.rq_offset = rdata->page_offset;
++		rqst.rq_npages = rdata->nr_pages;
++		rqst.rq_pagesz = rdata->pagesz;
++		rqst.rq_tailsz = rdata->tailsz;
++	}
+ 
+ 	WARN_ONCE(rdata->server != mid->server,
+ 		  "rdata server %p != mid server %p",
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index bd16c78b5bf22..ad0b83a412268 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -414,7 +414,8 @@ static bool f2fs_lookup_extent_tree(struct inode *inode, pgoff_t pgofs,
+ 	struct extent_node *en;
+ 	bool ret = false;
+ 
+-	f2fs_bug_on(sbi, !et);
++	if (!et)
++		return false;
+ 
+ 	trace_f2fs_lookup_extent_tree_start(inode, pgofs);
+ 
+diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
+index ae5ed3a074943..deecfb50dd7e3 100644
+--- a/fs/nfs/filelayout/filelayout.c
++++ b/fs/nfs/filelayout/filelayout.c
+@@ -783,6 +783,12 @@ filelayout_alloc_lseg(struct pnfs_layout_hdr *layoutid,
+ 	return &fl->generic_hdr;
+ }
+ 
++static bool
++filelayout_lseg_is_striped(const struct nfs4_filelayout_segment *flseg)
++{
++	return flseg->num_fh > 1;
++}
++
+ /*
+  * filelayout_pg_test(). Called by nfs_can_coalesce_requests()
+  *
+@@ -803,6 +809,8 @@ filelayout_pg_test(struct nfs_pageio_descriptor *pgio, struct nfs_page *prev,
+ 	size = pnfs_generic_pg_test(pgio, prev, req);
+ 	if (!size)
+ 		return 0;
++	else if (!filelayout_lseg_is_striped(FILELAYOUT_LSEG(pgio->pg_lseg)))
++		return size;
+ 
+ 	/* see if req and prev are in the same stripe */
+ 	if (prev) {
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index 77efd69213a3d..65cd599cb2ab6 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -480,9 +480,18 @@ static int __nilfs_btree_get_block(const struct nilfs_bmap *btree, __u64 ptr,
+ 	ret = nilfs_btnode_submit_block(btnc, ptr, 0, REQ_OP_READ, 0, &bh,
+ 					&submit_ptr);
+ 	if (ret) {
+-		if (ret != -EEXIST)
+-			return ret;
+-		goto out_check;
++		if (likely(ret == -EEXIST))
++			goto out_check;
++		if (ret == -ENOENT) {
++			/*
++			 * Block address translation failed due to invalid
++			 * value of 'ptr'.  In this case, return internal code
++			 * -EINVAL (broken bmap) to notify bmap layer of fatal
++			 * metadata corruption.
++			 */
++			ret = -EINVAL;
++		}
++		return ret;
+ 	}
+ 
+ 	if (ra) {
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index 475d23a4f8da2..66a089a62c39f 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -394,6 +394,10 @@ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
+ 			data_size = zonefs_check_zone_condition(inode, zone,
+ 								false, false);
+ 		}
++	} else if (sbi->s_mount_opts & ZONEFS_MNTOPT_ERRORS_RO &&
++		   data_size > isize) {
++		/* Do not expose garbage data */
++		data_size = isize;
+ 	}
+ 
+ 	/*
+@@ -772,6 +776,24 @@ static ssize_t zonefs_file_dio_append(struct kiocb *iocb, struct iov_iter *from)
+ 
+ 	ret = submit_bio_wait(bio);
+ 
++	/*
++	 * If the file zone was written underneath the file system, the zone
++	 * write pointer may not be where we expect it to be, but the zone
++	 * append write can still succeed. So check manually that we wrote where
++	 * we intended to, that is, at zi->i_wpoffset.
++	 */
++	if (!ret) {
++		sector_t wpsector =
++			zi->i_zsector + (zi->i_wpoffset >> SECTOR_SHIFT);
++
++		if (bio->bi_iter.bi_sector != wpsector) {
++			zonefs_warn(inode->i_sb,
++				"Corrupted write pointer %llu for zone at %llu\n",
++				wpsector, zi->i_zsector);
++			ret = -EIO;
++		}
++	}
++
+ 	zonefs_file_write_dio_end_io(iocb, size, ret, 0);
+ 
+ out_release:
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index a093667991bb9..568be613bdb31 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -754,11 +754,14 @@ extern struct device *usb_intf_get_dma_device(struct usb_interface *intf);
+ extern int usb_acpi_set_power_state(struct usb_device *hdev, int index,
+ 	bool enable);
+ extern bool usb_acpi_power_manageable(struct usb_device *hdev, int index);
++extern int usb_acpi_port_lpm_incapable(struct usb_device *hdev, int index);
+ #else
+ static inline int usb_acpi_set_power_state(struct usb_device *hdev, int index,
+ 	bool enable) { return 0; }
+ static inline bool usb_acpi_power_manageable(struct usb_device *hdev, int index)
+ 	{ return true; }
++static inline int usb_acpi_port_lpm_incapable(struct usb_device *hdev, int index)
++	{ return 0; }
+ #endif
+ 
+ /* USB autosuspend and autoresume */
+diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
+index ecd24c719de4d..041be3ce10718 100644
+--- a/include/trace/events/btrfs.h
++++ b/include/trace/events/btrfs.h
+@@ -95,7 +95,7 @@ struct btrfs_space_info;
+ 	EM( FLUSH_DELALLOC,		"FLUSH_DELALLOC")		\
+ 	EM( FLUSH_DELALLOC_WAIT,	"FLUSH_DELALLOC_WAIT")		\
+ 	EM( FLUSH_DELAYED_REFS_NR,	"FLUSH_DELAYED_REFS_NR")	\
+-	EM( FLUSH_DELAYED_REFS,		"FLUSH_ELAYED_REFS")		\
++	EM( FLUSH_DELAYED_REFS,		"FLUSH_DELAYED_REFS")		\
+ 	EM( ALLOC_CHUNK,		"ALLOC_CHUNK")			\
+ 	EM( ALLOC_CHUNK_FORCE,		"ALLOC_CHUNK_FORCE")		\
+ 	EM( RUN_DELAYED_IPUTS,		"RUN_DELAYED_IPUTS")		\
+diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
+index d74c076e9e2b4..717d388ecbd6a 100644
+--- a/include/trace/trace_events.h
++++ b/include/trace/trace_events.h
+@@ -400,7 +400,7 @@ static struct trace_event_functions trace_event_type_funcs_##call = {	\
+ 
+ #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
+ 
+-#define ALIGN_STRUCTFIELD(type) ((int)(offsetof(struct {char a; type b;}, b)))
++#define ALIGN_STRUCTFIELD(type) ((int)(__alignof__(struct {type b;})))
+ 
+ #undef __field_ext
+ #define __field_ext(_type, _item, _filter_type) {			\
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index 87bc38b471037..81485c1a9879e 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -513,7 +513,7 @@ static struct io_wq_work *io_get_next_work(struct io_wqe_acct *acct,
+ 
+ static bool io_flush_signals(void)
+ {
+-	if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL))) {
++	if (test_thread_flag(TIF_NOTIFY_SIGNAL) || current->task_works) {
+ 		__set_current_state(TASK_RUNNING);
+ 		tracehook_notify_signal();
+ 		return true;
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 0c4d16afb9ef8..642e1a0560c6d 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -578,6 +578,7 @@ struct io_sr_msg {
+ 	int				msg_flags;
+ 	int				bgid;
+ 	size_t				len;
++	size_t				done_io;
+ 	struct io_buffer		*kbuf;
+ };
+ 
+@@ -739,6 +740,7 @@ enum {
+ 	REQ_F_CREDS_BIT,
+ 	REQ_F_REFCOUNT_BIT,
+ 	REQ_F_ARM_LTIMEOUT_BIT,
++	REQ_F_PARTIAL_IO_BIT,
+ 	/* keep async read/write and isreg together and in order */
+ 	REQ_F_NOWAIT_READ_BIT,
+ 	REQ_F_NOWAIT_WRITE_BIT,
+@@ -794,6 +796,8 @@ enum {
+ 	REQ_F_REFCOUNT		= BIT(REQ_F_REFCOUNT_BIT),
+ 	/* there is a linked timeout that has to be armed */
+ 	REQ_F_ARM_LTIMEOUT	= BIT(REQ_F_ARM_LTIMEOUT_BIT),
++	/* request has already done partial IO */
++	REQ_F_PARTIAL_IO	= BIT(REQ_F_PARTIAL_IO_BIT),
+ };
+ 
+ struct async_poll {
+@@ -2478,12 +2482,26 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ 
+ 	io_init_req_batch(&rb);
+ 	while (!list_empty(done)) {
++		struct io_uring_cqe *cqe;
++		unsigned cflags;
++
+ 		req = list_first_entry(done, struct io_kiocb, inflight_entry);
+ 		list_del(&req->inflight_entry);
+-
+-		io_fill_cqe_req(req, req->result, io_put_rw_kbuf(req));
++		cflags = io_put_rw_kbuf(req);
+ 		(*nr_events)++;
+ 
++		cqe = io_get_cqe(ctx);
++		if (cqe) {
++			WRITE_ONCE(cqe->user_data, req->user_data);
++			WRITE_ONCE(cqe->res, req->result);
++			WRITE_ONCE(cqe->flags, cflags);
++		} else {
++			spin_lock(&ctx->completion_lock);
++			io_cqring_event_overflow(ctx, req->user_data,
++							req->result, cflags);
++			spin_unlock(&ctx->completion_lock);
++		}
++
+ 		if (req_ref_put_and_test(req))
+ 			io_req_free_batch(&rb, req, &ctx->submit_state);
+ 	}
+@@ -2682,17 +2700,32 @@ static bool io_rw_should_reissue(struct io_kiocb *req)
+ }
+ #endif
+ 
+-static bool __io_complete_rw_common(struct io_kiocb *req, long res)
++/*
++ * Trigger the notifications after having done some IO, and finish the write
++ * accounting, if any.
++ */
++static void io_req_io_end(struct io_kiocb *req)
+ {
+-	if (req->rw.kiocb.ki_flags & IOCB_WRITE) {
++	struct io_rw *rw = &req->rw;
++
++	if (rw->kiocb.ki_flags & IOCB_WRITE) {
+ 		kiocb_end_write(req);
+ 		fsnotify_modify(req->file);
+ 	} else {
+ 		fsnotify_access(req->file);
+ 	}
++}
++
++static bool __io_complete_rw_common(struct io_kiocb *req, long res)
++{
+ 	if (res != req->result) {
+ 		if ((res == -EAGAIN || res == -EOPNOTSUPP) &&
+ 		    io_rw_should_reissue(req)) {
++			/*
++			 * Reissue will start accounting again, finish the
++			 * current cycle.
++			 */
++			io_req_io_end(req);
+ 			req->flags |= REQ_F_REISSUE;
+ 			return true;
+ 		}
+@@ -2734,12 +2767,10 @@ static void io_req_task_complete(struct io_kiocb *req, bool *locked)
+ 	}
+ }
+ 
+-static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
+-			     unsigned int issue_flags)
++static void io_req_rw_complete(struct io_kiocb *req, bool *locked)
+ {
+-	if (__io_complete_rw_common(req, res))
+-		return;
+-	__io_req_complete(req, issue_flags, io_fixup_rw_res(req, res), io_put_rw_kbuf(req));
++	io_req_io_end(req);
++	io_req_task_complete(req, locked);
+ }
+ 
+ static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
+@@ -2749,7 +2780,7 @@ static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
+ 	if (__io_complete_rw_common(req, res))
+ 		return;
+ 	req->result = io_fixup_rw_res(req, res);
+-	req->io_task_work.func = io_req_task_complete;
++	req->io_task_work.func = io_req_rw_complete;
+ 	io_req_task_work_add(req);
+ }
+ 
+@@ -2901,14 +2932,6 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ 		req->flags |= REQ_F_ISREG;
+ 
+ 	kiocb->ki_pos = READ_ONCE(sqe->off);
+-	if (kiocb->ki_pos == -1) {
+-		if (!(file->f_mode & FMODE_STREAM)) {
+-			req->flags |= REQ_F_CUR_POS;
+-			kiocb->ki_pos = file->f_pos;
+-		} else {
+-			kiocb->ki_pos = 0;
+-		}
+-	}
+ 	kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
+ 	kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
+ 	ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags));
+@@ -2990,6 +3013,23 @@ static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret)
+ 	}
+ }
+ 
++static inline loff_t *io_kiocb_update_pos(struct io_kiocb *req)
++{
++	struct kiocb *kiocb = &req->rw.kiocb;
++
++	if (kiocb->ki_pos != -1)
++		return &kiocb->ki_pos;
++
++	if (!(req->file->f_mode & FMODE_STREAM)) {
++		req->flags |= REQ_F_CUR_POS;
++		kiocb->ki_pos = req->file->f_pos;
++		return &kiocb->ki_pos;
++	}
++
++	kiocb->ki_pos = 0;
++	return NULL;
++}
++
+ static void kiocb_done(struct kiocb *kiocb, ssize_t ret,
+ 		       unsigned int issue_flags)
+ {
+@@ -2997,10 +3037,20 @@ static void kiocb_done(struct kiocb *kiocb, ssize_t ret,
+ 
+ 	if (req->flags & REQ_F_CUR_POS)
+ 		req->file->f_pos = kiocb->ki_pos;
+-	if (ret >= 0 && (kiocb->ki_complete == io_complete_rw))
+-		__io_complete_rw(req, ret, 0, issue_flags);
+-	else
++	if (ret >= 0 && (kiocb->ki_complete == io_complete_rw)) {
++		if (!__io_complete_rw_common(req, ret)) {
++			/*
++			 * Safe to call io_end from here as we're inline
++			 * from the submission path.
++			 */
++			io_req_io_end(req);
++			__io_req_complete(req, issue_flags,
++					  io_fixup_rw_res(req, ret),
++					  io_put_rw_kbuf(req));
++		}
++	} else {
+ 		io_rw_done(kiocb, ret);
++	}
+ 
+ 	if (req->flags & REQ_F_REISSUE) {
+ 		req->flags &= ~REQ_F_REISSUE;
+@@ -3282,6 +3332,7 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
+ 	struct kiocb *kiocb = &req->rw.kiocb;
+ 	struct file *file = req->file;
+ 	ssize_t ret = 0;
++	loff_t *ppos;
+ 
+ 	/*
+ 	 * Don't support polled IO through this interface, and we can't
+@@ -3293,6 +3344,8 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
+ 	if (kiocb->ki_flags & IOCB_NOWAIT)
+ 		return -EAGAIN;
+ 
++	ppos = io_kiocb_ppos(kiocb);
++
+ 	while (iov_iter_count(iter)) {
+ 		struct iovec iovec;
+ 		ssize_t nr;
+@@ -3306,10 +3359,10 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
+ 
+ 		if (rw == READ) {
+ 			nr = file->f_op->read(file, iovec.iov_base,
+-					      iovec.iov_len, io_kiocb_ppos(kiocb));
++					      iovec.iov_len, ppos);
+ 		} else {
+ 			nr = file->f_op->write(file, iovec.iov_base,
+-					       iovec.iov_len, io_kiocb_ppos(kiocb));
++					       iovec.iov_len, ppos);
+ 		}
+ 
+ 		if (nr < 0) {
+@@ -3510,6 +3563,7 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
+ 	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+ 	struct iov_iter_state __state, *state;
+ 	ssize_t ret, ret2;
++	loff_t *ppos;
+ 
+ 	if (rw) {
+ 		iter = &rw->iter;
+@@ -3542,7 +3596,9 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
+ 		return ret ?: -EAGAIN;
+ 	}
+ 
+-	ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), req->result);
++	ppos = io_kiocb_update_pos(req);
++
++	ret = rw_verify_area(READ, req->file, ppos, req->result);
+ 	if (unlikely(ret)) {
+ 		kfree(iovec);
+ 		return ret;
+@@ -3646,6 +3702,7 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
+ 	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+ 	struct iov_iter_state __state, *state;
+ 	ssize_t ret, ret2;
++	loff_t *ppos;
+ 
+ 	if (rw) {
+ 		iter = &rw->iter;
+@@ -3676,7 +3733,9 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
+ 	    (req->flags & REQ_F_ISREG))
+ 		goto copy_iov;
+ 
+-	ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), req->result);
++	ppos = io_kiocb_update_pos(req);
++
++	ret = rw_verify_area(WRITE, req->file, ppos, req->result);
+ 	if (unlikely(ret))
+ 		goto out_free;
+ 
+@@ -4613,6 +4672,13 @@ static int io_sync_file_range(struct io_kiocb *req, unsigned int issue_flags)
+ }
+ 
+ #if defined(CONFIG_NET)
++static bool io_net_retry(struct socket *sock, int flags)
++{
++	if (!(flags & MSG_WAITALL))
++		return false;
++	return sock->type == SOCK_STREAM || sock->type == SOCK_SEQPACKET;
++}
++
+ static int io_setup_async_msg(struct io_kiocb *req,
+ 			      struct io_async_msghdr *kmsg)
+ {
+@@ -4630,8 +4696,10 @@ static int io_setup_async_msg(struct io_kiocb *req,
+ 	if (async_msg->msg.msg_name)
+ 		async_msg->msg.msg_name = &async_msg->addr;
+ 	/* if were using fast_iov, set it to the new one */
+-	if (!async_msg->free_iov)
+-		async_msg->msg.msg_iter.iov = async_msg->fast_iov;
++	if (!kmsg->free_iov) {
++		size_t fast_idx = kmsg->msg.msg_iter.iov - kmsg->fast_iov;
++		async_msg->msg.msg_iter.iov = &async_msg->fast_iov[fast_idx];
++	}
+ 
+ 	return -EAGAIN;
+ }
+@@ -4676,12 +4744,14 @@ static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	if (req->ctx->compat)
+ 		sr->msg_flags |= MSG_CMSG_COMPAT;
+ #endif
++	sr->done_io = 0;
+ 	return 0;
+ }
+ 
+ static int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
+ {
+ 	struct io_async_msghdr iomsg, *kmsg;
++	struct io_sr_msg *sr = &req->sr_msg;
+ 	struct socket *sock;
+ 	unsigned flags;
+ 	int min_ret = 0;
+@@ -4706,17 +4776,27 @@ static int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 		min_ret = iov_iter_count(&kmsg->msg.msg_iter);
+ 
+ 	ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags);
+-	if ((issue_flags & IO_URING_F_NONBLOCK) && ret == -EAGAIN)
+-		return io_setup_async_msg(req, kmsg);
+-	if (ret == -ERESTARTSYS)
+-		ret = -EINTR;
+ 
++	if (ret < min_ret) {
++		if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
++			return io_setup_async_msg(req, kmsg);
++		if (ret == -ERESTARTSYS)
++			ret = -EINTR;
++		if (ret > 0 && io_net_retry(sock, flags)) {
++			sr->done_io += ret;
++			req->flags |= REQ_F_PARTIAL_IO;
++			return io_setup_async_msg(req, kmsg);
++		}
++		req_set_fail(req);
++	}
+ 	/* fast path, check for non-NULL to avoid function call */
+ 	if (kmsg->free_iov)
+ 		kfree(kmsg->free_iov);
+ 	req->flags &= ~REQ_F_NEED_CLEANUP;
+-	if (ret < min_ret)
+-		req_set_fail(req);
++	if (ret >= 0)
++		ret += sr->done_io;
++	else if (sr->done_io)
++		ret = sr->done_io;
+ 	__io_req_complete(req, issue_flags, ret, 0);
+ 	return 0;
+ }
+@@ -4752,13 +4832,24 @@ static int io_send(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ 	msg.msg_flags = flags;
+ 	ret = sock_sendmsg(sock, &msg);
+-	if ((issue_flags & IO_URING_F_NONBLOCK) && ret == -EAGAIN)
+-		return -EAGAIN;
+-	if (ret == -ERESTARTSYS)
+-		ret = -EINTR;
+-
+-	if (ret < min_ret)
++	if (ret < min_ret) {
++		if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
++			return -EAGAIN;
++		if (ret == -ERESTARTSYS)
++			ret = -EINTR;
++		if (ret > 0 && io_net_retry(sock, flags)) {
++			sr->len -= ret;
++			sr->buf += ret;
++			sr->done_io += ret;
++			req->flags |= REQ_F_PARTIAL_IO;
++			return -EAGAIN;
++		}
+ 		req_set_fail(req);
++	}
++	if (ret >= 0)
++		ret += sr->done_io;
++	else if (sr->done_io)
++		ret = sr->done_io;
+ 	__io_req_complete(req, issue_flags, ret, 0);
+ 	return 0;
+ }
+@@ -4902,12 +4993,14 @@ static int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	if (req->ctx->compat)
+ 		sr->msg_flags |= MSG_CMSG_COMPAT;
+ #endif
++	sr->done_io = 0;
+ 	return 0;
+ }
+ 
+ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
+ {
+ 	struct io_async_msghdr iomsg, *kmsg;
++	struct io_sr_msg *sr = &req->sr_msg;
+ 	struct socket *sock;
+ 	struct io_buffer *kbuf;
+ 	unsigned flags;
+@@ -4945,10 +5038,20 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ 	ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.umsg,
+ 					kmsg->uaddr, flags);
+-	if (force_nonblock && ret == -EAGAIN)
+-		return io_setup_async_msg(req, kmsg);
+-	if (ret == -ERESTARTSYS)
+-		ret = -EINTR;
++	if (ret < min_ret) {
++		if (ret == -EAGAIN && force_nonblock)
++			return io_setup_async_msg(req, kmsg);
++		if (ret == -ERESTARTSYS)
++			ret = -EINTR;
++		if (ret > 0 && io_net_retry(sock, flags)) {
++			sr->done_io += ret;
++			req->flags |= REQ_F_PARTIAL_IO;
++			return io_setup_async_msg(req, kmsg);
++		}
++		req_set_fail(req);
++	} else if ((flags & MSG_WAITALL) && (kmsg->msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) {
++		req_set_fail(req);
++	}
+ 
+ 	if (req->flags & REQ_F_BUFFER_SELECTED)
+ 		cflags = io_put_recv_kbuf(req);
+@@ -4956,8 +5059,10 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 	if (kmsg->free_iov)
+ 		kfree(kmsg->free_iov);
+ 	req->flags &= ~REQ_F_NEED_CLEANUP;
+-	if (ret < min_ret || ((flags & MSG_WAITALL) && (kmsg->msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
+-		req_set_fail(req);
++	if (ret >= 0)
++		ret += sr->done_io;
++	else if (sr->done_io)
++		ret = sr->done_io;
+ 	__io_req_complete(req, issue_flags, ret, cflags);
+ 	return 0;
+ }
+@@ -5004,15 +5109,29 @@ static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
+ 		min_ret = iov_iter_count(&msg.msg_iter);
+ 
+ 	ret = sock_recvmsg(sock, &msg, flags);
+-	if (force_nonblock && ret == -EAGAIN)
+-		return -EAGAIN;
+-	if (ret == -ERESTARTSYS)
+-		ret = -EINTR;
++	if (ret < min_ret) {
++		if (ret == -EAGAIN && force_nonblock)
++			return -EAGAIN;
++		if (ret == -ERESTARTSYS)
++			ret = -EINTR;
++		if (ret > 0 && io_net_retry(sock, flags)) {
++			sr->len -= ret;
++			sr->buf += ret;
++			sr->done_io += ret;
++			req->flags |= REQ_F_PARTIAL_IO;
++			return -EAGAIN;
++		}
++		req_set_fail(req);
++	} else if ((flags & MSG_WAITALL) && (msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) {
+ out_free:
++		req_set_fail(req);
++	}
+ 	if (req->flags & REQ_F_BUFFER_SELECTED)
+ 		cflags = io_put_recv_kbuf(req);
+-	if (ret < min_ret || ((flags & MSG_WAITALL) && (msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
+-		req_set_fail(req);
++	if (ret >= 0)
++		ret += sr->done_io;
++	else if (sr->done_io)
++		ret = sr->done_io;
+ 	__io_req_complete(req, issue_flags, ret, cflags);
+ 	return 0;
+ }
+@@ -5050,9 +5169,6 @@ static int io_accept(struct io_kiocb *req, unsigned int issue_flags)
+ 	struct file *file;
+ 	int ret, fd;
+ 
+-	if (req->file->f_flags & O_NONBLOCK)
+-		req->flags |= REQ_F_NOWAIT;
+-
+ 	if (!fixed) {
+ 		fd = __get_unused_fd_flags(accept->flags, accept->nofile);
+ 		if (unlikely(fd < 0))
+@@ -5065,6 +5181,8 @@ static int io_accept(struct io_kiocb *req, unsigned int issue_flags)
+ 		if (!fixed)
+ 			put_unused_fd(fd);
+ 		ret = PTR_ERR(file);
++		/* safe to retry */
++		req->flags |= REQ_F_PARTIAL_IO;
+ 		if (ret == -EAGAIN && force_nonblock)
+ 			return -EAGAIN;
+ 		if (ret == -ERESTARTSYS)
+@@ -5632,7 +5750,7 @@ static int io_arm_poll_handler(struct io_kiocb *req)
+ 
+ 	if (!req->file || !file_can_poll(req->file))
+ 		return IO_APOLL_ABORTED;
+-	if (req->flags & REQ_F_POLLED)
++	if ((req->flags & (REQ_F_POLLED|REQ_F_PARTIAL_IO)) == REQ_F_POLLED)
+ 		return IO_APOLL_ABORTED;
+ 	if (!def->pollin && !def->pollout)
+ 		return IO_APOLL_ABORTED;
+@@ -5648,7 +5766,12 @@ static int io_arm_poll_handler(struct io_kiocb *req)
+ 		mask |= POLLOUT | POLLWRNORM;
+ 	}
+ 
+-	apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
++	if (req->flags & REQ_F_POLLED) {
++		apoll = req->apoll;
++		kfree(apoll->double_poll);
++	} else {
++		apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
++	}
+ 	if (unlikely(!apoll))
+ 		return IO_APOLL_ABORTED;
+ 	apoll->double_poll = NULL;
+@@ -7440,7 +7563,7 @@ static int io_run_task_work_sig(void)
+ /* when returns >0, the caller should retry */
+ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 					  struct io_wait_queue *iowq,
+-					  ktime_t timeout)
++					  ktime_t *timeout)
+ {
+ 	int ret;
+ 
+@@ -7452,7 +7575,7 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 	if (test_bit(0, &ctx->check_cq_overflow))
+ 		return 1;
+ 
+-	if (!schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS))
++	if (!schedule_hrtimeout(timeout, HRTIMER_MODE_ABS))
+ 		return -ETIME;
+ 	return 1;
+ }
+@@ -7515,7 +7638,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 		}
+ 		prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
+ 						TASK_INTERRUPTIBLE);
+-		ret = io_cqring_wait_schedule(ctx, &iowq, timeout);
++		ret = io_cqring_wait_schedule(ctx, &iowq, &timeout);
+ 		finish_wait(&ctx->cq_wait, &iowq.wq);
+ 		cond_resched();
+ 	} while (ret > 0);
+@@ -9435,6 +9558,10 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ 	/* if we failed setting up the ctx, we might not have any rings */
+ 	io_iopoll_try_reap_events(ctx);
+ 
++	/* drop cached put refs after potentially doing completions */
++	if (current->io_uring)
++		io_uring_drop_tctx_refs(current);
++
+ 	INIT_WORK(&ctx->exit_work, io_ring_exit_work);
+ 	/*
+ 	 * Use system_unbound_wq to avoid spawning tons of event kworkers
+@@ -10741,8 +10868,6 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
+ 		return -ENXIO;
+ 
+ 	if (ctx->restricted) {
+-		if (opcode >= IORING_REGISTER_LAST)
+-			return -EINVAL;
+ 		opcode = array_index_nospec(opcode, IORING_REGISTER_LAST);
+ 		if (!test_bit(opcode, ctx->restrictions.register_op))
+ 			return -EACCES;
+@@ -10874,6 +10999,9 @@ SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
+ 	long ret = -EBADF;
+ 	struct fd f;
+ 
++	if (opcode >= IORING_REGISTER_LAST)
++		return -EINVAL;
++
+ 	f = fdget(fd);
+ 	if (!f.file)
+ 		return -EBADF;
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 24a3a28ae2284..9f59cc8ab8f86 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -1548,6 +1548,8 @@ int do_prlimit(struct task_struct *tsk, unsigned int resource,
+ 
+ 	if (resource >= RLIM_NLIMITS)
+ 		return -EINVAL;
++	resource = array_index_nospec(resource, RLIM_NLIMITS);
++
+ 	if (new_rlim) {
+ 		if (new_rlim->rlim_cur > new_rlim->rlim_max)
+ 			return -EINVAL;
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 0eb3adf4ff68c..b77186ec70e93 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1459,14 +1459,6 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ 	if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE))
+ 		return;
+ 
+-	/*
+-	 * Symmetry with retract_page_tables(): Exclude MAP_PRIVATE mappings
+-	 * that got written to. Without this, we'd have to also lock the
+-	 * anon_vma if one exists.
+-	 */
+-	if (vma->anon_vma)
+-		return;
+-
+ 	hpage = find_lock_page(vma->vm_file->f_mapping,
+ 			       linear_page_index(vma, haddr));
+ 	if (!hpage)
+@@ -1538,6 +1530,10 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ 	}
+ 
+ 	/* step 4: collapse pmd */
++	/* we make no change to anon, but protect concurrent anon page lookup */
++	if (vma->anon_vma)
++		anon_vma_lock_write(vma->anon_vma);
++
+ 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm, haddr,
+ 				haddr + HPAGE_PMD_SIZE);
+ 	mmu_notifier_invalidate_range_start(&range);
+@@ -1547,6 +1543,8 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ 	mmu_notifier_invalidate_range_end(&range);
+ 	pte_free(mm, pmd_pgtable(_pmd));
+ 
++	if (vma->anon_vma)
++		anon_vma_unlock_write(vma->anon_vma);
+ 	i_mmap_unlock_write(vma->vm_file->f_mapping);
+ 
+ drop_hpage:
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 47c2dd4a9b9f9..12bf740e2fb31 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -2052,7 +2052,8 @@ static int ethtool_get_phy_stats(struct net_device *dev, void __user *useraddr)
+ 		return n_stats;
+ 	if (n_stats > S32_MAX / sizeof(u64))
+ 		return -ENOMEM;
+-	WARN_ON_ONCE(!n_stats);
++	if (WARN_ON_ONCE(!n_stats))
++		return -EOPNOTSUPP;
+ 
+ 	if (copy_from_user(&stats, useraddr, sizeof(stats)))
+ 		return -EFAULT;
+diff --git a/net/ipv4/tcp_ulp.c b/net/ipv4/tcp_ulp.c
+index b5d707a5a31b8..8e135af0d4f70 100644
+--- a/net/ipv4/tcp_ulp.c
++++ b/net/ipv4/tcp_ulp.c
+@@ -136,7 +136,7 @@ static int __tcp_set_ulp(struct sock *sk, const struct tcp_ulp_ops *ulp_ops)
+ 	if (icsk->icsk_ulp_ops)
+ 		goto out_err;
+ 
+-	err = -EINVAL;
++	err = -ENOTCONN;
+ 	if (!ulp_ops->clone && sk->sk_state == TCP_LISTEN)
+ 		goto out_err;
+ 
+diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
+index 4b4ab1961068f..92e5812daf892 100644
+--- a/net/mac80211/agg-tx.c
++++ b/net/mac80211/agg-tx.c
+@@ -491,7 +491,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
+ {
+ 	struct tid_ampdu_tx *tid_tx;
+ 	struct ieee80211_local *local = sta->local;
+-	struct ieee80211_sub_if_data *sdata = sta->sdata;
++	struct ieee80211_sub_if_data *sdata;
+ 	struct ieee80211_ampdu_params params = {
+ 		.sta = &sta->sta,
+ 		.action = IEEE80211_AMPDU_TX_START,
+@@ -521,6 +521,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
+ 	 */
+ 	synchronize_net();
+ 
++	sdata = sta->sdata;
+ 	params.ssn = sta->tid_seq[tid] >> 4;
+ 	ret = drv_ampdu_action(local, sdata, &params);
+ 	tid_tx->ssn = params.ssn;
+@@ -534,6 +535,9 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
+ 		 */
+ 		set_bit(HT_AGG_STATE_DRV_READY, &tid_tx->state);
+ 	} else if (ret) {
++		if (!sdata)
++			return;
++
+ 		ht_dbg(sdata,
+ 		       "BA request denied - HW unavailable for %pM tid %d\n",
+ 		       sta->sta.addr, tid);
+diff --git a/net/mac80211/driver-ops.c b/net/mac80211/driver-ops.c
+index 48322e45e7ddb..120bd9cdf7dfa 100644
+--- a/net/mac80211/driver-ops.c
++++ b/net/mac80211/driver-ops.c
+@@ -331,6 +331,9 @@ int drv_ampdu_action(struct ieee80211_local *local,
+ 
+ 	might_sleep();
+ 
++	if (!sdata)
++		return -EIO;
++
+ 	sdata = get_bss_sdata(sdata);
+ 	if (!check_sdata_in_driver(sdata))
+ 		return -EIO;
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index d04e5a1a7e0e7..3a15ef8dd3228 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -2013,7 +2013,6 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
+ 
+ 		ret = register_netdevice(ndev);
+ 		if (ret) {
+-			ieee80211_if_free(ndev);
+ 			free_netdev(ndev);
+ 			return ret;
+ 		}
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index eb7dd457ef5a5..cfd86389d37f6 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3561,6 +3561,15 @@ static void alc256_init(struct hda_codec *codec)
+ 	hda_nid_t hp_pin = alc_get_hp_pin(spec);
+ 	bool hp_pin_sense;
+ 
++	if (spec->ultra_low_power) {
++		alc_update_coef_idx(codec, 0x03, 1<<1, 1<<1);
++		alc_update_coef_idx(codec, 0x08, 3<<2, 3<<2);
++		alc_update_coef_idx(codec, 0x08, 7<<4, 0);
++		alc_update_coef_idx(codec, 0x3b, 1<<15, 0);
++		alc_update_coef_idx(codec, 0x0e, 7<<6, 7<<6);
++		msleep(30);
++	}
++
+ 	if (!hp_pin)
+ 		hp_pin = 0x21;
+ 
+@@ -3572,14 +3581,6 @@ static void alc256_init(struct hda_codec *codec)
+ 		msleep(2);
+ 
+ 	alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+-	if (spec->ultra_low_power) {
+-		alc_update_coef_idx(codec, 0x03, 1<<1, 1<<1);
+-		alc_update_coef_idx(codec, 0x08, 3<<2, 3<<2);
+-		alc_update_coef_idx(codec, 0x08, 7<<4, 0);
+-		alc_update_coef_idx(codec, 0x3b, 1<<15, 0);
+-		alc_update_coef_idx(codec, 0x0e, 7<<6, 7<<6);
+-		msleep(30);
+-	}
+ 
+ 	snd_hda_codec_write(codec, hp_pin, 0,
+ 			    AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+@@ -3661,6 +3662,13 @@ static void alc225_init(struct hda_codec *codec)
+ 	hda_nid_t hp_pin = alc_get_hp_pin(spec);
+ 	bool hp1_pin_sense, hp2_pin_sense;
+ 
++	if (spec->ultra_low_power) {
++		alc_update_coef_idx(codec, 0x08, 0x0f << 2, 3<<2);
++		alc_update_coef_idx(codec, 0x0e, 7<<6, 7<<6);
++		alc_update_coef_idx(codec, 0x33, 1<<11, 0);
++		msleep(30);
++	}
++
+ 	if (!hp_pin)
+ 		hp_pin = 0x21;
+ 	msleep(30);
+@@ -3672,12 +3680,6 @@ static void alc225_init(struct hda_codec *codec)
+ 		msleep(2);
+ 
+ 	alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+-	if (spec->ultra_low_power) {
+-		alc_update_coef_idx(codec, 0x08, 0x0f << 2, 3<<2);
+-		alc_update_coef_idx(codec, 0x0e, 7<<6, 7<<6);
+-		alc_update_coef_idx(codec, 0x33, 1<<11, 0);
+-		msleep(30);
+-	}
+ 
+ 	if (hp1_pin_sense || spec->ultra_low_power)
+ 		snd_hda_codec_write(codec, hp_pin, 0,
+diff --git a/tools/testing/selftests/bpf/prog_tests/jeq_infer_not_null.c b/tools/testing/selftests/bpf/prog_tests/jeq_infer_not_null.c
+new file mode 100644
+index 0000000000000..3add34df57678
+--- /dev/null
++++ b/tools/testing/selftests/bpf/prog_tests/jeq_infer_not_null.c
+@@ -0,0 +1,9 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <test_progs.h>
++#include "jeq_infer_not_null_fail.skel.h"
++
++void test_jeq_infer_not_null(void)
++{
++	RUN_TESTS(jeq_infer_not_null_fail);
++}
+diff --git a/tools/testing/selftests/bpf/progs/jeq_infer_not_null_fail.c b/tools/testing/selftests/bpf/progs/jeq_infer_not_null_fail.c
+new file mode 100644
+index 0000000000000..f46965053acb2
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/jeq_infer_not_null_fail.c
+@@ -0,0 +1,42 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include "vmlinux.h"
++#include <bpf/bpf_helpers.h>
++#include "bpf_misc.h"
++
++char _license[] SEC("license") = "GPL";
++
++struct {
++	__uint(type, BPF_MAP_TYPE_HASH);
++	__uint(max_entries, 1);
++	__type(key, u64);
++	__type(value, u64);
++} m_hash SEC(".maps");
++
++SEC("?raw_tp")
++__failure __msg("R8 invalid mem access 'map_value_or_null")
++int jeq_infer_not_null_ptr_to_btfid(void *ctx)
++{
++	struct bpf_map *map = (struct bpf_map *)&m_hash;
++	struct bpf_map *inner_map = map->inner_map_meta;
++	u64 key = 0, ret = 0, *val;
++
++	val = bpf_map_lookup_elem(map, &key);
++	/* Do not mark ptr as non-null if one of them is
++	 * PTR_TO_BTF_ID (R9), reject because of invalid
++	 * access to map value (R8).
++	 *
++	 * Here, we need to inline those insns to access
++	 * R8 directly, since compiler may use other reg
++	 * once it figures out val==inner_map.
++	 */
++	asm volatile("r8 = %[val];\n"
++		     "r9 = %[inner_map];\n"
++		     "if r8 != r9 goto +1;\n"
++		     "%[ret] = *(u64 *)(r8 +0);\n"
++		     : [ret] "+r"(ret)
++		     : [inner_map] "r"(inner_map), [val] "r"(val)
++		     : "r8", "r9");
++
++	return ret;
++}
+diff --git a/tools/virtio/vringh_test.c b/tools/virtio/vringh_test.c
+index fa87b58bd5fa5..98ff808d6f0c2 100644
+--- a/tools/virtio/vringh_test.c
++++ b/tools/virtio/vringh_test.c
+@@ -308,6 +308,7 @@ static int parallel_test(u64 features,
+ 
+ 		gvdev.vdev.features = features;
+ 		INIT_LIST_HEAD(&gvdev.vdev.vqs);
++		spin_lock_init(&gvdev.vdev.vqs_list_lock);
+ 		gvdev.to_host_fd = to_host[1];
+ 		gvdev.notifies = 0;
+ 
+@@ -455,6 +456,7 @@ int main(int argc, char *argv[])
+ 	getrange = getrange_iov;
+ 	vdev.features = 0;
+ 	INIT_LIST_HEAD(&vdev.vqs);
++	spin_lock_init(&vdev.vqs_list_lock);
+ 
+ 	while (argv[1]) {
+ 		if (strcmp(argv[1], "--indirect") == 0)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-02-01  8:09 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2023-02-01  8:09 UTC (permalink / raw
  To: gentoo-commits

commit:     acbafa4e0b117593a6b65c36a83411024a9a6ba0
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Feb  1 08:08:16 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Feb  1 08:08:16 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=acbafa4e

Linux patch 5.10.166

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |    4 +
 1165_linux-5.10.166.patch | 5596 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5600 insertions(+)

diff --git a/0000_README b/0000_README
index 461b650e..ae58f6d6 100644
--- a/0000_README
+++ b/0000_README
@@ -703,6 +703,10 @@ Patch:  1164_linux-5.10.165.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.165
 
+Patch:  1165_linux-5.10.166.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.166
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1165_linux-5.10.166.patch b/1165_linux-5.10.166.patch
new file mode 100644
index 00000000..ff8e070e
--- /dev/null
+++ b/1165_linux-5.10.166.patch
@@ -0,0 +1,5596 @@
+diff --git a/Documentation/ABI/testing/sysfs-kernel-oops_count b/Documentation/ABI/testing/sysfs-kernel-oops_count
+new file mode 100644
+index 0000000000000..156cca9dbc960
+--- /dev/null
++++ b/Documentation/ABI/testing/sysfs-kernel-oops_count
+@@ -0,0 +1,6 @@
++What:		/sys/kernel/oops_count
++Date:		November 2022
++KernelVersion:	6.2.0
++Contact:	Linux Kernel Hardening List <linux-hardening@vger.kernel.org>
++Description:
++		Shows how many times the system has Oopsed since last boot.
+diff --git a/Documentation/ABI/testing/sysfs-kernel-warn_count b/Documentation/ABI/testing/sysfs-kernel-warn_count
+new file mode 100644
+index 0000000000000..90a029813717d
+--- /dev/null
++++ b/Documentation/ABI/testing/sysfs-kernel-warn_count
+@@ -0,0 +1,6 @@
++What:		/sys/kernel/warn_count
++Date:		November 2022
++KernelVersion:	6.2.0
++Contact:	Linux Kernel Hardening List <linux-hardening@vger.kernel.org>
++Description:
++		Shows how many times the system has Warned since last boot.
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index a4b1ebc2e70b0..6b0c7b650deaa 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -663,6 +663,15 @@ This is the default behavior.
+ an oops event is detected.
+ 
+ 
++oops_limit
++==========
++
++Number of kernel oopses after which the kernel should panic when
++``panic_on_oops`` is not set. Setting this to 0 disables checking
++the count. Setting this to  1 has the same effect as setting
++``panic_on_oops=1``. The default value is 10000.
++
++
+ osrelease, ostype & version
+ ===========================
+ 
+@@ -1469,6 +1478,16 @@ entry will default to 2 instead of 0.
+ 2 Unprivileged calls to ``bpf()`` are disabled
+ = =============================================================
+ 
++
++warn_limit
++==========
++
++Number of kernel warnings after which the kernel should panic when
++``panic_on_warn`` is not set. Setting this to 0 disables checking
++the warning count. Setting this to 1 has the same effect as setting
++``panic_on_warn=1``. The default value is 0.
++
++
+ watchdog
+ ========
+ 
+diff --git a/Makefile b/Makefile
+index 5fbff8603f443..efdfb40a82abb 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 165
++SUBLEVEL = 166
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
+index 921d4b6e4d956..8b0f81a58b948 100644
+--- a/arch/alpha/kernel/traps.c
++++ b/arch/alpha/kernel/traps.c
+@@ -192,7 +192,7 @@ die_if_kernel(char * str, struct pt_regs *regs, long err, unsigned long *r9_15)
+ 		local_irq_enable();
+ 		while (1);
+ 	}
+-	do_exit(SIGSEGV);
++	make_task_dead(SIGSEGV);
+ }
+ 
+ #ifndef CONFIG_MATHEMU
+@@ -577,7 +577,7 @@ do_entUna(void * va, unsigned long opcode, unsigned long reg,
+ 
+ 	printk("Bad unaligned kernel access at %016lx: %p %lx %lu\n",
+ 		pc, va, opcode, reg);
+-	do_exit(SIGSEGV);
++	make_task_dead(SIGSEGV);
+ 
+ got_exception:
+ 	/* Ok, we caught the exception, but we don't want it.  Is there
+@@ -632,7 +632,7 @@ got_exception:
+ 		local_irq_enable();
+ 		while (1);
+ 	}
+-	do_exit(SIGSEGV);
++	make_task_dead(SIGSEGV);
+ }
+ 
+ /*
+diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
+index 09172f017efc0..5d42f94887daf 100644
+--- a/arch/alpha/mm/fault.c
++++ b/arch/alpha/mm/fault.c
+@@ -204,7 +204,7 @@ retry:
+ 	printk(KERN_ALERT "Unable to handle kernel paging request at "
+ 	       "virtual address %016lx\n", address);
+ 	die_if_kernel("Oops", regs, cause, (unsigned long*)regs - 16);
+-	do_exit(SIGKILL);
++	make_task_dead(SIGKILL);
+ 
+ 	/* We ran out of memory, or some other thing happened to us that
+ 	   made us unable to handle the page fault gracefully.  */
+diff --git a/arch/arm/boot/dts/imx6qdl-gw560x.dtsi b/arch/arm/boot/dts/imx6qdl-gw560x.dtsi
+index 093a219a77ae1..f520e337698af 100644
+--- a/arch/arm/boot/dts/imx6qdl-gw560x.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-gw560x.dtsi
+@@ -634,7 +634,6 @@
+ &uart1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_uart1>;
+-	uart-has-rtscts;
+ 	rts-gpios = <&gpio7 1 GPIO_ACTIVE_HIGH>;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/imx6ul-pico-dwarf.dts b/arch/arm/boot/dts/imx6ul-pico-dwarf.dts
+index 162dc259edc8c..5a74c7f68eb62 100644
+--- a/arch/arm/boot/dts/imx6ul-pico-dwarf.dts
++++ b/arch/arm/boot/dts/imx6ul-pico-dwarf.dts
+@@ -32,7 +32,7 @@
+ };
+ 
+ &i2c2 {
+-	clock_frequency = <100000>;
++	clock-frequency = <100000>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_i2c2>;
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/imx7d-pico-dwarf.dts b/arch/arm/boot/dts/imx7d-pico-dwarf.dts
+index 5162fe227d1ea..fdc10563f1473 100644
+--- a/arch/arm/boot/dts/imx7d-pico-dwarf.dts
++++ b/arch/arm/boot/dts/imx7d-pico-dwarf.dts
+@@ -32,7 +32,7 @@
+ };
+ 
+ &i2c1 {
+-	clock_frequency = <100000>;
++	clock-frequency = <100000>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_i2c1>;
+ 	status = "okay";
+@@ -52,7 +52,7 @@
+ };
+ 
+ &i2c4 {
+-	clock_frequency = <100000>;
++	clock-frequency = <100000>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_i2c1>;
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/imx7d-pico-nymph.dts b/arch/arm/boot/dts/imx7d-pico-nymph.dts
+index 104a85254adbb..5afb1674e0125 100644
+--- a/arch/arm/boot/dts/imx7d-pico-nymph.dts
++++ b/arch/arm/boot/dts/imx7d-pico-nymph.dts
+@@ -43,7 +43,7 @@
+ };
+ 
+ &i2c1 {
+-	clock_frequency = <100000>;
++	clock-frequency = <100000>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_i2c1>;
+ 	status = "okay";
+@@ -64,7 +64,7 @@
+ };
+ 
+ &i2c2 {
+-	clock_frequency = <100000>;
++	clock-frequency = <100000>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_i2c2>;
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/sam9x60.dtsi b/arch/arm/boot/dts/sam9x60.dtsi
+index ec45ced3cde68..e1e0dec8cc1f2 100644
+--- a/arch/arm/boot/dts/sam9x60.dtsi
++++ b/arch/arm/boot/dts/sam9x60.dtsi
+@@ -567,7 +567,7 @@
+ 			mpddrc: mpddrc@ffffe800 {
+ 				compatible = "microchip,sam9x60-ddramc", "atmel,sama5d3-ddramc";
+ 				reg = <0xffffe800 0x200>;
+-				clocks = <&pmc PMC_TYPE_SYSTEM 2>, <&pmc PMC_TYPE_CORE PMC_MCK>;
++				clocks = <&pmc PMC_TYPE_SYSTEM 2>, <&pmc PMC_TYPE_PERIPHERAL 49>;
+ 				clock-names = "ddrck", "mpddr";
+ 			};
+ 
+diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
+index a531afad87fdb..7878c33e188d7 100644
+--- a/arch/arm/kernel/traps.c
++++ b/arch/arm/kernel/traps.c
+@@ -348,7 +348,7 @@ static void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
+ 	if (panic_on_oops)
+ 		panic("Fatal exception");
+ 	if (signr)
+-		do_exit(signr);
++		make_task_dead(signr);
+ }
+ 
+ /*
+diff --git a/arch/arm/mach-imx/cpu-imx25.c b/arch/arm/mach-imx/cpu-imx25.c
+index b2e1963f473de..2ee2d2813d577 100644
+--- a/arch/arm/mach-imx/cpu-imx25.c
++++ b/arch/arm/mach-imx/cpu-imx25.c
+@@ -23,6 +23,7 @@ static int mx25_read_cpu_rev(void)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx25-iim");
+ 	iim_base = of_iomap(np, 0);
++	of_node_put(np);
+ 	BUG_ON(!iim_base);
+ 	rev = readl(iim_base + MXC_IIMSREV);
+ 	iounmap(iim_base);
+diff --git a/arch/arm/mach-imx/cpu-imx27.c b/arch/arm/mach-imx/cpu-imx27.c
+index bf70e13bbe9ee..1d28939083683 100644
+--- a/arch/arm/mach-imx/cpu-imx27.c
++++ b/arch/arm/mach-imx/cpu-imx27.c
+@@ -28,6 +28,7 @@ static int mx27_read_cpu_rev(void)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx27-ccm");
+ 	ccm_base = of_iomap(np, 0);
++	of_node_put(np);
+ 	BUG_ON(!ccm_base);
+ 	/*
+ 	 * now we have access to the IO registers. As we need
+diff --git a/arch/arm/mach-imx/cpu-imx31.c b/arch/arm/mach-imx/cpu-imx31.c
+index b9c24b851d1ab..35c544924e509 100644
+--- a/arch/arm/mach-imx/cpu-imx31.c
++++ b/arch/arm/mach-imx/cpu-imx31.c
+@@ -39,6 +39,7 @@ static int mx31_read_cpu_rev(void)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx31-iim");
+ 	iim_base = of_iomap(np, 0);
++	of_node_put(np);
+ 	BUG_ON(!iim_base);
+ 
+ 	/* read SREV register from IIM module */
+diff --git a/arch/arm/mach-imx/cpu-imx35.c b/arch/arm/mach-imx/cpu-imx35.c
+index 80e7d8ab9f1bb..1fe75b39c2d99 100644
+--- a/arch/arm/mach-imx/cpu-imx35.c
++++ b/arch/arm/mach-imx/cpu-imx35.c
+@@ -21,6 +21,7 @@ static int mx35_read_cpu_rev(void)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx35-iim");
+ 	iim_base = of_iomap(np, 0);
++	of_node_put(np);
+ 	BUG_ON(!iim_base);
+ 
+ 	rev = imx_readl(iim_base + MXC_IIMSREV);
+diff --git a/arch/arm/mach-imx/cpu-imx5.c b/arch/arm/mach-imx/cpu-imx5.c
+index ad56263778f93..a67c89bf155dd 100644
+--- a/arch/arm/mach-imx/cpu-imx5.c
++++ b/arch/arm/mach-imx/cpu-imx5.c
+@@ -28,6 +28,7 @@ static u32 imx5_read_srev_reg(const char *compat)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, compat);
+ 	iim_base = of_iomap(np, 0);
++	of_node_put(np);
+ 	WARN_ON(!iim_base);
+ 
+ 	srev = readl(iim_base + IIM_SREV) & 0xff;
+diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
+index efa4020250315..af5177801fb10 100644
+--- a/arch/arm/mm/fault.c
++++ b/arch/arm/mm/fault.c
+@@ -125,7 +125,7 @@ __do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
+ 	show_pte(KERN_ALERT, mm, addr);
+ 	die("Oops", regs, fsr);
+ 	bust_spinlocks(0);
+-	do_exit(SIGKILL);
++	make_task_dead(SIGKILL);
+ }
+ 
+ /*
+diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
+index 959f057017384..c4d5b3bacf644 100644
+--- a/arch/arm/mm/nommu.c
++++ b/arch/arm/mm/nommu.c
+@@ -161,7 +161,7 @@ void __init paging_init(const struct machine_desc *mdesc)
+ 	mpu_setup();
+ 
+ 	/* allocate the zero page. */
+-	zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
++	zero_page = (void *)memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ 	if (!zero_page)
+ 		panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ 		      __func__, PAGE_SIZE, PAGE_SIZE);
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi
+index 5667009aae13a..674a0ab8a5396 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi
+@@ -70,7 +70,7 @@
+ &ecspi2 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_espi2>;
+-	cs-gpios = <&gpio5 9 GPIO_ACTIVE_LOW>;
++	cs-gpios = <&gpio5 13 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ 
+ 	eeprom@0 {
+@@ -187,7 +187,7 @@
+ 			MX8MM_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK		0x82
+ 			MX8MM_IOMUXC_ECSPI2_MOSI_ECSPI2_MOSI		0x82
+ 			MX8MM_IOMUXC_ECSPI2_MISO_ECSPI2_MISO		0x82
+-			MX8MM_IOMUXC_ECSPI1_SS0_GPIO5_IO9		0x41
++			MX8MM_IOMUXC_ECSPI2_SS0_GPIO5_IO13              0x41
+ 		>;
+ 	};
+ 
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index 2059d8f43f55f..2cdd53425509d 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -144,7 +144,7 @@ void die(const char *str, struct pt_regs *regs, int err)
+ 	raw_spin_unlock_irqrestore(&die_lock, flags);
+ 
+ 	if (ret != NOTIFY_STOP)
+-		do_exit(SIGSEGV);
++		make_task_dead(SIGSEGV);
+ }
+ 
+ static void arm64_show_signal(int signo, const char *str)
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index 795d224f184ff..2be856731e817 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -293,7 +293,7 @@ static void die_kernel_fault(const char *msg, unsigned long addr,
+ 	show_pte(addr);
+ 	die("Oops", regs, esr);
+ 	bust_spinlocks(0);
+-	do_exit(SIGKILL);
++	make_task_dead(SIGKILL);
+ }
+ 
+ static void __do_kernel_fault(unsigned long addr, unsigned int esr,
+diff --git a/arch/csky/abiv1/alignment.c b/arch/csky/abiv1/alignment.c
+index cb2a0d94a144d..2df115d0e2105 100644
+--- a/arch/csky/abiv1/alignment.c
++++ b/arch/csky/abiv1/alignment.c
+@@ -294,7 +294,7 @@ bad_area:
+ 				__func__, opcode, rz, rx, imm, addr);
+ 		show_regs(regs);
+ 		bust_spinlocks(0);
+-		do_exit(SIGKILL);
++		make_task_dead(SIGKILL);
+ 	}
+ 
+ 	force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)addr);
+diff --git a/arch/csky/kernel/traps.c b/arch/csky/kernel/traps.c
+index 22721468a04b2..15711efa14a4e 100644
+--- a/arch/csky/kernel/traps.c
++++ b/arch/csky/kernel/traps.c
+@@ -111,7 +111,7 @@ void die(struct pt_regs *regs, const char *str)
+ 	if (panic_on_oops)
+ 		panic("Fatal exception");
+ 	if (ret != NOTIFY_STOP)
+-		do_exit(SIGSEGV);
++		make_task_dead(SIGSEGV);
+ }
+ 
+ void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr)
+diff --git a/arch/h8300/kernel/traps.c b/arch/h8300/kernel/traps.c
+index 5d8b969cd8f34..cf23ccb50c17e 100644
+--- a/arch/h8300/kernel/traps.c
++++ b/arch/h8300/kernel/traps.c
+@@ -17,6 +17,7 @@
+ #include <linux/types.h>
+ #include <linux/sched.h>
+ #include <linux/sched/debug.h>
++#include <linux/sched/task.h>
+ #include <linux/mm_types.h>
+ #include <linux/kernel.h>
+ #include <linux/errno.h>
+@@ -110,7 +111,7 @@ void die(const char *str, struct pt_regs *fp, unsigned long err)
+ 	dump(fp);
+ 
+ 	spin_unlock_irq(&die_lock);
+-	do_exit(SIGSEGV);
++	make_task_dead(SIGSEGV);
+ }
+ 
+ static int kstack_depth_to_print = 24;
+diff --git a/arch/h8300/mm/fault.c b/arch/h8300/mm/fault.c
+index d4bc9c16f2df9..b465441f490df 100644
+--- a/arch/h8300/mm/fault.c
++++ b/arch/h8300/mm/fault.c
+@@ -51,7 +51,7 @@ asmlinkage int do_page_fault(struct pt_regs *regs, unsigned long address,
+ 	printk(" at virtual address %08lx\n", address);
+ 	if (!user_mode(regs))
+ 		die("Oops", regs, error_code);
+-	do_exit(SIGKILL);
++	make_task_dead(SIGKILL);
+ 
+ 	return 1;
+ }
+diff --git a/arch/hexagon/kernel/traps.c b/arch/hexagon/kernel/traps.c
+index 904134b37232f..b334e80717099 100644
+--- a/arch/hexagon/kernel/traps.c
++++ b/arch/hexagon/kernel/traps.c
+@@ -218,7 +218,7 @@ int die(const char *str, struct pt_regs *regs, long err)
+ 		panic("Fatal exception");
+ 
+ 	oops_exit();
+-	do_exit(err);
++	make_task_dead(err);
+ 	return 0;
+ }
+ 
+diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
+index 39b25a5a591b3..1d0579bc9d655 100644
+--- a/arch/ia64/Kconfig
++++ b/arch/ia64/Kconfig
+@@ -361,7 +361,7 @@ config ARCH_PROC_KCORE_TEXT
+ 	depends on PROC_KCORE
+ 
+ config IA64_MCA_RECOVERY
+-	tristate "MCA recovery from errors other than TLB."
++	bool "MCA recovery from errors other than TLB."
+ 
+ config IA64_PALINFO
+ 	tristate "/proc/pal support"
+diff --git a/arch/ia64/kernel/mca_drv.c b/arch/ia64/kernel/mca_drv.c
+index 2a40268c3d494..d9ee3b186249d 100644
+--- a/arch/ia64/kernel/mca_drv.c
++++ b/arch/ia64/kernel/mca_drv.c
+@@ -176,7 +176,7 @@ mca_handler_bh(unsigned long paddr, void *iip, unsigned long ipsr)
+ 	spin_unlock(&mca_bh_lock);
+ 
+ 	/* This process is about to be killed itself */
+-	do_exit(SIGKILL);
++	make_task_dead(SIGKILL);
+ }
+ 
+ /**
+diff --git a/arch/ia64/kernel/traps.c b/arch/ia64/kernel/traps.c
+index e13cb905930fb..753642366e12e 100644
+--- a/arch/ia64/kernel/traps.c
++++ b/arch/ia64/kernel/traps.c
+@@ -85,7 +85,7 @@ die (const char *str, struct pt_regs *regs, long err)
+ 	if (panic_on_oops)
+ 		panic("Fatal exception");
+ 
+-  	do_exit(SIGSEGV);
++	make_task_dead(SIGSEGV);
+ 	return 0;
+ }
+ 
+diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
+index cd9766d2b6e0e..829198180ca6f 100644
+--- a/arch/ia64/mm/fault.c
++++ b/arch/ia64/mm/fault.c
+@@ -274,7 +274,7 @@ retry:
+ 		regs = NULL;
+ 	bust_spinlocks(0);
+ 	if (regs)
+-		do_exit(SIGKILL);
++		make_task_dead(SIGKILL);
+ 	return;
+ 
+   out_of_memory:
+diff --git a/arch/m68k/kernel/traps.c b/arch/m68k/kernel/traps.c
+index 9e1261462bcc5..b2a31afb998c2 100644
+--- a/arch/m68k/kernel/traps.c
++++ b/arch/m68k/kernel/traps.c
+@@ -1136,7 +1136,7 @@ void die_if_kernel (char *str, struct pt_regs *fp, int nr)
+ 	pr_crit("%s: %08x\n", str, nr);
+ 	show_registers(fp);
+ 	add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+-	do_exit(SIGSEGV);
++	make_task_dead(SIGSEGV);
+ }
+ 
+ asmlinkage void set_esp0(unsigned long ssp)
+diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
+index ef46e77e97a5b..fcb3a0d8421c5 100644
+--- a/arch/m68k/mm/fault.c
++++ b/arch/m68k/mm/fault.c
+@@ -48,7 +48,7 @@ int send_fault_sig(struct pt_regs *regs)
+ 			pr_alert("Unable to handle kernel access");
+ 		pr_cont(" at virtual address %p\n", addr);
+ 		die_if_kernel("Oops", regs, 0 /*error_code*/);
+-		do_exit(SIGKILL);
++		make_task_dead(SIGKILL);
+ 	}
+ 
+ 	return 1;
+diff --git a/arch/microblaze/kernel/exceptions.c b/arch/microblaze/kernel/exceptions.c
+index cf99c411503e3..6d3a6a6442205 100644
+--- a/arch/microblaze/kernel/exceptions.c
++++ b/arch/microblaze/kernel/exceptions.c
+@@ -44,10 +44,10 @@ void die(const char *str, struct pt_regs *fp, long err)
+ 	pr_warn("Oops: %s, sig: %ld\n", str, err);
+ 	show_regs(fp);
+ 	spin_unlock_irq(&die_lock);
+-	/* do_exit() should take care of panic'ing from an interrupt
++	/* make_task_dead() should take care of panic'ing from an interrupt
+ 	 * context so we don't handle it here
+ 	 */
+-	do_exit(err);
++	make_task_dead(err);
+ }
+ 
+ /* for user application debugging */
+diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
+index b1fe4518bd221..ebd0101f0958d 100644
+--- a/arch/mips/kernel/traps.c
++++ b/arch/mips/kernel/traps.c
+@@ -413,7 +413,7 @@ void __noreturn die(const char *str, struct pt_regs *regs)
+ 	if (regs && kexec_should_crash(current))
+ 		crash_kexec(regs);
+ 
+-	do_exit(sig);
++	make_task_dead(sig);
+ }
+ 
+ extern struct exception_table_entry __start___dbe_table[];
+diff --git a/arch/nds32/kernel/fpu.c b/arch/nds32/kernel/fpu.c
+index 9edd7ed7d7bf8..701c09a668de4 100644
+--- a/arch/nds32/kernel/fpu.c
++++ b/arch/nds32/kernel/fpu.c
+@@ -223,7 +223,7 @@ inline void handle_fpu_exception(struct pt_regs *regs)
+ 		}
+ 	} else if (fpcsr & FPCSR_mskRIT) {
+ 		if (!user_mode(regs))
+-			do_exit(SIGILL);
++			make_task_dead(SIGILL);
+ 		si_signo = SIGILL;
+ 	}
+ 
+diff --git a/arch/nds32/kernel/traps.c b/arch/nds32/kernel/traps.c
+index 6a9772ba73927..12cdd6549360c 100644
+--- a/arch/nds32/kernel/traps.c
++++ b/arch/nds32/kernel/traps.c
+@@ -185,7 +185,7 @@ void die(const char *str, struct pt_regs *regs, int err)
+ 
+ 	bust_spinlocks(0);
+ 	spin_unlock_irq(&die_lock);
+-	do_exit(SIGSEGV);
++	make_task_dead(SIGSEGV);
+ }
+ 
+ EXPORT_SYMBOL(die);
+@@ -289,7 +289,7 @@ void unhandled_interruption(struct pt_regs *regs)
+ 	pr_emerg("unhandled_interruption\n");
+ 	show_regs(regs);
+ 	if (!user_mode(regs))
+-		do_exit(SIGKILL);
++		make_task_dead(SIGKILL);
+ 	force_sig(SIGKILL);
+ }
+ 
+@@ -300,7 +300,7 @@ void unhandled_exceptions(unsigned long entry, unsigned long addr,
+ 		 addr, type);
+ 	show_regs(regs);
+ 	if (!user_mode(regs))
+-		do_exit(SIGKILL);
++		make_task_dead(SIGKILL);
+ 	force_sig(SIGKILL);
+ }
+ 
+@@ -327,7 +327,7 @@ void do_revinsn(struct pt_regs *regs)
+ 	pr_emerg("Reserved Instruction\n");
+ 	show_regs(regs);
+ 	if (!user_mode(regs))
+-		do_exit(SIGILL);
++		make_task_dead(SIGILL);
+ 	force_sig(SIGILL);
+ }
+ 
+diff --git a/arch/nios2/kernel/traps.c b/arch/nios2/kernel/traps.c
+index b172da4eb1a95..86208178024f6 100644
+--- a/arch/nios2/kernel/traps.c
++++ b/arch/nios2/kernel/traps.c
+@@ -37,10 +37,10 @@ void die(const char *str, struct pt_regs *regs, long err)
+ 	show_regs(regs);
+ 	spin_unlock_irq(&die_lock);
+ 	/*
+-	 * do_exit() should take care of panic'ing from an interrupt
++	 * make_task_dead() should take care of panic'ing from an interrupt
+ 	 * context so we don't handle it here
+ 	 */
+-	do_exit(err);
++	make_task_dead(err);
+ }
+ 
+ void _exception(int signo, struct pt_regs *regs, int code, unsigned long addr)
+diff --git a/arch/openrisc/kernel/traps.c b/arch/openrisc/kernel/traps.c
+index 206e5325e61bc..fca5317f3ce17 100644
+--- a/arch/openrisc/kernel/traps.c
++++ b/arch/openrisc/kernel/traps.c
+@@ -212,7 +212,7 @@ void die(const char *str, struct pt_regs *regs, long err)
+ 	__asm__ __volatile__("l.nop   1");
+ 	do {} while (1);
+ #endif
+-	do_exit(SIGSEGV);
++	make_task_dead(SIGSEGV);
+ }
+ 
+ /* This is normally the 'Oops' routine */
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index bce47e0fb692c..2fad7867af100 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -268,7 +268,7 @@ void die_if_kernel(char *str, struct pt_regs *regs, long err)
+ 		panic("Fatal exception");
+ 
+ 	oops_exit();
+-	do_exit(SIGSEGV);
++	make_task_dead(SIGSEGV);
+ }
+ 
+ /* gdb uses break 4,8 */
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index 069d451240fa4..5e5a2448ae79a 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -245,7 +245,7 @@ static void oops_end(unsigned long flags, struct pt_regs *regs,
+ 
+ 	if (panic_on_oops)
+ 		panic("Fatal exception");
+-	do_exit(signr);
++	make_task_dead(signr);
+ }
+ NOKPROBE_SYMBOL(oops_end);
+ 
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index c1a13011fb8e5..23fe03ca7ec7b 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -57,7 +57,7 @@ void die(struct pt_regs *regs, const char *str)
+ 	if (panic_on_oops)
+ 		panic("Fatal exception");
+ 	if (ret != NOTIFY_STOP)
+-		do_exit(SIGSEGV);
++		make_task_dead(SIGSEGV);
+ }
+ 
+ void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr)
+diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
+index 8f84bbe0ac33e..54b12943cc7b0 100644
+--- a/arch/riscv/mm/fault.c
++++ b/arch/riscv/mm/fault.c
+@@ -34,7 +34,7 @@ static inline void no_context(struct pt_regs *regs, unsigned long addr)
+ 		(addr < PAGE_SIZE) ? "NULL pointer dereference" :
+ 		"paging request", addr);
+ 	die(regs, "Oops");
+-	do_exit(SIGKILL);
++	make_task_dead(SIGKILL);
+ }
+ 
+ static inline void mm_fault_error(struct pt_regs *regs, unsigned long addr, vm_fault_t fault)
+diff --git a/arch/s390/include/asm/debug.h b/arch/s390/include/asm/debug.h
+index c1b82bcc017cf..29a1badbe2f5d 100644
+--- a/arch/s390/include/asm/debug.h
++++ b/arch/s390/include/asm/debug.h
+@@ -4,8 +4,8 @@
+  *
+  *    Copyright IBM Corp. 1999, 2020
+  */
+-#ifndef DEBUG_H
+-#define DEBUG_H
++#ifndef _ASM_S390_DEBUG_H
++#define _ASM_S390_DEBUG_H
+ 
+ #include <linux/string.h>
+ #include <linux/spinlock.h>
+@@ -425,4 +425,4 @@ int debug_unregister_view(debug_info_t *id, struct debug_view *view);
+ #define PRINT_FATAL(x...)	printk(KERN_DEBUG PRINTK_HEADER x)
+ #endif /* DASD_DEBUG */
+ 
+-#endif /* DEBUG_H */
++#endif /* _ASM_S390_DEBUG_H */
+diff --git a/arch/s390/kernel/dumpstack.c b/arch/s390/kernel/dumpstack.c
+index 0dc4b258b98d5..763e726025b3d 100644
+--- a/arch/s390/kernel/dumpstack.c
++++ b/arch/s390/kernel/dumpstack.c
+@@ -214,5 +214,5 @@ void die(struct pt_regs *regs, const char *str)
+ 	if (panic_on_oops)
+ 		panic("Fatal exception: panic_on_oops");
+ 	oops_exit();
+-	do_exit(SIGSEGV);
++	make_task_dead(SIGSEGV);
+ }
+diff --git a/arch/s390/kernel/nmi.c b/arch/s390/kernel/nmi.c
+index 86c8d5370e7f4..0102376eca3db 100644
+--- a/arch/s390/kernel/nmi.c
++++ b/arch/s390/kernel/nmi.c
+@@ -178,7 +178,7 @@ void s390_handle_mcck(void)
+ 		       "malfunction (code 0x%016lx).\n", mcck.mcck_code);
+ 		printk(KERN_EMERG "mcck: task: %s, pid: %d.\n",
+ 		       current->comm, current->pid);
+-		do_exit(SIGSEGV);
++		make_task_dead(SIGSEGV);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(s390_handle_mcck);
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index b51ab19eb9721..64d1dfe6dca53 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -81,8 +81,9 @@ static int sca_inject_ext_call(struct kvm_vcpu *vcpu, int src_id)
+ 		struct esca_block *sca = vcpu->kvm->arch.sca;
+ 		union esca_sigp_ctrl *sigp_ctrl =
+ 			&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
+-		union esca_sigp_ctrl new_val = {0}, old_val = *sigp_ctrl;
++		union esca_sigp_ctrl new_val = {0}, old_val;
+ 
++		old_val = READ_ONCE(*sigp_ctrl);
+ 		new_val.scn = src_id;
+ 		new_val.c = 1;
+ 		old_val.c = 0;
+@@ -93,8 +94,9 @@ static int sca_inject_ext_call(struct kvm_vcpu *vcpu, int src_id)
+ 		struct bsca_block *sca = vcpu->kvm->arch.sca;
+ 		union bsca_sigp_ctrl *sigp_ctrl =
+ 			&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
+-		union bsca_sigp_ctrl new_val = {0}, old_val = *sigp_ctrl;
++		union bsca_sigp_ctrl new_val = {0}, old_val;
+ 
++		old_val = READ_ONCE(*sigp_ctrl);
+ 		new_val.scn = src_id;
+ 		new_val.c = 1;
+ 		old_val.c = 0;
+@@ -124,16 +126,18 @@ static void sca_clear_ext_call(struct kvm_vcpu *vcpu)
+ 		struct esca_block *sca = vcpu->kvm->arch.sca;
+ 		union esca_sigp_ctrl *sigp_ctrl =
+ 			&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
+-		union esca_sigp_ctrl old = *sigp_ctrl;
++		union esca_sigp_ctrl old;
+ 
++		old = READ_ONCE(*sigp_ctrl);
+ 		expect = old.value;
+ 		rc = cmpxchg(&sigp_ctrl->value, old.value, 0);
+ 	} else {
+ 		struct bsca_block *sca = vcpu->kvm->arch.sca;
+ 		union bsca_sigp_ctrl *sigp_ctrl =
+ 			&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
+-		union bsca_sigp_ctrl old = *sigp_ctrl;
++		union bsca_sigp_ctrl old;
+ 
++		old = READ_ONCE(*sigp_ctrl);
+ 		expect = old.value;
+ 		rc = cmpxchg(&sigp_ctrl->value, old.value, 0);
+ 	}
+diff --git a/arch/sh/kernel/traps.c b/arch/sh/kernel/traps.c
+index 9c3d32b80038a..4efffc18c8512 100644
+--- a/arch/sh/kernel/traps.c
++++ b/arch/sh/kernel/traps.c
+@@ -57,7 +57,7 @@ void die(const char *str, struct pt_regs *regs, long err)
+ 	if (panic_on_oops)
+ 		panic("Fatal exception");
+ 
+-	do_exit(SIGSEGV);
++	make_task_dead(SIGSEGV);
+ }
+ 
+ void die_if_kernel(const char *str, struct pt_regs *regs, long err)
+diff --git a/arch/sparc/kernel/traps_32.c b/arch/sparc/kernel/traps_32.c
+index 247a0d9683b2b..5d47f4a342261 100644
+--- a/arch/sparc/kernel/traps_32.c
++++ b/arch/sparc/kernel/traps_32.c
+@@ -86,9 +86,7 @@ void __noreturn die_if_kernel(char *str, struct pt_regs *regs)
+ 	}
+ 	printk("Instruction DUMP:");
+ 	instruction_dump ((unsigned long *) regs->pc);
+-	if(regs->psr & PSR_PS)
+-		do_exit(SIGKILL);
+-	do_exit(SIGSEGV);
++	make_task_dead((regs->psr & PSR_PS) ? SIGKILL : SIGSEGV);
+ }
+ 
+ void do_hw_interrupt(struct pt_regs *regs, unsigned long type)
+diff --git a/arch/sparc/kernel/traps_64.c b/arch/sparc/kernel/traps_64.c
+index a850dccd78ea1..814277d0e3e8f 100644
+--- a/arch/sparc/kernel/traps_64.c
++++ b/arch/sparc/kernel/traps_64.c
+@@ -2564,9 +2564,7 @@ void __noreturn die_if_kernel(char *str, struct pt_regs *regs)
+ 	}
+ 	if (panic_on_oops)
+ 		panic("Fatal exception");
+-	if (regs->tstate & TSTATE_PRIV)
+-		do_exit(SIGKILL);
+-	do_exit(SIGSEGV);
++	make_task_dead((regs->tstate & TSTATE_PRIV)? SIGKILL : SIGSEGV);
+ }
+ EXPORT_SYMBOL(die_if_kernel);
+ 
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index 8fcd6a42b3a18..70bd81b6c612e 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -1333,14 +1333,14 @@ SYM_CODE_START(asm_exc_nmi)
+ SYM_CODE_END(asm_exc_nmi)
+ 
+ .pushsection .text, "ax"
+-SYM_CODE_START(rewind_stack_do_exit)
++SYM_CODE_START(rewind_stack_and_make_dead)
+ 	/* Prevent any naive code from trying to unwind to our caller. */
+ 	xorl	%ebp, %ebp
+ 
+ 	movl	PER_CPU_VAR(cpu_current_top_of_stack), %esi
+ 	leal	-TOP_OF_KERNEL_STACK_PADDING-PTREGS_SIZE(%esi), %esp
+ 
+-	call	do_exit
++	call	make_task_dead
+ 1:	jmp 1b
+-SYM_CODE_END(rewind_stack_do_exit)
++SYM_CODE_END(rewind_stack_and_make_dead)
+ .popsection
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 559c82b834757..23212c53cef7f 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -1509,7 +1509,7 @@ SYM_CODE_END(ignore_sysret)
+ #endif
+ 
+ .pushsection .text, "ax"
+-SYM_CODE_START(rewind_stack_do_exit)
++SYM_CODE_START(rewind_stack_and_make_dead)
+ 	UNWIND_HINT_FUNC
+ 	/* Prevent any naive code from trying to unwind to our caller. */
+ 	xorl	%ebp, %ebp
+@@ -1518,6 +1518,6 @@ SYM_CODE_START(rewind_stack_do_exit)
+ 	leaq	-PTREGS_SIZE(%rax), %rsp
+ 	UNWIND_HINT_REGS
+ 
+-	call	do_exit
+-SYM_CODE_END(rewind_stack_do_exit)
++	call	make_task_dead
++SYM_CODE_END(rewind_stack_and_make_dead)
+ .popsection
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index 39eb276d02775..52eba415928a3 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -976,7 +976,7 @@ static int __init amd_core_pmu_init(void)
+ 		 * numbered counter following it.
+ 		 */
+ 		for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
+-			even_ctr_mask |= 1 << i;
++			even_ctr_mask |= BIT_ULL(i);
+ 
+ 		pair_constraint = (struct event_constraint)
+ 				    __EVENT_CONSTRAINT(0, even_ctr_mask, 0,
+diff --git a/arch/x86/kernel/acpi/cstate.c b/arch/x86/kernel/acpi/cstate.c
+index 49ae4e1ac9cd8..d28d43d774a26 100644
+--- a/arch/x86/kernel/acpi/cstate.c
++++ b/arch/x86/kernel/acpi/cstate.c
+@@ -79,6 +79,21 @@ void acpi_processor_power_init_bm_check(struct acpi_processor_flags *flags,
+ 		 */
+ 		flags->bm_control = 0;
+ 	}
++	if (c->x86_vendor == X86_VENDOR_AMD && c->x86 >= 0x17) {
++		/*
++		 * For all AMD Zen or newer CPUs that support C3, caches
++		 * should not be flushed by software while entering C3
++		 * type state. Set bm->check to 1 so that kernel doesn't
++		 * need to execute cache flush operation.
++		 */
++		flags->bm_check = 1;
++		/*
++		 * In current AMD C state implementation ARB_DIS is no longer
++		 * used. So set bm_control to zero to indicate ARB_DIS is not
++		 * required while entering C3 type state.
++		 */
++		flags->bm_control = 0;
++	}
+ }
+ EXPORT_SYMBOL(acpi_processor_power_init_bm_check);
+ 
+diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
+index 97aa900386cbb..b4964300153a1 100644
+--- a/arch/x86/kernel/dumpstack.c
++++ b/arch/x86/kernel/dumpstack.c
+@@ -351,7 +351,7 @@ unsigned long oops_begin(void)
+ }
+ NOKPROBE_SYMBOL(oops_begin);
+ 
+-void __noreturn rewind_stack_do_exit(int signr);
++void __noreturn rewind_stack_and_make_dead(int signr);
+ 
+ void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
+ {
+@@ -386,7 +386,7 @@ void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
+ 	 * reuse the task stack and that existing poisons are invalid.
+ 	 */
+ 	kasan_unpoison_task_stack(current);
+-	rewind_stack_do_exit(signr);
++	rewind_stack_and_make_dead(signr);
+ }
+ NOKPROBE_SYMBOL(oops_end);
+ 
+diff --git a/arch/x86/kernel/i8259.c b/arch/x86/kernel/i8259.c
+index 282b4ee1339f8..f325389d03516 100644
+--- a/arch/x86/kernel/i8259.c
++++ b/arch/x86/kernel/i8259.c
+@@ -114,6 +114,7 @@ static void make_8259A_irq(unsigned int irq)
+ 	disable_irq_nosync(irq);
+ 	io_apic_irqs &= ~(1<<irq);
+ 	irq_set_chip_and_handler(irq, &i8259A_chip, handle_level_irq);
++	irq_set_status_flags(irq, IRQ_LEVEL);
+ 	enable_irq(irq);
+ 	lapic_assign_legacy_vector(irq, true);
+ }
+diff --git a/arch/x86/kernel/irqinit.c b/arch/x86/kernel/irqinit.c
+index beb1bada1b0ab..c683666876f1c 100644
+--- a/arch/x86/kernel/irqinit.c
++++ b/arch/x86/kernel/irqinit.c
+@@ -65,8 +65,10 @@ void __init init_ISA_irqs(void)
+ 
+ 	legacy_pic->init(0);
+ 
+-	for (i = 0; i < nr_legacy_irqs(); i++)
++	for (i = 0; i < nr_legacy_irqs(); i++) {
+ 		irq_set_chip_and_handler(i, chip, handle_level_irq);
++		irq_set_status_flags(i, IRQ_LEVEL);
++	}
+ }
+ 
+ void __init init_IRQ(void)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index af6742d11ca14..8f7152e158e28 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -3332,18 +3332,15 @@ static u32 vmx_segment_access_rights(struct kvm_segment *var)
+ {
+ 	u32 ar;
+ 
+-	if (var->unusable || !var->present)
+-		ar = 1 << 16;
+-	else {
+-		ar = var->type & 15;
+-		ar |= (var->s & 1) << 4;
+-		ar |= (var->dpl & 3) << 5;
+-		ar |= (var->present & 1) << 7;
+-		ar |= (var->avl & 1) << 12;
+-		ar |= (var->l & 1) << 13;
+-		ar |= (var->db & 1) << 14;
+-		ar |= (var->g & 1) << 15;
+-	}
++	ar = var->type & 15;
++	ar |= (var->s & 1) << 4;
++	ar |= (var->dpl & 3) << 5;
++	ar |= (var->present & 1) << 7;
++	ar |= (var->avl & 1) << 12;
++	ar |= (var->l & 1) << 13;
++	ar |= (var->db & 1) << 14;
++	ar |= (var->g & 1) << 15;
++	ar |= (var->unusable || !var->present) << 16;
+ 
+ 	return ar;
+ }
+diff --git a/arch/xtensa/kernel/traps.c b/arch/xtensa/kernel/traps.c
+index efc3a29cde803..129f23c0ab553 100644
+--- a/arch/xtensa/kernel/traps.c
++++ b/arch/xtensa/kernel/traps.c
+@@ -545,5 +545,5 @@ void die(const char * str, struct pt_regs * regs, long err)
+ 	if (panic_on_oops)
+ 		panic("Fatal exception");
+ 
+-	do_exit(err);
++	make_task_dead(err);
+ }
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 26664f2a139eb..9afb79b322fb0 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -700,9 +700,7 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
+ 
+ 		if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
+ 			return false;
+-
+-		WARN_ONCE(1,
+-		       "Trying to write to read-only block-device %s (partno %d)\n",
++		pr_warn("Trying to write to read-only block-device %s (partno %d)\n",
+ 			bio_devname(bio, b), part->partno);
+ 		/* Older lvm-tools actually trigger this */
+ 		return false;
+diff --git a/drivers/base/test/test_async_driver_probe.c b/drivers/base/test/test_async_driver_probe.c
+index 3bb7beb127a96..c157a912d6739 100644
+--- a/drivers/base/test/test_async_driver_probe.c
++++ b/drivers/base/test/test_async_driver_probe.c
+@@ -146,7 +146,7 @@ static int __init test_async_probe_init(void)
+ 	calltime = ktime_get();
+ 	for_each_online_cpu(cpu) {
+ 		nid = cpu_to_node(cpu);
+-		pdev = &sync_dev[sync_id];
++		pdev = &async_dev[async_id];
+ 
+ 		*pdev = test_platform_device_register_node("test_async_driver",
+ 							   async_id,
+diff --git a/drivers/clk/clk-devres.c b/drivers/clk/clk-devres.c
+index f9d5b73343417..4fb4fd4b06bda 100644
+--- a/drivers/clk/clk-devres.c
++++ b/drivers/clk/clk-devres.c
+@@ -4,42 +4,101 @@
+ #include <linux/export.h>
+ #include <linux/gfp.h>
+ 
++struct devm_clk_state {
++	struct clk *clk;
++	void (*exit)(struct clk *clk);
++};
++
+ static void devm_clk_release(struct device *dev, void *res)
+ {
+-	clk_put(*(struct clk **)res);
++	struct devm_clk_state *state = res;
++
++	if (state->exit)
++		state->exit(state->clk);
++
++	clk_put(state->clk);
+ }
+ 
+-struct clk *devm_clk_get(struct device *dev, const char *id)
++static struct clk *__devm_clk_get(struct device *dev, const char *id,
++				  struct clk *(*get)(struct device *dev, const char *id),
++				  int (*init)(struct clk *clk),
++				  void (*exit)(struct clk *clk))
+ {
+-	struct clk **ptr, *clk;
++	struct devm_clk_state *state;
++	struct clk *clk;
++	int ret;
+ 
+-	ptr = devres_alloc(devm_clk_release, sizeof(*ptr), GFP_KERNEL);
+-	if (!ptr)
++	state = devres_alloc(devm_clk_release, sizeof(*state), GFP_KERNEL);
++	if (!state)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	clk = clk_get(dev, id);
+-	if (!IS_ERR(clk)) {
+-		*ptr = clk;
+-		devres_add(dev, ptr);
+-	} else {
+-		devres_free(ptr);
++	clk = get(dev, id);
++	if (IS_ERR(clk)) {
++		ret = PTR_ERR(clk);
++		goto err_clk_get;
+ 	}
+ 
++	if (init) {
++		ret = init(clk);
++		if (ret)
++			goto err_clk_init;
++	}
++
++	state->clk = clk;
++	state->exit = exit;
++
++	devres_add(dev, state);
++
+ 	return clk;
++
++err_clk_init:
++
++	clk_put(clk);
++err_clk_get:
++
++	devres_free(state);
++	return ERR_PTR(ret);
++}
++
++struct clk *devm_clk_get(struct device *dev, const char *id)
++{
++	return __devm_clk_get(dev, id, clk_get, NULL, NULL);
+ }
+ EXPORT_SYMBOL(devm_clk_get);
+ 
+-struct clk *devm_clk_get_optional(struct device *dev, const char *id)
++struct clk *devm_clk_get_prepared(struct device *dev, const char *id)
+ {
+-	struct clk *clk = devm_clk_get(dev, id);
++	return __devm_clk_get(dev, id, clk_get, clk_prepare, clk_unprepare);
++}
++EXPORT_SYMBOL_GPL(devm_clk_get_prepared);
+ 
+-	if (clk == ERR_PTR(-ENOENT))
+-		return NULL;
++struct clk *devm_clk_get_enabled(struct device *dev, const char *id)
++{
++	return __devm_clk_get(dev, id, clk_get,
++			      clk_prepare_enable, clk_disable_unprepare);
++}
++EXPORT_SYMBOL_GPL(devm_clk_get_enabled);
+ 
+-	return clk;
++struct clk *devm_clk_get_optional(struct device *dev, const char *id)
++{
++	return __devm_clk_get(dev, id, clk_get_optional, NULL, NULL);
+ }
+ EXPORT_SYMBOL(devm_clk_get_optional);
+ 
++struct clk *devm_clk_get_optional_prepared(struct device *dev, const char *id)
++{
++	return __devm_clk_get(dev, id, clk_get_optional,
++			      clk_prepare, clk_unprepare);
++}
++EXPORT_SYMBOL_GPL(devm_clk_get_optional_prepared);
++
++struct clk *devm_clk_get_optional_enabled(struct device *dev, const char *id)
++{
++	return __devm_clk_get(dev, id, clk_get_optional,
++			      clk_prepare_enable, clk_disable_unprepare);
++}
++EXPORT_SYMBOL_GPL(devm_clk_get_optional_enabled);
++
+ struct clk_bulk_devres {
+ 	struct clk_bulk_data *clks;
+ 	int num_clks;
+diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
+index 2de7fd18f66a1..f0be8a43ec496 100644
+--- a/drivers/cpufreq/armada-37xx-cpufreq.c
++++ b/drivers/cpufreq/armada-37xx-cpufreq.c
+@@ -443,7 +443,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 		return -ENODEV;
+ 	}
+ 
+-	clk = clk_get(cpu_dev, 0);
++	clk = clk_get(cpu_dev, NULL);
+ 	if (IS_ERR(clk)) {
+ 		dev_err(cpu_dev, "Cannot get clock for CPU0\n");
+ 		return PTR_ERR(clk);
+diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
+index a3734014db477..aea285651fbaf 100644
+--- a/drivers/cpufreq/cpufreq-dt-platdev.c
++++ b/drivers/cpufreq/cpufreq-dt-platdev.c
+@@ -130,6 +130,7 @@ static const struct of_device_id blacklist[] __initconst = {
+ 	{ .compatible = "nvidia,tegra30", },
+ 	{ .compatible = "nvidia,tegra124", },
+ 	{ .compatible = "nvidia,tegra210", },
++	{ .compatible = "nvidia,tegra234", },
+ 
+ 	{ .compatible = "qcom,apq8096", },
+ 	{ .compatible = "qcom,msm8996", },
+diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
+index af3ee288bc117..4ec7bb58c195f 100644
+--- a/drivers/dma/dmaengine.c
++++ b/drivers/dma/dmaengine.c
+@@ -451,7 +451,8 @@ static int dma_chan_get(struct dma_chan *chan)
+ 	/* The channel is already in use, update client count */
+ 	if (chan->client_count) {
+ 		__module_get(owner);
+-		goto out;
++		chan->client_count++;
++		return 0;
+ 	}
+ 
+ 	if (!try_module_get(owner))
+@@ -470,11 +471,11 @@ static int dma_chan_get(struct dma_chan *chan)
+ 			goto err_out;
+ 	}
+ 
++	chan->client_count++;
++
+ 	if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
+ 		balance_ref_count(chan);
+ 
+-out:
+-	chan->client_count++;
+ 	return 0;
+ 
+ err_out:
+diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
+index e76adc31ab66f..12ad4bb3c5f28 100644
+--- a/drivers/dma/xilinx/xilinx_dma.c
++++ b/drivers/dma/xilinx/xilinx_dma.c
+@@ -3119,8 +3119,10 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+ 	/* Initialize the channels */
+ 	for_each_child_of_node(node, child) {
+ 		err = xilinx_dma_child_probe(xdev, child);
+-		if (err < 0)
++		if (err < 0) {
++			of_node_put(child);
+ 			goto error;
++		}
+ 	}
+ 
+ 	if (xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
+diff --git a/drivers/edac/edac_device.c b/drivers/edac/edac_device.c
+index 8220ce5b87ca0..85c229985f905 100644
+--- a/drivers/edac/edac_device.c
++++ b/drivers/edac/edac_device.c
+@@ -34,6 +34,9 @@
+ static DEFINE_MUTEX(device_ctls_mutex);
+ static LIST_HEAD(edac_device_list);
+ 
++/* Default workqueue processing interval on this instance, in msecs */
++#define DEFAULT_POLL_INTERVAL 1000
++
+ #ifdef CONFIG_EDAC_DEBUG
+ static void edac_device_dump_device(struct edac_device_ctl_info *edac_dev)
+ {
+@@ -366,7 +369,7 @@ static void edac_device_workq_function(struct work_struct *work_req)
+ 	 * whole one second to save timers firing all over the period
+ 	 * between integral seconds
+ 	 */
+-	if (edac_dev->poll_msec == 1000)
++	if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL)
+ 		edac_queue_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
+ 	else
+ 		edac_queue_work(&edac_dev->work, edac_dev->delay);
+@@ -396,7 +399,7 @@ static void edac_device_workq_setup(struct edac_device_ctl_info *edac_dev,
+ 	 * timers firing on sub-second basis, while they are happy
+ 	 * to fire together on the 1 second exactly
+ 	 */
+-	if (edac_dev->poll_msec == 1000)
++	if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL)
+ 		edac_queue_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
+ 	else
+ 		edac_queue_work(&edac_dev->work, edac_dev->delay);
+@@ -430,7 +433,7 @@ void edac_device_reset_delay_period(struct edac_device_ctl_info *edac_dev,
+ 	edac_dev->delay	    = msecs_to_jiffies(msec);
+ 
+ 	/* See comment in edac_device_workq_setup() above */
+-	if (edac_dev->poll_msec == 1000)
++	if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL)
+ 		edac_mod_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
+ 	else
+ 		edac_mod_work(&edac_dev->work, edac_dev->delay);
+@@ -472,11 +475,7 @@ int edac_device_add_device(struct edac_device_ctl_info *edac_dev)
+ 		/* This instance is NOW RUNNING */
+ 		edac_dev->op_state = OP_RUNNING_POLL;
+ 
+-		/*
+-		 * enable workq processing on this instance,
+-		 * default = 1000 msec
+-		 */
+-		edac_device_workq_setup(edac_dev, 1000);
++		edac_device_workq_setup(edac_dev, edac_dev->poll_msec ?: DEFAULT_POLL_INTERVAL);
+ 	} else {
+ 		edac_dev->op_state = OP_RUNNING_INTERRUPT;
+ 	}
+diff --git a/drivers/edac/highbank_mc_edac.c b/drivers/edac/highbank_mc_edac.c
+index 61b76ec226af1..19fba258ae108 100644
+--- a/drivers/edac/highbank_mc_edac.c
++++ b/drivers/edac/highbank_mc_edac.c
+@@ -174,8 +174,10 @@ static int highbank_mc_probe(struct platform_device *pdev)
+ 	drvdata = mci->pvt_info;
+ 	platform_set_drvdata(pdev, mci);
+ 
+-	if (!devres_open_group(&pdev->dev, NULL, GFP_KERNEL))
+-		return -ENOMEM;
++	if (!devres_open_group(&pdev->dev, NULL, GFP_KERNEL)) {
++		res = -ENOMEM;
++		goto free;
++	}
+ 
+ 	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (!r) {
+@@ -243,6 +245,7 @@ err2:
+ 	edac_mc_del_mc(&pdev->dev);
+ err:
+ 	devres_release_group(&pdev->dev, NULL);
++free:
+ 	edac_mc_free(mci);
+ 	return res;
+ }
+diff --git a/drivers/edac/qcom_edac.c b/drivers/edac/qcom_edac.c
+index 97a27e42dd610..c45519f59dc11 100644
+--- a/drivers/edac/qcom_edac.c
++++ b/drivers/edac/qcom_edac.c
+@@ -252,7 +252,7 @@ clear:
+ static int
+ dump_syn_reg(struct edac_device_ctl_info *edev_ctl, int err_type, u32 bank)
+ {
+-	struct llcc_drv_data *drv = edev_ctl->pvt_info;
++	struct llcc_drv_data *drv = edev_ctl->dev->platform_data;
+ 	int ret;
+ 
+ 	ret = dump_syn_reg_values(drv, bank, err_type);
+@@ -289,7 +289,7 @@ static irqreturn_t
+ llcc_ecc_irq_handler(int irq, void *edev_ctl)
+ {
+ 	struct edac_device_ctl_info *edac_dev_ctl = edev_ctl;
+-	struct llcc_drv_data *drv = edac_dev_ctl->pvt_info;
++	struct llcc_drv_data *drv = edac_dev_ctl->dev->platform_data;
+ 	irqreturn_t irq_rc = IRQ_NONE;
+ 	u32 drp_error, trp_error, i;
+ 	int ret;
+@@ -358,7 +358,6 @@ static int qcom_llcc_edac_probe(struct platform_device *pdev)
+ 	edev_ctl->dev_name = dev_name(dev);
+ 	edev_ctl->ctl_name = "llcc";
+ 	edev_ctl->panic_on_ue = LLCC_ERP_PANIC_ON_UE;
+-	edev_ctl->pvt_info = llcc_driv_data;
+ 
+ 	rc = edac_device_add_device(edev_ctl);
+ 	if (rc)
+diff --git a/drivers/firmware/arm_scmi/shmem.c b/drivers/firmware/arm_scmi/shmem.c
+index 0e3eaea5d8526..56a1f61aa3ff2 100644
+--- a/drivers/firmware/arm_scmi/shmem.c
++++ b/drivers/firmware/arm_scmi/shmem.c
+@@ -58,10 +58,11 @@ u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem)
+ void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
+ 			  struct scmi_xfer *xfer)
+ {
++	size_t len = ioread32(&shmem->length);
++
+ 	xfer->hdr.status = ioread32(shmem->msg_payload);
+ 	/* Skip the length of header and status in shmem area i.e 8 bytes */
+-	xfer->rx.len = min_t(size_t, xfer->rx.len,
+-			     ioread32(&shmem->length) - 8);
++	xfer->rx.len = min_t(size_t, xfer->rx.len, len > 8 ? len - 8 : 0);
+ 
+ 	/* Take a copy to the rx buffer.. */
+ 	memcpy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len);
+@@ -70,8 +71,10 @@ void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
+ void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
+ 			      size_t max_len, struct scmi_xfer *xfer)
+ {
++	size_t len = ioread32(&shmem->length);
++
+ 	/* Skip only the length of header in shmem area i.e 4 bytes */
+-	xfer->rx.len = min_t(size_t, max_len, ioread32(&shmem->length) - 4);
++	xfer->rx.len = min_t(size_t, max_len, len > 4 ? len - 4 : 0);
+ 
+ 	/* Take a copy to the rx buffer.. */
+ 	memcpy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len);
+diff --git a/drivers/gpio/gpio-mxc.c b/drivers/gpio/gpio-mxc.c
+index ba6ed2a413f51..0d5a9fee3c70a 100644
+--- a/drivers/gpio/gpio-mxc.c
++++ b/drivers/gpio/gpio-mxc.c
+@@ -231,7 +231,7 @@ static int gpio_set_irq_type(struct irq_data *d, u32 type)
+ 
+ 	writel(1 << gpio_idx, port->base + GPIO_ISR);
+ 
+-	return 0;
++	return port->gc.direction_input(&port->gc, gpio_idx);
+ }
+ 
+ static void mxc_flip_edge(struct mxc_gpio_port *port, u32 gpio)
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index ca0fefeaab20b..ce739ba45c551 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -272,6 +272,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGM"),
+ 		},
+ 		.driver_data = (void *)&lcd1200x1920_rightside_up,
++	}, {	/* Lenovo Ideapad D330-10IGL (HD) */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGL"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
+ 	}, {	/* Lenovo Yoga Book X90F / X91F / X91L */
+ 		.matches = {
+ 		  /* Non exact match to match all versions */
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
+index b57dcad8865fa..7633f56bc0a4f 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
+@@ -823,6 +823,15 @@ nouveau_bo_move_m2mf(struct ttm_buffer_object *bo, int evict, bool intr,
+ 		if (ret == 0) {
+ 			ret = nouveau_fence_new(chan, false, &fence);
+ 			if (ret == 0) {
++				/* TODO: figure out a better solution here
++				 *
++				 * wait on the fence here explicitly as going through
++				 * ttm_bo_move_accel_cleanup somehow doesn't seem to do it.
++				 *
++				 * Without this the operation can timeout and we'll fallback to a
++				 * software copy, which might take several minutes to finish.
++				 */
++				nouveau_fence_wait(fence, false, false);
+ 				ret = ttm_bo_move_accel_cleanup(bo,
+ 								&fence->base,
+ 								evict, false,
+diff --git a/drivers/gpu/drm/panfrost/Kconfig b/drivers/gpu/drm/panfrost/Kconfig
+index 86cdc0ce79e65..77f4d32e52045 100644
+--- a/drivers/gpu/drm/panfrost/Kconfig
++++ b/drivers/gpu/drm/panfrost/Kconfig
+@@ -3,7 +3,8 @@
+ config DRM_PANFROST
+ 	tristate "Panfrost (DRM support for ARM Mali Midgard/Bifrost GPUs)"
+ 	depends on DRM
+-	depends on ARM || ARM64 || (COMPILE_TEST && !GENERIC_ATOMIC64)
++	depends on ARM || ARM64 || COMPILE_TEST
++	depends on !GENERIC_ATOMIC64    # for IOMMU_IO_PGTABLE_LPAE
+ 	depends on MMU
+ 	select DRM_SCHED
+ 	select IOMMU_SUPPORT
+diff --git a/drivers/hid/hid-betopff.c b/drivers/hid/hid-betopff.c
+index 467d789f9bc2d..25ed7b9a917e4 100644
+--- a/drivers/hid/hid-betopff.c
++++ b/drivers/hid/hid-betopff.c
+@@ -60,7 +60,6 @@ static int betopff_init(struct hid_device *hid)
+ 	struct list_head *report_list =
+ 			&hid->report_enum[HID_OUTPUT_REPORT].report_list;
+ 	struct input_dev *dev;
+-	int field_count = 0;
+ 	int error;
+ 	int i, j;
+ 
+@@ -86,19 +85,21 @@ static int betopff_init(struct hid_device *hid)
+ 	 * -----------------------------------------
+ 	 * Do init them with default value.
+ 	 */
++	if (report->maxfield < 4) {
++		hid_err(hid, "not enough fields in the report: %d\n",
++				report->maxfield);
++		return -ENODEV;
++	}
+ 	for (i = 0; i < report->maxfield; i++) {
++		if (report->field[i]->report_count < 1) {
++			hid_err(hid, "no values in the field\n");
++			return -ENODEV;
++		}
+ 		for (j = 0; j < report->field[i]->report_count; j++) {
+ 			report->field[i]->value[j] = 0x00;
+-			field_count++;
+ 		}
+ 	}
+ 
+-	if (field_count < 4) {
+-		hid_err(hid, "not enough fields in the report: %d\n",
+-				field_count);
+-		return -ENODEV;
+-	}
+-
+ 	betopff = kzalloc(sizeof(*betopff), GFP_KERNEL);
+ 	if (!betopff)
+ 		return -ENOMEM;
+diff --git a/drivers/hid/hid-bigbenff.c b/drivers/hid/hid-bigbenff.c
+index e8c5e3ac9fff1..e8b16665860d6 100644
+--- a/drivers/hid/hid-bigbenff.c
++++ b/drivers/hid/hid-bigbenff.c
+@@ -344,6 +344,11 @@ static int bigben_probe(struct hid_device *hid,
+ 	}
+ 
+ 	report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list;
++	if (list_empty(report_list)) {
++		hid_err(hid, "no output report found\n");
++		error = -ENODEV;
++		goto error_hw_stop;
++	}
+ 	bigben->report = list_entry(report_list->next,
+ 		struct hid_report, list);
+ 
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index eaaf732f06304..baadead947c8b 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -988,8 +988,8 @@ struct hid_report *hid_validate_values(struct hid_device *hid,
+ 		 * Validating on id 0 means we should examine the first
+ 		 * report in the list.
+ 		 */
+-		report = list_entry(
+-				hid->report_enum[type].report_list.next,
++		report = list_first_entry_or_null(
++				&hid->report_enum[type].report_list,
+ 				struct hid_report, list);
+ 	} else {
+ 		report = hid->report_enum[type].report_id_hash[id];
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 09c3f30f10d37..1d1306a6004e6 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -257,7 +257,6 @@
+ #define USB_DEVICE_ID_CH_AXIS_295	0x001c
+ 
+ #define USB_VENDOR_ID_CHERRY		0x046a
+-#define USB_DEVICE_ID_CHERRY_MOUSE_000C	0x000c
+ #define USB_DEVICE_ID_CHERRY_CYMOTION	0x0023
+ #define USB_DEVICE_ID_CHERRY_CYMOTION_SOLAR	0x0027
+ 
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 1efde40e51364..9f1fcbea19eb7 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -54,7 +54,6 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FLIGHT_SIM_YOKE), HID_QUIRK_NOGET },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_PRO_PEDALS), HID_QUIRK_NOGET },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_PRO_THROTTLE), HID_QUIRK_NOGET },
+-	{ HID_USB_DEVICE(USB_VENDOR_ID_CHERRY, USB_DEVICE_ID_CHERRY_MOUSE_000C), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K65RGB), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K65RGB_RAPIDFIRE), HID_QUIRK_NO_INIT_REPORTS | HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K70RGB), HID_QUIRK_NO_INIT_REPORTS },
+diff --git a/drivers/hid/intel-ish-hid/ishtp/dma-if.c b/drivers/hid/intel-ish-hid/ishtp/dma-if.c
+index 40554c8daca07..00046cbfd4ed0 100644
+--- a/drivers/hid/intel-ish-hid/ishtp/dma-if.c
++++ b/drivers/hid/intel-ish-hid/ishtp/dma-if.c
+@@ -104,6 +104,11 @@ void *ishtp_cl_get_dma_send_buf(struct ishtp_device *dev,
+ 	int required_slots = (size / DMA_SLOT_SIZE)
+ 		+ 1 * (size % DMA_SLOT_SIZE != 0);
+ 
++	if (!dev->ishtp_dma_tx_map) {
++		dev_err(dev->devc, "Fail to allocate Tx map\n");
++		return NULL;
++	}
++
+ 	spin_lock_irqsave(&dev->ishtp_dma_tx_lock, flags);
+ 	for (i = 0; i <= (dev->ishtp_dma_num_slots - required_slots); i++) {
+ 		free = 1;
+@@ -150,6 +155,11 @@ void ishtp_cl_release_dma_acked_mem(struct ishtp_device *dev,
+ 		return;
+ 	}
+ 
++	if (!dev->ishtp_dma_tx_map) {
++		dev_err(dev->devc, "Fail to allocate Tx map\n");
++		return;
++	}
++
+ 	i = (msg_addr - dev->ishtp_host_dma_tx_buf) / DMA_SLOT_SIZE;
+ 	spin_lock_irqsave(&dev->ishtp_dma_tx_lock, flags);
+ 	for (j = 0; j < acked_slots; j++) {
+diff --git a/drivers/i2c/busses/i2c-designware-common.c b/drivers/i2c/busses/i2c-designware-common.c
+index 9468c6c89b3f5..682fffaab2b40 100644
+--- a/drivers/i2c/busses/i2c-designware-common.c
++++ b/drivers/i2c/busses/i2c-designware-common.c
+@@ -24,6 +24,7 @@
+ #include <linux/regmap.h>
+ #include <linux/swab.h>
+ #include <linux/types.h>
++#include <linux/units.h>
+ 
+ #include "i2c-designware-core.h"
+ 
+@@ -347,7 +348,8 @@ u32 i2c_dw_scl_hcnt(u32 ic_clk, u32 tSYMBOL, u32 tf, int cond, int offset)
+ 		 *
+ 		 * If your hardware is free from tHD;STA issue, try this one.
+ 		 */
+-		return (ic_clk * tSYMBOL + 500000) / 1000000 - 8 + offset;
++		return DIV_ROUND_CLOSEST_ULL((u64)ic_clk * tSYMBOL, MICRO) -
++		       8 + offset;
+ 	else
+ 		/*
+ 		 * Conditional expression:
+@@ -363,8 +365,8 @@ u32 i2c_dw_scl_hcnt(u32 ic_clk, u32 tSYMBOL, u32 tf, int cond, int offset)
+ 		 * The reason why we need to take into account "tf" here,
+ 		 * is the same as described in i2c_dw_scl_lcnt().
+ 		 */
+-		return (ic_clk * (tSYMBOL + tf) + 500000) / 1000000
+-			- 3 + offset;
++		return DIV_ROUND_CLOSEST_ULL((u64)ic_clk * (tSYMBOL + tf), MICRO) -
++		       3 + offset;
+ }
+ 
+ u32 i2c_dw_scl_lcnt(u32 ic_clk, u32 tLOW, u32 tf, int offset)
+@@ -380,7 +382,8 @@ u32 i2c_dw_scl_lcnt(u32 ic_clk, u32 tLOW, u32 tf, int offset)
+ 	 * account the fall time of SCL signal (tf).  Default tf value
+ 	 * should be 0.3 us, for safety.
+ 	 */
+-	return ((ic_clk * (tLOW + tf) + 500000) / 1000000) - 1 + offset;
++	return DIV_ROUND_CLOSEST_ULL((u64)ic_clk * (tLOW + tf), MICRO) -
++	       1 + offset;
+ }
+ 
+ int i2c_dw_set_sda_hold(struct dw_i2c_dev *dev)
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index ad91c7c0faa54..4747541517252 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -32,12 +32,13 @@
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/suspend.h>
++#include <linux/units.h>
+ 
+ #include "i2c-designware-core.h"
+ 
+ static u32 i2c_dw_get_clk_rate_khz(struct dw_i2c_dev *dev)
+ {
+-	return clk_get_rate(dev->clk)/1000;
++	return clk_get_rate(dev->clk) / KILO;
+ }
+ 
+ #ifdef CONFIG_ACPI
+@@ -284,7 +285,7 @@ static int dw_i2c_plat_probe(struct platform_device *pdev)
+ 
+ 		if (!dev->sda_hold_time && t->sda_hold_ns)
+ 			dev->sda_hold_time =
+-				div_u64(clk_khz * t->sda_hold_ns + 500000, 1000000);
++				DIV_S64_ROUND_CLOSEST(clk_khz * t->sda_hold_ns, MICRO);
+ 	}
+ 
+ 	adap = &dev->adapter;
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index 5889639e90a1c..5123be0ab02f5 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -2911,15 +2911,18 @@ EXPORT_SYMBOL(__rdma_block_iter_start);
+ bool __rdma_block_iter_next(struct ib_block_iter *biter)
+ {
+ 	unsigned int block_offset;
++	unsigned int sg_delta;
+ 
+ 	if (!biter->__sg_nents || !biter->__sg)
+ 		return false;
+ 
+ 	biter->__dma_addr = sg_dma_address(biter->__sg) + biter->__sg_advance;
+ 	block_offset = biter->__dma_addr & (BIT_ULL(biter->__pg_bit) - 1);
+-	biter->__sg_advance += BIT_ULL(biter->__pg_bit) - block_offset;
++	sg_delta = BIT_ULL(biter->__pg_bit) - block_offset;
+ 
+-	if (biter->__sg_advance >= sg_dma_len(biter->__sg)) {
++	if (sg_dma_len(biter->__sg) - biter->__sg_advance > sg_delta) {
++		biter->__sg_advance += sg_delta;
++	} else {
+ 		biter->__sg_advance = 0;
+ 		biter->__sg = sg_next(biter->__sg);
+ 		biter->__sg_nents--;
+diff --git a/drivers/infiniband/hw/hfi1/user_exp_rcv.c b/drivers/infiniband/hw/hfi1/user_exp_rcv.c
+index b94fc7fd75a96..897923981855d 100644
+--- a/drivers/infiniband/hw/hfi1/user_exp_rcv.c
++++ b/drivers/infiniband/hw/hfi1/user_exp_rcv.c
+@@ -65,18 +65,25 @@ static void cacheless_tid_rb_remove(struct hfi1_filedata *fdata,
+ static bool tid_rb_invalidate(struct mmu_interval_notifier *mni,
+ 			      const struct mmu_notifier_range *range,
+ 			      unsigned long cur_seq);
++static bool tid_cover_invalidate(struct mmu_interval_notifier *mni,
++			         const struct mmu_notifier_range *range,
++			         unsigned long cur_seq);
+ static int program_rcvarray(struct hfi1_filedata *fd, struct tid_user_buf *,
+ 			    struct tid_group *grp,
+ 			    unsigned int start, u16 count,
+ 			    u32 *tidlist, unsigned int *tididx,
+ 			    unsigned int *pmapped);
+-static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo,
+-			      struct tid_group **grp);
++static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo);
++static void __clear_tid_node(struct hfi1_filedata *fd,
++			     struct tid_rb_node *node);
+ static void clear_tid_node(struct hfi1_filedata *fd, struct tid_rb_node *node);
+ 
+ static const struct mmu_interval_notifier_ops tid_mn_ops = {
+ 	.invalidate = tid_rb_invalidate,
+ };
++static const struct mmu_interval_notifier_ops tid_cover_ops = {
++	.invalidate = tid_cover_invalidate,
++};
+ 
+ /*
+  * Initialize context and file private data needed for Expected
+@@ -295,53 +302,65 @@ int hfi1_user_exp_rcv_setup(struct hfi1_filedata *fd,
+ 		tididx = 0, mapped, mapped_pages = 0;
+ 	u32 *tidlist = NULL;
+ 	struct tid_user_buf *tidbuf;
++	unsigned long mmu_seq = 0;
+ 
+ 	if (!PAGE_ALIGNED(tinfo->vaddr))
+ 		return -EINVAL;
++	if (tinfo->length == 0)
++		return -EINVAL;
+ 
+ 	tidbuf = kzalloc(sizeof(*tidbuf), GFP_KERNEL);
+ 	if (!tidbuf)
+ 		return -ENOMEM;
+ 
++	mutex_init(&tidbuf->cover_mutex);
+ 	tidbuf->vaddr = tinfo->vaddr;
+ 	tidbuf->length = tinfo->length;
+ 	tidbuf->psets = kcalloc(uctxt->expected_count, sizeof(*tidbuf->psets),
+ 				GFP_KERNEL);
+ 	if (!tidbuf->psets) {
+-		kfree(tidbuf);
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto fail_release_mem;
++	}
++
++	if (fd->use_mn) {
++		ret = mmu_interval_notifier_insert(
++			&tidbuf->notifier, current->mm,
++			tidbuf->vaddr, tidbuf->npages * PAGE_SIZE,
++			&tid_cover_ops);
++		if (ret)
++			goto fail_release_mem;
++		mmu_seq = mmu_interval_read_begin(&tidbuf->notifier);
+ 	}
+ 
+ 	pinned = pin_rcv_pages(fd, tidbuf);
+ 	if (pinned <= 0) {
+-		kfree(tidbuf->psets);
+-		kfree(tidbuf);
+-		return pinned;
++		ret = (pinned < 0) ? pinned : -ENOSPC;
++		goto fail_unpin;
+ 	}
+ 
+ 	/* Find sets of physically contiguous pages */
+ 	tidbuf->n_psets = find_phys_blocks(tidbuf, pinned);
+ 
+-	/*
+-	 * We don't need to access this under a lock since tid_used is per
+-	 * process and the same process cannot be in hfi1_user_exp_rcv_clear()
+-	 * and hfi1_user_exp_rcv_setup() at the same time.
+-	 */
++	/* Reserve the number of expected tids to be used. */
+ 	spin_lock(&fd->tid_lock);
+ 	if (fd->tid_used + tidbuf->n_psets > fd->tid_limit)
+ 		pageset_count = fd->tid_limit - fd->tid_used;
+ 	else
+ 		pageset_count = tidbuf->n_psets;
++	fd->tid_used += pageset_count;
+ 	spin_unlock(&fd->tid_lock);
+ 
+-	if (!pageset_count)
+-		goto bail;
++	if (!pageset_count) {
++		ret = -ENOSPC;
++		goto fail_unreserve;
++	}
+ 
+ 	ngroups = pageset_count / dd->rcv_entries.group_size;
+ 	tidlist = kcalloc(pageset_count, sizeof(*tidlist), GFP_KERNEL);
+ 	if (!tidlist) {
+ 		ret = -ENOMEM;
+-		goto nomem;
++		goto fail_unreserve;
+ 	}
+ 
+ 	tididx = 0;
+@@ -437,43 +456,78 @@ int hfi1_user_exp_rcv_setup(struct hfi1_filedata *fd,
+ 	}
+ unlock:
+ 	mutex_unlock(&uctxt->exp_mutex);
+-nomem:
+ 	hfi1_cdbg(TID, "total mapped: tidpairs:%u pages:%u (%d)", tididx,
+ 		  mapped_pages, ret);
+-	if (tididx) {
+-		spin_lock(&fd->tid_lock);
+-		fd->tid_used += tididx;
+-		spin_unlock(&fd->tid_lock);
+-		tinfo->tidcnt = tididx;
+-		tinfo->length = mapped_pages * PAGE_SIZE;
+-
+-		if (copy_to_user(u64_to_user_ptr(tinfo->tidlist),
+-				 tidlist, sizeof(tidlist[0]) * tididx)) {
+-			/*
+-			 * On failure to copy to the user level, we need to undo
+-			 * everything done so far so we don't leak resources.
+-			 */
+-			tinfo->tidlist = (unsigned long)&tidlist;
+-			hfi1_user_exp_rcv_clear(fd, tinfo);
+-			tinfo->tidlist = 0;
+-			ret = -EFAULT;
+-			goto bail;
++
++	/* fail if nothing was programmed, set error if none provided */
++	if (tididx == 0) {
++		if (ret >= 0)
++			ret = -ENOSPC;
++		goto fail_unreserve;
++	}
++
++	/* adjust reserved tid_used to actual count */
++	spin_lock(&fd->tid_lock);
++	fd->tid_used -= pageset_count - tididx;
++	spin_unlock(&fd->tid_lock);
++
++	/* unpin all pages not covered by a TID */
++	unpin_rcv_pages(fd, tidbuf, NULL, mapped_pages, pinned - mapped_pages,
++			false);
++
++	if (fd->use_mn) {
++		/* check for an invalidate during setup */
++		bool fail = false;
++
++		mutex_lock(&tidbuf->cover_mutex);
++		fail = mmu_interval_read_retry(&tidbuf->notifier, mmu_seq);
++		mutex_unlock(&tidbuf->cover_mutex);
++
++		if (fail) {
++			ret = -EBUSY;
++			goto fail_unprogram;
+ 		}
+ 	}
+ 
+-	/*
+-	 * If not everything was mapped (due to insufficient RcvArray entries,
+-	 * for example), unpin all unmapped pages so we can pin them nex time.
+-	 */
+-	if (mapped_pages != pinned)
+-		unpin_rcv_pages(fd, tidbuf, NULL, mapped_pages,
+-				(pinned - mapped_pages), false);
+-bail:
++	tinfo->tidcnt = tididx;
++	tinfo->length = mapped_pages * PAGE_SIZE;
++
++	if (copy_to_user(u64_to_user_ptr(tinfo->tidlist),
++			 tidlist, sizeof(tidlist[0]) * tididx)) {
++		ret = -EFAULT;
++		goto fail_unprogram;
++	}
++
++	if (fd->use_mn)
++		mmu_interval_notifier_remove(&tidbuf->notifier);
++	kfree(tidbuf->pages);
+ 	kfree(tidbuf->psets);
++	kfree(tidbuf);
+ 	kfree(tidlist);
++	return 0;
++
++fail_unprogram:
++	/* unprogram, unmap, and unpin all allocated TIDs */
++	tinfo->tidlist = (unsigned long)tidlist;
++	hfi1_user_exp_rcv_clear(fd, tinfo);
++	tinfo->tidlist = 0;
++	pinned = 0;		/* nothing left to unpin */
++	pageset_count = 0;	/* nothing left reserved */
++fail_unreserve:
++	spin_lock(&fd->tid_lock);
++	fd->tid_used -= pageset_count;
++	spin_unlock(&fd->tid_lock);
++fail_unpin:
++	if (fd->use_mn)
++		mmu_interval_notifier_remove(&tidbuf->notifier);
++	if (pinned > 0)
++		unpin_rcv_pages(fd, tidbuf, NULL, 0, pinned, false);
++fail_release_mem:
+ 	kfree(tidbuf->pages);
++	kfree(tidbuf->psets);
+ 	kfree(tidbuf);
+-	return ret > 0 ? 0 : ret;
++	kfree(tidlist);
++	return ret;
+ }
+ 
+ int hfi1_user_exp_rcv_clear(struct hfi1_filedata *fd,
+@@ -494,7 +548,7 @@ int hfi1_user_exp_rcv_clear(struct hfi1_filedata *fd,
+ 
+ 	mutex_lock(&uctxt->exp_mutex);
+ 	for (tididx = 0; tididx < tinfo->tidcnt; tididx++) {
+-		ret = unprogram_rcvarray(fd, tidinfo[tididx], NULL);
++		ret = unprogram_rcvarray(fd, tidinfo[tididx]);
+ 		if (ret) {
+ 			hfi1_cdbg(TID, "Failed to unprogram rcv array %d",
+ 				  ret);
+@@ -750,6 +804,7 @@ static int set_rcvarray_entry(struct hfi1_filedata *fd,
+ 	}
+ 
+ 	node->fdata = fd;
++	mutex_init(&node->invalidate_mutex);
+ 	node->phys = page_to_phys(pages[0]);
+ 	node->npages = npages;
+ 	node->rcventry = rcventry;
+@@ -765,11 +820,6 @@ static int set_rcvarray_entry(struct hfi1_filedata *fd,
+ 			&tid_mn_ops);
+ 		if (ret)
+ 			goto out_unmap;
+-		/*
+-		 * FIXME: This is in the wrong order, the notifier should be
+-		 * established before the pages are pinned by pin_rcv_pages.
+-		 */
+-		mmu_interval_read_begin(&node->notifier);
+ 	}
+ 	fd->entry_to_rb[node->rcventry - uctxt->expected_base] = node;
+ 
+@@ -789,8 +839,7 @@ out_unmap:
+ 	return -EFAULT;
+ }
+ 
+-static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo,
+-			      struct tid_group **grp)
++static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo)
+ {
+ 	struct hfi1_ctxtdata *uctxt = fd->uctxt;
+ 	struct hfi1_devdata *dd = uctxt->dd;
+@@ -813,9 +862,6 @@ static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo,
+ 	if (!node || node->rcventry != (uctxt->expected_base + rcventry))
+ 		return -EBADF;
+ 
+-	if (grp)
+-		*grp = node->grp;
+-
+ 	if (fd->use_mn)
+ 		mmu_interval_notifier_remove(&node->notifier);
+ 	cacheless_tid_rb_remove(fd, node);
+@@ -823,23 +869,34 @@ static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo,
+ 	return 0;
+ }
+ 
+-static void clear_tid_node(struct hfi1_filedata *fd, struct tid_rb_node *node)
++static void __clear_tid_node(struct hfi1_filedata *fd, struct tid_rb_node *node)
+ {
+ 	struct hfi1_ctxtdata *uctxt = fd->uctxt;
+ 	struct hfi1_devdata *dd = uctxt->dd;
+ 
++	mutex_lock(&node->invalidate_mutex);
++	if (node->freed)
++		goto done;
++	node->freed = true;
++
+ 	trace_hfi1_exp_tid_unreg(uctxt->ctxt, fd->subctxt, node->rcventry,
+ 				 node->npages,
+ 				 node->notifier.interval_tree.start, node->phys,
+ 				 node->dma_addr);
+ 
+-	/*
+-	 * Make sure device has seen the write before we unpin the
+-	 * pages.
+-	 */
++	/* Make sure device has seen the write before pages are unpinned */
+ 	hfi1_put_tid(dd, node->rcventry, PT_INVALID_FLUSH, 0, 0);
+ 
+ 	unpin_rcv_pages(fd, NULL, node, 0, node->npages, true);
++done:
++	mutex_unlock(&node->invalidate_mutex);
++}
++
++static void clear_tid_node(struct hfi1_filedata *fd, struct tid_rb_node *node)
++{
++	struct hfi1_ctxtdata *uctxt = fd->uctxt;
++
++	__clear_tid_node(fd, node);
+ 
+ 	node->grp->used--;
+ 	node->grp->map &= ~(1 << (node->rcventry - node->grp->base));
+@@ -898,10 +955,16 @@ static bool tid_rb_invalidate(struct mmu_interval_notifier *mni,
+ 	if (node->freed)
+ 		return true;
+ 
++	/* take action only if unmapping */
++	if (range->event != MMU_NOTIFY_UNMAP)
++		return true;
++
+ 	trace_hfi1_exp_tid_inval(uctxt->ctxt, fdata->subctxt,
+ 				 node->notifier.interval_tree.start,
+ 				 node->rcventry, node->npages, node->dma_addr);
+-	node->freed = true;
++
++	/* clear the hardware rcvarray entry */
++	__clear_tid_node(fdata, node);
+ 
+ 	spin_lock(&fdata->invalid_lock);
+ 	if (fdata->invalid_tid_idx < uctxt->expected_count) {
+@@ -931,6 +994,23 @@ static bool tid_rb_invalidate(struct mmu_interval_notifier *mni,
+ 	return true;
+ }
+ 
++static bool tid_cover_invalidate(struct mmu_interval_notifier *mni,
++			         const struct mmu_notifier_range *range,
++			         unsigned long cur_seq)
++{
++	struct tid_user_buf *tidbuf =
++		container_of(mni, struct tid_user_buf, notifier);
++
++	/* take action only if unmapping */
++	if (range->event == MMU_NOTIFY_UNMAP) {
++		mutex_lock(&tidbuf->cover_mutex);
++		mmu_interval_set_seq(mni, cur_seq);
++		mutex_unlock(&tidbuf->cover_mutex);
++	}
++
++	return true;
++}
++
+ static void cacheless_tid_rb_remove(struct hfi1_filedata *fdata,
+ 				    struct tid_rb_node *tnode)
+ {
+diff --git a/drivers/infiniband/hw/hfi1/user_exp_rcv.h b/drivers/infiniband/hw/hfi1/user_exp_rcv.h
+index d45c7b6988d4d..849f265f2f118 100644
+--- a/drivers/infiniband/hw/hfi1/user_exp_rcv.h
++++ b/drivers/infiniband/hw/hfi1/user_exp_rcv.h
+@@ -57,6 +57,8 @@ struct tid_pageset {
+ };
+ 
+ struct tid_user_buf {
++	struct mmu_interval_notifier notifier;
++	struct mutex cover_mutex;
+ 	unsigned long vaddr;
+ 	unsigned long length;
+ 	unsigned int npages;
+@@ -68,6 +70,7 @@ struct tid_user_buf {
+ struct tid_rb_node {
+ 	struct mmu_interval_notifier notifier;
+ 	struct hfi1_filedata *fdata;
++	struct mutex invalidate_mutex; /* covers hw removal */
+ 	unsigned long phys;
+ 	struct tid_group *grp;
+ 	u32 rcventry;
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index f1013b950d579..82577095e175e 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -191,7 +191,6 @@ static const char * const smbus_pnp_ids[] = {
+ 	"SYN3221", /* HP 15-ay000 */
+ 	"SYN323d", /* HP Spectre X360 13-w013dx */
+ 	"SYN3257", /* HP Envy 13-ad105ng */
+-	"SYN3286", /* HP Laptop 15-da3001TU */
+ 	NULL
+ };
+ 
+diff --git a/drivers/memory/atmel-sdramc.c b/drivers/memory/atmel-sdramc.c
+index 9c49d00c2a966..ea6e9e1eaf046 100644
+--- a/drivers/memory/atmel-sdramc.c
++++ b/drivers/memory/atmel-sdramc.c
+@@ -47,19 +47,17 @@ static int atmel_ramc_probe(struct platform_device *pdev)
+ 	caps = of_device_get_match_data(&pdev->dev);
+ 
+ 	if (caps->has_ddrck) {
+-		clk = devm_clk_get(&pdev->dev, "ddrck");
++		clk = devm_clk_get_enabled(&pdev->dev, "ddrck");
+ 		if (IS_ERR(clk))
+ 			return PTR_ERR(clk);
+-		clk_prepare_enable(clk);
+ 	}
+ 
+ 	if (caps->has_mpddr_clk) {
+-		clk = devm_clk_get(&pdev->dev, "mpddr");
++		clk = devm_clk_get_enabled(&pdev->dev, "mpddr");
+ 		if (IS_ERR(clk)) {
+ 			pr_err("AT91 RAMC: couldn't get mpddr clock\n");
+ 			return PTR_ERR(clk);
+ 		}
+-		clk_prepare_enable(clk);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/memory/mvebu-devbus.c b/drivers/memory/mvebu-devbus.c
+index 8450638e86700..efc6c08db2b70 100644
+--- a/drivers/memory/mvebu-devbus.c
++++ b/drivers/memory/mvebu-devbus.c
+@@ -280,10 +280,9 @@ static int mvebu_devbus_probe(struct platform_device *pdev)
+ 	if (IS_ERR(devbus->base))
+ 		return PTR_ERR(devbus->base);
+ 
+-	clk = devm_clk_get(&pdev->dev, NULL);
++	clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ 	if (IS_ERR(clk))
+ 		return PTR_ERR(clk);
+-	clk_prepare_enable(clk);
+ 
+ 	/*
+ 	 * Obtain clock period in picoseconds,
+diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
+index ece4c0512ee2d..f42f2f4e4b60e 100644
+--- a/drivers/net/dsa/microchip/ksz9477.c
++++ b/drivers/net/dsa/microchip/ksz9477.c
+@@ -678,10 +678,10 @@ static int ksz9477_port_fdb_del(struct dsa_switch *ds, int port,
+ 		ksz_read32(dev, REG_SW_ALU_VAL_D, &alu_table[3]);
+ 
+ 		/* clear forwarding port */
+-		alu_table[2] &= ~BIT(port);
++		alu_table[1] &= ~BIT(port);
+ 
+ 		/* if there is no port to forward, clear table */
+-		if ((alu_table[2] & ALU_V_PORT_MAP) == 0) {
++		if ((alu_table[1] & ALU_V_PORT_MAP) == 0) {
+ 			alu_table[0] = 0;
+ 			alu_table[1] = 0;
+ 			alu_table[2] = 0;
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+index d5fd49dd25f33..decc1c09a031b 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+@@ -524,19 +524,28 @@ static void xgbe_disable_vxlan(struct xgbe_prv_data *pdata)
+ 	netif_dbg(pdata, drv, pdata->netdev, "VXLAN acceleration disabled\n");
+ }
+ 
++static unsigned int xgbe_get_fc_queue_count(struct xgbe_prv_data *pdata)
++{
++	unsigned int max_q_count = XGMAC_MAX_FLOW_CONTROL_QUEUES;
++
++	/* From MAC ver 30H the TFCR is per priority, instead of per queue */
++	if (XGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER) >= 0x30)
++		return max_q_count;
++	else
++		return min_t(unsigned int, pdata->tx_q_count, max_q_count);
++}
++
+ static int xgbe_disable_tx_flow_control(struct xgbe_prv_data *pdata)
+ {
+-	unsigned int max_q_count, q_count;
+ 	unsigned int reg, reg_val;
+-	unsigned int i;
++	unsigned int i, q_count;
+ 
+ 	/* Clear MTL flow control */
+ 	for (i = 0; i < pdata->rx_q_count; i++)
+ 		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, EHFC, 0);
+ 
+ 	/* Clear MAC flow control */
+-	max_q_count = XGMAC_MAX_FLOW_CONTROL_QUEUES;
+-	q_count = min_t(unsigned int, pdata->tx_q_count, max_q_count);
++	q_count = xgbe_get_fc_queue_count(pdata);
+ 	reg = MAC_Q0TFCR;
+ 	for (i = 0; i < q_count; i++) {
+ 		reg_val = XGMAC_IOREAD(pdata, reg);
+@@ -553,9 +562,8 @@ static int xgbe_enable_tx_flow_control(struct xgbe_prv_data *pdata)
+ {
+ 	struct ieee_pfc *pfc = pdata->pfc;
+ 	struct ieee_ets *ets = pdata->ets;
+-	unsigned int max_q_count, q_count;
+ 	unsigned int reg, reg_val;
+-	unsigned int i;
++	unsigned int i, q_count;
+ 
+ 	/* Set MTL flow control */
+ 	for (i = 0; i < pdata->rx_q_count; i++) {
+@@ -579,8 +587,7 @@ static int xgbe_enable_tx_flow_control(struct xgbe_prv_data *pdata)
+ 	}
+ 
+ 	/* Set MAC flow control */
+-	max_q_count = XGMAC_MAX_FLOW_CONTROL_QUEUES;
+-	q_count = min_t(unsigned int, pdata->tx_q_count, max_q_count);
++	q_count = xgbe_get_fc_queue_count(pdata);
+ 	reg = MAC_Q0TFCR;
+ 	for (i = 0; i < q_count; i++) {
+ 		reg_val = XGMAC_IOREAD(pdata, reg);
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+index 0c5c1b1556830..43fdd111235a6 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+@@ -496,6 +496,7 @@ static enum xgbe_an xgbe_an73_tx_training(struct xgbe_prv_data *pdata,
+ 	reg |= XGBE_KR_TRAINING_ENABLE;
+ 	reg |= XGBE_KR_TRAINING_START;
+ 	XMDIO_WRITE(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL, reg);
++	pdata->kr_start_time = jiffies;
+ 
+ 	netif_dbg(pdata, link, pdata->netdev,
+ 		  "KR training initiated\n");
+@@ -632,6 +633,8 @@ static enum xgbe_an xgbe_an73_incompat_link(struct xgbe_prv_data *pdata)
+ 
+ 	xgbe_switch_mode(pdata);
+ 
++	pdata->an_result = XGBE_AN_READY;
++
+ 	xgbe_an_restart(pdata);
+ 
+ 	return XGBE_AN_INCOMPAT_LINK;
+@@ -1275,9 +1278,30 @@ static bool xgbe_phy_aneg_done(struct xgbe_prv_data *pdata)
+ static void xgbe_check_link_timeout(struct xgbe_prv_data *pdata)
+ {
+ 	unsigned long link_timeout;
++	unsigned long kr_time;
++	int wait;
+ 
+ 	link_timeout = pdata->link_check + (XGBE_LINK_TIMEOUT * HZ);
+ 	if (time_after(jiffies, link_timeout)) {
++		if ((xgbe_cur_mode(pdata) == XGBE_MODE_KR) &&
++		    pdata->phy.autoneg == AUTONEG_ENABLE) {
++			/* AN restart should not happen while KR training is in progress.
++			 * The while loop ensures no AN restart during KR training,
++			 * waits up to 500ms and AN restart is triggered only if KR
++			 * training is failed.
++			 */
++			wait = XGBE_KR_TRAINING_WAIT_ITER;
++			while (wait--) {
++				kr_time = pdata->kr_start_time +
++					  msecs_to_jiffies(XGBE_AN_MS_TIMEOUT);
++				if (time_after(jiffies, kr_time))
++					break;
++				/* AN restart is not required, if AN result is COMPLETE */
++				if (pdata->an_result == XGBE_AN_COMPLETE)
++					return;
++				usleep_range(10000, 11000);
++			}
++		}
+ 		netif_dbg(pdata, link, pdata->netdev, "AN link timeout\n");
+ 		xgbe_phy_config_aneg(pdata);
+ 	}
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
+index 3305979a9f7c1..e0b8f3c4cc0b2 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
+@@ -289,6 +289,7 @@
+ /* Auto-negotiation */
+ #define XGBE_AN_MS_TIMEOUT		500
+ #define XGBE_LINK_TIMEOUT		5
++#define XGBE_KR_TRAINING_WAIT_ITER	50
+ 
+ #define XGBE_SGMII_AN_LINK_STATUS	BIT(1)
+ #define XGBE_SGMII_AN_LINK_SPEED	(BIT(2) | BIT(3))
+@@ -1253,6 +1254,7 @@ struct xgbe_prv_data {
+ 	unsigned int parallel_detect;
+ 	unsigned int fec_ability;
+ 	unsigned long an_start;
++	unsigned long kr_start_time;
+ 	enum xgbe_an_mode an_mode;
+ 
+ 	/* I2C support */
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index be96116dc2ccb..613ca6124e3ce 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -11185,7 +11185,7 @@ static void tg3_reset_task(struct work_struct *work)
+ 	rtnl_lock();
+ 	tg3_full_lock(tp, 0);
+ 
+-	if (!netif_running(tp->dev)) {
++	if (tp->pcierr_recovery || !netif_running(tp->dev)) {
+ 		tg3_flag_clear(tp, RESET_TASK_PENDING);
+ 		tg3_full_unlock(tp);
+ 		rtnl_unlock();
+@@ -18179,6 +18179,9 @@ static pci_ers_result_t tg3_io_error_detected(struct pci_dev *pdev,
+ 
+ 	netdev_info(netdev, "PCI I/O error detected\n");
+ 
++	/* Want to make sure that the reset task doesn't run */
++	tg3_reset_task_cancel(tp);
++
+ 	rtnl_lock();
+ 
+ 	/* Could be second call or maybe we don't have netdev yet */
+@@ -18195,9 +18198,6 @@ static pci_ers_result_t tg3_io_error_detected(struct pci_dev *pdev,
+ 
+ 	tg3_timer_stop(tp);
+ 
+-	/* Want to make sure that the reset task doesn't run */
+-	tg3_reset_task_cancel(tp);
+-
+ 	netif_device_detach(netdev);
+ 
+ 	/* Clean up software state, even if MMIO is blocked */
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 792c8147c2c4c..e0d62e2513879 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1963,7 +1963,6 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
+ 	bool cloned = skb_cloned(*skb) || skb_header_cloned(*skb) ||
+ 		      skb_is_nonlinear(*skb);
+ 	int padlen = ETH_ZLEN - (*skb)->len;
+-	int headroom = skb_headroom(*skb);
+ 	int tailroom = skb_tailroom(*skb);
+ 	struct sk_buff *nskb;
+ 	u32 fcs;
+@@ -1977,9 +1976,6 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
+ 		/* FCS could be appeded to tailroom. */
+ 		if (tailroom >= ETH_FCS_LEN)
+ 			goto add_fcs;
+-		/* FCS could be appeded by moving data to headroom. */
+-		else if (!cloned && headroom + tailroom >= ETH_FCS_LEN)
+-			padlen = 0;
+ 		/* No room for FCS, need to reallocate skb. */
+ 		else
+ 			padlen = ETH_FCS_LEN;
+@@ -1988,10 +1984,7 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
+ 		padlen += ETH_FCS_LEN;
+ 	}
+ 
+-	if (!cloned && headroom + tailroom >= padlen) {
+-		(*skb)->data = memmove((*skb)->head, (*skb)->data, (*skb)->len);
+-		skb_set_tail_pointer(*skb, (*skb)->len);
+-	} else {
++	if (cloned || tailroom < padlen) {
+ 		nskb = skb_copy_expand(*skb, 0, padlen, GFP_ATOMIC);
+ 		if (!nskb)
+ 			return -ENOMEM;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 29bc1df28aeb3..112eaef186e19 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1642,7 +1642,7 @@ static void mlx5_core_verify_params(void)
+ 	}
+ }
+ 
+-static int __init init(void)
++static int __init mlx5_init(void)
+ {
+ 	int err;
+ 
+@@ -1667,7 +1667,7 @@ err_debug:
+ 	return err;
+ }
+ 
+-static void __exit cleanup(void)
++static void __exit mlx5_cleanup(void)
+ {
+ #ifdef CONFIG_MLX5_CORE_EN
+ 	mlx5e_cleanup();
+@@ -1676,5 +1676,5 @@ static void __exit cleanup(void)
+ 	mlx5_unregister_debugfs();
+ }
+ 
+-module_init(init);
+-module_exit(cleanup);
++module_init(mlx5_init);
++module_exit(mlx5_cleanup);
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 9ec6d63691aab..410ccd28f6531 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -736,14 +736,14 @@ static void ravb_error_interrupt(struct net_device *ndev)
+ 	ravb_write(ndev, ~(EIS_QFS | EIS_RESERVED), EIS);
+ 	if (eis & EIS_QFS) {
+ 		ris2 = ravb_read(ndev, RIS2);
+-		ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF | RIS2_RESERVED),
++		ravb_write(ndev, ~(RIS2_QFF0 | RIS2_QFF1 | RIS2_RFFF | RIS2_RESERVED),
+ 			   RIS2);
+ 
+ 		/* Receive Descriptor Empty int */
+ 		if (ris2 & RIS2_QFF0)
+ 			priv->stats[RAVB_BE].rx_over_errors++;
+ 
+-		    /* Receive Descriptor Empty int */
++		/* Receive Descriptor Empty int */
+ 		if (ris2 & RIS2_QFF1)
+ 			priv->stats[RAVB_NC].rx_over_errors++;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 14ea0168b548d..b52ca2fe04d87 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -1125,6 +1125,11 @@ static int stmmac_init_phy(struct net_device *dev)
+ 		int addr = priv->plat->phy_addr;
+ 		struct phy_device *phydev;
+ 
++		if (addr < 0) {
++			netdev_err(priv->dev, "no phy found\n");
++			return -ENODEV;
++		}
++
+ 		phydev = mdiobus_get_phy(priv->mii, addr);
+ 		if (!phydev) {
+ 			netdev_err(priv->dev, "no phy at addr %d\n", addr);
+diff --git a/drivers/net/mdio/mdio-mux-meson-g12a.c b/drivers/net/mdio/mdio-mux-meson-g12a.c
+index bf86c9c7a2883..ab863530c9e89 100644
+--- a/drivers/net/mdio/mdio-mux-meson-g12a.c
++++ b/drivers/net/mdio/mdio-mux-meson-g12a.c
+@@ -4,6 +4,7 @@
+  */
+ 
+ #include <linux/bitfield.h>
++#include <linux/delay.h>
+ #include <linux/clk.h>
+ #include <linux/clk-provider.h>
+ #include <linux/device.h>
+@@ -150,6 +151,7 @@ static const struct clk_ops g12a_ephy_pll_ops = {
+ 
+ static int g12a_enable_internal_mdio(struct g12a_mdio_mux *priv)
+ {
++	u32 value;
+ 	int ret;
+ 
+ 	/* Enable the phy clock */
+@@ -163,18 +165,25 @@ static int g12a_enable_internal_mdio(struct g12a_mdio_mux *priv)
+ 
+ 	/* Initialize ephy control */
+ 	writel(EPHY_G12A_ID, priv->regs + ETH_PHY_CNTL0);
+-	writel(FIELD_PREP(PHY_CNTL1_ST_MODE, 3) |
+-	       FIELD_PREP(PHY_CNTL1_ST_PHYADD, EPHY_DFLT_ADD) |
+-	       FIELD_PREP(PHY_CNTL1_MII_MODE, EPHY_MODE_RMII) |
+-	       PHY_CNTL1_CLK_EN |
+-	       PHY_CNTL1_CLKFREQ |
+-	       PHY_CNTL1_PHY_ENB,
+-	       priv->regs + ETH_PHY_CNTL1);
++
++	/* Make sure we get a 0 -> 1 transition on the enable bit */
++	value = FIELD_PREP(PHY_CNTL1_ST_MODE, 3) |
++		FIELD_PREP(PHY_CNTL1_ST_PHYADD, EPHY_DFLT_ADD) |
++		FIELD_PREP(PHY_CNTL1_MII_MODE, EPHY_MODE_RMII) |
++		PHY_CNTL1_CLK_EN |
++		PHY_CNTL1_CLKFREQ;
++	writel(value, priv->regs + ETH_PHY_CNTL1);
+ 	writel(PHY_CNTL2_USE_INTERNAL |
+ 	       PHY_CNTL2_SMI_SRC_MAC |
+ 	       PHY_CNTL2_RX_CLK_EPHY,
+ 	       priv->regs + ETH_PHY_CNTL2);
+ 
++	value |= PHY_CNTL1_PHY_ENB;
++	writel(value, priv->regs + ETH_PHY_CNTL1);
++
++	/* The phy needs a bit of time to power up */
++	mdelay(10);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 77ba6c3c7a090..e9303be486556 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -108,7 +108,12 @@ EXPORT_SYMBOL(mdiobus_unregister_device);
+ 
+ struct phy_device *mdiobus_get_phy(struct mii_bus *bus, int addr)
+ {
+-	struct mdio_device *mdiodev = bus->mdio_map[addr];
++	struct mdio_device *mdiodev;
++
++	if (addr < 0 || addr >= ARRAY_SIZE(bus->mdio_map))
++		return NULL;
++
++	mdiodev = bus->mdio_map[addr];
+ 
+ 	if (!mdiodev)
+ 		return NULL;
+diff --git a/drivers/net/usb/sr9700.c b/drivers/net/usb/sr9700.c
+index fce6713e970ba..811c8751308c6 100644
+--- a/drivers/net/usb/sr9700.c
++++ b/drivers/net/usb/sr9700.c
+@@ -410,7 +410,7 @@ static int sr9700_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 		/* ignore the CRC length */
+ 		len = (skb->data[1] | (skb->data[2] << 8)) - 4;
+ 
+-		if (len > ETH_FRAME_LEN || len > skb->len)
++		if (len > ETH_FRAME_LEN || len > skb->len || len < 0)
+ 			return 0;
+ 
+ 		/* the last packet of current skb */
+diff --git a/drivers/net/wan/fsl_ucc_hdlc.c b/drivers/net/wan/fsl_ucc_hdlc.c
+index 7eac6a3e1cdee..ae1ae65e7f90a 100644
+--- a/drivers/net/wan/fsl_ucc_hdlc.c
++++ b/drivers/net/wan/fsl_ucc_hdlc.c
+@@ -1245,9 +1245,11 @@ static int ucc_hdlc_probe(struct platform_device *pdev)
+ free_dev:
+ 	free_netdev(dev);
+ undo_uhdlc_init:
+-	iounmap(utdm->siram);
++	if (utdm)
++		iounmap(utdm->siram);
+ unmap_si_regs:
+-	iounmap(utdm->si_regs);
++	if (utdm)
++		iounmap(utdm->si_regs);
+ free_utdm:
+ 	if (uhdlc_priv->tsa)
+ 		kfree(utdm);
+diff --git a/drivers/net/wireless/rndis_wlan.c b/drivers/net/wireless/rndis_wlan.c
+index 75b5d545b49e8..dc076d8448680 100644
+--- a/drivers/net/wireless/rndis_wlan.c
++++ b/drivers/net/wireless/rndis_wlan.c
+@@ -694,8 +694,8 @@ static int rndis_query_oid(struct usbnet *dev, u32 oid, void *data, int *len)
+ 		struct rndis_query	*get;
+ 		struct rndis_query_c	*get_c;
+ 	} u;
+-	int ret, buflen;
+-	int resplen, respoffs, copylen;
++	int ret;
++	size_t buflen, resplen, respoffs, copylen;
+ 
+ 	buflen = *len + sizeof(*u.get);
+ 	if (buflen < CONTROL_BUFFER_SIZE)
+@@ -730,22 +730,15 @@ static int rndis_query_oid(struct usbnet *dev, u32 oid, void *data, int *len)
+ 
+ 		if (respoffs > buflen) {
+ 			/* Device returned data offset outside buffer, error. */
+-			netdev_dbg(dev->net, "%s(%s): received invalid "
+-				"data offset: %d > %d\n", __func__,
+-				oid_to_string(oid), respoffs, buflen);
++			netdev_dbg(dev->net,
++				   "%s(%s): received invalid data offset: %zu > %zu\n",
++				   __func__, oid_to_string(oid), respoffs, buflen);
+ 
+ 			ret = -EINVAL;
+ 			goto exit_unlock;
+ 		}
+ 
+-		if ((resplen + respoffs) > buflen) {
+-			/* Device would have returned more data if buffer would
+-			 * have been big enough. Copy just the bits that we got.
+-			 */
+-			copylen = buflen - respoffs;
+-		} else {
+-			copylen = resplen;
+-		}
++		copylen = min(resplen, buflen - respoffs);
+ 
+ 		if (copylen > *len)
+ 			copylen = *len;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 67dd68462b817..c47512da9872a 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1292,7 +1292,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
+ 	else
+ 		nvme_poll_irqdisable(nvmeq);
+ 
+-	if (blk_mq_request_completed(req)) {
++	if (blk_mq_rq_state(req) != MQ_RQ_IN_FLIGHT) {
+ 		dev_warn(dev->ctrl.device,
+ 			 "I/O %d QID %d timeout, completion polled\n",
+ 			 req->tag, nvmeq->qid);
+diff --git a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
+index 46ebdb1460a3d..cab6a94bf1610 100644
+--- a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
++++ b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
+@@ -467,8 +467,10 @@ static int rockchip_usb2phy_power_on(struct phy *phy)
+ 		return ret;
+ 
+ 	ret = property_enable(base, &rport->port_cfg->phy_sus, false);
+-	if (ret)
++	if (ret) {
++		clk_disable_unprepare(rphy->clk480m);
+ 		return ret;
++	}
+ 
+ 	/* waiting for the utmi_clk to become stable */
+ 	usleep_range(1500, 2000);
+diff --git a/drivers/phy/ti/Kconfig b/drivers/phy/ti/Kconfig
+index 15a3bcf323086..b905902d57508 100644
+--- a/drivers/phy/ti/Kconfig
++++ b/drivers/phy/ti/Kconfig
+@@ -23,7 +23,7 @@ config PHY_DM816X_USB
+ 
+ config PHY_AM654_SERDES
+ 	tristate "TI AM654 SERDES support"
+-	depends on OF && ARCH_K3 || COMPILE_TEST
++	depends on OF && (ARCH_K3 || COMPILE_TEST)
+ 	depends on COMMON_CLK
+ 	select GENERIC_PHY
+ 	select MULTIPLEXER
+@@ -35,7 +35,7 @@ config PHY_AM654_SERDES
+ 
+ config PHY_J721E_WIZ
+ 	tristate "TI J721E WIZ (SERDES Wrapper) support"
+-	depends on OF && ARCH_K3 || COMPILE_TEST
++	depends on OF && (ARCH_K3 || COMPILE_TEST)
+ 	depends on HAS_IOMEM && OF_ADDRESS
+ 	depends on COMMON_CLK
+ 	select GENERIC_PHY
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index 949ddeb673bc5..74637bd0433e6 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -478,6 +478,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
+ 	{ KE_KEY, 0x30, { KEY_VOLUMEUP } },
+ 	{ KE_KEY, 0x31, { KEY_VOLUMEDOWN } },
+ 	{ KE_KEY, 0x32, { KEY_MUTE } },
++	{ KE_KEY, 0x33, { KEY_SCREENLOCK } },
+ 	{ KE_KEY, 0x35, { KEY_SCREENLOCK } },
+ 	{ KE_KEY, 0x40, { KEY_PREVIOUSSONG } },
+ 	{ KE_KEY, 0x41, { KEY_NEXTSONG } },
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index 110ff1e6ef81f..bc26acace2c30 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -255,6 +255,23 @@ static const struct ts_dmi_data connect_tablet9_data = {
+ 	.properties     = connect_tablet9_props,
+ };
+ 
++static const struct property_entry csl_panther_tab_hd_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 1),
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 20),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1980),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1526),
++	PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
++	PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-csl-panther-tab-hd.fw"),
++	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
++	{ }
++};
++
++static const struct ts_dmi_data csl_panther_tab_hd_data = {
++	.acpi_name      = "MSSL1680:00",
++	.properties     = csl_panther_tab_hd_props,
++};
++
+ static const struct property_entry cube_iwork8_air_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-min-x", 1),
+ 	PROPERTY_ENTRY_U32("touchscreen-min-y", 3),
+@@ -1057,6 +1074,14 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Tablet 9"),
+ 		},
+ 	},
++	{
++		/* CSL Panther Tab HD */
++		.driver_data = (void *)&csl_panther_tab_hd_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "CSL Computer GmbH & Co. KG"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "CSL Panther Tab HD"),
++		},
++	},
+ 	{
+ 		/* CUBE iwork8 Air */
+ 		.driver_data = (void *)&cube_iwork8_air_data,
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 1feca45384c7a..e5b9229310a0f 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -1408,7 +1408,7 @@ static void hisi_sas_refresh_port_id(struct hisi_hba *hisi_hba)
+ 				device->linkrate = phy->sas_phy.linkrate;
+ 
+ 			hisi_hba->hw->setup_itct(hisi_hba, sas_dev);
+-		} else
++		} else if (!port->port_attached)
+ 			port->id = 0xff;
+ 	}
+ }
+diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
+index b2d4b6c78b5c3..a44a098dbb9c7 100644
+--- a/drivers/scsi/hpsa.c
++++ b/drivers/scsi/hpsa.c
+@@ -5834,7 +5834,7 @@ static int hpsa_scsi_host_alloc(struct ctlr_info *h)
+ {
+ 	struct Scsi_Host *sh;
+ 
+-	sh = scsi_host_alloc(&hpsa_driver_template, sizeof(h));
++	sh = scsi_host_alloc(&hpsa_driver_template, sizeof(struct ctlr_info));
+ 	if (sh == NULL) {
+ 		dev_err(&h->pdev->dev, "scsi_host_alloc failed\n");
+ 		return -ENOMEM;
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index ef7cd7520e7c7..092bd6a3d64a1 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -1674,6 +1674,13 @@ static const char *iscsi_session_state_name(int state)
+ 	return name;
+ }
+ 
++static char *iscsi_session_target_state_name[] = {
++	[ISCSI_SESSION_TARGET_UNBOUND]   = "UNBOUND",
++	[ISCSI_SESSION_TARGET_ALLOCATED] = "ALLOCATED",
++	[ISCSI_SESSION_TARGET_SCANNED]   = "SCANNED",
++	[ISCSI_SESSION_TARGET_UNBINDING] = "UNBINDING",
++};
++
+ int iscsi_session_chkready(struct iscsi_cls_session *session)
+ {
+ 	unsigned long flags;
+@@ -1805,9 +1812,13 @@ static int iscsi_user_scan_session(struct device *dev, void *data)
+ 		if ((scan_data->channel == SCAN_WILD_CARD ||
+ 		     scan_data->channel == 0) &&
+ 		    (scan_data->id == SCAN_WILD_CARD ||
+-		     scan_data->id == id))
++		     scan_data->id == id)) {
+ 			scsi_scan_target(&session->dev, 0, id,
+ 					 scan_data->lun, scan_data->rescan);
++			spin_lock_irqsave(&session->lock, flags);
++			session->target_state = ISCSI_SESSION_TARGET_SCANNED;
++			spin_unlock_irqrestore(&session->lock, flags);
++		}
+ 	}
+ 
+ user_scan_exit:
+@@ -1996,31 +2007,41 @@ static void __iscsi_unbind_session(struct work_struct *work)
+ 	struct iscsi_cls_host *ihost = shost->shost_data;
+ 	unsigned long flags;
+ 	unsigned int target_id;
++	bool remove_target = true;
+ 
+ 	ISCSI_DBG_TRANS_SESSION(session, "Unbinding session\n");
+ 
+ 	/* Prevent new scans and make sure scanning is not in progress */
+ 	mutex_lock(&ihost->mutex);
+ 	spin_lock_irqsave(&session->lock, flags);
+-	if (session->target_id == ISCSI_MAX_TARGET) {
++	if (session->target_state == ISCSI_SESSION_TARGET_ALLOCATED) {
++		remove_target = false;
++	} else if (session->target_state != ISCSI_SESSION_TARGET_SCANNED) {
+ 		spin_unlock_irqrestore(&session->lock, flags);
+ 		mutex_unlock(&ihost->mutex);
+-		goto unbind_session_exit;
++		ISCSI_DBG_TRANS_SESSION(session,
++			"Skipping target unbinding: Session is unbound/unbinding.\n");
++		return;
+ 	}
+ 
++	session->target_state = ISCSI_SESSION_TARGET_UNBINDING;
+ 	target_id = session->target_id;
+ 	session->target_id = ISCSI_MAX_TARGET;
+ 	spin_unlock_irqrestore(&session->lock, flags);
+ 	mutex_unlock(&ihost->mutex);
+ 
+-	scsi_remove_target(&session->dev);
++	if (remove_target)
++		scsi_remove_target(&session->dev);
+ 
+ 	if (session->ida_used)
+ 		ida_simple_remove(&iscsi_sess_ida, target_id);
+ 
+-unbind_session_exit:
+ 	iscsi_session_event(session, ISCSI_KEVENT_UNBIND_SESSION);
+ 	ISCSI_DBG_TRANS_SESSION(session, "Completed target removal\n");
++
++	spin_lock_irqsave(&session->lock, flags);
++	session->target_state = ISCSI_SESSION_TARGET_UNBOUND;
++	spin_unlock_irqrestore(&session->lock, flags);
+ }
+ 
+ static void __iscsi_destroy_session(struct work_struct *work)
+@@ -2089,6 +2110,9 @@ int iscsi_add_session(struct iscsi_cls_session *session, unsigned int target_id)
+ 		session->ida_used = true;
+ 	} else
+ 		session->target_id = target_id;
++	spin_lock_irqsave(&session->lock, flags);
++	session->target_state = ISCSI_SESSION_TARGET_ALLOCATED;
++	spin_unlock_irqrestore(&session->lock, flags);
+ 
+ 	dev_set_name(&session->dev, "session%u", session->sid);
+ 	err = device_add(&session->dev);
+@@ -4343,6 +4367,19 @@ iscsi_session_attr(def_taskmgmt_tmo, ISCSI_PARAM_DEF_TASKMGMT_TMO, 0);
+ iscsi_session_attr(discovery_parent_idx, ISCSI_PARAM_DISCOVERY_PARENT_IDX, 0);
+ iscsi_session_attr(discovery_parent_type, ISCSI_PARAM_DISCOVERY_PARENT_TYPE, 0);
+ 
++static ssize_t
++show_priv_session_target_state(struct device *dev, struct device_attribute *attr,
++			char *buf)
++{
++	struct iscsi_cls_session *session = iscsi_dev_to_session(dev->parent);
++
++	return sysfs_emit(buf, "%s\n",
++			iscsi_session_target_state_name[session->target_state]);
++}
++
++static ISCSI_CLASS_ATTR(priv_sess, target_state, S_IRUGO,
++			show_priv_session_target_state, NULL);
++
+ static ssize_t
+ show_priv_session_state(struct device *dev, struct device_attribute *attr,
+ 			char *buf)
+@@ -4445,6 +4482,7 @@ static struct attribute *iscsi_session_attrs[] = {
+ 	&dev_attr_sess_boot_target.attr,
+ 	&dev_attr_priv_sess_recovery_tmo.attr,
+ 	&dev_attr_priv_sess_state.attr,
++	&dev_attr_priv_sess_target_state.attr,
+ 	&dev_attr_priv_sess_creator.attr,
+ 	&dev_attr_sess_chap_out_idx.attr,
+ 	&dev_attr_sess_chap_in_idx.attr,
+@@ -4558,6 +4596,8 @@ static umode_t iscsi_session_attr_is_visible(struct kobject *kobj,
+ 		return S_IRUGO | S_IWUSR;
+ 	else if (attr == &dev_attr_priv_sess_state.attr)
+ 		return S_IRUGO;
++	else if (attr == &dev_attr_priv_sess_target_state.attr)
++		return S_IRUGO;
+ 	else if (attr == &dev_attr_priv_sess_creator.attr)
+ 		return S_IRUGO;
+ 	else if (attr == &dev_attr_priv_sess_target_id.attr)
+diff --git a/drivers/soc/qcom/cpr.c b/drivers/soc/qcom/cpr.c
+index 6298561bc29c9..fac0414c37314 100644
+--- a/drivers/soc/qcom/cpr.c
++++ b/drivers/soc/qcom/cpr.c
+@@ -1743,12 +1743,16 @@ static int cpr_probe(struct platform_device *pdev)
+ 
+ 	ret = of_genpd_add_provider_simple(dev->of_node, &drv->pd);
+ 	if (ret)
+-		return ret;
++		goto err_remove_genpd;
+ 
+ 	platform_set_drvdata(pdev, drv);
+ 	cpr_debugfs_init(drv);
+ 
+ 	return 0;
++
++err_remove_genpd:
++	pm_genpd_remove(&drv->pd);
++	return ret;
+ }
+ 
+ static int cpr_remove(struct platform_device *pdev)
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index 9c5ec99431d2e..aee960a7d7f91 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -592,7 +592,6 @@ static int spidev_open(struct inode *inode, struct file *filp)
+ 	if (!spidev->tx_buffer) {
+ 		spidev->tx_buffer = kmalloc(bufsiz, GFP_KERNEL);
+ 		if (!spidev->tx_buffer) {
+-			dev_dbg(&spidev->spi->dev, "open/ENOMEM\n");
+ 			status = -ENOMEM;
+ 			goto err_find_dev;
+ 		}
+@@ -601,7 +600,6 @@ static int spidev_open(struct inode *inode, struct file *filp)
+ 	if (!spidev->rx_buffer) {
+ 		spidev->rx_buffer = kmalloc(bufsiz, GFP_KERNEL);
+ 		if (!spidev->rx_buffer) {
+-			dev_dbg(&spidev->spi->dev, "open/ENOMEM\n");
+ 			status = -ENOMEM;
+ 			goto err_alloc_rx_buf;
+ 		}
+diff --git a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
+index a337600d5bc4c..6952f4e237e1f 100644
+--- a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
++++ b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
+@@ -44,11 +44,13 @@ static int int340x_thermal_get_trip_temp(struct thermal_zone_device *zone,
+ 					 int trip, int *temp)
+ {
+ 	struct int34x_thermal_zone *d = zone->devdata;
+-	int i;
++	int i, ret = 0;
+ 
+ 	if (d->override_ops && d->override_ops->get_trip_temp)
+ 		return d->override_ops->get_trip_temp(zone, trip, temp);
+ 
++	mutex_lock(&d->trip_mutex);
++
+ 	if (trip < d->aux_trip_nr)
+ 		*temp = d->aux_trips[trip];
+ 	else if (trip == d->crt_trip_id)
+@@ -66,10 +68,12 @@ static int int340x_thermal_get_trip_temp(struct thermal_zone_device *zone,
+ 			}
+ 		}
+ 		if (i == INT340X_THERMAL_MAX_ACT_TRIP_COUNT)
+-			return -EINVAL;
++			ret = -EINVAL;
+ 	}
+ 
+-	return 0;
++	mutex_unlock(&d->trip_mutex);
++
++	return ret;
+ }
+ 
+ static int int340x_thermal_get_trip_type(struct thermal_zone_device *zone,
+@@ -77,11 +81,13 @@ static int int340x_thermal_get_trip_type(struct thermal_zone_device *zone,
+ 					 enum thermal_trip_type *type)
+ {
+ 	struct int34x_thermal_zone *d = zone->devdata;
+-	int i;
++	int i, ret = 0;
+ 
+ 	if (d->override_ops && d->override_ops->get_trip_type)
+ 		return d->override_ops->get_trip_type(zone, trip, type);
+ 
++	mutex_lock(&d->trip_mutex);
++
+ 	if (trip < d->aux_trip_nr)
+ 		*type = THERMAL_TRIP_PASSIVE;
+ 	else if (trip == d->crt_trip_id)
+@@ -99,10 +105,12 @@ static int int340x_thermal_get_trip_type(struct thermal_zone_device *zone,
+ 			}
+ 		}
+ 		if (i == INT340X_THERMAL_MAX_ACT_TRIP_COUNT)
+-			return -EINVAL;
++			ret = -EINVAL;
+ 	}
+ 
+-	return 0;
++	mutex_unlock(&d->trip_mutex);
++
++	return ret;
+ }
+ 
+ static int int340x_thermal_set_trip_temp(struct thermal_zone_device *zone,
+@@ -174,6 +182,8 @@ int int340x_thermal_read_trips(struct int34x_thermal_zone *int34x_zone)
+ 	int trip_cnt = int34x_zone->aux_trip_nr;
+ 	int i;
+ 
++	mutex_lock(&int34x_zone->trip_mutex);
++
+ 	int34x_zone->crt_trip_id = -1;
+ 	if (!int340x_thermal_get_trip_config(int34x_zone->adev->handle, "_CRT",
+ 					     &int34x_zone->crt_temp))
+@@ -201,6 +211,8 @@ int int340x_thermal_read_trips(struct int34x_thermal_zone *int34x_zone)
+ 		int34x_zone->act_trips[i].valid = true;
+ 	}
+ 
++	mutex_unlock(&int34x_zone->trip_mutex);
++
+ 	return trip_cnt;
+ }
+ EXPORT_SYMBOL_GPL(int340x_thermal_read_trips);
+@@ -224,6 +236,8 @@ struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *adev,
+ 	if (!int34x_thermal_zone)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	mutex_init(&int34x_thermal_zone->trip_mutex);
++
+ 	int34x_thermal_zone->adev = adev;
+ 	int34x_thermal_zone->override_ops = override_ops;
+ 
+@@ -275,6 +289,7 @@ err_thermal_zone:
+ 	acpi_lpat_free_conversion_table(int34x_thermal_zone->lpat_table);
+ 	kfree(int34x_thermal_zone->aux_trips);
+ err_trip_alloc:
++	mutex_destroy(&int34x_thermal_zone->trip_mutex);
+ 	kfree(int34x_thermal_zone);
+ 	return ERR_PTR(ret);
+ }
+@@ -286,6 +301,7 @@ void int340x_thermal_zone_remove(struct int34x_thermal_zone
+ 	thermal_zone_device_unregister(int34x_thermal_zone->zone);
+ 	acpi_lpat_free_conversion_table(int34x_thermal_zone->lpat_table);
+ 	kfree(int34x_thermal_zone->aux_trips);
++	mutex_destroy(&int34x_thermal_zone->trip_mutex);
+ 	kfree(int34x_thermal_zone);
+ }
+ EXPORT_SYMBOL_GPL(int340x_thermal_zone_remove);
+diff --git a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.h b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.h
+index 3b4971df1b33b..8f9872afd0d3c 100644
+--- a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.h
++++ b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.h
+@@ -32,6 +32,7 @@ struct int34x_thermal_zone {
+ 	struct thermal_zone_device_ops *override_ops;
+ 	void *priv_data;
+ 	struct acpi_lpat_conversion_table *lpat_table;
++	struct mutex trip_mutex;
+ };
+ 
+ struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *,
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index bb0d92837f677..94000fd190e55 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -278,6 +278,9 @@ static int __ffs_ep0_queue_wait(struct ffs_data *ffs, char *data, size_t len)
+ 	struct usb_request *req = ffs->ep0req;
+ 	int ret;
+ 
++	if (!req)
++		return -EINVAL;
++
+ 	req->zero     = len < le16_to_cpu(ffs->ev.setup.wLength);
+ 
+ 	spin_unlock_irq(&ffs->ev.waitq.lock);
+@@ -1881,10 +1884,14 @@ static void functionfs_unbind(struct ffs_data *ffs)
+ 	ENTER();
+ 
+ 	if (!WARN_ON(!ffs->gadget)) {
++		/* dequeue before freeing ep0req */
++		usb_ep_dequeue(ffs->gadget->ep0, ffs->ep0req);
++		mutex_lock(&ffs->mutex);
+ 		usb_ep_free_request(ffs->gadget->ep0, ffs->ep0req);
+ 		ffs->ep0req = NULL;
+ 		ffs->gadget = NULL;
+ 		clear_bit(FFS_FL_BOUND, &ffs->flags);
++		mutex_unlock(&ffs->mutex);
+ 		ffs_data_put(ffs);
+ 	}
+ }
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 2967372a99880..473b0b64dd572 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -696,6 +696,8 @@ int xhci_run(struct usb_hcd *hcd)
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+ 			"Finished xhci_run for USB2 roothub");
+ 
++	set_bit(HCD_FLAG_DEFER_RH_REGISTER, &hcd->flags);
++
+ 	xhci_dbc_init(xhci);
+ 
+ 	xhci_debugfs_init(xhci);
+diff --git a/drivers/w1/w1.c b/drivers/w1/w1.c
+index 15a2ee32f116e..15842377c8d2c 100644
+--- a/drivers/w1/w1.c
++++ b/drivers/w1/w1.c
+@@ -1131,6 +1131,8 @@ int w1_process(void *data)
+ 	/* remainder if it woke up early */
+ 	unsigned long jremain = 0;
+ 
++	atomic_inc(&dev->refcnt);
++
+ 	for (;;) {
+ 
+ 		if (!jremain && dev->search_count) {
+@@ -1158,8 +1160,10 @@ int w1_process(void *data)
+ 		 */
+ 		mutex_unlock(&dev->list_mutex);
+ 
+-		if (kthread_should_stop())
++		if (kthread_should_stop()) {
++			__set_current_state(TASK_RUNNING);
+ 			break;
++		}
+ 
+ 		/* Only sleep when the search is active. */
+ 		if (dev->search_count) {
+diff --git a/drivers/w1/w1_int.c b/drivers/w1/w1_int.c
+index b3e1792d9c49f..3a71c5eb2f837 100644
+--- a/drivers/w1/w1_int.c
++++ b/drivers/w1/w1_int.c
+@@ -51,10 +51,9 @@ static struct w1_master *w1_alloc_dev(u32 id, int slave_count, int slave_ttl,
+ 	dev->search_count	= w1_search_count;
+ 	dev->enable_pullup	= w1_enable_pullup;
+ 
+-	/* 1 for w1_process to decrement
+-	 * 1 for __w1_remove_master_device to decrement
++	/* For __w1_remove_master_device to decrement
+ 	 */
+-	atomic_set(&dev->refcnt, 2);
++	atomic_set(&dev->refcnt, 1);
+ 
+ 	INIT_LIST_HEAD(&dev->slist);
+ 	INIT_LIST_HEAD(&dev->async_list);
+diff --git a/fs/affs/file.c b/fs/affs/file.c
+index d91b0133d95da..c3d89fa1bab77 100644
+--- a/fs/affs/file.c
++++ b/fs/affs/file.c
+@@ -879,7 +879,7 @@ affs_truncate(struct inode *inode)
+ 	if (inode->i_size > AFFS_I(inode)->mmu_private) {
+ 		struct address_space *mapping = inode->i_mapping;
+ 		struct page *page;
+-		void *fsdata;
++		void *fsdata = NULL;
+ 		loff_t isize = inode->i_size;
+ 		int res;
+ 
+diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
+index b029ed31ef916..f73f9b0625251 100644
+--- a/fs/cifs/smbdirect.c
++++ b/fs/cifs/smbdirect.c
+@@ -1394,6 +1394,7 @@ void smbd_destroy(struct TCP_Server_Info *server)
+ 	destroy_workqueue(info->workqueue);
+ 	log_rdma_event(INFO,  "rdma session destroyed\n");
+ 	kfree(info);
++	server->smbd_conn = NULL;
+ }
+ 
+ /*
+diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
+index 7346acda9d767..02d3d2f0e6168 100644
+--- a/fs/nfsd/netns.h
++++ b/fs/nfsd/netns.h
+@@ -42,9 +42,6 @@ struct nfsd_net {
+ 	bool grace_ended;
+ 	time64_t boot_time;
+ 
+-	/* internal mount of the "nfsd" pseudofilesystem: */
+-	struct vfsmount *nfsd_mnt;
+-
+ 	struct dentry *nfsd_client_dir;
+ 
+ 	/*
+@@ -121,6 +118,9 @@ struct nfsd_net {
+ 	wait_queue_head_t ntf_wq;
+ 	atomic_t ntf_refcnt;
+ 
++	/* Allow umount to wait for nfsd state cleanup */
++	struct completion nfsd_shutdown_complete;
++
+ 	/*
+ 	 * clientid and stateid data for construction of net unique COPY
+ 	 * stateids.
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 9a47cc66963f0..cb13a16496320 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -7394,14 +7394,9 @@ nfs4_state_start_net(struct net *net)
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 	int ret;
+ 
+-	ret = get_nfsdfs(net);
+-	if (ret)
+-		return ret;
+ 	ret = nfs4_state_create_net(net);
+-	if (ret) {
+-		mntput(nn->nfsd_mnt);
++	if (ret)
+ 		return ret;
+-	}
+ 	locks_start_grace(net, &nn->nfsd4_manager);
+ 	nfsd4_client_tracking_init(net);
+ 	if (nn->track_reclaim_completes && nn->reclaim_str_hashtbl_size == 0)
+@@ -7471,7 +7466,6 @@ nfs4_state_shutdown_net(struct net *net)
+ 
+ 	nfsd4_client_tracking_exit(net);
+ 	nfs4_state_destroy_net(net);
+-	mntput(nn->nfsd_mnt);
+ }
+ 
+ void
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index dedec4771ecc2..c4b11560ac1b6 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1417,6 +1417,8 @@ static void nfsd_umount(struct super_block *sb)
+ {
+ 	struct net *net = sb->s_fs_info;
+ 
++	nfsd_shutdown_threads(net);
++
+ 	kill_litter_super(sb);
+ 	put_net(net);
+ }
+@@ -1429,18 +1431,6 @@ static struct file_system_type nfsd_fs_type = {
+ };
+ MODULE_ALIAS_FS("nfsd");
+ 
+-int get_nfsdfs(struct net *net)
+-{
+-	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+-	struct vfsmount *mnt;
+-
+-	mnt =  vfs_kern_mount(&nfsd_fs_type, SB_KERNMOUNT, "nfsd", NULL);
+-	if (IS_ERR(mnt))
+-		return PTR_ERR(mnt);
+-	nn->nfsd_mnt = mnt;
+-	return 0;
+-}
+-
+ #ifdef CONFIG_PROC_FS
+ static int create_proc_exports_entry(void)
+ {
+diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
+index cb742e17e04a9..4362d295ed340 100644
+--- a/fs/nfsd/nfsd.h
++++ b/fs/nfsd/nfsd.h
+@@ -85,13 +85,12 @@ int		nfsd_get_nrthreads(int n, int *, struct net *);
+ int		nfsd_set_nrthreads(int n, int *, struct net *);
+ int		nfsd_pool_stats_open(struct inode *, struct file *);
+ int		nfsd_pool_stats_release(struct inode *, struct file *);
++void		nfsd_shutdown_threads(struct net *net);
+ 
+ void		nfsd_destroy(struct net *net);
+ 
+ bool		i_am_nfsd(void);
+ 
+-int get_nfsdfs(struct net *);
+-
+ struct nfsdfs_client {
+ 	struct kref cl_ref;
+ 	void (*cl_release)(struct kref *kref);
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index c7fffe1453bd1..2e61a565cdbd8 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -600,6 +600,37 @@ static const struct svc_serv_ops nfsd_thread_sv_ops = {
+ 	.svo_module		= THIS_MODULE,
+ };
+ 
++static void nfsd_complete_shutdown(struct net *net)
++{
++	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
++
++	WARN_ON(!mutex_is_locked(&nfsd_mutex));
++
++	nn->nfsd_serv = NULL;
++	complete(&nn->nfsd_shutdown_complete);
++}
++
++void nfsd_shutdown_threads(struct net *net)
++{
++	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
++	struct svc_serv *serv;
++
++	mutex_lock(&nfsd_mutex);
++	serv = nn->nfsd_serv;
++	if (serv == NULL) {
++		mutex_unlock(&nfsd_mutex);
++		return;
++	}
++
++	svc_get(serv);
++	/* Kill outstanding nfsd threads */
++	serv->sv_ops->svo_setup(serv, NULL, 0);
++	nfsd_destroy(net);
++	mutex_unlock(&nfsd_mutex);
++	/* Wait for shutdown of nfsd_serv to complete */
++	wait_for_completion(&nn->nfsd_shutdown_complete);
++}
++
+ bool i_am_nfsd(void)
+ {
+ 	return kthread_func(current) == nfsd;
+@@ -622,11 +653,13 @@ int nfsd_create_serv(struct net *net)
+ 						&nfsd_thread_sv_ops);
+ 	if (nn->nfsd_serv == NULL)
+ 		return -ENOMEM;
++	init_completion(&nn->nfsd_shutdown_complete);
+ 
+ 	nn->nfsd_serv->sv_maxconn = nn->max_connections;
+ 	error = svc_bind(nn->nfsd_serv, net);
+ 	if (error < 0) {
+ 		svc_destroy(nn->nfsd_serv);
++		nfsd_complete_shutdown(net);
+ 		return error;
+ 	}
+ 
+@@ -675,7 +708,7 @@ void nfsd_destroy(struct net *net)
+ 		svc_shutdown_net(nn->nfsd_serv, net);
+ 	svc_destroy(nn->nfsd_serv);
+ 	if (destroy)
+-		nn->nfsd_serv = NULL;
++		nfsd_complete_shutdown(net);
+ }
+ 
+ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index 070d2df8ab9cf..cd7c6c4af83ad 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -16,6 +16,7 @@
+ #include <linux/module.h>
+ #include <linux/bpf-cgroup.h>
+ #include <linux/mount.h>
++#include <linux/kmemleak.h>
+ #include "internal.h"
+ 
+ static const struct dentry_operations proc_sys_dentry_operations;
+@@ -1380,6 +1381,38 @@ struct ctl_table_header *register_sysctl(const char *path, struct ctl_table *tab
+ }
+ EXPORT_SYMBOL(register_sysctl);
+ 
++/**
++ * __register_sysctl_init() - register sysctl table to path
++ * @path: path name for sysctl base
++ * @table: This is the sysctl table that needs to be registered to the path
++ * @table_name: The name of sysctl table, only used for log printing when
++ *              registration fails
++ *
++ * The sysctl interface is used by userspace to query or modify at runtime
++ * a predefined value set on a variable. These variables however have default
++ * values pre-set. Code which depends on these variables will always work even
++ * if register_sysctl() fails. If register_sysctl() fails you'd just loose the
++ * ability to query or modify the sysctls dynamically at run time. Chances of
++ * register_sysctl() failing on init are extremely low, and so for both reasons
++ * this function does not return any error as it is used by initialization code.
++ *
++ * Context: Can only be called after your respective sysctl base path has been
++ * registered. So for instance, most base directories are registered early on
++ * init before init levels are processed through proc_sys_init() and
++ * sysctl_init().
++ */
++void __init __register_sysctl_init(const char *path, struct ctl_table *table,
++				 const char *table_name)
++{
++	struct ctl_table_header *hdr = register_sysctl(path, table);
++
++	if (unlikely(!hdr)) {
++		pr_err("failed when register_sysctl %s to %s\n", table_name, path);
++		return;
++	}
++	kmemleak_not_leak(hdr);
++}
++
+ static char *append_path(const char *path, char *pos, const char *name)
+ {
+ 	int namelen;
+diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
+index 913f5af9bf248..0ebb6e6849082 100644
+--- a/fs/reiserfs/super.c
++++ b/fs/reiserfs/super.c
+@@ -1437,7 +1437,6 @@ static int reiserfs_remount(struct super_block *s, int *mount_flags, char *arg)
+ 	unsigned long safe_mask = 0;
+ 	unsigned int commit_max_age = (unsigned int)-1;
+ 	struct reiserfs_journal *journal = SB_JOURNAL(s);
+-	char *new_opts;
+ 	int err;
+ 	char *qf_names[REISERFS_MAXQUOTAS];
+ 	unsigned int qfmt = 0;
+@@ -1445,10 +1444,6 @@ static int reiserfs_remount(struct super_block *s, int *mount_flags, char *arg)
+ 	int i;
+ #endif
+ 
+-	new_opts = kstrdup(arg, GFP_KERNEL);
+-	if (arg && !new_opts)
+-		return -ENOMEM;
+-
+ 	sync_filesystem(s);
+ 	reiserfs_write_lock(s);
+ 
+@@ -1599,7 +1594,6 @@ out_ok_unlocked:
+ out_err_unlock:
+ 	reiserfs_write_unlock(s);
+ out_err:
+-	kfree(new_opts);
+ 	return err;
+ }
+ 
+diff --git a/include/linux/clk.h b/include/linux/clk.h
+index 7fd6a1febcf4f..1814eabb7c204 100644
+--- a/include/linux/clk.h
++++ b/include/linux/clk.h
+@@ -418,6 +418,47 @@ int __must_check devm_clk_bulk_get_all(struct device *dev,
+  */
+ struct clk *devm_clk_get(struct device *dev, const char *id);
+ 
++/**
++ * devm_clk_get_prepared - devm_clk_get() + clk_prepare()
++ * @dev: device for clock "consumer"
++ * @id: clock consumer ID
++ *
++ * Context: May sleep.
++ *
++ * Return: a struct clk corresponding to the clock producer, or
++ * valid IS_ERR() condition containing errno.  The implementation
++ * uses @dev and @id to determine the clock consumer, and thereby
++ * the clock producer.  (IOW, @id may be identical strings, but
++ * clk_get may return different clock producers depending on @dev.)
++ *
++ * The returned clk (if valid) is prepared. Drivers must however assume
++ * that the clock is not enabled.
++ *
++ * The clock will automatically be unprepared and freed when the device
++ * is unbound from the bus.
++ */
++struct clk *devm_clk_get_prepared(struct device *dev, const char *id);
++
++/**
++ * devm_clk_get_enabled - devm_clk_get() + clk_prepare_enable()
++ * @dev: device for clock "consumer"
++ * @id: clock consumer ID
++ *
++ * Context: May sleep.
++ *
++ * Return: a struct clk corresponding to the clock producer, or
++ * valid IS_ERR() condition containing errno.  The implementation
++ * uses @dev and @id to determine the clock consumer, and thereby
++ * the clock producer.  (IOW, @id may be identical strings, but
++ * clk_get may return different clock producers depending on @dev.)
++ *
++ * The returned clk (if valid) is prepared and enabled.
++ *
++ * The clock will automatically be disabled, unprepared and freed
++ * when the device is unbound from the bus.
++ */
++struct clk *devm_clk_get_enabled(struct device *dev, const char *id);
++
+ /**
+  * devm_clk_get_optional - lookup and obtain a managed reference to an optional
+  *			   clock producer.
+@@ -429,6 +470,50 @@ struct clk *devm_clk_get(struct device *dev, const char *id);
+  */
+ struct clk *devm_clk_get_optional(struct device *dev, const char *id);
+ 
++/**
++ * devm_clk_get_optional_prepared - devm_clk_get_optional() + clk_prepare()
++ * @dev: device for clock "consumer"
++ * @id: clock consumer ID
++ *
++ * Context: May sleep.
++ *
++ * Return: a struct clk corresponding to the clock producer, or
++ * valid IS_ERR() condition containing errno.  The implementation
++ * uses @dev and @id to determine the clock consumer, and thereby
++ * the clock producer.  If no such clk is found, it returns NULL
++ * which serves as a dummy clk.  That's the only difference compared
++ * to devm_clk_get_prepared().
++ *
++ * The returned clk (if valid) is prepared. Drivers must however
++ * assume that the clock is not enabled.
++ *
++ * The clock will automatically be unprepared and freed when the
++ * device is unbound from the bus.
++ */
++struct clk *devm_clk_get_optional_prepared(struct device *dev, const char *id);
++
++/**
++ * devm_clk_get_optional_enabled - devm_clk_get_optional() +
++ *                                 clk_prepare_enable()
++ * @dev: device for clock "consumer"
++ * @id: clock consumer ID
++ *
++ * Context: May sleep.
++ *
++ * Return: a struct clk corresponding to the clock producer, or
++ * valid IS_ERR() condition containing errno.  The implementation
++ * uses @dev and @id to determine the clock consumer, and thereby
++ * the clock producer.  If no such clk is found, it returns NULL
++ * which serves as a dummy clk.  That's the only difference compared
++ * to devm_clk_get_enabled().
++ *
++ * The returned clk (if valid) is prepared and enabled.
++ *
++ * The clock will automatically be disabled, unprepared and freed
++ * when the device is unbound from the bus.
++ */
++struct clk *devm_clk_get_optional_enabled(struct device *dev, const char *id);
++
+ /**
+  * devm_get_clk_from_child - lookup and obtain a managed reference to a
+  *			     clock producer from child node.
+@@ -773,12 +858,36 @@ static inline struct clk *devm_clk_get(struct device *dev, const char *id)
+ 	return NULL;
+ }
+ 
++static inline struct clk *devm_clk_get_prepared(struct device *dev,
++						const char *id)
++{
++	return NULL;
++}
++
++static inline struct clk *devm_clk_get_enabled(struct device *dev,
++					       const char *id)
++{
++	return NULL;
++}
++
+ static inline struct clk *devm_clk_get_optional(struct device *dev,
+ 						const char *id)
+ {
+ 	return NULL;
+ }
+ 
++static inline struct clk *devm_clk_get_optional_prepared(struct device *dev,
++							 const char *id)
++{
++	return NULL;
++}
++
++static inline struct clk *devm_clk_get_optional_enabled(struct device *dev,
++							const char *id)
++{
++	return NULL;
++}
++
+ static inline int __must_check devm_clk_bulk_get(struct device *dev, int num_clks,
+ 						 struct clk_bulk_data *clks)
+ {
+diff --git a/include/linux/kernel.h b/include/linux/kernel.h
+index f5392d96d6886..394f10fc29aad 100644
+--- a/include/linux/kernel.h
++++ b/include/linux/kernel.h
+@@ -320,6 +320,7 @@ extern long (*panic_blink)(int state);
+ __printf(1, 2)
+ void panic(const char *fmt, ...) __noreturn __cold;
+ void nmi_panic(struct pt_regs *regs, const char *msg);
++void check_panic_on_warn(const char *origin);
+ extern void oops_enter(void);
+ extern void oops_exit(void);
+ extern bool oops_may_print(void);
+@@ -520,12 +521,6 @@ static inline u32 int_sqrt64(u64 x)
+ }
+ #endif
+ 
+-#ifdef CONFIG_SMP
+-extern unsigned int sysctl_oops_all_cpu_backtrace;
+-#else
+-#define sysctl_oops_all_cpu_backtrace 0
+-#endif /* CONFIG_SMP */
+-
+ extern void bust_spinlocks(int yes);
+ extern int panic_timeout;
+ extern unsigned long panic_print;
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index 4ce511437a8aa..2832cc6be062b 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -61,6 +61,7 @@ extern void sched_post_fork(struct task_struct *p,
+ extern void sched_dead(struct task_struct *p);
+ 
+ void __noreturn do_task_dead(void);
++void __noreturn make_task_dead(int signr);
+ 
+ extern void proc_caches_init(void);
+ 
+diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
+index 51298a4f46235..161eba9fd9122 100644
+--- a/include/linux/sysctl.h
++++ b/include/linux/sysctl.h
+@@ -195,6 +195,9 @@ struct ctl_table_header *register_sysctl_paths(const struct ctl_path *path,
+ void unregister_sysctl_table(struct ctl_table_header * table);
+ 
+ extern int sysctl_init(void);
++extern void __register_sysctl_init(const char *path, struct ctl_table *table,
++				 const char *table_name);
++#define register_sysctl_init(path, table) __register_sysctl_init(path, table, #table)
+ void do_sysctl_args(void);
+ 
+ extern int pwrsw_enabled;
+diff --git a/include/linux/units.h b/include/linux/units.h
+index aaf716364ec34..3457179f7116a 100644
+--- a/include/linux/units.h
++++ b/include/linux/units.h
+@@ -4,6 +4,26 @@
+ 
+ #include <linux/kernel.h>
+ 
++/* Metric prefixes in accordance with Système international (d'unités) */
++#define PETA	1000000000000000ULL
++#define TERA	1000000000000ULL
++#define GIGA	1000000000UL
++#define MEGA	1000000UL
++#define KILO	1000UL
++#define HECTO	100UL
++#define DECA	10UL
++#define DECI	10UL
++#define CENTI	100UL
++#define MILLI	1000UL
++#define MICRO	1000000UL
++#define NANO	1000000000UL
++#define PICO	1000000000000ULL
++#define FEMTO	1000000000000000ULL
++
++#define MILLIWATT_PER_WATT	1000L
++#define MICROWATT_PER_MILLIWATT	1000L
++#define MICROWATT_PER_WATT	1000000L
++
+ #define ABSOLUTE_ZERO_MILLICELSIUS -273150
+ 
+ static inline long milli_kelvin_to_millicelsius(long t)
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index e7e8c318925de..61cd19ee51f4e 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -1325,4 +1325,11 @@ static inline int skb_tc_reinsert(struct sk_buff *skb, struct tcf_result *res)
+ 	return res->ingress ? netif_receive_skb(skb) : dev_queue_xmit(skb);
+ }
+ 
++/* Make sure qdisc is no longer in SCHED state. */
++static inline void qdisc_synchronize(const struct Qdisc *q)
++{
++	while (test_bit(__QDISC_STATE_SCHED, &q->state))
++		msleep(1);
++}
++
+ #endif
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 90a8b8b26a207..69bbbe8bbf34a 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -315,7 +315,7 @@ struct bpf_local_storage;
+   *	@sk_tskey: counter to disambiguate concurrent tstamp requests
+   *	@sk_zckey: counter to order MSG_ZEROCOPY notifications
+   *	@sk_socket: Identd and reporting IO signals
+-  *	@sk_user_data: RPC layer private data
++  *	@sk_user_data: RPC layer private data. Write-protected by @sk_callback_lock.
+   *	@sk_frag: cached page frag
+   *	@sk_peek_off: current peek_offset value
+   *	@sk_send_head: front of stuff to transmit
+diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
+index 037c77fb5dc55..c4de15f7a0a55 100644
+--- a/include/scsi/scsi_transport_iscsi.h
++++ b/include/scsi/scsi_transport_iscsi.h
+@@ -236,6 +236,14 @@ enum {
+ 	ISCSI_SESSION_FREE,
+ };
+ 
++enum {
++	ISCSI_SESSION_TARGET_UNBOUND,
++	ISCSI_SESSION_TARGET_ALLOCATED,
++	ISCSI_SESSION_TARGET_SCANNED,
++	ISCSI_SESSION_TARGET_UNBINDING,
++	ISCSI_SESSION_TARGET_MAX,
++};
++
+ #define ISCSI_MAX_TARGET -1
+ 
+ struct iscsi_cls_session {
+@@ -262,6 +270,7 @@ struct iscsi_cls_session {
+ 	 */
+ 	pid_t creator;
+ 	int state;
++	int target_state;			/* session target bind state */
+ 	int sid;				/* session id */
+ 	void *dd_data;				/* LLD private data */
+ 	struct device dev;	/* sysfs transport/container device */
+diff --git a/include/uapi/linux/netfilter/nf_conntrack_sctp.h b/include/uapi/linux/netfilter/nf_conntrack_sctp.h
+index edc6ddab0de6a..2d6f80d75ae74 100644
+--- a/include/uapi/linux/netfilter/nf_conntrack_sctp.h
++++ b/include/uapi/linux/netfilter/nf_conntrack_sctp.h
+@@ -15,7 +15,7 @@ enum sctp_conntrack {
+ 	SCTP_CONNTRACK_SHUTDOWN_RECD,
+ 	SCTP_CONNTRACK_SHUTDOWN_ACK_SENT,
+ 	SCTP_CONNTRACK_HEARTBEAT_SENT,
+-	SCTP_CONNTRACK_HEARTBEAT_ACKED,
++	SCTP_CONNTRACK_HEARTBEAT_ACKED,	/* no longer used */
+ 	SCTP_CONNTRACK_MAX
+ };
+ 
+diff --git a/include/uapi/linux/netfilter/nfnetlink_cttimeout.h b/include/uapi/linux/netfilter/nfnetlink_cttimeout.h
+index 6b20fb22717b2..aa805e6d4e284 100644
+--- a/include/uapi/linux/netfilter/nfnetlink_cttimeout.h
++++ b/include/uapi/linux/netfilter/nfnetlink_cttimeout.h
+@@ -94,7 +94,7 @@ enum ctattr_timeout_sctp {
+ 	CTA_TIMEOUT_SCTP_SHUTDOWN_RECD,
+ 	CTA_TIMEOUT_SCTP_SHUTDOWN_ACK_SENT,
+ 	CTA_TIMEOUT_SCTP_HEARTBEAT_SENT,
+-	CTA_TIMEOUT_SCTP_HEARTBEAT_ACKED,
++	CTA_TIMEOUT_SCTP_HEARTBEAT_ACKED, /* no longer used */
+ 	__CTA_TIMEOUT_SCTP_MAX
+ };
+ #define CTA_TIMEOUT_SCTP_MAX (__CTA_TIMEOUT_SCTP_MAX - 1)
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 232c93357b907..a6c931fed39bd 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2319,7 +2319,9 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 		bool sanitize = reg && is_spillable_regtype(reg->type);
+ 
+ 		for (i = 0; i < size; i++) {
+-			if (state->stack[spi].slot_type[i] == STACK_INVALID) {
++			u8 type = state->stack[spi].slot_type[i];
++
++			if (type != STACK_MISC && type != STACK_ZERO) {
+ 				sanitize = true;
+ 				break;
+ 			}
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 8989e1d1f79b7..bacdaf980933b 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -64,11 +64,58 @@
+ #include <linux/rcuwait.h>
+ #include <linux/compat.h>
+ #include <linux/io_uring.h>
++#include <linux/sysfs.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/unistd.h>
+ #include <asm/mmu_context.h>
+ 
++/*
++ * The default value should be high enough to not crash a system that randomly
++ * crashes its kernel from time to time, but low enough to at least not permit
++ * overflowing 32-bit refcounts or the ldsem writer count.
++ */
++static unsigned int oops_limit = 10000;
++
++#ifdef CONFIG_SYSCTL
++static struct ctl_table kern_exit_table[] = {
++	{
++		.procname       = "oops_limit",
++		.data           = &oops_limit,
++		.maxlen         = sizeof(oops_limit),
++		.mode           = 0644,
++		.proc_handler   = proc_douintvec,
++	},
++	{ }
++};
++
++static __init int kernel_exit_sysctls_init(void)
++{
++	register_sysctl_init("kernel", kern_exit_table);
++	return 0;
++}
++late_initcall(kernel_exit_sysctls_init);
++#endif
++
++static atomic_t oops_count = ATOMIC_INIT(0);
++
++#ifdef CONFIG_SYSFS
++static ssize_t oops_count_show(struct kobject *kobj, struct kobj_attribute *attr,
++			       char *page)
++{
++	return sysfs_emit(page, "%d\n", atomic_read(&oops_count));
++}
++
++static struct kobj_attribute oops_count_attr = __ATTR_RO(oops_count);
++
++static __init int kernel_exit_sysfs_init(void)
++{
++	sysfs_add_file_to_group(kernel_kobj, &oops_count_attr.attr, NULL);
++	return 0;
++}
++late_initcall(kernel_exit_sysfs_init);
++#endif
++
+ static void __unhash_process(struct task_struct *p, bool group_dead)
+ {
+ 	nr_threads--;
+@@ -863,6 +910,31 @@ void __noreturn do_exit(long code)
+ }
+ EXPORT_SYMBOL_GPL(do_exit);
+ 
++void __noreturn make_task_dead(int signr)
++{
++	/*
++	 * Take the task off the cpu after something catastrophic has
++	 * happened.
++	 */
++	unsigned int limit;
++
++	/*
++	 * Every time the system oopses, if the oops happens while a reference
++	 * to an object was held, the reference leaks.
++	 * If the oops doesn't also leak memory, repeated oopsing can cause
++	 * reference counters to wrap around (if they're not using refcount_t).
++	 * This means that repeated oopsing can make unexploitable-looking bugs
++	 * exploitable through repeated oopsing.
++	 * To make sure this can't happen, place an upper bound on how often the
++	 * kernel may oops without panic().
++	 */
++	limit = READ_ONCE(oops_limit);
++	if (atomic_inc_return(&oops_count) >= limit && limit)
++		panic("Oopsed too often (kernel.oops_limit is %d)", limit);
++
++	do_exit(signr);
++}
++
+ void complete_and_exit(struct completion *comp, long code)
+ {
+ 	if (comp)
+diff --git a/kernel/kcsan/kcsan-test.c b/kernel/kcsan/kcsan-test.c
+index ebe7fd2451040..8a8ccaf4f38f3 100644
+--- a/kernel/kcsan/kcsan-test.c
++++ b/kernel/kcsan/kcsan-test.c
+@@ -149,7 +149,7 @@ static bool report_matches(const struct expect_report *r)
+ 	const bool is_assert = (r->access[0].type | r->access[1].type) & KCSAN_ACCESS_ASSERT;
+ 	bool ret = false;
+ 	unsigned long flags;
+-	typeof(observed.lines) expect;
++	typeof(*observed.lines) *expect;
+ 	const char *end;
+ 	char *cur;
+ 	int i;
+@@ -158,6 +158,10 @@ static bool report_matches(const struct expect_report *r)
+ 	if (!report_available())
+ 		return false;
+ 
++	expect = kmalloc(sizeof(observed.lines), GFP_KERNEL);
++	if (WARN_ON(!expect))
++		return false;
++
+ 	/* Generate expected report contents. */
+ 
+ 	/* Title */
+@@ -241,6 +245,7 @@ static bool report_matches(const struct expect_report *r)
+ 		strstr(observed.lines[2], expect[1])));
+ out:
+ 	spin_unlock_irqrestore(&observed.lock, flags);
++	kfree(expect);
+ 	return ret;
+ }
+ 
+diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c
+index d3bf87e6007ca..069830f5a5d24 100644
+--- a/kernel/kcsan/report.c
++++ b/kernel/kcsan/report.c
+@@ -630,8 +630,8 @@ void kcsan_report(const volatile void *ptr, size_t size, int access_type,
+ 		bool reported = value_change != KCSAN_VALUE_CHANGE_FALSE &&
+ 				print_report(value_change, type, &ai, other_info);
+ 
+-		if (reported && panic_on_warn)
+-			panic("panic_on_warn set ...\n");
++		if (reported)
++			check_panic_on_warn("KCSAN");
+ 
+ 		release_report(&flags, other_info);
+ 	}
+diff --git a/kernel/module.c b/kernel/module.c
+index 6a0fd245c0483..33d1dc6d4cd6a 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -3661,7 +3661,8 @@ static bool finished_loading(const char *name)
+ 	sched_annotate_sleep();
+ 	mutex_lock(&module_mutex);
+ 	mod = find_module_all(name, strlen(name), true);
+-	ret = !mod || mod->state == MODULE_STATE_LIVE;
++	ret = !mod || mod->state == MODULE_STATE_LIVE
++		|| mod->state == MODULE_STATE_GOING;
+ 	mutex_unlock(&module_mutex);
+ 
+ 	return ret;
+@@ -3827,20 +3828,35 @@ static int add_unformed_module(struct module *mod)
+ 
+ 	mod->state = MODULE_STATE_UNFORMED;
+ 
+-again:
+ 	mutex_lock(&module_mutex);
+ 	old = find_module_all(mod->name, strlen(mod->name), true);
+ 	if (old != NULL) {
+-		if (old->state != MODULE_STATE_LIVE) {
++		if (old->state == MODULE_STATE_COMING
++		    || old->state == MODULE_STATE_UNFORMED) {
+ 			/* Wait in case it fails to load. */
+ 			mutex_unlock(&module_mutex);
+ 			err = wait_event_interruptible(module_wq,
+ 					       finished_loading(mod->name));
+ 			if (err)
+ 				goto out_unlocked;
+-			goto again;
++
++			/* The module might have gone in the meantime. */
++			mutex_lock(&module_mutex);
++			old = find_module_all(mod->name, strlen(mod->name),
++					      true);
+ 		}
+-		err = -EEXIST;
++
++		/*
++		 * We are here only when the same module was being loaded. Do
++		 * not try to load it again right now. It prevents long delays
++		 * caused by serialized module load failures. It might happen
++		 * when more devices of the same type trigger load of
++		 * a particular module.
++		 */
++		if (old && old->state == MODULE_STATE_LIVE)
++			err = -EEXIST;
++		else
++			err = -EBUSY;
+ 		goto out;
+ 	}
+ 	mod_update_bounds(mod);
+diff --git a/kernel/panic.c b/kernel/panic.c
+index 332736a72a58e..bc39e2b27d315 100644
+--- a/kernel/panic.c
++++ b/kernel/panic.c
+@@ -31,6 +31,7 @@
+ #include <linux/bug.h>
+ #include <linux/ratelimit.h>
+ #include <linux/debugfs.h>
++#include <linux/sysfs.h>
+ #include <asm/sections.h>
+ 
+ #define PANIC_TIMER_STEP 100
+@@ -41,7 +42,9 @@
+  * Should we dump all CPUs backtraces in an oops event?
+  * Defaults to 0, can be changed via sysctl.
+  */
+-unsigned int __read_mostly sysctl_oops_all_cpu_backtrace;
++static unsigned int __read_mostly sysctl_oops_all_cpu_backtrace;
++#else
++#define sysctl_oops_all_cpu_backtrace 0
+ #endif /* CONFIG_SMP */
+ 
+ int panic_on_oops = CONFIG_PANIC_ON_OOPS_VALUE;
+@@ -54,6 +57,7 @@ bool crash_kexec_post_notifiers;
+ int panic_on_warn __read_mostly;
+ unsigned long panic_on_taint;
+ bool panic_on_taint_nousertaint = false;
++static unsigned int warn_limit __read_mostly;
+ 
+ int panic_timeout = CONFIG_PANIC_TIMEOUT;
+ EXPORT_SYMBOL_GPL(panic_timeout);
+@@ -70,6 +74,56 @@ ATOMIC_NOTIFIER_HEAD(panic_notifier_list);
+ 
+ EXPORT_SYMBOL(panic_notifier_list);
+ 
++#ifdef CONFIG_SYSCTL
++static struct ctl_table kern_panic_table[] = {
++#ifdef CONFIG_SMP
++	{
++		.procname       = "oops_all_cpu_backtrace",
++		.data           = &sysctl_oops_all_cpu_backtrace,
++		.maxlen         = sizeof(int),
++		.mode           = 0644,
++		.proc_handler   = proc_dointvec_minmax,
++		.extra1         = SYSCTL_ZERO,
++		.extra2         = SYSCTL_ONE,
++	},
++#endif
++	{
++		.procname       = "warn_limit",
++		.data           = &warn_limit,
++		.maxlen         = sizeof(warn_limit),
++		.mode           = 0644,
++		.proc_handler   = proc_douintvec,
++	},
++	{ }
++};
++
++static __init int kernel_panic_sysctls_init(void)
++{
++	register_sysctl_init("kernel", kern_panic_table);
++	return 0;
++}
++late_initcall(kernel_panic_sysctls_init);
++#endif
++
++static atomic_t warn_count = ATOMIC_INIT(0);
++
++#ifdef CONFIG_SYSFS
++static ssize_t warn_count_show(struct kobject *kobj, struct kobj_attribute *attr,
++			       char *page)
++{
++	return sysfs_emit(page, "%d\n", atomic_read(&warn_count));
++}
++
++static struct kobj_attribute warn_count_attr = __ATTR_RO(warn_count);
++
++static __init int kernel_panic_sysfs_init(void)
++{
++	sysfs_add_file_to_group(kernel_kobj, &warn_count_attr.attr, NULL);
++	return 0;
++}
++late_initcall(kernel_panic_sysfs_init);
++#endif
++
+ static long no_blink(int state)
+ {
+ 	return 0;
+@@ -166,6 +220,19 @@ static void panic_print_sys_info(void)
+ 		ftrace_dump(DUMP_ALL);
+ }
+ 
++void check_panic_on_warn(const char *origin)
++{
++	unsigned int limit;
++
++	if (panic_on_warn)
++		panic("%s: panic_on_warn set ...\n", origin);
++
++	limit = READ_ONCE(warn_limit);
++	if (atomic_inc_return(&warn_count) >= limit && limit)
++		panic("%s: system warned too often (kernel.warn_limit is %d)",
++		      origin, limit);
++}
++
+ /**
+  *	panic - halt the system
+  *	@fmt: The text string to print
+@@ -183,6 +250,16 @@ void panic(const char *fmt, ...)
+ 	int old_cpu, this_cpu;
+ 	bool _crash_kexec_post_notifiers = crash_kexec_post_notifiers;
+ 
++	if (panic_on_warn) {
++		/*
++		 * This thread may hit another WARN() in the panic path.
++		 * Resetting this prevents additional WARN() from panicking the
++		 * system on this thread.  Other threads are blocked by the
++		 * panic_mutex in panic().
++		 */
++		panic_on_warn = 0;
++	}
++
+ 	/*
+ 	 * Disable local interrupts. This will prevent panic_smp_self_stop
+ 	 * from deadlocking the first cpu that invokes the panic, since
+@@ -594,16 +671,7 @@ void __warn(const char *file, int line, void *caller, unsigned taint,
+ 	if (regs)
+ 		show_regs(regs);
+ 
+-	if (panic_on_warn) {
+-		/*
+-		 * This thread may hit another WARN() in the panic path.
+-		 * Resetting this prevents additional WARN() from panicking the
+-		 * system on this thread.  Other threads are blocked by the
+-		 * panic_mutex in panic().
+-		 */
+-		panic_on_warn = 0;
+-		panic("panic_on_warn set ...\n");
+-	}
++	check_panic_on_warn("kernel");
+ 
+ 	if (!regs)
+ 		dump_stack();
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index a875bc59804eb..1303a2607f1f8 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -4280,8 +4280,7 @@ static noinline void __schedule_bug(struct task_struct *prev)
+ 		pr_err("Preemption disabled at:");
+ 		print_ip_sym(KERN_ERR, preempt_disable_ip);
+ 	}
+-	if (panic_on_warn)
+-		panic("scheduling while atomic\n");
++	check_panic_on_warn("scheduling while atomic");
+ 
+ 	dump_stack();
+ 	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 3eb527f8a269c..d8b7b28463135 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -2199,17 +2199,6 @@ static struct ctl_table kern_table[] = {
+ 		.proc_handler	= proc_dointvec,
+ 	},
+ #endif
+-#ifdef CONFIG_SMP
+-	{
+-		.procname	= "oops_all_cpu_backtrace",
+-		.data		= &sysctl_oops_all_cpu_backtrace,
+-		.maxlen		= sizeof(int),
+-		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
+-		.extra1		= SYSCTL_ZERO,
+-		.extra2		= SYSCTL_ONE,
+-	},
+-#endif /* CONFIG_SMP */
+ 	{
+ 		.procname	= "pid_max",
+ 		.data		= &pid_max,
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index c7c92b0eed048..f06d48be5a969 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -9680,6 +9680,8 @@ void __init early_trace_init(void)
+ 			static_key_enable(&tracepoint_printk_key.key);
+ 	}
+ 	tracer_alloc_buffers();
++
++	init_events();
+ }
+ 
+ void __init trace_init(void)
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 8d67f7f448400..37f616bf5fa93 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -1673,6 +1673,7 @@ extern void trace_event_enable_cmd_record(bool enable);
+ extern void trace_event_enable_tgid_record(bool enable);
+ 
+ extern int event_trace_init(void);
++extern int init_events(void);
+ extern int event_trace_add_tracer(struct dentry *parent, struct trace_array *tr);
+ extern int event_trace_del_tracer(struct trace_array *tr);
+ extern void __trace_early_add_events(struct trace_array *tr);
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 0ae3e4454ff2c..ccc99cd23f3c4 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1646,6 +1646,8 @@ static struct hist_field *create_hist_field(struct hist_trigger_data *hist_data,
+ 		unsigned long fl = flags & ~HIST_FIELD_FL_LOG2;
+ 		hist_field->fn = hist_field_log2;
+ 		hist_field->operands[0] = create_hist_field(hist_data, field, fl, NULL);
++		if (!hist_field->operands[0])
++			goto free;
+ 		hist_field->size = hist_field->operands[0]->size;
+ 		hist_field->type = kstrdup(hist_field->operands[0]->type, GFP_KERNEL);
+ 		if (!hist_field->type)
+diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
+index 000e9dc224c61..b3ee8d9b6b62a 100644
+--- a/kernel/trace/trace_output.c
++++ b/kernel/trace/trace_output.c
+@@ -1378,7 +1378,7 @@ static struct trace_event *events[] __initdata = {
+ 	NULL
+ };
+ 
+-__init static int init_events(void)
++__init int init_events(void)
+ {
+ 	struct trace_event *event;
+ 	int i, ret;
+@@ -1396,4 +1396,3 @@ __init static int init_events(void)
+ 
+ 	return 0;
+ }
+-early_initcall(init_events);
+diff --git a/lib/lockref.c b/lib/lockref.c
+index 5b34bbd3eba81..81ac5f3552428 100644
+--- a/lib/lockref.c
++++ b/lib/lockref.c
+@@ -24,7 +24,6 @@
+ 		}								\
+ 		if (!--retry)							\
+ 			break;							\
+-		cpu_relax();							\
+ 	}									\
+ } while (0)
+ 
+diff --git a/lib/nlattr.c b/lib/nlattr.c
+index fe60f9ae9db17..aa8fc4371e930 100644
+--- a/lib/nlattr.c
++++ b/lib/nlattr.c
+@@ -10,6 +10,7 @@
+ #include <linux/kernel.h>
+ #include <linux/errno.h>
+ #include <linux/jiffies.h>
++#include <linux/nospec.h>
+ #include <linux/skbuff.h>
+ #include <linux/string.h>
+ #include <linux/types.h>
+@@ -369,6 +370,7 @@ static int validate_nla(const struct nlattr *nla, int maxtype,
+ 	if (type <= 0 || type > maxtype)
+ 		return 0;
+ 
++	type = array_index_nospec(type, maxtype + 1);
+ 	pt = &policy[type];
+ 
+ 	BUG_ON(pt->type > NLA_TYPE_MAX);
+@@ -584,6 +586,7 @@ static int __nla_validate_parse(const struct nlattr *head, int len, int maxtype,
+ 			}
+ 			continue;
+ 		}
++		type = array_index_nospec(type, maxtype + 1);
+ 		if (policy) {
+ 			int err = validate_nla(nla, maxtype, policy,
+ 					       validate, extack, depth);
+diff --git a/lib/ubsan.c b/lib/ubsan.c
+index adf8dcf3c84e6..ee14c46cac897 100644
+--- a/lib/ubsan.c
++++ b/lib/ubsan.c
+@@ -151,16 +151,7 @@ static void ubsan_epilogue(void)
+ 
+ 	current->in_ubsan--;
+ 
+-	if (panic_on_warn) {
+-		/*
+-		 * This thread may hit another WARN() in the panic path.
+-		 * Resetting this prevents additional WARN() from panicking the
+-		 * system on this thread.  Other threads are blocked by the
+-		 * panic_mutex in panic().
+-		 */
+-		panic_on_warn = 0;
+-		panic("panic_on_warn set ...\n");
+-	}
++	check_panic_on_warn("UBSAN");
+ }
+ 
+ static void handle_overflow(struct overflow_data *data, void *lhs,
+diff --git a/mm/kasan/report.c b/mm/kasan/report.c
+index 00a53f1355aec..2f5e96ac4d008 100644
+--- a/mm/kasan/report.c
++++ b/mm/kasan/report.c
+@@ -95,16 +95,8 @@ static void end_report(unsigned long *flags)
+ 	pr_err("==================================================================\n");
+ 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
+ 	spin_unlock_irqrestore(&report_lock, *flags);
+-	if (panic_on_warn && !test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags)) {
+-		/*
+-		 * This thread may hit another WARN() in the panic path.
+-		 * Resetting this prevents additional WARN() from panicking the
+-		 * system on this thread.  Other threads are blocked by the
+-		 * panic_mutex in panic().
+-		 */
+-		panic_on_warn = 0;
+-		panic("panic_on_warn set ...\n");
+-	}
++	if (!test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
++		check_panic_on_warn("KASAN");
+ 	kasan_enable_current();
+ }
+ 
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 2af1477a05ca6..08c473aa0113d 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1623,6 +1623,7 @@ setup_failed:
+ 			hdev->flush(hdev);
+ 
+ 		if (hdev->sent_cmd) {
++			cancel_delayed_work_sync(&hdev->cmd_timer);
+ 			kfree_skb(hdev->sent_cmd);
+ 			hdev->sent_cmd = NULL;
+ 		}
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index a3b7d965e9c01..e05dd4f3279a8 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -155,12 +155,12 @@ static int ops_init(const struct pernet_operations *ops, struct net *net)
+ 		return 0;
+ 
+ 	if (ops->id && ops->size) {
+-cleanup:
+ 		ng = rcu_dereference_protected(net->gen,
+ 					       lockdep_is_held(&pernet_ops_rwsem));
+ 		ng->ptr[*ops->id] = NULL;
+ 	}
+ 
++cleanup:
+ 	kfree(data);
+ 
+ out:
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index ab9fcc6231b86..4e94796ccdbd1 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -30,6 +30,7 @@
+ #include <linux/slab.h>
+ #include <linux/netlink.h>
+ #include <linux/hash.h>
++#include <linux/nospec.h>
+ 
+ #include <net/arp.h>
+ #include <net/ip.h>
+@@ -1021,6 +1022,7 @@ bool fib_metrics_match(struct fib_config *cfg, struct fib_info *fi)
+ 		if (type > RTAX_MAX)
+ 			return false;
+ 
++		type = array_index_nospec(type, RTAX_MAX + 1);
+ 		if (type == RTAX_CC_ALGO) {
+ 			char tmp[TCP_CA_NAME_MAX];
+ 			bool ecn_ca = false;
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index c68a1dae25ca3..2615b72118d1f 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -571,8 +571,20 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
+ 	spin_lock(lock);
+ 	if (osk) {
+ 		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
+-		ret = sk_nulls_del_node_init_rcu(osk);
+-	} else if (found_dup_sk) {
++		ret = sk_hashed(osk);
++		if (ret) {
++			/* Before deleting the node, we insert a new one to make
++			 * sure that the look-up-sk process would not miss either
++			 * of them and that at least one node would exist in ehash
++			 * table all the time. Otherwise there's a tiny chance
++			 * that lookup process could find nothing in ehash table.
++			 */
++			__sk_nulls_add_node_tail_rcu(sk, list);
++			sk_nulls_del_node_init_rcu(osk);
++		}
++		goto unlock;
++	}
++	if (found_dup_sk) {
+ 		*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
+ 		if (*found_dup_sk)
+ 			ret = false;
+@@ -581,6 +593,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
+ 	if (ret)
+ 		__sk_nulls_add_node_rcu(sk, list);
+ 
++unlock:
+ 	spin_unlock(lock);
+ 
+ 	return ret;
+diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
+index c411c87ae865f..a00102d7c7fd4 100644
+--- a/net/ipv4/inet_timewait_sock.c
++++ b/net/ipv4/inet_timewait_sock.c
+@@ -81,10 +81,10 @@ void inet_twsk_put(struct inet_timewait_sock *tw)
+ }
+ EXPORT_SYMBOL_GPL(inet_twsk_put);
+ 
+-static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
+-				   struct hlist_nulls_head *list)
++static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
++					struct hlist_nulls_head *list)
+ {
+-	hlist_nulls_add_head_rcu(&tw->tw_node, list);
++	hlist_nulls_add_tail_rcu(&tw->tw_node, list);
+ }
+ 
+ static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw,
+@@ -120,7 +120,7 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
+ 
+ 	spin_lock(lock);
+ 
+-	inet_twsk_add_node_rcu(tw, &ehead->chain);
++	inet_twsk_add_node_tail_rcu(tw, &ehead->chain);
+ 
+ 	/* Step 3: Remove SK from hash chain */
+ 	if (__sk_nulls_del_node_init_rcu(sk))
+diff --git a/net/ipv4/metrics.c b/net/ipv4/metrics.c
+index 3205d5f7c8c94..4966ac2aaf87d 100644
+--- a/net/ipv4/metrics.c
++++ b/net/ipv4/metrics.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ #include <linux/netlink.h>
++#include <linux/nospec.h>
+ #include <linux/rtnetlink.h>
+ #include <linux/types.h>
+ #include <net/ip.h>
+@@ -28,6 +29,7 @@ static int ip_metrics_convert(struct net *net, struct nlattr *fc_mx,
+ 			return -EINVAL;
+ 		}
+ 
++		type = array_index_nospec(type, RTAX_MAX + 1);
+ 		if (type == RTAX_CC_ALGO) {
+ 			char tmp[TCP_CA_NAME_MAX];
+ 
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index cc588bc2b11d7..6a0560a735ce4 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -432,6 +432,7 @@ void tcp_init_sock(struct sock *sk)
+ 
+ 	/* There's a bubble in the pipe until at least the first ACK. */
+ 	tp->app_limited = ~0U;
++	tp->rate_app_limited = 1;
+ 
+ 	/* See draft-stevens-tcpca-spec-01 for discussion of the
+ 	 * initialization of these values.
+@@ -2837,6 +2838,7 @@ int tcp_disconnect(struct sock *sk, int flags)
+ 	tp->last_oow_ack_time = 0;
+ 	/* There's a bubble in the pipe until at least the first ACK. */
+ 	tp->app_limited = ~0U;
++	tp->rate_app_limited = 1;
+ 	tp->rack.mstamp = 0;
+ 	tp->rack.advanced = 0;
+ 	tp->rack.reo_wnd_steps = 1;
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index dc8987ed08adb..a4b793d1b7d76 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -104,9 +104,9 @@ static struct workqueue_struct *l2tp_wq;
+ /* per-net private data for this module */
+ static unsigned int l2tp_net_id;
+ struct l2tp_net {
+-	struct list_head l2tp_tunnel_list;
+-	/* Lock for write access to l2tp_tunnel_list */
+-	spinlock_t l2tp_tunnel_list_lock;
++	/* Lock for write access to l2tp_tunnel_idr */
++	spinlock_t l2tp_tunnel_idr_lock;
++	struct idr l2tp_tunnel_idr;
+ 	struct hlist_head l2tp_session_hlist[L2TP_HASH_SIZE_2];
+ 	/* Lock for write access to l2tp_session_hlist */
+ 	spinlock_t l2tp_session_hlist_lock;
+@@ -208,13 +208,10 @@ struct l2tp_tunnel *l2tp_tunnel_get(const struct net *net, u32 tunnel_id)
+ 	struct l2tp_tunnel *tunnel;
+ 
+ 	rcu_read_lock_bh();
+-	list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
+-		if (tunnel->tunnel_id == tunnel_id &&
+-		    refcount_inc_not_zero(&tunnel->ref_count)) {
+-			rcu_read_unlock_bh();
+-
+-			return tunnel;
+-		}
++	tunnel = idr_find(&pn->l2tp_tunnel_idr, tunnel_id);
++	if (tunnel && refcount_inc_not_zero(&tunnel->ref_count)) {
++		rcu_read_unlock_bh();
++		return tunnel;
+ 	}
+ 	rcu_read_unlock_bh();
+ 
+@@ -224,13 +221,14 @@ EXPORT_SYMBOL_GPL(l2tp_tunnel_get);
+ 
+ struct l2tp_tunnel *l2tp_tunnel_get_nth(const struct net *net, int nth)
+ {
+-	const struct l2tp_net *pn = l2tp_pernet(net);
++	struct l2tp_net *pn = l2tp_pernet(net);
++	unsigned long tunnel_id, tmp;
+ 	struct l2tp_tunnel *tunnel;
+ 	int count = 0;
+ 
+ 	rcu_read_lock_bh();
+-	list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
+-		if (++count > nth &&
++	idr_for_each_entry_ul(&pn->l2tp_tunnel_idr, tunnel, tmp, tunnel_id) {
++		if (tunnel && ++count > nth &&
+ 		    refcount_inc_not_zero(&tunnel->ref_count)) {
+ 			rcu_read_unlock_bh();
+ 			return tunnel;
+@@ -1043,7 +1041,7 @@ static int l2tp_xmit_core(struct l2tp_session *session, struct sk_buff *skb, uns
+ 	IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | IPSKB_REROUTED);
+ 	nf_reset_ct(skb);
+ 
+-	bh_lock_sock(sk);
++	bh_lock_sock_nested(sk);
+ 	if (sock_owned_by_user(sk)) {
+ 		kfree_skb(skb);
+ 		ret = NET_XMIT_DROP;
+@@ -1150,8 +1148,10 @@ static void l2tp_tunnel_destruct(struct sock *sk)
+ 	}
+ 
+ 	/* Remove hooks into tunnel socket */
++	write_lock_bh(&sk->sk_callback_lock);
+ 	sk->sk_destruct = tunnel->old_sk_destruct;
+ 	sk->sk_user_data = NULL;
++	write_unlock_bh(&sk->sk_callback_lock);
+ 
+ 	/* Call the original destructor */
+ 	if (sk->sk_destruct)
+@@ -1227,6 +1227,15 @@ static void l2tp_udp_encap_destroy(struct sock *sk)
+ 		l2tp_tunnel_delete(tunnel);
+ }
+ 
++static void l2tp_tunnel_remove(struct net *net, struct l2tp_tunnel *tunnel)
++{
++	struct l2tp_net *pn = l2tp_pernet(net);
++
++	spin_lock_bh(&pn->l2tp_tunnel_idr_lock);
++	idr_remove(&pn->l2tp_tunnel_idr, tunnel->tunnel_id);
++	spin_unlock_bh(&pn->l2tp_tunnel_idr_lock);
++}
++
+ /* Workqueue tunnel deletion function */
+ static void l2tp_tunnel_del_work(struct work_struct *work)
+ {
+@@ -1234,7 +1243,6 @@ static void l2tp_tunnel_del_work(struct work_struct *work)
+ 						  del_work);
+ 	struct sock *sk = tunnel->sock;
+ 	struct socket *sock = sk->sk_socket;
+-	struct l2tp_net *pn;
+ 
+ 	l2tp_tunnel_closeall(tunnel);
+ 
+@@ -1248,12 +1256,7 @@ static void l2tp_tunnel_del_work(struct work_struct *work)
+ 		}
+ 	}
+ 
+-	/* Remove the tunnel struct from the tunnel list */
+-	pn = l2tp_pernet(tunnel->l2tp_net);
+-	spin_lock_bh(&pn->l2tp_tunnel_list_lock);
+-	list_del_rcu(&tunnel->list);
+-	spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
+-
++	l2tp_tunnel_remove(tunnel->l2tp_net, tunnel);
+ 	/* drop initial ref */
+ 	l2tp_tunnel_dec_refcount(tunnel);
+ 
+@@ -1384,8 +1387,6 @@ out:
+ 	return err;
+ }
+ 
+-static struct lock_class_key l2tp_socket_class;
+-
+ int l2tp_tunnel_create(int fd, int version, u32 tunnel_id, u32 peer_tunnel_id,
+ 		       struct l2tp_tunnel_cfg *cfg, struct l2tp_tunnel **tunnelp)
+ {
+@@ -1455,12 +1456,19 @@ static int l2tp_validate_socket(const struct sock *sk, const struct net *net,
+ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
+ 			 struct l2tp_tunnel_cfg *cfg)
+ {
+-	struct l2tp_tunnel *tunnel_walk;
+-	struct l2tp_net *pn;
++	struct l2tp_net *pn = l2tp_pernet(net);
++	u32 tunnel_id = tunnel->tunnel_id;
+ 	struct socket *sock;
+ 	struct sock *sk;
+ 	int ret;
+ 
++	spin_lock_bh(&pn->l2tp_tunnel_idr_lock);
++	ret = idr_alloc_u32(&pn->l2tp_tunnel_idr, NULL, &tunnel_id, tunnel_id,
++			    GFP_ATOMIC);
++	spin_unlock_bh(&pn->l2tp_tunnel_idr_lock);
++	if (ret)
++		return ret == -ENOSPC ? -EEXIST : ret;
++
+ 	if (tunnel->fd < 0) {
+ 		ret = l2tp_tunnel_sock_create(net, tunnel->tunnel_id,
+ 					      tunnel->peer_tunnel_id, cfg,
+@@ -1471,30 +1479,16 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
+ 		sock = sockfd_lookup(tunnel->fd, &ret);
+ 		if (!sock)
+ 			goto err;
+-
+-		ret = l2tp_validate_socket(sock->sk, net, tunnel->encap);
+-		if (ret < 0)
+-			goto err_sock;
+ 	}
+ 
+-	tunnel->l2tp_net = net;
+-	pn = l2tp_pernet(net);
+-
+ 	sk = sock->sk;
+-	sock_hold(sk);
+-	tunnel->sock = sk;
+-
+-	spin_lock_bh(&pn->l2tp_tunnel_list_lock);
+-	list_for_each_entry(tunnel_walk, &pn->l2tp_tunnel_list, list) {
+-		if (tunnel_walk->tunnel_id == tunnel->tunnel_id) {
+-			spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
+-			sock_put(sk);
+-			ret = -EEXIST;
+-			goto err_sock;
+-		}
+-	}
+-	list_add_rcu(&tunnel->list, &pn->l2tp_tunnel_list);
+-	spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
++	lock_sock(sk);
++	write_lock_bh(&sk->sk_callback_lock);
++	ret = l2tp_validate_socket(sk, net, tunnel->encap);
++	if (ret < 0)
++		goto err_inval_sock;
++	rcu_assign_sk_user_data(sk, tunnel);
++	write_unlock_bh(&sk->sk_callback_lock);
+ 
+ 	if (tunnel->encap == L2TP_ENCAPTYPE_UDP) {
+ 		struct udp_tunnel_sock_cfg udp_cfg = {
+@@ -1505,15 +1499,20 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
+ 		};
+ 
+ 		setup_udp_tunnel_sock(net, sock, &udp_cfg);
+-	} else {
+-		sk->sk_user_data = tunnel;
+ 	}
+ 
+ 	tunnel->old_sk_destruct = sk->sk_destruct;
+ 	sk->sk_destruct = &l2tp_tunnel_destruct;
+-	lockdep_set_class_and_name(&sk->sk_lock.slock, &l2tp_socket_class,
+-				   "l2tp_sock");
+ 	sk->sk_allocation = GFP_ATOMIC;
++	release_sock(sk);
++
++	sock_hold(sk);
++	tunnel->sock = sk;
++	tunnel->l2tp_net = net;
++
++	spin_lock_bh(&pn->l2tp_tunnel_idr_lock);
++	idr_replace(&pn->l2tp_tunnel_idr, tunnel, tunnel->tunnel_id);
++	spin_unlock_bh(&pn->l2tp_tunnel_idr_lock);
+ 
+ 	trace_register_tunnel(tunnel);
+ 
+@@ -1522,12 +1521,16 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
+ 
+ 	return 0;
+ 
+-err_sock:
++err_inval_sock:
++	write_unlock_bh(&sk->sk_callback_lock);
++	release_sock(sk);
++
+ 	if (tunnel->fd < 0)
+ 		sock_release(sock);
+ 	else
+ 		sockfd_put(sock);
+ err:
++	l2tp_tunnel_remove(net, tunnel);
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(l2tp_tunnel_register);
+@@ -1641,8 +1644,8 @@ static __net_init int l2tp_init_net(struct net *net)
+ 	struct l2tp_net *pn = net_generic(net, l2tp_net_id);
+ 	int hash;
+ 
+-	INIT_LIST_HEAD(&pn->l2tp_tunnel_list);
+-	spin_lock_init(&pn->l2tp_tunnel_list_lock);
++	idr_init(&pn->l2tp_tunnel_idr);
++	spin_lock_init(&pn->l2tp_tunnel_idr_lock);
+ 
+ 	for (hash = 0; hash < L2TP_HASH_SIZE_2; hash++)
+ 		INIT_HLIST_HEAD(&pn->l2tp_session_hlist[hash]);
+@@ -1656,11 +1659,13 @@ static __net_exit void l2tp_exit_net(struct net *net)
+ {
+ 	struct l2tp_net *pn = l2tp_pernet(net);
+ 	struct l2tp_tunnel *tunnel = NULL;
++	unsigned long tunnel_id, tmp;
+ 	int hash;
+ 
+ 	rcu_read_lock_bh();
+-	list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
+-		l2tp_tunnel_delete(tunnel);
++	idr_for_each_entry_ul(&pn->l2tp_tunnel_idr, tunnel, tmp, tunnel_id) {
++		if (tunnel)
++			l2tp_tunnel_delete(tunnel);
+ 	}
+ 	rcu_read_unlock_bh();
+ 
+@@ -1670,6 +1675,7 @@ static __net_exit void l2tp_exit_net(struct net *net)
+ 
+ 	for (hash = 0; hash < L2TP_HASH_SIZE_2; hash++)
+ 		WARN_ON_ONCE(!hlist_empty(&pn->l2tp_session_hlist[hash]));
++	idr_destroy(&pn->l2tp_tunnel_idr);
+ }
+ 
+ static struct pernet_operations l2tp_net_ops = {
+diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c
+index 7626f3e1c70a7..cec4b16170a0b 100644
+--- a/net/netfilter/nf_conntrack_proto_sctp.c
++++ b/net/netfilter/nf_conntrack_proto_sctp.c
+@@ -27,22 +27,16 @@
+ #include <net/netfilter/nf_conntrack_ecache.h>
+ #include <net/netfilter/nf_conntrack_timeout.h>
+ 
+-/* FIXME: Examine ipfilter's timeouts and conntrack transitions more
+-   closely.  They're more complex. --RR
+-
+-   And so for me for SCTP :D -Kiran */
+-
+ static const char *const sctp_conntrack_names[] = {
+-	"NONE",
+-	"CLOSED",
+-	"COOKIE_WAIT",
+-	"COOKIE_ECHOED",
+-	"ESTABLISHED",
+-	"SHUTDOWN_SENT",
+-	"SHUTDOWN_RECD",
+-	"SHUTDOWN_ACK_SENT",
+-	"HEARTBEAT_SENT",
+-	"HEARTBEAT_ACKED",
++	[SCTP_CONNTRACK_NONE]			= "NONE",
++	[SCTP_CONNTRACK_CLOSED]			= "CLOSED",
++	[SCTP_CONNTRACK_COOKIE_WAIT]		= "COOKIE_WAIT",
++	[SCTP_CONNTRACK_COOKIE_ECHOED]		= "COOKIE_ECHOED",
++	[SCTP_CONNTRACK_ESTABLISHED]		= "ESTABLISHED",
++	[SCTP_CONNTRACK_SHUTDOWN_SENT]		= "SHUTDOWN_SENT",
++	[SCTP_CONNTRACK_SHUTDOWN_RECD]		= "SHUTDOWN_RECD",
++	[SCTP_CONNTRACK_SHUTDOWN_ACK_SENT]	= "SHUTDOWN_ACK_SENT",
++	[SCTP_CONNTRACK_HEARTBEAT_SENT]		= "HEARTBEAT_SENT",
+ };
+ 
+ #define SECS  * HZ
+@@ -54,12 +48,11 @@ static const unsigned int sctp_timeouts[SCTP_CONNTRACK_MAX] = {
+ 	[SCTP_CONNTRACK_CLOSED]			= 10 SECS,
+ 	[SCTP_CONNTRACK_COOKIE_WAIT]		= 3 SECS,
+ 	[SCTP_CONNTRACK_COOKIE_ECHOED]		= 3 SECS,
+-	[SCTP_CONNTRACK_ESTABLISHED]		= 5 DAYS,
++	[SCTP_CONNTRACK_ESTABLISHED]		= 210 SECS,
+ 	[SCTP_CONNTRACK_SHUTDOWN_SENT]		= 300 SECS / 1000,
+ 	[SCTP_CONNTRACK_SHUTDOWN_RECD]		= 300 SECS / 1000,
+ 	[SCTP_CONNTRACK_SHUTDOWN_ACK_SENT]	= 3 SECS,
+ 	[SCTP_CONNTRACK_HEARTBEAT_SENT]		= 30 SECS,
+-	[SCTP_CONNTRACK_HEARTBEAT_ACKED]	= 210 SECS,
+ };
+ 
+ #define	SCTP_FLAG_HEARTBEAT_VTAG_FAILED	1
+@@ -73,7 +66,6 @@ static const unsigned int sctp_timeouts[SCTP_CONNTRACK_MAX] = {
+ #define	sSR SCTP_CONNTRACK_SHUTDOWN_RECD
+ #define	sSA SCTP_CONNTRACK_SHUTDOWN_ACK_SENT
+ #define	sHS SCTP_CONNTRACK_HEARTBEAT_SENT
+-#define	sHA SCTP_CONNTRACK_HEARTBEAT_ACKED
+ #define	sIV SCTP_CONNTRACK_MAX
+ 
+ /*
+@@ -96,9 +88,6 @@ SHUTDOWN_ACK_SENT - We have seen a SHUTDOWN_ACK chunk in the direction opposite
+ CLOSED            - We have seen a SHUTDOWN_COMPLETE chunk in the direction of
+ 		    the SHUTDOWN chunk. Connection is closed.
+ HEARTBEAT_SENT    - We have seen a HEARTBEAT in a new flow.
+-HEARTBEAT_ACKED   - We have seen a HEARTBEAT-ACK in the direction opposite to
+-		    that of the HEARTBEAT chunk. Secondary connection is
+-		    established.
+ */
+ 
+ /* TODO
+@@ -115,33 +104,33 @@ cookie echoed to closed.
+ static const u8 sctp_conntracks[2][11][SCTP_CONNTRACK_MAX] = {
+ 	{
+ /*	ORIGINAL	*/
+-/*                  sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA */
+-/* init         */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCW, sHA},
+-/* init_ack     */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA},
+-/* abort        */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL},
+-/* shutdown     */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA, sCL, sSS},
+-/* shutdown_ack */ {sSA, sCL, sCW, sCE, sES, sSA, sSA, sSA, sSA, sHA},
+-/* error        */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA},/* Can't have Stale cookie*/
+-/* cookie_echo  */ {sCL, sCL, sCE, sCE, sES, sSS, sSR, sSA, sCL, sHA},/* 5.2.4 - Big TODO */
+-/* cookie_ack   */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA},/* Can't come in orig dir */
+-/* shutdown_comp*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sCL, sCL, sHA},
+-/* heartbeat    */ {sHS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA},
+-/* heartbeat_ack*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA}
++/*                  sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS */
++/* init         */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCW},
++/* init_ack     */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},
++/* abort        */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL},
++/* shutdown     */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA, sCL},
++/* shutdown_ack */ {sSA, sCL, sCW, sCE, sES, sSA, sSA, sSA, sSA},
++/* error        */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't have Stale cookie*/
++/* cookie_echo  */ {sCL, sCL, sCE, sCE, sES, sSS, sSR, sSA, sCL},/* 5.2.4 - Big TODO */
++/* cookie_ack   */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't come in orig dir */
++/* shutdown_comp*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sCL, sCL},
++/* heartbeat    */ {sHS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
++/* heartbeat_ack*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
+ 	},
+ 	{
+ /*	REPLY	*/
+-/*                  sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA */
+-/* init         */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA},/* INIT in sCL Big TODO */
+-/* init_ack     */ {sIV, sCW, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA},
+-/* abort        */ {sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV, sCL},
+-/* shutdown     */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV, sSR},
+-/* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV, sHA},
+-/* error        */ {sIV, sCL, sCW, sCL, sES, sSS, sSR, sSA, sIV, sHA},
+-/* cookie_echo  */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA},/* Can't come in reply dir */
+-/* cookie_ack   */ {sIV, sCL, sCW, sES, sES, sSS, sSR, sSA, sIV, sHA},
+-/* shutdown_comp*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sCL, sIV, sHA},
+-/* heartbeat    */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA},
+-/* heartbeat_ack*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHA, sHA}
++/*                  sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS */
++/* init         */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV},/* INIT in sCL Big TODO */
++/* init_ack     */ {sIV, sCW, sCW, sCE, sES, sSS, sSR, sSA, sIV},
++/* abort        */ {sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV},
++/* shutdown     */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV},
++/* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV},
++/* error        */ {sIV, sCL, sCW, sCL, sES, sSS, sSR, sSA, sIV},
++/* cookie_echo  */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV},/* Can't come in reply dir */
++/* cookie_ack   */ {sIV, sCL, sCW, sES, sES, sSS, sSR, sSA, sIV},
++/* shutdown_comp*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sCL, sIV},
++/* heartbeat    */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
++/* heartbeat_ack*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sES},
+ 	}
+ };
+ 
+@@ -412,22 +401,29 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
+ 	for_each_sctp_chunk (skb, sch, _sch, offset, dataoff, count) {
+ 		/* Special cases of Verification tag check (Sec 8.5.1) */
+ 		if (sch->type == SCTP_CID_INIT) {
+-			/* Sec 8.5.1 (A) */
++			/* (A) vtag MUST be zero */
+ 			if (sh->vtag != 0)
+ 				goto out_unlock;
+ 		} else if (sch->type == SCTP_CID_ABORT) {
+-			/* Sec 8.5.1 (B) */
+-			if (sh->vtag != ct->proto.sctp.vtag[dir] &&
+-			    sh->vtag != ct->proto.sctp.vtag[!dir])
++			/* (B) vtag MUST match own vtag if T flag is unset OR
++			 * MUST match peer's vtag if T flag is set
++			 */
++			if ((!(sch->flags & SCTP_CHUNK_FLAG_T) &&
++			     sh->vtag != ct->proto.sctp.vtag[dir]) ||
++			    ((sch->flags & SCTP_CHUNK_FLAG_T) &&
++			     sh->vtag != ct->proto.sctp.vtag[!dir]))
+ 				goto out_unlock;
+ 		} else if (sch->type == SCTP_CID_SHUTDOWN_COMPLETE) {
+-			/* Sec 8.5.1 (C) */
+-			if (sh->vtag != ct->proto.sctp.vtag[dir] &&
+-			    sh->vtag != ct->proto.sctp.vtag[!dir] &&
+-			    sch->flags & SCTP_CHUNK_FLAG_T)
++			/* (C) vtag MUST match own vtag if T flag is unset OR
++			 * MUST match peer's vtag if T flag is set
++			 */
++			if ((!(sch->flags & SCTP_CHUNK_FLAG_T) &&
++			     sh->vtag != ct->proto.sctp.vtag[dir]) ||
++			    ((sch->flags & SCTP_CHUNK_FLAG_T) &&
++			     sh->vtag != ct->proto.sctp.vtag[!dir]))
+ 				goto out_unlock;
+ 		} else if (sch->type == SCTP_CID_COOKIE_ECHO) {
+-			/* Sec 8.5.1 (D) */
++			/* (D) vtag must be same as init_vtag as found in INIT_ACK */
+ 			if (sh->vtag != ct->proto.sctp.vtag[dir])
+ 				goto out_unlock;
+ 		} else if (sch->type == SCTP_CID_HEARTBEAT) {
+@@ -501,8 +497,12 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
+ 		}
+ 
+ 		ct->proto.sctp.state = new_state;
+-		if (old_state != new_state)
++		if (old_state != new_state) {
+ 			nf_conntrack_event_cache(IPCT_PROTOINFO, ct);
++			if (new_state == SCTP_CONNTRACK_ESTABLISHED &&
++			    !test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
++				nf_conntrack_event_cache(IPCT_ASSURED, ct);
++		}
+ 	}
+ 	spin_unlock_bh(&ct->lock);
+ 
+@@ -516,14 +516,6 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
+ 
+ 	nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[new_state]);
+ 
+-	if (old_state == SCTP_CONNTRACK_COOKIE_ECHOED &&
+-	    dir == IP_CT_DIR_REPLY &&
+-	    new_state == SCTP_CONNTRACK_ESTABLISHED) {
+-		pr_debug("Setting assured bit\n");
+-		set_bit(IPS_ASSURED_BIT, &ct->status);
+-		nf_conntrack_event_cache(IPCT_ASSURED, ct);
+-	}
+-
+ 	return NF_ACCEPT;
+ 
+ out_unlock:
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index 3f785bdfa942d..c1d02c0b4f005 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -1158,6 +1158,16 @@ int nf_conntrack_tcp_packet(struct nf_conn *ct,
+ 			nf_ct_kill_acct(ct, ctinfo, skb);
+ 			return NF_ACCEPT;
+ 		}
++
++		if (index == TCP_SYN_SET && old_state == TCP_CONNTRACK_SYN_SENT) {
++			/* do not renew timeout on SYN retransmit.
++			 *
++			 * Else port reuse by client or NAT middlebox can keep
++			 * entry alive indefinitely (including nat info).
++			 */
++			return NF_ACCEPT;
++		}
++
+ 		/* ESTABLISHED without SEEN_REPLY, i.e. mid-connection
+ 		 * pickup with loose=1. Avoid large ESTABLISHED timeout.
+ 		 */
+diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c
+index a7f88cdf3f87c..e12b52019a550 100644
+--- a/net/netfilter/nf_conntrack_standalone.c
++++ b/net/netfilter/nf_conntrack_standalone.c
+@@ -583,7 +583,6 @@ enum nf_ct_sysctl_index {
+ 	NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_SHUTDOWN_RECD,
+ 	NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_SHUTDOWN_ACK_SENT,
+ 	NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_HEARTBEAT_SENT,
+-	NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_HEARTBEAT_ACKED,
+ #endif
+ #ifdef CONFIG_NF_CT_PROTO_DCCP
+ 	NF_SYSCTL_CT_PROTO_TIMEOUT_DCCP_REQUEST,
+@@ -853,12 +852,6 @@ static struct ctl_table nf_ct_sysctl_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec_jiffies,
+ 	},
+-	[NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_HEARTBEAT_ACKED] = {
+-		.procname       = "nf_conntrack_sctp_timeout_heartbeat_acked",
+-		.maxlen         = sizeof(unsigned int),
+-		.mode           = 0644,
+-		.proc_handler   = proc_dointvec_jiffies,
+-	},
+ #endif
+ #ifdef CONFIG_NF_CT_PROTO_DCCP
+ 	[NF_SYSCTL_CT_PROTO_TIMEOUT_DCCP_REQUEST] = {
+@@ -987,7 +980,6 @@ static void nf_conntrack_standalone_init_sctp_sysctl(struct net *net,
+ 	XASSIGN(SHUTDOWN_RECD, sn);
+ 	XASSIGN(SHUTDOWN_ACK_SENT, sn);
+ 	XASSIGN(HEARTBEAT_SENT, sn);
+-	XASSIGN(HEARTBEAT_ACKED, sn);
+ #undef XASSIGN
+ #endif
+ }
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 94a5446c5eae8..4b9a499fe8f4d 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -38,10 +38,12 @@ static bool nft_rbtree_interval_start(const struct nft_rbtree_elem *rbe)
+ 	return !nft_rbtree_interval_end(rbe);
+ }
+ 
+-static bool nft_rbtree_equal(const struct nft_set *set, const void *this,
+-			     const struct nft_rbtree_elem *interval)
++static int nft_rbtree_cmp(const struct nft_set *set,
++			  const struct nft_rbtree_elem *e1,
++			  const struct nft_rbtree_elem *e2)
+ {
+-	return memcmp(this, nft_set_ext_key(&interval->ext), set->klen) == 0;
++	return memcmp(nft_set_ext_key(&e1->ext), nft_set_ext_key(&e2->ext),
++		      set->klen);
+ }
+ 
+ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
+@@ -52,7 +54,6 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ 	const struct nft_rbtree_elem *rbe, *interval = NULL;
+ 	u8 genmask = nft_genmask_cur(net);
+ 	const struct rb_node *parent;
+-	const void *this;
+ 	int d;
+ 
+ 	parent = rcu_dereference_raw(priv->root.rb_node);
+@@ -62,12 +63,11 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ 
+ 		rbe = rb_entry(parent, struct nft_rbtree_elem, node);
+ 
+-		this = nft_set_ext_key(&rbe->ext);
+-		d = memcmp(this, key, set->klen);
++		d = memcmp(nft_set_ext_key(&rbe->ext), key, set->klen);
+ 		if (d < 0) {
+ 			parent = rcu_dereference_raw(parent->rb_left);
+ 			if (interval &&
+-			    nft_rbtree_equal(set, this, interval) &&
++			    !nft_rbtree_cmp(set, rbe, interval) &&
+ 			    nft_rbtree_interval_end(rbe) &&
+ 			    nft_rbtree_interval_start(interval))
+ 				continue;
+@@ -214,154 +214,216 @@ static void *nft_rbtree_get(const struct net *net, const struct nft_set *set,
+ 	return rbe;
+ }
+ 
++static int nft_rbtree_gc_elem(const struct nft_set *__set,
++			      struct nft_rbtree *priv,
++			      struct nft_rbtree_elem *rbe)
++{
++	struct nft_set *set = (struct nft_set *)__set;
++	struct rb_node *prev = rb_prev(&rbe->node);
++	struct nft_rbtree_elem *rbe_prev;
++	struct nft_set_gc_batch *gcb;
++
++	gcb = nft_set_gc_batch_check(set, NULL, GFP_ATOMIC);
++	if (!gcb)
++		return -ENOMEM;
++
++	/* search for expired end interval coming before this element. */
++	do {
++		rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
++		if (nft_rbtree_interval_end(rbe_prev))
++			break;
++
++		prev = rb_prev(prev);
++	} while (prev != NULL);
++
++	rb_erase(&rbe_prev->node, &priv->root);
++	rb_erase(&rbe->node, &priv->root);
++	atomic_sub(2, &set->nelems);
++
++	nft_set_gc_batch_add(gcb, rbe);
++	nft_set_gc_batch_complete(gcb);
++
++	return 0;
++}
++
++static bool nft_rbtree_update_first(const struct nft_set *set,
++				    struct nft_rbtree_elem *rbe,
++				    struct rb_node *first)
++{
++	struct nft_rbtree_elem *first_elem;
++
++	first_elem = rb_entry(first, struct nft_rbtree_elem, node);
++	/* this element is closest to where the new element is to be inserted:
++	 * update the first element for the node list path.
++	 */
++	if (nft_rbtree_cmp(set, rbe, first_elem) < 0)
++		return true;
++
++	return false;
++}
++
+ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 			       struct nft_rbtree_elem *new,
+ 			       struct nft_set_ext **ext)
+ {
+-	bool overlap = false, dup_end_left = false, dup_end_right = false;
++	struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL;
++	struct rb_node *node, *parent, **p, *first = NULL;
+ 	struct nft_rbtree *priv = nft_set_priv(set);
+ 	u8 genmask = nft_genmask_next(net);
+-	struct nft_rbtree_elem *rbe;
+-	struct rb_node *parent, **p;
+-	int d;
++	int d, err;
+ 
+-	/* Detect overlaps as we descend the tree. Set the flag in these cases:
+-	 *
+-	 * a1. _ _ __>|  ?_ _ __|  (insert end before existing end)
+-	 * a2. _ _ ___|  ?_ _ _>|  (insert end after existing end)
+-	 * a3. _ _ ___? >|_ _ __|  (insert start before existing end)
+-	 *
+-	 * and clear it later on, as we eventually reach the points indicated by
+-	 * '?' above, in the cases described below. We'll always meet these
+-	 * later, locally, due to tree ordering, and overlaps for the intervals
+-	 * that are the closest together are always evaluated last.
+-	 *
+-	 * b1. _ _ __>|  !_ _ __|  (insert end before existing start)
+-	 * b2. _ _ ___|  !_ _ _>|  (insert end after existing start)
+-	 * b3. _ _ ___! >|_ _ __|  (insert start after existing end, as a leaf)
+-	 *            '--' no nodes falling in this range
+-	 * b4.          >|_ _   !  (insert start before existing start)
+-	 *
+-	 * Case a3. resolves to b3.:
+-	 * - if the inserted start element is the leftmost, because the '0'
+-	 *   element in the tree serves as end element
+-	 * - otherwise, if an existing end is found immediately to the left. If
+-	 *   there are existing nodes in between, we need to further descend the
+-	 *   tree before we can conclude the new start isn't causing an overlap
+-	 *
+-	 * or to b4., which, preceded by a3., means we already traversed one or
+-	 * more existing intervals entirely, from the right.
+-	 *
+-	 * For a new, rightmost pair of elements, we'll hit cases b3. and b2.,
+-	 * in that order.
+-	 *
+-	 * The flag is also cleared in two special cases:
+-	 *
+-	 * b5. |__ _ _!|<_ _ _   (insert start right before existing end)
+-	 * b6. |__ _ >|!__ _ _   (insert end right after existing start)
+-	 *
+-	 * which always happen as last step and imply that no further
+-	 * overlapping is possible.
+-	 *
+-	 * Another special case comes from the fact that start elements matching
+-	 * an already existing start element are allowed: insertion is not
+-	 * performed but we return -EEXIST in that case, and the error will be
+-	 * cleared by the caller if NLM_F_EXCL is not present in the request.
+-	 * This way, request for insertion of an exact overlap isn't reported as
+-	 * error to userspace if not desired.
+-	 *
+-	 * However, if the existing start matches a pre-existing start, but the
+-	 * end element doesn't match the corresponding pre-existing end element,
+-	 * we need to report a partial overlap. This is a local condition that
+-	 * can be noticed without need for a tracking flag, by checking for a
+-	 * local duplicated end for a corresponding start, from left and right,
+-	 * separately.
++	/* Descend the tree to search for an existing element greater than the
++	 * key value to insert that is greater than the new element. This is the
++	 * first element to walk the ordered elements to find possible overlap.
+ 	 */
+-
+ 	parent = NULL;
+ 	p = &priv->root.rb_node;
+ 	while (*p != NULL) {
+ 		parent = *p;
+ 		rbe = rb_entry(parent, struct nft_rbtree_elem, node);
+-		d = memcmp(nft_set_ext_key(&rbe->ext),
+-			   nft_set_ext_key(&new->ext),
+-			   set->klen);
++		d = nft_rbtree_cmp(set, rbe, new);
++
+ 		if (d < 0) {
+ 			p = &parent->rb_left;
+-
+-			if (nft_rbtree_interval_start(new)) {
+-				if (nft_rbtree_interval_end(rbe) &&
+-				    nft_set_elem_active(&rbe->ext, genmask) &&
+-				    !nft_set_elem_expired(&rbe->ext) && !*p)
+-					overlap = false;
+-			} else {
+-				if (dup_end_left && !*p)
+-					return -ENOTEMPTY;
+-
+-				overlap = nft_rbtree_interval_end(rbe) &&
+-					  nft_set_elem_active(&rbe->ext,
+-							      genmask) &&
+-					  !nft_set_elem_expired(&rbe->ext);
+-
+-				if (overlap) {
+-					dup_end_right = true;
+-					continue;
+-				}
+-			}
+ 		} else if (d > 0) {
+-			p = &parent->rb_right;
++			if (!first ||
++			    nft_rbtree_update_first(set, rbe, first))
++				first = &rbe->node;
+ 
+-			if (nft_rbtree_interval_end(new)) {
+-				if (dup_end_right && !*p)
+-					return -ENOTEMPTY;
+-
+-				overlap = nft_rbtree_interval_end(rbe) &&
+-					  nft_set_elem_active(&rbe->ext,
+-							      genmask) &&
+-					  !nft_set_elem_expired(&rbe->ext);
+-
+-				if (overlap) {
+-					dup_end_left = true;
+-					continue;
+-				}
+-			} else if (nft_set_elem_active(&rbe->ext, genmask) &&
+-				   !nft_set_elem_expired(&rbe->ext)) {
+-				overlap = nft_rbtree_interval_end(rbe);
+-			}
++			p = &parent->rb_right;
+ 		} else {
+-			if (nft_rbtree_interval_end(rbe) &&
+-			    nft_rbtree_interval_start(new)) {
++			if (nft_rbtree_interval_end(rbe))
+ 				p = &parent->rb_left;
+-
+-				if (nft_set_elem_active(&rbe->ext, genmask) &&
+-				    !nft_set_elem_expired(&rbe->ext))
+-					overlap = false;
+-			} else if (nft_rbtree_interval_start(rbe) &&
+-				   nft_rbtree_interval_end(new)) {
++			else
+ 				p = &parent->rb_right;
++		}
++	}
++
++	if (!first)
++		first = rb_first(&priv->root);
++
++	/* Detect overlap by going through the list of valid tree nodes.
++	 * Values stored in the tree are in reversed order, starting from
++	 * highest to lowest value.
++	 */
++	for (node = first; node != NULL; node = rb_next(node)) {
++		rbe = rb_entry(node, struct nft_rbtree_elem, node);
+ 
+-				if (nft_set_elem_active(&rbe->ext, genmask) &&
+-				    !nft_set_elem_expired(&rbe->ext))
+-					overlap = false;
+-			} else if (nft_set_elem_active(&rbe->ext, genmask) &&
+-				   !nft_set_elem_expired(&rbe->ext)) {
+-				*ext = &rbe->ext;
+-				return -EEXIST;
+-			} else {
+-				overlap = false;
+-				if (nft_rbtree_interval_end(rbe))
+-					p = &parent->rb_left;
+-				else
+-					p = &parent->rb_right;
++		if (!nft_set_elem_active(&rbe->ext, genmask))
++			continue;
++
++		/* perform garbage collection to avoid bogus overlap reports. */
++		if (nft_set_elem_expired(&rbe->ext)) {
++			err = nft_rbtree_gc_elem(set, priv, rbe);
++			if (err < 0)
++				return err;
++
++			continue;
++		}
++
++		d = nft_rbtree_cmp(set, rbe, new);
++		if (d == 0) {
++			/* Matching end element: no need to look for an
++			 * overlapping greater or equal element.
++			 */
++			if (nft_rbtree_interval_end(rbe)) {
++				rbe_le = rbe;
++				break;
++			}
++
++			/* first element that is greater or equal to key value. */
++			if (!rbe_ge) {
++				rbe_ge = rbe;
++				continue;
++			}
++
++			/* this is a closer more or equal element, update it. */
++			if (nft_rbtree_cmp(set, rbe_ge, new) != 0) {
++				rbe_ge = rbe;
++				continue;
++			}
++
++			/* element is equal to key value, make sure flags are
++			 * the same, an existing more or equal start element
++			 * must not be replaced by more or equal end element.
++			 */
++			if ((nft_rbtree_interval_start(new) &&
++			     nft_rbtree_interval_start(rbe_ge)) ||
++			    (nft_rbtree_interval_end(new) &&
++			     nft_rbtree_interval_end(rbe_ge))) {
++				rbe_ge = rbe;
++				continue;
+ 			}
++		} else if (d > 0) {
++			/* annotate element greater than the new element. */
++			rbe_ge = rbe;
++			continue;
++		} else if (d < 0) {
++			/* annotate element less than the new element. */
++			rbe_le = rbe;
++			break;
+ 		}
++	}
+ 
+-		dup_end_left = dup_end_right = false;
++	/* - new start element matching existing start element: full overlap
++	 *   reported as -EEXIST, cleared by caller if NLM_F_EXCL is not given.
++	 */
++	if (rbe_ge && !nft_rbtree_cmp(set, new, rbe_ge) &&
++	    nft_rbtree_interval_start(rbe_ge) == nft_rbtree_interval_start(new)) {
++		*ext = &rbe_ge->ext;
++		return -EEXIST;
+ 	}
+ 
+-	if (overlap)
++	/* - new end element matching existing end element: full overlap
++	 *   reported as -EEXIST, cleared by caller if NLM_F_EXCL is not given.
++	 */
++	if (rbe_le && !nft_rbtree_cmp(set, new, rbe_le) &&
++	    nft_rbtree_interval_end(rbe_le) == nft_rbtree_interval_end(new)) {
++		*ext = &rbe_le->ext;
++		return -EEXIST;
++	}
++
++	/* - new start element with existing closest, less or equal key value
++	 *   being a start element: partial overlap, reported as -ENOTEMPTY.
++	 *   Anonymous sets allow for two consecutive start element since they
++	 *   are constant, skip them to avoid bogus overlap reports.
++	 */
++	if (!nft_set_is_anonymous(set) && rbe_le &&
++	    nft_rbtree_interval_start(rbe_le) && nft_rbtree_interval_start(new))
++		return -ENOTEMPTY;
++
++	/* - new end element with existing closest, less or equal key value
++	 *   being a end element: partial overlap, reported as -ENOTEMPTY.
++	 */
++	if (rbe_le &&
++	    nft_rbtree_interval_end(rbe_le) && nft_rbtree_interval_end(new))
+ 		return -ENOTEMPTY;
+ 
++	/* - new end element with existing closest, greater or equal key value
++	 *   being an end element: partial overlap, reported as -ENOTEMPTY
++	 */
++	if (rbe_ge &&
++	    nft_rbtree_interval_end(rbe_ge) && nft_rbtree_interval_end(new))
++		return -ENOTEMPTY;
++
++	/* Accepted element: pick insertion point depending on key value */
++	parent = NULL;
++	p = &priv->root.rb_node;
++	while (*p != NULL) {
++		parent = *p;
++		rbe = rb_entry(parent, struct nft_rbtree_elem, node);
++		d = nft_rbtree_cmp(set, rbe, new);
++
++		if (d < 0)
++			p = &parent->rb_left;
++		else if (d > 0)
++			p = &parent->rb_right;
++		else if (nft_rbtree_interval_end(rbe))
++			p = &parent->rb_left;
++		else
++			p = &parent->rb_right;
++	}
++
+ 	rb_link_node_rcu(&new->node, parent, p);
+ 	rb_insert_color(&new->node, &priv->root);
+ 	return 0;
+@@ -500,23 +562,37 @@ static void nft_rbtree_gc(struct work_struct *work)
+ 	struct nft_rbtree *priv;
+ 	struct rb_node *node;
+ 	struct nft_set *set;
++	struct net *net;
++	u8 genmask;
+ 
+ 	priv = container_of(work, struct nft_rbtree, gc_work.work);
+ 	set  = nft_set_container_of(priv);
++	net  = read_pnet(&set->net);
++	genmask = nft_genmask_cur(net);
+ 
+ 	write_lock_bh(&priv->lock);
+ 	write_seqcount_begin(&priv->count);
+ 	for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) {
+ 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
+ 
++		if (!nft_set_elem_active(&rbe->ext, genmask))
++			continue;
++
++		/* elements are reversed in the rbtree for historical reasons,
++		 * from highest to lowest value, that is why end element is
++		 * always visited before the start element.
++		 */
+ 		if (nft_rbtree_interval_end(rbe)) {
+ 			rbe_end = rbe;
+ 			continue;
+ 		}
+ 		if (!nft_set_elem_expired(&rbe->ext))
+ 			continue;
+-		if (nft_set_elem_mark_busy(&rbe->ext))
++
++		if (nft_set_elem_mark_busy(&rbe->ext)) {
++			rbe_end = NULL;
+ 			continue;
++		}
+ 
+ 		if (rbe_prev) {
+ 			rb_erase(&rbe_prev->node, &priv->root);
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index d96a610929d9a..2104fbdd63d29 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -570,7 +570,9 @@ static int netlink_insert(struct sock *sk, u32 portid)
+ 	if (nlk_sk(sk)->bound)
+ 		goto err;
+ 
+-	nlk_sk(sk)->portid = portid;
++	/* portid can be read locklessly from netlink_getname(). */
++	WRITE_ONCE(nlk_sk(sk)->portid, portid);
++
+ 	sock_hold(sk);
+ 
+ 	err = __netlink_insert(table, sk);
+@@ -1079,9 +1081,11 @@ static int netlink_connect(struct socket *sock, struct sockaddr *addr,
+ 		return -EINVAL;
+ 
+ 	if (addr->sa_family == AF_UNSPEC) {
+-		sk->sk_state	= NETLINK_UNCONNECTED;
+-		nlk->dst_portid	= 0;
+-		nlk->dst_group  = 0;
++		/* paired with READ_ONCE() in netlink_getsockbyportid() */
++		WRITE_ONCE(sk->sk_state, NETLINK_UNCONNECTED);
++		/* dst_portid and dst_group can be read locklessly */
++		WRITE_ONCE(nlk->dst_portid, 0);
++		WRITE_ONCE(nlk->dst_group, 0);
+ 		return 0;
+ 	}
+ 	if (addr->sa_family != AF_NETLINK)
+@@ -1102,9 +1106,11 @@ static int netlink_connect(struct socket *sock, struct sockaddr *addr,
+ 		err = netlink_autobind(sock);
+ 
+ 	if (err == 0) {
+-		sk->sk_state	= NETLINK_CONNECTED;
+-		nlk->dst_portid = nladdr->nl_pid;
+-		nlk->dst_group  = ffs(nladdr->nl_groups);
++		/* paired with READ_ONCE() in netlink_getsockbyportid() */
++		WRITE_ONCE(sk->sk_state, NETLINK_CONNECTED);
++		/* dst_portid and dst_group can be read locklessly */
++		WRITE_ONCE(nlk->dst_portid, nladdr->nl_pid);
++		WRITE_ONCE(nlk->dst_group, ffs(nladdr->nl_groups));
+ 	}
+ 
+ 	return err;
+@@ -1121,10 +1127,12 @@ static int netlink_getname(struct socket *sock, struct sockaddr *addr,
+ 	nladdr->nl_pad = 0;
+ 
+ 	if (peer) {
+-		nladdr->nl_pid = nlk->dst_portid;
+-		nladdr->nl_groups = netlink_group_mask(nlk->dst_group);
++		/* Paired with WRITE_ONCE() in netlink_connect() */
++		nladdr->nl_pid = READ_ONCE(nlk->dst_portid);
++		nladdr->nl_groups = netlink_group_mask(READ_ONCE(nlk->dst_group));
+ 	} else {
+-		nladdr->nl_pid = nlk->portid;
++		/* Paired with WRITE_ONCE() in netlink_insert() */
++		nladdr->nl_pid = READ_ONCE(nlk->portid);
+ 		netlink_lock_table();
+ 		nladdr->nl_groups = nlk->groups ? nlk->groups[0] : 0;
+ 		netlink_unlock_table();
+@@ -1151,8 +1159,9 @@ static struct sock *netlink_getsockbyportid(struct sock *ssk, u32 portid)
+ 
+ 	/* Don't bother queuing skb if kernel socket has no input function */
+ 	nlk = nlk_sk(sock);
+-	if (sock->sk_state == NETLINK_CONNECTED &&
+-	    nlk->dst_portid != nlk_sk(ssk)->portid) {
++	/* dst_portid and sk_state can be changed in netlink_connect() */
++	if (READ_ONCE(sock->sk_state) == NETLINK_CONNECTED &&
++	    READ_ONCE(nlk->dst_portid) != nlk_sk(ssk)->portid) {
+ 		sock_put(sock);
+ 		return ERR_PTR(-ECONNREFUSED);
+ 	}
+@@ -1888,8 +1897,9 @@ static int netlink_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 			goto out;
+ 		netlink_skb_flags |= NETLINK_SKB_DST;
+ 	} else {
+-		dst_portid = nlk->dst_portid;
+-		dst_group = nlk->dst_group;
++		/* Paired with WRITE_ONCE() in netlink_connect() */
++		dst_portid = READ_ONCE(nlk->dst_portid);
++		dst_group = READ_ONCE(nlk->dst_group);
+ 	}
+ 
+ 	/* Paired with WRITE_ONCE() in netlink_insert() */
+diff --git a/net/netrom/nr_timer.c b/net/netrom/nr_timer.c
+index a8da88db7893f..4e7c968cde2dc 100644
+--- a/net/netrom/nr_timer.c
++++ b/net/netrom/nr_timer.c
+@@ -121,6 +121,7 @@ static void nr_heartbeat_expiry(struct timer_list *t)
+ 		   is accepted() it isn't 'dead' so doesn't get removed. */
+ 		if (sock_flag(sk, SOCK_DESTROY) ||
+ 		    (sk->sk_state == TCP_LISTEN && sock_flag(sk, SOCK_DEAD))) {
++			sock_hold(sk);
+ 			bh_unlock_sock(sk);
+ 			nr_destroy_socket(sk);
+ 			goto out;
+diff --git a/net/nfc/llcp_core.c b/net/nfc/llcp_core.c
+index cc997518f79d1..edadebb3efd2a 100644
+--- a/net/nfc/llcp_core.c
++++ b/net/nfc/llcp_core.c
+@@ -159,6 +159,7 @@ static void local_cleanup(struct nfc_llcp_local *local)
+ 	cancel_work_sync(&local->rx_work);
+ 	cancel_work_sync(&local->timeout_work);
+ 	kfree_skb(local->rx_pending);
++	local->rx_pending = NULL;
+ 	del_timer_sync(&local->sdreq_timer);
+ 	cancel_work_sync(&local->sdreq_timeout_work);
+ 	nfc_llcp_free_sdp_tlv_list(&local->pending_sdreqs);
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 7f33b31c7b8bd..e25fe44899ffb 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1621,6 +1621,7 @@ static void taprio_reset(struct Qdisc *sch)
+ 	int i;
+ 
+ 	hrtimer_cancel(&q->advance_timer);
++
+ 	if (q->qdiscs) {
+ 		for (i = 0; i < dev->num_tx_queues; i++)
+ 			if (q->qdiscs[i])
+@@ -1642,6 +1643,7 @@ static void taprio_destroy(struct Qdisc *sch)
+ 	 * happens in qdisc_create(), after taprio_init() has been called.
+ 	 */
+ 	hrtimer_cancel(&q->advance_timer);
++	qdisc_synchronize(sch);
+ 
+ 	taprio_disable_offload(dev, q, NULL);
+ 
+diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
+index 59e653b528b1f..6b95d3ba8fe1c 100644
+--- a/net/sctp/bind_addr.c
++++ b/net/sctp/bind_addr.c
+@@ -73,6 +73,12 @@ int sctp_bind_addr_copy(struct net *net, struct sctp_bind_addr *dest,
+ 		}
+ 	}
+ 
++	/* If somehow no addresses were found that can be used with this
++	 * scope, it's an error.
++	 */
++	if (list_empty(&dest->address_list))
++		error = -ENETUNREACH;
++
+ out:
+ 	if (error)
+ 		sctp_bind_addr_clean(dest);
+diff --git a/scripts/tracing/ftrace-bisect.sh b/scripts/tracing/ftrace-bisect.sh
+index 926701162bc83..bb4f59262bbe9 100755
+--- a/scripts/tracing/ftrace-bisect.sh
++++ b/scripts/tracing/ftrace-bisect.sh
+@@ -12,7 +12,7 @@
+ #   (note, if this is a problem with function_graph tracing, then simply
+ #    replace "function" with "function_graph" in the following steps).
+ #
+-#  # cd /sys/kernel/debug/tracing
++#  # cd /sys/kernel/tracing
+ #  # echo schedule > set_ftrace_filter
+ #  # echo function > current_tracer
+ #
+@@ -20,22 +20,40 @@
+ #
+ #  # echo nop > current_tracer
+ #
+-#  # cat available_filter_functions > ~/full-file
++# Starting with v5.1 this can be done with numbers, making it much faster:
++#
++# The old (slow) way, for kernels before v5.1.
++#
++# [old-way] # cat available_filter_functions > ~/full-file
++#
++# [old-way] *** Note ***  this process will take several minutes to update the
++# [old-way] filters. Setting multiple functions is an O(n^2) operation, and we
++# [old-way] are dealing with thousands of functions. So go have coffee, talk
++# [old-way] with your coworkers, read facebook. And eventually, this operation
++# [old-way] will end.
++#
++# The new way (using numbers) is an O(n) operation, and usually takes less than a second.
++#
++# seq `wc -l available_filter_functions | cut -d' ' -f1` > ~/full-file
++#
++# This will create a sequence of numbers that match the functions in
++# available_filter_functions, and when echoing in a number into the
++# set_ftrace_filter file, it will enable the corresponding function in
++# O(1) time. Making enabling all functions O(n) where n is the number of
++# functions to enable.
++#
++# For either the new or old way, the rest of the operations remain the same.
++#
+ #  # ftrace-bisect ~/full-file ~/test-file ~/non-test-file
+ #  # cat ~/test-file > set_ftrace_filter
+ #
+-# *** Note *** this will take several minutes. Setting multiple functions is
+-# an O(n^2) operation, and we are dealing with thousands of functions. So go
+-# have  coffee, talk with your coworkers, read facebook. And eventually, this
+-# operation will end.
+-#
+ #  # echo function > current_tracer
+ #
+ # If it crashes, we know that ~/test-file has a bad function.
+ #
+ #   Reboot back to test kernel.
+ #
+-#     # cd /sys/kernel/debug/tracing
++#     # cd /sys/kernel/tracing
+ #     # mv ~/test-file ~/full-file
+ #
+ # If it didn't crash.
+diff --git a/security/tomoyo/Makefile b/security/tomoyo/Makefile
+index cca5a3012fee2..221eaadffb09c 100644
+--- a/security/tomoyo/Makefile
++++ b/security/tomoyo/Makefile
+@@ -10,7 +10,7 @@ endef
+ quiet_cmd_policy  = POLICY  $@
+       cmd_policy  = ($(call do_policy,profile); $(call do_policy,exception_policy); $(call do_policy,domain_policy); $(call do_policy,manager); $(call do_policy,stat)) >$@
+ 
+-$(obj)/builtin-policy.h: $(wildcard $(obj)/policy/*.conf $(src)/policy/*.conf.default) FORCE
++$(obj)/builtin-policy.h: $(wildcard $(obj)/policy/*.conf $(srctree)/$(src)/policy/*.conf.default) FORCE
+ 	$(call if_changed,policy)
+ 
+ $(obj)/common.o: $(obj)/builtin-policy.h
+diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c
+index 7cd14d6b9436a..9a756d0a60320 100644
+--- a/sound/soc/fsl/fsl-asoc-card.c
++++ b/sound/soc/fsl/fsl-asoc-card.c
+@@ -117,11 +117,11 @@ static const struct snd_soc_dapm_route audio_map[] = {
+ 
+ static const struct snd_soc_dapm_route audio_map_ac97[] = {
+ 	/* 1st half -- Normal DAPM routes */
+-	{"Playback",  NULL, "AC97 Playback"},
+-	{"AC97 Capture",  NULL, "Capture"},
++	{"AC97 Playback",  NULL, "CPU AC97 Playback"},
++	{"CPU AC97 Capture",  NULL, "AC97 Capture"},
+ 	/* 2nd half -- ASRC DAPM routes */
+-	{"AC97 Playback",  NULL, "ASRC-Playback"},
+-	{"ASRC-Capture",  NULL, "AC97 Capture"},
++	{"CPU AC97 Playback",  NULL, "ASRC-Playback"},
++	{"ASRC-Capture",  NULL, "CPU AC97 Capture"},
+ };
+ 
+ static const struct snd_soc_dapm_route audio_map_tx[] = {
+diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
+index 6c794605e33c9..97f83c63e7652 100644
+--- a/sound/soc/fsl/fsl_micfil.c
++++ b/sound/soc/fsl/fsl_micfil.c
+@@ -87,21 +87,21 @@ static DECLARE_TLV_DB_SCALE(gain_tlv, 0, 100, 0);
+ 
+ static const struct snd_kcontrol_new fsl_micfil_snd_controls[] = {
+ 	SOC_SINGLE_SX_TLV("CH0 Volume", REG_MICFIL_OUT_CTRL,
+-			  MICFIL_OUTGAIN_CHX_SHIFT(0), 0xF, 0x7, gain_tlv),
++			  MICFIL_OUTGAIN_CHX_SHIFT(0), 0x8, 0xF, gain_tlv),
+ 	SOC_SINGLE_SX_TLV("CH1 Volume", REG_MICFIL_OUT_CTRL,
+-			  MICFIL_OUTGAIN_CHX_SHIFT(1), 0xF, 0x7, gain_tlv),
++			  MICFIL_OUTGAIN_CHX_SHIFT(1), 0x8, 0xF, gain_tlv),
+ 	SOC_SINGLE_SX_TLV("CH2 Volume", REG_MICFIL_OUT_CTRL,
+-			  MICFIL_OUTGAIN_CHX_SHIFT(2), 0xF, 0x7, gain_tlv),
++			  MICFIL_OUTGAIN_CHX_SHIFT(2), 0x8, 0xF, gain_tlv),
+ 	SOC_SINGLE_SX_TLV("CH3 Volume", REG_MICFIL_OUT_CTRL,
+-			  MICFIL_OUTGAIN_CHX_SHIFT(3), 0xF, 0x7, gain_tlv),
++			  MICFIL_OUTGAIN_CHX_SHIFT(3), 0x8, 0xF, gain_tlv),
+ 	SOC_SINGLE_SX_TLV("CH4 Volume", REG_MICFIL_OUT_CTRL,
+-			  MICFIL_OUTGAIN_CHX_SHIFT(4), 0xF, 0x7, gain_tlv),
++			  MICFIL_OUTGAIN_CHX_SHIFT(4), 0x8, 0xF, gain_tlv),
+ 	SOC_SINGLE_SX_TLV("CH5 Volume", REG_MICFIL_OUT_CTRL,
+-			  MICFIL_OUTGAIN_CHX_SHIFT(5), 0xF, 0x7, gain_tlv),
++			  MICFIL_OUTGAIN_CHX_SHIFT(5), 0x8, 0xF, gain_tlv),
+ 	SOC_SINGLE_SX_TLV("CH6 Volume", REG_MICFIL_OUT_CTRL,
+-			  MICFIL_OUTGAIN_CHX_SHIFT(6), 0xF, 0x7, gain_tlv),
++			  MICFIL_OUTGAIN_CHX_SHIFT(6), 0x8, 0xF, gain_tlv),
+ 	SOC_SINGLE_SX_TLV("CH7 Volume", REG_MICFIL_OUT_CTRL,
+-			  MICFIL_OUTGAIN_CHX_SHIFT(7), 0xF, 0x7, gain_tlv),
++			  MICFIL_OUTGAIN_CHX_SHIFT(7), 0x8, 0xF, gain_tlv),
+ 	SOC_ENUM_EXT("MICFIL Quality Select",
+ 		     fsl_micfil_quality_enum,
+ 		     snd_soc_get_enum_double, snd_soc_put_enum_double),
+diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
+index 1d774c876c52e..94229ce1a30ef 100644
+--- a/sound/soc/fsl/fsl_ssi.c
++++ b/sound/soc/fsl/fsl_ssi.c
+@@ -1161,14 +1161,14 @@ static struct snd_soc_dai_driver fsl_ssi_ac97_dai = {
+ 	.symmetric_channels = 1,
+ 	.probe = fsl_ssi_dai_probe,
+ 	.playback = {
+-		.stream_name = "AC97 Playback",
++		.stream_name = "CPU AC97 Playback",
+ 		.channels_min = 2,
+ 		.channels_max = 2,
+ 		.rates = SNDRV_PCM_RATE_8000_48000,
+ 		.formats = SNDRV_PCM_FMTBIT_S16 | SNDRV_PCM_FMTBIT_S20,
+ 	},
+ 	.capture = {
+-		.stream_name = "AC97 Capture",
++		.stream_name = "CPU AC97 Capture",
+ 		.channels_min = 2,
+ 		.channels_max = 2,
+ 		.rates = SNDRV_PCM_RATE_48000,
+diff --git a/tools/gpio/gpio-event-mon.c b/tools/gpio/gpio-event-mon.c
+index 84ae1039b0a87..367c10636890a 100644
+--- a/tools/gpio/gpio-event-mon.c
++++ b/tools/gpio/gpio-event-mon.c
+@@ -86,6 +86,7 @@ int monitor_device(const char *device_name,
+ 			gpiotools_test_bit(values.bits, i));
+ 	}
+ 
++	i = 0;
+ 	while (1) {
+ 		struct gpio_v2_line_event event;
+ 
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 700984e7f5ba1..985bcc5cea8a4 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -168,6 +168,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
+ 		"panic",
+ 		"do_exit",
+ 		"do_task_dead",
++		"make_task_dead",
+ 		"__module_put_and_exit",
+ 		"complete_and_exit",
+ 		"__reiserfs_panic",
+@@ -175,7 +176,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
+ 		"fortify_panic",
+ 		"usercopy_abort",
+ 		"machine_real_restart",
+-		"rewind_stack_do_exit",
++		"rewind_stack_and_make_dead",
+ 		"kunit_try_catch_throw",
+ 		"xen_start_kernel",
+ 		"cpu_bringup_and_idle",
+diff --git a/tools/testing/selftests/bpf/prog_tests/jeq_infer_not_null.c b/tools/testing/selftests/bpf/prog_tests/jeq_infer_not_null.c
+deleted file mode 100644
+index 3add34df57678..0000000000000
+--- a/tools/testing/selftests/bpf/prog_tests/jeq_infer_not_null.c
++++ /dev/null
+@@ -1,9 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-
+-#include <test_progs.h>
+-#include "jeq_infer_not_null_fail.skel.h"
+-
+-void test_jeq_infer_not_null(void)
+-{
+-	RUN_TESTS(jeq_infer_not_null_fail);
+-}
+diff --git a/tools/testing/selftests/bpf/progs/jeq_infer_not_null_fail.c b/tools/testing/selftests/bpf/progs/jeq_infer_not_null_fail.c
+deleted file mode 100644
+index f46965053acb2..0000000000000
+--- a/tools/testing/selftests/bpf/progs/jeq_infer_not_null_fail.c
++++ /dev/null
+@@ -1,42 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-
+-#include "vmlinux.h"
+-#include <bpf/bpf_helpers.h>
+-#include "bpf_misc.h"
+-
+-char _license[] SEC("license") = "GPL";
+-
+-struct {
+-	__uint(type, BPF_MAP_TYPE_HASH);
+-	__uint(max_entries, 1);
+-	__type(key, u64);
+-	__type(value, u64);
+-} m_hash SEC(".maps");
+-
+-SEC("?raw_tp")
+-__failure __msg("R8 invalid mem access 'map_value_or_null")
+-int jeq_infer_not_null_ptr_to_btfid(void *ctx)
+-{
+-	struct bpf_map *map = (struct bpf_map *)&m_hash;
+-	struct bpf_map *inner_map = map->inner_map_meta;
+-	u64 key = 0, ret = 0, *val;
+-
+-	val = bpf_map_lookup_elem(map, &key);
+-	/* Do not mark ptr as non-null if one of them is
+-	 * PTR_TO_BTF_ID (R9), reject because of invalid
+-	 * access to map value (R8).
+-	 *
+-	 * Here, we need to inline those insns to access
+-	 * R8 directly, since compiler may use other reg
+-	 * once it figures out val==inner_map.
+-	 */
+-	asm volatile("r8 = %[val];\n"
+-		     "r9 = %[inner_map];\n"
+-		     "if r8 != r9 goto +1;\n"
+-		     "%[ret] = *(u64 *)(r8 +0);\n"
+-		     : [ret] "+r"(ret)
+-		     : [inner_map] "r"(inner_map), [val] "r"(val)
+-		     : "r8", "r9");
+-
+-	return ret;
+-}
+diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic_event_syntax_errors.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic_event_syntax_errors.tc
+index 955e3ceea44b5..ada594fe16cb3 100644
+--- a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic_event_syntax_errors.tc
++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic_event_syntax_errors.tc
+@@ -1,38 +1,19 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+ # description: event trigger - test synthetic_events syntax parser errors
+-# requires: synthetic_events error_log "char name[]' >> synthetic_events":README
++# requires: synthetic_events error_log
+ 
+ check_error() { # command-with-error-pos-by-^
+     ftrace_errlog_check 'synthetic_events' "$1" 'synthetic_events'
+ }
+ 
+-check_dyn_error() { # command-with-error-pos-by-^
+-    ftrace_errlog_check 'synthetic_events' "$1" 'dynamic_events'
+-}
+-
+ check_error 'myevent ^chr arg'			# INVALID_TYPE
+-check_error 'myevent ^unsigned arg'		# INCOMPLETE_TYPE
+-
+-check_error 'myevent char ^str]; int v'		# BAD_NAME
+-check_error '^mye-vent char str[]'		# BAD_NAME
+-check_error 'myevent char ^st-r[]'		# BAD_NAME
+-
+-check_error 'myevent char str;^[]'		# INVALID_FIELD
+-check_error 'myevent char str; ^int'		# INVALID_FIELD
+-
+-check_error 'myevent char ^str[; int v'		# INVALID_ARRAY_SPEC
+-check_error 'myevent char ^str[kdjdk]'		# INVALID_ARRAY_SPEC
+-check_error 'myevent char ^str[257]'		# INVALID_ARRAY_SPEC
+-
+-check_error '^mye;vent char str[]'		# INVALID_CMD
+-check_error '^myevent ; char str[]'		# INVALID_CMD
+-check_error '^myevent; char str[]'		# INVALID_CMD
+-check_error '^myevent ;char str[]'		# INVALID_CMD
+-check_error '^; char str[]'			# INVALID_CMD
+-check_error '^;myevent char str[]'		# INVALID_CMD
+-check_error '^myevent'				# INVALID_CMD
+-
+-check_dyn_error '^s:junk/myevent char str['	# INVALID_DYN_CMD
++check_error 'myevent ^char str[];; int v'	# INVALID_TYPE
++check_error 'myevent char ^str]; int v'		# INVALID_NAME
++check_error 'myevent char ^str;[]'		# INVALID_NAME
++check_error 'myevent ^char str[; int v'		# INVALID_TYPE
++check_error '^mye;vent char str[]'		# BAD_NAME
++check_error 'myevent char str[]; ^int'		# INVALID_FIELD
++check_error '^myevent'				# INCOMPLETE_CMD
+ 
+ exit 0


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-02-02 19:11 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-02-02 19:11 UTC (permalink / raw
  To: gentoo-commits

commit:     8050d2167624c5e61f5d951c876dd6f92d6fee96
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Feb  2 19:10:59 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Feb  2 19:10:59 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8050d216

gcc-plugins: drop -std=gnu++11 to fix GCC 13 build

See: https://lore.kernel.org/all/20230201230009.2252783-1-sam <AT> gentoo.org/

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                                   |  4 ++++
 ...c-plugins-drop-std-gnu-plus-plus-to-fix-GCC-13-build.patch | 11 +++++++++++
 2 files changed, 15 insertions(+)

diff --git a/0000_README b/0000_README
index ae58f6d6..3bb96c32 100644
--- a/0000_README
+++ b/0000_README
@@ -727,6 +727,10 @@ Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL
 
+Patch:  2940_gcc-plugins-drop-std-gnu-plus-plus-to-fix-GCC-13-build.patch
+From:   https://lore.kernel.org/all/20230201230009.2252783-1-sam@gentoo.org/
+Desc:   gcc-plugins: drop -std=gnu++11 to fix GCC 13 build
+
 Patch:  3000_Support-printing-firmware-info.patch
 From:   https://bugs.gentoo.org/732852
 Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev

diff --git a/2940_gcc-plugins-drop-std-gnu-plus-plus-to-fix-GCC-13-build.patch b/2940_gcc-plugins-drop-std-gnu-plus-plus-to-fix-GCC-13-build.patch
new file mode 100644
index 00000000..2cce544a
--- /dev/null
+++ b/2940_gcc-plugins-drop-std-gnu-plus-plus-to-fix-GCC-13-build.patch
@@ -0,0 +1,11 @@
+--- a/scripts/gcc-plugins/Makefile	2023-02-02 14:09:24.615360391 -0500
++++ b/scripts/gcc-plugins/Makefile	2023-02-02 14:09:51.422139879 -0500
+@@ -22,7 +22,7 @@ always-y += $(GCC_PLUGIN)
+ GCC_PLUGINS_DIR = $(shell $(CC) -print-file-name=plugin)
+ 
+ plugin_cxxflags	= -Wp,-MMD,$(depfile) $(KBUILD_HOSTCXXFLAGS) -fPIC \
+-		   -I $(GCC_PLUGINS_DIR)/include -I $(obj) -std=gnu++11 \
++			 -I $(GCC_PLUGINS_DIR)/include -I $(obj) \
+ 		   -fno-rtti -fno-exceptions -fasynchronous-unwind-tables \
+ 		   -ggdb -Wno-narrowing -Wno-unused-variable \
+ 		   -Wno-format-diag


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-02-06 12:47 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-02-06 12:47 UTC (permalink / raw
  To: gentoo-commits

commit:     437036d1af736c254ed24cae3357fb729a898ddd
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Feb  6 12:47:34 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Feb  6 12:47:34 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=437036d1

Linux patch 5.10.167

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 +
 1166_linux-5.10.167.patch | 203 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 207 insertions(+)

diff --git a/0000_README b/0000_README
index 3bb96c32..99a312cb 100644
--- a/0000_README
+++ b/0000_README
@@ -707,6 +707,10 @@ Patch:  1165_linux-5.10.166.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.166
 
+Patch:  1166_linux-5.10.167.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.167
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1166_linux-5.10.167.patch b/1166_linux-5.10.167.patch
new file mode 100644
index 00000000..989d3996
--- /dev/null
+++ b/1166_linux-5.10.167.patch
@@ -0,0 +1,203 @@
+diff --git a/Makefile b/Makefile
+index efdfb40a82abb..84b78e657357c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 166
++SUBLEVEL = 167
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx53-ppd.dts b/arch/arm/boot/dts/imx53-ppd.dts
+index 006fbd7f54322..54e39db447c4d 100644
+--- a/arch/arm/boot/dts/imx53-ppd.dts
++++ b/arch/arm/boot/dts/imx53-ppd.dts
+@@ -487,7 +487,7 @@
+ 	scl-gpios = <&gpio3 21 GPIO_ACTIVE_HIGH>;
+ 	status = "okay";
+ 
+-	i2c-switch@70 {
++	i2c-mux@70 {
+ 		compatible = "nxp,pca9547";
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm/boot/dts/vf610-zii-dev-rev-b.dts b/arch/arm/boot/dts/vf610-zii-dev-rev-b.dts
+index 6f1e0f0d4f0ae..073f5d196ca9b 100644
+--- a/arch/arm/boot/dts/vf610-zii-dev-rev-b.dts
++++ b/arch/arm/boot/dts/vf610-zii-dev-rev-b.dts
+@@ -345,7 +345,7 @@
+ };
+ 
+ &i2c2 {
+-	tca9548@70 {
++	i2c-mux@70 {
+ 		compatible = "nxp,pca9548";
+ 		pinctrl-0 = <&pinctrl_i2c_mux_reset>;
+ 		pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/vf610-zii-dev-rev-c.dts b/arch/arm/boot/dts/vf610-zii-dev-rev-c.dts
+index de79dcfd32e62..ba2001f373158 100644
+--- a/arch/arm/boot/dts/vf610-zii-dev-rev-c.dts
++++ b/arch/arm/boot/dts/vf610-zii-dev-rev-c.dts
+@@ -340,7 +340,7 @@
+ };
+ 
+ &i2c2 {
+-	tca9548@70 {
++	i2c-mux@70 {
+ 		compatible = "nxp,pca9548";
+ 		pinctrl-0 = <&pinctrl_i2c_mux_reset>;
+ 		pinctrl-names = "default";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-thor96.dts b/arch/arm64/boot/dts/freescale/imx8mq-thor96.dts
+index 5d5aa6537225f..6e6182709d220 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-thor96.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mq-thor96.dts
+@@ -339,7 +339,7 @@
+ 	bus-width = <4>;
+ 	non-removable;
+ 	no-sd;
+-	no-emmc;
++	no-mmc;
+ 	status = "okay";
+ 
+ 	brcmf: wifi@1 {
+@@ -359,7 +359,7 @@
+ 	cd-gpios = <&gpio2 12 GPIO_ACTIVE_LOW>;
+ 	bus-width = <4>;
+ 	no-sdio;
+-	no-emmc;
++	no-mmc;
+ 	disable-wp;
+ 	status = "okay";
+ };
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 484c6b2dd264e..c623632c1cda0 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1370,6 +1370,10 @@ retry:
+ 		list_for_each_entry_reverse(blkg, &q->blkg_list, q_node)
+ 			pol->pd_init_fn(blkg->pd[pol->plid]);
+ 
++	if (pol->pd_online_fn)
++		list_for_each_entry_reverse(blkg, &q->blkg_list, q_node)
++			pol->pd_online_fn(blkg->pd[pol->plid]);
++
+ 	__set_bit(pol->plid, q->blkcg_pols);
+ 	ret = 0;
+ 
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index e5dd87ddc6b34..59781e765e0e2 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -536,10 +536,27 @@ static void wait_for_freeze(void)
+ 	/* No delay is needed if we are in guest */
+ 	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+ 		return;
++	/*
++	 * Modern (>=Nehalem) Intel systems use ACPI via intel_idle,
++	 * not this code.  Assume that any Intel systems using this
++	 * are ancient and may need the dummy wait.  This also assumes
++	 * that the motivating chipset issue was Intel-only.
++	 */
++	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
++		return;
+ #endif
+-	/* Dummy wait op - must do something useless after P_LVL2 read
+-	   because chipsets cannot guarantee that STPCLK# signal
+-	   gets asserted in time to freeze execution properly. */
++	/*
++	 * Dummy wait op - must do something useless after P_LVL2 read
++	 * because chipsets cannot guarantee that STPCLK# signal gets
++	 * asserted in time to freeze execution properly
++	 *
++	 * This workaround has been in place since the original ACPI
++	 * implementation was merged, circa 2002.
++	 *
++	 * If a profile is pointing to this instruction, please first
++	 * consider moving your system to a more modern idle
++	 * mechanism.
++	 */
+ 	inl(acpi_gbl_FADT.xpm_timer_block.address);
+ }
+ 
+diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
+index 2283dcd8bf91d..6514db824473c 100644
+--- a/drivers/dma/imx-sdma.c
++++ b/drivers/dma/imx-sdma.c
+@@ -1363,10 +1363,12 @@ static struct sdma_desc *sdma_transfer_init(struct sdma_channel *sdmac,
+ 		sdma_config_ownership(sdmac, false, true, false);
+ 
+ 	if (sdma_load_context(sdmac))
+-		goto err_desc_out;
++		goto err_bd_out;
+ 
+ 	return desc;
+ 
++err_bd_out:
++	sdma_free_bd(desc);
+ err_desc_out:
+ 	kfree(desc);
+ err_out:
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index a9e074769881f..ab4f51716645b 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1072,6 +1072,9 @@ static int bpf_send_signal_common(u32 sig, enum pid_type type)
+ 		return -EPERM;
+ 	if (unlikely(!nmi_uaccess_okay()))
+ 		return -EPERM;
++	/* Task should not be pid=1 to avoid kernel panic. */
++	if (unlikely(is_global_init(current)))
++		return -EPERM;
+ 
+ 	if (irqs_disabled()) {
+ 		/* Do an early check on signal validity. Otherwise,
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 954b29605c942..eb111504afc60 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -4307,6 +4307,19 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev,
+ 	struct hci_ev_sync_conn_complete *ev = (void *) skb->data;
+ 	struct hci_conn *conn;
+ 
++	switch (ev->link_type) {
++	case SCO_LINK:
++	case ESCO_LINK:
++		break;
++	default:
++		/* As per Core 5.3 Vol 4 Part E 7.7.35 (p.2219), Link_Type
++		 * for HCI_Synchronous_Connection_Complete is limited to
++		 * either SCO or eSCO
++		 */
++		bt_dev_err(hdev, "Ignoring connect complete event for invalid link type");
++		return;
++	}
++
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+ 
+ 	hci_dev_lock(hdev);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 2b12e0730b852..668a9d0fbbc6e 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3688,7 +3688,7 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
+ 
+ 	skb_shinfo(skb)->frag_list = NULL;
+ 
+-	do {
++	while (list_skb) {
+ 		nskb = list_skb;
+ 		list_skb = list_skb->next;
+ 
+@@ -3732,8 +3732,7 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
+ 		if (skb_needs_linearize(nskb, features) &&
+ 		    __skb_linearize(nskb))
+ 			goto err_linearize;
+-
+-	} while (list_skb);
++	}
+ 
+ 	skb->truesize = skb->truesize - delta_truesize;
+ 	skb->data_len = skb->data_len - delta_len;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-02-15 16:40 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-02-15 16:40 UTC (permalink / raw
  To: gentoo-commits

commit:     f17a00c0cbb46c2d96d25505ffeb0551e49f3968
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 15 16:40:29 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Feb 15 16:40:29 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f17a00c0

Linux patch 5.10.168

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1167_linux-5.10.168.patch | 5144 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5148 insertions(+)

diff --git a/0000_README b/0000_README
index 99a312cb..88e6260b 100644
--- a/0000_README
+++ b/0000_README
@@ -711,6 +711,10 @@ Patch:  1166_linux-5.10.167.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.167
 
+Patch:  1167_linux-5.10.168.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.168
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1167_linux-5.10.168.patch b/1167_linux-5.10.168.patch
new file mode 100644
index 00000000..c5897287
--- /dev/null
+++ b/1167_linux-5.10.168.patch
@@ -0,0 +1,5144 @@
+diff --git a/Makefile b/Makefile
+index 84b78e657357c..af3270277fd0e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 167
++SUBLEVEL = 168
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+index fae48efae83e9..5c75fbf0d4709 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+@@ -1754,7 +1754,7 @@
+ 			sd_emmc_b: sd@5000 {
+ 				compatible = "amlogic,meson-axg-mmc";
+ 				reg = <0x0 0x5000 0x0 0x800>;
+-				interrupts = <GIC_SPI 217 IRQ_TYPE_EDGE_RISING>;
++				interrupts = <GIC_SPI 217 IRQ_TYPE_LEVEL_HIGH>;
+ 				status = "disabled";
+ 				clocks = <&clkc CLKID_SD_EMMC_B>,
+ 					<&clkc CLKID_SD_EMMC_B_CLK0>,
+@@ -1766,7 +1766,7 @@
+ 			sd_emmc_c: mmc@7000 {
+ 				compatible = "amlogic,meson-axg-mmc";
+ 				reg = <0x0 0x7000 0x0 0x800>;
+-				interrupts = <GIC_SPI 218 IRQ_TYPE_EDGE_RISING>;
++				interrupts = <GIC_SPI 218 IRQ_TYPE_LEVEL_HIGH>;
+ 				status = "disabled";
+ 				clocks = <&clkc CLKID_SD_EMMC_C>,
+ 					<&clkc CLKID_SD_EMMC_C_CLK0>,
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index 075153a4d49fc..2091db7c9b8af 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -2317,7 +2317,7 @@
+ 		sd_emmc_a: sd@ffe03000 {
+ 			compatible = "amlogic,meson-axg-mmc";
+ 			reg = <0x0 0xffe03000 0x0 0x800>;
+-			interrupts = <GIC_SPI 189 IRQ_TYPE_EDGE_RISING>;
++			interrupts = <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>;
+ 			status = "disabled";
+ 			clocks = <&clkc CLKID_SD_EMMC_A>,
+ 				 <&clkc CLKID_SD_EMMC_A_CLK0>,
+@@ -2329,7 +2329,7 @@
+ 		sd_emmc_b: sd@ffe05000 {
+ 			compatible = "amlogic,meson-axg-mmc";
+ 			reg = <0x0 0xffe05000 0x0 0x800>;
+-			interrupts = <GIC_SPI 190 IRQ_TYPE_EDGE_RISING>;
++			interrupts = <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>;
+ 			status = "disabled";
+ 			clocks = <&clkc CLKID_SD_EMMC_B>,
+ 				 <&clkc CLKID_SD_EMMC_B_CLK0>,
+@@ -2341,7 +2341,7 @@
+ 		sd_emmc_c: mmc@ffe07000 {
+ 			compatible = "amlogic,meson-axg-mmc";
+ 			reg = <0x0 0xffe07000 0x0 0x800>;
+-			interrupts = <GIC_SPI 191 IRQ_TYPE_EDGE_RISING>;
++			interrupts = <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>;
+ 			status = "disabled";
+ 			clocks = <&clkc CLKID_SD_EMMC_C>,
+ 				 <&clkc CLKID_SD_EMMC_C_CLK0>,
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+index 47cbb0a1eb183..88a7db5c55a07 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+@@ -595,21 +595,21 @@
+ 			sd_emmc_a: mmc@70000 {
+ 				compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
+ 				reg = <0x0 0x70000 0x0 0x800>;
+-				interrupts = <GIC_SPI 216 IRQ_TYPE_EDGE_RISING>;
++				interrupts = <GIC_SPI 216 IRQ_TYPE_LEVEL_HIGH>;
+ 				status = "disabled";
+ 			};
+ 
+ 			sd_emmc_b: mmc@72000 {
+ 				compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
+ 				reg = <0x0 0x72000 0x0 0x800>;
+-				interrupts = <GIC_SPI 217 IRQ_TYPE_EDGE_RISING>;
++				interrupts = <GIC_SPI 217 IRQ_TYPE_LEVEL_HIGH>;
+ 				status = "disabled";
+ 			};
+ 
+ 			sd_emmc_c: mmc@74000 {
+ 				compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
+ 				reg = <0x0 0x74000 0x0 0x800>;
+-				interrupts = <GIC_SPI 218 IRQ_TYPE_EDGE_RISING>;
++				interrupts = <GIC_SPI 218 IRQ_TYPE_LEVEL_HIGH>;
+ 				status = "disabled";
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h b/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
+index a003e6af33533..56271abfb7e09 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
++++ b/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
+@@ -601,7 +601,7 @@
+ #define MX8MM_IOMUXC_UART1_RXD_GPIO5_IO22                                   0x234 0x49C 0x000 0x5 0x0
+ #define MX8MM_IOMUXC_UART1_RXD_TPSMP_HDATA24                                0x234 0x49C 0x000 0x7 0x0
+ #define MX8MM_IOMUXC_UART1_TXD_UART1_DCE_TX                                 0x238 0x4A0 0x000 0x0 0x0
+-#define MX8MM_IOMUXC_UART1_TXD_UART1_DTE_RX                                 0x238 0x4A0 0x4F4 0x0 0x0
++#define MX8MM_IOMUXC_UART1_TXD_UART1_DTE_RX                                 0x238 0x4A0 0x4F4 0x0 0x1
+ #define MX8MM_IOMUXC_UART1_TXD_ECSPI3_MOSI                                  0x238 0x4A0 0x000 0x1 0x0
+ #define MX8MM_IOMUXC_UART1_TXD_GPIO5_IO23                                   0x238 0x4A0 0x000 0x5 0x0
+ #define MX8MM_IOMUXC_UART1_TXD_TPSMP_HDATA25                                0x238 0x4A0 0x000 0x7 0x0
+diff --git a/arch/parisc/kernel/firmware.c b/arch/parisc/kernel/firmware.c
+index 665b70086685c..7ed28ddcaba7d 100644
+--- a/arch/parisc/kernel/firmware.c
++++ b/arch/parisc/kernel/firmware.c
+@@ -1230,7 +1230,7 @@ static char __attribute__((aligned(64))) iodc_dbuf[4096];
+  */
+ int pdc_iodc_print(const unsigned char *str, unsigned count)
+ {
+-	unsigned int i;
++	unsigned int i, found = 0;
+ 	unsigned long flags;
+ 
+ 	for (i = 0; i < count;) {
+@@ -1239,6 +1239,7 @@ int pdc_iodc_print(const unsigned char *str, unsigned count)
+ 			iodc_dbuf[i+0] = '\r';
+ 			iodc_dbuf[i+1] = '\n';
+ 			i += 2;
++			found = 1;
+ 			goto print;
+ 		default:
+ 			iodc_dbuf[i] = str[i];
+@@ -1255,7 +1256,7 @@ print:
+                     __pa(iodc_retbuf), 0, __pa(iodc_dbuf), i, 0);
+         spin_unlock_irqrestore(&pdc_lock, flags);
+ 
+-	return i;
++	return i - found;
+ }
+ 
+ #if !defined(BOOTLOADER)
+diff --git a/arch/parisc/kernel/ptrace.c b/arch/parisc/kernel/ptrace.c
+index 2127974982df9..df9b9f0591bfb 100644
+--- a/arch/parisc/kernel/ptrace.c
++++ b/arch/parisc/kernel/ptrace.c
+@@ -127,6 +127,12 @@ long arch_ptrace(struct task_struct *child, long request,
+ 	unsigned long tmp;
+ 	long ret = -EIO;
+ 
++	unsigned long user_regs_struct_size = sizeof(struct user_regs_struct);
++#ifdef CONFIG_64BIT
++	if (is_compat_task())
++		user_regs_struct_size /= 2;
++#endif
++
+ 	switch (request) {
+ 
+ 	/* Read the word at location addr in the USER area.  For ptraced
+@@ -182,14 +188,14 @@ long arch_ptrace(struct task_struct *child, long request,
+ 		return copy_regset_to_user(child,
+ 					   task_user_regset_view(current),
+ 					   REGSET_GENERAL,
+-					   0, sizeof(struct user_regs_struct),
++					   0, user_regs_struct_size,
+ 					   datap);
+ 
+ 	case PTRACE_SETREGS:	/* Set all gp regs in the child. */
+ 		return copy_regset_from_user(child,
+ 					     task_user_regset_view(current),
+ 					     REGSET_GENERAL,
+-					     0, sizeof(struct user_regs_struct),
++					     0, user_regs_struct_size,
+ 					     datap);
+ 
+ 	case PTRACE_GETFPREGS:	/* Get the child FPU state. */
+@@ -303,6 +309,11 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
+ 			}
+ 		}
+ 		break;
++	case PTRACE_GETREGS:
++	case PTRACE_SETREGS:
++	case PTRACE_GETFPREGS:
++	case PTRACE_SETFPREGS:
++		return arch_ptrace(child, request, addr, data);
+ 
+ 	default:
+ 		ret = compat_ptrace_request(child, request, addr, data);
+diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
+index e42c2fe3dd367..b773c411aa5c2 100644
+--- a/arch/powerpc/perf/imc-pmu.c
++++ b/arch/powerpc/perf/imc-pmu.c
+@@ -21,7 +21,7 @@
+  * Used to avoid races in counting the nest-pmu units during hotplug
+  * register and unregister
+  */
+-static DEFINE_SPINLOCK(nest_init_lock);
++static DEFINE_MUTEX(nest_init_lock);
+ static DEFINE_PER_CPU(struct imc_pmu_ref *, local_nest_imc_refc);
+ static struct imc_pmu **per_nest_pmu_arr;
+ static cpumask_t nest_imc_cpumask;
+@@ -1621,7 +1621,7 @@ static void imc_common_mem_free(struct imc_pmu *pmu_ptr)
+ static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr)
+ {
+ 	if (pmu_ptr->domain == IMC_DOMAIN_NEST) {
+-		spin_lock(&nest_init_lock);
++		mutex_lock(&nest_init_lock);
+ 		if (nest_pmus == 1) {
+ 			cpuhp_remove_state(CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE);
+ 			kfree(nest_imc_refc);
+@@ -1631,7 +1631,7 @@ static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr)
+ 
+ 		if (nest_pmus > 0)
+ 			nest_pmus--;
+-		spin_unlock(&nest_init_lock);
++		mutex_unlock(&nest_init_lock);
+ 	}
+ 
+ 	/* Free core_imc memory */
+@@ -1788,11 +1788,11 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
+ 		* rest. To handle the cpuhotplug callback unregister, we track
+ 		* the number of nest pmus in "nest_pmus".
+ 		*/
+-		spin_lock(&nest_init_lock);
++		mutex_lock(&nest_init_lock);
+ 		if (nest_pmus == 0) {
+ 			ret = init_nest_pmu_ref();
+ 			if (ret) {
+-				spin_unlock(&nest_init_lock);
++				mutex_unlock(&nest_init_lock);
+ 				kfree(per_nest_pmu_arr);
+ 				per_nest_pmu_arr = NULL;
+ 				goto err_free_mem;
+@@ -1800,7 +1800,7 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
+ 			/* Register for cpu hotplug notification. */
+ 			ret = nest_pmu_cpumask_init();
+ 			if (ret) {
+-				spin_unlock(&nest_init_lock);
++				mutex_unlock(&nest_init_lock);
+ 				kfree(nest_imc_refc);
+ 				kfree(per_nest_pmu_arr);
+ 				per_nest_pmu_arr = NULL;
+@@ -1808,7 +1808,7 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
+ 			}
+ 		}
+ 		nest_pmus++;
+-		spin_unlock(&nest_init_lock);
++		mutex_unlock(&nest_init_lock);
+ 		break;
+ 	case IMC_DOMAIN_CORE:
+ 		ret = core_imc_pmu_cpumask_init();
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index 1bb1bf1141cc7..9446282b52bab 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -74,6 +74,9 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
+         KBUILD_CFLAGS += -fno-omit-frame-pointer
+ endif
+ 
++# Avoid generating .eh_frame sections.
++KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables
++
+ KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax)
+ KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)
+ 
+diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
+index 89f81067e09ed..2ae1201cff886 100644
+--- a/arch/riscv/mm/cacheflush.c
++++ b/arch/riscv/mm/cacheflush.c
+@@ -85,7 +85,9 @@ void flush_icache_pte(pte_t pte)
+ {
+ 	struct page *page = pte_page(pte);
+ 
+-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
++	if (!test_bit(PG_dcache_clean, &page->flags)) {
+ 		flush_icache_all();
++		set_bit(PG_dcache_clean, &page->flags);
++	}
+ }
+ #endif /* CONFIG_MMU */
+diff --git a/arch/x86/include/asm/debugreg.h b/arch/x86/include/asm/debugreg.h
+index cfdf307ddc012..9ed8343c9b3cb 100644
+--- a/arch/x86/include/asm/debugreg.h
++++ b/arch/x86/include/asm/debugreg.h
+@@ -39,7 +39,20 @@ static __always_inline unsigned long native_get_debugreg(int regno)
+ 		asm("mov %%db6, %0" :"=r" (val));
+ 		break;
+ 	case 7:
+-		asm("mov %%db7, %0" :"=r" (val));
++		/*
++		 * Apply __FORCE_ORDER to DR7 reads to forbid re-ordering them
++		 * with other code.
++		 *
++		 * This is needed because a DR7 access can cause a #VC exception
++		 * when running under SEV-ES. Taking a #VC exception is not a
++		 * safe thing to do just anywhere in the entry code and
++		 * re-ordering might place the access into an unsafe location.
++		 *
++		 * This happened in the NMI handler, where the DR7 read was
++		 * re-ordered to happen before the call to sev_es_ist_enter(),
++		 * causing stack recursion.
++		 */
++		asm volatile("mov %%db7, %0" : "=r" (val) : __FORCE_ORDER);
+ 		break;
+ 	default:
+ 		BUG();
+@@ -66,7 +79,16 @@ static __always_inline void native_set_debugreg(int regno, unsigned long value)
+ 		asm("mov %0, %%db6"	::"r" (value));
+ 		break;
+ 	case 7:
+-		asm("mov %0, %%db7"	::"r" (value));
++		/*
++		 * Apply __FORCE_ORDER to DR7 writes to forbid re-ordering them
++		 * with other code.
++		 *
++		 * While is didn't happen with a DR7 write (see the DR7 read
++		 * comment above which explains where it happened), add the
++		 * __FORCE_ORDER here too to avoid similar problems in the
++		 * future.
++		 */
++		asm volatile("mov %0, %%db7"	::"r" (value), __FORCE_ORDER);
+ 		break;
+ 	default:
+ 		BUG();
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index d13474c6d1818..14150767be444 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -3051,7 +3051,7 @@ int sata_down_spd_limit(struct ata_link *link, u32 spd_limit)
+ 	 */
+ 	if (spd > 1)
+ 		mask &= (1 << (spd - 1)) - 1;
+-	else
++	else if (link->sata_spd)
+ 		return -EINVAL;
+ 
+ 	/* were we already at the bottom? */
+diff --git a/drivers/bus/sunxi-rsb.c b/drivers/bus/sunxi-rsb.c
+index f8c29b888e6b4..98cbb18f17fa3 100644
+--- a/drivers/bus/sunxi-rsb.c
++++ b/drivers/bus/sunxi-rsb.c
+@@ -781,7 +781,13 @@ static int __init sunxi_rsb_init(void)
+ 		return ret;
+ 	}
+ 
+-	return platform_driver_register(&sunxi_rsb_driver);
++	ret = platform_driver_register(&sunxi_rsb_driver);
++	if (ret) {
++		bus_unregister(&sunxi_rsb_bus);
++		return ret;
++	}
++
++	return 0;
+ }
+ module_init(sunxi_rsb_init);
+ 
+diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
+index b0cc3f1e9bb00..16ea847ade5fd 100644
+--- a/drivers/firewire/core-cdev.c
++++ b/drivers/firewire/core-cdev.c
+@@ -818,8 +818,10 @@ static int ioctl_send_response(struct client *client, union ioctl_arg *arg)
+ 
+ 	r = container_of(resource, struct inbound_transaction_resource,
+ 			 resource);
+-	if (is_fcp_request(r->request))
++	if (is_fcp_request(r->request)) {
++		kfree(r->data);
+ 		goto out;
++	}
+ 
+ 	if (a->length != fw_get_response_length(r->request)) {
+ 		ret = -EINVAL;
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index a2765d668856e..332739f3eded5 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -950,6 +950,8 @@ int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
+ 	/* first try to find a slot in an existing linked list entry */
+ 	for (prsv = efi_memreserve_root->next; prsv; ) {
+ 		rsv = memremap(prsv, sizeof(*rsv), MEMREMAP_WB);
++		if (!rsv)
++			return -ENOMEM;
+ 		index = atomic_fetch_add_unless(&rsv->count, 1, rsv->size);
+ 		if (index < rsv->size) {
+ 			rsv->entry[index].base = addr;
+diff --git a/drivers/firmware/efi/memattr.c b/drivers/firmware/efi/memattr.c
+index 0a9aba5f9ceff..f178b2984dfb2 100644
+--- a/drivers/firmware/efi/memattr.c
++++ b/drivers/firmware/efi/memattr.c
+@@ -33,7 +33,7 @@ int __init efi_memattr_init(void)
+ 		return -ENOMEM;
+ 	}
+ 
+-	if (tbl->version > 1) {
++	if (tbl->version > 2) {
+ 		pr_warn("Unexpected EFI Memory Attributes table version %d\n",
+ 			tbl->version);
+ 		goto unmap;
+diff --git a/drivers/fpga/stratix10-soc.c b/drivers/fpga/stratix10-soc.c
+index 9e34bbbce26e2..7a8269b52723d 100644
+--- a/drivers/fpga/stratix10-soc.c
++++ b/drivers/fpga/stratix10-soc.c
+@@ -213,9 +213,9 @@ static int s10_ops_write_init(struct fpga_manager *mgr,
+ 	/* Allocate buffers from the service layer's pool. */
+ 	for (i = 0; i < NUM_SVC_BUFS; i++) {
+ 		kbuf = stratix10_svc_allocate_memory(priv->chan, SVC_BUF_SIZE);
+-		if (!kbuf) {
++		if (IS_ERR(kbuf)) {
+ 			s10_free_buffers(mgr);
+-			ret = -ENOMEM;
++			ret = PTR_ERR(kbuf);
+ 			goto init_done;
+ 		}
+ 
+diff --git a/drivers/fsi/fsi-sbefifo.c b/drivers/fsi/fsi-sbefifo.c
+index 84cb965bfed5c..97045a8d94224 100644
+--- a/drivers/fsi/fsi-sbefifo.c
++++ b/drivers/fsi/fsi-sbefifo.c
+@@ -640,7 +640,7 @@ static void sbefifo_collect_async_ffdc(struct sbefifo *sbefifo)
+ 	}
+         ffdc_iov.iov_base = ffdc;
+ 	ffdc_iov.iov_len = SBEFIFO_MAX_FFDC_SIZE;
+-        iov_iter_kvec(&ffdc_iter, WRITE, &ffdc_iov, 1, SBEFIFO_MAX_FFDC_SIZE);
++        iov_iter_kvec(&ffdc_iter, READ, &ffdc_iov, 1, SBEFIFO_MAX_FFDC_SIZE);
+ 	cmd[0] = cpu_to_be32(2);
+ 	cmd[1] = cpu_to_be32(SBEFIFO_CMD_GET_SBE_FFDC);
+ 	rc = sbefifo_do_command(sbefifo, cmd, 2, &ffdc_iter);
+@@ -737,7 +737,7 @@ int sbefifo_submit(struct device *dev, const __be32 *command, size_t cmd_len,
+ 	rbytes = (*resp_len) * sizeof(__be32);
+ 	resp_iov.iov_base = response;
+ 	resp_iov.iov_len = rbytes;
+-        iov_iter_kvec(&resp_iter, WRITE, &resp_iov, 1, rbytes);
++        iov_iter_kvec(&resp_iter, READ, &resp_iov, 1, rbytes);
+ 
+ 	/* Perform the command */
+ 	mutex_lock(&sbefifo->lock);
+@@ -817,7 +817,7 @@ static ssize_t sbefifo_user_read(struct file *file, char __user *buf,
+ 	/* Prepare iov iterator */
+ 	resp_iov.iov_base = buf;
+ 	resp_iov.iov_len = len;
+-	iov_iter_init(&resp_iter, WRITE, &resp_iov, 1, len);
++	iov_iter_init(&resp_iter, READ, &resp_iov, 1, len);
+ 
+ 	/* Perform the command */
+ 	mutex_lock(&sbefifo->lock);
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
+index ffcaee74a2493..545fa703d9757 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
+@@ -296,10 +296,6 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
+ 	spin_unlock(&obj->vma.lock);
+ 
+ 	obj->tiling_and_stride = tiling | stride;
+-	i915_gem_object_unlock(obj);
+-
+-	/* Force the fence to be reacquired for GTT access */
+-	i915_gem_object_release_mmap_gtt(obj);
+ 
+ 	/* Try to preallocate memory required to save swizzling on put-pages */
+ 	if (i915_gem_object_needs_bit17_swizzle(obj)) {
+@@ -312,6 +308,11 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
+ 		obj->bit_17 = NULL;
+ 	}
+ 
++	i915_gem_object_unlock(obj);
++
++	/* Force the fence to be reacquired for GTT access */
++	i915_gem_object_release_mmap_gtt(obj);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 08175c3dd374b..539ebf85fd7c0 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1491,7 +1491,8 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
+ 		return 0;
+ 
+ 	vc4_hdmi->cec_adap = cec_allocate_adapter(&vc4_hdmi_cec_adap_ops,
+-						  vc4_hdmi, "vc4",
++						  vc4_hdmi,
++						  vc4_hdmi->variant->card_name,
+ 						  CEC_CAP_DEFAULTS |
+ 						  CEC_CAP_CONNECTOR_INFO, 1);
+ 	ret = PTR_ERR_OR_ZERO(vc4_hdmi->cec_adap);
+diff --git a/drivers/i2c/busses/i2c-mxs.c b/drivers/i2c/busses/i2c-mxs.c
+index c4b08a9244614..abad24808e855 100644
+--- a/drivers/i2c/busses/i2c-mxs.c
++++ b/drivers/i2c/busses/i2c-mxs.c
+@@ -842,8 +842,8 @@ static int mxs_i2c_probe(struct platform_device *pdev)
+ 	/* Setup the DMA */
+ 	i2c->dmach = dma_request_chan(dev, "rx-tx");
+ 	if (IS_ERR(i2c->dmach)) {
+-		dev_err(dev, "Failed to request dma\n");
+-		return PTR_ERR(i2c->dmach);
++		return dev_err_probe(dev, PTR_ERR(i2c->dmach),
++				     "Failed to request dma\n");
+ 	}
+ 
+ 	platform_set_drvdata(pdev, i2c);
+diff --git a/drivers/i2c/busses/i2c-rk3x.c b/drivers/i2c/busses/i2c-rk3x.c
+index 02ddb237f69af..13c14eb175e94 100644
+--- a/drivers/i2c/busses/i2c-rk3x.c
++++ b/drivers/i2c/busses/i2c-rk3x.c
+@@ -80,7 +80,7 @@ enum {
+ #define DEFAULT_SCL_RATE  (100 * 1000) /* Hz */
+ 
+ /**
+- * struct i2c_spec_values:
++ * struct i2c_spec_values - I2C specification values for various modes
+  * @min_hold_start_ns: min hold time (repeated) START condition
+  * @min_low_ns: min LOW period of the SCL clock
+  * @min_high_ns: min HIGH period of the SCL cloc
+@@ -136,7 +136,7 @@ static const struct i2c_spec_values fast_mode_plus_spec = {
+ };
+ 
+ /**
+- * struct rk3x_i2c_calced_timings:
++ * struct rk3x_i2c_calced_timings - calculated V1 timings
+  * @div_low: Divider output for low
+  * @div_high: Divider output for high
+  * @tuning: Used to adjust setup/hold data time,
+@@ -159,7 +159,7 @@ enum rk3x_i2c_state {
+ };
+ 
+ /**
+- * struct rk3x_i2c_soc_data:
++ * struct rk3x_i2c_soc_data - SOC-specific data
+  * @grf_offset: offset inside the grf regmap for setting the i2c type
+  * @calc_timings: Callback function for i2c timing information calculated
+  */
+@@ -239,7 +239,8 @@ static inline void rk3x_i2c_clean_ipd(struct rk3x_i2c *i2c)
+ }
+ 
+ /**
+- * Generate a START condition, which triggers a REG_INT_START interrupt.
++ * rk3x_i2c_start - Generate a START condition, which triggers a REG_INT_START interrupt.
++ * @i2c: target controller data
+  */
+ static void rk3x_i2c_start(struct rk3x_i2c *i2c)
+ {
+@@ -258,8 +259,8 @@ static void rk3x_i2c_start(struct rk3x_i2c *i2c)
+ }
+ 
+ /**
+- * Generate a STOP condition, which triggers a REG_INT_STOP interrupt.
+- *
++ * rk3x_i2c_stop - Generate a STOP condition, which triggers a REG_INT_STOP interrupt.
++ * @i2c: target controller data
+  * @error: Error code to return in rk3x_i2c_xfer
+  */
+ static void rk3x_i2c_stop(struct rk3x_i2c *i2c, int error)
+@@ -298,7 +299,8 @@ static void rk3x_i2c_stop(struct rk3x_i2c *i2c, int error)
+ }
+ 
+ /**
+- * Setup a read according to i2c->msg
++ * rk3x_i2c_prepare_read - Setup a read according to i2c->msg
++ * @i2c: target controller data
+  */
+ static void rk3x_i2c_prepare_read(struct rk3x_i2c *i2c)
+ {
+@@ -329,7 +331,8 @@ static void rk3x_i2c_prepare_read(struct rk3x_i2c *i2c)
+ }
+ 
+ /**
+- * Fill the transmit buffer with data from i2c->msg
++ * rk3x_i2c_fill_transmit_buf - Fill the transmit buffer with data from i2c->msg
++ * @i2c: target controller data
+  */
+ static void rk3x_i2c_fill_transmit_buf(struct rk3x_i2c *i2c)
+ {
+@@ -532,11 +535,10 @@ out:
+ }
+ 
+ /**
+- * Get timing values of I2C specification
+- *
++ * rk3x_i2c_get_spec - Get timing values of I2C specification
+  * @speed: Desired SCL frequency
+  *
+- * Returns: Matched i2c spec values.
++ * Return: Matched i2c_spec_values.
+  */
+ static const struct i2c_spec_values *rk3x_i2c_get_spec(unsigned int speed)
+ {
+@@ -549,13 +551,12 @@ static const struct i2c_spec_values *rk3x_i2c_get_spec(unsigned int speed)
+ }
+ 
+ /**
+- * Calculate divider values for desired SCL frequency
+- *
++ * rk3x_i2c_v0_calc_timings - Calculate divider values for desired SCL frequency
+  * @clk_rate: I2C input clock rate
+  * @t: Known I2C timing information
+  * @t_calc: Caculated rk3x private timings that would be written into regs
+  *
+- * Returns: 0 on success, -EINVAL if the goal SCL rate is too slow. In that case
++ * Return: %0 on success, -%EINVAL if the goal SCL rate is too slow. In that case
+  * a best-effort divider value is returned in divs. If the target rate is
+  * too high, we silently use the highest possible rate.
+  */
+@@ -710,13 +711,12 @@ static int rk3x_i2c_v0_calc_timings(unsigned long clk_rate,
+ }
+ 
+ /**
+- * Calculate timing values for desired SCL frequency
+- *
++ * rk3x_i2c_v1_calc_timings - Calculate timing values for desired SCL frequency
+  * @clk_rate: I2C input clock rate
+  * @t: Known I2C timing information
+  * @t_calc: Caculated rk3x private timings that would be written into regs
+  *
+- * Returns: 0 on success, -EINVAL if the goal SCL rate is too slow. In that case
++ * Return: %0 on success, -%EINVAL if the goal SCL rate is too slow. In that case
+  * a best-effort divider value is returned in divs. If the target rate is
+  * too high, we silently use the highest possible rate.
+  * The following formulas are v1's method to calculate timings.
+@@ -960,14 +960,14 @@ static int rk3x_i2c_clk_notifier_cb(struct notifier_block *nb, unsigned long
+ }
+ 
+ /**
+- * Setup I2C registers for an I2C operation specified by msgs, num.
+- *
+- * Must be called with i2c->lock held.
+- *
++ * rk3x_i2c_setup - Setup I2C registers for an I2C operation specified by msgs, num.
++ * @i2c: target controller data
+  * @msgs: I2C msgs to process
+  * @num: Number of msgs
+  *
+- * returns: Number of I2C msgs processed or negative in case of error
++ * Must be called with i2c->lock held.
++ *
++ * Return: Number of I2C msgs processed or negative in case of error
+  */
+ static int rk3x_i2c_setup(struct rk3x_i2c *i2c, struct i2c_msg *msgs, int num)
+ {
+diff --git a/drivers/iio/accel/hid-sensor-accel-3d.c b/drivers/iio/accel/hid-sensor-accel-3d.c
+index f05840d17fb71..8d929a4f9110f 100644
+--- a/drivers/iio/accel/hid-sensor-accel-3d.c
++++ b/drivers/iio/accel/hid-sensor-accel-3d.c
+@@ -277,6 +277,7 @@ static int accel_3d_capture_sample(struct hid_sensor_hub_device *hsdev,
+ 			hid_sensor_convert_timestamp(
+ 					&accel_state->common_attributes,
+ 					*(int64_t *)raw_data);
++		ret = 0;
+ 	break;
+ 	default:
+ 		break;
+diff --git a/drivers/iio/adc/berlin2-adc.c b/drivers/iio/adc/berlin2-adc.c
+index 8b04b95b7b7ae..fa2c87946e16f 100644
+--- a/drivers/iio/adc/berlin2-adc.c
++++ b/drivers/iio/adc/berlin2-adc.c
+@@ -289,8 +289,10 @@ static int berlin2_adc_probe(struct platform_device *pdev)
+ 	int ret;
+ 
+ 	indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*priv));
+-	if (!indio_dev)
++	if (!indio_dev) {
++		of_node_put(parent_np);
+ 		return -ENOMEM;
++	}
+ 
+ 	priv = iio_priv(indio_dev);
+ 	platform_set_drvdata(pdev, indio_dev);
+diff --git a/drivers/iio/adc/stm32-dfsdm-adc.c b/drivers/iio/adc/stm32-dfsdm-adc.c
+index 9234f14167b7a..171d73efb2f88 100644
+--- a/drivers/iio/adc/stm32-dfsdm-adc.c
++++ b/drivers/iio/adc/stm32-dfsdm-adc.c
+@@ -1521,6 +1521,7 @@ static const struct of_device_id stm32_dfsdm_adc_match[] = {
+ 	},
+ 	{}
+ };
++MODULE_DEVICE_TABLE(of, stm32_dfsdm_adc_match);
+ 
+ static int stm32_dfsdm_adc_probe(struct platform_device *pdev)
+ {
+diff --git a/drivers/iio/adc/twl6030-gpadc.c b/drivers/iio/adc/twl6030-gpadc.c
+index 256177b15c511..024bdc1ef77e6 100644
+--- a/drivers/iio/adc/twl6030-gpadc.c
++++ b/drivers/iio/adc/twl6030-gpadc.c
+@@ -57,6 +57,18 @@
+ #define TWL6030_GPADCS				BIT(1)
+ #define TWL6030_GPADCR				BIT(0)
+ 
++#define USB_VBUS_CTRL_SET			0x04
++#define USB_ID_CTRL_SET				0x06
++
++#define TWL6030_MISC1				0xE4
++#define VBUS_MEAS				0x01
++#define ID_MEAS					0x01
++
++#define VAC_MEAS                0x04
++#define VBAT_MEAS               0x02
++#define BB_MEAS                 0x01
++
++
+ /**
+  * struct twl6030_chnl_calib - channel calibration
+  * @gain:		slope coefficient for ideal curve
+@@ -927,6 +939,26 @@ static int twl6030_gpadc_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	ret = twl_i2c_write_u8(TWL_MODULE_USB, VBUS_MEAS, USB_VBUS_CTRL_SET);
++	if (ret < 0) {
++		dev_err(dev, "failed to wire up inputs\n");
++		return ret;
++	}
++
++	ret = twl_i2c_write_u8(TWL_MODULE_USB, ID_MEAS, USB_ID_CTRL_SET);
++	if (ret < 0) {
++		dev_err(dev, "failed to wire up inputs\n");
++		return ret;
++	}
++
++	ret = twl_i2c_write_u8(TWL6030_MODULE_ID0,
++				VBAT_MEAS | BB_MEAS | VAC_MEAS,
++				TWL6030_MISC1);
++	if (ret < 0) {
++		dev_err(dev, "failed to wire up inputs\n");
++		return ret;
++	}
++
+ 	indio_dev->name = DRIVER_NAME;
+ 	indio_dev->info = &twl6030_gpadc_iio_info;
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+diff --git a/drivers/iio/imu/fxos8700_core.c b/drivers/iio/imu/fxos8700_core.c
+index ab288186f36e4..04d3778fcc153 100644
+--- a/drivers/iio/imu/fxos8700_core.c
++++ b/drivers/iio/imu/fxos8700_core.c
+@@ -10,6 +10,7 @@
+ #include <linux/regmap.h>
+ #include <linux/acpi.h>
+ #include <linux/bitops.h>
++#include <linux/bitfield.h>
+ 
+ #include <linux/iio/iio.h>
+ #include <linux/iio/sysfs.h>
+@@ -144,9 +145,8 @@
+ #define FXOS8700_NVM_DATA_BNK0      0xa7
+ 
+ /* Bit definitions for FXOS8700_CTRL_REG1 */
+-#define FXOS8700_CTRL_ODR_MSK       0x38
+ #define FXOS8700_CTRL_ODR_MAX       0x00
+-#define FXOS8700_CTRL_ODR_MIN       GENMASK(4, 3)
++#define FXOS8700_CTRL_ODR_MSK       GENMASK(5, 3)
+ 
+ /* Bit definitions for FXOS8700_M_CTRL_REG1 */
+ #define FXOS8700_HMS_MASK           GENMASK(1, 0)
+@@ -320,7 +320,7 @@ static enum fxos8700_sensor fxos8700_to_sensor(enum iio_chan_type iio_type)
+ 	switch (iio_type) {
+ 	case IIO_ACCEL:
+ 		return FXOS8700_ACCEL;
+-	case IIO_ANGL_VEL:
++	case IIO_MAGN:
+ 		return FXOS8700_MAGN;
+ 	default:
+ 		return -EINVAL;
+@@ -345,15 +345,35 @@ static int fxos8700_set_active_mode(struct fxos8700_data *data,
+ static int fxos8700_set_scale(struct fxos8700_data *data,
+ 			      enum fxos8700_sensor t, int uscale)
+ {
+-	int i;
++	int i, ret, val;
++	bool active_mode;
+ 	static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale);
+ 	struct device *dev = regmap_get_device(data->regmap);
+ 
+ 	if (t == FXOS8700_MAGN) {
+-		dev_err(dev, "Magnetometer scale is locked at 1200uT\n");
++		dev_err(dev, "Magnetometer scale is locked at 0.001Gs\n");
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * When device is in active mode, it failed to set an ACCEL
++	 * full-scale range(2g/4g/8g) in FXOS8700_XYZ_DATA_CFG.
++	 * This is not align with the datasheet, but it is a fxos8700
++	 * chip behavier. Set the device in standby mode before setting
++	 * an ACCEL full-scale range.
++	 */
++	ret = regmap_read(data->regmap, FXOS8700_CTRL_REG1, &val);
++	if (ret)
++		return ret;
++
++	active_mode = val & FXOS8700_ACTIVE;
++	if (active_mode) {
++		ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1,
++				   val & ~FXOS8700_ACTIVE);
++		if (ret)
++			return ret;
++	}
++
+ 	for (i = 0; i < scale_num; i++)
+ 		if (fxos8700_accel_scale[i].uscale == uscale)
+ 			break;
+@@ -361,8 +381,12 @@ static int fxos8700_set_scale(struct fxos8700_data *data,
+ 	if (i == scale_num)
+ 		return -EINVAL;
+ 
+-	return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG,
++	ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG,
+ 			    fxos8700_accel_scale[i].bits);
++	if (ret)
++		return ret;
++	return regmap_write(data->regmap, FXOS8700_CTRL_REG1,
++				  active_mode);
+ }
+ 
+ static int fxos8700_get_scale(struct fxos8700_data *data,
+@@ -372,7 +396,7 @@ static int fxos8700_get_scale(struct fxos8700_data *data,
+ 	static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale);
+ 
+ 	if (t == FXOS8700_MAGN) {
+-		*uscale = 1200; /* Magnetometer is locked at 1200uT */
++		*uscale = 1000; /* Magnetometer is locked at 0.001Gs */
+ 		return 0;
+ 	}
+ 
+@@ -394,22 +418,61 @@ static int fxos8700_get_data(struct fxos8700_data *data, int chan_type,
+ 			     int axis, int *val)
+ {
+ 	u8 base, reg;
++	s16 tmp;
+ 	int ret;
+-	enum fxos8700_sensor type = fxos8700_to_sensor(chan_type);
+ 
+-	base = type ? FXOS8700_OUT_X_MSB : FXOS8700_M_OUT_X_MSB;
++	/*
++	 * Different register base addresses varies with channel types.
++	 * This bug hasn't been noticed before because using an enum is
++	 * really hard to read. Use an a switch statement to take over that.
++	 */
++	switch (chan_type) {
++	case IIO_ACCEL:
++		base = FXOS8700_OUT_X_MSB;
++		break;
++	case IIO_MAGN:
++		base = FXOS8700_M_OUT_X_MSB;
++		break;
++	default:
++		return -EINVAL;
++	}
+ 
+ 	/* Block read 6 bytes of device output registers to avoid data loss */
+ 	ret = regmap_bulk_read(data->regmap, base, data->buf,
+-			       FXOS8700_DATA_BUF_SIZE);
++			       sizeof(data->buf));
+ 	if (ret)
+ 		return ret;
+ 
+ 	/* Convert axis to buffer index */
+ 	reg = axis - IIO_MOD_X;
+ 
++	/*
++	 * Convert to native endianness. The accel data and magn data
++	 * are signed, so a forced type conversion is needed.
++	 */
++	tmp = be16_to_cpu(data->buf[reg]);
++
++	/*
++	 * ACCEL output data registers contain the X-axis, Y-axis, and Z-axis
++	 * 14-bit left-justified sample data and MAGN output data registers
++	 * contain the X-axis, Y-axis, and Z-axis 16-bit sample data. Apply
++	 * a signed 2 bits right shift to the readback raw data from ACCEL
++	 * output data register and keep that from MAGN sensor as the origin.
++	 * Value should be extended to 32 bit.
++	 */
++	switch (chan_type) {
++	case IIO_ACCEL:
++		tmp = tmp >> 2;
++		break;
++	case IIO_MAGN:
++		/* Nothing to do */
++		break;
++	default:
++		return -EINVAL;
++	}
++
+ 	/* Convert to native endianness */
+-	*val = sign_extend32(be16_to_cpu(data->buf[reg]), 15);
++	*val = sign_extend32(tmp, 15);
+ 
+ 	return 0;
+ }
+@@ -445,10 +508,9 @@ static int fxos8700_set_odr(struct fxos8700_data *data, enum fxos8700_sensor t,
+ 	if (i >= odr_num)
+ 		return -EINVAL;
+ 
+-	return regmap_update_bits(data->regmap,
+-				  FXOS8700_CTRL_REG1,
+-				  FXOS8700_CTRL_ODR_MSK + FXOS8700_ACTIVE,
+-				  fxos8700_odr[i].bits << 3 | active_mode);
++	val &= ~FXOS8700_CTRL_ODR_MSK;
++	val |= FIELD_PREP(FXOS8700_CTRL_ODR_MSK, fxos8700_odr[i].bits) | FXOS8700_ACTIVE;
++	return regmap_write(data->regmap, FXOS8700_CTRL_REG1, val);
+ }
+ 
+ static int fxos8700_get_odr(struct fxos8700_data *data, enum fxos8700_sensor t,
+@@ -461,7 +523,7 @@ static int fxos8700_get_odr(struct fxos8700_data *data, enum fxos8700_sensor t,
+ 	if (ret)
+ 		return ret;
+ 
+-	val &= FXOS8700_CTRL_ODR_MSK;
++	val = FIELD_GET(FXOS8700_CTRL_ODR_MSK, val);
+ 
+ 	for (i = 0; i < odr_num; i++)
+ 		if (val == fxos8700_odr[i].bits)
+@@ -526,7 +588,7 @@ static IIO_CONST_ATTR(in_accel_sampling_frequency_available,
+ static IIO_CONST_ATTR(in_magn_sampling_frequency_available,
+ 		      "1.5625 6.25 12.5 50 100 200 400 800");
+ static IIO_CONST_ATTR(in_accel_scale_available, "0.000244 0.000488 0.000976");
+-static IIO_CONST_ATTR(in_magn_scale_available, "0.000001200");
++static IIO_CONST_ATTR(in_magn_scale_available, "0.001000");
+ 
+ static struct attribute *fxos8700_attrs[] = {
+ 	&iio_const_attr_in_accel_sampling_frequency_available.dev_attr.attr,
+@@ -592,14 +654,19 @@ static int fxos8700_chip_init(struct fxos8700_data *data, bool use_spi)
+ 	if (ret)
+ 		return ret;
+ 
+-	/* Max ODR (800Hz individual or 400Hz hybrid), active mode */
+-	ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1,
+-			   FXOS8700_CTRL_ODR_MAX | FXOS8700_ACTIVE);
++	/*
++	 * Set max full-scale range (+/-8G) for ACCEL sensor in chip
++	 * initialization then activate the device.
++	 */
++	ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G);
+ 	if (ret)
+ 		return ret;
+ 
+-	/* Set for max full-scale range (+/-8G) */
+-	return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G);
++	/* Max ODR (800Hz individual or 400Hz hybrid), active mode */
++	return regmap_update_bits(data->regmap, FXOS8700_CTRL_REG1,
++				FXOS8700_CTRL_ODR_MSK | FXOS8700_ACTIVE,
++				FIELD_PREP(FXOS8700_CTRL_ODR_MSK, FXOS8700_CTRL_ODR_MAX) |
++				FXOS8700_ACTIVE);
+ }
+ 
+ static void fxos8700_chip_uninit(void *data)
+diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
+index d84b1098762c1..7828fb5931203 100644
+--- a/drivers/infiniband/hw/hfi1/file_ops.c
++++ b/drivers/infiniband/hw/hfi1/file_ops.c
+@@ -1359,12 +1359,15 @@ static int user_exp_rcv_setup(struct hfi1_filedata *fd, unsigned long arg,
+ 		addr = arg + offsetof(struct hfi1_tid_info, tidcnt);
+ 		if (copy_to_user((void __user *)addr, &tinfo.tidcnt,
+ 				 sizeof(tinfo.tidcnt)))
+-			return -EFAULT;
++			ret = -EFAULT;
+ 
+ 		addr = arg + offsetof(struct hfi1_tid_info, length);
+-		if (copy_to_user((void __user *)addr, &tinfo.length,
++		if (!ret && copy_to_user((void __user *)addr, &tinfo.length,
+ 				 sizeof(tinfo.length)))
+ 			ret = -EFAULT;
++
++		if (ret)
++			hfi1_user_exp_rcv_invalid(fd, &tinfo);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
+index 760b254ba42d6..48a57568cad69 100644
+--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
++++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
+@@ -281,8 +281,8 @@ iter_chunk:
+ 				size = pa_end - pa_start + PAGE_SIZE;
+ 				usnic_dbg("va 0x%lx pa %pa size 0x%zx flags 0x%x",
+ 					va_start, &pa_start, size, flags);
+-				err = iommu_map(pd->domain, va_start, pa_start,
+-							size, flags);
++				err = iommu_map_atomic(pd->domain, va_start,
++						       pa_start, size, flags);
+ 				if (err) {
+ 					usnic_err("Failed to map va 0x%lx pa %pa size 0x%zx with err %d\n",
+ 						va_start, &pa_start, size, err);
+@@ -298,8 +298,8 @@ iter_chunk:
+ 				size = pa - pa_start + PAGE_SIZE;
+ 				usnic_dbg("va 0x%lx pa %pa size 0x%zx flags 0x%x\n",
+ 					va_start, &pa_start, size, flags);
+-				err = iommu_map(pd->domain, va_start, pa_start,
+-						size, flags);
++				err = iommu_map_atomic(pd->domain, va_start,
++						       pa_start, size, flags);
+ 				if (err) {
+ 					usnic_err("Failed to map va 0x%lx pa %pa size 0x%zx with err %d\n",
+ 						va_start, &pa_start, size, err);
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index abfab89423f41..35322d23fc340 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -2188,6 +2188,14 @@ int ipoib_intf_init(struct ib_device *hca, u8 port, const char *name,
+ 		rn->attach_mcast = ipoib_mcast_attach;
+ 		rn->detach_mcast = ipoib_mcast_detach;
+ 		rn->hca = hca;
++
++		rc = netif_set_real_num_tx_queues(dev, 1);
++		if (rc)
++			goto out;
++
++		rc = netif_set_real_num_rx_queues(dev, 1);
++		if (rc)
++			goto out;
+ 	}
+ 
+ 	priv->rn_ops = dev->netdev_ops;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 5c39e4c4bef7f..1a5805260778b 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -902,7 +902,7 @@ static void rtrs_clt_init_req(struct rtrs_clt_io_req *req,
+ 	req->need_inv_comp = false;
+ 	req->inv_errno = 0;
+ 
+-	iov_iter_kvec(&iter, READ, vec, 1, usr_len);
++	iov_iter_kvec(&iter, WRITE, vec, 1, usr_len);
+ 	len = _copy_from_iter(req->iu->buf, usr_len, &iter);
+ 	WARN_ON(len != usr_len);
+ 
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index 148a7c5fd0e22..65c0081838e3d 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -67,25 +67,84 @@ static inline void i8042_write_command(int val)
+ 
+ #include <linux/dmi.h>
+ 
+-static const struct dmi_system_id __initconst i8042_dmi_noloop_table[] = {
++#define SERIO_QUIRK_NOKBD		BIT(0)
++#define SERIO_QUIRK_NOAUX		BIT(1)
++#define SERIO_QUIRK_NOMUX		BIT(2)
++#define SERIO_QUIRK_FORCEMUX		BIT(3)
++#define SERIO_QUIRK_UNLOCK		BIT(4)
++#define SERIO_QUIRK_PROBE_DEFER		BIT(5)
++#define SERIO_QUIRK_RESET_ALWAYS	BIT(6)
++#define SERIO_QUIRK_RESET_NEVER		BIT(7)
++#define SERIO_QUIRK_DIECT		BIT(8)
++#define SERIO_QUIRK_DUMBKBD		BIT(9)
++#define SERIO_QUIRK_NOLOOP		BIT(10)
++#define SERIO_QUIRK_NOTIMEOUT		BIT(11)
++#define SERIO_QUIRK_KBDRESET		BIT(12)
++#define SERIO_QUIRK_DRITEK		BIT(13)
++#define SERIO_QUIRK_NOPNP		BIT(14)
++
++/* Quirk table for different mainboards. Options similar or identical to i8042
++ * module parameters.
++ * ORDERING IS IMPORTANT! The first match will be apllied and the rest ignored.
++ * This allows entries to overwrite vendor wide quirks on a per device basis.
++ * Where this is irrelevant, entries are sorted case sensitive by DMI_SYS_VENDOR
++ * and/or DMI_BOARD_VENDOR to make it easier to avoid dublicate entries.
++ */
++static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 	{
+-		/*
+-		 * Arima-Rioworks HDAMB -
+-		 * AUX LOOP command does not raise AUX IRQ
+-		 */
+ 		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "RIOWORKS"),
+-			DMI_MATCH(DMI_BOARD_NAME, "HDAMB"),
+-			DMI_MATCH(DMI_BOARD_VERSION, "Rev E"),
++			DMI_MATCH(DMI_SYS_VENDOR, "ALIENWARE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Sentia"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* ASUS G1S */
+ 		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer Inc."),
+-			DMI_MATCH(DMI_BOARD_NAME, "G1S"),
+-			DMI_MATCH(DMI_BOARD_VERSION, "1.0"),
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X750LN"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* Asus X450LCP */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X450LCP"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_NEVER)
++	},
++	{
++		/* ASUS ZenBook UX425UA */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX425UA"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_PROBE_DEFER | SERIO_QUIRK_RESET_NEVER)
++	},
++	{
++		/* ASUS ZenBook UM325UA */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_PROBE_DEFER | SERIO_QUIRK_RESET_NEVER)
++	},
++	/*
++	 * On some Asus laptops, just running self tests cause problems.
++	 */
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_NEVER)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_CHASSIS_TYPE, "31"), /* Convertible Notebook */
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_NEVER)
+ 	},
+ 	{
+ 		/* ASUS P65UP5 - AUX LOOP command does not raise AUX IRQ */
+@@ -94,585 +153,681 @@ static const struct dmi_system_id __initconst i8042_dmi_noloop_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "P/I-P65UP5"),
+ 			DMI_MATCH(DMI_BOARD_VERSION, "REV 2.X"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
++		/* ASUS G1S */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "X750LN"),
++			DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer Inc."),
++			DMI_MATCH(DMI_BOARD_NAME, "G1S"),
++			DMI_MATCH(DMI_BOARD_VERSION, "1.0"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
+-			DMI_MATCH(DMI_PRODUCT_NAME , "ProLiant"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "8500"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 1360"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
++		/* Acer Aspire 5710 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
+-			DMI_MATCH(DMI_PRODUCT_NAME , "ProLiant"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "DL760"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5710"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Dell Embedded Box PC 3000 */
++		/* Acer Aspire 7738 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Embedded Box PC 3000"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 7738"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* OQO Model 01 */
++		/* Acer Aspire 5536 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "OQO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "ZEPTO"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "00"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5536"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "0100"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* ULI EV4873 - AUX LOOP does not work properly */
++		/*
++		 * Acer Aspire 5738z
++		 * Touchpad stops working in mux mode when dis- + re-enabled
++		 * with the touchpad enable/disable toggle hotkey
++		 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ULI"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "EV4873"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "5a"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5738"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Microsoft Virtual Machine */
++		/* Acer Aspire One 150 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "VS2005R2"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "AOA150"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
+-		/* Medion MAM 2070 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "MAM 2070"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "5a"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A114-31"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
+-		/* Medion Akoya E7225 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Medion"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Akoya E7225"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "1.0"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A314-31"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
+-		/* Blue FB5601 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "blue"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "FB5601"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "M606"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A315-31"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
+-		/* Gigabyte M912 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "M912"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "01"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire ES1-132"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
+-		/* Gigabyte M1022M netbook */
+ 		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co.,Ltd."),
+-			DMI_MATCH(DMI_BOARD_NAME, "M1022E"),
+-			DMI_MATCH(DMI_BOARD_VERSION, "1.02"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire ES1-332"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
+-		/* Gigabyte Spring Peak - defines wrong chassis type */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Spring Peak"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire ES1-432"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
+-		/* Gigabyte T1005 - defines wrong chassis type ("Other") */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "T1005"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate Spin B118-RN"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
++	/*
++	 * Some Wistron based laptops need us to explicitly enable the 'Dritek
++	 * keyboard extension' to make their extra keys start generating scancodes.
++	 * Originally, this was just confined to older laptops, but a few Acer laptops
++	 * have turned up in 2007 that also need this again.
++	 */
+ 	{
+-		/* Gigabyte T1005M/P - defines wrong chassis type ("Other") */
++		/* Acer Aspire 5100 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "T1005M/P"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5100"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+ 	},
+ 	{
++		/* Acer Aspire 5610 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv9700"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "Rev 1"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5610"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+ 	},
+ 	{
++		/* Acer Aspire 5630 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "PEGATRON CORPORATION"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "C15B"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5630"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+ 	},
+ 	{
++		/* Acer Aspire 5650 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ByteSpeed LLC"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "ByteSpeed Laptop C15B"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5650"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+ 	},
+-	{ }
+-};
+-
+-/*
+- * Some Fujitsu notebooks are having trouble with touchpads if
+- * active multiplexing mode is activated. Luckily they don't have
+- * external PS/2 ports so we can safely disable it.
+- * ... apparently some Toshibas don't like MUX mode either and
+- * die horrible death on reboot.
+- */
+-static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = {
+ 	{
+-		/* Fujitsu Lifebook P7010/P7010D */
++		/* Acer Aspire 5680 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "P7010"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5680"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+ 	},
+ 	{
+-		/* Fujitsu Lifebook P7010 */
++		/* Acer Aspire 5720 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "0000000000"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5720"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+ 	},
+ 	{
+-		/* Fujitsu Lifebook P5020D */
++		/* Acer Aspire 9110 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook P Series"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 9110"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+ 	},
+ 	{
+-		/* Fujitsu Lifebook S2000 */
++		/* Acer TravelMate 660 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S Series"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 660"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+ 	},
+ 	{
+-		/* Fujitsu Lifebook S6230 */
++		/* Acer TravelMate 2490 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S6230"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 2490"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+ 	},
+ 	{
+-		/* Fujitsu Lifebook T725 laptop */
++		/* Acer TravelMate 4280 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T725"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 4280"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+ 	},
+ 	{
+-		/* Fujitsu Lifebook U745 */
++		/* Amoi M636/A737 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U745"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Amoi Electronics CO.,LTD."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "M636/A737 platform"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Fujitsu T70H */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "FMVLT70H"),
++			DMI_MATCH(DMI_SYS_VENDOR, "ByteSpeed LLC"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ByteSpeed Laptop C15B"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* Fujitsu-Siemens Lifebook T3010 */
++		/* Compal HEL80I */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T3010"),
++			DMI_MATCH(DMI_SYS_VENDOR, "COMPAL"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HEL80I"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Fujitsu-Siemens Lifebook E4010 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E4010"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ProLiant"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "8500"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* Fujitsu-Siemens Amilo Pro 2010 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "AMILO Pro V2010"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ProLiant"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "DL760"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* Fujitsu-Siemens Amilo Pro 2030 */
++		/* Advent 4211 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "AMILO PRO V2030"),
++			DMI_MATCH(DMI_SYS_VENDOR, "DIXONSXP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Advent 4211"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
+-		/*
+-		 * No data is coming from the touchscreen unless KBC
+-		 * is in legacy mode.
+-		 */
+-		/* Panasonic CF-29 */
++		/* Dell Embedded Box PC 3000 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Matsushita"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "CF-29"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Embedded Box PC 3000"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/*
+-		 * HP Pavilion DV4017EA -
+-		 * errors on MUX ports are reported without raising AUXDATA
+-		 * causing "spurious NAK" messages.
+-		 */
++		/* Dell XPS M1530 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Pavilion dv4000 (EA032EA#ABF)"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "XPS M1530"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/*
+-		 * HP Pavilion ZT1000 -
+-		 * like DV4017EA does not raise AUXERR for errors on MUX ports.
+-		 */
++		/* Dell Vostro 1510 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion Notebook PC"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "HP Pavilion Notebook ZT1000"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro1510"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/*
+-		 * HP Pavilion DV4270ca -
+-		 * like DV4017EA does not raise AUXERR for errors on MUX ports.
+-		 */
++		/* Dell Vostro V13 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Pavilion dv4000 (EH476UA#ABL)"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V13"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_NOTIMEOUT)
+ 	},
+ 	{
++		/* Dell Vostro 1320 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Satellite P10"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1320"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
++		/* Dell Vostro 1520 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "EQUIUM A110"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1520"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
++		/* Dell Vostro 1720 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "SATELLITE C850D"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1720"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* Entroware Proteus */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Entroware"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Proteus"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "EL07R4"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS)
+ 	},
++	/*
++	 * Some Fujitsu notebooks are having trouble with touchpads if
++	 * active multiplexing mode is activated. Luckily they don't have
++	 * external PS/2 ports so we can safely disable it.
++	 * ... apparently some Toshibas don't like MUX mode either and
++	 * die horrible death on reboot.
++	 */
+ 	{
++		/* Fujitsu Lifebook P7010/P7010D */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ALIENWARE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Sentia"),
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "P7010"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Sharp Actius MM20 */
++		/* Fujitsu Lifebook P5020D */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "SHARP"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "PC-MM20 Series"),
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook P Series"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Sony Vaio FS-115b */
++		/* Fujitsu Lifebook S2000 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FS115B"),
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S Series"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/*
+-		 * Sony Vaio FZ-240E -
+-		 * reset and GET ID commands issued via KBD port are
+-		 * sometimes being delivered to AUX3.
+-		 */
++		/* Fujitsu Lifebook S6230 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FZ240E"),
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S6230"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/*
+-		 * Most (all?) VAIOs do not have external PS/2 ports nor
+-		 * they implement active multiplexing properly, and
+-		 * MUX discovery usually messes up keyboard/touchpad.
+-		 */
++		/* Fujitsu Lifebook T725 laptop */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+-			DMI_MATCH(DMI_BOARD_NAME, "VAIO"),
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T725"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_NOTIMEOUT)
+ 	},
+ 	{
+-		/* Amoi M636/A737 */
++		/* Fujitsu Lifebook U745 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Amoi Electronics CO.,LTD."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "M636/A737 platform"),
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U745"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Lenovo 3000 n100 */
++		/* Fujitsu T70H */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "076804U"),
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "FMVLT70H"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Lenovo XiaoXin Air 12 */
++		/* Fujitsu A544 laptop */
++		/* https://bugzilla.redhat.com/show_bug.cgi?id=1111138 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "80UN"),
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK A544"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOTIMEOUT)
+ 	},
+ 	{
++		/* Fujitsu AH544 laptop */
++		/* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 1360"),
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK AH544"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOTIMEOUT)
++	},
++	{
++		/* Fujitsu U574 laptop */
++		/* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U574"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOTIMEOUT)
++	},
++	{
++		/* Fujitsu UH554 laptop */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK UH544"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOTIMEOUT)
++	},
++	{
++		/* Fujitsu Lifebook P7010 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "0000000000"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Fujitsu-Siemens Lifebook T3010 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T3010"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Acer Aspire 5710 */
++		/* Fujitsu-Siemens Lifebook E4010 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5710"),
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E4010"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Acer Aspire 7738 */
++		/* Fujitsu-Siemens Amilo Pro 2010 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 7738"),
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "AMILO Pro V2010"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Gericom Bellagio */
++		/* Fujitsu-Siemens Amilo Pro 2030 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Gericom"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "N34AS6"),
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "AMILO PRO V2030"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* IBM 2656 */
++		/* Gigabyte M912 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "IBM"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "2656"),
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "M912"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "01"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* Dell XPS M1530 */
++		/* Gigabyte Spring Peak - defines wrong chassis type */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "XPS M1530"),
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Spring Peak"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* Compal HEL80I */
++		/* Gigabyte T1005 - defines wrong chassis type ("Other") */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "COMPAL"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HEL80I"),
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "T1005"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* Dell Vostro 1510 */
++		/* Gigabyte T1005M/P - defines wrong chassis type ("Other") */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro1510"),
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "T1005M/P"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
++	/*
++	 * Some laptops need keyboard reset before probing for the trackpad to get
++	 * it detected, initialised & finally work.
++	 */
+ 	{
+-		/* Acer Aspire 5536 */
++		/* Gigabyte P35 v2 - Elantech touchpad */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5536"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "0100"),
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "P35V2"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
+ 	},
+-	{
+-		/* Dell Vostro V13 */
++		{
++		/* Aorus branded Gigabyte X3 Plus - Elantech touchpad */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V13"),
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X3"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
+ 	},
+ 	{
+-		/* Newer HP Pavilion dv4 models */
++		/* Gigabyte P34 - Elantech touchpad */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"),
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "P34"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
+ 	},
+ 	{
+-		/* Asus X450LCP */
++		/* Gigabyte P57 - Elantech touchpad */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "X450LCP"),
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "P57"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
+ 	},
+ 	{
+-		/* Avatar AVIU-145A6 */
++		/* Gericom Bellagio */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Intel"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "IC4I"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Gericom"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "N34AS6"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* TUXEDO BU1406 */
++		/* Gigabyte M1022M netbook */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "N24_25BU"),
++			DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co.,Ltd."),
++			DMI_MATCH(DMI_BOARD_NAME, "M1022E"),
++			DMI_MATCH(DMI_BOARD_VERSION, "1.02"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* Lenovo LaVie Z */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo LaVie Z"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv9700"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Rev 1"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+ 		/*
+-		 * Acer Aspire 5738z
+-		 * Touchpad stops working in mux mode when dis- + re-enabled
+-		 * with the touchpad enable/disable toggle hotkey
++		 * HP Pavilion DV4017EA -
++		 * errors on MUX ports are reported without raising AUXDATA
++		 * causing "spurious NAK" messages.
+ 		 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5738"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Pavilion dv4000 (EA032EA#ABF)"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Entroware Proteus */
++		/*
++		 * HP Pavilion ZT1000 -
++		 * like DV4017EA does not raise AUXERR for errors on MUX ports.
++		 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Entroware"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Proteus"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "EL07R4"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion Notebook PC"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "HP Pavilion Notebook ZT1000"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+-	{ }
+-};
+-
+-static const struct dmi_system_id i8042_dmi_forcemux_table[] __initconst = {
+ 	{
+ 		/*
+-		 * Sony Vaio VGN-CS series require MUX or the touch sensor
+-		 * buttons will disturb touchpad operation
++		 * HP Pavilion DV4270ca -
++		 * like DV4017EA does not raise AUXERR for errors on MUX ports.
+ 		 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "VGN-CS"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Pavilion dv4000 (EH476UA#ABL)"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+-	{ }
+-};
+-
+-/*
+- * On some Asus laptops, just running self tests cause problems.
+- */
+-static const struct dmi_system_id i8042_dmi_noselftest_table[] = {
+ 	{
++		/* Newer HP Pavilion dv4 models */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */
+-		},
+-	}, {
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_CHASSIS_TYPE, "31"), /* Convertible Notebook */
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_NOTIMEOUT)
+ 	},
+-	{ }
+-};
+-static const struct dmi_system_id __initconst i8042_dmi_reset_table[] = {
+ 	{
+-		/* MSI Wind U-100 */
++		/* IBM 2656 */
+ 		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "U-100"),
+-			DMI_MATCH(DMI_BOARD_VENDOR, "MICRO-STAR INTERNATIONAL CO., LTD"),
++			DMI_MATCH(DMI_SYS_VENDOR, "IBM"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "2656"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* LG Electronics X110 */
++		/* Avatar AVIU-145A6 */
+ 		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "X110"),
+-			DMI_MATCH(DMI_BOARD_VENDOR, "LG Electronics Inc."),
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "IC4I"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Acer Aspire One 150 */
++		/* Intel MBO Desktop D845PESV */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "AOA150"),
++			DMI_MATCH(DMI_BOARD_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_BOARD_NAME, "D845PESV"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
++		/*
++		 * Intel NUC D54250WYK - does not have i8042 controller but
++		 * declares PS/2 devices in DSDT.
++		 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A114-31"),
++			DMI_MATCH(DMI_BOARD_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_BOARD_NAME, "D54250WYK"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
++		/* Lenovo 3000 n100 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A314-31"),
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "076804U"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
++		/* Lenovo XiaoXin Air 12 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A315-31"),
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "80UN"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
++		/* Lenovo LaVie Z */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire ES1-132"),
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo LaVie Z"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
++		/* Lenovo Ideapad U455 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire ES1-332"),
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "20046"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
++		/* Lenovo ThinkPad L460 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire ES1-432"),
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L460"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
++		/* Lenovo ThinkPad Twist S230u */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate Spin B118-RN"),
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "33474HU"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
+-		/* Advent 4211 */
++		/* LG Electronics X110 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "DIXONSXP"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Advent 4211"),
++			DMI_MATCH(DMI_BOARD_VENDOR, "LG Electronics Inc."),
++			DMI_MATCH(DMI_BOARD_NAME, "X110"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
+ 		/* Medion Akoya Mini E1210 */
+@@ -680,6 +835,7 @@ static const struct dmi_system_id __initconst i8042_dmi_reset_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "MEDION"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "E1210"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
+ 		/* Medion Akoya E1222 */
+@@ -687,48 +843,62 @@ static const struct dmi_system_id __initconst i8042_dmi_reset_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "MEDION"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "E122X"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
+ 	{
+-		/* Mivvy M310 */
++		/* MSI Wind U-100 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "VIOOO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "N10"),
++			DMI_MATCH(DMI_BOARD_VENDOR, "MICRO-STAR INTERNATIONAL CO., LTD"),
++			DMI_MATCH(DMI_BOARD_NAME, "U-100"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/* Dell Vostro 1320 */
++		/*
++		 * No data is coming from the touchscreen unless KBC
++		 * is in legacy mode.
++		 */
++		/* Panasonic CF-29 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1320"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Matsushita"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "CF-29"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Dell Vostro 1520 */
++		/* Medion Akoya E7225 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1520"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Medion"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Akoya E7225"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "1.0"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* Dell Vostro 1720 */
++		/* Microsoft Virtual Machine */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1720"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "VS2005R2"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* Lenovo Ideapad U455 */
++		/* Medion MAM 2070 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "20046"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MAM 2070"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "5a"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* Lenovo ThinkPad L460 */
++		/* TUXEDO BU1406 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L460"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "N24_25BU"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+ 		/* Clevo P650RS, 650RP6, Sager NP8152-S, and others */
+@@ -736,282 +906,318 @@ static const struct dmi_system_id __initconst i8042_dmi_reset_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "P65xRP"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* OQO Model 01 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "OQO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ZEPTO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "00"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* Lenovo ThinkPad Twist S230u */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "33474HU"),
++			DMI_MATCH(DMI_SYS_VENDOR, "PEGATRON CORPORATION"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "C15B"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* Entroware Proteus */
++		/* Acer Aspire 5 A515 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Entroware"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Proteus"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "EL07R4"),
++			DMI_MATCH(DMI_BOARD_VENDOR, "PK"),
++			DMI_MATCH(DMI_BOARD_NAME, "Grumpy_PK"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOPNP)
+ 	},
+-	{ }
+-};
+-
+-#ifdef CONFIG_PNP
+-static const struct dmi_system_id __initconst i8042_dmi_nopnp_table[] = {
+ 	{
+-		/* Intel MBO Desktop D845PESV */
++		/* ULI EV4873 - AUX LOOP does not work properly */
+ 		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "D845PESV"),
+-			DMI_MATCH(DMI_BOARD_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_SYS_VENDOR, "ULI"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "EV4873"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "5a"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+ 		/*
+-		 * Intel NUC D54250WYK - does not have i8042 controller but
+-		 * declares PS/2 devices in DSDT.
++		 * Arima-Rioworks HDAMB -
++		 * AUX LOOP command does not raise AUX IRQ
+ 		 */
+ 		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "D54250WYK"),
+-			DMI_MATCH(DMI_BOARD_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_BOARD_VENDOR, "RIOWORKS"),
++			DMI_MATCH(DMI_BOARD_NAME, "HDAMB"),
++			DMI_MATCH(DMI_BOARD_VERSION, "Rev E"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+ 	{
+-		/* MSI Wind U-100 */
++		/* Sharp Actius MM20 */
+ 		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "U-100"),
+-			DMI_MATCH(DMI_BOARD_VENDOR, "MICRO-STAR INTERNATIONAL CO., LTD"),
++			DMI_MATCH(DMI_SYS_VENDOR, "SHARP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "PC-MM20 Series"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Acer Aspire 5 A515 */
++		/*
++		 * Sony Vaio FZ-240E -
++		 * reset and GET ID commands issued via KBD port are
++		 * sometimes being delivered to AUX3.
++		 */
+ 		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "Grumpy_PK"),
+-			DMI_MATCH(DMI_BOARD_VENDOR, "PK"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FZ240E"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+-	{ }
+-};
+-
+-static const struct dmi_system_id __initconst i8042_dmi_laptop_table[] = {
+ 	{
++		/*
++		 * Most (all?) VAIOs do not have external PS/2 ports nor
++		 * they implement active multiplexing properly, and
++		 * MUX discovery usually messes up keyboard/touchpad.
++		 */
+ 		.matches = {
+-			DMI_MATCH(DMI_CHASSIS_TYPE, "8"), /* Portable */
++			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
++			DMI_MATCH(DMI_BOARD_NAME, "VAIO"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
++		/* Sony Vaio FS-115b */
+ 		.matches = {
+-			DMI_MATCH(DMI_CHASSIS_TYPE, "9"), /* Laptop */
++			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FS115B"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
++		/*
++		 * Sony Vaio VGN-CS series require MUX or the touch sensor
++		 * buttons will disturb touchpad operation
++		 */
+ 		.matches = {
+-			DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */
++			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "VGN-CS"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_FORCEMUX)
+ 	},
+ 	{
+ 		.matches = {
+-			DMI_MATCH(DMI_CHASSIS_TYPE, "14"), /* Sub-Notebook */
++			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Satellite P10"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+-	{ }
+-};
+-#endif
+-
+-static const struct dmi_system_id __initconst i8042_dmi_notimeout_table[] = {
+ 	{
+-		/* Dell Vostro V13 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V13"),
++			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "EQUIUM A110"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
+ 	{
+-		/* Newer HP Pavilion dv4 models */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"),
++			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "SATELLITE C850D"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
++	/*
++	 * A lot of modern Clevo barebones have touchpad and/or keyboard issues
++	 * after suspend fixable with nomux + reset + noloop + nopnp. Luckily,
++	 * none of them have an external PS/2 port so this can safely be set for
++	 * all of them. These two are based on a Clevo design, but have the
++	 * board_name changed.
++	 */
+ 	{
+-		/* Fujitsu A544 laptop */
+-		/* https://bugzilla.redhat.com/show_bug.cgi?id=1111138 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK A544"),
++			DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"),
++			DMI_MATCH(DMI_BOARD_NAME, "AURA1501"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/* Fujitsu AH544 laptop */
+-		/* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK AH544"),
++			DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"),
++			DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/* Fujitsu Lifebook T725 laptop */
++		/* Mivvy M310 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T725"),
++			DMI_MATCH(DMI_SYS_VENDOR, "VIOOO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "N10"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+ 	},
++	/*
++	 * Some laptops need keyboard reset before probing for the trackpad to get
++	 * it detected, initialised & finally work.
++	 */
+ 	{
+-		/* Fujitsu U574 laptop */
+-		/* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
++		/* Schenker XMG C504 - Elantech touchpad */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U574"),
++			DMI_MATCH(DMI_SYS_VENDOR, "XMG"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "C504"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
+ 	},
+ 	{
+-		/* Fujitsu UH554 laptop */
++		/* Blue FB5601 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK UH544"),
++			DMI_MATCH(DMI_SYS_VENDOR, "blue"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "FB5601"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "M606"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
+-	{ }
+-};
+-
+-/*
+- * Some Wistron based laptops need us to explicitly enable the 'Dritek
+- * keyboard extension' to make their extra keys start generating scancodes.
+- * Originally, this was just confined to older laptops, but a few Acer laptops
+- * have turned up in 2007 that also need this again.
+- */
+-static const struct dmi_system_id __initconst i8042_dmi_dritek_table[] = {
++	/*
++	 * A lot of modern Clevo barebones have touchpad and/or keyboard issues
++	 * after suspend fixable with nomux + reset + noloop + nopnp. Luckily,
++	 * none of them have an external PS/2 port so this can safely be set for
++	 * all of them.
++	 * Clevo barebones come with board_vendor and/or system_vendor set to
++	 * either the very generic string "Notebook" and/or a different value
++	 * for each individual reseller. The only somewhat universal way to
++	 * identify them is by board_name.
++	 */
+ 	{
+-		/* Acer Aspire 5100 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5100"),
++			DMI_MATCH(DMI_BOARD_NAME, "LAPQC71A"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/* Acer Aspire 5610 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5610"),
++			DMI_MATCH(DMI_BOARD_NAME, "LAPQC71B"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/* Acer Aspire 5630 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5630"),
++			DMI_MATCH(DMI_BOARD_NAME, "N140CU"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/* Acer Aspire 5650 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5650"),
++			DMI_MATCH(DMI_BOARD_NAME, "N141CU"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/* Acer Aspire 5680 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5680"),
++			DMI_MATCH(DMI_BOARD_NAME, "NH5xAx"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/* Acer Aspire 5720 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5720"),
++			DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
++	/*
++	 * At least one modern Clevo barebone has the touchpad connected both
++	 * via PS/2 and i2c interface. This causes a race condition between the
++	 * psmouse and i2c-hid driver. Since the full capability of the touchpad
++	 * is available via the i2c interface and the device has no external
++	 * PS/2 port, it is safe to just ignore all ps2 mouses here to avoid
++	 * this issue. The known affected device is the
++	 * TUXEDO InfinityBook S17 Gen6 / Clevo NS70MU which comes with one of
++	 * the two different dmi strings below. NS50MU is not a typo!
++	 */
+ 	{
+-		/* Acer Aspire 9110 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 9110"),
++			DMI_MATCH(DMI_BOARD_NAME, "NS50MU"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX |
++					SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP |
++					SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/* Acer TravelMate 660 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 660"),
++			DMI_MATCH(DMI_BOARD_NAME, "NS50_70MU"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX |
++					SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP |
++					SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/* Acer TravelMate 2490 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 2490"),
++			DMI_MATCH(DMI_BOARD_NAME, "NJ50_70CU"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/* Acer TravelMate 4280 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 4280"),
++			DMI_MATCH(DMI_BOARD_NAME, "PB50_70DFx,DDx"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
+-	{ }
+-};
+-
+-/*
+- * Some laptops need keyboard reset before probing for the trackpad to get
+- * it detected, initialised & finally work.
+- */
+-static const struct dmi_system_id __initconst i8042_dmi_kbdreset_table[] = {
+ 	{
+-		/* Gigabyte P35 v2 - Elantech touchpad */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "P35V2"),
++			DMI_MATCH(DMI_BOARD_NAME, "PCX0DX"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
+-		{
+-		/* Aorus branded Gigabyte X3 Plus - Elantech touchpad */
++	{
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "X3"),
++			DMI_MATCH(DMI_BOARD_NAME, "X170SM"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/* Gigabyte P34 - Elantech touchpad */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "P34"),
++			DMI_MATCH(DMI_BOARD_NAME, "X170KM-G"),
+ 		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
++	{ }
++};
++
++#ifdef CONFIG_PNP
++static const struct dmi_system_id i8042_dmi_laptop_table[] __initconst = {
+ 	{
+-		/* Gigabyte P57 - Elantech touchpad */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "P57"),
++			DMI_MATCH(DMI_CHASSIS_TYPE, "8"), /* Portable */
+ 		},
+ 	},
+ 	{
+-		/* Schenker XMG C504 - Elantech touchpad */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "XMG"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "C504"),
++			DMI_MATCH(DMI_CHASSIS_TYPE, "9"), /* Laptop */
+ 		},
+ 	},
+-	{ }
+-};
+-
+-static const struct dmi_system_id i8042_dmi_probe_defer_table[] __initconst = {
+ 	{
+-		/* ASUS ZenBook UX425UA */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX425UA"),
++			DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */
+ 		},
+ 	},
+ 	{
+-		/* ASUS ZenBook UM325UA */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),
++			DMI_MATCH(DMI_CHASSIS_TYPE, "14"), /* Sub-Notebook */
+ 		},
+ 	},
+ 	{ }
+ };
++#endif
+ 
+ #endif /* CONFIG_X86 */
+ 
+@@ -1167,11 +1373,6 @@ static int __init i8042_pnp_init(void)
+ 	bool pnp_data_busted = false;
+ 	int err;
+ 
+-#ifdef CONFIG_X86
+-	if (dmi_check_system(i8042_dmi_nopnp_table))
+-		i8042_nopnp = true;
+-#endif
+-
+ 	if (i8042_nopnp) {
+ 		pr_info("PNP detection disabled\n");
+ 		return 0;
+@@ -1275,6 +1476,59 @@ static inline int i8042_pnp_init(void) { return 0; }
+ static inline void i8042_pnp_exit(void) { }
+ #endif /* CONFIG_PNP */
+ 
++
++#ifdef CONFIG_X86
++static void __init i8042_check_quirks(void)
++{
++	const struct dmi_system_id *device_quirk_info;
++	uintptr_t quirks;
++
++	device_quirk_info = dmi_first_match(i8042_dmi_quirk_table);
++	if (!device_quirk_info)
++		return;
++
++	quirks = (uintptr_t)device_quirk_info->driver_data;
++
++	if (quirks & SERIO_QUIRK_NOKBD)
++		i8042_nokbd = true;
++	if (quirks & SERIO_QUIRK_NOAUX)
++		i8042_noaux = true;
++	if (quirks & SERIO_QUIRK_NOMUX)
++		i8042_nomux = true;
++	if (quirks & SERIO_QUIRK_FORCEMUX)
++		i8042_nomux = false;
++	if (quirks & SERIO_QUIRK_UNLOCK)
++		i8042_unlock = true;
++	if (quirks & SERIO_QUIRK_PROBE_DEFER)
++		i8042_probe_defer = true;
++	/* Honor module parameter when value is not default */
++	if (i8042_reset == I8042_RESET_DEFAULT) {
++		if (quirks & SERIO_QUIRK_RESET_ALWAYS)
++			i8042_reset = I8042_RESET_ALWAYS;
++		if (quirks & SERIO_QUIRK_RESET_NEVER)
++			i8042_reset = I8042_RESET_NEVER;
++	}
++	if (quirks & SERIO_QUIRK_DIECT)
++		i8042_direct = true;
++	if (quirks & SERIO_QUIRK_DUMBKBD)
++		i8042_dumbkbd = true;
++	if (quirks & SERIO_QUIRK_NOLOOP)
++		i8042_noloop = true;
++	if (quirks & SERIO_QUIRK_NOTIMEOUT)
++		i8042_notimeout = true;
++	if (quirks & SERIO_QUIRK_KBDRESET)
++		i8042_kbdreset = true;
++	if (quirks & SERIO_QUIRK_DRITEK)
++		i8042_dritek = true;
++#ifdef CONFIG_PNP
++	if (quirks & SERIO_QUIRK_NOPNP)
++		i8042_nopnp = true;
++#endif
++}
++#else
++static inline void i8042_check_quirks(void) {}
++#endif
++
+ static int __init i8042_platform_init(void)
+ {
+ 	int retval;
+@@ -1297,45 +1551,17 @@ static int __init i8042_platform_init(void)
+ 	i8042_kbd_irq = I8042_MAP_IRQ(1);
+ 	i8042_aux_irq = I8042_MAP_IRQ(12);
+ 
+-	retval = i8042_pnp_init();
+-	if (retval)
+-		return retval;
+-
+ #if defined(__ia64__)
+-        i8042_reset = I8042_RESET_ALWAYS;
++	i8042_reset = I8042_RESET_ALWAYS;
+ #endif
+ 
+-#ifdef CONFIG_X86
+-	/* Honor module parameter when value is not default */
+-	if (i8042_reset == I8042_RESET_DEFAULT) {
+-		if (dmi_check_system(i8042_dmi_reset_table))
+-			i8042_reset = I8042_RESET_ALWAYS;
+-
+-		if (dmi_check_system(i8042_dmi_noselftest_table))
+-			i8042_reset = I8042_RESET_NEVER;
+-	}
+-
+-	if (dmi_check_system(i8042_dmi_noloop_table))
+-		i8042_noloop = true;
+-
+-	if (dmi_check_system(i8042_dmi_nomux_table))
+-		i8042_nomux = true;
+-
+-	if (dmi_check_system(i8042_dmi_forcemux_table))
+-		i8042_nomux = false;
+-
+-	if (dmi_check_system(i8042_dmi_notimeout_table))
+-		i8042_notimeout = true;
+-
+-	if (dmi_check_system(i8042_dmi_dritek_table))
+-		i8042_dritek = true;
+-
+-	if (dmi_check_system(i8042_dmi_kbdreset_table))
+-		i8042_kbdreset = true;
++	i8042_check_quirks();
+ 
+-	if (dmi_check_system(i8042_dmi_probe_defer_table))
+-		i8042_probe_defer = true;
++	retval = i8042_pnp_init();
++	if (retval)
++		return retval;
+ 
++#ifdef CONFIG_X86
+ 	/*
+ 	 * A20 was already enabled during early kernel init. But some buggy
+ 	 * BIOSes (in MSI Laptops) require A20 to be enabled using 8042 to
+diff --git a/drivers/net/bonding/bond_debugfs.c b/drivers/net/bonding/bond_debugfs.c
+index f3f86ef68ae0c..8b6cf2bf9025a 100644
+--- a/drivers/net/bonding/bond_debugfs.c
++++ b/drivers/net/bonding/bond_debugfs.c
+@@ -76,7 +76,7 @@ void bond_debug_reregister(struct bonding *bond)
+ 
+ 	d = debugfs_rename(bonding_debug_root, bond->debug_dir,
+ 			   bonding_debug_root, bond->dev->name);
+-	if (d) {
++	if (!IS_ERR(d)) {
+ 		bond->debug_dir = d;
+ 	} else {
+ 		netdev_warn(bond->dev, "failed to reregister, so just unregister old one\n");
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index f193709c8efc6..c1465096239b6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -4853,7 +4853,7 @@ static int __init ice_module_init(void)
+ 	pr_info("%s\n", ice_driver_string);
+ 	pr_info("%s\n", ice_copyright);
+ 
+-	ice_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, KBUILD_MODNAME);
++	ice_wq = alloc_workqueue("%s", 0, 0, KBUILD_MODNAME);
+ 	if (!ice_wq) {
+ 		pr_err("Failed to create workqueue\n");
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index 4ab46eee3d938..ef53f7665b58c 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -134,10 +134,12 @@ static int igc_ptp_feature_enable_i225(struct ptp_clock_info *ptp,
+  *
+  * We need to convert the system time value stored in the RX/TXSTMP registers
+  * into a hwtstamp which can be used by the upper level timestamping functions.
++ *
++ * Returns 0 on success.
+  **/
+-static void igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,
+-				       struct skb_shared_hwtstamps *hwtstamps,
+-				       u64 systim)
++static int igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,
++				      struct skb_shared_hwtstamps *hwtstamps,
++				      u64 systim)
+ {
+ 	switch (adapter->hw.mac.type) {
+ 	case igc_i225:
+@@ -147,8 +149,9 @@ static void igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,
+ 						systim & 0xFFFFFFFF);
+ 		break;
+ 	default:
+-		break;
++		return -EINVAL;
+ 	}
++	return 0;
+ }
+ 
+ /**
+@@ -372,7 +375,8 @@ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
+ 
+ 	regval = rd32(IGC_TXSTMPL);
+ 	regval |= (u64)rd32(IGC_TXSTMPH) << 32;
+-	igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval);
++	if (igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval))
++		return;
+ 
+ 	switch (adapter->link_speed) {
+ 	case SPEED_10:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index f800e1ca5ba62..40d7bfca37499 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -64,6 +64,7 @@ static int mlx5_query_mtrc_caps(struct mlx5_fw_tracer *tracer)
+ 			MLX5_GET(mtrc_cap, out, num_string_trace);
+ 	tracer->str_db.num_string_db = MLX5_GET(mtrc_cap, out, num_string_db);
+ 	tracer->owner = !!MLX5_GET(mtrc_cap, out, trace_owner);
++	tracer->str_db.loaded = false;
+ 
+ 	for (i = 0; i < tracer->str_db.num_string_db; i++) {
+ 		mtrc_cap_sp = MLX5_ADDR_OF(mtrc_cap, out, string_db_param[i]);
+@@ -756,6 +757,7 @@ static int mlx5_fw_tracer_set_mtrc_conf(struct mlx5_fw_tracer *tracer)
+ 	if (err)
+ 		mlx5_core_warn(dev, "FWTracer: Failed to set tracer configurations %d\n", err);
+ 
++	tracer->buff.consumer_index = 0;
+ 	return err;
+ }
+ 
+@@ -820,7 +822,6 @@ static void mlx5_fw_tracer_ownership_change(struct work_struct *work)
+ 	mlx5_core_dbg(tracer->dev, "FWTracer: ownership changed, current=(%d)\n", tracer->owner);
+ 	if (tracer->owner) {
+ 		tracer->owner = false;
+-		tracer->buff.consumer_index = 0;
+ 		return;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
+index cac8f085b16d7..2cf7f0fc170b8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
+@@ -166,16 +166,16 @@ static inline int mlx5_ptys_rate_enum_to_int(enum mlx5_ptys_rate rate)
+ 	}
+ }
+ 
+-static int mlx5i_get_speed_settings(u16 ib_link_width_oper, u16 ib_proto_oper)
++static u32 mlx5i_get_speed_settings(u16 ib_link_width_oper, u16 ib_proto_oper)
+ {
+ 	int rate, width;
+ 
+ 	rate = mlx5_ptys_rate_enum_to_int(ib_proto_oper);
+ 	if (rate < 0)
+-		return -EINVAL;
++		return SPEED_UNKNOWN;
+ 	width = mlx5_ptys_width_enum_to_int(ib_link_width_oper);
+ 	if (width < 0)
+-		return -EINVAL;
++		return SPEED_UNKNOWN;
+ 
+ 	return rate * width;
+ }
+@@ -198,16 +198,13 @@ static int mlx5i_get_link_ksettings(struct net_device *netdev,
+ 	ethtool_link_ksettings_zero_link_mode(link_ksettings, advertising);
+ 
+ 	speed = mlx5i_get_speed_settings(ib_link_width_oper, ib_proto_oper);
+-	if (speed < 0)
+-		return -EINVAL;
++	link_ksettings->base.speed = speed;
++	link_ksettings->base.duplex = speed == SPEED_UNKNOWN ? DUPLEX_UNKNOWN : DUPLEX_FULL;
+ 
+-	link_ksettings->base.duplex = DUPLEX_FULL;
+ 	link_ksettings->base.port = PORT_OTHER;
+ 
+ 	link_ksettings->base.autoneg = AUTONEG_DISABLE;
+ 
+-	link_ksettings->base.speed = speed;
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
+index b221b83ec5a6f..e56e540ce05c3 100644
+--- a/drivers/net/ethernet/mscc/ocelot_flower.c
++++ b/drivers/net/ethernet/mscc/ocelot_flower.c
+@@ -468,6 +468,18 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
+ 		flow_rule_match_control(rule, &match);
+ 	}
+ 
++	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
++		struct flow_match_vlan match;
++
++		flow_rule_match_vlan(rule, &match);
++		filter->key_type = OCELOT_VCAP_KEY_ANY;
++		filter->vlan.vid.value = match.key->vlan_id;
++		filter->vlan.vid.mask = match.mask->vlan_id;
++		filter->vlan.pcp.value[0] = match.key->vlan_priority;
++		filter->vlan.pcp.mask[0] = match.mask->vlan_priority;
++		match_protocol = false;
++	}
++
+ 	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+ 		struct flow_match_eth_addrs match;
+ 
+@@ -600,18 +612,6 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
+ 		match_protocol = false;
+ 	}
+ 
+-	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
+-		struct flow_match_vlan match;
+-
+-		flow_rule_match_vlan(rule, &match);
+-		filter->key_type = OCELOT_VCAP_KEY_ANY;
+-		filter->vlan.vid.value = match.key->vlan_id;
+-		filter->vlan.vid.mask = match.mask->vlan_id;
+-		filter->vlan.pcp.value[0] = match.key->vlan_priority;
+-		filter->vlan.pcp.mask[0] = match.mask->vlan_priority;
+-		match_protocol = false;
+-	}
+-
+ finished_key_parsing:
+ 	if (match_protocol && proto != ETH_P_ALL) {
+ 		if (filter->block_id == VCAP_ES0) {
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index cb12d0171517e..fcd4213c99b83 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -257,6 +257,7 @@ static int ionic_qcq_enable(struct ionic_qcq *qcq)
+ 			.oper = IONIC_Q_ENABLE,
+ 		},
+ 	};
++	int ret;
+ 
+ 	idev = &lif->ionic->idev;
+ 	dev = lif->ionic->dev;
+@@ -264,16 +265,24 @@ static int ionic_qcq_enable(struct ionic_qcq *qcq)
+ 	dev_dbg(dev, "q_enable.index %d q_enable.qtype %d\n",
+ 		ctx.cmd.q_control.index, ctx.cmd.q_control.type);
+ 
++	if (qcq->flags & IONIC_QCQ_F_INTR)
++		ionic_intr_clean(idev->intr_ctrl, qcq->intr.index);
++
++	ret = ionic_adminq_post_wait(lif, &ctx);
++	if (ret)
++		return ret;
++
++	if (qcq->napi.poll)
++		napi_enable(&qcq->napi);
++
+ 	if (qcq->flags & IONIC_QCQ_F_INTR) {
+ 		irq_set_affinity_hint(qcq->intr.vector,
+ 				      &qcq->intr.affinity_mask);
+-		napi_enable(&qcq->napi);
+-		ionic_intr_clean(idev->intr_ctrl, qcq->intr.index);
+ 		ionic_intr_mask(idev->intr_ctrl, qcq->intr.index,
+ 				IONIC_INTR_MASK_CLEAR);
+ 	}
+ 
+-	return ionic_adminq_post_wait(lif, &ctx);
++	return 0;
+ }
+ 
+ static int ionic_qcq_disable(struct ionic_qcq *qcq, bool send_to_hw)
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c
+index d210632676d32..a632de208a7dc 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_fp.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c
+@@ -1456,7 +1456,12 @@ int qede_poll(struct napi_struct *napi, int budget)
+ 	rx_work_done = (likely(fp->type & QEDE_FASTPATH_RX) &&
+ 			qede_has_rx_work(fp->rxq)) ?
+ 			qede_rx_int(fp, budget) : 0;
+-	if (rx_work_done < budget) {
++
++	if (fp->xdp_xmit & QEDE_XDP_REDIRECT)
++		xdp_do_flush();
++
++	/* Handle case where we are called by netpoll with a budget of 0 */
++	if (rx_work_done < budget || !budget) {
+ 		if (!qede_poll_is_more_work(fp)) {
+ 			napi_complete_done(napi, rx_work_done);
+ 
+@@ -1474,9 +1479,6 @@ int qede_poll(struct napi_struct *napi, int budget)
+ 		qede_update_tx_producer(fp->xdp_tx);
+ 	}
+ 
+-	if (fp->xdp_xmit & QEDE_XDP_REDIRECT)
+-		xdp_do_flush_map();
+-
+ 	return rx_work_done;
+ }
+ 
+diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c
+index 7183080763417..29c8d2c990044 100644
+--- a/drivers/net/ethernet/sfc/efx.c
++++ b/drivers/net/ethernet/sfc/efx.c
+@@ -1047,8 +1047,11 @@ static int efx_pci_probe_post_io(struct efx_nic *efx)
+ 	/* Determine netdevice features */
+ 	net_dev->features |= (efx->type->offload_features | NETIF_F_SG |
+ 			      NETIF_F_TSO | NETIF_F_RXCSUM | NETIF_F_RXALL);
+-	if (efx->type->offload_features & (NETIF_F_IPV6_CSUM | NETIF_F_HW_CSUM))
++	if (efx->type->offload_features & (NETIF_F_IPV6_CSUM | NETIF_F_HW_CSUM)) {
+ 		net_dev->features |= NETIF_F_TSO6;
++		if (efx_has_cap(efx, TX_TSO_V2_ENCAP))
++			net_dev->hw_enc_features |= NETIF_F_TSO6;
++	}
+ 	/* Check whether device supports TSO */
+ 	if (!efx->type->tso_versions || !efx->type->tso_versions(efx))
+ 		net_dev->features &= ~NETIF_F_ALL_TSO;
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index db651649e0b80..81412999445d8 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -247,7 +247,8 @@ static int dp83822_config_intr(struct phy_device *phydev)
+ 				DP83822_ENERGY_DET_INT_EN |
+ 				DP83822_LINK_QUAL_INT_EN);
+ 
+-		if (!dp83822->fx_enabled)
++		/* Private data pointer is NULL on DP83825/26 */
++		if (!dp83822 || !dp83822->fx_enabled)
+ 			misr_status |= DP83822_ANEG_COMPLETE_INT_EN |
+ 				       DP83822_DUP_MODE_CHANGE_INT_EN |
+ 				       DP83822_SPEED_CHANGED_INT_EN;
+@@ -267,7 +268,8 @@ static int dp83822_config_intr(struct phy_device *phydev)
+ 				DP83822_PAGE_RX_INT_EN |
+ 				DP83822_EEE_ERROR_CHANGE_INT_EN);
+ 
+-		if (!dp83822->fx_enabled)
++		/* Private data pointer is NULL on DP83825/26 */
++		if (!dp83822 || !dp83822->fx_enabled)
+ 			misr_status |= DP83822_ANEG_ERR_INT_EN |
+ 				       DP83822_WOL_PKT_INT_EN;
+ 
+diff --git a/drivers/net/phy/meson-gxl.c b/drivers/net/phy/meson-gxl.c
+index e8f2ca625837f..39151ec6f65e2 100644
+--- a/drivers/net/phy/meson-gxl.c
++++ b/drivers/net/phy/meson-gxl.c
+@@ -235,6 +235,8 @@ static struct phy_driver meson_gxl_phy[] = {
+ 		.config_intr	= meson_gxl_config_intr,
+ 		.suspend        = genphy_suspend,
+ 		.resume         = genphy_resume,
++		.read_mmd	= genphy_read_mmd_unsupported,
++		.write_mmd	= genphy_write_mmd_unsupported,
+ 	}, {
+ 		PHY_ID_MATCH_EXACT(0x01803301),
+ 		.name		= "Meson G12A Internal PHY",
+@@ -245,6 +247,8 @@ static struct phy_driver meson_gxl_phy[] = {
+ 		.config_intr	= meson_gxl_config_intr,
+ 		.suspend        = genphy_suspend,
+ 		.resume         = genphy_resume,
++		.read_mmd	= genphy_read_mmd_unsupported,
++		.write_mmd	= genphy_write_mmd_unsupported,
+ 	},
+ };
+ 
+diff --git a/drivers/net/usb/plusb.c b/drivers/net/usb/plusb.c
+index 17c9c63b8eebb..ce7862dac2b75 100644
+--- a/drivers/net/usb/plusb.c
++++ b/drivers/net/usb/plusb.c
+@@ -57,9 +57,7 @@
+ static inline int
+ pl_vendor_req(struct usbnet *dev, u8 req, u8 val, u8 index)
+ {
+-	return usbnet_read_cmd(dev, req,
+-				USB_DIR_IN | USB_TYPE_VENDOR |
+-				USB_RECIP_DEVICE,
++	return usbnet_write_cmd(dev, req, USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				val, index, NULL, 0);
+ }
+ 
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index c942cd6a2c65e..d533211161366 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -1525,13 +1525,13 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
+ 
+ 	received = virtnet_receive(rq, budget, &xdp_xmit);
+ 
++	if (xdp_xmit & VIRTIO_XDP_REDIR)
++		xdp_do_flush();
++
+ 	/* Out of packets? */
+ 	if (received < budget)
+ 		virtqueue_napi_complete(napi, rq->vq, received);
+ 
+-	if (xdp_xmit & VIRTIO_XDP_REDIR)
+-		xdp_do_flush();
+-
+ 	if (xdp_xmit & VIRTIO_XDP_TX) {
+ 		sq = virtnet_xdp_get_sq(vi);
+ 		if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq)) {
+@@ -1928,8 +1928,8 @@ static int virtnet_close(struct net_device *dev)
+ 	cancel_delayed_work_sync(&vi->refill);
+ 
+ 	for (i = 0; i < vi->max_queue_pairs; i++) {
+-		xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
+ 		napi_disable(&vi->rq[i].napi);
++		xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
+ 		virtnet_napi_tx_disable(&vi->sq[i].napi);
+ 	}
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index c2b6e5c966d04..0a069bc7f1567 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -90,6 +90,9 @@
+ #define BRCMF_ASSOC_PARAMS_FIXED_SIZE \
+ 	(sizeof(struct brcmf_assoc_params_le) - sizeof(u16))
+ 
++#define BRCMF_MAX_CHANSPEC_LIST \
++	(BRCMF_DCMD_MEDLEN / sizeof(__le32) - 1)
++
+ static bool check_vif_up(struct brcmf_cfg80211_vif *vif)
+ {
+ 	if (!test_bit(BRCMF_VIF_STATUS_READY, &vif->sme_state)) {
+@@ -6459,6 +6462,13 @@ static int brcmf_construct_chaninfo(struct brcmf_cfg80211_info *cfg,
+ 			band->channels[i].flags = IEEE80211_CHAN_DISABLED;
+ 
+ 	total = le32_to_cpu(list->count);
++	if (total > BRCMF_MAX_CHANSPEC_LIST) {
++		bphy_err(drvr, "Invalid count of channel Spec. (%u)\n",
++			 total);
++		err = -EINVAL;
++		goto fail_pbuf;
++	}
++
+ 	for (i = 0; i < total; i++) {
+ 		ch.chspec = (u16)le32_to_cpu(list->element[i]);
+ 		cfg->d11inf.decchspec(&ch);
+@@ -6604,6 +6614,13 @@ static int brcmf_enable_bw40_2g(struct brcmf_cfg80211_info *cfg)
+ 		band = cfg_to_wiphy(cfg)->bands[NL80211_BAND_2GHZ];
+ 		list = (struct brcmf_chanspec_list *)pbuf;
+ 		num_chan = le32_to_cpu(list->count);
++		if (num_chan > BRCMF_MAX_CHANSPEC_LIST) {
++			bphy_err(drvr, "Invalid count of channel Spec. (%u)\n",
++				 num_chan);
++			kfree(pbuf);
++			return -EINVAL;
++		}
++
+ 		for (i = 0; i < num_chan; i++) {
+ 			ch.chspec = (u16)le32_to_cpu(list->element[i]);
+ 			cfg->d11inf.decchspec(&ch);
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 21d89d80d0838..48fbe49e3772b 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -625,9 +625,11 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 		return ERR_PTR(rval);
+ 	}
+ 
++	nvmem->id = rval;
++
+ 	if (config->wp_gpio)
+ 		nvmem->wp_gpio = config->wp_gpio;
+-	else
++	else if (!config->ignore_wp)
+ 		nvmem->wp_gpio = gpiod_get_optional(config->dev, "wp",
+ 						    GPIOD_OUT_HIGH);
+ 	if (IS_ERR(nvmem->wp_gpio)) {
+@@ -640,7 +642,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 	kref_init(&nvmem->refcnt);
+ 	INIT_LIST_HEAD(&nvmem->cells);
+ 
+-	nvmem->id = rval;
+ 	nvmem->owner = config->owner;
+ 	if (!nvmem->owner && config->dev->driver)
+ 		nvmem->owner = config->dev->driver->owner;
+@@ -694,7 +695,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 	if (config->cells) {
+ 		rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
+ 		if (rval)
+-			goto err_teardown_compat;
++			goto err_remove_cells;
+ 	}
+ 
+ 	rval = nvmem_add_cells_from_table(nvmem);
+@@ -711,7 +712,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 
+ err_remove_cells:
+ 	nvmem_device_remove_all_cells(nvmem);
+-err_teardown_compat:
+ 	if (config->compat)
+ 		nvmem_sysfs_remove_compat(nvmem, config);
+ err_device_del:
+diff --git a/drivers/nvmem/qcom-spmi-sdam.c b/drivers/nvmem/qcom-spmi-sdam.c
+index f6e9f96933ca2..1549bfcc4c2d9 100644
+--- a/drivers/nvmem/qcom-spmi-sdam.c
++++ b/drivers/nvmem/qcom-spmi-sdam.c
+@@ -166,6 +166,7 @@ static const struct of_device_id sdam_match_table[] = {
+ 	{ .compatible = "qcom,spmi-sdam" },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, sdam_match_table);
+ 
+ static struct platform_driver sdam_driver = {
+ 	.driver = {
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index 73ddf2540f3fe..f686fb5011b87 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -990,8 +990,19 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
+ 	}
+ 
+ 	of_dma_range_parser_init(&parser, node);
+-	for_each_of_range(&parser, &range)
++	for_each_of_range(&parser, &range) {
++		if (range.cpu_addr == OF_BAD_ADDR) {
++			pr_err("translation of DMA address(%llx) to CPU address failed node(%pOF)\n",
++			       range.bus_addr, node);
++			continue;
++		}
+ 		num_ranges++;
++	}
++
++	if (!num_ranges) {
++		ret = -EINVAL;
++		goto out;
++	}
+ 
+ 	r = kcalloc(num_ranges + 1, sizeof(*r), GFP_KERNEL);
+ 	if (!r) {
+@@ -1000,18 +1011,16 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
+ 	}
+ 
+ 	/*
+-	 * Record all info in the generic DMA ranges array for struct device.
++	 * Record all info in the generic DMA ranges array for struct device,
++	 * returning an error if we don't find any parsable ranges.
+ 	 */
+ 	*map = r;
+ 	of_dma_range_parser_init(&parser, node);
+ 	for_each_of_range(&parser, &range) {
+ 		pr_debug("dma_addr(%llx) cpu_addr(%llx) size(%llx)\n",
+ 			 range.bus_addr, range.cpu_addr, range.size);
+-		if (range.cpu_addr == OF_BAD_ADDR) {
+-			pr_err("translation of DMA address(%llx) to CPU address failed node(%pOF)\n",
+-			       range.bus_addr, node);
++		if (range.cpu_addr == OF_BAD_ADDR)
+ 			continue;
+-		}
+ 		r->cpu_start = range.cpu_addr;
+ 		r->dma_start = range.bus_addr;
+ 		r->size = range.size;
+diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed.c b/drivers/pinctrl/aspeed/pinctrl-aspeed.c
+index e792318c38946..d26d859546275 100644
+--- a/drivers/pinctrl/aspeed/pinctrl-aspeed.c
++++ b/drivers/pinctrl/aspeed/pinctrl-aspeed.c
+@@ -121,7 +121,7 @@ static int aspeed_disable_sig(struct aspeed_pinmux_data *ctx,
+ 	int ret = 0;
+ 
+ 	if (!exprs)
+-		return true;
++		return -EINVAL;
+ 
+ 	while (*exprs && !ret) {
+ 		ret = aspeed_sig_expr_disable(ctx, *exprs);
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index db9087c129c0d..2ef9e2d8fd9c5 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -1606,6 +1606,12 @@ const struct intel_pinctrl_soc_data *intel_pinctrl_get_soc_data(struct platform_
+ EXPORT_SYMBOL_GPL(intel_pinctrl_get_soc_data);
+ 
+ #ifdef CONFIG_PM_SLEEP
++static bool __intel_gpio_is_direct_irq(u32 value)
++{
++	return (value & PADCFG0_GPIROUTIOXAPIC) && (value & PADCFG0_GPIOTXDIS) &&
++	       (__intel_gpio_get_gpio_mode(value) == PADCFG0_PMODE_GPIO);
++}
++
+ static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int pin)
+ {
+ 	const struct pin_desc *pd = pin_desc_get(pctrl->pctldev, pin);
+@@ -1639,8 +1645,7 @@ static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int
+ 	 * See https://bugzilla.kernel.org/show_bug.cgi?id=214749.
+ 	 */
+ 	value = readl(intel_get_padcfg(pctrl, pin, PADCFG0));
+-	if ((value & PADCFG0_GPIROUTIOXAPIC) && (value & PADCFG0_GPIOTXDIS) &&
+-	    (__intel_gpio_get_gpio_mode(value) == PADCFG0_PMODE_GPIO))
++	if (__intel_gpio_is_direct_irq(value))
+ 		return true;
+ 
+ 	return false;
+@@ -1770,7 +1775,12 @@ int intel_pinctrl_resume_noirq(struct device *dev)
+ 	for (i = 0; i < pctrl->soc->npins; i++) {
+ 		const struct pinctrl_pin_desc *desc = &pctrl->soc->pins[i];
+ 
+-		if (!intel_pinctrl_should_save(pctrl, desc->number))
++		if (!(intel_pinctrl_should_save(pctrl, desc->number) ||
++		      /*
++		       * If the firmware mangled the register contents too much,
++		       * check the saved value for the Direct IRQ mode.
++		       */
++		      __intel_gpio_is_direct_irq(pads[i].padcfg0)))
+ 			continue;
+ 
+ 		intel_restore_padcfg(pctrl, desc->number, PADCFG0, pads[i].padcfg0);
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index d139cd9e6d130..22e471933b373 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -372,6 +372,8 @@ static int pcs_set_mux(struct pinctrl_dev *pctldev, unsigned fselector,
+ 	if (!pcs->fmask)
+ 		return 0;
+ 	function = pinmux_generic_get_function(pctldev, fselector);
++	if (!function)
++		return -EINVAL;
+ 	func = function->data;
+ 	if (!func)
+ 		return -EINVAL;
+diff --git a/drivers/platform/x86/dell-wmi.c b/drivers/platform/x86/dell-wmi.c
+index bbdb3e8608927..6ef327a80ccf4 100644
+--- a/drivers/platform/x86/dell-wmi.c
++++ b/drivers/platform/x86/dell-wmi.c
+@@ -259,6 +259,9 @@ static const struct key_entry dell_wmi_keymap_type_0010[] = {
+ 	{ KE_KEY,    0x57, { KEY_BRIGHTNESSDOWN } },
+ 	{ KE_KEY,    0x58, { KEY_BRIGHTNESSUP } },
+ 
++	/*Speaker Mute*/
++	{ KE_KEY, 0x109, { KEY_MUTE} },
++
+ 	/* Mic mute */
+ 	{ KE_KEY, 0x150, { KEY_MICMUTE } },
+ 
+diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
+index 6485c1aa9e741..252d7881f99c2 100644
+--- a/drivers/scsi/iscsi_tcp.c
++++ b/drivers/scsi/iscsi_tcp.c
+@@ -802,7 +802,7 @@ static int iscsi_sw_tcp_host_get_param(struct Scsi_Host *shost,
+ 				       enum iscsi_host_param param, char *buf)
+ {
+ 	struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(shost);
+-	struct iscsi_session *session = tcp_sw_host->session;
++	struct iscsi_session *session;
+ 	struct iscsi_conn *conn;
+ 	struct iscsi_tcp_conn *tcp_conn;
+ 	struct iscsi_sw_tcp_conn *tcp_sw_conn;
+@@ -812,6 +812,7 @@ static int iscsi_sw_tcp_host_get_param(struct Scsi_Host *shost,
+ 
+ 	switch (param) {
+ 	case ISCSI_HOST_PARAM_IPADDRESS:
++		session = tcp_sw_host->session;
+ 		if (!session)
+ 			return -ENOTCONN;
+ 
+@@ -906,12 +907,14 @@ iscsi_sw_tcp_session_create(struct iscsi_endpoint *ep, uint16_t cmds_max,
+ 	if (!cls_session)
+ 		goto remove_host;
+ 	session = cls_session->dd_data;
+-	tcp_sw_host = iscsi_host_priv(shost);
+-	tcp_sw_host->session = session;
+ 
+ 	shost->can_queue = session->scsi_cmds_max;
+ 	if (iscsi_tcp_r2tpool_alloc(session))
+ 		goto remove_session;
++
++	/* We are now fully setup so expose the session to sysfs. */
++	tcp_sw_host = iscsi_host_priv(shost);
++	tcp_sw_host->session = session;
+ 	return cls_session;
+ 
+ remove_session:
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 8e474b1452495..6f7c4d41c51de 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -1129,8 +1129,7 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
+ 	 * that no LUN is present, so don't add sdev in these cases.
+ 	 * Two specific examples are:
+ 	 * 1) NetApp targets: return PQ=1, PDT=0x1f
+-	 * 2) IBM/2145 targets: return PQ=1, PDT=0
+-	 * 3) USB UFI: returns PDT=0x1f, with the PQ bits being "reserved"
++	 * 2) USB UFI: returns PDT=0x1f, with the PQ bits being "reserved"
+ 	 *    in the UFI 1.0 spec (we cannot rely on reserved bits).
+ 	 *
+ 	 * References:
+@@ -1144,8 +1143,8 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
+ 	 * PDT=00h Direct-access device (floppy)
+ 	 * PDT=1Fh none (no FDD connected to the requested logical unit)
+ 	 */
+-	if (((result[0] >> 5) == 1 ||
+-	    (starget->pdt_1f_for_no_lun && (result[0] & 0x1f) == 0x1f)) &&
++	if (((result[0] >> 5) == 1 || starget->pdt_1f_for_no_lun) &&
++	    (result[0] & 0x1f) == 0x1f &&
+ 	    !scsi_is_wlun(lun)) {
+ 		SCSI_LOG_SCAN_BUS(3, sdev_printk(KERN_INFO, sdev,
+ 					"scsi scan: peripheral device type"
+diff --git a/drivers/spi/spi-dw-core.c b/drivers/spi/spi-dw-core.c
+index c33866f747dbe..aa116cee1fd8d 100644
+--- a/drivers/spi/spi-dw-core.c
++++ b/drivers/spi/spi-dw-core.c
+@@ -353,7 +353,7 @@ static void dw_spi_irq_setup(struct dw_spi *dws)
+ 	 * will be adjusted at the final stage of the IRQ-based SPI transfer
+ 	 * execution so not to lose the leftover of the incoming data.
+ 	 */
+-	level = min_t(u16, dws->fifo_len / 2, dws->tx_len);
++	level = min_t(unsigned int, dws->fifo_len / 2, dws->tx_len);
+ 	dw_writel(dws, DW_SPI_TXFTLR, level);
+ 	dw_writel(dws, DW_SPI_RXFTLR, level - 1);
+ 
+diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
+index 7143d03f0e027..18fbbe510d018 100644
+--- a/drivers/target/target_core_file.c
++++ b/drivers/target/target_core_file.c
+@@ -340,7 +340,7 @@ static int fd_do_rw(struct se_cmd *cmd, struct file *fd,
+ 		len += sg->length;
+ 	}
+ 
+-	iov_iter_bvec(&iter, READ, bvec, sgl_nents, len);
++	iov_iter_bvec(&iter, is_write, bvec, sgl_nents, len);
+ 	if (is_write)
+ 		ret = vfs_iter_write(fd, &iter, &pos, 0);
+ 	else
+@@ -477,7 +477,7 @@ fd_execute_write_same(struct se_cmd *cmd)
+ 		len += se_dev->dev_attrib.block_size;
+ 	}
+ 
+-	iov_iter_bvec(&iter, READ, bvec, nolb, len);
++	iov_iter_bvec(&iter, WRITE, bvec, nolb, len);
+ 	ret = vfs_iter_write(fd_dev->fd_file, &iter, &pos, 0);
+ 
+ 	kfree(bvec);
+diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
+index e4513ef091593..3efd5a3bd69d1 100644
+--- a/drivers/target/target_core_tmr.c
++++ b/drivers/target/target_core_tmr.c
+@@ -82,8 +82,8 @@ static bool __target_check_io_state(struct se_cmd *se_cmd,
+ {
+ 	struct se_session *sess = se_cmd->se_sess;
+ 
+-	assert_spin_locked(&sess->sess_cmd_lock);
+-	WARN_ON_ONCE(!irqs_disabled());
++	lockdep_assert_held(&sess->sess_cmd_lock);
++
+ 	/*
+ 	 * If command already reached CMD_T_COMPLETE state within
+ 	 * target_complete_cmd() or CMD_T_FABRIC_STOP due to shutdown,
+diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c
+index b3c3f7e5851ab..33ce4b218d9ef 100644
+--- a/drivers/tty/serial/8250/8250_dma.c
++++ b/drivers/tty/serial/8250/8250_dma.c
+@@ -46,19 +46,39 @@ static void __dma_rx_complete(void *param)
+ 	struct uart_8250_dma	*dma = p->dma;
+ 	struct tty_port		*tty_port = &p->port.state->port;
+ 	struct dma_tx_state	state;
++	enum dma_status		dma_status;
+ 	int			count;
+ 
+-	dma->rx_running = 0;
+-	dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
++	/*
++	 * New DMA Rx can be started during the completion handler before it
++	 * could acquire port's lock and it might still be ongoing. Don't to
++	 * anything in such case.
++	 */
++	dma_status = dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
++	if (dma_status == DMA_IN_PROGRESS)
++		return;
+ 
+ 	count = dma->rx_size - state.residue;
+ 
+ 	tty_insert_flip_string(tty_port, dma->rx_buf, count);
+ 	p->port.icount.rx += count;
++	dma->rx_running = 0;
+ 
+ 	tty_flip_buffer_push(tty_port);
+ }
+ 
++static void dma_rx_complete(void *param)
++{
++	struct uart_8250_port *p = param;
++	struct uart_8250_dma *dma = p->dma;
++	unsigned long flags;
++
++	spin_lock_irqsave(&p->port.lock, flags);
++	if (dma->rx_running)
++		__dma_rx_complete(p);
++	spin_unlock_irqrestore(&p->port.lock, flags);
++}
++
+ int serial8250_tx_dma(struct uart_8250_port *p)
+ {
+ 	struct uart_8250_dma		*dma = p->dma;
+@@ -130,7 +150,7 @@ int serial8250_rx_dma(struct uart_8250_port *p)
+ 		return -EBUSY;
+ 
+ 	dma->rx_running = 1;
+-	desc->callback = __dma_rx_complete;
++	desc->callback = dma_rx_complete;
+ 	desc->callback_param = p;
+ 
+ 	dma->rx_cookie = dmaengine_submit(desc);
+diff --git a/drivers/tty/vt/vc_screen.c b/drivers/tty/vt/vc_screen.c
+index 1850bacdb5b0e..f566eb1839dc5 100644
+--- a/drivers/tty/vt/vc_screen.c
++++ b/drivers/tty/vt/vc_screen.c
+@@ -386,10 +386,6 @@ vcs_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+ 
+ 	uni_mode = use_unicode(inode);
+ 	attr = use_attributes(inode);
+-	ret = -ENXIO;
+-	vc = vcs_vc(inode, &viewed);
+-	if (!vc)
+-		goto unlock_out;
+ 
+ 	ret = -EINVAL;
+ 	if (pos < 0)
+@@ -407,6 +403,11 @@ vcs_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+ 		unsigned int this_round, skip = 0;
+ 		int size;
+ 
++		ret = -ENXIO;
++		vc = vcs_vc(inode, &viewed);
++		if (!vc)
++			goto unlock_out;
++
+ 		/* Check whether we are above size each round,
+ 		 * as copy_to_user at the end of this loop
+ 		 * could sleep.
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 6d24d138cc77e..4ac1c22f13be0 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -527,6 +527,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* DJI CineSSD */
+ 	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+ 
++	/* Alcor Link AK9563 SC Reader used in 2022 Lenovo ThinkPads */
++	{ USB_DEVICE(0x2ce3, 0x9563), .driver_info = USB_QUIRK_NO_LPM },
++
+ 	/* DELL USB GEN2 */
+ 	{ USB_DEVICE(0x413c, 0xb062), .driver_info = USB_QUIRK_NO_LPM | USB_QUIRK_RESET_RESUME },
+ 
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 528e36cc58ead..dac13fe978110 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -115,7 +115,7 @@ static inline void dwc3_qcom_clrbits(void __iomem *base, u32 offset, u32 val)
+ 	readl(base + offset);
+ }
+ 
+-static void dwc3_qcom_vbus_overrride_enable(struct dwc3_qcom *qcom, bool enable)
++static void dwc3_qcom_vbus_override_enable(struct dwc3_qcom *qcom, bool enable)
+ {
+ 	if (enable) {
+ 		dwc3_qcom_setbits(qcom->qscratch_base, QSCRATCH_SS_PHY_CTRL,
+@@ -136,7 +136,7 @@ static int dwc3_qcom_vbus_notifier(struct notifier_block *nb,
+ 	struct dwc3_qcom *qcom = container_of(nb, struct dwc3_qcom, vbus_nb);
+ 
+ 	/* enable vbus override for device mode */
+-	dwc3_qcom_vbus_overrride_enable(qcom, event);
++	dwc3_qcom_vbus_override_enable(qcom, event);
+ 	qcom->mode = event ? USB_DR_MODE_PERIPHERAL : USB_DR_MODE_HOST;
+ 
+ 	return NOTIFY_DONE;
+@@ -148,7 +148,7 @@ static int dwc3_qcom_host_notifier(struct notifier_block *nb,
+ 	struct dwc3_qcom *qcom = container_of(nb, struct dwc3_qcom, host_nb);
+ 
+ 	/* disable vbus override in host mode */
+-	dwc3_qcom_vbus_overrride_enable(qcom, !event);
++	dwc3_qcom_vbus_override_enable(qcom, !event);
+ 	qcom->mode = event ? USB_DR_MODE_HOST : USB_DR_MODE_PERIPHERAL;
+ 
+ 	return NOTIFY_DONE;
+@@ -832,8 +832,8 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 	qcom->mode = usb_get_dr_mode(&qcom->dwc3->dev);
+ 
+ 	/* enable vbus override for device mode */
+-	if (qcom->mode == USB_DR_MODE_PERIPHERAL)
+-		dwc3_qcom_vbus_overrride_enable(qcom, true);
++	if (qcom->mode != USB_DR_MODE_HOST)
++		dwc3_qcom_vbus_override_enable(qcom, true);
+ 
+ 	/* register extcon to override sw_vbus on Vbus change later */
+ 	ret = dwc3_qcom_register_extcon(qcom);
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 94000fd190e55..8c48c9f801be2 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -278,8 +278,10 @@ static int __ffs_ep0_queue_wait(struct ffs_data *ffs, char *data, size_t len)
+ 	struct usb_request *req = ffs->ep0req;
+ 	int ret;
+ 
+-	if (!req)
++	if (!req) {
++		spin_unlock_irq(&ffs->ev.waitq.lock);
+ 		return -EINVAL;
++	}
+ 
+ 	req->zero     = len < le16_to_cpu(ffs->ev.setup.wLength);
+ 
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index eed719cf55525..e8eaca5a84db2 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -524,10 +524,10 @@ int dp_altmode_probe(struct typec_altmode *alt)
+ 	/* FIXME: Port can only be DFP_U. */
+ 
+ 	/* Make sure we have compatiple pin configurations */
+-	if (!(DP_CAP_DFP_D_PIN_ASSIGN(port->vdo) &
+-	      DP_CAP_UFP_D_PIN_ASSIGN(alt->vdo)) &&
+-	    !(DP_CAP_UFP_D_PIN_ASSIGN(port->vdo) &
+-	      DP_CAP_DFP_D_PIN_ASSIGN(alt->vdo)))
++	if (!(DP_CAP_PIN_ASSIGN_DFP_D(port->vdo) &
++	      DP_CAP_PIN_ASSIGN_UFP_D(alt->vdo)) &&
++	    !(DP_CAP_PIN_ASSIGN_UFP_D(port->vdo) &
++	      DP_CAP_PIN_ASSIGN_DFP_D(alt->vdo)))
+ 		return -ENODEV;
+ 
+ 	ret = sysfs_create_group(&alt->dev.kobj, &dp_altmode_group);
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 5beb20768b204..b9c8e40252142 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -1517,6 +1517,9 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
+ 	nvq = &n->vqs[index];
+ 	mutex_lock(&vq->mutex);
+ 
++	if (fd == -1)
++		vhost_clear_msg(&n->dev);
++
+ 	/* Verify that ring has been setup correctly. */
+ 	if (!vhost_vq_access_ok(vq)) {
+ 		r = -EFAULT;
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index da00a5c57db65..1f9a1554ce5f4 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -669,7 +669,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
+ }
+ EXPORT_SYMBOL_GPL(vhost_dev_stop);
+ 
+-static void vhost_clear_msg(struct vhost_dev *dev)
++void vhost_clear_msg(struct vhost_dev *dev)
+ {
+ 	struct vhost_msg_node *node, *n;
+ 
+@@ -687,6 +687,7 @@ static void vhost_clear_msg(struct vhost_dev *dev)
+ 
+ 	spin_unlock(&dev->iotlb_lock);
+ }
++EXPORT_SYMBOL_GPL(vhost_clear_msg);
+ 
+ void vhost_dev_cleanup(struct vhost_dev *dev)
+ {
+diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
+index b063324c7669d..8f80d6b0d843e 100644
+--- a/drivers/vhost/vhost.h
++++ b/drivers/vhost/vhost.h
+@@ -183,6 +183,7 @@ long vhost_dev_ioctl(struct vhost_dev *, unsigned int ioctl, void __user *argp);
+ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp);
+ bool vhost_vq_access_ok(struct vhost_virtqueue *vq);
+ bool vhost_log_access_ok(struct vhost_dev *);
++void vhost_clear_msg(struct vhost_dev *dev);
+ 
+ int vhost_get_vq_desc(struct vhost_virtqueue *,
+ 		      struct iovec iov[], unsigned int iov_count,
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 27828435dd4fc..6d58c8a5cb446 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -2513,9 +2513,12 @@ static int fbcon_set_font(struct vc_data *vc, struct console_font *font,
+ 	    h > FBCON_SWAP(info->var.rotate, info->var.yres, info->var.xres))
+ 		return -EINVAL;
+ 
++	if (font->width > 32 || font->height > 32)
++		return -EINVAL;
++
+ 	/* Make sure drawing engine can handle the font */
+-	if (!(info->pixmap.blit_x & (1 << (font->width - 1))) ||
+-	    !(info->pixmap.blit_y & (1 << (font->height - 1))))
++	if (!(info->pixmap.blit_x & BIT(font->width - 1)) ||
++	    !(info->pixmap.blit_y & BIT(font->height - 1)))
+ 		return -EINVAL;
+ 
+ 	/* Make sure driver can handle the font length */
+diff --git a/drivers/video/fbdev/smscufx.c b/drivers/video/fbdev/smscufx.c
+index 5fa3f1e5dfe88..b3295cd7fd4f9 100644
+--- a/drivers/video/fbdev/smscufx.c
++++ b/drivers/video/fbdev/smscufx.c
+@@ -1621,7 +1621,7 @@ static int ufx_usb_probe(struct usb_interface *interface,
+ 	struct usb_device *usbdev;
+ 	struct ufx_data *dev;
+ 	struct fb_info *info;
+-	int retval;
++	int retval = -ENOMEM;
+ 	u32 id_rev, fpga_rev;
+ 
+ 	/* usb initialization */
+@@ -1653,15 +1653,17 @@ static int ufx_usb_probe(struct usb_interface *interface,
+ 
+ 	if (!ufx_alloc_urb_list(dev, WRITES_IN_FLIGHT, MAX_TRANSFER)) {
+ 		dev_err(dev->gdev, "ufx_alloc_urb_list failed\n");
+-		goto e_nomem;
++		goto put_ref;
+ 	}
+ 
+ 	/* We don't register a new USB class. Our client interface is fbdev */
+ 
+ 	/* allocates framebuffer driver structure, not framebuffer memory */
+ 	info = framebuffer_alloc(0, &usbdev->dev);
+-	if (!info)
+-		goto e_nomem;
++	if (!info) {
++		dev_err(dev->gdev, "framebuffer_alloc failed\n");
++		goto free_urb_list;
++	}
+ 
+ 	dev->info = info;
+ 	info->par = dev;
+@@ -1704,22 +1706,34 @@ static int ufx_usb_probe(struct usb_interface *interface,
+ 	check_warn_goto_error(retval, "unable to find common mode for display and adapter");
+ 
+ 	retval = ufx_reg_set_bits(dev, 0x4000, 0x00000001);
+-	check_warn_goto_error(retval, "error %d enabling graphics engine", retval);
++	if (retval < 0) {
++		dev_err(dev->gdev, "error %d enabling graphics engine", retval);
++		goto setup_modes;
++	}
+ 
+ 	/* ready to begin using device */
+ 	atomic_set(&dev->usb_active, 1);
+ 
+ 	dev_dbg(dev->gdev, "checking var");
+ 	retval = ufx_ops_check_var(&info->var, info);
+-	check_warn_goto_error(retval, "error %d ufx_ops_check_var", retval);
++	if (retval < 0) {
++		dev_err(dev->gdev, "error %d ufx_ops_check_var", retval);
++		goto reset_active;
++	}
+ 
+ 	dev_dbg(dev->gdev, "setting par");
+ 	retval = ufx_ops_set_par(info);
+-	check_warn_goto_error(retval, "error %d ufx_ops_set_par", retval);
++	if (retval < 0) {
++		dev_err(dev->gdev, "error %d ufx_ops_set_par", retval);
++		goto reset_active;
++	}
+ 
+ 	dev_dbg(dev->gdev, "registering framebuffer");
+ 	retval = register_framebuffer(info);
+-	check_warn_goto_error(retval, "error %d register_framebuffer", retval);
++	if (retval < 0) {
++		dev_err(dev->gdev, "error %d register_framebuffer", retval);
++		goto reset_active;
++	}
+ 
+ 	dev_info(dev->gdev, "SMSC UDX USB device /dev/fb%d attached. %dx%d resolution."
+ 		" Using %dK framebuffer memory\n", info->node,
+@@ -1727,21 +1741,23 @@ static int ufx_usb_probe(struct usb_interface *interface,
+ 
+ 	return 0;
+ 
+-error:
+-	fb_dealloc_cmap(&info->cmap);
+-destroy_modedb:
++reset_active:
++	atomic_set(&dev->usb_active, 0);
++setup_modes:
+ 	fb_destroy_modedb(info->monspecs.modedb);
+ 	vfree(info->screen_base);
+ 	fb_destroy_modelist(&info->modelist);
++error:
++	fb_dealloc_cmap(&info->cmap);
++destroy_modedb:
+ 	framebuffer_release(info);
++free_urb_list:
++	if (dev->urbs.count > 0)
++		ufx_free_urb_list(dev);
+ put_ref:
+ 	kref_put(&dev->kref, ufx_free); /* ref for framebuffer */
+ 	kref_put(&dev->kref, ufx_free); /* last ref from kref_init */
+ 	return retval;
+-
+-e_nomem:
+-	retval = -ENOMEM;
+-	goto put_ref;
+ }
+ 
+ static void ufx_usb_disconnect(struct usb_interface *interface)
+diff --git a/drivers/watchdog/diag288_wdt.c b/drivers/watchdog/diag288_wdt.c
+index aafc8d98bf9fd..370f648cb4b1a 100644
+--- a/drivers/watchdog/diag288_wdt.c
++++ b/drivers/watchdog/diag288_wdt.c
+@@ -86,7 +86,7 @@ static int __diag288(unsigned int func, unsigned int timeout,
+ 		"1:\n"
+ 		EX_TABLE(0b, 1b)
+ 		: "+d" (err) : "d"(__func), "d"(__timeout),
+-		  "d"(__action), "d"(__len) : "1", "cc");
++		  "d"(__action), "d"(__len) : "1", "cc", "memory");
+ 	return err;
+ }
+ 
+@@ -272,12 +272,21 @@ static int __init diag288_init(void)
+ 	char ebc_begin[] = {
+ 		194, 197, 199, 201, 213
+ 	};
++	char *ebc_cmd;
+ 
+ 	watchdog_set_nowayout(&wdt_dev, nowayout_info);
+ 
+ 	if (MACHINE_IS_VM) {
+-		if (__diag288_vm(WDT_FUNC_INIT, 15,
+-				 ebc_begin, sizeof(ebc_begin)) != 0) {
++		ebc_cmd = kmalloc(sizeof(ebc_begin), GFP_KERNEL);
++		if (!ebc_cmd) {
++			pr_err("The watchdog cannot be initialized\n");
++			return -ENOMEM;
++		}
++		memcpy(ebc_cmd, ebc_begin, sizeof(ebc_begin));
++		ret = __diag288_vm(WDT_FUNC_INIT, 15,
++				   ebc_cmd, sizeof(ebc_begin));
++		kfree(ebc_cmd);
++		if (ret != 0) {
+ 			pr_err("The watchdog cannot be initialized\n");
+ 			return -EINVAL;
+ 		}
+diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
+index a7d293fa8d140..3b5a8e2c4d475 100644
+--- a/drivers/xen/pvcalls-back.c
++++ b/drivers/xen/pvcalls-back.c
+@@ -129,13 +129,13 @@ static bool pvcalls_conn_back_read(void *opaque)
+ 	if (masked_prod < masked_cons) {
+ 		vec[0].iov_base = data->in + masked_prod;
+ 		vec[0].iov_len = wanted;
+-		iov_iter_kvec(&msg.msg_iter, WRITE, vec, 1, wanted);
++		iov_iter_kvec(&msg.msg_iter, READ, vec, 1, wanted);
+ 	} else {
+ 		vec[0].iov_base = data->in + masked_prod;
+ 		vec[0].iov_len = array_size - masked_prod;
+ 		vec[1].iov_base = data->in;
+ 		vec[1].iov_len = wanted - vec[0].iov_len;
+-		iov_iter_kvec(&msg.msg_iter, WRITE, vec, 2, wanted);
++		iov_iter_kvec(&msg.msg_iter, READ, vec, 2, wanted);
+ 	}
+ 
+ 	atomic_set(&map->read, 0);
+@@ -188,13 +188,13 @@ static bool pvcalls_conn_back_write(struct sock_mapping *map)
+ 	if (pvcalls_mask(prod, array_size) > pvcalls_mask(cons, array_size)) {
+ 		vec[0].iov_base = data->out + pvcalls_mask(cons, array_size);
+ 		vec[0].iov_len = size;
+-		iov_iter_kvec(&msg.msg_iter, READ, vec, 1, size);
++		iov_iter_kvec(&msg.msg_iter, WRITE, vec, 1, size);
+ 	} else {
+ 		vec[0].iov_base = data->out + pvcalls_mask(cons, array_size);
+ 		vec[0].iov_len = array_size - pvcalls_mask(cons, array_size);
+ 		vec[1].iov_base = data->out;
+ 		vec[1].iov_len = size - vec[0].iov_len;
+-		iov_iter_kvec(&msg.msg_iter, READ, vec, 2, size);
++		iov_iter_kvec(&msg.msg_iter, WRITE, vec, 2, size);
+ 	}
+ 
+ 	atomic_set(&map->write, 0);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index d4d89e0738ff4..15435f983180f 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -381,6 +381,7 @@ void btrfs_free_device(struct btrfs_device *device)
+ static void free_fs_devices(struct btrfs_fs_devices *fs_devices)
+ {
+ 	struct btrfs_device *device;
++
+ 	WARN_ON(fs_devices->opened);
+ 	while (!list_empty(&fs_devices->devices)) {
+ 		device = list_entry(fs_devices->devices.next,
+@@ -1227,9 +1228,22 @@ void btrfs_close_devices(struct btrfs_fs_devices *fs_devices)
+ 
+ 	mutex_lock(&uuid_mutex);
+ 	close_fs_devices(fs_devices);
+-	if (!fs_devices->opened)
++	if (!fs_devices->opened) {
+ 		list_splice_init(&fs_devices->seed_list, &list);
+ 
++		/*
++		 * If the struct btrfs_fs_devices is not assembled with any
++		 * other device, it can be re-initialized during the next mount
++		 * without the needing device-scan step. Therefore, it can be
++		 * fully freed.
++		 */
++		if (fs_devices->num_devices == 1) {
++			list_del(&fs_devices->fs_list);
++			free_fs_devices(fs_devices);
++		}
++	}
++
++
+ 	list_for_each_entry_safe(fs_devices, tmp, &list, seed_list) {
+ 		close_fs_devices(fs_devices);
+ 		list_del(&fs_devices->seed_list);
+@@ -1580,7 +1594,7 @@ again:
+ 			goto out;
+ 	}
+ 
+-	while (1) {
++	while (search_start < search_end) {
+ 		l = path->nodes[0];
+ 		slot = path->slots[0];
+ 		if (slot >= btrfs_header_nritems(l)) {
+@@ -1603,6 +1617,9 @@ again:
+ 		if (key.type != BTRFS_DEV_EXTENT_KEY)
+ 			goto next;
+ 
++		if (key.offset > search_end)
++			break;
++
+ 		if (key.offset > search_start) {
+ 			hole_size = key.offset - search_start;
+ 			dev_extent_hole_check(device, &search_start, &hole_size,
+@@ -1663,6 +1680,7 @@ next:
+ 	else
+ 		ret = 0;
+ 
++	ASSERT(max_hole_start + max_hole_size <= search_end);
+ out:
+ 	btrfs_free_path(path);
+ 	*start = max_hole_start;
+diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
+index 05615a1099dbc..673d74d7f7182 100644
+--- a/fs/btrfs/zlib.c
++++ b/fs/btrfs/zlib.c
+@@ -63,7 +63,7 @@ struct list_head *zlib_alloc_workspace(unsigned int level)
+ 
+ 	workspacesize = max(zlib_deflate_workspacesize(MAX_WBITS, MAX_MEM_LEVEL),
+ 			zlib_inflate_workspacesize());
+-	workspace->strm.workspace = kvmalloc(workspacesize, GFP_KERNEL);
++	workspace->strm.workspace = kvzalloc(workspacesize, GFP_KERNEL);
+ 	workspace->level = level;
+ 	workspace->buf = NULL;
+ 	/*
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index fa51872ff8504..87a9e9096421a 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -3496,6 +3496,12 @@ static void handle_session(struct ceph_mds_session *session,
+ 		break;
+ 
+ 	case CEPH_SESSION_FLUSHMSG:
++		/* flush cap releases */
++		spin_lock(&session->s_cap_lock);
++		if (session->s_num_cap_releases)
++			ceph_flush_cap_releases(mdsc, session);
++		spin_unlock(&session->s_cap_lock);
++
+ 		send_flushmsg_ack(mdsc, session, seq);
+ 		break;
+ 
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 144064dc0d38a..5fe85dc0e2651 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -3539,7 +3539,7 @@ uncached_fill_pages(struct TCP_Server_Info *server,
+ 		rdata->got_bytes += result;
+ 	}
+ 
+-	return rdata->got_bytes > 0 && result != -ECONNABORTED ?
++	return result != -ECONNABORTED && rdata->got_bytes > 0 ?
+ 						rdata->got_bytes : result;
+ }
+ 
+@@ -4302,7 +4302,7 @@ readpages_fill_pages(struct TCP_Server_Info *server,
+ 		rdata->got_bytes += result;
+ 	}
+ 
+-	return rdata->got_bytes > 0 && result != -ECONNABORTED ?
++	return result != -ECONNABORTED && rdata->got_bytes > 0 ?
+ 						rdata->got_bytes : result;
+ }
+ 
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index ce6a2a247804d..66ac048cc8998 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -977,7 +977,7 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ {
+ 	struct page *node_page;
+ 	nid_t nid;
+-	unsigned int ofs_in_node, max_addrs;
++	unsigned int ofs_in_node, max_addrs, base;
+ 	block_t source_blkaddr;
+ 
+ 	nid = le32_to_cpu(sum->nid);
+@@ -1003,11 +1003,17 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 		return false;
+ 	}
+ 
+-	max_addrs = IS_INODE(node_page) ? DEF_ADDRS_PER_INODE :
+-						DEF_ADDRS_PER_BLOCK;
+-	if (ofs_in_node >= max_addrs) {
+-		f2fs_err(sbi, "Inconsistent ofs_in_node:%u in summary, ino:%u, nid:%u, max:%u",
+-			ofs_in_node, dni->ino, dni->nid, max_addrs);
++	if (IS_INODE(node_page)) {
++		base = offset_in_addr(F2FS_INODE(node_page));
++		max_addrs = DEF_ADDRS_PER_INODE;
++	} else {
++		base = 0;
++		max_addrs = DEF_ADDRS_PER_BLOCK;
++	}
++
++	if (base + ofs_in_node >= max_addrs) {
++		f2fs_err(sbi, "Inconsistent blkaddr offset: base:%u, ofs_in_node:%u, max:%u, ino:%u, nid:%u",
++			base, ofs_in_node, max_addrs, dni->ino, dni->nid);
+ 		f2fs_put_page(node_page, 1);
+ 		return false;
+ 	}
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 8b75a04836b63..39b1038076c3e 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -714,9 +714,7 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
+ 			page = device_private_entry_to_page(swpent);
+ 	}
+ 	if (page) {
+-		int mapcount = page_mapcount(page);
+-
+-		if (mapcount >= 2)
++		if (page_mapcount(page) >= 2 || hugetlb_pmd_shared(pte))
+ 			mss->shared_hugetlb += huge_page_size(hstate_vma(vma));
+ 		else
+ 			mss->private_hugetlb += huge_page_size(hstate_vma(vma));
+diff --git a/fs/squashfs/squashfs_fs.h b/fs/squashfs/squashfs_fs.h
+index b3fdc8212c5f5..95f8e89017689 100644
+--- a/fs/squashfs/squashfs_fs.h
++++ b/fs/squashfs/squashfs_fs.h
+@@ -183,7 +183,7 @@ static inline int squashfs_block_size(__le32 raw)
+ #define SQUASHFS_ID_BLOCK_BYTES(A)	(SQUASHFS_ID_BLOCKS(A) *\
+ 					sizeof(u64))
+ /* xattr id lookup table defines */
+-#define SQUASHFS_XATTR_BYTES(A)		((A) * sizeof(struct squashfs_xattr_id))
++#define SQUASHFS_XATTR_BYTES(A)		(((u64) (A)) * sizeof(struct squashfs_xattr_id))
+ 
+ #define SQUASHFS_XATTR_BLOCK(A)		(SQUASHFS_XATTR_BYTES(A) / \
+ 					SQUASHFS_METADATA_SIZE)
+diff --git a/fs/squashfs/squashfs_fs_sb.h b/fs/squashfs/squashfs_fs_sb.h
+index 166e98806265b..8f9445e290e72 100644
+--- a/fs/squashfs/squashfs_fs_sb.h
++++ b/fs/squashfs/squashfs_fs_sb.h
+@@ -63,7 +63,7 @@ struct squashfs_sb_info {
+ 	long long				bytes_used;
+ 	unsigned int				inodes;
+ 	unsigned int				fragments;
+-	int					xattr_ids;
++	unsigned int				xattr_ids;
+ 	unsigned int				ids;
+ };
+ #endif
+diff --git a/fs/squashfs/xattr.h b/fs/squashfs/xattr.h
+index d8a270d3ac4cb..f1a463d8bfa02 100644
+--- a/fs/squashfs/xattr.h
++++ b/fs/squashfs/xattr.h
+@@ -10,12 +10,12 @@
+ 
+ #ifdef CONFIG_SQUASHFS_XATTR
+ extern __le64 *squashfs_read_xattr_id_table(struct super_block *, u64,
+-		u64 *, int *);
++		u64 *, unsigned int *);
+ extern int squashfs_xattr_lookup(struct super_block *, unsigned int, int *,
+ 		unsigned int *, unsigned long long *);
+ #else
+ static inline __le64 *squashfs_read_xattr_id_table(struct super_block *sb,
+-		u64 start, u64 *xattr_table_start, int *xattr_ids)
++		u64 start, u64 *xattr_table_start, unsigned int *xattr_ids)
+ {
+ 	struct squashfs_xattr_id_table *id_table;
+ 
+diff --git a/fs/squashfs/xattr_id.c b/fs/squashfs/xattr_id.c
+index 087cab8c78f4e..b88d19e9581e9 100644
+--- a/fs/squashfs/xattr_id.c
++++ b/fs/squashfs/xattr_id.c
+@@ -56,7 +56,7 @@ int squashfs_xattr_lookup(struct super_block *sb, unsigned int index,
+  * Read uncompressed xattr id lookup table indexes from disk into memory
+  */
+ __le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start,
+-		u64 *xattr_table_start, int *xattr_ids)
++		u64 *xattr_table_start, unsigned int *xattr_ids)
+ {
+ 	struct squashfs_sb_info *msblk = sb->s_fs_info;
+ 	unsigned int len, indexes;
+@@ -76,7 +76,7 @@ __le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start,
+ 	/* Sanity check values */
+ 
+ 	/* there is always at least one xattr id */
+-	if (*xattr_ids == 0)
++	if (*xattr_ids <= 0)
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 955b19dc28a82..c0ba379574a46 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -7,6 +7,7 @@
+ #include <linux/fs.h>
+ #include <linux/hugetlb_inline.h>
+ #include <linux/cgroup.h>
++#include <linux/page_ref.h>
+ #include <linux/list.h>
+ #include <linux/kref.h>
+ #include <linux/pgtable.h>
+@@ -144,7 +145,7 @@ int hugetlb_reserve_pages(struct inode *inode, long from, long to,
+ 						vm_flags_t vm_flags);
+ long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
+ 						long freed);
+-bool isolate_huge_page(struct page *page, struct list_head *list);
++int isolate_hugetlb(struct page *page, struct list_head *list);
+ void putback_active_hugepage(struct page *page);
+ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason);
+ void free_huge_page(struct page *page);
+@@ -325,9 +326,9 @@ static inline pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
+ 	return NULL;
+ }
+ 
+-static inline bool isolate_huge_page(struct page *page, struct list_head *list)
++static inline int isolate_hugetlb(struct page *page, struct list_head *list)
+ {
+-	return false;
++	return -EBUSY;
+ }
+ 
+ static inline void putback_active_hugepage(struct page *page)
+@@ -942,4 +943,16 @@ static inline __init void hugetlb_cma_check(void)
+ }
+ #endif
+ 
++#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
++static inline bool hugetlb_pmd_shared(pte_t *pte)
++{
++	return page_count(virt_to_page(pte)) > 1;
++}
++#else
++static inline bool hugetlb_pmd_shared(pte_t *pte)
++{
++	return false;
++}
++#endif
++
+ #endif /* _LINUX_HUGETLB_H */
+diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
+index 06409a6c40bcb..39ec67689898b 100644
+--- a/include/linux/nvmem-provider.h
++++ b/include/linux/nvmem-provider.h
+@@ -49,7 +49,8 @@ enum nvmem_type {
+  * @word_size:	Minimum read/write access granularity.
+  * @stride:	Minimum read/write access stride.
+  * @priv:	User context passed to read/write callbacks.
+- * @wp-gpio:   Write protect pin
++ * @wp-gpio:	Write protect pin
++ * @ignore_wp:  Write Protect pin is managed by the provider.
+  *
+  * Note: A default "nvmem<id>" name will be assigned to the device if
+  * no name is specified in its configuration. In such case "<id>" is
+@@ -69,6 +70,7 @@ struct nvmem_config {
+ 	enum nvmem_type		type;
+ 	bool			read_only;
+ 	bool			root_only;
++	bool			ignore_wp;
+ 	bool			no_of_node;
+ 	nvmem_reg_read_t	reg_read;
+ 	nvmem_reg_write_t	reg_write;
+diff --git a/include/linux/util_macros.h b/include/linux/util_macros.h
+index 72299f261b253..43db6e47503c7 100644
+--- a/include/linux/util_macros.h
++++ b/include/linux/util_macros.h
+@@ -38,4 +38,16 @@
+  */
+ #define find_closest_descending(x, a, as) __find_closest(x, a, as, >=)
+ 
++/**
++ * is_insidevar - check if the @ptr points inside the @var memory range.
++ * @ptr:	the pointer to a memory address.
++ * @var:	the variable which address and size identify the memory range.
++ *
++ * Evaluates to true if the address in @ptr lies within the memory
++ * range allocated to @var.
++ */
++#define is_insidevar(ptr, var)						\
++	((uintptr_t)(ptr) >= (uintptr_t)(var) &&			\
++	 (uintptr_t)(ptr) <  (uintptr_t)(var) + sizeof(var))
++
+ #endif
+diff --git a/include/uapi/linux/ip.h b/include/uapi/linux/ip.h
+index d2f143393780c..860bbf6bf29cb 100644
+--- a/include/uapi/linux/ip.h
++++ b/include/uapi/linux/ip.h
+@@ -18,6 +18,7 @@
+ #ifndef _UAPI_LINUX_IP_H
+ #define _UAPI_LINUX_IP_H
+ #include <linux/types.h>
++#include <linux/stddef.h>
+ #include <asm/byteorder.h>
+ 
+ #define IPTOS_TOS_MASK		0x1E
+diff --git a/include/uapi/linux/ipv6.h b/include/uapi/linux/ipv6.h
+index 766ab5c8ee655..d44d0483fd73f 100644
+--- a/include/uapi/linux/ipv6.h
++++ b/include/uapi/linux/ipv6.h
+@@ -4,6 +4,7 @@
+ 
+ #include <linux/libc-compat.h>
+ #include <linux/types.h>
++#include <linux/stddef.h>
+ #include <linux/in6.h>
+ #include <asm/byteorder.h>
+ 
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index a6c931fed39bd..9e5f1ebe67d7f 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -570,6 +570,12 @@ static bool is_spilled_reg(const struct bpf_stack_state *stack)
+ 	return stack->slot_type[BPF_REG_SIZE - 1] == STACK_SPILL;
+ }
+ 
++static void scrub_spilled_slot(u8 *stype)
++{
++	if (*stype != STACK_INVALID)
++		*stype = STACK_MISC;
++}
++
+ static void print_verifier_state(struct bpf_verifier_env *env,
+ 				 const struct bpf_func_state *state)
+ {
+@@ -1876,8 +1882,6 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 		 */
+ 		if (insn->src_reg != BPF_REG_FP)
+ 			return 0;
+-		if (BPF_SIZE(insn->code) != BPF_DW)
+-			return 0;
+ 
+ 		/* dreg = *(u64 *)[fp - off] was a fill from the stack.
+ 		 * that [fp - off] slot contains scalar that needs to be
+@@ -1900,8 +1904,6 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 		/* scalars can only be spilled into stack */
+ 		if (insn->dst_reg != BPF_REG_FP)
+ 			return 0;
+-		if (BPF_SIZE(insn->code) != BPF_DW)
+-			return 0;
+ 		spi = (-insn->off - 1) / BPF_REG_SIZE;
+ 		if (spi >= 64) {
+ 			verbose(env, "BUG spi %d\n", spi);
+@@ -2272,16 +2274,33 @@ static bool __is_pointer_value(bool allow_ptr_leaks,
+ 	return reg->type != SCALAR_VALUE;
+ }
+ 
++/* Copy src state preserving dst->parent and dst->live fields */
++static void copy_register_state(struct bpf_reg_state *dst, const struct bpf_reg_state *src)
++{
++	struct bpf_reg_state *parent = dst->parent;
++	enum bpf_reg_liveness live = dst->live;
++
++	*dst = *src;
++	dst->parent = parent;
++	dst->live = live;
++}
++
+ static void save_register_state(struct bpf_func_state *state,
+-				int spi, struct bpf_reg_state *reg)
++				int spi, struct bpf_reg_state *reg,
++				int size)
+ {
+ 	int i;
+ 
+-	state->stack[spi].spilled_ptr = *reg;
+-	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
++	copy_register_state(&state->stack[spi].spilled_ptr, reg);
++	if (size == BPF_REG_SIZE)
++		state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
++
++	for (i = BPF_REG_SIZE; i > BPF_REG_SIZE - size; i--)
++		state->stack[spi].slot_type[i - 1] = STACK_SPILL;
+ 
+-	for (i = 0; i < BPF_REG_SIZE; i++)
+-		state->stack[spi].slot_type[i] = STACK_SPILL;
++	/* size < 8 bytes spill */
++	for (; i; i--)
++		scrub_spilled_slot(&state->stack[spi].slot_type[i - 1]);
+ }
+ 
+ /* check_stack_{read,write}_fixed_off functions track spill/fill of registers,
+@@ -2331,7 +2350,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 			env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
+ 	}
+ 
+-	if (reg && size == BPF_REG_SIZE && register_is_bounded(reg) &&
++	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
+ 	    !register_is_null(reg) && env->bpf_capable) {
+ 		if (dst_reg != BPF_REG_FP) {
+ 			/* The backtracking logic can only recognize explicit
+@@ -2344,7 +2363,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 			if (err)
+ 				return err;
+ 		}
+-		save_register_state(state, spi, reg);
++		save_register_state(state, spi, reg, size);
+ 	} else if (reg && is_spillable_regtype(reg->type)) {
+ 		/* register containing pointer is being spilled into stack */
+ 		if (size != BPF_REG_SIZE) {
+@@ -2356,7 +2375,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 			verbose(env, "cannot spill pointers to stack into stack frame of the caller\n");
+ 			return -EINVAL;
+ 		}
+-		save_register_state(state, spi, reg);
++		save_register_state(state, spi, reg, size);
+ 	} else {
+ 		u8 type = STACK_MISC;
+ 
+@@ -2365,7 +2384,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 		/* Mark slots as STACK_MISC if they belonged to spilled ptr. */
+ 		if (is_spilled_reg(&state->stack[spi]))
+ 			for (i = 0; i < BPF_REG_SIZE; i++)
+-				state->stack[spi].slot_type[i] = STACK_MISC;
++				scrub_spilled_slot(&state->stack[spi].slot_type[i]);
+ 
+ 		/* only mark the slot as written if all 8 bytes were written
+ 		 * otherwise read propagation may incorrectly stop too soon
+@@ -2572,35 +2591,56 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
+ 	struct bpf_func_state *state = vstate->frame[vstate->curframe];
+ 	int i, slot = -off - 1, spi = slot / BPF_REG_SIZE;
+ 	struct bpf_reg_state *reg;
+-	u8 *stype;
++	u8 *stype, type;
+ 
+ 	stype = reg_state->stack[spi].slot_type;
+ 	reg = &reg_state->stack[spi].spilled_ptr;
+ 
+ 	if (is_spilled_reg(&reg_state->stack[spi])) {
+-		if (size != BPF_REG_SIZE) {
++		u8 spill_size = 1;
++
++		for (i = BPF_REG_SIZE - 1; i > 0 && stype[i - 1] == STACK_SPILL; i--)
++			spill_size++;
++
++		if (size != BPF_REG_SIZE || spill_size != BPF_REG_SIZE) {
+ 			if (reg->type != SCALAR_VALUE) {
+ 				verbose_linfo(env, env->insn_idx, "; ");
+ 				verbose(env, "invalid size of register fill\n");
+ 				return -EACCES;
+ 			}
+-			if (dst_regno >= 0) {
++
++			mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
++			if (dst_regno < 0)
++				return 0;
++
++			if (!(off % BPF_REG_SIZE) && size == spill_size) {
++				/* The earlier check_reg_arg() has decided the
++				 * subreg_def for this insn.  Save it first.
++				 */
++				s32 subreg_def = state->regs[dst_regno].subreg_def;
++
++				copy_register_state(&state->regs[dst_regno], reg);
++				state->regs[dst_regno].subreg_def = subreg_def;
++			} else {
++				for (i = 0; i < size; i++) {
++					type = stype[(slot - i) % BPF_REG_SIZE];
++					if (type == STACK_SPILL)
++						continue;
++					if (type == STACK_MISC)
++						continue;
++					verbose(env, "invalid read from stack off %d+%d size %d\n",
++						off, i, size);
++					return -EACCES;
++				}
+ 				mark_reg_unknown(env, state->regs, dst_regno);
+-				state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
+ 			}
+-			mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
++			state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
+ 			return 0;
+ 		}
+-		for (i = 1; i < BPF_REG_SIZE; i++) {
+-			if (stype[(slot - i) % BPF_REG_SIZE] != STACK_SPILL) {
+-				verbose(env, "corrupted spill memory\n");
+-				return -EACCES;
+-			}
+-		}
+ 
+ 		if (dst_regno >= 0) {
+ 			/* restore register state from stack */
+-			state->regs[dst_regno] = *reg;
++			copy_register_state(&state->regs[dst_regno], reg);
+ 			/* mark reg as written since spilled pointer state likely
+ 			 * has its liveness marks cleared by is_state_visited()
+ 			 * which resets stack/reg liveness for state transitions
+@@ -2619,8 +2659,6 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
+ 		}
+ 		mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
+ 	} else {
+-		u8 type;
+-
+ 		for (i = 0; i < size; i++) {
+ 			type = stype[(slot - i) % BPF_REG_SIZE];
+ 			if (type == STACK_MISC)
+@@ -4106,7 +4144,7 @@ static int check_stack_range_initialized(
+ 			if (clobber) {
+ 				__mark_reg_unknown(env, &state->stack[spi].spilled_ptr);
+ 				for (j = 0; j < BPF_REG_SIZE; j++)
+-					state->stack[spi].slot_type[j] = STACK_MISC;
++					scrub_spilled_slot(&state->stack[spi].slot_type[j]);
+ 			}
+ 			goto mark;
+ 		}
+@@ -5863,7 +5901,7 @@ do_sim:
+ 	 */
+ 	if (!ptr_is_dst_reg) {
+ 		tmp = *dst_reg;
+-		*dst_reg = *ptr_reg;
++		copy_register_state(dst_reg, ptr_reg);
+ 	}
+ 	ret = sanitize_speculative_path(env, NULL, env->insn_idx + 1,
+ 					env->insn_idx);
+@@ -7117,7 +7155,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ 					 * to propagate min/max range.
+ 					 */
+ 					src_reg->id = ++env->id_gen;
+-				*dst_reg = *src_reg;
++				copy_register_state(dst_reg, src_reg);
+ 				dst_reg->live |= REG_LIVE_WRITTEN;
+ 				dst_reg->subreg_def = DEF_NOT_SUBREG;
+ 			} else {
+@@ -7128,7 +7166,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ 						insn->src_reg);
+ 					return -EACCES;
+ 				} else if (src_reg->type == SCALAR_VALUE) {
+-					*dst_reg = *src_reg;
++					copy_register_state(dst_reg, src_reg);
+ 					/* Make sure ID is cleared otherwise
+ 					 * dst_reg min/max could be incorrectly
+ 					 * propagated into src_reg by find_equal_scalars()
+@@ -7948,7 +7986,7 @@ static void find_equal_scalars(struct bpf_verifier_state *vstate,
+ 
+ 	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
+ 		if (reg->type == SCALAR_VALUE && reg->id == known_reg->id)
+-			*reg = *known_reg;
++			copy_register_state(reg, known_reg);
+ 	}));
+ }
+ 
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index ab4f51716645b..94e51d36fb497 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1055,6 +1055,7 @@ static void do_bpf_send_signal(struct irq_work *entry)
+ 
+ 	work = container_of(entry, struct send_signal_irq_work, irq_work);
+ 	group_send_sig_info(work->sig, SEND_SIG_PRIV, work->task, work->type);
++	put_task_struct(work->task);
+ }
+ 
+ static int bpf_send_signal_common(u32 sig, enum pid_type type)
+@@ -1091,7 +1092,7 @@ static int bpf_send_signal_common(u32 sig, enum pid_type type)
+ 		 * to the irq_work. The current task may change when queued
+ 		 * irq works get executed.
+ 		 */
+-		work->task = current;
++		work->task = get_task_struct(current);
+ 		work->sig = sig;
+ 		work->type = type;
+ 		irq_work_queue(&work->irq_work);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index f06d48be5a969..8637eab2986ee 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -8569,9 +8569,6 @@ buffer_percent_write(struct file *filp, const char __user *ubuf,
+ 	if (val > 100)
+ 		return -EINVAL;
+ 
+-	if (!val)
+-		val = 1;
+-
+ 	tr->buffer_percent = val;
+ 
+ 	(*ppos)++;
+diff --git a/mm/gup.c b/mm/gup.c
+index 6d5e4fd55d320..11307a8b20d58 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -1627,7 +1627,7 @@ check_again:
+ 		 */
+ 		if (is_migrate_cma_page(head)) {
+ 			if (PageHuge(head)) {
+-				if (!isolate_huge_page(head, &cma_page_list))
++				if (isolate_hugetlb(head, &cma_page_list))
+ 					isolation_error_count++;
+ 			} else {
+ 				if (!PageLRU(head) && drain_allow) {
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 3499b3803384b..81949f6d29af5 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5655,14 +5655,14 @@ follow_huge_pgd(struct mm_struct *mm, unsigned long address, pgd_t *pgd, int fla
+ 	return pte_page(*(pte_t *)pgd) + ((address & ~PGDIR_MASK) >> PAGE_SHIFT);
+ }
+ 
+-bool isolate_huge_page(struct page *page, struct list_head *list)
++int isolate_hugetlb(struct page *page, struct list_head *list)
+ {
+-	bool ret = true;
++	int ret = 0;
+ 
+ 	spin_lock(&hugetlb_lock);
+ 	if (!PageHeadHuge(page) || !page_huge_active(page) ||
+ 	    !get_page_unless_zero(page)) {
+-		ret = false;
++		ret = -EBUSY;
+ 		goto unlock;
+ 	}
+ 	clear_page_huge_active(page);
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index aef267c6a7246..b21dd4a793926 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1763,7 +1763,7 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
+ 	bool lru = PageLRU(page);
+ 
+ 	if (PageHuge(page)) {
+-		isolated = isolate_huge_page(page, pagelist);
++		isolated = !isolate_hugetlb(page, pagelist);
+ 	} else {
+ 		if (lru)
+ 			isolated = !isolate_lru_page(page);
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 6275b1c05f111..f0633f9a91166 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1288,7 +1288,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+ 
+ 		if (PageHuge(page)) {
+ 			pfn = page_to_pfn(head) + compound_nr(head) - 1;
+-			isolate_huge_page(head, &source);
++			isolate_hugetlb(head, &source);
+ 			continue;
+ 		} else if (PageTransHuge(page))
+ 			pfn = page_to_pfn(head) + thp_nr_pages(page) - 1;
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index f9f47449e8dda..6c98585f20dfe 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -622,8 +622,9 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
+ 
+ 	/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
+ 	if (flags & (MPOL_MF_MOVE_ALL) ||
+-	    (flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) {
+-		if (!isolate_huge_page(page, qp->pagelist) &&
++	    (flags & MPOL_MF_MOVE && page_mapcount(page) == 1 &&
++	     !hugetlb_pmd_shared(pte))) {
++		if (isolate_hugetlb(page, qp->pagelist) &&
+ 			(flags & MPOL_MF_STRICT))
+ 			/*
+ 			 * Failed to isolate page but allow migrating pages
+diff --git a/mm/migrate.c b/mm/migrate.c
+index b716b8fa2c3ff..fcb7eb6a6ecae 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -164,7 +164,7 @@ void putback_movable_page(struct page *page)
+  *
+  * This function shall be used whenever the isolated pageset has been
+  * built from lru, balloon, hugetlbfs page. See isolate_migratepages_range()
+- * and isolate_huge_page().
++ * and isolate_hugetlb().
+  */
+ void putback_movable_pages(struct list_head *l)
+ {
+@@ -1657,8 +1657,9 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
+ 
+ 	if (PageHuge(page)) {
+ 		if (PageHead(page)) {
+-			isolate_huge_page(page, pagelist);
+-			err = 1;
++			err = isolate_hugetlb(page, pagelist);
++			if (!err)
++				err = 1;
+ 		}
+ 	} else {
+ 		struct page *head;
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index a56f2b9df5a01..1fd41b91a1a88 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -5054,9 +5054,12 @@ static inline void free_the_page(struct page *page, unsigned int order)
+ 
+ void __free_pages(struct page *page, unsigned int order)
+ {
++	/* get PageHead before we drop reference */
++	int head = PageHead(page);
++
+ 	if (put_page_testzero(page))
+ 		free_the_page(page, order);
+-	else if (!PageHead(page))
++	else if (!head)
+ 		while (order-- > 0)
+ 			free_the_page(page + (1 << order), order);
+ }
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 5af6b0f770de6..d87d6971afc9d 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -1104,6 +1104,7 @@ start_over:
+ 			goto check_out;
+ 		pr_debug("scan_swap_map of si %d failed to find offset\n",
+ 			si->type);
++		cond_resched();
+ 
+ 		spin_lock(&swap_avail_lock);
+ nextsi:
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index a718204c4bfdd..f3c7cfba31e1b 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -871,6 +871,7 @@ static unsigned int ip_sabotage_in(void *priv,
+ 	if (nf_bridge && !nf_bridge->in_prerouting &&
+ 	    !netif_is_l3_master(skb->dev) &&
+ 	    !netif_is_l3_slave(skb->dev)) {
++		nf_bridge_info_free(skb);
+ 		state->okfn(state->net, state->sk, skb);
+ 		return NF_STOLEN;
+ 	}
+diff --git a/net/can/j1939/address-claim.c b/net/can/j1939/address-claim.c
+index f33c473279278..ca4ad6cdd5cbf 100644
+--- a/net/can/j1939/address-claim.c
++++ b/net/can/j1939/address-claim.c
+@@ -165,6 +165,46 @@ static void j1939_ac_process(struct j1939_priv *priv, struct sk_buff *skb)
+ 	 * leaving this function.
+ 	 */
+ 	ecu = j1939_ecu_get_by_name_locked(priv, name);
++
++	if (ecu && ecu->addr == skcb->addr.sa) {
++		/* The ISO 11783-5 standard, in "4.5.2 - Address claim
++		 * requirements", states:
++		 *   d) No CF shall begin, or resume, transmission on the
++		 *      network until 250 ms after it has successfully claimed
++		 *      an address except when responding to a request for
++		 *      address-claimed.
++		 *
++		 * But "Figure 6" and "Figure 7" in "4.5.4.2 - Address-claim
++		 * prioritization" show that the CF begins the transmission
++		 * after 250 ms from the first AC (address-claimed) message
++		 * even if it sends another AC message during that time window
++		 * to resolve the address contention with another CF.
++		 *
++		 * As stated in "4.4.2.3 - Address-claimed message":
++		 *   In order to successfully claim an address, the CF sending
++		 *   an address claimed message shall not receive a contending
++		 *   claim from another CF for at least 250 ms.
++		 *
++		 * As stated in "4.4.3.2 - NAME management (NM) message":
++		 *   1) A commanding CF can
++		 *      d) request that a CF with a specified NAME transmit
++		 *         the address-claimed message with its current NAME.
++		 *   2) A target CF shall
++		 *      d) send an address-claimed message in response to a
++		 *         request for a matching NAME
++		 *
++		 * Taking the above arguments into account, the 250 ms wait is
++		 * requested only during network initialization.
++		 *
++		 * Do not restart the timer on AC message if both the NAME and
++		 * the address match and so if the address has already been
++		 * claimed (timer has expired) or the AC message has been sent
++		 * to resolve the contention with another CF (timer is still
++		 * running).
++		 */
++		goto out_ecu_put;
++	}
++
+ 	if (!ecu && j1939_address_is_unicast(skcb->addr.sa))
+ 		ecu = j1939_ecu_create_locked(priv, name);
+ 
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 78f6a91106994..57d6aac7f4353 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -1087,10 +1087,6 @@ static bool j1939_session_deactivate(struct j1939_session *session)
+ 	bool active;
+ 
+ 	j1939_session_list_lock(priv);
+-	/* This function should be called with a session ref-count of at
+-	 * least 2.
+-	 */
+-	WARN_ON_ONCE(kref_read(&session->kref) < 2);
+ 	active = j1939_session_deactivate_locked(session);
+ 	j1939_session_list_unlock(priv);
+ 
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 6a1685461f89b..926e29e84b40b 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -6,6 +6,7 @@
+ #include <linux/bpf.h>
+ #include <linux/init.h>
+ #include <linux/wait.h>
++#include <linux/util_macros.h>
+ 
+ #include <net/inet_common.h>
+ #include <net/tls.h>
+@@ -642,10 +643,9 @@ struct proto *tcp_bpf_get_proto(struct sock *sk, struct sk_psock *psock)
+  */
+ void tcp_bpf_clone(const struct sock *sk, struct sock *newsk)
+ {
+-	int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4;
+ 	struct proto *prot = newsk->sk_prot;
+ 
+-	if (prot == &tcp_bpf_prots[family][TCP_BPF_BASE])
++	if (is_insidevar(prot, tcp_bpf_prots))
+ 		newsk->sk_prot = sk->sk_prot_creator;
+ }
+ #endif /* CONFIG_BPF_STREAM_PARSER */
+diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
+index e5c8a295e6406..5c04da4cfbad0 100644
+--- a/net/netrom/af_netrom.c
++++ b/net/netrom/af_netrom.c
+@@ -400,6 +400,11 @@ static int nr_listen(struct socket *sock, int backlog)
+ 	struct sock *sk = sock->sk;
+ 
+ 	lock_sock(sk);
++	if (sock->state != SS_UNCONNECTED) {
++		release_sock(sk);
++		return -EINVAL;
++	}
++
+ 	if (sk->sk_state != TCP_LISTEN) {
+ 		memset(&nr_sk(sk)->user_addr, 0, AX25_ADDR_LEN);
+ 		sk->sk_max_ack_backlog = backlog;
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index 435f7f1be6146..b625ab5e9a430 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -964,14 +964,14 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ 	key = kzalloc(sizeof(*key), GFP_KERNEL);
+ 	if (!key) {
+ 		error = -ENOMEM;
+-		goto err_kfree_key;
++		goto err_kfree_flow;
+ 	}
+ 
+ 	ovs_match_init(&match, key, false, &mask);
+ 	error = ovs_nla_get_match(net, &match, a[OVS_FLOW_ATTR_KEY],
+ 				  a[OVS_FLOW_ATTR_MASK], log);
+ 	if (error)
+-		goto err_kfree_flow;
++		goto err_kfree_key;
+ 
+ 	ovs_flow_mask_key(&new_flow->key, key, true, &mask);
+ 
+@@ -979,14 +979,14 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ 	error = ovs_nla_get_identifier(&new_flow->id, a[OVS_FLOW_ATTR_UFID],
+ 				       key, log);
+ 	if (error)
+-		goto err_kfree_flow;
++		goto err_kfree_key;
+ 
+ 	/* Validate actions. */
+ 	error = ovs_nla_copy_actions(net, a[OVS_FLOW_ATTR_ACTIONS],
+ 				     &new_flow->key, &acts, log);
+ 	if (error) {
+ 		OVS_NLERR(log, "Flow actions may not be safe on all matching packets.");
+-		goto err_kfree_flow;
++		goto err_kfree_key;
+ 	}
+ 
+ 	reply = ovs_flow_cmd_alloc_info(acts, &new_flow->id, info, false,
+@@ -1086,10 +1086,10 @@ err_unlock_ovs:
+ 	kfree_skb(reply);
+ err_kfree_acts:
+ 	ovs_nla_free_flow_actions(acts);
+-err_kfree_flow:
+-	ovs_flow_free(new_flow, false);
+ err_kfree_key:
+ 	kfree(key);
++err_kfree_flow:
++	ovs_flow_free(new_flow, false);
+ error:
+ 	return error;
+ }
+diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c
+index e760d4a38fafd..fe81e03851686 100644
+--- a/net/qrtr/ns.c
++++ b/net/qrtr/ns.c
+@@ -83,7 +83,10 @@ static struct qrtr_node *node_get(unsigned int node_id)
+ 
+ 	node->id = node_id;
+ 
+-	radix_tree_insert(&nodes, node_id, node);
++	if (radix_tree_insert(&nodes, node_id, node)) {
++		kfree(node);
++		return NULL;
++	}
+ 
+ 	return node;
+ }
+diff --git a/net/rds/message.c b/net/rds/message.c
+index 799034e0f513d..b363ef13c75ef 100644
+--- a/net/rds/message.c
++++ b/net/rds/message.c
+@@ -104,9 +104,9 @@ static void rds_rm_zerocopy_callback(struct rds_sock *rs,
+ 	spin_lock_irqsave(&q->lock, flags);
+ 	head = &q->zcookie_head;
+ 	if (!list_empty(head)) {
+-		info = list_entry(head, struct rds_msg_zcopy_info,
+-				  rs_zcookie_next);
+-		if (info && rds_zcookie_add(info, cookie)) {
++		info = list_first_entry(head, struct rds_msg_zcopy_info,
++					rs_zcookie_next);
++		if (rds_zcookie_add(info, cookie)) {
+ 			spin_unlock_irqrestore(&q->lock, flags);
+ 			kfree(rds_info_from_znotifier(znotif));
+ 			/* caller invokes rds_wake_sk_sleep() */
+diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
+index d231d4620c38f..161dc194e6342 100644
+--- a/net/x25/af_x25.c
++++ b/net/x25/af_x25.c
+@@ -492,6 +492,12 @@ static int x25_listen(struct socket *sock, int backlog)
+ 	int rc = -EOPNOTSUPP;
+ 
+ 	lock_sock(sk);
++	if (sock->state != SS_UNCONNECTED) {
++		rc = -EINVAL;
++		release_sock(sk);
++		return rc;
++	}
++
+ 	if (sk->sk_state != TCP_LISTEN) {
+ 		memset(&x25_sk(sk)->dest_addr, 0, X25_ADDR_LEN);
+ 		sk->sk_max_ack_backlog = backlog;
+diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c
+index a0f62fa02e06e..8cbf45a8bcdc2 100644
+--- a/net/xfrm/xfrm_compat.c
++++ b/net/xfrm/xfrm_compat.c
+@@ -5,6 +5,7 @@
+  * Based on code and translator idea by: Florian Westphal <fw@strlen.de>
+  */
+ #include <linux/compat.h>
++#include <linux/nospec.h>
+ #include <linux/xfrm.h>
+ #include <net/xfrm.h>
+ 
+@@ -302,7 +303,7 @@ static int xfrm_xlate64(struct sk_buff *dst, const struct nlmsghdr *nlh_src)
+ 	nla_for_each_attr(nla, attrs, len, remaining) {
+ 		int err;
+ 
+-		switch (type) {
++		switch (nlh_src->nlmsg_type) {
+ 		case XFRM_MSG_NEWSPDINFO:
+ 			err = xfrm_nla_cpy(dst, nla, nla_len(nla));
+ 			break;
+@@ -437,6 +438,7 @@ static int xfrm_xlate32_attr(void *dst, const struct nlattr *nla,
+ 		NL_SET_ERR_MSG(extack, "Bad attribute");
+ 		return -EOPNOTSUPP;
+ 	}
++	type = array_index_nospec(type, XFRMA_MAX + 1);
+ 	if (nla_len(nla) < compat_policy[type].len) {
+ 		NL_SET_ERR_MSG(extack, "Attribute bad length");
+ 		return -EOPNOTSUPP;
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index 77e82033ad700..fef99a1c5df10 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -277,8 +277,7 @@ static int xfrm6_remove_tunnel_encap(struct xfrm_state *x, struct sk_buff *skb)
+ 		goto out;
+ 
+ 	if (x->props.flags & XFRM_STATE_DECAP_DSCP)
+-		ipv6_copy_dscp(ipv6_get_dsfield(ipv6_hdr(skb)),
+-			       ipipv6_hdr(skb));
++		ipv6_copy_dscp(XFRM_MODE_SKB_CB(skb)->tos, ipipv6_hdr(skb));
+ 	if (!(x->props.flags & XFRM_STATE_NOECN))
+ 		ipip6_ecn_decapsulate(skb);
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index cfd86389d37f6..d66d2cf7708ec 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8811,6 +8811,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x142b, "Acer Swift SF314-42", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC),
++	SND_PCI_QUIRK(0x1025, 0x1534, "Acer Predator PH315-54", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+ 	SND_PCI_QUIRK(0x1028, 0x053c, "Dell Latitude E5430", ALC292_FIXUP_DELL_E7X),
+ 	SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS),
+@@ -9089,6 +9090,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++	SND_PCI_QUIRK(0x144d, 0xca03, "Samsung Galaxy Book2 Pro 360 (NP930QED)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+@@ -9260,6 +9262,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
+ 	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+ 	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
++	SND_PCI_QUIRK(0x1c6c, 0x1251, "Positivo N14KP6-TG", ALC288_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_SET_COEF_DEFAULTS),
+ 	SND_PCI_QUIRK(0x1d05, 0x1096, "TongFang GMxMRxx", ALC269_FIXUP_NO_SHUTUP),
+ 	SND_PCI_QUIRK(0x1d05, 0x1100, "TongFang GKxNRxx", ALC269_FIXUP_NO_SHUTUP),
+diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
+index a188901a83bbe..29abc96dc146c 100644
+--- a/sound/pci/hda/patch_via.c
++++ b/sound/pci/hda/patch_via.c
+@@ -821,6 +821,9 @@ static int add_secret_dac_path(struct hda_codec *codec)
+ 		return 0;
+ 	nums = snd_hda_get_connections(codec, spec->gen.mixer_nid, conn,
+ 				       ARRAY_SIZE(conn) - 1);
++	if (nums < 0)
++		return nums;
++
+ 	for (i = 0; i < nums; i++) {
+ 		if (get_wcaps_type(get_wcaps(codec, conn[i])) == AC_WID_AUD_OUT)
+ 			return 0;
+diff --git a/sound/pci/lx6464es/lx_core.c b/sound/pci/lx6464es/lx_core.c
+index f884f5a6a61c2..a49a3254f9677 100644
+--- a/sound/pci/lx6464es/lx_core.c
++++ b/sound/pci/lx6464es/lx_core.c
+@@ -493,12 +493,11 @@ int lx_buffer_ask(struct lx6464es *chip, u32 pipe, int is_capture,
+ 		dev_dbg(chip->card->dev,
+ 			"CMD_08_ASK_BUFFERS: needed %d, freed %d\n",
+ 			    *r_needed, *r_freed);
+-		for (i = 0; i < MAX_STREAM_BUFFER; ++i) {
+-			for (i = 0; i != chip->rmh.stat_len; ++i)
+-				dev_dbg(chip->card->dev,
+-					"  stat[%d]: %x, %x\n", i,
+-					    chip->rmh.stat[i],
+-					    chip->rmh.stat[i] & MASK_DATA_SIZE);
++		for (i = 0; i < MAX_STREAM_BUFFER && i < chip->rmh.stat_len;
++		     ++i) {
++			dev_dbg(chip->card->dev, "  stat[%d]: %x, %x\n", i,
++				chip->rmh.stat[i],
++				chip->rmh.stat[i] & MASK_DATA_SIZE);
+ 		}
+ 	}
+ 
+diff --git a/sound/synth/emux/emux_nrpn.c b/sound/synth/emux/emux_nrpn.c
+index 7eed5791972cc..a7d83182f7d28 100644
+--- a/sound/synth/emux/emux_nrpn.c
++++ b/sound/synth/emux/emux_nrpn.c
+@@ -349,6 +349,9 @@ int
+ snd_emux_xg_control(struct snd_emux_port *port, struct snd_midi_channel *chan,
+ 		    int param)
+ {
++	if (param >= ARRAY_SIZE(chan->control))
++		return -EINVAL;
++
+ 	return send_converted_effect(xg_effects, ARRAY_SIZE(xg_effects),
+ 				     port, chan, param,
+ 				     chan->control[param],
+diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh
+index 54020d05a62b8..9605e158a0bfc 100644
+--- a/tools/testing/selftests/net/forwarding/lib.sh
++++ b/tools/testing/selftests/net/forwarding/lib.sh
+@@ -731,14 +731,14 @@ sysctl_set()
+ 	local value=$1; shift
+ 
+ 	SYSCTL_ORIG[$key]=$(sysctl -n $key)
+-	sysctl -qw $key=$value
++	sysctl -qw $key="$value"
+ }
+ 
+ sysctl_restore()
+ {
+ 	local key=$1; shift
+ 
+-	sysctl -qw $key=${SYSCTL_ORIG["$key"]}
++	sysctl -qw $key="${SYSCTL_ORIG[$key]}"
+ }
+ 
+ forwarding_enable()
+diff --git a/tools/testing/selftests/net/udpgso_bench.sh b/tools/testing/selftests/net/udpgso_bench.sh
+index dc932fd653634..640bc43452faa 100755
+--- a/tools/testing/selftests/net/udpgso_bench.sh
++++ b/tools/testing/selftests/net/udpgso_bench.sh
+@@ -7,6 +7,7 @@ readonly GREEN='\033[0;92m'
+ readonly YELLOW='\033[0;33m'
+ readonly RED='\033[0;31m'
+ readonly NC='\033[0m' # No Color
++readonly TESTPORT=8000
+ 
+ readonly KSFT_PASS=0
+ readonly KSFT_FAIL=1
+@@ -56,11 +57,26 @@ trap wake_children EXIT
+ 
+ run_one() {
+ 	local -r args=$@
++	local nr_socks=0
++	local i=0
++	local -r timeout=10
++
++	./udpgso_bench_rx -p "$TESTPORT" &
++	./udpgso_bench_rx -p "$TESTPORT" -t &
++
++	# Wait for the above test program to get ready to receive connections.
++	while [ "$i" -lt "$timeout" ]; do
++		nr_socks="$(ss -lnHi | grep -c "\*:${TESTPORT}")"
++		[ "$nr_socks" -eq 2 ] && break
++		i=$((i + 1))
++		sleep 1
++	done
++	if [ "$nr_socks" -ne 2 ]; then
++		echo "timed out while waiting for udpgso_bench_rx"
++		exit 1
++	fi
+ 
+-	./udpgso_bench_rx &
+-	./udpgso_bench_rx -t &
+-
+-	./udpgso_bench_tx ${args}
++	./udpgso_bench_tx -p "$TESTPORT" ${args}
+ }
+ 
+ run_in_netns() {
+diff --git a/tools/testing/selftests/net/udpgso_bench_rx.c b/tools/testing/selftests/net/udpgso_bench_rx.c
+index 6a193425c367f..4058c7451e70d 100644
+--- a/tools/testing/selftests/net/udpgso_bench_rx.c
++++ b/tools/testing/selftests/net/udpgso_bench_rx.c
+@@ -250,7 +250,7 @@ static int recv_msg(int fd, char *buf, int len, int *gso_size)
+ static void do_flush_udp(int fd)
+ {
+ 	static char rbuf[ETH_MAX_MTU];
+-	int ret, len, gso_size, budget = 256;
++	int ret, len, gso_size = 0, budget = 256;
+ 
+ 	len = cfg_read_all ? sizeof(rbuf) : 0;
+ 	while (budget--) {
+@@ -336,6 +336,8 @@ static void parse_opts(int argc, char **argv)
+ 			cfg_verify = true;
+ 			cfg_read_all = true;
+ 			break;
++		default:
++			exit(1);
+ 		}
+ 	}
+ 
+diff --git a/tools/testing/selftests/net/udpgso_bench_tx.c b/tools/testing/selftests/net/udpgso_bench_tx.c
+index f1fdaa2702913..477392715a9ad 100644
+--- a/tools/testing/selftests/net/udpgso_bench_tx.c
++++ b/tools/testing/selftests/net/udpgso_bench_tx.c
+@@ -62,6 +62,7 @@ static int	cfg_payload_len	= (1472 * 42);
+ static int	cfg_port	= 8000;
+ static int	cfg_runtime_ms	= -1;
+ static bool	cfg_poll;
++static int	cfg_poll_loop_timeout_ms = 2000;
+ static bool	cfg_segment;
+ static bool	cfg_sendmmsg;
+ static bool	cfg_tcp;
+@@ -235,16 +236,17 @@ static void flush_errqueue_recv(int fd)
+ 	}
+ }
+ 
+-static void flush_errqueue(int fd, const bool do_poll)
++static void flush_errqueue(int fd, const bool do_poll,
++			   unsigned long poll_timeout, const bool poll_err)
+ {
+ 	if (do_poll) {
+ 		struct pollfd fds = {0};
+ 		int ret;
+ 
+ 		fds.fd = fd;
+-		ret = poll(&fds, 1, 500);
++		ret = poll(&fds, 1, poll_timeout);
+ 		if (ret == 0) {
+-			if (cfg_verbose)
++			if ((cfg_verbose) && (poll_err))
+ 				fprintf(stderr, "poll timeout\n");
+ 		} else if (ret < 0) {
+ 			error(1, errno, "poll");
+@@ -254,6 +256,20 @@ static void flush_errqueue(int fd, const bool do_poll)
+ 	flush_errqueue_recv(fd);
+ }
+ 
++static void flush_errqueue_retry(int fd, unsigned long num_sends)
++{
++	unsigned long tnow, tstop;
++	bool first_try = true;
++
++	tnow = gettimeofday_ms();
++	tstop = tnow + cfg_poll_loop_timeout_ms;
++	do {
++		flush_errqueue(fd, true, tstop - tnow, first_try);
++		first_try = false;
++		tnow = gettimeofday_ms();
++	} while ((stat_zcopies != num_sends) && (tnow < tstop));
++}
++
+ static int send_tcp(int fd, char *data)
+ {
+ 	int ret, done = 0, count = 0;
+@@ -413,7 +429,8 @@ static int send_udp_segment(int fd, char *data)
+ 
+ static void usage(const char *filepath)
+ {
+-	error(1, 0, "Usage: %s [-46acmHPtTuvz] [-C cpu] [-D dst ip] [-l secs] [-M messagenr] [-p port] [-s sendsize] [-S gsosize]",
++	error(1, 0, "Usage: %s [-46acmHPtTuvz] [-C cpu] [-D dst ip] [-l secs] "
++		    "[-L secs] [-M messagenr] [-p port] [-s sendsize] [-S gsosize]",
+ 		    filepath);
+ }
+ 
+@@ -423,7 +440,7 @@ static void parse_opts(int argc, char **argv)
+ 	int max_len, hdrlen;
+ 	int c;
+ 
+-	while ((c = getopt(argc, argv, "46acC:D:Hl:mM:p:s:PS:tTuvz")) != -1) {
++	while ((c = getopt(argc, argv, "46acC:D:Hl:L:mM:p:s:PS:tTuvz")) != -1) {
+ 		switch (c) {
+ 		case '4':
+ 			if (cfg_family != PF_UNSPEC)
+@@ -452,6 +469,9 @@ static void parse_opts(int argc, char **argv)
+ 		case 'l':
+ 			cfg_runtime_ms = strtoul(optarg, NULL, 10) * 1000;
+ 			break;
++		case 'L':
++			cfg_poll_loop_timeout_ms = strtoul(optarg, NULL, 10) * 1000;
++			break;
+ 		case 'm':
+ 			cfg_sendmmsg = true;
+ 			break;
+@@ -490,6 +510,8 @@ static void parse_opts(int argc, char **argv)
+ 		case 'z':
+ 			cfg_zerocopy = true;
+ 			break;
++		default:
++			exit(1);
+ 		}
+ 	}
+ 
+@@ -677,7 +699,7 @@ int main(int argc, char **argv)
+ 			num_sends += send_udp(fd, buf[i]);
+ 		num_msgs++;
+ 		if ((cfg_zerocopy && ((num_msgs & 0xF) == 0)) || cfg_tx_tstamp)
+-			flush_errqueue(fd, cfg_poll);
++			flush_errqueue(fd, cfg_poll, 500, true);
+ 
+ 		if (cfg_msg_nr && num_msgs >= cfg_msg_nr)
+ 			break;
+@@ -696,7 +718,7 @@ int main(int argc, char **argv)
+ 	} while (!interrupted && (cfg_runtime_ms == -1 || tnow < tstop));
+ 
+ 	if (cfg_zerocopy || cfg_tx_tstamp)
+-		flush_errqueue(fd, true);
++		flush_errqueue_retry(fd, num_sends);
+ 
+ 	if (close(fd))
+ 		error(1, errno, "close");


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-02-22 14:04 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2023-02-22 14:04 UTC (permalink / raw
  To: gentoo-commits

commit:     65feb0eaa077c16637d1e39ff27b50f6545ea049
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 22 13:03:56 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Feb 22 13:03:56 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=65feb0ea

Linux patch 5.10.169

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |    4 +
 1168_linux-5.10.169.patch | 1694 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1698 insertions(+)

diff --git a/0000_README b/0000_README
index 88e6260b..2e18ffb2 100644
--- a/0000_README
+++ b/0000_README
@@ -715,6 +715,10 @@ Patch:  1167_linux-5.10.168.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.168
 
+Patch:  1168_linux-5.10.169.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.169
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1168_linux-5.10.169.patch b/1168_linux-5.10.169.patch
new file mode 100644
index 00000000..788cb613
--- /dev/null
+++ b/1168_linux-5.10.169.patch
@@ -0,0 +1,1694 @@
+diff --git a/Makefile b/Makefile
+index af3270277fd0e..dbbfaa5d4fe29 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 168
++SUBLEVEL = 169
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/s390/boot/compressed/decompressor.c b/arch/s390/boot/compressed/decompressor.c
+index 3061b11c4d27f..8eaa1712a1c8d 100644
+--- a/arch/s390/boot/compressed/decompressor.c
++++ b/arch/s390/boot/compressed/decompressor.c
+@@ -79,6 +79,6 @@ void *decompress_kernel(void)
+ 	void *output = (void *)decompress_offset;
+ 
+ 	__decompress(_compressed_start, _compressed_end - _compressed_start,
+-		     NULL, NULL, output, 0, NULL, error);
++		     NULL, NULL, output, vmlinux.image_size, NULL, error);
+ 	return output;
+ }
+diff --git a/arch/s390/kernel/signal.c b/arch/s390/kernel/signal.c
+index b27b6c1f058d0..9e900a8977bd2 100644
+--- a/arch/s390/kernel/signal.c
++++ b/arch/s390/kernel/signal.c
+@@ -472,7 +472,7 @@ void do_signal(struct pt_regs *regs)
+ 	current->thread.system_call =
+ 		test_pt_regs_flag(regs, PIF_SYSCALL) ? regs->int_code : 0;
+ 
+-	if (test_thread_flag(TIF_SIGPENDING) && get_signal(&ksig)) {
++	if (get_signal(&ksig)) {
+ 		/* Whee!  Actually deliver the signal.  */
+ 		if (current->thread.system_call) {
+ 			regs->int_code = current->thread.system_call;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 23d7c563e012b..554d37873c253 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4455,12 +4455,11 @@ static void kvm_vcpu_ioctl_x86_get_debugregs(struct kvm_vcpu *vcpu,
+ {
+ 	unsigned long val;
+ 
++	memset(dbgregs, 0, sizeof(*dbgregs));
+ 	memcpy(dbgregs->db, vcpu->arch.db, sizeof(vcpu->arch.db));
+ 	kvm_get_dr(vcpu, 6, &val);
+ 	dbgregs->dr6 = val;
+ 	dbgregs->dr7 = vcpu->arch.dr7;
+-	dbgregs->flags = 0;
+-	memset(&dbgregs->reserved, 0, sizeof(dbgregs->reserved));
+ }
+ 
+ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
+diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+index 4a3bde7c9f217..ae5cf2b55e159 100644
+--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
++++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+@@ -1212,6 +1212,22 @@ icl_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
+ 		    GAMT_CHKN_BIT_REG,
+ 		    GAMT_CHKN_DISABLE_L3_COH_PIPE);
+ 
++	/*
++	 * Wa_1408615072:icl,ehl  (vsunit)
++	 * Wa_1407596294:icl,ehl  (hsunit)
++	 */
++	wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE,
++		    VSUNIT_CLKGATE_DIS | HSUNIT_CLKGATE_DIS);
++
++	/* Wa_1407352427:icl,ehl */
++	wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE2,
++		    PSDUNIT_CLKGATE_DIS);
++
++	/* Wa_1406680159:icl,ehl */
++	wa_write_or(wal,
++		    SUBSLICE_UNIT_LEVEL_CLKGATE,
++		    GWUNIT_CLKGATE_DIS);
++
+ 	/* Wa_1607087056:icl,ehl,jsl */
+ 	if (IS_ICELAKE(i915) ||
+ 	    IS_EHL_REVID(i915, EHL_REVID_A0, EHL_REVID_A0)) {
+@@ -1816,22 +1832,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
+ 		wa_masked_en(wal, GEN9_CSFE_CHICKEN1_RCS,
+ 			     GEN11_ENABLE_32_PLANE_MODE);
+ 
+-		/*
+-		 * Wa_1408615072:icl,ehl  (vsunit)
+-		 * Wa_1407596294:icl,ehl  (hsunit)
+-		 */
+-		wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE,
+-			    VSUNIT_CLKGATE_DIS | HSUNIT_CLKGATE_DIS);
+-
+-		/* Wa_1407352427:icl,ehl */
+-		wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE2,
+-			    PSDUNIT_CLKGATE_DIS);
+-
+-		/* Wa_1406680159:icl,ehl */
+-		wa_write_or(wal,
+-			    SUBSLICE_UNIT_LEVEL_CLKGATE,
+-			    GWUNIT_CLKGATE_DIS);
+-
+ 		/*
+ 		 * Wa_1408767742:icl[a2..forever],ehl[all]
+ 		 * Wa_1605460711:icl[a0..c0]
+diff --git a/drivers/mmc/core/sdio_bus.c b/drivers/mmc/core/sdio_bus.c
+index a448535c1265d..89dd49260080b 100644
+--- a/drivers/mmc/core/sdio_bus.c
++++ b/drivers/mmc/core/sdio_bus.c
+@@ -295,6 +295,12 @@ static void sdio_release_func(struct device *dev)
+ 	if (!(func->card->quirks & MMC_QUIRK_NONSTD_SDIO))
+ 		sdio_free_func_cis(func);
+ 
++	/*
++	 * We have now removed the link to the tuples in the
++	 * card structure, so remove the reference.
++	 */
++	put_device(&func->card->dev);
++
+ 	kfree(func->info);
+ 	kfree(func->tmpbuf);
+ 	kfree(func);
+@@ -325,6 +331,12 @@ struct sdio_func *sdio_alloc_func(struct mmc_card *card)
+ 
+ 	device_initialize(&func->dev);
+ 
++	/*
++	 * We may link to tuples in the card structure,
++	 * we need make sure we have a reference to it.
++	 */
++	get_device(&func->card->dev);
++
+ 	func->dev.parent = &card->dev;
+ 	func->dev.bus = &sdio_bus_type;
+ 	func->dev.release = sdio_release_func;
+@@ -378,10 +390,9 @@ int sdio_add_func(struct sdio_func *func)
+  */
+ void sdio_remove_func(struct sdio_func *func)
+ {
+-	if (!sdio_func_present(func))
+-		return;
++	if (sdio_func_present(func))
++		device_del(&func->dev);
+ 
+-	device_del(&func->dev);
+ 	of_node_put(func->dev.of_node);
+ 	put_device(&func->dev);
+ }
+diff --git a/drivers/mmc/core/sdio_cis.c b/drivers/mmc/core/sdio_cis.c
+index b23773583179d..ce524f7e11fb2 100644
+--- a/drivers/mmc/core/sdio_cis.c
++++ b/drivers/mmc/core/sdio_cis.c
+@@ -391,12 +391,6 @@ int sdio_read_func_cis(struct sdio_func *func)
+ 	if (ret)
+ 		return ret;
+ 
+-	/*
+-	 * Since we've linked to tuples in the card structure,
+-	 * we must make sure we have a reference to it.
+-	 */
+-	get_device(&func->card->dev);
+-
+ 	/*
+ 	 * Vendor/device id is optional for function CIS, so
+ 	 * copy it from the card structure as needed.
+@@ -422,11 +416,5 @@ void sdio_free_func_cis(struct sdio_func *func)
+ 	}
+ 
+ 	func->tuples = NULL;
+-
+-	/*
+-	 * We have now removed the link to the tuples in the
+-	 * card structure, so remove the reference.
+-	 */
+-	put_device(&func->card->dev);
+ }
+ 
+diff --git a/drivers/mmc/host/jz4740_mmc.c b/drivers/mmc/host/jz4740_mmc.c
+index aa3dfb9c1071b..62d00232f85ea 100644
+--- a/drivers/mmc/host/jz4740_mmc.c
++++ b/drivers/mmc/host/jz4740_mmc.c
+@@ -1041,6 +1041,16 @@ static int jz4740_mmc_probe(struct platform_device* pdev)
+ 	mmc->ops = &jz4740_mmc_ops;
+ 	if (!mmc->f_max)
+ 		mmc->f_max = JZ_MMC_CLK_RATE;
++
++	/*
++	 * There seems to be a problem with this driver on the JZ4760 and
++	 * JZ4760B SoCs. There, when using the maximum rate supported (50 MHz),
++	 * the communication fails with many SD cards.
++	 * Until this bug is sorted out, limit the maximum rate to 24 MHz.
++	 */
++	if (host->version == JZ_MMC_JZ4760 && mmc->f_max > JZ_MMC_CLK_RATE)
++		mmc->f_max = JZ_MMC_CLK_RATE;
++
+ 	mmc->f_min = mmc->f_max / 128;
+ 	mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
+ 
+diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
+index 02f4fd26e76a9..1d814919eb6be 100644
+--- a/drivers/mmc/host/mmc_spi.c
++++ b/drivers/mmc/host/mmc_spi.c
+@@ -1450,7 +1450,7 @@ static int mmc_spi_probe(struct spi_device *spi)
+ 
+ 	status = mmc_add_host(mmc);
+ 	if (status != 0)
+-		goto fail_add_host;
++		goto fail_glue_init;
+ 
+ 	/*
+ 	 * Index 0 is card detect
+@@ -1458,7 +1458,7 @@ static int mmc_spi_probe(struct spi_device *spi)
+ 	 */
+ 	status = mmc_gpiod_request_cd(mmc, NULL, 0, false, 1000);
+ 	if (status == -EPROBE_DEFER)
+-		goto fail_add_host;
++		goto fail_gpiod_request;
+ 	if (!status) {
+ 		/*
+ 		 * The platform has a CD GPIO signal that may support
+@@ -1473,7 +1473,7 @@ static int mmc_spi_probe(struct spi_device *spi)
+ 	/* Index 1 is write protect/read only */
+ 	status = mmc_gpiod_request_ro(mmc, NULL, 1, 0);
+ 	if (status == -EPROBE_DEFER)
+-		goto fail_add_host;
++		goto fail_gpiod_request;
+ 	if (!status)
+ 		has_ro = true;
+ 
+@@ -1487,7 +1487,7 @@ static int mmc_spi_probe(struct spi_device *spi)
+ 				? ", cd polling" : "");
+ 	return 0;
+ 
+-fail_add_host:
++fail_gpiod_request:
+ 	mmc_remove_host(mmc);
+ fail_glue_init:
+ 	mmc_spi_dma_free(host);
+diff --git a/drivers/net/ethernet/broadcom/bgmac-bcma.c b/drivers/net/ethernet/broadcom/bgmac-bcma.c
+index 26746197515fc..022aebb68f462 100644
+--- a/drivers/net/ethernet/broadcom/bgmac-bcma.c
++++ b/drivers/net/ethernet/broadcom/bgmac-bcma.c
+@@ -228,12 +228,12 @@ static int bgmac_probe(struct bcma_device *core)
+ 		bgmac->feature_flags |= BGMAC_FEAT_CLKCTLST;
+ 		bgmac->feature_flags |= BGMAC_FEAT_FLW_CTRL1;
+ 		bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_PHY;
+-		if (ci->pkg == BCMA_PKG_ID_BCM47188 ||
+-		    ci->pkg == BCMA_PKG_ID_BCM47186) {
++		if ((ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg == BCMA_PKG_ID_BCM47186) ||
++		    (ci->id == BCMA_CHIP_ID_BCM53572 && ci->pkg == BCMA_PKG_ID_BCM47188)) {
+ 			bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_RGMII;
+ 			bgmac->feature_flags |= BGMAC_FEAT_IOST_ATTACHED;
+ 		}
+-		if (ci->pkg == BCMA_PKG_ID_BCM5358)
++		if (ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg == BCMA_PKG_ID_BCM5358)
+ 			bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_EPHYRMII;
+ 		break;
+ 	case BCMA_CHIP_ID_BCM53573:
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 92f54e3333958..c4a768ce8c99d 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -8761,10 +8761,14 @@ int bnxt_reserve_rings(struct bnxt *bp, bool irq_re_init)
+ 		netdev_err(bp->dev, "ring reservation/IRQ init failure rc: %d\n", rc);
+ 		return rc;
+ 	}
+-	if (tcs && (bp->tx_nr_rings_per_tc * tcs != bp->tx_nr_rings)) {
++	if (tcs && (bp->tx_nr_rings_per_tc * tcs !=
++		    bp->tx_nr_rings - bp->tx_nr_rings_xdp)) {
+ 		netdev_err(bp->dev, "tx ring reservation failure\n");
+ 		netdev_reset_tc(bp->dev);
+-		bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
++		if (bp->tx_nr_rings_xdp)
++			bp->tx_nr_rings_per_tc = bp->tx_nr_rings_xdp;
++		else
++			bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
+ 		return -ENOMEM;
+ 	}
+ 	return 0;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 2c60d2a933308..9e8a20a94862f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -2788,7 +2788,7 @@ static int i40e_change_mtu(struct net_device *netdev, int new_mtu)
+ 	struct i40e_pf *pf = vsi->back;
+ 
+ 	if (i40e_enabled_xdp_vsi(vsi)) {
+-		int frame_size = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
++		int frame_size = new_mtu + I40E_PACKET_HDR_PAD;
+ 
+ 		if (frame_size > i40e_max_xdp_frame_size(vsi))
+ 			return -EINVAL;
+@@ -12520,6 +12520,8 @@ static int i40e_ndo_bridge_setlink(struct net_device *dev,
+ 	}
+ 
+ 	br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
++	if (!br_spec)
++		return -EINVAL;
+ 
+ 	nla_for_each_nested(attr, br_spec, rem) {
+ 		__u16 mode;
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+index 27c6f911737bf..18251edbfabfb 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+@@ -67,6 +67,8 @@
+ #define IXGBE_RXBUFFER_4K    4096
+ #define IXGBE_MAX_RXBUFFER  16384  /* largest size for a single descriptor */
+ 
++#define IXGBE_PKT_HDR_PAD   (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
++
+ /* Attempt to maximize the headroom available for incoming frames.  We
+  * use a 2K buffer for receives and need 1536/1534 to store the data for
+  * the frame.  This leaves us with 512 bytes of room.  From that we need
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index b5b8be4672aa4..5c542f5d2b20d 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -6728,6 +6728,18 @@ static void ixgbe_free_all_rx_resources(struct ixgbe_adapter *adapter)
+ 			ixgbe_free_rx_resources(adapter->rx_ring[i]);
+ }
+ 
++/**
++ * ixgbe_max_xdp_frame_size - returns the maximum allowed frame size for XDP
++ * @adapter: device handle, pointer to adapter
++ */
++static int ixgbe_max_xdp_frame_size(struct ixgbe_adapter *adapter)
++{
++	if (PAGE_SIZE >= 8192 || adapter->flags2 & IXGBE_FLAG2_RX_LEGACY)
++		return IXGBE_RXBUFFER_2K;
++	else
++		return IXGBE_RXBUFFER_3K;
++}
++
+ /**
+  * ixgbe_change_mtu - Change the Maximum Transfer Unit
+  * @netdev: network interface device structure
+@@ -6739,18 +6751,12 @@ static int ixgbe_change_mtu(struct net_device *netdev, int new_mtu)
+ {
+ 	struct ixgbe_adapter *adapter = netdev_priv(netdev);
+ 
+-	if (adapter->xdp_prog) {
+-		int new_frame_size = new_mtu + ETH_HLEN + ETH_FCS_LEN +
+-				     VLAN_HLEN;
+-		int i;
+-
+-		for (i = 0; i < adapter->num_rx_queues; i++) {
+-			struct ixgbe_ring *ring = adapter->rx_ring[i];
++	if (ixgbe_enabled_xdp_adapter(adapter)) {
++		int new_frame_size = new_mtu + IXGBE_PKT_HDR_PAD;
+ 
+-			if (new_frame_size > ixgbe_rx_bufsz(ring)) {
+-				e_warn(probe, "Requested MTU size is not supported with XDP\n");
+-				return -EINVAL;
+-			}
++		if (new_frame_size > ixgbe_max_xdp_frame_size(adapter)) {
++			e_warn(probe, "Requested MTU size is not supported with XDP\n");
++			return -EINVAL;
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
+index bfc4a92f1d92b..78be62ecc9a9a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
+@@ -505,6 +505,8 @@ static int qcom_ethqos_probe(struct platform_device *pdev)
+ 	plat_dat->has_gmac4 = 1;
+ 	plat_dat->pmt = 1;
+ 	plat_dat->tso_en = of_property_read_bool(np, "snps,tso");
++	if (of_device_is_compatible(np, "qcom,qcs404-ethqos"))
++		plat_dat->rx_clk_runs_in_lpi = 1;
+ 
+ 	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+ 	if (ret)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac5.c b/drivers/net/ethernet/stmicro/stmmac/dwmac5.c
+index de5255b951e14..d1b8b51bf6ad9 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac5.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac5.c
+@@ -520,9 +520,9 @@ int dwmac5_flex_pps_config(void __iomem *ioaddr, int index,
+ 		return 0;
+ 	}
+ 
+-	val |= PPSCMDx(index, 0x2);
+ 	val |= TRGTMODSELx(index, 0x2);
+ 	val |= PPSEN0;
++	writel(val, ioaddr + MAC_PPS_CONTROL);
+ 
+ 	writel(cfg->start.tv_sec, ioaddr + MAC_PPSx_TARGET_TIME_SEC(index));
+ 
+@@ -547,6 +547,7 @@ int dwmac5_flex_pps_config(void __iomem *ioaddr, int index,
+ 	writel(period - 1, ioaddr + MAC_PPSx_WIDTH(index));
+ 
+ 	/* Finally, activate it */
++	val |= PPSCMDx(index, 0x2);
+ 	writel(val, ioaddr + MAC_PPS_CONTROL);
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index b52ca2fe04d87..1ec000d4c7705 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -1058,7 +1058,8 @@ static void stmmac_mac_link_up(struct phylink_config *config,
+ 
+ 	stmmac_mac_set(priv, priv->ioaddr, true);
+ 	if (phy && priv->dma_cap.eee) {
+-		priv->eee_active = phy_init_eee(phy, 1) >= 0;
++		priv->eee_active =
++			phy_init_eee(phy, !priv->plat->rx_clk_runs_in_lpi) >= 0;
+ 		priv->eee_enabled = stmmac_eee_init(priv);
+ 		priv->tx_lpi_enabled = priv->eee_enabled;
+ 		stmmac_set_eee_pls(priv, priv->hw, true);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 1ed74cfb61fc5..f02ce09020fbc 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -559,7 +559,7 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
+ 	dma_cfg->mixed_burst = of_property_read_bool(np, "snps,mixed-burst");
+ 
+ 	plat->force_thresh_dma_mode = of_property_read_bool(np, "snps,force_thresh_dma_mode");
+-	if (plat->force_thresh_dma_mode) {
++	if (plat->force_thresh_dma_mode && plat->force_sf_dma_mode) {
+ 		plat->force_sf_dma_mode = 0;
+ 		dev_warn(&pdev->dev,
+ 			 "force_sf_dma_mode is ignored if force_thresh_dma_mode is set.\n");
+diff --git a/drivers/net/usb/kalmia.c b/drivers/net/usb/kalmia.c
+index fc5895f85cee2..a552bb1665b8a 100644
+--- a/drivers/net/usb/kalmia.c
++++ b/drivers/net/usb/kalmia.c
+@@ -65,8 +65,8 @@ kalmia_send_init_packet(struct usbnet *dev, u8 *init_msg, u8 init_msg_len,
+ 		init_msg, init_msg_len, &act_len, KALMIA_USB_TIMEOUT);
+ 	if (status != 0) {
+ 		netdev_err(dev->net,
+-			"Error sending init packet. Status %i, length %i\n",
+-			status, act_len);
++			"Error sending init packet. Status %i\n",
++			status);
+ 		return status;
+ 	}
+ 	else if (act_len != init_msg_len) {
+@@ -83,8 +83,8 @@ kalmia_send_init_packet(struct usbnet *dev, u8 *init_msg, u8 init_msg_len,
+ 
+ 	if (status != 0)
+ 		netdev_err(dev->net,
+-			"Error receiving init result. Status %i, length %i\n",
+-			status, act_len);
++			"Error receiving init result. Status %i\n",
++			status);
+ 	else if (act_len != expected_len)
+ 		netdev_err(dev->net, "Unexpected init result length: %i\n",
+ 			act_len);
+diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
+index 640031cbda7cc..46fc44ce86712 100644
+--- a/drivers/nvme/target/fc.c
++++ b/drivers/nvme/target/fc.c
+@@ -1675,8 +1675,10 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
+ 		else {
+ 			queue = nvmet_fc_alloc_target_queue(iod->assoc, 0,
+ 					be16_to_cpu(rqst->assoc_cmd.sqsize));
+-			if (!queue)
++			if (!queue) {
+ 				ret = VERR_QUEUE_ALLOC_FAIL;
++				nvmet_fc_tgt_a_put(iod->assoc);
++			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 48fbe49e3772b..1505c745154e7 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -627,16 +627,19 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 
+ 	nvmem->id = rval;
+ 
+-	if (config->wp_gpio)
+-		nvmem->wp_gpio = config->wp_gpio;
+-	else if (!config->ignore_wp)
++	nvmem->dev.type = &nvmem_provider_type;
++	nvmem->dev.bus = &nvmem_bus_type;
++	nvmem->dev.parent = config->dev;
++
++	device_initialize(&nvmem->dev);
++
++	if (!config->ignore_wp)
+ 		nvmem->wp_gpio = gpiod_get_optional(config->dev, "wp",
+ 						    GPIOD_OUT_HIGH);
+ 	if (IS_ERR(nvmem->wp_gpio)) {
+-		ida_free(&nvmem_ida, nvmem->id);
+ 		rval = PTR_ERR(nvmem->wp_gpio);
+-		kfree(nvmem);
+-		return ERR_PTR(rval);
++		nvmem->wp_gpio = NULL;
++		goto err_put_device;
+ 	}
+ 
+ 	kref_init(&nvmem->refcnt);
+@@ -648,9 +651,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 	nvmem->stride = config->stride ?: 1;
+ 	nvmem->word_size = config->word_size ?: 1;
+ 	nvmem->size = config->size;
+-	nvmem->dev.type = &nvmem_provider_type;
+-	nvmem->dev.bus = &nvmem_bus_type;
+-	nvmem->dev.parent = config->dev;
+ 	nvmem->root_only = config->root_only;
+ 	nvmem->priv = config->priv;
+ 	nvmem->type = config->type;
+@@ -661,18 +661,21 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 
+ 	switch (config->id) {
+ 	case NVMEM_DEVID_NONE:
+-		dev_set_name(&nvmem->dev, "%s", config->name);
++		rval = dev_set_name(&nvmem->dev, "%s", config->name);
+ 		break;
+ 	case NVMEM_DEVID_AUTO:
+-		dev_set_name(&nvmem->dev, "%s%d", config->name, nvmem->id);
++		rval = dev_set_name(&nvmem->dev, "%s%d", config->name, nvmem->id);
+ 		break;
+ 	default:
+-		dev_set_name(&nvmem->dev, "%s%d",
++		rval = dev_set_name(&nvmem->dev, "%s%d",
+ 			     config->name ? : "nvmem",
+ 			     config->name ? config->id : nvmem->id);
+ 		break;
+ 	}
+ 
++	if (rval)
++		goto err_put_device;
++
+ 	nvmem->read_only = device_property_present(config->dev, "read-only") ||
+ 			   config->read_only || !nvmem->reg_write;
+ 
+@@ -680,16 +683,10 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 	nvmem->dev.groups = nvmem_dev_groups;
+ #endif
+ 
+-	dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
+-
+-	rval = device_register(&nvmem->dev);
+-	if (rval)
+-		goto err_put_device;
+-
+ 	if (config->compat) {
+ 		rval = nvmem_sysfs_setup_compat(nvmem, config);
+ 		if (rval)
+-			goto err_device_del;
++			goto err_put_device;
+ 	}
+ 
+ 	if (config->cells) {
+@@ -706,6 +703,12 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 	if (rval)
+ 		goto err_remove_cells;
+ 
++	dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
++
++	rval = device_add(&nvmem->dev);
++	if (rval)
++		goto err_remove_cells;
++
+ 	blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
+ 
+ 	return nvmem;
+@@ -714,8 +717,6 @@ err_remove_cells:
+ 	nvmem_device_remove_all_cells(nvmem);
+ 	if (config->compat)
+ 		nvmem_sysfs_remove_compat(nvmem, config);
+-err_device_del:
+-	device_del(&nvmem->dev);
+ err_put_device:
+ 	put_device(&nvmem->dev);
+ 
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index bc26acace2c30..b96fbc8dba09d 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -1030,6 +1030,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BIOS_DATE, "05/07/2016"),
+ 		},
+ 	},
++	{
++		/* Chuwi Vi8 (CWI501) */
++		.driver_data = (void *)&chuwi_vi8_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "i86"),
++			DMI_MATCH(DMI_BIOS_VERSION, "CHUWI.W86JLBNR01"),
++		},
++	},
+ 	{
+ 		/* Chuwi Vi8 (CWI506) */
+ 		.driver_data = (void *)&chuwi_vi8_data,
+diff --git a/fs/aio.c b/fs/aio.c
+index 2a9dfa58ec3ab..5934ea84b4993 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -335,6 +335,9 @@ static int aio_ring_mremap(struct vm_area_struct *vma)
+ 	spin_lock(&mm->ioctx_lock);
+ 	rcu_read_lock();
+ 	table = rcu_dereference(mm->ioctx_table);
++	if (!table)
++		goto out_unlock;
++
+ 	for (i = 0; i < table->nr; i++) {
+ 		struct kioctx *ctx;
+ 
+@@ -348,6 +351,7 @@ static int aio_ring_mremap(struct vm_area_struct *vma)
+ 		}
+ 	}
+ 
++out_unlock:
+ 	rcu_read_unlock();
+ 	spin_unlock(&mm->ioctx_lock);
+ 	return res;
+diff --git a/fs/nilfs2/ioctl.c b/fs/nilfs2/ioctl.c
+index 07d26f61f22aa..3a1dea5d14484 100644
+--- a/fs/nilfs2/ioctl.c
++++ b/fs/nilfs2/ioctl.c
+@@ -1129,7 +1129,14 @@ static int nilfs_ioctl_set_alloc_range(struct inode *inode, void __user *argp)
+ 
+ 	minseg = range[0] + segbytes - 1;
+ 	do_div(minseg, segbytes);
++
++	if (range[1] < 4096)
++		goto out;
++
+ 	maxseg = NILFS_SB2_OFFSET_BYTES(range[1]);
++	if (maxseg < segbytes)
++		goto out;
++
+ 	do_div(maxseg, segbytes);
+ 	maxseg--;
+ 
+diff --git a/fs/nilfs2/super.c b/fs/nilfs2/super.c
+index 7a41c9727c9e2..7751848687635 100644
+--- a/fs/nilfs2/super.c
++++ b/fs/nilfs2/super.c
+@@ -408,6 +408,15 @@ int nilfs_resize_fs(struct super_block *sb, __u64 newsize)
+ 	if (newsize > devsize)
+ 		goto out;
+ 
++	/*
++	 * Prevent underflow in second superblock position calculation.
++	 * The exact minimum size check is done in nilfs_sufile_resize().
++	 */
++	if (newsize < 4096) {
++		ret = -ENOSPC;
++		goto out;
++	}
++
+ 	/*
+ 	 * Write lock is required to protect some functions depending
+ 	 * on the number of segments, the number of reserved segments,
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index 211937054c31f..38a1206cf9481 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -544,9 +544,15 @@ static int nilfs_load_super_block(struct the_nilfs *nilfs,
+ {
+ 	struct nilfs_super_block **sbp = nilfs->ns_sbp;
+ 	struct buffer_head **sbh = nilfs->ns_sbh;
+-	u64 sb2off = NILFS_SB2_OFFSET_BYTES(nilfs->ns_bdev->bd_inode->i_size);
++	u64 sb2off, devsize = nilfs->ns_bdev->bd_inode->i_size;
+ 	int valid[2], swp = 0;
+ 
++	if (devsize < NILFS_SEG_MIN_BLOCKS * NILFS_MIN_BLOCK_SIZE + 4096) {
++		nilfs_err(sb, "device size too small");
++		return -EINVAL;
++	}
++	sb2off = NILFS_SB2_OFFSET_BYTES(devsize);
++
+ 	sbp[0] = nilfs_read_super_block(sb, NILFS_SB_OFFSET_BYTES, blocksize,
+ 					&sbh[0]);
+ 	sbp[1] = nilfs_read_super_block(sb, sb2off, blocksize, &sbh[1]);
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index b019f27c13601..0e734c8b4dfa2 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -531,9 +531,16 @@ static long ovl_fallocate(struct file *file, int mode, loff_t offset, loff_t len
+ 	const struct cred *old_cred;
+ 	int ret;
+ 
++	inode_lock(inode);
++	/* Update mode */
++	ovl_copyattr(ovl_inode_real(inode), inode);
++	ret = file_remove_privs(file);
++	if (ret)
++		goto out_unlock;
++
+ 	ret = ovl_real_fdget(file, &real);
+ 	if (ret)
+-		return ret;
++		goto out_unlock;
+ 
+ 	old_cred = ovl_override_creds(file_inode(file)->i_sb);
+ 	ret = vfs_fallocate(real.file, mode, offset, len);
+@@ -544,6 +551,9 @@ static long ovl_fallocate(struct file *file, int mode, loff_t offset, loff_t len
+ 
+ 	fdput(real);
+ 
++out_unlock:
++	inode_unlock(inode);
++
+ 	return ret;
+ }
+ 
+@@ -687,14 +697,23 @@ static loff_t ovl_copyfile(struct file *file_in, loff_t pos_in,
+ 	const struct cred *old_cred;
+ 	loff_t ret;
+ 
++	inode_lock(inode_out);
++	if (op != OVL_DEDUPE) {
++		/* Update mode */
++		ovl_copyattr(ovl_inode_real(inode_out), inode_out);
++		ret = file_remove_privs(file_out);
++		if (ret)
++			goto out_unlock;
++	}
++
+ 	ret = ovl_real_fdget(file_out, &real_out);
+ 	if (ret)
+-		return ret;
++		goto out_unlock;
+ 
+ 	ret = ovl_real_fdget(file_in, &real_in);
+ 	if (ret) {
+ 		fdput(real_out);
+-		return ret;
++		goto out_unlock;
+ 	}
+ 
+ 	old_cred = ovl_override_creds(file_inode(file_out)->i_sb);
+@@ -723,6 +742,9 @@ static loff_t ovl_copyfile(struct file *file_in, loff_t pos_in,
+ 	fdput(real_in);
+ 	fdput(real_out);
+ 
++out_unlock:
++	inode_unlock(inode_out);
++
+ 	return ret;
+ }
+ 
+diff --git a/fs/squashfs/xattr_id.c b/fs/squashfs/xattr_id.c
+index b88d19e9581e9..c8469c656e0dc 100644
+--- a/fs/squashfs/xattr_id.c
++++ b/fs/squashfs/xattr_id.c
+@@ -76,7 +76,7 @@ __le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start,
+ 	/* Sanity check values */
+ 
+ 	/* there is always at least one xattr id */
+-	if (*xattr_ids <= 0)
++	if (*xattr_ids == 0)
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index c0ba379574a46..99b73fc4a8246 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -542,7 +542,10 @@ static inline struct hstate *hstate_sizelog(int page_size_log)
+ 	if (!page_size_log)
+ 		return &default_hstate;
+ 
+-	return size_to_hstate(1UL << page_size_log);
++	if (page_size_log < BITS_PER_LONG)
++		return size_to_hstate(1UL << page_size_log);
++
++	return NULL;
+ }
+ 
+ static inline struct hstate *hstate_vma(struct vm_area_struct *vma)
+diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
+index 39ec67689898b..5e07f3cfad301 100644
+--- a/include/linux/nvmem-provider.h
++++ b/include/linux/nvmem-provider.h
+@@ -49,7 +49,6 @@ enum nvmem_type {
+  * @word_size:	Minimum read/write access granularity.
+  * @stride:	Minimum read/write access stride.
+  * @priv:	User context passed to read/write callbacks.
+- * @wp-gpio:	Write protect pin
+  * @ignore_wp:  Write Protect pin is managed by the provider.
+  *
+  * Note: A default "nvmem<id>" name will be assigned to the device if
+@@ -64,7 +63,6 @@ struct nvmem_config {
+ 	const char		*name;
+ 	int			id;
+ 	struct module		*owner;
+-	struct gpio_desc	*wp_gpio;
+ 	const struct nvmem_cell_info	*cells;
+ 	int			ncells;
+ 	enum nvmem_type		type;
+diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
+index 40df88728a6f4..abf7b8ec1fb64 100644
+--- a/include/linux/stmmac.h
++++ b/include/linux/stmmac.h
+@@ -199,6 +199,7 @@ struct plat_stmmacenet_data {
+ 	int rss_en;
+ 	int mac_port_sel_speed;
+ 	bool en_tx_lpi_clockgating;
++	bool rx_clk_runs_in_lpi;
+ 	int has_xgmac;
+ 	bool vlan_fail_q_en;
+ 	u8 vlan_fail_q;
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 69bbbe8bbf34a..0f48d50a6dde7 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2243,6 +2243,19 @@ static inline __must_check bool skb_set_owner_sk_safe(struct sk_buff *skb, struc
+ 	return false;
+ }
+ 
++static inline struct sk_buff *skb_clone_and_charge_r(struct sk_buff *skb, struct sock *sk)
++{
++	skb = skb_clone(skb, sk_gfp_mask(sk, GFP_ATOMIC));
++	if (skb) {
++		if (sk_rmem_schedule(sk, skb, skb->truesize)) {
++			skb_set_owner_r(skb, sk);
++			return skb;
++		}
++		__kfree_skb(skb);
++	}
++	return NULL;
++}
++
+ void sk_reset_timer(struct sock *sk, struct timer_list *timer,
+ 		    unsigned long expires);
+ 
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index b7f38f3ad42a2..debaeb07ae533 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -1158,10 +1158,11 @@ void psi_trigger_destroy(struct psi_trigger *t)
+ 
+ 	group = t->group;
+ 	/*
+-	 * Wakeup waiters to stop polling. Can happen if cgroup is deleted
+-	 * from under a polling process.
++	 * Wakeup waiters to stop polling and clear the queue to prevent it from
++	 * being accessed later. Can happen if cgroup is deleted from under a
++	 * polling process.
+ 	 */
+-	wake_up_interruptible(&t->event_wait);
++	wake_up_pollfree(&t->event_wait);
+ 
+ 	mutex_lock(&group->trigger_lock);
+ 
+diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
+index daeaa7140d0aa..1de426d3f694c 100644
+--- a/kernel/time/alarmtimer.c
++++ b/kernel/time/alarmtimer.c
+@@ -470,11 +470,35 @@ u64 alarm_forward(struct alarm *alarm, ktime_t now, ktime_t interval)
+ }
+ EXPORT_SYMBOL_GPL(alarm_forward);
+ 
+-u64 alarm_forward_now(struct alarm *alarm, ktime_t interval)
++static u64 __alarm_forward_now(struct alarm *alarm, ktime_t interval, bool throttle)
+ {
+ 	struct alarm_base *base = &alarm_bases[alarm->type];
++	ktime_t now = base->get_ktime();
++
++	if (IS_ENABLED(CONFIG_HIGH_RES_TIMERS) && throttle) {
++		/*
++		 * Same issue as with posix_timer_fn(). Timers which are
++		 * periodic but the signal is ignored can starve the system
++		 * with a very small interval. The real fix which was
++		 * promised in the context of posix_timer_fn() never
++		 * materialized, but someone should really work on it.
++		 *
++		 * To prevent DOS fake @now to be 1 jiffie out which keeps
++		 * the overrun accounting correct but creates an
++		 * inconsistency vs. timer_gettime(2).
++		 */
++		ktime_t kj = NSEC_PER_SEC / HZ;
++
++		if (interval < kj)
++			now = ktime_add(now, kj);
++	}
++
++	return alarm_forward(alarm, now, interval);
++}
+ 
+-	return alarm_forward(alarm, base->get_ktime(), interval);
++u64 alarm_forward_now(struct alarm *alarm, ktime_t interval)
++{
++	return __alarm_forward_now(alarm, interval, false);
+ }
+ EXPORT_SYMBOL_GPL(alarm_forward_now);
+ 
+@@ -548,9 +572,10 @@ static enum alarmtimer_restart alarm_handle_timer(struct alarm *alarm,
+ 	if (posix_timer_event(ptr, si_private) && ptr->it_interval) {
+ 		/*
+ 		 * Handle ignored signals and rearm the timer. This will go
+-		 * away once we handle ignored signals proper.
++		 * away once we handle ignored signals proper. Ensure that
++		 * small intervals cannot starve the system.
+ 		 */
+-		ptr->it_overrun += alarm_forward_now(alarm, ptr->it_interval);
++		ptr->it_overrun += __alarm_forward_now(alarm, ptr->it_interval, true);
+ 		++ptr->it_requeue_pending;
+ 		ptr->it_active = 1;
+ 		result = ALARMTIMER_RESTART;
+diff --git a/mm/memblock.c b/mm/memblock.c
+index f6a4dffb9a888..f72d539570339 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -1597,13 +1597,7 @@ void __init __memblock_free_late(phys_addr_t base, phys_addr_t size)
+ 	end = PFN_DOWN(base + size);
+ 
+ 	for (; cursor < end; cursor++) {
+-		/*
+-		 * Reserved pages are always initialized by the end of
+-		 * memblock_free_all() (by memmap_init() and, if deferred
+-		 * initialization is enabled, memmap_init_reserved_pages()), so
+-		 * these pages can be released directly to the buddy allocator.
+-		 */
+-		__free_pages_core(pfn_to_page(cursor), 0);
++		memblock_free_pages(pfn_to_page(cursor), cursor, 0);
+ 		totalram_pages_inc();
+ 	}
+ }
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 37bb60a7e97ed..b7646d4e079b4 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -10326,7 +10326,7 @@ void netdev_stats_to_stats64(struct rtnl_link_stats64 *stats64,
+ 
+ 	BUILD_BUG_ON(n > sizeof(*stats64) / sizeof(u64));
+ 	for (i = 0; i < n; i++)
+-		dst[i] = atomic_long_read(&src[i]);
++		dst[i] = (unsigned long)atomic_long_read(&src[i]);
+ 	/* zero out counters that only exist in rtnl_link_stats64 */
+ 	memset((char *)stats64 + n * sizeof(u64), 0,
+ 	       sizeof(*stats64) - n * sizeof(u64));
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 21c61a9c3b152..c563f9b325d05 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -541,11 +541,9 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock *sk,
+ 	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), NULL);
+ 	/* Clone pktoptions received with SYN, if we own the req */
+ 	if (*own_req && ireq->pktopts) {
+-		newnp->pktoptions = skb_clone(ireq->pktopts, GFP_ATOMIC);
++		newnp->pktoptions = skb_clone_and_charge_r(ireq->pktopts, newsk);
+ 		consume_skb(ireq->pktopts);
+ 		ireq->pktopts = NULL;
+-		if (newnp->pktoptions)
+-			skb_set_owner_r(newnp->pktoptions, newsk);
+ 	}
+ 
+ 	return newsk;
+@@ -605,7 +603,7 @@ static int dccp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
+ 					       --ANK (980728)
+ 	 */
+ 	if (np->rxopt.all)
+-		opt_skb = skb_clone(skb, GFP_ATOMIC);
++		opt_skb = skb_clone_and_charge_r(skb, sk);
+ 
+ 	if (sk->sk_state == DCCP_OPEN) { /* Fast path */
+ 		if (dccp_rcv_established(sk, skb, dccp_hdr(skb), skb->len))
+@@ -669,7 +667,6 @@ ipv6_pktoptions:
+ 			np->flow_label = ip6_flowlabel(ipv6_hdr(opt_skb));
+ 		if (ipv6_opt_accepted(sk, opt_skb,
+ 				      &DCCP_SKB_CB(opt_skb)->header.h6)) {
+-			skb_set_owner_r(opt_skb, sk);
+ 			memmove(IP6CB(opt_skb),
+ 				&DCCP_SKB_CB(opt_skb)->header.h6,
+ 				sizeof(struct inet6_skb_parm));
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index f4559e5bc84bf..a30ff5d6808aa 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -51,7 +51,7 @@ static void ip6_datagram_flow_key_init(struct flowi6 *fl6, struct sock *sk)
+ 	fl6->flowi6_mark = sk->sk_mark;
+ 	fl6->fl6_dport = inet->inet_dport;
+ 	fl6->fl6_sport = inet->inet_sport;
+-	fl6->flowlabel = np->flow_label;
++	fl6->flowlabel = ip6_make_flowinfo(np->tclass, np->flow_label);
+ 	fl6->flowi6_uid = sk->sk_uid;
+ 
+ 	if (!fl6->flowi6_oif)
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index c599e14be414d..e4ae5362cb51b 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -269,6 +269,7 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ 	fl6.flowi6_proto = IPPROTO_TCP;
+ 	fl6.daddr = sk->sk_v6_daddr;
+ 	fl6.saddr = saddr ? *saddr : np->saddr;
++	fl6.flowlabel = ip6_make_flowinfo(np->tclass, np->flow_label);
+ 	fl6.flowi6_oif = sk->sk_bound_dev_if;
+ 	fl6.flowi6_mark = sk->sk_mark;
+ 	fl6.fl6_dport = usin->sin6_port;
+@@ -1406,14 +1407,11 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
+ 
+ 		/* Clone pktoptions received with SYN, if we own the req */
+ 		if (ireq->pktopts) {
+-			newnp->pktoptions = skb_clone(ireq->pktopts,
+-						      sk_gfp_mask(sk, GFP_ATOMIC));
++			newnp->pktoptions = skb_clone_and_charge_r(ireq->pktopts, newsk);
+ 			consume_skb(ireq->pktopts);
+ 			ireq->pktopts = NULL;
+-			if (newnp->pktoptions) {
++			if (newnp->pktoptions)
+ 				tcp_v6_restore_cb(newnp->pktoptions);
+-				skb_set_owner_r(newnp->pktoptions, newsk);
+-			}
+ 		}
+ 	} else {
+ 		if (!req_unhash && found_dup_sk) {
+@@ -1481,7 +1479,7 @@ static int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
+ 					       --ANK (980728)
+ 	 */
+ 	if (np->rxopt.all)
+-		opt_skb = skb_clone(skb, sk_gfp_mask(sk, GFP_ATOMIC));
++		opt_skb = skb_clone_and_charge_r(skb, sk);
+ 
+ 	if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */
+ 		struct dst_entry *dst;
+@@ -1563,7 +1561,6 @@ ipv6_pktoptions:
+ 		if (np->repflow)
+ 			np->flow_label = ip6_flowlabel(ipv6_hdr(opt_skb));
+ 		if (ipv6_opt_accepted(sk, opt_skb, &TCP_SKB_CB(opt_skb)->header.h6)) {
+-			skb_set_owner_r(opt_skb, sk);
+ 			tcp_v6_restore_cb(opt_skb);
+ 			opt_skb = xchg(&np->pktoptions, opt_skb);
+ 		} else {
+diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
+index 72398149e4d4f..1dcbdab9319bb 100644
+--- a/net/mpls/af_mpls.c
++++ b/net/mpls/af_mpls.c
+@@ -1427,6 +1427,7 @@ static int mpls_dev_sysctl_register(struct net_device *dev,
+ free:
+ 	kfree(table);
+ out:
++	mdev->sysctl = NULL;
+ 	return -ENOBUFS;
+ }
+ 
+@@ -1436,6 +1437,9 @@ static void mpls_dev_sysctl_unregister(struct net_device *dev,
+ 	struct net *net = dev_net(dev);
+ 	struct ctl_table *table;
+ 
++	if (!mdev->sysctl)
++		return;
++
+ 	table = mdev->sysctl->ctl_table_arg;
+ 	unregister_net_sysctl_table(mdev->sysctl);
+ 	kfree(table);
+diff --git a/net/netfilter/nft_tproxy.c b/net/netfilter/nft_tproxy.c
+index 37c728bdad41c..c49d318f8e6ed 100644
+--- a/net/netfilter/nft_tproxy.c
++++ b/net/netfilter/nft_tproxy.c
+@@ -289,6 +289,13 @@ static int nft_tproxy_dump(struct sk_buff *skb,
+ 	return 0;
+ }
+ 
++static int nft_tproxy_validate(const struct nft_ctx *ctx,
++			       const struct nft_expr *expr,
++			       const struct nft_data **data)
++{
++	return nft_chain_validate_hooks(ctx->chain, 1 << NF_INET_PRE_ROUTING);
++}
++
+ static struct nft_expr_type nft_tproxy_type;
+ static const struct nft_expr_ops nft_tproxy_ops = {
+ 	.type		= &nft_tproxy_type,
+@@ -296,6 +303,7 @@ static const struct nft_expr_ops nft_tproxy_ops = {
+ 	.eval		= nft_tproxy_eval,
+ 	.init		= nft_tproxy_init,
+ 	.dump		= nft_tproxy_dump,
++	.validate	= nft_tproxy_validate,
+ };
+ 
+ static struct nft_expr_type nft_tproxy_type __read_mostly = {
+diff --git a/net/openvswitch/meter.c b/net/openvswitch/meter.c
+index e594b4d6b58a9..0cf3dda5319fe 100644
+--- a/net/openvswitch/meter.c
++++ b/net/openvswitch/meter.c
+@@ -450,7 +450,7 @@ static int ovs_meter_cmd_set(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	err = attach_meter(meter_tbl, meter);
+ 	if (err)
+-		goto exit_unlock;
++		goto exit_free_old_meter;
+ 
+ 	ovs_unlock();
+ 
+@@ -473,6 +473,8 @@ static int ovs_meter_cmd_set(struct sk_buff *skb, struct genl_info *info)
+ 	genlmsg_end(reply, ovs_reply_header);
+ 	return genlmsg_reply(reply, info);
+ 
++exit_free_old_meter:
++	ovs_meter_free(old_meter);
+ exit_unlock:
+ 	ovs_unlock();
+ 	nlmsg_free(reply);
+diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
+index 29a208ed8fb88..86c93cf1744b0 100644
+--- a/net/rose/af_rose.c
++++ b/net/rose/af_rose.c
+@@ -487,6 +487,12 @@ static int rose_listen(struct socket *sock, int backlog)
+ {
+ 	struct sock *sk = sock->sk;
+ 
++	lock_sock(sk);
++	if (sock->state != SS_UNCONNECTED) {
++		release_sock(sk);
++		return -EINVAL;
++	}
++
+ 	if (sk->sk_state != TCP_LISTEN) {
+ 		struct rose_sock *rose = rose_sk(sk);
+ 
+@@ -496,8 +502,10 @@ static int rose_listen(struct socket *sock, int backlog)
+ 		memset(rose->dest_digis, 0, AX25_ADDR_LEN * ROSE_MAX_DIGIS);
+ 		sk->sk_max_ack_backlog = backlog;
+ 		sk->sk_state           = TCP_LISTEN;
++		release_sock(sk);
+ 		return 0;
+ 	}
++	release_sock(sk);
+ 
+ 	return -EOPNOTSUPP;
+ }
+diff --git a/net/sched/act_bpf.c b/net/sched/act_bpf.c
+index a4c7ba35a3438..78f1cd70c8d19 100644
+--- a/net/sched/act_bpf.c
++++ b/net/sched/act_bpf.c
+@@ -307,7 +307,7 @@ static int tcf_bpf_init(struct net *net, struct nlattr *nla,
+ 	ret = tcf_idr_check_alloc(tn, &index, act, bind);
+ 	if (!ret) {
+ 		ret = tcf_idr_create(tn, index, est, act,
+-				     &act_bpf_ops, bind, true, 0);
++				     &act_bpf_ops, bind, true, flags);
+ 		if (ret < 0) {
+ 			tcf_idr_cleanup(tn, index);
+ 			return ret;
+diff --git a/net/sched/act_connmark.c b/net/sched/act_connmark.c
+index 31d268eedf3f9..b6576a250e851 100644
+--- a/net/sched/act_connmark.c
++++ b/net/sched/act_connmark.c
+@@ -124,7 +124,7 @@ static int tcf_connmark_init(struct net *net, struct nlattr *nla,
+ 	ret = tcf_idr_check_alloc(tn, &index, a, bind);
+ 	if (!ret) {
+ 		ret = tcf_idr_create(tn, index, est, a,
+-				     &act_connmark_ops, bind, false, 0);
++				     &act_connmark_ops, bind, false, flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			return ret;
+diff --git a/net/sched/act_ctinfo.c b/net/sched/act_ctinfo.c
+index 06c74f22ab98b..5aa005835c066 100644
+--- a/net/sched/act_ctinfo.c
++++ b/net/sched/act_ctinfo.c
+@@ -92,7 +92,7 @@ static int tcf_ctinfo_act(struct sk_buff *skb, const struct tc_action *a,
+ 	cp = rcu_dereference_bh(ca->params);
+ 
+ 	tcf_lastuse_update(&ca->tcf_tm);
+-	bstats_update(&ca->tcf_bstats, skb);
++	tcf_action_update_bstats(&ca->common, skb);
+ 	action = READ_ONCE(ca->tcf_action);
+ 
+ 	wlen = skb_network_offset(skb);
+@@ -211,8 +211,8 @@ static int tcf_ctinfo_init(struct net *net, struct nlattr *nla,
+ 	index = actparm->index;
+ 	err = tcf_idr_check_alloc(tn, &index, a, bind);
+ 	if (!err) {
+-		ret = tcf_idr_create(tn, index, est, a,
+-				     &act_ctinfo_ops, bind, false, 0);
++		ret = tcf_idr_create_from_flags(tn, index, est, a,
++						&act_ctinfo_ops, bind, flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			return ret;
+diff --git a/net/sched/act_gate.c b/net/sched/act_gate.c
+index a78cb79657182..0e7568a06351b 100644
+--- a/net/sched/act_gate.c
++++ b/net/sched/act_gate.c
+@@ -357,7 +357,7 @@ static int tcf_gate_init(struct net *net, struct nlattr *nla,
+ 
+ 	if (!err) {
+ 		ret = tcf_idr_create(tn, index, est, a,
+-				     &act_gate_ops, bind, false, 0);
++				     &act_gate_ops, bind, false, flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			return ret;
+diff --git a/net/sched/act_ife.c b/net/sched/act_ife.c
+index a2ddea04183af..99548b2a1bc83 100644
+--- a/net/sched/act_ife.c
++++ b/net/sched/act_ife.c
+@@ -553,7 +553,7 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ 
+ 	if (!exists) {
+ 		ret = tcf_idr_create(tn, index, est, a, &act_ife_ops,
+-				     bind, true, 0);
++				     bind, true, flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			kfree(p);
+diff --git a/net/sched/act_ipt.c b/net/sched/act_ipt.c
+index 8dc3bec0d3258..080f2952cd536 100644
+--- a/net/sched/act_ipt.c
++++ b/net/sched/act_ipt.c
+@@ -144,7 +144,7 @@ static int __tcf_ipt_init(struct net *net, unsigned int id, struct nlattr *nla,
+ 
+ 	if (!exists) {
+ 		ret = tcf_idr_create(tn, index, est, a, ops, bind,
+-				     false, 0);
++				     false, flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			return ret;
+diff --git a/net/sched/act_mpls.c b/net/sched/act_mpls.c
+index 09799412b2489..47b963ded4e43 100644
+--- a/net/sched/act_mpls.c
++++ b/net/sched/act_mpls.c
+@@ -254,7 +254,7 @@ static int tcf_mpls_init(struct net *net, struct nlattr *nla,
+ 
+ 	if (!exists) {
+ 		ret = tcf_idr_create(tn, index, est, a,
+-				     &act_mpls_ops, bind, true, 0);
++				     &act_mpls_ops, bind, true, flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			return ret;
+diff --git a/net/sched/act_nat.c b/net/sched/act_nat.c
+index 1ebd2a86d980f..8466dc25fe397 100644
+--- a/net/sched/act_nat.c
++++ b/net/sched/act_nat.c
+@@ -61,7 +61,7 @@ static int tcf_nat_init(struct net *net, struct nlattr *nla, struct nlattr *est,
+ 	err = tcf_idr_check_alloc(tn, &index, a, bind);
+ 	if (!err) {
+ 		ret = tcf_idr_create(tn, index, est, a,
+-				     &act_nat_ops, bind, false, 0);
++				     &act_nat_ops, bind, false, flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			return ret;
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index 0d5463ddfd62f..db0d3bff19eba 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -189,7 +189,7 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+ 	err = tcf_idr_check_alloc(tn, &index, a, bind);
+ 	if (!err) {
+ 		ret = tcf_idr_create(tn, index, est, a,
+-				     &act_pedit_ops, bind, false, 0);
++				     &act_pedit_ops, bind, false, flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			goto out_free;
+diff --git a/net/sched/act_police.c b/net/sched/act_police.c
+index 3807335889590..c30cd3ecb3911 100644
+--- a/net/sched/act_police.c
++++ b/net/sched/act_police.c
+@@ -87,7 +87,7 @@ static int tcf_police_init(struct net *net, struct nlattr *nla,
+ 
+ 	if (!exists) {
+ 		ret = tcf_idr_create(tn, index, NULL, a,
+-				     &act_police_ops, bind, true, 0);
++				     &act_police_ops, bind, true, flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			return ret;
+diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
+index 3ebf9ede3cf10..2f0e98bcf4945 100644
+--- a/net/sched/act_sample.c
++++ b/net/sched/act_sample.c
+@@ -69,7 +69,7 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+ 
+ 	if (!exists) {
+ 		ret = tcf_idr_create(tn, index, est, a,
+-				     &act_sample_ops, bind, true, 0);
++				     &act_sample_ops, bind, true, flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			return ret;
+diff --git a/net/sched/act_simple.c b/net/sched/act_simple.c
+index a4f3d0f0daa96..b9bbc87a87c5b 100644
+--- a/net/sched/act_simple.c
++++ b/net/sched/act_simple.c
+@@ -128,7 +128,7 @@ static int tcf_simp_init(struct net *net, struct nlattr *nla,
+ 
+ 	if (!exists) {
+ 		ret = tcf_idr_create(tn, index, est, a,
+-				     &act_simp_ops, bind, false, 0);
++				     &act_simp_ops, bind, false, flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			return ret;
+diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c
+index e5f3fb8b00e32..a5661f2d93e99 100644
+--- a/net/sched/act_skbedit.c
++++ b/net/sched/act_skbedit.c
+@@ -176,7 +176,7 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
+ 
+ 	if (!exists) {
+ 		ret = tcf_idr_create(tn, index, est, a,
+-				     &act_skbedit_ops, bind, true, 0);
++				     &act_skbedit_ops, bind, true, act_flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			return ret;
+diff --git a/net/sched/act_skbmod.c b/net/sched/act_skbmod.c
+index 8d17a543cc9fe..aa98dcac94b95 100644
+--- a/net/sched/act_skbmod.c
++++ b/net/sched/act_skbmod.c
+@@ -147,7 +147,7 @@ static int tcf_skbmod_init(struct net *net, struct nlattr *nla,
+ 
+ 	if (!exists) {
+ 		ret = tcf_idr_create(tn, index, est, a,
+-				     &act_skbmod_ops, bind, true, 0);
++				     &act_skbmod_ops, bind, true, flags);
+ 		if (ret) {
+ 			tcf_idr_cleanup(tn, index);
+ 			return ret;
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index 86250221d08d4..2c0c95204cb5a 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -12,6 +12,7 @@
+ #include <linux/errno.h>
+ #include <linux/slab.h>
+ #include <linux/refcount.h>
++#include <linux/rcupdate.h>
+ #include <net/act_api.h>
+ #include <net/netlink.h>
+ #include <net/pkt_cls.h>
+@@ -338,6 +339,7 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 	struct tcf_result cr = {};
+ 	int err, balloc = 0;
+ 	struct tcf_exts e;
++	bool update_h = false;
+ 
+ 	err = tcf_exts_init(&e, net, TCA_TCINDEX_ACT, TCA_TCINDEX_POLICE);
+ 	if (err < 0)
+@@ -455,10 +457,13 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 		}
+ 	}
+ 
+-	if (cp->perfect)
++	if (cp->perfect) {
+ 		r = cp->perfect + handle;
+-	else
+-		r = tcindex_lookup(cp, handle) ? : &new_filter_result;
++	} else {
++		/* imperfect area is updated in-place using rcu */
++		update_h = !!tcindex_lookup(cp, handle);
++		r = &new_filter_result;
++	}
+ 
+ 	if (r == &new_filter_result) {
+ 		f = kzalloc(sizeof(*f), GFP_KERNEL);
+@@ -484,7 +489,28 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 
+ 	rcu_assign_pointer(tp->root, cp);
+ 
+-	if (r == &new_filter_result) {
++	if (update_h) {
++		struct tcindex_filter __rcu **fp;
++		struct tcindex_filter *cf;
++
++		f->result.res = r->res;
++		tcf_exts_change(&f->result.exts, &r->exts);
++
++		/* imperfect area bucket */
++		fp = cp->h + (handle % cp->hash);
++
++		/* lookup the filter, guaranteed to exist */
++		for (cf = rcu_dereference_bh_rtnl(*fp); cf;
++		     fp = &cf->next, cf = rcu_dereference_bh_rtnl(*fp))
++			if (cf->key == (u16)handle)
++				break;
++
++		f->next = cf->next;
++
++		cf = rcu_replace_pointer(*fp, f, 1);
++		tcf_exts_get_net(&cf->result.exts);
++		tcf_queue_work(&cf->rwork, tcindex_destroy_fexts_work);
++	} else if (r == &new_filter_result) {
+ 		struct tcindex_filter *nfp;
+ 		struct tcindex_filter __rcu **fp;
+ 
+diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
+index c3ba018fd083e..ff84ed531199a 100644
+--- a/net/sched/sch_htb.c
++++ b/net/sched/sch_htb.c
+@@ -405,7 +405,10 @@ static void htb_activate_prios(struct htb_sched *q, struct htb_class *cl)
+ 	while (cl->cmode == HTB_MAY_BORROW && p && mask) {
+ 		m = mask;
+ 		while (m) {
+-			int prio = ffz(~m);
++			unsigned int prio = ffz(~m);
++
++			if (WARN_ON_ONCE(prio >= ARRAY_SIZE(p->inner.clprio)))
++				break;
+ 			m &= ~(1 << prio);
+ 
+ 			if (p->inner.clprio[prio].feed.rb_node)
+diff --git a/net/sctp/diag.c b/net/sctp/diag.c
+index 68ff82ff49a3d..07d0ada23bfd2 100644
+--- a/net/sctp/diag.c
++++ b/net/sctp/diag.c
+@@ -349,11 +349,9 @@ static int sctp_sock_filter(struct sctp_endpoint *ep, struct sctp_transport *tsp
+ 	struct sctp_comm_param *commp = p;
+ 	struct sock *sk = ep->base.sk;
+ 	const struct inet_diag_req_v2 *r = commp->r;
+-	struct sctp_association *assoc =
+-		list_entry(ep->asocs.next, struct sctp_association, asocs);
+ 
+ 	/* find the ep only once through the transports by this condition */
+-	if (tsp->asoc != assoc)
++	if (!list_is_first(&tsp->asoc->asocs, &ep->asocs))
+ 		return 0;
+ 
+ 	if (r->sdiag_family != AF_UNSPEC && sk->sk_family != r->sdiag_family)
+diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
+index 4efbcc41fdfb7..0a83afa5f373c 100644
+--- a/sound/pci/hda/hda_bind.c
++++ b/sound/pci/hda/hda_bind.c
+@@ -143,6 +143,7 @@ static int hda_codec_driver_probe(struct device *dev)
+ 
+  error:
+ 	snd_hda_codec_cleanup_for_unbind(codec);
++	codec->preset = NULL;
+ 	return err;
+ }
+ 
+@@ -159,6 +160,7 @@ static int hda_codec_driver_remove(struct device *dev)
+ 	if (codec->patch_ops.free)
+ 		codec->patch_ops.free(codec);
+ 	snd_hda_codec_cleanup_for_unbind(codec);
++	codec->preset = NULL;
+ 	module_put(dev->driver->owner);
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 39281106477eb..fc4a64a83ff2f 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -784,7 +784,6 @@ void snd_hda_codec_cleanup_for_unbind(struct hda_codec *codec)
+ 	snd_array_free(&codec->cvt_setups);
+ 	snd_array_free(&codec->spdif_out);
+ 	snd_array_free(&codec->verbs);
+-	codec->preset = NULL;
+ 	codec->follower_dig_outs = NULL;
+ 	codec->spdif_status_reset = 0;
+ 	snd_array_free(&codec->mixers);
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 2bd0a5839e805..48b802563c2da 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -1117,6 +1117,7 @@ static const struct hda_device_id snd_hda_id_conexant[] = {
+ 	HDA_CODEC_ENTRY(0x14f11f86, "CX8070", patch_conexant_auto),
+ 	HDA_CODEC_ENTRY(0x14f12008, "CX8200", patch_conexant_auto),
+ 	HDA_CODEC_ENTRY(0x14f120d0, "CX11970", patch_conexant_auto),
++	HDA_CODEC_ENTRY(0x14f120d1, "SN6180", patch_conexant_auto),
+ 	HDA_CODEC_ENTRY(0x14f15045, "CX20549 (Venice)", patch_conexant_auto),
+ 	HDA_CODEC_ENTRY(0x14f15047, "CX20551 (Waikiki)", patch_conexant_auto),
+ 	HDA_CODEC_ENTRY(0x14f15051, "CX20561 (Hermosa)", patch_conexant_auto),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d66d2cf7708ec..fffa681313b66 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -827,7 +827,7 @@ do_sku:
+ 			alc_setup_gpio(codec, 0x02);
+ 			break;
+ 		case 7:
+-			alc_setup_gpio(codec, 0x03);
++			alc_setup_gpio(codec, 0x04);
+ 			break;
+ 		case 5:
+ 		default:
+diff --git a/sound/soc/codecs/cs42l56.c b/sound/soc/codecs/cs42l56.c
+index d41e031931061..3c5ec47a8fe64 100644
+--- a/sound/soc/codecs/cs42l56.c
++++ b/sound/soc/codecs/cs42l56.c
+@@ -1193,18 +1193,12 @@ static int cs42l56_i2c_probe(struct i2c_client *i2c_client,
+ 	if (pdata) {
+ 		cs42l56->pdata = *pdata;
+ 	} else {
+-		pdata = devm_kzalloc(&i2c_client->dev, sizeof(*pdata),
+-				     GFP_KERNEL);
+-		if (!pdata)
+-			return -ENOMEM;
+-
+ 		if (i2c_client->dev.of_node) {
+ 			ret = cs42l56_handle_of_data(i2c_client,
+ 						     &cs42l56->pdata);
+ 			if (ret != 0)
+ 				return ret;
+ 		}
+-		cs42l56->pdata = *pdata;
+ 	}
+ 
+ 	if (cs42l56->pdata.gpio_nreset) {
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index 1f94fa5a15db6..5883d1fa3b7ed 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -704,6 +704,9 @@ static struct snd_soc_dai_link *sof_card_dai_links_create(struct device *dev,
+ 		links[id].num_platforms = ARRAY_SIZE(platform_component);
+ 		links[id].nonatomic = true;
+ 		links[id].dpcm_playback = 1;
++		/* feedback stream or firmware-generated echo reference */
++		links[id].dpcm_capture = 1;
++
+ 		links[id].no_pcm = 1;
+ 		links[id].cpus = &cpus[id];
+ 		links[id].num_cpus = 1;
+diff --git a/sound/soc/sof/intel/hda-dai.c b/sound/soc/sof/intel/hda-dai.c
+index de80f1b3d7f25..a6275cc92a405 100644
+--- a/sound/soc/sof/intel/hda-dai.c
++++ b/sound/soc/sof/intel/hda-dai.c
+@@ -212,6 +212,10 @@ static int hda_link_hw_params(struct snd_pcm_substream *substream,
+ 	int stream_tag;
+ 	int ret;
+ 
++	link = snd_hdac_ext_bus_get_link(bus, codec_dai->component->name);
++	if (!link)
++		return -EINVAL;
++
+ 	/* get stored dma data if resuming from system suspend */
+ 	link_dev = snd_soc_dai_get_dma_data(dai, substream);
+ 	if (!link_dev) {
+@@ -232,10 +236,6 @@ static int hda_link_hw_params(struct snd_pcm_substream *substream,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	link = snd_hdac_ext_bus_get_link(bus, codec_dai->component->name);
+-	if (!link)
+-		return -EINVAL;
+-
+ 	/* set the hdac_stream in the codec dai */
+ 	snd_soc_dai_set_stream(codec_dai, hdac_stream(link_dev), substream->stream);
+ 
+diff --git a/tools/testing/selftests/bpf/verifier/search_pruning.c b/tools/testing/selftests/bpf/verifier/search_pruning.c
+index 7e50cb80873a5..7e36078f8f482 100644
+--- a/tools/testing/selftests/bpf/verifier/search_pruning.c
++++ b/tools/testing/selftests/bpf/verifier/search_pruning.c
+@@ -154,3 +154,39 @@
+ 	.result_unpriv = ACCEPT,
+ 	.insn_processed = 15,
+ },
++/* The test performs a conditional 64-bit write to a stack location
++ * fp[-8], this is followed by an unconditional 8-bit write to fp[-8],
++ * then data is read from fp[-8]. This sequence is unsafe.
++ *
++ * The test would be mistakenly marked as safe w/o dst register parent
++ * preservation in verifier.c:copy_register_state() function.
++ *
++ * Note the usage of BPF_F_TEST_STATE_FREQ to force creation of the
++ * checkpoint state after conditional 64-bit assignment.
++ */
++{
++	"write tracking and register parent chain bug",
++	.insns = {
++	/* r6 = ktime_get_ns() */
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
++	/* r0 = ktime_get_ns() */
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	/* if r0 > r6 goto +1 */
++	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_6, 1),
++	/* *(u64 *)(r10 - 8) = 0xdeadbeef */
++	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0xdeadbeef),
++	/* r1 = 42 */
++	BPF_MOV64_IMM(BPF_REG_1, 42),
++	/* *(u8 *)(r10 - 8) = r1 */
++	BPF_STX_MEM(BPF_B, BPF_REG_FP, BPF_REG_1, -8),
++	/* r2 = *(u64 *)(r10 - 8) */
++	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_FP, -8),
++	/* exit(0) */
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.flags = BPF_F_TEST_STATE_FREQ,
++	.errstr = "invalid read from stack off -8+1 size 8",
++	.result = REJECT,
++},
+diff --git a/tools/virtio/linux/bug.h b/tools/virtio/linux/bug.h
+index b14c2c3b6b857..74aef964f5099 100644
+--- a/tools/virtio/linux/bug.h
++++ b/tools/virtio/linux/bug.h
+@@ -1,11 +1,9 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef BUG_H
+-#define BUG_H
++#ifndef _LINUX_BUG_H
++#define _LINUX_BUG_H
+ 
+ #define BUG_ON(__BUG_ON_cond) assert(!(__BUG_ON_cond))
+ 
+-#define BUILD_BUG_ON(x)
+-
+ #define BUG() abort()
+ 
+-#endif /* BUG_H */
++#endif /* _LINUX_BUG_H */
+diff --git a/tools/virtio/linux/build_bug.h b/tools/virtio/linux/build_bug.h
+new file mode 100644
+index 0000000000000..cdbb75e28a604
+--- /dev/null
++++ b/tools/virtio/linux/build_bug.h
+@@ -0,0 +1,7 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _LINUX_BUILD_BUG_H
++#define _LINUX_BUILD_BUG_H
++
++#define BUILD_BUG_ON(x)
++
++#endif	/* _LINUX_BUILD_BUG_H */
+diff --git a/tools/virtio/linux/cpumask.h b/tools/virtio/linux/cpumask.h
+new file mode 100644
+index 0000000000000..307da69d6b26c
+--- /dev/null
++++ b/tools/virtio/linux/cpumask.h
+@@ -0,0 +1,7 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _LINUX_CPUMASK_H
++#define _LINUX_CPUMASK_H
++
++#include <linux/kernel.h>
++
++#endif /* _LINUX_CPUMASK_H */
+diff --git a/tools/virtio/linux/gfp.h b/tools/virtio/linux/gfp.h
+new file mode 100644
+index 0000000000000..43d146f236f14
+--- /dev/null
++++ b/tools/virtio/linux/gfp.h
+@@ -0,0 +1,7 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __LINUX_GFP_H
++#define __LINUX_GFP_H
++
++#include <linux/topology.h>
++
++#endif
+diff --git a/tools/virtio/linux/kernel.h b/tools/virtio/linux/kernel.h
+index 315e85cabedab..063ccc8975647 100644
+--- a/tools/virtio/linux/kernel.h
++++ b/tools/virtio/linux/kernel.h
+@@ -10,6 +10,7 @@
+ #include <stdarg.h>
+ 
+ #include <linux/compiler.h>
++#include <linux/log2.h>
+ #include <linux/types.h>
+ #include <linux/list.h>
+ #include <linux/printk.h>
+diff --git a/tools/virtio/linux/kmsan.h b/tools/virtio/linux/kmsan.h
+new file mode 100644
+index 0000000000000..272b5aa285d5a
+--- /dev/null
++++ b/tools/virtio/linux/kmsan.h
+@@ -0,0 +1,12 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _LINUX_KMSAN_H
++#define _LINUX_KMSAN_H
++
++#include <linux/gfp.h>
++
++inline void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
++			     enum dma_data_direction dir)
++{
++}
++
++#endif /* _LINUX_KMSAN_H */
+diff --git a/tools/virtio/linux/scatterlist.h b/tools/virtio/linux/scatterlist.h
+index 369ee308b6686..74d9e1825748e 100644
+--- a/tools/virtio/linux/scatterlist.h
++++ b/tools/virtio/linux/scatterlist.h
+@@ -2,6 +2,7 @@
+ #ifndef SCATTERLIST_H
+ #define SCATTERLIST_H
+ #include <linux/kernel.h>
++#include <linux/bug.h>
+ 
+ struct scatterlist {
+ 	unsigned long	page_link;
+diff --git a/tools/virtio/linux/topology.h b/tools/virtio/linux/topology.h
+new file mode 100644
+index 0000000000000..910794afb993a
+--- /dev/null
++++ b/tools/virtio/linux/topology.h
+@@ -0,0 +1,7 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _LINUX_TOPOLOGY_H
++#define _LINUX_TOPOLOGY_H
++
++#include <linux/cpumask.h>
++
++#endif /* _LINUX_TOPOLOGY_H */


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-02-24  3:06 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2023-02-24  3:06 UTC (permalink / raw
  To: gentoo-commits

commit:     b596555288344b61e0db490ebd6f251046113c54
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 24 02:56:39 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Feb 24 03:05:46 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b5965552

0000_README: use https:// instead of http://

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README | 338 ++++++++++++++++++++++++++++++------------------------------
 1 file changed, 169 insertions(+), 169 deletions(-)

diff --git a/0000_README b/0000_README
index 2e18ffb2..ebfebcb6 100644
--- a/0000_README
+++ b/0000_README
@@ -44,679 +44,679 @@ Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
 Patch:  1000_linux-5.10.1.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.1
 
 Patch:  1001_linux-5.10.2.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.2
 
 Patch:  1002_linux-5.10.3.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.3
 
 Patch:  1003_linux-5.10.4.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.4
 
 Patch:  1004_linux-5.10.5.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.5
 
 Patch:  1005_linux-5.10.6.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.6
 
 Patch:  1006_linux-5.10.7.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.7
 
 Patch:  1007_linux-5.10.8.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.8
 
 Patch:  1008_linux-5.10.9.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.9
 
 Patch:  1009_linux-5.10.10.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.10
 
 Patch:  1010_linux-5.10.11.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.11
 
 Patch:  1011_linux-5.10.12.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.12
 
 Patch:  1012_linux-5.10.13.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.13
 
 Patch:  1013_linux-5.10.14.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.14
 
 Patch:  1014_linux-5.10.15.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.15
 
 Patch:  1015_linux-5.10.16.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.16
 
 Patch:  1016_linux-5.10.17.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.17
 
 Patch:  1017_linux-5.10.18.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.18
 
 Patch:  1018_linux-5.10.19.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.19
 
 Patch:  1019_linux-5.10.20.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.20
 
 Patch:  1020_linux-5.10.21.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.21
 
 Patch:  1021_linux-5.10.22.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.22
 
 Patch:  1022_linux-5.10.23.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.23
 
 Patch:  1023_linux-5.10.24.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.24
 
 Patch:  1024_linux-5.10.25.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.25
 
 Patch:  1025_linux-5.10.26.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.26
 
 Patch:  1026_linux-5.10.27.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.27
 
 Patch:  1027_linux-5.10.28.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.28
 
 Patch:  1028_linux-5.10.29.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.29
 
 Patch:  1029_linux-5.10.30.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.30
 
 Patch:  1030_linux-5.10.31.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.31
 
 Patch:  1031_linux-5.10.32.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.32
 
 Patch:  1032_linux-5.10.33.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.33
 
 Patch:  1033_linux-5.10.34.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.34
 
 Patch:  1034_linux-5.10.35.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.35
 
 Patch:  1035_linux-5.10.36.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.36
 
 Patch:  1036_linux-5.10.37.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.37
 
 Patch:  1037_linux-5.10.38.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.38
 
 Patch:  1038_linux-5.10.39.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.39
 
 Patch:  1039_linux-5.10.40.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.40
 
 Patch:  1040_linux-5.10.41.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.41
 
 Patch:  1041_linux-5.10.42.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.42
 
 Patch:  1042_linux-5.10.43.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.43
 
 Patch:  1043_linux-5.10.44.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.44
 
 Patch:  1044_linux-5.10.45.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.45
 
 Patch:  1045_linux-5.10.46.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.46
 
 Patch:  1046_linux-5.10.47.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.47
 
 Patch:  1047_linux-5.10.48.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.48
 
 Patch:  1048_linux-5.10.49.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.49
 
 Patch:  1049_linux-5.10.50.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.50
 
 Patch:  1050_linux-5.10.51.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.51
 
 Patch:  1051_linux-5.10.52.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.52
 
 Patch:  1052_linux-5.10.53.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.53
 
 Patch:  1053_linux-5.10.54.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.54
 
 Patch:  1054_linux-5.10.55.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.55
 
 Patch:  1055_linux-5.10.56.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.56
 
 Patch:  1056_linux-5.10.57.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.57
 
 Patch:  1057_linux-5.10.58.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.58
 
 Patch:  1058_linux-5.10.59.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.59
 
 Patch:  1059_linux-5.10.60.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.60
 
 Patch:  1060_linux-5.10.61.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.61
 
 Patch:  1061_linux-5.10.62.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.62
 
 Patch:  1062_linux-5.10.63.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.63
 
 Patch:  1063_linux-5.10.64.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.64
 
 Patch:  1064_linux-5.10.65.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.65
 
 Patch:  1065_linux-5.10.66.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.66
 
 Patch:  1066_linux-5.10.67.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.67
 
 Patch:  1067_linux-5.10.68.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.68
 
 Patch:  1068_linux-5.10.69.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.69
 
 Patch:  1069_linux-5.10.70.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.70
 
 Patch:  1070_linux-5.10.71.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.71
 
 Patch:  1071_linux-5.10.72.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.72
 
 Patch:  1072_linux-5.10.73.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.73
 
 Patch:  1073_linux-5.10.74.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.74
 
 Patch:  1074_linux-5.10.75.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.75
 
 Patch:  1075_linux-5.10.76.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.76
 
 Patch:  1076_linux-5.10.77.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.77
 
 Patch:  1077_linux-5.10.78.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.78
 
 Patch:  1078_linux-5.10.79.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.79
 
 Patch:  1079_linux-5.10.80.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.80
 
 Patch:  1080_linux-5.10.81.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.81
 
 Patch:  1081_linux-5.10.82.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.82
 
 Patch:  1082_linux-5.10.83.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.83
 
 Patch:  1083_linux-5.10.84.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.84
 
 Patch:  1084_linux-5.10.85.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.85
 
 Patch:  1085_linux-5.10.86.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.86
 
 Patch:  1086_linux-5.10.87.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.87
 
 Patch:  1087_linux-5.10.88.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.88
 
 Patch:  1088_linux-5.10.89.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.89
 
 Patch:  1089_linux-5.10.90.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.90
 
 Patch:  1090_linux-5.10.91.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.91
 
 Patch:  1091_linux-5.10.92.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.92
 
 Patch:  1092_linux-5.10.93.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.93
 
 Patch:  1093_linux-5.10.94.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.94
 
 Patch:  1094_linux-5.10.95.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.95
 
 Patch:  1095_linux-5.10.96.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.96
 
 Patch:  1096_linux-5.10.97.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.97
 
 Patch:  1097_linux-5.10.98.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.98
 
 Patch:  1098_linux-5.10.99.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.99
 
 Patch:  1099_linux-5.10.100.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.100
 
 Patch:  1100_linux-5.10.101.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.101
 
 Patch:  1101_linux-5.10.102.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.102
 
 Patch:  1102_linux-5.10.103.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.103
 
 Patch:  1103_linux-5.10.104.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.104
 
 Patch:  1104_linux-5.10.105.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.105
 
 Patch:  1105_linux-5.10.106.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.106
 
 Patch:  1106_linux-5.10.107.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.107
 
 Patch:  1107_linux-5.10.108.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.108
 
 Patch:  1108_linux-5.10.109.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.109
 
 Patch:  1109_linux-5.10.110.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.110
 
 Patch:  1110_linux-5.10.111.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.111
 
 Patch:  1111_linux-5.10.112.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.112
 
 Patch:  1112_linux-5.10.113.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.113
 
 Patch:  1113_linux-5.10.114.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.114
 
 Patch:  1114_linux-5.10.115.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.115
 
 Patch:  1115_linux-5.10.116.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.116
 
 Patch:  1116_linux-5.10.117.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.117
 
 Patch:  1117_linux-5.10.118.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.118
 
 Patch:  1118_linux-5.10.119.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.119
 
 Patch:  1119_linux-5.10.120.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.120
 
 Patch:  1120_linux-5.10.121.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.121
 
 Patch:  1121_linux-5.10.122.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.122
 
 Patch:  1122_linux-5.10.123.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.123
 
 Patch:  1123_linux-5.10.124.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.124
 
 Patch:  1124_linux-5.10.125.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.125
 
 Patch:  1125_linux-5.10.126.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.126
 
 Patch:  1126_linux-5.10.127.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.127
 
 Patch:  1127_linux-5.10.128.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.128
 
 Patch:  1128_linux-5.10.129.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.129
 
 Patch:  1129_linux-5.10.130.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.130
 
 Patch:  1130_linux-5.10.131.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.131
 
 Patch:  1131_linux-5.10.132.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.132
 
 Patch:  1132_linux-5.10.133.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.133
 
 Patch:  1133_linux-5.10.134.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.134
 
 Patch:  1134_linux-5.10.135.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.135
 
 Patch:  1135_linux-5.10.136.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.136
 
 Patch:  1136_linux-5.10.137.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.137
 
 Patch:  1137_linux-5.10.138.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.138
 
 Patch:  1138_linux-5.10.139.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.139
 
 Patch:  1139_linux-5.10.140.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.140
 
 Patch:  1140_linux-5.10.141.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.141
 
 Patch:  1141_linux-5.10.142.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.142
 
 Patch:  1142_linux-5.10.143.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.143
 
 Patch:  1143_linux-5.10.144.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.144
 
 Patch:  1144_linux-5.10.145.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.145
 
 Patch:  1145_linux-5.10.146.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.146
 
 Patch:  1146_linux-5.10.147.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.147
 
 Patch:  1147_linux-5.10.148.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.148
 
 Patch:  1148_linux-5.10.149.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.149
 
 Patch:  1149_linux-5.10.150.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.150
 
 Patch:  1150_linux-5.10.151.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.151
 
 Patch:  1151_linux-5.10.152.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.152
 
 Patch:  1152_linux-5.10.153.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.153
 
 Patch:  1153_linux-5.10.154.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.154
 
 Patch:  1154_linux-5.10.155.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.155
 
 Patch:  1155_linux-5.10.156.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.156
 
 Patch:  1156_linux-5.10.157.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.157
 
 Patch:  1157_linux-5.10.158.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.158
 
 Patch:  1158_linux-5.10.159.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.159
 
 Patch:  1159_linux-5.10.160.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.160
 
 Patch:  1160_linux-5.10.161.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.161
 
 Patch:  1161_linux-5.10.162.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.162
 
 Patch:  1162_linux-5.10.163.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.163
 
 Patch:  1163_linux-5.10.164.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.164
 
 Patch:  1164_linux-5.10.165.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.165
 
 Patch:  1165_linux-5.10.166.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.166
 
 Patch:  1166_linux-5.10.167.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.167
 
 Patch:  1167_linux-5.10.168.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.168
 
 Patch:  1168_linux-5.10.169.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.10.169
 
 Patch:  1500_XATTR_USER_PREFIX.patch


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-02-25 11:44 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-02-25 11:44 UTC (permalink / raw
  To: gentoo-commits

commit:     c77c6b6d768c67d22176ee2e5725b2951eb44c14
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 25 11:43:50 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb 25 11:43:50 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c77c6b6d

Linux patch 5.10.170

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1169_linux-5.10.170.patch | 1664 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1668 insertions(+)

diff --git a/0000_README b/0000_README
index ebfebcb6..4e3efaf2 100644
--- a/0000_README
+++ b/0000_README
@@ -719,6 +719,10 @@ Patch:  1168_linux-5.10.169.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.169
 
+Patch:  1169_linux-5.10.170.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.170
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1169_linux-5.10.170.patch b/1169_linux-5.10.170.patch
new file mode 100644
index 00000000..efeeddcd
--- /dev/null
+++ b/1169_linux-5.10.170.patch
@@ -0,0 +1,1664 @@
+diff --git a/MAINTAINERS b/MAINTAINERS
+index f6c6b403a1b7c..6c5efc4013ab5 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -3001,7 +3001,7 @@ F:	drivers/net/ieee802154/atusb.h
+ AUDIT SUBSYSTEM
+ M:	Paul Moore <paul@paul-moore.com>
+ M:	Eric Paris <eparis@redhat.com>
+-L:	linux-audit@redhat.com (moderated for non-subscribers)
++L:	audit@vger.kernel.org
+ S:	Supported
+ W:	https://github.com/linux-audit
+ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/audit.git
+diff --git a/Makefile b/Makefile
+index dbbfaa5d4fe29..028fca7ec5cf3 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 169
++SUBLEVEL = 170
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-2.dtsi b/arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-2.dtsi
+new file mode 100644
+index 0000000000000..437dab3fc0176
+--- /dev/null
++++ b/arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-2.dtsi
+@@ -0,0 +1,44 @@
++// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later
++/*
++ * QorIQ FMan v3 10g port #2 device tree stub [ controller @ offset 0x400000 ]
++ *
++ * Copyright 2022 Sean Anderson <sean.anderson@seco.com>
++ * Copyright 2012 - 2015 Freescale Semiconductor Inc.
++ */
++
++fman@400000 {
++	fman0_rx_0x08: port@88000 {
++		cell-index = <0x8>;
++		compatible = "fsl,fman-v3-port-rx";
++		reg = <0x88000 0x1000>;
++		fsl,fman-10g-port;
++	};
++
++	fman0_tx_0x28: port@a8000 {
++		cell-index = <0x28>;
++		compatible = "fsl,fman-v3-port-tx";
++		reg = <0xa8000 0x1000>;
++		fsl,fman-10g-port;
++	};
++
++	ethernet@e0000 {
++		cell-index = <0>;
++		compatible = "fsl,fman-memac";
++		reg = <0xe0000 0x1000>;
++		fsl,fman-ports = <&fman0_rx_0x08 &fman0_tx_0x28>;
++		ptp-timer = <&ptp_timer0>;
++		pcsphy-handle = <&pcsphy0>;
++	};
++
++	mdio@e1000 {
++		#address-cells = <1>;
++		#size-cells = <0>;
++		compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
++		reg = <0xe1000 0x1000>;
++		fsl,erratum-a011043; /* must ignore read errors */
++
++		pcsphy0: ethernet-phy@0 {
++			reg = <0x0>;
++		};
++	};
++};
+diff --git a/arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-3.dtsi b/arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-3.dtsi
+new file mode 100644
+index 0000000000000..ad116b17850a8
+--- /dev/null
++++ b/arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-3.dtsi
+@@ -0,0 +1,44 @@
++// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later
++/*
++ * QorIQ FMan v3 10g port #3 device tree stub [ controller @ offset 0x400000 ]
++ *
++ * Copyright 2022 Sean Anderson <sean.anderson@seco.com>
++ * Copyright 2012 - 2015 Freescale Semiconductor Inc.
++ */
++
++fman@400000 {
++	fman0_rx_0x09: port@89000 {
++		cell-index = <0x9>;
++		compatible = "fsl,fman-v3-port-rx";
++		reg = <0x89000 0x1000>;
++		fsl,fman-10g-port;
++	};
++
++	fman0_tx_0x29: port@a9000 {
++		cell-index = <0x29>;
++		compatible = "fsl,fman-v3-port-tx";
++		reg = <0xa9000 0x1000>;
++		fsl,fman-10g-port;
++	};
++
++	ethernet@e2000 {
++		cell-index = <1>;
++		compatible = "fsl,fman-memac";
++		reg = <0xe2000 0x1000>;
++		fsl,fman-ports = <&fman0_rx_0x09 &fman0_tx_0x29>;
++		ptp-timer = <&ptp_timer0>;
++		pcsphy-handle = <&pcsphy1>;
++	};
++
++	mdio@e3000 {
++		#address-cells = <1>;
++		#size-cells = <0>;
++		compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
++		reg = <0xe3000 0x1000>;
++		fsl,erratum-a011043; /* must ignore read errors */
++
++		pcsphy1: ethernet-phy@0 {
++			reg = <0x0>;
++		};
++	};
++};
+diff --git a/arch/powerpc/boot/dts/fsl/t2081si-post.dtsi b/arch/powerpc/boot/dts/fsl/t2081si-post.dtsi
+index ecbb447920bc6..27714dc2f04a5 100644
+--- a/arch/powerpc/boot/dts/fsl/t2081si-post.dtsi
++++ b/arch/powerpc/boot/dts/fsl/t2081si-post.dtsi
+@@ -609,8 +609,8 @@
+ /include/ "qoriq-bman1.dtsi"
+ 
+ /include/ "qoriq-fman3-0.dtsi"
+-/include/ "qoriq-fman3-0-1g-0.dtsi"
+-/include/ "qoriq-fman3-0-1g-1.dtsi"
++/include/ "qoriq-fman3-0-10g-2.dtsi"
++/include/ "qoriq-fman3-0-10g-3.dtsi"
+ /include/ "qoriq-fman3-0-1g-2.dtsi"
+ /include/ "qoriq-fman3-0-1g-3.dtsi"
+ /include/ "qoriq-fman3-0-1g-4.dtsi"
+@@ -659,3 +659,19 @@
+ 		interrupts = <16 2 1 9>;
+ 	};
+ };
++
++&fman0_rx_0x08 {
++	/delete-property/ fsl,fman-10g-port;
++};
++
++&fman0_tx_0x28 {
++	/delete-property/ fsl,fman-10g-port;
++};
++
++&fman0_rx_0x09 {
++	/delete-property/ fsl,fman-10g-port;
++};
++
++&fman0_tx_0x29 {
++	/delete-property/ fsl,fman-10g-port;
++};
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index c34ba034ca111..5775983fec56e 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -3480,8 +3480,14 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
+ 
+ static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
+ {
+-	if (to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR &&
+-	    to_svm(vcpu)->vmcb->control.exit_info_1)
++	struct vmcb_control_area *control = &to_svm(vcpu)->vmcb->control;
++
++	/*
++	 * Note, the next RIP must be provided as SRCU isn't held, i.e. KVM
++	 * can't read guest memory (dereference memslots) to decode the WRMSR.
++	 */
++	if (control->exit_code == SVM_EXIT_MSR && control->exit_info_1 &&
++	    nrips && control->next_rip)
+ 		return handle_fastpath_set_msr_irqoff(vcpu);
+ 
+ 	return EXIT_FASTPATH_NONE;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index f15ddf58a5bcd..91371b01eae0c 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -4556,6 +4556,17 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
+ 
+ 	vmx_switch_vmcs(vcpu, &vmx->vmcs01);
+ 
++	/*
++	 * If IBRS is advertised to the vCPU, KVM must flush the indirect
++	 * branch predictors when transitioning from L2 to L1, as L1 expects
++	 * hardware (KVM in this case) to provide separate predictor modes.
++	 * Bare metal isolates VMX root (host) from VMX non-root (guest), but
++	 * doesn't isolate different VMCSs, i.e. in this case, doesn't provide
++	 * separate modes for L2 vs L1.
++	 */
++	if (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
++		indirect_branch_prediction_barrier();
++
+ 	/* Update any VMCS fields that might have changed while L2 ran */
+ 	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
+ 	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 8f7152e158e28..c37cbd3fdd852 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1431,8 +1431,10 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
+ 
+ 		/*
+ 		 * No indirect branch prediction barrier needed when switching
+-		 * the active VMCS within a guest, e.g. on nested VM-Enter.
+-		 * The L1 VMM can protect itself with retpolines, IBPB or IBRS.
++		 * the active VMCS within a vCPU, unless IBRS is advertised to
++		 * the vCPU.  To minimize the number of IBPBs executed, KVM
++		 * performs IBPB on nested VM-Exit (a single nested transition
++		 * may switch the active VMCS multiple times).
+ 		 */
+ 		if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
+ 			indirect_branch_prediction_barrier();
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 554d37873c253..0ccc8d1b972c9 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -7534,7 +7534,9 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+ 						  write_fault_to_spt,
+ 						  emulation_type))
+ 				return 1;
+-			if (ctxt->have_exception) {
++
++			if (ctxt->have_exception &&
++			    !(emulation_type & EMULTYPE_SKIP)) {
+ 				/*
+ 				 * #UD should result in just EMULATION_FAILED, and trap-like
+ 				 * exception should not be encountered during decode.
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index b0d3dadeb9643..dbcd903ba128f 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1865,8 +1865,19 @@ static int nbd_genl_connect(struct sk_buff *skb, struct genl_info *info)
+ 	if (!netlink_capable(skb, CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
+-	if (info->attrs[NBD_ATTR_INDEX])
++	if (info->attrs[NBD_ATTR_INDEX]) {
+ 		index = nla_get_u32(info->attrs[NBD_ATTR_INDEX]);
++
++		/*
++		 * Too big first_minor can cause duplicate creation of
++		 * sysfs files/links, since index << part_shift might overflow, or
++		 * MKDEV() expect that the max bits of first_minor is 20.
++		 */
++		if (index < 0 || index > MINORMASK >> part_shift) {
++			printk(KERN_ERR "nbd: illegal input index %d\n", index);
++			return -EINVAL;
++		}
++	}
+ 	if (!info->attrs[NBD_ATTR_SOCKETS]) {
+ 		printk(KERN_ERR "nbd: must specify at least one socket\n");
+ 		return -EINVAL;
+diff --git a/drivers/clk/x86/Kconfig b/drivers/clk/x86/Kconfig
+index 69642e15fcc1f..ced99e082e3dd 100644
+--- a/drivers/clk/x86/Kconfig
++++ b/drivers/clk/x86/Kconfig
+@@ -1,8 +1,9 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ config CLK_LGM_CGU
+ 	depends on OF && HAS_IOMEM && (X86 || COMPILE_TEST)
++	select MFD_SYSCON
+ 	select OF_EARLY_FLATTREE
+ 	bool "Clock driver for Lightning Mountain(LGM) platform"
+ 	help
+-	  Clock Generation Unit(CGU) driver for Intel Lightning Mountain(LGM)
+-	  network processor SoC.
++	  Clock Generation Unit(CGU) driver for MaxLinear's x86 based
++	  Lightning Mountain(LGM) network processor SoC.
+diff --git a/drivers/clk/x86/clk-cgu-pll.c b/drivers/clk/x86/clk-cgu-pll.c
+index 3179557b5f784..409dbf55f4cae 100644
+--- a/drivers/clk/x86/clk-cgu-pll.c
++++ b/drivers/clk/x86/clk-cgu-pll.c
+@@ -1,8 +1,9 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /*
++ * Copyright (C) 2020-2022 MaxLinear, Inc.
+  * Copyright (C) 2020 Intel Corporation.
+- * Zhu YiXin <yixin.zhu@intel.com>
+- * Rahul Tanwar <rahul.tanwar@intel.com>
++ * Zhu Yixin <yzhu@maxlinear.com>
++ * Rahul Tanwar <rtanwar@maxlinear.com>
+  */
+ 
+ #include <linux/clk-provider.h>
+@@ -40,13 +41,10 @@ static unsigned long lgm_pll_recalc_rate(struct clk_hw *hw, unsigned long prate)
+ {
+ 	struct lgm_clk_pll *pll = to_lgm_clk_pll(hw);
+ 	unsigned int div, mult, frac;
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&pll->lock, flags);
+ 	mult = lgm_get_clk_val(pll->membase, PLL_REF_DIV(pll->reg), 0, 12);
+ 	div = lgm_get_clk_val(pll->membase, PLL_REF_DIV(pll->reg), 18, 6);
+ 	frac = lgm_get_clk_val(pll->membase, pll->reg, 2, 24);
+-	spin_unlock_irqrestore(&pll->lock, flags);
+ 
+ 	if (pll->type == TYPE_LJPLL)
+ 		div *= 4;
+@@ -57,12 +55,9 @@ static unsigned long lgm_pll_recalc_rate(struct clk_hw *hw, unsigned long prate)
+ static int lgm_pll_is_enabled(struct clk_hw *hw)
+ {
+ 	struct lgm_clk_pll *pll = to_lgm_clk_pll(hw);
+-	unsigned long flags;
+ 	unsigned int ret;
+ 
+-	spin_lock_irqsave(&pll->lock, flags);
+ 	ret = lgm_get_clk_val(pll->membase, pll->reg, 0, 1);
+-	spin_unlock_irqrestore(&pll->lock, flags);
+ 
+ 	return ret;
+ }
+@@ -70,15 +65,13 @@ static int lgm_pll_is_enabled(struct clk_hw *hw)
+ static int lgm_pll_enable(struct clk_hw *hw)
+ {
+ 	struct lgm_clk_pll *pll = to_lgm_clk_pll(hw);
+-	unsigned long flags;
+ 	u32 val;
+ 	int ret;
+ 
+-	spin_lock_irqsave(&pll->lock, flags);
+ 	lgm_set_clk_val(pll->membase, pll->reg, 0, 1, 1);
+-	ret = readl_poll_timeout_atomic(pll->membase + pll->reg,
+-					val, (val & 0x1), 1, 100);
+-	spin_unlock_irqrestore(&pll->lock, flags);
++	ret = regmap_read_poll_timeout_atomic(pll->membase, pll->reg,
++					      val, (val & 0x1), 1, 100);
++
+ 
+ 	return ret;
+ }
+@@ -86,11 +79,8 @@ static int lgm_pll_enable(struct clk_hw *hw)
+ static void lgm_pll_disable(struct clk_hw *hw)
+ {
+ 	struct lgm_clk_pll *pll = to_lgm_clk_pll(hw);
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&pll->lock, flags);
+ 	lgm_set_clk_val(pll->membase, pll->reg, 0, 1, 0);
+-	spin_unlock_irqrestore(&pll->lock, flags);
+ }
+ 
+ static const struct clk_ops lgm_pll_ops = {
+@@ -121,7 +111,6 @@ lgm_clk_register_pll(struct lgm_clk_provider *ctx,
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	pll->membase = ctx->membase;
+-	pll->lock = ctx->lock;
+ 	pll->reg = list->reg;
+ 	pll->flags = list->flags;
+ 	pll->type = list->type;
+diff --git a/drivers/clk/x86/clk-cgu.c b/drivers/clk/x86/clk-cgu.c
+index 33de600e0c38e..89b53f280aee0 100644
+--- a/drivers/clk/x86/clk-cgu.c
++++ b/drivers/clk/x86/clk-cgu.c
+@@ -1,8 +1,9 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /*
++ * Copyright (C) 2020-2022 MaxLinear, Inc.
+  * Copyright (C) 2020 Intel Corporation.
+- * Zhu YiXin <yixin.zhu@intel.com>
+- * Rahul Tanwar <rahul.tanwar@intel.com>
++ * Zhu Yixin <yzhu@maxlinear.com>
++ * Rahul Tanwar <rtanwar@maxlinear.com>
+  */
+ #include <linux/clk-provider.h>
+ #include <linux/device.h>
+@@ -24,14 +25,10 @@
+ static struct clk_hw *lgm_clk_register_fixed(struct lgm_clk_provider *ctx,
+ 					     const struct lgm_clk_branch *list)
+ {
+-	unsigned long flags;
+ 
+-	if (list->div_flags & CLOCK_FLAG_VAL_INIT) {
+-		spin_lock_irqsave(&ctx->lock, flags);
++	if (list->div_flags & CLOCK_FLAG_VAL_INIT)
+ 		lgm_set_clk_val(ctx->membase, list->div_off, list->div_shift,
+ 				list->div_width, list->div_val);
+-		spin_unlock_irqrestore(&ctx->lock, flags);
+-	}
+ 
+ 	return clk_hw_register_fixed_rate(NULL, list->name,
+ 					  list->parent_data[0].name,
+@@ -41,33 +38,27 @@ static struct clk_hw *lgm_clk_register_fixed(struct lgm_clk_provider *ctx,
+ static u8 lgm_clk_mux_get_parent(struct clk_hw *hw)
+ {
+ 	struct lgm_clk_mux *mux = to_lgm_clk_mux(hw);
+-	unsigned long flags;
+ 	u32 val;
+ 
+-	spin_lock_irqsave(&mux->lock, flags);
+ 	if (mux->flags & MUX_CLK_SW)
+ 		val = mux->reg;
+ 	else
+ 		val = lgm_get_clk_val(mux->membase, mux->reg, mux->shift,
+ 				      mux->width);
+-	spin_unlock_irqrestore(&mux->lock, flags);
+ 	return clk_mux_val_to_index(hw, NULL, mux->flags, val);
+ }
+ 
+ static int lgm_clk_mux_set_parent(struct clk_hw *hw, u8 index)
+ {
+ 	struct lgm_clk_mux *mux = to_lgm_clk_mux(hw);
+-	unsigned long flags;
+ 	u32 val;
+ 
+ 	val = clk_mux_index_to_val(NULL, mux->flags, index);
+-	spin_lock_irqsave(&mux->lock, flags);
+ 	if (mux->flags & MUX_CLK_SW)
+ 		mux->reg = val;
+ 	else
+ 		lgm_set_clk_val(mux->membase, mux->reg, mux->shift,
+ 				mux->width, val);
+-	spin_unlock_irqrestore(&mux->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -90,7 +81,7 @@ static struct clk_hw *
+ lgm_clk_register_mux(struct lgm_clk_provider *ctx,
+ 		     const struct lgm_clk_branch *list)
+ {
+-	unsigned long flags, cflags = list->mux_flags;
++	unsigned long cflags = list->mux_flags;
+ 	struct device *dev = ctx->dev;
+ 	u8 shift = list->mux_shift;
+ 	u8 width = list->mux_width;
+@@ -111,7 +102,6 @@ lgm_clk_register_mux(struct lgm_clk_provider *ctx,
+ 	init.num_parents = list->num_parents;
+ 
+ 	mux->membase = ctx->membase;
+-	mux->lock = ctx->lock;
+ 	mux->reg = reg;
+ 	mux->shift = shift;
+ 	mux->width = width;
+@@ -123,11 +113,8 @@ lgm_clk_register_mux(struct lgm_clk_provider *ctx,
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
+-	if (cflags & CLOCK_FLAG_VAL_INIT) {
+-		spin_lock_irqsave(&mux->lock, flags);
++	if (cflags & CLOCK_FLAG_VAL_INIT)
+ 		lgm_set_clk_val(mux->membase, reg, shift, width, list->mux_val);
+-		spin_unlock_irqrestore(&mux->lock, flags);
+-	}
+ 
+ 	return hw;
+ }
+@@ -136,13 +123,10 @@ static unsigned long
+ lgm_clk_divider_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
+ {
+ 	struct lgm_clk_divider *divider = to_lgm_clk_divider(hw);
+-	unsigned long flags;
+ 	unsigned int val;
+ 
+-	spin_lock_irqsave(&divider->lock, flags);
+ 	val = lgm_get_clk_val(divider->membase, divider->reg,
+ 			      divider->shift, divider->width);
+-	spin_unlock_irqrestore(&divider->lock, flags);
+ 
+ 	return divider_recalc_rate(hw, parent_rate, val, divider->table,
+ 				   divider->flags, divider->width);
+@@ -163,7 +147,6 @@ lgm_clk_divider_set_rate(struct clk_hw *hw, unsigned long rate,
+ 			 unsigned long prate)
+ {
+ 	struct lgm_clk_divider *divider = to_lgm_clk_divider(hw);
+-	unsigned long flags;
+ 	int value;
+ 
+ 	value = divider_get_val(rate, prate, divider->table,
+@@ -171,10 +154,8 @@ lgm_clk_divider_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	if (value < 0)
+ 		return value;
+ 
+-	spin_lock_irqsave(&divider->lock, flags);
+ 	lgm_set_clk_val(divider->membase, divider->reg,
+ 			divider->shift, divider->width, value);
+-	spin_unlock_irqrestore(&divider->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -182,12 +163,10 @@ lgm_clk_divider_set_rate(struct clk_hw *hw, unsigned long rate,
+ static int lgm_clk_divider_enable_disable(struct clk_hw *hw, int enable)
+ {
+ 	struct lgm_clk_divider *div = to_lgm_clk_divider(hw);
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&div->lock, flags);
+-	lgm_set_clk_val(div->membase, div->reg, div->shift_gate,
+-			div->width_gate, enable);
+-	spin_unlock_irqrestore(&div->lock, flags);
++	if (div->flags != DIV_CLK_NO_MASK)
++		lgm_set_clk_val(div->membase, div->reg, div->shift_gate,
++				div->width_gate, enable);
+ 	return 0;
+ }
+ 
+@@ -213,7 +192,7 @@ static struct clk_hw *
+ lgm_clk_register_divider(struct lgm_clk_provider *ctx,
+ 			 const struct lgm_clk_branch *list)
+ {
+-	unsigned long flags, cflags = list->div_flags;
++	unsigned long cflags = list->div_flags;
+ 	struct device *dev = ctx->dev;
+ 	struct lgm_clk_divider *div;
+ 	struct clk_init_data init = {};
+@@ -236,7 +215,6 @@ lgm_clk_register_divider(struct lgm_clk_provider *ctx,
+ 	init.num_parents = 1;
+ 
+ 	div->membase = ctx->membase;
+-	div->lock = ctx->lock;
+ 	div->reg = reg;
+ 	div->shift = shift;
+ 	div->width = width;
+@@ -251,11 +229,8 @@ lgm_clk_register_divider(struct lgm_clk_provider *ctx,
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
+-	if (cflags & CLOCK_FLAG_VAL_INIT) {
+-		spin_lock_irqsave(&div->lock, flags);
++	if (cflags & CLOCK_FLAG_VAL_INIT)
+ 		lgm_set_clk_val(div->membase, reg, shift, width, list->div_val);
+-		spin_unlock_irqrestore(&div->lock, flags);
+-	}
+ 
+ 	return hw;
+ }
+@@ -264,7 +239,6 @@ static struct clk_hw *
+ lgm_clk_register_fixed_factor(struct lgm_clk_provider *ctx,
+ 			      const struct lgm_clk_branch *list)
+ {
+-	unsigned long flags;
+ 	struct clk_hw *hw;
+ 
+ 	hw = clk_hw_register_fixed_factor(ctx->dev, list->name,
+@@ -273,12 +247,9 @@ lgm_clk_register_fixed_factor(struct lgm_clk_provider *ctx,
+ 	if (IS_ERR(hw))
+ 		return ERR_CAST(hw);
+ 
+-	if (list->div_flags & CLOCK_FLAG_VAL_INIT) {
+-		spin_lock_irqsave(&ctx->lock, flags);
++	if (list->div_flags & CLOCK_FLAG_VAL_INIT)
+ 		lgm_set_clk_val(ctx->membase, list->div_off, list->div_shift,
+ 				list->div_width, list->div_val);
+-		spin_unlock_irqrestore(&ctx->lock, flags);
+-	}
+ 
+ 	return hw;
+ }
+@@ -286,13 +257,10 @@ lgm_clk_register_fixed_factor(struct lgm_clk_provider *ctx,
+ static int lgm_clk_gate_enable(struct clk_hw *hw)
+ {
+ 	struct lgm_clk_gate *gate = to_lgm_clk_gate(hw);
+-	unsigned long flags;
+ 	unsigned int reg;
+ 
+-	spin_lock_irqsave(&gate->lock, flags);
+ 	reg = GATE_HW_REG_EN(gate->reg);
+ 	lgm_set_clk_val(gate->membase, reg, gate->shift, 1, 1);
+-	spin_unlock_irqrestore(&gate->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -300,25 +268,19 @@ static int lgm_clk_gate_enable(struct clk_hw *hw)
+ static void lgm_clk_gate_disable(struct clk_hw *hw)
+ {
+ 	struct lgm_clk_gate *gate = to_lgm_clk_gate(hw);
+-	unsigned long flags;
+ 	unsigned int reg;
+ 
+-	spin_lock_irqsave(&gate->lock, flags);
+ 	reg = GATE_HW_REG_DIS(gate->reg);
+ 	lgm_set_clk_val(gate->membase, reg, gate->shift, 1, 1);
+-	spin_unlock_irqrestore(&gate->lock, flags);
+ }
+ 
+ static int lgm_clk_gate_is_enabled(struct clk_hw *hw)
+ {
+ 	struct lgm_clk_gate *gate = to_lgm_clk_gate(hw);
+ 	unsigned int reg, ret;
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&gate->lock, flags);
+ 	reg = GATE_HW_REG_STAT(gate->reg);
+ 	ret = lgm_get_clk_val(gate->membase, reg, gate->shift, 1);
+-	spin_unlock_irqrestore(&gate->lock, flags);
+ 
+ 	return ret;
+ }
+@@ -333,7 +295,7 @@ static struct clk_hw *
+ lgm_clk_register_gate(struct lgm_clk_provider *ctx,
+ 		      const struct lgm_clk_branch *list)
+ {
+-	unsigned long flags, cflags = list->gate_flags;
++	unsigned long cflags = list->gate_flags;
+ 	const char *pname = list->parent_data[0].name;
+ 	struct device *dev = ctx->dev;
+ 	u8 shift = list->gate_shift;
+@@ -354,7 +316,6 @@ lgm_clk_register_gate(struct lgm_clk_provider *ctx,
+ 	init.num_parents = pname ? 1 : 0;
+ 
+ 	gate->membase = ctx->membase;
+-	gate->lock = ctx->lock;
+ 	gate->reg = reg;
+ 	gate->shift = shift;
+ 	gate->flags = cflags;
+@@ -366,9 +327,7 @@ lgm_clk_register_gate(struct lgm_clk_provider *ctx,
+ 		return ERR_PTR(ret);
+ 
+ 	if (cflags & CLOCK_FLAG_VAL_INIT) {
+-		spin_lock_irqsave(&gate->lock, flags);
+ 		lgm_set_clk_val(gate->membase, reg, shift, 1, list->gate_val);
+-		spin_unlock_irqrestore(&gate->lock, flags);
+ 	}
+ 
+ 	return hw;
+@@ -396,8 +355,22 @@ int lgm_clk_register_branches(struct lgm_clk_provider *ctx,
+ 			hw = lgm_clk_register_fixed_factor(ctx, list);
+ 			break;
+ 		case CLK_TYPE_GATE:
+-			hw = lgm_clk_register_gate(ctx, list);
++			if (list->gate_flags & GATE_CLK_HW) {
++				hw = lgm_clk_register_gate(ctx, list);
++			} else {
++				/*
++				 * GATE_CLKs can be controlled either from
++				 * CGU clk driver i.e. this driver or directly
++				 * from power management driver/daemon. It is
++				 * dependent on the power policy/profile requirements
++				 * of the end product. To override control of gate
++				 * clks from this driver, provide NULL for this index
++				 * of gate clk provider.
++				 */
++				hw = NULL;
++			}
+ 			break;
++
+ 		default:
+ 			dev_err(ctx->dev, "invalid clk type\n");
+ 			return -EINVAL;
+@@ -443,24 +416,18 @@ lgm_clk_ddiv_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
+ static int lgm_clk_ddiv_enable(struct clk_hw *hw)
+ {
+ 	struct lgm_clk_ddiv *ddiv = to_lgm_clk_ddiv(hw);
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&ddiv->lock, flags);
+ 	lgm_set_clk_val(ddiv->membase, ddiv->reg, ddiv->shift_gate,
+ 			ddiv->width_gate, 1);
+-	spin_unlock_irqrestore(&ddiv->lock, flags);
+ 	return 0;
+ }
+ 
+ static void lgm_clk_ddiv_disable(struct clk_hw *hw)
+ {
+ 	struct lgm_clk_ddiv *ddiv = to_lgm_clk_ddiv(hw);
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&ddiv->lock, flags);
+ 	lgm_set_clk_val(ddiv->membase, ddiv->reg, ddiv->shift_gate,
+ 			ddiv->width_gate, 0);
+-	spin_unlock_irqrestore(&ddiv->lock, flags);
+ }
+ 
+ static int
+@@ -497,32 +464,25 @@ lgm_clk_ddiv_set_rate(struct clk_hw *hw, unsigned long rate,
+ {
+ 	struct lgm_clk_ddiv *ddiv = to_lgm_clk_ddiv(hw);
+ 	u32 div, ddiv1, ddiv2;
+-	unsigned long flags;
+ 
+ 	div = DIV_ROUND_CLOSEST_ULL((u64)prate, rate);
+ 
+-	spin_lock_irqsave(&ddiv->lock, flags);
+ 	if (lgm_get_clk_val(ddiv->membase, ddiv->reg, ddiv->shift2, 1)) {
+ 		div = DIV_ROUND_CLOSEST_ULL((u64)div, 5);
+ 		div = div * 2;
+ 	}
+ 
+-	if (div <= 0) {
+-		spin_unlock_irqrestore(&ddiv->lock, flags);
++	if (div <= 0)
+ 		return -EINVAL;
+-	}
+ 
+-	if (lgm_clk_get_ddiv_val(div, &ddiv1, &ddiv2)) {
+-		spin_unlock_irqrestore(&ddiv->lock, flags);
++	if (lgm_clk_get_ddiv_val(div, &ddiv1, &ddiv2))
+ 		return -EINVAL;
+-	}
+ 
+ 	lgm_set_clk_val(ddiv->membase, ddiv->reg, ddiv->shift0, ddiv->width0,
+ 			ddiv1 - 1);
+ 
+ 	lgm_set_clk_val(ddiv->membase, ddiv->reg,  ddiv->shift1, ddiv->width1,
+ 			ddiv2 - 1);
+-	spin_unlock_irqrestore(&ddiv->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -533,18 +493,15 @@ lgm_clk_ddiv_round_rate(struct clk_hw *hw, unsigned long rate,
+ {
+ 	struct lgm_clk_ddiv *ddiv = to_lgm_clk_ddiv(hw);
+ 	u32 div, ddiv1, ddiv2;
+-	unsigned long flags;
+ 	u64 rate64;
+ 
+ 	div = DIV_ROUND_CLOSEST_ULL((u64)*prate, rate);
+ 
+ 	/* if predivide bit is enabled, modify div by factor of 2.5 */
+-	spin_lock_irqsave(&ddiv->lock, flags);
+ 	if (lgm_get_clk_val(ddiv->membase, ddiv->reg, ddiv->shift2, 1)) {
+ 		div = div * 2;
+ 		div = DIV_ROUND_CLOSEST_ULL((u64)div, 5);
+ 	}
+-	spin_unlock_irqrestore(&ddiv->lock, flags);
+ 
+ 	if (div <= 0)
+ 		return *prate;
+@@ -558,12 +515,10 @@ lgm_clk_ddiv_round_rate(struct clk_hw *hw, unsigned long rate,
+ 	do_div(rate64, ddiv2);
+ 
+ 	/* if predivide bit is enabled, modify rounded rate by factor of 2.5 */
+-	spin_lock_irqsave(&ddiv->lock, flags);
+ 	if (lgm_get_clk_val(ddiv->membase, ddiv->reg, ddiv->shift2, 1)) {
+ 		rate64 = rate64 * 2;
+ 		rate64 = DIV_ROUND_CLOSEST_ULL(rate64, 5);
+ 	}
+-	spin_unlock_irqrestore(&ddiv->lock, flags);
+ 
+ 	return rate64;
+ }
+@@ -600,7 +555,6 @@ int lgm_clk_register_ddiv(struct lgm_clk_provider *ctx,
+ 		init.num_parents = 1;
+ 
+ 		ddiv->membase = ctx->membase;
+-		ddiv->lock = ctx->lock;
+ 		ddiv->reg = list->reg;
+ 		ddiv->shift0 = list->shift0;
+ 		ddiv->width0 = list->width0;
+diff --git a/drivers/clk/x86/clk-cgu.h b/drivers/clk/x86/clk-cgu.h
+index 4e22bfb223128..bcaf8aec94e5d 100644
+--- a/drivers/clk/x86/clk-cgu.h
++++ b/drivers/clk/x86/clk-cgu.h
+@@ -1,28 +1,28 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ /*
+- * Copyright(c) 2020 Intel Corporation.
+- * Zhu YiXin <yixin.zhu@intel.com>
+- * Rahul Tanwar <rahul.tanwar@intel.com>
++ * Copyright (C) 2020-2022 MaxLinear, Inc.
++ * Copyright (C) 2020 Intel Corporation.
++ * Zhu Yixin <yzhu@maxlinear.com>
++ * Rahul Tanwar <rtanwar@maxlinear.com>
+  */
+ 
+ #ifndef __CLK_CGU_H
+ #define __CLK_CGU_H
+ 
+-#include <linux/io.h>
++#include <linux/regmap.h>
+ 
+ struct lgm_clk_mux {
+ 	struct clk_hw hw;
+-	void __iomem *membase;
++	struct regmap *membase;
+ 	unsigned int reg;
+ 	u8 shift;
+ 	u8 width;
+ 	unsigned long flags;
+-	spinlock_t lock;
+ };
+ 
+ struct lgm_clk_divider {
+ 	struct clk_hw hw;
+-	void __iomem *membase;
++	struct regmap *membase;
+ 	unsigned int reg;
+ 	u8 shift;
+ 	u8 width;
+@@ -30,12 +30,11 @@ struct lgm_clk_divider {
+ 	u8 width_gate;
+ 	unsigned long flags;
+ 	const struct clk_div_table *table;
+-	spinlock_t lock;
+ };
+ 
+ struct lgm_clk_ddiv {
+ 	struct clk_hw hw;
+-	void __iomem *membase;
++	struct regmap *membase;
+ 	unsigned int reg;
+ 	u8 shift0;
+ 	u8 width0;
+@@ -48,16 +47,14 @@ struct lgm_clk_ddiv {
+ 	unsigned int mult;
+ 	unsigned int div;
+ 	unsigned long flags;
+-	spinlock_t lock;
+ };
+ 
+ struct lgm_clk_gate {
+ 	struct clk_hw hw;
+-	void __iomem *membase;
++	struct regmap *membase;
+ 	unsigned int reg;
+ 	u8 shift;
+ 	unsigned long flags;
+-	spinlock_t lock;
+ };
+ 
+ enum lgm_clk_type {
+@@ -77,11 +74,10 @@ enum lgm_clk_type {
+  * @clk_data: array of hw clocks and clk number.
+  */
+ struct lgm_clk_provider {
+-	void __iomem *membase;
++	struct regmap *membase;
+ 	struct device_node *np;
+ 	struct device *dev;
+ 	struct clk_hw_onecell_data clk_data;
+-	spinlock_t lock;
+ };
+ 
+ enum pll_type {
+@@ -92,11 +88,10 @@ enum pll_type {
+ 
+ struct lgm_clk_pll {
+ 	struct clk_hw hw;
+-	void __iomem *membase;
++	struct regmap *membase;
+ 	unsigned int reg;
+ 	unsigned long flags;
+ 	enum pll_type type;
+-	spinlock_t lock;
+ };
+ 
+ /**
+@@ -202,6 +197,8 @@ struct lgm_clk_branch {
+ /* clock flags definition */
+ #define CLOCK_FLAG_VAL_INIT	BIT(16)
+ #define MUX_CLK_SW		BIT(17)
++#define GATE_CLK_HW		BIT(18)
++#define DIV_CLK_NO_MASK		BIT(19)
+ 
+ #define LGM_MUX(_id, _name, _pdata, _f, _reg,		\
+ 		_shift, _width, _cf, _v)		\
+@@ -300,29 +297,32 @@ struct lgm_clk_branch {
+ 		.div = _d,					\
+ 	}
+ 
+-static inline void lgm_set_clk_val(void __iomem *membase, u32 reg,
++static inline void lgm_set_clk_val(struct regmap *membase, u32 reg,
+ 				   u8 shift, u8 width, u32 set_val)
+ {
+ 	u32 mask = (GENMASK(width - 1, 0) << shift);
+-	u32 regval;
+ 
+-	regval = readl(membase + reg);
+-	regval = (regval & ~mask) | ((set_val << shift) & mask);
+-	writel(regval, membase + reg);
++	regmap_update_bits(membase, reg, mask, set_val << shift);
+ }
+ 
+-static inline u32 lgm_get_clk_val(void __iomem *membase, u32 reg,
++static inline u32 lgm_get_clk_val(struct regmap *membase, u32 reg,
+ 				  u8 shift, u8 width)
+ {
+ 	u32 mask = (GENMASK(width - 1, 0) << shift);
+ 	u32 val;
+ 
+-	val = readl(membase + reg);
++	if (regmap_read(membase, reg, &val)) {
++		WARN_ONCE(1, "Failed to read clk reg: 0x%x\n", reg);
++		return 0;
++	}
++
+ 	val = (val & mask) >> shift;
+ 
+ 	return val;
+ }
+ 
++
++
+ int lgm_clk_register_branches(struct lgm_clk_provider *ctx,
+ 			      const struct lgm_clk_branch *list,
+ 			      unsigned int nr_clk);
+diff --git a/drivers/clk/x86/clk-lgm.c b/drivers/clk/x86/clk-lgm.c
+index 020f4e83a5ccb..f69455dd1c980 100644
+--- a/drivers/clk/x86/clk-lgm.c
++++ b/drivers/clk/x86/clk-lgm.c
+@@ -1,10 +1,12 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /*
++ * Copyright (C) 2020-2022 MaxLinear, Inc.
+  * Copyright (C) 2020 Intel Corporation.
+- * Zhu YiXin <yixin.zhu@intel.com>
+- * Rahul Tanwar <rahul.tanwar@intel.com>
++ * Zhu Yixin <yzhu@maxlinear.com>
++ * Rahul Tanwar <rtanwar@maxlinear.com>
+  */
+ #include <linux/clk-provider.h>
++#include <linux/mfd/syscon.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+ #include <dt-bindings/clock/intel,lgm-clk.h>
+@@ -253,8 +255,8 @@ static const struct lgm_clk_branch lgm_branch_clks[] = {
+ 	LGM_FIXED(LGM_CLK_SLIC, "slic", NULL, 0, CGU_IF_CLK1,
+ 		  8, 2, CLOCK_FLAG_VAL_INIT, 8192000, 2),
+ 	LGM_FIXED(LGM_CLK_DOCSIS, "v_docsis", NULL, 0, 0, 0, 0, 0, 16000000, 0),
+-	LGM_DIV(LGM_CLK_DCL, "dcl", "v_ifclk", 0, CGU_PCMCR,
+-		25, 3, 0, 0, 0, 0, dcl_div),
++	LGM_DIV(LGM_CLK_DCL, "dcl", "v_ifclk", CLK_SET_RATE_PARENT, CGU_PCMCR,
++		25, 3, 0, 0, DIV_CLK_NO_MASK, 0, dcl_div),
+ 	LGM_MUX(LGM_CLK_PCM, "pcm", pcm_p, 0, CGU_C55_PCMCR,
+ 		0, 1, CLK_MUX_ROUND_CLOSEST, 0),
+ 	LGM_FIXED_FACTOR(LGM_CLK_DDR_PHY, "ddr_phy", "ddr",
+@@ -433,13 +435,15 @@ static int lgm_cgu_probe(struct platform_device *pdev)
+ 
+ 	ctx->clk_data.num = CLK_NR_CLKS;
+ 
+-	ctx->membase = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(ctx->membase))
++	ctx->membase = syscon_node_to_regmap(np);
++	if (IS_ERR(ctx->membase)) {
++		dev_err(dev, "Failed to get clk CGU iomem\n");
+ 		return PTR_ERR(ctx->membase);
++	}
++
+ 
+ 	ctx->np = np;
+ 	ctx->dev = dev;
+-	spin_lock_init(&ctx->lock);
+ 
+ 	ret = lgm_clk_register_plls(ctx, lgm_pll_clks,
+ 				    ARRAY_SIZE(lgm_pll_clks));
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+index 9ba2fe48228f1..44fbc0a123bf3 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+@@ -80,10 +80,10 @@ static int etnaviv_iommu_map(struct etnaviv_iommu_context *context, u32 iova,
+ 		return -EINVAL;
+ 
+ 	for_each_sgtable_dma_sg(sgt, sg, i) {
+-		u32 pa = sg_dma_address(sg) - sg->offset;
++		phys_addr_t pa = sg_dma_address(sg) - sg->offset;
+ 		size_t bytes = sg_dma_len(sg) + sg->offset;
+ 
+-		VERB("map[%d]: %08x %08x(%zx)", i, iova, pa, bytes);
++		VERB("map[%d]: %08x %pap(%zx)", i, iova, &pa, bytes);
+ 
+ 		ret = etnaviv_context_map(context, da, pa, bytes, prot);
+ 		if (ret)
+diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
+index a3a4305eda01b..0201f9b5f87e7 100644
+--- a/drivers/gpu/drm/i915/gvt/gtt.c
++++ b/drivers/gpu/drm/i915/gvt/gtt.c
+@@ -1192,10 +1192,8 @@ static int split_2MB_gtt_entry(struct intel_vgpu *vgpu,
+ 	for_each_shadow_entry(sub_spt, &sub_se, sub_index) {
+ 		ret = intel_gvt_hypervisor_dma_map_guest_page(vgpu,
+ 				start_gfn + sub_index, PAGE_SIZE, &dma_addr);
+-		if (ret) {
+-			ppgtt_invalidate_spt(spt);
+-			return ret;
+-		}
++		if (ret)
++			goto err;
+ 		sub_se.val64 = se->val64;
+ 
+ 		/* Copy the PAT field from PDE. */
+@@ -1214,6 +1212,17 @@ static int split_2MB_gtt_entry(struct intel_vgpu *vgpu,
+ 	ops->set_pfn(se, sub_spt->shadow_page.mfn);
+ 	ppgtt_set_shadow_entry(spt, se, index);
+ 	return 0;
++err:
++	/* Cancel the existing addess mappings of DMA addr. */
++	for_each_present_shadow_entry(sub_spt, &sub_se, sub_index) {
++		gvt_vdbg_mm("invalidate 4K entry\n");
++		ppgtt_invalidate_pte(sub_spt, &sub_se);
++	}
++	/* Release the new allocated spt. */
++	trace_spt_change(sub_spt->vgpu->id, "release", sub_spt,
++		sub_spt->guest_page.gfn, sub_spt->shadow_page.type);
++	ppgtt_free_spt(sub_spt);
++	return ret;
+ }
+ 
+ static int split_64KB_gtt_entry(struct intel_vgpu *vgpu,
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+index 2764fdd7e84b3..233bbfeaa771e 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+@@ -518,6 +518,7 @@ static int kvaser_usb_hydra_send_simple_cmd(struct kvaser_usb *dev,
+ 					    u8 cmd_no, int channel)
+ {
+ 	struct kvaser_cmd *cmd;
++	size_t cmd_len;
+ 	int err;
+ 
+ 	cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
+@@ -525,6 +526,7 @@ static int kvaser_usb_hydra_send_simple_cmd(struct kvaser_usb *dev,
+ 		return -ENOMEM;
+ 
+ 	cmd->header.cmd_no = cmd_no;
++	cmd_len = kvaser_usb_hydra_cmd_size(cmd);
+ 	if (channel < 0) {
+ 		kvaser_usb_hydra_set_cmd_dest_he
+ 				(cmd, KVASER_USB_HYDRA_HE_ADDRESS_ILLEGAL);
+@@ -541,7 +543,7 @@ static int kvaser_usb_hydra_send_simple_cmd(struct kvaser_usb *dev,
+ 	kvaser_usb_hydra_set_cmd_transid
+ 				(cmd, kvaser_usb_hydra_get_next_transid(dev));
+ 
+-	err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
++	err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
+ 	if (err)
+ 		goto end;
+ 
+@@ -557,6 +559,7 @@ kvaser_usb_hydra_send_simple_cmd_async(struct kvaser_usb_net_priv *priv,
+ {
+ 	struct kvaser_cmd *cmd;
+ 	struct kvaser_usb *dev = priv->dev;
++	size_t cmd_len;
+ 	int err;
+ 
+ 	cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_ATOMIC);
+@@ -564,14 +567,14 @@ kvaser_usb_hydra_send_simple_cmd_async(struct kvaser_usb_net_priv *priv,
+ 		return -ENOMEM;
+ 
+ 	cmd->header.cmd_no = cmd_no;
++	cmd_len = kvaser_usb_hydra_cmd_size(cmd);
+ 
+ 	kvaser_usb_hydra_set_cmd_dest_he
+ 		(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
+ 	kvaser_usb_hydra_set_cmd_transid
+ 				(cmd, kvaser_usb_hydra_get_next_transid(dev));
+ 
+-	err = kvaser_usb_send_cmd_async(priv, cmd,
+-					kvaser_usb_hydra_cmd_size(cmd));
++	err = kvaser_usb_send_cmd_async(priv, cmd, cmd_len);
+ 	if (err)
+ 		kfree(cmd);
+ 
+@@ -715,6 +718,7 @@ static int kvaser_usb_hydra_get_single_capability(struct kvaser_usb *dev,
+ {
+ 	struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
+ 	struct kvaser_cmd *cmd;
++	size_t cmd_len;
+ 	u32 value = 0;
+ 	u32 mask = 0;
+ 	u16 cap_cmd_res;
+@@ -726,13 +730,14 @@ static int kvaser_usb_hydra_get_single_capability(struct kvaser_usb *dev,
+ 		return -ENOMEM;
+ 
+ 	cmd->header.cmd_no = CMD_GET_CAPABILITIES_REQ;
++	cmd_len = kvaser_usb_hydra_cmd_size(cmd);
+ 	cmd->cap_req.cap_cmd = cpu_to_le16(cap_cmd_req);
+ 
+ 	kvaser_usb_hydra_set_cmd_dest_he(cmd, card_data->hydra.sysdbg_he);
+ 	kvaser_usb_hydra_set_cmd_transid
+ 				(cmd, kvaser_usb_hydra_get_next_transid(dev));
+ 
+-	err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
++	err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
+ 	if (err)
+ 		goto end;
+ 
+@@ -1555,6 +1560,7 @@ static int kvaser_usb_hydra_get_busparams(struct kvaser_usb_net_priv *priv,
+ 	struct kvaser_usb *dev = priv->dev;
+ 	struct kvaser_usb_net_hydra_priv *hydra = priv->sub_priv;
+ 	struct kvaser_cmd *cmd;
++	size_t cmd_len;
+ 	int err;
+ 
+ 	if (!hydra)
+@@ -1565,6 +1571,7 @@ static int kvaser_usb_hydra_get_busparams(struct kvaser_usb_net_priv *priv,
+ 		return -ENOMEM;
+ 
+ 	cmd->header.cmd_no = CMD_GET_BUSPARAMS_REQ;
++	cmd_len = kvaser_usb_hydra_cmd_size(cmd);
+ 	kvaser_usb_hydra_set_cmd_dest_he
+ 		(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
+ 	kvaser_usb_hydra_set_cmd_transid
+@@ -1574,7 +1581,7 @@ static int kvaser_usb_hydra_get_busparams(struct kvaser_usb_net_priv *priv,
+ 
+ 	reinit_completion(&priv->get_busparams_comp);
+ 
+-	err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
++	err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
+ 	if (err)
+ 		return err;
+ 
+@@ -1601,6 +1608,7 @@ static int kvaser_usb_hydra_set_bittiming(const struct net_device *netdev,
+ 	struct kvaser_cmd *cmd;
+ 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
+ 	struct kvaser_usb *dev = priv->dev;
++	size_t cmd_len;
+ 	int err;
+ 
+ 	cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
+@@ -1608,6 +1616,7 @@ static int kvaser_usb_hydra_set_bittiming(const struct net_device *netdev,
+ 		return -ENOMEM;
+ 
+ 	cmd->header.cmd_no = CMD_SET_BUSPARAMS_REQ;
++	cmd_len = kvaser_usb_hydra_cmd_size(cmd);
+ 	memcpy(&cmd->set_busparams_req.busparams_nominal, busparams,
+ 	       sizeof(cmd->set_busparams_req.busparams_nominal));
+ 
+@@ -1616,7 +1625,7 @@ static int kvaser_usb_hydra_set_bittiming(const struct net_device *netdev,
+ 	kvaser_usb_hydra_set_cmd_transid
+ 				(cmd, kvaser_usb_hydra_get_next_transid(dev));
+ 
+-	err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
++	err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
+ 
+ 	kfree(cmd);
+ 
+@@ -1629,6 +1638,7 @@ static int kvaser_usb_hydra_set_data_bittiming(const struct net_device *netdev,
+ 	struct kvaser_cmd *cmd;
+ 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
+ 	struct kvaser_usb *dev = priv->dev;
++	size_t cmd_len;
+ 	int err;
+ 
+ 	cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
+@@ -1636,6 +1646,7 @@ static int kvaser_usb_hydra_set_data_bittiming(const struct net_device *netdev,
+ 		return -ENOMEM;
+ 
+ 	cmd->header.cmd_no = CMD_SET_BUSPARAMS_FD_REQ;
++	cmd_len = kvaser_usb_hydra_cmd_size(cmd);
+ 	memcpy(&cmd->set_busparams_req.busparams_data, busparams,
+ 	       sizeof(cmd->set_busparams_req.busparams_data));
+ 
+@@ -1653,7 +1664,7 @@ static int kvaser_usb_hydra_set_data_bittiming(const struct net_device *netdev,
+ 	kvaser_usb_hydra_set_cmd_transid
+ 				(cmd, kvaser_usb_hydra_get_next_transid(dev));
+ 
+-	err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
++	err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
+ 
+ 	kfree(cmd);
+ 
+@@ -1781,6 +1792,7 @@ static int kvaser_usb_hydra_get_software_info(struct kvaser_usb *dev)
+ static int kvaser_usb_hydra_get_software_details(struct kvaser_usb *dev)
+ {
+ 	struct kvaser_cmd *cmd;
++	size_t cmd_len;
+ 	int err;
+ 	u32 flags;
+ 	struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
+@@ -1790,6 +1802,7 @@ static int kvaser_usb_hydra_get_software_details(struct kvaser_usb *dev)
+ 		return -ENOMEM;
+ 
+ 	cmd->header.cmd_no = CMD_GET_SOFTWARE_DETAILS_REQ;
++	cmd_len = kvaser_usb_hydra_cmd_size(cmd);
+ 	cmd->sw_detail_req.use_ext_cmd = 1;
+ 	kvaser_usb_hydra_set_cmd_dest_he
+ 				(cmd, KVASER_USB_HYDRA_HE_ADDRESS_ILLEGAL);
+@@ -1797,7 +1810,7 @@ static int kvaser_usb_hydra_get_software_details(struct kvaser_usb *dev)
+ 	kvaser_usb_hydra_set_cmd_transid
+ 				(cmd, kvaser_usb_hydra_get_next_transid(dev));
+ 
+-	err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
++	err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
+ 	if (err)
+ 		goto end;
+ 
+@@ -1913,6 +1926,7 @@ static int kvaser_usb_hydra_set_opt_mode(const struct kvaser_usb_net_priv *priv)
+ {
+ 	struct kvaser_usb *dev = priv->dev;
+ 	struct kvaser_cmd *cmd;
++	size_t cmd_len;
+ 	int err;
+ 
+ 	if ((priv->can.ctrlmode &
+@@ -1928,6 +1942,7 @@ static int kvaser_usb_hydra_set_opt_mode(const struct kvaser_usb_net_priv *priv)
+ 		return -ENOMEM;
+ 
+ 	cmd->header.cmd_no = CMD_SET_DRIVERMODE_REQ;
++	cmd_len = kvaser_usb_hydra_cmd_size(cmd);
+ 	kvaser_usb_hydra_set_cmd_dest_he
+ 		(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
+ 	kvaser_usb_hydra_set_cmd_transid
+@@ -1937,7 +1952,7 @@ static int kvaser_usb_hydra_set_opt_mode(const struct kvaser_usb_net_priv *priv)
+ 	else
+ 		cmd->set_ctrlmode.mode = KVASER_USB_HYDRA_CTRLMODE_NORMAL;
+ 
+-	err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
++	err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
+ 	kfree(cmd);
+ 
+ 	return err;
+diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.c b/drivers/net/wireless/marvell/mwifiex/sdio.c
+index bde9e4bbfffe7..7fb6eef409285 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sdio.c
++++ b/drivers/net/wireless/marvell/mwifiex/sdio.c
+@@ -485,6 +485,7 @@ static struct memory_type_mapping mem_type_mapping_tbl[] = {
+ };
+ 
+ static const struct of_device_id mwifiex_sdio_of_match_table[] = {
++	{ .compatible = "marvell,sd8787" },
+ 	{ .compatible = "marvell,sd8897" },
+ 	{ .compatible = "marvell,sd8997" },
+ 	{ }
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 9a12f1d38007b..2cb86c28d11fe 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -4369,12 +4369,9 @@ void rtl8xxxu_gen1_report_connect(struct rtl8xxxu_priv *priv,
+ void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
+ 				  u8 macid, bool connect)
+ {
+-#ifdef RTL8XXXU_GEN2_REPORT_CONNECT
+ 	/*
+-	 * Barry Day reports this causes issues with 8192eu and 8723bu
+-	 * devices reconnecting. The reason for this is unclear, but
+-	 * until it is better understood, leave the code in place but
+-	 * disabled, so it is not lost.
++	 * The firmware turns on the rate control when it knows it's
++	 * connected to a network.
+ 	 */
+ 	struct h2c_cmd h2c;
+ 
+@@ -4387,7 +4384,6 @@ void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
+ 		h2c.media_status_rpt.parm &= ~BIT(0);
+ 
+ 	rtl8xxxu_gen2_h2c_cmd(priv, &h2c, sizeof(h2c.media_status_rpt));
+-#endif
+ }
+ 
+ void rtl8xxxu_gen1_init_aggregation(struct rtl8xxxu_priv *priv)
+diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
+index f24bef3be48a3..ce74cde6d8faa 100644
+--- a/fs/ext4/sysfs.c
++++ b/fs/ext4/sysfs.c
+@@ -487,6 +487,11 @@ static void ext4_sb_release(struct kobject *kobj)
+ 	complete(&sbi->s_kobj_unregister);
+ }
+ 
++static void ext4_feat_release(struct kobject *kobj)
++{
++	kfree(kobj);
++}
++
+ static const struct sysfs_ops ext4_attr_ops = {
+ 	.show	= ext4_attr_show,
+ 	.store	= ext4_attr_store,
+@@ -501,7 +506,7 @@ static struct kobj_type ext4_sb_ktype = {
+ static struct kobj_type ext4_feat_ktype = {
+ 	.default_groups = ext4_feat_groups,
+ 	.sysfs_ops	= &ext4_attr_ops,
+-	.release	= (void (*)(struct kobject *))kfree,
++	.release	= ext4_feat_release,
+ };
+ 
+ static struct kobject *ext4_root;
+diff --git a/include/linux/nospec.h b/include/linux/nospec.h
+index c1e79f72cd892..9f0af4f116d98 100644
+--- a/include/linux/nospec.h
++++ b/include/linux/nospec.h
+@@ -11,6 +11,10 @@
+ 
+ struct task_struct;
+ 
++#ifndef barrier_nospec
++# define barrier_nospec() do { } while (0)
++#endif
++
+ /**
+  * array_index_mask_nospec() - generate a ~0 mask when index < size, 0 otherwise
+  * @index: array element index
+diff --git a/include/linux/random.h b/include/linux/random.h
+index 917470c4490ac..ed2bac6c7a8ac 100644
+--- a/include/linux/random.h
++++ b/include/linux/random.h
+@@ -19,14 +19,14 @@ void add_input_randomness(unsigned int type, unsigned int code,
+ void add_interrupt_randomness(int irq) __latent_entropy;
+ void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy);
+ 
+-#if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__)
+ static inline void add_latent_entropy(void)
+ {
++#if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__)
+ 	add_device_randomness((const void *)&latent_entropy, sizeof(latent_entropy));
+-}
+ #else
+-static inline void add_latent_entropy(void) { }
++	add_device_randomness(NULL, 0);
+ #endif
++}
+ 
+ void get_random_bytes(void *buf, size_t len);
+ size_t __must_check get_random_bytes_arch(void *buf, size_t len);
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index fd2aa6b9909ec..73d4b1e32fbdb 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -32,6 +32,7 @@
+ #include <linux/perf_event.h>
+ #include <linux/extable.h>
+ #include <linux/log2.h>
++#include <linux/nospec.h>
+ 
+ #include <asm/barrier.h>
+ #include <asm/unaligned.h>
+@@ -1642,9 +1643,7 @@ out:
+ 		 * reuse preexisting logic from Spectre v1 mitigation that
+ 		 * happens to produce the required code on x86 for v4 as well.
+ 		 */
+-#ifdef CONFIG_X86
+ 		barrier_nospec();
+-#endif
+ 		CONT;
+ #define LDST(SIZEOP, SIZE)						\
+ 	STX_MEM_##SIZEOP:						\
+diff --git a/lib/usercopy.c b/lib/usercopy.c
+index 7413dd300516e..7ee63df042d7e 100644
+--- a/lib/usercopy.c
++++ b/lib/usercopy.c
+@@ -3,6 +3,7 @@
+ #include <linux/fault-inject-usercopy.h>
+ #include <linux/instrumented.h>
+ #include <linux/uaccess.h>
++#include <linux/nospec.h>
+ 
+ /* out-of-line parts */
+ 
+@@ -12,6 +13,12 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
+ 	unsigned long res = n;
+ 	might_fault();
+ 	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
++		/*
++		 * Ensure that bad access_ok() speculation will not
++		 * lead to nasty side effects *after* the copy is
++		 * finished:
++		 */
++		barrier_nospec();
+ 		instrument_copy_from_user(to, from, n);
+ 		res = raw_copy_from_user(to, from, n);
+ 	}
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 63499db5c63d9..bd349ae9ee4b4 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -644,6 +644,26 @@ struct mesh_csa_settings {
+ 	struct cfg80211_csa_settings settings;
+ };
+ 
++/**
++ * struct mesh_table
++ *
++ * @known_gates: list of known mesh gates and their mpaths by the station. The
++ * gate's mpath may or may not be resolved and active.
++ * @gates_lock: protects updates to known_gates
++ * @rhead: the rhashtable containing struct mesh_paths, keyed by dest addr
++ * @walk_head: linked list containing all mesh_path objects
++ * @walk_lock: lock protecting walk_head
++ * @entries: number of entries in the table
++ */
++struct mesh_table {
++	struct hlist_head known_gates;
++	spinlock_t gates_lock;
++	struct rhashtable rhead;
++	struct hlist_head walk_head;
++	spinlock_t walk_lock;
++	atomic_t entries;		/* Up to MAX_MESH_NEIGHBOURS */
++};
++
+ struct ieee80211_if_mesh {
+ 	struct timer_list housekeeping_timer;
+ 	struct timer_list mesh_path_timer;
+@@ -718,8 +738,8 @@ struct ieee80211_if_mesh {
+ 	/* offset from skb->data while building IE */
+ 	int meshconf_offset;
+ 
+-	struct mesh_table *mesh_paths;
+-	struct mesh_table *mpp_paths; /* Store paths for MPP&MAP */
++	struct mesh_table mesh_paths;
++	struct mesh_table mpp_paths; /* Store paths for MPP&MAP */
+ 	int mesh_paths_generation;
+ 	int mpp_paths_generation;
+ };
+diff --git a/net/mac80211/mesh.h b/net/mac80211/mesh.h
+index 40492d1bd8fda..b2b717a78114f 100644
+--- a/net/mac80211/mesh.h
++++ b/net/mac80211/mesh.h
+@@ -127,26 +127,6 @@ struct mesh_path {
+ 	u32 path_change_count;
+ };
+ 
+-/**
+- * struct mesh_table
+- *
+- * @known_gates: list of known mesh gates and their mpaths by the station. The
+- * gate's mpath may or may not be resolved and active.
+- * @gates_lock: protects updates to known_gates
+- * @rhead: the rhashtable containing struct mesh_paths, keyed by dest addr
+- * @walk_head: linked list containging all mesh_path objects
+- * @walk_lock: lock protecting walk_head
+- * @entries: number of entries in the table
+- */
+-struct mesh_table {
+-	struct hlist_head known_gates;
+-	spinlock_t gates_lock;
+-	struct rhashtable rhead;
+-	struct hlist_head walk_head;
+-	spinlock_t walk_lock;
+-	atomic_t entries;		/* Up to MAX_MESH_NEIGHBOURS */
+-};
+-
+ /* Recent multicast cache */
+ /* RMC_BUCKETS must be a power of 2, maximum 256 */
+ #define RMC_BUCKETS		256
+@@ -308,7 +288,7 @@ int mesh_path_error_tx(struct ieee80211_sub_if_data *sdata,
+ void mesh_path_assign_nexthop(struct mesh_path *mpath, struct sta_info *sta);
+ void mesh_path_flush_pending(struct mesh_path *mpath);
+ void mesh_path_tx_pending(struct mesh_path *mpath);
+-int mesh_pathtbl_init(struct ieee80211_sub_if_data *sdata);
++void mesh_pathtbl_init(struct ieee80211_sub_if_data *sdata);
+ void mesh_pathtbl_unregister(struct ieee80211_sub_if_data *sdata);
+ int mesh_path_del(struct ieee80211_sub_if_data *sdata, const u8 *addr);
+ void mesh_path_timer(struct timer_list *t);
+diff --git a/net/mac80211/mesh_pathtbl.c b/net/mac80211/mesh_pathtbl.c
+index c2b051e0610ab..d936ef0c17a37 100644
+--- a/net/mac80211/mesh_pathtbl.c
++++ b/net/mac80211/mesh_pathtbl.c
+@@ -47,32 +47,24 @@ static void mesh_path_rht_free(void *ptr, void *tblptr)
+ 	mesh_path_free_rcu(tbl, mpath);
+ }
+ 
+-static struct mesh_table *mesh_table_alloc(void)
++static void mesh_table_init(struct mesh_table *tbl)
+ {
+-	struct mesh_table *newtbl;
++	INIT_HLIST_HEAD(&tbl->known_gates);
++	INIT_HLIST_HEAD(&tbl->walk_head);
++	atomic_set(&tbl->entries,  0);
++	spin_lock_init(&tbl->gates_lock);
++	spin_lock_init(&tbl->walk_lock);
+ 
+-	newtbl = kmalloc(sizeof(struct mesh_table), GFP_ATOMIC);
+-	if (!newtbl)
+-		return NULL;
+-
+-	INIT_HLIST_HEAD(&newtbl->known_gates);
+-	INIT_HLIST_HEAD(&newtbl->walk_head);
+-	atomic_set(&newtbl->entries,  0);
+-	spin_lock_init(&newtbl->gates_lock);
+-	spin_lock_init(&newtbl->walk_lock);
+-	if (rhashtable_init(&newtbl->rhead, &mesh_rht_params)) {
+-		kfree(newtbl);
+-		return NULL;
+-	}
+-
+-	return newtbl;
++	/* rhashtable_init() may fail only in case of wrong
++	 * mesh_rht_params
++	 */
++	WARN_ON(rhashtable_init(&tbl->rhead, &mesh_rht_params));
+ }
+ 
+ static void mesh_table_free(struct mesh_table *tbl)
+ {
+ 	rhashtable_free_and_destroy(&tbl->rhead,
+ 				    mesh_path_rht_free, tbl);
+-	kfree(tbl);
+ }
+ 
+ /**
+@@ -238,13 +230,13 @@ static struct mesh_path *mpath_lookup(struct mesh_table *tbl, const u8 *dst,
+ struct mesh_path *
+ mesh_path_lookup(struct ieee80211_sub_if_data *sdata, const u8 *dst)
+ {
+-	return mpath_lookup(sdata->u.mesh.mesh_paths, dst, sdata);
++	return mpath_lookup(&sdata->u.mesh.mesh_paths, dst, sdata);
+ }
+ 
+ struct mesh_path *
+ mpp_path_lookup(struct ieee80211_sub_if_data *sdata, const u8 *dst)
+ {
+-	return mpath_lookup(sdata->u.mesh.mpp_paths, dst, sdata);
++	return mpath_lookup(&sdata->u.mesh.mpp_paths, dst, sdata);
+ }
+ 
+ static struct mesh_path *
+@@ -281,7 +273,7 @@ __mesh_path_lookup_by_idx(struct mesh_table *tbl, int idx)
+ struct mesh_path *
+ mesh_path_lookup_by_idx(struct ieee80211_sub_if_data *sdata, int idx)
+ {
+-	return __mesh_path_lookup_by_idx(sdata->u.mesh.mesh_paths, idx);
++	return __mesh_path_lookup_by_idx(&sdata->u.mesh.mesh_paths, idx);
+ }
+ 
+ /**
+@@ -296,7 +288,7 @@ mesh_path_lookup_by_idx(struct ieee80211_sub_if_data *sdata, int idx)
+ struct mesh_path *
+ mpp_path_lookup_by_idx(struct ieee80211_sub_if_data *sdata, int idx)
+ {
+-	return __mesh_path_lookup_by_idx(sdata->u.mesh.mpp_paths, idx);
++	return __mesh_path_lookup_by_idx(&sdata->u.mesh.mpp_paths, idx);
+ }
+ 
+ /**
+@@ -309,7 +301,7 @@ int mesh_path_add_gate(struct mesh_path *mpath)
+ 	int err;
+ 
+ 	rcu_read_lock();
+-	tbl = mpath->sdata->u.mesh.mesh_paths;
++	tbl = &mpath->sdata->u.mesh.mesh_paths;
+ 
+ 	spin_lock_bh(&mpath->state_lock);
+ 	if (mpath->is_gate) {
+@@ -418,7 +410,7 @@ struct mesh_path *mesh_path_add(struct ieee80211_sub_if_data *sdata,
+ 	if (!new_mpath)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	tbl = sdata->u.mesh.mesh_paths;
++	tbl = &sdata->u.mesh.mesh_paths;
+ 	spin_lock_bh(&tbl->walk_lock);
+ 	mpath = rhashtable_lookup_get_insert_fast(&tbl->rhead,
+ 						  &new_mpath->rhash,
+@@ -460,7 +452,7 @@ int mpp_path_add(struct ieee80211_sub_if_data *sdata,
+ 		return -ENOMEM;
+ 
+ 	memcpy(new_mpath->mpp, mpp, ETH_ALEN);
+-	tbl = sdata->u.mesh.mpp_paths;
++	tbl = &sdata->u.mesh.mpp_paths;
+ 
+ 	spin_lock_bh(&tbl->walk_lock);
+ 	ret = rhashtable_lookup_insert_fast(&tbl->rhead,
+@@ -489,7 +481,7 @@ int mpp_path_add(struct ieee80211_sub_if_data *sdata,
+ void mesh_plink_broken(struct sta_info *sta)
+ {
+ 	struct ieee80211_sub_if_data *sdata = sta->sdata;
+-	struct mesh_table *tbl = sdata->u.mesh.mesh_paths;
++	struct mesh_table *tbl = &sdata->u.mesh.mesh_paths;
+ 	static const u8 bcast[ETH_ALEN] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
+ 	struct mesh_path *mpath;
+ 
+@@ -548,7 +540,7 @@ static void __mesh_path_del(struct mesh_table *tbl, struct mesh_path *mpath)
+ void mesh_path_flush_by_nexthop(struct sta_info *sta)
+ {
+ 	struct ieee80211_sub_if_data *sdata = sta->sdata;
+-	struct mesh_table *tbl = sdata->u.mesh.mesh_paths;
++	struct mesh_table *tbl = &sdata->u.mesh.mesh_paths;
+ 	struct mesh_path *mpath;
+ 	struct hlist_node *n;
+ 
+@@ -563,7 +555,7 @@ void mesh_path_flush_by_nexthop(struct sta_info *sta)
+ static void mpp_flush_by_proxy(struct ieee80211_sub_if_data *sdata,
+ 			       const u8 *proxy)
+ {
+-	struct mesh_table *tbl = sdata->u.mesh.mpp_paths;
++	struct mesh_table *tbl = &sdata->u.mesh.mpp_paths;
+ 	struct mesh_path *mpath;
+ 	struct hlist_node *n;
+ 
+@@ -597,8 +589,8 @@ static void table_flush_by_iface(struct mesh_table *tbl)
+  */
+ void mesh_path_flush_by_iface(struct ieee80211_sub_if_data *sdata)
+ {
+-	table_flush_by_iface(sdata->u.mesh.mesh_paths);
+-	table_flush_by_iface(sdata->u.mesh.mpp_paths);
++	table_flush_by_iface(&sdata->u.mesh.mesh_paths);
++	table_flush_by_iface(&sdata->u.mesh.mpp_paths);
+ }
+ 
+ /**
+@@ -644,7 +636,7 @@ int mesh_path_del(struct ieee80211_sub_if_data *sdata, const u8 *addr)
+ 	/* flush relevant mpp entries first */
+ 	mpp_flush_by_proxy(sdata, addr);
+ 
+-	err = table_path_del(sdata->u.mesh.mesh_paths, sdata, addr);
++	err = table_path_del(&sdata->u.mesh.mesh_paths, sdata, addr);
+ 	sdata->u.mesh.mesh_paths_generation++;
+ 	return err;
+ }
+@@ -682,7 +674,7 @@ int mesh_path_send_to_gates(struct mesh_path *mpath)
+ 	struct mesh_path *gate;
+ 	bool copy = false;
+ 
+-	tbl = sdata->u.mesh.mesh_paths;
++	tbl = &sdata->u.mesh.mesh_paths;
+ 
+ 	rcu_read_lock();
+ 	hlist_for_each_entry_rcu(gate, &tbl->known_gates, gate_list) {
+@@ -762,29 +754,10 @@ void mesh_path_fix_nexthop(struct mesh_path *mpath, struct sta_info *next_hop)
+ 	mesh_path_tx_pending(mpath);
+ }
+ 
+-int mesh_pathtbl_init(struct ieee80211_sub_if_data *sdata)
++void mesh_pathtbl_init(struct ieee80211_sub_if_data *sdata)
+ {
+-	struct mesh_table *tbl_path, *tbl_mpp;
+-	int ret;
+-
+-	tbl_path = mesh_table_alloc();
+-	if (!tbl_path)
+-		return -ENOMEM;
+-
+-	tbl_mpp = mesh_table_alloc();
+-	if (!tbl_mpp) {
+-		ret = -ENOMEM;
+-		goto free_path;
+-	}
+-
+-	sdata->u.mesh.mesh_paths = tbl_path;
+-	sdata->u.mesh.mpp_paths = tbl_mpp;
+-
+-	return 0;
+-
+-free_path:
+-	mesh_table_free(tbl_path);
+-	return ret;
++	mesh_table_init(&sdata->u.mesh.mesh_paths);
++	mesh_table_init(&sdata->u.mesh.mpp_paths);
+ }
+ 
+ static
+@@ -806,12 +779,12 @@ void mesh_path_tbl_expire(struct ieee80211_sub_if_data *sdata,
+ 
+ void mesh_path_expire(struct ieee80211_sub_if_data *sdata)
+ {
+-	mesh_path_tbl_expire(sdata, sdata->u.mesh.mesh_paths);
+-	mesh_path_tbl_expire(sdata, sdata->u.mesh.mpp_paths);
++	mesh_path_tbl_expire(sdata, &sdata->u.mesh.mesh_paths);
++	mesh_path_tbl_expire(sdata, &sdata->u.mesh.mpp_paths);
+ }
+ 
+ void mesh_pathtbl_unregister(struct ieee80211_sub_if_data *sdata)
+ {
+-	mesh_table_free(sdata->u.mesh.mesh_paths);
+-	mesh_table_free(sdata->u.mesh.mpp_paths);
++	mesh_table_free(&sdata->u.mesh.mesh_paths);
++	mesh_table_free(&sdata->u.mesh.mpp_paths);
+ }
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index e25fe44899ffb..2d842f31ec5a8 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1906,14 +1906,12 @@ start_error:
+ 
+ static struct Qdisc *taprio_leaf(struct Qdisc *sch, unsigned long cl)
+ {
+-	struct taprio_sched *q = qdisc_priv(sch);
+-	struct net_device *dev = qdisc_dev(sch);
+-	unsigned int ntx = cl - 1;
++	struct netdev_queue *dev_queue = taprio_queue_get(sch, cl);
+ 
+-	if (ntx >= dev->num_tx_queues)
++	if (!dev_queue)
+ 		return NULL;
+ 
+-	return q->qdiscs[ntx];
++	return dev_queue->qdisc_sleeping;
+ }
+ 
+ static unsigned long taprio_find(struct Qdisc *sch, u32 classid)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-03-03 12:30 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-03-03 12:30 UTC (permalink / raw
  To: gentoo-commits

commit:     ea80b3ffdf0cbc13e3d65a4f84ff8af127c33061
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar  3 12:30:10 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar  3 12:30:10 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ea80b3ff

Linux patch 5.10.171

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 +
 1170_linux-5.10.171.patch | 597 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 601 insertions(+)

diff --git a/0000_README b/0000_README
index 4e3efaf2..1a198b29 100644
--- a/0000_README
+++ b/0000_README
@@ -723,6 +723,10 @@ Patch:  1169_linux-5.10.170.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.170
 
+Patch:  1170_linux-5.10.171.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.171
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1170_linux-5.10.171.patch b/1170_linux-5.10.171.patch
new file mode 100644
index 00000000..689d118e
--- /dev/null
+++ b/1170_linux-5.10.171.patch
@@ -0,0 +1,597 @@
+diff --git a/Makefile b/Makefile
+index 028fca7ec5cf3..9dde2c2307893 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 170
++SUBLEVEL = 171
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
+index 9051fb4a267d4..aab28161b9ae9 100644
+--- a/arch/arm/boot/dts/rk3288.dtsi
++++ b/arch/arm/boot/dts/rk3288.dtsi
+@@ -1203,6 +1203,7 @@
+ 		clock-names = "dp", "pclk";
+ 		phys = <&edp_phy>;
+ 		phy-names = "dp";
++		power-domains = <&power RK3288_PD_VIO>;
+ 		resets = <&cru SRST_EDP>;
+ 		reset-names = "dp";
+ 		rockchip,grf = <&grf>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
+index daa9a0c601a9f..22ab5e1d7319d 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
+@@ -91,7 +91,6 @@
+ 			linux,default-trigger = "heartbeat";
+ 			gpios = <&rk805 1 GPIO_ACTIVE_LOW>;
+ 			default-state = "on";
+-			mode = <0x23>;
+ 		};
+ 
+ 		user_led: led-1 {
+@@ -99,7 +98,6 @@
+ 			linux,default-trigger = "mmc1";
+ 			gpios = <&rk805 0 GPIO_ACTIVE_LOW>;
+ 			default-state = "off";
+-			mode = <0x05>;
+ 		};
+ 	};
+ };
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 99e23a5df0267..2306abb09f7f5 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -3687,8 +3687,8 @@ void acpi_nfit_shutdown(void *data)
+ 
+ 	mutex_lock(&acpi_desc->init_mutex);
+ 	set_bit(ARS_CANCEL, &acpi_desc->scrub_flags);
+-	cancel_delayed_work_sync(&acpi_desc->dwork);
+ 	mutex_unlock(&acpi_desc->init_mutex);
++	cancel_delayed_work_sync(&acpi_desc->dwork);
+ 
+ 	/*
+ 	 * Bounce the nvdimm bus lock to make sure any in-flight
+diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
+index 0c98978e2e55c..1681486860019 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_object.c
++++ b/drivers/gpu/drm/virtio/virtgpu_object.c
+@@ -157,9 +157,10 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
+ 	 * since virtio_gpu doesn't support dma-buf import from other devices.
+ 	 */
+ 	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base.base);
+-	if (!shmem->pages) {
++	if (IS_ERR(shmem->pages)) {
+ 		drm_gem_shmem_unpin(&bo->base.base);
+-		return -EINVAL;
++		shmem->pages = NULL;
++		return PTR_ERR(shmem->pages);
+ 	}
+ 
+ 	if (use_dma_api) {
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index baadead947c8b..5f9ec1d1464a2 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1197,6 +1197,7 @@ int hid_open_report(struct hid_device *device)
+ 	__u8 *end;
+ 	__u8 *next;
+ 	int ret;
++	int i;
+ 	static int (*dispatch_type[])(struct hid_parser *parser,
+ 				      struct hid_item *item) = {
+ 		hid_parser_main,
+@@ -1247,6 +1248,8 @@ int hid_open_report(struct hid_device *device)
+ 		goto err;
+ 	}
+ 	device->collection_size = HID_DEFAULT_NUM_COLLECTIONS;
++	for (i = 0; i < HID_DEFAULT_NUM_COLLECTIONS; i++)
++		device->collection[i].parent_idx = -1;
+ 
+ 	ret = -EINVAL;
+ 	while ((next = fetch_item(start, end, &item)) != NULL) {
+diff --git a/drivers/infiniband/hw/hfi1/user_exp_rcv.c b/drivers/infiniband/hw/hfi1/user_exp_rcv.c
+index 897923981855d..0e0be6c62e3d1 100644
+--- a/drivers/infiniband/hw/hfi1/user_exp_rcv.c
++++ b/drivers/infiniband/hw/hfi1/user_exp_rcv.c
+@@ -202,16 +202,11 @@ static void unpin_rcv_pages(struct hfi1_filedata *fd,
+ static int pin_rcv_pages(struct hfi1_filedata *fd, struct tid_user_buf *tidbuf)
+ {
+ 	int pinned;
+-	unsigned int npages;
++	unsigned int npages = tidbuf->npages;
+ 	unsigned long vaddr = tidbuf->vaddr;
+ 	struct page **pages = NULL;
+ 	struct hfi1_devdata *dd = fd->uctxt->dd;
+ 
+-	/* Get the number of pages the user buffer spans */
+-	npages = num_user_pages(vaddr, tidbuf->length);
+-	if (!npages)
+-		return -EINVAL;
+-
+ 	if (npages > fd->uctxt->expected_count) {
+ 		dd_dev_err(dd, "Expected buffer too big\n");
+ 		return -EINVAL;
+@@ -238,7 +233,6 @@ static int pin_rcv_pages(struct hfi1_filedata *fd, struct tid_user_buf *tidbuf)
+ 		return pinned;
+ 	}
+ 	tidbuf->pages = pages;
+-	tidbuf->npages = npages;
+ 	fd->tid_n_pinned += pinned;
+ 	return pinned;
+ }
+@@ -316,6 +310,7 @@ int hfi1_user_exp_rcv_setup(struct hfi1_filedata *fd,
+ 	mutex_init(&tidbuf->cover_mutex);
+ 	tidbuf->vaddr = tinfo->vaddr;
+ 	tidbuf->length = tinfo->length;
++	tidbuf->npages = num_user_pages(tidbuf->vaddr, tidbuf->length);
+ 	tidbuf->psets = kcalloc(uctxt->expected_count, sizeof(*tidbuf->psets),
+ 				GFP_KERNEL);
+ 	if (!tidbuf->psets) {
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 3038e7ecb7e16..c0b34637bd667 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -5683,6 +5683,7 @@ static int md_alloc(dev_t dev, char *name)
+ 	 * completely removed (mddev_delayed_delete).
+ 	 */
+ 	flush_workqueue(md_misc_wq);
++	flush_workqueue(md_rdev_misc_wq);
+ 
+ 	mutex_lock(&disks_mutex);
+ 	error = -EEXIST;
+diff --git a/drivers/tty/vt/vc_screen.c b/drivers/tty/vt/vc_screen.c
+index f566eb1839dc5..71e091f879f0e 100644
+--- a/drivers/tty/vt/vc_screen.c
++++ b/drivers/tty/vt/vc_screen.c
+@@ -403,10 +403,11 @@ vcs_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+ 		unsigned int this_round, skip = 0;
+ 		int size;
+ 
+-		ret = -ENXIO;
+ 		vc = vcs_vc(inode, &viewed);
+-		if (!vc)
+-			goto unlock_out;
++		if (!vc) {
++			ret = -ENXIO;
++			break;
++		}
+ 
+ 		/* Check whether we are above size each round,
+ 		 * as copy_to_user at the end of this loop
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 5925b8eb9ee38..7af2def631a24 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -2380,9 +2380,8 @@ static int usb_enumerate_device_otg(struct usb_device *udev)
+  * usb_enumerate_device - Read device configs/intfs/otg (usbcore-internal)
+  * @udev: newly addressed device (in ADDRESS state)
+  *
+- * This is only called by usb_new_device() and usb_authorize_device()
+- * and FIXME -- all comments that apply to them apply here wrt to
+- * environment.
++ * This is only called by usb_new_device() -- all comments that apply there
++ * apply here wrt to environment.
+  *
+  * If the device is WUSB and not authorized, we don't attempt to read
+  * the string descriptors, as they will be errored out by the device
+diff --git a/drivers/usb/core/sysfs.c b/drivers/usb/core/sysfs.c
+index 8d134193fa0cf..a2ca38e25e0c3 100644
+--- a/drivers/usb/core/sysfs.c
++++ b/drivers/usb/core/sysfs.c
+@@ -889,11 +889,7 @@ read_descriptors(struct file *filp, struct kobject *kobj,
+ 	size_t srclen, n;
+ 	int cfgno;
+ 	void *src;
+-	int retval;
+ 
+-	retval = usb_lock_device_interruptible(udev);
+-	if (retval < 0)
+-		return -EINTR;
+ 	/* The binary attribute begins with the device descriptor.
+ 	 * Following that are the raw descriptor entries for all the
+ 	 * configurations (config plus subsidiary descriptors).
+@@ -918,7 +914,6 @@ read_descriptors(struct file *filp, struct kobject *kobj,
+ 			off -= srclen;
+ 		}
+ 	}
+-	usb_unlock_device(udev);
+ 	return count - nleft;
+ }
+ 
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index 2caccbb6e0140..7b54e814aefb1 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -81,6 +81,9 @@
+ #define WRITE_BUF_SIZE		8192		/* TX only */
+ #define GS_CONSOLE_BUF_SIZE	8192
+ 
++/* Prevents race conditions while accessing gser->ioport */
++static DEFINE_SPINLOCK(serial_port_lock);
++
+ /* console info */
+ struct gs_console {
+ 	struct console		console;
+@@ -1376,8 +1379,10 @@ void gserial_disconnect(struct gserial *gser)
+ 	if (!port)
+ 		return;
+ 
++	spin_lock_irqsave(&serial_port_lock, flags);
++
+ 	/* tell the TTY glue not to do I/O here any more */
+-	spin_lock_irqsave(&port->port_lock, flags);
++	spin_lock(&port->port_lock);
+ 
+ 	gs_console_disconnect(port);
+ 
+@@ -1392,7 +1397,8 @@ void gserial_disconnect(struct gserial *gser)
+ 			tty_hangup(port->port.tty);
+ 	}
+ 	port->suspended = false;
+-	spin_unlock_irqrestore(&port->port_lock, flags);
++	spin_unlock(&port->port_lock);
++	spin_unlock_irqrestore(&serial_port_lock, flags);
+ 
+ 	/* disable endpoints, aborting down any active I/O */
+ 	usb_ep_disable(gser->out);
+@@ -1426,10 +1432,19 @@ EXPORT_SYMBOL_GPL(gserial_suspend);
+ 
+ void gserial_resume(struct gserial *gser)
+ {
+-	struct gs_port *port = gser->ioport;
++	struct gs_port *port;
+ 	unsigned long	flags;
+ 
+-	spin_lock_irqsave(&port->port_lock, flags);
++	spin_lock_irqsave(&serial_port_lock, flags);
++	port = gser->ioport;
++
++	if (!port) {
++		spin_unlock_irqrestore(&serial_port_lock, flags);
++		return;
++	}
++
++	spin_lock(&port->port_lock);
++	spin_unlock(&serial_port_lock);
+ 	port->suspended = false;
+ 	if (!port->start_delayed) {
+ 		spin_unlock_irqrestore(&port->port_lock, flags);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 2fc65cbbfea95..14a7af7f3bcd7 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -402,6 +402,8 @@ static void option_instat_callback(struct urb *urb);
+ #define LONGCHEER_VENDOR_ID			0x1c9e
+ 
+ /* 4G Systems products */
++/* This one was sold as the VW and Skoda "Carstick LTE" */
++#define FOUR_G_SYSTEMS_PRODUCT_CARSTICK_LTE	0x7605
+ /* This is the 4G XS Stick W14 a.k.a. Mobilcom Debitel Surf-Stick *
+  * It seems to contain a Qualcomm QSC6240/6290 chipset            */
+ #define FOUR_G_SYSTEMS_PRODUCT_W14		0x9603
+@@ -1976,6 +1978,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(2) },
+ 	{ USB_DEVICE(AIRPLUS_VENDOR_ID, AIRPLUS_PRODUCT_MCD650) },
+ 	{ USB_DEVICE(TLAYTECH_VENDOR_ID, TLAYTECH_PRODUCT_TEU800) },
++	{ USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_CARSTICK_LTE),
++	  .driver_info = RSVD(0) },
+ 	{ USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W14),
+ 	  .driver_info = NCTRL(0) | NCTRL(1) },
+ 	{ USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W100),
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 4a6ba0997e399..b081b61e97c8d 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -7276,10 +7276,10 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 	/*
+ 	 * Check that we don't overflow at later allocations, we request
+ 	 * clone_sources_count + 1 items, and compare to unsigned long inside
+-	 * access_ok.
++	 * access_ok. Also set an upper limit for allocation size so this can't
++	 * easily exhaust memory. Max number of clone sources is about 200K.
+ 	 */
+-	if (arg->clone_sources_count >
+-	    ULONG_MAX / sizeof(struct clone_root) - 1) {
++	if (arg->clone_sources_count > SZ_8M / sizeof(struct clone_root)) {
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 642e1a0560c6d..0c27b81ee1eb7 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -1092,7 +1092,8 @@ static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
+ 				     unsigned nr_args);
+ static void io_clean_op(struct io_kiocb *req);
+ static struct file *io_file_get(struct io_ring_ctx *ctx,
+-				struct io_kiocb *req, int fd, bool fixed);
++				struct io_kiocb *req, int fd, bool fixed,
++				unsigned int issue_flags);
+ static void __io_queue_sqe(struct io_kiocb *req);
+ static void io_rsrc_put_work(struct work_struct *work);
+ 
+@@ -3975,7 +3976,7 @@ static int io_tee(struct io_kiocb *req, unsigned int issue_flags)
+ 		return -EAGAIN;
+ 
+ 	in = io_file_get(req->ctx, req, sp->splice_fd_in,
+-				  (sp->flags & SPLICE_F_FD_IN_FIXED));
++			 (sp->flags & SPLICE_F_FD_IN_FIXED), issue_flags);
+ 	if (!in) {
+ 		ret = -EBADF;
+ 		goto done;
+@@ -4015,7 +4016,7 @@ static int io_splice(struct io_kiocb *req, unsigned int issue_flags)
+ 		return -EAGAIN;
+ 
+ 	in = io_file_get(req->ctx, req, sp->splice_fd_in,
+-				  (sp->flags & SPLICE_F_FD_IN_FIXED));
++			 (sp->flags & SPLICE_F_FD_IN_FIXED), issue_flags);
+ 	if (!in) {
+ 		ret = -EBADF;
+ 		goto done;
+@@ -6876,13 +6877,16 @@ static void io_fixed_file_set(struct io_fixed_file *file_slot, struct file *file
+ }
+ 
+ static inline struct file *io_file_get_fixed(struct io_ring_ctx *ctx,
+-					     struct io_kiocb *req, int fd)
++					     struct io_kiocb *req, int fd,
++					     unsigned int issue_flags)
+ {
+-	struct file *file;
++	struct file *file = NULL;
+ 	unsigned long file_ptr;
+ 
++	io_ring_submit_lock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
++
+ 	if (unlikely((unsigned int)fd >= ctx->nr_user_files))
+-		return NULL;
++		goto out;
+ 	fd = array_index_nospec(fd, ctx->nr_user_files);
+ 	file_ptr = io_fixed_file_slot(&ctx->file_table, fd)->file_ptr;
+ 	file = (struct file *) (file_ptr & FFS_MASK);
+@@ -6890,6 +6894,8 @@ static inline struct file *io_file_get_fixed(struct io_ring_ctx *ctx,
+ 	/* mask in overlapping REQ_F and FFS bits */
+ 	req->flags |= (file_ptr << REQ_F_NOWAIT_READ_BIT);
+ 	io_req_set_rsrc_node(req);
++out:
++	io_ring_submit_unlock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
+ 	return file;
+ }
+ 
+@@ -6907,10 +6913,11 @@ static struct file *io_file_get_normal(struct io_ring_ctx *ctx,
+ }
+ 
+ static inline struct file *io_file_get(struct io_ring_ctx *ctx,
+-				       struct io_kiocb *req, int fd, bool fixed)
++				       struct io_kiocb *req, int fd, bool fixed,
++				       unsigned int issue_flags)
+ {
+ 	if (fixed)
+-		return io_file_get_fixed(ctx, req, fd);
++		return io_file_get_fixed(ctx, req, fd, issue_flags);
+ 	else
+ 		return io_file_get_normal(ctx, req, fd);
+ }
+@@ -7132,7 +7139,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ 
+ 	if (io_op_defs[req->opcode].needs_file) {
+ 		req->file = io_file_get(ctx, req, READ_ONCE(sqe->fd),
+-					(sqe_flags & IOSQE_FIXED_FILE));
++					(sqe_flags & IOSQE_FIXED_FILE), 0);
+ 		if (unlikely(!req->file))
+ 			ret = -EBADF;
+ 	}
+diff --git a/net/caif/caif_socket.c b/net/caif/caif_socket.c
+index 9d26c5e9da058..d35ea927ca8af 100644
+--- a/net/caif/caif_socket.c
++++ b/net/caif/caif_socket.c
+@@ -1020,6 +1020,7 @@ static void caif_sock_destructor(struct sock *sk)
+ 		return;
+ 	}
+ 	sk_stream_kill_queues(&cf_sk->sk);
++	WARN_ON(sk->sk_forward_alloc);
+ 	caif_free_client(&cf_sk->layer);
+ }
+ 
+diff --git a/net/core/filter.c b/net/core/filter.c
+index a5df0cf46bbf8..b9c954182b375 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -5401,7 +5401,7 @@ static int bpf_ipv4_fib_lookup(struct net *net, struct bpf_fib_lookup *params,
+ 		neigh = __ipv6_neigh_lookup_noref_stub(dev, dst);
+ 	}
+ 
+-	if (!neigh)
++	if (!neigh || !(neigh->nud_state & NUD_VALID))
+ 		return BPF_FIB_LKUP_RET_NO_NEIGH;
+ 
+ 	return bpf_fib_set_fwd_params(params, neigh, dev);
+@@ -5514,7 +5514,7 @@ static int bpf_ipv6_fib_lookup(struct net *net, struct bpf_fib_lookup *params,
+ 	 * not needed here.
+ 	 */
+ 	neigh = __ipv6_neigh_lookup_noref_stub(dev, dst);
+-	if (!neigh)
++	if (!neigh || !(neigh->nud_state & NUD_VALID))
+ 		return BPF_FIB_LKUP_RET_NO_NEIGH;
+ 
+ 	return bpf_fib_set_fwd_params(params, neigh, dev);
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index f6f580e9d2820..82ccc3eebe71d 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -242,7 +242,7 @@ static int neigh_forced_gc(struct neigh_table *tbl)
+ 			    (n->nud_state == NUD_NOARP) ||
+ 			    (tbl->is_multicast &&
+ 			     tbl->is_multicast(n->primary_key)) ||
+-			    time_after(tref, n->updated))
++			    !time_in_range(n->updated, tref, jiffies))
+ 				remove = true;
+ 			write_unlock(&n->lock);
+ 
+@@ -262,7 +262,17 @@ static int neigh_forced_gc(struct neigh_table *tbl)
+ 
+ static void neigh_add_timer(struct neighbour *n, unsigned long when)
+ {
++	/* Use safe distance from the jiffies - LONG_MAX point while timer
++	 * is running in DELAY/PROBE state but still show to user space
++	 * large times in the past.
++	 */
++	unsigned long mint = jiffies - (LONG_MAX - 86400 * HZ);
++
+ 	neigh_hold(n);
++	if (!time_in_range(n->confirmed, mint, jiffies))
++		n->confirmed = mint;
++	if (time_before(n->used, n->confirmed))
++		n->used = n->confirmed;
+ 	if (unlikely(mod_timer(&n->timer, when))) {
+ 		printk("NEIGH: BUG, double timer add, state is %x\n",
+ 		       n->nud_state);
+@@ -948,12 +958,14 @@ static void neigh_periodic_work(struct work_struct *work)
+ 				goto next_elt;
+ 			}
+ 
+-			if (time_before(n->used, n->confirmed))
++			if (time_before(n->used, n->confirmed) &&
++			    time_is_before_eq_jiffies(n->confirmed))
+ 				n->used = n->confirmed;
+ 
+ 			if (refcount_read(&n->refcnt) == 1 &&
+ 			    (state == NUD_FAILED ||
+-			     time_after(jiffies, n->used + NEIGH_VAR(n->parms, GC_STALETIME)))) {
++			     !time_in_range_open(jiffies, n->used,
++						 n->used + NEIGH_VAR(n->parms, GC_STALETIME)))) {
+ 				*np = n->next;
+ 				neigh_mark_dead(n);
+ 				write_unlock(&n->lock);
+diff --git a/net/core/stream.c b/net/core/stream.c
+index d7c5413d16d57..cd60746877b1e 100644
+--- a/net/core/stream.c
++++ b/net/core/stream.c
+@@ -209,7 +209,6 @@ void sk_stream_kill_queues(struct sock *sk)
+ 	sk_mem_reclaim(sk);
+ 
+ 	WARN_ON(sk->sk_wmem_queued);
+-	WARN_ON(sk->sk_forward_alloc);
+ 
+ 	/* It is _impossible_ for the backlog to contain anything
+ 	 * when we get here.  All user references to this socket
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+index da518b4ca84c6..e4f21a6924153 100644
+--- a/net/xfrm/xfrm_interface.c
++++ b/net/xfrm/xfrm_interface.c
+@@ -207,6 +207,52 @@ static void xfrmi_scrub_packet(struct sk_buff *skb, bool xnet)
+ 	skb->mark = 0;
+ }
+ 
++static int xfrmi_input(struct sk_buff *skb, int nexthdr, __be32 spi,
++		       int encap_type, unsigned short family)
++{
++	struct sec_path *sp;
++
++	sp = skb_sec_path(skb);
++	if (sp && (sp->len || sp->olen) &&
++	    !xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family))
++		goto discard;
++
++	XFRM_SPI_SKB_CB(skb)->family = family;
++	if (family == AF_INET) {
++		XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct iphdr, daddr);
++		XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4 = NULL;
++	} else {
++		XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct ipv6hdr, daddr);
++		XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = NULL;
++	}
++
++	return xfrm_input(skb, nexthdr, spi, encap_type);
++discard:
++	kfree_skb(skb);
++	return 0;
++}
++
++static int xfrmi4_rcv(struct sk_buff *skb)
++{
++	return xfrmi_input(skb, ip_hdr(skb)->protocol, 0, 0, AF_INET);
++}
++
++static int xfrmi6_rcv(struct sk_buff *skb)
++{
++	return xfrmi_input(skb, skb_network_header(skb)[IP6CB(skb)->nhoff],
++			   0, 0, AF_INET6);
++}
++
++static int xfrmi4_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
++{
++	return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET);
++}
++
++static int xfrmi6_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
++{
++	return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET6);
++}
++
+ static int xfrmi_rcv_cb(struct sk_buff *skb, int err)
+ {
+ 	const struct xfrm_mode *inner_mode;
+@@ -780,8 +826,8 @@ static struct pernet_operations xfrmi_net_ops = {
+ };
+ 
+ static struct xfrm6_protocol xfrmi_esp6_protocol __read_mostly = {
+-	.handler	=	xfrm6_rcv,
+-	.input_handler	=	xfrm_input,
++	.handler	=	xfrmi6_rcv,
++	.input_handler	=	xfrmi6_input,
+ 	.cb_handler	=	xfrmi_rcv_cb,
+ 	.err_handler	=	xfrmi6_err,
+ 	.priority	=	10,
+@@ -831,8 +877,8 @@ static struct xfrm6_tunnel xfrmi_ip6ip_handler __read_mostly = {
+ #endif
+ 
+ static struct xfrm4_protocol xfrmi_esp4_protocol __read_mostly = {
+-	.handler	=	xfrm4_rcv,
+-	.input_handler	=	xfrm_input,
++	.handler	=	xfrmi4_rcv,
++	.input_handler	=	xfrmi4_input,
+ 	.cb_handler	=	xfrmi_rcv_cb,
+ 	.err_handler	=	xfrmi4_err,
+ 	.priority	=	10,
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 0d12bdf59d4cc..d15aa62887de0 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3710,6 +3710,9 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 			goto reject;
+ 		}
+ 
++		if (if_id)
++			secpath_reset(skb);
++
+ 		xfrm_pols_put(pols, npols);
+ 		return 1;
+ 	}
+diff --git a/scripts/tags.sh b/scripts/tags.sh
+index fd96734deff13..b82aebb0c995e 100755
+--- a/scripts/tags.sh
++++ b/scripts/tags.sh
+@@ -95,10 +95,13 @@ all_sources()
+ 
+ all_compiled_sources()
+ {
+-	realpath -es $([ -z "$KBUILD_ABS_SRCTREE" ] && echo --relative-to=.) \
+-		include/generated/autoconf.h $(find $ignore -name "*.cmd" -exec \
+-		grep -Poh '(?(?=^source_.* \K).*|(?=^  \K\S).*(?= \\))' {} \+ |
+-		awk '!a[$0]++') | sort -u
++	{
++		echo include/generated/autoconf.h
++		find $ignore -name "*.cmd" -exec \
++			sed -n -E 's/^source_.* (.*)/\1/p; s/^  (\S.*) \\/\1/p' {} \+ |
++		awk '!a[$0]++'
++	} | xargs realpath -es $([ -z "$KBUILD_ABS_SRCTREE" ] && echo --relative-to=.) |
++	sort -u
+ }
+ 
+ all_target_sources()


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-03-03 15:01 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-03-03 15:01 UTC (permalink / raw
  To: gentoo-commits

commit:     5de46cc1e109501dc7df3def239eab3a3ee82708
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar  3 15:01:17 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar  3 15:01:17 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5de46cc1

Linux patch 5.10.172

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |  4 ++++
 1171_linux-5.10.172.patch | 27 +++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/0000_README b/0000_README
index 1a198b29..dbed36d9 100644
--- a/0000_README
+++ b/0000_README
@@ -727,6 +727,10 @@ Patch:  1170_linux-5.10.171.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.171
 
+Patch:  1171_linux-5.10.172.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.172
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1171_linux-5.10.172.patch b/1171_linux-5.10.172.patch
new file mode 100644
index 00000000..e16c8778
--- /dev/null
+++ b/1171_linux-5.10.172.patch
@@ -0,0 +1,27 @@
+diff --git a/Makefile b/Makefile
+index 9dde2c2307893..447ed158d6bc0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 171
++SUBLEVEL = 172
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 0c27b81ee1eb7..cf6f8aeb450db 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -7139,7 +7139,8 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ 
+ 	if (io_op_defs[req->opcode].needs_file) {
+ 		req->file = io_file_get(ctx, req, READ_ONCE(sqe->fd),
+-					(sqe_flags & IOSQE_FIXED_FILE), 0);
++					(sqe_flags & IOSQE_FIXED_FILE),
++					IO_URING_F_NONBLOCK);
+ 		if (unlikely(!req->file))
+ 			ret = -EBADF;
+ 	}


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-03-11 16:05 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-03-11 16:05 UTC (permalink / raw
  To: gentoo-commits

commit:     2b61db0c03aea401d08a6ddcafc906e883bdf9f3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 11 16:04:53 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Mar 11 16:04:53 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2b61db0c

Linux patch 5.10.173

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1172_linux-5.10.173.patch | 20406 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 20410 insertions(+)

diff --git a/0000_README b/0000_README
index dbed36d9..e4325784 100644
--- a/0000_README
+++ b/0000_README
@@ -731,6 +731,10 @@ Patch:  1171_linux-5.10.172.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.172
 
+Patch:  1172_linux-5.10.173.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.173
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1172_linux-5.10.173.patch b/1172_linux-5.10.173.patch
new file mode 100644
index 00000000..c5ae6c8c
--- /dev/null
+++ b/1172_linux-5.10.173.patch
@@ -0,0 +1,20406 @@
+diff --git a/Documentation/ABI/testing/configfs-usb-gadget-uvc b/Documentation/ABI/testing/configfs-usb-gadget-uvc
+index ac5e11af79a81..4b1813994bd0d 100644
+--- a/Documentation/ABI/testing/configfs-usb-gadget-uvc
++++ b/Documentation/ABI/testing/configfs-usb-gadget-uvc
+@@ -51,7 +51,7 @@ Date:		Dec 2014
+ KernelVersion:	4.0
+ Description:	Default output terminal descriptors
+ 
+-		All attributes read only:
++		All attributes read only except bSourceID:
+ 
+ 		==============	=============================================
+ 		iTerminal	index of string descriptor
+diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
+index 12757e63b26ce..7882037aca679 100644
+--- a/Documentation/admin-guide/cgroup-v1/memory.rst
++++ b/Documentation/admin-guide/cgroup-v1/memory.rst
+@@ -82,6 +82,8 @@ Brief summary of control files.
+  memory.swappiness		     set/show swappiness parameter of vmscan
+ 				     (See sysctl's vm.swappiness)
+  memory.move_charge_at_immigrate     set/show controls of moving charges
++                                     This knob is deprecated and shouldn't be
++                                     used.
+  memory.oom_control		     set/show oom controls.
+  memory.numa_stat		     show the number of memory usage per numa
+ 				     node
+@@ -740,8 +742,15 @@ NOTE2:
+        It is recommended to set the soft limit always below the hard limit,
+        otherwise the hard limit will take precedence.
+ 
+-8. Move charges at task migration
+-=================================
++8. Move charges at task migration (DEPRECATED!)
++===============================================
++
++THIS IS DEPRECATED!
++
++It's expensive and unreliable! It's better practice to launch workload
++tasks directly from inside their target cgroup. Use dedicated workload
++cgroups to allow fine-grained policy adjustments without having to
++move physical pages between control domains.
+ 
+ Users can move charges associated with a task along with task migration, that
+ is, uncharge task's pages from the old cgroup and charge them to the new cgroup.
+diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
+index 7e061ed449aaa..0fba3758d0da8 100644
+--- a/Documentation/admin-guide/hw-vuln/spectre.rst
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -479,8 +479,16 @@ Spectre variant 2
+    On Intel Skylake-era systems the mitigation covers most, but not all,
+    cases. See :ref:`[3] <spec_ref3>` for more details.
+ 
+-   On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced
+-   IBRS on x86), retpoline is automatically disabled at run time.
++   On CPUs with hardware mitigation for Spectre variant 2 (e.g. IBRS
++   or enhanced IBRS on x86), retpoline is automatically disabled at run time.
++
++   Systems which support enhanced IBRS (eIBRS) enable IBRS protection once at
++   boot, by setting the IBRS bit, and they're automatically protected against
++   Spectre v2 variant attacks, including cross-thread branch target injections
++   on SMT systems (STIBP). In other words, eIBRS enables STIBP too.
++
++   Legacy IBRS systems clear the IBRS bit on exit to userspace and
++   therefore explicitly enable STIBP for that
+ 
+    The retpoline mitigation is turned on by default on vulnerable
+    CPUs. It can be forced on or off by the administrator
+@@ -504,9 +512,12 @@ Spectre variant 2
+    For Spectre variant 2 mitigation, individual user programs
+    can be compiled with return trampolines for indirect branches.
+    This protects them from consuming poisoned entries in the branch
+-   target buffer left by malicious software.  Alternatively, the
+-   programs can disable their indirect branch speculation via prctl()
+-   (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
++   target buffer left by malicious software.
++
++   On legacy IBRS systems, at return to userspace, implicit STIBP is disabled
++   because the kernel clears the IBRS bit. In this case, the userspace programs
++   can disable indirect branch speculation via prctl() (See
++   :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
+    On x86, this will turn on STIBP to guard against attacks from the
+    sibling thread when the user program is running, and use IBPB to
+    flush the branch target buffer when switching to/from the program.
+diff --git a/Documentation/admin-guide/kdump/gdbmacros.txt b/Documentation/admin-guide/kdump/gdbmacros.txt
+index 82aecdcae8a6c..030de95e3e6b2 100644
+--- a/Documentation/admin-guide/kdump/gdbmacros.txt
++++ b/Documentation/admin-guide/kdump/gdbmacros.txt
+@@ -312,10 +312,10 @@ define dmesg
+ 			set var $prev_flags = $info->flags
+ 		end
+ 
+-		set var $id = ($id + 1) & $id_mask
+ 		if ($id == $end_id)
+ 			loop_break
+ 		end
++		set var $id = ($id + 1) & $id_mask
+ 	end
+ end
+ document dmesg
+diff --git a/Documentation/dev-tools/gdb-kernel-debugging.rst b/Documentation/dev-tools/gdb-kernel-debugging.rst
+index 4756f6b3a04e5..10cdd990b63d0 100644
+--- a/Documentation/dev-tools/gdb-kernel-debugging.rst
++++ b/Documentation/dev-tools/gdb-kernel-debugging.rst
+@@ -39,6 +39,10 @@ Setup
+   this mode. In this case, you should build the kernel with
+   CONFIG_RANDOMIZE_BASE disabled if the architecture supports KASLR.
+ 
++- Build the gdb scripts (required on kernels v5.1 and above)::
++
++    make scripts_gdb
++
+ - Enable the gdb stub of QEMU/KVM, either
+ 
+     - at VM startup time by appending "-s" to the QEMU command line
+diff --git a/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml b/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
+index 2e35aeaa8781d..89e3819c6127a 100644
+--- a/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
++++ b/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
+@@ -61,7 +61,7 @@ patternProperties:
+         description: phandle of the CPU DAI
+ 
+     patternProperties:
+-      "^codec-[0-9]+$":
++      "^codec(-[0-9]+)?$":
+         type: object
+         description: |-
+           Codecs:
+diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
+index 2b4b64797191f..08295f488d057 100644
+--- a/Documentation/virt/kvm/api.rst
++++ b/Documentation/virt/kvm/api.rst
+@@ -4031,6 +4031,18 @@ not holding a previously reported uncorrected error).
+ :Parameters: struct kvm_s390_cmma_log (in, out)
+ :Returns: 0 on success, a negative value on error
+ 
++Errors:
++
++  ======     =============================================================
++  ENOMEM     not enough memory can be allocated to complete the task
++  ENXIO      if CMMA is not enabled
++  EINVAL     if KVM_S390_CMMA_PEEK is not set but migration mode was not enabled
++  EINVAL     if KVM_S390_CMMA_PEEK is not set but dirty tracking has been
++             disabled (and thus migration mode was automatically disabled)
++  EFAULT     if the userspace address is invalid or if no page table is
++             present for the addresses (e.g. when using hugepages).
++  ======     =============================================================
++
+ This ioctl is used to get the values of the CMMA bits on the s390
+ architecture. It is meant to be used in two scenarios:
+ 
+@@ -4111,12 +4123,6 @@ mask is unused.
+ 
+ values points to the userspace buffer where the result will be stored.
+ 
+-This ioctl can fail with -ENOMEM if not enough memory can be allocated to
+-complete the task, with -ENXIO if CMMA is not enabled, with -EINVAL if
+-KVM_S390_CMMA_PEEK is not set but migration mode was not enabled, with
+--EFAULT if the userspace address is invalid or if no page table is
+-present for the addresses (e.g. when using hugepages).
+-
+ 4.108 KVM_S390_SET_CMMA_BITS
+ ----------------------------
+ 
+diff --git a/Documentation/virt/kvm/devices/vm.rst b/Documentation/virt/kvm/devices/vm.rst
+index 60acc39e0e937..147efec626e52 100644
+--- a/Documentation/virt/kvm/devices/vm.rst
++++ b/Documentation/virt/kvm/devices/vm.rst
+@@ -302,6 +302,10 @@ Allows userspace to start migration mode, needed for PGSTE migration.
+ Setting this attribute when migration mode is already active will have
+ no effects.
+ 
++Dirty tracking must be enabled on all memslots, else -EINVAL is returned. When
++dirty tracking is disabled on any memslot, migration mode is automatically
++stopped.
++
+ :Parameters: none
+ :Returns:   -ENOMEM if there is not enough free memory to start migration mode;
+ 	    -EINVAL if the state of the VM is invalid (e.g. no memory defined);
+diff --git a/Makefile b/Makefile
+index 447ed158d6bc0..1a6ea79940797 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 172
++SUBLEVEL = 173
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -93,9 +93,16 @@ endif
+ 
+ # If the user is running make -s (silent mode), suppress echoing of
+ # commands
++# make-4.0 (and later) keep single letter options in the 1st word of MAKEFLAGS.
+ 
+-ifneq ($(findstring s,$(filter-out --%,$(MAKEFLAGS))),)
+-  quiet=silent_
++ifeq ($(filter 3.%,$(MAKE_VERSION)),)
++silence:=$(findstring s,$(firstword -$(MAKEFLAGS)))
++else
++silence:=$(findstring s,$(filter-out --%,$(MAKEFLAGS)))
++endif
++
++ifeq ($(silence),s)
++quiet=silent_
+ endif
+ 
+ export quiet Q KBUILD_VERBOSE
+diff --git a/arch/alpha/boot/tools/objstrip.c b/arch/alpha/boot/tools/objstrip.c
+index 08b430d25a315..7cf92d172dce9 100644
+--- a/arch/alpha/boot/tools/objstrip.c
++++ b/arch/alpha/boot/tools/objstrip.c
+@@ -148,7 +148,7 @@ main (int argc, char *argv[])
+ #ifdef __ELF__
+     elf = (struct elfhdr *) buf;
+ 
+-    if (elf->e_ident[0] == 0x7f && str_has_prefix((char *)elf->e_ident + 1, "ELF")) {
++    if (memcmp(&elf->e_ident[EI_MAG0], ELFMAG, SELFMAG) == 0) {
+ 	if (elf->e_type != ET_EXEC) {
+ 	    fprintf(stderr, "%s: %s is not an ELF executable\n",
+ 		    prog_name, inname);
+diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
+index 8b0f81a58b948..751d3197ca766 100644
+--- a/arch/alpha/kernel/traps.c
++++ b/arch/alpha/kernel/traps.c
+@@ -235,7 +235,21 @@ do_entIF(unsigned long type, struct pt_regs *regs)
+ {
+ 	int signo, code;
+ 
+-	if ((regs->ps & ~IPL_MAX) == 0) {
++	if (type == 3) { /* FEN fault */
++		/* Irritating users can call PAL_clrfen to disable the
++		   FPU for the process.  The kernel will then trap in
++		   do_switch_stack and undo_switch_stack when we try
++		   to save and restore the FP registers.
++
++		   Given that GCC by default generates code that uses the
++		   FP registers, PAL_clrfen is not useful except for DoS
++		   attacks.  So turn the bleeding FPU back on and be done
++		   with it.  */
++		current_thread_info()->pcb.flags |= 1;
++		__reload_thread(&current_thread_info()->pcb);
++		return;
++	}
++	if (!user_mode(regs)) {
+ 		if (type == 1) {
+ 			const unsigned int *data
+ 			  = (const unsigned int *) regs->pc;
+@@ -368,20 +382,6 @@ do_entIF(unsigned long type, struct pt_regs *regs)
+ 		}
+ 		break;
+ 
+-	      case 3: /* FEN fault */
+-		/* Irritating users can call PAL_clrfen to disable the
+-		   FPU for the process.  The kernel will then trap in
+-		   do_switch_stack and undo_switch_stack when we try
+-		   to save and restore the FP registers.
+-
+-		   Given that GCC by default generates code that uses the
+-		   FP registers, PAL_clrfen is not useful except for DoS
+-		   attacks.  So turn the bleeding FPU back on and be done
+-		   with it.  */
+-		current_thread_info()->pcb.flags |= 1;
+-		__reload_thread(&current_thread_info()->pcb);
+-		return;
+-
+ 	      case 5: /* illoc */
+ 	      default: /* unexpected instruction-fault type */
+ 		      ;
+diff --git a/arch/arm/boot/dts/exynos3250-rinato.dts b/arch/arm/boot/dts/exynos3250-rinato.dts
+index f9e3b13d3aac2..bbf01f76ce3b1 100644
+--- a/arch/arm/boot/dts/exynos3250-rinato.dts
++++ b/arch/arm/boot/dts/exynos3250-rinato.dts
+@@ -249,7 +249,7 @@
+ 	i80-if-timings {
+ 		cs-setup = <0>;
+ 		wr-setup = <0>;
+-		wr-act = <1>;
++		wr-active = <1>;
+ 		wr-hold = <0>;
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/exynos4-cpu-thermal.dtsi b/arch/arm/boot/dts/exynos4-cpu-thermal.dtsi
+index 021d9fc1b4923..27a1a89526655 100644
+--- a/arch/arm/boot/dts/exynos4-cpu-thermal.dtsi
++++ b/arch/arm/boot/dts/exynos4-cpu-thermal.dtsi
+@@ -10,7 +10,7 @@
+ / {
+ thermal-zones {
+ 	cpu_thermal: cpu-thermal {
+-		thermal-sensors = <&tmu 0>;
++		thermal-sensors = <&tmu>;
+ 		polling-delay-passive = <0>;
+ 		polling-delay = <0>;
+ 		trips {
+diff --git a/arch/arm/boot/dts/exynos4.dtsi b/arch/arm/boot/dts/exynos4.dtsi
+index a1e54449f33f0..41f0e64b1365e 100644
+--- a/arch/arm/boot/dts/exynos4.dtsi
++++ b/arch/arm/boot/dts/exynos4.dtsi
+@@ -605,7 +605,7 @@
+ 			status = "disabled";
+ 
+ 			hdmi_i2c_phy: hdmiphy@38 {
+-				compatible = "exynos4210-hdmiphy";
++				compatible = "samsung,exynos4210-hdmiphy";
+ 				reg = <0x38>;
+ 			};
+ 		};
+diff --git a/arch/arm/boot/dts/exynos4210.dtsi b/arch/arm/boot/dts/exynos4210.dtsi
+index fddc661ded28f..448e1b153a01b 100644
+--- a/arch/arm/boot/dts/exynos4210.dtsi
++++ b/arch/arm/boot/dts/exynos4210.dtsi
+@@ -382,7 +382,6 @@
+ &cpu_thermal {
+ 	polling-delay-passive = <0>;
+ 	polling-delay = <0>;
+-	thermal-sensors = <&tmu 0>;
+ };
+ 
+ &gic {
+diff --git a/arch/arm/boot/dts/exynos5250.dtsi b/arch/arm/boot/dts/exynos5250.dtsi
+index bd2d8835dd369..62051e600b325 100644
+--- a/arch/arm/boot/dts/exynos5250.dtsi
++++ b/arch/arm/boot/dts/exynos5250.dtsi
+@@ -1109,7 +1109,7 @@
+ &cpu_thermal {
+ 	polling-delay-passive = <0>;
+ 	polling-delay = <0>;
+-	thermal-sensors = <&tmu 0>;
++	thermal-sensors = <&tmu>;
+ 
+ 	cooling-maps {
+ 		map0 {
+diff --git a/arch/arm/boot/dts/exynos5410-odroidxu.dts b/arch/arm/boot/dts/exynos5410-odroidxu.dts
+index bd1d8499a108b..147c077e88551 100644
+--- a/arch/arm/boot/dts/exynos5410-odroidxu.dts
++++ b/arch/arm/boot/dts/exynos5410-odroidxu.dts
+@@ -116,7 +116,6 @@
+ };
+ 
+ &cpu0_thermal {
+-	thermal-sensors = <&tmu_cpu0 0>;
+ 	polling-delay-passive = <0>;
+ 	polling-delay = <0>;
+ 
+diff --git a/arch/arm/boot/dts/exynos5420.dtsi b/arch/arm/boot/dts/exynos5420.dtsi
+index 83580f076a587..34886535f8477 100644
+--- a/arch/arm/boot/dts/exynos5420.dtsi
++++ b/arch/arm/boot/dts/exynos5420.dtsi
+@@ -605,7 +605,7 @@
+ 		};
+ 
+ 		mipi_phy: mipi-video-phy {
+-			compatible = "samsung,s5pv210-mipi-video-phy";
++			compatible = "samsung,exynos5420-mipi-video-phy";
+ 			syscon = <&pmu_system_controller>;
+ 			#phy-cells = <1>;
+ 		};
+diff --git a/arch/arm/boot/dts/exynos5422-odroidhc1.dts b/arch/arm/boot/dts/exynos5422-odroidhc1.dts
+index 88f5c150a30a1..a1871d4a0f2a9 100644
+--- a/arch/arm/boot/dts/exynos5422-odroidhc1.dts
++++ b/arch/arm/boot/dts/exynos5422-odroidhc1.dts
+@@ -29,7 +29,7 @@
+ 
+ 	thermal-zones {
+ 		cpu0_thermal: cpu0-thermal {
+-			thermal-sensors = <&tmu_cpu0 0>;
++			thermal-sensors = <&tmu_cpu0>;
+ 			trips {
+ 				cpu0_alert0: cpu-alert-0 {
+ 					temperature = <70000>; /* millicelsius */
+@@ -84,7 +84,7 @@
+ 			};
+ 		};
+ 		cpu1_thermal: cpu1-thermal {
+-			thermal-sensors = <&tmu_cpu1 0>;
++			thermal-sensors = <&tmu_cpu1>;
+ 			trips {
+ 				cpu1_alert0: cpu-alert-0 {
+ 					temperature = <70000>;
+@@ -128,7 +128,7 @@
+ 			};
+ 		};
+ 		cpu2_thermal: cpu2-thermal {
+-			thermal-sensors = <&tmu_cpu2 0>;
++			thermal-sensors = <&tmu_cpu2>;
+ 			trips {
+ 				cpu2_alert0: cpu-alert-0 {
+ 					temperature = <70000>;
+@@ -172,7 +172,7 @@
+ 			};
+ 		};
+ 		cpu3_thermal: cpu3-thermal {
+-			thermal-sensors = <&tmu_cpu3 0>;
++			thermal-sensors = <&tmu_cpu3>;
+ 			trips {
+ 				cpu3_alert0: cpu-alert-0 {
+ 					temperature = <70000>;
+@@ -216,7 +216,7 @@
+ 			};
+ 		};
+ 		gpu_thermal: gpu-thermal {
+-			thermal-sensors = <&tmu_gpu 0>;
++			thermal-sensors = <&tmu_gpu>;
+ 			trips {
+ 				gpu_alert0: gpu-alert-0 {
+ 					temperature = <70000>;
+diff --git a/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi b/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
+index 5da2d81e3be27..099ed4384be89 100644
+--- a/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
++++ b/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
+@@ -50,7 +50,7 @@
+ 
+ 	thermal-zones {
+ 		cpu0_thermal: cpu0-thermal {
+-			thermal-sensors = <&tmu_cpu0 0>;
++			thermal-sensors = <&tmu_cpu0>;
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <0>;
+ 			trips {
+@@ -139,7 +139,7 @@
+ 			};
+ 		};
+ 		cpu1_thermal: cpu1-thermal {
+-			thermal-sensors = <&tmu_cpu1 0>;
++			thermal-sensors = <&tmu_cpu1>;
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <0>;
+ 			trips {
+@@ -212,7 +212,7 @@
+ 			};
+ 		};
+ 		cpu2_thermal: cpu2-thermal {
+-			thermal-sensors = <&tmu_cpu2 0>;
++			thermal-sensors = <&tmu_cpu2>;
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <0>;
+ 			trips {
+@@ -285,7 +285,7 @@
+ 			};
+ 		};
+ 		cpu3_thermal: cpu3-thermal {
+-			thermal-sensors = <&tmu_cpu3 0>;
++			thermal-sensors = <&tmu_cpu3>;
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <0>;
+ 			trips {
+@@ -358,7 +358,7 @@
+ 			};
+ 		};
+ 		gpu_thermal: gpu-thermal {
+-			thermal-sensors = <&tmu_gpu 0>;
++			thermal-sensors = <&tmu_gpu>;
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <0>;
+ 			trips {
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index 9e1b0af0aa43f..43b39ad9ddcee 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -494,7 +494,7 @@
+ 
+ 				mux: mux-controller {
+ 					compatible = "mmio-mux";
+-					#mux-control-cells = <0>;
++					#mux-control-cells = <1>;
+ 					mux-reg-masks = <0x14 0x00000010>;
+ 				};
+ 
+diff --git a/arch/arm/boot/dts/spear320-hmi.dts b/arch/arm/boot/dts/spear320-hmi.dts
+index 367ba48aac3e5..5c562fb4886f4 100644
+--- a/arch/arm/boot/dts/spear320-hmi.dts
++++ b/arch/arm/boot/dts/spear320-hmi.dts
+@@ -242,7 +242,7 @@
+ 					irq-trigger = <0x1>;
+ 
+ 					stmpegpio: stmpe-gpio {
+-						compatible = "stmpe,gpio";
++						compatible = "st,stmpe-gpio";
+ 						reg = <0>;
+ 						gpio-controller;
+ 						#gpio-cells = <2>;
+diff --git a/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts b/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts
+index 6b149271ef13f..8722fdf77ebc2 100644
+--- a/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts
++++ b/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts
+@@ -57,7 +57,7 @@
+ 		regulator-ramp-delay = <50>; /* 4ms */
+ 
+ 		enable-active-high;
+-		enable-gpio = <&r_pio 0 8 GPIO_ACTIVE_HIGH>; /* PL8 */
++		enable-gpios = <&r_pio 0 8 GPIO_ACTIVE_HIGH>; /* PL8 */
+ 		gpios = <&r_pio 0 6 GPIO_ACTIVE_HIGH>; /* PL6 */
+ 		gpios-states = <0x1>;
+ 		states = <1100000 0>, <1300000 1>;
+diff --git a/arch/arm/mach-imx/mmdc.c b/arch/arm/mach-imx/mmdc.c
+index af12668d0bf51..b9efe9da06e0b 100644
+--- a/arch/arm/mach-imx/mmdc.c
++++ b/arch/arm/mach-imx/mmdc.c
+@@ -99,6 +99,7 @@ struct mmdc_pmu {
+ 	cpumask_t cpu;
+ 	struct hrtimer hrtimer;
+ 	unsigned int active_events;
++	int id;
+ 	struct device *dev;
+ 	struct perf_event *mmdc_events[MMDC_NUM_COUNTERS];
+ 	struct hlist_node node;
+@@ -433,8 +434,6 @@ static enum hrtimer_restart mmdc_pmu_timer_handler(struct hrtimer *hrtimer)
+ static int mmdc_pmu_init(struct mmdc_pmu *pmu_mmdc,
+ 		void __iomem *mmdc_base, struct device *dev)
+ {
+-	int mmdc_num;
+-
+ 	*pmu_mmdc = (struct mmdc_pmu) {
+ 		.pmu = (struct pmu) {
+ 			.task_ctx_nr    = perf_invalid_context,
+@@ -452,15 +451,16 @@ static int mmdc_pmu_init(struct mmdc_pmu *pmu_mmdc,
+ 		.active_events = 0,
+ 	};
+ 
+-	mmdc_num = ida_simple_get(&mmdc_ida, 0, 0, GFP_KERNEL);
++	pmu_mmdc->id = ida_simple_get(&mmdc_ida, 0, 0, GFP_KERNEL);
+ 
+-	return mmdc_num;
++	return pmu_mmdc->id;
+ }
+ 
+ static int imx_mmdc_remove(struct platform_device *pdev)
+ {
+ 	struct mmdc_pmu *pmu_mmdc = platform_get_drvdata(pdev);
+ 
++	ida_simple_remove(&mmdc_ida, pmu_mmdc->id);
+ 	cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
+ 	perf_pmu_unregister(&pmu_mmdc->pmu);
+ 	iounmap(pmu_mmdc->mmdc_base);
+@@ -474,7 +474,6 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
+ {
+ 	struct mmdc_pmu *pmu_mmdc;
+ 	char *name;
+-	int mmdc_num;
+ 	int ret;
+ 	const struct of_device_id *of_id =
+ 		of_match_device(imx_mmdc_dt_ids, &pdev->dev);
+@@ -497,14 +496,14 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
+ 		cpuhp_mmdc_state = ret;
+ 	}
+ 
+-	mmdc_num = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev);
+-	pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
+-	if (mmdc_num == 0)
+-		name = "mmdc";
+-	else
+-		name = devm_kasprintf(&pdev->dev,
+-				GFP_KERNEL, "mmdc%d", mmdc_num);
++	ret = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev);
++	if (ret < 0)
++		goto  pmu_free;
+ 
++	name = devm_kasprintf(&pdev->dev,
++				GFP_KERNEL, "mmdc%d", ret);
++
++	pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
+ 	pmu_mmdc->devtype_data = (struct fsl_mmdc_devtype_data *)of_id->data;
+ 
+ 	hrtimer_init(&pmu_mmdc->hrtimer, CLOCK_MONOTONIC,
+@@ -525,6 +524,7 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
+ 
+ pmu_register_err:
+ 	pr_warn("MMDC Perf PMU failed (%d), disabled\n", ret);
++	ida_simple_remove(&mmdc_ida, pmu_mmdc->id);
+ 	cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
+ 	hrtimer_cancel(&pmu_mmdc->hrtimer);
+ pmu_free:
+diff --git a/arch/arm/mach-omap1/timer.c b/arch/arm/mach-omap1/timer.c
+index 97fc2096b9709..05f016d5e9f67 100644
+--- a/arch/arm/mach-omap1/timer.c
++++ b/arch/arm/mach-omap1/timer.c
+@@ -165,7 +165,7 @@ err_free_pdata:
+ 	kfree(pdata);
+ 
+ err_free_pdev:
+-	platform_device_unregister(pdev);
++	platform_device_put(pdev);
+ 
+ 	return ret;
+ }
+diff --git a/arch/arm/mach-omap2/timer.c b/arch/arm/mach-omap2/timer.c
+index 620ba69c8f114..5677c4a08f376 100644
+--- a/arch/arm/mach-omap2/timer.c
++++ b/arch/arm/mach-omap2/timer.c
+@@ -76,6 +76,7 @@ static void __init realtime_counter_init(void)
+ 	}
+ 
+ 	rate = clk_get_rate(sys_clk);
++	clk_put(sys_clk);
+ 
+ 	if (soc_is_dra7xx()) {
+ 		/*
+diff --git a/arch/arm/mach-s3c/s3c64xx.c b/arch/arm/mach-s3c/s3c64xx.c
+index 4dfb648142f2a..17f0065031490 100644
+--- a/arch/arm/mach-s3c/s3c64xx.c
++++ b/arch/arm/mach-s3c/s3c64xx.c
+@@ -173,7 +173,8 @@ static struct samsung_pwm_variant s3c64xx_pwm_variant = {
+ 	.tclk_mask	= (1 << 7) | (1 << 6) | (1 << 5),
+ };
+ 
+-void __init s3c64xx_set_timer_source(unsigned int event, unsigned int source)
++void __init s3c64xx_set_timer_source(enum s3c64xx_timer_mode event,
++				     enum s3c64xx_timer_mode source)
+ {
+ 	s3c64xx_pwm_variant.output_mask = BIT(SAMSUNG_PWM_NUM) - 1;
+ 	s3c64xx_pwm_variant.output_mask &= ~(BIT(event) | BIT(source));
+diff --git a/arch/arm/mach-zynq/slcr.c b/arch/arm/mach-zynq/slcr.c
+index 37707614885a5..9765b3f4c2fc5 100644
+--- a/arch/arm/mach-zynq/slcr.c
++++ b/arch/arm/mach-zynq/slcr.c
+@@ -213,6 +213,7 @@ int __init zynq_early_slcr_init(void)
+ 	zynq_slcr_regmap = syscon_regmap_lookup_by_compatible("xlnx,zynq-slcr");
+ 	if (IS_ERR(zynq_slcr_regmap)) {
+ 		pr_err("%s: failed to find zynq-slcr\n", __func__);
++		of_node_put(np);
+ 		return -ENODEV;
+ 	}
+ 
+diff --git a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+index 5c75fbf0d4709..c892b252e5b0c 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+@@ -151,7 +151,7 @@
+ 		scpi_clocks: clocks {
+ 			compatible = "arm,scpi-clocks";
+ 
+-			scpi_dvfs: clock-controller {
++			scpi_dvfs: clocks-0 {
+ 				compatible = "arm,scpi-dvfs-clocks";
+ 				#clock-cells = <1>;
+ 				clock-indices = <0>;
+@@ -160,7 +160,7 @@
+ 		};
+ 
+ 		scpi_sensors: sensors {
+-			compatible = "amlogic,meson-gxbb-scpi-sensors";
++			compatible = "amlogic,meson-gxbb-scpi-sensors", "arm,scpi-sensors";
+ 			#thermal-sensor-cells = <1>;
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index 2091db7c9b8af..c0defb36592d0 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -1727,7 +1727,7 @@
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+ 
+-					internal_ephy: ethernet_phy@8 {
++					internal_ephy: ethernet-phy@8 {
+ 						compatible = "ethernet-phy-id0180.3301",
+ 							     "ethernet-phy-ieee802.3-c22";
+ 						interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12a.dtsi
+index fb0ab27d1f642..6eaceb717d617 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12a.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12a.dtsi
+@@ -57,26 +57,6 @@
+ 		compatible = "operating-points-v2";
+ 		opp-shared;
+ 
+-		opp-100000000 {
+-			opp-hz = /bits/ 64 <100000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+-		opp-250000000 {
+-			opp-hz = /bits/ 64 <250000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+-		opp-500000000 {
+-			opp-hz = /bits/ 64 <500000000>;
+-			opp-microvolt = <731000>;
+-		};
+-
+-		opp-667000000 {
+-			opp-hz = /bits/ 64 <666666666>;
+-			opp-microvolt = <731000>;
+-		};
+-
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+ 			opp-microvolt = <731000>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
+index c2480bab8d337..27e964bfa947a 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
+@@ -17,7 +17,7 @@
+ 		io-channel-names = "buttons";
+ 		keyup-threshold-microvolt = <1800000>;
+ 
+-		update-button {
++		button-update {
+ 			label = "update";
+ 			linux,code = <KEY_VENDOR>;
+ 			press-threshold-microvolt = <1300000>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+index 88a7db5c55a07..4c7131526c4d3 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+@@ -226,7 +226,7 @@
+ 			reg = <0x14 0x10>;
+ 		};
+ 
+-		eth_mac: eth_mac@34 {
++		eth_mac: eth-mac@34 {
+ 			reg = <0x34 0x10>;
+ 		};
+ 
+@@ -243,7 +243,7 @@
+ 		scpi_clocks: clocks {
+ 			compatible = "arm,scpi-clocks";
+ 
+-			scpi_dvfs: scpi_clocks@0 {
++			scpi_dvfs: clocks-0 {
+ 				compatible = "arm,scpi-dvfs-clocks";
+ 				#clock-cells = <1>;
+ 				clock-indices = <0>;
+@@ -524,7 +524,7 @@
+ 			#size-cells = <2>;
+ 			ranges = <0x0 0x0 0x0 0xc8834000 0x0 0x2000>;
+ 
+-			hwrng: rng {
++			hwrng: rng@0 {
+ 				compatible = "amlogic,meson-rng";
+ 				reg = <0x0 0x0 0x0 0x4>;
+ 			};
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
+index e8394a8269ee1..802faf7e4e3cb 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
+@@ -16,7 +16,7 @@
+ 
+ 	leds {
+ 		compatible = "gpio-leds";
+-		status {
++		led {
+ 			gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_LOW>;
+ 			default-state = "off";
+ 			color = <LED_COLOR_ID_RED>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-phicomm-n1.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-phicomm-n1.dts
+index 9ef210f17b4aa..393d3cb33b9ee 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-phicomm-n1.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-phicomm-n1.dts
+@@ -18,7 +18,7 @@
+ 	leds {
+ 		compatible = "gpio-leds";
+ 
+-		status {
++		led {
+ 			label = "n1:white:status";
+ 			gpios = <&gpio_ao GPIOAO_9 GPIO_ACTIVE_HIGH>;
+ 			default-state = "on";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-sml5442tw.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-sml5442tw.dts
+index 0b95e9ecbef0a..ca3fd6b67b940 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-sml5442tw.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-sml5442tw.dts
+@@ -75,6 +75,5 @@
+ 		enable-gpios = <&gpio GPIOX_17 GPIO_ACTIVE_HIGH>;
+ 		max-speed = <2000000>;
+ 		clocks = <&wifi32k>;
+-		clock-names = "lpo";
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+index c3ac531c4f84a..3500229350522 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+@@ -759,7 +759,7 @@
+ 		};
+ 	};
+ 
+-	eth-phy-mux {
++	eth-phy-mux@55c {
+ 		compatible = "mdio-mux-mmioreg", "mdio-mux";
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+index 7c6d871538a63..884930a5849a2 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+@@ -428,6 +428,7 @@
+ 	pwm: pwm@11006000 {
+ 		compatible = "mediatek,mt7622-pwm";
+ 		reg = <0 0x11006000 0 0x1000>;
++		#pwm-cells = <2>;
+ 		interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_LOW>;
+ 		clocks = <&topckgen CLK_TOP_PWM_SEL>,
+ 			 <&pericfg CLK_PERI_PWM_PD>,
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 08a914d3a6435..31bc8bae8cff8 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -205,6 +205,15 @@
+ 		method          = "smc";
+ 	};
+ 
++	clk13m: fixed-factor-clock-13m {
++		compatible = "fixed-factor-clock";
++		#clock-cells = <0>;
++		clocks = <&clk26m>;
++		clock-div = <2>;
++		clock-mult = <1>;
++		clock-output-names = "clk13m";
++	};
++
+ 	clk26m: oscillator {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+@@ -355,8 +364,7 @@
+ 				     "mediatek,mt6765-timer";
+ 			reg = <0 0x10017000 0 0x1000>;
+ 			interrupts = <GIC_SPI 200 IRQ_TYPE_LEVEL_HIGH>;
+-			clocks = <&topckgen CLK_TOP_CLK13M>;
+-			clock-names = "clk13m";
++			clocks = <&clk13m>;
+ 		};
+ 
+ 		gce: mailbox@10238000 {
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index 99e2488b92dc3..e191a7bc532be 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -108,7 +108,7 @@
+ 				#phy-cells = <0>;
+ 				clocks = <&gcc GCC_USB1_PIPE_CLK>;
+ 				clock-names = "pipe0";
+-				clock-output-names = "gcc_usb1_pipe_clk_src";
++				clock-output-names = "usb3phy_1_cc_pipe_clk";
+ 			};
+ 		};
+ 
+@@ -151,7 +151,7 @@
+ 				#phy-cells = <0>;
+ 				clocks = <&gcc GCC_USB0_PIPE_CLK>;
+ 				clock-names = "pipe0";
+-				clock-output-names = "gcc_usb0_pipe_clk_src";
++				clock-output-names = "usb3phy_0_cc_pipe_clk";
+ 			};
+ 		};
+ 
+@@ -167,34 +167,61 @@
+ 			resets = <&gcc GCC_QUSB2_0_PHY_BCR>;
+ 		};
+ 
+-		pcie_phy0: phy@86000 {
+-			compatible = "qcom,ipq8074-qmp-pcie-phy";
+-			reg = <0x00086000 0x1000>;
+-			#phy-cells = <0>;
+-			clocks = <&gcc GCC_PCIE0_PIPE_CLK>;
+-			clock-names = "pipe_clk";
+-			clock-output-names = "pcie20_phy0_pipe_clk";
++		pcie_qmp0: phy@84000 {
++			compatible = "qcom,ipq8074-qmp-gen3-pcie-phy";
++			reg = <0x00084000 0x1bc>;
++			#address-cells = <1>;
++			#size-cells = <1>;
++			ranges;
+ 
++			clocks = <&gcc GCC_PCIE0_AUX_CLK>,
++				<&gcc GCC_PCIE0_AHB_CLK>;
++			clock-names = "aux", "cfg_ahb";
+ 			resets = <&gcc GCC_PCIE0_PHY_BCR>,
+ 				<&gcc GCC_PCIE0PHY_PHY_BCR>;
+ 			reset-names = "phy",
+ 				      "common";
+ 			status = "disabled";
++
++			pcie_phy0: phy@84200 {
++				reg = <0x84200 0x16c>,
++				      <0x84400 0x200>,
++				      <0x84800 0x1f0>,
++				      <0x84c00 0xf4>;
++				#phy-cells = <0>;
++				#clock-cells = <0>;
++				clocks = <&gcc GCC_PCIE0_PIPE_CLK>;
++				clock-names = "pipe0";
++				clock-output-names = "pcie20_phy0_pipe_clk";
++			};
+ 		};
+ 
+-		pcie_phy1: phy@8e000 {
++		pcie_qmp1: phy@8e000 {
+ 			compatible = "qcom,ipq8074-qmp-pcie-phy";
+-			reg = <0x0008e000 0x1000>;
+-			#phy-cells = <0>;
+-			clocks = <&gcc GCC_PCIE1_PIPE_CLK>;
+-			clock-names = "pipe_clk";
+-			clock-output-names = "pcie20_phy1_pipe_clk";
++			reg = <0x0008e000 0x1c4>;
++			#address-cells = <1>;
++			#size-cells = <1>;
++			ranges;
+ 
++			clocks = <&gcc GCC_PCIE1_AUX_CLK>,
++				<&gcc GCC_PCIE1_AHB_CLK>;
++			clock-names = "aux", "cfg_ahb";
+ 			resets = <&gcc GCC_PCIE1_PHY_BCR>,
+ 				<&gcc GCC_PCIE1PHY_PHY_BCR>;
+ 			reset-names = "phy",
+ 				      "common";
+ 			status = "disabled";
++
++			pcie_phy1: phy@8e200 {
++				reg = <0x8e200 0x130>,
++				      <0x8e400 0x200>,
++				      <0x8e800 0x1f8>;
++				#phy-cells = <0>;
++				#clock-cells = <0>;
++				clocks = <&gcc GCC_PCIE1_PIPE_CLK>;
++				clock-names = "pipe0";
++				clock-output-names = "pcie20_phy1_pipe_clk";
++			};
+ 		};
+ 
+ 		tlmm: pinctrl@1000000 {
+@@ -583,9 +610,9 @@
+ 			phy-names = "pciephy";
+ 
+ 			ranges = <0x81000000 0 0x10200000 0x10200000
+-				  0 0x100000   /* downstream I/O */
+-				  0x82000000 0 0x10300000 0x10300000
+-				  0 0xd00000>; /* non-prefetchable memory */
++				  0 0x10000>,   /* downstream I/O */
++				 <0x82000000 0 0x10220000 0x10220000
++				  0 0xfde0000>; /* non-prefetchable memory */
+ 
+ 			interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+@@ -628,16 +655,18 @@
+ 		};
+ 
+ 		pcie0: pci@20000000 {
+-			compatible = "qcom,pcie-ipq8074";
++			compatible = "qcom,pcie-ipq8074-gen3";
+ 			reg = <0x20000000 0xf1d>,
+ 			      <0x20000f20 0xa8>,
+-			      <0x00080000 0x2000>,
++			      <0x20001000 0x1000>,
++			      <0x00080000 0x4000>,
+ 			      <0x20100000 0x1000>;
+-			reg-names = "dbi", "elbi", "parf", "config";
++			reg-names = "dbi", "elbi", "atu", "parf", "config";
+ 			device_type = "pci";
+ 			linux,pci-domain = <0>;
+ 			bus-range = <0x00 0xff>;
+ 			num-lanes = <1>;
++			max-link-speed = <3>;
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+@@ -645,9 +674,9 @@
+ 			phy-names = "pciephy";
+ 
+ 			ranges = <0x81000000 0 0x20200000 0x20200000
+-				  0 0x100000   /* downstream I/O */
+-				  0x82000000 0 0x20300000 0x20300000
+-				  0 0xd00000>; /* non-prefetchable memory */
++				  0 0x10000>, /* downstream I/O */
++				 <0x82000000 0 0x20220000 0x20220000
++				  0 0xfde0000>; /* non-prefetchable memory */
+ 
+ 			interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+@@ -665,28 +694,30 @@
+ 			clocks = <&gcc GCC_SYS_NOC_PCIE0_AXI_CLK>,
+ 				 <&gcc GCC_PCIE0_AXI_M_CLK>,
+ 				 <&gcc GCC_PCIE0_AXI_S_CLK>,
+-				 <&gcc GCC_PCIE0_AHB_CLK>,
+-				 <&gcc GCC_PCIE0_AUX_CLK>;
+-
++				 <&gcc GCC_PCIE0_AXI_S_BRIDGE_CLK>,
++				 <&gcc GCC_PCIE0_RCHNG_CLK>;
+ 			clock-names = "iface",
+ 				      "axi_m",
+ 				      "axi_s",
+-				      "ahb",
+-				      "aux";
++				      "axi_bridge",
++				      "rchng";
++
+ 			resets = <&gcc GCC_PCIE0_PIPE_ARES>,
+ 				 <&gcc GCC_PCIE0_SLEEP_ARES>,
+ 				 <&gcc GCC_PCIE0_CORE_STICKY_ARES>,
+ 				 <&gcc GCC_PCIE0_AXI_MASTER_ARES>,
+ 				 <&gcc GCC_PCIE0_AXI_SLAVE_ARES>,
+ 				 <&gcc GCC_PCIE0_AHB_ARES>,
+-				 <&gcc GCC_PCIE0_AXI_MASTER_STICKY_ARES>;
++				 <&gcc GCC_PCIE0_AXI_MASTER_STICKY_ARES>,
++				 <&gcc GCC_PCIE0_AXI_SLAVE_STICKY_ARES>;
+ 			reset-names = "pipe",
+ 				      "sleep",
+ 				      "sticky",
+ 				      "axi_m",
+ 				      "axi_s",
+ 				      "ahb",
+-				      "axi_m_sticky";
++				      "axi_m_sticky",
++				      "axi_s_sticky";
+ 			status = "disabled";
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/qcs404.dtsi b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+index 7bddc5ebc6aa2..d41f068dde167 100644
+--- a/arch/arm64/boot/dts/qcom/qcs404.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+@@ -775,7 +775,7 @@
+ 
+ 			clocks = <&gcc GCC_PCIE_0_PIPE_CLK>;
+ 			resets = <&gcc GCC_PCIEPHY_0_PHY_BCR>,
+-				 <&gcc 21>;
++				 <&gcc GCC_PCIE_0_PIPE_ARES>;
+ 			reset-names = "phy", "pipe";
+ 
+ 			clock-output-names = "pcie_0_pipe_clk";
+@@ -1305,12 +1305,12 @@
+ 				 <&gcc GCC_PCIE_0_SLV_AXI_CLK>;
+ 			clock-names = "iface", "aux", "master_bus", "slave_bus";
+ 
+-			resets = <&gcc 18>,
+-				 <&gcc 17>,
+-				 <&gcc 15>,
+-				 <&gcc 19>,
++			resets = <&gcc GCC_PCIE_0_AXI_MASTER_ARES>,
++				 <&gcc GCC_PCIE_0_AXI_SLAVE_ARES>,
++				 <&gcc GCC_PCIE_0_AXI_MASTER_STICKY_ARES>,
++				 <&gcc GCC_PCIE_0_CORE_STICKY_ARES>,
+ 				 <&gcc GCC_PCIE_0_BCR>,
+-				 <&gcc 16>;
++				 <&gcc GCC_PCIE_0_AHB_ARES>;
+ 			reset-names = "axi_m",
+ 				      "axi_s",
+ 				      "axi_m_sticky",
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index c71f3afc1cc9f..eb07a882d43b3 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -3066,8 +3066,8 @@
+ 			interrupts-extended = <&pdc 1 IRQ_TYPE_LEVEL_HIGH>;
+ 			qcom,ee = <0>;
+ 			qcom,channel = <0>;
+-			#address-cells = <1>;
+-			#size-cells = <1>;
++			#address-cells = <2>;
++			#size-cells = <0>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <4>;
+ 			cell-index = <0>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+index c6691bdc81002..1e889ca932e41 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+@@ -896,7 +896,7 @@
+ 	};
+ 
+ 	wcd_intr_default: wcd_intr_default {
+-		pins = <54>;
++		pins = "gpio54";
+ 		function = "gpio";
+ 
+ 		input-enable;
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+index 53e1d43cbecf8..663adf79471bc 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+@@ -399,20 +399,6 @@
+ 		};
+ 	};
+ 
+-	/* 0 - lcd_reset */
+-	/* 1 - lcd_pwr */
+-	/* 2 - lcd_select */
+-	/* 3 - backlight-enable */
+-	/* 4 - Touch_shdwn */
+-	/* 5 - LCD_H_pol */
+-	/* 6 - lcd_V_pol */
+-	gpio_exp1: gpio@20 {
+-		compatible = "onnn,pca9654";
+-		reg = <0x20>;
+-		gpio-controller;
+-		#gpio-cells = <2>;
+-	};
+-
+ 	touchscreen@26 {
+ 		compatible = "ilitek,ili2117";
+ 		reg = <0x26>;
+@@ -445,6 +431,16 @@
+ 			};
+ 		};
+ 	};
++
++	gpio_exp1: gpio@70 {
++		compatible = "nxp,pca9538";
++		reg = <0x70>;
++		gpio-controller;
++		#gpio-cells = <2>;
++		gpio-line-names = "lcd_reset", "lcd_pwr", "lcd_select",
++				  "backlight-enable", "Touch_shdwn",
++				  "LCD_H_pol", "lcd_V_pol";
++	};
+ };
+ 
+ &lvds0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+index e8a4143e1c241..909ab6661aef5 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+@@ -16,7 +16,7 @@
+ 	};
+ };
+ 
+-&wkup_pmx0 {
++&wkup_pmx2 {
+ 	mcu_cpsw_pins_default: mcu-cpsw-pins-default {
+ 		pinctrl-single,pins = <
+ 			J721E_WKUP_IOPAD(0x0068, PIN_OUTPUT, 0) /* MCU_RGMII1_TX_CTL */
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+index eb2a78a53512c..7f252cc6eb379 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+@@ -56,7 +56,34 @@
+ 	wkup_pmx0: pinctrl@4301c000 {
+ 		compatible = "pinctrl-single";
+ 		/* Proxy 0 addressing */
+-		reg = <0x00 0x4301c000 0x00 0x178>;
++		reg = <0x00 0x4301c000 0x00 0x34>;
++		#pinctrl-cells = <1>;
++		pinctrl-single,register-width = <32>;
++		pinctrl-single,function-mask = <0xffffffff>;
++	};
++
++	wkup_pmx1: pinctrl@0x4301c038 {
++		compatible = "pinctrl-single";
++		/* Proxy 0 addressing */
++		reg = <0x00 0x4301c038 0x00 0x8>;
++		#pinctrl-cells = <1>;
++		pinctrl-single,register-width = <32>;
++		pinctrl-single,function-mask = <0xffffffff>;
++	};
++
++	wkup_pmx2: pinctrl@0x4301c068 {
++		compatible = "pinctrl-single";
++		/* Proxy 0 addressing */
++		reg = <0x00 0x4301c068 0x00 0xec>;
++		#pinctrl-cells = <1>;
++		pinctrl-single,register-width = <32>;
++		pinctrl-single,function-mask = <0xffffffff>;
++	};
++
++	wkup_pmx3: pinctrl@0x4301c174 {
++		compatible = "pinctrl-single";
++		/* Proxy 0 addressing */
++		reg = <0x00 0x4301c174 0x00 0x20>;
+ 		#pinctrl-cells = <1>;
+ 		pinctrl-single,register-width = <32>;
+ 		pinctrl-single,function-mask = <0xffffffff>;
+diff --git a/arch/m68k/68000/entry.S b/arch/m68k/68000/entry.S
+index 259b3661b6141..94abf3d8afc52 100644
+--- a/arch/m68k/68000/entry.S
++++ b/arch/m68k/68000/entry.S
+@@ -47,6 +47,8 @@ do_trace:
+ 	jbsr	syscall_trace_enter
+ 	RESTORE_SWITCH_STACK
+ 	addql	#4,%sp
++	addql	#1,%d0
++	jeq	ret_from_exception
+ 	movel	%sp@(PT_OFF_ORIG_D0),%d1
+ 	movel	#-ENOSYS,%d0
+ 	cmpl	#NR_syscalls,%d1
+diff --git a/arch/m68k/Kconfig.devices b/arch/m68k/Kconfig.devices
+index 6a87b4a5fcac2..e6e3efac18407 100644
+--- a/arch/m68k/Kconfig.devices
++++ b/arch/m68k/Kconfig.devices
+@@ -19,6 +19,7 @@ config HEARTBEAT
+ # We have a dedicated heartbeat LED. :-)
+ config PROC_HARDWARE
+ 	bool "/proc/hardware support"
++	depends on PROC_FS
+ 	help
+ 	  Say Y here to support the /proc/hardware file, which gives you
+ 	  access to information about the machine you're running on,
+diff --git a/arch/m68k/coldfire/entry.S b/arch/m68k/coldfire/entry.S
+index d43a02795a4a4..f1d41a9328a27 100644
+--- a/arch/m68k/coldfire/entry.S
++++ b/arch/m68k/coldfire/entry.S
+@@ -92,6 +92,8 @@ ENTRY(system_call)
+ 	jbsr	syscall_trace_enter
+ 	RESTORE_SWITCH_STACK
+ 	addql	#4,%sp
++	addql	#1,%d0
++	jeq	ret_from_exception
+ 	movel	%d3,%a0
+ 	jbsr	%a0@
+ 	movel	%d0,%sp@(PT_OFF_D0)		/* save the return value */
+diff --git a/arch/m68k/kernel/entry.S b/arch/m68k/kernel/entry.S
+index 9dd76fbb7c6b2..546bab6bfc273 100644
+--- a/arch/m68k/kernel/entry.S
++++ b/arch/m68k/kernel/entry.S
+@@ -167,9 +167,12 @@ do_trace_entry:
+ 	jbsr	syscall_trace
+ 	RESTORE_SWITCH_STACK
+ 	addql	#4,%sp
++	addql	#1,%d0			| optimization for cmpil #-1,%d0
++	jeq	ret_from_syscall
+ 	movel	%sp@(PT_OFF_ORIG_D0),%d0
+ 	cmpl	#NR_syscalls,%d0
+ 	jcs	syscall
++	jra	ret_from_syscall
+ badsys:
+ 	movel	#-ENOSYS,%sp@(PT_OFF_D0)
+ 	jra	ret_from_syscall
+diff --git a/arch/mips/include/asm/syscall.h b/arch/mips/include/asm/syscall.h
+index 25fa651c937d5..ebdf4d910af2f 100644
+--- a/arch/mips/include/asm/syscall.h
++++ b/arch/mips/include/asm/syscall.h
+@@ -38,7 +38,7 @@ static inline bool mips_syscall_is_indirect(struct task_struct *task,
+ static inline long syscall_get_nr(struct task_struct *task,
+ 				  struct pt_regs *regs)
+ {
+-	return current_thread_info()->syscall;
++	return task_thread_info(task)->syscall;
+ }
+ 
+ static inline void mips_syscall_update_nr(struct task_struct *task,
+diff --git a/arch/mips/include/asm/vpe.h b/arch/mips/include/asm/vpe.h
+index 80e70dbd1f641..012731546cf60 100644
+--- a/arch/mips/include/asm/vpe.h
++++ b/arch/mips/include/asm/vpe.h
+@@ -104,7 +104,6 @@ struct vpe_control {
+ 	struct list_head tc_list;       /* Thread contexts */
+ };
+ 
+-extern unsigned long physical_memsize;
+ extern struct vpe_control vpecontrol;
+ extern const struct file_operations vpe_fops;
+ 
+diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
+index dbb3f1fc71ab6..f659adb681bc3 100644
+--- a/arch/mips/kernel/smp-cps.c
++++ b/arch/mips/kernel/smp-cps.c
+@@ -423,9 +423,11 @@ static void cps_shutdown_this_cpu(enum cpu_death death)
+ 			wmb();
+ 		}
+ 	} else {
+-		pr_debug("Gating power to core %d\n", core);
+-		/* Power down the core */
+-		cps_pm_enter_state(CPS_PM_POWER_GATED);
++		if (IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
++			pr_debug("Gating power to core %d\n", core);
++			/* Power down the core */
++			cps_pm_enter_state(CPS_PM_POWER_GATED);
++		}
+ 	}
+ }
+ 
+diff --git a/arch/mips/kernel/vpe-mt.c b/arch/mips/kernel/vpe-mt.c
+index 9fd7cd48ea1d2..496ed8f362f62 100644
+--- a/arch/mips/kernel/vpe-mt.c
++++ b/arch/mips/kernel/vpe-mt.c
+@@ -92,12 +92,11 @@ int vpe_run(struct vpe *v)
+ 	write_tc_c0_tchalt(read_tc_c0_tchalt() & ~TCHALT_H);
+ 
+ 	/*
+-	 * The sde-kit passes 'memsize' to __start in $a3, so set something
+-	 * here...  Or set $a3 to zero and define DFLT_STACK_SIZE and
+-	 * DFLT_HEAP_SIZE when you compile your program
++	 * We don't pass the memsize here, so VPE programs need to be
++	 * compiled with DFLT_STACK_SIZE and DFLT_HEAP_SIZE defined.
+ 	 */
++	mttgpr(7, 0);
+ 	mttgpr(6, v->ntcs);
+-	mttgpr(7, physical_memsize);
+ 
+ 	/* set up VPE1 */
+ 	/*
+diff --git a/arch/mips/lantiq/prom.c b/arch/mips/lantiq/prom.c
+index 3f568f5aae2d1..2729a4b63e187 100644
+--- a/arch/mips/lantiq/prom.c
++++ b/arch/mips/lantiq/prom.c
+@@ -22,12 +22,6 @@
+ DEFINE_SPINLOCK(ebu_lock);
+ EXPORT_SYMBOL_GPL(ebu_lock);
+ 
+-/*
+- * This is needed by the VPE loader code, just set it to 0 and assume
+- * that the firmware hardcodes this value to something useful.
+- */
+-unsigned long physical_memsize = 0L;
+-
+ /*
+  * this struct is filled by the soc specific detection code and holds
+  * information about the specific soc type, revision and name
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index 6122541412961..a3f66ade09b32 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -92,7 +92,7 @@ aflags-$(CONFIG_CPU_LITTLE_ENDIAN)	+= -mlittle-endian
+ 
+ ifeq ($(HAS_BIARCH),y)
+ KBUILD_CFLAGS	+= -m$(BITS)
+-KBUILD_AFLAGS	+= -m$(BITS) -Wl,-a$(BITS)
++KBUILD_AFLAGS	+= -m$(BITS)
+ KBUILD_LDFLAGS	+= -m elf$(BITS)$(LDEMULATION)
+ endif
+ 
+diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
+index 3eff6a4888e79..665d847ef9b5a 100644
+--- a/arch/powerpc/kernel/eeh_driver.c
++++ b/arch/powerpc/kernel/eeh_driver.c
+@@ -1054,45 +1054,46 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ 		}
+ 
+ 		pr_info("EEH: Recovery successful.\n");
+-	} else  {
+-		/*
+-		 * About 90% of all real-life EEH failures in the field
+-		 * are due to poorly seated PCI cards. Only 10% or so are
+-		 * due to actual, failed cards.
+-		 */
+-		pr_err("EEH: Unable to recover from failure from PHB#%x-PE#%x.\n"
+-		       "Please try reseating or replacing it\n",
+-			pe->phb->global_number, pe->addr);
++		goto out;
++	}
+ 
+-		eeh_slot_error_detail(pe, EEH_LOG_PERM);
++	/*
++	 * About 90% of all real-life EEH failures in the field
++	 * are due to poorly seated PCI cards. Only 10% or so are
++	 * due to actual, failed cards.
++	 */
++	pr_err("EEH: Unable to recover from failure from PHB#%x-PE#%x.\n"
++		"Please try reseating or replacing it\n",
++		pe->phb->global_number, pe->addr);
+ 
+-		/* Notify all devices that they're about to go down. */
+-		eeh_set_channel_state(pe, pci_channel_io_perm_failure);
+-		eeh_set_irq_state(pe, false);
+-		eeh_pe_report("error_detected(permanent failure)", pe,
+-			      eeh_report_failure, NULL);
++	eeh_slot_error_detail(pe, EEH_LOG_PERM);
+ 
+-		/* Mark the PE to be removed permanently */
+-		eeh_pe_state_mark(pe, EEH_PE_REMOVED);
++	/* Notify all devices that they're about to go down. */
++	eeh_set_irq_state(pe, false);
++	eeh_pe_report("error_detected(permanent failure)", pe,
++		      eeh_report_failure, NULL);
++	eeh_set_channel_state(pe, pci_channel_io_perm_failure);
+ 
+-		/*
+-		 * Shut down the device drivers for good. We mark
+-		 * all removed devices correctly to avoid access
+-		 * the their PCI config any more.
+-		 */
+-		if (pe->type & EEH_PE_VF) {
+-			eeh_pe_dev_traverse(pe, eeh_rmv_device, NULL);
+-			eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
+-		} else {
+-			eeh_pe_state_clear(pe, EEH_PE_PRI_BUS, true);
+-			eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
++	/* Mark the PE to be removed permanently */
++	eeh_pe_state_mark(pe, EEH_PE_REMOVED);
+ 
+-			pci_lock_rescan_remove();
+-			pci_hp_remove_devices(bus);
+-			pci_unlock_rescan_remove();
+-			/* The passed PE should no longer be used */
+-			return;
+-		}
++	/*
++	 * Shut down the device drivers for good. We mark
++	 * all removed devices correctly to avoid access
++	 * the their PCI config any more.
++	 */
++	if (pe->type & EEH_PE_VF) {
++		eeh_pe_dev_traverse(pe, eeh_rmv_device, NULL);
++		eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
++	} else {
++		eeh_pe_state_clear(pe, EEH_PE_PRI_BUS, true);
++		eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
++
++		pci_lock_rescan_remove();
++		pci_hp_remove_devices(bus);
++		pci_unlock_rescan_remove();
++		/* The passed PE should no longer be used */
++		return;
+ 	}
+ 
+ out:
+@@ -1188,10 +1189,10 @@ void eeh_handle_special_event(void)
+ 
+ 			/* Notify all devices to be down */
+ 			eeh_pe_state_clear(pe, EEH_PE_PRI_BUS, true);
+-			eeh_set_channel_state(pe, pci_channel_io_perm_failure);
+ 			eeh_pe_report(
+ 				"error_detected(permanent failure)", pe,
+ 				eeh_report_failure, NULL);
++			eeh_set_channel_state(pe, pci_channel_io_perm_failure);
+ 
+ 			pci_lock_rescan_remove();
+ 			list_for_each_entry(hose, &hose_list, list_node) {
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index 014229c40435a..c2e407a112a28 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -52,10 +52,10 @@ struct rtas_t rtas = {
+ EXPORT_SYMBOL(rtas);
+ 
+ DEFINE_SPINLOCK(rtas_data_buf_lock);
+-EXPORT_SYMBOL(rtas_data_buf_lock);
++EXPORT_SYMBOL_GPL(rtas_data_buf_lock);
+ 
+-char rtas_data_buf[RTAS_DATA_BUF_SIZE] __cacheline_aligned;
+-EXPORT_SYMBOL(rtas_data_buf);
++char rtas_data_buf[RTAS_DATA_BUF_SIZE] __aligned(SZ_4K);
++EXPORT_SYMBOL_GPL(rtas_data_buf);
+ 
+ unsigned long rtas_rmo_buf;
+ 
+@@ -64,7 +64,7 @@ unsigned long rtas_rmo_buf;
+  * This is done like this so rtas_flash can be a module.
+  */
+ void (*rtas_flash_term_hook)(int);
+-EXPORT_SYMBOL(rtas_flash_term_hook);
++EXPORT_SYMBOL_GPL(rtas_flash_term_hook);
+ 
+ /* RTAS use home made raw locking instead of spin_lock_irqsave
+  * because those can be called from within really nasty contexts
+@@ -312,7 +312,7 @@ void rtas_progress(char *s, unsigned short hex)
+  
+ 	spin_unlock(&progress_lock);
+ }
+-EXPORT_SYMBOL(rtas_progress);		/* needed by rtas_flash module */
++EXPORT_SYMBOL_GPL(rtas_progress);		/* needed by rtas_flash module */
+ 
+ int rtas_token(const char *service)
+ {
+@@ -322,7 +322,7 @@ int rtas_token(const char *service)
+ 	tokp = of_get_property(rtas.dev, service, NULL);
+ 	return tokp ? be32_to_cpu(*tokp) : RTAS_UNKNOWN_SERVICE;
+ }
+-EXPORT_SYMBOL(rtas_token);
++EXPORT_SYMBOL_GPL(rtas_token);
+ 
+ int rtas_service_present(const char *service)
+ {
+@@ -482,7 +482,7 @@ int rtas_call(int token, int nargs, int nret, int *outputs, ...)
+ 	}
+ 	return ret;
+ }
+-EXPORT_SYMBOL(rtas_call);
++EXPORT_SYMBOL_GPL(rtas_call);
+ 
+ /* For RTAS_BUSY (-2), delay for 1 millisecond.  For an extended busy status
+  * code of 990n, perform the hinted delay of 10^n (last digit) milliseconds.
+@@ -517,7 +517,7 @@ unsigned int rtas_busy_delay(int status)
+ 
+ 	return ms;
+ }
+-EXPORT_SYMBOL(rtas_busy_delay);
++EXPORT_SYMBOL_GPL(rtas_busy_delay);
+ 
+ static int rtas_error_rc(int rtas_rc)
+ {
+@@ -563,7 +563,7 @@ int rtas_get_power_level(int powerdomain, int *level)
+ 		return rtas_error_rc(rc);
+ 	return rc;
+ }
+-EXPORT_SYMBOL(rtas_get_power_level);
++EXPORT_SYMBOL_GPL(rtas_get_power_level);
+ 
+ int rtas_set_power_level(int powerdomain, int level, int *setlevel)
+ {
+@@ -581,7 +581,7 @@ int rtas_set_power_level(int powerdomain, int level, int *setlevel)
+ 		return rtas_error_rc(rc);
+ 	return rc;
+ }
+-EXPORT_SYMBOL(rtas_set_power_level);
++EXPORT_SYMBOL_GPL(rtas_set_power_level);
+ 
+ int rtas_get_sensor(int sensor, int index, int *state)
+ {
+@@ -599,7 +599,7 @@ int rtas_get_sensor(int sensor, int index, int *state)
+ 		return rtas_error_rc(rc);
+ 	return rc;
+ }
+-EXPORT_SYMBOL(rtas_get_sensor);
++EXPORT_SYMBOL_GPL(rtas_get_sensor);
+ 
+ int rtas_get_sensor_fast(int sensor, int index, int *state)
+ {
+@@ -660,7 +660,7 @@ int rtas_set_indicator(int indicator, int index, int new_value)
+ 		return rtas_error_rc(rc);
+ 	return rc;
+ }
+-EXPORT_SYMBOL(rtas_set_indicator);
++EXPORT_SYMBOL_GPL(rtas_set_indicator);
+ 
+ /*
+  * Ignoring RTAS extended delay
+diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
+index 4c2f75916a7ea..abbfd5cc40c93 100644
+--- a/arch/powerpc/mm/book3s64/radix_tlb.c
++++ b/arch/powerpc/mm/book3s64/radix_tlb.c
+@@ -941,15 +941,12 @@ is_local:
+ 			}
+ 		}
+ 	} else {
+-		bool hflush = false;
++		bool hflush;
+ 		unsigned long hstart, hend;
+ 
+-		if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+-			hstart = (start + PMD_SIZE - 1) & PMD_MASK;
+-			hend = end & PMD_MASK;
+-			if (hstart < hend)
+-				hflush = true;
+-		}
++		hstart = (start + PMD_SIZE - 1) & PMD_MASK;
++		hend = end & PMD_MASK;
++		hflush = IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hstart < hend;
+ 
+ 		if (local) {
+ 			asm volatile("ptesync": : :"memory");
+diff --git a/arch/powerpc/perf/hv-24x7.c b/arch/powerpc/perf/hv-24x7.c
+index 6e7e820508df7..1cd2351d241e8 100644
+--- a/arch/powerpc/perf/hv-24x7.c
++++ b/arch/powerpc/perf/hv-24x7.c
+@@ -79,9 +79,8 @@ static u32 phys_coresperchip; /* Physical cores per chip */
+  */
+ void read_24x7_sys_info(void)
+ {
+-	int call_status, len, ntypes;
+-
+-	spin_lock(&rtas_data_buf_lock);
++	const s32 token = rtas_token("ibm,get-system-parameter");
++	int call_status;
+ 
+ 	/*
+ 	 * Making system parameter: chips and sockets and cores per chip
+@@ -91,32 +90,27 @@ void read_24x7_sys_info(void)
+ 	phys_chipspersocket = 1;
+ 	phys_coresperchip = 1;
+ 
+-	call_status = rtas_call(rtas_token("ibm,get-system-parameter"), 3, 1,
+-				NULL,
+-				PROCESSOR_MODULE_INFO,
+-				__pa(rtas_data_buf),
+-				RTAS_DATA_BUF_SIZE);
++	do {
++		spin_lock(&rtas_data_buf_lock);
++		call_status = rtas_call(token, 3, 1, NULL, PROCESSOR_MODULE_INFO,
++					__pa(rtas_data_buf), RTAS_DATA_BUF_SIZE);
++		if (call_status == 0) {
++			int ntypes = be16_to_cpup((__be16 *)&rtas_data_buf[2]);
++			int len = be16_to_cpup((__be16 *)&rtas_data_buf[0]);
++
++			if (len >= 8 && ntypes != 0) {
++				phys_sockets = be16_to_cpup((__be16 *)&rtas_data_buf[4]);
++				phys_chipspersocket = be16_to_cpup((__be16 *)&rtas_data_buf[6]);
++				phys_coresperchip = be16_to_cpup((__be16 *)&rtas_data_buf[8]);
++			}
++		}
++		spin_unlock(&rtas_data_buf_lock);
++	} while (rtas_busy_delay(call_status));
+ 
+ 	if (call_status != 0) {
+ 		pr_err("Error calling get-system-parameter %d\n",
+ 		       call_status);
+-	} else {
+-		len = be16_to_cpup((__be16 *)&rtas_data_buf[0]);
+-		if (len < 8)
+-			goto out;
+-
+-		ntypes = be16_to_cpup((__be16 *)&rtas_data_buf[2]);
+-
+-		if (!ntypes)
+-			goto out;
+-
+-		phys_sockets = be16_to_cpup((__be16 *)&rtas_data_buf[4]);
+-		phys_chipspersocket = be16_to_cpup((__be16 *)&rtas_data_buf[6]);
+-		phys_coresperchip = be16_to_cpup((__be16 *)&rtas_data_buf[8]);
+ 	}
+-
+-out:
+-	spin_unlock(&rtas_data_buf_lock);
+ }
+ 
+ /* Domains for which more than one result element are returned for each event. */
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index 2b4ceb5e6ce4c..a1e6dd47743f1 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -2260,7 +2260,8 @@ static void pnv_ioda_setup_pe_res(struct pnv_ioda_pe *pe,
+ 	int index;
+ 	int64_t rc;
+ 
+-	if (!res || !res->flags || res->start > res->end)
++	if (!res || !res->flags || res->start > res->end ||
++	    res->flags & IORESOURCE_UNSET)
+ 		return;
+ 
+ 	if (res->flags & IORESOURCE_IO) {
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index 1c3ac0f663369..115d196560b8b 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -1433,22 +1433,22 @@ static inline void __init check_lp_set_hblkrm(unsigned int lp,
+ 
+ void __init pseries_lpar_read_hblkrm_characteristics(void)
+ {
++	const s32 token = rtas_token("ibm,get-system-parameter");
+ 	unsigned char local_buffer[SPLPAR_TLB_BIC_MAXLENGTH];
+ 	int call_status, len, idx, bpsize;
+ 
+ 	if (!firmware_has_feature(FW_FEATURE_BLOCK_REMOVE))
+ 		return;
+ 
+-	spin_lock(&rtas_data_buf_lock);
+-	memset(rtas_data_buf, 0, RTAS_DATA_BUF_SIZE);
+-	call_status = rtas_call(rtas_token("ibm,get-system-parameter"), 3, 1,
+-				NULL,
+-				SPLPAR_TLB_BIC_TOKEN,
+-				__pa(rtas_data_buf),
+-				RTAS_DATA_BUF_SIZE);
+-	memcpy(local_buffer, rtas_data_buf, SPLPAR_TLB_BIC_MAXLENGTH);
+-	local_buffer[SPLPAR_TLB_BIC_MAXLENGTH - 1] = '\0';
+-	spin_unlock(&rtas_data_buf_lock);
++	do {
++		spin_lock(&rtas_data_buf_lock);
++		memset(rtas_data_buf, 0, RTAS_DATA_BUF_SIZE);
++		call_status = rtas_call(token, 3, 1, NULL, SPLPAR_TLB_BIC_TOKEN,
++					__pa(rtas_data_buf), RTAS_DATA_BUF_SIZE);
++		memcpy(local_buffer, rtas_data_buf, SPLPAR_TLB_BIC_MAXLENGTH);
++		local_buffer[SPLPAR_TLB_BIC_MAXLENGTH - 1] = '\0';
++		spin_unlock(&rtas_data_buf_lock);
++	} while (rtas_busy_delay(call_status));
+ 
+ 	if (call_status != 0) {
+ 		pr_warn("%s %s Error calling get-system-parameter (0x%x)\n",
+diff --git a/arch/powerpc/platforms/pseries/lparcfg.c b/arch/powerpc/platforms/pseries/lparcfg.c
+index e278390ab28d1..d3517e498512f 100644
+--- a/arch/powerpc/platforms/pseries/lparcfg.c
++++ b/arch/powerpc/platforms/pseries/lparcfg.c
+@@ -322,6 +322,7 @@ static void parse_mpp_x_data(struct seq_file *m)
+  */
+ static void parse_system_parameter_string(struct seq_file *m)
+ {
++	const s32 token = rtas_token("ibm,get-system-parameter");
+ 	int call_status;
+ 
+ 	unsigned char *local_buffer = kmalloc(SPLPAR_MAXLENGTH, GFP_KERNEL);
+@@ -331,16 +332,15 @@ static void parse_system_parameter_string(struct seq_file *m)
+ 		return;
+ 	}
+ 
+-	spin_lock(&rtas_data_buf_lock);
+-	memset(rtas_data_buf, 0, SPLPAR_MAXLENGTH);
+-	call_status = rtas_call(rtas_token("ibm,get-system-parameter"), 3, 1,
+-				NULL,
+-				SPLPAR_CHARACTERISTICS_TOKEN,
+-				__pa(rtas_data_buf),
+-				RTAS_DATA_BUF_SIZE);
+-	memcpy(local_buffer, rtas_data_buf, SPLPAR_MAXLENGTH);
+-	local_buffer[SPLPAR_MAXLENGTH - 1] = '\0';
+-	spin_unlock(&rtas_data_buf_lock);
++	do {
++		spin_lock(&rtas_data_buf_lock);
++		memset(rtas_data_buf, 0, SPLPAR_MAXLENGTH);
++		call_status = rtas_call(token, 3, 1, NULL, SPLPAR_CHARACTERISTICS_TOKEN,
++					__pa(rtas_data_buf), RTAS_DATA_BUF_SIZE);
++		memcpy(local_buffer, rtas_data_buf, SPLPAR_MAXLENGTH);
++		local_buffer[SPLPAR_MAXLENGTH - 1] = '\0';
++		spin_unlock(&rtas_data_buf_lock);
++	} while (rtas_busy_delay(call_status));
+ 
+ 	if (call_status != 0) {
+ 		printk(KERN_INFO
+diff --git a/arch/riscv/include/asm/jump_label.h b/arch/riscv/include/asm/jump_label.h
+index 38af2ec7b9bf9..729991e8f7825 100644
+--- a/arch/riscv/include/asm/jump_label.h
++++ b/arch/riscv/include/asm/jump_label.h
+@@ -18,6 +18,7 @@ static __always_inline bool arch_static_branch(struct static_key *key,
+ 					       bool branch)
+ {
+ 	asm_volatile_goto(
++		"	.align		2			\n\t"
+ 		"	.option push				\n\t"
+ 		"	.option norelax				\n\t"
+ 		"	.option norvc				\n\t"
+@@ -39,6 +40,7 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key,
+ 						    bool branch)
+ {
+ 	asm_volatile_goto(
++		"	.align		2			\n\t"
+ 		"	.option push				\n\t"
+ 		"	.option norelax				\n\t"
+ 		"	.option norvc				\n\t"
+diff --git a/arch/riscv/include/asm/parse_asm.h b/arch/riscv/include/asm/parse_asm.h
+index f36368de839f5..7fee806805c1b 100644
+--- a/arch/riscv/include/asm/parse_asm.h
++++ b/arch/riscv/include/asm/parse_asm.h
+@@ -125,7 +125,7 @@
+ #define FUNCT3_C_J		0xa000
+ #define FUNCT3_C_JAL		0x2000
+ #define FUNCT4_C_JR		0x8000
+-#define FUNCT4_C_JALR		0xf000
++#define FUNCT4_C_JALR		0x9000
+ 
+ #define FUNCT12_SRET		0x10200000
+ 
+diff --git a/arch/riscv/kernel/time.c b/arch/riscv/kernel/time.c
+index 8a5cf99c07762..303ae47dfb4d6 100644
+--- a/arch/riscv/kernel/time.c
++++ b/arch/riscv/kernel/time.c
+@@ -5,6 +5,7 @@
+  */
+ 
+ #include <linux/of_clk.h>
++#include <linux/clockchips.h>
+ #include <linux/clocksource.h>
+ #include <linux/delay.h>
+ #include <asm/sbi.h>
+@@ -28,6 +29,8 @@ void __init time_init(void)
+ 
+ 	of_clk_init(NULL);
+ 	timer_probe();
++
++	tick_setup_hrtimer_broadcast();
+ }
+ 
+ void clocksource_arch_init(struct clocksource *cs)
+diff --git a/arch/s390/kernel/kprobes.c b/arch/s390/kernel/kprobes.c
+index aae24dc75df61..0f7e7a68d57bb 100644
+--- a/arch/s390/kernel/kprobes.c
++++ b/arch/s390/kernel/kprobes.c
+@@ -241,6 +241,7 @@ static void pop_kprobe(struct kprobe_ctlblk *kcb)
+ {
+ 	__this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
+ 	kcb->kprobe_status = kcb->prev_kprobe.status;
++	kcb->prev_kprobe.kp = NULL;
+ }
+ NOKPROBE_SYMBOL(pop_kprobe);
+ 
+@@ -402,12 +403,11 @@ static int post_kprobe_handler(struct pt_regs *regs)
+ 	if (!p)
+ 		return 0;
+ 
++	resume_execution(p, regs);
+ 	if (kcb->kprobe_status != KPROBE_REENTER && p->post_handler) {
+ 		kcb->kprobe_status = KPROBE_HIT_SSDONE;
+ 		p->post_handler(p, regs, 0);
+ 	}
+-
+-	resume_execution(p, regs);
+ 	pop_kprobe(kcb);
+ 	preempt_enable_no_resched();
+ 
+diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
+index 9505bdb0aa544..d7291eb0d0c07 100644
+--- a/arch/s390/kernel/vmlinux.lds.S
++++ b/arch/s390/kernel/vmlinux.lds.S
+@@ -188,5 +188,6 @@ SECTIONS
+ 	DISCARDS
+ 	/DISCARD/ : {
+ 		*(.eh_frame)
++		*(.interp)
+ 	}
+ }
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 59db85fb63e1c..7ffc73ba220fb 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -5012,6 +5012,23 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ 	/* When we are protected, we should not change the memory slots */
+ 	if (kvm_s390_pv_get_handle(kvm))
+ 		return -EINVAL;
++
++	if (!kvm->arch.migration_mode)
++		return 0;
++
++	/*
++	 * Turn off migration mode when:
++	 * - userspace creates a new memslot with dirty logging off,
++	 * - userspace modifies an existing memslot (MOVE or FLAGS_ONLY) and
++	 *   dirty logging is turned off.
++	 * Migration mode expects dirty page logging being enabled to store
++	 * its dirty bitmap.
++	 */
++	if (change != KVM_MR_DELETE &&
++	    !(mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
++		WARN(kvm_s390_vm_stop_migration(kvm),
++		     "Failed to stop migration mode");
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
+index 5060956b8e7d6..1bc42ce265990 100644
+--- a/arch/s390/mm/extmem.c
++++ b/arch/s390/mm/extmem.c
+@@ -289,15 +289,17 @@ segment_overlaps_others (struct dcss_segment *seg)
+ 
+ /*
+  * real segment loading function, called from segment_load
++ * Must return either an error code < 0, or the segment type code >= 0
+  */
+ static int
+ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long *end)
+ {
+ 	unsigned long start_addr, end_addr, dummy;
+ 	struct dcss_segment *seg;
+-	int rc, diag_cc;
++	int rc, diag_cc, segtype;
+ 
+ 	start_addr = end_addr = 0;
++	segtype = -1;
+ 	seg = kmalloc(sizeof(*seg), GFP_KERNEL | GFP_DMA);
+ 	if (seg == NULL) {
+ 		rc = -ENOMEM;
+@@ -326,9 +328,9 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
+ 	seg->res_name[8] = '\0';
+ 	strlcat(seg->res_name, " (DCSS)", sizeof(seg->res_name));
+ 	seg->res->name = seg->res_name;
+-	rc = seg->vm_segtype;
+-	if (rc == SEG_TYPE_SC ||
+-	    ((rc == SEG_TYPE_SR || rc == SEG_TYPE_ER) && !do_nonshared))
++	segtype = seg->vm_segtype;
++	if (segtype == SEG_TYPE_SC ||
++	    ((segtype == SEG_TYPE_SR || segtype == SEG_TYPE_ER) && !do_nonshared))
+ 		seg->res->flags |= IORESOURCE_READONLY;
+ 
+ 	/* Check for overlapping resources before adding the mapping. */
+@@ -386,7 +388,7 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
+  out_free:
+ 	kfree(seg);
+  out:
+-	return rc;
++	return rc < 0 ? rc : segtype;
+ }
+ 
+ /*
+diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
+index b239f2ba93b09..cbfff2460e58d 100644
+--- a/arch/s390/mm/vmem.c
++++ b/arch/s390/mm/vmem.c
+@@ -296,7 +296,7 @@ static void try_free_pmd_table(pud_t *pud, unsigned long start)
+ 	if (end > VMALLOC_START)
+ 		return;
+ #ifdef CONFIG_KASAN
+-	if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
++	if (start < KASAN_SHADOW_END && end > KASAN_SHADOW_START)
+ 		return;
+ #endif
+ 	pmd = pmd_offset(pud, start);
+@@ -371,7 +371,7 @@ static void try_free_pud_table(p4d_t *p4d, unsigned long start)
+ 	if (end > VMALLOC_START)
+ 		return;
+ #ifdef CONFIG_KASAN
+-	if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
++	if (start < KASAN_SHADOW_END && end > KASAN_SHADOW_START)
+ 		return;
+ #endif
+ 
+@@ -425,7 +425,7 @@ static void try_free_p4d_table(pgd_t *pgd, unsigned long start)
+ 	if (end > VMALLOC_START)
+ 		return;
+ #ifdef CONFIG_KASAN
+-	if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
++	if (start < KASAN_SHADOW_END && end > KASAN_SHADOW_START)
+ 		return;
+ #endif
+ 
+diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
+index 530b7ec5d3ca9..b5ed893420591 100644
+--- a/arch/sparc/Kconfig
++++ b/arch/sparc/Kconfig
+@@ -293,7 +293,7 @@ config FORCE_MAX_ZONEORDER
+ 	  This config option is actually maximum order plus one. For example,
+ 	  a value of 13 means that the largest free memory block is 2^12 pages.
+ 
+-if SPARC64
++if SPARC64 || COMPILE_TEST
+ source "kernel/power/Kconfig"
+ endif
+ 
+diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
+index 555203e3e7b45..fc662f7cc2afb 100644
+--- a/arch/um/drivers/vector_kern.c
++++ b/arch/um/drivers/vector_kern.c
+@@ -771,6 +771,7 @@ static int vector_config(char *str, char **error_out)
+ 
+ 	if (parsed == NULL) {
+ 		*error_out = "vector_config failed to parse parameters";
++		kfree(params);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index d64e690139950..2284666e8c90c 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1329,17 +1329,16 @@ config MICROCODE_AMD
+ 	  If you select this option, microcode patch loading support for AMD
+ 	  processors will be enabled.
+ 
+-config MICROCODE_OLD_INTERFACE
+-	bool "Ancient loading interface (DEPRECATED)"
++config MICROCODE_LATE_LOADING
++	bool "Late microcode loading (DANGEROUS)"
+ 	default n
+ 	depends on MICROCODE
+ 	help
+-	  DO NOT USE THIS! This is the ancient /dev/cpu/microcode interface
+-	  which was used by userspace tools like iucode_tool and microcode.ctl.
+-	  It is inadequate because it runs too late to be able to properly
+-	  load microcode on a machine and it needs special tools. Instead, you
+-	  should've switched to the early loading method with the initrd or
+-	  builtin microcode by now: Documentation/x86/microcode.rst
++	  Loading microcode late, when the system is up and executing instructions
++	  is a tricky business and should be avoided if possible. Just the sequence
++	  of synchronizing all cores and SMT threads is one fragile dance which does
++	  not guarantee that cores might not softlock after the loading. Therefore,
++	  use this at your own risk. Late loading taints the kernel too.
+ 
+ config X86_MSR
+ 	tristate "/dev/cpu/*/msr - Model-specific register support"
+diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c
+index 1f1a95f3dd0ca..c0ab0ff4af655 100644
+--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
++++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c
+@@ -19,6 +19,7 @@
+ #include <crypto/internal/simd.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/simd.h>
++#include <asm/unaligned.h>
+ 
+ #define GHASH_BLOCK_SIZE	16
+ #define GHASH_DIGEST_SIZE	16
+@@ -54,15 +55,14 @@ static int ghash_setkey(struct crypto_shash *tfm,
+ 			const u8 *key, unsigned int keylen)
+ {
+ 	struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
+-	be128 *x = (be128 *)key;
+ 	u64 a, b;
+ 
+ 	if (keylen != GHASH_BLOCK_SIZE)
+ 		return -EINVAL;
+ 
+ 	/* perform multiplication by 'x' in GF(2^128) */
+-	a = be64_to_cpu(x->a);
+-	b = be64_to_cpu(x->b);
++	a = get_unaligned_be64(key);
++	b = get_unaligned_be64(key + 8);
+ 
+ 	ctx->shash.a = (b << 1) | (a >> 63);
+ 	ctx->shash.b = (a << 1) | (b >> 63);
+diff --git a/arch/x86/events/zhaoxin/core.c b/arch/x86/events/zhaoxin/core.c
+index e68827e604ad1..e927346960303 100644
+--- a/arch/x86/events/zhaoxin/core.c
++++ b/arch/x86/events/zhaoxin/core.c
+@@ -541,7 +541,13 @@ __init int zhaoxin_pmu_init(void)
+ 
+ 	switch (boot_cpu_data.x86) {
+ 	case 0x06:
+-		if (boot_cpu_data.x86_model == 0x0f || boot_cpu_data.x86_model == 0x19) {
++		/*
++		 * Support Zhaoxin CPU from ZXC series, exclude Nano series through FMS.
++		 * Nano FMS: Family=6, Model=F, Stepping=[0-A][C-D]
++		 * ZXC FMS: Family=6, Model=F, Stepping=E-F OR Family=6, Model=0x19, Stepping=0-3
++		 */
++		if ((boot_cpu_data.x86_model == 0x0f && boot_cpu_data.x86_stepping >= 0x0e) ||
++			boot_cpu_data.x86_model == 0x19) {
+ 
+ 			x86_pmu.max_period = x86_pmu.cntval_mask >> 1;
+ 
+diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
+index f73327397b898..509cc0262fdc2 100644
+--- a/arch/x86/include/asm/microcode.h
++++ b/arch/x86/include/asm/microcode.h
+@@ -131,7 +131,7 @@ static inline unsigned int x86_cpuid_family(void)
+ int __init microcode_init(void);
+ extern void __init load_ucode_bsp(void);
+ extern void load_ucode_ap(void);
+-void reload_early_microcode(void);
++void reload_early_microcode(unsigned int cpu);
+ extern bool get_builtin_firmware(struct cpio_data *cd, const char *name);
+ extern bool initrd_gone;
+ void microcode_bsp_resume(void);
+@@ -139,7 +139,7 @@ void microcode_bsp_resume(void);
+ static inline int __init microcode_init(void)			{ return 0; };
+ static inline void __init load_ucode_bsp(void)			{ }
+ static inline void load_ucode_ap(void)				{ }
+-static inline void reload_early_microcode(void)			{ }
++static inline void reload_early_microcode(unsigned int cpu)	{ }
+ static inline void microcode_bsp_resume(void)			{ }
+ static inline bool
+ get_builtin_firmware(struct cpio_data *cd, const char *name)	{ return false; }
+diff --git a/arch/x86/include/asm/microcode_amd.h b/arch/x86/include/asm/microcode_amd.h
+index 7063b5a43220a..a645b25ee442a 100644
+--- a/arch/x86/include/asm/microcode_amd.h
++++ b/arch/x86/include/asm/microcode_amd.h
+@@ -47,12 +47,12 @@ struct microcode_amd {
+ extern void __init load_ucode_amd_bsp(unsigned int family);
+ extern void load_ucode_amd_ap(unsigned int family);
+ extern int __init save_microcode_in_initrd_amd(unsigned int family);
+-void reload_ucode_amd(void);
++void reload_ucode_amd(unsigned int cpu);
+ #else
+ static inline void __init load_ucode_amd_bsp(unsigned int family) {}
+ static inline void load_ucode_amd_ap(unsigned int family) {}
+ static inline int __init
+ save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; }
+-static inline void reload_ucode_amd(void) {}
++static inline void reload_ucode_amd(unsigned int cpu) {}
+ #endif
+ #endif /* _ASM_X86_MICROCODE_AMD_H */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 5a8ee3b83af2a..f71a177b6b185 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -54,6 +54,10 @@
+ #define SPEC_CTRL_RRSBA_DIS_S_SHIFT	6	   /* Disable RRSBA behavior */
+ #define SPEC_CTRL_RRSBA_DIS_S		BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT)
+ 
++/* A mask for bits which the kernel toggles when controlling mitigations */
++#define SPEC_CTRL_MITIGATIONS_MASK	(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD \
++							| SPEC_CTRL_RRSBA_DIS_S)
++
+ #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
+ #define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
+ 
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index d428d611a43a9..60514502ead67 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -682,6 +682,7 @@ extern void load_direct_gdt(int);
+ extern void load_fixmap_gdt(int);
+ extern void load_percpu_segment(int);
+ extern void cpu_init(void);
++extern void cpu_init_secondary(void);
+ extern void cpu_init_exception_handling(void);
+ extern void cr4_init(void);
+ 
+@@ -838,8 +839,9 @@ bool xen_set_default_idle(void);
+ #define xen_set_default_idle 0
+ #endif
+ 
+-void stop_this_cpu(void *dummy);
+-void microcode_check(void);
++void __noreturn stop_this_cpu(void *dummy);
++void microcode_check(struct cpuinfo_x86 *prev_info);
++void store_cpu_caps(struct cpuinfo_x86 *info);
+ 
+ enum l1tf_mitigations {
+ 	L1TF_MITIGATION_OFF,
+diff --git a/arch/x86/include/asm/reboot.h b/arch/x86/include/asm/reboot.h
+index 04c17be9b5fda..bc5b4d788c08d 100644
+--- a/arch/x86/include/asm/reboot.h
++++ b/arch/x86/include/asm/reboot.h
+@@ -25,6 +25,8 @@ void __noreturn machine_real_restart(unsigned int type);
+ #define MRR_BIOS	0
+ #define MRR_APM		1
+ 
++void cpu_emergency_disable_virtualization(void);
++
+ typedef void (*nmi_shootdown_cb)(int, struct pt_regs*);
+ void nmi_panic_self_stop(struct pt_regs *regs);
+ void nmi_shootdown_cpus(nmi_shootdown_cb callback);
+diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
+index 07603064df8fc..b9ccdf5ea98ba 100644
+--- a/arch/x86/include/asm/resctrl.h
++++ b/arch/x86/include/asm/resctrl.h
+@@ -51,24 +51,27 @@ DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
+  *   simple as possible.
+  * Must be called with preemption disabled.
+  */
+-static void __resctrl_sched_in(void)
++static inline void __resctrl_sched_in(struct task_struct *tsk)
+ {
+ 	struct resctrl_pqr_state *state = this_cpu_ptr(&pqr_state);
+ 	u32 closid = state->default_closid;
+ 	u32 rmid = state->default_rmid;
++	u32 tmp;
+ 
+ 	/*
+ 	 * If this task has a closid/rmid assigned, use it.
+ 	 * Else use the closid/rmid assigned to this cpu.
+ 	 */
+ 	if (static_branch_likely(&rdt_alloc_enable_key)) {
+-		if (current->closid)
+-			closid = current->closid;
++		tmp = READ_ONCE(tsk->closid);
++		if (tmp)
++			closid = tmp;
+ 	}
+ 
+ 	if (static_branch_likely(&rdt_mon_enable_key)) {
+-		if (current->rmid)
+-			rmid = current->rmid;
++		tmp = READ_ONCE(tsk->rmid);
++		if (tmp)
++			rmid = tmp;
+ 	}
+ 
+ 	if (closid != state->cur_closid || rmid != state->cur_rmid) {
+@@ -78,17 +81,17 @@ static void __resctrl_sched_in(void)
+ 	}
+ }
+ 
+-static inline void resctrl_sched_in(void)
++static inline void resctrl_sched_in(struct task_struct *tsk)
+ {
+ 	if (static_branch_likely(&rdt_enable_key))
+-		__resctrl_sched_in();
++		__resctrl_sched_in(tsk);
+ }
+ 
+ void resctrl_cpu_detect(struct cpuinfo_x86 *c);
+ 
+ #else
+ 
+-static inline void resctrl_sched_in(void) {}
++static inline void resctrl_sched_in(struct task_struct *tsk) {}
+ static inline void resctrl_cpu_detect(struct cpuinfo_x86 *c) {}
+ 
+ #endif /* CONFIG_X86_CPU_RESCTRL */
+diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
+index fda3e7747c223..8eefa3386d8ce 100644
+--- a/arch/x86/include/asm/virtext.h
++++ b/arch/x86/include/asm/virtext.h
+@@ -120,7 +120,21 @@ static inline void cpu_svm_disable(void)
+ 
+ 	wrmsrl(MSR_VM_HSAVE_PA, 0);
+ 	rdmsrl(MSR_EFER, efer);
+-	wrmsrl(MSR_EFER, efer & ~EFER_SVME);
++	if (efer & EFER_SVME) {
++		/*
++		 * Force GIF=1 prior to disabling SVM to ensure INIT and NMI
++		 * aren't blocked, e.g. if a fatal error occurred between CLGI
++		 * and STGI.  Note, STGI may #UD if SVM is disabled from NMI
++		 * context between reading EFER and executing STGI.  In that
++		 * case, GIF must already be set, otherwise the NMI would have
++		 * been blocked, so just eat the fault.
++		 */
++		asm_volatile_goto("1: stgi\n\t"
++				  _ASM_EXTABLE(1b, %l[fault])
++				  ::: "memory" : fault);
++fault:
++		wrmsrl(MSR_EFER, efer & ~EFER_SVME);
++	}
+ }
+ 
+ /** Makes sure SVM is disabled, if it is supported on the CPU
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index a2a087a797ae5..c81b8b029b680 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -136,9 +136,17 @@ void __init check_bugs(void)
+ 	 * have unknown values. AMD64_LS_CFG MSR is cached in the early AMD
+ 	 * init code as it is not enumerated and depends on the family.
+ 	 */
+-	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
++	if (cpu_feature_enabled(X86_FEATURE_MSR_SPEC_CTRL)) {
+ 		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+ 
++		/*
++		 * Previously running kernel (kexec), may have some controls
++		 * turned ON. Clear them and let the mitigations setup below
++		 * rediscover them based on configuration.
++		 */
++		x86_spec_ctrl_base &= ~SPEC_CTRL_MITIGATIONS_MASK;
++	}
++
+ 	/* Select the proper CPU mitigations before patching alternatives: */
+ 	spectre_v1_select_mitigation();
+ 	spectre_v2_select_mitigation();
+@@ -1058,14 +1066,18 @@ spectre_v2_parse_user_cmdline(void)
+ 	return SPECTRE_V2_USER_CMD_AUTO;
+ }
+ 
+-static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
++static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
+ {
+-	return mode == SPECTRE_V2_IBRS ||
+-	       mode == SPECTRE_V2_EIBRS ||
++	return mode == SPECTRE_V2_EIBRS ||
+ 	       mode == SPECTRE_V2_EIBRS_RETPOLINE ||
+ 	       mode == SPECTRE_V2_EIBRS_LFENCE;
+ }
+ 
++static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
++{
++	return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
++}
++
+ static void __init
+ spectre_v2_user_select_mitigation(void)
+ {
+@@ -1128,12 +1140,19 @@ spectre_v2_user_select_mitigation(void)
+ 	}
+ 
+ 	/*
+-	 * If no STIBP, IBRS or enhanced IBRS is enabled, or SMT impossible,
+-	 * STIBP is not required.
++	 * If no STIBP, enhanced IBRS is enabled, or SMT impossible, STIBP
++	 * is not required.
++	 *
++	 * Enhanced IBRS also protects against cross-thread branch target
++	 * injection in user-mode as the IBRS bit remains always set which
++	 * implicitly enables cross-thread protections.  However, in legacy IBRS
++	 * mode, the IBRS bit is set only on kernel entry and cleared on return
++	 * to userspace. This disables the implicit cross-thread protection,
++	 * so allow for STIBP to be selected in that case.
+ 	 */
+ 	if (!boot_cpu_has(X86_FEATURE_STIBP) ||
+ 	    !smt_possible ||
+-	    spectre_v2_in_ibrs_mode(spectre_v2_enabled))
++	    spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+ 		return;
+ 
+ 	/*
+@@ -2227,7 +2246,7 @@ static ssize_t mmio_stale_data_show_state(char *buf)
+ 
+ static char *stibp_state(void)
+ {
+-	if (spectre_v2_in_ibrs_mode(spectre_v2_enabled))
++	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+ 		return "";
+ 
+ 	switch (spectre_v2_user_stibp) {
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 56573241d0293..e2dee60108460 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -2048,13 +2048,12 @@ void cpu_init_exception_handling(void)
+ 
+ /*
+  * cpu_init() initializes state that is per-CPU. Some data is already
+- * initialized (naturally) in the bootstrap process, such as the GDT
+- * and IDT. We reload them nevertheless, this function acts as a
+- * 'CPU state barrier', nothing should get across.
++ * initialized (naturally) in the bootstrap process, such as the GDT.  We
++ * reload it nevertheless, this function acts as a 'CPU state barrier',
++ * nothing should get across.
+  */
+ void cpu_init(void)
+ {
+-	struct tss_struct *tss = this_cpu_ptr(&cpu_tss_rw);
+ 	struct task_struct *cur = current;
+ 	int cpu = raw_smp_processor_id();
+ 
+@@ -2067,8 +2066,6 @@ void cpu_init(void)
+ 	    early_cpu_to_node(cpu) != NUMA_NO_NODE)
+ 		set_numa_node(early_cpu_to_node(cpu));
+ #endif
+-	setup_getcpu(cpu);
+-
+ 	pr_debug("Initializing CPU#%d\n", cpu);
+ 
+ 	if (IS_ENABLED(CONFIG_X86_64) || cpu_feature_enabled(X86_FEATURE_VME) ||
+@@ -2080,7 +2077,6 @@ void cpu_init(void)
+ 	 * and set up the GDT descriptor:
+ 	 */
+ 	switch_to_new_gdt(cpu);
+-	load_current_idt();
+ 
+ 	if (IS_ENABLED(CONFIG_X86_64)) {
+ 		loadsegment(fs, 0);
+@@ -2100,12 +2096,6 @@ void cpu_init(void)
+ 	initialize_tlbstate_and_flush();
+ 	enter_lazy_tlb(&init_mm, cur);
+ 
+-	/* Initialize the TSS. */
+-	tss_setup_ist(tss);
+-	tss_setup_io_bitmap(tss);
+-	set_tss_desc(cpu, &get_cpu_entry_area(cpu)->tss.x86_tss);
+-
+-	load_TR_desc();
+ 	/*
+ 	 * sp0 points to the entry trampoline stack regardless of what task
+ 	 * is running.
+@@ -2127,35 +2117,64 @@ void cpu_init(void)
+ 	load_fixmap_gdt(cpu);
+ }
+ 
+-/*
++#ifdef CONFIG_SMP
++void cpu_init_secondary(void)
++{
++	/*
++	 * Relies on the BP having set-up the IDT tables, which are loaded
++	 * on this CPU in cpu_init_exception_handling().
++	 */
++	cpu_init_exception_handling();
++	cpu_init();
++}
++#endif
++
++#ifdef CONFIG_MICROCODE_LATE_LOADING
++/**
++ * store_cpu_caps() - Store a snapshot of CPU capabilities
++ * @curr_info: Pointer where to store it
++ *
++ * Returns: None
++ */
++void store_cpu_caps(struct cpuinfo_x86 *curr_info)
++{
++	/* Reload CPUID max function as it might've changed. */
++	curr_info->cpuid_level = cpuid_eax(0);
++
++	/* Copy all capability leafs and pick up the synthetic ones. */
++	memcpy(&curr_info->x86_capability, &boot_cpu_data.x86_capability,
++	       sizeof(curr_info->x86_capability));
++
++	/* Get the hardware CPUID leafs */
++	get_cpu_cap(curr_info);
++}
++
++/**
++ * microcode_check() - Check if any CPU capabilities changed after an update.
++ * @prev_info:	CPU capabilities stored before an update.
++ *
+  * The microcode loader calls this upon late microcode load to recheck features,
+  * only when microcode has been updated. Caller holds microcode_mutex and CPU
+  * hotplug lock.
++ *
++ * Return: None
+  */
+-void microcode_check(void)
++void microcode_check(struct cpuinfo_x86 *prev_info)
+ {
+-	struct cpuinfo_x86 info;
++	struct cpuinfo_x86 curr_info;
+ 
+ 	perf_check_microcode();
+ 
+-	/* Reload CPUID max function as it might've changed. */
+-	info.cpuid_level = cpuid_eax(0);
+-
+-	/*
+-	 * Copy all capability leafs to pick up the synthetic ones so that
+-	 * memcmp() below doesn't fail on that. The ones coming from CPUID will
+-	 * get overwritten in get_cpu_cap().
+-	 */
+-	memcpy(&info.x86_capability, &boot_cpu_data.x86_capability, sizeof(info.x86_capability));
++	store_cpu_caps(&curr_info);
+ 
+-	get_cpu_cap(&info);
+-
+-	if (!memcmp(&info.x86_capability, &boot_cpu_data.x86_capability, sizeof(info.x86_capability)))
++	if (!memcmp(&prev_info->x86_capability, &curr_info.x86_capability,
++		    sizeof(prev_info->x86_capability)))
+ 		return;
+ 
+ 	pr_warn("x86/CPU: CPU features have changed after loading microcode, but might not take effect.\n");
+ 	pr_warn("x86/CPU: Please consider either early loading through initrd/built-in or a potential BIOS update.\n");
+ }
++#endif
+ 
+ /*
+  * Invoked from core CPU hotplug code after hotplug operations
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 234a96f25248d..d3bce6d380ed6 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -55,7 +55,9 @@ struct cont_desc {
+ };
+ 
+ static u32 ucode_new_rev;
+-static u8 amd_ucode_patch[PATCH_MAX_SIZE];
++
++/* One blob per node. */
++static u8 amd_ucode_patch[MAX_NUMNODES][PATCH_MAX_SIZE];
+ 
+ /*
+  * Microcode patch container file is prepended to the initrd in cpio
+@@ -429,7 +431,7 @@ apply_microcode_early_amd(u32 cpuid_1_eax, void *ucode, size_t size, bool save_p
+ 	patch	= (u8 (*)[PATCH_MAX_SIZE])__pa_nodebug(&amd_ucode_patch);
+ #else
+ 	new_rev = &ucode_new_rev;
+-	patch	= &amd_ucode_patch;
++	patch	= &amd_ucode_patch[0];
+ #endif
+ 
+ 	desc.cpuid_1_eax = cpuid_1_eax;
+@@ -548,8 +550,7 @@ void load_ucode_amd_ap(unsigned int cpuid_1_eax)
+ 	apply_microcode_early_amd(cpuid_1_eax, cp.data, cp.size, false);
+ }
+ 
+-static enum ucode_state
+-load_microcode_amd(bool save, u8 family, const u8 *data, size_t size);
++static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size);
+ 
+ int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax)
+ {
+@@ -567,19 +568,19 @@ int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax)
+ 	if (!desc.mc)
+ 		return -EINVAL;
+ 
+-	ret = load_microcode_amd(true, x86_family(cpuid_1_eax), desc.data, desc.size);
++	ret = load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
+ 	if (ret > UCODE_UPDATED)
+ 		return -EINVAL;
+ 
+ 	return 0;
+ }
+ 
+-void reload_ucode_amd(void)
++void reload_ucode_amd(unsigned int cpu)
+ {
+-	struct microcode_amd *mc;
+ 	u32 rev, dummy __always_unused;
++	struct microcode_amd *mc;
+ 
+-	mc = (struct microcode_amd *)amd_ucode_patch;
++	mc = (struct microcode_amd *)amd_ucode_patch[cpu_to_node(cpu)];
+ 
+ 	rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+ 
+@@ -845,9 +846,10 @@ static enum ucode_state __load_microcode_amd(u8 family, const u8 *data,
+ 	return UCODE_OK;
+ }
+ 
+-static enum ucode_state
+-load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
++static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size)
+ {
++	struct cpuinfo_x86 *c;
++	unsigned int nid, cpu;
+ 	struct ucode_patch *p;
+ 	enum ucode_state ret;
+ 
+@@ -860,22 +862,22 @@ load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
+ 		return ret;
+ 	}
+ 
+-	p = find_patch(0);
+-	if (!p) {
+-		return ret;
+-	} else {
+-		if (boot_cpu_data.microcode >= p->patch_id)
+-			return ret;
++	for_each_node(nid) {
++		cpu = cpumask_first(cpumask_of_node(nid));
++		c = &cpu_data(cpu);
+ 
+-		ret = UCODE_NEW;
+-	}
++		p = find_patch(cpu);
++		if (!p)
++			continue;
+ 
+-	/* save BSP's matching patch for early load */
+-	if (!save)
+-		return ret;
++		if (c->microcode >= p->patch_id)
++			continue;
+ 
+-	memset(amd_ucode_patch, 0, PATCH_MAX_SIZE);
+-	memcpy(amd_ucode_patch, p->data, min_t(u32, p->size, PATCH_MAX_SIZE));
++		ret = UCODE_NEW;
++
++		memset(&amd_ucode_patch[nid], 0, PATCH_MAX_SIZE);
++		memcpy(&amd_ucode_patch[nid], p->data, min_t(u32, p->size, PATCH_MAX_SIZE));
++	}
+ 
+ 	return ret;
+ }
+@@ -901,12 +903,11 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device,
+ {
+ 	char fw_name[36] = "amd-ucode/microcode_amd.bin";
+ 	struct cpuinfo_x86 *c = &cpu_data(cpu);
+-	bool bsp = c->cpu_index == boot_cpu_data.cpu_index;
+ 	enum ucode_state ret = UCODE_NFOUND;
+ 	const struct firmware *fw;
+ 
+ 	/* reload ucode container only on the boot cpu */
+-	if (!refresh_fw || !bsp)
++	if (!refresh_fw)
+ 		return UCODE_OK;
+ 
+ 	if (c->x86 >= 0x15)
+@@ -921,7 +922,7 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device,
+ 	if (!verify_container(fw->data, fw->size, false))
+ 		goto fw_release;
+ 
+-	ret = load_microcode_amd(bsp, c->x86, fw->data, fw->size);
++	ret = load_microcode_amd(c->x86, fw->data, fw->size);
+ 
+  fw_release:
+ 	release_firmware(fw);
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 0b1732b98e719..24254d1411789 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -55,7 +55,7 @@ LIST_HEAD(microcode_cache);
+  * All non cpu-hotplug-callback call sites use:
+  *
+  * - microcode_mutex to synchronize with each other;
+- * - get/put_online_cpus() to synchronize with
++ * - cpus_read_lock/unlock() to synchronize with
+  *   the cpu-hotplug-callback call sites.
+  *
+  * We guarantee that only a single cpu is being
+@@ -315,7 +315,7 @@ struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa)
+ #endif
+ }
+ 
+-void reload_early_microcode(void)
++void reload_early_microcode(unsigned int cpu)
+ {
+ 	int vendor, family;
+ 
+@@ -329,7 +329,7 @@ void reload_early_microcode(void)
+ 		break;
+ 	case X86_VENDOR_AMD:
+ 		if (family >= 0x10)
+-			reload_ucode_amd();
++			reload_ucode_amd(cpu);
+ 		break;
+ 	default:
+ 		break;
+@@ -390,101 +390,10 @@ static int apply_microcode_on_target(int cpu)
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_MICROCODE_OLD_INTERFACE
+-static int do_microcode_update(const void __user *buf, size_t size)
+-{
+-	int error = 0;
+-	int cpu;
+-
+-	for_each_online_cpu(cpu) {
+-		struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+-		enum ucode_state ustate;
+-
+-		if (!uci->valid)
+-			continue;
+-
+-		ustate = microcode_ops->request_microcode_user(cpu, buf, size);
+-		if (ustate == UCODE_ERROR) {
+-			error = -1;
+-			break;
+-		} else if (ustate == UCODE_NEW) {
+-			apply_microcode_on_target(cpu);
+-		}
+-	}
+-
+-	return error;
+-}
+-
+-static int microcode_open(struct inode *inode, struct file *file)
+-{
+-	return capable(CAP_SYS_RAWIO) ? stream_open(inode, file) : -EPERM;
+-}
+-
+-static ssize_t microcode_write(struct file *file, const char __user *buf,
+-			       size_t len, loff_t *ppos)
+-{
+-	ssize_t ret = -EINVAL;
+-	unsigned long nr_pages = totalram_pages();
+-
+-	if ((len >> PAGE_SHIFT) > nr_pages) {
+-		pr_err("too much data (max %ld pages)\n", nr_pages);
+-		return ret;
+-	}
+-
+-	get_online_cpus();
+-	mutex_lock(&microcode_mutex);
+-
+-	if (do_microcode_update(buf, len) == 0)
+-		ret = (ssize_t)len;
+-
+-	if (ret > 0)
+-		perf_check_microcode();
+-
+-	mutex_unlock(&microcode_mutex);
+-	put_online_cpus();
+-
+-	return ret;
+-}
+-
+-static const struct file_operations microcode_fops = {
+-	.owner			= THIS_MODULE,
+-	.write			= microcode_write,
+-	.open			= microcode_open,
+-	.llseek		= no_llseek,
+-};
+-
+-static struct miscdevice microcode_dev = {
+-	.minor			= MICROCODE_MINOR,
+-	.name			= "microcode",
+-	.nodename		= "cpu/microcode",
+-	.fops			= &microcode_fops,
+-};
+-
+-static int __init microcode_dev_init(void)
+-{
+-	int error;
+-
+-	error = misc_register(&microcode_dev);
+-	if (error) {
+-		pr_err("can't misc_register on minor=%d\n", MICROCODE_MINOR);
+-		return error;
+-	}
+-
+-	return 0;
+-}
+-
+-static void __exit microcode_dev_exit(void)
+-{
+-	misc_deregister(&microcode_dev);
+-}
+-#else
+-#define microcode_dev_init()	0
+-#define microcode_dev_exit()	do { } while (0)
+-#endif
+-
+ /* fake device for request_firmware */
+ static struct platform_device	*microcode_pdev;
+ 
++#ifdef CONFIG_MICROCODE_LATE_LOADING
+ /*
+  * Late loading dance. Why the heavy-handed stomp_machine effort?
+  *
+@@ -599,16 +508,27 @@ wait_for_siblings:
+  */
+ static int microcode_reload_late(void)
+ {
+-	int ret;
++	int old = boot_cpu_data.microcode, ret;
++	struct cpuinfo_x86 prev_info;
+ 
+ 	atomic_set(&late_cpus_in,  0);
+ 	atomic_set(&late_cpus_out, 0);
+ 
+-	ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask);
+-	if (ret == 0)
+-		microcode_check();
++	/*
++	 * Take a snapshot before the microcode update in order to compare and
++	 * check whether any bits changed after an update.
++	 */
++	store_cpu_caps(&prev_info);
+ 
+-	pr_info("Reload completed, microcode revision: 0x%x\n", boot_cpu_data.microcode);
++	ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask);
++	if (!ret) {
++		pr_info("Reload succeeded, microcode revision: 0x%x -> 0x%x\n",
++			old, boot_cpu_data.microcode);
++		microcode_check(&prev_info);
++	} else {
++		pr_info("Reload failed, current microcode revision: 0x%x\n",
++			boot_cpu_data.microcode);
++	}
+ 
+ 	return ret;
+ }
+@@ -629,7 +549,7 @@ static ssize_t reload_store(struct device *dev,
+ 	if (val != 1)
+ 		return size;
+ 
+-	get_online_cpus();
++	cpus_read_lock();
+ 
+ 	ret = check_online_cpus();
+ 	if (ret)
+@@ -644,7 +564,7 @@ static ssize_t reload_store(struct device *dev,
+ 	mutex_unlock(&microcode_mutex);
+ 
+ put:
+-	put_online_cpus();
++	cpus_read_unlock();
+ 
+ 	if (ret == 0)
+ 		ret = size;
+@@ -652,6 +572,9 @@ put:
+ 	return ret;
+ }
+ 
++static DEVICE_ATTR_WO(reload);
++#endif
++
+ static ssize_t version_show(struct device *dev,
+ 			struct device_attribute *attr, char *buf)
+ {
+@@ -668,7 +591,6 @@ static ssize_t pf_show(struct device *dev,
+ 	return sprintf(buf, "0x%x\n", uci->cpu_sig.pf);
+ }
+ 
+-static DEVICE_ATTR_WO(reload);
+ static DEVICE_ATTR(version, 0444, version_show, NULL);
+ static DEVICE_ATTR(processor_flags, 0444, pf_show, NULL);
+ 
+@@ -785,7 +707,7 @@ void microcode_bsp_resume(void)
+ 	if (uci->valid && uci->mc)
+ 		microcode_ops->apply_microcode(cpu);
+ 	else if (!uci->mc)
+-		reload_early_microcode();
++		reload_early_microcode(cpu);
+ }
+ 
+ static struct syscore_ops mc_syscore_ops = {
+@@ -821,7 +743,9 @@ static int mc_cpu_down_prep(unsigned int cpu)
+ }
+ 
+ static struct attribute *cpu_root_microcode_attrs[] = {
++#ifdef CONFIG_MICROCODE_LATE_LOADING
+ 	&dev_attr_reload.attr,
++#endif
+ 	NULL
+ };
+ 
+@@ -853,14 +777,14 @@ int __init microcode_init(void)
+ 	if (IS_ERR(microcode_pdev))
+ 		return PTR_ERR(microcode_pdev);
+ 
+-	get_online_cpus();
++	cpus_read_lock();
+ 	mutex_lock(&microcode_mutex);
+ 
+ 	error = subsys_interface_register(&mc_cpu_interface);
+ 	if (!error)
+ 		perf_check_microcode();
+ 	mutex_unlock(&microcode_mutex);
+-	put_online_cpus();
++	cpus_read_unlock();
+ 
+ 	if (error)
+ 		goto out_pdev;
+@@ -873,10 +797,6 @@ int __init microcode_init(void)
+ 		goto out_driver;
+ 	}
+ 
+-	error = microcode_dev_init();
+-	if (error)
+-		goto out_ucode_group;
+-
+ 	register_syscore_ops(&mc_syscore_ops);
+ 	cpuhp_setup_state_nocalls(CPUHP_AP_MICROCODE_LOADER, "x86/microcode:starting",
+ 				  mc_cpu_starting, NULL);
+@@ -887,18 +807,14 @@ int __init microcode_init(void)
+ 
+ 	return 0;
+ 
+- out_ucode_group:
+-	sysfs_remove_group(&cpu_subsys.dev_root->kobj,
+-			   &cpu_root_microcode_group);
+-
+  out_driver:
+-	get_online_cpus();
++	cpus_read_lock();
+ 	mutex_lock(&microcode_mutex);
+ 
+ 	subsys_interface_unregister(&mc_cpu_interface);
+ 
+ 	mutex_unlock(&microcode_mutex);
+-	put_online_cpus();
++	cpus_read_unlock();
+ 
+  out_pdev:
+ 	platform_device_unregister(microcode_pdev);
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index ff26de11b3f15..1a943743cfe4b 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -311,7 +311,7 @@ static void update_cpu_closid_rmid(void *info)
+ 	 * executing task might have its own closid selected. Just reuse
+ 	 * the context switch code.
+ 	 */
+-	resctrl_sched_in();
++	resctrl_sched_in(current);
+ }
+ 
+ /*
+@@ -532,7 +532,7 @@ static void _update_task_closid_rmid(void *task)
+ 	 * Otherwise, the MSR is updated when the task is scheduled in.
+ 	 */
+ 	if (task == current)
+-		resctrl_sched_in();
++		resctrl_sched_in(task);
+ }
+ 
+ static void update_task_closid_rmid(struct task_struct *t)
+@@ -563,11 +563,11 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
+ 	 */
+ 
+ 	if (rdtgrp->type == RDTCTRL_GROUP) {
+-		tsk->closid = rdtgrp->closid;
+-		tsk->rmid = rdtgrp->mon.rmid;
++		WRITE_ONCE(tsk->closid, rdtgrp->closid);
++		WRITE_ONCE(tsk->rmid, rdtgrp->mon.rmid);
+ 	} else if (rdtgrp->type == RDTMON_GROUP) {
+ 		if (rdtgrp->mon.parent->closid == tsk->closid) {
+-			tsk->rmid = rdtgrp->mon.rmid;
++			WRITE_ONCE(tsk->rmid, rdtgrp->mon.rmid);
+ 		} else {
+ 			rdt_last_cmd_puts("Can't move task to different control group\n");
+ 			return -EINVAL;
+@@ -2312,8 +2312,8 @@ static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to,
+ 	for_each_process_thread(p, t) {
+ 		if (!from || is_closid_match(t, from) ||
+ 		    is_rmid_match(t, from)) {
+-			t->closid = to->closid;
+-			t->rmid = to->mon.rmid;
++			WRITE_ONCE(t->closid, to->closid);
++			WRITE_ONCE(t->rmid, to->mon.rmid);
+ 
+ 			/*
+ 			 * Order the closid/rmid stores above before the loads
+diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
+index b1deacbeb2669..a932a07d00253 100644
+--- a/arch/x86/kernel/crash.c
++++ b/arch/x86/kernel/crash.c
+@@ -37,7 +37,6 @@
+ #include <linux/kdebug.h>
+ #include <asm/cpu.h>
+ #include <asm/reboot.h>
+-#include <asm/virtext.h>
+ #include <asm/intel_pt.h>
+ #include <asm/crash.h>
+ #include <asm/cmdline.h>
+@@ -94,15 +93,6 @@ static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
+ 	 */
+ 	cpu_crash_vmclear_loaded_vmcss();
+ 
+-	/* Disable VMX or SVM if needed.
+-	 *
+-	 * We need to disable virtualization on all CPUs.
+-	 * Having VMX or SVM enabled on any CPU may break rebooting
+-	 * after the kdump kernel has finished its task.
+-	 */
+-	cpu_emergency_vmxoff();
+-	cpu_emergency_svm_disable();
+-
+ 	/*
+ 	 * Disable Intel PT to stop its logging
+ 	 */
+@@ -161,12 +151,7 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
+ 	 */
+ 	cpu_crash_vmclear_loaded_vmcss();
+ 
+-	/* Booting kdump kernel with VMX or SVM enabled won't work,
+-	 * because (among other limitations) we can't disable paging
+-	 * with the virt flags.
+-	 */
+-	cpu_emergency_vmxoff();
+-	cpu_emergency_svm_disable();
++	cpu_emergency_disable_virtualization();
+ 
+ 	/*
+ 	 * Disable Intel PT to stop its logging
+diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
+index 3d62014920064..e37e5e82481a5 100644
+--- a/arch/x86/kernel/kprobes/opt.c
++++ b/arch/x86/kernel/kprobes/opt.c
+@@ -46,8 +46,8 @@ unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsigned long addr)
+ 		/* This function only handles jump-optimized kprobe */
+ 		if (kp && kprobe_optimized(kp)) {
+ 			op = container_of(kp, struct optimized_kprobe, kp);
+-			/* If op->list is not empty, op is under optimizing */
+-			if (list_empty(&op->list))
++			/* If op is optimized or under unoptimizing */
++			if (list_empty(&op->list) || optprobe_queued_unopt(op))
+ 				goto found;
+ 		}
+ 	}
+@@ -346,7 +346,7 @@ int arch_check_optimized_kprobe(struct optimized_kprobe *op)
+ 
+ 	for (i = 1; i < op->optinsn.size; i++) {
+ 		p = get_kprobe(op->kp.addr + i);
+-		if (p && !kprobe_disabled(p))
++		if (p && !kprobe_disarmed(p))
+ 			return -EEXIST;
+ 	}
+ 
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 5e17c3939dd1c..1cba09a9f1c13 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -720,7 +720,7 @@ bool xen_set_default_idle(void)
+ }
+ #endif
+ 
+-void stop_this_cpu(void *dummy)
++void __noreturn stop_this_cpu(void *dummy)
+ {
+ 	local_irq_disable();
+ 	/*
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index 98bf8fd189025..3b4c394a1a768 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -214,7 +214,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 	switch_fpu_finish(next_p);
+ 
+ 	/* Load the Intel cache allocation PQR MSR. */
+-	resctrl_sched_in();
++	resctrl_sched_in(next_p);
+ 
+ 	return prev_p;
+ }
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index ad3f82a18de9d..1d8bc4736fb79 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -629,7 +629,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 	}
+ 
+ 	/* Load the Intel cache allocation PQR MSR. */
+-	resctrl_sched_in();
++	resctrl_sched_in(next_p);
+ 
+ 	return prev_p;
+ }
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index df3514835b356..4d8c0e2581500 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -528,33 +528,29 @@ static inline void kb_wait(void)
+ 	}
+ }
+ 
+-static void vmxoff_nmi(int cpu, struct pt_regs *regs)
+-{
+-	cpu_emergency_vmxoff();
+-}
++static inline void nmi_shootdown_cpus_on_restart(void);
+ 
+-/* Use NMIs as IPIs to tell all CPUs to disable virtualization */
+-static void emergency_vmx_disable_all(void)
++static void emergency_reboot_disable_virtualization(void)
+ {
+ 	/* Just make sure we won't change CPUs while doing this */
+ 	local_irq_disable();
+ 
+ 	/*
+-	 * Disable VMX on all CPUs before rebooting, otherwise we risk hanging
+-	 * the machine, because the CPU blocks INIT when it's in VMX root.
++	 * Disable virtualization on all CPUs before rebooting to avoid hanging
++	 * the system, as VMX and SVM block INIT when running in the host.
+ 	 *
+ 	 * We can't take any locks and we may be on an inconsistent state, so
+-	 * use NMIs as IPIs to tell the other CPUs to exit VMX root and halt.
++	 * use NMIs as IPIs to tell the other CPUs to disable VMX/SVM and halt.
+ 	 *
+-	 * Do the NMI shootdown even if VMX if off on _this_ CPU, as that
+-	 * doesn't prevent a different CPU from being in VMX root operation.
++	 * Do the NMI shootdown even if virtualization is off on _this_ CPU, as
++	 * other CPUs may have virtualization enabled.
+ 	 */
+-	if (cpu_has_vmx()) {
+-		/* Safely force _this_ CPU out of VMX root operation. */
+-		__cpu_emergency_vmxoff();
++	if (cpu_has_vmx() || cpu_has_svm(NULL)) {
++		/* Safely force _this_ CPU out of VMX/SVM operation. */
++		cpu_emergency_disable_virtualization();
+ 
+-		/* Halt and exit VMX root operation on the other CPUs. */
+-		nmi_shootdown_cpus(vmxoff_nmi);
++		/* Disable VMX/SVM and halt on other CPUs. */
++		nmi_shootdown_cpus_on_restart();
+ 	}
+ }
+ 
+@@ -590,7 +586,7 @@ static void native_machine_emergency_restart(void)
+ 	unsigned short mode;
+ 
+ 	if (reboot_emergency)
+-		emergency_vmx_disable_all();
++		emergency_reboot_disable_virtualization();
+ 
+ 	tboot_shutdown(TB_SHUTDOWN_REBOOT);
+ 
+@@ -795,6 +791,17 @@ void machine_crash_shutdown(struct pt_regs *regs)
+ /* This is the CPU performing the emergency shutdown work. */
+ int crashing_cpu = -1;
+ 
++/*
++ * Disable virtualization, i.e. VMX or SVM, to ensure INIT is recognized during
++ * reboot.  VMX blocks INIT if the CPU is post-VMXON, and SVM blocks INIT if
++ * GIF=0, i.e. if the crash occurred between CLGI and STGI.
++ */
++void cpu_emergency_disable_virtualization(void)
++{
++	cpu_emergency_vmxoff();
++	cpu_emergency_svm_disable();
++}
++
+ #if defined(CONFIG_SMP)
+ 
+ static nmi_shootdown_cb shootdown_callback;
+@@ -817,7 +824,14 @@ static int crash_nmi_callback(unsigned int val, struct pt_regs *regs)
+ 		return NMI_HANDLED;
+ 	local_irq_disable();
+ 
+-	shootdown_callback(cpu, regs);
++	if (shootdown_callback)
++		shootdown_callback(cpu, regs);
++
++	/*
++	 * Prepare the CPU for reboot _after_ invoking the callback so that the
++	 * callback can safely use virtualization instructions, e.g. VMCLEAR.
++	 */
++	cpu_emergency_disable_virtualization();
+ 
+ 	atomic_dec(&waiting_for_crash_ipi);
+ 	/* Assume hlt works */
+@@ -828,18 +842,32 @@ static int crash_nmi_callback(unsigned int val, struct pt_regs *regs)
+ 	return NMI_HANDLED;
+ }
+ 
+-/*
+- * Halt all other CPUs, calling the specified function on each of them
++/**
++ * nmi_shootdown_cpus - Stop other CPUs via NMI
++ * @callback:	Optional callback to be invoked from the NMI handler
++ *
++ * The NMI handler on the remote CPUs invokes @callback, if not
++ * NULL, first and then disables virtualization to ensure that
++ * INIT is recognized during reboot.
+  *
+- * This function can be used to halt all other CPUs on crash
+- * or emergency reboot time. The function passed as parameter
+- * will be called inside a NMI handler on all CPUs.
++ * nmi_shootdown_cpus() can only be invoked once. After the first
++ * invocation all other CPUs are stuck in crash_nmi_callback() and
++ * cannot respond to a second NMI.
+  */
+ void nmi_shootdown_cpus(nmi_shootdown_cb callback)
+ {
+ 	unsigned long msecs;
++
+ 	local_irq_disable();
+ 
++	/*
++	 * Avoid certain doom if a shootdown already occurred; re-registering
++	 * the NMI handler will cause list corruption, modifying the callback
++	 * will do who knows what, etc...
++	 */
++	if (WARN_ON_ONCE(crash_ipi_issued))
++		return;
++
+ 	/* Make a note of crashing cpu. Will be used in NMI callback. */
+ 	crashing_cpu = safe_smp_processor_id();
+ 
+@@ -867,7 +895,17 @@ void nmi_shootdown_cpus(nmi_shootdown_cb callback)
+ 		msecs--;
+ 	}
+ 
+-	/* Leave the nmi callback set */
++	/*
++	 * Leave the nmi callback set, shootdown is a one-time thing.  Clearing
++	 * the callback could result in a NULL pointer dereference if a CPU
++	 * (finally) responds after the timeout expires.
++	 */
++}
++
++static inline void nmi_shootdown_cpus_on_restart(void)
++{
++	if (!crash_ipi_issued)
++		nmi_shootdown_cpus(NULL);
+ }
+ 
+ /*
+@@ -897,6 +935,8 @@ void nmi_shootdown_cpus(nmi_shootdown_cb callback)
+ 	/* No other CPUs to shoot down */
+ }
+ 
++static inline void nmi_shootdown_cpus_on_restart(void) { }
++
+ void run_crash_ipi_callback(struct pt_regs *regs)
+ {
+ }
+diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
+index eff4ce3b10da7..95758ae120baf 100644
+--- a/arch/x86/kernel/smp.c
++++ b/arch/x86/kernel/smp.c
+@@ -32,7 +32,7 @@
+ #include <asm/mce.h>
+ #include <asm/trace/irq_vectors.h>
+ #include <asm/kexec.h>
+-#include <asm/virtext.h>
++#include <asm/reboot.h>
+ 
+ /*
+  *	Some notes on x86 processor bugs affecting SMP operation:
+@@ -122,7 +122,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
+ 	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
+ 		return NMI_HANDLED;
+ 
+-	cpu_emergency_vmxoff();
++	cpu_emergency_disable_virtualization();
+ 	stop_this_cpu(NULL);
+ 
+ 	return NMI_HANDLED;
+@@ -134,7 +134,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
+ DEFINE_IDTENTRY_SYSVEC(sysvec_reboot)
+ {
+ 	ack_APIC_irq();
+-	cpu_emergency_vmxoff();
++	cpu_emergency_disable_virtualization();
+ 	stop_this_cpu(NULL);
+ }
+ 
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index e8e5515fb7e9c..bda89ecc7799f 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -227,8 +227,7 @@ static void notrace start_secondary(void *unused)
+ 	load_cr3(swapper_pg_dir);
+ 	__flush_tlb_all();
+ #endif
+-	cpu_init_exception_handling();
+-	cpu_init();
++	cpu_init_secondary();
+ 	rcu_cpu_starting(raw_smp_processor_id());
+ 	x86_cpuinit.early_percpu_clock_init();
+ 	smp_callin();
+diff --git a/arch/x86/kernel/sysfb_efi.c b/arch/x86/kernel/sysfb_efi.c
+index 653b7f617b61b..9ea65611fba0b 100644
+--- a/arch/x86/kernel/sysfb_efi.c
++++ b/arch/x86/kernel/sysfb_efi.c
+@@ -264,6 +264,14 @@ static const struct dmi_system_id efifb_dmi_swap_width_height[] __initconst = {
+ 					"Lenovo ideapad D330-10IGM"),
+ 		},
+ 	},
++	{
++		/* Lenovo IdeaPad Duet 3 10IGL5 with 1200x1920 portrait screen */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_VERSION,
++					"IdeaPad Duet 3 10IGL5"),
++		},
++	},
+ 	{},
+ };
+ 
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 2a39a2df6f43e..3780c728345c3 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -1185,9 +1185,7 @@ void __init trap_init(void)
+ 
+ 	idt_setup_traps();
+ 
+-	/*
+-	 * Should be a barrier for any external CPU state:
+-	 */
++	cpu_init_exception_handling();
+ 	cpu_init();
+ 
+ 	idt_setup_ist_traps();
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 260727eaa6b96..21189804524a7 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -2115,10 +2115,14 @@ int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
+ 		break;
+ 
+ 	case APIC_SELF_IPI:
+-		if (apic_x2apic_mode(apic))
+-			kvm_apic_send_ipi(apic, APIC_DEST_SELF | (val & APIC_VECTOR_MASK), 0);
+-		else
++		/*
++		 * Self-IPI exists only when x2APIC is enabled.  Bits 7:0 hold
++		 * the vector, everything else is reserved.
++		 */
++		if (!apic_x2apic_mode(apic) || (val & ~APIC_VECTOR_MASK))
+ 			ret = 1;
++		else
++			kvm_apic_send_ipi(apic, APIC_DEST_SELF | val, 0);
+ 		break;
+ 	default:
+ 		ret = 1;
+diff --git a/arch/x86/um/vdso/um_vdso.c b/arch/x86/um/vdso/um_vdso.c
+index 2112b8d146688..ff0f3b4b6c45e 100644
+--- a/arch/x86/um/vdso/um_vdso.c
++++ b/arch/x86/um/vdso/um_vdso.c
+@@ -17,8 +17,10 @@ int __vdso_clock_gettime(clockid_t clock, struct __kernel_old_timespec *ts)
+ {
+ 	long ret;
+ 
+-	asm("syscall" : "=a" (ret) :
+-		"0" (__NR_clock_gettime), "D" (clock), "S" (ts) : "memory");
++	asm("syscall"
++		: "=a" (ret)
++		: "0" (__NR_clock_gettime), "D" (clock), "S" (ts)
++		: "rcx", "r11", "memory");
+ 
+ 	return ret;
+ }
+@@ -29,8 +31,10 @@ int __vdso_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz)
+ {
+ 	long ret;
+ 
+-	asm("syscall" : "=a" (ret) :
+-		"0" (__NR_gettimeofday), "D" (tv), "S" (tz) : "memory");
++	asm("syscall"
++		: "=a" (ret)
++		: "0" (__NR_gettimeofday), "D" (tv), "S" (tz)
++		: "rcx", "r11", "memory");
+ 
+ 	return ret;
+ }
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index 4f6f140a44e06..a4cfc97275df6 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -428,6 +428,7 @@ int bio_integrity_clone(struct bio *bio, struct bio *bio_src,
+ 
+ 	bip->bip_vcnt = bip_src->bip_vcnt;
+ 	bip->bip_iter = bip_src->bip_iter;
++	bip->bip_flags = bip_src->bip_flags & ~BIP_BLOCK_INTEGRITY;
+ 
+ 	return 0;
+ }
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index fb8f959a7f327..9255b642d6adb 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -872,9 +872,14 @@ static void calc_lcoefs(u64 bps, u64 seqiops, u64 randiops,
+ 
+ 	*page = *seqio = *randio = 0;
+ 
+-	if (bps)
+-		*page = DIV64_U64_ROUND_UP(VTIME_PER_SEC,
+-					   DIV_ROUND_UP_ULL(bps, IOC_PAGE_SIZE));
++	if (bps) {
++		u64 bps_pages = DIV_ROUND_UP_ULL(bps, IOC_PAGE_SIZE);
++
++		if (bps_pages)
++			*page = DIV64_U64_ROUND_UP(VTIME_PER_SEC, bps_pages);
++		else
++			*page = 1;
++	}
+ 
+ 	if (seqiops) {
+ 		v = DIV64_U64_ROUND_UP(VTIME_PER_SEC, seqiops);
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 72e64ba661fc7..7858c5a3535e9 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -45,8 +45,7 @@ void blk_mq_sched_assign_ioc(struct request *rq)
+ }
+ 
+ /*
+- * Mark a hardware queue as needing a restart. For shared queues, maintain
+- * a count of how many hardware queues are marked for restart.
++ * Mark a hardware queue as needing a restart.
+  */
+ void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx)
+ {
+@@ -110,7 +109,7 @@ dispatch:
+ /*
+  * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
+  * its queue by itself in its completion handler, so we don't need to
+- * restart queue if .get_budget() returns BLK_STS_NO_RESOURCE.
++ * restart queue if .get_budget() fails to get the budget.
+  *
+  * Returns -EAGAIN if hctx->dispatch was found non-empty and run_work has to
+  * be run again.  This is necessary to avoid starving flushes.
+@@ -224,7 +223,7 @@ static struct blk_mq_ctx *blk_mq_next_ctx(struct blk_mq_hw_ctx *hctx,
+ /*
+  * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
+  * its queue by itself in its completion handler, so we don't need to
+- * restart queue if .get_budget() returns BLK_STS_NO_RESOURCE.
++ * restart queue if .get_budget() fails to get the budget.
+  *
+  * Returns -EAGAIN if hctx->dispatch was found non-empty and run_work has to
+  * be run again.  This is necessary to avoid starving flushes.
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index e37ba792902af..cf66de0f00fd3 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -448,7 +448,8 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
+ 	 * allocator for this for the rare use case of a command tied to
+ 	 * a specific queue.
+ 	 */
+-	if (WARN_ON_ONCE(!(flags & (BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED))))
++	if (WARN_ON_ONCE(!(flags & BLK_MQ_REQ_NOWAIT)) ||
++	    WARN_ON_ONCE(!(flags & BLK_MQ_REQ_RESERVED)))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	if (hctx_idx >= q->nr_hw_queues)
+diff --git a/crypto/essiv.c b/crypto/essiv.c
+index d012be23d496d..85bb624e32b9b 100644
+--- a/crypto/essiv.c
++++ b/crypto/essiv.c
+@@ -170,7 +170,12 @@ static void essiv_aead_done(struct crypto_async_request *areq, int err)
+ 	struct aead_request *req = areq->data;
+ 	struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
+ 
++	if (err == -EINPROGRESS)
++		goto out;
++
+ 	kfree(rctx->assoc);
++
++out:
+ 	aead_request_complete(req, err);
+ }
+ 
+@@ -246,7 +251,7 @@ static int essiv_aead_crypt(struct aead_request *req, bool enc)
+ 	err = enc ? crypto_aead_encrypt(subreq) :
+ 		    crypto_aead_decrypt(subreq);
+ 
+-	if (rctx->assoc && err != -EINPROGRESS)
++	if (rctx->assoc && err != -EINPROGRESS && err != -EBUSY)
+ 		kfree(rctx->assoc);
+ 	return err;
+ }
+diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
+index 9d804831c8b3f..a4ebbb889274e 100644
+--- a/crypto/rsa-pkcs1pad.c
++++ b/crypto/rsa-pkcs1pad.c
+@@ -214,16 +214,14 @@ static void pkcs1pad_encrypt_sign_complete_cb(
+ 		struct crypto_async_request *child_async_req, int err)
+ {
+ 	struct akcipher_request *req = child_async_req->data;
+-	struct crypto_async_request async_req;
+ 
+ 	if (err == -EINPROGRESS)
+-		return;
++		goto out;
++
++	err = pkcs1pad_encrypt_sign_complete(req, err);
+ 
+-	async_req.data = req->base.data;
+-	async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
+-	async_req.flags = child_async_req->flags;
+-	req->base.complete(&async_req,
+-			pkcs1pad_encrypt_sign_complete(req, err));
++out:
++	akcipher_request_complete(req, err);
+ }
+ 
+ static int pkcs1pad_encrypt(struct akcipher_request *req)
+@@ -332,15 +330,14 @@ static void pkcs1pad_decrypt_complete_cb(
+ 		struct crypto_async_request *child_async_req, int err)
+ {
+ 	struct akcipher_request *req = child_async_req->data;
+-	struct crypto_async_request async_req;
+ 
+ 	if (err == -EINPROGRESS)
+-		return;
++		goto out;
++
++	err = pkcs1pad_decrypt_complete(req, err);
+ 
+-	async_req.data = req->base.data;
+-	async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
+-	async_req.flags = child_async_req->flags;
+-	req->base.complete(&async_req, pkcs1pad_decrypt_complete(req, err));
++out:
++	akcipher_request_complete(req, err);
+ }
+ 
+ static int pkcs1pad_decrypt(struct akcipher_request *req)
+@@ -512,15 +509,14 @@ static void pkcs1pad_verify_complete_cb(
+ 		struct crypto_async_request *child_async_req, int err)
+ {
+ 	struct akcipher_request *req = child_async_req->data;
+-	struct crypto_async_request async_req;
+ 
+ 	if (err == -EINPROGRESS)
+-		return;
++		goto out;
+ 
+-	async_req.data = req->base.data;
+-	async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
+-	async_req.flags = child_async_req->flags;
+-	req->base.complete(&async_req, pkcs1pad_verify_complete(req, err));
++	err = pkcs1pad_verify_complete(req, err);
++
++out:
++	akcipher_request_complete(req, err);
+ }
+ 
+ /*
+diff --git a/crypto/seqiv.c b/crypto/seqiv.c
+index 0899d527c2845..b1bcfe537daf1 100644
+--- a/crypto/seqiv.c
++++ b/crypto/seqiv.c
+@@ -23,7 +23,7 @@ static void seqiv_aead_encrypt_complete2(struct aead_request *req, int err)
+ 	struct aead_request *subreq = aead_request_ctx(req);
+ 	struct crypto_aead *geniv;
+ 
+-	if (err == -EINPROGRESS)
++	if (err == -EINPROGRESS || err == -EBUSY)
+ 		return;
+ 
+ 	if (err)
+diff --git a/crypto/xts.c b/crypto/xts.c
+index ad45b009774b1..c6a105dba38b9 100644
+--- a/crypto/xts.c
++++ b/crypto/xts.c
+@@ -202,12 +202,12 @@ static void xts_encrypt_done(struct crypto_async_request *areq, int err)
+ 	if (!err) {
+ 		struct xts_request_ctx *rctx = skcipher_request_ctx(req);
+ 
+-		rctx->subreq.base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
++		rctx->subreq.base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
+ 		err = xts_xor_tweak_post(req, true);
+ 
+ 		if (!err && unlikely(req->cryptlen % XTS_BLOCK_SIZE)) {
+ 			err = xts_cts_final(req, crypto_skcipher_encrypt);
+-			if (err == -EINPROGRESS)
++			if (err == -EINPROGRESS || err == -EBUSY)
+ 				return;
+ 		}
+ 	}
+@@ -222,12 +222,12 @@ static void xts_decrypt_done(struct crypto_async_request *areq, int err)
+ 	if (!err) {
+ 		struct xts_request_ctx *rctx = skcipher_request_ctx(req);
+ 
+-		rctx->subreq.base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
++		rctx->subreq.base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
+ 		err = xts_xor_tweak_post(req, false);
+ 
+ 		if (!err && unlikely(req->cryptlen % XTS_BLOCK_SIZE)) {
+ 			err = xts_cts_final(req, crypto_skcipher_decrypt);
+-			if (err == -EINPROGRESS)
++			if (err == -EINPROGRESS || err == -EBUSY)
+ 				return;
+ 		}
+ 	}
+diff --git a/drivers/acpi/acpica/Makefile b/drivers/acpi/acpica/Makefile
+index 59700433a96e5..f919811156b1f 100644
+--- a/drivers/acpi/acpica/Makefile
++++ b/drivers/acpi/acpica/Makefile
+@@ -3,7 +3,7 @@
+ # Makefile for ACPICA Core interpreter
+ #
+ 
+-ccflags-y			:= -Os -D_LINUX -DBUILDING_ACPICA
++ccflags-y			:= -D_LINUX -DBUILDING_ACPICA
+ ccflags-$(CONFIG_ACPI_DEBUG)	+= -DACPI_DEBUG_OUTPUT
+ 
+ # use acpi.o to put all files here into acpi.o modparam namespace
+diff --git a/drivers/acpi/acpica/hwvalid.c b/drivers/acpi/acpica/hwvalid.c
+index b2ca7dfd3fc92..0cc4de3f71d51 100644
+--- a/drivers/acpi/acpica/hwvalid.c
++++ b/drivers/acpi/acpica/hwvalid.c
+@@ -23,8 +23,8 @@ acpi_hw_validate_io_request(acpi_io_address address, u32 bit_width);
+  *
+  * The table is used to implement the Microsoft port access rules that
+  * first appeared in Windows XP. Some ports are always illegal, and some
+- * ports are only illegal if the BIOS calls _OSI with a win_XP string or
+- * later (meaning that the BIOS itelf is post-XP.)
++ * ports are only illegal if the BIOS calls _OSI with nothing newer than
++ * the specific _OSI strings.
+  *
+  * This provides ACPICA with the desired port protections and
+  * Microsoft compatibility.
+@@ -145,7 +145,8 @@ acpi_hw_validate_io_request(acpi_io_address address, u32 bit_width)
+ 
+ 			/* Port illegality may depend on the _OSI calls made by the BIOS */
+ 
+-			if (acpi_gbl_osi_data >= port_info->osi_dependency) {
++			if (port_info->osi_dependency == ACPI_ALWAYS_ILLEGAL ||
++			    acpi_gbl_osi_data == port_info->osi_dependency) {
+ 				ACPI_DEBUG_PRINT((ACPI_DB_VALUES,
+ 						  "Denied AML access to port 0x%8.8X%8.8X/%X (%s 0x%.4X-0x%.4X)\n",
+ 						  ACPI_FORMAT_UINT64(address),
+diff --git a/drivers/acpi/acpica/nsrepair.c b/drivers/acpi/acpica/nsrepair.c
+index 90db2d85e7f5c..f28d811a3724d 100644
+--- a/drivers/acpi/acpica/nsrepair.c
++++ b/drivers/acpi/acpica/nsrepair.c
+@@ -181,8 +181,9 @@ acpi_ns_simple_repair(struct acpi_evaluate_info *info,
+ 	 * Try to fix if there was no return object. Warning if failed to fix.
+ 	 */
+ 	if (!return_object) {
+-		if (expected_btypes && (!(expected_btypes & ACPI_RTYPE_NONE))) {
+-			if (package_index != ACPI_NOT_PACKAGE_ELEMENT) {
++		if (expected_btypes) {
++			if (!(expected_btypes & ACPI_RTYPE_NONE) &&
++			    package_index != ACPI_NOT_PACKAGE_ELEMENT) {
+ 				ACPI_WARN_PREDEFINED((AE_INFO,
+ 						      info->full_pathname,
+ 						      ACPI_WARN_ALWAYS,
+@@ -196,14 +197,15 @@ acpi_ns_simple_repair(struct acpi_evaluate_info *info,
+ 				if (ACPI_SUCCESS(status)) {
+ 					return (AE_OK);	/* Repair was successful */
+ 				}
+-			} else {
++			}
++
++			if (expected_btypes != ACPI_RTYPE_NONE) {
+ 				ACPI_WARN_PREDEFINED((AE_INFO,
+ 						      info->full_pathname,
+ 						      ACPI_WARN_ALWAYS,
+ 						      "Missing expected return value"));
++				return (AE_AML_NO_RETURN_VALUE);
+ 			}
+-
+-			return (AE_AML_NO_RETURN_VALUE);
+ 		}
+ 	}
+ 
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index be743d177bcbf..8b43efe97da5d 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -454,7 +454,7 @@ static int extract_package(struct acpi_battery *battery,
+ 			u8 *ptr = (u8 *)battery + offsets[i].offset;
+ 			if (element->type == ACPI_TYPE_STRING ||
+ 			    element->type == ACPI_TYPE_BUFFER)
+-				strncpy(ptr, element->string.pointer, 32);
++				strscpy(ptr, element->string.pointer, 32);
+ 			else if (element->type == ACPI_TYPE_INTEGER) {
+ 				strncpy(ptr, (u8 *)&element->integer.value,
+ 					sizeof(u64));
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index b13713199ad94..038542b3a80a7 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -313,7 +313,7 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 	 .ident = "Lenovo Ideapad Z570",
+ 	 .matches = {
+ 		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-		DMI_MATCH(DMI_PRODUCT_NAME, "102434U"),
++		DMI_MATCH(DMI_PRODUCT_VERSION, "Ideapad Z570"),
+ 		},
+ 	},
+ 	{
+diff --git a/drivers/block/brd.c b/drivers/block/brd.c
+index cc49a921339f7..11078e1663683 100644
+--- a/drivers/block/brd.c
++++ b/drivers/block/brd.c
+@@ -80,11 +80,9 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
+ }
+ 
+ /*
+- * Look up and return a brd's page for a given sector.
+- * If one does not exist, allocate an empty page, and insert that. Then
+- * return it.
++ * Insert a new page for a given sector, if one does not already exist.
+  */
+-static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
++static int brd_insert_page(struct brd_device *brd, sector_t sector)
+ {
+ 	pgoff_t idx;
+ 	struct page *page;
+@@ -92,7 +90,7 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
+ 
+ 	page = brd_lookup_page(brd, sector);
+ 	if (page)
+-		return page;
++		return 0;
+ 
+ 	/*
+ 	 * Must use NOIO because we don't want to recurse back into the
+@@ -101,11 +99,11 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
+ 	gfp_flags = GFP_NOIO | __GFP_ZERO | __GFP_HIGHMEM;
+ 	page = alloc_page(gfp_flags);
+ 	if (!page)
+-		return NULL;
++		return -ENOMEM;
+ 
+ 	if (radix_tree_preload(GFP_NOIO)) {
+ 		__free_page(page);
+-		return NULL;
++		return -ENOMEM;
+ 	}
+ 
+ 	spin_lock(&brd->brd_lock);
+@@ -120,8 +118,7 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
+ 	spin_unlock(&brd->brd_lock);
+ 
+ 	radix_tree_preload_end();
+-
+-	return page;
++	return 0;
+ }
+ 
+ /*
+@@ -174,16 +171,17 @@ static int copy_to_brd_setup(struct brd_device *brd, sector_t sector, size_t n)
+ {
+ 	unsigned int offset = (sector & (PAGE_SECTORS-1)) << SECTOR_SHIFT;
+ 	size_t copy;
++	int ret;
+ 
+ 	copy = min_t(size_t, n, PAGE_SIZE - offset);
+-	if (!brd_insert_page(brd, sector))
+-		return -ENOSPC;
++	ret = brd_insert_page(brd, sector);
++	if (ret)
++		return ret;
+ 	if (copy < n) {
+ 		sector += copy >> SECTOR_SHIFT;
+-		if (!brd_insert_page(brd, sector))
+-			return -ENOSPC;
++		ret = brd_insert_page(brd, sector);
+ 	}
+-	return 0;
++	return ret;
+ }
+ 
+ /*
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index b10410585a746..d86fbea54652a 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1029,13 +1029,13 @@ loop_set_status_from_info(struct loop_device *lo,
+ 	if (err)
+ 		return err;
+ 
++	/* Avoid assigning overflow values */
++	if (info->lo_offset > LLONG_MAX || info->lo_sizelimit > LLONG_MAX)
++		return -EOVERFLOW;
++
+ 	lo->lo_offset = info->lo_offset;
+ 	lo->lo_sizelimit = info->lo_sizelimit;
+ 
+-	/* loff_t vars have been assigned __u64 */
+-	if (lo->lo_offset < 0 || lo->lo_sizelimit < 0)
+-		return -EOVERFLOW;
+-
+ 	memcpy(lo->lo_file_name, info->lo_file_name, LO_NAME_SIZE);
+ 	memcpy(lo->lo_crypt_name, info->lo_crypt_name, LO_NAME_SIZE);
+ 	lo->lo_file_name[LO_NAME_SIZE-1] = 0;
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 340b1df365f72..932d4bb8e4035 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -5369,8 +5369,7 @@ static void rbd_dev_release(struct device *dev)
+ 		module_put(THIS_MODULE);
+ }
+ 
+-static struct rbd_device *__rbd_dev_create(struct rbd_client *rbdc,
+-					   struct rbd_spec *spec)
++static struct rbd_device *__rbd_dev_create(struct rbd_spec *spec)
+ {
+ 	struct rbd_device *rbd_dev;
+ 
+@@ -5415,9 +5414,6 @@ static struct rbd_device *__rbd_dev_create(struct rbd_client *rbdc,
+ 	rbd_dev->dev.parent = &rbd_root_dev;
+ 	device_initialize(&rbd_dev->dev);
+ 
+-	rbd_dev->rbd_client = rbdc;
+-	rbd_dev->spec = spec;
+-
+ 	return rbd_dev;
+ }
+ 
+@@ -5430,12 +5426,10 @@ static struct rbd_device *rbd_dev_create(struct rbd_client *rbdc,
+ {
+ 	struct rbd_device *rbd_dev;
+ 
+-	rbd_dev = __rbd_dev_create(rbdc, spec);
++	rbd_dev = __rbd_dev_create(spec);
+ 	if (!rbd_dev)
+ 		return NULL;
+ 
+-	rbd_dev->opts = opts;
+-
+ 	/* get an id and fill in device name */
+ 	rbd_dev->dev_id = ida_simple_get(&rbd_dev_id_ida, 0,
+ 					 minor_to_rbd_dev_id(1 << MINORBITS),
+@@ -5452,6 +5446,10 @@ static struct rbd_device *rbd_dev_create(struct rbd_client *rbdc,
+ 	/* we have a ref from do_rbd_add() */
+ 	__module_get(THIS_MODULE);
+ 
++	rbd_dev->rbd_client = rbdc;
++	rbd_dev->spec = spec;
++	rbd_dev->opts = opts;
++
+ 	dout("%s rbd_dev %p dev_id %d\n", __func__, rbd_dev, rbd_dev->dev_id);
+ 	return rbd_dev;
+ 
+@@ -6812,7 +6810,7 @@ static int rbd_dev_probe_parent(struct rbd_device *rbd_dev, int depth)
+ 		goto out_err;
+ 	}
+ 
+-	parent = __rbd_dev_create(rbd_dev->rbd_client, rbd_dev->parent_spec);
++	parent = __rbd_dev_create(rbd_dev->parent_spec);
+ 	if (!parent) {
+ 		ret = -ENOMEM;
+ 		goto out_err;
+@@ -6822,8 +6820,8 @@ static int rbd_dev_probe_parent(struct rbd_device *rbd_dev, int depth)
+ 	 * Images related by parent/child relationships always share
+ 	 * rbd_client and spec/parent_spec, so bump their refcounts.
+ 	 */
+-	__rbd_get_client(rbd_dev->rbd_client);
+-	rbd_spec_get(rbd_dev->parent_spec);
++	parent->rbd_client = __rbd_get_client(rbd_dev->rbd_client);
++	parent->spec = rbd_spec_get(rbd_dev->parent_spec);
+ 
+ 	__set_bit(RBD_DEV_FLAG_READONLY, &parent->flags);
+ 
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 3d905fda9b29a..2695ece47eb0e 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -393,6 +393,10 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_VENDOR_AND_INTERFACE_INFO(0x8087, 0xe0, 0x01, 0x01),
+ 	  .driver_info = BTUSB_IGNORE },
+ 
++	/* Realtek 8821CE Bluetooth devices */
++	{ USB_DEVICE(0x13d3, 0x3529), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++
+ 	/* Realtek 8822CE Bluetooth devices */
+ 	{ USB_DEVICE(0x0bda, 0xb00c), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 4771397495130..0f2bac24e564d 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -92,7 +92,7 @@
+ #define SSIF_WATCH_WATCHDOG_TIMEOUT	msecs_to_jiffies(250)
+ 
+ enum ssif_intf_state {
+-	SSIF_NORMAL,
++	SSIF_IDLE,
+ 	SSIF_GETTING_FLAGS,
+ 	SSIF_GETTING_EVENTS,
+ 	SSIF_CLEARING_FLAGS,
+@@ -100,8 +100,8 @@ enum ssif_intf_state {
+ 	/* FIXME - add watchdog stuff. */
+ };
+ 
+-#define SSIF_IDLE(ssif)	 ((ssif)->ssif_state == SSIF_NORMAL \
+-			  && (ssif)->curr_msg == NULL)
++#define IS_SSIF_IDLE(ssif) ((ssif)->ssif_state == SSIF_IDLE \
++			    && (ssif)->curr_msg == NULL)
+ 
+ /*
+  * Indexes into stats[] in ssif_info below.
+@@ -348,9 +348,9 @@ static void return_hosed_msg(struct ssif_info *ssif_info,
+ 
+ /*
+  * Must be called with the message lock held.  This will release the
+- * message lock.  Note that the caller will check SSIF_IDLE and start a
+- * new operation, so there is no need to check for new messages to
+- * start in here.
++ * message lock.  Note that the caller will check IS_SSIF_IDLE and
++ * start a new operation, so there is no need to check for new
++ * messages to start in here.
+  */
+ static void start_clear_flags(struct ssif_info *ssif_info, unsigned long *flags)
+ {
+@@ -367,7 +367,7 @@ static void start_clear_flags(struct ssif_info *ssif_info, unsigned long *flags)
+ 
+ 	if (start_send(ssif_info, msg, 3) != 0) {
+ 		/* Error, just go to normal state. */
+-		ssif_info->ssif_state = SSIF_NORMAL;
++		ssif_info->ssif_state = SSIF_IDLE;
+ 	}
+ }
+ 
+@@ -382,7 +382,7 @@ static void start_flag_fetch(struct ssif_info *ssif_info, unsigned long *flags)
+ 	mb[0] = (IPMI_NETFN_APP_REQUEST << 2);
+ 	mb[1] = IPMI_GET_MSG_FLAGS_CMD;
+ 	if (start_send(ssif_info, mb, 2) != 0)
+-		ssif_info->ssif_state = SSIF_NORMAL;
++		ssif_info->ssif_state = SSIF_IDLE;
+ }
+ 
+ static void check_start_send(struct ssif_info *ssif_info, unsigned long *flags,
+@@ -393,7 +393,7 @@ static void check_start_send(struct ssif_info *ssif_info, unsigned long *flags,
+ 
+ 		flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
+ 		ssif_info->curr_msg = NULL;
+-		ssif_info->ssif_state = SSIF_NORMAL;
++		ssif_info->ssif_state = SSIF_IDLE;
+ 		ipmi_ssif_unlock_cond(ssif_info, flags);
+ 		ipmi_free_smi_msg(msg);
+ 	}
+@@ -407,7 +407,7 @@ static void start_event_fetch(struct ssif_info *ssif_info, unsigned long *flags)
+ 
+ 	msg = ipmi_alloc_smi_msg();
+ 	if (!msg) {
+-		ssif_info->ssif_state = SSIF_NORMAL;
++		ssif_info->ssif_state = SSIF_IDLE;
+ 		ipmi_ssif_unlock_cond(ssif_info, flags);
+ 		return;
+ 	}
+@@ -430,7 +430,7 @@ static void start_recv_msg_fetch(struct ssif_info *ssif_info,
+ 
+ 	msg = ipmi_alloc_smi_msg();
+ 	if (!msg) {
+-		ssif_info->ssif_state = SSIF_NORMAL;
++		ssif_info->ssif_state = SSIF_IDLE;
+ 		ipmi_ssif_unlock_cond(ssif_info, flags);
+ 		return;
+ 	}
+@@ -448,9 +448,9 @@ static void start_recv_msg_fetch(struct ssif_info *ssif_info,
+ 
+ /*
+  * Must be called with the message lock held.  This will release the
+- * message lock.  Note that the caller will check SSIF_IDLE and start a
+- * new operation, so there is no need to check for new messages to
+- * start in here.
++ * message lock.  Note that the caller will check IS_SSIF_IDLE and
++ * start a new operation, so there is no need to check for new
++ * messages to start in here.
+  */
+ static void handle_flags(struct ssif_info *ssif_info, unsigned long *flags)
+ {
+@@ -466,7 +466,7 @@ static void handle_flags(struct ssif_info *ssif_info, unsigned long *flags)
+ 		/* Events available. */
+ 		start_event_fetch(ssif_info, flags);
+ 	else {
+-		ssif_info->ssif_state = SSIF_NORMAL;
++		ssif_info->ssif_state = SSIF_IDLE;
+ 		ipmi_ssif_unlock_cond(ssif_info, flags);
+ 	}
+ }
+@@ -579,7 +579,7 @@ static void watch_timeout(struct timer_list *t)
+ 	if (ssif_info->watch_timeout) {
+ 		mod_timer(&ssif_info->watch_timer,
+ 			  jiffies + ssif_info->watch_timeout);
+-		if (SSIF_IDLE(ssif_info)) {
++		if (IS_SSIF_IDLE(ssif_info)) {
+ 			start_flag_fetch(ssif_info, flags); /* Releases lock */
+ 			return;
+ 		}
+@@ -782,7 +782,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 	}
+ 
+ 	switch (ssif_info->ssif_state) {
+-	case SSIF_NORMAL:
++	case SSIF_IDLE:
+ 		ipmi_ssif_unlock_cond(ssif_info, flags);
+ 		if (!msg)
+ 			break;
+@@ -800,7 +800,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 			 * Error fetching flags, or invalid length,
+ 			 * just give up for now.
+ 			 */
+-			ssif_info->ssif_state = SSIF_NORMAL;
++			ssif_info->ssif_state = SSIF_IDLE;
+ 			ipmi_ssif_unlock_cond(ssif_info, flags);
+ 			dev_warn(&ssif_info->client->dev,
+ 				 "Error getting flags: %d %d, %x\n",
+@@ -835,7 +835,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 				 "Invalid response clearing flags: %x %x\n",
+ 				 data[0], data[1]);
+ 		}
+-		ssif_info->ssif_state = SSIF_NORMAL;
++		ssif_info->ssif_state = SSIF_IDLE;
+ 		ipmi_ssif_unlock_cond(ssif_info, flags);
+ 		break;
+ 
+@@ -913,7 +913,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 	}
+ 
+ 	flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
+-	if (SSIF_IDLE(ssif_info) && !ssif_info->stopping) {
++	if (IS_SSIF_IDLE(ssif_info) && !ssif_info->stopping) {
+ 		if (ssif_info->req_events)
+ 			start_event_fetch(ssif_info, flags);
+ 		else if (ssif_info->req_flags)
+@@ -1087,7 +1087,7 @@ static void start_next_msg(struct ssif_info *ssif_info, unsigned long *flags)
+ 	unsigned long oflags;
+ 
+  restart:
+-	if (!SSIF_IDLE(ssif_info)) {
++	if (!IS_SSIF_IDLE(ssif_info)) {
+ 		ipmi_ssif_unlock_cond(ssif_info, flags);
+ 		return;
+ 	}
+@@ -1310,7 +1310,7 @@ static void shutdown_ssif(void *send_info)
+ 	dev_set_drvdata(&ssif_info->client->dev, NULL);
+ 
+ 	/* make sure the driver is not looking for flags any more. */
+-	while (ssif_info->ssif_state != SSIF_NORMAL)
++	while (ssif_info->ssif_state != SSIF_IDLE)
+ 		schedule_timeout(1);
+ 
+ 	ssif_info->stopping = true;
+@@ -1882,7 +1882,7 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 	}
+ 
+ 	spin_lock_init(&ssif_info->lock);
+-	ssif_info->ssif_state = SSIF_NORMAL;
++	ssif_info->ssif_state = SSIF_IDLE;
+ 	timer_setup(&ssif_info->retry_timer, retry_timeout, 0);
+ 	timer_setup(&ssif_info->watch_timer, watch_timeout, 0);
+ 
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index b355d3d40f63a..3575afe16a574 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -251,6 +251,17 @@ static bool clk_core_is_enabled(struct clk_core *core)
+ 		}
+ 	}
+ 
++	/*
++	 * This could be called with the enable lock held, or from atomic
++	 * context. If the parent isn't enabled already, we can't do
++	 * anything here. We can also assume this clock isn't enabled.
++	 */
++	if ((core->flags & CLK_OPS_PARENT_ENABLE) && core->parent)
++		if (!clk_core_is_enabled(core->parent)) {
++			ret = false;
++			goto done;
++		}
++
+ 	ret = core->ops->is_enabled(core->hw);
+ done:
+ 	if (core->rpm_enabled)
+diff --git a/drivers/clk/imx/clk.c b/drivers/clk/imx/clk.c
+index 7cc669934253a..d4cf0c7045ab2 100644
+--- a/drivers/clk/imx/clk.c
++++ b/drivers/clk/imx/clk.c
+@@ -201,9 +201,10 @@ static int __init imx_clk_disable_uart(void)
+ 			clk_disable_unprepare(imx_uart_clocks[i]);
+ 			clk_put(imx_uart_clocks[i]);
+ 		}
+-		kfree(imx_uart_clocks);
+ 	}
+ 
++	kfree(imx_uart_clocks);
++
+ 	return 0;
+ }
+ late_initcall_sync(imx_clk_disable_uart);
+diff --git a/drivers/clk/qcom/gcc-qcs404.c b/drivers/clk/qcom/gcc-qcs404.c
+index 46d314d692505..a7a9884799cd3 100644
+--- a/drivers/clk/qcom/gcc-qcs404.c
++++ b/drivers/clk/qcom/gcc-qcs404.c
+@@ -25,11 +25,9 @@ enum {
+ 	P_CORE_BI_PLL_TEST_SE,
+ 	P_DSI0_PHY_PLL_OUT_BYTECLK,
+ 	P_DSI0_PHY_PLL_OUT_DSICLK,
+-	P_GPLL0_OUT_AUX,
+ 	P_GPLL0_OUT_MAIN,
+ 	P_GPLL1_OUT_MAIN,
+ 	P_GPLL3_OUT_MAIN,
+-	P_GPLL4_OUT_AUX,
+ 	P_GPLL4_OUT_MAIN,
+ 	P_GPLL6_OUT_AUX,
+ 	P_HDMI_PHY_PLL_CLK,
+@@ -109,28 +107,24 @@ static const char * const gcc_parent_names_4[] = {
+ static const struct parent_map gcc_parent_map_5[] = {
+ 	{ P_XO, 0 },
+ 	{ P_DSI0_PHY_PLL_OUT_BYTECLK, 1 },
+-	{ P_GPLL0_OUT_AUX, 2 },
+ 	{ P_CORE_BI_PLL_TEST_SE, 7 },
+ };
+ 
+ static const char * const gcc_parent_names_5[] = {
+ 	"cxo",
+-	"dsi0pll_byteclk_src",
+-	"gpll0_out_aux",
++	"dsi0pllbyte",
+ 	"core_bi_pll_test_se",
+ };
+ 
+ static const struct parent_map gcc_parent_map_6[] = {
+ 	{ P_XO, 0 },
+ 	{ P_DSI0_PHY_PLL_OUT_BYTECLK, 2 },
+-	{ P_GPLL0_OUT_AUX, 3 },
+ 	{ P_CORE_BI_PLL_TEST_SE, 7 },
+ };
+ 
+ static const char * const gcc_parent_names_6[] = {
+ 	"cxo",
+-	"dsi0_phy_pll_out_byteclk",
+-	"gpll0_out_aux",
++	"dsi0pllbyte",
+ 	"core_bi_pll_test_se",
+ };
+ 
+@@ -139,7 +133,6 @@ static const struct parent_map gcc_parent_map_7[] = {
+ 	{ P_GPLL0_OUT_MAIN, 1 },
+ 	{ P_GPLL3_OUT_MAIN, 2 },
+ 	{ P_GPLL6_OUT_AUX, 3 },
+-	{ P_GPLL4_OUT_AUX, 4 },
+ 	{ P_CORE_BI_PLL_TEST_SE, 7 },
+ };
+ 
+@@ -148,7 +141,6 @@ static const char * const gcc_parent_names_7[] = {
+ 	"gpll0_out_main",
+ 	"gpll3_out_main",
+ 	"gpll6_out_aux",
+-	"gpll4_out_aux",
+ 	"core_bi_pll_test_se",
+ };
+ 
+@@ -175,7 +167,7 @@ static const struct parent_map gcc_parent_map_9[] = {
+ static const char * const gcc_parent_names_9[] = {
+ 	"cxo",
+ 	"gpll0_out_main",
+-	"dsi0_phy_pll_out_dsiclk",
++	"dsi0pll",
+ 	"gpll6_out_aux",
+ 	"core_bi_pll_test_se",
+ };
+@@ -207,14 +199,12 @@ static const char * const gcc_parent_names_11[] = {
+ static const struct parent_map gcc_parent_map_12[] = {
+ 	{ P_XO, 0 },
+ 	{ P_DSI0_PHY_PLL_OUT_DSICLK, 1 },
+-	{ P_GPLL0_OUT_AUX, 2 },
+ 	{ P_CORE_BI_PLL_TEST_SE, 7 },
+ };
+ 
+ static const char * const gcc_parent_names_12[] = {
+ 	"cxo",
+-	"dsi0pll_pclk_src",
+-	"gpll0_out_aux",
++	"dsi0pll",
+ 	"core_bi_pll_test_se",
+ };
+ 
+@@ -237,40 +227,34 @@ static const char * const gcc_parent_names_13[] = {
+ static const struct parent_map gcc_parent_map_14[] = {
+ 	{ P_XO, 0 },
+ 	{ P_GPLL0_OUT_MAIN, 1 },
+-	{ P_GPLL4_OUT_AUX, 2 },
+ 	{ P_CORE_BI_PLL_TEST_SE, 7 },
+ };
+ 
+ static const char * const gcc_parent_names_14[] = {
+ 	"cxo",
+ 	"gpll0_out_main",
+-	"gpll4_out_aux",
+ 	"core_bi_pll_test_se",
+ };
+ 
+ static const struct parent_map gcc_parent_map_15[] = {
+ 	{ P_XO, 0 },
+-	{ P_GPLL0_OUT_AUX, 2 },
+ 	{ P_CORE_BI_PLL_TEST_SE, 7 },
+ };
+ 
+ static const char * const gcc_parent_names_15[] = {
+ 	"cxo",
+-	"gpll0_out_aux",
+ 	"core_bi_pll_test_se",
+ };
+ 
+ static const struct parent_map gcc_parent_map_16[] = {
+ 	{ P_XO, 0 },
+ 	{ P_GPLL0_OUT_MAIN, 1 },
+-	{ P_GPLL0_OUT_AUX, 2 },
+ 	{ P_CORE_BI_PLL_TEST_SE, 7 },
+ };
+ 
+ static const char * const gcc_parent_names_16[] = {
+ 	"cxo",
+ 	"gpll0_out_main",
+-	"gpll0_out_aux",
+ 	"core_bi_pll_test_se",
+ };
+ 
+diff --git a/drivers/clk/qcom/gpucc-sc7180.c b/drivers/clk/qcom/gpucc-sc7180.c
+index 88a739b6fec39..c51114e7e1e66 100644
+--- a/drivers/clk/qcom/gpucc-sc7180.c
++++ b/drivers/clk/qcom/gpucc-sc7180.c
+@@ -21,8 +21,6 @@
+ #define CX_GMU_CBCR_SLEEP_SHIFT		4
+ #define CX_GMU_CBCR_WAKE_MASK		0xF
+ #define CX_GMU_CBCR_WAKE_SHIFT		8
+-#define CLK_DIS_WAIT_SHIFT		12
+-#define CLK_DIS_WAIT_MASK		(0xf << CLK_DIS_WAIT_SHIFT)
+ 
+ enum {
+ 	P_BI_TCXO,
+@@ -163,6 +161,7 @@ static struct clk_branch gpu_cc_cxo_clk = {
+ static struct gdsc cx_gdsc = {
+ 	.gdscr = 0x106c,
+ 	.gds_hw_ctrl = 0x1540,
++	.clk_dis_wait_val = 8,
+ 	.pd = {
+ 		.name = "cx_gdsc",
+ 	},
+@@ -245,10 +244,6 @@ static int gpu_cc_sc7180_probe(struct platform_device *pdev)
+ 	value = 0xF << CX_GMU_CBCR_WAKE_SHIFT | 0xF << CX_GMU_CBCR_SLEEP_SHIFT;
+ 	regmap_update_bits(regmap, 0x1098, mask, value);
+ 
+-	/* Configure clk_dis_wait for gpu_cx_gdsc */
+-	regmap_update_bits(regmap, 0x106c, CLK_DIS_WAIT_MASK,
+-						8 << CLK_DIS_WAIT_SHIFT);
+-
+ 	return qcom_cc_really_probe(pdev, &gpu_cc_sc7180_desc, regmap);
+ }
+ 
+diff --git a/drivers/clk/qcom/gpucc-sdm845.c b/drivers/clk/qcom/gpucc-sdm845.c
+index 5663698b306b9..658c6ac700e1e 100644
+--- a/drivers/clk/qcom/gpucc-sdm845.c
++++ b/drivers/clk/qcom/gpucc-sdm845.c
+@@ -22,8 +22,6 @@
+ #define CX_GMU_CBCR_SLEEP_SHIFT		4
+ #define CX_GMU_CBCR_WAKE_MASK		0xf
+ #define CX_GMU_CBCR_WAKE_SHIFT		8
+-#define CLK_DIS_WAIT_SHIFT		12
+-#define CLK_DIS_WAIT_MASK		(0xf << CLK_DIS_WAIT_SHIFT)
+ 
+ enum {
+ 	P_BI_TCXO,
+@@ -124,6 +122,7 @@ static struct clk_branch gpu_cc_cxo_clk = {
+ static struct gdsc gpu_cx_gdsc = {
+ 	.gdscr = 0x106c,
+ 	.gds_hw_ctrl = 0x1540,
++	.clk_dis_wait_val = 0x8,
+ 	.pd = {
+ 		.name = "gpu_cx_gdsc",
+ 	},
+@@ -196,10 +195,6 @@ static int gpu_cc_sdm845_probe(struct platform_device *pdev)
+ 	value = 0xf << CX_GMU_CBCR_WAKE_SHIFT | 0xf << CX_GMU_CBCR_SLEEP_SHIFT;
+ 	regmap_update_bits(regmap, 0x1098, mask, value);
+ 
+-	/* Configure clk_dis_wait for gpu_cx_gdsc */
+-	regmap_update_bits(regmap, 0x106c, CLK_DIS_WAIT_MASK,
+-						8 << CLK_DIS_WAIT_SHIFT);
+-
+ 	return qcom_cc_really_probe(pdev, &gpu_cc_sdm845_desc, regmap);
+ }
+ 
+diff --git a/drivers/clk/renesas/renesas-cpg-mssr.c b/drivers/clk/renesas/renesas-cpg-mssr.c
+index 94db883703377..a5a68e1e75490 100644
+--- a/drivers/clk/renesas/renesas-cpg-mssr.c
++++ b/drivers/clk/renesas/renesas-cpg-mssr.c
+@@ -914,9 +914,8 @@ static int cpg_mssr_resume_noirq(struct device *dev)
+ 		}
+ 
+ 		if (!i)
+-			dev_warn(dev, "Failed to enable %s%u[0x%x]\n",
+-				 priv->reg_layout == CLK_REG_LAYOUT_RZ_A ?
+-				 "STB" : "SMSTP", reg, oldval & mask);
++			dev_warn(dev, "Failed to enable SMSTP%u[0x%x]\n", reg,
++				 oldval & mask);
+ 	}
+ 
+ 	return 0;
+@@ -960,7 +959,6 @@ static int __init cpg_mssr_common_init(struct device *dev,
+ 		goto out_err;
+ 	}
+ 
+-	cpg_mssr_priv = priv;
+ 	priv->num_core_clks = info->num_total_core_clks;
+ 	priv->num_mod_clks = info->num_hw_mod_clks;
+ 	priv->last_dt_core_clk = info->last_dt_core_clk;
+@@ -990,6 +988,8 @@ static int __init cpg_mssr_common_init(struct device *dev,
+ 	if (error)
+ 		goto out_err;
+ 
++	cpg_mssr_priv = priv;
++
+ 	return 0;
+ 
+ out_err:
+diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c
+index 2e3690f65786d..6d05ac0c05134 100644
+--- a/drivers/crypto/amcc/crypto4xx_core.c
++++ b/drivers/crypto/amcc/crypto4xx_core.c
+@@ -522,7 +522,6 @@ static void crypto4xx_cipher_done(struct crypto4xx_device *dev,
+ {
+ 	struct skcipher_request *req;
+ 	struct scatterlist *dst;
+-	dma_addr_t addr;
+ 
+ 	req = skcipher_request_cast(pd_uinfo->async_req);
+ 
+@@ -531,8 +530,8 @@ static void crypto4xx_cipher_done(struct crypto4xx_device *dev,
+ 					  req->cryptlen, req->dst);
+ 	} else {
+ 		dst = pd_uinfo->dest_va;
+-		addr = dma_map_page(dev->core_dev->device, sg_page(dst),
+-				    dst->offset, dst->length, DMA_FROM_DEVICE);
++		dma_unmap_page(dev->core_dev->device, pd->dest, dst->length,
++			       DMA_FROM_DEVICE);
+ 	}
+ 
+ 	if (pd_uinfo->sa_va->sa_command_0.bf.save_iv == SA_SAVE_IV) {
+@@ -557,10 +556,9 @@ static void crypto4xx_ahash_done(struct crypto4xx_device *dev,
+ 	struct ahash_request *ahash_req;
+ 
+ 	ahash_req = ahash_request_cast(pd_uinfo->async_req);
+-	ctx  = crypto_tfm_ctx(ahash_req->base.tfm);
++	ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(ahash_req));
+ 
+-	crypto4xx_copy_digest_to_dst(ahash_req->result, pd_uinfo,
+-				     crypto_tfm_ctx(ahash_req->base.tfm));
++	crypto4xx_copy_digest_to_dst(ahash_req->result, pd_uinfo, ctx);
+ 	crypto4xx_ret_sg_desc(dev, pd_uinfo);
+ 
+ 	if (pd_uinfo->state & PD_ENTRY_BUSY)
+diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
+index b9299defb431d..e416456b2b8aa 100644
+--- a/drivers/crypto/ccp/ccp-dmaengine.c
++++ b/drivers/crypto/ccp/ccp-dmaengine.c
+@@ -643,14 +643,26 @@ static void ccp_dma_release(struct ccp_device *ccp)
+ 		chan = ccp->ccp_dma_chan + i;
+ 		dma_chan = &chan->dma_chan;
+ 
+-		if (dma_chan->client_count)
+-			dma_release_channel(dma_chan);
+-
+ 		tasklet_kill(&chan->cleanup_tasklet);
+ 		list_del_rcu(&dma_chan->device_node);
+ 	}
+ }
+ 
++static void ccp_dma_release_channels(struct ccp_device *ccp)
++{
++	struct ccp_dma_chan *chan;
++	struct dma_chan *dma_chan;
++	unsigned int i;
++
++	for (i = 0; i < ccp->cmd_q_count; i++) {
++		chan = ccp->ccp_dma_chan + i;
++		dma_chan = &chan->dma_chan;
++
++		if (dma_chan->client_count)
++			dma_release_channel(dma_chan);
++	}
++}
++
+ int ccp_dmaengine_register(struct ccp_device *ccp)
+ {
+ 	struct ccp_dma_chan *chan;
+@@ -771,8 +783,9 @@ void ccp_dmaengine_unregister(struct ccp_device *ccp)
+ 	if (!dmaengine)
+ 		return;
+ 
+-	ccp_dma_release(ccp);
++	ccp_dma_release_channels(ccp);
+ 	dma_async_device_unregister(dma_dev);
++	ccp_dma_release(ccp);
+ 
+ 	kmem_cache_destroy(ccp->dma_desc_cache);
+ 	kmem_cache_destroy(ccp->dma_cmd_cache);
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index ed39a22e1b2b9..856d867f46ebb 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -23,6 +23,7 @@
+ #include <linux/gfp.h>
+ 
+ #include <asm/smp.h>
++#include <asm/cacheflush.h>
+ 
+ #include "psp-dev.h"
+ #include "sev-dev.h"
+@@ -138,6 +139,17 @@ static int sev_cmd_buffer_len(int cmd)
+ 	return 0;
+ }
+ 
++static void *sev_fw_alloc(unsigned long len)
++{
++	struct page *page;
++
++	page = alloc_pages(GFP_KERNEL, get_order(len));
++	if (!page)
++		return NULL;
++
++	return page_address(page);
++}
++
+ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ {
+ 	struct psp_device *psp = psp_master;
+@@ -304,15 +316,14 @@ static int sev_platform_shutdown(int *error)
+ 
+ static int sev_get_platform_state(int *state, int *error)
+ {
+-	struct sev_device *sev = psp_master->sev_data;
++	struct sev_user_data_status data;
+ 	int rc;
+ 
+-	rc = __sev_do_cmd_locked(SEV_CMD_PLATFORM_STATUS,
+-				 &sev->status_cmd_buf, error);
++	rc = __sev_do_cmd_locked(SEV_CMD_PLATFORM_STATUS, &data, error);
+ 	if (rc)
+ 		return rc;
+ 
+-	*state = sev->status_cmd_buf.state;
++	*state = data.state;
+ 	return rc;
+ }
+ 
+@@ -350,15 +361,16 @@ static int sev_ioctl_do_reset(struct sev_issue_cmd *argp, bool writable)
+ 
+ static int sev_ioctl_do_platform_status(struct sev_issue_cmd *argp)
+ {
+-	struct sev_device *sev = psp_master->sev_data;
+-	struct sev_user_data_status *data = &sev->status_cmd_buf;
++	struct sev_user_data_status data;
+ 	int ret;
+ 
+-	ret = __sev_do_cmd_locked(SEV_CMD_PLATFORM_STATUS, data, &argp->error);
++	memset(&data, 0, sizeof(data));
++
++	ret = __sev_do_cmd_locked(SEV_CMD_PLATFORM_STATUS, &data, &argp->error);
+ 	if (ret)
+ 		return ret;
+ 
+-	if (copy_to_user((void __user *)argp->data, data, sizeof(*data)))
++	if (copy_to_user((void __user *)argp->data, &data, sizeof(data)))
+ 		ret = -EFAULT;
+ 
+ 	return ret;
+@@ -385,7 +397,7 @@ static int sev_ioctl_do_pek_csr(struct sev_issue_cmd *argp, bool writable)
+ {
+ 	struct sev_device *sev = psp_master->sev_data;
+ 	struct sev_user_data_pek_csr input;
+-	struct sev_data_pek_csr *data;
++	struct sev_data_pek_csr data;
+ 	void __user *input_address;
+ 	void *blob = NULL;
+ 	int ret;
+@@ -396,9 +408,7 @@ static int sev_ioctl_do_pek_csr(struct sev_issue_cmd *argp, bool writable)
+ 	if (copy_from_user(&input, (void __user *)argp->data, sizeof(input)))
+ 		return -EFAULT;
+ 
+-	data = kzalloc(sizeof(*data), GFP_KERNEL);
+-	if (!data)
+-		return -ENOMEM;
++	memset(&data, 0, sizeof(data));
+ 
+ 	/* userspace wants to query CSR length */
+ 	if (!input.address || !input.length)
+@@ -406,19 +416,15 @@ static int sev_ioctl_do_pek_csr(struct sev_issue_cmd *argp, bool writable)
+ 
+ 	/* allocate a physically contiguous buffer to store the CSR blob */
+ 	input_address = (void __user *)input.address;
+-	if (input.length > SEV_FW_BLOB_MAX_SIZE) {
+-		ret = -EFAULT;
+-		goto e_free;
+-	}
++	if (input.length > SEV_FW_BLOB_MAX_SIZE)
++		return -EFAULT;
+ 
+-	blob = kmalloc(input.length, GFP_KERNEL);
+-	if (!blob) {
+-		ret = -ENOMEM;
+-		goto e_free;
+-	}
++	blob = kzalloc(input.length, GFP_KERNEL);
++	if (!blob)
++		return -ENOMEM;
+ 
+-	data->address = __psp_pa(blob);
+-	data->len = input.length;
++	data.address = __psp_pa(blob);
++	data.len = input.length;
+ 
+ cmd:
+ 	if (sev->state == SEV_STATE_UNINIT) {
+@@ -427,10 +433,10 @@ cmd:
+ 			goto e_free_blob;
+ 	}
+ 
+-	ret = __sev_do_cmd_locked(SEV_CMD_PEK_CSR, data, &argp->error);
++	ret = __sev_do_cmd_locked(SEV_CMD_PEK_CSR, &data, &argp->error);
+ 
+ 	 /* If we query the CSR length, FW responded with expected data. */
+-	input.length = data->len;
++	input.length = data.len;
+ 
+ 	if (copy_to_user((void __user *)argp->data, &input, sizeof(input))) {
+ 		ret = -EFAULT;
+@@ -444,8 +450,6 @@ cmd:
+ 
+ e_free_blob:
+ 	kfree(blob);
+-e_free:
+-	kfree(data);
+ 	return ret;
+ }
+ 
+@@ -465,21 +469,20 @@ EXPORT_SYMBOL_GPL(psp_copy_user_blob);
+ static int sev_get_api_version(void)
+ {
+ 	struct sev_device *sev = psp_master->sev_data;
+-	struct sev_user_data_status *status;
++	struct sev_user_data_status status;
+ 	int error = 0, ret;
+ 
+-	status = &sev->status_cmd_buf;
+-	ret = sev_platform_status(status, &error);
++	ret = sev_platform_status(&status, &error);
+ 	if (ret) {
+ 		dev_err(sev->dev,
+ 			"SEV: failed to get status. Error: %#x\n", error);
+ 		return 1;
+ 	}
+ 
+-	sev->api_major = status->api_major;
+-	sev->api_minor = status->api_minor;
+-	sev->build = status->build;
+-	sev->state = status->state;
++	sev->api_major = status.api_major;
++	sev->api_minor = status.api_minor;
++	sev->build = status.build;
++	sev->state = status.state;
+ 
+ 	return 0;
+ }
+@@ -577,7 +580,7 @@ static int sev_ioctl_do_pek_import(struct sev_issue_cmd *argp, bool writable)
+ {
+ 	struct sev_device *sev = psp_master->sev_data;
+ 	struct sev_user_data_pek_cert_import input;
+-	struct sev_data_pek_cert_import *data;
++	struct sev_data_pek_cert_import data;
+ 	void *pek_blob, *oca_blob;
+ 	int ret;
+ 
+@@ -587,19 +590,14 @@ static int sev_ioctl_do_pek_import(struct sev_issue_cmd *argp, bool writable)
+ 	if (copy_from_user(&input, (void __user *)argp->data, sizeof(input)))
+ 		return -EFAULT;
+ 
+-	data = kzalloc(sizeof(*data), GFP_KERNEL);
+-	if (!data)
+-		return -ENOMEM;
+-
+ 	/* copy PEK certificate blobs from userspace */
+ 	pek_blob = psp_copy_user_blob(input.pek_cert_address, input.pek_cert_len);
+-	if (IS_ERR(pek_blob)) {
+-		ret = PTR_ERR(pek_blob);
+-		goto e_free;
+-	}
++	if (IS_ERR(pek_blob))
++		return PTR_ERR(pek_blob);
+ 
+-	data->pek_cert_address = __psp_pa(pek_blob);
+-	data->pek_cert_len = input.pek_cert_len;
++	data.reserved = 0;
++	data.pek_cert_address = __psp_pa(pek_blob);
++	data.pek_cert_len = input.pek_cert_len;
+ 
+ 	/* copy PEK certificate blobs from userspace */
+ 	oca_blob = psp_copy_user_blob(input.oca_cert_address, input.oca_cert_len);
+@@ -608,8 +606,8 @@ static int sev_ioctl_do_pek_import(struct sev_issue_cmd *argp, bool writable)
+ 		goto e_free_pek;
+ 	}
+ 
+-	data->oca_cert_address = __psp_pa(oca_blob);
+-	data->oca_cert_len = input.oca_cert_len;
++	data.oca_cert_address = __psp_pa(oca_blob);
++	data.oca_cert_len = input.oca_cert_len;
+ 
+ 	/* If platform is not in INIT state then transition it to INIT */
+ 	if (sev->state != SEV_STATE_INIT) {
+@@ -618,21 +616,19 @@ static int sev_ioctl_do_pek_import(struct sev_issue_cmd *argp, bool writable)
+ 			goto e_free_oca;
+ 	}
+ 
+-	ret = __sev_do_cmd_locked(SEV_CMD_PEK_CERT_IMPORT, data, &argp->error);
++	ret = __sev_do_cmd_locked(SEV_CMD_PEK_CERT_IMPORT, &data, &argp->error);
+ 
+ e_free_oca:
+ 	kfree(oca_blob);
+ e_free_pek:
+ 	kfree(pek_blob);
+-e_free:
+-	kfree(data);
+ 	return ret;
+ }
+ 
+ static int sev_ioctl_do_get_id2(struct sev_issue_cmd *argp)
+ {
+ 	struct sev_user_data_get_id2 input;
+-	struct sev_data_get_id *data;
++	struct sev_data_get_id data;
+ 	void __user *input_address;
+ 	void *id_blob = NULL;
+ 	int ret;
+@@ -646,28 +642,32 @@ static int sev_ioctl_do_get_id2(struct sev_issue_cmd *argp)
+ 
+ 	input_address = (void __user *)input.address;
+ 
+-	data = kzalloc(sizeof(*data), GFP_KERNEL);
+-	if (!data)
+-		return -ENOMEM;
+-
+ 	if (input.address && input.length) {
+-		id_blob = kmalloc(input.length, GFP_KERNEL);
+-		if (!id_blob) {
+-			kfree(data);
++		/*
++		 * The length of the ID shouldn't be assumed by software since
++		 * it may change in the future.  The allocation size is limited
++		 * to 1 << (PAGE_SHIFT + MAX_ORDER - 1) by the page allocator.
++		 * If the allocation fails, simply return ENOMEM rather than
++		 * warning in the kernel log.
++		 */
++		id_blob = kzalloc(input.length, GFP_KERNEL | __GFP_NOWARN);
++		if (!id_blob)
+ 			return -ENOMEM;
+-		}
+ 
+-		data->address = __psp_pa(id_blob);
+-		data->len = input.length;
++		data.address = __psp_pa(id_blob);
++		data.len = input.length;
++	} else {
++		data.address = 0;
++		data.len = 0;
+ 	}
+ 
+-	ret = __sev_do_cmd_locked(SEV_CMD_GET_ID, data, &argp->error);
++	ret = __sev_do_cmd_locked(SEV_CMD_GET_ID, &data, &argp->error);
+ 
+ 	/*
+ 	 * Firmware will return the length of the ID value (either the minimum
+ 	 * required length or the actual length written), return it to the user.
+ 	 */
+-	input.length = data->len;
++	input.length = data.len;
+ 
+ 	if (copy_to_user((void __user *)argp->data, &input, sizeof(input))) {
+ 		ret = -EFAULT;
+@@ -675,7 +675,7 @@ static int sev_ioctl_do_get_id2(struct sev_issue_cmd *argp)
+ 	}
+ 
+ 	if (id_blob) {
+-		if (copy_to_user(input_address, id_blob, data->len)) {
++		if (copy_to_user(input_address, id_blob, data.len)) {
+ 			ret = -EFAULT;
+ 			goto e_free;
+ 		}
+@@ -683,7 +683,6 @@ static int sev_ioctl_do_get_id2(struct sev_issue_cmd *argp)
+ 
+ e_free:
+ 	kfree(id_blob);
+-	kfree(data);
+ 
+ 	return ret;
+ }
+@@ -733,7 +732,7 @@ static int sev_ioctl_do_pdh_export(struct sev_issue_cmd *argp, bool writable)
+ 	struct sev_device *sev = psp_master->sev_data;
+ 	struct sev_user_data_pdh_cert_export input;
+ 	void *pdh_blob = NULL, *cert_blob = NULL;
+-	struct sev_data_pdh_cert_export *data;
++	struct sev_data_pdh_cert_export data;
+ 	void __user *input_cert_chain_address;
+ 	void __user *input_pdh_cert_address;
+ 	int ret;
+@@ -751,9 +750,7 @@ static int sev_ioctl_do_pdh_export(struct sev_issue_cmd *argp, bool writable)
+ 	if (copy_from_user(&input, (void __user *)argp->data, sizeof(input)))
+ 		return -EFAULT;
+ 
+-	data = kzalloc(sizeof(*data), GFP_KERNEL);
+-	if (!data)
+-		return -ENOMEM;
++	memset(&data, 0, sizeof(data));
+ 
+ 	/* Userspace wants to query the certificate length. */
+ 	if (!input.pdh_cert_address ||
+@@ -765,41 +762,35 @@ static int sev_ioctl_do_pdh_export(struct sev_issue_cmd *argp, bool writable)
+ 	input_cert_chain_address = (void __user *)input.cert_chain_address;
+ 
+ 	/* Allocate a physically contiguous buffer to store the PDH blob. */
+-	if (input.pdh_cert_len > SEV_FW_BLOB_MAX_SIZE) {
+-		ret = -EFAULT;
+-		goto e_free;
+-	}
++	if (input.pdh_cert_len > SEV_FW_BLOB_MAX_SIZE)
++		return -EFAULT;
+ 
+ 	/* Allocate a physically contiguous buffer to store the cert chain blob. */
+-	if (input.cert_chain_len > SEV_FW_BLOB_MAX_SIZE) {
+-		ret = -EFAULT;
+-		goto e_free;
+-	}
++	if (input.cert_chain_len > SEV_FW_BLOB_MAX_SIZE)
++		return -EFAULT;
+ 
+-	pdh_blob = kmalloc(input.pdh_cert_len, GFP_KERNEL);
+-	if (!pdh_blob) {
+-		ret = -ENOMEM;
+-		goto e_free;
+-	}
++	pdh_blob = kzalloc(input.pdh_cert_len, GFP_KERNEL);
++	if (!pdh_blob)
++		return -ENOMEM;
+ 
+-	data->pdh_cert_address = __psp_pa(pdh_blob);
+-	data->pdh_cert_len = input.pdh_cert_len;
++	data.pdh_cert_address = __psp_pa(pdh_blob);
++	data.pdh_cert_len = input.pdh_cert_len;
+ 
+-	cert_blob = kmalloc(input.cert_chain_len, GFP_KERNEL);
++	cert_blob = kzalloc(input.cert_chain_len, GFP_KERNEL);
+ 	if (!cert_blob) {
+ 		ret = -ENOMEM;
+ 		goto e_free_pdh;
+ 	}
+ 
+-	data->cert_chain_address = __psp_pa(cert_blob);
+-	data->cert_chain_len = input.cert_chain_len;
++	data.cert_chain_address = __psp_pa(cert_blob);
++	data.cert_chain_len = input.cert_chain_len;
+ 
+ cmd:
+-	ret = __sev_do_cmd_locked(SEV_CMD_PDH_CERT_EXPORT, data, &argp->error);
++	ret = __sev_do_cmd_locked(SEV_CMD_PDH_CERT_EXPORT, &data, &argp->error);
+ 
+ 	/* If we query the length, FW responded with expected data. */
+-	input.cert_chain_len = data->cert_chain_len;
+-	input.pdh_cert_len = data->pdh_cert_len;
++	input.cert_chain_len = data.cert_chain_len;
++	input.pdh_cert_len = data.pdh_cert_len;
+ 
+ 	if (copy_to_user((void __user *)argp->data, &input, sizeof(input))) {
+ 		ret = -EFAULT;
+@@ -824,8 +815,6 @@ e_free_cert:
+ 	kfree(cert_blob);
+ e_free_pdh:
+ 	kfree(pdh_blob);
+-e_free:
+-	kfree(data);
+ 	return ret;
+ }
+ 
+@@ -1063,7 +1052,6 @@ EXPORT_SYMBOL_GPL(sev_issue_cmd_external_user);
+ void sev_pci_init(void)
+ {
+ 	struct sev_device *sev = psp_master->sev_data;
+-	struct page *tmr_page;
+ 	int error, rc;
+ 
+ 	if (!sev)
+@@ -1079,14 +1067,13 @@ void sev_pci_init(void)
+ 		sev_get_api_version();
+ 
+ 	/* Obtain the TMR memory area for SEV-ES use */
+-	tmr_page = alloc_pages(GFP_KERNEL, get_order(SEV_ES_TMR_SIZE));
+-	if (tmr_page) {
+-		sev_es_tmr = page_address(tmr_page);
+-	} else {
+-		sev_es_tmr = NULL;
++	sev_es_tmr = sev_fw_alloc(SEV_ES_TMR_SIZE);
++	if (sev_es_tmr)
++		/* Must flush the cache before giving it to the firmware */
++		clflush_cache_range(sev_es_tmr, SEV_ES_TMR_SIZE);
++	else
+ 		dev_warn(sev->dev,
+ 			 "SEV: TMR allocation failed, SEV-ES support unavailable\n");
+-	}
+ 
+ 	/* Initialize the platform */
+ 	rc = sev_platform_init(&error);
+diff --git a/drivers/crypto/ccp/sev-dev.h b/drivers/crypto/ccp/sev-dev.h
+index dd5c4fe82914c..3b0cd0f854df9 100644
+--- a/drivers/crypto/ccp/sev-dev.h
++++ b/drivers/crypto/ccp/sev-dev.h
+@@ -46,7 +46,6 @@ struct sev_device {
+ 	unsigned int int_rcvd;
+ 	wait_queue_head_t int_queue;
+ 	struct sev_misc_dev *misc;
+-	struct sev_user_data_status status_cmd_buf;
+ 	struct sev_data_init init_cmd_buf;
+ 
+ 	u8 api_major;
+diff --git a/drivers/crypto/hisilicon/sgl.c b/drivers/crypto/hisilicon/sgl.c
+index 725a739800b0a..ce77826c7fb05 100644
+--- a/drivers/crypto/hisilicon/sgl.c
++++ b/drivers/crypto/hisilicon/sgl.c
+@@ -113,9 +113,8 @@ err_free_mem:
+ 	for (j = 0; j < i; j++) {
+ 		dma_free_coherent(dev, block_size, block[j].sgl,
+ 				  block[j].sgl_dma);
+-		memset(block + j, 0, sizeof(*block));
+ 	}
+-	kfree(pool);
++	kfree_sensitive(pool);
+ 	return ERR_PTR(-ENOMEM);
+ }
+ EXPORT_SYMBOL_GPL(hisi_acc_create_sgl_pool);
+diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
+index c1d379bd7af33..a02777c93c07b 100644
+--- a/drivers/dax/bus.c
++++ b/drivers/dax/bus.c
+@@ -398,8 +398,8 @@ static void unregister_dev_dax(void *dev)
+ 	dev_dbg(dev, "%s\n", __func__);
+ 
+ 	kill_dev_dax(dev_dax);
+-	free_dev_dax_ranges(dev_dax);
+ 	device_del(dev);
++	free_dev_dax_ranges(dev_dax);
+ 	put_device(dev);
+ }
+ 
+diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
+index b4368c5b6a0cc..27d669f8b5f3a 100644
+--- a/drivers/dax/kmem.c
++++ b/drivers/dax/kmem.c
+@@ -114,7 +114,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
+ 		if (rc) {
+ 			dev_warn(dev, "mapping%d: %#llx-%#llx memory add failed\n",
+ 					i, range.start, range.end);
+-			release_resource(res);
++			remove_resource(res);
+ 			kfree(res);
+ 			data->res[i] = NULL;
+ 			if (mapped)
+@@ -159,7 +159,7 @@ static int dev_dax_kmem_remove(struct dev_dax *dev_dax)
+ 		rc = remove_memory(dev_dax->target_node, range.start,
+ 				range_len(&range));
+ 		if (rc == 0) {
+-			release_resource(data->res[i]);
++			remove_resource(data->res[i]);
+ 			kfree(data->res[i]);
+ 			data->res[i] = NULL;
+ 			success++;
+diff --git a/drivers/firmware/google/framebuffer-coreboot.c b/drivers/firmware/google/framebuffer-coreboot.c
+index 916f26adc5955..922c079d13c8a 100644
+--- a/drivers/firmware/google/framebuffer-coreboot.c
++++ b/drivers/firmware/google/framebuffer-coreboot.c
+@@ -43,9 +43,7 @@ static int framebuffer_probe(struct coreboot_device *dev)
+ 		    fb->green_mask_pos     == formats[i].green.offset &&
+ 		    fb->green_mask_size    == formats[i].green.length &&
+ 		    fb->blue_mask_pos      == formats[i].blue.offset &&
+-		    fb->blue_mask_size     == formats[i].blue.length &&
+-		    fb->reserved_mask_pos  == formats[i].transp.offset &&
+-		    fb->reserved_mask_size == formats[i].transp.length)
++		    fb->blue_mask_size     == formats[i].blue.length)
+ 			pdata.format = formats[i].name;
+ 	}
+ 	if (!pdata.format)
+diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c
+index 1ae612c796eef..396a687e020f5 100644
+--- a/drivers/gpio/gpio-vf610.c
++++ b/drivers/gpio/gpio-vf610.c
+@@ -304,7 +304,7 @@ static int vf610_gpio_probe(struct platform_device *pdev)
+ 	gc = &port->gc;
+ 	gc->of_node = np;
+ 	gc->parent = dev;
+-	gc->label = "vf610-gpio";
++	gc->label = dev_name(dev);
+ 	gc->ngpio = VF610_GPIO_PER_PORT;
+ 	gc->base = of_alias_get_id(np, "gpio") * VF610_GPIO_PER_PORT;
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index fbe15f4b75fd5..dbdf0e210522c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2051,12 +2051,14 @@ static int dm_resume(void *handle)
+ 	drm_for_each_connector_iter(connector, &iter) {
+ 		aconnector = to_amdgpu_dm_connector(connector);
+ 
++		if (!aconnector->dc_link)
++			continue;
++
+ 		/*
+ 		 * this is the case when traversing through already created
+ 		 * MST connectors, should be skipped
+ 		 */
+-		if (aconnector->dc_link &&
+-		    aconnector->dc_link->type == dc_connection_mst_branch)
++		if (aconnector->dc_link->type == dc_connection_mst_branch)
+ 			continue;
+ 
+ 		mutex_lock(&aconnector->hpd_lock);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 99887bcfada04..7e0a55aa2b180 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -616,6 +616,7 @@ static bool dc_construct_ctx(struct dc *dc,
+ 
+ 	dc_ctx->perf_trace = dc_perf_trace_create();
+ 	if (!dc_ctx->perf_trace) {
++		kfree(dc_ctx);
+ 		ASSERT_CRITICAL(false);
+ 		return false;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
+index b3f0476899d32..14e7a59a9cd13 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
+@@ -3897,14 +3897,14 @@ void dml20_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 					mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine = mode_lib->vba.PixelClock[k] / 2
+ 							* (1 + mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
+ 
+-				locals->ODMCombineEnablePerState[i][k] = false;
++				locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ 				mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithoutODMCombine;
+ 				if (mode_lib->vba.ODMCapability) {
+ 					if (locals->PlaneRequiredDISPCLKWithoutODMCombine > mode_lib->vba.MaxDispclkRoundedDownToDFSGranularity) {
+-						locals->ODMCombineEnablePerState[i][k] = true;
++						locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ 						mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ 					} else if (locals->HActive[k] > DCN20_MAX_420_IMAGE_WIDTH && locals->OutputFormat[k] == dm_420) {
+-						locals->ODMCombineEnablePerState[i][k] = true;
++						locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ 						mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ 					}
+ 				}
+@@ -3957,7 +3957,7 @@ void dml20_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 				locals->RequiredDISPCLK[i][j] = 0.0;
+ 				locals->DISPCLK_DPPCLK_Support[i][j] = true;
+ 				for (k = 0; k <= mode_lib->vba.NumberOfActivePlanes - 1; k++) {
+-					locals->ODMCombineEnablePerState[i][k] = false;
++					locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ 					if (locals->SwathWidthYSingleDPP[k] <= locals->MaximumSwathWidth[k]) {
+ 						locals->NoOfDPP[i][j][k] = 1;
+ 						locals->RequiredDPPCLK[i][j][k] = locals->MinDPPCLKUsingSingleDPP[k]
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
+index 1bcda7eba4a6f..ee1c80366bd65 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
+@@ -3974,17 +3974,17 @@ void dml20v2_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode
+ 					mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine = mode_lib->vba.PixelClock[k] / 2
+ 							* (1 + mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
+ 
+-				locals->ODMCombineEnablePerState[i][k] = false;
++				locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ 				mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithoutODMCombine;
+ 				if (mode_lib->vba.ODMCapability) {
+ 					if (locals->PlaneRequiredDISPCLKWithoutODMCombine > MaxMaxDispclkRoundedDown) {
+-						locals->ODMCombineEnablePerState[i][k] = true;
++						locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ 						mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ 					} else if (locals->DSCEnabled[k] && (locals->HActive[k] > DCN20_MAX_DSC_IMAGE_WIDTH)) {
+-						locals->ODMCombineEnablePerState[i][k] = true;
++						locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ 						mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ 					} else if (locals->HActive[k] > DCN20_MAX_420_IMAGE_WIDTH && locals->OutputFormat[k] == dm_420) {
+-						locals->ODMCombineEnablePerState[i][k] = true;
++						locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ 						mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ 					}
+ 				}
+@@ -4037,7 +4037,7 @@ void dml20v2_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode
+ 				locals->RequiredDISPCLK[i][j] = 0.0;
+ 				locals->DISPCLK_DPPCLK_Support[i][j] = true;
+ 				for (k = 0; k <= mode_lib->vba.NumberOfActivePlanes - 1; k++) {
+-					locals->ODMCombineEnablePerState[i][k] = false;
++					locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ 					if (locals->SwathWidthYSingleDPP[k] <= locals->MaximumSwathWidth[k]) {
+ 						locals->NoOfDPP[i][j][k] = 1;
+ 						locals->RequiredDPPCLK[i][j][k] = locals->MinDPPCLKUsingSingleDPP[k]
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+index c09bca3350687..25693e62db805 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+@@ -3975,17 +3975,17 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 					mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine = mode_lib->vba.PixelClock[k] / 2
+ 							* (1 + mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
+ 
+-				locals->ODMCombineEnablePerState[i][k] = false;
++				locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ 				mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithoutODMCombine;
+ 				if (mode_lib->vba.ODMCapability) {
+ 					if (locals->PlaneRequiredDISPCLKWithoutODMCombine > MaxMaxDispclkRoundedDown) {
+-						locals->ODMCombineEnablePerState[i][k] = true;
++						locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ 						mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ 					} else if (locals->DSCEnabled[k] && (locals->HActive[k] > DCN21_MAX_DSC_IMAGE_WIDTH)) {
+-						locals->ODMCombineEnablePerState[i][k] = true;
++						locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ 						mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ 					} else if (locals->HActive[k] > DCN21_MAX_420_IMAGE_WIDTH && locals->OutputFormat[k] == dm_420) {
+-						locals->ODMCombineEnablePerState[i][k] = true;
++						locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ 						mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ 					}
+ 				}
+@@ -4038,7 +4038,7 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 				locals->RequiredDISPCLK[i][j] = 0.0;
+ 				locals->DISPCLK_DPPCLK_Support[i][j] = true;
+ 				for (k = 0; k <= mode_lib->vba.NumberOfActivePlanes - 1; k++) {
+-					locals->ODMCombineEnablePerState[i][k] = false;
++					locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ 					if (locals->SwathWidthYSingleDPP[k] <= locals->MaximumSwathWidth[k]) {
+ 						locals->NoOfDPP[i][j][k] = 1;
+ 						locals->RequiredDPPCLK[i][j][k] = locals->MinDPPCLKUsingSingleDPP[k]
+@@ -5213,7 +5213,7 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 			mode_lib->vba.ODMCombineEnabled[k] =
+ 					locals->ODMCombineEnablePerState[mode_lib->vba.VoltageLevel][k];
+ 		} else {
+-			mode_lib->vba.ODMCombineEnabled[k] = false;
++			mode_lib->vba.ODMCombineEnabled[k] = dm_odm_combine_mode_disabled;
+ 		}
+ 		mode_lib->vba.DSCEnabled[k] =
+ 				locals->RequiresDSC[mode_lib->vba.VoltageLevel][k];
+diff --git a/drivers/gpu/drm/arm/malidp_planes.c b/drivers/gpu/drm/arm/malidp_planes.c
+index f1e8bc39b16d3..24604b410372d 100644
+--- a/drivers/gpu/drm/arm/malidp_planes.c
++++ b/drivers/gpu/drm/arm/malidp_planes.c
+@@ -348,7 +348,7 @@ static bool malidp_check_pages_threshold(struct malidp_plane_state *ms,
+ 		else
+ 			sgt = obj->funcs->get_sg_table(obj);
+ 
+-		if (!sgt)
++		if (IS_ERR(sgt))
+ 			return false;
+ 
+ 		sgl = sgt->sgl;
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611.c b/drivers/gpu/drm/bridge/lontium-lt9611.c
+index 1dcc28a4d8537..0c6dea9ccb728 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611.c
+@@ -185,12 +185,14 @@ static void lt9611_mipi_video_setup(struct lt9611 *lt9611,
+ 
+ 	regmap_write(lt9611->regmap, 0x8319, (u8)(hfront_porch % 256));
+ 
+-	regmap_write(lt9611->regmap, 0x831a, (u8)(hsync_porch / 256));
++	regmap_write(lt9611->regmap, 0x831a, (u8)(hsync_porch / 256) |
++						((hfront_porch / 256) << 4));
+ 	regmap_write(lt9611->regmap, 0x831b, (u8)(hsync_porch % 256));
+ }
+ 
+-static void lt9611_pcr_setup(struct lt9611 *lt9611, const struct drm_display_mode *mode)
++static void lt9611_pcr_setup(struct lt9611 *lt9611, const struct drm_display_mode *mode, unsigned int postdiv)
+ {
++	unsigned int pcr_m = mode->clock * 5 * postdiv / 27000;
+ 	const struct reg_sequence reg_cfg[] = {
+ 		{ 0x830b, 0x01 },
+ 		{ 0x830c, 0x10 },
+@@ -205,7 +207,6 @@ static void lt9611_pcr_setup(struct lt9611 *lt9611, const struct drm_display_mod
+ 
+ 		/* stage 2 */
+ 		{ 0x834a, 0x40 },
+-		{ 0x831d, 0x10 },
+ 
+ 		/* MK limit */
+ 		{ 0x832d, 0x38 },
+@@ -220,30 +221,28 @@ static void lt9611_pcr_setup(struct lt9611 *lt9611, const struct drm_display_mod
+ 		{ 0x8325, 0x00 },
+ 		{ 0x832a, 0x01 },
+ 		{ 0x834a, 0x10 },
+-		{ 0x831d, 0x10 },
+-		{ 0x8326, 0x37 },
+ 	};
++	u8 pol = 0x10;
+ 
+-	regmap_multi_reg_write(lt9611->regmap, reg_cfg, ARRAY_SIZE(reg_cfg));
++	if (mode->flags & DRM_MODE_FLAG_NHSYNC)
++		pol |= 0x2;
++	if (mode->flags & DRM_MODE_FLAG_NVSYNC)
++		pol |= 0x1;
++	regmap_write(lt9611->regmap, 0x831d, pol);
+ 
+-	switch (mode->hdisplay) {
+-	case 640:
+-		regmap_write(lt9611->regmap, 0x8326, 0x14);
+-		break;
+-	case 1920:
+-		regmap_write(lt9611->regmap, 0x8326, 0x37);
+-		break;
+-	case 3840:
++	if (mode->hdisplay == 3840)
+ 		regmap_multi_reg_write(lt9611->regmap, reg_cfg2, ARRAY_SIZE(reg_cfg2));
+-		break;
+-	}
++	else
++		regmap_multi_reg_write(lt9611->regmap, reg_cfg, ARRAY_SIZE(reg_cfg));
++
++	regmap_write(lt9611->regmap, 0x8326, pcr_m);
+ 
+ 	/* pcr rst */
+ 	regmap_write(lt9611->regmap, 0x8011, 0x5a);
+ 	regmap_write(lt9611->regmap, 0x8011, 0xfa);
+ }
+ 
+-static int lt9611_pll_setup(struct lt9611 *lt9611, const struct drm_display_mode *mode)
++static int lt9611_pll_setup(struct lt9611 *lt9611, const struct drm_display_mode *mode, unsigned int *postdiv)
+ {
+ 	unsigned int pclk = mode->clock;
+ 	const struct reg_sequence reg_cfg[] = {
+@@ -261,12 +260,16 @@ static int lt9611_pll_setup(struct lt9611 *lt9611, const struct drm_display_mode
+ 
+ 	regmap_multi_reg_write(lt9611->regmap, reg_cfg, ARRAY_SIZE(reg_cfg));
+ 
+-	if (pclk > 150000)
++	if (pclk > 150000) {
+ 		regmap_write(lt9611->regmap, 0x812d, 0x88);
+-	else if (pclk > 70000)
++		*postdiv = 1;
++	} else if (pclk > 70000) {
+ 		regmap_write(lt9611->regmap, 0x812d, 0x99);
+-	else
++		*postdiv = 2;
++	} else {
+ 		regmap_write(lt9611->regmap, 0x812d, 0xaa);
++		*postdiv = 4;
++	}
+ 
+ 	/*
+ 	 * first divide pclk by 2 first
+@@ -446,12 +449,11 @@ static void lt9611_sleep_setup(struct lt9611 *lt9611)
+ 		{ 0x8023, 0x01 },
+ 		{ 0x8157, 0x03 }, /* set addr pin as output */
+ 		{ 0x8149, 0x0b },
+-		{ 0x8151, 0x30 }, /* disable IRQ */
++
+ 		{ 0x8102, 0x48 }, /* MIPI Rx power down */
+ 		{ 0x8123, 0x80 },
+ 		{ 0x8130, 0x00 },
+-		{ 0x8100, 0x01 }, /* bandgap power down */
+-		{ 0x8101, 0x00 }, /* system clk power down */
++		{ 0x8011, 0x0a },
+ 	};
+ 
+ 	regmap_multi_reg_write(lt9611->regmap,
+@@ -757,7 +759,7 @@ static const struct drm_connector_funcs lt9611_bridge_connector_funcs = {
+ static struct mipi_dsi_device *lt9611_attach_dsi(struct lt9611 *lt9611,
+ 						 struct device_node *dsi_node)
+ {
+-	const struct mipi_dsi_device_info info = { "lt9611", 0, NULL };
++	const struct mipi_dsi_device_info info = { "lt9611", 0, lt9611->dev->of_node};
+ 	struct mipi_dsi_device *dsi;
+ 	struct mipi_dsi_host *host;
+ 	int ret;
+@@ -881,12 +883,18 @@ static enum drm_mode_status lt9611_bridge_mode_valid(struct drm_bridge *bridge,
+ static void lt9611_bridge_pre_enable(struct drm_bridge *bridge)
+ {
+ 	struct lt9611 *lt9611 = bridge_to_lt9611(bridge);
++	static const struct reg_sequence reg_cfg[] = {
++		{ 0x8102, 0x12 },
++		{ 0x8123, 0x40 },
++		{ 0x8130, 0xea },
++		{ 0x8011, 0xfa },
++	};
+ 
+ 	if (!lt9611->sleep)
+ 		return;
+ 
+-	lt9611_reset(lt9611);
+-	regmap_write(lt9611->regmap, 0x80ee, 0x01);
++	regmap_multi_reg_write(lt9611->regmap,
++			       reg_cfg, ARRAY_SIZE(reg_cfg));
+ 
+ 	lt9611->sleep = false;
+ }
+@@ -904,14 +912,15 @@ static void lt9611_bridge_mode_set(struct drm_bridge *bridge,
+ {
+ 	struct lt9611 *lt9611 = bridge_to_lt9611(bridge);
+ 	struct hdmi_avi_infoframe avi_frame;
++	unsigned int postdiv;
+ 	int ret;
+ 
+ 	lt9611_bridge_pre_enable(bridge);
+ 
+ 	lt9611_mipi_input_digital(lt9611, mode);
+-	lt9611_pll_setup(lt9611, mode);
++	lt9611_pll_setup(lt9611, mode, &postdiv);
+ 	lt9611_mipi_video_setup(lt9611, mode);
+-	lt9611_pcr_setup(lt9611, mode);
++	lt9611_pcr_setup(lt9611, mode, postdiv);
+ 
+ 	ret = drm_hdmi_avi_infoframe_from_display_mode(&avi_frame,
+ 						       &lt9611->connector,
+diff --git a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+index 72248a565579e..e41afcc5326b1 100644
+--- a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
++++ b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+@@ -444,7 +444,11 @@ static int __init stdpxxxx_ge_b850v3_init(void)
+ 	if (ret)
+ 		return ret;
+ 
+-	return i2c_add_driver(&stdp2690_ge_b850v3_fw_driver);
++	ret = i2c_add_driver(&stdp2690_ge_b850v3_fw_driver);
++	if (ret)
++		i2c_del_driver(&stdp4028_ge_b850v3_fw_driver);
++
++	return ret;
+ }
+ module_init(stdpxxxx_ge_b850v3_init);
+ 
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 0feeac52e4eb3..b5e15933cb5f4 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -3769,6 +3769,9 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
+ 		set_bit(0, &mgr->payload_mask);
+ 		mgr->vcpi_mask = 0;
+ 		mgr->payload_id_table_cleared = false;
++
++		memset(&mgr->down_rep_recv, 0, sizeof(mgr->down_rep_recv));
++		memset(&mgr->up_req_recv, 0, sizeof(mgr->up_req_recv));
+ 	}
+ 
+ out_unlock:
+@@ -3985,7 +3988,7 @@ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
+ 	struct drm_dp_sideband_msg_rx *msg = &mgr->down_rep_recv;
+ 
+ 	if (!drm_dp_get_one_sb_msg(mgr, false, &mstb))
+-		goto out;
++		goto out_clear_reply;
+ 
+ 	/* Multi-packet message transmission, don't clear the reply */
+ 	if (!msg->have_eomt)
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 4334e466b4e05..39eb39e78d7a2 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -5560,8 +5560,6 @@ static u8 drm_mode_hdmi_vic(const struct drm_connector *connector,
+ static u8 drm_mode_cea_vic(const struct drm_connector *connector,
+ 			   const struct drm_display_mode *mode)
+ {
+-	u8 vic;
+-
+ 	/*
+ 	 * HDMI spec says if a mode is found in HDMI 1.4b 4K modes
+ 	 * we should send its VIC in vendor infoframes, else send the
+@@ -5571,13 +5569,18 @@ static u8 drm_mode_cea_vic(const struct drm_connector *connector,
+ 	if (drm_mode_hdmi_vic(connector, mode))
+ 		return 0;
+ 
+-	vic = drm_match_cea_mode(mode);
++	return drm_match_cea_mode(mode);
++}
+ 
+-	/*
+-	 * HDMI 1.4 VIC range: 1 <= VIC <= 64 (CEA-861-D) but
+-	 * HDMI 2.0 VIC range: 1 <= VIC <= 107 (CEA-861-F). So we
+-	 * have to make sure we dont break HDMI 1.4 sinks.
+-	 */
++/*
++ * Avoid sending VICs defined in HDMI 2.0 in AVI infoframes to sinks that
++ * conform to HDMI 1.4.
++ *
++ * HDMI 1.4 (CTA-861-D) VIC range: [1..64]
++ * HDMI 2.0 (CTA-861-F) VIC range: [1..107]
++ */
++static u8 vic_for_avi_infoframe(const struct drm_connector *connector, u8 vic)
++{
+ 	if (!is_hdmi2_sink(connector) && vic > 64)
+ 		return 0;
+ 
+@@ -5653,7 +5656,7 @@ drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame,
+ 		picture_aspect = HDMI_PICTURE_ASPECT_NONE;
+ 	}
+ 
+-	frame->video_code = vic;
++	frame->video_code = vic_for_avi_infoframe(connector, vic);
+ 	frame->picture_aspect = picture_aspect;
+ 	frame->active_aspect = HDMI_ACTIVE_ASPECT_PICTURE;
+ 	frame->scan_mode = HDMI_SCAN_MODE_UNDERSCAN;
+diff --git a/drivers/gpu/drm/drm_fourcc.c b/drivers/gpu/drm/drm_fourcc.c
+index 92152c06b75b7..8d1064061e836 100644
+--- a/drivers/gpu/drm/drm_fourcc.c
++++ b/drivers/gpu/drm/drm_fourcc.c
+@@ -178,6 +178,10 @@ const struct drm_format_info *__drm_format_info(u32 format)
+ 		{ .format = DRM_FORMAT_BGRA5551,	.depth = 15, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1, .has_alpha = true },
+ 		{ .format = DRM_FORMAT_RGB565,		.depth = 16, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1 },
+ 		{ .format = DRM_FORMAT_BGR565,		.depth = 16, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1 },
++#ifdef __BIG_ENDIAN
++		{ .format = DRM_FORMAT_XRGB1555 | DRM_FORMAT_BIG_ENDIAN, .depth = 15, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1 },
++		{ .format = DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN, .depth = 16, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1 },
++#endif
+ 		{ .format = DRM_FORMAT_RGB888,		.depth = 24, .num_planes = 1, .cpp = { 3, 0, 0 }, .hsub = 1, .vsub = 1 },
+ 		{ .format = DRM_FORMAT_BGR888,		.depth = 24, .num_planes = 1, .cpp = { 3, 0, 0 }, .hsub = 1, .vsub = 1 },
+ 		{ .format = DRM_FORMAT_XRGB8888,	.depth = 24, .num_planes = 1, .cpp = { 4, 0, 0 }, .hsub = 1, .vsub = 1 },
+diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
+index 2c43d54766f34..19fb1d93a4f07 100644
+--- a/drivers/gpu/drm/drm_mipi_dsi.c
++++ b/drivers/gpu/drm/drm_mipi_dsi.c
+@@ -1143,6 +1143,58 @@ int mipi_dsi_dcs_get_display_brightness(struct mipi_dsi_device *dsi,
+ }
+ EXPORT_SYMBOL(mipi_dsi_dcs_get_display_brightness);
+ 
++/**
++ * mipi_dsi_dcs_set_display_brightness_large() - sets the 16-bit brightness value
++ *    of the display
++ * @dsi: DSI peripheral device
++ * @brightness: brightness value
++ *
++ * Return: 0 on success or a negative error code on failure.
++ */
++int mipi_dsi_dcs_set_display_brightness_large(struct mipi_dsi_device *dsi,
++					     u16 brightness)
++{
++	u8 payload[2] = { brightness >> 8, brightness & 0xff };
++	ssize_t err;
++
++	err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_DISPLAY_BRIGHTNESS,
++				 payload, sizeof(payload));
++	if (err < 0)
++		return err;
++
++	return 0;
++}
++EXPORT_SYMBOL(mipi_dsi_dcs_set_display_brightness_large);
++
++/**
++ * mipi_dsi_dcs_get_display_brightness_large() - gets the current 16-bit
++ *    brightness value of the display
++ * @dsi: DSI peripheral device
++ * @brightness: brightness value
++ *
++ * Return: 0 on success or a negative error code on failure.
++ */
++int mipi_dsi_dcs_get_display_brightness_large(struct mipi_dsi_device *dsi,
++					     u16 *brightness)
++{
++	u8 brightness_be[2];
++	ssize_t err;
++
++	err = mipi_dsi_dcs_read(dsi, MIPI_DCS_GET_DISPLAY_BRIGHTNESS,
++				brightness_be, sizeof(brightness_be));
++	if (err <= 0) {
++		if (err == 0)
++			err = -ENODATA;
++
++		return err;
++	}
++
++	*brightness = (brightness_be[0] << 8) | brightness_be[1];
++
++	return 0;
++}
++EXPORT_SYMBOL(mipi_dsi_dcs_get_display_brightness_large);
++
+ static int mipi_dsi_drv_probe(struct device *dev)
+ {
+ 	struct mipi_dsi_driver *drv = to_mipi_dsi_driver(dev->driver);
+diff --git a/drivers/gpu/drm/drm_mode_config.c b/drivers/gpu/drm/drm_mode_config.c
+index f1affc1bb6799..fad2c11811270 100644
+--- a/drivers/gpu/drm/drm_mode_config.c
++++ b/drivers/gpu/drm/drm_mode_config.c
+@@ -398,6 +398,8 @@ static void drm_mode_config_init_release(struct drm_device *dev, void *ptr)
+  */
+ int drmm_mode_config_init(struct drm_device *dev)
+ {
++	int ret;
++
+ 	mutex_init(&dev->mode_config.mutex);
+ 	drm_modeset_lock_init(&dev->mode_config.connection_mutex);
+ 	mutex_init(&dev->mode_config.idr_mutex);
+@@ -419,7 +421,11 @@ int drmm_mode_config_init(struct drm_device *dev)
+ 	init_llist_head(&dev->mode_config.connector_free_list);
+ 	INIT_WORK(&dev->mode_config.connector_free_work, drm_connector_free_work_fn);
+ 
+-	drm_mode_create_standard_properties(dev);
++	ret = drm_mode_create_standard_properties(dev);
++	if (ret) {
++		drm_mode_config_cleanup(dev);
++		return ret;
++	}
+ 
+ 	/* Just to be sure */
+ 	dev->mode_config.num_fb = 0;
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index ce739ba45c551..8768073794fbf 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -278,6 +278,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGL"),
+ 		},
+ 		.driver_data = (void *)&lcd800x1280_rightside_up,
++	}, {	/* Lenovo IdeaPad Duet 3 10IGL5 */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "IdeaPad Duet 3 10IGL5"),
++		},
++		.driver_data = (void *)&lcd1200x1920_rightside_up,
+ 	}, {	/* Lenovo Yoga Book X90F / X91F / X91L */
+ 		.matches = {
+ 		  /* Non exact match to match all versions */
+diff --git a/drivers/gpu/drm/i915/display/intel_quirks.c b/drivers/gpu/drm/i915/display/intel_quirks.c
+index 8eb1842f14cea..b4e74c86fae73 100644
+--- a/drivers/gpu/drm/i915/display/intel_quirks.c
++++ b/drivers/gpu/drm/i915/display/intel_quirks.c
+@@ -159,6 +159,8 @@ static struct intel_quirk intel_quirks[] = {
+ 	/* ECS Liva Q2 */
+ 	{ 0x3185, 0x1019, 0xa94d, quirk_increase_ddi_disabled_time },
+ 	{ 0x3184, 0x1019, 0xa94d, quirk_increase_ddi_disabled_time },
++	/* HP Notebook - 14-r206nv */
++	{ 0x0f31, 0x103c, 0x220f, quirk_invert_brightness },
+ };
+ 
+ void intel_init_quirks(struct drm_i915_private *i915)
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index dfd5ed15a7f4a..e83b1c406b96a 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -803,6 +803,8 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
+ 
+ 	mtk_crtc->planes = devm_kcalloc(dev, num_comp_planes,
+ 					sizeof(struct drm_plane), GFP_KERNEL);
++	if (!mtk_crtc->planes)
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
+ 		ret = mtk_drm_crtc_init_comp_planes(drm_dev, mtk_crtc, i,
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 59c85c63b7cc9..719c46d245dd2 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -378,6 +378,7 @@ static int mtk_drm_bind(struct device *dev)
+ err_deinit:
+ 	mtk_drm_kms_deinit(drm);
+ err_free:
++	private->drm = NULL;
+ 	drm_dev_put(drm);
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+index 0583e557ad372..29702dd8631d4 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+@@ -142,8 +142,6 @@ static int mtk_drm_gem_object_mmap(struct drm_gem_object *obj,
+ 
+ 	ret = dma_mmap_attrs(priv->dma_dev, vma, mtk_gem->cookie,
+ 			     mtk_gem->dma_addr, obj->size, mtk_gem->dma_attrs);
+-	if (ret)
+-		drm_gem_vm_close(vma);
+ 
+ 	return ret;
+ }
+@@ -266,6 +264,6 @@ void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+ 		return;
+ 
+ 	vunmap(vaddr);
+-	mtk_gem->kvaddr = 0;
++	mtk_gem->kvaddr = NULL;
+ 	kfree(mtk_gem->pages);
+ }
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index 146c4d04f572d..a6e71b7b69b83 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -704,7 +704,7 @@ static void mtk_dsi_lane_ready(struct mtk_dsi *dsi)
+ 		mtk_dsi_clk_ulp_mode_leave(dsi);
+ 		mtk_dsi_lane0_ulp_mode_leave(dsi);
+ 		mtk_dsi_clk_hs_mode(dsi, 0);
+-		msleep(20);
++		usleep_range(1000, 3000);
+ 		/* The reaction time after pulling up the mipi signal for dsi_rx */
+ 	}
+ }
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index de8cc25506d61..78181e2d78a97 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -954,13 +954,13 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
+ void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu)
+ {
+ 	struct msm_gpu *gpu = &adreno_gpu->base;
+-	struct msm_drm_private *priv = gpu->dev->dev_private;
++	struct msm_drm_private *priv = gpu->dev ? gpu->dev->dev_private : NULL;
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++)
+ 		release_firmware(adreno_gpu->fw[i]);
+ 
+-	if (pm_runtime_enabled(&priv->gpu_pdev->dev))
++	if (priv && pm_runtime_enabled(&priv->gpu_pdev->dev))
+ 		pm_runtime_disable(&priv->gpu_pdev->dev);
+ 
+ 	msm_gpu_cleanup(&adreno_gpu->base);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index f56414a06ec41..5afb3c544653c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -682,7 +682,10 @@ static void dpu_crtc_reset(struct drm_crtc *crtc)
+ 	if (crtc->state)
+ 		dpu_crtc_destroy_state(crtc, crtc->state);
+ 
+-	__drm_atomic_helper_crtc_reset(crtc, &cstate->base);
++	if (cstate)
++		__drm_atomic_helper_crtc_reset(crtc, &cstate->base);
++	else
++		__drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+ 
+ /**
+@@ -831,6 +834,8 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
+ 	struct drm_rect crtc_rect = { 0 };
+ 
+ 	pstates = kzalloc(sizeof(*pstates) * DPU_STAGE_MAX * 4, GFP_KERNEL);
++	if (!pstates)
++		return -ENOMEM;
+ 
+ 	if (!state->enable || !state->active) {
+ 		DPU_DEBUG("crtc%d -> enable %d, active %d, skip atomic_check\n",
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+index 74a13ccad34c0..9483005297438 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+@@ -633,6 +633,11 @@ int dpu_rm_get_assigned_resources(struct dpu_rm *rm,
+ 				  blks_size, enc_id);
+ 			break;
+ 		}
++		if (!hw_blks[i]) {
++			DPU_ERROR("Allocated resource %d unavailable to assign to enc %d\n",
++				  type, enc_id);
++			break;
++		}
+ 		blks[num_blks++] = hw_blks[i];
+ 	}
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+index ff4f207cbdeaf..60e7371cd0e0d 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+@@ -1124,7 +1124,10 @@ static void mdp5_crtc_reset(struct drm_crtc *crtc)
+ 	if (crtc->state)
+ 		mdp5_crtc_destroy_state(crtc, crtc->state);
+ 
+-	__drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
++	if (mdp5_cstate)
++		__drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
++	else
++		__drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+ 
+ static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = {
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 51e8318cc8ff4..5a76aa1389173 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -1913,6 +1913,9 @@ int msm_dsi_host_init(struct msm_dsi *msm_dsi)
+ 
+ 	/* setup workqueue */
+ 	msm_host->workqueue = alloc_ordered_workqueue("dsi_drm_work", 0);
++	if (!msm_host->workqueue)
++		return -ENOMEM;
++
+ 	INIT_WORK(&msm_host->err_work, dsi_err_worker);
+ 	INIT_WORK(&msm_host->hpd_work, dsi_hpd_worker);
+ 
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
+index efb14043a6ec4..bee208773beec 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
+@@ -264,6 +264,10 @@ static struct hdmi *msm_hdmi_init(struct platform_device *pdev)
+ 	pm_runtime_enable(&pdev->dev);
+ 
+ 	hdmi->workq = alloc_ordered_workqueue("msm_hdmi", 0);
++	if (!hdmi->workq) {
++		ret = -ENOMEM;
++		goto fail;
++	}
+ 
+ 	hdmi->i2c = msm_hdmi_i2c_init(hdmi);
+ 	if (IS_ERR(hdmi->i2c)) {
+diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c
+index cd59a59180385..50a25c119f4d9 100644
+--- a/drivers/gpu/drm/msm/msm_fence.c
++++ b/drivers/gpu/drm/msm/msm_fence.c
+@@ -20,7 +20,7 @@ msm_fence_context_alloc(struct drm_device *dev, const char *name)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	fctx->dev = dev;
+-	strncpy(fctx->name, name, sizeof(fctx->name));
++	strscpy(fctx->name, name, sizeof(fctx->name));
+ 	fctx->context = dma_fence_context_alloc(1);
+ 	init_waitqueue_head(&fctx->event);
+ 	spin_lock_init(&fctx->spinlock);
+diff --git a/drivers/gpu/drm/mxsfb/Kconfig b/drivers/gpu/drm/mxsfb/Kconfig
+index ee22cd25d3e3d..e7201e16119a4 100644
+--- a/drivers/gpu/drm/mxsfb/Kconfig
++++ b/drivers/gpu/drm/mxsfb/Kconfig
+@@ -8,6 +8,7 @@ config DRM_MXSFB
+ 	tristate "i.MX (e)LCDIF LCD controller"
+ 	depends on DRM && OF
+ 	depends on COMMON_CLK
++	depends on ARCH_MXS || ARCH_MXC || COMPILE_TEST
+ 	select DRM_MXS
+ 	select DRM_KMS_HELPER
+ 	select DRM_KMS_CMA_HELPER
+diff --git a/drivers/gpu/drm/omapdrm/dss/dsi.c b/drivers/gpu/drm/omapdrm/dss/dsi.c
+index eeccf40bae416..1b1ddc5fe6dcc 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dsi.c
++++ b/drivers/gpu/drm/omapdrm/dss/dsi.c
+@@ -1444,22 +1444,26 @@ static int dsi_dump_dsi_irqs(struct seq_file *s, void *p)
+ {
+ 	struct dsi_data *dsi = s->private;
+ 	unsigned long flags;
+-	struct dsi_irq_stats stats;
++	struct dsi_irq_stats *stats;
++
++	stats = kmalloc(sizeof(*stats), GFP_KERNEL);
++	if (!stats)
++		return -ENOMEM;
+ 
+ 	spin_lock_irqsave(&dsi->irq_stats_lock, flags);
+ 
+-	stats = dsi->irq_stats;
++	*stats = dsi->irq_stats;
+ 	memset(&dsi->irq_stats, 0, sizeof(dsi->irq_stats));
+ 	dsi->irq_stats.last_reset = jiffies;
+ 
+ 	spin_unlock_irqrestore(&dsi->irq_stats_lock, flags);
+ 
+ 	seq_printf(s, "period %u ms\n",
+-			jiffies_to_msecs(jiffies - stats.last_reset));
++			jiffies_to_msecs(jiffies - stats->last_reset));
+ 
+-	seq_printf(s, "irqs %d\n", stats.irq_count);
++	seq_printf(s, "irqs %d\n", stats->irq_count);
+ #define PIS(x) \
+-	seq_printf(s, "%-20s %10d\n", #x, stats.dsi_irqs[ffs(DSI_IRQ_##x)-1]);
++	seq_printf(s, "%-20s %10d\n", #x, stats->dsi_irqs[ffs(DSI_IRQ_##x)-1]);
+ 
+ 	seq_printf(s, "-- DSI%d interrupts --\n", dsi->module_id + 1);
+ 	PIS(VC0);
+@@ -1483,10 +1487,10 @@ static int dsi_dump_dsi_irqs(struct seq_file *s, void *p)
+ 
+ #define PIS(x) \
+ 	seq_printf(s, "%-20s %10d %10d %10d %10d\n", #x, \
+-			stats.vc_irqs[0][ffs(DSI_VC_IRQ_##x)-1], \
+-			stats.vc_irqs[1][ffs(DSI_VC_IRQ_##x)-1], \
+-			stats.vc_irqs[2][ffs(DSI_VC_IRQ_##x)-1], \
+-			stats.vc_irqs[3][ffs(DSI_VC_IRQ_##x)-1]);
++			stats->vc_irqs[0][ffs(DSI_VC_IRQ_##x)-1], \
++			stats->vc_irqs[1][ffs(DSI_VC_IRQ_##x)-1], \
++			stats->vc_irqs[2][ffs(DSI_VC_IRQ_##x)-1], \
++			stats->vc_irqs[3][ffs(DSI_VC_IRQ_##x)-1]);
+ 
+ 	seq_printf(s, "-- VC interrupts --\n");
+ 	PIS(CS);
+@@ -1502,7 +1506,7 @@ static int dsi_dump_dsi_irqs(struct seq_file *s, void *p)
+ 
+ #define PIS(x) \
+ 	seq_printf(s, "%-20s %10d\n", #x, \
+-			stats.cio_irqs[ffs(DSI_CIO_IRQ_##x)-1]);
++			stats->cio_irqs[ffs(DSI_CIO_IRQ_##x)-1]);
+ 
+ 	seq_printf(s, "-- CIO interrupts --\n");
+ 	PIS(ERRSYNCESC1);
+@@ -1527,6 +1531,8 @@ static int dsi_dump_dsi_irqs(struct seq_file *s, void *p)
+ 	PIS(ULPSACTIVENOT_ALL1);
+ #undef PIS
+ 
++	kfree(stats);
++
+ 	return 0;
+ }
+ #endif
+diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
+index 12aa7877a625a..8cca58f25c0f3 100644
+--- a/drivers/gpu/drm/radeon/atombios_encoders.c
++++ b/drivers/gpu/drm/radeon/atombios_encoders.c
+@@ -2191,11 +2191,12 @@ int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx)
+ 
+ 	/*
+ 	 * On DCE32 any encoder can drive any block so usually just use crtc id,
+-	 * but Apple thinks different at least on iMac10,1, so there use linkb,
++	 * but Apple thinks different at least on iMac10,1 and iMac11,2, so there use linkb,
+ 	 * otherwise the internal eDP panel will stay dark.
+ 	 */
+ 	if (ASIC_IS_DCE32(rdev)) {
+-		if (dmi_match(DMI_PRODUCT_NAME, "iMac10,1"))
++		if (dmi_match(DMI_PRODUCT_NAME, "iMac10,1") ||
++		    dmi_match(DMI_PRODUCT_NAME, "iMac11,2"))
+ 			enc_idx = (dig->linkb) ? 1 : 0;
+ 		else
+ 			enc_idx = radeon_crtc->crtc_id;
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index 8287410f471fb..131f425c363a5 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -1022,6 +1022,7 @@ void radeon_atombios_fini(struct radeon_device *rdev)
+ {
+ 	if (rdev->mode_info.atom_context) {
+ 		kfree(rdev->mode_info.atom_context->scratch);
++		kfree(rdev->mode_info.atom_context->iio);
+ 	}
+ 	kfree(rdev->mode_info.atom_context);
+ 	rdev->mode_info.atom_context = NULL;
+diff --git a/drivers/gpu/drm/tidss/tidss_dispc.c b/drivers/gpu/drm/tidss/tidss_dispc.c
+index b669168ae7cb2..33716213a8210 100644
+--- a/drivers/gpu/drm/tidss/tidss_dispc.c
++++ b/drivers/gpu/drm/tidss/tidss_dispc.c
+@@ -1855,8 +1855,8 @@ static const struct {
+ 	{ DRM_FORMAT_XBGR4444, 0x21, },
+ 	{ DRM_FORMAT_RGBX4444, 0x22, },
+ 
+-	{ DRM_FORMAT_ARGB1555, 0x25, },
+-	{ DRM_FORMAT_ABGR1555, 0x26, },
++	{ DRM_FORMAT_XRGB1555, 0x25, },
++	{ DRM_FORMAT_XBGR1555, 0x26, },
+ 
+ 	{ DRM_FORMAT_XRGB8888, 0x27, },
+ 	{ DRM_FORMAT_XBGR8888, 0x28, },
+diff --git a/drivers/gpu/drm/tiny/ili9486.c b/drivers/gpu/drm/tiny/ili9486.c
+index 403af68fa4400..7ea26e5fbcb28 100644
+--- a/drivers/gpu/drm/tiny/ili9486.c
++++ b/drivers/gpu/drm/tiny/ili9486.c
+@@ -43,6 +43,7 @@ static int waveshare_command(struct mipi_dbi *mipi, u8 *cmd, u8 *par,
+ 			     size_t num)
+ {
+ 	struct spi_device *spi = mipi->spi;
++	unsigned int bpw = 8;
+ 	void *data = par;
+ 	u32 speed_hz;
+ 	int i, ret;
+@@ -56,8 +57,6 @@ static int waveshare_command(struct mipi_dbi *mipi, u8 *cmd, u8 *par,
+ 	 * The displays are Raspberry Pi HATs and connected to the 8-bit only
+ 	 * SPI controller, so 16-bit command and parameters need byte swapping
+ 	 * before being transferred as 8-bit on the big endian SPI bus.
+-	 * Pixel data bytes have already been swapped before this function is
+-	 * called.
+ 	 */
+ 	buf[0] = cpu_to_be16(*cmd);
+ 	gpiod_set_value_cansleep(mipi->dc, 0);
+@@ -71,12 +70,18 @@ static int waveshare_command(struct mipi_dbi *mipi, u8 *cmd, u8 *par,
+ 		for (i = 0; i < num; i++)
+ 			buf[i] = cpu_to_be16(par[i]);
+ 		num *= 2;
+-		speed_hz = mipi_dbi_spi_cmd_max_speed(spi, num);
+ 		data = buf;
+ 	}
+ 
++	/*
++	 * Check whether pixel data bytes needs to be swapped or not
++	 */
++	if (*cmd == MIPI_DCS_WRITE_MEMORY_START && !mipi->swap_bytes)
++		bpw = 16;
++
+ 	gpiod_set_value_cansleep(mipi->dc, 1);
+-	ret = mipi_dbi_spi_transfer(spi, speed_hz, 8, data, num);
++	speed_hz = mipi_dbi_spi_cmd_max_speed(spi, num);
++	ret = mipi_dbi_spi_transfer(spi, speed_hz, bpw, data, num);
+  free:
+ 	kfree(buf);
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_dpi.c b/drivers/gpu/drm/vc4/vc4_dpi.c
+index a90f2545baee0..9c8a71d7426a0 100644
+--- a/drivers/gpu/drm/vc4/vc4_dpi.c
++++ b/drivers/gpu/drm/vc4/vc4_dpi.c
+@@ -148,35 +148,45 @@ static void vc4_dpi_encoder_enable(struct drm_encoder *encoder)
+ 	}
+ 	drm_connector_list_iter_end(&conn_iter);
+ 
+-	if (connector && connector->display_info.num_bus_formats) {
+-		u32 bus_format = connector->display_info.bus_formats[0];
+-
+-		switch (bus_format) {
+-		case MEDIA_BUS_FMT_RGB888_1X24:
+-			dpi_c |= VC4_SET_FIELD(DPI_FORMAT_24BIT_888_RGB,
+-					       DPI_FORMAT);
+-			break;
+-		case MEDIA_BUS_FMT_BGR888_1X24:
+-			dpi_c |= VC4_SET_FIELD(DPI_FORMAT_24BIT_888_RGB,
+-					       DPI_FORMAT);
+-			dpi_c |= VC4_SET_FIELD(DPI_ORDER_BGR, DPI_ORDER);
+-			break;
+-		case MEDIA_BUS_FMT_RGB666_1X24_CPADHI:
+-			dpi_c |= VC4_SET_FIELD(DPI_FORMAT_18BIT_666_RGB_2,
+-					       DPI_FORMAT);
+-			break;
+-		case MEDIA_BUS_FMT_RGB666_1X18:
+-			dpi_c |= VC4_SET_FIELD(DPI_FORMAT_18BIT_666_RGB_1,
+-					       DPI_FORMAT);
+-			break;
+-		case MEDIA_BUS_FMT_RGB565_1X16:
+-			dpi_c |= VC4_SET_FIELD(DPI_FORMAT_16BIT_565_RGB_3,
+-					       DPI_FORMAT);
+-			break;
+-		default:
+-			DRM_ERROR("Unknown media bus format %d\n", bus_format);
+-			break;
++	if (connector) {
++		if (connector->display_info.num_bus_formats) {
++			u32 bus_format = connector->display_info.bus_formats[0];
++
++			switch (bus_format) {
++			case MEDIA_BUS_FMT_RGB888_1X24:
++				dpi_c |= VC4_SET_FIELD(DPI_FORMAT_24BIT_888_RGB,
++						       DPI_FORMAT);
++				break;
++			case MEDIA_BUS_FMT_BGR888_1X24:
++				dpi_c |= VC4_SET_FIELD(DPI_FORMAT_24BIT_888_RGB,
++						       DPI_FORMAT);
++				dpi_c |= VC4_SET_FIELD(DPI_ORDER_BGR,
++						       DPI_ORDER);
++				break;
++			case MEDIA_BUS_FMT_RGB666_1X24_CPADHI:
++				dpi_c |= VC4_SET_FIELD(DPI_FORMAT_18BIT_666_RGB_2,
++						       DPI_FORMAT);
++				break;
++			case MEDIA_BUS_FMT_RGB666_1X18:
++				dpi_c |= VC4_SET_FIELD(DPI_FORMAT_18BIT_666_RGB_1,
++						       DPI_FORMAT);
++				break;
++			case MEDIA_BUS_FMT_RGB565_1X16:
++				dpi_c |= VC4_SET_FIELD(DPI_FORMAT_16BIT_565_RGB_1,
++						       DPI_FORMAT);
++				break;
++			default:
++				DRM_ERROR("Unknown media bus format %d\n",
++					  bus_format);
++				break;
++			}
+ 		}
++
++		if (connector->display_info.bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
++			dpi_c |= DPI_PIXEL_CLK_INVERT;
++
++		if (connector->display_info.bus_flags & DRM_BUS_FLAG_DE_LOW)
++			dpi_c |= DPI_OUTPUT_ENABLE_INVERT;
+ 	} else {
+ 		/* Default to 24bit if no connector found. */
+ 		dpi_c |= VC4_SET_FIELD(DPI_FORMAT_24BIT_888_RGB, DPI_FORMAT);
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 539ebf85fd7c0..7e8620838de9c 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -567,11 +567,12 @@ static void vc5_hdmi_set_timings(struct vc4_hdmi *vc4_hdmi,
+ 		     VC4_SET_FIELD(mode->crtc_vdisplay, VC5_HDMI_VERTA_VAL));
+ 	u32 vertb = (VC4_SET_FIELD(mode->htotal >> (2 - pixel_rep),
+ 				   VC5_HDMI_VERTB_VSPO) |
+-		     VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end,
++		     VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end +
++				   interlaced,
+ 				   VC4_HDMI_VERTB_VBP));
+ 	u32 vertb_even = (VC4_SET_FIELD(0, VC5_HDMI_VERTB_VSPO) |
+ 			  VC4_SET_FIELD(mode->crtc_vtotal -
+-					mode->crtc_vsync_end - interlaced,
++					mode->crtc_vsync_end,
+ 					VC4_HDMI_VERTB_VBP));
+ 
+ 	HDMI_WRITE(HDMI_VEC_INTERFACE_XBAR, 0x354021);
+diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
+index 95fa6fc052a72..f8f2fc3d15f73 100644
+--- a/drivers/gpu/drm/vc4/vc4_hvs.c
++++ b/drivers/gpu/drm/vc4/vc4_hvs.c
+@@ -677,6 +677,17 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ 		      SCALER_DISPCTRL_DSPEISLUR(2) |
+ 		      SCALER_DISPCTRL_SCLEIRQ);
+ 
++	/* Set AXI panic mode.
++	 * VC4 panics when < 2 lines in FIFO.
++	 * VC5 panics when less than 1 line in the FIFO.
++	 */
++	dispctrl &= ~(SCALER_DISPCTRL_PANIC0_MASK |
++		      SCALER_DISPCTRL_PANIC1_MASK |
++		      SCALER_DISPCTRL_PANIC2_MASK);
++	dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC0);
++	dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC1);
++	dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC2);
++
+ 	HVS_WRITE(SCALER_DISPCTRL, dispctrl);
+ 
+ 	ret = devm_request_irq(dev, platform_get_irq(pdev, 0),
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index 4df222a830493..2e03c16c60bb1 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -72,11 +72,13 @@ static const struct hvs_format {
+ 		.drm = DRM_FORMAT_ARGB1555,
+ 		.hvs = HVS_PIXEL_FORMAT_RGBA5551,
+ 		.pixel_order = HVS_PIXEL_ORDER_ABGR,
++		.pixel_order_hvs5 = HVS_PIXEL_ORDER_ARGB,
+ 	},
+ 	{
+ 		.drm = DRM_FORMAT_XRGB1555,
+ 		.hvs = HVS_PIXEL_FORMAT_RGBA5551,
+ 		.pixel_order = HVS_PIXEL_ORDER_ABGR,
++		.pixel_order_hvs5 = HVS_PIXEL_ORDER_ARGB,
+ 	},
+ 	{
+ 		.drm = DRM_FORMAT_RGB888,
+diff --git a/drivers/gpu/drm/vc4/vc4_regs.h b/drivers/gpu/drm/vc4/vc4_regs.h
+index be2c32a519b31..a324ef88ceafb 100644
+--- a/drivers/gpu/drm/vc4/vc4_regs.h
++++ b/drivers/gpu/drm/vc4/vc4_regs.h
+@@ -220,6 +220,12 @@
+ #define SCALER_DISPCTRL                         0x00000000
+ /* Global register for clock gating the HVS */
+ # define SCALER_DISPCTRL_ENABLE			BIT(31)
++# define SCALER_DISPCTRL_PANIC0_MASK		VC4_MASK(25, 24)
++# define SCALER_DISPCTRL_PANIC0_SHIFT		24
++# define SCALER_DISPCTRL_PANIC1_MASK		VC4_MASK(27, 26)
++# define SCALER_DISPCTRL_PANIC1_SHIFT		26
++# define SCALER_DISPCTRL_PANIC2_MASK		VC4_MASK(29, 28)
++# define SCALER_DISPCTRL_PANIC2_SHIFT		28
+ # define SCALER_DISPCTRL_DSP3_MUX_MASK		VC4_MASK(19, 18)
+ # define SCALER_DISPCTRL_DSP3_MUX_SHIFT		18
+ 
+diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
+index 1681486860019..49fa59e09187b 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_object.c
++++ b/drivers/gpu/drm/virtio/virtgpu_object.c
+@@ -159,8 +159,9 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
+ 	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base.base);
+ 	if (IS_ERR(shmem->pages)) {
+ 		drm_gem_shmem_unpin(&bo->base.base);
++		ret = PTR_ERR(shmem->pages);
+ 		shmem->pages = NULL;
+-		return PTR_ERR(shmem->pages);
++		return ret;
+ 	}
+ 
+ 	if (use_dma_api) {
+diff --git a/drivers/gpu/drm/vkms/vkms_drv.c b/drivers/gpu/drm/vkms/vkms_drv.c
+index cb0b6230c22ce..838428988f79d 100644
+--- a/drivers/gpu/drm/vkms/vkms_drv.c
++++ b/drivers/gpu/drm/vkms/vkms_drv.c
+@@ -61,7 +61,8 @@ static void vkms_release(struct drm_device *dev)
+ {
+ 	struct vkms_device *vkms = container_of(dev, struct vkms_device, drm);
+ 
+-	destroy_workqueue(vkms->output.composer_workq);
++	if (vkms->output.composer_workq)
++		destroy_workqueue(vkms->output.composer_workq);
+ }
+ 
+ static void vkms_atomic_commit_tail(struct drm_atomic_state *old_state)
+diff --git a/drivers/gpu/host1x/hw/syncpt_hw.c b/drivers/gpu/host1x/hw/syncpt_hw.c
+index dd39d67ccec36..8cf35b2eff3db 100644
+--- a/drivers/gpu/host1x/hw/syncpt_hw.c
++++ b/drivers/gpu/host1x/hw/syncpt_hw.c
+@@ -106,9 +106,6 @@ static void syncpt_assign_to_channel(struct host1x_syncpt *sp,
+ #if HOST1X_HW >= 6
+ 	struct host1x *host = sp->host;
+ 
+-	if (!host->hv_regs)
+-		return;
+-
+ 	host1x_sync_writel(host,
+ 			   HOST1X_SYNC_SYNCPT_CH_APP_CH(ch ? ch->id : 0xff),
+ 			   HOST1X_SYNC_SYNCPT_CH_APP(sp->id));
+diff --git a/drivers/gpu/ipu-v3/ipu-common.c b/drivers/gpu/ipu-v3/ipu-common.c
+index d166ee262ce43..22dae3f510517 100644
+--- a/drivers/gpu/ipu-v3/ipu-common.c
++++ b/drivers/gpu/ipu-v3/ipu-common.c
+@@ -1168,6 +1168,7 @@ static int ipu_add_client_devices(struct ipu_soc *ipu, unsigned long ipu_base)
+ 		pdev = platform_device_alloc(reg->name, id++);
+ 		if (!pdev) {
+ 			ret = -ENOMEM;
++			of_node_put(of_node);
+ 			goto err_register;
+ 		}
+ 
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index f85c6e3309a09..6865cab33cf8a 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -95,6 +95,7 @@ struct asus_kbd_leds {
+ 	struct hid_device *hdev;
+ 	struct work_struct work;
+ 	unsigned int brightness;
++	spinlock_t lock;
+ 	bool removed;
+ };
+ 
+@@ -397,24 +398,42 @@ static int asus_kbd_get_functions(struct hid_device *hdev,
+ 	return ret;
+ }
+ 
++static void asus_schedule_work(struct asus_kbd_leds *led)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&led->lock, flags);
++	if (!led->removed)
++		schedule_work(&led->work);
++	spin_unlock_irqrestore(&led->lock, flags);
++}
++
+ static void asus_kbd_backlight_set(struct led_classdev *led_cdev,
+ 				   enum led_brightness brightness)
+ {
+ 	struct asus_kbd_leds *led = container_of(led_cdev, struct asus_kbd_leds,
+ 						 cdev);
+-	if (led->brightness == brightness)
+-		return;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&led->lock, flags);
+ 	led->brightness = brightness;
+-	schedule_work(&led->work);
++	spin_unlock_irqrestore(&led->lock, flags);
++
++	asus_schedule_work(led);
+ }
+ 
+ static enum led_brightness asus_kbd_backlight_get(struct led_classdev *led_cdev)
+ {
+ 	struct asus_kbd_leds *led = container_of(led_cdev, struct asus_kbd_leds,
+ 						 cdev);
++	enum led_brightness brightness;
++	unsigned long flags;
++
++	spin_lock_irqsave(&led->lock, flags);
++	brightness = led->brightness;
++	spin_unlock_irqrestore(&led->lock, flags);
+ 
+-	return led->brightness;
++	return brightness;
+ }
+ 
+ static void asus_kbd_backlight_work(struct work_struct *work)
+@@ -422,11 +441,11 @@ static void asus_kbd_backlight_work(struct work_struct *work)
+ 	struct asus_kbd_leds *led = container_of(work, struct asus_kbd_leds, work);
+ 	u8 buf[] = { FEATURE_KBD_REPORT_ID, 0xba, 0xc5, 0xc4, 0x00 };
+ 	int ret;
++	unsigned long flags;
+ 
+-	if (led->removed)
+-		return;
+-
++	spin_lock_irqsave(&led->lock, flags);
+ 	buf[4] = led->brightness;
++	spin_unlock_irqrestore(&led->lock, flags);
+ 
+ 	ret = asus_kbd_set_report(led->hdev, buf, sizeof(buf));
+ 	if (ret < 0)
+@@ -488,6 +507,7 @@ static int asus_kbd_register_leds(struct hid_device *hdev)
+ 	drvdata->kbd_backlight->cdev.brightness_set = asus_kbd_backlight_set;
+ 	drvdata->kbd_backlight->cdev.brightness_get = asus_kbd_backlight_get;
+ 	INIT_WORK(&drvdata->kbd_backlight->work, asus_kbd_backlight_work);
++	spin_lock_init(&drvdata->kbd_backlight->lock);
+ 
+ 	ret = devm_led_classdev_register(&hdev->dev, &drvdata->kbd_backlight->cdev);
+ 	if (ret < 0) {
+@@ -1016,9 +1036,13 @@ err_stop_hw:
+ static void asus_remove(struct hid_device *hdev)
+ {
+ 	struct asus_drvdata *drvdata = hid_get_drvdata(hdev);
++	unsigned long flags;
+ 
+ 	if (drvdata->kbd_backlight) {
++		spin_lock_irqsave(&drvdata->kbd_backlight->lock, flags);
+ 		drvdata->kbd_backlight->removed = true;
++		spin_unlock_irqrestore(&drvdata->kbd_backlight->lock, flags);
++
+ 		cancel_work_sync(&drvdata->kbd_backlight->work);
+ 	}
+ 
+diff --git a/drivers/hid/hid-bigbenff.c b/drivers/hid/hid-bigbenff.c
+index e8b16665860d6..a02cb517b4c47 100644
+--- a/drivers/hid/hid-bigbenff.c
++++ b/drivers/hid/hid-bigbenff.c
+@@ -174,6 +174,7 @@ static __u8 pid0902_rdesc_fixed[] = {
+ struct bigben_device {
+ 	struct hid_device *hid;
+ 	struct hid_report *report;
++	spinlock_t lock;
+ 	bool removed;
+ 	u8 led_state;         /* LED1 = 1 .. LED4 = 8 */
+ 	u8 right_motor_on;    /* right motor off/on 0/1 */
+@@ -184,18 +185,39 @@ struct bigben_device {
+ 	struct work_struct worker;
+ };
+ 
++static inline void bigben_schedule_work(struct bigben_device *bigben)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&bigben->lock, flags);
++	if (!bigben->removed)
++		schedule_work(&bigben->worker);
++	spin_unlock_irqrestore(&bigben->lock, flags);
++}
+ 
+ static void bigben_worker(struct work_struct *work)
+ {
+ 	struct bigben_device *bigben = container_of(work,
+ 		struct bigben_device, worker);
+ 	struct hid_field *report_field = bigben->report->field[0];
+-
+-	if (bigben->removed || !report_field)
++	bool do_work_led = false;
++	bool do_work_ff = false;
++	u8 *buf;
++	u32 len;
++	unsigned long flags;
++
++	buf = hid_alloc_report_buf(bigben->report, GFP_KERNEL);
++	if (!buf)
+ 		return;
+ 
++	len = hid_report_len(bigben->report);
++
++	/* LED work */
++	spin_lock_irqsave(&bigben->lock, flags);
++
+ 	if (bigben->work_led) {
+ 		bigben->work_led = false;
++		do_work_led = true;
+ 		report_field->value[0] = 0x01; /* 1 = led message */
+ 		report_field->value[1] = 0x08; /* reserved value, always 8 */
+ 		report_field->value[2] = bigben->led_state;
+@@ -204,11 +226,22 @@ static void bigben_worker(struct work_struct *work)
+ 		report_field->value[5] = 0x00; /* padding */
+ 		report_field->value[6] = 0x00; /* padding */
+ 		report_field->value[7] = 0x00; /* padding */
+-		hid_hw_request(bigben->hid, bigben->report, HID_REQ_SET_REPORT);
++		hid_output_report(bigben->report, buf);
++	}
++
++	spin_unlock_irqrestore(&bigben->lock, flags);
++
++	if (do_work_led) {
++		hid_hw_raw_request(bigben->hid, bigben->report->id, buf, len,
++				   bigben->report->type, HID_REQ_SET_REPORT);
+ 	}
+ 
++	/* FF work */
++	spin_lock_irqsave(&bigben->lock, flags);
++
+ 	if (bigben->work_ff) {
+ 		bigben->work_ff = false;
++		do_work_ff = true;
+ 		report_field->value[0] = 0x02; /* 2 = rumble effect message */
+ 		report_field->value[1] = 0x08; /* reserved value, always 8 */
+ 		report_field->value[2] = bigben->right_motor_on;
+@@ -217,8 +250,17 @@ static void bigben_worker(struct work_struct *work)
+ 		report_field->value[5] = 0x00; /* padding */
+ 		report_field->value[6] = 0x00; /* padding */
+ 		report_field->value[7] = 0x00; /* padding */
+-		hid_hw_request(bigben->hid, bigben->report, HID_REQ_SET_REPORT);
++		hid_output_report(bigben->report, buf);
++	}
++
++	spin_unlock_irqrestore(&bigben->lock, flags);
++
++	if (do_work_ff) {
++		hid_hw_raw_request(bigben->hid, bigben->report->id, buf, len,
++				   bigben->report->type, HID_REQ_SET_REPORT);
+ 	}
++
++	kfree(buf);
+ }
+ 
+ static int hid_bigben_play_effect(struct input_dev *dev, void *data,
+@@ -228,6 +270,7 @@ static int hid_bigben_play_effect(struct input_dev *dev, void *data,
+ 	struct bigben_device *bigben = hid_get_drvdata(hid);
+ 	u8 right_motor_on;
+ 	u8 left_motor_force;
++	unsigned long flags;
+ 
+ 	if (!bigben) {
+ 		hid_err(hid, "no device data\n");
+@@ -242,10 +285,13 @@ static int hid_bigben_play_effect(struct input_dev *dev, void *data,
+ 
+ 	if (right_motor_on != bigben->right_motor_on ||
+ 			left_motor_force != bigben->left_motor_force) {
++		spin_lock_irqsave(&bigben->lock, flags);
+ 		bigben->right_motor_on   = right_motor_on;
+ 		bigben->left_motor_force = left_motor_force;
+ 		bigben->work_ff = true;
+-		schedule_work(&bigben->worker);
++		spin_unlock_irqrestore(&bigben->lock, flags);
++
++		bigben_schedule_work(bigben);
+ 	}
+ 
+ 	return 0;
+@@ -259,6 +305,7 @@ static void bigben_set_led(struct led_classdev *led,
+ 	struct bigben_device *bigben = hid_get_drvdata(hid);
+ 	int n;
+ 	bool work;
++	unsigned long flags;
+ 
+ 	if (!bigben) {
+ 		hid_err(hid, "no device data\n");
+@@ -267,6 +314,7 @@ static void bigben_set_led(struct led_classdev *led,
+ 
+ 	for (n = 0; n < NUM_LEDS; n++) {
+ 		if (led == bigben->leds[n]) {
++			spin_lock_irqsave(&bigben->lock, flags);
+ 			if (value == LED_OFF) {
+ 				work = (bigben->led_state & BIT(n));
+ 				bigben->led_state &= ~BIT(n);
+@@ -274,10 +322,11 @@ static void bigben_set_led(struct led_classdev *led,
+ 				work = !(bigben->led_state & BIT(n));
+ 				bigben->led_state |= BIT(n);
+ 			}
++			spin_unlock_irqrestore(&bigben->lock, flags);
+ 
+ 			if (work) {
+ 				bigben->work_led = true;
+-				schedule_work(&bigben->worker);
++				bigben_schedule_work(bigben);
+ 			}
+ 			return;
+ 		}
+@@ -307,8 +356,12 @@ static enum led_brightness bigben_get_led(struct led_classdev *led)
+ static void bigben_remove(struct hid_device *hid)
+ {
+ 	struct bigben_device *bigben = hid_get_drvdata(hid);
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&bigben->lock, flags);
+ 	bigben->removed = true;
++	spin_unlock_irqrestore(&bigben->lock, flags);
++
+ 	cancel_work_sync(&bigben->worker);
+ 	hid_hw_stop(hid);
+ }
+@@ -318,7 +371,6 @@ static int bigben_probe(struct hid_device *hid,
+ {
+ 	struct bigben_device *bigben;
+ 	struct hid_input *hidinput;
+-	struct list_head *report_list;
+ 	struct led_classdev *led;
+ 	char *name;
+ 	size_t name_sz;
+@@ -343,14 +395,12 @@ static int bigben_probe(struct hid_device *hid,
+ 		return error;
+ 	}
+ 
+-	report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list;
+-	if (list_empty(report_list)) {
++	bigben->report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 8);
++	if (!bigben->report) {
+ 		hid_err(hid, "no output report found\n");
+ 		error = -ENODEV;
+ 		goto error_hw_stop;
+ 	}
+-	bigben->report = list_entry(report_list->next,
+-		struct hid_report, list);
+ 
+ 	if (list_empty(&hid->inputs)) {
+ 		hid_err(hid, "no inputs found\n");
+@@ -362,6 +412,7 @@ static int bigben_probe(struct hid_device *hid,
+ 	set_bit(FF_RUMBLE, hidinput->input->ffbit);
+ 
+ 	INIT_WORK(&bigben->worker, bigben_worker);
++	spin_lock_init(&bigben->lock);
+ 
+ 	error = input_ff_create_memless(hidinput->input, NULL,
+ 		hid_bigben_play_effect);
+@@ -402,7 +453,7 @@ static int bigben_probe(struct hid_device *hid,
+ 	bigben->left_motor_force = 0;
+ 	bigben->work_led = true;
+ 	bigben->work_ff = true;
+-	schedule_work(&bigben->worker);
++	bigben_schedule_work(bigben);
+ 
+ 	hid_info(hid, "LED and force feedback support for BigBen gamepad\n");
+ 
+diff --git a/drivers/hid/hid-debug.c b/drivers/hid/hid-debug.c
+index f4e2e69377589..1f60a381ae63e 100644
+--- a/drivers/hid/hid-debug.c
++++ b/drivers/hid/hid-debug.c
+@@ -933,6 +933,7 @@ static const char *keys[KEY_MAX + 1] = {
+ 	[KEY_VOICECOMMAND] = "VoiceCommand",
+ 	[KEY_EMOJI_PICKER] = "EmojiPicker",
+ 	[KEY_DICTATE] = "Dictate",
++	[KEY_MICMUTE] = "MicrophoneMute",
+ 	[KEY_BRIGHTNESS_MIN] = "BrightnessMin",
+ 	[KEY_BRIGHTNESS_MAX] = "BrightnessMax",
+ 	[KEY_BRIGHTNESS_AUTO] = "BrightnessAuto",
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 75a4d8d6bb0fd..3399953256d85 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -675,6 +675,14 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ 			break;
+ 		}
+ 
++		if ((usage->hid & 0xf0) == 0xa0) {	/* SystemControl */
++			switch (usage->hid & 0xf) {
++			case 0x9: map_key_clear(KEY_MICMUTE); break;
++			default: goto ignore;
++			}
++			break;
++		}
++
+ 		if ((usage->hid & 0xf0) == 0xb0) {	/* SC - Display */
+ 			switch (usage->hid & 0xf) {
+ 			case 0x05: map_key_clear(KEY_SWITCHVIDEOMODE); break;
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 66b1051620390..f5ea8e1d84452 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -3763,6 +3763,7 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	bool connected;
+ 	unsigned int connect_mask = HID_CONNECT_DEFAULT;
+ 	struct hidpp_ff_private_data data;
++	bool will_restart = false;
+ 
+ 	/* report_fixup needs drvdata to be set before we call hid_parse */
+ 	hidpp = devm_kzalloc(&hdev->dev, sizeof(*hidpp), GFP_KERNEL);
+@@ -3818,6 +3819,10 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 			return ret;
+ 	}
+ 
++	if (hidpp->quirks & HIDPP_QUIRK_DELAYED_INIT ||
++	    hidpp->quirks & HIDPP_QUIRK_UNIFYING)
++		will_restart = true;
++
+ 	INIT_WORK(&hidpp->work, delayed_work_cb);
+ 	mutex_init(&hidpp->send_mutex);
+ 	init_waitqueue_head(&hidpp->wait);
+@@ -3832,7 +3837,7 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	 * Plain USB connections need to actually call start and open
+ 	 * on the transport driver to allow incoming data.
+ 	 */
+-	ret = hid_hw_start(hdev, 0);
++	ret = hid_hw_start(hdev, will_restart ? 0 : connect_mask);
+ 	if (ret) {
+ 		hid_err(hdev, "hw start failed\n");
+ 		goto hid_hw_start_fail;
+@@ -3869,6 +3874,7 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 			hidpp->wireless_feature_index = 0;
+ 		else if (ret)
+ 			goto hid_hw_init_fail;
++		ret = 0;
+ 	}
+ 
+ 	if (connected && (hidpp->quirks & HIDPP_QUIRK_CLASS_WTP)) {
+@@ -3883,19 +3889,21 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 
+ 	hidpp_connect_event(hidpp);
+ 
+-	/* Reset the HID node state */
+-	hid_device_io_stop(hdev);
+-	hid_hw_close(hdev);
+-	hid_hw_stop(hdev);
++	if (will_restart) {
++		/* Reset the HID node state */
++		hid_device_io_stop(hdev);
++		hid_hw_close(hdev);
++		hid_hw_stop(hdev);
+ 
+-	if (hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT)
+-		connect_mask &= ~HID_CONNECT_HIDINPUT;
++		if (hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT)
++			connect_mask &= ~HID_CONNECT_HIDINPUT;
+ 
+-	/* Now export the actual inputs and hidraw nodes to the world */
+-	ret = hid_hw_start(hdev, connect_mask);
+-	if (ret) {
+-		hid_err(hdev, "%s:hid_hw_start returned error\n", __func__);
+-		goto hid_hw_start_fail;
++		/* Now export the actual inputs and hidraw nodes to the world */
++		ret = hid_hw_start(hdev, connect_mask);
++		if (ret) {
++			hid_err(hdev, "%s:hid_hw_start returned error\n", __func__);
++			goto hid_hw_start_fail;
++		}
+ 	}
+ 
+ 	if (hidpp->quirks & HIDPP_QUIRK_CLASS_G920) {
+diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
+index 42b84ebff0579..eaae5de2ab616 100644
+--- a/drivers/hwmon/coretemp.c
++++ b/drivers/hwmon/coretemp.c
+@@ -550,66 +550,49 @@ static void coretemp_remove_core(struct platform_data *pdata, int indx)
+ 		ida_free(&pdata->ida, indx - BASE_SYSFS_ATTR_NO);
+ }
+ 
+-static int coretemp_probe(struct platform_device *pdev)
++static int coretemp_device_add(int zoneid)
+ {
+-	struct device *dev = &pdev->dev;
++	struct platform_device *pdev;
+ 	struct platform_data *pdata;
++	int err;
+ 
+ 	/* Initialize the per-zone data structures */
+-	pdata = devm_kzalloc(dev, sizeof(struct platform_data), GFP_KERNEL);
++	pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
+ 	if (!pdata)
+ 		return -ENOMEM;
+ 
+-	pdata->pkg_id = pdev->id;
++	pdata->pkg_id = zoneid;
+ 	ida_init(&pdata->ida);
+-	platform_set_drvdata(pdev, pdata);
+ 
+-	pdata->hwmon_dev = devm_hwmon_device_register_with_groups(dev, DRVNAME,
+-								  pdata, NULL);
+-	return PTR_ERR_OR_ZERO(pdata->hwmon_dev);
+-}
+-
+-static int coretemp_remove(struct platform_device *pdev)
+-{
+-	struct platform_data *pdata = platform_get_drvdata(pdev);
+-	int i;
++	pdev = platform_device_alloc(DRVNAME, zoneid);
++	if (!pdev) {
++		err = -ENOMEM;
++		goto err_free_pdata;
++	}
+ 
+-	for (i = MAX_CORE_DATA - 1; i >= 0; --i)
+-		if (pdata->core_data[i])
+-			coretemp_remove_core(pdata, i);
++	err = platform_device_add(pdev);
++	if (err)
++		goto err_put_dev;
+ 
+-	ida_destroy(&pdata->ida);
++	platform_set_drvdata(pdev, pdata);
++	zone_devices[zoneid] = pdev;
+ 	return 0;
+-}
+ 
+-static struct platform_driver coretemp_driver = {
+-	.driver = {
+-		.name = DRVNAME,
+-	},
+-	.probe = coretemp_probe,
+-	.remove = coretemp_remove,
+-};
++err_put_dev:
++	platform_device_put(pdev);
++err_free_pdata:
++	kfree(pdata);
++	return err;
++}
+ 
+-static struct platform_device *coretemp_device_add(unsigned int cpu)
++static void coretemp_device_remove(int zoneid)
+ {
+-	int err, zoneid = topology_logical_die_id(cpu);
+-	struct platform_device *pdev;
+-
+-	if (zoneid < 0)
+-		return ERR_PTR(-ENOMEM);
+-
+-	pdev = platform_device_alloc(DRVNAME, zoneid);
+-	if (!pdev)
+-		return ERR_PTR(-ENOMEM);
+-
+-	err = platform_device_add(pdev);
+-	if (err) {
+-		platform_device_put(pdev);
+-		return ERR_PTR(err);
+-	}
++	struct platform_device *pdev = zone_devices[zoneid];
++	struct platform_data *pdata = platform_get_drvdata(pdev);
+ 
+-	zone_devices[zoneid] = pdev;
+-	return pdev;
++	ida_destroy(&pdata->ida);
++	kfree(pdata);
++	platform_device_unregister(pdev);
+ }
+ 
+ static int coretemp_cpu_online(unsigned int cpu)
+@@ -633,7 +616,10 @@ static int coretemp_cpu_online(unsigned int cpu)
+ 	if (!cpu_has(c, X86_FEATURE_DTHERM))
+ 		return -ENODEV;
+ 
+-	if (!pdev) {
++	pdata = platform_get_drvdata(pdev);
++	if (!pdata->hwmon_dev) {
++		struct device *hwmon;
++
+ 		/* Check the microcode version of the CPU */
+ 		if (chk_ucode_version(cpu))
+ 			return -EINVAL;
+@@ -644,9 +630,11 @@ static int coretemp_cpu_online(unsigned int cpu)
+ 		 * online. So, initialize per-pkg data structures and
+ 		 * then bring this core online.
+ 		 */
+-		pdev = coretemp_device_add(cpu);
+-		if (IS_ERR(pdev))
+-			return PTR_ERR(pdev);
++		hwmon = hwmon_device_register_with_groups(&pdev->dev, DRVNAME,
++							  pdata, NULL);
++		if (IS_ERR(hwmon))
++			return PTR_ERR(hwmon);
++		pdata->hwmon_dev = hwmon;
+ 
+ 		/*
+ 		 * Check whether pkgtemp support is available.
+@@ -656,7 +644,6 @@ static int coretemp_cpu_online(unsigned int cpu)
+ 			coretemp_add_core(pdev, cpu, 1);
+ 	}
+ 
+-	pdata = platform_get_drvdata(pdev);
+ 	/*
+ 	 * Check whether a thread sibling is already online. If not add the
+ 	 * interface for this CPU core.
+@@ -675,18 +662,14 @@ static int coretemp_cpu_offline(unsigned int cpu)
+ 	struct temp_data *tdata;
+ 	int i, indx = -1, target;
+ 
+-	/*
+-	 * Don't execute this on suspend as the device remove locks
+-	 * up the machine.
+-	 */
++	/* No need to tear down any interfaces for suspend */
+ 	if (cpuhp_tasks_frozen)
+ 		return 0;
+ 
+ 	/* If the physical CPU device does not exist, just return */
+-	if (!pdev)
+-		return 0;
+-
+ 	pd = platform_get_drvdata(pdev);
++	if (!pd->hwmon_dev)
++		return 0;
+ 
+ 	for (i = 0; i < NUM_REAL_CORES; i++) {
+ 		if (pd->cpu_map[i] == topology_core_id(cpu)) {
+@@ -718,13 +701,14 @@ static int coretemp_cpu_offline(unsigned int cpu)
+ 	}
+ 
+ 	/*
+-	 * If all cores in this pkg are offline, remove the device. This
+-	 * will invoke the platform driver remove function, which cleans up
+-	 * the rest.
++	 * If all cores in this pkg are offline, remove the interface.
+ 	 */
++	tdata = pd->core_data[PKG_SYSFS_ATTR_NO];
+ 	if (cpumask_empty(&pd->cpumask)) {
+-		zone_devices[topology_logical_die_id(cpu)] = NULL;
+-		platform_device_unregister(pdev);
++		if (tdata)
++			coretemp_remove_core(pd, PKG_SYSFS_ATTR_NO);
++		hwmon_device_unregister(pd->hwmon_dev);
++		pd->hwmon_dev = NULL;
+ 		return 0;
+ 	}
+ 
+@@ -732,7 +716,6 @@ static int coretemp_cpu_offline(unsigned int cpu)
+ 	 * Check whether this core is the target for the package
+ 	 * interface. We need to assign it to some other cpu.
+ 	 */
+-	tdata = pd->core_data[PKG_SYSFS_ATTR_NO];
+ 	if (tdata && tdata->cpu == cpu) {
+ 		target = cpumask_first(&pd->cpumask);
+ 		mutex_lock(&tdata->update_lock);
+@@ -751,7 +734,7 @@ static enum cpuhp_state coretemp_hp_online;
+ 
+ static int __init coretemp_init(void)
+ {
+-	int err;
++	int i, err;
+ 
+ 	/*
+ 	 * CPUID.06H.EAX[0] indicates whether the CPU has thermal
+@@ -767,20 +750,22 @@ static int __init coretemp_init(void)
+ 	if (!zone_devices)
+ 		return -ENOMEM;
+ 
+-	err = platform_driver_register(&coretemp_driver);
+-	if (err)
+-		goto outzone;
++	for (i = 0; i < max_zones; i++) {
++		err = coretemp_device_add(i);
++		if (err)
++			goto outzone;
++	}
+ 
+ 	err = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "hwmon/coretemp:online",
+ 				coretemp_cpu_online, coretemp_cpu_offline);
+ 	if (err < 0)
+-		goto outdrv;
++		goto outzone;
+ 	coretemp_hp_online = err;
+ 	return 0;
+ 
+-outdrv:
+-	platform_driver_unregister(&coretemp_driver);
+ outzone:
++	while (i--)
++		coretemp_device_remove(i);
+ 	kfree(zone_devices);
+ 	return err;
+ }
+@@ -788,8 +773,11 @@ module_init(coretemp_init)
+ 
+ static void __exit coretemp_exit(void)
+ {
++	int i;
++
+ 	cpuhp_remove_state(coretemp_hp_online);
+-	platform_driver_unregister(&coretemp_driver);
++	for (i = 0; i < max_zones; i++)
++		coretemp_device_remove(i);
+ 	kfree(zone_devices);
+ }
+ module_exit(coretemp_exit)
+diff --git a/drivers/hwmon/ltc2945.c b/drivers/hwmon/ltc2945.c
+index ba9c868a8641e..65d792f184255 100644
+--- a/drivers/hwmon/ltc2945.c
++++ b/drivers/hwmon/ltc2945.c
+@@ -248,6 +248,8 @@ static ssize_t ltc2945_value_store(struct device *dev,
+ 
+ 	/* convert to register value, then clamp and write result */
+ 	regval = ltc2945_val_to_reg(dev, reg, val);
++	if (regval < 0)
++		return regval;
+ 	if (is_power_reg(reg)) {
+ 		regval = clamp_val(regval, 0, 0xffffff);
+ 		regbuf[0] = regval >> 16;
+diff --git a/drivers/hwmon/mlxreg-fan.c b/drivers/hwmon/mlxreg-fan.c
+index bd8f5a3aaad9c..052c897a635d5 100644
+--- a/drivers/hwmon/mlxreg-fan.c
++++ b/drivers/hwmon/mlxreg-fan.c
+@@ -127,6 +127,12 @@ mlxreg_fan_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
+ 			if (err)
+ 				return err;
+ 
++			if (MLXREG_FAN_GET_FAULT(regval, tacho->mask)) {
++				/* FAN is broken - return zero for FAN speed. */
++				*val = 0;
++				return 0;
++			}
++
+ 			*val = MLXREG_FAN_GET_RPM(regval, fan->divider,
+ 						  fan->samples);
+ 			break;
+diff --git a/drivers/iio/accel/mma9551_core.c b/drivers/iio/accel/mma9551_core.c
+index 666e7a04a7d7a..9bb5c2fea08cf 100644
+--- a/drivers/iio/accel/mma9551_core.c
++++ b/drivers/iio/accel/mma9551_core.c
+@@ -296,9 +296,12 @@ int mma9551_read_config_word(struct i2c_client *client, u8 app_id,
+ 
+ 	ret = mma9551_transfer(client, app_id, MMA9551_CMD_READ_CONFIG,
+ 			       reg, NULL, 0, (u8 *)&v, 2);
++	if (ret < 0)
++		return ret;
++
+ 	*val = be16_to_cpu(v);
+ 
+-	return ret;
++	return 0;
+ }
+ EXPORT_SYMBOL(mma9551_read_config_word);
+ 
+@@ -354,9 +357,12 @@ int mma9551_read_status_word(struct i2c_client *client, u8 app_id,
+ 
+ 	ret = mma9551_transfer(client, app_id, MMA9551_CMD_READ_STATUS,
+ 			       reg, NULL, 0, (u8 *)&v, 2);
++	if (ret < 0)
++		return ret;
++
+ 	*val = be16_to_cpu(v);
+ 
+-	return ret;
++	return 0;
+ }
+ EXPORT_SYMBOL(mma9551_read_status_word);
+ 
+diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
+index 88476a1a601a4..4b41f35668b20 100644
+--- a/drivers/infiniband/hw/hfi1/chip.c
++++ b/drivers/infiniband/hw/hfi1/chip.c
+@@ -1097,7 +1097,7 @@ static void read_link_down_reason(struct hfi1_devdata *dd, u8 *ldr);
+ static void handle_temp_err(struct hfi1_devdata *dd);
+ static void dc_shutdown(struct hfi1_devdata *dd);
+ static void dc_start(struct hfi1_devdata *dd);
+-static int qos_rmt_entries(struct hfi1_devdata *dd, unsigned int *mp,
++static int qos_rmt_entries(unsigned int n_krcv_queues, unsigned int *mp,
+ 			   unsigned int *np);
+ static void clear_full_mgmt_pkey(struct hfi1_pportdata *ppd);
+ static int wait_link_transfer_active(struct hfi1_devdata *dd, int wait_ms);
+@@ -13403,7 +13403,6 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
+ 	int ret;
+ 	unsigned ngroups;
+ 	int rmt_count;
+-	int user_rmt_reduced;
+ 	u32 n_usr_ctxts;
+ 	u32 send_contexts = chip_send_contexts(dd);
+ 	u32 rcv_contexts = chip_rcv_contexts(dd);
+@@ -13462,28 +13461,34 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
+ 					 (num_kernel_contexts + n_usr_ctxts),
+ 					 &node_affinity.real_cpu_mask);
+ 	/*
+-	 * The RMT entries are currently allocated as shown below:
+-	 * 1. QOS (0 to 128 entries);
+-	 * 2. FECN (num_kernel_context - 1 + num_user_contexts +
+-	 *    num_netdev_contexts);
+-	 * 3. netdev (num_netdev_contexts).
+-	 * It should be noted that FECN oversubscribe num_netdev_contexts
+-	 * entries of RMT because both netdev and PSM could allocate any receive
+-	 * context between dd->first_dyn_alloc_text and dd->num_rcv_contexts,
+-	 * and PSM FECN must reserve an RMT entry for each possible PSM receive
+-	 * context.
++	 * RMT entries are allocated as follows:
++	 * 1. QOS (0 to 128 entries)
++	 * 2. FECN (num_kernel_context - 1 [a] + num_user_contexts +
++	 *          num_netdev_contexts [b])
++	 * 3. netdev (NUM_NETDEV_MAP_ENTRIES)
++	 *
++	 * Notes:
++	 * [a] Kernel contexts (except control) are included in FECN if kernel
++	 *     TID_RDMA is active.
++	 * [b] Netdev and user contexts are randomly allocated from the same
++	 *     context pool, so FECN must cover all contexts in the pool.
+ 	 */
+-	rmt_count = qos_rmt_entries(dd, NULL, NULL) + (num_netdev_contexts * 2);
+-	if (HFI1_CAP_IS_KSET(TID_RDMA))
+-		rmt_count += num_kernel_contexts - 1;
+-	if (rmt_count + n_usr_ctxts > NUM_MAP_ENTRIES) {
+-		user_rmt_reduced = NUM_MAP_ENTRIES - rmt_count;
+-		dd_dev_err(dd,
+-			   "RMT size is reducing the number of user receive contexts from %u to %d\n",
+-			   n_usr_ctxts,
+-			   user_rmt_reduced);
+-		/* recalculate */
+-		n_usr_ctxts = user_rmt_reduced;
++	rmt_count = qos_rmt_entries(num_kernel_contexts - 1, NULL, NULL)
++		    + (HFI1_CAP_IS_KSET(TID_RDMA) ? num_kernel_contexts - 1
++						  : 0)
++		    + n_usr_ctxts
++		    + num_netdev_contexts
++		    + NUM_NETDEV_MAP_ENTRIES;
++	if (rmt_count > NUM_MAP_ENTRIES) {
++		int over = rmt_count - NUM_MAP_ENTRIES;
++		/* try to squish user contexts, minimum of 1 */
++		if (over >= n_usr_ctxts) {
++			dd_dev_err(dd, "RMT overflow: reduce the requested number of contexts\n");
++			return -EINVAL;
++		}
++		dd_dev_err(dd, "RMT overflow: reducing # user contexts from %u to %u\n",
++			   n_usr_ctxts, n_usr_ctxts - over);
++		n_usr_ctxts -= over;
+ 	}
+ 
+ 	/* the first N are kernel contexts, the rest are user/netdev contexts */
+@@ -14340,15 +14345,15 @@ static void clear_rsm_rule(struct hfi1_devdata *dd, u8 rule_index)
+ }
+ 
+ /* return the number of RSM map table entries that will be used for QOS */
+-static int qos_rmt_entries(struct hfi1_devdata *dd, unsigned int *mp,
++static int qos_rmt_entries(unsigned int n_krcv_queues, unsigned int *mp,
+ 			   unsigned int *np)
+ {
+ 	int i;
+ 	unsigned int m, n;
+-	u8 max_by_vl = 0;
++	uint max_by_vl = 0;
+ 
+ 	/* is QOS active at all? */
+-	if (dd->n_krcv_queues <= MIN_KERNEL_KCTXTS ||
++	if (n_krcv_queues < MIN_KERNEL_KCTXTS ||
+ 	    num_vls == 1 ||
+ 	    krcvqsset <= 1)
+ 		goto no_qos;
+@@ -14406,7 +14411,7 @@ static void init_qos(struct hfi1_devdata *dd, struct rsm_map_table *rmt)
+ 
+ 	if (!rmt)
+ 		goto bail;
+-	rmt_entries = qos_rmt_entries(dd, &m, &n);
++	rmt_entries = qos_rmt_entries(dd->n_krcv_queues - 1, &m, &n);
+ 	if (rmt_entries == 0)
+ 		goto bail;
+ 	qpns_per_vl = 1 << m;
+diff --git a/drivers/input/misc/iqs269a.c b/drivers/input/misc/iqs269a.c
+index a348247d3d38f..8b30c911f7899 100644
+--- a/drivers/input/misc/iqs269a.c
++++ b/drivers/input/misc/iqs269a.c
+@@ -9,6 +9,7 @@
+  * axial sliders presented by the device.
+  */
+ 
++#include <linux/completion.h>
+ #include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/err.h>
+@@ -96,8 +97,6 @@
+ #define IQS269_MISC_B_TRACKING_UI_ENABLE	BIT(4)
+ #define IQS269_MISC_B_FILT_STR_SLIDER		GENMASK(1, 0)
+ 
+-#define IQS269_CHx_SETTINGS			0x8C
+-
+ #define IQS269_CHx_ENG_A_MEAS_CAP_SIZE		BIT(15)
+ #define IQS269_CHx_ENG_A_RX_GND_INACTIVE	BIT(13)
+ #define IQS269_CHx_ENG_A_LOCAL_CAP_SIZE		BIT(12)
+@@ -146,14 +145,7 @@
+ #define IQS269_NUM_CH				8
+ #define IQS269_NUM_SL				2
+ 
+-#define IQS269_ATI_POLL_SLEEP_US		(iqs269->delay_mult * 10000)
+-#define IQS269_ATI_POLL_TIMEOUT_US		(iqs269->delay_mult * 500000)
+-#define IQS269_ATI_STABLE_DELAY_MS		(iqs269->delay_mult * 150)
+-
+-#define IQS269_PWR_MODE_POLL_SLEEP_US		IQS269_ATI_POLL_SLEEP_US
+-#define IQS269_PWR_MODE_POLL_TIMEOUT_US		IQS269_ATI_POLL_TIMEOUT_US
+-
+-#define iqs269_irq_wait()			usleep_range(100, 150)
++#define iqs269_irq_wait()			usleep_range(200, 250)
+ 
+ enum iqs269_local_cap_size {
+ 	IQS269_LOCAL_CAP_SIZE_0,
+@@ -245,6 +237,18 @@ struct iqs269_ver_info {
+ 	u8 padding;
+ } __packed;
+ 
++struct iqs269_ch_reg {
++	u8 rx_enable;
++	u8 tx_enable;
++	__be16 engine_a;
++	__be16 engine_b;
++	__be16 ati_comp;
++	u8 thresh[3];
++	u8 hyst;
++	u8 assoc_select;
++	u8 assoc_weight;
++} __packed;
++
+ struct iqs269_sys_reg {
+ 	__be16 general;
+ 	u8 active;
+@@ -266,18 +270,7 @@ struct iqs269_sys_reg {
+ 	u8 timeout_swipe;
+ 	u8 thresh_swipe;
+ 	u8 redo_ati;
+-} __packed;
+-
+-struct iqs269_ch_reg {
+-	u8 rx_enable;
+-	u8 tx_enable;
+-	__be16 engine_a;
+-	__be16 engine_b;
+-	__be16 ati_comp;
+-	u8 thresh[3];
+-	u8 hyst;
+-	u8 assoc_select;
+-	u8 assoc_weight;
++	struct iqs269_ch_reg ch_reg[IQS269_NUM_CH];
+ } __packed;
+ 
+ struct iqs269_flags {
+@@ -292,13 +285,11 @@ struct iqs269_private {
+ 	struct regmap *regmap;
+ 	struct mutex lock;
+ 	struct iqs269_switch_desc switches[ARRAY_SIZE(iqs269_events)];
+-	struct iqs269_ch_reg ch_reg[IQS269_NUM_CH];
+ 	struct iqs269_sys_reg sys_reg;
++	struct completion ati_done;
+ 	struct input_dev *keypad;
+ 	struct input_dev *slider[IQS269_NUM_SL];
+ 	unsigned int keycode[ARRAY_SIZE(iqs269_events) * IQS269_NUM_CH];
+-	unsigned int suspend_mode;
+-	unsigned int delay_mult;
+ 	unsigned int ch_num;
+ 	bool hall_enable;
+ 	bool ati_current;
+@@ -307,6 +298,7 @@ struct iqs269_private {
+ static int iqs269_ati_mode_set(struct iqs269_private *iqs269,
+ 			       unsigned int ch_num, unsigned int mode)
+ {
++	struct iqs269_ch_reg *ch_reg = iqs269->sys_reg.ch_reg;
+ 	u16 engine_a;
+ 
+ 	if (ch_num >= IQS269_NUM_CH)
+@@ -317,12 +309,12 @@ static int iqs269_ati_mode_set(struct iqs269_private *iqs269,
+ 
+ 	mutex_lock(&iqs269->lock);
+ 
+-	engine_a = be16_to_cpu(iqs269->ch_reg[ch_num].engine_a);
++	engine_a = be16_to_cpu(ch_reg[ch_num].engine_a);
+ 
+ 	engine_a &= ~IQS269_CHx_ENG_A_ATI_MODE_MASK;
+ 	engine_a |= (mode << IQS269_CHx_ENG_A_ATI_MODE_SHIFT);
+ 
+-	iqs269->ch_reg[ch_num].engine_a = cpu_to_be16(engine_a);
++	ch_reg[ch_num].engine_a = cpu_to_be16(engine_a);
+ 	iqs269->ati_current = false;
+ 
+ 	mutex_unlock(&iqs269->lock);
+@@ -333,13 +325,14 @@ static int iqs269_ati_mode_set(struct iqs269_private *iqs269,
+ static int iqs269_ati_mode_get(struct iqs269_private *iqs269,
+ 			       unsigned int ch_num, unsigned int *mode)
+ {
++	struct iqs269_ch_reg *ch_reg = iqs269->sys_reg.ch_reg;
+ 	u16 engine_a;
+ 
+ 	if (ch_num >= IQS269_NUM_CH)
+ 		return -EINVAL;
+ 
+ 	mutex_lock(&iqs269->lock);
+-	engine_a = be16_to_cpu(iqs269->ch_reg[ch_num].engine_a);
++	engine_a = be16_to_cpu(ch_reg[ch_num].engine_a);
+ 	mutex_unlock(&iqs269->lock);
+ 
+ 	engine_a &= IQS269_CHx_ENG_A_ATI_MODE_MASK;
+@@ -351,6 +344,7 @@ static int iqs269_ati_mode_get(struct iqs269_private *iqs269,
+ static int iqs269_ati_base_set(struct iqs269_private *iqs269,
+ 			       unsigned int ch_num, unsigned int base)
+ {
++	struct iqs269_ch_reg *ch_reg = iqs269->sys_reg.ch_reg;
+ 	u16 engine_b;
+ 
+ 	if (ch_num >= IQS269_NUM_CH)
+@@ -379,12 +373,12 @@ static int iqs269_ati_base_set(struct iqs269_private *iqs269,
+ 
+ 	mutex_lock(&iqs269->lock);
+ 
+-	engine_b = be16_to_cpu(iqs269->ch_reg[ch_num].engine_b);
++	engine_b = be16_to_cpu(ch_reg[ch_num].engine_b);
+ 
+ 	engine_b &= ~IQS269_CHx_ENG_B_ATI_BASE_MASK;
+ 	engine_b |= base;
+ 
+-	iqs269->ch_reg[ch_num].engine_b = cpu_to_be16(engine_b);
++	ch_reg[ch_num].engine_b = cpu_to_be16(engine_b);
+ 	iqs269->ati_current = false;
+ 
+ 	mutex_unlock(&iqs269->lock);
+@@ -395,13 +389,14 @@ static int iqs269_ati_base_set(struct iqs269_private *iqs269,
+ static int iqs269_ati_base_get(struct iqs269_private *iqs269,
+ 			       unsigned int ch_num, unsigned int *base)
+ {
++	struct iqs269_ch_reg *ch_reg = iqs269->sys_reg.ch_reg;
+ 	u16 engine_b;
+ 
+ 	if (ch_num >= IQS269_NUM_CH)
+ 		return -EINVAL;
+ 
+ 	mutex_lock(&iqs269->lock);
+-	engine_b = be16_to_cpu(iqs269->ch_reg[ch_num].engine_b);
++	engine_b = be16_to_cpu(ch_reg[ch_num].engine_b);
+ 	mutex_unlock(&iqs269->lock);
+ 
+ 	switch (engine_b & IQS269_CHx_ENG_B_ATI_BASE_MASK) {
+@@ -429,6 +424,7 @@ static int iqs269_ati_base_get(struct iqs269_private *iqs269,
+ static int iqs269_ati_target_set(struct iqs269_private *iqs269,
+ 				 unsigned int ch_num, unsigned int target)
+ {
++	struct iqs269_ch_reg *ch_reg = iqs269->sys_reg.ch_reg;
+ 	u16 engine_b;
+ 
+ 	if (ch_num >= IQS269_NUM_CH)
+@@ -439,12 +435,12 @@ static int iqs269_ati_target_set(struct iqs269_private *iqs269,
+ 
+ 	mutex_lock(&iqs269->lock);
+ 
+-	engine_b = be16_to_cpu(iqs269->ch_reg[ch_num].engine_b);
++	engine_b = be16_to_cpu(ch_reg[ch_num].engine_b);
+ 
+ 	engine_b &= ~IQS269_CHx_ENG_B_ATI_TARGET_MASK;
+ 	engine_b |= target / 32;
+ 
+-	iqs269->ch_reg[ch_num].engine_b = cpu_to_be16(engine_b);
++	ch_reg[ch_num].engine_b = cpu_to_be16(engine_b);
+ 	iqs269->ati_current = false;
+ 
+ 	mutex_unlock(&iqs269->lock);
+@@ -455,13 +451,14 @@ static int iqs269_ati_target_set(struct iqs269_private *iqs269,
+ static int iqs269_ati_target_get(struct iqs269_private *iqs269,
+ 				 unsigned int ch_num, unsigned int *target)
+ {
++	struct iqs269_ch_reg *ch_reg = iqs269->sys_reg.ch_reg;
+ 	u16 engine_b;
+ 
+ 	if (ch_num >= IQS269_NUM_CH)
+ 		return -EINVAL;
+ 
+ 	mutex_lock(&iqs269->lock);
+-	engine_b = be16_to_cpu(iqs269->ch_reg[ch_num].engine_b);
++	engine_b = be16_to_cpu(ch_reg[ch_num].engine_b);
+ 	mutex_unlock(&iqs269->lock);
+ 
+ 	*target = (engine_b & IQS269_CHx_ENG_B_ATI_TARGET_MASK) * 32;
+@@ -531,13 +528,7 @@ static int iqs269_parse_chan(struct iqs269_private *iqs269,
+ 	if (fwnode_property_present(ch_node, "azoteq,slider1-select"))
+ 		iqs269->sys_reg.slider_select[1] |= BIT(reg);
+ 
+-	ch_reg = &iqs269->ch_reg[reg];
+-
+-	error = regmap_raw_read(iqs269->regmap,
+-				IQS269_CHx_SETTINGS + reg * sizeof(*ch_reg) / 2,
+-				ch_reg, sizeof(*ch_reg));
+-	if (error)
+-		return error;
++	ch_reg = &iqs269->sys_reg.ch_reg[reg];
+ 
+ 	error = iqs269_parse_mask(ch_node, "azoteq,rx-enable",
+ 				  &ch_reg->rx_enable);
+@@ -694,6 +685,7 @@ static int iqs269_parse_chan(struct iqs269_private *iqs269,
+ 				dev_err(&client->dev,
+ 					"Invalid channel %u threshold: %u\n",
+ 					reg, val);
++				fwnode_handle_put(ev_node);
+ 				return -EINVAL;
+ 			}
+ 
+@@ -707,6 +699,7 @@ static int iqs269_parse_chan(struct iqs269_private *iqs269,
+ 				dev_err(&client->dev,
+ 					"Invalid channel %u hysteresis: %u\n",
+ 					reg, val);
++				fwnode_handle_put(ev_node);
+ 				return -EINVAL;
+ 			}
+ 
+@@ -721,8 +714,16 @@ static int iqs269_parse_chan(struct iqs269_private *iqs269,
+ 			}
+ 		}
+ 
+-		if (fwnode_property_read_u32(ev_node, "linux,code", &val))
++		error = fwnode_property_read_u32(ev_node, "linux,code", &val);
++		fwnode_handle_put(ev_node);
++		if (error == -EINVAL) {
+ 			continue;
++		} else if (error) {
++			dev_err(&client->dev,
++				"Failed to read channel %u code: %d\n", reg,
++				error);
++			return error;
++		}
+ 
+ 		switch (reg) {
+ 		case IQS269_CHx_HALL_ACTIVE:
+@@ -759,17 +760,6 @@ static int iqs269_parse_prop(struct iqs269_private *iqs269)
+ 	iqs269->hall_enable = device_property_present(&client->dev,
+ 						      "azoteq,hall-enable");
+ 
+-	if (!device_property_read_u32(&client->dev, "azoteq,suspend-mode",
+-				      &val)) {
+-		if (val > IQS269_SYS_SETTINGS_PWR_MODE_MAX) {
+-			dev_err(&client->dev, "Invalid suspend mode: %u\n",
+-				val);
+-			return -EINVAL;
+-		}
+-
+-		iqs269->suspend_mode = val;
+-	}
+-
+ 	error = regmap_raw_read(iqs269->regmap, IQS269_SYS_SETTINGS, sys_reg,
+ 				sizeof(*sys_reg));
+ 	if (error)
+@@ -980,13 +970,8 @@ static int iqs269_parse_prop(struct iqs269_private *iqs269)
+ 
+ 	general = be16_to_cpu(sys_reg->general);
+ 
+-	if (device_property_present(&client->dev, "azoteq,clk-div")) {
++	if (device_property_present(&client->dev, "azoteq,clk-div"))
+ 		general |= IQS269_SYS_SETTINGS_CLK_DIV;
+-		iqs269->delay_mult = 4;
+-	} else {
+-		general &= ~IQS269_SYS_SETTINGS_CLK_DIV;
+-		iqs269->delay_mult = 1;
+-	}
+ 
+ 	/*
+ 	 * Configure the device to automatically switch between normal and low-
+@@ -997,6 +982,17 @@ static int iqs269_parse_prop(struct iqs269_private *iqs269)
+ 	general &= ~IQS269_SYS_SETTINGS_DIS_AUTO;
+ 	general &= ~IQS269_SYS_SETTINGS_PWR_MODE_MASK;
+ 
++	if (!device_property_read_u32(&client->dev, "azoteq,suspend-mode",
++				      &val)) {
++		if (val > IQS269_SYS_SETTINGS_PWR_MODE_MAX) {
++			dev_err(&client->dev, "Invalid suspend mode: %u\n",
++				val);
++			return -EINVAL;
++		}
++
++		general |= (val << IQS269_SYS_SETTINGS_PWR_MODE_SHIFT);
++	}
++
+ 	if (!device_property_read_u32(&client->dev, "azoteq,ulp-update",
+ 				      &val)) {
+ 		if (val > IQS269_SYS_SETTINGS_ULP_UPDATE_MAX) {
+@@ -1032,10 +1028,7 @@ static int iqs269_parse_prop(struct iqs269_private *iqs269)
+ 
+ static int iqs269_dev_init(struct iqs269_private *iqs269)
+ {
+-	struct iqs269_sys_reg *sys_reg = &iqs269->sys_reg;
+-	struct iqs269_ch_reg *ch_reg;
+-	unsigned int val;
+-	int error, i;
++	int error;
+ 
+ 	mutex_lock(&iqs269->lock);
+ 
+@@ -1045,38 +1038,17 @@ static int iqs269_dev_init(struct iqs269_private *iqs269)
+ 	if (error)
+ 		goto err_mutex;
+ 
+-	for (i = 0; i < IQS269_NUM_CH; i++) {
+-		if (!(sys_reg->active & BIT(i)))
+-			continue;
+-
+-		ch_reg = &iqs269->ch_reg[i];
+-
+-		error = regmap_raw_write(iqs269->regmap,
+-					 IQS269_CHx_SETTINGS + i *
+-					 sizeof(*ch_reg) / 2, ch_reg,
+-					 sizeof(*ch_reg));
+-		if (error)
+-			goto err_mutex;
+-	}
+-
+-	/*
+-	 * The REDO-ATI and ATI channel selection fields must be written in the
+-	 * same block write, so every field between registers 0x80 through 0x8B
+-	 * (inclusive) must be written as well.
+-	 */
+-	error = regmap_raw_write(iqs269->regmap, IQS269_SYS_SETTINGS, sys_reg,
+-				 sizeof(*sys_reg));
++	error = regmap_raw_write(iqs269->regmap, IQS269_SYS_SETTINGS,
++				 &iqs269->sys_reg, sizeof(iqs269->sys_reg));
+ 	if (error)
+ 		goto err_mutex;
+ 
+-	error = regmap_read_poll_timeout(iqs269->regmap, IQS269_SYS_FLAGS, val,
+-					!(val & IQS269_SYS_FLAGS_IN_ATI),
+-					 IQS269_ATI_POLL_SLEEP_US,
+-					 IQS269_ATI_POLL_TIMEOUT_US);
+-	if (error)
+-		goto err_mutex;
++	/*
++	 * The following delay gives the device time to deassert its RDY output
++	 * so as to prevent an interrupt from being serviced prematurely.
++	 */
++	usleep_range(2000, 2100);
+ 
+-	msleep(IQS269_ATI_STABLE_DELAY_MS);
+ 	iqs269->ati_current = true;
+ 
+ err_mutex:
+@@ -1088,10 +1060,8 @@ err_mutex:
+ static int iqs269_input_init(struct iqs269_private *iqs269)
+ {
+ 	struct i2c_client *client = iqs269->client;
+-	struct iqs269_flags flags;
+ 	unsigned int sw_code, keycode;
+ 	int error, i, j;
+-	u8 dir_mask, state;
+ 
+ 	iqs269->keypad = devm_input_allocate_device(&client->dev);
+ 	if (!iqs269->keypad)
+@@ -1104,23 +1074,7 @@ static int iqs269_input_init(struct iqs269_private *iqs269)
+ 	iqs269->keypad->name = "iqs269a_keypad";
+ 	iqs269->keypad->id.bustype = BUS_I2C;
+ 
+-	if (iqs269->hall_enable) {
+-		error = regmap_raw_read(iqs269->regmap, IQS269_SYS_FLAGS,
+-					&flags, sizeof(flags));
+-		if (error) {
+-			dev_err(&client->dev,
+-				"Failed to read initial status: %d\n", error);
+-			return error;
+-		}
+-	}
+-
+ 	for (i = 0; i < ARRAY_SIZE(iqs269_events); i++) {
+-		dir_mask = flags.states[IQS269_ST_OFFS_DIR];
+-		if (!iqs269_events[i].dir_up)
+-			dir_mask = ~dir_mask;
+-
+-		state = flags.states[iqs269_events[i].st_offs] & dir_mask;
+-
+ 		sw_code = iqs269->switches[i].code;
+ 
+ 		for (j = 0; j < IQS269_NUM_CH; j++) {
+@@ -1133,13 +1087,9 @@ static int iqs269_input_init(struct iqs269_private *iqs269)
+ 			switch (j) {
+ 			case IQS269_CHx_HALL_ACTIVE:
+ 				if (iqs269->hall_enable &&
+-				    iqs269->switches[i].enabled) {
++				    iqs269->switches[i].enabled)
+ 					input_set_capability(iqs269->keypad,
+ 							     EV_SW, sw_code);
+-					input_report_switch(iqs269->keypad,
+-							    sw_code,
+-							    state & BIT(j));
+-				}
+ 				fallthrough;
+ 
+ 			case IQS269_CHx_HALL_INACTIVE:
+@@ -1155,14 +1105,6 @@ static int iqs269_input_init(struct iqs269_private *iqs269)
+ 		}
+ 	}
+ 
+-	input_sync(iqs269->keypad);
+-
+-	error = input_register_device(iqs269->keypad);
+-	if (error) {
+-		dev_err(&client->dev, "Failed to register keypad: %d\n", error);
+-		return error;
+-	}
+-
+ 	for (i = 0; i < IQS269_NUM_SL; i++) {
+ 		if (!iqs269->sys_reg.slider_select[i])
+ 			continue;
+@@ -1222,6 +1164,9 @@ static int iqs269_report(struct iqs269_private *iqs269)
+ 		return error;
+ 	}
+ 
++	if (be16_to_cpu(flags.system) & IQS269_SYS_FLAGS_IN_ATI)
++		return 0;
++
+ 	error = regmap_raw_read(iqs269->regmap, IQS269_SLIDER_X, slider_x,
+ 				sizeof(slider_x));
+ 	if (error) {
+@@ -1284,6 +1229,12 @@ static int iqs269_report(struct iqs269_private *iqs269)
+ 
+ 	input_sync(iqs269->keypad);
+ 
++	/*
++	 * The following completion signals that ATI has finished, any initial
++	 * switch states have been reported and the keypad can be registered.
++	 */
++	complete_all(&iqs269->ati_done);
++
+ 	return 0;
+ }
+ 
+@@ -1315,6 +1266,9 @@ static ssize_t counts_show(struct device *dev,
+ 	if (!iqs269->ati_current || iqs269->hall_enable)
+ 		return -EPERM;
+ 
++	if (!completion_done(&iqs269->ati_done))
++		return -EBUSY;
++
+ 	/*
+ 	 * Unsolicited I2C communication prompts the device to assert its RDY
+ 	 * pin, so disable the interrupt line until the operation is finished
+@@ -1339,6 +1293,7 @@ static ssize_t hall_bin_show(struct device *dev,
+ 			     struct device_attribute *attr, char *buf)
+ {
+ 	struct iqs269_private *iqs269 = dev_get_drvdata(dev);
++	struct iqs269_ch_reg *ch_reg = iqs269->sys_reg.ch_reg;
+ 	struct i2c_client *client = iqs269->client;
+ 	unsigned int val;
+ 	int error;
+@@ -1353,8 +1308,8 @@ static ssize_t hall_bin_show(struct device *dev,
+ 	if (error)
+ 		return error;
+ 
+-	switch (iqs269->ch_reg[IQS269_CHx_HALL_ACTIVE].rx_enable &
+-		iqs269->ch_reg[IQS269_CHx_HALL_INACTIVE].rx_enable) {
++	switch (ch_reg[IQS269_CHx_HALL_ACTIVE].rx_enable &
++		ch_reg[IQS269_CHx_HALL_INACTIVE].rx_enable) {
+ 	case IQS269_HALL_PAD_R:
+ 		val &= IQS269_CAL_DATA_A_HALL_BIN_R_MASK;
+ 		val >>= IQS269_CAL_DATA_A_HALL_BIN_R_SHIFT;
+@@ -1434,9 +1389,10 @@ static ssize_t rx_enable_show(struct device *dev,
+ 			      struct device_attribute *attr, char *buf)
+ {
+ 	struct iqs269_private *iqs269 = dev_get_drvdata(dev);
++	struct iqs269_ch_reg *ch_reg = iqs269->sys_reg.ch_reg;
+ 
+ 	return scnprintf(buf, PAGE_SIZE, "%u\n",
+-			 iqs269->ch_reg[iqs269->ch_num].rx_enable);
++			 ch_reg[iqs269->ch_num].rx_enable);
+ }
+ 
+ static ssize_t rx_enable_store(struct device *dev,
+@@ -1444,6 +1400,7 @@ static ssize_t rx_enable_store(struct device *dev,
+ 			       size_t count)
+ {
+ 	struct iqs269_private *iqs269 = dev_get_drvdata(dev);
++	struct iqs269_ch_reg *ch_reg = iqs269->sys_reg.ch_reg;
+ 	unsigned int val;
+ 	int error;
+ 
+@@ -1456,7 +1413,7 @@ static ssize_t rx_enable_store(struct device *dev,
+ 
+ 	mutex_lock(&iqs269->lock);
+ 
+-	iqs269->ch_reg[iqs269->ch_num].rx_enable = val;
++	ch_reg[iqs269->ch_num].rx_enable = val;
+ 	iqs269->ati_current = false;
+ 
+ 	mutex_unlock(&iqs269->lock);
+@@ -1568,7 +1525,9 @@ static ssize_t ati_trigger_show(struct device *dev,
+ {
+ 	struct iqs269_private *iqs269 = dev_get_drvdata(dev);
+ 
+-	return scnprintf(buf, PAGE_SIZE, "%u\n", iqs269->ati_current);
++	return scnprintf(buf, PAGE_SIZE, "%u\n",
++			 iqs269->ati_current &&
++			 completion_done(&iqs269->ati_done));
+ }
+ 
+ static ssize_t ati_trigger_store(struct device *dev,
+@@ -1588,6 +1547,7 @@ static ssize_t ati_trigger_store(struct device *dev,
+ 		return count;
+ 
+ 	disable_irq(client->irq);
++	reinit_completion(&iqs269->ati_done);
+ 
+ 	error = iqs269_dev_init(iqs269);
+ 
+@@ -1597,6 +1557,10 @@ static ssize_t ati_trigger_store(struct device *dev,
+ 	if (error)
+ 		return error;
+ 
++	if (!wait_for_completion_timeout(&iqs269->ati_done,
++					 msecs_to_jiffies(2000)))
++		return -ETIMEDOUT;
++
+ 	return count;
+ }
+ 
+@@ -1655,6 +1619,7 @@ static int iqs269_probe(struct i2c_client *client)
+ 	}
+ 
+ 	mutex_init(&iqs269->lock);
++	init_completion(&iqs269->ati_done);
+ 
+ 	error = regmap_raw_read(iqs269->regmap, IQS269_VER_INFO, &ver_info,
+ 				sizeof(ver_info));
+@@ -1690,6 +1655,22 @@ static int iqs269_probe(struct i2c_client *client)
+ 		return error;
+ 	}
+ 
++	if (!wait_for_completion_timeout(&iqs269->ati_done,
++					 msecs_to_jiffies(2000))) {
++		dev_err(&client->dev, "Failed to complete ATI\n");
++		return -ETIMEDOUT;
++	}
++
++	/*
++	 * The keypad may include one or more switches and is not registered
++	 * until ATI is complete and the initial switch states are read.
++	 */
++	error = input_register_device(iqs269->keypad);
++	if (error) {
++		dev_err(&client->dev, "Failed to register keypad: %d\n", error);
++		return error;
++	}
++
+ 	error = devm_device_add_group(&client->dev, &iqs269_attr_group);
+ 	if (error)
+ 		dev_err(&client->dev, "Failed to add attributes: %d\n", error);
+@@ -1697,59 +1678,30 @@ static int iqs269_probe(struct i2c_client *client)
+ 	return error;
+ }
+ 
++static u16 iqs269_general_get(struct iqs269_private *iqs269)
++{
++	u16 general = be16_to_cpu(iqs269->sys_reg.general);
++
++	general &= ~IQS269_SYS_SETTINGS_REDO_ATI;
++	general &= ~IQS269_SYS_SETTINGS_ACK_RESET;
++
++	return general | IQS269_SYS_SETTINGS_DIS_AUTO;
++}
++
+ static int __maybe_unused iqs269_suspend(struct device *dev)
+ {
+ 	struct iqs269_private *iqs269 = dev_get_drvdata(dev);
+ 	struct i2c_client *client = iqs269->client;
+-	unsigned int val;
+ 	int error;
++	u16 general = iqs269_general_get(iqs269);
+ 
+-	if (!iqs269->suspend_mode)
++	if (!(general & IQS269_SYS_SETTINGS_PWR_MODE_MASK))
+ 		return 0;
+ 
+ 	disable_irq(client->irq);
+ 
+-	/*
+-	 * Automatic power mode switching must be disabled before the device is
+-	 * forced into any particular power mode. In this case, the device will
+-	 * transition into normal-power mode.
+-	 */
+-	error = regmap_update_bits(iqs269->regmap, IQS269_SYS_SETTINGS,
+-				   IQS269_SYS_SETTINGS_DIS_AUTO, ~0);
+-	if (error)
+-		goto err_irq;
+-
+-	/*
+-	 * The following check ensures the device has completed its transition
+-	 * into normal-power mode before a manual mode switch is performed.
+-	 */
+-	error = regmap_read_poll_timeout(iqs269->regmap, IQS269_SYS_FLAGS, val,
+-					!(val & IQS269_SYS_FLAGS_PWR_MODE_MASK),
+-					 IQS269_PWR_MODE_POLL_SLEEP_US,
+-					 IQS269_PWR_MODE_POLL_TIMEOUT_US);
+-	if (error)
+-		goto err_irq;
+-
+-	error = regmap_update_bits(iqs269->regmap, IQS269_SYS_SETTINGS,
+-				   IQS269_SYS_SETTINGS_PWR_MODE_MASK,
+-				   iqs269->suspend_mode <<
+-				   IQS269_SYS_SETTINGS_PWR_MODE_SHIFT);
+-	if (error)
+-		goto err_irq;
++	error = regmap_write(iqs269->regmap, IQS269_SYS_SETTINGS, general);
+ 
+-	/*
+-	 * This last check ensures the device has completed its transition into
+-	 * the desired power mode to prevent any spurious interrupts from being
+-	 * triggered after iqs269_suspend has already returned.
+-	 */
+-	error = regmap_read_poll_timeout(iqs269->regmap, IQS269_SYS_FLAGS, val,
+-					 (val & IQS269_SYS_FLAGS_PWR_MODE_MASK)
+-					 == (iqs269->suspend_mode <<
+-					     IQS269_SYS_FLAGS_PWR_MODE_SHIFT),
+-					 IQS269_PWR_MODE_POLL_SLEEP_US,
+-					 IQS269_PWR_MODE_POLL_TIMEOUT_US);
+-
+-err_irq:
+ 	iqs269_irq_wait();
+ 	enable_irq(client->irq);
+ 
+@@ -1760,43 +1712,20 @@ static int __maybe_unused iqs269_resume(struct device *dev)
+ {
+ 	struct iqs269_private *iqs269 = dev_get_drvdata(dev);
+ 	struct i2c_client *client = iqs269->client;
+-	unsigned int val;
+ 	int error;
++	u16 general = iqs269_general_get(iqs269);
+ 
+-	if (!iqs269->suspend_mode)
++	if (!(general & IQS269_SYS_SETTINGS_PWR_MODE_MASK))
+ 		return 0;
+ 
+ 	disable_irq(client->irq);
+ 
+-	error = regmap_update_bits(iqs269->regmap, IQS269_SYS_SETTINGS,
+-				   IQS269_SYS_SETTINGS_PWR_MODE_MASK, 0);
+-	if (error)
+-		goto err_irq;
+-
+-	/*
+-	 * This check ensures the device has returned to normal-power mode
+-	 * before automatic power mode switching is re-enabled.
+-	 */
+-	error = regmap_read_poll_timeout(iqs269->regmap, IQS269_SYS_FLAGS, val,
+-					!(val & IQS269_SYS_FLAGS_PWR_MODE_MASK),
+-					 IQS269_PWR_MODE_POLL_SLEEP_US,
+-					 IQS269_PWR_MODE_POLL_TIMEOUT_US);
+-	if (error)
+-		goto err_irq;
+-
+-	error = regmap_update_bits(iqs269->regmap, IQS269_SYS_SETTINGS,
+-				   IQS269_SYS_SETTINGS_DIS_AUTO, 0);
+-	if (error)
+-		goto err_irq;
+-
+-	/*
+-	 * This step reports any events that may have been "swallowed" as a
+-	 * result of polling PWR_MODE (which automatically acknowledges any
+-	 * pending interrupts).
+-	 */
+-	error = iqs269_report(iqs269);
++	error = regmap_write(iqs269->regmap, IQS269_SYS_SETTINGS,
++			     general & ~IQS269_SYS_SETTINGS_PWR_MODE_MASK);
++	if (!error)
++		error = regmap_write(iqs269->regmap, IQS269_SYS_SETTINGS,
++				     general & ~IQS269_SYS_SETTINGS_DIS_AUTO);
+ 
+-err_irq:
+ 	iqs269_irq_wait();
+ 	enable_irq(client->irq);
+ 
+diff --git a/drivers/input/touchscreen/ads7846.c b/drivers/input/touchscreen/ads7846.c
+index ff97897feaf2a..1753288cedde7 100644
+--- a/drivers/input/touchscreen/ads7846.c
++++ b/drivers/input/touchscreen/ads7846.c
+@@ -63,19 +63,15 @@
+ /* this driver doesn't aim at the peak continuous sample rate */
+ #define	SAMPLE_BITS	(8 /*cmd*/ + 16 /*sample*/ + 2 /* before, after */)
+ 
+-struct ts_event {
+-	/*
+-	 * For portability, we can't read 12 bit values using SPI (which
+-	 * would make the controller deliver them as native byte order u16
+-	 * with msbs zeroed).  Instead, we read them as two 8-bit values,
+-	 * *** WHICH NEED BYTESWAPPING *** and range adjustment.
+-	 */
+-	u16	x;
+-	u16	y;
+-	u16	z1, z2;
+-	bool	ignore;
+-	u8	x_buf[3];
+-	u8	y_buf[3];
++struct ads7846_buf {
++	u8 cmd;
++	__be16 data;
++} __packed;
++
++struct ads7846_buf_layout {
++	unsigned int offset;
++	unsigned int count;
++	unsigned int skip;
+ };
+ 
+ /*
+@@ -84,11 +80,18 @@ struct ts_event {
+  * systems where main memory is not DMA-coherent (most non-x86 boards).
+  */
+ struct ads7846_packet {
+-	u8			read_x, read_y, read_z1, read_z2, pwrdown;
+-	u16			dummy;		/* for the pwrdown read */
+-	struct ts_event		tc;
+-	/* for ads7845 with mpc5121 psc spi we use 3-byte buffers */
+-	u8			read_x_cmd[3], read_y_cmd[3], pwrdown_cmd[3];
++	unsigned int count;
++	unsigned int count_skip;
++	unsigned int cmds;
++	unsigned int last_cmd_idx;
++	struct ads7846_buf_layout l[5];
++	struct ads7846_buf *rx;
++	struct ads7846_buf *tx;
++
++	struct ads7846_buf pwrdown_cmd;
++
++	bool ignore;
++	u16 x, y, z1, z2;
+ };
+ 
+ struct ads7846 {
+@@ -187,7 +190,6 @@ struct ads7846 {
+ #define	READ_Y(vref)	(READ_12BIT_DFR(y,  1, vref))
+ #define	READ_Z1(vref)	(READ_12BIT_DFR(z1, 1, vref))
+ #define	READ_Z2(vref)	(READ_12BIT_DFR(z2, 1, vref))
+-
+ #define	READ_X(vref)	(READ_12BIT_DFR(x,  1, vref))
+ #define	PWRDOWN		(READ_12BIT_DFR(y,  0, 0))	/* LAST */
+ 
+@@ -200,6 +202,21 @@ struct ads7846 {
+ #define	REF_ON	(READ_12BIT_DFR(x, 1, 1))
+ #define	REF_OFF	(READ_12BIT_DFR(y, 0, 0))
+ 
++/* Order commands in the most optimal way to reduce Vref switching and
++ * settling time:
++ * Measure:  X; Vref: X+, X-; IN: Y+
++ * Measure:  Y; Vref: Y+, Y-; IN: X+
++ * Measure: Z1; Vref: Y+, X-; IN: X+
++ * Measure: Z2; Vref: Y+, X-; IN: Y-
++ */
++enum ads7846_cmds {
++	ADS7846_X,
++	ADS7846_Y,
++	ADS7846_Z1,
++	ADS7846_Z2,
++	ADS7846_PWDOWN,
++};
++
+ static int get_pendown_state(struct ads7846 *ts)
+ {
+ 	if (ts->get_pendown_state)
+@@ -682,32 +699,109 @@ static int ads7846_no_filter(void *ads, int data_idx, int *val)
+ 	return ADS7846_FILTER_OK;
+ }
+ 
+-static int ads7846_get_value(struct ads7846 *ts, struct spi_message *m)
++static int ads7846_get_value(struct ads7846_buf *buf)
+ {
+ 	int value;
+-	struct spi_transfer *t =
+-		list_entry(m->transfers.prev, struct spi_transfer, transfer_list);
+ 
+-	if (ts->model == 7845) {
+-		value = be16_to_cpup((__be16 *)&(((char *)t->rx_buf)[1]));
+-	} else {
+-		/*
+-		 * adjust:  on-wire is a must-ignore bit, a BE12 value, then
+-		 * padding; built from two 8 bit values written msb-first.
+-		 */
+-		value = be16_to_cpup((__be16 *)t->rx_buf);
+-	}
++	value = be16_to_cpup(&buf->data);
+ 
+ 	/* enforce ADC output is 12 bits width */
+ 	return (value >> 3) & 0xfff;
+ }
+ 
+-static void ads7846_update_value(struct spi_message *m, int val)
++static void ads7846_set_cmd_val(struct ads7846 *ts, enum ads7846_cmds cmd_idx,
++				u16 val)
+ {
+-	struct spi_transfer *t =
+-		list_entry(m->transfers.prev, struct spi_transfer, transfer_list);
++	struct ads7846_packet *packet = ts->packet;
++
++	switch (cmd_idx) {
++	case ADS7846_Y:
++		packet->y = val;
++		break;
++	case ADS7846_X:
++		packet->x = val;
++		break;
++	case ADS7846_Z1:
++		packet->z1 = val;
++		break;
++	case ADS7846_Z2:
++		packet->z2 = val;
++		break;
++	default:
++		WARN_ON_ONCE(1);
++	}
++}
++
++static u8 ads7846_get_cmd(enum ads7846_cmds cmd_idx, int vref)
++{
++	switch (cmd_idx) {
++	case ADS7846_Y:
++		return READ_Y(vref);
++	case ADS7846_X:
++		return READ_X(vref);
++
++	/* 7846 specific commands  */
++	case ADS7846_Z1:
++		return READ_Z1(vref);
++	case ADS7846_Z2:
++		return READ_Z2(vref);
++	case ADS7846_PWDOWN:
++		return PWRDOWN;
++	default:
++		WARN_ON_ONCE(1);
++	}
+ 
+-	*(u16 *)t->rx_buf = val;
++	return 0;
++}
++
++static bool ads7846_cmd_need_settle(enum ads7846_cmds cmd_idx)
++{
++	switch (cmd_idx) {
++	case ADS7846_X:
++	case ADS7846_Y:
++	case ADS7846_Z1:
++	case ADS7846_Z2:
++		return true;
++	case ADS7846_PWDOWN:
++		return false;
++	default:
++		WARN_ON_ONCE(1);
++	}
++
++	return false;
++}
++
++static int ads7846_filter(struct ads7846 *ts)
++{
++	struct ads7846_packet *packet = ts->packet;
++	int action;
++	int val;
++	unsigned int cmd_idx, b;
++
++	packet->ignore = false;
++	for (cmd_idx = packet->last_cmd_idx; cmd_idx < packet->cmds - 1; cmd_idx++) {
++		struct ads7846_buf_layout *l = &packet->l[cmd_idx];
++
++		packet->last_cmd_idx = cmd_idx;
++
++		for (b = l->skip; b < l->count; b++) {
++			val = ads7846_get_value(&packet->rx[l->offset + b]);
++
++			action = ts->filter(ts->filter_data, cmd_idx, &val);
++			if (action == ADS7846_FILTER_REPEAT) {
++				if (b == l->count - 1)
++					return -EAGAIN;
++			} else if (action == ADS7846_FILTER_OK) {
++				ads7846_set_cmd_val(ts, cmd_idx, val);
++				break;
++			} else {
++				packet->ignore = true;
++				return 0;
++			}
++		}
++	}
++
++	return 0;
+ }
+ 
+ static void ads7846_read_state(struct ads7846 *ts)
+@@ -715,52 +809,26 @@ static void ads7846_read_state(struct ads7846 *ts)
+ 	struct ads7846_packet *packet = ts->packet;
+ 	struct spi_message *m;
+ 	int msg_idx = 0;
+-	int val;
+-	int action;
+ 	int error;
+ 
+-	while (msg_idx < ts->msg_count) {
++	packet->last_cmd_idx = 0;
+ 
++	while (true) {
+ 		ts->wait_for_sync();
+ 
+ 		m = &ts->msg[msg_idx];
+ 		error = spi_sync(ts->spi, m);
+ 		if (error) {
+ 			dev_err(&ts->spi->dev, "spi_sync --> %d\n", error);
+-			packet->tc.ignore = true;
++			packet->ignore = true;
+ 			return;
+ 		}
+ 
+-		/*
+-		 * Last message is power down request, no need to convert
+-		 * or filter the value.
+-		 */
+-		if (msg_idx < ts->msg_count - 1) {
+-
+-			val = ads7846_get_value(ts, m);
++		error = ads7846_filter(ts);
++		if (error)
++			continue;
+ 
+-			action = ts->filter(ts->filter_data, msg_idx, &val);
+-			switch (action) {
+-			case ADS7846_FILTER_REPEAT:
+-				continue;
+-
+-			case ADS7846_FILTER_IGNORE:
+-				packet->tc.ignore = true;
+-				msg_idx = ts->msg_count - 1;
+-				continue;
+-
+-			case ADS7846_FILTER_OK:
+-				ads7846_update_value(m, val);
+-				packet->tc.ignore = false;
+-				msg_idx++;
+-				break;
+-
+-			default:
+-				BUG();
+-			}
+-		} else {
+-			msg_idx++;
+-		}
++		return;
+ 	}
+ }
+ 
+@@ -770,35 +838,22 @@ static void ads7846_report_state(struct ads7846 *ts)
+ 	unsigned int Rt;
+ 	u16 x, y, z1, z2;
+ 
+-	/*
+-	 * ads7846_get_value() does in-place conversion (including byte swap)
+-	 * from on-the-wire format as part of debouncing to get stable
+-	 * readings.
+-	 */
++	x = packet->x;
++	y = packet->y;
+ 	if (ts->model == 7845) {
+-		x = *(u16 *)packet->tc.x_buf;
+-		y = *(u16 *)packet->tc.y_buf;
+ 		z1 = 0;
+ 		z2 = 0;
+ 	} else {
+-		x = packet->tc.x;
+-		y = packet->tc.y;
+-		z1 = packet->tc.z1;
+-		z2 = packet->tc.z2;
++		z1 = packet->z1;
++		z2 = packet->z2;
+ 	}
+ 
+ 	/* range filtering */
+ 	if (x == MAX_12BIT)
+ 		x = 0;
+ 
+-	if (ts->model == 7843) {
++	if (ts->model == 7843 || ts->model == 7845) {
+ 		Rt = ts->pressure_max / 2;
+-	} else if (ts->model == 7845) {
+-		if (get_pendown_state(ts))
+-			Rt = ts->pressure_max / 2;
+-		else
+-			Rt = 0;
+-		dev_vdbg(&ts->spi->dev, "x/y: %d/%d, PD %d\n", x, y, Rt);
+ 	} else if (likely(x && z1)) {
+ 		/* compute touch pressure resistance using equation #2 */
+ 		Rt = z2;
+@@ -817,9 +872,9 @@ static void ads7846_report_state(struct ads7846 *ts)
+ 	 * the maximum. Don't report it to user space, repeat at least
+ 	 * once more the measurement
+ 	 */
+-	if (packet->tc.ignore || Rt > ts->pressure_max) {
++	if (packet->ignore || Rt > ts->pressure_max) {
+ 		dev_vdbg(&ts->spi->dev, "ignored %d pressure %d\n",
+-			 packet->tc.ignore, Rt);
++			 packet->ignore, Rt);
+ 		return;
+ 	}
+ 
+@@ -980,13 +1035,62 @@ static int ads7846_setup_pendown(struct spi_device *spi,
+  * Set up the transfers to read touchscreen state; this assumes we
+  * use formula #2 for pressure, not #3.
+  */
+-static void ads7846_setup_spi_msg(struct ads7846 *ts,
++static int ads7846_setup_spi_msg(struct ads7846 *ts,
+ 				  const struct ads7846_platform_data *pdata)
+ {
+ 	struct spi_message *m = &ts->msg[0];
+ 	struct spi_transfer *x = ts->xfer;
+ 	struct ads7846_packet *packet = ts->packet;
+ 	int vref = pdata->keep_vref_on;
++	unsigned int count, offset = 0;
++	unsigned int cmd_idx, b;
++	unsigned long time;
++	size_t size = 0;
++
++	/* time per bit */
++	time = NSEC_PER_SEC / ts->spi->max_speed_hz;
++
++	count = pdata->settle_delay_usecs * NSEC_PER_USEC / time;
++	packet->count_skip = DIV_ROUND_UP(count, 24);
++
++	if (ts->debounce_max && ts->debounce_rep)
++		/* ads7846_debounce_filter() is making ts->debounce_rep + 2
++		 * reads. So we need to get all samples for normal case. */
++		packet->count = ts->debounce_rep + 2;
++	else
++		packet->count = 1;
++
++	if (ts->model == 7846)
++		packet->cmds = 5; /* x, y, z1, z2, pwdown */
++	else
++		packet->cmds = 3; /* x, y, pwdown */
++
++	for (cmd_idx = 0; cmd_idx < packet->cmds; cmd_idx++) {
++		struct ads7846_buf_layout *l = &packet->l[cmd_idx];
++		unsigned int max_count;
++
++		if (cmd_idx == packet->cmds - 1)
++			cmd_idx = ADS7846_PWDOWN;
++
++		if (ads7846_cmd_need_settle(cmd_idx))
++			max_count = packet->count + packet->count_skip;
++		else
++			max_count = packet->count;
++
++		l->offset = offset;
++		offset += max_count;
++		l->count = max_count;
++		l->skip = packet->count_skip;
++		size += sizeof(*packet->tx) * max_count;
++	}
++
++	packet->tx = devm_kzalloc(&ts->spi->dev, size, GFP_KERNEL);
++	if (!packet->tx)
++		return -ENOMEM;
++
++	packet->rx = devm_kzalloc(&ts->spi->dev, size, GFP_KERNEL);
++	if (!packet->rx)
++		return -ENOMEM;
+ 
+ 	if (ts->model == 7873) {
+ 		/*
+@@ -1002,185 +1106,25 @@ static void ads7846_setup_spi_msg(struct ads7846 *ts,
+ 	spi_message_init(m);
+ 	m->context = ts;
+ 
+-	if (ts->model == 7845) {
+-		packet->read_y_cmd[0] = READ_Y(vref);
+-		packet->read_y_cmd[1] = 0;
+-		packet->read_y_cmd[2] = 0;
+-		x->tx_buf = &packet->read_y_cmd[0];
+-		x->rx_buf = &packet->tc.y_buf[0];
+-		x->len = 3;
+-		spi_message_add_tail(x, m);
+-	} else {
+-		/* y- still on; turn on only y+ (and ADC) */
+-		packet->read_y = READ_Y(vref);
+-		x->tx_buf = &packet->read_y;
+-		x->len = 1;
+-		spi_message_add_tail(x, m);
+-
+-		x++;
+-		x->rx_buf = &packet->tc.y;
+-		x->len = 2;
+-		spi_message_add_tail(x, m);
+-	}
+-
+-	/*
+-	 * The first sample after switching drivers can be low quality;
+-	 * optionally discard it, using a second one after the signals
+-	 * have had enough time to stabilize.
+-	 */
+-	if (pdata->settle_delay_usecs) {
+-		x->delay.value = pdata->settle_delay_usecs;
+-		x->delay.unit = SPI_DELAY_UNIT_USECS;
+-
+-		x++;
+-		x->tx_buf = &packet->read_y;
+-		x->len = 1;
+-		spi_message_add_tail(x, m);
+-
+-		x++;
+-		x->rx_buf = &packet->tc.y;
+-		x->len = 2;
+-		spi_message_add_tail(x, m);
+-	}
+-
+-	ts->msg_count++;
+-	m++;
+-	spi_message_init(m);
+-	m->context = ts;
+-
+-	if (ts->model == 7845) {
+-		x++;
+-		packet->read_x_cmd[0] = READ_X(vref);
+-		packet->read_x_cmd[1] = 0;
+-		packet->read_x_cmd[2] = 0;
+-		x->tx_buf = &packet->read_x_cmd[0];
+-		x->rx_buf = &packet->tc.x_buf[0];
+-		x->len = 3;
+-		spi_message_add_tail(x, m);
+-	} else {
+-		/* turn y- off, x+ on, then leave in lowpower */
+-		x++;
+-		packet->read_x = READ_X(vref);
+-		x->tx_buf = &packet->read_x;
+-		x->len = 1;
+-		spi_message_add_tail(x, m);
+-
+-		x++;
+-		x->rx_buf = &packet->tc.x;
+-		x->len = 2;
+-		spi_message_add_tail(x, m);
+-	}
++	for (cmd_idx = 0; cmd_idx < packet->cmds; cmd_idx++) {
++		struct ads7846_buf_layout *l = &packet->l[cmd_idx];
++		u8 cmd;
+ 
+-	/* ... maybe discard first sample ... */
+-	if (pdata->settle_delay_usecs) {
+-		x->delay.value = pdata->settle_delay_usecs;
+-		x->delay.unit = SPI_DELAY_UNIT_USECS;
++		if (cmd_idx == packet->cmds - 1)
++			cmd_idx = ADS7846_PWDOWN;
+ 
+-		x++;
+-		x->tx_buf = &packet->read_x;
+-		x->len = 1;
+-		spi_message_add_tail(x, m);
++		cmd = ads7846_get_cmd(cmd_idx, vref);
+ 
+-		x++;
+-		x->rx_buf = &packet->tc.x;
+-		x->len = 2;
+-		spi_message_add_tail(x, m);
++		for (b = 0; b < l->count; b++)
++			packet->tx[l->offset + b].cmd = cmd;
+ 	}
+ 
+-	/* turn y+ off, x- on; we'll use formula #2 */
+-	if (ts->model == 7846) {
+-		ts->msg_count++;
+-		m++;
+-		spi_message_init(m);
+-		m->context = ts;
+-
+-		x++;
+-		packet->read_z1 = READ_Z1(vref);
+-		x->tx_buf = &packet->read_z1;
+-		x->len = 1;
+-		spi_message_add_tail(x, m);
+-
+-		x++;
+-		x->rx_buf = &packet->tc.z1;
+-		x->len = 2;
+-		spi_message_add_tail(x, m);
+-
+-		/* ... maybe discard first sample ... */
+-		if (pdata->settle_delay_usecs) {
+-			x->delay.value = pdata->settle_delay_usecs;
+-			x->delay.unit = SPI_DELAY_UNIT_USECS;
+-
+-			x++;
+-			x->tx_buf = &packet->read_z1;
+-			x->len = 1;
+-			spi_message_add_tail(x, m);
+-
+-			x++;
+-			x->rx_buf = &packet->tc.z1;
+-			x->len = 2;
+-			spi_message_add_tail(x, m);
+-		}
+-
+-		ts->msg_count++;
+-		m++;
+-		spi_message_init(m);
+-		m->context = ts;
+-
+-		x++;
+-		packet->read_z2 = READ_Z2(vref);
+-		x->tx_buf = &packet->read_z2;
+-		x->len = 1;
+-		spi_message_add_tail(x, m);
+-
+-		x++;
+-		x->rx_buf = &packet->tc.z2;
+-		x->len = 2;
+-		spi_message_add_tail(x, m);
+-
+-		/* ... maybe discard first sample ... */
+-		if (pdata->settle_delay_usecs) {
+-			x->delay.value = pdata->settle_delay_usecs;
+-			x->delay.unit = SPI_DELAY_UNIT_USECS;
+-
+-			x++;
+-			x->tx_buf = &packet->read_z2;
+-			x->len = 1;
+-			spi_message_add_tail(x, m);
+-
+-			x++;
+-			x->rx_buf = &packet->tc.z2;
+-			x->len = 2;
+-			spi_message_add_tail(x, m);
+-		}
+-	}
+-
+-	/* power down */
+-	ts->msg_count++;
+-	m++;
+-	spi_message_init(m);
+-	m->context = ts;
+-
+-	if (ts->model == 7845) {
+-		x++;
+-		packet->pwrdown_cmd[0] = PWRDOWN;
+-		packet->pwrdown_cmd[1] = 0;
+-		packet->pwrdown_cmd[2] = 0;
+-		x->tx_buf = &packet->pwrdown_cmd[0];
+-		x->len = 3;
+-	} else {
+-		x++;
+-		packet->pwrdown = PWRDOWN;
+-		x->tx_buf = &packet->pwrdown;
+-		x->len = 1;
+-		spi_message_add_tail(x, m);
+-
+-		x++;
+-		x->rx_buf = &packet->dummy;
+-		x->len = 2;
+-	}
+-
+-	CS_CHANGE(*x);
++	x->tx_buf = packet->tx;
++	x->rx_buf = packet->rx;
++	x->len = size;
+ 	spi_message_add_tail(x, m);
++
++	return 0;
+ }
+ 
+ #ifdef CONFIG_OF
+@@ -1381,8 +1325,9 @@ static int ads7846_probe(struct spi_device *spi)
+ 			pdata->y_min ? : 0,
+ 			pdata->y_max ? : MAX_12BIT,
+ 			0, 0);
+-	input_set_abs_params(input_dev, ABS_PRESSURE,
+-			pdata->pressure_min, pdata->pressure_max, 0, 0);
++	if (ts->model != 7845)
++		input_set_abs_params(input_dev, ABS_PRESSURE,
++				pdata->pressure_min, pdata->pressure_max, 0, 0);
+ 
+ 	/*
+ 	 * Parse common framework properties. Must be done here to ensure the
+diff --git a/drivers/irqchip/irq-alpine-msi.c b/drivers/irqchip/irq-alpine-msi.c
+index ede02dc2bcd0b..1819bb1d27230 100644
+--- a/drivers/irqchip/irq-alpine-msi.c
++++ b/drivers/irqchip/irq-alpine-msi.c
+@@ -199,6 +199,7 @@ static int alpine_msix_init_domains(struct alpine_msix_data *priv,
+ 	}
+ 
+ 	gic_domain = irq_find_host(gic_node);
++	of_node_put(gic_node);
+ 	if (!gic_domain) {
+ 		pr_err("Failed to find the GIC domain\n");
+ 		return -ENXIO;
+diff --git a/drivers/irqchip/irq-bcm7120-l2.c b/drivers/irqchip/irq-bcm7120-l2.c
+index c7c9e976acbb9..7d776c905b7d2 100644
+--- a/drivers/irqchip/irq-bcm7120-l2.c
++++ b/drivers/irqchip/irq-bcm7120-l2.c
+@@ -273,7 +273,8 @@ static int __init bcm7120_l2_intc_probe(struct device_node *dn,
+ 		flags |= IRQ_GC_BE_IO;
+ 
+ 	ret = irq_alloc_domain_generic_chips(data->domain, IRQS_PER_WORD, 1,
+-				dn->full_name, handle_level_irq, clr, 0, flags);
++				dn->full_name, handle_level_irq, clr,
++				IRQ_LEVEL, flags);
+ 	if (ret) {
+ 		pr_err("failed to allocate generic irq chip\n");
+ 		goto out_free_domain;
+diff --git a/drivers/irqchip/irq-brcmstb-l2.c b/drivers/irqchip/irq-brcmstb-l2.c
+index cdd6a42d4efa4..a4aee16db5314 100644
+--- a/drivers/irqchip/irq-brcmstb-l2.c
++++ b/drivers/irqchip/irq-brcmstb-l2.c
+@@ -161,6 +161,7 @@ static int __init brcmstb_l2_intc_of_init(struct device_node *np,
+ 					  *init_params)
+ {
+ 	unsigned int clr = IRQ_NOREQUEST | IRQ_NOPROBE | IRQ_NOAUTOEN;
++	unsigned int set = 0;
+ 	struct brcmstb_l2_intc_data *data;
+ 	struct irq_chip_type *ct;
+ 	int ret;
+@@ -208,9 +209,12 @@ static int __init brcmstb_l2_intc_of_init(struct device_node *np,
+ 	if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ 		flags |= IRQ_GC_BE_IO;
+ 
++	if (init_params->handler == handle_level_irq)
++		set |= IRQ_LEVEL;
++
+ 	/* Allocate a single Generic IRQ chip for this node */
+ 	ret = irq_alloc_domain_generic_chips(data->domain, 32, 1,
+-			np->full_name, init_params->handler, clr, 0, flags);
++			np->full_name, init_params->handler, clr, set, flags);
+ 	if (ret) {
+ 		pr_err("failed to allocate generic irq chip\n");
+ 		goto out_free_domain;
+diff --git a/drivers/irqchip/irq-mvebu-gicp.c b/drivers/irqchip/irq-mvebu-gicp.c
+index 3be5c5dba1dab..5caec411059f5 100644
+--- a/drivers/irqchip/irq-mvebu-gicp.c
++++ b/drivers/irqchip/irq-mvebu-gicp.c
+@@ -223,6 +223,7 @@ static int mvebu_gicp_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	parent_domain = irq_find_host(irq_parent_dn);
++	of_node_put(irq_parent_dn);
+ 	if (!parent_domain) {
+ 		dev_err(&pdev->dev, "failed to find parent IRQ domain\n");
+ 		return -ENODEV;
+diff --git a/drivers/irqchip/irq-ti-sci-intr.c b/drivers/irqchip/irq-ti-sci-intr.c
+index fe8fad22bcf96..020ddf29efb80 100644
+--- a/drivers/irqchip/irq-ti-sci-intr.c
++++ b/drivers/irqchip/irq-ti-sci-intr.c
+@@ -236,6 +236,7 @@ static int ti_sci_intr_irq_domain_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	parent_domain = irq_find_host(parent_node);
++	of_node_put(parent_node);
+ 	if (!parent_domain) {
+ 		dev_err(dev, "Failed to find IRQ parent domain\n");
+ 		return -ENODEV;
+diff --git a/drivers/irqchip/irqchip.c b/drivers/irqchip/irqchip.c
+index 3570f0a588c4b..7899607fbee8d 100644
+--- a/drivers/irqchip/irqchip.c
++++ b/drivers/irqchip/irqchip.c
+@@ -38,8 +38,10 @@ int platform_irqchip_probe(struct platform_device *pdev)
+ 	struct device_node *par_np = of_irq_find_parent(np);
+ 	of_irq_init_cb_t irq_init_cb = of_device_get_match_data(&pdev->dev);
+ 
+-	if (!irq_init_cb)
++	if (!irq_init_cb) {
++		of_node_put(par_np);
+ 		return -EINVAL;
++	}
+ 
+ 	if (par_np == np)
+ 		par_np = NULL;
+@@ -52,8 +54,10 @@ int platform_irqchip_probe(struct platform_device *pdev)
+ 	 * interrupt controller. The actual initialization callback of this
+ 	 * interrupt controller can check for specific domains as necessary.
+ 	 */
+-	if (par_np && !irq_find_matching_host(par_np, DOMAIN_BUS_ANY))
++	if (par_np && !irq_find_matching_host(par_np, DOMAIN_BUS_ANY)) {
++		of_node_put(par_np);
+ 		return -EPROBE_DEFER;
++	}
+ 
+ 	return irq_init_cb(np, par_np);
+ }
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index 4365c1cc4505f..fcb9eee3b6097 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -236,14 +236,17 @@ struct led_classdev *of_led_get(struct device_node *np, int index)
+ 
+ 	led_dev = class_find_device_by_of_node(leds_class, led_node);
+ 	of_node_put(led_node);
++	put_device(led_dev);
+ 
+ 	if (!led_dev)
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 
+ 	led_cdev = dev_get_drvdata(led_dev);
+ 
+-	if (!try_module_get(led_cdev->dev->parent->driver->owner))
++	if (!try_module_get(led_cdev->dev->parent->driver->owner)) {
++		put_device(led_cdev->dev);
+ 		return ERR_PTR(-ENODEV);
++	}
+ 
+ 	return led_cdev;
+ }
+@@ -256,6 +259,7 @@ EXPORT_SYMBOL_GPL(of_led_get);
+ void led_put(struct led_classdev *led_cdev)
+ {
+ 	module_put(led_cdev->dev->parent->driver->owner);
++	put_device(led_cdev->dev);
+ }
+ EXPORT_SYMBOL_GPL(led_put);
+ 
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index 9b2aec3098010..f98ad4366301b 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -1883,6 +1883,7 @@ static void process_deferred_bios(struct work_struct *ws)
+ 
+ 		else
+ 			commit_needed = process_bio(cache, bio) || commit_needed;
++		cond_resched();
+ 	}
+ 
+ 	if (commit_needed)
+@@ -1905,6 +1906,7 @@ static void requeue_deferred_bios(struct cache *cache)
+ 	while ((bio = bio_list_pop(&bios))) {
+ 		bio->bi_status = BLK_STS_DM_REQUEUE;
+ 		bio_endio(bio);
++		cond_resched();
+ 	}
+ }
+ 
+@@ -1945,6 +1947,8 @@ static void check_migrations(struct work_struct *ws)
+ 		r = mg_start(cache, op, NULL);
+ 		if (r)
+ 			break;
++
++		cond_resched();
+ 	}
+ }
+ 
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index a2cc9e45cbba1..36a4ef51ecaa8 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -301,8 +301,11 @@ static void corrupt_bio_data(struct bio *bio, struct flakey_c *fc)
+ 	 */
+ 	bio_for_each_segment(bvec, bio, iter) {
+ 		if (bio_iter_len(bio, iter) > corrupt_bio_byte) {
+-			char *segment = (page_address(bio_iter_page(bio, iter))
+-					 + bio_iter_offset(bio, iter));
++			char *segment;
++			struct page *page = bio_iter_page(bio, iter);
++			if (unlikely(page == ZERO_PAGE(0)))
++				break;
++			segment = (page_address(page) + bio_iter_offset(bio, iter));
+ 			segment[corrupt_bio_byte] = fc->corrupt_bio_value;
+ 			DMDEBUG("Corrupting data bio=%p by writing %u to byte %u "
+ 				"(rw=%c bi_opf=%u bi_sector=%llu size=%u)\n",
+@@ -359,9 +362,11 @@ static int flakey_map(struct dm_target *ti, struct bio *bio)
+ 		/*
+ 		 * Corrupt matching writes.
+ 		 */
+-		if (fc->corrupt_bio_byte && (fc->corrupt_bio_rw == WRITE)) {
+-			if (all_corrupt_bio_flags_match(bio, fc))
+-				corrupt_bio_data(bio, fc);
++		if (fc->corrupt_bio_byte) {
++			if (fc->corrupt_bio_rw == WRITE) {
++				if (all_corrupt_bio_flags_match(bio, fc))
++					corrupt_bio_data(bio, fc);
++			}
+ 			goto map_bio;
+ 		}
+ 
+@@ -387,13 +392,14 @@ static int flakey_end_io(struct dm_target *ti, struct bio *bio,
+ 		return DM_ENDIO_DONE;
+ 
+ 	if (!*error && pb->bio_submitted && (bio_data_dir(bio) == READ)) {
+-		if (fc->corrupt_bio_byte && (fc->corrupt_bio_rw == READ) &&
+-		    all_corrupt_bio_flags_match(bio, fc)) {
+-			/*
+-			 * Corrupt successful matching READs while in down state.
+-			 */
+-			corrupt_bio_data(bio, fc);
+-
++		if (fc->corrupt_bio_byte) {
++			if ((fc->corrupt_bio_rw == READ) &&
++			    all_corrupt_bio_flags_match(bio, fc)) {
++				/*
++				 * Corrupt successful matching READs while in down state.
++				 */
++				corrupt_bio_data(bio, fc);
++			}
+ 		} else if (!test_bit(DROP_WRITES, &fc->flags) &&
+ 			   !test_bit(ERROR_WRITES, &fc->flags)) {
+ 			/*
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index f3c519e18a12f..c890bb3e51852 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -2217,6 +2217,7 @@ static void process_thin_deferred_bios(struct thin_c *tc)
+ 			throttle_work_update(&pool->throttle);
+ 			dm_pool_issue_prefetches(pool->pmd);
+ 		}
++		cond_resched();
+ 	}
+ 	blk_finish_plug(&plug);
+ }
+@@ -2299,6 +2300,7 @@ static void process_thin_deferred_cells(struct thin_c *tc)
+ 			else
+ 				pool->process_cell(tc, cell);
+ 		}
++		cond_resched();
+ 	} while (!list_empty(&cells));
+ }
+ 
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 1005abf768609..c60febd14be14 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -265,7 +265,6 @@ out_uevent_exit:
+ 
+ static void local_exit(void)
+ {
+-	flush_scheduled_work();
+ 	destroy_workqueue(deferred_remove_workqueue);
+ 
+ 	unregister_blkdev(_major, _name);
+@@ -2394,6 +2393,7 @@ static void dm_wq_work(struct work_struct *work)
+ 			break;
+ 
+ 		submit_bio_noacct(bio);
++		cond_resched();
+ 	}
+ }
+ 
+diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
+index 4771d0ef2c46f..b975636d94401 100644
+--- a/drivers/media/i2c/imx219.c
++++ b/drivers/media/i2c/imx219.c
+@@ -89,6 +89,12 @@
+ 
+ #define IMX219_REG_ORIENTATION		0x0172
+ 
++/* Binning  Mode */
++#define IMX219_REG_BINNING_MODE		0x0174
++#define IMX219_BINNING_NONE		0x0000
++#define IMX219_BINNING_2X2		0x0101
++#define IMX219_BINNING_2X2_ANALOG	0x0303
++
+ /* Test Pattern Control */
+ #define IMX219_REG_TEST_PATTERN		0x0600
+ #define IMX219_TEST_PATTERN_DISABLE	0
+@@ -143,25 +149,66 @@ struct imx219_mode {
+ 
+ 	/* Default register values */
+ 	struct imx219_reg_list reg_list;
++
++	/* 2x2 binning is used */
++	bool binning;
+ };
+ 
+-/*
+- * Register sets lifted off the i2C interface from the Raspberry Pi firmware
+- * driver.
+- * 3280x2464 = mode 2, 1920x1080 = mode 1, 1640x1232 = mode 4, 640x480 = mode 7.
+- */
+-static const struct imx219_reg mode_3280x2464_regs[] = {
+-	{0x0100, 0x00},
++static const struct imx219_reg imx219_common_regs[] = {
++	{0x0100, 0x00},	/* Mode Select */
++
++	/* To Access Addresses 3000-5fff, send the following commands */
+ 	{0x30eb, 0x0c},
+ 	{0x30eb, 0x05},
+ 	{0x300a, 0xff},
+ 	{0x300b, 0xff},
+ 	{0x30eb, 0x05},
+ 	{0x30eb, 0x09},
+-	{0x0114, 0x01},
+-	{0x0128, 0x00},
+-	{0x012a, 0x18},
++
++	/* PLL Clock Table */
++	{0x0301, 0x05},	/* VTPXCK_DIV */
++	{0x0303, 0x01},	/* VTSYSCK_DIV */
++	{0x0304, 0x03},	/* PREPLLCK_VT_DIV 0x03 = AUTO set */
++	{0x0305, 0x03}, /* PREPLLCK_OP_DIV 0x03 = AUTO set */
++	{0x0306, 0x00},	/* PLL_VT_MPY */
++	{0x0307, 0x39},
++	{0x030b, 0x01},	/* OP_SYS_CLK_DIV */
++	{0x030c, 0x00},	/* PLL_OP_MPY */
++	{0x030d, 0x72},
++
++	/* Undocumented registers */
++	{0x455e, 0x00},
++	{0x471e, 0x4b},
++	{0x4767, 0x0f},
++	{0x4750, 0x14},
++	{0x4540, 0x00},
++	{0x47b4, 0x14},
++	{0x4713, 0x30},
++	{0x478b, 0x10},
++	{0x478f, 0x10},
++	{0x4793, 0x10},
++	{0x4797, 0x0e},
++	{0x479b, 0x0e},
++
++	/* Frame Bank Register Group "A" */
++	{0x0162, 0x0d},	/* Line_Length_A */
++	{0x0163, 0x78},
++	{0x0170, 0x01}, /* X_ODD_INC_A */
++	{0x0171, 0x01}, /* Y_ODD_INC_A */
++
++	/* Output setup registers */
++	{0x0114, 0x01},	/* CSI 2-Lane Mode */
++	{0x0128, 0x00},	/* DPHY Auto Mode */
++	{0x012a, 0x18},	/* EXCK_Freq */
+ 	{0x012b, 0x00},
++};
++
++/*
++ * Register sets lifted off the i2C interface from the Raspberry Pi firmware
++ * driver.
++ * 3280x2464 = mode 2, 1920x1080 = mode 1, 1640x1232 = mode 4, 640x480 = mode 7.
++ */
++static const struct imx219_reg mode_3280x2464_regs[] = {
+ 	{0x0164, 0x00},
+ 	{0x0165, 0x00},
+ 	{0x0166, 0x0c},
+@@ -174,53 +221,13 @@ static const struct imx219_reg mode_3280x2464_regs[] = {
+ 	{0x016d, 0xd0},
+ 	{0x016e, 0x09},
+ 	{0x016f, 0xa0},
+-	{0x0170, 0x01},
+-	{0x0171, 0x01},
+-	{0x0174, 0x00},
+-	{0x0175, 0x00},
+-	{0x0301, 0x05},
+-	{0x0303, 0x01},
+-	{0x0304, 0x03},
+-	{0x0305, 0x03},
+-	{0x0306, 0x00},
+-	{0x0307, 0x39},
+-	{0x030b, 0x01},
+-	{0x030c, 0x00},
+-	{0x030d, 0x72},
+ 	{0x0624, 0x0c},
+ 	{0x0625, 0xd0},
+ 	{0x0626, 0x09},
+ 	{0x0627, 0xa0},
+-	{0x455e, 0x00},
+-	{0x471e, 0x4b},
+-	{0x4767, 0x0f},
+-	{0x4750, 0x14},
+-	{0x4540, 0x00},
+-	{0x47b4, 0x14},
+-	{0x4713, 0x30},
+-	{0x478b, 0x10},
+-	{0x478f, 0x10},
+-	{0x4793, 0x10},
+-	{0x4797, 0x0e},
+-	{0x479b, 0x0e},
+-	{0x0162, 0x0d},
+-	{0x0163, 0x78},
+ };
+ 
+ static const struct imx219_reg mode_1920_1080_regs[] = {
+-	{0x0100, 0x00},
+-	{0x30eb, 0x05},
+-	{0x30eb, 0x0c},
+-	{0x300a, 0xff},
+-	{0x300b, 0xff},
+-	{0x30eb, 0x05},
+-	{0x30eb, 0x09},
+-	{0x0114, 0x01},
+-	{0x0128, 0x00},
+-	{0x012a, 0x18},
+-	{0x012b, 0x00},
+-	{0x0162, 0x0d},
+-	{0x0163, 0x78},
+ 	{0x0164, 0x02},
+ 	{0x0165, 0xa8},
+ 	{0x0166, 0x0a},
+@@ -233,51 +240,13 @@ static const struct imx219_reg mode_1920_1080_regs[] = {
+ 	{0x016d, 0x80},
+ 	{0x016e, 0x04},
+ 	{0x016f, 0x38},
+-	{0x0170, 0x01},
+-	{0x0171, 0x01},
+-	{0x0174, 0x00},
+-	{0x0175, 0x00},
+-	{0x0301, 0x05},
+-	{0x0303, 0x01},
+-	{0x0304, 0x03},
+-	{0x0305, 0x03},
+-	{0x0306, 0x00},
+-	{0x0307, 0x39},
+-	{0x030b, 0x01},
+-	{0x030c, 0x00},
+-	{0x030d, 0x72},
+ 	{0x0624, 0x07},
+ 	{0x0625, 0x80},
+ 	{0x0626, 0x04},
+ 	{0x0627, 0x38},
+-	{0x455e, 0x00},
+-	{0x471e, 0x4b},
+-	{0x4767, 0x0f},
+-	{0x4750, 0x14},
+-	{0x4540, 0x00},
+-	{0x47b4, 0x14},
+-	{0x4713, 0x30},
+-	{0x478b, 0x10},
+-	{0x478f, 0x10},
+-	{0x4793, 0x10},
+-	{0x4797, 0x0e},
+-	{0x479b, 0x0e},
+-	{0x0162, 0x0d},
+-	{0x0163, 0x78},
+ };
+ 
+ static const struct imx219_reg mode_1640_1232_regs[] = {
+-	{0x0100, 0x00},
+-	{0x30eb, 0x0c},
+-	{0x30eb, 0x05},
+-	{0x300a, 0xff},
+-	{0x300b, 0xff},
+-	{0x30eb, 0x05},
+-	{0x30eb, 0x09},
+-	{0x0114, 0x01},
+-	{0x0128, 0x00},
+-	{0x012a, 0x18},
+-	{0x012b, 0x00},
+ 	{0x0164, 0x00},
+ 	{0x0165, 0x00},
+ 	{0x0166, 0x0c},
+@@ -290,53 +259,13 @@ static const struct imx219_reg mode_1640_1232_regs[] = {
+ 	{0x016d, 0x68},
+ 	{0x016e, 0x04},
+ 	{0x016f, 0xd0},
+-	{0x0170, 0x01},
+-	{0x0171, 0x01},
+-	{0x0174, 0x01},
+-	{0x0175, 0x01},
+-	{0x0301, 0x05},
+-	{0x0303, 0x01},
+-	{0x0304, 0x03},
+-	{0x0305, 0x03},
+-	{0x0306, 0x00},
+-	{0x0307, 0x39},
+-	{0x030b, 0x01},
+-	{0x030c, 0x00},
+-	{0x030d, 0x72},
+ 	{0x0624, 0x06},
+ 	{0x0625, 0x68},
+ 	{0x0626, 0x04},
+ 	{0x0627, 0xd0},
+-	{0x455e, 0x00},
+-	{0x471e, 0x4b},
+-	{0x4767, 0x0f},
+-	{0x4750, 0x14},
+-	{0x4540, 0x00},
+-	{0x47b4, 0x14},
+-	{0x4713, 0x30},
+-	{0x478b, 0x10},
+-	{0x478f, 0x10},
+-	{0x4793, 0x10},
+-	{0x4797, 0x0e},
+-	{0x479b, 0x0e},
+-	{0x0162, 0x0d},
+-	{0x0163, 0x78},
+ };
+ 
+ static const struct imx219_reg mode_640_480_regs[] = {
+-	{0x0100, 0x00},
+-	{0x30eb, 0x05},
+-	{0x30eb, 0x0c},
+-	{0x300a, 0xff},
+-	{0x300b, 0xff},
+-	{0x30eb, 0x05},
+-	{0x30eb, 0x09},
+-	{0x0114, 0x01},
+-	{0x0128, 0x00},
+-	{0x012a, 0x18},
+-	{0x012b, 0x00},
+-	{0x0162, 0x0d},
+-	{0x0163, 0x78},
+ 	{0x0164, 0x03},
+ 	{0x0165, 0xe8},
+ 	{0x0166, 0x08},
+@@ -349,35 +278,10 @@ static const struct imx219_reg mode_640_480_regs[] = {
+ 	{0x016d, 0x80},
+ 	{0x016e, 0x01},
+ 	{0x016f, 0xe0},
+-	{0x0170, 0x01},
+-	{0x0171, 0x01},
+-	{0x0174, 0x03},
+-	{0x0175, 0x03},
+-	{0x0301, 0x05},
+-	{0x0303, 0x01},
+-	{0x0304, 0x03},
+-	{0x0305, 0x03},
+-	{0x0306, 0x00},
+-	{0x0307, 0x39},
+-	{0x030b, 0x01},
+-	{0x030c, 0x00},
+-	{0x030d, 0x72},
+ 	{0x0624, 0x06},
+ 	{0x0625, 0x68},
+ 	{0x0626, 0x04},
+ 	{0x0627, 0xd0},
+-	{0x455e, 0x00},
+-	{0x471e, 0x4b},
+-	{0x4767, 0x0f},
+-	{0x4750, 0x14},
+-	{0x4540, 0x00},
+-	{0x47b4, 0x14},
+-	{0x4713, 0x30},
+-	{0x478b, 0x10},
+-	{0x478f, 0x10},
+-	{0x4793, 0x10},
+-	{0x4797, 0x0e},
+-	{0x479b, 0x0e},
+ };
+ 
+ static const struct imx219_reg raw8_framefmt_regs[] = {
+@@ -483,6 +387,7 @@ static const struct imx219_mode supported_modes[] = {
+ 			.num_of_regs = ARRAY_SIZE(mode_3280x2464_regs),
+ 			.regs = mode_3280x2464_regs,
+ 		},
++		.binning = false,
+ 	},
+ 	{
+ 		/* 1080P 30fps cropped */
+@@ -499,6 +404,7 @@ static const struct imx219_mode supported_modes[] = {
+ 			.num_of_regs = ARRAY_SIZE(mode_1920_1080_regs),
+ 			.regs = mode_1920_1080_regs,
+ 		},
++		.binning = false,
+ 	},
+ 	{
+ 		/* 2x2 binned 30fps mode */
+@@ -515,6 +421,7 @@ static const struct imx219_mode supported_modes[] = {
+ 			.num_of_regs = ARRAY_SIZE(mode_1640_1232_regs),
+ 			.regs = mode_1640_1232_regs,
+ 		},
++		.binning = true,
+ 	},
+ 	{
+ 		/* 640x480 30fps mode */
+@@ -531,6 +438,7 @@ static const struct imx219_mode supported_modes[] = {
+ 			.num_of_regs = ARRAY_SIZE(mode_640_480_regs),
+ 			.regs = mode_640_480_regs,
+ 		},
++		.binning = true,
+ 	},
+ };
+ 
+@@ -969,6 +877,35 @@ static int imx219_set_framefmt(struct imx219 *imx219)
+ 	return -EINVAL;
+ }
+ 
++static int imx219_set_binning(struct imx219 *imx219)
++{
++	if (!imx219->mode->binning) {
++		return imx219_write_reg(imx219, IMX219_REG_BINNING_MODE,
++					IMX219_REG_VALUE_16BIT,
++					IMX219_BINNING_NONE);
++	}
++
++	switch (imx219->fmt.code) {
++	case MEDIA_BUS_FMT_SRGGB8_1X8:
++	case MEDIA_BUS_FMT_SGRBG8_1X8:
++	case MEDIA_BUS_FMT_SGBRG8_1X8:
++	case MEDIA_BUS_FMT_SBGGR8_1X8:
++		return imx219_write_reg(imx219, IMX219_REG_BINNING_MODE,
++					IMX219_REG_VALUE_16BIT,
++					IMX219_BINNING_2X2_ANALOG);
++
++	case MEDIA_BUS_FMT_SRGGB10_1X10:
++	case MEDIA_BUS_FMT_SGRBG10_1X10:
++	case MEDIA_BUS_FMT_SGBRG10_1X10:
++	case MEDIA_BUS_FMT_SBGGR10_1X10:
++		return imx219_write_reg(imx219, IMX219_REG_BINNING_MODE,
++					IMX219_REG_VALUE_16BIT,
++					IMX219_BINNING_2X2);
++	}
++
++	return -EINVAL;
++}
++
+ static const struct v4l2_rect *
+ __imx219_get_pad_crop(struct imx219 *imx219, struct v4l2_subdev_pad_config *cfg,
+ 		      unsigned int pad, enum v4l2_subdev_format_whence which)
+@@ -1032,6 +969,13 @@ static int imx219_start_streaming(struct imx219 *imx219)
+ 		return ret;
+ 	}
+ 
++	/* Send all registers that are common to all modes */
++	ret = imx219_write_regs(imx219, imx219_common_regs, ARRAY_SIZE(imx219_common_regs));
++	if (ret) {
++		dev_err(&client->dev, "%s failed to send mfg header\n", __func__);
++		goto err_rpm_put;
++	}
++
+ 	/* Apply default values of current mode */
+ 	reg_list = &imx219->mode->reg_list;
+ 	ret = imx219_write_regs(imx219, reg_list->regs, reg_list->num_of_regs);
+@@ -1047,6 +991,13 @@ static int imx219_start_streaming(struct imx219 *imx219)
+ 		goto err_rpm_put;
+ 	}
+ 
++	ret = imx219_set_binning(imx219);
++	if (ret) {
++		dev_err(&client->dev, "%s failed to set binning: %d\n",
++			__func__, ret);
++		goto err_rpm_put;
++	}
++
+ 	/* Apply customized values from user */
+ 	ret =  __v4l2_ctrl_handler_setup(imx219->sd.ctrl_handler);
+ 	if (ret)
+diff --git a/drivers/media/i2c/max9286.c b/drivers/media/i2c/max9286.c
+index b1e2476d3c9e6..79a11c0184c65 100644
+--- a/drivers/media/i2c/max9286.c
++++ b/drivers/media/i2c/max9286.c
+@@ -890,6 +890,7 @@ static int max9286_v4l2_register(struct max9286_priv *priv)
+ err_put_node:
+ 	fwnode_handle_put(ep);
+ err_async:
++	v4l2_ctrl_handler_free(&priv->ctrls);
+ 	max9286_v4l2_notifier_unregister(priv);
+ 
+ 	return ret;
+diff --git a/drivers/media/i2c/ov2740.c b/drivers/media/i2c/ov2740.c
+index bd0d45b0d43f5..34d74e575a431 100644
+--- a/drivers/media/i2c/ov2740.c
++++ b/drivers/media/i2c/ov2740.c
+@@ -577,8 +577,10 @@ static int ov2740_init_controls(struct ov2740 *ov2740)
+ 				     V4L2_CID_TEST_PATTERN,
+ 				     ARRAY_SIZE(ov2740_test_pattern_menu) - 1,
+ 				     0, 0, ov2740_test_pattern_menu);
+-	if (ctrl_hdlr->error)
++	if (ctrl_hdlr->error) {
++		v4l2_ctrl_handler_free(ctrl_hdlr);
+ 		return ctrl_hdlr->error;
++	}
+ 
+ 	ov2740->sd.ctrl_handler = ctrl_hdlr;
+ 
+diff --git a/drivers/media/i2c/ov5675.c b/drivers/media/i2c/ov5675.c
+index 9540ce8918f0c..aa35a9546177a 100644
+--- a/drivers/media/i2c/ov5675.c
++++ b/drivers/media/i2c/ov5675.c
+@@ -791,8 +791,10 @@ static int ov5675_init_controls(struct ov5675 *ov5675)
+ 	v4l2_ctrl_new_std(ctrl_hdlr, &ov5675_ctrl_ops,
+ 			  V4L2_CID_VFLIP, 0, 1, 1, 0);
+ 
+-	if (ctrl_hdlr->error)
++	if (ctrl_hdlr->error) {
++		v4l2_ctrl_handler_free(ctrl_hdlr);
+ 		return ctrl_hdlr->error;
++	}
+ 
+ 	ov5675->sd.ctrl_handler = ctrl_hdlr;
+ 
+diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
+index 154776d0069ea..e47800cb6c0f7 100644
+--- a/drivers/media/i2c/ov7670.c
++++ b/drivers/media/i2c/ov7670.c
+@@ -1824,7 +1824,7 @@ static int ov7670_parse_dt(struct device *dev,
+ 
+ 	if (bus_cfg.bus_type != V4L2_MBUS_PARALLEL) {
+ 		dev_err(dev, "Unsupported media bus type\n");
+-		return ret;
++		return -EINVAL;
+ 	}
+ 	info->mbus_config = bus_cfg.bus.parallel.flags;
+ 
+diff --git a/drivers/media/i2c/ov772x.c b/drivers/media/i2c/ov772x.c
+index 2cc6a678069a2..5033950a48ab6 100644
+--- a/drivers/media/i2c/ov772x.c
++++ b/drivers/media/i2c/ov772x.c
+@@ -1397,7 +1397,7 @@ static int ov772x_probe(struct i2c_client *client)
+ 	priv->subdev.ctrl_handler = &priv->hdl;
+ 	if (priv->hdl.error) {
+ 		ret = priv->hdl.error;
+-		goto error_mutex_destroy;
++		goto error_ctrl_free;
+ 	}
+ 
+ 	priv->clk = clk_get(&client->dev, NULL);
+@@ -1446,7 +1446,6 @@ error_clk_put:
+ 	clk_put(priv->clk);
+ error_ctrl_free:
+ 	v4l2_ctrl_handler_free(&priv->hdl);
+-error_mutex_destroy:
+ 	mutex_destroy(&priv->lock);
+ 
+ 	return ret;
+diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2.c b/drivers/media/pci/intel/ipu3/ipu3-cio2.c
+index 2fe4a0bd02844..d6838c8ebd7e8 100644
+--- a/drivers/media/pci/intel/ipu3/ipu3-cio2.c
++++ b/drivers/media/pci/intel/ipu3/ipu3-cio2.c
+@@ -1831,6 +1831,9 @@ static void cio2_pci_remove(struct pci_dev *pci_dev)
+ 	v4l2_device_unregister(&cio2->v4l2_dev);
+ 	media_device_cleanup(&cio2->media_dev);
+ 	mutex_destroy(&cio2->lock);
++
++	pm_runtime_forbid(&pci_dev->dev);
++	pm_runtime_get_noresume(&pci_dev->dev);
+ }
+ 
+ static int __maybe_unused cio2_runtime_suspend(struct device *dev)
+diff --git a/drivers/media/pci/saa7134/saa7134-core.c b/drivers/media/pci/saa7134/saa7134-core.c
+index efb757d5168a6..e97c30070fc64 100644
+--- a/drivers/media/pci/saa7134/saa7134-core.c
++++ b/drivers/media/pci/saa7134/saa7134-core.c
+@@ -977,7 +977,7 @@ static void saa7134_unregister_video(struct saa7134_dev *dev)
+ 	}
+ 	if (dev->radio_dev) {
+ 		if (video_is_registered(dev->radio_dev))
+-			vb2_video_unregister_device(dev->radio_dev);
++			video_unregister_device(dev->radio_dev);
+ 		else
+ 			video_device_release(dev->radio_dev);
+ 		dev->radio_dev = NULL;
+diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
+index 1311b4996eceb..21c16698cc2db 100644
+--- a/drivers/media/platform/omap3isp/isp.c
++++ b/drivers/media/platform/omap3isp/isp.c
+@@ -2297,7 +2297,16 @@ static int isp_probe(struct platform_device *pdev)
+ 
+ 	/* Regulators */
+ 	isp->isp_csiphy1.vdd = devm_regulator_get(&pdev->dev, "vdd-csiphy1");
++	if (IS_ERR(isp->isp_csiphy1.vdd)) {
++		ret = PTR_ERR(isp->isp_csiphy1.vdd);
++		goto error;
++	}
++
+ 	isp->isp_csiphy2.vdd = devm_regulator_get(&pdev->dev, "vdd-csiphy2");
++	if (IS_ERR(isp->isp_csiphy2.vdd)) {
++		ret = PTR_ERR(isp->isp_csiphy2.vdd);
++		goto error;
++	}
+ 
+ 	/* Clocks
+ 	 *
+diff --git a/drivers/media/platform/ti-vpe/cal.c b/drivers/media/platform/ti-vpe/cal.c
+index 2eef245c31a17..93121c90d76ae 100644
+--- a/drivers/media/platform/ti-vpe/cal.c
++++ b/drivers/media/platform/ti-vpe/cal.c
+@@ -624,8 +624,10 @@ static struct cal_ctx *cal_ctx_create(struct cal_dev *cal, int inst)
+ 	ctx->cport = inst;
+ 
+ 	ret = cal_ctx_v4l2_init(ctx);
+-	if (ret)
++	if (ret) {
++		kfree(ctx);
+ 		return NULL;
++	}
+ 
+ 	return ctx;
+ }
+diff --git a/drivers/media/rc/ene_ir.c b/drivers/media/rc/ene_ir.c
+index 6049e5c95394f..5aa3953cab82c 100644
+--- a/drivers/media/rc/ene_ir.c
++++ b/drivers/media/rc/ene_ir.c
+@@ -1106,6 +1106,8 @@ static void ene_remove(struct pnp_dev *pnp_dev)
+ 	struct ene_device *dev = pnp_get_drvdata(pnp_dev);
+ 	unsigned long flags;
+ 
++	rc_unregister_device(dev->rdev);
++	del_timer_sync(&dev->tx_sim_timer);
+ 	spin_lock_irqsave(&dev->hw_lock, flags);
+ 	ene_rx_disable(dev);
+ 	ene_rx_restore_hw_buffer(dev);
+@@ -1113,7 +1115,6 @@ static void ene_remove(struct pnp_dev *pnp_dev)
+ 
+ 	free_irq(dev->irq, dev);
+ 	release_region(dev->hw_io, ENE_IO_SIZE);
+-	rc_unregister_device(dev->rdev);
+ 	kfree(dev);
+ }
+ 
+diff --git a/drivers/media/usb/siano/smsusb.c b/drivers/media/usb/siano/smsusb.c
+index df4c5dcba39cd..1babfe6e2c361 100644
+--- a/drivers/media/usb/siano/smsusb.c
++++ b/drivers/media/usb/siano/smsusb.c
+@@ -179,6 +179,7 @@ static void smsusb_stop_streaming(struct smsusb_device_t *dev)
+ 
+ 	for (i = 0; i < MAX_URBS; i++) {
+ 		usb_kill_urb(&dev->surbs[i].urb);
++		cancel_work_sync(&dev->surbs[i].wq);
+ 
+ 		if (dev->surbs[i].cb) {
+ 			smscore_putbuffer(dev->coredev, dev->surbs[i].cb);
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index f479d8971dfbb..5e0acabed37a0 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -6,6 +6,7 @@
+  *          Laurent Pinchart (laurent.pinchart@ideasonboard.com)
+  */
+ 
++#include <asm/barrier.h>
+ #include <linux/kernel.h>
+ #include <linux/list.h>
+ #include <linux/module.h>
+@@ -1275,17 +1276,12 @@ static void uvc_ctrl_send_slave_event(struct uvc_video_chain *chain,
+ 	uvc_ctrl_send_event(chain, handle, ctrl, mapping, val, changes);
+ }
+ 
+-static void uvc_ctrl_status_event_work(struct work_struct *work)
++void uvc_ctrl_status_event(struct uvc_video_chain *chain,
++			   struct uvc_control *ctrl, const u8 *data)
+ {
+-	struct uvc_device *dev = container_of(work, struct uvc_device,
+-					      async_ctrl.work);
+-	struct uvc_ctrl_work *w = &dev->async_ctrl;
+-	struct uvc_video_chain *chain = w->chain;
+ 	struct uvc_control_mapping *mapping;
+-	struct uvc_control *ctrl = w->ctrl;
+ 	struct uvc_fh *handle;
+ 	unsigned int i;
+-	int ret;
+ 
+ 	mutex_lock(&chain->ctrl_mutex);
+ 
+@@ -1293,7 +1289,7 @@ static void uvc_ctrl_status_event_work(struct work_struct *work)
+ 	ctrl->handle = NULL;
+ 
+ 	list_for_each_entry(mapping, &ctrl->info.mappings, list) {
+-		s32 value = __uvc_ctrl_get_value(mapping, w->data);
++		s32 value = __uvc_ctrl_get_value(mapping, data);
+ 
+ 		/*
+ 		 * handle may be NULL here if the device sends auto-update
+@@ -1312,6 +1308,20 @@ static void uvc_ctrl_status_event_work(struct work_struct *work)
+ 	}
+ 
+ 	mutex_unlock(&chain->ctrl_mutex);
++}
++
++static void uvc_ctrl_status_event_work(struct work_struct *work)
++{
++	struct uvc_device *dev = container_of(work, struct uvc_device,
++					      async_ctrl.work);
++	struct uvc_ctrl_work *w = &dev->async_ctrl;
++	int ret;
++
++	uvc_ctrl_status_event(w->chain, w->ctrl, w->data);
++
++	/* The barrier is needed to synchronize with uvc_status_stop(). */
++	if (smp_load_acquire(&dev->flush_status))
++		return;
+ 
+ 	/* Resubmit the URB. */
+ 	w->urb->interval = dev->int_ep->desc.bInterval;
+@@ -1321,8 +1331,8 @@ static void uvc_ctrl_status_event_work(struct work_struct *work)
+ 			   ret);
+ }
+ 
+-bool uvc_ctrl_status_event(struct urb *urb, struct uvc_video_chain *chain,
+-			   struct uvc_control *ctrl, const u8 *data)
++bool uvc_ctrl_status_event_async(struct urb *urb, struct uvc_video_chain *chain,
++				 struct uvc_control *ctrl, const u8 *data)
+ {
+ 	struct uvc_device *dev = chain->dev;
+ 	struct uvc_ctrl_work *w = &dev->async_ctrl;
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 282f3d2388cc2..6334f99f1854d 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -1121,10 +1121,8 @@ static int uvc_parse_vendor_control(struct uvc_device *dev,
+ 					       + n;
+ 		memcpy(unit->extension.bmControls, &buffer[23+p], 2*n);
+ 
+-		if (buffer[24+p+2*n] != 0)
+-			usb_string(udev, buffer[24+p+2*n], unit->name,
+-				   sizeof(unit->name));
+-		else
++		if (buffer[24+p+2*n] == 0 ||
++		    usb_string(udev, buffer[24+p+2*n], unit->name, sizeof(unit->name)) < 0)
+ 			sprintf(unit->name, "Extension %u", buffer[3]);
+ 
+ 		list_add_tail(&unit->list, &dev->entities);
+@@ -1249,15 +1247,15 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ 			memcpy(term->media.bmTransportModes, &buffer[10+n], p);
+ 		}
+ 
+-		if (buffer[7] != 0)
+-			usb_string(udev, buffer[7], term->name,
+-				   sizeof(term->name));
+-		else if (UVC_ENTITY_TYPE(term) == UVC_ITT_CAMERA)
+-			sprintf(term->name, "Camera %u", buffer[3]);
+-		else if (UVC_ENTITY_TYPE(term) == UVC_ITT_MEDIA_TRANSPORT_INPUT)
+-			sprintf(term->name, "Media %u", buffer[3]);
+-		else
+-			sprintf(term->name, "Input %u", buffer[3]);
++		if (buffer[7] == 0 ||
++		    usb_string(udev, buffer[7], term->name, sizeof(term->name)) < 0) {
++			if (UVC_ENTITY_TYPE(term) == UVC_ITT_CAMERA)
++				sprintf(term->name, "Camera %u", buffer[3]);
++			if (UVC_ENTITY_TYPE(term) == UVC_ITT_MEDIA_TRANSPORT_INPUT)
++				sprintf(term->name, "Media %u", buffer[3]);
++			else
++				sprintf(term->name, "Input %u", buffer[3]);
++		}
+ 
+ 		list_add_tail(&term->list, &dev->entities);
+ 		break;
+@@ -1289,10 +1287,8 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ 
+ 		memcpy(term->baSourceID, &buffer[7], 1);
+ 
+-		if (buffer[8] != 0)
+-			usb_string(udev, buffer[8], term->name,
+-				   sizeof(term->name));
+-		else
++		if (buffer[8] == 0 ||
++		    usb_string(udev, buffer[8], term->name, sizeof(term->name)) < 0)
+ 			sprintf(term->name, "Output %u", buffer[3]);
+ 
+ 		list_add_tail(&term->list, &dev->entities);
+@@ -1314,10 +1310,8 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ 
+ 		memcpy(unit->baSourceID, &buffer[5], p);
+ 
+-		if (buffer[5+p] != 0)
+-			usb_string(udev, buffer[5+p], unit->name,
+-				   sizeof(unit->name));
+-		else
++		if (buffer[5+p] == 0 ||
++		    usb_string(udev, buffer[5+p], unit->name, sizeof(unit->name)) < 0)
+ 			sprintf(unit->name, "Selector %u", buffer[3]);
+ 
+ 		list_add_tail(&unit->list, &dev->entities);
+@@ -1347,10 +1341,8 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ 		if (dev->uvc_version >= 0x0110)
+ 			unit->processing.bmVideoStandards = buffer[9+n];
+ 
+-		if (buffer[8+n] != 0)
+-			usb_string(udev, buffer[8+n], unit->name,
+-				   sizeof(unit->name));
+-		else
++		if (buffer[8+n] == 0 ||
++		    usb_string(udev, buffer[8+n], unit->name, sizeof(unit->name)) < 0)
+ 			sprintf(unit->name, "Processing %u", buffer[3]);
+ 
+ 		list_add_tail(&unit->list, &dev->entities);
+@@ -1378,10 +1370,8 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ 		unit->extension.bmControls = (u8 *)unit + sizeof(*unit);
+ 		memcpy(unit->extension.bmControls, &buffer[23+p], n);
+ 
+-		if (buffer[23+p+n] != 0)
+-			usb_string(udev, buffer[23+p+n], unit->name,
+-				   sizeof(unit->name));
+-		else
++		if (buffer[23+p+n] == 0 ||
++		    usb_string(udev, buffer[23+p+n], unit->name, sizeof(unit->name)) < 0)
+ 			sprintf(unit->name, "Extension %u", buffer[3]);
+ 
+ 		list_add_tail(&unit->list, &dev->entities);
+@@ -2565,6 +2555,24 @@ static const struct usb_device_id uvc_ids[] = {
+ 	  .bInterfaceSubClass	= 1,
+ 	  .bInterfaceProtocol	= 0,
+ 	  .driver_info		= (kernel_ulong_t)&uvc_quirk_probe_minmax },
++	/* Logitech, Webcam C910 */
++	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
++				| USB_DEVICE_ID_MATCH_INT_INFO,
++	  .idVendor		= 0x046d,
++	  .idProduct		= 0x0821,
++	  .bInterfaceClass	= USB_CLASS_VIDEO,
++	  .bInterfaceSubClass	= 1,
++	  .bInterfaceProtocol	= 0,
++	  .driver_info		= UVC_INFO_QUIRK(UVC_QUIRK_WAKE_AUTOSUSPEND)},
++	/* Logitech, Webcam B910 */
++	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
++				| USB_DEVICE_ID_MATCH_INT_INFO,
++	  .idVendor		= 0x046d,
++	  .idProduct		= 0x0823,
++	  .bInterfaceClass	= USB_CLASS_VIDEO,
++	  .bInterfaceSubClass	= 1,
++	  .bInterfaceProtocol	= 0,
++	  .driver_info		= UVC_INFO_QUIRK(UVC_QUIRK_WAKE_AUTOSUSPEND)},
+ 	/* Logitech Quickcam Fusion */
+ 	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
+ 				| USB_DEVICE_ID_MATCH_INT_INFO,
+diff --git a/drivers/media/usb/uvc/uvc_entity.c b/drivers/media/usb/uvc/uvc_entity.c
+index ca3a9c2eec271..7c9895377118c 100644
+--- a/drivers/media/usb/uvc/uvc_entity.c
++++ b/drivers/media/usb/uvc/uvc_entity.c
+@@ -37,7 +37,7 @@ static int uvc_mc_create_links(struct uvc_video_chain *chain,
+ 			continue;
+ 
+ 		remote = uvc_entity_by_id(chain->dev, entity->baSourceID[i]);
+-		if (remote == NULL)
++		if (remote == NULL || remote->num_pads == 0)
+ 			return -EINVAL;
+ 
+ 		source = (UVC_ENTITY_TYPE(remote) == UVC_TT_STREAMING)
+diff --git a/drivers/media/usb/uvc/uvc_status.c b/drivers/media/usb/uvc/uvc_status.c
+index 2bdb0ff203f8e..73725051cc163 100644
+--- a/drivers/media/usb/uvc/uvc_status.c
++++ b/drivers/media/usb/uvc/uvc_status.c
+@@ -6,6 +6,7 @@
+  *          Laurent Pinchart (laurent.pinchart@ideasonboard.com)
+  */
+ 
++#include <asm/barrier.h>
+ #include <linux/kernel.h>
+ #include <linux/input.h>
+ #include <linux/slab.h>
+@@ -179,7 +180,8 @@ static bool uvc_event_control(struct urb *urb,
+ 
+ 	switch (status->bAttribute) {
+ 	case UVC_CTRL_VALUE_CHANGE:
+-		return uvc_ctrl_status_event(urb, chain, ctrl, status->bValue);
++		return uvc_ctrl_status_event_async(urb, chain, ctrl,
++						   status->bValue);
+ 
+ 	case UVC_CTRL_INFO_CHANGE:
+ 	case UVC_CTRL_FAILURE_CHANGE:
+@@ -309,5 +311,41 @@ int uvc_status_start(struct uvc_device *dev, gfp_t flags)
+ 
+ void uvc_status_stop(struct uvc_device *dev)
+ {
++	struct uvc_ctrl_work *w = &dev->async_ctrl;
++
++	/*
++	 * Prevent the asynchronous control handler from requeing the URB. The
++	 * barrier is needed so the flush_status change is visible to other
++	 * CPUs running the asynchronous handler before usb_kill_urb() is
++	 * called below.
++	 */
++	smp_store_release(&dev->flush_status, true);
++
++	/*
++	 * Cancel any pending asynchronous work. If any status event was queued,
++	 * process it synchronously.
++	 */
++	if (cancel_work_sync(&w->work))
++		uvc_ctrl_status_event(w->chain, w->ctrl, w->data);
++
++	/* Kill the urb. */
+ 	usb_kill_urb(dev->int_urb);
++
++	/*
++	 * The URB completion handler may have queued asynchronous work. This
++	 * won't resubmit the URB as flush_status is set, but it needs to be
++	 * cancelled before returning or it could then race with a future
++	 * uvc_status_start() call.
++	 */
++	if (cancel_work_sync(&w->work))
++		uvc_ctrl_status_event(w->chain, w->ctrl, w->data);
++
++	/*
++	 * From this point, there are no events on the queue and the status URB
++	 * is dead. No events will be queued until uvc_status_start() is called.
++	 * The barrier is needed to make sure that flush_status is visible to
++	 * uvc_ctrl_status_event_work() when uvc_status_start() will be called
++	 * again.
++	 */
++	smp_store_release(&dev->flush_status, false);
+ }
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index f6373d678d256..03dfe96bcebac 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -1308,7 +1308,9 @@ static void uvc_video_decode_meta(struct uvc_streaming *stream,
+ 	if (has_scr)
+ 		memcpy(stream->clock.last_scr, scr, 6);
+ 
+-	memcpy(&meta->length, mem, length);
++	meta->length = mem[0];
++	meta->flags  = mem[1];
++	memcpy(meta->buf, &mem[2], length - 2);
+ 	meta_buf->bytesused += length + sizeof(meta->ns) + sizeof(meta->sof);
+ 
+ 	uvc_trace(UVC_TRACE_FRAME,
+@@ -1903,6 +1905,17 @@ static int uvc_video_start_transfer(struct uvc_streaming *stream,
+ 		uvc_trace(UVC_TRACE_VIDEO, "Selecting alternate setting %u "
+ 			"(%u B/frame bandwidth).\n", altsetting, best_psize);
+ 
++		/*
++		 * Some devices, namely the Logitech C910 and B910, are unable
++		 * to recover from a USB autosuspend, unless the alternate
++		 * setting of the streaming interface is toggled.
++		 */
++		if (stream->dev->quirks & UVC_QUIRK_WAKE_AUTOSUSPEND) {
++			usb_set_interface(stream->dev->udev, intfnum,
++					  altsetting);
++			usb_set_interface(stream->dev->udev, intfnum, 0);
++		}
++
+ 		ret = usb_set_interface(stream->dev->udev, intfnum, altsetting);
+ 		if (ret < 0)
+ 			return ret;
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index c884020b28784..c75990c0957e7 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -203,6 +203,7 @@
+ #define UVC_QUIRK_RESTORE_CTRLS_ON_INIT	0x00000400
+ #define UVC_QUIRK_FORCE_Y8		0x00000800
+ #define UVC_QUIRK_FORCE_BPP		0x00001000
++#define UVC_QUIRK_WAKE_AUTOSUSPEND	0x00002000
+ 
+ /* Format flags */
+ #define UVC_FMT_FLAG_COMPRESSED		0x00000001
+@@ -669,6 +670,7 @@ struct uvc_device {
+ 	/* Status Interrupt Endpoint */
+ 	struct usb_host_endpoint *int_ep;
+ 	struct urb *int_urb;
++	bool flush_status;
+ 	u8 *status;
+ 	struct input_dev *input;
+ 	char input_phys[64];
+@@ -838,7 +840,9 @@ int uvc_ctrl_add_mapping(struct uvc_video_chain *chain,
+ int uvc_ctrl_init_device(struct uvc_device *dev);
+ void uvc_ctrl_cleanup_device(struct uvc_device *dev);
+ int uvc_ctrl_restore_values(struct uvc_device *dev);
+-bool uvc_ctrl_status_event(struct urb *urb, struct uvc_video_chain *chain,
++bool uvc_ctrl_status_event_async(struct urb *urb, struct uvc_video_chain *chain,
++				 struct uvc_control *ctrl, const u8 *data);
++void uvc_ctrl_status_event(struct uvc_video_chain *chain,
+ 			   struct uvc_control *ctrl, const u8 *data);
+ 
+ int uvc_ctrl_begin(struct uvc_video_chain *chain);
+diff --git a/drivers/mfd/arizona-core.c b/drivers/mfd/arizona-core.c
+index 000cb82023e35..afdc490836255 100644
+--- a/drivers/mfd/arizona-core.c
++++ b/drivers/mfd/arizona-core.c
+@@ -45,7 +45,7 @@ int arizona_clk32k_enable(struct arizona *arizona)
+ 	if (arizona->clk32k_ref == 1) {
+ 		switch (arizona->pdata.clk32k_src) {
+ 		case ARIZONA_32KZ_MCLK1:
+-			ret = pm_runtime_get_sync(arizona->dev);
++			ret = pm_runtime_resume_and_get(arizona->dev);
+ 			if (ret != 0)
+ 				goto err_ref;
+ 			ret = clk_prepare_enable(arizona->mclk[ARIZONA_MCLK1]);
+diff --git a/drivers/mfd/pcf50633-adc.c b/drivers/mfd/pcf50633-adc.c
+index 5cd653e615125..191b1bc6141c2 100644
+--- a/drivers/mfd/pcf50633-adc.c
++++ b/drivers/mfd/pcf50633-adc.c
+@@ -136,6 +136,7 @@ int pcf50633_adc_async_read(struct pcf50633 *pcf, int mux, int avg,
+ 			     void *callback_param)
+ {
+ 	struct pcf50633_adc_request *req;
++	int ret;
+ 
+ 	/* req is freed when the result is ready, in interrupt handler */
+ 	req = kmalloc(sizeof(*req), GFP_KERNEL);
+@@ -147,7 +148,11 @@ int pcf50633_adc_async_read(struct pcf50633 *pcf, int mux, int avg,
+ 	req->callback = callback;
+ 	req->callback_param = callback_param;
+ 
+-	return adc_enqueue_request(pcf, req);
++	ret = adc_enqueue_request(pcf, req);
++	if (ret)
++		kfree(req);
++
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(pcf50633_adc_async_read);
+ 
+diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c
+index 4e30fa98fe7d3..c4c1275581ec9 100644
+--- a/drivers/misc/mei/bus-fixup.c
++++ b/drivers/misc/mei/bus-fixup.c
+@@ -172,7 +172,7 @@ static int mei_fwver(struct mei_cl_device *cldev)
+ 	ret = __mei_cl_send(cldev->cl, (u8 *)&req, sizeof(req),
+ 			    MEI_CL_IO_TX_BLOCKING);
+ 	if (ret < 0) {
+-		dev_err(&cldev->dev, "Could not send ReqFWVersion cmd\n");
++		dev_err(&cldev->dev, "Could not send ReqFWVersion cmd ret = %d\n", ret);
+ 		return ret;
+ 	}
+ 
+@@ -184,7 +184,7 @@ static int mei_fwver(struct mei_cl_device *cldev)
+ 		 * Should be at least one version block,
+ 		 * error out if nothing found
+ 		 */
+-		dev_err(&cldev->dev, "Could not read FW version\n");
++		dev_err(&cldev->dev, "Could not read FW version ret = %d\n", bytes_recv);
+ 		return -EIO;
+ 	}
+ 
+@@ -332,7 +332,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
+ 
+ 	ret = __mei_cl_send(cl, (u8 *)&cmd, sizeof(cmd), MEI_CL_IO_TX_BLOCKING);
+ 	if (ret < 0) {
+-		dev_err(bus->dev, "Could not send IF version cmd\n");
++		dev_err(bus->dev, "Could not send IF version cmd ret = %d\n", ret);
+ 		return ret;
+ 	}
+ 
+@@ -346,7 +346,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
+ 	ret = 0;
+ 	bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, 0, 0);
+ 	if (bytes_recv < 0 || (size_t)bytes_recv < if_version_length) {
+-		dev_err(bus->dev, "Could not read IF version\n");
++		dev_err(bus->dev, "Could not read IF version ret = %d\n", bytes_recv);
+ 		ret = -EIO;
+ 		goto err;
+ 	}
+diff --git a/drivers/mtd/nand/raw/sunxi_nand.c b/drivers/mtd/nand/raw/sunxi_nand.c
+index 2a7ca3072f357..52eb28f3277cd 100644
+--- a/drivers/mtd/nand/raw/sunxi_nand.c
++++ b/drivers/mtd/nand/raw/sunxi_nand.c
+@@ -1587,7 +1587,7 @@ static int sunxi_nand_ooblayout_free(struct mtd_info *mtd, int section,
+ 	if (section < ecc->steps)
+ 		oobregion->length = 4;
+ 	else
+-		oobregion->offset = mtd->oobsize - oobregion->offset;
++		oobregion->length = mtd->oobsize - oobregion->offset;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 2c256d455c9f3..3422152319321 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -2424,6 +2424,15 @@ void spi_nor_set_erase_type(struct spi_nor_erase_type *erase, u32 size,
+ 	erase->size_mask = (1 << erase->size_shift) - 1;
+ }
+ 
++/**
++ * spi_nor_mask_erase_type() - mask out a SPI NOR erase type
++ * @erase:	pointer to a structure that describes a SPI NOR erase type
++ */
++void spi_nor_mask_erase_type(struct spi_nor_erase_type *erase)
++{
++	erase->size = 0;
++}
++
+ /**
+  * spi_nor_init_uniform_erase_map() - Initialize uniform erase map
+  * @map:		the erase map of the SPI NOR
+diff --git a/drivers/mtd/spi-nor/core.h b/drivers/mtd/spi-nor/core.h
+index 6f62ee861231a..788775bb67958 100644
+--- a/drivers/mtd/spi-nor/core.h
++++ b/drivers/mtd/spi-nor/core.h
+@@ -424,6 +424,7 @@ void spi_nor_set_pp_settings(struct spi_nor_pp_command *pp, u8 opcode,
+ 
+ void spi_nor_set_erase_type(struct spi_nor_erase_type *erase, u32 size,
+ 			    u8 opcode);
++void spi_nor_mask_erase_type(struct spi_nor_erase_type *erase);
+ struct spi_nor_erase_region *
+ spi_nor_region_next(struct spi_nor_erase_region *region);
+ void spi_nor_init_uniform_erase_map(struct spi_nor_erase_map *map,
+diff --git a/drivers/mtd/spi-nor/sfdp.c b/drivers/mtd/spi-nor/sfdp.c
+index 08de2a2b44520..9dc0528ea8842 100644
+--- a/drivers/mtd/spi-nor/sfdp.c
++++ b/drivers/mtd/spi-nor/sfdp.c
+@@ -852,7 +852,7 @@ spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
+ 	 */
+ 	for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)
+ 		if (!(regions_erase_type & BIT(erase[i].idx)))
+-			spi_nor_set_erase_type(&erase[i], 0, 0xFF);
++			spi_nor_mask_erase_type(&erase[i]);
+ 
+ 	return 0;
+ }
+@@ -1063,7 +1063,7 @@ static int spi_nor_parse_4bait(struct spi_nor *nor,
+ 			erase_type[i].opcode = (dwords[1] >>
+ 						erase_type[i].idx * 8) & 0xFF;
+ 		else
+-			spi_nor_set_erase_type(&erase_type[i], 0u, 0xFF);
++			spi_nor_mask_erase_type(&erase_type[i]);
+ 	}
+ 
+ 	/*
+diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
+index 4153e0d15c5f9..e45fdc1bf66a4 100644
+--- a/drivers/mtd/ubi/build.c
++++ b/drivers/mtd/ubi/build.c
+@@ -467,6 +467,7 @@ static int uif_init(struct ubi_device *ubi)
+ 			err = ubi_add_volume(ubi, ubi->volumes[i]);
+ 			if (err) {
+ 				ubi_err(ubi, "cannot add volume %d", i);
++				ubi->volumes[i] = NULL;
+ 				goto out_volumes;
+ 			}
+ 		}
+@@ -664,6 +665,12 @@ static int io_init(struct ubi_device *ubi, int max_beb_per1024)
+ 	ubi->ec_hdr_alsize = ALIGN(UBI_EC_HDR_SIZE, ubi->hdrs_min_io_size);
+ 	ubi->vid_hdr_alsize = ALIGN(UBI_VID_HDR_SIZE, ubi->hdrs_min_io_size);
+ 
++	if (ubi->vid_hdr_offset && ((ubi->vid_hdr_offset + UBI_VID_HDR_SIZE) >
++	    ubi->vid_hdr_alsize)) {
++		ubi_err(ubi, "VID header offset %d too large.", ubi->vid_hdr_offset);
++		return -EINVAL;
++	}
++
+ 	dbg_gen("min_io_size      %d", ubi->min_io_size);
+ 	dbg_gen("max_write_size   %d", ubi->max_write_size);
+ 	dbg_gen("hdrs_min_io_size %d", ubi->hdrs_min_io_size);
+diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c
+index 053ab52668e8b..69592be33adfc 100644
+--- a/drivers/mtd/ubi/fastmap-wl.c
++++ b/drivers/mtd/ubi/fastmap-wl.c
+@@ -146,13 +146,15 @@ void ubi_refill_pools(struct ubi_device *ubi)
+ 	if (ubi->fm_anchor) {
+ 		wl_tree_add(ubi->fm_anchor, &ubi->free);
+ 		ubi->free_count++;
++		ubi->fm_anchor = NULL;
+ 	}
+ 
+-	/*
+-	 * All available PEBs are in ubi->free, now is the time to get
+-	 * the best anchor PEBs.
+-	 */
+-	ubi->fm_anchor = ubi_wl_get_fm_peb(ubi, 1);
++	if (!ubi->fm_disabled)
++		/*
++		 * All available PEBs are in ubi->free, now is the time to get
++		 * the best anchor PEBs.
++		 */
++		ubi->fm_anchor = ubi_wl_get_fm_peb(ubi, 1);
+ 
+ 	for (;;) {
+ 		enough = 0;
+diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
+index 6ea95ade4ca6b..d79323e8ea29d 100644
+--- a/drivers/mtd/ubi/vmt.c
++++ b/drivers/mtd/ubi/vmt.c
+@@ -464,7 +464,7 @@ int ubi_resize_volume(struct ubi_volume_desc *desc, int reserved_pebs)
+ 		for (i = 0; i < -pebs; i++) {
+ 			err = ubi_eba_unmap_leb(ubi, vol, reserved_pebs + i);
+ 			if (err)
+-				goto out_acc;
++				goto out_free;
+ 		}
+ 		spin_lock(&ubi->volumes_lock);
+ 		ubi->rsvd_pebs += pebs;
+@@ -512,8 +512,10 @@ out_acc:
+ 		ubi->avail_pebs += pebs;
+ 		spin_unlock(&ubi->volumes_lock);
+ 	}
++	return err;
++
+ out_free:
+-	kfree(new_eba_tbl);
++	ubi_eba_destroy_table(new_eba_tbl);
+ 	return err;
+ }
+ 
+@@ -580,6 +582,7 @@ int ubi_add_volume(struct ubi_device *ubi, struct ubi_volume *vol)
+ 	if (err) {
+ 		ubi_err(ubi, "cannot add character device for volume %d, error %d",
+ 			vol_id, err);
++		vol_release(&vol->dev);
+ 		return err;
+ 	}
+ 
+@@ -590,15 +593,14 @@ int ubi_add_volume(struct ubi_device *ubi, struct ubi_volume *vol)
+ 	vol->dev.groups = volume_dev_groups;
+ 	dev_set_name(&vol->dev, "%s_%d", ubi->ubi_name, vol->vol_id);
+ 	err = device_register(&vol->dev);
+-	if (err)
+-		goto out_cdev;
++	if (err) {
++		cdev_del(&vol->cdev);
++		put_device(&vol->dev);
++		return err;
++	}
+ 
+ 	self_check_volumes(ubi);
+ 	return err;
+-
+-out_cdev:
+-	cdev_del(&vol->cdev);
+-	return err;
+ }
+ 
+ /**
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 820b5c1c8e8e7..6da09263e0b9f 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -885,8 +885,11 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ 
+ 	err = do_sync_erase(ubi, e1, vol_id, lnum, 0);
+ 	if (err) {
+-		if (e2)
++		if (e2) {
++			spin_lock(&ubi->wl_lock);
+ 			wl_entry_destroy(ubi, e2);
++			spin_unlock(&ubi->wl_lock);
++		}
+ 		goto out_ro;
+ 	}
+ 
+@@ -968,11 +971,11 @@ out_error:
+ 	spin_lock(&ubi->wl_lock);
+ 	ubi->move_from = ubi->move_to = NULL;
+ 	ubi->move_to_put = ubi->wl_scheduled = 0;
++	wl_entry_destroy(ubi, e1);
++	wl_entry_destroy(ubi, e2);
+ 	spin_unlock(&ubi->wl_lock);
+ 
+ 	ubi_free_vid_buf(vidb);
+-	wl_entry_destroy(ubi, e1);
+-	wl_entry_destroy(ubi, e2);
+ 
+ out_ro:
+ 	ubi_ro_mode(ubi);
+@@ -1121,14 +1124,18 @@ static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk)
+ 		/* Re-schedule the LEB for erasure */
+ 		err1 = schedule_erase(ubi, e, vol_id, lnum, 0, false);
+ 		if (err1) {
++			spin_lock(&ubi->wl_lock);
+ 			wl_entry_destroy(ubi, e);
++			spin_unlock(&ubi->wl_lock);
+ 			err = err1;
+ 			goto out_ro;
+ 		}
+ 		return err;
+ 	}
+ 
++	spin_lock(&ubi->wl_lock);
+ 	wl_entry_destroy(ubi, e);
++	spin_unlock(&ubi->wl_lock);
+ 	if (err != -EIO)
+ 		/*
+ 		 * If this is not %-EIO, we have no idea what to do. Scheduling
+@@ -1244,6 +1251,18 @@ int ubi_wl_put_peb(struct ubi_device *ubi, int vol_id, int lnum,
+ retry:
+ 	spin_lock(&ubi->wl_lock);
+ 	e = ubi->lookuptbl[pnum];
++	if (!e) {
++		/*
++		 * This wl entry has been removed for some errors by other
++		 * process (eg. wear leveling worker), corresponding process
++		 * (except __erase_worker, which cannot concurrent with
++		 * ubi_wl_put_peb) will set ubi ro_mode at the same time,
++		 * just ignore this wl entry.
++		 */
++		spin_unlock(&ubi->wl_lock);
++		up_read(&ubi->fm_protect);
++		return 0;
++	}
+ 	if (e == ubi->move_from) {
+ 		/*
+ 		 * User is putting the physical eraseblock which was selected to
+diff --git a/drivers/net/can/usb/esd_usb2.c b/drivers/net/can/usb/esd_usb2.c
+index 73c5343e609bc..c9ccce6c60b46 100644
+--- a/drivers/net/can/usb/esd_usb2.c
++++ b/drivers/net/can/usb/esd_usb2.c
+@@ -278,7 +278,6 @@ static void esd_usb2_rx_event(struct esd_usb2_net_priv *priv,
+ 				cf->data[2] |= CAN_ERR_PROT_STUFF;
+ 				break;
+ 			default:
+-				cf->data[3] = ecc & SJA1000_ECC_SEG;
+ 				break;
+ 			}
+ 
+@@ -286,6 +285,9 @@ static void esd_usb2_rx_event(struct esd_usb2_net_priv *priv,
+ 			if (!(ecc & SJA1000_ECC_DIR))
+ 				cf->data[2] |= CAN_ERR_PROT_TX;
+ 
++			/* Bit stream position in CAN frame as the error was detected */
++			cf->data[3] = ecc & SJA1000_ECC_SEG;
++
+ 			if (priv->can.state == CAN_STATE_ERROR_WARNING ||
+ 			    priv->can.state == CAN_STATE_ERROR_PASSIVE) {
+ 				cf->data[1] = (txerr > rxerr) ?
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index e0a6a2e62d23b..7667cbb5adfd6 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -2263,6 +2263,14 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
+ 			  __func__, p_index, ring->c_index,
+ 			  ring->read_ptr, dma_length_status);
+ 
++		if (unlikely(len > RX_BUF_LENGTH)) {
++			netif_err(priv, rx_status, dev, "oversized packet\n");
++			dev->stats.rx_length_errors++;
++			dev->stats.rx_errors++;
++			dev_kfree_skb_any(skb);
++			goto next;
++		}
++
+ 		if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) {
+ 			netif_err(priv, rx_status, dev,
+ 				  "dropping fragmented packet!\n");
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index f9e91304d2327..4b875838a6467 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -165,15 +165,6 @@ void bcmgenet_phy_power_set(struct net_device *dev, bool enable)
+ 
+ static void bcmgenet_moca_phy_setup(struct bcmgenet_priv *priv)
+ {
+-	u32 reg;
+-
+-	if (!GENET_IS_V5(priv)) {
+-		/* Speed settings are set in bcmgenet_mii_setup() */
+-		reg = bcmgenet_sys_readl(priv, SYS_PORT_CTRL);
+-		reg |= LED_ACT_SOURCE_MAC;
+-		bcmgenet_sys_writel(priv, reg, SYS_PORT_CTRL);
+-	}
+-
+ 	if (priv->hw_params->flags & GENET_HAS_MOCA_LINK_DET)
+ 		fixed_phy_set_link_update(priv->dev->phydev,
+ 					  bcmgenet_fixed_phy_link_update);
+@@ -206,6 +197,8 @@ int bcmgenet_mii_config(struct net_device *dev, bool init)
+ 
+ 		if (!phy_name) {
+ 			phy_name = "MoCA";
++			if (!GENET_IS_V5(priv))
++				port_ctrl |= LED_ACT_SOURCE_MAC;
+ 			bcmgenet_moca_phy_setup(priv);
+ 		}
+ 		break;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index c1465096239b6..4f0d63fa5709b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -5200,15 +5200,12 @@ int ice_vsi_cfg(struct ice_vsi *vsi)
+ {
+ 	int err;
+ 
+-	if (vsi->netdev) {
++	if (vsi->netdev && vsi->type == ICE_VSI_PF) {
+ 		ice_set_rx_mode(vsi->netdev);
+ 
+-		if (vsi->type != ICE_VSI_LB) {
+-			err = ice_vsi_vlan_setup(vsi);
+-
+-			if (err)
+-				return err;
+-		}
++		err = ice_vsi_vlan_setup(vsi);
++		if (err)
++			return err;
+ 	}
+ 	ice_vsi_cfg_dcb_rings(vsi);
+ 
+@@ -5267,7 +5264,7 @@ static int ice_up_complete(struct ice_vsi *vsi)
+ 
+ 	if (vsi->port_info &&
+ 	    (vsi->port_info->phy.link_info.link_info & ICE_AQ_LINK_UP) &&
+-	    vsi->netdev) {
++	    vsi->netdev && vsi->type == ICE_VSI_PF) {
+ 		ice_print_link_msg(vsi, true);
+ 		netif_tx_start_all_queues(vsi->netdev);
+ 		netif_carrier_on(vsi->netdev);
+@@ -5277,7 +5274,9 @@ static int ice_up_complete(struct ice_vsi *vsi)
+ 	 * set the baseline so counters are ready when interface is up
+ 	 */
+ 	ice_update_eth_stats(vsi);
+-	ice_service_task_schedule(pf);
++
++	if (vsi->type == ICE_VSI_PF)
++		ice_service_task_schedule(pf);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index 40d7bfca37499..0a011a41c039e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -603,7 +603,7 @@ static int mlx5_tracer_handle_string_trace(struct mlx5_fw_tracer *tracer,
+ 	} else {
+ 		cur_string = mlx5_tracer_message_get(tracer, tracer_event);
+ 		if (!cur_string) {
+-			pr_debug("%s Got string event for unknown string tdsm: %d\n",
++			pr_debug("%s Got string event for unknown string tmsn: %d\n",
+ 				 __func__, tracer_event->string_event.tmsn);
+ 			return -1;
+ 		}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.c
+index 23361a9ae4fa0..6dc83e871cd76 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.c
+@@ -105,6 +105,7 @@ int mlx5_geneve_tlv_option_add(struct mlx5_geneve *geneve, struct geneve_opt *op
+ 		geneve->opt_type = opt->type;
+ 		geneve->obj_id = res;
+ 		geneve->refcount++;
++		res = 0;
+ 	}
+ 
+ unlock:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+index a44a2bad5bbb5..1ea71f06fdb1c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+@@ -216,7 +216,8 @@ static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u32 function)
+ 
+ 	n = find_first_bit(&fp->bitmask, 8 * sizeof(fp->bitmask));
+ 	if (n >= MLX5_NUM_4K_IN_PAGE) {
+-		mlx5_core_warn(dev, "alloc 4k bug\n");
++		mlx5_core_warn(dev, "alloc 4k bug: fw page = 0x%llx, n = %u, bitmask: %lu, max num of 4K pages: %d\n",
++			       fp->addr, n, fp->bitmask,  MLX5_NUM_4K_IN_PAGE);
+ 		return -ENOENT;
+ 	}
+ 	clear_bit(n, &fp->bitmask);
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 059d68d48f1e9..4074310abcff4 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -426,9 +426,7 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common,
+ 	writel(common->rx_flow_id_base,
+ 	       host_p->port_base + AM65_CPSW_PORT0_REG_FLOW_ID_OFFSET);
+ 	/* en tx crc offload */
+-	if (features & NETIF_F_HW_CSUM)
+-		writel(AM65_CPSW_P0_REG_CTL_RX_CHECKSUM_EN,
+-		       host_p->port_base + AM65_CPSW_P0_REG_CTL);
++	writel(AM65_CPSW_P0_REG_CTL_RX_CHECKSUM_EN, host_p->port_base + AM65_CPSW_P0_REG_CTL);
+ 
+ 	am65_cpsw_nuss_set_p0_ptype(common);
+ 
+@@ -1369,31 +1367,6 @@ static void am65_cpsw_nuss_ndo_get_stats(struct net_device *dev,
+ 	stats->tx_dropped	= dev->stats.tx_dropped;
+ }
+ 
+-static int am65_cpsw_nuss_ndo_slave_set_features(struct net_device *ndev,
+-						 netdev_features_t features)
+-{
+-	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
+-	netdev_features_t changes = features ^ ndev->features;
+-	struct am65_cpsw_host *host_p;
+-
+-	host_p = am65_common_get_host(common);
+-
+-	if (changes & NETIF_F_HW_CSUM) {
+-		bool enable = !!(features & NETIF_F_HW_CSUM);
+-
+-		dev_info(common->dev, "Turn %s tx-checksum-ip-generic\n",
+-			 enable ? "ON" : "OFF");
+-		if (enable)
+-			writel(AM65_CPSW_P0_REG_CTL_RX_CHECKSUM_EN,
+-			       host_p->port_base + AM65_CPSW_P0_REG_CTL);
+-		else
+-			writel(0,
+-			       host_p->port_base + AM65_CPSW_P0_REG_CTL);
+-	}
+-
+-	return 0;
+-}
+-
+ static const struct net_device_ops am65_cpsw_nuss_netdev_ops_2g = {
+ 	.ndo_open		= am65_cpsw_nuss_ndo_slave_open,
+ 	.ndo_stop		= am65_cpsw_nuss_ndo_slave_stop,
+@@ -1406,7 +1379,6 @@ static const struct net_device_ops am65_cpsw_nuss_netdev_ops_2g = {
+ 	.ndo_vlan_rx_add_vid	= am65_cpsw_nuss_ndo_slave_add_vid,
+ 	.ndo_vlan_rx_kill_vid	= am65_cpsw_nuss_ndo_slave_kill_vid,
+ 	.ndo_do_ioctl		= am65_cpsw_nuss_ndo_slave_ioctl,
+-	.ndo_set_features	= am65_cpsw_nuss_ndo_slave_set_features,
+ 	.ndo_setup_tc           = am65_cpsw_qos_ndo_setup_tc,
+ };
+ 
+@@ -1515,9 +1487,8 @@ static int am65_cpsw_nuss_init_tx_chns(struct am65_cpsw_common *common)
+ 						    tx_chn->tx_chn_name,
+ 						    &tx_cfg);
+ 		if (IS_ERR(tx_chn->tx_chn)) {
+-			ret = PTR_ERR(tx_chn->tx_chn);
+-			dev_err(dev, "Failed to request tx dma channel %d\n",
+-				ret);
++			ret = dev_err_probe(dev, PTR_ERR(tx_chn->tx_chn),
++					    "Failed to request tx dma channel\n");
+ 			goto err;
+ 		}
+ 
+@@ -1588,8 +1559,8 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
+ 
+ 	rx_chn->rx_chn = k3_udma_glue_request_rx_chn(dev, "rx", &rx_cfg);
+ 	if (IS_ERR(rx_chn->rx_chn)) {
+-		ret = PTR_ERR(rx_chn->rx_chn);
+-		dev_err(dev, "Failed to request rx dma channel %d\n", ret);
++		ret = dev_err_probe(dev, PTR_ERR(rx_chn->rx_chn),
++				    "Failed to request rx dma channel\n");
+ 		goto err;
+ 	}
+ 
+@@ -1753,13 +1724,14 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
+ 		if (ret < 0) {
+ 			dev_err(dev, "%pOF error reading port_id %d\n",
+ 				port_np, ret);
+-			return ret;
++			goto of_node_put;
+ 		}
+ 
+ 		if (!port_id || port_id > common->port_num) {
+ 			dev_err(dev, "%pOF has invalid port_id %u %s\n",
+ 				port_np, port_id, port_np->name);
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto of_node_put;
+ 		}
+ 
+ 		port = am65_common_get_port(common, port_id);
+@@ -1775,8 +1747,10 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
+ 				(AM65_CPSW_NU_FRAM_PORT_OFFSET * (port_id - 1));
+ 
+ 		port->slave.mac_sl = cpsw_sl_get("am65", dev, port->port_base);
+-		if (IS_ERR(port->slave.mac_sl))
+-			return PTR_ERR(port->slave.mac_sl);
++		if (IS_ERR(port->slave.mac_sl)) {
++			ret = PTR_ERR(port->slave.mac_sl);
++			goto of_node_put;
++		}
+ 
+ 		port->disabled = !of_device_is_available(port_np);
+ 		if (port->disabled)
+@@ -1787,7 +1761,7 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
+ 			ret = PTR_ERR(port->slave.ifphy);
+ 			dev_err(dev, "%pOF error retrieving port phy: %d\n",
+ 				port_np, ret);
+-			return ret;
++			goto of_node_put;
+ 		}
+ 
+ 		port->slave.mac_only =
+@@ -1797,10 +1771,10 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
+ 		if (of_phy_is_fixed_link(port_np)) {
+ 			ret = of_phy_register_fixed_link(port_np);
+ 			if (ret) {
+-				if (ret != -EPROBE_DEFER)
+-					dev_err(dev, "%pOF failed to register fixed-link phy: %d\n",
+-						port_np, ret);
+-				return ret;
++				ret = dev_err_probe(dev, ret,
++						     "failed to register fixed-link phy %pOF\n",
++						     port_np);
++				goto of_node_put;
+ 			}
+ 			port->slave.phy_node = of_node_get(port_np);
+ 		} else {
+@@ -1811,14 +1785,15 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
+ 		if (!port->slave.phy_node) {
+ 			dev_err(dev,
+ 				"slave[%d] no phy found\n", port_id);
+-			return -ENODEV;
++			ret = -ENODEV;
++			goto of_node_put;
+ 		}
+ 
+ 		ret = of_get_phy_mode(port_np, &port->slave.phy_if);
+ 		if (ret) {
+ 			dev_err(dev, "%pOF read phy-mode err %d\n",
+ 				port_np, ret);
+-			return ret;
++			goto of_node_put;
+ 		}
+ 
+ 		mac_addr = of_get_mac_address(port_np);
+@@ -1835,6 +1810,11 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
+ 	of_node_put(node);
+ 
+ 	return 0;
++
++of_node_put:
++	of_node_put(port_np);
++	of_node_put(node);
++	return ret;
+ }
+ 
+ static void am65_cpsw_pcpu_stats_free(void *data)
+@@ -2090,13 +2070,8 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	clk = devm_clk_get(dev, "fck");
+-	if (IS_ERR(clk)) {
+-		ret = PTR_ERR(clk);
+-
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "error getting fck clock %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(clk))
++		return dev_err_probe(dev, PTR_ERR(clk), "getting fck clock\n");
+ 	common->bus_freq = clk_get_rate(clk);
+ 
+ 	pm_runtime_enable(dev);
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index 8f7bb15206e9f..d9018d9fe3106 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -523,7 +523,7 @@ static int tap_open(struct inode *inode, struct file *file)
+ 	q->sock.state = SS_CONNECTED;
+ 	q->sock.file = file;
+ 	q->sock.ops = &tap_socket_ops;
+-	sock_init_data(&q->sock, &q->sk);
++	sock_init_data_uid(&q->sock, &q->sk, inode->i_uid);
+ 	q->sk.sk_write_space = tap_sock_write_space;
+ 	q->sk.sk_destruct = tap_sock_destruct;
+ 	q->flags = IFF_VNET_HDR | IFF_NO_PI | IFF_TAP;
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 67ce7b779af61..f1d46aea8a2ba 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -3457,7 +3457,7 @@ static int tun_chr_open(struct inode *inode, struct file * file)
+ 	tfile->socket.file = file;
+ 	tfile->socket.ops = &tun_socket_ops;
+ 
+-	sock_init_data(&tfile->socket, &tfile->sk);
++	sock_init_data_uid(&tfile->socket, &tfile->sk, inode->i_uid);
+ 
+ 	tfile->sk.sk_write_space = tun_sock_write_space;
+ 	tfile->sk.sk_sndbuf = INT_MAX;
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index d2f2898d17b49..a66e275af1eb4 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -712,7 +712,6 @@ struct ath11k_base {
+ 	enum ath11k_dfs_region dfs_region;
+ #ifdef CONFIG_ATH11K_DEBUGFS
+ 	struct dentry *debugfs_soc;
+-	struct dentry *debugfs_ath11k;
+ #endif
+ 	struct ath11k_soc_dp_stats soc_stats;
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/debugfs.c b/drivers/net/wireless/ath/ath11k/debugfs.c
+index 1b914e67d314d..196314ab4ff0b 100644
+--- a/drivers/net/wireless/ath/ath11k/debugfs.c
++++ b/drivers/net/wireless/ath/ath11k/debugfs.c
+@@ -836,10 +836,6 @@ int ath11k_debugfs_pdev_create(struct ath11k_base *ab)
+ 	if (test_bit(ATH11K_FLAG_REGISTERED, &ab->dev_flags))
+ 		return 0;
+ 
+-	ab->debugfs_soc = debugfs_create_dir(ab->hw_params.name, ab->debugfs_ath11k);
+-	if (IS_ERR(ab->debugfs_soc))
+-		return PTR_ERR(ab->debugfs_soc);
+-
+ 	debugfs_create_file("simulate_fw_crash", 0600, ab->debugfs_soc, ab,
+ 			    &fops_simulate_fw_crash);
+ 
+@@ -857,15 +853,51 @@ void ath11k_debugfs_pdev_destroy(struct ath11k_base *ab)
+ 
+ int ath11k_debugfs_soc_create(struct ath11k_base *ab)
+ {
+-	ab->debugfs_ath11k = debugfs_create_dir("ath11k", NULL);
++	struct dentry *root;
++	bool dput_needed;
++	char name[64];
++	int ret;
++
++	root = debugfs_lookup("ath11k", NULL);
++	if (!root) {
++		root = debugfs_create_dir("ath11k", NULL);
++		if (IS_ERR_OR_NULL(root))
++			return PTR_ERR(root);
++
++		dput_needed = false;
++	} else {
++		/* a dentry from lookup() needs dput() after we don't use it */
++		dput_needed = true;
++	}
++
++	scnprintf(name, sizeof(name), "%s-%s", ath11k_bus_str(ab->hif.bus),
++		  dev_name(ab->dev));
++
++	ab->debugfs_soc = debugfs_create_dir(name, root);
++	if (IS_ERR_OR_NULL(ab->debugfs_soc)) {
++		ret = PTR_ERR(ab->debugfs_soc);
++		goto out;
++	}
++
++	ret = 0;
+ 
+-	return PTR_ERR_OR_ZERO(ab->debugfs_ath11k);
++out:
++	if (dput_needed)
++		dput(root);
++
++	return ret;
+ }
+ 
+ void ath11k_debugfs_soc_destroy(struct ath11k_base *ab)
+ {
+-	debugfs_remove_recursive(ab->debugfs_ath11k);
+-	ab->debugfs_ath11k = NULL;
++	debugfs_remove_recursive(ab->debugfs_soc);
++	ab->debugfs_soc = NULL;
++
++	/* We are not removing ath11k directory on purpose, even if it
++	 * would be empty. This simplifies the directory handling and it's
++	 * a minor cosmetic issue to leave an empty ath11k directory to
++	 * debugfs.
++	 */
+ }
+ 
+ void ath11k_debugfs_fw_stats_init(struct ath11k *ar)
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 2e77dca6b1ad6..578fdc446bc03 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -3022,6 +3022,7 @@ int ath11k_peer_rx_frag_setup(struct ath11k *ar, const u8 *peer_mac, int vdev_id
+ 	if (!peer) {
+ 		ath11k_warn(ab, "failed to find the peer to set up fragment info\n");
+ 		spin_unlock_bh(&ab->base_lock);
++		crypto_free_shash(tfm);
+ 		return -ENOENT;
+ 	}
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index f938ac1a4abd4..f521dfa2f1945 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -244,11 +244,11 @@ static inline void ath9k_skb_queue_complete(struct hif_device_usb *hif_dev,
+ 		ath9k_htc_txcompletion_cb(hif_dev->htc_handle,
+ 					  skb, txok);
+ 		if (txok) {
+-			TX_STAT_INC(skb_success);
+-			TX_STAT_ADD(skb_success_bytes, ln);
++			TX_STAT_INC(hif_dev, skb_success);
++			TX_STAT_ADD(hif_dev, skb_success_bytes, ln);
+ 		}
+ 		else
+-			TX_STAT_INC(skb_failed);
++			TX_STAT_INC(hif_dev, skb_failed);
+ 	}
+ }
+ 
+@@ -302,7 +302,7 @@ static void hif_usb_tx_cb(struct urb *urb)
+ 	hif_dev->tx.tx_buf_cnt++;
+ 	if (!(hif_dev->tx.flags & HIF_USB_TX_STOP))
+ 		__hif_usb_tx(hif_dev); /* Check for pending SKBs */
+-	TX_STAT_INC(buf_completed);
++	TX_STAT_INC(hif_dev, buf_completed);
+ 	spin_unlock(&hif_dev->tx.tx_lock);
+ }
+ 
+@@ -353,7 +353,7 @@ static int __hif_usb_tx(struct hif_device_usb *hif_dev)
+ 			tx_buf->len += tx_buf->offset;
+ 
+ 		__skb_queue_tail(&tx_buf->skb_queue, nskb);
+-		TX_STAT_INC(skb_queued);
++		TX_STAT_INC(hif_dev, skb_queued);
+ 	}
+ 
+ 	usb_fill_bulk_urb(tx_buf->urb, hif_dev->udev,
+@@ -368,11 +368,10 @@ static int __hif_usb_tx(struct hif_device_usb *hif_dev)
+ 		__skb_queue_head_init(&tx_buf->skb_queue);
+ 		list_move_tail(&tx_buf->list, &hif_dev->tx.tx_buf);
+ 		hif_dev->tx.tx_buf_cnt++;
++	} else {
++		TX_STAT_INC(hif_dev, buf_queued);
+ 	}
+ 
+-	if (!ret)
+-		TX_STAT_INC(buf_queued);
+-
+ 	return ret;
+ }
+ 
+@@ -515,7 +514,7 @@ static void hif_usb_sta_drain(void *hif_handle, u8 idx)
+ 			ath9k_htc_txcompletion_cb(hif_dev->htc_handle,
+ 						  skb, false);
+ 			hif_dev->tx.tx_skb_cnt--;
+-			TX_STAT_INC(skb_failed);
++			TX_STAT_INC(hif_dev, skb_failed);
+ 		}
+ 	}
+ 
+@@ -562,11 +561,11 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ 			memcpy(ptr, skb->data, rx_remain_len);
+ 
+ 			rx_pkt_len += rx_remain_len;
+-			hif_dev->rx_remain_len = 0;
+ 			skb_put(remain_skb, rx_pkt_len);
+ 
+ 			skb_pool[pool_index++] = remain_skb;
+-
++			hif_dev->remain_skb = NULL;
++			hif_dev->rx_remain_len = 0;
+ 		} else {
+ 			index = rx_remain_len;
+ 		}
+@@ -585,16 +584,21 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ 		pkt_len = get_unaligned_le16(ptr + index);
+ 		pkt_tag = get_unaligned_le16(ptr + index + 2);
+ 
++		/* It is supposed that if we have an invalid pkt_tag or
++		 * pkt_len then the whole input SKB is considered invalid
++		 * and dropped; the associated packets already in skb_pool
++		 * are dropped, too.
++		 */
+ 		if (pkt_tag != ATH_USB_RX_STREAM_MODE_TAG) {
+-			RX_STAT_INC(skb_dropped);
+-			return;
++			RX_STAT_INC(hif_dev, skb_dropped);
++			goto invalid_pkt;
+ 		}
+ 
+ 		if (pkt_len > 2 * MAX_RX_BUF_SIZE) {
+ 			dev_err(&hif_dev->udev->dev,
+ 				"ath9k_htc: invalid pkt_len (%x)\n", pkt_len);
+-			RX_STAT_INC(skb_dropped);
+-			return;
++			RX_STAT_INC(hif_dev, skb_dropped);
++			goto invalid_pkt;
+ 		}
+ 
+ 		pad_len = 4 - (pkt_len & 0x3);
+@@ -606,11 +610,6 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ 
+ 		if (index > MAX_RX_BUF_SIZE) {
+ 			spin_lock(&hif_dev->rx_lock);
+-			hif_dev->rx_remain_len = index - MAX_RX_BUF_SIZE;
+-			hif_dev->rx_transfer_len =
+-				MAX_RX_BUF_SIZE - chk_idx - 4;
+-			hif_dev->rx_pad_len = pad_len;
+-
+ 			nskb = __dev_alloc_skb(pkt_len + 32, GFP_ATOMIC);
+ 			if (!nskb) {
+ 				dev_err(&hif_dev->udev->dev,
+@@ -618,8 +617,14 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ 				spin_unlock(&hif_dev->rx_lock);
+ 				goto err;
+ 			}
++
++			hif_dev->rx_remain_len = index - MAX_RX_BUF_SIZE;
++			hif_dev->rx_transfer_len =
++				MAX_RX_BUF_SIZE - chk_idx - 4;
++			hif_dev->rx_pad_len = pad_len;
++
+ 			skb_reserve(nskb, 32);
+-			RX_STAT_INC(skb_allocated);
++			RX_STAT_INC(hif_dev, skb_allocated);
+ 
+ 			memcpy(nskb->data, &(skb->data[chk_idx+4]),
+ 			       hif_dev->rx_transfer_len);
+@@ -640,7 +645,7 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ 				goto err;
+ 			}
+ 			skb_reserve(nskb, 32);
+-			RX_STAT_INC(skb_allocated);
++			RX_STAT_INC(hif_dev, skb_allocated);
+ 
+ 			memcpy(nskb->data, &(skb->data[chk_idx+4]), pkt_len);
+ 			skb_put(nskb, pkt_len);
+@@ -650,11 +655,18 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ 
+ err:
+ 	for (i = 0; i < pool_index; i++) {
+-		RX_STAT_ADD(skb_completed_bytes, skb_pool[i]->len);
++		RX_STAT_ADD(hif_dev, skb_completed_bytes, skb_pool[i]->len);
+ 		ath9k_htc_rx_msg(hif_dev->htc_handle, skb_pool[i],
+ 				 skb_pool[i]->len, USB_WLAN_RX_PIPE);
+-		RX_STAT_INC(skb_completed);
++		RX_STAT_INC(hif_dev, skb_completed);
+ 	}
++	return;
++invalid_pkt:
++	for (i = 0; i < pool_index; i++) {
++		dev_kfree_skb_any(skb_pool[i]);
++		RX_STAT_INC(hif_dev, skb_dropped);
++	}
++	return;
+ }
+ 
+ static void ath9k_hif_usb_rx_cb(struct urb *urb)
+@@ -1412,8 +1424,6 @@ static void ath9k_hif_usb_disconnect(struct usb_interface *interface)
+ 
+ 	if (hif_dev->flags & HIF_USB_READY) {
+ 		ath9k_htc_hw_deinit(hif_dev->htc_handle, unplugged);
+-		ath9k_hif_usb_dev_deinit(hif_dev);
+-		ath9k_destroy_wmi(hif_dev->htc_handle->drv_priv);
+ 		ath9k_htc_hw_free(hif_dev->htc_handle);
+ 	}
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/htc.h b/drivers/net/wireless/ath/ath9k/htc.h
+index e3d546ef71ddc..237f4ec2cffd7 100644
+--- a/drivers/net/wireless/ath/ath9k/htc.h
++++ b/drivers/net/wireless/ath/ath9k/htc.h
+@@ -327,14 +327,18 @@ static inline struct ath9k_htc_tx_ctl *HTC_SKB_CB(struct sk_buff *skb)
+ }
+ 
+ #ifdef CONFIG_ATH9K_HTC_DEBUGFS
+-#define __STAT_SAFE(expr) (hif_dev->htc_handle->drv_priv ? (expr) : 0)
+-#define TX_STAT_INC(c) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.tx_stats.c++)
+-#define TX_STAT_ADD(c, a) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.tx_stats.c += a)
+-#define RX_STAT_INC(c) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.skbrx_stats.c++)
+-#define RX_STAT_ADD(c, a) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.skbrx_stats.c += a)
+-#define CAB_STAT_INC   priv->debug.tx_stats.cab_queued++
+-
+-#define TX_QSTAT_INC(q) (priv->debug.tx_stats.queue_stats[q]++)
++#define __STAT_SAFE(hif_dev, expr)	do { ((hif_dev)->htc_handle->drv_priv ? (expr) : 0); } while (0)
++#define CAB_STAT_INC(priv)		do { ((priv)->debug.tx_stats.cab_queued++); } while (0)
++#define TX_QSTAT_INC(priv, q)		do { ((priv)->debug.tx_stats.queue_stats[q]++); } while (0)
++
++#define TX_STAT_INC(hif_dev, c) \
++		__STAT_SAFE((hif_dev), (hif_dev)->htc_handle->drv_priv->debug.tx_stats.c++)
++#define TX_STAT_ADD(hif_dev, c, a) \
++		__STAT_SAFE((hif_dev), (hif_dev)->htc_handle->drv_priv->debug.tx_stats.c += a)
++#define RX_STAT_INC(hif_dev, c) \
++		__STAT_SAFE((hif_dev), (hif_dev)->htc_handle->drv_priv->debug.skbrx_stats.c++)
++#define RX_STAT_ADD(hif_dev, c, a) \
++		__STAT_SAFE((hif_dev), (hif_dev)->htc_handle->drv_priv->debug.skbrx_stats.c += a)
+ 
+ void ath9k_htc_err_stat_rx(struct ath9k_htc_priv *priv,
+ 			   struct ath_rx_status *rs);
+@@ -374,13 +378,13 @@ void ath9k_htc_get_et_stats(struct ieee80211_hw *hw,
+ 			    struct ethtool_stats *stats, u64 *data);
+ #else
+ 
+-#define TX_STAT_INC(c) do { } while (0)
+-#define TX_STAT_ADD(c, a) do { } while (0)
+-#define RX_STAT_INC(c) do { } while (0)
+-#define RX_STAT_ADD(c, a) do { } while (0)
+-#define CAB_STAT_INC   do { } while (0)
++#define TX_STAT_INC(hif_dev, c)		do { } while (0)
++#define TX_STAT_ADD(hif_dev, c, a)	do { } while (0)
++#define RX_STAT_INC(hif_dev, c)		do { } while (0)
++#define RX_STAT_ADD(hif_dev, c, a)	do { } while (0)
+ 
+-#define TX_QSTAT_INC(c) do { } while (0)
++#define CAB_STAT_INC(priv)
++#define TX_QSTAT_INC(priv, c)
+ 
+ static inline void ath9k_htc_err_stat_rx(struct ath9k_htc_priv *priv,
+ 					 struct ath_rx_status *rs)
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+index 07ac88fb1c577..96a3185a96d75 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+@@ -988,6 +988,8 @@ void ath9k_htc_disconnect_device(struct htc_target *htc_handle, bool hotunplug)
+ 
+ 		ath9k_deinit_device(htc_handle->drv_priv);
+ 		ath9k_stop_wmi(htc_handle->drv_priv);
++		ath9k_hif_usb_dealloc_urbs((struct hif_device_usb *)htc_handle->hif_dev);
++		ath9k_destroy_wmi(htc_handle->drv_priv);
+ 		ieee80211_free_hw(htc_handle->drv_priv->hw);
+ 	}
+ }
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+index 43a743ec9d9e0..622fc7f170402 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+@@ -106,20 +106,20 @@ static inline enum htc_endpoint_id get_htc_epid(struct ath9k_htc_priv *priv,
+ 
+ 	switch (qnum) {
+ 	case 0:
+-		TX_QSTAT_INC(IEEE80211_AC_VO);
++		TX_QSTAT_INC(priv, IEEE80211_AC_VO);
+ 		epid = priv->data_vo_ep;
+ 		break;
+ 	case 1:
+-		TX_QSTAT_INC(IEEE80211_AC_VI);
++		TX_QSTAT_INC(priv, IEEE80211_AC_VI);
+ 		epid = priv->data_vi_ep;
+ 		break;
+ 	case 2:
+-		TX_QSTAT_INC(IEEE80211_AC_BE);
++		TX_QSTAT_INC(priv, IEEE80211_AC_BE);
+ 		epid = priv->data_be_ep;
+ 		break;
+ 	case 3:
+ 	default:
+-		TX_QSTAT_INC(IEEE80211_AC_BK);
++		TX_QSTAT_INC(priv, IEEE80211_AC_BK);
+ 		epid = priv->data_bk_ep;
+ 		break;
+ 	}
+@@ -323,7 +323,7 @@ static void ath9k_htc_tx_data(struct ath9k_htc_priv *priv,
+ 	memcpy(tx_fhdr, (u8 *) &tx_hdr, sizeof(tx_hdr));
+ 
+ 	if (is_cab) {
+-		CAB_STAT_INC;
++		CAB_STAT_INC(priv);
+ 		tx_ctl->epid = priv->cab_ep;
+ 		return;
+ 	}
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index ca05b07a45e67..fe62ff668f757 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -391,7 +391,7 @@ static void ath9k_htc_fw_panic_report(struct htc_target *htc_handle,
+  * HTC Messages are handled directly here and the obtained SKB
+  * is freed.
+  *
+- * Service messages (Data, WMI) passed to the corresponding
++ * Service messages (Data, WMI) are passed to the corresponding
+  * endpoint RX handlers, which have to free the SKB.
+  */
+ void ath9k_htc_rx_msg(struct htc_target *htc_handle,
+@@ -478,6 +478,8 @@ invalid:
+ 		if (endpoint->ep_callbacks.rx)
+ 			endpoint->ep_callbacks.rx(endpoint->ep_callbacks.priv,
+ 						  skb, epid);
++		else
++			goto invalid;
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index f315c54bd3ac0..19345b8f7bfd5 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -341,6 +341,7 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+ 	if (!time_left) {
+ 		ath_dbg(common, WMI, "Timeout waiting for WMI command: %s\n",
+ 			wmi_cmd_to_name(cmd_id));
++		wmi->last_seq_id = 0;
+ 		mutex_unlock(&wmi->op_mutex);
+ 		return -ETIMEDOUT;
+ 	}
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+index e3758bd86acf0..f29de630908d7 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+@@ -264,6 +264,7 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
+ 			 err);
+ 		goto done;
+ 	}
++	buf[sizeof(buf) - 1] = '\0';
+ 	ptr = (char *)buf;
+ 	strsep(&ptr, "\n");
+ 
+@@ -280,15 +281,17 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
+ 	if (err) {
+ 		brcmf_dbg(TRACE, "retrieving clmver failed, %d\n", err);
+ 	} else {
++		buf[sizeof(buf) - 1] = '\0';
+ 		clmver = (char *)buf;
+-		/* store CLM version for adding it to revinfo debugfs file */
+-		memcpy(ifp->drvr->clmver, clmver, sizeof(ifp->drvr->clmver));
+ 
+ 		/* Replace all newline/linefeed characters with space
+ 		 * character
+ 		 */
+ 		strreplace(clmver, '\n', ' ');
+ 
++		/* store CLM version for adding it to revinfo debugfs file */
++		memcpy(ifp->drvr->clmver, clmver, sizeof(ifp->drvr->clmver));
++
+ 		brcmf_dbg(INFO, "CLM version = %s\n", clmver);
+ 	}
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index c8e1d505f7b5d..3d544eedc1a39 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -333,6 +333,7 @@ static netdev_tx_t brcmf_netdev_start_xmit(struct sk_buff *skb,
+ 			bphy_err(drvr, "%s: failed to expand headroom\n",
+ 				 brcmf_ifname(ifp));
+ 			atomic_inc(&drvr->bus_if->stats.pktcow_failed);
++			dev_kfree_skb(skb);
+ 			goto done;
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
+index 7c8e08ee8f0ff..bd3b234b78038 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
+@@ -346,8 +346,11 @@ brcmf_msgbuf_alloc_pktid(struct device *dev,
+ 		count++;
+ 	} while (count < pktids->array_size);
+ 
+-	if (count == pktids->array_size)
++	if (count == pktids->array_size) {
++		dma_unmap_single(dev, *physaddr, skb->len - data_offset,
++				 pktids->direction);
+ 		return -ENOMEM;
++	}
+ 
+ 	array[*idx].data_offset = data_offset;
+ 	array[*idx].physaddr = *physaddr;
+diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2200.c b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
+index ada6ce32c1f19..bb728fb24b8a4 100644
+--- a/drivers/net/wireless/intel/ipw2x00/ipw2200.c
++++ b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
+@@ -3444,7 +3444,7 @@ static void ipw_rx_queue_reset(struct ipw_priv *priv,
+ 			dma_unmap_single(&priv->pci_dev->dev,
+ 					 rxq->pool[i].dma_addr,
+ 					 IPW_RX_BUF_SIZE, DMA_FROM_DEVICE);
+-			dev_kfree_skb(rxq->pool[i].skb);
++			dev_kfree_skb_irq(rxq->pool[i].skb);
+ 			rxq->pool[i].skb = NULL;
+ 		}
+ 		list_add_tail(&rxq->pool[i].list, &rxq->rx_used);
+@@ -11400,9 +11400,14 @@ static int ipw_wdev_init(struct net_device *dev)
+ 	set_wiphy_dev(wdev->wiphy, &priv->pci_dev->dev);
+ 
+ 	/* With that information in place, we can now register the wiphy... */
+-	if (wiphy_register(wdev->wiphy))
+-		rc = -EIO;
++	rc = wiphy_register(wdev->wiphy);
++	if (rc)
++		goto out;
++
++	return 0;
+ out:
++	kfree(priv->ieee->a_band.channels);
++	kfree(priv->ieee->bg_band.channels);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlegacy/3945-mac.c b/drivers/net/wireless/intel/iwlegacy/3945-mac.c
+index 4ca8212d4fa4b..ef0ac42a55a2a 100644
+--- a/drivers/net/wireless/intel/iwlegacy/3945-mac.c
++++ b/drivers/net/wireless/intel/iwlegacy/3945-mac.c
+@@ -3380,10 +3380,12 @@ static DEVICE_ATTR(dump_errors, 0200, NULL, il3945_dump_error_log);
+  *
+  *****************************************************************************/
+ 
+-static void
++static int
+ il3945_setup_deferred_work(struct il_priv *il)
+ {
+ 	il->workqueue = create_singlethread_workqueue(DRV_NAME);
++	if (!il->workqueue)
++		return -ENOMEM;
+ 
+ 	init_waitqueue_head(&il->wait_command_queue);
+ 
+@@ -3400,6 +3402,8 @@ il3945_setup_deferred_work(struct il_priv *il)
+ 	timer_setup(&il->watchdog, il_bg_watchdog, 0);
+ 
+ 	tasklet_setup(&il->irq_tasklet, il3945_irq_tasklet);
++
++	return 0;
+ }
+ 
+ static void
+@@ -3721,7 +3725,10 @@ il3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	}
+ 
+ 	il_set_rxon_channel(il, &il->bands[NL80211_BAND_2GHZ].channels[5]);
+-	il3945_setup_deferred_work(il);
++	err = il3945_setup_deferred_work(il);
++	if (err)
++		goto out_remove_sysfs;
++
+ 	il3945_setup_handlers(il);
+ 	il_power_initialize(il);
+ 
+@@ -3733,7 +3740,7 @@ il3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	err = il3945_setup_mac(il);
+ 	if (err)
+-		goto out_remove_sysfs;
++		goto out_destroy_workqueue;
+ 
+ 	il_dbgfs_register(il, DRV_NAME);
+ 
+@@ -3742,9 +3749,10 @@ il3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	return 0;
+ 
+-out_remove_sysfs:
++out_destroy_workqueue:
+ 	destroy_workqueue(il->workqueue);
+ 	il->workqueue = NULL;
++out_remove_sysfs:
+ 	sysfs_remove_group(&pdev->dev.kobj, &il3945_attribute_group);
+ out_release_irq:
+ 	free_irq(il->pci_dev->irq, il);
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+index 28675a4ad8612..12cf22d0e9949 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+@@ -6212,10 +6212,12 @@ out:
+ 	mutex_unlock(&il->mutex);
+ }
+ 
+-static void
++static int
+ il4965_setup_deferred_work(struct il_priv *il)
+ {
+ 	il->workqueue = create_singlethread_workqueue(DRV_NAME);
++	if (!il->workqueue)
++		return -ENOMEM;
+ 
+ 	init_waitqueue_head(&il->wait_command_queue);
+ 
+@@ -6234,6 +6236,8 @@ il4965_setup_deferred_work(struct il_priv *il)
+ 	timer_setup(&il->watchdog, il_bg_watchdog, 0);
+ 
+ 	tasklet_setup(&il->irq_tasklet, il4965_irq_tasklet);
++
++	return 0;
+ }
+ 
+ static void
+@@ -6623,7 +6627,10 @@ il4965_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		goto out_disable_msi;
+ 	}
+ 
+-	il4965_setup_deferred_work(il);
++	err = il4965_setup_deferred_work(il);
++	if (err)
++		goto out_free_irq;
++
+ 	il4965_setup_handlers(il);
+ 
+ 	/*********************************************
+@@ -6661,6 +6668,7 @@ il4965_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ out_destroy_workqueue:
+ 	destroy_workqueue(il->workqueue);
+ 	il->workqueue = NULL;
++out_free_irq:
+ 	free_irq(il->pci_dev->irq, il);
+ out_disable_msi:
+ 	pci_disable_msi(il->pci_dev);
+diff --git a/drivers/net/wireless/intel/iwlegacy/common.c b/drivers/net/wireless/intel/iwlegacy/common.c
+index 0651a6a416d1d..4b55779de00a9 100644
+--- a/drivers/net/wireless/intel/iwlegacy/common.c
++++ b/drivers/net/wireless/intel/iwlegacy/common.c
+@@ -5176,7 +5176,7 @@ il_mac_reset_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ 	memset(&il->current_ht_config, 0, sizeof(struct il_ht_config));
+ 
+ 	/* new association get rid of ibss beacon skb */
+-	dev_kfree_skb(il->beacon_skb);
++	dev_consume_skb_irq(il->beacon_skb);
+ 	il->beacon_skb = NULL;
+ 	il->timestamp = 0;
+ 
+@@ -5295,7 +5295,7 @@ il_beacon_update(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ 	}
+ 
+ 	spin_lock_irqsave(&il->lock, flags);
+-	dev_kfree_skb(il->beacon_skb);
++	dev_consume_skb_irq(il->beacon_skb);
+ 	il->beacon_skb = skb;
+ 
+ 	timestamp = ((struct ieee80211_mgmt *)skb->data)->u.beacon.timestamp;
+diff --git a/drivers/net/wireless/intersil/orinoco/hw.c b/drivers/net/wireless/intersil/orinoco/hw.c
+index 61af5a28f269f..af49aa421e47f 100644
+--- a/drivers/net/wireless/intersil/orinoco/hw.c
++++ b/drivers/net/wireless/intersil/orinoco/hw.c
+@@ -931,6 +931,8 @@ int __orinoco_hw_setup_enc(struct orinoco_private *priv)
+ 			err = hermes_write_wordrec(hw, USER_BAP,
+ 					HERMES_RID_CNFAUTHENTICATION_AGERE,
+ 					auth_flag);
++			if (err)
++				return err;
+ 		}
+ 		err = hermes_write_wordrec(hw, USER_BAP,
+ 					   HERMES_RID_CNFWEPENABLED_AGERE,
+diff --git a/drivers/net/wireless/marvell/libertas/cmdresp.c b/drivers/net/wireless/marvell/libertas/cmdresp.c
+index cb515c5584c1f..74cb7551f4275 100644
+--- a/drivers/net/wireless/marvell/libertas/cmdresp.c
++++ b/drivers/net/wireless/marvell/libertas/cmdresp.c
+@@ -48,7 +48,7 @@ void lbs_mac_event_disconnected(struct lbs_private *priv,
+ 
+ 	/* Free Tx and Rx packets */
+ 	spin_lock_irqsave(&priv->driver_lock, flags);
+-	kfree_skb(priv->currenttxskb);
++	dev_kfree_skb_irq(priv->currenttxskb);
+ 	priv->currenttxskb = NULL;
+ 	priv->tx_pending_len = 0;
+ 	spin_unlock_irqrestore(&priv->driver_lock, flags);
+diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
+index 32fdc4150b605..2240b4db8c036 100644
+--- a/drivers/net/wireless/marvell/libertas/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas/if_usb.c
+@@ -637,7 +637,7 @@ static inline void process_cmdrequest(int recvlength, uint8_t *recvbuff,
+ 	priv->resp_len[i] = (recvlength - MESSAGE_HEADER_LEN);
+ 	memcpy(priv->resp_buf[i], recvbuff + MESSAGE_HEADER_LEN,
+ 		priv->resp_len[i]);
+-	kfree_skb(skb);
++	dev_kfree_skb_irq(skb);
+ 	lbs_notify_command_response(priv, i);
+ 
+ 	spin_unlock_irqrestore(&priv->driver_lock, flags);
+diff --git a/drivers/net/wireless/marvell/libertas/main.c b/drivers/net/wireless/marvell/libertas/main.c
+index ee4cf3437e28a..1c56cc2742b07 100644
+--- a/drivers/net/wireless/marvell/libertas/main.c
++++ b/drivers/net/wireless/marvell/libertas/main.c
+@@ -217,7 +217,7 @@ int lbs_stop_iface(struct lbs_private *priv)
+ 
+ 	spin_lock_irqsave(&priv->driver_lock, flags);
+ 	priv->iface_running = false;
+-	kfree_skb(priv->currenttxskb);
++	dev_kfree_skb_irq(priv->currenttxskb);
+ 	priv->currenttxskb = NULL;
+ 	priv->tx_pending_len = 0;
+ 	spin_unlock_irqrestore(&priv->driver_lock, flags);
+@@ -870,6 +870,7 @@ static int lbs_init_adapter(struct lbs_private *priv)
+ 	ret = kfifo_alloc(&priv->event_fifo, sizeof(u32) * 16, GFP_KERNEL);
+ 	if (ret) {
+ 		pr_err("Out of memory allocating event FIFO buffer\n");
++		lbs_free_cmd_buffer(priv);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+index ecce8b56f8a28..2c45ef6e04077 100644
+--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+@@ -613,7 +613,7 @@ static inline void process_cmdrequest(int recvlength, uint8_t *recvbuff,
+ 	spin_lock_irqsave(&priv->driver_lock, flags);
+ 	memcpy(priv->cmd_resp_buff, recvbuff + MESSAGE_HEADER_LEN,
+ 	       recvlength - MESSAGE_HEADER_LEN);
+-	kfree_skb(skb);
++	dev_kfree_skb_irq(skb);
+ 	lbtf_cmd_response_rx(priv);
+ 	spin_unlock_irqrestore(&priv->driver_lock, flags);
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/11n.c b/drivers/net/wireless/marvell/mwifiex/11n.c
+index cf08a4af84d6d..b99381ebb82a1 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11n.c
++++ b/drivers/net/wireless/marvell/mwifiex/11n.c
+@@ -890,7 +890,7 @@ mwifiex_send_delba_txbastream_tbl(struct mwifiex_private *priv, u8 tid)
+  */
+ void mwifiex_update_ampdu_txwinsize(struct mwifiex_adapter *adapter)
+ {
+-	u8 i;
++	u8 i, j;
+ 	u32 tx_win_size;
+ 	struct mwifiex_private *priv;
+ 
+@@ -921,8 +921,8 @@ void mwifiex_update_ampdu_txwinsize(struct mwifiex_adapter *adapter)
+ 		if (tx_win_size != priv->add_ba_param.tx_win_size) {
+ 			if (!priv->media_connected)
+ 				continue;
+-			for (i = 0; i < MAX_NUM_TID; i++)
+-				mwifiex_send_delba_txbastream_tbl(priv, i);
++			for (j = 0; j < MAX_NUM_TID; j++)
++				mwifiex_send_delba_txbastream_tbl(priv, j);
+ 		}
+ 	}
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index f01b455783b23..7991705e9d134 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -476,6 +476,7 @@ mt76_dma_rx_cleanup(struct mt76_dev *dev, struct mt76_queue *q)
+ 	bool more;
+ 
+ 	spin_lock_bh(&q->lock);
++
+ 	do {
+ 		buf = mt76_dma_dequeue(dev, q, true, NULL, NULL, &more);
+ 		if (!buf)
+@@ -483,6 +484,12 @@ mt76_dma_rx_cleanup(struct mt76_dev *dev, struct mt76_queue *q)
+ 
+ 		skb_free_frag(buf);
+ 	} while (1);
++
++	if (q->rx_head) {
++		dev_kfree_skb(q->rx_head);
++		q->rx_head = NULL;
++	}
++
+ 	spin_unlock_bh(&q->lock);
+ 
+ 	if (!q->rx_page.va)
+@@ -505,12 +512,6 @@ mt76_dma_rx_reset(struct mt76_dev *dev, enum mt76_rxq_id qid)
+ 	mt76_dma_rx_cleanup(dev, q);
+ 	mt76_dma_sync_idx(dev, q);
+ 	mt76_dma_rx_fill(dev, q);
+-
+-	if (!q->rx_head)
+-		return;
+-
+-	dev_kfree_skb(q->rx_head);
+-	q->rx_head = NULL;
+ }
+ 
+ static void
+diff --git a/drivers/net/wireless/mediatek/mt7601u/dma.c b/drivers/net/wireless/mediatek/mt7601u/dma.c
+index 11071519fce81..8ba291abecff8 100644
+--- a/drivers/net/wireless/mediatek/mt7601u/dma.c
++++ b/drivers/net/wireless/mediatek/mt7601u/dma.c
+@@ -118,7 +118,8 @@ static u16 mt7601u_rx_next_seg_len(u8 *data, u32 data_len)
+ 	if (data_len < min_seg_len ||
+ 	    WARN_ON_ONCE(!dma_len) ||
+ 	    WARN_ON_ONCE(dma_len + MT_DMA_HDRS > data_len) ||
+-	    WARN_ON_ONCE(dma_len & 0x3))
++	    WARN_ON_ONCE(dma_len & 0x3) ||
++	    WARN_ON_ONCE(dma_len < min_seg_len))
+ 		return 0;
+ 
+ 	return MT_DMA_HDRS + dma_len;
+diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c
+index 20615c7ec1683..c508f429984ab 100644
+--- a/drivers/net/wireless/microchip/wilc1000/netdev.c
++++ b/drivers/net/wireless/microchip/wilc1000/netdev.c
+@@ -684,6 +684,7 @@ netdev_tx_t wilc_mac_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 
+ 	if (skb->dev != ndev) {
+ 		netdev_err(ndev, "Packet not destined to this device\n");
++		dev_kfree_skb(skb);
+ 		return NETDEV_TX_OK;
+ 	}
+ 
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+index 199e7e031d7d9..3b3cb3a6e2e89 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+@@ -1671,6 +1671,11 @@ static void rtl8192e_enable_rf(struct rtl8xxxu_priv *priv)
+ 	val8 = rtl8xxxu_read8(priv, REG_PAD_CTRL1);
+ 	val8 &= ~BIT(0);
+ 	rtl8xxxu_write8(priv, REG_PAD_CTRL1, val8);
++
++	/*
++	 * Fix transmission failure of rtl8192e.
++	 */
++	rtl8xxxu_write8(priv, REG_TXPAUSE, 0x00);
+ }
+ 
+ struct rtl8xxxu_fileops rtl8192eu_fops = {
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 2cb86c28d11fe..deef1c09de319 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -5184,7 +5184,7 @@ static void rtl8xxxu_queue_rx_urb(struct rtl8xxxu_priv *priv,
+ 		pending = priv->rx_urb_pending_count;
+ 	} else {
+ 		skb = (struct sk_buff *)rx_urb->urb.context;
+-		dev_kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 		usb_free_urb(&rx_urb->urb);
+ 	}
+ 
+@@ -5491,9 +5491,6 @@ static void rtl8xxxu_c2hcmd_callback(struct work_struct *work)
+ 	btcoex = &priv->bt_coex;
+ 	rarpt = &priv->ra_report;
+ 
+-	if (priv->rf_paths > 1)
+-		goto out;
+-
+ 	while (!skb_queue_empty(&priv->c2hcmd_queue)) {
+ 		spin_lock_irqsave(&priv->c2hcmd_lock, flags);
+ 		skb = __skb_dequeue(&priv->c2hcmd_queue);
+@@ -5547,10 +5544,9 @@ static void rtl8xxxu_c2hcmd_callback(struct work_struct *work)
+ 		default:
+ 			break;
+ 		}
+-	}
+ 
+-out:
+-	dev_kfree_skb(skb);
++		dev_kfree_skb(skb);
++	}
+ }
+ 
+ static void rtl8723bu_handle_c2h(struct rtl8xxxu_priv *priv,
+@@ -5912,7 +5908,6 @@ static int rtl8xxxu_config(struct ieee80211_hw *hw, u32 changed)
+ {
+ 	struct rtl8xxxu_priv *priv = hw->priv;
+ 	struct device *dev = &priv->udev->dev;
+-	u16 val16;
+ 	int ret = 0, channel;
+ 	bool ht40;
+ 
+@@ -5922,14 +5917,6 @@ static int rtl8xxxu_config(struct ieee80211_hw *hw, u32 changed)
+ 			 __func__, hw->conf.chandef.chan->hw_value,
+ 			 changed, hw->conf.chandef.width);
+ 
+-	if (changed & IEEE80211_CONF_CHANGE_RETRY_LIMITS) {
+-		val16 = ((hw->conf.long_frame_max_tx_count <<
+-			  RETRY_LIMIT_LONG_SHIFT) & RETRY_LIMIT_LONG_MASK) |
+-			((hw->conf.short_frame_max_tx_count <<
+-			  RETRY_LIMIT_SHORT_SHIFT) & RETRY_LIMIT_SHORT_MASK);
+-		rtl8xxxu_write16(priv, REG_RETRY_LIMIT, val16);
+-	}
+-
+ 	if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
+ 		switch (hw->conf.chandef.width) {
+ 		case NL80211_CHAN_WIDTH_20_NOHT:
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
+index 63f9ea21962f2..335a3c9cdbab6 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
+@@ -68,8 +68,10 @@ static void _rtl88ee_return_beacon_queue_skb(struct ieee80211_hw *hw)
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 	struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[BEACON_QUEUE];
++	struct sk_buff_head free_list;
+ 	unsigned long flags;
+ 
++	skb_queue_head_init(&free_list);
+ 	spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
+ 	while (skb_queue_len(&ring->queue)) {
+ 		struct rtl_tx_desc *entry = &ring->desc[ring->idx];
+@@ -79,10 +81,12 @@ static void _rtl88ee_return_beacon_queue_skb(struct ieee80211_hw *hw)
+ 				 rtlpriv->cfg->ops->get_desc(hw, (u8 *)entry,
+ 						true, HW_DESC_TXBUFF_ADDR),
+ 				 skb->len, DMA_TO_DEVICE);
+-		kfree_skb(skb);
++		__skb_queue_tail(&free_list, skb);
+ 		ring->idx = (ring->idx + 1) % ring->entries;
+ 	}
+ 	spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
++
++	__skb_queue_purge(&free_list);
+ }
+ 
+ static void _rtl88ee_disable_bcn_sub_func(struct ieee80211_hw *hw)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
+index 0748aedce2adb..ccbb082d5e928 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
+@@ -30,8 +30,10 @@ static void _rtl8723be_return_beacon_queue_skb(struct ieee80211_hw *hw)
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 	struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[BEACON_QUEUE];
++	struct sk_buff_head free_list;
+ 	unsigned long flags;
+ 
++	skb_queue_head_init(&free_list);
+ 	spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
+ 	while (skb_queue_len(&ring->queue)) {
+ 		struct rtl_tx_desc *entry = &ring->desc[ring->idx];
+@@ -41,10 +43,12 @@ static void _rtl8723be_return_beacon_queue_skb(struct ieee80211_hw *hw)
+ 				 rtlpriv->cfg->ops->get_desc(hw, (u8 *)entry,
+ 						true, HW_DESC_TXBUFF_ADDR),
+ 				 skb->len, DMA_TO_DEVICE);
+-		kfree_skb(skb);
++		__skb_queue_tail(&free_list, skb);
+ 		ring->idx = (ring->idx + 1) % ring->entries;
+ 	}
+ 	spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
++
++	__skb_queue_purge(&free_list);
+ }
+ 
+ static void _rtl8723be_set_bcn_ctrl_reg(struct ieee80211_hw *hw,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
+index 33ffc24d36759..c4ee65cc2d5e6 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
+@@ -26,8 +26,10 @@ static void _rtl8821ae_return_beacon_queue_skb(struct ieee80211_hw *hw)
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 	struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[BEACON_QUEUE];
++	struct sk_buff_head free_list;
+ 	unsigned long flags;
+ 
++	skb_queue_head_init(&free_list);
+ 	spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
+ 	while (skb_queue_len(&ring->queue)) {
+ 		struct rtl_tx_desc *entry = &ring->desc[ring->idx];
+@@ -37,10 +39,12 @@ static void _rtl8821ae_return_beacon_queue_skb(struct ieee80211_hw *hw)
+ 				 rtlpriv->cfg->ops->get_desc(hw, (u8 *)entry,
+ 						true, HW_DESC_TXBUFF_ADDR),
+ 				 skb->len, DMA_TO_DEVICE);
+-		kfree_skb(skb);
++		__skb_queue_tail(&free_list, skb);
+ 		ring->idx = (ring->idx + 1) % ring->entries;
+ 	}
+ 	spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
++
++	__skb_queue_purge(&free_list);
+ }
+ 
+ static void _rtl8821ae_set_bcn_ctrl_reg(struct ieee80211_hw *hw,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+index f41a7643b9c42..c0c06ab6d3e76 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+@@ -1581,7 +1581,7 @@ static void _rtl8821ae_phy_txpower_by_rate_configuration(struct ieee80211_hw *hw
+ }
+ 
+ /* string is in decimal */
+-static bool _rtl8812ae_get_integer_from_string(char *str, u8 *pint)
++static bool _rtl8812ae_get_integer_from_string(const char *str, u8 *pint)
+ {
+ 	u16 i = 0;
+ 	*pint = 0;
+@@ -1599,18 +1599,6 @@ static bool _rtl8812ae_get_integer_from_string(char *str, u8 *pint)
+ 	return true;
+ }
+ 
+-static bool _rtl8812ae_eq_n_byte(u8 *str1, u8 *str2, u32 num)
+-{
+-	if (num == 0)
+-		return false;
+-	while (num > 0) {
+-		num--;
+-		if (str1[num] != str2[num])
+-			return false;
+-	}
+-	return true;
+-}
+-
+ static s8 _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(struct ieee80211_hw *hw,
+ 					      u8 band, u8 channel)
+ {
+@@ -1637,10 +1625,11 @@ static s8 _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(struct ieee80211_hw *hw,
+ 	return channel_index;
+ }
+ 
+-static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw, u8 *pregulation,
+-				      u8 *pband, u8 *pbandwidth,
+-				      u8 *prate_section, u8 *prf_path,
+-				      u8 *pchannel, u8 *ppower_limit)
++static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw,
++				      const char *pregulation,
++				      const char *pband, const char *pbandwidth,
++				      const char *prate_section, const char *prf_path,
++				      const char *pchannel, const char *ppower_limit)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+ 	struct rtl_phy *rtlphy = &rtlpriv->phy;
+@@ -1648,8 +1637,8 @@ static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw, u8 *pregul
+ 	u8 channel_index;
+ 	s8 power_limit = 0, prev_power_limit, ret;
+ 
+-	if (!_rtl8812ae_get_integer_from_string((char *)pchannel, &channel) ||
+-	    !_rtl8812ae_get_integer_from_string((char *)ppower_limit,
++	if (!_rtl8812ae_get_integer_from_string(pchannel, &channel) ||
++	    !_rtl8812ae_get_integer_from_string(ppower_limit,
+ 						&power_limit)) {
+ 		rtl_dbg(rtlpriv, COMP_INIT, DBG_TRACE,
+ 			"Illegal index of pwr_lmt table [chnl %d][val %d]\n",
+@@ -1659,42 +1648,42 @@ static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw, u8 *pregul
+ 	power_limit = power_limit > MAX_POWER_INDEX ?
+ 		      MAX_POWER_INDEX : power_limit;
+ 
+-	if (_rtl8812ae_eq_n_byte(pregulation, (u8 *)("FCC"), 3))
++	if (strcmp(pregulation, "FCC") == 0)
+ 		regulation = 0;
+-	else if (_rtl8812ae_eq_n_byte(pregulation, (u8 *)("MKK"), 3))
++	else if (strcmp(pregulation, "MKK") == 0)
+ 		regulation = 1;
+-	else if (_rtl8812ae_eq_n_byte(pregulation, (u8 *)("ETSI"), 4))
++	else if (strcmp(pregulation, "ETSI") == 0)
+ 		regulation = 2;
+-	else if (_rtl8812ae_eq_n_byte(pregulation, (u8 *)("WW13"), 4))
++	else if (strcmp(pregulation, "WW13") == 0)
+ 		regulation = 3;
+ 
+-	if (_rtl8812ae_eq_n_byte(prate_section, (u8 *)("CCK"), 3))
++	if (strcmp(prate_section, "CCK") == 0)
+ 		rate_section = 0;
+-	else if (_rtl8812ae_eq_n_byte(prate_section, (u8 *)("OFDM"), 4))
++	else if (strcmp(prate_section, "OFDM") == 0)
+ 		rate_section = 1;
+-	else if (_rtl8812ae_eq_n_byte(prate_section, (u8 *)("HT"), 2) &&
+-		 _rtl8812ae_eq_n_byte(prf_path, (u8 *)("1T"), 2))
++	else if (strcmp(prate_section, "HT") == 0 &&
++		 strcmp(prf_path, "1T") == 0)
+ 		rate_section = 2;
+-	else if (_rtl8812ae_eq_n_byte(prate_section, (u8 *)("HT"), 2) &&
+-		 _rtl8812ae_eq_n_byte(prf_path, (u8 *)("2T"), 2))
++	else if (strcmp(prate_section, "HT") == 0 &&
++		 strcmp(prf_path, "2T") == 0)
+ 		rate_section = 3;
+-	else if (_rtl8812ae_eq_n_byte(prate_section, (u8 *)("VHT"), 3) &&
+-		 _rtl8812ae_eq_n_byte(prf_path, (u8 *)("1T"), 2))
++	else if (strcmp(prate_section, "VHT") == 0 &&
++		 strcmp(prf_path, "1T") == 0)
+ 		rate_section = 4;
+-	else if (_rtl8812ae_eq_n_byte(prate_section, (u8 *)("VHT"), 3) &&
+-		 _rtl8812ae_eq_n_byte(prf_path, (u8 *)("2T"), 2))
++	else if (strcmp(prate_section, "VHT") == 0 &&
++		 strcmp(prf_path, "2T") == 0)
+ 		rate_section = 5;
+ 
+-	if (_rtl8812ae_eq_n_byte(pbandwidth, (u8 *)("20M"), 3))
++	if (strcmp(pbandwidth, "20M") == 0)
+ 		bandwidth = 0;
+-	else if (_rtl8812ae_eq_n_byte(pbandwidth, (u8 *)("40M"), 3))
++	else if (strcmp(pbandwidth, "40M") == 0)
+ 		bandwidth = 1;
+-	else if (_rtl8812ae_eq_n_byte(pbandwidth, (u8 *)("80M"), 3))
++	else if (strcmp(pbandwidth, "80M") == 0)
+ 		bandwidth = 2;
+-	else if (_rtl8812ae_eq_n_byte(pbandwidth, (u8 *)("160M"), 4))
++	else if (strcmp(pbandwidth, "160M") == 0)
+ 		bandwidth = 3;
+ 
+-	if (_rtl8812ae_eq_n_byte(pband, (u8 *)("2.4G"), 4)) {
++	if (strcmp(pband, "2.4G") == 0) {
+ 		ret = _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(hw,
+ 							       BAND_ON_2_4G,
+ 							       channel);
+@@ -1718,7 +1707,7 @@ static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw, u8 *pregul
+ 			regulation, bandwidth, rate_section, channel_index,
+ 			rtlphy->txpwr_limit_2_4g[regulation][bandwidth]
+ 				[rate_section][channel_index][RF90_PATH_A]);
+-	} else if (_rtl8812ae_eq_n_byte(pband, (u8 *)("5G"), 2)) {
++	} else if (strcmp(pband, "5G") == 0) {
+ 		ret = _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(hw,
+ 							       BAND_ON_5G,
+ 							       channel);
+@@ -1749,10 +1738,10 @@ static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw, u8 *pregul
+ }
+ 
+ static void _rtl8812ae_phy_config_bb_txpwr_lmt(struct ieee80211_hw *hw,
+-					  u8 *regulation, u8 *band,
+-					  u8 *bandwidth, u8 *rate_section,
+-					  u8 *rf_path, u8 *channel,
+-					  u8 *power_limit)
++					  const char *regulation, const char *band,
++					  const char *bandwidth, const char *rate_section,
++					  const char *rf_path, const char *channel,
++					  const char *power_limit)
+ {
+ 	_rtl8812ae_phy_set_txpower_limit(hw, regulation, band, bandwidth,
+ 					 rate_section, rf_path, channel,
+@@ -1765,7 +1754,7 @@ static void _rtl8821ae_phy_read_and_config_txpwr_lmt(struct ieee80211_hw *hw)
+ 	struct rtl_hal *rtlhal = rtl_hal(rtlpriv);
+ 	u32 i = 0;
+ 	u32 array_len;
+-	u8 **array;
++	const char **array;
+ 
+ 	if (rtlhal->hw_type == HARDWARE_TYPE_RTL8812AE) {
+ 		array_len = RTL8812AE_TXPWR_LMT_ARRAY_LEN;
+@@ -1778,13 +1767,13 @@ static void _rtl8821ae_phy_read_and_config_txpwr_lmt(struct ieee80211_hw *hw)
+ 	rtl_dbg(rtlpriv, COMP_INIT, DBG_TRACE, "\n");
+ 
+ 	for (i = 0; i < array_len; i += 7) {
+-		u8 *regulation = array[i];
+-		u8 *band = array[i+1];
+-		u8 *bandwidth = array[i+2];
+-		u8 *rate = array[i+3];
+-		u8 *rf_path = array[i+4];
+-		u8 *chnl = array[i+5];
+-		u8 *val = array[i+6];
++		const char *regulation = array[i];
++		const char *band = array[i+1];
++		const char *bandwidth = array[i+2];
++		const char *rate = array[i+3];
++		const char *rf_path = array[i+4];
++		const char *chnl = array[i+5];
++		const char *val = array[i+6];
+ 
+ 		_rtl8812ae_phy_config_bb_txpwr_lmt(hw, regulation, band,
+ 						   bandwidth, rate, rf_path,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c
+index ed72a2aeb6c8e..fcaaf664cbec5 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c
+@@ -2894,7 +2894,7 @@ u32 RTL8821AE_AGC_TAB_1TARRAYLEN = ARRAY_SIZE(RTL8821AE_AGC_TAB_ARRAY);
+ *                           TXPWR_LMT.TXT
+ ******************************************************************************/
+ 
+-u8 *RTL8812AE_TXPWR_LMT[] = {
++const char *RTL8812AE_TXPWR_LMT[] = {
+ 	"FCC", "2.4G", "20M", "CCK", "1T", "01", "36",
+ 	"ETSI", "2.4G", "20M", "CCK", "1T", "01", "32",
+ 	"MKK", "2.4G", "20M", "CCK", "1T", "01", "32",
+@@ -3463,7 +3463,7 @@ u8 *RTL8812AE_TXPWR_LMT[] = {
+ 
+ u32 RTL8812AE_TXPWR_LMT_ARRAY_LEN = ARRAY_SIZE(RTL8812AE_TXPWR_LMT);
+ 
+-u8 *RTL8821AE_TXPWR_LMT[] = {
++const char *RTL8821AE_TXPWR_LMT[] = {
+ 	"FCC", "2.4G", "20M", "CCK", "1T", "01", "32",
+ 	"ETSI", "2.4G", "20M", "CCK", "1T", "01", "32",
+ 	"MKK", "2.4G", "20M", "CCK", "1T", "01", "32",
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.h b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.h
+index 540159c25078a..76c62b7c0fb24 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.h
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.h
+@@ -28,7 +28,7 @@ extern u32 RTL8821AE_AGC_TAB_ARRAY[];
+ extern u32 RTL8812AE_AGC_TAB_1TARRAYLEN;
+ extern u32 RTL8812AE_AGC_TAB_ARRAY[];
+ extern u32 RTL8812AE_TXPWR_LMT_ARRAY_LEN;
+-extern u8 *RTL8812AE_TXPWR_LMT[];
++extern const char *RTL8812AE_TXPWR_LMT[];
+ extern u32 RTL8821AE_TXPWR_LMT_ARRAY_LEN;
+-extern u8 *RTL8821AE_TXPWR_LMT[];
++extern const char *RTL8821AE_TXPWR_LMT[];
+ #endif
+diff --git a/drivers/net/wireless/rsi/rsi_91x_coex.c b/drivers/net/wireless/rsi/rsi_91x_coex.c
+index a0c5d02ae88cf..7395359b43b77 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_coex.c
++++ b/drivers/net/wireless/rsi/rsi_91x_coex.c
+@@ -160,6 +160,7 @@ int rsi_coex_attach(struct rsi_common *common)
+ 			       rsi_coex_scheduler_thread,
+ 			       "Coex-Tx-Thread")) {
+ 		rsi_dbg(ERR_ZONE, "%s: Unable to init tx thrd\n", __func__);
++		kfree(coex_cb);
+ 		return -EINVAL;
+ 	}
+ 	return 0;
+diff --git a/drivers/net/wireless/wl3501_cs.c b/drivers/net/wireless/wl3501_cs.c
+index ff1701adbb179..ccf6344ed6fd2 100644
+--- a/drivers/net/wireless/wl3501_cs.c
++++ b/drivers/net/wireless/wl3501_cs.c
+@@ -1330,7 +1330,7 @@ static netdev_tx_t wl3501_hard_start_xmit(struct sk_buff *skb,
+ 	} else {
+ 		++dev->stats.tx_packets;
+ 		dev->stats.tx_bytes += skb->len;
+-		kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 
+ 		if (this->tx_buffer_cnt < 2)
+ 			netif_stop_queue(dev);
+diff --git a/drivers/nfc/st-nci/se.c b/drivers/nfc/st-nci/se.c
+index 37d397aae9b9d..a14afceaf5e92 100644
+--- a/drivers/nfc/st-nci/se.c
++++ b/drivers/nfc/st-nci/se.c
+@@ -664,6 +664,12 @@ int st_nci_se_io(struct nci_dev *ndev, u32 se_idx,
+ 					ST_NCI_EVT_TRANSMIT_DATA, apdu,
+ 					apdu_length);
+ 	default:
++		/* Need to free cb_context here as at the moment we can't
++		 * clearly indicate to the caller if the callback function
++		 * would be called (and free it) or not. In both cases a
++		 * negative value may be returned to the caller.
++		 */
++		kfree(cb_context);
+ 		return -ENODEV;
+ 	}
+ }
+diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c
+index d416365042462..6a1d3b2752fbf 100644
+--- a/drivers/nfc/st21nfca/se.c
++++ b/drivers/nfc/st21nfca/se.c
+@@ -236,6 +236,12 @@ int st21nfca_hci_se_io(struct nfc_hci_dev *hdev, u32 se_idx,
+ 					ST21NFCA_EVT_TRANSMIT_DATA,
+ 					apdu, apdu_length);
+ 	default:
++		/* Need to free cb_context here as at the moment we can't
++		 * clearly indicate to the caller if the callback function
++		 * would be called (and free it) or not. In both cases a
++		 * negative value may be returned to the caller.
++		 */
++		kfree(cb_context);
+ 		return -ENODEV;
+ 	}
+ }
+diff --git a/drivers/opp/debugfs.c b/drivers/opp/debugfs.c
+index 596c185b5dda4..60f4ff8e044d1 100644
+--- a/drivers/opp/debugfs.c
++++ b/drivers/opp/debugfs.c
+@@ -204,7 +204,7 @@ static void opp_migrate_dentry(struct opp_device *opp_dev,
+ 
+ 	dentry = debugfs_rename(rootdir, opp_dev->dentry, rootdir,
+ 				opp_table->dentry_name);
+-	if (!dentry) {
++	if (IS_ERR(dentry)) {
+ 		dev_err(dev, "%s: Failed to rename link from: %s to %s\n",
+ 			__func__, dev_name(opp_dev->dev), dev_name(dev));
+ 		return;
+diff --git a/drivers/pci/controller/pci-loongson.c b/drivers/pci/controller/pci-loongson.c
+index 48169b1e38171..e73e18a73833b 100644
+--- a/drivers/pci/controller/pci-loongson.c
++++ b/drivers/pci/controller/pci-loongson.c
+@@ -13,9 +13,14 @@
+ #include "../pci.h"
+ 
+ /* Device IDs */
+-#define DEV_PCIE_PORT_0	0x7a09
+-#define DEV_PCIE_PORT_1	0x7a19
+-#define DEV_PCIE_PORT_2	0x7a29
++#define DEV_LS2K_PCIE_PORT0	0x1a05
++#define DEV_LS7A_PCIE_PORT0	0x7a09
++#define DEV_LS7A_PCIE_PORT1	0x7a19
++#define DEV_LS7A_PCIE_PORT2	0x7a29
++#define DEV_LS7A_PCIE_PORT3	0x7a39
++#define DEV_LS7A_PCIE_PORT4	0x7a49
++#define DEV_LS7A_PCIE_PORT5	0x7a59
++#define DEV_LS7A_PCIE_PORT6	0x7a69
+ 
+ #define DEV_LS2K_APB	0x7a02
+ #define DEV_LS7A_CONF	0x7a10
+@@ -38,11 +43,11 @@ static void bridge_class_quirk(struct pci_dev *dev)
+ 	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+ }
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+-			DEV_PCIE_PORT_0, bridge_class_quirk);
++			DEV_LS7A_PCIE_PORT0, bridge_class_quirk);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+-			DEV_PCIE_PORT_1, bridge_class_quirk);
++			DEV_LS7A_PCIE_PORT1, bridge_class_quirk);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+-			DEV_PCIE_PORT_2, bridge_class_quirk);
++			DEV_LS7A_PCIE_PORT2, bridge_class_quirk);
+ 
+ static void system_bus_quirk(struct pci_dev *pdev)
+ {
+@@ -60,37 +65,33 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ 			DEV_LS7A_LPC, system_bus_quirk);
+ 
+-static void loongson_mrrs_quirk(struct pci_dev *dev)
++static void loongson_mrrs_quirk(struct pci_dev *pdev)
+ {
+-	struct pci_bus *bus = dev->bus;
+-	struct pci_dev *bridge;
+-	static const struct pci_device_id bridge_devids[] = {
+-		{ PCI_VDEVICE(LOONGSON, DEV_PCIE_PORT_0) },
+-		{ PCI_VDEVICE(LOONGSON, DEV_PCIE_PORT_1) },
+-		{ PCI_VDEVICE(LOONGSON, DEV_PCIE_PORT_2) },
+-		{ 0, },
+-	};
+-
+-	/* look for the matching bridge */
+-	while (!pci_is_root_bus(bus)) {
+-		bridge = bus->self;
+-		bus = bus->parent;
+-		/*
+-		 * Some Loongson PCIe ports have a h/w limitation of
+-		 * 256 bytes maximum read request size. They can't handle
+-		 * anything larger than this. So force this limit on
+-		 * any devices attached under these ports.
+-		 */
+-		if (pci_match_id(bridge_devids, bridge)) {
+-			if (pcie_get_readrq(dev) > 256) {
+-				pci_info(dev, "limiting MRRS to 256\n");
+-				pcie_set_readrq(dev, 256);
+-			}
+-			break;
+-		}
+-	}
++	/*
++	 * Some Loongson PCIe ports have h/w limitations of maximum read
++	 * request size. They can't handle anything larger than this. So
++	 * force this limit on any devices attached under these ports.
++	 */
++	struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus);
++
++	bridge->no_inc_mrrs = 1;
+ }
+-DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, loongson_mrrs_quirk);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
++			DEV_LS2K_PCIE_PORT0, loongson_mrrs_quirk);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
++			DEV_LS7A_PCIE_PORT0, loongson_mrrs_quirk);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
++			DEV_LS7A_PCIE_PORT1, loongson_mrrs_quirk);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
++			DEV_LS7A_PCIE_PORT2, loongson_mrrs_quirk);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
++			DEV_LS7A_PCIE_PORT3, loongson_mrrs_quirk);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
++			DEV_LS7A_PCIE_PORT4, loongson_mrrs_quirk);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
++			DEV_LS7A_PCIE_PORT5, loongson_mrrs_quirk);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
++			DEV_LS7A_PCIE_PORT6, loongson_mrrs_quirk);
+ 
+ static void __iomem *cfg1_map(struct loongson_pci *priv, int bus,
+ 				unsigned int devfn, int where)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 262577c81d307..744a2e05635b9 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4808,7 +4808,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+ 	if (pci_dev_is_disconnected(dev))
+ 		return;
+ 
+-	if (!pci_is_bridge(dev) || !dev->bridge_d3)
++	if (!pci_is_bridge(dev))
+ 		return;
+ 
+ 	down_read(&pci_bus_sem);
+@@ -5739,6 +5739,7 @@ int pcie_set_readrq(struct pci_dev *dev, int rq)
+ {
+ 	u16 v;
+ 	int ret;
++	struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
+ 
+ 	if (rq < 128 || rq > 4096 || !is_power_of_2(rq))
+ 		return -EINVAL;
+@@ -5757,6 +5758,15 @@ int pcie_set_readrq(struct pci_dev *dev, int rq)
+ 
+ 	v = (ffs(rq) - 8) << 12;
+ 
++	if (bridge->no_inc_mrrs) {
++		int max_mrrs = pcie_get_readrq(dev);
++
++		if (rq > max_mrrs) {
++			pci_info(dev, "can't set Max_Read_Request_Size to %d; max is %d\n", rq, max_mrrs);
++			return -EINVAL;
++		}
++	}
++
+ 	ret = pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL,
+ 						  PCI_EXP_DEVCTL_READRQ, v);
+ 
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 0039460c6ab02..9197d7362731e 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -351,53 +351,36 @@ struct pci_sriov {
+  * @dev - pci device to set new error_state
+  * @new - the state we want dev to be in
+  *
+- * Must be called with device_lock held.
++ * If the device is experiencing perm_failure, it has to remain in that state.
++ * Any other transition is allowed.
+  *
+  * Returns true if state has been changed to the requested state.
+  */
+ static inline bool pci_dev_set_io_state(struct pci_dev *dev,
+ 					pci_channel_state_t new)
+ {
+-	bool changed = false;
++	pci_channel_state_t old;
+ 
+-	device_lock_assert(&dev->dev);
+ 	switch (new) {
+ 	case pci_channel_io_perm_failure:
+-		switch (dev->error_state) {
+-		case pci_channel_io_frozen:
+-		case pci_channel_io_normal:
+-		case pci_channel_io_perm_failure:
+-			changed = true;
+-			break;
+-		}
+-		break;
++		xchg(&dev->error_state, pci_channel_io_perm_failure);
++		return true;
+ 	case pci_channel_io_frozen:
+-		switch (dev->error_state) {
+-		case pci_channel_io_frozen:
+-		case pci_channel_io_normal:
+-			changed = true;
+-			break;
+-		}
+-		break;
++		old = cmpxchg(&dev->error_state, pci_channel_io_normal,
++			      pci_channel_io_frozen);
++		return old != pci_channel_io_perm_failure;
+ 	case pci_channel_io_normal:
+-		switch (dev->error_state) {
+-		case pci_channel_io_frozen:
+-		case pci_channel_io_normal:
+-			changed = true;
+-			break;
+-		}
+-		break;
++		old = cmpxchg(&dev->error_state, pci_channel_io_frozen,
++			      pci_channel_io_normal);
++		return old != pci_channel_io_perm_failure;
++	default:
++		return false;
+ 	}
+-	if (changed)
+-		dev->error_state = new;
+-	return changed;
+ }
+ 
+ static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused)
+ {
+-	device_lock(&dev->dev);
+ 	pci_dev_set_io_state(dev, pci_channel_io_perm_failure);
+-	device_unlock(&dev->dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index fb2e52fd01b39..c1ebd5e12b06e 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4797,6 +4797,26 @@ static int pci_quirk_brcm_acs(struct pci_dev *dev, u16 acs_flags)
+ 		PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+ }
+ 
++/*
++ * Wangxun 10G/1G NICs have no ACS capability, and on multi-function
++ * devices, peer-to-peer transactions are not be used between the functions.
++ * So add an ACS quirk for below devices to isolate functions.
++ * SFxxx 1G NICs(em).
++ * RP1000/RP2000 10G NICs(sp).
++ */
++static int  pci_quirk_wangxun_nic_acs(struct pci_dev *dev, u16 acs_flags)
++{
++	switch (dev->device) {
++	case 0x0100 ... 0x010F:
++	case 0x1001:
++	case 0x2001:
++		return pci_acs_ctrl_enabled(acs_flags,
++			PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
++	}
++
++	return false;
++}
++
+ static const struct pci_dev_acs_enabled {
+ 	u16 vendor;
+ 	u16 device;
+@@ -4942,6 +4962,8 @@ static const struct pci_dev_acs_enabled {
+ 	{ PCI_VENDOR_ID_NXP, 0x8d9b, pci_quirk_nxp_rp_acs },
+ 	/* Zhaoxin Root/Downstream Ports */
+ 	{ PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs },
++	/* Wangxun nics */
++	{ PCI_VENDOR_ID_WANGXUN, PCI_ANY_ID, pci_quirk_wangxun_nic_acs },
+ 	{ 0 }
+ };
+ 
+@@ -5302,6 +5324,7 @@ static void quirk_no_flr(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1487, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x148c, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x149c, quirk_no_flr);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x7901, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_no_flr);
+ 
+diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
+index 2ce636937c6ea..16d291e10627b 100644
+--- a/drivers/pci/setup-bus.c
++++ b/drivers/pci/setup-bus.c
+@@ -1878,12 +1878,67 @@ static void adjust_bridge_window(struct pci_dev *bridge, struct resource *res,
+ 		add_size = size - new_size;
+ 		pci_dbg(bridge, "bridge window %pR shrunken by %pa\n", res,
+ 			&add_size);
++	} else {
++		return;
+ 	}
+ 
+ 	res->end = res->start + new_size - 1;
+ 	remove_from_list(add_list, res);
+ }
+ 
++static void remove_dev_resource(struct resource *avail, struct pci_dev *dev,
++				struct resource *res)
++{
++	resource_size_t size, align, tmp;
++
++	size = resource_size(res);
++	if (!size)
++		return;
++
++	align = pci_resource_alignment(dev, res);
++	align = align ? ALIGN(avail->start, align) - avail->start : 0;
++	tmp = align + size;
++	avail->start = min(avail->start + tmp, avail->end + 1);
++}
++
++static void remove_dev_resources(struct pci_dev *dev, struct resource *io,
++				 struct resource *mmio,
++				 struct resource *mmio_pref)
++{
++	int i;
++
++	for (i = 0; i < PCI_NUM_RESOURCES; i++) {
++		struct resource *res = &dev->resource[i];
++
++		if (resource_type(res) == IORESOURCE_IO) {
++			remove_dev_resource(io, dev, res);
++		} else if (resource_type(res) == IORESOURCE_MEM) {
++
++			/*
++			 * Make sure prefetchable memory is reduced from
++			 * the correct resource. Specifically we put 32-bit
++			 * prefetchable memory in non-prefetchable window
++			 * if there is an 64-bit pretchable window.
++			 *
++			 * See comments in __pci_bus_size_bridges() for
++			 * more information.
++			 */
++			if ((res->flags & IORESOURCE_PREFETCH) &&
++			    ((res->flags & IORESOURCE_MEM_64) ==
++			     (mmio_pref->flags & IORESOURCE_MEM_64)))
++				remove_dev_resource(mmio_pref, dev, res);
++			else
++				remove_dev_resource(mmio, dev, res);
++		}
++	}
++}
++
++/*
++ * io, mmio and mmio_pref contain the total amount of bridge window space
++ * available. This includes the minimal space needed to cover all the
++ * existing devices on the bus and the possible extra space that can be
++ * shared with the bridges.
++ */
+ static void pci_bus_distribute_available_resources(struct pci_bus *bus,
+ 					    struct list_head *add_list,
+ 					    struct resource io,
+@@ -1893,7 +1948,7 @@ static void pci_bus_distribute_available_resources(struct pci_bus *bus,
+ 	unsigned int normal_bridges = 0, hotplug_bridges = 0;
+ 	struct resource *io_res, *mmio_res, *mmio_pref_res;
+ 	struct pci_dev *dev, *bridge = bus->self;
+-	resource_size_t io_per_hp, mmio_per_hp, mmio_pref_per_hp, align;
++	resource_size_t io_per_b, mmio_per_b, mmio_pref_per_b, align;
+ 
+ 	io_res = &bridge->resource[PCI_BRIDGE_IO_WINDOW];
+ 	mmio_res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW];
+@@ -1937,94 +1992,88 @@ static void pci_bus_distribute_available_resources(struct pci_bus *bus,
+ 			normal_bridges++;
+ 	}
+ 
++	if (!(hotplug_bridges + normal_bridges))
++		return;
++
+ 	/*
+-	 * There is only one bridge on the bus so it gets all available
+-	 * resources which it can then distribute to the possible hotplug
+-	 * bridges below.
++	 * Calculate the amount of space we can forward from "bus" to any
++	 * downstream buses, i.e., the space left over after assigning the
++	 * BARs and windows on "bus".
+ 	 */
+-	if (hotplug_bridges + normal_bridges == 1) {
+-		dev = list_first_entry(&bus->devices, struct pci_dev, bus_list);
+-		if (dev->subordinate)
+-			pci_bus_distribute_available_resources(dev->subordinate,
+-				add_list, io, mmio, mmio_pref);
+-		return;
++	list_for_each_entry(dev, &bus->devices, bus_list) {
++		if (!dev->is_virtfn)
++			remove_dev_resources(dev, &io, &mmio, &mmio_pref);
+ 	}
+ 
+-	if (hotplug_bridges == 0)
+-		return;
+-
+ 	/*
+-	 * Calculate the total amount of extra resource space we can
+-	 * pass to bridges below this one.  This is basically the
+-	 * extra space reduced by the minimal required space for the
+-	 * non-hotplug bridges.
++	 * If there is at least one hotplug bridge on this bus it gets all
++	 * the extra resource space that was left after the reductions
++	 * above.
++	 *
++	 * If there are no hotplug bridges the extra resource space is
++	 * split between non-hotplug bridges. This is to allow possible
++	 * hotplug bridges below them to get the extra space as well.
+ 	 */
++	if (hotplug_bridges) {
++		io_per_b = div64_ul(resource_size(&io), hotplug_bridges);
++		mmio_per_b = div64_ul(resource_size(&mmio), hotplug_bridges);
++		mmio_pref_per_b = div64_ul(resource_size(&mmio_pref),
++					   hotplug_bridges);
++	} else {
++		io_per_b = div64_ul(resource_size(&io), normal_bridges);
++		mmio_per_b = div64_ul(resource_size(&mmio), normal_bridges);
++		mmio_pref_per_b = div64_ul(resource_size(&mmio_pref),
++					   normal_bridges);
++	}
++
+ 	for_each_pci_bridge(dev, bus) {
+-		resource_size_t used_size;
+ 		struct resource *res;
++		struct pci_bus *b;
+ 
+-		if (dev->is_hotplug_bridge)
++		b = dev->subordinate;
++		if (!b)
++			continue;
++		if (hotplug_bridges && !dev->is_hotplug_bridge)
+ 			continue;
+ 
++		res = &dev->resource[PCI_BRIDGE_IO_WINDOW];
++
+ 		/*
+-		 * Reduce the available resource space by what the
+-		 * bridge and devices below it occupy.
++		 * Make sure the split resource space is properly aligned
++		 * for bridge windows (align it down to avoid going above
++		 * what is available).
+ 		 */
+-		res = &dev->resource[PCI_BRIDGE_IO_WINDOW];
+ 		align = pci_resource_alignment(dev, res);
+-		align = align ? ALIGN(io.start, align) - io.start : 0;
+-		used_size = align + resource_size(res);
+-		if (!res->parent)
+-			io.start = min(io.start + used_size, io.end + 1);
++		io.end = align ? io.start + ALIGN_DOWN(io_per_b, align) - 1
++			       : io.start + io_per_b - 1;
++
++		/*
++		 * The x_per_b holds the extra resource space that can be
++		 * added for each bridge but there is the minimal already
++		 * reserved as well so adjust x.start down accordingly to
++		 * cover the whole space.
++		 */
++		io.start -= resource_size(res);
+ 
+ 		res = &dev->resource[PCI_BRIDGE_MEM_WINDOW];
+ 		align = pci_resource_alignment(dev, res);
+-		align = align ? ALIGN(mmio.start, align) - mmio.start : 0;
+-		used_size = align + resource_size(res);
+-		if (!res->parent)
+-			mmio.start = min(mmio.start + used_size, mmio.end + 1);
++		mmio.end = align ? mmio.start + ALIGN_DOWN(mmio_per_b, align) - 1
++				 : mmio.start + mmio_per_b - 1;
++		mmio.start -= resource_size(res);
+ 
+ 		res = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
+ 		align = pci_resource_alignment(dev, res);
+-		align = align ? ALIGN(mmio_pref.start, align) -
+-			mmio_pref.start : 0;
+-		used_size = align + resource_size(res);
+-		if (!res->parent)
+-			mmio_pref.start = min(mmio_pref.start + used_size,
+-				mmio_pref.end + 1);
+-	}
+-
+-	io_per_hp = div64_ul(resource_size(&io), hotplug_bridges);
+-	mmio_per_hp = div64_ul(resource_size(&mmio), hotplug_bridges);
+-	mmio_pref_per_hp = div64_ul(resource_size(&mmio_pref),
+-		hotplug_bridges);
+-
+-	/*
+-	 * Go over devices on this bus and distribute the remaining
+-	 * resource space between hotplug bridges.
+-	 */
+-	for_each_pci_bridge(dev, bus) {
+-		struct pci_bus *b;
+-
+-		b = dev->subordinate;
+-		if (!b || !dev->is_hotplug_bridge)
+-			continue;
+-
+-		/*
+-		 * Distribute available extra resources equally between
+-		 * hotplug-capable downstream ports taking alignment into
+-		 * account.
+-		 */
+-		io.end = io.start + io_per_hp - 1;
+-		mmio.end = mmio.start + mmio_per_hp - 1;
+-		mmio_pref.end = mmio_pref.start + mmio_pref_per_hp - 1;
++		mmio_pref.end = align ? mmio_pref.start +
++					ALIGN_DOWN(mmio_pref_per_b, align) - 1
++				      : mmio_pref.start + mmio_pref_per_b - 1;
++		mmio_pref.start -= resource_size(res);
+ 
+ 		pci_bus_distribute_available_resources(b, add_list, io, mmio,
+ 						       mmio_pref);
+ 
+-		io.start += io_per_hp;
+-		mmio.start += mmio_per_hp;
+-		mmio_pref.start += mmio_pref_per_hp;
++		io.start += io.end + 1;
++		mmio.start += mmio.end + 1;
++		mmio_pref.start += mmio_pref.end + 1;
+ 	}
+ }
+ 
+diff --git a/drivers/phy/rockchip/phy-rockchip-typec.c b/drivers/phy/rockchip/phy-rockchip-typec.c
+index 70a31251b202b..20f787d5ec581 100644
+--- a/drivers/phy/rockchip/phy-rockchip-typec.c
++++ b/drivers/phy/rockchip/phy-rockchip-typec.c
+@@ -808,9 +808,8 @@ static int tcphy_get_mode(struct rockchip_typec_phy *tcphy)
+ 	struct extcon_dev *edev = tcphy->extcon;
+ 	union extcon_property_value property;
+ 	unsigned int id;
+-	bool ufp, dp;
+ 	u8 mode;
+-	int ret;
++	int ret, ufp, dp;
+ 
+ 	if (!edev)
+ 		return MODE_DFP_USB;
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index 39d2024dc2ee5..c7ae9f900b532 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -356,8 +356,6 @@ static int bcm2835_of_gpio_ranges_fallback(struct gpio_chip *gc,
+ {
+ 	struct pinctrl_dev *pctldev = of_pinctrl_get(np);
+ 
+-	of_node_put(np);
+-
+ 	if (!pctldev)
+ 		return 0;
+ 
+diff --git a/drivers/pinctrl/mediatek/pinctrl-paris.c b/drivers/pinctrl/mediatek/pinctrl-paris.c
+index d0a4ebbe1e7e6..e486d66e220b0 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-paris.c
++++ b/drivers/pinctrl/mediatek/pinctrl-paris.c
+@@ -574,7 +574,7 @@ static int mtk_hw_get_value_wrap(struct mtk_pinctrl *hw, unsigned int gpio, int
+ ssize_t mtk_pctrl_show_one_pin(struct mtk_pinctrl *hw,
+ 	unsigned int gpio, char *buf, unsigned int bufLen)
+ {
+-	int pinmux, pullup, pullen, len = 0, r1 = -1, r0 = -1;
++	int pinmux, pullup = 0, pullen = 0, len = 0, r1 = -1, r0 = -1;
+ 	const struct mtk_pin_desc *desc;
+ 
+ 	if (gpio >= hw->soc->npins)
+@@ -637,7 +637,7 @@ static void mtk_pctrl_dbg_show(struct pinctrl_dev *pctldev, struct seq_file *s,
+ 			  unsigned int gpio)
+ {
+ 	struct mtk_pinctrl *hw = pinctrl_dev_get_drvdata(pctldev);
+-	char buf[PIN_DBG_BUF_SZ];
++	char buf[PIN_DBG_BUF_SZ] = { 0 };
+ 
+ 	(void)mtk_pctrl_show_one_pin(hw, gpio, buf, PIN_DBG_BUF_SZ);
+ 
+diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
+index 578b387100d9b..d2e2b101978f8 100644
+--- a/drivers/pinctrl/pinctrl-at91-pio4.c
++++ b/drivers/pinctrl/pinctrl-at91-pio4.c
+@@ -1081,8 +1081,8 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
+ 
+ 		pin_desc[i].number = i;
+ 		/* Pin naming convention: P(bank_name)(bank_pin_number). */
+-		pin_desc[i].name = kasprintf(GFP_KERNEL, "P%c%d",
+-					     bank + 'A', line);
++		pin_desc[i].name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "P%c%d",
++						  bank + 'A', line);
+ 
+ 		group->name = group_names[i] = pin_desc[i].name;
+ 		group->pin = pin_desc[i].number;
+diff --git a/drivers/pinctrl/pinctrl-at91.c b/drivers/pinctrl/pinctrl-at91.c
+index 9015486e38c18..52ecd47c18e2d 100644
+--- a/drivers/pinctrl/pinctrl-at91.c
++++ b/drivers/pinctrl/pinctrl-at91.c
+@@ -1891,7 +1891,7 @@ static int at91_gpio_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	for (i = 0; i < chip->ngpio; i++)
+-		names[i] = kasprintf(GFP_KERNEL, "pio%c%d", alias_idx + 'A', i);
++		names[i] = devm_kasprintf(&pdev->dev, GFP_KERNEL, "pio%c%d", alias_idx + 'A', i);
+ 
+ 	chip->names = (const char *const *)names;
+ 
+diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c
+index e0df5ad6741dc..4d07c531371cd 100644
+--- a/drivers/pinctrl/pinctrl-ingenic.c
++++ b/drivers/pinctrl/pinctrl-ingenic.c
+@@ -11,6 +11,7 @@
+ #include <linux/gpio/driver.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
++#include <linux/kernel.h>
+ #include <linux/of_device.h>
+ #include <linux/of_irq.h>
+ #include <linux/of_platform.h>
+@@ -2826,6 +2827,8 @@ static int __init ingenic_pinctrl_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++#define IF_ENABLED(cfg, ptr)	PTR_IF(IS_ENABLED(cfg), (ptr))
++
+ static const struct of_device_id ingenic_pinctrl_of_match[] = {
+ 	{ .compatible = "ingenic,jz4740-pinctrl", .data = &jz4740_chip_info },
+ 	{ .compatible = "ingenic,jz4725b-pinctrl", .data = &jz4725b_chip_info },
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 07b1204174bf1..2a454098eaaa5 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -61,8 +61,17 @@ enum rockchip_pinctrl_type {
+ 	RK3308,
+ 	RK3368,
+ 	RK3399,
++	RK3568,
+ };
+ 
++
++/**
++ * Generate a bitmask for setting a value (v) with a write mask bit in hiword
++ * register 31:16 area.
++ */
++#define WRITE_MASK_VAL(h, l, v) \
++	(GENMASK(((h) + 16), ((l) + 16)) | (((v) << (l)) & GENMASK((h), (l))))
++
+ /*
+  * Encode variants of iomux registers into a type variable
+  */
+@@ -290,6 +299,25 @@ struct rockchip_pin_bank {
+ 		.pull_type[3] = pull3,					\
+ 	}
+ 
++#define PIN_BANK_MUX_ROUTE_FLAGS(ID, PIN, FUNC, REG, VAL, FLAG)		\
++	{								\
++		.bank_num	= ID,					\
++		.pin		= PIN,					\
++		.func		= FUNC,					\
++		.route_offset	= REG,					\
++		.route_val	= VAL,					\
++		.route_location	= FLAG,					\
++	}
++
++#define RK_MUXROUTE_SAME(ID, PIN, FUNC, REG, VAL)	\
++	PIN_BANK_MUX_ROUTE_FLAGS(ID, PIN, FUNC, REG, VAL, ROCKCHIP_ROUTE_SAME)
++
++#define RK_MUXROUTE_GRF(ID, PIN, FUNC, REG, VAL)	\
++	PIN_BANK_MUX_ROUTE_FLAGS(ID, PIN, FUNC, REG, VAL, ROCKCHIP_ROUTE_GRF)
++
++#define RK_MUXROUTE_PMU(ID, PIN, FUNC, REG, VAL)	\
++	PIN_BANK_MUX_ROUTE_FLAGS(ID, PIN, FUNC, REG, VAL, ROCKCHIP_ROUTE_PMU)
++
+ /**
+  * struct rockchip_mux_recalced_data: represent a pin iomux data.
+  * @num: bank number.
+@@ -816,597 +844,203 @@ static void rockchip_get_recalced_mux(struct rockchip_pin_bank *bank, int pin,
+ }
+ 
+ static struct rockchip_mux_route_data px30_mux_route_data[] = {
+-	{
+-		/* cif-d2m0 */
+-		.bank_num = 2,
+-		.pin = 0,
+-		.func = 1,
+-		.route_offset = 0x184,
+-		.route_val = BIT(16 + 7),
+-	}, {
+-		/* cif-d2m1 */
+-		.bank_num = 3,
+-		.pin = 3,
+-		.func = 3,
+-		.route_offset = 0x184,
+-		.route_val = BIT(16 + 7) | BIT(7),
+-	}, {
+-		/* pdm-m0 */
+-		.bank_num = 3,
+-		.pin = 22,
+-		.func = 2,
+-		.route_offset = 0x184,
+-		.route_val = BIT(16 + 8),
+-	}, {
+-		/* pdm-m1 */
+-		.bank_num = 2,
+-		.pin = 22,
+-		.func = 1,
+-		.route_offset = 0x184,
+-		.route_val = BIT(16 + 8) | BIT(8),
+-	}, {
+-		/* uart2-rxm0 */
+-		.bank_num = 1,
+-		.pin = 27,
+-		.func = 2,
+-		.route_offset = 0x184,
+-		.route_val = BIT(16 + 10),
+-	}, {
+-		/* uart2-rxm1 */
+-		.bank_num = 2,
+-		.pin = 14,
+-		.func = 2,
+-		.route_offset = 0x184,
+-		.route_val = BIT(16 + 10) | BIT(10),
+-	}, {
+-		/* uart3-rxm0 */
+-		.bank_num = 0,
+-		.pin = 17,
+-		.func = 2,
+-		.route_offset = 0x184,
+-		.route_val = BIT(16 + 9),
+-	}, {
+-		/* uart3-rxm1 */
+-		.bank_num = 1,
+-		.pin = 15,
+-		.func = 2,
+-		.route_offset = 0x184,
+-		.route_val = BIT(16 + 9) | BIT(9),
+-	},
++	RK_MUXROUTE_SAME(2, RK_PA0, 1, 0x184, BIT(16 + 7)), /* cif-d2m0 */
++	RK_MUXROUTE_SAME(3, RK_PA3, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d2m1 */
++	RK_MUXROUTE_SAME(3, RK_PC6, 2, 0x184, BIT(16 + 8)), /* pdm-m0 */
++	RK_MUXROUTE_SAME(2, RK_PC6, 1, 0x184, BIT(16 + 8) | BIT(8)), /* pdm-m1 */
++	RK_MUXROUTE_SAME(1, RK_PD3, 2, 0x184, BIT(16 + 10)), /* uart2-rxm0 */
++	RK_MUXROUTE_SAME(2, RK_PB6, 2, 0x184, BIT(16 + 10) | BIT(10)), /* uart2-rxm1 */
++	RK_MUXROUTE_SAME(0, RK_PC1, 2, 0x184, BIT(16 + 9)), /* uart3-rxm0 */
++	RK_MUXROUTE_SAME(1, RK_PB7, 2, 0x184, BIT(16 + 9) | BIT(9)), /* uart3-rxm1 */
+ };
+ 
+ static struct rockchip_mux_route_data rk3128_mux_route_data[] = {
+-	{
+-		/* spi-0 */
+-		.bank_num = 1,
+-		.pin = 10,
+-		.func = 1,
+-		.route_offset = 0x144,
+-		.route_val = BIT(16 + 3) | BIT(16 + 4),
+-	}, {
+-		/* spi-1 */
+-		.bank_num = 1,
+-		.pin = 27,
+-		.func = 3,
+-		.route_offset = 0x144,
+-		.route_val = BIT(16 + 3) | BIT(16 + 4) | BIT(3),
+-	}, {
+-		/* spi-2 */
+-		.bank_num = 0,
+-		.pin = 13,
+-		.func = 2,
+-		.route_offset = 0x144,
+-		.route_val = BIT(16 + 3) | BIT(16 + 4) | BIT(4),
+-	}, {
+-		/* i2s-0 */
+-		.bank_num = 1,
+-		.pin = 5,
+-		.func = 1,
+-		.route_offset = 0x144,
+-		.route_val = BIT(16 + 5),
+-	}, {
+-		/* i2s-1 */
+-		.bank_num = 0,
+-		.pin = 14,
+-		.func = 1,
+-		.route_offset = 0x144,
+-		.route_val = BIT(16 + 5) | BIT(5),
+-	}, {
+-		/* emmc-0 */
+-		.bank_num = 1,
+-		.pin = 22,
+-		.func = 2,
+-		.route_offset = 0x144,
+-		.route_val = BIT(16 + 6),
+-	}, {
+-		/* emmc-1 */
+-		.bank_num = 2,
+-		.pin = 4,
+-		.func = 2,
+-		.route_offset = 0x144,
+-		.route_val = BIT(16 + 6) | BIT(6),
+-	},
++	RK_MUXROUTE_SAME(1, RK_PB2, 1, 0x144, BIT(16 + 3) | BIT(16 + 4)), /* spi-0 */
++	RK_MUXROUTE_SAME(1, RK_PD3, 3, 0x144, BIT(16 + 3) | BIT(16 + 4) | BIT(3)), /* spi-1 */
++	RK_MUXROUTE_SAME(0, RK_PB5, 2, 0x144, BIT(16 + 3) | BIT(16 + 4) | BIT(4)), /* spi-2 */
++	RK_MUXROUTE_SAME(1, RK_PA5, 1, 0x144, BIT(16 + 5)), /* i2s-0 */
++	RK_MUXROUTE_SAME(0, RK_PB6, 1, 0x144, BIT(16 + 5) | BIT(5)), /* i2s-1 */
++	RK_MUXROUTE_SAME(1, RK_PC6, 2, 0x144, BIT(16 + 6)), /* emmc-0 */
++	RK_MUXROUTE_SAME(2, RK_PA4, 2, 0x144, BIT(16 + 6) | BIT(6)), /* emmc-1 */
+ };
+ 
+ static struct rockchip_mux_route_data rk3188_mux_route_data[] = {
+-	{
+-		/* non-iomuxed emmc/flash pins on flash-dqs */
+-		.bank_num = 0,
+-		.pin = 24,
+-		.func = 1,
+-		.route_location = ROCKCHIP_ROUTE_GRF,
+-		.route_offset = 0xa0,
+-		.route_val = BIT(16 + 11),
+-	}, {
+-		/* non-iomuxed emmc/flash pins on emmc-clk */
+-		.bank_num = 0,
+-		.pin = 24,
+-		.func = 2,
+-		.route_location = ROCKCHIP_ROUTE_GRF,
+-		.route_offset = 0xa0,
+-		.route_val = BIT(16 + 11) | BIT(11),
+-	},
++	RK_MUXROUTE_SAME(0, RK_PD0, 1, 0xa0, BIT(16 + 11)), /* non-iomuxed emmc/flash pins on flash-dqs */
++	RK_MUXROUTE_SAME(0, RK_PD0, 2, 0xa0, BIT(16 + 11) | BIT(11)), /* non-iomuxed emmc/flash pins on emmc-clk */
+ };
+ 
+ static struct rockchip_mux_route_data rk3228_mux_route_data[] = {
+-	{
+-		/* pwm0-0 */
+-		.bank_num = 0,
+-		.pin = 26,
+-		.func = 1,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16),
+-	}, {
+-		/* pwm0-1 */
+-		.bank_num = 3,
+-		.pin = 21,
+-		.func = 1,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16) | BIT(0),
+-	}, {
+-		/* pwm1-0 */
+-		.bank_num = 0,
+-		.pin = 27,
+-		.func = 1,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 1),
+-	}, {
+-		/* pwm1-1 */
+-		.bank_num = 0,
+-		.pin = 30,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 1) | BIT(1),
+-	}, {
+-		/* pwm2-0 */
+-		.bank_num = 0,
+-		.pin = 28,
+-		.func = 1,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 2),
+-	}, {
+-		/* pwm2-1 */
+-		.bank_num = 1,
+-		.pin = 12,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 2) | BIT(2),
+-	}, {
+-		/* pwm3-0 */
+-		.bank_num = 3,
+-		.pin = 26,
+-		.func = 1,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 3),
+-	}, {
+-		/* pwm3-1 */
+-		.bank_num = 1,
+-		.pin = 11,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 3) | BIT(3),
+-	}, {
+-		/* sdio-0_d0 */
+-		.bank_num = 1,
+-		.pin = 1,
+-		.func = 1,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 4),
+-	}, {
+-		/* sdio-1_d0 */
+-		.bank_num = 3,
+-		.pin = 2,
+-		.func = 1,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 4) | BIT(4),
+-	}, {
+-		/* spi-0_rx */
+-		.bank_num = 0,
+-		.pin = 13,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 5),
+-	}, {
+-		/* spi-1_rx */
+-		.bank_num = 2,
+-		.pin = 0,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 5) | BIT(5),
+-	}, {
+-		/* emmc-0_cmd */
+-		.bank_num = 1,
+-		.pin = 22,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 7),
+-	}, {
+-		/* emmc-1_cmd */
+-		.bank_num = 2,
+-		.pin = 4,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 7) | BIT(7),
+-	}, {
+-		/* uart2-0_rx */
+-		.bank_num = 1,
+-		.pin = 19,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 8),
+-	}, {
+-		/* uart2-1_rx */
+-		.bank_num = 1,
+-		.pin = 10,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 8) | BIT(8),
+-	}, {
+-		/* uart1-0_rx */
+-		.bank_num = 1,
+-		.pin = 10,
+-		.func = 1,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 11),
+-	}, {
+-		/* uart1-1_rx */
+-		.bank_num = 3,
+-		.pin = 13,
+-		.func = 1,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 11) | BIT(11),
+-	},
++	RK_MUXROUTE_SAME(0, RK_PD2, 1, 0x50, BIT(16)), /* pwm0-0 */
++	RK_MUXROUTE_SAME(3, RK_PC5, 1, 0x50, BIT(16) | BIT(0)), /* pwm0-1 */
++	RK_MUXROUTE_SAME(0, RK_PD3, 1, 0x50, BIT(16 + 1)), /* pwm1-0 */
++	RK_MUXROUTE_SAME(0, RK_PD6, 2, 0x50, BIT(16 + 1) | BIT(1)), /* pwm1-1 */
++	RK_MUXROUTE_SAME(0, RK_PD4, 1, 0x50, BIT(16 + 2)), /* pwm2-0 */
++	RK_MUXROUTE_SAME(1, RK_PB4, 2, 0x50, BIT(16 + 2) | BIT(2)), /* pwm2-1 */
++	RK_MUXROUTE_SAME(3, RK_PD2, 1, 0x50, BIT(16 + 3)), /* pwm3-0 */
++	RK_MUXROUTE_SAME(1, RK_PB3, 2, 0x50, BIT(16 + 3) | BIT(3)), /* pwm3-1 */
++	RK_MUXROUTE_SAME(1, RK_PA1, 1, 0x50, BIT(16 + 4)), /* sdio-0_d0 */
++	RK_MUXROUTE_SAME(3, RK_PA2, 1, 0x50, BIT(16 + 4) | BIT(4)), /* sdio-1_d0 */
++	RK_MUXROUTE_SAME(0, RK_PB5, 2, 0x50, BIT(16 + 5)), /* spi-0_rx */
++	RK_MUXROUTE_SAME(2, RK_PA0, 2, 0x50, BIT(16 + 5) | BIT(5)), /* spi-1_rx */
++	RK_MUXROUTE_SAME(1, RK_PC6, 2, 0x50, BIT(16 + 7)), /* emmc-0_cmd */
++	RK_MUXROUTE_SAME(2, RK_PA4, 2, 0x50, BIT(16 + 7) | BIT(7)), /* emmc-1_cmd */
++	RK_MUXROUTE_SAME(1, RK_PC3, 2, 0x50, BIT(16 + 8)), /* uart2-0_rx */
++	RK_MUXROUTE_SAME(1, RK_PB2, 2, 0x50, BIT(16 + 8) | BIT(8)), /* uart2-1_rx */
++	RK_MUXROUTE_SAME(1, RK_PB2, 1, 0x50, BIT(16 + 11)), /* uart1-0_rx */
++	RK_MUXROUTE_SAME(3, RK_PB5, 1, 0x50, BIT(16 + 11) | BIT(11)), /* uart1-1_rx */
+ };
+ 
+ static struct rockchip_mux_route_data rk3288_mux_route_data[] = {
+-	{
+-		/* edphdmi_cecinoutt1 */
+-		.bank_num = 7,
+-		.pin = 16,
+-		.func = 2,
+-		.route_offset = 0x264,
+-		.route_val = BIT(16 + 12) | BIT(12),
+-	}, {
+-		/* edphdmi_cecinout */
+-		.bank_num = 7,
+-		.pin = 23,
+-		.func = 4,
+-		.route_offset = 0x264,
+-		.route_val = BIT(16 + 12),
+-	},
++	RK_MUXROUTE_SAME(7, RK_PC0, 2, 0x264, BIT(16 + 12) | BIT(12)), /* edphdmi_cecinoutt1 */
++	RK_MUXROUTE_SAME(7, RK_PC7, 4, 0x264, BIT(16 + 12)), /* edphdmi_cecinout */
+ };
+ 
+ static struct rockchip_mux_route_data rk3308_mux_route_data[] = {
+-	{
+-		/* rtc_clk */
+-		.bank_num = 0,
+-		.pin = 19,
+-		.func = 1,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 0) | BIT(0),
+-	}, {
+-		/* uart2_rxm0 */
+-		.bank_num = 1,
+-		.pin = 22,
+-		.func = 2,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 2) | BIT(16 + 3),
+-	}, {
+-		/* uart2_rxm1 */
+-		.bank_num = 4,
+-		.pin = 26,
+-		.func = 2,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 2) | BIT(16 + 3) | BIT(2),
+-	}, {
+-		/* i2c3_sdam0 */
+-		.bank_num = 0,
+-		.pin = 15,
+-		.func = 2,
+-		.route_offset = 0x608,
+-		.route_val = BIT(16 + 8) | BIT(16 + 9),
+-	}, {
+-		/* i2c3_sdam1 */
+-		.bank_num = 3,
+-		.pin = 12,
+-		.func = 2,
+-		.route_offset = 0x608,
+-		.route_val = BIT(16 + 8) | BIT(16 + 9) | BIT(8),
+-	}, {
+-		/* i2c3_sdam2 */
+-		.bank_num = 2,
+-		.pin = 0,
+-		.func = 3,
+-		.route_offset = 0x608,
+-		.route_val = BIT(16 + 8) | BIT(16 + 9) | BIT(9),
+-	}, {
+-		/* i2s-8ch-1-sclktxm0 */
+-		.bank_num = 1,
+-		.pin = 3,
+-		.func = 2,
+-		.route_offset = 0x308,
+-		.route_val = BIT(16 + 3),
+-	}, {
+-		/* i2s-8ch-1-sclkrxm0 */
+-		.bank_num = 1,
+-		.pin = 4,
+-		.func = 2,
+-		.route_offset = 0x308,
+-		.route_val = BIT(16 + 3),
+-	}, {
+-		/* i2s-8ch-1-sclktxm1 */
+-		.bank_num = 1,
+-		.pin = 13,
+-		.func = 2,
+-		.route_offset = 0x308,
+-		.route_val = BIT(16 + 3) | BIT(3),
+-	}, {
+-		/* i2s-8ch-1-sclkrxm1 */
+-		.bank_num = 1,
+-		.pin = 14,
+-		.func = 2,
+-		.route_offset = 0x308,
+-		.route_val = BIT(16 + 3) | BIT(3),
+-	}, {
+-		/* pdm-clkm0 */
+-		.bank_num = 1,
+-		.pin = 4,
+-		.func = 3,
+-		.route_offset = 0x308,
+-		.route_val =  BIT(16 + 12) | BIT(16 + 13),
+-	}, {
+-		/* pdm-clkm1 */
+-		.bank_num = 1,
+-		.pin = 14,
+-		.func = 4,
+-		.route_offset = 0x308,
+-		.route_val = BIT(16 + 12) | BIT(16 + 13) | BIT(12),
+-	}, {
+-		/* pdm-clkm2 */
+-		.bank_num = 2,
+-		.pin = 6,
+-		.func = 2,
+-		.route_offset = 0x308,
+-		.route_val = BIT(16 + 12) | BIT(16 + 13) | BIT(13),
+-	}, {
+-		/* pdm-clkm-m2 */
+-		.bank_num = 2,
+-		.pin = 4,
+-		.func = 3,
+-		.route_offset = 0x600,
+-		.route_val = BIT(16 + 2) | BIT(2),
+-	}, {
+-		/* spi1_miso */
+-		.bank_num = 3,
+-		.pin = 10,
+-		.func = 3,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 9),
+-	}, {
+-		/* spi1_miso_m1 */
+-		.bank_num = 2,
+-		.pin = 4,
+-		.func = 2,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 9) | BIT(9),
+-	}, {
+-		/* owire_m0 */
+-		.bank_num = 0,
+-		.pin = 11,
+-		.func = 3,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 10) | BIT(16 + 11),
+-	}, {
+-		/* owire_m1 */
+-		.bank_num = 1,
+-		.pin = 22,
+-		.func = 7,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 10) | BIT(16 + 11) | BIT(10),
+-	}, {
+-		/* owire_m2 */
+-		.bank_num = 2,
+-		.pin = 2,
+-		.func = 5,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 10) | BIT(16 + 11) | BIT(11),
+-	}, {
+-		/* can_rxd_m0 */
+-		.bank_num = 0,
+-		.pin = 11,
+-		.func = 2,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 12) | BIT(16 + 13),
+-	}, {
+-		/* can_rxd_m1 */
+-		.bank_num = 1,
+-		.pin = 22,
+-		.func = 5,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 12) | BIT(16 + 13) | BIT(12),
+-	}, {
+-		/* can_rxd_m2 */
+-		.bank_num = 2,
+-		.pin = 2,
+-		.func = 4,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 12) | BIT(16 + 13) | BIT(13),
+-	}, {
+-		/* mac_rxd0_m0 */
+-		.bank_num = 1,
+-		.pin = 20,
+-		.func = 3,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 14),
+-	}, {
+-		/* mac_rxd0_m1 */
+-		.bank_num = 4,
+-		.pin = 2,
+-		.func = 2,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 14) | BIT(14),
+-	}, {
+-		/* uart3_rx */
+-		.bank_num = 3,
+-		.pin = 12,
+-		.func = 4,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 15),
+-	}, {
+-		/* uart3_rx_m1 */
+-		.bank_num = 0,
+-		.pin = 17,
+-		.func = 3,
+-		.route_offset = 0x314,
+-		.route_val = BIT(16 + 15) | BIT(15),
+-	},
++	RK_MUXROUTE_SAME(0, RK_PC3, 1, 0x314, BIT(16 + 0) | BIT(0)), /* rtc_clk */
++	RK_MUXROUTE_SAME(1, RK_PC6, 2, 0x314, BIT(16 + 2) | BIT(16 + 3)), /* uart2_rxm0 */
++	RK_MUXROUTE_SAME(4, RK_PD2, 2, 0x314, BIT(16 + 2) | BIT(16 + 3) | BIT(2)), /* uart2_rxm1 */
++	RK_MUXROUTE_SAME(0, RK_PB7, 2, 0x608, BIT(16 + 8) | BIT(16 + 9)), /* i2c3_sdam0 */
++	RK_MUXROUTE_SAME(3, RK_PB4, 2, 0x608, BIT(16 + 8) | BIT(16 + 9) | BIT(8)), /* i2c3_sdam1 */
++	RK_MUXROUTE_SAME(2, RK_PA0, 3, 0x608, BIT(16 + 8) | BIT(16 + 9) | BIT(9)), /* i2c3_sdam2 */
++	RK_MUXROUTE_SAME(1, RK_PA3, 2, 0x308, BIT(16 + 3)), /* i2s-8ch-1-sclktxm0 */
++	RK_MUXROUTE_SAME(1, RK_PA4, 2, 0x308, BIT(16 + 3)), /* i2s-8ch-1-sclkrxm0 */
++	RK_MUXROUTE_SAME(1, RK_PB5, 2, 0x308, BIT(16 + 3) | BIT(3)), /* i2s-8ch-1-sclktxm1 */
++	RK_MUXROUTE_SAME(1, RK_PB6, 2, 0x308, BIT(16 + 3) | BIT(3)), /* i2s-8ch-1-sclkrxm1 */
++	RK_MUXROUTE_SAME(1, RK_PA4, 3, 0x308, BIT(16 + 12) | BIT(16 + 13)), /* pdm-clkm0 */
++	RK_MUXROUTE_SAME(1, RK_PB6, 4, 0x308, BIT(16 + 12) | BIT(16 + 13) | BIT(12)), /* pdm-clkm1 */
++	RK_MUXROUTE_SAME(2, RK_PA6, 2, 0x308, BIT(16 + 12) | BIT(16 + 13) | BIT(13)), /* pdm-clkm2 */
++	RK_MUXROUTE_SAME(2, RK_PA4, 3, 0x600, BIT(16 + 2) | BIT(2)), /* pdm-clkm-m2 */
++	RK_MUXROUTE_SAME(3, RK_PB2, 3, 0x314, BIT(16 + 9)), /* spi1_miso */
++	RK_MUXROUTE_SAME(2, RK_PA4, 2, 0x314, BIT(16 + 9) | BIT(9)), /* spi1_miso_m1 */
++	RK_MUXROUTE_SAME(0, RK_PB3, 3, 0x314, BIT(16 + 10) | BIT(16 + 11)), /* owire_m0 */
++	RK_MUXROUTE_SAME(1, RK_PC6, 7, 0x314, BIT(16 + 10) | BIT(16 + 11) | BIT(10)), /* owire_m1 */
++	RK_MUXROUTE_SAME(2, RK_PA2, 5, 0x314, BIT(16 + 10) | BIT(16 + 11) | BIT(11)), /* owire_m2 */
++	RK_MUXROUTE_SAME(0, RK_PB3, 2, 0x314, BIT(16 + 12) | BIT(16 + 13)), /* can_rxd_m0 */
++	RK_MUXROUTE_SAME(1, RK_PC6, 5, 0x314, BIT(16 + 12) | BIT(16 + 13) | BIT(12)), /* can_rxd_m1 */
++	RK_MUXROUTE_SAME(2, RK_PA2, 4, 0x314, BIT(16 + 12) | BIT(16 + 13) | BIT(13)), /* can_rxd_m2 */
++	RK_MUXROUTE_SAME(1, RK_PC4, 3, 0x314, BIT(16 + 14)), /* mac_rxd0_m0 */
++	RK_MUXROUTE_SAME(4, RK_PA2, 2, 0x314, BIT(16 + 14) | BIT(14)), /* mac_rxd0_m1 */
++	RK_MUXROUTE_SAME(3, RK_PB4, 4, 0x314, BIT(16 + 15)), /* uart3_rx */
++	RK_MUXROUTE_SAME(0, RK_PC1, 3, 0x314, BIT(16 + 15) | BIT(15)), /* uart3_rx_m1 */
+ };
+ 
+ static struct rockchip_mux_route_data rk3328_mux_route_data[] = {
+-	{
+-		/* uart2dbg_rxm0 */
+-		.bank_num = 1,
+-		.pin = 1,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16) | BIT(16 + 1),
+-	}, {
+-		/* uart2dbg_rxm1 */
+-		.bank_num = 2,
+-		.pin = 1,
+-		.func = 1,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16) | BIT(16 + 1) | BIT(0),
+-	}, {
+-		/* gmac-m1_rxd0 */
+-		.bank_num = 1,
+-		.pin = 11,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 2) | BIT(2),
+-	}, {
+-		/* gmac-m1-optimized_rxd3 */
+-		.bank_num = 1,
+-		.pin = 14,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 10) | BIT(10),
+-	}, {
+-		/* pdm_sdi0m0 */
+-		.bank_num = 2,
+-		.pin = 19,
+-		.func = 2,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 3),
+-	}, {
+-		/* pdm_sdi0m1 */
+-		.bank_num = 1,
+-		.pin = 23,
+-		.func = 3,
+-		.route_offset = 0x50,
+-		.route_val =  BIT(16 + 3) | BIT(3),
+-	}, {
+-		/* spi_rxdm2 */
+-		.bank_num = 3,
+-		.pin = 2,
+-		.func = 4,
+-		.route_offset = 0x50,
+-		.route_val =  BIT(16 + 4) | BIT(16 + 5) | BIT(5),
+-	}, {
+-		/* i2s2_sdim0 */
+-		.bank_num = 1,
+-		.pin = 24,
+-		.func = 1,
+-		.route_offset = 0x50,
+-		.route_val = BIT(16 + 6),
+-	}, {
+-		/* i2s2_sdim1 */
+-		.bank_num = 3,
+-		.pin = 2,
+-		.func = 6,
+-		.route_offset = 0x50,
+-		.route_val =  BIT(16 + 6) | BIT(6),
+-	}, {
+-		/* card_iom1 */
+-		.bank_num = 2,
+-		.pin = 22,
+-		.func = 3,
+-		.route_offset = 0x50,
+-		.route_val =  BIT(16 + 7) | BIT(7),
+-	}, {
+-		/* tsp_d5m1 */
+-		.bank_num = 2,
+-		.pin = 16,
+-		.func = 3,
+-		.route_offset = 0x50,
+-		.route_val =  BIT(16 + 8) | BIT(8),
+-	}, {
+-		/* cif_data5m1 */
+-		.bank_num = 2,
+-		.pin = 16,
+-		.func = 4,
+-		.route_offset = 0x50,
+-		.route_val =  BIT(16 + 9) | BIT(9),
+-	},
++	RK_MUXROUTE_SAME(1, RK_PA1, 2, 0x50, BIT(16) | BIT(16 + 1)), /* uart2dbg_rxm0 */
++	RK_MUXROUTE_SAME(2, RK_PA1, 1, 0x50, BIT(16) | BIT(16 + 1) | BIT(0)), /* uart2dbg_rxm1 */
++	RK_MUXROUTE_SAME(1, RK_PB3, 2, 0x50, BIT(16 + 2) | BIT(2)), /* gmac-m1_rxd0 */
++	RK_MUXROUTE_SAME(1, RK_PB6, 2, 0x50, BIT(16 + 10) | BIT(10)), /* gmac-m1-optimized_rxd3 */
++	RK_MUXROUTE_SAME(2, RK_PC3, 2, 0x50, BIT(16 + 3)), /* pdm_sdi0m0 */
++	RK_MUXROUTE_SAME(1, RK_PC7, 3, 0x50, BIT(16 + 3) | BIT(3)), /* pdm_sdi0m1 */
++	RK_MUXROUTE_SAME(3, RK_PA2, 4, 0x50, BIT(16 + 4) | BIT(16 + 5) | BIT(5)), /* spi_rxdm2 */
++	RK_MUXROUTE_SAME(1, RK_PD0, 1, 0x50, BIT(16 + 6)), /* i2s2_sdim0 */
++	RK_MUXROUTE_SAME(3, RK_PA2, 6, 0x50, BIT(16 + 6) | BIT(6)), /* i2s2_sdim1 */
++	RK_MUXROUTE_SAME(2, RK_PC6, 3, 0x50, BIT(16 + 7) | BIT(7)), /* card_iom1 */
++	RK_MUXROUTE_SAME(2, RK_PC0, 3, 0x50, BIT(16 + 8) | BIT(8)), /* tsp_d5m1 */
++	RK_MUXROUTE_SAME(2, RK_PC0, 4, 0x50, BIT(16 + 9) | BIT(9)), /* cif_data5m1 */
+ };
+ 
+ static struct rockchip_mux_route_data rk3399_mux_route_data[] = {
+-	{
+-		/* uart2dbga_rx */
+-		.bank_num = 4,
+-		.pin = 8,
+-		.func = 2,
+-		.route_offset = 0xe21c,
+-		.route_val = BIT(16 + 10) | BIT(16 + 11),
+-	}, {
+-		/* uart2dbgb_rx */
+-		.bank_num = 4,
+-		.pin = 16,
+-		.func = 2,
+-		.route_offset = 0xe21c,
+-		.route_val = BIT(16 + 10) | BIT(16 + 11) | BIT(10),
+-	}, {
+-		/* uart2dbgc_rx */
+-		.bank_num = 4,
+-		.pin = 19,
+-		.func = 1,
+-		.route_offset = 0xe21c,
+-		.route_val = BIT(16 + 10) | BIT(16 + 11) | BIT(11),
+-	}, {
+-		/* pcie_clkreqn */
+-		.bank_num = 2,
+-		.pin = 26,
+-		.func = 2,
+-		.route_offset = 0xe21c,
+-		.route_val = BIT(16 + 14),
+-	}, {
+-		/* pcie_clkreqnb */
+-		.bank_num = 4,
+-		.pin = 24,
+-		.func = 1,
+-		.route_offset = 0xe21c,
+-		.route_val = BIT(16 + 14) | BIT(14),
+-	},
++	RK_MUXROUTE_SAME(4, RK_PB0, 2, 0xe21c, BIT(16 + 10) | BIT(16 + 11)), /* uart2dbga_rx */
++	RK_MUXROUTE_SAME(4, RK_PC0, 2, 0xe21c, BIT(16 + 10) | BIT(16 + 11) | BIT(10)), /* uart2dbgb_rx */
++	RK_MUXROUTE_SAME(4, RK_PC3, 1, 0xe21c, BIT(16 + 10) | BIT(16 + 11) | BIT(11)), /* uart2dbgc_rx */
++	RK_MUXROUTE_SAME(2, RK_PD2, 2, 0xe21c, BIT(16 + 14)), /* pcie_clkreqn */
++	RK_MUXROUTE_SAME(4, RK_PD0, 1, 0xe21c, BIT(16 + 14) | BIT(14)), /* pcie_clkreqnb */
++};
++
++static struct rockchip_mux_route_data rk3568_mux_route_data[] = {
++	RK_MUXROUTE_PMU(0, RK_PB7, 1, 0x0110, WRITE_MASK_VAL(1, 0, 0)), /* PWM0 IO mux M0 */
++	RK_MUXROUTE_PMU(0, RK_PC7, 2, 0x0110, WRITE_MASK_VAL(1, 0, 1)), /* PWM0 IO mux M1 */
++	RK_MUXROUTE_PMU(0, RK_PC0, 1, 0x0110, WRITE_MASK_VAL(3, 2, 0)), /* PWM1 IO mux M0 */
++	RK_MUXROUTE_PMU(0, RK_PB5, 4, 0x0110, WRITE_MASK_VAL(3, 2, 1)), /* PWM1 IO mux M1 */
++	RK_MUXROUTE_PMU(0, RK_PC1, 1, 0x0110, WRITE_MASK_VAL(5, 4, 0)), /* PWM2 IO mux M0 */
++	RK_MUXROUTE_PMU(0, RK_PB6, 4, 0x0110, WRITE_MASK_VAL(5, 4, 1)), /* PWM2 IO mux M1 */
++	RK_MUXROUTE_GRF(0, RK_PB3, 2, 0x0300, WRITE_MASK_VAL(0, 0, 0)), /* CAN0 IO mux M0 */
++	RK_MUXROUTE_GRF(2, RK_PA1, 4, 0x0300, WRITE_MASK_VAL(0, 0, 1)), /* CAN0 IO mux M1 */
++	RK_MUXROUTE_GRF(1, RK_PA1, 3, 0x0300, WRITE_MASK_VAL(2, 2, 0)), /* CAN1 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PC3, 3, 0x0300, WRITE_MASK_VAL(2, 2, 1)), /* CAN1 IO mux M1 */
++	RK_MUXROUTE_GRF(4, RK_PB5, 3, 0x0300, WRITE_MASK_VAL(4, 4, 0)), /* CAN2 IO mux M0 */
++	RK_MUXROUTE_GRF(2, RK_PB2, 4, 0x0300, WRITE_MASK_VAL(4, 4, 1)), /* CAN2 IO mux M1 */
++	RK_MUXROUTE_GRF(4, RK_PC4, 1, 0x0300, WRITE_MASK_VAL(6, 6, 0)), /* HPDIN IO mux M0 */
++	RK_MUXROUTE_GRF(0, RK_PC2, 2, 0x0300, WRITE_MASK_VAL(6, 6, 1)), /* HPDIN IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PB1, 3, 0x0300, WRITE_MASK_VAL(8, 8, 0)), /* GMAC1 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PA7, 3, 0x0300, WRITE_MASK_VAL(8, 8, 1)), /* GMAC1 IO mux M1 */
++	RK_MUXROUTE_GRF(4, RK_PD1, 1, 0x0300, WRITE_MASK_VAL(10, 10, 0)), /* HDMITX IO mux M0 */
++	RK_MUXROUTE_GRF(0, RK_PC7, 1, 0x0300, WRITE_MASK_VAL(10, 10, 1)), /* HDMITX IO mux M1 */
++	RK_MUXROUTE_GRF(0, RK_PB6, 1, 0x0300, WRITE_MASK_VAL(14, 14, 0)), /* I2C2 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PB4, 1, 0x0300, WRITE_MASK_VAL(14, 14, 1)), /* I2C2 IO mux M1 */
++	RK_MUXROUTE_GRF(1, RK_PA0, 1, 0x0304, WRITE_MASK_VAL(0, 0, 0)), /* I2C3 IO mux M0 */
++	RK_MUXROUTE_GRF(3, RK_PB6, 4, 0x0304, WRITE_MASK_VAL(0, 0, 1)), /* I2C3 IO mux M1 */
++	RK_MUXROUTE_GRF(4, RK_PB2, 1, 0x0304, WRITE_MASK_VAL(2, 2, 0)), /* I2C4 IO mux M0 */
++	RK_MUXROUTE_GRF(2, RK_PB1, 2, 0x0304, WRITE_MASK_VAL(2, 2, 1)), /* I2C4 IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PB4, 4, 0x0304, WRITE_MASK_VAL(4, 4, 0)), /* I2C5 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PD0, 2, 0x0304, WRITE_MASK_VAL(4, 4, 1)), /* I2C5 IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PB1, 5, 0x0304, WRITE_MASK_VAL(14, 14, 0)), /* PWM8 IO mux M0 */
++	RK_MUXROUTE_GRF(1, RK_PD5, 4, 0x0304, WRITE_MASK_VAL(14, 14, 1)), /* PWM8 IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PB2, 5, 0x0308, WRITE_MASK_VAL(0, 0, 0)), /* PWM9 IO mux M0 */
++	RK_MUXROUTE_GRF(1, RK_PD6, 4, 0x0308, WRITE_MASK_VAL(0, 0, 1)), /* PWM9 IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PB5, 5, 0x0308, WRITE_MASK_VAL(2, 2, 0)), /* PWM10 IO mux M0 */
++	RK_MUXROUTE_GRF(2, RK_PA1, 2, 0x0308, WRITE_MASK_VAL(2, 2, 1)), /* PWM10 IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PB6, 5, 0x0308, WRITE_MASK_VAL(4, 4, 0)), /* PWM11 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PC0, 3, 0x0308, WRITE_MASK_VAL(4, 4, 1)), /* PWM11 IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PB7, 2, 0x0308, WRITE_MASK_VAL(6, 6, 0)), /* PWM12 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PC5, 1, 0x0308, WRITE_MASK_VAL(6, 6, 1)), /* PWM12 IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PC0, 2, 0x0308, WRITE_MASK_VAL(8, 8, 0)), /* PWM13 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PC6, 1, 0x0308, WRITE_MASK_VAL(8, 8, 1)), /* PWM13 IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PC4, 1, 0x0308, WRITE_MASK_VAL(10, 10, 0)), /* PWM14 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PC2, 1, 0x0308, WRITE_MASK_VAL(10, 10, 1)), /* PWM14 IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PC5, 1, 0x0308, WRITE_MASK_VAL(12, 12, 0)), /* PWM15 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PC3, 1, 0x0308, WRITE_MASK_VAL(12, 12, 1)), /* PWM15 IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PD2, 3, 0x0308, WRITE_MASK_VAL(14, 14, 0)), /* SDMMC2 IO mux M0 */
++	RK_MUXROUTE_GRF(3, RK_PA5, 5, 0x0308, WRITE_MASK_VAL(14, 14, 1)), /* SDMMC2 IO mux M1 */
++	RK_MUXROUTE_GRF(0, RK_PB5, 2, 0x030c, WRITE_MASK_VAL(0, 0, 0)), /* SPI0 IO mux M0 */
++	RK_MUXROUTE_GRF(2, RK_PD3, 3, 0x030c, WRITE_MASK_VAL(0, 0, 1)), /* SPI0 IO mux M1 */
++	RK_MUXROUTE_GRF(2, RK_PB5, 3, 0x030c, WRITE_MASK_VAL(2, 2, 0)), /* SPI1 IO mux M0 */
++	RK_MUXROUTE_GRF(3, RK_PC3, 3, 0x030c, WRITE_MASK_VAL(2, 2, 1)), /* SPI1 IO mux M1 */
++	RK_MUXROUTE_GRF(2, RK_PC1, 4, 0x030c, WRITE_MASK_VAL(4, 4, 0)), /* SPI2 IO mux M0 */
++	RK_MUXROUTE_GRF(3, RK_PA0, 3, 0x030c, WRITE_MASK_VAL(4, 4, 1)), /* SPI2 IO mux M1 */
++	RK_MUXROUTE_GRF(4, RK_PB3, 4, 0x030c, WRITE_MASK_VAL(6, 6, 0)), /* SPI3 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PC2, 2, 0x030c, WRITE_MASK_VAL(6, 6, 1)), /* SPI3 IO mux M1 */
++	RK_MUXROUTE_GRF(2, RK_PB4, 2, 0x030c, WRITE_MASK_VAL(8, 8, 0)), /* UART1 IO mux M0 */
++	RK_MUXROUTE_GRF(3, RK_PD6, 4, 0x030c, WRITE_MASK_VAL(8, 8, 1)), /* UART1 IO mux M1 */
++	RK_MUXROUTE_GRF(0, RK_PD1, 1, 0x030c, WRITE_MASK_VAL(10, 10, 0)), /* UART2 IO mux M0 */
++	RK_MUXROUTE_GRF(1, RK_PD5, 2, 0x030c, WRITE_MASK_VAL(10, 10, 1)), /* UART2 IO mux M1 */
++	RK_MUXROUTE_GRF(1, RK_PA1, 2, 0x030c, WRITE_MASK_VAL(12, 12, 0)), /* UART3 IO mux M0 */
++	RK_MUXROUTE_GRF(3, RK_PB7, 4, 0x030c, WRITE_MASK_VAL(12, 12, 1)), /* UART3 IO mux M1 */
++	RK_MUXROUTE_GRF(1, RK_PA6, 2, 0x030c, WRITE_MASK_VAL(14, 14, 0)), /* UART4 IO mux M0 */
++	RK_MUXROUTE_GRF(3, RK_PB2, 4, 0x030c, WRITE_MASK_VAL(14, 14, 1)), /* UART4 IO mux M1 */
++	RK_MUXROUTE_GRF(2, RK_PA2, 3, 0x0310, WRITE_MASK_VAL(0, 0, 0)), /* UART5 IO mux M0 */
++	RK_MUXROUTE_GRF(3, RK_PC2, 4, 0x0310, WRITE_MASK_VAL(0, 0, 1)), /* UART5 IO mux M1 */
++	RK_MUXROUTE_GRF(2, RK_PA4, 3, 0x0310, WRITE_MASK_VAL(2, 2, 0)), /* UART6 IO mux M0 */
++	RK_MUXROUTE_GRF(1, RK_PD5, 3, 0x0310, WRITE_MASK_VAL(2, 2, 1)), /* UART6 IO mux M1 */
++	RK_MUXROUTE_GRF(2, RK_PA6, 3, 0x0310, WRITE_MASK_VAL(5, 4, 0)), /* UART7 IO mux M0 */
++	RK_MUXROUTE_GRF(3, RK_PC4, 4, 0x0310, WRITE_MASK_VAL(5, 4, 1)), /* UART7 IO mux M1 */
++	RK_MUXROUTE_GRF(4, RK_PA2, 4, 0x0310, WRITE_MASK_VAL(5, 4, 2)), /* UART7 IO mux M2 */
++	RK_MUXROUTE_GRF(2, RK_PC5, 3, 0x0310, WRITE_MASK_VAL(6, 6, 0)), /* UART8 IO mux M0 */
++	RK_MUXROUTE_GRF(2, RK_PD7, 4, 0x0310, WRITE_MASK_VAL(6, 6, 1)), /* UART8 IO mux M1 */
++	RK_MUXROUTE_GRF(2, RK_PB0, 3, 0x0310, WRITE_MASK_VAL(9, 8, 0)), /* UART9 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PC5, 4, 0x0310, WRITE_MASK_VAL(9, 8, 1)), /* UART9 IO mux M1 */
++	RK_MUXROUTE_GRF(4, RK_PA4, 4, 0x0310, WRITE_MASK_VAL(9, 8, 2)), /* UART9 IO mux M2 */
++	RK_MUXROUTE_GRF(1, RK_PA2, 1, 0x0310, WRITE_MASK_VAL(11, 10, 0)), /* I2S1 IO mux M0 */
++	RK_MUXROUTE_GRF(3, RK_PC6, 4, 0x0310, WRITE_MASK_VAL(11, 10, 1)), /* I2S1 IO mux M1 */
++	RK_MUXROUTE_GRF(2, RK_PD0, 5, 0x0310, WRITE_MASK_VAL(11, 10, 2)), /* I2S1 IO mux M2 */
++	RK_MUXROUTE_GRF(2, RK_PC1, 1, 0x0310, WRITE_MASK_VAL(12, 12, 0)), /* I2S2 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PB6, 5, 0x0310, WRITE_MASK_VAL(12, 12, 1)), /* I2S2 IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PA2, 4, 0x0310, WRITE_MASK_VAL(14, 14, 0)), /* I2S3 IO mux M0 */
++	RK_MUXROUTE_GRF(4, RK_PC2, 5, 0x0310, WRITE_MASK_VAL(14, 14, 1)), /* I2S3 IO mux M1 */
++	RK_MUXROUTE_GRF(1, RK_PA4, 3, 0x0314, WRITE_MASK_VAL(1, 0, 0)), /* PDM IO mux M0 */
++	RK_MUXROUTE_GRF(1, RK_PA6, 3, 0x0314, WRITE_MASK_VAL(1, 0, 0)), /* PDM IO mux M0 */
++	RK_MUXROUTE_GRF(3, RK_PD6, 5, 0x0314, WRITE_MASK_VAL(1, 0, 1)), /* PDM IO mux M1 */
++	RK_MUXROUTE_GRF(4, RK_PA0, 4, 0x0314, WRITE_MASK_VAL(1, 0, 1)), /* PDM IO mux M1 */
++	RK_MUXROUTE_GRF(3, RK_PC4, 5, 0x0314, WRITE_MASK_VAL(1, 0, 2)), /* PDM IO mux M2 */
++	RK_MUXROUTE_GRF(0, RK_PA5, 3, 0x0314, WRITE_MASK_VAL(3, 2, 0)), /* PCIE20 IO mux M0 */
++	RK_MUXROUTE_GRF(2, RK_PD0, 4, 0x0314, WRITE_MASK_VAL(3, 2, 1)), /* PCIE20 IO mux M1 */
++	RK_MUXROUTE_GRF(1, RK_PB0, 4, 0x0314, WRITE_MASK_VAL(3, 2, 2)), /* PCIE20 IO mux M2 */
++	RK_MUXROUTE_GRF(0, RK_PA4, 3, 0x0314, WRITE_MASK_VAL(5, 4, 0)), /* PCIE30X1 IO mux M0 */
++	RK_MUXROUTE_GRF(2, RK_PD2, 4, 0x0314, WRITE_MASK_VAL(5, 4, 1)), /* PCIE30X1 IO mux M1 */
++	RK_MUXROUTE_GRF(1, RK_PA5, 4, 0x0314, WRITE_MASK_VAL(5, 4, 2)), /* PCIE30X1 IO mux M2 */
++	RK_MUXROUTE_GRF(0, RK_PA6, 2, 0x0314, WRITE_MASK_VAL(7, 6, 0)), /* PCIE30X2 IO mux M0 */
++	RK_MUXROUTE_GRF(2, RK_PD4, 4, 0x0314, WRITE_MASK_VAL(7, 6, 1)), /* PCIE30X2 IO mux M1 */
++	RK_MUXROUTE_GRF(4, RK_PC2, 4, 0x0314, WRITE_MASK_VAL(7, 6, 2)), /* PCIE30X2 IO mux M2 */
+ };
+ 
+ static bool rockchip_get_mux_route(struct rockchip_pin_bank *bank, int pin,
+@@ -2117,6 +1751,68 @@ static void rk3399_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
+ 		*bit = (pin_num % 8) * 2;
+ }
+ 
++#define RK3568_PULL_PMU_OFFSET		0x20
++#define RK3568_PULL_GRF_OFFSET		0x80
++#define RK3568_PULL_BITS_PER_PIN	2
++#define RK3568_PULL_PINS_PER_REG	8
++#define RK3568_PULL_BANK_STRIDE		0x10
++
++static void rk3568_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
++					 int pin_num, struct regmap **regmap,
++					 int *reg, u8 *bit)
++{
++	struct rockchip_pinctrl *info = bank->drvdata;
++
++	if (bank->bank_num == 0) {
++		*regmap = info->regmap_pmu;
++		*reg = RK3568_PULL_PMU_OFFSET;
++		*reg += bank->bank_num * RK3568_PULL_BANK_STRIDE;
++		*reg += ((pin_num / RK3568_PULL_PINS_PER_REG) * 4);
++
++		*bit = pin_num % RK3568_PULL_PINS_PER_REG;
++		*bit *= RK3568_PULL_BITS_PER_PIN;
++	} else {
++		*regmap = info->regmap_base;
++		*reg = RK3568_PULL_GRF_OFFSET;
++		*reg += (bank->bank_num - 1) * RK3568_PULL_BANK_STRIDE;
++		*reg += ((pin_num / RK3568_PULL_PINS_PER_REG) * 4);
++
++		*bit = (pin_num % RK3568_PULL_PINS_PER_REG);
++		*bit *= RK3568_PULL_BITS_PER_PIN;
++	}
++}
++
++#define RK3568_DRV_PMU_OFFSET		0x70
++#define RK3568_DRV_GRF_OFFSET		0x200
++#define RK3568_DRV_BITS_PER_PIN		8
++#define RK3568_DRV_PINS_PER_REG		2
++#define RK3568_DRV_BANK_STRIDE		0x40
++
++static void rk3568_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
++					int pin_num, struct regmap **regmap,
++					int *reg, u8 *bit)
++{
++	struct rockchip_pinctrl *info = bank->drvdata;
++
++	/* The first 32 pins of the first bank are located in PMU */
++	if (bank->bank_num == 0) {
++		*regmap = info->regmap_pmu;
++		*reg = RK3568_DRV_PMU_OFFSET;
++		*reg += ((pin_num / RK3568_DRV_PINS_PER_REG) * 4);
++
++		*bit = pin_num % RK3568_DRV_PINS_PER_REG;
++		*bit *= RK3568_DRV_BITS_PER_PIN;
++	} else {
++		*regmap = info->regmap_base;
++		*reg = RK3568_DRV_GRF_OFFSET;
++		*reg += (bank->bank_num - 1) * RK3568_DRV_BANK_STRIDE;
++		*reg += ((pin_num / RK3568_DRV_PINS_PER_REG) * 4);
++
++		*bit = (pin_num % RK3568_DRV_PINS_PER_REG);
++		*bit *= RK3568_DRV_BITS_PER_PIN;
++	}
++}
++
+ static int rockchip_perpin_drv_list[DRV_TYPE_MAX][8] = {
+ 	{ 2, 4, 8, 12, -1, -1, -1, -1 },
+ 	{ 3, 6, 9, 12, -1, -1, -1, -1 },
+@@ -2217,6 +1913,11 @@ static int rockchip_set_drive_perpin(struct rockchip_pin_bank *bank,
+ 		bank->bank_num, pin_num, strength);
+ 
+ 	ctrl->drv_calc_reg(bank, pin_num, &regmap, &reg, &bit);
++	if (ctrl->type == RK3568) {
++		rmask_bits = RK3568_DRV_BITS_PER_PIN;
++		ret = (1 << (strength + 1)) - 1;
++		goto config;
++	}
+ 
+ 	ret = -EINVAL;
+ 	for (i = 0; i < ARRAY_SIZE(rockchip_perpin_drv_list[drv_type]); i++) {
+@@ -2286,6 +1987,7 @@ static int rockchip_set_drive_perpin(struct rockchip_pin_bank *bank,
+ 		return -EINVAL;
+ 	}
+ 
++config:
+ 	/* enable the write to the equivalent lower bits */
+ 	data = ((1 << rmask_bits) - 1) << (bit + 16);
+ 	rmask = data | (data >> 16);
+@@ -2343,9 +2045,18 @@ static int rockchip_get_pull(struct rockchip_pin_bank *bank, int pin_num)
+ 	case RK3308:
+ 	case RK3368:
+ 	case RK3399:
++	case RK3568:
+ 		pull_type = bank->pull_type[pin_num / 8];
+ 		data >>= bit;
+ 		data &= (1 << RK3188_PULL_BITS_PER_PIN) - 1;
++		/*
++		 * In the TRM, pull-up being 1 for everything except the GPIO0_D3-D6,
++		 * where that pull up value becomes 3.
++		 */
++		if (ctrl->type == RK3568 && bank->bank_num == 0 && pin_num >= 27 && pin_num <= 30) {
++			if (data == 3)
++				data = 1;
++		}
+ 
+ 		return rockchip_pull_list[pull_type][data];
+ 	default:
+@@ -2388,6 +2099,7 @@ static int rockchip_set_pull(struct rockchip_pin_bank *bank,
+ 	case RK3308:
+ 	case RK3368:
+ 	case RK3399:
++	case RK3568:
+ 		pull_type = bank->pull_type[pin_num / 8];
+ 		ret = -EINVAL;
+ 		for (i = 0; i < ARRAY_SIZE(rockchip_pull_list[pull_type]);
+@@ -2397,6 +2109,14 @@ static int rockchip_set_pull(struct rockchip_pin_bank *bank,
+ 				break;
+ 			}
+ 		}
++		/*
++		 * In the TRM, pull-up being 1 for everything except the GPIO0_D3-D6,
++		 * where that pull up value becomes 3.
++		 */
++		if (ctrl->type == RK3568 && bank->bank_num == 0 && pin_num >= 27 && pin_num <= 30) {
++			if (ret == 1)
++				ret = 3;
++		}
+ 
+ 		if (ret < 0) {
+ 			dev_err(info->dev, "unsupported pull setting %d\n",
+@@ -2441,6 +2161,35 @@ static int rk3328_calc_schmitt_reg_and_bit(struct rockchip_pin_bank *bank,
+ 	return 0;
+ }
+ 
++#define RK3568_SCHMITT_BITS_PER_PIN		2
++#define RK3568_SCHMITT_PINS_PER_REG		8
++#define RK3568_SCHMITT_BANK_STRIDE		0x10
++#define RK3568_SCHMITT_GRF_OFFSET		0xc0
++#define RK3568_SCHMITT_PMUGRF_OFFSET		0x30
++
++static int rk3568_calc_schmitt_reg_and_bit(struct rockchip_pin_bank *bank,
++					   int pin_num,
++					   struct regmap **regmap,
++					   int *reg, u8 *bit)
++{
++	struct rockchip_pinctrl *info = bank->drvdata;
++
++	if (bank->bank_num == 0) {
++		*regmap = info->regmap_pmu;
++		*reg = RK3568_SCHMITT_PMUGRF_OFFSET;
++	} else {
++		*regmap = info->regmap_base;
++		*reg = RK3568_SCHMITT_GRF_OFFSET;
++		*reg += (bank->bank_num - 1) * RK3568_SCHMITT_BANK_STRIDE;
++	}
++
++	*reg += ((pin_num / RK3568_SCHMITT_PINS_PER_REG) * 4);
++	*bit = pin_num % RK3568_SCHMITT_PINS_PER_REG;
++	*bit *= RK3568_SCHMITT_BITS_PER_PIN;
++
++	return 0;
++}
++
+ static int rockchip_get_schmitt(struct rockchip_pin_bank *bank, int pin_num)
+ {
+ 	struct rockchip_pinctrl *info = bank->drvdata;
+@@ -2459,6 +2208,13 @@ static int rockchip_get_schmitt(struct rockchip_pin_bank *bank, int pin_num)
+ 		return ret;
+ 
+ 	data >>= bit;
++	switch (ctrl->type) {
++	case RK3568:
++		return data & ((1 << RK3568_SCHMITT_BITS_PER_PIN) - 1);
++	default:
++		break;
++	}
++
+ 	return data & 0x1;
+ }
+ 
+@@ -2480,8 +2236,17 @@ static int rockchip_set_schmitt(struct rockchip_pin_bank *bank,
+ 		return ret;
+ 
+ 	/* enable the write to the equivalent lower bits */
+-	data = BIT(bit + 16) | (enable << bit);
+-	rmask = BIT(bit + 16) | BIT(bit);
++	switch (ctrl->type) {
++	case RK3568:
++		data = ((1 << RK3568_SCHMITT_BITS_PER_PIN) - 1) << (bit + 16);
++		rmask = data | (data >> 16);
++		data |= ((enable ? 0x2 : 0x1) << bit);
++		break;
++	default:
++		data = BIT(bit + 16) | (enable << bit);
++		rmask = BIT(bit + 16) | BIT(bit);
++		break;
++	}
+ 
+ 	return regmap_update_bits(regmap, reg, rmask, data);
+ }
+@@ -2655,6 +2420,7 @@ static bool rockchip_pinconf_pull_valid(struct rockchip_pin_ctrl *ctrl,
+ 	case RK3308:
+ 	case RK3368:
+ 	case RK3399:
++	case RK3568:
+ 		return (pull != PIN_CONFIG_BIAS_PULL_PIN_DEFAULT);
+ 	}
+ 
+@@ -2893,6 +2659,7 @@ static int rockchip_pinctrl_parse_groups(struct device_node *np,
+ 		np_config = of_find_node_by_phandle(be32_to_cpup(phandle));
+ 		ret = pinconf_generic_parse_dt_config(np_config, NULL,
+ 				&grp->data[j].configs, &grp->data[j].nconfigs);
++		of_node_put(np_config);
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -4230,6 +3997,45 @@ static struct rockchip_pin_ctrl rk3399_pin_ctrl = {
+ 		.drv_calc_reg		= rk3399_calc_drv_reg_and_bit,
+ };
+ 
++static struct rockchip_pin_bank rk3568_pin_banks[] = {
++	PIN_BANK_IOMUX_FLAGS(0, 32, "gpio0", IOMUX_SOURCE_PMU | IOMUX_WIDTH_4BIT,
++					     IOMUX_SOURCE_PMU | IOMUX_WIDTH_4BIT,
++					     IOMUX_SOURCE_PMU | IOMUX_WIDTH_4BIT,
++					     IOMUX_SOURCE_PMU | IOMUX_WIDTH_4BIT),
++	PIN_BANK_IOMUX_FLAGS(1, 32, "gpio1", IOMUX_WIDTH_4BIT,
++					     IOMUX_WIDTH_4BIT,
++					     IOMUX_WIDTH_4BIT,
++					     IOMUX_WIDTH_4BIT),
++	PIN_BANK_IOMUX_FLAGS(2, 32, "gpio2", IOMUX_WIDTH_4BIT,
++					     IOMUX_WIDTH_4BIT,
++					     IOMUX_WIDTH_4BIT,
++					     IOMUX_WIDTH_4BIT),
++	PIN_BANK_IOMUX_FLAGS(3, 32, "gpio3", IOMUX_WIDTH_4BIT,
++					     IOMUX_WIDTH_4BIT,
++					     IOMUX_WIDTH_4BIT,
++					     IOMUX_WIDTH_4BIT),
++	PIN_BANK_IOMUX_FLAGS(4, 32, "gpio4", IOMUX_WIDTH_4BIT,
++					     IOMUX_WIDTH_4BIT,
++					     IOMUX_WIDTH_4BIT,
++					     IOMUX_WIDTH_4BIT),
++};
++
++static struct rockchip_pin_ctrl rk3568_pin_ctrl = {
++	.pin_banks		= rk3568_pin_banks,
++	.nr_banks		= ARRAY_SIZE(rk3568_pin_banks),
++	.label			= "RK3568-GPIO",
++	.type			= RK3568,
++	.grf_mux_offset		= 0x0,
++	.pmu_mux_offset		= 0x0,
++	.grf_drv_offset		= 0x0200,
++	.pmu_drv_offset		= 0x0070,
++	.iomux_routes		= rk3568_mux_route_data,
++	.niomux_routes		= ARRAY_SIZE(rk3568_mux_route_data),
++	.pull_calc_reg		= rk3568_calc_pull_reg_and_bit,
++	.drv_calc_reg		= rk3568_calc_drv_reg_and_bit,
++	.schmitt_calc_reg	= rk3568_calc_schmitt_reg_and_bit,
++};
++
+ static const struct of_device_id rockchip_pinctrl_dt_match[] = {
+ 	{ .compatible = "rockchip,px30-pinctrl",
+ 		.data = &px30_pin_ctrl },
+@@ -4259,6 +4065,8 @@ static const struct of_device_id rockchip_pinctrl_dt_match[] = {
+ 		.data = &rk3368_pin_ctrl },
+ 	{ .compatible = "rockchip,rk3399-pinctrl",
+ 		.data = &rk3399_pin_ctrl },
++	{ .compatible = "rockchip,rk3568-pinctrl",
++		.data = &rk3568_pin_ctrl },
+ 	{},
+ };
+ 
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm8976.c b/drivers/pinctrl/qcom/pinctrl-msm8976.c
+index ec43edf9b660a..e11d845847190 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm8976.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm8976.c
+@@ -733,7 +733,7 @@ static const char * const codec_int2_groups[] = {
+ 	"gpio74",
+ };
+ static const char * const wcss_bt_groups[] = {
+-	"gpio39", "gpio47", "gpio88",
++	"gpio39", "gpio47", "gpio48",
+ };
+ static const char * const sdc3_groups[] = {
+ 	"gpio39", "gpio40", "gpio41",
+@@ -958,9 +958,9 @@ static const struct msm_pingroup msm8976_groups[] = {
+ 	PINGROUP(37, NA, NA, NA, qdss_tracedata_b, NA, NA, NA, NA, NA),
+ 	PINGROUP(38, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b, NA),
+ 	PINGROUP(39, wcss_bt, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
+-	PINGROUP(40, wcss_wlan, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
+-	PINGROUP(41, wcss_wlan, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
+-	PINGROUP(42, wcss_wlan, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
++	PINGROUP(40, wcss_wlan2, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
++	PINGROUP(41, wcss_wlan1, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
++	PINGROUP(42, wcss_wlan0, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
+ 	PINGROUP(43, wcss_wlan, sdc3, NA, NA, qdss_tracedata_a, NA, NA, NA, NA),
+ 	PINGROUP(44, wcss_wlan, sdc3, NA, NA, NA, NA, NA, NA, NA),
+ 	PINGROUP(45, wcss_fm, NA, qdss_tracectl_a, NA, NA, NA, NA, NA, NA),
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index 60406f1f8337e..2d852f15cc501 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -1338,6 +1338,7 @@ static struct irq_domain *stm32_pctrl_get_irq_domain(struct device_node *np)
+ 		return ERR_PTR(-ENXIO);
+ 
+ 	domain = irq_find_host(parent);
++	of_node_put(parent);
+ 	if (!domain)
+ 		/* domain not registered yet */
+ 		return ERR_PTR(-EPROBE_DEFER);
+diff --git a/drivers/powercap/powercap_sys.c b/drivers/powercap/powercap_sys.c
+index 3f0b8e2ef3d46..7a3109a538813 100644
+--- a/drivers/powercap/powercap_sys.c
++++ b/drivers/powercap/powercap_sys.c
+@@ -530,9 +530,6 @@ struct powercap_zone *powercap_register_zone(
+ 	power_zone->name = kstrdup(name, GFP_KERNEL);
+ 	if (!power_zone->name)
+ 		goto err_name_alloc;
+-	dev_set_name(&power_zone->dev, "%s:%x",
+-					dev_name(power_zone->dev.parent),
+-					power_zone->id);
+ 	power_zone->constraints = kcalloc(nr_constraints,
+ 					  sizeof(*power_zone->constraints),
+ 					  GFP_KERNEL);
+@@ -555,9 +552,16 @@ struct powercap_zone *powercap_register_zone(
+ 	power_zone->dev_attr_groups[0] = &power_zone->dev_zone_attr_group;
+ 	power_zone->dev_attr_groups[1] = NULL;
+ 	power_zone->dev.groups = power_zone->dev_attr_groups;
++	dev_set_name(&power_zone->dev, "%s:%x",
++					dev_name(power_zone->dev.parent),
++					power_zone->id);
+ 	result = device_register(&power_zone->dev);
+-	if (result)
+-		goto err_dev_ret;
++	if (result) {
++		put_device(&power_zone->dev);
++		mutex_unlock(&control_type->lock);
++
++		return ERR_PTR(result);
++	}
+ 
+ 	control_type->nr_zones++;
+ 	mutex_unlock(&control_type->lock);
+diff --git a/drivers/pwm/pwm-sifive.c b/drivers/pwm/pwm-sifive.c
+index 12e9e23272ab1..52a55bae033de 100644
+--- a/drivers/pwm/pwm-sifive.c
++++ b/drivers/pwm/pwm-sifive.c
+@@ -41,7 +41,7 @@
+ 
+ struct pwm_sifive_ddata {
+ 	struct pwm_chip	chip;
+-	struct mutex lock; /* lock to protect user_count */
++	struct mutex lock; /* lock to protect user_count and approx_period */
+ 	struct notifier_block notifier;
+ 	struct clk *clk;
+ 	void __iomem *regs;
+@@ -76,6 +76,7 @@ static void pwm_sifive_free(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	mutex_unlock(&ddata->lock);
+ }
+ 
++/* Called holding ddata->lock */
+ static void pwm_sifive_update_clock(struct pwm_sifive_ddata *ddata,
+ 				    unsigned long rate)
+ {
+@@ -163,7 +164,6 @@ static int pwm_sifive_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 		return ret;
+ 	}
+ 
+-	mutex_lock(&ddata->lock);
+ 	cur_state = pwm->state;
+ 	enabled = cur_state.enabled;
+ 
+@@ -182,14 +182,23 @@ static int pwm_sifive_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	/* The hardware cannot generate a 100% duty cycle */
+ 	frac = min(frac, (1U << PWM_SIFIVE_CMPWIDTH) - 1);
+ 
++	mutex_lock(&ddata->lock);
+ 	if (state->period != ddata->approx_period) {
+-		if (ddata->user_count != 1) {
++		/*
++		 * Don't let a 2nd user change the period underneath the 1st user.
++		 * However if ddate->approx_period == 0 this is the first time we set
++		 * any period, so let whoever gets here first set the period so other
++		 * users who agree on the period won't fail.
++		 */
++		if (ddata->user_count != 1 && ddata->approx_period) {
++			mutex_unlock(&ddata->lock);
+ 			ret = -EBUSY;
+ 			goto exit;
+ 		}
+ 		ddata->approx_period = state->period;
+ 		pwm_sifive_update_clock(ddata, clk_get_rate(ddata->clk));
+ 	}
++	mutex_unlock(&ddata->lock);
+ 
+ 	writel(frac, ddata->regs + PWM_SIFIVE_PWMCMP(pwm->hwpwm));
+ 
+@@ -198,7 +207,6 @@ static int pwm_sifive_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 
+ exit:
+ 	clk_disable(ddata->clk);
+-	mutex_unlock(&ddata->lock);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/pwm/pwm-stm32-lp.c b/drivers/pwm/pwm-stm32-lp.c
+index 945a8b2b85648..c8a847fcb775b 100644
+--- a/drivers/pwm/pwm-stm32-lp.c
++++ b/drivers/pwm/pwm-stm32-lp.c
+@@ -127,7 +127,7 @@ static int stm32_pwm_lp_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 
+ 	/* ensure CMP & ARR registers are properly written */
+ 	ret = regmap_read_poll_timeout(priv->regmap, STM32_LPTIM_ISR, val,
+-				       (val & STM32_LPTIM_CMPOK_ARROK),
++				       (val & STM32_LPTIM_CMPOK_ARROK) == STM32_LPTIM_CMPOK_ARROK,
+ 				       100, 1000);
+ 	if (ret) {
+ 		dev_err(priv->chip.dev, "ARR/CMP registers write issue\n");
+diff --git a/drivers/regulator/max77802-regulator.c b/drivers/regulator/max77802-regulator.c
+index 7b8ec8c0bd151..660e179a82a2c 100644
+--- a/drivers/regulator/max77802-regulator.c
++++ b/drivers/regulator/max77802-regulator.c
+@@ -95,9 +95,11 @@ static int max77802_set_suspend_disable(struct regulator_dev *rdev)
+ {
+ 	unsigned int val = MAX77802_OFF_PWRREQ;
+ 	struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
+-	int id = rdev_get_id(rdev);
++	unsigned int id = rdev_get_id(rdev);
+ 	int shift = max77802_get_opmode_shift(id);
+ 
++	if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
++		return -EINVAL;
+ 	max77802->opmode[id] = val;
+ 	return regmap_update_bits(rdev->regmap, rdev->desc->enable_reg,
+ 				  rdev->desc->enable_mask, val << shift);
+@@ -111,7 +113,7 @@ static int max77802_set_suspend_disable(struct regulator_dev *rdev)
+ static int max77802_set_mode(struct regulator_dev *rdev, unsigned int mode)
+ {
+ 	struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
+-	int id = rdev_get_id(rdev);
++	unsigned int id = rdev_get_id(rdev);
+ 	unsigned int val;
+ 	int shift = max77802_get_opmode_shift(id);
+ 
+@@ -128,6 +130,9 @@ static int max77802_set_mode(struct regulator_dev *rdev, unsigned int mode)
+ 		return -EINVAL;
+ 	}
+ 
++	if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
++		return -EINVAL;
++
+ 	max77802->opmode[id] = val;
+ 	return regmap_update_bits(rdev->regmap, rdev->desc->enable_reg,
+ 				  rdev->desc->enable_mask, val << shift);
+@@ -136,8 +141,10 @@ static int max77802_set_mode(struct regulator_dev *rdev, unsigned int mode)
+ static unsigned max77802_get_mode(struct regulator_dev *rdev)
+ {
+ 	struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
+-	int id = rdev_get_id(rdev);
++	unsigned int id = rdev_get_id(rdev);
+ 
++	if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
++		return -EINVAL;
+ 	return max77802_map_mode(max77802->opmode[id]);
+ }
+ 
+@@ -161,10 +168,13 @@ static int max77802_set_suspend_mode(struct regulator_dev *rdev,
+ 				     unsigned int mode)
+ {
+ 	struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
+-	int id = rdev_get_id(rdev);
++	unsigned int id = rdev_get_id(rdev);
+ 	unsigned int val;
+ 	int shift = max77802_get_opmode_shift(id);
+ 
++	if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
++		return -EINVAL;
++
+ 	/*
+ 	 * If the regulator has been disabled for suspend
+ 	 * then is invalid to try setting a suspend mode.
+@@ -210,9 +220,11 @@ static int max77802_set_suspend_mode(struct regulator_dev *rdev,
+ static int max77802_enable(struct regulator_dev *rdev)
+ {
+ 	struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
+-	int id = rdev_get_id(rdev);
++	unsigned int id = rdev_get_id(rdev);
+ 	int shift = max77802_get_opmode_shift(id);
+ 
++	if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
++		return -EINVAL;
+ 	if (max77802->opmode[id] == MAX77802_OFF_PWRREQ)
+ 		max77802->opmode[id] = MAX77802_OPMODE_NORMAL;
+ 
+@@ -541,7 +553,7 @@ static int max77802_pmic_probe(struct platform_device *pdev)
+ 
+ 	for (i = 0; i < MAX77802_REG_MAX; i++) {
+ 		struct regulator_dev *rdev;
+-		int id = regulators[i].id;
++		unsigned int id = regulators[i].id;
+ 		int shift = max77802_get_opmode_shift(id);
+ 		int ret;
+ 
+@@ -559,10 +571,12 @@ static int max77802_pmic_probe(struct platform_device *pdev)
+ 		 * the hardware reports OFF as the regulator operating mode.
+ 		 * Default to operating mode NORMAL in that case.
+ 		 */
+-		if (val == MAX77802_STATUS_OFF)
+-			max77802->opmode[id] = MAX77802_OPMODE_NORMAL;
+-		else
+-			max77802->opmode[id] = val;
++		if (id < ARRAY_SIZE(max77802->opmode)) {
++			if (val == MAX77802_STATUS_OFF)
++				max77802->opmode[id] = MAX77802_OPMODE_NORMAL;
++			else
++				max77802->opmode[id] = val;
++		}
+ 
+ 		rdev = devm_regulator_register(&pdev->dev,
+ 					       &regulators[i], &config);
+diff --git a/drivers/regulator/s5m8767.c b/drivers/regulator/s5m8767.c
+index 35269f9982105..754c6fcc6e642 100644
+--- a/drivers/regulator/s5m8767.c
++++ b/drivers/regulator/s5m8767.c
+@@ -923,10 +923,14 @@ static int s5m8767_pmic_probe(struct platform_device *pdev)
+ 
+ 	for (i = 0; i < pdata->num_regulators; i++) {
+ 		const struct sec_voltage_desc *desc;
+-		int id = pdata->regulators[i].id;
++		unsigned int id = pdata->regulators[i].id;
+ 		int enable_reg, enable_val;
+ 		struct regulator_dev *rdev;
+ 
++		BUILD_BUG_ON(ARRAY_SIZE(regulators) != ARRAY_SIZE(reg_voltage_map));
++		if (WARN_ON_ONCE(id >= ARRAY_SIZE(regulators)))
++			continue;
++
+ 		desc = reg_voltage_map[id];
+ 		if (desc) {
+ 			regulators[id].n_voltages =
+diff --git a/drivers/remoteproc/mtk_scp_ipi.c b/drivers/remoteproc/mtk_scp_ipi.c
+index 6dc955ecab80f..968128b78e59c 100644
+--- a/drivers/remoteproc/mtk_scp_ipi.c
++++ b/drivers/remoteproc/mtk_scp_ipi.c
+@@ -164,21 +164,21 @@ int scp_ipi_send(struct mtk_scp *scp, u32 id, void *buf, unsigned int len,
+ 	    WARN_ON(len > sizeof(send_obj->share_buf)) || WARN_ON(!buf))
+ 		return -EINVAL;
+ 
+-	mutex_lock(&scp->send_lock);
+-
+ 	ret = clk_prepare_enable(scp->clk);
+ 	if (ret) {
+ 		dev_err(scp->dev, "failed to enable clock\n");
+-		goto unlock_mutex;
++		return ret;
+ 	}
+ 
++	mutex_lock(&scp->send_lock);
++
+ 	 /* Wait until SCP receives the last command */
+ 	timeout = jiffies + msecs_to_jiffies(2000);
+ 	do {
+ 		if (time_after(jiffies, timeout)) {
+ 			dev_err(scp->dev, "%s: IPI timeout!\n", __func__);
+ 			ret = -ETIMEDOUT;
+-			goto clock_disable;
++			goto unlock_mutex;
+ 		}
+ 	} while (readl(scp->reg_base + scp->data->host_to_scp_reg));
+ 
+@@ -205,10 +205,9 @@ int scp_ipi_send(struct mtk_scp *scp, u32 id, void *buf, unsigned int len,
+ 			ret = 0;
+ 	}
+ 
+-clock_disable:
+-	clk_disable_unprepare(scp->clk);
+ unlock_mutex:
+ 	mutex_unlock(&scp->send_lock);
++	clk_disable_unprepare(scp->clk);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index 1b3aa84e36e7a..3d975ecd93360 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -17,6 +17,7 @@
+ #include <linux/module.h>
+ #include <linux/of_address.h>
+ #include <linux/of_device.h>
++#include <linux/of_reserved_mem.h>
+ #include <linux/platform_device.h>
+ #include <linux/pm_domain.h>
+ #include <linux/pm_runtime.h>
+@@ -190,6 +191,9 @@ struct q6v5 {
+ 	size_t mba_size;
+ 	size_t dp_size;
+ 
++	phys_addr_t mdata_phys;
++	size_t mdata_size;
++
+ 	phys_addr_t mpss_phys;
+ 	phys_addr_t mpss_reloc;
+ 	size_t mpss_size;
+@@ -816,15 +820,35 @@ static int q6v5_mpss_init_image(struct q6v5 *qproc, const struct firmware *fw)
+ 	if (IS_ERR(metadata))
+ 		return PTR_ERR(metadata);
+ 
+-	ptr = dma_alloc_attrs(qproc->dev, size, &phys, GFP_KERNEL, dma_attrs);
+-	if (!ptr) {
+-		kfree(metadata);
+-		dev_err(qproc->dev, "failed to allocate mdt buffer\n");
+-		return -ENOMEM;
++	if (qproc->mdata_phys) {
++		if (size > qproc->mdata_size) {
++			ret = -EINVAL;
++			dev_err(qproc->dev, "metadata size outside memory range\n");
++			goto free_metadata;
++		}
++
++		phys = qproc->mdata_phys;
++		ptr = memremap(qproc->mdata_phys, size, MEMREMAP_WC);
++		if (!ptr) {
++			ret = -EBUSY;
++			dev_err(qproc->dev, "unable to map memory region: %pa+%zx\n",
++				&qproc->mdata_phys, size);
++			goto free_metadata;
++		}
++	} else {
++		ptr = dma_alloc_attrs(qproc->dev, size, &phys, GFP_KERNEL, dma_attrs);
++		if (!ptr) {
++			ret = -ENOMEM;
++			dev_err(qproc->dev, "failed to allocate mdt buffer\n");
++			goto free_metadata;
++		}
+ 	}
+ 
+ 	memcpy(ptr, metadata, size);
+ 
++	if (qproc->mdata_phys)
++		memunmap(ptr);
++
+ 	/* Hypervisor mapping to access metadata by modem */
+ 	mdata_perm = BIT(QCOM_SCM_VMID_HLOS);
+ 	ret = q6v5_xfer_mem_ownership(qproc, &mdata_perm, false, true,
+@@ -853,7 +877,9 @@ static int q6v5_mpss_init_image(struct q6v5 *qproc, const struct firmware *fw)
+ 			 "mdt buffer not reclaimed system may become unstable\n");
+ 
+ free_dma_attrs:
+-	dma_free_attrs(qproc->dev, size, ptr, phys, dma_attrs);
++	if (!qproc->mdata_phys)
++		dma_free_attrs(qproc->dev, size, ptr, phys, dma_attrs);
++free_metadata:
+ 	kfree(metadata);
+ 
+ 	return ret < 0 ? ret : 0;
+@@ -1585,6 +1611,7 @@ static int q6v5_init_reset(struct q6v5 *qproc)
+ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+ {
+ 	struct device_node *child;
++	struct reserved_mem *rmem;
+ 	struct device_node *node;
+ 	struct resource r;
+ 	int ret;
+@@ -1637,6 +1664,26 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+ 	qproc->mpss_phys = qproc->mpss_reloc = r.start;
+ 	qproc->mpss_size = resource_size(&r);
+ 
++	if (!child) {
++		node = of_parse_phandle(qproc->dev->of_node, "memory-region", 2);
++	} else {
++		child = of_get_child_by_name(qproc->dev->of_node, "metadata");
++		node = of_parse_phandle(child, "memory-region", 0);
++		of_node_put(child);
++	}
++
++	if (!node)
++		return 0;
++
++	rmem = of_reserved_mem_lookup(node);
++	if (!rmem) {
++		dev_err(qproc->dev, "unable to resolve metadata region\n");
++		return -EINVAL;
++	}
++
++	qproc->mdata_phys = rmem->base;
++	qproc->mdata_size = rmem->size;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index 7cbed0310c09f..98b6d4c09c82c 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -929,6 +929,7 @@ static void qcom_glink_handle_intent(struct qcom_glink *glink,
+ 	spin_unlock_irqrestore(&glink->idr_lock, flags);
+ 	if (!channel) {
+ 		dev_err(glink->dev, "intents for non-existing channel\n");
++		qcom_glink_rx_advance(glink, ALIGN(msglen, 8));
+ 		return;
+ 	}
+ 
+diff --git a/drivers/rtc/rtc-pm8xxx.c b/drivers/rtc/rtc-pm8xxx.c
+index b45ee2cb2c044..3417eef0aca3f 100644
+--- a/drivers/rtc/rtc-pm8xxx.c
++++ b/drivers/rtc/rtc-pm8xxx.c
+@@ -219,7 +219,6 @@ static int pm8xxx_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
+ {
+ 	int rc, i;
+ 	u8 value[NUM_8_BIT_RTC_REGS];
+-	unsigned int ctrl_reg;
+ 	unsigned long secs, irq_flags;
+ 	struct pm8xxx_rtc *rtc_dd = dev_get_drvdata(dev);
+ 	const struct pm8xxx_rtc_regs *regs = rtc_dd->regs;
+@@ -231,6 +230,11 @@ static int pm8xxx_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
+ 		secs >>= 8;
+ 	}
+ 
++	rc = regmap_update_bits(rtc_dd->regmap, regs->alarm_ctrl,
++				regs->alarm_en, 0);
++	if (rc)
++		return rc;
++
+ 	spin_lock_irqsave(&rtc_dd->ctrl_reg_lock, irq_flags);
+ 
+ 	rc = regmap_bulk_write(rtc_dd->regmap, regs->alarm_rw, value,
+@@ -240,19 +244,11 @@ static int pm8xxx_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
+ 		goto rtc_rw_fail;
+ 	}
+ 
+-	rc = regmap_read(rtc_dd->regmap, regs->alarm_ctrl, &ctrl_reg);
+-	if (rc)
+-		goto rtc_rw_fail;
+-
+-	if (alarm->enabled)
+-		ctrl_reg |= regs->alarm_en;
+-	else
+-		ctrl_reg &= ~regs->alarm_en;
+-
+-	rc = regmap_write(rtc_dd->regmap, regs->alarm_ctrl, ctrl_reg);
+-	if (rc) {
+-		dev_err(dev, "Write to RTC alarm control register failed\n");
+-		goto rtc_rw_fail;
++	if (alarm->enabled) {
++		rc = regmap_update_bits(rtc_dd->regmap, regs->alarm_ctrl,
++					regs->alarm_en, regs->alarm_en);
++		if (rc)
++			goto rtc_rw_fail;
+ 	}
+ 
+ 	dev_dbg(dev, "Alarm Set for h:m:s=%ptRt, y-m-d=%ptRdr\n",
+diff --git a/drivers/rtc/rtc-sun6i.c b/drivers/rtc/rtc-sun6i.c
+index 52b36b7c61298..a72856fb5252c 100644
+--- a/drivers/rtc/rtc-sun6i.c
++++ b/drivers/rtc/rtc-sun6i.c
+@@ -128,7 +128,6 @@ struct sun6i_rtc_clk_data {
+ 	unsigned int fixed_prescaler : 16;
+ 	unsigned int has_prescaler : 1;
+ 	unsigned int has_out_clk : 1;
+-	unsigned int export_iosc : 1;
+ 	unsigned int has_losc_en : 1;
+ 	unsigned int has_auto_swt : 1;
+ };
+@@ -260,10 +259,8 @@ static void __init sun6i_rtc_clk_init(struct device_node *node,
+ 	/* Yes, I know, this is ugly. */
+ 	sun6i_rtc = rtc;
+ 
+-	/* Only read IOSC name from device tree if it is exported */
+-	if (rtc->data->export_iosc)
+-		of_property_read_string_index(node, "clock-output-names", 2,
+-					      &iosc_name);
++	of_property_read_string_index(node, "clock-output-names", 2,
++				      &iosc_name);
+ 
+ 	rtc->int_osc = clk_hw_register_fixed_rate_with_accuracy(NULL,
+ 								iosc_name,
+@@ -304,13 +301,10 @@ static void __init sun6i_rtc_clk_init(struct device_node *node,
+ 		goto err_register;
+ 	}
+ 
+-	clk_data->num = 2;
++	clk_data->num = 3;
+ 	clk_data->hws[0] = &rtc->hw;
+ 	clk_data->hws[1] = __clk_get_hw(rtc->ext_losc);
+-	if (rtc->data->export_iosc) {
+-		clk_data->hws[2] = rtc->int_osc;
+-		clk_data->num = 3;
+-	}
++	clk_data->hws[2] = rtc->int_osc;
+ 	of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
+ 	return;
+ 
+@@ -350,7 +344,6 @@ static const struct sun6i_rtc_clk_data sun8i_h3_rtc_data = {
+ 	.fixed_prescaler = 32,
+ 	.has_prescaler = 1,
+ 	.has_out_clk = 1,
+-	.export_iosc = 1,
+ };
+ 
+ static void __init sun8i_h3_rtc_clk_init(struct device_node *node)
+@@ -368,7 +361,6 @@ static const struct sun6i_rtc_clk_data sun50i_h6_rtc_data = {
+ 	.fixed_prescaler = 32,
+ 	.has_prescaler = 1,
+ 	.has_out_clk = 1,
+-	.export_iosc = 1,
+ 	.has_losc_en = 1,
+ 	.has_auto_swt = 1,
+ };
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index f4edfe383e9d9..9f26f55e4988a 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -2128,8 +2128,8 @@ static void __dasd_device_check_path_events(struct dasd_device *device)
+ 	if (device->stopped &
+ 	    ~(DASD_STOPPED_DC_WAIT | DASD_UNRESUMED_PM))
+ 		return;
+-	rc = device->discipline->verify_path(device,
+-					     dasd_path_get_tbvpm(device));
++	rc = device->discipline->pe_handler(device,
++					    dasd_path_get_tbvpm(device));
+ 	if (rc)
+ 		dasd_device_set_timer(device, 50);
+ 	else
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index 53d22975a32fd..c6930c159d2a6 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -103,7 +103,7 @@ struct ext_pool_exhaust_work_data {
+ };
+ 
+ /* definitions for the path verification worker */
+-struct path_verification_work_data {
++struct pe_handler_work_data {
+ 	struct work_struct worker;
+ 	struct dasd_device *device;
+ 	struct dasd_ccw_req cqr;
+@@ -112,8 +112,8 @@ struct path_verification_work_data {
+ 	int isglobal;
+ 	__u8 tbvpm;
+ };
+-static struct path_verification_work_data *path_verification_worker;
+-static DEFINE_MUTEX(dasd_path_verification_mutex);
++static struct pe_handler_work_data *pe_handler_worker;
++static DEFINE_MUTEX(dasd_pe_handler_mutex);
+ 
+ struct check_attention_work_data {
+ 	struct work_struct worker;
+@@ -1219,7 +1219,7 @@ static int verify_fcx_max_data(struct dasd_device *device, __u8 lpm)
+ }
+ 
+ static int rebuild_device_uid(struct dasd_device *device,
+-			      struct path_verification_work_data *data)
++			      struct pe_handler_work_data *data)
+ {
+ 	struct dasd_eckd_private *private = device->private;
+ 	__u8 lpm, opm = dasd_path_get_opm(device);
+@@ -1257,10 +1257,9 @@ static int rebuild_device_uid(struct dasd_device *device,
+ 	return rc;
+ }
+ 
+-static void do_path_verification_work(struct work_struct *work)
++static void dasd_eckd_path_available_action(struct dasd_device *device,
++					    struct pe_handler_work_data *data)
+ {
+-	struct path_verification_work_data *data;
+-	struct dasd_device *device;
+ 	struct dasd_eckd_private path_private;
+ 	struct dasd_uid *uid;
+ 	__u8 path_rcd_buf[DASD_ECKD_RCD_DATA_SIZE];
+@@ -1269,19 +1268,6 @@ static void do_path_verification_work(struct work_struct *work)
+ 	char print_uid[60];
+ 	int rc;
+ 
+-	data = container_of(work, struct path_verification_work_data, worker);
+-	device = data->device;
+-
+-	/* delay path verification until device was resumed */
+-	if (test_bit(DASD_FLAG_SUSPENDED, &device->flags)) {
+-		schedule_work(work);
+-		return;
+-	}
+-	/* check if path verification already running and delay if so */
+-	if (test_and_set_bit(DASD_FLAG_PATH_VERIFY, &device->flags)) {
+-		schedule_work(work);
+-		return;
+-	}
+ 	opm = 0;
+ 	npm = 0;
+ 	ppm = 0;
+@@ -1418,30 +1404,54 @@ static void do_path_verification_work(struct work_struct *work)
+ 		dasd_path_add_nohpfpm(device, hpfpm);
+ 		spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
+ 	}
++}
++
++static void do_pe_handler_work(struct work_struct *work)
++{
++	struct pe_handler_work_data *data;
++	struct dasd_device *device;
++
++	data = container_of(work, struct pe_handler_work_data, worker);
++	device = data->device;
++
++	/* delay path verification until device was resumed */
++	if (test_bit(DASD_FLAG_SUSPENDED, &device->flags)) {
++		schedule_work(work);
++		return;
++	}
++	/* check if path verification already running and delay if so */
++	if (test_and_set_bit(DASD_FLAG_PATH_VERIFY, &device->flags)) {
++		schedule_work(work);
++		return;
++	}
++
++	dasd_eckd_path_available_action(device, data);
++
+ 	clear_bit(DASD_FLAG_PATH_VERIFY, &device->flags);
+ 	dasd_put_device(device);
+ 	if (data->isglobal)
+-		mutex_unlock(&dasd_path_verification_mutex);
++		mutex_unlock(&dasd_pe_handler_mutex);
+ 	else
+ 		kfree(data);
+ }
+ 
+-static int dasd_eckd_verify_path(struct dasd_device *device, __u8 lpm)
++static int dasd_eckd_pe_handler(struct dasd_device *device, __u8 lpm)
+ {
+-	struct path_verification_work_data *data;
++	struct pe_handler_work_data *data;
+ 
+ 	data = kmalloc(sizeof(*data), GFP_ATOMIC | GFP_DMA);
+ 	if (!data) {
+-		if (mutex_trylock(&dasd_path_verification_mutex)) {
+-			data = path_verification_worker;
++		if (mutex_trylock(&dasd_pe_handler_mutex)) {
++			data = pe_handler_worker;
+ 			data->isglobal = 1;
+-		} else
++		} else {
+ 			return -ENOMEM;
++		}
+ 	} else {
+ 		memset(data, 0, sizeof(*data));
+ 		data->isglobal = 0;
+ 	}
+-	INIT_WORK(&data->worker, do_path_verification_work);
++	INIT_WORK(&data->worker, do_pe_handler_work);
+ 	dasd_get_device(device);
+ 	data->device = device;
+ 	data->tbvpm = lpm;
+@@ -6694,7 +6704,7 @@ static struct dasd_discipline dasd_eckd_discipline = {
+ 	.check_device = dasd_eckd_check_characteristics,
+ 	.uncheck_device = dasd_eckd_uncheck_device,
+ 	.do_analysis = dasd_eckd_do_analysis,
+-	.verify_path = dasd_eckd_verify_path,
++	.pe_handler = dasd_eckd_pe_handler,
+ 	.basic_to_ready = dasd_eckd_basic_to_ready,
+ 	.online_to_ready = dasd_eckd_online_to_ready,
+ 	.basic_to_known = dasd_eckd_basic_to_known,
+@@ -6753,18 +6763,20 @@ dasd_eckd_init(void)
+ 		return -ENOMEM;
+ 	dasd_vol_info_req = kmalloc(sizeof(*dasd_vol_info_req),
+ 				    GFP_KERNEL | GFP_DMA);
+-	if (!dasd_vol_info_req)
++	if (!dasd_vol_info_req) {
++		kfree(dasd_reserve_req);
+ 		return -ENOMEM;
+-	path_verification_worker = kmalloc(sizeof(*path_verification_worker),
+-				   GFP_KERNEL | GFP_DMA);
+-	if (!path_verification_worker) {
++	}
++	pe_handler_worker = kmalloc(sizeof(*pe_handler_worker),
++				    GFP_KERNEL | GFP_DMA);
++	if (!pe_handler_worker) {
+ 		kfree(dasd_reserve_req);
+ 		kfree(dasd_vol_info_req);
+ 		return -ENOMEM;
+ 	}
+ 	rawpadpage = (void *)__get_free_page(GFP_KERNEL);
+ 	if (!rawpadpage) {
+-		kfree(path_verification_worker);
++		kfree(pe_handler_worker);
+ 		kfree(dasd_reserve_req);
+ 		kfree(dasd_vol_info_req);
+ 		return -ENOMEM;
+@@ -6773,7 +6785,7 @@ dasd_eckd_init(void)
+ 	if (!ret)
+ 		wait_for_device_probe();
+ 	else {
+-		kfree(path_verification_worker);
++		kfree(pe_handler_worker);
+ 		kfree(dasd_reserve_req);
+ 		kfree(dasd_vol_info_req);
+ 		free_page((unsigned long)rawpadpage);
+@@ -6785,7 +6797,7 @@ static void __exit
+ dasd_eckd_cleanup(void)
+ {
+ 	ccw_driver_unregister(&dasd_eckd_driver);
+-	kfree(path_verification_worker);
++	kfree(pe_handler_worker);
+ 	kfree(dasd_reserve_req);
+ 	free_page((unsigned long)rawpadpage);
+ }
+diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h
+index 9d9685c25253d..e8a06d85d6f72 100644
+--- a/drivers/s390/block/dasd_int.h
++++ b/drivers/s390/block/dasd_int.h
+@@ -299,6 +299,7 @@ struct dasd_discipline {
+ 	 * configuration.
+ 	 */
+ 	int (*verify_path)(struct dasd_device *, __u8);
++	int (*pe_handler)(struct dasd_device *, __u8);
+ 
+ 	/*
+ 	 * Last things to do when a device is set online, and first things
+diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c
+index f923ed019d4a1..593b167ceefee 100644
+--- a/drivers/scsi/aic94xx/aic94xx_task.c
++++ b/drivers/scsi/aic94xx/aic94xx_task.c
+@@ -50,6 +50,9 @@ static int asd_map_scatterlist(struct sas_task *task,
+ 		dma_addr_t dma = dma_map_single(&asd_ha->pcidev->dev, p,
+ 						task->total_xfer_len,
+ 						task->data_dir);
++		if (dma_mapping_error(&asd_ha->pcidev->dev, dma))
++			return -ENOMEM;
++
+ 		sg_arr[0].bus_addr = cpu_to_le64((u64)dma);
+ 		sg_arr[0].size = cpu_to_le32(task->total_xfer_len);
+ 		sg_arr[0].flags |= ASD_SG_EL_LIST_EOL;
+diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
+index a5e6fbd86ad45..8c376736a8f51 100644
+--- a/drivers/scsi/ipr.c
++++ b/drivers/scsi/ipr.c
+@@ -1516,23 +1516,22 @@ static void ipr_process_ccn(struct ipr_cmnd *ipr_cmd)
+ }
+ 
+ /**
+- * strip_and_pad_whitespace - Strip and pad trailing whitespace.
+- * @i:		index into buffer
+- * @buf:		string to modify
++ * strip_whitespace - Strip and pad trailing whitespace.
++ * @i:		size of buffer
++ * @buf:	string to modify
+  *
+- * This function will strip all trailing whitespace, pad the end
+- * of the string with a single space, and NULL terminate the string.
++ * This function will strip all trailing whitespace and
++ * NUL terminate the string.
+  *
+- * Return value:
+- * 	new length of string
+  **/
+-static int strip_and_pad_whitespace(int i, char *buf)
++static void strip_whitespace(int i, char *buf)
+ {
++	if (i < 1)
++		return;
++	i--;
+ 	while (i && buf[i] == ' ')
+ 		i--;
+-	buf[i+1] = ' ';
+-	buf[i+2] = '\0';
+-	return i + 2;
++	buf[i+1] = '\0';
+ }
+ 
+ /**
+@@ -1547,19 +1546,21 @@ static int strip_and_pad_whitespace(int i, char *buf)
+ static void ipr_log_vpd_compact(char *prefix, struct ipr_hostrcb *hostrcb,
+ 				struct ipr_vpd *vpd)
+ {
+-	char buffer[IPR_VENDOR_ID_LEN + IPR_PROD_ID_LEN + IPR_SERIAL_NUM_LEN + 3];
+-	int i = 0;
++	char vendor_id[IPR_VENDOR_ID_LEN + 1];
++	char product_id[IPR_PROD_ID_LEN + 1];
++	char sn[IPR_SERIAL_NUM_LEN + 1];
+ 
+-	memcpy(buffer, vpd->vpids.vendor_id, IPR_VENDOR_ID_LEN);
+-	i = strip_and_pad_whitespace(IPR_VENDOR_ID_LEN - 1, buffer);
++	memcpy(vendor_id, vpd->vpids.vendor_id, IPR_VENDOR_ID_LEN);
++	strip_whitespace(IPR_VENDOR_ID_LEN, vendor_id);
+ 
+-	memcpy(&buffer[i], vpd->vpids.product_id, IPR_PROD_ID_LEN);
+-	i = strip_and_pad_whitespace(i + IPR_PROD_ID_LEN - 1, buffer);
++	memcpy(product_id, vpd->vpids.product_id, IPR_PROD_ID_LEN);
++	strip_whitespace(IPR_PROD_ID_LEN, product_id);
+ 
+-	memcpy(&buffer[i], vpd->sn, IPR_SERIAL_NUM_LEN);
+-	buffer[IPR_SERIAL_NUM_LEN + i] = '\0';
++	memcpy(sn, vpd->sn, IPR_SERIAL_NUM_LEN);
++	strip_whitespace(IPR_SERIAL_NUM_LEN, sn);
+ 
+-	ipr_hcam_err(hostrcb, "%s VPID/SN: %s\n", prefix, buffer);
++	ipr_hcam_err(hostrcb, "%s VPID/SN: %s %s %s\n", prefix,
++		     vendor_id, product_id, sn);
+ }
+ 
+ /**
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index c1b76cda60dbc..26b15a24300ef 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -2822,19 +2822,25 @@ static int
+ _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
+ {
+ 	struct sysinfo s;
++	u64 coherent_dma_mask, dma_mask;
+ 
+-	if (ioc->is_mcpu_endpoint ||
+-	    sizeof(dma_addr_t) == 4 || ioc->use_32bit_dma ||
+-	    dma_get_required_mask(&pdev->dev) <= DMA_BIT_MASK(32))
++	if (ioc->is_mcpu_endpoint || sizeof(dma_addr_t) == 4) {
+ 		ioc->dma_mask = 32;
++		coherent_dma_mask = dma_mask = DMA_BIT_MASK(32);
+ 	/* Set 63 bit DMA mask for all SAS3 and SAS35 controllers */
+-	else if (ioc->hba_mpi_version_belonged > MPI2_VERSION)
++	} else if (ioc->hba_mpi_version_belonged > MPI2_VERSION) {
+ 		ioc->dma_mask = 63;
+-	else
++		coherent_dma_mask = dma_mask = DMA_BIT_MASK(63);
++	} else {
+ 		ioc->dma_mask = 64;
++		coherent_dma_mask = dma_mask = DMA_BIT_MASK(64);
++	}
+ 
+-	if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(ioc->dma_mask)) ||
+-	    dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(ioc->dma_mask)))
++	if (ioc->use_32bit_dma)
++		coherent_dma_mask = DMA_BIT_MASK(32);
++
++	if (dma_set_mask(&pdev->dev, dma_mask) ||
++	    dma_set_coherent_mask(&pdev->dev, coherent_dma_mask))
+ 		return -ENODEV;
+ 
+ 	if (ioc->dma_mask > 32) {
+@@ -4905,6 +4911,9 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
+ 		}
+ 		dma_pool_destroy(ioc->pcie_sgl_dma_pool);
+ 	}
++	kfree(ioc->pcie_sg_lookup);
++	ioc->pcie_sg_lookup = NULL;
++
+ 	if (ioc->config_page) {
+ 		dexitprintk(ioc,
+ 			    ioc_info(ioc, "config_page(0x%p): free\n",
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index d63ccdf6e9887..695dd89be3307 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -165,18 +165,6 @@ out:
+ 	qla2xxx_rel_qpair_sp(sp->qpair, sp);
+ }
+ 
+-static void qla_nvme_ls_unmap(struct srb *sp, struct nvmefc_ls_req *fd)
+-{
+-	if (sp->flags & SRB_DMA_VALID) {
+-		struct srb_iocb *nvme = &sp->u.iocb_cmd;
+-		struct qla_hw_data *ha = sp->fcport->vha->hw;
+-
+-		dma_unmap_single(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
+-				 fd->rqstlen, DMA_TO_DEVICE);
+-		sp->flags &= ~SRB_DMA_VALID;
+-	}
+-}
+-
+ static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
+ {
+ 	struct srb *sp = container_of(kref, struct srb, cmd_kref);
+@@ -194,7 +182,6 @@ static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
+ 
+ 	fd = priv->fd;
+ 
+-	qla_nvme_ls_unmap(sp, fd);
+ 	fd->done(fd, priv->comp_status);
+ out:
+ 	qla2x00_rel_sp(sp);
+@@ -336,13 +323,10 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
+ 	nvme->u.nvme.rsp_len = fd->rsplen;
+ 	nvme->u.nvme.rsp_dma = fd->rspdma;
+ 	nvme->u.nvme.timeout_sec = fd->timeout;
+-	nvme->u.nvme.cmd_dma = dma_map_single(&ha->pdev->dev, fd->rqstaddr,
+-	    fd->rqstlen, DMA_TO_DEVICE);
++	nvme->u.nvme.cmd_dma = fd->rqstdma;
+ 	dma_sync_single_for_device(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
+ 	    fd->rqstlen, DMA_TO_DEVICE);
+ 
+-	sp->flags |= SRB_DMA_VALID;
+-
+ 	rval = qla2x00_start_sp(sp);
+ 	if (rval != QLA_SUCCESS) {
+ 		ql_log(ql_log_warn, vha, 0x700e,
+@@ -350,7 +334,6 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
+ 		wake_up(&sp->nvme_ls_waitq);
+ 		sp->priv = NULL;
+ 		priv->sp = NULL;
+-		qla_nvme_ls_unmap(sp, fd);
+ 		qla2x00_rel_sp(sp);
+ 		return rval;
+ 	}
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 419156121cb59..e1132970f1892 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -6899,9 +6899,12 @@ qla2x00_do_dpc(void *data)
+ 			}
+ 		}
+ loop_resync_check:
+-		if (test_and_clear_bit(LOOP_RESYNC_NEEDED,
++		if (!qla2x00_reset_active(base_vha) &&
++		    test_and_clear_bit(LOOP_RESYNC_NEEDED,
+ 		    &base_vha->dpc_flags)) {
+-
++			/*
++			 * Allow abort_isp to complete before moving on to scanning.
++			 */
+ 			ql_dbg(ql_dbg_dpc, base_vha, 0x400f,
+ 			    "Loop resync scheduled.\n");
+ 
+@@ -7145,7 +7148,7 @@ qla2x00_timer(struct timer_list *t)
+ 
+ 		/* if the loop has been down for 4 minutes, reinit adapter */
+ 		if (atomic_dec_and_test(&vha->loop_down_timer) != 0) {
+-			if (!(vha->device_flags & DFLG_NO_CABLE)) {
++			if (!(vha->device_flags & DFLG_NO_CABLE) && !vha->vp_idx) {
+ 				ql_log(ql_log_warn, vha, 0x6009,
+ 				    "Loop down - aborting ISP.\n");
+ 
+diff --git a/drivers/scsi/ses.c b/drivers/scsi/ses.c
+index 0a1734f34587d..1707d6d144d21 100644
+--- a/drivers/scsi/ses.c
++++ b/drivers/scsi/ses.c
+@@ -433,8 +433,8 @@ int ses_match_host(struct enclosure_device *edev, void *data)
+ }
+ #endif  /*  0  */
+ 
+-static void ses_process_descriptor(struct enclosure_component *ecomp,
+-				   unsigned char *desc)
++static int ses_process_descriptor(struct enclosure_component *ecomp,
++				   unsigned char *desc, int max_desc_len)
+ {
+ 	int eip = desc[0] & 0x10;
+ 	int invalid = desc[0] & 0x80;
+@@ -445,22 +445,32 @@ static void ses_process_descriptor(struct enclosure_component *ecomp,
+ 	unsigned char *d;
+ 
+ 	if (invalid)
+-		return;
++		return 0;
+ 
+ 	switch (proto) {
+ 	case SCSI_PROTOCOL_FCP:
+ 		if (eip) {
++			if (max_desc_len <= 7)
++				return 1;
+ 			d = desc + 4;
+ 			slot = d[3];
+ 		}
+ 		break;
+ 	case SCSI_PROTOCOL_SAS:
++
+ 		if (eip) {
++			if (max_desc_len <= 27)
++				return 1;
+ 			d = desc + 4;
+ 			slot = d[3];
+ 			d = desc + 8;
+-		} else
++		} else {
++			if (max_desc_len <= 23)
++				return 1;
+ 			d = desc + 4;
++		}
++
++
+ 		/* only take the phy0 addr */
+ 		addr = (u64)d[12] << 56 |
+ 			(u64)d[13] << 48 |
+@@ -477,6 +487,8 @@ static void ses_process_descriptor(struct enclosure_component *ecomp,
+ 	}
+ 	ecomp->slot = slot;
+ 	scomp->addr = addr;
++
++	return 0;
+ }
+ 
+ struct efd {
+@@ -549,7 +561,7 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
+ 		/* skip past overall descriptor */
+ 		desc_ptr += len + 4;
+ 	}
+-	if (ses_dev->page10)
++	if (ses_dev->page10 && ses_dev->page10_len > 9)
+ 		addl_desc_ptr = ses_dev->page10 + 8;
+ 	type_ptr = ses_dev->page1_types;
+ 	components = 0;
+@@ -557,17 +569,22 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
+ 		for (j = 0; j < type_ptr[1]; j++) {
+ 			char *name = NULL;
+ 			struct enclosure_component *ecomp;
++			int max_desc_len;
+ 
+ 			if (desc_ptr) {
+-				if (desc_ptr >= buf + page7_len) {
++				if (desc_ptr + 3 >= buf + page7_len) {
+ 					desc_ptr = NULL;
+ 				} else {
+ 					len = (desc_ptr[2] << 8) + desc_ptr[3];
+ 					desc_ptr += 4;
+-					/* Add trailing zero - pushes into
+-					 * reserved space */
+-					desc_ptr[len] = '\0';
+-					name = desc_ptr;
++					if (desc_ptr + len > buf + page7_len)
++						desc_ptr = NULL;
++					else {
++						/* Add trailing zero - pushes into
++						 * reserved space */
++						desc_ptr[len] = '\0';
++						name = desc_ptr;
++					}
+ 				}
+ 			}
+ 			if (type_ptr[0] == ENCLOSURE_COMPONENT_DEVICE ||
+@@ -583,10 +600,14 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
+ 					ecomp = &edev->component[components++];
+ 
+ 				if (!IS_ERR(ecomp)) {
+-					if (addl_desc_ptr)
+-						ses_process_descriptor(
+-							ecomp,
+-							addl_desc_ptr);
++					if (addl_desc_ptr) {
++						max_desc_len = ses_dev->page10_len -
++						    (addl_desc_ptr - ses_dev->page10);
++						if (ses_process_descriptor(ecomp,
++						    addl_desc_ptr,
++						    max_desc_len))
++							addl_desc_ptr = NULL;
++					}
+ 					if (create)
+ 						enclosure_component_register(
+ 							ecomp);
+@@ -603,9 +624,11 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
+ 			     /* these elements are optional */
+ 			     type_ptr[0] == ENCLOSURE_COMPONENT_SCSI_TARGET_PORT ||
+ 			     type_ptr[0] == ENCLOSURE_COMPONENT_SCSI_INITIATOR_PORT ||
+-			     type_ptr[0] == ENCLOSURE_COMPONENT_CONTROLLER_ELECTRONICS))
++			     type_ptr[0] == ENCLOSURE_COMPONENT_CONTROLLER_ELECTRONICS)) {
+ 				addl_desc_ptr += addl_desc_ptr[1] + 2;
+-
++				if (addl_desc_ptr + 1 >= ses_dev->page10 + ses_dev->page10_len)
++					addl_desc_ptr = NULL;
++			}
+ 		}
+ 	}
+ 	kfree(buf);
+@@ -704,6 +727,12 @@ static int ses_intf_add(struct device *cdev,
+ 		    type_ptr[0] == ENCLOSURE_COMPONENT_ARRAY_DEVICE)
+ 			components += type_ptr[1];
+ 	}
++
++	if (components == 0) {
++		sdev_printk(KERN_WARNING, sdev, "enclosure has no enumerated components\n");
++		goto err_free;
++	}
++
+ 	ses_dev->page1 = buf;
+ 	ses_dev->page1_len = len;
+ 	buf = NULL;
+@@ -827,7 +856,8 @@ static void ses_intf_remove_enclosure(struct scsi_device *sdev)
+ 	kfree(ses_dev->page2);
+ 	kfree(ses_dev);
+ 
+-	kfree(edev->component[0].scratch);
++	if (edev->components)
++		kfree(edev->component[0].scratch);
+ 
+ 	put_device(&edev->edev);
+ 	enclosure_unregister(edev);
+diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
+index a3247692ddc07..18e7d158fcca4 100644
+--- a/drivers/soundwire/cadence_master.c
++++ b/drivers/soundwire/cadence_master.c
+@@ -511,6 +511,29 @@ cdns_fill_msg_resp(struct sdw_cdns *cdns,
+ 	return SDW_CMD_OK;
+ }
+ 
++static void cdns_read_response(struct sdw_cdns *cdns)
++{
++	u32 num_resp, cmd_base;
++	int i;
++
++	/* RX_FIFO_AVAIL can be 2 entries more than the FIFO size */
++	BUILD_BUG_ON(ARRAY_SIZE(cdns->response_buf) < CDNS_MCP_CMD_LEN + 2);
++
++	num_resp = cdns_readl(cdns, CDNS_MCP_FIFOSTAT);
++	num_resp &= CDNS_MCP_RX_FIFO_AVAIL;
++	if (num_resp > ARRAY_SIZE(cdns->response_buf)) {
++		dev_warn(cdns->dev, "RX AVAIL %d too long\n", num_resp);
++		num_resp = ARRAY_SIZE(cdns->response_buf);
++	}
++
++	cmd_base = CDNS_MCP_CMD_BASE;
++
++	for (i = 0; i < num_resp; i++) {
++		cdns->response_buf[i] = cdns_readl(cdns, cmd_base);
++		cmd_base += CDNS_MCP_CMD_WORD_LEN;
++	}
++}
++
+ static enum sdw_command_response
+ _cdns_xfer_msg(struct sdw_cdns *cdns, struct sdw_msg *msg, int cmd,
+ 	       int offset, int count, bool defer)
+@@ -552,6 +575,10 @@ _cdns_xfer_msg(struct sdw_cdns *cdns, struct sdw_msg *msg, int cmd,
+ 		dev_err(cdns->dev, "IO transfer timed out, cmd %d device %d addr %x len %d\n",
+ 			cmd, msg->dev_num, msg->addr, msg->len);
+ 		msg->len = 0;
++
++		/* Drain anything in the RX_FIFO */
++		cdns_read_response(cdns);
++
+ 		return SDW_CMD_TIMEOUT;
+ 	}
+ 
+@@ -720,22 +747,6 @@ EXPORT_SYMBOL(cdns_reset_page_addr);
+  * IRQ handling
+  */
+ 
+-static void cdns_read_response(struct sdw_cdns *cdns)
+-{
+-	u32 num_resp, cmd_base;
+-	int i;
+-
+-	num_resp = cdns_readl(cdns, CDNS_MCP_FIFOSTAT);
+-	num_resp &= CDNS_MCP_RX_FIFO_AVAIL;
+-
+-	cmd_base = CDNS_MCP_CMD_BASE;
+-
+-	for (i = 0; i < num_resp; i++) {
+-		cdns->response_buf[i] = cdns_readl(cdns, cmd_base);
+-		cmd_base += CDNS_MCP_CMD_WORD_LEN;
+-	}
+-}
+-
+ static int cdns_update_slave_status(struct sdw_cdns *cdns,
+ 				    u32 slave0, u32 slave1)
+ {
+diff --git a/drivers/soundwire/cadence_master.h b/drivers/soundwire/cadence_master.h
+index 4d1aab5b5ec2d..e7f0108d417ca 100644
+--- a/drivers/soundwire/cadence_master.h
++++ b/drivers/soundwire/cadence_master.h
+@@ -8,6 +8,12 @@
+ #define SDW_CADENCE_GSYNC_KHZ		4 /* 4 kHz */
+ #define SDW_CADENCE_GSYNC_HZ		(SDW_CADENCE_GSYNC_KHZ * 1000)
+ 
++/*
++ * The Cadence IP supports up to 32 entries in the FIFO, though implementations
++ * can configure the IP to have a smaller FIFO.
++ */
++#define CDNS_MCP_IP_MAX_CMD_LEN		32
++
+ /**
+  * struct sdw_cdns_pdi: PDI (Physical Data Interface) instance
+  *
+@@ -119,7 +125,12 @@ struct sdw_cdns {
+ 	struct sdw_bus bus;
+ 	unsigned int instance;
+ 
+-	u32 response_buf[0x80];
++	/*
++	 * The datasheet says the RX FIFO AVAIL can be 2 entries more
++	 * than the FIFO capacity, so allow for this.
++	 */
++	u32 response_buf[CDNS_MCP_IP_MAX_CMD_LEN + 2];
++
+ 	struct completion tx_complete;
+ 	struct sdw_defer *defer;
+ 
+diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
+index aadaea052f51d..4d98ce7571df0 100644
+--- a/drivers/spi/Kconfig
++++ b/drivers/spi/Kconfig
+@@ -256,7 +256,6 @@ config SPI_DW_BT1
+ 	tristate "Baikal-T1 SPI driver for DW SPI core"
+ 	depends on MIPS_BAIKAL_T1 || COMPILE_TEST
+ 	select MULTIPLEXER
+-	select MUX_MMIO
+ 	help
+ 	  Baikal-T1 SoC is equipped with three DW APB SSI-based MMIO SPI
+ 	  controllers. Two of them are pretty much normal: with IRQ, DMA,
+diff --git a/drivers/spi/spi-bcm63xx-hsspi.c b/drivers/spi/spi-bcm63xx-hsspi.c
+index 1f08d7553f079..02f56fc001b47 100644
+--- a/drivers/spi/spi-bcm63xx-hsspi.c
++++ b/drivers/spi/spi-bcm63xx-hsspi.c
+@@ -21,6 +21,7 @@
+ #include <linux/mutex.h>
+ #include <linux/of.h>
+ #include <linux/reset.h>
++#include <linux/pm_runtime.h>
+ 
+ #define HSSPI_GLOBAL_CTRL_REG			0x0
+ #define GLOBAL_CTRL_CS_POLARITY_SHIFT		0
+@@ -162,6 +163,7 @@ static int bcm63xx_hsspi_do_txrx(struct spi_device *spi, struct spi_transfer *t)
+ 	int step_size = HSSPI_BUFFER_LEN;
+ 	const u8 *tx = t->tx_buf;
+ 	u8 *rx = t->rx_buf;
++	u32 val = 0;
+ 
+ 	bcm63xx_hsspi_set_clk(bs, spi, t->speed_hz);
+ 	bcm63xx_hsspi_set_cs(bs, spi->chip_select, true);
+@@ -177,11 +179,16 @@ static int bcm63xx_hsspi_do_txrx(struct spi_device *spi, struct spi_transfer *t)
+ 		step_size -= HSSPI_OPCODE_LEN;
+ 
+ 	if ((opcode == HSSPI_OP_READ && t->rx_nbits == SPI_NBITS_DUAL) ||
+-	    (opcode == HSSPI_OP_WRITE && t->tx_nbits == SPI_NBITS_DUAL))
++	    (opcode == HSSPI_OP_WRITE && t->tx_nbits == SPI_NBITS_DUAL)) {
+ 		opcode |= HSSPI_OP_MULTIBIT;
+ 
+-	__raw_writel(1 << MODE_CTRL_MULTIDATA_WR_SIZE_SHIFT |
+-		     1 << MODE_CTRL_MULTIDATA_RD_SIZE_SHIFT | 0xff,
++		if (t->rx_nbits == SPI_NBITS_DUAL)
++			val |= 1 << MODE_CTRL_MULTIDATA_RD_SIZE_SHIFT;
++		if (t->tx_nbits == SPI_NBITS_DUAL)
++			val |= 1 << MODE_CTRL_MULTIDATA_WR_SIZE_SHIFT;
++	}
++
++	__raw_writel(val | 0xff,
+ 		     bs->regs + HSSPI_PROFILE_MODE_CTRL_REG(chip_select));
+ 
+ 	while (pending > 0) {
+@@ -439,13 +446,17 @@ static int bcm63xx_hsspi_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto out_put_master;
+ 
++	pm_runtime_enable(&pdev->dev);
++
+ 	/* register and we are done */
+ 	ret = devm_spi_register_master(dev, master);
+ 	if (ret)
+-		goto out_put_master;
++		goto out_pm_disable;
+ 
+ 	return 0;
+ 
++out_pm_disable:
++	pm_runtime_disable(&pdev->dev);
+ out_put_master:
+ 	spi_master_put(master);
+ out_disable_pll_clk:
+diff --git a/drivers/spi/spi-synquacer.c b/drivers/spi/spi-synquacer.c
+index 47cbe73137c23..dc188f9202c97 100644
+--- a/drivers/spi/spi-synquacer.c
++++ b/drivers/spi/spi-synquacer.c
+@@ -472,10 +472,9 @@ static int synquacer_spi_transfer_one(struct spi_master *master,
+ 		read_fifo(sspi);
+ 	}
+ 
+-	if (status < 0) {
+-		dev_err(sspi->dev, "failed to transfer. status: 0x%x\n",
+-			status);
+-		return status;
++	if (status == 0) {
++		dev_err(sspi->dev, "failed to transfer. Timeout.\n");
++		return -ETIMEDOUT;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/staging/emxx_udc/emxx_udc.c b/drivers/staging/emxx_udc/emxx_udc.c
+index 3897f8e8f5e0d..6870a33d4ccf3 100644
+--- a/drivers/staging/emxx_udc/emxx_udc.c
++++ b/drivers/staging/emxx_udc/emxx_udc.c
+@@ -2591,10 +2591,15 @@ static int nbu2ss_ep_queue(struct usb_ep *_ep,
+ 		req->unaligned = false;
+ 
+ 	if (req->unaligned) {
+-		if (!ep->virt_buf)
++		if (!ep->virt_buf) {
+ 			ep->virt_buf = dma_alloc_coherent(udc->dev, PAGE_SIZE,
+ 							  &ep->phys_buf,
+ 							  GFP_ATOMIC | GFP_DMA);
++			if (!ep->virt_buf) {
++				spin_unlock_irqrestore(&udc->lock, flags);
++				return -ENOMEM;
++			}
++		}
+ 		if (ep->epnum > 0)  {
+ 			if (ep->direct == USB_DIR_IN)
+ 				memcpy(ep->virt_buf, req->req.buf,
+diff --git a/drivers/thermal/hisi_thermal.c b/drivers/thermal/hisi_thermal.c
+index ee05950afd2f9..7b1e81912ccf7 100644
+--- a/drivers/thermal/hisi_thermal.c
++++ b/drivers/thermal/hisi_thermal.c
+@@ -435,10 +435,6 @@ static int hi3660_thermal_probe(struct hisi_thermal_data *data)
+ 	data->sensor[0].irq_name = "tsensor_a73";
+ 	data->sensor[0].data = data;
+ 
+-	data->sensor[1].id = HI3660_LITTLE_SENSOR;
+-	data->sensor[1].irq_name = "tsensor_a53";
+-	data->sensor[1].data = data;
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/thermal/intel/Kconfig b/drivers/thermal/intel/Kconfig
+index 8025b21f43fa5..b5427579fae59 100644
+--- a/drivers/thermal/intel/Kconfig
++++ b/drivers/thermal/intel/Kconfig
+@@ -60,7 +60,8 @@ endmenu
+ 
+ config INTEL_BXT_PMIC_THERMAL
+ 	tristate "Intel Broxton PMIC thermal driver"
+-	depends on X86 && INTEL_SOC_PMIC_BXTWC && REGMAP
++	depends on X86 && INTEL_SOC_PMIC_BXTWC
++	select REGMAP
+ 	help
+ 	  Select this driver for Intel Broxton PMIC with ADC channels monitoring
+ 	  system temperature measurements and alerts.
+diff --git a/drivers/thermal/intel/intel_powerclamp.c b/drivers/thermal/intel/intel_powerclamp.c
+index fb04470d7d4bb..6e7c230d308f7 100644
+--- a/drivers/thermal/intel/intel_powerclamp.c
++++ b/drivers/thermal/intel/intel_powerclamp.c
+@@ -57,6 +57,7 @@
+ 
+ static unsigned int target_mwait;
+ static struct dentry *debug_dir;
++static bool poll_pkg_cstate_enable;
+ 
+ /* user selected target */
+ static unsigned int set_target_ratio;
+@@ -262,6 +263,9 @@ static unsigned int get_compensation(int ratio)
+ {
+ 	unsigned int comp = 0;
+ 
++	if (!poll_pkg_cstate_enable)
++		return 0;
++
+ 	/* we only use compensation if all adjacent ones are good */
+ 	if (ratio == 1 &&
+ 		cal_data[ratio].confidence >= CONFIDENCE_OK &&
+@@ -534,7 +538,8 @@ static int start_power_clamp(void)
+ 	control_cpu = cpumask_first(cpu_online_mask);
+ 
+ 	clamping = true;
+-	schedule_delayed_work(&poll_pkg_cstate_work, 0);
++	if (poll_pkg_cstate_enable)
++		schedule_delayed_work(&poll_pkg_cstate_work, 0);
+ 
+ 	/* start one kthread worker per online cpu */
+ 	for_each_online_cpu(cpu) {
+@@ -603,11 +608,15 @@ static int powerclamp_get_max_state(struct thermal_cooling_device *cdev,
+ static int powerclamp_get_cur_state(struct thermal_cooling_device *cdev,
+ 				 unsigned long *state)
+ {
+-	if (true == clamping)
+-		*state = pkg_cstate_ratio_cur;
+-	else
++	if (clamping) {
++		if (poll_pkg_cstate_enable)
++			*state = pkg_cstate_ratio_cur;
++		else
++			*state = set_target_ratio;
++	} else {
+ 		/* to save power, do not poll idle ratio while not clamping */
+ 		*state = -1; /* indicates invalid state */
++	}
+ 
+ 	return 0;
+ }
+@@ -732,6 +741,9 @@ static int __init powerclamp_init(void)
+ 		goto exit_unregister;
+ 	}
+ 
++	if (topology_max_packages() == 1 && topology_max_die_per_package() == 1)
++		poll_pkg_cstate_enable = true;
++
+ 	cooling_dev = thermal_cooling_device_register("intel_powerclamp", NULL,
+ 						&powerclamp_cooling_ops);
+ 	if (IS_ERR(cooling_dev)) {
+diff --git a/drivers/thermal/intel/intel_quark_dts_thermal.c b/drivers/thermal/intel/intel_quark_dts_thermal.c
+index 3eafc6b0e6c30..b43fbd5eaa6b4 100644
+--- a/drivers/thermal/intel/intel_quark_dts_thermal.c
++++ b/drivers/thermal/intel/intel_quark_dts_thermal.c
+@@ -415,22 +415,14 @@ MODULE_DEVICE_TABLE(x86cpu, qrk_thermal_ids);
+ 
+ static int __init intel_quark_thermal_init(void)
+ {
+-	int err = 0;
+-
+ 	if (!x86_match_cpu(qrk_thermal_ids) || !iosf_mbi_available())
+ 		return -ENODEV;
+ 
+ 	soc_dts = alloc_soc_dts();
+-	if (IS_ERR(soc_dts)) {
+-		err = PTR_ERR(soc_dts);
+-		goto err_free;
+-	}
++	if (IS_ERR(soc_dts))
++		return PTR_ERR(soc_dts);
+ 
+ 	return 0;
+-
+-err_free:
+-	free_soc_dts(soc_dts);
+-	return err;
+ }
+ 
+ static void __exit intel_quark_thermal_exit(void)
+diff --git a/drivers/thermal/intel/intel_soc_dts_iosf.c b/drivers/thermal/intel/intel_soc_dts_iosf.c
+index 4f1a2f7c016cc..8d6707e48d023 100644
+--- a/drivers/thermal/intel/intel_soc_dts_iosf.c
++++ b/drivers/thermal/intel/intel_soc_dts_iosf.c
+@@ -404,7 +404,7 @@ struct intel_soc_dts_sensors *intel_soc_dts_iosf_init(
+ {
+ 	struct intel_soc_dts_sensors *sensors;
+ 	bool notification;
+-	u32 tj_max;
++	int tj_max;
+ 	int ret;
+ 	int i;
+ 
+diff --git a/drivers/thermal/qcom/tsens-v1.c b/drivers/thermal/qcom/tsens-v1.c
+index 3c19a3800c6d6..faa4576fa028f 100644
+--- a/drivers/thermal/qcom/tsens-v1.c
++++ b/drivers/thermal/qcom/tsens-v1.c
+@@ -78,11 +78,6 @@
+ 
+ #define MSM8976_CAL_SEL_MASK	0x3
+ 
+-#define MSM8976_CAL_DEGC_PT1	30
+-#define MSM8976_CAL_DEGC_PT2	120
+-#define MSM8976_SLOPE_FACTOR	1000
+-#define MSM8976_SLOPE_DEFAULT	3200
+-
+ /* eeprom layout data for qcs404/405 (v1) */
+ #define BASE0_MASK	0x000007f8
+ #define BASE1_MASK	0x0007f800
+@@ -142,30 +137,6 @@
+ #define CAL_SEL_MASK	7
+ #define CAL_SEL_SHIFT	0
+ 
+-static void compute_intercept_slope_8976(struct tsens_priv *priv,
+-			      u32 *p1, u32 *p2, u32 mode)
+-{
+-	int i;
+-
+-	priv->sensor[0].slope = 3313;
+-	priv->sensor[1].slope = 3275;
+-	priv->sensor[2].slope = 3320;
+-	priv->sensor[3].slope = 3246;
+-	priv->sensor[4].slope = 3279;
+-	priv->sensor[5].slope = 3257;
+-	priv->sensor[6].slope = 3234;
+-	priv->sensor[7].slope = 3269;
+-	priv->sensor[8].slope = 3255;
+-	priv->sensor[9].slope = 3239;
+-	priv->sensor[10].slope = 3286;
+-
+-	for (i = 0; i < priv->num_sensors; i++) {
+-		priv->sensor[i].offset = (p1[i] * MSM8976_SLOPE_FACTOR) -
+-				(MSM8976_CAL_DEGC_PT1 *
+-				priv->sensor[i].slope);
+-	}
+-}
+-
+ static int calibrate_v1(struct tsens_priv *priv)
+ {
+ 	u32 base0 = 0, base1 = 0;
+@@ -291,7 +262,7 @@ static int calibrate_8976(struct tsens_priv *priv)
+ 		break;
+ 	}
+ 
+-	compute_intercept_slope_8976(priv, p1, p2, mode);
++	compute_intercept_slope(priv, p1, p2, mode);
+ 	kfree(qfprom_cdata);
+ 
+ 	return 0;
+@@ -362,6 +333,22 @@ static const struct reg_field tsens_v1_regfields[MAX_REGFIELDS] = {
+ 	[TRDY] = REG_FIELD(TM_TRDY_OFF, 0, 0),
+ };
+ 
++static int __init init_8956(struct tsens_priv *priv) {
++	priv->sensor[0].slope = 3313;
++	priv->sensor[1].slope = 3275;
++	priv->sensor[2].slope = 3320;
++	priv->sensor[3].slope = 3246;
++	priv->sensor[4].slope = 3279;
++	priv->sensor[5].slope = 3257;
++	priv->sensor[6].slope = 3234;
++	priv->sensor[7].slope = 3269;
++	priv->sensor[8].slope = 3255;
++	priv->sensor[9].slope = 3239;
++	priv->sensor[10].slope = 3286;
++
++	return init_common(priv);
++}
++
+ static const struct tsens_ops ops_generic_v1 = {
+ 	.init		= init_common,
+ 	.calibrate	= calibrate_v1,
+@@ -374,17 +361,29 @@ struct tsens_plat_data data_tsens_v1 = {
+ 	.fields	= tsens_v1_regfields,
+ };
+ 
++static const struct tsens_ops ops_8956 = {
++	.init		= init_8956,
++	.calibrate	= calibrate_8976,
++	.get_temp	= get_temp_tsens_valid,
++};
++
++struct tsens_plat_data data_8956 = {
++	.num_sensors	= 11,
++	.ops		= &ops_8956,
++	.feat		= &tsens_v1_feat,
++	.fields		= tsens_v1_regfields,
++};
++
+ static const struct tsens_ops ops_8976 = {
+ 	.init		= init_common,
+ 	.calibrate	= calibrate_8976,
+ 	.get_temp	= get_temp_tsens_valid,
+ };
+ 
+-/* Valid for both MSM8956 and MSM8976. Sensor ID 3 is unused. */
+ struct tsens_plat_data data_8976 = {
+ 	.num_sensors	= 11,
+ 	.ops		= &ops_8976,
+-	.hw_ids		= (unsigned int[]){0, 1, 2, 4, 5, 6, 7, 8, 9, 10},
++	.hw_ids		= (unsigned int[]){0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
+ 	.feat		= &tsens_v1_feat,
+ 	.fields		= tsens_v1_regfields,
+ };
+diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
+index cb4f4b5224460..c73792ca727a1 100644
+--- a/drivers/thermal/qcom/tsens.c
++++ b/drivers/thermal/qcom/tsens.c
+@@ -902,6 +902,12 @@ static const struct of_device_id tsens_table[] = {
+ 	}, {
+ 		.compatible = "qcom,msm8939-tsens",
+ 		.data = &data_8939,
++	}, {
++		.compatible = "qcom,msm8956-tsens",
++		.data = &data_8956,
++	}, {
++		.compatible = "qcom,msm8960-tsens",
++		.data = &data_8960,
+ 	}, {
+ 		.compatible = "qcom,msm8974-tsens",
+ 		.data = &data_8974,
+diff --git a/drivers/thermal/qcom/tsens.h b/drivers/thermal/qcom/tsens.h
+index f40b625f897e8..bbb1e8332821c 100644
+--- a/drivers/thermal/qcom/tsens.h
++++ b/drivers/thermal/qcom/tsens.h
+@@ -588,7 +588,7 @@ extern struct tsens_plat_data data_8960;
+ extern struct tsens_plat_data data_8916, data_8939, data_8974;
+ 
+ /* TSENS v1 targets */
+-extern struct tsens_plat_data data_tsens_v1, data_8976;
++extern struct tsens_plat_data data_tsens_v1, data_8976, data_8956;
+ 
+ /* TSENS v2 targets */
+ extern struct tsens_plat_data data_8996, data_tsens_v2;
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 223695947b654..9cb0e8673f826 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1448,12 +1448,32 @@ static void lpuart_break_ctl(struct uart_port *port, int break_state)
+ 
+ static void lpuart32_break_ctl(struct uart_port *port, int break_state)
+ {
+-	unsigned long temp;
++	unsigned long temp, modem;
++	struct tty_struct *tty;
++	unsigned int cflag = 0;
++
++	tty = tty_port_tty_get(&port->state->port);
++	if (tty) {
++		cflag = tty->termios.c_cflag;
++		tty_kref_put(tty);
++	}
+ 
+ 	temp = lpuart32_read(port, UARTCTRL) & ~UARTCTRL_SBK;
++	modem = lpuart32_read(port, UARTMODIR);
+ 
+-	if (break_state != 0)
++	if (break_state != 0) {
+ 		temp |= UARTCTRL_SBK;
++		/*
++		 * LPUART CTS has higher priority than SBK, need to disable CTS before
++		 * asserting SBK to avoid any interference if flow control is enabled.
++		 */
++		if (cflag & CRTSCTS && modem & UARTMODIR_TXCTSE)
++			lpuart32_write(port, modem & ~UARTMODIR_TXCTSE, UARTMODIR);
++	} else {
++		/* Re-enable the CTS when break off. */
++		if (cflag & CRTSCTS && !(modem & UARTMODIR_TXCTSE))
++			lpuart32_write(port, modem | UARTMODIR_TXCTSE, UARTMODIR);
++	}
+ 
+ 	lpuart32_write(port, temp, UARTCTRL);
+ }
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index 04b4ed5d06341..7ece8d1a23cb3 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -1243,25 +1243,6 @@ static int sc16is7xx_probe(struct device *dev,
+ 	}
+ 	sched_set_fifo(s->kworker_task);
+ 
+-#ifdef CONFIG_GPIOLIB
+-	if (devtype->nr_gpio) {
+-		/* Setup GPIO cotroller */
+-		s->gpio.owner		 = THIS_MODULE;
+-		s->gpio.parent		 = dev;
+-		s->gpio.label		 = dev_name(dev);
+-		s->gpio.direction_input	 = sc16is7xx_gpio_direction_input;
+-		s->gpio.get		 = sc16is7xx_gpio_get;
+-		s->gpio.direction_output = sc16is7xx_gpio_direction_output;
+-		s->gpio.set		 = sc16is7xx_gpio_set;
+-		s->gpio.base		 = -1;
+-		s->gpio.ngpio		 = devtype->nr_gpio;
+-		s->gpio.can_sleep	 = 1;
+-		ret = gpiochip_add_data(&s->gpio, s);
+-		if (ret)
+-			goto out_thread;
+-	}
+-#endif
+-
+ 	/* reset device, purging any pending irq / data */
+ 	regmap_write(s->regmap, SC16IS7XX_IOCONTROL_REG << SC16IS7XX_REG_SHIFT,
+ 			SC16IS7XX_IOCONTROL_SRESET_BIT);
+@@ -1327,6 +1308,25 @@ static int sc16is7xx_probe(struct device *dev,
+ 				s->p[u].irda_mode = true;
+ 	}
+ 
++#ifdef CONFIG_GPIOLIB
++	if (devtype->nr_gpio) {
++		/* Setup GPIO cotroller */
++		s->gpio.owner		 = THIS_MODULE;
++		s->gpio.parent		 = dev;
++		s->gpio.label		 = dev_name(dev);
++		s->gpio.direction_input	 = sc16is7xx_gpio_direction_input;
++		s->gpio.get		 = sc16is7xx_gpio_get;
++		s->gpio.direction_output = sc16is7xx_gpio_direction_output;
++		s->gpio.set		 = sc16is7xx_gpio_set;
++		s->gpio.base		 = -1;
++		s->gpio.ngpio		 = devtype->nr_gpio;
++		s->gpio.can_sleep	 = 1;
++		ret = gpiochip_add_data(&s->gpio, s);
++		if (ret)
++			goto out_thread;
++	}
++#endif
++
+ 	/*
+ 	 * Setup interrupt. We first try to acquire the IRQ line as level IRQ.
+ 	 * If that succeeds, we can allow sharing the interrupt as well.
+@@ -1346,18 +1346,19 @@ static int sc16is7xx_probe(struct device *dev,
+ 	if (!ret)
+ 		return 0;
+ 
+-out_ports:
+-	for (i--; i >= 0; i--) {
+-		uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port);
+-		clear_bit(s->p[i].port.line, &sc16is7xx_lines);
+-	}
+-
+ #ifdef CONFIG_GPIOLIB
+ 	if (devtype->nr_gpio)
+ 		gpiochip_remove(&s->gpio);
+ 
+ out_thread:
+ #endif
++
++out_ports:
++	for (i--; i >= 0; i--) {
++		uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port);
++		clear_bit(s->p[i].port.line, &sc16is7xx_lines);
++	}
++
+ 	kthread_stop(s->kworker_task);
+ 
+ out_clk:
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 669aef77a0bd0..c37d2657308cd 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -1237,14 +1237,16 @@ static struct tty_struct *tty_driver_lookup_tty(struct tty_driver *driver,
+ {
+ 	struct tty_struct *tty;
+ 
+-	if (driver->ops->lookup)
++	if (driver->ops->lookup) {
+ 		if (!file)
+ 			tty = ERR_PTR(-EIO);
+ 		else
+ 			tty = driver->ops->lookup(driver, file, idx);
+-	else
++	} else {
++		if (idx >= driver->num)
++			return ERR_PTR(-EINVAL);
+ 		tty = driver->ttys[idx];
+-
++	}
+ 	if (!IS_ERR(tty))
+ 		tty_kref_get(tty);
+ 	return tty;
+diff --git a/drivers/tty/vt/vc_screen.c b/drivers/tty/vt/vc_screen.c
+index 71e091f879f0e..1dc07f9214d57 100644
+--- a/drivers/tty/vt/vc_screen.c
++++ b/drivers/tty/vt/vc_screen.c
+@@ -415,10 +415,8 @@ vcs_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+ 		 */
+ 		size = vcs_size(vc, attr, uni_mode);
+ 		if (size < 0) {
+-			if (read)
+-				break;
+ 			ret = size;
+-			goto unlock_out;
++			break;
+ 		}
+ 		if (pos >= size)
+ 			break;
+diff --git a/drivers/usb/gadget/function/uvc_configfs.c b/drivers/usb/gadget/function/uvc_configfs.c
+index 00fb58e50a155..2db01170d096d 100644
+--- a/drivers/usb/gadget/function/uvc_configfs.c
++++ b/drivers/usb/gadget/function/uvc_configfs.c
+@@ -505,11 +505,68 @@ UVC_ATTR_RO(uvcg_default_output_, cname, aname)
+ UVCG_DEFAULT_OUTPUT_ATTR(b_terminal_id, bTerminalID, 8);
+ UVCG_DEFAULT_OUTPUT_ATTR(w_terminal_type, wTerminalType, 16);
+ UVCG_DEFAULT_OUTPUT_ATTR(b_assoc_terminal, bAssocTerminal, 8);
+-UVCG_DEFAULT_OUTPUT_ATTR(b_source_id, bSourceID, 8);
+ UVCG_DEFAULT_OUTPUT_ATTR(i_terminal, iTerminal, 8);
+ 
+ #undef UVCG_DEFAULT_OUTPUT_ATTR
+ 
++static ssize_t uvcg_default_output_b_source_id_show(struct config_item *item,
++						    char *page)
++{
++	struct config_group *group = to_config_group(item);
++	struct f_uvc_opts *opts;
++	struct config_item *opts_item;
++	struct mutex *su_mutex = &group->cg_subsys->su_mutex;
++	struct uvc_output_terminal_descriptor *cd;
++	int result;
++
++	mutex_lock(su_mutex); /* for navigating configfs hierarchy */
++
++	opts_item = group->cg_item.ci_parent->ci_parent->
++			ci_parent->ci_parent;
++	opts = to_f_uvc_opts(opts_item);
++	cd = &opts->uvc_output_terminal;
++
++	mutex_lock(&opts->lock);
++	result = sprintf(page, "%u\n", le8_to_cpu(cd->bSourceID));
++	mutex_unlock(&opts->lock);
++
++	mutex_unlock(su_mutex);
++
++	return result;
++}
++
++static ssize_t uvcg_default_output_b_source_id_store(struct config_item *item,
++						     const char *page, size_t len)
++{
++	struct config_group *group = to_config_group(item);
++	struct f_uvc_opts *opts;
++	struct config_item *opts_item;
++	struct mutex *su_mutex = &group->cg_subsys->su_mutex;
++	struct uvc_output_terminal_descriptor *cd;
++	int result;
++	u8 num;
++
++	result = kstrtou8(page, 0, &num);
++	if (result)
++		return result;
++
++	mutex_lock(su_mutex); /* for navigating configfs hierarchy */
++
++	opts_item = group->cg_item.ci_parent->ci_parent->
++			ci_parent->ci_parent;
++	opts = to_f_uvc_opts(opts_item);
++	cd = &opts->uvc_output_terminal;
++
++	mutex_lock(&opts->lock);
++	cd->bSourceID = num;
++	mutex_unlock(&opts->lock);
++
++	mutex_unlock(su_mutex);
++
++	return len;
++}
++UVC_ATTR(uvcg_default_output_, b_source_id, bSourceID);
++
+ static struct configfs_attribute *uvcg_default_output_attrs[] = {
+ 	&uvcg_default_output_attr_b_terminal_id,
+ 	&uvcg_default_output_attr_w_terminal_type,
+diff --git a/drivers/usb/host/xhci-mvebu.c b/drivers/usb/host/xhci-mvebu.c
+index 8ca1a235d1645..eabccf25796b2 100644
+--- a/drivers/usb/host/xhci-mvebu.c
++++ b/drivers/usb/host/xhci-mvebu.c
+@@ -33,7 +33,7 @@ static void xhci_mvebu_mbus_config(void __iomem *base,
+ 
+ 	/* Program each DRAM CS in a seperate window */
+ 	for (win = 0; win < dram->num_cs; win++) {
+-		const struct mbus_dram_window *cs = dram->cs + win;
++		const struct mbus_dram_window *cs = &dram->cs[win];
+ 
+ 		writel(((cs->size - 1) & 0xffff0000) | (cs->mbus_attr << 8) |
+ 		       (dram->mbus_dram_target_id << 4) | 1,
+diff --git a/drivers/usb/storage/ene_ub6250.c b/drivers/usb/storage/ene_ub6250.c
+index c9ce1c25c80cc..737398f1b896a 100644
+--- a/drivers/usb/storage/ene_ub6250.c
++++ b/drivers/usb/storage/ene_ub6250.c
+@@ -938,7 +938,7 @@ static int ms_lib_process_bootblock(struct us_data *us, u16 PhyBlock, u8 *PageDa
+ 	struct ms_lib_type_extdat ExtraData;
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra;
+ 
+-	PageBuffer = kmalloc(MS_BYTES_PER_PAGE, GFP_KERNEL);
++	PageBuffer = kzalloc(MS_BYTES_PER_PAGE * 2, GFP_KERNEL);
+ 	if (PageBuffer == NULL)
+ 		return (u32)-1;
+ 
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index 32c9925de4736..1f94ea46c01a5 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -448,7 +448,6 @@ void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev)
+ 		unmap_direct_mr(mvdev, dmr);
+ 		kfree(dmr);
+ 	}
+-	memset(mr, 0, sizeof(*mr));
+ 	mr->initialized = false;
+ out:
+ 	mutex_unlock(&mr->mkey_mtx);
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index ce50ca9a320c7..ec1428dbdf9d9 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -96,6 +96,7 @@ struct vfio_dma {
+ 	struct task_struct	*task;
+ 	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
+ 	unsigned long		*bitmap;
++	struct mm_struct	*mm;
+ };
+ 
+ struct vfio_batch {
+@@ -391,8 +392,8 @@ static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
+ 	if (!npage)
+ 		return 0;
+ 
+-	mm = async ? get_task_mm(dma->task) : dma->task->mm;
+-	if (!mm)
++	mm = dma->mm;
++	if (async && !mmget_not_zero(mm))
+ 		return -ESRCH; /* process exited */
+ 
+ 	ret = mmap_write_lock_killable(mm);
+@@ -666,8 +667,8 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
+ 	struct mm_struct *mm;
+ 	int ret;
+ 
+-	mm = get_task_mm(dma->task);
+-	if (!mm)
++	mm = dma->mm;
++	if (!mmget_not_zero(mm))
+ 		return -ENODEV;
+ 
+ 	ret = vaddr_get_pfns(mm, vaddr, 1, dma->prot, pfn_base, pages);
+@@ -677,7 +678,7 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
+ 	ret = 0;
+ 
+ 	if (do_accounting && !is_invalid_reserved_pfn(*pfn_base)) {
+-		ret = vfio_lock_acct(dma, 1, true);
++		ret = vfio_lock_acct(dma, 1, false);
+ 		if (ret) {
+ 			put_pfn(*pfn_base, dma->prot);
+ 			if (ret == -ENOMEM)
+@@ -1031,6 +1032,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
+ 	vfio_unmap_unpin(iommu, dma, true);
+ 	vfio_unlink_dma(iommu, dma);
+ 	put_task_struct(dma->task);
++	mmdrop(dma->mm);
+ 	vfio_dma_bitmap_free(dma);
+ 	kfree(dma);
+ 	iommu->dma_avail++;
+@@ -1452,29 +1454,15 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
+ 	 * against the locked memory limit and we need to be able to do both
+ 	 * outside of this call path as pinning can be asynchronous via the
+ 	 * external interfaces for mdev devices.  RLIMIT_MEMLOCK requires a
+-	 * task_struct and VM locked pages requires an mm_struct, however
+-	 * holding an indefinite mm reference is not recommended, therefore we
+-	 * only hold a reference to a task.  We could hold a reference to
+-	 * current, however QEMU uses this call path through vCPU threads,
+-	 * which can be killed resulting in a NULL mm and failure in the unmap
+-	 * path when called via a different thread.  Avoid this problem by
+-	 * using the group_leader as threads within the same group require
+-	 * both CLONE_THREAD and CLONE_VM and will therefore use the same
+-	 * mm_struct.
+-	 *
+-	 * Previously we also used the task for testing CAP_IPC_LOCK at the
+-	 * time of pinning and accounting, however has_capability() makes use
+-	 * of real_cred, a copy-on-write field, so we can't guarantee that it
+-	 * matches group_leader, or in fact that it might not change by the
+-	 * time it's evaluated.  If a process were to call MAP_DMA with
+-	 * CAP_IPC_LOCK but later drop it, it doesn't make sense that they
+-	 * possibly see different results for an iommu_mapped vfio_dma vs
+-	 * externally mapped.  Therefore track CAP_IPC_LOCK in vfio_dma at the
+-	 * time of calling MAP_DMA.
++	 * task_struct. Save the group_leader so that all DMA tracking uses
++	 * the same task, to make debugging easier.  VM locked pages requires
++	 * an mm_struct, so grab the mm in case the task dies.
+ 	 */
+ 	get_task_struct(current->group_leader);
+ 	dma->task = current->group_leader;
+ 	dma->lock_cap = capable(CAP_IPC_LOCK);
++	dma->mm = current->mm;
++	mmgrab(dma->mm);
+ 
+ 	dma->pfn_list = RB_ROOT;
+ 
+@@ -2998,9 +2986,8 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
+ 			!(dma->prot & IOMMU_READ))
+ 		return -EPERM;
+ 
+-	mm = get_task_mm(dma->task);
+-
+-	if (!mm)
++	mm = dma->mm;
++	if (!mmget_not_zero(mm))
+ 		return -EPERM;
+ 
+ 	if (kthread)
+diff --git a/drivers/watchdog/at91sam9_wdt.c b/drivers/watchdog/at91sam9_wdt.c
+index 292b5a1ca8318..fed7be2464420 100644
+--- a/drivers/watchdog/at91sam9_wdt.c
++++ b/drivers/watchdog/at91sam9_wdt.c
+@@ -206,10 +206,9 @@ static int at91_wdt_init(struct platform_device *pdev, struct at91wdt *wdt)
+ 			 "min heartbeat and max heartbeat might be too close for the system to handle it correctly\n");
+ 
+ 	if ((tmp & AT91_WDT_WDFIEN) && wdt->irq) {
+-		err = request_irq(wdt->irq, wdt_interrupt,
+-				  IRQF_SHARED | IRQF_IRQPOLL |
+-				  IRQF_NO_SUSPEND,
+-				  pdev->name, wdt);
++		err = devm_request_irq(dev, wdt->irq, wdt_interrupt,
++				       IRQF_SHARED | IRQF_IRQPOLL | IRQF_NO_SUSPEND,
++				       pdev->name, wdt);
+ 		if (err)
+ 			return err;
+ 	}
+diff --git a/drivers/watchdog/pcwd_usb.c b/drivers/watchdog/pcwd_usb.c
+index 1bdaf17c1d38d..8202f0a6b0935 100644
+--- a/drivers/watchdog/pcwd_usb.c
++++ b/drivers/watchdog/pcwd_usb.c
+@@ -325,7 +325,8 @@ static int usb_pcwd_set_heartbeat(struct usb_pcwd_private *usb_pcwd, int t)
+ static int usb_pcwd_get_temperature(struct usb_pcwd_private *usb_pcwd,
+ 							int *temperature)
+ {
+-	unsigned char msb, lsb;
++	unsigned char msb = 0x00;
++	unsigned char lsb = 0x00;
+ 
+ 	usb_pcwd_send_command(usb_pcwd, CMD_READ_TEMP, &msb, &lsb);
+ 
+@@ -341,7 +342,8 @@ static int usb_pcwd_get_temperature(struct usb_pcwd_private *usb_pcwd,
+ static int usb_pcwd_get_timeleft(struct usb_pcwd_private *usb_pcwd,
+ 								int *time_left)
+ {
+-	unsigned char msb, lsb;
++	unsigned char msb = 0x00;
++	unsigned char lsb = 0x00;
+ 
+ 	/* Read the time that's left before rebooting */
+ 	/* Note: if the board is not yet armed then we will read 0xFFFF */
+diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
+index 2ee017442dfcd..f37255cd75fdf 100644
+--- a/drivers/watchdog/watchdog_dev.c
++++ b/drivers/watchdog/watchdog_dev.c
+@@ -1037,8 +1037,8 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ 		if (wdd->id == 0) {
+ 			misc_deregister(&watchdog_miscdev);
+ 			old_wd_data = NULL;
+-			put_device(&wd_data->dev);
+ 		}
++		put_device(&wd_data->dev);
+ 		return err;
+ 	}
+ 
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 844db4652dd1d..8fdd34ff20ef5 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -859,12 +859,13 @@ smb3_qfs_tcon(const unsigned int xid, struct cifs_tcon *tcon,
+ 	bool no_cached_open = tcon->nohandlecache;
+ 	struct cached_fid *cfid = NULL;
+ 
+-	oparms.tcon = tcon;
+-	oparms.desired_access = FILE_READ_ATTRIBUTES;
+-	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = cifs_create_options(cifs_sb, 0);
+-	oparms.fid = &fid;
+-	oparms.reconnect = false;
++	oparms = (struct cifs_open_parms) {
++		.tcon = tcon,
++		.desired_access = FILE_READ_ATTRIBUTES,
++		.disposition = FILE_OPEN,
++		.create_options = cifs_create_options(cifs_sb, 0),
++		.fid = &fid,
++	};
+ 
+ 	if (no_cached_open) {
+ 		rc = SMB2_open(xid, &oparms, &srch_path, &oplock, NULL, NULL,
+diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
+index f73f9b0625251..bcc611069308a 100644
+--- a/fs/cifs/smbdirect.c
++++ b/fs/cifs/smbdirect.c
+@@ -1691,6 +1691,7 @@ static struct smbd_connection *_smbd_get_connection(
+ 
+ allocate_mr_failed:
+ 	/* At this point, need to a full transport shutdown */
++	server->smbd_conn = info;
+ 	smbd_destroy(server);
+ 	return NULL;
+ 
+@@ -2239,6 +2240,7 @@ static int allocate_mr_list(struct smbd_connection *info)
+ 	atomic_set(&info->mr_ready_count, 0);
+ 	atomic_set(&info->mr_used_count, 0);
+ 	init_waitqueue_head(&info->wait_for_mr_cleanup);
++	INIT_WORK(&info->mr_recovery_work, smbd_mr_recovery_work);
+ 	/* Allocate more MRs (2x) than hardware responder_resources */
+ 	for (i = 0; i < info->responder_resources * 2; i++) {
+ 		smbdirect_mr = kzalloc(sizeof(*smbdirect_mr), GFP_KERNEL);
+@@ -2266,13 +2268,13 @@ static int allocate_mr_list(struct smbd_connection *info)
+ 		list_add_tail(&smbdirect_mr->list, &info->mr_list);
+ 		atomic_inc(&info->mr_ready_count);
+ 	}
+-	INIT_WORK(&info->mr_recovery_work, smbd_mr_recovery_work);
+ 	return 0;
+ 
+ out:
+ 	kfree(smbdirect_mr);
+ 
+ 	list_for_each_entry_safe(smbdirect_mr, tmp, &info->mr_list, list) {
++		list_del(&smbdirect_mr->list);
+ 		ib_dereg_mr(smbdirect_mr->mr);
+ 		kfree(smbdirect_mr->sgl);
+ 		kfree(smbdirect_mr);
+diff --git a/fs/coda/upcall.c b/fs/coda/upcall.c
+index eb3b1898da462..610484c90260b 100644
+--- a/fs/coda/upcall.c
++++ b/fs/coda/upcall.c
+@@ -790,7 +790,7 @@ static int coda_upcall(struct venus_comm *vcp,
+ 	sig_req = kmalloc(sizeof(struct upc_req), GFP_KERNEL);
+ 	if (!sig_req) goto exit;
+ 
+-	sig_inputArgs = kvzalloc(sizeof(struct coda_in_hdr), GFP_KERNEL);
++	sig_inputArgs = kvzalloc(sizeof(*sig_inputArgs), GFP_KERNEL);
+ 	if (!sig_inputArgs) {
+ 		kfree(sig_req);
+ 		goto exit;
+diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
+index dedbc55cd48f5..6caded58cda52 100644
+--- a/fs/exfat/dir.c
++++ b/fs/exfat/dir.c
+@@ -102,7 +102,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
+ 			clu.dir = ei->hint_bmap.clu;
+ 		}
+ 
+-		while (clu_offset > 0) {
++		while (clu_offset > 0 && clu.dir != EXFAT_EOF_CLUSTER) {
+ 			if (exfat_get_next_cluster(sb, &(clu.dir)))
+ 				return -EIO;
+ 
+@@ -236,10 +236,7 @@ static int exfat_iterate(struct file *filp, struct dir_context *ctx)
+ 		fake_offset = 1;
+ 	}
+ 
+-	if (cpos & (DENTRY_SIZE - 1)) {
+-		err = -ENOENT;
+-		goto unlock;
+-	}
++	cpos = round_up(cpos, DENTRY_SIZE);
+ 
+ 	/* name buffer should be allocated before use */
+ 	err = exfat_alloc_namebuf(nb);
+diff --git a/fs/exfat/exfat_fs.h b/fs/exfat/exfat_fs.h
+index 0d139c7d150d9..07b09af57436f 100644
+--- a/fs/exfat/exfat_fs.h
++++ b/fs/exfat/exfat_fs.h
+@@ -42,7 +42,7 @@ enum {
+ #define ES_2_ENTRIES		2
+ #define ES_ALL_ENTRIES		0
+ 
+-#define DIR_DELETED		0xFFFF0321
++#define DIR_DELETED		0xFFFFFFF7
+ 
+ /* type values */
+ #define TYPE_UNUSED		0x0000
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index c819e8427ea57..819f47278305e 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -250,8 +250,7 @@ void exfat_truncate(struct inode *inode, loff_t size)
+ 	else
+ 		mark_inode_dirty(inode);
+ 
+-	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
+-				inode->i_blkbits;
++	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >> 9;
+ write_size:
+ 	aligned_size = i_size_read(inode);
+ 	if (aligned_size & (blocksize - 1)) {
+diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c
+index 2a9f6a80584ee..4bd73820a4ac0 100644
+--- a/fs/exfat/inode.c
++++ b/fs/exfat/inode.c
+@@ -242,8 +242,7 @@ static int exfat_map_cluster(struct inode *inode, unsigned int clu_offset,
+ 				return err;
+ 		} /* end of if != DIR_DELETED */
+ 
+-		inode->i_blocks +=
+-			num_to_be_allocated << sbi->sect_per_clus_bits;
++		inode->i_blocks += EXFAT_CLU_TO_B(num_to_be_allocated, sbi) >> 9;
+ 
+ 		/*
+ 		 * Move *clu pointer along FAT chains (hole care) because the
+@@ -600,8 +599,7 @@ static int exfat_fill_inode(struct inode *inode, struct exfat_dir_entry *info)
+ 
+ 	exfat_save_attr(inode, info->attr);
+ 
+-	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
+-				inode->i_blkbits;
++	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >> 9;
+ 	inode->i_mtime = info->mtime;
+ 	inode->i_ctime = info->mtime;
+ 	ei->i_crtime = info->crtime;
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index 935f600509009..1382d816912c8 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -398,7 +398,7 @@ static int exfat_find_empty_entry(struct inode *inode,
+ 		ei->i_size_ondisk += sbi->cluster_size;
+ 		ei->i_size_aligned += sbi->cluster_size;
+ 		ei->flags = p_dir->flags;
+-		inode->i_blocks += 1 << sbi->sect_per_clus_bits;
++		inode->i_blocks += sbi->cluster_size >> 9;
+ 	}
+ 
+ 	return dentry;
+diff --git a/fs/exfat/super.c b/fs/exfat/super.c
+index ba70ed1c98049..62d79af257a90 100644
+--- a/fs/exfat/super.c
++++ b/fs/exfat/super.c
+@@ -364,8 +364,7 @@ static int exfat_read_root(struct inode *inode)
+ 	inode->i_op = &exfat_dir_inode_operations;
+ 	inode->i_fop = &exfat_dir_operations;
+ 
+-	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
+-				inode->i_blkbits;
++	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >> 9;
+ 	ei->i_pos = ((loff_t)sbi->root_dir << 32) | 0xffffffff;
+ 	ei->i_size_aligned = i_size_read(inode);
+ 	ei->i_size_ondisk = i_size_read(inode);
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 6bf1c62eff04a..b80ad5a7b05c0 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1415,6 +1415,13 @@ static struct inode *ext4_xattr_inode_create(handle_t *handle,
+ 	uid_t owner[2] = { i_uid_read(inode), i_gid_read(inode) };
+ 	int err;
+ 
++	if (inode->i_sb->s_root == NULL) {
++		ext4_warning(inode->i_sb,
++			     "refuse to create EA inode when umounting");
++		WARN_ON(1);
++		return ERR_PTR(-EINVAL);
++	}
++
+ 	/*
+ 	 * Let the next inode be the goal, so we try and allocate the EA inode
+ 	 * in the same group, or nearby one.
+@@ -2564,9 +2571,8 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 
+ 	is = kzalloc(sizeof(struct ext4_xattr_ibody_find), GFP_NOFS);
+ 	bs = kzalloc(sizeof(struct ext4_xattr_block_find), GFP_NOFS);
+-	buffer = kvmalloc(value_size, GFP_NOFS);
+ 	b_entry_name = kmalloc(entry->e_name_len + 1, GFP_NOFS);
+-	if (!is || !bs || !buffer || !b_entry_name) {
++	if (!is || !bs || !b_entry_name) {
+ 		error = -ENOMEM;
+ 		goto out;
+ 	}
+@@ -2578,12 +2584,18 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 
+ 	/* Save the entry name and the entry value */
+ 	if (entry->e_value_inum) {
++		buffer = kvmalloc(value_size, GFP_NOFS);
++		if (!buffer) {
++			error = -ENOMEM;
++			goto out;
++		}
++
+ 		error = ext4_xattr_inode_get(inode, entry, buffer, value_size);
+ 		if (error)
+ 			goto out;
+ 	} else {
+ 		size_t value_offs = le16_to_cpu(entry->e_value_offs);
+-		memcpy(buffer, (void *)IFIRST(header) + value_offs, value_size);
++		buffer = (void *)IFIRST(header) + value_offs;
+ 	}
+ 
+ 	memcpy(b_entry_name, entry->e_name, entry->e_name_len);
+@@ -2598,25 +2610,26 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 	if (error)
+ 		goto out;
+ 
+-	/* Remove the chosen entry from the inode */
+-	error = ext4_xattr_ibody_set(handle, inode, &i, is);
+-	if (error)
+-		goto out;
+-
+ 	i.value = buffer;
+ 	i.value_len = value_size;
+ 	error = ext4_xattr_block_find(inode, &i, bs);
+ 	if (error)
+ 		goto out;
+ 
+-	/* Add entry which was removed from the inode into the block */
++	/* Move ea entry from the inode into the block */
+ 	error = ext4_xattr_block_set(handle, inode, &i, bs);
+ 	if (error)
+ 		goto out;
+-	error = 0;
++
++	/* Remove the chosen entry from the inode */
++	i.value = NULL;
++	i.value_len = 0;
++	error = ext4_xattr_ibody_set(handle, inode, &i, is);
++
+ out:
+ 	kfree(b_entry_name);
+-	kvfree(buffer);
++	if (entry->e_value_inum && buffer)
++		kvfree(buffer);
+ 	if (is)
+ 		brelse(is->iloc.bh);
+ 	if (bs)
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 9270330ec5ced..db26e87b8f0dd 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -721,7 +721,7 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
+ 	}
+ 
+ 	if (fio->io_wbc && !is_read_io(fio->op))
+-		wbc_account_cgroup_owner(fio->io_wbc, page, PAGE_SIZE);
++		wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
+ 
+ 	__attach_io_flag(fio);
+ 	bio_set_op_attrs(bio, fio->op, fio->op_flags);
+@@ -929,7 +929,7 @@ alloc_new:
+ 	}
+ 
+ 	if (fio->io_wbc)
+-		wbc_account_cgroup_owner(fio->io_wbc, page, PAGE_SIZE);
++		wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
+ 
+ 	inc_page_count(fio->sbi, WB_DATA_TYPE(page));
+ 
+@@ -1003,7 +1003,7 @@ alloc_new:
+ 	}
+ 
+ 	if (fio->io_wbc)
+-		wbc_account_cgroup_owner(fio->io_wbc, bio_page, PAGE_SIZE);
++		wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
+ 
+ 	io->last_block_in_bio = fio->new_blkaddr;
+ 	f2fs_trace_ios(fio, 0);
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index f97c23ec93ce5..df1a0cbfa1be4 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -64,7 +64,6 @@ bool f2fs_may_inline_dentry(struct inode *inode)
+ void f2fs_do_read_inline_data(struct page *page, struct page *ipage)
+ {
+ 	struct inode *inode = page->mapping->host;
+-	void *src_addr, *dst_addr;
+ 
+ 	if (PageUptodate(page))
+ 		return;
+@@ -74,11 +73,8 @@ void f2fs_do_read_inline_data(struct page *page, struct page *ipage)
+ 	zero_user_segment(page, MAX_INLINE_DATA(inode), PAGE_SIZE);
+ 
+ 	/* Copy the whole inline data block */
+-	src_addr = inline_data_addr(inode, ipage);
+-	dst_addr = kmap_atomic(page);
+-	memcpy(dst_addr, src_addr, MAX_INLINE_DATA(inode));
+-	flush_dcache_page(page);
+-	kunmap_atomic(dst_addr);
++	memcpy_to_page(page, 0, inline_data_addr(inode, ipage),
++		       MAX_INLINE_DATA(inode));
+ 	if (!PageUptodate(page))
+ 		SetPageUptodate(page);
+ }
+@@ -245,7 +241,6 @@ out:
+ 
+ int f2fs_write_inline_data(struct inode *inode, struct page *page)
+ {
+-	void *src_addr, *dst_addr;
+ 	struct dnode_of_data dn;
+ 	int err;
+ 
+@@ -262,10 +257,8 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
+ 	f2fs_bug_on(F2FS_I_SB(inode), page->index);
+ 
+ 	f2fs_wait_on_page_writeback(dn.inode_page, NODE, true, true);
+-	src_addr = kmap_atomic(page);
+-	dst_addr = inline_data_addr(inode, dn.inode_page);
+-	memcpy(dst_addr, src_addr, MAX_INLINE_DATA(inode));
+-	kunmap_atomic(src_addr);
++	memcpy_from_page(inline_data_addr(inode, dn.inode_page),
++			 page, 0, MAX_INLINE_DATA(inode));
+ 	set_page_dirty(dn.inode_page);
+ 
+ 	f2fs_clear_page_cache_dirty_tag(page);
+@@ -420,18 +413,17 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
+ 
+ 	dentry_blk = page_address(page);
+ 
++	/*
++	 * Start by zeroing the full block, to ensure that all unused space is
++	 * zeroed and no uninitialized memory is leaked to disk.
++	 */
++	memset(dentry_blk, 0, F2FS_BLKSIZE);
++
+ 	make_dentry_ptr_inline(dir, &src, inline_dentry);
+ 	make_dentry_ptr_block(dir, &dst, dentry_blk);
+ 
+ 	/* copy data from inline dentry block to new dentry block */
+ 	memcpy(dst.bitmap, src.bitmap, src.nr_bitmap);
+-	memset(dst.bitmap + src.nr_bitmap, 0, dst.nr_bitmap - src.nr_bitmap);
+-	/*
+-	 * we do not need to zero out remainder part of dentry and filename
+-	 * field, since we have used bitmap for marking the usage status of
+-	 * them, besides, we can also ignore copying/zeroing reserved space
+-	 * of dentry block, because them haven't been used so far.
+-	 */
+ 	memcpy(dst.dentry, src.dentry, SIZE_OF_DIR_ENTRY * src.max);
+ 	memcpy(dst.filename, src.filename, src.max * F2FS_SLOT_LEN);
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index fba413ced9826..0bba5c72fc77e 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2034,7 +2034,6 @@ static ssize_t f2fs_quota_read(struct super_block *sb, int type, char *data,
+ 	size_t toread;
+ 	loff_t i_size = i_size_read(inode);
+ 	struct page *page;
+-	char *kaddr;
+ 
+ 	if (off > i_size)
+ 		return 0;
+@@ -2068,9 +2067,7 @@ repeat:
+ 			return -EIO;
+ 		}
+ 
+-		kaddr = kmap_atomic(page);
+-		memcpy(data, kaddr + offset, tocopy);
+-		kunmap_atomic(kaddr);
++		memcpy_from_page(data, page, offset, tocopy);
+ 		f2fs_put_page(page, 1);
+ 
+ 		offset = 0;
+@@ -2092,7 +2089,6 @@ static ssize_t f2fs_quota_write(struct super_block *sb, int type,
+ 	size_t towrite = len;
+ 	struct page *page;
+ 	void *fsdata = NULL;
+-	char *kaddr;
+ 	int err = 0;
+ 	int tocopy;
+ 
+@@ -2112,10 +2108,7 @@ retry:
+ 			break;
+ 		}
+ 
+-		kaddr = kmap_atomic(page);
+-		memcpy(kaddr + offset, data, tocopy);
+-		kunmap_atomic(kaddr);
+-		flush_dcache_page(page);
++		memcpy_to_page(page, offset, data, tocopy);
+ 
+ 		a_ops->write_end(NULL, mapping, off, tocopy, tocopy,
+ 						page, fsdata);
+diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c
+index cff94d095d0fe..cef40d92268f7 100644
+--- a/fs/f2fs/verity.c
++++ b/fs/f2fs/verity.c
+@@ -47,16 +47,13 @@ static int pagecache_read(struct inode *inode, void *buf, size_t count,
+ 		size_t n = min_t(size_t, count,
+ 				 PAGE_SIZE - offset_in_page(pos));
+ 		struct page *page;
+-		void *addr;
+ 
+ 		page = read_mapping_page(inode->i_mapping, pos >> PAGE_SHIFT,
+ 					 NULL);
+ 		if (IS_ERR(page))
+ 			return PTR_ERR(page);
+ 
+-		addr = kmap_atomic(page);
+-		memcpy(buf, addr + offset_in_page(pos), n);
+-		kunmap_atomic(addr);
++		memcpy_from_page(buf, page, offset_in_page(pos), n);
+ 
+ 		put_page(page);
+ 
+@@ -81,8 +78,7 @@ static int pagecache_write(struct inode *inode, const void *buf, size_t count,
+ 		size_t n = min_t(size_t, count,
+ 				 PAGE_SIZE - offset_in_page(pos));
+ 		struct page *page;
+-		void *fsdata;
+-		void *addr;
++		void *fsdata = NULL;
+ 		int res;
+ 
+ 		res = pagecache_write_begin(NULL, inode->i_mapping, pos, n, 0,
+@@ -90,9 +86,7 @@ static int pagecache_write(struct inode *inode, const void *buf, size_t count,
+ 		if (res)
+ 			return res;
+ 
+-		addr = kmap_atomic(page);
+-		memcpy(addr + offset_in_page(pos), buf, n);
+-		kunmap_atomic(addr);
++		memcpy_to_page(page, offset_in_page(pos), buf, n);
+ 
+ 		res = pagecache_write_end(NULL, inode->i_mapping, pos, n, n,
+ 					  page, fsdata);
+diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
+index cc4f987687f3c..5306595548703 100644
+--- a/fs/gfs2/aops.c
++++ b/fs/gfs2/aops.c
+@@ -152,7 +152,6 @@ static int __gfs2_jdata_writepage(struct page *page, struct writeback_control *w
+ {
+ 	struct inode *inode = page->mapping->host;
+ 	struct gfs2_inode *ip = GFS2_I(inode);
+-	struct gfs2_sbd *sdp = GFS2_SB(inode);
+ 
+ 	if (PageChecked(page)) {
+ 		ClearPageChecked(page);
+@@ -160,7 +159,7 @@ static int __gfs2_jdata_writepage(struct page *page, struct writeback_control *w
+ 			create_empty_buffers(page, inode->i_sb->s_blocksize,
+ 					     BIT(BH_Dirty)|BIT(BH_Uptodate));
+ 		}
+-		gfs2_page_add_databufs(ip, page, 0, sdp->sd_vfs->s_blocksize);
++		gfs2_page_add_databufs(ip, page, 0, PAGE_SIZE);
+ 	}
+ 	return gfs2_write_jdata_page(page, wbc);
+ }
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index d14b98aa1c3eb..5cb7e771b57ab 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -145,8 +145,10 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ 		return -EIO;
+ 
+ 	error = gfs2_find_jhead(sdp->sd_jdesc, &head, false);
+-	if (error || gfs2_withdrawn(sdp))
++	if (error) {
++		gfs2_consist(sdp);
+ 		return error;
++	}
+ 
+ 	if (!(head.lh_flags & GFS2_LOG_HEAD_UNMOUNT)) {
+ 		gfs2_consist(sdp);
+@@ -158,7 +160,9 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ 	gfs2_log_pointers_init(sdp, head.lh_blkno);
+ 
+ 	error = gfs2_quota_init(sdp);
+-	if (!error && !gfs2_withdrawn(sdp))
++	if (!error && gfs2_withdrawn(sdp))
++		error = -EIO;
++	if (!error)
+ 		set_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
+ 	return error;
+ }
+diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
+index c0a73a6ffb28b..397e02a566970 100644
+--- a/fs/hfs/bnode.c
++++ b/fs/hfs/bnode.c
+@@ -281,6 +281,7 @@ static struct hfs_bnode *__hfs_bnode_create(struct hfs_btree *tree, u32 cnid)
+ 		tree->node_hash[hash] = node;
+ 		tree->node_hash_cnt++;
+ 	} else {
++		hfs_bnode_get(node2);
+ 		spin_unlock(&tree->hash_lock);
+ 		kfree(node);
+ 		wait_event(node2->lock_wq, !test_bit(HFS_BNODE_NEW, &node2->flags));
+diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
+index 807119ae5adf7..7648f64a17a82 100644
+--- a/fs/hfsplus/super.c
++++ b/fs/hfsplus/super.c
+@@ -295,11 +295,11 @@ static void hfsplus_put_super(struct super_block *sb)
+ 		hfsplus_sync_fs(sb, 1);
+ 	}
+ 
++	iput(sbi->alloc_file);
++	iput(sbi->hidden_dir);
+ 	hfs_btree_close(sbi->attr_tree);
+ 	hfs_btree_close(sbi->cat_tree);
+ 	hfs_btree_close(sbi->ext_tree);
+-	iput(sbi->alloc_file);
+-	iput(sbi->hidden_dir);
+ 	kfree(sbi->s_vhdr_buf);
+ 	kfree(sbi->s_backup_vhdr_buf);
+ 	unload_nls(sbi->nls);
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 86472212cce17..1923528154b52 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -984,36 +984,28 @@ repeat:
+ 	 * ie. locked but not dirty) or tune2fs (which may actually have
+ 	 * the buffer dirtied, ugh.)  */
+ 
+-	if (buffer_dirty(bh)) {
++	if (buffer_dirty(bh) && jh->b_transaction) {
++		warn_dirty_buffer(bh);
+ 		/*
+-		 * First question: is this buffer already part of the current
+-		 * transaction or the existing committing transaction?
+-		 */
+-		if (jh->b_transaction) {
+-			J_ASSERT_JH(jh,
+-				jh->b_transaction == transaction ||
+-				jh->b_transaction ==
+-					journal->j_committing_transaction);
+-			if (jh->b_next_transaction)
+-				J_ASSERT_JH(jh, jh->b_next_transaction ==
+-							transaction);
+-			warn_dirty_buffer(bh);
+-		}
+-		/*
+-		 * In any case we need to clean the dirty flag and we must
+-		 * do it under the buffer lock to be sure we don't race
+-		 * with running write-out.
++		 * We need to clean the dirty flag and we must do it under the
++		 * buffer lock to be sure we don't race with running write-out.
+ 		 */
+ 		JBUFFER_TRACE(jh, "Journalling dirty buffer");
+ 		clear_buffer_dirty(bh);
++		/*
++		 * The buffer is going to be added to BJ_Reserved list now and
++		 * nothing guarantees jbd2_journal_dirty_metadata() will be
++		 * ever called for it. So we need to set jbddirty bit here to
++		 * make sure the buffer is dirtied and written out when the
++		 * journaling machinery is done with it.
++		 */
+ 		set_buffer_jbddirty(bh);
+ 	}
+ 
+-	unlock_buffer(bh);
+-
+ 	error = -EROFS;
+ 	if (is_handle_aborted(handle)) {
+ 		spin_unlock(&jh->b_state_lock);
++		unlock_buffer(bh);
+ 		goto out;
+ 	}
+ 	error = 0;
+@@ -1023,8 +1015,10 @@ repeat:
+ 	 * b_next_transaction points to it
+ 	 */
+ 	if (jh->b_transaction == transaction ||
+-	    jh->b_next_transaction == transaction)
++	    jh->b_next_transaction == transaction) {
++		unlock_buffer(bh);
+ 		goto done;
++	}
+ 
+ 	/*
+ 	 * this is the first time this transaction is touching this buffer,
+@@ -1048,10 +1042,24 @@ repeat:
+ 		 */
+ 		smp_wmb();
+ 		spin_lock(&journal->j_list_lock);
++		if (test_clear_buffer_dirty(bh)) {
++			/*
++			 * Execute buffer dirty clearing and jh->b_transaction
++			 * assignment under journal->j_list_lock locked to
++			 * prevent bh being removed from checkpoint list if
++			 * the buffer is in an intermediate state (not dirty
++			 * and jh->b_transaction is NULL).
++			 */
++			JBUFFER_TRACE(jh, "Journalling dirty buffer");
++			set_buffer_jbddirty(bh);
++		}
+ 		__jbd2_journal_file_buffer(jh, transaction, BJ_Reserved);
+ 		spin_unlock(&journal->j_list_lock);
++		unlock_buffer(bh);
+ 		goto done;
+ 	}
++	unlock_buffer(bh);
++
+ 	/*
+ 	 * If there is already a copy-out version of this buffer, then we don't
+ 	 * need to make another one
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 2c9493011aec3..501263355ef48 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -193,7 +193,8 @@ int dbMount(struct inode *ipbmap)
+ 	bmp->db_agwidth = le32_to_cpu(dbmp_le->dn_agwidth);
+ 	bmp->db_agstart = le32_to_cpu(dbmp_le->dn_agstart);
+ 	bmp->db_agl2size = le32_to_cpu(dbmp_le->dn_agl2size);
+-	if (bmp->db_agl2size > L2MAXL2SIZE - L2MAXAG) {
++	if (bmp->db_agl2size > L2MAXL2SIZE - L2MAXAG ||
++	    bmp->db_agl2size < 0) {
+ 		err = -EINVAL;
+ 		goto err_release_metapage;
+ 	}
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index ad856b7b9a46c..7be1a7f7fcb2a 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -487,8 +487,9 @@ static int nfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ {
+ 	unsigned long blocks;
+ 	long long isize;
+-	struct rpc_clnt *clnt = NFS_CLIENT(file->f_mapping->host);
+-	struct inode *inode = file->f_mapping->host;
++	struct inode *inode = file_inode(file);
++	struct rpc_clnt *clnt = NFS_CLIENT(inode);
++	struct nfs_client *cl = NFS_SERVER(inode)->nfs_client;
+ 
+ 	spin_lock(&inode->i_lock);
+ 	blocks = inode->i_blocks;
+@@ -501,14 +502,22 @@ static int nfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ 
+ 	*span = sis->pages;
+ 
++
++	if (cl->rpc_ops->enable_swap)
++		cl->rpc_ops->enable_swap(inode);
++
+ 	return rpc_clnt_swap_activate(clnt);
+ }
+ 
+ static void nfs_swap_deactivate(struct file *file)
+ {
+-	struct rpc_clnt *clnt = NFS_CLIENT(file->f_mapping->host);
++	struct inode *inode = file_inode(file);
++	struct rpc_clnt *clnt = NFS_CLIENT(inode);
++	struct nfs_client *cl = NFS_SERVER(inode)->nfs_client;
+ 
+ 	rpc_clnt_swap_deactivate(clnt);
++	if (cl->rpc_ops->disable_swap)
++		cl->rpc_ops->disable_swap(file_inode(file));
+ }
+ 
+ const struct address_space_operations nfs_file_aops = {
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 1adece1cff3ed..36f415278c042 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1906,7 +1906,11 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ 	nfs_wcc_update_inode(inode, fattr);
+ 
+ 	if (pnfs_layoutcommit_outstanding(inode)) {
+-		nfsi->cache_validity |= save_cache_validity & NFS_INO_INVALID_ATTR;
++		nfsi->cache_validity |=
++			save_cache_validity &
++			(NFS_INO_INVALID_CHANGE | NFS_INO_INVALID_CTIME |
++			 NFS_INO_INVALID_MTIME | NFS_INO_INVALID_SIZE |
++			 NFS_INO_REVAL_FORCED);
+ 		cache_revalidated = false;
+ 	}
+ 
+diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
+index 6d916563356ef..8b41c0b8624e3 100644
+--- a/fs/nfs/nfs4_fs.h
++++ b/fs/nfs/nfs4_fs.h
+@@ -42,6 +42,7 @@ enum nfs4_client_state {
+ 	NFS4CLNT_LEASE_MOVED,
+ 	NFS4CLNT_DELEGATION_EXPIRED,
+ 	NFS4CLNT_RUN_MANAGER,
++	NFS4CLNT_MANAGER_AVAILABLE,
+ 	NFS4CLNT_RECALL_RUNNING,
+ 	NFS4CLNT_RECALL_ANY_LAYOUT_READ,
+ 	NFS4CLNT_RECALL_ANY_LAYOUT_RW,
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index ee46ab09e3306..8653335c17b67 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -10385,6 +10385,26 @@ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ 	return error + error2 + error3;
+ }
+ 
++static void nfs4_enable_swap(struct inode *inode)
++{
++	/* The state manager thread must always be running.
++	 * It will notice the client is a swapper, and stay put.
++	 */
++	struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
++
++	nfs4_schedule_state_manager(clp);
++}
++
++static void nfs4_disable_swap(struct inode *inode)
++{
++	/* The state manager thread will now exit once it is
++	 * woken.
++	 */
++	struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
++
++	nfs4_schedule_state_manager(clp);
++}
++
+ static const struct inode_operations nfs4_dir_inode_operations = {
+ 	.create		= nfs_create,
+ 	.lookup		= nfs_lookup,
+@@ -10461,6 +10481,8 @@ const struct nfs_rpc_ops nfs_v4_clientops = {
+ 	.free_client	= nfs4_free_client,
+ 	.create_server	= nfs4_create_server,
+ 	.clone_server	= nfs_clone_server,
++	.enable_swap	= nfs4_enable_swap,
++	.disable_swap	= nfs4_disable_swap,
+ };
+ 
+ static const struct xattr_handler nfs4_xattr_nfs4_acl_handler = {
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 175b2e064003e..628e030f8e3ba 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1208,10 +1208,17 @@ void nfs4_schedule_state_manager(struct nfs_client *clp)
+ {
+ 	struct task_struct *task;
+ 	char buf[INET6_ADDRSTRLEN + sizeof("-manager") + 1];
++	struct rpc_clnt *cl = clp->cl_rpcclient;
++
++	while (cl != cl->cl_parent)
++		cl = cl->cl_parent;
+ 
+ 	set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
+-	if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
++	if (test_and_set_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state) != 0) {
++		wake_up_var(&clp->cl_state);
+ 		return;
++	}
++	set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state);
+ 	__module_get(THIS_MODULE);
+ 	refcount_inc(&clp->cl_count);
+ 
+@@ -1229,6 +1236,7 @@ void nfs4_schedule_state_manager(struct nfs_client *clp)
+ 		if (!nfs_client_init_is_complete(clp))
+ 			nfs_mark_client_ready(clp, PTR_ERR(task));
+ 		nfs4_clear_state_manager_bit(clp);
++		clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
+ 		nfs_put_client(clp);
+ 		module_put(THIS_MODULE);
+ 	}
+@@ -2680,12 +2688,8 @@ static void nfs4_state_manager(struct nfs_client *clp)
+ 			clear_bit(NFS4CLNT_RECALL_RUNNING, &clp->cl_state);
+ 		}
+ 
+-		/* Did we race with an attempt to give us more work? */
+-		if (!test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state))
+-			return;
+-		if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
+-			return;
+-		memflags = memalloc_nofs_save();
++		return;
++
+ 	} while (refcount_read(&clp->cl_count) > 1 && !signalled());
+ 	goto out_drain;
+ 
+@@ -2706,9 +2710,31 @@ out_drain:
+ static int nfs4_run_state_manager(void *ptr)
+ {
+ 	struct nfs_client *clp = ptr;
++	struct rpc_clnt *cl = clp->cl_rpcclient;
++
++	while (cl != cl->cl_parent)
++		cl = cl->cl_parent;
+ 
+ 	allow_signal(SIGKILL);
++again:
++	set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state);
+ 	nfs4_state_manager(clp);
++	if (atomic_read(&cl->cl_swapper)) {
++		wait_var_event_interruptible(&clp->cl_state,
++					     test_bit(NFS4CLNT_RUN_MANAGER,
++						      &clp->cl_state));
++		if (atomic_read(&cl->cl_swapper) &&
++		    test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state))
++			goto again;
++		/* Either no longer a swapper, or were signalled */
++	}
++	clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
++
++	if (refcount_read(&clp->cl_count) > 1 && !signalled() &&
++	    test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state) &&
++	    !test_and_set_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state))
++		goto again;
++
+ 	nfs_put_client(clp);
+ 	module_put_and_exit(0);
+ 	return 0;
+diff --git a/fs/nfs/nfs4trace.h b/fs/nfs/nfs4trace.h
+index 484c1da96dea2..d862df9761e77 100644
+--- a/fs/nfs/nfs4trace.h
++++ b/fs/nfs/nfs4trace.h
+@@ -584,32 +584,34 @@ TRACE_DEFINE_ENUM(NFS4CLNT_MOVED);
+ TRACE_DEFINE_ENUM(NFS4CLNT_LEASE_MOVED);
+ TRACE_DEFINE_ENUM(NFS4CLNT_DELEGATION_EXPIRED);
+ TRACE_DEFINE_ENUM(NFS4CLNT_RUN_MANAGER);
++TRACE_DEFINE_ENUM(NFS4CLNT_MANAGER_AVAILABLE);
+ TRACE_DEFINE_ENUM(NFS4CLNT_RECALL_RUNNING);
+ TRACE_DEFINE_ENUM(NFS4CLNT_RECALL_ANY_LAYOUT_READ);
+ TRACE_DEFINE_ENUM(NFS4CLNT_RECALL_ANY_LAYOUT_RW);
++TRACE_DEFINE_ENUM(NFS4CLNT_DELEGRETURN_DELAYED);
+ 
+ #define show_nfs4_clp_state(state) \
+ 	__print_flags(state, "|", \
+-		{ NFS4CLNT_MANAGER_RUNNING,	"MANAGER_RUNNING" }, \
+-		{ NFS4CLNT_CHECK_LEASE,		"CHECK_LEASE" }, \
+-		{ NFS4CLNT_LEASE_EXPIRED,	"LEASE_EXPIRED" }, \
+-		{ NFS4CLNT_RECLAIM_REBOOT,	"RECLAIM_REBOOT" }, \
+-		{ NFS4CLNT_RECLAIM_NOGRACE,	"RECLAIM_NOGRACE" }, \
+-		{ NFS4CLNT_DELEGRETURN,		"DELEGRETURN" }, \
+-		{ NFS4CLNT_SESSION_RESET,	"SESSION_RESET" }, \
+-		{ NFS4CLNT_LEASE_CONFIRM,	"LEASE_CONFIRM" }, \
+-		{ NFS4CLNT_SERVER_SCOPE_MISMATCH, \
+-						"SERVER_SCOPE_MISMATCH" }, \
+-		{ NFS4CLNT_PURGE_STATE,		"PURGE_STATE" }, \
+-		{ NFS4CLNT_BIND_CONN_TO_SESSION, \
+-						"BIND_CONN_TO_SESSION" }, \
+-		{ NFS4CLNT_MOVED,		"MOVED" }, \
+-		{ NFS4CLNT_LEASE_MOVED,		"LEASE_MOVED" }, \
+-		{ NFS4CLNT_DELEGATION_EXPIRED,	"DELEGATION_EXPIRED" }, \
+-		{ NFS4CLNT_RUN_MANAGER,		"RUN_MANAGER" }, \
+-		{ NFS4CLNT_RECALL_RUNNING,	"RECALL_RUNNING" }, \
+-		{ NFS4CLNT_RECALL_ANY_LAYOUT_READ, "RECALL_ANY_LAYOUT_READ" }, \
+-		{ NFS4CLNT_RECALL_ANY_LAYOUT_RW, "RECALL_ANY_LAYOUT_RW" })
++	{ BIT(NFS4CLNT_MANAGER_RUNNING),	"MANAGER_RUNNING" }, \
++	{ BIT(NFS4CLNT_CHECK_LEASE),		"CHECK_LEASE" }, \
++	{ BIT(NFS4CLNT_LEASE_EXPIRED),	"LEASE_EXPIRED" }, \
++	{ BIT(NFS4CLNT_RECLAIM_REBOOT),	"RECLAIM_REBOOT" }, \
++	{ BIT(NFS4CLNT_RECLAIM_NOGRACE),	"RECLAIM_NOGRACE" }, \
++	{ BIT(NFS4CLNT_DELEGRETURN),		"DELEGRETURN" }, \
++	{ BIT(NFS4CLNT_SESSION_RESET),	"SESSION_RESET" }, \
++	{ BIT(NFS4CLNT_LEASE_CONFIRM),	"LEASE_CONFIRM" }, \
++	{ BIT(NFS4CLNT_SERVER_SCOPE_MISMATCH),	"SERVER_SCOPE_MISMATCH" }, \
++	{ BIT(NFS4CLNT_PURGE_STATE),		"PURGE_STATE" }, \
++	{ BIT(NFS4CLNT_BIND_CONN_TO_SESSION),	"BIND_CONN_TO_SESSION" }, \
++	{ BIT(NFS4CLNT_MOVED),		"MOVED" }, \
++	{ BIT(NFS4CLNT_LEASE_MOVED),		"LEASE_MOVED" }, \
++	{ BIT(NFS4CLNT_DELEGATION_EXPIRED),	"DELEGATION_EXPIRED" }, \
++	{ BIT(NFS4CLNT_RUN_MANAGER),		"RUN_MANAGER" }, \
++	{ BIT(NFS4CLNT_MANAGER_AVAILABLE), "MANAGER_AVAILABLE" }, \
++	{ BIT(NFS4CLNT_RECALL_RUNNING),	"RECALL_RUNNING" }, \
++	{ BIT(NFS4CLNT_RECALL_ANY_LAYOUT_READ), "RECALL_ANY_LAYOUT_READ" }, \
++	{ BIT(NFS4CLNT_RECALL_ANY_LAYOUT_RW), "RECALL_ANY_LAYOUT_RW" }, \
++	{ BIT(NFS4CLNT_DELEGRETURN_DELAYED), "DELERETURN_DELAYED" })
+ 
+ TRACE_EVENT(nfs4_state_mgr,
+ 		TP_PROTO(
+diff --git a/fs/nfsd/nfs4layouts.c b/fs/nfsd/nfs4layouts.c
+index a97873f2d22b0..2673019d30ecd 100644
+--- a/fs/nfsd/nfs4layouts.c
++++ b/fs/nfsd/nfs4layouts.c
+@@ -322,11 +322,11 @@ nfsd4_recall_file_layout(struct nfs4_layout_stateid *ls)
+ 	if (ls->ls_recalled)
+ 		goto out_unlock;
+ 
+-	ls->ls_recalled = true;
+-	atomic_inc(&ls->ls_stid.sc_file->fi_lo_recalls);
+ 	if (list_empty(&ls->ls_layouts))
+ 		goto out_unlock;
+ 
++	ls->ls_recalled = true;
++	atomic_inc(&ls->ls_stid.sc_file->fi_lo_recalls);
+ 	trace_nfsd_layout_recall(&ls->ls_stid.sc_stateid);
+ 
+ 	refcount_inc(&ls->ls_stid.sc_count);
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 735ee8a798705..f82cfe843b99b 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1075,8 +1075,10 @@ out:
+ 	return status;
+ out_put_dst:
+ 	nfsd_file_put(*dst);
++	*dst = NULL;
+ out_put_src:
+ 	nfsd_file_put(*src);
++	*src = NULL;
+ 	goto out;
+ }
+ 
+diff --git a/fs/ocfs2/move_extents.c b/fs/ocfs2/move_extents.c
+index 758d9661ef1e4..98e77ea957ff3 100644
+--- a/fs/ocfs2/move_extents.c
++++ b/fs/ocfs2/move_extents.c
+@@ -107,14 +107,6 @@ static int __ocfs2_move_extent(handle_t *handle,
+ 	 */
+ 	replace_rec.e_flags = ext_flags & ~OCFS2_EXT_REFCOUNTED;
+ 
+-	ret = ocfs2_journal_access_di(handle, INODE_CACHE(inode),
+-				      context->et.et_root_bh,
+-				      OCFS2_JOURNAL_ACCESS_WRITE);
+-	if (ret) {
+-		mlog_errno(ret);
+-		goto out;
+-	}
+-
+ 	ret = ocfs2_split_extent(handle, &context->et, path, index,
+ 				 &replace_rec, context->meta_ac,
+ 				 &context->dealloc);
+@@ -123,8 +115,6 @@ static int __ocfs2_move_extent(handle_t *handle,
+ 		goto out;
+ 	}
+ 
+-	ocfs2_journal_dirty(handle, context->et.et_root_bh);
+-
+ 	context->new_phys_cpos = new_p_cpos;
+ 
+ 	/*
+@@ -446,7 +436,7 @@ static int ocfs2_find_victim_alloc_group(struct inode *inode,
+ 			bg = (struct ocfs2_group_desc *)gd_bh->b_data;
+ 
+ 			if (vict_blkno < (le64_to_cpu(bg->bg_blkno) +
+-						le16_to_cpu(bg->bg_bits))) {
++						(le16_to_cpu(bg->bg_bits) << bits_per_unit))) {
+ 
+ 				*ret_bh = gd_bh;
+ 				*vict_bit = (vict_blkno - blkno) >>
+@@ -561,6 +551,7 @@ static void ocfs2_probe_alloc_group(struct inode *inode, struct buffer_head *bh,
+ 			last_free_bits++;
+ 
+ 		if (last_free_bits == move_len) {
++			i -= move_len;
+ 			*goal_bit = i;
+ 			*phys_cpos = base_cpos + i;
+ 			break;
+@@ -1032,18 +1023,19 @@ int ocfs2_ioctl_move_extents(struct file *filp, void __user *argp)
+ 
+ 	context->range = &range;
+ 
++	/*
++	 * ok, the default theshold for the defragmentation
++	 * is 1M, since our maximum clustersize was 1M also.
++	 * any thought?
++	 */
++	if (!range.me_threshold)
++		range.me_threshold = 1024 * 1024;
++
++	if (range.me_threshold > i_size_read(inode))
++		range.me_threshold = i_size_read(inode);
++
+ 	if (range.me_flags & OCFS2_MOVE_EXT_FL_AUTO_DEFRAG) {
+ 		context->auto_defrag = 1;
+-		/*
+-		 * ok, the default theshold for the defragmentation
+-		 * is 1M, since our maximum clustersize was 1M also.
+-		 * any thought?
+-		 */
+-		if (!range.me_threshold)
+-			range.me_threshold = 1024 * 1024;
+-
+-		if (range.me_threshold > i_size_read(inode))
+-			range.me_threshold = i_size_read(inode);
+ 
+ 		if (range.me_flags & OCFS2_MOVE_EXT_FL_PART_DEFRAG)
+ 			context->partial = 1;
+diff --git a/fs/ubifs/budget.c b/fs/ubifs/budget.c
+index c0b84e960b20c..9cb05ef9b9dd9 100644
+--- a/fs/ubifs/budget.c
++++ b/fs/ubifs/budget.c
+@@ -212,11 +212,10 @@ long long ubifs_calc_available(const struct ubifs_info *c, int min_idx_lebs)
+ 	subtract_lebs += 1;
+ 
+ 	/*
+-	 * The GC journal head LEB is not really accessible. And since
+-	 * different write types go to different heads, we may count only on
+-	 * one head's space.
++	 * Since different write types go to different heads, we should
++	 * reserve one leb for each head.
+ 	 */
+-	subtract_lebs += c->jhead_cnt - 1;
++	subtract_lebs += c->jhead_cnt;
+ 
+ 	/* We also reserve one LEB for deletions, which bypass budgeting */
+ 	subtract_lebs += 1;
+@@ -403,7 +402,7 @@ static int calc_dd_growth(const struct ubifs_info *c,
+ 	dd_growth = req->dirtied_page ? c->bi.page_budget : 0;
+ 
+ 	if (req->dirtied_ino)
+-		dd_growth += c->bi.inode_budget << (req->dirtied_ino - 1);
++		dd_growth += c->bi.inode_budget * req->dirtied_ino;
+ 	if (req->mod_dent)
+ 		dd_growth += c->bi.dent_budget;
+ 	dd_growth += req->dirtied_ino_d;
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 9257ee893bdb8..6039943877e10 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -1117,7 +1117,6 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ 	int err, sz_change, len = strlen(symname);
+ 	struct fscrypt_str disk_link;
+ 	struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1,
+-					.new_ino_d = ALIGN(len, 8),
+ 					.dirtied_ino = 1 };
+ 	struct fscrypt_name nm;
+ 
+@@ -1133,6 +1132,7 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ 	 * Budget request settings: new inode, new direntry and changing parent
+ 	 * directory inode.
+ 	 */
++	req.new_ino_d = ALIGN(disk_link.len - 1, 8);
+ 	err = ubifs_budget_space(c, &req);
+ 	if (err)
+ 		return err;
+@@ -1288,6 +1288,8 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	if (unlink) {
+ 		ubifs_assert(c, inode_is_locked(new_inode));
+ 
++		/* Budget for old inode's data when its nlink > 1. */
++		req.dirtied_ino_d = ALIGN(ubifs_inode(new_inode)->data_len, 8);
+ 		err = ubifs_purge_xattrs(new_inode);
+ 		if (err)
+ 			return err;
+@@ -1530,6 +1532,10 @@ static int ubifs_xrename(struct inode *old_dir, struct dentry *old_dentry,
+ 		return err;
+ 	}
+ 
++	err = ubifs_budget_space(c, &req);
++	if (err)
++		goto out;
++
+ 	lock_4_inodes(old_dir, new_dir, NULL, NULL);
+ 
+ 	time = current_time(old_dir);
+@@ -1555,6 +1561,7 @@ static int ubifs_xrename(struct inode *old_dir, struct dentry *old_dentry,
+ 	unlock_4_inodes(old_dir, new_dir, NULL, NULL);
+ 	ubifs_release_budget(c, &req);
+ 
++out:
+ 	fscrypt_free_filename(&fst_nm);
+ 	fscrypt_free_filename(&snd_nm);
+ 	return err;
+diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
+index 354457e846cda..19fdcda045890 100644
+--- a/fs/ubifs/file.c
++++ b/fs/ubifs/file.c
+@@ -1031,7 +1031,7 @@ static int ubifs_writepage(struct page *page, struct writeback_control *wbc)
+ 		if (page->index >= synced_i_size >> PAGE_SHIFT) {
+ 			err = inode->i_sb->s_op->write_inode(inode, NULL);
+ 			if (err)
+-				goto out_unlock;
++				goto out_redirty;
+ 			/*
+ 			 * The inode has been written, but the write-buffer has
+ 			 * not been synchronized, so in case of an unclean
+@@ -1059,11 +1059,17 @@ static int ubifs_writepage(struct page *page, struct writeback_control *wbc)
+ 	if (i_size > synced_i_size) {
+ 		err = inode->i_sb->s_op->write_inode(inode, NULL);
+ 		if (err)
+-			goto out_unlock;
++			goto out_redirty;
+ 	}
+ 
+ 	return do_writepage(page, len);
+-
++out_redirty:
++	/*
++	 * redirty_page_for_writepage() won't call ubifs_dirty_inode() because
++	 * it passes I_DIRTY_PAGES flag while calling __mark_inode_dirty(), so
++	 * there is no need to do space budget for dirty inode.
++	 */
++	redirty_page_for_writepage(wbc, page);
+ out_unlock:
+ 	unlock_page(page);
+ 	return err;
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index 6a8f9efc2e2f0..1df193c87e920 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -833,7 +833,7 @@ static int alloc_wbufs(struct ubifs_info *c)
+ 		INIT_LIST_HEAD(&c->jheads[i].buds_list);
+ 		err = ubifs_wbuf_init(c, &c->jheads[i].wbuf);
+ 		if (err)
+-			return err;
++			goto out_wbuf;
+ 
+ 		c->jheads[i].wbuf.sync_callback = &bud_wbuf_callback;
+ 		c->jheads[i].wbuf.jhead = i;
+@@ -841,7 +841,7 @@ static int alloc_wbufs(struct ubifs_info *c)
+ 		c->jheads[i].log_hash = ubifs_hash_get_desc(c);
+ 		if (IS_ERR(c->jheads[i].log_hash)) {
+ 			err = PTR_ERR(c->jheads[i].log_hash);
+-			goto out;
++			goto out_log_hash;
+ 		}
+ 	}
+ 
+@@ -854,9 +854,18 @@ static int alloc_wbufs(struct ubifs_info *c)
+ 
+ 	return 0;
+ 
+-out:
+-	while (i--)
++out_log_hash:
++	kfree(c->jheads[i].wbuf.buf);
++	kfree(c->jheads[i].wbuf.inodes);
++
++out_wbuf:
++	while (i--) {
++		kfree(c->jheads[i].wbuf.buf);
++		kfree(c->jheads[i].wbuf.inodes);
+ 		kfree(c->jheads[i].log_hash);
++	}
++	kfree(c->jheads);
++	c->jheads = NULL;
+ 
+ 	return err;
+ }
+diff --git a/fs/ubifs/tnc.c b/fs/ubifs/tnc.c
+index 894f1ab14616e..07470449b9602 100644
+--- a/fs/ubifs/tnc.c
++++ b/fs/ubifs/tnc.c
+@@ -267,11 +267,18 @@ static struct ubifs_znode *dirty_cow_znode(struct ubifs_info *c,
+ 	if (zbr->len) {
+ 		err = insert_old_idx(c, zbr->lnum, zbr->offs);
+ 		if (unlikely(err))
+-			return ERR_PTR(err);
++			/*
++			 * Obsolete znodes will be freed by tnc_destroy_cnext()
++			 * or free_obsolete_znodes(), copied up znodes should
++			 * be added back to tnc and freed by
++			 * ubifs_destroy_tnc_subtree().
++			 */
++			goto out;
+ 		err = add_idx_dirt(c, zbr->lnum, zbr->len);
+ 	} else
+ 		err = 0;
+ 
++out:
+ 	zbr->znode = zn;
+ 	zbr->lnum = 0;
+ 	zbr->offs = 0;
+@@ -3053,6 +3060,21 @@ static void tnc_destroy_cnext(struct ubifs_info *c)
+ 		cnext = cnext->cnext;
+ 		if (ubifs_zn_obsolete(znode))
+ 			kfree(znode);
++		else if (!ubifs_zn_cow(znode)) {
++			/*
++			 * Don't forget to update clean znode count after
++			 * committing failed, because ubifs will check this
++			 * count while closing tnc. Non-obsolete znode could
++			 * be re-dirtied during committing process, so dirty
++			 * flag is untrustable. The flag 'COW_ZNODE' is set
++			 * for each dirty znode before committing, and it is
++			 * cleared as long as the znode become clean, so we
++			 * can statistic clean znode count according to this
++			 * flag.
++			 */
++			atomic_long_inc(&c->clean_zn_cnt);
++			atomic_long_inc(&ubifs_clean_zn_cnt);
++		}
+ 	} while (cnext && cnext != c->cnext);
+ }
+ 
+diff --git a/fs/ubifs/ubifs.h b/fs/ubifs/ubifs.h
+index e7e48f3b179ab..b66ebab5c5dec 100644
+--- a/fs/ubifs/ubifs.h
++++ b/fs/ubifs/ubifs.h
+@@ -1594,8 +1594,13 @@ static inline int ubifs_check_hmac(const struct ubifs_info *c,
+ 	return crypto_memneq(expected, got, c->hmac_desc_len);
+ }
+ 
++#ifdef CONFIG_UBIFS_FS_AUTHENTICATION
+ void ubifs_bad_hash(const struct ubifs_info *c, const void *node,
+ 		    const u8 *hash, int lnum, int offs);
++#else
++static inline void ubifs_bad_hash(const struct ubifs_info *c, const void *node,
++				  const u8 *hash, int lnum, int offs) {};
++#endif
+ 
+ int __ubifs_node_check_hash(const struct ubifs_info *c, const void *buf,
+ 			  const u8 *expected);
+diff --git a/fs/udf/file.c b/fs/udf/file.c
+index ad8eefad27d7f..e283a62701b83 100644
+--- a/fs/udf/file.c
++++ b/fs/udf/file.c
+@@ -147,26 +147,24 @@ static ssize_t udf_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 		goto out;
+ 
+ 	down_write(&iinfo->i_data_sem);
+-	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) {
+-		loff_t end = iocb->ki_pos + iov_iter_count(from);
+-
+-		if (inode->i_sb->s_blocksize <
+-				(udf_file_entry_alloc_offset(inode) + end)) {
+-			err = udf_expand_file_adinicb(inode);
+-			if (err) {
+-				inode_unlock(inode);
+-				udf_debug("udf_expand_adinicb: err=%d\n", err);
+-				return err;
+-			}
+-		} else {
+-			iinfo->i_lenAlloc = max(end, inode->i_size);
+-			up_write(&iinfo->i_data_sem);
++	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB &&
++	    inode->i_sb->s_blocksize < (udf_file_entry_alloc_offset(inode) +
++				 iocb->ki_pos + iov_iter_count(from))) {
++		err = udf_expand_file_adinicb(inode);
++		if (err) {
++			inode_unlock(inode);
++			udf_debug("udf_expand_adinicb: err=%d\n", err);
++			return err;
+ 		}
+ 	} else
+ 		up_write(&iinfo->i_data_sem);
+ 
+ 	retval = __generic_file_write_iter(iocb, from);
+ out:
++	down_write(&iinfo->i_data_sem);
++	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB && retval > 0)
++		iinfo->i_lenAlloc = inode->i_size;
++	up_write(&iinfo->i_data_sem);
+ 	inode_unlock(inode);
+ 
+ 	if (retval > 0) {
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index 2132bfab67f35..81876284a83c0 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -525,8 +525,10 @@ static int udf_do_extend_file(struct inode *inode,
+ 	}
+ 
+ 	if (fake) {
+-		udf_add_aext(inode, last_pos, &last_ext->extLocation,
+-			     last_ext->extLength, 1);
++		err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
++				   last_ext->extLength, 1);
++		if (err < 0)
++			goto out_err;
+ 		count++;
+ 	} else {
+ 		struct kernel_lb_addr tmploc;
+@@ -560,7 +562,7 @@ static int udf_do_extend_file(struct inode *inode,
+ 		err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
+ 				   last_ext->extLength, 1);
+ 		if (err)
+-			return err;
++			goto out_err;
+ 		count++;
+ 	}
+ 	if (new_block_bytes) {
+@@ -569,7 +571,7 @@ static int udf_do_extend_file(struct inode *inode,
+ 		err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
+ 				   last_ext->extLength, 1);
+ 		if (err)
+-			return err;
++			goto out_err;
+ 		count++;
+ 	}
+ 
+@@ -583,6 +585,11 @@ out:
+ 		return -EIO;
+ 
+ 	return count;
++out_err:
++	/* Remove extents we've created so far */
++	udf_clear_extent_cache(inode);
++	udf_truncate_extents(inode);
++	return err;
+ }
+ 
+ /* Extend the final block of the file to final_block_len bytes */
+@@ -797,19 +804,17 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
+ 		c = 0;
+ 		offset = 0;
+ 		count += ret;
+-		/* We are not covered by a preallocated extent? */
+-		if ((laarr[0].extLength & UDF_EXTENT_FLAG_MASK) !=
+-						EXT_NOT_RECORDED_ALLOCATED) {
+-			/* Is there any real extent? - otherwise we overwrite
+-			 * the fake one... */
+-			if (count)
+-				c = !c;
+-			laarr[c].extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
+-				inode->i_sb->s_blocksize;
+-			memset(&laarr[c].extLocation, 0x00,
+-				sizeof(struct kernel_lb_addr));
+-			count++;
+-		}
++		/*
++		 * Is there any real extent? - otherwise we overwrite the fake
++		 * one...
++		 */
++		if (count)
++			c = !c;
++		laarr[c].extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
++			inode->i_sb->s_blocksize;
++		memset(&laarr[c].extLocation, 0x00,
++			sizeof(struct kernel_lb_addr));
++		count++;
+ 		endnum = c + 1;
+ 		lastblock = 1;
+ 	} else {
+@@ -1086,23 +1091,8 @@ static void udf_merge_extents(struct inode *inode, struct kernel_long_ad *laarr,
+ 			blocksize - 1) >> blocksize_bits)))) {
+ 
+ 			if (((li->extLength & UDF_EXTENT_LENGTH_MASK) +
+-				(lip1->extLength & UDF_EXTENT_LENGTH_MASK) +
+-				blocksize - 1) & ~UDF_EXTENT_LENGTH_MASK) {
+-				lip1->extLength = (lip1->extLength -
+-						  (li->extLength &
+-						   UDF_EXTENT_LENGTH_MASK) +
+-						   UDF_EXTENT_LENGTH_MASK) &
+-							~(blocksize - 1);
+-				li->extLength = (li->extLength &
+-						 UDF_EXTENT_FLAG_MASK) +
+-						(UDF_EXTENT_LENGTH_MASK + 1) -
+-						blocksize;
+-				lip1->extLocation.logicalBlockNum =
+-					li->extLocation.logicalBlockNum +
+-					((li->extLength &
+-						UDF_EXTENT_LENGTH_MASK) >>
+-						blocksize_bits);
+-			} else {
++			     (lip1->extLength & UDF_EXTENT_LENGTH_MASK) +
++			     blocksize - 1) <= UDF_EXTENT_LENGTH_MASK) {
+ 				li->extLength = lip1->extLength +
+ 					(((li->extLength &
+ 						UDF_EXTENT_LENGTH_MASK) +
+@@ -1393,6 +1383,7 @@ reread:
+ 		ret = -EIO;
+ 		goto out;
+ 	}
++	iinfo->i_hidden = hidden_inode;
+ 	iinfo->i_unique = 0;
+ 	iinfo->i_lenEAttr = 0;
+ 	iinfo->i_lenExtents = 0;
+@@ -1728,8 +1719,12 @@ static int udf_update_inode(struct inode *inode, int do_sync)
+ 
+ 	if (S_ISDIR(inode->i_mode) && inode->i_nlink > 0)
+ 		fe->fileLinkCount = cpu_to_le16(inode->i_nlink - 1);
+-	else
+-		fe->fileLinkCount = cpu_to_le16(inode->i_nlink);
++	else {
++		if (iinfo->i_hidden)
++			fe->fileLinkCount = cpu_to_le16(0);
++		else
++			fe->fileLinkCount = cpu_to_le16(inode->i_nlink);
++	}
+ 
+ 	fe->informationLength = cpu_to_le64(inode->i_size);
+ 
+@@ -1900,8 +1895,13 @@ struct inode *__udf_iget(struct super_block *sb, struct kernel_lb_addr *ino,
+ 	if (!inode)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	if (!(inode->i_state & I_NEW))
++	if (!(inode->i_state & I_NEW)) {
++		if (UDF_I(inode)->i_hidden != hidden_inode) {
++			iput(inode);
++			return ERR_PTR(-EFSCORRUPTED);
++		}
+ 		return inode;
++	}
+ 
+ 	memcpy(&UDF_I(inode)->i_location, ino, sizeof(struct kernel_lb_addr));
+ 	err = udf_read_inode(inode, hidden_inode);
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 3448098e54768..4af9ce34ee804 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -147,6 +147,7 @@ static struct inode *udf_alloc_inode(struct super_block *sb)
+ 	ei->i_next_alloc_goal = 0;
+ 	ei->i_strat4096 = 0;
+ 	ei->i_streamdir = 0;
++	ei->i_hidden = 0;
+ 	init_rwsem(&ei->i_data_sem);
+ 	ei->cached_extent.lstart = -1;
+ 	spin_lock_init(&ei->i_extent_cache_lock);
+diff --git a/fs/udf/udf_i.h b/fs/udf/udf_i.h
+index 06ff7006b8227..312b7c9ef10e2 100644
+--- a/fs/udf/udf_i.h
++++ b/fs/udf/udf_i.h
+@@ -44,7 +44,8 @@ struct udf_inode_info {
+ 	unsigned		i_use : 1;	/* unallocSpaceEntry */
+ 	unsigned		i_strat4096 : 1;
+ 	unsigned		i_streamdir : 1;
+-	unsigned		reserved : 25;
++	unsigned		i_hidden : 1;	/* hidden system inode */
++	unsigned		reserved : 24;
+ 	__u8			*i_data;
+ 	struct kernel_lb_addr	i_locStreamdir;
+ 	__u64			i_lenStreams;
+diff --git a/fs/udf/udf_sb.h b/fs/udf/udf_sb.h
+index 4fa620543d302..2205859731dc2 100644
+--- a/fs/udf/udf_sb.h
++++ b/fs/udf/udf_sb.h
+@@ -51,6 +51,8 @@
+ #define MF_DUPLICATE_MD		0x01
+ #define MF_MIRROR_FE_LOADED	0x02
+ 
++#define EFSCORRUPTED EUCLEAN
++
+ struct udf_meta_data {
+ 	__u32	s_meta_file_loc;
+ 	__u32	s_mirror_file_loc;
+diff --git a/include/drm/drm_mipi_dsi.h b/include/drm/drm_mipi_dsi.h
+index 360e6377e84ba..31ba85a4110a8 100644
+--- a/include/drm/drm_mipi_dsi.h
++++ b/include/drm/drm_mipi_dsi.h
+@@ -283,6 +283,10 @@ int mipi_dsi_dcs_set_display_brightness(struct mipi_dsi_device *dsi,
+ 					u16 brightness);
+ int mipi_dsi_dcs_get_display_brightness(struct mipi_dsi_device *dsi,
+ 					u16 *brightness);
++int mipi_dsi_dcs_set_display_brightness_large(struct mipi_dsi_device *dsi,
++					     u16 brightness);
++int mipi_dsi_dcs_get_display_brightness_large(struct mipi_dsi_device *dsi,
++					     u16 *brightness);
+ 
+ /**
+  * struct mipi_dsi_driver - DSI driver
+diff --git a/include/linux/bootconfig.h b/include/linux/bootconfig.h
+index 2696eb0fc1497..df9cbf02d0303 100644
+--- a/include/linux/bootconfig.h
++++ b/include/linux/bootconfig.h
+@@ -29,7 +29,7 @@ struct xbc_node {
+ /* Maximum size of boot config is 32KB - 1 */
+ #define XBC_DATA_MAX	(XBC_VALUE - 1)
+ 
+-#define XBC_NODE_MAX	1024
++#define XBC_NODE_MAX	8192
+ #define XBC_KEYLEN_MAX	256
+ #define XBC_DEPTH_MAX	16
+ 
+diff --git a/include/linux/ima.h b/include/linux/ima.h
+index 8fa7bcfb2da2c..cd8483fa703ea 100644
+--- a/include/linux/ima.h
++++ b/include/linux/ima.h
+@@ -18,7 +18,8 @@ extern int ima_bprm_check(struct linux_binprm *bprm);
+ extern int ima_file_check(struct file *file, int mask);
+ extern void ima_post_create_tmpfile(struct inode *inode);
+ extern void ima_file_free(struct file *file);
+-extern int ima_file_mmap(struct file *file, unsigned long prot);
++extern int ima_file_mmap(struct file *file, unsigned long reqprot,
++			 unsigned long prot, unsigned long flags);
+ extern int ima_file_mprotect(struct vm_area_struct *vma, unsigned long prot);
+ extern int ima_load_data(enum kernel_load_data_id id, bool contents);
+ extern int ima_post_load_data(char *buf, loff_t size,
+@@ -70,7 +71,8 @@ static inline void ima_file_free(struct file *file)
+ 	return;
+ }
+ 
+-static inline int ima_file_mmap(struct file *file, unsigned long prot)
++static inline int ima_file_mmap(struct file *file, unsigned long reqprot,
++				unsigned long prot, unsigned long flags)
+ {
+ 	return 0;
+ }
+diff --git a/include/linux/kernel.h b/include/linux/kernel.h
+index 394f10fc29aad..66948e1bf4fa6 100644
+--- a/include/linux/kernel.h
++++ b/include/linux/kernel.h
+@@ -47,6 +47,8 @@
+  */
+ #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))
+ 
++#define PTR_IF(cond, ptr)	((cond) ? (ptr) : NULL)
++
+ #define u64_to_user_ptr(x) (		\
+ {					\
+ 	typecheck(u64, (x));		\
+diff --git a/include/linux/kernel_stat.h b/include/linux/kernel_stat.h
+index 8fff3500d50ee..1160e20995a02 100644
+--- a/include/linux/kernel_stat.h
++++ b/include/linux/kernel_stat.h
+@@ -73,7 +73,7 @@ extern unsigned int kstat_irqs_usr(unsigned int irq);
+ /*
+  * Number of interrupts per cpu, since bootup
+  */
+-static inline unsigned int kstat_cpu_irqs_sum(unsigned int cpu)
++static inline unsigned long kstat_cpu_irqs_sum(unsigned int cpu)
+ {
+ 	return kstat_cpu(cpu).irqs_sum;
+ }
+diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
+index 4dbebd319b6f5..18b7c40ffb37a 100644
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -342,6 +342,8 @@ extern int proc_kprobes_optimization_handler(struct ctl_table *table,
+ 					     size_t *length, loff_t *ppos);
+ #endif
+ extern void wait_for_kprobe_optimizer(void);
++bool optprobe_queued_unopt(struct optimized_kprobe *op);
++bool kprobe_disarmed(struct kprobe *p);
+ #else
+ static inline void wait_for_kprobe_optimizer(void) { }
+ #endif /* CONFIG_OPTPROBES */
+diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
+index 5491ad5f48a94..33442fd018a06 100644
+--- a/include/linux/nfs_xdr.h
++++ b/include/linux/nfs_xdr.h
+@@ -1789,6 +1789,8 @@ struct nfs_rpc_ops {
+ 	struct nfs_server *(*create_server)(struct fs_context *);
+ 	struct nfs_server *(*clone_server)(struct nfs_server *, struct nfs_fh *,
+ 					   struct nfs_fattr *, rpc_authflavor_t);
++	void	(*enable_swap)(struct inode *inode);
++	void	(*disable_swap)(struct inode *inode);
+ };
+ 
+ /*
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 692ce678c5f1c..4cc42ad2f6c52 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -539,6 +539,7 @@ struct pci_host_bridge {
+ 	struct msi_controller *msi;
+ 	unsigned int	ignore_reset_delay:1;	/* For entire hierarchy */
+ 	unsigned int	no_ext_tags:1;		/* No Extended Tags */
++	unsigned int	no_inc_mrrs:1;		/* No Increase MRRS */
+ 	unsigned int	native_aer:1;		/* OS may use PCIe AER */
+ 	unsigned int	native_pcie_hotplug:1;	/* OS may use PCIe hotplug */
+ 	unsigned int	native_shpc_hotplug:1;	/* OS may use SHPC hotplug */
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 69e310173fbca..2e1935917c241 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -3033,6 +3033,8 @@
+ #define PCI_DEVICE_ID_INTEL_VMD_9A0B	0x9a0b
+ #define PCI_DEVICE_ID_INTEL_S21152BB	0xb152
+ 
++#define PCI_VENDOR_ID_WANGXUN		0x8088
++
+ #define PCI_VENDOR_ID_SCALEMP		0x8686
+ #define PCI_DEVICE_ID_SCALEMP_VSMP_CTL	0x1010
+ 
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 095b3b39bd032..ef8d56b18da6b 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -189,6 +189,7 @@ void synchronize_rcu_tasks_rude(void);
+ 
+ #define rcu_note_voluntary_context_switch(t) rcu_tasks_qs(t, false)
+ void exit_tasks_rcu_start(void);
++void exit_tasks_rcu_stop(void);
+ void exit_tasks_rcu_finish(void);
+ #else /* #ifdef CONFIG_TASKS_RCU_GENERIC */
+ #define rcu_tasks_qs(t, preempt) do { } while (0)
+@@ -196,6 +197,7 @@ void exit_tasks_rcu_finish(void);
+ #define call_rcu_tasks call_rcu
+ #define synchronize_rcu_tasks synchronize_rcu
+ static inline void exit_tasks_rcu_start(void) { }
++static inline void exit_tasks_rcu_stop(void) { }
+ static inline void exit_tasks_rcu_finish(void) { }
+ #endif /* #else #ifdef CONFIG_TASKS_RCU_GENERIC */
+ 
+@@ -306,11 +308,18 @@ static inline int rcu_read_lock_any_held(void)
+  * RCU_LOCKDEP_WARN - emit lockdep splat if specified condition is met
+  * @c: condition to check
+  * @s: informative message
++ *
++ * This checks debug_lockdep_rcu_enabled() before checking (c) to
++ * prevent early boot splats due to lockdep not yet being initialized,
++ * and rechecks it after checking (c) to prevent false-positive splats
++ * due to races with lockdep being disabled.  See commit 3066820034b5dd
++ * ("rcu: Reject RCU_LOCKDEP_WARN() false positives") for more detail.
+  */
+ #define RCU_LOCKDEP_WARN(c, s)						\
+ 	do {								\
+ 		static bool __section(".data.unlikely") __warned;	\
+-		if ((c) && debug_lockdep_rcu_enabled() && !__warned) {	\
++		if (debug_lockdep_rcu_enabled() && (c) &&		\
++		    debug_lockdep_rcu_enabled() && !__warned) {		\
+ 			__warned = true;				\
+ 			lockdep_rcu_suspicious(__FILE__, __LINE__, s);	\
+ 		}							\
+diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
+index c7c6e8b8344d4..20668760daa02 100644
+--- a/include/linux/uaccess.h
++++ b/include/linux/uaccess.h
+@@ -348,6 +348,10 @@ copy_struct_from_user(void *dst, size_t ksize, const void __user *src,
+ 	size_t size = min(ksize, usize);
+ 	size_t rest = max(ksize, usize) - size;
+ 
++	/* Double check if ksize is larger than a known object size. */
++	if (WARN_ON_ONCE(ksize > __builtin_object_size(dst, 1)))
++		return -E2BIG;
++
+ 	/* Deal with trailing bytes. */
+ 	if (usize < ksize) {
+ 		memset(dst + size, 0, rest);
+diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
+index be9ff0422c162..be59e8df0bffc 100644
+--- a/include/net/sctp/structs.h
++++ b/include/net/sctp/structs.h
+@@ -1394,6 +1394,7 @@ struct sctp_stream_priorities {
+ 	/* The next stream in line */
+ 	struct sctp_stream_out_ext *next;
+ 	__u16 prio;
++	__u16 users;
+ };
+ 
+ struct sctp_stream_out_ext {
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 0f48d50a6dde7..1d8529311d6f9 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1775,7 +1775,12 @@ void sk_common_release(struct sock *sk);
+  *	Default socket callbacks and setup code
+  */
+ 
+-/* Initialise core socket variables */
++/* Initialise core socket variables using an explicit uid. */
++void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid);
++
++/* Initialise core socket variables.
++ * Assumes struct socket *sock is embedded in a struct socket_alloc.
++ */
+ void sock_init_data(struct socket *sock, struct sock *sk);
+ 
+ /*
+diff --git a/include/sound/soc-dapm.h b/include/sound/soc-dapm.h
+index c3039e97929a5..32e93d55acf73 100644
+--- a/include/sound/soc-dapm.h
++++ b/include/sound/soc-dapm.h
+@@ -16,6 +16,7 @@
+ #include <sound/asoc.h>
+ 
+ struct device;
++struct snd_pcm_substream;
+ struct snd_soc_pcm_runtime;
+ struct soc_enum;
+ 
+diff --git a/include/uapi/linux/usb/video.h b/include/uapi/linux/usb/video.h
+index bfdae12cdacf8..c58854fb7d94a 100644
+--- a/include/uapi/linux/usb/video.h
++++ b/include/uapi/linux/usb/video.h
+@@ -179,6 +179,36 @@
+ #define UVC_CONTROL_CAP_AUTOUPDATE			(1 << 3)
+ #define UVC_CONTROL_CAP_ASYNCHRONOUS			(1 << 4)
+ 
++/* 3.9.2.6 Color Matching Descriptor Values */
++enum uvc_color_primaries_values {
++	UVC_COLOR_PRIMARIES_UNSPECIFIED,
++	UVC_COLOR_PRIMARIES_BT_709_SRGB,
++	UVC_COLOR_PRIMARIES_BT_470_2_M,
++	UVC_COLOR_PRIMARIES_BT_470_2_B_G,
++	UVC_COLOR_PRIMARIES_SMPTE_170M,
++	UVC_COLOR_PRIMARIES_SMPTE_240M,
++};
++
++enum uvc_transfer_characteristics_values {
++	UVC_TRANSFER_CHARACTERISTICS_UNSPECIFIED,
++	UVC_TRANSFER_CHARACTERISTICS_BT_709,
++	UVC_TRANSFER_CHARACTERISTICS_BT_470_2_M,
++	UVC_TRANSFER_CHARACTERISTICS_BT_470_2_B_G,
++	UVC_TRANSFER_CHARACTERISTICS_SMPTE_170M,
++	UVC_TRANSFER_CHARACTERISTICS_SMPTE_240M,
++	UVC_TRANSFER_CHARACTERISTICS_LINEAR,
++	UVC_TRANSFER_CHARACTERISTICS_SRGB,
++};
++
++enum uvc_matrix_coefficients {
++	UVC_MATRIX_COEFFICIENTS_UNSPECIFIED,
++	UVC_MATRIX_COEFFICIENTS_BT_709,
++	UVC_MATRIX_COEFFICIENTS_FCC,
++	UVC_MATRIX_COEFFICIENTS_BT_470_2_B_G,
++	UVC_MATRIX_COEFFICIENTS_SMPTE_170M,
++	UVC_MATRIX_COEFFICIENTS_SMPTE_240M,
++};
++
+ /* ------------------------------------------------------------------------
+  * UVC structures
+  */
+diff --git a/include/uapi/linux/uvcvideo.h b/include/uapi/linux/uvcvideo.h
+index f80f05b3c423f..2140923661934 100644
+--- a/include/uapi/linux/uvcvideo.h
++++ b/include/uapi/linux/uvcvideo.h
+@@ -86,7 +86,7 @@ struct uvc_xu_control_query {
+  * struct. The first two fields are added by the driver, they can be used for
+  * clock synchronisation. The rest is an exact copy of a UVC payload header.
+  * Only complete objects with complete buffers are included. Therefore it's
+- * always sizeof(meta->ts) + sizeof(meta->sof) + meta->length bytes large.
++ * always sizeof(meta->ns) + sizeof(meta->sof) + meta->length bytes large.
+  */
+ struct uvc_meta_buf {
+ 	__u64 ns;
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index cf6f8aeb450db..445afda927f47 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -486,6 +486,7 @@ struct io_poll_iocb {
+ 	struct file			*file;
+ 	struct wait_queue_head		*head;
+ 	__poll_t			events;
++	int				retries;
+ 	struct wait_queue_entry		wait;
+ };
+ 
+@@ -2460,6 +2461,15 @@ static inline unsigned int io_put_rw_kbuf(struct io_kiocb *req)
+ 
+ static inline bool io_run_task_work(void)
+ {
++	/*
++	 * PF_IO_WORKER never returns to userspace, so check here if we have
++	 * notify work that needs processing.
++	 */
++	if (current->flags & PF_IO_WORKER &&
++	    test_thread_flag(TIF_NOTIFY_RESUME)) {
++		__set_current_state(TASK_RUNNING);
++		tracehook_notify_resume(NULL);
++	}
+ 	if (test_thread_flag(TIF_NOTIFY_SIGNAL) || current->task_works) {
+ 		__set_current_state(TASK_RUNNING);
+ 		tracehook_notify_signal();
+@@ -4986,7 +4996,7 @@ static int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
+ 	sr->len = READ_ONCE(sqe->len);
+ 	sr->bgid = READ_ONCE(sqe->buf_group);
+-	sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
++	sr->msg_flags = READ_ONCE(sqe->msg_flags);
+ 	if (sr->msg_flags & MSG_DONTWAIT)
+ 		req->flags |= REQ_F_NOWAIT;
+ 
+@@ -5740,6 +5750,14 @@ enum {
+ 	IO_APOLL_READY
+ };
+ 
++/*
++ * We can't reliably detect loops in repeated poll triggers and issue
++ * subsequently failing. But rather than fail these immediately, allow a
++ * certain amount of retries before we give up. Given that this condition
++ * should _rarely_ trigger even once, we should be fine with a larger value.
++ */
++#define APOLL_MAX_RETRY		128
++
+ static int io_arm_poll_handler(struct io_kiocb *req)
+ {
+ 	const struct io_op_def *def = &io_op_defs[req->opcode];
+@@ -5751,8 +5769,6 @@ static int io_arm_poll_handler(struct io_kiocb *req)
+ 
+ 	if (!req->file || !file_can_poll(req->file))
+ 		return IO_APOLL_ABORTED;
+-	if ((req->flags & (REQ_F_POLLED|REQ_F_PARTIAL_IO)) == REQ_F_POLLED)
+-		return IO_APOLL_ABORTED;
+ 	if (!def->pollin && !def->pollout)
+ 		return IO_APOLL_ABORTED;
+ 
+@@ -5770,8 +5786,13 @@ static int io_arm_poll_handler(struct io_kiocb *req)
+ 	if (req->flags & REQ_F_POLLED) {
+ 		apoll = req->apoll;
+ 		kfree(apoll->double_poll);
++		if (unlikely(!--apoll->poll.retries)) {
++			apoll->double_poll = NULL;
++			return IO_APOLL_ABORTED;
++		}
+ 	} else {
+ 		apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
++		apoll->poll.retries = APOLL_MAX_RETRY;
+ 	}
+ 	if (unlikely(!apoll))
+ 		return IO_APOLL_ABORTED;
+@@ -9048,14 +9069,17 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
+ 	pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
+ 			      pages, vmas);
+ 	if (pret == nr_pages) {
++		struct file *file = vmas[0]->vm_file;
++
+ 		/* don't support file backed memory */
+ 		for (i = 0; i < nr_pages; i++) {
+-			struct vm_area_struct *vma = vmas[i];
+-
+-			if (vma_is_shmem(vma))
++			if (vmas[i]->vm_file != file) {
++				ret = -EINVAL;
++				break;
++			}
++			if (!file)
+ 				continue;
+-			if (vma->vm_file &&
+-			    !is_file_hugepages(vma->vm_file)) {
++			if (!vma_is_shmem(vmas[i]) && !is_file_hugepages(file)) {
+ 				ret = -EOPNOTSUPP;
+ 				break;
+ 			}
+@@ -9681,6 +9705,7 @@ static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
+ 			while (!list_empty_careful(&ctx->iopoll_list)) {
+ 				io_iopoll_try_reap_events(ctx);
+ 				ret = true;
++				cond_resched();
+ 			}
+ 		}
+ 
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 52e7048607399..11b612e94e4e1 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -4273,6 +4273,7 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, struct btf *btf,
+ 	if (!ctx_struct)
+ 		/* should not happen */
+ 		return NULL;
++again:
+ 	ctx_tname = btf_name_by_offset(btf_vmlinux, ctx_struct->name_off);
+ 	if (!ctx_tname) {
+ 		/* should not happen */
+@@ -4286,8 +4287,16 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, struct btf *btf,
+ 	 * int socket_filter_bpf_prog(struct __sk_buff *skb)
+ 	 * { // no fields of skb are ever used }
+ 	 */
+-	if (strcmp(ctx_tname, tname))
+-		return NULL;
++	if (strcmp(ctx_tname, tname)) {
++		/* bpf_user_pt_regs_t is a typedef, so resolve it to
++		 * underlying struct and check name again
++		 */
++		if (!btf_type_is_modifier(ctx_struct))
++			return NULL;
++		while (btf_type_is_modifier(ctx_struct))
++			ctx_struct = btf_type_by_id(btf_vmlinux, ctx_struct->type);
++		goto again;
++	}
+ 	return ctx_type;
+ }
+ 
+diff --git a/kernel/fail_function.c b/kernel/fail_function.c
+index b0b1ad93fa957..8f3795d8ac5b0 100644
+--- a/kernel/fail_function.c
++++ b/kernel/fail_function.c
+@@ -163,10 +163,7 @@ static void fei_debugfs_add_attr(struct fei_attr *attr)
+ 
+ static void fei_debugfs_remove_attr(struct fei_attr *attr)
+ {
+-	struct dentry *dir;
+-
+-	dir = debugfs_lookup(attr->kp.symbol_name, fei_debugfs_dir);
+-	debugfs_remove_recursive(dir);
++	debugfs_lookup_and_remove(attr->kp.symbol_name, fei_debugfs_dir);
+ }
+ 
+ static int fei_kprobe_handler(struct kprobe *kp, struct pt_regs *regs)
+diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
+index c6b419db68efc..1720998933f8d 100644
+--- a/kernel/irq/irqdomain.c
++++ b/kernel/irq/irqdomain.c
+@@ -495,6 +495,9 @@ void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
+ 		return;
+ 
+ 	hwirq = irq_data->hwirq;
++
++	mutex_lock(&irq_domain_mutex);
++
+ 	irq_set_status_flags(irq, IRQ_NOREQUEST);
+ 
+ 	/* remove chip and handler */
+@@ -514,10 +517,12 @@ void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
+ 
+ 	/* Clear reverse map for this hwirq */
+ 	irq_domain_clear_mapping(domain, hwirq);
++
++	mutex_unlock(&irq_domain_mutex);
+ }
+ 
+-int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
+-			 irq_hw_number_t hwirq)
++static int irq_domain_associate_locked(struct irq_domain *domain, unsigned int virq,
++				       irq_hw_number_t hwirq)
+ {
+ 	struct irq_data *irq_data = irq_get_irq_data(virq);
+ 	int ret;
+@@ -530,7 +535,6 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
+ 	if (WARN(irq_data->domain, "error: virq%i is already associated", virq))
+ 		return -EINVAL;
+ 
+-	mutex_lock(&irq_domain_mutex);
+ 	irq_data->hwirq = hwirq;
+ 	irq_data->domain = domain;
+ 	if (domain->ops->map) {
+@@ -547,7 +551,6 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
+ 			}
+ 			irq_data->domain = NULL;
+ 			irq_data->hwirq = 0;
+-			mutex_unlock(&irq_domain_mutex);
+ 			return ret;
+ 		}
+ 
+@@ -558,12 +561,23 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
+ 
+ 	domain->mapcount++;
+ 	irq_domain_set_mapping(domain, hwirq, irq_data);
+-	mutex_unlock(&irq_domain_mutex);
+ 
+ 	irq_clear_status_flags(virq, IRQ_NOREQUEST);
+ 
+ 	return 0;
+ }
++
++int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
++			 irq_hw_number_t hwirq)
++{
++	int ret;
++
++	mutex_lock(&irq_domain_mutex);
++	ret = irq_domain_associate_locked(domain, virq, hwirq);
++	mutex_unlock(&irq_domain_mutex);
++
++	return ret;
++}
+ EXPORT_SYMBOL_GPL(irq_domain_associate);
+ 
+ void irq_domain_associate_many(struct irq_domain *domain, unsigned int irq_base,
+@@ -823,13 +837,8 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
+ 	}
+ 
+ 	irq_data = irq_get_irq_data(virq);
+-	if (!irq_data) {
+-		if (irq_domain_is_hierarchy(domain))
+-			irq_domain_free_irqs(virq, 1);
+-		else
+-			irq_dispose_mapping(virq);
++	if (WARN_ON(!irq_data))
+ 		return 0;
+-	}
+ 
+ 	/* Store trigger type */
+ 	irqd_set_trigger_type(irq_data, type);
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 75150e7555180..86d71c49b4957 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -447,8 +447,8 @@ static inline int kprobe_optready(struct kprobe *p)
+ 	return 0;
+ }
+ 
+-/* Return true(!0) if the kprobe is disarmed. Note: p must be on hash list */
+-static inline int kprobe_disarmed(struct kprobe *p)
++/* Return true if the kprobe is disarmed. Note: p must be on hash list */
++bool kprobe_disarmed(struct kprobe *p)
+ {
+ 	struct optimized_kprobe *op;
+ 
+@@ -652,7 +652,7 @@ void wait_for_kprobe_optimizer(void)
+ 	mutex_unlock(&kprobe_mutex);
+ }
+ 
+-static bool optprobe_queued_unopt(struct optimized_kprobe *op)
++bool optprobe_queued_unopt(struct optimized_kprobe *op)
+ {
+ 	struct optimized_kprobe *_op;
+ 
+diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c
+index ef8733e2a476e..20243682e6056 100644
+--- a/kernel/pid_namespace.c
++++ b/kernel/pid_namespace.c
+@@ -251,7 +251,24 @@ void zap_pid_ns_processes(struct pid_namespace *pid_ns)
+ 		set_current_state(TASK_INTERRUPTIBLE);
+ 		if (pid_ns->pid_allocated == init_pids)
+ 			break;
++		/*
++		 * Release tasks_rcu_exit_srcu to avoid following deadlock:
++		 *
++		 * 1) TASK A unshare(CLONE_NEWPID)
++		 * 2) TASK A fork() twice -> TASK B (child reaper for new ns)
++		 *    and TASK C
++		 * 3) TASK B exits, kills TASK C, waits for TASK A to reap it
++		 * 4) TASK A calls synchronize_rcu_tasks()
++		 *                   -> synchronize_srcu(tasks_rcu_exit_srcu)
++		 * 5) *DEADLOCK*
++		 *
++		 * It is considered safe to release tasks_rcu_exit_srcu here
++		 * because we assume the current task can not be concurrently
++		 * reaped at this point.
++		 */
++		exit_tasks_rcu_stop();
+ 		schedule();
++		exit_tasks_rcu_start();
+ 	}
+ 	__set_current_state(TASK_RUNNING);
+ 
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index 119b929dcff0f..334173fe6940e 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -72,10 +72,7 @@ static void em_debug_create_pd(struct device *dev)
+ 
+ static void em_debug_remove_pd(struct device *dev)
+ {
+-	struct dentry *debug_dir;
+-
+-	debug_dir = debugfs_lookup(dev_name(dev), rootdir);
+-	debugfs_remove_recursive(debug_dir);
++	debugfs_lookup_and_remove(dev_name(dev), rootdir);
+ }
+ 
+ static int __init em_debug_init(void)
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 8b51e6a5b3869..c66d47685b28e 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -171,8 +171,9 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
+ static void synchronize_rcu_tasks_generic(struct rcu_tasks *rtp)
+ {
+ 	/* Complain if the scheduler has not started.  */
+-	WARN_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
+-			 "synchronize_rcu_tasks called too soon");
++	if (WARN_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
++			 "synchronize_%s() called too soon", rtp->name))
++		return;
+ 
+ 	/* Wait for the grace period. */
+ 	wait_rcu_gp(rtp->call_func);
+@@ -416,11 +417,21 @@ static void rcu_tasks_pertask(struct task_struct *t, struct list_head *hop)
+ static void rcu_tasks_postscan(struct list_head *hop)
+ {
+ 	/*
+-	 * Wait for tasks that are in the process of exiting.  This
+-	 * does only part of the job, ensuring that all tasks that were
+-	 * previously exiting reach the point where they have disabled
+-	 * preemption, allowing the later synchronize_rcu() to finish
+-	 * the job.
++	 * Exiting tasks may escape the tasklist scan. Those are vulnerable
++	 * until their final schedule() with TASK_DEAD state. To cope with
++	 * this, divide the fragile exit path part in two intersecting
++	 * read side critical sections:
++	 *
++	 * 1) An _SRCU_ read side starting before calling exit_notify(),
++	 *    which may remove the task from the tasklist, and ending after
++	 *    the final preempt_disable() call in do_exit().
++	 *
++	 * 2) An _RCU_ read side starting with the final preempt_disable()
++	 *    call in do_exit() and ending with the final call to schedule()
++	 *    with TASK_DEAD state.
++	 *
++	 * This handles the part 1). And postgp will handle part 2) with a
++	 * call to synchronize_rcu().
+ 	 */
+ 	synchronize_srcu(&tasks_rcu_exit_srcu);
+ }
+@@ -487,7 +498,10 @@ static void rcu_tasks_postgp(struct rcu_tasks *rtp)
+ 	 *
+ 	 * In addition, this synchronize_rcu() waits for exiting tasks
+ 	 * to complete their final preempt_disable() region of execution,
+-	 * cleaning up after the synchronize_srcu() above.
++	 * cleaning up after synchronize_srcu(&tasks_rcu_exit_srcu),
++	 * enforcing the whole region before tasklist removal until
++	 * the final schedule() with TASK_DEAD state to be an RCU TASKS
++	 * read side critical section.
+ 	 */
+ 	synchronize_rcu();
+ }
+@@ -576,28 +590,43 @@ static void show_rcu_tasks_classic_gp_kthread(void)
+ }
+ #endif /* #ifndef CONFIG_TINY_RCU */
+ 
+-/* Do the srcu_read_lock() for the above synchronize_srcu().  */
++/*
++ * Contribute to protect against tasklist scan blind spot while the
++ * task is exiting and may be removed from the tasklist. See
++ * corresponding synchronize_srcu() for further details.
++ */
+ void exit_tasks_rcu_start(void) __acquires(&tasks_rcu_exit_srcu)
+ {
+-	preempt_disable();
+ 	current->rcu_tasks_idx = __srcu_read_lock(&tasks_rcu_exit_srcu);
+-	preempt_enable();
+ }
+ 
+-/* Do the srcu_read_unlock() for the above synchronize_srcu().  */
+-void exit_tasks_rcu_finish(void) __releases(&tasks_rcu_exit_srcu)
++/*
++ * Contribute to protect against tasklist scan blind spot while the
++ * task is exiting and may be removed from the tasklist. See
++ * corresponding synchronize_srcu() for further details.
++ */
++void exit_tasks_rcu_stop(void) __releases(&tasks_rcu_exit_srcu)
+ {
+ 	struct task_struct *t = current;
+ 
+-	preempt_disable();
+ 	__srcu_read_unlock(&tasks_rcu_exit_srcu, t->rcu_tasks_idx);
+-	preempt_enable();
+-	exit_tasks_rcu_finish_trace(t);
++}
++
++/*
++ * Contribute to protect against tasklist scan blind spot while the
++ * task is exiting and may be removed from the tasklist. See
++ * corresponding synchronize_srcu() for further details.
++ */
++void exit_tasks_rcu_finish(void)
++{
++	exit_tasks_rcu_stop();
++	exit_tasks_rcu_finish_trace(current);
+ }
+ 
+ #else /* #ifdef CONFIG_TASKS_RCU */
+ static inline void show_rcu_tasks_classic_gp_kthread(void) { }
+ void exit_tasks_rcu_start(void) { }
++void exit_tasks_rcu_stop(void) { }
+ void exit_tasks_rcu_finish(void) { exit_tasks_rcu_finish_trace(current); }
+ #endif /* #else #ifdef CONFIG_TASKS_RCU */
+ 
+@@ -620,9 +649,6 @@ static void rcu_tasks_be_rude(struct work_struct *work)
+ // Wait for one rude RCU-tasks grace period.
+ static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
+ {
+-	if (num_online_cpus() <= 1)
+-		return;	// Fastpath for only one CPU.
+-
+ 	rtp->n_ipis += cpumask_weight(cpu_online_mask);
+ 	schedule_on_each_cpu(rcu_tasks_be_rude);
+ }
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index 0dc16345e668c..ef6570137dcd5 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -564,7 +564,9 @@ static void synchronize_rcu_expedited_wait(void)
+ 				mask = leaf_node_cpu_bit(rnp, cpu);
+ 				if (!(READ_ONCE(rnp->expmask) & mask))
+ 					continue;
++				preempt_disable(); // For smp_processor_id() in dump_cpu_task().
+ 				dump_cpu_task(cpu);
++				preempt_enable();
+ 			}
+ 		}
+ 		jiffies_stall = 3 * rcu_jiffies_till_stall_check() + 3;
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 817545ff80b9b..100253d4909c9 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -1293,20 +1293,6 @@ retry:
+ 			continue;
+ 		}
+ 
+-		/*
+-		 * All memory regions added from memory-hotplug path have the
+-		 * flag IORESOURCE_SYSTEM_RAM. If the resource does not have
+-		 * this flag, we know that we are dealing with a resource coming
+-		 * from HMM/devm. HMM/devm use another mechanism to add/release
+-		 * a resource. This goes via devm_request_mem_region and
+-		 * devm_release_mem_region.
+-		 * HMM/devm take care to release their resources when they want,
+-		 * so if we are dealing with them, let us just back off here.
+-		 */
+-		if (!(res->flags & IORESOURCE_SYSRAM)) {
+-			break;
+-		}
+-
+ 		if (!(res->flags & IORESOURCE_MEM))
+ 			break;
+ 
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index aaf98771f9357..f59cb3e8a6130 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1847,8 +1847,7 @@ static void set_next_task_dl(struct rq *rq, struct task_struct *p, bool first)
+ 	deadline_queue_push_tasks(rq);
+ }
+ 
+-static struct sched_dl_entity *pick_next_dl_entity(struct rq *rq,
+-						   struct dl_rq *dl_rq)
++static struct sched_dl_entity *pick_next_dl_entity(struct dl_rq *dl_rq)
+ {
+ 	struct rb_node *left = rb_first_cached(&dl_rq->root);
+ 
+@@ -1867,7 +1866,7 @@ static struct task_struct *pick_next_task_dl(struct rq *rq)
+ 	if (!sched_dl_runnable(rq))
+ 		return NULL;
+ 
+-	dl_se = pick_next_dl_entity(rq, dl_rq);
++	dl_se = pick_next_dl_entity(dl_rq);
+ 	BUG_ON(!dl_se);
+ 	p = dl_task_of(dl_se);
+ 	set_next_task_dl(rq, p, true);
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index e6f22836c600b..f690f901b6cc7 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -1605,8 +1605,7 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
+ 	rt_queue_push_tasks(rq);
+ }
+ 
+-static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
+-						   struct rt_rq *rt_rq)
++static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq)
+ {
+ 	struct rt_prio_array *array = &rt_rq->active;
+ 	struct sched_rt_entity *next = NULL;
+@@ -1617,6 +1616,8 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
+ 	BUG_ON(idx >= MAX_RT_PRIO);
+ 
+ 	queue = array->queue + idx;
++	if (SCHED_WARN_ON(list_empty(queue)))
++		return NULL;
+ 	next = list_entry(queue->next, struct sched_rt_entity, run_list);
+ 
+ 	return next;
+@@ -1628,8 +1629,9 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
+ 	struct rt_rq *rt_rq  = &rq->rt;
+ 
+ 	do {
+-		rt_se = pick_next_rt_entity(rq, rt_rq);
+-		BUG_ON(!rt_se);
++		rt_se = pick_next_rt_entity(rt_rq);
++		if (unlikely(!rt_se))
++			return NULL;
+ 		rt_rq = group_rt_rq(rt_se);
+ 	} while (rt_rq);
+ 
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index e34ceb91f4c5a..86e0fbe583f2b 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -312,6 +312,15 @@ static void clocksource_verify_percpu(struct clocksource *cs)
+ 			testcpu, cs_nsec_min, cs_nsec_max, cs->name);
+ }
+ 
++static inline void clocksource_reset_watchdog(void)
++{
++	struct clocksource *cs;
++
++	list_for_each_entry(cs, &watchdog_list, wd_list)
++		cs->flags &= ~CLOCK_SOURCE_WATCHDOG;
++}
++
++
+ static void clocksource_watchdog(struct timer_list *unused)
+ {
+ 	u64 csnow, wdnow, cslast, wdlast, delta;
+@@ -319,6 +328,7 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 	int64_t wd_nsec, cs_nsec;
+ 	struct clocksource *cs;
+ 	enum wd_read_status read_ret;
++	unsigned long extra_wait = 0;
+ 	u32 md;
+ 
+ 	spin_lock(&watchdog_lock);
+@@ -338,13 +348,30 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 
+ 		read_ret = cs_watchdog_read(cs, &csnow, &wdnow);
+ 
+-		if (read_ret != WD_READ_SUCCESS) {
+-			if (read_ret == WD_READ_UNSTABLE)
+-				/* Clock readout unreliable, so give it up. */
+-				__clocksource_unstable(cs);
++		if (read_ret == WD_READ_UNSTABLE) {
++			/* Clock readout unreliable, so give it up. */
++			__clocksource_unstable(cs);
+ 			continue;
+ 		}
+ 
++		/*
++		 * When WD_READ_SKIP is returned, it means the system is likely
++		 * under very heavy load, where the latency of reading
++		 * watchdog/clocksource is very big, and affect the accuracy of
++		 * watchdog check. So give system some space and suspend the
++		 * watchdog check for 5 minutes.
++		 */
++		if (read_ret == WD_READ_SKIP) {
++			/*
++			 * As the watchdog timer will be suspended, and
++			 * cs->last could keep unchanged for 5 minutes, reset
++			 * the counters.
++			 */
++			clocksource_reset_watchdog();
++			extra_wait = HZ * 300;
++			break;
++		}
++
+ 		/* Clocksource initialized ? */
+ 		if (!(cs->flags & CLOCK_SOURCE_WATCHDOG) ||
+ 		    atomic_read(&watchdog_reset_pending)) {
+@@ -434,7 +461,7 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 	 * pair clocksource_stop_watchdog() clocksource_start_watchdog().
+ 	 */
+ 	if (!timer_pending(&watchdog_timer)) {
+-		watchdog_timer.expires += WATCHDOG_INTERVAL;
++		watchdog_timer.expires += WATCHDOG_INTERVAL + extra_wait;
+ 		add_timer_on(&watchdog_timer, next_cpu);
+ 	}
+ out:
+@@ -459,14 +486,6 @@ static inline void clocksource_stop_watchdog(void)
+ 	watchdog_running = 0;
+ }
+ 
+-static inline void clocksource_reset_watchdog(void)
+-{
+-	struct clocksource *cs;
+-
+-	list_for_each_entry(cs, &watchdog_list, wd_list)
+-		cs->flags &= ~CLOCK_SOURCE_WATCHDOG;
+-}
+-
+ static void clocksource_resume_watchdog(void)
+ {
+ 	atomic_inc(&watchdog_reset_pending);
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 544ce87ba38a7..70deb2f01e97a 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2024,6 +2024,7 @@ SYSCALL_DEFINE2(nanosleep, struct __kernel_timespec __user *, rqtp,
+ 	if (!timespec64_valid(&tu))
+ 		return -EINVAL;
+ 
++	current->restart_block.fn = do_no_restart_syscall;
+ 	current->restart_block.nanosleep.type = rmtp ? TT_NATIVE : TT_NONE;
+ 	current->restart_block.nanosleep.rmtp = rmtp;
+ 	return hrtimer_nanosleep(timespec64_to_ktime(tu), HRTIMER_MODE_REL,
+@@ -2045,6 +2046,7 @@ SYSCALL_DEFINE2(nanosleep_time32, struct old_timespec32 __user *, rqtp,
+ 	if (!timespec64_valid(&tu))
+ 		return -EINVAL;
+ 
++	current->restart_block.fn = do_no_restart_syscall;
+ 	current->restart_block.nanosleep.type = rmtp ? TT_COMPAT : TT_NONE;
+ 	current->restart_block.nanosleep.compat_rmtp = rmtp;
+ 	return hrtimer_nanosleep(timespec64_to_ktime(tu), HRTIMER_MODE_REL,
+diff --git a/kernel/time/posix-stubs.c b/kernel/time/posix-stubs.c
+index fcb3b21d8bdcd..3783d07d60ba0 100644
+--- a/kernel/time/posix-stubs.c
++++ b/kernel/time/posix-stubs.c
+@@ -146,6 +146,7 @@ SYSCALL_DEFINE4(clock_nanosleep, const clockid_t, which_clock, int, flags,
+ 		return -EINVAL;
+ 	if (flags & TIMER_ABSTIME)
+ 		rmtp = NULL;
++	current->restart_block.fn = do_no_restart_syscall;
+ 	current->restart_block.nanosleep.type = rmtp ? TT_NATIVE : TT_NONE;
+ 	current->restart_block.nanosleep.rmtp = rmtp;
+ 	texp = timespec64_to_ktime(t);
+@@ -239,6 +240,7 @@ SYSCALL_DEFINE4(clock_nanosleep_time32, clockid_t, which_clock, int, flags,
+ 		return -EINVAL;
+ 	if (flags & TIMER_ABSTIME)
+ 		rmtp = NULL;
++	current->restart_block.fn = do_no_restart_syscall;
+ 	current->restart_block.nanosleep.type = rmtp ? TT_COMPAT : TT_NONE;
+ 	current->restart_block.nanosleep.compat_rmtp = rmtp;
+ 	texp = timespec64_to_ktime(t);
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index b624788023d8f..724ca7eb1a6e8 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -1270,6 +1270,7 @@ SYSCALL_DEFINE4(clock_nanosleep, const clockid_t, which_clock, int, flags,
+ 		return -EINVAL;
+ 	if (flags & TIMER_ABSTIME)
+ 		rmtp = NULL;
++	current->restart_block.fn = do_no_restart_syscall;
+ 	current->restart_block.nanosleep.type = rmtp ? TT_NATIVE : TT_NONE;
+ 	current->restart_block.nanosleep.rmtp = rmtp;
+ 
+@@ -1297,6 +1298,7 @@ SYSCALL_DEFINE4(clock_nanosleep_time32, clockid_t, which_clock, int, flags,
+ 		return -EINVAL;
+ 	if (flags & TIMER_ABSTIME)
+ 		rmtp = NULL;
++	current->restart_block.fn = do_no_restart_syscall;
+ 	current->restart_block.nanosleep.type = rmtp ? TT_COMPAT : TT_NONE;
+ 	current->restart_block.nanosleep.compat_rmtp = rmtp;
+ 
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 49ebb8c662682..70da6f3212bc4 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1449,19 +1449,6 @@ static int rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
+ 	return 0;
+ }
+ 
+-/**
+- * rb_check_list - make sure a pointer to a list has the last bits zero
+- */
+-static int rb_check_list(struct ring_buffer_per_cpu *cpu_buffer,
+-			 struct list_head *list)
+-{
+-	if (RB_WARN_ON(cpu_buffer, rb_list_head(list->prev) != list->prev))
+-		return 1;
+-	if (RB_WARN_ON(cpu_buffer, rb_list_head(list->next) != list->next))
+-		return 1;
+-	return 0;
+-}
+-
+ /**
+  * rb_check_pages - integrity check of buffer pages
+  * @cpu_buffer: CPU buffer with pages to test
+@@ -1471,36 +1458,27 @@ static int rb_check_list(struct ring_buffer_per_cpu *cpu_buffer,
+  */
+ static int rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
+ {
+-	struct list_head *head = cpu_buffer->pages;
+-	struct buffer_page *bpage, *tmp;
+-
+-	/* Reset the head page if it exists */
+-	if (cpu_buffer->head_page)
+-		rb_set_head_page(cpu_buffer);
+-
+-	rb_head_page_deactivate(cpu_buffer);
++	struct list_head *head = rb_list_head(cpu_buffer->pages);
++	struct list_head *tmp;
+ 
+-	if (RB_WARN_ON(cpu_buffer, head->next->prev != head))
+-		return -1;
+-	if (RB_WARN_ON(cpu_buffer, head->prev->next != head))
++	if (RB_WARN_ON(cpu_buffer,
++			rb_list_head(rb_list_head(head->next)->prev) != head))
+ 		return -1;
+ 
+-	if (rb_check_list(cpu_buffer, head))
++	if (RB_WARN_ON(cpu_buffer,
++			rb_list_head(rb_list_head(head->prev)->next) != head))
+ 		return -1;
+ 
+-	list_for_each_entry_safe(bpage, tmp, head, list) {
++	for (tmp = rb_list_head(head->next); tmp != head; tmp = rb_list_head(tmp->next)) {
+ 		if (RB_WARN_ON(cpu_buffer,
+-			       bpage->list.next->prev != &bpage->list))
++				rb_list_head(rb_list_head(tmp->next)->prev) != tmp))
+ 			return -1;
++
+ 		if (RB_WARN_ON(cpu_buffer,
+-			       bpage->list.prev->next != &bpage->list))
+-			return -1;
+-		if (rb_check_list(cpu_buffer, &bpage->list))
++				rb_list_head(rb_list_head(tmp->prev)->next) != tmp))
+ 			return -1;
+ 	}
+ 
+-	rb_head_page_activate(cpu_buffer);
+-
+ 	return 0;
+ }
+ 
+@@ -5324,11 +5302,16 @@ EXPORT_SYMBOL_GPL(ring_buffer_alloc_read_page);
+  */
+ void ring_buffer_free_read_page(struct trace_buffer *buffer, int cpu, void *data)
+ {
+-	struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];
++	struct ring_buffer_per_cpu *cpu_buffer;
+ 	struct buffer_data_page *bpage = data;
+ 	struct page *page = virt_to_page(bpage);
+ 	unsigned long flags;
+ 
++	if (!buffer || !buffer->buffers || !buffer->buffers[cpu])
++		return;
++
++	cpu_buffer = buffer->buffers[cpu];
++
+ 	/* If the page is still in use someplace else, we can't reuse it */
+ 	if (page_ref_count(page) > 1)
+ 		goto out;
+diff --git a/lib/errname.c b/lib/errname.c
+index 0c4d3e66170e9..a5799a8c9cab9 100644
+--- a/lib/errname.c
++++ b/lib/errname.c
+@@ -20,6 +20,7 @@ static const char *names_0[] = {
+ 	E(EADDRNOTAVAIL),
+ 	E(EADV),
+ 	E(EAFNOSUPPORT),
++	E(EAGAIN), /* EWOULDBLOCK */
+ 	E(EALREADY),
+ 	E(EBADE),
+ 	E(EBADF),
+@@ -30,15 +31,17 @@ static const char *names_0[] = {
+ 	E(EBADSLT),
+ 	E(EBFONT),
+ 	E(EBUSY),
+-#ifdef ECANCELLED
+-	E(ECANCELLED),
+-#endif
++	E(ECANCELED), /* ECANCELLED */
+ 	E(ECHILD),
+ 	E(ECHRNG),
+ 	E(ECOMM),
+ 	E(ECONNABORTED),
++	E(ECONNREFUSED), /* EREFUSED */
+ 	E(ECONNRESET),
++	E(EDEADLK), /* EDEADLOCK */
++#if EDEADLK != EDEADLOCK /* mips, sparc, powerpc */
+ 	E(EDEADLOCK),
++#endif
+ 	E(EDESTADDRREQ),
+ 	E(EDOM),
+ 	E(EDOTDOT),
+@@ -165,14 +168,17 @@ static const char *names_0[] = {
+ 	E(EUSERS),
+ 	E(EXDEV),
+ 	E(EXFULL),
+-
+-	E(ECANCELED), /* ECANCELLED */
+-	E(EAGAIN), /* EWOULDBLOCK */
+-	E(ECONNREFUSED), /* EREFUSED */
+-	E(EDEADLK), /* EDEADLOCK */
+ };
+ #undef E
+ 
++#ifdef EREFUSED /* parisc */
++static_assert(EREFUSED == ECONNREFUSED);
++#endif
++#ifdef ECANCELLED /* parisc */
++static_assert(ECANCELLED == ECANCELED);
++#endif
++static_assert(EAGAIN == EWOULDBLOCK); /* everywhere */
++
+ #define E(err) [err - 512 + BUILD_BUG_ON_ZERO(err < 512 || err > 550)] = "-" #err
+ static const char *names_512[] = {
+ 	E(ERESTARTSYS),
+diff --git a/lib/mpi/mpicoder.c b/lib/mpi/mpicoder.c
+index 7ea225b2204fa..7054311d78792 100644
+--- a/lib/mpi/mpicoder.c
++++ b/lib/mpi/mpicoder.c
+@@ -504,7 +504,8 @@ MPI mpi_read_raw_from_sgl(struct scatterlist *sgl, unsigned int nbytes)
+ 
+ 	while (sg_miter_next(&miter)) {
+ 		buff = miter.addr;
+-		len = miter.length;
++		len = min_t(unsigned, miter.length, nbytes);
++		nbytes -= len;
+ 
+ 		for (x = 0; x < len; x++) {
+ 			a <<= 8;
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index cb7b0aead7096..9b15760e0541a 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2803,6 +2803,9 @@ void deferred_split_huge_page(struct page *page)
+ 	if (PageSwapCache(page))
+ 		return;
+ 
++	if (!list_empty(page_deferred_list(page)))
++		return;
++
+ 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+ 	if (list_empty(page_deferred_list(page))) {
+ 		count_vm_event(THP_DEFERRED_SPLIT_PAGE);
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index c62d997c8ca1d..751e3670d7b0c 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -3930,6 +3930,10 @@ static int mem_cgroup_move_charge_write(struct cgroup_subsys_state *css,
+ {
+ 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+ 
++	pr_warn_once("Cgroup memory moving (move_charge_at_immigrate) is deprecated. "
++		     "Please report your usecase to linux-mm@kvack.org if you "
++		     "depend on this functionality.\n");
++
+ 	if (val & ~MOVE_MASK)
+ 		return -EINVAL;
+ 
+diff --git a/net/9p/trans_rdma.c b/net/9p/trans_rdma.c
+index 2885ff9c76f07..7217bd9886e36 100644
+--- a/net/9p/trans_rdma.c
++++ b/net/9p/trans_rdma.c
+@@ -386,6 +386,7 @@ post_recv(struct p9_client *client, struct p9_rdma_context *c)
+ 	struct p9_trans_rdma *rdma = client->trans;
+ 	struct ib_recv_wr wr;
+ 	struct ib_sge sge;
++	int ret;
+ 
+ 	c->busa = ib_dma_map_single(rdma->cm_id->device,
+ 				    c->rc.sdata, client->msize,
+@@ -403,7 +404,12 @@ post_recv(struct p9_client *client, struct p9_rdma_context *c)
+ 	wr.wr_cqe = &c->cqe;
+ 	wr.sg_list = &sge;
+ 	wr.num_sge = 1;
+-	return ib_post_recv(rdma->qp, &wr, NULL);
++
++	ret = ib_post_recv(rdma->qp, &wr, NULL);
++	if (ret)
++		ib_dma_unmap_single(rdma->cm_id->device, c->busa,
++				    client->msize, DMA_FROM_DEVICE);
++	return ret;
+ 
+  error:
+ 	p9_debug(P9_DEBUG_ERROR, "EIO\n");
+@@ -500,7 +506,7 @@ dont_need_post_recv:
+ 
+ 	if (down_interruptible(&rdma->sq_sem)) {
+ 		err = -EINTR;
+-		goto send_error;
++		goto dma_unmap;
+ 	}
+ 
+ 	/* Mark request as `sent' *before* we actually send it,
+@@ -510,11 +516,14 @@ dont_need_post_recv:
+ 	req->status = REQ_STATUS_SENT;
+ 	err = ib_post_send(rdma->qp, &wr, NULL);
+ 	if (err)
+-		goto send_error;
++		goto dma_unmap;
+ 
+ 	/* Success */
+ 	return 0;
+ 
++dma_unmap:
++	ib_dma_unmap_single(rdma->cm_id->device, c->busa,
++			    c->req->tc.size, DMA_TO_DEVICE);
+  /* Handle errors that happened during or while preparing the send: */
+  send_error:
+ 	req->status = REQ_STATUS_ERROR;
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index 6c8a33f98f093..220e8f4ac0cfe 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -393,19 +393,24 @@ out:
+ 	return ret;
+ }
+ 
+-static int xen_9pfs_front_probe(struct xenbus_device *dev,
+-				const struct xenbus_device_id *id)
++static int xen_9pfs_front_init(struct xenbus_device *dev)
+ {
+ 	int ret, i;
+ 	struct xenbus_transaction xbt;
+-	struct xen_9pfs_front_priv *priv = NULL;
+-	char *versions;
++	struct xen_9pfs_front_priv *priv = dev_get_drvdata(&dev->dev);
++	char *versions, *v;
+ 	unsigned int max_rings, max_ring_order, len = 0;
+ 
+ 	versions = xenbus_read(XBT_NIL, dev->otherend, "versions", &len);
+ 	if (IS_ERR(versions))
+ 		return PTR_ERR(versions);
+-	if (strcmp(versions, "1")) {
++	for (v = versions; *v; v++) {
++		if (simple_strtoul(v, &v, 10) == 1) {
++			v = NULL;
++			break;
++		}
++	}
++	if (v) {
+ 		kfree(versions);
+ 		return -EINVAL;
+ 	}
+@@ -420,11 +425,6 @@ static int xen_9pfs_front_probe(struct xenbus_device *dev,
+ 	if (p9_xen_trans.maxsize > XEN_FLEX_RING_SIZE(max_ring_order))
+ 		p9_xen_trans.maxsize = XEN_FLEX_RING_SIZE(max_ring_order) / 2;
+ 
+-	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+-	if (!priv)
+-		return -ENOMEM;
+-
+-	priv->dev = dev;
+ 	priv->num_rings = XEN_9PFS_NUM_RINGS;
+ 	priv->rings = kcalloc(priv->num_rings, sizeof(*priv->rings),
+ 			      GFP_KERNEL);
+@@ -483,23 +483,35 @@ static int xen_9pfs_front_probe(struct xenbus_device *dev,
+ 		goto error;
+ 	}
+ 
+-	write_lock(&xen_9pfs_lock);
+-	list_add_tail(&priv->list, &xen_9pfs_devs);
+-	write_unlock(&xen_9pfs_lock);
+-	dev_set_drvdata(&dev->dev, priv);
+-	xenbus_switch_state(dev, XenbusStateInitialised);
+-
+ 	return 0;
+ 
+  error_xenbus:
+ 	xenbus_transaction_end(xbt, 1);
+ 	xenbus_dev_fatal(dev, ret, "writing xenstore");
+  error:
+-	dev_set_drvdata(&dev->dev, NULL);
+ 	xen_9pfs_front_free(priv);
+ 	return ret;
+ }
+ 
++static int xen_9pfs_front_probe(struct xenbus_device *dev,
++				const struct xenbus_device_id *id)
++{
++	struct xen_9pfs_front_priv *priv = NULL;
++
++	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
++	if (!priv)
++		return -ENOMEM;
++
++	priv->dev = dev;
++	dev_set_drvdata(&dev->dev, priv);
++
++	write_lock(&xen_9pfs_lock);
++	list_add_tail(&priv->list, &xen_9pfs_devs);
++	write_unlock(&xen_9pfs_lock);
++
++	return 0;
++}
++
+ static int xen_9pfs_front_resume(struct xenbus_device *dev)
+ {
+ 	dev_warn(&dev->dev, "suspend/resume unsupported\n");
+@@ -518,6 +530,8 @@ static void xen_9pfs_front_changed(struct xenbus_device *dev,
+ 		break;
+ 
+ 	case XenbusStateInitWait:
++		if (!xen_9pfs_front_init(dev))
++			xenbus_switch_state(dev, XenbusStateInitialised);
+ 		break;
+ 
+ 	case XenbusStateConnected:
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 71d18d3295f50..73779af2fed61 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -888,10 +888,6 @@ static int hci_sock_release(struct socket *sock)
+ 	}
+ 
+ 	sock_orphan(sk);
+-
+-	skb_queue_purge(&sk->sk_receive_queue);
+-	skb_queue_purge(&sk->sk_write_queue);
+-
+ 	release_sock(sk);
+ 	sock_put(sk);
+ 	return 0;
+@@ -2012,6 +2008,12 @@ done:
+ 	return err;
+ }
+ 
++static void hci_sock_destruct(struct sock *sk)
++{
++	skb_queue_purge(&sk->sk_receive_queue);
++	skb_queue_purge(&sk->sk_write_queue);
++}
++
+ static const struct proto_ops hci_sock_ops = {
+ 	.family		= PF_BLUETOOTH,
+ 	.owner		= THIS_MODULE,
+@@ -2065,6 +2067,7 @@ static int hci_sock_create(struct net *net, struct socket *sock, int protocol,
+ 
+ 	sock->state = SS_UNCONNECTED;
+ 	sk->sk_state = BT_OPEN;
++	sk->sk_destruct = hci_sock_destruct;
+ 
+ 	bt_sock_link(&hci_sk_list, sk);
+ 	return 0;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index cf56582d298ad..bde90df6b4976 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -2679,14 +2679,6 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len)
+ 		if (IS_ERR(skb))
+ 			return PTR_ERR(skb);
+ 
+-		/* Channel lock is released before requesting new skb and then
+-		 * reacquired thus we need to recheck channel state.
+-		 */
+-		if (chan->state != BT_CONNECTED) {
+-			kfree_skb(skb);
+-			return -ENOTCONN;
+-		}
+-
+ 		l2cap_do_send(chan, skb);
+ 		return len;
+ 	}
+@@ -2731,14 +2723,6 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len)
+ 		if (IS_ERR(skb))
+ 			return PTR_ERR(skb);
+ 
+-		/* Channel lock is released before requesting new skb and then
+-		 * reacquired thus we need to recheck channel state.
+-		 */
+-		if (chan->state != BT_CONNECTED) {
+-			kfree_skb(skb);
+-			return -ENOTCONN;
+-		}
+-
+ 		l2cap_do_send(chan, skb);
+ 		err = len;
+ 		break;
+@@ -2759,14 +2743,6 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len)
+ 		 */
+ 		err = l2cap_segment_sdu(chan, &seg_queue, msg, len);
+ 
+-		/* The channel could have been closed while segmenting,
+-		 * check that it is still connected.
+-		 */
+-		if (chan->state != BT_CONNECTED) {
+-			__skb_queue_purge(&seg_queue);
+-			err = -ENOTCONN;
+-		}
+-
+ 		if (err)
+ 			break;
+ 
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index d2c6785205992..a267c9b6bcef4 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1623,6 +1623,14 @@ static struct sk_buff *l2cap_sock_alloc_skb_cb(struct l2cap_chan *chan,
+ 	if (!skb)
+ 		return ERR_PTR(err);
+ 
++	/* Channel lock is released before requesting new skb and then
++	 * reacquired thus we need to recheck channel state.
++	 */
++	if (chan->state != BT_CONNECTED) {
++		kfree_skb(skb);
++		return ERR_PTR(-ENOTCONN);
++	}
++
+ 	skb->priority = sk->sk_priority;
+ 
+ 	bt_cb(skb)->l2cap.chan = chan;
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index 06b80b5843819..8335b7e4bcf6f 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -1049,7 +1049,7 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl,
+ 
+ 	audit_log_nfcfg(repl->name, AF_BRIDGE, repl->nentries,
+ 			AUDIT_XT_OP_REPLACE, GFP_KERNEL);
+-	return ret;
++	return 0;
+ 
+ free_unlock:
+ 	mutex_unlock(&ebt_mutex);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index b7646d4e079b4..8cbcb6a104f2f 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3119,8 +3119,10 @@ void __dev_kfree_skb_any(struct sk_buff *skb, enum skb_free_reason reason)
+ {
+ 	if (in_irq() || irqs_disabled())
+ 		__dev_kfree_skb_irq(skb, reason);
++	else if (unlikely(reason == SKB_REASON_DROPPED))
++		kfree_skb(skb);
+ 	else
+-		dev_kfree_skb(skb);
++		consume_skb(skb);
+ }
+ EXPORT_SYMBOL(__dev_kfree_skb_any);
+ 
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 1bb6a003323b3..c5ae520d4a69c 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2968,7 +2968,7 @@ void sk_stop_timer_sync(struct sock *sk, struct timer_list *timer)
+ }
+ EXPORT_SYMBOL(sk_stop_timer_sync);
+ 
+-void sock_init_data(struct socket *sock, struct sock *sk)
++void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid)
+ {
+ 	sk_init_common(sk);
+ 	sk->sk_send_head	=	NULL;
+@@ -2987,11 +2987,10 @@ void sock_init_data(struct socket *sock, struct sock *sk)
+ 		sk->sk_type	=	sock->type;
+ 		RCU_INIT_POINTER(sk->sk_wq, &sock->wq);
+ 		sock->sk	=	sk;
+-		sk->sk_uid	=	SOCK_INODE(sock)->i_uid;
+ 	} else {
+ 		RCU_INIT_POINTER(sk->sk_wq, NULL);
+-		sk->sk_uid	=	make_kuid(sock_net(sk)->user_ns, 0);
+ 	}
++	sk->sk_uid	=	uid;
+ 
+ 	rwlock_init(&sk->sk_callback_lock);
+ 	if (sk->sk_kern_sock)
+@@ -3049,6 +3048,16 @@ void sock_init_data(struct socket *sock, struct sock *sk)
+ 	refcount_set(&sk->sk_refcnt, 1);
+ 	atomic_set(&sk->sk_drops, 0);
+ }
++EXPORT_SYMBOL(sock_init_data_uid);
++
++void sock_init_data(struct socket *sock, struct sock *sk)
++{
++	kuid_t uid = sock ?
++		SOCK_INODE(sock)->i_uid :
++		make_kuid(sock_net(sk)->user_ns, 0);
++
++	sock_init_data_uid(sock, sk, uid);
++}
+ EXPORT_SYMBOL(sock_init_data);
+ 
+ void lock_sock_nested(struct sock *sk, int subclass)
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 9ed59147ef66b..e05dd87848f78 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -946,6 +946,7 @@ int inet_csk_listen_start(struct sock *sk, int backlog)
+ 	 * It is OK, because this socket enters to hash table only
+ 	 * after validation is complete.
+ 	 */
++	err = -EADDRINUSE;
+ 	inet_sk_state_store(sk, TCP_LISTEN);
+ 	if (!sk->sk_prot->get_port(sk, inet->inet_num)) {
+ 		inet->inet_sport = htons(inet->inet_num);
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 2615b72118d1f..79bf550c9dfc5 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -760,17 +760,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+ 	u32 index;
+ 
+ 	if (port) {
+-		head = &hinfo->bhash[inet_bhashfn(net, port,
+-						  hinfo->bhash_size)];
+-		tb = inet_csk(sk)->icsk_bind_hash;
+-		spin_lock_bh(&head->lock);
+-		if (sk_head(&tb->owners) == sk && !sk->sk_bind_node.next) {
+-			inet_ehash_nolisten(sk, NULL, NULL);
+-			spin_unlock_bh(&head->lock);
+-			return 0;
+-		}
+-		spin_unlock(&head->lock);
+-		/* No definite answer... Walk to established hash table */
++		local_bh_disable();
+ 		ret = check_established(death_row, sk, port, NULL);
+ 		local_bh_enable();
+ 		return ret;
+diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
+index f77ea0dbe6562..ec981618b7b22 100644
+--- a/net/ipv4/netfilter/ip_tables.c
++++ b/net/ipv4/netfilter/ip_tables.c
+@@ -1044,7 +1044,6 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
+ 	struct xt_counters *counters;
+ 	struct ipt_entry *iter;
+ 
+-	ret = 0;
+ 	counters = xt_counters_alloc(num_counters);
+ 	if (!counters) {
+ 		ret = -ENOMEM;
+@@ -1090,7 +1089,7 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
+ 		net_warn_ratelimited("iptables: counters copy to user failed while replacing table\n");
+ 	}
+ 	vfree(counters);
+-	return ret;
++	return 0;
+ 
+  put_module:
+ 	module_put(t->me);
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index e42312321191b..8d854feebdb00 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -565,6 +565,9 @@ EXPORT_SYMBOL(tcp_create_openreq_child);
+  * validation and inside tcp_v4_reqsk_send_ack(). Can we do better?
+  *
+  * We don't need to initialize tmp_opt.sack_ok as we don't use the results
++ *
++ * Note: If @fastopen is true, this can be called from process context.
++ *       Otherwise, this is from BH context.
+  */
+ 
+ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+@@ -717,7 +720,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+ 					  &tcp_rsk(req)->last_oow_ack_time))
+ 			req->rsk_ops->send_ack(sk, skb, req);
+ 		if (paws_reject)
+-			__NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
++			NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
+ 		return NULL;
+ 	}
+ 
+@@ -736,7 +739,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+ 	 *	   "fourth, check the SYN bit"
+ 	 */
+ 	if (flg & (TCP_FLAG_RST|TCP_FLAG_SYN)) {
+-		__TCP_INC_STATS(sock_net(sk), TCP_MIB_ATTEMPTFAILS);
++		TCP_INC_STATS(sock_net(sk), TCP_MIB_ATTEMPTFAILS);
+ 		goto embryonic_reset;
+ 	}
+ 
+diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
+index d36168baf6776..99bb11d167127 100644
+--- a/net/ipv6/netfilter/ip6_tables.c
++++ b/net/ipv6/netfilter/ip6_tables.c
+@@ -1062,7 +1062,6 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
+ 	struct xt_counters *counters;
+ 	struct ip6t_entry *iter;
+ 
+-	ret = 0;
+ 	counters = xt_counters_alloc(num_counters);
+ 	if (!counters) {
+ 		ret = -ENOMEM;
+@@ -1108,7 +1107,7 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
+ 		net_warn_ratelimited("ip6tables: counters copy to user failed while replacing table\n");
+ 	}
+ 	vfree(counters);
+-	return ret;
++	return 0;
+ 
+  put_module:
+ 	module_put(t->me);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 803d1aa83140c..a6d5c99f65a3a 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -5444,16 +5444,17 @@ static size_t rt6_nlmsg_size(struct fib6_info *f6i)
+ 		nexthop_for_each_fib6_nh(f6i->nh, rt6_nh_nlmsg_size,
+ 					 &nexthop_len);
+ 	} else {
++		struct fib6_info *sibling, *next_sibling;
+ 		struct fib6_nh *nh = f6i->fib6_nh;
+ 
+ 		nexthop_len = 0;
+ 		if (f6i->fib6_nsiblings) {
+-			nexthop_len = nla_total_size(0)	 /* RTA_MULTIPATH */
+-				    + NLA_ALIGN(sizeof(struct rtnexthop))
+-				    + nla_total_size(16) /* RTA_GATEWAY */
+-				    + lwtunnel_get_encap_size(nh->fib_nh_lws);
++			rt6_nh_nlmsg_size(nh, &nexthop_len);
+ 
+-			nexthop_len *= f6i->fib6_nsiblings;
++			list_for_each_entry_safe(sibling, next_sibling,
++						 &f6i->fib6_siblings, fib6_siblings) {
++				rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len);
++			}
+ 		}
+ 		nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws);
+ 	}
+diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
+index aea85f91f0599..5ecc0f2009444 100644
+--- a/net/l2tp/l2tp_ppp.c
++++ b/net/l2tp/l2tp_ppp.c
+@@ -651,54 +651,22 @@ static int pppol2tp_tunnel_mtu(const struct l2tp_tunnel *tunnel)
+ 	return mtu - PPPOL2TP_HEADER_OVERHEAD;
+ }
+ 
+-/* connect() handler. Attach a PPPoX socket to a tunnel UDP socket
+- */
+-static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
+-			    int sockaddr_len, int flags)
++static struct l2tp_tunnel *pppol2tp_tunnel_get(struct net *net,
++					       const struct l2tp_connect_info *info,
++					       bool *new_tunnel)
+ {
+-	struct sock *sk = sock->sk;
+-	struct pppox_sock *po = pppox_sk(sk);
+-	struct l2tp_session *session = NULL;
+-	struct l2tp_connect_info info;
+ 	struct l2tp_tunnel *tunnel;
+-	struct pppol2tp_session *ps;
+-	struct l2tp_session_cfg cfg = { 0, };
+-	bool drop_refcnt = false;
+-	bool drop_tunnel = false;
+-	bool new_session = false;
+-	bool new_tunnel = false;
+ 	int error;
+ 
+-	error = pppol2tp_sockaddr_get_info(uservaddr, sockaddr_len, &info);
+-	if (error < 0)
+-		return error;
++	*new_tunnel = false;
+ 
+-	lock_sock(sk);
+-
+-	/* Check for already bound sockets */
+-	error = -EBUSY;
+-	if (sk->sk_state & PPPOX_CONNECTED)
+-		goto end;
+-
+-	/* We don't supporting rebinding anyway */
+-	error = -EALREADY;
+-	if (sk->sk_user_data)
+-		goto end; /* socket is already attached */
+-
+-	/* Don't bind if tunnel_id is 0 */
+-	error = -EINVAL;
+-	if (!info.tunnel_id)
+-		goto end;
+-
+-	tunnel = l2tp_tunnel_get(sock_net(sk), info.tunnel_id);
+-	if (tunnel)
+-		drop_tunnel = true;
++	tunnel = l2tp_tunnel_get(net, info->tunnel_id);
+ 
+ 	/* Special case: create tunnel context if session_id and
+ 	 * peer_session_id is 0. Otherwise look up tunnel using supplied
+ 	 * tunnel id.
+ 	 */
+-	if (!info.session_id && !info.peer_session_id) {
++	if (!info->session_id && !info->peer_session_id) {
+ 		if (!tunnel) {
+ 			struct l2tp_tunnel_cfg tcfg = {
+ 				.encap = L2TP_ENCAPTYPE_UDP,
+@@ -707,40 +675,82 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
+ 			/* Prevent l2tp_tunnel_register() from trying to set up
+ 			 * a kernel socket.
+ 			 */
+-			if (info.fd < 0) {
+-				error = -EBADF;
+-				goto end;
+-			}
++			if (info->fd < 0)
++				return ERR_PTR(-EBADF);
+ 
+-			error = l2tp_tunnel_create(info.fd,
+-						   info.version,
+-						   info.tunnel_id,
+-						   info.peer_tunnel_id, &tcfg,
++			error = l2tp_tunnel_create(info->fd,
++						   info->version,
++						   info->tunnel_id,
++						   info->peer_tunnel_id, &tcfg,
+ 						   &tunnel);
+ 			if (error < 0)
+-				goto end;
++				return ERR_PTR(error);
+ 
+ 			l2tp_tunnel_inc_refcount(tunnel);
+-			error = l2tp_tunnel_register(tunnel, sock_net(sk),
+-						     &tcfg);
++			error = l2tp_tunnel_register(tunnel, net, &tcfg);
+ 			if (error < 0) {
+ 				kfree(tunnel);
+-				goto end;
++				return ERR_PTR(error);
+ 			}
+-			drop_tunnel = true;
+-			new_tunnel = true;
++
++			*new_tunnel = true;
+ 		}
+ 	} else {
+ 		/* Error if we can't find the tunnel */
+-		error = -ENOENT;
+ 		if (!tunnel)
+-			goto end;
++			return ERR_PTR(-ENOENT);
+ 
+ 		/* Error if socket is not prepped */
+-		if (!tunnel->sock)
+-			goto end;
++		if (!tunnel->sock) {
++			l2tp_tunnel_dec_refcount(tunnel);
++			return ERR_PTR(-ENOENT);
++		}
+ 	}
+ 
++	return tunnel;
++}
++
++/* connect() handler. Attach a PPPoX socket to a tunnel UDP socket
++ */
++static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
++			    int sockaddr_len, int flags)
++{
++	struct sock *sk = sock->sk;
++	struct pppox_sock *po = pppox_sk(sk);
++	struct l2tp_session *session = NULL;
++	struct l2tp_connect_info info;
++	struct l2tp_tunnel *tunnel;
++	struct pppol2tp_session *ps;
++	struct l2tp_session_cfg cfg = { 0, };
++	bool drop_refcnt = false;
++	bool new_session = false;
++	bool new_tunnel = false;
++	int error;
++
++	error = pppol2tp_sockaddr_get_info(uservaddr, sockaddr_len, &info);
++	if (error < 0)
++		return error;
++
++	/* Don't bind if tunnel_id is 0 */
++	if (!info.tunnel_id)
++		return -EINVAL;
++
++	tunnel = pppol2tp_tunnel_get(sock_net(sk), &info, &new_tunnel);
++	if (IS_ERR(tunnel))
++		return PTR_ERR(tunnel);
++
++	lock_sock(sk);
++
++	/* Check for already bound sockets */
++	error = -EBUSY;
++	if (sk->sk_state & PPPOX_CONNECTED)
++		goto end;
++
++	/* We don't supporting rebinding anyway */
++	error = -EALREADY;
++	if (sk->sk_user_data)
++		goto end; /* socket is already attached */
++
+ 	if (tunnel->peer_tunnel_id == 0)
+ 		tunnel->peer_tunnel_id = info.peer_tunnel_id;
+ 
+@@ -841,8 +851,7 @@ end:
+ 	}
+ 	if (drop_refcnt)
+ 		l2tp_session_dec_refcount(session);
+-	if (drop_tunnel)
+-		l2tp_tunnel_dec_refcount(tunnel);
++	l2tp_tunnel_dec_refcount(tunnel);
+ 	release_sock(sk);
+ 
+ 	return error;
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index cee39ae52245c..d572478c4d68e 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -2159,7 +2159,7 @@ static void sta_stats_decode_rate(struct ieee80211_local *local, u32 rate,
+ 
+ static int sta_set_rate_info_rx(struct sta_info *sta, struct rate_info *rinfo)
+ {
+-	u16 rate = READ_ONCE(sta_get_last_rx_stats(sta)->last_rate);
++	u32 rate = READ_ONCE(sta_get_last_rx_stats(sta)->last_rate);
+ 
+ 	if (rate == STA_STATS_RATE_INVALID)
+ 		return -EINVAL;
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 2efdc50f978b0..f8ba3bc25cf34 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -2359,12 +2359,15 @@ ctnetlink_create_conntrack(struct net *net,
+ 
+ 	err = nf_conntrack_hash_check_insert(ct);
+ 	if (err < 0)
+-		goto err2;
++		goto err3;
+ 
+ 	rcu_read_unlock();
+ 
+ 	return ct;
+ 
++err3:
++	if (ct->master)
++		nf_ct_put(ct->master);
+ err2:
+ 	rcu_read_unlock();
+ err1:
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index 610caea4feec8..3f4785be066a8 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -1442,7 +1442,11 @@ static int nfc_se_io(struct nfc_dev *dev, u32 se_idx,
+ 	rc = dev->ops->se_io(dev, se_idx, apdu,
+ 			apdu_length, cb, cb_context);
+ 
++	device_unlock(&dev->dev);
++	return rc;
++
+ error:
++	kfree(cb_context);
+ 	device_unlock(&dev->dev);
+ 	return rc;
+ }
+diff --git a/net/rds/message.c b/net/rds/message.c
+index b363ef13c75ef..8fa3d19c2e667 100644
+--- a/net/rds/message.c
++++ b/net/rds/message.c
+@@ -118,7 +118,7 @@ static void rds_rm_zerocopy_callback(struct rds_sock *rs,
+ 	ck = &info->zcookies;
+ 	memset(ck, 0, sizeof(*ck));
+ 	WARN_ON(!rds_zcookie_add(info, cookie));
+-	list_add_tail(&q->zcookie_head, &info->rs_zcookie_next);
++	list_add_tail(&info->rs_zcookie_next, &q->zcookie_head);
+ 
+ 	spin_unlock_irqrestore(&q->lock, flags);
+ 	/* caller invokes rds_wake_sk_sleep() */
+diff --git a/net/sched/Kconfig b/net/sched/Kconfig
+index bc4e5da76fa6f..697522371914c 100644
+--- a/net/sched/Kconfig
++++ b/net/sched/Kconfig
+@@ -503,17 +503,6 @@ config NET_CLS_BASIC
+ 	  To compile this code as a module, choose M here: the
+ 	  module will be called cls_basic.
+ 
+-config NET_CLS_TCINDEX
+-	tristate "Traffic-Control Index (TCINDEX)"
+-	select NET_CLS
+-	help
+-	  Say Y here if you want to be able to classify packets based on
+-	  traffic control indices. You will want this feature if you want
+-	  to implement Differentiated Services together with DSMARK.
+-
+-	  To compile this code as a module, choose M here: the
+-	  module will be called cls_tcindex.
+-
+ config NET_CLS_ROUTE4
+ 	tristate "Routing decision (ROUTE)"
+ 	depends on INET
+diff --git a/net/sched/Makefile b/net/sched/Makefile
+index 66bbf9a98f9ea..4311fdb211197 100644
+--- a/net/sched/Makefile
++++ b/net/sched/Makefile
+@@ -69,7 +69,6 @@ obj-$(CONFIG_NET_CLS_U32)	+= cls_u32.o
+ obj-$(CONFIG_NET_CLS_ROUTE4)	+= cls_route.o
+ obj-$(CONFIG_NET_CLS_FW)	+= cls_fw.o
+ obj-$(CONFIG_NET_CLS_RSVP)	+= cls_rsvp.o
+-obj-$(CONFIG_NET_CLS_TCINDEX)	+= cls_tcindex.o
+ obj-$(CONFIG_NET_CLS_RSVP6)	+= cls_rsvp6.o
+ obj-$(CONFIG_NET_CLS_BASIC)	+= cls_basic.o
+ obj-$(CONFIG_NET_CLS_FLOW)	+= cls_flow.o
+diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
+index 2f0e98bcf4945..6988a9cf40806 100644
+--- a/net/sched/act_sample.c
++++ b/net/sched/act_sample.c
+@@ -54,8 +54,8 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+ 					  sample_policy, NULL);
+ 	if (ret < 0)
+ 		return ret;
+-	if (!tb[TCA_SAMPLE_PARMS] || !tb[TCA_SAMPLE_RATE] ||
+-	    !tb[TCA_SAMPLE_PSAMPLE_GROUP])
++
++	if (!tb[TCA_SAMPLE_PARMS])
+ 		return -EINVAL;
+ 
+ 	parm = nla_data(tb[TCA_SAMPLE_PARMS]);
+@@ -79,6 +79,13 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+ 		tcf_idr_release(*a, bind);
+ 		return -EEXIST;
+ 	}
++
++	if (!tb[TCA_SAMPLE_RATE] || !tb[TCA_SAMPLE_PSAMPLE_GROUP]) {
++		NL_SET_ERR_MSG(extack, "sample rate and group are required");
++		err = -EINVAL;
++		goto release_idr;
++	}
++
+ 	err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
+ 	if (err < 0)
+ 		goto release_idr;
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+deleted file mode 100644
+index 2c0c95204cb5a..0000000000000
+--- a/net/sched/cls_tcindex.c
++++ /dev/null
+@@ -1,756 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * net/sched/cls_tcindex.c	Packet classifier for skb->tc_index
+- *
+- * Written 1998,1999 by Werner Almesberger, EPFL ICA
+- */
+-
+-#include <linux/module.h>
+-#include <linux/types.h>
+-#include <linux/kernel.h>
+-#include <linux/skbuff.h>
+-#include <linux/errno.h>
+-#include <linux/slab.h>
+-#include <linux/refcount.h>
+-#include <linux/rcupdate.h>
+-#include <net/act_api.h>
+-#include <net/netlink.h>
+-#include <net/pkt_cls.h>
+-#include <net/sch_generic.h>
+-
+-/*
+- * Passing parameters to the root seems to be done more awkwardly than really
+- * necessary. At least, u32 doesn't seem to use such dirty hacks. To be
+- * verified. FIXME.
+- */
+-
+-#define PERFECT_HASH_THRESHOLD	64	/* use perfect hash if not bigger */
+-#define DEFAULT_HASH_SIZE	64	/* optimized for diffserv */
+-
+-
+-struct tcindex_data;
+-
+-struct tcindex_filter_result {
+-	struct tcf_exts		exts;
+-	struct tcf_result	res;
+-	struct tcindex_data	*p;
+-	struct rcu_work		rwork;
+-};
+-
+-struct tcindex_filter {
+-	u16 key;
+-	struct tcindex_filter_result result;
+-	struct tcindex_filter __rcu *next;
+-	struct rcu_work rwork;
+-};
+-
+-
+-struct tcindex_data {
+-	struct tcindex_filter_result *perfect; /* perfect hash; NULL if none */
+-	struct tcindex_filter __rcu **h; /* imperfect hash; */
+-	struct tcf_proto *tp;
+-	u16 mask;		/* AND key with mask */
+-	u32 shift;		/* shift ANDed key to the right */
+-	u32 hash;		/* hash table size; 0 if undefined */
+-	u32 alloc_hash;		/* allocated size */
+-	u32 fall_through;	/* 0: only classify if explicit match */
+-	refcount_t refcnt;	/* a temporary refcnt for perfect hash */
+-	struct rcu_work rwork;
+-};
+-
+-static inline int tcindex_filter_is_set(struct tcindex_filter_result *r)
+-{
+-	return tcf_exts_has_actions(&r->exts) || r->res.classid;
+-}
+-
+-static void tcindex_data_get(struct tcindex_data *p)
+-{
+-	refcount_inc(&p->refcnt);
+-}
+-
+-static void tcindex_data_put(struct tcindex_data *p)
+-{
+-	if (refcount_dec_and_test(&p->refcnt)) {
+-		kfree(p->perfect);
+-		kfree(p->h);
+-		kfree(p);
+-	}
+-}
+-
+-static struct tcindex_filter_result *tcindex_lookup(struct tcindex_data *p,
+-						    u16 key)
+-{
+-	if (p->perfect) {
+-		struct tcindex_filter_result *f = p->perfect + key;
+-
+-		return tcindex_filter_is_set(f) ? f : NULL;
+-	} else if (p->h) {
+-		struct tcindex_filter __rcu **fp;
+-		struct tcindex_filter *f;
+-
+-		fp = &p->h[key % p->hash];
+-		for (f = rcu_dereference_bh_rtnl(*fp);
+-		     f;
+-		     fp = &f->next, f = rcu_dereference_bh_rtnl(*fp))
+-			if (f->key == key)
+-				return &f->result;
+-	}
+-
+-	return NULL;
+-}
+-
+-
+-static int tcindex_classify(struct sk_buff *skb, const struct tcf_proto *tp,
+-			    struct tcf_result *res)
+-{
+-	struct tcindex_data *p = rcu_dereference_bh(tp->root);
+-	struct tcindex_filter_result *f;
+-	int key = (skb->tc_index & p->mask) >> p->shift;
+-
+-	pr_debug("tcindex_classify(skb %p,tp %p,res %p),p %p\n",
+-		 skb, tp, res, p);
+-
+-	f = tcindex_lookup(p, key);
+-	if (!f) {
+-		struct Qdisc *q = tcf_block_q(tp->chain->block);
+-
+-		if (!p->fall_through)
+-			return -1;
+-		res->classid = TC_H_MAKE(TC_H_MAJ(q->handle), key);
+-		res->class = 0;
+-		pr_debug("alg 0x%x\n", res->classid);
+-		return 0;
+-	}
+-	*res = f->res;
+-	pr_debug("map 0x%x\n", res->classid);
+-
+-	return tcf_exts_exec(skb, &f->exts, res);
+-}
+-
+-
+-static void *tcindex_get(struct tcf_proto *tp, u32 handle)
+-{
+-	struct tcindex_data *p = rtnl_dereference(tp->root);
+-	struct tcindex_filter_result *r;
+-
+-	pr_debug("tcindex_get(tp %p,handle 0x%08x)\n", tp, handle);
+-	if (p->perfect && handle >= p->alloc_hash)
+-		return NULL;
+-	r = tcindex_lookup(p, handle);
+-	return r && tcindex_filter_is_set(r) ? r : NULL;
+-}
+-
+-static int tcindex_init(struct tcf_proto *tp)
+-{
+-	struct tcindex_data *p;
+-
+-	pr_debug("tcindex_init(tp %p)\n", tp);
+-	p = kzalloc(sizeof(struct tcindex_data), GFP_KERNEL);
+-	if (!p)
+-		return -ENOMEM;
+-
+-	p->mask = 0xffff;
+-	p->hash = DEFAULT_HASH_SIZE;
+-	p->fall_through = 1;
+-	refcount_set(&p->refcnt, 1); /* Paired with tcindex_destroy_work() */
+-
+-	rcu_assign_pointer(tp->root, p);
+-	return 0;
+-}
+-
+-static void __tcindex_destroy_rexts(struct tcindex_filter_result *r)
+-{
+-	tcf_exts_destroy(&r->exts);
+-	tcf_exts_put_net(&r->exts);
+-	tcindex_data_put(r->p);
+-}
+-
+-static void tcindex_destroy_rexts_work(struct work_struct *work)
+-{
+-	struct tcindex_filter_result *r;
+-
+-	r = container_of(to_rcu_work(work),
+-			 struct tcindex_filter_result,
+-			 rwork);
+-	rtnl_lock();
+-	__tcindex_destroy_rexts(r);
+-	rtnl_unlock();
+-}
+-
+-static void __tcindex_destroy_fexts(struct tcindex_filter *f)
+-{
+-	tcf_exts_destroy(&f->result.exts);
+-	tcf_exts_put_net(&f->result.exts);
+-	kfree(f);
+-}
+-
+-static void tcindex_destroy_fexts_work(struct work_struct *work)
+-{
+-	struct tcindex_filter *f = container_of(to_rcu_work(work),
+-						struct tcindex_filter,
+-						rwork);
+-
+-	rtnl_lock();
+-	__tcindex_destroy_fexts(f);
+-	rtnl_unlock();
+-}
+-
+-static int tcindex_delete(struct tcf_proto *tp, void *arg, bool *last,
+-			  bool rtnl_held, struct netlink_ext_ack *extack)
+-{
+-	struct tcindex_data *p = rtnl_dereference(tp->root);
+-	struct tcindex_filter_result *r = arg;
+-	struct tcindex_filter __rcu **walk;
+-	struct tcindex_filter *f = NULL;
+-
+-	pr_debug("tcindex_delete(tp %p,arg %p),p %p\n", tp, arg, p);
+-	if (p->perfect) {
+-		if (!r->res.class)
+-			return -ENOENT;
+-	} else {
+-		int i;
+-
+-		for (i = 0; i < p->hash; i++) {
+-			walk = p->h + i;
+-			for (f = rtnl_dereference(*walk); f;
+-			     walk = &f->next, f = rtnl_dereference(*walk)) {
+-				if (&f->result == r)
+-					goto found;
+-			}
+-		}
+-		return -ENOENT;
+-
+-found:
+-		rcu_assign_pointer(*walk, rtnl_dereference(f->next));
+-	}
+-	tcf_unbind_filter(tp, &r->res);
+-	/* all classifiers are required to call tcf_exts_destroy() after rcu
+-	 * grace period, since converted-to-rcu actions are relying on that
+-	 * in cleanup() callback
+-	 */
+-	if (f) {
+-		if (tcf_exts_get_net(&f->result.exts))
+-			tcf_queue_work(&f->rwork, tcindex_destroy_fexts_work);
+-		else
+-			__tcindex_destroy_fexts(f);
+-	} else {
+-		tcindex_data_get(p);
+-
+-		if (tcf_exts_get_net(&r->exts))
+-			tcf_queue_work(&r->rwork, tcindex_destroy_rexts_work);
+-		else
+-			__tcindex_destroy_rexts(r);
+-	}
+-
+-	*last = false;
+-	return 0;
+-}
+-
+-static void tcindex_destroy_work(struct work_struct *work)
+-{
+-	struct tcindex_data *p = container_of(to_rcu_work(work),
+-					      struct tcindex_data,
+-					      rwork);
+-
+-	tcindex_data_put(p);
+-}
+-
+-static inline int
+-valid_perfect_hash(struct tcindex_data *p)
+-{
+-	return  p->hash > (p->mask >> p->shift);
+-}
+-
+-static const struct nla_policy tcindex_policy[TCA_TCINDEX_MAX + 1] = {
+-	[TCA_TCINDEX_HASH]		= { .type = NLA_U32 },
+-	[TCA_TCINDEX_MASK]		= { .type = NLA_U16 },
+-	[TCA_TCINDEX_SHIFT]		= { .type = NLA_U32 },
+-	[TCA_TCINDEX_FALL_THROUGH]	= { .type = NLA_U32 },
+-	[TCA_TCINDEX_CLASSID]		= { .type = NLA_U32 },
+-};
+-
+-static int tcindex_filter_result_init(struct tcindex_filter_result *r,
+-				      struct tcindex_data *p,
+-				      struct net *net)
+-{
+-	memset(r, 0, sizeof(*r));
+-	r->p = p;
+-	return tcf_exts_init(&r->exts, net, TCA_TCINDEX_ACT,
+-			     TCA_TCINDEX_POLICE);
+-}
+-
+-static void tcindex_free_perfect_hash(struct tcindex_data *cp);
+-
+-static void tcindex_partial_destroy_work(struct work_struct *work)
+-{
+-	struct tcindex_data *p = container_of(to_rcu_work(work),
+-					      struct tcindex_data,
+-					      rwork);
+-
+-	rtnl_lock();
+-	if (p->perfect)
+-		tcindex_free_perfect_hash(p);
+-	kfree(p);
+-	rtnl_unlock();
+-}
+-
+-static void tcindex_free_perfect_hash(struct tcindex_data *cp)
+-{
+-	int i;
+-
+-	for (i = 0; i < cp->hash; i++)
+-		tcf_exts_destroy(&cp->perfect[i].exts);
+-	kfree(cp->perfect);
+-}
+-
+-static int tcindex_alloc_perfect_hash(struct net *net, struct tcindex_data *cp)
+-{
+-	int i, err = 0;
+-
+-	cp->perfect = kcalloc(cp->hash, sizeof(struct tcindex_filter_result),
+-			      GFP_KERNEL | __GFP_NOWARN);
+-	if (!cp->perfect)
+-		return -ENOMEM;
+-
+-	for (i = 0; i < cp->hash; i++) {
+-		err = tcf_exts_init(&cp->perfect[i].exts, net,
+-				    TCA_TCINDEX_ACT, TCA_TCINDEX_POLICE);
+-		if (err < 0)
+-			goto errout;
+-		cp->perfect[i].p = cp;
+-	}
+-
+-	return 0;
+-
+-errout:
+-	tcindex_free_perfect_hash(cp);
+-	return err;
+-}
+-
+-static int
+-tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+-		  u32 handle, struct tcindex_data *p,
+-		  struct tcindex_filter_result *r, struct nlattr **tb,
+-		  struct nlattr *est, bool ovr, struct netlink_ext_ack *extack)
+-{
+-	struct tcindex_filter_result new_filter_result;
+-	struct tcindex_data *cp = NULL, *oldp;
+-	struct tcindex_filter *f = NULL; /* make gcc behave */
+-	struct tcf_result cr = {};
+-	int err, balloc = 0;
+-	struct tcf_exts e;
+-	bool update_h = false;
+-
+-	err = tcf_exts_init(&e, net, TCA_TCINDEX_ACT, TCA_TCINDEX_POLICE);
+-	if (err < 0)
+-		return err;
+-	err = tcf_exts_validate(net, tp, tb, est, &e, ovr, true, extack);
+-	if (err < 0)
+-		goto errout;
+-
+-	err = -ENOMEM;
+-	/* tcindex_data attributes must look atomic to classifier/lookup so
+-	 * allocate new tcindex data and RCU assign it onto root. Keeping
+-	 * perfect hash and hash pointers from old data.
+-	 */
+-	cp = kzalloc(sizeof(*cp), GFP_KERNEL);
+-	if (!cp)
+-		goto errout;
+-
+-	cp->mask = p->mask;
+-	cp->shift = p->shift;
+-	cp->hash = p->hash;
+-	cp->alloc_hash = p->alloc_hash;
+-	cp->fall_through = p->fall_through;
+-	cp->tp = tp;
+-	refcount_set(&cp->refcnt, 1); /* Paired with tcindex_destroy_work() */
+-
+-	if (tb[TCA_TCINDEX_HASH])
+-		cp->hash = nla_get_u32(tb[TCA_TCINDEX_HASH]);
+-
+-	if (tb[TCA_TCINDEX_MASK])
+-		cp->mask = nla_get_u16(tb[TCA_TCINDEX_MASK]);
+-
+-	if (tb[TCA_TCINDEX_SHIFT]) {
+-		cp->shift = nla_get_u32(tb[TCA_TCINDEX_SHIFT]);
+-		if (cp->shift > 16) {
+-			err = -EINVAL;
+-			goto errout;
+-		}
+-	}
+-	if (!cp->hash) {
+-		/* Hash not specified, use perfect hash if the upper limit
+-		 * of the hashing index is below the threshold.
+-		 */
+-		if ((cp->mask >> cp->shift) < PERFECT_HASH_THRESHOLD)
+-			cp->hash = (cp->mask >> cp->shift) + 1;
+-		else
+-			cp->hash = DEFAULT_HASH_SIZE;
+-	}
+-
+-	if (p->perfect) {
+-		int i;
+-
+-		if (tcindex_alloc_perfect_hash(net, cp) < 0)
+-			goto errout;
+-		cp->alloc_hash = cp->hash;
+-		for (i = 0; i < min(cp->hash, p->hash); i++)
+-			cp->perfect[i].res = p->perfect[i].res;
+-		balloc = 1;
+-	}
+-	cp->h = p->h;
+-
+-	err = tcindex_filter_result_init(&new_filter_result, cp, net);
+-	if (err < 0)
+-		goto errout_alloc;
+-	if (r)
+-		cr = r->res;
+-
+-	err = -EBUSY;
+-
+-	/* Hash already allocated, make sure that we still meet the
+-	 * requirements for the allocated hash.
+-	 */
+-	if (cp->perfect) {
+-		if (!valid_perfect_hash(cp) ||
+-		    cp->hash > cp->alloc_hash)
+-			goto errout_alloc;
+-	} else if (cp->h && cp->hash != cp->alloc_hash) {
+-		goto errout_alloc;
+-	}
+-
+-	err = -EINVAL;
+-	if (tb[TCA_TCINDEX_FALL_THROUGH])
+-		cp->fall_through = nla_get_u32(tb[TCA_TCINDEX_FALL_THROUGH]);
+-
+-	if (!cp->perfect && !cp->h)
+-		cp->alloc_hash = cp->hash;
+-
+-	/* Note: this could be as restrictive as if (handle & ~(mask >> shift))
+-	 * but then, we'd fail handles that may become valid after some future
+-	 * mask change. While this is extremely unlikely to ever matter,
+-	 * the check below is safer (and also more backwards-compatible).
+-	 */
+-	if (cp->perfect || valid_perfect_hash(cp))
+-		if (handle >= cp->alloc_hash)
+-			goto errout_alloc;
+-
+-
+-	err = -ENOMEM;
+-	if (!cp->perfect && !cp->h) {
+-		if (valid_perfect_hash(cp)) {
+-			if (tcindex_alloc_perfect_hash(net, cp) < 0)
+-				goto errout_alloc;
+-			balloc = 1;
+-		} else {
+-			struct tcindex_filter __rcu **hash;
+-
+-			hash = kcalloc(cp->hash,
+-				       sizeof(struct tcindex_filter *),
+-				       GFP_KERNEL);
+-
+-			if (!hash)
+-				goto errout_alloc;
+-
+-			cp->h = hash;
+-			balloc = 2;
+-		}
+-	}
+-
+-	if (cp->perfect) {
+-		r = cp->perfect + handle;
+-	} else {
+-		/* imperfect area is updated in-place using rcu */
+-		update_h = !!tcindex_lookup(cp, handle);
+-		r = &new_filter_result;
+-	}
+-
+-	if (r == &new_filter_result) {
+-		f = kzalloc(sizeof(*f), GFP_KERNEL);
+-		if (!f)
+-			goto errout_alloc;
+-		f->key = handle;
+-		f->next = NULL;
+-		err = tcindex_filter_result_init(&f->result, cp, net);
+-		if (err < 0) {
+-			kfree(f);
+-			goto errout_alloc;
+-		}
+-	}
+-
+-	if (tb[TCA_TCINDEX_CLASSID]) {
+-		cr.classid = nla_get_u32(tb[TCA_TCINDEX_CLASSID]);
+-		tcf_bind_filter(tp, &cr, base);
+-	}
+-
+-	oldp = p;
+-	r->res = cr;
+-	tcf_exts_change(&r->exts, &e);
+-
+-	rcu_assign_pointer(tp->root, cp);
+-
+-	if (update_h) {
+-		struct tcindex_filter __rcu **fp;
+-		struct tcindex_filter *cf;
+-
+-		f->result.res = r->res;
+-		tcf_exts_change(&f->result.exts, &r->exts);
+-
+-		/* imperfect area bucket */
+-		fp = cp->h + (handle % cp->hash);
+-
+-		/* lookup the filter, guaranteed to exist */
+-		for (cf = rcu_dereference_bh_rtnl(*fp); cf;
+-		     fp = &cf->next, cf = rcu_dereference_bh_rtnl(*fp))
+-			if (cf->key == (u16)handle)
+-				break;
+-
+-		f->next = cf->next;
+-
+-		cf = rcu_replace_pointer(*fp, f, 1);
+-		tcf_exts_get_net(&cf->result.exts);
+-		tcf_queue_work(&cf->rwork, tcindex_destroy_fexts_work);
+-	} else if (r == &new_filter_result) {
+-		struct tcindex_filter *nfp;
+-		struct tcindex_filter __rcu **fp;
+-
+-		f->result.res = r->res;
+-		tcf_exts_change(&f->result.exts, &r->exts);
+-
+-		fp = cp->h + (handle % cp->hash);
+-		for (nfp = rtnl_dereference(*fp);
+-		     nfp;
+-		     fp = &nfp->next, nfp = rtnl_dereference(*fp))
+-				; /* nothing */
+-
+-		rcu_assign_pointer(*fp, f);
+-	} else {
+-		tcf_exts_destroy(&new_filter_result.exts);
+-	}
+-
+-	if (oldp)
+-		tcf_queue_work(&oldp->rwork, tcindex_partial_destroy_work);
+-	return 0;
+-
+-errout_alloc:
+-	if (balloc == 1)
+-		tcindex_free_perfect_hash(cp);
+-	else if (balloc == 2)
+-		kfree(cp->h);
+-	tcf_exts_destroy(&new_filter_result.exts);
+-errout:
+-	kfree(cp);
+-	tcf_exts_destroy(&e);
+-	return err;
+-}
+-
+-static int
+-tcindex_change(struct net *net, struct sk_buff *in_skb,
+-	       struct tcf_proto *tp, unsigned long base, u32 handle,
+-	       struct nlattr **tca, void **arg, bool ovr,
+-	       bool rtnl_held, struct netlink_ext_ack *extack)
+-{
+-	struct nlattr *opt = tca[TCA_OPTIONS];
+-	struct nlattr *tb[TCA_TCINDEX_MAX + 1];
+-	struct tcindex_data *p = rtnl_dereference(tp->root);
+-	struct tcindex_filter_result *r = *arg;
+-	int err;
+-
+-	pr_debug("tcindex_change(tp %p,handle 0x%08x,tca %p,arg %p),opt %p,"
+-	    "p %p,r %p,*arg %p\n",
+-	    tp, handle, tca, arg, opt, p, r, *arg);
+-
+-	if (!opt)
+-		return 0;
+-
+-	err = nla_parse_nested_deprecated(tb, TCA_TCINDEX_MAX, opt,
+-					  tcindex_policy, NULL);
+-	if (err < 0)
+-		return err;
+-
+-	return tcindex_set_parms(net, tp, base, handle, p, r, tb,
+-				 tca[TCA_RATE], ovr, extack);
+-}
+-
+-static void tcindex_walk(struct tcf_proto *tp, struct tcf_walker *walker,
+-			 bool rtnl_held)
+-{
+-	struct tcindex_data *p = rtnl_dereference(tp->root);
+-	struct tcindex_filter *f, *next;
+-	int i;
+-
+-	pr_debug("tcindex_walk(tp %p,walker %p),p %p\n", tp, walker, p);
+-	if (p->perfect) {
+-		for (i = 0; i < p->hash; i++) {
+-			if (!p->perfect[i].res.class)
+-				continue;
+-			if (walker->count >= walker->skip) {
+-				if (walker->fn(tp, p->perfect + i, walker) < 0) {
+-					walker->stop = 1;
+-					return;
+-				}
+-			}
+-			walker->count++;
+-		}
+-	}
+-	if (!p->h)
+-		return;
+-	for (i = 0; i < p->hash; i++) {
+-		for (f = rtnl_dereference(p->h[i]); f; f = next) {
+-			next = rtnl_dereference(f->next);
+-			if (walker->count >= walker->skip) {
+-				if (walker->fn(tp, &f->result, walker) < 0) {
+-					walker->stop = 1;
+-					return;
+-				}
+-			}
+-			walker->count++;
+-		}
+-	}
+-}
+-
+-static void tcindex_destroy(struct tcf_proto *tp, bool rtnl_held,
+-			    struct netlink_ext_ack *extack)
+-{
+-	struct tcindex_data *p = rtnl_dereference(tp->root);
+-	int i;
+-
+-	pr_debug("tcindex_destroy(tp %p),p %p\n", tp, p);
+-
+-	if (p->perfect) {
+-		for (i = 0; i < p->hash; i++) {
+-			struct tcindex_filter_result *r = p->perfect + i;
+-
+-			/* tcf_queue_work() does not guarantee the ordering we
+-			 * want, so we have to take this refcnt temporarily to
+-			 * ensure 'p' is freed after all tcindex_filter_result
+-			 * here. Imperfect hash does not need this, because it
+-			 * uses linked lists rather than an array.
+-			 */
+-			tcindex_data_get(p);
+-
+-			tcf_unbind_filter(tp, &r->res);
+-			if (tcf_exts_get_net(&r->exts))
+-				tcf_queue_work(&r->rwork,
+-					       tcindex_destroy_rexts_work);
+-			else
+-				__tcindex_destroy_rexts(r);
+-		}
+-	}
+-
+-	for (i = 0; p->h && i < p->hash; i++) {
+-		struct tcindex_filter *f, *next;
+-		bool last;
+-
+-		for (f = rtnl_dereference(p->h[i]); f; f = next) {
+-			next = rtnl_dereference(f->next);
+-			tcindex_delete(tp, &f->result, &last, rtnl_held, NULL);
+-		}
+-	}
+-
+-	tcf_queue_work(&p->rwork, tcindex_destroy_work);
+-}
+-
+-
+-static int tcindex_dump(struct net *net, struct tcf_proto *tp, void *fh,
+-			struct sk_buff *skb, struct tcmsg *t, bool rtnl_held)
+-{
+-	struct tcindex_data *p = rtnl_dereference(tp->root);
+-	struct tcindex_filter_result *r = fh;
+-	struct nlattr *nest;
+-
+-	pr_debug("tcindex_dump(tp %p,fh %p,skb %p,t %p),p %p,r %p\n",
+-		 tp, fh, skb, t, p, r);
+-	pr_debug("p->perfect %p p->h %p\n", p->perfect, p->h);
+-
+-	nest = nla_nest_start_noflag(skb, TCA_OPTIONS);
+-	if (nest == NULL)
+-		goto nla_put_failure;
+-
+-	if (!fh) {
+-		t->tcm_handle = ~0; /* whatever ... */
+-		if (nla_put_u32(skb, TCA_TCINDEX_HASH, p->hash) ||
+-		    nla_put_u16(skb, TCA_TCINDEX_MASK, p->mask) ||
+-		    nla_put_u32(skb, TCA_TCINDEX_SHIFT, p->shift) ||
+-		    nla_put_u32(skb, TCA_TCINDEX_FALL_THROUGH, p->fall_through))
+-			goto nla_put_failure;
+-		nla_nest_end(skb, nest);
+-	} else {
+-		if (p->perfect) {
+-			t->tcm_handle = r - p->perfect;
+-		} else {
+-			struct tcindex_filter *f;
+-			struct tcindex_filter __rcu **fp;
+-			int i;
+-
+-			t->tcm_handle = 0;
+-			for (i = 0; !t->tcm_handle && i < p->hash; i++) {
+-				fp = &p->h[i];
+-				for (f = rtnl_dereference(*fp);
+-				     !t->tcm_handle && f;
+-				     fp = &f->next, f = rtnl_dereference(*fp)) {
+-					if (&f->result == r)
+-						t->tcm_handle = f->key;
+-				}
+-			}
+-		}
+-		pr_debug("handle = %d\n", t->tcm_handle);
+-		if (r->res.class &&
+-		    nla_put_u32(skb, TCA_TCINDEX_CLASSID, r->res.classid))
+-			goto nla_put_failure;
+-
+-		if (tcf_exts_dump(skb, &r->exts) < 0)
+-			goto nla_put_failure;
+-		nla_nest_end(skb, nest);
+-
+-		if (tcf_exts_dump_stats(skb, &r->exts) < 0)
+-			goto nla_put_failure;
+-	}
+-
+-	return skb->len;
+-
+-nla_put_failure:
+-	nla_nest_cancel(skb, nest);
+-	return -1;
+-}
+-
+-static void tcindex_bind_class(void *fh, u32 classid, unsigned long cl,
+-			       void *q, unsigned long base)
+-{
+-	struct tcindex_filter_result *r = fh;
+-
+-	if (r && r->res.classid == classid) {
+-		if (cl)
+-			__tcf_bind_filter(q, &r->res, base);
+-		else
+-			__tcf_unbind_filter(q, &r->res);
+-	}
+-}
+-
+-static struct tcf_proto_ops cls_tcindex_ops __read_mostly = {
+-	.kind		=	"tcindex",
+-	.classify	=	tcindex_classify,
+-	.init		=	tcindex_init,
+-	.destroy	=	tcindex_destroy,
+-	.get		=	tcindex_get,
+-	.change		=	tcindex_change,
+-	.delete		=	tcindex_delete,
+-	.walk		=	tcindex_walk,
+-	.dump		=	tcindex_dump,
+-	.bind_class	=	tcindex_bind_class,
+-	.owner		=	THIS_MODULE,
+-};
+-
+-static int __init init_tcindex(void)
+-{
+-	return register_tcf_proto_ops(&cls_tcindex_ops);
+-}
+-
+-static void __exit exit_tcindex(void)
+-{
+-	unregister_tcf_proto_ops(&cls_tcindex_ops);
+-}
+-
+-module_init(init_tcindex)
+-module_exit(exit_tcindex)
+-MODULE_LICENSE("GPL");
+diff --git a/net/sctp/stream_sched_prio.c b/net/sctp/stream_sched_prio.c
+index 4fc9f2923ed11..7dd9f8b387cca 100644
+--- a/net/sctp/stream_sched_prio.c
++++ b/net/sctp/stream_sched_prio.c
+@@ -25,6 +25,18 @@
+ 
+ static void sctp_sched_prio_unsched_all(struct sctp_stream *stream);
+ 
++static struct sctp_stream_priorities *sctp_sched_prio_head_get(struct sctp_stream_priorities *p)
++{
++	p->users++;
++	return p;
++}
++
++static void sctp_sched_prio_head_put(struct sctp_stream_priorities *p)
++{
++	if (p && --p->users == 0)
++		kfree(p);
++}
++
+ static struct sctp_stream_priorities *sctp_sched_prio_new_head(
+ 			struct sctp_stream *stream, int prio, gfp_t gfp)
+ {
+@@ -38,6 +50,7 @@ static struct sctp_stream_priorities *sctp_sched_prio_new_head(
+ 	INIT_LIST_HEAD(&p->active);
+ 	p->next = NULL;
+ 	p->prio = prio;
++	p->users = 1;
+ 
+ 	return p;
+ }
+@@ -53,7 +66,7 @@ static struct sctp_stream_priorities *sctp_sched_prio_get_head(
+ 	 */
+ 	list_for_each_entry(p, &stream->prio_list, prio_sched) {
+ 		if (p->prio == prio)
+-			return p;
++			return sctp_sched_prio_head_get(p);
+ 		if (p->prio > prio)
+ 			break;
+ 	}
+@@ -70,7 +83,7 @@ static struct sctp_stream_priorities *sctp_sched_prio_get_head(
+ 			 */
+ 			break;
+ 		if (p->prio == prio)
+-			return p;
++			return sctp_sched_prio_head_get(p);
+ 	}
+ 
+ 	/* If not even there, allocate a new one. */
+@@ -154,32 +167,21 @@ static int sctp_sched_prio_set(struct sctp_stream *stream, __u16 sid,
+ 	struct sctp_stream_out_ext *soute = sout->ext;
+ 	struct sctp_stream_priorities *prio_head, *old;
+ 	bool reschedule = false;
+-	int i;
++
++	old = soute->prio_head;
++	if (old && old->prio == prio)
++		return 0;
+ 
+ 	prio_head = sctp_sched_prio_get_head(stream, prio, gfp);
+ 	if (!prio_head)
+ 		return -ENOMEM;
+ 
+ 	reschedule = sctp_sched_prio_unsched(soute);
+-	old = soute->prio_head;
+ 	soute->prio_head = prio_head;
+ 	if (reschedule)
+ 		sctp_sched_prio_sched(stream, soute);
+ 
+-	if (!old)
+-		/* Happens when we set the priority for the first time */
+-		return 0;
+-
+-	for (i = 0; i < stream->outcnt; i++) {
+-		soute = SCTP_SO(stream, i)->ext;
+-		if (soute && soute->prio_head == old)
+-			/* It's still in use, nothing else to do here. */
+-			return 0;
+-	}
+-
+-	/* No hits, we are good to free it. */
+-	kfree(old);
+-
++	sctp_sched_prio_head_put(old);
+ 	return 0;
+ }
+ 
+@@ -206,20 +208,8 @@ static int sctp_sched_prio_init_sid(struct sctp_stream *stream, __u16 sid,
+ 
+ static void sctp_sched_prio_free_sid(struct sctp_stream *stream, __u16 sid)
+ {
+-	struct sctp_stream_priorities *prio = SCTP_SO(stream, sid)->ext->prio_head;
+-	int i;
+-
+-	if (!prio)
+-		return;
+-
++	sctp_sched_prio_head_put(SCTP_SO(stream, sid)->ext->prio_head);
+ 	SCTP_SO(stream, sid)->ext->prio_head = NULL;
+-	for (i = 0; i < stream->outcnt; i++) {
+-		if (SCTP_SO(stream, i)->ext &&
+-		    SCTP_SO(stream, i)->ext->prio_head == prio)
+-			return;
+-	}
+-
+-	kfree(prio);
+ }
+ 
+ static void sctp_sched_prio_free(struct sctp_stream *stream)
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index c478108ca6a65..c6e8bd78e35d6 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -3026,6 +3026,8 @@ rpc_clnt_swap_activate_callback(struct rpc_clnt *clnt,
+ int
+ rpc_clnt_swap_activate(struct rpc_clnt *clnt)
+ {
++	while (clnt != clnt->cl_parent)
++		clnt = clnt->cl_parent;
+ 	if (atomic_inc_return(&clnt->cl_swapper) == 1)
+ 		return rpc_clnt_iterate_for_each_xprt(clnt,
+ 				rpc_clnt_swap_activate_callback, NULL);
+@@ -3045,6 +3047,8 @@ rpc_clnt_swap_deactivate_callback(struct rpc_clnt *clnt,
+ void
+ rpc_clnt_swap_deactivate(struct rpc_clnt *clnt)
+ {
++	while (clnt != clnt->cl_parent)
++		clnt = clnt->cl_parent;
+ 	if (atomic_dec_if_positive(&clnt->cl_swapper) == 0)
+ 		rpc_clnt_iterate_for_each_xprt(clnt,
+ 				rpc_clnt_swap_deactivate_callback, NULL);
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 21f20c3cda971..ac7feadb43904 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -949,7 +949,9 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ 			       MSG_CMSG_COMPAT))
+ 		return -EOPNOTSUPP;
+ 
+-	mutex_lock(&tls_ctx->tx_lock);
++	ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
++	if (ret)
++		return ret;
+ 	lock_sock(sk);
+ 
+ 	if (unlikely(msg->msg_controllen)) {
+@@ -1283,7 +1285,9 @@ int tls_sw_sendpage(struct sock *sk, struct page *page,
+ 		      MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY))
+ 		return -EOPNOTSUPP;
+ 
+-	mutex_lock(&tls_ctx->tx_lock);
++	ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
++	if (ret)
++		return ret;
+ 	lock_sock(sk);
+ 	ret = tls_sw_do_sendpage(sk, page, offset, size, flags);
+ 	release_sock(sk);
+@@ -2266,11 +2270,19 @@ static void tx_work_handler(struct work_struct *work)
+ 
+ 	if (!test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask))
+ 		return;
+-	mutex_lock(&tls_ctx->tx_lock);
+-	lock_sock(sk);
+-	tls_tx_records(sk, -1);
+-	release_sock(sk);
+-	mutex_unlock(&tls_ctx->tx_lock);
++
++	if (mutex_trylock(&tls_ctx->tx_lock)) {
++		lock_sock(sk);
++		tls_tx_records(sk, -1);
++		release_sock(sk);
++		mutex_unlock(&tls_ctx->tx_lock);
++	} else if (!test_and_set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
++		/* Someone is holding the tx_lock, they will likely run Tx
++		 * and cancel the work on their way out of the lock section.
++		 * Schedule a long delay just in case.
++		 */
++		schedule_delayed_work(&ctx->tx_work.work, msecs_to_jiffies(10));
++	}
+ }
+ 
+ void tls_sw_write_space(struct sock *sk, struct tls_context *ctx)
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 8a7f0c8fba5e9..ea36d8c47b31a 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -12675,7 +12675,7 @@ static int nl80211_set_rekey_data(struct sk_buff *skb, struct genl_info *info)
+ 		return -ERANGE;
+ 	if (nla_len(tb[NL80211_REKEY_DATA_KCK]) != NL80211_KCK_LEN &&
+ 	    !(rdev->wiphy.flags & WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK &&
+-	      nla_len(tb[NL80211_REKEY_DATA_KEK]) == NL80211_KCK_EXT_LEN))
++	      nla_len(tb[NL80211_REKEY_DATA_KCK]) == NL80211_KCK_EXT_LEN))
+ 		return -ERANGE;
+ 
+ 	rekey_data.kek = nla_data(tb[NL80211_REKEY_DATA_KEK]);
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index 060e365c8259b..f4d98ed8fa313 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -269,6 +269,15 @@ void cfg80211_conn_work(struct work_struct *work)
+ 	rtnl_unlock();
+ }
+ 
++static void cfg80211_step_auth_next(struct cfg80211_conn *conn,
++				    struct cfg80211_bss *bss)
++{
++	memcpy(conn->bssid, bss->bssid, ETH_ALEN);
++	conn->params.bssid = conn->bssid;
++	conn->params.channel = bss->channel;
++	conn->state = CFG80211_CONN_AUTHENTICATE_NEXT;
++}
++
+ /* Returned bss is reference counted and must be cleaned up appropriately. */
+ static struct cfg80211_bss *cfg80211_get_conn_bss(struct wireless_dev *wdev)
+ {
+@@ -286,10 +295,7 @@ static struct cfg80211_bss *cfg80211_get_conn_bss(struct wireless_dev *wdev)
+ 	if (!bss)
+ 		return NULL;
+ 
+-	memcpy(wdev->conn->bssid, bss->bssid, ETH_ALEN);
+-	wdev->conn->params.bssid = wdev->conn->bssid;
+-	wdev->conn->params.channel = bss->channel;
+-	wdev->conn->state = CFG80211_CONN_AUTHENTICATE_NEXT;
++	cfg80211_step_auth_next(wdev->conn, bss);
+ 	schedule_work(&rdev->conn_work);
+ 
+ 	return bss;
+@@ -568,7 +574,12 @@ static int cfg80211_sme_connect(struct wireless_dev *wdev,
+ 	wdev->conn->params.ssid_len = wdev->ssid_len;
+ 
+ 	/* see if we have the bss already */
+-	bss = cfg80211_get_conn_bss(wdev);
++	bss = cfg80211_get_bss(wdev->wiphy, wdev->conn->params.channel,
++			       wdev->conn->params.bssid,
++			       wdev->conn->params.ssid,
++			       wdev->conn->params.ssid_len,
++			       wdev->conn_bss_type,
++			       IEEE80211_PRIVACY(wdev->conn->params.privacy));
+ 
+ 	if (prev_bssid) {
+ 		memcpy(wdev->conn->prev_bssid, prev_bssid, ETH_ALEN);
+@@ -579,6 +590,7 @@ static int cfg80211_sme_connect(struct wireless_dev *wdev,
+ 	if (bss) {
+ 		enum nl80211_timeout_reason treason;
+ 
++		cfg80211_step_auth_next(wdev->conn, bss);
+ 		err = cfg80211_conn_do_work(wdev, &treason);
+ 		cfg80211_put_bss(wdev->wiphy, bss);
+ 	} else {
+@@ -1245,6 +1257,15 @@ int cfg80211_connect(struct cfg80211_registered_device *rdev,
+ 	} else {
+ 		if (WARN_ON(connkeys))
+ 			return -EINVAL;
++
++		/* connect can point to wdev->wext.connect which
++		 * can hold key data from a previous connection
++		 */
++		connect->key = NULL;
++		connect->key_len = 0;
++		connect->key_idx = 0;
++		connect->crypto.cipher_group = 0;
++		connect->crypto.n_ciphers_pairwise = 0;
+ 	}
+ 
+ 	wdev->connect_keys = connkeys;
+diff --git a/scripts/package/mkdebian b/scripts/package/mkdebian
+index 60a2a63a5e900..32d528a367868 100755
+--- a/scripts/package/mkdebian
++++ b/scripts/package/mkdebian
+@@ -236,7 +236,7 @@ binary-arch: build-arch
+ 	KBUILD_BUILD_VERSION=${revision} -f \$(srctree)/Makefile intdeb-pkg
+ 
+ clean:
+-	rm -rf debian/*tmp debian/files
++	rm -rf debian/files debian/linux-*
+ 	\$(MAKE) clean
+ 
+ binary: binary-arch
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 600b97677085f..dd4b28b11ebe3 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -378,7 +378,9 @@ out:
+ /**
+  * ima_file_mmap - based on policy, collect/store measurement.
+  * @file: pointer to the file to be measured (May be NULL)
+- * @prot: contains the protection that will be applied by the kernel.
++ * @reqprot: protection requested by the application
++ * @prot: protection that will be applied by the kernel
++ * @flags: operational flags
+  *
+  * Measure files being mmapped executable based on the ima_must_measure()
+  * policy decision.
+@@ -386,7 +388,8 @@ out:
+  * On success return 0.  On integrity appraisal error, assuming the file
+  * is in policy and IMA-appraisal is in enforcing mode, return -EACCES.
+  */
+-int ima_file_mmap(struct file *file, unsigned long prot)
++int ima_file_mmap(struct file *file, unsigned long reqprot,
++		  unsigned long prot, unsigned long flags)
+ {
+ 	u32 secid;
+ 
+diff --git a/security/security.c b/security/security.c
+index 8ea826ea6167e..f9157d5023c66 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -1534,12 +1534,13 @@ static inline unsigned long mmap_prot(struct file *file, unsigned long prot)
+ int security_mmap_file(struct file *file, unsigned long prot,
+ 			unsigned long flags)
+ {
++	unsigned long prot_adj = mmap_prot(file, prot);
+ 	int ret;
+-	ret = call_int_hook(mmap_file, 0, file, prot,
+-					mmap_prot(file, prot), flags);
++
++	ret = call_int_hook(mmap_file, 0, file, prot, prot_adj, flags);
+ 	if (ret)
+ 		return ret;
+-	return ima_file_mmap(file, prot);
++	return ima_file_mmap(file, prot, prot_adj, flags);
+ }
+ 
+ int security_mmap_addr(unsigned long addr)
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 82f14c3f642bd..24c2638cde376 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -2331,7 +2331,7 @@ static int dspio_set_uint_param_no_source(struct hda_codec *codec, int mod_id,
+ static int dspio_alloc_dma_chan(struct hda_codec *codec, unsigned int *dma_chan)
+ {
+ 	int status = 0;
+-	unsigned int size = sizeof(dma_chan);
++	unsigned int size = sizeof(*dma_chan);
+ 
+ 	codec_dbg(codec, "     dspio_alloc_dma_chan() -- begin\n");
+ 	status = dspio_scp(codec, MASTERCONTROL, 0x20,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index fffa681313b66..f2ef75c8de427 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -11153,6 +11153,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0698, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
++	SND_PCI_QUIRK(0x103c, 0x870c, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ 	SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ 	SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x877e, "HP 288 Pro G6", ALC671_FIXUP_HP_HEADSET_MIC2),
+diff --git a/sound/pci/ice1712/aureon.c b/sound/pci/ice1712/aureon.c
+index 9a30f6d35d135..40a0e00950301 100644
+--- a/sound/pci/ice1712/aureon.c
++++ b/sound/pci/ice1712/aureon.c
+@@ -1892,6 +1892,7 @@ static int aureon_add_controls(struct snd_ice1712 *ice)
+ 		unsigned char id;
+ 		snd_ice1712_save_gpio_status(ice);
+ 		id = aureon_cs8415_get(ice, CS8415_ID);
++		snd_ice1712_restore_gpio_status(ice);
+ 		if (id != 0x41)
+ 			dev_info(ice->card->dev,
+ 				 "No CS8415 chip. Skipping CS8415 controls.\n");
+@@ -1909,7 +1910,6 @@ static int aureon_add_controls(struct snd_ice1712 *ice)
+ 					kctl->id.device = ice->pcm->device;
+ 			}
+ 		}
+-		snd_ice1712_restore_gpio_status(ice);
+ 	}
+ 
+ 	return 0;
+diff --git a/sound/soc/atmel/mchp-spdifrx.c b/sound/soc/atmel/mchp-spdifrx.c
+index 46f3407ed0e81..39a3c2a33bdbb 100644
+--- a/sound/soc/atmel/mchp-spdifrx.c
++++ b/sound/soc/atmel/mchp-spdifrx.c
+@@ -56,7 +56,7 @@
+ /* Validity Bit Mode */
+ #define SPDIFRX_MR_VBMODE_MASK		GENAMSK(1, 1)
+ #define SPDIFRX_MR_VBMODE_ALWAYS_LOAD \
+-	(0 << 1)	/* Load sample regardles of validity bit value */
++	(0 << 1)	/* Load sample regardless of validity bit value */
+ #define SPDIFRX_MR_VBMODE_DISCARD_IF_VB1 \
+ 	(1 << 1)	/* Load sample only if validity bit is 0 */
+ 
+@@ -217,7 +217,6 @@ struct mchp_spdifrx_ch_stat {
+ struct mchp_spdifrx_user_data {
+ 	unsigned char data[SPDIFRX_UD_BITS / 8];
+ 	struct completion done;
+-	spinlock_t lock;	/* protect access to user data */
+ };
+ 
+ struct mchp_spdifrx_mixer_control {
+@@ -231,13 +230,13 @@ struct mchp_spdifrx_mixer_control {
+ struct mchp_spdifrx_dev {
+ 	struct snd_dmaengine_dai_dma_data	capture;
+ 	struct mchp_spdifrx_mixer_control	control;
+-	spinlock_t				blockend_lock;	/* protect access to blockend_refcount */
+-	int					blockend_refcount;
++	struct mutex				mlock;
+ 	struct device				*dev;
+ 	struct regmap				*regmap;
+ 	struct clk				*pclk;
+ 	struct clk				*gclk;
+ 	unsigned int				fmt;
++	unsigned int				trigger_enabled;
+ 	unsigned int				gclk_enabled:1;
+ };
+ 
+@@ -275,37 +274,11 @@ static void mchp_spdifrx_channel_user_data_read(struct mchp_spdifrx_dev *dev,
+ 	}
+ }
+ 
+-/* called from non-atomic context only */
+-static void mchp_spdifrx_isr_blockend_en(struct mchp_spdifrx_dev *dev)
+-{
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&dev->blockend_lock, flags);
+-	dev->blockend_refcount++;
+-	/* don't enable BLOCKEND interrupt if it's already enabled */
+-	if (dev->blockend_refcount == 1)
+-		regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_BLOCKEND);
+-	spin_unlock_irqrestore(&dev->blockend_lock, flags);
+-}
+-
+-/* called from atomic/non-atomic context */
+-static void mchp_spdifrx_isr_blockend_dis(struct mchp_spdifrx_dev *dev)
+-{
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&dev->blockend_lock, flags);
+-	dev->blockend_refcount--;
+-	/* don't enable BLOCKEND interrupt if it's already enabled */
+-	if (dev->blockend_refcount == 0)
+-		regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_BLOCKEND);
+-	spin_unlock_irqrestore(&dev->blockend_lock, flags);
+-}
+-
+ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
+ {
+ 	struct mchp_spdifrx_dev *dev = dev_id;
+ 	struct mchp_spdifrx_mixer_control *ctrl = &dev->control;
+-	u32 sr, imr, pending, idr = 0;
++	u32 sr, imr, pending;
+ 	irqreturn_t ret = IRQ_NONE;
+ 	int ch;
+ 
+@@ -320,13 +293,10 @@ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
+ 
+ 	if (pending & SPDIFRX_IR_BLOCKEND) {
+ 		for (ch = 0; ch < SPDIFRX_CHANNELS; ch++) {
+-			spin_lock(&ctrl->user_data[ch].lock);
+ 			mchp_spdifrx_channel_user_data_read(dev, ch);
+-			spin_unlock(&ctrl->user_data[ch].lock);
+-
+ 			complete(&ctrl->user_data[ch].done);
+ 		}
+-		mchp_spdifrx_isr_blockend_dis(dev);
++		regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_BLOCKEND);
+ 		ret = IRQ_HANDLED;
+ 	}
+ 
+@@ -334,7 +304,7 @@ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
+ 		if (pending & SPDIFRX_IR_CSC(ch)) {
+ 			mchp_spdifrx_channel_status_read(dev, ch);
+ 			complete(&ctrl->ch_stat[ch].done);
+-			idr |= SPDIFRX_IR_CSC(ch);
++			regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_CSC(ch));
+ 			ret = IRQ_HANDLED;
+ 		}
+ 	}
+@@ -344,8 +314,6 @@ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
+ 		ret = IRQ_HANDLED;
+ 	}
+ 
+-	regmap_write(dev->regmap, SPDIFRX_IDR, idr);
+-
+ 	return ret;
+ }
+ 
+@@ -353,47 +321,40 @@ static int mchp_spdifrx_trigger(struct snd_pcm_substream *substream, int cmd,
+ 				struct snd_soc_dai *dai)
+ {
+ 	struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
+-	u32 mr;
+-	int running;
+-	int ret;
+-
+-	regmap_read(dev->regmap, SPDIFRX_MR, &mr);
+-	running = !!(mr & SPDIFRX_MR_RXEN_ENABLE);
++	int ret = 0;
+ 
+ 	switch (cmd) {
+ 	case SNDRV_PCM_TRIGGER_START:
+ 	case SNDRV_PCM_TRIGGER_RESUME:
+ 	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+-		if (!running) {
+-			mr &= ~SPDIFRX_MR_RXEN_MASK;
+-			mr |= SPDIFRX_MR_RXEN_ENABLE;
+-			/* enable overrun interrupts */
+-			regmap_write(dev->regmap, SPDIFRX_IER,
+-				     SPDIFRX_IR_OVERRUN);
+-		}
++		mutex_lock(&dev->mlock);
++		/* Enable overrun interrupts */
++		regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_OVERRUN);
++
++		/* Enable receiver. */
++		regmap_update_bits(dev->regmap, SPDIFRX_MR, SPDIFRX_MR_RXEN_MASK,
++				   SPDIFRX_MR_RXEN_ENABLE);
++		dev->trigger_enabled = true;
++		mutex_unlock(&dev->mlock);
+ 		break;
+ 	case SNDRV_PCM_TRIGGER_STOP:
+ 	case SNDRV_PCM_TRIGGER_SUSPEND:
+ 	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+-		if (running) {
+-			mr &= ~SPDIFRX_MR_RXEN_MASK;
+-			mr |= SPDIFRX_MR_RXEN_DISABLE;
+-			/* disable overrun interrupts */
+-			regmap_write(dev->regmap, SPDIFRX_IDR,
+-				     SPDIFRX_IR_OVERRUN);
+-		}
++		mutex_lock(&dev->mlock);
++		/* Disable overrun interrupts */
++		regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_OVERRUN);
++
++		/* Disable receiver. */
++		regmap_update_bits(dev->regmap, SPDIFRX_MR, SPDIFRX_MR_RXEN_MASK,
++				   SPDIFRX_MR_RXEN_DISABLE);
++		dev->trigger_enabled = false;
++		mutex_unlock(&dev->mlock);
+ 		break;
+ 	default:
+-		return -EINVAL;
+-	}
+-
+-	ret = regmap_write(dev->regmap, SPDIFRX_MR, mr);
+-	if (ret) {
+-		dev_err(dev->dev, "unable to enable/disable RX: %d\n", ret);
+-		return ret;
++		ret = -EINVAL;
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
+@@ -401,7 +362,7 @@ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
+ 				  struct snd_soc_dai *dai)
+ {
+ 	struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
+-	u32 mr;
++	u32 mr = 0;
+ 	int ret;
+ 
+ 	dev_dbg(dev->dev, "%s() rate=%u format=%#x width=%u channels=%u\n",
+@@ -413,13 +374,6 @@ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
+ 		return -EINVAL;
+ 	}
+ 
+-	regmap_read(dev->regmap, SPDIFRX_MR, &mr);
+-
+-	if (mr & SPDIFRX_MR_RXEN_ENABLE) {
+-		dev_err(dev->dev, "PCM already running\n");
+-		return -EBUSY;
+-	}
+-
+ 	if (params_channels(params) != SPDIFRX_CHANNELS) {
+ 		dev_err(dev->dev, "unsupported number of channels: %d\n",
+ 			params_channels(params));
+@@ -445,6 +399,13 @@ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
+ 		return -EINVAL;
+ 	}
+ 
++	mutex_lock(&dev->mlock);
++	if (dev->trigger_enabled) {
++		dev_err(dev->dev, "PCM already running\n");
++		ret = -EBUSY;
++		goto unlock;
++	}
++
+ 	if (dev->gclk_enabled) {
+ 		clk_disable_unprepare(dev->gclk);
+ 		dev->gclk_enabled = 0;
+@@ -455,19 +416,24 @@ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
+ 		dev_err(dev->dev,
+ 			"unable to set gclk min rate: rate %u * ratio %u + 1\n",
+ 			params_rate(params), SPDIFRX_GCLK_RATIO_MIN);
+-		return ret;
++		goto unlock;
+ 	}
+ 	ret = clk_prepare_enable(dev->gclk);
+ 	if (ret) {
+ 		dev_err(dev->dev, "unable to enable gclk: %d\n", ret);
+-		return ret;
++		goto unlock;
+ 	}
+ 	dev->gclk_enabled = 1;
+ 
+ 	dev_dbg(dev->dev, "GCLK range min set to %d\n",
+ 		params_rate(params) * SPDIFRX_GCLK_RATIO_MIN + 1);
+ 
+-	return regmap_write(dev->regmap, SPDIFRX_MR, mr);
++	ret = regmap_write(dev->regmap, SPDIFRX_MR, mr);
++
++unlock:
++	mutex_unlock(&dev->mlock);
++
++	return ret;
+ }
+ 
+ static int mchp_spdifrx_hw_free(struct snd_pcm_substream *substream,
+@@ -475,10 +441,12 @@ static int mchp_spdifrx_hw_free(struct snd_pcm_substream *substream,
+ {
+ 	struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
+ 
++	mutex_lock(&dev->mlock);
+ 	if (dev->gclk_enabled) {
+ 		clk_disable_unprepare(dev->gclk);
+ 		dev->gclk_enabled = 0;
+ 	}
++	mutex_unlock(&dev->mlock);
+ 	return 0;
+ }
+ 
+@@ -515,22 +483,51 @@ static int mchp_spdifrx_cs_get(struct mchp_spdifrx_dev *dev,
+ {
+ 	struct mchp_spdifrx_mixer_control *ctrl = &dev->control;
+ 	struct mchp_spdifrx_ch_stat *ch_stat = &ctrl->ch_stat[channel];
+-	int ret;
+-
+-	regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_CSC(channel));
+-	/* check for new data available */
+-	ret = wait_for_completion_interruptible_timeout(&ch_stat->done,
+-							msecs_to_jiffies(100));
+-	/* IP might not be started or valid stream might not be prezent */
+-	if (ret < 0) {
+-		dev_dbg(dev->dev, "channel status for channel %d timeout\n",
+-			channel);
++	int ret = 0;
++
++	mutex_lock(&dev->mlock);
++
++	/*
++	 * We may reach this point with both clocks enabled but the receiver
++	 * still disabled. To void waiting for completion and return with
++	 * timeout check the dev->trigger_enabled.
++	 *
++	 * To retrieve data:
++	 * - if the receiver is enabled CSC IRQ will update the data in software
++	 *   caches (ch_stat->data)
++	 * - otherwise we just update it here the software caches with latest
++	 *   available information and return it; in this case we don't need
++	 *   spin locking as the IRQ is disabled and will not be raised from
++	 *   anywhere else.
++	 */
++
++	if (dev->trigger_enabled) {
++		reinit_completion(&ch_stat->done);
++		regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_CSC(channel));
++		/* Check for new data available */
++		ret = wait_for_completion_interruptible_timeout(&ch_stat->done,
++								msecs_to_jiffies(100));
++		/* Valid stream might not be present */
++		if (ret <= 0) {
++			dev_dbg(dev->dev, "channel status for channel %d timeout\n",
++				channel);
++			regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_CSC(channel));
++			ret = ret ? : -ETIMEDOUT;
++			goto unlock;
++		} else {
++			ret = 0;
++		}
++	} else {
++		/* Update software cache with latest channel status. */
++		mchp_spdifrx_channel_status_read(dev, channel);
+ 	}
+ 
+ 	memcpy(uvalue->value.iec958.status, ch_stat->data,
+ 	       sizeof(ch_stat->data));
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&dev->mlock);
++	return ret;
+ }
+ 
+ static int mchp_spdifrx_cs1_get(struct snd_kcontrol *kcontrol,
+@@ -564,29 +561,49 @@ static int mchp_spdifrx_subcode_ch_get(struct mchp_spdifrx_dev *dev,
+ 				       int channel,
+ 				       struct snd_ctl_elem_value *uvalue)
+ {
+-	unsigned long flags;
+ 	struct mchp_spdifrx_mixer_control *ctrl = &dev->control;
+ 	struct mchp_spdifrx_user_data *user_data = &ctrl->user_data[channel];
+-	int ret;
+-
+-	reinit_completion(&user_data->done);
+-	mchp_spdifrx_isr_blockend_en(dev);
+-	ret = wait_for_completion_interruptible_timeout(&user_data->done,
+-							msecs_to_jiffies(100));
+-	/* IP might not be started or valid stream might not be prezent */
+-	if (ret <= 0) {
+-		dev_dbg(dev->dev, "user data for channel %d timeout\n",
+-			channel);
+-		mchp_spdifrx_isr_blockend_dis(dev);
+-		return ret;
++	int ret = 0;
++
++	mutex_lock(&dev->mlock);
++
++	/*
++	 * We may reach this point with both clocks enabled but the receiver
++	 * still disabled. To void waiting for completion to just timeout we
++	 * check here the dev->trigger_enabled flag.
++	 *
++	 * To retrieve data:
++	 * - if the receiver is enabled we need to wait for blockend IRQ to read
++	 *   data to and update it for us in software caches
++	 * - otherwise reading the SPDIFRX_CHUD() registers is enough.
++	 */
++
++	if (dev->trigger_enabled) {
++		reinit_completion(&user_data->done);
++		regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_BLOCKEND);
++		ret = wait_for_completion_interruptible_timeout(&user_data->done,
++								msecs_to_jiffies(100));
++		/* Valid stream might not be present. */
++		if (ret <= 0) {
++			dev_dbg(dev->dev, "user data for channel %d timeout\n",
++				channel);
++			regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_BLOCKEND);
++			ret = ret ? : -ETIMEDOUT;
++			goto unlock;
++		} else {
++			ret = 0;
++		}
++	} else {
++		/* Update software cache with last available data. */
++		mchp_spdifrx_channel_user_data_read(dev, channel);
+ 	}
+ 
+-	spin_lock_irqsave(&user_data->lock, flags);
+ 	memcpy(uvalue->value.iec958.subcode, user_data->data,
+ 	       sizeof(user_data->data));
+-	spin_unlock_irqrestore(&user_data->lock, flags);
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&dev->mlock);
++	return ret;
+ }
+ 
+ static int mchp_spdifrx_subcode_ch1_get(struct snd_kcontrol *kcontrol,
+@@ -627,10 +644,24 @@ static int mchp_spdifrx_ulock_get(struct snd_kcontrol *kcontrol,
+ 	u32 val;
+ 	bool ulock_old = ctrl->ulock;
+ 
+-	regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+-	ctrl->ulock = !(val & SPDIFRX_RSR_ULOCK);
++	mutex_lock(&dev->mlock);
++
++	/*
++	 * The RSR.ULOCK has wrong value if both pclk and gclk are enabled
++	 * and the receiver is disabled. Thus we take into account the
++	 * dev->trigger_enabled here to return a real status.
++	 */
++	if (dev->trigger_enabled) {
++		regmap_read(dev->regmap, SPDIFRX_RSR, &val);
++		ctrl->ulock = !(val & SPDIFRX_RSR_ULOCK);
++	} else {
++		ctrl->ulock = 0;
++	}
++
+ 	uvalue->value.integer.value[0] = ctrl->ulock;
+ 
++	mutex_unlock(&dev->mlock);
++
+ 	return ulock_old != ctrl->ulock;
+ }
+ 
+@@ -643,8 +674,22 @@ static int mchp_spdifrx_badf_get(struct snd_kcontrol *kcontrol,
+ 	u32 val;
+ 	bool badf_old = ctrl->badf;
+ 
+-	regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+-	ctrl->badf = !!(val & SPDIFRX_RSR_BADF);
++	mutex_lock(&dev->mlock);
++
++	/*
++	 * The RSR.ULOCK has wrong value if both pclk and gclk are enabled
++	 * and the receiver is disabled. Thus we take into account the
++	 * dev->trigger_enabled here to return a real status.
++	 */
++	if (dev->trigger_enabled) {
++		regmap_read(dev->regmap, SPDIFRX_RSR, &val);
++		ctrl->badf = !!(val & SPDIFRX_RSR_BADF);
++	} else {
++		ctrl->badf = 0;
++	}
++
++	mutex_unlock(&dev->mlock);
++
+ 	uvalue->value.integer.value[0] = ctrl->badf;
+ 
+ 	return badf_old != ctrl->badf;
+@@ -656,11 +701,48 @@ static int mchp_spdifrx_signal_get(struct snd_kcontrol *kcontrol,
+ 	struct snd_soc_dai *dai = snd_kcontrol_chip(kcontrol);
+ 	struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
+ 	struct mchp_spdifrx_mixer_control *ctrl = &dev->control;
+-	u32 val;
++	u32 val = ~0U, loops = 10;
++	int ret;
+ 	bool signal_old = ctrl->signal;
+ 
+-	regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+-	ctrl->signal = !(val & SPDIFRX_RSR_NOSIGNAL);
++	mutex_lock(&dev->mlock);
++
++	/*
++	 * To get the signal we need to have receiver enabled. This
++	 * could be enabled also from trigger() function thus we need to
++	 * take care of not disabling the receiver when it runs.
++	 */
++	if (!dev->trigger_enabled) {
++		ret = clk_prepare_enable(dev->gclk);
++		if (ret)
++			goto unlock;
++
++		regmap_update_bits(dev->regmap, SPDIFRX_MR, SPDIFRX_MR_RXEN_MASK,
++				   SPDIFRX_MR_RXEN_ENABLE);
++
++		/* Wait for RSR.ULOCK bit. */
++		while (--loops) {
++			regmap_read(dev->regmap, SPDIFRX_RSR, &val);
++			if (!(val & SPDIFRX_RSR_ULOCK))
++				break;
++			usleep_range(100, 150);
++		}
++
++		regmap_update_bits(dev->regmap, SPDIFRX_MR, SPDIFRX_MR_RXEN_MASK,
++				   SPDIFRX_MR_RXEN_DISABLE);
++
++		clk_disable_unprepare(dev->gclk);
++	} else {
++		regmap_read(dev->regmap, SPDIFRX_RSR, &val);
++	}
++
++unlock:
++	mutex_unlock(&dev->mlock);
++
++	if (!(val & SPDIFRX_RSR_ULOCK))
++		ctrl->signal = !(val & SPDIFRX_RSR_NOSIGNAL);
++	else
++		ctrl->signal = 0;
+ 	uvalue->value.integer.value[0] = ctrl->signal;
+ 
+ 	return signal_old != ctrl->signal;
+@@ -685,18 +767,32 @@ static int mchp_spdifrx_rate_get(struct snd_kcontrol *kcontrol,
+ 	u32 val;
+ 	int rate;
+ 
+-	regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+-
+-	/* if the receiver is not locked, ISF data is invalid */
+-	if (val & SPDIFRX_RSR_ULOCK || !(val & SPDIFRX_RSR_IFS_MASK)) {
++	mutex_lock(&dev->mlock);
++
++	/*
++	 * The RSR.ULOCK has wrong value if both pclk and gclk are enabled
++	 * and the receiver is disabled. Thus we take into account the
++	 * dev->trigger_enabled here to return a real status.
++	 */
++	if (dev->trigger_enabled) {
++		regmap_read(dev->regmap, SPDIFRX_RSR, &val);
++		/* If the receiver is not locked, ISF data is invalid. */
++		if (val & SPDIFRX_RSR_ULOCK || !(val & SPDIFRX_RSR_IFS_MASK)) {
++			ucontrol->value.integer.value[0] = 0;
++			goto unlock;
++		}
++	} else {
++		/* Reveicer is not locked, IFS data is invalid. */
+ 		ucontrol->value.integer.value[0] = 0;
+-		return 0;
++		goto unlock;
+ 	}
+ 
+ 	rate = clk_get_rate(dev->gclk);
+ 
+ 	ucontrol->value.integer.value[0] = rate / (32 * SPDIFRX_RSR_IFS(val));
+ 
++unlock:
++	mutex_unlock(&dev->mlock);
+ 	return 0;
+ }
+ 
+@@ -808,11 +904,9 @@ static int mchp_spdifrx_dai_probe(struct snd_soc_dai *dai)
+ 		     SPDIFRX_MR_AUTORST_NOACTION |
+ 		     SPDIFRX_MR_PACK_DISABLED);
+ 
+-	dev->blockend_refcount = 0;
+ 	for (ch = 0; ch < SPDIFRX_CHANNELS; ch++) {
+ 		init_completion(&ctrl->ch_stat[ch].done);
+ 		init_completion(&ctrl->user_data[ch].done);
+-		spin_lock_init(&ctrl->user_data[ch].lock);
+ 	}
+ 
+ 	/* Add controls */
+@@ -827,7 +921,7 @@ static int mchp_spdifrx_dai_remove(struct snd_soc_dai *dai)
+ 	struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
+ 
+ 	/* Disable interrupts */
+-	regmap_write(dev->regmap, SPDIFRX_IDR, 0xFF);
++	regmap_write(dev->regmap, SPDIFRX_IDR, GENMASK(14, 0));
+ 
+ 	clk_disable_unprepare(dev->pclk);
+ 
+@@ -912,7 +1006,17 @@ static int mchp_spdifrx_probe(struct platform_device *pdev)
+ 			"failed to get the PMC generated clock: %d\n", err);
+ 		return err;
+ 	}
+-	spin_lock_init(&dev->blockend_lock);
++
++	/*
++	 * Signal control need a valid rate on gclk. hw_params() configures
++	 * it propertly but requesting signal before any hw_params() has been
++	 * called lead to invalid value returned for signal. Thus, configure
++	 * gclk at a valid rate, here, in initialization, to simplify the
++	 * control path.
++	 */
++	clk_set_min_rate(dev->gclk, 48000 * SPDIFRX_GCLK_RATIO_MIN + 1);
++
++	mutex_init(&dev->mlock);
+ 
+ 	dev->dev = &pdev->dev;
+ 	dev->regmap = regmap;
+diff --git a/sound/soc/atmel/mchp-spdiftx.c b/sound/soc/atmel/mchp-spdiftx.c
+index 0d2e3fa21519c..bcca1cf3cd7b6 100644
+--- a/sound/soc/atmel/mchp-spdiftx.c
++++ b/sound/soc/atmel/mchp-spdiftx.c
+@@ -80,7 +80,7 @@
+ #define SPDIFTX_MR_VALID1			BIT(24)
+ #define SPDIFTX_MR_VALID2			BIT(25)
+ 
+-/* Disable Null Frame on underrrun */
++/* Disable Null Frame on underrun */
+ #define SPDIFTX_MR_DNFR_MASK		GENMASK(27, 27)
+ #define SPDIFTX_MR_DNFR_INVALID		(0 << 27)
+ #define SPDIFTX_MR_DNFR_VALID		(1 << 27)
+diff --git a/sound/soc/atmel/tse850-pcm5142.c b/sound/soc/atmel/tse850-pcm5142.c
+index 59e2edb22b3ad..50c3dc6936f90 100644
+--- a/sound/soc/atmel/tse850-pcm5142.c
++++ b/sound/soc/atmel/tse850-pcm5142.c
+@@ -23,7 +23,7 @@
+ //   IN2 +---o--+------------+--o---+ OUT2
+ //               loop2 relays
+ //
+-// The 'loop1' gpio pin controlls two relays, which are either in loop
++// The 'loop1' gpio pin controls two relays, which are either in loop
+ // position, meaning that input and output are directly connected, or
+ // they are in mixer position, meaning that the signal is passed through
+ // the 'Sum' mixer. Similarly for 'loop2'.
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index 25f331551f689..f1c9e563994b2 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -1701,7 +1701,7 @@ config SND_SOC_WSA881X
+ config SND_SOC_ZL38060
+ 	tristate "Microsemi ZL38060 Connected Home Audio Processor"
+ 	depends on SPI_MASTER
+-	select GPIOLIB
++	depends on GPIOLIB
+ 	select REGMAP
+ 	help
+ 	  Support for ZL38060 Connected Home Audio Processor from Microsemi,
+diff --git a/sound/soc/codecs/adau7118.c b/sound/soc/codecs/adau7118.c
+index 841229dcbca10..305f294b7710e 100644
+--- a/sound/soc/codecs/adau7118.c
++++ b/sound/soc/codecs/adau7118.c
+@@ -445,22 +445,6 @@ static const struct snd_soc_component_driver adau7118_component_driver = {
+ 	.non_legacy_dai_naming	= 1,
+ };
+ 
+-static void adau7118_regulator_disable(void *data)
+-{
+-	struct adau7118_data *st = data;
+-	int ret;
+-	/*
+-	 * If we fail to disable DVDD, don't bother in trying IOVDD. We
+-	 * actually don't want to be left in the situation where DVDD
+-	 * is enabled and IOVDD is disabled.
+-	 */
+-	ret = regulator_disable(st->dvdd);
+-	if (ret)
+-		return;
+-
+-	regulator_disable(st->iovdd);
+-}
+-
+ static int adau7118_regulator_setup(struct adau7118_data *st)
+ {
+ 	st->iovdd = devm_regulator_get(st->dev, "iovdd");
+@@ -482,8 +466,7 @@ static int adau7118_regulator_setup(struct adau7118_data *st)
+ 		regcache_cache_only(st->map, true);
+ 	}
+ 
+-	return devm_add_action_or_reset(st->dev, adau7118_regulator_disable,
+-					st);
++	return 0;
+ }
+ 
+ static int adau7118_parset_dt(const struct adau7118_data *st)
+diff --git a/sound/soc/codecs/tlv320adcx140.c b/sound/soc/codecs/tlv320adcx140.c
+index 53a80246aee19..a6241a0453694 100644
+--- a/sound/soc/codecs/tlv320adcx140.c
++++ b/sound/soc/codecs/tlv320adcx140.c
+@@ -870,7 +870,7 @@ static int adcx140_configure_gpio(struct adcx140_priv *adcx140)
+ 
+ 	gpio_count = device_property_count_u32(adcx140->dev,
+ 			"ti,gpio-config");
+-	if (gpio_count == 0)
++	if (gpio_count <= 0)
+ 		return 0;
+ 
+ 	if (gpio_count != ADCX140_NUM_GPIO_CFGS)
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 3e5c1eaccd5e7..6a5d2b08e2713 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -230,6 +230,7 @@ static int fsl_sai_set_dai_fmt_tr(struct snd_soc_dai *cpu_dai,
+ 	if (!sai->is_lsb_first)
+ 		val_cr4 |= FSL_SAI_CR4_MF;
+ 
++	sai->is_dsp_mode = false;
+ 	/* DAI mode */
+ 	switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ 	case SND_SOC_DAIFMT_I2S:
+diff --git a/sound/soc/kirkwood/kirkwood-dma.c b/sound/soc/kirkwood/kirkwood-dma.c
+index e037826b24517..2d41e6ab2ce4e 100644
+--- a/sound/soc/kirkwood/kirkwood-dma.c
++++ b/sound/soc/kirkwood/kirkwood-dma.c
+@@ -86,7 +86,7 @@ kirkwood_dma_conf_mbus_windows(void __iomem *base, int win,
+ 
+ 	/* try to find matching cs for current dma address */
+ 	for (i = 0; i < dram->num_cs; i++) {
+-		const struct mbus_dram_window *cs = dram->cs + i;
++		const struct mbus_dram_window *cs = &dram->cs[i];
+ 		if ((cs->base & 0xffff0000) < (dma & 0xffff0000)) {
+ 			writel(cs->base & 0xffff0000,
+ 				base + KIRKWOOD_AUDIO_WIN_BASE_REG(win));
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index d0f3ff8edd904..8f4ebb189e019 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -822,7 +822,7 @@ int snd_soc_new_compress(struct snd_soc_pcm_runtime *rtd, int num)
+ 		rtd->fe_compr = 1;
+ 		if (rtd->dai_link->dpcm_playback)
+ 			be_pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream->private_data = rtd;
+-		else if (rtd->dai_link->dpcm_capture)
++		if (rtd->dai_link->dpcm_capture)
+ 			be_pcm->streams[SNDRV_PCM_STREAM_CAPTURE].substream->private_data = rtd;
+ 		memcpy(compr->ops, &soc_compr_dyn_ops, sizeof(soc_compr_dyn_ops));
+ 	} else {
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index 592536904dde2..d2bcce627b320 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -1912,10 +1912,38 @@ static void profile_close_perf_events(struct profiler_bpf *obj)
+ 	profile_perf_event_cnt = 0;
+ }
+ 
++static int profile_open_perf_event(int mid, int cpu, int map_fd)
++{
++	int pmu_fd;
++
++	pmu_fd = syscall(__NR_perf_event_open, &metrics[mid].attr,
++			 -1 /*pid*/, cpu, -1 /*group_fd*/, 0);
++	if (pmu_fd < 0) {
++		if (errno == ENODEV) {
++			p_info("cpu %d may be offline, skip %s profiling.",
++				cpu, metrics[mid].name);
++			profile_perf_event_cnt++;
++			return 0;
++		}
++		return -1;
++	}
++
++	if (bpf_map_update_elem(map_fd,
++				&profile_perf_event_cnt,
++				&pmu_fd, BPF_ANY) ||
++	    ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0)) {
++		close(pmu_fd);
++		return -1;
++	}
++
++	profile_perf_events[profile_perf_event_cnt++] = pmu_fd;
++	return 0;
++}
++
+ static int profile_open_perf_events(struct profiler_bpf *obj)
+ {
+ 	unsigned int cpu, m;
+-	int map_fd, pmu_fd;
++	int map_fd;
+ 
+ 	profile_perf_events = calloc(
+ 		sizeof(int), obj->rodata->num_cpu * obj->rodata->num_metric);
+@@ -1934,17 +1962,11 @@ static int profile_open_perf_events(struct profiler_bpf *obj)
+ 		if (!metrics[m].selected)
+ 			continue;
+ 		for (cpu = 0; cpu < obj->rodata->num_cpu; cpu++) {
+-			pmu_fd = syscall(__NR_perf_event_open, &metrics[m].attr,
+-					 -1/*pid*/, cpu, -1/*group_fd*/, 0);
+-			if (pmu_fd < 0 ||
+-			    bpf_map_update_elem(map_fd, &profile_perf_event_cnt,
+-						&pmu_fd, BPF_ANY) ||
+-			    ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0)) {
++			if (profile_open_perf_event(m, cpu, map_fd)) {
+ 				p_err("failed to create event %s on cpu %d",
+ 				      metrics[m].name, cpu);
+ 				return -1;
+ 			}
+-			profile_perf_events[profile_perf_event_cnt++] = pmu_fd;
+ 		}
+ 	}
+ 	return 0;
+diff --git a/tools/iio/iio_utils.c b/tools/iio/iio_utils.c
+index d66b18c54606a..48360994c2a13 100644
+--- a/tools/iio/iio_utils.c
++++ b/tools/iio/iio_utils.c
+@@ -262,6 +262,7 @@ int iioutils_get_param_float(float *output, const char *param_name,
+ 			if (fscanf(sysfsfp, "%f", output) != 1)
+ 				ret = errno ? -errno : -ENODATA;
+ 
++			fclose(sysfsfp);
+ 			break;
+ 		}
+ error_free_filename:
+@@ -342,9 +343,9 @@ int build_channel_array(const char *device_dir,
+ 			}
+ 
+ 			sysfsfp = fopen(filename, "r");
++			free(filename);
+ 			if (!sysfsfp) {
+ 				ret = -errno;
+-				free(filename);
+ 				goto error_close_dir;
+ 			}
+ 
+@@ -354,7 +355,6 @@ int build_channel_array(const char *device_dir,
+ 				if (fclose(sysfsfp))
+ 					perror("build_channel_array(): Failed to close file");
+ 
+-				free(filename);
+ 				goto error_close_dir;
+ 			}
+ 			if (ret == 1)
+@@ -362,11 +362,9 @@ int build_channel_array(const char *device_dir,
+ 
+ 			if (fclose(sysfsfp)) {
+ 				ret = -errno;
+-				free(filename);
+ 				goto error_close_dir;
+ 			}
+ 
+-			free(filename);
+ 		}
+ 
+ 	*ci_array = malloc(sizeof(**ci_array) * (*counter));
+@@ -392,9 +390,9 @@ int build_channel_array(const char *device_dir,
+ 			}
+ 
+ 			sysfsfp = fopen(filename, "r");
++			free(filename);
+ 			if (!sysfsfp) {
+ 				ret = -errno;
+-				free(filename);
+ 				count--;
+ 				goto error_cleanup_array;
+ 			}
+@@ -402,20 +400,17 @@ int build_channel_array(const char *device_dir,
+ 			errno = 0;
+ 			if (fscanf(sysfsfp, "%i", &current_enabled) != 1) {
+ 				ret = errno ? -errno : -ENODATA;
+-				free(filename);
+ 				count--;
+ 				goto error_cleanup_array;
+ 			}
+ 
+ 			if (fclose(sysfsfp)) {
+ 				ret = -errno;
+-				free(filename);
+ 				count--;
+ 				goto error_cleanup_array;
+ 			}
+ 
+ 			if (!current_enabled) {
+-				free(filename);
+ 				count--;
+ 				continue;
+ 			}
+@@ -426,7 +421,6 @@ int build_channel_array(const char *device_dir,
+ 						strlen(ent->d_name) -
+ 						strlen("_en"));
+ 			if (!current->name) {
+-				free(filename);
+ 				ret = -ENOMEM;
+ 				count--;
+ 				goto error_cleanup_array;
+@@ -436,7 +430,6 @@ int build_channel_array(const char *device_dir,
+ 			ret = iioutils_break_up_name(current->name,
+ 						     &current->generic_name);
+ 			if (ret) {
+-				free(filename);
+ 				free(current->name);
+ 				count--;
+ 				goto error_cleanup_array;
+@@ -447,17 +440,16 @@ int build_channel_array(const char *device_dir,
+ 				       scan_el_dir,
+ 				       current->name);
+ 			if (ret < 0) {
+-				free(filename);
+ 				ret = -ENOMEM;
+ 				goto error_cleanup_array;
+ 			}
+ 
+ 			sysfsfp = fopen(filename, "r");
++			free(filename);
+ 			if (!sysfsfp) {
+ 				ret = -errno;
+-				fprintf(stderr, "failed to open %s\n",
+-					filename);
+-				free(filename);
++				fprintf(stderr, "failed to open %s/%s_index\n",
++					scan_el_dir, current->name);
+ 				goto error_cleanup_array;
+ 			}
+ 
+@@ -467,17 +459,14 @@ int build_channel_array(const char *device_dir,
+ 				if (fclose(sysfsfp))
+ 					perror("build_channel_array(): Failed to close file");
+ 
+-				free(filename);
+ 				goto error_cleanup_array;
+ 			}
+ 
+ 			if (fclose(sysfsfp)) {
+ 				ret = -errno;
+-				free(filename);
+ 				goto error_cleanup_array;
+ 			}
+ 
+-			free(filename);
+ 			/* Find the scale */
+ 			ret = iioutils_get_param_float(&current->scale,
+ 						       "scale",
+diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
+index e6f644cdc9f15..f7c48b1fb3a05 100644
+--- a/tools/lib/bpf/btf.c
++++ b/tools/lib/bpf/btf.c
+@@ -614,8 +614,21 @@ int btf__align_of(const struct btf *btf, __u32 id)
+ 			if (align <= 0)
+ 				return align;
+ 			max_align = max(max_align, align);
++
++			/* if field offset isn't aligned according to field
++			 * type's alignment, then struct must be packed
++			 */
++			if (btf_member_bitfield_size(t, i) == 0 &&
++			    (m->offset % (8 * align)) != 0)
++				return 1;
+ 		}
+ 
++		/* if struct/union size isn't a multiple of its alignment,
++		 * then struct must be packed
++		 */
++		if ((t->size % max_align) != 0)
++			return 1;
++
+ 		return max_align;
+ 	}
+ 	default:
+diff --git a/tools/lib/bpf/nlattr.c b/tools/lib/bpf/nlattr.c
+index b607fa9852b1c..1a04299a2a604 100644
+--- a/tools/lib/bpf/nlattr.c
++++ b/tools/lib/bpf/nlattr.c
+@@ -178,7 +178,7 @@ int libbpf_nla_dump_errormsg(struct nlmsghdr *nlh)
+ 		hlen += nlmsg_len(&err->msg);
+ 
+ 	attr = (struct nlattr *) ((void *) err + hlen);
+-	alen = nlh->nlmsg_len - hlen;
++	alen = (void *)nlh + nlh->nlmsg_len - (void *)attr;
+ 
+ 	if (libbpf_nla_parse(tb, NLMSGERR_ATTR_MAX, attr, alen,
+ 			     extack_policy) != 0) {
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 985bcc5cea8a4..9a0a54194636c 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -180,6 +180,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
+ 		"kunit_try_catch_throw",
+ 		"xen_start_kernel",
+ 		"cpu_bringup_and_idle",
++		"stop_this_cpu",
+ 	};
+ 
+ 	if (!func)
+@@ -571,6 +572,7 @@ static int create_static_call_sections(struct objtool_file *file)
+ 		if (strncmp(key_name, STATIC_CALL_TRAMP_PREFIX_STR,
+ 			    STATIC_CALL_TRAMP_PREFIX_LEN)) {
+ 			WARN("static_call: trampoline name malformed: %s", key_name);
++			free(key_name);
+ 			return -1;
+ 		}
+ 		tmp = key_name + STATIC_CALL_TRAMP_PREFIX_LEN - STATIC_CALL_KEY_PREFIX_LEN;
+@@ -580,6 +582,7 @@ static int create_static_call_sections(struct objtool_file *file)
+ 		if (!key_sym) {
+ 			if (!module) {
+ 				WARN("static_call: can't find static_call_key symbol: %s", tmp);
++				free(key_name);
+ 				return -1;
+ 			}
+ 
+@@ -863,6 +866,8 @@ static const char *uaccess_safe_builtin[] = {
+ 	"__tsan_atomic64_compare_exchange_val",
+ 	"__tsan_atomic_thread_fence",
+ 	"__tsan_atomic_signal_fence",
++	"__tsan_unaligned_read16",
++	"__tsan_unaligned_write16",
+ 	/* KCOV */
+ 	"write_comp_data",
+ 	"check_kcov_mode",
+diff --git a/tools/perf/perf-completion.sh b/tools/perf/perf-completion.sh
+index fdf75d45efff7..978249d7868c2 100644
+--- a/tools/perf/perf-completion.sh
++++ b/tools/perf/perf-completion.sh
+@@ -165,7 +165,12 @@ __perf_main ()
+ 
+ 		local cur1=${COMP_WORDS[COMP_CWORD]}
+ 		local raw_evts=$($cmd list --raw-dump)
+-		local arr s tmp result
++		local arr s tmp result cpu_evts
++
++		# aarch64 doesn't have /sys/bus/event_source/devices/cpu/events
++		if [[ `uname -m` != aarch64 ]]; then
++			cpu_evts=$(ls /sys/bus/event_source/devices/cpu/events)
++		fi
+ 
+ 		if [[ "$cur1" == */* && ${cur1#*/} =~ ^[A-Z] ]]; then
+ 			OLD_IFS="$IFS"
+@@ -183,9 +188,9 @@ __perf_main ()
+ 				fi
+ 			done
+ 
+-			evts=${result}" "$(ls /sys/bus/event_source/devices/cpu/events)
++			evts=${result}" "${cpu_evts}
+ 		else
+-			evts=${raw_evts}" "$(ls /sys/bus/event_source/devices/cpu/events)
++			evts=${raw_evts}" "${cpu_evts}
+ 		fi
+ 
+ 		if [[ "$cur1" == , ]]; then
+diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c
+index 0bf6b4d4c90a7..570cde4640d05 100644
+--- a/tools/perf/util/llvm-utils.c
++++ b/tools/perf/util/llvm-utils.c
+@@ -525,14 +525,37 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
+ 
+ 	pr_debug("llvm compiling command template: %s\n", template);
+ 
++	/*
++	 * Below, substitute control characters for values that can cause the
++	 * echo to misbehave, then substitute the values back.
++	 */
+ 	err = -ENOMEM;
+-	if (asprintf(&command_echo, "echo -n \"%s\"", template) < 0)
++	if (asprintf(&command_echo, "echo -n \a%s\a", template) < 0)
+ 		goto errout;
+ 
++#define SWAP_CHAR(a, b) do { if (*p == a) *p = b; } while (0)
++	for (char *p = command_echo; *p; p++) {
++		SWAP_CHAR('<', '\001');
++		SWAP_CHAR('>', '\002');
++		SWAP_CHAR('"', '\003');
++		SWAP_CHAR('\'', '\004');
++		SWAP_CHAR('|', '\005');
++		SWAP_CHAR('&', '\006');
++		SWAP_CHAR('\a', '"');
++	}
+ 	err = read_from_pipe(command_echo, (void **) &command_out, NULL);
+ 	if (err)
+ 		goto errout;
+ 
++	for (char *p = command_out; *p; p++) {
++		SWAP_CHAR('\001', '<');
++		SWAP_CHAR('\002', '>');
++		SWAP_CHAR('\003', '"');
++		SWAP_CHAR('\004', '\'');
++		SWAP_CHAR('\005', '|');
++		SWAP_CHAR('\006', '&');
++	}
++#undef SWAP_CHAR
+ 	pr_debug("llvm compiling command : %s\n", command_out);
+ 
+ 	err = read_from_pipe(template, &obj_buf, &obj_buf_sz);
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index 8b1e3ae8fe50d..ea26f2b0c1bc2 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -178,6 +178,7 @@ my $store_failures;
+ my $store_successes;
+ my $test_name;
+ my $timeout;
++my $run_timeout;
+ my $connect_timeout;
+ my $config_bisect_exec;
+ my $booted_timeout;
+@@ -340,6 +341,7 @@ my %option_map = (
+     "STORE_SUCCESSES"		=> \$store_successes,
+     "TEST_NAME"			=> \$test_name,
+     "TIMEOUT"			=> \$timeout,
++    "RUN_TIMEOUT"		=> \$run_timeout,
+     "CONNECT_TIMEOUT"		=> \$connect_timeout,
+     "CONFIG_BISECT_EXEC"	=> \$config_bisect_exec,
+     "BOOTED_TIMEOUT"		=> \$booted_timeout,
+@@ -1433,7 +1435,8 @@ sub reboot {
+ 
+ 	# Still need to wait for the reboot to finish
+ 	wait_for_monitor($time, $reboot_success_line);
+-
++    }
++    if ($powercycle || $time) {
+ 	end_monitor;
+     }
+ }
+@@ -1799,6 +1802,14 @@ sub run_command {
+     $command =~ s/\$SSH_USER/$ssh_user/g;
+     $command =~ s/\$MACHINE/$machine/g;
+ 
++    if (!defined($timeout)) {
++	$timeout = $run_timeout;
++    }
++
++    if (!defined($timeout)) {
++	$timeout = -1; # tell wait_for_input to wait indefinitely
++    }
++
+     doprint("$command ... ");
+     $start_time = time;
+ 
+@@ -1825,13 +1836,10 @@ sub run_command {
+ 
+     while (1) {
+ 	my $fp = \*CMD;
+-	if (defined($timeout)) {
+-	    doprint "timeout = $timeout\n";
+-	}
+ 	my $line = wait_for_input($fp, $timeout);
+ 	if (!defined($line)) {
+ 	    my $now = time;
+-	    if (defined($timeout) && (($now - $start_time) >= $timeout)) {
++	    if ($timeout >= 0 && (($now - $start_time) >= $timeout)) {
+ 		doprint "Hit timeout of $timeout, killing process\n";
+ 		$hit_timeout = 1;
+ 		kill 9, $pid;
+@@ -2004,6 +2012,11 @@ sub wait_for_input
+ 	$time = $timeout;
+     }
+ 
++    if ($time < 0) {
++	# Negative number means wait indefinitely
++	undef $time;
++    }
++
+     $rin = '';
+     vec($rin, fileno($fp), 1) = 1;
+     vec($rin, fileno(\*STDIN), 1) = 1;
+@@ -4283,6 +4296,9 @@ sub send_email {
+ }
+ 
+ sub cancel_test {
++    if ($monitor_cnt) {
++	end_monitor;
++    }
+     if ($email_when_canceled) {
+ 	my $name = get_test_name;
+         send_email("KTEST: Your [$name] test was cancelled",
+diff --git a/tools/testing/ktest/sample.conf b/tools/testing/ktest/sample.conf
+index 5e7d1d7297529..65957a9803b50 100644
+--- a/tools/testing/ktest/sample.conf
++++ b/tools/testing/ktest/sample.conf
+@@ -809,6 +809,11 @@
+ # is issued instead of a reboot.
+ # CONNECT_TIMEOUT = 25
+ 
++# The timeout in seconds for how long to wait for any running command
++# to timeout. If not defined, it will let it go indefinitely.
++# (default undefined)
++#RUN_TIMEOUT = 600
++
+ # In between tests, a reboot of the box may occur, and this
+ # is the time to wait for the console after it stops producing
+ # output. Some machines may not produce a large lag on reboot
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 1d91555333608..a845724e0906a 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -119,8 +119,6 @@ RESOLVE_BTFIDS := $(BUILD_DIR)/resolve_btfids/resolve_btfids
+ # NOTE: Semicolon at the end is critical to override lib.mk's default static
+ # rule for binaries.
+ $(notdir $(TEST_GEN_PROGS)						\
+-	 $(TEST_PROGS)							\
+-	 $(TEST_PROGS_EXTENDED)						\
+ 	 $(TEST_GEN_PROGS_EXTENDED)					\
+ 	 $(TEST_CUSTOM_PROGS)): %: $(OUTPUT)/% ;
+ 
+diff --git a/tools/testing/selftests/drivers/net/netdevsim/devlink.sh b/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
+index 16d2de18591d3..2c81e01c30b31 100755
+--- a/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
++++ b/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
+@@ -16,6 +16,18 @@ SYSFS_NET_DIR=/sys/bus/netdevsim/devices/$DEV_NAME/net/
+ DEBUGFS_DIR=/sys/kernel/debug/netdevsim/$DEV_NAME/
+ DL_HANDLE=netdevsim/$DEV_NAME
+ 
++wait_for_devlink()
++{
++	"$@" | grep -q $DL_HANDLE
++}
++
++devlink_wait()
++{
++	local timeout=$1
++
++	busywait "$timeout" wait_for_devlink devlink dev
++}
++
+ fw_flash_test()
+ {
+ 	RET=0
+@@ -255,6 +267,9 @@ netns_reload_test()
+ 	ip netns del testns2
+ 	ip netns del testns1
+ 
++	# Wait until netns async cleanup is done.
++	devlink_wait 2000
++
+ 	log_test "netns reload test"
+ }
+ 
+@@ -347,6 +362,9 @@ resource_test()
+ 	ip netns del testns2
+ 	ip netns del testns1
+ 
++	# Wait until netns async cleanup is done.
++	devlink_wait 2000
++
+ 	log_test "resource test"
+ }
+ 
+diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
+index 27a68bbe778be..d9b8127950771 100644
+--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
++++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
+@@ -42,7 +42,7 @@ test_event_enabled() {
+ 
+     while [ $check_times -ne 0 ]; do
+ 	e=`cat $EVENT_ENABLE`
+-	if [ "$e" == $val ]; then
++	if [ "$e" = $val ]; then
+ 	    return 0
+ 	fi
+ 	sleep $SLEEP_TIME
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index 0f3bf90e04d36..8f42e17db5d09 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -1773,6 +1773,8 @@ EOF
+ ################################################################################
+ # main
+ 
++trap cleanup EXIT
++
+ while getopts :t:pPhv o
+ do
+ 	case $o in
+diff --git a/tools/testing/selftests/net/udpgso_bench_rx.c b/tools/testing/selftests/net/udpgso_bench_rx.c
+index 4058c7451e70d..f35a924d4a303 100644
+--- a/tools/testing/selftests/net/udpgso_bench_rx.c
++++ b/tools/testing/selftests/net/udpgso_bench_rx.c
+@@ -214,11 +214,10 @@ static void do_verify_udp(const char *data, int len)
+ 
+ static int recv_msg(int fd, char *buf, int len, int *gso_size)
+ {
+-	char control[CMSG_SPACE(sizeof(uint16_t))] = {0};
++	char control[CMSG_SPACE(sizeof(int))] = {0};
+ 	struct msghdr msg = {0};
+ 	struct iovec iov = {0};
+ 	struct cmsghdr *cmsg;
+-	uint16_t *gsosizeptr;
+ 	int ret;
+ 
+ 	iov.iov_base = buf;
+@@ -237,8 +236,7 @@ static int recv_msg(int fd, char *buf, int len, int *gso_size)
+ 		     cmsg = CMSG_NXTHDR(&msg, cmsg)) {
+ 			if (cmsg->cmsg_level == SOL_UDP
+ 			    && cmsg->cmsg_type == UDP_GRO) {
+-				gsosizeptr = (uint16_t *) CMSG_DATA(cmsg);
+-				*gso_size = *gsosizeptr;
++				*gso_size = *(int *)CMSG_DATA(cmsg);
+ 				break;
+ 			}
+ 		}
+diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
+index d5bebb37238c0..ce6f3b916ef9d 100644
+--- a/virt/kvm/coalesced_mmio.c
++++ b/virt/kvm/coalesced_mmio.c
+@@ -187,15 +187,17 @@ int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
+ 			r = kvm_io_bus_unregister_dev(kvm,
+ 				zone->pio ? KVM_PIO_BUS : KVM_MMIO_BUS, &dev->dev);
+ 
++			kvm_iodevice_destructor(&dev->dev);
++
+ 			/*
+ 			 * On failure, unregister destroys all devices on the
+ 			 * bus _except_ the target device, i.e. coalesced_zones
+-			 * has been modified.  No need to restart the walk as
+-			 * there aren't any zones left.
++			 * has been modified.  Bail after destroying the target
++			 * device, there's no need to restart the walk as there
++			 * aren't any zones left.
+ 			 */
+ 			if (r)
+ 				break;
+-			kvm_iodevice_destructor(&dev->dev);
+ 		}
+ 	}
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-03-13 11:32 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2023-03-13 11:32 UTC (permalink / raw
  To: gentoo-commits

commit:     79406fd33a673f686fb56a7e85919ed3eaf58177
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 13 10:45:50 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Mon Mar 13 10:45:50 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=79406fd3

Linux patch 5.10.174

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |   4 ++
 1173_linux-5.10.174.patch | 104 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 108 insertions(+)

diff --git a/0000_README b/0000_README
index e4325784..1f64c809 100644
--- a/0000_README
+++ b/0000_README
@@ -735,6 +735,10 @@ Patch:  1172_linux-5.10.173.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.173
 
+Patch:  1173_linux-5.10.174.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.174
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1173_linux-5.10.174.patch b/1173_linux-5.10.174.patch
new file mode 100644
index 00000000..91805d21
--- /dev/null
+++ b/1173_linux-5.10.174.patch
@@ -0,0 +1,104 @@
+diff --git a/Makefile b/Makefile
+index 1a6ea79940797..92accf2ddc089 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 173
++SUBLEVEL = 174
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_dm.c b/drivers/staging/rtl8192e/rtl8192e/rtl_dm.c
+index 462835684e8b0..916ff5058ae79 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_dm.c
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_dm.c
+@@ -185,7 +185,6 @@ static void _rtl92e_dm_init_fsync(struct net_device *dev);
+ static void _rtl92e_dm_deinit_fsync(struct net_device *dev);
+ 
+ static	void _rtl92e_dm_check_txrateandretrycount(struct net_device *dev);
+-static  void _rtl92e_dm_check_ac_dc_power(struct net_device *dev);
+ static void _rtl92e_dm_check_fsync(struct net_device *dev);
+ static void _rtl92e_dm_check_rf_ctrl_gpio(void *data);
+ static void _rtl92e_dm_fsync_timer_callback(struct timer_list *t);
+@@ -238,8 +237,6 @@ void rtl92e_dm_watchdog(struct net_device *dev)
+ 	if (priv->being_init_adapter)
+ 		return;
+ 
+-	_rtl92e_dm_check_ac_dc_power(dev);
+-
+ 	_rtl92e_dm_check_txrateandretrycount(dev);
+ 	_rtl92e_dm_check_edca_turbo(dev);
+ 
+@@ -257,30 +254,6 @@ void rtl92e_dm_watchdog(struct net_device *dev)
+ 	_rtl92e_dm_cts_to_self(dev);
+ }
+ 
+-static void _rtl92e_dm_check_ac_dc_power(struct net_device *dev)
+-{
+-	struct r8192_priv *priv = rtllib_priv(dev);
+-	static char const ac_dc_script[] = "/etc/acpi/wireless-rtl-ac-dc-power.sh";
+-	char *argv[] = {(char *)ac_dc_script, DRV_NAME, NULL};
+-	static char *envp[] = {"HOME=/",
+-			"TERM=linux",
+-			"PATH=/usr/bin:/bin",
+-			 NULL};
+-
+-	if (priv->ResetProgress == RESET_TYPE_SILENT) {
+-		RT_TRACE((COMP_INIT | COMP_POWER | COMP_RF),
+-			 "GPIOChangeRFWorkItemCallBack(): Silent Reset!!!!!!!\n");
+-		return;
+-	}
+-
+-	if (priv->rtllib->state != RTLLIB_LINKED)
+-		return;
+-	call_usermodehelper(ac_dc_script, argv, envp, UMH_WAIT_PROC);
+-
+-	return;
+-};
+-
+-
+ void rtl92e_init_adaptive_rate(struct net_device *dev)
+ {
+ 
+@@ -1800,10 +1773,6 @@ static void _rtl92e_dm_check_rf_ctrl_gpio(void *data)
+ 	u8 tmp1byte;
+ 	enum rt_rf_power_state eRfPowerStateToSet;
+ 	bool bActuallySet = false;
+-	char *argv[3];
+-	static char const RadioPowerPath[] = "/etc/acpi/events/RadioPower.sh";
+-	static char *envp[] = {"HOME=/", "TERM=linux", "PATH=/usr/bin:/bin",
+-			       NULL};
+ 
+ 	bActuallySet = false;
+ 
+@@ -1835,14 +1804,6 @@ static void _rtl92e_dm_check_rf_ctrl_gpio(void *data)
+ 		mdelay(1000);
+ 		priv->bHwRfOffAction = 1;
+ 		rtl92e_set_rf_state(dev, eRfPowerStateToSet, RF_CHANGE_BY_HW);
+-		if (priv->bHwRadioOff)
+-			argv[1] = "RFOFF";
+-		else
+-			argv[1] = "RFON";
+-
+-		argv[0] = (char *)RadioPowerPath;
+-		argv[2] = NULL;
+-		call_usermodehelper(RadioPowerPath, argv, envp, UMH_WAIT_PROC);
+ 	}
+ }
+ 
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index f4d98ed8fa313..f7e2e172a68df 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -1264,8 +1264,6 @@ int cfg80211_connect(struct cfg80211_registered_device *rdev,
+ 		connect->key = NULL;
+ 		connect->key_len = 0;
+ 		connect->key_idx = 0;
+-		connect->crypto.cipher_group = 0;
+-		connect->crypto.n_ciphers_pairwise = 0;
+ 	}
+ 
+ 	wdev->connect_keys = connkeys;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-03-17 10:45 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-03-17 10:45 UTC (permalink / raw
  To: gentoo-commits

commit:     1dec275e031329245573fdc49d8cc7939df6096e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 17 10:44:56 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar 17 10:44:56 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1dec275e

Linux patch 5.10.175

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1174_linux-5.10.175.patch | 3774 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3778 insertions(+)

diff --git a/0000_README b/0000_README
index 1f64c809..f3c2cbaf 100644
--- a/0000_README
+++ b/0000_README
@@ -739,6 +739,10 @@ Patch:  1173_linux-5.10.174.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.174
 
+Patch:  1174_linux-5.10.175.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.175
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1174_linux-5.10.175.patch b/1174_linux-5.10.175.patch
new file mode 100644
index 00000000..20e8ab4e
--- /dev/null
+++ b/1174_linux-5.10.175.patch
@@ -0,0 +1,3774 @@
+diff --git a/Makefile b/Makefile
+index 92accf2ddc089..e6b09052f222b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 174
++SUBLEVEL = 175
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/alpha/kernel/module.c b/arch/alpha/kernel/module.c
+index 5b60c248de9ea..cbefa5a773846 100644
+--- a/arch/alpha/kernel/module.c
++++ b/arch/alpha/kernel/module.c
+@@ -146,10 +146,8 @@ apply_relocate_add(Elf64_Shdr *sechdrs, const char *strtab,
+ 	base = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr;
+ 	symtab = (Elf64_Sym *)sechdrs[symindex].sh_addr;
+ 
+-	/* The small sections were sorted to the end of the segment.
+-	   The following should definitely cover them.  */
+-	gp = (u64)me->core_layout.base + me->core_layout.size - 0x8000;
+ 	got = sechdrs[me->arch.gotsecindex].sh_addr;
++	gp = got + 0x8000;
+ 
+ 	for (i = 0; i < n; i++) {
+ 		unsigned long r_sym = ELF64_R_SYM (rela[i].r_info);
+diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
+index 16892f0d05ad6..538b6a1b198b9 100644
+--- a/arch/arm64/include/asm/efi.h
++++ b/arch/arm64/include/asm/efi.h
+@@ -25,7 +25,7 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
+ ({									\
+ 	efi_virtmap_load();						\
+ 	__efi_fpsimd_begin();						\
+-	spin_lock(&efi_rt_lock);					\
++	raw_spin_lock(&efi_rt_lock);					\
+ })
+ 
+ #define arch_efi_call_virt(p, f, args...)				\
+@@ -37,12 +37,12 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
+ 
+ #define arch_efi_call_virt_teardown()					\
+ ({									\
+-	spin_unlock(&efi_rt_lock);					\
++	raw_spin_unlock(&efi_rt_lock);					\
+ 	__efi_fpsimd_end();						\
+ 	efi_virtmap_unload();						\
+ })
+ 
+-extern spinlock_t efi_rt_lock;
++extern raw_spinlock_t efi_rt_lock;
+ efi_status_t __efi_rt_asm_wrapper(void *, const char *, ...);
+ 
+ #define ARCH_EFI_IRQ_FLAGS_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
+index 72f432d23ec5c..3ee3b3daca47b 100644
+--- a/arch/arm64/kernel/efi.c
++++ b/arch/arm64/kernel/efi.c
+@@ -144,7 +144,7 @@ asmlinkage efi_status_t efi_handle_corrupted_x18(efi_status_t s, const char *f)
+ 	return s;
+ }
+ 
+-DEFINE_SPINLOCK(efi_rt_lock);
++DEFINE_RAW_SPINLOCK(efi_rt_lock);
+ 
+ asmlinkage u64 *efi_rt_stack_top __ro_after_init;
+ 
+diff --git a/arch/mips/include/asm/mach-rc32434/pci.h b/arch/mips/include/asm/mach-rc32434/pci.h
+index 9a6eefd127571..3eb767c8a4eec 100644
+--- a/arch/mips/include/asm/mach-rc32434/pci.h
++++ b/arch/mips/include/asm/mach-rc32434/pci.h
+@@ -374,7 +374,7 @@ struct pci_msu {
+ 				 PCI_CFG04_STAT_SSE | \
+ 				 PCI_CFG04_STAT_PE)
+ 
+-#define KORINA_CNFG1		((KORINA_STAT<<16)|KORINA_CMD)
++#define KORINA_CNFG1		(KORINA_STAT | KORINA_CMD)
+ 
+ #define KORINA_REVID		0
+ #define KORINA_CLASS_CODE	0
+diff --git a/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts b/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
+index 73f8c998c64df..d4f5f159d6f23 100644
+--- a/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
++++ b/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
+@@ -10,7 +10,6 @@
+ 
+ / {
+ 	model = "fsl,T1040RDB-REV-A";
+-	compatible = "fsl,T1040RDB-REV-A";
+ };
+ 
+ &seville_port0 {
+diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
+index 1d20f0f77a920..ba9b54d35f570 100644
+--- a/arch/powerpc/kernel/time.c
++++ b/arch/powerpc/kernel/time.c
+@@ -436,7 +436,7 @@ void vtime_flush(struct task_struct *tsk)
+ #define calc_cputime_factors()
+ #endif
+ 
+-void __delay(unsigned long loops)
++void __no_kcsan __delay(unsigned long loops)
+ {
+ 	unsigned long start;
+ 
+@@ -457,7 +457,7 @@ void __delay(unsigned long loops)
+ }
+ EXPORT_SYMBOL(__delay);
+ 
+-void udelay(unsigned long usecs)
++void __no_kcsan udelay(unsigned long usecs)
+ {
+ 	__delay(tb_ticks_per_usec * usecs);
+ }
+diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
+index 4a1f494ef03f3..fabe6cf10bd24 100644
+--- a/arch/powerpc/kernel/vmlinux.lds.S
++++ b/arch/powerpc/kernel/vmlinux.lds.S
+@@ -8,6 +8,7 @@
+ #define BSS_FIRST_SECTIONS *(.bss.prominit)
+ #define EMITS_PT_NOTE
+ #define RO_EXCEPTION_TABLE_ALIGN	0
++#define RUNTIME_DISCARD_EXIT
+ 
+ #include <asm/page.h>
+ #include <asm-generic/vmlinux.lds.h>
+@@ -378,9 +379,12 @@ SECTIONS
+ 	DISCARDS
+ 	/DISCARD/ : {
+ 		*(*.EMB.apuinfo)
+-		*(.glink .iplt .plt .rela* .comment)
++		*(.glink .iplt .plt .comment)
+ 		*(.gnu.version*)
+ 		*(.gnu.attributes)
+ 		*(.eh_frame)
++#ifndef CONFIG_RELOCATABLE
++		*(.rela*)
++#endif
+ 	}
+ }
+diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
+index 04dad33800418..bc745900c1631 100644
+--- a/arch/riscv/include/asm/ftrace.h
++++ b/arch/riscv/include/asm/ftrace.h
+@@ -83,6 +83,6 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
+ #define ftrace_init_nop ftrace_init_nop
+ #endif
+ 
+-#endif
++#endif /* CONFIG_DYNAMIC_FTRACE */
+ 
+ #endif /* _ASM_RISCV_FTRACE_H */
+diff --git a/arch/riscv/include/asm/parse_asm.h b/arch/riscv/include/asm/parse_asm.h
+index 7fee806805c1b..ad254da85e615 100644
+--- a/arch/riscv/include/asm/parse_asm.h
++++ b/arch/riscv/include/asm/parse_asm.h
+@@ -3,6 +3,9 @@
+  * Copyright (C) 2020 SiFive
+  */
+ 
++#ifndef _ASM_RISCV_INSN_H
++#define _ASM_RISCV_INSN_H
++
+ #include <linux/bits.h>
+ 
+ /* The bit field of immediate value in I-type instruction */
+@@ -217,3 +220,5 @@ static inline bool is_ ## INSN_NAME ## _insn(long insn) \
+ 	(RVC_X(x_, RVC_B_IMM_5_OPOFF, RVC_B_IMM_5_MASK) << RVC_B_IMM_5_OFF) | \
+ 	(RVC_X(x_, RVC_B_IMM_7_6_OPOFF, RVC_B_IMM_7_6_MASK) << RVC_B_IMM_7_6_OFF) | \
+ 	(RVC_IMM_SIGN(x_) << RVC_B_IMM_SIGN_OFF); })
++
++#endif /* _ASM_RISCV_INSN_H */
+diff --git a/arch/riscv/include/asm/patch.h b/arch/riscv/include/asm/patch.h
+index 9a7d7346001ee..98d9de07cba17 100644
+--- a/arch/riscv/include/asm/patch.h
++++ b/arch/riscv/include/asm/patch.h
+@@ -9,4 +9,6 @@
+ int patch_text_nosync(void *addr, const void *insns, size_t len);
+ int patch_text(void *addr, u32 insn);
+ 
++extern int riscv_patch_in_stop_machine;
++
+ #endif /* _ASM_RISCV_PATCH_H */
+diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
+index 765b62434f303..8693dfcffb022 100644
+--- a/arch/riscv/kernel/ftrace.c
++++ b/arch/riscv/kernel/ftrace.c
+@@ -15,11 +15,21 @@
+ int ftrace_arch_code_modify_prepare(void) __acquires(&text_mutex)
+ {
+ 	mutex_lock(&text_mutex);
++
++	/*
++	 * The code sequences we use for ftrace can't be patched while the
++	 * kernel is running, so we need to use stop_machine() to modify them
++	 * for now.  This doesn't play nice with text_mutex, we use this flag
++	 * to elide the check.
++	 */
++	riscv_patch_in_stop_machine = true;
++
+ 	return 0;
+ }
+ 
+ int ftrace_arch_code_modify_post_process(void) __releases(&text_mutex)
+ {
++	riscv_patch_in_stop_machine = false;
+ 	mutex_unlock(&text_mutex);
+ 	return 0;
+ }
+@@ -109,9 +119,9 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
+ {
+ 	int out;
+ 
+-	ftrace_arch_code_modify_prepare();
++	mutex_lock(&text_mutex);
+ 	out = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
+-	ftrace_arch_code_modify_post_process();
++	mutex_unlock(&text_mutex);
+ 
+ 	return out;
+ }
+diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
+index 1612e11f7bf6d..c3fced410e742 100644
+--- a/arch/riscv/kernel/patch.c
++++ b/arch/riscv/kernel/patch.c
+@@ -11,6 +11,7 @@
+ #include <asm/kprobes.h>
+ #include <asm/cacheflush.h>
+ #include <asm/fixmap.h>
++#include <asm/ftrace.h>
+ #include <asm/patch.h>
+ 
+ struct patch_insn {
+@@ -19,6 +20,8 @@ struct patch_insn {
+ 	atomic_t cpu_count;
+ };
+ 
++int riscv_patch_in_stop_machine = false;
++
+ #ifdef CONFIG_MMU
+ static void *patch_map(void *addr, int fixmap)
+ {
+@@ -55,8 +58,15 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
+ 	 * Before reaching here, it was expected to lock the text_mutex
+ 	 * already, so we don't need to give another lock here and could
+ 	 * ensure that it was safe between each cores.
++	 *
++	 * We're currently using stop_machine() for ftrace & kprobes, and while
++	 * that ensures text_mutex is held before installing the mappings it
++	 * does not ensure text_mutex is held by the calling thread.  That's
++	 * safe but triggers a lockdep failure, so just elide it for that
++	 * specific case.
+ 	 */
+-	lockdep_assert_held(&text_mutex);
++	if (!riscv_patch_in_stop_machine)
++		lockdep_assert_held(&text_mutex);
+ 
+ 	if (across_pages)
+ 		patch_map(addr + len, FIX_TEXT_POKE1);
+@@ -117,13 +127,25 @@ NOKPROBE_SYMBOL(patch_text_cb);
+ 
+ int patch_text(void *addr, u32 insn)
+ {
++	int ret;
+ 	struct patch_insn patch = {
+ 		.addr = addr,
+ 		.insn = insn,
+ 		.cpu_count = ATOMIC_INIT(0),
+ 	};
+ 
+-	return stop_machine_cpuslocked(patch_text_cb,
+-				       &patch, cpu_online_mask);
++	/*
++	 * kprobes takes text_mutex, before calling patch_text(), but as we call
++	 * calls stop_machine(), the lockdep assertion in patch_insn_write()
++	 * gets confused by the context in which the lock is taken.
++	 * Instead, ensure the lock is held before calling stop_machine(), and
++	 * set riscv_patch_in_stop_machine to skip the check in
++	 * patch_insn_write().
++	 */
++	lockdep_assert_held(&text_mutex);
++	riscv_patch_in_stop_machine = true;
++	ret = stop_machine_cpuslocked(patch_text_cb, &patch, cpu_online_mask);
++	riscv_patch_in_stop_machine = false;
++	return ret;
+ }
+ NOKPROBE_SYMBOL(patch_text);
+diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
+index 1e53fbe5eb783..9c34735c1e771 100644
+--- a/arch/riscv/kernel/stacktrace.c
++++ b/arch/riscv/kernel/stacktrace.c
+@@ -96,7 +96,7 @@ void notrace walk_stackframe(struct task_struct *task,
+ 	while (!kstack_end(ksp)) {
+ 		if (__kernel_text_address(pc) && unlikely(fn(pc, arg)))
+ 			break;
+-		pc = (*ksp++) - 0x4;
++		pc = READ_ONCE_NOCHECK(*ksp++) - 0x4;
+ 	}
+ }
+ 
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index 23fe03ca7ec7b..227253fde33c4 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -31,25 +31,29 @@ void die(struct pt_regs *regs, const char *str)
+ {
+ 	static int die_counter;
+ 	int ret;
++	long cause;
++	unsigned long flags;
+ 
+ 	oops_enter();
+ 
+-	spin_lock_irq(&die_lock);
++	spin_lock_irqsave(&die_lock, flags);
+ 	console_verbose();
+ 	bust_spinlocks(1);
+ 
+ 	pr_emerg("%s [#%d]\n", str, ++die_counter);
+ 	print_modules();
+-	show_regs(regs);
++	if (regs)
++		show_regs(regs);
+ 
+-	ret = notify_die(DIE_OOPS, str, regs, 0, regs->cause, SIGSEGV);
++	cause = regs ? regs->cause : -1;
++	ret = notify_die(DIE_OOPS, str, regs, 0, cause, SIGSEGV);
+ 
+-	if (regs && kexec_should_crash(current))
++	if (kexec_should_crash(current))
+ 		crash_kexec(regs);
+ 
+ 	bust_spinlocks(0);
+ 	add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+-	spin_unlock_irq(&die_lock);
++	spin_unlock_irqrestore(&die_lock, flags);
+ 	oops_exit();
+ 
+ 	if (in_interrupt())
+diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
+index d7291eb0d0c07..1c65c38ec9a3e 100644
+--- a/arch/s390/kernel/vmlinux.lds.S
++++ b/arch/s390/kernel/vmlinux.lds.S
+@@ -15,6 +15,8 @@
+ /* Handle ro_after_init data on our own. */
+ #define RO_AFTER_INIT_DATA
+ 
++#define RUNTIME_DISCARD_EXIT
++
+ #define EMITS_PT_NOTE
+ 
+ #include <asm-generic/vmlinux.lds.h>
+diff --git a/arch/sh/kernel/vmlinux.lds.S b/arch/sh/kernel/vmlinux.lds.S
+index 3161b9ccd2a57..b6276a3521d73 100644
+--- a/arch/sh/kernel/vmlinux.lds.S
++++ b/arch/sh/kernel/vmlinux.lds.S
+@@ -4,6 +4,7 @@
+  * Written by Niibe Yutaka and Paul Mundt
+  */
+ OUTPUT_ARCH(sh)
++#define RUNTIME_DISCARD_EXIT
+ #include <asm/thread_info.h>
+ #include <asm/cache.h>
+ #include <asm/vmlinux.lds.h>
+diff --git a/arch/um/kernel/vmlinux.lds.S b/arch/um/kernel/vmlinux.lds.S
+index 16e49bfa2b426..53d719c04ba94 100644
+--- a/arch/um/kernel/vmlinux.lds.S
++++ b/arch/um/kernel/vmlinux.lds.S
+@@ -1,4 +1,4 @@
+-
++#define RUNTIME_DISCARD_EXIT
+ KERNEL_STACK_SIZE = 4096 * (1 << CONFIG_KERNEL_STACK_ORDER);
+ 
+ #ifdef CONFIG_LD_SCRIPT_STATIC
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index ec3fa4dc90318..89a9b77544765 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -932,6 +932,15 @@ void init_spectral_chicken(struct cpuinfo_x86 *c)
+ 		}
+ 	}
+ #endif
++	/*
++	 * Work around Erratum 1386.  The XSAVES instruction malfunctions in
++	 * certain circumstances on Zen1/2 uarch, and not all parts have had
++	 * updated microcode at the time of writing (March 2023).
++	 *
++	 * Affected parts all have no supervisor XSAVE states, meaning that
++	 * the XSAVEC instruction (which works fine) is equivalent.
++	 */
++	clear_cpu_cap(c, X86_FEATURE_XSAVES);
+ }
+ 
+ static void init_amd_zn(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
+index 011929a638230..9180155d5d89c 100644
+--- a/arch/x86/kvm/vmx/evmcs.h
++++ b/arch/x86/kvm/vmx/evmcs.h
+@@ -166,16 +166,6 @@ static inline u16 evmcs_read16(unsigned long field)
+ 	return *(u16 *)((char *)current_evmcs + offset);
+ }
+ 
+-static inline void evmcs_touch_msr_bitmap(void)
+-{
+-	if (unlikely(!current_evmcs))
+-		return;
+-
+-	if (current_evmcs->hv_enlightenments_control.msr_bitmap)
+-		current_evmcs->hv_clean_fields &=
+-			~HV_VMX_ENLIGHTENED_CLEAN_FIELD_MSR_BITMAP;
+-}
+-
+ static inline void evmcs_load(u64 phys_addr)
+ {
+ 	struct hv_vp_assist_page *vp_ap =
+@@ -196,7 +186,6 @@ static inline u64 evmcs_read64(unsigned long field) { return 0; }
+ static inline u32 evmcs_read32(unsigned long field) { return 0; }
+ static inline u16 evmcs_read16(unsigned long field) { return 0; }
+ static inline void evmcs_load(u64 phys_addr) {}
+-static inline void evmcs_touch_msr_bitmap(void) {}
+ #endif /* IS_ENABLED(CONFIG_HYPERV) */
+ 
+ enum nested_evmptrld_status {
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index c37cbd3fdd852..2c5d8b9f9873f 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -2725,15 +2725,6 @@ int alloc_loaded_vmcs(struct loaded_vmcs *loaded_vmcs)
+ 		if (!loaded_vmcs->msr_bitmap)
+ 			goto out_vmcs;
+ 		memset(loaded_vmcs->msr_bitmap, 0xff, PAGE_SIZE);
+-
+-		if (IS_ENABLED(CONFIG_HYPERV) &&
+-		    static_branch_unlikely(&enable_evmcs) &&
+-		    (ms_hyperv.nested_features & HV_X64_NESTED_MSR_BITMAP)) {
+-			struct hv_enlightened_vmcs *evmcs =
+-				(struct hv_enlightened_vmcs *)loaded_vmcs->vmcs;
+-
+-			evmcs->hv_enlightenments_control.msr_bitmap = 1;
+-		}
+ 	}
+ 
+ 	memset(&loaded_vmcs->host_state, 0, sizeof(struct vmcs_host_state));
+@@ -3794,6 +3785,22 @@ static void vmx_set_msr_bitmap_write(ulong *msr_bitmap, u32 msr)
+ 		__set_bit(msr & 0x1fff, msr_bitmap + 0xc00 / f);
+ }
+ 
++static void vmx_msr_bitmap_l01_changed(struct vcpu_vmx *vmx)
++{
++	/*
++	 * When KVM is a nested hypervisor on top of Hyper-V and uses
++	 * 'Enlightened MSR Bitmap' feature L0 needs to know that MSR
++	 * bitmap has changed.
++	 */
++	if (IS_ENABLED(CONFIG_HYPERV) && static_branch_unlikely(&enable_evmcs)) {
++		struct hv_enlightened_vmcs *evmcs = (void *)vmx->vmcs01.vmcs;
++
++		if (evmcs->hv_enlightenments_control.msr_bitmap)
++			evmcs->hv_clean_fields &=
++				~HV_VMX_ENLIGHTENED_CLEAN_FIELD_MSR_BITMAP;
++	}
++}
++
+ static __always_inline void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu,
+ 							  u32 msr, int type)
+ {
+@@ -3803,8 +3810,7 @@ static __always_inline void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu,
+ 	if (!cpu_has_vmx_msr_bitmap())
+ 		return;
+ 
+-	if (static_branch_unlikely(&enable_evmcs))
+-		evmcs_touch_msr_bitmap();
++	vmx_msr_bitmap_l01_changed(vmx);
+ 
+ 	/*
+ 	 * Mark the desired intercept state in shadow bitmap, this is needed
+@@ -3849,8 +3855,7 @@ static __always_inline void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu,
+ 	if (!cpu_has_vmx_msr_bitmap())
+ 		return;
+ 
+-	if (static_branch_unlikely(&enable_evmcs))
+-		evmcs_touch_msr_bitmap();
++	vmx_msr_bitmap_l01_changed(vmx);
+ 
+ 	/*
+ 	 * Mark the desired intercept state in shadow bitmap, this is needed
+@@ -7029,6 +7034,19 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
+ 	if (err < 0)
+ 		goto free_pml;
+ 
++	/*
++	 * Use Hyper-V 'Enlightened MSR Bitmap' feature when KVM runs as a
++	 * nested (L1) hypervisor and Hyper-V in L0 supports it. Enable the
++	 * feature only for vmcs01, KVM currently isn't equipped to realize any
++	 * performance benefits from enabling it for vmcs02.
++	 */
++	if (IS_ENABLED(CONFIG_HYPERV) && static_branch_unlikely(&enable_evmcs) &&
++	    (ms_hyperv.nested_features & HV_X64_NESTED_MSR_BITMAP)) {
++		struct hv_enlightened_vmcs *evmcs = (void *)vmx->vmcs01.vmcs;
++
++		evmcs->hv_enlightenments_control.msr_bitmap = 1;
++	}
++
+ 	/* The MSR bitmap starts with all ones */
+ 	bitmap_fill(vmx->shadow_msr_intercept.read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
+ 	bitmap_fill(vmx->shadow_msr_intercept.write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index badb90352bf33..1f9ccc661d574 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -705,15 +705,15 @@ static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ 				     struct bfq_io_cq *bic,
+ 				     struct bfq_group *bfqg)
+ {
+-	struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
+-	struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
++	struct bfq_queue *async_bfqq = bic_to_bfqq(bic, false);
++	struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, true);
+ 	struct bfq_entity *entity;
+ 
+ 	if (async_bfqq) {
+ 		entity = &async_bfqq->entity;
+ 
+ 		if (entity->sched_data != &bfqg->sched_data) {
+-			bic_set_bfqq(bic, NULL, 0);
++			bic_set_bfqq(bic, NULL, false);
+ 			bfq_release_process_ref(bfqd, async_bfqq);
+ 		}
+ 	}
+@@ -748,8 +748,8 @@ static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ 				 * request from the old cgroup.
+ 				 */
+ 				bfq_put_cooperator(sync_bfqq);
++				bic_set_bfqq(bic, NULL, true);
+ 				bfq_release_process_ref(bfqd, sync_bfqq);
+-				bic_set_bfqq(bic, NULL, 1);
+ 			}
+ 		}
+ 	}
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 7c4b8d0635ebd..6687b805bab3b 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -373,6 +373,12 @@ struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync)
+ 
+ void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync)
+ {
++	struct bfq_queue *old_bfqq = bic->bfqq[is_sync];
++
++	/* Clear bic pointer if bfqq is detached from this bic */
++	if (old_bfqq && old_bfqq->bic == bic)
++		old_bfqq->bic = NULL;
++
+ 	bic->bfqq[is_sync] = bfqq;
+ }
+ 
+@@ -2810,7 +2816,7 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+ 	/*
+ 	 * Merge queues (that is, let bic redirect its requests to new_bfqq)
+ 	 */
+-	bic_set_bfqq(bic, new_bfqq, 1);
++	bic_set_bfqq(bic, new_bfqq, true);
+ 	bfq_mark_bfqq_coop(new_bfqq);
+ 	/*
+ 	 * new_bfqq now belongs to at least two bics (it is a shared queue):
+@@ -4977,9 +4983,8 @@ static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync)
+ 		unsigned long flags;
+ 
+ 		spin_lock_irqsave(&bfqd->lock, flags);
+-		bfqq->bic = NULL;
+-		bfq_exit_bfqq(bfqd, bfqq);
+ 		bic_set_bfqq(bic, NULL, is_sync);
++		bfq_exit_bfqq(bfqd, bfqq);
+ 		spin_unlock_irqrestore(&bfqd->lock, flags);
+ 	}
+ }
+@@ -5065,9 +5070,11 @@ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio)
+ 
+ 	bfqq = bic_to_bfqq(bic, false);
+ 	if (bfqq) {
+-		bfq_release_process_ref(bfqd, bfqq);
+-		bfqq = bfq_get_queue(bfqd, bio, BLK_RW_ASYNC, bic);
++		struct bfq_queue *old_bfqq = bfqq;
++
++		bfqq = bfq_get_queue(bfqd, bio, false, bic);
+ 		bic_set_bfqq(bic, bfqq, false);
++		bfq_release_process_ref(bfqd, old_bfqq);
+ 	}
+ 
+ 	bfqq = bic_to_bfqq(bic, true);
+@@ -6009,7 +6016,7 @@ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
+ 		return bfqq;
+ 	}
+ 
+-	bic_set_bfqq(bic, NULL, 1);
++	bic_set_bfqq(bic, NULL, true);
+ 
+ 	bfq_put_cooperator(bfqq);
+ 
+diff --git a/drivers/char/ipmi/ipmi_watchdog.c b/drivers/char/ipmi/ipmi_watchdog.c
+index 92eda5b2f1341..883b4a3410122 100644
+--- a/drivers/char/ipmi/ipmi_watchdog.c
++++ b/drivers/char/ipmi/ipmi_watchdog.c
+@@ -503,7 +503,7 @@ static void panic_halt_ipmi_heartbeat(void)
+ 	msg.cmd = IPMI_WDOG_RESET_TIMER;
+ 	msg.data = NULL;
+ 	msg.data_len = 0;
+-	atomic_add(1, &panic_done_count);
++	atomic_add(2, &panic_done_count);
+ 	rv = ipmi_request_supply_msgs(watchdog_user,
+ 				      (struct ipmi_addr *) &addr,
+ 				      0,
+@@ -513,7 +513,7 @@ static void panic_halt_ipmi_heartbeat(void)
+ 				      &panic_halt_heartbeat_recv_msg,
+ 				      1);
+ 	if (rv)
+-		atomic_sub(1, &panic_done_count);
++		atomic_sub(2, &panic_done_count);
+ }
+ 
+ static struct ipmi_smi_msg panic_halt_smi_msg = {
+@@ -537,12 +537,12 @@ static void panic_halt_ipmi_set_timeout(void)
+ 	/* Wait for the messages to be free. */
+ 	while (atomic_read(&panic_done_count) != 0)
+ 		ipmi_poll_interface(watchdog_user);
+-	atomic_add(1, &panic_done_count);
++	atomic_add(2, &panic_done_count);
+ 	rv = __ipmi_set_timeout(&panic_halt_smi_msg,
+ 				&panic_halt_recv_msg,
+ 				&send_heartbeat_now);
+ 	if (rv) {
+-		atomic_sub(1, &panic_done_count);
++		atomic_sub(2, &panic_done_count);
+ 		pr_warn("Unable to extend the watchdog timeout\n");
+ 	} else {
+ 		if (send_heartbeat_now)
+diff --git a/drivers/char/tpm/eventlog/acpi.c b/drivers/char/tpm/eventlog/acpi.c
+index 0913d3eb8d518..cd266021d0103 100644
+--- a/drivers/char/tpm/eventlog/acpi.c
++++ b/drivers/char/tpm/eventlog/acpi.c
+@@ -143,8 +143,12 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
+ 
+ 	ret = -EIO;
+ 	virt = acpi_os_map_iomem(start, len);
+-	if (!virt)
++	if (!virt) {
++		dev_warn(&chip->dev, "%s: Failed to map ACPI memory\n", __func__);
++		/* try EFI log next */
++		ret = -ENODEV;
+ 		goto err;
++	}
+ 
+ 	memcpy_fromio(log->bios_event_log, virt, len);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index 7212b9900e0ab..994e6635b8347 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -382,8 +382,9 @@ static int soc15_read_register(struct amdgpu_device *adev, u32 se_num,
+ 	*value = 0;
+ 	for (i = 0; i < ARRAY_SIZE(soc15_allowed_read_registers); i++) {
+ 		en = &soc15_allowed_read_registers[i];
+-		if (adev->reg_offset[en->hwip][en->inst] &&
+-			reg_offset != (adev->reg_offset[en->hwip][en->inst][en->seg]
++		if (!adev->reg_offset[en->hwip][en->inst])
++			continue;
++		else if (reg_offset != (adev->reg_offset[en->hwip][en->inst][en->seg]
+ 					+ en->reg_offset))
+ 			continue;
+ 
+diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
+index 58527f151984c..98b659981f1ad 100644
+--- a/drivers/gpu/drm/drm_atomic.c
++++ b/drivers/gpu/drm/drm_atomic.c
+@@ -1010,6 +1010,7 @@ static void drm_atomic_connector_print_state(struct drm_printer *p,
+ 	drm_printf(p, "connector[%u]: %s\n", connector->base.id, connector->name);
+ 	drm_printf(p, "\tcrtc=%s\n", state->crtc ? state->crtc->name : "(null)");
+ 	drm_printf(p, "\tself_refresh_aware=%d\n", state->self_refresh_aware);
++	drm_printf(p, "\tmax_requested_bpc=%d\n", state->max_requested_bpc);
+ 
+ 	if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
+ 		if (state->writeback_job && state->writeback_job->fb)
+diff --git a/drivers/gpu/drm/i915/gt/intel_ring.c b/drivers/gpu/drm/i915/gt/intel_ring.c
+index 4034a4bac7f08..69b2e5509d678 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ring.c
++++ b/drivers/gpu/drm/i915/gt/intel_ring.c
+@@ -49,7 +49,7 @@ int intel_ring_pin(struct intel_ring *ring, struct i915_gem_ww_ctx *ww)
+ 	if (unlikely(ret))
+ 		goto err_unpin;
+ 
+-	if (i915_vma_is_map_and_fenceable(vma))
++	if (i915_vma_is_map_and_fenceable(vma) && !HAS_LLC(vma->vm->i915))
+ 		addr = (void __force *)i915_vma_pin_iomap(vma);
+ 	else
+ 		addr = i915_gem_object_pin_map(vma->obj,
+@@ -91,7 +91,7 @@ void intel_ring_unpin(struct intel_ring *ring)
+ 		return;
+ 
+ 	i915_vma_unset_ggtt_write(vma);
+-	if (i915_vma_is_map_and_fenceable(vma))
++	if (i915_vma_is_map_and_fenceable(vma) && !HAS_LLC(vma->vm->i915))
+ 		i915_vma_unpin_iomap(vma);
+ 	else
+ 		i915_gem_object_unpin_map(vma->obj);
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index 0ca7e53db112a..6f84db97e20e8 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -36,7 +36,7 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
+ 		OUT_RING(ring, upper_32_bits(shadowptr(a5xx_gpu, ring)));
+ 	}
+ 
+-	spin_lock_irqsave(&ring->lock, flags);
++	spin_lock_irqsave(&ring->preempt_lock, flags);
+ 
+ 	/* Copy the shadow to the actual register */
+ 	ring->cur = ring->next;
+@@ -44,7 +44,7 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
+ 	/* Make sure to wrap wptr if we need to */
+ 	wptr = get_wptr(ring);
+ 
+-	spin_unlock_irqrestore(&ring->lock, flags);
++	spin_unlock_irqrestore(&ring->preempt_lock, flags);
+ 
+ 	/* Make sure everything is posted before making a decision */
+ 	mb();
+@@ -144,8 +144,8 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ 	OUT_RING(ring, 1);
+ 
+ 	/* Enable local preemption for finegrain preemption */
+-	OUT_PKT7(ring, CP_PREEMPT_ENABLE_GLOBAL, 1);
+-	OUT_RING(ring, 0x02);
++	OUT_PKT7(ring, CP_PREEMPT_ENABLE_LOCAL, 1);
++	OUT_RING(ring, 0x1);
+ 
+ 	/* Allow CP_CONTEXT_SWITCH_YIELD packets in the IB2 */
+ 	OUT_PKT7(ring, CP_YIELD_ENABLE, 1);
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+index 7e04509c4e1f0..b8e71ad6f8d8a 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+@@ -45,9 +45,9 @@ static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+ 	if (!ring)
+ 		return;
+ 
+-	spin_lock_irqsave(&ring->lock, flags);
++	spin_lock_irqsave(&ring->preempt_lock, flags);
+ 	wptr = get_wptr(ring);
+-	spin_unlock_irqrestore(&ring->lock, flags);
++	spin_unlock_irqrestore(&ring->preempt_lock, flags);
+ 
+ 	gpu_write(gpu, REG_A5XX_CP_RB_WPTR, wptr);
+ }
+@@ -62,9 +62,9 @@ static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
+ 		bool empty;
+ 		struct msm_ringbuffer *ring = gpu->rb[i];
+ 
+-		spin_lock_irqsave(&ring->lock, flags);
+-		empty = (get_wptr(ring) == ring->memptrs->rptr);
+-		spin_unlock_irqrestore(&ring->lock, flags);
++		spin_lock_irqsave(&ring->preempt_lock, flags);
++		empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring));
++		spin_unlock_irqrestore(&ring->preempt_lock, flags);
+ 
+ 		if (!empty)
+ 			return ring;
+@@ -132,9 +132,9 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu)
+ 	}
+ 
+ 	/* Make sure the wptr doesn't update while we're in motion */
+-	spin_lock_irqsave(&ring->lock, flags);
++	spin_lock_irqsave(&ring->preempt_lock, flags);
+ 	a5xx_gpu->preempt[ring->id]->wptr = get_wptr(ring);
+-	spin_unlock_irqrestore(&ring->lock, flags);
++	spin_unlock_irqrestore(&ring->preempt_lock, flags);
+ 
+ 	/* Set the address of the incoming preemption record */
+ 	gpu_write64(gpu, REG_A5XX_CP_CONTEXT_SWITCH_RESTORE_ADDR_LO,
+@@ -210,6 +210,7 @@ void a5xx_preempt_hw_init(struct msm_gpu *gpu)
+ 		a5xx_gpu->preempt[i]->wptr = 0;
+ 		a5xx_gpu->preempt[i]->rptr = 0;
+ 		a5xx_gpu->preempt[i]->rbase = gpu->rb[i]->iova;
++		a5xx_gpu->preempt[i]->rptr_addr = shadowptr(a5xx_gpu, gpu->rb[i]);
+ 	}
+ 
+ 	/* Write a 0 to signal that we aren't switching pagetables */
+@@ -261,7 +262,6 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
+ 	ptr->data = 0;
+ 	ptr->cntl = MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE;
+ 
+-	ptr->rptr_addr = shadowptr(a5xx_gpu, ring);
+ 	ptr->counter = counters_iova;
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index dffc133b8b1cc..29b40acedb389 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -65,7 +65,7 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+ 		OUT_RING(ring, upper_32_bits(shadowptr(a6xx_gpu, ring)));
+ 	}
+ 
+-	spin_lock_irqsave(&ring->lock, flags);
++	spin_lock_irqsave(&ring->preempt_lock, flags);
+ 
+ 	/* Copy the shadow to the actual register */
+ 	ring->cur = ring->next;
+@@ -73,7 +73,7 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+ 	/* Make sure to wrap wptr if we need to */
+ 	wptr = get_wptr(ring);
+ 
+-	spin_unlock_irqrestore(&ring->lock, flags);
++	spin_unlock_irqrestore(&ring->preempt_lock, flags);
+ 
+ 	/* Make sure everything is posted before making a decision */
+ 	mb();
+diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
+index aa5c60a7132d8..c4e5037512b9d 100644
+--- a/drivers/gpu/drm/msm/msm_gem_submit.c
++++ b/drivers/gpu/drm/msm/msm_gem_submit.c
+@@ -494,8 +494,8 @@ static struct msm_submit_post_dep *msm_parse_post_deps(struct drm_device *dev,
+ 	int ret = 0;
+ 	uint32_t i, j;
+ 
+-	post_deps = kmalloc_array(nr_syncobjs, sizeof(*post_deps),
+-	                          GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
++	post_deps = kcalloc(nr_syncobjs, sizeof(*post_deps),
++			    GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
+ 	if (!post_deps)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -510,7 +510,6 @@ static struct msm_submit_post_dep *msm_parse_post_deps(struct drm_device *dev,
+ 		}
+ 
+ 		post_deps[i].point = syncobj_desc.point;
+-		post_deps[i].chain = NULL;
+ 
+ 		if (syncobj_desc.flags) {
+ 			ret = -EINVAL;
+diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
+index 935bf9b1d9418..1b6958e908dca 100644
+--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
++++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
+@@ -46,7 +46,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
+ 	ring->memptrs_iova = memptrs_iova;
+ 
+ 	INIT_LIST_HEAD(&ring->submits);
+-	spin_lock_init(&ring->lock);
++	spin_lock_init(&ring->preempt_lock);
+ 
+ 	snprintf(name, sizeof(name), "gpu-ring-%d", ring->id);
+ 
+diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
+index 0987d6bf848cf..4956d1bc5d0e1 100644
+--- a/drivers/gpu/drm/msm/msm_ringbuffer.h
++++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
+@@ -46,7 +46,12 @@ struct msm_ringbuffer {
+ 	struct msm_rbmemptrs *memptrs;
+ 	uint64_t memptrs_iova;
+ 	struct msm_fence_context *fctx;
+-	spinlock_t lock;
++
++	/*
++	 * preempt_lock protects preemption and serializes wptr updates against
++	 * preemption.  Can be aquired from irq context.
++	 */
++	spinlock_t preempt_lock;
+ };
+ 
+ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index c2d34c91e840c..804ea035fa46b 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -2555,14 +2555,6 @@ nv50_display_fini(struct drm_device *dev, bool runtime, bool suspend)
+ {
+ 	struct nouveau_drm *drm = nouveau_drm(dev);
+ 	struct drm_encoder *encoder;
+-	struct drm_plane *plane;
+-
+-	drm_for_each_plane(plane, dev) {
+-		struct nv50_wndw *wndw = nv50_wndw(plane);
+-		if (plane->funcs != &nv50_wndw)
+-			continue;
+-		nv50_wndw_fini(wndw);
+-	}
+ 
+ 	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+ 		if (encoder->encoder_type != DRM_MODE_ENCODER_DPMST)
+@@ -2578,7 +2570,6 @@ nv50_display_init(struct drm_device *dev, bool resume, bool runtime)
+ {
+ 	struct nv50_core *core = nv50_disp(dev)->core;
+ 	struct drm_encoder *encoder;
+-	struct drm_plane *plane;
+ 
+ 	if (resume || runtime)
+ 		core->func->init(core);
+@@ -2591,13 +2582,6 @@ nv50_display_init(struct drm_device *dev, bool resume, bool runtime)
+ 		}
+ 	}
+ 
+-	drm_for_each_plane(plane, dev) {
+-		struct nv50_wndw *wndw = nv50_wndw(plane);
+-		if (plane->funcs != &nv50_wndw)
+-			continue;
+-		nv50_wndw_init(wndw);
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+index f07916ffe42cb..831125b4453df 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+@@ -690,18 +690,6 @@ nv50_wndw_notify(struct nvif_notify *notify)
+ 	return NVIF_NOTIFY_KEEP;
+ }
+ 
+-void
+-nv50_wndw_fini(struct nv50_wndw *wndw)
+-{
+-	nvif_notify_put(&wndw->notify);
+-}
+-
+-void
+-nv50_wndw_init(struct nv50_wndw *wndw)
+-{
+-	nvif_notify_get(&wndw->notify);
+-}
+-
+ static const u64 nv50_cursor_format_modifiers[] = {
+ 	DRM_FORMAT_MOD_LINEAR,
+ 	DRM_FORMAT_MOD_INVALID,
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.h b/drivers/gpu/drm/nouveau/dispnv50/wndw.h
+index 3278e28800343..77bf124319fbd 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/wndw.h
++++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.h
+@@ -38,10 +38,9 @@ struct nv50_wndw {
+ 
+ int nv50_wndw_new_(const struct nv50_wndw_func *, struct drm_device *,
+ 		   enum drm_plane_type, const char *name, int index,
+-		   const u32 *format, enum nv50_disp_interlock_type,
+-		   u32 interlock_data, u32 heads, struct nv50_wndw **);
+-void nv50_wndw_init(struct nv50_wndw *);
+-void nv50_wndw_fini(struct nv50_wndw *);
++		   const u32 *format, u32 heads,
++		   enum nv50_disp_interlock_type, u32 interlock_data,
++		   struct nv50_wndw **);
+ void nv50_wndw_flush_set(struct nv50_wndw *, u32 *interlock,
+ 			 struct nv50_wndw_atom *);
+ void nv50_wndw_flush_clr(struct nv50_wndw *, u32 *interlock, bool flush,
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index ce822347f7470..603f625a74e54 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -3124,15 +3124,26 @@ found:
+ 	return 1;
+ }
+ 
++#define ACPIID_LEN (ACPIHID_UID_LEN + ACPIHID_HID_LEN)
++
+ static int __init parse_ivrs_acpihid(char *str)
+ {
+ 	u32 seg = 0, bus, dev, fn;
+ 	char *hid, *uid, *p, *addr;
+-	char acpiid[ACPIHID_UID_LEN + ACPIHID_HID_LEN] = {0};
++	char acpiid[ACPIID_LEN] = {0};
+ 	int i;
+ 
+ 	addr = strchr(str, '@');
+ 	if (!addr) {
++		addr = strchr(str, '=');
++		if (!addr)
++			goto not_found;
++
++		++addr;
++
++		if (strlen(addr) > ACPIID_LEN)
++			goto not_found;
++
+ 		if (sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid) == 4 ||
+ 		    sscanf(str, "[%x:%x:%x.%x]=%s", &seg, &bus, &dev, &fn, acpiid) == 5) {
+ 			pr_warn("ivrs_acpihid%s option format deprecated; use ivrs_acpihid=%s@%04x:%02x:%02x.%d instead\n",
+@@ -3145,6 +3156,9 @@ static int __init parse_ivrs_acpihid(char *str)
+ 	/* We have the '@', make it the terminator to get just the acpiid */
+ 	*addr++ = 0;
+ 
++	if (strlen(str) > ACPIID_LEN + 1)
++		goto not_found;
++
+ 	if (sscanf(str, "=%s", acpiid) != 1)
+ 		goto not_found;
+ 
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index 86fd49ae7f612..80d6412e2c546 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -24,7 +24,6 @@
+ /*
+  * Intel IOMMU system wide PASID name space:
+  */
+-static DEFINE_SPINLOCK(pasid_lock);
+ u32 intel_pasid_max_id = PASID_MAX;
+ 
+ int vcmd_alloc_pasid(struct intel_iommu *iommu, u32 *pasid)
+@@ -187,6 +186,9 @@ int intel_pasid_alloc_table(struct device *dev)
+ attach_out:
+ 	device_attach_pasid_table(info, pasid_table);
+ 
++	if (!ecap_coherent(info->iommu->ecap))
++		clflush_cache_range(pasid_table->table, size);
++
+ 	return 0;
+ }
+ 
+@@ -259,19 +261,29 @@ struct pasid_entry *intel_pasid_get_entry(struct device *dev, u32 pasid)
+ 	dir_index = pasid >> PASID_PDE_SHIFT;
+ 	index = pasid & PASID_PTE_MASK;
+ 
+-	spin_lock(&pasid_lock);
++retry:
+ 	entries = get_pasid_table_from_pde(&dir[dir_index]);
+ 	if (!entries) {
+ 		entries = alloc_pgtable_page(info->iommu->node);
+-		if (!entries) {
+-			spin_unlock(&pasid_lock);
++		if (!entries)
+ 			return NULL;
+-		}
+ 
+-		WRITE_ONCE(dir[dir_index].val,
+-			   (u64)virt_to_phys(entries) | PASID_PTE_PRESENT);
++		/*
++		 * The pasid directory table entry won't be freed after
++		 * allocation. No worry about the race with free and
++		 * clear. However, this entry might be populated by others
++		 * while we are preparing it. Use theirs with a retry.
++		 */
++		if (cmpxchg64(&dir[dir_index].val, 0ULL,
++			      (u64)virt_to_phys(entries) | PASID_PTE_PRESENT)) {
++			free_pgtable_page(entries);
++			goto retry;
++		}
++		if (!ecap_coherent(info->iommu->ecap)) {
++			clflush_cache_range(entries, VTD_PAGE_SIZE);
++			clflush_cache_range(&dir[dir_index].val, sizeof(*dir));
++		}
+ 	}
+-	spin_unlock(&pasid_lock);
+ 
+ 	return &entries[index];
+ }
+diff --git a/drivers/irqchip/irq-aspeed-vic.c b/drivers/irqchip/irq-aspeed-vic.c
+index 6567ed782f82c..58717cd44f99f 100644
+--- a/drivers/irqchip/irq-aspeed-vic.c
++++ b/drivers/irqchip/irq-aspeed-vic.c
+@@ -71,7 +71,7 @@ static void vic_init_hw(struct aspeed_vic *vic)
+ 	writel(0, vic->base + AVIC_INT_SELECT);
+ 	writel(0, vic->base + AVIC_INT_SELECT + 4);
+ 
+-	/* Some interrupts have a programable high/low level trigger
++	/* Some interrupts have a programmable high/low level trigger
+ 	 * (4 GPIO direct inputs), for now we assume this was configured
+ 	 * by firmware. We read which ones are edge now.
+ 	 */
+@@ -203,7 +203,7 @@ static int __init avic_of_init(struct device_node *node,
+ 	}
+ 	vic->base = regs;
+ 
+-	/* Initialize soures, all masked */
++	/* Initialize sources, all masked */
+ 	vic_init_hw(vic);
+ 
+ 	/* Ready to receive interrupts */
+diff --git a/drivers/irqchip/irq-bcm7120-l2.c b/drivers/irqchip/irq-bcm7120-l2.c
+index 7d776c905b7d2..1c2c5bd5a9fc1 100644
+--- a/drivers/irqchip/irq-bcm7120-l2.c
++++ b/drivers/irqchip/irq-bcm7120-l2.c
+@@ -310,7 +310,7 @@ static int __init bcm7120_l2_intc_probe(struct device_node *dn,
+ 
+ 		if (data->can_wake) {
+ 			/* This IRQ chip can wake the system, set all
+-			 * relevant child interupts in wake_enabled mask
++			 * relevant child interrupts in wake_enabled mask
+ 			 */
+ 			gc->wake_enabled = 0xffffffff;
+ 			gc->wake_enabled &= ~gc->unused;
+diff --git a/drivers/irqchip/irq-csky-apb-intc.c b/drivers/irqchip/irq-csky-apb-intc.c
+index 5a2ec43b7ddd4..ab91afa867557 100644
+--- a/drivers/irqchip/irq-csky-apb-intc.c
++++ b/drivers/irqchip/irq-csky-apb-intc.c
+@@ -176,7 +176,7 @@ gx_intc_init(struct device_node *node, struct device_node *parent)
+ 	writel(0x0, reg_base + GX_INTC_NEN63_32);
+ 
+ 	/*
+-	 * Initial mask reg with all unmasked, because we only use enalbe reg
++	 * Initial mask reg with all unmasked, because we only use enable reg
+ 	 */
+ 	writel(0x0, reg_base + GX_INTC_NMASK31_00);
+ 	writel(0x0, reg_base + GX_INTC_NMASK63_32);
+diff --git a/drivers/irqchip/irq-gic-v2m.c b/drivers/irqchip/irq-gic-v2m.c
+index fbec07d634ad2..4116b48e60aff 100644
+--- a/drivers/irqchip/irq-gic-v2m.c
++++ b/drivers/irqchip/irq-gic-v2m.c
+@@ -371,7 +371,7 @@ static int __init gicv2m_init_one(struct fwnode_handle *fwnode,
+ 	 * the MSI data is the absolute value within the range from
+ 	 * spi_start to (spi_start + num_spis).
+ 	 *
+-	 * Broadom NS2 GICv2m implementation has an erratum where the MSI data
++	 * Broadcom NS2 GICv2m implementation has an erratum where the MSI data
+ 	 * is 'spi_number - 32'
+ 	 *
+ 	 * Reading that register fails on the Graviton implementation
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index d8cb5bcd6b10e..5ec091c64d47f 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -1492,7 +1492,7 @@ static void its_vlpi_set_doorbell(struct irq_data *d, bool enable)
+ 	 *
+ 	 * Ideally, we'd issue a VMAPTI to set the doorbell to its LPI
+ 	 * value or to 1023, depending on the enable bit. But that
+-	 * would be issueing a mapping for an /existing/ DevID+EventID
++	 * would be issuing a mapping for an /existing/ DevID+EventID
+ 	 * pair, which is UNPREDICTABLE. Instead, let's issue a VMOVI
+ 	 * to the /same/ vPE, using this opportunity to adjust the
+ 	 * doorbell. Mouahahahaha. We loves it, Precious.
+@@ -3122,7 +3122,7 @@ static void its_cpu_init_lpis(void)
+ 
+ 		/*
+ 		 * It's possible for CPU to receive VLPIs before it is
+-		 * sheduled as a vPE, especially for the first CPU, and the
++		 * scheduled as a vPE, especially for the first CPU, and the
+ 		 * VLPI with INTID larger than 2^(IDbits+1) will be considered
+ 		 * as out of range and dropped by GIC.
+ 		 * So we initialize IDbits to known value to avoid VLPI drop.
+@@ -3613,7 +3613,7 @@ static void its_irq_domain_free(struct irq_domain *domain, unsigned int virq,
+ 
+ 	/*
+ 	 * If all interrupts have been freed, start mopping the
+-	 * floor. This is conditionned on the device not being shared.
++	 * floor. This is conditioned on the device not being shared.
+ 	 */
+ 	if (!its_dev->shared &&
+ 	    bitmap_empty(its_dev->event_map.lpi_map,
+@@ -4187,7 +4187,7 @@ static int its_sgi_set_affinity(struct irq_data *d,
+ {
+ 	/*
+ 	 * There is no notion of affinity for virtual SGIs, at least
+-	 * not on the host (since they can only be targetting a vPE).
++	 * not on the host (since they can only be targeting a vPE).
+ 	 * Tell the kernel we've done whatever it asked for.
+ 	 */
+ 	irq_data_update_effective_affinity(d, mask_val);
+@@ -4232,7 +4232,7 @@ static int its_sgi_get_irqchip_state(struct irq_data *d,
+ 	/*
+ 	 * Locking galore! We can race against two different events:
+ 	 *
+-	 * - Concurent vPE affinity change: we must make sure it cannot
++	 * - Concurrent vPE affinity change: we must make sure it cannot
+ 	 *   happen, or we'll talk to the wrong redistributor. This is
+ 	 *   identical to what happens with vLPIs.
+ 	 *
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 4c8f18f0cecf8..2805969e4f15a 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -1456,7 +1456,7 @@ static int gic_irq_domain_translate(struct irq_domain *d,
+ 
+ 		/*
+ 		 * Make it clear that broken DTs are... broken.
+-		 * Partitionned PPIs are an unfortunate exception.
++		 * Partitioned PPIs are an unfortunate exception.
+ 		 */
+ 		WARN_ON(*type == IRQ_TYPE_NONE &&
+ 			fwspec->param[0] != GIC_IRQ_TYPE_PARTITION);
+diff --git a/drivers/irqchip/irq-loongson-pch-pic.c b/drivers/irqchip/irq-loongson-pch-pic.c
+index 90e1ad6e36120..a4eb8a2181c7f 100644
+--- a/drivers/irqchip/irq-loongson-pch-pic.c
++++ b/drivers/irqchip/irq-loongson-pch-pic.c
+@@ -180,7 +180,7 @@ static void pch_pic_reset(struct pch_pic *priv)
+ 	int i;
+ 
+ 	for (i = 0; i < PIC_COUNT; i++) {
+-		/* Write vectore ID */
++		/* Write vectored ID */
+ 		writeb(priv->ht_vec_base + i, priv->base + PCH_INT_HTVEC(i));
+ 		/* Hardcode route to HT0 Lo */
+ 		writeb(1, priv->base + PCH_INT_ROUTE(i));
+diff --git a/drivers/irqchip/irq-meson-gpio.c b/drivers/irqchip/irq-meson-gpio.c
+index bc7aebcc96e9c..e50676ce2ec84 100644
+--- a/drivers/irqchip/irq-meson-gpio.c
++++ b/drivers/irqchip/irq-meson-gpio.c
+@@ -227,7 +227,7 @@ meson_gpio_irq_request_channel(struct meson_gpio_irq_controller *ctl,
+ 
+ 	/*
+ 	 * Get the hwirq number assigned to this channel through
+-	 * a pointer the channel_irq table. The added benifit of this
++	 * a pointer the channel_irq table. The added benefit of this
+ 	 * method is that we can also retrieve the channel index with
+ 	 * it, using the table base.
+ 	 */
+diff --git a/drivers/irqchip/irq-mtk-cirq.c b/drivers/irqchip/irq-mtk-cirq.c
+index 69ba8ce3c1785..9bca0918078e8 100644
+--- a/drivers/irqchip/irq-mtk-cirq.c
++++ b/drivers/irqchip/irq-mtk-cirq.c
+@@ -217,7 +217,7 @@ static void mtk_cirq_resume(void)
+ {
+ 	u32 value;
+ 
+-	/* flush recored interrupts, will send signals to parent controller */
++	/* flush recorded interrupts, will send signals to parent controller */
+ 	value = readl_relaxed(cirq_data->base + CIRQ_CONTROL);
+ 	writel_relaxed(value | CIRQ_FLUSH, cirq_data->base + CIRQ_CONTROL);
+ 
+diff --git a/drivers/irqchip/irq-mxs.c b/drivers/irqchip/irq-mxs.c
+index a671938fd97f6..d1f5740cd5755 100644
+--- a/drivers/irqchip/irq-mxs.c
++++ b/drivers/irqchip/irq-mxs.c
+@@ -58,7 +58,7 @@ struct icoll_priv {
+ static struct icoll_priv icoll_priv;
+ static struct irq_domain *icoll_domain;
+ 
+-/* calculate bit offset depending on number of intterupt per register */
++/* calculate bit offset depending on number of interrupt per register */
+ static u32 icoll_intr_bitshift(struct irq_data *d, u32 bit)
+ {
+ 	/*
+@@ -68,7 +68,7 @@ static u32 icoll_intr_bitshift(struct irq_data *d, u32 bit)
+ 	return bit << ((d->hwirq & 3) << 3);
+ }
+ 
+-/* calculate mem offset depending on number of intterupt per register */
++/* calculate mem offset depending on number of interrupt per register */
+ static void __iomem *icoll_intr_reg(struct irq_data *d)
+ {
+ 	/* offset = hwirq / intr_per_reg * 0x10 */
+diff --git a/drivers/irqchip/irq-sun4i.c b/drivers/irqchip/irq-sun4i.c
+index fb78d6623556c..9ea94456b178c 100644
+--- a/drivers/irqchip/irq-sun4i.c
++++ b/drivers/irqchip/irq-sun4i.c
+@@ -189,7 +189,7 @@ static void __exception_irq_entry sun4i_handle_irq(struct pt_regs *regs)
+ 	 * 3) spurious irq
+ 	 * So if we immediately get a reading of 0, check the irq-pending reg
+ 	 * to differentiate between 2 and 3. We only do this once to avoid
+-	 * the extra check in the common case of 1 hapening after having
++	 * the extra check in the common case of 1 happening after having
+ 	 * read the vector-reg once.
+ 	 */
+ 	hwirq = readl(irq_ic_data->irq_base + SUN4I_IRQ_VECTOR_REG) >> 2;
+diff --git a/drivers/irqchip/irq-ti-sci-inta.c b/drivers/irqchip/irq-ti-sci-inta.c
+index 532d0ae172d9f..ca1f593f4d13a 100644
+--- a/drivers/irqchip/irq-ti-sci-inta.c
++++ b/drivers/irqchip/irq-ti-sci-inta.c
+@@ -78,7 +78,7 @@ struct ti_sci_inta_vint_desc {
+  * struct ti_sci_inta_irq_domain - Structure representing a TISCI based
+  *				   Interrupt Aggregator IRQ domain.
+  * @sci:		Pointer to TISCI handle
+- * @vint:		TISCI resource pointer representing IA inerrupts.
++ * @vint:		TISCI resource pointer representing IA interrupts.
+  * @global_event:	TISCI resource pointer representing global events.
+  * @vint_list:		List of the vints active in the system
+  * @vint_mutex:		Mutex to protect vint_list
+diff --git a/drivers/irqchip/irq-vic.c b/drivers/irqchip/irq-vic.c
+index e460363742272..62f3d29f90420 100644
+--- a/drivers/irqchip/irq-vic.c
++++ b/drivers/irqchip/irq-vic.c
+@@ -163,7 +163,7 @@ static struct syscore_ops vic_syscore_ops = {
+ };
+ 
+ /**
+- * vic_pm_init - initicall to register VIC pm
++ * vic_pm_init - initcall to register VIC pm
+  *
+  * This is called via late_initcall() to register
+  * the resources for the VICs due to the early
+@@ -397,7 +397,7 @@ static void __init vic_clear_interrupts(void __iomem *base)
+ /*
+  * The PL190 cell from ARM has been modified by ST to handle 64 interrupts.
+  * The original cell has 32 interrupts, while the modified one has 64,
+- * replocating two blocks 0x00..0x1f in 0x20..0x3f. In that case
++ * replicating two blocks 0x00..0x1f in 0x20..0x3f. In that case
+  * the probe function is called twice, with base set to offset 000
+  *  and 020 within the page. We call this "second block".
+  */
+diff --git a/drivers/irqchip/irq-xilinx-intc.c b/drivers/irqchip/irq-xilinx-intc.c
+index 1d3d273309bd3..8cd1bfc730572 100644
+--- a/drivers/irqchip/irq-xilinx-intc.c
++++ b/drivers/irqchip/irq-xilinx-intc.c
+@@ -210,7 +210,7 @@ static int __init xilinx_intc_of_init(struct device_node *intc,
+ 
+ 	/*
+ 	 * Disable all external interrupts until they are
+-	 * explicity requested.
++	 * explicitly requested.
+ 	 */
+ 	xintc_write(irqc, IER, 0);
+ 
+diff --git a/drivers/macintosh/windfarm_lm75_sensor.c b/drivers/macintosh/windfarm_lm75_sensor.c
+index 29f48c2028b6d..e90ad1b78e936 100644
+--- a/drivers/macintosh/windfarm_lm75_sensor.c
++++ b/drivers/macintosh/windfarm_lm75_sensor.c
+@@ -34,8 +34,8 @@
+ #endif
+ 
+ struct wf_lm75_sensor {
+-	int			ds1775 : 1;
+-	int			inited : 1;
++	unsigned int		ds1775 : 1;
++	unsigned int		inited : 1;
+ 	struct i2c_client	*i2c;
+ 	struct wf_sensor	sens;
+ };
+diff --git a/drivers/macintosh/windfarm_smu_sensors.c b/drivers/macintosh/windfarm_smu_sensors.c
+index c8706cfb83fd8..714c1e14074ed 100644
+--- a/drivers/macintosh/windfarm_smu_sensors.c
++++ b/drivers/macintosh/windfarm_smu_sensors.c
+@@ -273,8 +273,8 @@ struct smu_cpu_power_sensor {
+ 	struct list_head	link;
+ 	struct wf_sensor	*volts;
+ 	struct wf_sensor	*amps;
+-	int			fake_volts : 1;
+-	int			quadratic : 1;
++	unsigned int		fake_volts : 1;
++	unsigned int		quadratic : 1;
+ 	struct wf_sensor	sens;
+ };
+ #define to_smu_cpu_power(c) container_of(c, struct smu_cpu_power_sensor, sens)
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index 8f0812e859012..92a5f9aff9b53 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -2748,7 +2748,7 @@ static int ov5640_init_controls(struct ov5640_dev *sensor)
+ 	/* Auto/manual gain */
+ 	ctrls->auto_gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_AUTOGAIN,
+ 					     0, 1, 1, 1);
+-	ctrls->gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_GAIN,
++	ctrls->gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_ANALOGUE_GAIN,
+ 					0, 1023, 1, 0);
+ 
+ 	ctrls->saturation = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_SATURATION,
+diff --git a/drivers/media/rc/gpio-ir-recv.c b/drivers/media/rc/gpio-ir-recv.c
+index 22e524b69806a..a56c844d7f816 100644
+--- a/drivers/media/rc/gpio-ir-recv.c
++++ b/drivers/media/rc/gpio-ir-recv.c
+@@ -130,6 +130,23 @@ static int gpio_ir_recv_probe(struct platform_device *pdev)
+ 				"gpio-ir-recv-irq", gpio_dev);
+ }
+ 
++static int gpio_ir_recv_remove(struct platform_device *pdev)
++{
++	struct gpio_rc_dev *gpio_dev = platform_get_drvdata(pdev);
++	struct device *pmdev = gpio_dev->pmdev;
++
++	if (pmdev) {
++		pm_runtime_get_sync(pmdev);
++		cpu_latency_qos_remove_request(&gpio_dev->qos);
++
++		pm_runtime_disable(pmdev);
++		pm_runtime_put_noidle(pmdev);
++		pm_runtime_set_suspended(pmdev);
++	}
++
++	return 0;
++}
++
+ #ifdef CONFIG_PM
+ static int gpio_ir_recv_suspend(struct device *dev)
+ {
+@@ -189,6 +206,7 @@ MODULE_DEVICE_TABLE(of, gpio_ir_recv_of_match);
+ 
+ static struct platform_driver gpio_ir_recv_driver = {
+ 	.probe  = gpio_ir_recv_probe,
++	.remove = gpio_ir_recv_remove,
+ 	.driver = {
+ 		.name   = KBUILD_MODNAME,
+ 		.of_match_table = of_match_ptr(gpio_ir_recv_of_match),
+diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
+index 9960127f612ea..bb999e67d7736 100644
+--- a/drivers/net/ethernet/broadcom/bgmac.c
++++ b/drivers/net/ethernet/broadcom/bgmac.c
+@@ -890,13 +890,13 @@ static void bgmac_chip_reset_idm_config(struct bgmac *bgmac)
+ 
+ 		if (iost & BGMAC_BCMA_IOST_ATTACHED) {
+ 			flags = BGMAC_BCMA_IOCTL_SW_CLKEN;
+-			if (!bgmac->has_robosw)
++			if (bgmac->in_init || !bgmac->has_robosw)
+ 				flags |= BGMAC_BCMA_IOCTL_SW_RESET;
+ 		}
+ 		bgmac_clk_enable(bgmac, flags);
+ 	}
+ 
+-	if (iost & BGMAC_BCMA_IOST_ATTACHED && !bgmac->has_robosw)
++	if (iost & BGMAC_BCMA_IOST_ATTACHED && (bgmac->in_init || !bgmac->has_robosw))
+ 		bgmac_idm_write(bgmac, BCMA_IOCTL,
+ 				bgmac_idm_read(bgmac, BCMA_IOCTL) &
+ 				~BGMAC_BCMA_IOCTL_SW_RESET);
+@@ -1490,6 +1490,8 @@ int bgmac_enet_probe(struct bgmac *bgmac)
+ 	struct net_device *net_dev = bgmac->net_dev;
+ 	int err;
+ 
++	bgmac->in_init = true;
++
+ 	bgmac_chip_intrs_off(bgmac);
+ 
+ 	net_dev->irq = bgmac->irq;
+@@ -1542,6 +1544,8 @@ int bgmac_enet_probe(struct bgmac *bgmac)
+ 	/* Omit FCS from max MTU size */
+ 	net_dev->max_mtu = BGMAC_RX_MAX_FRAME_SIZE - ETH_FCS_LEN;
+ 
++	bgmac->in_init = false;
++
+ 	err = register_netdev(bgmac->net_dev);
+ 	if (err) {
+ 		dev_err(bgmac->dev, "Cannot register net device\n");
+diff --git a/drivers/net/ethernet/broadcom/bgmac.h b/drivers/net/ethernet/broadcom/bgmac.h
+index 351c598a3ec6d..d1200b27af1ed 100644
+--- a/drivers/net/ethernet/broadcom/bgmac.h
++++ b/drivers/net/ethernet/broadcom/bgmac.h
+@@ -512,6 +512,8 @@ struct bgmac {
+ 	int irq;
+ 	u32 int_mask;
+ 
++	bool in_init;
++
+ 	/* Current MAC state */
+ 	int mac_speed;
+ 	int mac_duplex;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index c4a768ce8c99d..6928c0b578abb 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2854,7 +2854,7 @@ static int bnxt_alloc_ring(struct bnxt *bp, struct bnxt_ring_mem_info *rmem)
+ 
+ static void bnxt_free_tpa_info(struct bnxt *bp)
+ {
+-	int i;
++	int i, j;
+ 
+ 	for (i = 0; i < bp->rx_nr_rings; i++) {
+ 		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
+@@ -2862,8 +2862,10 @@ static void bnxt_free_tpa_info(struct bnxt *bp)
+ 		kfree(rxr->rx_tpa_idx_map);
+ 		rxr->rx_tpa_idx_map = NULL;
+ 		if (rxr->rx_tpa) {
+-			kfree(rxr->rx_tpa[0].agg_arr);
+-			rxr->rx_tpa[0].agg_arr = NULL;
++			for (j = 0; j < bp->max_tpa; j++) {
++				kfree(rxr->rx_tpa[j].agg_arr);
++				rxr->rx_tpa[j].agg_arr = NULL;
++			}
+ 		}
+ 		kfree(rxr->rx_tpa);
+ 		rxr->rx_tpa = NULL;
+@@ -2872,14 +2874,13 @@ static void bnxt_free_tpa_info(struct bnxt *bp)
+ 
+ static int bnxt_alloc_tpa_info(struct bnxt *bp)
+ {
+-	int i, j, total_aggs = 0;
++	int i, j;
+ 
+ 	bp->max_tpa = MAX_TPA;
+ 	if (bp->flags & BNXT_FLAG_CHIP_P5) {
+ 		if (!bp->max_tpa_v2)
+ 			return 0;
+ 		bp->max_tpa = max_t(u16, bp->max_tpa_v2, MAX_TPA_P5);
+-		total_aggs = bp->max_tpa * MAX_SKB_FRAGS;
+ 	}
+ 
+ 	for (i = 0; i < bp->rx_nr_rings; i++) {
+@@ -2893,12 +2894,12 @@ static int bnxt_alloc_tpa_info(struct bnxt *bp)
+ 
+ 		if (!(bp->flags & BNXT_FLAG_CHIP_P5))
+ 			continue;
+-		agg = kcalloc(total_aggs, sizeof(*agg), GFP_KERNEL);
+-		rxr->rx_tpa[0].agg_arr = agg;
+-		if (!agg)
+-			return -ENOMEM;
+-		for (j = 1; j < bp->max_tpa; j++)
+-			rxr->rx_tpa[j].agg_arr = agg + j * MAX_SKB_FRAGS;
++		for (j = 0; j < bp->max_tpa; j++) {
++			agg = kcalloc(MAX_SKB_FRAGS, sizeof(*agg), GFP_KERNEL);
++			if (!agg)
++				return -ENOMEM;
++			rxr->rx_tpa[j].agg_arr = agg;
++		}
+ 		rxr->rx_tpa_idx_map = kzalloc(sizeof(*rxr->rx_tpa_idx_map),
+ 					      GFP_KERNEL);
+ 		if (!rxr->rx_tpa_idx_map)
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 217dc67c48fa2..a8319295f1ab2 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -354,7 +354,8 @@ static void mtk_mac_config(struct phylink_config *config, unsigned int mode,
+ 	mcr_cur = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id));
+ 	mcr_new = mcr_cur;
+ 	mcr_new |= MAC_MCR_MAX_RX_1536 | MAC_MCR_IPG_CFG | MAC_MCR_FORCE_MODE |
+-		   MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_FORCE_LINK;
++		   MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_FORCE_LINK |
++		   MAC_MCR_RX_FIFO_CLR_DIS;
+ 
+ 	/* Only update control register when needed! */
+ 	if (mcr_new != mcr_cur)
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+index 54a7cd93cc0fe..0ca3223ad5457 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+@@ -339,6 +339,7 @@
+ #define MAC_MCR_FORCE_MODE	BIT(15)
+ #define MAC_MCR_TX_EN		BIT(14)
+ #define MAC_MCR_RX_EN		BIT(13)
++#define MAC_MCR_RX_FIFO_CLR_DIS	BIT(12)
+ #define MAC_MCR_BACKOFF_EN	BIT(9)
+ #define MAC_MCR_BACKPR_EN	BIT(8)
+ #define MAC_MCR_FORCE_RX_FC	BIT(5)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 1ec000d4c7705..04c59102a2863 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -1145,6 +1145,7 @@ static int stmmac_init_phy(struct net_device *dev)
+ 
+ 		phylink_ethtool_get_wol(priv->phylink, &wol);
+ 		device_set_wakeup_capable(priv->device, !!wol.supported);
++		device_set_wakeup_enable(priv->device, !!wol.wolopts);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/net/phy/microchip.c b/drivers/net/phy/microchip.c
+index a644e8e5071c3..375bbd60b38af 100644
+--- a/drivers/net/phy/microchip.c
++++ b/drivers/net/phy/microchip.c
+@@ -326,6 +326,37 @@ static int lan88xx_config_aneg(struct phy_device *phydev)
+ 	return genphy_config_aneg(phydev);
+ }
+ 
++static void lan88xx_link_change_notify(struct phy_device *phydev)
++{
++	int temp;
++
++	/* At forced 100 F/H mode, chip may fail to set mode correctly
++	 * when cable is switched between long(~50+m) and short one.
++	 * As workaround, set to 10 before setting to 100
++	 * at forced 100 F/H mode.
++	 */
++	if (!phydev->autoneg && phydev->speed == 100) {
++		/* disable phy interrupt */
++		temp = phy_read(phydev, LAN88XX_INT_MASK);
++		temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_;
++		phy_write(phydev, LAN88XX_INT_MASK, temp);
++
++		temp = phy_read(phydev, MII_BMCR);
++		temp &= ~(BMCR_SPEED100 | BMCR_SPEED1000);
++		phy_write(phydev, MII_BMCR, temp); /* set to 10 first */
++		temp |= BMCR_SPEED100;
++		phy_write(phydev, MII_BMCR, temp); /* set to 100 later */
++
++		/* clear pending interrupt generated while workaround */
++		temp = phy_read(phydev, LAN88XX_INT_STS);
++
++		/* enable phy interrupt back */
++		temp = phy_read(phydev, LAN88XX_INT_MASK);
++		temp |= LAN88XX_INT_MASK_MDINTPIN_EN_;
++		phy_write(phydev, LAN88XX_INT_MASK, temp);
++	}
++}
++
+ static struct phy_driver microchip_phy_driver[] = {
+ {
+ 	.phy_id		= 0x0007c130,
+@@ -339,6 +370,7 @@ static struct phy_driver microchip_phy_driver[] = {
+ 
+ 	.config_init	= lan88xx_config_init,
+ 	.config_aneg	= lan88xx_config_aneg,
++	.link_change_notify = lan88xx_link_change_notify,
+ 
+ 	.ack_interrupt	= lan88xx_phy_ack_interrupt,
+ 	.config_intr	= lan88xx_phy_config_intr,
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 3ef5aa6b72a7e..e771e0e8a9bc6 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -2833,8 +2833,6 @@ static int phy_probe(struct device *dev)
+ 	if (phydrv->flags & PHY_IS_INTERNAL)
+ 		phydev->is_internal = true;
+ 
+-	mutex_lock(&phydev->lock);
+-
+ 	/* Deassert the reset signal */
+ 	phy_device_reset(phydev, 0);
+ 
+@@ -2903,12 +2901,10 @@ static int phy_probe(struct device *dev)
+ 	phydev->state = PHY_READY;
+ 
+ out:
+-	/* Assert the reset signal */
++	/* Re-assert the reset signal on error */
+ 	if (err)
+ 		phy_device_reset(phydev, 1);
+ 
+-	mutex_unlock(&phydev->lock);
+-
+ 	return err;
+ }
+ 
+@@ -2918,9 +2914,7 @@ static int phy_remove(struct device *dev)
+ 
+ 	cancel_delayed_work_sync(&phydev->state_queue);
+ 
+-	mutex_lock(&phydev->lock);
+ 	phydev->state = PHY_DOWN;
+-	mutex_unlock(&phydev->lock);
+ 
+ 	sfp_bus_del_upstream(phydev->sfp_bus);
+ 	phydev->sfp_bus = NULL;
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 6f7b70522d926..667984efeb3be 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -824,20 +824,19 @@ static int lan78xx_read_raw_otp(struct lan78xx_net *dev, u32 offset,
+ 				u32 length, u8 *data)
+ {
+ 	int i;
+-	int ret;
+ 	u32 buf;
+ 	unsigned long timeout;
+ 
+-	ret = lan78xx_read_reg(dev, OTP_PWR_DN, &buf);
++	lan78xx_read_reg(dev, OTP_PWR_DN, &buf);
+ 
+ 	if (buf & OTP_PWR_DN_PWRDN_N_) {
+ 		/* clear it and wait to be cleared */
+-		ret = lan78xx_write_reg(dev, OTP_PWR_DN, 0);
++		lan78xx_write_reg(dev, OTP_PWR_DN, 0);
+ 
+ 		timeout = jiffies + HZ;
+ 		do {
+ 			usleep_range(1, 10);
+-			ret = lan78xx_read_reg(dev, OTP_PWR_DN, &buf);
++			lan78xx_read_reg(dev, OTP_PWR_DN, &buf);
+ 			if (time_after(jiffies, timeout)) {
+ 				netdev_warn(dev->net,
+ 					    "timeout on OTP_PWR_DN");
+@@ -847,18 +846,18 @@ static int lan78xx_read_raw_otp(struct lan78xx_net *dev, u32 offset,
+ 	}
+ 
+ 	for (i = 0; i < length; i++) {
+-		ret = lan78xx_write_reg(dev, OTP_ADDR1,
++		lan78xx_write_reg(dev, OTP_ADDR1,
+ 					((offset + i) >> 8) & OTP_ADDR1_15_11);
+-		ret = lan78xx_write_reg(dev, OTP_ADDR2,
++		lan78xx_write_reg(dev, OTP_ADDR2,
+ 					((offset + i) & OTP_ADDR2_10_3));
+ 
+-		ret = lan78xx_write_reg(dev, OTP_FUNC_CMD, OTP_FUNC_CMD_READ_);
+-		ret = lan78xx_write_reg(dev, OTP_CMD_GO, OTP_CMD_GO_GO_);
++		lan78xx_write_reg(dev, OTP_FUNC_CMD, OTP_FUNC_CMD_READ_);
++		lan78xx_write_reg(dev, OTP_CMD_GO, OTP_CMD_GO_GO_);
+ 
+ 		timeout = jiffies + HZ;
+ 		do {
+ 			udelay(1);
+-			ret = lan78xx_read_reg(dev, OTP_STATUS, &buf);
++			lan78xx_read_reg(dev, OTP_STATUS, &buf);
+ 			if (time_after(jiffies, timeout)) {
+ 				netdev_warn(dev->net,
+ 					    "timeout on OTP_STATUS");
+@@ -866,7 +865,7 @@ static int lan78xx_read_raw_otp(struct lan78xx_net *dev, u32 offset,
+ 			}
+ 		} while (buf & OTP_STATUS_BUSY_);
+ 
+-		ret = lan78xx_read_reg(dev, OTP_RD_DATA, &buf);
++		lan78xx_read_reg(dev, OTP_RD_DATA, &buf);
+ 
+ 		data[i] = (u8)(buf & 0xFF);
+ 	}
+@@ -878,20 +877,19 @@ static int lan78xx_write_raw_otp(struct lan78xx_net *dev, u32 offset,
+ 				 u32 length, u8 *data)
+ {
+ 	int i;
+-	int ret;
+ 	u32 buf;
+ 	unsigned long timeout;
+ 
+-	ret = lan78xx_read_reg(dev, OTP_PWR_DN, &buf);
++	lan78xx_read_reg(dev, OTP_PWR_DN, &buf);
+ 
+ 	if (buf & OTP_PWR_DN_PWRDN_N_) {
+ 		/* clear it and wait to be cleared */
+-		ret = lan78xx_write_reg(dev, OTP_PWR_DN, 0);
++		lan78xx_write_reg(dev, OTP_PWR_DN, 0);
+ 
+ 		timeout = jiffies + HZ;
+ 		do {
+ 			udelay(1);
+-			ret = lan78xx_read_reg(dev, OTP_PWR_DN, &buf);
++			lan78xx_read_reg(dev, OTP_PWR_DN, &buf);
+ 			if (time_after(jiffies, timeout)) {
+ 				netdev_warn(dev->net,
+ 					    "timeout on OTP_PWR_DN completion");
+@@ -901,21 +899,21 @@ static int lan78xx_write_raw_otp(struct lan78xx_net *dev, u32 offset,
+ 	}
+ 
+ 	/* set to BYTE program mode */
+-	ret = lan78xx_write_reg(dev, OTP_PRGM_MODE, OTP_PRGM_MODE_BYTE_);
++	lan78xx_write_reg(dev, OTP_PRGM_MODE, OTP_PRGM_MODE_BYTE_);
+ 
+ 	for (i = 0; i < length; i++) {
+-		ret = lan78xx_write_reg(dev, OTP_ADDR1,
++		lan78xx_write_reg(dev, OTP_ADDR1,
+ 					((offset + i) >> 8) & OTP_ADDR1_15_11);
+-		ret = lan78xx_write_reg(dev, OTP_ADDR2,
++		lan78xx_write_reg(dev, OTP_ADDR2,
+ 					((offset + i) & OTP_ADDR2_10_3));
+-		ret = lan78xx_write_reg(dev, OTP_PRGM_DATA, data[i]);
+-		ret = lan78xx_write_reg(dev, OTP_TST_CMD, OTP_TST_CMD_PRGVRFY_);
+-		ret = lan78xx_write_reg(dev, OTP_CMD_GO, OTP_CMD_GO_GO_);
++		lan78xx_write_reg(dev, OTP_PRGM_DATA, data[i]);
++		lan78xx_write_reg(dev, OTP_TST_CMD, OTP_TST_CMD_PRGVRFY_);
++		lan78xx_write_reg(dev, OTP_CMD_GO, OTP_CMD_GO_GO_);
+ 
+ 		timeout = jiffies + HZ;
+ 		do {
+ 			udelay(1);
+-			ret = lan78xx_read_reg(dev, OTP_STATUS, &buf);
++			lan78xx_read_reg(dev, OTP_STATUS, &buf);
+ 			if (time_after(jiffies, timeout)) {
+ 				netdev_warn(dev->net,
+ 					    "Timeout on OTP_STATUS completion");
+@@ -1040,7 +1038,6 @@ static void lan78xx_deferred_multicast_write(struct work_struct *param)
+ 			container_of(param, struct lan78xx_priv, set_multicast);
+ 	struct lan78xx_net *dev = pdata->dev;
+ 	int i;
+-	int ret;
+ 
+ 	netif_dbg(dev, drv, dev->net, "deferred multicast write 0x%08x\n",
+ 		  pdata->rfe_ctl);
+@@ -1049,14 +1046,14 @@ static void lan78xx_deferred_multicast_write(struct work_struct *param)
+ 			       DP_SEL_VHF_HASH_LEN, pdata->mchash_table);
+ 
+ 	for (i = 1; i < NUM_OF_MAF; i++) {
+-		ret = lan78xx_write_reg(dev, MAF_HI(i), 0);
+-		ret = lan78xx_write_reg(dev, MAF_LO(i),
++		lan78xx_write_reg(dev, MAF_HI(i), 0);
++		lan78xx_write_reg(dev, MAF_LO(i),
+ 					pdata->pfilter_table[i][1]);
+-		ret = lan78xx_write_reg(dev, MAF_HI(i),
++		lan78xx_write_reg(dev, MAF_HI(i),
+ 					pdata->pfilter_table[i][0]);
+ 	}
+ 
+-	ret = lan78xx_write_reg(dev, RFE_CTL, pdata->rfe_ctl);
++	lan78xx_write_reg(dev, RFE_CTL, pdata->rfe_ctl);
+ }
+ 
+ static void lan78xx_set_multicast(struct net_device *netdev)
+@@ -1126,7 +1123,6 @@ static int lan78xx_update_flowcontrol(struct lan78xx_net *dev, u8 duplex,
+ 				      u16 lcladv, u16 rmtadv)
+ {
+ 	u32 flow = 0, fct_flow = 0;
+-	int ret;
+ 	u8 cap;
+ 
+ 	if (dev->fc_autoneg)
+@@ -1149,10 +1145,10 @@ static int lan78xx_update_flowcontrol(struct lan78xx_net *dev, u8 duplex,
+ 		  (cap & FLOW_CTRL_RX ? "enabled" : "disabled"),
+ 		  (cap & FLOW_CTRL_TX ? "enabled" : "disabled"));
+ 
+-	ret = lan78xx_write_reg(dev, FCT_FLOW, fct_flow);
++	lan78xx_write_reg(dev, FCT_FLOW, fct_flow);
+ 
+ 	/* threshold value should be set before enabling flow */
+-	ret = lan78xx_write_reg(dev, FLOW, flow);
++	lan78xx_write_reg(dev, FLOW, flow);
+ 
+ 	return 0;
+ }
+@@ -1673,11 +1669,10 @@ static const struct ethtool_ops lan78xx_ethtool_ops = {
+ static void lan78xx_init_mac_address(struct lan78xx_net *dev)
+ {
+ 	u32 addr_lo, addr_hi;
+-	int ret;
+ 	u8 addr[6];
+ 
+-	ret = lan78xx_read_reg(dev, RX_ADDRL, &addr_lo);
+-	ret = lan78xx_read_reg(dev, RX_ADDRH, &addr_hi);
++	lan78xx_read_reg(dev, RX_ADDRL, &addr_lo);
++	lan78xx_read_reg(dev, RX_ADDRH, &addr_hi);
+ 
+ 	addr[0] = addr_lo & 0xFF;
+ 	addr[1] = (addr_lo >> 8) & 0xFF;
+@@ -1710,12 +1705,12 @@ static void lan78xx_init_mac_address(struct lan78xx_net *dev)
+ 			  (addr[2] << 16) | (addr[3] << 24);
+ 		addr_hi = addr[4] | (addr[5] << 8);
+ 
+-		ret = lan78xx_write_reg(dev, RX_ADDRL, addr_lo);
+-		ret = lan78xx_write_reg(dev, RX_ADDRH, addr_hi);
++		lan78xx_write_reg(dev, RX_ADDRL, addr_lo);
++		lan78xx_write_reg(dev, RX_ADDRH, addr_hi);
+ 	}
+ 
+-	ret = lan78xx_write_reg(dev, MAF_LO(0), addr_lo);
+-	ret = lan78xx_write_reg(dev, MAF_HI(0), addr_hi | MAF_HI_VALID_);
++	lan78xx_write_reg(dev, MAF_LO(0), addr_lo);
++	lan78xx_write_reg(dev, MAF_HI(0), addr_hi | MAF_HI_VALID_);
+ 
+ 	ether_addr_copy(dev->net->dev_addr, addr);
+ }
+@@ -1848,33 +1843,8 @@ static void lan78xx_remove_mdio(struct lan78xx_net *dev)
+ static void lan78xx_link_status_change(struct net_device *net)
+ {
+ 	struct phy_device *phydev = net->phydev;
+-	int ret, temp;
+-
+-	/* At forced 100 F/H mode, chip may fail to set mode correctly
+-	 * when cable is switched between long(~50+m) and short one.
+-	 * As workaround, set to 10 before setting to 100
+-	 * at forced 100 F/H mode.
+-	 */
+-	if (!phydev->autoneg && (phydev->speed == 100)) {
+-		/* disable phy interrupt */
+-		temp = phy_read(phydev, LAN88XX_INT_MASK);
+-		temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_;
+-		ret = phy_write(phydev, LAN88XX_INT_MASK, temp);
+ 
+-		temp = phy_read(phydev, MII_BMCR);
+-		temp &= ~(BMCR_SPEED100 | BMCR_SPEED1000);
+-		phy_write(phydev, MII_BMCR, temp); /* set to 10 first */
+-		temp |= BMCR_SPEED100;
+-		phy_write(phydev, MII_BMCR, temp); /* set to 100 later */
+-
+-		/* clear pending interrupt generated while workaround */
+-		temp = phy_read(phydev, LAN88XX_INT_STS);
+-
+-		/* enable phy interrupt back */
+-		temp = phy_read(phydev, LAN88XX_INT_MASK);
+-		temp |= LAN88XX_INT_MASK_MDINTPIN_EN_;
+-		ret = phy_write(phydev, LAN88XX_INT_MASK, temp);
+-	}
++	phy_print_status(phydev);
+ }
+ 
+ static int irq_map(struct irq_domain *d, unsigned int irq,
+@@ -1927,14 +1897,13 @@ static void lan78xx_irq_bus_sync_unlock(struct irq_data *irqd)
+ 	struct lan78xx_net *dev =
+ 			container_of(data, struct lan78xx_net, domain_data);
+ 	u32 buf;
+-	int ret;
+ 
+ 	/* call register access here because irq_bus_lock & irq_bus_sync_unlock
+ 	 * are only two callbacks executed in non-atomic contex.
+ 	 */
+-	ret = lan78xx_read_reg(dev, INT_EP_CTL, &buf);
++	lan78xx_read_reg(dev, INT_EP_CTL, &buf);
+ 	if (buf != data->irqenable)
+-		ret = lan78xx_write_reg(dev, INT_EP_CTL, data->irqenable);
++		lan78xx_write_reg(dev, INT_EP_CTL, data->irqenable);
+ 
+ 	mutex_unlock(&data->irq_lock);
+ }
+@@ -2001,7 +1970,6 @@ static void lan78xx_remove_irq_domain(struct lan78xx_net *dev)
+ static int lan8835_fixup(struct phy_device *phydev)
+ {
+ 	int buf;
+-	int ret;
+ 	struct lan78xx_net *dev = netdev_priv(phydev->attached_dev);
+ 
+ 	/* LED2/PME_N/IRQ_N/RGMII_ID pin to IRQ_N mode */
+@@ -2011,11 +1979,11 @@ static int lan8835_fixup(struct phy_device *phydev)
+ 	phy_write_mmd(phydev, MDIO_MMD_PCS, 0x8010, buf);
+ 
+ 	/* RGMII MAC TXC Delay Enable */
+-	ret = lan78xx_write_reg(dev, MAC_RGMII_ID,
++	lan78xx_write_reg(dev, MAC_RGMII_ID,
+ 				MAC_RGMII_ID_TXC_DELAY_EN_);
+ 
+ 	/* RGMII TX DLL Tune Adjust */
+-	ret = lan78xx_write_reg(dev, RGMII_TX_BYP_DLL, 0x3D00);
++	lan78xx_write_reg(dev, RGMII_TX_BYP_DLL, 0x3D00);
+ 
+ 	dev->interface = PHY_INTERFACE_MODE_RGMII_TXID;
+ 
+@@ -2199,28 +2167,27 @@ static int lan78xx_phy_init(struct lan78xx_net *dev)
+ 
+ static int lan78xx_set_rx_max_frame_length(struct lan78xx_net *dev, int size)
+ {
+-	int ret = 0;
+ 	u32 buf;
+ 	bool rxenabled;
+ 
+-	ret = lan78xx_read_reg(dev, MAC_RX, &buf);
++	lan78xx_read_reg(dev, MAC_RX, &buf);
+ 
+ 	rxenabled = ((buf & MAC_RX_RXEN_) != 0);
+ 
+ 	if (rxenabled) {
+ 		buf &= ~MAC_RX_RXEN_;
+-		ret = lan78xx_write_reg(dev, MAC_RX, buf);
++		lan78xx_write_reg(dev, MAC_RX, buf);
+ 	}
+ 
+ 	/* add 4 to size for FCS */
+ 	buf &= ~MAC_RX_MAX_SIZE_MASK_;
+ 	buf |= (((size + 4) << MAC_RX_MAX_SIZE_SHIFT_) & MAC_RX_MAX_SIZE_MASK_);
+ 
+-	ret = lan78xx_write_reg(dev, MAC_RX, buf);
++	lan78xx_write_reg(dev, MAC_RX, buf);
+ 
+ 	if (rxenabled) {
+ 		buf |= MAC_RX_RXEN_;
+-		ret = lan78xx_write_reg(dev, MAC_RX, buf);
++		lan78xx_write_reg(dev, MAC_RX, buf);
+ 	}
+ 
+ 	return 0;
+@@ -2277,13 +2244,12 @@ static int lan78xx_change_mtu(struct net_device *netdev, int new_mtu)
+ 	int ll_mtu = new_mtu + netdev->hard_header_len;
+ 	int old_hard_mtu = dev->hard_mtu;
+ 	int old_rx_urb_size = dev->rx_urb_size;
+-	int ret;
+ 
+ 	/* no second zero-length packet read wanted after mtu-sized packets */
+ 	if ((ll_mtu % dev->maxpacket) == 0)
+ 		return -EDOM;
+ 
+-	ret = lan78xx_set_rx_max_frame_length(dev, new_mtu + VLAN_ETH_HLEN);
++	lan78xx_set_rx_max_frame_length(dev, new_mtu + VLAN_ETH_HLEN);
+ 
+ 	netdev->mtu = new_mtu;
+ 
+@@ -2306,7 +2272,6 @@ static int lan78xx_set_mac_addr(struct net_device *netdev, void *p)
+ 	struct lan78xx_net *dev = netdev_priv(netdev);
+ 	struct sockaddr *addr = p;
+ 	u32 addr_lo, addr_hi;
+-	int ret;
+ 
+ 	if (netif_running(netdev))
+ 		return -EBUSY;
+@@ -2323,12 +2288,12 @@ static int lan78xx_set_mac_addr(struct net_device *netdev, void *p)
+ 	addr_hi = netdev->dev_addr[4] |
+ 		  netdev->dev_addr[5] << 8;
+ 
+-	ret = lan78xx_write_reg(dev, RX_ADDRL, addr_lo);
+-	ret = lan78xx_write_reg(dev, RX_ADDRH, addr_hi);
++	lan78xx_write_reg(dev, RX_ADDRL, addr_lo);
++	lan78xx_write_reg(dev, RX_ADDRH, addr_hi);
+ 
+ 	/* Added to support MAC address changes */
+-	ret = lan78xx_write_reg(dev, MAF_LO(0), addr_lo);
+-	ret = lan78xx_write_reg(dev, MAF_HI(0), addr_hi | MAF_HI_VALID_);
++	lan78xx_write_reg(dev, MAF_LO(0), addr_lo);
++	lan78xx_write_reg(dev, MAF_HI(0), addr_hi | MAF_HI_VALID_);
+ 
+ 	return 0;
+ }
+@@ -2340,7 +2305,6 @@ static int lan78xx_set_features(struct net_device *netdev,
+ 	struct lan78xx_net *dev = netdev_priv(netdev);
+ 	struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]);
+ 	unsigned long flags;
+-	int ret;
+ 
+ 	spin_lock_irqsave(&pdata->rfe_ctl_lock, flags);
+ 
+@@ -2364,7 +2328,7 @@ static int lan78xx_set_features(struct net_device *netdev,
+ 
+ 	spin_unlock_irqrestore(&pdata->rfe_ctl_lock, flags);
+ 
+-	ret = lan78xx_write_reg(dev, RFE_CTL, pdata->rfe_ctl);
++	lan78xx_write_reg(dev, RFE_CTL, pdata->rfe_ctl);
+ 
+ 	return 0;
+ }
+@@ -3820,7 +3784,6 @@ static u16 lan78xx_wakeframe_crc16(const u8 *buf, int len)
+ static int lan78xx_set_suspend(struct lan78xx_net *dev, u32 wol)
+ {
+ 	u32 buf;
+-	int ret;
+ 	int mask_index;
+ 	u16 crc;
+ 	u32 temp_wucsr;
+@@ -3829,26 +3792,26 @@ static int lan78xx_set_suspend(struct lan78xx_net *dev, u32 wol)
+ 	const u8 ipv6_multicast[3] = { 0x33, 0x33 };
+ 	const u8 arp_type[2] = { 0x08, 0x06 };
+ 
+-	ret = lan78xx_read_reg(dev, MAC_TX, &buf);
++	lan78xx_read_reg(dev, MAC_TX, &buf);
+ 	buf &= ~MAC_TX_TXEN_;
+-	ret = lan78xx_write_reg(dev, MAC_TX, buf);
+-	ret = lan78xx_read_reg(dev, MAC_RX, &buf);
++	lan78xx_write_reg(dev, MAC_TX, buf);
++	lan78xx_read_reg(dev, MAC_RX, &buf);
+ 	buf &= ~MAC_RX_RXEN_;
+-	ret = lan78xx_write_reg(dev, MAC_RX, buf);
++	lan78xx_write_reg(dev, MAC_RX, buf);
+ 
+-	ret = lan78xx_write_reg(dev, WUCSR, 0);
+-	ret = lan78xx_write_reg(dev, WUCSR2, 0);
+-	ret = lan78xx_write_reg(dev, WK_SRC, 0xFFF1FF1FUL);
++	lan78xx_write_reg(dev, WUCSR, 0);
++	lan78xx_write_reg(dev, WUCSR2, 0);
++	lan78xx_write_reg(dev, WK_SRC, 0xFFF1FF1FUL);
+ 
+ 	temp_wucsr = 0;
+ 
+ 	temp_pmt_ctl = 0;
+-	ret = lan78xx_read_reg(dev, PMT_CTL, &temp_pmt_ctl);
++	lan78xx_read_reg(dev, PMT_CTL, &temp_pmt_ctl);
+ 	temp_pmt_ctl &= ~PMT_CTL_RES_CLR_WKP_EN_;
+ 	temp_pmt_ctl |= PMT_CTL_RES_CLR_WKP_STS_;
+ 
+ 	for (mask_index = 0; mask_index < NUM_OF_WUF_CFG; mask_index++)
+-		ret = lan78xx_write_reg(dev, WUF_CFG(mask_index), 0);
++		lan78xx_write_reg(dev, WUF_CFG(mask_index), 0);
+ 
+ 	mask_index = 0;
+ 	if (wol & WAKE_PHY) {
+@@ -3877,30 +3840,30 @@ static int lan78xx_set_suspend(struct lan78xx_net *dev, u32 wol)
+ 
+ 		/* set WUF_CFG & WUF_MASK for IPv4 Multicast */
+ 		crc = lan78xx_wakeframe_crc16(ipv4_multicast, 3);
+-		ret = lan78xx_write_reg(dev, WUF_CFG(mask_index),
++		lan78xx_write_reg(dev, WUF_CFG(mask_index),
+ 					WUF_CFGX_EN_ |
+ 					WUF_CFGX_TYPE_MCAST_ |
+ 					(0 << WUF_CFGX_OFFSET_SHIFT_) |
+ 					(crc & WUF_CFGX_CRC16_MASK_));
+ 
+-		ret = lan78xx_write_reg(dev, WUF_MASK0(mask_index), 7);
+-		ret = lan78xx_write_reg(dev, WUF_MASK1(mask_index), 0);
+-		ret = lan78xx_write_reg(dev, WUF_MASK2(mask_index), 0);
+-		ret = lan78xx_write_reg(dev, WUF_MASK3(mask_index), 0);
++		lan78xx_write_reg(dev, WUF_MASK0(mask_index), 7);
++		lan78xx_write_reg(dev, WUF_MASK1(mask_index), 0);
++		lan78xx_write_reg(dev, WUF_MASK2(mask_index), 0);
++		lan78xx_write_reg(dev, WUF_MASK3(mask_index), 0);
+ 		mask_index++;
+ 
+ 		/* for IPv6 Multicast */
+ 		crc = lan78xx_wakeframe_crc16(ipv6_multicast, 2);
+-		ret = lan78xx_write_reg(dev, WUF_CFG(mask_index),
++		lan78xx_write_reg(dev, WUF_CFG(mask_index),
+ 					WUF_CFGX_EN_ |
+ 					WUF_CFGX_TYPE_MCAST_ |
+ 					(0 << WUF_CFGX_OFFSET_SHIFT_) |
+ 					(crc & WUF_CFGX_CRC16_MASK_));
+ 
+-		ret = lan78xx_write_reg(dev, WUF_MASK0(mask_index), 3);
+-		ret = lan78xx_write_reg(dev, WUF_MASK1(mask_index), 0);
+-		ret = lan78xx_write_reg(dev, WUF_MASK2(mask_index), 0);
+-		ret = lan78xx_write_reg(dev, WUF_MASK3(mask_index), 0);
++		lan78xx_write_reg(dev, WUF_MASK0(mask_index), 3);
++		lan78xx_write_reg(dev, WUF_MASK1(mask_index), 0);
++		lan78xx_write_reg(dev, WUF_MASK2(mask_index), 0);
++		lan78xx_write_reg(dev, WUF_MASK3(mask_index), 0);
+ 		mask_index++;
+ 
+ 		temp_pmt_ctl |= PMT_CTL_WOL_EN_;
+@@ -3921,16 +3884,16 @@ static int lan78xx_set_suspend(struct lan78xx_net *dev, u32 wol)
+ 		 * for packettype (offset 12,13) = ARP (0x0806)
+ 		 */
+ 		crc = lan78xx_wakeframe_crc16(arp_type, 2);
+-		ret = lan78xx_write_reg(dev, WUF_CFG(mask_index),
++		lan78xx_write_reg(dev, WUF_CFG(mask_index),
+ 					WUF_CFGX_EN_ |
+ 					WUF_CFGX_TYPE_ALL_ |
+ 					(0 << WUF_CFGX_OFFSET_SHIFT_) |
+ 					(crc & WUF_CFGX_CRC16_MASK_));
+ 
+-		ret = lan78xx_write_reg(dev, WUF_MASK0(mask_index), 0x3000);
+-		ret = lan78xx_write_reg(dev, WUF_MASK1(mask_index), 0);
+-		ret = lan78xx_write_reg(dev, WUF_MASK2(mask_index), 0);
+-		ret = lan78xx_write_reg(dev, WUF_MASK3(mask_index), 0);
++		lan78xx_write_reg(dev, WUF_MASK0(mask_index), 0x3000);
++		lan78xx_write_reg(dev, WUF_MASK1(mask_index), 0);
++		lan78xx_write_reg(dev, WUF_MASK2(mask_index), 0);
++		lan78xx_write_reg(dev, WUF_MASK3(mask_index), 0);
+ 		mask_index++;
+ 
+ 		temp_pmt_ctl |= PMT_CTL_WOL_EN_;
+@@ -3938,7 +3901,7 @@ static int lan78xx_set_suspend(struct lan78xx_net *dev, u32 wol)
+ 		temp_pmt_ctl |= PMT_CTL_SUS_MODE_0_;
+ 	}
+ 
+-	ret = lan78xx_write_reg(dev, WUCSR, temp_wucsr);
++	lan78xx_write_reg(dev, WUCSR, temp_wucsr);
+ 
+ 	/* when multiple WOL bits are set */
+ 	if (hweight_long((unsigned long)wol) > 1) {
+@@ -3946,16 +3909,16 @@ static int lan78xx_set_suspend(struct lan78xx_net *dev, u32 wol)
+ 		temp_pmt_ctl &= ~PMT_CTL_SUS_MODE_MASK_;
+ 		temp_pmt_ctl |= PMT_CTL_SUS_MODE_0_;
+ 	}
+-	ret = lan78xx_write_reg(dev, PMT_CTL, temp_pmt_ctl);
++	lan78xx_write_reg(dev, PMT_CTL, temp_pmt_ctl);
+ 
+ 	/* clear WUPS */
+-	ret = lan78xx_read_reg(dev, PMT_CTL, &buf);
++	lan78xx_read_reg(dev, PMT_CTL, &buf);
+ 	buf |= PMT_CTL_WUPS_MASK_;
+-	ret = lan78xx_write_reg(dev, PMT_CTL, buf);
++	lan78xx_write_reg(dev, PMT_CTL, buf);
+ 
+-	ret = lan78xx_read_reg(dev, MAC_RX, &buf);
++	lan78xx_read_reg(dev, MAC_RX, &buf);
+ 	buf |= MAC_RX_RXEN_;
+-	ret = lan78xx_write_reg(dev, MAC_RX, buf);
++	lan78xx_write_reg(dev, MAC_RX, buf);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/nfc/fdp/i2c.c b/drivers/nfc/fdp/i2c.c
+index 5e300788be525..808d73050afd0 100644
+--- a/drivers/nfc/fdp/i2c.c
++++ b/drivers/nfc/fdp/i2c.c
+@@ -249,6 +249,9 @@ static void fdp_nci_i2c_read_device_properties(struct device *dev,
+ 					   len, sizeof(**fw_vsc_cfg),
+ 					   GFP_KERNEL);
+ 
++		if (!*fw_vsc_cfg)
++			goto alloc_err;
++
+ 		r = device_property_read_u8_array(dev, FDP_DP_FW_VSC_CFG_NAME,
+ 						  *fw_vsc_cfg, len);
+ 
+@@ -262,6 +265,7 @@ vsc_read_err:
+ 		*fw_vsc_cfg = NULL;
+ 	}
+ 
++alloc_err:
+ 	dev_dbg(dev, "Clock type: %d, clock frequency: %d, VSC: %s",
+ 		*clock_type, *clock_freq, *fw_vsc_cfg != NULL ? "yes" : "no");
+ }
+diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig
+index a1858689d6e10..84c5b922f245e 100644
+--- a/drivers/platform/x86/Kconfig
++++ b/drivers/platform/x86/Kconfig
+@@ -1195,7 +1195,8 @@ config I2C_MULTI_INSTANTIATE
+ 
+ config MLX_PLATFORM
+ 	tristate "Mellanox Technologies platform support"
+-	depends on I2C && REGMAP
++	depends on I2C
++	select REGMAP
+ 	help
+ 	  This option enables system support for the Mellanox Technologies
+ 	  platform. The Mellanox systems provide data center networking
+diff --git a/drivers/s390/block/dasd_diag.c b/drivers/s390/block/dasd_diag.c
+index 1b9e1442e6a50..d5c7b70bd4de5 100644
+--- a/drivers/s390/block/dasd_diag.c
++++ b/drivers/s390/block/dasd_diag.c
+@@ -642,12 +642,17 @@ static void dasd_diag_setup_blk_queue(struct dasd_block *block)
+ 	blk_queue_segment_boundary(q, PAGE_SIZE - 1);
+ }
+ 
++static int dasd_diag_pe_handler(struct dasd_device *device, __u8 tbvpm)
++{
++	return dasd_generic_verify_path(device, tbvpm);
++}
++
+ static struct dasd_discipline dasd_diag_discipline = {
+ 	.owner = THIS_MODULE,
+ 	.name = "DIAG",
+ 	.ebcname = "DIAG",
+ 	.check_device = dasd_diag_check_device,
+-	.verify_path = dasd_generic_verify_path,
++	.pe_handler = dasd_diag_pe_handler,
+ 	.fill_geometry = dasd_diag_fill_geometry,
+ 	.setup_blk_queue = dasd_diag_setup_blk_queue,
+ 	.start_IO = dasd_start_diag,
+diff --git a/drivers/s390/block/dasd_fba.c b/drivers/s390/block/dasd_fba.c
+index 1a44e321b54e1..b159575a27608 100644
+--- a/drivers/s390/block/dasd_fba.c
++++ b/drivers/s390/block/dasd_fba.c
+@@ -803,13 +803,18 @@ static void dasd_fba_setup_blk_queue(struct dasd_block *block)
+ 	blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
+ }
+ 
++static int dasd_fba_pe_handler(struct dasd_device *device, __u8 tbvpm)
++{
++	return dasd_generic_verify_path(device, tbvpm);
++}
++
+ static struct dasd_discipline dasd_fba_discipline = {
+ 	.owner = THIS_MODULE,
+ 	.name = "FBA ",
+ 	.ebcname = "FBA ",
+ 	.check_device = dasd_fba_check_characteristics,
+ 	.do_analysis = dasd_fba_do_analysis,
+-	.verify_path = dasd_generic_verify_path,
++	.pe_handler = dasd_fba_pe_handler,
+ 	.setup_blk_queue = dasd_fba_setup_blk_queue,
+ 	.fill_geometry = dasd_fba_fill_geometry,
+ 	.start_IO = dasd_start_IO,
+diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h
+index e8a06d85d6f72..5d7d35ca5eb48 100644
+--- a/drivers/s390/block/dasd_int.h
++++ b/drivers/s390/block/dasd_int.h
+@@ -298,7 +298,6 @@ struct dasd_discipline {
+ 	 * e.g. verify that new path is compatible with the current
+ 	 * configuration.
+ 	 */
+-	int (*verify_path)(struct dasd_device *, __u8);
+ 	int (*pe_handler)(struct dasd_device *, __u8);
+ 
+ 	/*
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index d664c4650b2dd..fae0323242103 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -180,6 +180,7 @@ void scsi_remove_host(struct Scsi_Host *shost)
+ 	scsi_forget_host(shost);
+ 	mutex_unlock(&shost->scan_mutex);
+ 	scsi_proc_host_rm(shost);
++	scsi_proc_hostdir_rm(shost->hostt);
+ 
+ 	spin_lock_irqsave(shost->host_lock, flags);
+ 	if (scsi_host_set_state(shost, SHOST_DEL))
+@@ -321,6 +322,7 @@ static void scsi_host_dev_release(struct device *dev)
+ 	struct Scsi_Host *shost = dev_to_shost(dev);
+ 	struct device *parent = dev->parent;
+ 
++	/* In case scsi_remove_host() has not been called. */
+ 	scsi_proc_hostdir_rm(shost->hostt);
+ 
+ 	/* Wait for functions invoked through call_rcu(&shost->rcu, ...) */
+diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
+index c088a848776ef..2d5b1d5978664 100644
+--- a/drivers/scsi/megaraid/megaraid_sas.h
++++ b/drivers/scsi/megaraid/megaraid_sas.h
+@@ -1517,6 +1517,8 @@ struct megasas_ctrl_info {
+ #define MEGASAS_MAX_LD_IDS			(MEGASAS_MAX_LD_CHANNELS * \
+ 						MEGASAS_MAX_DEV_PER_CHANNEL)
+ 
++#define MEGASAS_MAX_SUPPORTED_LD_IDS		240
++
+ #define MEGASAS_MAX_SECTORS                    (2*1024)
+ #define MEGASAS_MAX_SECTORS_IEEE		(2*128)
+ #define MEGASAS_DBG_LVL				1
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fp.c b/drivers/scsi/megaraid/megaraid_sas_fp.c
+index 83f69c33b01a9..ec10d35b4685a 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fp.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fp.c
+@@ -358,7 +358,7 @@ u8 MR_ValidateMapInfo(struct megasas_instance *instance, u64 map_id)
+ 		ld = MR_TargetIdToLdGet(i, drv_map);
+ 
+ 		/* For non existing VDs, iterate to next VD*/
+-		if (ld >= (MAX_LOGICAL_DRIVES_EXT - 1))
++		if (ld >= MEGASAS_MAX_SUPPORTED_LD_IDS)
+ 			continue;
+ 
+ 		raid = MR_LdRaidGet(ld, drv_map);
+diff --git a/fs/ext4/block_validity.c b/fs/ext4/block_validity.c
+index 8e6ca23ed172c..eed5b855dd949 100644
+--- a/fs/ext4/block_validity.c
++++ b/fs/ext4/block_validity.c
+@@ -294,15 +294,10 @@ void ext4_release_system_zone(struct super_block *sb)
+ 		call_rcu(&system_blks->rcu, ext4_destroy_system_zone);
+ }
+ 
+-/*
+- * Returns 1 if the passed-in block region (start_blk,
+- * start_blk+count) is valid; 0 if some part of the block region
+- * overlaps with some other filesystem metadata blocks.
+- */
+-int ext4_inode_block_valid(struct inode *inode, ext4_fsblk_t start_blk,
+-			  unsigned int count)
++int ext4_sb_block_valid(struct super_block *sb, struct inode *inode,
++				ext4_fsblk_t start_blk, unsigned int count)
+ {
+-	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	struct ext4_system_blocks *system_blks;
+ 	struct ext4_system_zone *entry;
+ 	struct rb_node *n;
+@@ -331,7 +326,9 @@ int ext4_inode_block_valid(struct inode *inode, ext4_fsblk_t start_blk,
+ 		else if (start_blk >= (entry->start_blk + entry->count))
+ 			n = n->rb_right;
+ 		else {
+-			ret = (entry->ino == inode->i_ino);
++			ret = 0;
++			if (inode)
++				ret = (entry->ino == inode->i_ino);
+ 			break;
+ 		}
+ 	}
+@@ -340,6 +337,17 @@ out_rcu:
+ 	return ret;
+ }
+ 
++/*
++ * Returns 1 if the passed-in block region (start_blk,
++ * start_blk+count) is valid; 0 if some part of the block region
++ * overlaps with some other filesystem metadata blocks.
++ */
++int ext4_inode_block_valid(struct inode *inode, ext4_fsblk_t start_blk,
++			  unsigned int count)
++{
++	return ext4_sb_block_valid(inode->i_sb, inode, start_blk, count);
++}
++
+ int ext4_check_blockref(const char *function, unsigned int line,
+ 			struct inode *inode, __le32 *p, unsigned int max)
+ {
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 81dc61f1c557f..246573a4e8041 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -3536,6 +3536,9 @@ extern int ext4_inode_block_valid(struct inode *inode,
+ 				  unsigned int count);
+ extern int ext4_check_blockref(const char *, unsigned int,
+ 			       struct inode *, __le32 *, unsigned int);
++extern int ext4_sb_block_valid(struct super_block *sb, struct inode *inode,
++				ext4_fsblk_t start_blk, unsigned int count);
++
+ 
+ /* extents.c */
+ struct ext4_ext_path;
+diff --git a/fs/ext4/fsmap.c b/fs/ext4/fsmap.c
+index 4493ef0c715e9..cdf9bfe10137f 100644
+--- a/fs/ext4/fsmap.c
++++ b/fs/ext4/fsmap.c
+@@ -486,6 +486,8 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
+ 		keys[0].fmr_physical = bofs;
+ 	if (keys[1].fmr_physical >= eofs)
+ 		keys[1].fmr_physical = eofs - 1;
++	if (keys[1].fmr_physical < keys[0].fmr_physical)
++		return 0;
+ 	start_fsb = keys[0].fmr_physical;
+ 	end_fsb = keys[1].fmr_physical;
+ 
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 77377befbb1c6..61cb50e8fcb77 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -157,7 +157,6 @@ int ext4_find_inline_data_nolock(struct inode *inode)
+ 					(void *)ext4_raw_inode(&is.iloc));
+ 		EXT4_I(inode)->i_inline_size = EXT4_MIN_INLINE_DATA_SIZE +
+ 				le32_to_cpu(is.s.here->e_value_size);
+-		ext4_set_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+ 	}
+ out:
+ 	brelse(is.iloc.bh);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 355343cf4609b..1a654a1f3f46b 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4639,8 +4639,13 @@ static inline int ext4_iget_extra_inode(struct inode *inode,
+ 
+ 	if (EXT4_INODE_HAS_XATTR_SPACE(inode)  &&
+ 	    *magic == cpu_to_le32(EXT4_XATTR_MAGIC)) {
++		int err;
++
+ 		ext4_set_inode_state(inode, EXT4_STATE_XATTR);
+-		return ext4_find_inline_data_nolock(inode);
++		err = ext4_find_inline_data_nolock(inode);
++		if (!err && ext4_has_inline_data(inode))
++			ext4_set_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
++		return err;
+ 	} else
+ 		EXT4_I(inode)->i_inline_off = 0;
+ 	return 0;
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 240d792db9f78..53bdc67a815f6 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -180,6 +180,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 		ei_bl->i_flags = 0;
+ 		inode_set_iversion(inode_bl, 1);
+ 		i_size_write(inode_bl, 0);
++		EXT4_I(inode_bl)->i_disksize = inode_bl->i_size;
+ 		inode_bl->i_mode = S_IFREG;
+ 		if (ext4_has_feature_extents(sb)) {
+ 			ext4_set_inode_flag(inode_bl, EXT4_INODE_EXTENTS);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index d5ca02a7766e0..843840c2aced9 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -5303,7 +5303,8 @@ static void ext4_free_blocks_simple(struct inode *inode, ext4_fsblk_t block,
+ }
+ 
+ /**
+- * ext4_free_blocks() -- Free given blocks and update quota
++ * ext4_mb_clear_bb() -- helper function for freeing blocks.
++ *			Used by ext4_free_blocks()
+  * @handle:		handle for this transaction
+  * @inode:		inode
+  * @bh:			optional buffer of the block to be freed
+@@ -5311,9 +5312,9 @@ static void ext4_free_blocks_simple(struct inode *inode, ext4_fsblk_t block,
+  * @count:		number of blocks to be freed
+  * @flags:		flags used by ext4_free_blocks
+  */
+-void ext4_free_blocks(handle_t *handle, struct inode *inode,
+-		      struct buffer_head *bh, ext4_fsblk_t block,
+-		      unsigned long count, int flags)
++static void ext4_mb_clear_bb(handle_t *handle, struct inode *inode,
++			       ext4_fsblk_t block, unsigned long count,
++			       int flags)
+ {
+ 	struct buffer_head *bitmap_bh = NULL;
+ 	struct super_block *sb = inode->i_sb;
+@@ -5330,79 +5331,14 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+ 
+ 	sbi = EXT4_SB(sb);
+ 
+-	if (sbi->s_mount_state & EXT4_FC_REPLAY) {
+-		ext4_free_blocks_simple(inode, block, count);
+-		return;
+-	}
+-
+-	might_sleep();
+-	if (bh) {
+-		if (block)
+-			BUG_ON(block != bh->b_blocknr);
+-		else
+-			block = bh->b_blocknr;
+-	}
+-
+ 	if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
+ 	    !ext4_inode_block_valid(inode, block, count)) {
+-		ext4_error(sb, "Freeing blocks not in datazone - "
+-			   "block = %llu, count = %lu", block, count);
++		ext4_error(sb, "Freeing blocks in system zone - "
++			   "Block = %llu, count = %lu", block, count);
++		/* err = 0. ext4_std_error should be a no op */
+ 		goto error_return;
+ 	}
+-
+-	ext4_debug("freeing block %llu\n", block);
+-	trace_ext4_free_blocks(inode, block, count, flags);
+-
+-	if (bh && (flags & EXT4_FREE_BLOCKS_FORGET)) {
+-		BUG_ON(count > 1);
+-
+-		ext4_forget(handle, flags & EXT4_FREE_BLOCKS_METADATA,
+-			    inode, bh, block);
+-	}
+-
+-	/*
+-	 * If the extent to be freed does not begin on a cluster
+-	 * boundary, we need to deal with partial clusters at the
+-	 * beginning and end of the extent.  Normally we will free
+-	 * blocks at the beginning or the end unless we are explicitly
+-	 * requested to avoid doing so.
+-	 */
+-	overflow = EXT4_PBLK_COFF(sbi, block);
+-	if (overflow) {
+-		if (flags & EXT4_FREE_BLOCKS_NOFREE_FIRST_CLUSTER) {
+-			overflow = sbi->s_cluster_ratio - overflow;
+-			block += overflow;
+-			if (count > overflow)
+-				count -= overflow;
+-			else
+-				return;
+-		} else {
+-			block -= overflow;
+-			count += overflow;
+-		}
+-	}
+-	overflow = EXT4_LBLK_COFF(sbi, count);
+-	if (overflow) {
+-		if (flags & EXT4_FREE_BLOCKS_NOFREE_LAST_CLUSTER) {
+-			if (count > overflow)
+-				count -= overflow;
+-			else
+-				return;
+-		} else
+-			count += sbi->s_cluster_ratio - overflow;
+-	}
+-
+-	if (!bh && (flags & EXT4_FREE_BLOCKS_FORGET)) {
+-		int i;
+-		int is_metadata = flags & EXT4_FREE_BLOCKS_METADATA;
+-
+-		for (i = 0; i < count; i++) {
+-			cond_resched();
+-			if (is_metadata)
+-				bh = sb_find_get_block(inode->i_sb, block + i);
+-			ext4_forget(handle, is_metadata, inode, bh, block + i);
+-		}
+-	}
++	flags |= EXT4_FREE_BLOCKS_VALIDATED;
+ 
+ do_more:
+ 	overflow = 0;
+@@ -5420,6 +5356,8 @@ do_more:
+ 		overflow = EXT4_C2B(sbi, bit) + count -
+ 			EXT4_BLOCKS_PER_GROUP(sb);
+ 		count -= overflow;
++		/* The range changed so it's no longer validated */
++		flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
+ 	}
+ 	count_clusters = EXT4_NUM_B2C(sbi, count);
+ 	bitmap_bh = ext4_read_block_bitmap(sb, block_group);
+@@ -5434,13 +5372,8 @@ do_more:
+ 		goto error_return;
+ 	}
+ 
+-	if (in_range(ext4_block_bitmap(sb, gdp), block, count) ||
+-	    in_range(ext4_inode_bitmap(sb, gdp), block, count) ||
+-	    in_range(block, ext4_inode_table(sb, gdp),
+-		     sbi->s_itb_per_group) ||
+-	    in_range(block + count - 1, ext4_inode_table(sb, gdp),
+-		     sbi->s_itb_per_group)) {
+-
++	if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
++	    !ext4_inode_block_valid(inode, block, count)) {
+ 		ext4_error(sb, "Freeing blocks in system zone - "
+ 			   "Block = %llu, count = %lu", block, count);
+ 		/* err = 0. ext4_std_error should be a no op */
+@@ -5510,7 +5443,7 @@ do_more:
+ 						 NULL);
+ 			if (err && err != -EOPNOTSUPP)
+ 				ext4_msg(sb, KERN_WARNING, "discard request in"
+-					 " group:%d block:%d count:%lu failed"
++					 " group:%u block:%d count:%lu failed"
+ 					 " with %d", block_group, bit, count,
+ 					 err);
+ 		} else
+@@ -5562,6 +5495,8 @@ do_more:
+ 		block += count;
+ 		count = overflow;
+ 		put_bh(bitmap_bh);
++		/* The range changed so it's no longer validated */
++		flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
+ 		goto do_more;
+ 	}
+ error_return:
+@@ -5570,6 +5505,108 @@ error_return:
+ 	return;
+ }
+ 
++/**
++ * ext4_free_blocks() -- Free given blocks and update quota
++ * @handle:		handle for this transaction
++ * @inode:		inode
++ * @bh:			optional buffer of the block to be freed
++ * @block:		starting physical block to be freed
++ * @count:		number of blocks to be freed
++ * @flags:		flags used by ext4_free_blocks
++ */
++void ext4_free_blocks(handle_t *handle, struct inode *inode,
++		      struct buffer_head *bh, ext4_fsblk_t block,
++		      unsigned long count, int flags)
++{
++	struct super_block *sb = inode->i_sb;
++	unsigned int overflow;
++	struct ext4_sb_info *sbi;
++
++	sbi = EXT4_SB(sb);
++
++	if (sbi->s_mount_state & EXT4_FC_REPLAY) {
++		ext4_free_blocks_simple(inode, block, count);
++		return;
++	}
++
++	might_sleep();
++	if (bh) {
++		if (block)
++			BUG_ON(block != bh->b_blocknr);
++		else
++			block = bh->b_blocknr;
++	}
++
++	if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
++	    !ext4_inode_block_valid(inode, block, count)) {
++		ext4_error(sb, "Freeing blocks not in datazone - "
++			   "block = %llu, count = %lu", block, count);
++		return;
++	}
++	flags |= EXT4_FREE_BLOCKS_VALIDATED;
++
++	ext4_debug("freeing block %llu\n", block);
++	trace_ext4_free_blocks(inode, block, count, flags);
++
++	if (bh && (flags & EXT4_FREE_BLOCKS_FORGET)) {
++		BUG_ON(count > 1);
++
++		ext4_forget(handle, flags & EXT4_FREE_BLOCKS_METADATA,
++			    inode, bh, block);
++	}
++
++	/*
++	 * If the extent to be freed does not begin on a cluster
++	 * boundary, we need to deal with partial clusters at the
++	 * beginning and end of the extent.  Normally we will free
++	 * blocks at the beginning or the end unless we are explicitly
++	 * requested to avoid doing so.
++	 */
++	overflow = EXT4_PBLK_COFF(sbi, block);
++	if (overflow) {
++		if (flags & EXT4_FREE_BLOCKS_NOFREE_FIRST_CLUSTER) {
++			overflow = sbi->s_cluster_ratio - overflow;
++			block += overflow;
++			if (count > overflow)
++				count -= overflow;
++			else
++				return;
++		} else {
++			block -= overflow;
++			count += overflow;
++		}
++		/* The range changed so it's no longer validated */
++		flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
++	}
++	overflow = EXT4_LBLK_COFF(sbi, count);
++	if (overflow) {
++		if (flags & EXT4_FREE_BLOCKS_NOFREE_LAST_CLUSTER) {
++			if (count > overflow)
++				count -= overflow;
++			else
++				return;
++		} else
++			count += sbi->s_cluster_ratio - overflow;
++		/* The range changed so it's no longer validated */
++		flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
++	}
++
++	if (!bh && (flags & EXT4_FREE_BLOCKS_FORGET)) {
++		int i;
++		int is_metadata = flags & EXT4_FREE_BLOCKS_METADATA;
++
++		for (i = 0; i < count; i++) {
++			cond_resched();
++			if (is_metadata)
++				bh = sb_find_get_block(inode->i_sb, block + i);
++			ext4_forget(handle, is_metadata, inode, bh, block + i);
++		}
++	}
++
++	ext4_mb_clear_bb(handle, inode, block, count, flags);
++	return;
++}
++
+ /**
+  * ext4_group_add_blocks() -- Add given blocks to an existing group
+  * @handle:			handle to this transaction
+@@ -5626,11 +5663,7 @@ int ext4_group_add_blocks(handle_t *handle, struct super_block *sb,
+ 		goto error_return;
+ 	}
+ 
+-	if (in_range(ext4_block_bitmap(sb, desc), block, count) ||
+-	    in_range(ext4_inode_bitmap(sb, desc), block, count) ||
+-	    in_range(block, ext4_inode_table(sb, desc), sbi->s_itb_per_group) ||
+-	    in_range(block + count - 1, ext4_inode_table(sb, desc),
+-		     sbi->s_itb_per_group)) {
++	if (!ext4_sb_block_valid(sb, NULL, block, count)) {
+ 		ext4_error(sb, "Adding blocks in system zones - "
+ 			   "Block = %llu, count = %lu",
+ 			   block, count);
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 7ec7c9c16a39e..1f47aeca71422 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1512,11 +1512,10 @@ static struct buffer_head *__ext4_find_entry(struct inode *dir,
+ 		int has_inline_data = 1;
+ 		ret = ext4_find_inline_entry(dir, fname, res_dir,
+ 					     &has_inline_data);
+-		if (has_inline_data) {
+-			if (inlined)
+-				*inlined = 1;
++		if (inlined)
++			*inlined = has_inline_data;
++		if (has_inline_data)
+ 			goto cleanup_and_exit;
+-		}
+ 	}
+ 
+ 	if ((namelen <= 2) && (name[0] == '.') &&
+@@ -3698,7 +3697,8 @@ static void ext4_resetent(handle_t *handle, struct ext4_renament *ent,
+ 	 * so the old->de may no longer valid and need to find it again
+ 	 * before reset old inode info.
+ 	 */
+-	old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de, NULL);
++	old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de,
++				 &old.inlined);
+ 	if (IS_ERR(old.bh))
+ 		retval = PTR_ERR(old.bh);
+ 	if (!old.bh)
+@@ -3863,9 +3863,20 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 			return retval;
+ 	}
+ 
+-	old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de, NULL);
+-	if (IS_ERR(old.bh))
+-		return PTR_ERR(old.bh);
++	/*
++	 * We need to protect against old.inode directory getting converted
++	 * from inline directory format into a normal one.
++	 */
++	if (S_ISDIR(old.inode->i_mode))
++		inode_lock_nested(old.inode, I_MUTEX_NONDIR2);
++
++	old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de,
++				 &old.inlined);
++	if (IS_ERR(old.bh)) {
++		retval = PTR_ERR(old.bh);
++		goto unlock_moved_dir;
++	}
++
+ 	/*
+ 	 *  Check for inode number is _not_ due to possible IO errors.
+ 	 *  We might rmdir the source, keep it as pwd of some process
+@@ -3923,8 +3934,10 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 				goto end_rename;
+ 		}
+ 		retval = ext4_rename_dir_prepare(handle, &old);
+-		if (retval)
++		if (retval) {
++			inode_unlock(old.inode);
+ 			goto end_rename;
++		}
+ 	}
+ 	/*
+ 	 * If we're renaming a file within an inline_data dir and adding or
+@@ -4053,6 +4066,11 @@ release_bh:
+ 	brelse(old.dir_bh);
+ 	brelse(old.bh);
+ 	brelse(new.bh);
++
++unlock_moved_dir:
++	if (S_ISDIR(old.inode->i_mode))
++		inode_unlock(old.inode);
++
+ 	return retval;
+ }
+ 
+diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
+index 4569075a7da0c..a94cc7b22d7ea 100644
+--- a/fs/ext4/page-io.c
++++ b/fs/ext4/page-io.c
+@@ -416,7 +416,8 @@ static void io_submit_init_bio(struct ext4_io_submit *io,
+ 
+ static void io_submit_add_bh(struct ext4_io_submit *io,
+ 			     struct inode *inode,
+-			     struct page *page,
++			     struct page *pagecache_page,
++			     struct page *bounce_page,
+ 			     struct buffer_head *bh)
+ {
+ 	int ret;
+@@ -430,10 +431,11 @@ submit_and_retry:
+ 		io_submit_init_bio(io, bh);
+ 		io->io_bio->bi_write_hint = inode->i_write_hint;
+ 	}
+-	ret = bio_add_page(io->io_bio, page, bh->b_size, bh_offset(bh));
++	ret = bio_add_page(io->io_bio, bounce_page ?: pagecache_page,
++			   bh->b_size, bh_offset(bh));
+ 	if (ret != bh->b_size)
+ 		goto submit_and_retry;
+-	wbc_account_cgroup_owner(io->io_wbc, page, bh->b_size);
++	wbc_account_cgroup_owner(io->io_wbc, pagecache_page, bh->b_size);
+ 	io->io_next_block++;
+ }
+ 
+@@ -551,8 +553,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
+ 	do {
+ 		if (!buffer_async_write(bh))
+ 			continue;
+-		io_submit_add_bh(io, inode,
+-				 bounce_page ? bounce_page : page, bh);
++		io_submit_add_bh(io, inode, page, bounce_page, bh);
+ 		nr_submitted++;
+ 		clear_buffer_dirty(bh);
+ 	} while ((bh = bh->b_this_page) != head);
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index b80ad5a7b05c0..60e122761352c 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -2804,6 +2804,9 @@ shift:
+ 			(void *)header, total_ino);
+ 	EXT4_I(inode)->i_extra_isize = new_extra_isize;
+ 
++	if (ext4_has_inline_data(inode))
++		error = ext4_find_inline_data_nolock(inode);
++
+ cleanup:
+ 	if (error && (mnt_count != le16_to_cpu(sbi->s_es->s_mnt_count))) {
+ 		ext4_warning(inode->i_sb, "Unable to expand inode %lu. Delete some EAs or run e2fsck.",
+diff --git a/fs/file.c b/fs/file.c
+index 97a0cd31faec4..173d318208b85 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -677,6 +677,7 @@ static struct file *pick_file(struct files_struct *files, unsigned fd)
+ 	fdt = files_fdtable(files);
+ 	if (fd >= fdt->max_fds)
+ 		goto out_unlock;
++	fd = array_index_nospec(fd, fdt->max_fds);
+ 	file = fdt->fd[fd];
+ 	if (!file)
+ 		goto out_unlock;
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index 81876284a83c0..d114774ecdea8 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -442,7 +442,7 @@ static int udf_get_block(struct inode *inode, sector_t block,
+ 	 * Block beyond EOF and prealloc extents? Just discard preallocation
+ 	 * as it is not useful and complicates things.
+ 	 */
+-	if (((loff_t)block) << inode->i_blkbits > iinfo->i_lenExtents)
++	if (((loff_t)block) << inode->i_blkbits >= iinfo->i_lenExtents)
+ 		udf_discard_prealloc(inode);
+ 	udf_clear_extent_cache(inode);
+ 	phys = inode_getblk(inode, block, &err, &new);
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index d233f9e4b9c60..44103f9487c9a 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -906,7 +906,12 @@
+ #define TRACEDATA
+ #endif
+ 
++/*
++ * Discard .note.GNU-stack, which is emitted as PROGBITS by the compiler.
++ * Otherwise, the type of .notes section would become PROGBITS instead of NOTES.
++ */
+ #define NOTES								\
++	/DISCARD/ : { *(.note.GNU-stack) }				\
+ 	.notes : AT(ADDR(.notes) - LOAD_OFFSET) {			\
+ 		__start_notes = .;					\
+ 		KEEP(*(.note.*))					\
+diff --git a/include/linux/irq.h b/include/linux/irq.h
+index 607bee9271bd7..b89a8ac83d1bc 100644
+--- a/include/linux/irq.h
++++ b/include/linux/irq.h
+@@ -116,7 +116,7 @@ enum {
+  * IRQ_SET_MASK_NOCPY	- OK, chip did update irq_common_data.affinity
+  * IRQ_SET_MASK_OK_DONE	- Same as IRQ_SET_MASK_OK for core. Special code to
+  *			  support stacked irqchips, which indicates skipping
+- *			  all descendent irqchips.
++ *			  all descendant irqchips.
+  */
+ enum {
+ 	IRQ_SET_MASK_OK = 0,
+@@ -302,7 +302,7 @@ static inline bool irqd_is_level_type(struct irq_data *d)
+ 
+ /*
+  * Must only be called of irqchip.irq_set_affinity() or low level
+- * hieararchy domain allocation functions.
++ * hierarchy domain allocation functions.
+  */
+ static inline void irqd_set_single_target(struct irq_data *d)
+ {
+diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
+index 5745491303e03..fdb22e0f9a91e 100644
+--- a/include/linux/irqdesc.h
++++ b/include/linux/irqdesc.h
+@@ -32,7 +32,7 @@ struct pt_regs;
+  * @last_unhandled:	aging timer for unhandled count
+  * @irqs_unhandled:	stats field for spurious unhandled interrupts
+  * @threads_handled:	stats field for deferred spurious detection of threaded handlers
+- * @threads_handled_last: comparator field for deferred spurious detection of theraded handlers
++ * @threads_handled_last: comparator field for deferred spurious detection of threaded handlers
+  * @lock:		locking for SMP
+  * @affinity_hint:	hint to user space for preferred irq affinity
+  * @affinity_notify:	context for notification of affinity changes
+diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
+index ea5a337e0f8b8..9b9743f7538c4 100644
+--- a/include/linux/irqdomain.h
++++ b/include/linux/irqdomain.h
+@@ -256,7 +256,7 @@ static inline struct fwnode_handle *irq_domain_alloc_fwnode(phys_addr_t *pa)
+ }
+ 
+ void irq_domain_free_fwnode(struct fwnode_handle *fwnode);
+-struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, int size,
++struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int size,
+ 				    irq_hw_number_t hwirq_max, int direct_max,
+ 				    const struct irq_domain_ops *ops,
+ 				    void *host_data);
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 2e1935917c241..4b34a5c125999 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -3115,6 +3115,8 @@
+ 
+ #define PCI_VENDOR_ID_3COM_2		0xa727
+ 
++#define PCI_VENDOR_ID_SOLIDRUN		0xd063
++
+ #define PCI_VENDOR_ID_DIGIUM		0xd161
+ #define PCI_DEVICE_ID_DIGIUM_HFC4S	0xb410
+ 
+diff --git a/include/net/netfilter/nf_tproxy.h b/include/net/netfilter/nf_tproxy.h
+index 82d0e41b76f22..faa108b1ba675 100644
+--- a/include/net/netfilter/nf_tproxy.h
++++ b/include/net/netfilter/nf_tproxy.h
+@@ -17,6 +17,13 @@ static inline bool nf_tproxy_sk_is_transparent(struct sock *sk)
+ 	return false;
+ }
+ 
++static inline void nf_tproxy_twsk_deschedule_put(struct inet_timewait_sock *tw)
++{
++	local_bh_disable();
++	inet_twsk_deschedule_put(tw);
++	local_bh_enable();
++}
++
+ /* assign a socket to the skb -- consumes sk */
+ static inline void nf_tproxy_assign_sock(struct sk_buff *skb, struct sock *sk)
+ {
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 11b612e94e4e1..cb80d18a49b56 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -3541,6 +3541,7 @@ static int btf_datasec_resolve(struct btf_verifier_env *env,
+ 	struct btf *btf = env->btf;
+ 	u16 i;
+ 
++	env->resolve_mode = RESOLVE_TBD;
+ 	for_each_vsi_from(i, v->next_member, v->t, vsi) {
+ 		u32 var_type_id = vsi->type, type_id, type_size = 0;
+ 		const struct btf_type *var_type = btf_type_by_id(env->btf,
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 68efe2a0b4fbc..a5bc0c6a00fd1 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2726,7 +2726,7 @@ static bool clone3_args_valid(struct kernel_clone_args *kargs)
+ 	 * - make the CLONE_DETACHED bit reuseable for clone3
+ 	 * - make the CSIGNAL bits reuseable for clone3
+ 	 */
+-	if (kargs->flags & (CLONE_DETACHED | CSIGNAL))
++	if (kargs->flags & (CLONE_DETACHED | (CSIGNAL & (~CLONE_NEWTIME))))
+ 		return false;
+ 
+ 	if ((kargs->flags & (CLONE_SIGHAND | CLONE_CLEAR_SIGHAND)) ==
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index 621d8dd157bc1..e7d284261d450 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -811,7 +811,7 @@ void handle_edge_irq(struct irq_desc *desc)
+ 		/*
+ 		 * When another irq arrived while we were handling
+ 		 * one, we could have masked the irq.
+-		 * Renable it, if it was not disabled in meantime.
++		 * Reenable it, if it was not disabled in meantime.
+ 		 */
+ 		if (unlikely(desc->istate & IRQS_PENDING)) {
+ 			if (!irqd_irq_disabled(&desc->irq_data) &&
+diff --git a/kernel/irq/dummychip.c b/kernel/irq/dummychip.c
+index 0b0cdf206dc44..7fe6cffe7d0df 100644
+--- a/kernel/irq/dummychip.c
++++ b/kernel/irq/dummychip.c
+@@ -13,7 +13,7 @@
+ 
+ /*
+  * What should we do if we get a hw irq event on an illegal vector?
+- * Each architecture has to answer this themself.
++ * Each architecture has to answer this themselves.
+  */
+ static void ack_bad(struct irq_data *data)
+ {
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index 9b0914a063f90..6c009a033c73f 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -31,7 +31,7 @@ static int __init irq_affinity_setup(char *str)
+ 	cpulist_parse(str, irq_default_affinity);
+ 	/*
+ 	 * Set at least the boot cpu. We don't want to end up with
+-	 * bugreports caused by random comandline masks
++	 * bugreports caused by random commandline masks
+ 	 */
+ 	cpumask_set_cpu(smp_processor_id(), irq_default_affinity);
+ 	return 1;
+diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
+index 1720998933f8d..fd3f7c16c299a 100644
+--- a/kernel/irq/irqdomain.c
++++ b/kernel/irq/irqdomain.c
+@@ -25,6 +25,9 @@ static DEFINE_MUTEX(irq_domain_mutex);
+ 
+ static struct irq_domain *irq_default_domain;
+ 
++static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
++					unsigned int nr_irqs, int node, void *arg,
++					bool realloc, const struct irq_affinity_desc *affinity);
+ static void irq_domain_check_hierarchy(struct irq_domain *domain);
+ 
+ struct irqchip_fwid {
+@@ -53,7 +56,7 @@ EXPORT_SYMBOL_GPL(irqchip_fwnode_ops);
+  * @name:	Optional user provided domain name
+  * @pa:		Optional user-provided physical address
+  *
+- * Allocate a struct irqchip_fwid, and return a poiner to the embedded
++ * Allocate a struct irqchip_fwid, and return a pointer to the embedded
+  * fwnode_handle (or NULL on failure).
+  *
+  * Note: The types IRQCHIP_FWNODE_NAMED and IRQCHIP_FWNODE_NAMED_ID are
+@@ -114,23 +117,12 @@ void irq_domain_free_fwnode(struct fwnode_handle *fwnode)
+ }
+ EXPORT_SYMBOL_GPL(irq_domain_free_fwnode);
+ 
+-/**
+- * __irq_domain_add() - Allocate a new irq_domain data structure
+- * @fwnode: firmware node for the interrupt controller
+- * @size: Size of linear map; 0 for radix mapping only
+- * @hwirq_max: Maximum number of interrupts supported by controller
+- * @direct_max: Maximum value of direct maps; Use ~0 for no limit; 0 for no
+- *              direct mapping
+- * @ops: domain callbacks
+- * @host_data: Controller private data pointer
+- *
+- * Allocates and initializes an irq_domain structure.
+- * Returns pointer to IRQ domain, or NULL on failure.
+- */
+-struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, int size,
+-				    irq_hw_number_t hwirq_max, int direct_max,
+-				    const struct irq_domain_ops *ops,
+-				    void *host_data)
++static struct irq_domain *__irq_domain_create(struct fwnode_handle *fwnode,
++					      unsigned int size,
++					      irq_hw_number_t hwirq_max,
++					      int direct_max,
++					      const struct irq_domain_ops *ops,
++					      void *host_data)
+ {
+ 	struct irqchip_fwid *fwid;
+ 	struct irq_domain *domain;
+@@ -207,12 +199,44 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, int size,
+ 	domain->revmap_direct_max_irq = direct_max;
+ 	irq_domain_check_hierarchy(domain);
+ 
++	return domain;
++}
++
++static void __irq_domain_publish(struct irq_domain *domain)
++{
+ 	mutex_lock(&irq_domain_mutex);
+ 	debugfs_add_domain_dir(domain);
+ 	list_add(&domain->link, &irq_domain_list);
+ 	mutex_unlock(&irq_domain_mutex);
+ 
+ 	pr_debug("Added domain %s\n", domain->name);
++}
++
++/**
++ * __irq_domain_add() - Allocate a new irq_domain data structure
++ * @fwnode: firmware node for the interrupt controller
++ * @size: Size of linear map; 0 for radix mapping only
++ * @hwirq_max: Maximum number of interrupts supported by controller
++ * @direct_max: Maximum value of direct maps; Use ~0 for no limit; 0 for no
++ *              direct mapping
++ * @ops: domain callbacks
++ * @host_data: Controller private data pointer
++ *
++ * Allocates and initializes an irq_domain structure.
++ * Returns pointer to IRQ domain, or NULL on failure.
++ */
++struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int size,
++				    irq_hw_number_t hwirq_max, int direct_max,
++				    const struct irq_domain_ops *ops,
++				    void *host_data)
++{
++	struct irq_domain *domain;
++
++	domain = __irq_domain_create(fwnode, size, hwirq_max, direct_max,
++				     ops, host_data);
++	if (domain)
++		__irq_domain_publish(domain);
++
+ 	return domain;
+ }
+ EXPORT_SYMBOL_GPL(__irq_domain_add);
+@@ -637,6 +661,34 @@ unsigned int irq_create_direct_mapping(struct irq_domain *domain)
+ }
+ EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
+ 
++static unsigned int irq_create_mapping_affinity_locked(struct irq_domain *domain,
++						       irq_hw_number_t hwirq,
++						       const struct irq_affinity_desc *affinity)
++{
++	struct device_node *of_node = irq_domain_get_of_node(domain);
++	int virq;
++
++	pr_debug("irq_create_mapping(0x%p, 0x%lx)\n", domain, hwirq);
++
++	/* Allocate a virtual interrupt number */
++	virq = irq_domain_alloc_descs(-1, 1, hwirq, of_node_to_nid(of_node),
++				      affinity);
++	if (virq <= 0) {
++		pr_debug("-> virq allocation failed\n");
++		return 0;
++	}
++
++	if (irq_domain_associate_locked(domain, virq, hwirq)) {
++		irq_free_desc(virq);
++		return 0;
++	}
++
++	pr_debug("irq %lu on domain %s mapped to virtual irq %u\n",
++		hwirq, of_node_full_name(of_node), virq);
++
++	return virq;
++}
++
+ /**
+  * irq_create_mapping_affinity() - Map a hardware interrupt into linux irq space
+  * @domain: domain owning this hardware interrupt or NULL for default domain
+@@ -649,47 +701,31 @@ EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
+  * on the number returned from that call.
+  */
+ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
+-				       irq_hw_number_t hwirq,
+-				       const struct irq_affinity_desc *affinity)
++					 irq_hw_number_t hwirq,
++					 const struct irq_affinity_desc *affinity)
+ {
+-	struct device_node *of_node;
+ 	int virq;
+ 
+-	pr_debug("irq_create_mapping(0x%p, 0x%lx)\n", domain, hwirq);
+-
+-	/* Look for default domain if nececssary */
++	/* Look for default domain if necessary */
+ 	if (domain == NULL)
+ 		domain = irq_default_domain;
+ 	if (domain == NULL) {
+ 		WARN(1, "%s(, %lx) called with NULL domain\n", __func__, hwirq);
+ 		return 0;
+ 	}
+-	pr_debug("-> using domain @%p\n", domain);
+ 
+-	of_node = irq_domain_get_of_node(domain);
++	mutex_lock(&irq_domain_mutex);
+ 
+ 	/* Check if mapping already exists */
+ 	virq = irq_find_mapping(domain, hwirq);
+ 	if (virq) {
+-		pr_debug("-> existing mapping on virq %d\n", virq);
+-		return virq;
+-	}
+-
+-	/* Allocate a virtual interrupt number */
+-	virq = irq_domain_alloc_descs(-1, 1, hwirq, of_node_to_nid(of_node),
+-				      affinity);
+-	if (virq <= 0) {
+-		pr_debug("-> virq allocation failed\n");
+-		return 0;
++		pr_debug("existing mapping on virq %d\n", virq);
++		goto out;
+ 	}
+ 
+-	if (irq_domain_associate(domain, virq, hwirq)) {
+-		irq_free_desc(virq);
+-		return 0;
+-	}
+-
+-	pr_debug("irq %lu on domain %s mapped to virtual irq %u\n",
+-		hwirq, of_node_full_name(of_node), virq);
++	virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
++out:
++	mutex_unlock(&irq_domain_mutex);
+ 
+ 	return virq;
+ }
+@@ -793,6 +829,8 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
+ 	if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
+ 		type &= IRQ_TYPE_SENSE_MASK;
+ 
++	mutex_lock(&irq_domain_mutex);
++
+ 	/*
+ 	 * If we've already configured this interrupt,
+ 	 * don't do it again, or hell will break loose.
+@@ -805,7 +843,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
+ 		 * interrupt number.
+ 		 */
+ 		if (type == IRQ_TYPE_NONE || type == irq_get_trigger_type(virq))
+-			return virq;
++			goto out;
+ 
+ 		/*
+ 		 * If the trigger type has not been set yet, then set
+@@ -813,35 +851,45 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
+ 		 */
+ 		if (irq_get_trigger_type(virq) == IRQ_TYPE_NONE) {
+ 			irq_data = irq_get_irq_data(virq);
+-			if (!irq_data)
+-				return 0;
++			if (!irq_data) {
++				virq = 0;
++				goto out;
++			}
+ 
+ 			irqd_set_trigger_type(irq_data, type);
+-			return virq;
++			goto out;
+ 		}
+ 
+ 		pr_warn("type mismatch, failed to map hwirq-%lu for %s!\n",
+ 			hwirq, of_node_full_name(to_of_node(fwspec->fwnode)));
+-		return 0;
++		virq = 0;
++		goto out;
+ 	}
+ 
+ 	if (irq_domain_is_hierarchy(domain)) {
+-		virq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, fwspec);
+-		if (virq <= 0)
+-			return 0;
++		virq = irq_domain_alloc_irqs_locked(domain, -1, 1, NUMA_NO_NODE,
++						    fwspec, false, NULL);
++		if (virq <= 0) {
++			virq = 0;
++			goto out;
++		}
+ 	} else {
+ 		/* Create mapping */
+-		virq = irq_create_mapping(domain, hwirq);
++		virq = irq_create_mapping_affinity_locked(domain, hwirq, NULL);
+ 		if (!virq)
+-			return virq;
++			goto out;
+ 	}
+ 
+ 	irq_data = irq_get_irq_data(virq);
+-	if (WARN_ON(!irq_data))
+-		return 0;
++	if (WARN_ON(!irq_data)) {
++		virq = 0;
++		goto out;
++	}
+ 
+ 	/* Store trigger type */
+ 	irqd_set_trigger_type(irq_data, type);
++out:
++	mutex_unlock(&irq_domain_mutex);
+ 
+ 	return virq;
+ }
+@@ -893,7 +941,7 @@ unsigned int irq_find_mapping(struct irq_domain *domain,
+ {
+ 	struct irq_data *data;
+ 
+-	/* Look for default domain if nececssary */
++	/* Look for default domain if necessary */
+ 	if (domain == NULL)
+ 		domain = irq_default_domain;
+ 	if (domain == NULL)
+@@ -1083,12 +1131,15 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
+ 	struct irq_domain *domain;
+ 
+ 	if (size)
+-		domain = irq_domain_create_linear(fwnode, size, ops, host_data);
++		domain = __irq_domain_create(fwnode, size, size, 0, ops, host_data);
+ 	else
+-		domain = irq_domain_create_tree(fwnode, ops, host_data);
++		domain = __irq_domain_create(fwnode, 0, ~0, 0, ops, host_data);
++
+ 	if (domain) {
+ 		domain->parent = parent;
+ 		domain->flags |= flags;
++
++		__irq_domain_publish(domain);
+ 	}
+ 
+ 	return domain;
+@@ -1405,40 +1456,12 @@ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain,
+ 	return domain->ops->alloc(domain, irq_base, nr_irqs, arg);
+ }
+ 
+-/**
+- * __irq_domain_alloc_irqs - Allocate IRQs from domain
+- * @domain:	domain to allocate from
+- * @irq_base:	allocate specified IRQ number if irq_base >= 0
+- * @nr_irqs:	number of IRQs to allocate
+- * @node:	NUMA node id for memory allocation
+- * @arg:	domain specific argument
+- * @realloc:	IRQ descriptors have already been allocated if true
+- * @affinity:	Optional irq affinity mask for multiqueue devices
+- *
+- * Allocate IRQ numbers and initialized all data structures to support
+- * hierarchy IRQ domains.
+- * Parameter @realloc is mainly to support legacy IRQs.
+- * Returns error code or allocated IRQ number
+- *
+- * The whole process to setup an IRQ has been split into two steps.
+- * The first step, __irq_domain_alloc_irqs(), is to allocate IRQ
+- * descriptor and required hardware resources. The second step,
+- * irq_domain_activate_irq(), is to program hardwares with preallocated
+- * resources. In this way, it's easier to rollback when failing to
+- * allocate resources.
+- */
+-int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
+-			    unsigned int nr_irqs, int node, void *arg,
+-			    bool realloc, const struct irq_affinity_desc *affinity)
++static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
++					unsigned int nr_irqs, int node, void *arg,
++					bool realloc, const struct irq_affinity_desc *affinity)
+ {
+ 	int i, ret, virq;
+ 
+-	if (domain == NULL) {
+-		domain = irq_default_domain;
+-		if (WARN(!domain, "domain is NULL; cannot allocate IRQ\n"))
+-			return -EINVAL;
+-	}
+-
+ 	if (realloc && irq_base >= 0) {
+ 		virq = irq_base;
+ 	} else {
+@@ -1457,24 +1480,18 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
+ 		goto out_free_desc;
+ 	}
+ 
+-	mutex_lock(&irq_domain_mutex);
+ 	ret = irq_domain_alloc_irqs_hierarchy(domain, virq, nr_irqs, arg);
+-	if (ret < 0) {
+-		mutex_unlock(&irq_domain_mutex);
++	if (ret < 0)
+ 		goto out_free_irq_data;
+-	}
+ 
+ 	for (i = 0; i < nr_irqs; i++) {
+ 		ret = irq_domain_trim_hierarchy(virq + i);
+-		if (ret) {
+-			mutex_unlock(&irq_domain_mutex);
++		if (ret)
+ 			goto out_free_irq_data;
+-		}
+ 	}
+-	
++
+ 	for (i = 0; i < nr_irqs; i++)
+ 		irq_domain_insert_irq(virq + i);
+-	mutex_unlock(&irq_domain_mutex);
+ 
+ 	return virq;
+ 
+@@ -1485,6 +1502,48 @@ out_free_desc:
+ 	return ret;
+ }
+ 
++/**
++ * __irq_domain_alloc_irqs - Allocate IRQs from domain
++ * @domain:	domain to allocate from
++ * @irq_base:	allocate specified IRQ number if irq_base >= 0
++ * @nr_irqs:	number of IRQs to allocate
++ * @node:	NUMA node id for memory allocation
++ * @arg:	domain specific argument
++ * @realloc:	IRQ descriptors have already been allocated if true
++ * @affinity:	Optional irq affinity mask for multiqueue devices
++ *
++ * Allocate IRQ numbers and initialized all data structures to support
++ * hierarchy IRQ domains.
++ * Parameter @realloc is mainly to support legacy IRQs.
++ * Returns error code or allocated IRQ number
++ *
++ * The whole process to setup an IRQ has been split into two steps.
++ * The first step, __irq_domain_alloc_irqs(), is to allocate IRQ
++ * descriptor and required hardware resources. The second step,
++ * irq_domain_activate_irq(), is to program the hardware with preallocated
++ * resources. In this way, it's easier to rollback when failing to
++ * allocate resources.
++ */
++int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
++			    unsigned int nr_irqs, int node, void *arg,
++			    bool realloc, const struct irq_affinity_desc *affinity)
++{
++	int ret;
++
++	if (domain == NULL) {
++		domain = irq_default_domain;
++		if (WARN(!domain, "domain is NULL; cannot allocate IRQ\n"))
++			return -EINVAL;
++	}
++
++	mutex_lock(&irq_domain_mutex);
++	ret = irq_domain_alloc_irqs_locked(domain, irq_base, nr_irqs, node, arg,
++					   realloc, affinity);
++	mutex_unlock(&irq_domain_mutex);
++
++	return ret;
++}
++
+ /* The irq_data was moved, fix the revmap to refer to the new location */
+ static void irq_domain_fix_revmap(struct irq_data *d)
+ {
+@@ -1842,6 +1901,13 @@ void irq_domain_set_info(struct irq_domain *domain, unsigned int virq,
+ 	irq_set_handler_data(virq, handler_data);
+ }
+ 
++static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
++					unsigned int nr_irqs, int node, void *arg,
++					bool realloc, const struct irq_affinity_desc *affinity)
++{
++	return -EINVAL;
++}
++
+ static void irq_domain_check_hierarchy(struct irq_domain *domain)
+ {
+ }
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 437b073dc487e..0159925054faa 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -341,7 +341,7 @@ static bool irq_set_affinity_deactivated(struct irq_data *data,
+ 	 * If the interrupt is not yet activated, just store the affinity
+ 	 * mask and do not call the chip driver at all. On activation the
+ 	 * driver has to make sure anyway that the interrupt is in a
+-	 * useable state so startup works.
++	 * usable state so startup works.
+ 	 */
+ 	if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) ||
+ 	    irqd_is_activated(data) || !irqd_affinity_on_activate(data))
+@@ -999,7 +999,7 @@ again:
+ 	 * to IRQS_INPROGRESS and the irq line is masked forever.
+ 	 *
+ 	 * This also serializes the state of shared oneshot handlers
+-	 * versus "desc->threads_onehsot |= action->thread_mask;" in
++	 * versus "desc->threads_oneshot |= action->thread_mask;" in
+ 	 * irq_wake_thread(). See the comment there which explains the
+ 	 * serialization.
+ 	 */
+@@ -1877,7 +1877,7 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id)
+ 	/* Last action releases resources */
+ 	if (!desc->action) {
+ 		/*
+-		 * Reaquire bus lock as irq_release_resources() might
++		 * Reacquire bus lock as irq_release_resources() might
+ 		 * require it to deallocate resources over the slow bus.
+ 		 */
+ 		chip_bus_lock(desc);
+diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
+index b47d95b68ac1a..4457f3e966d0e 100644
+--- a/kernel/irq/msi.c
++++ b/kernel/irq/msi.c
+@@ -5,7 +5,7 @@
+  *
+  * This file is licensed under GPLv2.
+  *
+- * This file contains common code to support Message Signalled Interrupt for
++ * This file contains common code to support Message Signaled Interrupts for
+  * PCI compatible and non PCI compatible devices.
+  */
+ #include <linux/types.h>
+diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
+index 1f981162648a3..00d45b6bd8f89 100644
+--- a/kernel/irq/timings.c
++++ b/kernel/irq/timings.c
+@@ -490,7 +490,7 @@ static inline void irq_timings_store(int irq, struct irqt_stat *irqs, u64 ts)
+ 
+ 	/*
+ 	 * The interrupt triggered more than one second apart, that
+-	 * ends the sequence as predictible for our purpose. In this
++	 * ends the sequence as predictable for our purpose. In this
+ 	 * case, assume we have the beginning of a sequence and the
+ 	 * timestamp is the first value. As it is impossible to
+ 	 * predict anything at this point, return.
+diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
+index d29731a30b8e1..73717917d8164 100644
+--- a/kernel/watch_queue.c
++++ b/kernel/watch_queue.c
+@@ -274,6 +274,7 @@ long watch_queue_set_size(struct pipe_inode_info *pipe, unsigned int nr_notes)
+ 	if (ret < 0)
+ 		goto error;
+ 
++	ret = -ENOMEM;
+ 	pages = kcalloc(sizeof(struct page *), nr_pages, GFP_KERNEL);
+ 	if (!pages)
+ 		goto error;
+diff --git a/net/caif/caif_usb.c b/net/caif/caif_usb.c
+index b02e1292f7f19..24488a4e2d26e 100644
+--- a/net/caif/caif_usb.c
++++ b/net/caif/caif_usb.c
+@@ -134,6 +134,9 @@ static int cfusbl_device_notify(struct notifier_block *me, unsigned long what,
+ 	struct usb_device *usbdev;
+ 	int res;
+ 
++	if (what == NETDEV_UNREGISTER && dev->reg_state >= NETREG_UNREGISTERED)
++		return 0;
++
+ 	/* Check whether we have a NCM device, and find its VID/PID. */
+ 	if (!(dev->dev.parent && dev->dev.parent->driver &&
+ 	      strcmp(dev->dev.parent->driver->name, "cdc_ncm") == 0))
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 8cbcb6a104f2f..413c2a08d79db 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -6111,6 +6111,7 @@ EXPORT_SYMBOL(gro_find_complete_by_type);
+ 
+ static void napi_skb_free_stolen_head(struct sk_buff *skb)
+ {
++	nf_reset_ct(skb);
+ 	skb_dst_drop(skb);
+ 	skb_ext_put(skb);
+ 	kmem_cache_free(skbuff_head_cache, skb);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 668a9d0fbbc6e..09cdefe5e1c83 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -659,7 +659,6 @@ fastpath:
+ 
+ void skb_release_head_state(struct sk_buff *skb)
+ {
+-	nf_reset_ct(skb);
+ 	skb_dst_drop(skb);
+ 	if (skb->destructor) {
+ 		WARN_ON(in_irq());
+diff --git a/net/ipv4/netfilter/nf_tproxy_ipv4.c b/net/ipv4/netfilter/nf_tproxy_ipv4.c
+index b2bae0b0e42a1..61cb2341f50fe 100644
+--- a/net/ipv4/netfilter/nf_tproxy_ipv4.c
++++ b/net/ipv4/netfilter/nf_tproxy_ipv4.c
+@@ -38,7 +38,7 @@ nf_tproxy_handle_time_wait4(struct net *net, struct sk_buff *skb,
+ 					    hp->source, lport ? lport : hp->dest,
+ 					    skb->dev, NF_TPROXY_LOOKUP_LISTENER);
+ 		if (sk2) {
+-			inet_twsk_deschedule_put(inet_twsk(sk));
++			nf_tproxy_twsk_deschedule_put(inet_twsk(sk));
+ 			sk = sk2;
+ 		}
+ 	}
+diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
+index a1ac0e3d8c60c..163668531a57f 100644
+--- a/net/ipv6/ila/ila_xlat.c
++++ b/net/ipv6/ila/ila_xlat.c
+@@ -477,6 +477,7 @@ int ila_xlat_nl_cmd_get_mapping(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	rcu_read_lock();
+ 
++	ret = -ESRCH;
+ 	ila = ila_lookup_by_params(&xp, ilan);
+ 	if (ila) {
+ 		ret = ila_dump_info(ila,
+diff --git a/net/ipv6/netfilter/nf_tproxy_ipv6.c b/net/ipv6/netfilter/nf_tproxy_ipv6.c
+index 6bac68fb27a39..3fe4f15e01dc8 100644
+--- a/net/ipv6/netfilter/nf_tproxy_ipv6.c
++++ b/net/ipv6/netfilter/nf_tproxy_ipv6.c
+@@ -63,7 +63,7 @@ nf_tproxy_handle_time_wait6(struct sk_buff *skb, int tproto, int thoff,
+ 					    lport ? lport : hp->dest,
+ 					    skb->dev, NF_TPROXY_LOOKUP_LISTENER);
+ 		if (sk2) {
+-			inet_twsk_deschedule_put(inet_twsk(sk));
++			nf_tproxy_twsk_deschedule_put(inet_twsk(sk));
+ 			sk = sk2;
+ 		}
+ 	}
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index f8ba3bc25cf34..c9ca857f1068d 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -317,11 +317,12 @@ nla_put_failure:
+ }
+ 
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+-static int ctnetlink_dump_mark(struct sk_buff *skb, const struct nf_conn *ct)
++static int ctnetlink_dump_mark(struct sk_buff *skb, const struct nf_conn *ct,
++			       bool dump)
+ {
+ 	u32 mark = READ_ONCE(ct->mark);
+ 
+-	if (!mark)
++	if (!mark && !dump)
+ 		return 0;
+ 
+ 	if (nla_put_be32(skb, CTA_MARK, htonl(mark)))
+@@ -332,7 +333,7 @@ nla_put_failure:
+ 	return -1;
+ }
+ #else
+-#define ctnetlink_dump_mark(a, b) (0)
++#define ctnetlink_dump_mark(a, b, c) (0)
+ #endif
+ 
+ #ifdef CONFIG_NF_CONNTRACK_SECMARK
+@@ -537,7 +538,7 @@ static int ctnetlink_dump_extinfo(struct sk_buff *skb,
+ static int ctnetlink_dump_info(struct sk_buff *skb, struct nf_conn *ct)
+ {
+ 	if (ctnetlink_dump_status(skb, ct) < 0 ||
+-	    ctnetlink_dump_mark(skb, ct) < 0 ||
++	    ctnetlink_dump_mark(skb, ct, true) < 0 ||
+ 	    ctnetlink_dump_secctx(skb, ct) < 0 ||
+ 	    ctnetlink_dump_id(skb, ct) < 0 ||
+ 	    ctnetlink_dump_use(skb, ct) < 0 ||
+@@ -816,8 +817,7 @@ ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item)
+ 	}
+ 
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+-	if (events & (1 << IPCT_MARK) &&
+-	    ctnetlink_dump_mark(skb, ct) < 0)
++	if (ctnetlink_dump_mark(skb, ct, events & (1 << IPCT_MARK)))
+ 		goto nla_put_failure;
+ #endif
+ 	nlmsg_end(skb, nlh);
+@@ -2734,7 +2734,7 @@ static int __ctnetlink_glue_build(struct sk_buff *skb, struct nf_conn *ct)
+ 		goto nla_put_failure;
+ 
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+-	if (ctnetlink_dump_mark(skb, ct) < 0)
++	if (ctnetlink_dump_mark(skb, ct, true) < 0)
+ 		goto nla_put_failure;
+ #endif
+ 	if (ctnetlink_dump_labels(skb, ct) < 0)
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index 3f4785be066a8..e0e1168655118 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -1446,8 +1446,8 @@ static int nfc_se_io(struct nfc_dev *dev, u32 se_idx,
+ 	return rc;
+ 
+ error:
+-	kfree(cb_context);
+ 	device_unlock(&dev->dev);
++	kfree(cb_context);
+ 	return rc;
+ }
+ 
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 41cbc7c89c9d2..8ab84926816f6 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1988,16 +1988,14 @@ static int smc_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct smc_sock *smc;
+-	int rc = -EPIPE;
++	int rc;
+ 
+ 	smc = smc_sk(sk);
+ 	lock_sock(sk);
+-	if ((sk->sk_state != SMC_ACTIVE) &&
+-	    (sk->sk_state != SMC_APPCLOSEWAIT1) &&
+-	    (sk->sk_state != SMC_INIT))
+-		goto out;
+ 
++	/* SMC does not support connect with fastopen */
+ 	if (msg->msg_flags & MSG_FASTOPEN) {
++		/* not connected yet, fallback */
+ 		if (sk->sk_state == SMC_INIT && !smc->connect_nonblock) {
+ 			smc_switch_to_fallback(smc);
+ 			smc->fallback_rsn = SMC_CLC_DECL_OPTUNSUPP;
+@@ -2005,6 +2003,11 @@ static int smc_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 			rc = -EINVAL;
+ 			goto out;
+ 		}
++	} else if ((sk->sk_state != SMC_ACTIVE) &&
++		   (sk->sk_state != SMC_APPCLOSEWAIT1) &&
++		   (sk->sk_state != SMC_INIT)) {
++		rc = -EPIPE;
++		goto out;
+ 	}
+ 
+ 	if (smc->use_fallback)
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index d38788cd9433a..af657a482ad2d 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -800,6 +800,7 @@ EXPORT_SYMBOL_GPL(svc_set_num_threads);
+ static int
+ svc_stop_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
+ {
++	struct svc_rqst	*rqstp;
+ 	struct task_struct *task;
+ 	unsigned int state = serv->sv_nrthreads-1;
+ 
+@@ -808,7 +809,10 @@ svc_stop_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
+ 		task = choose_victim(serv, pool, &state);
+ 		if (task == NULL)
+ 			break;
+-		kthread_stop(task);
++		rqstp = kthread_data(task);
++		/* Did we lose a race to svo_function threadfn? */
++		if (kthread_stop(task) == -EINTR)
++			svc_exit_thread(rqstp);
+ 		nrservs++;
+ 	} while (nrservs < 0);
+ 	return 0;
+diff --git a/scripts/checkkconfigsymbols.py b/scripts/checkkconfigsymbols.py
+index 1548f9ce46827..697972432bbe7 100755
+--- a/scripts/checkkconfigsymbols.py
++++ b/scripts/checkkconfigsymbols.py
+@@ -113,7 +113,7 @@ def parse_options():
+     return args
+ 
+ 
+-def main():
++def print_undefined_symbols():
+     """Main function of this module."""
+     args = parse_options()
+ 
+@@ -472,5 +472,16 @@ def parse_kconfig_file(kfile):
+     return defined, references
+ 
+ 
++def main():
++    try:
++        print_undefined_symbols()
++    except BrokenPipeError:
++        # Python flushes standard streams on exit; redirect remaining output
++        # to devnull to avoid another BrokenPipeError at shutdown
++        devnull = os.open(os.devnull, os.O_WRONLY)
++        os.dup2(devnull, sys.stdout.fileno())
++        sys.exit(1)  # Python exits with error code 1 on EPIPE
++
++
+ if __name__ == "__main__":
+     main()
+diff --git a/scripts/clang-tools/run-clang-tools.py b/scripts/clang-tools/run-clang-tools.py
+index f754415af398b..f42699134f1c0 100755
+--- a/scripts/clang-tools/run-clang-tools.py
++++ b/scripts/clang-tools/run-clang-tools.py
+@@ -60,14 +60,21 @@ def run_analysis(entry):
+ 
+ 
+ def main():
+-    args = parse_arguments()
++    try:
++        args = parse_arguments()
+ 
+-    lock = multiprocessing.Lock()
+-    pool = multiprocessing.Pool(initializer=init, initargs=(lock, args))
+-    # Read JSON data into the datastore variable
+-    with open(args.path, "r") as f:
+-        datastore = json.load(f)
+-        pool.map(run_analysis, datastore)
++        lock = multiprocessing.Lock()
++        pool = multiprocessing.Pool(initializer=init, initargs=(lock, args))
++        # Read JSON data into the datastore variable
++        with open(args.path, "r") as f:
++            datastore = json.load(f)
++            pool.map(run_analysis, datastore)
++    except BrokenPipeError:
++        # Python flushes standard streams on exit; redirect remaining output
++        # to devnull to avoid another BrokenPipeError at shutdown
++        devnull = os.open(os.devnull, os.O_WRONLY)
++        os.dup2(devnull, sys.stdout.fileno())
++        sys.exit(1)  # Python exits with error code 1 on EPIPE
+ 
+ 
+ if __name__ == "__main__":
+diff --git a/scripts/diffconfig b/scripts/diffconfig
+index d5da5fa05d1d3..43f0f3d273ae7 100755
+--- a/scripts/diffconfig
++++ b/scripts/diffconfig
+@@ -65,7 +65,7 @@ def print_config(op, config, value, new_value):
+         else:
+             print(" %s %s -> %s" % (config, value, new_value))
+ 
+-def main():
++def show_diff():
+     global merge_style
+ 
+     # parse command line args
+@@ -129,4 +129,16 @@ def main():
+     for config in new:
+         print_config("+", config, None, b[config])
+ 
+-main()
++def main():
++    try:
++        show_diff()
++    except BrokenPipeError:
++        # Python flushes standard streams on exit; redirect remaining output
++        # to devnull to avoid another BrokenPipeError at shutdown
++        devnull = os.open(os.devnull, os.O_WRONLY)
++        os.dup2(devnull, sys.stdout.fileno())
++        sys.exit(1)  # Python exits with error code 1 on EPIPE
++
++
++if __name__ == '__main__':
++    main()
+diff --git a/tools/testing/selftests/netfilter/nft_nat.sh b/tools/testing/selftests/netfilter/nft_nat.sh
+index 4e15e81673104..67697d8ea59a5 100755
+--- a/tools/testing/selftests/netfilter/nft_nat.sh
++++ b/tools/testing/selftests/netfilter/nft_nat.sh
+@@ -404,6 +404,8 @@ EOF
+ 	echo SERVER-$family | ip netns exec "$ns1" timeout 5 socat -u STDIN TCP-LISTEN:2000 &
+ 	sc_s=$!
+ 
++	sleep 1
++
+ 	result=$(ip netns exec "$ns0" timeout 1 socat TCP:$daddr:2000 STDOUT)
+ 
+ 	if [ "$result" = "SERVER-inet" ];then


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-03-22 14:15 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2023-03-22 14:15 UTC (permalink / raw
  To: gentoo-commits

commit:     d7c0924144a2e2bebb2b7cb64b827d6ef9739aa1
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 22 12:51:20 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Mar 22 12:51:20 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d7c09241

Linux patch 5.10.176

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |    4 +
 1175_linux-5.10.176.patch | 8752 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 8756 insertions(+)

diff --git a/0000_README b/0000_README
index f3c2cbaf..50964ce7 100644
--- a/0000_README
+++ b/0000_README
@@ -743,6 +743,10 @@ Patch:  1174_linux-5.10.175.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.175
 
+Patch:  1175_linux-5.10.176.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.176
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1175_linux-5.10.176.patch b/1175_linux-5.10.176.patch
new file mode 100644
index 00000000..8c46ce2a
--- /dev/null
+++ b/1175_linux-5.10.176.patch
@@ -0,0 +1,8752 @@
+diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
+index ca52c82e5bb54..f7b69a0e71e1c 100644
+--- a/Documentation/filesystems/vfs.rst
++++ b/Documentation/filesystems/vfs.rst
+@@ -1188,7 +1188,7 @@ defined:
+ 	return
+ 	-ECHILD and it will be called again in ref-walk mode.
+ 
+-``_weak_revalidate``
++``d_weak_revalidate``
+ 	called when the VFS needs to revalidate a "jumped" dentry.  This
+ 	is called when a path-walk ends at dentry that was not acquired
+ 	by doing a lookup in the parent directory.  This includes "/",
+diff --git a/Documentation/trace/ftrace.rst b/Documentation/trace/ftrace.rst
+index 87cf5c010d5dd..ed2e45f9b7627 100644
+--- a/Documentation/trace/ftrace.rst
++++ b/Documentation/trace/ftrace.rst
+@@ -2923,7 +2923,7 @@ Produces::
+               bash-1994  [000] ....  4342.324898: ima_get_action <-process_measurement
+               bash-1994  [000] ....  4342.324898: ima_match_policy <-ima_get_action
+               bash-1994  [000] ....  4342.324899: do_truncate <-do_last
+-              bash-1994  [000] ....  4342.324899: should_remove_suid <-do_truncate
++              bash-1994  [000] ....  4342.324899: setattr_should_drop_suidgid <-do_truncate
+               bash-1994  [000] ....  4342.324899: notify_change <-do_truncate
+               bash-1994  [000] ....  4342.324900: current_fs_time <-notify_change
+               bash-1994  [000] ....  4342.324900: current_kernel_time <-current_fs_time
+diff --git a/Makefile b/Makefile
+index e6b09052f222b..71caf59383615 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 175
++SUBLEVEL = 176
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/s390/boot/ipl_report.c b/arch/s390/boot/ipl_report.c
+index 0b4965573656f..88bacf4999c47 100644
+--- a/arch/s390/boot/ipl_report.c
++++ b/arch/s390/boot/ipl_report.c
+@@ -57,11 +57,19 @@ repeat:
+ 	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && INITRD_START && INITRD_SIZE &&
+ 	    intersects(INITRD_START, INITRD_SIZE, safe_addr, size))
+ 		safe_addr = INITRD_START + INITRD_SIZE;
++	if (intersects(safe_addr, size, (unsigned long)comps, comps->len)) {
++		safe_addr = (unsigned long)comps + comps->len;
++		goto repeat;
++	}
+ 	for_each_rb_entry(comp, comps)
+ 		if (intersects(safe_addr, size, comp->addr, comp->len)) {
+ 			safe_addr = comp->addr + comp->len;
+ 			goto repeat;
+ 		}
++	if (intersects(safe_addr, size, (unsigned long)certs, certs->len)) {
++		safe_addr = (unsigned long)certs + certs->len;
++		goto repeat;
++	}
+ 	for_each_rb_entry(cert, certs)
+ 		if (intersects(safe_addr, size, cert->addr, cert->len)) {
+ 			safe_addr = cert->addr + cert->len;
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 1906387a0faf4..0b7c81389c50a 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -2309,6 +2309,7 @@ static void mce_restart(void)
+ {
+ 	mce_timer_delete_all();
+ 	on_each_cpu(mce_cpu_restart, NULL, 1);
++	mce_schedule_work();
+ }
+ 
+ /* Toggle features for corrected errors */
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 91371b01eae0c..c165ddbb672fe 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2998,7 +2998,7 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
+ 					struct vmcs12 *vmcs12,
+ 					enum vm_entry_failure_code *entry_failure_code)
+ {
+-	bool ia32e;
++	bool ia32e = !!(vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE);
+ 
+ 	*entry_failure_code = ENTRY_FAIL_DEFAULT;
+ 
+@@ -3024,6 +3024,13 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
+ 					   vmcs12->guest_ia32_perf_global_ctrl)))
+ 		return -EINVAL;
+ 
++	if (CC((vmcs12->guest_cr0 & (X86_CR0_PG | X86_CR0_PE)) == X86_CR0_PG))
++		return -EINVAL;
++
++	if (CC(ia32e && !(vmcs12->guest_cr4 & X86_CR4_PAE)) ||
++	    CC(ia32e && !(vmcs12->guest_cr0 & X86_CR0_PG)))
++		return -EINVAL;
++
+ 	/*
+ 	 * If the load IA32_EFER VM-entry control is 1, the following checks
+ 	 * are performed on the field for the IA32_EFER MSR:
+@@ -3035,7 +3042,6 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
+ 	 */
+ 	if (to_vmx(vcpu)->nested.nested_run_pending &&
+ 	    (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_EFER)) {
+-		ia32e = (vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) != 0;
+ 		if (CC(!kvm_valid_efer(vcpu, vmcs12->guest_ia32_efer)) ||
+ 		    CC(ia32e != !!(vmcs12->guest_ia32_efer & EFER_LMA)) ||
+ 		    CC(((vmcs12->guest_cr0 & X86_CR0_PG) &&
+diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
+index 011e042b47ba7..5ec47af786ddb 100644
+--- a/arch/x86/mm/mem_encrypt_identity.c
++++ b/arch/x86/mm/mem_encrypt_identity.c
+@@ -586,7 +586,8 @@ void __init sme_enable(struct boot_params *bp)
+ 	cmdline_ptr = (const char *)((u64)bp->hdr.cmd_line_ptr |
+ 				     ((u64)bp->ext_cmd_line_ptr << 32));
+ 
+-	cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer));
++	if (cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer)) < 0)
++		return;
+ 
+ 	if (!strncmp(buffer, cmdline_on, sizeof(buffer)))
+ 		sme_me_mask = me_mask;
+diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
+index 40c53632512b7..9617688b58b32 100644
+--- a/drivers/block/Kconfig
++++ b/drivers/block/Kconfig
+@@ -16,13 +16,7 @@ menuconfig BLK_DEV
+ 
+ if BLK_DEV
+ 
+-config BLK_DEV_NULL_BLK
+-	tristate "Null test block driver"
+-	select CONFIGFS_FS
+-
+-config BLK_DEV_NULL_BLK_FAULT_INJECTION
+-	bool "Support fault injection for Null test block driver"
+-	depends on BLK_DEV_NULL_BLK && FAULT_INJECTION
++source "drivers/block/null_blk/Kconfig"
+ 
+ config BLK_DEV_FD
+ 	tristate "Normal floppy disk support"
+diff --git a/drivers/block/Makefile b/drivers/block/Makefile
+index e1f63117ee94f..a3170859e01d4 100644
+--- a/drivers/block/Makefile
++++ b/drivers/block/Makefile
+@@ -41,12 +41,7 @@ obj-$(CONFIG_BLK_DEV_RSXX) += rsxx/
+ obj-$(CONFIG_ZRAM) += zram/
+ obj-$(CONFIG_BLK_DEV_RNBD)	+= rnbd/
+ 
+-obj-$(CONFIG_BLK_DEV_NULL_BLK)	+= null_blk.o
+-null_blk-objs	:= null_blk_main.o
+-ifeq ($(CONFIG_BLK_DEV_ZONED), y)
+-null_blk-$(CONFIG_TRACING) += null_blk_trace.o
+-endif
+-null_blk-$(CONFIG_BLK_DEV_ZONED) += null_blk_zoned.o
++obj-$(CONFIG_BLK_DEV_NULL_BLK)	+= null_blk/
+ 
+ skd-y		:= skd_main.o
+ swim_mod-y	:= swim.o swim_asm.o
+diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h
+deleted file mode 100644
+index 7de703f28617b..0000000000000
+--- a/drivers/block/null_blk.h
++++ /dev/null
+@@ -1,137 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef __BLK_NULL_BLK_H
+-#define __BLK_NULL_BLK_H
+-
+-#undef pr_fmt
+-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+-
+-#include <linux/blkdev.h>
+-#include <linux/slab.h>
+-#include <linux/blk-mq.h>
+-#include <linux/hrtimer.h>
+-#include <linux/configfs.h>
+-#include <linux/badblocks.h>
+-#include <linux/fault-inject.h>
+-
+-struct nullb_cmd {
+-	struct request *rq;
+-	struct bio *bio;
+-	unsigned int tag;
+-	blk_status_t error;
+-	struct nullb_queue *nq;
+-	struct hrtimer timer;
+-	bool fake_timeout;
+-};
+-
+-struct nullb_queue {
+-	unsigned long *tag_map;
+-	wait_queue_head_t wait;
+-	unsigned int queue_depth;
+-	struct nullb_device *dev;
+-	unsigned int requeue_selection;
+-
+-	struct nullb_cmd *cmds;
+-};
+-
+-struct nullb_device {
+-	struct nullb *nullb;
+-	struct config_item item;
+-	struct radix_tree_root data; /* data stored in the disk */
+-	struct radix_tree_root cache; /* disk cache data */
+-	unsigned long flags; /* device flags */
+-	unsigned int curr_cache;
+-	struct badblocks badblocks;
+-
+-	unsigned int nr_zones;
+-	unsigned int nr_zones_imp_open;
+-	unsigned int nr_zones_exp_open;
+-	unsigned int nr_zones_closed;
+-	struct blk_zone *zones;
+-	sector_t zone_size_sects;
+-	spinlock_t zone_lock;
+-	unsigned long *zone_locks;
+-
+-	unsigned long size; /* device size in MB */
+-	unsigned long completion_nsec; /* time in ns to complete a request */
+-	unsigned long cache_size; /* disk cache size in MB */
+-	unsigned long zone_size; /* zone size in MB if device is zoned */
+-	unsigned long zone_capacity; /* zone capacity in MB if device is zoned */
+-	unsigned int zone_nr_conv; /* number of conventional zones */
+-	unsigned int zone_max_open; /* max number of open zones */
+-	unsigned int zone_max_active; /* max number of active zones */
+-	unsigned int submit_queues; /* number of submission queues */
+-	unsigned int home_node; /* home node for the device */
+-	unsigned int queue_mode; /* block interface */
+-	unsigned int blocksize; /* block size */
+-	unsigned int irqmode; /* IRQ completion handler */
+-	unsigned int hw_queue_depth; /* queue depth */
+-	unsigned int index; /* index of the disk, only valid with a disk */
+-	unsigned int mbps; /* Bandwidth throttle cap (in MB/s) */
+-	bool blocking; /* blocking blk-mq device */
+-	bool use_per_node_hctx; /* use per-node allocation for hardware context */
+-	bool power; /* power on/off the device */
+-	bool memory_backed; /* if data is stored in memory */
+-	bool discard; /* if support discard */
+-	bool zoned; /* if device is zoned */
+-};
+-
+-struct nullb {
+-	struct nullb_device *dev;
+-	struct list_head list;
+-	unsigned int index;
+-	struct request_queue *q;
+-	struct gendisk *disk;
+-	struct blk_mq_tag_set *tag_set;
+-	struct blk_mq_tag_set __tag_set;
+-	unsigned int queue_depth;
+-	atomic_long_t cur_bytes;
+-	struct hrtimer bw_timer;
+-	unsigned long cache_flush_pos;
+-	spinlock_t lock;
+-
+-	struct nullb_queue *queues;
+-	unsigned int nr_queues;
+-	char disk_name[DISK_NAME_LEN];
+-};
+-
+-blk_status_t null_process_cmd(struct nullb_cmd *cmd,
+-			      enum req_opf op, sector_t sector,
+-			      unsigned int nr_sectors);
+-
+-#ifdef CONFIG_BLK_DEV_ZONED
+-int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q);
+-int null_register_zoned_dev(struct nullb *nullb);
+-void null_free_zoned_dev(struct nullb_device *dev);
+-int null_report_zones(struct gendisk *disk, sector_t sector,
+-		      unsigned int nr_zones, report_zones_cb cb, void *data);
+-blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd,
+-				    enum req_opf op, sector_t sector,
+-				    sector_t nr_sectors);
+-size_t null_zone_valid_read_len(struct nullb *nullb,
+-				sector_t sector, unsigned int len);
+-#else
+-static inline int null_init_zoned_dev(struct nullb_device *dev,
+-				      struct request_queue *q)
+-{
+-	pr_err("CONFIG_BLK_DEV_ZONED not enabled\n");
+-	return -EINVAL;
+-}
+-static inline int null_register_zoned_dev(struct nullb *nullb)
+-{
+-	return -ENODEV;
+-}
+-static inline void null_free_zoned_dev(struct nullb_device *dev) {}
+-static inline blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd,
+-			enum req_opf op, sector_t sector, sector_t nr_sectors)
+-{
+-	return BLK_STS_NOTSUPP;
+-}
+-static inline size_t null_zone_valid_read_len(struct nullb *nullb,
+-					      sector_t sector,
+-					      unsigned int len)
+-{
+-	return len;
+-}
+-#define null_report_zones	NULL
+-#endif /* CONFIG_BLK_DEV_ZONED */
+-#endif /* __NULL_BLK_H */
+diff --git a/drivers/block/null_blk/Kconfig b/drivers/block/null_blk/Kconfig
+new file mode 100644
+index 0000000000000..6bf1f8ca20a24
+--- /dev/null
++++ b/drivers/block/null_blk/Kconfig
+@@ -0,0 +1,12 @@
++# SPDX-License-Identifier: GPL-2.0
++#
++# Null block device driver configuration
++#
++
++config BLK_DEV_NULL_BLK
++	tristate "Null test block driver"
++	select CONFIGFS_FS
++
++config BLK_DEV_NULL_BLK_FAULT_INJECTION
++	bool "Support fault injection for Null test block driver"
++	depends on BLK_DEV_NULL_BLK && FAULT_INJECTION
+diff --git a/drivers/block/null_blk/Makefile b/drivers/block/null_blk/Makefile
+new file mode 100644
+index 0000000000000..84c36e512ab89
+--- /dev/null
++++ b/drivers/block/null_blk/Makefile
+@@ -0,0 +1,11 @@
++# SPDX-License-Identifier: GPL-2.0
++
++# needed for trace events
++ccflags-y			+= -I$(src)
++
++obj-$(CONFIG_BLK_DEV_NULL_BLK)	+= null_blk.o
++null_blk-objs			:= main.o
++ifeq ($(CONFIG_BLK_DEV_ZONED), y)
++null_blk-$(CONFIG_TRACING) 	+= trace.o
++endif
++null_blk-$(CONFIG_BLK_DEV_ZONED) += zoned.o
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+new file mode 100644
+index 0000000000000..25db095e943b7
+--- /dev/null
++++ b/drivers/block/null_blk/main.c
+@@ -0,0 +1,2036 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * Add configfs and memory store: Kyungchan Koh <kkc6196@fb.com> and
++ * Shaohua Li <shli@fb.com>
++ */
++#include <linux/module.h>
++
++#include <linux/moduleparam.h>
++#include <linux/sched.h>
++#include <linux/fs.h>
++#include <linux/init.h>
++#include "null_blk.h"
++
++#define PAGE_SECTORS_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)
++#define PAGE_SECTORS		(1 << PAGE_SECTORS_SHIFT)
++#define SECTOR_MASK		(PAGE_SECTORS - 1)
++
++#define FREE_BATCH		16
++
++#define TICKS_PER_SEC		50ULL
++#define TIMER_INTERVAL		(NSEC_PER_SEC / TICKS_PER_SEC)
++
++#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
++static DECLARE_FAULT_ATTR(null_timeout_attr);
++static DECLARE_FAULT_ATTR(null_requeue_attr);
++static DECLARE_FAULT_ATTR(null_init_hctx_attr);
++#endif
++
++static inline u64 mb_per_tick(int mbps)
++{
++	return (1 << 20) / TICKS_PER_SEC * ((u64) mbps);
++}
++
++/*
++ * Status flags for nullb_device.
++ *
++ * CONFIGURED:	Device has been configured and turned on. Cannot reconfigure.
++ * UP:		Device is currently on and visible in userspace.
++ * THROTTLED:	Device is being throttled.
++ * CACHE:	Device is using a write-back cache.
++ */
++enum nullb_device_flags {
++	NULLB_DEV_FL_CONFIGURED	= 0,
++	NULLB_DEV_FL_UP		= 1,
++	NULLB_DEV_FL_THROTTLED	= 2,
++	NULLB_DEV_FL_CACHE	= 3,
++};
++
++#define MAP_SZ		((PAGE_SIZE >> SECTOR_SHIFT) + 2)
++/*
++ * nullb_page is a page in memory for nullb devices.
++ *
++ * @page:	The page holding the data.
++ * @bitmap:	The bitmap represents which sector in the page has data.
++ *		Each bit represents one block size. For example, sector 8
++ *		will use the 7th bit
++ * The highest 2 bits of bitmap are for special purpose. LOCK means the cache
++ * page is being flushing to storage. FREE means the cache page is freed and
++ * should be skipped from flushing to storage. Please see
++ * null_make_cache_space
++ */
++struct nullb_page {
++	struct page *page;
++	DECLARE_BITMAP(bitmap, MAP_SZ);
++};
++#define NULLB_PAGE_LOCK (MAP_SZ - 1)
++#define NULLB_PAGE_FREE (MAP_SZ - 2)
++
++static LIST_HEAD(nullb_list);
++static struct mutex lock;
++static int null_major;
++static DEFINE_IDA(nullb_indexes);
++static struct blk_mq_tag_set tag_set;
++
++enum {
++	NULL_IRQ_NONE		= 0,
++	NULL_IRQ_SOFTIRQ	= 1,
++	NULL_IRQ_TIMER		= 2,
++};
++
++enum {
++	NULL_Q_BIO		= 0,
++	NULL_Q_RQ		= 1,
++	NULL_Q_MQ		= 2,
++};
++
++static int g_no_sched;
++module_param_named(no_sched, g_no_sched, int, 0444);
++MODULE_PARM_DESC(no_sched, "No io scheduler");
++
++static int g_submit_queues = 1;
++module_param_named(submit_queues, g_submit_queues, int, 0444);
++MODULE_PARM_DESC(submit_queues, "Number of submission queues");
++
++static int g_home_node = NUMA_NO_NODE;
++module_param_named(home_node, g_home_node, int, 0444);
++MODULE_PARM_DESC(home_node, "Home node for the device");
++
++#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
++/*
++ * For more details about fault injection, please refer to
++ * Documentation/fault-injection/fault-injection.rst.
++ */
++static char g_timeout_str[80];
++module_param_string(timeout, g_timeout_str, sizeof(g_timeout_str), 0444);
++MODULE_PARM_DESC(timeout, "Fault injection. timeout=<interval>,<probability>,<space>,<times>");
++
++static char g_requeue_str[80];
++module_param_string(requeue, g_requeue_str, sizeof(g_requeue_str), 0444);
++MODULE_PARM_DESC(requeue, "Fault injection. requeue=<interval>,<probability>,<space>,<times>");
++
++static char g_init_hctx_str[80];
++module_param_string(init_hctx, g_init_hctx_str, sizeof(g_init_hctx_str), 0444);
++MODULE_PARM_DESC(init_hctx, "Fault injection to fail hctx init. init_hctx=<interval>,<probability>,<space>,<times>");
++#endif
++
++static int g_queue_mode = NULL_Q_MQ;
++
++static int null_param_store_val(const char *str, int *val, int min, int max)
++{
++	int ret, new_val;
++
++	ret = kstrtoint(str, 10, &new_val);
++	if (ret)
++		return -EINVAL;
++
++	if (new_val < min || new_val > max)
++		return -EINVAL;
++
++	*val = new_val;
++	return 0;
++}
++
++static int null_set_queue_mode(const char *str, const struct kernel_param *kp)
++{
++	return null_param_store_val(str, &g_queue_mode, NULL_Q_BIO, NULL_Q_MQ);
++}
++
++static const struct kernel_param_ops null_queue_mode_param_ops = {
++	.set	= null_set_queue_mode,
++	.get	= param_get_int,
++};
++
++device_param_cb(queue_mode, &null_queue_mode_param_ops, &g_queue_mode, 0444);
++MODULE_PARM_DESC(queue_mode, "Block interface to use (0=bio,1=rq,2=multiqueue)");
++
++static int g_gb = 250;
++module_param_named(gb, g_gb, int, 0444);
++MODULE_PARM_DESC(gb, "Size in GB");
++
++static int g_bs = 512;
++module_param_named(bs, g_bs, int, 0444);
++MODULE_PARM_DESC(bs, "Block size (in bytes)");
++
++static unsigned int nr_devices = 1;
++module_param(nr_devices, uint, 0444);
++MODULE_PARM_DESC(nr_devices, "Number of devices to register");
++
++static bool g_blocking;
++module_param_named(blocking, g_blocking, bool, 0444);
++MODULE_PARM_DESC(blocking, "Register as a blocking blk-mq driver device");
++
++static bool shared_tags;
++module_param(shared_tags, bool, 0444);
++MODULE_PARM_DESC(shared_tags, "Share tag set between devices for blk-mq");
++
++static bool g_shared_tag_bitmap;
++module_param_named(shared_tag_bitmap, g_shared_tag_bitmap, bool, 0444);
++MODULE_PARM_DESC(shared_tag_bitmap, "Use shared tag bitmap for all submission queues for blk-mq");
++
++static int g_irqmode = NULL_IRQ_SOFTIRQ;
++
++static int null_set_irqmode(const char *str, const struct kernel_param *kp)
++{
++	return null_param_store_val(str, &g_irqmode, NULL_IRQ_NONE,
++					NULL_IRQ_TIMER);
++}
++
++static const struct kernel_param_ops null_irqmode_param_ops = {
++	.set	= null_set_irqmode,
++	.get	= param_get_int,
++};
++
++device_param_cb(irqmode, &null_irqmode_param_ops, &g_irqmode, 0444);
++MODULE_PARM_DESC(irqmode, "IRQ completion handler. 0-none, 1-softirq, 2-timer");
++
++static unsigned long g_completion_nsec = 10000;
++module_param_named(completion_nsec, g_completion_nsec, ulong, 0444);
++MODULE_PARM_DESC(completion_nsec, "Time in ns to complete a request in hardware. Default: 10,000ns");
++
++static int g_hw_queue_depth = 64;
++module_param_named(hw_queue_depth, g_hw_queue_depth, int, 0444);
++MODULE_PARM_DESC(hw_queue_depth, "Queue depth for each hardware queue. Default: 64");
++
++static bool g_use_per_node_hctx;
++module_param_named(use_per_node_hctx, g_use_per_node_hctx, bool, 0444);
++MODULE_PARM_DESC(use_per_node_hctx, "Use per-node allocation for hardware context queues. Default: false");
++
++static bool g_zoned;
++module_param_named(zoned, g_zoned, bool, S_IRUGO);
++MODULE_PARM_DESC(zoned, "Make device as a host-managed zoned block device. Default: false");
++
++static unsigned long g_zone_size = 256;
++module_param_named(zone_size, g_zone_size, ulong, S_IRUGO);
++MODULE_PARM_DESC(zone_size, "Zone size in MB when block device is zoned. Must be power-of-two: Default: 256");
++
++static unsigned long g_zone_capacity;
++module_param_named(zone_capacity, g_zone_capacity, ulong, 0444);
++MODULE_PARM_DESC(zone_capacity, "Zone capacity in MB when block device is zoned. Can be less than or equal to zone size. Default: Zone size");
++
++static unsigned int g_zone_nr_conv;
++module_param_named(zone_nr_conv, g_zone_nr_conv, uint, 0444);
++MODULE_PARM_DESC(zone_nr_conv, "Number of conventional zones when block device is zoned. Default: 0");
++
++static unsigned int g_zone_max_open;
++module_param_named(zone_max_open, g_zone_max_open, uint, 0444);
++MODULE_PARM_DESC(zone_max_open, "Maximum number of open zones when block device is zoned. Default: 0 (no limit)");
++
++static unsigned int g_zone_max_active;
++module_param_named(zone_max_active, g_zone_max_active, uint, 0444);
++MODULE_PARM_DESC(zone_max_active, "Maximum number of active zones when block device is zoned. Default: 0 (no limit)");
++
++static struct nullb_device *null_alloc_dev(void);
++static void null_free_dev(struct nullb_device *dev);
++static void null_del_dev(struct nullb *nullb);
++static int null_add_dev(struct nullb_device *dev);
++static void null_free_device_storage(struct nullb_device *dev, bool is_cache);
++
++static inline struct nullb_device *to_nullb_device(struct config_item *item)
++{
++	return item ? container_of(item, struct nullb_device, item) : NULL;
++}
++
++static inline ssize_t nullb_device_uint_attr_show(unsigned int val, char *page)
++{
++	return snprintf(page, PAGE_SIZE, "%u\n", val);
++}
++
++static inline ssize_t nullb_device_ulong_attr_show(unsigned long val,
++	char *page)
++{
++	return snprintf(page, PAGE_SIZE, "%lu\n", val);
++}
++
++static inline ssize_t nullb_device_bool_attr_show(bool val, char *page)
++{
++	return snprintf(page, PAGE_SIZE, "%u\n", val);
++}
++
++static ssize_t nullb_device_uint_attr_store(unsigned int *val,
++	const char *page, size_t count)
++{
++	unsigned int tmp;
++	int result;
++
++	result = kstrtouint(page, 0, &tmp);
++	if (result < 0)
++		return result;
++
++	*val = tmp;
++	return count;
++}
++
++static ssize_t nullb_device_ulong_attr_store(unsigned long *val,
++	const char *page, size_t count)
++{
++	int result;
++	unsigned long tmp;
++
++	result = kstrtoul(page, 0, &tmp);
++	if (result < 0)
++		return result;
++
++	*val = tmp;
++	return count;
++}
++
++static ssize_t nullb_device_bool_attr_store(bool *val, const char *page,
++	size_t count)
++{
++	bool tmp;
++	int result;
++
++	result = kstrtobool(page,  &tmp);
++	if (result < 0)
++		return result;
++
++	*val = tmp;
++	return count;
++}
++
++/* The following macro should only be used with TYPE = {uint, ulong, bool}. */
++#define NULLB_DEVICE_ATTR(NAME, TYPE, APPLY)				\
++static ssize_t								\
++nullb_device_##NAME##_show(struct config_item *item, char *page)	\
++{									\
++	return nullb_device_##TYPE##_attr_show(				\
++				to_nullb_device(item)->NAME, page);	\
++}									\
++static ssize_t								\
++nullb_device_##NAME##_store(struct config_item *item, const char *page,	\
++			    size_t count)				\
++{									\
++	int (*apply_fn)(struct nullb_device *dev, TYPE new_value) = APPLY;\
++	struct nullb_device *dev = to_nullb_device(item);		\
++	TYPE new_value = 0;						\
++	int ret;							\
++									\
++	ret = nullb_device_##TYPE##_attr_store(&new_value, page, count);\
++	if (ret < 0)							\
++		return ret;						\
++	if (apply_fn)							\
++		ret = apply_fn(dev, new_value);				\
++	else if (test_bit(NULLB_DEV_FL_CONFIGURED, &dev->flags)) 	\
++		ret = -EBUSY;						\
++	if (ret < 0)							\
++		return ret;						\
++	dev->NAME = new_value;						\
++	return count;							\
++}									\
++CONFIGFS_ATTR(nullb_device_, NAME);
++
++static int nullb_apply_submit_queues(struct nullb_device *dev,
++				     unsigned int submit_queues)
++{
++	struct nullb *nullb = dev->nullb;
++	struct blk_mq_tag_set *set;
++
++	if (!nullb)
++		return 0;
++
++	/*
++	 * Make sure that null_init_hctx() does not access nullb->queues[] past
++	 * the end of that array.
++	 */
++	if (submit_queues > nr_cpu_ids)
++		return -EINVAL;
++	set = nullb->tag_set;
++	blk_mq_update_nr_hw_queues(set, submit_queues);
++	return set->nr_hw_queues == submit_queues ? 0 : -ENOMEM;
++}
++
++NULLB_DEVICE_ATTR(size, ulong, NULL);
++NULLB_DEVICE_ATTR(completion_nsec, ulong, NULL);
++NULLB_DEVICE_ATTR(submit_queues, uint, nullb_apply_submit_queues);
++NULLB_DEVICE_ATTR(home_node, uint, NULL);
++NULLB_DEVICE_ATTR(queue_mode, uint, NULL);
++NULLB_DEVICE_ATTR(blocksize, uint, NULL);
++NULLB_DEVICE_ATTR(irqmode, uint, NULL);
++NULLB_DEVICE_ATTR(hw_queue_depth, uint, NULL);
++NULLB_DEVICE_ATTR(index, uint, NULL);
++NULLB_DEVICE_ATTR(blocking, bool, NULL);
++NULLB_DEVICE_ATTR(use_per_node_hctx, bool, NULL);
++NULLB_DEVICE_ATTR(memory_backed, bool, NULL);
++NULLB_DEVICE_ATTR(discard, bool, NULL);
++NULLB_DEVICE_ATTR(mbps, uint, NULL);
++NULLB_DEVICE_ATTR(cache_size, ulong, NULL);
++NULLB_DEVICE_ATTR(zoned, bool, NULL);
++NULLB_DEVICE_ATTR(zone_size, ulong, NULL);
++NULLB_DEVICE_ATTR(zone_capacity, ulong, NULL);
++NULLB_DEVICE_ATTR(zone_nr_conv, uint, NULL);
++NULLB_DEVICE_ATTR(zone_max_open, uint, NULL);
++NULLB_DEVICE_ATTR(zone_max_active, uint, NULL);
++
++static ssize_t nullb_device_power_show(struct config_item *item, char *page)
++{
++	return nullb_device_bool_attr_show(to_nullb_device(item)->power, page);
++}
++
++static ssize_t nullb_device_power_store(struct config_item *item,
++				     const char *page, size_t count)
++{
++	struct nullb_device *dev = to_nullb_device(item);
++	bool newp = false;
++	ssize_t ret;
++
++	ret = nullb_device_bool_attr_store(&newp, page, count);
++	if (ret < 0)
++		return ret;
++
++	if (!dev->power && newp) {
++		if (test_and_set_bit(NULLB_DEV_FL_UP, &dev->flags))
++			return count;
++		if (null_add_dev(dev)) {
++			clear_bit(NULLB_DEV_FL_UP, &dev->flags);
++			return -ENOMEM;
++		}
++
++		set_bit(NULLB_DEV_FL_CONFIGURED, &dev->flags);
++		dev->power = newp;
++	} else if (dev->power && !newp) {
++		if (test_and_clear_bit(NULLB_DEV_FL_UP, &dev->flags)) {
++			mutex_lock(&lock);
++			dev->power = newp;
++			null_del_dev(dev->nullb);
++			mutex_unlock(&lock);
++		}
++		clear_bit(NULLB_DEV_FL_CONFIGURED, &dev->flags);
++	}
++
++	return count;
++}
++
++CONFIGFS_ATTR(nullb_device_, power);
++
++static ssize_t nullb_device_badblocks_show(struct config_item *item, char *page)
++{
++	struct nullb_device *t_dev = to_nullb_device(item);
++
++	return badblocks_show(&t_dev->badblocks, page, 0);
++}
++
++static ssize_t nullb_device_badblocks_store(struct config_item *item,
++				     const char *page, size_t count)
++{
++	struct nullb_device *t_dev = to_nullb_device(item);
++	char *orig, *buf, *tmp;
++	u64 start, end;
++	int ret;
++
++	orig = kstrndup(page, count, GFP_KERNEL);
++	if (!orig)
++		return -ENOMEM;
++
++	buf = strstrip(orig);
++
++	ret = -EINVAL;
++	if (buf[0] != '+' && buf[0] != '-')
++		goto out;
++	tmp = strchr(&buf[1], '-');
++	if (!tmp)
++		goto out;
++	*tmp = '\0';
++	ret = kstrtoull(buf + 1, 0, &start);
++	if (ret)
++		goto out;
++	ret = kstrtoull(tmp + 1, 0, &end);
++	if (ret)
++		goto out;
++	ret = -EINVAL;
++	if (start > end)
++		goto out;
++	/* enable badblocks */
++	cmpxchg(&t_dev->badblocks.shift, -1, 0);
++	if (buf[0] == '+')
++		ret = badblocks_set(&t_dev->badblocks, start,
++			end - start + 1, 1);
++	else
++		ret = badblocks_clear(&t_dev->badblocks, start,
++			end - start + 1);
++	if (ret == 0)
++		ret = count;
++out:
++	kfree(orig);
++	return ret;
++}
++CONFIGFS_ATTR(nullb_device_, badblocks);
++
++static struct configfs_attribute *nullb_device_attrs[] = {
++	&nullb_device_attr_size,
++	&nullb_device_attr_completion_nsec,
++	&nullb_device_attr_submit_queues,
++	&nullb_device_attr_home_node,
++	&nullb_device_attr_queue_mode,
++	&nullb_device_attr_blocksize,
++	&nullb_device_attr_irqmode,
++	&nullb_device_attr_hw_queue_depth,
++	&nullb_device_attr_index,
++	&nullb_device_attr_blocking,
++	&nullb_device_attr_use_per_node_hctx,
++	&nullb_device_attr_power,
++	&nullb_device_attr_memory_backed,
++	&nullb_device_attr_discard,
++	&nullb_device_attr_mbps,
++	&nullb_device_attr_cache_size,
++	&nullb_device_attr_badblocks,
++	&nullb_device_attr_zoned,
++	&nullb_device_attr_zone_size,
++	&nullb_device_attr_zone_capacity,
++	&nullb_device_attr_zone_nr_conv,
++	&nullb_device_attr_zone_max_open,
++	&nullb_device_attr_zone_max_active,
++	NULL,
++};
++
++static void nullb_device_release(struct config_item *item)
++{
++	struct nullb_device *dev = to_nullb_device(item);
++
++	null_free_device_storage(dev, false);
++	null_free_dev(dev);
++}
++
++static struct configfs_item_operations nullb_device_ops = {
++	.release	= nullb_device_release,
++};
++
++static const struct config_item_type nullb_device_type = {
++	.ct_item_ops	= &nullb_device_ops,
++	.ct_attrs	= nullb_device_attrs,
++	.ct_owner	= THIS_MODULE,
++};
++
++static struct
++config_item *nullb_group_make_item(struct config_group *group, const char *name)
++{
++	struct nullb_device *dev;
++
++	dev = null_alloc_dev();
++	if (!dev)
++		return ERR_PTR(-ENOMEM);
++
++	config_item_init_type_name(&dev->item, name, &nullb_device_type);
++
++	return &dev->item;
++}
++
++static void
++nullb_group_drop_item(struct config_group *group, struct config_item *item)
++{
++	struct nullb_device *dev = to_nullb_device(item);
++
++	if (test_and_clear_bit(NULLB_DEV_FL_UP, &dev->flags)) {
++		mutex_lock(&lock);
++		dev->power = false;
++		null_del_dev(dev->nullb);
++		mutex_unlock(&lock);
++	}
++
++	config_item_put(item);
++}
++
++static ssize_t memb_group_features_show(struct config_item *item, char *page)
++{
++	return snprintf(page, PAGE_SIZE,
++			"memory_backed,discard,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active\n");
++}
++
++CONFIGFS_ATTR_RO(memb_group_, features);
++
++static struct configfs_attribute *nullb_group_attrs[] = {
++	&memb_group_attr_features,
++	NULL,
++};
++
++static struct configfs_group_operations nullb_group_ops = {
++	.make_item	= nullb_group_make_item,
++	.drop_item	= nullb_group_drop_item,
++};
++
++static const struct config_item_type nullb_group_type = {
++	.ct_group_ops	= &nullb_group_ops,
++	.ct_attrs	= nullb_group_attrs,
++	.ct_owner	= THIS_MODULE,
++};
++
++static struct configfs_subsystem nullb_subsys = {
++	.su_group = {
++		.cg_item = {
++			.ci_namebuf = "nullb",
++			.ci_type = &nullb_group_type,
++		},
++	},
++};
++
++static inline int null_cache_active(struct nullb *nullb)
++{
++	return test_bit(NULLB_DEV_FL_CACHE, &nullb->dev->flags);
++}
++
++static struct nullb_device *null_alloc_dev(void)
++{
++	struct nullb_device *dev;
++
++	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
++	if (!dev)
++		return NULL;
++	INIT_RADIX_TREE(&dev->data, GFP_ATOMIC);
++	INIT_RADIX_TREE(&dev->cache, GFP_ATOMIC);
++	if (badblocks_init(&dev->badblocks, 0)) {
++		kfree(dev);
++		return NULL;
++	}
++
++	dev->size = g_gb * 1024;
++	dev->completion_nsec = g_completion_nsec;
++	dev->submit_queues = g_submit_queues;
++	dev->home_node = g_home_node;
++	dev->queue_mode = g_queue_mode;
++	dev->blocksize = g_bs;
++	dev->irqmode = g_irqmode;
++	dev->hw_queue_depth = g_hw_queue_depth;
++	dev->blocking = g_blocking;
++	dev->use_per_node_hctx = g_use_per_node_hctx;
++	dev->zoned = g_zoned;
++	dev->zone_size = g_zone_size;
++	dev->zone_capacity = g_zone_capacity;
++	dev->zone_nr_conv = g_zone_nr_conv;
++	dev->zone_max_open = g_zone_max_open;
++	dev->zone_max_active = g_zone_max_active;
++	return dev;
++}
++
++static void null_free_dev(struct nullb_device *dev)
++{
++	if (!dev)
++		return;
++
++	null_free_zoned_dev(dev);
++	badblocks_exit(&dev->badblocks);
++	kfree(dev);
++}
++
++static void put_tag(struct nullb_queue *nq, unsigned int tag)
++{
++	clear_bit_unlock(tag, nq->tag_map);
++
++	if (waitqueue_active(&nq->wait))
++		wake_up(&nq->wait);
++}
++
++static unsigned int get_tag(struct nullb_queue *nq)
++{
++	unsigned int tag;
++
++	do {
++		tag = find_first_zero_bit(nq->tag_map, nq->queue_depth);
++		if (tag >= nq->queue_depth)
++			return -1U;
++	} while (test_and_set_bit_lock(tag, nq->tag_map));
++
++	return tag;
++}
++
++static void free_cmd(struct nullb_cmd *cmd)
++{
++	put_tag(cmd->nq, cmd->tag);
++}
++
++static enum hrtimer_restart null_cmd_timer_expired(struct hrtimer *timer);
++
++static struct nullb_cmd *__alloc_cmd(struct nullb_queue *nq)
++{
++	struct nullb_cmd *cmd;
++	unsigned int tag;
++
++	tag = get_tag(nq);
++	if (tag != -1U) {
++		cmd = &nq->cmds[tag];
++		cmd->tag = tag;
++		cmd->error = BLK_STS_OK;
++		cmd->nq = nq;
++		if (nq->dev->irqmode == NULL_IRQ_TIMER) {
++			hrtimer_init(&cmd->timer, CLOCK_MONOTONIC,
++				     HRTIMER_MODE_REL);
++			cmd->timer.function = null_cmd_timer_expired;
++		}
++		return cmd;
++	}
++
++	return NULL;
++}
++
++static struct nullb_cmd *alloc_cmd(struct nullb_queue *nq, int can_wait)
++{
++	struct nullb_cmd *cmd;
++	DEFINE_WAIT(wait);
++
++	cmd = __alloc_cmd(nq);
++	if (cmd || !can_wait)
++		return cmd;
++
++	do {
++		prepare_to_wait(&nq->wait, &wait, TASK_UNINTERRUPTIBLE);
++		cmd = __alloc_cmd(nq);
++		if (cmd)
++			break;
++
++		io_schedule();
++	} while (1);
++
++	finish_wait(&nq->wait, &wait);
++	return cmd;
++}
++
++static void end_cmd(struct nullb_cmd *cmd)
++{
++	int queue_mode = cmd->nq->dev->queue_mode;
++
++	switch (queue_mode)  {
++	case NULL_Q_MQ:
++		blk_mq_end_request(cmd->rq, cmd->error);
++		return;
++	case NULL_Q_BIO:
++		cmd->bio->bi_status = cmd->error;
++		bio_endio(cmd->bio);
++		break;
++	}
++
++	free_cmd(cmd);
++}
++
++static enum hrtimer_restart null_cmd_timer_expired(struct hrtimer *timer)
++{
++	end_cmd(container_of(timer, struct nullb_cmd, timer));
++
++	return HRTIMER_NORESTART;
++}
++
++static void null_cmd_end_timer(struct nullb_cmd *cmd)
++{
++	ktime_t kt = cmd->nq->dev->completion_nsec;
++
++	hrtimer_start(&cmd->timer, kt, HRTIMER_MODE_REL);
++}
++
++static void null_complete_rq(struct request *rq)
++{
++	end_cmd(blk_mq_rq_to_pdu(rq));
++}
++
++static struct nullb_page *null_alloc_page(gfp_t gfp_flags)
++{
++	struct nullb_page *t_page;
++
++	t_page = kmalloc(sizeof(struct nullb_page), gfp_flags);
++	if (!t_page)
++		goto out;
++
++	t_page->page = alloc_pages(gfp_flags, 0);
++	if (!t_page->page)
++		goto out_freepage;
++
++	memset(t_page->bitmap, 0, sizeof(t_page->bitmap));
++	return t_page;
++out_freepage:
++	kfree(t_page);
++out:
++	return NULL;
++}
++
++static void null_free_page(struct nullb_page *t_page)
++{
++	__set_bit(NULLB_PAGE_FREE, t_page->bitmap);
++	if (test_bit(NULLB_PAGE_LOCK, t_page->bitmap))
++		return;
++	__free_page(t_page->page);
++	kfree(t_page);
++}
++
++static bool null_page_empty(struct nullb_page *page)
++{
++	int size = MAP_SZ - 2;
++
++	return find_first_bit(page->bitmap, size) == size;
++}
++
++static void null_free_sector(struct nullb *nullb, sector_t sector,
++	bool is_cache)
++{
++	unsigned int sector_bit;
++	u64 idx;
++	struct nullb_page *t_page, *ret;
++	struct radix_tree_root *root;
++
++	root = is_cache ? &nullb->dev->cache : &nullb->dev->data;
++	idx = sector >> PAGE_SECTORS_SHIFT;
++	sector_bit = (sector & SECTOR_MASK);
++
++	t_page = radix_tree_lookup(root, idx);
++	if (t_page) {
++		__clear_bit(sector_bit, t_page->bitmap);
++
++		if (null_page_empty(t_page)) {
++			ret = radix_tree_delete_item(root, idx, t_page);
++			WARN_ON(ret != t_page);
++			null_free_page(ret);
++			if (is_cache)
++				nullb->dev->curr_cache -= PAGE_SIZE;
++		}
++	}
++}
++
++static struct nullb_page *null_radix_tree_insert(struct nullb *nullb, u64 idx,
++	struct nullb_page *t_page, bool is_cache)
++{
++	struct radix_tree_root *root;
++
++	root = is_cache ? &nullb->dev->cache : &nullb->dev->data;
++
++	if (radix_tree_insert(root, idx, t_page)) {
++		null_free_page(t_page);
++		t_page = radix_tree_lookup(root, idx);
++		WARN_ON(!t_page || t_page->page->index != idx);
++	} else if (is_cache)
++		nullb->dev->curr_cache += PAGE_SIZE;
++
++	return t_page;
++}
++
++static void null_free_device_storage(struct nullb_device *dev, bool is_cache)
++{
++	unsigned long pos = 0;
++	int nr_pages;
++	struct nullb_page *ret, *t_pages[FREE_BATCH];
++	struct radix_tree_root *root;
++
++	root = is_cache ? &dev->cache : &dev->data;
++
++	do {
++		int i;
++
++		nr_pages = radix_tree_gang_lookup(root,
++				(void **)t_pages, pos, FREE_BATCH);
++
++		for (i = 0; i < nr_pages; i++) {
++			pos = t_pages[i]->page->index;
++			ret = radix_tree_delete_item(root, pos, t_pages[i]);
++			WARN_ON(ret != t_pages[i]);
++			null_free_page(ret);
++		}
++
++		pos++;
++	} while (nr_pages == FREE_BATCH);
++
++	if (is_cache)
++		dev->curr_cache = 0;
++}
++
++static struct nullb_page *__null_lookup_page(struct nullb *nullb,
++	sector_t sector, bool for_write, bool is_cache)
++{
++	unsigned int sector_bit;
++	u64 idx;
++	struct nullb_page *t_page;
++	struct radix_tree_root *root;
++
++	idx = sector >> PAGE_SECTORS_SHIFT;
++	sector_bit = (sector & SECTOR_MASK);
++
++	root = is_cache ? &nullb->dev->cache : &nullb->dev->data;
++	t_page = radix_tree_lookup(root, idx);
++	WARN_ON(t_page && t_page->page->index != idx);
++
++	if (t_page && (for_write || test_bit(sector_bit, t_page->bitmap)))
++		return t_page;
++
++	return NULL;
++}
++
++static struct nullb_page *null_lookup_page(struct nullb *nullb,
++	sector_t sector, bool for_write, bool ignore_cache)
++{
++	struct nullb_page *page = NULL;
++
++	if (!ignore_cache)
++		page = __null_lookup_page(nullb, sector, for_write, true);
++	if (page)
++		return page;
++	return __null_lookup_page(nullb, sector, for_write, false);
++}
++
++static struct nullb_page *null_insert_page(struct nullb *nullb,
++					   sector_t sector, bool ignore_cache)
++	__releases(&nullb->lock)
++	__acquires(&nullb->lock)
++{
++	u64 idx;
++	struct nullb_page *t_page;
++
++	t_page = null_lookup_page(nullb, sector, true, ignore_cache);
++	if (t_page)
++		return t_page;
++
++	spin_unlock_irq(&nullb->lock);
++
++	t_page = null_alloc_page(GFP_NOIO);
++	if (!t_page)
++		goto out_lock;
++
++	if (radix_tree_preload(GFP_NOIO))
++		goto out_freepage;
++
++	spin_lock_irq(&nullb->lock);
++	idx = sector >> PAGE_SECTORS_SHIFT;
++	t_page->page->index = idx;
++	t_page = null_radix_tree_insert(nullb, idx, t_page, !ignore_cache);
++	radix_tree_preload_end();
++
++	return t_page;
++out_freepage:
++	null_free_page(t_page);
++out_lock:
++	spin_lock_irq(&nullb->lock);
++	return null_lookup_page(nullb, sector, true, ignore_cache);
++}
++
++static int null_flush_cache_page(struct nullb *nullb, struct nullb_page *c_page)
++{
++	int i;
++	unsigned int offset;
++	u64 idx;
++	struct nullb_page *t_page, *ret;
++	void *dst, *src;
++
++	idx = c_page->page->index;
++
++	t_page = null_insert_page(nullb, idx << PAGE_SECTORS_SHIFT, true);
++
++	__clear_bit(NULLB_PAGE_LOCK, c_page->bitmap);
++	if (test_bit(NULLB_PAGE_FREE, c_page->bitmap)) {
++		null_free_page(c_page);
++		if (t_page && null_page_empty(t_page)) {
++			ret = radix_tree_delete_item(&nullb->dev->data,
++				idx, t_page);
++			null_free_page(t_page);
++		}
++		return 0;
++	}
++
++	if (!t_page)
++		return -ENOMEM;
++
++	src = kmap_atomic(c_page->page);
++	dst = kmap_atomic(t_page->page);
++
++	for (i = 0; i < PAGE_SECTORS;
++			i += (nullb->dev->blocksize >> SECTOR_SHIFT)) {
++		if (test_bit(i, c_page->bitmap)) {
++			offset = (i << SECTOR_SHIFT);
++			memcpy(dst + offset, src + offset,
++				nullb->dev->blocksize);
++			__set_bit(i, t_page->bitmap);
++		}
++	}
++
++	kunmap_atomic(dst);
++	kunmap_atomic(src);
++
++	ret = radix_tree_delete_item(&nullb->dev->cache, idx, c_page);
++	null_free_page(ret);
++	nullb->dev->curr_cache -= PAGE_SIZE;
++
++	return 0;
++}
++
++static int null_make_cache_space(struct nullb *nullb, unsigned long n)
++{
++	int i, err, nr_pages;
++	struct nullb_page *c_pages[FREE_BATCH];
++	unsigned long flushed = 0, one_round;
++
++again:
++	if ((nullb->dev->cache_size * 1024 * 1024) >
++	     nullb->dev->curr_cache + n || nullb->dev->curr_cache == 0)
++		return 0;
++
++	nr_pages = radix_tree_gang_lookup(&nullb->dev->cache,
++			(void **)c_pages, nullb->cache_flush_pos, FREE_BATCH);
++	/*
++	 * nullb_flush_cache_page could unlock before using the c_pages. To
++	 * avoid race, we don't allow page free
++	 */
++	for (i = 0; i < nr_pages; i++) {
++		nullb->cache_flush_pos = c_pages[i]->page->index;
++		/*
++		 * We found the page which is being flushed to disk by other
++		 * threads
++		 */
++		if (test_bit(NULLB_PAGE_LOCK, c_pages[i]->bitmap))
++			c_pages[i] = NULL;
++		else
++			__set_bit(NULLB_PAGE_LOCK, c_pages[i]->bitmap);
++	}
++
++	one_round = 0;
++	for (i = 0; i < nr_pages; i++) {
++		if (c_pages[i] == NULL)
++			continue;
++		err = null_flush_cache_page(nullb, c_pages[i]);
++		if (err)
++			return err;
++		one_round++;
++	}
++	flushed += one_round << PAGE_SHIFT;
++
++	if (n > flushed) {
++		if (nr_pages == 0)
++			nullb->cache_flush_pos = 0;
++		if (one_round == 0) {
++			/* give other threads a chance */
++			spin_unlock_irq(&nullb->lock);
++			spin_lock_irq(&nullb->lock);
++		}
++		goto again;
++	}
++	return 0;
++}
++
++static int copy_to_nullb(struct nullb *nullb, struct page *source,
++	unsigned int off, sector_t sector, size_t n, bool is_fua)
++{
++	size_t temp, count = 0;
++	unsigned int offset;
++	struct nullb_page *t_page;
++	void *dst, *src;
++
++	while (count < n) {
++		temp = min_t(size_t, nullb->dev->blocksize, n - count);
++
++		if (null_cache_active(nullb) && !is_fua)
++			null_make_cache_space(nullb, PAGE_SIZE);
++
++		offset = (sector & SECTOR_MASK) << SECTOR_SHIFT;
++		t_page = null_insert_page(nullb, sector,
++			!null_cache_active(nullb) || is_fua);
++		if (!t_page)
++			return -ENOSPC;
++
++		src = kmap_atomic(source);
++		dst = kmap_atomic(t_page->page);
++		memcpy(dst + offset, src + off + count, temp);
++		kunmap_atomic(dst);
++		kunmap_atomic(src);
++
++		__set_bit(sector & SECTOR_MASK, t_page->bitmap);
++
++		if (is_fua)
++			null_free_sector(nullb, sector, true);
++
++		count += temp;
++		sector += temp >> SECTOR_SHIFT;
++	}
++	return 0;
++}
++
++static int copy_from_nullb(struct nullb *nullb, struct page *dest,
++	unsigned int off, sector_t sector, size_t n)
++{
++	size_t temp, count = 0;
++	unsigned int offset;
++	struct nullb_page *t_page;
++	void *dst, *src;
++
++	while (count < n) {
++		temp = min_t(size_t, nullb->dev->blocksize, n - count);
++
++		offset = (sector & SECTOR_MASK) << SECTOR_SHIFT;
++		t_page = null_lookup_page(nullb, sector, false,
++			!null_cache_active(nullb));
++
++		dst = kmap_atomic(dest);
++		if (!t_page) {
++			memset(dst + off + count, 0, temp);
++			goto next;
++		}
++		src = kmap_atomic(t_page->page);
++		memcpy(dst + off + count, src + offset, temp);
++		kunmap_atomic(src);
++next:
++		kunmap_atomic(dst);
++
++		count += temp;
++		sector += temp >> SECTOR_SHIFT;
++	}
++	return 0;
++}
++
++static void nullb_fill_pattern(struct nullb *nullb, struct page *page,
++			       unsigned int len, unsigned int off)
++{
++	void *dst;
++
++	dst = kmap_atomic(page);
++	memset(dst + off, 0xFF, len);
++	kunmap_atomic(dst);
++}
++
++static void null_handle_discard(struct nullb *nullb, sector_t sector, size_t n)
++{
++	size_t temp;
++
++	spin_lock_irq(&nullb->lock);
++	while (n > 0) {
++		temp = min_t(size_t, n, nullb->dev->blocksize);
++		null_free_sector(nullb, sector, false);
++		if (null_cache_active(nullb))
++			null_free_sector(nullb, sector, true);
++		sector += temp >> SECTOR_SHIFT;
++		n -= temp;
++	}
++	spin_unlock_irq(&nullb->lock);
++}
++
++static int null_handle_flush(struct nullb *nullb)
++{
++	int err;
++
++	if (!null_cache_active(nullb))
++		return 0;
++
++	spin_lock_irq(&nullb->lock);
++	while (true) {
++		err = null_make_cache_space(nullb,
++			nullb->dev->cache_size * 1024 * 1024);
++		if (err || nullb->dev->curr_cache == 0)
++			break;
++	}
++
++	WARN_ON(!radix_tree_empty(&nullb->dev->cache));
++	spin_unlock_irq(&nullb->lock);
++	return err;
++}
++
++static int null_transfer(struct nullb *nullb, struct page *page,
++	unsigned int len, unsigned int off, bool is_write, sector_t sector,
++	bool is_fua)
++{
++	struct nullb_device *dev = nullb->dev;
++	unsigned int valid_len = len;
++	int err = 0;
++
++	if (!is_write) {
++		if (dev->zoned)
++			valid_len = null_zone_valid_read_len(nullb,
++				sector, len);
++
++		if (valid_len) {
++			err = copy_from_nullb(nullb, page, off,
++				sector, valid_len);
++			off += valid_len;
++			len -= valid_len;
++		}
++
++		if (len)
++			nullb_fill_pattern(nullb, page, len, off);
++		flush_dcache_page(page);
++	} else {
++		flush_dcache_page(page);
++		err = copy_to_nullb(nullb, page, off, sector, len, is_fua);
++	}
++
++	return err;
++}
++
++static int null_handle_rq(struct nullb_cmd *cmd)
++{
++	struct request *rq = cmd->rq;
++	struct nullb *nullb = cmd->nq->dev->nullb;
++	int err;
++	unsigned int len;
++	sector_t sector;
++	struct req_iterator iter;
++	struct bio_vec bvec;
++
++	sector = blk_rq_pos(rq);
++
++	if (req_op(rq) == REQ_OP_DISCARD) {
++		null_handle_discard(nullb, sector, blk_rq_bytes(rq));
++		return 0;
++	}
++
++	spin_lock_irq(&nullb->lock);
++	rq_for_each_segment(bvec, rq, iter) {
++		len = bvec.bv_len;
++		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
++				     op_is_write(req_op(rq)), sector,
++				     rq->cmd_flags & REQ_FUA);
++		if (err) {
++			spin_unlock_irq(&nullb->lock);
++			return err;
++		}
++		sector += len >> SECTOR_SHIFT;
++	}
++	spin_unlock_irq(&nullb->lock);
++
++	return 0;
++}
++
++static int null_handle_bio(struct nullb_cmd *cmd)
++{
++	struct bio *bio = cmd->bio;
++	struct nullb *nullb = cmd->nq->dev->nullb;
++	int err;
++	unsigned int len;
++	sector_t sector;
++	struct bio_vec bvec;
++	struct bvec_iter iter;
++
++	sector = bio->bi_iter.bi_sector;
++
++	if (bio_op(bio) == REQ_OP_DISCARD) {
++		null_handle_discard(nullb, sector,
++			bio_sectors(bio) << SECTOR_SHIFT);
++		return 0;
++	}
++
++	spin_lock_irq(&nullb->lock);
++	bio_for_each_segment(bvec, bio, iter) {
++		len = bvec.bv_len;
++		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
++				     op_is_write(bio_op(bio)), sector,
++				     bio->bi_opf & REQ_FUA);
++		if (err) {
++			spin_unlock_irq(&nullb->lock);
++			return err;
++		}
++		sector += len >> SECTOR_SHIFT;
++	}
++	spin_unlock_irq(&nullb->lock);
++	return 0;
++}
++
++static void null_stop_queue(struct nullb *nullb)
++{
++	struct request_queue *q = nullb->q;
++
++	if (nullb->dev->queue_mode == NULL_Q_MQ)
++		blk_mq_stop_hw_queues(q);
++}
++
++static void null_restart_queue_async(struct nullb *nullb)
++{
++	struct request_queue *q = nullb->q;
++
++	if (nullb->dev->queue_mode == NULL_Q_MQ)
++		blk_mq_start_stopped_hw_queues(q, true);
++}
++
++static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd)
++{
++	struct nullb_device *dev = cmd->nq->dev;
++	struct nullb *nullb = dev->nullb;
++	blk_status_t sts = BLK_STS_OK;
++	struct request *rq = cmd->rq;
++
++	if (!hrtimer_active(&nullb->bw_timer))
++		hrtimer_restart(&nullb->bw_timer);
++
++	if (atomic_long_sub_return(blk_rq_bytes(rq), &nullb->cur_bytes) < 0) {
++		null_stop_queue(nullb);
++		/* race with timer */
++		if (atomic_long_read(&nullb->cur_bytes) > 0)
++			null_restart_queue_async(nullb);
++		/* requeue request */
++		sts = BLK_STS_DEV_RESOURCE;
++	}
++	return sts;
++}
++
++static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
++						 sector_t sector,
++						 sector_t nr_sectors)
++{
++	struct badblocks *bb = &cmd->nq->dev->badblocks;
++	sector_t first_bad;
++	int bad_sectors;
++
++	if (badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors))
++		return BLK_STS_IOERR;
++
++	return BLK_STS_OK;
++}
++
++static inline blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd,
++						     enum req_opf op)
++{
++	struct nullb_device *dev = cmd->nq->dev;
++	int err;
++
++	if (dev->queue_mode == NULL_Q_BIO)
++		err = null_handle_bio(cmd);
++	else
++		err = null_handle_rq(cmd);
++
++	return errno_to_blk_status(err);
++}
++
++static void nullb_zero_read_cmd_buffer(struct nullb_cmd *cmd)
++{
++	struct nullb_device *dev = cmd->nq->dev;
++	struct bio *bio;
++
++	if (dev->memory_backed)
++		return;
++
++	if (dev->queue_mode == NULL_Q_BIO && bio_op(cmd->bio) == REQ_OP_READ) {
++		zero_fill_bio(cmd->bio);
++	} else if (req_op(cmd->rq) == REQ_OP_READ) {
++		__rq_for_each_bio(bio, cmd->rq)
++			zero_fill_bio(bio);
++	}
++}
++
++static inline void nullb_complete_cmd(struct nullb_cmd *cmd)
++{
++	/*
++	 * Since root privileges are required to configure the null_blk
++	 * driver, it is fine that this driver does not initialize the
++	 * data buffers of read commands. Zero-initialize these buffers
++	 * anyway if KMSAN is enabled to prevent that KMSAN complains
++	 * about null_blk not initializing read data buffers.
++	 */
++	if (IS_ENABLED(CONFIG_KMSAN))
++		nullb_zero_read_cmd_buffer(cmd);
++
++	/* Complete IO by inline, softirq or timer */
++	switch (cmd->nq->dev->irqmode) {
++	case NULL_IRQ_SOFTIRQ:
++		switch (cmd->nq->dev->queue_mode) {
++		case NULL_Q_MQ:
++			blk_mq_complete_request(cmd->rq);
++			break;
++		case NULL_Q_BIO:
++			/*
++			 * XXX: no proper submitting cpu information available.
++			 */
++			end_cmd(cmd);
++			break;
++		}
++		break;
++	case NULL_IRQ_NONE:
++		end_cmd(cmd);
++		break;
++	case NULL_IRQ_TIMER:
++		null_cmd_end_timer(cmd);
++		break;
++	}
++}
++
++blk_status_t null_process_cmd(struct nullb_cmd *cmd,
++			      enum req_opf op, sector_t sector,
++			      unsigned int nr_sectors)
++{
++	struct nullb_device *dev = cmd->nq->dev;
++	blk_status_t ret;
++
++	if (dev->badblocks.shift != -1) {
++		ret = null_handle_badblocks(cmd, sector, nr_sectors);
++		if (ret != BLK_STS_OK)
++			return ret;
++	}
++
++	if (dev->memory_backed)
++		return null_handle_memory_backed(cmd, op);
++
++	return BLK_STS_OK;
++}
++
++static blk_status_t null_handle_cmd(struct nullb_cmd *cmd, sector_t sector,
++				    sector_t nr_sectors, enum req_opf op)
++{
++	struct nullb_device *dev = cmd->nq->dev;
++	struct nullb *nullb = dev->nullb;
++	blk_status_t sts;
++
++	if (test_bit(NULLB_DEV_FL_THROTTLED, &dev->flags)) {
++		sts = null_handle_throttled(cmd);
++		if (sts != BLK_STS_OK)
++			return sts;
++	}
++
++	if (op == REQ_OP_FLUSH) {
++		cmd->error = errno_to_blk_status(null_handle_flush(nullb));
++		goto out;
++	}
++
++	if (dev->zoned)
++		sts = null_process_zoned_cmd(cmd, op, sector, nr_sectors);
++	else
++		sts = null_process_cmd(cmd, op, sector, nr_sectors);
++
++	/* Do not overwrite errors (e.g. timeout errors) */
++	if (cmd->error == BLK_STS_OK)
++		cmd->error = sts;
++
++out:
++	nullb_complete_cmd(cmd);
++	return BLK_STS_OK;
++}
++
++static enum hrtimer_restart nullb_bwtimer_fn(struct hrtimer *timer)
++{
++	struct nullb *nullb = container_of(timer, struct nullb, bw_timer);
++	ktime_t timer_interval = ktime_set(0, TIMER_INTERVAL);
++	unsigned int mbps = nullb->dev->mbps;
++
++	if (atomic_long_read(&nullb->cur_bytes) == mb_per_tick(mbps))
++		return HRTIMER_NORESTART;
++
++	atomic_long_set(&nullb->cur_bytes, mb_per_tick(mbps));
++	null_restart_queue_async(nullb);
++
++	hrtimer_forward_now(&nullb->bw_timer, timer_interval);
++
++	return HRTIMER_RESTART;
++}
++
++static void nullb_setup_bwtimer(struct nullb *nullb)
++{
++	ktime_t timer_interval = ktime_set(0, TIMER_INTERVAL);
++
++	hrtimer_init(&nullb->bw_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++	nullb->bw_timer.function = nullb_bwtimer_fn;
++	atomic_long_set(&nullb->cur_bytes, mb_per_tick(nullb->dev->mbps));
++	hrtimer_start(&nullb->bw_timer, timer_interval, HRTIMER_MODE_REL);
++}
++
++static struct nullb_queue *nullb_to_queue(struct nullb *nullb)
++{
++	int index = 0;
++
++	if (nullb->nr_queues != 1)
++		index = raw_smp_processor_id() / ((nr_cpu_ids + nullb->nr_queues - 1) / nullb->nr_queues);
++
++	return &nullb->queues[index];
++}
++
++static blk_qc_t null_submit_bio(struct bio *bio)
++{
++	sector_t sector = bio->bi_iter.bi_sector;
++	sector_t nr_sectors = bio_sectors(bio);
++	struct nullb *nullb = bio->bi_disk->private_data;
++	struct nullb_queue *nq = nullb_to_queue(nullb);
++	struct nullb_cmd *cmd;
++
++	cmd = alloc_cmd(nq, 1);
++	cmd->bio = bio;
++
++	null_handle_cmd(cmd, sector, nr_sectors, bio_op(bio));
++	return BLK_QC_T_NONE;
++}
++
++static bool should_timeout_request(struct request *rq)
++{
++#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
++	if (g_timeout_str[0])
++		return should_fail(&null_timeout_attr, 1);
++#endif
++	return false;
++}
++
++static bool should_requeue_request(struct request *rq)
++{
++#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
++	if (g_requeue_str[0])
++		return should_fail(&null_requeue_attr, 1);
++#endif
++	return false;
++}
++
++static enum blk_eh_timer_return null_timeout_rq(struct request *rq, bool res)
++{
++	struct nullb_cmd *cmd = blk_mq_rq_to_pdu(rq);
++
++	pr_info("rq %p timed out\n", rq);
++
++	/*
++	 * If the device is marked as blocking (i.e. memory backed or zoned
++	 * device), the submission path may be blocked waiting for resources
++	 * and cause real timeouts. For these real timeouts, the submission
++	 * path will complete the request using blk_mq_complete_request().
++	 * Only fake timeouts need to execute blk_mq_complete_request() here.
++	 */
++	cmd->error = BLK_STS_TIMEOUT;
++	if (cmd->fake_timeout)
++		blk_mq_complete_request(rq);
++	return BLK_EH_DONE;
++}
++
++static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx,
++			 const struct blk_mq_queue_data *bd)
++{
++	struct nullb_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);
++	struct nullb_queue *nq = hctx->driver_data;
++	sector_t nr_sectors = blk_rq_sectors(bd->rq);
++	sector_t sector = blk_rq_pos(bd->rq);
++
++	might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING);
++
++	if (nq->dev->irqmode == NULL_IRQ_TIMER) {
++		hrtimer_init(&cmd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++		cmd->timer.function = null_cmd_timer_expired;
++	}
++	cmd->rq = bd->rq;
++	cmd->error = BLK_STS_OK;
++	cmd->nq = nq;
++	cmd->fake_timeout = should_timeout_request(bd->rq) ||
++		blk_should_fake_timeout(bd->rq->q);
++
++	blk_mq_start_request(bd->rq);
++
++	if (should_requeue_request(bd->rq)) {
++		/*
++		 * Alternate between hitting the core BUSY path, and the
++		 * driver driven requeue path
++		 */
++		nq->requeue_selection++;
++		if (nq->requeue_selection & 1)
++			return BLK_STS_RESOURCE;
++		else {
++			blk_mq_requeue_request(bd->rq, true);
++			return BLK_STS_OK;
++		}
++	}
++	if (cmd->fake_timeout)
++		return BLK_STS_OK;
++
++	return null_handle_cmd(cmd, sector, nr_sectors, req_op(bd->rq));
++}
++
++static void cleanup_queue(struct nullb_queue *nq)
++{
++	kfree(nq->tag_map);
++	kfree(nq->cmds);
++}
++
++static void cleanup_queues(struct nullb *nullb)
++{
++	int i;
++
++	for (i = 0; i < nullb->nr_queues; i++)
++		cleanup_queue(&nullb->queues[i]);
++
++	kfree(nullb->queues);
++}
++
++static void null_exit_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
++{
++	struct nullb_queue *nq = hctx->driver_data;
++	struct nullb *nullb = nq->dev->nullb;
++
++	nullb->nr_queues--;
++}
++
++static void null_init_queue(struct nullb *nullb, struct nullb_queue *nq)
++{
++	init_waitqueue_head(&nq->wait);
++	nq->queue_depth = nullb->queue_depth;
++	nq->dev = nullb->dev;
++}
++
++static int null_init_hctx(struct blk_mq_hw_ctx *hctx, void *driver_data,
++			  unsigned int hctx_idx)
++{
++	struct nullb *nullb = hctx->queue->queuedata;
++	struct nullb_queue *nq;
++
++#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
++	if (g_init_hctx_str[0] && should_fail(&null_init_hctx_attr, 1))
++		return -EFAULT;
++#endif
++
++	nq = &nullb->queues[hctx_idx];
++	hctx->driver_data = nq;
++	null_init_queue(nullb, nq);
++	nullb->nr_queues++;
++
++	return 0;
++}
++
++static const struct blk_mq_ops null_mq_ops = {
++	.queue_rq       = null_queue_rq,
++	.complete	= null_complete_rq,
++	.timeout	= null_timeout_rq,
++	.init_hctx	= null_init_hctx,
++	.exit_hctx	= null_exit_hctx,
++};
++
++static void null_del_dev(struct nullb *nullb)
++{
++	struct nullb_device *dev;
++
++	if (!nullb)
++		return;
++
++	dev = nullb->dev;
++
++	ida_simple_remove(&nullb_indexes, nullb->index);
++
++	list_del_init(&nullb->list);
++
++	del_gendisk(nullb->disk);
++
++	if (test_bit(NULLB_DEV_FL_THROTTLED, &nullb->dev->flags)) {
++		hrtimer_cancel(&nullb->bw_timer);
++		atomic_long_set(&nullb->cur_bytes, LONG_MAX);
++		null_restart_queue_async(nullb);
++	}
++
++	blk_cleanup_queue(nullb->q);
++	if (dev->queue_mode == NULL_Q_MQ &&
++	    nullb->tag_set == &nullb->__tag_set)
++		blk_mq_free_tag_set(nullb->tag_set);
++	put_disk(nullb->disk);
++	cleanup_queues(nullb);
++	if (null_cache_active(nullb))
++		null_free_device_storage(nullb->dev, true);
++	kfree(nullb);
++	dev->nullb = NULL;
++}
++
++static void null_config_discard(struct nullb *nullb)
++{
++	if (nullb->dev->discard == false)
++		return;
++
++	if (nullb->dev->zoned) {
++		nullb->dev->discard = false;
++		pr_info("discard option is ignored in zoned mode\n");
++		return;
++	}
++
++	nullb->q->limits.discard_granularity = nullb->dev->blocksize;
++	nullb->q->limits.discard_alignment = nullb->dev->blocksize;
++	blk_queue_max_discard_sectors(nullb->q, UINT_MAX >> 9);
++	blk_queue_flag_set(QUEUE_FLAG_DISCARD, nullb->q);
++}
++
++static const struct block_device_operations null_bio_ops = {
++	.owner		= THIS_MODULE,
++	.submit_bio	= null_submit_bio,
++	.report_zones	= null_report_zones,
++};
++
++static const struct block_device_operations null_rq_ops = {
++	.owner		= THIS_MODULE,
++	.report_zones	= null_report_zones,
++};
++
++static int setup_commands(struct nullb_queue *nq)
++{
++	struct nullb_cmd *cmd;
++	int i, tag_size;
++
++	nq->cmds = kcalloc(nq->queue_depth, sizeof(*cmd), GFP_KERNEL);
++	if (!nq->cmds)
++		return -ENOMEM;
++
++	tag_size = ALIGN(nq->queue_depth, BITS_PER_LONG) / BITS_PER_LONG;
++	nq->tag_map = kcalloc(tag_size, sizeof(unsigned long), GFP_KERNEL);
++	if (!nq->tag_map) {
++		kfree(nq->cmds);
++		return -ENOMEM;
++	}
++
++	for (i = 0; i < nq->queue_depth; i++) {
++		cmd = &nq->cmds[i];
++		cmd->tag = -1U;
++	}
++
++	return 0;
++}
++
++static int setup_queues(struct nullb *nullb)
++{
++	nullb->queues = kcalloc(nr_cpu_ids, sizeof(struct nullb_queue),
++				GFP_KERNEL);
++	if (!nullb->queues)
++		return -ENOMEM;
++
++	nullb->queue_depth = nullb->dev->hw_queue_depth;
++
++	return 0;
++}
++
++static int init_driver_queues(struct nullb *nullb)
++{
++	struct nullb_queue *nq;
++	int i, ret = 0;
++
++	for (i = 0; i < nullb->dev->submit_queues; i++) {
++		nq = &nullb->queues[i];
++
++		null_init_queue(nullb, nq);
++
++		ret = setup_commands(nq);
++		if (ret)
++			return ret;
++		nullb->nr_queues++;
++	}
++	return 0;
++}
++
++static int null_gendisk_register(struct nullb *nullb)
++{
++	sector_t size = ((sector_t)nullb->dev->size * SZ_1M) >> SECTOR_SHIFT;
++	struct gendisk *disk;
++
++	disk = nullb->disk = alloc_disk_node(1, nullb->dev->home_node);
++	if (!disk)
++		return -ENOMEM;
++	set_capacity(disk, size);
++
++	disk->flags |= GENHD_FL_EXT_DEVT | GENHD_FL_SUPPRESS_PARTITION_INFO;
++	disk->major		= null_major;
++	disk->first_minor	= nullb->index;
++	if (queue_is_mq(nullb->q))
++		disk->fops		= &null_rq_ops;
++	else
++		disk->fops		= &null_bio_ops;
++	disk->private_data	= nullb;
++	disk->queue		= nullb->q;
++	strncpy(disk->disk_name, nullb->disk_name, DISK_NAME_LEN);
++
++	if (nullb->dev->zoned) {
++		int ret = null_register_zoned_dev(nullb);
++
++		if (ret)
++			return ret;
++	}
++
++	add_disk(disk);
++	return 0;
++}
++
++static int null_init_tag_set(struct nullb *nullb, struct blk_mq_tag_set *set)
++{
++	set->ops = &null_mq_ops;
++	set->nr_hw_queues = nullb ? nullb->dev->submit_queues :
++						g_submit_queues;
++	set->queue_depth = nullb ? nullb->dev->hw_queue_depth :
++						g_hw_queue_depth;
++	set->numa_node = nullb ? nullb->dev->home_node : g_home_node;
++	set->cmd_size	= sizeof(struct nullb_cmd);
++	set->flags = BLK_MQ_F_SHOULD_MERGE;
++	if (g_no_sched)
++		set->flags |= BLK_MQ_F_NO_SCHED;
++	if (g_shared_tag_bitmap)
++		set->flags |= BLK_MQ_F_TAG_HCTX_SHARED;
++	set->driver_data = NULL;
++
++	if ((nullb && nullb->dev->blocking) || g_blocking)
++		set->flags |= BLK_MQ_F_BLOCKING;
++
++	return blk_mq_alloc_tag_set(set);
++}
++
++static int null_validate_conf(struct nullb_device *dev)
++{
++	dev->blocksize = round_down(dev->blocksize, 512);
++	dev->blocksize = clamp_t(unsigned int, dev->blocksize, 512, 4096);
++
++	if (dev->queue_mode == NULL_Q_MQ && dev->use_per_node_hctx) {
++		if (dev->submit_queues != nr_online_nodes)
++			dev->submit_queues = nr_online_nodes;
++	} else if (dev->submit_queues > nr_cpu_ids)
++		dev->submit_queues = nr_cpu_ids;
++	else if (dev->submit_queues == 0)
++		dev->submit_queues = 1;
++
++	dev->queue_mode = min_t(unsigned int, dev->queue_mode, NULL_Q_MQ);
++	dev->irqmode = min_t(unsigned int, dev->irqmode, NULL_IRQ_TIMER);
++
++	/* Do memory allocation, so set blocking */
++	if (dev->memory_backed)
++		dev->blocking = true;
++	else /* cache is meaningless */
++		dev->cache_size = 0;
++	dev->cache_size = min_t(unsigned long, ULONG_MAX / 1024 / 1024,
++						dev->cache_size);
++	dev->mbps = min_t(unsigned int, 1024 * 40, dev->mbps);
++	/* can not stop a queue */
++	if (dev->queue_mode == NULL_Q_BIO)
++		dev->mbps = 0;
++
++	if (dev->zoned &&
++	    (!dev->zone_size || !is_power_of_2(dev->zone_size))) {
++		pr_err("zone_size must be power-of-two\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
++#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
++static bool __null_setup_fault(struct fault_attr *attr, char *str)
++{
++	if (!str[0])
++		return true;
++
++	if (!setup_fault_attr(attr, str))
++		return false;
++
++	attr->verbose = 0;
++	return true;
++}
++#endif
++
++static bool null_setup_fault(void)
++{
++#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
++	if (!__null_setup_fault(&null_timeout_attr, g_timeout_str))
++		return false;
++	if (!__null_setup_fault(&null_requeue_attr, g_requeue_str))
++		return false;
++	if (!__null_setup_fault(&null_init_hctx_attr, g_init_hctx_str))
++		return false;
++#endif
++	return true;
++}
++
++static int null_add_dev(struct nullb_device *dev)
++{
++	struct nullb *nullb;
++	int rv;
++
++	rv = null_validate_conf(dev);
++	if (rv)
++		return rv;
++
++	nullb = kzalloc_node(sizeof(*nullb), GFP_KERNEL, dev->home_node);
++	if (!nullb) {
++		rv = -ENOMEM;
++		goto out;
++	}
++	nullb->dev = dev;
++	dev->nullb = nullb;
++
++	spin_lock_init(&nullb->lock);
++
++	rv = setup_queues(nullb);
++	if (rv)
++		goto out_free_nullb;
++
++	if (dev->queue_mode == NULL_Q_MQ) {
++		if (shared_tags) {
++			nullb->tag_set = &tag_set;
++			rv = 0;
++		} else {
++			nullb->tag_set = &nullb->__tag_set;
++			rv = null_init_tag_set(nullb, nullb->tag_set);
++		}
++
++		if (rv)
++			goto out_cleanup_queues;
++
++		if (!null_setup_fault())
++			goto out_cleanup_queues;
++
++		nullb->tag_set->timeout = 5 * HZ;
++		nullb->q = blk_mq_init_queue_data(nullb->tag_set, nullb);
++		if (IS_ERR(nullb->q)) {
++			rv = -ENOMEM;
++			goto out_cleanup_tags;
++		}
++	} else if (dev->queue_mode == NULL_Q_BIO) {
++		nullb->q = blk_alloc_queue(dev->home_node);
++		if (!nullb->q) {
++			rv = -ENOMEM;
++			goto out_cleanup_queues;
++		}
++		rv = init_driver_queues(nullb);
++		if (rv)
++			goto out_cleanup_blk_queue;
++	}
++
++	if (dev->mbps) {
++		set_bit(NULLB_DEV_FL_THROTTLED, &dev->flags);
++		nullb_setup_bwtimer(nullb);
++	}
++
++	if (dev->cache_size > 0) {
++		set_bit(NULLB_DEV_FL_CACHE, &nullb->dev->flags);
++		blk_queue_write_cache(nullb->q, true, true);
++	}
++
++	if (dev->zoned) {
++		rv = null_init_zoned_dev(dev, nullb->q);
++		if (rv)
++			goto out_cleanup_blk_queue;
++	}
++
++	nullb->q->queuedata = nullb;
++	blk_queue_flag_set(QUEUE_FLAG_NONROT, nullb->q);
++	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, nullb->q);
++
++	mutex_lock(&lock);
++	rv = ida_simple_get(&nullb_indexes, 0, 0, GFP_KERNEL);
++	if (rv < 0) {
++		mutex_unlock(&lock);
++		goto out_cleanup_zone;
++	}
++	nullb->index = rv;
++	dev->index = rv;
++	mutex_unlock(&lock);
++
++	blk_queue_logical_block_size(nullb->q, dev->blocksize);
++	blk_queue_physical_block_size(nullb->q, dev->blocksize);
++
++	null_config_discard(nullb);
++
++	sprintf(nullb->disk_name, "nullb%d", nullb->index);
++
++	rv = null_gendisk_register(nullb);
++	if (rv)
++		goto out_ida_free;
++
++	mutex_lock(&lock);
++	list_add_tail(&nullb->list, &nullb_list);
++	mutex_unlock(&lock);
++
++	return 0;
++
++out_ida_free:
++	ida_free(&nullb_indexes, nullb->index);
++out_cleanup_zone:
++	null_free_zoned_dev(dev);
++out_cleanup_blk_queue:
++	blk_cleanup_queue(nullb->q);
++out_cleanup_tags:
++	if (dev->queue_mode == NULL_Q_MQ && nullb->tag_set == &nullb->__tag_set)
++		blk_mq_free_tag_set(nullb->tag_set);
++out_cleanup_queues:
++	cleanup_queues(nullb);
++out_free_nullb:
++	kfree(nullb);
++	dev->nullb = NULL;
++out:
++	return rv;
++}
++
++static int __init null_init(void)
++{
++	int ret = 0;
++	unsigned int i;
++	struct nullb *nullb;
++	struct nullb_device *dev;
++
++	if (g_bs > PAGE_SIZE) {
++		pr_warn("invalid block size\n");
++		pr_warn("defaults block size to %lu\n", PAGE_SIZE);
++		g_bs = PAGE_SIZE;
++	}
++
++	if (g_home_node != NUMA_NO_NODE && g_home_node >= nr_online_nodes) {
++		pr_err("invalid home_node value\n");
++		g_home_node = NUMA_NO_NODE;
++	}
++
++	if (g_queue_mode == NULL_Q_RQ) {
++		pr_err("legacy IO path no longer available\n");
++		return -EINVAL;
++	}
++	if (g_queue_mode == NULL_Q_MQ && g_use_per_node_hctx) {
++		if (g_submit_queues != nr_online_nodes) {
++			pr_warn("submit_queues param is set to %u.\n",
++							nr_online_nodes);
++			g_submit_queues = nr_online_nodes;
++		}
++	} else if (g_submit_queues > nr_cpu_ids)
++		g_submit_queues = nr_cpu_ids;
++	else if (g_submit_queues <= 0)
++		g_submit_queues = 1;
++
++	if (g_queue_mode == NULL_Q_MQ && shared_tags) {
++		ret = null_init_tag_set(NULL, &tag_set);
++		if (ret)
++			return ret;
++	}
++
++	config_group_init(&nullb_subsys.su_group);
++	mutex_init(&nullb_subsys.su_mutex);
++
++	ret = configfs_register_subsystem(&nullb_subsys);
++	if (ret)
++		goto err_tagset;
++
++	mutex_init(&lock);
++
++	null_major = register_blkdev(0, "nullb");
++	if (null_major < 0) {
++		ret = null_major;
++		goto err_conf;
++	}
++
++	for (i = 0; i < nr_devices; i++) {
++		dev = null_alloc_dev();
++		if (!dev) {
++			ret = -ENOMEM;
++			goto err_dev;
++		}
++		ret = null_add_dev(dev);
++		if (ret) {
++			null_free_dev(dev);
++			goto err_dev;
++		}
++	}
++
++	pr_info("module loaded\n");
++	return 0;
++
++err_dev:
++	while (!list_empty(&nullb_list)) {
++		nullb = list_entry(nullb_list.next, struct nullb, list);
++		dev = nullb->dev;
++		null_del_dev(nullb);
++		null_free_dev(dev);
++	}
++	unregister_blkdev(null_major, "nullb");
++err_conf:
++	configfs_unregister_subsystem(&nullb_subsys);
++err_tagset:
++	if (g_queue_mode == NULL_Q_MQ && shared_tags)
++		blk_mq_free_tag_set(&tag_set);
++	return ret;
++}
++
++static void __exit null_exit(void)
++{
++	struct nullb *nullb;
++
++	configfs_unregister_subsystem(&nullb_subsys);
++
++	unregister_blkdev(null_major, "nullb");
++
++	mutex_lock(&lock);
++	while (!list_empty(&nullb_list)) {
++		struct nullb_device *dev;
++
++		nullb = list_entry(nullb_list.next, struct nullb, list);
++		dev = nullb->dev;
++		null_del_dev(nullb);
++		null_free_dev(dev);
++	}
++	mutex_unlock(&lock);
++
++	if (g_queue_mode == NULL_Q_MQ && shared_tags)
++		blk_mq_free_tag_set(&tag_set);
++}
++
++module_init(null_init);
++module_exit(null_exit);
++
++MODULE_AUTHOR("Jens Axboe <axboe@kernel.dk>");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
+new file mode 100644
+index 0000000000000..7de703f28617b
+--- /dev/null
++++ b/drivers/block/null_blk/null_blk.h
+@@ -0,0 +1,137 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __BLK_NULL_BLK_H
++#define __BLK_NULL_BLK_H
++
++#undef pr_fmt
++#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
++
++#include <linux/blkdev.h>
++#include <linux/slab.h>
++#include <linux/blk-mq.h>
++#include <linux/hrtimer.h>
++#include <linux/configfs.h>
++#include <linux/badblocks.h>
++#include <linux/fault-inject.h>
++
++struct nullb_cmd {
++	struct request *rq;
++	struct bio *bio;
++	unsigned int tag;
++	blk_status_t error;
++	struct nullb_queue *nq;
++	struct hrtimer timer;
++	bool fake_timeout;
++};
++
++struct nullb_queue {
++	unsigned long *tag_map;
++	wait_queue_head_t wait;
++	unsigned int queue_depth;
++	struct nullb_device *dev;
++	unsigned int requeue_selection;
++
++	struct nullb_cmd *cmds;
++};
++
++struct nullb_device {
++	struct nullb *nullb;
++	struct config_item item;
++	struct radix_tree_root data; /* data stored in the disk */
++	struct radix_tree_root cache; /* disk cache data */
++	unsigned long flags; /* device flags */
++	unsigned int curr_cache;
++	struct badblocks badblocks;
++
++	unsigned int nr_zones;
++	unsigned int nr_zones_imp_open;
++	unsigned int nr_zones_exp_open;
++	unsigned int nr_zones_closed;
++	struct blk_zone *zones;
++	sector_t zone_size_sects;
++	spinlock_t zone_lock;
++	unsigned long *zone_locks;
++
++	unsigned long size; /* device size in MB */
++	unsigned long completion_nsec; /* time in ns to complete a request */
++	unsigned long cache_size; /* disk cache size in MB */
++	unsigned long zone_size; /* zone size in MB if device is zoned */
++	unsigned long zone_capacity; /* zone capacity in MB if device is zoned */
++	unsigned int zone_nr_conv; /* number of conventional zones */
++	unsigned int zone_max_open; /* max number of open zones */
++	unsigned int zone_max_active; /* max number of active zones */
++	unsigned int submit_queues; /* number of submission queues */
++	unsigned int home_node; /* home node for the device */
++	unsigned int queue_mode; /* block interface */
++	unsigned int blocksize; /* block size */
++	unsigned int irqmode; /* IRQ completion handler */
++	unsigned int hw_queue_depth; /* queue depth */
++	unsigned int index; /* index of the disk, only valid with a disk */
++	unsigned int mbps; /* Bandwidth throttle cap (in MB/s) */
++	bool blocking; /* blocking blk-mq device */
++	bool use_per_node_hctx; /* use per-node allocation for hardware context */
++	bool power; /* power on/off the device */
++	bool memory_backed; /* if data is stored in memory */
++	bool discard; /* if support discard */
++	bool zoned; /* if device is zoned */
++};
++
++struct nullb {
++	struct nullb_device *dev;
++	struct list_head list;
++	unsigned int index;
++	struct request_queue *q;
++	struct gendisk *disk;
++	struct blk_mq_tag_set *tag_set;
++	struct blk_mq_tag_set __tag_set;
++	unsigned int queue_depth;
++	atomic_long_t cur_bytes;
++	struct hrtimer bw_timer;
++	unsigned long cache_flush_pos;
++	spinlock_t lock;
++
++	struct nullb_queue *queues;
++	unsigned int nr_queues;
++	char disk_name[DISK_NAME_LEN];
++};
++
++blk_status_t null_process_cmd(struct nullb_cmd *cmd,
++			      enum req_opf op, sector_t sector,
++			      unsigned int nr_sectors);
++
++#ifdef CONFIG_BLK_DEV_ZONED
++int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q);
++int null_register_zoned_dev(struct nullb *nullb);
++void null_free_zoned_dev(struct nullb_device *dev);
++int null_report_zones(struct gendisk *disk, sector_t sector,
++		      unsigned int nr_zones, report_zones_cb cb, void *data);
++blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd,
++				    enum req_opf op, sector_t sector,
++				    sector_t nr_sectors);
++size_t null_zone_valid_read_len(struct nullb *nullb,
++				sector_t sector, unsigned int len);
++#else
++static inline int null_init_zoned_dev(struct nullb_device *dev,
++				      struct request_queue *q)
++{
++	pr_err("CONFIG_BLK_DEV_ZONED not enabled\n");
++	return -EINVAL;
++}
++static inline int null_register_zoned_dev(struct nullb *nullb)
++{
++	return -ENODEV;
++}
++static inline void null_free_zoned_dev(struct nullb_device *dev) {}
++static inline blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd,
++			enum req_opf op, sector_t sector, sector_t nr_sectors)
++{
++	return BLK_STS_NOTSUPP;
++}
++static inline size_t null_zone_valid_read_len(struct nullb *nullb,
++					      sector_t sector,
++					      unsigned int len)
++{
++	return len;
++}
++#define null_report_zones	NULL
++#endif /* CONFIG_BLK_DEV_ZONED */
++#endif /* __NULL_BLK_H */
+diff --git a/drivers/block/null_blk/trace.c b/drivers/block/null_blk/trace.c
+new file mode 100644
+index 0000000000000..3711cba160715
+--- /dev/null
++++ b/drivers/block/null_blk/trace.c
+@@ -0,0 +1,21 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * null_blk trace related helpers.
++ *
++ * Copyright (C) 2020 Western Digital Corporation or its affiliates.
++ */
++#include "trace.h"
++
++/*
++ * Helper to use for all null_blk traces to extract disk name.
++ */
++const char *nullb_trace_disk_name(struct trace_seq *p, char *name)
++{
++	const char *ret = trace_seq_buffer_ptr(p);
++
++	if (name && *name)
++		trace_seq_printf(p, "disk=%s, ", name);
++	trace_seq_putc(p, 0);
++
++	return ret;
++}
+diff --git a/drivers/block/null_blk/trace.h b/drivers/block/null_blk/trace.h
+new file mode 100644
+index 0000000000000..ce3b430e88c57
+--- /dev/null
++++ b/drivers/block/null_blk/trace.h
+@@ -0,0 +1,79 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * null_blk device driver tracepoints.
++ *
++ * Copyright (C) 2020 Western Digital Corporation or its affiliates.
++ */
++
++#undef TRACE_SYSTEM
++#define TRACE_SYSTEM nullb
++
++#if !defined(_TRACE_NULLB_H) || defined(TRACE_HEADER_MULTI_READ)
++#define _TRACE_NULLB_H
++
++#include <linux/tracepoint.h>
++#include <linux/trace_seq.h>
++
++#include "null_blk.h"
++
++const char *nullb_trace_disk_name(struct trace_seq *p, char *name);
++
++#define __print_disk_name(name) nullb_trace_disk_name(p, name)
++
++#ifndef TRACE_HEADER_MULTI_READ
++static inline void __assign_disk_name(char *name, struct gendisk *disk)
++{
++	if (disk)
++		memcpy(name, disk->disk_name, DISK_NAME_LEN);
++	else
++		memset(name, 0, DISK_NAME_LEN);
++}
++#endif
++
++TRACE_EVENT(nullb_zone_op,
++	    TP_PROTO(struct nullb_cmd *cmd, unsigned int zone_no,
++		     unsigned int zone_cond),
++	    TP_ARGS(cmd, zone_no, zone_cond),
++	    TP_STRUCT__entry(
++		__array(char, disk, DISK_NAME_LEN)
++		__field(enum req_opf, op)
++		__field(unsigned int, zone_no)
++		__field(unsigned int, zone_cond)
++	    ),
++	    TP_fast_assign(
++		__entry->op = req_op(cmd->rq);
++		__entry->zone_no = zone_no;
++		__entry->zone_cond = zone_cond;
++		__assign_disk_name(__entry->disk, cmd->rq->rq_disk);
++	    ),
++	    TP_printk("%s req=%-15s zone_no=%u zone_cond=%-10s",
++		      __print_disk_name(__entry->disk),
++		      blk_op_str(__entry->op),
++		      __entry->zone_no,
++		      blk_zone_cond_str(__entry->zone_cond))
++);
++
++TRACE_EVENT(nullb_report_zones,
++	    TP_PROTO(struct nullb *nullb, unsigned int nr_zones),
++	    TP_ARGS(nullb, nr_zones),
++	    TP_STRUCT__entry(
++		__array(char, disk, DISK_NAME_LEN)
++		__field(unsigned int, nr_zones)
++	    ),
++	    TP_fast_assign(
++		__entry->nr_zones = nr_zones;
++		__assign_disk_name(__entry->disk, nullb->disk);
++	    ),
++	    TP_printk("%s nr_zones=%u",
++		      __print_disk_name(__entry->disk), __entry->nr_zones)
++);
++
++#endif /* _TRACE_NULLB_H */
++
++#undef TRACE_INCLUDE_PATH
++#define TRACE_INCLUDE_PATH .
++#undef TRACE_INCLUDE_FILE
++#define TRACE_INCLUDE_FILE trace
++
++/* This part must be outside protection */
++#include <trace/define_trace.h>
+diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
+new file mode 100644
+index 0000000000000..41220ce59659b
+--- /dev/null
++++ b/drivers/block/null_blk/zoned.c
+@@ -0,0 +1,617 @@
++// SPDX-License-Identifier: GPL-2.0
++#include <linux/vmalloc.h>
++#include <linux/bitmap.h>
++#include "null_blk.h"
++
++#define CREATE_TRACE_POINTS
++#include "trace.h"
++
++#define MB_TO_SECTS(mb) (((sector_t)mb * SZ_1M) >> SECTOR_SHIFT)
++
++static inline unsigned int null_zone_no(struct nullb_device *dev, sector_t sect)
++{
++	return sect >> ilog2(dev->zone_size_sects);
++}
++
++int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
++{
++	sector_t dev_capacity_sects, zone_capacity_sects;
++	sector_t sector = 0;
++	unsigned int i;
++
++	if (!is_power_of_2(dev->zone_size)) {
++		pr_err("zone_size must be power-of-two\n");
++		return -EINVAL;
++	}
++	if (dev->zone_size > dev->size) {
++		pr_err("Zone size larger than device capacity\n");
++		return -EINVAL;
++	}
++
++	if (!dev->zone_capacity)
++		dev->zone_capacity = dev->zone_size;
++
++	if (dev->zone_capacity > dev->zone_size) {
++		pr_err("null_blk: zone capacity (%lu MB) larger than zone size (%lu MB)\n",
++					dev->zone_capacity, dev->zone_size);
++		return -EINVAL;
++	}
++
++	zone_capacity_sects = MB_TO_SECTS(dev->zone_capacity);
++	dev_capacity_sects = MB_TO_SECTS(dev->size);
++	dev->zone_size_sects = MB_TO_SECTS(dev->zone_size);
++	dev->nr_zones = dev_capacity_sects >> ilog2(dev->zone_size_sects);
++	if (dev_capacity_sects & (dev->zone_size_sects - 1))
++		dev->nr_zones++;
++
++	dev->zones = kvmalloc_array(dev->nr_zones, sizeof(struct blk_zone),
++			GFP_KERNEL | __GFP_ZERO);
++	if (!dev->zones)
++		return -ENOMEM;
++
++	/*
++	 * With memory backing, the zone_lock spinlock needs to be temporarily
++	 * released to avoid scheduling in atomic context. To guarantee zone
++	 * information protection, use a bitmap to lock zones with
++	 * wait_on_bit_lock_io(). Sleeping on the lock is OK as memory backing
++	 * implies that the queue is marked with BLK_MQ_F_BLOCKING.
++	 */
++	spin_lock_init(&dev->zone_lock);
++	if (dev->memory_backed) {
++		dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL);
++		if (!dev->zone_locks) {
++			kvfree(dev->zones);
++			return -ENOMEM;
++		}
++	}
++
++	if (dev->zone_nr_conv >= dev->nr_zones) {
++		dev->zone_nr_conv = dev->nr_zones - 1;
++		pr_info("changed the number of conventional zones to %u",
++			dev->zone_nr_conv);
++	}
++
++	/* Max active zones has to be < nbr of seq zones in order to be enforceable */
++	if (dev->zone_max_active >= dev->nr_zones - dev->zone_nr_conv) {
++		dev->zone_max_active = 0;
++		pr_info("zone_max_active limit disabled, limit >= zone count\n");
++	}
++
++	/* Max open zones has to be <= max active zones */
++	if (dev->zone_max_active && dev->zone_max_open > dev->zone_max_active) {
++		dev->zone_max_open = dev->zone_max_active;
++		pr_info("changed the maximum number of open zones to %u\n",
++			dev->nr_zones);
++	} else if (dev->zone_max_open >= dev->nr_zones - dev->zone_nr_conv) {
++		dev->zone_max_open = 0;
++		pr_info("zone_max_open limit disabled, limit >= zone count\n");
++	}
++
++	for (i = 0; i <  dev->zone_nr_conv; i++) {
++		struct blk_zone *zone = &dev->zones[i];
++
++		zone->start = sector;
++		zone->len = dev->zone_size_sects;
++		zone->capacity = zone->len;
++		zone->wp = zone->start + zone->len;
++		zone->type = BLK_ZONE_TYPE_CONVENTIONAL;
++		zone->cond = BLK_ZONE_COND_NOT_WP;
++
++		sector += dev->zone_size_sects;
++	}
++
++	for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
++		struct blk_zone *zone = &dev->zones[i];
++
++		zone->start = zone->wp = sector;
++		if (zone->start + dev->zone_size_sects > dev_capacity_sects)
++			zone->len = dev_capacity_sects - zone->start;
++		else
++			zone->len = dev->zone_size_sects;
++		zone->capacity =
++			min_t(sector_t, zone->len, zone_capacity_sects);
++		zone->type = BLK_ZONE_TYPE_SEQWRITE_REQ;
++		zone->cond = BLK_ZONE_COND_EMPTY;
++
++		sector += dev->zone_size_sects;
++	}
++
++	q->limits.zoned = BLK_ZONED_HM;
++	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
++	blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE);
++
++	return 0;
++}
++
++int null_register_zoned_dev(struct nullb *nullb)
++{
++	struct nullb_device *dev = nullb->dev;
++	struct request_queue *q = nullb->q;
++
++	if (queue_is_mq(q)) {
++		int ret = blk_revalidate_disk_zones(nullb->disk, NULL);
++
++		if (ret)
++			return ret;
++	} else {
++		blk_queue_chunk_sectors(q, dev->zone_size_sects);
++		q->nr_zones = blkdev_nr_zones(nullb->disk);
++	}
++
++	blk_queue_max_zone_append_sectors(q, dev->zone_size_sects);
++	blk_queue_max_open_zones(q, dev->zone_max_open);
++	blk_queue_max_active_zones(q, dev->zone_max_active);
++
++	return 0;
++}
++
++void null_free_zoned_dev(struct nullb_device *dev)
++{
++	bitmap_free(dev->zone_locks);
++	kvfree(dev->zones);
++	dev->zones = NULL;
++}
++
++static inline void null_lock_zone(struct nullb_device *dev, unsigned int zno)
++{
++	if (dev->memory_backed)
++		wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE);
++	spin_lock_irq(&dev->zone_lock);
++}
++
++static inline void null_unlock_zone(struct nullb_device *dev, unsigned int zno)
++{
++	spin_unlock_irq(&dev->zone_lock);
++
++	if (dev->memory_backed)
++		clear_and_wake_up_bit(zno, dev->zone_locks);
++}
++
++int null_report_zones(struct gendisk *disk, sector_t sector,
++		unsigned int nr_zones, report_zones_cb cb, void *data)
++{
++	struct nullb *nullb = disk->private_data;
++	struct nullb_device *dev = nullb->dev;
++	unsigned int first_zone, i, zno;
++	struct blk_zone zone;
++	int error;
++
++	first_zone = null_zone_no(dev, sector);
++	if (first_zone >= dev->nr_zones)
++		return 0;
++
++	nr_zones = min(nr_zones, dev->nr_zones - first_zone);
++	trace_nullb_report_zones(nullb, nr_zones);
++
++	zno = first_zone;
++	for (i = 0; i < nr_zones; i++, zno++) {
++		/*
++		 * Stacked DM target drivers will remap the zone information by
++		 * modifying the zone information passed to the report callback.
++		 * So use a local copy to avoid corruption of the device zone
++		 * array.
++		 */
++		null_lock_zone(dev, zno);
++		memcpy(&zone, &dev->zones[zno], sizeof(struct blk_zone));
++		null_unlock_zone(dev, zno);
++
++		error = cb(&zone, i, data);
++		if (error)
++			return error;
++	}
++
++	return nr_zones;
++}
++
++/*
++ * This is called in the case of memory backing from null_process_cmd()
++ * with the target zone already locked.
++ */
++size_t null_zone_valid_read_len(struct nullb *nullb,
++				sector_t sector, unsigned int len)
++{
++	struct nullb_device *dev = nullb->dev;
++	struct blk_zone *zone = &dev->zones[null_zone_no(dev, sector)];
++	unsigned int nr_sectors = len >> SECTOR_SHIFT;
++
++	/* Read must be below the write pointer position */
++	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL ||
++	    sector + nr_sectors <= zone->wp)
++		return len;
++
++	if (sector > zone->wp)
++		return 0;
++
++	return (zone->wp - sector) << SECTOR_SHIFT;
++}
++
++static blk_status_t null_close_zone(struct nullb_device *dev, struct blk_zone *zone)
++{
++	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
++		return BLK_STS_IOERR;
++
++	switch (zone->cond) {
++	case BLK_ZONE_COND_CLOSED:
++		/* close operation on closed is not an error */
++		return BLK_STS_OK;
++	case BLK_ZONE_COND_IMP_OPEN:
++		dev->nr_zones_imp_open--;
++		break;
++	case BLK_ZONE_COND_EXP_OPEN:
++		dev->nr_zones_exp_open--;
++		break;
++	case BLK_ZONE_COND_EMPTY:
++	case BLK_ZONE_COND_FULL:
++	default:
++		return BLK_STS_IOERR;
++	}
++
++	if (zone->wp == zone->start) {
++		zone->cond = BLK_ZONE_COND_EMPTY;
++	} else {
++		zone->cond = BLK_ZONE_COND_CLOSED;
++		dev->nr_zones_closed++;
++	}
++
++	return BLK_STS_OK;
++}
++
++static void null_close_first_imp_zone(struct nullb_device *dev)
++{
++	unsigned int i;
++
++	for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
++		if (dev->zones[i].cond == BLK_ZONE_COND_IMP_OPEN) {
++			null_close_zone(dev, &dev->zones[i]);
++			return;
++		}
++	}
++}
++
++static blk_status_t null_check_active(struct nullb_device *dev)
++{
++	if (!dev->zone_max_active)
++		return BLK_STS_OK;
++
++	if (dev->nr_zones_exp_open + dev->nr_zones_imp_open +
++			dev->nr_zones_closed < dev->zone_max_active)
++		return BLK_STS_OK;
++
++	return BLK_STS_ZONE_ACTIVE_RESOURCE;
++}
++
++static blk_status_t null_check_open(struct nullb_device *dev)
++{
++	if (!dev->zone_max_open)
++		return BLK_STS_OK;
++
++	if (dev->nr_zones_exp_open + dev->nr_zones_imp_open < dev->zone_max_open)
++		return BLK_STS_OK;
++
++	if (dev->nr_zones_imp_open) {
++		if (null_check_active(dev) == BLK_STS_OK) {
++			null_close_first_imp_zone(dev);
++			return BLK_STS_OK;
++		}
++	}
++
++	return BLK_STS_ZONE_OPEN_RESOURCE;
++}
++
++/*
++ * This function matches the manage open zone resources function in the ZBC standard,
++ * with the addition of max active zones support (added in the ZNS standard).
++ *
++ * The function determines if a zone can transition to implicit open or explicit open,
++ * while maintaining the max open zone (and max active zone) limit(s). It may close an
++ * implicit open zone in order to make additional zone resources available.
++ *
++ * ZBC states that an implicit open zone shall be closed only if there is not
++ * room within the open limit. However, with the addition of an active limit,
++ * it is not certain that closing an implicit open zone will allow a new zone
++ * to be opened, since we might already be at the active limit capacity.
++ */
++static blk_status_t null_check_zone_resources(struct nullb_device *dev, struct blk_zone *zone)
++{
++	blk_status_t ret;
++
++	switch (zone->cond) {
++	case BLK_ZONE_COND_EMPTY:
++		ret = null_check_active(dev);
++		if (ret != BLK_STS_OK)
++			return ret;
++		fallthrough;
++	case BLK_ZONE_COND_CLOSED:
++		return null_check_open(dev);
++	default:
++		/* Should never be called for other states */
++		WARN_ON(1);
++		return BLK_STS_IOERR;
++	}
++}
++
++static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
++				    unsigned int nr_sectors, bool append)
++{
++	struct nullb_device *dev = cmd->nq->dev;
++	unsigned int zno = null_zone_no(dev, sector);
++	struct blk_zone *zone = &dev->zones[zno];
++	blk_status_t ret;
++
++	trace_nullb_zone_op(cmd, zno, zone->cond);
++
++	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) {
++		if (append)
++			return BLK_STS_IOERR;
++		return null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
++	}
++
++	null_lock_zone(dev, zno);
++
++	switch (zone->cond) {
++	case BLK_ZONE_COND_FULL:
++		/* Cannot write to a full zone */
++		ret = BLK_STS_IOERR;
++		goto unlock;
++	case BLK_ZONE_COND_EMPTY:
++	case BLK_ZONE_COND_CLOSED:
++		ret = null_check_zone_resources(dev, zone);
++		if (ret != BLK_STS_OK)
++			goto unlock;
++		break;
++	case BLK_ZONE_COND_IMP_OPEN:
++	case BLK_ZONE_COND_EXP_OPEN:
++		break;
++	default:
++		/* Invalid zone condition */
++		ret = BLK_STS_IOERR;
++		goto unlock;
++	}
++
++	/*
++	 * Regular writes must be at the write pointer position.
++	 * Zone append writes are automatically issued at the write
++	 * pointer and the position returned using the request or BIO
++	 * sector.
++	 */
++	if (append) {
++		sector = zone->wp;
++		if (cmd->bio)
++			cmd->bio->bi_iter.bi_sector = sector;
++		else
++			cmd->rq->__sector = sector;
++	} else if (sector != zone->wp) {
++		ret = BLK_STS_IOERR;
++		goto unlock;
++	}
++
++	if (zone->wp + nr_sectors > zone->start + zone->capacity) {
++		ret = BLK_STS_IOERR;
++		goto unlock;
++	}
++
++	if (zone->cond == BLK_ZONE_COND_CLOSED) {
++		dev->nr_zones_closed--;
++		dev->nr_zones_imp_open++;
++	} else if (zone->cond == BLK_ZONE_COND_EMPTY) {
++		dev->nr_zones_imp_open++;
++	}
++	if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
++		zone->cond = BLK_ZONE_COND_IMP_OPEN;
++
++	/*
++	 * Memory backing allocation may sleep: release the zone_lock spinlock
++	 * to avoid scheduling in atomic context. Zone operation atomicity is
++	 * still guaranteed through the zone_locks bitmap.
++	 */
++	if (dev->memory_backed)
++		spin_unlock_irq(&dev->zone_lock);
++	ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
++	if (dev->memory_backed)
++		spin_lock_irq(&dev->zone_lock);
++
++	if (ret != BLK_STS_OK)
++		goto unlock;
++
++	zone->wp += nr_sectors;
++	if (zone->wp == zone->start + zone->capacity) {
++		if (zone->cond == BLK_ZONE_COND_EXP_OPEN)
++			dev->nr_zones_exp_open--;
++		else if (zone->cond == BLK_ZONE_COND_IMP_OPEN)
++			dev->nr_zones_imp_open--;
++		zone->cond = BLK_ZONE_COND_FULL;
++	}
++	ret = BLK_STS_OK;
++
++unlock:
++	null_unlock_zone(dev, zno);
++
++	return ret;
++}
++
++static blk_status_t null_open_zone(struct nullb_device *dev, struct blk_zone *zone)
++{
++	blk_status_t ret;
++
++	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
++		return BLK_STS_IOERR;
++
++	switch (zone->cond) {
++	case BLK_ZONE_COND_EXP_OPEN:
++		/* open operation on exp open is not an error */
++		return BLK_STS_OK;
++	case BLK_ZONE_COND_EMPTY:
++		ret = null_check_zone_resources(dev, zone);
++		if (ret != BLK_STS_OK)
++			return ret;
++		break;
++	case BLK_ZONE_COND_IMP_OPEN:
++		dev->nr_zones_imp_open--;
++		break;
++	case BLK_ZONE_COND_CLOSED:
++		ret = null_check_zone_resources(dev, zone);
++		if (ret != BLK_STS_OK)
++			return ret;
++		dev->nr_zones_closed--;
++		break;
++	case BLK_ZONE_COND_FULL:
++	default:
++		return BLK_STS_IOERR;
++	}
++
++	zone->cond = BLK_ZONE_COND_EXP_OPEN;
++	dev->nr_zones_exp_open++;
++
++	return BLK_STS_OK;
++}
++
++static blk_status_t null_finish_zone(struct nullb_device *dev, struct blk_zone *zone)
++{
++	blk_status_t ret;
++
++	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
++		return BLK_STS_IOERR;
++
++	switch (zone->cond) {
++	case BLK_ZONE_COND_FULL:
++		/* finish operation on full is not an error */
++		return BLK_STS_OK;
++	case BLK_ZONE_COND_EMPTY:
++		ret = null_check_zone_resources(dev, zone);
++		if (ret != BLK_STS_OK)
++			return ret;
++		break;
++	case BLK_ZONE_COND_IMP_OPEN:
++		dev->nr_zones_imp_open--;
++		break;
++	case BLK_ZONE_COND_EXP_OPEN:
++		dev->nr_zones_exp_open--;
++		break;
++	case BLK_ZONE_COND_CLOSED:
++		ret = null_check_zone_resources(dev, zone);
++		if (ret != BLK_STS_OK)
++			return ret;
++		dev->nr_zones_closed--;
++		break;
++	default:
++		return BLK_STS_IOERR;
++	}
++
++	zone->cond = BLK_ZONE_COND_FULL;
++	zone->wp = zone->start + zone->len;
++
++	return BLK_STS_OK;
++}
++
++static blk_status_t null_reset_zone(struct nullb_device *dev, struct blk_zone *zone)
++{
++	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
++		return BLK_STS_IOERR;
++
++	switch (zone->cond) {
++	case BLK_ZONE_COND_EMPTY:
++		/* reset operation on empty is not an error */
++		return BLK_STS_OK;
++	case BLK_ZONE_COND_IMP_OPEN:
++		dev->nr_zones_imp_open--;
++		break;
++	case BLK_ZONE_COND_EXP_OPEN:
++		dev->nr_zones_exp_open--;
++		break;
++	case BLK_ZONE_COND_CLOSED:
++		dev->nr_zones_closed--;
++		break;
++	case BLK_ZONE_COND_FULL:
++		break;
++	default:
++		return BLK_STS_IOERR;
++	}
++
++	zone->cond = BLK_ZONE_COND_EMPTY;
++	zone->wp = zone->start;
++
++	return BLK_STS_OK;
++}
++
++static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
++				   sector_t sector)
++{
++	struct nullb_device *dev = cmd->nq->dev;
++	unsigned int zone_no;
++	struct blk_zone *zone;
++	blk_status_t ret;
++	size_t i;
++
++	if (op == REQ_OP_ZONE_RESET_ALL) {
++		for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
++			null_lock_zone(dev, i);
++			zone = &dev->zones[i];
++			if (zone->cond != BLK_ZONE_COND_EMPTY) {
++				null_reset_zone(dev, zone);
++				trace_nullb_zone_op(cmd, i, zone->cond);
++			}
++			null_unlock_zone(dev, i);
++		}
++		return BLK_STS_OK;
++	}
++
++	zone_no = null_zone_no(dev, sector);
++	zone = &dev->zones[zone_no];
++
++	null_lock_zone(dev, zone_no);
++
++	switch (op) {
++	case REQ_OP_ZONE_RESET:
++		ret = null_reset_zone(dev, zone);
++		break;
++	case REQ_OP_ZONE_OPEN:
++		ret = null_open_zone(dev, zone);
++		break;
++	case REQ_OP_ZONE_CLOSE:
++		ret = null_close_zone(dev, zone);
++		break;
++	case REQ_OP_ZONE_FINISH:
++		ret = null_finish_zone(dev, zone);
++		break;
++	default:
++		ret = BLK_STS_NOTSUPP;
++		break;
++	}
++
++	if (ret == BLK_STS_OK)
++		trace_nullb_zone_op(cmd, zone_no, zone->cond);
++
++	null_unlock_zone(dev, zone_no);
++
++	return ret;
++}
++
++blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd, enum req_opf op,
++				    sector_t sector, sector_t nr_sectors)
++{
++	struct nullb_device *dev = cmd->nq->dev;
++	unsigned int zno = null_zone_no(dev, sector);
++	blk_status_t sts;
++
++	switch (op) {
++	case REQ_OP_WRITE:
++		sts = null_zone_write(cmd, sector, nr_sectors, false);
++		break;
++	case REQ_OP_ZONE_APPEND:
++		sts = null_zone_write(cmd, sector, nr_sectors, true);
++		break;
++	case REQ_OP_ZONE_RESET:
++	case REQ_OP_ZONE_RESET_ALL:
++	case REQ_OP_ZONE_OPEN:
++	case REQ_OP_ZONE_CLOSE:
++	case REQ_OP_ZONE_FINISH:
++		sts = null_zone_mgmt(cmd, op, sector);
++		break;
++	default:
++		null_lock_zone(dev, zno);
++		sts = null_process_cmd(cmd, op, sector, nr_sectors);
++		null_unlock_zone(dev, zno);
++	}
++
++	return sts;
++}
+diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
+deleted file mode 100644
+index c6ba8f9f3f311..0000000000000
+--- a/drivers/block/null_blk_main.c
++++ /dev/null
+@@ -1,2036 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * Add configfs and memory store: Kyungchan Koh <kkc6196@fb.com> and
+- * Shaohua Li <shli@fb.com>
+- */
+-#include <linux/module.h>
+-
+-#include <linux/moduleparam.h>
+-#include <linux/sched.h>
+-#include <linux/fs.h>
+-#include <linux/init.h>
+-#include "null_blk.h"
+-
+-#define PAGE_SECTORS_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)
+-#define PAGE_SECTORS		(1 << PAGE_SECTORS_SHIFT)
+-#define SECTOR_MASK		(PAGE_SECTORS - 1)
+-
+-#define FREE_BATCH		16
+-
+-#define TICKS_PER_SEC		50ULL
+-#define TIMER_INTERVAL		(NSEC_PER_SEC / TICKS_PER_SEC)
+-
+-#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
+-static DECLARE_FAULT_ATTR(null_timeout_attr);
+-static DECLARE_FAULT_ATTR(null_requeue_attr);
+-static DECLARE_FAULT_ATTR(null_init_hctx_attr);
+-#endif
+-
+-static inline u64 mb_per_tick(int mbps)
+-{
+-	return (1 << 20) / TICKS_PER_SEC * ((u64) mbps);
+-}
+-
+-/*
+- * Status flags for nullb_device.
+- *
+- * CONFIGURED:	Device has been configured and turned on. Cannot reconfigure.
+- * UP:		Device is currently on and visible in userspace.
+- * THROTTLED:	Device is being throttled.
+- * CACHE:	Device is using a write-back cache.
+- */
+-enum nullb_device_flags {
+-	NULLB_DEV_FL_CONFIGURED	= 0,
+-	NULLB_DEV_FL_UP		= 1,
+-	NULLB_DEV_FL_THROTTLED	= 2,
+-	NULLB_DEV_FL_CACHE	= 3,
+-};
+-
+-#define MAP_SZ		((PAGE_SIZE >> SECTOR_SHIFT) + 2)
+-/*
+- * nullb_page is a page in memory for nullb devices.
+- *
+- * @page:	The page holding the data.
+- * @bitmap:	The bitmap represents which sector in the page has data.
+- *		Each bit represents one block size. For example, sector 8
+- *		will use the 7th bit
+- * The highest 2 bits of bitmap are for special purpose. LOCK means the cache
+- * page is being flushing to storage. FREE means the cache page is freed and
+- * should be skipped from flushing to storage. Please see
+- * null_make_cache_space
+- */
+-struct nullb_page {
+-	struct page *page;
+-	DECLARE_BITMAP(bitmap, MAP_SZ);
+-};
+-#define NULLB_PAGE_LOCK (MAP_SZ - 1)
+-#define NULLB_PAGE_FREE (MAP_SZ - 2)
+-
+-static LIST_HEAD(nullb_list);
+-static struct mutex lock;
+-static int null_major;
+-static DEFINE_IDA(nullb_indexes);
+-static struct blk_mq_tag_set tag_set;
+-
+-enum {
+-	NULL_IRQ_NONE		= 0,
+-	NULL_IRQ_SOFTIRQ	= 1,
+-	NULL_IRQ_TIMER		= 2,
+-};
+-
+-enum {
+-	NULL_Q_BIO		= 0,
+-	NULL_Q_RQ		= 1,
+-	NULL_Q_MQ		= 2,
+-};
+-
+-static int g_no_sched;
+-module_param_named(no_sched, g_no_sched, int, 0444);
+-MODULE_PARM_DESC(no_sched, "No io scheduler");
+-
+-static int g_submit_queues = 1;
+-module_param_named(submit_queues, g_submit_queues, int, 0444);
+-MODULE_PARM_DESC(submit_queues, "Number of submission queues");
+-
+-static int g_home_node = NUMA_NO_NODE;
+-module_param_named(home_node, g_home_node, int, 0444);
+-MODULE_PARM_DESC(home_node, "Home node for the device");
+-
+-#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
+-/*
+- * For more details about fault injection, please refer to
+- * Documentation/fault-injection/fault-injection.rst.
+- */
+-static char g_timeout_str[80];
+-module_param_string(timeout, g_timeout_str, sizeof(g_timeout_str), 0444);
+-MODULE_PARM_DESC(timeout, "Fault injection. timeout=<interval>,<probability>,<space>,<times>");
+-
+-static char g_requeue_str[80];
+-module_param_string(requeue, g_requeue_str, sizeof(g_requeue_str), 0444);
+-MODULE_PARM_DESC(requeue, "Fault injection. requeue=<interval>,<probability>,<space>,<times>");
+-
+-static char g_init_hctx_str[80];
+-module_param_string(init_hctx, g_init_hctx_str, sizeof(g_init_hctx_str), 0444);
+-MODULE_PARM_DESC(init_hctx, "Fault injection to fail hctx init. init_hctx=<interval>,<probability>,<space>,<times>");
+-#endif
+-
+-static int g_queue_mode = NULL_Q_MQ;
+-
+-static int null_param_store_val(const char *str, int *val, int min, int max)
+-{
+-	int ret, new_val;
+-
+-	ret = kstrtoint(str, 10, &new_val);
+-	if (ret)
+-		return -EINVAL;
+-
+-	if (new_val < min || new_val > max)
+-		return -EINVAL;
+-
+-	*val = new_val;
+-	return 0;
+-}
+-
+-static int null_set_queue_mode(const char *str, const struct kernel_param *kp)
+-{
+-	return null_param_store_val(str, &g_queue_mode, NULL_Q_BIO, NULL_Q_MQ);
+-}
+-
+-static const struct kernel_param_ops null_queue_mode_param_ops = {
+-	.set	= null_set_queue_mode,
+-	.get	= param_get_int,
+-};
+-
+-device_param_cb(queue_mode, &null_queue_mode_param_ops, &g_queue_mode, 0444);
+-MODULE_PARM_DESC(queue_mode, "Block interface to use (0=bio,1=rq,2=multiqueue)");
+-
+-static int g_gb = 250;
+-module_param_named(gb, g_gb, int, 0444);
+-MODULE_PARM_DESC(gb, "Size in GB");
+-
+-static int g_bs = 512;
+-module_param_named(bs, g_bs, int, 0444);
+-MODULE_PARM_DESC(bs, "Block size (in bytes)");
+-
+-static unsigned int nr_devices = 1;
+-module_param(nr_devices, uint, 0444);
+-MODULE_PARM_DESC(nr_devices, "Number of devices to register");
+-
+-static bool g_blocking;
+-module_param_named(blocking, g_blocking, bool, 0444);
+-MODULE_PARM_DESC(blocking, "Register as a blocking blk-mq driver device");
+-
+-static bool shared_tags;
+-module_param(shared_tags, bool, 0444);
+-MODULE_PARM_DESC(shared_tags, "Share tag set between devices for blk-mq");
+-
+-static bool g_shared_tag_bitmap;
+-module_param_named(shared_tag_bitmap, g_shared_tag_bitmap, bool, 0444);
+-MODULE_PARM_DESC(shared_tag_bitmap, "Use shared tag bitmap for all submission queues for blk-mq");
+-
+-static int g_irqmode = NULL_IRQ_SOFTIRQ;
+-
+-static int null_set_irqmode(const char *str, const struct kernel_param *kp)
+-{
+-	return null_param_store_val(str, &g_irqmode, NULL_IRQ_NONE,
+-					NULL_IRQ_TIMER);
+-}
+-
+-static const struct kernel_param_ops null_irqmode_param_ops = {
+-	.set	= null_set_irqmode,
+-	.get	= param_get_int,
+-};
+-
+-device_param_cb(irqmode, &null_irqmode_param_ops, &g_irqmode, 0444);
+-MODULE_PARM_DESC(irqmode, "IRQ completion handler. 0-none, 1-softirq, 2-timer");
+-
+-static unsigned long g_completion_nsec = 10000;
+-module_param_named(completion_nsec, g_completion_nsec, ulong, 0444);
+-MODULE_PARM_DESC(completion_nsec, "Time in ns to complete a request in hardware. Default: 10,000ns");
+-
+-static int g_hw_queue_depth = 64;
+-module_param_named(hw_queue_depth, g_hw_queue_depth, int, 0444);
+-MODULE_PARM_DESC(hw_queue_depth, "Queue depth for each hardware queue. Default: 64");
+-
+-static bool g_use_per_node_hctx;
+-module_param_named(use_per_node_hctx, g_use_per_node_hctx, bool, 0444);
+-MODULE_PARM_DESC(use_per_node_hctx, "Use per-node allocation for hardware context queues. Default: false");
+-
+-static bool g_zoned;
+-module_param_named(zoned, g_zoned, bool, S_IRUGO);
+-MODULE_PARM_DESC(zoned, "Make device as a host-managed zoned block device. Default: false");
+-
+-static unsigned long g_zone_size = 256;
+-module_param_named(zone_size, g_zone_size, ulong, S_IRUGO);
+-MODULE_PARM_DESC(zone_size, "Zone size in MB when block device is zoned. Must be power-of-two: Default: 256");
+-
+-static unsigned long g_zone_capacity;
+-module_param_named(zone_capacity, g_zone_capacity, ulong, 0444);
+-MODULE_PARM_DESC(zone_capacity, "Zone capacity in MB when block device is zoned. Can be less than or equal to zone size. Default: Zone size");
+-
+-static unsigned int g_zone_nr_conv;
+-module_param_named(zone_nr_conv, g_zone_nr_conv, uint, 0444);
+-MODULE_PARM_DESC(zone_nr_conv, "Number of conventional zones when block device is zoned. Default: 0");
+-
+-static unsigned int g_zone_max_open;
+-module_param_named(zone_max_open, g_zone_max_open, uint, 0444);
+-MODULE_PARM_DESC(zone_max_open, "Maximum number of open zones when block device is zoned. Default: 0 (no limit)");
+-
+-static unsigned int g_zone_max_active;
+-module_param_named(zone_max_active, g_zone_max_active, uint, 0444);
+-MODULE_PARM_DESC(zone_max_active, "Maximum number of active zones when block device is zoned. Default: 0 (no limit)");
+-
+-static struct nullb_device *null_alloc_dev(void);
+-static void null_free_dev(struct nullb_device *dev);
+-static void null_del_dev(struct nullb *nullb);
+-static int null_add_dev(struct nullb_device *dev);
+-static void null_free_device_storage(struct nullb_device *dev, bool is_cache);
+-
+-static inline struct nullb_device *to_nullb_device(struct config_item *item)
+-{
+-	return item ? container_of(item, struct nullb_device, item) : NULL;
+-}
+-
+-static inline ssize_t nullb_device_uint_attr_show(unsigned int val, char *page)
+-{
+-	return snprintf(page, PAGE_SIZE, "%u\n", val);
+-}
+-
+-static inline ssize_t nullb_device_ulong_attr_show(unsigned long val,
+-	char *page)
+-{
+-	return snprintf(page, PAGE_SIZE, "%lu\n", val);
+-}
+-
+-static inline ssize_t nullb_device_bool_attr_show(bool val, char *page)
+-{
+-	return snprintf(page, PAGE_SIZE, "%u\n", val);
+-}
+-
+-static ssize_t nullb_device_uint_attr_store(unsigned int *val,
+-	const char *page, size_t count)
+-{
+-	unsigned int tmp;
+-	int result;
+-
+-	result = kstrtouint(page, 0, &tmp);
+-	if (result < 0)
+-		return result;
+-
+-	*val = tmp;
+-	return count;
+-}
+-
+-static ssize_t nullb_device_ulong_attr_store(unsigned long *val,
+-	const char *page, size_t count)
+-{
+-	int result;
+-	unsigned long tmp;
+-
+-	result = kstrtoul(page, 0, &tmp);
+-	if (result < 0)
+-		return result;
+-
+-	*val = tmp;
+-	return count;
+-}
+-
+-static ssize_t nullb_device_bool_attr_store(bool *val, const char *page,
+-	size_t count)
+-{
+-	bool tmp;
+-	int result;
+-
+-	result = kstrtobool(page,  &tmp);
+-	if (result < 0)
+-		return result;
+-
+-	*val = tmp;
+-	return count;
+-}
+-
+-/* The following macro should only be used with TYPE = {uint, ulong, bool}. */
+-#define NULLB_DEVICE_ATTR(NAME, TYPE, APPLY)				\
+-static ssize_t								\
+-nullb_device_##NAME##_show(struct config_item *item, char *page)	\
+-{									\
+-	return nullb_device_##TYPE##_attr_show(				\
+-				to_nullb_device(item)->NAME, page);	\
+-}									\
+-static ssize_t								\
+-nullb_device_##NAME##_store(struct config_item *item, const char *page,	\
+-			    size_t count)				\
+-{									\
+-	int (*apply_fn)(struct nullb_device *dev, TYPE new_value) = APPLY;\
+-	struct nullb_device *dev = to_nullb_device(item);		\
+-	TYPE new_value = 0;						\
+-	int ret;							\
+-									\
+-	ret = nullb_device_##TYPE##_attr_store(&new_value, page, count);\
+-	if (ret < 0)							\
+-		return ret;						\
+-	if (apply_fn)							\
+-		ret = apply_fn(dev, new_value);				\
+-	else if (test_bit(NULLB_DEV_FL_CONFIGURED, &dev->flags)) 	\
+-		ret = -EBUSY;						\
+-	if (ret < 0)							\
+-		return ret;						\
+-	dev->NAME = new_value;						\
+-	return count;							\
+-}									\
+-CONFIGFS_ATTR(nullb_device_, NAME);
+-
+-static int nullb_apply_submit_queues(struct nullb_device *dev,
+-				     unsigned int submit_queues)
+-{
+-	struct nullb *nullb = dev->nullb;
+-	struct blk_mq_tag_set *set;
+-
+-	if (!nullb)
+-		return 0;
+-
+-	/*
+-	 * Make sure that null_init_hctx() does not access nullb->queues[] past
+-	 * the end of that array.
+-	 */
+-	if (submit_queues > nr_cpu_ids)
+-		return -EINVAL;
+-	set = nullb->tag_set;
+-	blk_mq_update_nr_hw_queues(set, submit_queues);
+-	return set->nr_hw_queues == submit_queues ? 0 : -ENOMEM;
+-}
+-
+-NULLB_DEVICE_ATTR(size, ulong, NULL);
+-NULLB_DEVICE_ATTR(completion_nsec, ulong, NULL);
+-NULLB_DEVICE_ATTR(submit_queues, uint, nullb_apply_submit_queues);
+-NULLB_DEVICE_ATTR(home_node, uint, NULL);
+-NULLB_DEVICE_ATTR(queue_mode, uint, NULL);
+-NULLB_DEVICE_ATTR(blocksize, uint, NULL);
+-NULLB_DEVICE_ATTR(irqmode, uint, NULL);
+-NULLB_DEVICE_ATTR(hw_queue_depth, uint, NULL);
+-NULLB_DEVICE_ATTR(index, uint, NULL);
+-NULLB_DEVICE_ATTR(blocking, bool, NULL);
+-NULLB_DEVICE_ATTR(use_per_node_hctx, bool, NULL);
+-NULLB_DEVICE_ATTR(memory_backed, bool, NULL);
+-NULLB_DEVICE_ATTR(discard, bool, NULL);
+-NULLB_DEVICE_ATTR(mbps, uint, NULL);
+-NULLB_DEVICE_ATTR(cache_size, ulong, NULL);
+-NULLB_DEVICE_ATTR(zoned, bool, NULL);
+-NULLB_DEVICE_ATTR(zone_size, ulong, NULL);
+-NULLB_DEVICE_ATTR(zone_capacity, ulong, NULL);
+-NULLB_DEVICE_ATTR(zone_nr_conv, uint, NULL);
+-NULLB_DEVICE_ATTR(zone_max_open, uint, NULL);
+-NULLB_DEVICE_ATTR(zone_max_active, uint, NULL);
+-
+-static ssize_t nullb_device_power_show(struct config_item *item, char *page)
+-{
+-	return nullb_device_bool_attr_show(to_nullb_device(item)->power, page);
+-}
+-
+-static ssize_t nullb_device_power_store(struct config_item *item,
+-				     const char *page, size_t count)
+-{
+-	struct nullb_device *dev = to_nullb_device(item);
+-	bool newp = false;
+-	ssize_t ret;
+-
+-	ret = nullb_device_bool_attr_store(&newp, page, count);
+-	if (ret < 0)
+-		return ret;
+-
+-	if (!dev->power && newp) {
+-		if (test_and_set_bit(NULLB_DEV_FL_UP, &dev->flags))
+-			return count;
+-		if (null_add_dev(dev)) {
+-			clear_bit(NULLB_DEV_FL_UP, &dev->flags);
+-			return -ENOMEM;
+-		}
+-
+-		set_bit(NULLB_DEV_FL_CONFIGURED, &dev->flags);
+-		dev->power = newp;
+-	} else if (dev->power && !newp) {
+-		if (test_and_clear_bit(NULLB_DEV_FL_UP, &dev->flags)) {
+-			mutex_lock(&lock);
+-			dev->power = newp;
+-			null_del_dev(dev->nullb);
+-			mutex_unlock(&lock);
+-		}
+-		clear_bit(NULLB_DEV_FL_CONFIGURED, &dev->flags);
+-	}
+-
+-	return count;
+-}
+-
+-CONFIGFS_ATTR(nullb_device_, power);
+-
+-static ssize_t nullb_device_badblocks_show(struct config_item *item, char *page)
+-{
+-	struct nullb_device *t_dev = to_nullb_device(item);
+-
+-	return badblocks_show(&t_dev->badblocks, page, 0);
+-}
+-
+-static ssize_t nullb_device_badblocks_store(struct config_item *item,
+-				     const char *page, size_t count)
+-{
+-	struct nullb_device *t_dev = to_nullb_device(item);
+-	char *orig, *buf, *tmp;
+-	u64 start, end;
+-	int ret;
+-
+-	orig = kstrndup(page, count, GFP_KERNEL);
+-	if (!orig)
+-		return -ENOMEM;
+-
+-	buf = strstrip(orig);
+-
+-	ret = -EINVAL;
+-	if (buf[0] != '+' && buf[0] != '-')
+-		goto out;
+-	tmp = strchr(&buf[1], '-');
+-	if (!tmp)
+-		goto out;
+-	*tmp = '\0';
+-	ret = kstrtoull(buf + 1, 0, &start);
+-	if (ret)
+-		goto out;
+-	ret = kstrtoull(tmp + 1, 0, &end);
+-	if (ret)
+-		goto out;
+-	ret = -EINVAL;
+-	if (start > end)
+-		goto out;
+-	/* enable badblocks */
+-	cmpxchg(&t_dev->badblocks.shift, -1, 0);
+-	if (buf[0] == '+')
+-		ret = badblocks_set(&t_dev->badblocks, start,
+-			end - start + 1, 1);
+-	else
+-		ret = badblocks_clear(&t_dev->badblocks, start,
+-			end - start + 1);
+-	if (ret == 0)
+-		ret = count;
+-out:
+-	kfree(orig);
+-	return ret;
+-}
+-CONFIGFS_ATTR(nullb_device_, badblocks);
+-
+-static struct configfs_attribute *nullb_device_attrs[] = {
+-	&nullb_device_attr_size,
+-	&nullb_device_attr_completion_nsec,
+-	&nullb_device_attr_submit_queues,
+-	&nullb_device_attr_home_node,
+-	&nullb_device_attr_queue_mode,
+-	&nullb_device_attr_blocksize,
+-	&nullb_device_attr_irqmode,
+-	&nullb_device_attr_hw_queue_depth,
+-	&nullb_device_attr_index,
+-	&nullb_device_attr_blocking,
+-	&nullb_device_attr_use_per_node_hctx,
+-	&nullb_device_attr_power,
+-	&nullb_device_attr_memory_backed,
+-	&nullb_device_attr_discard,
+-	&nullb_device_attr_mbps,
+-	&nullb_device_attr_cache_size,
+-	&nullb_device_attr_badblocks,
+-	&nullb_device_attr_zoned,
+-	&nullb_device_attr_zone_size,
+-	&nullb_device_attr_zone_capacity,
+-	&nullb_device_attr_zone_nr_conv,
+-	&nullb_device_attr_zone_max_open,
+-	&nullb_device_attr_zone_max_active,
+-	NULL,
+-};
+-
+-static void nullb_device_release(struct config_item *item)
+-{
+-	struct nullb_device *dev = to_nullb_device(item);
+-
+-	null_free_device_storage(dev, false);
+-	null_free_dev(dev);
+-}
+-
+-static struct configfs_item_operations nullb_device_ops = {
+-	.release	= nullb_device_release,
+-};
+-
+-static const struct config_item_type nullb_device_type = {
+-	.ct_item_ops	= &nullb_device_ops,
+-	.ct_attrs	= nullb_device_attrs,
+-	.ct_owner	= THIS_MODULE,
+-};
+-
+-static struct
+-config_item *nullb_group_make_item(struct config_group *group, const char *name)
+-{
+-	struct nullb_device *dev;
+-
+-	dev = null_alloc_dev();
+-	if (!dev)
+-		return ERR_PTR(-ENOMEM);
+-
+-	config_item_init_type_name(&dev->item, name, &nullb_device_type);
+-
+-	return &dev->item;
+-}
+-
+-static void
+-nullb_group_drop_item(struct config_group *group, struct config_item *item)
+-{
+-	struct nullb_device *dev = to_nullb_device(item);
+-
+-	if (test_and_clear_bit(NULLB_DEV_FL_UP, &dev->flags)) {
+-		mutex_lock(&lock);
+-		dev->power = false;
+-		null_del_dev(dev->nullb);
+-		mutex_unlock(&lock);
+-	}
+-
+-	config_item_put(item);
+-}
+-
+-static ssize_t memb_group_features_show(struct config_item *item, char *page)
+-{
+-	return snprintf(page, PAGE_SIZE,
+-			"memory_backed,discard,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active\n");
+-}
+-
+-CONFIGFS_ATTR_RO(memb_group_, features);
+-
+-static struct configfs_attribute *nullb_group_attrs[] = {
+-	&memb_group_attr_features,
+-	NULL,
+-};
+-
+-static struct configfs_group_operations nullb_group_ops = {
+-	.make_item	= nullb_group_make_item,
+-	.drop_item	= nullb_group_drop_item,
+-};
+-
+-static const struct config_item_type nullb_group_type = {
+-	.ct_group_ops	= &nullb_group_ops,
+-	.ct_attrs	= nullb_group_attrs,
+-	.ct_owner	= THIS_MODULE,
+-};
+-
+-static struct configfs_subsystem nullb_subsys = {
+-	.su_group = {
+-		.cg_item = {
+-			.ci_namebuf = "nullb",
+-			.ci_type = &nullb_group_type,
+-		},
+-	},
+-};
+-
+-static inline int null_cache_active(struct nullb *nullb)
+-{
+-	return test_bit(NULLB_DEV_FL_CACHE, &nullb->dev->flags);
+-}
+-
+-static struct nullb_device *null_alloc_dev(void)
+-{
+-	struct nullb_device *dev;
+-
+-	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+-	if (!dev)
+-		return NULL;
+-	INIT_RADIX_TREE(&dev->data, GFP_ATOMIC);
+-	INIT_RADIX_TREE(&dev->cache, GFP_ATOMIC);
+-	if (badblocks_init(&dev->badblocks, 0)) {
+-		kfree(dev);
+-		return NULL;
+-	}
+-
+-	dev->size = g_gb * 1024;
+-	dev->completion_nsec = g_completion_nsec;
+-	dev->submit_queues = g_submit_queues;
+-	dev->home_node = g_home_node;
+-	dev->queue_mode = g_queue_mode;
+-	dev->blocksize = g_bs;
+-	dev->irqmode = g_irqmode;
+-	dev->hw_queue_depth = g_hw_queue_depth;
+-	dev->blocking = g_blocking;
+-	dev->use_per_node_hctx = g_use_per_node_hctx;
+-	dev->zoned = g_zoned;
+-	dev->zone_size = g_zone_size;
+-	dev->zone_capacity = g_zone_capacity;
+-	dev->zone_nr_conv = g_zone_nr_conv;
+-	dev->zone_max_open = g_zone_max_open;
+-	dev->zone_max_active = g_zone_max_active;
+-	return dev;
+-}
+-
+-static void null_free_dev(struct nullb_device *dev)
+-{
+-	if (!dev)
+-		return;
+-
+-	null_free_zoned_dev(dev);
+-	badblocks_exit(&dev->badblocks);
+-	kfree(dev);
+-}
+-
+-static void put_tag(struct nullb_queue *nq, unsigned int tag)
+-{
+-	clear_bit_unlock(tag, nq->tag_map);
+-
+-	if (waitqueue_active(&nq->wait))
+-		wake_up(&nq->wait);
+-}
+-
+-static unsigned int get_tag(struct nullb_queue *nq)
+-{
+-	unsigned int tag;
+-
+-	do {
+-		tag = find_first_zero_bit(nq->tag_map, nq->queue_depth);
+-		if (tag >= nq->queue_depth)
+-			return -1U;
+-	} while (test_and_set_bit_lock(tag, nq->tag_map));
+-
+-	return tag;
+-}
+-
+-static void free_cmd(struct nullb_cmd *cmd)
+-{
+-	put_tag(cmd->nq, cmd->tag);
+-}
+-
+-static enum hrtimer_restart null_cmd_timer_expired(struct hrtimer *timer);
+-
+-static struct nullb_cmd *__alloc_cmd(struct nullb_queue *nq)
+-{
+-	struct nullb_cmd *cmd;
+-	unsigned int tag;
+-
+-	tag = get_tag(nq);
+-	if (tag != -1U) {
+-		cmd = &nq->cmds[tag];
+-		cmd->tag = tag;
+-		cmd->error = BLK_STS_OK;
+-		cmd->nq = nq;
+-		if (nq->dev->irqmode == NULL_IRQ_TIMER) {
+-			hrtimer_init(&cmd->timer, CLOCK_MONOTONIC,
+-				     HRTIMER_MODE_REL);
+-			cmd->timer.function = null_cmd_timer_expired;
+-		}
+-		return cmd;
+-	}
+-
+-	return NULL;
+-}
+-
+-static struct nullb_cmd *alloc_cmd(struct nullb_queue *nq, int can_wait)
+-{
+-	struct nullb_cmd *cmd;
+-	DEFINE_WAIT(wait);
+-
+-	cmd = __alloc_cmd(nq);
+-	if (cmd || !can_wait)
+-		return cmd;
+-
+-	do {
+-		prepare_to_wait(&nq->wait, &wait, TASK_UNINTERRUPTIBLE);
+-		cmd = __alloc_cmd(nq);
+-		if (cmd)
+-			break;
+-
+-		io_schedule();
+-	} while (1);
+-
+-	finish_wait(&nq->wait, &wait);
+-	return cmd;
+-}
+-
+-static void end_cmd(struct nullb_cmd *cmd)
+-{
+-	int queue_mode = cmd->nq->dev->queue_mode;
+-
+-	switch (queue_mode)  {
+-	case NULL_Q_MQ:
+-		blk_mq_end_request(cmd->rq, cmd->error);
+-		return;
+-	case NULL_Q_BIO:
+-		cmd->bio->bi_status = cmd->error;
+-		bio_endio(cmd->bio);
+-		break;
+-	}
+-
+-	free_cmd(cmd);
+-}
+-
+-static enum hrtimer_restart null_cmd_timer_expired(struct hrtimer *timer)
+-{
+-	end_cmd(container_of(timer, struct nullb_cmd, timer));
+-
+-	return HRTIMER_NORESTART;
+-}
+-
+-static void null_cmd_end_timer(struct nullb_cmd *cmd)
+-{
+-	ktime_t kt = cmd->nq->dev->completion_nsec;
+-
+-	hrtimer_start(&cmd->timer, kt, HRTIMER_MODE_REL);
+-}
+-
+-static void null_complete_rq(struct request *rq)
+-{
+-	end_cmd(blk_mq_rq_to_pdu(rq));
+-}
+-
+-static struct nullb_page *null_alloc_page(gfp_t gfp_flags)
+-{
+-	struct nullb_page *t_page;
+-
+-	t_page = kmalloc(sizeof(struct nullb_page), gfp_flags);
+-	if (!t_page)
+-		goto out;
+-
+-	t_page->page = alloc_pages(gfp_flags, 0);
+-	if (!t_page->page)
+-		goto out_freepage;
+-
+-	memset(t_page->bitmap, 0, sizeof(t_page->bitmap));
+-	return t_page;
+-out_freepage:
+-	kfree(t_page);
+-out:
+-	return NULL;
+-}
+-
+-static void null_free_page(struct nullb_page *t_page)
+-{
+-	__set_bit(NULLB_PAGE_FREE, t_page->bitmap);
+-	if (test_bit(NULLB_PAGE_LOCK, t_page->bitmap))
+-		return;
+-	__free_page(t_page->page);
+-	kfree(t_page);
+-}
+-
+-static bool null_page_empty(struct nullb_page *page)
+-{
+-	int size = MAP_SZ - 2;
+-
+-	return find_first_bit(page->bitmap, size) == size;
+-}
+-
+-static void null_free_sector(struct nullb *nullb, sector_t sector,
+-	bool is_cache)
+-{
+-	unsigned int sector_bit;
+-	u64 idx;
+-	struct nullb_page *t_page, *ret;
+-	struct radix_tree_root *root;
+-
+-	root = is_cache ? &nullb->dev->cache : &nullb->dev->data;
+-	idx = sector >> PAGE_SECTORS_SHIFT;
+-	sector_bit = (sector & SECTOR_MASK);
+-
+-	t_page = radix_tree_lookup(root, idx);
+-	if (t_page) {
+-		__clear_bit(sector_bit, t_page->bitmap);
+-
+-		if (null_page_empty(t_page)) {
+-			ret = radix_tree_delete_item(root, idx, t_page);
+-			WARN_ON(ret != t_page);
+-			null_free_page(ret);
+-			if (is_cache)
+-				nullb->dev->curr_cache -= PAGE_SIZE;
+-		}
+-	}
+-}
+-
+-static struct nullb_page *null_radix_tree_insert(struct nullb *nullb, u64 idx,
+-	struct nullb_page *t_page, bool is_cache)
+-{
+-	struct radix_tree_root *root;
+-
+-	root = is_cache ? &nullb->dev->cache : &nullb->dev->data;
+-
+-	if (radix_tree_insert(root, idx, t_page)) {
+-		null_free_page(t_page);
+-		t_page = radix_tree_lookup(root, idx);
+-		WARN_ON(!t_page || t_page->page->index != idx);
+-	} else if (is_cache)
+-		nullb->dev->curr_cache += PAGE_SIZE;
+-
+-	return t_page;
+-}
+-
+-static void null_free_device_storage(struct nullb_device *dev, bool is_cache)
+-{
+-	unsigned long pos = 0;
+-	int nr_pages;
+-	struct nullb_page *ret, *t_pages[FREE_BATCH];
+-	struct radix_tree_root *root;
+-
+-	root = is_cache ? &dev->cache : &dev->data;
+-
+-	do {
+-		int i;
+-
+-		nr_pages = radix_tree_gang_lookup(root,
+-				(void **)t_pages, pos, FREE_BATCH);
+-
+-		for (i = 0; i < nr_pages; i++) {
+-			pos = t_pages[i]->page->index;
+-			ret = radix_tree_delete_item(root, pos, t_pages[i]);
+-			WARN_ON(ret != t_pages[i]);
+-			null_free_page(ret);
+-		}
+-
+-		pos++;
+-	} while (nr_pages == FREE_BATCH);
+-
+-	if (is_cache)
+-		dev->curr_cache = 0;
+-}
+-
+-static struct nullb_page *__null_lookup_page(struct nullb *nullb,
+-	sector_t sector, bool for_write, bool is_cache)
+-{
+-	unsigned int sector_bit;
+-	u64 idx;
+-	struct nullb_page *t_page;
+-	struct radix_tree_root *root;
+-
+-	idx = sector >> PAGE_SECTORS_SHIFT;
+-	sector_bit = (sector & SECTOR_MASK);
+-
+-	root = is_cache ? &nullb->dev->cache : &nullb->dev->data;
+-	t_page = radix_tree_lookup(root, idx);
+-	WARN_ON(t_page && t_page->page->index != idx);
+-
+-	if (t_page && (for_write || test_bit(sector_bit, t_page->bitmap)))
+-		return t_page;
+-
+-	return NULL;
+-}
+-
+-static struct nullb_page *null_lookup_page(struct nullb *nullb,
+-	sector_t sector, bool for_write, bool ignore_cache)
+-{
+-	struct nullb_page *page = NULL;
+-
+-	if (!ignore_cache)
+-		page = __null_lookup_page(nullb, sector, for_write, true);
+-	if (page)
+-		return page;
+-	return __null_lookup_page(nullb, sector, for_write, false);
+-}
+-
+-static struct nullb_page *null_insert_page(struct nullb *nullb,
+-					   sector_t sector, bool ignore_cache)
+-	__releases(&nullb->lock)
+-	__acquires(&nullb->lock)
+-{
+-	u64 idx;
+-	struct nullb_page *t_page;
+-
+-	t_page = null_lookup_page(nullb, sector, true, ignore_cache);
+-	if (t_page)
+-		return t_page;
+-
+-	spin_unlock_irq(&nullb->lock);
+-
+-	t_page = null_alloc_page(GFP_NOIO);
+-	if (!t_page)
+-		goto out_lock;
+-
+-	if (radix_tree_preload(GFP_NOIO))
+-		goto out_freepage;
+-
+-	spin_lock_irq(&nullb->lock);
+-	idx = sector >> PAGE_SECTORS_SHIFT;
+-	t_page->page->index = idx;
+-	t_page = null_radix_tree_insert(nullb, idx, t_page, !ignore_cache);
+-	radix_tree_preload_end();
+-
+-	return t_page;
+-out_freepage:
+-	null_free_page(t_page);
+-out_lock:
+-	spin_lock_irq(&nullb->lock);
+-	return null_lookup_page(nullb, sector, true, ignore_cache);
+-}
+-
+-static int null_flush_cache_page(struct nullb *nullb, struct nullb_page *c_page)
+-{
+-	int i;
+-	unsigned int offset;
+-	u64 idx;
+-	struct nullb_page *t_page, *ret;
+-	void *dst, *src;
+-
+-	idx = c_page->page->index;
+-
+-	t_page = null_insert_page(nullb, idx << PAGE_SECTORS_SHIFT, true);
+-
+-	__clear_bit(NULLB_PAGE_LOCK, c_page->bitmap);
+-	if (test_bit(NULLB_PAGE_FREE, c_page->bitmap)) {
+-		null_free_page(c_page);
+-		if (t_page && null_page_empty(t_page)) {
+-			ret = radix_tree_delete_item(&nullb->dev->data,
+-				idx, t_page);
+-			null_free_page(t_page);
+-		}
+-		return 0;
+-	}
+-
+-	if (!t_page)
+-		return -ENOMEM;
+-
+-	src = kmap_atomic(c_page->page);
+-	dst = kmap_atomic(t_page->page);
+-
+-	for (i = 0; i < PAGE_SECTORS;
+-			i += (nullb->dev->blocksize >> SECTOR_SHIFT)) {
+-		if (test_bit(i, c_page->bitmap)) {
+-			offset = (i << SECTOR_SHIFT);
+-			memcpy(dst + offset, src + offset,
+-				nullb->dev->blocksize);
+-			__set_bit(i, t_page->bitmap);
+-		}
+-	}
+-
+-	kunmap_atomic(dst);
+-	kunmap_atomic(src);
+-
+-	ret = radix_tree_delete_item(&nullb->dev->cache, idx, c_page);
+-	null_free_page(ret);
+-	nullb->dev->curr_cache -= PAGE_SIZE;
+-
+-	return 0;
+-}
+-
+-static int null_make_cache_space(struct nullb *nullb, unsigned long n)
+-{
+-	int i, err, nr_pages;
+-	struct nullb_page *c_pages[FREE_BATCH];
+-	unsigned long flushed = 0, one_round;
+-
+-again:
+-	if ((nullb->dev->cache_size * 1024 * 1024) >
+-	     nullb->dev->curr_cache + n || nullb->dev->curr_cache == 0)
+-		return 0;
+-
+-	nr_pages = radix_tree_gang_lookup(&nullb->dev->cache,
+-			(void **)c_pages, nullb->cache_flush_pos, FREE_BATCH);
+-	/*
+-	 * nullb_flush_cache_page could unlock before using the c_pages. To
+-	 * avoid race, we don't allow page free
+-	 */
+-	for (i = 0; i < nr_pages; i++) {
+-		nullb->cache_flush_pos = c_pages[i]->page->index;
+-		/*
+-		 * We found the page which is being flushed to disk by other
+-		 * threads
+-		 */
+-		if (test_bit(NULLB_PAGE_LOCK, c_pages[i]->bitmap))
+-			c_pages[i] = NULL;
+-		else
+-			__set_bit(NULLB_PAGE_LOCK, c_pages[i]->bitmap);
+-	}
+-
+-	one_round = 0;
+-	for (i = 0; i < nr_pages; i++) {
+-		if (c_pages[i] == NULL)
+-			continue;
+-		err = null_flush_cache_page(nullb, c_pages[i]);
+-		if (err)
+-			return err;
+-		one_round++;
+-	}
+-	flushed += one_round << PAGE_SHIFT;
+-
+-	if (n > flushed) {
+-		if (nr_pages == 0)
+-			nullb->cache_flush_pos = 0;
+-		if (one_round == 0) {
+-			/* give other threads a chance */
+-			spin_unlock_irq(&nullb->lock);
+-			spin_lock_irq(&nullb->lock);
+-		}
+-		goto again;
+-	}
+-	return 0;
+-}
+-
+-static int copy_to_nullb(struct nullb *nullb, struct page *source,
+-	unsigned int off, sector_t sector, size_t n, bool is_fua)
+-{
+-	size_t temp, count = 0;
+-	unsigned int offset;
+-	struct nullb_page *t_page;
+-	void *dst, *src;
+-
+-	while (count < n) {
+-		temp = min_t(size_t, nullb->dev->blocksize, n - count);
+-
+-		if (null_cache_active(nullb) && !is_fua)
+-			null_make_cache_space(nullb, PAGE_SIZE);
+-
+-		offset = (sector & SECTOR_MASK) << SECTOR_SHIFT;
+-		t_page = null_insert_page(nullb, sector,
+-			!null_cache_active(nullb) || is_fua);
+-		if (!t_page)
+-			return -ENOSPC;
+-
+-		src = kmap_atomic(source);
+-		dst = kmap_atomic(t_page->page);
+-		memcpy(dst + offset, src + off + count, temp);
+-		kunmap_atomic(dst);
+-		kunmap_atomic(src);
+-
+-		__set_bit(sector & SECTOR_MASK, t_page->bitmap);
+-
+-		if (is_fua)
+-			null_free_sector(nullb, sector, true);
+-
+-		count += temp;
+-		sector += temp >> SECTOR_SHIFT;
+-	}
+-	return 0;
+-}
+-
+-static int copy_from_nullb(struct nullb *nullb, struct page *dest,
+-	unsigned int off, sector_t sector, size_t n)
+-{
+-	size_t temp, count = 0;
+-	unsigned int offset;
+-	struct nullb_page *t_page;
+-	void *dst, *src;
+-
+-	while (count < n) {
+-		temp = min_t(size_t, nullb->dev->blocksize, n - count);
+-
+-		offset = (sector & SECTOR_MASK) << SECTOR_SHIFT;
+-		t_page = null_lookup_page(nullb, sector, false,
+-			!null_cache_active(nullb));
+-
+-		dst = kmap_atomic(dest);
+-		if (!t_page) {
+-			memset(dst + off + count, 0, temp);
+-			goto next;
+-		}
+-		src = kmap_atomic(t_page->page);
+-		memcpy(dst + off + count, src + offset, temp);
+-		kunmap_atomic(src);
+-next:
+-		kunmap_atomic(dst);
+-
+-		count += temp;
+-		sector += temp >> SECTOR_SHIFT;
+-	}
+-	return 0;
+-}
+-
+-static void nullb_fill_pattern(struct nullb *nullb, struct page *page,
+-			       unsigned int len, unsigned int off)
+-{
+-	void *dst;
+-
+-	dst = kmap_atomic(page);
+-	memset(dst + off, 0xFF, len);
+-	kunmap_atomic(dst);
+-}
+-
+-static void null_handle_discard(struct nullb *nullb, sector_t sector, size_t n)
+-{
+-	size_t temp;
+-
+-	spin_lock_irq(&nullb->lock);
+-	while (n > 0) {
+-		temp = min_t(size_t, n, nullb->dev->blocksize);
+-		null_free_sector(nullb, sector, false);
+-		if (null_cache_active(nullb))
+-			null_free_sector(nullb, sector, true);
+-		sector += temp >> SECTOR_SHIFT;
+-		n -= temp;
+-	}
+-	spin_unlock_irq(&nullb->lock);
+-}
+-
+-static int null_handle_flush(struct nullb *nullb)
+-{
+-	int err;
+-
+-	if (!null_cache_active(nullb))
+-		return 0;
+-
+-	spin_lock_irq(&nullb->lock);
+-	while (true) {
+-		err = null_make_cache_space(nullb,
+-			nullb->dev->cache_size * 1024 * 1024);
+-		if (err || nullb->dev->curr_cache == 0)
+-			break;
+-	}
+-
+-	WARN_ON(!radix_tree_empty(&nullb->dev->cache));
+-	spin_unlock_irq(&nullb->lock);
+-	return err;
+-}
+-
+-static int null_transfer(struct nullb *nullb, struct page *page,
+-	unsigned int len, unsigned int off, bool is_write, sector_t sector,
+-	bool is_fua)
+-{
+-	struct nullb_device *dev = nullb->dev;
+-	unsigned int valid_len = len;
+-	int err = 0;
+-
+-	if (!is_write) {
+-		if (dev->zoned)
+-			valid_len = null_zone_valid_read_len(nullb,
+-				sector, len);
+-
+-		if (valid_len) {
+-			err = copy_from_nullb(nullb, page, off,
+-				sector, valid_len);
+-			off += valid_len;
+-			len -= valid_len;
+-		}
+-
+-		if (len)
+-			nullb_fill_pattern(nullb, page, len, off);
+-		flush_dcache_page(page);
+-	} else {
+-		flush_dcache_page(page);
+-		err = copy_to_nullb(nullb, page, off, sector, len, is_fua);
+-	}
+-
+-	return err;
+-}
+-
+-static int null_handle_rq(struct nullb_cmd *cmd)
+-{
+-	struct request *rq = cmd->rq;
+-	struct nullb *nullb = cmd->nq->dev->nullb;
+-	int err;
+-	unsigned int len;
+-	sector_t sector;
+-	struct req_iterator iter;
+-	struct bio_vec bvec;
+-
+-	sector = blk_rq_pos(rq);
+-
+-	if (req_op(rq) == REQ_OP_DISCARD) {
+-		null_handle_discard(nullb, sector, blk_rq_bytes(rq));
+-		return 0;
+-	}
+-
+-	spin_lock_irq(&nullb->lock);
+-	rq_for_each_segment(bvec, rq, iter) {
+-		len = bvec.bv_len;
+-		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
+-				     op_is_write(req_op(rq)), sector,
+-				     rq->cmd_flags & REQ_FUA);
+-		if (err) {
+-			spin_unlock_irq(&nullb->lock);
+-			return err;
+-		}
+-		sector += len >> SECTOR_SHIFT;
+-	}
+-	spin_unlock_irq(&nullb->lock);
+-
+-	return 0;
+-}
+-
+-static int null_handle_bio(struct nullb_cmd *cmd)
+-{
+-	struct bio *bio = cmd->bio;
+-	struct nullb *nullb = cmd->nq->dev->nullb;
+-	int err;
+-	unsigned int len;
+-	sector_t sector;
+-	struct bio_vec bvec;
+-	struct bvec_iter iter;
+-
+-	sector = bio->bi_iter.bi_sector;
+-
+-	if (bio_op(bio) == REQ_OP_DISCARD) {
+-		null_handle_discard(nullb, sector,
+-			bio_sectors(bio) << SECTOR_SHIFT);
+-		return 0;
+-	}
+-
+-	spin_lock_irq(&nullb->lock);
+-	bio_for_each_segment(bvec, bio, iter) {
+-		len = bvec.bv_len;
+-		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
+-				     op_is_write(bio_op(bio)), sector,
+-				     bio->bi_opf & REQ_FUA);
+-		if (err) {
+-			spin_unlock_irq(&nullb->lock);
+-			return err;
+-		}
+-		sector += len >> SECTOR_SHIFT;
+-	}
+-	spin_unlock_irq(&nullb->lock);
+-	return 0;
+-}
+-
+-static void null_stop_queue(struct nullb *nullb)
+-{
+-	struct request_queue *q = nullb->q;
+-
+-	if (nullb->dev->queue_mode == NULL_Q_MQ)
+-		blk_mq_stop_hw_queues(q);
+-}
+-
+-static void null_restart_queue_async(struct nullb *nullb)
+-{
+-	struct request_queue *q = nullb->q;
+-
+-	if (nullb->dev->queue_mode == NULL_Q_MQ)
+-		blk_mq_start_stopped_hw_queues(q, true);
+-}
+-
+-static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd)
+-{
+-	struct nullb_device *dev = cmd->nq->dev;
+-	struct nullb *nullb = dev->nullb;
+-	blk_status_t sts = BLK_STS_OK;
+-	struct request *rq = cmd->rq;
+-
+-	if (!hrtimer_active(&nullb->bw_timer))
+-		hrtimer_restart(&nullb->bw_timer);
+-
+-	if (atomic_long_sub_return(blk_rq_bytes(rq), &nullb->cur_bytes) < 0) {
+-		null_stop_queue(nullb);
+-		/* race with timer */
+-		if (atomic_long_read(&nullb->cur_bytes) > 0)
+-			null_restart_queue_async(nullb);
+-		/* requeue request */
+-		sts = BLK_STS_DEV_RESOURCE;
+-	}
+-	return sts;
+-}
+-
+-static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
+-						 sector_t sector,
+-						 sector_t nr_sectors)
+-{
+-	struct badblocks *bb = &cmd->nq->dev->badblocks;
+-	sector_t first_bad;
+-	int bad_sectors;
+-
+-	if (badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors))
+-		return BLK_STS_IOERR;
+-
+-	return BLK_STS_OK;
+-}
+-
+-static inline blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd,
+-						     enum req_opf op)
+-{
+-	struct nullb_device *dev = cmd->nq->dev;
+-	int err;
+-
+-	if (dev->queue_mode == NULL_Q_BIO)
+-		err = null_handle_bio(cmd);
+-	else
+-		err = null_handle_rq(cmd);
+-
+-	return errno_to_blk_status(err);
+-}
+-
+-static void nullb_zero_read_cmd_buffer(struct nullb_cmd *cmd)
+-{
+-	struct nullb_device *dev = cmd->nq->dev;
+-	struct bio *bio;
+-
+-	if (dev->memory_backed)
+-		return;
+-
+-	if (dev->queue_mode == NULL_Q_BIO && bio_op(cmd->bio) == REQ_OP_READ) {
+-		zero_fill_bio(cmd->bio);
+-	} else if (req_op(cmd->rq) == REQ_OP_READ) {
+-		__rq_for_each_bio(bio, cmd->rq)
+-			zero_fill_bio(bio);
+-	}
+-}
+-
+-static inline void nullb_complete_cmd(struct nullb_cmd *cmd)
+-{
+-	/*
+-	 * Since root privileges are required to configure the null_blk
+-	 * driver, it is fine that this driver does not initialize the
+-	 * data buffers of read commands. Zero-initialize these buffers
+-	 * anyway if KMSAN is enabled to prevent that KMSAN complains
+-	 * about null_blk not initializing read data buffers.
+-	 */
+-	if (IS_ENABLED(CONFIG_KMSAN))
+-		nullb_zero_read_cmd_buffer(cmd);
+-
+-	/* Complete IO by inline, softirq or timer */
+-	switch (cmd->nq->dev->irqmode) {
+-	case NULL_IRQ_SOFTIRQ:
+-		switch (cmd->nq->dev->queue_mode) {
+-		case NULL_Q_MQ:
+-			if (likely(!blk_should_fake_timeout(cmd->rq->q)))
+-				blk_mq_complete_request(cmd->rq);
+-			break;
+-		case NULL_Q_BIO:
+-			/*
+-			 * XXX: no proper submitting cpu information available.
+-			 */
+-			end_cmd(cmd);
+-			break;
+-		}
+-		break;
+-	case NULL_IRQ_NONE:
+-		end_cmd(cmd);
+-		break;
+-	case NULL_IRQ_TIMER:
+-		null_cmd_end_timer(cmd);
+-		break;
+-	}
+-}
+-
+-blk_status_t null_process_cmd(struct nullb_cmd *cmd,
+-			      enum req_opf op, sector_t sector,
+-			      unsigned int nr_sectors)
+-{
+-	struct nullb_device *dev = cmd->nq->dev;
+-	blk_status_t ret;
+-
+-	if (dev->badblocks.shift != -1) {
+-		ret = null_handle_badblocks(cmd, sector, nr_sectors);
+-		if (ret != BLK_STS_OK)
+-			return ret;
+-	}
+-
+-	if (dev->memory_backed)
+-		return null_handle_memory_backed(cmd, op);
+-
+-	return BLK_STS_OK;
+-}
+-
+-static blk_status_t null_handle_cmd(struct nullb_cmd *cmd, sector_t sector,
+-				    sector_t nr_sectors, enum req_opf op)
+-{
+-	struct nullb_device *dev = cmd->nq->dev;
+-	struct nullb *nullb = dev->nullb;
+-	blk_status_t sts;
+-
+-	if (test_bit(NULLB_DEV_FL_THROTTLED, &dev->flags)) {
+-		sts = null_handle_throttled(cmd);
+-		if (sts != BLK_STS_OK)
+-			return sts;
+-	}
+-
+-	if (op == REQ_OP_FLUSH) {
+-		cmd->error = errno_to_blk_status(null_handle_flush(nullb));
+-		goto out;
+-	}
+-
+-	if (dev->zoned)
+-		sts = null_process_zoned_cmd(cmd, op, sector, nr_sectors);
+-	else
+-		sts = null_process_cmd(cmd, op, sector, nr_sectors);
+-
+-	/* Do not overwrite errors (e.g. timeout errors) */
+-	if (cmd->error == BLK_STS_OK)
+-		cmd->error = sts;
+-
+-out:
+-	nullb_complete_cmd(cmd);
+-	return BLK_STS_OK;
+-}
+-
+-static enum hrtimer_restart nullb_bwtimer_fn(struct hrtimer *timer)
+-{
+-	struct nullb *nullb = container_of(timer, struct nullb, bw_timer);
+-	ktime_t timer_interval = ktime_set(0, TIMER_INTERVAL);
+-	unsigned int mbps = nullb->dev->mbps;
+-
+-	if (atomic_long_read(&nullb->cur_bytes) == mb_per_tick(mbps))
+-		return HRTIMER_NORESTART;
+-
+-	atomic_long_set(&nullb->cur_bytes, mb_per_tick(mbps));
+-	null_restart_queue_async(nullb);
+-
+-	hrtimer_forward_now(&nullb->bw_timer, timer_interval);
+-
+-	return HRTIMER_RESTART;
+-}
+-
+-static void nullb_setup_bwtimer(struct nullb *nullb)
+-{
+-	ktime_t timer_interval = ktime_set(0, TIMER_INTERVAL);
+-
+-	hrtimer_init(&nullb->bw_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+-	nullb->bw_timer.function = nullb_bwtimer_fn;
+-	atomic_long_set(&nullb->cur_bytes, mb_per_tick(nullb->dev->mbps));
+-	hrtimer_start(&nullb->bw_timer, timer_interval, HRTIMER_MODE_REL);
+-}
+-
+-static struct nullb_queue *nullb_to_queue(struct nullb *nullb)
+-{
+-	int index = 0;
+-
+-	if (nullb->nr_queues != 1)
+-		index = raw_smp_processor_id() / ((nr_cpu_ids + nullb->nr_queues - 1) / nullb->nr_queues);
+-
+-	return &nullb->queues[index];
+-}
+-
+-static blk_qc_t null_submit_bio(struct bio *bio)
+-{
+-	sector_t sector = bio->bi_iter.bi_sector;
+-	sector_t nr_sectors = bio_sectors(bio);
+-	struct nullb *nullb = bio->bi_disk->private_data;
+-	struct nullb_queue *nq = nullb_to_queue(nullb);
+-	struct nullb_cmd *cmd;
+-
+-	cmd = alloc_cmd(nq, 1);
+-	cmd->bio = bio;
+-
+-	null_handle_cmd(cmd, sector, nr_sectors, bio_op(bio));
+-	return BLK_QC_T_NONE;
+-}
+-
+-static bool should_timeout_request(struct request *rq)
+-{
+-#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
+-	if (g_timeout_str[0])
+-		return should_fail(&null_timeout_attr, 1);
+-#endif
+-	return false;
+-}
+-
+-static bool should_requeue_request(struct request *rq)
+-{
+-#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
+-	if (g_requeue_str[0])
+-		return should_fail(&null_requeue_attr, 1);
+-#endif
+-	return false;
+-}
+-
+-static enum blk_eh_timer_return null_timeout_rq(struct request *rq, bool res)
+-{
+-	struct nullb_cmd *cmd = blk_mq_rq_to_pdu(rq);
+-
+-	pr_info("rq %p timed out\n", rq);
+-
+-	/*
+-	 * If the device is marked as blocking (i.e. memory backed or zoned
+-	 * device), the submission path may be blocked waiting for resources
+-	 * and cause real timeouts. For these real timeouts, the submission
+-	 * path will complete the request using blk_mq_complete_request().
+-	 * Only fake timeouts need to execute blk_mq_complete_request() here.
+-	 */
+-	cmd->error = BLK_STS_TIMEOUT;
+-	if (cmd->fake_timeout)
+-		blk_mq_complete_request(rq);
+-	return BLK_EH_DONE;
+-}
+-
+-static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx,
+-			 const struct blk_mq_queue_data *bd)
+-{
+-	struct nullb_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);
+-	struct nullb_queue *nq = hctx->driver_data;
+-	sector_t nr_sectors = blk_rq_sectors(bd->rq);
+-	sector_t sector = blk_rq_pos(bd->rq);
+-
+-	might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING);
+-
+-	if (nq->dev->irqmode == NULL_IRQ_TIMER) {
+-		hrtimer_init(&cmd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+-		cmd->timer.function = null_cmd_timer_expired;
+-	}
+-	cmd->rq = bd->rq;
+-	cmd->error = BLK_STS_OK;
+-	cmd->nq = nq;
+-	cmd->fake_timeout = should_timeout_request(bd->rq);
+-
+-	blk_mq_start_request(bd->rq);
+-
+-	if (should_requeue_request(bd->rq)) {
+-		/*
+-		 * Alternate between hitting the core BUSY path, and the
+-		 * driver driven requeue path
+-		 */
+-		nq->requeue_selection++;
+-		if (nq->requeue_selection & 1)
+-			return BLK_STS_RESOURCE;
+-		else {
+-			blk_mq_requeue_request(bd->rq, true);
+-			return BLK_STS_OK;
+-		}
+-	}
+-	if (cmd->fake_timeout)
+-		return BLK_STS_OK;
+-
+-	return null_handle_cmd(cmd, sector, nr_sectors, req_op(bd->rq));
+-}
+-
+-static void cleanup_queue(struct nullb_queue *nq)
+-{
+-	kfree(nq->tag_map);
+-	kfree(nq->cmds);
+-}
+-
+-static void cleanup_queues(struct nullb *nullb)
+-{
+-	int i;
+-
+-	for (i = 0; i < nullb->nr_queues; i++)
+-		cleanup_queue(&nullb->queues[i]);
+-
+-	kfree(nullb->queues);
+-}
+-
+-static void null_exit_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
+-{
+-	struct nullb_queue *nq = hctx->driver_data;
+-	struct nullb *nullb = nq->dev->nullb;
+-
+-	nullb->nr_queues--;
+-}
+-
+-static void null_init_queue(struct nullb *nullb, struct nullb_queue *nq)
+-{
+-	init_waitqueue_head(&nq->wait);
+-	nq->queue_depth = nullb->queue_depth;
+-	nq->dev = nullb->dev;
+-}
+-
+-static int null_init_hctx(struct blk_mq_hw_ctx *hctx, void *driver_data,
+-			  unsigned int hctx_idx)
+-{
+-	struct nullb *nullb = hctx->queue->queuedata;
+-	struct nullb_queue *nq;
+-
+-#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
+-	if (g_init_hctx_str[0] && should_fail(&null_init_hctx_attr, 1))
+-		return -EFAULT;
+-#endif
+-
+-	nq = &nullb->queues[hctx_idx];
+-	hctx->driver_data = nq;
+-	null_init_queue(nullb, nq);
+-	nullb->nr_queues++;
+-
+-	return 0;
+-}
+-
+-static const struct blk_mq_ops null_mq_ops = {
+-	.queue_rq       = null_queue_rq,
+-	.complete	= null_complete_rq,
+-	.timeout	= null_timeout_rq,
+-	.init_hctx	= null_init_hctx,
+-	.exit_hctx	= null_exit_hctx,
+-};
+-
+-static void null_del_dev(struct nullb *nullb)
+-{
+-	struct nullb_device *dev;
+-
+-	if (!nullb)
+-		return;
+-
+-	dev = nullb->dev;
+-
+-	ida_simple_remove(&nullb_indexes, nullb->index);
+-
+-	list_del_init(&nullb->list);
+-
+-	del_gendisk(nullb->disk);
+-
+-	if (test_bit(NULLB_DEV_FL_THROTTLED, &nullb->dev->flags)) {
+-		hrtimer_cancel(&nullb->bw_timer);
+-		atomic_long_set(&nullb->cur_bytes, LONG_MAX);
+-		null_restart_queue_async(nullb);
+-	}
+-
+-	blk_cleanup_queue(nullb->q);
+-	if (dev->queue_mode == NULL_Q_MQ &&
+-	    nullb->tag_set == &nullb->__tag_set)
+-		blk_mq_free_tag_set(nullb->tag_set);
+-	put_disk(nullb->disk);
+-	cleanup_queues(nullb);
+-	if (null_cache_active(nullb))
+-		null_free_device_storage(nullb->dev, true);
+-	kfree(nullb);
+-	dev->nullb = NULL;
+-}
+-
+-static void null_config_discard(struct nullb *nullb)
+-{
+-	if (nullb->dev->discard == false)
+-		return;
+-
+-	if (nullb->dev->zoned) {
+-		nullb->dev->discard = false;
+-		pr_info("discard option is ignored in zoned mode\n");
+-		return;
+-	}
+-
+-	nullb->q->limits.discard_granularity = nullb->dev->blocksize;
+-	nullb->q->limits.discard_alignment = nullb->dev->blocksize;
+-	blk_queue_max_discard_sectors(nullb->q, UINT_MAX >> 9);
+-	blk_queue_flag_set(QUEUE_FLAG_DISCARD, nullb->q);
+-}
+-
+-static const struct block_device_operations null_bio_ops = {
+-	.owner		= THIS_MODULE,
+-	.submit_bio	= null_submit_bio,
+-	.report_zones	= null_report_zones,
+-};
+-
+-static const struct block_device_operations null_rq_ops = {
+-	.owner		= THIS_MODULE,
+-	.report_zones	= null_report_zones,
+-};
+-
+-static int setup_commands(struct nullb_queue *nq)
+-{
+-	struct nullb_cmd *cmd;
+-	int i, tag_size;
+-
+-	nq->cmds = kcalloc(nq->queue_depth, sizeof(*cmd), GFP_KERNEL);
+-	if (!nq->cmds)
+-		return -ENOMEM;
+-
+-	tag_size = ALIGN(nq->queue_depth, BITS_PER_LONG) / BITS_PER_LONG;
+-	nq->tag_map = kcalloc(tag_size, sizeof(unsigned long), GFP_KERNEL);
+-	if (!nq->tag_map) {
+-		kfree(nq->cmds);
+-		return -ENOMEM;
+-	}
+-
+-	for (i = 0; i < nq->queue_depth; i++) {
+-		cmd = &nq->cmds[i];
+-		cmd->tag = -1U;
+-	}
+-
+-	return 0;
+-}
+-
+-static int setup_queues(struct nullb *nullb)
+-{
+-	nullb->queues = kcalloc(nr_cpu_ids, sizeof(struct nullb_queue),
+-				GFP_KERNEL);
+-	if (!nullb->queues)
+-		return -ENOMEM;
+-
+-	nullb->queue_depth = nullb->dev->hw_queue_depth;
+-
+-	return 0;
+-}
+-
+-static int init_driver_queues(struct nullb *nullb)
+-{
+-	struct nullb_queue *nq;
+-	int i, ret = 0;
+-
+-	for (i = 0; i < nullb->dev->submit_queues; i++) {
+-		nq = &nullb->queues[i];
+-
+-		null_init_queue(nullb, nq);
+-
+-		ret = setup_commands(nq);
+-		if (ret)
+-			return ret;
+-		nullb->nr_queues++;
+-	}
+-	return 0;
+-}
+-
+-static int null_gendisk_register(struct nullb *nullb)
+-{
+-	sector_t size = ((sector_t)nullb->dev->size * SZ_1M) >> SECTOR_SHIFT;
+-	struct gendisk *disk;
+-
+-	disk = nullb->disk = alloc_disk_node(1, nullb->dev->home_node);
+-	if (!disk)
+-		return -ENOMEM;
+-	set_capacity(disk, size);
+-
+-	disk->flags |= GENHD_FL_EXT_DEVT | GENHD_FL_SUPPRESS_PARTITION_INFO;
+-	disk->major		= null_major;
+-	disk->first_minor	= nullb->index;
+-	if (queue_is_mq(nullb->q))
+-		disk->fops		= &null_rq_ops;
+-	else
+-		disk->fops		= &null_bio_ops;
+-	disk->private_data	= nullb;
+-	disk->queue		= nullb->q;
+-	strncpy(disk->disk_name, nullb->disk_name, DISK_NAME_LEN);
+-
+-	if (nullb->dev->zoned) {
+-		int ret = null_register_zoned_dev(nullb);
+-
+-		if (ret)
+-			return ret;
+-	}
+-
+-	add_disk(disk);
+-	return 0;
+-}
+-
+-static int null_init_tag_set(struct nullb *nullb, struct blk_mq_tag_set *set)
+-{
+-	set->ops = &null_mq_ops;
+-	set->nr_hw_queues = nullb ? nullb->dev->submit_queues :
+-						g_submit_queues;
+-	set->queue_depth = nullb ? nullb->dev->hw_queue_depth :
+-						g_hw_queue_depth;
+-	set->numa_node = nullb ? nullb->dev->home_node : g_home_node;
+-	set->cmd_size	= sizeof(struct nullb_cmd);
+-	set->flags = BLK_MQ_F_SHOULD_MERGE;
+-	if (g_no_sched)
+-		set->flags |= BLK_MQ_F_NO_SCHED;
+-	if (g_shared_tag_bitmap)
+-		set->flags |= BLK_MQ_F_TAG_HCTX_SHARED;
+-	set->driver_data = NULL;
+-
+-	if ((nullb && nullb->dev->blocking) || g_blocking)
+-		set->flags |= BLK_MQ_F_BLOCKING;
+-
+-	return blk_mq_alloc_tag_set(set);
+-}
+-
+-static int null_validate_conf(struct nullb_device *dev)
+-{
+-	dev->blocksize = round_down(dev->blocksize, 512);
+-	dev->blocksize = clamp_t(unsigned int, dev->blocksize, 512, 4096);
+-
+-	if (dev->queue_mode == NULL_Q_MQ && dev->use_per_node_hctx) {
+-		if (dev->submit_queues != nr_online_nodes)
+-			dev->submit_queues = nr_online_nodes;
+-	} else if (dev->submit_queues > nr_cpu_ids)
+-		dev->submit_queues = nr_cpu_ids;
+-	else if (dev->submit_queues == 0)
+-		dev->submit_queues = 1;
+-
+-	dev->queue_mode = min_t(unsigned int, dev->queue_mode, NULL_Q_MQ);
+-	dev->irqmode = min_t(unsigned int, dev->irqmode, NULL_IRQ_TIMER);
+-
+-	/* Do memory allocation, so set blocking */
+-	if (dev->memory_backed)
+-		dev->blocking = true;
+-	else /* cache is meaningless */
+-		dev->cache_size = 0;
+-	dev->cache_size = min_t(unsigned long, ULONG_MAX / 1024 / 1024,
+-						dev->cache_size);
+-	dev->mbps = min_t(unsigned int, 1024 * 40, dev->mbps);
+-	/* can not stop a queue */
+-	if (dev->queue_mode == NULL_Q_BIO)
+-		dev->mbps = 0;
+-
+-	if (dev->zoned &&
+-	    (!dev->zone_size || !is_power_of_2(dev->zone_size))) {
+-		pr_err("zone_size must be power-of-two\n");
+-		return -EINVAL;
+-	}
+-
+-	return 0;
+-}
+-
+-#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
+-static bool __null_setup_fault(struct fault_attr *attr, char *str)
+-{
+-	if (!str[0])
+-		return true;
+-
+-	if (!setup_fault_attr(attr, str))
+-		return false;
+-
+-	attr->verbose = 0;
+-	return true;
+-}
+-#endif
+-
+-static bool null_setup_fault(void)
+-{
+-#ifdef CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION
+-	if (!__null_setup_fault(&null_timeout_attr, g_timeout_str))
+-		return false;
+-	if (!__null_setup_fault(&null_requeue_attr, g_requeue_str))
+-		return false;
+-	if (!__null_setup_fault(&null_init_hctx_attr, g_init_hctx_str))
+-		return false;
+-#endif
+-	return true;
+-}
+-
+-static int null_add_dev(struct nullb_device *dev)
+-{
+-	struct nullb *nullb;
+-	int rv;
+-
+-	rv = null_validate_conf(dev);
+-	if (rv)
+-		return rv;
+-
+-	nullb = kzalloc_node(sizeof(*nullb), GFP_KERNEL, dev->home_node);
+-	if (!nullb) {
+-		rv = -ENOMEM;
+-		goto out;
+-	}
+-	nullb->dev = dev;
+-	dev->nullb = nullb;
+-
+-	spin_lock_init(&nullb->lock);
+-
+-	rv = setup_queues(nullb);
+-	if (rv)
+-		goto out_free_nullb;
+-
+-	if (dev->queue_mode == NULL_Q_MQ) {
+-		if (shared_tags) {
+-			nullb->tag_set = &tag_set;
+-			rv = 0;
+-		} else {
+-			nullb->tag_set = &nullb->__tag_set;
+-			rv = null_init_tag_set(nullb, nullb->tag_set);
+-		}
+-
+-		if (rv)
+-			goto out_cleanup_queues;
+-
+-		if (!null_setup_fault())
+-			goto out_cleanup_queues;
+-
+-		nullb->tag_set->timeout = 5 * HZ;
+-		nullb->q = blk_mq_init_queue_data(nullb->tag_set, nullb);
+-		if (IS_ERR(nullb->q)) {
+-			rv = -ENOMEM;
+-			goto out_cleanup_tags;
+-		}
+-	} else if (dev->queue_mode == NULL_Q_BIO) {
+-		nullb->q = blk_alloc_queue(dev->home_node);
+-		if (!nullb->q) {
+-			rv = -ENOMEM;
+-			goto out_cleanup_queues;
+-		}
+-		rv = init_driver_queues(nullb);
+-		if (rv)
+-			goto out_cleanup_blk_queue;
+-	}
+-
+-	if (dev->mbps) {
+-		set_bit(NULLB_DEV_FL_THROTTLED, &dev->flags);
+-		nullb_setup_bwtimer(nullb);
+-	}
+-
+-	if (dev->cache_size > 0) {
+-		set_bit(NULLB_DEV_FL_CACHE, &nullb->dev->flags);
+-		blk_queue_write_cache(nullb->q, true, true);
+-	}
+-
+-	if (dev->zoned) {
+-		rv = null_init_zoned_dev(dev, nullb->q);
+-		if (rv)
+-			goto out_cleanup_blk_queue;
+-	}
+-
+-	nullb->q->queuedata = nullb;
+-	blk_queue_flag_set(QUEUE_FLAG_NONROT, nullb->q);
+-	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, nullb->q);
+-
+-	mutex_lock(&lock);
+-	rv = ida_simple_get(&nullb_indexes, 0, 0, GFP_KERNEL);
+-	if (rv < 0) {
+-		mutex_unlock(&lock);
+-		goto out_cleanup_zone;
+-	}
+-	nullb->index = rv;
+-	dev->index = rv;
+-	mutex_unlock(&lock);
+-
+-	blk_queue_logical_block_size(nullb->q, dev->blocksize);
+-	blk_queue_physical_block_size(nullb->q, dev->blocksize);
+-
+-	null_config_discard(nullb);
+-
+-	sprintf(nullb->disk_name, "nullb%d", nullb->index);
+-
+-	rv = null_gendisk_register(nullb);
+-	if (rv)
+-		goto out_ida_free;
+-
+-	mutex_lock(&lock);
+-	list_add_tail(&nullb->list, &nullb_list);
+-	mutex_unlock(&lock);
+-
+-	return 0;
+-
+-out_ida_free:
+-	ida_free(&nullb_indexes, nullb->index);
+-out_cleanup_zone:
+-	null_free_zoned_dev(dev);
+-out_cleanup_blk_queue:
+-	blk_cleanup_queue(nullb->q);
+-out_cleanup_tags:
+-	if (dev->queue_mode == NULL_Q_MQ && nullb->tag_set == &nullb->__tag_set)
+-		blk_mq_free_tag_set(nullb->tag_set);
+-out_cleanup_queues:
+-	cleanup_queues(nullb);
+-out_free_nullb:
+-	kfree(nullb);
+-	dev->nullb = NULL;
+-out:
+-	return rv;
+-}
+-
+-static int __init null_init(void)
+-{
+-	int ret = 0;
+-	unsigned int i;
+-	struct nullb *nullb;
+-	struct nullb_device *dev;
+-
+-	if (g_bs > PAGE_SIZE) {
+-		pr_warn("invalid block size\n");
+-		pr_warn("defaults block size to %lu\n", PAGE_SIZE);
+-		g_bs = PAGE_SIZE;
+-	}
+-
+-	if (g_home_node != NUMA_NO_NODE && g_home_node >= nr_online_nodes) {
+-		pr_err("invalid home_node value\n");
+-		g_home_node = NUMA_NO_NODE;
+-	}
+-
+-	if (g_queue_mode == NULL_Q_RQ) {
+-		pr_err("legacy IO path no longer available\n");
+-		return -EINVAL;
+-	}
+-	if (g_queue_mode == NULL_Q_MQ && g_use_per_node_hctx) {
+-		if (g_submit_queues != nr_online_nodes) {
+-			pr_warn("submit_queues param is set to %u.\n",
+-							nr_online_nodes);
+-			g_submit_queues = nr_online_nodes;
+-		}
+-	} else if (g_submit_queues > nr_cpu_ids)
+-		g_submit_queues = nr_cpu_ids;
+-	else if (g_submit_queues <= 0)
+-		g_submit_queues = 1;
+-
+-	if (g_queue_mode == NULL_Q_MQ && shared_tags) {
+-		ret = null_init_tag_set(NULL, &tag_set);
+-		if (ret)
+-			return ret;
+-	}
+-
+-	config_group_init(&nullb_subsys.su_group);
+-	mutex_init(&nullb_subsys.su_mutex);
+-
+-	ret = configfs_register_subsystem(&nullb_subsys);
+-	if (ret)
+-		goto err_tagset;
+-
+-	mutex_init(&lock);
+-
+-	null_major = register_blkdev(0, "nullb");
+-	if (null_major < 0) {
+-		ret = null_major;
+-		goto err_conf;
+-	}
+-
+-	for (i = 0; i < nr_devices; i++) {
+-		dev = null_alloc_dev();
+-		if (!dev) {
+-			ret = -ENOMEM;
+-			goto err_dev;
+-		}
+-		ret = null_add_dev(dev);
+-		if (ret) {
+-			null_free_dev(dev);
+-			goto err_dev;
+-		}
+-	}
+-
+-	pr_info("module loaded\n");
+-	return 0;
+-
+-err_dev:
+-	while (!list_empty(&nullb_list)) {
+-		nullb = list_entry(nullb_list.next, struct nullb, list);
+-		dev = nullb->dev;
+-		null_del_dev(nullb);
+-		null_free_dev(dev);
+-	}
+-	unregister_blkdev(null_major, "nullb");
+-err_conf:
+-	configfs_unregister_subsystem(&nullb_subsys);
+-err_tagset:
+-	if (g_queue_mode == NULL_Q_MQ && shared_tags)
+-		blk_mq_free_tag_set(&tag_set);
+-	return ret;
+-}
+-
+-static void __exit null_exit(void)
+-{
+-	struct nullb *nullb;
+-
+-	configfs_unregister_subsystem(&nullb_subsys);
+-
+-	unregister_blkdev(null_major, "nullb");
+-
+-	mutex_lock(&lock);
+-	while (!list_empty(&nullb_list)) {
+-		struct nullb_device *dev;
+-
+-		nullb = list_entry(nullb_list.next, struct nullb, list);
+-		dev = nullb->dev;
+-		null_del_dev(nullb);
+-		null_free_dev(dev);
+-	}
+-	mutex_unlock(&lock);
+-
+-	if (g_queue_mode == NULL_Q_MQ && shared_tags)
+-		blk_mq_free_tag_set(&tag_set);
+-}
+-
+-module_init(null_init);
+-module_exit(null_exit);
+-
+-MODULE_AUTHOR("Jens Axboe <axboe@kernel.dk>");
+-MODULE_LICENSE("GPL");
+diff --git a/drivers/block/null_blk_trace.c b/drivers/block/null_blk_trace.c
+deleted file mode 100644
+index f246e7bff6982..0000000000000
+--- a/drivers/block/null_blk_trace.c
++++ /dev/null
+@@ -1,21 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * null_blk trace related helpers.
+- *
+- * Copyright (C) 2020 Western Digital Corporation or its affiliates.
+- */
+-#include "null_blk_trace.h"
+-
+-/*
+- * Helper to use for all null_blk traces to extract disk name.
+- */
+-const char *nullb_trace_disk_name(struct trace_seq *p, char *name)
+-{
+-	const char *ret = trace_seq_buffer_ptr(p);
+-
+-	if (name && *name)
+-		trace_seq_printf(p, "disk=%s, ", name);
+-	trace_seq_putc(p, 0);
+-
+-	return ret;
+-}
+diff --git a/drivers/block/null_blk_trace.h b/drivers/block/null_blk_trace.h
+deleted file mode 100644
+index 4f83032eb5441..0000000000000
+--- a/drivers/block/null_blk_trace.h
++++ /dev/null
+@@ -1,79 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- * null_blk device driver tracepoints.
+- *
+- * Copyright (C) 2020 Western Digital Corporation or its affiliates.
+- */
+-
+-#undef TRACE_SYSTEM
+-#define TRACE_SYSTEM nullb
+-
+-#if !defined(_TRACE_NULLB_H) || defined(TRACE_HEADER_MULTI_READ)
+-#define _TRACE_NULLB_H
+-
+-#include <linux/tracepoint.h>
+-#include <linux/trace_seq.h>
+-
+-#include "null_blk.h"
+-
+-const char *nullb_trace_disk_name(struct trace_seq *p, char *name);
+-
+-#define __print_disk_name(name) nullb_trace_disk_name(p, name)
+-
+-#ifndef TRACE_HEADER_MULTI_READ
+-static inline void __assign_disk_name(char *name, struct gendisk *disk)
+-{
+-	if (disk)
+-		memcpy(name, disk->disk_name, DISK_NAME_LEN);
+-	else
+-		memset(name, 0, DISK_NAME_LEN);
+-}
+-#endif
+-
+-TRACE_EVENT(nullb_zone_op,
+-	    TP_PROTO(struct nullb_cmd *cmd, unsigned int zone_no,
+-		     unsigned int zone_cond),
+-	    TP_ARGS(cmd, zone_no, zone_cond),
+-	    TP_STRUCT__entry(
+-		__array(char, disk, DISK_NAME_LEN)
+-		__field(enum req_opf, op)
+-		__field(unsigned int, zone_no)
+-		__field(unsigned int, zone_cond)
+-	    ),
+-	    TP_fast_assign(
+-		__entry->op = req_op(cmd->rq);
+-		__entry->zone_no = zone_no;
+-		__entry->zone_cond = zone_cond;
+-		__assign_disk_name(__entry->disk, cmd->rq->rq_disk);
+-	    ),
+-	    TP_printk("%s req=%-15s zone_no=%u zone_cond=%-10s",
+-		      __print_disk_name(__entry->disk),
+-		      blk_op_str(__entry->op),
+-		      __entry->zone_no,
+-		      blk_zone_cond_str(__entry->zone_cond))
+-);
+-
+-TRACE_EVENT(nullb_report_zones,
+-	    TP_PROTO(struct nullb *nullb, unsigned int nr_zones),
+-	    TP_ARGS(nullb, nr_zones),
+-	    TP_STRUCT__entry(
+-		__array(char, disk, DISK_NAME_LEN)
+-		__field(unsigned int, nr_zones)
+-	    ),
+-	    TP_fast_assign(
+-		__entry->nr_zones = nr_zones;
+-		__assign_disk_name(__entry->disk, nullb->disk);
+-	    ),
+-	    TP_printk("%s nr_zones=%u",
+-		      __print_disk_name(__entry->disk), __entry->nr_zones)
+-);
+-
+-#endif /* _TRACE_NULLB_H */
+-
+-#undef TRACE_INCLUDE_PATH
+-#define TRACE_INCLUDE_PATH .
+-#undef TRACE_INCLUDE_FILE
+-#define TRACE_INCLUDE_FILE null_blk_trace
+-
+-/* This part must be outside protection */
+-#include <trace/define_trace.h>
+diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
+deleted file mode 100644
+index f5df82c26c16f..0000000000000
+--- a/drivers/block/null_blk_zoned.c
++++ /dev/null
+@@ -1,617 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-#include <linux/vmalloc.h>
+-#include <linux/bitmap.h>
+-#include "null_blk.h"
+-
+-#define CREATE_TRACE_POINTS
+-#include "null_blk_trace.h"
+-
+-#define MB_TO_SECTS(mb) (((sector_t)mb * SZ_1M) >> SECTOR_SHIFT)
+-
+-static inline unsigned int null_zone_no(struct nullb_device *dev, sector_t sect)
+-{
+-	return sect >> ilog2(dev->zone_size_sects);
+-}
+-
+-int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
+-{
+-	sector_t dev_capacity_sects, zone_capacity_sects;
+-	sector_t sector = 0;
+-	unsigned int i;
+-
+-	if (!is_power_of_2(dev->zone_size)) {
+-		pr_err("zone_size must be power-of-two\n");
+-		return -EINVAL;
+-	}
+-	if (dev->zone_size > dev->size) {
+-		pr_err("Zone size larger than device capacity\n");
+-		return -EINVAL;
+-	}
+-
+-	if (!dev->zone_capacity)
+-		dev->zone_capacity = dev->zone_size;
+-
+-	if (dev->zone_capacity > dev->zone_size) {
+-		pr_err("null_blk: zone capacity (%lu MB) larger than zone size (%lu MB)\n",
+-					dev->zone_capacity, dev->zone_size);
+-		return -EINVAL;
+-	}
+-
+-	zone_capacity_sects = MB_TO_SECTS(dev->zone_capacity);
+-	dev_capacity_sects = MB_TO_SECTS(dev->size);
+-	dev->zone_size_sects = MB_TO_SECTS(dev->zone_size);
+-	dev->nr_zones = dev_capacity_sects >> ilog2(dev->zone_size_sects);
+-	if (dev_capacity_sects & (dev->zone_size_sects - 1))
+-		dev->nr_zones++;
+-
+-	dev->zones = kvmalloc_array(dev->nr_zones, sizeof(struct blk_zone),
+-			GFP_KERNEL | __GFP_ZERO);
+-	if (!dev->zones)
+-		return -ENOMEM;
+-
+-	/*
+-	 * With memory backing, the zone_lock spinlock needs to be temporarily
+-	 * released to avoid scheduling in atomic context. To guarantee zone
+-	 * information protection, use a bitmap to lock zones with
+-	 * wait_on_bit_lock_io(). Sleeping on the lock is OK as memory backing
+-	 * implies that the queue is marked with BLK_MQ_F_BLOCKING.
+-	 */
+-	spin_lock_init(&dev->zone_lock);
+-	if (dev->memory_backed) {
+-		dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL);
+-		if (!dev->zone_locks) {
+-			kvfree(dev->zones);
+-			return -ENOMEM;
+-		}
+-	}
+-
+-	if (dev->zone_nr_conv >= dev->nr_zones) {
+-		dev->zone_nr_conv = dev->nr_zones - 1;
+-		pr_info("changed the number of conventional zones to %u",
+-			dev->zone_nr_conv);
+-	}
+-
+-	/* Max active zones has to be < nbr of seq zones in order to be enforceable */
+-	if (dev->zone_max_active >= dev->nr_zones - dev->zone_nr_conv) {
+-		dev->zone_max_active = 0;
+-		pr_info("zone_max_active limit disabled, limit >= zone count\n");
+-	}
+-
+-	/* Max open zones has to be <= max active zones */
+-	if (dev->zone_max_active && dev->zone_max_open > dev->zone_max_active) {
+-		dev->zone_max_open = dev->zone_max_active;
+-		pr_info("changed the maximum number of open zones to %u\n",
+-			dev->nr_zones);
+-	} else if (dev->zone_max_open >= dev->nr_zones - dev->zone_nr_conv) {
+-		dev->zone_max_open = 0;
+-		pr_info("zone_max_open limit disabled, limit >= zone count\n");
+-	}
+-
+-	for (i = 0; i <  dev->zone_nr_conv; i++) {
+-		struct blk_zone *zone = &dev->zones[i];
+-
+-		zone->start = sector;
+-		zone->len = dev->zone_size_sects;
+-		zone->capacity = zone->len;
+-		zone->wp = zone->start + zone->len;
+-		zone->type = BLK_ZONE_TYPE_CONVENTIONAL;
+-		zone->cond = BLK_ZONE_COND_NOT_WP;
+-
+-		sector += dev->zone_size_sects;
+-	}
+-
+-	for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
+-		struct blk_zone *zone = &dev->zones[i];
+-
+-		zone->start = zone->wp = sector;
+-		if (zone->start + dev->zone_size_sects > dev_capacity_sects)
+-			zone->len = dev_capacity_sects - zone->start;
+-		else
+-			zone->len = dev->zone_size_sects;
+-		zone->capacity =
+-			min_t(sector_t, zone->len, zone_capacity_sects);
+-		zone->type = BLK_ZONE_TYPE_SEQWRITE_REQ;
+-		zone->cond = BLK_ZONE_COND_EMPTY;
+-
+-		sector += dev->zone_size_sects;
+-	}
+-
+-	q->limits.zoned = BLK_ZONED_HM;
+-	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
+-	blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE);
+-
+-	return 0;
+-}
+-
+-int null_register_zoned_dev(struct nullb *nullb)
+-{
+-	struct nullb_device *dev = nullb->dev;
+-	struct request_queue *q = nullb->q;
+-
+-	if (queue_is_mq(q)) {
+-		int ret = blk_revalidate_disk_zones(nullb->disk, NULL);
+-
+-		if (ret)
+-			return ret;
+-	} else {
+-		blk_queue_chunk_sectors(q, dev->zone_size_sects);
+-		q->nr_zones = blkdev_nr_zones(nullb->disk);
+-	}
+-
+-	blk_queue_max_zone_append_sectors(q, dev->zone_size_sects);
+-	blk_queue_max_open_zones(q, dev->zone_max_open);
+-	blk_queue_max_active_zones(q, dev->zone_max_active);
+-
+-	return 0;
+-}
+-
+-void null_free_zoned_dev(struct nullb_device *dev)
+-{
+-	bitmap_free(dev->zone_locks);
+-	kvfree(dev->zones);
+-	dev->zones = NULL;
+-}
+-
+-static inline void null_lock_zone(struct nullb_device *dev, unsigned int zno)
+-{
+-	if (dev->memory_backed)
+-		wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE);
+-	spin_lock_irq(&dev->zone_lock);
+-}
+-
+-static inline void null_unlock_zone(struct nullb_device *dev, unsigned int zno)
+-{
+-	spin_unlock_irq(&dev->zone_lock);
+-
+-	if (dev->memory_backed)
+-		clear_and_wake_up_bit(zno, dev->zone_locks);
+-}
+-
+-int null_report_zones(struct gendisk *disk, sector_t sector,
+-		unsigned int nr_zones, report_zones_cb cb, void *data)
+-{
+-	struct nullb *nullb = disk->private_data;
+-	struct nullb_device *dev = nullb->dev;
+-	unsigned int first_zone, i, zno;
+-	struct blk_zone zone;
+-	int error;
+-
+-	first_zone = null_zone_no(dev, sector);
+-	if (first_zone >= dev->nr_zones)
+-		return 0;
+-
+-	nr_zones = min(nr_zones, dev->nr_zones - first_zone);
+-	trace_nullb_report_zones(nullb, nr_zones);
+-
+-	zno = first_zone;
+-	for (i = 0; i < nr_zones; i++, zno++) {
+-		/*
+-		 * Stacked DM target drivers will remap the zone information by
+-		 * modifying the zone information passed to the report callback.
+-		 * So use a local copy to avoid corruption of the device zone
+-		 * array.
+-		 */
+-		null_lock_zone(dev, zno);
+-		memcpy(&zone, &dev->zones[zno], sizeof(struct blk_zone));
+-		null_unlock_zone(dev, zno);
+-
+-		error = cb(&zone, i, data);
+-		if (error)
+-			return error;
+-	}
+-
+-	return nr_zones;
+-}
+-
+-/*
+- * This is called in the case of memory backing from null_process_cmd()
+- * with the target zone already locked.
+- */
+-size_t null_zone_valid_read_len(struct nullb *nullb,
+-				sector_t sector, unsigned int len)
+-{
+-	struct nullb_device *dev = nullb->dev;
+-	struct blk_zone *zone = &dev->zones[null_zone_no(dev, sector)];
+-	unsigned int nr_sectors = len >> SECTOR_SHIFT;
+-
+-	/* Read must be below the write pointer position */
+-	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL ||
+-	    sector + nr_sectors <= zone->wp)
+-		return len;
+-
+-	if (sector > zone->wp)
+-		return 0;
+-
+-	return (zone->wp - sector) << SECTOR_SHIFT;
+-}
+-
+-static blk_status_t null_close_zone(struct nullb_device *dev, struct blk_zone *zone)
+-{
+-	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+-		return BLK_STS_IOERR;
+-
+-	switch (zone->cond) {
+-	case BLK_ZONE_COND_CLOSED:
+-		/* close operation on closed is not an error */
+-		return BLK_STS_OK;
+-	case BLK_ZONE_COND_IMP_OPEN:
+-		dev->nr_zones_imp_open--;
+-		break;
+-	case BLK_ZONE_COND_EXP_OPEN:
+-		dev->nr_zones_exp_open--;
+-		break;
+-	case BLK_ZONE_COND_EMPTY:
+-	case BLK_ZONE_COND_FULL:
+-	default:
+-		return BLK_STS_IOERR;
+-	}
+-
+-	if (zone->wp == zone->start) {
+-		zone->cond = BLK_ZONE_COND_EMPTY;
+-	} else {
+-		zone->cond = BLK_ZONE_COND_CLOSED;
+-		dev->nr_zones_closed++;
+-	}
+-
+-	return BLK_STS_OK;
+-}
+-
+-static void null_close_first_imp_zone(struct nullb_device *dev)
+-{
+-	unsigned int i;
+-
+-	for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
+-		if (dev->zones[i].cond == BLK_ZONE_COND_IMP_OPEN) {
+-			null_close_zone(dev, &dev->zones[i]);
+-			return;
+-		}
+-	}
+-}
+-
+-static blk_status_t null_check_active(struct nullb_device *dev)
+-{
+-	if (!dev->zone_max_active)
+-		return BLK_STS_OK;
+-
+-	if (dev->nr_zones_exp_open + dev->nr_zones_imp_open +
+-			dev->nr_zones_closed < dev->zone_max_active)
+-		return BLK_STS_OK;
+-
+-	return BLK_STS_ZONE_ACTIVE_RESOURCE;
+-}
+-
+-static blk_status_t null_check_open(struct nullb_device *dev)
+-{
+-	if (!dev->zone_max_open)
+-		return BLK_STS_OK;
+-
+-	if (dev->nr_zones_exp_open + dev->nr_zones_imp_open < dev->zone_max_open)
+-		return BLK_STS_OK;
+-
+-	if (dev->nr_zones_imp_open) {
+-		if (null_check_active(dev) == BLK_STS_OK) {
+-			null_close_first_imp_zone(dev);
+-			return BLK_STS_OK;
+-		}
+-	}
+-
+-	return BLK_STS_ZONE_OPEN_RESOURCE;
+-}
+-
+-/*
+- * This function matches the manage open zone resources function in the ZBC standard,
+- * with the addition of max active zones support (added in the ZNS standard).
+- *
+- * The function determines if a zone can transition to implicit open or explicit open,
+- * while maintaining the max open zone (and max active zone) limit(s). It may close an
+- * implicit open zone in order to make additional zone resources available.
+- *
+- * ZBC states that an implicit open zone shall be closed only if there is not
+- * room within the open limit. However, with the addition of an active limit,
+- * it is not certain that closing an implicit open zone will allow a new zone
+- * to be opened, since we might already be at the active limit capacity.
+- */
+-static blk_status_t null_check_zone_resources(struct nullb_device *dev, struct blk_zone *zone)
+-{
+-	blk_status_t ret;
+-
+-	switch (zone->cond) {
+-	case BLK_ZONE_COND_EMPTY:
+-		ret = null_check_active(dev);
+-		if (ret != BLK_STS_OK)
+-			return ret;
+-		fallthrough;
+-	case BLK_ZONE_COND_CLOSED:
+-		return null_check_open(dev);
+-	default:
+-		/* Should never be called for other states */
+-		WARN_ON(1);
+-		return BLK_STS_IOERR;
+-	}
+-}
+-
+-static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
+-				    unsigned int nr_sectors, bool append)
+-{
+-	struct nullb_device *dev = cmd->nq->dev;
+-	unsigned int zno = null_zone_no(dev, sector);
+-	struct blk_zone *zone = &dev->zones[zno];
+-	blk_status_t ret;
+-
+-	trace_nullb_zone_op(cmd, zno, zone->cond);
+-
+-	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) {
+-		if (append)
+-			return BLK_STS_IOERR;
+-		return null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
+-	}
+-
+-	null_lock_zone(dev, zno);
+-
+-	switch (zone->cond) {
+-	case BLK_ZONE_COND_FULL:
+-		/* Cannot write to a full zone */
+-		ret = BLK_STS_IOERR;
+-		goto unlock;
+-	case BLK_ZONE_COND_EMPTY:
+-	case BLK_ZONE_COND_CLOSED:
+-		ret = null_check_zone_resources(dev, zone);
+-		if (ret != BLK_STS_OK)
+-			goto unlock;
+-		break;
+-	case BLK_ZONE_COND_IMP_OPEN:
+-	case BLK_ZONE_COND_EXP_OPEN:
+-		break;
+-	default:
+-		/* Invalid zone condition */
+-		ret = BLK_STS_IOERR;
+-		goto unlock;
+-	}
+-
+-	/*
+-	 * Regular writes must be at the write pointer position.
+-	 * Zone append writes are automatically issued at the write
+-	 * pointer and the position returned using the request or BIO
+-	 * sector.
+-	 */
+-	if (append) {
+-		sector = zone->wp;
+-		if (cmd->bio)
+-			cmd->bio->bi_iter.bi_sector = sector;
+-		else
+-			cmd->rq->__sector = sector;
+-	} else if (sector != zone->wp) {
+-		ret = BLK_STS_IOERR;
+-		goto unlock;
+-	}
+-
+-	if (zone->wp + nr_sectors > zone->start + zone->capacity) {
+-		ret = BLK_STS_IOERR;
+-		goto unlock;
+-	}
+-
+-	if (zone->cond == BLK_ZONE_COND_CLOSED) {
+-		dev->nr_zones_closed--;
+-		dev->nr_zones_imp_open++;
+-	} else if (zone->cond == BLK_ZONE_COND_EMPTY) {
+-		dev->nr_zones_imp_open++;
+-	}
+-	if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
+-		zone->cond = BLK_ZONE_COND_IMP_OPEN;
+-
+-	/*
+-	 * Memory backing allocation may sleep: release the zone_lock spinlock
+-	 * to avoid scheduling in atomic context. Zone operation atomicity is
+-	 * still guaranteed through the zone_locks bitmap.
+-	 */
+-	if (dev->memory_backed)
+-		spin_unlock_irq(&dev->zone_lock);
+-	ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
+-	if (dev->memory_backed)
+-		spin_lock_irq(&dev->zone_lock);
+-
+-	if (ret != BLK_STS_OK)
+-		goto unlock;
+-
+-	zone->wp += nr_sectors;
+-	if (zone->wp == zone->start + zone->capacity) {
+-		if (zone->cond == BLK_ZONE_COND_EXP_OPEN)
+-			dev->nr_zones_exp_open--;
+-		else if (zone->cond == BLK_ZONE_COND_IMP_OPEN)
+-			dev->nr_zones_imp_open--;
+-		zone->cond = BLK_ZONE_COND_FULL;
+-	}
+-	ret = BLK_STS_OK;
+-
+-unlock:
+-	null_unlock_zone(dev, zno);
+-
+-	return ret;
+-}
+-
+-static blk_status_t null_open_zone(struct nullb_device *dev, struct blk_zone *zone)
+-{
+-	blk_status_t ret;
+-
+-	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+-		return BLK_STS_IOERR;
+-
+-	switch (zone->cond) {
+-	case BLK_ZONE_COND_EXP_OPEN:
+-		/* open operation on exp open is not an error */
+-		return BLK_STS_OK;
+-	case BLK_ZONE_COND_EMPTY:
+-		ret = null_check_zone_resources(dev, zone);
+-		if (ret != BLK_STS_OK)
+-			return ret;
+-		break;
+-	case BLK_ZONE_COND_IMP_OPEN:
+-		dev->nr_zones_imp_open--;
+-		break;
+-	case BLK_ZONE_COND_CLOSED:
+-		ret = null_check_zone_resources(dev, zone);
+-		if (ret != BLK_STS_OK)
+-			return ret;
+-		dev->nr_zones_closed--;
+-		break;
+-	case BLK_ZONE_COND_FULL:
+-	default:
+-		return BLK_STS_IOERR;
+-	}
+-
+-	zone->cond = BLK_ZONE_COND_EXP_OPEN;
+-	dev->nr_zones_exp_open++;
+-
+-	return BLK_STS_OK;
+-}
+-
+-static blk_status_t null_finish_zone(struct nullb_device *dev, struct blk_zone *zone)
+-{
+-	blk_status_t ret;
+-
+-	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+-		return BLK_STS_IOERR;
+-
+-	switch (zone->cond) {
+-	case BLK_ZONE_COND_FULL:
+-		/* finish operation on full is not an error */
+-		return BLK_STS_OK;
+-	case BLK_ZONE_COND_EMPTY:
+-		ret = null_check_zone_resources(dev, zone);
+-		if (ret != BLK_STS_OK)
+-			return ret;
+-		break;
+-	case BLK_ZONE_COND_IMP_OPEN:
+-		dev->nr_zones_imp_open--;
+-		break;
+-	case BLK_ZONE_COND_EXP_OPEN:
+-		dev->nr_zones_exp_open--;
+-		break;
+-	case BLK_ZONE_COND_CLOSED:
+-		ret = null_check_zone_resources(dev, zone);
+-		if (ret != BLK_STS_OK)
+-			return ret;
+-		dev->nr_zones_closed--;
+-		break;
+-	default:
+-		return BLK_STS_IOERR;
+-	}
+-
+-	zone->cond = BLK_ZONE_COND_FULL;
+-	zone->wp = zone->start + zone->len;
+-
+-	return BLK_STS_OK;
+-}
+-
+-static blk_status_t null_reset_zone(struct nullb_device *dev, struct blk_zone *zone)
+-{
+-	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+-		return BLK_STS_IOERR;
+-
+-	switch (zone->cond) {
+-	case BLK_ZONE_COND_EMPTY:
+-		/* reset operation on empty is not an error */
+-		return BLK_STS_OK;
+-	case BLK_ZONE_COND_IMP_OPEN:
+-		dev->nr_zones_imp_open--;
+-		break;
+-	case BLK_ZONE_COND_EXP_OPEN:
+-		dev->nr_zones_exp_open--;
+-		break;
+-	case BLK_ZONE_COND_CLOSED:
+-		dev->nr_zones_closed--;
+-		break;
+-	case BLK_ZONE_COND_FULL:
+-		break;
+-	default:
+-		return BLK_STS_IOERR;
+-	}
+-
+-	zone->cond = BLK_ZONE_COND_EMPTY;
+-	zone->wp = zone->start;
+-
+-	return BLK_STS_OK;
+-}
+-
+-static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
+-				   sector_t sector)
+-{
+-	struct nullb_device *dev = cmd->nq->dev;
+-	unsigned int zone_no;
+-	struct blk_zone *zone;
+-	blk_status_t ret;
+-	size_t i;
+-
+-	if (op == REQ_OP_ZONE_RESET_ALL) {
+-		for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
+-			null_lock_zone(dev, i);
+-			zone = &dev->zones[i];
+-			if (zone->cond != BLK_ZONE_COND_EMPTY) {
+-				null_reset_zone(dev, zone);
+-				trace_nullb_zone_op(cmd, i, zone->cond);
+-			}
+-			null_unlock_zone(dev, i);
+-		}
+-		return BLK_STS_OK;
+-	}
+-
+-	zone_no = null_zone_no(dev, sector);
+-	zone = &dev->zones[zone_no];
+-
+-	null_lock_zone(dev, zone_no);
+-
+-	switch (op) {
+-	case REQ_OP_ZONE_RESET:
+-		ret = null_reset_zone(dev, zone);
+-		break;
+-	case REQ_OP_ZONE_OPEN:
+-		ret = null_open_zone(dev, zone);
+-		break;
+-	case REQ_OP_ZONE_CLOSE:
+-		ret = null_close_zone(dev, zone);
+-		break;
+-	case REQ_OP_ZONE_FINISH:
+-		ret = null_finish_zone(dev, zone);
+-		break;
+-	default:
+-		ret = BLK_STS_NOTSUPP;
+-		break;
+-	}
+-
+-	if (ret == BLK_STS_OK)
+-		trace_nullb_zone_op(cmd, zone_no, zone->cond);
+-
+-	null_unlock_zone(dev, zone_no);
+-
+-	return ret;
+-}
+-
+-blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd, enum req_opf op,
+-				    sector_t sector, sector_t nr_sectors)
+-{
+-	struct nullb_device *dev = cmd->nq->dev;
+-	unsigned int zno = null_zone_no(dev, sector);
+-	blk_status_t sts;
+-
+-	switch (op) {
+-	case REQ_OP_WRITE:
+-		sts = null_zone_write(cmd, sector, nr_sectors, false);
+-		break;
+-	case REQ_OP_ZONE_APPEND:
+-		sts = null_zone_write(cmd, sector, nr_sectors, true);
+-		break;
+-	case REQ_OP_ZONE_RESET:
+-	case REQ_OP_ZONE_RESET_ALL:
+-	case REQ_OP_ZONE_OPEN:
+-	case REQ_OP_ZONE_CLOSE:
+-	case REQ_OP_ZONE_FINISH:
+-		sts = null_zone_mgmt(cmd, op, sector);
+-		break;
+-	default:
+-		null_lock_zone(dev, zno);
+-		sts = null_process_cmd(cmd, op, sector, nr_sectors);
+-		null_unlock_zone(dev, zno);
+-	}
+-
+-	return sts;
+-}
+diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
+index 39aeebc6837da..d9e41d3bbe717 100644
+--- a/drivers/block/sunvdc.c
++++ b/drivers/block/sunvdc.c
+@@ -984,6 +984,8 @@ static int vdc_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ 	print_version();
+ 
+ 	hp = mdesc_grab();
++	if (!hp)
++		return -ENODEV;
+ 
+ 	err = -ENODEV;
+ 	if ((vdev->dev_no << PARTITION_SHIFT) & ~(u64)MINORMASK) {
+diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig
+index c715d4681a0b8..4ae49eae45869 100644
+--- a/drivers/clk/Kconfig
++++ b/drivers/clk/Kconfig
+@@ -79,7 +79,7 @@ config COMMON_CLK_RK808
+ config COMMON_CLK_HI655X
+ 	tristate "Clock driver for Hi655x" if EXPERT
+ 	depends on (MFD_HI655X_PMIC || COMPILE_TEST)
+-	depends on REGMAP
++	select REGMAP
+ 	default MFD_HI655X_PMIC
+ 	help
+ 	  This driver supports the hi655x PMIC clock. This
+diff --git a/drivers/cpuidle/cpuidle-psci-domain.c b/drivers/cpuidle/cpuidle-psci-domain.c
+index 4a031c62f92a1..5098639d41f12 100644
+--- a/drivers/cpuidle/cpuidle-psci-domain.c
++++ b/drivers/cpuidle/cpuidle-psci-domain.c
+@@ -182,7 +182,8 @@ static void psci_pd_remove(void)
+ 	struct psci_pd_provider *pd_provider, *it;
+ 	struct generic_pm_domain *genpd;
+ 
+-	list_for_each_entry_safe(pd_provider, it, &psci_pd_providers, link) {
++	list_for_each_entry_safe_reverse(pd_provider, it,
++					 &psci_pd_providers, link) {
+ 		of_genpd_del_provider(pd_provider->node);
+ 
+ 		genpd = of_genpd_remove_last(pd_provider->node);
+diff --git a/drivers/firmware/xilinx/zynqmp.c b/drivers/firmware/xilinx/zynqmp.c
+index 9e6504592646e..300ba2991936b 100644
+--- a/drivers/firmware/xilinx/zynqmp.c
++++ b/drivers/firmware/xilinx/zynqmp.c
+@@ -171,7 +171,7 @@ static int zynqmp_pm_feature(u32 api_id)
+ 	}
+ 
+ 	/* Add new entry if not present */
+-	feature_data = kmalloc(sizeof(*feature_data), GFP_KERNEL);
++	feature_data = kmalloc(sizeof(*feature_data), GFP_ATOMIC);
+ 	if (!feature_data)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+index 159be13ef20bb..2c19b3775179b 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+@@ -528,16 +528,13 @@ static struct kfd_event_waiter *alloc_event_waiters(uint32_t num_events)
+ 	struct kfd_event_waiter *event_waiters;
+ 	uint32_t i;
+ 
+-	event_waiters = kmalloc_array(num_events,
+-					sizeof(struct kfd_event_waiter),
+-					GFP_KERNEL);
++	event_waiters = kcalloc(num_events, sizeof(struct kfd_event_waiter),
++				GFP_KERNEL);
+ 	if (!event_waiters)
+ 		return NULL;
+ 
+-	for (i = 0; (event_waiters) && (i < num_events) ; i++) {
++	for (i = 0; i < num_events; i++)
+ 		init_wait(&event_waiters[i].wait);
+-		event_waiters[i].activated = false;
+-	}
+ 
+ 	return event_waiters;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+index e427f4ffa0807..e5b1002d7f3f0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+@@ -1868,7 +1868,10 @@ static unsigned int CalculateVMAndRowBytes(
+ 	}
+ 
+ 	if (SurfaceTiling == dm_sw_linear) {
+-		*dpte_row_height = dml_min(128, 1 << (unsigned int) dml_floor(dml_log2(PTEBufferSizeInRequests * *PixelPTEReqWidth / Pitch), 1));
++		if (PTEBufferSizeInRequests == 0)
++			*dpte_row_height = 1;
++		else
++			*dpte_row_height = dml_min(128, 1 << (unsigned int) dml_floor(dml_log2(PTEBufferSizeInRequests * *PixelPTEReqWidth / Pitch), 1));
+ 		*dpte_row_width_ub = (dml_ceil(((double) SwathWidth - 1) / *PixelPTEReqWidth, 1) + 1) * *PixelPTEReqWidth;
+ 		*PixelPTEBytesPerRow = *dpte_row_width_ub / *PixelPTEReqWidth * *PTERequestSize;
+ 	} else if (ScanDirection != dm_vert) {
+diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
+index c56656a95cf99..b7bb5610dfe21 100644
+--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
++++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
+@@ -614,11 +614,14 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+ 	int ret;
+ 
+ 	if (obj->import_attach) {
+-		/* Drop the reference drm_gem_mmap_obj() acquired.*/
+-		drm_gem_object_put(obj);
+ 		vma->vm_private_data = NULL;
++		ret = dma_buf_mmap(obj->dma_buf, vma, 0);
++
++		/* Drop the reference drm_gem_mmap_obj() acquired.*/
++		if (!ret)
++			drm_gem_object_put(obj);
+ 
+-		return dma_buf_mmap(obj->dma_buf, vma, 0);
++		return ret;
+ 	}
+ 
+ 	shmem = to_drm_gem_shmem_obj(obj);
+diff --git a/drivers/gpu/drm/i915/gt/intel_ring.c b/drivers/gpu/drm/i915/gt/intel_ring.c
+index 69b2e5509d678..de67b2745258f 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ring.c
++++ b/drivers/gpu/drm/i915/gt/intel_ring.c
+@@ -108,7 +108,7 @@ static struct i915_vma *create_ring_vma(struct i915_ggtt *ggtt, int size)
+ 	struct i915_vma *vma;
+ 
+ 	obj = ERR_PTR(-ENODEV);
+-	if (i915_ggtt_has_aperture(ggtt))
++	if (i915_ggtt_has_aperture(ggtt) && !HAS_LLC(i915))
+ 		obj = i915_gem_object_create_stolen(i915, size);
+ 	if (IS_ERR(obj))
+ 		obj = i915_gem_object_create_internal(i915, size);
+diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
+index c4c2d24dc5094..0532a5069c04b 100644
+--- a/drivers/gpu/drm/i915/i915_active.c
++++ b/drivers/gpu/drm/i915/i915_active.c
+@@ -432,8 +432,7 @@ replace_barrier(struct i915_active *ref, struct i915_active_fence *active)
+ 	 * we can use it to substitute for the pending idle-barrer
+ 	 * request that we want to emit on the kernel_context.
+ 	 */
+-	__active_del_barrier(ref, node_from_active(active));
+-	return true;
++	return __active_del_barrier(ref, node_from_active(active));
+ }
+ 
+ int i915_active_ref(struct i915_active *ref, u64 idx, struct dma_fence *fence)
+@@ -446,16 +445,19 @@ int i915_active_ref(struct i915_active *ref, u64 idx, struct dma_fence *fence)
+ 	if (err)
+ 		return err;
+ 
+-	active = active_instance(ref, idx);
+-	if (!active) {
+-		err = -ENOMEM;
+-		goto out;
+-	}
++	do {
++		active = active_instance(ref, idx);
++		if (!active) {
++			err = -ENOMEM;
++			goto out;
++		}
++
++		if (replace_barrier(ref, active)) {
++			RCU_INIT_POINTER(active->fence, NULL);
++			atomic_dec(&ref->count);
++		}
++	} while (unlikely(is_barrier(active)));
+ 
+-	if (replace_barrier(ref, active)) {
+-		RCU_INIT_POINTER(active->fence, NULL);
+-		atomic_dec(&ref->count);
+-	}
+ 	if (!__i915_active_fence_set(active, fence))
+ 		__i915_active_acquire(ref);
+ 
+diff --git a/drivers/gpu/drm/meson/meson_vpp.c b/drivers/gpu/drm/meson/meson_vpp.c
+index 154837688ab0d..5df1957c8e41f 100644
+--- a/drivers/gpu/drm/meson/meson_vpp.c
++++ b/drivers/gpu/drm/meson/meson_vpp.c
+@@ -100,6 +100,8 @@ void meson_vpp_init(struct meson_drm *priv)
+ 			       priv->io_base + _REG(VPP_DOLBY_CTRL));
+ 		writel_relaxed(0x1020080,
+ 				priv->io_base + _REG(VPP_DUMMY_DATA1));
++		writel_relaxed(0x42020,
++				priv->io_base + _REG(VPP_DUMMY_DATA));
+ 	} else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A))
+ 		writel_relaxed(0xf, priv->io_base + _REG(DOLBY_PATH_CTRL));
+ 
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index 13596961ae17f..5ff856ef7d88c 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -236,7 +236,7 @@ static void panfrost_mmu_flush_range(struct panfrost_device *pfdev,
+ 	if (pm_runtime_active(pfdev->dev))
+ 		mmu_hw_do_operation(pfdev, mmu, iova, size, AS_COMMAND_FLUSH_PT);
+ 
+-	pm_runtime_put_sync_autosuspend(pfdev->dev);
++	pm_runtime_put_autosuspend(pfdev->dev);
+ }
+ 
+ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 5f9ec1d1464a2..524d6d712e724 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -258,6 +258,7 @@ static int hid_add_field(struct hid_parser *parser, unsigned report_type, unsign
+ {
+ 	struct hid_report *report;
+ 	struct hid_field *field;
++	unsigned int max_buffer_size = HID_MAX_BUFFER_SIZE;
+ 	unsigned int usages;
+ 	unsigned int offset;
+ 	unsigned int i;
+@@ -288,8 +289,11 @@ static int hid_add_field(struct hid_parser *parser, unsigned report_type, unsign
+ 	offset = report->size;
+ 	report->size += parser->global.report_size * parser->global.report_count;
+ 
++	if (parser->device->ll_driver->max_buffer_size)
++		max_buffer_size = parser->device->ll_driver->max_buffer_size;
++
+ 	/* Total size check: Allow for possible report index byte */
+-	if (report->size > (HID_MAX_BUFFER_SIZE - 1) << 3) {
++	if (report->size > (max_buffer_size - 1) << 3) {
+ 		hid_err(parser->device, "report is too long\n");
+ 		return -1;
+ 	}
+@@ -1752,6 +1756,7 @@ int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
+ 	struct hid_report_enum *report_enum = hid->report_enum + type;
+ 	struct hid_report *report;
+ 	struct hid_driver *hdrv;
++	int max_buffer_size = HID_MAX_BUFFER_SIZE;
+ 	unsigned int a;
+ 	u32 rsize, csize = size;
+ 	u8 *cdata = data;
+@@ -1768,10 +1773,13 @@ int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
+ 
+ 	rsize = hid_compute_report_size(report);
+ 
+-	if (report_enum->numbered && rsize >= HID_MAX_BUFFER_SIZE)
+-		rsize = HID_MAX_BUFFER_SIZE - 1;
+-	else if (rsize > HID_MAX_BUFFER_SIZE)
+-		rsize = HID_MAX_BUFFER_SIZE;
++	if (hid->ll_driver->max_buffer_size)
++		max_buffer_size = hid->ll_driver->max_buffer_size;
++
++	if (report_enum->numbered && rsize >= max_buffer_size)
++		rsize = max_buffer_size - 1;
++	else if (rsize > max_buffer_size)
++		rsize = max_buffer_size;
+ 
+ 	if (csize < rsize) {
+ 		dbg_hid("report %d is too short, (%d < %d)\n", report->id,
+diff --git a/drivers/hid/uhid.c b/drivers/hid/uhid.c
+index fc06d8bb42e0f..ba0ca652b9dab 100644
+--- a/drivers/hid/uhid.c
++++ b/drivers/hid/uhid.c
+@@ -395,6 +395,7 @@ struct hid_ll_driver uhid_hid_driver = {
+ 	.parse = uhid_hid_parse,
+ 	.raw_request = uhid_hid_raw_request,
+ 	.output_report = uhid_hid_output_report,
++	.max_buffer_size = UHID_DATA_MAX,
+ };
+ EXPORT_SYMBOL_GPL(uhid_hid_driver);
+ 
+diff --git a/drivers/hwmon/adt7475.c b/drivers/hwmon/adt7475.c
+index 9d5b019651f2d..6b84822e7d93b 100644
+--- a/drivers/hwmon/adt7475.c
++++ b/drivers/hwmon/adt7475.c
+@@ -486,10 +486,10 @@ static ssize_t temp_store(struct device *dev, struct device_attribute *attr,
+ 		val = (temp - val) / 1000;
+ 
+ 		if (sattr->index != 1) {
+-			data->temp[HYSTERSIS][sattr->index] &= 0xF0;
++			data->temp[HYSTERSIS][sattr->index] &= 0x0F;
+ 			data->temp[HYSTERSIS][sattr->index] |= (val & 0xF) << 4;
+ 		} else {
+-			data->temp[HYSTERSIS][sattr->index] &= 0x0F;
++			data->temp[HYSTERSIS][sattr->index] &= 0xF0;
+ 			data->temp[HYSTERSIS][sattr->index] |= (val & 0xF);
+ 		}
+ 
+@@ -554,11 +554,11 @@ static ssize_t temp_st_show(struct device *dev, struct device_attribute *attr,
+ 		val = data->enh_acoustics[0] & 0xf;
+ 		break;
+ 	case 1:
+-		val = (data->enh_acoustics[1] >> 4) & 0xf;
++		val = data->enh_acoustics[1] & 0xf;
+ 		break;
+ 	case 2:
+ 	default:
+-		val = data->enh_acoustics[1] & 0xf;
++		val = (data->enh_acoustics[1] >> 4) & 0xf;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/hwmon/ina3221.c b/drivers/hwmon/ina3221.c
+index d3c98115042b5..836e7579e166a 100644
+--- a/drivers/hwmon/ina3221.c
++++ b/drivers/hwmon/ina3221.c
+@@ -772,7 +772,7 @@ static int ina3221_probe_child_from_dt(struct device *dev,
+ 		return ret;
+ 	} else if (val > INA3221_CHANNEL3) {
+ 		dev_err(dev, "invalid reg %d of %pOFn\n", val, child);
+-		return ret;
++		return -EINVAL;
+ 	}
+ 
+ 	input = &ina->inputs[val];
+diff --git a/drivers/hwmon/pmbus/adm1266.c b/drivers/hwmon/pmbus/adm1266.c
+index c7b373ba92f21..d1b2e936546fd 100644
+--- a/drivers/hwmon/pmbus/adm1266.c
++++ b/drivers/hwmon/pmbus/adm1266.c
+@@ -301,6 +301,7 @@ static int adm1266_config_gpio(struct adm1266_data *data)
+ 	data->gc.label = name;
+ 	data->gc.parent = &data->client->dev;
+ 	data->gc.owner = THIS_MODULE;
++	data->gc.can_sleep = true;
+ 	data->gc.base = -1;
+ 	data->gc.names = data->gpio_names;
+ 	data->gc.ngpio = ARRAY_SIZE(data->gpio_names);
+diff --git a/drivers/hwmon/pmbus/ucd9000.c b/drivers/hwmon/pmbus/ucd9000.c
+index f8017993e2b4d..9e26cc084a176 100644
+--- a/drivers/hwmon/pmbus/ucd9000.c
++++ b/drivers/hwmon/pmbus/ucd9000.c
+@@ -7,6 +7,7 @@
+  */
+ 
+ #include <linux/debugfs.h>
++#include <linux/delay.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/of_device.h>
+@@ -16,6 +17,7 @@
+ #include <linux/i2c.h>
+ #include <linux/pmbus.h>
+ #include <linux/gpio/driver.h>
++#include <linux/timekeeping.h>
+ #include "pmbus.h"
+ 
+ enum chips { ucd9000, ucd90120, ucd90124, ucd90160, ucd90320, ucd9090,
+@@ -65,6 +67,7 @@ struct ucd9000_data {
+ 	struct gpio_chip gpio;
+ #endif
+ 	struct dentry *debugfs;
++	ktime_t write_time;
+ };
+ #define to_ucd9000_data(_info) container_of(_info, struct ucd9000_data, info)
+ 
+@@ -73,6 +76,73 @@ struct ucd9000_debugfs_entry {
+ 	u8 index;
+ };
+ 
++/*
++ * It has been observed that the UCD90320 randomly fails register access when
++ * doing another access right on the back of a register write. To mitigate this
++ * make sure that there is a minimum delay between a write access and the
++ * following access. The 250us is based on experimental data. At a delay of
++ * 200us the issue seems to go away. Add a bit of extra margin to allow for
++ * system to system differences.
++ */
++#define UCD90320_WAIT_DELAY_US 250
++
++static inline void ucd90320_wait(const struct ucd9000_data *data)
++{
++	s64 delta = ktime_us_delta(ktime_get(), data->write_time);
++
++	if (delta < UCD90320_WAIT_DELAY_US)
++		udelay(UCD90320_WAIT_DELAY_US - delta);
++}
++
++static int ucd90320_read_word_data(struct i2c_client *client, int page,
++				   int phase, int reg)
++{
++	const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
++	struct ucd9000_data *data = to_ucd9000_data(info);
++
++	if (reg >= PMBUS_VIRT_BASE)
++		return -ENXIO;
++
++	ucd90320_wait(data);
++	return pmbus_read_word_data(client, page, phase, reg);
++}
++
++static int ucd90320_read_byte_data(struct i2c_client *client, int page, int reg)
++{
++	const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
++	struct ucd9000_data *data = to_ucd9000_data(info);
++
++	ucd90320_wait(data);
++	return pmbus_read_byte_data(client, page, reg);
++}
++
++static int ucd90320_write_word_data(struct i2c_client *client, int page,
++				    int reg, u16 word)
++{
++	const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
++	struct ucd9000_data *data = to_ucd9000_data(info);
++	int ret;
++
++	ucd90320_wait(data);
++	ret = pmbus_write_word_data(client, page, reg, word);
++	data->write_time = ktime_get();
++
++	return ret;
++}
++
++static int ucd90320_write_byte(struct i2c_client *client, int page, u8 value)
++{
++	const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
++	struct ucd9000_data *data = to_ucd9000_data(info);
++	int ret;
++
++	ucd90320_wait(data);
++	ret = pmbus_write_byte(client, page, value);
++	data->write_time = ktime_get();
++
++	return ret;
++}
++
+ static int ucd9000_get_fan_config(struct i2c_client *client, int fan)
+ {
+ 	int fan_config = 0;
+@@ -598,6 +668,11 @@ static int ucd9000_probe(struct i2c_client *client)
+ 		info->read_byte_data = ucd9000_read_byte_data;
+ 		info->func[0] |= PMBUS_HAVE_FAN12 | PMBUS_HAVE_STATUS_FAN12
+ 		  | PMBUS_HAVE_FAN34 | PMBUS_HAVE_STATUS_FAN34;
++	} else if (mid->driver_data == ucd90320) {
++		info->read_byte_data = ucd90320_read_byte_data;
++		info->read_word_data = ucd90320_read_word_data;
++		info->write_byte = ucd90320_write_byte;
++		info->write_word_data = ucd90320_write_word_data;
+ 	}
+ 
+ 	ucd9000_probe_gpio(client, mid, data);
+diff --git a/drivers/hwmon/tmp513.c b/drivers/hwmon/tmp513.c
+index 47bbe47e062fd..7d5f7441aceb1 100644
+--- a/drivers/hwmon/tmp513.c
++++ b/drivers/hwmon/tmp513.c
+@@ -758,7 +758,7 @@ static int tmp51x_probe(struct i2c_client *client)
+ static struct i2c_driver tmp51x_driver = {
+ 	.driver = {
+ 		.name	= "tmp51x",
+-		.of_match_table = of_match_ptr(tmp51x_of_match),
++		.of_match_table = tmp51x_of_match,
+ 	},
+ 	.probe_new	= tmp51x_probe,
+ 	.id_table	= tmp51x_id,
+diff --git a/drivers/hwmon/xgene-hwmon.c b/drivers/hwmon/xgene-hwmon.c
+index f2a5af239c956..f5d3cf86753f7 100644
+--- a/drivers/hwmon/xgene-hwmon.c
++++ b/drivers/hwmon/xgene-hwmon.c
+@@ -768,6 +768,7 @@ static int xgene_hwmon_remove(struct platform_device *pdev)
+ {
+ 	struct xgene_hwmon_dev *ctx = platform_get_drvdata(pdev);
+ 
++	cancel_work_sync(&ctx->workq);
+ 	hwmon_device_unregister(ctx->hwmon_dev);
+ 	kfifo_free(&ctx->async_msg_fifo);
+ 	if (acpi_disabled)
+diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
+index ceb6cdc20484e..7db6d0fc6ec2e 100644
+--- a/drivers/interconnect/core.c
++++ b/drivers/interconnect/core.c
+@@ -850,6 +850,10 @@ void icc_node_destroy(int id)
+ 
+ 	mutex_unlock(&icc_lock);
+ 
++	if (!node)
++		return;
++
++	kfree(node->links);
+ 	kfree(node);
+ }
+ EXPORT_SYMBOL_GPL(icc_node_destroy);
+diff --git a/drivers/media/i2c/m5mols/m5mols_core.c b/drivers/media/i2c/m5mols/m5mols_core.c
+index 21666d705e372..dcf9e4d4ee6b8 100644
+--- a/drivers/media/i2c/m5mols/m5mols_core.c
++++ b/drivers/media/i2c/m5mols/m5mols_core.c
+@@ -488,7 +488,7 @@ static enum m5mols_restype __find_restype(u32 code)
+ 	do {
+ 		if (code == m5mols_default_ffmt[type].code)
+ 			return type;
+-	} while (type++ != SIZE_DEFAULT_FFMT);
++	} while (++type != SIZE_DEFAULT_FFMT);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
+index af85b32c6c1c8..c468f9a02ef6b 100644
+--- a/drivers/mmc/host/atmel-mci.c
++++ b/drivers/mmc/host/atmel-mci.c
+@@ -1818,7 +1818,6 @@ static void atmci_tasklet_func(unsigned long priv)
+ 				atmci_writel(host, ATMCI_IER, ATMCI_NOTBUSY);
+ 				state = STATE_WAITING_NOTBUSY;
+ 			} else if (host->mrq->stop) {
+-				atmci_writel(host, ATMCI_IER, ATMCI_CMDRDY);
+ 				atmci_send_stop_cmd(host, data);
+ 				state = STATE_SENDING_STOP;
+ 			} else {
+@@ -1851,8 +1850,6 @@ static void atmci_tasklet_func(unsigned long priv)
+ 				 * command to send.
+ 				 */
+ 				if (host->mrq->stop) {
+-					atmci_writel(host, ATMCI_IER,
+-					             ATMCI_CMDRDY);
+ 					atmci_send_stop_cmd(host, data);
+ 					state = STATE_SENDING_STOP;
+ 				} else {
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index 24cd6d3dc6477..bf2592774165b 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -369,7 +369,7 @@ static void sdhci_am654_write_b(struct sdhci_host *host, u8 val, int reg)
+ 					MAX_POWER_ON_TIMEOUT, false, host, val,
+ 					reg);
+ 		if (ret)
+-			dev_warn(mmc_dev(host->mmc), "Power on failed\n");
++			dev_info(mmc_dev(host->mmc), "Power on failed\n");
+ 	}
+ }
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 371b345635e62..a253476a52b01 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -2734,7 +2734,7 @@ static int mv88e6xxx_get_max_mtu(struct dsa_switch *ds, int port)
+ 		return 10240 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
+ 	else if (chip->info->ops->set_max_frame_size)
+ 		return 1632 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
+-	return 1522 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
++	return ETH_DATA_LEN;
+ }
+ 
+ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+@@ -2742,6 +2742,17 @@ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+ 	struct mv88e6xxx_chip *chip = ds->priv;
+ 	int ret = 0;
+ 
++	/* For families where we don't know how to alter the MTU,
++	 * just accept any value up to ETH_DATA_LEN
++	 */
++	if (!chip->info->ops->port_set_jumbo_size &&
++	    !chip->info->ops->set_max_frame_size) {
++		if (new_mtu > ETH_DATA_LEN)
++			return -EINVAL;
++
++		return 0;
++	}
++
+ 	if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
+ 		new_mtu += EDSA_HLEN;
+ 
+@@ -2750,9 +2761,6 @@ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+ 		ret = chip->info->ops->port_set_jumbo_size(chip, port, new_mtu);
+ 	else if (chip->info->ops->set_max_frame_size)
+ 		ret = chip->info->ops->set_max_frame_size(chip, new_mtu);
+-	else
+-		if (new_mtu > 1522)
+-			ret = -EINVAL;
+ 	mv88e6xxx_reg_unlock(chip);
+ 
+ 	return ret;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 9e8a20a94862f..76481ff7074ba 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -14851,6 +14851,7 @@ static int i40e_init_recovery_mode(struct i40e_pf *pf, struct i40e_hw *hw)
+ 	int err;
+ 	int v_idx;
+ 
++	pci_set_drvdata(pf->pdev, pf);
+ 	pci_save_state(pf->pdev);
+ 
+ 	/* set up periodic task facility */
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index 59963b901be0f..e0790df700e2c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -169,8 +169,6 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+ 	}
+ 	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+ 
+-	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
+-
+ 	ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
+ 	err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
+ 	if (err)
+@@ -185,6 +183,8 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+ 		if (err)
+ 			return err;
+ 	}
++	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
++
+ 	err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+index d2f5855b2ea79..895b6f0a39841 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+@@ -4986,6 +4986,11 @@ static int qed_init_wfq_param(struct qed_hwfn *p_hwfn,
+ 
+ 	num_vports = p_hwfn->qm_info.num_vports;
+ 
++	if (num_vports < 2) {
++		DP_NOTICE(p_hwfn, "Unexpected num_vports: %d\n", num_vports);
++		return -EINVAL;
++	}
++
+ 	/* Accounting for the vports which are configured for WFQ explicitly */
+ 	for (i = 0; i < num_vports; i++) {
+ 		u32 tmp_speed;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c b/drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c
+index 3e3192a3ad9b7..fdbd5f07a1857 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c
+@@ -422,7 +422,7 @@ qed_mfw_get_tlv_time_value(struct qed_mfw_tlv_time *p_time,
+ 	if (p_time->hour > 23)
+ 		p_time->hour = 0;
+ 	if (p_time->min > 59)
+-		p_time->hour = 0;
++		p_time->min = 0;
+ 	if (p_time->msec > 999)
+ 		p_time->msec = 0;
+ 	if (p_time->usec > 999)
+diff --git a/drivers/net/ethernet/sun/ldmvsw.c b/drivers/net/ethernet/sun/ldmvsw.c
+index 01ea0d6f88193..934a4b54784b8 100644
+--- a/drivers/net/ethernet/sun/ldmvsw.c
++++ b/drivers/net/ethernet/sun/ldmvsw.c
+@@ -290,6 +290,9 @@ static int vsw_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ 
+ 	hp = mdesc_grab();
+ 
++	if (!hp)
++		return -ENODEV;
++
+ 	rmac = mdesc_get_property(hp, vdev->mp, remote_macaddr_prop, &len);
+ 	err = -ENODEV;
+ 	if (!rmac) {
+diff --git a/drivers/net/ethernet/sun/sunvnet.c b/drivers/net/ethernet/sun/sunvnet.c
+index 96b883f965f63..b6c03adf1e762 100644
+--- a/drivers/net/ethernet/sun/sunvnet.c
++++ b/drivers/net/ethernet/sun/sunvnet.c
+@@ -431,6 +431,9 @@ static int vnet_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ 
+ 	hp = mdesc_grab();
+ 
++	if (!hp)
++		return -ENODEV;
++
+ 	vp = vnet_find_parent(hp, vdev->mp, vdev);
+ 	if (IS_ERR(vp)) {
+ 		pr_err("Cannot find port parent vnet\n");
+diff --git a/drivers/net/ipvlan/ipvlan_l3s.c b/drivers/net/ipvlan/ipvlan_l3s.c
+index 943d26cbf39f5..71712ea25403d 100644
+--- a/drivers/net/ipvlan/ipvlan_l3s.c
++++ b/drivers/net/ipvlan/ipvlan_l3s.c
+@@ -101,6 +101,7 @@ static unsigned int ipvlan_nf_input(void *priv, struct sk_buff *skb,
+ 		goto out;
+ 
+ 	skb->dev = addr->master->dev;
++	skb->skb_iif = skb->dev->ifindex;
+ 	len = skb->len + ETH_HLEN;
+ 	ipvlan_count_rx(addr->master, len, true, false);
+ out:
+diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
+index caf7291ffaf83..b67de3f9ef186 100644
+--- a/drivers/net/phy/smsc.c
++++ b/drivers/net/phy/smsc.c
+@@ -181,8 +181,11 @@ static int lan95xx_config_aneg_ext(struct phy_device *phydev)
+ static int lan87xx_read_status(struct phy_device *phydev)
+ {
+ 	struct smsc_phy_priv *priv = phydev->priv;
++	int err;
+ 
+-	int err = genphy_read_status(phydev);
++	err = genphy_read_status(phydev);
++	if (err)
++		return err;
+ 
+ 	if (!phydev->link && priv->energy_enable) {
+ 		/* Disable EDPD to wake up PHY */
+diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
+index 378a12ae2d957..fb1389bd09392 100644
+--- a/drivers/net/usb/smsc75xx.c
++++ b/drivers/net/usb/smsc75xx.c
+@@ -2199,6 +2199,13 @@ static int smsc75xx_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 		size = (rx_cmd_a & RX_CMD_A_LEN) - RXW_PADDING;
+ 		align_count = (4 - ((size + RXW_PADDING) % 4)) % 4;
+ 
++		if (unlikely(size > skb->len)) {
++			netif_dbg(dev, rx_err, dev->net,
++				  "size err rx_cmd_a=0x%08x\n",
++				  rx_cmd_a);
++			return 0;
++		}
++
+ 		if (unlikely(rx_cmd_a & RX_CMD_A_RED)) {
+ 			netif_dbg(dev, rx_err, dev->net,
+ 				  "Error rx_cmd_a=0x%08x\n", rx_cmd_a);
+diff --git a/drivers/nfc/pn533/usb.c b/drivers/nfc/pn533/usb.c
+index 57b07446bb768..68eb1253f888f 100644
+--- a/drivers/nfc/pn533/usb.c
++++ b/drivers/nfc/pn533/usb.c
+@@ -175,6 +175,7 @@ static int pn533_usb_send_frame(struct pn533 *dev,
+ 	print_hex_dump_debug("PN533 TX: ", DUMP_PREFIX_NONE, 16, 1,
+ 			     out->data, out->len, false);
+ 
++	arg.phy = phy;
+ 	init_completion(&arg.done);
+ 	cntx = phy->out_urb->context;
+ 	phy->out_urb->context = &arg;
+diff --git a/drivers/nfc/st-nci/ndlc.c b/drivers/nfc/st-nci/ndlc.c
+index 5d74c674368a5..8ccf5a86ad1bb 100644
+--- a/drivers/nfc/st-nci/ndlc.c
++++ b/drivers/nfc/st-nci/ndlc.c
+@@ -286,13 +286,15 @@ EXPORT_SYMBOL(ndlc_probe);
+ 
+ void ndlc_remove(struct llt_ndlc *ndlc)
+ {
+-	st_nci_remove(ndlc->ndev);
+-
+ 	/* cancel timers */
+ 	del_timer_sync(&ndlc->t1_timer);
+ 	del_timer_sync(&ndlc->t2_timer);
+ 	ndlc->t2_active = false;
+ 	ndlc->t1_active = false;
++	/* cancel work */
++	cancel_work_sync(&ndlc->sm_work);
++
++	st_nci_remove(ndlc->ndev);
+ 
+ 	skb_queue_purge(&ndlc->rcv_q);
+ 	skb_queue_purge(&ndlc->send_q);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index e162f1dfbafe9..a4b6aa932a8fe 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -723,16 +723,26 @@ static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req,
+ 		range = page_address(ns->ctrl->discard_page);
+ 	}
+ 
+-	__rq_for_each_bio(bio, req) {
+-		u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
+-		u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
+-
+-		if (n < segments) {
+-			range[n].cattr = cpu_to_le32(0);
+-			range[n].nlb = cpu_to_le32(nlb);
+-			range[n].slba = cpu_to_le64(slba);
++	if (queue_max_discard_segments(req->q) == 1) {
++		u64 slba = nvme_sect_to_lba(ns, blk_rq_pos(req));
++		u32 nlb = blk_rq_sectors(req) >> (ns->lba_shift - 9);
++
++		range[0].cattr = cpu_to_le32(0);
++		range[0].nlb = cpu_to_le32(nlb);
++		range[0].slba = cpu_to_le64(slba);
++		n = 1;
++	} else {
++		__rq_for_each_bio(bio, req) {
++			u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
++			u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
++
++			if (n < segments) {
++				range[n].cattr = cpu_to_le32(0);
++				range[n].nlb = cpu_to_le32(nlb);
++				range[n].slba = cpu_to_le64(slba);
++			}
++			n++;
+ 		}
+-		n++;
+ 	}
+ 
+ 	if (WARN_ON_ONCE(n != segments)) {
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index bc88ff2912f56..a82a0796a6148 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -749,8 +749,10 @@ static void __nvmet_req_complete(struct nvmet_req *req, u16 status)
+ 
+ void nvmet_req_complete(struct nvmet_req *req, u16 status)
+ {
++	struct nvmet_sq *sq = req->sq;
++
+ 	__nvmet_req_complete(req, status);
+-	percpu_ref_put(&req->sq->ref);
++	percpu_ref_put(&sq->ref);
+ }
+ EXPORT_SYMBOL_GPL(nvmet_req_complete);
+ 
+diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
+index 8b587fc97f7bc..c22cc20db1a74 100644
+--- a/drivers/pci/pci-driver.c
++++ b/drivers/pci/pci-driver.c
+@@ -911,7 +911,7 @@ static int pci_pm_resume_noirq(struct device *dev)
+ 	pcie_pme_root_status_cleanup(pci_dev);
+ 
+ 	if (!skip_bus_pm && prev_state == PCI_D3cold)
+-		pci_bridge_wait_for_secondary_bus(pci_dev);
++		pci_bridge_wait_for_secondary_bus(pci_dev, "resume", PCI_RESET_WAIT);
+ 
+ 	if (pci_has_legacy_pm_support(pci_dev))
+ 		return 0;
+@@ -1298,7 +1298,7 @@ static int pci_pm_runtime_resume(struct device *dev)
+ 	pci_pm_default_resume(pci_dev);
+ 
+ 	if (prev_state == PCI_D3cold)
+-		pci_bridge_wait_for_secondary_bus(pci_dev);
++		pci_bridge_wait_for_secondary_bus(pci_dev, "resume", PCI_RESET_WAIT);
+ 
+ 	if (pm && pm->runtime_resume)
+ 		error = pm->runtime_resume(dev);
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 744a2e05635b9..d37013d007b6e 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -157,9 +157,6 @@ static int __init pcie_port_pm_setup(char *str)
+ }
+ __setup("pcie_port_pm=", pcie_port_pm_setup);
+ 
+-/* Time to wait after a reset for device to become responsive */
+-#define PCIE_RESET_READY_POLL_MS 60000
+-
+ /**
+  * pci_bus_max_busnr - returns maximum PCI bus number of given bus' children
+  * @bus: pointer to PCI bus structure to search
+@@ -1221,7 +1218,7 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
+ 			return -ENOTTY;
+ 		}
+ 
+-		if (delay > 1000)
++		if (delay > PCI_RESET_WAIT)
+ 			pci_info(dev, "not ready %dms after %s; waiting\n",
+ 				 delay - 1, reset_type);
+ 
+@@ -1230,7 +1227,7 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
+ 		pci_read_config_dword(dev, PCI_COMMAND, &id);
+ 	}
+ 
+-	if (delay > 1000)
++	if (delay > PCI_RESET_WAIT)
+ 		pci_info(dev, "ready %dms after %s\n", delay - 1,
+ 			 reset_type);
+ 
+@@ -4792,24 +4789,31 @@ static int pci_bus_max_d3cold_delay(const struct pci_bus *bus)
+ /**
+  * pci_bridge_wait_for_secondary_bus - Wait for secondary bus to be accessible
+  * @dev: PCI bridge
++ * @reset_type: reset type in human-readable form
++ * @timeout: maximum time to wait for devices on secondary bus (milliseconds)
+  *
+  * Handle necessary delays before access to the devices on the secondary
+- * side of the bridge are permitted after D3cold to D0 transition.
++ * side of the bridge are permitted after D3cold to D0 transition
++ * or Conventional Reset.
+  *
+  * For PCIe this means the delays in PCIe 5.0 section 6.6.1. For
+  * conventional PCI it means Tpvrh + Trhfa specified in PCI 3.0 section
+  * 4.3.2.
++ *
++ * Return 0 on success or -ENOTTY if the first device on the secondary bus
++ * failed to become accessible.
+  */
+-void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
++int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
++				      int timeout)
+ {
+ 	struct pci_dev *child;
+ 	int delay;
+ 
+ 	if (pci_dev_is_disconnected(dev))
+-		return;
++		return 0;
+ 
+ 	if (!pci_is_bridge(dev))
+-		return;
++		return 0;
+ 
+ 	down_read(&pci_bus_sem);
+ 
+@@ -4821,14 +4825,14 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+ 	 */
+ 	if (!dev->subordinate || list_empty(&dev->subordinate->devices)) {
+ 		up_read(&pci_bus_sem);
+-		return;
++		return 0;
+ 	}
+ 
+ 	/* Take d3cold_delay requirements into account */
+ 	delay = pci_bus_max_d3cold_delay(dev->subordinate);
+ 	if (!delay) {
+ 		up_read(&pci_bus_sem);
+-		return;
++		return 0;
+ 	}
+ 
+ 	child = list_first_entry(&dev->subordinate->devices, struct pci_dev,
+@@ -4837,14 +4841,12 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+ 
+ 	/*
+ 	 * Conventional PCI and PCI-X we need to wait Tpvrh + Trhfa before
+-	 * accessing the device after reset (that is 1000 ms + 100 ms). In
+-	 * practice this should not be needed because we don't do power
+-	 * management for them (see pci_bridge_d3_possible()).
++	 * accessing the device after reset (that is 1000 ms + 100 ms).
+ 	 */
+ 	if (!pci_is_pcie(dev)) {
+ 		pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
+ 		msleep(1000 + delay);
+-		return;
++		return 0;
+ 	}
+ 
+ 	/*
+@@ -4861,11 +4863,11 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+ 	 * configuration requests if we only wait for 100 ms (see
+ 	 * https://bugzilla.kernel.org/show_bug.cgi?id=203885).
+ 	 *
+-	 * Therefore we wait for 100 ms and check for the device presence.
+-	 * If it is still not present give it an additional 100 ms.
++	 * Therefore we wait for 100 ms and check for the device presence
++	 * until the timeout expires.
+ 	 */
+ 	if (!pcie_downstream_port(dev))
+-		return;
++		return 0;
+ 
+ 	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
+ 		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
+@@ -4876,14 +4878,11 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+ 		if (!pcie_wait_for_link_delay(dev, true, delay)) {
+ 			/* Did not train, no need to wait any further */
+ 			pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n");
+-			return;
++			return -ENOTTY;
+ 		}
+ 	}
+ 
+-	if (!pci_device_is_present(child)) {
+-		pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
+-		msleep(delay);
+-	}
++	return pci_dev_wait(child, reset_type, timeout - delay);
+ }
+ 
+ void pci_reset_secondary_bus(struct pci_dev *dev)
+@@ -4902,15 +4901,6 @@ void pci_reset_secondary_bus(struct pci_dev *dev)
+ 
+ 	ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET;
+ 	pci_write_config_word(dev, PCI_BRIDGE_CONTROL, ctrl);
+-
+-	/*
+-	 * Trhfa for conventional PCI is 2^25 clock cycles.
+-	 * Assuming a minimum 33MHz clock this results in a 1s
+-	 * delay before we can consider subordinate devices to
+-	 * be re-initialized.  PCIe has some ways to shorten this,
+-	 * but we don't make use of them yet.
+-	 */
+-	ssleep(1);
+ }
+ 
+ void __weak pcibios_reset_secondary_bus(struct pci_dev *dev)
+@@ -4929,7 +4919,8 @@ int pci_bridge_secondary_bus_reset(struct pci_dev *dev)
+ {
+ 	pcibios_reset_secondary_bus(dev);
+ 
+-	return pci_dev_wait(dev, "bus reset", PCIE_RESET_READY_POLL_MS);
++	return pci_bridge_wait_for_secondary_bus(dev, "bus reset",
++						 PCIE_RESET_READY_POLL_MS);
+ }
+ EXPORT_SYMBOL_GPL(pci_bridge_secondary_bus_reset);
+ 
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 9197d7362731e..72436000ff252 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -47,6 +47,19 @@ int pci_bus_error_reset(struct pci_dev *dev);
+ #define PCI_PM_D3HOT_WAIT       10	/* msec */
+ #define PCI_PM_D3COLD_WAIT      100	/* msec */
+ 
++/*
++ * Following exit from Conventional Reset, devices must be ready within 1 sec
++ * (PCIe r6.0 sec 6.6.1).  A D3cold to D0 transition implies a Conventional
++ * Reset (PCIe r6.0 sec 5.8).
++ */
++#define PCI_RESET_WAIT		1000	/* msec */
++/*
++ * Devices may extend the 1 sec period through Request Retry Status completions
++ * (PCIe r6.0 sec 2.3.1).  The spec does not provide an upper limit, but 60 sec
++ * ought to be enough for any device to become responsive.
++ */
++#define PCIE_RESET_READY_POLL_MS 60000	/* msec */
++
+ /**
+  * struct pci_platform_pm_ops - Firmware PM callbacks
+  *
+@@ -108,7 +121,8 @@ void pci_allocate_cap_save_buffers(struct pci_dev *dev);
+ void pci_free_cap_save_buffers(struct pci_dev *dev);
+ bool pci_bridge_d3_possible(struct pci_dev *dev);
+ void pci_bridge_d3_update(struct pci_dev *dev);
+-void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev);
++int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
++				      int timeout);
+ 
+ static inline void pci_wakeup_event(struct pci_dev *dev)
+ {
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index c556e7beafe38..f21d64ae4ffcc 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -170,8 +170,8 @@ pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
+ 	pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
+ 			      PCI_EXP_DPC_STATUS_TRIGGER);
+ 
+-	if (!pcie_wait_for_link(pdev, true)) {
+-		pci_info(pdev, "Data Link Layer Link Active not set in 1000 msec\n");
++	if (pci_bridge_wait_for_secondary_bus(pdev, "DPC",
++					      PCIE_RESET_READY_POLL_MS)) {
+ 		clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
+ 		ret = PCI_ERS_RESULT_DISCONNECT;
+ 	} else {
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index fae0323242103..18321cf9db5d6 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -322,10 +322,7 @@ static void scsi_host_dev_release(struct device *dev)
+ 	struct Scsi_Host *shost = dev_to_shost(dev);
+ 	struct device *parent = dev->parent;
+ 
+-	/* In case scsi_remove_host() has not been called. */
+-	scsi_proc_hostdir_rm(shost->hostt);
+-
+-	/* Wait for functions invoked through call_rcu(&shost->rcu, ...) */
++	/* Wait for functions invoked through call_rcu(&scmd->rcu, ...) */
+ 	rcu_barrier();
+ 
+ 	if (shost->tmf_work_q)
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_transport.c b/drivers/scsi/mpt3sas/mpt3sas_transport.c
+index b58f4d9c296a3..326265fd7f91a 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_transport.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_transport.c
+@@ -670,7 +670,7 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
+ 		goto out_fail;
+ 	}
+ 	port = sas_port_alloc_num(sas_node->parent_dev);
+-	if ((sas_port_add(port))) {
++	if (!port || (sas_port_add(port))) {
+ 		ioc_err(ioc, "failure at %s:%d/%s()!\n",
+ 			__FILE__, __LINE__, __func__);
+ 		goto out_fail;
+@@ -695,6 +695,12 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
+ 		rphy = sas_expander_alloc(port,
+ 		    mpt3sas_port->remote_identify.device_type);
+ 
++	if (!rphy) {
++		ioc_err(ioc, "failure at %s:%d/%s()!\n",
++			__FILE__, __LINE__, __func__);
++		goto out_delete_port;
++	}
++
+ 	rphy->identify = mpt3sas_port->remote_identify;
+ 
+ 	if (mpt3sas_port->remote_identify.device_type == SAS_END_DEVICE) {
+@@ -714,6 +720,7 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
+ 			__FILE__, __LINE__, __func__);
+ 		sas_rphy_free(rphy);
+ 		rphy = NULL;
++		goto out_delete_port;
+ 	}
+ 
+ 	if (mpt3sas_port->remote_identify.device_type == SAS_END_DEVICE) {
+@@ -740,7 +747,10 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
+ 		    rphy_to_expander_device(rphy));
+ 	return mpt3sas_port;
+ 
+- out_fail:
++out_delete_port:
++	sas_port_delete(port);
++
++out_fail:
+ 	list_for_each_entry_safe(mpt3sas_phy, next, &mpt3sas_port->phy_list,
+ 	    port_siblings)
+ 		list_del(&mpt3sas_phy->port_siblings);
+diff --git a/drivers/tty/serial/8250/8250_em.c b/drivers/tty/serial/8250/8250_em.c
+index f8e99995eee91..d94c3811a8f7a 100644
+--- a/drivers/tty/serial/8250/8250_em.c
++++ b/drivers/tty/serial/8250/8250_em.c
+@@ -106,8 +106,8 @@ static int serial8250_em_probe(struct platform_device *pdev)
+ 	memset(&up, 0, sizeof(up));
+ 	up.port.mapbase = regs->start;
+ 	up.port.irq = irq;
+-	up.port.type = PORT_UNKNOWN;
+-	up.port.flags = UPF_BOOT_AUTOCONF | UPF_FIXED_PORT | UPF_IOREMAP;
++	up.port.type = PORT_16750;
++	up.port.flags = UPF_FIXED_PORT | UPF_IOREMAP | UPF_FIXED_TYPE;
+ 	up.port.dev = &pdev->dev;
+ 	up.port.private_data = priv;
+ 
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 9cb0e8673f826..32cce52800a73 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2159,9 +2159,15 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	/* update the per-port timeout */
+ 	uart_update_timeout(port, termios->c_cflag, baud);
+ 
+-	/* wait transmit engin complete */
+-	lpuart32_write(&sport->port, 0, UARTMODIR);
+-	lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
++	/*
++	 * LPUART Transmission Complete Flag may never be set while queuing a break
++	 * character, so skip waiting for transmission complete when UARTCTRL_SBK is
++	 * asserted.
++	 */
++	if (!(old_ctrl & UARTCTRL_SBK)) {
++		lpuart32_write(&sport->port, 0, UARTMODIR);
++		lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
++	}
+ 
+ 	/* disable transmit and receive */
+ 	lpuart32_write(&sport->port, old_ctrl & ~(UARTCTRL_TE | UARTCTRL_RE),
+diff --git a/drivers/video/fbdev/stifb.c b/drivers/video/fbdev/stifb.c
+index 3feb6e40d56d8..ef8a4c5fc6875 100644
+--- a/drivers/video/fbdev/stifb.c
++++ b/drivers/video/fbdev/stifb.c
+@@ -921,6 +921,28 @@ SETUP_HCRX(struct stifb_info *fb)
+ 
+ /* ------------------- driver specific functions --------------------------- */
+ 
++static int
++stifb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
++{
++	struct stifb_info *fb = container_of(info, struct stifb_info, info);
++
++	if (var->xres != fb->info.var.xres ||
++	    var->yres != fb->info.var.yres ||
++	    var->bits_per_pixel != fb->info.var.bits_per_pixel)
++		return -EINVAL;
++
++	var->xres_virtual = var->xres;
++	var->yres_virtual = var->yres;
++	var->xoffset = 0;
++	var->yoffset = 0;
++	var->grayscale = fb->info.var.grayscale;
++	var->red.length = fb->info.var.red.length;
++	var->green.length = fb->info.var.green.length;
++	var->blue.length = fb->info.var.blue.length;
++
++	return 0;
++}
++
+ static int
+ stifb_setcolreg(u_int regno, u_int red, u_int green,
+ 	      u_int blue, u_int transp, struct fb_info *info)
+@@ -1145,6 +1167,7 @@ stifb_init_display(struct stifb_info *fb)
+ 
+ static const struct fb_ops stifb_ops = {
+ 	.owner		= THIS_MODULE,
++	.fb_check_var	= stifb_check_var,
+ 	.fb_setcolreg	= stifb_setcolreg,
+ 	.fb_blank	= stifb_blank,
+ 	.fb_fillrect	= stifb_fillrect,
+@@ -1164,6 +1187,7 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
+ 	struct stifb_info *fb;
+ 	struct fb_info *info;
+ 	unsigned long sti_rom_address;
++	char modestr[32];
+ 	char *dev_name;
+ 	int bpp, xres, yres;
+ 
+@@ -1342,6 +1366,9 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
+ 	info->flags = FBINFO_HWACCEL_COPYAREA | FBINFO_HWACCEL_FILLRECT;
+ 	info->pseudo_palette = &fb->pseudo_palette;
+ 
++	scnprintf(modestr, sizeof(modestr), "%dx%d-%d", xres, yres, bpp);
++	fb_find_mode(&info->var, info, modestr, NULL, 0, NULL, bpp);
++
+ 	/* This has to be done !!! */
+ 	if (fb_alloc_cmap(&info->cmap, NR_PALETTE, 0))
+ 		goto out_err1;
+diff --git a/fs/attr.c b/fs/attr.c
+index 848ffe6e3c24b..326a0db3296d7 100644
+--- a/fs/attr.c
++++ b/fs/attr.c
+@@ -18,6 +18,65 @@
+ #include <linux/evm.h>
+ #include <linux/ima.h>
+ 
++#include "internal.h"
++
++/**
++ * setattr_should_drop_sgid - determine whether the setgid bit needs to be
++ *                            removed
++ * @inode:	inode to check
++ *
++ * This function determines whether the setgid bit needs to be removed.
++ * We retain backwards compatibility and require setgid bit to be removed
++ * unconditionally if S_IXGRP is set. Otherwise we have the exact same
++ * requirements as setattr_prepare() and setattr_copy().
++ *
++ * Return: ATTR_KILL_SGID if setgid bit needs to be removed, 0 otherwise.
++ */
++int setattr_should_drop_sgid(const struct inode *inode)
++{
++	umode_t mode = inode->i_mode;
++
++	if (!(mode & S_ISGID))
++		return 0;
++	if (mode & S_IXGRP)
++		return ATTR_KILL_SGID;
++	if (!in_group_or_capable(inode, inode->i_gid))
++		return ATTR_KILL_SGID;
++	return 0;
++}
++
++/**
++ * setattr_should_drop_suidgid - determine whether the set{g,u}id bit needs to
++ *                               be dropped
++ * @inode:	inode to check
++ *
++ * This function determines whether the set{g,u}id bits need to be removed.
++ * If the setuid bit needs to be removed ATTR_KILL_SUID is returned. If the
++ * setgid bit needs to be removed ATTR_KILL_SGID is returned. If both
++ * set{g,u}id bits need to be removed the corresponding mask of both flags is
++ * returned.
++ *
++ * Return: A mask of ATTR_KILL_S{G,U}ID indicating which - if any - setid bits
++ * to remove, 0 otherwise.
++ */
++int setattr_should_drop_suidgid(struct inode *inode)
++{
++	umode_t mode = inode->i_mode;
++	int kill = 0;
++
++	/* suid always must be killed */
++	if (unlikely(mode & S_ISUID))
++		kill = ATTR_KILL_SUID;
++
++	kill |= setattr_should_drop_sgid(inode);
++
++	if (unlikely(kill && !capable(CAP_FSETID) && S_ISREG(mode)))
++		return kill;
++
++	return 0;
++}
++EXPORT_SYMBOL(setattr_should_drop_suidgid);
++
+ static bool chown_ok(const struct inode *inode, kuid_t uid)
+ {
+ 	if (uid_eq(current_fsuid(), inode->i_uid) &&
+@@ -90,9 +149,8 @@ int setattr_prepare(struct dentry *dentry, struct iattr *attr)
+ 		if (!inode_owner_or_capable(inode))
+ 			return -EPERM;
+ 		/* Also check the setgid bit! */
+-		if (!in_group_p((ia_valid & ATTR_GID) ? attr->ia_gid :
+-				inode->i_gid) &&
+-		    !capable_wrt_inode_uidgid(inode, CAP_FSETID))
++		if (!in_group_or_capable(inode, (ia_valid & ATTR_GID) ?
++						attr->ia_gid : inode->i_gid))
+ 			attr->ia_mode &= ~S_ISGID;
+ 	}
+ 
+@@ -193,9 +251,7 @@ void setattr_copy(struct inode *inode, const struct iattr *attr)
+ 		inode->i_ctime = attr->ia_ctime;
+ 	if (ia_valid & ATTR_MODE) {
+ 		umode_t mode = attr->ia_mode;
+-
+-		if (!in_group_p(inode->i_gid) &&
+-		    !capable_wrt_inode_uidgid(inode, CAP_FSETID))
++		if (!in_group_or_capable(inode, inode->i_gid))
+ 			mode &= ~S_ISGID;
+ 		inode->i_mode = mode;
+ 	}
+@@ -297,7 +353,7 @@ int notify_change(struct dentry * dentry, struct iattr * attr, struct inode **de
+ 		}
+ 	}
+ 	if (ia_valid & ATTR_KILL_SGID) {
+-		if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP)) {
++		if (mode & S_ISGID) {
+ 			if (!(ia_valid & ATTR_MODE)) {
+ 				ia_valid = attr->ia_valid |= ATTR_MODE;
+ 				attr->ia_mode = inode->i_mode;
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index 97cd4df040608..e11818801148a 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -236,15 +236,32 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 		size[0] = 8; /* sizeof __le64 */
+ 		data[0] = ptr;
+ 
+-		rc = SMB2_set_info_init(tcon, server,
+-					&rqst[num_rqst], COMPOUND_FID,
+-					COMPOUND_FID, current->tgid,
+-					FILE_END_OF_FILE_INFORMATION,
+-					SMB2_O_INFO_FILE, 0, data, size);
++		if (cfile) {
++			rc = SMB2_set_info_init(tcon, server,
++						&rqst[num_rqst],
++						cfile->fid.persistent_fid,
++						cfile->fid.volatile_fid,
++						current->tgid,
++						FILE_END_OF_FILE_INFORMATION,
++						SMB2_O_INFO_FILE, 0,
++						data, size);
++		} else {
++			rc = SMB2_set_info_init(tcon, server,
++						&rqst[num_rqst],
++						COMPOUND_FID,
++						COMPOUND_FID,
++						current->tgid,
++						FILE_END_OF_FILE_INFORMATION,
++						SMB2_O_INFO_FILE, 0,
++						data, size);
++			if (!rc) {
++				smb2_set_next_command(tcon, &rqst[num_rqst]);
++				smb2_set_related(&rqst[num_rqst]);
++			}
++		}
+ 		if (rc)
+ 			goto finished;
+-		smb2_set_next_command(tcon, &rqst[num_rqst]);
+-		smb2_set_related(&rqst[num_rqst++]);
++		num_rqst++;
+ 		trace_smb3_set_eof_enter(xid, ses->Suid, tcon->tid, full_path);
+ 		break;
+ 	case SMB2_OP_SET_INFO:
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index b137006f0fd25..4409f56fc37e6 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -312,7 +312,7 @@ static int
+ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
+ 		struct smb_rqst *rqst)
+ {
+-	int rc = 0;
++	int rc;
+ 	struct kvec *iov;
+ 	int n_vec;
+ 	unsigned int send_length = 0;
+@@ -323,6 +323,7 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
+ 	struct msghdr smb_msg = {};
+ 	__be32 rfc1002_marker;
+ 
++	cifs_in_send_inc(server);
+ 	if (cifs_rdma_enabled(server)) {
+ 		/* return -EAGAIN when connecting or reconnecting */
+ 		rc = -EAGAIN;
+@@ -331,14 +332,17 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
+ 		goto smbd_done;
+ 	}
+ 
++	rc = -EAGAIN;
+ 	if (ssocket == NULL)
+-		return -EAGAIN;
++		goto out;
+ 
++	rc = -ERESTARTSYS;
+ 	if (fatal_signal_pending(current)) {
+ 		cifs_dbg(FYI, "signal pending before send request\n");
+-		return -ERESTARTSYS;
++		goto out;
+ 	}
+ 
++	rc = 0;
+ 	/* cork the socket */
+ 	tcp_sock_set_cork(ssocket->sk, true);
+ 
+@@ -449,7 +453,8 @@ smbd_done:
+ 			 rc);
+ 	else if (rc > 0)
+ 		rc = 0;
+-
++out:
++	cifs_in_send_dec(server);
+ 	return rc;
+ }
+ 
+@@ -826,9 +831,7 @@ cifs_call_async(struct TCP_Server_Info *server, struct smb_rqst *rqst,
+ 	 * I/O response may come back and free the mid entry on another thread.
+ 	 */
+ 	cifs_save_when_sent(mid);
+-	cifs_in_send_inc(server);
+ 	rc = smb_send_rqst(server, 1, rqst, flags);
+-	cifs_in_send_dec(server);
+ 
+ 	if (rc < 0) {
+ 		revert_current_mid(server, mid->credits);
+@@ -1117,9 +1120,7 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 		else
+ 			midQ[i]->callback = cifs_compound_last_callback;
+ 	}
+-	cifs_in_send_inc(server);
+ 	rc = smb_send_rqst(server, num_rqst, rqst, flags);
+-	cifs_in_send_dec(server);
+ 
+ 	for (i = 0; i < num_rqst; i++)
+ 		cifs_save_when_sent(midQ[i]);
+@@ -1356,9 +1357,7 @@ SendReceive(const unsigned int xid, struct cifs_ses *ses,
+ 
+ 	midQ->mid_state = MID_REQUEST_SUBMITTED;
+ 
+-	cifs_in_send_inc(server);
+ 	rc = smb_send(server, in_buf, len);
+-	cifs_in_send_dec(server);
+ 	cifs_save_when_sent(midQ);
+ 
+ 	if (rc < 0)
+@@ -1495,9 +1494,7 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon,
+ 	}
+ 
+ 	midQ->mid_state = MID_REQUEST_SUBMITTED;
+-	cifs_in_send_inc(server);
+ 	rc = smb_send(server, in_buf, len);
+-	cifs_in_send_dec(server);
+ 	cifs_save_when_sent(midQ);
+ 
+ 	if (rc < 0)
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 1a654a1f3f46b..6ba185b46ba39 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4721,13 +4721,6 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 		goto bad_inode;
+ 	raw_inode = ext4_raw_inode(&iloc);
+ 
+-	if ((ino == EXT4_ROOT_INO) && (raw_inode->i_links_count == 0)) {
+-		ext4_error_inode(inode, function, line, 0,
+-				 "iget: root inode unallocated");
+-		ret = -EFSCORRUPTED;
+-		goto bad_inode;
+-	}
+-
+ 	if ((flags & EXT4_IGET_HANDLE) &&
+ 	    (raw_inode->i_links_count == 0) && (raw_inode->i_mode == 0)) {
+ 		ret = -ESTALE;
+@@ -4800,11 +4793,16 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 	 * NeilBrown 1999oct15
+ 	 */
+ 	if (inode->i_nlink == 0) {
+-		if ((inode->i_mode == 0 ||
++		if ((inode->i_mode == 0 || flags & EXT4_IGET_SPECIAL ||
+ 		     !(EXT4_SB(inode->i_sb)->s_mount_state & EXT4_ORPHAN_FS)) &&
+ 		    ino != EXT4_BOOT_LOADER_INO) {
+-			/* this inode is deleted */
+-			ret = -ESTALE;
++			/* this inode is deleted or unallocated */
++			if (flags & EXT4_IGET_SPECIAL) {
++				ext4_error_inode(inode, function, line, 0,
++						 "iget: special inode unallocated");
++				ret = -EFSCORRUPTED;
++			} else
++				ret = -ESTALE;
+ 			goto bad_inode;
+ 		}
+ 		/* The only unlinked inodes we let through here have
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 1f47aeca71422..45f719c1e0023 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3934,10 +3934,8 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 				goto end_rename;
+ 		}
+ 		retval = ext4_rename_dir_prepare(handle, &old);
+-		if (retval) {
+-			inode_unlock(old.inode);
++		if (retval)
+ 			goto end_rename;
+-		}
+ 	}
+ 	/*
+ 	 * If we're renaming a file within an inline_data dir and adding or
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 60e122761352c..f3da1f2d4cb93 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -386,6 +386,17 @@ static int ext4_xattr_inode_iget(struct inode *parent, unsigned long ea_ino,
+ 	struct inode *inode;
+ 	int err;
+ 
++	/*
++	 * We have to check for this corruption early as otherwise
++	 * iget_locked() could wait indefinitely for the state of our
++	 * parent inode.
++	 */
++	if (parent->i_ino == ea_ino) {
++		ext4_error(parent->i_sb,
++			   "Parent and EA inode have the same ino %lu", ea_ino);
++		return -EFSCORRUPTED;
++	}
++
+ 	inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_NORMAL);
+ 	if (IS_ERR(inode)) {
+ 		err = PTR_ERR(inode);
+diff --git a/fs/inode.c b/fs/inode.c
+index 9f49e0bdc2f77..7ec90788d8be9 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -1854,35 +1854,6 @@ skip_update:
+ }
+ EXPORT_SYMBOL(touch_atime);
+ 
+-/*
+- * The logic we want is
+- *
+- *	if suid or (sgid and xgrp)
+- *		remove privs
+- */
+-int should_remove_suid(struct dentry *dentry)
+-{
+-	umode_t mode = d_inode(dentry)->i_mode;
+-	int kill = 0;
+-
+-	/* suid always must be killed */
+-	if (unlikely(mode & S_ISUID))
+-		kill = ATTR_KILL_SUID;
+-
+-	/*
+-	 * sgid without any exec bits is just a mandatory locking mark; leave
+-	 * it alone.  If some exec bits are set, it's a real sgid; kill it.
+-	 */
+-	if (unlikely((mode & S_ISGID) && (mode & S_IXGRP)))
+-		kill |= ATTR_KILL_SGID;
+-
+-	if (unlikely(kill && !capable(CAP_FSETID) && S_ISREG(mode)))
+-		return kill;
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL(should_remove_suid);
+-
+ /*
+  * Return mask of changes for notify_change() that need to be done as a
+  * response to write or truncate. Return 0 if nothing has to be changed.
+@@ -1897,7 +1868,7 @@ int dentry_needs_remove_privs(struct dentry *dentry)
+ 	if (IS_NOSEC(inode))
+ 		return 0;
+ 
+-	mask = should_remove_suid(dentry);
++	mask = setattr_should_drop_suidgid(inode);
+ 	ret = security_inode_need_killpriv(dentry);
+ 	if (ret < 0)
+ 		return ret;
+@@ -2147,10 +2118,6 @@ void inode_init_owner(struct inode *inode, const struct inode *dir,
+ 		/* Directories are special, and always inherit S_ISGID */
+ 		if (S_ISDIR(mode))
+ 			mode |= S_ISGID;
+-		else if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP) &&
+-			 !in_group_p(inode->i_gid) &&
+-			 !capable_wrt_inode_uidgid(dir, CAP_FSETID))
+-			mode &= ~S_ISGID;
+ 	} else
+ 		inode->i_gid = current_fsgid();
+ 	inode->i_mode = mode;
+@@ -2382,3 +2349,48 @@ int vfs_ioc_fssetxattr_check(struct inode *inode, const struct fsxattr *old_fa,
+ 	return 0;
+ }
+ EXPORT_SYMBOL(vfs_ioc_fssetxattr_check);
++
++/**
++ * in_group_or_capable - check whether caller is CAP_FSETID privileged
++ * @inode:	inode to check
++ * @gid:	the new/current gid of @inode
++ *
++ * Check wether @gid is in the caller's group list or if the caller is
++ * privileged with CAP_FSETID over @inode. This can be used to determine
++ * whether the setgid bit can be kept or must be dropped.
++ *
++ * Return: true if the caller is sufficiently privileged, false if not.
++ */
++bool in_group_or_capable(const struct inode *inode, kgid_t gid)
++{
++	if (in_group_p(gid))
++		return true;
++	if (capable_wrt_inode_uidgid(inode, CAP_FSETID))
++		return true;
++	return false;
++}
++
++/**
++ * mode_strip_sgid - handle the sgid bit for non-directories
++ * @dir: parent directory inode
++ * @mode: mode of the file to be created in @dir
++ *
++ * If the @mode of the new file has both the S_ISGID and S_IXGRP bit
++ * raised and @dir has the S_ISGID bit raised ensure that the caller is
++ * either in the group of the parent directory or they have CAP_FSETID
++ * in their user namespace and are privileged over the parent directory.
++ * In all other cases, strip the S_ISGID bit from @mode.
++ *
++ * Return: the new mode to use for the file
++ */
++umode_t mode_strip_sgid(const struct inode *dir, umode_t mode)
++{
++	if ((mode & (S_ISGID | S_IXGRP)) != (S_ISGID | S_IXGRP))
++		return mode;
++	if (S_ISDIR(mode) || !dir || !(dir->i_mode & S_ISGID))
++		return mode;
++	if (in_group_or_capable(dir, dir->i_gid))
++		return mode;
++	return mode & ~S_ISGID;
++}
++EXPORT_SYMBOL(mode_strip_sgid);
+diff --git a/fs/internal.h b/fs/internal.h
+index 06d313b9beecb..d5d9fcdae10c4 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -149,6 +149,7 @@ extern int vfs_open(const struct path *, struct file *);
+ extern long prune_icache_sb(struct super_block *sb, struct shrink_control *sc);
+ extern void inode_add_lru(struct inode *inode);
+ extern int dentry_needs_remove_privs(struct dentry *dentry);
++bool in_group_or_capable(const struct inode *inode, kgid_t gid);
+ 
+ /*
+  * fs-writeback.c
+@@ -196,3 +197,8 @@ int sb_init_dio_done_wq(struct super_block *sb);
+  */
+ int do_statx(int dfd, const char __user *filename, unsigned flags,
+ 	     unsigned int mask, struct statx __user *buffer);
++
++/*
++ * fs/attr.c
++ */
++int setattr_should_drop_sgid(const struct inode *inode);
+diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
+index bd7d58d27bfc6..97a3c09fd96b6 100644
+--- a/fs/jffs2/file.c
++++ b/fs/jffs2/file.c
+@@ -138,19 +138,18 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 	struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
+ 	struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
+ 	pgoff_t index = pos >> PAGE_SHIFT;
+-	uint32_t pageofs = index << PAGE_SHIFT;
+ 	int ret = 0;
+ 
+ 	jffs2_dbg(1, "%s()\n", __func__);
+ 
+-	if (pageofs > inode->i_size) {
+-		/* Make new hole frag from old EOF to new page */
++	if (pos > inode->i_size) {
++		/* Make new hole frag from old EOF to new position */
+ 		struct jffs2_raw_inode ri;
+ 		struct jffs2_full_dnode *fn;
+ 		uint32_t alloc_len;
+ 
+-		jffs2_dbg(1, "Writing new hole frag 0x%x-0x%x between current EOF and new page\n",
+-			  (unsigned int)inode->i_size, pageofs);
++		jffs2_dbg(1, "Writing new hole frag 0x%x-0x%x between current EOF and new position\n",
++			  (unsigned int)inode->i_size, (uint32_t)pos);
+ 
+ 		ret = jffs2_reserve_space(c, sizeof(ri), &alloc_len,
+ 					  ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
+@@ -170,10 +169,10 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 		ri.mode = cpu_to_jemode(inode->i_mode);
+ 		ri.uid = cpu_to_je16(i_uid_read(inode));
+ 		ri.gid = cpu_to_je16(i_gid_read(inode));
+-		ri.isize = cpu_to_je32(max((uint32_t)inode->i_size, pageofs));
++		ri.isize = cpu_to_je32((uint32_t)pos);
+ 		ri.atime = ri.ctime = ri.mtime = cpu_to_je32(JFFS2_NOW());
+ 		ri.offset = cpu_to_je32(inode->i_size);
+-		ri.dsize = cpu_to_je32(pageofs - inode->i_size);
++		ri.dsize = cpu_to_je32((uint32_t)pos - inode->i_size);
+ 		ri.csize = cpu_to_je32(0);
+ 		ri.compr = JFFS2_COMPR_ZERO;
+ 		ri.node_crc = cpu_to_je32(crc32(0, &ri, sizeof(ri)-8));
+@@ -203,7 +202,7 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 			goto out_err;
+ 		}
+ 		jffs2_complete_reservation(c);
+-		inode->i_size = pageofs;
++		inode->i_size = pos;
+ 		mutex_unlock(&f->sem);
+ 	}
+ 
+diff --git a/fs/namei.c b/fs/namei.c
+index 4159c140fa473..3d98db9802a77 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -2798,6 +2798,63 @@ void unlock_rename(struct dentry *p1, struct dentry *p2)
+ }
+ EXPORT_SYMBOL(unlock_rename);
+ 
++/**
++ * mode_strip_umask - handle vfs umask stripping
++ * @dir:	parent directory of the new inode
++ * @mode:	mode of the new inode to be created in @dir
++ *
++ * Umask stripping depends on whether or not the filesystem supports POSIX
++ * ACLs. If the filesystem doesn't support it umask stripping is done directly
++ * in here. If the filesystem does support POSIX ACLs umask stripping is
++ * deferred until the filesystem calls posix_acl_create().
++ *
++ * Returns: mode
++ */
++static inline umode_t mode_strip_umask(const struct inode *dir, umode_t mode)
++{
++	if (!IS_POSIXACL(dir))
++		mode &= ~current_umask();
++	return mode;
++}
++
++/**
++ * vfs_prepare_mode - prepare the mode to be used for a new inode
++ * @dir:	parent directory of the new inode
++ * @mode:	mode of the new inode
++ * @mask_perms:	allowed permission by the vfs
++ * @type:	type of file to be created
++ *
++ * This helper consolidates and enforces vfs restrictions on the @mode of a new
++ * object to be created.
++ *
++ * Umask stripping depends on whether the filesystem supports POSIX ACLs (see
++ * the kernel documentation for mode_strip_umask()). Moving umask stripping
++ * after setgid stripping allows the same ordering for both non-POSIX ACL and
++ * POSIX ACL supporting filesystems.
++ *
++ * Note that it's currently valid for @type to be 0 if a directory is created.
++ * Filesystems raise that flag individually and we need to check whether each
++ * filesystem can deal with receiving S_IFDIR from the vfs before we enforce a
++ * non-zero type.
++ *
++ * Returns: mode to be passed to the filesystem
++ */
++static inline umode_t vfs_prepare_mode(const struct inode *dir, umode_t mode,
++				       umode_t mask_perms, umode_t type)
++{
++	mode = mode_strip_sgid(dir, mode);
++	mode = mode_strip_umask(dir, mode);
++
++	/*
++	 * Apply the vfs mandated allowed permission mask and set the type of
++	 * file to be created before we call into the filesystem.
++	 */
++	mode &= (mask_perms & ~S_IFMT);
++	mode |= (type & S_IFMT);
++
++	return mode;
++}
++
+ int vfs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
+ 		bool want_excl)
+ {
+@@ -2807,8 +2864,8 @@ int vfs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
+ 
+ 	if (!dir->i_op->create)
+ 		return -EACCES;	/* shouldn't it be ENOSYS? */
+-	mode &= S_IALLUGO;
+-	mode |= S_IFREG;
++
++	mode = vfs_prepare_mode(dir, mode, S_IALLUGO, S_IFREG);
+ 	error = security_inode_create(dir, dentry, mode);
+ 	if (error)
+ 		return error;
+@@ -3072,8 +3129,7 @@ static struct dentry *lookup_open(struct nameidata *nd, struct file *file,
+ 	if (open_flag & O_CREAT) {
+ 		if (open_flag & O_EXCL)
+ 			open_flag &= ~O_TRUNC;
+-		if (!IS_POSIXACL(dir->d_inode))
+-			mode &= ~current_umask();
++		mode = vfs_prepare_mode(dir->d_inode, mode, mode, mode);
+ 		if (likely(got_write))
+ 			create_error = may_o_create(&nd->path, dentry, mode);
+ 		else
+@@ -3286,8 +3342,7 @@ struct dentry *vfs_tmpfile(struct dentry *dentry, umode_t mode, int open_flag)
+ 	child = d_alloc(dentry, &slash_name);
+ 	if (unlikely(!child))
+ 		goto out_err;
+-	if (!IS_POSIXACL(dir))
+-		mode &= ~current_umask();
++	mode = vfs_prepare_mode(dir, mode, mode, mode);
+ 	error = dir->i_op->tmpfile(dir, child, mode);
+ 	if (error)
+ 		goto out_err;
+@@ -3548,6 +3603,7 @@ int vfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode, dev_t dev)
+ 	if (!dir->i_op->mknod)
+ 		return -EPERM;
+ 
++	mode = vfs_prepare_mode(dir, mode, mode, mode);
+ 	error = devcgroup_inode_mknod(mode, dev);
+ 	if (error)
+ 		return error;
+@@ -3596,9 +3652,8 @@ retry:
+ 	if (IS_ERR(dentry))
+ 		return PTR_ERR(dentry);
+ 
+-	if (!IS_POSIXACL(path.dentry->d_inode))
+-		mode &= ~current_umask();
+-	error = security_path_mknod(&path, dentry, mode, dev);
++	error = security_path_mknod(&path, dentry,
++			mode_strip_umask(path.dentry->d_inode, mode), dev);
+ 	if (error)
+ 		goto out;
+ 	switch (mode & S_IFMT) {
+@@ -3646,7 +3701,7 @@ int vfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+ 	if (!dir->i_op->mkdir)
+ 		return -EPERM;
+ 
+-	mode &= (S_IRWXUGO|S_ISVTX);
++	mode = vfs_prepare_mode(dir, mode, S_IRWXUGO | S_ISVTX, 0);
+ 	error = security_inode_mkdir(dir, dentry, mode);
+ 	if (error)
+ 		return error;
+@@ -3673,9 +3728,8 @@ retry:
+ 	if (IS_ERR(dentry))
+ 		return PTR_ERR(dentry);
+ 
+-	if (!IS_POSIXACL(path.dentry->d_inode))
+-		mode &= ~current_umask();
+-	error = security_path_mkdir(&path, dentry, mode);
++	error = security_path_mkdir(&path, dentry,
++			mode_strip_umask(path.dentry->d_inode, mode));
+ 	if (!error)
+ 		error = vfs_mkdir(path.dentry->d_inode, dentry, mode);
+ 	done_path_create(&path, dentry);
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index 1470b49adb2db..ca00cac5a12f7 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -1994,7 +1994,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 		}
+ 	}
+ 
+-	if (file && should_remove_suid(file->f_path.dentry)) {
++	if (file && setattr_should_drop_suidgid(file_inode(file))) {
+ 		ret = __ocfs2_write_remove_suid(inode, di_bh);
+ 		if (ret) {
+ 			mlog_errno(ret);
+@@ -2282,7 +2282,7 @@ static int ocfs2_prepare_inode_for_write(struct file *file,
+ 		 * inode. There's also the dinode i_size state which
+ 		 * can be lost via setattr during extending writes (we
+ 		 * set inode->i_size at the end of a write. */
+-		if (should_remove_suid(dentry)) {
++		if (setattr_should_drop_suidgid(inode)) {
+ 			if (meta_level == 0) {
+ 				ocfs2_inode_unlock_for_extent_tree(inode,
+ 								   &di_bh,
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index 856474b0a1ae7..df1f6b7aa7979 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -198,6 +198,7 @@ static struct inode *ocfs2_get_init_inode(struct inode *dir, umode_t mode)
+ 	 * callers. */
+ 	if (S_ISDIR(mode))
+ 		set_nlink(inode, 2);
++	mode = mode_strip_sgid(dir, mode);
+ 	inode_init_owner(inode, dir, mode);
+ 	status = dquot_initialize(inode);
+ 	if (status)
+diff --git a/fs/open.c b/fs/open.c
+index b3fbb4300fc96..1ca4b236fdbe0 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -665,10 +665,10 @@ retry_deleg:
+ 		newattrs.ia_valid |= ATTR_GID;
+ 		newattrs.ia_gid = gid;
+ 	}
+-	if (!S_ISDIR(inode->i_mode))
+-		newattrs.ia_valid |=
+-			ATTR_KILL_SUID | ATTR_KILL_SGID | ATTR_KILL_PRIV;
+ 	inode_lock(inode);
++	if (!S_ISDIR(inode->i_mode))
++		newattrs.ia_valid |= ATTR_KILL_SUID | ATTR_KILL_PRIV |
++				     setattr_should_drop_sgid(inode);
+ 	error = security_path_chown(path, uid, gid);
+ 	if (!error)
+ 		error = notify_change(path->dentry, &newattrs, &delegated_inode);
+diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c
+index 24c7d30e41dfe..0926363179a76 100644
+--- a/fs/xfs/libxfs/xfs_btree.c
++++ b/fs/xfs/libxfs/xfs_btree.c
+@@ -3190,7 +3190,7 @@ xfs_btree_insrec(
+ 	struct xfs_btree_block	*block;	/* btree block */
+ 	struct xfs_buf		*bp;	/* buffer for block */
+ 	union xfs_btree_ptr	nptr;	/* new block ptr */
+-	struct xfs_btree_cur	*ncur;	/* new btree cursor */
++	struct xfs_btree_cur	*ncur = NULL;	/* new btree cursor */
+ 	union xfs_btree_key	nkey;	/* new block key */
+ 	union xfs_btree_key	*lkey;
+ 	int			optr;	/* old key/record index */
+@@ -3270,7 +3270,7 @@ xfs_btree_insrec(
+ #ifdef DEBUG
+ 	error = xfs_btree_check_block(cur, block, level, bp);
+ 	if (error)
+-		return error;
++		goto error0;
+ #endif
+ 
+ 	/*
+@@ -3290,7 +3290,7 @@ xfs_btree_insrec(
+ 		for (i = numrecs - ptr; i >= 0; i--) {
+ 			error = xfs_btree_debug_check_ptr(cur, pp, i, level);
+ 			if (error)
+-				return error;
++				goto error0;
+ 		}
+ 
+ 		xfs_btree_shift_keys(cur, kp, 1, numrecs - ptr + 1);
+@@ -3375,6 +3375,8 @@ xfs_btree_insrec(
+ 	return 0;
+ 
+ error0:
++	if (ncur)
++		xfs_btree_del_cursor(ncur, error);
+ 	return error;
+ }
+ 
+diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
+index 7371a7f7c6529..fbab1042bc90b 100644
+--- a/fs/xfs/xfs_bmap_util.c
++++ b/fs/xfs/xfs_bmap_util.c
+@@ -800,9 +800,6 @@ xfs_alloc_file_space(
+ 			quota_flag = XFS_QMOPT_RES_REGBLKS;
+ 		}
+ 
+-		/*
+-		 * Allocate and setup the transaction.
+-		 */
+ 		error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks,
+ 				resrtextents, 0, &tp);
+ 
+@@ -830,9 +827,9 @@ xfs_alloc_file_space(
+ 		if (error)
+ 			goto error0;
+ 
+-		/*
+-		 * Complete the transaction
+-		 */
++		ip->i_d.di_flags |= XFS_DIFLAG_PREALLOC;
++		xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
++
+ 		error = xfs_trans_commit(tp);
+ 		xfs_iunlock(ip, XFS_ILOCK_EXCL);
+ 		if (error)
+diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
+index 4d6bf8d4974fe..9b6c5ba5fdfb6 100644
+--- a/fs/xfs/xfs_file.c
++++ b/fs/xfs/xfs_file.c
+@@ -94,8 +94,6 @@ xfs_update_prealloc_flags(
+ 		ip->i_d.di_flags &= ~XFS_DIFLAG_PREALLOC;
+ 
+ 	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
+-	if (flags & XFS_PREALLOC_SYNC)
+-		xfs_trans_set_sync(tp);
+ 	return xfs_trans_commit(tp);
+ }
+ 
+@@ -852,7 +850,6 @@ xfs_file_fallocate(
+ 	struct inode		*inode = file_inode(file);
+ 	struct xfs_inode	*ip = XFS_I(inode);
+ 	long			error;
+-	enum xfs_prealloc_flags	flags = 0;
+ 	uint			iolock = XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL;
+ 	loff_t			new_size = 0;
+ 	bool			do_file_insert = false;
+@@ -897,6 +894,10 @@ xfs_file_fallocate(
+ 			goto out_unlock;
+ 	}
+ 
++	error = file_modified(file);
++	if (error)
++		goto out_unlock;
++
+ 	if (mode & FALLOC_FL_PUNCH_HOLE) {
+ 		error = xfs_free_file_space(ip, offset, len);
+ 		if (error)
+@@ -946,8 +947,6 @@ xfs_file_fallocate(
+ 		}
+ 		do_file_insert = true;
+ 	} else {
+-		flags |= XFS_PREALLOC_SET;
+-
+ 		if (!(mode & FALLOC_FL_KEEP_SIZE) &&
+ 		    offset + len > i_size_read(inode)) {
+ 			new_size = offset + len;
+@@ -1000,13 +999,6 @@ xfs_file_fallocate(
+ 		}
+ 	}
+ 
+-	if (file->f_flags & O_DSYNC)
+-		flags |= XFS_PREALLOC_SYNC;
+-
+-	error = xfs_update_prealloc_flags(ip, flags);
+-	if (error)
+-		goto out_unlock;
+-
+ 	/* Change file size if needed */
+ 	if (new_size) {
+ 		struct iattr iattr;
+@@ -1024,8 +1016,14 @@ xfs_file_fallocate(
+ 	 * leave shifted extents past EOF and hence losing access to
+ 	 * the data that is contained within them.
+ 	 */
+-	if (do_file_insert)
++	if (do_file_insert) {
+ 		error = xfs_insert_file_space(ip, offset, len);
++		if (error)
++			goto out_unlock;
++	}
++
++	if (file->f_flags & O_DSYNC)
++		error = xfs_log_force_inode(ip);
+ 
+ out_unlock:
+ 	xfs_iunlock(ip, iolock);
+diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
+index 6a3026e78a9bb..69fef29df4284 100644
+--- a/fs/xfs/xfs_iops.c
++++ b/fs/xfs/xfs_iops.c
+@@ -595,37 +595,6 @@ xfs_vn_getattr(
+ 	return 0;
+ }
+ 
+-static void
+-xfs_setattr_mode(
+-	struct xfs_inode	*ip,
+-	struct iattr		*iattr)
+-{
+-	struct inode		*inode = VFS_I(ip);
+-	umode_t			mode = iattr->ia_mode;
+-
+-	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
+-
+-	inode->i_mode &= S_IFMT;
+-	inode->i_mode |= mode & ~S_IFMT;
+-}
+-
+-void
+-xfs_setattr_time(
+-	struct xfs_inode	*ip,
+-	struct iattr		*iattr)
+-{
+-	struct inode		*inode = VFS_I(ip);
+-
+-	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
+-
+-	if (iattr->ia_valid & ATTR_ATIME)
+-		inode->i_atime = iattr->ia_atime;
+-	if (iattr->ia_valid & ATTR_CTIME)
+-		inode->i_ctime = iattr->ia_ctime;
+-	if (iattr->ia_valid & ATTR_MTIME)
+-		inode->i_mtime = iattr->ia_mtime;
+-}
+-
+ static int
+ xfs_vn_change_ok(
+ 	struct dentry	*dentry,
+@@ -740,16 +709,6 @@ xfs_setattr_nonsize(
+ 				goto out_cancel;
+ 		}
+ 
+-		/*
+-		 * CAP_FSETID overrides the following restrictions:
+-		 *
+-		 * The set-user-ID and set-group-ID bits of a file will be
+-		 * cleared upon successful return from chown()
+-		 */
+-		if ((inode->i_mode & (S_ISUID|S_ISGID)) &&
+-		    !capable(CAP_FSETID))
+-			inode->i_mode &= ~(S_ISUID|S_ISGID);
+-
+ 		/*
+ 		 * Change the ownerships and register quota modifications
+ 		 * in the transaction.
+@@ -761,7 +720,6 @@ xfs_setattr_nonsize(
+ 				olddquot1 = xfs_qm_vop_chown(tp, ip,
+ 							&ip->i_udquot, udqp);
+ 			}
+-			inode->i_uid = uid;
+ 		}
+ 		if (!gid_eq(igid, gid)) {
+ 			if (XFS_IS_QUOTA_RUNNING(mp) && XFS_IS_GQUOTA_ON(mp)) {
+@@ -772,15 +730,10 @@ xfs_setattr_nonsize(
+ 				olddquot2 = xfs_qm_vop_chown(tp, ip,
+ 							&ip->i_gdquot, gdqp);
+ 			}
+-			inode->i_gid = gid;
+ 		}
+ 	}
+ 
+-	if (mask & ATTR_MODE)
+-		xfs_setattr_mode(ip, iattr);
+-	if (mask & (ATTR_ATIME|ATTR_CTIME|ATTR_MTIME))
+-		xfs_setattr_time(ip, iattr);
+-
++	setattr_copy(inode, iattr);
+ 	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
+ 
+ 	XFS_STATS_INC(mp, xs_ig_attrchg);
+@@ -1025,11 +978,8 @@ xfs_setattr_size(
+ 		xfs_inode_clear_eofblocks_tag(ip);
+ 	}
+ 
+-	if (iattr->ia_valid & ATTR_MODE)
+-		xfs_setattr_mode(ip, iattr);
+-	if (iattr->ia_valid & (ATTR_ATIME|ATTR_CTIME|ATTR_MTIME))
+-		xfs_setattr_time(ip, iattr);
+-
++	ASSERT(!(iattr->ia_valid & (ATTR_UID | ATTR_GID)));
++	setattr_copy(inode, iattr);
+ 	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
+ 
+ 	XFS_STATS_INC(mp, xs_ig_attrchg);
+diff --git a/fs/xfs/xfs_iops.h b/fs/xfs/xfs_iops.h
+index 4d24ff309f593..dd1bd0332f8e3 100644
+--- a/fs/xfs/xfs_iops.h
++++ b/fs/xfs/xfs_iops.h
+@@ -18,7 +18,6 @@ extern ssize_t xfs_vn_listxattr(struct dentry *, char *data, size_t size);
+  */
+ #define XFS_ATTR_NOACL		0x01	/* Don't call posix_acl_chmod */
+ 
+-extern void xfs_setattr_time(struct xfs_inode *ip, struct iattr *iattr);
+ extern int xfs_setattr_nonsize(struct xfs_inode *ip, struct iattr *vap,
+ 			       int flags);
+ extern int xfs_vn_setattr_nonsize(struct dentry *dentry, struct iattr *vap);
+diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
+index a2a5a0fd92334..402cf828cc919 100644
+--- a/fs/xfs/xfs_mount.c
++++ b/fs/xfs/xfs_mount.c
+@@ -126,7 +126,6 @@ __xfs_free_perag(
+ {
+ 	struct xfs_perag *pag = container_of(head, struct xfs_perag, rcu_head);
+ 
+-	ASSERT(atomic_read(&pag->pag_ref) == 0);
+ 	kmem_free(pag);
+ }
+ 
+@@ -145,7 +144,7 @@ xfs_free_perag(
+ 		pag = radix_tree_delete(&mp->m_perag_tree, agno);
+ 		spin_unlock(&mp->m_perag_lock);
+ 		ASSERT(pag);
+-		ASSERT(atomic_read(&pag->pag_ref) == 0);
++		XFS_IS_CORRUPT(pag->pag_mount, atomic_read(&pag->pag_ref) != 0);
+ 		xfs_iunlink_destroy(pag);
+ 		xfs_buf_hash_destroy(pag);
+ 		call_rcu(&pag->rcu_head, __xfs_free_perag);
+diff --git a/fs/xfs/xfs_pnfs.c b/fs/xfs/xfs_pnfs.c
+index f3082a957d5e1..053b99929f835 100644
+--- a/fs/xfs/xfs_pnfs.c
++++ b/fs/xfs/xfs_pnfs.c
+@@ -164,10 +164,12 @@ xfs_fs_map_blocks(
+ 		 * that the blocks allocated and handed out to the client are
+ 		 * guaranteed to be present even after a server crash.
+ 		 */
+-		error = xfs_update_prealloc_flags(ip,
+-				XFS_PREALLOC_SET | XFS_PREALLOC_SYNC);
++		error = xfs_update_prealloc_flags(ip, XFS_PREALLOC_SET);
++		if (!error)
++			error = xfs_log_force_inode(ip);
+ 		if (error)
+ 			goto out_unlock;
++
+ 	} else {
+ 		xfs_iunlock(ip, lock_flags);
+ 	}
+@@ -283,7 +285,8 @@ xfs_fs_commit_blocks(
+ 	xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);
+ 	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
+ 
+-	xfs_setattr_time(ip, iattr);
++	ASSERT(!(iattr->ia_valid & (ATTR_UID | ATTR_GID)));
++	setattr_copy(inode, iattr);
+ 	if (update_isize) {
+ 		i_size_write(inode, iattr->ia_size);
+ 		ip->i_d.di_size = iattr->ia_size;
+diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
+index 64e5da33733b9..3c17e0c0f8169 100644
+--- a/fs/xfs/xfs_qm.c
++++ b/fs/xfs/xfs_qm.c
+@@ -1318,8 +1318,15 @@ xfs_qm_quotacheck(
+ 
+ 	error = xfs_iwalk_threaded(mp, 0, 0, xfs_qm_dqusage_adjust, 0, true,
+ 			NULL);
+-	if (error)
++	if (error) {
++		/*
++		 * The inode walk may have partially populated the dquot
++		 * caches.  We must purge them before disabling quota and
++		 * tearing down the quotainfo, or else the dquots will leak.
++		 */
++		xfs_qm_dqpurge_all(mp, XFS_QMOPT_QUOTALL);
+ 		goto error_return;
++	}
+ 
+ 	/*
+ 	 * We've made all the changes that we need to make incore.  Flush them
+diff --git a/include/drm/drm_bridge.h b/include/drm/drm_bridge.h
+index 2195daa289d27..055486e35e68f 100644
+--- a/include/drm/drm_bridge.h
++++ b/include/drm/drm_bridge.h
+@@ -427,11 +427,11 @@ struct drm_bridge_funcs {
+ 	 *
+ 	 * The returned array must be allocated with kmalloc() and will be
+ 	 * freed by the caller. If the allocation fails, NULL should be
+-	 * returned. num_output_fmts must be set to the returned array size.
++	 * returned. num_input_fmts must be set to the returned array size.
+ 	 * Formats listed in the returned array should be listed in decreasing
+ 	 * preference order (the core will try all formats until it finds one
+ 	 * that works). When the format is not supported NULL should be
+-	 * returned and num_output_fmts should be set to 0.
++	 * returned and num_input_fmts should be set to 0.
+ 	 *
+ 	 * This method is called on all elements of the bridge chain as part of
+ 	 * the bus format negotiation process that happens in
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 74e19bccbf738..8ce9e5c61ede8 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1768,6 +1768,7 @@ extern long compat_ptr_ioctl(struct file *file, unsigned int cmd,
+ extern void inode_init_owner(struct inode *inode, const struct inode *dir,
+ 			umode_t mode);
+ extern bool may_open_dev(const struct path *path);
++umode_t mode_strip_sgid(const struct inode *dir, umode_t mode);
+ 
+ /*
+  * This is the "filldir" function type, used by readdir() to let
+@@ -2959,7 +2960,7 @@ extern void __destroy_inode(struct inode *);
+ extern struct inode *new_inode_pseudo(struct super_block *sb);
+ extern struct inode *new_inode(struct super_block *sb);
+ extern void free_inode_nonrcu(struct inode *inode);
+-extern int should_remove_suid(struct dentry *);
++extern int setattr_should_drop_suidgid(struct inode *);
+ extern int file_remove_privs(struct file *);
+ 
+ extern void __insert_inode_hash(struct inode *, unsigned long hashval);
+@@ -3407,7 +3408,7 @@ int __init get_filesystem_list(char *buf);
+ 
+ static inline bool is_sxid(umode_t mode)
+ {
+-	return (mode & S_ISUID) || ((mode & S_ISGID) && (mode & S_IXGRP));
++	return mode & (S_ISUID | S_ISGID);
+ }
+ 
+ static inline int check_sticky(struct inode *dir, struct inode *inode)
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 2ba33d708942c..256f34f49167c 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -798,6 +798,7 @@ struct hid_driver {
+  * @raw_request: send raw report request to device (e.g. feature report)
+  * @output_report: send output report to device
+  * @idle: send idle request to device
++ * @max_buffer_size: over-ride maximum data buffer size (default: HID_MAX_BUFFER_SIZE)
+  */
+ struct hid_ll_driver {
+ 	int (*start)(struct hid_device *hdev);
+@@ -822,6 +823,8 @@ struct hid_ll_driver {
+ 	int (*output_report) (struct hid_device *hdev, __u8 *buf, size_t len);
+ 
+ 	int (*idle)(struct hid_device *hdev, int report, int idle, int reqtype);
++
++	unsigned int max_buffer_size;
+ };
+ 
+ extern struct hid_ll_driver i2c_hid_ll_driver;
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index b478a16ef284d..9ef63bc14b002 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -270,9 +270,11 @@ struct hh_cache {
+  * relationship HH alignment <= LL alignment.
+  */
+ #define LL_RESERVED_SPACE(dev) \
+-	((((dev)->hard_header_len+(dev)->needed_headroom)&~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
++	((((dev)->hard_header_len + READ_ONCE((dev)->needed_headroom)) \
++	  & ~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
+ #define LL_RESERVED_SPACE_EXTRA(dev,extra) \
+-	((((dev)->hard_header_len+(dev)->needed_headroom+(extra))&~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
++	((((dev)->hard_header_len + READ_ONCE((dev)->needed_headroom) + (extra)) \
++	  & ~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
+ 
+ struct header_ops {
+ 	int	(*create) (struct sk_buff *skb, struct net_device *dev,
+diff --git a/include/linux/sh_intc.h b/include/linux/sh_intc.h
+index c255273b02810..37ad81058d6ae 100644
+--- a/include/linux/sh_intc.h
++++ b/include/linux/sh_intc.h
+@@ -97,7 +97,10 @@ struct intc_hw_desc {
+ 	unsigned int nr_subgroups;
+ };
+ 
+-#define _INTC_ARRAY(a) a, __same_type(a, NULL) ? 0 : sizeof(a)/sizeof(*a)
++#define _INTC_SIZEOF_OR_ZERO(a) (_Generic(a,                 \
++                                 typeof(NULL):  0,           \
++                                 default:       sizeof(a)))
++#define _INTC_ARRAY(a) a, _INTC_SIZEOF_OR_ZERO(a)/sizeof(*a)
+ 
+ #define INTC_HW_DESC(vectors, groups, mask_regs,	\
+ 		     prio_regs,	sense_regs, ack_regs)	\
+diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
+index e4c5df71f0e74..4e1356c35fe62 100644
+--- a/include/linux/tracepoint.h
++++ b/include/linux/tracepoint.h
+@@ -234,12 +234,11 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
+  * not add unwanted padding between the beginning of the section and the
+  * structure. Force alignment to the same alignment as the section start.
+  *
+- * When lockdep is enabled, we make sure to always do the RCU portions of
+- * the tracepoint code, regardless of whether tracing is on. However,
+- * don't check if the condition is false, due to interaction with idle
+- * instrumentation. This lets us find RCU issues triggered with tracepoints
+- * even when this tracepoint is off. This code has no purpose other than
+- * poking RCU a bit.
++ * When lockdep is enabled, we make sure to always test if RCU is
++ * "watching" regardless if the tracepoint is enabled or not. Tracepoints
++ * require RCU to be active, and it should always warn at the tracepoint
++ * site if it is not watching, as it will need to be active when the
++ * tracepoint is enabled.
+  */
+ #define __DECLARE_TRACE(name, proto, args, cond, data_proto, data_args) \
+ 	extern int __traceiter_##name(data_proto);			\
+@@ -253,9 +252,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
+ 				TP_ARGS(data_args),			\
+ 				TP_CONDITION(cond), 0);			\
+ 		if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) {		\
+-			rcu_read_lock_sched_notrace();			\
+-			rcu_dereference_sched(__tracepoint_##name.funcs);\
+-			rcu_read_unlock_sched_notrace();		\
++			WARN_ON_ONCE(!rcu_is_watching());		\
+ 		}							\
+ 	}								\
+ 	__DECLARE_TRACE_RCU(name, PARAMS(proto), PARAMS(args),		\
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 445afda927f47..fd799567fc23a 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -5792,10 +5792,10 @@ static int io_arm_poll_handler(struct io_kiocb *req)
+ 		}
+ 	} else {
+ 		apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
++		if (unlikely(!apoll))
++			return IO_APOLL_ABORTED;
+ 		apoll->poll.retries = APOLL_MAX_RETRY;
+ 	}
+-	if (unlikely(!apoll))
+-		return IO_APOLL_ABORTED;
+ 	apoll->double_poll = NULL;
+ 	req->apoll = apoll;
+ 	req->flags |= REQ_F_POLLED;
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index d97c189695cbb..67829b6e07bdc 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -1538,7 +1538,8 @@ static struct dyn_ftrace *lookup_rec(unsigned long start, unsigned long end)
+ 	key.flags = end;	/* overload flags, as it is unsigned long */
+ 
+ 	for (pg = ftrace_pages_start; pg; pg = pg->next) {
+-		if (end < pg->records[0].ip ||
++		if (pg->index == 0 ||
++		    end < pg->records[0].ip ||
+ 		    start >= (pg->records[pg->index - 1].ip + MCOUNT_INSN_SIZE))
+ 			continue;
+ 		rec = bsearch(&key, pg->records, pg->index,
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 8637eab2986ee..ce45bdd9077db 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -4705,6 +4705,8 @@ loff_t tracing_lseek(struct file *file, loff_t offset, int whence)
+ static const struct file_operations tracing_fops = {
+ 	.open		= tracing_open,
+ 	.read		= seq_read,
++	.read_iter	= seq_read_iter,
++	.splice_read	= generic_file_splice_read,
+ 	.write		= tracing_write_stub,
+ 	.llseek		= tracing_lseek,
+ 	.release	= tracing_release,
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index ccc99cd23f3c4..9ed65191888ef 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1087,6 +1087,9 @@ static const char *hist_field_name(struct hist_field *field,
+ {
+ 	const char *field_name = "";
+ 
++	if (WARN_ON_ONCE(!field))
++		return field_name;
++
+ 	if (level > 1)
+ 		return field_name;
+ 
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 9b15760e0541a..e4c690c21fc9c 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1994,7 +1994,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
+ {
+ 	struct mm_struct *mm = vma->vm_mm;
+ 	pgtable_t pgtable;
+-	pmd_t _pmd;
++	pmd_t _pmd, old_pmd;
+ 	int i;
+ 
+ 	/*
+@@ -2005,7 +2005,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
+ 	 *
+ 	 * See Documentation/vm/mmu_notifier.rst
+ 	 */
+-	pmdp_huge_clear_flush(vma, haddr, pmd);
++	old_pmd = pmdp_huge_clear_flush(vma, haddr, pmd);
+ 
+ 	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
+ 	pmd_populate(mm, &_pmd, pgtable);
+@@ -2014,6 +2014,8 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
+ 		pte_t *pte, entry;
+ 		entry = pfn_pte(my_zero_pfn(haddr), vma->vm_page_prot);
+ 		entry = pte_mkspecial(entry);
++		if (pmd_uffd_wp(old_pmd))
++			entry = pte_mkuffd_wp(entry);
+ 		pte = pte_offset_map(&_pmd, haddr);
+ 		VM_BUG_ON(!pte_none(*pte));
+ 		set_pte_at(mm, haddr, pte, entry);
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 5f786ef662ead..41f890bf9d4c4 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -573,6 +573,9 @@ static int rtentry_to_fib_config(struct net *net, int cmd, struct rtentry *rt,
+ 			cfg->fc_scope = RT_SCOPE_UNIVERSE;
+ 	}
+ 
++	if (!cfg->fc_table)
++		cfg->fc_table = RT_TABLE_MAIN;
++
+ 	if (cmd == SIOCDELRT)
+ 		return 0;
+ 
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index be75b409445c2..99f70b990eb13 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -613,10 +613,10 @@ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 	}
+ 
+ 	headroom += LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len;
+-	if (headroom > dev->needed_headroom)
+-		dev->needed_headroom = headroom;
++	if (headroom > READ_ONCE(dev->needed_headroom))
++		WRITE_ONCE(dev->needed_headroom, headroom);
+ 
+-	if (skb_cow_head(skb, dev->needed_headroom)) {
++	if (skb_cow_head(skb, READ_ONCE(dev->needed_headroom))) {
+ 		ip_rt_put(rt);
+ 		goto tx_dropped;
+ 	}
+@@ -797,10 +797,10 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 
+ 	max_headroom = LL_RESERVED_SPACE(rt->dst.dev) + sizeof(struct iphdr)
+ 			+ rt->dst.header_len + ip_encap_hlen(&tunnel->encap);
+-	if (max_headroom > dev->needed_headroom)
+-		dev->needed_headroom = max_headroom;
++	if (max_headroom > READ_ONCE(dev->needed_headroom))
++		WRITE_ONCE(dev->needed_headroom, max_headroom);
+ 
+-	if (skb_cow_head(skb, dev->needed_headroom)) {
++	if (skb_cow_head(skb, READ_ONCE(dev->needed_headroom))) {
+ 		ip_rt_put(rt);
+ 		dev->stats.tx_dropped++;
+ 		kfree_skb(skb);
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index eefd032bc6dbd..e4ad274ec7a30 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -3609,7 +3609,7 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
+ 	th->window = htons(min(req->rsk_rcv_wnd, 65535U));
+ 	tcp_options_write((__be32 *)(th + 1), NULL, &opts);
+ 	th->doff = (tcp_header_size >> 2);
+-	__TCP_INC_STATS(sock_net(sk), TCP_MIB_OUTSEGS);
++	TCP_INC_STATS(sock_net(sk), TCP_MIB_OUTSEGS);
+ 
+ #ifdef CONFIG_TCP_MD5SIG
+ 	/* Okay, we have all we need - do the md5 hash if needed */
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 0d4cab94c5dd2..a03a322e0cc1c 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1267,8 +1267,8 @@ route_lookup:
+ 	 */
+ 	max_headroom = LL_RESERVED_SPACE(dst->dev) + sizeof(struct ipv6hdr)
+ 			+ dst->header_len + t->hlen;
+-	if (max_headroom > dev->needed_headroom)
+-		dev->needed_headroom = max_headroom;
++	if (max_headroom > READ_ONCE(dev->needed_headroom))
++		WRITE_ONCE(dev->needed_headroom, max_headroom);
+ 
+ 	err = ip6_tnl_encap(skb, t, &proto, fl6);
+ 	if (err)
+diff --git a/net/iucv/iucv.c b/net/iucv/iucv.c
+index 349c6ac3313f7..6f84978a77265 100644
+--- a/net/iucv/iucv.c
++++ b/net/iucv/iucv.c
+@@ -83,7 +83,7 @@ struct iucv_irq_data {
+ 	u16 ippathid;
+ 	u8  ipflags1;
+ 	u8  iptype;
+-	u32 res2[8];
++	u32 res2[9];
+ };
+ 
+ struct iucv_irq_list {
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 3b154ad4945c4..607519246bf28 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -275,7 +275,6 @@ void mptcp_subflow_reset(struct sock *ssk)
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+ 	struct sock *sk = subflow->conn;
+ 
+-	tcp_set_state(ssk, TCP_CLOSE);
+ 	tcp_send_active_reset(ssk, GFP_ATOMIC);
+ 	tcp_done(ssk);
+ 	if (!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(sk)->flags) &&
+diff --git a/net/netfilter/nft_masq.c b/net/netfilter/nft_masq.c
+index 9953e80537536..1818dbf089cad 100644
+--- a/net/netfilter/nft_masq.c
++++ b/net/netfilter/nft_masq.c
+@@ -43,7 +43,7 @@ static int nft_masq_init(const struct nft_ctx *ctx,
+ 			 const struct nft_expr *expr,
+ 			 const struct nlattr * const tb[])
+ {
+-	u32 plen = sizeof_field(struct nf_nat_range, min_addr.all);
++	u32 plen = sizeof_field(struct nf_nat_range, min_proto.all);
+ 	struct nft_masq *priv = nft_expr_priv(expr);
+ 	int err;
+ 
+diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c
+index db8f9116eeb43..cd4eb4996aff3 100644
+--- a/net/netfilter/nft_nat.c
++++ b/net/netfilter/nft_nat.c
+@@ -226,7 +226,7 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 		priv->flags |= NF_NAT_RANGE_MAP_IPS;
+ 	}
+ 
+-	plen = sizeof_field(struct nf_nat_range, min_addr.all);
++	plen = sizeof_field(struct nf_nat_range, min_proto.all);
+ 	if (tb[NFTA_NAT_REG_PROTO_MIN]) {
+ 		err = nft_parse_register_load(tb[NFTA_NAT_REG_PROTO_MIN],
+ 					      &priv->sreg_proto_min, plen);
+diff --git a/net/netfilter/nft_redir.c b/net/netfilter/nft_redir.c
+index ba09890dddb50..e64f531d66cfc 100644
+--- a/net/netfilter/nft_redir.c
++++ b/net/netfilter/nft_redir.c
+@@ -48,7 +48,7 @@ static int nft_redir_init(const struct nft_ctx *ctx,
+ 	unsigned int plen;
+ 	int err;
+ 
+-	plen = sizeof_field(struct nf_nat_range, min_addr.all);
++	plen = sizeof_field(struct nf_nat_range, min_proto.all);
+ 	if (tb[NFTA_REDIR_REG_PROTO_MIN]) {
+ 		err = nft_parse_register_load(tb[NFTA_REDIR_REG_PROTO_MIN],
+ 					      &priv->sreg_proto_min, plen);
+@@ -232,7 +232,7 @@ static struct nft_expr_type nft_redir_inet_type __read_mostly = {
+ 	.name		= "redir",
+ 	.ops		= &nft_redir_inet_ops,
+ 	.policy		= nft_redir_policy,
+-	.maxattr	= NFTA_MASQ_MAX,
++	.maxattr	= NFTA_REDIR_MAX,
+ 	.owner		= THIS_MODULE,
+ };
+ 
+diff --git a/net/smc/smc_cdc.c b/net/smc/smc_cdc.c
+index 94503f36b9a61..9125d28d9ff5d 100644
+--- a/net/smc/smc_cdc.c
++++ b/net/smc/smc_cdc.c
+@@ -104,6 +104,9 @@ int smc_cdc_msg_send(struct smc_connection *conn,
+ 	union smc_host_cursor cfed;
+ 	int rc;
+ 
++	if (unlikely(!READ_ONCE(conn->sndbuf_desc)))
++		return -ENOBUFS;
++
+ 	smc_cdc_add_pending_send(conn, pend);
+ 
+ 	conn->tx_cdc_seq++;
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index bf485a2017a4e..e84241ff4ac4f 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -912,7 +912,7 @@ static void __smc_lgr_terminate(struct smc_link_group *lgr, bool soft)
+ 	if (lgr->terminating)
+ 		return;	/* lgr already terminating */
+ 	/* cancel free_work sync, will terminate when lgr->freeing is set */
+-	cancel_delayed_work_sync(&lgr->free_work);
++	cancel_delayed_work(&lgr->free_work);
+ 	lgr->terminating = 1;
+ 
+ 	/* kill remaining link group connections */
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index fdbd56ed4bd52..ba73014805a4f 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -2611,9 +2611,6 @@ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload)
+ 		if (inner_mode == NULL)
+ 			goto error;
+ 
+-		if (!(inner_mode->flags & XFRM_MODE_FLAG_TUNNEL))
+-			goto error;
+-
+ 		x->inner_mode = *inner_mode;
+ 
+ 		if (x->props.family == AF_INET)
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 2a5ba9dca6b08..f96e70c85f84a 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -359,6 +359,15 @@ static const struct config_entry config_table[] = {
+ 	},
+ #endif
+ 
++/* Meteor Lake */
++#if IS_ENABLED(CONFIG_SND_SOC_SOF_METEORLAKE)
++	/* Meteorlake-P */
++	{
++		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++		.device = 0x7e28,
++	},
++#endif
++
+ };
+ 
+ static const struct config_entry *snd_intel_dsp_find_config
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 494bfd2135a9e..de1fe604905f3 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -365,14 +365,15 @@ enum {
+ #define needs_eld_notify_link(chip)	false
+ #endif
+ 
+-#define CONTROLLER_IN_GPU(pci) (((pci)->device == 0x0a0c) || \
++#define CONTROLLER_IN_GPU(pci) (((pci)->vendor == 0x8086) &&         \
++				       (((pci)->device == 0x0a0c) || \
+ 					((pci)->device == 0x0c0c) || \
+ 					((pci)->device == 0x0d0c) || \
+ 					((pci)->device == 0x160c) || \
+ 					((pci)->device == 0x490d) || \
+ 					((pci)->device == 0x4f90) || \
+ 					((pci)->device == 0x4f91) || \
+-					((pci)->device == 0x4f92))
++					((pci)->device == 0x4f92)))
+ 
+ #define IS_BXT(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x5a98)
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f2ef75c8de427..2cf6600c9ca83 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9091,6 +9091,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x144d, 0xca03, "Samsung Galaxy Book2 Pro 360 (NP930QED)", ALC298_FIXUP_SAMSUNG_AMP),
++	SND_PCI_QUIRK(0x144d, 0xc868, "Samsung Galaxy Book2 Pro (NP930XED)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+diff --git a/tools/testing/selftests/net/devlink_port_split.py b/tools/testing/selftests/net/devlink_port_split.py
+index 834066d465fc1..f0fbd7367f4f6 100755
+--- a/tools/testing/selftests/net/devlink_port_split.py
++++ b/tools/testing/selftests/net/devlink_port_split.py
+@@ -57,6 +57,8 @@ class devlink_ports(object):
+         assert stderr == ""
+         ports = json.loads(stdout)['port']
+ 
++        validate_devlink_output(ports, 'flavour')
++
+         for port in ports:
+             if dev in port:
+                 if ports[port]['flavour'] == 'physical':
+@@ -218,6 +220,27 @@ def split_splittable_port(port, k, lanes, dev):
+     unsplit(port.bus_info)
+ 
+ 
++def validate_devlink_output(devlink_data, target_property=None):
++    """
++    Determine if test should be skipped by checking:
++      1. devlink_data contains values
++      2. The target_property exist in devlink_data
++    """
++    skip_reason = None
++    if any(devlink_data.values()):
++        if target_property:
++            skip_reason = "{} not found in devlink output, test skipped".format(target_property)
++            for key in devlink_data:
++                if target_property in devlink_data[key]:
++                    skip_reason = None
++    else:
++        skip_reason = 'devlink output is empty, test skipped'
++
++    if skip_reason:
++        print(skip_reason)
++        sys.exit(KSFT_SKIP)
++
++
+ def make_parser():
+     parser = argparse.ArgumentParser(description='A test for port splitting.')
+     parser.add_argument('--dev',
+@@ -238,6 +261,7 @@ def main(cmdline=None):
+         stdout, stderr = run_command(cmd)
+         assert stderr == ""
+ 
++        validate_devlink_output(json.loads(stdout))
+         devs = json.loads(stdout)['dev']
+         dev = list(devs.keys())[0]
+ 
+@@ -249,6 +273,7 @@ def main(cmdline=None):
+ 
+     ports = devlink_ports(dev)
+ 
++    found_max_lanes = False
+     for port in ports.if_names:
+         max_lanes = get_max_lanes(port.name)
+ 
+@@ -271,6 +296,11 @@ def main(cmdline=None):
+                 split_splittable_port(port, lane, max_lanes, dev)
+ 
+                 lane //= 2
++        found_max_lanes = True
++
++    if not found_max_lanes:
++        print(f"Test not started, no port of device {dev} reports max_lanes")
++        sys.exit(KSFT_SKIP)
+ 
+ 
+ if __name__ == "__main__":


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-04-05 10:01 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2023-04-05 10:01 UTC (permalink / raw
  To: gentoo-commits

commit:     7cbb1827a9ba73c13e3f7b4d1d27406655a4285b
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Apr  5 10:00:58 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Apr  5 10:00:58 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7cbb1827

Linux patch 5.10.177

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |    4 +
 1176_linux-5.10.177.patch | 5983 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5987 insertions(+)

diff --git a/0000_README b/0000_README
index 50964ce7..89aa39fe 100644
--- a/0000_README
+++ b/0000_README
@@ -747,6 +747,10 @@ Patch:  1175_linux-5.10.176.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.176
 
+Patch:  1176_linux-5.10.177.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.177
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1176_linux-5.10.177.patch b/1176_linux-5.10.177.patch
new file mode 100644
index 00000000..326ab251
--- /dev/null
+++ b/1176_linux-5.10.177.patch
@@ -0,0 +1,5983 @@
+diff --git a/Makefile b/Makefile
+index 71caf59383615..ae202cc531588 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 176
++SUBLEVEL = 177
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/e60k02.dtsi b/arch/arm/boot/dts/e60k02.dtsi
+index 3af1ab4458ef5..bd1f58ae23743 100644
+--- a/arch/arm/boot/dts/e60k02.dtsi
++++ b/arch/arm/boot/dts/e60k02.dtsi
+@@ -296,6 +296,7 @@
+ 
+ &usbotg1 {
+ 	pinctrl-names = "default";
++	pinctrl-0 = <&pinctrl_usbotg1>;
+ 	disable-over-current;
+ 	srp-disable;
+ 	hnp-disable;
+diff --git a/arch/arm/boot/dts/imx6sl-tolino-shine2hd.dts b/arch/arm/boot/dts/imx6sl-tolino-shine2hd.dts
+index caa2796088036..0fd126db4e5db 100644
+--- a/arch/arm/boot/dts/imx6sl-tolino-shine2hd.dts
++++ b/arch/arm/boot/dts/imx6sl-tolino-shine2hd.dts
+@@ -580,6 +580,7 @@
+ 
+ &usbotg1 {
+ 	pinctrl-names = "default";
++	pinctrl-0 = <&pinctrl_usbotg1>;
+ 	disable-over-current;
+ 	srp-disable;
+ 	hnp-disable;
+diff --git a/arch/m68k/kernel/traps.c b/arch/m68k/kernel/traps.c
+index b2a31afb998c2..7d42c84649ac2 100644
+--- a/arch/m68k/kernel/traps.c
++++ b/arch/m68k/kernel/traps.c
+@@ -30,6 +30,7 @@
+ #include <linux/init.h>
+ #include <linux/ptrace.h>
+ #include <linux/kallsyms.h>
++#include <linux/extable.h>
+ 
+ #include <asm/setup.h>
+ #include <asm/fpu.h>
+@@ -549,7 +550,8 @@ static inline void bus_error030 (struct frame *fp)
+ 			errorcode |= 2;
+ 
+ 		if (mmusr & (MMU_I | MMU_WP)) {
+-			if (ssw & 4) {
++			/* We might have an exception table for this PC */
++			if (ssw & 4 && !search_exception_tables(fp->ptregs.pc)) {
+ 				pr_err("Data %s fault at %#010lx in %s (pc=%#lx)\n",
+ 				       ssw & RW ? "read" : "write",
+ 				       fp->un.fmtb.daddr,
+diff --git a/arch/mips/bmips/dma.c b/arch/mips/bmips/dma.c
+index 49061b870680b..daef44f682984 100644
+--- a/arch/mips/bmips/dma.c
++++ b/arch/mips/bmips/dma.c
+@@ -64,6 +64,8 @@ phys_addr_t dma_to_phys(struct device *dev, dma_addr_t dma_addr)
+ 	return dma_addr;
+ }
+ 
++bool bmips_rac_flush_disable;
++
+ void arch_sync_dma_for_cpu_all(void)
+ {
+ 	void __iomem *cbr = BMIPS_GET_CBR();
+@@ -74,6 +76,9 @@ void arch_sync_dma_for_cpu_all(void)
+ 	    boot_cpu_type() != CPU_BMIPS4380)
+ 		return;
+ 
++	if (unlikely(bmips_rac_flush_disable))
++		return;
++
+ 	/* Flush stale data out of the readahead cache */
+ 	cfg = __raw_readl(cbr + BMIPS_RAC_CONFIG);
+ 	__raw_writel(cfg | 0x100, cbr + BMIPS_RAC_CONFIG);
+diff --git a/arch/mips/bmips/setup.c b/arch/mips/bmips/setup.c
+index 1b06b25aea87d..16063081d61ec 100644
+--- a/arch/mips/bmips/setup.c
++++ b/arch/mips/bmips/setup.c
+@@ -34,6 +34,8 @@
+ #define REG_BCM6328_OTP		((void __iomem *)CKSEG1ADDR(0x1000062c))
+ #define BCM6328_TP1_DISABLED	BIT(9)
+ 
++extern bool bmips_rac_flush_disable;
++
+ static const unsigned long kbase = VMLINUX_LOAD_ADDRESS & 0xfff00000;
+ 
+ struct bmips_quirk {
+@@ -103,6 +105,12 @@ static void bcm6358_quirks(void)
+ 	 * disable SMP for now
+ 	 */
+ 	bmips_smp_enabled = 0;
++
++	/*
++	 * RAC flush causes kernel panics on BCM6358 when booting from TP1
++	 * because the bootloader is not initializing it properly.
++	 */
++	bmips_rac_flush_disable = !!(read_c0_brcm_cmt_local() & (1 << 31));
+ }
+ 
+ static void bcm6368_quirks(void)
+diff --git a/arch/powerpc/kernel/ptrace/ptrace-view.c b/arch/powerpc/kernel/ptrace/ptrace-view.c
+index 7e6478e7ed074..67c126d4f4314 100644
+--- a/arch/powerpc/kernel/ptrace/ptrace-view.c
++++ b/arch/powerpc/kernel/ptrace/ptrace-view.c
+@@ -298,6 +298,9 @@ static int gpr_set(struct task_struct *target, const struct user_regset *regset,
+ static int ppr_get(struct task_struct *target, const struct user_regset *regset,
+ 		   struct membuf to)
+ {
++	if (!target->thread.regs)
++		return -EINVAL;
++
+ 	return membuf_write(&to, &target->thread.regs->ppr, sizeof(u64));
+ }
+ 
+@@ -305,6 +308,9 @@ static int ppr_set(struct task_struct *target, const struct user_regset *regset,
+ 		   unsigned int pos, unsigned int count, const void *kbuf,
+ 		   const void __user *ubuf)
+ {
++	if (!target->thread.regs)
++		return -EINVAL;
++
+ 	return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+ 				  &target->thread.regs->ppr, 0, sizeof(u64));
+ }
+diff --git a/arch/riscv/include/uapi/asm/setup.h b/arch/riscv/include/uapi/asm/setup.h
+new file mode 100644
+index 0000000000000..66b13a5228808
+--- /dev/null
++++ b/arch/riscv/include/uapi/asm/setup.h
+@@ -0,0 +1,8 @@
++/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
++
++#ifndef _UAPI_ASM_RISCV_SETUP_H
++#define _UAPI_ASM_RISCV_SETUP_H
++
++#define COMMAND_LINE_SIZE	1024
++
++#endif /* _UAPI_ASM_RISCV_SETUP_H */
+diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
+index 0267405ab7c69..fcfd78f99cb4b 100644
+--- a/arch/s390/lib/uaccess.c
++++ b/arch/s390/lib/uaccess.c
+@@ -339,7 +339,7 @@ static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size
+ 		"4: slgr  %0,%0\n"
+ 		"5:\n"
+ 		EX_TABLE(0b,2b) EX_TABLE(3b,5b)
+-		: "+a" (size), "+a" (to), "+a" (tmp1), "=a" (tmp2)
++		: "+&a" (size), "+&a" (to), "+a" (tmp1), "=&a" (tmp2)
+ 		: "a" (empty_zero_page), "d" (reg0) : "cc", "memory");
+ 	return size;
+ }
+diff --git a/arch/sh/include/asm/processor_32.h b/arch/sh/include/asm/processor_32.h
+index aa92cc933889d..6c7966e627758 100644
+--- a/arch/sh/include/asm/processor_32.h
++++ b/arch/sh/include/asm/processor_32.h
+@@ -50,6 +50,7 @@
+ #define SR_FD		0x00008000
+ #define SR_MD		0x40000000
+ 
++#define SR_USER_MASK	0x00000303	// M, Q, S, T bits
+ /*
+  * DSP structure and data
+  */
+diff --git a/arch/sh/kernel/signal_32.c b/arch/sh/kernel/signal_32.c
+index dd3092911efad..dc13702003f0f 100644
+--- a/arch/sh/kernel/signal_32.c
++++ b/arch/sh/kernel/signal_32.c
+@@ -115,6 +115,7 @@ static int
+ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *r0_p)
+ {
+ 	unsigned int err = 0;
++	unsigned int sr = regs->sr & ~SR_USER_MASK;
+ 
+ #define COPY(x)		err |= __get_user(regs->x, &sc->sc_##x)
+ 			COPY(regs[1]);
+@@ -130,6 +131,8 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *r0_p
+ 	COPY(sr);	COPY(pc);
+ #undef COPY
+ 
++	regs->sr = (regs->sr & SR_USER_MASK) | sr;
++
+ #ifdef CONFIG_SH_FPU
+ 	if (boot_cpu_data.flags & CPU_HAS_FPU) {
+ 		int owned_fp;
+diff --git a/arch/xtensa/kernel/traps.c b/arch/xtensa/kernel/traps.c
+index 129f23c0ab553..6af68305b795b 100644
+--- a/arch/xtensa/kernel/traps.c
++++ b/arch/xtensa/kernel/traps.c
+@@ -503,7 +503,7 @@ static size_t kstack_depth_to_print = CONFIG_PRINT_STACK_DEPTH;
+ 
+ void show_stack(struct task_struct *task, unsigned long *sp, const char *loglvl)
+ {
+-	size_t len;
++	size_t len, off = 0;
+ 
+ 	if (!sp)
+ 		sp = stack_pointer(task);
+@@ -512,9 +512,17 @@ void show_stack(struct task_struct *task, unsigned long *sp, const char *loglvl)
+ 		  kstack_depth_to_print * STACK_DUMP_ENTRY_SIZE);
+ 
+ 	printk("%sStack:\n", loglvl);
+-	print_hex_dump(loglvl, " ", DUMP_PREFIX_NONE,
+-		       STACK_DUMP_LINE_SIZE, STACK_DUMP_ENTRY_SIZE,
+-		       sp, len, false);
++	while (off < len) {
++		u8 line[STACK_DUMP_LINE_SIZE];
++		size_t line_len = len - off > STACK_DUMP_LINE_SIZE ?
++			STACK_DUMP_LINE_SIZE : len - off;
++
++		__memcpy(line, (u8 *)sp + off, line_len);
++		print_hex_dump(loglvl, " ", DUMP_PREFIX_NONE,
++			       STACK_DUMP_LINE_SIZE, STACK_DUMP_ENTRY_SIZE,
++			       line, line_len, false);
++		off += STACK_DUMP_LINE_SIZE;
++	}
+ 	show_trace(task, sp, loglvl);
+ }
+ 
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index 82f6f1fbe9e78..a217b50439e72 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -2915,6 +2915,7 @@ close_card_oam(struct idt77252_dev *card)
+ 
+ 				recycle_rx_pool_skb(card, &vc->rcv.rx_pool);
+ 			}
++			kfree(vc);
+ 		}
+ 	}
+ }
+@@ -2958,6 +2959,15 @@ open_card_ubr0(struct idt77252_dev *card)
+ 	return 0;
+ }
+ 
++static void
++close_card_ubr0(struct idt77252_dev *card)
++{
++	struct vc_map *vc = card->vcs[0];
++
++	free_scq(card, vc->scq);
++	kfree(vc);
++}
++
+ static int
+ idt77252_dev_open(struct idt77252_dev *card)
+ {
+@@ -3007,6 +3017,7 @@ static void idt77252_dev_close(struct atm_dev *dev)
+ 	struct idt77252_dev *card = dev->dev_data;
+ 	u32 conf;
+ 
++	close_card_ubr0(card);
+ 	close_card_oam(card);
+ 
+ 	conf = SAR_CFG_RXPTH |	/* enable receive path           */
+diff --git a/drivers/bluetooth/btqcomsmd.c b/drivers/bluetooth/btqcomsmd.c
+index 2acb719e596f5..11c7e04bf3947 100644
+--- a/drivers/bluetooth/btqcomsmd.c
++++ b/drivers/bluetooth/btqcomsmd.c
+@@ -122,6 +122,21 @@ static int btqcomsmd_setup(struct hci_dev *hdev)
+ 	return 0;
+ }
+ 
++static int btqcomsmd_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
++{
++	int ret;
++
++	ret = qca_set_bdaddr_rome(hdev, bdaddr);
++	if (ret)
++		return ret;
++
++	/* The firmware stops responding for a while after setting the bdaddr,
++	 * causing timeouts for subsequent commands. Sleep a bit to avoid this.
++	 */
++	usleep_range(1000, 10000);
++	return 0;
++}
++
+ static int btqcomsmd_probe(struct platform_device *pdev)
+ {
+ 	struct btqcomsmd *btq;
+@@ -162,7 +177,7 @@ static int btqcomsmd_probe(struct platform_device *pdev)
+ 	hdev->close = btqcomsmd_close;
+ 	hdev->send = btqcomsmd_send;
+ 	hdev->setup = btqcomsmd_setup;
+-	hdev->set_bdaddr = qca_set_bdaddr_rome;
++	hdev->set_bdaddr = btqcomsmd_set_bdaddr;
+ 
+ 	ret = hci_register_dev(hdev);
+ 	if (ret < 0)
+diff --git a/drivers/bluetooth/btsdio.c b/drivers/bluetooth/btsdio.c
+index 199e8f7d426d9..7050a16e7efeb 100644
+--- a/drivers/bluetooth/btsdio.c
++++ b/drivers/bluetooth/btsdio.c
+@@ -352,6 +352,7 @@ static void btsdio_remove(struct sdio_func *func)
+ 
+ 	BT_DBG("func %p", func);
+ 
++	cancel_work_sync(&data->work);
+ 	if (!data)
+ 		return;
+ 
+diff --git a/drivers/bus/imx-weim.c b/drivers/bus/imx-weim.c
+index 28bb65a5613fd..201767823edb5 100644
+--- a/drivers/bus/imx-weim.c
++++ b/drivers/bus/imx-weim.c
+@@ -192,8 +192,8 @@ static int weim_parse_dt(struct platform_device *pdev, void __iomem *base)
+ 	const struct of_device_id *of_id = of_match_device(weim_id_table,
+ 							   &pdev->dev);
+ 	const struct imx_weim_devtype *devtype = of_id->data;
++	int ret = 0, have_child = 0;
+ 	struct device_node *child;
+-	int ret, have_child = 0;
+ 	struct cs_timing_state ts = {};
+ 	u32 reg;
+ 
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 0f2bac24e564d..20dc2452815c7 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -74,7 +74,8 @@
+ /*
+  * Timer values
+  */
+-#define SSIF_MSG_USEC		20000	/* 20ms between message tries. */
++#define SSIF_MSG_USEC		60000	/* 60ms between message tries (T3). */
++#define SSIF_REQ_RETRY_USEC	60000	/* 60ms between send retries (T6). */
+ #define SSIF_MSG_PART_USEC	5000	/* 5ms for a message part */
+ 
+ /* How many times to we retry sending/receiving the message. */
+@@ -82,7 +83,9 @@
+ #define	SSIF_RECV_RETRIES	250
+ 
+ #define SSIF_MSG_MSEC		(SSIF_MSG_USEC / 1000)
++#define SSIF_REQ_RETRY_MSEC	(SSIF_REQ_RETRY_USEC / 1000)
+ #define SSIF_MSG_JIFFIES	((SSIF_MSG_USEC * 1000) / TICK_NSEC)
++#define SSIF_REQ_RETRY_JIFFIES	((SSIF_REQ_RETRY_USEC * 1000) / TICK_NSEC)
+ #define SSIF_MSG_PART_JIFFIES	((SSIF_MSG_PART_USEC * 1000) / TICK_NSEC)
+ 
+ /*
+@@ -229,6 +232,9 @@ struct ssif_info {
+ 	bool		    got_alert;
+ 	bool		    waiting_alert;
+ 
++	/* Used to inform the timeout that it should do a resend. */
++	bool		    do_resend;
++
+ 	/*
+ 	 * If set to true, this will request events the next time the
+ 	 * state machine is idle.
+@@ -510,7 +516,7 @@ static int ipmi_ssif_thread(void *data)
+ 	return 0;
+ }
+ 
+-static int ssif_i2c_send(struct ssif_info *ssif_info,
++static void ssif_i2c_send(struct ssif_info *ssif_info,
+ 			ssif_i2c_done handler,
+ 			int read_write, int command,
+ 			unsigned char *data, unsigned int size)
+@@ -522,7 +528,6 @@ static int ssif_i2c_send(struct ssif_info *ssif_info,
+ 	ssif_info->i2c_data = data;
+ 	ssif_info->i2c_size = size;
+ 	complete(&ssif_info->wake_thread);
+-	return 0;
+ }
+ 
+ 
+@@ -531,40 +536,36 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 
+ static void start_get(struct ssif_info *ssif_info)
+ {
+-	int rv;
+-
+ 	ssif_info->rtc_us_timer = 0;
+ 	ssif_info->multi_pos = 0;
+ 
+-	rv = ssif_i2c_send(ssif_info, msg_done_handler, I2C_SMBUS_READ,
+-			  SSIF_IPMI_RESPONSE,
+-			  ssif_info->recv, I2C_SMBUS_BLOCK_DATA);
+-	if (rv < 0) {
+-		/* request failed, just return the error. */
+-		if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
+-			dev_dbg(&ssif_info->client->dev,
+-				"Error from i2c_non_blocking_op(5)\n");
+-
+-		msg_done_handler(ssif_info, -EIO, NULL, 0);
+-	}
++	ssif_i2c_send(ssif_info, msg_done_handler, I2C_SMBUS_READ,
++		  SSIF_IPMI_RESPONSE,
++		  ssif_info->recv, I2C_SMBUS_BLOCK_DATA);
+ }
+ 
++static void start_resend(struct ssif_info *ssif_info);
++
+ static void retry_timeout(struct timer_list *t)
+ {
+ 	struct ssif_info *ssif_info = from_timer(ssif_info, t, retry_timer);
+ 	unsigned long oflags, *flags;
+-	bool waiting;
++	bool waiting, resend;
+ 
+ 	if (ssif_info->stopping)
+ 		return;
+ 
+ 	flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
++	resend = ssif_info->do_resend;
++	ssif_info->do_resend = false;
+ 	waiting = ssif_info->waiting_alert;
+ 	ssif_info->waiting_alert = false;
+ 	ipmi_ssif_unlock_cond(ssif_info, flags);
+ 
+ 	if (waiting)
+ 		start_get(ssif_info);
++	if (resend)
++		start_resend(ssif_info);
+ }
+ 
+ static void watch_timeout(struct timer_list *t)
+@@ -613,14 +614,11 @@ static void ssif_alert(struct i2c_client *client, enum i2c_alert_protocol type,
+ 		start_get(ssif_info);
+ }
+ 
+-static int start_resend(struct ssif_info *ssif_info);
+-
+ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 			     unsigned char *data, unsigned int len)
+ {
+ 	struct ipmi_smi_msg *msg;
+ 	unsigned long oflags, *flags;
+-	int rv;
+ 
+ 	/*
+ 	 * We are single-threaded here, so no need for a lock until we
+@@ -666,17 +664,10 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 		ssif_info->multi_len = len;
+ 		ssif_info->multi_pos = 1;
+ 
+-		rv = ssif_i2c_send(ssif_info, msg_done_handler, I2C_SMBUS_READ,
+-				  SSIF_IPMI_MULTI_PART_RESPONSE_MIDDLE,
+-				  ssif_info->recv, I2C_SMBUS_BLOCK_DATA);
+-		if (rv < 0) {
+-			if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
+-				dev_dbg(&ssif_info->client->dev,
+-					"Error from i2c_non_blocking_op(1)\n");
+-
+-			result = -EIO;
+-		} else
+-			return;
++		ssif_i2c_send(ssif_info, msg_done_handler, I2C_SMBUS_READ,
++			 SSIF_IPMI_MULTI_PART_RESPONSE_MIDDLE,
++			 ssif_info->recv, I2C_SMBUS_BLOCK_DATA);
++		return;
+ 	} else if (ssif_info->multi_pos) {
+ 		/* Middle of multi-part read.  Start the next transaction. */
+ 		int i;
+@@ -738,19 +729,12 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 
+ 			ssif_info->multi_pos++;
+ 
+-			rv = ssif_i2c_send(ssif_info, msg_done_handler,
+-					   I2C_SMBUS_READ,
+-					   SSIF_IPMI_MULTI_PART_RESPONSE_MIDDLE,
+-					   ssif_info->recv,
+-					   I2C_SMBUS_BLOCK_DATA);
+-			if (rv < 0) {
+-				if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
+-					dev_dbg(&ssif_info->client->dev,
+-						"Error from ssif_i2c_send\n");
+-
+-				result = -EIO;
+-			} else
+-				return;
++			ssif_i2c_send(ssif_info, msg_done_handler,
++				  I2C_SMBUS_READ,
++				  SSIF_IPMI_MULTI_PART_RESPONSE_MIDDLE,
++				  ssif_info->recv,
++				  I2C_SMBUS_BLOCK_DATA);
++			return;
+ 		}
+ 	}
+ 
+@@ -931,37 +915,27 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ static void msg_written_handler(struct ssif_info *ssif_info, int result,
+ 				unsigned char *data, unsigned int len)
+ {
+-	int rv;
+-
+ 	/* We are single-threaded here, so no need for a lock. */
+ 	if (result < 0) {
+ 		ssif_info->retries_left--;
+ 		if (ssif_info->retries_left > 0) {
+-			if (!start_resend(ssif_info)) {
+-				ssif_inc_stat(ssif_info, send_retries);
+-				return;
+-			}
+-			/* request failed, just return the error. */
+-			ssif_inc_stat(ssif_info, send_errors);
+-
+-			if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
+-				dev_dbg(&ssif_info->client->dev,
+-					"%s: Out of retries\n", __func__);
+-			msg_done_handler(ssif_info, -EIO, NULL, 0);
++			/*
++			 * Wait the retry timeout time per the spec,
++			 * then redo the send.
++			 */
++			ssif_info->do_resend = true;
++			mod_timer(&ssif_info->retry_timer,
++				  jiffies + SSIF_REQ_RETRY_JIFFIES);
+ 			return;
+ 		}
+ 
+ 		ssif_inc_stat(ssif_info, send_errors);
+ 
+-		/*
+-		 * Got an error on transmit, let the done routine
+-		 * handle it.
+-		 */
+ 		if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
+ 			dev_dbg(&ssif_info->client->dev,
+-				"%s: Error  %d\n", __func__, result);
++				"%s: Out of retries\n", __func__);
+ 
+-		msg_done_handler(ssif_info, result, NULL, 0);
++		msg_done_handler(ssif_info, -EIO, NULL, 0);
+ 		return;
+ 	}
+ 
+@@ -995,18 +969,9 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
+ 			ssif_info->multi_data = NULL;
+ 		}
+ 
+-		rv = ssif_i2c_send(ssif_info, msg_written_handler,
+-				   I2C_SMBUS_WRITE, cmd,
+-				   data_to_send, I2C_SMBUS_BLOCK_DATA);
+-		if (rv < 0) {
+-			/* request failed, just return the error. */
+-			ssif_inc_stat(ssif_info, send_errors);
+-
+-			if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
+-				dev_dbg(&ssif_info->client->dev,
+-					"Error from i2c_non_blocking_op(3)\n");
+-			msg_done_handler(ssif_info, -EIO, NULL, 0);
+-		}
++		ssif_i2c_send(ssif_info, msg_written_handler,
++			  I2C_SMBUS_WRITE, cmd,
++			  data_to_send, I2C_SMBUS_BLOCK_DATA);
+ 	} else {
+ 		/* Ready to request the result. */
+ 		unsigned long oflags, *flags;
+@@ -1033,9 +998,8 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
+ 	}
+ }
+ 
+-static int start_resend(struct ssif_info *ssif_info)
++static void start_resend(struct ssif_info *ssif_info)
+ {
+-	int rv;
+ 	int command;
+ 
+ 	ssif_info->got_alert = false;
+@@ -1057,12 +1021,8 @@ static int start_resend(struct ssif_info *ssif_info)
+ 		ssif_info->data[0] = ssif_info->data_len;
+ 	}
+ 
+-	rv = ssif_i2c_send(ssif_info, msg_written_handler, I2C_SMBUS_WRITE,
+-			  command, ssif_info->data, I2C_SMBUS_BLOCK_DATA);
+-	if (rv && (ssif_info->ssif_debug & SSIF_DEBUG_MSG))
+-		dev_dbg(&ssif_info->client->dev,
+-			"Error from i2c_non_blocking_op(4)\n");
+-	return rv;
++	ssif_i2c_send(ssif_info, msg_written_handler, I2C_SMBUS_WRITE,
++		   command, ssif_info->data, I2C_SMBUS_BLOCK_DATA);
+ }
+ 
+ static int start_send(struct ssif_info *ssif_info,
+@@ -1077,7 +1037,8 @@ static int start_send(struct ssif_info *ssif_info,
+ 	ssif_info->retries_left = SSIF_SEND_RETRIES;
+ 	memcpy(ssif_info->data + 1, data, len);
+ 	ssif_info->data_len = len;
+-	return start_resend(ssif_info);
++	start_resend(ssif_info);
++	return 0;
+ }
+ 
+ /* Must be called with the message lock held. */
+@@ -1377,8 +1338,10 @@ static int do_cmd(struct i2c_client *client, int len, unsigned char *msg,
+ 	ret = i2c_smbus_write_block_data(client, SSIF_IPMI_REQUEST, len, msg);
+ 	if (ret) {
+ 		retry_cnt--;
+-		if (retry_cnt > 0)
++		if (retry_cnt > 0) {
++			msleep(SSIF_REQ_RETRY_MSEC);
+ 			goto retry1;
++		}
+ 		return -ENODEV;
+ 	}
+ 
+@@ -1519,8 +1482,10 @@ retry_write:
+ 					 32, msg);
+ 	if (ret) {
+ 		retry_cnt--;
+-		if (retry_cnt > 0)
++		if (retry_cnt > 0) {
++			msleep(SSIF_REQ_RETRY_MSEC);
+ 			goto retry_write;
++		}
+ 		dev_err(&client->dev, "Could not write multi-part start, though the BMC said it could handle it.  Just limit sends to one part.\n");
+ 		return ret;
+ 	}
+diff --git a/drivers/firmware/arm_scmi/mailbox.c b/drivers/firmware/arm_scmi/mailbox.c
+index 4626404be541a..ad773a657ed24 100644
+--- a/drivers/firmware/arm_scmi/mailbox.c
++++ b/drivers/firmware/arm_scmi/mailbox.c
+@@ -52,6 +52,39 @@ static bool mailbox_chan_available(struct device *dev, int idx)
+ 					   "#mbox-cells", idx, NULL);
+ }
+ 
++static int mailbox_chan_validate(struct device *cdev)
++{
++	int num_mb, num_sh, ret = 0;
++	struct device_node *np = cdev->of_node;
++
++	num_mb = of_count_phandle_with_args(np, "mboxes", "#mbox-cells");
++	num_sh = of_count_phandle_with_args(np, "shmem", NULL);
++	/* Bail out if mboxes and shmem descriptors are inconsistent */
++	if (num_mb <= 0 || num_sh > 2 || num_mb != num_sh) {
++		dev_warn(cdev, "Invalid channel descriptor for '%s'\n",
++			 of_node_full_name(np));
++		return -EINVAL;
++	}
++
++	if (num_sh > 1) {
++		struct device_node *np_tx, *np_rx;
++
++		np_tx = of_parse_phandle(np, "shmem", 0);
++		np_rx = of_parse_phandle(np, "shmem", 1);
++		/* SCMI Tx and Rx shared mem areas have to be distinct */
++		if (!np_tx || !np_rx || np_tx == np_rx) {
++			dev_warn(cdev, "Invalid shmem descriptor for '%s'\n",
++				 of_node_full_name(np));
++			ret = -EINVAL;
++		}
++
++		of_node_put(np_tx);
++		of_node_put(np_rx);
++	}
++
++	return ret;
++}
++
+ static int mailbox_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
+ 			      bool tx)
+ {
+@@ -64,6 +97,10 @@ static int mailbox_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
+ 	resource_size_t size;
+ 	struct resource res;
+ 
++	ret = mailbox_chan_validate(cdev);
++	if (ret)
++		return ret;
++
+ 	smbox = devm_kzalloc(dev, sizeof(*smbox), GFP_KERNEL);
+ 	if (!smbox)
+ 		return -ENOMEM;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index d617e98afb76d..767b3d31c7205 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -164,6 +164,21 @@ static bool needs_dsc_aux_workaround(struct dc_link *link)
+ 	return false;
+ }
+ 
++bool is_synaptics_cascaded_panamera(struct dc_link *link, struct drm_dp_mst_port *port)
++{
++	u8 branch_vendor_data[4] = { 0 }; // Vendor data 0x50C ~ 0x50F
++
++	if (drm_dp_dpcd_read(port->mgr->aux, DP_BRANCH_VENDOR_SPECIFIC_START, &branch_vendor_data, 4) == 4) {
++		if (link->dpcd_caps.branch_dev_id == DP_BRANCH_DEVICE_ID_90CC24 &&
++				IS_SYNAPTICS_CASCADED_PANAMERA(link->dpcd_caps.branch_dev_name, branch_vendor_data)) {
++			DRM_INFO("Synaptics Cascaded MST hub\n");
++			return true;
++		}
++	}
++
++	return false;
++}
++
+ static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnector)
+ {
+ 	struct dc_sink *dc_sink = aconnector->dc_sink;
+@@ -185,6 +200,10 @@ static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnecto
+ 	    needs_dsc_aux_workaround(aconnector->dc_link))
+ 		aconnector->dsc_aux = &aconnector->mst_port->dm_dp_aux.aux;
+ 
++	/* synaptics cascaded MST hub case */
++	if (!aconnector->dsc_aux && is_synaptics_cascaded_panamera(aconnector->dc_link, port))
++		aconnector->dsc_aux = port->mgr->aux;
++
+ 	if (!aconnector->dsc_aux)
+ 		return false;
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
+index b38bd68121ceb..5d60e2bf0bd88 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
+@@ -26,6 +26,18 @@
+ #ifndef __DAL_AMDGPU_DM_MST_TYPES_H__
+ #define __DAL_AMDGPU_DM_MST_TYPES_H__
+ 
++#define DP_BRANCH_VENDOR_SPECIFIC_START 0x50C
++
++/**
++ * Panamera MST Hub detection
++ * Offset DPCD 050Eh == 0x5A indicates cascaded MST hub case
++ * Check from beginning of branch device vendor specific field (050Ch)
++ */
++#define IS_SYNAPTICS_PANAMERA(branchDevName) (((int)branchDevName[4] & 0xF0) == 0x50 ? 1 : 0)
++#define BRANCH_HW_REVISION_PANAMERA_A2 0x10
++#define SYNAPTICS_CASCADED_HUB_ID  0x5A
++#define IS_SYNAPTICS_CASCADED_PANAMERA(devName, data) ((IS_SYNAPTICS_PANAMERA(devName) && ((int)data[2] == SYNAPTICS_CASCADED_HUB_ID)) ? 1 : 0)
++
+ struct amdgpu_display_manager;
+ struct amdgpu_dm_connector;
+ 
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+index 4aa3426a9ba4b..33974cc57e32a 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+@@ -93,7 +93,15 @@ static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
+ static int etnaviv_gem_prime_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
+ 		struct vm_area_struct *vma)
+ {
+-	return dma_buf_mmap(etnaviv_obj->base.dma_buf, vma, 0);
++	int ret;
++
++	ret = dma_buf_mmap(etnaviv_obj->base.dma_buf, vma, 0);
++	if (!ret) {
++		/* Drop the reference acquired by drm_gem_mmap_obj(). */
++		drm_gem_object_put(&etnaviv_obj->base);
++	}
++
++	return ret;
+ }
+ 
+ static const struct etnaviv_gem_ops etnaviv_gem_prime_ops = {
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index 45c2556d63955..d46011f7a8380 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -13335,6 +13335,7 @@ intel_crtc_prepare_cleared_state(struct intel_crtc_state *crtc_state)
+ 	 * only fields that are know to not cause problems are preserved. */
+ 
+ 	saved_state->uapi = crtc_state->uapi;
++	saved_state->inherited = crtc_state->inherited;
+ 	saved_state->scaler_state = crtc_state->scaler_state;
+ 	saved_state->shared_dpll = crtc_state->shared_dpll;
+ 	saved_state->dpll_hw_state = crtc_state->dpll_hw_state;
+diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
+index 0532a5069c04b..cae9ac6379a5d 100644
+--- a/drivers/gpu/drm/i915/i915_active.c
++++ b/drivers/gpu/drm/i915/i915_active.c
+@@ -96,8 +96,7 @@ static void debug_active_init(struct i915_active *ref)
+ static void debug_active_activate(struct i915_active *ref)
+ {
+ 	lockdep_assert_held(&ref->tree_lock);
+-	if (!atomic_read(&ref->count)) /* before the first inc */
+-		debug_object_activate(ref, &active_debug_desc);
++	debug_object_activate(ref, &active_debug_desc);
+ }
+ 
+ static void debug_active_deactivate(struct i915_active *ref)
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index b0bfe85f5f6a8..5c29ddf93eb3f 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -320,38 +320,38 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
+ 	if (priv->afbcd.ops) {
+ 		ret = priv->afbcd.ops->init(priv);
+ 		if (ret)
+-			return ret;
++			goto free_drm;
+ 	}
+ 
+ 	/* Encoder Initialization */
+ 
+ 	ret = meson_venc_cvbs_create(priv);
+ 	if (ret)
+-		goto free_drm;
++		goto exit_afbcd;
+ 
+ 	if (has_components) {
+ 		ret = component_bind_all(drm->dev, drm);
+ 		if (ret) {
+ 			dev_err(drm->dev, "Couldn't bind all components\n");
+-			goto free_drm;
++			goto exit_afbcd;
+ 		}
+ 	}
+ 
+ 	ret = meson_plane_create(priv);
+ 	if (ret)
+-		goto free_drm;
++		goto unbind_all;
+ 
+ 	ret = meson_overlay_create(priv);
+ 	if (ret)
+-		goto free_drm;
++		goto unbind_all;
+ 
+ 	ret = meson_crtc_create(priv);
+ 	if (ret)
+-		goto free_drm;
++		goto unbind_all;
+ 
+ 	ret = drm_irq_install(drm, priv->vsync_irq);
+ 	if (ret)
+-		goto free_drm;
++		goto unbind_all;
+ 
+ 	drm_mode_config_reset(drm);
+ 
+@@ -369,6 +369,12 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
+ 
+ uninstall_irq:
+ 	drm_irq_uninstall(drm);
++unbind_all:
++	if (has_components)
++		component_unbind_all(drm->dev, drm);
++exit_afbcd:
++	if (priv->afbcd.ops)
++		priv->afbcd.ops->exit(priv);
+ free_drm:
+ 	drm_dev_put(drm);
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun4i_drv.c b/drivers/gpu/drm/sun4i/sun4i_drv.c
+index c5912fd537729..9c6ae8cfa0b2c 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_drv.c
++++ b/drivers/gpu/drm/sun4i/sun4i_drv.c
+@@ -93,7 +93,7 @@ static int sun4i_drv_bind(struct device *dev)
+ 	/* drm_vblank_init calls kcalloc, which can fail */
+ 	ret = drm_vblank_init(drm, drm->mode_config.num_crtc);
+ 	if (ret)
+-		goto cleanup_mode_config;
++		goto unbind_all;
+ 
+ 	drm->irq_enabled = true;
+ 
+@@ -117,6 +117,8 @@ static int sun4i_drv_bind(struct device *dev)
+ 
+ finish_poll:
+ 	drm_kms_helper_poll_fini(drm);
++unbind_all:
++	component_unbind_all(dev, NULL);
+ cleanup_mode_config:
+ 	drm_mode_config_cleanup(drm);
+ 	of_reserved_mem_device_release(dev);
+diff --git a/drivers/hid/hid-cp2112.c b/drivers/hid/hid-cp2112.c
+index 172f20e88c6c9..d902fe43cb818 100644
+--- a/drivers/hid/hid-cp2112.c
++++ b/drivers/hid/hid-cp2112.c
+@@ -1352,6 +1352,7 @@ static int cp2112_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	girq->parents = NULL;
+ 	girq->default_type = IRQ_TYPE_NONE;
+ 	girq->handler = handle_simple_irq;
++	girq->threaded = true;
+ 
+ 	ret = gpiochip_add_data(&dev->gc, dev);
+ 	if (ret < 0) {
+diff --git a/drivers/hwmon/hwmon.c b/drivers/hwmon/hwmon.c
+index d649fea829994..045dc3fd7953e 100644
+--- a/drivers/hwmon/hwmon.c
++++ b/drivers/hwmon/hwmon.c
+@@ -700,6 +700,7 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
+ {
+ 	struct hwmon_device *hwdev;
+ 	struct device *hdev;
++	struct device *tdev = dev;
+ 	int i, err, id;
+ 
+ 	/* Complain about invalid characters in hwmon name attribute */
+@@ -757,7 +758,9 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
+ 	hwdev->name = name;
+ 	hdev->class = &hwmon_class;
+ 	hdev->parent = dev;
+-	hdev->of_node = dev ? dev->of_node : NULL;
++	while (tdev && !tdev->of_node)
++		tdev = tdev->parent;
++	hdev->of_node = tdev ? tdev->of_node : NULL;
+ 	hwdev->chip = chip;
+ 	dev_set_drvdata(hdev, drvdata);
+ 	dev_set_name(hdev, HWMON_ID_FORMAT, id);
+@@ -769,7 +772,7 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
+ 
+ 	INIT_LIST_HEAD(&hwdev->tzdata);
+ 
+-	if (dev && dev->of_node && chip && chip->ops->read &&
++	if (hdev->of_node && chip && chip->ops->read &&
+ 	    chip->info[0]->type == hwmon_chip &&
+ 	    (chip->info[0]->config[0] & HWMON_C_REGISTER_TZ)) {
+ 		err = hwmon_thermal_register_sensors(hdev);
+diff --git a/drivers/hwmon/it87.c b/drivers/hwmon/it87.c
+index fac9b5c68a6a0..85413d3dc3940 100644
+--- a/drivers/hwmon/it87.c
++++ b/drivers/hwmon/it87.c
+@@ -486,6 +486,8 @@ static const struct it87_devices it87_devices[] = {
+ #define has_pwm_freq2(data)	((data)->features & FEAT_PWM_FREQ2)
+ #define has_six_temp(data)	((data)->features & FEAT_SIX_TEMP)
+ #define has_vin3_5v(data)	((data)->features & FEAT_VIN3_5V)
++#define has_scaling(data)	((data)->features & (FEAT_12MV_ADC | \
++						     FEAT_10_9MV_ADC))
+ 
+ struct it87_sio_data {
+ 	int sioaddr;
+@@ -3098,7 +3100,7 @@ static int it87_probe(struct platform_device *pdev)
+ 			 "Detected broken BIOS defaults, disabling PWM interface\n");
+ 
+ 	/* Starting with IT8721F, we handle scaling of internal voltages */
+-	if (has_12mv_adc(data)) {
++	if (has_scaling(data)) {
+ 		if (sio_data->internal & BIT(0))
+ 			data->in_scaled |= BIT(3);	/* in3 is AVCC */
+ 		if (sio_data->internal & BIT(1))
+diff --git a/drivers/i2c/busses/i2c-imx-lpi2c.c b/drivers/i2c/busses/i2c-imx-lpi2c.c
+index 8b9ba055c4186..2018dbcf241e9 100644
+--- a/drivers/i2c/busses/i2c-imx-lpi2c.c
++++ b/drivers/i2c/busses/i2c-imx-lpi2c.c
+@@ -502,10 +502,14 @@ disable:
+ static irqreturn_t lpi2c_imx_isr(int irq, void *dev_id)
+ {
+ 	struct lpi2c_imx_struct *lpi2c_imx = dev_id;
++	unsigned int enabled;
+ 	unsigned int temp;
+ 
++	enabled = readl(lpi2c_imx->base + LPI2C_MIER);
++
+ 	lpi2c_imx_intctrl(lpi2c_imx, 0);
+ 	temp = readl(lpi2c_imx->base + LPI2C_MSR);
++	temp &= enabled;
+ 
+ 	if (temp & MSR_RDF)
+ 		lpi2c_imx_read_rxfifo(lpi2c_imx);
+diff --git a/drivers/i2c/busses/i2c-xgene-slimpro.c b/drivers/i2c/busses/i2c-xgene-slimpro.c
+index 63cbb9c7c1b0e..76e9dcd638569 100644
+--- a/drivers/i2c/busses/i2c-xgene-slimpro.c
++++ b/drivers/i2c/busses/i2c-xgene-slimpro.c
+@@ -308,6 +308,9 @@ static int slimpro_i2c_blkwr(struct slimpro_i2c_dev *ctx, u32 chip,
+ 	u32 msg[3];
+ 	int rc;
+ 
++	if (writelen > I2C_SMBUS_BLOCK_MAX)
++		return -EINVAL;
++
+ 	memcpy(ctx->dma_buffer, data, writelen);
+ 	paddr = dma_map_single(ctx->dev, ctx->dma_buffer, writelen,
+ 			       DMA_TO_DEVICE);
+diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
+index b067bfd2699c5..0b10c466659e2 100644
+--- a/drivers/input/mouse/alps.c
++++ b/drivers/input/mouse/alps.c
+@@ -852,8 +852,8 @@ static void alps_process_packet_v6(struct psmouse *psmouse)
+ 			x = y = z = 0;
+ 
+ 		/* Divide 4 since trackpoint's speed is too fast */
+-		input_report_rel(dev2, REL_X, (char)x / 4);
+-		input_report_rel(dev2, REL_Y, -((char)y / 4));
++		input_report_rel(dev2, REL_X, (s8)x / 4);
++		input_report_rel(dev2, REL_Y, -((s8)y / 4));
+ 
+ 		psmouse_report_standard_buttons(dev2, packet[3]);
+ 
+@@ -1104,8 +1104,8 @@ static void alps_process_trackstick_packet_v7(struct psmouse *psmouse)
+ 	    ((packet[3] & 0x20) << 1);
+ 	z = (packet[5] & 0x3f) | ((packet[3] & 0x80) >> 1);
+ 
+-	input_report_rel(dev2, REL_X, (char)x);
+-	input_report_rel(dev2, REL_Y, -((char)y));
++	input_report_rel(dev2, REL_X, (s8)x);
++	input_report_rel(dev2, REL_Y, -((s8)y));
+ 	input_report_abs(dev2, ABS_PRESSURE, z);
+ 
+ 	psmouse_report_standard_buttons(dev2, packet[1]);
+@@ -2294,20 +2294,20 @@ static int alps_get_v3_v7_resolution(struct psmouse *psmouse, int reg_pitch)
+ 	if (reg < 0)
+ 		return reg;
+ 
+-	x_pitch = (char)(reg << 4) >> 4; /* sign extend lower 4 bits */
++	x_pitch = (s8)(reg << 4) >> 4; /* sign extend lower 4 bits */
+ 	x_pitch = 50 + 2 * x_pitch; /* In 0.1 mm units */
+ 
+-	y_pitch = (char)reg >> 4; /* sign extend upper 4 bits */
++	y_pitch = (s8)reg >> 4; /* sign extend upper 4 bits */
+ 	y_pitch = 36 + 2 * y_pitch; /* In 0.1 mm units */
+ 
+ 	reg = alps_command_mode_read_reg(psmouse, reg_pitch + 1);
+ 	if (reg < 0)
+ 		return reg;
+ 
+-	x_electrode = (char)(reg << 4) >> 4; /* sign extend lower 4 bits */
++	x_electrode = (s8)(reg << 4) >> 4; /* sign extend lower 4 bits */
+ 	x_electrode = 17 + x_electrode;
+ 
+-	y_electrode = (char)reg >> 4; /* sign extend upper 4 bits */
++	y_electrode = (s8)reg >> 4; /* sign extend upper 4 bits */
+ 	y_electrode = 13 + y_electrode;
+ 
+ 	x_phys = x_pitch * (x_electrode - 1); /* In 0.1 mm units */
+diff --git a/drivers/input/mouse/focaltech.c b/drivers/input/mouse/focaltech.c
+index 6fd5fff0cbfff..c74b99077d16a 100644
+--- a/drivers/input/mouse/focaltech.c
++++ b/drivers/input/mouse/focaltech.c
+@@ -202,8 +202,8 @@ static void focaltech_process_rel_packet(struct psmouse *psmouse,
+ 	state->pressed = packet[0] >> 7;
+ 	finger1 = ((packet[0] >> 4) & 0x7) - 1;
+ 	if (finger1 < FOC_MAX_FINGERS) {
+-		state->fingers[finger1].x += (char)packet[1];
+-		state->fingers[finger1].y += (char)packet[2];
++		state->fingers[finger1].x += (s8)packet[1];
++		state->fingers[finger1].y += (s8)packet[2];
+ 	} else {
+ 		psmouse_err(psmouse, "First finger in rel packet invalid: %d\n",
+ 			    finger1);
+@@ -218,8 +218,8 @@ static void focaltech_process_rel_packet(struct psmouse *psmouse,
+ 	 */
+ 	finger2 = ((packet[3] >> 4) & 0x7) - 1;
+ 	if (finger2 < FOC_MAX_FINGERS) {
+-		state->fingers[finger2].x += (char)packet[4];
+-		state->fingers[finger2].y += (char)packet[5];
++		state->fingers[finger2].x += (s8)packet[4];
++		state->fingers[finger2].y += (s8)packet[5];
+ 	}
+ }
+ 
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index b7f87ad4b9a95..098115eb80841 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -183,10 +183,18 @@ static const unsigned long goodix_irq_flags[] = {
+ static const struct dmi_system_id nine_bytes_report[] = {
+ #if defined(CONFIG_DMI) && defined(CONFIG_X86)
+ 	{
+-		.ident = "Lenovo YogaBook",
+-		/* YB1-X91L/F and YB1-X90L/F */
++		/* Lenovo Yoga Book X90F / X90L */
+ 		.matches = {
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X9")
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
++		}
++	},
++	{
++		/* Lenovo Yoga Book X91F / X91L */
++		.matches = {
++			/* Non exact match to match F + L versions */
++			DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X91"),
+ 		}
+ 	},
+ #endif
+diff --git a/drivers/interconnect/qcom/osm-l3.c b/drivers/interconnect/qcom/osm-l3.c
+index 695f28789e98a..08a282d573203 100644
+--- a/drivers/interconnect/qcom/osm-l3.c
++++ b/drivers/interconnect/qcom/osm-l3.c
+@@ -258,7 +258,7 @@ static int qcom_osm_l3_probe(struct platform_device *pdev)
+ 	qnodes = desc->nodes;
+ 	num_nodes = desc->num_nodes;
+ 
+-	data = devm_kcalloc(&pdev->dev, num_nodes, sizeof(*node), GFP_KERNEL);
++	data = devm_kzalloc(&pdev->dev, struct_size(data, nodes, num_nodes), GFP_KERNEL);
+ 	if (!data)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 3d975db86434f..5d772f322a245 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -67,7 +67,9 @@ struct dm_crypt_io {
+ 	struct crypt_config *cc;
+ 	struct bio *base_bio;
+ 	u8 *integrity_metadata;
+-	bool integrity_metadata_from_pool;
++	bool integrity_metadata_from_pool:1;
++	bool in_tasklet:1;
++
+ 	struct work_struct work;
+ 	struct tasklet_struct tasklet;
+ 
+@@ -1722,6 +1724,7 @@ static void crypt_io_init(struct dm_crypt_io *io, struct crypt_config *cc,
+ 	io->ctx.r.req = NULL;
+ 	io->integrity_metadata = NULL;
+ 	io->integrity_metadata_from_pool = false;
++	io->in_tasklet = false;
+ 	atomic_set(&io->io_pending, 0);
+ }
+ 
+@@ -1767,14 +1770,13 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
+ 	 * our tasklet. In this case we need to delay bio_endio()
+ 	 * execution to after the tasklet is done and dequeued.
+ 	 */
+-	if (tasklet_trylock(&io->tasklet)) {
+-		tasklet_unlock(&io->tasklet);
+-		bio_endio(base_bio);
++	if (io->in_tasklet) {
++		INIT_WORK(&io->work, kcryptd_io_bio_endio);
++		queue_work(cc->io_queue, &io->work);
+ 		return;
+ 	}
+ 
+-	INIT_WORK(&io->work, kcryptd_io_bio_endio);
+-	queue_work(cc->io_queue, &io->work);
++	bio_endio(base_bio);
+ }
+ 
+ /*
+@@ -1934,6 +1936,7 @@ pop_from_list:
+ 			io = crypt_io_from_node(rb_first(&write_tree));
+ 			rb_erase(&io->rb_node, &write_tree);
+ 			kcryptd_io_write(io);
++			cond_resched();
+ 		} while (!RB_EMPTY_ROOT(&write_tree));
+ 		blk_finish_plug(&plug);
+ 	}
+@@ -2227,6 +2230,7 @@ static void kcryptd_queue_crypt(struct dm_crypt_io *io)
+ 		 * it is being executed with irqs disabled.
+ 		 */
+ 		if (in_irq() || irqs_disabled()) {
++			io->in_tasklet = true;
+ 			tasklet_init(&io->tasklet, kcryptd_crypt_tasklet, (unsigned long)&io->work);
+ 			tasklet_schedule(&io->tasklet);
+ 			return;
+diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c
+index 55443a6598fa6..4029281ca383c 100644
+--- a/drivers/md/dm-stats.c
++++ b/drivers/md/dm-stats.c
+@@ -188,7 +188,7 @@ static int dm_stat_in_flight(struct dm_stat_shared *shared)
+ 	       atomic_read(&shared->in_flight[WRITE]);
+ }
+ 
+-void dm_stats_init(struct dm_stats *stats)
++int dm_stats_init(struct dm_stats *stats)
+ {
+ 	int cpu;
+ 	struct dm_stats_last_position *last;
+@@ -196,11 +196,16 @@ void dm_stats_init(struct dm_stats *stats)
+ 	mutex_init(&stats->mutex);
+ 	INIT_LIST_HEAD(&stats->list);
+ 	stats->last = alloc_percpu(struct dm_stats_last_position);
++	if (!stats->last)
++		return -ENOMEM;
++
+ 	for_each_possible_cpu(cpu) {
+ 		last = per_cpu_ptr(stats->last, cpu);
+ 		last->last_sector = (sector_t)ULLONG_MAX;
+ 		last->last_rw = UINT_MAX;
+ 	}
++
++	return 0;
+ }
+ 
+ void dm_stats_cleanup(struct dm_stats *stats)
+diff --git a/drivers/md/dm-stats.h b/drivers/md/dm-stats.h
+index 2ddfae678f320..dcac11fce03bb 100644
+--- a/drivers/md/dm-stats.h
++++ b/drivers/md/dm-stats.h
+@@ -22,7 +22,7 @@ struct dm_stats_aux {
+ 	unsigned long long duration_ns;
+ };
+ 
+-void dm_stats_init(struct dm_stats *st);
++int dm_stats_init(struct dm_stats *st);
+ void dm_stats_cleanup(struct dm_stats *st);
+ 
+ struct mapped_device;
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index c890bb3e51852..93140743a9998 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -3383,6 +3383,7 @@ static int pool_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 	pt->low_water_blocks = low_water_blocks;
+ 	pt->adjusted_pf = pt->requested_pf = pf;
+ 	ti->num_flush_bios = 1;
++	ti->limit_swap_bios = true;
+ 
+ 	/*
+ 	 * Only need to enable discards if the pool should pass
+@@ -4259,6 +4260,7 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 		goto bad;
+ 
+ 	ti->num_flush_bios = 1;
++	ti->limit_swap_bios = true;
+ 	ti->flush_supported = true;
+ 	ti->per_io_data_size = sizeof(struct dm_thin_endio_hook);
+ 
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index c60febd14be14..9029c1004b933 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1910,7 +1910,9 @@ static struct mapped_device *alloc_dev(int minor)
+ 	if (!md->bdev)
+ 		goto bad;
+ 
+-	dm_stats_init(&md->stats);
++	r = dm_stats_init(&md->stats);
++	if (r < 0)
++		goto bad;
+ 
+ 	/* Populate the mapping, nobody knows we exist yet */
+ 	spin_lock(&_minor_lock);
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index c0b34637bd667..1553c2495841b 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -3207,6 +3207,9 @@ slot_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 		err = kstrtouint(buf, 10, (unsigned int *)&slot);
+ 		if (err < 0)
+ 			return err;
++		if (slot < 0)
++			/* overflow */
++			return -ENOSPC;
+ 	}
+ 	if (rdev->mddev->pers && slot == -1) {
+ 		/* Setting 'slot' on an active array requires also
+diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
+index 38f490088d764..dc631c5143187 100644
+--- a/drivers/mtd/nand/raw/meson_nand.c
++++ b/drivers/mtd/nand/raw/meson_nand.c
+@@ -172,6 +172,7 @@ struct meson_nfc {
+ 
+ 	dma_addr_t daddr;
+ 	dma_addr_t iaddr;
++	u32 info_bytes;
+ 
+ 	unsigned long assigned_cs;
+ };
+@@ -499,6 +500,7 @@ static int meson_nfc_dma_buffer_setup(struct nand_chip *nand, void *databuf,
+ 					 nfc->daddr, datalen, dir);
+ 			return ret;
+ 		}
++		nfc->info_bytes = infolen;
+ 		cmd = GENCMDIADDRL(NFC_CMD_AIL, nfc->iaddr);
+ 		writel(cmd, nfc->reg_base + NFC_REG_CMD);
+ 
+@@ -516,8 +518,10 @@ static void meson_nfc_dma_buffer_release(struct nand_chip *nand,
+ 	struct meson_nfc *nfc = nand_get_controller_data(nand);
+ 
+ 	dma_unmap_single(nfc->dev, nfc->daddr, datalen, dir);
+-	if (infolen)
++	if (infolen) {
+ 		dma_unmap_single(nfc->dev, nfc->iaddr, infolen, dir);
++		nfc->info_bytes = 0;
++	}
+ }
+ 
+ static int meson_nfc_read_buf(struct nand_chip *nand, u8 *buf, int len)
+@@ -706,6 +710,8 @@ static void meson_nfc_check_ecc_pages_valid(struct meson_nfc *nfc,
+ 		usleep_range(10, 15);
+ 		/* info is updated by nfc dma engine*/
+ 		smp_rmb();
++		dma_sync_single_for_cpu(nfc->dev, nfc->iaddr, nfc->info_bytes,
++					DMA_FROM_DEVICE);
+ 		ret = *info & ECC_COMPLETE;
+ 	} while (!ret);
+ }
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index a253476a52b01..0b104a90c0d80 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -2611,9 +2611,14 @@ static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
+ 	 * If this is the upstream port for this switch, enable
+ 	 * forwarding of unknown unicasts and multicasts.
+ 	 */
+-	reg = MV88E6XXX_PORT_CTL0_IGMP_MLD_SNOOP |
+-		MV88E6185_PORT_CTL0_USE_TAG | MV88E6185_PORT_CTL0_USE_IP |
++	reg = MV88E6185_PORT_CTL0_USE_TAG | MV88E6185_PORT_CTL0_USE_IP |
+ 		MV88E6XXX_PORT_CTL0_STATE_FORWARDING;
++	/* Forward any IPv4 IGMP or IPv6 MLD frames received
++	 * by a USER port to the CPU port to allow snooping.
++	 */
++	if (dsa_is_user_port(ds, port))
++		reg |= MV88E6XXX_PORT_CTL0_IGMP_MLD_SNOOP;
++
+ 	err = mv88e6xxx_port_write(chip, port, MV88E6XXX_PORT_CTL0, reg);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 6928c0b578abb..3a9fcf942a6de 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -219,12 +219,12 @@ static const struct pci_device_id bnxt_pci_tbl[] = {
+ 	{ PCI_VDEVICE(BROADCOM, 0x1750), .driver_data = BCM57508 },
+ 	{ PCI_VDEVICE(BROADCOM, 0x1751), .driver_data = BCM57504 },
+ 	{ PCI_VDEVICE(BROADCOM, 0x1752), .driver_data = BCM57502 },
+-	{ PCI_VDEVICE(BROADCOM, 0x1800), .driver_data = BCM57508_NPAR },
++	{ PCI_VDEVICE(BROADCOM, 0x1800), .driver_data = BCM57502_NPAR },
+ 	{ PCI_VDEVICE(BROADCOM, 0x1801), .driver_data = BCM57504_NPAR },
+-	{ PCI_VDEVICE(BROADCOM, 0x1802), .driver_data = BCM57502_NPAR },
+-	{ PCI_VDEVICE(BROADCOM, 0x1803), .driver_data = BCM57508_NPAR },
++	{ PCI_VDEVICE(BROADCOM, 0x1802), .driver_data = BCM57508_NPAR },
++	{ PCI_VDEVICE(BROADCOM, 0x1803), .driver_data = BCM57502_NPAR },
+ 	{ PCI_VDEVICE(BROADCOM, 0x1804), .driver_data = BCM57504_NPAR },
+-	{ PCI_VDEVICE(BROADCOM, 0x1805), .driver_data = BCM57502_NPAR },
++	{ PCI_VDEVICE(BROADCOM, 0x1805), .driver_data = BCM57508_NPAR },
+ 	{ PCI_VDEVICE(BROADCOM, 0xd802), .driver_data = BCM58802 },
+ 	{ PCI_VDEVICE(BROADCOM, 0xd804), .driver_data = BCM58804 },
+ #ifdef CONFIG_BNXT_SRIOV
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 34affd1de91da..b7b07beb17ffb 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1198,6 +1198,7 @@ struct bnxt_link_info {
+ #define BNXT_LINK_SPEED_40GB	PORT_PHY_QCFG_RESP_LINK_SPEED_40GB
+ #define BNXT_LINK_SPEED_50GB	PORT_PHY_QCFG_RESP_LINK_SPEED_50GB
+ #define BNXT_LINK_SPEED_100GB	PORT_PHY_QCFG_RESP_LINK_SPEED_100GB
++#define BNXT_LINK_SPEED_200GB	PORT_PHY_QCFG_RESP_LINK_SPEED_200GB
+ 	u16			support_speeds;
+ 	u16			support_pam4_speeds;
+ 	u16			auto_link_speeds;	/* fw adv setting */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 81b63d1c2391f..1e67e86fc3344 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -1653,6 +1653,8 @@ u32 bnxt_fw_to_ethtool_speed(u16 fw_link_speed)
+ 		return SPEED_50000;
+ 	case BNXT_LINK_SPEED_100GB:
+ 		return SPEED_100000;
++	case BNXT_LINK_SPEED_200GB:
++		return SPEED_200000;
+ 	default:
+ 		return SPEED_UNKNOWN;
+ 	}
+diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
+index c53a043139446..e0449cc24fbdb 100644
+--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
++++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
+@@ -510,7 +510,10 @@ static int gve_get_link_ksettings(struct net_device *netdev,
+ 				  struct ethtool_link_ksettings *cmd)
+ {
+ 	struct gve_priv *priv = netdev_priv(netdev);
+-	int err = gve_adminq_report_link_speed(priv);
++	int err = 0;
++
++	if (priv->link_speed == 0)
++		err = gve_adminq_report_link_speed(priv);
+ 
+ 	cmd->base.speed = priv->link_speed;
+ 	return err;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_diag.c b/drivers/net/ethernet/intel/i40e/i40e_diag.c
+index ef4d3762bf371..ca229b0efeb65 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_diag.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_diag.c
+@@ -44,7 +44,7 @@ static i40e_status i40e_diag_reg_pattern_test(struct i40e_hw *hw,
+ 	return 0;
+ }
+ 
+-struct i40e_diag_reg_test_info i40e_reg_list[] = {
++const struct i40e_diag_reg_test_info i40e_reg_list[] = {
+ 	/* offset               mask         elements   stride */
+ 	{I40E_QTX_CTL(0),       0x0000FFBF, 1,
+ 		I40E_QTX_CTL(1) - I40E_QTX_CTL(0)},
+@@ -78,27 +78,28 @@ i40e_status i40e_diag_reg_test(struct i40e_hw *hw)
+ {
+ 	i40e_status ret_code = 0;
+ 	u32 reg, mask;
++	u32 elements;
+ 	u32 i, j;
+ 
+ 	for (i = 0; i40e_reg_list[i].offset != 0 &&
+ 					     !ret_code; i++) {
+ 
++		elements = i40e_reg_list[i].elements;
+ 		/* set actual reg range for dynamically allocated resources */
+ 		if (i40e_reg_list[i].offset == I40E_QTX_CTL(0) &&
+ 		    hw->func_caps.num_tx_qp != 0)
+-			i40e_reg_list[i].elements = hw->func_caps.num_tx_qp;
++			elements = hw->func_caps.num_tx_qp;
+ 		if ((i40e_reg_list[i].offset == I40E_PFINT_ITRN(0, 0) ||
+ 		     i40e_reg_list[i].offset == I40E_PFINT_ITRN(1, 0) ||
+ 		     i40e_reg_list[i].offset == I40E_PFINT_ITRN(2, 0) ||
+ 		     i40e_reg_list[i].offset == I40E_QINT_TQCTL(0) ||
+ 		     i40e_reg_list[i].offset == I40E_QINT_RQCTL(0)) &&
+ 		    hw->func_caps.num_msix_vectors != 0)
+-			i40e_reg_list[i].elements =
+-				hw->func_caps.num_msix_vectors - 1;
++			elements = hw->func_caps.num_msix_vectors - 1;
+ 
+ 		/* test register access */
+ 		mask = i40e_reg_list[i].mask;
+-		for (j = 0; j < i40e_reg_list[i].elements && !ret_code; j++) {
++		for (j = 0; j < elements && !ret_code; j++) {
+ 			reg = i40e_reg_list[i].offset +
+ 			      (j * i40e_reg_list[i].stride);
+ 			ret_code = i40e_diag_reg_pattern_test(hw, reg, mask);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_diag.h b/drivers/net/ethernet/intel/i40e/i40e_diag.h
+index c3340f320a18c..1db7c6d572311 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_diag.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_diag.h
+@@ -20,7 +20,7 @@ struct i40e_diag_reg_test_info {
+ 	u32 stride;	/* bytes between each element */
+ };
+ 
+-extern struct i40e_diag_reg_test_info i40e_reg_list[];
++extern const struct i40e_diag_reg_test_info i40e_reg_list[];
+ 
+ i40e_status i40e_diag_reg_test(struct i40e_hw *hw);
+ i40e_status i40e_diag_eeprom_test(struct i40e_hw *hw);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_common.c b/drivers/net/ethernet/intel/iavf/iavf_common.c
+index 8547fc8fdfd60..78423ca401b24 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_common.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_common.c
+@@ -662,7 +662,7 @@ struct iavf_rx_ptype_decoded iavf_ptype_lookup[] = {
+ 	/* Non Tunneled IPv6 */
+ 	IAVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+ 	IAVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+-	IAVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY3),
++	IAVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
+ 	IAVF_PTT_UNUSED_ENTRY(91),
+ 	IAVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+ 	IAVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+index d481a922f0184..f411e683eb151 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+@@ -1061,7 +1061,7 @@ static inline void iavf_rx_hash(struct iavf_ring *ring,
+ 		cpu_to_le64((u64)IAVF_RX_DESC_FLTSTAT_RSS_HASH <<
+ 			    IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT);
+ 
+-	if (ring->netdev->features & NETIF_F_RXHASH)
++	if (!(ring->netdev->features & NETIF_F_RXHASH))
+ 		return;
+ 
+ 	if ((rx_desc->wb.qword1.status_error_len & rss_mask) == rss_mask) {
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 0ea8e4024d638..c5f465814dec3 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -3821,9 +3821,7 @@ static void igb_remove(struct pci_dev *pdev)
+ 	igb_release_hw_control(adapter);
+ 
+ #ifdef CONFIG_PCI_IOV
+-	rtnl_lock();
+ 	igb_disable_sriov(pdev);
+-	rtnl_unlock();
+ #endif
+ 
+ 	unregister_netdev(netdev);
+diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c
+index fe8c0a26b7201..037ec90ed56cb 100644
+--- a/drivers/net/ethernet/intel/igbvf/netdev.c
++++ b/drivers/net/ethernet/intel/igbvf/netdev.c
+@@ -1074,7 +1074,7 @@ static int igbvf_request_msix(struct igbvf_adapter *adapter)
+ 			  igbvf_intr_msix_rx, 0, adapter->rx_ring->name,
+ 			  netdev);
+ 	if (err)
+-		goto out;
++		goto free_irq_tx;
+ 
+ 	adapter->rx_ring->itr_register = E1000_EITR(vector);
+ 	adapter->rx_ring->itr_val = adapter->current_itr;
+@@ -1083,10 +1083,14 @@ static int igbvf_request_msix(struct igbvf_adapter *adapter)
+ 	err = request_irq(adapter->msix_entries[vector].vector,
+ 			  igbvf_msix_other, 0, netdev->name, netdev);
+ 	if (err)
+-		goto out;
++		goto free_irq_rx;
+ 
+ 	igbvf_configure_msix(adapter);
+ 	return 0;
++free_irq_rx:
++	free_irq(adapter->msix_entries[--vector].vector, netdev);
++free_irq_tx:
++	free_irq(adapter->msix_entries[--vector].vector, netdev);
+ out:
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/intel/igbvf/vf.c b/drivers/net/ethernet/intel/igbvf/vf.c
+index b8ba3f94c3632..a47a2e3e548cf 100644
+--- a/drivers/net/ethernet/intel/igbvf/vf.c
++++ b/drivers/net/ethernet/intel/igbvf/vf.c
+@@ -1,6 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright(c) 2009 - 2018 Intel Corporation. */
+ 
++#include <linux/etherdevice.h>
++
+ #include "vf.h"
+ 
+ static s32 e1000_check_for_link_vf(struct e1000_hw *hw);
+@@ -131,11 +133,16 @@ static s32 e1000_reset_hw_vf(struct e1000_hw *hw)
+ 		/* set our "perm_addr" based on info provided by PF */
+ 		ret_val = mbx->ops.read_posted(hw, msgbuf, 3);
+ 		if (!ret_val) {
+-			if (msgbuf[0] == (E1000_VF_RESET |
+-					  E1000_VT_MSGTYPE_ACK))
++			switch (msgbuf[0]) {
++			case E1000_VF_RESET | E1000_VT_MSGTYPE_ACK:
+ 				memcpy(hw->mac.perm_addr, addr, ETH_ALEN);
+-			else
++				break;
++			case E1000_VF_RESET | E1000_VT_MSGTYPE_NACK:
++				eth_zero_addr(hw->mac.perm_addr);
++				break;
++			default:
+ 				ret_val = -E1000_ERR_MAC_INIT;
++			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 1a0aae7b128d8..3aa0efb542aaf 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -4874,18 +4874,18 @@ static bool validate_schedule(struct igc_adapter *adapter,
+ 		if (e->command != TC_TAPRIO_CMD_SET_GATES)
+ 			return false;
+ 
+-		for (i = 0; i < adapter->num_tx_queues; i++) {
+-			if (e->gate_mask & BIT(i))
++		for (i = 0; i < adapter->num_tx_queues; i++)
++			if (e->gate_mask & BIT(i)) {
+ 				queue_uses[i]++;
+ 
+-			/* There are limitations: A single queue cannot be
+-			 * opened and closed multiple times per cycle unless the
+-			 * gate stays open. Check for it.
+-			 */
+-			if (queue_uses[i] > 1 &&
+-			    !(prev->gate_mask & BIT(i)))
+-				return false;
+-		}
++				/* There are limitations: A single queue cannot
++				 * be opened and closed multiple times per cycle
++				 * unless the gate stays open. Check for it.
++				 */
++				if (queue_uses[i] > 1 &&
++				    !(prev->gate_mask & BIT(i)))
++					return false;
++			}
+ 	}
+ 
+ 	return true;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+index 7c0ae7c38eefd..c25fb0cbde274 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+@@ -117,12 +117,14 @@ static int mlx5e_dcbnl_ieee_getets(struct net_device *netdev,
+ 	if (!MLX5_CAP_GEN(priv->mdev, ets))
+ 		return -EOPNOTSUPP;
+ 
+-	ets->ets_cap = mlx5_max_tc(priv->mdev) + 1;
+-	for (i = 0; i < ets->ets_cap; i++) {
++	for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ 		err = mlx5_query_port_prio_tc(mdev, i, &ets->prio_tc[i]);
+ 		if (err)
+ 			return err;
++	}
+ 
++	ets->ets_cap = mlx5_max_tc(priv->mdev) + 1;
++	for (i = 0; i < ets->ets_cap; i++) {
+ 		err = mlx5_query_port_tc_group(mdev, i, &tc_group[i]);
+ 		if (err)
+ 			return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
+index 548c005ea6335..90a10230bf0cd 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
+@@ -301,8 +301,7 @@ int mlx5_esw_acl_ingress_vport_bond_update(struct mlx5_eswitch *esw, u16 vport_n
+ 
+ 	if (WARN_ON_ONCE(IS_ERR(vport))) {
+ 		esw_warn(esw->dev, "vport(%d) invalid!\n", vport_num);
+-		err = PTR_ERR(vport);
+-		goto out;
++		return PTR_ERR(vport);
+ 	}
+ 
+ 	esw_acl_ingress_ofld_rules_destroy(esw, vport);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 78cc6f0bbc72b..3ae082c72a2b8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1339,6 +1339,7 @@ static void esw_disable_vport(struct mlx5_eswitch *esw, u16 vport_num)
+ 	 */
+ 	esw_vport_change_handle_locked(vport);
+ 	vport->enabled_events = 0;
++	esw_apply_vport_rx_mode(esw, vport, false, false);
+ 	esw_vport_cleanup(esw, vport);
+ 	esw->enabled_vports--;
+ 
+diff --git a/drivers/net/ethernet/natsemi/sonic.c b/drivers/net/ethernet/natsemi/sonic.c
+index d17d1b4f2585f..825356ee3492e 100644
+--- a/drivers/net/ethernet/natsemi/sonic.c
++++ b/drivers/net/ethernet/natsemi/sonic.c
+@@ -292,7 +292,7 @@ static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev)
+ 	 */
+ 
+ 	laddr = dma_map_single(lp->device, skb->data, length, DMA_TO_DEVICE);
+-	if (!laddr) {
++	if (dma_mapping_error(lp->device, laddr)) {
+ 		pr_err_ratelimited("%s: failed to map tx DMA buffer.\n", dev->name);
+ 		dev_kfree_skb_any(skb);
+ 		return NETDEV_TX_OK;
+@@ -509,7 +509,7 @@ static bool sonic_alloc_rb(struct net_device *dev, struct sonic_local *lp,
+ 
+ 	*new_addr = dma_map_single(lp->device, skb_put(*new_skb, SONIC_RBSIZE),
+ 				   SONIC_RBSIZE, DMA_FROM_DEVICE);
+-	if (!*new_addr) {
++	if (dma_mapping_error(lp->device, *new_addr)) {
+ 		dev_kfree_skb(*new_skb);
+ 		*new_skb = NULL;
+ 		return false;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+index 3541bc95493f0..b2a2beb84e54e 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+@@ -4378,6 +4378,9 @@ qed_iov_configure_min_tx_rate(struct qed_dev *cdev, int vfid, u32 rate)
+ 	}
+ 
+ 	vf = qed_iov_get_vf_info(QED_LEADING_HWFN(cdev), (u16)vfid, true);
++	if (!vf)
++		return -EINVAL;
++
+ 	vport_id = vf->vport_id;
+ 
+ 	return qed_configure_vport_wfq(cdev, vport_id, rate);
+@@ -5123,7 +5126,7 @@ static void qed_iov_handle_trust_change(struct qed_hwfn *hwfn)
+ 
+ 		/* Validate that the VF has a configured vport */
+ 		vf = qed_iov_get_vf_info(hwfn, i, true);
+-		if (!vf->vport_instance)
++		if (!vf || !vf->vport_instance)
+ 			continue;
+ 
+ 		memset(&params, 0, sizeof(params));
+diff --git a/drivers/net/ethernet/qualcomm/emac/emac.c b/drivers/net/ethernet/qualcomm/emac/emac.c
+index ad655f0a4965c..e1aa56be9cc0b 100644
+--- a/drivers/net/ethernet/qualcomm/emac/emac.c
++++ b/drivers/net/ethernet/qualcomm/emac/emac.c
+@@ -728,9 +728,15 @@ static int emac_remove(struct platform_device *pdev)
+ 	struct net_device *netdev = dev_get_drvdata(&pdev->dev);
+ 	struct emac_adapter *adpt = netdev_priv(netdev);
+ 
++	netif_carrier_off(netdev);
++	netif_tx_disable(netdev);
++
+ 	unregister_netdev(netdev);
+ 	netif_napi_del(&adpt->rx_q.napi);
+ 
++	free_irq(adpt->irq.irq, &adpt->irq);
++	cancel_work_sync(&adpt->work_thread);
++
+ 	emac_clks_teardown(adpt);
+ 
+ 	put_device(&adpt->phydev->mdio.dev);
+diff --git a/drivers/net/ethernet/realtek/r8169_phy_config.c b/drivers/net/ethernet/realtek/r8169_phy_config.c
+index 913d030d73eb4..e18a76f5049fd 100644
+--- a/drivers/net/ethernet/realtek/r8169_phy_config.c
++++ b/drivers/net/ethernet/realtek/r8169_phy_config.c
+@@ -970,6 +970,9 @@ static void rtl8168h_2_hw_phy_config(struct rtl8169_private *tp,
+ 	/* disable phy pfm mode */
+ 	phy_modify_paged(phydev, 0x0a44, 0x11, BIT(7), 0);
+ 
++	/* disable 10m pll off */
++	phy_modify_paged(phydev, 0x0a43, 0x10, BIT(0), 0);
++
+ 	rtl8168g_disable_aldps(phydev);
+ 	rtl8168g_config_eee_phy(phydev);
+ }
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index eb1be73020822..32654fe1f8b59 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -1304,7 +1304,8 @@ static void efx_ef10_fini_nic(struct efx_nic *efx)
+ static int efx_ef10_init_nic(struct efx_nic *efx)
+ {
+ 	struct efx_ef10_nic_data *nic_data = efx->nic_data;
+-	netdev_features_t hw_enc_features = 0;
++	struct net_device *net_dev = efx->net_dev;
++	netdev_features_t tun_feats, tso_feats;
+ 	int rc;
+ 
+ 	if (nic_data->must_check_datapath_caps) {
+@@ -1349,20 +1350,30 @@ static int efx_ef10_init_nic(struct efx_nic *efx)
+ 		nic_data->must_restore_piobufs = false;
+ 	}
+ 
+-	/* add encapsulated checksum offload features */
++	/* encap features might change during reset if fw variant changed */
+ 	if (efx_has_cap(efx, VXLAN_NVGRE) && !efx_ef10_is_vf(efx))
+-		hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
+-	/* add encapsulated TSO features */
+-	if (efx_has_cap(efx, TX_TSO_V2_ENCAP)) {
+-		netdev_features_t encap_tso_features;
++		net_dev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
++	else
++		net_dev->hw_enc_features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
+ 
+-		encap_tso_features = NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_GRE |
+-			NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_GRE_CSUM;
++	tun_feats = NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_GRE |
++		    NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_GRE_CSUM;
++	tso_feats = NETIF_F_TSO | NETIF_F_TSO6;
+ 
+-		hw_enc_features |= encap_tso_features | NETIF_F_TSO;
+-		efx->net_dev->features |= encap_tso_features;
++	if (efx_has_cap(efx, TX_TSO_V2_ENCAP)) {
++		/* If this is first nic_init, or if it is a reset and a new fw
++		 * variant has added new features, enable them by default.
++		 * If the features are not new, maintain their current value.
++		 */
++		if (!(net_dev->hw_features & tun_feats))
++			net_dev->features |= tun_feats;
++		net_dev->hw_enc_features |= tun_feats | tso_feats;
++		net_dev->hw_features |= tun_feats;
++	} else {
++		net_dev->hw_enc_features &= ~(tun_feats | tso_feats);
++		net_dev->hw_features &= ~tun_feats;
++		net_dev->features &= ~tun_feats;
+ 	}
+-	efx->net_dev->hw_enc_features = hw_enc_features;
+ 
+ 	/* don't fail init if RSS setup doesn't work */
+ 	rc = efx->type->rx_push_rss_config(efx, false,
+@@ -3977,7 +3988,10 @@ static unsigned int ef10_check_caps(const struct efx_nic *efx,
+ 	 NETIF_F_HW_VLAN_CTAG_FILTER |	\
+ 	 NETIF_F_IPV6_CSUM |		\
+ 	 NETIF_F_RXHASH |		\
+-	 NETIF_F_NTUPLE)
++	 NETIF_F_NTUPLE |		\
++	 NETIF_F_SG |			\
++	 NETIF_F_RXCSUM |		\
++	 NETIF_F_RXALL)
+ 
+ const struct efx_nic_type efx_hunt_a0_vf_nic_type = {
+ 	.is_vf = true,
+diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c
+index 29c8d2c990044..c069659c9e2d0 100644
+--- a/drivers/net/ethernet/sfc/efx.c
++++ b/drivers/net/ethernet/sfc/efx.c
+@@ -1045,21 +1045,18 @@ static int efx_pci_probe_post_io(struct efx_nic *efx)
+ 	}
+ 
+ 	/* Determine netdevice features */
+-	net_dev->features |= (efx->type->offload_features | NETIF_F_SG |
+-			      NETIF_F_TSO | NETIF_F_RXCSUM | NETIF_F_RXALL);
+-	if (efx->type->offload_features & (NETIF_F_IPV6_CSUM | NETIF_F_HW_CSUM)) {
+-		net_dev->features |= NETIF_F_TSO6;
+-		if (efx_has_cap(efx, TX_TSO_V2_ENCAP))
+-			net_dev->hw_enc_features |= NETIF_F_TSO6;
+-	}
+-	/* Check whether device supports TSO */
+-	if (!efx->type->tso_versions || !efx->type->tso_versions(efx))
+-		net_dev->features &= ~NETIF_F_ALL_TSO;
++	net_dev->features |= efx->type->offload_features;
++
++	/* Add TSO features */
++	if (efx->type->tso_versions && efx->type->tso_versions(efx))
++		net_dev->features |= NETIF_F_TSO | NETIF_F_TSO6;
++
+ 	/* Mask for features that also apply to VLAN devices */
+ 	net_dev->vlan_features |= (NETIF_F_HW_CSUM | NETIF_F_SG |
+ 				   NETIF_F_HIGHDMA | NETIF_F_ALL_TSO |
+ 				   NETIF_F_RXCSUM);
+ 
++	/* Determine user configurable features */
+ 	net_dev->hw_features |= net_dev->features & ~efx->fixed_features;
+ 
+ 	/* Disable receiving frames with bad FCS, by default. */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
+index df7de50497a0d..af43035239297 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/common.h
++++ b/drivers/net/ethernet/stmicro/stmmac/common.h
+@@ -480,7 +480,6 @@ struct mac_device_info {
+ 	unsigned int xlgmac;
+ 	unsigned int num_vlan;
+ 	u32 vlan_filter[32];
+-	unsigned int promisc;
+ 	bool vlan_fail_q_en;
+ 	u8 vlan_fail_q;
+ };
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index 5b052fdd2696e..cd11be005390b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -453,12 +453,6 @@ static int dwmac4_add_hw_vlan_rx_fltr(struct net_device *dev,
+ 	if (vid > 4095)
+ 		return -EINVAL;
+ 
+-	if (hw->promisc) {
+-		netdev_err(dev,
+-			   "Adding VLAN in promisc mode not supported\n");
+-		return -EPERM;
+-	}
+-
+ 	/* Single Rx VLAN Filter */
+ 	if (hw->num_vlan == 1) {
+ 		/* For single VLAN filter, VID 0 means VLAN promiscuous */
+@@ -508,12 +502,6 @@ static int dwmac4_del_hw_vlan_rx_fltr(struct net_device *dev,
+ {
+ 	int i, ret = 0;
+ 
+-	if (hw->promisc) {
+-		netdev_err(dev,
+-			   "Deleting VLAN in promisc mode not supported\n");
+-		return -EPERM;
+-	}
+-
+ 	/* Single Rx VLAN Filter */
+ 	if (hw->num_vlan == 1) {
+ 		if ((hw->vlan_filter[0] & GMAC_VLAN_TAG_VID) == vid) {
+@@ -538,39 +526,6 @@ static int dwmac4_del_hw_vlan_rx_fltr(struct net_device *dev,
+ 	return ret;
+ }
+ 
+-static void dwmac4_vlan_promisc_enable(struct net_device *dev,
+-				       struct mac_device_info *hw)
+-{
+-	void __iomem *ioaddr = hw->pcsr;
+-	u32 value;
+-	u32 hash;
+-	u32 val;
+-	int i;
+-
+-	/* Single Rx VLAN Filter */
+-	if (hw->num_vlan == 1) {
+-		dwmac4_write_single_vlan(dev, 0);
+-		return;
+-	}
+-
+-	/* Extended Rx VLAN Filter Enable */
+-	for (i = 0; i < hw->num_vlan; i++) {
+-		if (hw->vlan_filter[i] & GMAC_VLAN_TAG_DATA_VEN) {
+-			val = hw->vlan_filter[i] & ~GMAC_VLAN_TAG_DATA_VEN;
+-			dwmac4_write_vlan_filter(dev, hw, i, val);
+-		}
+-	}
+-
+-	hash = readl(ioaddr + GMAC_VLAN_HASH_TABLE);
+-	if (hash & GMAC_VLAN_VLHT) {
+-		value = readl(ioaddr + GMAC_VLAN_TAG);
+-		if (value & GMAC_VLAN_VTHM) {
+-			value &= ~GMAC_VLAN_VTHM;
+-			writel(value, ioaddr + GMAC_VLAN_TAG);
+-		}
+-	}
+-}
+-
+ static void dwmac4_restore_hw_vlan_rx_fltr(struct net_device *dev,
+ 					   struct mac_device_info *hw)
+ {
+@@ -690,22 +645,12 @@ static void dwmac4_set_filter(struct mac_device_info *hw,
+ 	}
+ 
+ 	/* VLAN filtering */
+-	if (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
++	if (dev->flags & IFF_PROMISC && !hw->vlan_fail_q_en)
++		value &= ~GMAC_PACKET_FILTER_VTFE;
++	else if (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
+ 		value |= GMAC_PACKET_FILTER_VTFE;
+ 
+ 	writel(value, ioaddr + GMAC_PACKET_FILTER);
+-
+-	if (dev->flags & IFF_PROMISC && !hw->vlan_fail_q_en) {
+-		if (!hw->promisc) {
+-			hw->promisc = 1;
+-			dwmac4_vlan_promisc_enable(dev, hw);
+-		}
+-	} else {
+-		if (hw->promisc) {
+-			hw->promisc = 0;
+-			dwmac4_restore_hw_vlan_rx_fltr(dev, hw);
+-		}
+-	}
+ }
+ 
+ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
+diff --git a/drivers/net/ethernet/toshiba/ps3_gelic_net.c b/drivers/net/ethernet/toshiba/ps3_gelic_net.c
+index d9a5722f561b5..524098a7b6585 100644
+--- a/drivers/net/ethernet/toshiba/ps3_gelic_net.c
++++ b/drivers/net/ethernet/toshiba/ps3_gelic_net.c
+@@ -317,15 +317,17 @@ static int gelic_card_init_chain(struct gelic_card *card,
+ 
+ 	/* set up the hardware pointers in each descriptor */
+ 	for (i = 0; i < no; i++, descr++) {
++		dma_addr_t cpu_addr;
++
+ 		gelic_descr_set_status(descr, GELIC_DESCR_DMA_NOT_IN_USE);
+-		descr->bus_addr =
+-			dma_map_single(ctodev(card), descr,
+-				       GELIC_DESCR_SIZE,
+-				       DMA_BIDIRECTIONAL);
+ 
+-		if (!descr->bus_addr)
++		cpu_addr = dma_map_single(ctodev(card), descr,
++					  GELIC_DESCR_SIZE, DMA_BIDIRECTIONAL);
++
++		if (dma_mapping_error(ctodev(card), cpu_addr))
+ 			goto iommu_error;
+ 
++		descr->bus_addr = cpu_to_be32(cpu_addr);
+ 		descr->next = descr + 1;
+ 		descr->prev = descr - 1;
+ 	}
+@@ -365,26 +367,28 @@ iommu_error:
+  *
+  * allocates a new rx skb, iommu-maps it and attaches it to the descriptor.
+  * Activate the descriptor state-wise
++ *
++ * Gelic RX sk_buffs must be aligned to GELIC_NET_RXBUF_ALIGN and the length
++ * must be a multiple of GELIC_NET_RXBUF_ALIGN.
+  */
+ static int gelic_descr_prepare_rx(struct gelic_card *card,
+ 				  struct gelic_descr *descr)
+ {
++	static const unsigned int rx_skb_size =
++		ALIGN(GELIC_NET_MAX_FRAME, GELIC_NET_RXBUF_ALIGN) +
++		GELIC_NET_RXBUF_ALIGN - 1;
++	dma_addr_t cpu_addr;
+ 	int offset;
+-	unsigned int bufsize;
+ 
+ 	if (gelic_descr_get_status(descr) !=  GELIC_DESCR_DMA_NOT_IN_USE)
+ 		dev_info(ctodev(card), "%s: ERROR status\n", __func__);
+-	/* we need to round up the buffer size to a multiple of 128 */
+-	bufsize = ALIGN(GELIC_NET_MAX_MTU, GELIC_NET_RXBUF_ALIGN);
+ 
+-	/* and we need to have it 128 byte aligned, therefore we allocate a
+-	 * bit more */
+-	descr->skb = dev_alloc_skb(bufsize + GELIC_NET_RXBUF_ALIGN - 1);
++	descr->skb = netdev_alloc_skb(*card->netdev, rx_skb_size);
+ 	if (!descr->skb) {
+ 		descr->buf_addr = 0; /* tell DMAC don't touch memory */
+ 		return -ENOMEM;
+ 	}
+-	descr->buf_size = cpu_to_be32(bufsize);
++	descr->buf_size = cpu_to_be32(rx_skb_size);
+ 	descr->dmac_cmd_status = 0;
+ 	descr->result_size = 0;
+ 	descr->valid_size = 0;
+@@ -395,11 +399,10 @@ static int gelic_descr_prepare_rx(struct gelic_card *card,
+ 	if (offset)
+ 		skb_reserve(descr->skb, GELIC_NET_RXBUF_ALIGN - offset);
+ 	/* io-mmu-map the skb */
+-	descr->buf_addr = cpu_to_be32(dma_map_single(ctodev(card),
+-						     descr->skb->data,
+-						     GELIC_NET_MAX_MTU,
+-						     DMA_FROM_DEVICE));
+-	if (!descr->buf_addr) {
++	cpu_addr = dma_map_single(ctodev(card), descr->skb->data,
++				  GELIC_NET_MAX_FRAME, DMA_FROM_DEVICE);
++	descr->buf_addr = cpu_to_be32(cpu_addr);
++	if (dma_mapping_error(ctodev(card), cpu_addr)) {
+ 		dev_kfree_skb_any(descr->skb);
+ 		descr->skb = NULL;
+ 		dev_info(ctodev(card),
+@@ -779,7 +782,7 @@ static int gelic_descr_prepare_tx(struct gelic_card *card,
+ 
+ 	buf = dma_map_single(ctodev(card), skb->data, skb->len, DMA_TO_DEVICE);
+ 
+-	if (!buf) {
++	if (dma_mapping_error(ctodev(card), buf)) {
+ 		dev_err(ctodev(card),
+ 			"dma map 2 failed (%p, %i). Dropping packet\n",
+ 			skb->data, skb->len);
+@@ -915,7 +918,7 @@ static void gelic_net_pass_skb_up(struct gelic_descr *descr,
+ 	data_error = be32_to_cpu(descr->data_error);
+ 	/* unmap skb buffer */
+ 	dma_unmap_single(ctodev(card), be32_to_cpu(descr->buf_addr),
+-			 GELIC_NET_MAX_MTU,
++			 GELIC_NET_MAX_FRAME,
+ 			 DMA_FROM_DEVICE);
+ 
+ 	skb_put(skb, be32_to_cpu(descr->valid_size)?
+diff --git a/drivers/net/ethernet/toshiba/ps3_gelic_net.h b/drivers/net/ethernet/toshiba/ps3_gelic_net.h
+index 68f324ed4eaf0..0d98defb011ed 100644
+--- a/drivers/net/ethernet/toshiba/ps3_gelic_net.h
++++ b/drivers/net/ethernet/toshiba/ps3_gelic_net.h
+@@ -19,8 +19,9 @@
+ #define GELIC_NET_RX_DESCRIPTORS        128 /* num of descriptors */
+ #define GELIC_NET_TX_DESCRIPTORS        128 /* num of descriptors */
+ 
+-#define GELIC_NET_MAX_MTU               VLAN_ETH_FRAME_LEN
+-#define GELIC_NET_MIN_MTU               VLAN_ETH_ZLEN
++#define GELIC_NET_MAX_FRAME             2312
++#define GELIC_NET_MAX_MTU               2294
++#define GELIC_NET_MIN_MTU               64
+ #define GELIC_NET_RXBUF_ALIGN           128
+ #define GELIC_CARD_RX_CSUM_DEFAULT      1 /* hw chksum */
+ #define GELIC_NET_WATCHDOG_TIMEOUT      5*HZ
+diff --git a/drivers/net/ethernet/xircom/xirc2ps_cs.c b/drivers/net/ethernet/xircom/xirc2ps_cs.c
+index 3e337142b5161..56cef59c1c872 100644
+--- a/drivers/net/ethernet/xircom/xirc2ps_cs.c
++++ b/drivers/net/ethernet/xircom/xirc2ps_cs.c
+@@ -503,6 +503,11 @@ static void
+ xirc2ps_detach(struct pcmcia_device *link)
+ {
+     struct net_device *dev = link->priv;
++    struct local_info *local = netdev_priv(dev);
++
++    netif_carrier_off(dev);
++    netif_tx_disable(dev);
++    cancel_work_sync(&local->tx_timeout_task);
+ 
+     dev_dbg(&link->dev, "detach\n");
+ 
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index 95ef3b6f98dd3..1c5d70c60354b 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -1945,10 +1945,9 @@ static int ca8210_skb_tx(
+ 	struct ca8210_priv  *priv
+ )
+ {
+-	int status;
+ 	struct ieee802154_hdr header = { };
+ 	struct secspec secspec;
+-	unsigned int mac_len;
++	int mac_len, status;
+ 
+ 	dev_dbg(&priv->spi->dev, "%s called\n", __func__);
+ 
+@@ -1956,6 +1955,8 @@ static int ca8210_skb_tx(
+ 	 * packet
+ 	 */
+ 	mac_len = ieee802154_hdr_peek_addrs(skb, &header);
++	if (mac_len < 0)
++		return mac_len;
+ 
+ 	secspec.security_level = header.sec.level;
+ 	secspec.key_id_mode = header.sec.key_id_mode;
+diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/gsi_trans.c
+index 70c2b585f98d6..1e0d626393012 100644
+--- a/drivers/net/ipa/gsi_trans.c
++++ b/drivers/net/ipa/gsi_trans.c
+@@ -159,7 +159,7 @@ int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
+ 	 * gsi_trans_pool_exit_dma() can assume the total allocated
+ 	 * size is exactly (count * size).
+ 	 */
+-	total_size = get_order(total_size) << PAGE_SHIFT;
++	total_size = PAGE_SIZE << get_order(total_size);
+ 
+ 	virt = dma_alloc_coherent(dev, total_size, &addr, GFP_KERNEL);
+ 	if (!virt)
+diff --git a/drivers/net/mdio/mdio-thunder.c b/drivers/net/mdio/mdio-thunder.c
+index 822d2cdd2f359..394b864aaa372 100644
+--- a/drivers/net/mdio/mdio-thunder.c
++++ b/drivers/net/mdio/mdio-thunder.c
+@@ -104,6 +104,7 @@ static int thunder_mdiobus_pci_probe(struct pci_dev *pdev,
+ 		if (i >= ARRAY_SIZE(nexus->buses))
+ 			break;
+ 	}
++	fwnode_handle_put(fwn);
+ 	return 0;
+ 
+ err_release_regions:
+diff --git a/drivers/net/mdio/of_mdio.c b/drivers/net/mdio/of_mdio.c
+index 5bae47f3da405..b254127cea50d 100644
+--- a/drivers/net/mdio/of_mdio.c
++++ b/drivers/net/mdio/of_mdio.c
+@@ -238,21 +238,23 @@ bool of_mdiobus_child_is_phy(struct device_node *child)
+ EXPORT_SYMBOL(of_mdiobus_child_is_phy);
+ 
+ /**
+- * of_mdiobus_register - Register mii_bus and create PHYs from the device tree
++ * __of_mdiobus_register - Register mii_bus and create PHYs from the device tree
+  * @mdio: pointer to mii_bus structure
+  * @np: pointer to device_node of MDIO bus.
++ * @owner: module owning the @mdio object.
+  *
+  * This function registers the mii_bus structure and registers a phy_device
+  * for each child node of @np.
+  */
+-int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
++int __of_mdiobus_register(struct mii_bus *mdio, struct device_node *np,
++			  struct module *owner)
+ {
+ 	struct device_node *child;
+ 	bool scanphys = false;
+ 	int addr, rc;
+ 
+ 	if (!np)
+-		return mdiobus_register(mdio);
++		return __mdiobus_register(mdio, owner);
+ 
+ 	/* Do not continue if the node is disabled */
+ 	if (!of_device_is_available(np))
+@@ -272,7 +274,7 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
+ 	of_property_read_u32(np, "reset-post-delay-us", &mdio->reset_post_delay_us);
+ 
+ 	/* Register the MDIO bus */
+-	rc = mdiobus_register(mdio);
++	rc = __mdiobus_register(mdio, owner);
+ 	if (rc)
+ 		return rc;
+ 
+@@ -336,7 +338,7 @@ unregister:
+ 	mdiobus_unregister(mdio);
+ 	return rc;
+ }
+-EXPORT_SYMBOL(of_mdiobus_register);
++EXPORT_SYMBOL(__of_mdiobus_register);
+ 
+ /**
+  * of_mdio_find_device - Given a device tree node, find the mdio_device
+diff --git a/drivers/net/net_failover.c b/drivers/net/net_failover.c
+index fb182bec8f062..6b7bba720d8c7 100644
+--- a/drivers/net/net_failover.c
++++ b/drivers/net/net_failover.c
+@@ -130,14 +130,10 @@ static u16 net_failover_select_queue(struct net_device *dev,
+ 			txq = ops->ndo_select_queue(primary_dev, skb, sb_dev);
+ 		else
+ 			txq = netdev_pick_tx(primary_dev, skb, NULL);
+-
+-		qdisc_skb_cb(skb)->slave_dev_queue_mapping = skb->queue_mapping;
+-
+-		return txq;
++	} else {
++		txq = skb_rx_queue_recorded(skb) ? skb_get_rx_queue(skb) : 0;
+ 	}
+ 
+-	txq = skb_rx_queue_recorded(skb) ? skb_get_rx_queue(skb) : 0;
+-
+ 	/* Save the original txq to restore before passing to the driver */
+ 	qdisc_skb_cb(skb)->slave_dev_queue_mapping = skb->queue_mapping;
+ 
+diff --git a/drivers/net/phy/dp83869.c b/drivers/net/phy/dp83869.c
+index a9daff88006b3..65b69ff35e403 100644
+--- a/drivers/net/phy/dp83869.c
++++ b/drivers/net/phy/dp83869.c
+@@ -553,15 +553,13 @@ static int dp83869_of_init(struct phy_device *phydev)
+ 						       &dp83869_internal_delay[0],
+ 						       delay_size, true);
+ 	if (dp83869->rx_int_delay < 0)
+-		dp83869->rx_int_delay =
+-				dp83869_internal_delay[DP83869_CLK_DELAY_DEF];
++		dp83869->rx_int_delay = DP83869_CLK_DELAY_DEF;
+ 
+ 	dp83869->tx_int_delay = phy_get_internal_delay(phydev, dev,
+ 						       &dp83869_internal_delay[0],
+ 						       delay_size, false);
+ 	if (dp83869->tx_int_delay < 0)
+-		dp83869->tx_int_delay =
+-				dp83869_internal_delay[DP83869_CLK_DELAY_DEF];
++		dp83869->tx_int_delay = DP83869_CLK_DELAY_DEF;
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/phy/mdio_devres.c b/drivers/net/phy/mdio_devres.c
+index b560e99695dfd..69b829e6ab35b 100644
+--- a/drivers/net/phy/mdio_devres.c
++++ b/drivers/net/phy/mdio_devres.c
+@@ -98,13 +98,14 @@ EXPORT_SYMBOL(__devm_mdiobus_register);
+ 
+ #if IS_ENABLED(CONFIG_OF_MDIO)
+ /**
+- * devm_of_mdiobus_register - Resource managed variant of of_mdiobus_register()
++ * __devm_of_mdiobus_register - Resource managed variant of of_mdiobus_register()
+  * @dev:	Device to register mii_bus for
+  * @mdio:	MII bus structure to register
+  * @np:		Device node to parse
++ * @owner:	Owning module
+  */
+-int devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
+-			     struct device_node *np)
++int __devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
++			       struct device_node *np, struct module *owner)
+ {
+ 	struct mdiobus_devres *dr;
+ 	int ret;
+@@ -117,7 +118,7 @@ int devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
+ 	if (!dr)
+ 		return -ENOMEM;
+ 
+-	ret = of_mdiobus_register(mdio, np);
++	ret = __of_mdiobus_register(mdio, np, owner);
+ 	if (ret) {
+ 		devres_free(dr);
+ 		return ret;
+@@ -127,7 +128,7 @@ int devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
+ 	devres_add(dev, dr);
+ 	return 0;
+ }
+-EXPORT_SYMBOL(devm_of_mdiobus_register);
++EXPORT_SYMBOL(__devm_of_mdiobus_register);
+ #endif /* CONFIG_OF_MDIO */
+ 
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 18e67eb6d8b4f..f3e606b6617e9 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -56,6 +56,18 @@ static const char *phy_state_to_str(enum phy_state st)
+ 	return NULL;
+ }
+ 
++static void phy_process_state_change(struct phy_device *phydev,
++				     enum phy_state old_state)
++{
++	if (old_state != phydev->state) {
++		phydev_dbg(phydev, "PHY state change %s -> %s\n",
++			   phy_state_to_str(old_state),
++			   phy_state_to_str(phydev->state));
++		if (phydev->drv && phydev->drv->link_change_notify)
++			phydev->drv->link_change_notify(phydev);
++	}
++}
++
+ static void phy_link_up(struct phy_device *phydev)
+ {
+ 	phydev->phy_link_change(phydev, true);
+@@ -1110,6 +1122,7 @@ EXPORT_SYMBOL(phy_free_interrupt);
+ void phy_stop(struct phy_device *phydev)
+ {
+ 	struct net_device *dev = phydev->attached_dev;
++	enum phy_state old_state;
+ 
+ 	if (!phy_is_started(phydev) && phydev->state != PHY_DOWN) {
+ 		WARN(1, "called from state %s\n",
+@@ -1118,6 +1131,7 @@ void phy_stop(struct phy_device *phydev)
+ 	}
+ 
+ 	mutex_lock(&phydev->lock);
++	old_state = phydev->state;
+ 
+ 	if (phydev->state == PHY_CABLETEST) {
+ 		phy_abort_cable_test(phydev);
+@@ -1128,6 +1142,7 @@ void phy_stop(struct phy_device *phydev)
+ 		sfp_upstream_stop(phydev->sfp_bus);
+ 
+ 	phydev->state = PHY_HALTED;
++	phy_process_state_change(phydev, old_state);
+ 
+ 	mutex_unlock(&phydev->lock);
+ 
+@@ -1242,13 +1257,7 @@ void phy_state_machine(struct work_struct *work)
+ 	if (err < 0)
+ 		phy_error(phydev);
+ 
+-	if (old_state != phydev->state) {
+-		phydev_dbg(phydev, "PHY state change %s -> %s\n",
+-			   phy_state_to_str(old_state),
+-			   phy_state_to_str(phydev->state));
+-		if (phydev->drv && phydev->drv->link_change_notify)
+-			phydev->drv->link_change_notify(phydev);
+-	}
++	phy_process_state_change(phydev, old_state);
+ 
+ 	/* Only re-schedule a PHY state machine change if we are polling the
+ 	 * PHY, if PHY_IGNORE_INTERRUPT is set, then we will be moving
+diff --git a/drivers/net/usb/cdc_mbim.c b/drivers/net/usb/cdc_mbim.c
+index 414341c9cf5ae..6ad1fb00a35cd 100644
+--- a/drivers/net/usb/cdc_mbim.c
++++ b/drivers/net/usb/cdc_mbim.c
+@@ -663,6 +663,11 @@ static const struct usb_device_id mbim_devs[] = {
+ 	  .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
+ 	},
+ 
++	/* Telit FE990 */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x1081, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
++	  .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
++	},
++
+ 	/* default entry */
+ 	{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+ 	  .driver_info = (unsigned long)&cdc_mbim_info_zlp,
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index bce151e3706a0..070910567c44e 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1297,6 +1297,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)},	/* Telit FN980 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)},	/* Telit LN920 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)},	/* Telit FN990 */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1080, 2)}, /* Telit FE990 */
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1100, 3)},	/* Telit ME910 */
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1101, 3)},	/* Telit ME910 dual modem */
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},	/* Telit LE920 */
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index e1cd4c2de2d30..975f52605867f 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -1824,6 +1824,12 @@ static int smsc95xx_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 		size = (u16)((header & RX_STS_FL_) >> 16);
+ 		align_count = (4 - ((size + NET_IP_ALIGN) % 4)) % 4;
+ 
++		if (unlikely(size > skb->len)) {
++			netif_dbg(dev, rx_err, dev->net,
++				  "size err header=0x%08x\n", header);
++			return 0;
++		}
++
+ 		if (unlikely(header & RX_STS_ES_)) {
+ 			netif_dbg(dev, rx_err, dev->net,
+ 				  "Error header=0x%08x\n", header);
+diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
+index 1ba9749692164..fe99439ad5fbc 100644
+--- a/drivers/net/xen-netback/common.h
++++ b/drivers/net/xen-netback/common.h
+@@ -166,7 +166,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
+ 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
+ 	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
+ 
+-	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
++	struct gnttab_copy tx_copy_ops[2 * MAX_PENDING_REQS];
+ 	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
+ 	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
+ 	/* passed to gnttab_[un]map_refs with pages under (un)mapping */
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index f9373a88cf37c..67614e7166ac8 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -334,6 +334,7 @@ static int xenvif_count_requests(struct xenvif_queue *queue,
+ struct xenvif_tx_cb {
+ 	u16 copy_pending_idx[XEN_NETBK_LEGACY_SLOTS_MAX + 1];
+ 	u8 copy_count;
++	u32 split_mask;
+ };
+ 
+ #define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb)
+@@ -361,6 +362,8 @@ static inline struct sk_buff *xenvif_alloc_skb(unsigned int size)
+ 	struct sk_buff *skb =
+ 		alloc_skb(size + NET_SKB_PAD + NET_IP_ALIGN,
+ 			  GFP_ATOMIC | __GFP_NOWARN);
++
++	BUILD_BUG_ON(sizeof(*XENVIF_TX_CB(skb)) > sizeof(skb->cb));
+ 	if (unlikely(skb == NULL))
+ 		return NULL;
+ 
+@@ -396,11 +399,13 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 	nr_slots = shinfo->nr_frags + 1;
+ 
+ 	copy_count(skb) = 0;
++	XENVIF_TX_CB(skb)->split_mask = 0;
+ 
+ 	/* Create copy ops for exactly data_len bytes into the skb head. */
+ 	__skb_put(skb, data_len);
+ 	while (data_len > 0) {
+ 		int amount = data_len > txp->size ? txp->size : data_len;
++		bool split = false;
+ 
+ 		cop->source.u.ref = txp->gref;
+ 		cop->source.domid = queue->vif->domid;
+@@ -413,6 +418,13 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 		cop->dest.u.gmfn = virt_to_gfn(skb->data + skb_headlen(skb)
+ 				               - data_len);
+ 
++		/* Don't cross local page boundary! */
++		if (cop->dest.offset + amount > XEN_PAGE_SIZE) {
++			amount = XEN_PAGE_SIZE - cop->dest.offset;
++			XENVIF_TX_CB(skb)->split_mask |= 1U << copy_count(skb);
++			split = true;
++		}
++
+ 		cop->len = amount;
+ 		cop->flags = GNTCOPY_source_gref;
+ 
+@@ -420,7 +432,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 		pending_idx = queue->pending_ring[index];
+ 		callback_param(queue, pending_idx).ctx = NULL;
+ 		copy_pending_idx(skb, copy_count(skb)) = pending_idx;
+-		copy_count(skb)++;
++		if (!split)
++			copy_count(skb)++;
+ 
+ 		cop++;
+ 		data_len -= amount;
+@@ -441,7 +454,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 			nr_slots--;
+ 		} else {
+ 			/* The copy op partially covered the tx_request.
+-			 * The remainder will be mapped.
++			 * The remainder will be mapped or copied in the next
++			 * iteration.
+ 			 */
+ 			txp->offset += amount;
+ 			txp->size -= amount;
+@@ -539,6 +553,13 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
+ 		pending_idx = copy_pending_idx(skb, i);
+ 
+ 		newerr = (*gopp_copy)->status;
++
++		/* Split copies need to be handled together. */
++		if (XENVIF_TX_CB(skb)->split_mask & (1U << i)) {
++			(*gopp_copy)++;
++			if (!newerr)
++				newerr = (*gopp_copy)->status;
++		}
+ 		if (likely(!newerr)) {
+ 			/* The first frag might still have this slot mapped */
+ 			if (i < copy_count(skb) - 1 || !sharedslot)
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 82b658a3c220a..7bfdf5ad77c45 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -764,32 +764,34 @@ static const struct pinconf_ops amd_pinconf_ops = {
+ 	.pin_config_group_set = amd_pinconf_group_set,
+ };
+ 
+-static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
++static void amd_gpio_irq_init_pin(struct amd_gpio *gpio_dev, int pin)
+ {
+-	struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
++	const struct pin_desc *pd;
+ 	unsigned long flags;
+ 	u32 pin_reg, mask;
+-	int i;
+ 
+ 	mask = BIT(WAKE_CNTRL_OFF_S0I3) | BIT(WAKE_CNTRL_OFF_S3) |
+ 		BIT(INTERRUPT_MASK_OFF) | BIT(INTERRUPT_ENABLE_OFF) |
+ 		BIT(WAKE_CNTRL_OFF_S4);
+ 
+-	for (i = 0; i < desc->npins; i++) {
+-		int pin = desc->pins[i].number;
+-		const struct pin_desc *pd = pin_desc_get(gpio_dev->pctrl, pin);
+-
+-		if (!pd)
+-			continue;
++	pd = pin_desc_get(gpio_dev->pctrl, pin);
++	if (!pd)
++		return;
+ 
+-		raw_spin_lock_irqsave(&gpio_dev->lock, flags);
++	raw_spin_lock_irqsave(&gpio_dev->lock, flags);
++	pin_reg = readl(gpio_dev->base + pin * 4);
++	pin_reg &= ~mask;
++	writel(pin_reg, gpio_dev->base + pin * 4);
++	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
++}
+ 
+-		pin_reg = readl(gpio_dev->base + i * 4);
+-		pin_reg &= ~mask;
+-		writel(pin_reg, gpio_dev->base + i * 4);
++static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
++{
++	struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
++	int i;
+ 
+-		raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+-	}
++	for (i = 0; i < desc->npins; i++)
++		amd_gpio_irq_init_pin(gpio_dev, i);
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+@@ -842,8 +844,10 @@ static int amd_gpio_resume(struct device *dev)
+ 	for (i = 0; i < desc->npins; i++) {
+ 		int pin = desc->pins[i].number;
+ 
+-		if (!amd_gpio_should_save(gpio_dev, pin))
++		if (!amd_gpio_should_save(gpio_dev, pin)) {
++			amd_gpio_irq_init_pin(gpio_dev, pin);
+ 			continue;
++		}
+ 
+ 		raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+ 		gpio_dev->saved_regs[i] |= readl(gpio_dev->base + pin * 4) & PIN_IRQ_PENDING;
+diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
+index d2e2b101978f8..315a6c4d9ade0 100644
+--- a/drivers/pinctrl/pinctrl-at91-pio4.c
++++ b/drivers/pinctrl/pinctrl-at91-pio4.c
+@@ -1139,7 +1139,6 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
+ 		dev_err(dev, "can't add the irq domain\n");
+ 		return -ENODEV;
+ 	}
+-	atmel_pioctrl->irq_domain->name = "atmel gpio";
+ 
+ 	for (i = 0; i < atmel_pioctrl->npins; i++) {
+ 		int irq = irq_create_mapping(atmel_pioctrl->irq_domain, i);
+diff --git a/drivers/pinctrl/pinctrl-ocelot.c b/drivers/pinctrl/pinctrl-ocelot.c
+index a4a1b00f7f0df..c42a5b0bc4f0c 100644
+--- a/drivers/pinctrl/pinctrl-ocelot.c
++++ b/drivers/pinctrl/pinctrl-ocelot.c
+@@ -575,7 +575,7 @@ static int ocelot_pinmux_set_mux(struct pinctrl_dev *pctldev,
+ 	regmap_update_bits(info->map, REG_ALT(0, info, pin->pin),
+ 			   BIT(p), f << p);
+ 	regmap_update_bits(info->map, REG_ALT(1, info, pin->pin),
+-			   BIT(p), f << (p - 1));
++			   BIT(p), (f >> 1) << p);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/platform/chrome/cros_ec_chardev.c b/drivers/platform/chrome/cros_ec_chardev.c
+index 0de7c255254e0..d6de5a2941282 100644
+--- a/drivers/platform/chrome/cros_ec_chardev.c
++++ b/drivers/platform/chrome/cros_ec_chardev.c
+@@ -284,7 +284,7 @@ static long cros_ec_chardev_ioctl_xcmd(struct cros_ec_dev *ec, void __user *arg)
+ 	    u_cmd.insize > EC_MAX_MSG_BYTES)
+ 		return -EINVAL;
+ 
+-	s_cmd = kmalloc(sizeof(*s_cmd) + max(u_cmd.outsize, u_cmd.insize),
++	s_cmd = kzalloc(sizeof(*s_cmd) + max(u_cmd.outsize, u_cmd.insize),
+ 			GFP_KERNEL);
+ 	if (!s_cmd)
+ 		return -ENOMEM;
+diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c
+index 8c3c378dce0d5..338dd82007e4e 100644
+--- a/drivers/power/supply/bq24190_charger.c
++++ b/drivers/power/supply/bq24190_charger.c
+@@ -448,11 +448,9 @@ static ssize_t bq24190_sysfs_show(struct device *dev,
+ 	if (!info)
+ 		return -EINVAL;
+ 
+-	ret = pm_runtime_get_sync(bdi->dev);
+-	if (ret < 0) {
+-		pm_runtime_put_noidle(bdi->dev);
++	ret = pm_runtime_resume_and_get(bdi->dev);
++	if (ret < 0)
+ 		return ret;
+-	}
+ 
+ 	ret = bq24190_read_mask(bdi, info->reg, info->mask, info->shift, &v);
+ 	if (ret)
+@@ -483,11 +481,9 @@ static ssize_t bq24190_sysfs_store(struct device *dev,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = pm_runtime_get_sync(bdi->dev);
+-	if (ret < 0) {
+-		pm_runtime_put_noidle(bdi->dev);
++	ret = pm_runtime_resume_and_get(bdi->dev);
++	if (ret < 0)
+ 		return ret;
+-	}
+ 
+ 	ret = bq24190_write_mask(bdi, info->reg, info->mask, info->shift, v);
+ 	if (ret)
+@@ -506,10 +502,9 @@ static int bq24190_set_charge_mode(struct regulator_dev *dev, u8 val)
+ 	struct bq24190_dev_info *bdi = rdev_get_drvdata(dev);
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(bdi->dev);
++	ret = pm_runtime_resume_and_get(bdi->dev);
+ 	if (ret < 0) {
+ 		dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", ret);
+-		pm_runtime_put_noidle(bdi->dev);
+ 		return ret;
+ 	}
+ 
+@@ -539,10 +534,9 @@ static int bq24190_vbus_is_enabled(struct regulator_dev *dev)
+ 	int ret;
+ 	u8 val;
+ 
+-	ret = pm_runtime_get_sync(bdi->dev);
++	ret = pm_runtime_resume_and_get(bdi->dev);
+ 	if (ret < 0) {
+ 		dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", ret);
+-		pm_runtime_put_noidle(bdi->dev);
+ 		return ret;
+ 	}
+ 
+@@ -1083,11 +1077,9 @@ static int bq24190_charger_get_property(struct power_supply *psy,
+ 
+ 	dev_dbg(bdi->dev, "prop: %d\n", psp);
+ 
+-	ret = pm_runtime_get_sync(bdi->dev);
+-	if (ret < 0) {
+-		pm_runtime_put_noidle(bdi->dev);
++	ret = pm_runtime_resume_and_get(bdi->dev);
++	if (ret < 0)
+ 		return ret;
+-	}
+ 
+ 	switch (psp) {
+ 	case POWER_SUPPLY_PROP_CHARGE_TYPE:
+@@ -1157,11 +1149,9 @@ static int bq24190_charger_set_property(struct power_supply *psy,
+ 
+ 	dev_dbg(bdi->dev, "prop: %d\n", psp);
+ 
+-	ret = pm_runtime_get_sync(bdi->dev);
+-	if (ret < 0) {
+-		pm_runtime_put_noidle(bdi->dev);
++	ret = pm_runtime_resume_and_get(bdi->dev);
++	if (ret < 0)
+ 		return ret;
+-	}
+ 
+ 	switch (psp) {
+ 	case POWER_SUPPLY_PROP_ONLINE:
+@@ -1420,11 +1410,9 @@ static int bq24190_battery_get_property(struct power_supply *psy,
+ 	dev_warn(bdi->dev, "warning: /sys/class/power_supply/bq24190-battery is deprecated\n");
+ 	dev_dbg(bdi->dev, "prop: %d\n", psp);
+ 
+-	ret = pm_runtime_get_sync(bdi->dev);
+-	if (ret < 0) {
+-		pm_runtime_put_noidle(bdi->dev);
++	ret = pm_runtime_resume_and_get(bdi->dev);
++	if (ret < 0)
+ 		return ret;
+-	}
+ 
+ 	switch (psp) {
+ 	case POWER_SUPPLY_PROP_STATUS:
+@@ -1468,11 +1456,9 @@ static int bq24190_battery_set_property(struct power_supply *psy,
+ 	dev_warn(bdi->dev, "warning: /sys/class/power_supply/bq24190-battery is deprecated\n");
+ 	dev_dbg(bdi->dev, "prop: %d\n", psp);
+ 
+-	ret = pm_runtime_get_sync(bdi->dev);
+-	if (ret < 0) {
+-		pm_runtime_put_noidle(bdi->dev);
++	ret = pm_runtime_resume_and_get(bdi->dev);
++	if (ret < 0)
+ 		return ret;
+-	}
+ 
+ 	switch (psp) {
+ 	case POWER_SUPPLY_PROP_ONLINE:
+@@ -1626,10 +1612,9 @@ static irqreturn_t bq24190_irq_handler_thread(int irq, void *data)
+ 	int error;
+ 
+ 	bdi->irq_event = true;
+-	error = pm_runtime_get_sync(bdi->dev);
++	error = pm_runtime_resume_and_get(bdi->dev);
+ 	if (error < 0) {
+ 		dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", error);
+-		pm_runtime_put_noidle(bdi->dev);
+ 		return IRQ_NONE;
+ 	}
+ 	bq24190_check_status(bdi);
+@@ -1849,11 +1834,10 @@ static int bq24190_remove(struct i2c_client *client)
+ 	struct bq24190_dev_info *bdi = i2c_get_clientdata(client);
+ 	int error;
+ 
+-	error = pm_runtime_get_sync(bdi->dev);
+-	if (error < 0) {
++	cancel_delayed_work_sync(&bdi->input_current_limit_work);
++	error = pm_runtime_resume_and_get(bdi->dev);
++	if (error < 0)
+ 		dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", error);
+-		pm_runtime_put_noidle(bdi->dev);
+-	}
+ 
+ 	bq24190_register_reset(bdi);
+ 	if (bdi->battery)
+@@ -1902,11 +1886,9 @@ static __maybe_unused int bq24190_pm_suspend(struct device *dev)
+ 	struct bq24190_dev_info *bdi = i2c_get_clientdata(client);
+ 	int error;
+ 
+-	error = pm_runtime_get_sync(bdi->dev);
+-	if (error < 0) {
++	error = pm_runtime_resume_and_get(bdi->dev);
++	if (error < 0)
+ 		dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", error);
+-		pm_runtime_put_noidle(bdi->dev);
+-	}
+ 
+ 	bq24190_register_reset(bdi);
+ 
+@@ -1927,11 +1909,9 @@ static __maybe_unused int bq24190_pm_resume(struct device *dev)
+ 	bdi->f_reg = 0;
+ 	bdi->ss_reg = BQ24190_REG_SS_VBUS_STAT_MASK; /* impossible state */
+ 
+-	error = pm_runtime_get_sync(bdi->dev);
+-	if (error < 0) {
++	error = pm_runtime_resume_and_get(bdi->dev);
++	if (error < 0)
+ 		dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", error);
+-		pm_runtime_put_noidle(bdi->dev);
+-	}
+ 
+ 	bq24190_register_reset(bdi);
+ 	bq24190_set_config(bdi);
+diff --git a/drivers/power/supply/da9150-charger.c b/drivers/power/supply/da9150-charger.c
+index f9314cc0cd75f..6b987da586556 100644
+--- a/drivers/power/supply/da9150-charger.c
++++ b/drivers/power/supply/da9150-charger.c
+@@ -662,6 +662,7 @@ static int da9150_charger_remove(struct platform_device *pdev)
+ 
+ 	if (!IS_ERR_OR_NULL(charger->usb_phy))
+ 		usb_unregister_notifier(charger->usb_phy, &charger->otg_nb);
++	cancel_work_sync(&charger->otg_work);
+ 
+ 	power_supply_unregister(charger->battery);
+ 	power_supply_unregister(charger->usb);
+diff --git a/drivers/ptp/ptp_qoriq.c b/drivers/ptp/ptp_qoriq.c
+index 08f4cf0ad9e3c..8fa9772acf79b 100644
+--- a/drivers/ptp/ptp_qoriq.c
++++ b/drivers/ptp/ptp_qoriq.c
+@@ -601,7 +601,7 @@ static int ptp_qoriq_probe(struct platform_device *dev)
+ 	return 0;
+ 
+ no_clock:
+-	iounmap(ptp_qoriq->base);
++	iounmap(base);
+ no_ioremap:
+ 	release_resource(ptp_qoriq->rsrc);
+ no_resource:
+diff --git a/drivers/regulator/fixed.c b/drivers/regulator/fixed.c
+index 3de7709bdcd4c..4acfff1908072 100644
+--- a/drivers/regulator/fixed.c
++++ b/drivers/regulator/fixed.c
+@@ -175,7 +175,7 @@ static int reg_fixed_voltage_probe(struct platform_device *pdev)
+ 		drvdata->enable_clock = devm_clk_get(dev, NULL);
+ 		if (IS_ERR(drvdata->enable_clock)) {
+ 			dev_err(dev, "Can't get enable-clock from devicetree\n");
+-			return -ENOENT;
++			return PTR_ERR(drvdata->enable_clock);
+ 		}
+ 	} else {
+ 		drvdata->desc.ops = &fixed_voltage_ops;
+diff --git a/drivers/s390/crypto/vfio_ap_drv.c b/drivers/s390/crypto/vfio_ap_drv.c
+index 7dc72cb718b0e..22128eb44f7fa 100644
+--- a/drivers/s390/crypto/vfio_ap_drv.c
++++ b/drivers/s390/crypto/vfio_ap_drv.c
+@@ -82,8 +82,9 @@ static void vfio_ap_queue_dev_remove(struct ap_device *apdev)
+ 
+ static void vfio_ap_matrix_dev_release(struct device *dev)
+ {
+-	struct ap_matrix_dev *matrix_dev = dev_get_drvdata(dev);
++	struct ap_matrix_dev *matrix_dev;
+ 
++	matrix_dev = container_of(dev, struct ap_matrix_dev, device);
+ 	kfree(matrix_dev);
+ }
+ 
+diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
+index fe8a5e5c0df84..bf0b3178f84d0 100644
+--- a/drivers/scsi/device_handler/scsi_dh_alua.c
++++ b/drivers/scsi/device_handler/scsi_dh_alua.c
+@@ -1036,10 +1036,12 @@ static int alua_activate(struct scsi_device *sdev,
+ 	rcu_read_unlock();
+ 	mutex_unlock(&h->init_mutex);
+ 
+-	if (alua_rtpg_queue(pg, sdev, qdata, true))
++	if (alua_rtpg_queue(pg, sdev, qdata, true)) {
+ 		fn = NULL;
+-	else
++	} else {
++		kfree(qdata);
+ 		err = SCSI_DH_DEV_OFFLINED;
++	}
+ 	kref_put(&pg->kref, release_port_group);
+ out:
+ 	if (fn)
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index cd41dc061d874..65971bd80186b 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2402,8 +2402,7 @@ static int interrupt_preinit_v3_hw(struct hisi_hba *hisi_hba)
+ 	hisi_hba->cq_nvecs = vectors - BASE_VECTORS_V3_HW;
+ 	shost->nr_hw_queues = hisi_hba->cq_nvecs;
+ 
+-	devm_add_action(&pdev->dev, hisi_sas_v3_free_vectors, pdev);
+-	return 0;
++	return devm_add_action(&pdev->dev, hisi_sas_v3_free_vectors, pdev);
+ }
+ 
+ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 755d68b981602..923ceaba0bf30 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -20816,20 +20816,20 @@ lpfc_get_io_buf_from_private_pool(struct lpfc_hba *phba,
+ static struct lpfc_io_buf *
+ lpfc_get_io_buf_from_expedite_pool(struct lpfc_hba *phba)
+ {
+-	struct lpfc_io_buf *lpfc_ncmd;
++	struct lpfc_io_buf *lpfc_ncmd = NULL, *iter;
+ 	struct lpfc_io_buf *lpfc_ncmd_next;
+ 	unsigned long iflag;
+ 	struct lpfc_epd_pool *epd_pool;
+ 
+ 	epd_pool = &phba->epd_pool;
+-	lpfc_ncmd = NULL;
+ 
+ 	spin_lock_irqsave(&epd_pool->lock, iflag);
+ 	if (epd_pool->count > 0) {
+-		list_for_each_entry_safe(lpfc_ncmd, lpfc_ncmd_next,
++		list_for_each_entry_safe(iter, lpfc_ncmd_next,
+ 					 &epd_pool->list, list) {
+-			list_del(&lpfc_ncmd->list);
++			list_del(&iter->list);
+ 			epd_pool->count--;
++			lpfc_ncmd = iter;
+ 			break;
+ 		}
+ 	}
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 7838c7911adde..8eb126d48462b 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -4656,7 +4656,7 @@ int megasas_task_abort_fusion(struct scsi_cmnd *scmd)
+ 	devhandle = megasas_get_tm_devhandle(scmd->device);
+ 
+ 	if (devhandle == (u16)ULONG_MAX) {
+-		ret = SUCCESS;
++		ret = FAILED;
+ 		sdev_printk(KERN_INFO, scmd->device,
+ 			"task abort issued for invalid devhandle\n");
+ 		mutex_unlock(&instance->reset_mutex);
+@@ -4726,7 +4726,7 @@ int megasas_reset_target_fusion(struct scsi_cmnd *scmd)
+ 	devhandle = megasas_get_tm_devhandle(scmd->device);
+ 
+ 	if (devhandle == (u16)ULONG_MAX) {
+-		ret = SUCCESS;
++		ret = FAILED;
+ 		sdev_printk(KERN_INFO, scmd->device,
+ 			"target reset issued for invalid devhandle\n");
+ 		mutex_unlock(&instance->reset_mutex);
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index e1132970f1892..38b8ff87ec0a7 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -1762,6 +1762,17 @@ __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res)
+ 	for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) {
+ 		sp = req->outstanding_cmds[cnt];
+ 		if (sp) {
++			/*
++			 * perform lockless completion during driver unload
++			 */
++			if (qla2x00_chip_is_down(vha)) {
++				req->outstanding_cmds[cnt] = NULL;
++				spin_unlock_irqrestore(qp->qp_lock_ptr, flags);
++				sp->done(sp, res);
++				spin_lock_irqsave(qp->qp_lock_ptr, flags);
++				continue;
++			}
++
+ 			switch (sp->cmd_type) {
+ 			case TYPE_SRB:
+ 				qla2x00_abort_srb(qp, sp, res, &flags);
+diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
+index 9a8f9f902f3b4..f5e121f0ee52a 100644
+--- a/drivers/scsi/scsi_devinfo.c
++++ b/drivers/scsi/scsi_devinfo.c
+@@ -232,6 +232,7 @@ static struct {
+ 	{"SGI", "RAID5", "*", BLIST_SPARSELUN},
+ 	{"SGI", "TP9100", "*", BLIST_REPORTLUN2},
+ 	{"SGI", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
++	{"SKhynix", "H28U74301AMR", NULL, BLIST_SKIP_VPD_PAGES},
+ 	{"IBM", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
+ 	{"SUN", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
+ 	{"DELL", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 3fa8a0c94bdc1..e38aebcabb26f 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1013,6 +1013,22 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb,
+ 				goto do_work;
+ 			}
+ 
++			/*
++			 * Check for "Operating parameters have changed"
++			 * due to Hyper-V changing the VHD/VHDX BlockSize
++			 * when adding/removing a differencing disk. This
++			 * causes discard_granularity to change, so do a
++			 * rescan to pick up the new granularity. We don't
++			 * want scsi_report_sense() to output a message
++			 * that a sysadmin wouldn't know what to do with.
++			 */
++			if ((asc == 0x3f) && (ascq != 0x03) &&
++					(ascq != 0x0e)) {
++				process_err_fn = storvsc_device_scan;
++				set_host_byte(scmnd, DID_REQUEUE);
++				goto do_work;
++			}
++
+ 			/*
+ 			 * Otherwise, let upper layer deal with the
+ 			 * error when sense message is present
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index ea6ceab1a1b25..f3389e9131794 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -9311,5 +9311,6 @@ EXPORT_SYMBOL_GPL(ufshcd_init);
+ MODULE_AUTHOR("Santosh Yaragnavi <santosh.sy@samsung.com>");
+ MODULE_AUTHOR("Vinayak Holikatti <h.vinayak@samsung.com>");
+ MODULE_DESCRIPTION("Generic UFS host controller driver Core");
++MODULE_SOFTDEP("pre: governor_simpleondemand");
+ MODULE_LICENSE("GPL");
+ MODULE_VERSION(UFSHCD_DRIVER_VERSION);
+diff --git a/drivers/target/iscsi/iscsi_target_parameters.c b/drivers/target/iscsi/iscsi_target_parameters.c
+index 7a461fbb15668..31cd3c02e5176 100644
+--- a/drivers/target/iscsi/iscsi_target_parameters.c
++++ b/drivers/target/iscsi/iscsi_target_parameters.c
+@@ -1262,18 +1262,20 @@ static struct iscsi_param *iscsi_check_key(
+ 		return param;
+ 
+ 	if (!(param->phase & phase)) {
+-		pr_err("Key \"%s\" may not be negotiated during ",
+-				param->name);
++		char *phase_name;
++
+ 		switch (phase) {
+ 		case PHASE_SECURITY:
+-			pr_debug("Security phase.\n");
++			phase_name = "Security";
+ 			break;
+ 		case PHASE_OPERATIONAL:
+-			pr_debug("Operational phase.\n");
++			phase_name = "Operational";
+ 			break;
+ 		default:
+-			pr_debug("Unknown phase.\n");
++			phase_name = "Unknown";
+ 		}
++		pr_err("Key \"%s\" may not be negotiated during %s phase.\n",
++				param->name, phase_name);
+ 		return NULL;
+ 	}
+ 
+diff --git a/drivers/tee/amdtee/core.c b/drivers/tee/amdtee/core.c
+index 297dc62bca298..372d64756ed64 100644
+--- a/drivers/tee/amdtee/core.c
++++ b/drivers/tee/amdtee/core.c
+@@ -267,35 +267,34 @@ int amdtee_open_session(struct tee_context *ctx,
+ 		goto out;
+ 	}
+ 
++	/* Open session with loaded TA */
++	handle_open_session(arg, &session_info, param);
++	if (arg->ret != TEEC_SUCCESS) {
++		pr_err("open_session failed %d\n", arg->ret);
++		handle_unload_ta(ta_handle);
++		kref_put(&sess->refcount, destroy_session);
++		goto out;
++	}
++
+ 	/* Find an empty session index for the given TA */
+ 	spin_lock(&sess->lock);
+ 	i = find_first_zero_bit(sess->sess_mask, TEE_NUM_SESSIONS);
+-	if (i < TEE_NUM_SESSIONS)
++	if (i < TEE_NUM_SESSIONS) {
++		sess->session_info[i] = session_info;
++		set_session_id(ta_handle, i, &arg->session);
+ 		set_bit(i, sess->sess_mask);
++	}
+ 	spin_unlock(&sess->lock);
+ 
+ 	if (i >= TEE_NUM_SESSIONS) {
+ 		pr_err("reached maximum session count %d\n", TEE_NUM_SESSIONS);
++		handle_close_session(ta_handle, session_info);
+ 		handle_unload_ta(ta_handle);
+ 		kref_put(&sess->refcount, destroy_session);
+ 		rc = -ENOMEM;
+ 		goto out;
+ 	}
+ 
+-	/* Open session with loaded TA */
+-	handle_open_session(arg, &session_info, param);
+-	if (arg->ret != TEEC_SUCCESS) {
+-		pr_err("open_session failed %d\n", arg->ret);
+-		spin_lock(&sess->lock);
+-		clear_bit(i, sess->sess_mask);
+-		spin_unlock(&sess->lock);
+-		handle_unload_ta(ta_handle);
+-		kref_put(&sess->refcount, destroy_session);
+-		goto out;
+-	}
+-
+-	sess->session_info[i] = session_info;
+-	set_session_id(ta_handle, i, &arg->session);
+ out:
+ 	free_pages((u64)ta, get_order(ta_size));
+ 	return rc;
+diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
+index db80dc5dfebae..fd1b59397c705 100644
+--- a/drivers/thunderbolt/nhi.c
++++ b/drivers/thunderbolt/nhi.c
+@@ -36,7 +36,7 @@
+ 
+ #define NHI_MAILBOX_TIMEOUT	500 /* ms */
+ 
+-static int ring_interrupt_index(struct tb_ring *ring)
++static int ring_interrupt_index(const struct tb_ring *ring)
+ {
+ 	int bit = ring->hop;
+ 	if (!ring->is_tx)
+diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
+index 0b3a77ade04d9..5b45c45e7c5bf 100644
+--- a/drivers/thunderbolt/usb4.c
++++ b/drivers/thunderbolt/usb4.c
+@@ -1636,18 +1636,30 @@ static int usb4_usb3_port_write_allocated_bandwidth(struct tb_port *port,
+ 						    int downstream_bw)
+ {
+ 	u32 val, ubw, dbw, scale;
+-	int ret;
++	int ret, max_bw;
+ 
+-	/* Read the used scale, hardware default is 0 */
+-	ret = tb_port_read(port, &scale, TB_CFG_PORT,
+-			   port->cap_adap + ADP_USB3_CS_3, 1);
++	/* Figure out suitable scale */
++	scale = 0;
++	max_bw = max(upstream_bw, downstream_bw);
++	while (scale < 64) {
++		if (mbps_to_usb3_bw(max_bw, scale) < 4096)
++			break;
++		scale++;
++	}
++
++	if (WARN_ON(scale >= 64))
++		return -EINVAL;
++
++	ret = tb_port_write(port, &scale, TB_CFG_PORT,
++			    port->cap_adap + ADP_USB3_CS_3, 1);
+ 	if (ret)
+ 		return ret;
+ 
+-	scale &= ADP_USB3_CS_3_SCALE_MASK;
+ 	ubw = mbps_to_usb3_bw(upstream_bw, scale);
+ 	dbw = mbps_to_usb3_bw(downstream_bw, scale);
+ 
++	tb_port_dbg(port, "scaled bandwidth %u/%u, scale %u\n", ubw, dbw, scale);
++
+ 	ret = tb_port_read(port, &val, TB_CFG_PORT,
+ 			   port->cap_adap + ADP_USB3_CS_2, 1);
+ 	if (ret)
+diff --git a/drivers/tty/serial/8250/Kconfig b/drivers/tty/serial/8250/Kconfig
+index 136f2b1460f91..b7922c8da1e61 100644
+--- a/drivers/tty/serial/8250/Kconfig
++++ b/drivers/tty/serial/8250/Kconfig
+@@ -254,7 +254,9 @@ config SERIAL_8250_ASPEED_VUART
+ 	tristate "Aspeed Virtual UART"
+ 	depends on SERIAL_8250
+ 	depends on OF
+-	depends on REGMAP && MFD_SYSCON
++	depends on MFD_SYSCON
++	depends on ARCH_ASPEED || COMPILE_TEST
++	select REGMAP
+ 	help
+ 	  If you want to use the virtual UART (VUART) device on Aspeed
+ 	  BMC platforms, enable this option. This enables the 16550A-
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 32cce52800a73..99f29bd930bd0 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1278,6 +1278,7 @@ static void lpuart_dma_rx_free(struct uart_port *port)
+ 	struct dma_chan *chan = sport->dma_rx_chan;
+ 
+ 	dmaengine_terminate_all(chan);
++	del_timer_sync(&sport->lpuart_timer);
+ 	dma_unmap_sg(chan->device->dev, &sport->rx_sgl, 1, DMA_FROM_DEVICE);
+ 	kfree(sport->rx_ring.buf);
+ 	sport->rx_ring.tail = 0;
+@@ -1743,7 +1744,6 @@ static int lpuart32_startup(struct uart_port *port)
+ static void lpuart_dma_shutdown(struct lpuart_port *sport)
+ {
+ 	if (sport->lpuart_dma_rx_use) {
+-		del_timer_sync(&sport->lpuart_timer);
+ 		lpuart_dma_rx_free(&sport->port);
+ 		sport->lpuart_dma_rx_use = false;
+ 	}
+@@ -1894,10 +1894,8 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	 * Since timer function acqures sport->port.lock, need to stop before
+ 	 * acquring same lock because otherwise del_timer_sync() can deadlock.
+ 	 */
+-	if (old && sport->lpuart_dma_rx_use) {
+-		del_timer_sync(&sport->lpuart_timer);
++	if (old && sport->lpuart_dma_rx_use)
+ 		lpuart_dma_rx_free(&sport->port);
+-	}
+ 
+ 	spin_lock_irqsave(&sport->port.lock, flags);
+ 
+@@ -2129,10 +2127,8 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	 * Since timer function acqures sport->port.lock, need to stop before
+ 	 * acquring same lock because otherwise del_timer_sync() can deadlock.
+ 	 */
+-	if (old && sport->lpuart_dma_rx_use) {
+-		del_timer_sync(&sport->lpuart_timer);
++	if (old && sport->lpuart_dma_rx_use)
+ 		lpuart_dma_rx_free(&sport->port);
+-	}
+ 
+ 	spin_lock_irqsave(&sport->port.lock, flags);
+ 
+@@ -2766,11 +2762,10 @@ static int __maybe_unused lpuart_suspend(struct device *dev)
+ 		 * EDMA driver during suspend will forcefully release any
+ 		 * non-idle DMA channels. If port wakeup is enabled or if port
+ 		 * is console port or 'no_console_suspend' is set the Rx DMA
+-		 * cannot resume as as expected, hence gracefully release the
++		 * cannot resume as expected, hence gracefully release the
+ 		 * Rx DMA path before suspend and start Rx DMA path on resume.
+ 		 */
+ 		if (irq_wake) {
+-			del_timer_sync(&sport->lpuart_timer);
+ 			lpuart_dma_rx_free(&sport->port);
+ 		}
+ 
+diff --git a/drivers/usb/cdns3/cdns3-pci-wrap.c b/drivers/usb/cdns3/cdns3-pci-wrap.c
+index deeea618ba33b..1f6320d98a76b 100644
+--- a/drivers/usb/cdns3/cdns3-pci-wrap.c
++++ b/drivers/usb/cdns3/cdns3-pci-wrap.c
+@@ -60,6 +60,11 @@ static struct pci_dev *cdns3_get_second_fun(struct pci_dev *pdev)
+ 			return NULL;
+ 	}
+ 
++	if (func->devfn != PCI_DEV_FN_HOST_DEVICE &&
++	    func->devfn != PCI_DEV_FN_OTG) {
++		return NULL;
++	}
++
+ 	return func;
+ }
+ 
+diff --git a/drivers/usb/chipidea/ci.h b/drivers/usb/chipidea/ci.h
+index 0697eb980e5fa..7b00b93dad9b8 100644
+--- a/drivers/usb/chipidea/ci.h
++++ b/drivers/usb/chipidea/ci.h
+@@ -204,6 +204,7 @@ struct hw_bank {
+  * @in_lpm: if the core in low power mode
+  * @wakeup_int: if wakeup interrupt occur
+  * @rev: The revision number for controller
++ * @mutex: protect code from concorrent running when doing role switch
+  */
+ struct ci_hdrc {
+ 	struct device			*dev;
+@@ -257,6 +258,7 @@ struct ci_hdrc {
+ 	bool				in_lpm;
+ 	bool				wakeup_int;
+ 	enum ci_revision		rev;
++	struct mutex                    mutex;
+ };
+ 
+ static inline struct ci_role_driver *ci_role(struct ci_hdrc *ci)
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 127b1a62b1bf4..f26dd1f054f21 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -966,9 +966,16 @@ static ssize_t role_store(struct device *dev,
+ 			     strlen(ci->roles[role]->name)))
+ 			break;
+ 
+-	if (role == CI_ROLE_END || role == ci->role)
++	if (role == CI_ROLE_END)
+ 		return -EINVAL;
+ 
++	mutex_lock(&ci->mutex);
++
++	if (role == ci->role) {
++		mutex_unlock(&ci->mutex);
++		return n;
++	}
++
+ 	pm_runtime_get_sync(dev);
+ 	disable_irq(ci->irq);
+ 	ci_role_stop(ci);
+@@ -977,6 +984,7 @@ static ssize_t role_store(struct device *dev,
+ 		ci_handle_vbus_change(ci);
+ 	enable_irq(ci->irq);
+ 	pm_runtime_put_sync(dev);
++	mutex_unlock(&ci->mutex);
+ 
+ 	return (ret == 0) ? n : ret;
+ }
+@@ -1012,6 +1020,7 @@ static int ci_hdrc_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	spin_lock_init(&ci->lock);
++	mutex_init(&ci->mutex);
+ 	ci->dev = dev;
+ 	ci->platdata = dev_get_platdata(dev);
+ 	ci->imx28_write_fix = !!(ci->platdata->flags &
+diff --git a/drivers/usb/chipidea/otg.c b/drivers/usb/chipidea/otg.c
+index d3aada3ce7ec2..9a12868ea9b64 100644
+--- a/drivers/usb/chipidea/otg.c
++++ b/drivers/usb/chipidea/otg.c
+@@ -166,8 +166,10 @@ static int hw_wait_vbus_lower_bsv(struct ci_hdrc *ci)
+ 
+ static void ci_handle_id_switch(struct ci_hdrc *ci)
+ {
+-	enum ci_role role = ci_otg_role(ci);
++	enum ci_role role;
+ 
++	mutex_lock(&ci->mutex);
++	role = ci_otg_role(ci);
+ 	if (role != ci->role) {
+ 		dev_dbg(ci->dev, "switching from %s to %s\n",
+ 			ci_role(ci)->name, ci->roles[role]->name);
+@@ -197,6 +199,7 @@ static void ci_handle_id_switch(struct ci_hdrc *ci)
+ 		if (role == CI_ROLE_GADGET)
+ 			ci_handle_vbus_change(ci);
+ 	}
++	mutex_unlock(&ci->mutex);
+ }
+ /**
+  * ci_otg_work - perform otg (vbus/id) event handle
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index 8851db646ef53..9d0dd09a20151 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -121,13 +121,6 @@ static int dwc2_get_dr_mode(struct dwc2_hsotg *hsotg)
+ 	return 0;
+ }
+ 
+-static void __dwc2_disable_regulators(void *data)
+-{
+-	struct dwc2_hsotg *hsotg = data;
+-
+-	regulator_bulk_disable(ARRAY_SIZE(hsotg->supplies), hsotg->supplies);
+-}
+-
+ static int __dwc2_lowlevel_hw_enable(struct dwc2_hsotg *hsotg)
+ {
+ 	struct platform_device *pdev = to_platform_device(hsotg->dev);
+@@ -138,11 +131,6 @@ static int __dwc2_lowlevel_hw_enable(struct dwc2_hsotg *hsotg)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = devm_add_action_or_reset(&pdev->dev,
+-				       __dwc2_disable_regulators, hsotg);
+-	if (ret)
+-		return ret;
+-
+ 	if (hsotg->clk) {
+ 		ret = clk_prepare_enable(hsotg->clk);
+ 		if (ret)
+@@ -198,7 +186,7 @@ static int __dwc2_lowlevel_hw_disable(struct dwc2_hsotg *hsotg)
+ 	if (hsotg->clk)
+ 		clk_disable_unprepare(hsotg->clk);
+ 
+-	return 0;
++	return regulator_bulk_disable(ARRAY_SIZE(hsotg->supplies), hsotg->supplies);
+ }
+ 
+ /**
+@@ -625,7 +613,7 @@ error_init:
+ 	if (hsotg->params.activate_stm_id_vb_detection)
+ 		regulator_disable(hsotg->usb33d);
+ error:
+-	if (hsotg->dr_mode != USB_DR_MODE_PERIPHERAL)
++	if (hsotg->ll_hw_enabled)
+ 		dwc2_lowlevel_hw_disable(hsotg);
+ 	return retval;
+ }
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 28a1194f849fc..01cecde76140b 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1440,6 +1440,44 @@ static int __dwc3_gadget_get_frame(struct dwc3 *dwc)
+ 	return DWC3_DSTS_SOFFN(reg);
+ }
+ 
++/**
++ * __dwc3_stop_active_transfer - stop the current active transfer
++ * @dep: isoc endpoint
++ * @force: set forcerm bit in the command
++ * @interrupt: command complete interrupt after End Transfer command
++ *
++ * When setting force, the ForceRM bit will be set. In that case
++ * the controller won't update the TRB progress on command
++ * completion. It also won't clear the HWO bit in the TRB.
++ * The command will also not complete immediately in that case.
++ */
++static int __dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force, bool interrupt)
++{
++	struct dwc3 *dwc = dep->dwc;
++	struct dwc3_gadget_ep_cmd_params params;
++	u32 cmd;
++	int ret;
++
++	cmd = DWC3_DEPCMD_ENDTRANSFER;
++	cmd |= force ? DWC3_DEPCMD_HIPRI_FORCERM : 0;
++	cmd |= interrupt ? DWC3_DEPCMD_CMDIOC : 0;
++	cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
++	memset(&params, 0, sizeof(params));
++	ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params);
++	WARN_ON_ONCE(ret);
++	dep->resource_index = 0;
++
++	if (!interrupt) {
++		if (!DWC3_IP_IS(DWC3) || DWC3_VER_IS_PRIOR(DWC3, 310A))
++			mdelay(1);
++		dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
++	} else if (!ret) {
++		dep->flags |= DWC3_EP_END_TRANSFER_PENDING;
++	}
++
++	return ret;
++}
++
+ /**
+  * dwc3_gadget_start_isoc_quirk - workaround invalid frame number
+  * @dep: isoc endpoint
+@@ -1609,21 +1647,8 @@ static int __dwc3_gadget_start_isoc(struct dwc3_ep *dep)
+ 	 * status, issue END_TRANSFER command and retry on the next XferNotReady
+ 	 * event.
+ 	 */
+-	if (ret == -EAGAIN) {
+-		struct dwc3_gadget_ep_cmd_params params;
+-		u32 cmd;
+-
+-		cmd = DWC3_DEPCMD_ENDTRANSFER |
+-			DWC3_DEPCMD_CMDIOC |
+-			DWC3_DEPCMD_PARAM(dep->resource_index);
+-
+-		dep->resource_index = 0;
+-		memset(&params, 0, sizeof(params));
+-
+-		ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params);
+-		if (!ret)
+-			dep->flags |= DWC3_EP_END_TRANSFER_PENDING;
+-	}
++	if (ret == -EAGAIN)
++		ret = __dwc3_stop_active_transfer(dep, false, true);
+ 
+ 	return ret;
+ }
+@@ -3250,10 +3275,6 @@ static void dwc3_reset_gadget(struct dwc3 *dwc)
+ static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force,
+ 	bool interrupt)
+ {
+-	struct dwc3_gadget_ep_cmd_params params;
+-	u32 cmd;
+-	int ret;
+-
+ 	if (!(dep->flags & DWC3_EP_TRANSFER_STARTED) ||
+ 	    (dep->flags & DWC3_EP_END_TRANSFER_PENDING))
+ 		return;
+@@ -3282,22 +3303,14 @@ static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force,
+ 	 * enabled, the EndTransfer command will have completed upon
+ 	 * returning from this function.
+ 	 *
+-	 * This mode is NOT available on the DWC_usb31 IP.
++	 * This mode is NOT available on the DWC_usb31 IP.  In this
++	 * case, if the IOC bit is not set, then delay by 1ms
++	 * after issuing the EndTransfer command.  This allows for the
++	 * controller to handle the command completely before DWC3
++	 * remove requests attempts to unmap USB request buffers.
+ 	 */
+ 
+-	cmd = DWC3_DEPCMD_ENDTRANSFER;
+-	cmd |= force ? DWC3_DEPCMD_HIPRI_FORCERM : 0;
+-	cmd |= interrupt ? DWC3_DEPCMD_CMDIOC : 0;
+-	cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
+-	memset(&params, 0, sizeof(params));
+-	ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params);
+-	WARN_ON_ONCE(ret);
+-	dep->resource_index = 0;
+-
+-	if (!interrupt)
+-		dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
+-	else
+-		dep->flags |= DWC3_EP_END_TRANSFER_PENDING;
++	__dwc3_stop_active_transfer(dep, force, interrupt);
+ }
+ 
+ static void dwc3_clear_stall_all_ep(struct dwc3 *dwc)
+diff --git a/drivers/usb/gadget/function/u_audio.c b/drivers/usb/gadget/function/u_audio.c
+index 95605b1ef4eb4..6c8b8f5b7e0f5 100644
+--- a/drivers/usb/gadget/function/u_audio.c
++++ b/drivers/usb/gadget/function/u_audio.c
+@@ -613,7 +613,7 @@ void g_audio_cleanup(struct g_audio *g_audio)
+ 	uac = g_audio->uac;
+ 	card = uac->card;
+ 	if (card)
+-		snd_card_free(card);
++		snd_card_free_when_closed(card);
+ 
+ 	kfree(uac->p_prm.ureq);
+ 	kfree(uac->c_prm.ureq);
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index c7b763d6d1023..1f8c9b16a0fb8 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -111,6 +111,13 @@ UNUSUAL_DEV(0x152d, 0x0578, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_BROKEN_FUA),
+ 
++/* Reported by: Yaroslav Furman <yaro330@gmail.com> */
++UNUSUAL_DEV(0x152d, 0x0583, 0x0000, 0x9999,
++		"JMicron",
++		"JMS583Gen 2",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_NO_REPORT_OPCODES),
++
+ /* Reported-by: Thinh Nguyen <thinhn@synopsys.com> */
+ UNUSUAL_DEV(0x154b, 0xf00b, 0x0000, 0x9999,
+ 		"PNY",
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 4cd5c291cdf38..cd3689005c310 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -1152,7 +1152,7 @@ out_unlock:
+ static int ucsi_init(struct ucsi *ucsi)
+ {
+ 	struct ucsi_connector *con;
+-	u64 command;
++	u64 command, ntfy;
+ 	int ret;
+ 	int i;
+ 
+@@ -1164,8 +1164,8 @@ static int ucsi_init(struct ucsi *ucsi)
+ 	}
+ 
+ 	/* Enable basic notifications */
+-	ucsi->ntfy = UCSI_ENABLE_NTFY_CMD_COMPLETE | UCSI_ENABLE_NTFY_ERROR;
+-	command = UCSI_SET_NOTIFICATION_ENABLE | ucsi->ntfy;
++	ntfy = UCSI_ENABLE_NTFY_CMD_COMPLETE | UCSI_ENABLE_NTFY_ERROR;
++	command = UCSI_SET_NOTIFICATION_ENABLE | ntfy;
+ 	ret = ucsi_send_command(ucsi, command, NULL, 0);
+ 	if (ret < 0)
+ 		goto err_reset;
+@@ -1197,12 +1197,13 @@ static int ucsi_init(struct ucsi *ucsi)
+ 	}
+ 
+ 	/* Enable all notifications */
+-	ucsi->ntfy = UCSI_ENABLE_NTFY_ALL;
+-	command = UCSI_SET_NOTIFICATION_ENABLE | ucsi->ntfy;
++	ntfy = UCSI_ENABLE_NTFY_ALL;
++	command = UCSI_SET_NOTIFICATION_ENABLE | ntfy;
+ 	ret = ucsi_send_command(ucsi, command, NULL, 0);
+ 	if (ret < 0)
+ 		goto err_unregister;
+ 
++	ucsi->ntfy = ntfy;
+ 	return 0;
+ 
+ err_unregister:
+diff --git a/drivers/video/fbdev/au1200fb.c b/drivers/video/fbdev/au1200fb.c
+index c00e01a173685..a8a0a448cdb5e 100644
+--- a/drivers/video/fbdev/au1200fb.c
++++ b/drivers/video/fbdev/au1200fb.c
+@@ -1040,6 +1040,9 @@ static int au1200fb_fb_check_var(struct fb_var_screeninfo *var,
+ 	u32 pixclock;
+ 	int screen_size, plane;
+ 
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	plane = fbdev->plane;
+ 
+ 	/* Make sure that the mode respect all LCD controller and
+diff --git a/drivers/video/fbdev/geode/lxfb_core.c b/drivers/video/fbdev/geode/lxfb_core.c
+index 66c81262d18f8..6c6b6efb49f69 100644
+--- a/drivers/video/fbdev/geode/lxfb_core.c
++++ b/drivers/video/fbdev/geode/lxfb_core.c
+@@ -234,6 +234,9 @@ static void get_modedb(struct fb_videomode **modedb, unsigned int *size)
+ 
+ static int lxfb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ {
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	if (var->xres > 1920 || var->yres > 1440)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/video/fbdev/intelfb/intelfbdrv.c b/drivers/video/fbdev/intelfb/intelfbdrv.c
+index a9579964eaba8..8a703adfa9360 100644
+--- a/drivers/video/fbdev/intelfb/intelfbdrv.c
++++ b/drivers/video/fbdev/intelfb/intelfbdrv.c
+@@ -1214,6 +1214,9 @@ static int intelfb_check_var(struct fb_var_screeninfo *var,
+ 
+ 	dinfo = GET_DINFO(info);
+ 
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	/* update the pitch */
+ 	if (intelfbhw_validate_mode(dinfo, var) != 0)
+ 		return -EINVAL;
+diff --git a/drivers/video/fbdev/nvidia/nvidia.c b/drivers/video/fbdev/nvidia/nvidia.c
+index a372a183c1f01..f9c388a8c10e3 100644
+--- a/drivers/video/fbdev/nvidia/nvidia.c
++++ b/drivers/video/fbdev/nvidia/nvidia.c
+@@ -763,6 +763,8 @@ static int nvidiafb_check_var(struct fb_var_screeninfo *var,
+ 	int pitch, err = 0;
+ 
+ 	NVTRACE_ENTER();
++	if (!var->pixclock)
++		return -EINVAL;
+ 
+ 	var->transp.offset = 0;
+ 	var->transp.length = 0;
+diff --git a/drivers/video/fbdev/tgafb.c b/drivers/video/fbdev/tgafb.c
+index 666fbe2f671c9..98a2977fd4271 100644
+--- a/drivers/video/fbdev/tgafb.c
++++ b/drivers/video/fbdev/tgafb.c
+@@ -166,6 +166,9 @@ tgafb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ {
+ 	struct tga_par *par = (struct tga_par *)info->par;
+ 
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	if (par->tga_type == TGA_TYPE_8PLANE) {
+ 		if (var->bits_per_pixel != 8)
+ 			return -EINVAL;
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index fc335b5e44df8..10686b494f0a9 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -4254,7 +4254,9 @@ static long btrfs_ioctl_qgroup_assign(struct file *file, void __user *arg)
+ 	}
+ 
+ 	/* update qgroup status and info */
++	mutex_lock(&fs_info->qgroup_ioctl_lock);
+ 	err = btrfs_run_qgroups(trans);
++	mutex_unlock(&fs_info->qgroup_ioctl_lock);
+ 	if (err < 0)
+ 		btrfs_handle_fs_error(fs_info, err,
+ 				      "failed to update qgroup status and info");
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 9fe6a01ea8b85..3fc689154bb5b 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2762,13 +2762,22 @@ cleanup:
+ }
+ 
+ /*
+- * called from commit_transaction. Writes all changed qgroups to disk.
++ * Writes all changed qgroups to disk.
++ * Called by the transaction commit path and the qgroup assign ioctl.
+  */
+ int btrfs_run_qgroups(struct btrfs_trans_handle *trans)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+ 	int ret = 0;
+ 
++	/*
++	 * In case we are called from the qgroup assign ioctl, assert that we
++	 * are holding the qgroup_ioctl_lock, otherwise we can race with a quota
++	 * disable operation (ioctl) and access a freed quota root.
++	 */
++	if (trans->transaction->state != TRANS_STATE_COMMIT_DOING)
++		lockdep_assert_held(&fs_info->qgroup_ioctl_lock);
++
+ 	if (!fs_info->quota_root)
+ 		return ret;
+ 
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 15435f983180f..83dca79ff042c 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1411,8 +1411,17 @@ struct btrfs_device *btrfs_scan_one_device(const char *path, fmode_t flags,
+ 	 * later supers, using BTRFS_SUPER_MIRROR_MAX instead
+ 	 */
+ 	bytenr = btrfs_sb_offset(0);
+-	flags |= FMODE_EXCL;
+ 
++	/*
++	 * Avoid using flag |= FMODE_EXCL here, as the systemd-udev may
++	 * initiate the device scan which may race with the user's mount
++	 * or mkfs command, resulting in failure.
++	 * Since the device scan is solely for reading purposes, there is
++	 * no need for FMODE_EXCL. Additionally, the devices are read again
++	 * during the mount process. It is ok to get some inconsistent
++	 * values temporarily, as the device paths of the fsid are the only
++	 * required information for assembling the volume.
++	 */
+ 	bdev = blkdev_get_by_path(path, flags, holder);
+ 	if (IS_ERR(bdev))
+ 		return ERR_CAST(bdev);
+diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h
+index e996f0bef4145..59c41412ebaf0 100644
+--- a/fs/cifs/cifsfs.h
++++ b/fs/cifs/cifsfs.h
+@@ -126,7 +126,10 @@ extern const struct dentry_operations cifs_ci_dentry_ops;
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+ extern struct vfsmount *cifs_dfs_d_automount(struct path *path);
+ #else
+-#define cifs_dfs_d_automount NULL
++static inline struct vfsmount *cifs_dfs_d_automount(struct path *path)
++{
++	return ERR_PTR(-EREMOTE);
++}
+ #endif
+ 
+ /* Functions related to symlinks */
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index c279527aae92d..95992c93bbe34 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -4859,8 +4859,13 @@ CIFSGetDFSRefer(const unsigned int xid, struct cifs_ses *ses,
+ 		return -ENODEV;
+ 
+ getDFSRetry:
+-	rc = smb_init(SMB_COM_TRANSACTION2, 15, ses->tcon_ipc, (void **) &pSMB,
+-		      (void **) &pSMBr);
++	/*
++	 * Use smb_init_no_reconnect() instead of smb_init() as
++	 * CIFSGetDFSRefer() may be called from cifs_reconnect_tcon() and thus
++	 * causing an infinite recursion.
++	 */
++	rc = smb_init_no_reconnect(SMB_COM_TRANSACTION2, 15, ses->tcon_ipc,
++				   (void **)&pSMB, (void **)&pSMBr);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 8fdd34ff20ef5..120c7cb11b02a 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -593,7 +593,7 @@ SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon)
+ 	if (rc == -EOPNOTSUPP) {
+ 		cifs_dbg(FYI,
+ 			 "server does not support query network interfaces\n");
+-		goto out;
++		ret_data_len = 0;
+ 	} else if (rc != 0) {
+ 		cifs_tcon_dbg(VFS, "error %d on ioctl to get interface list\n", rc);
+ 		goto out;
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 6ba185b46ba39..9bd5f8b0511b2 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1303,7 +1303,8 @@ static int ext4_write_end(struct file *file,
+ 	bool verity = ext4_verity_in_progress(inode);
+ 
+ 	trace_ext4_write_end(inode, pos, len, copied);
+-	if (inline_data) {
++	if (inline_data &&
++	    ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) {
+ 		ret = ext4_write_inline_data_end(inode, pos, len,
+ 						 copied, page);
+ 		if (ret < 0) {
+diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
+index 5306595548703..a0430da033b38 100644
+--- a/fs/gfs2/aops.c
++++ b/fs/gfs2/aops.c
+@@ -451,8 +451,6 @@ static int stuffed_readpage(struct gfs2_inode *ip, struct page *page)
+ 		return error;
+ 
+ 	kaddr = kmap_atomic(page);
+-	if (dsize > gfs2_max_stuffed_size(ip))
+-		dsize = gfs2_max_stuffed_size(ip);
+ 	memcpy(kaddr, dibh->b_data + sizeof(struct gfs2_dinode), dsize);
+ 	memset(kaddr + dsize, 0, PAGE_SIZE - dsize);
+ 	kunmap_atomic(kaddr);
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index b4fde3a8eeb4b..eaee95d2ad143 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -69,9 +69,6 @@ static int gfs2_unstuffer_page(struct gfs2_inode *ip, struct buffer_head *dibh,
+ 		void *kaddr = kmap(page);
+ 		u64 dsize = i_size_read(inode);
+  
+-		if (dsize > gfs2_max_stuffed_size(ip))
+-			dsize = gfs2_max_stuffed_size(ip);
+-
+ 		memcpy(kaddr, dibh->b_data + sizeof(struct gfs2_dinode), dsize);
+ 		memset(kaddr + dsize, 0, PAGE_SIZE - dsize);
+ 		kunmap(page);
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index bf539eab92c6f..db28c240dae35 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -454,6 +454,9 @@ static int gfs2_dinode_in(struct gfs2_inode *ip, const void *buf)
+ 	ip->i_depth = (u8)depth;
+ 	ip->i_entries = be32_to_cpu(str->di_entries);
+ 
++	if (gfs2_is_stuffed(ip) && ip->i_inode.i_size > gfs2_max_stuffed_size(ip))
++		goto corrupt;
++
+ 	if (S_ISREG(ip->i_inode.i_mode))
+ 		gfs2_set_aops(&ip->i_inode);
+ 
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 8653335c17b67..bca5d1bdd79bd 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1975,8 +1975,7 @@ _nfs4_opendata_reclaim_to_nfs4_state(struct nfs4_opendata *data)
+ 	if (!data->rpc_done) {
+ 		if (data->rpc_status)
+ 			return ERR_PTR(data->rpc_status);
+-		/* cached opens have already been processed */
+-		goto update;
++		return nfs4_try_open_cached(data);
+ 	}
+ 
+ 	ret = nfs_refresh_inode(inode, &data->f_attr);
+@@ -1985,7 +1984,7 @@ _nfs4_opendata_reclaim_to_nfs4_state(struct nfs4_opendata *data)
+ 
+ 	if (data->o_res.delegation_type != 0)
+ 		nfs4_opendata_check_deleg(data, state);
+-update:
++
+ 	if (!update_open_stateid(state, &data->o_res.stateid,
+ 				NULL, data->o_arg.fmode))
+ 		return ERR_PTR(-EAGAIN);
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index f82cfe843b99b..3c651cbcf8971 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1248,13 +1248,6 @@ out_err:
+ 	return status;
+ }
+ 
+-static void
+-nfsd4_interssc_disconnect(struct vfsmount *ss_mnt)
+-{
+-	nfs_do_sb_deactive(ss_mnt->mnt_sb);
+-	mntput(ss_mnt);
+-}
+-
+ /*
+  * Verify COPY destination stateid.
+  *
+@@ -1325,11 +1318,6 @@ nfsd4_cleanup_inter_ssc(struct vfsmount *ss_mnt, struct nfsd_file *src,
+ {
+ }
+ 
+-static void
+-nfsd4_interssc_disconnect(struct vfsmount *ss_mnt)
+-{
+-}
+-
+ static struct file *nfs42_ssc_open(struct vfsmount *ss_mnt,
+ 				   struct nfs_fh *src_fh,
+ 				   nfs4_stateid *stateid)
+@@ -1471,14 +1459,14 @@ static int nfsd4_do_async_copy(void *data)
+ 		copy->nf_src = kzalloc(sizeof(struct nfsd_file), GFP_KERNEL);
+ 		if (!copy->nf_src) {
+ 			copy->nfserr = nfserr_serverfault;
+-			nfsd4_interssc_disconnect(copy->ss_mnt);
++			/* ss_mnt will be unmounted by the laundromat */
+ 			goto do_callback;
+ 		}
+ 		copy->nf_src->nf_file = nfs42_ssc_open(copy->ss_mnt, &copy->c_fh,
+ 					      &copy->stateid);
+ 		if (IS_ERR(copy->nf_src->nf_file)) {
+ 			copy->nfserr = nfserr_offload_denied;
+-			nfsd4_interssc_disconnect(copy->ss_mnt);
++			/* ss_mnt will be unmounted by the laundromat */
+ 			goto do_callback;
+ 		}
+ 	}
+@@ -1561,8 +1549,10 @@ out_err:
+ 	if (async_copy)
+ 		cleanup_async_copy(async_copy);
+ 	status = nfserrno(-ENOMEM);
+-	if (!copy->cp_intra)
+-		nfsd4_interssc_disconnect(copy->ss_mnt);
++	/*
++	 * source's vfsmount of inter-copy will be unmounted
++	 * by the laundromat
++	 */
+ 	goto out;
+ }
+ 
+diff --git a/fs/nilfs2/ioctl.c b/fs/nilfs2/ioctl.c
+index 3a1dea5d14484..01235fac5971f 100644
+--- a/fs/nilfs2/ioctl.c
++++ b/fs/nilfs2/ioctl.c
+@@ -70,7 +70,7 @@ static int nilfs_ioctl_wrap_copy(struct the_nilfs *nilfs,
+ 	if (argv->v_index > ~(__u64)0 - argv->v_nmembs)
+ 		return -EINVAL;
+ 
+-	buf = (void *)__get_free_pages(GFP_NOFS, 0);
++	buf = (void *)get_zeroed_page(GFP_NOFS);
+ 	if (unlikely(!buf))
+ 		return -ENOMEM;
+ 	maxmembs = PAGE_SIZE / argv->v_size;
+diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
+index ad20403b383fa..9b23e74036eb9 100644
+--- a/fs/ocfs2/aops.c
++++ b/fs/ocfs2/aops.c
+@@ -1981,11 +1981,25 @@ int ocfs2_write_end_nolock(struct address_space *mapping,
+ 	}
+ 
+ 	if (unlikely(copied < len) && wc->w_target_page) {
++		loff_t new_isize;
++
+ 		if (!PageUptodate(wc->w_target_page))
+ 			copied = 0;
+ 
+-		ocfs2_zero_new_buffers(wc->w_target_page, start+copied,
+-				       start+len);
++		new_isize = max_t(loff_t, i_size_read(inode), pos + copied);
++		if (new_isize > page_offset(wc->w_target_page))
++			ocfs2_zero_new_buffers(wc->w_target_page, start+copied,
++					       start+len);
++		else {
++			/*
++			 * When page is fully beyond new isize (data copy
++			 * failed), do not bother zeroing the page. Invalidate
++			 * it instead so that writeback does not get confused
++			 * put page & buffer dirty bits into inconsistent
++			 * state.
++			 */
++			block_invalidatepage(wc->w_target_page, 0, PAGE_SIZE);
++		}
+ 	}
+ 	if (wc->w_target_page)
+ 		flush_dcache_page(wc->w_target_page);
+diff --git a/fs/verity/enable.c b/fs/verity/enable.c
+index 734862e608fd3..5ceae66e1ae02 100644
+--- a/fs/verity/enable.c
++++ b/fs/verity/enable.c
+@@ -391,25 +391,27 @@ int fsverity_ioctl_enable(struct file *filp, const void __user *uarg)
+ 		goto out_drop_write;
+ 
+ 	err = enable_verity(filp, &arg);
+-	if (err)
+-		goto out_allow_write_access;
+ 
+ 	/*
+-	 * Some pages of the file may have been evicted from pagecache after
+-	 * being used in the Merkle tree construction, then read into pagecache
+-	 * again by another process reading from the file concurrently.  Since
+-	 * these pages didn't undergo verification against the file measurement
+-	 * which fs-verity now claims to be enforcing, we have to wipe the
+-	 * pagecache to ensure that all future reads are verified.
++	 * We no longer drop the inode's pagecache after enabling verity.  This
++	 * used to be done to try to avoid a race condition where pages could be
++	 * evicted after being used in the Merkle tree construction, then
++	 * re-instantiated by a concurrent read.  Such pages are unverified, and
++	 * the backing storage could have filled them with different content, so
++	 * they shouldn't be used to fulfill reads once verity is enabled.
++	 *
++	 * But, dropping the pagecache has a big performance impact, and it
++	 * doesn't fully solve the race condition anyway.  So for those reasons,
++	 * and also because this race condition isn't very important relatively
++	 * speaking (especially for small-ish files, where the chance of a page
++	 * being used, evicted, *and* re-instantiated all while enabling verity
++	 * is quite small), we no longer drop the inode's pagecache.
+ 	 */
+-	filemap_write_and_wait(inode->i_mapping);
+-	invalidate_inode_pages2(inode->i_mapping);
+ 
+ 	/*
+ 	 * allow_write_access() is needed to pair with deny_write_access().
+ 	 * Regardless, the filesystem won't allow writing to verity files.
+ 	 */
+-out_allow_write_access:
+ 	allow_write_access(filp);
+ out_drop_write:
+ 	mnt_drop_write_file(filp);
+diff --git a/fs/verity/verify.c b/fs/verity/verify.c
+index a8b68c6f663d1..d3a3a359d8152 100644
+--- a/fs/verity/verify.c
++++ b/fs/verity/verify.c
+@@ -279,15 +279,15 @@ EXPORT_SYMBOL_GPL(fsverity_enqueue_verify_work);
+ int __init fsverity_init_workqueue(void)
+ {
+ 	/*
+-	 * Use an unbound workqueue to allow bios to be verified in parallel
+-	 * even when they happen to complete on the same CPU.  This sacrifices
+-	 * locality, but it's worthwhile since hashing is CPU-intensive.
++	 * Use a high-priority workqueue to prioritize verification work, which
++	 * blocks reads from completing, over regular application tasks.
+ 	 *
+-	 * Also use a high-priority workqueue to prioritize verification work,
+-	 * which blocks reads from completing, over regular application tasks.
++	 * For performance reasons, don't use an unbound workqueue.  Using an
++	 * unbound workqueue for crypto operations causes excessive scheduler
++	 * latency on ARM64.
+ 	 */
+ 	fsverity_read_workqueue = alloc_workqueue("fsverity_read_queue",
+-						  WQ_UNBOUND | WQ_HIGHPRI,
++						  WQ_HIGHPRI,
+ 						  num_online_cpus());
+ 	if (!fsverity_read_workqueue)
+ 		return -ENOMEM;
+diff --git a/fs/xfs/xfs_extent_busy.c b/fs/xfs/xfs_extent_busy.c
+index 5c2695a42de15..a4075685d9eba 100644
+--- a/fs/xfs/xfs_extent_busy.c
++++ b/fs/xfs/xfs_extent_busy.c
+@@ -344,7 +344,6 @@ xfs_extent_busy_trim(
+ 	ASSERT(*len > 0);
+ 
+ 	spin_lock(&args->pag->pagb_lock);
+-restart:
+ 	fbno = *bno;
+ 	flen = *len;
+ 	rbp = args->pag->pagb_tree.rb_node;
+@@ -363,19 +362,6 @@ restart:
+ 			continue;
+ 		}
+ 
+-		/*
+-		 * If this is a metadata allocation, try to reuse the busy
+-		 * extent instead of trimming the allocation.
+-		 */
+-		if (!(args->datatype & XFS_ALLOC_USERDATA) &&
+-		    !(busyp->flags & XFS_EXTENT_BUSY_DISCARDED)) {
+-			if (!xfs_extent_busy_update_extent(args->mp, args->pag,
+-							  busyp, fbno, flen,
+-							  false))
+-				goto restart;
+-			continue;
+-		}
+-
+ 		if (bbno <= fbno) {
+ 			/* start overlap */
+ 
+diff --git a/fs/xfs/xfs_trans_dquot.c b/fs/xfs/xfs_trans_dquot.c
+index 288ea38c43ad0..5ca210e6626cd 100644
+--- a/fs/xfs/xfs_trans_dquot.c
++++ b/fs/xfs/xfs_trans_dquot.c
+@@ -16,6 +16,7 @@
+ #include "xfs_quota.h"
+ #include "xfs_qm.h"
+ #include "xfs_trace.h"
++#include "xfs_error.h"
+ 
+ STATIC void	xfs_trans_alloc_dqinfo(xfs_trans_t *);
+ 
+@@ -708,9 +709,11 @@ xfs_trans_dqresv(
+ 					    XFS_TRANS_DQ_RES_INOS,
+ 					    ninos);
+ 	}
+-	ASSERT(dqp->q_blk.reserved >= dqp->q_blk.count);
+-	ASSERT(dqp->q_rtb.reserved >= dqp->q_rtb.count);
+-	ASSERT(dqp->q_ino.reserved >= dqp->q_ino.count);
++
++	if (XFS_IS_CORRUPT(mp, dqp->q_blk.reserved < dqp->q_blk.count) ||
++	    XFS_IS_CORRUPT(mp, dqp->q_rtb.reserved < dqp->q_rtb.count) ||
++	    XFS_IS_CORRUPT(mp, dqp->q_ino.reserved < dqp->q_ino.count))
++		goto error_corrupt;
+ 
+ 	xfs_dqunlock(dqp);
+ 	return 0;
+@@ -720,6 +723,10 @@ error_return:
+ 	if (xfs_dquot_type(dqp) == XFS_DQTYPE_PROJ)
+ 		return -ENOSPC;
+ 	return -EDQUOT;
++error_corrupt:
++	xfs_dqunlock(dqp);
++	xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
++	return -EFSCORRUPTED;
+ }
+ 
+ 
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index 66a089a62c39f..b9522eee1257a 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -789,7 +789,7 @@ static ssize_t zonefs_file_dio_append(struct kiocb *iocb, struct iov_iter *from)
+ 		if (bio->bi_iter.bi_sector != wpsector) {
+ 			zonefs_warn(inode->i_sb,
+ 				"Corrupted write pointer %llu for zone at %llu\n",
+-				wpsector, zi->i_zsector);
++				bio->bi_iter.bi_sector, zi->i_zsector);
+ 			ret = -EIO;
+ 		}
+ 	}
+diff --git a/include/linux/nvme-tcp.h b/include/linux/nvme-tcp.h
+index 959e0bd9a913e..73364ae916890 100644
+--- a/include/linux/nvme-tcp.h
++++ b/include/linux/nvme-tcp.h
+@@ -114,8 +114,9 @@ struct nvme_tcp_icresp_pdu {
+ struct nvme_tcp_term_pdu {
+ 	struct nvme_tcp_hdr	hdr;
+ 	__le16			fes;
+-	__le32			fei;
+-	__u8			rsvd[8];
++	__le16			feil;
++	__le16			feiu;
++	__u8			rsvd[10];
+ };
+ 
+ /**
+diff --git a/include/linux/of_mdio.h b/include/linux/of_mdio.h
+index f56c6a9230ac8..8cc6522ee43ab 100644
+--- a/include/linux/of_mdio.h
++++ b/include/linux/of_mdio.h
+@@ -14,9 +14,25 @@
+ 
+ #if IS_ENABLED(CONFIG_OF_MDIO)
+ bool of_mdiobus_child_is_phy(struct device_node *child);
+-int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np);
+-int devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
+-			     struct device_node *np);
++int __of_mdiobus_register(struct mii_bus *mdio, struct device_node *np,
++			  struct module *owner);
++
++static inline int of_mdiobus_register(struct mii_bus *mdio,
++				      struct device_node *np)
++{
++	return __of_mdiobus_register(mdio, np, THIS_MODULE);
++}
++
++int __devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
++			       struct device_node *np, struct module *owner);
++
++static inline int devm_of_mdiobus_register(struct device *dev,
++					   struct mii_bus *mdio,
++					   struct device_node *np)
++{
++	return __devm_of_mdiobus_register(dev, mdio, np, THIS_MODULE);
++}
++
+ struct mdio_device *of_mdio_find_device(struct device_node *np);
+ struct phy_device *of_phy_find_device(struct device_node *phy_np);
+ struct phy_device *
+diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h
+index 9b8000869b078..7f9d0ab76b14f 100644
+--- a/include/net/bluetooth/l2cap.h
++++ b/include/net/bluetooth/l2cap.h
+@@ -493,6 +493,7 @@ struct l2cap_le_credits {
+ 
+ #define L2CAP_ECRED_MIN_MTU		64
+ #define L2CAP_ECRED_MIN_MPS		64
++#define L2CAP_ECRED_MAX_CID		5
+ 
+ struct l2cap_ecred_conn_req {
+ 	__le16 psm;
+diff --git a/include/trace/events/rcu.h b/include/trace/events/rcu.h
+index 155b5cb43cfd3..2d8790e409018 100644
+--- a/include/trace/events/rcu.h
++++ b/include/trace/events/rcu.h
+@@ -713,7 +713,7 @@ TRACE_EVENT_RCU(rcu_torture_read,
+ 	TP_ARGS(rcutorturename, rhp, secs, c_old, c),
+ 
+ 	TP_STRUCT__entry(
+-		__field(char, rcutorturename[RCUTORTURENAME_LEN])
++		__array(char, rcutorturename, RCUTORTURENAME_LEN)
+ 		__field(struct rcu_head *, rhp)
+ 		__field(unsigned long, secs)
+ 		__field(unsigned long, c_old)
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 73d4b1e32fbdb..d3f6a070875cb 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -826,7 +826,7 @@ static int __init bpf_jit_charge_init(void)
+ {
+ 	/* Only used as heuristic here to derive limit. */
+ 	bpf_jit_limit_max = bpf_jit_alloc_exec_limit();
+-	bpf_jit_limit = min_t(u64, round_up(bpf_jit_limit_max >> 2,
++	bpf_jit_limit = min_t(u64, round_up(bpf_jit_limit_max >> 1,
+ 					    PAGE_SIZE), LONG_MAX);
+ 	return 0;
+ }
+diff --git a/kernel/compat.c b/kernel/compat.c
+index 05adfd6fa8bf9..f9f7a79e07c5f 100644
+--- a/kernel/compat.c
++++ b/kernel/compat.c
+@@ -152,7 +152,7 @@ COMPAT_SYSCALL_DEFINE3(sched_getaffinity, compat_pid_t,  pid, unsigned int, len,
+ 	if (len & (sizeof(compat_ulong_t)-1))
+ 		return -EINVAL;
+ 
+-	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
++	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
+ 		return -ENOMEM;
+ 
+ 	ret = sched_getaffinity(pid, mask);
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index d7b61116f15bb..e2e1371fbb9d3 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -3817,7 +3817,7 @@ ctx_sched_in(struct perf_event_context *ctx,
+ 	if (likely(!ctx->nr_events))
+ 		return;
+ 
+-	if (is_active ^ EVENT_TIME) {
++	if (!(is_active & EVENT_TIME)) {
+ 		/* start ctx time */
+ 		__update_context_time(ctx, false);
+ 		perf_cgroup_set_timestamp(task, ctx);
+@@ -8710,7 +8710,7 @@ static void perf_event_bpf_output(struct perf_event *event, void *data)
+ 
+ 	perf_event_header__init_id(&bpf_event->event_id.header,
+ 				   &sample, event);
+-	ret = perf_output_begin(&handle, data, event,
++	ret = perf_output_begin(&handle, &sample, event,
+ 				bpf_event->event_id.header.size);
+ 	if (ret)
+ 		return;
+diff --git a/kernel/kcsan/Makefile b/kernel/kcsan/Makefile
+index 65ca5539c470e..a9b0ee63b6978 100644
+--- a/kernel/kcsan/Makefile
++++ b/kernel/kcsan/Makefile
+@@ -13,5 +13,6 @@ CFLAGS_core.o := $(call cc-option,-fno-conserve-stack) \
+ obj-y := core.o debugfs.o report.o
+ obj-$(CONFIG_KCSAN_SELFTEST) += selftest.o
+ 
+-CFLAGS_kcsan-test.o := $(CFLAGS_KCSAN) -g -fno-omit-frame-pointer
++CFLAGS_kcsan-test.o := $(CFLAGS_KCSAN) -fno-omit-frame-pointer
++CFLAGS_kcsan_test.o += $(DISABLE_STRUCTLEAK_PLUGIN)
+ obj-$(CONFIG_KCSAN_TEST) += kcsan-test.o
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 1303a2607f1f8..b4bd02d68185e 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1601,6 +1601,9 @@ static inline void dequeue_task(struct rq *rq, struct task_struct *p, int flags)
+ 
+ void activate_task(struct rq *rq, struct task_struct *p, int flags)
+ {
++	if (task_on_rq_migrating(p))
++		flags |= ENQUEUE_MIGRATED;
++
+ 	enqueue_task(rq, p, flags);
+ 
+ 	p->on_rq = TASK_ON_RQ_QUEUED;
+@@ -6064,14 +6067,14 @@ SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
+ 	if (len & (sizeof(unsigned long)-1))
+ 		return -EINVAL;
+ 
+-	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
++	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
+ 		return -ENOMEM;
+ 
+ 	ret = sched_getaffinity(pid, mask);
+ 	if (ret == 0) {
+ 		unsigned int retlen = min(len, cpumask_size());
+ 
+-		if (copy_to_user(user_mask_ptr, mask, retlen))
++		if (copy_to_user(user_mask_ptr, cpumask_bits(mask), retlen))
+ 			ret = -EFAULT;
+ 		else
+ 			ret = retlen;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index c39d2fc3f9945..bb70a7856277f 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4274,6 +4274,29 @@ static void check_spread(struct cfs_rq *cfs_rq, struct sched_entity *se)
+ #endif
+ }
+ 
++static inline bool entity_is_long_sleeper(struct sched_entity *se)
++{
++	struct cfs_rq *cfs_rq;
++	u64 sleep_time;
++
++	if (se->exec_start == 0)
++		return false;
++
++	cfs_rq = cfs_rq_of(se);
++
++	sleep_time = rq_clock_task(rq_of(cfs_rq));
++
++	/* Happen while migrating because of clock task divergence */
++	if (sleep_time <= se->exec_start)
++		return false;
++
++	sleep_time -= se->exec_start;
++	if (sleep_time > ((1ULL << 63) / scale_load_down(NICE_0_LOAD)))
++		return true;
++
++	return false;
++}
++
+ static void
+ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
+ {
+@@ -4302,8 +4325,29 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
+ 		vruntime -= thresh;
+ 	}
+ 
+-	/* ensure we never gain time by being placed backwards. */
+-	se->vruntime = max_vruntime(se->vruntime, vruntime);
++	/*
++	 * Pull vruntime of the entity being placed to the base level of
++	 * cfs_rq, to prevent boosting it if placed backwards.
++	 * However, min_vruntime can advance much faster than real time, with
++	 * the extreme being when an entity with the minimal weight always runs
++	 * on the cfs_rq. If the waking entity slept for a long time, its
++	 * vruntime difference from min_vruntime may overflow s64 and their
++	 * comparison may get inversed, so ignore the entity's original
++	 * vruntime in that case.
++	 * The maximal vruntime speedup is given by the ratio of normal to
++	 * minimal weight: scale_load_down(NICE_0_LOAD) / MIN_SHARES.
++	 * When placing a migrated waking entity, its exec_start has been set
++	 * from a different rq. In order to take into account a possible
++	 * divergence between new and prev rq's clocks task because of irq and
++	 * stolen time, we take an additional margin.
++	 * So, cutting off on the sleep time of
++	 *     2^63 / scale_load_down(NICE_0_LOAD) ~ 104 days
++	 * should be safe.
++	 */
++	if (entity_is_long_sleeper(se))
++		se->vruntime = vruntime;
++	else
++		se->vruntime = max_vruntime(se->vruntime, vruntime);
+ }
+ 
+ static void check_enqueue_throttle(struct cfs_rq *cfs_rq);
+@@ -4399,6 +4443,9 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+ 
+ 	if (flags & ENQUEUE_WAKEUP)
+ 		place_entity(cfs_rq, se, 0);
++	/* Entity has migrated, no longer consider this task hot */
++	if (flags & ENQUEUE_MIGRATED)
++		se->exec_start = 0;
+ 
+ 	check_schedstat_required();
+ 	update_stats_enqueue(cfs_rq, se, flags);
+@@ -6984,9 +7031,6 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
+ 	/* Tell new CPU we are migrated */
+ 	p->se.avg.last_update_time = 0;
+ 
+-	/* We have migrated, no longer consider this task hot */
+-	p->se.exec_start = 0;
+-
+ 	update_scan_period(p, new_cpu);
+ }
+ 
+diff --git a/kernel/trace/kprobe_event_gen_test.c b/kernel/trace/kprobe_event_gen_test.c
+index c736487fc0e48..e0c420eb0b2b4 100644
+--- a/kernel/trace/kprobe_event_gen_test.c
++++ b/kernel/trace/kprobe_event_gen_test.c
+@@ -146,7 +146,7 @@ static int __init test_gen_kprobe_cmd(void)
+ 	if (trace_event_file_is_valid(gen_kprobe_test))
+ 		gen_kprobe_test = NULL;
+ 	/* We got an error after creating the event, delete it */
+-	ret = kprobe_event_delete("gen_kprobe_test");
++	kprobe_event_delete("gen_kprobe_test");
+ 	goto out;
+ }
+ 
+@@ -211,7 +211,7 @@ static int __init test_gen_kretprobe_cmd(void)
+ 	if (trace_event_file_is_valid(gen_kretprobe_test))
+ 		gen_kretprobe_test = NULL;
+ 	/* We got an error after creating the event, delete it */
+-	ret = kprobe_event_delete("gen_kretprobe_test");
++	kprobe_event_delete("gen_kretprobe_test");
+ 	goto out;
+ }
+ 
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index bde90df6b4976..367b1dec2e751 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -710,6 +710,17 @@ void l2cap_chan_del(struct l2cap_chan *chan, int err)
+ }
+ EXPORT_SYMBOL_GPL(l2cap_chan_del);
+ 
++static void __l2cap_chan_list_id(struct l2cap_conn *conn, u16 id,
++				 l2cap_chan_func_t func, void *data)
++{
++	struct l2cap_chan *chan, *l;
++
++	list_for_each_entry_safe(chan, l, &conn->chan_l, list) {
++		if (chan->ident == id)
++			func(chan, data);
++	}
++}
++
+ static void __l2cap_chan_list(struct l2cap_conn *conn, l2cap_chan_func_t func,
+ 			      void *data)
+ {
+@@ -777,23 +788,9 @@ static void l2cap_chan_le_connect_reject(struct l2cap_chan *chan)
+ 
+ static void l2cap_chan_ecred_connect_reject(struct l2cap_chan *chan)
+ {
+-	struct l2cap_conn *conn = chan->conn;
+-	struct l2cap_ecred_conn_rsp rsp;
+-	u16 result;
+-
+-	if (test_bit(FLAG_DEFER_SETUP, &chan->flags))
+-		result = L2CAP_CR_LE_AUTHORIZATION;
+-	else
+-		result = L2CAP_CR_LE_BAD_PSM;
+-
+ 	l2cap_state_change(chan, BT_DISCONN);
+ 
+-	memset(&rsp, 0, sizeof(rsp));
+-
+-	rsp.result  = cpu_to_le16(result);
+-
+-	l2cap_send_cmd(conn, chan->ident, L2CAP_LE_CONN_RSP, sizeof(rsp),
+-		       &rsp);
++	__l2cap_ecred_conn_rsp_defer(chan);
+ }
+ 
+ static void l2cap_chan_connect_reject(struct l2cap_chan *chan)
+@@ -848,7 +845,7 @@ void l2cap_chan_close(struct l2cap_chan *chan, int reason)
+ 					break;
+ 				case L2CAP_MODE_EXT_FLOWCTL:
+ 					l2cap_chan_ecred_connect_reject(chan);
+-					break;
++					return;
+ 				}
+ 			}
+ 		}
+@@ -3934,43 +3931,86 @@ void __l2cap_le_connect_rsp_defer(struct l2cap_chan *chan)
+ 		       &rsp);
+ }
+ 
+-void __l2cap_ecred_conn_rsp_defer(struct l2cap_chan *chan)
++static void l2cap_ecred_list_defer(struct l2cap_chan *chan, void *data)
+ {
++	int *result = data;
++
++	if (*result || test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags))
++		return;
++
++	switch (chan->state) {
++	case BT_CONNECT2:
++		/* If channel still pending accept add to result */
++		(*result)++;
++		return;
++	case BT_CONNECTED:
++		return;
++	default:
++		/* If not connected or pending accept it has been refused */
++		*result = -ECONNREFUSED;
++		return;
++	}
++}
++
++struct l2cap_ecred_rsp_data {
+ 	struct {
+ 		struct l2cap_ecred_conn_rsp rsp;
+-		__le16 dcid[5];
++		__le16 scid[L2CAP_ECRED_MAX_CID];
+ 	} __packed pdu;
++	int count;
++};
++
++static void l2cap_ecred_rsp_defer(struct l2cap_chan *chan, void *data)
++{
++	struct l2cap_ecred_rsp_data *rsp = data;
++
++	if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags))
++		return;
++
++	/* Reset ident so only one response is sent */
++	chan->ident = 0;
++
++	/* Include all channels pending with the same ident */
++	if (!rsp->pdu.rsp.result)
++		rsp->pdu.rsp.dcid[rsp->count++] = cpu_to_le16(chan->scid);
++	else
++		l2cap_chan_del(chan, ECONNRESET);
++}
++
++void __l2cap_ecred_conn_rsp_defer(struct l2cap_chan *chan)
++{
+ 	struct l2cap_conn *conn = chan->conn;
+-	u16 ident = chan->ident;
+-	int i = 0;
++	struct l2cap_ecred_rsp_data data;
++	u16 id = chan->ident;
++	int result = 0;
+ 
+-	if (!ident)
++	if (!id)
+ 		return;
+ 
+-	BT_DBG("chan %p ident %d", chan, ident);
++	BT_DBG("chan %p id %d", chan, id);
+ 
+-	pdu.rsp.mtu     = cpu_to_le16(chan->imtu);
+-	pdu.rsp.mps     = cpu_to_le16(chan->mps);
+-	pdu.rsp.credits = cpu_to_le16(chan->rx_credits);
+-	pdu.rsp.result  = cpu_to_le16(L2CAP_CR_LE_SUCCESS);
++	memset(&data, 0, sizeof(data));
+ 
+-	mutex_lock(&conn->chan_lock);
++	data.pdu.rsp.mtu     = cpu_to_le16(chan->imtu);
++	data.pdu.rsp.mps     = cpu_to_le16(chan->mps);
++	data.pdu.rsp.credits = cpu_to_le16(chan->rx_credits);
++	data.pdu.rsp.result  = cpu_to_le16(L2CAP_CR_LE_SUCCESS);
+ 
+-	list_for_each_entry(chan, &conn->chan_l, list) {
+-		if (chan->ident != ident)
+-			continue;
++	/* Verify that all channels are ready */
++	__l2cap_chan_list_id(conn, id, l2cap_ecred_list_defer, &result);
+ 
+-		/* Reset ident so only one response is sent */
+-		chan->ident = 0;
++	if (result > 0)
++		return;
+ 
+-		/* Include all channels pending with the same ident */
+-		pdu.dcid[i++] = cpu_to_le16(chan->scid);
+-	}
++	if (result < 0)
++		data.pdu.rsp.result = cpu_to_le16(L2CAP_CR_LE_AUTHORIZATION);
+ 
+-	mutex_unlock(&conn->chan_lock);
++	/* Build response */
++	__l2cap_chan_list_id(conn, id, l2cap_ecred_rsp_defer, &data);
+ 
+-	l2cap_send_cmd(conn, ident, L2CAP_ECRED_CONN_RSP,
+-			sizeof(pdu.rsp) + i * sizeof(__le16), &pdu);
++	l2cap_send_cmd(conn, id, L2CAP_ECRED_CONN_RSP,
++		       sizeof(data.pdu.rsp) + (data.count * sizeof(__le16)),
++		       &data.pdu);
+ }
+ 
+ void __l2cap_connect_rsp_defer(struct l2cap_chan *chan)
+@@ -5952,7 +5992,7 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
+ 	struct l2cap_ecred_conn_req *req = (void *) data;
+ 	struct {
+ 		struct l2cap_ecred_conn_rsp rsp;
+-		__le16 dcid[5];
++		__le16 dcid[L2CAP_ECRED_MAX_CID];
+ 	} __packed pdu;
+ 	struct l2cap_chan *chan, *pchan;
+ 	u16 mtu, mps;
+@@ -5969,6 +6009,14 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
+ 		goto response;
+ 	}
+ 
++	cmd_len -= sizeof(*req);
++	num_scid = cmd_len / sizeof(u16);
++
++	if (num_scid > ARRAY_SIZE(pdu.dcid)) {
++		result = L2CAP_CR_LE_INVALID_PARAMS;
++		goto response;
++	}
++
+ 	mtu  = __le16_to_cpu(req->mtu);
+ 	mps  = __le16_to_cpu(req->mps);
+ 
+@@ -6013,8 +6061,6 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
+ 	}
+ 
+ 	result = L2CAP_CR_LE_SUCCESS;
+-	cmd_len -= sizeof(*req);
+-	num_scid = cmd_len / sizeof(u16);
+ 
+ 	for (i = 0; i < num_scid; i++) {
+ 		u16 scid = __le16_to_cpu(req->scid[i]);
+@@ -6067,6 +6113,7 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
+ 		__set_chan_timer(chan, chan->ops->get_sndtimeo(chan));
+ 
+ 		chan->ident = cmd->ident;
++		chan->mode = L2CAP_MODE_EXT_FLOWCTL;
+ 
+ 		if (test_bit(FLAG_DEFER_SETUP, &chan->flags)) {
+ 			l2cap_state_change(chan, BT_CONNECT2);
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index afa82adaf6cd5..ddba4e12da783 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -936,6 +936,8 @@ static int bcm_tx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
+ 
+ 			cf = op->frames + op->cfsiz * i;
+ 			err = memcpy_from_msg((u8 *)cf, msg, op->cfsiz);
++			if (err < 0)
++				goto free_op;
+ 
+ 			if (op->flags & CAN_FD_FRAME) {
+ 				if (cf->len > 64)
+@@ -945,12 +947,8 @@ static int bcm_tx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
+ 					err = -EINVAL;
+ 			}
+ 
+-			if (err < 0) {
+-				if (op->frames != &op->sframe)
+-					kfree(op->frames);
+-				kfree(op);
+-				return err;
+-			}
++			if (err < 0)
++				goto free_op;
+ 
+ 			if (msg_head->flags & TX_CP_CAN_ID) {
+ 				/* copy can_id into frame */
+@@ -1021,6 +1019,12 @@ static int bcm_tx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
+ 		bcm_tx_start_timer(op);
+ 
+ 	return msg_head->nframes * op->cfsiz + MHSIZ;
++
++free_op:
++	if (op->frames != &op->sframe)
++		kfree(op->frames);
++	kfree(op);
++	return err;
+ }
+ 
+ /*
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index 20cb6b7dbc694..afc97d65cf2d8 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -380,7 +380,7 @@ void hsr_addr_subst_dest(struct hsr_node *node_src, struct sk_buff *skb,
+ 	node_dst = find_node_by_addr_A(&port->hsr->node_db,
+ 				       eth_hdr(skb)->h_dest);
+ 	if (!node_dst) {
+-		if (net_ratelimit())
++		if (port->hsr->prot_version != PRP_V1 && net_ratelimit())
+ 			netdev_err(skb->dev, "%s: Unknown node\n", __func__);
+ 		return;
+ 	}
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 65ead8a749337..9d1a506571043 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -547,7 +547,7 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		truncate = true;
+ 	}
+ 
+-	nhoff = skb_network_header(skb) - skb_mac_header(skb);
++	nhoff = skb_network_offset(skb);
+ 	if (skb->protocol == htons(ETH_P_IP) &&
+ 	    (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff))
+ 		truncate = true;
+@@ -556,7 +556,7 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		int thoff;
+ 
+ 		if (skb_transport_header_was_set(skb))
+-			thoff = skb_transport_header(skb) - skb_mac_header(skb);
++			thoff = skb_transport_offset(skb);
+ 		else
+ 			thoff = nhoff + sizeof(struct ipv6hdr);
+ 		if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 0010f9e54f13b..2332b5b81c551 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -959,7 +959,7 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 		truncate = true;
+ 	}
+ 
+-	nhoff = skb_network_header(skb) - skb_mac_header(skb);
++	nhoff = skb_network_offset(skb);
+ 	if (skb->protocol == htons(ETH_P_IP) &&
+ 	    (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff))
+ 		truncate = true;
+@@ -968,7 +968,7 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 		int thoff;
+ 
+ 		if (skb_transport_header_was_set(skb))
+-			thoff = skb_transport_header(skb) - skb_mac_header(skb);
++			thoff = skb_transport_offset(skb);
+ 		else
+ 			thoff = nhoff + sizeof(struct ipv6hdr);
+ 		if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)
+diff --git a/net/mac80211/wme.c b/net/mac80211/wme.c
+index b9404b0560871..eb79f6844466e 100644
+--- a/net/mac80211/wme.c
++++ b/net/mac80211/wme.c
+@@ -141,12 +141,14 @@ u16 ieee80211_select_queue_80211(struct ieee80211_sub_if_data *sdata,
+ u16 __ieee80211_select_queue(struct ieee80211_sub_if_data *sdata,
+ 			     struct sta_info *sta, struct sk_buff *skb)
+ {
++	const struct ethhdr *eth = (void *)skb->data;
+ 	struct mac80211_qos_map *qos_map;
+ 	bool qos;
+ 
+ 	/* all mesh/ocb stations are required to support WME */
+-	if (sta && (sdata->vif.type == NL80211_IFTYPE_MESH_POINT ||
+-		    sdata->vif.type == NL80211_IFTYPE_OCB))
++	if ((sdata->vif.type == NL80211_IFTYPE_MESH_POINT &&
++	    !is_multicast_ether_addr(eth->h_dest)) ||
++	    (sdata->vif.type == NL80211_IFTYPE_OCB && sta))
+ 		qos = true;
+ 	else if (sta)
+ 		qos = sta->sta.wme;
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index e537085b184fe..54863e68f3040 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -386,13 +386,11 @@ static int do_tls_getsockopt_conf(struct sock *sk, char __user *optval,
+ 			rc = -EINVAL;
+ 			goto out;
+ 		}
+-		lock_sock(sk);
+ 		memcpy(crypto_info_aes_gcm_128->iv,
+ 		       cctx->iv + TLS_CIPHER_AES_GCM_128_SALT_SIZE,
+ 		       TLS_CIPHER_AES_GCM_128_IV_SIZE);
+ 		memcpy(crypto_info_aes_gcm_128->rec_seq, cctx->rec_seq,
+ 		       TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);
+-		release_sock(sk);
+ 		if (copy_to_user(optval,
+ 				 crypto_info_aes_gcm_128,
+ 				 sizeof(*crypto_info_aes_gcm_128)))
+@@ -410,13 +408,11 @@ static int do_tls_getsockopt_conf(struct sock *sk, char __user *optval,
+ 			rc = -EINVAL;
+ 			goto out;
+ 		}
+-		lock_sock(sk);
+ 		memcpy(crypto_info_aes_gcm_256->iv,
+ 		       cctx->iv + TLS_CIPHER_AES_GCM_256_SALT_SIZE,
+ 		       TLS_CIPHER_AES_GCM_256_IV_SIZE);
+ 		memcpy(crypto_info_aes_gcm_256->rec_seq, cctx->rec_seq,
+ 		       TLS_CIPHER_AES_GCM_256_REC_SEQ_SIZE);
+-		release_sock(sk);
+ 		if (copy_to_user(optval,
+ 				 crypto_info_aes_gcm_256,
+ 				 sizeof(*crypto_info_aes_gcm_256)))
+@@ -436,6 +432,8 @@ static int do_tls_getsockopt(struct sock *sk, int optname,
+ {
+ 	int rc = 0;
+ 
++	lock_sock(sk);
++
+ 	switch (optname) {
+ 	case TLS_TX:
+ 	case TLS_RX:
+@@ -446,6 +444,9 @@ static int do_tls_getsockopt(struct sock *sk, int optname,
+ 		rc = -ENOPROTOOPT;
+ 		break;
+ 	}
++
++	release_sock(sk);
++
+ 	return rc;
+ }
+ 
+diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
+index 56a28a686988d..42b19feb2b6e5 100644
+--- a/net/xdp/xdp_umem.c
++++ b/net/xdp/xdp_umem.c
+@@ -153,10 +153,11 @@ static int xdp_umem_account_pages(struct xdp_umem *umem)
+ 
+ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ {
+-	u32 npgs_rem, chunk_size = mr->chunk_size, headroom = mr->headroom;
+ 	bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
+-	u64 npgs, addr = mr->addr, size = mr->len;
+-	unsigned int chunks, chunks_rem;
++	u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
++	u64 addr = mr->addr, size = mr->len;
++	u32 chunks_rem, npgs_rem;
++	u64 chunks, npgs;
+ 	int err;
+ 
+ 	if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
+@@ -191,8 +192,8 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ 	if (npgs > U32_MAX)
+ 		return -EINVAL;
+ 
+-	chunks = (unsigned int)div_u64_rem(size, chunk_size, &chunks_rem);
+-	if (chunks == 0)
++	chunks = div_u64_rem(size, chunk_size, &chunks_rem);
++	if (!chunks || chunks > U32_MAX)
+ 		return -EINVAL;
+ 
+ 	if (!unaligned_chunks && chunks_rem)
+@@ -205,7 +206,7 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ 	umem->headroom = headroom;
+ 	umem->chunk_size = chunk_size;
+ 	umem->chunks = chunks;
+-	umem->npgs = (u32)npgs;
++	umem->npgs = npgs;
+ 	umem->pgs = NULL;
+ 	umem->user = NULL;
+ 	umem->flags = mr->flags;
+diff --git a/security/keys/request_key.c b/security/keys/request_key.c
+index 2da4404276f0f..07a0ef2baacd8 100644
+--- a/security/keys/request_key.c
++++ b/security/keys/request_key.c
+@@ -38,9 +38,12 @@ static void cache_requested_key(struct key *key)
+ #ifdef CONFIG_KEYS_REQUEST_CACHE
+ 	struct task_struct *t = current;
+ 
+-	key_put(t->cached_requested_key);
+-	t->cached_requested_key = key_get(key);
+-	set_tsk_thread_flag(t, TIF_NOTIFY_RESUME);
++	/* Do not cache key if it is a kernel thread */
++	if (!(t->flags & PF_KTHREAD)) {
++		key_put(t->cached_requested_key);
++		t->cached_requested_key = key_get(key);
++		set_tsk_thread_flag(t, TIF_NOTIFY_RESUME);
++	}
+ #endif
+ }
+ 
+diff --git a/sound/pci/asihpi/hpi6205.c b/sound/pci/asihpi/hpi6205.c
+index 3d6914c64c4a8..4cdaeefeb6885 100644
+--- a/sound/pci/asihpi/hpi6205.c
++++ b/sound/pci/asihpi/hpi6205.c
+@@ -430,7 +430,7 @@ void HPI_6205(struct hpi_message *phm, struct hpi_response *phr)
+ 		pao = hpi_find_adapter(phm->adapter_index);
+ 	} else {
+ 		/* subsys messages don't address an adapter */
+-		_HPI_6205(NULL, phm, phr);
++		phr->error = HPI_ERROR_INVALID_OBJ_INDEX;
+ 		return;
+ 	}
+ 
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 24c2638cde376..6057084da4cf8 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -4108,8 +4108,10 @@ static int tuning_ctl_set(struct hda_codec *codec, hda_nid_t nid,
+ 
+ 	for (i = 0; i < TUNING_CTLS_COUNT; i++)
+ 		if (nid == ca0132_tuning_ctls[i].nid)
+-			break;
++			goto found;
+ 
++	return -EINVAL;
++found:
+ 	snd_hda_power_up(codec);
+ 	dspio_set_param(codec, ca0132_tuning_ctls[i].mid, 0x20,
+ 			ca0132_tuning_ctls[i].req,
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 48b802563c2da..e35c470eb4814 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -973,7 +973,10 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3905, "Lenovo G50-30", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x390b, "Lenovo G50-80", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
+-	SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_PINCFG_LENOVO_NOTEBOOK),
++	/* NOTE: we'd need to extend the quirk for 17aa:3977 as the same
++	 * PCI SSID is used on multiple Lenovo models
++	 */
++	SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo G50-70", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI),
+@@ -996,6 +999,7 @@ static const struct hda_model_fixup cxt5066_fixup_models[] = {
+ 	{ .id = CXT_FIXUP_MUTE_LED_GPIO, .name = "mute-led-gpio" },
+ 	{ .id = CXT_FIXUP_HP_ZBOOK_MUTE_LED, .name = "hp-zbook-mute-led" },
+ 	{ .id = CXT_FIXUP_HP_MIC_NO_PRESENCE, .name = "hp-mic-fix" },
++	{ .id = CXT_PINCFG_LENOVO_NOTEBOOK, .name = "lenovo-20149" },
+ 	{}
+ };
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2cf6600c9ca83..2af9cd7b7999c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9253,6 +9253,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x511e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
++	SND_PCI_QUIRK(0x17aa, 0x9e56, "Lenovo ZhaoYang CF4620Z", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1849, 0x1233, "ASRock NUC Box 1100", ALC233_FIXUP_NO_AUDIO_JACK),
+ 	SND_PCI_QUIRK(0x1849, 0xa233, "Positivo Master C6300", ALC269_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
+diff --git a/sound/pci/ymfpci/ymfpci.c b/sound/pci/ymfpci/ymfpci.c
+index 9b0d18a7bf356..27fd10b976f77 100644
+--- a/sound/pci/ymfpci/ymfpci.c
++++ b/sound/pci/ymfpci/ymfpci.c
+@@ -78,7 +78,8 @@ static int snd_ymfpci_create_gameport(struct snd_ymfpci *chip, int dev,
+ 
+ 		if (io_port == 1) {
+ 			/* auto-detect */
+-			if (!(io_port = pci_resource_start(chip->pci, 2)))
++			io_port = pci_resource_start(chip->pci, 2);
++			if (!io_port)
+ 				return -ENODEV;
+ 		}
+ 	} else {
+@@ -87,7 +88,8 @@ static int snd_ymfpci_create_gameport(struct snd_ymfpci *chip, int dev,
+ 			for (io_port = 0x201; io_port <= 0x205; io_port++) {
+ 				if (io_port == 0x203)
+ 					continue;
+-				if ((r = request_region(io_port, 1, "YMFPCI gameport")) != NULL)
++				r = request_region(io_port, 1, "YMFPCI gameport");
++				if (r)
+ 					break;
+ 			}
+ 			if (!r) {
+@@ -108,10 +110,13 @@ static int snd_ymfpci_create_gameport(struct snd_ymfpci *chip, int dev,
+ 		}
+ 	}
+ 
+-	if (!r && !(r = request_region(io_port, 1, "YMFPCI gameport"))) {
+-		dev_err(chip->card->dev,
+-			"joystick port %#x is in use.\n", io_port);
+-		return -EBUSY;
++	if (!r) {
++		r = request_region(io_port, 1, "YMFPCI gameport");
++		if (!r) {
++			dev_err(chip->card->dev,
++				"joystick port %#x is in use.\n", io_port);
++			return -EBUSY;
++		}
+ 	}
+ 
+ 	chip->gameport = gp = gameport_allocate_port();
+@@ -199,8 +204,9 @@ static int snd_card_ymfpci_probe(struct pci_dev *pci,
+ 			/* auto-detect */
+ 			fm_port[dev] = pci_resource_start(pci, 1);
+ 		}
+-		if (fm_port[dev] > 0 &&
+-		    (fm_res = request_region(fm_port[dev], 4, "YMFPCI OPL3")) != NULL) {
++		if (fm_port[dev] > 0)
++			fm_res = request_region(fm_port[dev], 4, "YMFPCI OPL3");
++		if (fm_res) {
+ 			legacy_ctrl |= YMFPCI_LEGACY_FMEN;
+ 			pci_write_config_word(pci, PCIR_DSXG_FMBASE, fm_port[dev]);
+ 		}
+@@ -208,8 +214,9 @@ static int snd_card_ymfpci_probe(struct pci_dev *pci,
+ 			/* auto-detect */
+ 			mpu_port[dev] = pci_resource_start(pci, 1) + 0x20;
+ 		}
+-		if (mpu_port[dev] > 0 &&
+-		    (mpu_res = request_region(mpu_port[dev], 2, "YMFPCI MPU401")) != NULL) {
++		if (mpu_port[dev] > 0)
++			mpu_res = request_region(mpu_port[dev], 2, "YMFPCI MPU401");
++		if (mpu_res) {
+ 			legacy_ctrl |= YMFPCI_LEGACY_MEN;
+ 			pci_write_config_word(pci, PCIR_DSXG_MPU401BASE, mpu_port[dev]);
+ 		}
+@@ -221,8 +228,9 @@ static int snd_card_ymfpci_probe(struct pci_dev *pci,
+ 		case 0x3a8: legacy_ctrl2 |= 3; break;
+ 		default: fm_port[dev] = 0; break;
+ 		}
+-		if (fm_port[dev] > 0 &&
+-		    (fm_res = request_region(fm_port[dev], 4, "YMFPCI OPL3")) != NULL) {
++		if (fm_port[dev] > 0)
++			fm_res = request_region(fm_port[dev], 4, "YMFPCI OPL3");
++		if (fm_res) {
+ 			legacy_ctrl |= YMFPCI_LEGACY_FMEN;
+ 		} else {
+ 			legacy_ctrl2 &= ~YMFPCI_LEGACY2_FMIO;
+@@ -235,8 +243,9 @@ static int snd_card_ymfpci_probe(struct pci_dev *pci,
+ 		case 0x334: legacy_ctrl2 |= 3 << 4; break;
+ 		default: mpu_port[dev] = 0; break;
+ 		}
+-		if (mpu_port[dev] > 0 &&
+-		    (mpu_res = request_region(mpu_port[dev], 2, "YMFPCI MPU401")) != NULL) {
++		if (mpu_port[dev] > 0)
++			mpu_res = request_region(mpu_port[dev], 2, "YMFPCI MPU401");
++		if (mpu_res) {
+ 			legacy_ctrl |= YMFPCI_LEGACY_MEN;
+ 		} else {
+ 			legacy_ctrl2 &= ~YMFPCI_LEGACY2_MPUIO;
+@@ -250,9 +259,8 @@ static int snd_card_ymfpci_probe(struct pci_dev *pci,
+ 	pci_read_config_word(pci, PCIR_DSXG_LEGACY, &old_legacy_ctrl);
+ 	pci_write_config_word(pci, PCIR_DSXG_LEGACY, legacy_ctrl);
+ 	pci_write_config_word(pci, PCIR_DSXG_ELEGACY, legacy_ctrl2);
+-	if ((err = snd_ymfpci_create(card, pci,
+-				     old_legacy_ctrl,
+-			 	     &chip)) < 0) {
++	err = snd_ymfpci_create(card, pci, old_legacy_ctrl, &chip);
++	if (err  < 0) {
+ 		release_and_free_resource(mpu_res);
+ 		release_and_free_resource(fm_res);
+ 		goto free_card;
+@@ -293,11 +301,12 @@ static int snd_card_ymfpci_probe(struct pci_dev *pci,
+ 		goto free_card;
+ 
+ 	if (chip->mpu_res) {
+-		if ((err = snd_mpu401_uart_new(card, 0, MPU401_HW_YMFPCI,
+-					       mpu_port[dev],
+-					       MPU401_INFO_INTEGRATED |
+-					       MPU401_INFO_IRQ_HOOK,
+-					       -1, &chip->rawmidi)) < 0) {
++		err = snd_mpu401_uart_new(card, 0, MPU401_HW_YMFPCI,
++					  mpu_port[dev],
++					  MPU401_INFO_INTEGRATED |
++					  MPU401_INFO_IRQ_HOOK,
++					  -1, &chip->rawmidi);
++		if (err < 0) {
+ 			dev_warn(card->dev,
+ 				 "cannot initialize MPU401 at 0x%lx, skipping...\n",
+ 				 mpu_port[dev]);
+@@ -306,18 +315,22 @@ static int snd_card_ymfpci_probe(struct pci_dev *pci,
+ 		}
+ 	}
+ 	if (chip->fm_res) {
+-		if ((err = snd_opl3_create(card,
+-					   fm_port[dev],
+-					   fm_port[dev] + 2,
+-					   OPL3_HW_OPL3, 1, &opl3)) < 0) {
++		err = snd_opl3_create(card,
++				      fm_port[dev],
++				      fm_port[dev] + 2,
++				      OPL3_HW_OPL3, 1, &opl3);
++		if (err < 0) {
+ 			dev_warn(card->dev,
+ 				 "cannot initialize FM OPL3 at 0x%lx, skipping...\n",
+ 				 fm_port[dev]);
+ 			legacy_ctrl &= ~YMFPCI_LEGACY_FMEN;
+ 			pci_write_config_word(pci, PCIR_DSXG_LEGACY, legacy_ctrl);
+-		} else if ((err = snd_opl3_hwdep_new(opl3, 0, 1, NULL)) < 0) {
+-			dev_err(card->dev, "cannot create opl3 hwdep\n");
+-			goto free_card;
++		} else {
++			err = snd_opl3_hwdep_new(opl3, 0, 1, NULL);
++			if (err < 0) {
++				dev_err(card->dev, "cannot create opl3 hwdep\n");
++				goto free_card;
++			}
+ 		}
+ 	}
+ 
+diff --git a/sound/pci/ymfpci/ymfpci_main.c b/sound/pci/ymfpci/ymfpci_main.c
+index cacc6a9d14c8b..0cd9b4029dab1 100644
+--- a/sound/pci/ymfpci/ymfpci_main.c
++++ b/sound/pci/ymfpci/ymfpci_main.c
+@@ -292,7 +292,8 @@ static void snd_ymfpci_pcm_interrupt(struct snd_ymfpci *chip, struct snd_ymfpci_
+ 	struct snd_ymfpci_pcm *ypcm;
+ 	u32 pos, delta;
+ 	
+-	if ((ypcm = voice->ypcm) == NULL)
++	ypcm = voice->ypcm;
++	if (!ypcm)
+ 		return;
+ 	if (ypcm->substream == NULL)
+ 		return;
+@@ -628,7 +629,8 @@ static int snd_ymfpci_playback_hw_params(struct snd_pcm_substream *substream,
+ 	struct snd_ymfpci_pcm *ypcm = runtime->private_data;
+ 	int err;
+ 
+-	if ((err = snd_ymfpci_pcm_voice_alloc(ypcm, params_channels(hw_params))) < 0)
++	err = snd_ymfpci_pcm_voice_alloc(ypcm, params_channels(hw_params));
++	if (err < 0)
+ 		return err;
+ 	return 0;
+ }
+@@ -932,7 +934,8 @@ static int snd_ymfpci_playback_open(struct snd_pcm_substream *substream)
+ 	struct snd_ymfpci_pcm *ypcm;
+ 	int err;
+ 	
+-	if ((err = snd_ymfpci_playback_open_1(substream)) < 0)
++	err = snd_ymfpci_playback_open_1(substream);
++	if (err < 0)
+ 		return err;
+ 	ypcm = runtime->private_data;
+ 	ypcm->output_front = 1;
+@@ -954,7 +957,8 @@ static int snd_ymfpci_playback_spdif_open(struct snd_pcm_substream *substream)
+ 	struct snd_ymfpci_pcm *ypcm;
+ 	int err;
+ 	
+-	if ((err = snd_ymfpci_playback_open_1(substream)) < 0)
++	err = snd_ymfpci_playback_open_1(substream);
++	if (err < 0)
+ 		return err;
+ 	ypcm = runtime->private_data;
+ 	ypcm->output_front = 0;
+@@ -982,7 +986,8 @@ static int snd_ymfpci_playback_4ch_open(struct snd_pcm_substream *substream)
+ 	struct snd_ymfpci_pcm *ypcm;
+ 	int err;
+ 	
+-	if ((err = snd_ymfpci_playback_open_1(substream)) < 0)
++	err = snd_ymfpci_playback_open_1(substream);
++	if (err < 0)
+ 		return err;
+ 	ypcm = runtime->private_data;
+ 	ypcm->output_front = 0;
+@@ -1124,7 +1129,8 @@ int snd_ymfpci_pcm(struct snd_ymfpci *chip, int device)
+ 	struct snd_pcm *pcm;
+ 	int err;
+ 
+-	if ((err = snd_pcm_new(chip->card, "YMFPCI", device, 32, 1, &pcm)) < 0)
++	err = snd_pcm_new(chip->card, "YMFPCI", device, 32, 1, &pcm);
++	if (err < 0)
+ 		return err;
+ 	pcm->private_data = chip;
+ 
+@@ -1157,7 +1163,8 @@ int snd_ymfpci_pcm2(struct snd_ymfpci *chip, int device)
+ 	struct snd_pcm *pcm;
+ 	int err;
+ 
+-	if ((err = snd_pcm_new(chip->card, "YMFPCI - PCM2", device, 0, 1, &pcm)) < 0)
++	err = snd_pcm_new(chip->card, "YMFPCI - PCM2", device, 0, 1, &pcm);
++	if (err < 0)
+ 		return err;
+ 	pcm->private_data = chip;
+ 
+@@ -1190,7 +1197,8 @@ int snd_ymfpci_pcm_spdif(struct snd_ymfpci *chip, int device)
+ 	struct snd_pcm *pcm;
+ 	int err;
+ 
+-	if ((err = snd_pcm_new(chip->card, "YMFPCI - IEC958", device, 1, 0, &pcm)) < 0)
++	err = snd_pcm_new(chip->card, "YMFPCI - IEC958", device, 1, 0, &pcm);
++	if (err < 0)
+ 		return err;
+ 	pcm->private_data = chip;
+ 
+@@ -1230,7 +1238,8 @@ int snd_ymfpci_pcm_4ch(struct snd_ymfpci *chip, int device)
+ 	struct snd_pcm *pcm;
+ 	int err;
+ 
+-	if ((err = snd_pcm_new(chip->card, "YMFPCI - Rear", device, 1, 0, &pcm)) < 0)
++	err = snd_pcm_new(chip->card, "YMFPCI - Rear", device, 1, 0, &pcm);
++	if (err < 0)
+ 		return err;
+ 	pcm->private_data = chip;
+ 
+@@ -1785,7 +1794,8 @@ int snd_ymfpci_mixer(struct snd_ymfpci *chip, int rear_switch)
+ 		.read = snd_ymfpci_codec_read,
+ 	};
+ 
+-	if ((err = snd_ac97_bus(chip->card, 0, &ops, chip, &chip->ac97_bus)) < 0)
++	err = snd_ac97_bus(chip->card, 0, &ops, chip, &chip->ac97_bus);
++	if (err < 0)
+ 		return err;
+ 	chip->ac97_bus->private_free = snd_ymfpci_mixer_free_ac97_bus;
+ 	chip->ac97_bus->no_vra = 1; /* YMFPCI doesn't need VRA */
+@@ -1793,7 +1803,8 @@ int snd_ymfpci_mixer(struct snd_ymfpci *chip, int rear_switch)
+ 	memset(&ac97, 0, sizeof(ac97));
+ 	ac97.private_data = chip;
+ 	ac97.private_free = snd_ymfpci_mixer_free_ac97;
+-	if ((err = snd_ac97_mixer(chip->ac97_bus, &ac97, &chip->ac97)) < 0)
++	err = snd_ac97_mixer(chip->ac97_bus, &ac97, &chip->ac97);
++	if (err < 0)
+ 		return err;
+ 
+ 	/* to be sure */
+@@ -1801,7 +1812,8 @@ int snd_ymfpci_mixer(struct snd_ymfpci *chip, int rear_switch)
+ 			     AC97_EA_VRA|AC97_EA_VRM, 0);
+ 
+ 	for (idx = 0; idx < ARRAY_SIZE(snd_ymfpci_controls); idx++) {
+-		if ((err = snd_ctl_add(chip->card, snd_ctl_new1(&snd_ymfpci_controls[idx], chip))) < 0)
++		err = snd_ctl_add(chip->card, snd_ctl_new1(&snd_ymfpci_controls[idx], chip));
++		if (err < 0)
+ 			return err;
+ 	}
+ 	if (chip->ac97->ext_id & AC97_EI_SDAC) {
+@@ -1814,27 +1826,37 @@ int snd_ymfpci_mixer(struct snd_ymfpci *chip, int rear_switch)
+ 	/* add S/PDIF control */
+ 	if (snd_BUG_ON(!chip->pcm_spdif))
+ 		return -ENXIO;
+-	if ((err = snd_ctl_add(chip->card, kctl = snd_ctl_new1(&snd_ymfpci_spdif_default, chip))) < 0)
++	kctl = snd_ctl_new1(&snd_ymfpci_spdif_default, chip);
++	err = snd_ctl_add(chip->card, kctl);
++	if (err < 0)
+ 		return err;
+ 	kctl->id.device = chip->pcm_spdif->device;
+-	if ((err = snd_ctl_add(chip->card, kctl = snd_ctl_new1(&snd_ymfpci_spdif_mask, chip))) < 0)
++	kctl = snd_ctl_new1(&snd_ymfpci_spdif_mask, chip);
++	err = snd_ctl_add(chip->card, kctl);
++	if (err < 0)
+ 		return err;
+ 	kctl->id.device = chip->pcm_spdif->device;
+-	if ((err = snd_ctl_add(chip->card, kctl = snd_ctl_new1(&snd_ymfpci_spdif_stream, chip))) < 0)
++	kctl = snd_ctl_new1(&snd_ymfpci_spdif_stream, chip);
++	err = snd_ctl_add(chip->card, kctl);
++	if (err < 0)
+ 		return err;
+ 	kctl->id.device = chip->pcm_spdif->device;
+ 	chip->spdif_pcm_ctl = kctl;
+ 
+ 	/* direct recording source */
+-	if (chip->device_id == PCI_DEVICE_ID_YAMAHA_754 &&
+-	    (err = snd_ctl_add(chip->card, kctl = snd_ctl_new1(&snd_ymfpci_drec_source, chip))) < 0)
+-		return err;
++	if (chip->device_id == PCI_DEVICE_ID_YAMAHA_754) {
++		kctl = snd_ctl_new1(&snd_ymfpci_drec_source, chip);
++		err = snd_ctl_add(chip->card, kctl);
++		if (err < 0)
++			return err;
++	}
+ 
+ 	/*
+ 	 * shared rear/line-in
+ 	 */
+ 	if (rear_switch) {
+-		if ((err = snd_ctl_add(chip->card, snd_ctl_new1(&snd_ymfpci_rear_shared, chip))) < 0)
++		err = snd_ctl_add(chip->card, snd_ctl_new1(&snd_ymfpci_rear_shared, chip));
++		if (err < 0)
+ 			return err;
+ 	}
+ 
+@@ -1847,7 +1869,8 @@ int snd_ymfpci_mixer(struct snd_ymfpci *chip, int rear_switch)
+ 		kctl->id.device = chip->pcm->device;
+ 		kctl->id.subdevice = idx;
+ 		kctl->private_value = (unsigned long)substream;
+-		if ((err = snd_ctl_add(chip->card, kctl)) < 0)
++		err = snd_ctl_add(chip->card, kctl);
++		if (err < 0)
+ 			return err;
+ 		chip->pcm_mixer[idx].left = 0x8000;
+ 		chip->pcm_mixer[idx].right = 0x8000;
+@@ -1928,7 +1951,8 @@ int snd_ymfpci_timer(struct snd_ymfpci *chip, int device)
+ 	tid.card = chip->card->number;
+ 	tid.device = device;
+ 	tid.subdevice = 0;
+-	if ((err = snd_timer_new(chip->card, "YMFPCI", &tid, &timer)) >= 0) {
++	err = snd_timer_new(chip->card, "YMFPCI", &tid, &timer);
++	if (err >= 0) {
+ 		strcpy(timer->name, "YMFPCI timer");
+ 		timer->private_data = chip;
+ 		timer->hw = snd_ymfpci_timer_hw;
+@@ -2140,7 +2164,7 @@ static int snd_ymfpci_memalloc(struct snd_ymfpci *chip)
+ 	chip->work_base = ptr;
+ 	chip->work_base_addr = ptr_addr;
+ 	
+-	snd_BUG_ON(ptr + chip->work_size !=
++	snd_BUG_ON(ptr + PAGE_ALIGN(chip->work_size) !=
+ 		   chip->work_ptr.area + chip->work_ptr.bytes);
+ 
+ 	snd_ymfpci_writel(chip, YDSXGR_PLAYCTRLBASE, chip->bank_base_playback_addr);
+@@ -2334,7 +2358,8 @@ int snd_ymfpci_create(struct snd_card *card,
+ 	*rchip = NULL;
+ 
+ 	/* enable PCI device */
+-	if ((err = pci_enable_device(pci)) < 0)
++	err = pci_enable_device(pci);
++	if (err < 0)
+ 		return err;
+ 
+ 	chip = kzalloc(sizeof(*chip), GFP_KERNEL);
+@@ -2357,7 +2382,8 @@ int snd_ymfpci_create(struct snd_card *card,
+ 	pci_set_master(pci);
+ 	chip->src441_used = -1;
+ 
+-	if ((chip->res_reg_area = request_mem_region(chip->reg_area_phys, 0x8000, "YMFPCI")) == NULL) {
++	chip->res_reg_area = request_mem_region(chip->reg_area_phys, 0x8000, "YMFPCI");
++	if (!chip->res_reg_area) {
+ 		dev_err(chip->card->dev,
+ 			"unable to grab memory region 0x%lx-0x%lx\n",
+ 			chip->reg_area_phys, chip->reg_area_phys + 0x8000 - 1);
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index e8a63ea2189d1..e0fda244a942c 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -40,8 +40,12 @@ static u64 parse_audio_format_i_type(struct snd_usb_audio *chip,
+ 	case UAC_VERSION_1:
+ 	default: {
+ 		struct uac_format_type_i_discrete_descriptor *fmt = _fmt;
+-		if (format >= 64)
+-			return 0; /* invalid format */
++		if (format >= 64) {
++			usb_audio_info(chip,
++				       "%u:%d: invalid format type 0x%llx is detected, processed as PCM\n",
++				       fp->iface, fp->altsetting, format);
++			format = UAC_FORMAT_TYPE_I_PCM;
++		}
+ 		sample_width = fmt->bBitResolution;
+ 		sample_bytes = fmt->bSubframeSize;
+ 		format = 1ULL << format;
+diff --git a/tools/bootconfig/test-bootconfig.sh b/tools/bootconfig/test-bootconfig.sh
+index baed891d0ba49..e36f178f7dcbf 100755
+--- a/tools/bootconfig/test-bootconfig.sh
++++ b/tools/bootconfig/test-bootconfig.sh
+@@ -87,10 +87,14 @@ xfail grep -i "error" $OUTFILE
+ 
+ echo "Max node number check"
+ 
+-echo -n > $TEMPCONF
+-for i in `seq 1 1024` ; do
+-   echo "node$i" >> $TEMPCONF
+-done
++awk '
++BEGIN {
++  for (i = 0; i < 26; i += 1)
++      printf("%c\n", 65 + i % 26)
++  for (i = 26; i < 8192; i += 1)
++      printf("%c%c%c\n", 65 + i % 26, 65 + (i / 26) % 26, 65 + (i / 26 / 26))
++}
++' > $TEMPCONF
+ xpass $BOOTCONF -a $TEMPCONF $INITRD
+ 
+ echo "badnode" >> $TEMPCONF
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 0e2d63da24e91..558d34fbd331c 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -792,14 +792,9 @@ static bool btf_is_struct_packed(const struct btf *btf, __u32 id,
+ 				 const struct btf_type *t)
+ {
+ 	const struct btf_member *m;
+-	int align, i, bit_sz;
++	int max_align = 1, align, i, bit_sz;
+ 	__u16 vlen;
+ 
+-	align = btf__align_of(btf, id);
+-	/* size of a non-packed struct has to be a multiple of its alignment*/
+-	if (align && t->size % align)
+-		return true;
+-
+ 	m = btf_members(t);
+ 	vlen = btf_vlen(t);
+ 	/* all non-bitfield fields have to be naturally aligned */
+@@ -808,8 +803,11 @@ static bool btf_is_struct_packed(const struct btf *btf, __u32 id,
+ 		bit_sz = btf_member_bitfield_size(t, i);
+ 		if (align && bit_sz == 0 && m->offset % (8 * align) != 0)
+ 			return true;
++		max_align = max(align, max_align);
+ 	}
+-
++	/* size of a non-packed struct has to be a multiple of its alignment */
++	if (t->size % max_align != 0)
++		return true;
+ 	/*
+ 	 * if original struct was marked as packed, but its layout is
+ 	 * naturally aligned, we'll detect that it's not packed
+@@ -817,44 +815,97 @@ static bool btf_is_struct_packed(const struct btf *btf, __u32 id,
+ 	return false;
+ }
+ 
+-static int chip_away_bits(int total, int at_most)
+-{
+-	return total % at_most ? : at_most;
+-}
+-
+ static void btf_dump_emit_bit_padding(const struct btf_dump *d,
+-				      int cur_off, int m_off, int m_bit_sz,
+-				      int align, int lvl)
++				      int cur_off, int next_off, int next_align,
++				      bool in_bitfield, int lvl)
+ {
+-	int off_diff = m_off - cur_off;
+-	int ptr_bits = d->ptr_sz * 8;
++	const struct {
++		const char *name;
++		int bits;
++	} pads[] = {
++		{"long", d->ptr_sz * 8}, {"int", 32}, {"short", 16}, {"char", 8}
++	};
++	int new_off, pad_bits, bits, i;
++	const char *pad_type;
++
++	if (cur_off >= next_off)
++		return; /* no gap */
++
++	/* For filling out padding we want to take advantage of
++	 * natural alignment rules to minimize unnecessary explicit
++	 * padding. First, we find the largest type (among long, int,
++	 * short, or char) that can be used to force naturally aligned
++	 * boundary. Once determined, we'll use such type to fill in
++	 * the remaining padding gap. In some cases we can rely on
++	 * compiler filling some gaps, but sometimes we need to force
++	 * alignment to close natural alignment with markers like
++	 * `long: 0` (this is always the case for bitfields).  Note
++	 * that even if struct itself has, let's say 4-byte alignment
++	 * (i.e., it only uses up to int-aligned types), using `long:
++	 * X;` explicit padding doesn't actually change struct's
++	 * overall alignment requirements, but compiler does take into
++	 * account that type's (long, in this example) natural
++	 * alignment requirements when adding implicit padding. We use
++	 * this fact heavily and don't worry about ruining correct
++	 * struct alignment requirement.
++	 */
++	for (i = 0; i < ARRAY_SIZE(pads); i++) {
++		pad_bits = pads[i].bits;
++		pad_type = pads[i].name;
+ 
+-	if (off_diff <= 0)
+-		/* no gap */
+-		return;
+-	if (m_bit_sz == 0 && off_diff < align * 8)
+-		/* natural padding will take care of a gap */
+-		return;
++		new_off = roundup(cur_off, pad_bits);
++		if (new_off <= next_off)
++			break;
++	}
+ 
+-	while (off_diff > 0) {
+-		const char *pad_type;
+-		int pad_bits;
+-
+-		if (ptr_bits > 32 && off_diff > 32) {
+-			pad_type = "long";
+-			pad_bits = chip_away_bits(off_diff, ptr_bits);
+-		} else if (off_diff > 16) {
+-			pad_type = "int";
+-			pad_bits = chip_away_bits(off_diff, 32);
+-		} else if (off_diff > 8) {
+-			pad_type = "short";
+-			pad_bits = chip_away_bits(off_diff, 16);
+-		} else {
+-			pad_type = "char";
+-			pad_bits = chip_away_bits(off_diff, 8);
++	if (new_off > cur_off && new_off <= next_off) {
++		/* We need explicit `<type>: 0` aligning mark if next
++		 * field is right on alignment offset and its
++		 * alignment requirement is less strict than <type>'s
++		 * alignment (so compiler won't naturally align to the
++		 * offset we expect), or if subsequent `<type>: X`,
++		 * will actually completely fit in the remaining hole,
++		 * making compiler basically ignore `<type>: X`
++		 * completely.
++		 */
++		if (in_bitfield ||
++		    (new_off == next_off && roundup(cur_off, next_align * 8) != new_off) ||
++		    (new_off != next_off && next_off - new_off <= new_off - cur_off))
++			/* but for bitfields we'll emit explicit bit count */
++			btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type,
++					in_bitfield ? new_off - cur_off : 0);
++		cur_off = new_off;
++	}
++
++	/* Now we know we start at naturally aligned offset for a chosen
++	 * padding type (long, int, short, or char), and so the rest is just
++	 * a straightforward filling of remaining padding gap with full
++	 * `<type>: sizeof(<type>);` markers, except for the last one, which
++	 * might need smaller than sizeof(<type>) padding.
++	 */
++	while (cur_off != next_off) {
++		bits = min(next_off - cur_off, pad_bits);
++		if (bits == pad_bits) {
++			btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type, pad_bits);
++			cur_off += bits;
++			continue;
++		}
++		/* For the remainder padding that doesn't cover entire
++		 * pad_type bit length, we pick the smallest necessary type.
++		 * This is pure aesthetics, we could have just used `long`,
++		 * but having smallest necessary one communicates better the
++		 * scale of the padding gap.
++		 */
++		for (i = ARRAY_SIZE(pads) - 1; i >= 0; i--) {
++			pad_type = pads[i].name;
++			pad_bits = pads[i].bits;
++			if (pad_bits < bits)
++				continue;
++
++			btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type, bits);
++			cur_off += bits;
++			break;
+ 		}
+-		btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type, pad_bits);
+-		off_diff -= pad_bits;
+ 	}
+ }
+ 
+@@ -873,9 +924,11 @@ static void btf_dump_emit_struct_def(struct btf_dump *d,
+ {
+ 	const struct btf_member *m = btf_members(t);
+ 	bool is_struct = btf_is_struct(t);
+-	int align, i, packed, off = 0;
++	bool packed, prev_bitfield = false;
++	int align, i, off = 0;
+ 	__u16 vlen = btf_vlen(t);
+ 
++	align = btf__align_of(d->btf, id);
+ 	packed = is_struct ? btf_is_struct_packed(d->btf, id, t) : 0;
+ 
+ 	btf_dump_printf(d, "%s%s%s {",
+@@ -885,33 +938,36 @@ static void btf_dump_emit_struct_def(struct btf_dump *d,
+ 
+ 	for (i = 0; i < vlen; i++, m++) {
+ 		const char *fname;
+-		int m_off, m_sz;
++		int m_off, m_sz, m_align;
++		bool in_bitfield;
+ 
+ 		fname = btf_name_of(d, m->name_off);
+ 		m_sz = btf_member_bitfield_size(t, i);
+ 		m_off = btf_member_bit_offset(t, i);
+-		align = packed ? 1 : btf__align_of(d->btf, m->type);
++		m_align = packed ? 1 : btf__align_of(d->btf, m->type);
+ 
+-		btf_dump_emit_bit_padding(d, off, m_off, m_sz, align, lvl + 1);
++		in_bitfield = prev_bitfield && m_sz != 0;
++
++		btf_dump_emit_bit_padding(d, off, m_off, m_align, in_bitfield, lvl + 1);
+ 		btf_dump_printf(d, "\n%s", pfx(lvl + 1));
+ 		btf_dump_emit_type_decl(d, m->type, fname, lvl + 1);
+ 
+ 		if (m_sz) {
+ 			btf_dump_printf(d, ": %d", m_sz);
+ 			off = m_off + m_sz;
++			prev_bitfield = true;
+ 		} else {
+ 			m_sz = max((__s64)0, btf__resolve_size(d->btf, m->type));
+ 			off = m_off + m_sz * 8;
++			prev_bitfield = false;
+ 		}
++
+ 		btf_dump_printf(d, ";");
+ 	}
+ 
+ 	/* pad at the end, if necessary */
+-	if (is_struct) {
+-		align = packed ? 1 : btf__align_of(d->btf, id);
+-		btf_dump_emit_bit_padding(d, off, t->size * 8, 0, align,
+-					  lvl + 1);
+-	}
++	if (is_struct)
++		btf_dump_emit_bit_padding(d, off, t->size * 8, align, false, lvl + 1);
+ 
+ 	if (vlen)
+ 		btf_dump_printf(d, "\n");
+diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8
+index f6b7e85b121ce..71e3f3a68b9df 100644
+--- a/tools/power/x86/turbostat/turbostat.8
++++ b/tools/power/x86/turbostat/turbostat.8
+@@ -294,6 +294,8 @@ Alternatively, non-root users can be enabled to run turbostat this way:
+ 
+ # chmod +r /dev/cpu/*/msr
+ 
++# chmod +r /dev/cpu_dma_latency
++
+ .B "turbostat "
+ reads hardware counters, but doesn't write them.
+ So it will not interfere with the OS or other programs, including
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index ef65f7eed1ec9..d33c9d427e573 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -5004,7 +5004,7 @@ void print_dev_latency(void)
+ 
+ 	retval = read(fd, (void *)&value, sizeof(int));
+ 	if (retval != sizeof(int)) {
+-		warn("read %s\n", path);
++		warn("read failed %s\n", path);
+ 		close(fd);
+ 		return;
+ 	}
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
+index 48b01150e703f..28d22265b8253 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf.c
+@@ -882,6 +882,34 @@ static struct btf_raw_test raw_tests[] = {
+ 	.btf_load_err = true,
+ 	.err_str = "Invalid elem",
+ },
++{
++	.descr = "var after datasec, ptr followed by modifier",
++	.raw_types = {
++		/* .bss section */				/* [1] */
++		BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 2),
++			sizeof(void*)+4),
++		BTF_VAR_SECINFO_ENC(4, 0, sizeof(void*)),
++		BTF_VAR_SECINFO_ENC(6, sizeof(void*), 4),
++		/* int */					/* [2] */
++		BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),
++		/* int* */					/* [3] */
++		BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2),
++		BTF_VAR_ENC(NAME_TBD, 3, 0),			/* [4] */
++		/* const int */					/* [5] */
++		BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_CONST, 0, 0), 2),
++		BTF_VAR_ENC(NAME_TBD, 5, 0),			/* [6] */
++		BTF_END_RAW,
++	},
++	.str_sec = "\0a\0b\0c\0",
++	.str_sec_size = sizeof("\0a\0b\0c\0"),
++	.map_type = BPF_MAP_TYPE_ARRAY,
++	.map_name = ".bss",
++	.key_size = sizeof(int),
++	.value_size = sizeof(void*)+4,
++	.key_type_id = 0,
++	.value_type_id = 1,
++	.max_entries = 1,
++},
+ /* Test member exceeds the size of struct.
+  *
+  * struct A {
+diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_bitfields.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_bitfields.c
+index 8f44767a75fa5..22a7cd8fd9acf 100644
+--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_bitfields.c
++++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_bitfields.c
+@@ -53,7 +53,7 @@ struct bitfields_only_mixed_types {
+  */
+ /* ------ END-EXPECTED-OUTPUT ------ */
+ struct bitfield_mixed_with_others {
+-	long: 4; /* char is enough as a backing field */
++	char: 4; /* char is enough as a backing field */
+ 	int a: 4;
+ 	/* 8-bit implicit padding */
+ 	short b; /* combined with previous bitfield */
+diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_packing.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_packing.c
+index 1cef3bec1dc7f..22dbd12134347 100644
+--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_packing.c
++++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_packing.c
+@@ -58,7 +58,81 @@ union jump_code_union {
+ 	} __attribute__((packed));
+ };
+ 
+-/*------ END-EXPECTED-OUTPUT ------ */
++/* ----- START-EXPECTED-OUTPUT ----- */
++/*
++ *struct nested_packed_but_aligned_struct {
++ *	int x1;
++ *	int x2;
++ *};
++ *
++ *struct outer_implicitly_packed_struct {
++ *	char y1;
++ *	struct nested_packed_but_aligned_struct y2;
++ *} __attribute__((packed));
++ *
++ */
++/* ------ END-EXPECTED-OUTPUT ------ */
++
++struct nested_packed_but_aligned_struct {
++	int x1;
++	int x2;
++} __attribute__((packed));
++
++struct outer_implicitly_packed_struct {
++	char y1;
++	struct nested_packed_but_aligned_struct y2;
++};
++/* ----- START-EXPECTED-OUTPUT ----- */
++/*
++ *struct usb_ss_ep_comp_descriptor {
++ *	char: 8;
++ *	char bDescriptorType;
++ *	char bMaxBurst;
++ *	short wBytesPerInterval;
++ *};
++ *
++ *struct usb_host_endpoint {
++ *	long: 64;
++ *	char: 8;
++ *	struct usb_ss_ep_comp_descriptor ss_ep_comp;
++ *	long: 0;
++ *} __attribute__((packed));
++ *
++ */
++/* ------ END-EXPECTED-OUTPUT ------ */
++
++struct usb_ss_ep_comp_descriptor {
++	char: 8;
++	char bDescriptorType;
++	char bMaxBurst;
++	int: 0;
++	short wBytesPerInterval;
++} __attribute__((packed));
++
++struct usb_host_endpoint {
++	long: 64;
++	char: 8;
++	struct usb_ss_ep_comp_descriptor ss_ep_comp;
++	long: 0;
++};
++
++/* ----- START-EXPECTED-OUTPUT ----- */
++struct nested_packed_struct {
++	int a;
++	char b;
++} __attribute__((packed));
++
++struct outer_nonpacked_struct {
++	short a;
++	struct nested_packed_struct b;
++};
++
++struct outer_packed_struct {
++	short a;
++	struct nested_packed_struct b;
++} __attribute__((packed));
++
++/* ------ END-EXPECTED-OUTPUT ------ */
+ 
+ int f(struct {
+ 	struct packed_trailing_space _1;
+@@ -69,6 +143,10 @@ int f(struct {
+ 	union union_is_never_packed _6;
+ 	union union_does_not_need_packing _7;
+ 	union jump_code_union _8;
++	struct outer_implicitly_packed_struct _9;
++	struct usb_host_endpoint _10;
++	struct outer_nonpacked_struct _11;
++	struct outer_packed_struct _12;
+ } *_)
+ {
+ 	return 0;
+diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_padding.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_padding.c
+index 35c512818a56b..0b3cdffbfcf71 100644
+--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_padding.c
++++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_padding.c
+@@ -19,7 +19,7 @@ struct padded_implicitly {
+ /*
+  *struct padded_explicitly {
+  *	int a;
+- *	int: 32;
++ *	long: 0;
+  *	int b;
+  *};
+  *
+@@ -28,41 +28,28 @@ struct padded_implicitly {
+ 
+ struct padded_explicitly {
+ 	int a;
+-	int: 1; /* algo will explicitly pad with full 32 bits here */
++	int: 1; /* algo will emit aligning `long: 0;` here */
+ 	int b;
+ };
+ 
+ /* ----- START-EXPECTED-OUTPUT ----- */
+-/*
+- *struct padded_a_lot {
+- *	int a;
+- *	long: 32;
+- *	long: 64;
+- *	long: 64;
+- *	int b;
+- *};
+- *
+- */
+-/* ------ END-EXPECTED-OUTPUT ------ */
+-
+ struct padded_a_lot {
+ 	int a;
+-	/* 32 bit of implicit padding here, which algo will make explicit */
+ 	long: 64;
+ 	long: 64;
+ 	int b;
+ };
+ 
++/* ------ END-EXPECTED-OUTPUT ------ */
++
+ /* ----- START-EXPECTED-OUTPUT ----- */
+ /*
+  *struct padded_cache_line {
+  *	int a;
+- *	long: 32;
+  *	long: 64;
+  *	long: 64;
+  *	long: 64;
+  *	int b;
+- *	long: 32;
+  *	long: 64;
+  *	long: 64;
+  *	long: 64;
+@@ -85,7 +72,7 @@ struct padded_cache_line {
+  *struct zone {
+  *	int a;
+  *	short b;
+- *	short: 16;
++ *	long: 0;
+  *	struct zone_padding __pad__;
+  *};
+  *
+@@ -102,12 +89,160 @@ struct zone {
+ 	struct zone_padding __pad__;
+ };
+ 
++/* ----- START-EXPECTED-OUTPUT ----- */
++struct padding_wo_named_members {
++	long: 64;
++	long: 64;
++};
++
++struct padding_weird_1 {
++	int a;
++	long: 64;
++	short: 16;
++	short b;
++};
++
++/* ------ END-EXPECTED-OUTPUT ------ */
++
++/* ----- START-EXPECTED-OUTPUT ----- */
++/*
++ *struct padding_weird_2 {
++ *	long: 56;
++ *	char a;
++ *	long: 56;
++ *	char b;
++ *	char: 8;
++ *};
++ *
++ */
++/* ------ END-EXPECTED-OUTPUT ------ */
++struct padding_weird_2 {
++	int: 32;	/* these paddings will be collapsed into `long: 56;` */
++	short: 16;
++	char: 8;
++	char a;
++	int: 32;	/* these paddings will be collapsed into `long: 56;` */
++	short: 16;
++	char: 8;
++	char b;
++	char: 8;
++};
++
++/* ----- START-EXPECTED-OUTPUT ----- */
++struct exact_1byte {
++	char x;
++};
++
++struct padded_1byte {
++	char: 8;
++};
++
++struct exact_2bytes {
++	short x;
++};
++
++struct padded_2bytes {
++	short: 16;
++};
++
++struct exact_4bytes {
++	int x;
++};
++
++struct padded_4bytes {
++	int: 32;
++};
++
++struct exact_8bytes {
++	long x;
++};
++
++struct padded_8bytes {
++	long: 64;
++};
++
++struct ff_periodic_effect {
++	int: 32;
++	short magnitude;
++	long: 0;
++	short phase;
++	long: 0;
++	int: 32;
++	int custom_len;
++	short *custom_data;
++};
++
++struct ib_wc {
++	long: 64;
++	long: 64;
++	int: 32;
++	int byte_len;
++	void *qp;
++	union {} ex;
++	long: 64;
++	int slid;
++	int wc_flags;
++	long: 64;
++	char smac[6];
++	long: 0;
++	char network_hdr_type;
++};
++
++struct acpi_object_method {
++	long: 64;
++	char: 8;
++	char type;
++	short reference_count;
++	char flags;
++	short: 0;
++	char: 8;
++	char sync_level;
++	long: 64;
++	void *node;
++	void *aml_start;
++	union {} dispatch;
++	long: 64;
++	int aml_length;
++};
++
++struct nested_unpacked {
++	int x;
++};
++
++struct nested_packed {
++	struct nested_unpacked a;
++	char c;
++} __attribute__((packed));
++
++struct outer_mixed_but_unpacked {
++	struct nested_packed b1;
++	short a1;
++	struct nested_packed b2;
++};
++
++/* ------ END-EXPECTED-OUTPUT ------ */
++
+ int f(struct {
+ 	struct padded_implicitly _1;
+ 	struct padded_explicitly _2;
+ 	struct padded_a_lot _3;
+ 	struct padded_cache_line _4;
+ 	struct zone _5;
++	struct padding_wo_named_members _6;
++	struct padding_weird_1 _7;
++	struct padding_weird_2 _8;
++	struct exact_1byte _100;
++	struct padded_1byte _101;
++	struct exact_2bytes _102;
++	struct padded_2bytes _103;
++	struct exact_4bytes _104;
++	struct padded_4bytes _105;
++	struct exact_8bytes _106;
++	struct padded_8bytes _107;
++	struct ff_periodic_effect _200;
++	struct ib_wc _201;
++	struct acpi_object_method _202;
++	struct outer_mixed_but_unpacked _203;
+ } *_)
+ {
+ 	return 0;
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 564d5c145fbe7..356fd5d1a4285 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -154,6 +154,8 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm);
+ static unsigned long long kvm_createvm_count;
+ static unsigned long long kvm_active_vms;
+ 
++static DEFINE_PER_CPU(cpumask_var_t, cpu_kick_mask);
++
+ __weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+ 						   unsigned long start, unsigned long end)
+ {
+@@ -248,9 +250,13 @@ static void ack_flush(void *_completed)
+ {
+ }
+ 
+-static inline bool kvm_kick_many_cpus(const struct cpumask *cpus, bool wait)
++static inline bool kvm_kick_many_cpus(cpumask_var_t tmp, bool wait)
+ {
+-	if (unlikely(!cpus))
++	const struct cpumask *cpus;
++
++	if (likely(cpumask_available(tmp)))
++		cpus = tmp;
++	else
+ 		cpus = cpu_online_mask;
+ 
+ 	if (cpumask_empty(cpus))
+@@ -260,30 +266,57 @@ static inline bool kvm_kick_many_cpus(const struct cpumask *cpus, bool wait)
+ 	return true;
+ }
+ 
++static void kvm_make_vcpu_request(struct kvm *kvm, struct kvm_vcpu *vcpu,
++				  unsigned int req, cpumask_var_t tmp,
++				  int current_cpu)
++{
++	int cpu;
++
++	kvm_make_request(req, vcpu);
++
++	if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
++		return;
++
++	/*
++	 * tmp can be "unavailable" if cpumasks are allocated off stack as
++	 * allocation of the mask is deliberately not fatal and is handled by
++	 * falling back to kicking all online CPUs.
++	 */
++	if (!cpumask_available(tmp))
++		return;
++
++	/*
++	 * Note, the vCPU could get migrated to a different pCPU at any point
++	 * after kvm_request_needs_ipi(), which could result in sending an IPI
++	 * to the previous pCPU.  But, that's OK because the purpose of the IPI
++	 * is to ensure the vCPU returns to OUTSIDE_GUEST_MODE, which is
++	 * satisfied if the vCPU migrates. Entering READING_SHADOW_PAGE_TABLES
++	 * after this point is also OK, as the requirement is only that KVM wait
++	 * for vCPUs that were reading SPTEs _before_ any changes were
++	 * finalized. See kvm_vcpu_kick() for more details on handling requests.
++	 */
++	if (kvm_request_needs_ipi(vcpu, req)) {
++		cpu = READ_ONCE(vcpu->cpu);
++		if (cpu != -1 && cpu != current_cpu)
++			__cpumask_set_cpu(cpu, tmp);
++	}
++}
++
+ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
+ 				 struct kvm_vcpu *except,
+ 				 unsigned long *vcpu_bitmap, cpumask_var_t tmp)
+ {
+-	int i, cpu, me;
+ 	struct kvm_vcpu *vcpu;
++	int i, me;
+ 	bool called;
+ 
+ 	me = get_cpu();
+ 
+-	kvm_for_each_vcpu(i, vcpu, kvm) {
+-		if ((vcpu_bitmap && !test_bit(i, vcpu_bitmap)) ||
+-		    vcpu == except)
+-			continue;
+-
+-		kvm_make_request(req, vcpu);
+-		cpu = vcpu->cpu;
+-
+-		if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
++	for_each_set_bit(i, vcpu_bitmap, KVM_MAX_VCPUS) {
++		vcpu = kvm_get_vcpu(kvm, i);
++		if (!vcpu || vcpu == except)
+ 			continue;
+-
+-		if (tmp != NULL && cpu != -1 && cpu != me &&
+-		    kvm_request_needs_ipi(vcpu, req))
+-			__cpumask_set_cpu(cpu, tmp);
++		kvm_make_vcpu_request(kvm, vcpu, req, tmp, me);
+ 	}
+ 
+ 	called = kvm_kick_many_cpus(tmp, !!(req & KVM_REQUEST_WAIT));
+@@ -295,14 +328,25 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
+ bool kvm_make_all_cpus_request_except(struct kvm *kvm, unsigned int req,
+ 				      struct kvm_vcpu *except)
+ {
+-	cpumask_var_t cpus;
++	struct kvm_vcpu *vcpu;
++	struct cpumask *cpus;
+ 	bool called;
++	int i, me;
+ 
+-	zalloc_cpumask_var(&cpus, GFP_ATOMIC);
++	me = get_cpu();
+ 
+-	called = kvm_make_vcpus_request_mask(kvm, req, except, NULL, cpus);
++	cpus = this_cpu_cpumask_var_ptr(cpu_kick_mask);
++	cpumask_clear(cpus);
++
++	kvm_for_each_vcpu(i, vcpu, kvm) {
++		if (vcpu == except)
++			continue;
++		kvm_make_vcpu_request(kvm, vcpu, req, cpus, me);
++	}
++
++	called = kvm_kick_many_cpus(cpus, !!(req & KVM_REQUEST_WAIT));
++	put_cpu();
+ 
+-	free_cpumask_var(cpus);
+ 	return called;
+ }
+ 
+@@ -2937,16 +2981,24 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_wake_up);
+  */
+ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
+ {
+-	int me;
+-	int cpu = vcpu->cpu;
++	int me, cpu;
+ 
+ 	if (kvm_vcpu_wake_up(vcpu))
+ 		return;
+ 
++	/*
++	 * Note, the vCPU could get migrated to a different pCPU at any point
++	 * after kvm_arch_vcpu_should_kick(), which could result in sending an
++	 * IPI to the previous pCPU.  But, that's ok because the purpose of the
++	 * IPI is to force the vCPU to leave IN_GUEST_MODE, and migrating the
++	 * vCPU also requires it to leave IN_GUEST_MODE.
++	 */
+ 	me = get_cpu();
+-	if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
+-		if (kvm_arch_vcpu_should_kick(vcpu))
++	if (kvm_arch_vcpu_should_kick(vcpu)) {
++		cpu = READ_ONCE(vcpu->cpu);
++		if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
+ 			smp_send_reschedule(cpu);
++	}
+ 	put_cpu();
+ }
+ EXPORT_SYMBOL_GPL(kvm_vcpu_kick);
+@@ -4952,20 +5004,22 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
+ 		goto out_free_3;
+ 	}
+ 
++	for_each_possible_cpu(cpu) {
++		if (!alloc_cpumask_var_node(&per_cpu(cpu_kick_mask, cpu),
++					    GFP_KERNEL, cpu_to_node(cpu))) {
++			r = -ENOMEM;
++			goto out_free_4;
++		}
++	}
++
+ 	r = kvm_async_pf_init();
+ 	if (r)
+-		goto out_free;
++		goto out_free_4;
+ 
+ 	kvm_chardev_ops.owner = module;
+ 	kvm_vm_fops.owner = module;
+ 	kvm_vcpu_fops.owner = module;
+ 
+-	r = misc_register(&kvm_dev);
+-	if (r) {
+-		pr_err("kvm: misc device register failed\n");
+-		goto out_unreg;
+-	}
+-
+ 	register_syscore_ops(&kvm_syscore_ops);
+ 
+ 	kvm_preempt_ops.sched_in = kvm_sched_in;
+@@ -4974,13 +5028,28 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
+ 	kvm_init_debug();
+ 
+ 	r = kvm_vfio_ops_init();
+-	WARN_ON(r);
++	if (WARN_ON_ONCE(r))
++		goto err_vfio;
++
++	/*
++	 * Registration _must_ be the very last thing done, as this exposes
++	 * /dev/kvm to userspace, i.e. all infrastructure must be setup!
++	 */
++	r = misc_register(&kvm_dev);
++	if (r) {
++		pr_err("kvm: misc device register failed\n");
++		goto err_register;
++	}
+ 
+ 	return 0;
+ 
+-out_unreg:
++err_register:
++	kvm_vfio_ops_exit();
++err_vfio:
+ 	kvm_async_pf_deinit();
+-out_free:
++out_free_4:
++	for_each_possible_cpu(cpu)
++		free_cpumask_var(per_cpu(cpu_kick_mask, cpu));
+ 	kmem_cache_destroy(kvm_vcpu_cache);
+ out_free_3:
+ 	unregister_reboot_notifier(&kvm_reboot_notifier);
+@@ -5000,8 +5069,18 @@ EXPORT_SYMBOL_GPL(kvm_init);
+ 
+ void kvm_exit(void)
+ {
+-	debugfs_remove_recursive(kvm_debugfs_dir);
++	int cpu;
++
++	/*
++	 * Note, unregistering /dev/kvm doesn't strictly need to come first,
++	 * fops_get(), a.k.a. try_module_get(), prevents acquiring references
++	 * to KVM while the module is being stopped.
++	 */
+ 	misc_deregister(&kvm_dev);
++
++	debugfs_remove_recursive(kvm_debugfs_dir);
++	for_each_possible_cpu(cpu)
++		free_cpumask_var(per_cpu(cpu_kick_mask, cpu));
+ 	kmem_cache_destroy(kvm_vcpu_cache);
+ 	kvm_async_pf_deinit();
+ 	unregister_syscore_ops(&kvm_syscore_ops);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-04-20 11:17 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2023-04-20 11:17 UTC (permalink / raw
  To: gentoo-commits

commit:     8ee109da78253e414103d0a26520e0abeb9b5df5
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Apr 20 11:16:51 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Apr 20 11:16:51 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8ee109da

Linux patch 5.10.178

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |    4 +
 1177_linux-5.10.178.patch | 7410 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7414 insertions(+)

diff --git a/0000_README b/0000_README
index 89aa39fe..e37eac6d 100644
--- a/0000_README
+++ b/0000_README
@@ -751,6 +751,10 @@ Patch:  1176_linux-5.10.177.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.177
 
+Patch:  1177_linux-5.10.178.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.178
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1177_linux-5.10.178.patch b/1177_linux-5.10.178.patch
new file mode 100644
index 00000000..46d8f600
--- /dev/null
+++ b/1177_linux-5.10.178.patch
@@ -0,0 +1,7410 @@
+diff --git a/Documentation/devicetree/bindings/serial/renesas,scif.yaml b/Documentation/devicetree/bindings/serial/renesas,scif.yaml
+index eda3d2c6bdd30..dbaa043802098 100644
+--- a/Documentation/devicetree/bindings/serial/renesas,scif.yaml
++++ b/Documentation/devicetree/bindings/serial/renesas,scif.yaml
+@@ -74,7 +74,7 @@ properties:
+           - description: Error interrupt
+           - description: Receive buffer full interrupt
+           - description: Transmit buffer empty interrupt
+-          - description: Transmit End interrupt
++          - description: Break interrupt
+       - items:
+           - description: Error interrupt
+           - description: Receive buffer full interrupt
+@@ -89,7 +89,7 @@ properties:
+           - const: eri
+           - const: rxi
+           - const: txi
+-          - const: tei
++          - const: bri
+       - items:
+           - const: eri
+           - const: rxi
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index 0158dff638873..df26cf4110ef5 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -272,6 +272,8 @@ tcp_app_win - INTEGER
+ 	Reserve max(window/2^tcp_app_win, mss) of window for application
+ 	buffer. Value 0 is special, it means that nothing is reserved.
+ 
++	Possible values are [0, 31], inclusive.
++
+ 	Default: 31
+ 
+ tcp_autocorking - BOOLEAN
+diff --git a/Documentation/powerpc/associativity.rst b/Documentation/powerpc/associativity.rst
+new file mode 100644
+index 0000000000000..07e7dd3d6c87e
+--- /dev/null
++++ b/Documentation/powerpc/associativity.rst
+@@ -0,0 +1,104 @@
++============================
++NUMA resource associativity
++=============================
++
++Associativity represents the groupings of the various platform resources into
++domains of substantially similar mean performance relative to resources outside
++of that domain. Resources subsets of a given domain that exhibit better
++performance relative to each other than relative to other resources subsets
++are represented as being members of a sub-grouping domain. This performance
++characteristic is presented in terms of NUMA node distance within the Linux kernel.
++From the platform view, these groups are also referred to as domains.
++
++PAPR interface currently supports different ways of communicating these resource
++grouping details to the OS. These are referred to as Form 0, Form 1 and Form2
++associativity grouping. Form 0 is the oldest format and is now considered deprecated.
++
++Hypervisor indicates the type/form of associativity used via "ibm,architecture-vec-5 property".
++Bit 0 of byte 5 in the "ibm,architecture-vec-5" property indicates usage of Form 0 or Form 1.
++A value of 1 indicates the usage of Form 1 associativity. For Form 2 associativity
++bit 2 of byte 5 in the "ibm,architecture-vec-5" property is used.
++
++Form 0
++-----
++Form 0 associativity supports only two NUMA distances (LOCAL and REMOTE).
++
++Form 1
++-----
++With Form 1 a combination of ibm,associativity-reference-points, and ibm,associativity
++device tree properties are used to determine the NUMA distance between resource groups/domains.
++
++The “ibm,associativity” property contains a list of one or more numbers (domainID)
++representing the resource’s platform grouping domains.
++
++The “ibm,associativity-reference-points” property contains a list of one or more numbers
++(domainID index) that represents the 1 based ordinal in the associativity lists.
++The list of domainID indexes represents an increasing hierarchy of resource grouping.
++
++ex:
++{ primary domainID index, secondary domainID index, tertiary domainID index.. }
++
++Linux kernel uses the domainID at the primary domainID index as the NUMA node id.
++Linux kernel computes NUMA distance between two domains by recursively comparing
++if they belong to the same higher-level domains. For mismatch at every higher
++level of the resource group, the kernel doubles the NUMA distance between the
++comparing domains.
++
++Form 2
++-------
++Form 2 associativity format adds separate device tree properties representing NUMA node distance
++thereby making the node distance computation flexible. Form 2 also allows flexible primary
++domain numbering. With numa distance computation now detached from the index value in
++"ibm,associativity-reference-points" property, Form 2 allows a large number of primary domain
++ids at the same domainID index representing resource groups of different performance/latency
++characteristics.
++
++Hypervisor indicates the usage of FORM2 associativity using bit 2 of byte 5 in the
++"ibm,architecture-vec-5" property.
++
++"ibm,numa-lookup-index-table" property contains a list of one or more numbers representing
++the domainIDs present in the system. The offset of the domainID in this property is
++used as an index while computing numa distance information via "ibm,numa-distance-table".
++
++prop-encoded-array: The number N of the domainIDs encoded as with encode-int, followed by
++N domainID encoded as with encode-int
++
++For ex:
++"ibm,numa-lookup-index-table" =  {4, 0, 8, 250, 252}. The offset of domainID 8 (2) is used when
++computing the distance of domain 8 from other domains present in the system. For the rest of
++this document, this offset will be referred to as domain distance offset.
++
++"ibm,numa-distance-table" property contains a list of one or more numbers representing the NUMA
++distance between resource groups/domains present in the system.
++
++prop-encoded-array: The number N of the distance values encoded as with encode-int, followed by
++N distance values encoded as with encode-bytes. The max distance value we could encode is 255.
++The number N must be equal to the square of m where m is the number of domainIDs in the
++numa-lookup-index-table.
++
++For ex:
++ibm,numa-lookup-index-table = <3 0 8 40>;
++ibm,numa-distace-table = <9>, /bits/ 8 < 10  20  80
++					 20  10 160
++					 80 160  10>;
++  | 0    8   40
++--|------------
++  |
++0 | 10   20  80
++  |
++8 | 20   10  160
++  |
++40| 80   160  10
++
++A possible "ibm,associativity" property for resources in node 0, 8 and 40
++
++{ 3, 6, 7, 0 }
++{ 3, 6, 9, 8 }
++{ 3, 6, 7, 40}
++
++With "ibm,associativity-reference-points"  { 0x3 }
++
++"ibm,lookup-index-table" helps in having a compact representation of distance matrix.
++Since domainID can be sparse, the matrix of distances can also be effectively sparse.
++With "ibm,lookup-index-table" we can achieve a compact representation of
++distance information.
+diff --git a/Documentation/sound/hd-audio/models.rst b/Documentation/sound/hd-audio/models.rst
+index 9b52f50a68542..1204304500147 100644
+--- a/Documentation/sound/hd-audio/models.rst
++++ b/Documentation/sound/hd-audio/models.rst
+@@ -704,7 +704,7 @@ ref
+ no-jd
+     BIOS setup but without jack-detection
+ intel
+-    Intel DG45* mobos
++    Intel D*45* mobos
+ dell-m6-amic
+     Dell desktops/laptops with analog mics
+ dell-m6-dmic
+diff --git a/Makefile b/Makefile
+index ae202cc531588..3bde04cc7f048 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 177
++SUBLEVEL = 178
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -580,8 +580,10 @@ endif
+ ifneq ($(GCC_TOOLCHAIN),)
+ CLANG_FLAGS	+= --gcc-toolchain=$(GCC_TOOLCHAIN)
+ endif
+-ifneq ($(LLVM_IAS),1)
+-CLANG_FLAGS	+= -no-integrated-as
++ifeq ($(LLVM_IAS),1)
++CLANG_FLAGS	+= -fintegrated-as
++else
++CLANG_FLAGS	+= -fno-integrated-as
+ endif
+ CLANG_FLAGS	+= -Werror=unknown-warning-option
+ KBUILD_CFLAGS	+= $(CLANG_FLAGS)
+@@ -849,7 +851,7 @@ else
+ DEBUG_CFLAGS	+= -g
+ endif
+ 
+-ifeq ($(LLVM_IAS),1)
++ifdef CONFIG_AS_IS_LLVM
+ KBUILD_AFLAGS	+= -g
+ else
+ KBUILD_AFLAGS	+= -Wa,-gdwarf-2
+diff --git a/arch/powerpc/include/asm/firmware.h b/arch/powerpc/include/asm/firmware.h
+index aa6a5ef5d4830..89a31f1c7b118 100644
+--- a/arch/powerpc/include/asm/firmware.h
++++ b/arch/powerpc/include/asm/firmware.h
+@@ -44,7 +44,7 @@
+ #define FW_FEATURE_OPAL		ASM_CONST(0x0000000010000000)
+ #define FW_FEATURE_SET_MODE	ASM_CONST(0x0000000040000000)
+ #define FW_FEATURE_BEST_ENERGY	ASM_CONST(0x0000000080000000)
+-#define FW_FEATURE_TYPE1_AFFINITY ASM_CONST(0x0000000100000000)
++#define FW_FEATURE_FORM1_AFFINITY ASM_CONST(0x0000000100000000)
+ #define FW_FEATURE_PRRN		ASM_CONST(0x0000000200000000)
+ #define FW_FEATURE_DRMEM_V2	ASM_CONST(0x0000000400000000)
+ #define FW_FEATURE_DRC_INFO	ASM_CONST(0x0000000800000000)
+@@ -53,6 +53,7 @@
+ #define FW_FEATURE_ULTRAVISOR	ASM_CONST(0x0000004000000000)
+ #define FW_FEATURE_STUFF_TCE	ASM_CONST(0x0000008000000000)
+ #define FW_FEATURE_RPT_INVALIDATE ASM_CONST(0x0000010000000000)
++#define FW_FEATURE_FORM2_AFFINITY ASM_CONST(0x0000020000000000)
+ 
+ #ifndef __ASSEMBLY__
+ 
+@@ -69,11 +70,11 @@ enum {
+ 		FW_FEATURE_SPLPAR | FW_FEATURE_LPAR |
+ 		FW_FEATURE_CMO | FW_FEATURE_VPHN | FW_FEATURE_XCMO |
+ 		FW_FEATURE_SET_MODE | FW_FEATURE_BEST_ENERGY |
+-		FW_FEATURE_TYPE1_AFFINITY | FW_FEATURE_PRRN |
++		FW_FEATURE_FORM1_AFFINITY | FW_FEATURE_PRRN |
+ 		FW_FEATURE_HPT_RESIZE | FW_FEATURE_DRMEM_V2 |
+ 		FW_FEATURE_DRC_INFO | FW_FEATURE_BLOCK_REMOVE |
+ 		FW_FEATURE_PAPR_SCM | FW_FEATURE_ULTRAVISOR |
+-		FW_FEATURE_RPT_INVALIDATE,
++		FW_FEATURE_RPT_INVALIDATE | FW_FEATURE_FORM2_AFFINITY,
+ 	FW_FEATURE_PSERIES_ALWAYS = 0,
+ 	FW_FEATURE_POWERNV_POSSIBLE = FW_FEATURE_OPAL | FW_FEATURE_ULTRAVISOR,
+ 	FW_FEATURE_POWERNV_ALWAYS = 0,
+diff --git a/arch/powerpc/include/asm/prom.h b/arch/powerpc/include/asm/prom.h
+index 324a13351749a..5c80152e8f188 100644
+--- a/arch/powerpc/include/asm/prom.h
++++ b/arch/powerpc/include/asm/prom.h
+@@ -147,8 +147,9 @@ extern int of_read_drc_info_cell(struct property **prop,
+ #define OV5_MSI			0x0201	/* PCIe/MSI support */
+ #define OV5_CMO			0x0480	/* Cooperative Memory Overcommitment */
+ #define OV5_XCMO		0x0440	/* Page Coalescing */
+-#define OV5_TYPE1_AFFINITY	0x0580	/* Type 1 NUMA affinity */
++#define OV5_FORM1_AFFINITY	0x0580	/* FORM1 NUMA affinity */
+ #define OV5_PRRN		0x0540	/* Platform Resource Reassignment */
++#define OV5_FORM2_AFFINITY	0x0520	/* Form2 NUMA affinity */
+ #define OV5_HP_EVT		0x0604	/* Hot Plug Event support */
+ #define OV5_RESIZE_HPT		0x0601	/* Hash Page Table resizing */
+ #define OV5_PFO_HW_RNG		0x1180	/* PFO Random Number Generator */
+diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
+index 3beeb030cd78e..b239ef589ae06 100644
+--- a/arch/powerpc/include/asm/topology.h
++++ b/arch/powerpc/include/asm/topology.h
+@@ -36,7 +36,7 @@ static inline int pcibus_to_node(struct pci_bus *bus)
+ 				 cpu_all_mask :				\
+ 				 cpumask_of_node(pcibus_to_node(bus)))
+ 
+-extern int cpu_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc);
++int cpu_relative_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc);
+ extern int __node_distance(int, int);
+ #define node_distance(a, b) __node_distance(a, b)
+ 
+@@ -64,6 +64,7 @@ static inline int early_cpu_to_node(int cpu)
+ }
+ 
+ int of_drconf_to_nid_single(struct drmem_lmb *lmb);
++void update_numa_distance(struct device_node *node);
+ 
+ #else
+ 
+@@ -83,7 +84,7 @@ static inline void sysfs_remove_device_from_node(struct device *dev,
+ 
+ static inline void update_numa_cpu_lookup_table(unsigned int cpu, int node) {}
+ 
+-static inline int cpu_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc)
++static inline int cpu_relative_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc)
+ {
+ 	return 0;
+ }
+@@ -93,6 +94,7 @@ static inline int of_drconf_to_nid_single(struct drmem_lmb *lmb)
+ 	return first_online_node;
+ }
+ 
++static inline void update_numa_distance(struct device_node *node) {}
+ #endif /* CONFIG_NUMA */
+ 
+ #if defined(CONFIG_NUMA) && defined(CONFIG_PPC_SPLPAR)
+diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
+index 9e71c0739f08d..6f7ad80763159 100644
+--- a/arch/powerpc/kernel/prom_init.c
++++ b/arch/powerpc/kernel/prom_init.c
+@@ -1069,7 +1069,8 @@ static const struct ibm_arch_vec ibm_architecture_vec_template __initconst = {
+ #else
+ 		0,
+ #endif
+-		.associativity = OV5_FEAT(OV5_TYPE1_AFFINITY) | OV5_FEAT(OV5_PRRN),
++		.associativity = OV5_FEAT(OV5_FORM1_AFFINITY) | OV5_FEAT(OV5_PRRN) |
++		OV5_FEAT(OV5_FORM2_AFFINITY),
+ 		.bin_opts = OV5_FEAT(OV5_RESIZE_HPT) | OV5_FEAT(OV5_HP_EVT),
+ 		.micro_checkpoint = 0,
+ 		.reserved0 = 0,
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 275c60f92a7ce..ce8569e16f0c4 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -51,14 +51,22 @@ EXPORT_SYMBOL(numa_cpu_lookup_table);
+ EXPORT_SYMBOL(node_to_cpumask_map);
+ EXPORT_SYMBOL(node_data);
+ 
+-static int min_common_depth;
++static int primary_domain_index;
+ static int n_mem_addr_cells, n_mem_size_cells;
+-static int form1_affinity;
++
++#define FORM0_AFFINITY 0
++#define FORM1_AFFINITY 1
++#define FORM2_AFFINITY 2
++static int affinity_form;
+ 
+ #define MAX_DISTANCE_REF_POINTS 4
+ static int distance_ref_points_depth;
+ static const __be32 *distance_ref_points;
+ static int distance_lookup_table[MAX_NUMNODES][MAX_DISTANCE_REF_POINTS];
++static int numa_distance_table[MAX_NUMNODES][MAX_NUMNODES] = {
++	[0 ... MAX_NUMNODES - 1] = { [0 ... MAX_NUMNODES - 1] = -1 }
++};
++static int numa_id_index_table[MAX_NUMNODES] = { [0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE };
+ 
+ /*
+  * Allocate node_to_cpumask_map based on number of available nodes
+@@ -163,7 +171,55 @@ static void unmap_cpu_from_node(unsigned long cpu)
+ }
+ #endif /* CONFIG_HOTPLUG_CPU || CONFIG_PPC_SPLPAR */
+ 
+-int cpu_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc)
++static int __associativity_to_nid(const __be32 *associativity,
++				  int max_array_sz)
++{
++	int nid;
++	/*
++	 * primary_domain_index is 1 based array index.
++	 */
++	int index = primary_domain_index  - 1;
++
++	if (!numa_enabled || index >= max_array_sz)
++		return NUMA_NO_NODE;
++
++	nid = of_read_number(&associativity[index], 1);
++
++	/* POWER4 LPAR uses 0xffff as invalid node */
++	if (nid == 0xffff || nid >= nr_node_ids)
++		nid = NUMA_NO_NODE;
++	return nid;
++}
++/*
++ * Returns nid in the range [0..nr_node_ids], or -1 if no useful NUMA
++ * info is found.
++ */
++static int associativity_to_nid(const __be32 *associativity)
++{
++	int array_sz = of_read_number(associativity, 1);
++
++	/* Skip the first element in the associativity array */
++	return __associativity_to_nid((associativity + 1), array_sz);
++}
++
++static int __cpu_form2_relative_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc)
++{
++	int dist;
++	int node1, node2;
++
++	node1 = associativity_to_nid(cpu1_assoc);
++	node2 = associativity_to_nid(cpu2_assoc);
++
++	dist = numa_distance_table[node1][node2];
++	if (dist <= LOCAL_DISTANCE)
++		return 0;
++	else if (dist <= REMOTE_DISTANCE)
++		return 1;
++	else
++		return 2;
++}
++
++static int __cpu_form1_relative_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc)
+ {
+ 	int dist = 0;
+ 
+@@ -179,6 +235,15 @@ int cpu_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc)
+ 	return dist;
+ }
+ 
++int cpu_relative_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc)
++{
++	/* We should not get called with FORM0 */
++	VM_WARN_ON(affinity_form == FORM0_AFFINITY);
++	if (affinity_form == FORM1_AFFINITY)
++		return __cpu_form1_relative_distance(cpu1_assoc, cpu2_assoc);
++	return __cpu_form2_relative_distance(cpu1_assoc, cpu2_assoc);
++}
++
+ /* must hold reference to node during call */
+ static const __be32 *of_get_associativity(struct device_node *dev)
+ {
+@@ -190,7 +255,9 @@ int __node_distance(int a, int b)
+ 	int i;
+ 	int distance = LOCAL_DISTANCE;
+ 
+-	if (!form1_affinity)
++	if (affinity_form == FORM2_AFFINITY)
++		return numa_distance_table[a][b];
++	else if (affinity_form == FORM0_AFFINITY)
+ 		return ((a == b) ? LOCAL_DISTANCE : REMOTE_DISTANCE);
+ 
+ 	for (i = 0; i < distance_ref_points_depth; i++) {
+@@ -205,52 +272,6 @@ int __node_distance(int a, int b)
+ }
+ EXPORT_SYMBOL(__node_distance);
+ 
+-static void initialize_distance_lookup_table(int nid,
+-		const __be32 *associativity)
+-{
+-	int i;
+-
+-	if (!form1_affinity)
+-		return;
+-
+-	for (i = 0; i < distance_ref_points_depth; i++) {
+-		const __be32 *entry;
+-
+-		entry = &associativity[be32_to_cpu(distance_ref_points[i]) - 1];
+-		distance_lookup_table[nid][i] = of_read_number(entry, 1);
+-	}
+-}
+-
+-/*
+- * Returns nid in the range [0..nr_node_ids], or -1 if no useful NUMA
+- * info is found.
+- */
+-static int associativity_to_nid(const __be32 *associativity)
+-{
+-	int nid = NUMA_NO_NODE;
+-
+-	if (!numa_enabled)
+-		goto out;
+-
+-	if (of_read_number(associativity, 1) >= min_common_depth)
+-		nid = of_read_number(&associativity[min_common_depth], 1);
+-
+-	/* POWER4 LPAR uses 0xffff as invalid node */
+-	if (nid == 0xffff || nid >= nr_node_ids)
+-		nid = NUMA_NO_NODE;
+-
+-	if (nid > 0 &&
+-		of_read_number(associativity, 1) >= distance_ref_points_depth) {
+-		/*
+-		 * Skip the length field and send start of associativity array
+-		 */
+-		initialize_distance_lookup_table(nid, associativity + 1);
+-	}
+-
+-out:
+-	return nid;
+-}
+-
+ /* Returns the nid associated with the given device tree node,
+  * or -1 if not found.
+  */
+@@ -284,11 +305,160 @@ int of_node_to_nid(struct device_node *device)
+ }
+ EXPORT_SYMBOL(of_node_to_nid);
+ 
+-static int __init find_min_common_depth(void)
++static void __initialize_form1_numa_distance(const __be32 *associativity,
++					     int max_array_sz)
++{
++	int i, nid;
++
++	if (affinity_form != FORM1_AFFINITY)
++		return;
++
++	nid = __associativity_to_nid(associativity, max_array_sz);
++	if (nid != NUMA_NO_NODE) {
++		for (i = 0; i < distance_ref_points_depth; i++) {
++			const __be32 *entry;
++			int index = be32_to_cpu(distance_ref_points[i]) - 1;
++
++			/*
++			 * broken hierarchy, return with broken distance table
++			 */
++			if (WARN(index >= max_array_sz, "Broken ibm,associativity property"))
++				return;
++
++			entry = &associativity[index];
++			distance_lookup_table[nid][i] = of_read_number(entry, 1);
++		}
++	}
++}
++
++static void initialize_form1_numa_distance(const __be32 *associativity)
++{
++	int array_sz;
++
++	array_sz = of_read_number(associativity, 1);
++	/* Skip the first element in the associativity array */
++	__initialize_form1_numa_distance(associativity + 1, array_sz);
++}
++
++/*
++ * Used to update distance information w.r.t newly added node.
++ */
++void update_numa_distance(struct device_node *node)
++{
++	int nid;
++
++	if (affinity_form == FORM0_AFFINITY)
++		return;
++	else if (affinity_form == FORM1_AFFINITY) {
++		const __be32 *associativity;
++
++		associativity = of_get_associativity(node);
++		if (!associativity)
++			return;
++
++		initialize_form1_numa_distance(associativity);
++		return;
++	}
++
++	/* FORM2 affinity  */
++	nid = of_node_to_nid_single(node);
++	if (nid == NUMA_NO_NODE)
++		return;
++
++	/*
++	 * With FORM2 we expect NUMA distance of all possible NUMA
++	 * nodes to be provided during boot.
++	 */
++	WARN(numa_distance_table[nid][nid] == -1,
++	     "NUMA distance details for node %d not provided\n", nid);
++}
++EXPORT_SYMBOL_GPL(update_numa_distance);
++
++/*
++ * ibm,numa-lookup-index-table= {N, domainid1, domainid2, ..... domainidN}
++ * ibm,numa-distance-table = { N, 1, 2, 4, 5, 1, 6, .... N elements}
++ */
++static void initialize_form2_numa_distance_lookup_table(void)
++{
++	int i, j;
++	struct device_node *root;
++	const __u8 *numa_dist_table;
++	const __be32 *numa_lookup_index;
++	int numa_dist_table_length;
++	int max_numa_index, distance_index;
++
++	if (firmware_has_feature(FW_FEATURE_OPAL))
++		root = of_find_node_by_path("/ibm,opal");
++	else
++		root = of_find_node_by_path("/rtas");
++	if (!root)
++		root = of_find_node_by_path("/");
++
++	numa_lookup_index = of_get_property(root, "ibm,numa-lookup-index-table", NULL);
++	max_numa_index = of_read_number(&numa_lookup_index[0], 1);
++
++	/* first element of the array is the size and is encode-int */
++	numa_dist_table = of_get_property(root, "ibm,numa-distance-table", NULL);
++	numa_dist_table_length = of_read_number((const __be32 *)&numa_dist_table[0], 1);
++	/* Skip the size which is encoded int */
++	numa_dist_table += sizeof(__be32);
++
++	pr_debug("numa_dist_table_len = %d, numa_dist_indexes_len = %d\n",
++		 numa_dist_table_length, max_numa_index);
++
++	for (i = 0; i < max_numa_index; i++)
++		/* +1 skip the max_numa_index in the property */
++		numa_id_index_table[i] = of_read_number(&numa_lookup_index[i + 1], 1);
++
++
++	if (numa_dist_table_length != max_numa_index * max_numa_index) {
++		WARN(1, "Wrong NUMA distance information\n");
++		/* consider everybody else just remote. */
++		for (i = 0;  i < max_numa_index; i++) {
++			for (j = 0; j < max_numa_index; j++) {
++				int nodeA = numa_id_index_table[i];
++				int nodeB = numa_id_index_table[j];
++
++				if (nodeA == nodeB)
++					numa_distance_table[nodeA][nodeB] = LOCAL_DISTANCE;
++				else
++					numa_distance_table[nodeA][nodeB] = REMOTE_DISTANCE;
++			}
++		}
++	}
++
++	distance_index = 0;
++	for (i = 0;  i < max_numa_index; i++) {
++		for (j = 0; j < max_numa_index; j++) {
++			int nodeA = numa_id_index_table[i];
++			int nodeB = numa_id_index_table[j];
++
++			numa_distance_table[nodeA][nodeB] = numa_dist_table[distance_index++];
++			pr_debug("dist[%d][%d]=%d ", nodeA, nodeB, numa_distance_table[nodeA][nodeB]);
++		}
++	}
++	of_node_put(root);
++}
++
++static int __init find_primary_domain_index(void)
+ {
+-	int depth;
++	int index;
+ 	struct device_node *root;
+ 
++	/*
++	 * Check for which form of affinity.
++	 */
++	if (firmware_has_feature(FW_FEATURE_OPAL)) {
++		affinity_form = FORM1_AFFINITY;
++	} else if (firmware_has_feature(FW_FEATURE_FORM2_AFFINITY)) {
++		dbg("Using form 2 affinity\n");
++		affinity_form = FORM2_AFFINITY;
++	} else if (firmware_has_feature(FW_FEATURE_FORM1_AFFINITY)) {
++		dbg("Using form 1 affinity\n");
++		affinity_form = FORM1_AFFINITY;
++	} else
++		affinity_form = FORM0_AFFINITY;
++
+ 	if (firmware_has_feature(FW_FEATURE_OPAL))
+ 		root = of_find_node_by_path("/ibm,opal");
+ 	else
+@@ -318,25 +488,21 @@ static int __init find_min_common_depth(void)
+ 	}
+ 
+ 	distance_ref_points_depth /= sizeof(int);
+-
+-	if (firmware_has_feature(FW_FEATURE_OPAL) ||
+-	    firmware_has_feature(FW_FEATURE_TYPE1_AFFINITY)) {
+-		dbg("Using form 1 affinity\n");
+-		form1_affinity = 1;
+-	}
+-
+-	if (form1_affinity) {
+-		depth = of_read_number(distance_ref_points, 1);
+-	} else {
++	if (affinity_form == FORM0_AFFINITY) {
+ 		if (distance_ref_points_depth < 2) {
+ 			printk(KERN_WARNING "NUMA: "
+-				"short ibm,associativity-reference-points\n");
++			       "short ibm,associativity-reference-points\n");
+ 			goto err;
+ 		}
+ 
+-		depth = of_read_number(&distance_ref_points[1], 1);
++		index = of_read_number(&distance_ref_points[1], 1);
++	} else {
++		/*
++		 * Both FORM1 and FORM2 affinity find the primary domain details
++		 * at the same offset.
++		 */
++		index = of_read_number(distance_ref_points, 1);
+ 	}
+-
+ 	/*
+ 	 * Warn and cap if the hardware supports more than
+ 	 * MAX_DISTANCE_REF_POINTS domains.
+@@ -348,7 +514,7 @@ static int __init find_min_common_depth(void)
+ 	}
+ 
+ 	of_node_put(root);
+-	return depth;
++	return index;
+ 
+ err:
+ 	of_node_put(root);
+@@ -426,6 +592,38 @@ static int of_get_assoc_arrays(struct assoc_arrays *aa)
+ 	return 0;
+ }
+ 
++static int get_nid_and_numa_distance(struct drmem_lmb *lmb)
++{
++	struct assoc_arrays aa = { .arrays = NULL };
++	int default_nid = NUMA_NO_NODE;
++	int nid = default_nid;
++	int rc, index;
++
++	if ((primary_domain_index < 0) || !numa_enabled)
++		return default_nid;
++
++	rc = of_get_assoc_arrays(&aa);
++	if (rc)
++		return default_nid;
++
++	if (primary_domain_index <= aa.array_sz &&
++	    !(lmb->flags & DRCONF_MEM_AI_INVALID) && lmb->aa_index < aa.n_arrays) {
++		const __be32 *associativity;
++
++		index = lmb->aa_index * aa.array_sz;
++		associativity = &aa.arrays[index];
++		nid = __associativity_to_nid(associativity, aa.array_sz);
++		if (nid > 0 && affinity_form == FORM1_AFFINITY) {
++			/*
++			 * lookup array associativity entries have
++			 * no length of the array as the first element.
++			 */
++			__initialize_form1_numa_distance(associativity, aa.array_sz);
++		}
++	}
++	return nid;
++}
++
+ /*
+  * This is like of_node_to_nid_single() for memory represented in the
+  * ibm,dynamic-reconfiguration-memory node.
+@@ -437,35 +635,28 @@ int of_drconf_to_nid_single(struct drmem_lmb *lmb)
+ 	int nid = default_nid;
+ 	int rc, index;
+ 
+-	if ((min_common_depth < 0) || !numa_enabled)
++	if ((primary_domain_index < 0) || !numa_enabled)
+ 		return default_nid;
+ 
+ 	rc = of_get_assoc_arrays(&aa);
+ 	if (rc)
+ 		return default_nid;
+ 
+-	if (min_common_depth <= aa.array_sz &&
++	if (primary_domain_index <= aa.array_sz &&
+ 	    !(lmb->flags & DRCONF_MEM_AI_INVALID) && lmb->aa_index < aa.n_arrays) {
+-		index = lmb->aa_index * aa.array_sz + min_common_depth - 1;
+-		nid = of_read_number(&aa.arrays[index], 1);
++		const __be32 *associativity;
+ 
+-		if (nid == 0xffff || nid >= nr_node_ids)
+-			nid = default_nid;
+-
+-		if (nid > 0) {
+-			index = lmb->aa_index * aa.array_sz;
+-			initialize_distance_lookup_table(nid,
+-							&aa.arrays[index]);
+-		}
++		index = lmb->aa_index * aa.array_sz;
++		associativity = &aa.arrays[index];
++		nid = __associativity_to_nid(associativity, aa.array_sz);
+ 	}
+-
+ 	return nid;
+ }
+ 
+ #ifdef CONFIG_PPC_SPLPAR
+-static int vphn_get_nid(long lcpu)
++
++static int __vphn_get_associativity(long lcpu, __be32 *associativity)
+ {
+-	__be32 associativity[VPHN_ASSOC_BUFSIZE] = {0};
+ 	long rc, hwid;
+ 
+ 	/*
+@@ -485,12 +676,30 @@ static int vphn_get_nid(long lcpu)
+ 
+ 		rc = hcall_vphn(hwid, VPHN_FLAG_VCPU, associativity);
+ 		if (rc == H_SUCCESS)
+-			return associativity_to_nid(associativity);
++			return 0;
+ 	}
+ 
++	return -1;
++}
++
++static int vphn_get_nid(long lcpu)
++{
++	__be32 associativity[VPHN_ASSOC_BUFSIZE] = {0};
++
++
++	if (!__vphn_get_associativity(lcpu, associativity))
++		return associativity_to_nid(associativity);
++
+ 	return NUMA_NO_NODE;
++
+ }
+ #else
++
++static int __vphn_get_associativity(long lcpu, __be32 *associativity)
++{
++	return -1;
++}
++
+ static int vphn_get_nid(long unused)
+ {
+ 	return NUMA_NO_NODE;
+@@ -685,7 +894,7 @@ static int __init numa_setup_drmem_lmb(struct drmem_lmb *lmb,
+ 			size = read_n_cells(n_mem_size_cells, usm);
+ 		}
+ 
+-		nid = of_drconf_to_nid_single(lmb);
++		nid = get_nid_and_numa_distance(lmb);
+ 		fake_numa_create_new_node(((base + size) >> PAGE_SHIFT),
+ 					  &nid);
+ 		node_set_online(nid);
+@@ -702,24 +911,31 @@ static int __init parse_numa_properties(void)
+ 	struct device_node *memory;
+ 	int default_nid = 0;
+ 	unsigned long i;
++	const __be32 *associativity;
+ 
+ 	if (numa_enabled == 0) {
+ 		printk(KERN_WARNING "NUMA disabled by user\n");
+ 		return -1;
+ 	}
+ 
+-	min_common_depth = find_min_common_depth();
++	primary_domain_index = find_primary_domain_index();
+ 
+-	if (min_common_depth < 0) {
++	if (primary_domain_index < 0) {
+ 		/*
+-		 * if we fail to parse min_common_depth from device tree
++		 * if we fail to parse primary_domain_index from device tree
+ 		 * mark the numa disabled, boot with numa disabled.
+ 		 */
+ 		numa_enabled = false;
+-		return min_common_depth;
++		return primary_domain_index;
+ 	}
+ 
+-	dbg("NUMA associativity depth for CPU/Memory: %d\n", min_common_depth);
++	dbg("NUMA associativity depth for CPU/Memory: %d\n", primary_domain_index);
++
++	/*
++	 * If it is FORM2 initialize the distance table here.
++	 */
++	if (affinity_form == FORM2_AFFINITY)
++		initialize_form2_numa_distance_lookup_table();
+ 
+ 	/*
+ 	 * Even though we connect cpus to numa domains later in SMP
+@@ -727,18 +943,30 @@ static int __init parse_numa_properties(void)
+ 	 * each node to be onlined must have NODE_DATA etc backing it.
+ 	 */
+ 	for_each_present_cpu(i) {
++		__be32 vphn_assoc[VPHN_ASSOC_BUFSIZE];
+ 		struct device_node *cpu;
+-		int nid = vphn_get_nid(i);
++		int nid = NUMA_NO_NODE;
+ 
+-		/*
+-		 * Don't fall back to default_nid yet -- we will plug
+-		 * cpus into nodes once the memory scan has discovered
+-		 * the topology.
+-		 */
+-		if (nid == NUMA_NO_NODE) {
++		memset(vphn_assoc, 0, VPHN_ASSOC_BUFSIZE * sizeof(__be32));
++
++		if (__vphn_get_associativity(i, vphn_assoc) == 0) {
++			nid = associativity_to_nid(vphn_assoc);
++			initialize_form1_numa_distance(vphn_assoc);
++		} else {
++
++			/*
++			 * Don't fall back to default_nid yet -- we will plug
++			 * cpus into nodes once the memory scan has discovered
++			 * the topology.
++			 */
+ 			cpu = of_get_cpu_node(i, NULL);
+ 			BUG_ON(!cpu);
+-			nid = of_node_to_nid_single(cpu);
++
++			associativity = of_get_associativity(cpu);
++			if (associativity) {
++				nid = associativity_to_nid(associativity);
++				initialize_form1_numa_distance(associativity);
++			}
+ 			of_node_put(cpu);
+ 		}
+ 
+@@ -776,8 +1004,11 @@ new_range:
+ 		 * have associativity properties.  If none, then
+ 		 * everything goes to default_nid.
+ 		 */
+-		nid = of_node_to_nid_single(memory);
+-		if (nid < 0)
++		associativity = of_get_associativity(memory);
++		if (associativity) {
++			nid = associativity_to_nid(associativity);
++			initialize_form1_numa_distance(associativity);
++		} else
+ 			nid = default_nid;
+ 
+ 		fake_numa_create_new_node(((start + size) >> PAGE_SHIFT), &nid);
+@@ -926,7 +1157,7 @@ static void __init find_possible_nodes(void)
+ 			goto out;
+ 	}
+ 
+-	max_nodes = of_read_number(&domains[min_common_depth], 1);
++	max_nodes = of_read_number(&domains[primary_domain_index], 1);
+ 	pr_info("Partition configured for %d NUMA nodes.\n", max_nodes);
+ 
+ 	for (i = 0; i < max_nodes; i++) {
+@@ -935,7 +1166,7 @@ static void __init find_possible_nodes(void)
+ 	}
+ 
+ 	prop_length /= sizeof(int);
+-	if (prop_length > min_common_depth + 2)
++	if (prop_length > primary_domain_index + 2)
+ 		coregroup_enabled = 1;
+ 
+ out:
+@@ -1268,7 +1499,7 @@ int cpu_to_coregroup_id(int cpu)
+ 		goto out;
+ 
+ 	index = of_read_number(associativity, 1);
+-	if (index > min_common_depth + 1)
++	if (index > primary_domain_index + 1)
+ 		return of_read_number(&associativity[index - 1], 1);
+ 
+ out:
+diff --git a/arch/powerpc/platforms/pseries/firmware.c b/arch/powerpc/platforms/pseries/firmware.c
+index 4c7b7f5a2ebca..f162156b7b68d 100644
+--- a/arch/powerpc/platforms/pseries/firmware.c
++++ b/arch/powerpc/platforms/pseries/firmware.c
+@@ -119,10 +119,11 @@ struct vec5_fw_feature {
+ 
+ static __initdata struct vec5_fw_feature
+ vec5_fw_features_table[] = {
+-	{FW_FEATURE_TYPE1_AFFINITY,	OV5_TYPE1_AFFINITY},
++	{FW_FEATURE_FORM1_AFFINITY,	OV5_FORM1_AFFINITY},
+ 	{FW_FEATURE_PRRN,		OV5_PRRN},
+ 	{FW_FEATURE_DRMEM_V2,		OV5_DRMEM_V2},
+ 	{FW_FEATURE_DRC_INFO,		OV5_DRC_INFO},
++	{FW_FEATURE_FORM2_AFFINITY,	OV5_FORM2_AFFINITY},
+ };
+ 
+ static void __init fw_vec5_feature_init(const char *vec5, unsigned long len)
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index 325f3b220f360..1f8f97210d143 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -484,6 +484,8 @@ static ssize_t dlpar_cpu_add(u32 drc_index)
+ 		return saved_rc;
+ 	}
+ 
++	update_numa_distance(dn);
++
+ 	rc = dlpar_online_cpu(dn);
+ 	if (rc) {
+ 		saved_rc = rc;
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index 7efe6ec5d14a4..a5f968b5fa3a8 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -180,6 +180,8 @@ static int update_lmb_associativity_index(struct drmem_lmb *lmb)
+ 		return -ENODEV;
+ 	}
+ 
++	update_numa_distance(lmb_node);
++
+ 	dr_node = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");
+ 	if (!dr_node) {
+ 		dlpar_free_cc_nodes(lmb_node);
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index 115d196560b8b..28396a7e77d6f 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -261,7 +261,7 @@ static int cpu_relative_dispatch_distance(int last_disp_cpu, int cur_disp_cpu)
+ 	if (!last_disp_cpu_assoc || !cur_disp_cpu_assoc)
+ 		return -EIO;
+ 
+-	return cpu_distance(last_disp_cpu_assoc, cur_disp_cpu_assoc);
++	return cpu_relative_distance(last_disp_cpu_assoc, cur_disp_cpu_assoc);
+ }
+ 
+ static int cpu_home_node_dispatch_distance(int disp_cpu)
+@@ -281,7 +281,7 @@ static int cpu_home_node_dispatch_distance(int disp_cpu)
+ 	if (!disp_cpu_assoc || !vcpu_assoc)
+ 		return -EIO;
+ 
+-	return cpu_distance(disp_cpu_assoc, vcpu_assoc);
++	return cpu_relative_distance(disp_cpu_assoc, vcpu_assoc);
+ }
+ 
+ static void update_vcpu_disp_stat(int disp_cpu)
+diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
+index 057acbb9116dd..e3b7698b4762c 100644
+--- a/arch/powerpc/platforms/pseries/papr_scm.c
++++ b/arch/powerpc/platforms/pseries/papr_scm.c
+@@ -1079,6 +1079,13 @@ static int papr_scm_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
++	/*
++	 * open firmware platform device create won't update the NUMA 
++	 * distance table. For PAPR SCM devices we use numa_map_to_online_node()
++	 * to find the nearest online NUMA node and that requires correct
++	 * distance table information.
++	 */
++	update_numa_distance(dn);
+ 
+ 	p = kzalloc(sizeof(*p), GFP_KERNEL);
+ 	if (!p)
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 557c4a8c4087d..c192bd7305dc6 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -331,6 +331,28 @@ config RISCV_BASE_PMU
+ 
+ endmenu
+ 
++config TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI
++	def_bool y
++	# https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=aed44286efa8ae8717a77d94b51ac3614e2ca6dc
++	depends on AS_IS_GNU && AS_VERSION >= 23800
++	help
++	  Newer binutils versions default to ISA spec version 20191213 which
++	  moves some instructions from the I extension to the Zicsr and Zifencei
++	  extensions.
++
++config TOOLCHAIN_NEEDS_OLD_ISA_SPEC
++	def_bool y
++	depends on TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI
++	# https://github.com/llvm/llvm-project/commit/22e199e6afb1263c943c0c0d4498694e15bf8a16
++	depends on CC_IS_CLANG && CLANG_VERSION < 170000
++	help
++	  Certain versions of clang do not support zicsr and zifencei via -march
++	  but newer versions of binutils require it for the reasons noted in the
++	  help text of CONFIG_TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI. This
++	  option causes an older ISA spec compatible with these older versions
++	  of clang to be passed to GAS, which has the same result as passing zicsr
++	  and zifencei to -march.
++
+ config FPU
+ 	bool "FPU support"
+ 	default y
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index 9446282b52bab..daa679440000a 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -40,7 +40,7 @@ ifeq ($(CONFIG_LD_IS_LLD),y)
+ ifeq ($(shell test $(CONFIG_LLD_VERSION) -lt 150000; echo $$?),0)
+ 	KBUILD_CFLAGS += -mno-relax
+ 	KBUILD_AFLAGS += -mno-relax
+-ifneq ($(LLVM_IAS),1)
++ifndef CONFIG_AS_IS_LLVM
+ 	KBUILD_CFLAGS += -Wa,-mno-relax
+ 	KBUILD_AFLAGS += -Wa,-mno-relax
+ endif
+@@ -53,10 +53,12 @@ riscv-march-$(CONFIG_ARCH_RV64I)	:= rv64ima
+ riscv-march-$(CONFIG_FPU)		:= $(riscv-march-y)fd
+ riscv-march-$(CONFIG_RISCV_ISA_C)	:= $(riscv-march-y)c
+ 
+-# Newer binutils versions default to ISA spec version 20191213 which moves some
+-# instructions from the I extension to the Zicsr and Zifencei extensions.
+-toolchain-need-zicsr-zifencei := $(call cc-option-yn, -march=$(riscv-march-y)_zicsr_zifencei)
+-riscv-march-$(toolchain-need-zicsr-zifencei) := $(riscv-march-y)_zicsr_zifencei
++ifdef CONFIG_TOOLCHAIN_NEEDS_OLD_ISA_SPEC
++KBUILD_CFLAGS += -Wa,-misa-spec=2.2
++KBUILD_AFLAGS += -Wa,-misa-spec=2.2
++else
++riscv-march-$(CONFIG_TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI) := $(riscv-march-y)_zicsr_zifencei
++endif
+ 
+ KBUILD_CFLAGS += -march=$(subst fd,,$(riscv-march-y))
+ KBUILD_AFLAGS += -march=$(riscv-march-y)
+diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
+index 50a8225c58bca..dd63407e82949 100644
+--- a/arch/riscv/kernel/signal.c
++++ b/arch/riscv/kernel/signal.c
+@@ -16,6 +16,7 @@
+ #include <asm/vdso.h>
+ #include <asm/switch_to.h>
+ #include <asm/csr.h>
++#include <asm/cacheflush.h>
+ 
+ extern u32 __user_rt_sigreturn[2];
+ 
+@@ -178,6 +179,7 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+ {
+ 	struct rt_sigframe __user *frame;
+ 	long err = 0;
++	unsigned long __maybe_unused addr;
+ 
+ 	frame = get_sigframe(ksig, regs, sizeof(*frame));
+ 	if (!access_ok(frame, sizeof(*frame)))
+@@ -206,7 +208,12 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+ 	if (copy_to_user(&frame->sigreturn_code, __user_rt_sigreturn,
+ 			 sizeof(frame->sigreturn_code)))
+ 		return -EFAULT;
+-	regs->ra = (unsigned long)&frame->sigreturn_code;
++
++	addr = (unsigned long)&frame->sigreturn_code;
++	/* Make sure the two instructions are pushed to icache. */
++	flush_icache_range(addr, addr + sizeof(frame->sigreturn_code));
++
++	regs->ra = addr;
+ #endif /* CONFIG_MMU */
+ 
+ 	/*
+diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
+index 77909d362b78f..5be68190901f9 100644
+--- a/arch/s390/kvm/intercept.c
++++ b/arch/s390/kvm/intercept.c
+@@ -270,10 +270,18 @@ static int handle_prog(struct kvm_vcpu *vcpu)
+ /**
+  * handle_external_interrupt - used for external interruption interceptions
+  *
+- * This interception only occurs if the CPUSTAT_EXT_INT bit was set, or if
+- * the new PSW does not have external interrupts disabled. In the first case,
+- * we've got to deliver the interrupt manually, and in the second case, we
+- * drop to userspace to handle the situation there.
++ * This interception occurs if:
++ * - the CPUSTAT_EXT_INT bit was already set when the external interrupt
++ *   occurred. In this case, the interrupt needs to be injected manually to
++ *   preserve interrupt priority.
++ * - the external new PSW has external interrupts enabled, which will cause an
++ *   interruption loop. We drop to userspace in this case.
++ *
++ * The latter case can be detected by inspecting the external mask bit in the
++ * external new psw.
++ *
++ * Under PV, only the latter case can occur, since interrupt priorities are
++ * handled in the ultravisor.
+  */
+ static int handle_external_interrupt(struct kvm_vcpu *vcpu)
+ {
+@@ -284,10 +292,18 @@ static int handle_external_interrupt(struct kvm_vcpu *vcpu)
+ 
+ 	vcpu->stat.exit_external_interrupt++;
+ 
+-	rc = read_guest_lc(vcpu, __LC_EXT_NEW_PSW, &newpsw, sizeof(psw_t));
+-	if (rc)
+-		return rc;
+-	/* We can not handle clock comparator or timer interrupt with bad PSW */
++	if (kvm_s390_pv_cpu_is_protected(vcpu)) {
++		newpsw = vcpu->arch.sie_block->gpsw;
++	} else {
++		rc = read_guest_lc(vcpu, __LC_EXT_NEW_PSW, &newpsw, sizeof(psw_t));
++		if (rc)
++			return rc;
++	}
++
++	/*
++	 * Clock comparator or timer interrupt with external interrupt enabled
++	 * will cause interrupt loop. Drop to userspace.
++	 */
+ 	if ((eic == EXT_IRQ_CLK_COMP || eic == EXT_IRQ_CPU_TIMER) &&
+ 	    (newpsw.mask & PSW_MASK_EXT))
+ 		return -EOPNOTSUPP;
+diff --git a/arch/x86/kernel/sysfb_efi.c b/arch/x86/kernel/sysfb_efi.c
+index 9ea65611fba0b..fff04d2859765 100644
+--- a/arch/x86/kernel/sysfb_efi.c
++++ b/arch/x86/kernel/sysfb_efi.c
+@@ -272,6 +272,14 @@ static const struct dmi_system_id efifb_dmi_swap_width_height[] __initconst = {
+ 					"IdeaPad Duet 3 10IGL5"),
+ 		},
+ 	},
++	{
++		/* Lenovo Yoga Book X91F / X91L */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			/* Non exact match to match F + L versions */
++			DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X91"),
++		},
++	},
+ 	{},
+ };
+ 
+diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
+index a3038d8deb6a4..b758eeea6090b 100644
+--- a/arch/x86/kernel/x86_init.c
++++ b/arch/x86/kernel/x86_init.c
+@@ -32,8 +32,8 @@ static int __init iommu_init_noop(void) { return 0; }
+ static void iommu_shutdown_noop(void) { }
+ bool __init bool_x86_init_noop(void) { return false; }
+ void x86_op_int_noop(int cpu) { }
+-static __init int set_rtc_noop(const struct timespec64 *now) { return -EINVAL; }
+-static __init void get_rtc_noop(struct timespec64 *now) { }
++static int set_rtc_noop(const struct timespec64 *now) { return -EINVAL; }
++static void get_rtc_noop(struct timespec64 *now) { }
+ 
+ static __initconst const struct of_device_id of_cmos_match[] = {
+ 	{ .compatible = "motorola,mc146818" },
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index 9b0e771302cee..3f3411b882f56 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -7,6 +7,7 @@
+ #include <linux/dmi.h>
+ #include <linux/pci.h>
+ #include <linux/vgaarb.h>
++#include <asm/amd_nb.h>
+ #include <asm/hpet.h>
+ #include <asm/pci_x86.h>
+ 
+@@ -824,3 +825,23 @@ static void rs690_fix_64bit_dma(struct pci_dev *pdev)
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7910, rs690_fix_64bit_dma);
+ 
+ #endif
++
++#ifdef CONFIG_AMD_NB
++
++#define AMD_15B8_RCC_DEV2_EPF0_STRAP2                                  0x10136008
++#define AMD_15B8_RCC_DEV2_EPF0_STRAP2_NO_SOFT_RESET_DEV2_F0_MASK       0x00000080L
++
++static void quirk_clear_strap_no_soft_reset_dev2_f0(struct pci_dev *dev)
++{
++	u32 data;
++
++	if (!amd_smn_read(0, AMD_15B8_RCC_DEV2_EPF0_STRAP2, &data)) {
++		data &= ~AMD_15B8_RCC_DEV2_EPF0_STRAP2_NO_SOFT_RESET_DEV2_F0_MASK;
++		if (amd_smn_write(0, AMD_15B8_RCC_DEV2_EPF0_STRAP2, data))
++			pci_err(dev, "Failed to write data 0x%x\n", data);
++	} else {
++		pci_err(dev, "Failed to read data\n");
++	}
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x15b8, quirk_clear_strap_no_soft_reset_dev2_f0);
++#endif
+diff --git a/crypto/asymmetric_keys/pkcs7_verify.c b/crypto/asymmetric_keys/pkcs7_verify.c
+index ce49820caa97f..01e54450c846f 100644
+--- a/crypto/asymmetric_keys/pkcs7_verify.c
++++ b/crypto/asymmetric_keys/pkcs7_verify.c
+@@ -79,16 +79,16 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
+ 		}
+ 
+ 		if (sinfo->msgdigest_len != sig->digest_size) {
+-			pr_debug("Sig %u: Invalid digest size (%u)\n",
+-				 sinfo->index, sinfo->msgdigest_len);
++			pr_warn("Sig %u: Invalid digest size (%u)\n",
++				sinfo->index, sinfo->msgdigest_len);
+ 			ret = -EBADMSG;
+ 			goto error;
+ 		}
+ 
+ 		if (memcmp(sig->digest, sinfo->msgdigest,
+ 			   sinfo->msgdigest_len) != 0) {
+-			pr_debug("Sig %u: Message digest doesn't match\n",
+-				 sinfo->index);
++			pr_warn("Sig %u: Message digest doesn't match\n",
++				sinfo->index);
+ 			ret = -EKEYREJECTED;
+ 			goto error;
+ 		}
+@@ -488,7 +488,7 @@ int pkcs7_supply_detached_data(struct pkcs7_message *pkcs7,
+ 			       const void *data, size_t datalen)
+ {
+ 	if (pkcs7->data) {
+-		pr_debug("Data already supplied\n");
++		pr_warn("Data already supplied\n");
+ 		return -EINVAL;
+ 	}
+ 	pkcs7->data = data;
+diff --git a/crypto/asymmetric_keys/verify_pefile.c b/crypto/asymmetric_keys/verify_pefile.c
+index 7553ab18db898..22beaf2213a22 100644
+--- a/crypto/asymmetric_keys/verify_pefile.c
++++ b/crypto/asymmetric_keys/verify_pefile.c
+@@ -74,7 +74,7 @@ static int pefile_parse_binary(const void *pebuf, unsigned int pelen,
+ 		break;
+ 
+ 	default:
+-		pr_debug("Unknown PEOPT magic = %04hx\n", pe32->magic);
++		pr_warn("Unknown PEOPT magic = %04hx\n", pe32->magic);
+ 		return -ELIBBAD;
+ 	}
+ 
+@@ -95,7 +95,7 @@ static int pefile_parse_binary(const void *pebuf, unsigned int pelen,
+ 	ctx->certs_size = ddir->certs.size;
+ 
+ 	if (!ddir->certs.virtual_address || !ddir->certs.size) {
+-		pr_debug("Unsigned PE binary\n");
++		pr_warn("Unsigned PE binary\n");
+ 		return -ENODATA;
+ 	}
+ 
+@@ -127,7 +127,7 @@ static int pefile_strip_sig_wrapper(const void *pebuf,
+ 	unsigned len;
+ 
+ 	if (ctx->sig_len < sizeof(wrapper)) {
+-		pr_debug("Signature wrapper too short\n");
++		pr_warn("Signature wrapper too short\n");
+ 		return -ELIBBAD;
+ 	}
+ 
+@@ -135,19 +135,23 @@ static int pefile_strip_sig_wrapper(const void *pebuf,
+ 	pr_debug("sig wrapper = { %x, %x, %x }\n",
+ 		 wrapper.length, wrapper.revision, wrapper.cert_type);
+ 
+-	/* Both pesign and sbsign round up the length of certificate table
+-	 * (in optional header data directories) to 8 byte alignment.
++	/* sbsign rounds up the length of certificate table (in optional
++	 * header data directories) to 8 byte alignment.  However, the PE
++	 * specification states that while entries are 8-byte aligned, this is
++	 * not included in their length, and as a result, pesign has not
++	 * rounded up since 0.110.
+ 	 */
+-	if (round_up(wrapper.length, 8) != ctx->sig_len) {
+-		pr_debug("Signature wrapper len wrong\n");
++	if (wrapper.length > ctx->sig_len) {
++		pr_warn("Signature wrapper bigger than sig len (%x > %x)\n",
++			ctx->sig_len, wrapper.length);
+ 		return -ELIBBAD;
+ 	}
+ 	if (wrapper.revision != WIN_CERT_REVISION_2_0) {
+-		pr_debug("Signature is not revision 2.0\n");
++		pr_warn("Signature is not revision 2.0\n");
+ 		return -ENOTSUPP;
+ 	}
+ 	if (wrapper.cert_type != WIN_CERT_TYPE_PKCS_SIGNED_DATA) {
+-		pr_debug("Signature certificate type is not PKCS\n");
++		pr_warn("Signature certificate type is not PKCS\n");
+ 		return -ENOTSUPP;
+ 	}
+ 
+@@ -160,7 +164,7 @@ static int pefile_strip_sig_wrapper(const void *pebuf,
+ 	ctx->sig_offset += sizeof(wrapper);
+ 	ctx->sig_len -= sizeof(wrapper);
+ 	if (ctx->sig_len < 4) {
+-		pr_debug("Signature data missing\n");
++		pr_warn("Signature data missing\n");
+ 		return -EKEYREJECTED;
+ 	}
+ 
+@@ -194,7 +198,7 @@ check_len:
+ 		return 0;
+ 	}
+ not_pkcs7:
+-	pr_debug("Signature data not PKCS#7\n");
++	pr_warn("Signature data not PKCS#7\n");
+ 	return -ELIBBAD;
+ }
+ 
+@@ -337,8 +341,8 @@ static int pefile_digest_pe(const void *pebuf, unsigned int pelen,
+ 	digest_size = crypto_shash_digestsize(tfm);
+ 
+ 	if (digest_size != ctx->digest_len) {
+-		pr_debug("Digest size mismatch (%zx != %x)\n",
+-			 digest_size, ctx->digest_len);
++		pr_warn("Digest size mismatch (%zx != %x)\n",
++			digest_size, ctx->digest_len);
+ 		ret = -EBADMSG;
+ 		goto error_no_desc;
+ 	}
+@@ -369,7 +373,7 @@ static int pefile_digest_pe(const void *pebuf, unsigned int pelen,
+ 	 * PKCS#7 certificate.
+ 	 */
+ 	if (memcmp(digest, ctx->digest, ctx->digest_len) != 0) {
+-		pr_debug("Digest mismatch\n");
++		pr_warn("Digest mismatch\n");
+ 		ret = -EKEYREJECTED;
+ 	} else {
+ 		pr_debug("The digests match!\n");
+diff --git a/drivers/clk/sprd/common.c b/drivers/clk/sprd/common.c
+index ce81e4087a8fc..2bfbab8db94bf 100644
+--- a/drivers/clk/sprd/common.c
++++ b/drivers/clk/sprd/common.c
+@@ -17,7 +17,6 @@ static const struct regmap_config sprdclk_regmap_config = {
+ 	.reg_bits	= 32,
+ 	.reg_stride	= 4,
+ 	.val_bits	= 32,
+-	.max_register	= 0xffff,
+ 	.fast_io	= true,
+ };
+ 
+@@ -43,6 +42,8 @@ int sprd_clk_regmap_init(struct platform_device *pdev,
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *node = dev->of_node, *np;
+ 	struct regmap *regmap;
++	struct resource *res;
++	struct regmap_config reg_config = sprdclk_regmap_config;
+ 
+ 	if (of_find_property(node, "sprd,syscon", NULL)) {
+ 		regmap = syscon_regmap_lookup_by_phandle(node, "sprd,syscon");
+@@ -59,12 +60,14 @@ int sprd_clk_regmap_init(struct platform_device *pdev,
+ 			return PTR_ERR(regmap);
+ 		}
+ 	} else {
+-		base = devm_platform_ioremap_resource(pdev, 0);
++		base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ 		if (IS_ERR(base))
+ 			return PTR_ERR(base);
+ 
++		reg_config.max_register = resource_size(res) - reg_config.reg_stride;
++
+ 		regmap = devm_regmap_init_mmio(&pdev->dev, base,
+-					       &sprdclk_regmap_config);
++					       &reg_config);
+ 		if (IS_ERR(regmap)) {
+ 			pr_err("failed to init regmap\n");
+ 			return PTR_ERR(regmap);
+diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
+index d1300fc003ed7..39f3e13664099 100644
+--- a/drivers/gpio/Kconfig
++++ b/drivers/gpio/Kconfig
+@@ -99,7 +99,7 @@ config GPIO_GENERIC
+ 	tristate
+ 
+ config GPIO_REGMAP
+-	depends on REGMAP
++	select REGMAP
+ 	tristate
+ 
+ # put drivers in the right section, in alphabetical order
+diff --git a/drivers/gpio/gpio-davinci.c b/drivers/gpio/gpio-davinci.c
+index 6f2138503726a..80597e90de9c6 100644
+--- a/drivers/gpio/gpio-davinci.c
++++ b/drivers/gpio/gpio-davinci.c
+@@ -326,7 +326,7 @@ static struct irq_chip gpio_irqchip = {
+ 	.irq_enable	= gpio_irq_enable,
+ 	.irq_disable	= gpio_irq_disable,
+ 	.irq_set_type	= gpio_irq_type,
+-	.flags		= IRQCHIP_SET_TYPE_MASKED,
++	.flags		= IRQCHIP_SET_TYPE_MASKED | IRQCHIP_SKIP_SET_WAKE,
+ };
+ 
+ static void gpio_irq_handler(struct irq_desc *desc)
+diff --git a/drivers/gpu/drm/armada/armada_drv.c b/drivers/gpu/drm/armada/armada_drv.c
+index 980d3f1f8f16e..2d1e1e48f0eec 100644
+--- a/drivers/gpu/drm/armada/armada_drv.c
++++ b/drivers/gpu/drm/armada/armada_drv.c
+@@ -102,7 +102,6 @@ static int armada_drm_bind(struct device *dev)
+ 	if (ret) {
+ 		dev_err(dev, "[" DRM_NAME ":%s] can't kick out simple-fb: %d\n",
+ 			__func__, ret);
+-		kfree(priv);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611.c b/drivers/gpu/drm/bridge/lontium-lt9611.c
+index 0c6dea9ccb728..660e05fa4a704 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611.c
+@@ -256,6 +256,7 @@ static int lt9611_pll_setup(struct lt9611 *lt9611, const struct drm_display_mode
+ 		{ 0x8126, 0x55 },
+ 		{ 0x8127, 0x66 },
+ 		{ 0x8128, 0x88 },
++		{ 0x812a, 0x20 },
+ 	};
+ 
+ 	regmap_multi_reg_write(lt9611->regmap, reg_cfg, ARRAY_SIZE(reg_cfg));
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 8768073794fbf..6106fa7c43028 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -284,10 +284,17 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "IdeaPad Duet 3 10IGL5"),
+ 		},
+ 		.driver_data = (void *)&lcd1200x1920_rightside_up,
+-	}, {	/* Lenovo Yoga Book X90F / X91F / X91L */
++	}, {	/* Lenovo Yoga Book X90F / X90L */
+ 		.matches = {
+-		  /* Non exact match to match all versions */
+-		  DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X9"),
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
++		},
++		.driver_data = (void *)&lcd1200x1920_rightside_up,
++	}, {	/* Lenovo Yoga Book X91F / X91L */
++		.matches = {
++		  /* Non exact match to match F + L versions */
++		  DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X91"),
+ 		},
+ 		.driver_data = (void *)&lcd1200x1920_rightside_up,
+ 	}, {	/* OneGX1 Pro */
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 804ea035fa46b..0ac120225b4d4 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -396,6 +396,35 @@ nv50_outp_atomic_check_view(struct drm_encoder *encoder,
+ 	return 0;
+ }
+ 
++static void
++nv50_outp_atomic_fix_depth(struct drm_encoder *encoder, struct drm_crtc_state *crtc_state)
++{
++	struct nv50_head_atom *asyh = nv50_head_atom(crtc_state);
++	struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
++	struct drm_display_mode *mode = &asyh->state.adjusted_mode;
++	unsigned int max_rate, mode_rate;
++
++	switch (nv_encoder->dcb->type) {
++	case DCB_OUTPUT_DP:
++		max_rate = nv_encoder->dp.link_nr * nv_encoder->dp.link_bw;
++
++		/* we don't support more than 10 anyway */
++		asyh->or.bpc = min_t(u8, asyh->or.bpc, 10);
++
++		/* reduce the bpc until it works out */
++		while (asyh->or.bpc > 6) {
++			mode_rate = DIV_ROUND_UP(mode->clock * asyh->or.bpc * 3, 8);
++			if (mode_rate <= max_rate)
++				break;
++
++			asyh->or.bpc -= 2;
++		}
++		break;
++	default:
++		break;
++	}
++}
++
+ static int
+ nv50_outp_atomic_check(struct drm_encoder *encoder,
+ 		       struct drm_crtc_state *crtc_state,
+@@ -414,6 +443,9 @@ nv50_outp_atomic_check(struct drm_encoder *encoder,
+ 	if (crtc_state->mode_changed || crtc_state->connectors_changed)
+ 		asyh->or.bpc = connector->display_info.bpc;
+ 
++	/* We might have to reduce the bpc */
++	nv50_outp_atomic_fix_depth(encoder, crtc_state);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_dp.c b/drivers/gpu/drm/nouveau/nouveau_dp.c
+index 040ed88d362d7..447b7594b35ae 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_dp.c
++++ b/drivers/gpu/drm/nouveau/nouveau_dp.c
+@@ -220,8 +220,6 @@ void nouveau_dp_irq(struct nouveau_drm *drm,
+ }
+ 
+ /* TODO:
+- * - Use the minimum possible BPC here, once we add support for the max bpc
+- *   property.
+  * - Validate against the DP caps advertised by the GPU (we don't check these
+  *   yet)
+  */
+@@ -233,7 +231,11 @@ nv50_dp_mode_valid(struct drm_connector *connector,
+ {
+ 	const unsigned int min_clock = 25000;
+ 	unsigned int max_rate, mode_rate, ds_max_dotclock, clock = mode->clock;
+-	const u8 bpp = connector->display_info.bpc * 3;
++	/* Check with the minmum bpc always, so we can advertise better modes.
++	 * In particlar not doing this causes modes to be dropped on HDR
++	 * displays as we might check with a bpc of 16 even.
++	 */
++	const u8 bpp = 6 * 3;
+ 
+ 	if (mode->flags & DRM_MODE_FLAG_INTERLACE && !outp->caps.dp_interlace)
+ 		return MODE_NO_INTERLACE;
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index 5ff856ef7d88c..c81f3ac22cddb 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -458,6 +458,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
+ 		if (IS_ERR(pages[i])) {
+ 			mutex_unlock(&bo->base.pages_lock);
+ 			ret = PTR_ERR(pages[i]);
++			pages[i] = NULL;
+ 			goto err_pages;
+ 		}
+ 	}
+diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
+index bfd7f00a59ecf..683fdfa3e723e 100644
+--- a/drivers/hv/connection.c
++++ b/drivers/hv/connection.c
+@@ -305,6 +305,10 @@ void vmbus_disconnect(void)
+  */
+ struct vmbus_channel *relid2channel(u32 relid)
+ {
++	if (vmbus_connection.channels == NULL) {
++		pr_warn_once("relid2channel: relid=%d: No channels mapped!\n", relid);
++		return NULL;
++	}
+ 	if (WARN_ON(relid >= MAX_CHANNEL_RELIDS))
+ 		return NULL;
+ 	return READ_ONCE(vmbus_connection.channels[relid]);
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+index 99df453575f50..059c09a925cc7 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+@@ -179,7 +179,7 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ 		writel_relaxed(config->ss_pe_cmp[i],
+ 			       drvdata->base + TRCSSPCICRn(i));
+ 	}
+-	for (i = 0; i < drvdata->nr_addr_cmp; i++) {
++	for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) {
+ 		writeq_relaxed(config->addr_val[i],
+ 			       drvdata->base + TRCACVRn(i));
+ 		writeq_relaxed(config->addr_acc[i],
+diff --git a/drivers/i2c/busses/i2c-imx-lpi2c.c b/drivers/i2c/busses/i2c-imx-lpi2c.c
+index 2018dbcf241e9..d45ec26d51cb9 100644
+--- a/drivers/i2c/busses/i2c-imx-lpi2c.c
++++ b/drivers/i2c/busses/i2c-imx-lpi2c.c
+@@ -462,6 +462,8 @@ static int lpi2c_imx_xfer(struct i2c_adapter *adapter,
+ 		if (num == 1 && msgs[0].len == 0)
+ 			goto stop;
+ 
++		lpi2c_imx->rx_buf = NULL;
++		lpi2c_imx->tx_buf = NULL;
+ 		lpi2c_imx->delivered = 0;
+ 		lpi2c_imx->msglen = msgs[i].len;
+ 		init_completion(&lpi2c_imx->complete);
+diff --git a/drivers/i2c/busses/i2c-ocores.c b/drivers/i2c/busses/i2c-ocores.c
+index f5fc75b65a194..71e26aa6bd8ff 100644
+--- a/drivers/i2c/busses/i2c-ocores.c
++++ b/drivers/i2c/busses/i2c-ocores.c
+@@ -343,18 +343,18 @@ static int ocores_poll_wait(struct ocores_i2c *i2c)
+  * ocores_isr(), we just add our polling code around it.
+  *
+  * It can run in atomic context
++ *
++ * Return: 0 on success, -ETIMEDOUT on timeout
+  */
+-static void ocores_process_polling(struct ocores_i2c *i2c)
++static int ocores_process_polling(struct ocores_i2c *i2c)
+ {
+-	while (1) {
+-		irqreturn_t ret;
+-		int err;
++	irqreturn_t ret;
++	int err = 0;
+ 
++	while (1) {
+ 		err = ocores_poll_wait(i2c);
+-		if (err) {
+-			i2c->state = STATE_ERROR;
++		if (err)
+ 			break; /* timeout */
+-		}
+ 
+ 		ret = ocores_isr(-1, i2c);
+ 		if (ret == IRQ_NONE)
+@@ -365,13 +365,15 @@ static void ocores_process_polling(struct ocores_i2c *i2c)
+ 					break;
+ 		}
+ 	}
++
++	return err;
+ }
+ 
+ static int ocores_xfer_core(struct ocores_i2c *i2c,
+ 			    struct i2c_msg *msgs, int num,
+ 			    bool polling)
+ {
+-	int ret;
++	int ret = 0;
+ 	u8 ctrl;
+ 
+ 	ctrl = oc_getreg(i2c, OCI2C_CONTROL);
+@@ -389,15 +391,16 @@ static int ocores_xfer_core(struct ocores_i2c *i2c,
+ 	oc_setreg(i2c, OCI2C_CMD, OCI2C_CMD_START);
+ 
+ 	if (polling) {
+-		ocores_process_polling(i2c);
++		ret = ocores_process_polling(i2c);
+ 	} else {
+-		ret = wait_event_timeout(i2c->wait,
+-					 (i2c->state == STATE_ERROR) ||
+-					 (i2c->state == STATE_DONE), HZ);
+-		if (ret == 0) {
+-			ocores_process_timeout(i2c);
+-			return -ETIMEDOUT;
+-		}
++		if (wait_event_timeout(i2c->wait,
++				       (i2c->state == STATE_ERROR) ||
++				       (i2c->state == STATE_DONE), HZ) == 0)
++			ret = -ETIMEDOUT;
++	}
++	if (ret) {
++		ocores_process_timeout(i2c);
++		return ret;
+ 	}
+ 
+ 	return (i2c->state == STATE_DONE) ? num : -EIO;
+diff --git a/drivers/iio/adc/ad7791.c b/drivers/iio/adc/ad7791.c
+index d57ad966e17c1..f3502f12653b3 100644
+--- a/drivers/iio/adc/ad7791.c
++++ b/drivers/iio/adc/ad7791.c
+@@ -253,7 +253,7 @@ static const struct ad_sigma_delta_info ad7791_sigma_delta_info = {
+ 	.has_registers = true,
+ 	.addr_shift = 4,
+ 	.read_mask = BIT(3),
+-	.irq_flags = IRQF_TRIGGER_LOW,
++	.irq_flags = IRQF_TRIGGER_FALLING,
+ };
+ 
+ static int ad7791_read_raw(struct iio_dev *indio_dev,
+diff --git a/drivers/iio/adc/ti-ads7950.c b/drivers/iio/adc/ti-ads7950.c
+index a2b83f0bd5260..d4583b76f1fe3 100644
+--- a/drivers/iio/adc/ti-ads7950.c
++++ b/drivers/iio/adc/ti-ads7950.c
+@@ -634,6 +634,7 @@ static int ti_ads7950_probe(struct spi_device *spi)
+ 	st->chip.label = dev_name(&st->spi->dev);
+ 	st->chip.parent = &st->spi->dev;
+ 	st->chip.owner = THIS_MODULE;
++	st->chip.can_sleep = true;
+ 	st->chip.base = -1;
+ 	st->chip.ngpio = TI_ADS7950_NUM_GPIOS;
+ 	st->chip.get_direction = ti_ads7950_get_direction;
+diff --git a/drivers/iio/dac/cio-dac.c b/drivers/iio/dac/cio-dac.c
+index 95813569f3940..77a6916b3d6c6 100644
+--- a/drivers/iio/dac/cio-dac.c
++++ b/drivers/iio/dac/cio-dac.c
+@@ -66,8 +66,8 @@ static int cio_dac_write_raw(struct iio_dev *indio_dev,
+ 	if (mask != IIO_CHAN_INFO_RAW)
+ 		return -EINVAL;
+ 
+-	/* DAC can only accept up to a 16-bit value */
+-	if ((unsigned int)val > 65535)
++	/* DAC can only accept up to a 12-bit value */
++	if ((unsigned int)val > 4095)
+ 		return -EINVAL;
+ 
+ 	priv->chan_out_states[chan->channel] = val;
+diff --git a/drivers/iio/light/cm32181.c b/drivers/iio/light/cm32181.c
+index 97649944f1df6..c14a630dd683b 100644
+--- a/drivers/iio/light/cm32181.c
++++ b/drivers/iio/light/cm32181.c
+@@ -429,6 +429,14 @@ static const struct iio_info cm32181_info = {
+ 	.attrs			= &cm32181_attribute_group,
+ };
+ 
++static void cm32181_unregister_dummy_client(void *data)
++{
++	struct i2c_client *client = data;
++
++	/* Unregister the dummy client */
++	i2c_unregister_device(client);
++}
++
+ static int cm32181_probe(struct i2c_client *client)
+ {
+ 	struct device *dev = &client->dev;
+@@ -458,6 +466,10 @@ static int cm32181_probe(struct i2c_client *client)
+ 		client = i2c_acpi_new_device(dev, 1, &board_info);
+ 		if (IS_ERR(client))
+ 			return PTR_ERR(client);
++
++		ret = devm_add_action_or_reset(dev, cm32181_unregister_dummy_client, client);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	cm32181 = iio_priv(indio_dev);
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 9ed5de38e372f..fdcad8d6a5a07 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -505,22 +505,11 @@ static inline unsigned short cma_family(struct rdma_id_private *id_priv)
+ 	return id_priv->id.route.addr.src_addr.ss_family;
+ }
+ 
+-static int cma_set_qkey(struct rdma_id_private *id_priv, u32 qkey)
++static int cma_set_default_qkey(struct rdma_id_private *id_priv)
+ {
+ 	struct ib_sa_mcmember_rec rec;
+ 	int ret = 0;
+ 
+-	if (id_priv->qkey) {
+-		if (qkey && id_priv->qkey != qkey)
+-			return -EINVAL;
+-		return 0;
+-	}
+-
+-	if (qkey) {
+-		id_priv->qkey = qkey;
+-		return 0;
+-	}
+-
+ 	switch (id_priv->id.ps) {
+ 	case RDMA_PS_UDP:
+ 	case RDMA_PS_IB:
+@@ -540,6 +529,16 @@ static int cma_set_qkey(struct rdma_id_private *id_priv, u32 qkey)
+ 	return ret;
+ }
+ 
++static int cma_set_qkey(struct rdma_id_private *id_priv, u32 qkey)
++{
++	if (!qkey ||
++	    (id_priv->qkey && (id_priv->qkey != qkey)))
++		return -EINVAL;
++
++	id_priv->qkey = qkey;
++	return 0;
++}
++
+ static void cma_translate_ib(struct sockaddr_ib *sib, struct rdma_dev_addr *dev_addr)
+ {
+ 	dev_addr->dev_type = ARPHRD_INFINIBAND;
+@@ -1107,7 +1106,7 @@ static int cma_ib_init_qp_attr(struct rdma_id_private *id_priv,
+ 	*qp_attr_mask = IB_QP_STATE | IB_QP_PKEY_INDEX | IB_QP_PORT;
+ 
+ 	if (id_priv->id.qp_type == IB_QPT_UD) {
+-		ret = cma_set_qkey(id_priv, 0);
++		ret = cma_set_default_qkey(id_priv);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -4312,7 +4311,10 @@ static int cma_send_sidr_rep(struct rdma_id_private *id_priv,
+ 	memset(&rep, 0, sizeof rep);
+ 	rep.status = status;
+ 	if (status == IB_SIDR_SUCCESS) {
+-		ret = cma_set_qkey(id_priv, qkey);
++		if (qkey)
++			ret = cma_set_qkey(id_priv, qkey);
++		else
++			ret = cma_set_default_qkey(id_priv);
+ 		if (ret)
+ 			return ret;
+ 		rep.qp_num = id_priv->qp_num;
+@@ -4516,9 +4518,7 @@ static void cma_make_mc_event(int status, struct rdma_id_private *id_priv,
+ 	enum ib_gid_type gid_type;
+ 	struct net_device *ndev;
+ 
+-	if (!status)
+-		status = cma_set_qkey(id_priv, be32_to_cpu(multicast->rec.qkey));
+-	else
++	if (status)
+ 		pr_debug_ratelimited("RDMA CM: MULTICAST_ERROR: failed to join multicast. status %d\n",
+ 				     status);
+ 
+@@ -4546,7 +4546,7 @@ static void cma_make_mc_event(int status, struct rdma_id_private *id_priv,
+ 	}
+ 
+ 	event->param.ud.qp_num = 0xFFFFFF;
+-	event->param.ud.qkey = be32_to_cpu(multicast->rec.qkey);
++	event->param.ud.qkey = id_priv->qkey;
+ 
+ out:
+ 	if (ndev)
+@@ -4565,8 +4565,11 @@ static int cma_ib_mc_handler(int status, struct ib_sa_multicast *multicast)
+ 	    READ_ONCE(id_priv->state) == RDMA_CM_DESTROYING)
+ 		goto out;
+ 
+-	cma_make_mc_event(status, id_priv, multicast, &event, mc);
+-	ret = cma_cm_event_handler(id_priv, &event);
++	ret = cma_set_qkey(id_priv, be32_to_cpu(multicast->rec.qkey));
++	if (!ret) {
++		cma_make_mc_event(status, id_priv, multicast, &event, mc);
++		ret = cma_cm_event_handler(id_priv, &event);
++	}
+ 	rdma_destroy_ah_attr(&event.param.ud.ah_attr);
+ 	WARN_ON(ret);
+ 
+@@ -4619,9 +4622,11 @@ static int cma_join_ib_multicast(struct rdma_id_private *id_priv,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = cma_set_qkey(id_priv, 0);
+-	if (ret)
+-		return ret;
++	if (!id_priv->qkey) {
++		ret = cma_set_default_qkey(id_priv);
++		if (ret)
++			return ret;
++	}
+ 
+ 	cma_set_mgid(id_priv, (struct sockaddr *) &mc->addr, &rec.mgid);
+ 	rec.qkey = cpu_to_be32(id_priv->qkey);
+@@ -4709,9 +4714,6 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
+ 	cma_iboe_set_mgid(addr, &ib.rec.mgid, gid_type);
+ 
+ 	ib.rec.pkey = cpu_to_be16(0xffff);
+-	if (id_priv->id.ps == RDMA_PS_UDP)
+-		ib.rec.qkey = cpu_to_be32(RDMA_UDP_QKEY);
+-
+ 	if (dev_addr->bound_dev_if)
+ 		ndev = dev_get_by_index(dev_addr->net, dev_addr->bound_dev_if);
+ 	if (!ndev)
+@@ -4737,6 +4739,9 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
+ 	if (err || !ib.rec.mtu)
+ 		return err ?: -EINVAL;
+ 
++	if (!id_priv->qkey)
++		cma_set_default_qkey(id_priv);
++
+ 	rdma_ip2gid((struct sockaddr *)&id_priv->id.route.addr.src_addr,
+ 		    &ib.rec.port_gid);
+ 	INIT_WORK(&mc->iboe_join.work, cma_iboe_join_work_handler);
+@@ -4762,6 +4767,9 @@ int rdma_join_multicast(struct rdma_cm_id *id, struct sockaddr *addr,
+ 			    READ_ONCE(id_priv->state) != RDMA_CM_ADDR_RESOLVED))
+ 		return -EINVAL;
+ 
++	if (id_priv->id.qp_type != IB_QPT_UD)
++		return -EINVAL;
++
+ 	mc = kzalloc(sizeof(*mc), GFP_KERNEL);
+ 	if (!mc)
+ 		return -ENOMEM;
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index 5123be0ab02f5..4fcabe5a84bee 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -535,6 +535,8 @@ static struct ib_ah *_rdma_create_ah(struct ib_pd *pd,
+ 
+ 	ret = device->ops.create_ah(ah, &init_attr, udata);
+ 	if (ret) {
++		if (ah->sgid_attr)
++			rdma_put_gid_attr(ah->sgid_attr);
+ 		kfree(ah);
+ 		return ERR_PTR(ret);
+ 	}
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index eb69bec77e5d4..5ef37902e96b5 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -425,10 +425,26 @@ static int translate_eth_ext_proto_oper(u32 eth_proto_oper, u16 *active_speed,
+ 		*active_width = IB_WIDTH_2X;
+ 		*active_speed = IB_SPEED_HDR;
+ 		break;
++	case MLX5E_PROT_MASK(MLX5E_100GAUI_1_100GBASE_CR_KR):
++		*active_width = IB_WIDTH_1X;
++		*active_speed = IB_SPEED_NDR;
++		break;
+ 	case MLX5E_PROT_MASK(MLX5E_200GAUI_4_200GBASE_CR4_KR4):
+ 		*active_width = IB_WIDTH_4X;
+ 		*active_speed = IB_SPEED_HDR;
+ 		break;
++	case MLX5E_PROT_MASK(MLX5E_200GAUI_2_200GBASE_CR2_KR2):
++		*active_width = IB_WIDTH_2X;
++		*active_speed = IB_SPEED_NDR;
++		break;
++	case MLX5E_PROT_MASK(MLX5E_400GAUI_8):
++		*active_width = IB_WIDTH_8X;
++		*active_speed = IB_SPEED_HDR;
++		break;
++	case MLX5E_PROT_MASK(MLX5E_400GAUI_4_400GBASE_CR4_KR4):
++		*active_width = IB_WIDTH_4X;
++		*active_speed = IB_SPEED_NDR;
++		break;
+ 	default:
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/media/platform/ti-vpe/cal.c b/drivers/media/platform/ti-vpe/cal.c
+index 93121c90d76ae..2eef245c31a17 100644
+--- a/drivers/media/platform/ti-vpe/cal.c
++++ b/drivers/media/platform/ti-vpe/cal.c
+@@ -624,10 +624,8 @@ static struct cal_ctx *cal_ctx_create(struct cal_dev *cal, int inst)
+ 	ctx->cport = inst;
+ 
+ 	ret = cal_ctx_v4l2_init(ctx);
+-	if (ret) {
+-		kfree(ctx);
++	if (ret)
+ 		return NULL;
+-	}
+ 
+ 	return ctx;
+ }
+diff --git a/drivers/mtd/mtdblock.c b/drivers/mtd/mtdblock.c
+index 32e52d83b961e..9e5bbb5457027 100644
+--- a/drivers/mtd/mtdblock.c
++++ b/drivers/mtd/mtdblock.c
+@@ -153,7 +153,7 @@ static int do_cached_write (struct mtdblk_dev *mtdblk, unsigned long pos,
+ 				mtdblk->cache_state = STATE_EMPTY;
+ 				ret = mtd_read(mtd, sect_start, sect_size,
+ 					       &retlen, mtdblk->cache_data);
+-				if (ret)
++				if (ret && !mtd_is_bitflip(ret))
+ 					return ret;
+ 				if (retlen != sect_size)
+ 					return -EIO;
+@@ -188,8 +188,12 @@ static int do_cached_read (struct mtdblk_dev *mtdblk, unsigned long pos,
+ 	pr_debug("mtdblock: read on \"%s\" at 0x%lx, size 0x%x\n",
+ 			mtd->name, pos, len);
+ 
+-	if (!sect_size)
+-		return mtd_read(mtd, pos, len, &retlen, buf);
++	if (!sect_size) {
++		ret = mtd_read(mtd, pos, len, &retlen, buf);
++		if (ret && !mtd_is_bitflip(ret))
++			return ret;
++		return 0;
++	}
+ 
+ 	while (len > 0) {
+ 		unsigned long sect_start = (pos/sect_size)*sect_size;
+@@ -209,7 +213,7 @@ static int do_cached_read (struct mtdblk_dev *mtdblk, unsigned long pos,
+ 			memcpy (buf, mtdblk->cache_data + offset, size);
+ 		} else {
+ 			ret = mtd_read(mtd, pos, size, &retlen, buf);
+-			if (ret)
++			if (ret && !mtd_is_bitflip(ret))
+ 				return ret;
+ 			if (retlen != size)
+ 				return -EIO;
+diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
+index dc631c5143187..228612d82f311 100644
+--- a/drivers/mtd/nand/raw/meson_nand.c
++++ b/drivers/mtd/nand/raw/meson_nand.c
+@@ -276,7 +276,7 @@ static void meson_nfc_cmd_access(struct nand_chip *nand, int raw, bool dir,
+ 
+ 	if (raw) {
+ 		len = mtd->writesize + mtd->oobsize;
+-		cmd = (len & GENMASK(5, 0)) | scrambler | DMA_DIR(dir);
++		cmd = (len & GENMASK(13, 0)) | scrambler | DMA_DIR(dir);
+ 		writel(cmd, nfc->reg_base + NFC_REG_CMD);
+ 		return;
+ 	}
+@@ -540,7 +540,7 @@ static int meson_nfc_read_buf(struct nand_chip *nand, u8 *buf, int len)
+ 	if (ret)
+ 		goto out;
+ 
+-	cmd = NFC_CMD_N2M | (len & GENMASK(5, 0));
++	cmd = NFC_CMD_N2M | (len & GENMASK(13, 0));
+ 	writel(cmd, nfc->reg_base + NFC_REG_CMD);
+ 
+ 	meson_nfc_drain_cmd(nfc);
+@@ -564,7 +564,7 @@ static int meson_nfc_write_buf(struct nand_chip *nand, u8 *buf, int len)
+ 	if (ret)
+ 		return ret;
+ 
+-	cmd = NFC_CMD_M2N | (len & GENMASK(5, 0));
++	cmd = NFC_CMD_M2N | (len & GENMASK(13, 0));
+ 	writel(cmd, nfc->reg_base + NFC_REG_CMD);
+ 
+ 	meson_nfc_drain_cmd(nfc);
+diff --git a/drivers/mtd/nand/raw/stm32_fmc2_nand.c b/drivers/mtd/nand/raw/stm32_fmc2_nand.c
+index 550bda4d1415a..c0c47f31c100d 100644
+--- a/drivers/mtd/nand/raw/stm32_fmc2_nand.c
++++ b/drivers/mtd/nand/raw/stm32_fmc2_nand.c
+@@ -1525,6 +1525,9 @@ static int stm32_fmc2_nfc_setup_interface(struct nand_chip *chip, int chipnr,
+ 	if (IS_ERR(sdrt))
+ 		return PTR_ERR(sdrt);
+ 
++	if (conf->timings.mode > 3)
++		return -EOPNOTSUPP;
++
+ 	if (chipnr == NAND_DATA_IFACE_CHECK_ONLY)
+ 		return 0;
+ 
+diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
+index e45fdc1bf66a4..929ce489b0629 100644
+--- a/drivers/mtd/ubi/build.c
++++ b/drivers/mtd/ubi/build.c
+@@ -665,12 +665,6 @@ static int io_init(struct ubi_device *ubi, int max_beb_per1024)
+ 	ubi->ec_hdr_alsize = ALIGN(UBI_EC_HDR_SIZE, ubi->hdrs_min_io_size);
+ 	ubi->vid_hdr_alsize = ALIGN(UBI_VID_HDR_SIZE, ubi->hdrs_min_io_size);
+ 
+-	if (ubi->vid_hdr_offset && ((ubi->vid_hdr_offset + UBI_VID_HDR_SIZE) >
+-	    ubi->vid_hdr_alsize)) {
+-		ubi_err(ubi, "VID header offset %d too large.", ubi->vid_hdr_offset);
+-		return -EINVAL;
+-	}
+-
+ 	dbg_gen("min_io_size      %d", ubi->min_io_size);
+ 	dbg_gen("max_write_size   %d", ubi->max_write_size);
+ 	dbg_gen("hdrs_min_io_size %d", ubi->hdrs_min_io_size);
+@@ -688,6 +682,21 @@ static int io_init(struct ubi_device *ubi, int max_beb_per1024)
+ 						ubi->vid_hdr_aloffset;
+ 	}
+ 
++	/*
++	 * Memory allocation for VID header is ubi->vid_hdr_alsize
++	 * which is described in comments in io.c.
++	 * Make sure VID header shift + UBI_VID_HDR_SIZE not exceeds
++	 * ubi->vid_hdr_alsize, so that all vid header operations
++	 * won't access memory out of bounds.
++	 */
++	if ((ubi->vid_hdr_shift + UBI_VID_HDR_SIZE) > ubi->vid_hdr_alsize) {
++		ubi_err(ubi, "Invalid VID header offset %d, VID header shift(%d)"
++			" + VID header size(%zu) > VID header aligned size(%d).",
++			ubi->vid_hdr_offset, ubi->vid_hdr_shift,
++			UBI_VID_HDR_SIZE, ubi->vid_hdr_alsize);
++		return -EINVAL;
++	}
++
+ 	/* Similar for the data offset */
+ 	ubi->leb_start = ubi->vid_hdr_offset + UBI_VID_HDR_SIZE;
+ 	ubi->leb_start = ALIGN(ubi->leb_start, ubi->min_io_size);
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 6da09263e0b9f..4427018ad4d9b 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -575,6 +575,7 @@ static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
+  * @vol_id: the volume ID that last used this PEB
+  * @lnum: the last used logical eraseblock number for the PEB
+  * @torture: if the physical eraseblock has to be tortured
++ * @nested: denotes whether the work_sem is already held
+  *
+  * This function returns zero in case of success and a %-ENOMEM in case of
+  * failure.
+@@ -1066,8 +1067,6 @@ out_unlock:
+  * __erase_worker - physical eraseblock erase worker function.
+  * @ubi: UBI device description object
+  * @wl_wrk: the work object
+- * @shutdown: non-zero if the worker has to free memory and exit
+- * because the WL sub-system is shutting down
+  *
+  * This function erases a physical eraseblock and perform torture testing if
+  * needed. It also takes care about marking the physical eraseblock bad if
+@@ -1122,7 +1121,7 @@ static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk)
+ 		int err1;
+ 
+ 		/* Re-schedule the LEB for erasure */
+-		err1 = schedule_erase(ubi, e, vol_id, lnum, 0, false);
++		err1 = schedule_erase(ubi, e, vol_id, lnum, 0, true);
+ 		if (err1) {
+ 			spin_lock(&ubi->wl_lock);
+ 			wl_entry_destroy(ubi, e);
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index e0d62e2513879..70d57ef95fb15 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -884,6 +884,10 @@ static dma_addr_t macb_get_addr(struct macb *bp, struct macb_dma_desc *desc)
+ 	}
+ #endif
+ 	addr |= MACB_BF(RX_WADDR, MACB_BFEXT(RX_WADDR, desc->addr));
++#ifdef CONFIG_MACB_USE_HWSTAMP
++	if (bp->hw_dma_cap & HW_DMA_CAP_PTP)
++		addr &= ~GEM_BIT(DMA_RXVALID);
++#endif
+ 	return addr;
+ }
+ 
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c
+index 87f76bac2e463..eb827b86ecae8 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c
+@@ -628,7 +628,13 @@ int qlcnic_fw_create_ctx(struct qlcnic_adapter *dev)
+ 	int i, err, ring;
+ 
+ 	if (dev->flags & QLCNIC_NEED_FLR) {
+-		pci_reset_function(dev->pdev);
++		err = pci_reset_function(dev->pdev);
++		if (err) {
++			dev_err(&dev->pdev->dev,
++				"Adapter reset failed (%d). Please reboot\n",
++				err);
++			return err;
++		}
+ 		dev->flags &= ~QLCNIC_NEED_FLR;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 04c59102a2863..de66406c50572 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -4940,7 +4940,7 @@ static void stmmac_napi_del(struct net_device *dev)
+ int stmmac_reinit_queues(struct net_device *dev, u32 rx_cnt, u32 tx_cnt)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+-	int ret = 0;
++	int ret = 0, i;
+ 
+ 	if (netif_running(dev))
+ 		stmmac_release(dev);
+@@ -4949,6 +4949,10 @@ int stmmac_reinit_queues(struct net_device *dev, u32 rx_cnt, u32 tx_cnt)
+ 
+ 	priv->plat->rx_queues_to_use = rx_cnt;
+ 	priv->plat->tx_queues_to_use = tx_cnt;
++	if (!netif_is_rxfh_configured(dev))
++		for (i = 0; i < ARRAY_SIZE(priv->rss.table); i++)
++			priv->rss.table[i] = ethtool_rxfh_indir_default(i,
++									rx_cnt);
+ 
+ 	stmmac_napi_add(dev);
+ 
+diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
+index 860644d182ab0..1a269fa8c1a07 100644
+--- a/drivers/net/ethernet/sun/niu.c
++++ b/drivers/net/ethernet/sun/niu.c
+@@ -4503,7 +4503,7 @@ static int niu_alloc_channels(struct niu *np)
+ 
+ 		err = niu_rbr_fill(np, rp, GFP_KERNEL);
+ 		if (err)
+-			return err;
++			goto out_err;
+ 	}
+ 
+ 	tx_rings = kcalloc(num_tx_rings, sizeof(struct tx_ring_info),
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 4074310abcff4..e4af1f506b833 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2153,7 +2153,8 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_of_clear:
+-	of_platform_device_destroy(common->mdio_dev, NULL);
++	if (common->mdio_dev)
++		of_platform_device_destroy(common->mdio_dev, NULL);
+ err_pm_clear:
+ 	pm_runtime_put_sync(dev);
+ 	pm_runtime_disable(dev);
+@@ -2179,7 +2180,8 @@ static int am65_cpsw_nuss_remove(struct platform_device *pdev)
+ 	 */
+ 	am65_cpsw_nuss_cleanup_ndev(common);
+ 
+-	of_platform_device_destroy(common->mdio_dev, NULL);
++	if (common->mdio_dev)
++		of_platform_device_destroy(common->mdio_dev, NULL);
+ 
+ 	pm_runtime_put_sync(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index dcbe278086dca..6a5f40f11db3f 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -207,6 +207,12 @@ static const enum gpiod_flags gpio_flags[] = {
+  */
+ #define SFP_PHY_ADDR	22
+ 
++/* SFP_EEPROM_BLOCK_SIZE is the size of data chunk to read the EEPROM
++ * at a time. Some SFP modules and also some Linux I2C drivers do not like
++ * reads longer than 16 bytes.
++ */
++#define SFP_EEPROM_BLOCK_SIZE	16
++
+ struct sff_data {
+ 	unsigned int gpios;
+ 	bool (*module_supported)(const struct sfp_eeprom_id *id);
+@@ -1754,11 +1760,7 @@ static int sfp_sm_mod_probe(struct sfp *sfp, bool report)
+ 	u8 check;
+ 	int ret;
+ 
+-	/* Some SFP modules and also some Linux I2C drivers do not like reads
+-	 * longer than 16 bytes, so read the EEPROM in chunks of 16 bytes at
+-	 * a time.
+-	 */
+-	sfp->i2c_block_size = 16;
++	sfp->i2c_block_size = SFP_EEPROM_BLOCK_SIZE;
+ 
+ 	ret = sfp_read(sfp, false, 0, &id.base, sizeof(id.base));
+ 	if (ret < 0) {
+@@ -2385,6 +2387,7 @@ static struct sfp *sfp_alloc(struct device *dev)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	sfp->dev = dev;
++	sfp->i2c_block_size = SFP_EEPROM_BLOCK_SIZE;
+ 
+ 	mutex_init(&sfp->sm_mutex);
+ 	mutex_init(&sfp->st_mutex);
+diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
+index b0024893a1cba..50c34630ca302 100644
+--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
+@@ -183,7 +183,7 @@ static const struct mwifiex_pcie_device mwifiex_pcie8997 = {
+ 	.can_ext_scan = true,
+ };
+ 
+-static const struct of_device_id mwifiex_pcie_of_match_table[] = {
++static const struct of_device_id mwifiex_pcie_of_match_table[] __maybe_unused = {
+ 	{ .compatible = "pci11ab,2b42" },
+ 	{ .compatible = "pci1b4b,2b42" },
+ 	{ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.c b/drivers/net/wireless/marvell/mwifiex/sdio.c
+index 7fb6eef409285..b09e60fedeb16 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sdio.c
++++ b/drivers/net/wireless/marvell/mwifiex/sdio.c
+@@ -484,7 +484,7 @@ static struct memory_type_mapping mem_type_mapping_tbl[] = {
+ 	{"EXTLAST", NULL, 0, 0xFE},
+ };
+ 
+-static const struct of_device_id mwifiex_sdio_of_match_table[] = {
++static const struct of_device_id mwifiex_sdio_of_match_table[] __maybe_unused = {
+ 	{ .compatible = "marvell,sd8787" },
+ 	{ .compatible = "marvell,sd8897" },
+ 	{ .compatible = "marvell,sd8997" },
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 7bfdf5ad77c45..82b658a3c220a 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -764,34 +764,32 @@ static const struct pinconf_ops amd_pinconf_ops = {
+ 	.pin_config_group_set = amd_pinconf_group_set,
+ };
+ 
+-static void amd_gpio_irq_init_pin(struct amd_gpio *gpio_dev, int pin)
++static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
+ {
+-	const struct pin_desc *pd;
++	struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
+ 	unsigned long flags;
+ 	u32 pin_reg, mask;
++	int i;
+ 
+ 	mask = BIT(WAKE_CNTRL_OFF_S0I3) | BIT(WAKE_CNTRL_OFF_S3) |
+ 		BIT(INTERRUPT_MASK_OFF) | BIT(INTERRUPT_ENABLE_OFF) |
+ 		BIT(WAKE_CNTRL_OFF_S4);
+ 
+-	pd = pin_desc_get(gpio_dev->pctrl, pin);
+-	if (!pd)
+-		return;
++	for (i = 0; i < desc->npins; i++) {
++		int pin = desc->pins[i].number;
++		const struct pin_desc *pd = pin_desc_get(gpio_dev->pctrl, pin);
+ 
+-	raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+-	pin_reg = readl(gpio_dev->base + pin * 4);
+-	pin_reg &= ~mask;
+-	writel(pin_reg, gpio_dev->base + pin * 4);
+-	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+-}
++		if (!pd)
++			continue;
+ 
+-static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
+-{
+-	struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
+-	int i;
++		raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+ 
+-	for (i = 0; i < desc->npins; i++)
+-		amd_gpio_irq_init_pin(gpio_dev, i);
++		pin_reg = readl(gpio_dev->base + i * 4);
++		pin_reg &= ~mask;
++		writel(pin_reg, gpio_dev->base + i * 4);
++
++		raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
++	}
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+@@ -844,10 +842,8 @@ static int amd_gpio_resume(struct device *dev)
+ 	for (i = 0; i < desc->npins; i++) {
+ 		int pin = desc->pins[i].number;
+ 
+-		if (!amd_gpio_should_save(gpio_dev, pin)) {
+-			amd_gpio_irq_init_pin(gpio_dev, pin);
++		if (!amd_gpio_should_save(gpio_dev, pin))
+ 			continue;
+-		}
+ 
+ 		raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+ 		gpio_dev->saved_regs[i] |= readl(gpio_dev->base + pin * 4) & PIN_IRQ_PENDING;
+diff --git a/drivers/power/supply/cros_usbpd-charger.c b/drivers/power/supply/cros_usbpd-charger.c
+index d89e08efd2ad0..0a4f02e4ae7ba 100644
+--- a/drivers/power/supply/cros_usbpd-charger.c
++++ b/drivers/power/supply/cros_usbpd-charger.c
+@@ -276,7 +276,7 @@ static int cros_usbpd_charger_get_power_info(struct port_data *port)
+ 		port->psy_current_max = 0;
+ 		break;
+ 	default:
+-		dev_err(dev, "Port %d: default case!\n", port->port_number);
++		dev_dbg(dev, "Port %d: default case!\n", port->port_number);
+ 		port->psy_usb_type = POWER_SUPPLY_USB_TYPE_SDP;
+ 	}
+ 
+diff --git a/drivers/pwm/pwm-cros-ec.c b/drivers/pwm/pwm-cros-ec.c
+index c1c337969e4ec..d4f4a133656c7 100644
+--- a/drivers/pwm/pwm-cros-ec.c
++++ b/drivers/pwm/pwm-cros-ec.c
+@@ -154,6 +154,7 @@ static void cros_ec_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ 
+ 	state->enabled = (ret > 0);
+ 	state->period = EC_PWM_MAX_DUTY;
++	state->polarity = PWM_POLARITY_NORMAL;
+ 
+ 	/*
+ 	 * Note that "disabled" and "duty cycle == 0" are treated the same. If
+diff --git a/drivers/pwm/pwm-sprd.c b/drivers/pwm/pwm-sprd.c
+index 9eeb59cb81b68..b23456d38bd22 100644
+--- a/drivers/pwm/pwm-sprd.c
++++ b/drivers/pwm/pwm-sprd.c
+@@ -109,6 +109,7 @@ static void sprd_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	duty = val & SPRD_PWM_DUTY_MSK;
+ 	tmp = (prescale + 1) * NSEC_PER_SEC * duty;
+ 	state->duty_cycle = DIV_ROUND_CLOSEST_ULL(tmp, chn->clk_rate);
++	state->polarity = PWM_POLARITY_NORMAL;
+ 
+ 	/* Disable PWM clocks if the PWM channel is not in enable state. */
+ 	if (!state->enabled)
+diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
+index 252d7881f99c2..def9fac7aa4f4 100644
+--- a/drivers/scsi/iscsi_tcp.c
++++ b/drivers/scsi/iscsi_tcp.c
+@@ -721,13 +721,12 @@ static int iscsi_sw_tcp_conn_set_param(struct iscsi_cls_conn *cls_conn,
+ 		iscsi_set_param(cls_conn, param, buf, buflen);
+ 		break;
+ 	case ISCSI_PARAM_DATADGST_EN:
+-		iscsi_set_param(cls_conn, param, buf, buflen);
+-
+ 		mutex_lock(&tcp_sw_conn->sock_lock);
+ 		if (!tcp_sw_conn->sock) {
+ 			mutex_unlock(&tcp_sw_conn->sock_lock);
+ 			return -ENOTCONN;
+ 		}
++		iscsi_set_param(cls_conn, param, buf, buflen);
+ 		tcp_sw_conn->sendpage = conn->datadgst_en ?
+ 			sock_no_sendpage : tcp_sw_conn->sock->ops->sendpage;
+ 		mutex_unlock(&tcp_sw_conn->sock_lock);
+diff --git a/drivers/scsi/ses.c b/drivers/scsi/ses.c
+index 1707d6d144d21..6a1428d453f3e 100644
+--- a/drivers/scsi/ses.c
++++ b/drivers/scsi/ses.c
+@@ -503,9 +503,6 @@ static int ses_enclosure_find_by_addr(struct enclosure_device *edev,
+ 	int i;
+ 	struct ses_component *scomp;
+ 
+-	if (!edev->component[0].scratch)
+-		return 0;
+-
+ 	for (i = 0; i < edev->components; i++) {
+ 		scomp = edev->component[i].scratch;
+ 		if (scomp->addr != efd->addr)
+@@ -596,8 +593,10 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
+ 						components++,
+ 						type_ptr[0],
+ 						name);
+-				else
++				else if (components < edev->components)
+ 					ecomp = &edev->component[components++];
++				else
++					ecomp = ERR_PTR(-EINVAL);
+ 
+ 				if (!IS_ERR(ecomp)) {
+ 					if (addl_desc_ptr) {
+@@ -728,11 +727,6 @@ static int ses_intf_add(struct device *cdev,
+ 			components += type_ptr[1];
+ 	}
+ 
+-	if (components == 0) {
+-		sdev_printk(KERN_WARNING, sdev, "enclosure has no enumerated components\n");
+-		goto err_free;
+-	}
+-
+ 	ses_dev->page1 = buf;
+ 	ses_dev->page1_len = len;
+ 	buf = NULL;
+@@ -774,9 +768,11 @@ static int ses_intf_add(struct device *cdev,
+ 		buf = NULL;
+ 	}
+ page2_not_supported:
+-	scomp = kcalloc(components, sizeof(struct ses_component), GFP_KERNEL);
+-	if (!scomp)
+-		goto err_free;
++	if (components > 0) {
++		scomp = kcalloc(components, sizeof(struct ses_component), GFP_KERNEL);
++		if (!scomp)
++			goto err_free;
++	}
+ 
+ 	edev = enclosure_register(cdev->parent, dev_name(&sdev->sdev_gendev),
+ 				  components, &ses_enclosure_callbacks);
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 99f29bd930bd0..f481c260b7049 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -809,11 +809,17 @@ static unsigned int lpuart32_tx_empty(struct uart_port *port)
+ 			struct lpuart_port, port);
+ 	unsigned long stat = lpuart32_read(port, UARTSTAT);
+ 	unsigned long sfifo = lpuart32_read(port, UARTFIFO);
++	unsigned long ctrl = lpuart32_read(port, UARTCTRL);
+ 
+ 	if (sport->dma_tx_in_progress)
+ 		return 0;
+ 
+-	if (stat & UARTSTAT_TC && sfifo & UARTFIFO_TXEMPT)
++	/*
++	 * LPUART Transmission Complete Flag may never be set while queuing a break
++	 * character, so avoid checking for transmission complete when UARTCTRL_SBK
++	 * is asserted.
++	 */
++	if ((stat & UARTSTAT_TC && sfifo & UARTFIFO_TXEMPT) || ctrl & UARTCTRL_SBK)
+ 		return TIOCSER_TEMT;
+ 
+ 	return 0;
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 8d924727d6f0a..a7c28543c6f72 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -31,6 +31,7 @@
+ #include <linux/ioport.h>
+ #include <linux/ktime.h>
+ #include <linux/major.h>
++#include <linux/minmax.h>
+ #include <linux/module.h>
+ #include <linux/mm.h>
+ #include <linux/of.h>
+@@ -2923,6 +2924,13 @@ static int sci_init_single(struct platform_device *dev,
+ 			sci_port->irqs[i] = platform_get_irq(dev, i);
+ 	}
+ 
++	/*
++	 * The fourth interrupt on SCI port is transmit end interrupt, so
++	 * shuffle the interrupts.
++	 */
++	if (p->type == PORT_SCI)
++		swap(sci_port->irqs[SCIx_BRI_IRQ], sci_port->irqs[SCIx_TEI_IRQ]);
++
+ 	/* The SCI generates several interrupts. They can be muxed together or
+ 	 * connected to different interrupt lines. In the muxed case only one
+ 	 * interrupt resource is specified as there is only one interrupt ID.
+@@ -2988,7 +2996,7 @@ static int sci_init_single(struct platform_device *dev,
+ 	port->flags		= UPF_FIXED_PORT | UPF_BOOT_AUTOCONF | p->flags;
+ 	port->fifosize		= sci_port->params->fifosize;
+ 
+-	if (port->type == PORT_SCI) {
++	if (port->type == PORT_SCI && !dev->dev.of_node) {
+ 		if (sci_port->reg_size >= 0x20)
+ 			port->regshift = 2;
+ 		else
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 246a3d274142b..9fa4f8f39830a 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -1175,6 +1175,9 @@ static void tegra_xhci_id_work(struct work_struct *work)
+ 
+ 	mutex_unlock(&tegra->lock);
+ 
++	tegra->otg_usb3_port = tegra_xusb_padctl_get_usb3_companion(tegra->padctl,
++								    tegra->otg_usb2_port);
++
+ 	if (tegra->host_mode) {
+ 		/* switch to host mode */
+ 		if (tegra->otg_usb3_port >= 0) {
+@@ -1243,9 +1246,6 @@ static int tegra_xhci_id_notify(struct notifier_block *nb,
+ 	}
+ 
+ 	tegra->otg_usb2_port = tegra_xusb_get_usb2_port(tegra, usbphy);
+-	tegra->otg_usb3_port = tegra_xusb_padctl_get_usb3_companion(
+-							tegra->padctl,
+-							tegra->otg_usb2_port);
+ 
+ 	tegra->host_mode = (usbphy->last_event == USB_EVENT_ID) ? true : false;
+ 
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 473b0b64dd572..b069fe3f8ab0b 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -9,6 +9,7 @@
+  */
+ 
+ #include <linux/pci.h>
++#include <linux/iommu.h>
+ #include <linux/iopoll.h>
+ #include <linux/irq.h>
+ #include <linux/log2.h>
+@@ -226,6 +227,7 @@ int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us)
+ static void xhci_zero_64b_regs(struct xhci_hcd *xhci)
+ {
+ 	struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
++	struct iommu_domain *domain;
+ 	int err, i;
+ 	u64 val;
+ 	u32 intrs;
+@@ -244,7 +246,9 @@ static void xhci_zero_64b_regs(struct xhci_hcd *xhci)
+ 	 * an iommu. Doing anything when there is no iommu is definitely
+ 	 * unsafe...
+ 	 */
+-	if (!(xhci->quirks & XHCI_ZERO_64B_REGS) || !device_iommu_mapped(dev))
++	domain = iommu_get_domain_for_dev(dev);
++	if (!(xhci->quirks & XHCI_ZERO_64B_REGS) || !domain ||
++	    domain->type == IOMMU_DOMAIN_IDENTITY)
+ 		return;
+ 
+ 	xhci_info(xhci, "Zeroing 64bit base registers, expecting fault\n");
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 9ee0fa7756121..045e24174e1ae 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -124,6 +124,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x826B) }, /* Cygnal Integrated Products, Inc., Fasttrax GPS demonstration module */
+ 	{ USB_DEVICE(0x10C4, 0x8281) }, /* Nanotec Plug & Drive */
+ 	{ USB_DEVICE(0x10C4, 0x8293) }, /* Telegesis ETRX2USB */
++	{ USB_DEVICE(0x10C4, 0x82AA) }, /* Silicon Labs IFS-USB-DATACABLE used with Quint UPS */
+ 	{ USB_DEVICE(0x10C4, 0x82EF) }, /* CESINEL FALCO 6105 AC Power Supply */
+ 	{ USB_DEVICE(0x10C4, 0x82F1) }, /* CESINEL MEDCAL EFD Earth Fault Detector */
+ 	{ USB_DEVICE(0x10C4, 0x82F2) }, /* CESINEL MEDCAL ST Network Analyzer */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 14a7af7f3bcd7..da8b7bd39703e 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1198,6 +1198,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0900, 0xff, 0, 0), /* RM500U-CN */
++	  .driver_info = ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200U, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
+@@ -1300,6 +1302,14 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff),	/* Telit FN990 (PCIe) */
+ 	  .driver_info = RSVD(0) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff),	/* Telit FE990 (rmnet) */
++	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff),	/* Telit FE990 (MBIM) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff),	/* Telit FE990 (RNDIS) */
++	  .driver_info = NCTRL(2) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff),	/* Telit FE990 (ECM) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index e8eaca5a84db2..07b6561720687 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -101,8 +101,12 @@ static int dp_altmode_configure(struct dp_altmode *dp, u8 con)
+ 		if (dp->data.status & DP_STATUS_PREFER_MULTI_FUNC &&
+ 		    pin_assign & DP_PIN_ASSIGN_MULTI_FUNC_MASK)
+ 			pin_assign &= DP_PIN_ASSIGN_MULTI_FUNC_MASK;
+-		else if (pin_assign & DP_PIN_ASSIGN_DP_ONLY_MASK)
++		else if (pin_assign & DP_PIN_ASSIGN_DP_ONLY_MASK) {
+ 			pin_assign &= DP_PIN_ASSIGN_DP_ONLY_MASK;
++			/* Default to pin assign C if available */
++			if (pin_assign & BIT(DP_PIN_ASSIGN_C))
++				pin_assign = BIT(DP_PIN_ASSIGN_C);
++		}
+ 
+ 		if (!pin_assign)
+ 			return -EINVAL;
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index d787a344b3b88..1704deaf41525 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1117,6 +1117,8 @@ static long do_fb_ioctl(struct fb_info *info, unsigned int cmd,
+ 	case FBIOPUT_VSCREENINFO:
+ 		if (copy_from_user(&var, argp, sizeof(var)))
+ 			return -EFAULT;
++		/* only for kernel-internal use */
++		var.activate &= ~FB_ACTIVATE_KD_TEXT;
+ 		console_lock();
+ 		lock_fb_info(info);
+ 		ret = fbcon_modechange_possible(info, &var);
+diff --git a/drivers/watchdog/sbsa_gwdt.c b/drivers/watchdog/sbsa_gwdt.c
+index f0f1e3b2e4639..4cbe6ba527541 100644
+--- a/drivers/watchdog/sbsa_gwdt.c
++++ b/drivers/watchdog/sbsa_gwdt.c
+@@ -121,6 +121,7 @@ static int sbsa_gwdt_set_timeout(struct watchdog_device *wdd,
+ 	struct sbsa_gwdt *gwdt = watchdog_get_drvdata(wdd);
+ 
+ 	wdd->timeout = timeout;
++	timeout = clamp_t(unsigned int, timeout, 1, wdd->max_hw_heartbeat_ms / 1000);
+ 
+ 	if (action)
+ 		writel(gwdt->clk * timeout,
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index f2abd8bfd4a0f..2a7778a88f03b 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2246,6 +2246,23 @@ static int btrfs_init_csum_hash(struct btrfs_fs_info *fs_info, u16 csum_type)
+ 
+ 	fs_info->csum_shash = csum_shash;
+ 
++	/*
++	 * Check if the checksum implementation is a fast accelerated one.
++	 * As-is this is a bit of a hack and should be replaced once the csum
++	 * implementations provide that information themselves.
++	 */
++	switch (csum_type) {
++	case BTRFS_CSUM_TYPE_CRC32:
++		if (!strstr(crypto_shash_driver_name(csum_shash), "generic"))
++			set_bit(BTRFS_FS_CSUM_IMPL_FAST, &fs_info->flags);
++		break;
++	default:
++		break;
++	}
++
++	btrfs_info(fs_info, "using %s (%s) checksum algorithm",
++			btrfs_super_csum_name(csum_type),
++			crypto_shash_driver_name(csum_shash));
+ 	return 0;
+ }
+ 
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 8bf8cdb62a3af..b33505330e335 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -1692,8 +1692,6 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 	} else {
+ 		snprintf(s->s_id, sizeof(s->s_id), "%pg", bdev);
+ 		btrfs_sb(s)->bdev_holder = fs_type;
+-		if (!strstr(crc32c_impl(), "generic"))
+-			set_bit(BTRFS_FS_CSUM_IMPL_FAST, &fs_info->flags);
+ 		error = btrfs_fill_super(s, fs_devices, data);
+ 	}
+ 	if (!error)
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index af2064e36ac61..f5b7ad0847f20 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -875,8 +875,8 @@ static const struct cred *get_backchannel_cred(struct nfs4_client *clp, struct r
+ 		if (!kcred)
+ 			return NULL;
+ 
+-		kcred->uid = ses->se_cb_sec.uid;
+-		kcred->gid = ses->se_cb_sec.gid;
++		kcred->fsuid = ses->se_cb_sec.uid;
++		kcred->fsgid = ses->se_cb_sec.gid;
+ 		return kcred;
+ 	}
+ }
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 5ee4973525f01..5e835bbf1ffb8 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -2614,11 +2614,10 @@ static int nilfs_segctor_thread(void *arg)
+ 	goto loop;
+ 
+  end_thread:
+-	spin_unlock(&sci->sc_state_lock);
+-
+ 	/* end sync. */
+ 	sci->sc_task = NULL;
+ 	wake_up(&sci->sc_wait_task); /* for nilfs_segctor_kill_thread() */
++	spin_unlock(&sci->sc_state_lock);
+ 	return 0;
+ }
+ 
+diff --git a/fs/nilfs2/super.c b/fs/nilfs2/super.c
+index 7751848687635..9aae60d9a32e6 100644
+--- a/fs/nilfs2/super.c
++++ b/fs/nilfs2/super.c
+@@ -482,6 +482,7 @@ static void nilfs_put_super(struct super_block *sb)
+ 		up_write(&nilfs->ns_sem);
+ 	}
+ 
++	nilfs_sysfs_delete_device_group(nilfs);
+ 	iput(nilfs->ns_sufile);
+ 	iput(nilfs->ns_cpfile);
+ 	iput(nilfs->ns_dat);
+@@ -1105,6 +1106,7 @@ nilfs_fill_super(struct super_block *sb, void *data, int silent)
+ 	nilfs_put_root(fsroot);
+ 
+  failed_unload:
++	nilfs_sysfs_delete_device_group(nilfs);
+ 	iput(nilfs->ns_sufile);
+ 	iput(nilfs->ns_cpfile);
+ 	iput(nilfs->ns_dat);
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index 38a1206cf9481..d8d08fa5c3406 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -87,7 +87,6 @@ void destroy_nilfs(struct the_nilfs *nilfs)
+ {
+ 	might_sleep();
+ 	if (nilfs_init(nilfs)) {
+-		nilfs_sysfs_delete_device_group(nilfs);
+ 		brelse(nilfs->ns_sbh[0]);
+ 		brelse(nilfs->ns_sbh[1]);
+ 	}
+@@ -305,6 +304,10 @@ int load_nilfs(struct the_nilfs *nilfs, struct super_block *sb)
+ 		goto failed;
+ 	}
+ 
++	err = nilfs_sysfs_create_device_group(sb);
++	if (unlikely(err))
++		goto sysfs_error;
++
+ 	if (valid_fs)
+ 		goto skip_recovery;
+ 
+@@ -366,6 +369,9 @@ int load_nilfs(struct the_nilfs *nilfs, struct super_block *sb)
+ 	goto failed;
+ 
+  failed_unload:
++	nilfs_sysfs_delete_device_group(nilfs);
++
++ sysfs_error:
+ 	iput(nilfs->ns_cpfile);
+ 	iput(nilfs->ns_sufile);
+ 	iput(nilfs->ns_dat);
+@@ -697,10 +703,6 @@ int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb, char *data)
+ 	if (err)
+ 		goto failed_sbh;
+ 
+-	err = nilfs_sysfs_create_device_group(sb);
+-	if (err)
+-		goto failed_sbh;
+-
+ 	set_nilfs_init(nilfs);
+ 	err = 0;
+  out:
+diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
+index 3e06e9a8cf594..42465693dbdc4 100644
+--- a/fs/ocfs2/dlmglue.c
++++ b/fs/ocfs2/dlmglue.c
+@@ -3396,10 +3396,12 @@ void ocfs2_dlm_shutdown(struct ocfs2_super *osb,
+ 	ocfs2_lock_res_free(&osb->osb_nfs_sync_lockres);
+ 	ocfs2_lock_res_free(&osb->osb_orphan_scan.os_lockres);
+ 
+-	ocfs2_cluster_disconnect(osb->cconn, hangup_pending);
+-	osb->cconn = NULL;
++	if (osb->cconn) {
++		ocfs2_cluster_disconnect(osb->cconn, hangup_pending);
++		osb->cconn = NULL;
+ 
+-	ocfs2_dlm_shutdown_debug(osb);
++		ocfs2_dlm_shutdown_debug(osb);
++	}
+ }
+ 
+ static int ocfs2_drop_lock(struct ocfs2_super *osb,
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 3e0b2e3e00ad1..04ce30ae44044 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -1922,8 +1922,7 @@ static void ocfs2_dismount_volume(struct super_block *sb, int mnt_err)
+ 	    !ocfs2_is_hard_readonly(osb))
+ 		hangup_needed = 1;
+ 
+-	if (osb->cconn)
+-		ocfs2_dlm_shutdown(osb, hangup_needed);
++	ocfs2_dlm_shutdown(osb, hangup_needed);
+ 
+ 	ocfs2_blockcheck_stats_debugfs_remove(&osb->osb_ecc_stats);
+ 	debugfs_remove_recursive(osb->osb_debug_root);
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index cd7c6c4af83ad..1655b7b2a5abe 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -1106,6 +1106,11 @@ static int sysctl_check_table_array(const char *path, struct ctl_table *table)
+ 			err |= sysctl_err(path, table, "array not allowed");
+ 	}
+ 
++	if (table->proc_handler == proc_dou8vec_minmax) {
++		if (table->maxlen != sizeof(u8))
++			err |= sysctl_err(path, table, "array not allowed");
++	}
++
+ 	return err;
+ }
+ 
+@@ -1121,6 +1126,7 @@ static int sysctl_check_table(const char *path, struct ctl_table *table)
+ 		    (table->proc_handler == proc_douintvec) ||
+ 		    (table->proc_handler == proc_douintvec_minmax) ||
+ 		    (table->proc_handler == proc_dointvec_minmax) ||
++		    (table->proc_handler == proc_dou8vec_minmax) ||
+ 		    (table->proc_handler == proc_dointvec_jiffies) ||
+ 		    (table->proc_handler == proc_dointvec_userhz_jiffies) ||
+ 		    (table->proc_handler == proc_dointvec_ms_jiffies) ||
+diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
+index 1bd3a0356ae47..b9dd113599eb4 100644
+--- a/include/linux/ftrace.h
++++ b/include/linux/ftrace.h
+@@ -811,7 +811,7 @@ static inline void __ftrace_enabled_restore(int enabled)
+ #define CALLER_ADDR5 ((unsigned long)ftrace_return_address(5))
+ #define CALLER_ADDR6 ((unsigned long)ftrace_return_address(6))
+ 
+-static inline unsigned long get_lock_parent_ip(void)
++static __always_inline unsigned long get_lock_parent_ip(void)
+ {
+ 	unsigned long addr = CALLER_ADDR0;
+ 
+diff --git a/include/linux/kexec.h b/include/linux/kexec.h
+index a1f12e959bbad..3c1deba496c97 100644
+--- a/include/linux/kexec.h
++++ b/include/linux/kexec.h
+@@ -380,8 +380,8 @@ extern note_buf_t __percpu *crash_notes;
+ extern bool kexec_in_progress;
+ 
+ int crash_shrink_memory(unsigned long new_size);
+-size_t crash_get_memory_size(void);
+ void crash_free_reserved_phys_range(unsigned long begin, unsigned long end);
++ssize_t crash_get_memory_size(void);
+ 
+ void arch_kexec_protect_crashkres(void);
+ void arch_kexec_unprotect_crashkres(void);
+diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
+index 161eba9fd9122..4393de94cb32d 100644
+--- a/include/linux/sysctl.h
++++ b/include/linux/sysctl.h
+@@ -53,6 +53,8 @@ int proc_douintvec(struct ctl_table *, int, void *, size_t *, loff_t *);
+ int proc_dointvec_minmax(struct ctl_table *, int, void *, size_t *, loff_t *);
+ int proc_douintvec_minmax(struct ctl_table *table, int write, void *buffer,
+ 		size_t *lenp, loff_t *ppos);
++int proc_dou8vec_minmax(struct ctl_table *table, int write, void *buffer,
++			size_t *lenp, loff_t *ppos);
+ int proc_dointvec_jiffies(struct ctl_table *, int, void *, size_t *, loff_t *);
+ int proc_dointvec_userhz_jiffies(struct ctl_table *, int, void *, size_t *,
+ 		loff_t *);
+diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
+index 75484f425e558..d8b320cf54ba0 100644
+--- a/include/net/netns/ipv4.h
++++ b/include/net/netns/ipv4.h
+@@ -84,41 +84,41 @@ struct netns_ipv4 {
+ 	struct xt_table		*nat_table;
+ #endif
+ 
+-	int sysctl_icmp_echo_ignore_all;
+-	int sysctl_icmp_echo_ignore_broadcasts;
+-	int sysctl_icmp_ignore_bogus_error_responses;
++	u8 sysctl_icmp_echo_ignore_all;
++	u8 sysctl_icmp_echo_ignore_broadcasts;
++	u8 sysctl_icmp_ignore_bogus_error_responses;
++	u8 sysctl_icmp_errors_use_inbound_ifaddr;
+ 	int sysctl_icmp_ratelimit;
+ 	int sysctl_icmp_ratemask;
+-	int sysctl_icmp_errors_use_inbound_ifaddr;
+ 
+ 	struct local_ports ip_local_ports;
+ 
+-	int sysctl_tcp_ecn;
+-	int sysctl_tcp_ecn_fallback;
++	u8 sysctl_tcp_ecn;
++	u8 sysctl_tcp_ecn_fallback;
+ 
+-	int sysctl_ip_default_ttl;
+-	int sysctl_ip_no_pmtu_disc;
+-	int sysctl_ip_fwd_use_pmtu;
++	u8 sysctl_ip_default_ttl;
++	u8 sysctl_ip_no_pmtu_disc;
++	u8 sysctl_ip_fwd_use_pmtu;
+ 	int sysctl_ip_fwd_update_priority;
+-	int sysctl_ip_nonlocal_bind;
+-	int sysctl_ip_autobind_reuse;
++	u8 sysctl_ip_nonlocal_bind;
++	u8 sysctl_ip_autobind_reuse;
+ 	/* Shall we try to damage output packets if routing dev changes? */
+-	int sysctl_ip_dynaddr;
+-	int sysctl_ip_early_demux;
++	u8 sysctl_ip_dynaddr;
++	u8 sysctl_ip_early_demux;
+ #ifdef CONFIG_NET_L3_MASTER_DEV
+-	int sysctl_raw_l3mdev_accept;
++	u8 sysctl_raw_l3mdev_accept;
+ #endif
+ 	int sysctl_tcp_early_demux;
+ 	int sysctl_udp_early_demux;
+ 
+-	int sysctl_nexthop_compat_mode;
++	u8 sysctl_nexthop_compat_mode;
+ 
+-	int sysctl_fwmark_reflect;
+-	int sysctl_tcp_fwmark_accept;
++	u8 sysctl_fwmark_reflect;
++	u8 sysctl_tcp_fwmark_accept;
+ #ifdef CONFIG_NET_L3_MASTER_DEV
+-	int sysctl_tcp_l3mdev_accept;
++	u8 sysctl_tcp_l3mdev_accept;
+ #endif
+-	int sysctl_tcp_mtu_probing;
++	u8 sysctl_tcp_mtu_probing;
+ 	int sysctl_tcp_mtu_probe_floor;
+ 	int sysctl_tcp_base_mss;
+ 	int sysctl_tcp_min_snd_mss;
+@@ -126,46 +126,47 @@ struct netns_ipv4 {
+ 	u32 sysctl_tcp_probe_interval;
+ 
+ 	int sysctl_tcp_keepalive_time;
+-	int sysctl_tcp_keepalive_probes;
+ 	int sysctl_tcp_keepalive_intvl;
++	u8 sysctl_tcp_keepalive_probes;
+ 
+-	int sysctl_tcp_syn_retries;
+-	int sysctl_tcp_synack_retries;
+-	int sysctl_tcp_syncookies;
++	u8 sysctl_tcp_syn_retries;
++	u8 sysctl_tcp_synack_retries;
++	u8 sysctl_tcp_syncookies;
+ 	int sysctl_tcp_reordering;
+-	int sysctl_tcp_retries1;
+-	int sysctl_tcp_retries2;
+-	int sysctl_tcp_orphan_retries;
++	u8 sysctl_tcp_retries1;
++	u8 sysctl_tcp_retries2;
++	u8 sysctl_tcp_orphan_retries;
++	u8 sysctl_tcp_tw_reuse;
+ 	int sysctl_tcp_fin_timeout;
+ 	unsigned int sysctl_tcp_notsent_lowat;
+-	int sysctl_tcp_tw_reuse;
+-	int sysctl_tcp_sack;
+-	int sysctl_tcp_window_scaling;
+-	int sysctl_tcp_timestamps;
+-	int sysctl_tcp_early_retrans;
+-	int sysctl_tcp_recovery;
+-	int sysctl_tcp_thin_linear_timeouts;
+-	int sysctl_tcp_slow_start_after_idle;
+-	int sysctl_tcp_retrans_collapse;
+-	int sysctl_tcp_stdurg;
+-	int sysctl_tcp_rfc1337;
+-	int sysctl_tcp_abort_on_overflow;
+-	int sysctl_tcp_fack;
++	u8 sysctl_tcp_sack;
++	u8 sysctl_tcp_window_scaling;
++	u8 sysctl_tcp_timestamps;
++	u8 sysctl_tcp_early_retrans;
++	u8 sysctl_tcp_recovery;
++	u8 sysctl_tcp_thin_linear_timeouts;
++	u8 sysctl_tcp_slow_start_after_idle;
++	u8 sysctl_tcp_retrans_collapse;
++	u8 sysctl_tcp_stdurg;
++	u8 sysctl_tcp_rfc1337;
++	u8 sysctl_tcp_abort_on_overflow;
++	u8 sysctl_tcp_fack; /* obsolete */
+ 	int sysctl_tcp_max_reordering;
+-	int sysctl_tcp_dsack;
+-	int sysctl_tcp_app_win;
+ 	int sysctl_tcp_adv_win_scale;
+-	int sysctl_tcp_frto;
+-	int sysctl_tcp_nometrics_save;
+-	int sysctl_tcp_no_ssthresh_metrics_save;
+-	int sysctl_tcp_moderate_rcvbuf;
+-	int sysctl_tcp_tso_win_divisor;
+-	int sysctl_tcp_workaround_signed_windows;
++	u8 sysctl_tcp_dsack;
++	u8 sysctl_tcp_app_win;
++	u8 sysctl_tcp_frto;
++	u8 sysctl_tcp_nometrics_save;
++	u8 sysctl_tcp_no_ssthresh_metrics_save;
++	u8 sysctl_tcp_moderate_rcvbuf;
++	u8 sysctl_tcp_tso_win_divisor;
++	u8 sysctl_tcp_workaround_signed_windows;
+ 	int sysctl_tcp_limit_output_bytes;
+ 	int sysctl_tcp_challenge_ack_limit;
+-	int sysctl_tcp_min_tso_segs;
+ 	int sysctl_tcp_min_rtt_wlen;
+-	int sysctl_tcp_autocorking;
++	u8 sysctl_tcp_min_tso_segs;
++	u8 sysctl_tcp_autocorking;
++	u8 sysctl_tcp_reflect_tos;
+ 	int sysctl_tcp_invalid_ratelimit;
+ 	int sysctl_tcp_pacing_ss_ratio;
+ 	int sysctl_tcp_pacing_ca_ratio;
+@@ -183,7 +184,6 @@ struct netns_ipv4 {
+ 	unsigned int sysctl_tcp_fastopen_blackhole_timeout;
+ 	atomic_t tfo_active_disable_times;
+ 	unsigned long tfo_active_disable_stamp;
+-	int sysctl_tcp_reflect_tos;
+ 
+ 	int sysctl_udp_wmem_min;
+ 	int sysctl_udp_rmem_min;
+diff --git a/init/Kconfig b/init/Kconfig
+index eba883d6d9ed5..9807c66b24bb6 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -47,6 +47,18 @@ config CLANG_VERSION
+ 	int
+ 	default $(shell,$(srctree)/scripts/clang-version.sh $(CC))
+ 
++config AS_IS_GNU
++	def_bool $(success,test "$(as-name)" = GNU)
++
++config AS_IS_LLVM
++	def_bool $(success,test "$(as-name)" = LLVM)
++
++config AS_VERSION
++	int
++	# Use clang version if this is the integrated assembler
++	default CLANG_VERSION if AS_IS_LLVM
++	default $(as-version)
++
+ config LLD_VERSION
+ 	int
+ 	default $(shell,$(srctree)/scripts/lld-version.sh $(LD))
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 43270b07b2e0b..b476591168dcf 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -2188,11 +2188,15 @@ out_unlock:
+ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
+ {
+ 	struct cgroup_subsys_state *css;
++	struct cpuset *cs;
+ 
+ 	cgroup_taskset_first(tset, &css);
++	cs = css_cs(css);
+ 
+ 	percpu_down_write(&cpuset_rwsem);
+-	css_cs(css)->attach_in_progress--;
++	cs->attach_in_progress--;
++	if (!cs->attach_in_progress)
++		wake_up(&cpuset_attach_wq);
+ 	percpu_up_write(&cpuset_rwsem);
+ }
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index e2e1371fbb9d3..f0df3bc0e6415 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -11622,7 +11622,7 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
+ 	/*
+ 	 * If its not a per-cpu rb, it must be the same task.
+ 	 */
+-	if (output_event->cpu == -1 && output_event->ctx != event->ctx)
++	if (output_event->cpu == -1 && output_event->hw.target != event->hw.target)
+ 		goto out;
+ 
+ 	/*
+diff --git a/kernel/kexec.c b/kernel/kexec.c
+index c82c6c06f0518..f0f0c65554549 100644
+--- a/kernel/kexec.c
++++ b/kernel/kexec.c
+@@ -110,6 +110,14 @@ static int do_kexec_load(unsigned long entry, unsigned long nr_segments,
+ 	unsigned long i;
+ 	int ret;
+ 
++	/*
++	 * Because we write directly to the reserved memory region when loading
++	 * crash kernels we need a serialization here to prevent multiple crash
++	 * kernels from attempting to load simultaneously.
++	 */
++	if (!kexec_trylock())
++		return -EBUSY;
++
+ 	if (flags & KEXEC_ON_CRASH) {
+ 		dest_image = &kexec_crash_image;
+ 		if (kexec_crash_image)
+@@ -121,7 +129,8 @@ static int do_kexec_load(unsigned long entry, unsigned long nr_segments,
+ 	if (nr_segments == 0) {
+ 		/* Uninstall image */
+ 		kimage_free(xchg(dest_image, NULL));
+-		return 0;
++		ret = 0;
++		goto out_unlock;
+ 	}
+ 	if (flags & KEXEC_ON_CRASH) {
+ 		/*
+@@ -134,7 +143,7 @@ static int do_kexec_load(unsigned long entry, unsigned long nr_segments,
+ 
+ 	ret = kimage_alloc_init(&image, entry, nr_segments, segments, flags);
+ 	if (ret)
+-		return ret;
++		goto out_unlock;
+ 
+ 	if (flags & KEXEC_PRESERVE_CONTEXT)
+ 		image->preserve_context = 1;
+@@ -171,6 +180,8 @@ out:
+ 		arch_kexec_protect_crashkres();
+ 
+ 	kimage_free(image);
++out_unlock:
++	kexec_unlock();
+ 	return ret;
+ }
+ 
+@@ -247,21 +258,8 @@ SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments,
+ 		((flags & KEXEC_ARCH_MASK) != KEXEC_ARCH_DEFAULT))
+ 		return -EINVAL;
+ 
+-	/* Because we write directly to the reserved memory
+-	 * region when loading crash kernels we need a mutex here to
+-	 * prevent multiple crash  kernels from attempting to load
+-	 * simultaneously, and to prevent a crash kernel from loading
+-	 * over the top of a in use crash kernel.
+-	 *
+-	 * KISS: always take the mutex.
+-	 */
+-	if (!mutex_trylock(&kexec_mutex))
+-		return -EBUSY;
+-
+ 	result = do_kexec_load(entry, nr_segments, segments, flags);
+ 
+-	mutex_unlock(&kexec_mutex);
+-
+ 	return result;
+ }
+ 
+@@ -301,21 +299,8 @@ COMPAT_SYSCALL_DEFINE4(kexec_load, compat_ulong_t, entry,
+ 			return -EFAULT;
+ 	}
+ 
+-	/* Because we write directly to the reserved memory
+-	 * region when loading crash kernels we need a mutex here to
+-	 * prevent multiple crash  kernels from attempting to load
+-	 * simultaneously, and to prevent a crash kernel from loading
+-	 * over the top of a in use crash kernel.
+-	 *
+-	 * KISS: always take the mutex.
+-	 */
+-	if (!mutex_trylock(&kexec_mutex))
+-		return -EBUSY;
+-
+ 	result = do_kexec_load(entry, nr_segments, ksegments, flags);
+ 
+-	mutex_unlock(&kexec_mutex);
+-
+ 	return result;
+ }
+ #endif
+diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
+index c589c7a9562ca..7a8104d489971 100644
+--- a/kernel/kexec_core.c
++++ b/kernel/kexec_core.c
+@@ -45,7 +45,7 @@
+ #include <crypto/sha.h>
+ #include "kexec_internal.h"
+ 
+-DEFINE_MUTEX(kexec_mutex);
++atomic_t __kexec_lock = ATOMIC_INIT(0);
+ 
+ /* Per cpu memory for storing cpu states in case of system crash. */
+ note_buf_t __percpu *crash_notes;
+@@ -943,7 +943,7 @@ int kexec_load_disabled;
+  */
+ void __noclone __crash_kexec(struct pt_regs *regs)
+ {
+-	/* Take the kexec_mutex here to prevent sys_kexec_load
++	/* Take the kexec_lock here to prevent sys_kexec_load
+ 	 * running on one cpu from replacing the crash kernel
+ 	 * we are using after a panic on a different cpu.
+ 	 *
+@@ -951,7 +951,7 @@ void __noclone __crash_kexec(struct pt_regs *regs)
+ 	 * of memory the xchg(&kexec_crash_image) would be
+ 	 * sufficient.  But since I reuse the memory...
+ 	 */
+-	if (mutex_trylock(&kexec_mutex)) {
++	if (kexec_trylock()) {
+ 		if (kexec_crash_image) {
+ 			struct pt_regs fixed_regs;
+ 
+@@ -960,7 +960,7 @@ void __noclone __crash_kexec(struct pt_regs *regs)
+ 			machine_crash_shutdown(&fixed_regs);
+ 			machine_kexec(kexec_crash_image);
+ 		}
+-		mutex_unlock(&kexec_mutex);
++		kexec_unlock();
+ 	}
+ }
+ STACK_FRAME_NON_STANDARD(__crash_kexec);
+@@ -989,14 +989,17 @@ void crash_kexec(struct pt_regs *regs)
+ 	}
+ }
+ 
+-size_t crash_get_memory_size(void)
++ssize_t crash_get_memory_size(void)
+ {
+-	size_t size = 0;
++	ssize_t size = 0;
++
++	if (!kexec_trylock())
++		return -EBUSY;
+ 
+-	mutex_lock(&kexec_mutex);
+ 	if (crashk_res.end != crashk_res.start)
+ 		size = resource_size(&crashk_res);
+-	mutex_unlock(&kexec_mutex);
++
++	kexec_unlock();
+ 	return size;
+ }
+ 
+@@ -1016,7 +1019,8 @@ int crash_shrink_memory(unsigned long new_size)
+ 	unsigned long old_size;
+ 	struct resource *ram_res;
+ 
+-	mutex_lock(&kexec_mutex);
++	if (!kexec_trylock())
++		return -EBUSY;
+ 
+ 	if (kexec_crash_image) {
+ 		ret = -ENOENT;
+@@ -1054,7 +1058,7 @@ int crash_shrink_memory(unsigned long new_size)
+ 	insert_resource(&iomem_resource, ram_res);
+ 
+ unlock:
+-	mutex_unlock(&kexec_mutex);
++	kexec_unlock();
+ 	return ret;
+ }
+ 
+@@ -1126,7 +1130,7 @@ int kernel_kexec(void)
+ {
+ 	int error = 0;
+ 
+-	if (!mutex_trylock(&kexec_mutex))
++	if (!kexec_trylock())
+ 		return -EBUSY;
+ 	if (!kexec_image) {
+ 		error = -EINVAL;
+@@ -1201,7 +1205,7 @@ int kernel_kexec(void)
+ #endif
+ 
+  Unlock:
+-	mutex_unlock(&kexec_mutex);
++	kexec_unlock();
+ 	return error;
+ }
+ 
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index fff11916aba33..b9c857782adae 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -343,7 +343,7 @@ SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd,
+ 
+ 	image = NULL;
+ 
+-	if (!mutex_trylock(&kexec_mutex))
++	if (!kexec_trylock())
+ 		return -EBUSY;
+ 
+ 	dest_image = &kexec_image;
+@@ -415,7 +415,7 @@ out:
+ 	if ((flags & KEXEC_FILE_ON_CRASH) && kexec_crash_image)
+ 		arch_kexec_protect_crashkres();
+ 
+-	mutex_unlock(&kexec_mutex);
++	kexec_unlock();
+ 	kimage_free(image);
+ 	return ret;
+ }
+diff --git a/kernel/kexec_internal.h b/kernel/kexec_internal.h
+index 39d30ccf8d879..49d4e3ab9c964 100644
+--- a/kernel/kexec_internal.h
++++ b/kernel/kexec_internal.h
+@@ -15,7 +15,20 @@ int kimage_is_destination_range(struct kimage *image,
+ 
+ int machine_kexec_post_load(struct kimage *image);
+ 
+-extern struct mutex kexec_mutex;
++/*
++ * Whatever is used to serialize accesses to the kexec_crash_image needs to be
++ * NMI safe, as __crash_kexec() can happen during nmi_panic(), so here we use a
++ * "simple" atomic variable that is acquired with a cmpxchg().
++ */
++extern atomic_t __kexec_lock;
++static inline bool kexec_trylock(void)
++{
++	return atomic_cmpxchg_acquire(&__kexec_lock, 0, 1) == 0;
++}
++static inline void kexec_unlock(void)
++{
++	atomic_set_release(&__kexec_lock, 0);
++}
+ 
+ #ifdef CONFIG_KEXEC_FILE
+ #include <linux/purgatory.h>
+diff --git a/kernel/ksysfs.c b/kernel/ksysfs.c
+index 35859da8bd4f7..e20c19e3ba49c 100644
+--- a/kernel/ksysfs.c
++++ b/kernel/ksysfs.c
+@@ -106,7 +106,12 @@ KERNEL_ATTR_RO(kexec_crash_loaded);
+ static ssize_t kexec_crash_size_show(struct kobject *kobj,
+ 				       struct kobj_attribute *attr, char *buf)
+ {
+-	return sprintf(buf, "%zu\n", crash_get_memory_size());
++	ssize_t size = crash_get_memory_size();
++
++	if (size < 0)
++		return size;
++
++	return sprintf(buf, "%zd\n", size);
+ }
+ static ssize_t kexec_crash_size_store(struct kobject *kobj,
+ 				   struct kobj_attribute *attr,
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index bb70a7856277f..57a58bc48021a 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -9342,8 +9342,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
+ 		local->avg_load = (local->group_load * SCHED_CAPACITY_SCALE) /
+ 				  local->group_capacity;
+ 
+-		sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
+-				sds->total_capacity;
+ 		/*
+ 		 * If the local group is more loaded than the selected
+ 		 * busiest group don't try to pull any tasks.
+@@ -9352,6 +9350,19 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
+ 			env->imbalance = 0;
+ 			return;
+ 		}
++
++		sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
++				sds->total_capacity;
++
++		/*
++		 * If the local group is more loaded than the average system
++		 * load, don't try to pull any tasks.
++		 */
++		if (local->avg_load >= sds->avg_load) {
++			env->imbalance = 0;
++			return;
++		}
++
+ 	}
+ 
+ 	/*
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index d8b7b28463135..d981abea0358d 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -1061,6 +1061,65 @@ int proc_douintvec_minmax(struct ctl_table *table, int write,
+ 				 do_proc_douintvec_minmax_conv, &param);
+ }
+ 
++/**
++ * proc_dou8vec_minmax - read a vector of unsigned chars with min/max values
++ * @table: the sysctl table
++ * @write: %TRUE if this is a write to the sysctl file
++ * @buffer: the user buffer
++ * @lenp: the size of the user buffer
++ * @ppos: file position
++ *
++ * Reads/writes up to table->maxlen/sizeof(u8) unsigned chars
++ * values from/to the user buffer, treated as an ASCII string. Negative
++ * strings are not allowed.
++ *
++ * This routine will ensure the values are within the range specified by
++ * table->extra1 (min) and table->extra2 (max).
++ *
++ * Returns 0 on success or an error on write when the range check fails.
++ */
++int proc_dou8vec_minmax(struct ctl_table *table, int write,
++			void *buffer, size_t *lenp, loff_t *ppos)
++{
++	struct ctl_table tmp;
++	unsigned int min = 0, max = 255U, val;
++	u8 *data = table->data;
++	struct do_proc_douintvec_minmax_conv_param param = {
++		.min = &min,
++		.max = &max,
++	};
++	int res;
++
++	/* Do not support arrays yet. */
++	if (table->maxlen != sizeof(u8))
++		return -EINVAL;
++
++	if (table->extra1) {
++		min = *(unsigned int *) table->extra1;
++		if (min > 255U)
++			return -EINVAL;
++	}
++	if (table->extra2) {
++		max = *(unsigned int *) table->extra2;
++		if (max > 255U)
++			return -EINVAL;
++	}
++
++	tmp = *table;
++
++	tmp.maxlen = sizeof(val);
++	tmp.data = &val;
++	val = READ_ONCE(*data);
++	res = do_proc_douintvec(&tmp, write, buffer, lenp, ppos,
++				do_proc_douintvec_minmax_conv, &param);
++	if (res)
++		return res;
++	if (write)
++		WRITE_ONCE(*data, val);
++	return 0;
++}
++EXPORT_SYMBOL_GPL(proc_dou8vec_minmax);
++
+ static int do_proc_dopipe_max_size_conv(unsigned long *lvalp,
+ 					unsigned int *valp,
+ 					int write, void *data)
+@@ -1612,6 +1671,12 @@ int proc_douintvec_minmax(struct ctl_table *table, int write,
+ 	return -ENOSYS;
+ }
+ 
++int proc_dou8vec_minmax(struct ctl_table *table, int write,
++			void *buffer, size_t *lenp, loff_t *ppos)
++{
++	return -ENOSYS;
++}
++
+ int proc_dointvec_jiffies(struct ctl_table *table, int write,
+ 		    void *buffer, size_t *lenp, loff_t *ppos)
+ {
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 67829b6e07bdc..3dab978c156d2 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -5390,12 +5390,15 @@ int modify_ftrace_direct(unsigned long ip,
+ 		ret = 0;
+ 	}
+ 
+-	if (unlikely(ret && new_direct)) {
+-		direct->count++;
+-		list_del_rcu(&new_direct->next);
+-		synchronize_rcu_tasks();
+-		kfree(new_direct);
+-		ftrace_direct_func_count--;
++	if (ret) {
++		direct->addr = old_addr;
++		if (unlikely(new_direct)) {
++			direct->count++;
++			list_del_rcu(&new_direct->next);
++			synchronize_rcu_tasks();
++			kfree(new_direct);
++			ftrace_direct_func_count--;
++		}
+ 	}
+ 
+  out_unlock:
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 70da6f3212bc4..21b07c7c6ee5e 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -2962,6 +2962,10 @@ rb_set_commit_to_write(struct ring_buffer_per_cpu *cpu_buffer)
+ 		if (RB_WARN_ON(cpu_buffer,
+ 			       rb_is_reader_page(cpu_buffer->tail_page)))
+ 			return;
++		/*
++		 * No need for a memory barrier here, as the update
++		 * of the tail_page did it for this page.
++		 */
+ 		local_set(&cpu_buffer->commit_page->page->commit,
+ 			  rb_page_write(cpu_buffer->commit_page));
+ 		rb_inc_page(cpu_buffer, &cpu_buffer->commit_page);
+@@ -2971,6 +2975,8 @@ rb_set_commit_to_write(struct ring_buffer_per_cpu *cpu_buffer)
+ 	while (rb_commit_index(cpu_buffer) !=
+ 	       rb_page_write(cpu_buffer->commit_page)) {
+ 
++		/* Make sure the readers see the content of what is committed. */
++		smp_wmb();
+ 		local_set(&cpu_buffer->commit_page->page->commit,
+ 			  rb_page_write(cpu_buffer->commit_page));
+ 		RB_WARN_ON(cpu_buffer,
+@@ -4390,7 +4396,12 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
+ 
+ 	/*
+ 	 * Make sure we see any padding after the write update
+-	 * (see rb_reset_tail())
++	 * (see rb_reset_tail()).
++	 *
++	 * In addition, a writer may be writing on the reader page
++	 * if the page has not been fully filled, so the read barrier
++	 * is also needed to make sure we see the content of what is
++	 * committed by the writer (see rb_set_commit_to_write()).
+ 	 */
+ 	smp_rmb();
+ 
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index ce45bdd9077db..482ec6606b7b5 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -8895,6 +8895,7 @@ static int __remove_instance(struct trace_array *tr)
+ 	ftrace_destroy_function_files(tr);
+ 	tracefs_remove(tr->dir);
+ 	free_trace_buffers(tr);
++	clear_tracing_err_log(tr);
+ 
+ 	for (i = 0; i < tr->nr_topts; i++) {
+ 		kfree(tr->topts[i].topts);
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index d87d6971afc9d..86ade667a7af6 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -666,6 +666,7 @@ static void __del_from_avail_list(struct swap_info_struct *p)
+ {
+ 	int nid;
+ 
++	assert_spin_locked(&p->lock);
+ 	for_each_node(nid)
+ 		plist_del(&p->avail_lists[nid], &swap_avail_heads[nid]);
+ }
+@@ -2611,8 +2612,8 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
+ 		spin_unlock(&swap_lock);
+ 		goto out_dput;
+ 	}
+-	del_from_avail_list(p);
+ 	spin_lock(&p->lock);
++	del_from_avail_list(p);
+ 	if (p->prio < 0) {
+ 		struct swap_info_struct *si = p;
+ 		int nid;
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index 220e8f4ac0cfe..da056170849bf 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -300,6 +300,10 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
+ 	write_unlock(&xen_9pfs_lock);
+ 
+ 	for (i = 0; i < priv->num_rings; i++) {
++		struct xen_9pfs_dataring *ring = &priv->rings[i];
++
++		cancel_work_sync(&ring->work);
++
+ 		if (!priv->rings[i].intf)
+ 			break;
+ 		if (priv->rings[i].irq > 0)
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index 0db48c8126623..b946a6379433a 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -433,7 +433,7 @@ static void hidp_set_timer(struct hidp_session *session)
+ static void hidp_del_timer(struct hidp_session *session)
+ {
+ 	if (session->idle_to > 0)
+-		del_timer(&session->timer);
++		del_timer_sync(&session->timer);
+ }
+ 
+ static void hidp_process_report(struct hidp_session *session, int type,
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 367b1dec2e751..f9d2ce9cee369 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -4647,33 +4647,27 @@ static inline int l2cap_disconnect_req(struct l2cap_conn *conn,
+ 
+ 	BT_DBG("scid 0x%4.4x dcid 0x%4.4x", scid, dcid);
+ 
+-	mutex_lock(&conn->chan_lock);
+-
+-	chan = __l2cap_get_chan_by_scid(conn, dcid);
++	chan = l2cap_get_chan_by_scid(conn, dcid);
+ 	if (!chan) {
+-		mutex_unlock(&conn->chan_lock);
+ 		cmd_reject_invalid_cid(conn, cmd->ident, dcid, scid);
+ 		return 0;
+ 	}
+ 
+-	l2cap_chan_hold(chan);
+-	l2cap_chan_lock(chan);
+-
+ 	rsp.dcid = cpu_to_le16(chan->scid);
+ 	rsp.scid = cpu_to_le16(chan->dcid);
+ 	l2cap_send_cmd(conn, cmd->ident, L2CAP_DISCONN_RSP, sizeof(rsp), &rsp);
+ 
+ 	chan->ops->set_shutdown(chan);
+ 
++	mutex_lock(&conn->chan_lock);
+ 	l2cap_chan_del(chan, ECONNRESET);
++	mutex_unlock(&conn->chan_lock);
+ 
+ 	chan->ops->close(chan);
+ 
+ 	l2cap_chan_unlock(chan);
+ 	l2cap_chan_put(chan);
+ 
+-	mutex_unlock(&conn->chan_lock);
+-
+ 	return 0;
+ }
+ 
+@@ -4693,33 +4687,27 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn,
+ 
+ 	BT_DBG("dcid 0x%4.4x scid 0x%4.4x", dcid, scid);
+ 
+-	mutex_lock(&conn->chan_lock);
+-
+-	chan = __l2cap_get_chan_by_scid(conn, scid);
++	chan = l2cap_get_chan_by_scid(conn, scid);
+ 	if (!chan) {
+ 		mutex_unlock(&conn->chan_lock);
+ 		return 0;
+ 	}
+ 
+-	l2cap_chan_hold(chan);
+-	l2cap_chan_lock(chan);
+-
+ 	if (chan->state != BT_DISCONN) {
+ 		l2cap_chan_unlock(chan);
+ 		l2cap_chan_put(chan);
+-		mutex_unlock(&conn->chan_lock);
+ 		return 0;
+ 	}
+ 
++	mutex_lock(&conn->chan_lock);
+ 	l2cap_chan_del(chan, 0);
++	mutex_unlock(&conn->chan_lock);
+ 
+ 	chan->ops->close(chan);
+ 
+ 	l2cap_chan_unlock(chan);
+ 	l2cap_chan_put(chan);
+ 
+-	mutex_unlock(&conn->chan_lock);
+-
+ 	return 0;
+ }
+ 
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index f2f0bc7f0cb4c..4360f33278c1e 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1480,6 +1480,21 @@ static int isotp_init(struct sock *sk)
+ 	return 0;
+ }
+ 
++static __poll_t isotp_poll(struct file *file, struct socket *sock, poll_table *wait)
++{
++	struct sock *sk = sock->sk;
++	struct isotp_sock *so = isotp_sk(sk);
++
++	__poll_t mask = datagram_poll(file, sock, wait);
++	poll_wait(file, &so->wait, wait);
++
++	/* Check for false positives due to TX state */
++	if ((mask & EPOLLWRNORM) && (so->tx.state != ISOTP_IDLE))
++		mask &= ~(EPOLLOUT | EPOLLWRNORM);
++
++	return mask;
++}
++
+ static int isotp_sock_no_ioctlcmd(struct socket *sock, unsigned int cmd,
+ 				  unsigned long arg)
+ {
+@@ -1495,7 +1510,7 @@ static const struct proto_ops isotp_ops = {
+ 	.socketpair = sock_no_socketpair,
+ 	.accept = sock_no_accept,
+ 	.getname = isotp_getname,
+-	.poll = datagram_poll,
++	.poll = isotp_poll,
+ 	.ioctl = isotp_sock_no_ioctlcmd,
+ 	.gettstamp = sock_gettstamp,
+ 	.listen = sock_no_listen,
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 57d6aac7f4353..5dcbb0b7d123a 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -600,7 +600,10 @@ sk_buff *j1939_tp_tx_dat_new(struct j1939_priv *priv,
+ 	/* reserve CAN header */
+ 	skb_reserve(skb, offsetof(struct can_frame, data));
+ 
+-	memcpy(skb->cb, re_skcb, sizeof(skb->cb));
++	/* skb->cb must be large enough to hold a j1939_sk_buff_cb structure */
++	BUILD_BUG_ON(sizeof(skb->cb) < sizeof(*re_skcb));
++
++	memcpy(skb->cb, re_skcb, sizeof(*re_skcb));
+ 	skcb = j1939_skb_to_cb(skb);
+ 	if (swap_src_dst)
+ 		j1939_skbcb_swap(skcb);
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index 960948290001e..2ad22511b9c6d 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -137,6 +137,20 @@ static void queue_process(struct work_struct *work)
+ 	}
+ }
+ 
++static int netif_local_xmit_active(struct net_device *dev)
++{
++	int i;
++
++	for (i = 0; i < dev->num_tx_queues; i++) {
++		struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
++
++		if (READ_ONCE(txq->xmit_lock_owner) == smp_processor_id())
++			return 1;
++	}
++
++	return 0;
++}
++
+ static void poll_one_napi(struct napi_struct *napi)
+ {
+ 	int work;
+@@ -183,7 +197,10 @@ void netpoll_poll_dev(struct net_device *dev)
+ 	if (!ni || down_trylock(&ni->dev_lock))
+ 		return;
+ 
+-	if (!netif_running(dev)) {
++	/* Some drivers will take the same locks in poll and xmit,
++	 * we can't poll if local CPU is already in xmit.
++	 */
++	if (!netif_running(dev) || netif_local_xmit_active(dev)) {
+ 		up(&ni->dev_lock);
+ 		return;
+ 	}
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index a1aacf5e75a6a..0bbc047e8f6e1 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -755,6 +755,11 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ 		room = 576;
+ 	room -= sizeof(struct iphdr) + icmp_param.replyopts.opt.opt.optlen;
+ 	room -= sizeof(struct icmphdr);
++	/* Guard against tiny mtu. We need to include at least one
++	 * IP network header for this message to make any sense.
++	 */
++	if (room <= (int)sizeof(struct iphdr))
++		goto ende;
+ 
+ 	icmp_param.data_len = skb_in->len - icmp_param.offset;
+ 	if (icmp_param.data_len > room)
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 439970e02ac65..3a34e9768bff0 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -37,6 +37,7 @@ static int ip_local_port_range_min[] = { 1, 1 };
+ static int ip_local_port_range_max[] = { 65535, 65535 };
+ static int tcp_adv_win_scale_min = -31;
+ static int tcp_adv_win_scale_max = 31;
++static int tcp_app_win_max = 31;
+ static int tcp_min_snd_mss_min = TCP_MIN_SND_MSS;
+ static int tcp_min_snd_mss_max = 65535;
+ static int ip_privileged_port_min;
+@@ -540,30 +541,30 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "icmp_echo_ignore_all",
+ 		.data		= &init_net.ipv4.sysctl_icmp_echo_ignore_all,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "icmp_echo_ignore_broadcasts",
+ 		.data		= &init_net.ipv4.sysctl_icmp_echo_ignore_broadcasts,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "icmp_ignore_bogus_error_responses",
+ 		.data		= &init_net.ipv4.sysctl_icmp_ignore_bogus_error_responses,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "icmp_errors_use_inbound_ifaddr",
+ 		.data		= &init_net.ipv4.sysctl_icmp_errors_use_inbound_ifaddr,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "icmp_ratelimit",
+@@ -590,9 +591,9 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "raw_l3mdev_accept",
+ 		.data		= &init_net.ipv4.sysctl_raw_l3mdev_accept,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
++		.proc_handler	= proc_dou8vec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+ 		.extra2		= SYSCTL_ONE,
+ 	},
+@@ -600,30 +601,30 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "tcp_ecn",
+ 		.data		= &init_net.ipv4.sysctl_tcp_ecn,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_ecn_fallback",
+ 		.data		= &init_net.ipv4.sysctl_tcp_ecn_fallback,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "ip_dynaddr",
+ 		.data		= &init_net.ipv4.sysctl_ip_dynaddr,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "ip_early_demux",
+ 		.data		= &init_net.ipv4.sysctl_ip_early_demux,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname       = "udp_early_demux",
+@@ -642,18 +643,18 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname       = "nexthop_compat_mode",
+ 		.data           = &init_net.ipv4.sysctl_nexthop_compat_mode,
+-		.maxlen         = sizeof(int),
++		.maxlen         = sizeof(u8),
+ 		.mode           = 0644,
+-		.proc_handler   = proc_dointvec_minmax,
++		.proc_handler   = proc_dou8vec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+ 		.extra2		= SYSCTL_ONE,
+ 	},
+ 	{
+ 		.procname	= "ip_default_ttl",
+ 		.data		= &init_net.ipv4.sysctl_ip_default_ttl,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
++		.proc_handler	= proc_dou8vec_minmax,
+ 		.extra1		= &ip_ttl_min,
+ 		.extra2		= &ip_ttl_max,
+ 	},
+@@ -674,16 +675,16 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "ip_no_pmtu_disc",
+ 		.data		= &init_net.ipv4.sysctl_ip_no_pmtu_disc,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "ip_forward_use_pmtu",
+ 		.data		= &init_net.ipv4.sysctl_ip_fwd_use_pmtu,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "ip_forward_update_priority",
+@@ -697,40 +698,40 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "ip_nonlocal_bind",
+ 		.data		= &init_net.ipv4.sysctl_ip_nonlocal_bind,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "ip_autobind_reuse",
+ 		.data		= &init_net.ipv4.sysctl_ip_autobind_reuse,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
++		.proc_handler	= proc_dou8vec_minmax,
+ 		.extra1         = SYSCTL_ZERO,
+ 		.extra2         = SYSCTL_ONE,
+ 	},
+ 	{
+ 		.procname	= "fwmark_reflect",
+ 		.data		= &init_net.ipv4.sysctl_fwmark_reflect,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_fwmark_accept",
+ 		.data		= &init_net.ipv4.sysctl_tcp_fwmark_accept,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ #ifdef CONFIG_NET_L3_MASTER_DEV
+ 	{
+ 		.procname	= "tcp_l3mdev_accept",
+ 		.data		= &init_net.ipv4.sysctl_tcp_l3mdev_accept,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
++		.proc_handler	= proc_dou8vec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+ 		.extra2		= SYSCTL_ONE,
+ 	},
+@@ -738,9 +739,9 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "tcp_mtu_probing",
+ 		.data		= &init_net.ipv4.sysctl_tcp_mtu_probing,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_base_mss",
+@@ -842,9 +843,9 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "tcp_keepalive_probes",
+ 		.data		= &init_net.ipv4.sysctl_tcp_keepalive_probes,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_keepalive_intvl",
+@@ -856,26 +857,26 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "tcp_syn_retries",
+ 		.data		= &init_net.ipv4.sysctl_tcp_syn_retries,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
++		.proc_handler	= proc_dou8vec_minmax,
+ 		.extra1		= &tcp_syn_retries_min,
+ 		.extra2		= &tcp_syn_retries_max
+ 	},
+ 	{
+ 		.procname	= "tcp_synack_retries",
+ 		.data		= &init_net.ipv4.sysctl_tcp_synack_retries,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ #ifdef CONFIG_SYN_COOKIES
+ 	{
+ 		.procname	= "tcp_syncookies",
+ 		.data		= &init_net.ipv4.sysctl_tcp_syncookies,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ #endif
+ 	{
+@@ -888,24 +889,24 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "tcp_retries1",
+ 		.data		= &init_net.ipv4.sysctl_tcp_retries1,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
++		.proc_handler	= proc_dou8vec_minmax,
+ 		.extra2		= &tcp_retr1_max
+ 	},
+ 	{
+ 		.procname	= "tcp_retries2",
+ 		.data		= &init_net.ipv4.sysctl_tcp_retries2,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_orphan_retries",
+ 		.data		= &init_net.ipv4.sysctl_tcp_orphan_retries,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_fin_timeout",
+@@ -924,9 +925,9 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "tcp_tw_reuse",
+ 		.data		= &init_net.ipv4.sysctl_tcp_tw_reuse,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
++		.proc_handler	= proc_dou8vec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+ 		.extra2		= &two,
+ 	},
+@@ -1012,88 +1013,88 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "tcp_sack",
+ 		.data		= &init_net.ipv4.sysctl_tcp_sack,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_window_scaling",
+ 		.data		= &init_net.ipv4.sysctl_tcp_window_scaling,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_timestamps",
+ 		.data		= &init_net.ipv4.sysctl_tcp_timestamps,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_early_retrans",
+ 		.data		= &init_net.ipv4.sysctl_tcp_early_retrans,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
++		.proc_handler	= proc_dou8vec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+ 		.extra2		= &four,
+ 	},
+ 	{
+ 		.procname	= "tcp_recovery",
+ 		.data		= &init_net.ipv4.sysctl_tcp_recovery,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname       = "tcp_thin_linear_timeouts",
+ 		.data           = &init_net.ipv4.sysctl_tcp_thin_linear_timeouts,
+-		.maxlen         = sizeof(int),
++		.maxlen         = sizeof(u8),
+ 		.mode           = 0644,
+-		.proc_handler   = proc_dointvec
++		.proc_handler   = proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_slow_start_after_idle",
+ 		.data		= &init_net.ipv4.sysctl_tcp_slow_start_after_idle,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_retrans_collapse",
+ 		.data		= &init_net.ipv4.sysctl_tcp_retrans_collapse,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_stdurg",
+ 		.data		= &init_net.ipv4.sysctl_tcp_stdurg,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_rfc1337",
+ 		.data		= &init_net.ipv4.sysctl_tcp_rfc1337,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_abort_on_overflow",
+ 		.data		= &init_net.ipv4.sysctl_tcp_abort_on_overflow,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_fack",
+ 		.data		= &init_net.ipv4.sysctl_tcp_fack,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_max_reordering",
+@@ -1105,16 +1106,18 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "tcp_dsack",
+ 		.data		= &init_net.ipv4.sysctl_tcp_dsack,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_app_win",
+ 		.data		= &init_net.ipv4.sysctl_tcp_app_win,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= &tcp_app_win_max,
+ 	},
+ 	{
+ 		.procname	= "tcp_adv_win_scale",
+@@ -1128,46 +1131,46 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "tcp_frto",
+ 		.data		= &init_net.ipv4.sysctl_tcp_frto,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_no_metrics_save",
+ 		.data		= &init_net.ipv4.sysctl_tcp_nometrics_save,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_no_ssthresh_metrics_save",
+ 		.data		= &init_net.ipv4.sysctl_tcp_no_ssthresh_metrics_save,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
++		.proc_handler	= proc_dou8vec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+ 		.extra2		= SYSCTL_ONE,
+ 	},
+ 	{
+ 		.procname	= "tcp_moderate_rcvbuf",
+ 		.data		= &init_net.ipv4.sysctl_tcp_moderate_rcvbuf,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_tso_win_divisor",
+ 		.data		= &init_net.ipv4.sysctl_tcp_tso_win_divisor,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_workaround_signed_windows",
+ 		.data		= &init_net.ipv4.sysctl_tcp_workaround_signed_windows,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ 	{
+ 		.procname	= "tcp_limit_output_bytes",
+@@ -1186,9 +1189,9 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "tcp_min_tso_segs",
+ 		.data		= &init_net.ipv4.sysctl_tcp_min_tso_segs,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
++		.proc_handler	= proc_dou8vec_minmax,
+ 		.extra1		= SYSCTL_ONE,
+ 		.extra2		= &gso_max_segs,
+ 	},
+@@ -1204,9 +1207,9 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname	= "tcp_autocorking",
+ 		.data		= &init_net.ipv4.sysctl_tcp_autocorking,
+-		.maxlen		= sizeof(int),
++		.maxlen		= sizeof(u8),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec_minmax,
++		.proc_handler	= proc_dou8vec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+ 		.extra2		= SYSCTL_ONE,
+ 	},
+@@ -1277,9 +1280,9 @@ static struct ctl_table ipv4_net_table[] = {
+ 	{
+ 		.procname       = "tcp_reflect_tos",
+ 		.data           = &init_net.ipv4.sysctl_tcp_reflect_tos,
+-		.maxlen         = sizeof(int),
++		.maxlen         = sizeof(u8),
+ 		.mode           = 0644,
+-		.proc_handler   = proc_dointvec_minmax,
++		.proc_handler   = proc_dou8vec_minmax,
+ 		.extra1         = SYSCTL_ZERO,
+ 		.extra2         = SYSCTL_ONE,
+ 	},
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index e427f5040a08e..c62e44224bf84 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1925,8 +1925,13 @@ struct sk_buff *__ip6_make_skb(struct sock *sk,
+ 	IP6_UPD_PO_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUT, skb->len);
+ 	if (proto == IPPROTO_ICMPV6) {
+ 		struct inet6_dev *idev = ip6_dst_idev(skb_dst(skb));
++		u8 icmp6_type;
+ 
+-		ICMP6MSGOUT_INC_STATS(net, idev, icmp6_hdr(skb)->icmp6_type);
++		if (sk->sk_socket->type == SOCK_RAW && !inet_sk(sk)->hdrincl)
++			icmp6_type = fl6->fl6_icmp_type;
++		else
++			icmp6_type = icmp6_hdr(skb)->icmp6_type;
++		ICMP6MSGOUT_INC_STATS(net, idev, icmp6_type);
+ 		ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTMSGS);
+ 	}
+ 
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 1805cc5f7418b..20cc08210c700 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1340,9 +1340,11 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 			msg->msg_name = &sin;
+ 			msg->msg_namelen = sizeof(sin);
+ do_udp_sendmsg:
+-			if (__ipv6_only_sock(sk))
+-				return -ENETUNREACH;
+-			return udp_sendmsg(sk, msg, len);
++			err = __ipv6_only_sock(sk) ?
++				-ENETUNREACH : udp_sendmsg(sk, msg, len);
++			msg->msg_name = sin6;
++			msg->msg_namelen = addr_len;
++			return err;
+ 		}
+ 	}
+ 
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index d572478c4d68e..2e84360990f0c 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -1040,7 +1040,8 @@ static int __must_check __sta_info_destroy_part1(struct sta_info *sta)
+ 	list_del_rcu(&sta->list);
+ 	sta->removed = true;
+ 
+-	drv_sta_pre_rcu_remove(local, sta->sdata, sta);
++	if (sta->uploaded)
++		drv_sta_pre_rcu_remove(local, sta->sdata, sta);
+ 
+ 	if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
+ 	    rcu_access_pointer(sdata->u.vlan.sta) == sta)
+diff --git a/net/qrtr/Makefile b/net/qrtr/Makefile
+index 1b1411d158a73..8e0605f88a73d 100644
+--- a/net/qrtr/Makefile
++++ b/net/qrtr/Makefile
+@@ -1,5 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_QRTR) := qrtr.o ns.o
++obj-$(CONFIG_QRTR) += qrtr.o
++qrtr-y	:= af_qrtr.o ns.o
+ 
+ obj-$(CONFIG_QRTR_SMD) += qrtr-smd.o
+ qrtr-smd-y	:= smd.o
+diff --git a/net/qrtr/af_qrtr.c b/net/qrtr/af_qrtr.c
+new file mode 100644
+index 0000000000000..71c2295d4a573
+--- /dev/null
++++ b/net/qrtr/af_qrtr.c
+@@ -0,0 +1,1303 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * Copyright (c) 2015, Sony Mobile Communications Inc.
++ * Copyright (c) 2013, The Linux Foundation. All rights reserved.
++ */
++#include <linux/module.h>
++#include <linux/netlink.h>
++#include <linux/qrtr.h>
++#include <linux/termios.h>	/* For TIOCINQ/OUTQ */
++#include <linux/spinlock.h>
++#include <linux/wait.h>
++
++#include <net/sock.h>
++
++#include "qrtr.h"
++
++#define QRTR_PROTO_VER_1 1
++#define QRTR_PROTO_VER_2 3
++
++/* auto-bind range */
++#define QRTR_MIN_EPH_SOCKET 0x4000
++#define QRTR_MAX_EPH_SOCKET 0x7fff
++#define QRTR_EPH_PORT_RANGE \
++		XA_LIMIT(QRTR_MIN_EPH_SOCKET, QRTR_MAX_EPH_SOCKET)
++
++/**
++ * struct qrtr_hdr_v1 - (I|R)PCrouter packet header version 1
++ * @version: protocol version
++ * @type: packet type; one of QRTR_TYPE_*
++ * @src_node_id: source node
++ * @src_port_id: source port
++ * @confirm_rx: boolean; whether a resume-tx packet should be send in reply
++ * @size: length of packet, excluding this header
++ * @dst_node_id: destination node
++ * @dst_port_id: destination port
++ */
++struct qrtr_hdr_v1 {
++	__le32 version;
++	__le32 type;
++	__le32 src_node_id;
++	__le32 src_port_id;
++	__le32 confirm_rx;
++	__le32 size;
++	__le32 dst_node_id;
++	__le32 dst_port_id;
++} __packed;
++
++/**
++ * struct qrtr_hdr_v2 - (I|R)PCrouter packet header later versions
++ * @version: protocol version
++ * @type: packet type; one of QRTR_TYPE_*
++ * @flags: bitmask of QRTR_FLAGS_*
++ * @optlen: length of optional header data
++ * @size: length of packet, excluding this header and optlen
++ * @src_node_id: source node
++ * @src_port_id: source port
++ * @dst_node_id: destination node
++ * @dst_port_id: destination port
++ */
++struct qrtr_hdr_v2 {
++	u8 version;
++	u8 type;
++	u8 flags;
++	u8 optlen;
++	__le32 size;
++	__le16 src_node_id;
++	__le16 src_port_id;
++	__le16 dst_node_id;
++	__le16 dst_port_id;
++};
++
++#define QRTR_FLAGS_CONFIRM_RX	BIT(0)
++
++struct qrtr_cb {
++	u32 src_node;
++	u32 src_port;
++	u32 dst_node;
++	u32 dst_port;
++
++	u8 type;
++	u8 confirm_rx;
++};
++
++#define QRTR_HDR_MAX_SIZE max_t(size_t, sizeof(struct qrtr_hdr_v1), \
++					sizeof(struct qrtr_hdr_v2))
++
++struct qrtr_sock {
++	/* WARNING: sk must be the first member */
++	struct sock sk;
++	struct sockaddr_qrtr us;
++	struct sockaddr_qrtr peer;
++};
++
++static inline struct qrtr_sock *qrtr_sk(struct sock *sk)
++{
++	BUILD_BUG_ON(offsetof(struct qrtr_sock, sk) != 0);
++	return container_of(sk, struct qrtr_sock, sk);
++}
++
++static unsigned int qrtr_local_nid = 1;
++
++/* for node ids */
++static RADIX_TREE(qrtr_nodes, GFP_ATOMIC);
++static DEFINE_SPINLOCK(qrtr_nodes_lock);
++/* broadcast list */
++static LIST_HEAD(qrtr_all_nodes);
++/* lock for qrtr_all_nodes and node reference */
++static DEFINE_MUTEX(qrtr_node_lock);
++
++/* local port allocation management */
++static DEFINE_XARRAY_ALLOC(qrtr_ports);
++
++/**
++ * struct qrtr_node - endpoint node
++ * @ep_lock: lock for endpoint management and callbacks
++ * @ep: endpoint
++ * @ref: reference count for node
++ * @nid: node id
++ * @qrtr_tx_flow: tree of qrtr_tx_flow, keyed by node << 32 | port
++ * @qrtr_tx_lock: lock for qrtr_tx_flow inserts
++ * @rx_queue: receive queue
++ * @item: list item for broadcast list
++ */
++struct qrtr_node {
++	struct mutex ep_lock;
++	struct qrtr_endpoint *ep;
++	struct kref ref;
++	unsigned int nid;
++
++	struct radix_tree_root qrtr_tx_flow;
++	struct mutex qrtr_tx_lock; /* for qrtr_tx_flow */
++
++	struct sk_buff_head rx_queue;
++	struct list_head item;
++};
++
++/**
++ * struct qrtr_tx_flow - tx flow control
++ * @resume_tx: waiters for a resume tx from the remote
++ * @pending: number of waiting senders
++ * @tx_failed: indicates that a message with confirm_rx flag was lost
++ */
++struct qrtr_tx_flow {
++	struct wait_queue_head resume_tx;
++	int pending;
++	int tx_failed;
++};
++
++#define QRTR_TX_FLOW_HIGH	10
++#define QRTR_TX_FLOW_LOW	5
++
++static int qrtr_local_enqueue(struct qrtr_node *node, struct sk_buff *skb,
++			      int type, struct sockaddr_qrtr *from,
++			      struct sockaddr_qrtr *to);
++static int qrtr_bcast_enqueue(struct qrtr_node *node, struct sk_buff *skb,
++			      int type, struct sockaddr_qrtr *from,
++			      struct sockaddr_qrtr *to);
++static struct qrtr_sock *qrtr_port_lookup(int port);
++static void qrtr_port_put(struct qrtr_sock *ipc);
++
++/* Release node resources and free the node.
++ *
++ * Do not call directly, use qrtr_node_release.  To be used with
++ * kref_put_mutex.  As such, the node mutex is expected to be locked on call.
++ */
++static void __qrtr_node_release(struct kref *kref)
++{
++	struct qrtr_node *node = container_of(kref, struct qrtr_node, ref);
++	struct radix_tree_iter iter;
++	struct qrtr_tx_flow *flow;
++	unsigned long flags;
++	void __rcu **slot;
++
++	spin_lock_irqsave(&qrtr_nodes_lock, flags);
++	if (node->nid != QRTR_EP_NID_AUTO)
++		radix_tree_delete(&qrtr_nodes, node->nid);
++	spin_unlock_irqrestore(&qrtr_nodes_lock, flags);
++
++	list_del(&node->item);
++	mutex_unlock(&qrtr_node_lock);
++
++	skb_queue_purge(&node->rx_queue);
++
++	/* Free tx flow counters */
++	radix_tree_for_each_slot(slot, &node->qrtr_tx_flow, &iter, 0) {
++		flow = *slot;
++		radix_tree_iter_delete(&node->qrtr_tx_flow, &iter, slot);
++		kfree(flow);
++	}
++	kfree(node);
++}
++
++/* Increment reference to node. */
++static struct qrtr_node *qrtr_node_acquire(struct qrtr_node *node)
++{
++	if (node)
++		kref_get(&node->ref);
++	return node;
++}
++
++/* Decrement reference to node and release as necessary. */
++static void qrtr_node_release(struct qrtr_node *node)
++{
++	if (!node)
++		return;
++	kref_put_mutex(&node->ref, __qrtr_node_release, &qrtr_node_lock);
++}
++
++/**
++ * qrtr_tx_resume() - reset flow control counter
++ * @node:	qrtr_node that the QRTR_TYPE_RESUME_TX packet arrived on
++ * @skb:	resume_tx packet
++ */
++static void qrtr_tx_resume(struct qrtr_node *node, struct sk_buff *skb)
++{
++	struct qrtr_ctrl_pkt *pkt = (struct qrtr_ctrl_pkt *)skb->data;
++	u64 remote_node = le32_to_cpu(pkt->client.node);
++	u32 remote_port = le32_to_cpu(pkt->client.port);
++	struct qrtr_tx_flow *flow;
++	unsigned long key;
++
++	key = remote_node << 32 | remote_port;
++
++	rcu_read_lock();
++	flow = radix_tree_lookup(&node->qrtr_tx_flow, key);
++	rcu_read_unlock();
++	if (flow) {
++		spin_lock(&flow->resume_tx.lock);
++		flow->pending = 0;
++		spin_unlock(&flow->resume_tx.lock);
++		wake_up_interruptible_all(&flow->resume_tx);
++	}
++
++	consume_skb(skb);
++}
++
++/**
++ * qrtr_tx_wait() - flow control for outgoing packets
++ * @node:	qrtr_node that the packet is to be send to
++ * @dest_node:	node id of the destination
++ * @dest_port:	port number of the destination
++ * @type:	type of message
++ *
++ * The flow control scheme is based around the low and high "watermarks". When
++ * the low watermark is passed the confirm_rx flag is set on the outgoing
++ * message, which will trigger the remote to send a control message of the type
++ * QRTR_TYPE_RESUME_TX to reset the counter. If the high watermark is hit
++ * further transmision should be paused.
++ *
++ * Return: 1 if confirm_rx should be set, 0 otherwise or errno failure
++ */
++static int qrtr_tx_wait(struct qrtr_node *node, int dest_node, int dest_port,
++			int type)
++{
++	unsigned long key = (u64)dest_node << 32 | dest_port;
++	struct qrtr_tx_flow *flow;
++	int confirm_rx = 0;
++	int ret;
++
++	/* Never set confirm_rx on non-data packets */
++	if (type != QRTR_TYPE_DATA)
++		return 0;
++
++	mutex_lock(&node->qrtr_tx_lock);
++	flow = radix_tree_lookup(&node->qrtr_tx_flow, key);
++	if (!flow) {
++		flow = kzalloc(sizeof(*flow), GFP_KERNEL);
++		if (flow) {
++			init_waitqueue_head(&flow->resume_tx);
++			if (radix_tree_insert(&node->qrtr_tx_flow, key, flow)) {
++				kfree(flow);
++				flow = NULL;
++			}
++		}
++	}
++	mutex_unlock(&node->qrtr_tx_lock);
++
++	/* Set confirm_rx if we where unable to find and allocate a flow */
++	if (!flow)
++		return 1;
++
++	spin_lock_irq(&flow->resume_tx.lock);
++	ret = wait_event_interruptible_locked_irq(flow->resume_tx,
++						  flow->pending < QRTR_TX_FLOW_HIGH ||
++						  flow->tx_failed ||
++						  !node->ep);
++	if (ret < 0) {
++		confirm_rx = ret;
++	} else if (!node->ep) {
++		confirm_rx = -EPIPE;
++	} else if (flow->tx_failed) {
++		flow->tx_failed = 0;
++		confirm_rx = 1;
++	} else {
++		flow->pending++;
++		confirm_rx = flow->pending == QRTR_TX_FLOW_LOW;
++	}
++	spin_unlock_irq(&flow->resume_tx.lock);
++
++	return confirm_rx;
++}
++
++/**
++ * qrtr_tx_flow_failed() - flag that tx of confirm_rx flagged messages failed
++ * @node:	qrtr_node that the packet is to be send to
++ * @dest_node:	node id of the destination
++ * @dest_port:	port number of the destination
++ *
++ * Signal that the transmission of a message with confirm_rx flag failed. The
++ * flow's "pending" counter will keep incrementing towards QRTR_TX_FLOW_HIGH,
++ * at which point transmission would stall forever waiting for the resume TX
++ * message associated with the dropped confirm_rx message.
++ * Work around this by marking the flow as having a failed transmission and
++ * cause the next transmission attempt to be sent with the confirm_rx.
++ */
++static void qrtr_tx_flow_failed(struct qrtr_node *node, int dest_node,
++				int dest_port)
++{
++	unsigned long key = (u64)dest_node << 32 | dest_port;
++	struct qrtr_tx_flow *flow;
++
++	rcu_read_lock();
++	flow = radix_tree_lookup(&node->qrtr_tx_flow, key);
++	rcu_read_unlock();
++	if (flow) {
++		spin_lock_irq(&flow->resume_tx.lock);
++		flow->tx_failed = 1;
++		spin_unlock_irq(&flow->resume_tx.lock);
++	}
++}
++
++/* Pass an outgoing packet socket buffer to the endpoint driver. */
++static int qrtr_node_enqueue(struct qrtr_node *node, struct sk_buff *skb,
++			     int type, struct sockaddr_qrtr *from,
++			     struct sockaddr_qrtr *to)
++{
++	struct qrtr_hdr_v1 *hdr;
++	size_t len = skb->len;
++	int rc, confirm_rx;
++
++	confirm_rx = qrtr_tx_wait(node, to->sq_node, to->sq_port, type);
++	if (confirm_rx < 0) {
++		kfree_skb(skb);
++		return confirm_rx;
++	}
++
++	hdr = skb_push(skb, sizeof(*hdr));
++	hdr->version = cpu_to_le32(QRTR_PROTO_VER_1);
++	hdr->type = cpu_to_le32(type);
++	hdr->src_node_id = cpu_to_le32(from->sq_node);
++	hdr->src_port_id = cpu_to_le32(from->sq_port);
++	if (to->sq_port == QRTR_PORT_CTRL) {
++		hdr->dst_node_id = cpu_to_le32(node->nid);
++		hdr->dst_port_id = cpu_to_le32(QRTR_PORT_CTRL);
++	} else {
++		hdr->dst_node_id = cpu_to_le32(to->sq_node);
++		hdr->dst_port_id = cpu_to_le32(to->sq_port);
++	}
++
++	hdr->size = cpu_to_le32(len);
++	hdr->confirm_rx = !!confirm_rx;
++
++	rc = skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr));
++
++	if (!rc) {
++		mutex_lock(&node->ep_lock);
++		rc = -ENODEV;
++		if (node->ep)
++			rc = node->ep->xmit(node->ep, skb);
++		else
++			kfree_skb(skb);
++		mutex_unlock(&node->ep_lock);
++	}
++	/* Need to ensure that a subsequent message carries the otherwise lost
++	 * confirm_rx flag if we dropped this one */
++	if (rc && confirm_rx)
++		qrtr_tx_flow_failed(node, to->sq_node, to->sq_port);
++
++	return rc;
++}
++
++/* Lookup node by id.
++ *
++ * callers must release with qrtr_node_release()
++ */
++static struct qrtr_node *qrtr_node_lookup(unsigned int nid)
++{
++	struct qrtr_node *node;
++	unsigned long flags;
++
++	mutex_lock(&qrtr_node_lock);
++	spin_lock_irqsave(&qrtr_nodes_lock, flags);
++	node = radix_tree_lookup(&qrtr_nodes, nid);
++	node = qrtr_node_acquire(node);
++	spin_unlock_irqrestore(&qrtr_nodes_lock, flags);
++	mutex_unlock(&qrtr_node_lock);
++
++	return node;
++}
++
++/* Assign node id to node.
++ *
++ * This is mostly useful for automatic node id assignment, based on
++ * the source id in the incoming packet.
++ */
++static void qrtr_node_assign(struct qrtr_node *node, unsigned int nid)
++{
++	unsigned long flags;
++
++	if (node->nid != QRTR_EP_NID_AUTO || nid == QRTR_EP_NID_AUTO)
++		return;
++
++	spin_lock_irqsave(&qrtr_nodes_lock, flags);
++	radix_tree_insert(&qrtr_nodes, nid, node);
++	node->nid = nid;
++	spin_unlock_irqrestore(&qrtr_nodes_lock, flags);
++}
++
++/**
++ * qrtr_endpoint_post() - post incoming data
++ * @ep: endpoint handle
++ * @data: data pointer
++ * @len: size of data in bytes
++ *
++ * Return: 0 on success; negative error code on failure
++ */
++int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
++{
++	struct qrtr_node *node = ep->node;
++	const struct qrtr_hdr_v1 *v1;
++	const struct qrtr_hdr_v2 *v2;
++	struct qrtr_sock *ipc;
++	struct sk_buff *skb;
++	struct qrtr_cb *cb;
++	size_t size;
++	unsigned int ver;
++	size_t hdrlen;
++
++	if (len == 0 || len & 3)
++		return -EINVAL;
++
++	skb = __netdev_alloc_skb(NULL, len, GFP_ATOMIC | __GFP_NOWARN);
++	if (!skb)
++		return -ENOMEM;
++
++	cb = (struct qrtr_cb *)skb->cb;
++
++	/* Version field in v1 is little endian, so this works for both cases */
++	ver = *(u8*)data;
++
++	switch (ver) {
++	case QRTR_PROTO_VER_1:
++		if (len < sizeof(*v1))
++			goto err;
++		v1 = data;
++		hdrlen = sizeof(*v1);
++
++		cb->type = le32_to_cpu(v1->type);
++		cb->src_node = le32_to_cpu(v1->src_node_id);
++		cb->src_port = le32_to_cpu(v1->src_port_id);
++		cb->confirm_rx = !!v1->confirm_rx;
++		cb->dst_node = le32_to_cpu(v1->dst_node_id);
++		cb->dst_port = le32_to_cpu(v1->dst_port_id);
++
++		size = le32_to_cpu(v1->size);
++		break;
++	case QRTR_PROTO_VER_2:
++		if (len < sizeof(*v2))
++			goto err;
++		v2 = data;
++		hdrlen = sizeof(*v2) + v2->optlen;
++
++		cb->type = v2->type;
++		cb->confirm_rx = !!(v2->flags & QRTR_FLAGS_CONFIRM_RX);
++		cb->src_node = le16_to_cpu(v2->src_node_id);
++		cb->src_port = le16_to_cpu(v2->src_port_id);
++		cb->dst_node = le16_to_cpu(v2->dst_node_id);
++		cb->dst_port = le16_to_cpu(v2->dst_port_id);
++
++		if (cb->src_port == (u16)QRTR_PORT_CTRL)
++			cb->src_port = QRTR_PORT_CTRL;
++		if (cb->dst_port == (u16)QRTR_PORT_CTRL)
++			cb->dst_port = QRTR_PORT_CTRL;
++
++		size = le32_to_cpu(v2->size);
++		break;
++	default:
++		pr_err("qrtr: Invalid version %d\n", ver);
++		goto err;
++	}
++
++	if (!size || len != ALIGN(size, 4) + hdrlen)
++		goto err;
++
++	if ((cb->type == QRTR_TYPE_NEW_SERVER ||
++	     cb->type == QRTR_TYPE_RESUME_TX) &&
++	    size < sizeof(struct qrtr_ctrl_pkt))
++		goto err;
++
++	if (cb->dst_port != QRTR_PORT_CTRL && cb->type != QRTR_TYPE_DATA &&
++	    cb->type != QRTR_TYPE_RESUME_TX)
++		goto err;
++
++	skb_put_data(skb, data + hdrlen, size);
++
++	qrtr_node_assign(node, cb->src_node);
++
++	if (cb->type == QRTR_TYPE_NEW_SERVER) {
++		/* Remote node endpoint can bridge other distant nodes */
++		const struct qrtr_ctrl_pkt *pkt;
++
++		pkt = data + hdrlen;
++		qrtr_node_assign(node, le32_to_cpu(pkt->server.node));
++	}
++
++	if (cb->type == QRTR_TYPE_RESUME_TX) {
++		qrtr_tx_resume(node, skb);
++	} else {
++		ipc = qrtr_port_lookup(cb->dst_port);
++		if (!ipc)
++			goto err;
++
++		if (sock_queue_rcv_skb(&ipc->sk, skb)) {
++			qrtr_port_put(ipc);
++			goto err;
++		}
++
++		qrtr_port_put(ipc);
++	}
++
++	return 0;
++
++err:
++	kfree_skb(skb);
++	return -EINVAL;
++
++}
++EXPORT_SYMBOL_GPL(qrtr_endpoint_post);
++
++/**
++ * qrtr_alloc_ctrl_packet() - allocate control packet skb
++ * @pkt: reference to qrtr_ctrl_pkt pointer
++ *
++ * Returns newly allocated sk_buff, or NULL on failure
++ *
++ * This function allocates a sk_buff large enough to carry a qrtr_ctrl_pkt and
++ * on success returns a reference to the control packet in @pkt.
++ */
++static struct sk_buff *qrtr_alloc_ctrl_packet(struct qrtr_ctrl_pkt **pkt)
++{
++	const int pkt_len = sizeof(struct qrtr_ctrl_pkt);
++	struct sk_buff *skb;
++
++	skb = alloc_skb(QRTR_HDR_MAX_SIZE + pkt_len, GFP_KERNEL);
++	if (!skb)
++		return NULL;
++
++	skb_reserve(skb, QRTR_HDR_MAX_SIZE);
++	*pkt = skb_put_zero(skb, pkt_len);
++
++	return skb;
++}
++
++/**
++ * qrtr_endpoint_register() - register a new endpoint
++ * @ep: endpoint to register
++ * @nid: desired node id; may be QRTR_EP_NID_AUTO for auto-assignment
++ * Return: 0 on success; negative error code on failure
++ *
++ * The specified endpoint must have the xmit function pointer set on call.
++ */
++int qrtr_endpoint_register(struct qrtr_endpoint *ep, unsigned int nid)
++{
++	struct qrtr_node *node;
++
++	if (!ep || !ep->xmit)
++		return -EINVAL;
++
++	node = kzalloc(sizeof(*node), GFP_KERNEL);
++	if (!node)
++		return -ENOMEM;
++
++	kref_init(&node->ref);
++	mutex_init(&node->ep_lock);
++	skb_queue_head_init(&node->rx_queue);
++	node->nid = QRTR_EP_NID_AUTO;
++	node->ep = ep;
++
++	INIT_RADIX_TREE(&node->qrtr_tx_flow, GFP_KERNEL);
++	mutex_init(&node->qrtr_tx_lock);
++
++	qrtr_node_assign(node, nid);
++
++	mutex_lock(&qrtr_node_lock);
++	list_add(&node->item, &qrtr_all_nodes);
++	mutex_unlock(&qrtr_node_lock);
++	ep->node = node;
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(qrtr_endpoint_register);
++
++/**
++ * qrtr_endpoint_unregister - unregister endpoint
++ * @ep: endpoint to unregister
++ */
++void qrtr_endpoint_unregister(struct qrtr_endpoint *ep)
++{
++	struct qrtr_node *node = ep->node;
++	struct sockaddr_qrtr src = {AF_QIPCRTR, node->nid, QRTR_PORT_CTRL};
++	struct sockaddr_qrtr dst = {AF_QIPCRTR, qrtr_local_nid, QRTR_PORT_CTRL};
++	struct radix_tree_iter iter;
++	struct qrtr_ctrl_pkt *pkt;
++	struct qrtr_tx_flow *flow;
++	struct sk_buff *skb;
++	void __rcu **slot;
++
++	mutex_lock(&node->ep_lock);
++	node->ep = NULL;
++	mutex_unlock(&node->ep_lock);
++
++	/* Notify the local controller about the event */
++	skb = qrtr_alloc_ctrl_packet(&pkt);
++	if (skb) {
++		pkt->cmd = cpu_to_le32(QRTR_TYPE_BYE);
++		qrtr_local_enqueue(NULL, skb, QRTR_TYPE_BYE, &src, &dst);
++	}
++
++	/* Wake up any transmitters waiting for resume-tx from the node */
++	mutex_lock(&node->qrtr_tx_lock);
++	radix_tree_for_each_slot(slot, &node->qrtr_tx_flow, &iter, 0) {
++		flow = *slot;
++		wake_up_interruptible_all(&flow->resume_tx);
++	}
++	mutex_unlock(&node->qrtr_tx_lock);
++
++	qrtr_node_release(node);
++	ep->node = NULL;
++}
++EXPORT_SYMBOL_GPL(qrtr_endpoint_unregister);
++
++/* Lookup socket by port.
++ *
++ * Callers must release with qrtr_port_put()
++ */
++static struct qrtr_sock *qrtr_port_lookup(int port)
++{
++	struct qrtr_sock *ipc;
++
++	if (port == QRTR_PORT_CTRL)
++		port = 0;
++
++	rcu_read_lock();
++	ipc = xa_load(&qrtr_ports, port);
++	if (ipc)
++		sock_hold(&ipc->sk);
++	rcu_read_unlock();
++
++	return ipc;
++}
++
++/* Release acquired socket. */
++static void qrtr_port_put(struct qrtr_sock *ipc)
++{
++	sock_put(&ipc->sk);
++}
++
++/* Remove port assignment. */
++static void qrtr_port_remove(struct qrtr_sock *ipc)
++{
++	struct qrtr_ctrl_pkt *pkt;
++	struct sk_buff *skb;
++	int port = ipc->us.sq_port;
++	struct sockaddr_qrtr to;
++
++	to.sq_family = AF_QIPCRTR;
++	to.sq_node = QRTR_NODE_BCAST;
++	to.sq_port = QRTR_PORT_CTRL;
++
++	skb = qrtr_alloc_ctrl_packet(&pkt);
++	if (skb) {
++		pkt->cmd = cpu_to_le32(QRTR_TYPE_DEL_CLIENT);
++		pkt->client.node = cpu_to_le32(ipc->us.sq_node);
++		pkt->client.port = cpu_to_le32(ipc->us.sq_port);
++
++		skb_set_owner_w(skb, &ipc->sk);
++		qrtr_bcast_enqueue(NULL, skb, QRTR_TYPE_DEL_CLIENT, &ipc->us,
++				   &to);
++	}
++
++	if (port == QRTR_PORT_CTRL)
++		port = 0;
++
++	__sock_put(&ipc->sk);
++
++	xa_erase(&qrtr_ports, port);
++
++	/* Ensure that if qrtr_port_lookup() did enter the RCU read section we
++	 * wait for it to up increment the refcount */
++	synchronize_rcu();
++}
++
++/* Assign port number to socket.
++ *
++ * Specify port in the integer pointed to by port, and it will be adjusted
++ * on return as necesssary.
++ *
++ * Port may be:
++ *   0: Assign ephemeral port in [QRTR_MIN_EPH_SOCKET, QRTR_MAX_EPH_SOCKET]
++ *   <QRTR_MIN_EPH_SOCKET: Specified; requires CAP_NET_ADMIN
++ *   >QRTR_MIN_EPH_SOCKET: Specified; available to all
++ */
++static int qrtr_port_assign(struct qrtr_sock *ipc, int *port)
++{
++	int rc;
++
++	if (!*port) {
++		rc = xa_alloc(&qrtr_ports, port, ipc, QRTR_EPH_PORT_RANGE,
++				GFP_KERNEL);
++	} else if (*port < QRTR_MIN_EPH_SOCKET && !capable(CAP_NET_ADMIN)) {
++		rc = -EACCES;
++	} else if (*port == QRTR_PORT_CTRL) {
++		rc = xa_insert(&qrtr_ports, 0, ipc, GFP_KERNEL);
++	} else {
++		rc = xa_insert(&qrtr_ports, *port, ipc, GFP_KERNEL);
++	}
++
++	if (rc == -EBUSY)
++		return -EADDRINUSE;
++	else if (rc < 0)
++		return rc;
++
++	sock_hold(&ipc->sk);
++
++	return 0;
++}
++
++/* Reset all non-control ports */
++static void qrtr_reset_ports(void)
++{
++	struct qrtr_sock *ipc;
++	unsigned long index;
++
++	rcu_read_lock();
++	xa_for_each_start(&qrtr_ports, index, ipc, 1) {
++		sock_hold(&ipc->sk);
++		ipc->sk.sk_err = ENETRESET;
++		ipc->sk.sk_error_report(&ipc->sk);
++		sock_put(&ipc->sk);
++	}
++	rcu_read_unlock();
++}
++
++/* Bind socket to address.
++ *
++ * Socket should be locked upon call.
++ */
++static int __qrtr_bind(struct socket *sock,
++		       const struct sockaddr_qrtr *addr, int zapped)
++{
++	struct qrtr_sock *ipc = qrtr_sk(sock->sk);
++	struct sock *sk = sock->sk;
++	int port;
++	int rc;
++
++	/* rebinding ok */
++	if (!zapped && addr->sq_port == ipc->us.sq_port)
++		return 0;
++
++	port = addr->sq_port;
++	rc = qrtr_port_assign(ipc, &port);
++	if (rc)
++		return rc;
++
++	/* unbind previous, if any */
++	if (!zapped)
++		qrtr_port_remove(ipc);
++	ipc->us.sq_port = port;
++
++	sock_reset_flag(sk, SOCK_ZAPPED);
++
++	/* Notify all open ports about the new controller */
++	if (port == QRTR_PORT_CTRL)
++		qrtr_reset_ports();
++
++	return 0;
++}
++
++/* Auto bind to an ephemeral port. */
++static int qrtr_autobind(struct socket *sock)
++{
++	struct sock *sk = sock->sk;
++	struct sockaddr_qrtr addr;
++
++	if (!sock_flag(sk, SOCK_ZAPPED))
++		return 0;
++
++	addr.sq_family = AF_QIPCRTR;
++	addr.sq_node = qrtr_local_nid;
++	addr.sq_port = 0;
++
++	return __qrtr_bind(sock, &addr, 1);
++}
++
++/* Bind socket to specified sockaddr. */
++static int qrtr_bind(struct socket *sock, struct sockaddr *saddr, int len)
++{
++	DECLARE_SOCKADDR(struct sockaddr_qrtr *, addr, saddr);
++	struct qrtr_sock *ipc = qrtr_sk(sock->sk);
++	struct sock *sk = sock->sk;
++	int rc;
++
++	if (len < sizeof(*addr) || addr->sq_family != AF_QIPCRTR)
++		return -EINVAL;
++
++	if (addr->sq_node != ipc->us.sq_node)
++		return -EINVAL;
++
++	lock_sock(sk);
++	rc = __qrtr_bind(sock, addr, sock_flag(sk, SOCK_ZAPPED));
++	release_sock(sk);
++
++	return rc;
++}
++
++/* Queue packet to local peer socket. */
++static int qrtr_local_enqueue(struct qrtr_node *node, struct sk_buff *skb,
++			      int type, struct sockaddr_qrtr *from,
++			      struct sockaddr_qrtr *to)
++{
++	struct qrtr_sock *ipc;
++	struct qrtr_cb *cb;
++
++	ipc = qrtr_port_lookup(to->sq_port);
++	if (!ipc || &ipc->sk == skb->sk) { /* do not send to self */
++		if (ipc)
++			qrtr_port_put(ipc);
++		kfree_skb(skb);
++		return -ENODEV;
++	}
++
++	cb = (struct qrtr_cb *)skb->cb;
++	cb->src_node = from->sq_node;
++	cb->src_port = from->sq_port;
++
++	if (sock_queue_rcv_skb(&ipc->sk, skb)) {
++		qrtr_port_put(ipc);
++		kfree_skb(skb);
++		return -ENOSPC;
++	}
++
++	qrtr_port_put(ipc);
++
++	return 0;
++}
++
++/* Queue packet for broadcast. */
++static int qrtr_bcast_enqueue(struct qrtr_node *node, struct sk_buff *skb,
++			      int type, struct sockaddr_qrtr *from,
++			      struct sockaddr_qrtr *to)
++{
++	struct sk_buff *skbn;
++
++	mutex_lock(&qrtr_node_lock);
++	list_for_each_entry(node, &qrtr_all_nodes, item) {
++		skbn = skb_clone(skb, GFP_KERNEL);
++		if (!skbn)
++			break;
++		skb_set_owner_w(skbn, skb->sk);
++		qrtr_node_enqueue(node, skbn, type, from, to);
++	}
++	mutex_unlock(&qrtr_node_lock);
++
++	qrtr_local_enqueue(NULL, skb, type, from, to);
++
++	return 0;
++}
++
++static int qrtr_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
++{
++	DECLARE_SOCKADDR(struct sockaddr_qrtr *, addr, msg->msg_name);
++	int (*enqueue_fn)(struct qrtr_node *, struct sk_buff *, int,
++			  struct sockaddr_qrtr *, struct sockaddr_qrtr *);
++	__le32 qrtr_type = cpu_to_le32(QRTR_TYPE_DATA);
++	struct qrtr_sock *ipc = qrtr_sk(sock->sk);
++	struct sock *sk = sock->sk;
++	struct qrtr_node *node;
++	struct sk_buff *skb;
++	size_t plen;
++	u32 type;
++	int rc;
++
++	if (msg->msg_flags & ~(MSG_DONTWAIT))
++		return -EINVAL;
++
++	if (len > 65535)
++		return -EMSGSIZE;
++
++	lock_sock(sk);
++
++	if (addr) {
++		if (msg->msg_namelen < sizeof(*addr)) {
++			release_sock(sk);
++			return -EINVAL;
++		}
++
++		if (addr->sq_family != AF_QIPCRTR) {
++			release_sock(sk);
++			return -EINVAL;
++		}
++
++		rc = qrtr_autobind(sock);
++		if (rc) {
++			release_sock(sk);
++			return rc;
++		}
++	} else if (sk->sk_state == TCP_ESTABLISHED) {
++		addr = &ipc->peer;
++	} else {
++		release_sock(sk);
++		return -ENOTCONN;
++	}
++
++	node = NULL;
++	if (addr->sq_node == QRTR_NODE_BCAST) {
++		if (addr->sq_port != QRTR_PORT_CTRL &&
++		    qrtr_local_nid != QRTR_NODE_BCAST) {
++			release_sock(sk);
++			return -ENOTCONN;
++		}
++		enqueue_fn = qrtr_bcast_enqueue;
++	} else if (addr->sq_node == ipc->us.sq_node) {
++		enqueue_fn = qrtr_local_enqueue;
++	} else {
++		node = qrtr_node_lookup(addr->sq_node);
++		if (!node) {
++			release_sock(sk);
++			return -ECONNRESET;
++		}
++		enqueue_fn = qrtr_node_enqueue;
++	}
++
++	plen = (len + 3) & ~3;
++	skb = sock_alloc_send_skb(sk, plen + QRTR_HDR_MAX_SIZE,
++				  msg->msg_flags & MSG_DONTWAIT, &rc);
++	if (!skb) {
++		rc = -ENOMEM;
++		goto out_node;
++	}
++
++	skb_reserve(skb, QRTR_HDR_MAX_SIZE);
++
++	rc = memcpy_from_msg(skb_put(skb, len), msg, len);
++	if (rc) {
++		kfree_skb(skb);
++		goto out_node;
++	}
++
++	if (ipc->us.sq_port == QRTR_PORT_CTRL) {
++		if (len < 4) {
++			rc = -EINVAL;
++			kfree_skb(skb);
++			goto out_node;
++		}
++
++		/* control messages already require the type as 'command' */
++		skb_copy_bits(skb, 0, &qrtr_type, 4);
++	}
++
++	type = le32_to_cpu(qrtr_type);
++	rc = enqueue_fn(node, skb, type, &ipc->us, addr);
++	if (rc >= 0)
++		rc = len;
++
++out_node:
++	qrtr_node_release(node);
++	release_sock(sk);
++
++	return rc;
++}
++
++static int qrtr_send_resume_tx(struct qrtr_cb *cb)
++{
++	struct sockaddr_qrtr remote = { AF_QIPCRTR, cb->src_node, cb->src_port };
++	struct sockaddr_qrtr local = { AF_QIPCRTR, cb->dst_node, cb->dst_port };
++	struct qrtr_ctrl_pkt *pkt;
++	struct qrtr_node *node;
++	struct sk_buff *skb;
++	int ret;
++
++	node = qrtr_node_lookup(remote.sq_node);
++	if (!node)
++		return -EINVAL;
++
++	skb = qrtr_alloc_ctrl_packet(&pkt);
++	if (!skb)
++		return -ENOMEM;
++
++	pkt->cmd = cpu_to_le32(QRTR_TYPE_RESUME_TX);
++	pkt->client.node = cpu_to_le32(cb->dst_node);
++	pkt->client.port = cpu_to_le32(cb->dst_port);
++
++	ret = qrtr_node_enqueue(node, skb, QRTR_TYPE_RESUME_TX, &local, &remote);
++
++	qrtr_node_release(node);
++
++	return ret;
++}
++
++static int qrtr_recvmsg(struct socket *sock, struct msghdr *msg,
++			size_t size, int flags)
++{
++	DECLARE_SOCKADDR(struct sockaddr_qrtr *, addr, msg->msg_name);
++	struct sock *sk = sock->sk;
++	struct sk_buff *skb;
++	struct qrtr_cb *cb;
++	int copied, rc;
++
++	lock_sock(sk);
++
++	if (sock_flag(sk, SOCK_ZAPPED)) {
++		release_sock(sk);
++		return -EADDRNOTAVAIL;
++	}
++
++	skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
++				flags & MSG_DONTWAIT, &rc);
++	if (!skb) {
++		release_sock(sk);
++		return rc;
++	}
++	cb = (struct qrtr_cb *)skb->cb;
++
++	copied = skb->len;
++	if (copied > size) {
++		copied = size;
++		msg->msg_flags |= MSG_TRUNC;
++	}
++
++	rc = skb_copy_datagram_msg(skb, 0, msg, copied);
++	if (rc < 0)
++		goto out;
++	rc = copied;
++
++	if (addr) {
++		/* There is an anonymous 2-byte hole after sq_family,
++		 * make sure to clear it.
++		 */
++		memset(addr, 0, sizeof(*addr));
++
++		addr->sq_family = AF_QIPCRTR;
++		addr->sq_node = cb->src_node;
++		addr->sq_port = cb->src_port;
++		msg->msg_namelen = sizeof(*addr);
++	}
++
++out:
++	if (cb->confirm_rx)
++		qrtr_send_resume_tx(cb);
++
++	skb_free_datagram(sk, skb);
++	release_sock(sk);
++
++	return rc;
++}
++
++static int qrtr_connect(struct socket *sock, struct sockaddr *saddr,
++			int len, int flags)
++{
++	DECLARE_SOCKADDR(struct sockaddr_qrtr *, addr, saddr);
++	struct qrtr_sock *ipc = qrtr_sk(sock->sk);
++	struct sock *sk = sock->sk;
++	int rc;
++
++	if (len < sizeof(*addr) || addr->sq_family != AF_QIPCRTR)
++		return -EINVAL;
++
++	lock_sock(sk);
++
++	sk->sk_state = TCP_CLOSE;
++	sock->state = SS_UNCONNECTED;
++
++	rc = qrtr_autobind(sock);
++	if (rc) {
++		release_sock(sk);
++		return rc;
++	}
++
++	ipc->peer = *addr;
++	sock->state = SS_CONNECTED;
++	sk->sk_state = TCP_ESTABLISHED;
++
++	release_sock(sk);
++
++	return 0;
++}
++
++static int qrtr_getname(struct socket *sock, struct sockaddr *saddr,
++			int peer)
++{
++	struct qrtr_sock *ipc = qrtr_sk(sock->sk);
++	struct sockaddr_qrtr qaddr;
++	struct sock *sk = sock->sk;
++
++	lock_sock(sk);
++	if (peer) {
++		if (sk->sk_state != TCP_ESTABLISHED) {
++			release_sock(sk);
++			return -ENOTCONN;
++		}
++
++		qaddr = ipc->peer;
++	} else {
++		qaddr = ipc->us;
++	}
++	release_sock(sk);
++
++	qaddr.sq_family = AF_QIPCRTR;
++
++	memcpy(saddr, &qaddr, sizeof(qaddr));
++
++	return sizeof(qaddr);
++}
++
++static int qrtr_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
++{
++	void __user *argp = (void __user *)arg;
++	struct qrtr_sock *ipc = qrtr_sk(sock->sk);
++	struct sock *sk = sock->sk;
++	struct sockaddr_qrtr *sq;
++	struct sk_buff *skb;
++	struct ifreq ifr;
++	long len = 0;
++	int rc = 0;
++
++	lock_sock(sk);
++
++	switch (cmd) {
++	case TIOCOUTQ:
++		len = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
++		if (len < 0)
++			len = 0;
++		rc = put_user(len, (int __user *)argp);
++		break;
++	case TIOCINQ:
++		skb = skb_peek(&sk->sk_receive_queue);
++		if (skb)
++			len = skb->len;
++		rc = put_user(len, (int __user *)argp);
++		break;
++	case SIOCGIFADDR:
++		if (copy_from_user(&ifr, argp, sizeof(ifr))) {
++			rc = -EFAULT;
++			break;
++		}
++
++		sq = (struct sockaddr_qrtr *)&ifr.ifr_addr;
++		*sq = ipc->us;
++		if (copy_to_user(argp, &ifr, sizeof(ifr))) {
++			rc = -EFAULT;
++			break;
++		}
++		break;
++	case SIOCADDRT:
++	case SIOCDELRT:
++	case SIOCSIFADDR:
++	case SIOCGIFDSTADDR:
++	case SIOCSIFDSTADDR:
++	case SIOCGIFBRDADDR:
++	case SIOCSIFBRDADDR:
++	case SIOCGIFNETMASK:
++	case SIOCSIFNETMASK:
++		rc = -EINVAL;
++		break;
++	default:
++		rc = -ENOIOCTLCMD;
++		break;
++	}
++
++	release_sock(sk);
++
++	return rc;
++}
++
++static int qrtr_release(struct socket *sock)
++{
++	struct sock *sk = sock->sk;
++	struct qrtr_sock *ipc;
++
++	if (!sk)
++		return 0;
++
++	lock_sock(sk);
++
++	ipc = qrtr_sk(sk);
++	sk->sk_shutdown = SHUTDOWN_MASK;
++	if (!sock_flag(sk, SOCK_DEAD))
++		sk->sk_state_change(sk);
++
++	sock_set_flag(sk, SOCK_DEAD);
++	sock_orphan(sk);
++	sock->sk = NULL;
++
++	if (!sock_flag(sk, SOCK_ZAPPED))
++		qrtr_port_remove(ipc);
++
++	skb_queue_purge(&sk->sk_receive_queue);
++
++	release_sock(sk);
++	sock_put(sk);
++
++	return 0;
++}
++
++static const struct proto_ops qrtr_proto_ops = {
++	.owner		= THIS_MODULE,
++	.family		= AF_QIPCRTR,
++	.bind		= qrtr_bind,
++	.connect	= qrtr_connect,
++	.socketpair	= sock_no_socketpair,
++	.accept		= sock_no_accept,
++	.listen		= sock_no_listen,
++	.sendmsg	= qrtr_sendmsg,
++	.recvmsg	= qrtr_recvmsg,
++	.getname	= qrtr_getname,
++	.ioctl		= qrtr_ioctl,
++	.gettstamp	= sock_gettstamp,
++	.poll		= datagram_poll,
++	.shutdown	= sock_no_shutdown,
++	.release	= qrtr_release,
++	.mmap		= sock_no_mmap,
++	.sendpage	= sock_no_sendpage,
++};
++
++static struct proto qrtr_proto = {
++	.name		= "QIPCRTR",
++	.owner		= THIS_MODULE,
++	.obj_size	= sizeof(struct qrtr_sock),
++};
++
++static int qrtr_create(struct net *net, struct socket *sock,
++		       int protocol, int kern)
++{
++	struct qrtr_sock *ipc;
++	struct sock *sk;
++
++	if (sock->type != SOCK_DGRAM)
++		return -EPROTOTYPE;
++
++	sk = sk_alloc(net, AF_QIPCRTR, GFP_KERNEL, &qrtr_proto, kern);
++	if (!sk)
++		return -ENOMEM;
++
++	sock_set_flag(sk, SOCK_ZAPPED);
++
++	sock_init_data(sock, sk);
++	sock->ops = &qrtr_proto_ops;
++
++	ipc = qrtr_sk(sk);
++	ipc->us.sq_family = AF_QIPCRTR;
++	ipc->us.sq_node = qrtr_local_nid;
++	ipc->us.sq_port = 0;
++
++	return 0;
++}
++
++static const struct net_proto_family qrtr_family = {
++	.owner	= THIS_MODULE,
++	.family	= AF_QIPCRTR,
++	.create	= qrtr_create,
++};
++
++static int __init qrtr_proto_init(void)
++{
++	int rc;
++
++	rc = proto_register(&qrtr_proto, 1);
++	if (rc)
++		return rc;
++
++	rc = sock_register(&qrtr_family);
++	if (rc) {
++		proto_unregister(&qrtr_proto);
++		return rc;
++	}
++
++	qrtr_ns_init();
++
++	return rc;
++}
++postcore_initcall(qrtr_proto_init);
++
++static void __exit qrtr_proto_fini(void)
++{
++	qrtr_ns_remove();
++	sock_unregister(qrtr_family.family);
++	proto_unregister(&qrtr_proto);
++}
++module_exit(qrtr_proto_fini);
++
++MODULE_DESCRIPTION("Qualcomm IPC-router driver");
++MODULE_LICENSE("GPL v2");
++MODULE_ALIAS_NETPROTO(PF_QIPCRTR);
+diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c
+index fe81e03851686..713e9940d88bb 100644
+--- a/net/qrtr/ns.c
++++ b/net/qrtr/ns.c
+@@ -273,7 +273,7 @@ err:
+ 	return NULL;
+ }
+ 
+-static int server_del(struct qrtr_node *node, unsigned int port)
++static int server_del(struct qrtr_node *node, unsigned int port, bool bcast)
+ {
+ 	struct qrtr_lookup *lookup;
+ 	struct qrtr_server *srv;
+@@ -286,7 +286,7 @@ static int server_del(struct qrtr_node *node, unsigned int port)
+ 	radix_tree_delete(&node->servers, port);
+ 
+ 	/* Broadcast the removal of local servers */
+-	if (srv->node == qrtr_ns.local_node)
++	if (srv->node == qrtr_ns.local_node && bcast)
+ 		service_announce_del(&qrtr_ns.bcast_sq, srv);
+ 
+ 	/* Announce the service's disappearance to observers */
+@@ -372,7 +372,7 @@ static int ctrl_cmd_bye(struct sockaddr_qrtr *from)
+ 		}
+ 		slot = radix_tree_iter_resume(slot, &iter);
+ 		rcu_read_unlock();
+-		server_del(node, srv->port);
++		server_del(node, srv->port, true);
+ 		rcu_read_lock();
+ 	}
+ 	rcu_read_unlock();
+@@ -458,10 +458,13 @@ static int ctrl_cmd_del_client(struct sockaddr_qrtr *from,
+ 		kfree(lookup);
+ 	}
+ 
+-	/* Remove the server belonging to this port */
++	/* Remove the server belonging to this port but don't broadcast
++	 * DEL_SERVER. Neighbours would've already removed the server belonging
++	 * to this port due to the DEL_CLIENT broadcast from qrtr_port_remove().
++	 */
+ 	node = node_get(node_id);
+ 	if (node)
+-		server_del(node, port);
++		server_del(node, port, false);
+ 
+ 	/* Advertise the removal of this client to all local servers */
+ 	local_node = node_get(qrtr_ns.local_node);
+@@ -574,7 +577,7 @@ static int ctrl_cmd_del_server(struct sockaddr_qrtr *from,
+ 	if (!node)
+ 		return -ENOENT;
+ 
+-	return server_del(node, port);
++	return server_del(node, port, true);
+ }
+ 
+ static int ctrl_cmd_new_lookup(struct sockaddr_qrtr *from,
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+deleted file mode 100644
+index 13448ca5aeff2..0000000000000
+--- a/net/qrtr/qrtr.c
++++ /dev/null
+@@ -1,1288 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * Copyright (c) 2015, Sony Mobile Communications Inc.
+- * Copyright (c) 2013, The Linux Foundation. All rights reserved.
+- */
+-#include <linux/module.h>
+-#include <linux/netlink.h>
+-#include <linux/qrtr.h>
+-#include <linux/termios.h>	/* For TIOCINQ/OUTQ */
+-#include <linux/spinlock.h>
+-#include <linux/wait.h>
+-
+-#include <net/sock.h>
+-
+-#include "qrtr.h"
+-
+-#define QRTR_PROTO_VER_1 1
+-#define QRTR_PROTO_VER_2 3
+-
+-/* auto-bind range */
+-#define QRTR_MIN_EPH_SOCKET 0x4000
+-#define QRTR_MAX_EPH_SOCKET 0x7fff
+-#define QRTR_EPH_PORT_RANGE \
+-		XA_LIMIT(QRTR_MIN_EPH_SOCKET, QRTR_MAX_EPH_SOCKET)
+-
+-/**
+- * struct qrtr_hdr_v1 - (I|R)PCrouter packet header version 1
+- * @version: protocol version
+- * @type: packet type; one of QRTR_TYPE_*
+- * @src_node_id: source node
+- * @src_port_id: source port
+- * @confirm_rx: boolean; whether a resume-tx packet should be send in reply
+- * @size: length of packet, excluding this header
+- * @dst_node_id: destination node
+- * @dst_port_id: destination port
+- */
+-struct qrtr_hdr_v1 {
+-	__le32 version;
+-	__le32 type;
+-	__le32 src_node_id;
+-	__le32 src_port_id;
+-	__le32 confirm_rx;
+-	__le32 size;
+-	__le32 dst_node_id;
+-	__le32 dst_port_id;
+-} __packed;
+-
+-/**
+- * struct qrtr_hdr_v2 - (I|R)PCrouter packet header later versions
+- * @version: protocol version
+- * @type: packet type; one of QRTR_TYPE_*
+- * @flags: bitmask of QRTR_FLAGS_*
+- * @optlen: length of optional header data
+- * @size: length of packet, excluding this header and optlen
+- * @src_node_id: source node
+- * @src_port_id: source port
+- * @dst_node_id: destination node
+- * @dst_port_id: destination port
+- */
+-struct qrtr_hdr_v2 {
+-	u8 version;
+-	u8 type;
+-	u8 flags;
+-	u8 optlen;
+-	__le32 size;
+-	__le16 src_node_id;
+-	__le16 src_port_id;
+-	__le16 dst_node_id;
+-	__le16 dst_port_id;
+-};
+-
+-#define QRTR_FLAGS_CONFIRM_RX	BIT(0)
+-
+-struct qrtr_cb {
+-	u32 src_node;
+-	u32 src_port;
+-	u32 dst_node;
+-	u32 dst_port;
+-
+-	u8 type;
+-	u8 confirm_rx;
+-};
+-
+-#define QRTR_HDR_MAX_SIZE max_t(size_t, sizeof(struct qrtr_hdr_v1), \
+-					sizeof(struct qrtr_hdr_v2))
+-
+-struct qrtr_sock {
+-	/* WARNING: sk must be the first member */
+-	struct sock sk;
+-	struct sockaddr_qrtr us;
+-	struct sockaddr_qrtr peer;
+-};
+-
+-static inline struct qrtr_sock *qrtr_sk(struct sock *sk)
+-{
+-	BUILD_BUG_ON(offsetof(struct qrtr_sock, sk) != 0);
+-	return container_of(sk, struct qrtr_sock, sk);
+-}
+-
+-static unsigned int qrtr_local_nid = 1;
+-
+-/* for node ids */
+-static RADIX_TREE(qrtr_nodes, GFP_ATOMIC);
+-static DEFINE_SPINLOCK(qrtr_nodes_lock);
+-/* broadcast list */
+-static LIST_HEAD(qrtr_all_nodes);
+-/* lock for qrtr_all_nodes and node reference */
+-static DEFINE_MUTEX(qrtr_node_lock);
+-
+-/* local port allocation management */
+-static DEFINE_XARRAY_ALLOC(qrtr_ports);
+-
+-/**
+- * struct qrtr_node - endpoint node
+- * @ep_lock: lock for endpoint management and callbacks
+- * @ep: endpoint
+- * @ref: reference count for node
+- * @nid: node id
+- * @qrtr_tx_flow: tree of qrtr_tx_flow, keyed by node << 32 | port
+- * @qrtr_tx_lock: lock for qrtr_tx_flow inserts
+- * @rx_queue: receive queue
+- * @item: list item for broadcast list
+- */
+-struct qrtr_node {
+-	struct mutex ep_lock;
+-	struct qrtr_endpoint *ep;
+-	struct kref ref;
+-	unsigned int nid;
+-
+-	struct radix_tree_root qrtr_tx_flow;
+-	struct mutex qrtr_tx_lock; /* for qrtr_tx_flow */
+-
+-	struct sk_buff_head rx_queue;
+-	struct list_head item;
+-};
+-
+-/**
+- * struct qrtr_tx_flow - tx flow control
+- * @resume_tx: waiters for a resume tx from the remote
+- * @pending: number of waiting senders
+- * @tx_failed: indicates that a message with confirm_rx flag was lost
+- */
+-struct qrtr_tx_flow {
+-	struct wait_queue_head resume_tx;
+-	int pending;
+-	int tx_failed;
+-};
+-
+-#define QRTR_TX_FLOW_HIGH	10
+-#define QRTR_TX_FLOW_LOW	5
+-
+-static int qrtr_local_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+-			      int type, struct sockaddr_qrtr *from,
+-			      struct sockaddr_qrtr *to);
+-static int qrtr_bcast_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+-			      int type, struct sockaddr_qrtr *from,
+-			      struct sockaddr_qrtr *to);
+-static struct qrtr_sock *qrtr_port_lookup(int port);
+-static void qrtr_port_put(struct qrtr_sock *ipc);
+-
+-/* Release node resources and free the node.
+- *
+- * Do not call directly, use qrtr_node_release.  To be used with
+- * kref_put_mutex.  As such, the node mutex is expected to be locked on call.
+- */
+-static void __qrtr_node_release(struct kref *kref)
+-{
+-	struct qrtr_node *node = container_of(kref, struct qrtr_node, ref);
+-	struct radix_tree_iter iter;
+-	struct qrtr_tx_flow *flow;
+-	unsigned long flags;
+-	void __rcu **slot;
+-
+-	spin_lock_irqsave(&qrtr_nodes_lock, flags);
+-	if (node->nid != QRTR_EP_NID_AUTO)
+-		radix_tree_delete(&qrtr_nodes, node->nid);
+-	spin_unlock_irqrestore(&qrtr_nodes_lock, flags);
+-
+-	list_del(&node->item);
+-	mutex_unlock(&qrtr_node_lock);
+-
+-	skb_queue_purge(&node->rx_queue);
+-
+-	/* Free tx flow counters */
+-	radix_tree_for_each_slot(slot, &node->qrtr_tx_flow, &iter, 0) {
+-		flow = *slot;
+-		radix_tree_iter_delete(&node->qrtr_tx_flow, &iter, slot);
+-		kfree(flow);
+-	}
+-	kfree(node);
+-}
+-
+-/* Increment reference to node. */
+-static struct qrtr_node *qrtr_node_acquire(struct qrtr_node *node)
+-{
+-	if (node)
+-		kref_get(&node->ref);
+-	return node;
+-}
+-
+-/* Decrement reference to node and release as necessary. */
+-static void qrtr_node_release(struct qrtr_node *node)
+-{
+-	if (!node)
+-		return;
+-	kref_put_mutex(&node->ref, __qrtr_node_release, &qrtr_node_lock);
+-}
+-
+-/**
+- * qrtr_tx_resume() - reset flow control counter
+- * @node:	qrtr_node that the QRTR_TYPE_RESUME_TX packet arrived on
+- * @skb:	resume_tx packet
+- */
+-static void qrtr_tx_resume(struct qrtr_node *node, struct sk_buff *skb)
+-{
+-	struct qrtr_ctrl_pkt *pkt = (struct qrtr_ctrl_pkt *)skb->data;
+-	u64 remote_node = le32_to_cpu(pkt->client.node);
+-	u32 remote_port = le32_to_cpu(pkt->client.port);
+-	struct qrtr_tx_flow *flow;
+-	unsigned long key;
+-
+-	key = remote_node << 32 | remote_port;
+-
+-	rcu_read_lock();
+-	flow = radix_tree_lookup(&node->qrtr_tx_flow, key);
+-	rcu_read_unlock();
+-	if (flow) {
+-		spin_lock(&flow->resume_tx.lock);
+-		flow->pending = 0;
+-		spin_unlock(&flow->resume_tx.lock);
+-		wake_up_interruptible_all(&flow->resume_tx);
+-	}
+-
+-	consume_skb(skb);
+-}
+-
+-/**
+- * qrtr_tx_wait() - flow control for outgoing packets
+- * @node:	qrtr_node that the packet is to be send to
+- * @dest_node:	node id of the destination
+- * @dest_port:	port number of the destination
+- * @type:	type of message
+- *
+- * The flow control scheme is based around the low and high "watermarks". When
+- * the low watermark is passed the confirm_rx flag is set on the outgoing
+- * message, which will trigger the remote to send a control message of the type
+- * QRTR_TYPE_RESUME_TX to reset the counter. If the high watermark is hit
+- * further transmision should be paused.
+- *
+- * Return: 1 if confirm_rx should be set, 0 otherwise or errno failure
+- */
+-static int qrtr_tx_wait(struct qrtr_node *node, int dest_node, int dest_port,
+-			int type)
+-{
+-	unsigned long key = (u64)dest_node << 32 | dest_port;
+-	struct qrtr_tx_flow *flow;
+-	int confirm_rx = 0;
+-	int ret;
+-
+-	/* Never set confirm_rx on non-data packets */
+-	if (type != QRTR_TYPE_DATA)
+-		return 0;
+-
+-	mutex_lock(&node->qrtr_tx_lock);
+-	flow = radix_tree_lookup(&node->qrtr_tx_flow, key);
+-	if (!flow) {
+-		flow = kzalloc(sizeof(*flow), GFP_KERNEL);
+-		if (flow) {
+-			init_waitqueue_head(&flow->resume_tx);
+-			if (radix_tree_insert(&node->qrtr_tx_flow, key, flow)) {
+-				kfree(flow);
+-				flow = NULL;
+-			}
+-		}
+-	}
+-	mutex_unlock(&node->qrtr_tx_lock);
+-
+-	/* Set confirm_rx if we where unable to find and allocate a flow */
+-	if (!flow)
+-		return 1;
+-
+-	spin_lock_irq(&flow->resume_tx.lock);
+-	ret = wait_event_interruptible_locked_irq(flow->resume_tx,
+-						  flow->pending < QRTR_TX_FLOW_HIGH ||
+-						  flow->tx_failed ||
+-						  !node->ep);
+-	if (ret < 0) {
+-		confirm_rx = ret;
+-	} else if (!node->ep) {
+-		confirm_rx = -EPIPE;
+-	} else if (flow->tx_failed) {
+-		flow->tx_failed = 0;
+-		confirm_rx = 1;
+-	} else {
+-		flow->pending++;
+-		confirm_rx = flow->pending == QRTR_TX_FLOW_LOW;
+-	}
+-	spin_unlock_irq(&flow->resume_tx.lock);
+-
+-	return confirm_rx;
+-}
+-
+-/**
+- * qrtr_tx_flow_failed() - flag that tx of confirm_rx flagged messages failed
+- * @node:	qrtr_node that the packet is to be send to
+- * @dest_node:	node id of the destination
+- * @dest_port:	port number of the destination
+- *
+- * Signal that the transmission of a message with confirm_rx flag failed. The
+- * flow's "pending" counter will keep incrementing towards QRTR_TX_FLOW_HIGH,
+- * at which point transmission would stall forever waiting for the resume TX
+- * message associated with the dropped confirm_rx message.
+- * Work around this by marking the flow as having a failed transmission and
+- * cause the next transmission attempt to be sent with the confirm_rx.
+- */
+-static void qrtr_tx_flow_failed(struct qrtr_node *node, int dest_node,
+-				int dest_port)
+-{
+-	unsigned long key = (u64)dest_node << 32 | dest_port;
+-	struct qrtr_tx_flow *flow;
+-
+-	rcu_read_lock();
+-	flow = radix_tree_lookup(&node->qrtr_tx_flow, key);
+-	rcu_read_unlock();
+-	if (flow) {
+-		spin_lock_irq(&flow->resume_tx.lock);
+-		flow->tx_failed = 1;
+-		spin_unlock_irq(&flow->resume_tx.lock);
+-	}
+-}
+-
+-/* Pass an outgoing packet socket buffer to the endpoint driver. */
+-static int qrtr_node_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+-			     int type, struct sockaddr_qrtr *from,
+-			     struct sockaddr_qrtr *to)
+-{
+-	struct qrtr_hdr_v1 *hdr;
+-	size_t len = skb->len;
+-	int rc, confirm_rx;
+-
+-	confirm_rx = qrtr_tx_wait(node, to->sq_node, to->sq_port, type);
+-	if (confirm_rx < 0) {
+-		kfree_skb(skb);
+-		return confirm_rx;
+-	}
+-
+-	hdr = skb_push(skb, sizeof(*hdr));
+-	hdr->version = cpu_to_le32(QRTR_PROTO_VER_1);
+-	hdr->type = cpu_to_le32(type);
+-	hdr->src_node_id = cpu_to_le32(from->sq_node);
+-	hdr->src_port_id = cpu_to_le32(from->sq_port);
+-	if (to->sq_port == QRTR_PORT_CTRL) {
+-		hdr->dst_node_id = cpu_to_le32(node->nid);
+-		hdr->dst_port_id = cpu_to_le32(QRTR_PORT_CTRL);
+-	} else {
+-		hdr->dst_node_id = cpu_to_le32(to->sq_node);
+-		hdr->dst_port_id = cpu_to_le32(to->sq_port);
+-	}
+-
+-	hdr->size = cpu_to_le32(len);
+-	hdr->confirm_rx = !!confirm_rx;
+-
+-	rc = skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr));
+-
+-	if (!rc) {
+-		mutex_lock(&node->ep_lock);
+-		rc = -ENODEV;
+-		if (node->ep)
+-			rc = node->ep->xmit(node->ep, skb);
+-		else
+-			kfree_skb(skb);
+-		mutex_unlock(&node->ep_lock);
+-	}
+-	/* Need to ensure that a subsequent message carries the otherwise lost
+-	 * confirm_rx flag if we dropped this one */
+-	if (rc && confirm_rx)
+-		qrtr_tx_flow_failed(node, to->sq_node, to->sq_port);
+-
+-	return rc;
+-}
+-
+-/* Lookup node by id.
+- *
+- * callers must release with qrtr_node_release()
+- */
+-static struct qrtr_node *qrtr_node_lookup(unsigned int nid)
+-{
+-	struct qrtr_node *node;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&qrtr_nodes_lock, flags);
+-	node = radix_tree_lookup(&qrtr_nodes, nid);
+-	node = qrtr_node_acquire(node);
+-	spin_unlock_irqrestore(&qrtr_nodes_lock, flags);
+-
+-	return node;
+-}
+-
+-/* Assign node id to node.
+- *
+- * This is mostly useful for automatic node id assignment, based on
+- * the source id in the incoming packet.
+- */
+-static void qrtr_node_assign(struct qrtr_node *node, unsigned int nid)
+-{
+-	unsigned long flags;
+-
+-	if (node->nid != QRTR_EP_NID_AUTO || nid == QRTR_EP_NID_AUTO)
+-		return;
+-
+-	spin_lock_irqsave(&qrtr_nodes_lock, flags);
+-	radix_tree_insert(&qrtr_nodes, nid, node);
+-	node->nid = nid;
+-	spin_unlock_irqrestore(&qrtr_nodes_lock, flags);
+-}
+-
+-/**
+- * qrtr_endpoint_post() - post incoming data
+- * @ep: endpoint handle
+- * @data: data pointer
+- * @len: size of data in bytes
+- *
+- * Return: 0 on success; negative error code on failure
+- */
+-int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+-{
+-	struct qrtr_node *node = ep->node;
+-	const struct qrtr_hdr_v1 *v1;
+-	const struct qrtr_hdr_v2 *v2;
+-	struct qrtr_sock *ipc;
+-	struct sk_buff *skb;
+-	struct qrtr_cb *cb;
+-	size_t size;
+-	unsigned int ver;
+-	size_t hdrlen;
+-
+-	if (len == 0 || len & 3)
+-		return -EINVAL;
+-
+-	skb = __netdev_alloc_skb(NULL, len, GFP_ATOMIC | __GFP_NOWARN);
+-	if (!skb)
+-		return -ENOMEM;
+-
+-	cb = (struct qrtr_cb *)skb->cb;
+-
+-	/* Version field in v1 is little endian, so this works for both cases */
+-	ver = *(u8*)data;
+-
+-	switch (ver) {
+-	case QRTR_PROTO_VER_1:
+-		if (len < sizeof(*v1))
+-			goto err;
+-		v1 = data;
+-		hdrlen = sizeof(*v1);
+-
+-		cb->type = le32_to_cpu(v1->type);
+-		cb->src_node = le32_to_cpu(v1->src_node_id);
+-		cb->src_port = le32_to_cpu(v1->src_port_id);
+-		cb->confirm_rx = !!v1->confirm_rx;
+-		cb->dst_node = le32_to_cpu(v1->dst_node_id);
+-		cb->dst_port = le32_to_cpu(v1->dst_port_id);
+-
+-		size = le32_to_cpu(v1->size);
+-		break;
+-	case QRTR_PROTO_VER_2:
+-		if (len < sizeof(*v2))
+-			goto err;
+-		v2 = data;
+-		hdrlen = sizeof(*v2) + v2->optlen;
+-
+-		cb->type = v2->type;
+-		cb->confirm_rx = !!(v2->flags & QRTR_FLAGS_CONFIRM_RX);
+-		cb->src_node = le16_to_cpu(v2->src_node_id);
+-		cb->src_port = le16_to_cpu(v2->src_port_id);
+-		cb->dst_node = le16_to_cpu(v2->dst_node_id);
+-		cb->dst_port = le16_to_cpu(v2->dst_port_id);
+-
+-		if (cb->src_port == (u16)QRTR_PORT_CTRL)
+-			cb->src_port = QRTR_PORT_CTRL;
+-		if (cb->dst_port == (u16)QRTR_PORT_CTRL)
+-			cb->dst_port = QRTR_PORT_CTRL;
+-
+-		size = le32_to_cpu(v2->size);
+-		break;
+-	default:
+-		pr_err("qrtr: Invalid version %d\n", ver);
+-		goto err;
+-	}
+-
+-	if (!size || len != ALIGN(size, 4) + hdrlen)
+-		goto err;
+-
+-	if (cb->dst_port != QRTR_PORT_CTRL && cb->type != QRTR_TYPE_DATA &&
+-	    cb->type != QRTR_TYPE_RESUME_TX)
+-		goto err;
+-
+-	skb_put_data(skb, data + hdrlen, size);
+-
+-	qrtr_node_assign(node, cb->src_node);
+-
+-	if (cb->type == QRTR_TYPE_RESUME_TX) {
+-		qrtr_tx_resume(node, skb);
+-	} else {
+-		ipc = qrtr_port_lookup(cb->dst_port);
+-		if (!ipc)
+-			goto err;
+-
+-		if (sock_queue_rcv_skb(&ipc->sk, skb)) {
+-			qrtr_port_put(ipc);
+-			goto err;
+-		}
+-
+-		qrtr_port_put(ipc);
+-	}
+-
+-	return 0;
+-
+-err:
+-	kfree_skb(skb);
+-	return -EINVAL;
+-
+-}
+-EXPORT_SYMBOL_GPL(qrtr_endpoint_post);
+-
+-/**
+- * qrtr_alloc_ctrl_packet() - allocate control packet skb
+- * @pkt: reference to qrtr_ctrl_pkt pointer
+- *
+- * Returns newly allocated sk_buff, or NULL on failure
+- *
+- * This function allocates a sk_buff large enough to carry a qrtr_ctrl_pkt and
+- * on success returns a reference to the control packet in @pkt.
+- */
+-static struct sk_buff *qrtr_alloc_ctrl_packet(struct qrtr_ctrl_pkt **pkt)
+-{
+-	const int pkt_len = sizeof(struct qrtr_ctrl_pkt);
+-	struct sk_buff *skb;
+-
+-	skb = alloc_skb(QRTR_HDR_MAX_SIZE + pkt_len, GFP_KERNEL);
+-	if (!skb)
+-		return NULL;
+-
+-	skb_reserve(skb, QRTR_HDR_MAX_SIZE);
+-	*pkt = skb_put_zero(skb, pkt_len);
+-
+-	return skb;
+-}
+-
+-/**
+- * qrtr_endpoint_register() - register a new endpoint
+- * @ep: endpoint to register
+- * @nid: desired node id; may be QRTR_EP_NID_AUTO for auto-assignment
+- * Return: 0 on success; negative error code on failure
+- *
+- * The specified endpoint must have the xmit function pointer set on call.
+- */
+-int qrtr_endpoint_register(struct qrtr_endpoint *ep, unsigned int nid)
+-{
+-	struct qrtr_node *node;
+-
+-	if (!ep || !ep->xmit)
+-		return -EINVAL;
+-
+-	node = kzalloc(sizeof(*node), GFP_KERNEL);
+-	if (!node)
+-		return -ENOMEM;
+-
+-	kref_init(&node->ref);
+-	mutex_init(&node->ep_lock);
+-	skb_queue_head_init(&node->rx_queue);
+-	node->nid = QRTR_EP_NID_AUTO;
+-	node->ep = ep;
+-
+-	INIT_RADIX_TREE(&node->qrtr_tx_flow, GFP_KERNEL);
+-	mutex_init(&node->qrtr_tx_lock);
+-
+-	qrtr_node_assign(node, nid);
+-
+-	mutex_lock(&qrtr_node_lock);
+-	list_add(&node->item, &qrtr_all_nodes);
+-	mutex_unlock(&qrtr_node_lock);
+-	ep->node = node;
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(qrtr_endpoint_register);
+-
+-/**
+- * qrtr_endpoint_unregister - unregister endpoint
+- * @ep: endpoint to unregister
+- */
+-void qrtr_endpoint_unregister(struct qrtr_endpoint *ep)
+-{
+-	struct qrtr_node *node = ep->node;
+-	struct sockaddr_qrtr src = {AF_QIPCRTR, node->nid, QRTR_PORT_CTRL};
+-	struct sockaddr_qrtr dst = {AF_QIPCRTR, qrtr_local_nid, QRTR_PORT_CTRL};
+-	struct radix_tree_iter iter;
+-	struct qrtr_ctrl_pkt *pkt;
+-	struct qrtr_tx_flow *flow;
+-	struct sk_buff *skb;
+-	void __rcu **slot;
+-
+-	mutex_lock(&node->ep_lock);
+-	node->ep = NULL;
+-	mutex_unlock(&node->ep_lock);
+-
+-	/* Notify the local controller about the event */
+-	skb = qrtr_alloc_ctrl_packet(&pkt);
+-	if (skb) {
+-		pkt->cmd = cpu_to_le32(QRTR_TYPE_BYE);
+-		qrtr_local_enqueue(NULL, skb, QRTR_TYPE_BYE, &src, &dst);
+-	}
+-
+-	/* Wake up any transmitters waiting for resume-tx from the node */
+-	mutex_lock(&node->qrtr_tx_lock);
+-	radix_tree_for_each_slot(slot, &node->qrtr_tx_flow, &iter, 0) {
+-		flow = *slot;
+-		wake_up_interruptible_all(&flow->resume_tx);
+-	}
+-	mutex_unlock(&node->qrtr_tx_lock);
+-
+-	qrtr_node_release(node);
+-	ep->node = NULL;
+-}
+-EXPORT_SYMBOL_GPL(qrtr_endpoint_unregister);
+-
+-/* Lookup socket by port.
+- *
+- * Callers must release with qrtr_port_put()
+- */
+-static struct qrtr_sock *qrtr_port_lookup(int port)
+-{
+-	struct qrtr_sock *ipc;
+-
+-	if (port == QRTR_PORT_CTRL)
+-		port = 0;
+-
+-	rcu_read_lock();
+-	ipc = xa_load(&qrtr_ports, port);
+-	if (ipc)
+-		sock_hold(&ipc->sk);
+-	rcu_read_unlock();
+-
+-	return ipc;
+-}
+-
+-/* Release acquired socket. */
+-static void qrtr_port_put(struct qrtr_sock *ipc)
+-{
+-	sock_put(&ipc->sk);
+-}
+-
+-/* Remove port assignment. */
+-static void qrtr_port_remove(struct qrtr_sock *ipc)
+-{
+-	struct qrtr_ctrl_pkt *pkt;
+-	struct sk_buff *skb;
+-	int port = ipc->us.sq_port;
+-	struct sockaddr_qrtr to;
+-
+-	to.sq_family = AF_QIPCRTR;
+-	to.sq_node = QRTR_NODE_BCAST;
+-	to.sq_port = QRTR_PORT_CTRL;
+-
+-	skb = qrtr_alloc_ctrl_packet(&pkt);
+-	if (skb) {
+-		pkt->cmd = cpu_to_le32(QRTR_TYPE_DEL_CLIENT);
+-		pkt->client.node = cpu_to_le32(ipc->us.sq_node);
+-		pkt->client.port = cpu_to_le32(ipc->us.sq_port);
+-
+-		skb_set_owner_w(skb, &ipc->sk);
+-		qrtr_bcast_enqueue(NULL, skb, QRTR_TYPE_DEL_CLIENT, &ipc->us,
+-				   &to);
+-	}
+-
+-	if (port == QRTR_PORT_CTRL)
+-		port = 0;
+-
+-	__sock_put(&ipc->sk);
+-
+-	xa_erase(&qrtr_ports, port);
+-
+-	/* Ensure that if qrtr_port_lookup() did enter the RCU read section we
+-	 * wait for it to up increment the refcount */
+-	synchronize_rcu();
+-}
+-
+-/* Assign port number to socket.
+- *
+- * Specify port in the integer pointed to by port, and it will be adjusted
+- * on return as necesssary.
+- *
+- * Port may be:
+- *   0: Assign ephemeral port in [QRTR_MIN_EPH_SOCKET, QRTR_MAX_EPH_SOCKET]
+- *   <QRTR_MIN_EPH_SOCKET: Specified; requires CAP_NET_ADMIN
+- *   >QRTR_MIN_EPH_SOCKET: Specified; available to all
+- */
+-static int qrtr_port_assign(struct qrtr_sock *ipc, int *port)
+-{
+-	int rc;
+-
+-	if (!*port) {
+-		rc = xa_alloc(&qrtr_ports, port, ipc, QRTR_EPH_PORT_RANGE,
+-				GFP_KERNEL);
+-	} else if (*port < QRTR_MIN_EPH_SOCKET && !capable(CAP_NET_ADMIN)) {
+-		rc = -EACCES;
+-	} else if (*port == QRTR_PORT_CTRL) {
+-		rc = xa_insert(&qrtr_ports, 0, ipc, GFP_KERNEL);
+-	} else {
+-		rc = xa_insert(&qrtr_ports, *port, ipc, GFP_KERNEL);
+-	}
+-
+-	if (rc == -EBUSY)
+-		return -EADDRINUSE;
+-	else if (rc < 0)
+-		return rc;
+-
+-	sock_hold(&ipc->sk);
+-
+-	return 0;
+-}
+-
+-/* Reset all non-control ports */
+-static void qrtr_reset_ports(void)
+-{
+-	struct qrtr_sock *ipc;
+-	unsigned long index;
+-
+-	rcu_read_lock();
+-	xa_for_each_start(&qrtr_ports, index, ipc, 1) {
+-		sock_hold(&ipc->sk);
+-		ipc->sk.sk_err = ENETRESET;
+-		ipc->sk.sk_error_report(&ipc->sk);
+-		sock_put(&ipc->sk);
+-	}
+-	rcu_read_unlock();
+-}
+-
+-/* Bind socket to address.
+- *
+- * Socket should be locked upon call.
+- */
+-static int __qrtr_bind(struct socket *sock,
+-		       const struct sockaddr_qrtr *addr, int zapped)
+-{
+-	struct qrtr_sock *ipc = qrtr_sk(sock->sk);
+-	struct sock *sk = sock->sk;
+-	int port;
+-	int rc;
+-
+-	/* rebinding ok */
+-	if (!zapped && addr->sq_port == ipc->us.sq_port)
+-		return 0;
+-
+-	port = addr->sq_port;
+-	rc = qrtr_port_assign(ipc, &port);
+-	if (rc)
+-		return rc;
+-
+-	/* unbind previous, if any */
+-	if (!zapped)
+-		qrtr_port_remove(ipc);
+-	ipc->us.sq_port = port;
+-
+-	sock_reset_flag(sk, SOCK_ZAPPED);
+-
+-	/* Notify all open ports about the new controller */
+-	if (port == QRTR_PORT_CTRL)
+-		qrtr_reset_ports();
+-
+-	return 0;
+-}
+-
+-/* Auto bind to an ephemeral port. */
+-static int qrtr_autobind(struct socket *sock)
+-{
+-	struct sock *sk = sock->sk;
+-	struct sockaddr_qrtr addr;
+-
+-	if (!sock_flag(sk, SOCK_ZAPPED))
+-		return 0;
+-
+-	addr.sq_family = AF_QIPCRTR;
+-	addr.sq_node = qrtr_local_nid;
+-	addr.sq_port = 0;
+-
+-	return __qrtr_bind(sock, &addr, 1);
+-}
+-
+-/* Bind socket to specified sockaddr. */
+-static int qrtr_bind(struct socket *sock, struct sockaddr *saddr, int len)
+-{
+-	DECLARE_SOCKADDR(struct sockaddr_qrtr *, addr, saddr);
+-	struct qrtr_sock *ipc = qrtr_sk(sock->sk);
+-	struct sock *sk = sock->sk;
+-	int rc;
+-
+-	if (len < sizeof(*addr) || addr->sq_family != AF_QIPCRTR)
+-		return -EINVAL;
+-
+-	if (addr->sq_node != ipc->us.sq_node)
+-		return -EINVAL;
+-
+-	lock_sock(sk);
+-	rc = __qrtr_bind(sock, addr, sock_flag(sk, SOCK_ZAPPED));
+-	release_sock(sk);
+-
+-	return rc;
+-}
+-
+-/* Queue packet to local peer socket. */
+-static int qrtr_local_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+-			      int type, struct sockaddr_qrtr *from,
+-			      struct sockaddr_qrtr *to)
+-{
+-	struct qrtr_sock *ipc;
+-	struct qrtr_cb *cb;
+-
+-	ipc = qrtr_port_lookup(to->sq_port);
+-	if (!ipc || &ipc->sk == skb->sk) { /* do not send to self */
+-		if (ipc)
+-			qrtr_port_put(ipc);
+-		kfree_skb(skb);
+-		return -ENODEV;
+-	}
+-
+-	cb = (struct qrtr_cb *)skb->cb;
+-	cb->src_node = from->sq_node;
+-	cb->src_port = from->sq_port;
+-
+-	if (sock_queue_rcv_skb(&ipc->sk, skb)) {
+-		qrtr_port_put(ipc);
+-		kfree_skb(skb);
+-		return -ENOSPC;
+-	}
+-
+-	qrtr_port_put(ipc);
+-
+-	return 0;
+-}
+-
+-/* Queue packet for broadcast. */
+-static int qrtr_bcast_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+-			      int type, struct sockaddr_qrtr *from,
+-			      struct sockaddr_qrtr *to)
+-{
+-	struct sk_buff *skbn;
+-
+-	mutex_lock(&qrtr_node_lock);
+-	list_for_each_entry(node, &qrtr_all_nodes, item) {
+-		skbn = skb_clone(skb, GFP_KERNEL);
+-		if (!skbn)
+-			break;
+-		skb_set_owner_w(skbn, skb->sk);
+-		qrtr_node_enqueue(node, skbn, type, from, to);
+-	}
+-	mutex_unlock(&qrtr_node_lock);
+-
+-	qrtr_local_enqueue(NULL, skb, type, from, to);
+-
+-	return 0;
+-}
+-
+-static int qrtr_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+-{
+-	DECLARE_SOCKADDR(struct sockaddr_qrtr *, addr, msg->msg_name);
+-	int (*enqueue_fn)(struct qrtr_node *, struct sk_buff *, int,
+-			  struct sockaddr_qrtr *, struct sockaddr_qrtr *);
+-	__le32 qrtr_type = cpu_to_le32(QRTR_TYPE_DATA);
+-	struct qrtr_sock *ipc = qrtr_sk(sock->sk);
+-	struct sock *sk = sock->sk;
+-	struct qrtr_node *node;
+-	struct sk_buff *skb;
+-	size_t plen;
+-	u32 type;
+-	int rc;
+-
+-	if (msg->msg_flags & ~(MSG_DONTWAIT))
+-		return -EINVAL;
+-
+-	if (len > 65535)
+-		return -EMSGSIZE;
+-
+-	lock_sock(sk);
+-
+-	if (addr) {
+-		if (msg->msg_namelen < sizeof(*addr)) {
+-			release_sock(sk);
+-			return -EINVAL;
+-		}
+-
+-		if (addr->sq_family != AF_QIPCRTR) {
+-			release_sock(sk);
+-			return -EINVAL;
+-		}
+-
+-		rc = qrtr_autobind(sock);
+-		if (rc) {
+-			release_sock(sk);
+-			return rc;
+-		}
+-	} else if (sk->sk_state == TCP_ESTABLISHED) {
+-		addr = &ipc->peer;
+-	} else {
+-		release_sock(sk);
+-		return -ENOTCONN;
+-	}
+-
+-	node = NULL;
+-	if (addr->sq_node == QRTR_NODE_BCAST) {
+-		if (addr->sq_port != QRTR_PORT_CTRL &&
+-		    qrtr_local_nid != QRTR_NODE_BCAST) {
+-			release_sock(sk);
+-			return -ENOTCONN;
+-		}
+-		enqueue_fn = qrtr_bcast_enqueue;
+-	} else if (addr->sq_node == ipc->us.sq_node) {
+-		enqueue_fn = qrtr_local_enqueue;
+-	} else {
+-		node = qrtr_node_lookup(addr->sq_node);
+-		if (!node) {
+-			release_sock(sk);
+-			return -ECONNRESET;
+-		}
+-		enqueue_fn = qrtr_node_enqueue;
+-	}
+-
+-	plen = (len + 3) & ~3;
+-	skb = sock_alloc_send_skb(sk, plen + QRTR_HDR_MAX_SIZE,
+-				  msg->msg_flags & MSG_DONTWAIT, &rc);
+-	if (!skb) {
+-		rc = -ENOMEM;
+-		goto out_node;
+-	}
+-
+-	skb_reserve(skb, QRTR_HDR_MAX_SIZE);
+-
+-	rc = memcpy_from_msg(skb_put(skb, len), msg, len);
+-	if (rc) {
+-		kfree_skb(skb);
+-		goto out_node;
+-	}
+-
+-	if (ipc->us.sq_port == QRTR_PORT_CTRL) {
+-		if (len < 4) {
+-			rc = -EINVAL;
+-			kfree_skb(skb);
+-			goto out_node;
+-		}
+-
+-		/* control messages already require the type as 'command' */
+-		skb_copy_bits(skb, 0, &qrtr_type, 4);
+-	}
+-
+-	type = le32_to_cpu(qrtr_type);
+-	rc = enqueue_fn(node, skb, type, &ipc->us, addr);
+-	if (rc >= 0)
+-		rc = len;
+-
+-out_node:
+-	qrtr_node_release(node);
+-	release_sock(sk);
+-
+-	return rc;
+-}
+-
+-static int qrtr_send_resume_tx(struct qrtr_cb *cb)
+-{
+-	struct sockaddr_qrtr remote = { AF_QIPCRTR, cb->src_node, cb->src_port };
+-	struct sockaddr_qrtr local = { AF_QIPCRTR, cb->dst_node, cb->dst_port };
+-	struct qrtr_ctrl_pkt *pkt;
+-	struct qrtr_node *node;
+-	struct sk_buff *skb;
+-	int ret;
+-
+-	node = qrtr_node_lookup(remote.sq_node);
+-	if (!node)
+-		return -EINVAL;
+-
+-	skb = qrtr_alloc_ctrl_packet(&pkt);
+-	if (!skb)
+-		return -ENOMEM;
+-
+-	pkt->cmd = cpu_to_le32(QRTR_TYPE_RESUME_TX);
+-	pkt->client.node = cpu_to_le32(cb->dst_node);
+-	pkt->client.port = cpu_to_le32(cb->dst_port);
+-
+-	ret = qrtr_node_enqueue(node, skb, QRTR_TYPE_RESUME_TX, &local, &remote);
+-
+-	qrtr_node_release(node);
+-
+-	return ret;
+-}
+-
+-static int qrtr_recvmsg(struct socket *sock, struct msghdr *msg,
+-			size_t size, int flags)
+-{
+-	DECLARE_SOCKADDR(struct sockaddr_qrtr *, addr, msg->msg_name);
+-	struct sock *sk = sock->sk;
+-	struct sk_buff *skb;
+-	struct qrtr_cb *cb;
+-	int copied, rc;
+-
+-	lock_sock(sk);
+-
+-	if (sock_flag(sk, SOCK_ZAPPED)) {
+-		release_sock(sk);
+-		return -EADDRNOTAVAIL;
+-	}
+-
+-	skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
+-				flags & MSG_DONTWAIT, &rc);
+-	if (!skb) {
+-		release_sock(sk);
+-		return rc;
+-	}
+-	cb = (struct qrtr_cb *)skb->cb;
+-
+-	copied = skb->len;
+-	if (copied > size) {
+-		copied = size;
+-		msg->msg_flags |= MSG_TRUNC;
+-	}
+-
+-	rc = skb_copy_datagram_msg(skb, 0, msg, copied);
+-	if (rc < 0)
+-		goto out;
+-	rc = copied;
+-
+-	if (addr) {
+-		/* There is an anonymous 2-byte hole after sq_family,
+-		 * make sure to clear it.
+-		 */
+-		memset(addr, 0, sizeof(*addr));
+-
+-		addr->sq_family = AF_QIPCRTR;
+-		addr->sq_node = cb->src_node;
+-		addr->sq_port = cb->src_port;
+-		msg->msg_namelen = sizeof(*addr);
+-	}
+-
+-out:
+-	if (cb->confirm_rx)
+-		qrtr_send_resume_tx(cb);
+-
+-	skb_free_datagram(sk, skb);
+-	release_sock(sk);
+-
+-	return rc;
+-}
+-
+-static int qrtr_connect(struct socket *sock, struct sockaddr *saddr,
+-			int len, int flags)
+-{
+-	DECLARE_SOCKADDR(struct sockaddr_qrtr *, addr, saddr);
+-	struct qrtr_sock *ipc = qrtr_sk(sock->sk);
+-	struct sock *sk = sock->sk;
+-	int rc;
+-
+-	if (len < sizeof(*addr) || addr->sq_family != AF_QIPCRTR)
+-		return -EINVAL;
+-
+-	lock_sock(sk);
+-
+-	sk->sk_state = TCP_CLOSE;
+-	sock->state = SS_UNCONNECTED;
+-
+-	rc = qrtr_autobind(sock);
+-	if (rc) {
+-		release_sock(sk);
+-		return rc;
+-	}
+-
+-	ipc->peer = *addr;
+-	sock->state = SS_CONNECTED;
+-	sk->sk_state = TCP_ESTABLISHED;
+-
+-	release_sock(sk);
+-
+-	return 0;
+-}
+-
+-static int qrtr_getname(struct socket *sock, struct sockaddr *saddr,
+-			int peer)
+-{
+-	struct qrtr_sock *ipc = qrtr_sk(sock->sk);
+-	struct sockaddr_qrtr qaddr;
+-	struct sock *sk = sock->sk;
+-
+-	lock_sock(sk);
+-	if (peer) {
+-		if (sk->sk_state != TCP_ESTABLISHED) {
+-			release_sock(sk);
+-			return -ENOTCONN;
+-		}
+-
+-		qaddr = ipc->peer;
+-	} else {
+-		qaddr = ipc->us;
+-	}
+-	release_sock(sk);
+-
+-	qaddr.sq_family = AF_QIPCRTR;
+-
+-	memcpy(saddr, &qaddr, sizeof(qaddr));
+-
+-	return sizeof(qaddr);
+-}
+-
+-static int qrtr_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+-{
+-	void __user *argp = (void __user *)arg;
+-	struct qrtr_sock *ipc = qrtr_sk(sock->sk);
+-	struct sock *sk = sock->sk;
+-	struct sockaddr_qrtr *sq;
+-	struct sk_buff *skb;
+-	struct ifreq ifr;
+-	long len = 0;
+-	int rc = 0;
+-
+-	lock_sock(sk);
+-
+-	switch (cmd) {
+-	case TIOCOUTQ:
+-		len = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
+-		if (len < 0)
+-			len = 0;
+-		rc = put_user(len, (int __user *)argp);
+-		break;
+-	case TIOCINQ:
+-		skb = skb_peek(&sk->sk_receive_queue);
+-		if (skb)
+-			len = skb->len;
+-		rc = put_user(len, (int __user *)argp);
+-		break;
+-	case SIOCGIFADDR:
+-		if (copy_from_user(&ifr, argp, sizeof(ifr))) {
+-			rc = -EFAULT;
+-			break;
+-		}
+-
+-		sq = (struct sockaddr_qrtr *)&ifr.ifr_addr;
+-		*sq = ipc->us;
+-		if (copy_to_user(argp, &ifr, sizeof(ifr))) {
+-			rc = -EFAULT;
+-			break;
+-		}
+-		break;
+-	case SIOCADDRT:
+-	case SIOCDELRT:
+-	case SIOCSIFADDR:
+-	case SIOCGIFDSTADDR:
+-	case SIOCSIFDSTADDR:
+-	case SIOCGIFBRDADDR:
+-	case SIOCSIFBRDADDR:
+-	case SIOCGIFNETMASK:
+-	case SIOCSIFNETMASK:
+-		rc = -EINVAL;
+-		break;
+-	default:
+-		rc = -ENOIOCTLCMD;
+-		break;
+-	}
+-
+-	release_sock(sk);
+-
+-	return rc;
+-}
+-
+-static int qrtr_release(struct socket *sock)
+-{
+-	struct sock *sk = sock->sk;
+-	struct qrtr_sock *ipc;
+-
+-	if (!sk)
+-		return 0;
+-
+-	lock_sock(sk);
+-
+-	ipc = qrtr_sk(sk);
+-	sk->sk_shutdown = SHUTDOWN_MASK;
+-	if (!sock_flag(sk, SOCK_DEAD))
+-		sk->sk_state_change(sk);
+-
+-	sock_set_flag(sk, SOCK_DEAD);
+-	sock_orphan(sk);
+-	sock->sk = NULL;
+-
+-	if (!sock_flag(sk, SOCK_ZAPPED))
+-		qrtr_port_remove(ipc);
+-
+-	skb_queue_purge(&sk->sk_receive_queue);
+-
+-	release_sock(sk);
+-	sock_put(sk);
+-
+-	return 0;
+-}
+-
+-static const struct proto_ops qrtr_proto_ops = {
+-	.owner		= THIS_MODULE,
+-	.family		= AF_QIPCRTR,
+-	.bind		= qrtr_bind,
+-	.connect	= qrtr_connect,
+-	.socketpair	= sock_no_socketpair,
+-	.accept		= sock_no_accept,
+-	.listen		= sock_no_listen,
+-	.sendmsg	= qrtr_sendmsg,
+-	.recvmsg	= qrtr_recvmsg,
+-	.getname	= qrtr_getname,
+-	.ioctl		= qrtr_ioctl,
+-	.gettstamp	= sock_gettstamp,
+-	.poll		= datagram_poll,
+-	.shutdown	= sock_no_shutdown,
+-	.release	= qrtr_release,
+-	.mmap		= sock_no_mmap,
+-	.sendpage	= sock_no_sendpage,
+-};
+-
+-static struct proto qrtr_proto = {
+-	.name		= "QIPCRTR",
+-	.owner		= THIS_MODULE,
+-	.obj_size	= sizeof(struct qrtr_sock),
+-};
+-
+-static int qrtr_create(struct net *net, struct socket *sock,
+-		       int protocol, int kern)
+-{
+-	struct qrtr_sock *ipc;
+-	struct sock *sk;
+-
+-	if (sock->type != SOCK_DGRAM)
+-		return -EPROTOTYPE;
+-
+-	sk = sk_alloc(net, AF_QIPCRTR, GFP_KERNEL, &qrtr_proto, kern);
+-	if (!sk)
+-		return -ENOMEM;
+-
+-	sock_set_flag(sk, SOCK_ZAPPED);
+-
+-	sock_init_data(sock, sk);
+-	sock->ops = &qrtr_proto_ops;
+-
+-	ipc = qrtr_sk(sk);
+-	ipc->us.sq_family = AF_QIPCRTR;
+-	ipc->us.sq_node = qrtr_local_nid;
+-	ipc->us.sq_port = 0;
+-
+-	return 0;
+-}
+-
+-static const struct net_proto_family qrtr_family = {
+-	.owner	= THIS_MODULE,
+-	.family	= AF_QIPCRTR,
+-	.create	= qrtr_create,
+-};
+-
+-static int __init qrtr_proto_init(void)
+-{
+-	int rc;
+-
+-	rc = proto_register(&qrtr_proto, 1);
+-	if (rc)
+-		return rc;
+-
+-	rc = sock_register(&qrtr_family);
+-	if (rc) {
+-		proto_unregister(&qrtr_proto);
+-		return rc;
+-	}
+-
+-	qrtr_ns_init();
+-
+-	return rc;
+-}
+-postcore_initcall(qrtr_proto_init);
+-
+-static void __exit qrtr_proto_fini(void)
+-{
+-	qrtr_ns_remove();
+-	sock_unregister(qrtr_family.family);
+-	proto_unregister(&qrtr_proto);
+-}
+-module_exit(qrtr_proto_fini);
+-
+-MODULE_DESCRIPTION("Qualcomm IPC-router driver");
+-MODULE_LICENSE("GPL v2");
+-MODULE_ALIAS_NETPROTO(PF_QIPCRTR);
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index e9b4ea3d934fa..3a68d65f7d153 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -1831,6 +1831,10 @@ static int sctp_sendmsg_to_asoc(struct sctp_association *asoc,
+ 		err = sctp_wait_for_sndbuf(asoc, &timeo, msg_len);
+ 		if (err)
+ 			goto err;
++		if (unlikely(sinfo->sinfo_stream >= asoc->stream.outcnt)) {
++			err = -EINVAL;
++			goto err;
++		}
+ 	}
+ 
+ 	if (sctp_state(asoc, CLOSED)) {
+diff --git a/net/sctp/stream_interleave.c b/net/sctp/stream_interleave.c
+index 6b13f737ebf2e..e3aad75cb11d9 100644
+--- a/net/sctp/stream_interleave.c
++++ b/net/sctp/stream_interleave.c
+@@ -1162,7 +1162,8 @@ static void sctp_generate_iftsn(struct sctp_outq *q, __u32 ctsn)
+ 
+ #define _sctp_walk_ifwdtsn(pos, chunk, end) \
+ 	for (pos = chunk->subh.ifwdtsn_hdr->skip; \
+-	     (void *)pos < (void *)chunk->subh.ifwdtsn_hdr->skip + (end); pos++)
++	     (void *)pos <= (void *)chunk->subh.ifwdtsn_hdr->skip + (end) - \
++			    sizeof(struct sctp_ifwdtsn_skip); pos++)
+ 
+ #define sctp_walk_ifwdtsn(pos, ch) \
+ 	_sctp_walk_ifwdtsn((pos), (ch), ntohs((ch)->chunk_hdr->length) - \
+diff --git a/net/sunrpc/svcauth_unix.c b/net/sunrpc/svcauth_unix.c
+index 97c0bddba7a30..60754a292589b 100644
+--- a/net/sunrpc/svcauth_unix.c
++++ b/net/sunrpc/svcauth_unix.c
+@@ -424,14 +424,23 @@ static int unix_gid_hash(kuid_t uid)
+ 	return hash_long(from_kuid(&init_user_ns, uid), GID_HASHBITS);
+ }
+ 
+-static void unix_gid_put(struct kref *kref)
++static void unix_gid_free(struct rcu_head *rcu)
+ {
+-	struct cache_head *item = container_of(kref, struct cache_head, ref);
+-	struct unix_gid *ug = container_of(item, struct unix_gid, h);
++	struct unix_gid *ug = container_of(rcu, struct unix_gid, rcu);
++	struct cache_head *item = &ug->h;
++
+ 	if (test_bit(CACHE_VALID, &item->flags) &&
+ 	    !test_bit(CACHE_NEGATIVE, &item->flags))
+ 		put_group_info(ug->gi);
+-	kfree_rcu(ug, rcu);
++	kfree(ug);
++}
++
++static void unix_gid_put(struct kref *kref)
++{
++	struct cache_head *item = container_of(kref, struct cache_head, ref);
++	struct unix_gid *ug = container_of(item, struct unix_gid, h);
++
++	call_rcu(&ug->rcu, unix_gid_free);
+ }
+ 
+ static int unix_gid_match(struct cache_head *corig, struct cache_head *cnew)
+diff --git a/scripts/Kconfig.include b/scripts/Kconfig.include
+index a5fe72c504ffb..6d37cb780452b 100644
+--- a/scripts/Kconfig.include
++++ b/scripts/Kconfig.include
+@@ -42,6 +42,12 @@ $(error-if,$(failure,command -v $(LD)),linker '$(LD)' not found)
+ # Fail if the linker is gold as it's not capable of linking the kernel proper
+ $(error-if,$(success, $(LD) -v | grep -q gold), gold linker '$(LD)' not supported)
+ 
++# Get the assembler name, version, and error out if it is not supported.
++as-info := $(shell,$(srctree)/scripts/as-version.sh $(CC) $(CLANG_FLAGS))
++$(error-if,$(success,test -z "$(as-info)"),Sorry$(comma) this assembler is not supported.)
++as-name := $(shell,set -- $(as-info) && echo $1)
++as-version := $(shell,set -- $(as-info) && echo $2)
++
+ # machine bit flags
+ #  $(m32-flag): -m32 if the compiler supports it, or an empty string otherwise.
+ #  $(m64-flag): -m64 if the compiler supports it, or an empty string otherwise.
+diff --git a/scripts/as-version.sh b/scripts/as-version.sh
+new file mode 100755
+index 0000000000000..532270bd4b7ef
+--- /dev/null
++++ b/scripts/as-version.sh
+@@ -0,0 +1,69 @@
++#!/bin/sh
++# SPDX-License-Identifier: GPL-2.0-only
++#
++# Print the assembler name and its version in a 5 or 6-digit form.
++# Also, perform the minimum version check.
++# (If it is the integrated assembler, return 0 as the version, and
++# skip the version check.)
++
++set -e
++
++# Convert the version string x.y.z to a canonical 5 or 6-digit form.
++get_canonical_version()
++{
++	IFS=.
++	set -- $1
++
++	# If the 2nd or 3rd field is missing, fill it with a zero.
++	#
++	# The 4th field, if present, is ignored.
++	# This occurs in development snapshots as in 2.35.1.20201116
++	echo $((10000 * $1 + 100 * ${2:-0} + ${3:-0}))
++}
++
++# Clang fails to handle -Wa,--version unless -fno-integrated-as is given.
++# We check -fintegrated-as, expecting it is explicitly passed in for the
++# integrated assembler case.
++check_integrated_as()
++{
++	while [ $# -gt 0 ]; do
++		if [ "$1" = -fintegrated-as ]; then
++			# For the integrated assembler, we do not check the
++			# version here. It is the same as the clang version, and
++			# it has been already checked by scripts/cc-version.sh.
++			echo LLVM 0
++			exit 0
++		fi
++		shift
++	done
++}
++
++check_integrated_as "$@"
++
++orig_args="$@"
++
++# Get the first line of the --version output.
++IFS='
++'
++set -- $(LC_ALL=C "$@" -Wa,--version -c -x assembler /dev/null -o /dev/null 2>/dev/null)
++
++# Split the line on spaces.
++IFS=' '
++set -- $1
++
++if [ "$1" = GNU -a "$2" = assembler ]; then
++	shift $(($# - 1))
++	version=$1
++	name=GNU
++else
++	echo "$orig_args: unknown assembler invoked" >&2
++	exit 1
++fi
++
++# Some distributions append a package release number, as in 2.34-4.fc32
++# Trim the hyphen and any characters that follow.
++version=${version%-*}
++
++cversion=$(get_canonical_version $version)
++
++echo $name $cversion
+diff --git a/scripts/dummy-tools/gcc b/scripts/dummy-tools/gcc
+index 346757a87dbc8..485427f40dba8 100755
+--- a/scripts/dummy-tools/gcc
++++ b/scripts/dummy-tools/gcc
+@@ -67,6 +67,12 @@ if arg_contain -E "$@"; then
+ 	fi
+ fi
+ 
++# To set CONFIG_AS_IS_GNU
++if arg_contain -Wa,--version "$@"; then
++	echo "GNU assembler (scripts/dummy-tools) 2.50"
++	exit 0
++fi
++
+ if arg_contain -S "$@"; then
+ 	# For scripts/gcc-x86-*-has-stack-protector.sh
+ 	if arg_contain -fstack-protector "$@"; then
+diff --git a/sound/firewire/tascam/tascam-stream.c b/sound/firewire/tascam/tascam-stream.c
+index eb07e1decf9ba..47de9727ac73e 100644
+--- a/sound/firewire/tascam/tascam-stream.c
++++ b/sound/firewire/tascam/tascam-stream.c
+@@ -475,7 +475,7 @@ int snd_tscm_stream_start_duplex(struct snd_tscm *tscm, unsigned int rate)
+ 
+ 		err = amdtp_domain_start(&tscm->domain, 0);
+ 		if (err < 0)
+-			return err;
++			goto error;
+ 
+ 		if (!amdtp_stream_wait_callback(&tscm->rx_stream,
+ 						CALLBACK_TIMEOUT) ||
+diff --git a/sound/i2c/cs8427.c b/sound/i2c/cs8427.c
+index 8634d4f466b36..e8c4c39cea12f 100644
+--- a/sound/i2c/cs8427.c
++++ b/sound/i2c/cs8427.c
+@@ -553,10 +553,13 @@ int snd_cs8427_iec958_active(struct snd_i2c_device *cs8427, int active)
+ 	if (snd_BUG_ON(!cs8427))
+ 		return -ENXIO;
+ 	chip = cs8427->private_data;
+-	if (active)
++	if (active) {
+ 		memcpy(chip->playback.pcm_status,
+ 		       chip->playback.def_status, 24);
+-	chip->playback.pcm_ctl->vd[0].access &= ~SNDRV_CTL_ELEM_ACCESS_INACTIVE;
++		chip->playback.pcm_ctl->vd[0].access &= ~SNDRV_CTL_ELEM_ACCESS_INACTIVE;
++	} else {
++		chip->playback.pcm_ctl->vd[0].access |= SNDRV_CTL_ELEM_ACCESS_INACTIVE;
++	}
+ 	snd_ctl_notify(cs8427->bus->card,
+ 		       SNDRV_CTL_EVENT_MASK_VALUE | SNDRV_CTL_EVENT_MASK_INFO,
+ 		       &chip->playback.pcm_ctl->id);
+diff --git a/sound/pci/emu10k1/emupcm.c b/sound/pci/emu10k1/emupcm.c
+index 8d2c101d66a23..1c4388b847d18 100644
+--- a/sound/pci/emu10k1/emupcm.c
++++ b/sound/pci/emu10k1/emupcm.c
+@@ -1232,7 +1232,7 @@ static int snd_emu10k1_capture_mic_close(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_emu10k1 *emu = snd_pcm_substream_chip(substream);
+ 
+-	emu->capture_interrupt = NULL;
++	emu->capture_mic_interrupt = NULL;
+ 	emu->pcm_capture_mic_substream = NULL;
+ 	return 0;
+ }
+@@ -1340,7 +1340,7 @@ static int snd_emu10k1_capture_efx_close(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_emu10k1 *emu = snd_pcm_substream_chip(substream);
+ 
+-	emu->capture_interrupt = NULL;
++	emu->capture_efx_interrupt = NULL;
+ 	emu->pcm_capture_efx_substream = NULL;
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2af9cd7b7999c..18309fa17fb87 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2632,6 +2632,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1462, 0xda57, "MSI Z270-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+ 	SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
+ 	SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
++	SND_PCI_QUIRK(0x1558, 0x3702, "Clevo X370SN[VW]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index 6fc0c4e77cd1e..76c5a2b64ef51 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -1707,6 +1707,7 @@ static const struct snd_pci_quirk stac925x_fixup_tbl[] = {
+ };
+ 
+ static const struct hda_pintbl ref92hd73xx_pin_configs[] = {
++	// Port A-H
+ 	{ 0x0a, 0x02214030 },
+ 	{ 0x0b, 0x02a19040 },
+ 	{ 0x0c, 0x01a19020 },
+@@ -1715,9 +1716,12 @@ static const struct hda_pintbl ref92hd73xx_pin_configs[] = {
+ 	{ 0x0f, 0x01014010 },
+ 	{ 0x10, 0x01014020 },
+ 	{ 0x11, 0x01014030 },
++	// CD in
+ 	{ 0x12, 0x02319040 },
++	// Digial Mic ins
+ 	{ 0x13, 0x90a000f0 },
+ 	{ 0x14, 0x90a000f0 },
++	// Digital outs
+ 	{ 0x22, 0x01452050 },
+ 	{ 0x23, 0x01452050 },
+ 	{}
+@@ -1758,6 +1762,7 @@ static const struct hda_pintbl alienware_m17x_pin_configs[] = {
+ };
+ 
+ static const struct hda_pintbl intel_dg45id_pin_configs[] = {
++	// Analog outputs
+ 	{ 0x0a, 0x02214230 },
+ 	{ 0x0b, 0x02A19240 },
+ 	{ 0x0c, 0x01013214 },
+@@ -1765,6 +1770,9 @@ static const struct hda_pintbl intel_dg45id_pin_configs[] = {
+ 	{ 0x0e, 0x01A19250 },
+ 	{ 0x0f, 0x01011212 },
+ 	{ 0x10, 0x01016211 },
++	// Digital output
++	{ 0x22, 0x01451380 },
++	{ 0x23, 0x40f000f0 },
+ 	{}
+ };
+ 
+@@ -1955,6 +1963,8 @@ static const struct snd_pci_quirk stac92hd73xx_fixup_tbl[] = {
+ 				"DFI LanParty", STAC_92HD73XX_REF),
+ 	SND_PCI_QUIRK(PCI_VENDOR_ID_DFI, 0x3101,
+ 				"DFI LanParty", STAC_92HD73XX_REF),
++	SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x5001,
++				"Intel DP45SG", STAC_92HD73XX_INTEL),
+ 	SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x5002,
+ 				"Intel DG45ID", STAC_92HD73XX_INTEL),
+ 	SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x5003,
+diff --git a/sound/soc/codecs/hdac_hdmi.c b/sound/soc/codecs/hdac_hdmi.c
+index 2c1305bf05722..6de3e47b92d85 100644
+--- a/sound/soc/codecs/hdac_hdmi.c
++++ b/sound/soc/codecs/hdac_hdmi.c
+@@ -436,23 +436,28 @@ static int hdac_hdmi_setup_audio_infoframe(struct hdac_device *hdev,
+ 	return 0;
+ }
+ 
+-static int hdac_hdmi_set_tdm_slot(struct snd_soc_dai *dai,
+-		unsigned int tx_mask, unsigned int rx_mask,
+-		int slots, int slot_width)
++static int hdac_hdmi_set_stream(struct snd_soc_dai *dai,
++				void *stream, int direction)
+ {
+ 	struct hdac_hdmi_priv *hdmi = snd_soc_dai_get_drvdata(dai);
+ 	struct hdac_device *hdev = hdmi->hdev;
+ 	struct hdac_hdmi_dai_port_map *dai_map;
+ 	struct hdac_hdmi_pcm *pcm;
++	struct hdac_stream *hstream;
+ 
+-	dev_dbg(&hdev->dev, "%s: strm_tag: %d\n", __func__, tx_mask);
++	if (!stream)
++		return -EINVAL;
++
++	hstream = (struct hdac_stream *)stream;
++
++	dev_dbg(&hdev->dev, "%s: strm_tag: %d\n", __func__, hstream->stream_tag);
+ 
+ 	dai_map = &hdmi->dai_map[dai->id];
+ 
+ 	pcm = hdac_hdmi_get_pcm_from_cvt(hdmi, dai_map->cvt);
+ 
+ 	if (pcm)
+-		pcm->stream_tag = (tx_mask << 4);
++		pcm->stream_tag = (hstream->stream_tag << 4);
+ 
+ 	return 0;
+ }
+@@ -1544,7 +1549,7 @@ static const struct snd_soc_dai_ops hdmi_dai_ops = {
+ 	.startup = hdac_hdmi_pcm_open,
+ 	.shutdown = hdac_hdmi_pcm_close,
+ 	.hw_params = hdac_hdmi_set_hw_params,
+-	.set_tdm_slot = hdac_hdmi_set_tdm_slot,
++	.set_stream = hdac_hdmi_set_stream,
+ };
+ 
+ /*
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 558d34fbd331c..61aa2c47fbd5e 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -969,9 +969,16 @@ static void btf_dump_emit_struct_def(struct btf_dump *d,
+ 	if (is_struct)
+ 		btf_dump_emit_bit_padding(d, off, t->size * 8, align, false, lvl + 1);
+ 
+-	if (vlen)
++	/*
++	 * Keep `struct empty {}` on a single line,
++	 * only print newline when there are regular or padding fields.
++	 */
++	if (vlen || t->size) {
+ 		btf_dump_printf(d, "\n");
+-	btf_dump_printf(d, "%s}", pfx(lvl));
++		btf_dump_printf(d, "%s}", pfx(lvl));
++	} else {
++		btf_dump_printf(d, "}");
++	}
+ 	if (packed)
+ 		btf_dump_printf(d, " __attribute__((packed))");
+ }
+diff --git a/tools/testing/selftests/intel_pstate/aperf.c b/tools/testing/selftests/intel_pstate/aperf.c
+index f6cd03a87493c..a8acf39969734 100644
+--- a/tools/testing/selftests/intel_pstate/aperf.c
++++ b/tools/testing/selftests/intel_pstate/aperf.c
+@@ -10,8 +10,12 @@
+ #include <sched.h>
+ #include <errno.h>
+ #include <string.h>
++#include <time.h>
+ #include "../kselftest.h"
+ 
++#define MSEC_PER_SEC	1000L
++#define NSEC_PER_MSEC	1000000L
++
+ void usage(char *name) {
+ 	printf ("Usage: %s cpunum\n", name);
+ }
+@@ -22,7 +26,7 @@ int main(int argc, char **argv) {
+ 	long long tsc, old_tsc, new_tsc;
+ 	long long aperf, old_aperf, new_aperf;
+ 	long long mperf, old_mperf, new_mperf;
+-	struct timeb before, after;
++	struct timespec before, after;
+ 	long long int start, finish, total;
+ 	cpu_set_t cpuset;
+ 
+@@ -55,7 +59,10 @@ int main(int argc, char **argv) {
+ 		return 1;
+ 	}
+ 
+-	ftime(&before);
++	if (clock_gettime(CLOCK_MONOTONIC, &before) < 0) {
++		perror("clock_gettime");
++		return 1;
++	}
+ 	pread(fd, &old_tsc,  sizeof(old_tsc), 0x10);
+ 	pread(fd, &old_aperf,  sizeof(old_mperf), 0xe7);
+ 	pread(fd, &old_mperf,  sizeof(old_aperf), 0xe8);
+@@ -64,7 +71,10 @@ int main(int argc, char **argv) {
+ 		sqrt(i);
+ 	}
+ 
+-	ftime(&after);
++	if (clock_gettime(CLOCK_MONOTONIC, &after) < 0) {
++		perror("clock_gettime");
++		return 1;
++	}
+ 	pread(fd, &new_tsc,  sizeof(new_tsc), 0x10);
+ 	pread(fd, &new_aperf,  sizeof(new_mperf), 0xe7);
+ 	pread(fd, &new_mperf,  sizeof(new_aperf), 0xe8);
+@@ -73,11 +83,11 @@ int main(int argc, char **argv) {
+ 	aperf = new_aperf-old_aperf;
+ 	mperf = new_mperf-old_mperf;
+ 
+-	start = before.time*1000 + before.millitm;
+-	finish = after.time*1000 + after.millitm;
++	start = before.tv_sec*MSEC_PER_SEC + before.tv_nsec/NSEC_PER_MSEC;
++	finish = after.tv_sec*MSEC_PER_SEC + after.tv_nsec/NSEC_PER_MSEC;
+ 	total = finish - start;
+ 
+-	printf("runTime: %4.2f\n", 1.0*total/1000);
++	printf("runTime: %4.2f\n", 1.0*total/MSEC_PER_SEC);
+ 	printf("freq: %7.0f\n", tsc / (1.0*aperf / (1.0 * mperf)) / total);
+ 	return 0;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-04-26  9:50 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2023-04-26  9:50 UTC (permalink / raw
  To: gentoo-commits

commit:     993d04936ee82597d4aa7c397cd8626eeee7a970
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 26 09:50:46 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Apr 26 09:50:46 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=993d0493

Linux patch 5.10.179

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |    4 +
 1178_linux-5.10.179.patch | 2894 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2898 insertions(+)

diff --git a/0000_README b/0000_README
index e37eac6d..a9e84fc1 100644
--- a/0000_README
+++ b/0000_README
@@ -755,6 +755,10 @@ Patch:  1177_linux-5.10.178.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.178
 
+Patch:  1178_linux-5.10.179.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.179
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1178_linux-5.10.179.patch b/1178_linux-5.10.179.patch
new file mode 100644
index 00000000..2b1b8ed7
--- /dev/null
+++ b/1178_linux-5.10.179.patch
@@ -0,0 +1,2894 @@
+diff --git a/Documentation/kernel-hacking/locking.rst b/Documentation/kernel-hacking/locking.rst
+index 6ed806e6061bb..a6d89efede790 100644
+--- a/Documentation/kernel-hacking/locking.rst
++++ b/Documentation/kernel-hacking/locking.rst
+@@ -1358,7 +1358,7 @@ Mutex API reference
+ Futex API reference
+ ===================
+ 
+-.. kernel-doc:: kernel/futex.c
++.. kernel-doc:: kernel/futex/core.c
+    :internal:
+ 
+ Further reading
+diff --git a/Documentation/powerpc/associativity.rst b/Documentation/powerpc/associativity.rst
+index 07e7dd3d6c87e..4d01c73685619 100644
+--- a/Documentation/powerpc/associativity.rst
++++ b/Documentation/powerpc/associativity.rst
+@@ -1,6 +1,6 @@
+ ============================
+ NUMA resource associativity
+-=============================
++============================
+ 
+ Associativity represents the groupings of the various platform resources into
+ domains of substantially similar mean performance relative to resources outside
+@@ -20,11 +20,11 @@ A value of 1 indicates the usage of Form 1 associativity. For Form 2 associativi
+ bit 2 of byte 5 in the "ibm,architecture-vec-5" property is used.
+ 
+ Form 0
+------
++------
+ Form 0 associativity supports only two NUMA distances (LOCAL and REMOTE).
+ 
+ Form 1
+------
++------
+ With Form 1 a combination of ibm,associativity-reference-points, and ibm,associativity
+ device tree properties are used to determine the NUMA distance between resource groups/domains.
+ 
+@@ -78,17 +78,18 @@ numa-lookup-index-table.
+ 
+ For ex:
+ ibm,numa-lookup-index-table = <3 0 8 40>;
+-ibm,numa-distace-table = <9>, /bits/ 8 < 10  20  80
+-					 20  10 160
+-					 80 160  10>;
+-  | 0    8   40
+---|------------
+-  |
+-0 | 10   20  80
+-  |
+-8 | 20   10  160
+-  |
+-40| 80   160  10
++ibm,numa-distace-table = <9>, /bits/ 8 < 10  20  80 20  10 160 80 160  10>;
++
++::
++
++	  | 0    8   40
++	--|------------
++	  |
++	0 | 10   20  80
++	  |
++	8 | 20   10  160
++	  |
++	40| 80   160  10
+ 
+ A possible "ibm,associativity" property for resources in node 0, 8 and 40
+ 
+diff --git a/Documentation/powerpc/index.rst b/Documentation/powerpc/index.rst
+index 6ec64b0d52574..4663b72caab8b 100644
+--- a/Documentation/powerpc/index.rst
++++ b/Documentation/powerpc/index.rst
+@@ -7,6 +7,7 @@ powerpc
+ .. toctree::
+     :maxdepth: 1
+ 
++    associativity
+     booting
+     bootwrapper
+     cpu_families
+diff --git a/Documentation/translations/it_IT/kernel-hacking/locking.rst b/Documentation/translations/it_IT/kernel-hacking/locking.rst
+index bf1acd6204efa..192ab8e281252 100644
+--- a/Documentation/translations/it_IT/kernel-hacking/locking.rst
++++ b/Documentation/translations/it_IT/kernel-hacking/locking.rst
+@@ -1400,7 +1400,7 @@ Riferimento per l'API dei Mutex
+ Riferimento per l'API dei Futex
+ ===============================
+ 
+-.. kernel-doc:: kernel/futex.c
++.. kernel-doc:: kernel/futex/core.c
+    :internal:
+ 
+ Approfondimenti
+diff --git a/Makefile b/Makefile
+index 3bde04cc7f048..3ddcade4be8fc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 178
++SUBLEVEL = 179
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
+index aab28161b9ae9..250a03a066a17 100644
+--- a/arch/arm/boot/dts/rk3288.dtsi
++++ b/arch/arm/boot/dts/rk3288.dtsi
+@@ -959,7 +959,7 @@
+ 		status = "disabled";
+ 	};
+ 
+-	spdif: sound@ff88b0000 {
++	spdif: sound@ff8b0000 {
+ 		compatible = "rockchip,rk3288-spdif", "rockchip,rk3066-spdif";
+ 		reg = <0x0 0xff8b0000 0x0 0x10000>;
+ 		#sound-dai-cells = <0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index c0defb36592d0..9dd9f7715fbe6 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -1604,10 +1604,9 @@
+ 
+ 			dmc: bus@38000 {
+ 				compatible = "simple-bus";
+-				reg = <0x0 0x38000 0x0 0x400>;
+ 				#address-cells = <2>;
+ 				#size-cells = <2>;
+-				ranges = <0x0 0x0 0x0 0x38000 0x0 0x400>;
++				ranges = <0x0 0x0 0x0 0x38000 0x0 0x2000>;
+ 
+ 				canvas: video-lut@48 {
+ 					compatible = "amlogic,canvas";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi
+index 521eb3a5a12ed..ed6d296bd6644 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi
+@@ -128,7 +128,7 @@
+ 		rohm,reset-snvs-powered;
+ 
+ 		#clock-cells = <0>;
+-		clocks = <&osc_32k 0>;
++		clocks = <&osc_32k>;
+ 		clock-output-names = "clk-32k-out";
+ 
+ 		regulators {
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts b/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
+index cc08dc4eb56a5..68698cdf56c46 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
++++ b/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
+@@ -60,11 +60,11 @@
+ 	perst-gpio = <&tlmm 58 0x1>;
+ };
+ 
+-&pcie_phy0 {
++&pcie_qmp0 {
+ 	status = "okay";
+ };
+ 
+-&pcie_phy1 {
++&pcie_qmp1 {
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S
+index 09fa4705ce8eb..64afe075df089 100644
+--- a/arch/mips/kernel/vmlinux.lds.S
++++ b/arch/mips/kernel/vmlinux.lds.S
+@@ -15,6 +15,8 @@
+ #define EMITS_PT_NOTE
+ #endif
+ 
++#define RUNTIME_DISCARD_EXIT
++
+ #include <asm-generic/vmlinux.lds.h>
+ 
+ #undef mips
+diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c
+index a76dd27fb2e81..3009bb5272524 100644
+--- a/arch/s390/kernel/ptrace.c
++++ b/arch/s390/kernel/ptrace.c
+@@ -500,9 +500,7 @@ long arch_ptrace(struct task_struct *child, long request,
+ 		}
+ 		return 0;
+ 	case PTRACE_GET_LAST_BREAK:
+-		put_user(child->thread.last_break,
+-			 (unsigned long __user *) data);
+-		return 0;
++		return put_user(child->thread.last_break, (unsigned long __user *)data);
+ 	case PTRACE_ENABLE_TE:
+ 		if (!MACHINE_HAS_TE)
+ 			return -EIO;
+@@ -854,9 +852,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
+ 		}
+ 		return 0;
+ 	case PTRACE_GET_LAST_BREAK:
+-		put_user(child->thread.last_break,
+-			 (unsigned int __user *) data);
+-		return 0;
++		return put_user(child->thread.last_break, (unsigned int __user *)data);
+ 	}
+ 	return compat_ptrace_request(child, request, addr, data);
+ }
+diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
+index 95ea17a9d20cb..ebaf329a23688 100644
+--- a/arch/x86/purgatory/Makefile
++++ b/arch/x86/purgatory/Makefile
+@@ -64,8 +64,7 @@ CFLAGS_sha256.o			+= $(PURGATORY_CFLAGS)
+ CFLAGS_REMOVE_string.o		+= $(PURGATORY_CFLAGS_REMOVE)
+ CFLAGS_string.o			+= $(PURGATORY_CFLAGS)
+ 
+-AFLAGS_REMOVE_setup-x86_$(BITS).o	+= -Wa,-gdwarf-2
+-AFLAGS_REMOVE_entry64.o			+= -Wa,-gdwarf-2
++asflags-remove-y		+= -g -Wa,-gdwarf-2
+ 
+ $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
+ 		$(call if_changed,ld)
+diff --git a/drivers/iio/adc/at91-sama5d2_adc.c b/drivers/iio/adc/at91-sama5d2_adc.c
+index 250b78ee16251..b806c1ab9b618 100644
+--- a/drivers/iio/adc/at91-sama5d2_adc.c
++++ b/drivers/iio/adc/at91-sama5d2_adc.c
+@@ -1002,7 +1002,7 @@ static struct iio_trigger *at91_adc_allocate_trigger(struct iio_dev *indio,
+ 	trig = devm_iio_trigger_alloc(&indio->dev, "%s-dev%d-%s", indio->name,
+ 				      indio->id, trigger_name);
+ 	if (!trig)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	trig->dev.parent = indio->dev.parent;
+ 	iio_trigger_set_drvdata(trig, indio);
+diff --git a/drivers/iio/light/tsl2772.c b/drivers/iio/light/tsl2772.c
+index d79205361dfac..ff33ad3714206 100644
+--- a/drivers/iio/light/tsl2772.c
++++ b/drivers/iio/light/tsl2772.c
+@@ -606,6 +606,7 @@ static int tsl2772_read_prox_diodes(struct tsl2772_chip *chip)
+ 			return -EINVAL;
+ 		}
+ 	}
++	chip->settings.prox_diode = prox_diode_mask;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index 65c0081838e3d..9dcdf21c50bdc 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -601,6 +601,14 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 		},
+ 		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
++	{
++		/* Fujitsu Lifebook A574/H */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "FMVA0501PZ"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
+ 	{
+ 		/* Gigabyte M912 */
+ 		.matches = {
+diff --git a/drivers/memstick/core/memstick.c b/drivers/memstick/core/memstick.c
+index 12bc3f5a6cbbd..1c7a9dcfed658 100644
+--- a/drivers/memstick/core/memstick.c
++++ b/drivers/memstick/core/memstick.c
+@@ -412,6 +412,7 @@ static struct memstick_dev *memstick_alloc_card(struct memstick_host *host)
+ 	return card;
+ err_out:
+ 	host->card = old_card;
++	kfree_const(card->dev.kobj.name);
+ 	kfree(card);
+ 	return NULL;
+ }
+@@ -470,8 +471,10 @@ static void memstick_check(struct work_struct *work)
+ 				put_device(&card->dev);
+ 				host->card = NULL;
+ 			}
+-		} else
++		} else {
++			kfree_const(card->dev.kobj.name);
+ 			kfree(card);
++		}
+ 	}
+ 
+ out_power_off:
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index bf2592774165b..8e52905458f9c 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -351,8 +351,6 @@ static void sdhci_am654_write_b(struct sdhci_host *host, u8 val, int reg)
+ 		 */
+ 		case MMC_TIMING_SD_HS:
+ 		case MMC_TIMING_MMC_HS:
+-		case MMC_TIMING_UHS_SDR12:
+-		case MMC_TIMING_UHS_SDR25:
+ 			val &= ~SDHCI_CTRL_HISPD;
+ 		}
+ 	}
+diff --git a/drivers/net/dsa/b53/b53_mmap.c b/drivers/net/dsa/b53/b53_mmap.c
+index c628d0980c0b1..1d52cb3e46d52 100644
+--- a/drivers/net/dsa/b53/b53_mmap.c
++++ b/drivers/net/dsa/b53/b53_mmap.c
+@@ -215,6 +215,18 @@ static int b53_mmap_write64(struct b53_device *dev, u8 page, u8 reg,
+ 	return 0;
+ }
+ 
++static int b53_mmap_phy_read16(struct b53_device *dev, int addr, int reg,
++			       u16 *value)
++{
++	return -EIO;
++}
++
++static int b53_mmap_phy_write16(struct b53_device *dev, int addr, int reg,
++				u16 value)
++{
++	return -EIO;
++}
++
+ static const struct b53_io_ops b53_mmap_ops = {
+ 	.read8 = b53_mmap_read8,
+ 	.read16 = b53_mmap_read16,
+@@ -226,6 +238,8 @@ static const struct b53_io_ops b53_mmap_ops = {
+ 	.write32 = b53_mmap_write32,
+ 	.write48 = b53_mmap_write48,
+ 	.write64 = b53_mmap_write64,
++	.phy_read16 = b53_mmap_phy_read16,
++	.phy_write16 = b53_mmap_phy_write16,
+ };
+ 
+ static int b53_mmap_probe(struct platform_device *pdev)
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index ae0c9aaab48db..b700663a634d2 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -5294,31 +5294,6 @@ static void e1000_watchdog_task(struct work_struct *work)
+ 				ew32(TARC(0), tarc0);
+ 			}
+ 
+-			/* disable TSO for pcie and 10/100 speeds, to avoid
+-			 * some hardware issues
+-			 */
+-			if (!(adapter->flags & FLAG_TSO_FORCE)) {
+-				switch (adapter->link_speed) {
+-				case SPEED_10:
+-				case SPEED_100:
+-					e_info("10/100 speed: disabling TSO\n");
+-					netdev->features &= ~NETIF_F_TSO;
+-					netdev->features &= ~NETIF_F_TSO6;
+-					break;
+-				case SPEED_1000:
+-					netdev->features |= NETIF_F_TSO;
+-					netdev->features |= NETIF_F_TSO6;
+-					break;
+-				default:
+-					/* oops */
+-					break;
+-				}
+-				if (hw->mac.type == e1000_pch_spt) {
+-					netdev->features &= ~NETIF_F_TSO;
+-					netdev->features &= ~NETIF_F_TSO6;
+-				}
+-			}
+-
+ 			/* enable transmits in the hardware, need to do this
+ 			 * after setting TARC(0)
+ 			 */
+@@ -7477,6 +7452,32 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 			    NETIF_F_RXCSUM |
+ 			    NETIF_F_HW_CSUM);
+ 
++	/* disable TSO for pcie and 10/100 speeds to avoid
++	 * some hardware issues and for i219 to fix transfer
++	 * speed being capped at 60%
++	 */
++	if (!(adapter->flags & FLAG_TSO_FORCE)) {
++		switch (adapter->link_speed) {
++		case SPEED_10:
++		case SPEED_100:
++			e_info("10/100 speed: disabling TSO\n");
++			netdev->features &= ~NETIF_F_TSO;
++			netdev->features &= ~NETIF_F_TSO6;
++			break;
++		case SPEED_1000:
++			netdev->features |= NETIF_F_TSO;
++			netdev->features |= NETIF_F_TSO6;
++			break;
++		default:
++			/* oops */
++			break;
++		}
++		if (hw->mac.type == e1000_pch_spt) {
++			netdev->features &= ~NETIF_F_TSO;
++			netdev->features &= ~NETIF_F_TSO6;
++		}
++	}
++
+ 	/* Set user-changeable features (subset of all device features) */
+ 	netdev->hw_features = netdev->features;
+ 	netdev->hw_features |= NETIF_F_RXFCS;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 76481ff7074ba..d23a467d0d209 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -10448,8 +10448,11 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ 					     pf->hw.aq.asq_last_status));
+ 	}
+ 	/* reinit the misc interrupt */
+-	if (pf->flags & I40E_FLAG_MSIX_ENABLED)
++	if (pf->flags & I40E_FLAG_MSIX_ENABLED) {
+ 		ret = i40e_setup_misc_vector(pf);
++		if (ret)
++			goto end_unlock;
++	}
+ 
+ 	/* Add a filter to drop all Flow control frames from any VSI from being
+ 	 * transmitted. By doing so we stop a malicious VF from sending out
+@@ -13458,15 +13461,15 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
+ 		vsi->id = ctxt.vsi_number;
+ 	}
+ 
+-	vsi->active_filters = 0;
+-	clear_bit(__I40E_VSI_OVERFLOW_PROMISC, vsi->state);
+ 	spin_lock_bh(&vsi->mac_filter_hash_lock);
++	vsi->active_filters = 0;
+ 	/* If macvlan filters already exist, force them to get loaded */
+ 	hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist) {
+ 		f->state = I40E_FILTER_NEW;
+ 		f_count++;
+ 	}
+ 	spin_unlock_bh(&vsi->mac_filter_hash_lock);
++	clear_bit(__I40E_VSI_OVERFLOW_PROMISC, vsi->state);
+ 
+ 	if (f_count) {
+ 		vsi->flags |= I40E_VSI_FLAG_FILTER_CHANGED;
+diff --git a/drivers/net/ethernet/mellanox/mlxfw/mlxfw_mfa2_tlv_multi.c b/drivers/net/ethernet/mellanox/mlxfw/mlxfw_mfa2_tlv_multi.c
+index 017d68f1e1232..972c571b41587 100644
+--- a/drivers/net/ethernet/mellanox/mlxfw/mlxfw_mfa2_tlv_multi.c
++++ b/drivers/net/ethernet/mellanox/mlxfw/mlxfw_mfa2_tlv_multi.c
+@@ -31,6 +31,8 @@ mlxfw_mfa2_tlv_next(const struct mlxfw_mfa2_file *mfa2_file,
+ 
+ 	if (tlv->type == MLXFW_MFA2_TLV_MULTI_PART) {
+ 		multi = mlxfw_mfa2_tlv_multi_get(mfa2_file, tlv);
++		if (!multi)
++			return NULL;
+ 		tlv_len = NLA_ALIGN(tlv_len + be16_to_cpu(multi->total_len));
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h b/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h
+index a2c1fbd3e0d13..0225c8f1e5ea2 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h
+@@ -26,7 +26,7 @@
+ #define MLXSW_PCI_CIR_TIMEOUT_MSECS		1000
+ 
+ #define MLXSW_PCI_SW_RESET_TIMEOUT_MSECS	900000
+-#define MLXSW_PCI_SW_RESET_WAIT_MSECS		200
++#define MLXSW_PCI_SW_RESET_WAIT_MSECS		400
+ #define MLXSW_PCI_FW_READY			0xA1844
+ #define MLXSW_PCI_FW_READY_MASK			0xFFFF
+ #define MLXSW_PCI_FW_READY_MAGIC		0x5E
+diff --git a/drivers/net/ethernet/sfc/ef100_netdev.c b/drivers/net/ethernet/sfc/ef100_netdev.c
+index 63a44ee763be7..b9429e8faba1e 100644
+--- a/drivers/net/ethernet/sfc/ef100_netdev.c
++++ b/drivers/net/ethernet/sfc/ef100_netdev.c
+@@ -96,6 +96,8 @@ static int ef100_net_stop(struct net_device *net_dev)
+ 	efx_mcdi_free_vis(efx);
+ 	efx_remove_interrupts(efx);
+ 
++	efx->state = STATE_NET_DOWN;
++
+ 	return 0;
+ }
+ 
+@@ -172,6 +174,8 @@ static int ef100_net_open(struct net_device *net_dev)
+ 		efx_link_status_changed(efx);
+ 	mutex_unlock(&efx->mac_lock);
+ 
++	efx->state = STATE_NET_UP;
++
+ 	return 0;
+ 
+ fail:
+@@ -272,7 +276,7 @@ int ef100_register_netdev(struct efx_nic *efx)
+ 	/* Always start with carrier off; PHY events will detect the link */
+ 	netif_carrier_off(net_dev);
+ 
+-	efx->state = STATE_READY;
++	efx->state = STATE_NET_DOWN;
+ 	rtnl_unlock();
+ 	efx_init_mcdi_logging(efx);
+ 
+diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c
+index c069659c9e2d0..7cf52fcdb3078 100644
+--- a/drivers/net/ethernet/sfc/efx.c
++++ b/drivers/net/ethernet/sfc/efx.c
+@@ -105,14 +105,6 @@ static int efx_xdp(struct net_device *dev, struct netdev_bpf *xdp);
+ static int efx_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **xdpfs,
+ 			u32 flags);
+ 
+-#define EFX_ASSERT_RESET_SERIALISED(efx)		\
+-	do {						\
+-		if ((efx->state == STATE_READY) ||	\
+-		    (efx->state == STATE_RECOVERY) ||	\
+-		    (efx->state == STATE_DISABLED))	\
+-			ASSERT_RTNL();			\
+-	} while (0)
+-
+ /**************************************************************************
+  *
+  * Port handling
+@@ -377,6 +369,8 @@ static int efx_probe_all(struct efx_nic *efx)
+ 	if (rc)
+ 		goto fail5;
+ 
++	efx->state = STATE_NET_DOWN;
++
+ 	return 0;
+ 
+  fail5:
+@@ -543,7 +537,9 @@ int efx_net_open(struct net_device *net_dev)
+ 	efx_start_all(efx);
+ 	if (efx->state == STATE_DISABLED || efx->reset_pending)
+ 		netif_device_detach(efx->net_dev);
+-	efx_selftest_async_start(efx);
++	else
++		efx->state = STATE_NET_UP;
++
+ 	return 0;
+ }
+ 
+@@ -721,8 +717,6 @@ static int efx_register_netdev(struct efx_nic *efx)
+ 	 * already requested.  If so, the NIC is probably hosed so we
+ 	 * abort.
+ 	 */
+-	efx->state = STATE_READY;
+-	smp_mb(); /* ensure we change state before checking reset_pending */
+ 	if (efx->reset_pending) {
+ 		netif_err(efx, probe, efx->net_dev,
+ 			  "aborting probe due to scheduled reset\n");
+@@ -750,6 +744,8 @@ static int efx_register_netdev(struct efx_nic *efx)
+ 
+ 	efx_associate(efx);
+ 
++	efx->state = STATE_NET_DOWN;
++
+ 	rtnl_unlock();
+ 
+ 	rc = device_create_file(&efx->pci_dev->dev, &dev_attr_phy_type);
+@@ -851,7 +847,7 @@ static void efx_pci_remove_main(struct efx_nic *efx)
+ 	/* Flush reset_work. It can no longer be scheduled since we
+ 	 * are not READY.
+ 	 */
+-	BUG_ON(efx->state == STATE_READY);
++	WARN_ON(efx_net_active(efx->state));
+ 	efx_flush_reset_workqueue(efx);
+ 
+ 	efx_disable_interrupts(efx);
+@@ -1196,13 +1192,13 @@ static int efx_pm_freeze(struct device *dev)
+ 
+ 	rtnl_lock();
+ 
+-	if (efx->state != STATE_DISABLED) {
+-		efx->state = STATE_UNINIT;
+-
++	if (efx_net_active(efx->state)) {
+ 		efx_device_detach_sync(efx);
+ 
+ 		efx_stop_all(efx);
+ 		efx_disable_interrupts(efx);
++
++		efx->state = efx_freeze(efx->state);
+ 	}
+ 
+ 	rtnl_unlock();
+@@ -1217,7 +1213,7 @@ static int efx_pm_thaw(struct device *dev)
+ 
+ 	rtnl_lock();
+ 
+-	if (efx->state != STATE_DISABLED) {
++	if (efx_frozen(efx->state)) {
+ 		rc = efx_enable_interrupts(efx);
+ 		if (rc)
+ 			goto fail;
+@@ -1230,7 +1226,7 @@ static int efx_pm_thaw(struct device *dev)
+ 
+ 		efx_device_attach_if_not_resetting(efx);
+ 
+-		efx->state = STATE_READY;
++		efx->state = efx_thaw(efx->state);
+ 
+ 		efx->type->resume_wol(efx);
+ 	}
+diff --git a/drivers/net/ethernet/sfc/efx_common.c b/drivers/net/ethernet/sfc/efx_common.c
+index de797e1ac5a98..476ef1c976375 100644
+--- a/drivers/net/ethernet/sfc/efx_common.c
++++ b/drivers/net/ethernet/sfc/efx_common.c
+@@ -542,6 +542,8 @@ void efx_start_all(struct efx_nic *efx)
+ 	/* Start the hardware monitor if there is one */
+ 	efx_start_monitor(efx);
+ 
++	efx_selftest_async_start(efx);
++
+ 	/* Link state detection is normally event-driven; we have
+ 	 * to poll now because we could have missed a change
+ 	 */
+@@ -897,7 +899,7 @@ static void efx_reset_work(struct work_struct *data)
+ 	 * have changed by now.  Now that we have the RTNL lock,
+ 	 * it cannot change again.
+ 	 */
+-	if (efx->state == STATE_READY)
++	if (efx_net_active(efx->state))
+ 		(void)efx_reset(efx, method);
+ 
+ 	rtnl_unlock();
+@@ -907,7 +909,7 @@ void efx_schedule_reset(struct efx_nic *efx, enum reset_type type)
+ {
+ 	enum reset_type method;
+ 
+-	if (efx->state == STATE_RECOVERY) {
++	if (efx_recovering(efx->state)) {
+ 		netif_dbg(efx, drv, efx->net_dev,
+ 			  "recovering: skip scheduling %s reset\n",
+ 			  RESET_TYPE(type));
+@@ -942,7 +944,7 @@ void efx_schedule_reset(struct efx_nic *efx, enum reset_type type)
+ 	/* If we're not READY then just leave the flags set as the cue
+ 	 * to abort probing or reschedule the reset later.
+ 	 */
+-	if (READ_ONCE(efx->state) != STATE_READY)
++	if (!efx_net_active(READ_ONCE(efx->state)))
+ 		return;
+ 
+ 	/* efx_process_channel() will no longer read events once a
+@@ -1214,7 +1216,7 @@ static pci_ers_result_t efx_io_error_detected(struct pci_dev *pdev,
+ 	rtnl_lock();
+ 
+ 	if (efx->state != STATE_DISABLED) {
+-		efx->state = STATE_RECOVERY;
++		efx->state = efx_recover(efx->state);
+ 		efx->reset_pending = 0;
+ 
+ 		efx_device_detach_sync(efx);
+@@ -1268,7 +1270,7 @@ static void efx_io_resume(struct pci_dev *pdev)
+ 		netif_err(efx, hw, efx->net_dev,
+ 			  "efx_reset failed after PCI error (%d)\n", rc);
+ 	} else {
+-		efx->state = STATE_READY;
++		efx->state = efx_recovered(efx->state);
+ 		netif_dbg(efx, hw, efx->net_dev,
+ 			  "Done resetting and resuming IO after PCI error.\n");
+ 	}
+diff --git a/drivers/net/ethernet/sfc/efx_common.h b/drivers/net/ethernet/sfc/efx_common.h
+index 65513fd0cf6c4..c72e819da8fd3 100644
+--- a/drivers/net/ethernet/sfc/efx_common.h
++++ b/drivers/net/ethernet/sfc/efx_common.h
+@@ -45,9 +45,7 @@ int efx_reconfigure_port(struct efx_nic *efx);
+ 
+ #define EFX_ASSERT_RESET_SERIALISED(efx)		\
+ 	do {						\
+-		if ((efx->state == STATE_READY) ||	\
+-		    (efx->state == STATE_RECOVERY) ||	\
+-		    (efx->state == STATE_DISABLED))	\
++		if (efx->state != STATE_UNINIT)		\
+ 			ASSERT_RTNL();			\
+ 	} while (0)
+ 
+@@ -64,7 +62,7 @@ void efx_port_dummy_op_void(struct efx_nic *efx);
+ 
+ static inline int efx_check_disabled(struct efx_nic *efx)
+ {
+-	if (efx->state == STATE_DISABLED || efx->state == STATE_RECOVERY) {
++	if (efx->state == STATE_DISABLED || efx_recovering(efx->state)) {
+ 		netif_err(efx, drv, efx->net_dev,
+ 			  "device is disabled due to earlier errors\n");
+ 		return -EIO;
+diff --git a/drivers/net/ethernet/sfc/ethtool_common.c b/drivers/net/ethernet/sfc/ethtool_common.c
+index bd552c7dffcb1..3846b76b89720 100644
+--- a/drivers/net/ethernet/sfc/ethtool_common.c
++++ b/drivers/net/ethernet/sfc/ethtool_common.c
+@@ -137,7 +137,7 @@ void efx_ethtool_self_test(struct net_device *net_dev,
+ 	if (!efx_tests)
+ 		goto fail;
+ 
+-	if (efx->state != STATE_READY) {
++	if (!efx_net_active(efx->state)) {
+ 		rc = -EBUSY;
+ 		goto out;
+ 	}
+diff --git a/drivers/net/ethernet/sfc/net_driver.h b/drivers/net/ethernet/sfc/net_driver.h
+index 8aecb4bd2c0d5..39f97929b3ffe 100644
+--- a/drivers/net/ethernet/sfc/net_driver.h
++++ b/drivers/net/ethernet/sfc/net_driver.h
+@@ -627,12 +627,54 @@ enum efx_int_mode {
+ #define EFX_INT_MODE_USE_MSI(x) (((x)->interrupt_mode) <= EFX_INT_MODE_MSI)
+ 
+ enum nic_state {
+-	STATE_UNINIT = 0,	/* device being probed/removed or is frozen */
+-	STATE_READY = 1,	/* hardware ready and netdev registered */
+-	STATE_DISABLED = 2,	/* device disabled due to hardware errors */
+-	STATE_RECOVERY = 3,	/* device recovering from PCI error */
++	STATE_UNINIT = 0,	/* device being probed/removed */
++	STATE_NET_DOWN,		/* hardware probed and netdev registered */
++	STATE_NET_UP,		/* ready for traffic */
++	STATE_DISABLED,		/* device disabled due to hardware errors */
++
++	STATE_RECOVERY = 0x100,/* recovering from PCI error */
++	STATE_FROZEN = 0x200,	/* frozen by power management */
+ };
+ 
++static inline bool efx_net_active(enum nic_state state)
++{
++	return state == STATE_NET_DOWN || state == STATE_NET_UP;
++}
++
++static inline bool efx_frozen(enum nic_state state)
++{
++	return state & STATE_FROZEN;
++}
++
++static inline bool efx_recovering(enum nic_state state)
++{
++	return state & STATE_RECOVERY;
++}
++
++static inline enum nic_state efx_freeze(enum nic_state state)
++{
++	WARN_ON(!efx_net_active(state));
++	return state | STATE_FROZEN;
++}
++
++static inline enum nic_state efx_thaw(enum nic_state state)
++{
++	WARN_ON(!efx_frozen(state));
++	return state & ~STATE_FROZEN;
++}
++
++static inline enum nic_state efx_recover(enum nic_state state)
++{
++	WARN_ON(!efx_net_active(state));
++	return state | STATE_RECOVERY;
++}
++
++static inline enum nic_state efx_recovered(enum nic_state state)
++{
++	WARN_ON(!efx_recovering(state));
++	return state & ~STATE_RECOVERY;
++}
++
+ /* Forward declaration */
+ struct efx_nic;
+ 
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index d533211161366..47c9118cc92a3 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -646,8 +646,13 @@ static struct page *xdp_linearize_page(struct receive_queue *rq,
+ 				       int page_off,
+ 				       unsigned int *len)
+ {
+-	struct page *page = alloc_page(GFP_ATOMIC);
++	int tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
++	struct page *page;
+ 
++	if (page_off + *len + tailroom > PAGE_SIZE)
++		return NULL;
++
++	page = alloc_page(GFP_ATOMIC);
+ 	if (!page)
+ 		return NULL;
+ 
+@@ -655,7 +660,6 @@ static struct page *xdp_linearize_page(struct receive_queue *rq,
+ 	page_off += *len;
+ 
+ 	while (--*num_buf) {
+-		int tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ 		unsigned int buflen;
+ 		void *buf;
+ 		int off;
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index 67614e7166ac8..379ac9ca60b70 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -996,10 +996,8 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
+ 
+ 		/* No crossing a page as the payload mustn't fragment. */
+ 		if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) {
+-			netdev_err(queue->vif->dev,
+-				   "txreq.offset: %u, size: %u, end: %lu\n",
+-				   txreq.offset, txreq.size,
+-				   (unsigned long)(txreq.offset&~XEN_PAGE_MASK) + txreq.size);
++			netdev_err(queue->vif->dev, "Cross page boundary, txreq.offset: %u, size: %u\n",
++				   txreq.offset, txreq.size);
+ 			xenvif_fatal_tx_err(queue->vif);
+ 			break;
+ 		}
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 57df87def8c33..e6147a9220f9a 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1535,22 +1535,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl,
+ 	if (ret)
+ 		goto err_init_connect;
+ 
+-	queue->rd_enabled = true;
+ 	set_bit(NVME_TCP_Q_ALLOCATED, &queue->flags);
+-	nvme_tcp_init_recv_ctx(queue);
+-
+-	write_lock_bh(&queue->sock->sk->sk_callback_lock);
+-	queue->sock->sk->sk_user_data = queue;
+-	queue->state_change = queue->sock->sk->sk_state_change;
+-	queue->data_ready = queue->sock->sk->sk_data_ready;
+-	queue->write_space = queue->sock->sk->sk_write_space;
+-	queue->sock->sk->sk_data_ready = nvme_tcp_data_ready;
+-	queue->sock->sk->sk_state_change = nvme_tcp_state_change;
+-	queue->sock->sk->sk_write_space = nvme_tcp_write_space;
+-#ifdef CONFIG_NET_RX_BUSY_POLL
+-	queue->sock->sk->sk_ll_usec = 1;
+-#endif
+-	write_unlock_bh(&queue->sock->sk->sk_callback_lock);
+ 
+ 	return 0;
+ 
+@@ -1569,7 +1554,7 @@ err_destroy_mutex:
+ 	return ret;
+ }
+ 
+-static void nvme_tcp_restore_sock_calls(struct nvme_tcp_queue *queue)
++static void nvme_tcp_restore_sock_ops(struct nvme_tcp_queue *queue)
+ {
+ 	struct socket *sock = queue->sock;
+ 
+@@ -1584,7 +1569,7 @@ static void nvme_tcp_restore_sock_calls(struct nvme_tcp_queue *queue)
+ static void __nvme_tcp_stop_queue(struct nvme_tcp_queue *queue)
+ {
+ 	kernel_sock_shutdown(queue->sock, SHUT_RDWR);
+-	nvme_tcp_restore_sock_calls(queue);
++	nvme_tcp_restore_sock_ops(queue);
+ 	cancel_work_sync(&queue->io_work);
+ }
+ 
+@@ -1599,21 +1584,42 @@ static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
+ 	mutex_unlock(&queue->queue_lock);
+ }
+ 
++static void nvme_tcp_setup_sock_ops(struct nvme_tcp_queue *queue)
++{
++	write_lock_bh(&queue->sock->sk->sk_callback_lock);
++	queue->sock->sk->sk_user_data = queue;
++	queue->state_change = queue->sock->sk->sk_state_change;
++	queue->data_ready = queue->sock->sk->sk_data_ready;
++	queue->write_space = queue->sock->sk->sk_write_space;
++	queue->sock->sk->sk_data_ready = nvme_tcp_data_ready;
++	queue->sock->sk->sk_state_change = nvme_tcp_state_change;
++	queue->sock->sk->sk_write_space = nvme_tcp_write_space;
++#ifdef CONFIG_NET_RX_BUSY_POLL
++	queue->sock->sk->sk_ll_usec = 1;
++#endif
++	write_unlock_bh(&queue->sock->sk->sk_callback_lock);
++}
++
+ static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx)
+ {
+ 	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
++	struct nvme_tcp_queue *queue = &ctrl->queues[idx];
+ 	int ret;
+ 
++	queue->rd_enabled = true;
++	nvme_tcp_init_recv_ctx(queue);
++	nvme_tcp_setup_sock_ops(queue);
++
+ 	if (idx)
+ 		ret = nvmf_connect_io_queue(nctrl, idx, false);
+ 	else
+ 		ret = nvmf_connect_admin_queue(nctrl);
+ 
+ 	if (!ret) {
+-		set_bit(NVME_TCP_Q_LIVE, &ctrl->queues[idx].flags);
++		set_bit(NVME_TCP_Q_LIVE, &queue->flags);
+ 	} else {
+-		if (test_bit(NVME_TCP_Q_ALLOCATED, &ctrl->queues[idx].flags))
+-			__nvme_tcp_stop_queue(&ctrl->queues[idx]);
++		if (test_bit(NVME_TCP_Q_ALLOCATED, &queue->flags))
++			__nvme_tcp_stop_queue(queue);
+ 		dev_err(nctrl->device,
+ 			"failed to connect queue: %d ret=%d\n", idx, ret);
+ 	}
+diff --git a/drivers/pwm/pwm-hibvt.c b/drivers/pwm/pwm-hibvt.c
+index ad205fdad3722..286e9b119ee5b 100644
+--- a/drivers/pwm/pwm-hibvt.c
++++ b/drivers/pwm/pwm-hibvt.c
+@@ -146,6 +146,7 @@ static void hibvt_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ 
+ 	value = readl(base + PWM_CTRL_ADDR(pwm->hwpwm));
+ 	state->enabled = (PWM_ENABLE_MASK & value);
++	state->polarity = (PWM_POLARITY_MASK & value) ? PWM_POLARITY_INVERSED : PWM_POLARITY_NORMAL;
+ }
+ 
+ static int hibvt_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+diff --git a/drivers/pwm/pwm-iqs620a.c b/drivers/pwm/pwm-iqs620a.c
+index 3e967a12458c6..a2aef006cb71e 100644
+--- a/drivers/pwm/pwm-iqs620a.c
++++ b/drivers/pwm/pwm-iqs620a.c
+@@ -132,6 +132,7 @@ static void iqs620_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	mutex_unlock(&iqs620_pwm->lock);
+ 
+ 	state->period = IQS620_PWM_PERIOD_NS;
++	state->polarity = PWM_POLARITY_NORMAL;
+ }
+ 
+ static int iqs620_pwm_notifier(struct notifier_block *notifier,
+diff --git a/drivers/pwm/pwm-meson.c b/drivers/pwm/pwm-meson.c
+index bd0d7336b8983..237bb8e065933 100644
+--- a/drivers/pwm/pwm-meson.c
++++ b/drivers/pwm/pwm-meson.c
+@@ -168,6 +168,12 @@ static int meson_pwm_calc(struct meson_pwm *meson, struct pwm_device *pwm,
+ 	duty = state->duty_cycle;
+ 	period = state->period;
+ 
++	/*
++	 * Note this is wrong. The result is an output wave that isn't really
++	 * inverted and so is wrongly identified by .get_state as normal.
++	 * Fixing this needs some care however as some machines might rely on
++	 * this.
++	 */
+ 	if (state->polarity == PWM_POLARITY_INVERSED)
+ 		duty = period - duty;
+ 
+@@ -366,6 +372,7 @@ static void meson_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ 		state->period = 0;
+ 		state->duty_cycle = 0;
+ 	}
++	state->polarity = PWM_POLARITY_NORMAL;
+ }
+ 
+ static const struct pwm_ops meson_pwm_ops = {
+diff --git a/drivers/regulator/fan53555.c b/drivers/regulator/fan53555.c
+index aa426183b6a11..1af12074a75ab 100644
+--- a/drivers/regulator/fan53555.c
++++ b/drivers/regulator/fan53555.c
+@@ -8,18 +8,19 @@
+ // Copyright (c) 2012 Marvell Technology Ltd.
+ // Yunfan Zhang <yfzhang@marvell.com>
+ 
++#include <linux/bits.h>
++#include <linux/err.h>
++#include <linux/i2c.h>
+ #include <linux/module.h>
++#include <linux/of_device.h>
+ #include <linux/param.h>
+-#include <linux/err.h>
+ #include <linux/platform_device.h>
++#include <linux/regmap.h>
+ #include <linux/regulator/driver.h>
++#include <linux/regulator/fan53555.h>
+ #include <linux/regulator/machine.h>
+ #include <linux/regulator/of_regulator.h>
+-#include <linux/of_device.h>
+-#include <linux/i2c.h>
+ #include <linux/slab.h>
+-#include <linux/regmap.h>
+-#include <linux/regulator/fan53555.h>
+ 
+ /* Voltage setting */
+ #define FAN53555_VSEL0		0x00
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 84a2e9292fd03..b5a74b237fd21 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -3248,7 +3248,7 @@ fw_crash_buffer_show(struct device *cdev,
+ 
+ 	spin_lock_irqsave(&instance->crashdump_lock, flags);
+ 	buff_offset = instance->fw_crash_buffer_offset;
+-	if (!instance->crash_dump_buf &&
++	if (!instance->crash_dump_buf ||
+ 		!((instance->fw_crash_state == AVAILABLE) ||
+ 		(instance->fw_crash_state == COPYING))) {
+ 		dev_err(&instance->pdev->dev,
+diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
+index 6ad834d61d4c7..d6c25a88cebc9 100644
+--- a/drivers/scsi/scsi.c
++++ b/drivers/scsi/scsi.c
+@@ -317,11 +317,18 @@ static int scsi_vpd_inquiry(struct scsi_device *sdev, unsigned char *buffer,
+ 	if (result)
+ 		return -EIO;
+ 
+-	/* Sanity check that we got the page back that we asked for */
++	/*
++	 * Sanity check that we got the page back that we asked for and that
++	 * the page size is not 0.
++	 */
+ 	if (buffer[1] != page)
+ 		return -EIO;
+ 
+-	return get_unaligned_be16(&buffer[2]) + 4;
++	result = get_unaligned_be16(&buffer[2]);
++	if (!result)
++		return -EIO;
++
++	return result + 4;
+ }
+ 
+ /**
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 61cb50e8fcb77..0758f606f0065 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -206,7 +206,7 @@ out:
+ /*
+  * write the buffer to the inline inode.
+  * If 'create' is set, we don't need to do the extra copy in the xattr
+- * value since it is already handled by ext4_xattr_ibody_inline_set.
++ * value since it is already handled by ext4_xattr_ibody_set.
+  * That saves us one memcpy.
+  */
+ static void ext4_write_inline_data(struct inode *inode, struct ext4_iloc *iloc,
+@@ -288,7 +288,7 @@ static int ext4_create_inline_data(handle_t *handle,
+ 
+ 	BUG_ON(!is.s.not_found);
+ 
+-	error = ext4_xattr_ibody_inline_set(handle, inode, &i, &is);
++	error = ext4_xattr_ibody_set(handle, inode, &i, &is);
+ 	if (error) {
+ 		if (error == -ENOSPC)
+ 			ext4_clear_inode_state(inode,
+@@ -360,7 +360,7 @@ static int ext4_update_inline_data(handle_t *handle, struct inode *inode,
+ 	i.value = value;
+ 	i.value_len = len;
+ 
+-	error = ext4_xattr_ibody_inline_set(handle, inode, &i, &is);
++	error = ext4_xattr_ibody_set(handle, inode, &i, &is);
+ 	if (error)
+ 		goto out;
+ 
+@@ -433,7 +433,7 @@ static int ext4_destroy_inline_data_nolock(handle_t *handle,
+ 	if (error)
+ 		goto out;
+ 
+-	error = ext4_xattr_ibody_inline_set(handle, inode, &i, &is);
++	error = ext4_xattr_ibody_set(handle, inode, &i, &is);
+ 	if (error)
+ 		goto out;
+ 
+@@ -1930,8 +1930,7 @@ int ext4_inline_data_truncate(struct inode *inode, int *has_inline)
+ 			i.value = value;
+ 			i.value_len = i_size > EXT4_MIN_INLINE_DATA_SIZE ?
+ 					i_size - EXT4_MIN_INLINE_DATA_SIZE : 0;
+-			err = ext4_xattr_ibody_inline_set(handle, inode,
+-							  &i, &is);
++			err = ext4_xattr_ibody_set(handle, inode, &i, &is);
+ 			if (err)
+ 				goto out_error;
+ 		}
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index f3da1f2d4cb93..28fa9a64dc4be 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -2215,7 +2215,7 @@ int ext4_xattr_ibody_find(struct inode *inode, struct ext4_xattr_info *i,
+ 	return 0;
+ }
+ 
+-int ext4_xattr_ibody_inline_set(handle_t *handle, struct inode *inode,
++int ext4_xattr_ibody_set(handle_t *handle, struct inode *inode,
+ 				struct ext4_xattr_info *i,
+ 				struct ext4_xattr_ibody_find *is)
+ {
+@@ -2240,30 +2240,6 @@ int ext4_xattr_ibody_inline_set(handle_t *handle, struct inode *inode,
+ 	return 0;
+ }
+ 
+-static int ext4_xattr_ibody_set(handle_t *handle, struct inode *inode,
+-				struct ext4_xattr_info *i,
+-				struct ext4_xattr_ibody_find *is)
+-{
+-	struct ext4_xattr_ibody_header *header;
+-	struct ext4_xattr_search *s = &is->s;
+-	int error;
+-
+-	if (EXT4_I(inode)->i_extra_isize == 0)
+-		return -ENOSPC;
+-	error = ext4_xattr_set_entry(i, s, handle, inode, false /* is_block */);
+-	if (error)
+-		return error;
+-	header = IHDR(inode, ext4_raw_inode(&is->iloc));
+-	if (!IS_LAST_ENTRY(s->first)) {
+-		header->h_magic = cpu_to_le32(EXT4_XATTR_MAGIC);
+-		ext4_set_inode_state(inode, EXT4_STATE_XATTR);
+-	} else {
+-		header->h_magic = cpu_to_le32(0);
+-		ext4_clear_inode_state(inode, EXT4_STATE_XATTR);
+-	}
+-	return 0;
+-}
+-
+ static int ext4_xattr_value_same(struct ext4_xattr_search *s,
+ 				 struct ext4_xattr_info *i)
+ {
+diff --git a/fs/ext4/xattr.h b/fs/ext4/xattr.h
+index b357872ab83b4..e5e36bd11f055 100644
+--- a/fs/ext4/xattr.h
++++ b/fs/ext4/xattr.h
+@@ -200,9 +200,9 @@ extern int ext4_xattr_ibody_find(struct inode *inode, struct ext4_xattr_info *i,
+ extern int ext4_xattr_ibody_get(struct inode *inode, int name_index,
+ 				const char *name,
+ 				void *buffer, size_t buffer_size);
+-extern int ext4_xattr_ibody_inline_set(handle_t *handle, struct inode *inode,
+-				       struct ext4_xattr_info *i,
+-				       struct ext4_xattr_ibody_find *is);
++extern int ext4_xattr_ibody_set(handle_t *handle, struct inode *inode,
++				struct ext4_xattr_info *i,
++				struct ext4_xattr_ibody_find *is);
+ 
+ extern struct mb_cache *ext4_xattr_create_cache(void);
+ extern void ext4_xattr_destroy_cache(struct mb_cache *);
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 80a9e50392a09..e3b9b7d188e67 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -205,7 +205,7 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags)
+ 	if (inode && fuse_is_bad(inode))
+ 		goto invalid;
+ 	else if (time_before64(fuse_dentry_time(entry), get_jiffies_64()) ||
+-		 (flags & (LOOKUP_EXCL | LOOKUP_REVAL))) {
++		 (flags & (LOOKUP_EXCL | LOOKUP_REVAL | LOOKUP_RENAME_TARGET))) {
+ 		struct fuse_entry_out outarg;
+ 		FUSE_ARGS(args);
+ 		struct fuse_forget_link *forget;
+@@ -537,6 +537,7 @@ static int fuse_create_open(struct inode *dir, struct dentry *entry,
+ 	struct fuse_entry_out outentry;
+ 	struct fuse_inode *fi;
+ 	struct fuse_file *ff;
++	bool trunc = flags & O_TRUNC;
+ 
+ 	/* Userspace expects S_IFREG in create mode */
+ 	BUG_ON((mode & S_IFMT) != S_IFREG);
+@@ -604,6 +605,10 @@ static int fuse_create_open(struct inode *dir, struct dentry *entry,
+ 	} else {
+ 		file->private_data = ff;
+ 		fuse_finish_open(inode, file);
++		if (fm->fc->atomic_o_trunc && trunc)
++			truncate_pagecache(inode, 0);
++		else if (!(ff->open_flags & FOPEN_KEEP_CACHE))
++			invalidate_inode_pages2(inode->i_mapping);
+ 	}
+ 	return err;
+ 
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 504389568dac5..13d97547eaf6c 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -206,14 +206,10 @@ void fuse_finish_open(struct inode *inode, struct file *file)
+ 		fi->attr_version = atomic64_inc_return(&fc->attr_version);
+ 		i_size_write(inode, 0);
+ 		spin_unlock(&fi->lock);
+-		truncate_pagecache(inode, 0);
+ 		fuse_invalidate_attr(inode);
+ 		if (fc->writeback_cache)
+ 			file_update_time(file);
+-	} else if (!(ff->open_flags & FOPEN_KEEP_CACHE)) {
+-		invalidate_inode_pages2(inode->i_mapping);
+ 	}
+-
+ 	if ((file->f_mode & FMODE_WRITE) && fc->writeback_cache)
+ 		fuse_link_write_file(file);
+ }
+@@ -236,30 +232,39 @@ int fuse_open_common(struct inode *inode, struct file *file, bool isdir)
+ 	if (err)
+ 		return err;
+ 
+-	if (is_wb_truncate || dax_truncate) {
++	if (is_wb_truncate || dax_truncate)
+ 		inode_lock(inode);
+-		fuse_set_nowrite(inode);
+-	}
+ 
+ 	if (dax_truncate) {
+ 		down_write(&get_fuse_inode(inode)->i_mmap_sem);
+ 		err = fuse_dax_break_layouts(inode, 0, 0);
+ 		if (err)
+-			goto out;
++			goto out_inode_unlock;
+ 	}
+ 
++	if (is_wb_truncate || dax_truncate)
++		fuse_set_nowrite(inode);
++
+ 	err = fuse_do_open(fm, get_node_id(inode), file, isdir);
+ 	if (!err)
+ 		fuse_finish_open(inode, file);
+ 
+-out:
++	if (is_wb_truncate || dax_truncate)
++		fuse_release_nowrite(inode);
++	if (!err) {
++		struct fuse_file *ff = file->private_data;
++
++		if (fc->atomic_o_trunc && (file->f_flags & O_TRUNC))
++			truncate_pagecache(inode, 0);
++		else if (!(ff->open_flags & FOPEN_KEEP_CACHE))
++			invalidate_inode_pages2(inode->i_mapping);
++	}
+ 	if (dax_truncate)
+ 		up_write(&get_fuse_inode(inode)->i_mmap_sem);
+ 
+-	if (is_wb_truncate | dax_truncate) {
+-		fuse_release_nowrite(inode);
++out_inode_unlock:
++	if (is_wb_truncate || dax_truncate)
+ 		inode_unlock(inode);
+-	}
+ 
+ 	return err;
+ }
+@@ -782,7 +787,7 @@ static void fuse_read_update_size(struct inode *inode, loff_t size,
+ 	struct fuse_inode *fi = get_fuse_inode(inode);
+ 
+ 	spin_lock(&fi->lock);
+-	if (attr_ver == fi->attr_version && size < inode->i_size &&
++	if (attr_ver >= fi->attr_version && size < inode->i_size &&
+ 	    !test_bit(FUSE_I_SIZE_UNSTABLE, &fi->state)) {
+ 		fi->attr_version = atomic64_inc_return(&fc->attr_version);
+ 		i_size_write(inode, size);
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index b10cddd723559..ceaa6868386e6 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -556,6 +556,9 @@ struct fuse_conn {
+ 	/** Maxmum number of pages that can be used in a single request */
+ 	unsigned int max_pages;
+ 
++	/** Constrain ->max_pages to this value during feature negotiation */
++	unsigned int max_pages_limit;
++
+ 	/** Input queue */
+ 	struct fuse_iqueue iq;
+ 
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 2ede05df7d069..9ea175ff9c8e6 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -710,6 +710,7 @@ void fuse_conn_init(struct fuse_conn *fc, struct fuse_mount *fm,
+ 	fc->pid_ns = get_pid_ns(task_active_pid_ns(current));
+ 	fc->user_ns = get_user_ns(user_ns);
+ 	fc->max_pages = FUSE_DEFAULT_MAX_PAGES_PER_REQ;
++	fc->max_pages_limit = FUSE_MAX_MAX_PAGES;
+ 
+ 	INIT_LIST_HEAD(&fc->mounts);
+ 	list_add(&fm->fc_entry, &fc->mounts);
+@@ -1056,7 +1057,7 @@ static void process_init_reply(struct fuse_mount *fm, struct fuse_args *args,
+ 				fc->abort_err = 1;
+ 			if (arg->flags & FUSE_MAX_PAGES) {
+ 				fc->max_pages =
+-					min_t(unsigned int, FUSE_MAX_MAX_PAGES,
++					min_t(unsigned int, fc->max_pages_limit,
+ 					max_t(unsigned int, arg->max_pages, 1));
+ 			}
+ 			if (IS_ENABLED(CONFIG_FUSE_DAX) &&
+@@ -1595,7 +1596,7 @@ static void fuse_kill_sb_blk(struct super_block *sb)
+ 	struct fuse_mount *fm = get_fuse_mount_super(sb);
+ 	bool last;
+ 
+-	if (fm) {
++	if (sb->s_root) {
+ 		last = fuse_mount_remove(fm);
+ 		if (last)
+ 			fuse_conn_destroy(fm);
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index b9cfb1165ff42..faadc80485e7f 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -18,6 +18,12 @@
+ #include <linux/uio.h>
+ #include "fuse_i.h"
+ 
++/* Used to help calculate the FUSE connection's max_pages limit for a request's
++ * size. Parts of the struct fuse_req are sliced into scattergather lists in
++ * addition to the pages used, so this can help account for that overhead.
++ */
++#define FUSE_HEADER_OVERHEAD    4
++
+ /* List of virtio-fs device instances and a lock for the list. Also provides
+  * mutual exclusion in device removal and mounting path
+  */
+@@ -1393,7 +1399,7 @@ static void virtio_kill_sb(struct super_block *sb)
+ 	bool last;
+ 
+ 	/* If mount failed, we can still be called without any fc */
+-	if (fm) {
++	if (sb->s_root) {
+ 		last = fuse_mount_remove(fm);
+ 		if (last)
+ 			virtio_fs_conn_destroy(fm);
+@@ -1426,9 +1432,10 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ {
+ 	struct virtio_fs *fs;
+ 	struct super_block *sb;
+-	struct fuse_conn *fc;
++	struct fuse_conn *fc = NULL;
+ 	struct fuse_mount *fm;
+-	int err;
++	unsigned int virtqueue_size;
++	int err = -EIO;
+ 
+ 	/* This gets a reference on virtio_fs object. This ptr gets installed
+ 	 * in fc->iq->priv. Once fuse_conn is going away, it calls ->put()
+@@ -1440,28 +1447,28 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ 		return -EINVAL;
+ 	}
+ 
++	virtqueue_size = virtqueue_get_vring_size(fs->vqs[VQ_REQUEST].vq);
++	if (WARN_ON(virtqueue_size <= FUSE_HEADER_OVERHEAD))
++		goto out_err;
++
++	err = -ENOMEM;
+ 	fc = kzalloc(sizeof(struct fuse_conn), GFP_KERNEL);
+-	if (!fc) {
+-		mutex_lock(&virtio_fs_mutex);
+-		virtio_fs_put(fs);
+-		mutex_unlock(&virtio_fs_mutex);
+-		return -ENOMEM;
+-	}
++	if (!fc)
++		goto out_err;
+ 
+ 	fm = kzalloc(sizeof(struct fuse_mount), GFP_KERNEL);
+-	if (!fm) {
+-		mutex_lock(&virtio_fs_mutex);
+-		virtio_fs_put(fs);
+-		mutex_unlock(&virtio_fs_mutex);
+-		kfree(fc);
+-		return -ENOMEM;
+-	}
++	if (!fm)
++		goto out_err;
+ 
+ 	fuse_conn_init(fc, fm, fsc->user_ns, &virtio_fs_fiq_ops, fs);
+ 	fc->release = fuse_free_conn;
+ 	fc->delete_stale = true;
+ 	fc->auto_submounts = true;
+ 
++	/* Tell FUSE to split requests that exceed the virtqueue's size */
++	fc->max_pages_limit = min_t(unsigned int, fc->max_pages_limit,
++				    virtqueue_size - FUSE_HEADER_OVERHEAD);
++
+ 	fsc->s_fs_info = fm;
+ 	sb = sget_fc(fsc, virtio_fs_test_super, virtio_fs_set_super);
+ 	fuse_mount_put(fm);
+@@ -1483,6 +1490,13 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ 	WARN_ON(fsc->root);
+ 	fsc->root = dget(sb->s_root);
+ 	return 0;
++
++out_err:
++	kfree(fc);
++	mutex_lock(&virtio_fs_mutex);
++	virtio_fs_put(fs);
++	mutex_unlock(&virtio_fs_mutex);
++	return err;
+ }
+ 
+ static const struct fs_context_operations virtio_fs_context_ops = {
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 5e835bbf1ffb8..fff2cdc69e5ee 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -435,6 +435,23 @@ static int nilfs_segctor_reset_segment_buffer(struct nilfs_sc_info *sci)
+ 	return 0;
+ }
+ 
++/**
++ * nilfs_segctor_zeropad_segsum - zero pad the rest of the segment summary area
++ * @sci: segment constructor object
++ *
++ * nilfs_segctor_zeropad_segsum() zero-fills unallocated space at the end of
++ * the current segment summary block.
++ */
++static void nilfs_segctor_zeropad_segsum(struct nilfs_sc_info *sci)
++{
++	struct nilfs_segsum_pointer *ssp;
++
++	ssp = sci->sc_blk_cnt > 0 ? &sci->sc_binfo_ptr : &sci->sc_finfo_ptr;
++	if (ssp->offset < ssp->bh->b_size)
++		memset(ssp->bh->b_data + ssp->offset, 0,
++		       ssp->bh->b_size - ssp->offset);
++}
++
+ static int nilfs_segctor_feed_segment(struct nilfs_sc_info *sci)
+ {
+ 	sci->sc_nblk_this_inc += sci->sc_curseg->sb_sum.nblocks;
+@@ -443,6 +460,7 @@ static int nilfs_segctor_feed_segment(struct nilfs_sc_info *sci)
+ 				* The current segment is filled up
+ 				* (internal code)
+ 				*/
++	nilfs_segctor_zeropad_segsum(sci);
+ 	sci->sc_curseg = NILFS_NEXT_SEGBUF(sci->sc_curseg);
+ 	return nilfs_segctor_reset_segment_buffer(sci);
+ }
+@@ -547,6 +565,7 @@ static int nilfs_segctor_add_file_block(struct nilfs_sc_info *sci,
+ 		goto retry;
+ 	}
+ 	if (unlikely(required)) {
++		nilfs_segctor_zeropad_segsum(sci);
+ 		err = nilfs_segbuf_extend_segsum(segbuf);
+ 		if (unlikely(err))
+ 			goto failed;
+@@ -1536,6 +1555,7 @@ static int nilfs_segctor_collect(struct nilfs_sc_info *sci,
+ 		nadd = min_t(int, nadd << 1, SC_MAX_SEGDELTA);
+ 		sci->sc_stage = prev_stage;
+ 	}
++	nilfs_segctor_zeropad_segsum(sci);
+ 	nilfs_segctor_truncate_segments(sci, sci->sc_curseg, nilfs->ns_sufile);
+ 	return 0;
+ 
+diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
+index 953de843d9c38..e341d6531e687 100644
+--- a/fs/xfs/xfs_aops.c
++++ b/fs/xfs/xfs_aops.c
+@@ -39,33 +39,6 @@ static inline bool xfs_ioend_is_append(struct iomap_ioend *ioend)
+ 		XFS_I(ioend->io_inode)->i_d.di_size;
+ }
+ 
+-STATIC int
+-xfs_setfilesize_trans_alloc(
+-	struct iomap_ioend	*ioend)
+-{
+-	struct xfs_mount	*mp = XFS_I(ioend->io_inode)->i_mount;
+-	struct xfs_trans	*tp;
+-	int			error;
+-
+-	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_fsyncts, 0, 0, 0, &tp);
+-	if (error)
+-		return error;
+-
+-	ioend->io_private = tp;
+-
+-	/*
+-	 * We may pass freeze protection with a transaction.  So tell lockdep
+-	 * we released it.
+-	 */
+-	__sb_writers_release(ioend->io_inode->i_sb, SB_FREEZE_FS);
+-	/*
+-	 * We hand off the transaction to the completion thread now, so
+-	 * clear the flag here.
+-	 */
+-	xfs_trans_clear_context(tp);
+-	return 0;
+-}
+-
+ /*
+  * Update on-disk file size now that data has been written to disk.
+  */
+@@ -191,12 +164,10 @@ xfs_end_ioend(
+ 		error = xfs_reflink_end_cow(ip, offset, size);
+ 	else if (ioend->io_type == IOMAP_UNWRITTEN)
+ 		error = xfs_iomap_write_unwritten(ip, offset, size, false);
+-	else
+-		ASSERT(!xfs_ioend_is_append(ioend) || ioend->io_private);
+ 
++	if (!error && xfs_ioend_is_append(ioend))
++		error = xfs_setfilesize(ip, ioend->io_offset, ioend->io_size);
+ done:
+-	if (ioend->io_private)
+-		error = xfs_setfilesize_ioend(ioend, error);
+ 	iomap_finish_ioends(ioend, error);
+ 	memalloc_nofs_restore(nofs_flag);
+ }
+@@ -246,7 +217,7 @@ xfs_end_io(
+ 
+ static inline bool xfs_ioend_needs_workqueue(struct iomap_ioend *ioend)
+ {
+-	return ioend->io_private ||
++	return xfs_ioend_is_append(ioend) ||
+ 		ioend->io_type == IOMAP_UNWRITTEN ||
+ 		(ioend->io_flags & IOMAP_F_SHARED);
+ }
+@@ -259,8 +230,6 @@ xfs_end_bio(
+ 	struct xfs_inode	*ip = XFS_I(ioend->io_inode);
+ 	unsigned long		flags;
+ 
+-	ASSERT(xfs_ioend_needs_workqueue(ioend));
+-
+ 	spin_lock_irqsave(&ip->i_ioend_lock, flags);
+ 	if (list_empty(&ip->i_ioend_list))
+ 		WARN_ON_ONCE(!queue_work(ip->i_mount->m_unwritten_workqueue,
+@@ -510,14 +479,6 @@ xfs_prepare_ioend(
+ 				ioend->io_offset, ioend->io_size);
+ 	}
+ 
+-	/* Reserve log space if we might write beyond the on-disk inode size. */
+-	if (!status &&
+-	    ((ioend->io_flags & IOMAP_F_SHARED) ||
+-	     ioend->io_type != IOMAP_UNWRITTEN) &&
+-	    xfs_ioend_is_append(ioend) &&
+-	    !ioend->io_private)
+-		status = xfs_setfilesize_trans_alloc(ioend);
+-
+ 	memalloc_nofs_restore(nofs_flag);
+ 
+ 	if (xfs_ioend_needs_workqueue(ioend))
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 39636fe7e8f0a..a210f19958621 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -258,6 +258,7 @@ struct nf_bridge_info {
+ 	u8			pkt_otherhost:1;
+ 	u8			in_prerouting:1;
+ 	u8			bridged_dnat:1;
++	u8			sabotage_in_done:1;
+ 	__u16			frag_max_size;
+ 	struct net_device	*physindev;
+ 
+@@ -4276,7 +4277,7 @@ static inline void nf_reset_ct(struct sk_buff *skb)
+ 
+ static inline void nf_reset_trace(struct sk_buff *skb)
+ {
+-#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || defined(CONFIG_NF_TABLES)
++#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || IS_ENABLED(CONFIG_NF_TABLES)
+ 	skb->nf_trace = 0;
+ #endif
+ }
+@@ -4296,7 +4297,7 @@ static inline void __nf_copy(struct sk_buff *dst, const struct sk_buff *src,
+ 	dst->_nfct = src->_nfct;
+ 	nf_conntrack_get(skb_nfct(src));
+ #endif
+-#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || defined(CONFIG_NF_TABLES)
++#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || IS_ENABLED(CONFIG_NF_TABLES)
+ 	if (copy)
+ 		dst->nf_trace = src->nf_trace;
+ #endif
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index 89ce8a50f2363..8879c0ab0b89d 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -1109,6 +1109,8 @@ void ipv6_icmp_error(struct sock *sk, struct sk_buff *skb, int err, __be16 port,
+ void ipv6_local_error(struct sock *sk, int err, struct flowi6 *fl6, u32 info);
+ void ipv6_local_rxpmtu(struct sock *sk, struct flowi6 *fl6, u32 mtu);
+ 
++void inet6_cleanup_sock(struct sock *sk);
++void inet6_sock_destruct(struct sock *sk);
+ int inet6_release(struct socket *sock);
+ int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len);
+ int inet6_getname(struct socket *sock, struct sockaddr *uaddr,
+diff --git a/include/net/udp.h b/include/net/udp.h
+index 388e68c7bca05..e2550a4547a70 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -268,7 +268,7 @@ static inline bool udp_sk_bound_dev_eq(struct net *net, int bound_dev_if,
+ }
+ 
+ /* net/ipv4/udp.c */
+-void udp_destruct_sock(struct sock *sk);
++void udp_destruct_common(struct sock *sk);
+ void skb_consume_udp(struct sock *sk, struct sk_buff *skb, int len);
+ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb);
+ void udp_skb_destructor(struct sock *sk, struct sk_buff *skb);
+diff --git a/include/net/udplite.h b/include/net/udplite.h
+index 9185e45b997ff..c59ba86668af0 100644
+--- a/include/net/udplite.h
++++ b/include/net/udplite.h
+@@ -24,14 +24,6 @@ static __inline__ int udplite_getfrag(void *from, char *to, int  offset,
+ 	return copy_from_iter_full(to, len, &msg->msg_iter) ? 0 : -EFAULT;
+ }
+ 
+-/* Designate sk as UDP-Lite socket */
+-static inline int udplite_sk_init(struct sock *sk)
+-{
+-	udp_init_sock(sk);
+-	udp_sk(sk)->pcflag = UDPLITE_BIT;
+-	return 0;
+-}
+-
+ /*
+  * 	Checksumming routines
+  */
+diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
+index df293bc7f03b8..e8cd19e91de11 100644
+--- a/include/trace/events/f2fs.h
++++ b/include/trace/events/f2fs.h
+@@ -513,7 +513,7 @@ TRACE_EVENT(f2fs_truncate_partial_nodes,
+ 	TP_STRUCT__entry(
+ 		__field(dev_t,	dev)
+ 		__field(ino_t,	ino)
+-		__field(nid_t,	nid[3])
++		__array(nid_t,	nid, 3)
+ 		__field(int,	depth)
+ 		__field(int,	err)
+ 	),
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 9e5f1ebe67d7f..5a96a9dd51e4c 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1931,6 +1931,21 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 			}
+ 		} else if (opcode == BPF_EXIT) {
+ 			return -ENOTSUPP;
++		} else if (BPF_SRC(insn->code) == BPF_X) {
++			if (!(*reg_mask & (dreg | sreg)))
++				return 0;
++			/* dreg <cond> sreg
++			 * Both dreg and sreg need precision before
++			 * this insn. If only sreg was marked precise
++			 * before it would be equally necessary to
++			 * propagate it to dreg.
++			 */
++			*reg_mask |= (sreg | dreg);
++			 /* else dreg <cond> K
++			  * Only dreg still needs precision before
++			  * this insn, so for the K-based conditional
++			  * there is nothing new to be marked.
++			  */
+ 		}
+ 	} else if (class == BPF_LD) {
+ 		if (!(*reg_mask & dreg))
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index b4bd02d68185e..9d6dd14cfd261 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -980,7 +980,7 @@ static inline void uclamp_idle_reset(struct rq *rq, enum uclamp_id clamp_id,
+ 	if (!(rq->uclamp_flags & UCLAMP_FLAG_IDLE))
+ 		return;
+ 
+-	WRITE_ONCE(rq->uclamp[clamp_id].value, clamp_value);
++	uclamp_rq_set(rq, clamp_id, clamp_value);
+ }
+ 
+ static inline
+@@ -1158,8 +1158,8 @@ static inline void uclamp_rq_inc_id(struct rq *rq, struct task_struct *p,
+ 	if (bucket->tasks == 1 || uc_se->value > bucket->value)
+ 		bucket->value = uc_se->value;
+ 
+-	if (uc_se->value > READ_ONCE(uc_rq->value))
+-		WRITE_ONCE(uc_rq->value, uc_se->value);
++	if (uc_se->value > uclamp_rq_get(rq, clamp_id))
++		uclamp_rq_set(rq, clamp_id, uc_se->value);
+ }
+ 
+ /*
+@@ -1225,7 +1225,7 @@ static inline void uclamp_rq_dec_id(struct rq *rq, struct task_struct *p,
+ 	if (likely(bucket->tasks))
+ 		return;
+ 
+-	rq_clamp = READ_ONCE(uc_rq->value);
++	rq_clamp = uclamp_rq_get(rq, clamp_id);
+ 	/*
+ 	 * Defensive programming: this should never happen. If it happens,
+ 	 * e.g. due to future modification, warn and fixup the expected value.
+@@ -1233,7 +1233,7 @@ static inline void uclamp_rq_dec_id(struct rq *rq, struct task_struct *p,
+ 	SCHED_WARN_ON(bucket->value > rq_clamp);
+ 	if (bucket->value >= rq_clamp) {
+ 		bkt_clamp = uclamp_rq_max_value(rq, clamp_id, uc_se->value);
+-		WRITE_ONCE(uc_rq->value, bkt_clamp);
++		uclamp_rq_set(rq, clamp_id, bkt_clamp);
+ 	}
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 57a58bc48021a..45c1d03aff735 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3928,14 +3928,16 @@ static inline unsigned long task_util_est(struct task_struct *p)
+ }
+ 
+ #ifdef CONFIG_UCLAMP_TASK
+-static inline unsigned long uclamp_task_util(struct task_struct *p)
++static inline unsigned long uclamp_task_util(struct task_struct *p,
++					     unsigned long uclamp_min,
++					     unsigned long uclamp_max)
+ {
+-	return clamp(task_util_est(p),
+-		     uclamp_eff_value(p, UCLAMP_MIN),
+-		     uclamp_eff_value(p, UCLAMP_MAX));
++	return clamp(task_util_est(p), uclamp_min, uclamp_max);
+ }
+ #else
+-static inline unsigned long uclamp_task_util(struct task_struct *p)
++static inline unsigned long uclamp_task_util(struct task_struct *p,
++					     unsigned long uclamp_min,
++					     unsigned long uclamp_max)
+ {
+ 	return task_util_est(p);
+ }
+@@ -4111,12 +4113,16 @@ static inline int util_fits_cpu(unsigned long util,
+ 	 * For uclamp_max, we can tolerate a drop in performance level as the
+ 	 * goal is to cap the task. So it's okay if it's getting less.
+ 	 *
+-	 * In case of capacity inversion, which is not handled yet, we should
+-	 * honour the inverted capacity for both uclamp_min and uclamp_max all
+-	 * the time.
++	 * In case of capacity inversion we should honour the inverted capacity
++	 * for both uclamp_min and uclamp_max all the time.
+ 	 */
+-	capacity_orig = capacity_orig_of(cpu);
+-	capacity_orig_thermal = capacity_orig - arch_scale_thermal_pressure(cpu);
++	capacity_orig = cpu_in_capacity_inversion(cpu);
++	if (capacity_orig) {
++		capacity_orig_thermal = capacity_orig;
++	} else {
++		capacity_orig = capacity_orig_of(cpu);
++		capacity_orig_thermal = capacity_orig - arch_scale_thermal_pressure(cpu);
++	}
+ 
+ 	/*
+ 	 * We want to force a task to fit a cpu as implied by uclamp_max.
+@@ -4197,10 +4203,12 @@ static inline int util_fits_cpu(unsigned long util,
+ 	return fits;
+ }
+ 
+-static inline int task_fits_capacity(struct task_struct *p,
+-				     unsigned long capacity)
++static inline int task_fits_cpu(struct task_struct *p, int cpu)
+ {
+-	return fits_capacity(uclamp_task_util(p), capacity);
++	unsigned long uclamp_min = uclamp_eff_value(p, UCLAMP_MIN);
++	unsigned long uclamp_max = uclamp_eff_value(p, UCLAMP_MAX);
++	unsigned long util = task_util_est(p);
++	return util_fits_cpu(util, uclamp_min, uclamp_max, cpu);
+ }
+ 
+ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
+@@ -4213,7 +4221,7 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
+ 		return;
+ 	}
+ 
+-	if (task_fits_capacity(p, capacity_of(cpu_of(rq)))) {
++	if (task_fits_cpu(p, cpu_of(rq))) {
+ 		rq->misfit_task_load = 0;
+ 		return;
+ 	}
+@@ -5655,7 +5663,10 @@ static inline unsigned long cpu_util(int cpu);
+ 
+ static inline bool cpu_overutilized(int cpu)
+ {
+-	return !fits_capacity(cpu_util(cpu), capacity_of(cpu));
++	unsigned long rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
++	unsigned long rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);
++
++	return !util_fits_cpu(cpu_util(cpu), rq_util_min, rq_util_max, cpu);
+ }
+ 
+ static inline void update_overutilized_status(struct rq *rq)
+@@ -6392,21 +6403,23 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
+ static int
+ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
+ {
+-	unsigned long task_util, best_cap = 0;
++	unsigned long task_util, util_min, util_max, best_cap = 0;
+ 	int cpu, best_cpu = -1;
+ 	struct cpumask *cpus;
+ 
+ 	cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
+ 	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+ 
+-	task_util = uclamp_task_util(p);
++	task_util = task_util_est(p);
++	util_min = uclamp_eff_value(p, UCLAMP_MIN);
++	util_max = uclamp_eff_value(p, UCLAMP_MAX);
+ 
+ 	for_each_cpu_wrap(cpu, cpus, target) {
+ 		unsigned long cpu_cap = capacity_of(cpu);
+ 
+ 		if (!available_idle_cpu(cpu) && !sched_idle_cpu(cpu))
+ 			continue;
+-		if (fits_capacity(task_util, cpu_cap))
++		if (util_fits_cpu(task_util, util_min, util_max, cpu))
+ 			return cpu;
+ 
+ 		if (cpu_cap > best_cap) {
+@@ -6418,10 +6431,13 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
+ 	return best_cpu;
+ }
+ 
+-static inline bool asym_fits_capacity(unsigned long task_util, int cpu)
++static inline bool asym_fits_cpu(unsigned long util,
++				 unsigned long util_min,
++				 unsigned long util_max,
++				 int cpu)
+ {
+ 	if (static_branch_unlikely(&sched_asym_cpucapacity))
+-		return fits_capacity(task_util, capacity_of(cpu));
++		return util_fits_cpu(util, util_min, util_max, cpu);
+ 
+ 	return true;
+ }
+@@ -6432,7 +6448,7 @@ static inline bool asym_fits_capacity(unsigned long task_util, int cpu)
+ static int select_idle_sibling(struct task_struct *p, int prev, int target)
+ {
+ 	struct sched_domain *sd;
+-	unsigned long task_util;
++	unsigned long task_util, util_min, util_max;
+ 	int i, recent_used_cpu;
+ 
+ 	/*
+@@ -6441,11 +6457,13 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
+ 	 */
+ 	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+ 		sync_entity_load_avg(&p->se);
+-		task_util = uclamp_task_util(p);
++		task_util = task_util_est(p);
++		util_min = uclamp_eff_value(p, UCLAMP_MIN);
++		util_max = uclamp_eff_value(p, UCLAMP_MAX);
+ 	}
+ 
+ 	if ((available_idle_cpu(target) || sched_idle_cpu(target)) &&
+-	    asym_fits_capacity(task_util, target))
++	    asym_fits_cpu(task_util, util_min, util_max, target))
+ 		return target;
+ 
+ 	/*
+@@ -6453,7 +6471,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
+ 	 */
+ 	if (prev != target && cpus_share_cache(prev, target) &&
+ 	    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
+-	    asym_fits_capacity(task_util, prev))
++	    asym_fits_cpu(task_util, util_min, util_max, prev))
+ 		return prev;
+ 
+ 	/*
+@@ -6468,7 +6486,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
+ 	    in_task() &&
+ 	    prev == smp_processor_id() &&
+ 	    this_rq()->nr_running <= 1 &&
+-	    asym_fits_capacity(task_util, prev)) {
++	    asym_fits_cpu(task_util, util_min, util_max, prev)) {
+ 		return prev;
+ 	}
+ 
+@@ -6479,7 +6497,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
+ 	    cpus_share_cache(recent_used_cpu, target) &&
+ 	    (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
+ 	    cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
+-	    asym_fits_capacity(task_util, recent_used_cpu)) {
++	    asym_fits_cpu(task_util, util_min, util_max, recent_used_cpu)) {
+ 		/*
+ 		 * Replace recent_used_cpu with prev as it is a potential
+ 		 * candidate for the next wake:
+@@ -6800,6 +6818,8 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
+ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
+ {
+ 	unsigned long prev_delta = ULONG_MAX, best_delta = ULONG_MAX;
++	unsigned long p_util_min = uclamp_is_used() ? uclamp_eff_value(p, UCLAMP_MIN) : 0;
++	unsigned long p_util_max = uclamp_is_used() ? uclamp_eff_value(p, UCLAMP_MAX) : 1024;
+ 	struct root_domain *rd = cpu_rq(smp_processor_id())->rd;
+ 	unsigned long cpu_cap, util, base_energy = 0;
+ 	int cpu, best_energy_cpu = prev_cpu;
+@@ -6822,11 +6842,13 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
+ 		goto fail;
+ 
+ 	sync_entity_load_avg(&p->se);
+-	if (!task_util_est(p))
++	if (!uclamp_task_util(p, p_util_min, p_util_max))
+ 		goto unlock;
+ 
+ 	for (; pd; pd = pd->next) {
++		unsigned long util_min = p_util_min, util_max = p_util_max;
+ 		unsigned long cur_delta, spare_cap, max_spare_cap = 0;
++		unsigned long rq_util_min, rq_util_max;
+ 		unsigned long base_energy_pd;
+ 		int max_spare_cap_cpu = -1;
+ 
+@@ -6835,6 +6857,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
+ 		base_energy += base_energy_pd;
+ 
+ 		for_each_cpu_and(cpu, perf_domain_span(pd), sched_domain_span(sd)) {
++			struct rq *rq = cpu_rq(cpu);
++
+ 			if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+ 				continue;
+ 
+@@ -6850,8 +6874,21 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
+ 			 * much capacity we can get out of the CPU; this is
+ 			 * aligned with schedutil_cpu_util().
+ 			 */
+-			util = uclamp_rq_util_with(cpu_rq(cpu), util, p);
+-			if (!fits_capacity(util, cpu_cap))
++			if (uclamp_is_used() && !uclamp_rq_is_idle(rq)) {
++				/*
++				 * Open code uclamp_rq_util_with() except for
++				 * the clamp() part. Ie: apply max aggregation
++				 * only. util_fits_cpu() logic requires to
++				 * operate on non clamped util but must use the
++				 * max-aggregated uclamp_{min, max}.
++				 */
++				rq_util_min = uclamp_rq_get(rq, UCLAMP_MIN);
++				rq_util_max = uclamp_rq_get(rq, UCLAMP_MAX);
++
++				util_min = max(rq_util_min, p_util_min);
++				util_max = max(rq_util_max, p_util_max);
++			}
++			if (!util_fits_cpu(util, util_min, util_max, cpu))
+ 				continue;
+ 
+ 			/* Always use prev_cpu as a candidate. */
+@@ -7942,7 +7979,7 @@ static int detach_tasks(struct lb_env *env)
+ 
+ 		case migrate_misfit:
+ 			/* This is not a misfit task */
+-			if (task_fits_capacity(p, capacity_of(env->src_cpu)))
++			if (task_fits_cpu(p, env->src_cpu))
+ 				goto next;
+ 
+ 			env->imbalance = 0;
+@@ -8340,16 +8377,82 @@ static unsigned long scale_rt_capacity(int cpu)
+ 
+ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
+ {
++	unsigned long capacity_orig = arch_scale_cpu_capacity(cpu);
+ 	unsigned long capacity = scale_rt_capacity(cpu);
+ 	struct sched_group *sdg = sd->groups;
++	struct rq *rq = cpu_rq(cpu);
+ 
+-	cpu_rq(cpu)->cpu_capacity_orig = arch_scale_cpu_capacity(cpu);
++	rq->cpu_capacity_orig = capacity_orig;
+ 
+ 	if (!capacity)
+ 		capacity = 1;
+ 
+-	cpu_rq(cpu)->cpu_capacity = capacity;
+-	trace_sched_cpu_capacity_tp(cpu_rq(cpu));
++	rq->cpu_capacity = capacity;
++
++	/*
++	 * Detect if the performance domain is in capacity inversion state.
++	 *
++	 * Capacity inversion happens when another perf domain with equal or
++	 * lower capacity_orig_of() ends up having higher capacity than this
++	 * domain after subtracting thermal pressure.
++	 *
++	 * We only take into account thermal pressure in this detection as it's
++	 * the only metric that actually results in *real* reduction of
++	 * capacity due to performance points (OPPs) being dropped/become
++	 * unreachable due to thermal throttling.
++	 *
++	 * We assume:
++	 *   * That all cpus in a perf domain have the same capacity_orig
++	 *     (same uArch).
++	 *   * Thermal pressure will impact all cpus in this perf domain
++	 *     equally.
++	 */
++	if (sched_energy_enabled()) {
++		unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
++		struct perf_domain *pd;
++
++		rcu_read_lock();
++
++		pd = rcu_dereference(rq->rd->pd);
++		rq->cpu_capacity_inverted = 0;
++
++		for (; pd; pd = pd->next) {
++			struct cpumask *pd_span = perf_domain_span(pd);
++			unsigned long pd_cap_orig, pd_cap;
++
++			/* We can't be inverted against our own pd */
++			if (cpumask_test_cpu(cpu_of(rq), pd_span))
++				continue;
++
++			cpu = cpumask_any(pd_span);
++			pd_cap_orig = arch_scale_cpu_capacity(cpu);
++
++			if (capacity_orig < pd_cap_orig)
++				continue;
++
++			/*
++			 * handle the case of multiple perf domains have the
++			 * same capacity_orig but one of them is under higher
++			 * thermal pressure. We record it as capacity
++			 * inversion.
++			 */
++			if (capacity_orig == pd_cap_orig) {
++				pd_cap = pd_cap_orig - thermal_load_avg(cpu_rq(cpu));
++
++				if (pd_cap > inv_cap) {
++					rq->cpu_capacity_inverted = inv_cap;
++					break;
++				}
++			} else if (pd_cap_orig > inv_cap) {
++				rq->cpu_capacity_inverted = inv_cap;
++				break;
++			}
++		}
++
++		rcu_read_unlock();
++	}
++
++	trace_sched_cpu_capacity_tp(rq);
+ 
+ 	sdg->sgc->capacity = capacity;
+ 	sdg->sgc->min_capacity = capacity;
+@@ -8884,6 +8987,10 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
+ 
+ 	memset(sgs, 0, sizeof(*sgs));
+ 
++	/* Assume that task can't fit any CPU of the group */
++	if (sd->flags & SD_ASYM_CPUCAPACITY)
++		sgs->group_misfit_task_load = 1;
++
+ 	for_each_cpu(i, sched_group_span(group)) {
+ 		struct rq *rq = cpu_rq(i);
+ 		unsigned int local;
+@@ -8903,12 +9010,12 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
+ 		if (!nr_running && idle_cpu_without(i, p))
+ 			sgs->idle_cpus++;
+ 
+-	}
++		/* Check if task fits in the CPU */
++		if (sd->flags & SD_ASYM_CPUCAPACITY &&
++		    sgs->group_misfit_task_load &&
++		    task_fits_cpu(p, i))
++			sgs->group_misfit_task_load = 0;
+ 
+-	/* Check if task fits in the group */
+-	if (sd->flags & SD_ASYM_CPUCAPACITY &&
+-	    !task_fits_capacity(p, group->sgc->max_capacity)) {
+-		sgs->group_misfit_task_load = 1;
+ 	}
+ 
+ 	sgs->group_capacity = group->sgc->capacity;
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 12c65628801c6..852e856eed488 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -973,6 +973,7 @@ struct rq {
+ 
+ 	unsigned long		cpu_capacity;
+ 	unsigned long		cpu_capacity_orig;
++	unsigned long		cpu_capacity_inverted;
+ 
+ 	struct callback_head	*balance_callback;
+ 
+@@ -2402,6 +2403,23 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
+ #ifdef CONFIG_UCLAMP_TASK
+ unsigned long uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id);
+ 
++static inline unsigned long uclamp_rq_get(struct rq *rq,
++					  enum uclamp_id clamp_id)
++{
++	return READ_ONCE(rq->uclamp[clamp_id].value);
++}
++
++static inline void uclamp_rq_set(struct rq *rq, enum uclamp_id clamp_id,
++				 unsigned int value)
++{
++	WRITE_ONCE(rq->uclamp[clamp_id].value, value);
++}
++
++static inline bool uclamp_rq_is_idle(struct rq *rq)
++{
++	return rq->uclamp_flags & UCLAMP_FLAG_IDLE;
++}
++
+ /**
+  * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values.
+  * @rq:		The rq to clamp against. Must not be NULL.
+@@ -2437,12 +2455,12 @@ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+ 		 * Ignore last runnable task's max clamp, as this task will
+ 		 * reset it. Similarly, no need to read the rq's min clamp.
+ 		 */
+-		if (rq->uclamp_flags & UCLAMP_FLAG_IDLE)
++		if (uclamp_rq_is_idle(rq))
+ 			goto out;
+ 	}
+ 
+-	min_util = max_t(unsigned long, min_util, READ_ONCE(rq->uclamp[UCLAMP_MIN].value));
+-	max_util = max_t(unsigned long, max_util, READ_ONCE(rq->uclamp[UCLAMP_MAX].value));
++	min_util = max_t(unsigned long, min_util, uclamp_rq_get(rq, UCLAMP_MIN));
++	max_util = max_t(unsigned long, max_util, uclamp_rq_get(rq, UCLAMP_MAX));
+ out:
+ 	/*
+ 	 * Since CPU's {min,max}_util clamps are MAX aggregated considering
+@@ -2468,6 +2486,15 @@ static inline bool uclamp_is_used(void)
+ 	return static_branch_likely(&sched_uclamp_used);
+ }
+ #else /* CONFIG_UCLAMP_TASK */
++static inline unsigned long uclamp_eff_value(struct task_struct *p,
++					     enum uclamp_id clamp_id)
++{
++	if (clamp_id == UCLAMP_MIN)
++		return 0;
++
++	return SCHED_CAPACITY_SCALE;
++}
++
+ static inline
+ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+ 				  struct task_struct *p)
+@@ -2479,6 +2506,25 @@ static inline bool uclamp_is_used(void)
+ {
+ 	return false;
+ }
++
++static inline unsigned long uclamp_rq_get(struct rq *rq,
++					  enum uclamp_id clamp_id)
++{
++	if (clamp_id == UCLAMP_MIN)
++		return 0;
++
++	return SCHED_CAPACITY_SCALE;
++}
++
++static inline void uclamp_rq_set(struct rq *rq, enum uclamp_id clamp_id,
++				 unsigned int value)
++{
++}
++
++static inline bool uclamp_rq_is_idle(struct rq *rq)
++{
++	return false;
++}
+ #endif /* CONFIG_UCLAMP_TASK */
+ 
+ #ifdef arch_scale_freq_capacity
+@@ -2494,6 +2540,24 @@ static inline unsigned long capacity_orig_of(int cpu)
+ {
+ 	return cpu_rq(cpu)->cpu_capacity_orig;
+ }
++
++/*
++ * Returns inverted capacity if the CPU is in capacity inversion state.
++ * 0 otherwise.
++ *
++ * Capacity inversion detection only considers thermal impact where actual
++ * performance points (OPPs) gets dropped.
++ *
++ * Capacity inversion state happens when another performance domain that has
++ * equal or lower capacity_orig_of() becomes effectively larger than the perf
++ * domain this CPU belongs to due to thermal pressure throttling it hard.
++ *
++ * See comment in update_cpu_capacity().
++ */
++static inline unsigned long cpu_in_capacity_inversion(int cpu)
++{
++	return cpu_rq(cpu)->cpu_capacity_inverted;
++}
+ #endif
+ 
+ /**
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 9f59cc8ab8f86..bff14910b9262 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -634,6 +634,7 @@ long __sys_setresuid(uid_t ruid, uid_t euid, uid_t suid)
+ 	struct cred *new;
+ 	int retval;
+ 	kuid_t kruid, keuid, ksuid;
++	bool ruid_new, euid_new, suid_new;
+ 
+ 	kruid = make_kuid(ns, ruid);
+ 	keuid = make_kuid(ns, euid);
+@@ -648,25 +649,29 @@ long __sys_setresuid(uid_t ruid, uid_t euid, uid_t suid)
+ 	if ((suid != (uid_t) -1) && !uid_valid(ksuid))
+ 		return -EINVAL;
+ 
++	old = current_cred();
++
++	/* check for no-op */
++	if ((ruid == (uid_t) -1 || uid_eq(kruid, old->uid)) &&
++	    (euid == (uid_t) -1 || (uid_eq(keuid, old->euid) &&
++				    uid_eq(keuid, old->fsuid))) &&
++	    (suid == (uid_t) -1 || uid_eq(ksuid, old->suid)))
++		return 0;
++
++	ruid_new = ruid != (uid_t) -1        && !uid_eq(kruid, old->uid) &&
++		   !uid_eq(kruid, old->euid) && !uid_eq(kruid, old->suid);
++	euid_new = euid != (uid_t) -1        && !uid_eq(keuid, old->uid) &&
++		   !uid_eq(keuid, old->euid) && !uid_eq(keuid, old->suid);
++	suid_new = suid != (uid_t) -1        && !uid_eq(ksuid, old->uid) &&
++		   !uid_eq(ksuid, old->euid) && !uid_eq(ksuid, old->suid);
++	if ((ruid_new || euid_new || suid_new) &&
++	    !ns_capable_setid(old->user_ns, CAP_SETUID))
++		return -EPERM;
++
+ 	new = prepare_creds();
+ 	if (!new)
+ 		return -ENOMEM;
+ 
+-	old = current_cred();
+-
+-	retval = -EPERM;
+-	if (!ns_capable_setid(old->user_ns, CAP_SETUID)) {
+-		if (ruid != (uid_t) -1        && !uid_eq(kruid, old->uid) &&
+-		    !uid_eq(kruid, old->euid) && !uid_eq(kruid, old->suid))
+-			goto error;
+-		if (euid != (uid_t) -1        && !uid_eq(keuid, old->uid) &&
+-		    !uid_eq(keuid, old->euid) && !uid_eq(keuid, old->suid))
+-			goto error;
+-		if (suid != (uid_t) -1        && !uid_eq(ksuid, old->uid) &&
+-		    !uid_eq(ksuid, old->euid) && !uid_eq(ksuid, old->suid))
+-			goto error;
+-	}
+-
+ 	if (ruid != (uid_t) -1) {
+ 		new->uid = kruid;
+ 		if (!uid_eq(kruid, old->uid)) {
+@@ -726,6 +731,7 @@ long __sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid)
+ 	struct cred *new;
+ 	int retval;
+ 	kgid_t krgid, kegid, ksgid;
++	bool rgid_new, egid_new, sgid_new;
+ 
+ 	krgid = make_kgid(ns, rgid);
+ 	kegid = make_kgid(ns, egid);
+@@ -738,23 +744,28 @@ long __sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid)
+ 	if ((sgid != (gid_t) -1) && !gid_valid(ksgid))
+ 		return -EINVAL;
+ 
++	old = current_cred();
++
++	/* check for no-op */
++	if ((rgid == (gid_t) -1 || gid_eq(krgid, old->gid)) &&
++	    (egid == (gid_t) -1 || (gid_eq(kegid, old->egid) &&
++				    gid_eq(kegid, old->fsgid))) &&
++	    (sgid == (gid_t) -1 || gid_eq(ksgid, old->sgid)))
++		return 0;
++
++	rgid_new = rgid != (gid_t) -1        && !gid_eq(krgid, old->gid) &&
++		   !gid_eq(krgid, old->egid) && !gid_eq(krgid, old->sgid);
++	egid_new = egid != (gid_t) -1        && !gid_eq(kegid, old->gid) &&
++		   !gid_eq(kegid, old->egid) && !gid_eq(kegid, old->sgid);
++	sgid_new = sgid != (gid_t) -1        && !gid_eq(ksgid, old->gid) &&
++		   !gid_eq(ksgid, old->egid) && !gid_eq(ksgid, old->sgid);
++	if ((rgid_new || egid_new || sgid_new) &&
++	    !ns_capable_setid(old->user_ns, CAP_SETGID))
++		return -EPERM;
++
+ 	new = prepare_creds();
+ 	if (!new)
+ 		return -ENOMEM;
+-	old = current_cred();
+-
+-	retval = -EPERM;
+-	if (!ns_capable_setid(old->user_ns, CAP_SETGID)) {
+-		if (rgid != (gid_t) -1        && !gid_eq(krgid, old->gid) &&
+-		    !gid_eq(krgid, old->egid) && !gid_eq(krgid, old->sgid))
+-			goto error;
+-		if (egid != (gid_t) -1        && !gid_eq(kegid, old->gid) &&
+-		    !gid_eq(kegid, old->egid) && !gid_eq(kegid, old->sgid))
+-			goto error;
+-		if (sgid != (gid_t) -1        && !gid_eq(ksgid, old->gid) &&
+-		    !gid_eq(ksgid, old->egid) && !gid_eq(ksgid, old->sgid))
+-			goto error;
+-	}
+ 
+ 	if (rgid != (gid_t) -1)
+ 		new->gid = krgid;
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index b77186ec70e93..28e18777ec513 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -622,6 +622,10 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
+ 			result = SCAN_PTE_NON_PRESENT;
+ 			goto out;
+ 		}
++		if (pte_uffd_wp(pteval)) {
++			result = SCAN_PTE_UFFD_WP;
++			goto out;
++		}
+ 		page = vm_normal_page(vma, address, pteval);
+ 		if (unlikely(!page)) {
+ 			result = SCAN_PAGE_NULL;
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index f3c7cfba31e1b..f14beb9a62edb 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -868,12 +868,17 @@ static unsigned int ip_sabotage_in(void *priv,
+ {
+ 	struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb);
+ 
+-	if (nf_bridge && !nf_bridge->in_prerouting &&
+-	    !netif_is_l3_master(skb->dev) &&
+-	    !netif_is_l3_slave(skb->dev)) {
+-		nf_bridge_info_free(skb);
+-		state->okfn(state->net, state->sk, skb);
+-		return NF_STOLEN;
++	if (nf_bridge) {
++		if (nf_bridge->sabotage_in_done)
++			return NF_ACCEPT;
++
++		if (!nf_bridge->in_prerouting &&
++		    !netif_is_l3_master(skb->dev) &&
++		    !netif_is_l3_slave(skb->dev)) {
++			nf_bridge->sabotage_in_done = 1;
++			state->okfn(state->net, state->sk, skb);
++			return NF_STOLEN;
++		}
+ 	}
+ 
+ 	return NF_ACCEPT;
+diff --git a/net/dccp/dccp.h b/net/dccp/dccp.h
+index 5183e627468d8..0218eb169891c 100644
+--- a/net/dccp/dccp.h
++++ b/net/dccp/dccp.h
+@@ -283,6 +283,7 @@ int dccp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
+ int dccp_rcv_established(struct sock *sk, struct sk_buff *skb,
+ 			 const struct dccp_hdr *dh, const unsigned int len);
+ 
++void dccp_destruct_common(struct sock *sk);
+ int dccp_init_sock(struct sock *sk, const __u8 ctl_sock_initialized);
+ void dccp_destroy_sock(struct sock *sk);
+ 
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index c563f9b325d05..64e91783860df 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -992,6 +992,12 @@ static const struct inet_connection_sock_af_ops dccp_ipv6_mapped = {
+ 	.sockaddr_len	   = sizeof(struct sockaddr_in6),
+ };
+ 
++static void dccp_v6_sk_destruct(struct sock *sk)
++{
++	dccp_destruct_common(sk);
++	inet6_sock_destruct(sk);
++}
++
+ /* NOTE: A lot of things set to zero explicitly by call to
+  *       sk_alloc() so need not be done here.
+  */
+@@ -1004,17 +1010,12 @@ static int dccp_v6_init_sock(struct sock *sk)
+ 		if (unlikely(!dccp_v6_ctl_sock_initialized))
+ 			dccp_v6_ctl_sock_initialized = 1;
+ 		inet_csk(sk)->icsk_af_ops = &dccp_ipv6_af_ops;
++		sk->sk_destruct = dccp_v6_sk_destruct;
+ 	}
+ 
+ 	return err;
+ }
+ 
+-static void dccp_v6_destroy_sock(struct sock *sk)
+-{
+-	dccp_destroy_sock(sk);
+-	inet6_destroy_sock(sk);
+-}
+-
+ static struct timewait_sock_ops dccp6_timewait_sock_ops = {
+ 	.twsk_obj_size	= sizeof(struct dccp6_timewait_sock),
+ };
+@@ -1037,7 +1038,7 @@ static struct proto dccp_v6_prot = {
+ 	.accept		   = inet_csk_accept,
+ 	.get_port	   = inet_csk_get_port,
+ 	.shutdown	   = dccp_shutdown,
+-	.destroy	   = dccp_v6_destroy_sock,
++	.destroy	   = dccp_destroy_sock,
+ 	.orphan_count	   = &dccp_orphan_count,
+ 	.max_header	   = MAX_DCCP_HEADER,
+ 	.obj_size	   = sizeof(struct dccp6_sock),
+diff --git a/net/dccp/proto.c b/net/dccp/proto.c
+index 65e81e0199b04..e946211758c05 100644
+--- a/net/dccp/proto.c
++++ b/net/dccp/proto.c
+@@ -171,12 +171,18 @@ const char *dccp_packet_name(const int type)
+ 
+ EXPORT_SYMBOL_GPL(dccp_packet_name);
+ 
+-static void dccp_sk_destruct(struct sock *sk)
++void dccp_destruct_common(struct sock *sk)
+ {
+ 	struct dccp_sock *dp = dccp_sk(sk);
+ 
+ 	ccid_hc_tx_delete(dp->dccps_hc_tx_ccid, sk);
+ 	dp->dccps_hc_tx_ccid = NULL;
++}
++EXPORT_SYMBOL_GPL(dccp_destruct_common);
++
++static void dccp_sk_destruct(struct sock *sk)
++{
++	dccp_destruct_common(sk);
+ 	inet_sock_destruct(sk);
+ }
+ 
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index b093daaa3deb9..f0db66e415bd6 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1582,7 +1582,7 @@ drop:
+ }
+ EXPORT_SYMBOL_GPL(__udp_enqueue_schedule_skb);
+ 
+-void udp_destruct_sock(struct sock *sk)
++void udp_destruct_common(struct sock *sk)
+ {
+ 	/* reclaim completely the forward allocated memory */
+ 	struct udp_sock *up = udp_sk(sk);
+@@ -1595,10 +1595,14 @@ void udp_destruct_sock(struct sock *sk)
+ 		kfree_skb(skb);
+ 	}
+ 	udp_rmem_release(sk, total, 0, true);
++}
++EXPORT_SYMBOL_GPL(udp_destruct_common);
+ 
++static void udp_destruct_sock(struct sock *sk)
++{
++	udp_destruct_common(sk);
+ 	inet_sock_destruct(sk);
+ }
+-EXPORT_SYMBOL_GPL(udp_destruct_sock);
+ 
+ int udp_init_sock(struct sock *sk)
+ {
+@@ -1606,7 +1610,6 @@ int udp_init_sock(struct sock *sk)
+ 	sk->sk_destruct = udp_destruct_sock;
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(udp_init_sock);
+ 
+ void skb_consume_udp(struct sock *sk, struct sk_buff *skb, int len)
+ {
+diff --git a/net/ipv4/udplite.c b/net/ipv4/udplite.c
+index bd8773b49e72e..cfb36655a5fda 100644
+--- a/net/ipv4/udplite.c
++++ b/net/ipv4/udplite.c
+@@ -17,6 +17,14 @@
+ struct udp_table 	udplite_table __read_mostly;
+ EXPORT_SYMBOL(udplite_table);
+ 
++/* Designate sk as UDP-Lite socket */
++static int udplite_sk_init(struct sock *sk)
++{
++	udp_init_sock(sk);
++	udp_sk(sk)->pcflag = UDPLITE_BIT;
++	return 0;
++}
++
+ static int udplite_rcv(struct sk_buff *skb)
+ {
+ 	return __udp4_lib_rcv(skb, &udplite_table, IPPROTO_UDPLITE);
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index 4df9dc9375c8e..4247997077bfb 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -107,6 +107,13 @@ static __inline__ struct ipv6_pinfo *inet6_sk_generic(struct sock *sk)
+ 	return (struct ipv6_pinfo *)(((u8 *)sk) + offset);
+ }
+ 
++void inet6_sock_destruct(struct sock *sk)
++{
++	inet6_cleanup_sock(sk);
++	inet_sock_destruct(sk);
++}
++EXPORT_SYMBOL_GPL(inet6_sock_destruct);
++
+ static int inet6_create(struct net *net, struct socket *sock, int protocol,
+ 			int kern)
+ {
+@@ -199,7 +206,7 @@ lookup_protocol:
+ 			inet->hdrincl = 1;
+ 	}
+ 
+-	sk->sk_destruct		= inet_sock_destruct;
++	sk->sk_destruct		= inet6_sock_destruct;
+ 	sk->sk_family		= PF_INET6;
+ 	sk->sk_protocol		= protocol;
+ 
+@@ -503,6 +510,12 @@ void inet6_destroy_sock(struct sock *sk)
+ }
+ EXPORT_SYMBOL_GPL(inet6_destroy_sock);
+ 
++void inet6_cleanup_sock(struct sock *sk)
++{
++	inet6_destroy_sock(sk);
++}
++EXPORT_SYMBOL_GPL(inet6_cleanup_sock);
++
+ /*
+  *	This does both peername and sockname.
+  */
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 2017257cb2784..7b4b457a8b87a 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -429,9 +429,6 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ 		if (optlen < sizeof(int))
+ 			goto e_inval;
+ 		if (val == PF_INET) {
+-			struct ipv6_txoptions *opt;
+-			struct sk_buff *pktopt;
+-
+ 			if (sk->sk_type == SOCK_RAW)
+ 				break;
+ 
+@@ -462,7 +459,6 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ 				break;
+ 			}
+ 
+-			fl6_free_socklist(sk);
+ 			__ipv6_sock_mc_close(sk);
+ 			__ipv6_sock_ac_close(sk);
+ 
+@@ -497,14 +493,14 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ 				sk->sk_socket->ops = &inet_dgram_ops;
+ 				sk->sk_family = PF_INET;
+ 			}
+-			opt = xchg((__force struct ipv6_txoptions **)&np->opt,
+-				   NULL);
+-			if (opt) {
+-				atomic_sub(opt->tot_len, &sk->sk_omem_alloc);
+-				txopt_put(opt);
+-			}
+-			pktopt = xchg(&np->pktoptions, NULL);
+-			kfree_skb(pktopt);
++
++			/* Disable all options not to allocate memory anymore,
++			 * but there is still a race.  See the lockless path
++			 * in udpv6_sendmsg() and ipv6_local_rxpmtu().
++			 */
++			np->rxopt.all = 0;
++
++			inet6_cleanup_sock(sk);
+ 
+ 			/*
+ 			 * ... and add it to the refcnt debug socks count
+diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c
+index 135e3a060caa8..6ac88fe24a8e0 100644
+--- a/net/ipv6/ping.c
++++ b/net/ipv6/ping.c
+@@ -22,11 +22,6 @@
+ #include <linux/proc_fs.h>
+ #include <net/ping.h>
+ 
+-static void ping_v6_destroy(struct sock *sk)
+-{
+-	inet6_destroy_sock(sk);
+-}
+-
+ /* Compatibility glue so we can support IPv6 when it's compiled as a module */
+ static int dummy_ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len,
+ 				 int *addr_len)
+@@ -171,7 +166,6 @@ struct proto pingv6_prot = {
+ 	.owner =	THIS_MODULE,
+ 	.init =		ping_init_sock,
+ 	.close =	ping_close,
+-	.destroy =	ping_v6_destroy,
+ 	.connect =	ip6_datagram_connect_v6_only,
+ 	.disconnect =	__udp_disconnect,
+ 	.setsockopt =	ipv6_setsockopt,
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 110254f44a468..69f0f9c05d028 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -1211,8 +1211,6 @@ static void raw6_destroy(struct sock *sk)
+ 	lock_sock(sk);
+ 	ip6_flush_pending_frames(sk);
+ 	release_sock(sk);
+-
+-	inet6_destroy_sock(sk);
+ }
+ 
+ static int rawv6_init_sk(struct sock *sk)
+diff --git a/net/ipv6/rpl.c b/net/ipv6/rpl.c
+index 307f336b5353e..3b0386437f69d 100644
+--- a/net/ipv6/rpl.c
++++ b/net/ipv6/rpl.c
+@@ -32,7 +32,8 @@ static void *ipv6_rpl_segdata_pos(const struct ipv6_rpl_sr_hdr *hdr, int i)
+ size_t ipv6_rpl_srh_size(unsigned char n, unsigned char cmpri,
+ 			 unsigned char cmpre)
+ {
+-	return (n * IPV6_PFXTAIL_LEN(cmpri)) + IPV6_PFXTAIL_LEN(cmpre);
++	return sizeof(struct ipv6_rpl_sr_hdr) + (n * IPV6_PFXTAIL_LEN(cmpri)) +
++		IPV6_PFXTAIL_LEN(cmpre);
+ }
+ 
+ void ipv6_rpl_srh_decompress(struct ipv6_rpl_sr_hdr *outhdr,
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index e4ae5362cb51b..2347740d3cc7c 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1936,12 +1936,6 @@ static int tcp_v6_init_sock(struct sock *sk)
+ 	return 0;
+ }
+ 
+-static void tcp_v6_destroy_sock(struct sock *sk)
+-{
+-	tcp_v4_destroy_sock(sk);
+-	inet6_destroy_sock(sk);
+-}
+-
+ #ifdef CONFIG_PROC_FS
+ /* Proc filesystem TCPv6 sock list dumping. */
+ static void get_openreq6(struct seq_file *seq,
+@@ -2134,7 +2128,7 @@ struct proto tcpv6_prot = {
+ 	.accept			= inet_csk_accept,
+ 	.ioctl			= tcp_ioctl,
+ 	.init			= tcp_v6_init_sock,
+-	.destroy		= tcp_v6_destroy_sock,
++	.destroy		= tcp_v4_destroy_sock,
+ 	.shutdown		= tcp_shutdown,
+ 	.setsockopt		= tcp_setsockopt,
+ 	.getsockopt		= tcp_getsockopt,
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 20cc08210c700..19c0721399d9e 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -54,6 +54,19 @@
+ #include <trace/events/skb.h>
+ #include "udp_impl.h"
+ 
++static void udpv6_destruct_sock(struct sock *sk)
++{
++	udp_destruct_common(sk);
++	inet6_sock_destruct(sk);
++}
++
++int udpv6_init_sock(struct sock *sk)
++{
++	skb_queue_head_init(&udp_sk(sk)->reader_queue);
++	sk->sk_destruct = udpv6_destruct_sock;
++	return 0;
++}
++
+ static u32 udp6_ehashfn(const struct net *net,
+ 			const struct in6_addr *laddr,
+ 			const u16 lport,
+@@ -1617,8 +1630,6 @@ void udpv6_destroy_sock(struct sock *sk)
+ 			udp_encap_disable();
+ 		}
+ 	}
+-
+-	inet6_destroy_sock(sk);
+ }
+ 
+ /*
+@@ -1702,7 +1713,7 @@ struct proto udpv6_prot = {
+ 	.connect		= ip6_datagram_connect,
+ 	.disconnect		= udp_disconnect,
+ 	.ioctl			= udp_ioctl,
+-	.init			= udp_init_sock,
++	.init			= udpv6_init_sock,
+ 	.destroy		= udpv6_destroy_sock,
+ 	.setsockopt		= udpv6_setsockopt,
+ 	.getsockopt		= udpv6_getsockopt,
+diff --git a/net/ipv6/udp_impl.h b/net/ipv6/udp_impl.h
+index b2fcc46c1630e..e497768194414 100644
+--- a/net/ipv6/udp_impl.h
++++ b/net/ipv6/udp_impl.h
+@@ -12,6 +12,7 @@ int __udp6_lib_rcv(struct sk_buff *, struct udp_table *, int);
+ int __udp6_lib_err(struct sk_buff *, struct inet6_skb_parm *, u8, u8, int,
+ 		   __be32, struct udp_table *);
+ 
++int udpv6_init_sock(struct sock *sk);
+ int udp_v6_get_port(struct sock *sk, unsigned short snum);
+ void udp_v6_rehash(struct sock *sk);
+ 
+diff --git a/net/ipv6/udplite.c b/net/ipv6/udplite.c
+index fbb700d3f437e..b6482e04dad0e 100644
+--- a/net/ipv6/udplite.c
++++ b/net/ipv6/udplite.c
+@@ -12,6 +12,13 @@
+ #include <linux/proc_fs.h>
+ #include "udp_impl.h"
+ 
++static int udplitev6_sk_init(struct sock *sk)
++{
++	udpv6_init_sock(sk);
++	udp_sk(sk)->pcflag = UDPLITE_BIT;
++	return 0;
++}
++
+ static int udplitev6_rcv(struct sk_buff *skb)
+ {
+ 	return __udp6_lib_rcv(skb, &udplite_table, IPPROTO_UDPLITE);
+@@ -38,7 +45,7 @@ struct proto udplitev6_prot = {
+ 	.connect	   = ip6_datagram_connect,
+ 	.disconnect	   = udp_disconnect,
+ 	.ioctl		   = udp_ioctl,
+-	.init		   = udplite_sk_init,
++	.init		   = udplitev6_sk_init,
+ 	.destroy	   = udpv6_destroy_sock,
+ 	.setsockopt	   = udpv6_setsockopt,
+ 	.getsockopt	   = udpv6_getsockopt,
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index d54dbd01d86f1..382124d6f7647 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -255,8 +255,6 @@ static void l2tp_ip6_destroy_sock(struct sock *sk)
+ 
+ 	if (tunnel)
+ 		l2tp_tunnel_delete(tunnel);
+-
+-	inet6_destroy_sock(sk);
+ }
+ 
+ static int l2tp_ip6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index e61c85873ea2f..72d944e6a641f 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2863,12 +2863,6 @@ static const struct proto_ops mptcp_v6_stream_ops = {
+ 
+ static struct proto mptcp_v6_prot;
+ 
+-static void mptcp_v6_destroy(struct sock *sk)
+-{
+-	mptcp_destroy(sk);
+-	inet6_destroy_sock(sk);
+-}
+-
+ static struct inet_protosw mptcp_v6_protosw = {
+ 	.type		= SOCK_STREAM,
+ 	.protocol	= IPPROTO_MPTCP,
+@@ -2884,7 +2878,6 @@ int __init mptcp_proto_v6_init(void)
+ 	mptcp_v6_prot = mptcp_prot;
+ 	strcpy(mptcp_v6_prot.name, "MPTCPv6");
+ 	mptcp_v6_prot.slab = NULL;
+-	mptcp_v6_prot.destroy = mptcp_v6_destroy;
+ 	mptcp_v6_prot.obj_size = sizeof(struct mptcp6_sock);
+ 
+ 	err = proto_register(&mptcp_v6_prot, 1);
+diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
+index 1d1d81aeb389f..cad7deacf60a4 100644
+--- a/net/sched/sch_qfq.c
++++ b/net/sched/sch_qfq.c
+@@ -421,15 +421,16 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ 	} else
+ 		weight = 1;
+ 
+-	if (tb[TCA_QFQ_LMAX]) {
++	if (tb[TCA_QFQ_LMAX])
+ 		lmax = nla_get_u32(tb[TCA_QFQ_LMAX]);
+-		if (lmax < QFQ_MIN_LMAX || lmax > (1UL << QFQ_MTU_SHIFT)) {
+-			pr_notice("qfq: invalid max length %u\n", lmax);
+-			return -EINVAL;
+-		}
+-	} else
++	else
+ 		lmax = psched_mtu(qdisc_dev(sch));
+ 
++	if (lmax < QFQ_MIN_LMAX || lmax > (1UL << QFQ_MTU_SHIFT)) {
++		pr_notice("qfq: invalid max length %u\n", lmax);
++		return -EINVAL;
++	}
++
+ 	inv_w = ONE_FP / weight;
+ 	weight = ONE_FP / inv_w;
+ 
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 3a68d65f7d153..35d3eee26ea56 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -4995,13 +4995,17 @@ static void sctp_destroy_sock(struct sock *sk)
+ }
+ 
+ /* Triggered when there are no references on the socket anymore */
+-static void sctp_destruct_sock(struct sock *sk)
++static void sctp_destruct_common(struct sock *sk)
+ {
+ 	struct sctp_sock *sp = sctp_sk(sk);
+ 
+ 	/* Free up the HMAC transform. */
+ 	crypto_free_shash(sp->hmac);
++}
+ 
++static void sctp_destruct_sock(struct sock *sk)
++{
++	sctp_destruct_common(sk);
+ 	inet_sock_destruct(sk);
+ }
+ 
+@@ -9195,7 +9199,7 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk,
+ 	sctp_sk(newsk)->reuse = sp->reuse;
+ 
+ 	newsk->sk_shutdown = sk->sk_shutdown;
+-	newsk->sk_destruct = sctp_destruct_sock;
++	newsk->sk_destruct = sk->sk_destruct;
+ 	newsk->sk_family = sk->sk_family;
+ 	newsk->sk_protocol = IPPROTO_SCTP;
+ 	newsk->sk_backlog_rcv = sk->sk_prot->backlog_rcv;
+@@ -9427,11 +9431,20 @@ struct proto sctp_prot = {
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+ 
+-#include <net/transp_v6.h>
+-static void sctp_v6_destroy_sock(struct sock *sk)
++static void sctp_v6_destruct_sock(struct sock *sk)
++{
++	sctp_destruct_common(sk);
++	inet6_sock_destruct(sk);
++}
++
++static int sctp_v6_init_sock(struct sock *sk)
+ {
+-	sctp_destroy_sock(sk);
+-	inet6_destroy_sock(sk);
++	int ret = sctp_init_sock(sk);
++
++	if (!ret)
++		sk->sk_destruct = sctp_v6_destruct_sock;
++
++	return ret;
+ }
+ 
+ struct proto sctpv6_prot = {
+@@ -9441,8 +9454,8 @@ struct proto sctpv6_prot = {
+ 	.disconnect	= sctp_disconnect,
+ 	.accept		= sctp_accept,
+ 	.ioctl		= sctp_ioctl,
+-	.init		= sctp_init_sock,
+-	.destroy	= sctp_v6_destroy_sock,
++	.init		= sctp_v6_init_sock,
++	.destroy	= sctp_destroy_sock,
+ 	.shutdown	= sctp_shutdown,
+ 	.setsockopt	= sctp_setsockopt,
+ 	.getsockopt	= sctp_getsockopt,
+diff --git a/scripts/asn1_compiler.c b/scripts/asn1_compiler.c
+index adabd41452640..985fb81cae79b 100644
+--- a/scripts/asn1_compiler.c
++++ b/scripts/asn1_compiler.c
+@@ -625,7 +625,7 @@ int main(int argc, char **argv)
+ 	p = strrchr(argv[1], '/');
+ 	p = p ? p + 1 : argv[1];
+ 	grammar_name = strdup(p);
+-	if (!p) {
++	if (!grammar_name) {
+ 		perror(NULL);
+ 		exit(1);
+ 	}
+diff --git a/sound/soc/fsl/fsl_asrc_dma.c b/sound/soc/fsl/fsl_asrc_dma.c
+index 29f91cdecbc33..9b2a986ce4152 100644
+--- a/sound/soc/fsl/fsl_asrc_dma.c
++++ b/sound/soc/fsl/fsl_asrc_dma.c
+@@ -207,14 +207,19 @@ static int fsl_asrc_dma_hw_params(struct snd_soc_component *component,
+ 		be_chan = soc_component_to_pcm(component_be)->chan[substream->stream];
+ 		tmp_chan = be_chan;
+ 	}
+-	if (!tmp_chan)
+-		tmp_chan = dma_request_slave_channel(dev_be, tx ? "tx" : "rx");
++	if (!tmp_chan) {
++		tmp_chan = dma_request_chan(dev_be, tx ? "tx" : "rx");
++		if (IS_ERR(tmp_chan)) {
++			dev_err(dev, "failed to request DMA channel for Back-End\n");
++			return -EINVAL;
++		}
++	}
+ 
+ 	/*
+ 	 * An EDMA DEV_TO_DEV channel is fixed and bound with DMA event of each
+ 	 * peripheral, unlike SDMA channel that is allocated dynamically. So no
+ 	 * need to configure dma_request and dma_request2, but get dma_chan of
+-	 * Back-End device directly via dma_request_slave_channel.
++	 * Back-End device directly via dma_request_chan.
+ 	 */
+ 	if (!asrc->use_edma) {
+ 		/* Get DMA request of Back-End */
+diff --git a/tools/testing/selftests/sigaltstack/current_stack_pointer.h b/tools/testing/selftests/sigaltstack/current_stack_pointer.h
+new file mode 100644
+index 0000000000000..ea9bdf3a90b16
+--- /dev/null
++++ b/tools/testing/selftests/sigaltstack/current_stack_pointer.h
+@@ -0,0 +1,23 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++
++#if __alpha__
++register unsigned long sp asm("$30");
++#elif __arm__ || __aarch64__ || __csky__ || __m68k__ || __mips__ || __riscv
++register unsigned long sp asm("sp");
++#elif __i386__
++register unsigned long sp asm("esp");
++#elif __loongarch64
++register unsigned long sp asm("$sp");
++#elif __ppc__
++register unsigned long sp asm("r1");
++#elif __s390x__
++register unsigned long sp asm("%15");
++#elif __sh__
++register unsigned long sp asm("r15");
++#elif __x86_64__
++register unsigned long sp asm("rsp");
++#elif __XTENSA__
++register unsigned long sp asm("a1");
++#else
++#error "implement current_stack_pointer equivalent"
++#endif
+diff --git a/tools/testing/selftests/sigaltstack/sas.c b/tools/testing/selftests/sigaltstack/sas.c
+index 8934a3766d207..41646c22384a2 100644
+--- a/tools/testing/selftests/sigaltstack/sas.c
++++ b/tools/testing/selftests/sigaltstack/sas.c
+@@ -19,6 +19,7 @@
+ #include <errno.h>
+ 
+ #include "../kselftest.h"
++#include "current_stack_pointer.h"
+ 
+ #ifndef SS_AUTODISARM
+ #define SS_AUTODISARM  (1U << 31)
+@@ -40,12 +41,6 @@ void my_usr1(int sig, siginfo_t *si, void *u)
+ 	stack_t stk;
+ 	struct stk_data *p;
+ 
+-#if __s390x__
+-	register unsigned long sp asm("%15");
+-#else
+-	register unsigned long sp asm("sp");
+-#endif
+-
+ 	if (sp < (unsigned long)sstack ||
+ 			sp >= (unsigned long)sstack + SIGSTKSZ) {
+ 		ksft_exit_fail_msg("SP is not on sigaltstack\n");


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-04-27 14:11 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-04-27 14:11 UTC (permalink / raw
  To: gentoo-commits

commit:     641f7c504ff4f220e1772e636b50703be7fd0645
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Apr 27 14:11:23 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Apr 27 14:11:23 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=641f7c50

gcc-plugins: Reorganize gimple includes for GCC 13

Bug: https://bugs.gentoo.org/905140

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 ++++
 ..._gcc-plugins-reorg-gimple-incl-for-GCC-13.patch | 26 ++++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/0000_README b/0000_README
index a9e84fc1..c2deeac3 100644
--- a/0000_README
+++ b/0000_README
@@ -783,6 +783,10 @@ Patch:  2940_gcc-plugins-drop-std-gnu-plus-plus-to-fix-GCC-13-build.patch
 From:   https://lore.kernel.org/all/20230201230009.2252783-1-sam@gentoo.org/
 Desc:   gcc-plugins: drop -std=gnu++11 to fix GCC 13 build
 
+Patch:  2950_gcc-plugins-reorg-gimple-incl-for-GCC-13.patch
+From:   https://git.kernel.org
+Desc:   gcc-plugins: Reorganize gimple includes for GCC 13
+
 Patch:  3000_Support-printing-firmware-info.patch
 From:   https://bugs.gentoo.org/732852
 Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev

diff --git a/2950_gcc-plugins-reorg-gimple-incl-for-GCC-13.patch b/2950_gcc-plugins-reorg-gimple-incl-for-GCC-13.patch
new file mode 100644
index 00000000..3c94f239
--- /dev/null
+++ b/2950_gcc-plugins-reorg-gimple-incl-for-GCC-13.patch
@@ -0,0 +1,26 @@
+--- a/scripts/gcc-plugins/gcc-common.h	2023-04-27 09:30:13.021916723 -0400
++++ b/scripts/gcc-plugins/gcc-common.h	2023-04-27 09:31:15.088866298 -0400
+@@ -108,7 +108,9 @@
+ #include "varasm.h"
+ #include "stor-layout.h"
+ #include "internal-fn.h"
++#include "gimple.h"
+ #include "gimple-expr.h"
++#include "gimple-iterator.h"
+ #include "gimple-fold.h"
+ #include "context.h"
+ #include "tree-ssa-alias.h"
+@@ -124,13 +126,10 @@
+ #include "gimplify.h"
+ #endif
+ 
+-#include "gimple.h"
+-
+ #if BUILDING_GCC_VERSION >= 4009
+ #include "tree-ssa-operands.h"
+ #include "tree-phinodes.h"
+ #include "tree-cfg.h"
+-#include "gimple-iterator.h"
+ #include "gimple-ssa.h"
+ #include "ssa-iterators.h"
+ #endif


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-05-10 17:56 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-05-10 17:56 UTC (permalink / raw
  To: gentoo-commits

commit:     e81d06c95357ee18ac4056a88d30f78724424f91
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 10 17:56:38 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 10 17:56:38 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e81d06c9

netfilter: nf_tables: deactivate anonymous set from preparation phase

Bug: https://bugs.gentoo.org/90606

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   4 +
 ...nf-tables-make-deleted-anon-sets-inactive.patch | 121 +++++++++++++++++++++
 2 files changed, 125 insertions(+)

diff --git a/0000_README b/0000_README
index c2deeac3..51b1212a 100644
--- a/0000_README
+++ b/0000_README
@@ -767,6 +767,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1520_fs-enable-link-security-restrictions-by-default.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=c1592a89942e9678f7d9c8030efa777c0d57edab
+Desc:   netfilter: nf_tables: deactivate anonymous set from preparation phase
+
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1520_nf-tables-make-deleted-anon-sets-inactive.patch b/1520_nf-tables-make-deleted-anon-sets-inactive.patch
new file mode 100644
index 00000000..cd75de5c
--- /dev/null
+++ b/1520_nf-tables-make-deleted-anon-sets-inactive.patch
@@ -0,0 +1,121 @@
+From c1592a89942e9678f7d9c8030efa777c0d57edab Mon Sep 17 00:00:00 2001
+From: Pablo Neira Ayuso <pablo@netfilter.org>
+Date: Tue, 2 May 2023 10:25:24 +0200
+Subject: netfilter: nf_tables: deactivate anonymous set from preparation phase
+
+Toggle deleted anonymous sets as inactive in the next generation, so
+users cannot perform any update on it. Clear the generation bitmask
+in case the transaction is aborted.
+
+The following KASAN splat shows a set element deletion for a bound
+anonymous set that has been already removed in the same transaction.
+
+[   64.921510] ==================================================================
+[   64.923123] BUG: KASAN: wild-memory-access in nf_tables_commit+0xa24/0x1490 [nf_tables]
+[   64.924745] Write of size 8 at addr dead000000000122 by task test/890
+[   64.927903] CPU: 3 PID: 890 Comm: test Not tainted 6.3.0+ #253
+[   64.931120] Call Trace:
+[   64.932699]  <TASK>
+[   64.934292]  dump_stack_lvl+0x33/0x50
+[   64.935908]  ? nf_tables_commit+0xa24/0x1490 [nf_tables]
+[   64.937551]  kasan_report+0xda/0x120
+[   64.939186]  ? nf_tables_commit+0xa24/0x1490 [nf_tables]
+[   64.940814]  nf_tables_commit+0xa24/0x1490 [nf_tables]
+[   64.942452]  ? __kasan_slab_alloc+0x2d/0x60
+[   64.944070]  ? nf_tables_setelem_notify+0x190/0x190 [nf_tables]
+[   64.945710]  ? kasan_set_track+0x21/0x30
+[   64.947323]  nfnetlink_rcv_batch+0x709/0xd90 [nfnetlink]
+[   64.948898]  ? nfnetlink_rcv_msg+0x480/0x480 [nfnetlink]
+
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+---
+ include/net/netfilter/nf_tables.h |  1 +
+ net/netfilter/nf_tables_api.c     | 12 ++++++++++++
+ net/netfilter/nft_dynset.c        |  2 +-
+ net/netfilter/nft_lookup.c        |  2 +-
+ net/netfilter/nft_objref.c        |  2 +-
+ 5 files changed, 16 insertions(+), 3 deletions(-)
+
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 3ed21d2d56590..2e24ea1d744c2 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -619,6 +619,7 @@ struct nft_set_binding {
+ };
+ 
+ enum nft_trans_phase;
++void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set);
+ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			      struct nft_set_binding *binding,
+ 			      enum nft_trans_phase phase);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 8b6c61a2196cb..59fb8320ab4d7 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -5127,12 +5127,24 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 	}
+ }
+ 
++void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
++{
++	if (nft_set_is_anonymous(set))
++		nft_clear(ctx->net, set);
++
++	set->use++;
++}
++EXPORT_SYMBOL_GPL(nf_tables_activate_set);
++
+ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			      struct nft_set_binding *binding,
+ 			      enum nft_trans_phase phase)
+ {
+ 	switch (phase) {
+ 	case NFT_TRANS_PREPARE:
++		if (nft_set_is_anonymous(set))
++			nft_deactivate_next(ctx->net, set);
++
+ 		set->use--;
+ 		return;
+ 	case NFT_TRANS_ABORT:
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 274579b1696e0..bd19c7aec92ee 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -342,7 +342,7 @@ static void nft_dynset_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_dynset *priv = nft_expr_priv(expr);
+ 
+-	priv->set->use++;
++	nf_tables_activate_set(ctx, priv->set);
+ }
+ 
+ static void nft_dynset_destroy(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
+index cecf8ab90e58f..03ef4fdaa460b 100644
+--- a/net/netfilter/nft_lookup.c
++++ b/net/netfilter/nft_lookup.c
+@@ -167,7 +167,7 @@ static void nft_lookup_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_lookup *priv = nft_expr_priv(expr);
+ 
+-	priv->set->use++;
++	nf_tables_activate_set(ctx, priv->set);
+ }
+ 
+ static void nft_lookup_destroy(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c
+index cb37169608bab..a48dd5b5d45b1 100644
+--- a/net/netfilter/nft_objref.c
++++ b/net/netfilter/nft_objref.c
+@@ -185,7 +185,7 @@ static void nft_objref_map_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_objref_map *priv = nft_expr_priv(expr);
+ 
+-	priv->set->use++;
++	nf_tables_activate_set(ctx, priv->set);
+ }
+ 
+ static void nft_objref_map_destroy(const struct nft_ctx *ctx,
+-- 
+cgit 
+


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-05-17 10:59 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-05-17 10:59 UTC (permalink / raw
  To: gentoo-commits

commit:     6893a851954087efaeb9fd07435c7f3f05de7ea5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 17 10:59:20 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 17 10:59:20 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6893a851

Linux patch 5.10.180

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1179_linux-5.10.180.patch | 13791 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 13795 insertions(+)

diff --git a/0000_README b/0000_README
index 51b1212a..90d28c5e 100644
--- a/0000_README
+++ b/0000_README
@@ -759,6 +759,10 @@ Patch:  1178_linux-5.10.179.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.179
 
+Patch:  1179_linux-5.10.180.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.180
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1179_linux-5.10.180.patch b/1179_linux-5.10.180.patch
new file mode 100644
index 00000000..e9305dc7
--- /dev/null
+++ b/1179_linux-5.10.180.patch
@@ -0,0 +1,13791 @@
+diff --git a/Makefile b/Makefile
+index 3ddcade4be8fc..c2f8e1644abdc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 179
++SUBLEVEL = 180
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/exynos4412-itop-elite.dts b/arch/arm/boot/dts/exynos4412-itop-elite.dts
+index f6d0a5f5d339e..9a2a49420d4db 100644
+--- a/arch/arm/boot/dts/exynos4412-itop-elite.dts
++++ b/arch/arm/boot/dts/exynos4412-itop-elite.dts
+@@ -179,7 +179,7 @@
+ 		compatible = "wlf,wm8960";
+ 		reg = <0x1a>;
+ 		clocks = <&pmu_system_controller 0>;
+-		clock-names = "MCLK1";
++		clock-names = "mclk";
+ 		wlf,shared-lrclk;
+ 		#sound-dai-cells = <0>;
+ 	};
+diff --git a/arch/arm/boot/dts/omap3-gta04.dtsi b/arch/arm/boot/dts/omap3-gta04.dtsi
+index cc8a378dd076e..e61e5ddbf2027 100644
+--- a/arch/arm/boot/dts/omap3-gta04.dtsi
++++ b/arch/arm/boot/dts/omap3-gta04.dtsi
+@@ -609,6 +609,22 @@
+ 	clock-frequency = <100000>;
+ };
+ 
++&mcspi1 {
++	status = "disabled";
++};
++
++&mcspi2 {
++	status = "disabled";
++};
++
++&mcspi3 {
++	status = "disabled";
++};
++
++&mcspi4 {
++	status = "disabled";
++};
++
+ &usb_otg_hs {
+ 	interface-type = <0>;
+ 	usb-phy = <&usb2_phy>;
+diff --git a/arch/arm/boot/dts/qcom-ipq4019.dtsi b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+index 3defd47fd8fab..037bb8a9b01ec 100644
+--- a/arch/arm/boot/dts/qcom-ipq4019.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+@@ -414,8 +414,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x81000000 0 0x40200000 0x40200000 0 0x00100000>,
+-				 <0x82000000 0 0x40300000 0x40300000 0 0x00d00000>;
++			ranges = <0x81000000 0x0 0x00000000 0x40200000 0x0 0x00100000>,
++				 <0x82000000 0x0 0x40300000 0x40300000 0x0 0x00d00000>;
+ 
+ 			interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+diff --git a/arch/arm/boot/dts/qcom-ipq8064.dtsi b/arch/arm/boot/dts/qcom-ipq8064.dtsi
+index c51481405e7f8..dca0ed6c8c8de 100644
+--- a/arch/arm/boot/dts/qcom-ipq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq8064.dtsi
+@@ -465,8 +465,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x81000000 0 0x0fe00000 0x0fe00000 0 0x00100000   /* downstream I/O */
+-				  0x82000000 0 0x08000000 0x08000000 0 0x07e00000>; /* non-prefetchable memory */
++			ranges = <0x81000000 0x0 0x00000000 0x0fe00000 0x0 0x00010000   /* I/O */
++				  0x82000000 0x0 0x08000000 0x08000000 0x0 0x07e00000>; /* MEM */
+ 
+ 			interrupts = <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+@@ -516,8 +516,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x81000000 0 0x31e00000 0x31e00000 0 0x00100000   /* downstream I/O */
+-				  0x82000000 0 0x2e000000 0x2e000000 0 0x03e00000>; /* non-prefetchable memory */
++			ranges = <0x81000000 0x0 0x00000000 0x31e00000 0x0 0x00010000   /* I/O */
++				  0x82000000 0x0 0x2e000000 0x2e000000 0x0 0x03e00000>; /* MEM */
+ 
+ 			interrupts = <GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+@@ -567,8 +567,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x81000000 0 0x35e00000 0x35e00000 0 0x00100000   /* downstream I/O */
+-				  0x82000000 0 0x32000000 0x32000000 0 0x03e00000>; /* non-prefetchable memory */
++			ranges = <0x81000000 0x0 0x00000000 0x35e00000 0x0 0x00010000   /* I/O */
++				  0x82000000 0x0 0x32000000 0x32000000 0x0 0x03e00000>; /* MEM */
+ 
+ 			interrupts = <GIC_SPI 71 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi
+index eb7e3660ada79..81ab9fe9897fc 100644
+--- a/arch/arm/boot/dts/s5pv210.dtsi
++++ b/arch/arm/boot/dts/s5pv210.dtsi
+@@ -583,7 +583,7 @@
+ 				interrupts = <29>;
+ 				clocks = <&clocks CLK_CSIS>,
+ 						<&clocks SCLK_CSIS>;
+-				clock-names = "clk_csis",
++				clock-names = "csis",
+ 						"sclk_csis";
+ 				bus-width = <4>;
+ 				status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index e191a7bc532be..f85fcc7c8676b 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -609,10 +609,8 @@
+ 			phys = <&pcie_phy1>;
+ 			phy-names = "pciephy";
+ 
+-			ranges = <0x81000000 0 0x10200000 0x10200000
+-				  0 0x10000>,   /* downstream I/O */
+-				 <0x82000000 0 0x10220000 0x10220000
+-				  0 0xfde0000>; /* non-prefetchable memory */
++			ranges = <0x81000000 0x0 0x00000000 0x10200000 0x0 0x10000>,   /* I/O */
++				 <0x82000000 0x0 0x10220000 0x10220000 0x0 0xfde0000>; /* MEM */
+ 
+ 			interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+@@ -673,10 +671,8 @@
+ 			phys = <&pcie_phy0>;
+ 			phy-names = "pciephy";
+ 
+-			ranges = <0x81000000 0 0x20200000 0x20200000
+-				  0 0x10000>, /* downstream I/O */
+-				 <0x82000000 0 0x20220000 0x20220000
+-				  0 0xfde0000>; /* non-prefetchable memory */
++			ranges = <0x81000000 0x0 0x00000000 0x20200000 0x0 0x10000>,   /* I/O */
++				 <0x82000000 0x0 0x20220000 0x20220000 0x0 0xfde0000>; /* MEM */
+ 
+ 			interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index bc140269e4cc5..02b5f6f1d331e 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -744,8 +744,8 @@
+ 
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+-				ranges = <0x01000000 0x0 0x0c200000 0x0c200000 0x0 0x100000>,
+-					<0x02000000 0x0 0x0c300000 0x0c300000 0x0 0xd00000>;
++				ranges = <0x01000000 0x0 0x00000000 0x0c200000 0x0 0x100000>,
++					 <0x02000000 0x0 0x0c300000 0x0c300000 0x0 0xd00000>;
+ 
+ 				interrupts = <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-names = "msi";
+@@ -796,8 +796,8 @@
+ 
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+-				ranges = <0x01000000 0x0 0x0d200000 0x0d200000 0x0 0x100000>,
+-					<0x02000000 0x0 0x0d300000 0x0d300000 0x0 0xd00000>;
++				ranges = <0x01000000 0x0 0x00000000 0x0d200000 0x0 0x100000>,
++					 <0x02000000 0x0 0x0d300000 0x0d300000 0x0 0xd00000>;
+ 
+ 				interrupts = <GIC_SPI 413 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-names = "msi";
+@@ -845,8 +845,8 @@
+ 
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+-				ranges = <0x01000000 0x0 0x0e200000 0x0e200000 0x0 0x100000>,
+-					<0x02000000 0x0 0x0e300000 0x0e300000 0x0 0x1d00000>;
++				ranges = <0x01000000 0x0 0x00000000 0x0e200000 0x0 0x100000>,
++					 <0x02000000 0x0 0x0e300000 0x0e300000 0x0 0x1d00000>;
+ 
+ 				device_type = "pci";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8998.dtsi b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+index 9e04ac3f596d0..7c8d69ca91cf4 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+@@ -942,7 +942,7 @@
+ 			phys = <&pciephy>;
+ 			phy-names = "pciephy";
+ 
+-			ranges = <0x01000000 0x0 0x1b200000 0x1b200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x1b200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x1b300000 0x1b300000 0x0 0xd00000>;
+ 
+ 			#interrupt-cells = <1>;
+@@ -1187,7 +1187,7 @@
+ 			compatible = "arm,coresight-stm", "arm,primecell";
+ 			reg = <0x06002000 0x1000>,
+ 			      <0x16280000 0x180000>;
+-			reg-names = "stm-base", "stm-data-base";
++			reg-names = "stm-base", "stm-stimulus-base";
+ 			status = "disabled";
+ 
+ 			clocks = <&rpmcc RPM_SMD_QDSS_CLK>, <&rpmcc RPM_SMD_QDSS_A_CLK>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 9beb3c34fcdb5..71e5b9fdc9e16 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -196,8 +196,8 @@
+ 			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+ 					   &LITTLE_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			capacity-dmips-mhz = <607>;
+-			dynamic-power-coefficient = <100>;
++			capacity-dmips-mhz = <611>;
++			dynamic-power-coefficient = <154>;
+ 			qcom,freq-domain = <&cpufreq_hw 0>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
+@@ -221,8 +221,8 @@
+ 			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+ 					   &LITTLE_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			capacity-dmips-mhz = <607>;
+-			dynamic-power-coefficient = <100>;
++			capacity-dmips-mhz = <611>;
++			dynamic-power-coefficient = <154>;
+ 			qcom,freq-domain = <&cpufreq_hw 0>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
+@@ -243,8 +243,8 @@
+ 			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+ 					   &LITTLE_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			capacity-dmips-mhz = <607>;
+-			dynamic-power-coefficient = <100>;
++			capacity-dmips-mhz = <611>;
++			dynamic-power-coefficient = <154>;
+ 			qcom,freq-domain = <&cpufreq_hw 0>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
+@@ -265,8 +265,8 @@
+ 			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+ 					   &LITTLE_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			capacity-dmips-mhz = <607>;
+-			dynamic-power-coefficient = <100>;
++			capacity-dmips-mhz = <611>;
++			dynamic-power-coefficient = <154>;
+ 			qcom,freq-domain = <&cpufreq_hw 0>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
+@@ -288,7 +288,7 @@
+ 			cpu-idle-states = <&BIG_CPU_SLEEP_0
+ 					   &BIG_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			dynamic-power-coefficient = <396>;
++			dynamic-power-coefficient = <442>;
+ 			qcom,freq-domain = <&cpufreq_hw 1>;
+ 			operating-points-v2 = <&cpu4_opp_table>;
+ 			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
+@@ -310,7 +310,7 @@
+ 			cpu-idle-states = <&BIG_CPU_SLEEP_0
+ 					   &BIG_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			dynamic-power-coefficient = <396>;
++			dynamic-power-coefficient = <442>;
+ 			qcom,freq-domain = <&cpufreq_hw 1>;
+ 			operating-points-v2 = <&cpu4_opp_table>;
+ 			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
+@@ -332,7 +332,7 @@
+ 			cpu-idle-states = <&BIG_CPU_SLEEP_0
+ 					   &BIG_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			dynamic-power-coefficient = <396>;
++			dynamic-power-coefficient = <442>;
+ 			qcom,freq-domain = <&cpufreq_hw 1>;
+ 			operating-points-v2 = <&cpu4_opp_table>;
+ 			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
+@@ -354,7 +354,7 @@
+ 			cpu-idle-states = <&BIG_CPU_SLEEP_0
+ 					   &BIG_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			dynamic-power-coefficient = <396>;
++			dynamic-power-coefficient = <442>;
+ 			qcom,freq-domain = <&cpufreq_hw 1>;
+ 			operating-points-v2 = <&cpu4_opp_table>;
+ 			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
+@@ -1816,8 +1816,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x60200000 0 0x60200000 0x0 0x100000>,
+-				 <0x02000000 0x0 0x60300000 0 0x60300000 0x0 0xd00000>;
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x60200000 0x0 0x100000>,
++				 <0x02000000 0x0 0x60300000 0x0 0x60300000 0x0 0xd00000>;
+ 
+ 			interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+@@ -1920,7 +1920,7 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x01000000 0x0 0x40200000 0x0 0x40200000 0x0 0x100000>,
++			ranges = <0x01000000 0x0 0x00000000 0x0 0x40200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x40300000 0x0 0x40300000 0x0 0x1fd00000>;
+ 
+ 			interrupts = <GIC_SPI 307 IRQ_TYPE_EDGE_RISING>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+index 4c7d7e8f8e289..c4a6dfae93aad 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+@@ -49,17 +49,14 @@
+ 		opp-shared;
+ 		opp-800000000 {
+ 			opp-hz = /bits/ 64 <800000000>;
+-			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+-			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1200000000 {
+ 			opp-hz = /bits/ 64 <1200000000>;
+-			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
+ 			opp-suspend;
+ 		};
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990.dtsi b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+index 37159b9408e8a..e91d197a4c0b4 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+@@ -60,17 +60,14 @@
+ 		opp-shared;
+ 		opp-800000000 {
+ 			opp-hz = /bits/ 64 <800000000>;
+-			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+-			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1200000000 {
+ 			opp-hz = /bits/ 64 <1200000000>;
+-			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
+ 			opp-suspend;
+ 		};
+diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
+index 657c921fd784a..c16ed5b68768e 100644
+--- a/arch/arm64/include/asm/debug-monitors.h
++++ b/arch/arm64/include/asm/debug-monitors.h
+@@ -116,6 +116,7 @@ void user_regs_reset_single_step(struct user_pt_regs *regs,
+ void kernel_enable_single_step(struct pt_regs *regs);
+ void kernel_disable_single_step(void);
+ int kernel_active_single_step(void);
++void kernel_rewind_single_step(struct pt_regs *regs);
+ 
+ #ifdef CONFIG_HAVE_HW_BREAKPOINT
+ int reinstall_suspended_bps(struct pt_regs *regs);
+diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
+index eaa2cd92e4c10..7155055a5bebc 100644
+--- a/arch/arm64/include/asm/scs.h
++++ b/arch/arm64/include/asm/scs.h
+@@ -9,15 +9,16 @@
+ #ifdef CONFIG_SHADOW_CALL_STACK
+ 	scs_sp	.req	x18
+ 
+-	.macro scs_load tsk, tmp
+-	ldr	scs_sp, [\tsk, #TSK_TI_SCS_SP]
++	.macro scs_load_current
++	get_current_task scs_sp
++	ldr	scs_sp, [scs_sp, #TSK_TI_SCS_SP]
+ 	.endm
+ 
+ 	.macro scs_save tsk, tmp
+ 	str	scs_sp, [\tsk, #TSK_TI_SCS_SP]
+ 	.endm
+ #else
+-	.macro scs_load tsk, tmp
++	.macro scs_load_current
+ 	.endm
+ 
+ 	.macro scs_save tsk, tmp
+diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
+index fa76151de6ff1..38a0213fdbeee 100644
+--- a/arch/arm64/kernel/debug-monitors.c
++++ b/arch/arm64/kernel/debug-monitors.c
+@@ -439,6 +439,11 @@ int kernel_active_single_step(void)
+ }
+ NOKPROBE_SYMBOL(kernel_active_single_step);
+ 
++void kernel_rewind_single_step(struct pt_regs *regs)
++{
++	set_regs_spsr_ss(regs);
++}
++
+ /* ptrace API */
+ void user_enable_single_step(struct task_struct *task)
+ {
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index d5bc1dbdd2fda..55e477f73158d 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -221,7 +221,7 @@ alternative_else_nop_endif
+ 
+ 	ptrauth_keys_install_kernel tsk, x20, x22, x23
+ 
+-	scs_load tsk, x20
++	scs_load_current
+ 	.else
+ 	add	x21, sp, #S_FRAME_SIZE
+ 	get_current_task tsk
+@@ -431,9 +431,7 @@ SYM_CODE_END(__swpan_exit_el0)
+ 
+ 	.macro	irq_stack_entry
+ 	mov	x19, sp			// preserve the original sp
+-#ifdef CONFIG_SHADOW_CALL_STACK
+-	mov	x24, scs_sp		// preserve the original shadow stack
+-#endif
++	scs_save tsk			// preserve the original shadow stack
+ 
+ 	/*
+ 	 * Compare sp with the base of the task stack.
+@@ -467,9 +465,7 @@ SYM_CODE_END(__swpan_exit_el0)
+ 	 */
+ 	.macro	irq_stack_exit
+ 	mov	sp, x19
+-#ifdef CONFIG_SHADOW_CALL_STACK
+-	mov	scs_sp, x24
+-#endif
++	scs_load_current
+ 	.endm
+ 
+ /* GPRs used by entry code */
+@@ -1025,7 +1021,7 @@ SYM_FUNC_START(cpu_switch_to)
+ 	msr	sp_el0, x1
+ 	ptrauth_keys_install_kernel x1, x8, x9, x10
+ 	scs_save x0, x8
+-	scs_load x1, x8
++	scs_load_current
+ 	ret
+ SYM_FUNC_END(cpu_switch_to)
+ NOKPROBE(cpu_switch_to)
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index e1c25fa3b8e6c..351ee64c7deb4 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -747,7 +747,7 @@ SYM_FUNC_START_LOCAL(__secondary_switched)
+ 	ldr	x2, [x0, #CPU_BOOT_TASK]
+ 	cbz	x2, __secondary_too_slow
+ 	msr	sp_el0, x2
+-	scs_load x2, x3
++	scs_load_current
+ 	mov	x29, #0
+ 	mov	x30, #0
+ 
+diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c
+index 1a157ca33262d..e4e95821b1f6c 100644
+--- a/arch/arm64/kernel/kgdb.c
++++ b/arch/arm64/kernel/kgdb.c
+@@ -223,6 +223,8 @@ int kgdb_arch_handle_exception(int exception_vector, int signo,
+ 		 */
+ 		if (!kernel_active_single_step())
+ 			kernel_enable_single_step(linux_regs);
++		else
++			kernel_rewind_single_step(linux_regs);
+ 		err = 0;
+ 		break;
+ 	default:
+diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
+index 20ba5136ac3dd..32bb26be8a9b1 100644
+--- a/arch/arm64/kvm/psci.c
++++ b/arch/arm64/kvm/psci.c
+@@ -499,6 +499,8 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	u64 val;
+ 	int wa_level;
+ 
++	if (KVM_REG_SIZE(reg->id) != sizeof(val))
++		return -ENOENT;
+ 	if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)))
+ 		return -EFAULT;
+ 
+diff --git a/arch/ia64/kernel/salinfo.c b/arch/ia64/kernel/salinfo.c
+index a25ab9b37953e..bb99b543dc672 100644
+--- a/arch/ia64/kernel/salinfo.c
++++ b/arch/ia64/kernel/salinfo.c
+@@ -581,7 +581,7 @@ static int salinfo_cpu_pre_down(unsigned int cpu)
+  * 'data' contains an integer that corresponds to the feature we're
+  * testing
+  */
+-static int proc_salinfo_show(struct seq_file *m, void *v)
++static int __maybe_unused proc_salinfo_show(struct seq_file *m, void *v)
+ {
+ 	unsigned long data = (unsigned long)v;
+ 	seq_puts(m, (sal_platform_features & data) ? "1\n" : "0\n");
+diff --git a/arch/ia64/mm/contig.c b/arch/ia64/mm/contig.c
+index e30e360beef84..c638e012ad051 100644
+--- a/arch/ia64/mm/contig.c
++++ b/arch/ia64/mm/contig.c
+@@ -79,7 +79,7 @@ skip:
+ 	return __per_cpu_start + __per_cpu_offset[smp_processor_id()];
+ }
+ 
+-static inline void
++static inline __init void
+ alloc_per_cpu_data(void)
+ {
+ 	size_t size = PERCPU_PAGE_SIZE * num_possible_cpus();
+diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
+index b331f94d20ac1..292a28a2008f0 100644
+--- a/arch/ia64/mm/hugetlbpage.c
++++ b/arch/ia64/mm/hugetlbpage.c
+@@ -57,7 +57,7 @@ huge_pte_offset (struct mm_struct *mm, unsigned long addr, unsigned long sz)
+ 
+ 	pgd = pgd_offset(mm, taddr);
+ 	if (pgd_present(*pgd)) {
+-		p4d = p4d_offset(pgd, addr);
++		p4d = p4d_offset(pgd, taddr);
+ 		if (p4d_present(*p4d)) {
+ 			pud = pud_offset(p4d, taddr);
+ 			if (pud_present(*pud)) {
+diff --git a/arch/mips/fw/lib/cmdline.c b/arch/mips/fw/lib/cmdline.c
+index f24cbb4a39b50..892765b742bbc 100644
+--- a/arch/mips/fw/lib/cmdline.c
++++ b/arch/mips/fw/lib/cmdline.c
+@@ -53,7 +53,7 @@ char *fw_getenv(char *envname)
+ {
+ 	char *result = NULL;
+ 
+-	if (_fw_envp != NULL) {
++	if (_fw_envp != NULL && fw_envp(0) != NULL) {
+ 		/*
+ 		 * Return a pointer to the given environment variable.
+ 		 * YAMON uses "name", "value" pairs, while U-Boot uses
+diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S
+index b42d32d79b2e6..7257e942731df 100644
+--- a/arch/openrisc/kernel/entry.S
++++ b/arch/openrisc/kernel/entry.S
+@@ -173,7 +173,6 @@ handler:							;\
+ 	l.sw    PT_GPR28(r1),r28					;\
+ 	l.sw    PT_GPR29(r1),r29					;\
+ 	/* r30 already save */					;\
+-/*        l.sw    PT_GPR30(r1),r30*/					;\
+ 	l.sw    PT_GPR31(r1),r31					;\
+ 	TRACE_IRQS_OFF_ENTRY						;\
+ 	/* Store -1 in orig_gpr11 for non-syscall exceptions */	;\
+@@ -211,9 +210,8 @@ handler:							;\
+ 	l.sw    PT_GPR27(r1),r27					;\
+ 	l.sw    PT_GPR28(r1),r28					;\
+ 	l.sw    PT_GPR29(r1),r29					;\
+-	/* r31 already saved */					;\
+-	l.sw    PT_GPR30(r1),r30					;\
+-/*        l.sw    PT_GPR31(r1),r31	*/				;\
++	/* r30 already saved */						;\
++	l.sw    PT_GPR31(r1),r31					;\
+ 	/* Store -1 in orig_gpr11 for non-syscall exceptions */	;\
+ 	l.addi	r30,r0,-1					;\
+ 	l.sw	PT_ORIG_GPR11(r1),r30				;\
+diff --git a/arch/parisc/kernel/real2.S b/arch/parisc/kernel/real2.S
+index 2b16d8d6598f1..c37010a135865 100644
+--- a/arch/parisc/kernel/real2.S
++++ b/arch/parisc/kernel/real2.S
+@@ -248,9 +248,6 @@ ENTRY_CFI(real64_call_asm)
+ 	/* save fn */
+ 	copy	%arg2, %r31
+ 
+-	/* set up the new ap */
+-	ldo	64(%arg1), %r29
+-
+ 	/* load up the arg registers from the saved arg area */
+ 	/* 32-bit calling convention passes first 4 args in registers */
+ 	ldd	0*REG_SZ(%arg1), %arg0		/* note overwriting arg0 */
+@@ -262,7 +259,9 @@ ENTRY_CFI(real64_call_asm)
+ 	ldd	7*REG_SZ(%arg1), %r19
+ 	ldd	1*REG_SZ(%arg1), %arg1		/* do this one last! */
+ 
++	/* set up real-mode stack and real-mode ap */
+ 	tophys_r1 %sp
++	ldo	-16(%sp), %r29			/* Reference param save area */
+ 
+ 	b,l	rfi_virt2real,%r2
+ 	nop
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index c2e407a112a28..5976a25c6264d 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -399,7 +399,7 @@ static char *__fetch_rtas_last_error(char *altbuf)
+ 				buf = kmalloc(RTAS_ERROR_LOG_MAX, GFP_ATOMIC);
+ 		}
+ 		if (buf)
+-			memcpy(buf, rtas_err_buf, RTAS_ERROR_LOG_MAX);
++			memmove(buf, rtas_err_buf, RTAS_ERROR_LOG_MAX);
+ 	}
+ 
+ 	return buf;
+diff --git a/arch/powerpc/platforms/512x/clock-commonclk.c b/arch/powerpc/platforms/512x/clock-commonclk.c
+index 30342b60aa63f..42c3d40355d90 100644
+--- a/arch/powerpc/platforms/512x/clock-commonclk.c
++++ b/arch/powerpc/platforms/512x/clock-commonclk.c
+@@ -984,7 +984,7 @@ static void mpc5121_clk_provide_migration_support(void)
+ 
+ #define NODE_PREP do { \
+ 	of_address_to_resource(np, 0, &res); \
+-	snprintf(devname, sizeof(devname), "%08x.%s", res.start, np->name); \
++	snprintf(devname, sizeof(devname), "%pa.%s", &res.start, np->name); \
+ } while (0)
+ 
+ #define NODE_CHK(clkname, clkitem, regnode, regflag) do { \
+diff --git a/arch/powerpc/platforms/embedded6xx/flipper-pic.c b/arch/powerpc/platforms/embedded6xx/flipper-pic.c
+index d39a9213a3e69..7dd2e2f97aae5 100644
+--- a/arch/powerpc/platforms/embedded6xx/flipper-pic.c
++++ b/arch/powerpc/platforms/embedded6xx/flipper-pic.c
+@@ -144,7 +144,7 @@ static struct irq_domain * __init flipper_pic_init(struct device_node *np)
+ 	}
+ 	io_base = ioremap(res.start, resource_size(&res));
+ 
+-	pr_info("controller at 0x%08x mapped to 0x%p\n", res.start, io_base);
++	pr_info("controller at 0x%pa mapped to 0x%p\n", &res.start, io_base);
+ 
+ 	__flipper_quiesce(io_base);
+ 
+diff --git a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
+index de10c13de15c6..c6b492ebb7662 100644
+--- a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
++++ b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
+@@ -173,7 +173,7 @@ static struct irq_domain *hlwd_pic_init(struct device_node *np)
+ 		return NULL;
+ 	}
+ 
+-	pr_info("controller at 0x%08x mapped to 0x%p\n", res.start, io_base);
++	pr_info("controller at 0x%pa mapped to 0x%p\n", &res.start, io_base);
+ 
+ 	__hlwd_quiesce(io_base);
+ 
+diff --git a/arch/powerpc/platforms/embedded6xx/wii.c b/arch/powerpc/platforms/embedded6xx/wii.c
+index a802ef957d63e..458a63a30e803 100644
+--- a/arch/powerpc/platforms/embedded6xx/wii.c
++++ b/arch/powerpc/platforms/embedded6xx/wii.c
+@@ -89,8 +89,8 @@ static void __iomem *wii_ioremap_hw_regs(char *name, char *compatible)
+ 
+ 	hw_regs = ioremap(res.start, resource_size(&res));
+ 	if (hw_regs) {
+-		pr_info("%s at 0x%08x mapped to 0x%p\n", name,
+-			res.start, hw_regs);
++		pr_info("%s at 0x%pa mapped to 0x%p\n", name,
++			&res.start, hw_regs);
+ 	}
+ 
+ out_put:
+diff --git a/arch/powerpc/sysdev/tsi108_pci.c b/arch/powerpc/sysdev/tsi108_pci.c
+index 49f9541954f8d..3664ffcbb313c 100644
+--- a/arch/powerpc/sysdev/tsi108_pci.c
++++ b/arch/powerpc/sysdev/tsi108_pci.c
+@@ -216,9 +216,8 @@ int __init tsi108_setup_pci(struct device_node *dev, u32 cfg_phys, int primary)
+ 
+ 	(hose)->ops = &tsi108_direct_pci_ops;
+ 
+-	printk(KERN_INFO "Found tsi108 PCI host bridge at 0x%08x. "
+-	       "Firmware bus number: %d->%d\n",
+-	       rsrc.start, hose->first_busno, hose->last_busno);
++	pr_info("Found tsi108 PCI host bridge at 0x%pa. Firmware bus number: %d->%d\n",
++		&rsrc.start, hose->first_busno, hose->last_busno);
+ 
+ 	/* Interpret the "ranges" property */
+ 	/* This also maps the I/O region and sets isa_io/mem_base */
+diff --git a/arch/sh/Kconfig.debug b/arch/sh/Kconfig.debug
+index 97b0e26cf05a1..7bc1b10b81c96 100644
+--- a/arch/sh/Kconfig.debug
++++ b/arch/sh/Kconfig.debug
+@@ -18,7 +18,7 @@ config SH_STANDARD_BIOS
+ 
+ config STACK_DEBUG
+ 	bool "Check for stack overflows"
+-	depends on DEBUG_KERNEL
++	depends on DEBUG_KERNEL && PRINTK
+ 	help
+ 	  This option will cause messages to be printed if free stack space
+ 	  drops below a certain limit. Saying Y here will add overhead to
+diff --git a/arch/sh/kernel/cpu/sh4/sq.c b/arch/sh/kernel/cpu/sh4/sq.c
+index d432164b23b7c..c31ec0fea3003 100644
+--- a/arch/sh/kernel/cpu/sh4/sq.c
++++ b/arch/sh/kernel/cpu/sh4/sq.c
+@@ -381,7 +381,7 @@ static int __init sq_api_init(void)
+ 	if (unlikely(!sq_cache))
+ 		return ret;
+ 
+-	sq_bitmap = kzalloc(size, GFP_KERNEL);
++	sq_bitmap = kcalloc(size, sizeof(long), GFP_KERNEL);
+ 	if (unlikely(!sq_bitmap))
+ 		goto out;
+ 
+diff --git a/arch/sh/kernel/head_32.S b/arch/sh/kernel/head_32.S
+index 4adbd4ade3194..b603b7968b388 100644
+--- a/arch/sh/kernel/head_32.S
++++ b/arch/sh/kernel/head_32.S
+@@ -64,7 +64,7 @@ ENTRY(_stext)
+ 	ldc	r0, r6_bank
+ #endif
+ 
+-#ifdef CONFIG_OF_FLATTREE
++#ifdef CONFIG_OF_EARLY_FLATTREE
+ 	mov	r4, r12		! Store device tree blob pointer in r12
+ #endif
+ 	
+@@ -315,7 +315,7 @@ ENTRY(_stext)
+ 10:		
+ #endif
+ 
+-#ifdef CONFIG_OF_FLATTREE
++#ifdef CONFIG_OF_EARLY_FLATTREE
+ 	mov.l	8f, r0		! Make flat device tree available early.
+ 	jsr	@r0
+ 	 mov	r12, r4
+@@ -346,7 +346,7 @@ ENTRY(stack_start)
+ 5:	.long	start_kernel
+ 6:	.long	cpu_init
+ 7:	.long	init_thread_union
+-#if defined(CONFIG_OF_FLATTREE)
++#if defined(CONFIG_OF_EARLY_FLATTREE)
+ 8:	.long	sh_fdt_init
+ #endif
+ 
+diff --git a/arch/sh/kernel/nmi_debug.c b/arch/sh/kernel/nmi_debug.c
+index 11777867c6f5f..a212b645b4cf8 100644
+--- a/arch/sh/kernel/nmi_debug.c
++++ b/arch/sh/kernel/nmi_debug.c
+@@ -49,7 +49,7 @@ static int __init nmi_debug_setup(char *str)
+ 	register_die_notifier(&nmi_debug_nb);
+ 
+ 	if (*str != '=')
+-		return 0;
++		return 1;
+ 
+ 	for (p = str + 1; *p; p = sep + 1) {
+ 		sep = strchr(p, ',');
+@@ -70,6 +70,6 @@ static int __init nmi_debug_setup(char *str)
+ 			break;
+ 	}
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("nmi_debug", nmi_debug_setup);
+diff --git a/arch/sh/kernel/setup.c b/arch/sh/kernel/setup.c
+index 4144be650d410..556e463a43d22 100644
+--- a/arch/sh/kernel/setup.c
++++ b/arch/sh/kernel/setup.c
+@@ -244,7 +244,7 @@ void __init __weak plat_early_device_setup(void)
+ {
+ }
+ 
+-#ifdef CONFIG_OF_FLATTREE
++#ifdef CONFIG_OF_EARLY_FLATTREE
+ void __ref sh_fdt_init(phys_addr_t dt_phys)
+ {
+ 	static int done = 0;
+@@ -329,7 +329,7 @@ void __init setup_arch(char **cmdline_p)
+ 	/* Let earlyprintk output early console messages */
+ 	sh_early_platform_driver_probe("earlyprintk", 1, 1);
+ 
+-#ifdef CONFIG_OF_FLATTREE
++#ifdef CONFIG_OF_EARLY_FLATTREE
+ #ifdef CONFIG_USE_BUILTIN_DTB
+ 	unflatten_and_copy_device_tree();
+ #else
+diff --git a/arch/sh/math-emu/sfp-util.h b/arch/sh/math-emu/sfp-util.h
+index 784f541344f36..bda50762b3d33 100644
+--- a/arch/sh/math-emu/sfp-util.h
++++ b/arch/sh/math-emu/sfp-util.h
+@@ -67,7 +67,3 @@
+   } while (0)
+ 
+ #define abort()	return 0
+-
+-#define __BYTE_ORDER __LITTLE_ENDIAN
+-
+-
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 660012ab7bfa5..3e9f1c820edbf 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -553,6 +553,7 @@ struct kvm_vcpu_arch {
+ 	u64 ia32_misc_enable_msr;
+ 	u64 smbase;
+ 	u64 smi_count;
++	bool at_instruction_boundary;
+ 	bool tpr_access_reporting;
+ 	bool xsaves_enabled;
+ 	u64 ia32_xss;
+@@ -663,7 +664,7 @@ struct kvm_vcpu_arch {
+ 		u8 preempted;
+ 		u64 msr_val;
+ 		u64 last_steal;
+-		struct gfn_to_pfn_cache cache;
++		struct gfn_to_hva_cache cache;
+ 	} st;
+ 
+ 	u64 l1_tsc_offset;
+@@ -1061,6 +1062,8 @@ struct kvm_vcpu_stat {
+ 	u64 req_event;
+ 	u64 halt_poll_success_ns;
+ 	u64 halt_poll_fail_ns;
++	u64 preemption_reported;
++	u64 preemption_other;
+ };
+ 
+ struct x86_instruction_info;
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 1c96f2425eafd..25eb69f26e039 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -410,10 +410,9 @@ static unsigned int reserve_eilvt_offset(int offset, unsigned int new)
+ 		if (vector && !eilvt_entry_is_changeable(vector, new))
+ 			/* may not change if vectors are different */
+ 			return rsvd;
+-		rsvd = atomic_cmpxchg(&eilvt_offsets[offset], rsvd, new);
+-	} while (rsvd != new);
++	} while (!atomic_try_cmpxchg(&eilvt_offsets[offset], &rsvd, new));
+ 
+-	rsvd &= ~APIC_EILVT_MASKED;
++	rsvd = new & ~APIC_EILVT_MASKED;
+ 	if (rsvd && rsvd != vector)
+ 		pr_info("LVT offset %d assigned for vector 0x%02x\n",
+ 			offset, rsvd);
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 25b1d5c6af969..74794387bf59d 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -2442,17 +2442,21 @@ static int io_apic_get_redir_entries(int ioapic)
+ 
+ unsigned int arch_dynirq_lower_bound(unsigned int from)
+ {
++	unsigned int ret;
++
+ 	/*
+ 	 * dmar_alloc_hwirq() may be called before setup_IO_APIC(), so use
+ 	 * gsi_top if ioapic_dynirq_base hasn't been initialized yet.
+ 	 */
+-	if (!ioapic_initialized)
+-		return gsi_top;
++	ret = ioapic_dynirq_base ? : gsi_top;
++
+ 	/*
+-	 * For DT enabled machines ioapic_dynirq_base is irrelevant and not
+-	 * updated. So simply return @from if ioapic_dynirq_base == 0.
++	 * For DT enabled machines ioapic_dynirq_base is irrelevant and
++	 * always 0. gsi_top can be 0 if there is no IO/APIC registered.
++	 * 0 is an invalid interrupt number for dynamic allocations. Return
++	 * @from instead.
+ 	 */
+-	return ioapic_dynirq_base ? : from;
++	return ret ? : from;
+ }
+ 
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index 4f9b7c1cfc36f..cd8db6b9ca2f5 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -197,10 +197,10 @@ static DEFINE_PER_CPU(struct threshold_bank **, threshold_banks);
+  * A list of the banks enabled on each logical CPU. Controls which respective
+  * descriptors to initialize later in mce_threshold_create_device().
+  */
+-static DEFINE_PER_CPU(unsigned int, bank_map);
++static DEFINE_PER_CPU(u64, bank_map);
+ 
+ /* Map of banks that have more than MCA_MISC0 available. */
+-static DEFINE_PER_CPU(u32, smca_misc_banks_map);
++static DEFINE_PER_CPU(u64, smca_misc_banks_map);
+ 
+ static void amd_threshold_interrupt(void);
+ static void amd_deferred_error_interrupt(void);
+@@ -229,7 +229,7 @@ static void smca_set_misc_banks_map(unsigned int bank, unsigned int cpu)
+ 		return;
+ 
+ 	if (low & MASK_BLKPTR_LO)
+-		per_cpu(smca_misc_banks_map, cpu) |= BIT(bank);
++		per_cpu(smca_misc_banks_map, cpu) |= BIT_ULL(bank);
+ 
+ }
+ 
+@@ -492,7 +492,7 @@ static u32 smca_get_block_address(unsigned int bank, unsigned int block,
+ 	if (!block)
+ 		return MSR_AMD64_SMCA_MCx_MISC(bank);
+ 
+-	if (!(per_cpu(smca_misc_banks_map, cpu) & BIT(bank)))
++	if (!(per_cpu(smca_misc_banks_map, cpu) & BIT_ULL(bank)))
+ 		return 0;
+ 
+ 	return MSR_AMD64_SMCA_MCx_MISCy(bank, block - 1);
+@@ -536,7 +536,7 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
+ 	int new;
+ 
+ 	if (!block)
+-		per_cpu(bank_map, cpu) |= (1 << bank);
++		per_cpu(bank_map, cpu) |= BIT_ULL(bank);
+ 
+ 	memset(&b, 0, sizeof(b));
+ 	b.cpu			= cpu;
+@@ -1048,7 +1048,7 @@ static void amd_threshold_interrupt(void)
+ 		return;
+ 
+ 	for (bank = 0; bank < this_cpu_read(mce_num_banks); ++bank) {
+-		if (!(per_cpu(bank_map, cpu) & (1 << bank)))
++		if (!(per_cpu(bank_map, cpu) & BIT_ULL(bank)))
+ 			continue;
+ 
+ 		first_block = bp[bank]->blocks;
+@@ -1525,7 +1525,7 @@ int mce_threshold_create_device(unsigned int cpu)
+ 		return -ENOMEM;
+ 
+ 	for (bank = 0; bank < numbanks; ++bank) {
+-		if (!(this_cpu_read(bank_map) & (1 << bank)))
++		if (!(this_cpu_read(bank_map) & BIT_ULL(bank)))
+ 			continue;
+ 		err = threshold_create_bank(bp, cpu, bank);
+ 		if (err) {
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index 571220ac8beaa..835b948095cde 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -25,17 +25,7 @@
+  */
+ union fpregs_state init_fpstate __read_mostly;
+ 
+-/*
+- * Track whether the kernel is using the FPU state
+- * currently.
+- *
+- * This flag is used:
+- *
+- *   - by IRQ context code to potentially use the FPU
+- *     if it's unused.
+- *
+- *   - to debug kernel_fpu_begin()/end() correctness
+- */
++/* Track in-kernel FPU usage */
+ static DEFINE_PER_CPU(bool, in_kernel_fpu);
+ 
+ /*
+@@ -43,42 +33,37 @@ static DEFINE_PER_CPU(bool, in_kernel_fpu);
+  */
+ DEFINE_PER_CPU(struct fpu *, fpu_fpregs_owner_ctx);
+ 
+-static bool kernel_fpu_disabled(void)
+-{
+-	return this_cpu_read(in_kernel_fpu);
+-}
+-
+-static bool interrupted_kernel_fpu_idle(void)
+-{
+-	return !kernel_fpu_disabled();
+-}
+-
+-/*
+- * Were we in user mode (or vm86 mode) when we were
+- * interrupted?
+- *
+- * Doing kernel_fpu_begin/end() is ok if we are running
+- * in an interrupt context from user mode - we'll just
+- * save the FPU state as required.
+- */
+-static bool interrupted_user_mode(void)
+-{
+-	struct pt_regs *regs = get_irq_regs();
+-	return regs && user_mode(regs);
+-}
+-
+ /*
+  * Can we use the FPU in kernel mode with the
+  * whole "kernel_fpu_begin/end()" sequence?
+- *
+- * It's always ok in process context (ie "not interrupt")
+- * but it is sometimes ok even from an irq.
+  */
+ bool irq_fpu_usable(void)
+ {
+-	return !in_interrupt() ||
+-		interrupted_user_mode() ||
+-		interrupted_kernel_fpu_idle();
++	if (WARN_ON_ONCE(in_nmi()))
++		return false;
++
++	/* In kernel FPU usage already active? */
++	if (this_cpu_read(in_kernel_fpu))
++		return false;
++
++	/*
++	 * When not in NMI or hard interrupt context, FPU can be used in:
++	 *
++	 * - Task context except from within fpregs_lock()'ed critical
++	 *   regions.
++	 *
++	 * - Soft interrupt processing context which cannot happen
++	 *   while in a fpregs_lock()'ed critical region.
++	 */
++	if (!in_irq())
++		return true;
++
++	/*
++	 * In hard interrupt context it's safe when soft interrupts
++	 * are enabled, which means the interrupt did not hit in
++	 * a fpregs_lock()'ed critical region.
++	 */
++	return !softirq_count();
+ }
+ EXPORT_SYMBOL(irq_fpu_usable);
+ 
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 09ec1cda2d687..e03e320847cdd 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -1562,16 +1562,19 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *current_vcpu, u64 ingpa,
+ 
+ 	cpumask_clear(&hv_vcpu->tlb_flush);
+ 
+-	vcpu_mask = all_cpus ? NULL :
+-		sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask,
+-					vp_bitmap, vcpu_bitmap);
+-
+ 	/*
+ 	 * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
+ 	 * analyze it here, flush TLB regardless of the specified address space.
+ 	 */
+-	kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST,
+-				    NULL, vcpu_mask, &hv_vcpu->tlb_flush);
++	if (all_cpus) {
++		kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH_GUEST);
++	} else {
++		vcpu_mask = sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask,
++						    vp_bitmap, vcpu_bitmap);
++
++		kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST,
++					    NULL, vcpu_mask, &hv_vcpu->tlb_flush);
++	}
+ 
+ ret_success:
+ 	/* We always do full TLB flush, set rep_done = rep_cnt. */
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 5775983fec56e..7b2b61309d8a4 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -3983,6 +3983,8 @@ out:
+ 
+ static void svm_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+ {
++	if (to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_INTR)
++		vcpu->arch.at_instruction_boundary = true;
+ }
+ 
+ static void svm_sched_in(struct kvm_vcpu *vcpu, int cpu)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 2c5d8b9f9873f..9aedc7b06da7a 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6510,6 +6510,7 @@ static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
+ 		return;
+ 
+ 	handle_interrupt_nmi_irqoff(vcpu, gate_offset(desc));
++	vcpu->arch.at_instruction_boundary = true;
+ }
+ 
+ static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+@@ -7536,6 +7537,21 @@ static int vmx_check_intercept(struct kvm_vcpu *vcpu,
+ 		/* FIXME: produce nested vmexit and return X86EMUL_INTERCEPTED.  */
+ 		break;
+ 
++	case x86_intercept_pause:
++		/*
++		 * PAUSE is a single-byte NOP with a REPE prefix, i.e. collides
++		 * with vanilla NOPs in the emulator.  Apply the interception
++		 * check only to actual PAUSE instructions.  Don't check
++		 * PAUSE-loop-exiting, software can't expect a given PAUSE to
++		 * exit, i.e. KVM is within its rights to allow L2 to execute
++		 * the PAUSE.
++		 */
++		if ((info->rep_prefix != REPE_PREFIX) ||
++		    !nested_cpu_has2(vmcs12, CPU_BASED_PAUSE_EXITING))
++			return X86EMUL_CONTINUE;
++
++		break;
++
+ 	/* TODO: check more intercepts... */
+ 	default:
+ 		break;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0ccc8d1b972c9..5fbae8cc06977 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -231,6 +231,8 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
+ 	VCPU_STAT("l1d_flush", l1d_flush),
+ 	VCPU_STAT("halt_poll_success_ns", halt_poll_success_ns),
+ 	VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
++	VCPU_STAT("preemption_reported", preemption_reported),
++	VCPU_STAT("preemption_other", preemption_other),
+ 	VM_STAT("mmu_shadow_zapped", mmu_shadow_zapped),
+ 	VM_STAT("mmu_pte_write", mmu_pte_write),
+ 	VM_STAT("mmu_pde_zapped", mmu_pde_zapped),
+@@ -3020,51 +3022,95 @@ static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
+ 
+ static void record_steal_time(struct kvm_vcpu *vcpu)
+ {
+-	struct kvm_host_map map;
+-	struct kvm_steal_time *st;
++	struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
++	struct kvm_steal_time __user *st;
++	struct kvm_memslots *slots;
++	gpa_t gpa = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
++	u64 steal;
++	u32 version;
+ 
+ 	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
+ 		return;
+ 
+-	/* -EAGAIN is returned in atomic context so we can just return. */
+-	if (kvm_map_gfn(vcpu, vcpu->arch.st.msr_val >> PAGE_SHIFT,
+-			&map, &vcpu->arch.st.cache, false))
++	if (WARN_ON_ONCE(current->mm != vcpu->kvm->mm))
+ 		return;
+ 
+-	st = map.hva +
+-		offset_in_page(vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS);
++	slots = kvm_memslots(vcpu->kvm);
++
++	if (unlikely(slots->generation != ghc->generation ||
++		     gpa != ghc->gpa ||
++		     kvm_is_error_hva(ghc->hva) || !ghc->memslot)) {
++		/* We rely on the fact that it fits in a single page. */
++		BUILD_BUG_ON((sizeof(*st) - 1) & KVM_STEAL_VALID_BITS);
+ 
++		if (kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, gpa, sizeof(*st)) ||
++		    kvm_is_error_hva(ghc->hva) || !ghc->memslot)
++			return;
++	}
++
++	st = (struct kvm_steal_time __user *)ghc->hva;
+ 	/*
+ 	 * Doing a TLB flush here, on the guest's behalf, can avoid
+ 	 * expensive IPIs.
+ 	 */
+ 	if (guest_pv_has(vcpu, KVM_FEATURE_PV_TLB_FLUSH)) {
++		u8 st_preempted = 0;
++		int err = -EFAULT;
++
++		if (!user_access_begin(st, sizeof(*st)))
++			return;
++
++		asm volatile("1: xchgb %0, %2\n"
++			     "xor %1, %1\n"
++			     "2:\n"
++			     _ASM_EXTABLE_UA(1b, 2b)
++			     : "+q" (st_preempted),
++			       "+&r" (err),
++			       "+m" (st->preempted));
++		if (err)
++			goto out;
++
++		user_access_end();
++
++		vcpu->arch.st.preempted = 0;
++
+ 		trace_kvm_pv_tlb_flush(vcpu->vcpu_id,
+-				       st->preempted & KVM_VCPU_FLUSH_TLB);
+-		if (xchg(&st->preempted, 0) & KVM_VCPU_FLUSH_TLB)
++				       st_preempted & KVM_VCPU_FLUSH_TLB);
++		if (st_preempted & KVM_VCPU_FLUSH_TLB)
+ 			kvm_vcpu_flush_tlb_guest(vcpu);
++
++		if (!user_access_begin(st, sizeof(*st)))
++			goto dirty;
+ 	} else {
+-		st->preempted = 0;
+-	}
++		if (!user_access_begin(st, sizeof(*st)))
++			return;
+ 
+-	vcpu->arch.st.preempted = 0;
++		unsafe_put_user(0, &st->preempted, out);
++		vcpu->arch.st.preempted = 0;
++	}
+ 
+-	if (st->version & 1)
+-		st->version += 1;  /* first time write, random junk */
++	unsafe_get_user(version, &st->version, out);
++	if (version & 1)
++		version += 1;  /* first time write, random junk */
+ 
+-	st->version += 1;
++	version += 1;
++	unsafe_put_user(version, &st->version, out);
+ 
+ 	smp_wmb();
+ 
+-	st->steal += current->sched_info.run_delay -
++	unsafe_get_user(steal, &st->steal, out);
++	steal += current->sched_info.run_delay -
+ 		vcpu->arch.st.last_steal;
+ 	vcpu->arch.st.last_steal = current->sched_info.run_delay;
++	unsafe_put_user(steal, &st->steal, out);
+ 
+-	smp_wmb();
++	version += 1;
++	unsafe_put_user(version, &st->version, out);
+ 
+-	st->version += 1;
+-
+-	kvm_unmap_gfn(vcpu, &map, &vcpu->arch.st.cache, true, false);
++ out:
++	user_access_end();
++ dirty:
++	mark_page_dirty_in_slot(ghc->memslot, gpa_to_gfn(ghc->gpa));
+ }
+ 
+ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+@@ -4049,51 +4095,67 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ 
+ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
+ {
+-	struct kvm_host_map map;
+-	struct kvm_steal_time *st;
++	struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
++	struct kvm_steal_time __user *st;
++	struct kvm_memslots *slots;
++	static const u8 preempted = KVM_VCPU_PREEMPTED;
++	gpa_t gpa = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
++
++	/*
++	 * The vCPU can be marked preempted if and only if the VM-Exit was on
++	 * an instruction boundary and will not trigger guest emulation of any
++	 * kind (see vcpu_run).  Vendor specific code controls (conservatively)
++	 * when this is true, for example allowing the vCPU to be marked
++	 * preempted if and only if the VM-Exit was due to a host interrupt.
++	 */
++	if (!vcpu->arch.at_instruction_boundary) {
++		vcpu->stat.preemption_other++;
++		return;
++	}
+ 
++	vcpu->stat.preemption_reported++;
+ 	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
+ 		return;
+ 
+ 	if (vcpu->arch.st.preempted)
+ 		return;
+ 
+-	if (kvm_map_gfn(vcpu, vcpu->arch.st.msr_val >> PAGE_SHIFT, &map,
+-			&vcpu->arch.st.cache, true))
++	/* This happens on process exit */
++	if (unlikely(current->mm != vcpu->kvm->mm))
+ 		return;
+ 
+-	st = map.hva +
+-		offset_in_page(vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS);
++	slots = kvm_memslots(vcpu->kvm);
++
++	if (unlikely(slots->generation != ghc->generation ||
++		     gpa != ghc->gpa ||
++		     kvm_is_error_hva(ghc->hva) || !ghc->memslot))
++		return;
+ 
+-	st->preempted = vcpu->arch.st.preempted = KVM_VCPU_PREEMPTED;
++	st = (struct kvm_steal_time __user *)ghc->hva;
++	BUILD_BUG_ON(sizeof(st->preempted) != sizeof(preempted));
+ 
+-	kvm_unmap_gfn(vcpu, &map, &vcpu->arch.st.cache, true, true);
++	if (!copy_to_user_nofault(&st->preempted, &preempted, sizeof(preempted)))
++		vcpu->arch.st.preempted = KVM_VCPU_PREEMPTED;
++
++	mark_page_dirty_in_slot(ghc->memslot, gpa_to_gfn(ghc->gpa));
+ }
+ 
+ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+ {
+ 	int idx;
+ 
+-	if (vcpu->preempted)
++	if (vcpu->preempted) {
+ 		vcpu->arch.preempted_in_kernel = !kvm_x86_ops.get_cpl(vcpu);
+ 
+-	/*
+-	 * Disable page faults because we're in atomic context here.
+-	 * kvm_write_guest_offset_cached() would call might_fault()
+-	 * that relies on pagefault_disable() to tell if there's a
+-	 * bug. NOTE: the write to guest memory may not go through if
+-	 * during postcopy live migration or if there's heavy guest
+-	 * paging.
+-	 */
+-	pagefault_disable();
+-	/*
+-	 * kvm_memslots() will be called by
+-	 * kvm_write_guest_offset_cached() so take the srcu lock.
+-	 */
+-	idx = srcu_read_lock(&vcpu->kvm->srcu);
+-	kvm_steal_time_set_preempted(vcpu);
+-	srcu_read_unlock(&vcpu->kvm->srcu, idx);
+-	pagefault_enable();
++		/*
++		 * Take the srcu lock as memslots will be accessed to check the gfn
++		 * cache generation against the memslots generation.
++		 */
++		idx = srcu_read_lock(&vcpu->kvm->srcu);
++		kvm_steal_time_set_preempted(vcpu);
++		srcu_read_unlock(&vcpu->kvm->srcu, idx);
++	}
++
+ 	kvm_x86_ops.vcpu_put(vcpu);
+ 	vcpu->arch.last_host_tsc = rdtsc();
+ 	/*
+@@ -9357,6 +9419,13 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
+ 	vcpu->arch.l1tf_flush_l1d = true;
+ 
+ 	for (;;) {
++		/*
++		 * If another guest vCPU requests a PV TLB flush in the middle
++		 * of instruction emulation, the rest of the emulation could
++		 * use a stale page translation. Assume that any code after
++		 * this point can start executing an instruction.
++		 */
++		vcpu->arch.at_instruction_boundary = false;
+ 		if (kvm_vcpu_running(vcpu)) {
+ 			r = vcpu_enter_guest(vcpu);
+ 		} else {
+@@ -10242,11 +10311,8 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
+ 
+ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
+ {
+-	struct gfn_to_pfn_cache *cache = &vcpu->arch.st.cache;
+ 	int idx;
+ 
+-	kvm_release_pfn(cache->pfn, cache->dirty, cache);
+-
+ 	kvmclock_reset(vcpu);
+ 
+ 	kvm_x86_ops.vcpu_free(vcpu);
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 9afb79b322fb0..d0d0dd8151f75 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -1444,6 +1444,13 @@ bool blk_update_request(struct request *req, blk_status_t error,
+ 		req->q->integrity.profile->complete_fn(req, nr_bytes);
+ #endif
+ 
++	/*
++	 * Upper layers may call blk_crypto_evict_key() anytime after the last
++	 * bio_endio().  Therefore, the keyslot must be released before that.
++	 */
++	if (blk_crypto_rq_has_keyslot(req) && nr_bytes >= blk_rq_bytes(req))
++		__blk_crypto_rq_put_keyslot(req);
++
+ 	if (unlikely(error && !blk_rq_is_passthrough(req) &&
+ 		     !(req->rq_flags & RQF_QUIET)))
+ 		print_req_error(req, error, __func__);
+diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
+index 0d36aae538d7b..8e08345576203 100644
+--- a/block/blk-crypto-internal.h
++++ b/block/blk-crypto-internal.h
+@@ -60,6 +60,11 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
+ 	return rq->crypt_ctx;
+ }
+ 
++static inline bool blk_crypto_rq_has_keyslot(struct request *rq)
++{
++	return rq->crypt_keyslot;
++}
++
+ #else /* CONFIG_BLK_INLINE_ENCRYPTION */
+ 
+ static inline bool bio_crypt_rq_ctx_compatible(struct request *rq,
+@@ -93,6 +98,11 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
+ 	return false;
+ }
+ 
++static inline bool blk_crypto_rq_has_keyslot(struct request *rq)
++{
++	return false;
++}
++
+ #endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+ 
+ void __bio_crypt_advance(struct bio *bio, unsigned int bytes);
+@@ -127,14 +137,21 @@ static inline bool blk_crypto_bio_prep(struct bio **bio_ptr)
+ 	return true;
+ }
+ 
+-blk_status_t __blk_crypto_init_request(struct request *rq);
+-static inline blk_status_t blk_crypto_init_request(struct request *rq)
++blk_status_t __blk_crypto_rq_get_keyslot(struct request *rq);
++static inline blk_status_t blk_crypto_rq_get_keyslot(struct request *rq)
+ {
+ 	if (blk_crypto_rq_is_encrypted(rq))
+-		return __blk_crypto_init_request(rq);
++		return __blk_crypto_rq_get_keyslot(rq);
+ 	return BLK_STS_OK;
+ }
+ 
++void __blk_crypto_rq_put_keyslot(struct request *rq);
++static inline void blk_crypto_rq_put_keyslot(struct request *rq)
++{
++	if (blk_crypto_rq_has_keyslot(rq))
++		__blk_crypto_rq_put_keyslot(rq);
++}
++
+ void __blk_crypto_free_request(struct request *rq);
+ static inline void blk_crypto_free_request(struct request *rq)
+ {
+@@ -173,7 +190,7 @@ static inline blk_status_t blk_crypto_insert_cloned_request(struct request *rq)
+ {
+ 
+ 	if (blk_crypto_rq_is_encrypted(rq))
+-		return blk_crypto_init_request(rq);
++		return blk_crypto_rq_get_keyslot(rq);
+ 	return BLK_STS_OK;
+ }
+ 
+diff --git a/block/blk-crypto.c b/block/blk-crypto.c
+index 5ffa9aab49de0..87ec55d4354f5 100644
+--- a/block/blk-crypto.c
++++ b/block/blk-crypto.c
+@@ -13,6 +13,7 @@
+ #include <linux/blkdev.h>
+ #include <linux/keyslot-manager.h>
+ #include <linux/module.h>
++#include <linux/ratelimit.h>
+ #include <linux/slab.h>
+ 
+ #include "blk-crypto-internal.h"
+@@ -216,26 +217,26 @@ static bool bio_crypt_check_alignment(struct bio *bio)
+ 	return true;
+ }
+ 
+-blk_status_t __blk_crypto_init_request(struct request *rq)
++blk_status_t __blk_crypto_rq_get_keyslot(struct request *rq)
+ {
+ 	return blk_ksm_get_slot_for_key(rq->q->ksm, rq->crypt_ctx->bc_key,
+ 					&rq->crypt_keyslot);
+ }
+ 
+-/**
+- * __blk_crypto_free_request - Uninitialize the crypto fields of a request.
+- *
+- * @rq: The request whose crypto fields to uninitialize.
+- *
+- * Completely uninitializes the crypto fields of a request. If a keyslot has
+- * been programmed into some inline encryption hardware, that keyslot is
+- * released. The rq->crypt_ctx is also freed.
+- */
+-void __blk_crypto_free_request(struct request *rq)
++void __blk_crypto_rq_put_keyslot(struct request *rq)
+ {
+ 	blk_ksm_put_slot(rq->crypt_keyslot);
++	rq->crypt_keyslot = NULL;
++}
++
++void __blk_crypto_free_request(struct request *rq)
++{
++	/* The keyslot, if one was needed, should have been released earlier. */
++	if (WARN_ON_ONCE(rq->crypt_keyslot))
++		__blk_crypto_rq_put_keyslot(rq);
++
+ 	mempool_free(rq->crypt_ctx, bio_crypt_ctx_pool);
+-	blk_crypto_rq_set_defaults(rq);
++	rq->crypt_ctx = NULL;
+ }
+ 
+ /**
+@@ -384,28 +385,38 @@ int blk_crypto_start_using_key(const struct blk_crypto_key *key,
+ }
+ 
+ /**
+- * blk_crypto_evict_key() - Evict a key from any inline encryption hardware
+- *			    it may have been programmed into
+- * @q: The request queue who's associated inline encryption hardware this key
+- *     might have been programmed into
+- * @key: The key to evict
++ * blk_crypto_evict_key() - Evict a blk_crypto_key from a request_queue
++ * @q: a request_queue on which I/O using the key may have been done
++ * @key: the key to evict
+  *
+- * Upper layers (filesystems) must call this function to ensure that a key is
+- * evicted from any hardware that it might have been programmed into.  The key
+- * must not be in use by any in-flight IO when this function is called.
++ * For a given request_queue, this function removes the given blk_crypto_key
++ * from the keyslot management structures and evicts it from any underlying
++ * hardware keyslot(s) or blk-crypto-fallback keyslot it may have been
++ * programmed into.
+  *
+- * Return: 0 on success or if key is not present in the q's ksm, -err on error.
++ * Upper layers must call this before freeing the blk_crypto_key.  It must be
++ * called for every request_queue the key may have been used on.  The key must
++ * no longer be in use by any I/O when this function is called.
++ *
++ * Context: May sleep.
+  */
+-int blk_crypto_evict_key(struct request_queue *q,
+-			 const struct blk_crypto_key *key)
++void blk_crypto_evict_key(struct request_queue *q,
++			  const struct blk_crypto_key *key)
+ {
+-	if (blk_ksm_crypto_cfg_supported(q->ksm, &key->crypto_cfg))
+-		return blk_ksm_evict_key(q->ksm, key);
++	int err;
+ 
++	if (blk_ksm_crypto_cfg_supported(q->ksm, &key->crypto_cfg))
++		err = blk_ksm_evict_key(q->ksm, key);
++	else
++		err = blk_crypto_fallback_evict_key(key);
+ 	/*
+-	 * If the request queue's associated inline encryption hardware didn't
+-	 * have support for the key, then the key might have been programmed
+-	 * into the fallback keyslot manager, so try to evict from there.
++	 * An error can only occur here if the key failed to be evicted from a
++	 * keyslot (due to a hardware or driver issue) or is allegedly still in
++	 * use by I/O (due to a kernel bug).  Even in these cases, the key is
++	 * still unlinked from the keyslot management structures, and the caller
++	 * is allowed and expected to free it right away.  There's nothing
++	 * callers can do to handle errors, so just log them and return void.
+ 	 */
+-	return blk_crypto_fallback_evict_key(key);
++	if (err)
++		pr_warn_ratelimited("error %d evicting key\n", err);
+ }
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index fbba277364f01..f3b016b31af86 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -801,6 +801,8 @@ static struct request *attempt_merge(struct request_queue *q,
+ 	if (!blk_discard_mergable(req))
+ 		elv_merge_requests(q, req, next);
+ 
++	blk_crypto_rq_put_keyslot(next);
++
+ 	/*
+ 	 * 'next' is going away, so update stats accordingly
+ 	 */
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index cf66de0f00fd3..e153a36c9ba3a 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2193,7 +2193,7 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
+ 
+ 	blk_mq_bio_to_request(rq, bio, nr_segs);
+ 
+-	ret = blk_crypto_init_request(rq);
++	ret = blk_crypto_rq_get_keyslot(rq);
+ 	if (ret != BLK_STS_OK) {
+ 		bio->bi_status = ret;
+ 		bio_endio(bio);
+diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
+index 86f8195d8039e..17a1f1ba44efc 100644
+--- a/block/keyslot-manager.c
++++ b/block/keyslot-manager.c
+@@ -305,44 +305,43 @@ bool blk_ksm_crypto_cfg_supported(struct blk_keyslot_manager *ksm,
+ 	return true;
+ }
+ 
+-/**
+- * blk_ksm_evict_key() - Evict a key from the lower layer device.
+- * @ksm: The keyslot manager to evict from
+- * @key: The key to evict
+- *
+- * Find the keyslot that the specified key was programmed into, and evict that
+- * slot from the lower layer device. The slot must not be in use by any
+- * in-flight IO when this function is called.
+- *
+- * Context: Process context. Takes and releases ksm->lock.
+- * Return: 0 on success or if there's no keyslot with the specified key, -EBUSY
+- *	   if the keyslot is still in use, or another -errno value on other
+- *	   error.
++/*
++ * This is an internal function that evicts a key from an inline encryption
++ * device that can be either a real device or the blk-crypto-fallback "device".
++ * It is used only by blk_crypto_evict_key(); see that function for details.
+  */
+ int blk_ksm_evict_key(struct blk_keyslot_manager *ksm,
+ 		      const struct blk_crypto_key *key)
+ {
+ 	struct blk_ksm_keyslot *slot;
+-	int err = 0;
++	int err;
+ 
+ 	blk_ksm_hw_enter(ksm);
+ 	slot = blk_ksm_find_keyslot(ksm, key);
+-	if (!slot)
+-		goto out_unlock;
++	if (!slot) {
++		/*
++		 * Not an error, since a key not in use by I/O is not guaranteed
++		 * to be in a keyslot.  There can be more keys than keyslots.
++		 */
++		err = 0;
++		goto out;
++	}
+ 
+ 	if (WARN_ON_ONCE(atomic_read(&slot->slot_refs) != 0)) {
++		/* BUG: key is still in use by I/O */
+ 		err = -EBUSY;
+-		goto out_unlock;
++		goto out_remove;
+ 	}
+ 	err = ksm->ksm_ll_ops.keyslot_evict(ksm, key,
+ 					    blk_ksm_get_slot_idx(slot));
+-	if (err)
+-		goto out_unlock;
+-
++out_remove:
++	/*
++	 * Callers free the key even on error, so unlink the key from the hash
++	 * table and clear slot->key even on error.
++	 */
+ 	hlist_del(&slot->hash_node);
+ 	slot->key = NULL;
+-	err = 0;
+-out_unlock:
++out:
+ 	blk_ksm_hw_exit(ksm);
+ 	return err;
+ }
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 9de27daa98b47..42dca17dc2d97 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -456,7 +456,9 @@ void crypto_unregister_alg(struct crypto_alg *alg)
+ 	if (WARN(ret, "Algorithm %s is not registered", alg->cra_driver_name))
+ 		return;
+ 
+-	BUG_ON(refcount_read(&alg->cra_refcnt) != 1);
++	if (WARN_ON(refcount_read(&alg->cra_refcnt) != 1))
++		return;
++
+ 	if (alg->cra_destroy)
+ 		alg->cra_destroy(alg);
+ 
+diff --git a/crypto/drbg.c b/crypto/drbg.c
+index a4b5d6dbe99d3..ba1fa5cdd90ac 100644
+--- a/crypto/drbg.c
++++ b/crypto/drbg.c
+@@ -1515,6 +1515,14 @@ static int drbg_prepare_hrng(struct drbg_state *drbg)
+ 		return 0;
+ 
+ 	drbg->jent = crypto_alloc_rng("jitterentropy_rng", 0, 0);
++	if (IS_ERR(drbg->jent)) {
++		const int err = PTR_ERR(drbg->jent);
++
++		drbg->jent = NULL;
++		if (fips_enabled)
++			return err;
++		pr_info("DRBG: Continuing without Jitter RNG\n");
++	}
+ 
+ 	return 0;
+ }
+@@ -1570,14 +1578,6 @@ static int drbg_instantiate(struct drbg_state *drbg, struct drbg_string *pers,
+ 		if (ret)
+ 			goto free_everything;
+ 
+-		if (IS_ERR(drbg->jent)) {
+-			ret = PTR_ERR(drbg->jent);
+-			drbg->jent = NULL;
+-			if (fips_enabled || ret != -ENOENT)
+-				goto free_everything;
+-			pr_info("DRBG: Continuing without Jitter RNG\n");
+-		}
+-
+ 		reseed = false;
+ 	}
+ 
+diff --git a/drivers/acpi/processor_pdc.c b/drivers/acpi/processor_pdc.c
+index 813f1b78c16a9..c0d2d9a2c0d58 100644
+--- a/drivers/acpi/processor_pdc.c
++++ b/drivers/acpi/processor_pdc.c
+@@ -14,6 +14,8 @@
+ #include <linux/acpi.h>
+ #include <acpi/processor.h>
+ 
++#include <xen/xen.h>
++
+ #include "internal.h"
+ 
+ #define _COMPONENT              ACPI_PROCESSOR_COMPONENT
+@@ -50,6 +52,15 @@ static bool __init processor_physically_present(acpi_handle handle)
+ 		return false;
+ 	}
+ 
++	if (xen_initial_domain())
++		/*
++		 * When running as a Xen dom0 the number of processors Linux
++		 * sees can be different from the real number of processors on
++		 * the system, and we still need to execute _PDC for all of
++		 * them.
++		 */
++		return xen_processor_present(acpi_id);
++
+ 	type = (acpi_type == ACPI_TYPE_DEVICE) ? 1 : 0;
+ 	cpuid = acpi_get_cpuid(handle, type, acpi_id);
+ 
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 24dd532c8e5ed..33e0526907ebd 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -489,7 +489,8 @@ static const struct attribute_group *cpu_root_attr_groups[] = {
+ bool cpu_is_hotpluggable(unsigned cpu)
+ {
+ 	struct device *dev = get_cpu_device(cpu);
+-	return dev && container_of(dev, struct cpu, dev)->hotpluggable;
++	return dev && container_of(dev, struct cpu, dev)->hotpluggable
++		&& tick_nohz_cpu_hotpluggable(cpu);
+ }
+ EXPORT_SYMBOL_GPL(cpu_is_hotpluggable);
+ 
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 497e3d4255c41..503c01d4015dc 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -677,7 +677,12 @@ static int really_probe_debug(struct device *dev, struct device_driver *drv)
+ 	calltime = ktime_get();
+ 	ret = really_probe(dev, drv);
+ 	rettime = ktime_get();
+-	pr_debug("probe of %s returned %d after %lld usecs\n",
++	/*
++	 * Don't change this to pr_debug() because that requires
++	 * CONFIG_DYNAMIC_DEBUG and we want a simple 'initcall_debug' on the
++	 * kernel commandline to print this all the time at the debug level.
++	 */
++	printk(KERN_DEBUG "probe of %s returned %d after %lld usecs\n",
+ 		 dev_name(dev), ret, ktime_us_delta(rettime, calltime));
+ 	return ret;
+ }
+diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
+index dc333dbe52328..405e09575f08a 100644
+--- a/drivers/block/drbd/drbd_receiver.c
++++ b/drivers/block/drbd/drbd_receiver.c
+@@ -1299,7 +1299,7 @@ static void submit_one_flush(struct drbd_device *device, struct issue_flush_cont
+ 	bio_set_dev(bio, device->ldev->backing_bdev);
+ 	bio->bi_private = octx;
+ 	bio->bi_end_io = one_flush_endio;
+-	bio->bi_opf = REQ_OP_FLUSH | REQ_PREFLUSH;
++	bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
+ 
+ 	device->flush_jif = jiffies;
+ 	set_bit(FLUSH_PENDING, &device->flags);
+diff --git a/drivers/bluetooth/btsdio.c b/drivers/bluetooth/btsdio.c
+index 7050a16e7efeb..199e8f7d426d9 100644
+--- a/drivers/bluetooth/btsdio.c
++++ b/drivers/bluetooth/btsdio.c
+@@ -352,7 +352,6 @@ static void btsdio_remove(struct sdio_func *func)
+ 
+ 	BT_DBG("func %p", func);
+ 
+-	cancel_work_sync(&data->work);
+ 	if (!data)
+ 		return;
+ 
+diff --git a/drivers/char/ipmi/Kconfig b/drivers/char/ipmi/Kconfig
+index 07847d9a459af..f443186269e1c 100644
+--- a/drivers/char/ipmi/Kconfig
++++ b/drivers/char/ipmi/Kconfig
+@@ -126,7 +126,8 @@ config NPCM7XX_KCS_IPMI_BMC
+ 
+ config ASPEED_BT_IPMI_BMC
+ 	depends on ARCH_ASPEED || COMPILE_TEST
+-	depends on REGMAP && REGMAP_MMIO && MFD_SYSCON
++	depends on MFD_SYSCON
++	select REGMAP_MMIO
+ 	tristate "BT IPMI bmc driver"
+ 	help
+ 	  Provides a driver for the BT (Block Transfer) IPMI interface
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 20dc2452815c7..a3745fa643f3b 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -564,8 +564,10 @@ static void retry_timeout(struct timer_list *t)
+ 
+ 	if (waiting)
+ 		start_get(ssif_info);
+-	if (resend)
++	if (resend) {
+ 		start_resend(ssif_info);
++		ssif_inc_stat(ssif_info, send_retries);
++	}
+ }
+ 
+ static void watch_timeout(struct timer_list *t)
+@@ -792,9 +794,9 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 		} else if (data[0] != (IPMI_NETFN_APP_REQUEST | 1) << 2
+ 			   || data[1] != IPMI_GET_MSG_FLAGS_CMD) {
+ 			/*
+-			 * Don't abort here, maybe it was a queued
+-			 * response to a previous command.
++			 * Recv error response, give up.
+ 			 */
++			ssif_info->ssif_state = SSIF_IDLE;
+ 			ipmi_ssif_unlock_cond(ssif_info, flags);
+ 			dev_warn(&ssif_info->client->dev,
+ 				 "Invalid response getting flags: %x %x\n",
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index dc56b976d8162..d65fff4e2ebe9 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -136,16 +136,27 @@ static bool check_locality(struct tpm_chip *chip, int l)
+ 	return false;
+ }
+ 
+-static int release_locality(struct tpm_chip *chip, int l)
++static int __tpm_tis_relinquish_locality(struct tpm_tis_data *priv, int l)
++{
++	tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY);
++
++	return 0;
++}
++
++static int tpm_tis_relinquish_locality(struct tpm_chip *chip, int l)
+ {
+ 	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ 
+-	tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY);
++	mutex_lock(&priv->locality_count_mutex);
++	priv->locality_count--;
++	if (priv->locality_count == 0)
++		__tpm_tis_relinquish_locality(priv, l);
++	mutex_unlock(&priv->locality_count_mutex);
+ 
+ 	return 0;
+ }
+ 
+-static int request_locality(struct tpm_chip *chip, int l)
++static int __tpm_tis_request_locality(struct tpm_chip *chip, int l)
+ {
+ 	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ 	unsigned long stop, timeout;
+@@ -186,6 +197,20 @@ again:
+ 	return -1;
+ }
+ 
++static int tpm_tis_request_locality(struct tpm_chip *chip, int l)
++{
++	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
++	int ret = 0;
++
++	mutex_lock(&priv->locality_count_mutex);
++	if (priv->locality_count == 0)
++		ret = __tpm_tis_request_locality(chip, l);
++	if (!ret)
++		priv->locality_count++;
++	mutex_unlock(&priv->locality_count_mutex);
++	return ret;
++}
++
+ static u8 tpm_tis_status(struct tpm_chip *chip)
+ {
+ 	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+@@ -638,7 +663,7 @@ static int probe_itpm(struct tpm_chip *chip)
+ 	if (vendor != TPM_VID_INTEL)
+ 		return 0;
+ 
+-	if (request_locality(chip, 0) != 0)
++	if (tpm_tis_request_locality(chip, 0) != 0)
+ 		return -EBUSY;
+ 
+ 	rc = tpm_tis_send_data(chip, cmd_getticks, len);
+@@ -659,7 +684,7 @@ static int probe_itpm(struct tpm_chip *chip)
+ 
+ out:
+ 	tpm_tis_ready(chip);
+-	release_locality(chip, priv->locality);
++	tpm_tis_relinquish_locality(chip, priv->locality);
+ 
+ 	return rc;
+ }
+@@ -714,25 +739,17 @@ static irqreturn_t tis_int_handler(int dummy, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
+-static int tpm_tis_gen_interrupt(struct tpm_chip *chip)
++static void tpm_tis_gen_interrupt(struct tpm_chip *chip)
+ {
+ 	const char *desc = "attempting to generate an interrupt";
+ 	u32 cap2;
+ 	cap_t cap;
+ 	int ret;
+ 
+-	ret = request_locality(chip, 0);
+-	if (ret < 0)
+-		return ret;
+-
+ 	if (chip->flags & TPM_CHIP_FLAG_TPM2)
+ 		ret = tpm2_get_tpm_pt(chip, 0x100, &cap2, desc);
+ 	else
+ 		ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0);
+-
+-	release_locality(chip, 0);
+-
+-	return ret;
+ }
+ 
+ /* Register the IRQ and issue a command that will cause an interrupt. If an
+@@ -755,52 +772,55 @@ static int tpm_tis_probe_irq_single(struct tpm_chip *chip, u32 intmask,
+ 	}
+ 	priv->irq = irq;
+ 
++	rc = tpm_tis_request_locality(chip, 0);
++	if (rc < 0)
++		return rc;
++
+ 	rc = tpm_tis_read8(priv, TPM_INT_VECTOR(priv->locality),
+ 			   &original_int_vec);
+-	if (rc < 0)
++	if (rc < 0) {
++		tpm_tis_relinquish_locality(chip, priv->locality);
+ 		return rc;
++	}
+ 
+ 	rc = tpm_tis_write8(priv, TPM_INT_VECTOR(priv->locality), irq);
+ 	if (rc < 0)
+-		return rc;
++		goto restore_irqs;
+ 
+ 	rc = tpm_tis_read32(priv, TPM_INT_STATUS(priv->locality), &int_status);
+ 	if (rc < 0)
+-		return rc;
++		goto restore_irqs;
+ 
+ 	/* Clear all existing */
+ 	rc = tpm_tis_write32(priv, TPM_INT_STATUS(priv->locality), int_status);
+ 	if (rc < 0)
+-		return rc;
+-
++		goto restore_irqs;
+ 	/* Turn on */
+ 	rc = tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality),
+ 			     intmask | TPM_GLOBAL_INT_ENABLE);
+ 	if (rc < 0)
+-		return rc;
++		goto restore_irqs;
+ 
+ 	priv->irq_tested = false;
+ 
+ 	/* Generate an interrupt by having the core call through to
+ 	 * tpm_tis_send
+ 	 */
+-	rc = tpm_tis_gen_interrupt(chip);
+-	if (rc < 0)
+-		return rc;
++	tpm_tis_gen_interrupt(chip);
+ 
++restore_irqs:
+ 	/* tpm_tis_send will either confirm the interrupt is working or it
+ 	 * will call disable_irq which undoes all of the above.
+ 	 */
+ 	if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) {
+-		rc = tpm_tis_write8(priv, original_int_vec,
+-				TPM_INT_VECTOR(priv->locality));
+-		if (rc < 0)
+-			return rc;
+-
+-		return 1;
++		tpm_tis_write8(priv, original_int_vec,
++			       TPM_INT_VECTOR(priv->locality));
++		rc = -1;
+ 	}
+ 
+-	return 0;
++	tpm_tis_relinquish_locality(chip, priv->locality);
++
++	return rc;
+ }
+ 
+ /* Try to find the IRQ the TPM is using. This is for legacy x86 systems that
+@@ -914,8 +934,8 @@ static const struct tpm_class_ops tpm_tis = {
+ 	.req_complete_mask = TPM_STS_DATA_AVAIL | TPM_STS_VALID,
+ 	.req_complete_val = TPM_STS_DATA_AVAIL | TPM_STS_VALID,
+ 	.req_canceled = tpm_tis_req_canceled,
+-	.request_locality = request_locality,
+-	.relinquish_locality = release_locality,
++	.request_locality = tpm_tis_request_locality,
++	.relinquish_locality = tpm_tis_relinquish_locality,
+ 	.clk_enable = tpm_tis_clkrun_enable,
+ };
+ 
+@@ -949,6 +969,8 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	priv->timeout_min = TPM_TIMEOUT_USECS_MIN;
+ 	priv->timeout_max = TPM_TIMEOUT_USECS_MAX;
+ 	priv->phy_ops = phy_ops;
++	priv->locality_count = 0;
++	mutex_init(&priv->locality_count_mutex);
+ 
+ 	dev_set_drvdata(&chip->dev, priv);
+ 
+@@ -995,14 +1017,14 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 		   TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT;
+ 	intmask &= ~TPM_GLOBAL_INT_ENABLE;
+ 
+-	rc = request_locality(chip, 0);
++	rc = tpm_tis_request_locality(chip, 0);
+ 	if (rc < 0) {
+ 		rc = -ENODEV;
+ 		goto out_err;
+ 	}
+ 
+ 	tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
+-	release_locality(chip, 0);
++	tpm_tis_relinquish_locality(chip, 0);
+ 
+ 	rc = tpm_chip_start(chip);
+ 	if (rc)
+@@ -1062,13 +1084,13 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 		 * proper timeouts for the driver.
+ 		 */
+ 
+-		rc = request_locality(chip, 0);
++		rc = tpm_tis_request_locality(chip, 0);
+ 		if (rc < 0)
+ 			goto out_err;
+ 
+ 		rc = tpm_get_timeouts(chip);
+ 
+-		release_locality(chip, 0);
++		tpm_tis_relinquish_locality(chip, 0);
+ 
+ 		if (rc) {
+ 			dev_err(dev, "Could not get TPM timeouts and durations\n");
+@@ -1076,17 +1098,21 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 			goto out_err;
+ 		}
+ 
+-		if (irq) {
++		if (irq)
+ 			tpm_tis_probe_irq_single(chip, intmask, IRQF_SHARED,
+ 						 irq);
+-			if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) {
+-				dev_err(&chip->dev, FW_BUG
++		else
++			tpm_tis_probe_irq(chip, intmask);
++
++		if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) {
++			dev_err(&chip->dev, FW_BUG
+ 					"TPM interrupt not working, polling instead\n");
+ 
+-				disable_interrupts(chip);
+-			}
+-		} else {
+-			tpm_tis_probe_irq(chip, intmask);
++			rc = tpm_tis_request_locality(chip, 0);
++			if (rc < 0)
++				goto out_err;
++			disable_interrupts(chip);
++			tpm_tis_relinquish_locality(chip, 0);
+ 		}
+ 	}
+ 
+@@ -1147,28 +1173,27 @@ int tpm_tis_resume(struct device *dev)
+ 	struct tpm_chip *chip = dev_get_drvdata(dev);
+ 	int ret;
+ 
++	ret = tpm_tis_request_locality(chip, 0);
++	if (ret < 0)
++		return ret;
++
+ 	if (chip->flags & TPM_CHIP_FLAG_IRQ)
+ 		tpm_tis_reenable_interrupts(chip);
+ 
+ 	ret = tpm_pm_resume(dev);
+ 	if (ret)
+-		return ret;
++		goto out;
+ 
+ 	/*
+ 	 * TPM 1.2 requires self-test on resume. This function actually returns
+ 	 * an error code but for unknown reason it isn't handled.
+ 	 */
+-	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
+-		ret = request_locality(chip, 0);
+-		if (ret < 0)
+-			return ret;
+-
++	if (!(chip->flags & TPM_CHIP_FLAG_TPM2))
+ 		tpm1_do_selftest(chip);
++out:
++	tpm_tis_relinquish_locality(chip, 0);
+ 
+-		release_locality(chip, 0);
+-	}
+-
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(tpm_tis_resume);
+ #endif
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index 3be24f221e32a..464ed352ab2e8 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -90,6 +90,8 @@ enum tpm_tis_flags {
+ 
+ struct tpm_tis_data {
+ 	u16 manufacturer_id;
++	struct mutex locality_count_mutex;
++	unsigned int locality_count;
+ 	int locality;
+ 	int irq;
+ 	bool irq_tested;
+diff --git a/drivers/clk/at91/clk-sam9x60-pll.c b/drivers/clk/at91/clk-sam9x60-pll.c
+index 5a9daa3643a72..5fe50ba173a80 100644
+--- a/drivers/clk/at91/clk-sam9x60-pll.c
++++ b/drivers/clk/at91/clk-sam9x60-pll.c
+@@ -452,7 +452,7 @@ sam9x60_clk_register_frac_pll(struct regmap *regmap, spinlock_t *lock,
+ 
+ 		ret = sam9x60_frac_pll_compute_mul_frac(&frac->core, FCORE_MIN,
+ 							parent_rate, true);
+-		if (ret <= 0) {
++		if (ret < 0) {
+ 			hw = ERR_PTR(ret);
+ 			goto free;
+ 		}
+diff --git a/drivers/clk/clk-conf.c b/drivers/clk/clk-conf.c
+index 2ef819606c417..1a4e6340f95ce 100644
+--- a/drivers/clk/clk-conf.c
++++ b/drivers/clk/clk-conf.c
+@@ -33,9 +33,12 @@ static int __set_clk_parents(struct device_node *node, bool clk_supplier)
+ 			else
+ 				return rc;
+ 		}
+-		if (clkspec.np == node && !clk_supplier)
++		if (clkspec.np == node && !clk_supplier) {
++			of_node_put(clkspec.np);
+ 			return 0;
++		}
+ 		pclk = of_clk_get_from_provider(&clkspec);
++		of_node_put(clkspec.np);
+ 		if (IS_ERR(pclk)) {
+ 			if (PTR_ERR(pclk) != -EPROBE_DEFER)
+ 				pr_warn("clk: couldn't get parent clock %d for %pOF\n",
+@@ -48,10 +51,12 @@ static int __set_clk_parents(struct device_node *node, bool clk_supplier)
+ 		if (rc < 0)
+ 			goto err;
+ 		if (clkspec.np == node && !clk_supplier) {
++			of_node_put(clkspec.np);
+ 			rc = 0;
+ 			goto err;
+ 		}
+ 		clk = of_clk_get_from_provider(&clkspec);
++		of_node_put(clkspec.np);
+ 		if (IS_ERR(clk)) {
+ 			if (PTR_ERR(clk) != -EPROBE_DEFER)
+ 				pr_warn("clk: couldn't get assigned clock %d for %pOF\n",
+@@ -93,10 +98,13 @@ static int __set_clk_rates(struct device_node *node, bool clk_supplier)
+ 				else
+ 					return rc;
+ 			}
+-			if (clkspec.np == node && !clk_supplier)
++			if (clkspec.np == node && !clk_supplier) {
++				of_node_put(clkspec.np);
+ 				return 0;
++			}
+ 
+ 			clk = of_clk_get_from_provider(&clkspec);
++			of_node_put(clkspec.np);
+ 			if (IS_ERR(clk)) {
+ 				if (PTR_ERR(clk) != -EPROBE_DEFER)
+ 					pr_warn("clk: couldn't get clock %d for %pOF\n",
+diff --git a/drivers/clk/rockchip/clk-rk3399.c b/drivers/clk/rockchip/clk-rk3399.c
+index 7df2f1e00347e..a9cacbcc1c2a3 100644
+--- a/drivers/clk/rockchip/clk-rk3399.c
++++ b/drivers/clk/rockchip/clk-rk3399.c
+@@ -1261,7 +1261,7 @@ static struct rockchip_clk_branch rk3399_clk_branches[] __initdata = {
+ 			RK3399_CLKSEL_CON(56), 6, 2, MFLAGS,
+ 			RK3399_CLKGATE_CON(10), 7, GFLAGS),
+ 
+-	COMPOSITE_NOGATE(SCLK_CIF_OUT, "clk_cifout", mux_clk_cif_p, 0,
++	COMPOSITE_NOGATE(SCLK_CIF_OUT, "clk_cifout", mux_clk_cif_p, CLK_SET_RATE_PARENT,
+ 			 RK3399_CLKSEL_CON(56), 5, 1, MFLAGS, 0, 5, DFLAGS),
+ 
+ 	/* gic */
+diff --git a/drivers/clocksource/timer-davinci.c b/drivers/clocksource/timer-davinci.c
+index bb4eee31ae082..3dc0c6ceed027 100644
+--- a/drivers/clocksource/timer-davinci.c
++++ b/drivers/clocksource/timer-davinci.c
+@@ -258,21 +258,25 @@ int __init davinci_timer_register(struct clk *clk,
+ 				resource_size(&timer_cfg->reg),
+ 				"davinci-timer")) {
+ 		pr_err("Unable to request memory region\n");
+-		return -EBUSY;
++		rv = -EBUSY;
++		goto exit_clk_disable;
+ 	}
+ 
+ 	base = ioremap(timer_cfg->reg.start, resource_size(&timer_cfg->reg));
+ 	if (!base) {
+ 		pr_err("Unable to map the register range\n");
+-		return -ENOMEM;
++		rv = -ENOMEM;
++		goto exit_mem_region;
+ 	}
+ 
+ 	davinci_timer_init(base);
+ 	tick_rate = clk_get_rate(clk);
+ 
+ 	clockevent = kzalloc(sizeof(*clockevent), GFP_KERNEL);
+-	if (!clockevent)
+-		return -ENOMEM;
++	if (!clockevent) {
++		rv = -ENOMEM;
++		goto exit_iounmap_base;
++	}
+ 
+ 	clockevent->dev.name = "tim12";
+ 	clockevent->dev.features = CLOCK_EVT_FEAT_ONESHOT;
+@@ -297,7 +301,7 @@ int __init davinci_timer_register(struct clk *clk,
+ 			 "clockevent/tim12", clockevent);
+ 	if (rv) {
+ 		pr_err("Unable to request the clockevent interrupt\n");
+-		return rv;
++		goto exit_free_clockevent;
+ 	}
+ 
+ 	davinci_clocksource.dev.rating = 300;
+@@ -324,13 +328,27 @@ int __init davinci_timer_register(struct clk *clk,
+ 	rv = clocksource_register_hz(&davinci_clocksource.dev, tick_rate);
+ 	if (rv) {
+ 		pr_err("Unable to register clocksource\n");
+-		return rv;
++		goto exit_free_irq;
+ 	}
+ 
+ 	sched_clock_register(davinci_timer_read_sched_clock,
+ 			     DAVINCI_TIMER_CLKSRC_BITS, tick_rate);
+ 
+ 	return 0;
++
++exit_free_irq:
++	free_irq(timer_cfg->irq[DAVINCI_TIMER_CLOCKEVENT_IRQ].start,
++			clockevent);
++exit_free_clockevent:
++	kfree(clockevent);
++exit_iounmap_base:
++	iounmap(base);
++exit_mem_region:
++	release_mem_region(timer_cfg->reg.start,
++			   resource_size(&timer_cfg->reg));
++exit_clk_disable:
++	clk_disable_unprepare(clk);
++	return rv;
+ }
+ 
+ static int __init of_davinci_timer_register(struct device_node *np)
+diff --git a/drivers/counter/104-quad-8.c b/drivers/counter/104-quad-8.c
+index 21bb2bb767a1e..89c9cb850a34c 100644
+--- a/drivers/counter/104-quad-8.c
++++ b/drivers/counter/104-quad-8.c
+@@ -62,10 +62,6 @@ struct quad8_iio {
+ #define QUAD8_REG_CHAN_OP 0x11
+ #define QUAD8_REG_INDEX_INPUT_LEVELS 0x16
+ #define QUAD8_DIFF_ENCODER_CABLE_STATUS 0x17
+-/* Borrow Toggle flip-flop */
+-#define QUAD8_FLAG_BT BIT(0)
+-/* Carry Toggle flip-flop */
+-#define QUAD8_FLAG_CT BIT(1)
+ /* Error flag */
+ #define QUAD8_FLAG_E BIT(4)
+ /* Up/Down flag */
+@@ -104,9 +100,6 @@ static int quad8_read_raw(struct iio_dev *indio_dev,
+ {
+ 	struct quad8_iio *const priv = iio_priv(indio_dev);
+ 	const int base_offset = priv->base + 2 * chan->channel;
+-	unsigned int flags;
+-	unsigned int borrow;
+-	unsigned int carry;
+ 	int i;
+ 
+ 	switch (mask) {
+@@ -117,12 +110,7 @@ static int quad8_read_raw(struct iio_dev *indio_dev,
+ 			return IIO_VAL_INT;
+ 		}
+ 
+-		flags = inb(base_offset + 1);
+-		borrow = flags & QUAD8_FLAG_BT;
+-		carry = !!(flags & QUAD8_FLAG_CT);
+-
+-		/* Borrow XOR Carry effectively doubles count range */
+-		*val = (borrow ^ carry) << 24;
++		*val = 0;
+ 
+ 		mutex_lock(&priv->lock);
+ 
+@@ -643,17 +631,9 @@ static int quad8_count_read(struct counter_device *counter,
+ {
+ 	struct quad8_iio *const priv = counter->priv;
+ 	const int base_offset = priv->base + 2 * count->id;
+-	unsigned int flags;
+-	unsigned int borrow;
+-	unsigned int carry;
+ 	int i;
+ 
+-	flags = inb(base_offset + 1);
+-	borrow = flags & QUAD8_FLAG_BT;
+-	carry = !!(flags & QUAD8_FLAG_CT);
+-
+-	/* Borrow XOR Carry effectively doubles count range */
+-	*val = (unsigned long)(borrow ^ carry) << 24;
++	*val = 0;
+ 
+ 	mutex_lock(&priv->lock);
+ 
+@@ -1198,8 +1178,8 @@ static ssize_t quad8_count_ceiling_read(struct counter_device *counter,
+ 
+ 	mutex_unlock(&priv->lock);
+ 
+-	/* By default 0x1FFFFFF (25 bits unsigned) is maximum count */
+-	return sprintf(buf, "33554431\n");
++	/* By default 0xFFFFFF (24 bits unsigned) is maximum count */
++	return sprintf(buf, "16777215\n");
+ }
+ 
+ static ssize_t quad8_count_ceiling_write(struct counter_device *counter,
+diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
+index 0a3dd0793f30e..dbdee43c6ffcf 100644
+--- a/drivers/crypto/Kconfig
++++ b/drivers/crypto/Kconfig
+@@ -897,6 +897,7 @@ config CRYPTO_DEV_SA2UL
+ 	select CRYPTO_AES_ARM64
+ 	select CRYPTO_ALGAPI
+ 	select CRYPTO_AUTHENC
++	select CRYPTO_DES
+ 	select CRYPTO_SHA1
+ 	select CRYPTO_SHA256
+ 	select CRYPTO_SHA512
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index 49c7a8b464ddf..8a94f812e6d29 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -132,7 +132,7 @@ static int sun8i_ss_setup_ivs(struct skcipher_request *areq)
+ 		}
+ 		rctx->p_iv[i] = a;
+ 		/* we need to setup all others IVs only in the decrypt way */
+-		if (rctx->op_dir & SS_ENCRYPTION)
++		if (rctx->op_dir == SS_ENCRYPTION)
+ 			return 0;
+ 		todo = min(len, sg_dma_len(sg));
+ 		len -= todo;
+diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
+index f87aa2169e5f5..f9a1ec3c84851 100644
+--- a/drivers/crypto/caam/ctrl.c
++++ b/drivers/crypto/caam/ctrl.c
+@@ -284,6 +284,10 @@ static int instantiate_rng(struct device *ctrldev, int state_handle_mask,
+ 		const u32 rdsta_if = RDSTA_IF0 << sh_idx;
+ 		const u32 rdsta_pr = RDSTA_PR0 << sh_idx;
+ 		const u32 rdsta_mask = rdsta_if | rdsta_pr;
++
++		/* Clear the contents before using the descriptor */
++		memset(desc, 0x00, CAAM_CMD_SZ * 7);
++
+ 		/*
+ 		 * If the corresponding bit is set, this state handle
+ 		 * was initialized by somebody else, so it's left alone.
+@@ -327,8 +331,6 @@ static int instantiate_rng(struct device *ctrldev, int state_handle_mask,
+ 		}
+ 
+ 		dev_info(ctrldev, "Instantiated RNG4 SH%d\n", sh_idx);
+-		/* Clear the contents before recreating the descriptor */
+-		memset(desc, 0x00, CAAM_CMD_SZ * 7);
+ 	}
+ 
+ 	kfree(desc);
+diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
+index ae7b445999144..4bf9eaab4456f 100644
+--- a/drivers/crypto/ccp/psp-dev.c
++++ b/drivers/crypto/ccp/psp-dev.c
+@@ -42,6 +42,9 @@ static irqreturn_t psp_irq_handler(int irq, void *data)
+ 	/* Read the interrupt status: */
+ 	status = ioread32(psp->io_regs + psp->vdata->intsts_reg);
+ 
++	/* Clear the interrupt status by writing the same value we read. */
++	iowrite32(status, psp->io_regs + psp->vdata->intsts_reg);
++
+ 	/* invoke subdevice interrupt handlers */
+ 	if (status) {
+ 		if (psp->sev_irq_handler)
+@@ -51,9 +54,6 @@ static irqreturn_t psp_irq_handler(int irq, void *data)
+ 			psp->tee_irq_handler(irq, psp->tee_irq_data, status);
+ 	}
+ 
+-	/* Clear the interrupt status by writing the same value we read. */
+-	iowrite32(status, psp->io_regs + psp->vdata->intsts_reg);
+-
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
+index fbcf52e46d179..7de9b9d20de02 100644
+--- a/drivers/crypto/inside-secure/safexcel.c
++++ b/drivers/crypto/inside-secure/safexcel.c
+@@ -1634,19 +1634,23 @@ static int safexcel_probe_generic(void *pdev,
+ 						     &priv->ring[i].rdr);
+ 		if (ret) {
+ 			dev_err(dev, "Failed to initialize rings\n");
+-			return ret;
++			goto err_cleanup_rings;
+ 		}
+ 
+ 		priv->ring[i].rdr_req = devm_kcalloc(dev,
+ 			EIP197_DEFAULT_RING_SIZE,
+ 			sizeof(*priv->ring[i].rdr_req),
+ 			GFP_KERNEL);
+-		if (!priv->ring[i].rdr_req)
+-			return -ENOMEM;
++		if (!priv->ring[i].rdr_req) {
++			ret = -ENOMEM;
++			goto err_cleanup_rings;
++		}
+ 
+ 		ring_irq = devm_kzalloc(dev, sizeof(*ring_irq), GFP_KERNEL);
+-		if (!ring_irq)
+-			return -ENOMEM;
++		if (!ring_irq) {
++			ret = -ENOMEM;
++			goto err_cleanup_rings;
++		}
+ 
+ 		ring_irq->priv = priv;
+ 		ring_irq->ring = i;
+@@ -1660,7 +1664,8 @@ static int safexcel_probe_generic(void *pdev,
+ 						ring_irq);
+ 		if (irq < 0) {
+ 			dev_err(dev, "Failed to get IRQ ID for ring %d\n", i);
+-			return irq;
++			ret = irq;
++			goto err_cleanup_rings;
+ 		}
+ 
+ 		priv->ring[i].irq = irq;
+@@ -1672,8 +1677,10 @@ static int safexcel_probe_generic(void *pdev,
+ 		snprintf(wq_name, 9, "wq_ring%d", i);
+ 		priv->ring[i].workqueue =
+ 			create_singlethread_workqueue(wq_name);
+-		if (!priv->ring[i].workqueue)
+-			return -ENOMEM;
++		if (!priv->ring[i].workqueue) {
++			ret = -ENOMEM;
++			goto err_cleanup_rings;
++		}
+ 
+ 		priv->ring[i].requests = 0;
+ 		priv->ring[i].busy = false;
+@@ -1690,16 +1697,26 @@ static int safexcel_probe_generic(void *pdev,
+ 	ret = safexcel_hw_init(priv);
+ 	if (ret) {
+ 		dev_err(dev, "HW init failed (%d)\n", ret);
+-		return ret;
++		goto err_cleanup_rings;
+ 	}
+ 
+ 	ret = safexcel_register_algorithms(priv);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register algorithms (%d)\n", ret);
+-		return ret;
++		goto err_cleanup_rings;
+ 	}
+ 
+ 	return 0;
++
++err_cleanup_rings:
++	for (i = 0; i < priv->config.rings; i++) {
++		if (priv->ring[i].irq)
++			irq_set_affinity_hint(priv->ring[i].irq, NULL);
++		if (priv->ring[i].workqueue)
++			destroy_workqueue(priv->ring[i].workqueue);
++	}
++
++	return ret;
+ }
+ 
+ static void safexcel_hw_reset_rings(struct safexcel_crypto_priv *priv)
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index b5d691ae45dcf..1fe006cc643e7 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -212,6 +212,7 @@ struct at_xdmac {
+ 	int			irq;
+ 	struct clk		*clk;
+ 	u32			save_gim;
++	u32			save_gs;
+ 	struct dma_pool		*at_xdmac_desc_pool;
+ 	struct at_xdmac_chan	chan[];
+ };
+@@ -1910,6 +1911,7 @@ static int atmel_xdmac_suspend(struct device *dev)
+ 		}
+ 	}
+ 	atxdmac->save_gim = at_xdmac_read(atxdmac, AT_XDMAC_GIM);
++	atxdmac->save_gs = at_xdmac_read(atxdmac, AT_XDMAC_GS);
+ 
+ 	at_xdmac_off(atxdmac);
+ 	clk_disable_unprepare(atxdmac->clk);
+@@ -1946,7 +1948,8 @@ static int atmel_xdmac_resume(struct device *dev)
+ 			at_xdmac_chan_write(atchan, AT_XDMAC_CNDC, atchan->save_cndc);
+ 			at_xdmac_chan_write(atchan, AT_XDMAC_CIE, atchan->save_cim);
+ 			wmb();
+-			at_xdmac_write(atxdmac, AT_XDMAC_GE, atchan->mask);
++			if (atxdmac->save_gs & atchan->mask)
++				at_xdmac_write(atxdmac, AT_XDMAC_GE, atchan->mask);
+ 		}
+ 	}
+ 	return 0;
+diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
+index d7ed50f8b9294..f91dbf43a5980 100644
+--- a/drivers/dma/dw-edma/dw-edma-core.c
++++ b/drivers/dma/dw-edma/dw-edma-core.c
+@@ -166,7 +166,7 @@ static void vchan_free_desc(struct virt_dma_desc *vdesc)
+ 	dw_edma_free_desc(vd2dw_edma_desc(vdesc));
+ }
+ 
+-static void dw_edma_start_transfer(struct dw_edma_chan *chan)
++static int dw_edma_start_transfer(struct dw_edma_chan *chan)
+ {
+ 	struct dw_edma_chunk *child;
+ 	struct dw_edma_desc *desc;
+@@ -174,16 +174,16 @@ static void dw_edma_start_transfer(struct dw_edma_chan *chan)
+ 
+ 	vd = vchan_next_desc(&chan->vc);
+ 	if (!vd)
+-		return;
++		return 0;
+ 
+ 	desc = vd2dw_edma_desc(vd);
+ 	if (!desc)
+-		return;
++		return 0;
+ 
+ 	child = list_first_entry_or_null(&desc->chunk->list,
+ 					 struct dw_edma_chunk, list);
+ 	if (!child)
+-		return;
++		return 0;
+ 
+ 	dw_edma_v0_core_start(child, !desc->xfer_sz);
+ 	desc->xfer_sz += child->ll_region.sz;
+@@ -191,6 +191,8 @@ static void dw_edma_start_transfer(struct dw_edma_chan *chan)
+ 	list_del(&child->list);
+ 	kfree(child);
+ 	desc->chunks_alloc--;
++
++	return 1;
+ }
+ 
+ static int dw_edma_device_config(struct dma_chan *dchan,
+@@ -274,9 +276,12 @@ static void dw_edma_device_issue_pending(struct dma_chan *dchan)
+ 	struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan);
+ 	unsigned long flags;
+ 
++	if (!chan->configured)
++		return;
++
+ 	spin_lock_irqsave(&chan->vc.lock, flags);
+-	if (chan->configured && chan->request == EDMA_REQ_NONE &&
+-	    chan->status == EDMA_ST_IDLE && vchan_issue_pending(&chan->vc)) {
++	if (vchan_issue_pending(&chan->vc) && chan->request == EDMA_REQ_NONE &&
++	    chan->status == EDMA_ST_IDLE) {
+ 		chan->status = EDMA_ST_BUSY;
+ 		dw_edma_start_transfer(chan);
+ 	}
+@@ -497,14 +502,14 @@ static void dw_edma_done_interrupt(struct dw_edma_chan *chan)
+ 		switch (chan->request) {
+ 		case EDMA_REQ_NONE:
+ 			desc = vd2dw_edma_desc(vd);
+-			if (desc->chunks_alloc) {
+-				chan->status = EDMA_ST_BUSY;
+-				dw_edma_start_transfer(chan);
+-			} else {
++			if (!desc->chunks_alloc) {
+ 				list_del(&vd->node);
+ 				vchan_cookie_complete(vd);
+-				chan->status = EDMA_ST_IDLE;
+ 			}
++
++			/* Continue transferring if there are remaining chunks or issued requests.
++			 */
++			chan->status = dw_edma_start_transfer(chan) ? EDMA_ST_BUSY : EDMA_ST_IDLE;
+ 			break;
+ 
+ 		case EDMA_REQ_STOP:
+diff --git a/drivers/dma/mv_xor_v2.c b/drivers/dma/mv_xor_v2.c
+index 4800c596433ad..9f3e011fbd914 100644
+--- a/drivers/dma/mv_xor_v2.c
++++ b/drivers/dma/mv_xor_v2.c
+@@ -756,7 +756,7 @@ static int mv_xor_v2_probe(struct platform_device *pdev)
+ 
+ 	xor_dev->clk = devm_clk_get(&pdev->dev, NULL);
+ 	if (PTR_ERR(xor_dev->clk) == -EPROBE_DEFER) {
+-		ret = EPROBE_DEFER;
++		ret = -EPROBE_DEFER;
+ 		goto disable_reg_clk;
+ 	}
+ 	if (!IS_ERR(xor_dev->clk)) {
+diff --git a/drivers/edac/skx_base.c b/drivers/edac/skx_base.c
+index f887e31666510..ba3e83313938b 100644
+--- a/drivers/edac/skx_base.c
++++ b/drivers/edac/skx_base.c
+@@ -509,7 +509,7 @@ rir_found:
+ }
+ 
+ static u8 skx_close_row[] = {
+-	15, 16, 17, 18, 20, 21, 22, 28, 10, 11, 12, 13, 29, 30, 31, 32, 33
++	15, 16, 17, 18, 20, 21, 22, 28, 10, 11, 12, 13, 29, 30, 31, 32, 33, 34
+ };
+ 
+ static u8 skx_close_column[] = {
+@@ -517,7 +517,7 @@ static u8 skx_close_column[] = {
+ };
+ 
+ static u8 skx_open_row[] = {
+-	14, 15, 16, 20, 28, 21, 22, 23, 24, 25, 26, 27, 29, 30, 31, 32, 33
++	14, 15, 16, 20, 28, 21, 22, 23, 24, 25, 26, 27, 29, 30, 31, 32, 33, 34
+ };
+ 
+ static u8 skx_open_column[] = {
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index d417199f8fe94..96086e7df9100 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -1256,8 +1256,7 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ static void qcom_scm_shutdown(struct platform_device *pdev)
+ {
+ 	/* Clean shutdown, disable download mode to allow normal restart */
+-	if (download_mode)
+-		qcom_scm_set_download_mode(false);
++	qcom_scm_set_download_mode(false);
+ }
+ 
+ static const struct of_device_id qcom_scm_dt_match[] = {
+diff --git a/drivers/firmware/raspberrypi.c b/drivers/firmware/raspberrypi.c
+index 9eef49da47e04..45ff03da234a6 100644
+--- a/drivers/firmware/raspberrypi.c
++++ b/drivers/firmware/raspberrypi.c
+@@ -243,6 +243,13 @@ void rpi_firmware_put(struct rpi_firmware *fw)
+ }
+ EXPORT_SYMBOL_GPL(rpi_firmware_put);
+ 
++static void devm_rpi_firmware_put(void *data)
++{
++	struct rpi_firmware *fw = data;
++
++	rpi_firmware_put(fw);
++}
++
+ static int rpi_firmware_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -338,6 +345,28 @@ err_put_device:
+ }
+ EXPORT_SYMBOL_GPL(rpi_firmware_get);
+ 
++/**
++ * devm_rpi_firmware_get - Get pointer to rpi_firmware structure.
++ * @firmware_node:    Pointer to the firmware Device Tree node.
++ *
++ * Returns NULL is the firmware device is not ready.
++ */
++struct rpi_firmware *devm_rpi_firmware_get(struct device *dev,
++					   struct device_node *firmware_node)
++{
++	struct rpi_firmware *fw;
++
++	fw = rpi_firmware_get(firmware_node);
++	if (!fw)
++		return NULL;
++
++	if (devm_add_action_or_reset(dev, devm_rpi_firmware_put, fw))
++		return NULL;
++
++	return fw;
++}
++EXPORT_SYMBOL_GPL(devm_rpi_firmware_get);
++
+ static const struct of_device_id rpi_firmware_of_match[] = {
+ 	{ .compatible = "raspberrypi,bcm2835-firmware", },
+ 	{},
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index 7dd0ac1a0cfc7..78a446cb43486 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -989,8 +989,8 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
+ 		return ret;
+ 
+ 	genpool = svc_create_memory_pool(pdev, sh_memory);
+-	if (!genpool)
+-		return -ENOMEM;
++	if (IS_ERR(genpool))
++		return PTR_ERR(genpool);
+ 
+ 	/* allocate service controller and supporting channel */
+ 	controller = devm_kzalloc(dev, sizeof(*controller), GFP_KERNEL);
+diff --git a/drivers/fpga/fpga-bridge.c b/drivers/fpga/fpga-bridge.c
+index 2deccacc3aa75..851debe32bf0f 100644
+--- a/drivers/fpga/fpga-bridge.c
++++ b/drivers/fpga/fpga-bridge.c
+@@ -115,7 +115,7 @@ static int fpga_bridge_dev_match(struct device *dev, const void *data)
+ /**
+  * fpga_bridge_get - get an exclusive reference to a fpga bridge
+  * @dev:	parent device that fpga bridge was registered with
+- * @info:	fpga manager info
++ * @info:	fpga image specific information
+  *
+  * Given a device, get an exclusive reference to a fpga bridge.
+  *
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+index 0da0a0d986720..15c0a3068eab8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+@@ -66,6 +66,7 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
+ {
+ 	struct fd f = fdget(fd);
+ 	struct amdgpu_fpriv *fpriv;
++	struct amdgpu_ctx_mgr *mgr;
+ 	struct amdgpu_ctx *ctx;
+ 	uint32_t id;
+ 	int r;
+@@ -79,8 +80,11 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
+ 		return r;
+ 	}
+ 
+-	idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id)
++	mgr = &fpriv->ctx_mgr;
++	mutex_lock(&mgr->lock);
++	idr_for_each_entry(&mgr->ctx_handles, ctx, id)
+ 		amdgpu_ctx_priority_override(ctx, priority);
++	mutex_unlock(&mgr->lock);
+ 
+ 	fdput(f);
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 38f4c7474487b..629671f66b319 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3943,7 +3943,8 @@ static int gfx_v9_0_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
+-	amdgpu_irq_put(adev, &adev->gfx.cp_ecc_error_irq, 0);
++	if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__GFX))
++		amdgpu_irq_put(adev, &adev->gfx.cp_ecc_error_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gfx.priv_inst_irq, 0);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 1673bf3bae55a..945cbdbc2f998 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -1686,7 +1686,6 @@ static int gmc_v9_0_hw_fini(void *handle)
+ 		return 0;
+ 	}
+ 
+-	amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index 1f2e2460e121e..dbcaef3f35da9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -1979,9 +1979,11 @@ static int sdma_v4_0_hw_fini(void *handle)
+ 	if (amdgpu_sriov_vf(adev))
+ 		return 0;
+ 
+-	for (i = 0; i < adev->sdma.num_instances; i++) {
+-		amdgpu_irq_put(adev, &adev->sdma.ecc_irq,
+-			       AMDGPU_SDMA_IRQ_INSTANCE0 + i);
++	if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__SDMA)) {
++		for (i = 0; i < adev->sdma.num_instances; i++) {
++			amdgpu_irq_put(adev, &adev->sdma.ecc_irq,
++				       AMDGPU_SDMA_IRQ_INSTANCE0 + i);
++		}
+ 	}
+ 
+ 	sdma_v4_0_ctx_switch_enable(adev, false);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index dbdf0e210522c..3ca1ee396e4c6 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -7248,6 +7248,8 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 			continue;
+ 
+ 		dc_plane = dm_new_plane_state->dc_state;
++		if (!dc_plane)
++			continue;
+ 
+ 		bundle->surface_updates[planes_count].surface = dc_plane;
+ 		if (new_pcrtc_state->color_mgmt_changed) {
+@@ -8562,8 +8564,9 @@ static int dm_update_plane_state(struct dc *dc,
+ 			return -EINVAL;
+ 		}
+ 
++		if (dm_old_plane_state->dc_state)
++			dc_plane_state_release(dm_old_plane_state->dc_state);
+ 
+-		dc_plane_state_release(dm_old_plane_state->dc_state);
+ 		dm_new_plane_state->dc_state = NULL;
+ 
+ 		*lock_and_validation_needed = true;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 1e47afc4ccc1d..f1eda1a6496d4 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1502,6 +1502,9 @@ bool dc_remove_plane_from_context(
+ 	struct dc_stream_status *stream_status = NULL;
+ 	struct resource_pool *pool = dc->res_pool;
+ 
++	if (!plane_state)
++		return true;
++
+ 	for (i = 0; i < context->stream_count; i++)
+ 		if (context->streams[i] == stream) {
+ 			stream_status = &context->stream_status[i];
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7533.c b/drivers/gpu/drm/bridge/adv7511/adv7533.c
+index f304a5ff8e596..e0bdedf22390c 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7533.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7533.c
+@@ -103,22 +103,19 @@ void adv7533_dsi_power_off(struct adv7511 *adv)
+ enum drm_mode_status adv7533_mode_valid(struct adv7511 *adv,
+ 					const struct drm_display_mode *mode)
+ {
+-	int lanes;
++	unsigned long max_lane_freq;
+ 	struct mipi_dsi_device *dsi = adv->dsi;
++	u8 bpp = mipi_dsi_pixel_format_to_bpp(dsi->format);
+ 
+-	if (mode->clock > 80000)
+-		lanes = 4;
+-	else
+-		lanes = 3;
+-
+-	/*
+-	 * TODO: add support for dynamic switching of lanes
+-	 * by using the bridge pre_enable() op . Till then filter
+-	 * out the modes which shall need different number of lanes
+-	 * than what was configured in the device tree.
+-	 */
+-	if (lanes != dsi->lanes)
+-		return MODE_BAD;
++	/* Check max clock for either 7533 or 7535 */
++	if (mode->clock > (adv->type == ADV7533 ? 80000 : 148500))
++		return MODE_CLOCK_HIGH;
++
++	/* Check max clock for each lane */
++	max_lane_freq = (adv->type == ADV7533 ? 800000 : 891000);
++
++	if (mode->clock * bpp > max_lane_freq * adv->num_dsi_lanes)
++		return MODE_CLOCK_HIGH;
+ 
+ 	return MODE_OK;
+ }
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index ac5d61e65124e..04f2ec2254e9f 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -1299,6 +1299,9 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
+ 		return -EINVAL;
+ 	}
+ 
++	var->xres_virtual = fb->width;
++	var->yres_virtual = fb->height;
++
+ 	/*
+ 	 * Workaround for SDL 1.2, which is known to be setting all pixel format
+ 	 * fields values to zero in some cases. We treat this situation as a
+diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
+index e5432dcf69996..d3f0d048594e7 100644
+--- a/drivers/gpu/drm/drm_probe_helper.c
++++ b/drivers/gpu/drm/drm_probe_helper.c
+@@ -488,8 +488,9 @@ retry:
+ 		 */
+ 		dev->mode_config.delayed_event = true;
+ 		if (dev->mode_config.poll_enabled)
+-			schedule_delayed_work(&dev->mode_config.output_poll_work,
+-					      0);
++			mod_delayed_work(system_wq,
++					 &dev->mode_config.output_poll_work,
++					 0);
+ 	}
+ 
+ 	/* Re-enable polling in case the global poll config changed. */
+diff --git a/drivers/gpu/drm/exynos/exynos5433_drm_decon.c b/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
+index 1f79bc2a881e1..c277d2fc50c66 100644
+--- a/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
++++ b/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
+@@ -775,8 +775,8 @@ static int decon_conf_irq(struct decon_context *ctx, const char *name,
+ 			return irq;
+ 		}
+ 	}
+-	irq_set_status_flags(irq, IRQ_NOAUTOEN);
+-	ret = devm_request_irq(ctx->dev, irq, handler, flags, "drm_decon", ctx);
++	ret = devm_request_irq(ctx->dev, irq, handler,
++			       flags | IRQF_NO_AUTOEN, "drm_decon", ctx);
+ 	if (ret < 0) {
+ 		dev_err(ctx->dev, "IRQ %s request failed\n", name);
+ 		return ret;
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_dsi.c b/drivers/gpu/drm/exynos/exynos_drm_dsi.c
+index 5b9666fc7af1a..afb03de2880f0 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_dsi.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_dsi.c
+@@ -1353,10 +1353,9 @@ static int exynos_dsi_register_te_irq(struct exynos_dsi *dsi,
+ 	}
+ 
+ 	te_gpio_irq = gpio_to_irq(dsi->te_gpio);
+-	irq_set_status_flags(te_gpio_irq, IRQ_NOAUTOEN);
+ 
+ 	ret = request_threaded_irq(te_gpio_irq, exynos_dsi_te_irq_handler, NULL,
+-					IRQF_TRIGGER_RISING, "TE", dsi);
++				   IRQF_TRIGGER_RISING | IRQF_NO_AUTOEN, "TE", dsi);
+ 	if (ret) {
+ 		dev_err(dsi->dev, "request interrupt failed with %d\n", ret);
+ 		gpio_free(dsi->te_gpio);
+@@ -1802,9 +1801,9 @@ static int exynos_dsi_probe(struct platform_device *pdev)
+ 	if (dsi->irq < 0)
+ 		return dsi->irq;
+ 
+-	irq_set_status_flags(dsi->irq, IRQ_NOAUTOEN);
+ 	ret = devm_request_threaded_irq(dev, dsi->irq, NULL,
+-					exynos_dsi_irq, IRQF_ONESHOT,
++					exynos_dsi_irq,
++					IRQF_ONESHOT | IRQF_NO_AUTOEN,
+ 					dev_name(dev), dsi);
+ 	if (ret) {
+ 		dev_err(dev, "failed to request dsi irq\n");
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index d46011f7a8380..9a06bd8cb200b 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -5844,7 +5844,7 @@ intel_get_crtc_new_encoder(const struct intel_atomic_state *state,
+ 		num_encoders++;
+ 	}
+ 
+-	drm_WARN(encoder->base.dev, num_encoders != 1,
++	drm_WARN(state->base.dev, num_encoders != 1,
+ 		 "%d encoders for pipe %c\n",
+ 		 num_encoders, pipe_name(crtc->pipe));
+ 
+diff --git a/drivers/gpu/drm/lima/lima_drv.c b/drivers/gpu/drm/lima/lima_drv.c
+index ab460121fd52c..65dc0dc2c119a 100644
+--- a/drivers/gpu/drm/lima/lima_drv.c
++++ b/drivers/gpu/drm/lima/lima_drv.c
+@@ -392,8 +392,10 @@ static int lima_pdev_probe(struct platform_device *pdev)
+ 
+ 	/* Allocate and initialize the DRM device. */
+ 	ddev = drm_dev_alloc(&lima_drm_driver, &pdev->dev);
+-	if (IS_ERR(ddev))
+-		return PTR_ERR(ddev);
++	if (IS_ERR(ddev)) {
++		err = PTR_ERR(ddev);
++		goto err_out0;
++	}
+ 
+ 	ddev->dev_private = ldev;
+ 	ldev->ddev = ddev;
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index 6f84db97e20e8..0fcba2bc26b8e 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -1569,6 +1569,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
+ 	struct a5xx_gpu *a5xx_gpu = NULL;
+ 	struct adreno_gpu *adreno_gpu;
+ 	struct msm_gpu *gpu;
++	unsigned int nr_rings;
+ 	int ret;
+ 
+ 	if (!pdev) {
+@@ -1589,7 +1590,12 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
+ 
+ 	check_speed_bin(&pdev->dev);
+ 
+-	ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 4);
++	nr_rings = 4;
++
++	if (adreno_is_a510(adreno_gpu))
++		nr_rings = 1;
++
++	ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, nr_rings);
+ 	if (ret) {
+ 		a5xx_destroy(&(a5xx_gpu->base.base));
+ 		return ERR_PTR(ret);
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
+index 58e03b20e1c7a..760687f66ae5b 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
+@@ -301,8 +301,11 @@ struct msm_gpu *adreno_load_gpu(struct drm_device *dev)
+ 	if (ret)
+ 		return NULL;
+ 
+-	/* Make sure pm runtime is active and reset any previous errors */
+-	pm_runtime_set_active(&pdev->dev);
++	/*
++	 * Now that we have firmware loaded, and are ready to begin
++	 * booting the gpu, go ahead and enable runpm:
++	 */
++	pm_runtime_enable(&pdev->dev);
+ 
+ 	ret = pm_runtime_get_sync(&pdev->dev);
+ 	if (ret < 0) {
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index 78181e2d78a97..11a6a41b4910f 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -916,7 +916,6 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
+ 	pm_runtime_set_autosuspend_delay(dev,
+ 		adreno_gpu->info->inactive_period);
+ 	pm_runtime_use_autosuspend(dev);
+-	pm_runtime_enable(dev);
+ 
+ 	ret = msm_gpu_init(drm, pdev, &adreno_gpu->base, &funcs->base,
+ 			adreno_gpu->info->name, &adreno_gpu_config);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index a0274fcfe9c9d..408fc6c8a6df8 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -634,7 +634,7 @@ static int dpu_encoder_virt_atomic_check(
+ 		if (drm_atomic_crtc_needs_modeset(crtc_state)) {
+ 			dpu_rm_release(global_state, drm_enc);
+ 
+-			if (!crtc_state->active_changed || crtc_state->active)
++			if (!crtc_state->active_changed || crtc_state->enable)
+ 				ret = dpu_rm_reserve(&dpu_kms->rm, global_state,
+ 						drm_enc, crtc_state, topology);
+ 		}
+diff --git a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
+index 6ac1accade803..b19597b836e3a 100644
+--- a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
++++ b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
+@@ -458,7 +458,7 @@ static int otm8009a_probe(struct mipi_dsi_device *dsi)
+ 		       DRM_MODE_CONNECTOR_DSI);
+ 
+ 	ctx->bl_dev = devm_backlight_device_register(dev, dev_name(dev),
+-						     dsi->host->dev, ctx,
++						     dev, ctx,
+ 						     &otm8009a_backlight_ops,
+ 						     NULL);
+ 	if (IS_ERR(ctx->bl_dev)) {
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+index 62e5d0970525e..22ff4a5929768 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+@@ -250,9 +250,6 @@ static int rockchip_drm_gem_object_mmap(struct drm_gem_object *obj,
+ 	else
+ 		ret = rockchip_drm_gem_object_mmap_dma(obj, vma);
+ 
+-	if (ret)
+-		drm_gem_vm_close(vma);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
+index 17f32f550dd99..575bc331716e8 100644
+--- a/drivers/gpu/drm/vgem/vgem_fence.c
++++ b/drivers/gpu/drm/vgem/vgem_fence.c
+@@ -249,4 +249,5 @@ void vgem_fence_close(struct vgem_file *vfile)
+ {
+ 	idr_for_each(&vfile->fence_idr, __vgem_fence_idr_fini, vfile);
+ 	idr_destroy(&vfile->fence_idr);
++	mutex_destroy(&vfile->fence_mutex);
+ }
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index afb94b89fc4d4..6c64165fae13e 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1265,6 +1265,9 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 
+ 	struct input_dev *pen_input = wacom->pen_input;
+ 	unsigned char *data = wacom->data;
++	int number_of_valid_frames = 0;
++	int time_interval = 15000000;
++	ktime_t time_packet_received = ktime_get();
+ 	int i;
+ 
+ 	if (wacom->features.type == INTUOSP2_BT ||
+@@ -1285,12 +1288,30 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 		wacom->id[0] |= (wacom->serial[0] >> 32) & 0xFFFFF;
+ 	}
+ 
++	/* number of valid frames */
+ 	for (i = 0; i < pen_frames; i++) {
+ 		unsigned char *frame = &data[i*pen_frame_len + 1];
+ 		bool valid = frame[0] & 0x80;
++
++		if (valid)
++			number_of_valid_frames++;
++	}
++
++	if (number_of_valid_frames) {
++		if (wacom->hid_data.time_delayed)
++			time_interval = ktime_get() - wacom->hid_data.time_delayed;
++		time_interval /= number_of_valid_frames;
++		wacom->hid_data.time_delayed = time_packet_received;
++	}
++
++	for (i = 0; i < number_of_valid_frames; i++) {
++		unsigned char *frame = &data[i*pen_frame_len + 1];
++		bool valid = frame[0] & 0x80;
+ 		bool prox = frame[0] & 0x40;
+ 		bool range = frame[0] & 0x20;
+ 		bool invert = frame[0] & 0x10;
++		int frames_number_reversed = number_of_valid_frames - i - 1;
++		int event_timestamp = time_packet_received - frames_number_reversed * time_interval;
+ 
+ 		if (!valid)
+ 			continue;
+@@ -1303,6 +1324,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 			wacom->tool[0] = 0;
+ 			wacom->id[0] = 0;
+ 			wacom->serial[0] = 0;
++			wacom->hid_data.time_delayed = 0;
+ 			return;
+ 		}
+ 
+@@ -1339,6 +1361,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 						 get_unaligned_le16(&frame[11]));
+ 			}
+ 		}
++
+ 		if (wacom->tool[0]) {
+ 			input_report_abs(pen_input, ABS_PRESSURE, get_unaligned_le16(&frame[5]));
+ 			if (wacom->features.type == INTUOSP2_BT ||
+@@ -1362,6 +1385,9 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 
+ 		wacom->shared->stylus_in_proximity = prox;
+ 
++		/* add timestamp to unpack the frames */
++		input_set_timestamp(pen_input, event_timestamp);
++
+ 		input_sync(pen_input);
+ 	}
+ }
+@@ -1853,6 +1879,7 @@ static void wacom_map_usage(struct input_dev *input, struct hid_usage *usage,
+ 	int fmax = field->logical_maximum;
+ 	unsigned int equivalent_usage = wacom_equivalent_usage(usage->hid);
+ 	int resolution_code = code;
++	int resolution = hidinput_calc_abs_res(field, resolution_code);
+ 
+ 	if (equivalent_usage == HID_DG_TWIST) {
+ 		resolution_code = ABS_RZ;
+@@ -1875,8 +1902,15 @@ static void wacom_map_usage(struct input_dev *input, struct hid_usage *usage,
+ 	switch (type) {
+ 	case EV_ABS:
+ 		input_set_abs_params(input, code, fmin, fmax, fuzz, 0);
+-		input_abs_set_res(input, code,
+-				  hidinput_calc_abs_res(field, resolution_code));
++
++		/* older tablet may miss physical usage */
++		if ((code == ABS_X || code == ABS_Y) && !resolution) {
++			resolution = WACOM_INTUOS_RES;
++			hid_warn(input,
++				 "Wacom usage (%d) missing resolution \n",
++				 code);
++		}
++		input_abs_set_res(input, code, resolution);
+ 		break;
+ 	case EV_KEY:
+ 		input_set_capability(input, EV_KEY, code);
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index ca172efcf072f..88badfbae999c 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -320,6 +320,7 @@ struct hid_data {
+ 	int bat_connected;
+ 	int ps_connected;
+ 	bool pad_input_event_flag;
++	int time_delayed;
+ };
+ 
+ struct wacom_remote_data {
+diff --git a/drivers/hwmon/adt7475.c b/drivers/hwmon/adt7475.c
+index 6b84822e7d93b..22e314725def0 100644
+--- a/drivers/hwmon/adt7475.c
++++ b/drivers/hwmon/adt7475.c
+@@ -1515,9 +1515,9 @@ static int adt7475_set_pwm_polarity(struct i2c_client *client)
+ 	int ret, i;
+ 	u8 val;
+ 
+-	ret = of_property_read_u32_array(client->dev.of_node,
+-					 "adi,pwm-active-state", states,
+-					 ARRAY_SIZE(states));
++	ret = device_property_read_u32_array(&client->dev,
++					     "adi,pwm-active-state", states,
++					     ARRAY_SIZE(states));
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index 3bc2551577a30..74a67089ad073 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -74,6 +74,7 @@ static DEFINE_MUTEX(nb_smu_ind_mutex);
+ 
+ #define ZEN_CUR_TEMP_SHIFT			21
+ #define ZEN_CUR_TEMP_RANGE_SEL_MASK		BIT(19)
++#define ZEN_CUR_TEMP_TJ_SEL_MASK		GENMASK(17, 16)
+ 
+ #define ZEN_SVI_BASE				0x0005A000
+ 
+@@ -173,7 +174,8 @@ static long get_raw_temp(struct k10temp_data *data)
+ 
+ 	data->read_tempreg(data->pdev, &regval);
+ 	temp = (regval >> ZEN_CUR_TEMP_SHIFT) * 125;
+-	if (regval & data->temp_adjust_mask)
++	if ((regval & data->temp_adjust_mask) ||
++	    (regval & ZEN_CUR_TEMP_TJ_SEL_MASK) == ZEN_CUR_TEMP_TJ_SEL_MASK)
+ 		temp -= 49000;
+ 	return temp;
+ }
+diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
+index bdc34ca449f71..c5698c62b6103 100644
+--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
++++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
+@@ -619,6 +619,7 @@ int __init etm_perf_init(void)
+ 	etm_pmu.addr_filters_sync	= etm_addr_filters_sync;
+ 	etm_pmu.addr_filters_validate	= etm_addr_filters_validate;
+ 	etm_pmu.nr_addr_filters		= ETM_ADDR_CMP_MAX;
++	etm_pmu.module			= THIS_MODULE;
+ 
+ 	ret = perf_pmu_register(&etm_pmu, CORESIGHT_ETM_PMU_NAME, -1);
+ 	if (ret == 0)
+diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c
+index 50928216b3f28..24987902ca590 100644
+--- a/drivers/i2c/busses/i2c-cadence.c
++++ b/drivers/i2c/busses/i2c-cadence.c
+@@ -792,8 +792,10 @@ static int cdns_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ 	/* Check i2c operating mode and switch if possible */
+ 	if (id->dev_mode == CDNS_I2C_MODE_SLAVE) {
+-		if (id->slave_state != CDNS_I2C_SLAVE_STATE_IDLE)
+-			return -EAGAIN;
++		if (id->slave_state != CDNS_I2C_SLAVE_STATE_IDLE) {
++			ret = -EAGAIN;
++			goto out;
++		}
+ 
+ 		/* Set mode to master */
+ 		cdns_i2c_set_mode(CDNS_I2C_MODE_MASTER, id);
+diff --git a/drivers/i2c/busses/i2c-omap.c b/drivers/i2c/busses/i2c-omap.c
+index d4f6c6d60683a..8955f62b497e6 100644
+--- a/drivers/i2c/busses/i2c-omap.c
++++ b/drivers/i2c/busses/i2c-omap.c
+@@ -1058,7 +1058,7 @@ omap_i2c_isr(int irq, void *dev_id)
+ 	u16 stat;
+ 
+ 	stat = omap_i2c_read_reg(omap, OMAP_I2C_STAT_REG);
+-	mask = omap_i2c_read_reg(omap, OMAP_I2C_IE_REG);
++	mask = omap_i2c_read_reg(omap, OMAP_I2C_IE_REG) & ~OMAP_I2C_STAT_NACK;
+ 
+ 	if (stat & mask)
+ 		ret = IRQ_WAKE_THREAD;
+diff --git a/drivers/iio/adc/palmas_gpadc.c b/drivers/iio/adc/palmas_gpadc.c
+index f4756671cddb6..6ed0d151ad21a 100644
+--- a/drivers/iio/adc/palmas_gpadc.c
++++ b/drivers/iio/adc/palmas_gpadc.c
+@@ -628,7 +628,7 @@ out:
+ 
+ static int palmas_gpadc_remove(struct platform_device *pdev)
+ {
+-	struct iio_dev *indio_dev = dev_to_iio_dev(&pdev->dev);
++	struct iio_dev *indio_dev = dev_get_drvdata(&pdev->dev);
+ 	struct palmas_gpadc *adc = iio_priv(indio_dev);
+ 
+ 	if (adc->wakeup1_enable || adc->wakeup2_enable)
+diff --git a/drivers/iio/light/max44009.c b/drivers/iio/light/max44009.c
+index 801e5a0ad496b..f3648f20ef2c0 100644
+--- a/drivers/iio/light/max44009.c
++++ b/drivers/iio/light/max44009.c
+@@ -528,6 +528,12 @@ static int max44009_probe(struct i2c_client *client,
+ 	return devm_iio_device_register(&client->dev, indio_dev);
+ }
+ 
++static const struct of_device_id max44009_of_match[] = {
++	{ .compatible = "maxim,max44009" },
++	{ }
++};
++MODULE_DEVICE_TABLE(of, max44009_of_match);
++
+ static const struct i2c_device_id max44009_id[] = {
+ 	{ "max44009", 0 },
+ 	{ }
+@@ -537,18 +543,13 @@ MODULE_DEVICE_TABLE(i2c, max44009_id);
+ static struct i2c_driver max44009_driver = {
+ 	.driver = {
+ 		.name = MAX44009_DRV_NAME,
++		.of_match_table = max44009_of_match,
+ 	},
+ 	.probe = max44009_probe,
+ 	.id_table = max44009_id,
+ };
+ module_i2c_driver(max44009_driver);
+ 
+-static const struct of_device_id max44009_of_match[] = {
+-	{ .compatible = "maxim,max44009" },
+-	{ }
+-};
+-MODULE_DEVICE_TABLE(of, max44009_of_match);
+-
+ MODULE_AUTHOR("Robert Eshleman <bobbyeshleman@gmail.com>");
+ MODULE_LICENSE("GPL v2");
+ MODULE_DESCRIPTION("MAX44009 ambient light sensor driver");
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 3133b6be6cab9..db1a25fbe2fa9 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -2924,6 +2924,8 @@ static int cm_send_rej_locked(struct cm_id_private *cm_id_priv,
+ 	    (ari && ari_length > IB_CM_REJ_ARI_LENGTH))
+ 		return -EINVAL;
+ 
++	trace_icm_send_rej(&cm_id_priv->id, reason);
++
+ 	switch (state) {
+ 	case IB_CM_REQ_SENT:
+ 	case IB_CM_MRA_REQ_RCVD:
+@@ -2954,7 +2956,6 @@ static int cm_send_rej_locked(struct cm_id_private *cm_id_priv,
+ 		return -EINVAL;
+ 	}
+ 
+-	trace_icm_send_rej(&cm_id_priv->id, reason);
+ 	ret = ib_post_send_mad(msg, NULL);
+ 	if (ret) {
+ 		cm_free_msg(msg);
+diff --git a/drivers/infiniband/hw/hfi1/ipoib_tx.c b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+index ab1eefffc14b3..956fc3fd88b99 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib_tx.c
++++ b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+@@ -15,6 +15,7 @@
+ #include "verbs.h"
+ #include "trace_ibhdrs.h"
+ #include "ipoib.h"
++#include "trace_tx.h"
+ 
+ /* Add a convenience helper */
+ #define CIRC_ADD(val, add, size) (((val) + (add)) & ((size) - 1))
+@@ -63,12 +64,14 @@ static u64 hfi1_ipoib_used(struct hfi1_ipoib_txq *txq)
+ 
+ static void hfi1_ipoib_stop_txq(struct hfi1_ipoib_txq *txq)
+ {
++	trace_hfi1_txq_stop(txq);
+ 	if (atomic_inc_return(&txq->stops) == 1)
+ 		netif_stop_subqueue(txq->priv->netdev, txq->q_idx);
+ }
+ 
+ static void hfi1_ipoib_wake_txq(struct hfi1_ipoib_txq *txq)
+ {
++	trace_hfi1_txq_wake(txq);
+ 	if (atomic_dec_and_test(&txq->stops))
+ 		netif_wake_subqueue(txq->priv->netdev, txq->q_idx);
+ }
+@@ -89,8 +92,10 @@ static void hfi1_ipoib_check_queue_depth(struct hfi1_ipoib_txq *txq)
+ {
+ 	++txq->sent_txreqs;
+ 	if (hfi1_ipoib_used(txq) >= hfi1_ipoib_ring_hwat(txq) &&
+-	    !atomic_xchg(&txq->ring_full, 1))
++	    !atomic_xchg(&txq->ring_full, 1)) {
++		trace_hfi1_txq_full(txq);
+ 		hfi1_ipoib_stop_txq(txq);
++	}
+ }
+ 
+ static void hfi1_ipoib_check_queue_stopped(struct hfi1_ipoib_txq *txq)
+@@ -112,8 +117,10 @@ static void hfi1_ipoib_check_queue_stopped(struct hfi1_ipoib_txq *txq)
+ 	 * to protect against ring overflow.
+ 	 */
+ 	if (hfi1_ipoib_used(txq) < hfi1_ipoib_ring_lwat(txq) &&
+-	    atomic_xchg(&txq->ring_full, 0))
++	    atomic_xchg(&txq->ring_full, 0)) {
++		trace_hfi1_txq_xmit_unstopped(txq);
+ 		hfi1_ipoib_wake_txq(txq);
++	}
+ }
+ 
+ static void hfi1_ipoib_free_tx(struct ipoib_txreq *tx, int budget)
+@@ -244,6 +251,7 @@ static int hfi1_ipoib_build_ulp_payload(struct ipoib_txreq *tx,
+ 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ 
+ 		ret = sdma_txadd_page(dd,
++				      NULL,
+ 				      txreq,
+ 				      skb_frag_page(frag),
+ 				      frag->bv_offset,
+@@ -405,6 +413,7 @@ static struct ipoib_txreq *hfi1_ipoib_send_dma_common(struct net_device *dev,
+ 				sdma_select_engine_sc(priv->dd,
+ 						      txp->flow.tx_queue,
+ 						      txp->flow.sc5);
++			trace_hfi1_flow_switch(txp->txq);
+ 		}
+ 
+ 		return tx;
+@@ -525,6 +534,7 @@ static int hfi1_ipoib_send_dma_list(struct net_device *dev,
+ 	if (txq->flow.as_int != txp->flow.as_int) {
+ 		int ret;
+ 
++		trace_hfi1_flow_flush(txq);
+ 		ret = hfi1_ipoib_flush_tx_list(dev, txq);
+ 		if (unlikely(ret)) {
+ 			if (ret == -EBUSY)
+@@ -635,8 +645,10 @@ static int hfi1_ipoib_sdma_sleep(struct sdma_engine *sde,
+ 			/* came from non-list submit */
+ 			list_add_tail(&txreq->list, &txq->tx_list);
+ 		if (list_empty(&txq->wait.list)) {
+-			if (!atomic_xchg(&txq->no_desc, 1))
++			if (!atomic_xchg(&txq->no_desc, 1)) {
++				trace_hfi1_txq_queued(txq);
+ 				hfi1_ipoib_stop_txq(txq);
++			}
+ 			iowait_queue(pkts_sent, wait->iow, &sde->dmawait);
+ 		}
+ 
+@@ -659,6 +671,7 @@ static void hfi1_ipoib_sdma_wakeup(struct iowait *wait, int reason)
+ 	struct hfi1_ipoib_txq *txq =
+ 		container_of(wait, struct hfi1_ipoib_txq, wait);
+ 
++	trace_hfi1_txq_wakeup(txq);
+ 	if (likely(txq->priv->netdev->reg_state == NETREG_REGISTERED))
+ 		iowait_schedule(wait, system_highpri_wq, WORK_CPU_UNBOUND);
+ }
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
+index ed8a96ae61cef..d331184ded308 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
+@@ -167,11 +167,11 @@ int hfi1_mmu_rb_insert(struct mmu_rb_handler *handler,
+ 	spin_lock_irqsave(&handler->lock, flags);
+ 	node = __mmu_rb_search(handler, mnode->addr, mnode->len);
+ 	if (node) {
+-		ret = -EINVAL;
++		ret = -EEXIST;
+ 		goto unlock;
+ 	}
+ 	__mmu_int_rb_insert(mnode, &handler->root);
+-	list_add(&mnode->list, &handler->lru_list);
++	list_add_tail(&mnode->list, &handler->lru_list);
+ 
+ 	ret = handler->ops->insert(handler->ops_arg, mnode);
+ 	if (ret) {
+@@ -184,6 +184,19 @@ unlock:
+ 	return ret;
+ }
+ 
++/* Caller must hold handler lock */
++struct mmu_rb_node *hfi1_mmu_rb_get_first(struct mmu_rb_handler *handler,
++					  unsigned long addr, unsigned long len)
++{
++	struct mmu_rb_node *node;
++
++	trace_hfi1_mmu_rb_search(addr, len);
++	node = __mmu_int_rb_iter_first(&handler->root, addr, (addr + len) - 1);
++	if (node)
++		list_move_tail(&node->list, &handler->lru_list);
++	return node;
++}
++
+ /* Caller must hold handler lock */
+ static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler,
+ 					   unsigned long addr,
+@@ -208,32 +221,6 @@ static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler,
+ 	return node;
+ }
+ 
+-bool hfi1_mmu_rb_remove_unless_exact(struct mmu_rb_handler *handler,
+-				     unsigned long addr, unsigned long len,
+-				     struct mmu_rb_node **rb_node)
+-{
+-	struct mmu_rb_node *node;
+-	unsigned long flags;
+-	bool ret = false;
+-
+-	if (current->mm != handler->mn.mm)
+-		return ret;
+-
+-	spin_lock_irqsave(&handler->lock, flags);
+-	node = __mmu_rb_search(handler, addr, len);
+-	if (node) {
+-		if (node->addr == addr && node->len == len)
+-			goto unlock;
+-		__mmu_int_rb_remove(node, &handler->root);
+-		list_del(&node->list); /* remove from LRU list */
+-		ret = true;
+-	}
+-unlock:
+-	spin_unlock_irqrestore(&handler->lock, flags);
+-	*rb_node = node;
+-	return ret;
+-}
+-
+ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ {
+ 	struct mmu_rb_node *rbnode, *ptr;
+@@ -247,8 +234,7 @@ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ 	INIT_LIST_HEAD(&del_list);
+ 
+ 	spin_lock_irqsave(&handler->lock, flags);
+-	list_for_each_entry_safe_reverse(rbnode, ptr, &handler->lru_list,
+-					 list) {
++	list_for_each_entry_safe(rbnode, ptr, &handler->lru_list, list) {
+ 		if (handler->ops->evict(handler->ops_arg, rbnode, evict_arg,
+ 					&stop)) {
+ 			__mmu_int_rb_remove(rbnode, &handler->root);
+@@ -260,36 +246,11 @@ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ 	}
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	while (!list_empty(&del_list)) {
+-		rbnode = list_first_entry(&del_list, struct mmu_rb_node, list);
+-		list_del(&rbnode->list);
++	list_for_each_entry_safe(rbnode, ptr, &del_list, list) {
+ 		handler->ops->remove(handler->ops_arg, rbnode);
+ 	}
+ }
+ 
+-/*
+- * It is up to the caller to ensure that this function does not race with the
+- * mmu invalidate notifier which may be calling the users remove callback on
+- * 'node'.
+- */
+-void hfi1_mmu_rb_remove(struct mmu_rb_handler *handler,
+-			struct mmu_rb_node *node)
+-{
+-	unsigned long flags;
+-
+-	if (current->mm != handler->mn.mm)
+-		return;
+-
+-	/* Validity of handler and node pointers has been checked by caller. */
+-	trace_hfi1_mmu_rb_remove(node->addr, node->len);
+-	spin_lock_irqsave(&handler->lock, flags);
+-	__mmu_int_rb_remove(node, &handler->root);
+-	list_del(&node->list); /* remove from LRU list */
+-	spin_unlock_irqrestore(&handler->lock, flags);
+-
+-	handler->ops->remove(handler->ops_arg, node);
+-}
+-
+ static int mmu_notifier_range_start(struct mmu_notifier *mn,
+ 		const struct mmu_notifier_range *range)
+ {
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.h b/drivers/infiniband/hw/hfi1/mmu_rb.h
+index 423aacc67e948..0265d81c62061 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.h
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.h
+@@ -93,10 +93,8 @@ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler);
+ int hfi1_mmu_rb_insert(struct mmu_rb_handler *handler,
+ 		       struct mmu_rb_node *mnode);
+ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg);
+-void hfi1_mmu_rb_remove(struct mmu_rb_handler *handler,
+-			struct mmu_rb_node *mnode);
+-bool hfi1_mmu_rb_remove_unless_exact(struct mmu_rb_handler *handler,
+-				     unsigned long addr, unsigned long len,
+-				     struct mmu_rb_node **rb_node);
++struct mmu_rb_node *hfi1_mmu_rb_get_first(struct mmu_rb_handler *handler,
++					  unsigned long addr,
++					  unsigned long len);
+ 
+ #endif /* _HFI1_MMU_RB_H */
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index a044bee257f94..061562627dae4 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -1635,22 +1635,7 @@ static inline void sdma_unmap_desc(
+ 	struct hfi1_devdata *dd,
+ 	struct sdma_desc *descp)
+ {
+-	switch (sdma_mapping_type(descp)) {
+-	case SDMA_MAP_SINGLE:
+-		dma_unmap_single(
+-			&dd->pcidev->dev,
+-			sdma_mapping_addr(descp),
+-			sdma_mapping_len(descp),
+-			DMA_TO_DEVICE);
+-		break;
+-	case SDMA_MAP_PAGE:
+-		dma_unmap_page(
+-			&dd->pcidev->dev,
+-			sdma_mapping_addr(descp),
+-			sdma_mapping_len(descp),
+-			DMA_TO_DEVICE);
+-		break;
+-	}
++	system_descriptor_complete(dd, descp);
+ }
+ 
+ /*
+@@ -3170,7 +3155,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
+ 
+ 		/* Add descriptor for coalesce buffer */
+ 		tx->desc_limit = MAX_DESC;
+-		return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, tx,
++		return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, NULL, tx,
+ 					 addr, tx->tlen);
+ 	}
+ 
+@@ -3210,10 +3195,12 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ 			return rval;
+ 		}
+ 	}
++
+ 	/* finish the one just added */
+ 	make_tx_sdma_desc(
+ 		tx,
+ 		SDMA_MAP_NONE,
++		NULL,
+ 		dd->sdma_pad_phys,
+ 		sizeof(u32) - (tx->packet_len & (sizeof(u32) - 1)));
+ 	_sdma_close_tx(dd, tx);
+diff --git a/drivers/infiniband/hw/hfi1/sdma.h b/drivers/infiniband/hw/hfi1/sdma.h
+index 7a851191f9870..7d4f316ac6e43 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.h
++++ b/drivers/infiniband/hw/hfi1/sdma.h
+@@ -635,6 +635,7 @@ static inline dma_addr_t sdma_mapping_addr(struct sdma_desc *d)
+ static inline void make_tx_sdma_desc(
+ 	struct sdma_txreq *tx,
+ 	int type,
++	void *pinning_ctx,
+ 	dma_addr_t addr,
+ 	size_t len)
+ {
+@@ -653,6 +654,7 @@ static inline void make_tx_sdma_desc(
+ 				<< SDMA_DESC0_PHY_ADDR_SHIFT) |
+ 			(((u64)len & SDMA_DESC0_BYTE_COUNT_MASK)
+ 				<< SDMA_DESC0_BYTE_COUNT_SHIFT);
++	desc->pinning_ctx = pinning_ctx;
+ }
+ 
+ /* helper to extend txreq */
+@@ -685,6 +687,7 @@ static inline void _sdma_close_tx(struct hfi1_devdata *dd,
+ static inline int _sdma_txadd_daddr(
+ 	struct hfi1_devdata *dd,
+ 	int type,
++	void *pinning_ctx,
+ 	struct sdma_txreq *tx,
+ 	dma_addr_t addr,
+ 	u16 len)
+@@ -694,6 +697,7 @@ static inline int _sdma_txadd_daddr(
+ 	make_tx_sdma_desc(
+ 		tx,
+ 		type,
++		pinning_ctx,
+ 		addr, len);
+ 	WARN_ON(len > tx->tlen);
+ 	tx->tlen -= len;
+@@ -714,6 +718,7 @@ static inline int _sdma_txadd_daddr(
+ /**
+  * sdma_txadd_page() - add a page to the sdma_txreq
+  * @dd: the device to use for mapping
++ * @pinning_ctx: context to be released at descriptor retirement
+  * @tx: tx request to which the page is added
+  * @page: page to map
+  * @offset: offset within the page
+@@ -729,6 +734,7 @@ static inline int _sdma_txadd_daddr(
+  */
+ static inline int sdma_txadd_page(
+ 	struct hfi1_devdata *dd,
++	void *pinning_ctx,
+ 	struct sdma_txreq *tx,
+ 	struct page *page,
+ 	unsigned long offset,
+@@ -756,8 +762,7 @@ static inline int sdma_txadd_page(
+ 		return -ENOSPC;
+ 	}
+ 
+-	return _sdma_txadd_daddr(
+-			dd, SDMA_MAP_PAGE, tx, addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_PAGE, pinning_ctx, tx, addr, len);
+ }
+ 
+ /**
+@@ -791,7 +796,8 @@ static inline int sdma_txadd_daddr(
+ 			return rval;
+ 	}
+ 
+-	return _sdma_txadd_daddr(dd, SDMA_MAP_NONE, tx, addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_NONE, NULL, tx,
++				 addr, len);
+ }
+ 
+ /**
+@@ -837,8 +843,7 @@ static inline int sdma_txadd_kvaddr(
+ 		return -ENOSPC;
+ 	}
+ 
+-	return _sdma_txadd_daddr(
+-			dd, SDMA_MAP_SINGLE, tx, addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, NULL, tx, addr, len);
+ }
+ 
+ struct iowait_work;
+@@ -1090,4 +1095,5 @@ extern uint mod_num_sdma;
+ 
+ void sdma_update_lmc(struct hfi1_devdata *dd, u64 mask, u32 lid);
+ 
++void system_descriptor_complete(struct hfi1_devdata *dd, struct sdma_desc *descp);
+ #endif
+diff --git a/drivers/infiniband/hw/hfi1/sdma_txreq.h b/drivers/infiniband/hw/hfi1/sdma_txreq.h
+index 514a4784566b2..4204650cebc29 100644
+--- a/drivers/infiniband/hw/hfi1/sdma_txreq.h
++++ b/drivers/infiniband/hw/hfi1/sdma_txreq.h
+@@ -61,6 +61,7 @@
+ struct sdma_desc {
+ 	/* private:  don't use directly */
+ 	u64 qw[2];
++	void *pinning_ctx;
+ };
+ 
+ /**
+diff --git a/drivers/infiniband/hw/hfi1/trace_mmu.h b/drivers/infiniband/hw/hfi1/trace_mmu.h
+index 3b7abbc382c20..c3055cff4d6bb 100644
+--- a/drivers/infiniband/hw/hfi1/trace_mmu.h
++++ b/drivers/infiniband/hw/hfi1/trace_mmu.h
+@@ -78,10 +78,6 @@ DEFINE_EVENT(hfi1_mmu_rb_template, hfi1_mmu_rb_search,
+ 	     TP_PROTO(unsigned long addr, unsigned long len),
+ 	     TP_ARGS(addr, len));
+ 
+-DEFINE_EVENT(hfi1_mmu_rb_template, hfi1_mmu_rb_remove,
+-	     TP_PROTO(unsigned long addr, unsigned long len),
+-	     TP_ARGS(addr, len));
+-
+ DEFINE_EVENT(hfi1_mmu_rb_template, hfi1_mmu_mem_invalidate,
+ 	     TP_PROTO(unsigned long addr, unsigned long len),
+ 	     TP_ARGS(addr, len));
+diff --git a/drivers/infiniband/hw/hfi1/trace_tx.h b/drivers/infiniband/hw/hfi1/trace_tx.h
+index 769e5e4710c64..d44fc54858b90 100644
+--- a/drivers/infiniband/hw/hfi1/trace_tx.h
++++ b/drivers/infiniband/hw/hfi1/trace_tx.h
+@@ -53,6 +53,8 @@
+ #include "hfi.h"
+ #include "mad.h"
+ #include "sdma.h"
++#include "ipoib.h"
++#include "user_sdma.h"
+ 
+ const char *parse_sdma_flags(struct trace_seq *p, u64 desc0, u64 desc1);
+ 
+@@ -653,6 +655,80 @@ TRACE_EVENT(hfi1_sdma_user_completion,
+ 		      __entry->code)
+ );
+ 
++TRACE_EVENT(hfi1_usdma_defer,
++	    TP_PROTO(struct hfi1_user_sdma_pkt_q *pq,
++		     struct sdma_engine *sde,
++		     struct iowait *wait),
++	    TP_ARGS(pq, sde, wait),
++	    TP_STRUCT__entry(DD_DEV_ENTRY(pq->dd)
++			     __field(struct hfi1_user_sdma_pkt_q *, pq)
++			     __field(struct sdma_engine *, sde)
++			     __field(struct iowait *, wait)
++			     __field(int, engine)
++			     __field(int, empty)
++			     ),
++	     TP_fast_assign(DD_DEV_ASSIGN(pq->dd);
++			    __entry->pq = pq;
++			    __entry->sde = sde;
++			    __entry->wait = wait;
++			    __entry->engine = sde->this_idx;
++			    __entry->empty = list_empty(&__entry->wait->list);
++			    ),
++	     TP_printk("[%s] pq %llx sde %llx wait %llx engine %d empty %d",
++		       __get_str(dev),
++		       (unsigned long long)__entry->pq,
++		       (unsigned long long)__entry->sde,
++		       (unsigned long long)__entry->wait,
++		       __entry->engine,
++		       __entry->empty
++		)
++);
++
++TRACE_EVENT(hfi1_usdma_activate,
++	    TP_PROTO(struct hfi1_user_sdma_pkt_q *pq,
++		     struct iowait *wait,
++		     int reason),
++	    TP_ARGS(pq, wait, reason),
++	    TP_STRUCT__entry(DD_DEV_ENTRY(pq->dd)
++			     __field(struct hfi1_user_sdma_pkt_q *, pq)
++			     __field(struct iowait *, wait)
++			     __field(int, reason)
++			     ),
++	     TP_fast_assign(DD_DEV_ASSIGN(pq->dd);
++			    __entry->pq = pq;
++			    __entry->wait = wait;
++			    __entry->reason = reason;
++			    ),
++	     TP_printk("[%s] pq %llx wait %llx reason %d",
++		       __get_str(dev),
++		       (unsigned long long)__entry->pq,
++		       (unsigned long long)__entry->wait,
++		       __entry->reason
++		)
++);
++
++TRACE_EVENT(hfi1_usdma_we,
++	    TP_PROTO(struct hfi1_user_sdma_pkt_q *pq,
++		     int we_ret),
++	    TP_ARGS(pq, we_ret),
++	    TP_STRUCT__entry(DD_DEV_ENTRY(pq->dd)
++			     __field(struct hfi1_user_sdma_pkt_q *, pq)
++			     __field(int, state)
++			     __field(int, we_ret)
++			     ),
++	     TP_fast_assign(DD_DEV_ASSIGN(pq->dd);
++			    __entry->pq = pq;
++			    __entry->state = pq->state;
++			    __entry->we_ret = we_ret;
++			    ),
++	     TP_printk("[%s] pq %llx state %d we_ret %d",
++		       __get_str(dev),
++		       (unsigned long long)__entry->pq,
++		       __entry->state,
++		       __entry->we_ret
++		)
++);
++
+ const char *print_u32_array(struct trace_seq *, u32 *, int);
+ #define __print_u32_hex(arr, len) print_u32_array(p, arr, len)
+ 
+@@ -858,6 +934,109 @@ DEFINE_EVENT(
+ 	TP_ARGS(qp, flag)
+ );
+ 
++DECLARE_EVENT_CLASS(/* AIP  */
++	hfi1_ipoib_txq_template,
++	TP_PROTO(struct hfi1_ipoib_txq *txq),
++	TP_ARGS(txq),
++	TP_STRUCT__entry(/* entry */
++		DD_DEV_ENTRY(txq->priv->dd)
++		__field(struct hfi1_ipoib_txq *, txq)
++		__field(struct sdma_engine *, sde)
++		__field(ulong, head)
++		__field(ulong, tail)
++		__field(uint, used)
++		__field(uint, flow)
++		__field(int, stops)
++		__field(int, no_desc)
++		__field(u8, idx)
++		__field(u8, stopped)
++	),
++	TP_fast_assign(/* assign */
++		DD_DEV_ASSIGN(txq->priv->dd)
++		__entry->txq = txq;
++		__entry->sde = txq->sde;
++		__entry->head = txq->tx_ring.head;
++		__entry->tail = txq->tx_ring.tail;
++		__entry->idx = txq->q_idx;
++		__entry->used =
++			txq->sent_txreqs -
++			atomic64_read(&txq->complete_txreqs);
++		__entry->flow = txq->flow.as_int;
++		__entry->stops = atomic_read(&txq->stops);
++		__entry->no_desc = atomic_read(&txq->no_desc);
++		__entry->stopped =
++		 __netif_subqueue_stopped(txq->priv->netdev, txq->q_idx);
++	),
++	TP_printk(/* print  */
++		"[%s] txq %llx idx %u sde %llx head %lx tail %lx flow %x used %u stops %d no_desc %d stopped %u",
++		__get_str(dev),
++		(unsigned long long)__entry->txq,
++		__entry->idx,
++		(unsigned long long)__entry->sde,
++		__entry->head,
++		__entry->tail,
++		__entry->flow,
++		__entry->used,
++		__entry->stops,
++		__entry->no_desc,
++		__entry->stopped
++	)
++);
++
++DEFINE_EVENT(/* queue stop */
++	hfi1_ipoib_txq_template, hfi1_txq_stop,
++	TP_PROTO(struct hfi1_ipoib_txq *txq),
++	TP_ARGS(txq)
++);
++
++DEFINE_EVENT(/* queue wake */
++	hfi1_ipoib_txq_template, hfi1_txq_wake,
++	TP_PROTO(struct hfi1_ipoib_txq *txq),
++	TP_ARGS(txq)
++);
++
++DEFINE_EVENT(/* flow flush */
++	hfi1_ipoib_txq_template, hfi1_flow_flush,
++	TP_PROTO(struct hfi1_ipoib_txq *txq),
++	TP_ARGS(txq)
++);
++
++DEFINE_EVENT(/* flow switch */
++	hfi1_ipoib_txq_template, hfi1_flow_switch,
++	TP_PROTO(struct hfi1_ipoib_txq *txq),
++	TP_ARGS(txq)
++);
++
++DEFINE_EVENT(/* wakeup */
++	hfi1_ipoib_txq_template, hfi1_txq_wakeup,
++	TP_PROTO(struct hfi1_ipoib_txq *txq),
++	TP_ARGS(txq)
++);
++
++DEFINE_EVENT(/* full */
++	hfi1_ipoib_txq_template, hfi1_txq_full,
++	TP_PROTO(struct hfi1_ipoib_txq *txq),
++	TP_ARGS(txq)
++);
++
++DEFINE_EVENT(/* queued */
++	hfi1_ipoib_txq_template, hfi1_txq_queued,
++	TP_PROTO(struct hfi1_ipoib_txq *txq),
++	TP_ARGS(txq)
++);
++
++DEFINE_EVENT(/* xmit_stopped */
++	hfi1_ipoib_txq_template, hfi1_txq_xmit_stopped,
++	TP_PROTO(struct hfi1_ipoib_txq *txq),
++	TP_ARGS(txq)
++);
++
++DEFINE_EVENT(/* xmit_unstopped */
++	hfi1_ipoib_txq_template, hfi1_txq_xmit_unstopped,
++	TP_PROTO(struct hfi1_ipoib_txq *txq),
++	TP_ARGS(txq)
++);
++
+ #endif /* __HFI1_TRACE_TX_H */
+ 
+ #undef TRACE_INCLUDE_PATH
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
+index 4a4956f96a7eb..1eb5a44a4ae6a 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.c
++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
+@@ -65,7 +65,6 @@
+ 
+ #include "hfi.h"
+ #include "sdma.h"
+-#include "mmu_rb.h"
+ #include "user_sdma.h"
+ #include "verbs.h"  /* for the headers */
+ #include "common.h" /* for struct hfi1_tid_info */
+@@ -80,11 +79,7 @@ static unsigned initial_pkt_count = 8;
+ static int user_sdma_send_pkts(struct user_sdma_request *req, u16 maxpkts);
+ static void user_sdma_txreq_cb(struct sdma_txreq *txreq, int status);
+ static inline void pq_update(struct hfi1_user_sdma_pkt_q *pq);
+-static void user_sdma_free_request(struct user_sdma_request *req, bool unpin);
+-static int pin_vector_pages(struct user_sdma_request *req,
+-			    struct user_sdma_iovec *iovec);
+-static void unpin_vector_pages(struct mm_struct *mm, struct page **pages,
+-			       unsigned start, unsigned npages);
++static void user_sdma_free_request(struct user_sdma_request *req);
+ static int check_header_template(struct user_sdma_request *req,
+ 				 struct hfi1_pkt_header *hdr, u32 lrhlen,
+ 				 u32 datalen);
+@@ -122,6 +117,11 @@ static struct mmu_rb_ops sdma_rb_ops = {
+ 	.invalidate = sdma_rb_invalidate
+ };
+ 
++static int add_system_pages_to_sdma_packet(struct user_sdma_request *req,
++					   struct user_sdma_txreq *tx,
++					   struct user_sdma_iovec *iovec,
++					   u32 *pkt_remaining);
++
+ static int defer_packet_queue(
+ 	struct sdma_engine *sde,
+ 	struct iowait_work *wait,
+@@ -133,6 +133,7 @@ static int defer_packet_queue(
+ 		container_of(wait->iow, struct hfi1_user_sdma_pkt_q, busy);
+ 
+ 	write_seqlock(&sde->waitlock);
++	trace_hfi1_usdma_defer(pq, sde, &pq->busy);
+ 	if (sdma_progress(sde, seq, txreq))
+ 		goto eagain;
+ 	/*
+@@ -157,7 +158,8 @@ static void activate_packet_queue(struct iowait *wait, int reason)
+ {
+ 	struct hfi1_user_sdma_pkt_q *pq =
+ 		container_of(wait, struct hfi1_user_sdma_pkt_q, busy);
+-	pq->busy.lock = NULL;
++
++	trace_hfi1_usdma_activate(pq, wait, reason);
+ 	xchg(&pq->state, SDMA_PKT_Q_ACTIVE);
+ 	wake_up(&wait->wait_dma);
+ };
+@@ -451,6 +453,7 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
+ 		ret = -EINVAL;
+ 		goto free_req;
+ 	}
++
+ 	/* Copy the header from the user buffer */
+ 	ret = copy_from_user(&req->hdr, iovec[idx].iov_base + sizeof(info),
+ 			     sizeof(req->hdr));
+@@ -525,9 +528,8 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
+ 		memcpy(&req->iovs[i].iov,
+ 		       iovec + idx++,
+ 		       sizeof(req->iovs[i].iov));
+-		ret = pin_vector_pages(req, &req->iovs[i]);
+-		if (ret) {
+-			req->data_iovs = i;
++		if (req->iovs[i].iov.iov_len == 0) {
++			ret = -EINVAL;
+ 			goto free_req;
+ 		}
+ 		req->data_len += req->iovs[i].iov.iov_len;
+@@ -599,13 +601,17 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
+ 	while (req->seqsubmitted != req->info.npkts) {
+ 		ret = user_sdma_send_pkts(req, pcount);
+ 		if (ret < 0) {
++			int we_ret;
++
+ 			if (ret != -EBUSY)
+ 				goto free_req;
+-			if (wait_event_interruptible_timeout(
++			we_ret = wait_event_interruptible_timeout(
+ 				pq->busy.wait_dma,
+ 				pq->state == SDMA_PKT_Q_ACTIVE,
+ 				msecs_to_jiffies(
+-					SDMA_IOWAIT_TIMEOUT)) <= 0)
++					SDMA_IOWAIT_TIMEOUT));
++			trace_hfi1_usdma_we(pq, we_ret);
++			if (we_ret <= 0)
+ 				flush_pq_iowait(pq);
+ 		}
+ 	}
+@@ -621,7 +627,7 @@ free_req:
+ 		if (req->seqsubmitted)
+ 			wait_event(pq->busy.wait_dma,
+ 				   (req->seqcomp == req->seqsubmitted - 1));
+-		user_sdma_free_request(req, true);
++		user_sdma_free_request(req);
+ 		pq_update(pq);
+ 		set_comp_state(pq, cq, info.comp_idx, ERROR, ret);
+ 	}
+@@ -733,48 +739,6 @@ static int user_sdma_txadd_ahg(struct user_sdma_request *req,
+ 	return ret;
+ }
+ 
+-static int user_sdma_txadd(struct user_sdma_request *req,
+-			   struct user_sdma_txreq *tx,
+-			   struct user_sdma_iovec *iovec, u32 datalen,
+-			   u32 *queued_ptr, u32 *data_sent_ptr,
+-			   u64 *iov_offset_ptr)
+-{
+-	int ret;
+-	unsigned int pageidx, len;
+-	unsigned long base, offset;
+-	u64 iov_offset = *iov_offset_ptr;
+-	u32 queued = *queued_ptr, data_sent = *data_sent_ptr;
+-	struct hfi1_user_sdma_pkt_q *pq = req->pq;
+-
+-	base = (unsigned long)iovec->iov.iov_base;
+-	offset = offset_in_page(base + iovec->offset + iov_offset);
+-	pageidx = (((iovec->offset + iov_offset + base) - (base & PAGE_MASK)) >>
+-		   PAGE_SHIFT);
+-	len = offset + req->info.fragsize > PAGE_SIZE ?
+-		PAGE_SIZE - offset : req->info.fragsize;
+-	len = min((datalen - queued), len);
+-	ret = sdma_txadd_page(pq->dd, &tx->txreq, iovec->pages[pageidx],
+-			      offset, len);
+-	if (ret) {
+-		SDMA_DBG(req, "SDMA txreq add page failed %d\n", ret);
+-		return ret;
+-	}
+-	iov_offset += len;
+-	queued += len;
+-	data_sent += len;
+-	if (unlikely(queued < datalen && pageidx == iovec->npages &&
+-		     req->iov_idx < req->data_iovs - 1)) {
+-		iovec->offset += iov_offset;
+-		iovec = &req->iovs[++req->iov_idx];
+-		iov_offset = 0;
+-	}
+-
+-	*queued_ptr = queued;
+-	*data_sent_ptr = data_sent;
+-	*iov_offset_ptr = iov_offset;
+-	return ret;
+-}
+-
+ static int user_sdma_send_pkts(struct user_sdma_request *req, u16 maxpkts)
+ {
+ 	int ret = 0;
+@@ -806,8 +770,7 @@ static int user_sdma_send_pkts(struct user_sdma_request *req, u16 maxpkts)
+ 		maxpkts = req->info.npkts - req->seqnum;
+ 
+ 	while (npkts < maxpkts) {
+-		u32 datalen = 0, queued = 0, data_sent = 0;
+-		u64 iov_offset = 0;
++		u32 datalen = 0;
+ 
+ 		/*
+ 		 * Check whether any of the completions have come back
+@@ -900,27 +863,17 @@ static int user_sdma_send_pkts(struct user_sdma_request *req, u16 maxpkts)
+ 				goto free_txreq;
+ 		}
+ 
+-		/*
+-		 * If the request contains any data vectors, add up to
+-		 * fragsize bytes to the descriptor.
+-		 */
+-		while (queued < datalen &&
+-		       (req->sent + data_sent) < req->data_len) {
+-			ret = user_sdma_txadd(req, tx, iovec, datalen,
+-					      &queued, &data_sent, &iov_offset);
+-			if (ret)
+-				goto free_txreq;
+-		}
+-		/*
+-		 * The txreq was submitted successfully so we can update
+-		 * the counters.
+-		 */
+ 		req->koffset += datalen;
+ 		if (req_opcode(req->info.ctrl) == EXPECTED)
+ 			req->tidoffset += datalen;
+-		req->sent += data_sent;
+-		if (req->data_len)
+-			iovec->offset += iov_offset;
++		req->sent += datalen;
++		while (datalen) {
++			ret = add_system_pages_to_sdma_packet(req, tx, iovec,
++							      &datalen);
++			if (ret)
++				goto free_txreq;
++			iovec = &req->iovs[req->iov_idx];
++		}
+ 		list_add_tail(&tx->txreq.list, &req->txps);
+ 		/*
+ 		 * It is important to increment this here as it is used to
+@@ -957,133 +910,14 @@ free_tx:
+ static u32 sdma_cache_evict(struct hfi1_user_sdma_pkt_q *pq, u32 npages)
+ {
+ 	struct evict_data evict_data;
++	struct mmu_rb_handler *handler = pq->handler;
+ 
+ 	evict_data.cleared = 0;
+ 	evict_data.target = npages;
+-	hfi1_mmu_rb_evict(pq->handler, &evict_data);
++	hfi1_mmu_rb_evict(handler, &evict_data);
+ 	return evict_data.cleared;
+ }
+ 
+-static int pin_sdma_pages(struct user_sdma_request *req,
+-			  struct user_sdma_iovec *iovec,
+-			  struct sdma_mmu_node *node,
+-			  int npages)
+-{
+-	int pinned, cleared;
+-	struct page **pages;
+-	struct hfi1_user_sdma_pkt_q *pq = req->pq;
+-
+-	pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
+-	if (!pages)
+-		return -ENOMEM;
+-	memcpy(pages, node->pages, node->npages * sizeof(*pages));
+-
+-	npages -= node->npages;
+-retry:
+-	if (!hfi1_can_pin_pages(pq->dd, current->mm,
+-				atomic_read(&pq->n_locked), npages)) {
+-		cleared = sdma_cache_evict(pq, npages);
+-		if (cleared >= npages)
+-			goto retry;
+-	}
+-	pinned = hfi1_acquire_user_pages(current->mm,
+-					 ((unsigned long)iovec->iov.iov_base +
+-					 (node->npages * PAGE_SIZE)), npages, 0,
+-					 pages + node->npages);
+-	if (pinned < 0) {
+-		kfree(pages);
+-		return pinned;
+-	}
+-	if (pinned != npages) {
+-		unpin_vector_pages(current->mm, pages, node->npages, pinned);
+-		return -EFAULT;
+-	}
+-	kfree(node->pages);
+-	node->rb.len = iovec->iov.iov_len;
+-	node->pages = pages;
+-	atomic_add(pinned, &pq->n_locked);
+-	return pinned;
+-}
+-
+-static void unpin_sdma_pages(struct sdma_mmu_node *node)
+-{
+-	if (node->npages) {
+-		unpin_vector_pages(mm_from_sdma_node(node), node->pages, 0,
+-				   node->npages);
+-		atomic_sub(node->npages, &node->pq->n_locked);
+-	}
+-}
+-
+-static int pin_vector_pages(struct user_sdma_request *req,
+-			    struct user_sdma_iovec *iovec)
+-{
+-	int ret = 0, pinned, npages;
+-	struct hfi1_user_sdma_pkt_q *pq = req->pq;
+-	struct sdma_mmu_node *node = NULL;
+-	struct mmu_rb_node *rb_node;
+-	struct iovec *iov;
+-	bool extracted;
+-
+-	extracted =
+-		hfi1_mmu_rb_remove_unless_exact(pq->handler,
+-						(unsigned long)
+-						iovec->iov.iov_base,
+-						iovec->iov.iov_len, &rb_node);
+-	if (rb_node) {
+-		node = container_of(rb_node, struct sdma_mmu_node, rb);
+-		if (!extracted) {
+-			atomic_inc(&node->refcount);
+-			iovec->pages = node->pages;
+-			iovec->npages = node->npages;
+-			iovec->node = node;
+-			return 0;
+-		}
+-	}
+-
+-	if (!node) {
+-		node = kzalloc(sizeof(*node), GFP_KERNEL);
+-		if (!node)
+-			return -ENOMEM;
+-
+-		node->rb.addr = (unsigned long)iovec->iov.iov_base;
+-		node->pq = pq;
+-		atomic_set(&node->refcount, 0);
+-	}
+-
+-	iov = &iovec->iov;
+-	npages = num_user_pages((unsigned long)iov->iov_base, iov->iov_len);
+-	if (node->npages < npages) {
+-		pinned = pin_sdma_pages(req, iovec, node, npages);
+-		if (pinned < 0) {
+-			ret = pinned;
+-			goto bail;
+-		}
+-		node->npages += pinned;
+-		npages = node->npages;
+-	}
+-	iovec->pages = node->pages;
+-	iovec->npages = npages;
+-	iovec->node = node;
+-
+-	ret = hfi1_mmu_rb_insert(req->pq->handler, &node->rb);
+-	if (ret) {
+-		iovec->node = NULL;
+-		goto bail;
+-	}
+-	return 0;
+-bail:
+-	unpin_sdma_pages(node);
+-	kfree(node);
+-	return ret;
+-}
+-
+-static void unpin_vector_pages(struct mm_struct *mm, struct page **pages,
+-			       unsigned start, unsigned npages)
+-{
+-	hfi1_release_user_pages(mm, pages + start, npages, false);
+-	kfree(pages);
+-}
+-
+ static int check_header_template(struct user_sdma_request *req,
+ 				 struct hfi1_pkt_header *hdr, u32 lrhlen,
+ 				 u32 datalen)
+@@ -1425,7 +1259,7 @@ static void user_sdma_txreq_cb(struct sdma_txreq *txreq, int status)
+ 	if (req->seqcomp != req->info.npkts - 1)
+ 		return;
+ 
+-	user_sdma_free_request(req, false);
++	user_sdma_free_request(req);
+ 	set_comp_state(pq, cq, req->info.comp_idx, state, status);
+ 	pq_update(pq);
+ }
+@@ -1436,10 +1270,8 @@ static inline void pq_update(struct hfi1_user_sdma_pkt_q *pq)
+ 		wake_up(&pq->wait);
+ }
+ 
+-static void user_sdma_free_request(struct user_sdma_request *req, bool unpin)
++static void user_sdma_free_request(struct user_sdma_request *req)
+ {
+-	int i;
+-
+ 	if (!list_empty(&req->txps)) {
+ 		struct sdma_txreq *t, *p;
+ 
+@@ -1452,21 +1284,6 @@ static void user_sdma_free_request(struct user_sdma_request *req, bool unpin)
+ 		}
+ 	}
+ 
+-	for (i = 0; i < req->data_iovs; i++) {
+-		struct sdma_mmu_node *node = req->iovs[i].node;
+-
+-		if (!node)
+-			continue;
+-
+-		req->iovs[i].node = NULL;
+-
+-		if (unpin)
+-			hfi1_mmu_rb_remove(req->pq->handler,
+-					   &node->rb);
+-		else
+-			atomic_dec(&node->refcount);
+-	}
+-
+ 	kfree(req->tids);
+ 	clear_bit(req->info.comp_idx, req->pq->req_in_use);
+ }
+@@ -1484,6 +1301,368 @@ static inline void set_comp_state(struct hfi1_user_sdma_pkt_q *pq,
+ 					idx, state, ret);
+ }
+ 
++static void unpin_vector_pages(struct mm_struct *mm, struct page **pages,
++			       unsigned int start, unsigned int npages)
++{
++	hfi1_release_user_pages(mm, pages + start, npages, false);
++	kfree(pages);
++}
++
++static void free_system_node(struct sdma_mmu_node *node)
++{
++	if (node->npages) {
++		unpin_vector_pages(mm_from_sdma_node(node), node->pages, 0,
++				   node->npages);
++		atomic_sub(node->npages, &node->pq->n_locked);
++	}
++	kfree(node);
++}
++
++static inline void acquire_node(struct sdma_mmu_node *node)
++{
++	atomic_inc(&node->refcount);
++	WARN_ON(atomic_read(&node->refcount) < 0);
++}
++
++static inline void release_node(struct mmu_rb_handler *handler,
++				struct sdma_mmu_node *node)
++{
++	atomic_dec(&node->refcount);
++	WARN_ON(atomic_read(&node->refcount) < 0);
++}
++
++static struct sdma_mmu_node *find_system_node(struct mmu_rb_handler *handler,
++					      unsigned long start,
++					      unsigned long end)
++{
++	struct mmu_rb_node *rb_node;
++	struct sdma_mmu_node *node;
++	unsigned long flags;
++
++	spin_lock_irqsave(&handler->lock, flags);
++	rb_node = hfi1_mmu_rb_get_first(handler, start, (end - start));
++	if (!rb_node) {
++		spin_unlock_irqrestore(&handler->lock, flags);
++		return NULL;
++	}
++	node = container_of(rb_node, struct sdma_mmu_node, rb);
++	acquire_node(node);
++	spin_unlock_irqrestore(&handler->lock, flags);
++
++	return node;
++}
++
++static int pin_system_pages(struct user_sdma_request *req,
++			    uintptr_t start_address, size_t length,
++			    struct sdma_mmu_node *node, int npages)
++{
++	struct hfi1_user_sdma_pkt_q *pq = req->pq;
++	int pinned, cleared;
++	struct page **pages;
++
++	pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
++	if (!pages)
++		return -ENOMEM;
++
++retry:
++	if (!hfi1_can_pin_pages(pq->dd, current->mm, atomic_read(&pq->n_locked),
++				npages)) {
++		SDMA_DBG(req, "Evicting: nlocked %u npages %u",
++			 atomic_read(&pq->n_locked), npages);
++		cleared = sdma_cache_evict(pq, npages);
++		if (cleared >= npages)
++			goto retry;
++	}
++
++	SDMA_DBG(req, "Acquire user pages start_address %lx node->npages %u npages %u",
++		 start_address, node->npages, npages);
++	pinned = hfi1_acquire_user_pages(current->mm, start_address, npages, 0,
++					 pages);
++
++	if (pinned < 0) {
++		kfree(pages);
++		SDMA_DBG(req, "pinned %d", pinned);
++		return pinned;
++	}
++	if (pinned != npages) {
++		unpin_vector_pages(current->mm, pages, node->npages, pinned);
++		SDMA_DBG(req, "npages %u pinned %d", npages, pinned);
++		return -EFAULT;
++	}
++	node->rb.addr = start_address;
++	node->rb.len = length;
++	node->pages = pages;
++	node->npages = npages;
++	atomic_add(pinned, &pq->n_locked);
++	SDMA_DBG(req, "done. pinned %d", pinned);
++	return 0;
++}
++
++static int add_system_pinning(struct user_sdma_request *req,
++			      struct sdma_mmu_node **node_p,
++			      unsigned long start, unsigned long len)
++
++{
++	struct hfi1_user_sdma_pkt_q *pq = req->pq;
++	struct sdma_mmu_node *node;
++	int ret;
++
++	node = kzalloc(sizeof(*node), GFP_KERNEL);
++	if (!node)
++		return -ENOMEM;
++
++	node->pq = pq;
++	ret = pin_system_pages(req, start, len, node, PFN_DOWN(len));
++	if (ret == 0) {
++		ret = hfi1_mmu_rb_insert(pq->handler, &node->rb);
++		if (ret)
++			free_system_node(node);
++		else
++			*node_p = node;
++
++		return ret;
++	}
++
++	kfree(node);
++	return ret;
++}
++
++static int get_system_cache_entry(struct user_sdma_request *req,
++				  struct sdma_mmu_node **node_p,
++				  size_t req_start, size_t req_len)
++{
++	struct hfi1_user_sdma_pkt_q *pq = req->pq;
++	u64 start = ALIGN_DOWN(req_start, PAGE_SIZE);
++	u64 end = PFN_ALIGN(req_start + req_len);
++	struct mmu_rb_handler *handler = pq->handler;
++	int ret;
++
++	if ((end - start) == 0) {
++		SDMA_DBG(req,
++			 "Request for empty cache entry req_start %lx req_len %lx start %llx end %llx",
++			 req_start, req_len, start, end);
++		return -EINVAL;
++	}
++
++	SDMA_DBG(req, "req_start %lx req_len %lu", req_start, req_len);
++
++	while (1) {
++		struct sdma_mmu_node *node =
++			find_system_node(handler, start, end);
++		u64 prepend_len = 0;
++
++		SDMA_DBG(req, "node %p start %llx end %llu", node, start, end);
++		if (!node) {
++			ret = add_system_pinning(req, node_p, start,
++						 end - start);
++			if (ret == -EEXIST) {
++				/*
++				 * Another execution context has inserted a
++				 * conficting entry first.
++				 */
++				continue;
++			}
++			return ret;
++		}
++
++		if (node->rb.addr <= start) {
++			/*
++			 * This entry covers at least part of the region. If it doesn't extend
++			 * to the end, then this will be called again for the next segment.
++			 */
++			*node_p = node;
++			return 0;
++		}
++
++		SDMA_DBG(req, "prepend: node->rb.addr %lx, node->refcount %d",
++			 node->rb.addr, atomic_read(&node->refcount));
++		prepend_len = node->rb.addr - start;
++
++		/*
++		 * This node will not be returned, instead a new node
++		 * will be. So release the reference.
++		 */
++		release_node(handler, node);
++
++		/* Prepend a node to cover the beginning of the allocation */
++		ret = add_system_pinning(req, node_p, start, prepend_len);
++		if (ret == -EEXIST) {
++			/* Another execution context has inserted a conficting entry first. */
++			continue;
++		}
++		return ret;
++	}
++}
++
++static int add_mapping_to_sdma_packet(struct user_sdma_request *req,
++				      struct user_sdma_txreq *tx,
++				      struct sdma_mmu_node *cache_entry,
++				      size_t start,
++				      size_t from_this_cache_entry)
++{
++	struct hfi1_user_sdma_pkt_q *pq = req->pq;
++	unsigned int page_offset;
++	unsigned int from_this_page;
++	size_t page_index;
++	void *ctx;
++	int ret;
++
++	/*
++	 * Because the cache may be more fragmented than the memory that is being accessed,
++	 * it's not strictly necessary to have a descriptor per cache entry.
++	 */
++
++	while (from_this_cache_entry) {
++		page_index = PFN_DOWN(start - cache_entry->rb.addr);
++
++		if (page_index >= cache_entry->npages) {
++			SDMA_DBG(req,
++				 "Request for page_index %zu >= cache_entry->npages %u",
++				 page_index, cache_entry->npages);
++			return -EINVAL;
++		}
++
++		page_offset = start - ALIGN_DOWN(start, PAGE_SIZE);
++		from_this_page = PAGE_SIZE - page_offset;
++
++		if (from_this_page < from_this_cache_entry) {
++			ctx = NULL;
++		} else {
++			/*
++			 * In the case they are equal the next line has no practical effect,
++			 * but it's better to do a register to register copy than a conditional
++			 * branch.
++			 */
++			from_this_page = from_this_cache_entry;
++			ctx = cache_entry;
++		}
++
++		ret = sdma_txadd_page(pq->dd, ctx, &tx->txreq,
++				      cache_entry->pages[page_index],
++				      page_offset, from_this_page);
++		if (ret) {
++			/*
++			 * When there's a failure, the entire request is freed by
++			 * user_sdma_send_pkts().
++			 */
++			SDMA_DBG(req,
++				 "sdma_txadd_page failed %d page_index %lu page_offset %u from_this_page %u",
++				 ret, page_index, page_offset, from_this_page);
++			return ret;
++		}
++		start += from_this_page;
++		from_this_cache_entry -= from_this_page;
++	}
++	return 0;
++}
++
++static int add_system_iovec_to_sdma_packet(struct user_sdma_request *req,
++					   struct user_sdma_txreq *tx,
++					   struct user_sdma_iovec *iovec,
++					   size_t from_this_iovec)
++{
++	struct mmu_rb_handler *handler = req->pq->handler;
++
++	while (from_this_iovec > 0) {
++		struct sdma_mmu_node *cache_entry;
++		size_t from_this_cache_entry;
++		size_t start;
++		int ret;
++
++		start = (uintptr_t)iovec->iov.iov_base + iovec->offset;
++		ret = get_system_cache_entry(req, &cache_entry, start,
++					     from_this_iovec);
++		if (ret) {
++			SDMA_DBG(req, "pin system segment failed %d", ret);
++			return ret;
++		}
++
++		from_this_cache_entry = cache_entry->rb.len - (start - cache_entry->rb.addr);
++		if (from_this_cache_entry > from_this_iovec)
++			from_this_cache_entry = from_this_iovec;
++
++		ret = add_mapping_to_sdma_packet(req, tx, cache_entry, start,
++						 from_this_cache_entry);
++		if (ret) {
++			/*
++			 * We're guaranteed that there will be no descriptor
++			 * completion callback that releases this node
++			 * because only the last descriptor referencing it
++			 * has a context attached, and a failure means the
++			 * last descriptor was never added.
++			 */
++			release_node(handler, cache_entry);
++			SDMA_DBG(req, "add system segment failed %d", ret);
++			return ret;
++		}
++
++		iovec->offset += from_this_cache_entry;
++		from_this_iovec -= from_this_cache_entry;
++	}
++
++	return 0;
++}
++
++static int add_system_pages_to_sdma_packet(struct user_sdma_request *req,
++					   struct user_sdma_txreq *tx,
++					   struct user_sdma_iovec *iovec,
++					   u32 *pkt_data_remaining)
++{
++	size_t remaining_to_add = *pkt_data_remaining;
++	/*
++	 * Walk through iovec entries, ensure the associated pages
++	 * are pinned and mapped, add data to the packet until no more
++	 * data remains to be added.
++	 */
++	while (remaining_to_add > 0) {
++		struct user_sdma_iovec *cur_iovec;
++		size_t from_this_iovec;
++		int ret;
++
++		cur_iovec = iovec;
++		from_this_iovec = iovec->iov.iov_len - iovec->offset;
++
++		if (from_this_iovec > remaining_to_add) {
++			from_this_iovec = remaining_to_add;
++		} else {
++			/* The current iovec entry will be consumed by this pass. */
++			req->iov_idx++;
++			iovec++;
++		}
++
++		ret = add_system_iovec_to_sdma_packet(req, tx, cur_iovec,
++						      from_this_iovec);
++		if (ret)
++			return ret;
++
++		remaining_to_add -= from_this_iovec;
++	}
++	*pkt_data_remaining = remaining_to_add;
++
++	return 0;
++}
++
++void system_descriptor_complete(struct hfi1_devdata *dd,
++				struct sdma_desc *descp)
++{
++	switch (sdma_mapping_type(descp)) {
++	case SDMA_MAP_SINGLE:
++		dma_unmap_single(&dd->pcidev->dev, sdma_mapping_addr(descp),
++				 sdma_mapping_len(descp), DMA_TO_DEVICE);
++		break;
++	case SDMA_MAP_PAGE:
++		dma_unmap_page(&dd->pcidev->dev, sdma_mapping_addr(descp),
++			       sdma_mapping_len(descp), DMA_TO_DEVICE);
++		break;
++	}
++
++	if (descp->pinning_ctx) {
++		struct sdma_mmu_node *node = descp->pinning_ctx;
++
++		release_node(node->rb.handler, node);
++	}
++}
++
+ static bool sdma_rb_filter(struct mmu_rb_node *node, unsigned long addr,
+ 			   unsigned long len)
+ {
+@@ -1530,8 +1709,7 @@ static void sdma_rb_remove(void *arg, struct mmu_rb_node *mnode)
+ 	struct sdma_mmu_node *node =
+ 		container_of(mnode, struct sdma_mmu_node, rb);
+ 
+-	unpin_sdma_pages(node);
+-	kfree(node);
++	free_system_node(node);
+ }
+ 
+ static int sdma_rb_invalidate(void *arg, struct mmu_rb_node *mnode)
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.h b/drivers/infiniband/hw/hfi1/user_sdma.h
+index 1e8c02fe8ad1d..9d417aacfa8b7 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.h
++++ b/drivers/infiniband/hw/hfi1/user_sdma.h
+@@ -53,6 +53,7 @@
+ #include "common.h"
+ #include "iowait.h"
+ #include "user_exp_rcv.h"
++#include "mmu_rb.h"
+ 
+ /* The maximum number of Data io vectors per message/request */
+ #define MAX_VECTORS_PER_REQ 8
+@@ -152,16 +153,11 @@ struct sdma_mmu_node {
+ struct user_sdma_iovec {
+ 	struct list_head list;
+ 	struct iovec iov;
+-	/* number of pages in this vector */
+-	unsigned int npages;
+-	/* array of pinned pages for this vector */
+-	struct page **pages;
+ 	/*
+ 	 * offset into the virtual address space of the vector at
+ 	 * which we last left off.
+ 	 */
+ 	u64 offset;
+-	struct sdma_mmu_node *node;
+ };
+ 
+ /* evict operation argument */
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index 5f3edd255ca3c..693922df3543b 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -820,8 +820,8 @@ static int build_verbs_tx_desc(
+ 
+ 	/* add icrc, lt byte, and padding to flit */
+ 	if (extra_bytes)
+-		ret = sdma_txadd_daddr(sde->dd, &tx->txreq,
+-				       sde->dd->sdma_pad_phys, extra_bytes);
++		ret = sdma_txadd_daddr(sde->dd, &tx->txreq, sde->dd->sdma_pad_phys,
++				       extra_bytes);
+ 
+ bail_txadd:
+ 	return ret;
+diff --git a/drivers/infiniband/hw/hfi1/vnic_sdma.c b/drivers/infiniband/hw/hfi1/vnic_sdma.c
+index 7d90b900131ba..7658c620a125c 100644
+--- a/drivers/infiniband/hw/hfi1/vnic_sdma.c
++++ b/drivers/infiniband/hw/hfi1/vnic_sdma.c
+@@ -106,6 +106,7 @@ static noinline int build_vnic_ulp_payload(struct sdma_engine *sde,
+ 
+ 		/* combine physically continuous fragments later? */
+ 		ret = sdma_txadd_page(sde->dd,
++				      NULL,
+ 				      &tx->txreq,
+ 				      skb_frag_page(frag),
+ 				      skb_frag_off(frag),
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index c6a815a705fef..255194029e2d8 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -412,9 +412,13 @@ static int set_user_sq_size(struct mlx4_ib_dev *dev,
+ 			    struct mlx4_ib_qp *qp,
+ 			    struct mlx4_ib_create_qp *ucmd)
+ {
++	u32 cnt;
++
+ 	/* Sanity check SQ size before proceeding */
+-	if ((1 << ucmd->log_sq_bb_count) > dev->dev->caps.max_wqes	 ||
+-	    ucmd->log_sq_stride >
++	if (check_shl_overflow(1, ucmd->log_sq_bb_count, &cnt) ||
++	    cnt > dev->dev->caps.max_wqes)
++		return -EINVAL;
++	if (ucmd->log_sq_stride >
+ 		ilog2(roundup_pow_of_two(dev->dev->caps.max_sq_desc_sz)) ||
+ 	    ucmd->log_sq_stride < MLX4_IB_MIN_SQ_STRIDE)
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 2f053f48f1beb..a56ebdc15723c 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -595,7 +595,21 @@ static bool devx_is_valid_obj_id(struct uverbs_attr_bundle *attrs,
+ 				      obj_id;
+ 
+ 	case MLX5_IB_OBJECT_DEVX_OBJ:
+-		return ((struct devx_obj *)uobj->object)->obj_id == obj_id;
++	{
++		u16 opcode = MLX5_GET(general_obj_in_cmd_hdr, in, opcode);
++		struct devx_obj *devx_uobj = uobj->object;
++
++		if (opcode == MLX5_CMD_OP_QUERY_FLOW_COUNTER &&
++		    devx_uobj->flow_counter_bulk_size) {
++			u64 end;
++
++			end = devx_uobj->obj_id +
++				devx_uobj->flow_counter_bulk_size;
++			return devx_uobj->obj_id <= obj_id && end > obj_id;
++		}
++
++		return devx_uobj->obj_id == obj_id;
++	}
+ 
+ 	default:
+ 		return false;
+@@ -1416,10 +1430,17 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_CREATE)(
+ 		goto obj_free;
+ 
+ 	if (opcode == MLX5_CMD_OP_ALLOC_FLOW_COUNTER) {
+-		u8 bulk = MLX5_GET(alloc_flow_counter_in,
+-				   cmd_in,
+-				   flow_counter_bulk);
+-		obj->flow_counter_bulk_size = 128UL * bulk;
++		u32 bulk = MLX5_GET(alloc_flow_counter_in,
++				    cmd_in,
++				    flow_counter_bulk_log_size);
++
++		if (bulk)
++			bulk = 1 << bulk;
++		else
++			bulk = 128UL * MLX5_GET(alloc_flow_counter_in,
++						cmd_in,
++						flow_counter_bulk);
++		obj->flow_counter_bulk_size = bulk;
+ 	}
+ 
+ 	uobj->object = obj;
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 0caff276f2c18..0c47e3e24b2a4 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -4164,7 +4164,7 @@ static int mlx5_ib_modify_dct(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 			return -EINVAL;
+ 
+ 		if (attr->port_num == 0 ||
+-		    attr->port_num > MLX5_CAP_GEN(dev->mdev, num_ports)) {
++		    attr->port_num > dev->num_ports) {
+ 			mlx5_ib_dbg(dev, "invalid port number %d. number of ports is %d\n",
+ 				    attr->port_num, dev->num_ports);
+ 			return -EINVAL;
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index 585a9c76e5183..ddc8825d526e0 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -505,8 +505,6 @@ void rvt_qp_exit(struct rvt_dev_info *rdi)
+ 	if (qps_inuse)
+ 		rvt_pr_err(rdi, "QP memory leak! %u still in use\n",
+ 			   qps_inuse);
+-	if (!rdi->qp_dev)
+-		return;
+ 
+ 	kfree(rdi->qp_dev->qp_table);
+ 	free_qpn_table(&rdi->qp_dev->qpn_table);
+diff --git a/drivers/infiniband/sw/siw/siw_main.c b/drivers/infiniband/sw/siw/siw_main.c
+index 32a553a1b905e..5ba0893f1f017 100644
+--- a/drivers/infiniband/sw/siw/siw_main.c
++++ b/drivers/infiniband/sw/siw/siw_main.c
+@@ -458,9 +458,6 @@ static int siw_netdev_event(struct notifier_block *nb, unsigned long event,
+ 
+ 	dev_dbg(&netdev->dev, "siw: event %lu\n", event);
+ 
+-	if (dev_net(netdev) != &init_net)
+-		return NOTIFY_OK;
+-
+ 	base_dev = ib_device_get_by_netdev(netdev, RDMA_DRIVER_SIW);
+ 	if (!base_dev)
+ 		return NOTIFY_OK;
+diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
+index df8802b4981cf..ccc6d5bb1a276 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
+@@ -548,7 +548,7 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
+ 			data_len -= plen;
+ 			fp_off = 0;
+ 
+-			if (++seg > (int)MAX_ARRAY) {
++			if (++seg >= (int)MAX_ARRAY) {
+ 				siw_dbg_qp(tx_qp(c_tx), "to many fragments\n");
+ 				siw_unmap_pages(page_array, kmap_mask);
+ 				wqe->processed -= c_tx->bytes_unsent;
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index edea37da8a5bd..2d0d966fba2c8 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -1553,12 +1553,12 @@ isert_check_pi_status(struct se_cmd *se_cmd, struct ib_mr *sig_mr)
+ 		}
+ 		sec_offset_err = mr_status.sig_err.sig_err_offset;
+ 		do_div(sec_offset_err, block_size);
+-		se_cmd->bad_sector = sec_offset_err + se_cmd->t_task_lba;
++		se_cmd->sense_info = sec_offset_err + se_cmd->t_task_lba;
+ 
+ 		isert_err("PI error found type %d at sector 0x%llx "
+ 			  "expected 0x%x vs actual 0x%x\n",
+ 			  mr_status.sig_err.err_type,
+-			  (unsigned long long)se_cmd->bad_sector,
++			  (unsigned long long)se_cmd->sense_info,
+ 			  mr_status.sig_err.expected,
+ 			  mr_status.sig_err.actual);
+ 		ret = 1;
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index c0ed08fcab480..983f59c87b79f 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -549,6 +549,7 @@ static int srpt_format_guid(char *buf, unsigned int size, const __be64 *guid)
+  */
+ static int srpt_refresh_port(struct srpt_port *sport)
+ {
++	struct ib_mad_agent *mad_agent;
+ 	struct ib_mad_reg_req reg_req;
+ 	struct ib_port_modify port_modify;
+ 	struct ib_port_attr port_attr;
+@@ -593,24 +594,26 @@ static int srpt_refresh_port(struct srpt_port *sport)
+ 		set_bit(IB_MGMT_METHOD_GET, reg_req.method_mask);
+ 		set_bit(IB_MGMT_METHOD_SET, reg_req.method_mask);
+ 
+-		sport->mad_agent = ib_register_mad_agent(sport->sdev->device,
+-							 sport->port,
+-							 IB_QPT_GSI,
+-							 &reg_req, 0,
+-							 srpt_mad_send_handler,
+-							 srpt_mad_recv_handler,
+-							 sport, 0);
+-		if (IS_ERR(sport->mad_agent)) {
++		mad_agent = ib_register_mad_agent(sport->sdev->device,
++						  sport->port,
++						  IB_QPT_GSI,
++						  &reg_req, 0,
++						  srpt_mad_send_handler,
++						  srpt_mad_recv_handler,
++						  sport, 0);
++		if (IS_ERR(mad_agent)) {
+ 			pr_err("%s-%d: MAD agent registration failed (%ld). Note: this is expected if SR-IOV is enabled.\n",
+ 			       dev_name(&sport->sdev->device->dev), sport->port,
+-			       PTR_ERR(sport->mad_agent));
++			       PTR_ERR(mad_agent));
+ 			sport->mad_agent = NULL;
+ 			memset(&port_modify, 0, sizeof(port_modify));
+ 			port_modify.clr_port_cap_mask = IB_PORT_DEVICE_MGMT_SUP;
+ 			ib_modify_port(sport->sdev->device, sport->port, 0,
+ 				       &port_modify);
+-
++			return 0;
+ 		}
++
++		sport->mad_agent = mad_agent;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/input/touchscreen/raspberrypi-ts.c b/drivers/input/touchscreen/raspberrypi-ts.c
+index ef6aaed217cfb..45c575df994e0 100644
+--- a/drivers/input/touchscreen/raspberrypi-ts.c
++++ b/drivers/input/touchscreen/raspberrypi-ts.c
+@@ -134,7 +134,7 @@ static int rpi_ts_probe(struct platform_device *pdev)
+ 		return -ENOENT;
+ 	}
+ 
+-	fw = rpi_firmware_get(fw_node);
++	fw = devm_rpi_firmware_get(&pdev->dev, fw_node);
+ 	of_node_put(fw_node);
+ 	if (!fw)
+ 		return -EPROBE_DEFER;
+@@ -160,7 +160,6 @@ static int rpi_ts_probe(struct platform_device *pdev)
+ 	touchbuf = (u32)ts->fw_regs_phys;
+ 	error = rpi_firmware_property(fw, RPI_FIRMWARE_FRAMEBUFFER_SET_TOUCHBUF,
+ 				      &touchbuf, sizeof(touchbuf));
+-
+ 	if (error || touchbuf != 0) {
+ 		dev_warn(dev, "Failed to set touchbuf, %d\n", error);
+ 		return error;
+diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
+index 690c5976575c6..4a8791e037b84 100644
+--- a/drivers/iommu/amd/amd_iommu_types.h
++++ b/drivers/iommu/amd/amd_iommu_types.h
+@@ -897,8 +897,8 @@ struct amd_ir_data {
+ 	 */
+ 	struct irq_cfg *cfg;
+ 	int ga_vector;
+-	int ga_root_ptr;
+-	int ga_tag;
++	u64 ga_root_ptr;
++	u32 ga_tag;
+ };
+ 
+ struct amd_irte_ops {
+diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig
+index 56e8198e13d10..ad84be4f68171 100644
+--- a/drivers/leds/Kconfig
++++ b/drivers/leds/Kconfig
+@@ -871,7 +871,7 @@ config LEDS_SPI_BYTE
+ config LEDS_TI_LMU_COMMON
+ 	tristate "LED driver for TI LMU"
+ 	depends on LEDS_CLASS
+-	depends on REGMAP
++	select REGMAP
+ 	help
+ 	  Say Y to enable the LED driver for TI LMU devices.
+ 	  This supports common features between the TI LM3532, LM3631, LM3632,
+diff --git a/drivers/leds/leds-tca6507.c b/drivers/leds/leds-tca6507.c
+index 225b765830bdc..caad9d3e0eac8 100644
+--- a/drivers/leds/leds-tca6507.c
++++ b/drivers/leds/leds-tca6507.c
+@@ -696,8 +696,9 @@ tca6507_led_dt_init(struct device *dev)
+ 		if (fwnode_property_read_string(child, "label", &led.name))
+ 			led.name = fwnode_get_name(child);
+ 
+-		fwnode_property_read_string(child, "linux,default-trigger",
+-					    &led.default_trigger);
++		if (fwnode_property_read_string(child, "linux,default-trigger",
++						&led.default_trigger))
++			led.default_trigger = NULL;
+ 
+ 		led.flags = 0;
+ 		if (fwnode_property_match_string(child, "compatible",
+diff --git a/drivers/macintosh/Kconfig b/drivers/macintosh/Kconfig
+index 539a2ed4e13dc..a0e717a986dcb 100644
+--- a/drivers/macintosh/Kconfig
++++ b/drivers/macintosh/Kconfig
+@@ -86,6 +86,7 @@ config ADB_PMU_LED
+ 
+ config ADB_PMU_LED_DISK
+ 	bool "Use front LED as DISK LED by default"
++	depends on ATA
+ 	depends on ADB_PMU_LED
+ 	depends on LEDS_CLASS
+ 	select LEDS_TRIGGERS
+diff --git a/drivers/macintosh/windfarm_smu_sat.c b/drivers/macintosh/windfarm_smu_sat.c
+index e46e1153a0b43..7d7d6213e32aa 100644
+--- a/drivers/macintosh/windfarm_smu_sat.c
++++ b/drivers/macintosh/windfarm_smu_sat.c
+@@ -171,6 +171,7 @@ static void wf_sat_release(struct kref *ref)
+ 
+ 	if (sat->nr >= 0)
+ 		sats[sat->nr] = NULL;
++	of_node_put(sat->node);
+ 	kfree(sat);
+ }
+ 
+diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
+index 527204c6d5cd0..be06de791c544 100644
+--- a/drivers/mailbox/zynqmp-ipi-mailbox.c
++++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
+@@ -110,7 +110,7 @@ struct zynqmp_ipi_pdata {
+ 	unsigned int method;
+ 	u32 local_id;
+ 	int num_mboxes;
+-	struct zynqmp_ipi_mbox *ipi_mboxes;
++	struct zynqmp_ipi_mbox ipi_mboxes[];
+ };
+ 
+ static struct device_driver zynqmp_ipi_mbox_driver = {
+@@ -152,7 +152,7 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
+ 	struct zynqmp_ipi_message *msg;
+ 	u64 arg0, arg3;
+ 	struct arm_smccc_res res;
+-	int ret, i;
++	int ret, i, status = IRQ_NONE;
+ 
+ 	(void)irq;
+ 	arg0 = SMC_IPI_MAILBOX_STATUS_ENQUIRY;
+@@ -170,11 +170,11 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
+ 				memcpy_fromio(msg->data, mchan->req_buf,
+ 					      msg->len);
+ 				mbox_chan_received_data(chan, (void *)msg);
+-				return IRQ_HANDLED;
++				status = IRQ_HANDLED;
+ 			}
+ 		}
+ 	}
+-	return IRQ_NONE;
++	return status;
+ }
+ 
+ /**
+@@ -634,8 +634,13 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
+ 	struct zynqmp_ipi_mbox *mbox;
+ 	int num_mboxes, ret = -EINVAL;
+ 
+-	num_mboxes = of_get_child_count(np);
+-	pdata = devm_kzalloc(dev, sizeof(*pdata) + (num_mboxes * sizeof(*mbox)),
++	num_mboxes = of_get_available_child_count(np);
++	if (num_mboxes == 0) {
++		dev_err(dev, "mailbox nodes not available\n");
++		return -EINVAL;
++	}
++
++	pdata = devm_kzalloc(dev, struct_size(pdata, ipi_mboxes, num_mboxes),
+ 			     GFP_KERNEL);
+ 	if (!pdata)
+ 		return -ENOMEM;
+@@ -649,8 +654,6 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	pdata->num_mboxes = num_mboxes;
+-	pdata->ipi_mboxes = (struct zynqmp_ipi_mbox *)
+-			    ((char *)pdata + sizeof(*pdata));
+ 
+ 	mbox = pdata->ipi_mboxes;
+ 	for_each_available_child_of_node(np, nc) {
+diff --git a/drivers/md/dm-clone-target.c b/drivers/md/dm-clone-target.c
+index 0d38ad6235348..e3156b30294f0 100644
+--- a/drivers/md/dm-clone-target.c
++++ b/drivers/md/dm-clone-target.c
+@@ -2221,6 +2221,7 @@ static int __init dm_clone_init(void)
+ 	r = dm_register_target(&clone_target);
+ 	if (r < 0) {
+ 		DMERR("Failed to register clone target");
++		kmem_cache_destroy(_hydration_cache);
+ 		return r;
+ 	}
+ 
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index 36a4ef51ecaa8..faae360b881b5 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -124,9 +124,9 @@ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc,
+ 			 * Direction r or w?
+ 			 */
+ 			arg_name = dm_shift_arg(as);
+-			if (!strcasecmp(arg_name, "w"))
++			if (arg_name && !strcasecmp(arg_name, "w"))
+ 				fc->corrupt_bio_rw = WRITE;
+-			else if (!strcasecmp(arg_name, "r"))
++			else if (arg_name && !strcasecmp(arg_name, "r"))
+ 				fc->corrupt_bio_rw = READ;
+ 			else {
+ 				ti->error = "Invalid corrupt bio direction (r or w)";
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index a1c4cc48bf034..7599a122c9563 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -4481,11 +4481,13 @@ static int __init dm_integrity_init(void)
+ 	}
+ 
+ 	r = dm_register_target(&integrity_target);
+-
+-	if (r < 0)
++	if (r < 0) {
+ 		DMERR("register failed %d", r);
++		kmem_cache_destroy(journal_io_cache);
++		return r;
++	}
+ 
+-	return r;
++	return 0;
+ }
+ 
+ static void __exit dm_integrity_exit(void)
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index 20171c9d8952e..5f9b9178c647e 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1435,11 +1435,12 @@ static int table_clear(struct file *filp, struct dm_ioctl *param, size_t param_s
+ 		hc->new_map = NULL;
+ 	}
+ 
+-	param->flags &= ~DM_INACTIVE_PRESENT_FLAG;
+-
+-	__dev_status(hc->md, param);
+ 	md = hc->md;
+ 	up_write(&_hash_lock);
++
++	param->flags &= ~DM_INACTIVE_PRESENT_FLAG;
++	__dev_status(md, param);
++
+ 	if (old_map) {
+ 		dm_sync_table(md);
+ 		dm_table_destroy(old_map);
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index c801f6b93b7b4..0c2048d2b847e 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -475,13 +475,14 @@ static int verity_verify_io(struct dm_verity_io *io)
+ 	struct bvec_iter start;
+ 	unsigned b;
+ 	struct crypto_wait wait;
++	struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
+ 
+ 	for (b = 0; b < io->n_blocks; b++) {
+ 		int r;
+ 		sector_t cur_block = io->block + b;
+ 		struct ahash_request *req = verity_io_hash_req(v, io);
+ 
+-		if (v->validated_blocks &&
++		if (v->validated_blocks && bio->bi_status == BLK_STS_OK &&
+ 		    likely(test_bit(cur_block, v->validated_blocks))) {
+ 			verity_bv_skip_block(v, io, &io->iter);
+ 			continue;
+@@ -529,9 +530,17 @@ static int verity_verify_io(struct dm_verity_io *io)
+ 		else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA,
+ 					   cur_block, NULL, &start) == 0)
+ 			continue;
+-		else if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA,
+-					   cur_block))
+-			return -EIO;
++		else {
++			if (bio->bi_status) {
++				/*
++				 * Error correction failed; Just return error
++				 */
++				return -EIO;
++			}
++			if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA,
++					      cur_block))
++				return -EIO;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 0e741a8d278df..6a0459f9fafbc 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -2212,11 +2212,22 @@ static void recovery_request_write(struct mddev *mddev, struct r10bio *r10_bio)
+ {
+ 	struct r10conf *conf = mddev->private;
+ 	int d;
+-	struct bio *wbio, *wbio2;
++	struct bio *wbio = r10_bio->devs[1].bio;
++	struct bio *wbio2 = r10_bio->devs[1].repl_bio;
++
++	/* Need to test wbio2->bi_end_io before we call
++	 * submit_bio_noacct as if the former is NULL,
++	 * the latter is free to free wbio2.
++	 */
++	if (wbio2 && !wbio2->bi_end_io)
++		wbio2 = NULL;
+ 
+ 	if (!test_bit(R10BIO_Uptodate, &r10_bio->state)) {
+ 		fix_recovery_read_error(r10_bio);
+-		end_sync_request(r10_bio);
++		if (wbio->bi_end_io)
++			end_sync_request(r10_bio);
++		if (wbio2)
++			end_sync_request(r10_bio);
+ 		return;
+ 	}
+ 
+@@ -2225,14 +2236,6 @@ static void recovery_request_write(struct mddev *mddev, struct r10bio *r10_bio)
+ 	 * and submit the write request
+ 	 */
+ 	d = r10_bio->devs[1].devnum;
+-	wbio = r10_bio->devs[1].bio;
+-	wbio2 = r10_bio->devs[1].repl_bio;
+-	/* Need to test wbio2->bi_end_io before we call
+-	 * submit_bio_noacct as if the former is NULL,
+-	 * the latter is free to free wbio2.
+-	 */
+-	if (wbio2 && !wbio2->bi_end_io)
+-		wbio2 = NULL;
+ 	if (wbio->bi_end_io) {
+ 		atomic_inc(&conf->mirrors[d].rdev->nr_pending);
+ 		md_sync_acct(conf->mirrors[d].rdev->bdev, bio_sectors(wbio));
+@@ -2900,10 +2903,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 	sector_t chunk_mask = conf->geo.chunk_mask;
+ 	int page_idx = 0;
+ 
+-	if (!mempool_initialized(&conf->r10buf_pool))
+-		if (init_resync(conf))
+-			return 0;
+-
+ 	/*
+ 	 * Allow skipping a full rebuild for incremental assembly
+ 	 * of a clean array, like RAID1 does.
+@@ -2919,6 +2918,10 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 		return mddev->dev_sectors - sector_nr;
+ 	}
+ 
++	if (!mempool_initialized(&conf->r10buf_pool))
++		if (init_resync(conf))
++			return 0;
++
+  skipped:
+ 	max_sector = mddev->dev_sectors;
+ 	if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ||
+@@ -3615,6 +3618,20 @@ static int setup_geo(struct geom *geo, struct mddev *mddev, enum geo_type new)
+ 	return nc*fc;
+ }
+ 
++static void raid10_free_conf(struct r10conf *conf)
++{
++	if (!conf)
++		return;
++
++	mempool_exit(&conf->r10bio_pool);
++	kfree(conf->mirrors);
++	kfree(conf->mirrors_old);
++	kfree(conf->mirrors_new);
++	safe_put_page(conf->tmppage);
++	bioset_exit(&conf->bio_split);
++	kfree(conf);
++}
++
+ static struct r10conf *setup_conf(struct mddev *mddev)
+ {
+ 	struct r10conf *conf = NULL;
+@@ -3697,13 +3714,7 @@ static struct r10conf *setup_conf(struct mddev *mddev)
+ 	return conf;
+ 
+  out:
+-	if (conf) {
+-		mempool_exit(&conf->r10bio_pool);
+-		kfree(conf->mirrors);
+-		safe_put_page(conf->tmppage);
+-		bioset_exit(&conf->bio_split);
+-		kfree(conf);
+-	}
++	raid10_free_conf(conf);
+ 	return ERR_PTR(err);
+ }
+ 
+@@ -3741,6 +3752,9 @@ static int raid10_run(struct mddev *mddev)
+ 	if (!conf)
+ 		goto out;
+ 
++	mddev->thread = conf->thread;
++	conf->thread = NULL;
++
+ 	if (mddev_is_clustered(conf->mddev)) {
+ 		int fc, fo;
+ 
+@@ -3753,9 +3767,6 @@ static int raid10_run(struct mddev *mddev)
+ 		}
+ 	}
+ 
+-	mddev->thread = conf->thread;
+-	conf->thread = NULL;
+-
+ 	if (mddev->queue) {
+ 		blk_queue_max_discard_sectors(mddev->queue,
+ 					      mddev->chunk_sectors);
+@@ -3909,10 +3920,7 @@ static int raid10_run(struct mddev *mddev)
+ 
+ out_free_conf:
+ 	md_unregister_thread(&mddev->thread);
+-	mempool_exit(&conf->r10bio_pool);
+-	safe_put_page(conf->tmppage);
+-	kfree(conf->mirrors);
+-	kfree(conf);
++	raid10_free_conf(conf);
+ 	mddev->private = NULL;
+ out:
+ 	return -EIO;
+@@ -3920,15 +3928,7 @@ out:
+ 
+ static void raid10_free(struct mddev *mddev, void *priv)
+ {
+-	struct r10conf *conf = priv;
+-
+-	mempool_exit(&conf->r10bio_pool);
+-	safe_put_page(conf->tmppage);
+-	kfree(conf->mirrors);
+-	kfree(conf->mirrors_old);
+-	kfree(conf->mirrors_new);
+-	bioset_exit(&conf->bio_split);
+-	kfree(conf);
++	raid10_free_conf(priv);
+ }
+ 
+ static void raid10_quiesce(struct mddev *mddev, int quiesce)
+diff --git a/drivers/media/i2c/max9286.c b/drivers/media/i2c/max9286.c
+index 79a11c0184c65..62ce27552dd3c 100644
+--- a/drivers/media/i2c/max9286.c
++++ b/drivers/media/i2c/max9286.c
+@@ -899,6 +899,7 @@ err_async:
+ static void max9286_v4l2_unregister(struct max9286_priv *priv)
+ {
+ 	fwnode_handle_put(priv->sd.fwnode);
++	v4l2_ctrl_handler_free(&priv->ctrls);
+ 	v4l2_async_unregister_subdev(&priv->sd);
+ 	max9286_v4l2_notifier_unregister(priv);
+ }
+diff --git a/drivers/media/pci/dm1105/dm1105.c b/drivers/media/pci/dm1105/dm1105.c
+index 9dce31d2b525b..d2e194a24e7e7 100644
+--- a/drivers/media/pci/dm1105/dm1105.c
++++ b/drivers/media/pci/dm1105/dm1105.c
+@@ -1178,6 +1178,7 @@ static void dm1105_remove(struct pci_dev *pdev)
+ 	struct dvb_demux *dvbdemux = &dev->demux;
+ 	struct dmx_demux *dmx = &dvbdemux->dmx;
+ 
++	cancel_work_sync(&dev->ir.work);
+ 	dm1105_ir_exit(dev);
+ 	dmx->close(dmx);
+ 	dvb_net_release(&dev->dvbnet);
+diff --git a/drivers/media/pci/saa7134/saa7134-ts.c b/drivers/media/pci/saa7134/saa7134-ts.c
+index 6a5053126237f..437dbe5e75e29 100644
+--- a/drivers/media/pci/saa7134/saa7134-ts.c
++++ b/drivers/media/pci/saa7134/saa7134-ts.c
+@@ -300,6 +300,7 @@ int saa7134_ts_start(struct saa7134_dev *dev)
+ 
+ int saa7134_ts_fini(struct saa7134_dev *dev)
+ {
++	del_timer_sync(&dev->ts_q.timeout);
+ 	saa7134_pgtable_free(dev->pci, &dev->ts_q.pt);
+ 	return 0;
+ }
+diff --git a/drivers/media/pci/saa7134/saa7134-vbi.c b/drivers/media/pci/saa7134/saa7134-vbi.c
+index 3f0b0933eed69..3e773690468bd 100644
+--- a/drivers/media/pci/saa7134/saa7134-vbi.c
++++ b/drivers/media/pci/saa7134/saa7134-vbi.c
+@@ -185,6 +185,7 @@ int saa7134_vbi_init1(struct saa7134_dev *dev)
+ int saa7134_vbi_fini(struct saa7134_dev *dev)
+ {
+ 	/* nothing */
++	del_timer_sync(&dev->vbi_q.timeout);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/pci/saa7134/saa7134-video.c b/drivers/media/pci/saa7134/saa7134-video.c
+index 85d082baaadc5..df9e3293015a2 100644
+--- a/drivers/media/pci/saa7134/saa7134-video.c
++++ b/drivers/media/pci/saa7134/saa7134-video.c
+@@ -2153,6 +2153,7 @@ int saa7134_video_init1(struct saa7134_dev *dev)
+ 
+ void saa7134_video_fini(struct saa7134_dev *dev)
+ {
++	del_timer_sync(&dev->video_q.timeout);
+ 	/* free stuff */
+ 	saa7134_pgtable_free(dev->pci, &dev->video_q.pt);
+ 	saa7134_pgtable_free(dev->pci, &dev->vbi_q.pt);
+diff --git a/drivers/media/platform/qcom/venus/core.h b/drivers/media/platform/qcom/venus/core.h
+index f2a0ef9ee884e..aebd4c664bfa1 100644
+--- a/drivers/media/platform/qcom/venus/core.h
++++ b/drivers/media/platform/qcom/venus/core.h
+@@ -283,7 +283,6 @@ enum venus_dec_state {
+ 	VENUS_DEC_STATE_DRAIN		= 5,
+ 	VENUS_DEC_STATE_DECODING	= 6,
+ 	VENUS_DEC_STATE_DRC		= 7,
+-	VENUS_DEC_STATE_DRC_FLUSH_DONE	= 8,
+ };
+ 
+ struct venus_ts_metadata {
+@@ -348,7 +347,7 @@ struct venus_ts_metadata {
+  * @priv:	a private for HFI operations callbacks
+  * @session_type:	the type of the session (decoder or encoder)
+  * @hprop:	a union used as a holder by get property
+- * @last_buf:	last capture buffer for dynamic-resoluton-change
++ * @next_buf_last: a flag to mark next queued capture buffer as last
+  */
+ struct venus_inst {
+ 	struct list_head list;
+@@ -410,7 +409,8 @@ struct venus_inst {
+ 	union hfi_get_property hprop;
+ 	unsigned int core_acquired: 1;
+ 	unsigned int bit_depth;
+-	struct vb2_buffer *last_buf;
++	bool next_buf_last;
++	bool drain_active;
+ };
+ 
+ #define IS_V1(core)	((core)->res->hfi_version == HFI_VERSION_1XX)
+diff --git a/drivers/media/platform/qcom/venus/helpers.c b/drivers/media/platform/qcom/venus/helpers.c
+index 50439eb1ffeaa..5ca3920237c5a 100644
+--- a/drivers/media/platform/qcom/venus/helpers.c
++++ b/drivers/media/platform/qcom/venus/helpers.c
+@@ -1347,6 +1347,12 @@ void venus_helper_vb2_buf_queue(struct vb2_buffer *vb)
+ 
+ 	v4l2_m2m_buf_queue(m2m_ctx, vbuf);
+ 
++	/* Skip processing queued capture buffers after LAST flag */
++	if (inst->session_type == VIDC_SESSION_TYPE_DEC &&
++	    V4L2_TYPE_IS_CAPTURE(vb->vb2_queue->type) &&
++	    inst->codec_state == VENUS_DEC_STATE_DRC)
++		goto unlock;
++
+ 	cache_payload(inst, vb);
+ 
+ 	if (inst->session_type == VIDC_SESSION_TYPE_ENC &&
+diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c
+index de34a87d1130e..c437a929a5451 100644
+--- a/drivers/media/platform/qcom/venus/vdec.c
++++ b/drivers/media/platform/qcom/venus/vdec.c
+@@ -495,6 +495,7 @@ static int
+ vdec_decoder_cmd(struct file *file, void *fh, struct v4l2_decoder_cmd *cmd)
+ {
+ 	struct venus_inst *inst = to_inst(file);
++	struct vb2_queue *dst_vq;
+ 	struct hfi_frame_data fdata = {0};
+ 	int ret;
+ 
+@@ -518,8 +519,17 @@ vdec_decoder_cmd(struct file *file, void *fh, struct v4l2_decoder_cmd *cmd)
+ 
+ 		ret = hfi_session_process_buf(inst, &fdata);
+ 
+-		if (!ret && inst->codec_state == VENUS_DEC_STATE_DECODING)
++		if (!ret && inst->codec_state == VENUS_DEC_STATE_DECODING) {
+ 			inst->codec_state = VENUS_DEC_STATE_DRAIN;
++			inst->drain_active = true;
++		}
++	} else if (cmd->cmd == V4L2_DEC_CMD_START &&
++		   inst->codec_state == VENUS_DEC_STATE_STOPPED) {
++		dst_vq = v4l2_m2m_get_vq(inst->fh.m2m_ctx,
++					 V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
++		vb2_clear_last_buffer_dequeued(dst_vq);
++
++		inst->codec_state = VENUS_DEC_STATE_DECODING;
+ 	}
+ 
+ unlock:
+@@ -636,6 +646,7 @@ static int vdec_output_conf(struct venus_inst *inst)
+ {
+ 	struct venus_core *core = inst->core;
+ 	struct hfi_enable en = { .enable = 1 };
++	struct hfi_buffer_requirements bufreq;
+ 	u32 width = inst->out_width;
+ 	u32 height = inst->out_height;
+ 	u32 out_fmt, out2_fmt;
+@@ -711,6 +722,23 @@ static int vdec_output_conf(struct venus_inst *inst)
+ 	}
+ 
+ 	if (IS_V3(core) || IS_V4(core)) {
++		ret = venus_helper_get_bufreq(inst, HFI_BUFFER_OUTPUT, &bufreq);
++		if (ret)
++			return ret;
++
++		if (bufreq.size > inst->output_buf_size)
++			return -EINVAL;
++
++		if (inst->dpb_fmt) {
++			ret = venus_helper_get_bufreq(inst, HFI_BUFFER_OUTPUT2,
++						      &bufreq);
++			if (ret)
++				return ret;
++
++			if (bufreq.size > inst->output2_buf_size)
++				return -EINVAL;
++		}
++
+ 		if (inst->output2_buf_size) {
+ 			ret = venus_helper_set_bufsize(inst,
+ 						       inst->output2_buf_size,
+@@ -916,10 +944,6 @@ static int vdec_start_capture(struct venus_inst *inst)
+ 		return 0;
+ 
+ reconfigure:
+-	ret = hfi_session_flush(inst, HFI_FLUSH_OUTPUT, true);
+-	if (ret)
+-		return ret;
+-
+ 	ret = vdec_output_conf(inst);
+ 	if (ret)
+ 		return ret;
+@@ -947,15 +971,21 @@ reconfigure:
+ 
+ 	venus_pm_load_scale(inst);
+ 
++	inst->next_buf_last = false;
++
+ 	ret = hfi_session_continue(inst);
+ 	if (ret)
+ 		goto free_dpb_bufs;
+ 
+ 	inst->codec_state = VENUS_DEC_STATE_DECODING;
+ 
++	if (inst->drain_active)
++		inst->codec_state = VENUS_DEC_STATE_DRAIN;
++
+ 	inst->streamon_cap = 1;
+ 	inst->sequence_cap = 0;
+ 	inst->reconfig = false;
++	inst->drain_active = false;
+ 
+ 	return 0;
+ 
+@@ -971,7 +1001,10 @@ static int vdec_start_output(struct venus_inst *inst)
+ 
+ 	if (inst->codec_state == VENUS_DEC_STATE_SEEK) {
+ 		ret = venus_helper_process_initial_out_bufs(inst);
+-		inst->codec_state = VENUS_DEC_STATE_DECODING;
++		if (inst->next_buf_last)
++			inst->codec_state = VENUS_DEC_STATE_DRC;
++		else
++			inst->codec_state = VENUS_DEC_STATE_DECODING;
+ 		goto done;
+ 	}
+ 
+@@ -987,6 +1020,7 @@ static int vdec_start_output(struct venus_inst *inst)
+ 	venus_helper_init_instance(inst);
+ 	inst->sequence_out = 0;
+ 	inst->reconfig = false;
++	inst->next_buf_last = false;
+ 
+ 	ret = vdec_set_properties(inst);
+ 	if (ret)
+@@ -1076,13 +1110,14 @@ static int vdec_stop_capture(struct venus_inst *inst)
+ 		ret = hfi_session_flush(inst, HFI_FLUSH_ALL, true);
+ 		fallthrough;
+ 	case VENUS_DEC_STATE_DRAIN:
+-		vdec_cancel_dst_buffers(inst);
+ 		inst->codec_state = VENUS_DEC_STATE_STOPPED;
++		inst->drain_active = false;
++		fallthrough;
++	case VENUS_DEC_STATE_SEEK:
++		vdec_cancel_dst_buffers(inst);
+ 		break;
+ 	case VENUS_DEC_STATE_DRC:
+-		WARN_ON(1);
+-		fallthrough;
+-	case VENUS_DEC_STATE_DRC_FLUSH_DONE:
++		ret = hfi_session_flush(inst, HFI_FLUSH_OUTPUT, true);
+ 		inst->codec_state = VENUS_DEC_STATE_CAPTURE_SETUP;
+ 		venus_helper_free_dpb_bufs(inst);
+ 		break;
+@@ -1101,6 +1136,7 @@ static int vdec_stop_output(struct venus_inst *inst)
+ 	case VENUS_DEC_STATE_DECODING:
+ 	case VENUS_DEC_STATE_DRAIN:
+ 	case VENUS_DEC_STATE_STOPPED:
++	case VENUS_DEC_STATE_DRC:
+ 		ret = hfi_session_flush(inst, HFI_FLUSH_ALL, true);
+ 		inst->codec_state = VENUS_DEC_STATE_SEEK;
+ 		break;
+@@ -1206,9 +1242,28 @@ static void vdec_buf_cleanup(struct vb2_buffer *vb)
+ static void vdec_vb2_buf_queue(struct vb2_buffer *vb)
+ {
+ 	struct venus_inst *inst = vb2_get_drv_priv(vb->vb2_queue);
++	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
++	static const struct v4l2_event eos = { .type = V4L2_EVENT_EOS };
+ 
+ 	vdec_pm_get_put(inst);
+ 
++	mutex_lock(&inst->lock);
++
++	if (inst->next_buf_last && V4L2_TYPE_IS_CAPTURE(vb->vb2_queue->type) &&
++	    inst->codec_state == VENUS_DEC_STATE_DRC) {
++		vbuf->flags |= V4L2_BUF_FLAG_LAST;
++		vbuf->sequence = inst->sequence_cap++;
++		vbuf->field = V4L2_FIELD_NONE;
++		vb2_set_plane_payload(vb, 0, 0);
++		v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
++		v4l2_event_queue_fh(&inst->fh, &eos);
++		inst->next_buf_last = false;
++		mutex_unlock(&inst->lock);
++		return;
++	}
++
++	mutex_unlock(&inst->lock);
++
+ 	venus_helper_vb2_buf_queue(vb);
+ }
+ 
+@@ -1252,20 +1307,15 @@ static void vdec_buf_done(struct venus_inst *inst, unsigned int buf_type,
+ 		vb->timestamp = timestamp_us * NSEC_PER_USEC;
+ 		vbuf->sequence = inst->sequence_cap++;
+ 
+-		if (inst->last_buf == vb) {
+-			inst->last_buf = NULL;
+-			vbuf->flags |= V4L2_BUF_FLAG_LAST;
+-			vb2_set_plane_payload(vb, 0, 0);
+-			vb->timestamp = 0;
+-		}
+-
+ 		if (vbuf->flags & V4L2_BUF_FLAG_LAST) {
+ 			const struct v4l2_event ev = { .type = V4L2_EVENT_EOS };
+ 
+ 			v4l2_event_queue_fh(&inst->fh, &ev);
+ 
+-			if (inst->codec_state == VENUS_DEC_STATE_DRAIN)
++			if (inst->codec_state == VENUS_DEC_STATE_DRAIN) {
++				inst->drain_active = false;
+ 				inst->codec_state = VENUS_DEC_STATE_STOPPED;
++			}
+ 		}
+ 
+ 		if (!bytesused)
+@@ -1321,19 +1371,16 @@ static void vdec_event_change(struct venus_inst *inst,
+ 	dev_dbg(dev, VDBGM "event %s sufficient resources (%ux%u)\n",
+ 		sufficient ? "" : "not", ev_data->width, ev_data->height);
+ 
+-	if (sufficient) {
+-		hfi_session_continue(inst);
+-	} else {
+-		switch (inst->codec_state) {
+-		case VENUS_DEC_STATE_INIT:
+-			inst->codec_state = VENUS_DEC_STATE_CAPTURE_SETUP;
+-			break;
+-		case VENUS_DEC_STATE_DECODING:
+-			inst->codec_state = VENUS_DEC_STATE_DRC;
+-			break;
+-		default:
+-			break;
+-		}
++	switch (inst->codec_state) {
++	case VENUS_DEC_STATE_INIT:
++		inst->codec_state = VENUS_DEC_STATE_CAPTURE_SETUP;
++		break;
++	case VENUS_DEC_STATE_DECODING:
++	case VENUS_DEC_STATE_DRAIN:
++		inst->codec_state = VENUS_DEC_STATE_DRC;
++		break;
++	default:
++		break;
+ 	}
+ 
+ 	/*
+@@ -1342,19 +1389,17 @@ static void vdec_event_change(struct venus_inst *inst,
+ 	 * itself doesn't mark the last decoder output buffer with HFI EOS flag.
+ 	 */
+ 
+-	if (!sufficient && inst->codec_state == VENUS_DEC_STATE_DRC) {
+-		struct vb2_v4l2_buffer *last;
++	if (inst->codec_state == VENUS_DEC_STATE_DRC) {
+ 		int ret;
+ 
+-		last = v4l2_m2m_last_dst_buf(inst->m2m_ctx);
+-		if (last)
+-			inst->last_buf = &last->vb2_buf;
++		inst->next_buf_last = true;
+ 
+ 		ret = hfi_session_flush(inst, HFI_FLUSH_OUTPUT, false);
+ 		if (ret)
+ 			dev_dbg(dev, VDBGH "flush output error %d\n", ret);
+ 	}
+ 
++	inst->next_buf_last = true;
+ 	inst->reconfig = true;
+ 	v4l2_event_queue_fh(&inst->fh, &ev);
+ 	wake_up(&inst->reconf_wait);
+@@ -1397,8 +1442,7 @@ static void vdec_event_notify(struct venus_inst *inst, u32 event,
+ 
+ static void vdec_flush_done(struct venus_inst *inst)
+ {
+-	if (inst->codec_state == VENUS_DEC_STATE_DRC)
+-		inst->codec_state = VENUS_DEC_STATE_DRC_FLUSH_DONE;
++	dev_dbg(inst->core->dev_dec, VDBGH "flush done\n");
+ }
+ 
+ static const struct hfi_inst_ops vdec_hfi_ops = {
+diff --git a/drivers/media/platform/rcar_fdp1.c b/drivers/media/platform/rcar_fdp1.c
+index c9448de885b62..44a57fa06e74a 100644
+--- a/drivers/media/platform/rcar_fdp1.c
++++ b/drivers/media/platform/rcar_fdp1.c
+@@ -2121,9 +2121,7 @@ static int fdp1_open(struct file *file)
+ 
+ 	if (ctx->hdl.error) {
+ 		ret = ctx->hdl.error;
+-		v4l2_ctrl_handler_free(&ctx->hdl);
+-		kfree(ctx);
+-		goto done;
++		goto error_ctx;
+ 	}
+ 
+ 	ctx->fh.ctrl_handler = &ctx->hdl;
+@@ -2137,20 +2135,27 @@ static int fdp1_open(struct file *file)
+ 
+ 	if (IS_ERR(ctx->fh.m2m_ctx)) {
+ 		ret = PTR_ERR(ctx->fh.m2m_ctx);
+-
+-		v4l2_ctrl_handler_free(&ctx->hdl);
+-		kfree(ctx);
+-		goto done;
++		goto error_ctx;
+ 	}
+ 
+ 	/* Perform any power management required */
+-	pm_runtime_get_sync(fdp1->dev);
++	ret = pm_runtime_resume_and_get(fdp1->dev);
++	if (ret < 0)
++		goto error_pm;
+ 
+ 	v4l2_fh_add(&ctx->fh);
+ 
+ 	dprintk(fdp1, "Created instance: %p, m2m_ctx: %p\n",
+ 		ctx, ctx->fh.m2m_ctx);
+ 
++	mutex_unlock(&fdp1->dev_mutex);
++	return 0;
++
++error_pm:
++       v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
++error_ctx:
++	v4l2_ctrl_handler_free(&ctx->hdl);
++	kfree(ctx);
+ done:
+ 	mutex_unlock(&fdp1->dev_mutex);
+ 	return ret;
+@@ -2255,7 +2260,6 @@ static int fdp1_probe(struct platform_device *pdev)
+ 	struct fdp1_dev *fdp1;
+ 	struct video_device *vfd;
+ 	struct device_node *fcp_node;
+-	struct resource *res;
+ 	struct clk *clk;
+ 	unsigned int i;
+ 
+@@ -2282,17 +2286,15 @@ static int fdp1_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, fdp1);
+ 
+ 	/* Memory-mapped registers */
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	fdp1->regs = devm_ioremap_resource(&pdev->dev, res);
++	fdp1->regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(fdp1->regs))
+ 		return PTR_ERR(fdp1->regs);
+ 
+ 	/* Interrupt service routine registration */
+-	fdp1->irq = ret = platform_get_irq(pdev, 0);
+-	if (ret < 0) {
+-		dev_err(&pdev->dev, "cannot find IRQ\n");
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
+ 		return ret;
+-	}
++	fdp1->irq = ret;
+ 
+ 	ret = devm_request_irq(&pdev->dev, fdp1->irq, fdp1_irq_handler, 0,
+ 			       dev_name(&pdev->dev), fdp1);
+@@ -2315,8 +2317,10 @@ static int fdp1_probe(struct platform_device *pdev)
+ 
+ 	/* Determine our clock rate */
+ 	clk = clk_get(&pdev->dev, NULL);
+-	if (IS_ERR(clk))
+-		return PTR_ERR(clk);
++	if (IS_ERR(clk)) {
++		ret = PTR_ERR(clk);
++		goto put_dev;
++	}
+ 
+ 	fdp1->clk_rate = clk_get_rate(clk);
+ 	clk_put(clk);
+@@ -2325,7 +2329,7 @@ static int fdp1_probe(struct platform_device *pdev)
+ 	ret = v4l2_device_register(&pdev->dev, &fdp1->v4l2_dev);
+ 	if (ret) {
+ 		v4l2_err(&fdp1->v4l2_dev, "Failed to register video device\n");
+-		return ret;
++		goto put_dev;
+ 	}
+ 
+ 	/* M2M registration */
+@@ -2355,7 +2359,9 @@ static int fdp1_probe(struct platform_device *pdev)
+ 
+ 	/* Power up the cells to read HW */
+ 	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_get_sync(fdp1->dev);
++	ret = pm_runtime_resume_and_get(fdp1->dev);
++	if (ret < 0)
++		goto disable_pm;
+ 
+ 	hw_version = fdp1_read(fdp1, FD1_IP_INTDATA);
+ 	switch (hw_version) {
+@@ -2384,12 +2390,17 @@ static int fdp1_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++disable_pm:
++	pm_runtime_disable(fdp1->dev);
++
+ release_m2m:
+ 	v4l2_m2m_release(fdp1->m2m_dev);
+ 
+ unreg_dev:
+ 	v4l2_device_unregister(&fdp1->v4l2_dev);
+ 
++put_dev:
++	rcar_fcp_put(fdp1->fcp);
+ 	return ret;
+ }
+ 
+@@ -2401,6 +2412,7 @@ static int fdp1_remove(struct platform_device *pdev)
+ 	video_unregister_device(&fdp1->vfd);
+ 	v4l2_device_unregister(&fdp1->v4l2_dev);
+ 	pm_runtime_disable(&pdev->dev);
++	rcar_fcp_put(fdp1->fcp);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
+index 85288da9d2ae6..182e5a4aeb171 100644
+--- a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
++++ b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
+@@ -1310,6 +1310,8 @@ static int bdisp_probe(struct platform_device *pdev)
+ 	init_waitqueue_head(&bdisp->irq_queue);
+ 	INIT_DELAYED_WORK(&bdisp->timeout_work, bdisp_irq_timeout);
+ 	bdisp->work_queue = create_workqueue(BDISP_NAME);
++	if (!bdisp->work_queue)
++		return -ENOMEM;
+ 
+ 	spin_lock_init(&bdisp->slock);
+ 	mutex_init(&bdisp->lock);
+diff --git a/drivers/media/rc/gpio-ir-recv.c b/drivers/media/rc/gpio-ir-recv.c
+index a56c844d7f816..16795e07dc103 100644
+--- a/drivers/media/rc/gpio-ir-recv.c
++++ b/drivers/media/rc/gpio-ir-recv.c
+@@ -107,6 +107,8 @@ static int gpio_ir_recv_probe(struct platform_device *pdev)
+ 		rcdev->map_name = RC_MAP_EMPTY;
+ 
+ 	gpio_dev->rcdev = rcdev;
++	if (of_property_read_bool(np, "wakeup-source"))
++		device_init_wakeup(dev, true);
+ 
+ 	rc = devm_rc_register_device(dev, rcdev);
+ 	if (rc < 0) {
+diff --git a/drivers/mfd/tqmx86.c b/drivers/mfd/tqmx86.c
+index 732013f40e4e8..0498f1b7e2e36 100644
+--- a/drivers/mfd/tqmx86.c
++++ b/drivers/mfd/tqmx86.c
+@@ -16,8 +16,8 @@
+ #include <linux/platform_data/i2c-ocores.h>
+ #include <linux/platform_device.h>
+ 
+-#define TQMX86_IOBASE	0x160
+-#define TQMX86_IOSIZE	0x3f
++#define TQMX86_IOBASE	0x180
++#define TQMX86_IOSIZE	0x20
+ #define TQMX86_IOBASE_I2C	0x1a0
+ #define TQMX86_IOSIZE_I2C	0xa
+ #define TQMX86_IOBASE_WATCHDOG	0x18b
+@@ -25,29 +25,33 @@
+ #define TQMX86_IOBASE_GPIO	0x18d
+ #define TQMX86_IOSIZE_GPIO	0x4
+ 
+-#define TQMX86_REG_BOARD_ID	0x20
++#define TQMX86_REG_BOARD_ID	0x00
+ #define TQMX86_REG_BOARD_ID_E38M	1
+ #define TQMX86_REG_BOARD_ID_50UC	2
+ #define TQMX86_REG_BOARD_ID_E38C	3
+ #define TQMX86_REG_BOARD_ID_60EB	4
+-#define TQMX86_REG_BOARD_ID_E39M	5
+-#define TQMX86_REG_BOARD_ID_E39C	6
+-#define TQMX86_REG_BOARD_ID_E39x	7
++#define TQMX86_REG_BOARD_ID_E39MS	5
++#define TQMX86_REG_BOARD_ID_E39C1	6
++#define TQMX86_REG_BOARD_ID_E39C2	7
+ #define TQMX86_REG_BOARD_ID_70EB	8
+ #define TQMX86_REG_BOARD_ID_80UC	9
+-#define TQMX86_REG_BOARD_ID_90UC	10
+-#define TQMX86_REG_BOARD_REV	0x21
+-#define TQMX86_REG_IO_EXT_INT	0x26
++#define TQMX86_REG_BOARD_ID_110EB	11
++#define TQMX86_REG_BOARD_ID_E40M	12
++#define TQMX86_REG_BOARD_ID_E40S	13
++#define TQMX86_REG_BOARD_ID_E40C1	14
++#define TQMX86_REG_BOARD_ID_E40C2	15
++#define TQMX86_REG_BOARD_REV	0x01
++#define TQMX86_REG_IO_EXT_INT	0x06
+ #define TQMX86_REG_IO_EXT_INT_NONE		0
+ #define TQMX86_REG_IO_EXT_INT_7			1
+ #define TQMX86_REG_IO_EXT_INT_9			2
+ #define TQMX86_REG_IO_EXT_INT_12		3
+ #define TQMX86_REG_IO_EXT_INT_MASK		0x3
+ #define TQMX86_REG_IO_EXT_INT_GPIO_SHIFT	4
++#define TQMX86_REG_SAUC		0x17
+ 
+-#define TQMX86_REG_I2C_DETECT	0x47
++#define TQMX86_REG_I2C_DETECT	0x1a7
+ #define TQMX86_REG_I2C_DETECT_SOFT		0xa5
+-#define TQMX86_REG_I2C_INT_EN	0x49
+ 
+ static uint gpio_irq;
+ module_param(gpio_irq, uint, 0);
+@@ -107,7 +111,7 @@ static const struct mfd_cell tqmx86_devs[] = {
+ 	},
+ };
+ 
+-static const char *tqmx86_board_id_to_name(u8 board_id)
++static const char *tqmx86_board_id_to_name(u8 board_id, u8 sauc)
+ {
+ 	switch (board_id) {
+ 	case TQMX86_REG_BOARD_ID_E38M:
+@@ -118,18 +122,26 @@ static const char *tqmx86_board_id_to_name(u8 board_id)
+ 		return "TQMxE38C";
+ 	case TQMX86_REG_BOARD_ID_60EB:
+ 		return "TQMx60EB";
+-	case TQMX86_REG_BOARD_ID_E39M:
+-		return "TQMxE39M";
+-	case TQMX86_REG_BOARD_ID_E39C:
+-		return "TQMxE39C";
+-	case TQMX86_REG_BOARD_ID_E39x:
+-		return "TQMxE39x";
++	case TQMX86_REG_BOARD_ID_E39MS:
++		return (sauc == 0xff) ? "TQMxE39M" : "TQMxE39S";
++	case TQMX86_REG_BOARD_ID_E39C1:
++		return "TQMxE39C1";
++	case TQMX86_REG_BOARD_ID_E39C2:
++		return "TQMxE39C2";
+ 	case TQMX86_REG_BOARD_ID_70EB:
+ 		return "TQMx70EB";
+ 	case TQMX86_REG_BOARD_ID_80UC:
+ 		return "TQMx80UC";
+-	case TQMX86_REG_BOARD_ID_90UC:
+-		return "TQMx90UC";
++	case TQMX86_REG_BOARD_ID_110EB:
++		return "TQMx110EB";
++	case TQMX86_REG_BOARD_ID_E40M:
++		return "TQMxE40M";
++	case TQMX86_REG_BOARD_ID_E40S:
++		return "TQMxE40S";
++	case TQMX86_REG_BOARD_ID_E40C1:
++		return "TQMxE40C1";
++	case TQMX86_REG_BOARD_ID_E40C2:
++		return "TQMxE40C2";
+ 	default:
+ 		return "Unknown";
+ 	}
+@@ -142,11 +154,15 @@ static int tqmx86_board_id_to_clk_rate(u8 board_id)
+ 	case TQMX86_REG_BOARD_ID_60EB:
+ 	case TQMX86_REG_BOARD_ID_70EB:
+ 	case TQMX86_REG_BOARD_ID_80UC:
+-	case TQMX86_REG_BOARD_ID_90UC:
++	case TQMX86_REG_BOARD_ID_110EB:
++	case TQMX86_REG_BOARD_ID_E40M:
++	case TQMX86_REG_BOARD_ID_E40S:
++	case TQMX86_REG_BOARD_ID_E40C1:
++	case TQMX86_REG_BOARD_ID_E40C2:
+ 		return 24000;
+-	case TQMX86_REG_BOARD_ID_E39M:
+-	case TQMX86_REG_BOARD_ID_E39C:
+-	case TQMX86_REG_BOARD_ID_E39x:
++	case TQMX86_REG_BOARD_ID_E39MS:
++	case TQMX86_REG_BOARD_ID_E39C1:
++	case TQMX86_REG_BOARD_ID_E39C2:
+ 		return 25000;
+ 	case TQMX86_REG_BOARD_ID_E38M:
+ 	case TQMX86_REG_BOARD_ID_E38C:
+@@ -158,7 +174,7 @@ static int tqmx86_board_id_to_clk_rate(u8 board_id)
+ 
+ static int tqmx86_probe(struct platform_device *pdev)
+ {
+-	u8 board_id, rev, i2c_det, io_ext_int_val;
++	u8 board_id, sauc, rev, i2c_det, io_ext_int_val;
+ 	struct device *dev = &pdev->dev;
+ 	u8 gpio_irq_cfg, readback;
+ 	const char *board_name;
+@@ -188,14 +204,20 @@ static int tqmx86_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	board_id = ioread8(io_base + TQMX86_REG_BOARD_ID);
+-	board_name = tqmx86_board_id_to_name(board_id);
++	sauc = ioread8(io_base + TQMX86_REG_SAUC);
++	board_name = tqmx86_board_id_to_name(board_id, sauc);
+ 	rev = ioread8(io_base + TQMX86_REG_BOARD_REV);
+ 
+ 	dev_info(dev,
+ 		 "Found %s - Board ID %d, PCB Revision %d, PLD Revision %d\n",
+ 		 board_name, board_id, rev >> 4, rev & 0xf);
+ 
+-	i2c_det = ioread8(io_base + TQMX86_REG_I2C_DETECT);
++	/*
++	 * The I2C_DETECT register is in the range assigned to the I2C driver
++	 * later, so we don't extend TQMX86_IOSIZE. Use inb() for this one-off
++	 * access instead of ioport_map + unmap.
++	 */
++	i2c_det = inb(TQMX86_REG_I2C_DETECT);
+ 
+ 	if (gpio_irq_cfg) {
+ 		io_ext_int_val =
+diff --git a/drivers/misc/vmw_vmci/vmci_host.c b/drivers/misc/vmw_vmci/vmci_host.c
+index 2d8328d928d53..4a903770b8e1d 100644
+--- a/drivers/misc/vmw_vmci/vmci_host.c
++++ b/drivers/misc/vmw_vmci/vmci_host.c
+@@ -165,10 +165,16 @@ static int vmci_host_close(struct inode *inode, struct file *filp)
+ static __poll_t vmci_host_poll(struct file *filp, poll_table *wait)
+ {
+ 	struct vmci_host_dev *vmci_host_dev = filp->private_data;
+-	struct vmci_ctx *context = vmci_host_dev->context;
++	struct vmci_ctx *context;
+ 	__poll_t mask = 0;
+ 
+ 	if (vmci_host_dev->ct_type == VMCIOBJ_CONTEXT) {
++		/*
++		 * Read context only if ct_type == VMCIOBJ_CONTEXT to make
++		 * sure that context is initialized
++		 */
++		context = vmci_host_dev->context;
++
+ 		/* Check for VMCI calls to this VM context. */
+ 		if (wait)
+ 			poll_wait(filp, &context->host_context.wait_queue,
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index d53374991e137..5b853f651b389 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -126,6 +126,7 @@ static u32 esdhc_readl_fixup(struct sdhci_host *host,
+ 			return ret;
+ 		}
+ 	}
++
+ 	/*
+ 	 * The DAT[3:0] line signal levels and the CMD line signal level are
+ 	 * not compatible with standard SDHC register. The line signal levels
+@@ -137,6 +138,16 @@ static u32 esdhc_readl_fixup(struct sdhci_host *host,
+ 		ret = value & 0x000fffff;
+ 		ret |= (value >> 4) & SDHCI_DATA_LVL_MASK;
+ 		ret |= (value << 1) & SDHCI_CMD_LVL;
++
++		/*
++		 * Some controllers have unreliable Data Line Active
++		 * bit for commands with busy signal. This affects
++		 * Command Inhibit (data) bit. Just ignore it since
++		 * MMC core driver has already polled card status
++		 * with CMD13 after any command with busy siganl.
++		 */
++		if (esdhc->quirk_ignore_data_inhibit)
++			ret &= ~SDHCI_DATA_INHIBIT;
+ 		return ret;
+ 	}
+ 
+@@ -151,19 +162,6 @@ static u32 esdhc_readl_fixup(struct sdhci_host *host,
+ 		return ret;
+ 	}
+ 
+-	/*
+-	 * Some controllers have unreliable Data Line Active
+-	 * bit for commands with busy signal. This affects
+-	 * Command Inhibit (data) bit. Just ignore it since
+-	 * MMC core driver has already polled card status
+-	 * with CMD13 after any command with busy siganl.
+-	 */
+-	if ((spec_reg == SDHCI_PRESENT_STATE) &&
+-	(esdhc->quirk_ignore_data_inhibit == true)) {
+-		ret = value & ~SDHCI_DATA_INHIBIT;
+-		return ret;
+-	}
+-
+ 	ret = value;
+ 	return ret;
+ }
+diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c
+index 0edecfdbd01f3..b4cdf2351cac9 100644
+--- a/drivers/mtd/ubi/eba.c
++++ b/drivers/mtd/ubi/eba.c
+@@ -947,7 +947,7 @@ static int try_write_vid_and_data(struct ubi_volume *vol, int lnum,
+ 				  int offset, int len)
+ {
+ 	struct ubi_device *ubi = vol->ubi;
+-	int pnum, opnum, err, vol_id = vol->vol_id;
++	int pnum, opnum, err, err2, vol_id = vol->vol_id;
+ 
+ 	pnum = ubi_wl_get_peb(ubi);
+ 	if (pnum < 0) {
+@@ -982,10 +982,19 @@ static int try_write_vid_and_data(struct ubi_volume *vol, int lnum,
+ out_put:
+ 	up_read(&ubi->fm_eba_sem);
+ 
+-	if (err && pnum >= 0)
+-		err = ubi_wl_put_peb(ubi, vol_id, lnum, pnum, 1);
+-	else if (!err && opnum >= 0)
+-		err = ubi_wl_put_peb(ubi, vol_id, lnum, opnum, 0);
++	if (err && pnum >= 0) {
++		err2 = ubi_wl_put_peb(ubi, vol_id, lnum, pnum, 1);
++		if (err2) {
++			ubi_warn(ubi, "failed to return physical eraseblock %d, error %d",
++				 pnum, err2);
++		}
++	} else if (!err && opnum >= 0) {
++		err2 = ubi_wl_put_peb(ubi, vol_id, lnum, opnum, 0);
++		if (err2) {
++			ubi_warn(ubi, "failed to return physical eraseblock %d, error %d",
++				 opnum, err2);
++		}
++	}
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 70155e996f7d7..d3b42adef057b 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -404,9 +404,9 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
+ 	case PHY_INTERFACE_MODE_TRGMII:
+ 		trgint = 1;
+ 		if (priv->id == ID_MT7621) {
+-			/* PLL frequency: 150MHz: 1.2GBit */
++			/* PLL frequency: 125MHz: 1.0GBit */
+ 			if (xtal == HWTRAP_XTAL_40MHZ)
+-				ncpo1 = 0x0780;
++				ncpo1 = 0x0640;
+ 			if (xtal == HWTRAP_XTAL_25MHZ)
+ 				ncpo1 = 0x0a00;
+ 		} else { /* PLL frequency: 250MHz: 2.0Gbit */
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 0b104a90c0d80..321c821876f65 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -4182,6 +4182,7 @@ static const struct mv88e6xxx_ops mv88e6321_ops = {
+ 	.set_cpu_port = mv88e6095_g1_set_cpu_port,
+ 	.set_egress_port = mv88e6095_g1_set_egress_port,
+ 	.watchdog_ops = &mv88e6390_watchdog_ops,
++	.mgmt_rsvd2cpu = mv88e6352_g2_mgmt_rsvd2cpu,
+ 	.reset = mv88e6352_g1_reset,
+ 	.vtu_getnext = mv88e6185_g1_vtu_getnext,
+ 	.vtu_loadpurge = mv88e6185_g1_vtu_loadpurge,
+diff --git a/drivers/net/ethernet/amd/nmclan_cs.c b/drivers/net/ethernet/amd/nmclan_cs.c
+index 11c0b13edd30f..f34881637505e 100644
+--- a/drivers/net/ethernet/amd/nmclan_cs.c
++++ b/drivers/net/ethernet/amd/nmclan_cs.c
+@@ -650,7 +650,7 @@ static int nmclan_config(struct pcmcia_device *link)
+     } else {
+       pr_notice("mace id not found: %x %x should be 0x40 0x?9\n",
+ 		sig[0], sig[1]);
+-      return -ENODEV;
++      goto failed;
+     }
+   }
+ 
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+index 5841721c81190..8d92dc6bc9945 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+@@ -1266,7 +1266,7 @@ static int enetc_psfp_parse_clsflower(struct enetc_ndev_priv *priv,
+ 		int index;
+ 
+ 		index = enetc_get_free_index(priv);
+-		if (sfi->handle < 0) {
++		if (index < 0) {
+ 			NL_SET_ERR_MSG_MOD(extack, "No Stream Filter resource!");
+ 			err = -ENOSPC;
+ 			goto free_fmi;
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+index 55983904b6df1..2eb1331834731 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+@@ -2641,6 +2641,14 @@ static int ixgbe_get_rss_hash_opts(struct ixgbe_adapter *adapter,
+ 	return 0;
+ }
+ 
++static int ixgbe_rss_indir_tbl_max(struct ixgbe_adapter *adapter)
++{
++	if (adapter->hw.mac.type < ixgbe_mac_X550)
++		return 16;
++	else
++		return 64;
++}
++
+ static int ixgbe_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
+ 			   u32 *rule_locs)
+ {
+@@ -2649,7 +2657,8 @@ static int ixgbe_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
+ 
+ 	switch (cmd->cmd) {
+ 	case ETHTOOL_GRXRINGS:
+-		cmd->data = adapter->num_rx_queues;
++		cmd->data = min_t(int, adapter->num_rx_queues,
++				  ixgbe_rss_indir_tbl_max(adapter));
+ 		ret = 0;
+ 		break;
+ 	case ETHTOOL_GRXCLSRLCNT:
+@@ -3051,14 +3060,6 @@ static int ixgbe_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
+ 	return ret;
+ }
+ 
+-static int ixgbe_rss_indir_tbl_max(struct ixgbe_adapter *adapter)
+-{
+-	if (adapter->hw.mac.type < ixgbe_mac_X550)
+-		return 16;
+-	else
+-		return 64;
+-}
+-
+ static u32 ixgbe_get_rxfh_key_size(struct net_device *netdev)
+ {
+ 	return IXGBE_RSS_KEY_SIZE;
+@@ -3107,8 +3108,8 @@ static int ixgbe_set_rxfh(struct net_device *netdev, const u32 *indir,
+ 	int i;
+ 	u32 reta_entries = ixgbe_rss_indir_tbl_entries(adapter);
+ 
+-	if (hfunc)
+-		return -EINVAL;
++	if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP)
++		return -EOPNOTSUPP;
+ 
+ 	/* Fill out the redirection table */
+ 	if (indir) {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 161174be51c31..54aeb276b9a0a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -1589,11 +1589,20 @@ int otx2_open(struct net_device *netdev)
+ 	otx2_config_pause_frm(pf);
+ 
+ 	err = otx2_rxtx_enable(pf, true);
+-	if (err)
++	/* If a mbox communication error happens at this point then interface
++	 * will end up in a state such that it is in down state but hardware
++	 * mcam entries are enabled to receive the packets. Hence disable the
++	 * packet I/O.
++	 */
++	if (err == EIO)
++		goto err_disable_rxtx;
++	else if (err)
+ 		goto err_tx_stop_queues;
+ 
+ 	return 0;
+ 
++err_disable_rxtx:
++	otx2_rxtx_enable(pf, false);
+ err_tx_stop_queues:
+ 	netif_tx_stop_all_queues(netdev);
+ 	netif_carrier_off(netdev);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+index 67fabf265fe6f..5310b71795ecd 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+@@ -542,7 +542,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	err = otx2vf_realloc_msix_vectors(vf);
+ 	if (err)
+-		goto err_mbox_destroy;
++		goto err_detach_rsrc;
+ 
+ 	err = otx2_set_real_num_queues(netdev, qcount, qcount);
+ 	if (err)
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+index 35c72d4a78b3f..8e5b01af85ed2 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+@@ -693,7 +693,7 @@ static int ionic_get_rxnfc(struct net_device *netdev,
+ 		info->data = lif->nxqs;
+ 		break;
+ 	default:
+-		netdev_err(netdev, "Command parameter %d is not supported\n",
++		netdev_dbg(netdev, "Command parameter %d is not supported\n",
+ 			   info->cmd);
+ 		err = -EOPNOTSUPP;
+ 	}
+diff --git a/drivers/net/ethernet/sfc/mcdi_port_common.c b/drivers/net/ethernet/sfc/mcdi_port_common.c
+index c4fe3c48ac46a..eccb97a5d9387 100644
+--- a/drivers/net/ethernet/sfc/mcdi_port_common.c
++++ b/drivers/net/ethernet/sfc/mcdi_port_common.c
+@@ -974,12 +974,15 @@ static u32 efx_mcdi_phy_module_type(struct efx_nic *efx)
+ 
+ 	/* A QSFP+ NIC may actually have an SFP+ module attached.
+ 	 * The ID is page 0, byte 0.
++	 * QSFP28 is of type SFF_8636, however, this is treated
++	 * the same by ethtool, so we can also treat them the same.
+ 	 */
+ 	switch (efx_mcdi_phy_get_module_eeprom_byte(efx, 0, 0)) {
+-	case 0x3:
++	case 0x3: /* SFP */
+ 		return MC_CMD_MEDIA_SFP_PLUS;
+-	case 0xc:
+-	case 0xd:
++	case 0xc: /* QSFP */
++	case 0xd: /* QSFP+ */
++	case 0x11: /* QSFP28 */
+ 		return MC_CMD_MEDIA_QSFP_PLUS;
+ 	default:
+ 		return 0;
+@@ -1077,7 +1080,7 @@ int efx_mcdi_phy_get_module_info(struct efx_nic *efx, struct ethtool_modinfo *mo
+ 
+ 	case MC_CMD_MEDIA_QSFP_PLUS:
+ 		modinfo->type = ETH_MODULE_SFF_8436;
+-		modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN;
++		modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN;
+ 		break;
+ 
+ 	default:
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+index e7fbc9b30bf96..d0d47d91b460c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+@@ -1194,9 +1194,6 @@ static int phy_power_on(struct rk_priv_data *bsp_priv, bool enable)
+ 	int ret;
+ 	struct device *dev = &bsp_priv->pdev->dev;
+ 
+-	if (!ldo)
+-		return 0;
+-
+ 	if (enable) {
+ 		ret = regulator_enable(ldo);
+ 		if (ret)
+@@ -1227,14 +1224,11 @@ static struct rk_priv_data *rk_gmac_setup(struct platform_device *pdev,
+ 	of_get_phy_mode(dev->of_node, &bsp_priv->phy_iface);
+ 	bsp_priv->ops = ops;
+ 
+-	bsp_priv->regulator = devm_regulator_get_optional(dev, "phy");
++	bsp_priv->regulator = devm_regulator_get(dev, "phy");
+ 	if (IS_ERR(bsp_priv->regulator)) {
+-		if (PTR_ERR(bsp_priv->regulator) == -EPROBE_DEFER) {
+-			dev_err(dev, "phy regulator is not available yet, deferred probing\n");
+-			return ERR_PTR(-EPROBE_DEFER);
+-		}
+-		dev_err(dev, "no regulator found\n");
+-		bsp_priv->regulator = NULL;
++		ret = PTR_ERR(bsp_priv->regulator);
++		dev_err_probe(dev, ret, "failed to get phy regulator\n");
++		return ERR_PTR(ret);
+ 	}
+ 
+ 	ret = of_property_read_string(dev->of_node, "clock_in_out", &strings);
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 47c9118cc92a3..119a32f34b539 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -2747,6 +2747,27 @@ static void free_receive_page_frags(struct virtnet_info *vi)
+ 			put_page(vi->rq[i].alloc_frag.page);
+ }
+ 
++static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
++{
++	if (!is_xdp_frame(buf))
++		dev_kfree_skb(buf);
++	else
++		xdp_return_frame(ptr_to_xdp(buf));
++}
++
++static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
++{
++	struct virtnet_info *vi = vq->vdev->priv;
++	int i = vq2rxq(vq);
++
++	if (vi->mergeable_rx_bufs)
++		put_page(virt_to_head_page(buf));
++	else if (vi->big_packets)
++		give_pages(&vi->rq[i], buf);
++	else
++		put_page(virt_to_head_page(buf));
++}
++
+ static void free_unused_bufs(struct virtnet_info *vi)
+ {
+ 	void *buf;
+@@ -2754,26 +2775,16 @@ static void free_unused_bufs(struct virtnet_info *vi)
+ 
+ 	for (i = 0; i < vi->max_queue_pairs; i++) {
+ 		struct virtqueue *vq = vi->sq[i].vq;
+-		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
+-			if (!is_xdp_frame(buf))
+-				dev_kfree_skb(buf);
+-			else
+-				xdp_return_frame(ptr_to_xdp(buf));
+-		}
++		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
++			virtnet_sq_free_unused_buf(vq, buf);
++		cond_resched();
+ 	}
+ 
+ 	for (i = 0; i < vi->max_queue_pairs; i++) {
+ 		struct virtqueue *vq = vi->rq[i].vq;
+-
+-		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
+-			if (vi->mergeable_rx_bufs) {
+-				put_page(virt_to_head_page(buf));
+-			} else if (vi->big_packets) {
+-				give_pages(&vi->rq[i], buf);
+-			} else {
+-				put_page(virt_to_head_page(buf));
+-			}
+-		}
++		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
++			virtnet_rq_free_unused_buf(vq, buf);
++		cond_resched();
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireguard/timers.c b/drivers/net/wireguard/timers.c
+index d54d32ac9bc41..91f5d6d2d4e2d 100644
+--- a/drivers/net/wireguard/timers.c
++++ b/drivers/net/wireguard/timers.c
+@@ -46,7 +46,7 @@ static void wg_expired_retransmit_handshake(struct timer_list *timer)
+ 	if (peer->timer_handshake_attempts > MAX_TIMER_HANDSHAKES) {
+ 		pr_debug("%s: Handshake for peer %llu (%pISpfsc) did not complete after %d attempts, giving up\n",
+ 			 peer->device->dev->name, peer->internal_id,
+-			 &peer->endpoint.addr, MAX_TIMER_HANDSHAKES + 2);
++			 &peer->endpoint.addr, (int)MAX_TIMER_HANDSHAKES + 2);
+ 
+ 		del_timer(&peer->timer_send_keepalive);
+ 		/* We drop all packets without a keypair and don't try again,
+@@ -64,7 +64,7 @@ static void wg_expired_retransmit_handshake(struct timer_list *timer)
+ 		++peer->timer_handshake_attempts;
+ 		pr_debug("%s: Handshake for peer %llu (%pISpfsc) did not complete after %d seconds, retrying (try %d)\n",
+ 			 peer->device->dev->name, peer->internal_id,
+-			 &peer->endpoint.addr, REKEY_TIMEOUT,
++			 &peer->endpoint.addr, (int)REKEY_TIMEOUT,
+ 			 peer->timer_handshake_attempts + 1);
+ 
+ 		/* We clear the endpoint address src address, in case this is
+@@ -94,7 +94,7 @@ static void wg_expired_new_handshake(struct timer_list *timer)
+ 
+ 	pr_debug("%s: Retrying handshake with peer %llu (%pISpfsc) because we stopped hearing back after %d seconds\n",
+ 		 peer->device->dev->name, peer->internal_id,
+-		 &peer->endpoint.addr, KEEPALIVE_TIMEOUT + REKEY_TIMEOUT);
++		 &peer->endpoint.addr, (int)(KEEPALIVE_TIMEOUT + REKEY_TIMEOUT));
+ 	/* We clear the endpoint address src address, in case this is the cause
+ 	 * of trouble.
+ 	 */
+@@ -126,7 +126,7 @@ static void wg_queued_expired_zero_key_material(struct work_struct *work)
+ 
+ 	pr_debug("%s: Zeroing out all keys for peer %llu (%pISpfsc), since we haven't received a new one in %d seconds\n",
+ 		 peer->device->dev->name, peer->internal_id,
+-		 &peer->endpoint.addr, REJECT_AFTER_TIME * 3);
++		 &peer->endpoint.addr, (int)REJECT_AFTER_TIME * 3);
+ 	wg_noise_handshake_clear(&peer->handshake);
+ 	wg_noise_keypairs_clear(&peer->keypairs);
+ 	wg_peer_put(peer);
+diff --git a/drivers/net/wireless/ath/ath5k/eeprom.c b/drivers/net/wireless/ath/ath5k/eeprom.c
+index d444b3d70ba2e..58d3e86f6256d 100644
+--- a/drivers/net/wireless/ath/ath5k/eeprom.c
++++ b/drivers/net/wireless/ath/ath5k/eeprom.c
+@@ -529,7 +529,7 @@ ath5k_eeprom_read_freq_list(struct ath5k_hw *ah, int *offset, int max,
+ 		ee->ee_n_piers[mode]++;
+ 
+ 		freq2 = (val >> 8) & 0xff;
+-		if (!freq2)
++		if (!freq2 || i >= max)
+ 			break;
+ 
+ 		pc[i++].freq = ath5k_eeprom_bin2freq(ee,
+diff --git a/drivers/net/wireless/ath/ath6kl/bmi.c b/drivers/net/wireless/ath/ath6kl/bmi.c
+index bde5a10d470c8..af98e871199d3 100644
+--- a/drivers/net/wireless/ath/ath6kl/bmi.c
++++ b/drivers/net/wireless/ath/ath6kl/bmi.c
+@@ -246,7 +246,7 @@ int ath6kl_bmi_execute(struct ath6kl *ar, u32 addr, u32 *param)
+ 		return -EACCES;
+ 	}
+ 
+-	size = sizeof(cid) + sizeof(addr) + sizeof(param);
++	size = sizeof(cid) + sizeof(addr) + sizeof(*param);
+ 	if (size > ar->bmi.max_cmd_size) {
+ 		WARN_ON(1);
+ 		return -EINVAL;
+diff --git a/drivers/net/wireless/ath/ath6kl/htc_pipe.c b/drivers/net/wireless/ath/ath6kl/htc_pipe.c
+index c68848819a52d..9b88d96bfe96c 100644
+--- a/drivers/net/wireless/ath/ath6kl/htc_pipe.c
++++ b/drivers/net/wireless/ath/ath6kl/htc_pipe.c
+@@ -960,8 +960,8 @@ static int ath6kl_htc_pipe_rx_complete(struct ath6kl *ar, struct sk_buff *skb,
+ 	 * Thus the possibility of ar->htc_target being NULL
+ 	 * via ath6kl_recv_complete -> ath6kl_usb_io_comp_work.
+ 	 */
+-	if (WARN_ON_ONCE(!target)) {
+-		ath6kl_err("Target not yet initialized\n");
++	if (!target) {
++		ath6kl_dbg(ATH6KL_DBG_HTC, "Target not yet initialized\n");
+ 		status = -EINVAL;
+ 		goto free_skb;
+ 	}
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index f521dfa2f1945..e0130beb304df 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -534,6 +534,24 @@ static struct ath9k_htc_hif hif_usb = {
+ 	.send = hif_usb_send,
+ };
+ 
++/* Need to free remain_skb allocated in ath9k_hif_usb_rx_stream
++ * in case ath9k_hif_usb_rx_stream wasn't called next time to
++ * process the buffer and subsequently free it.
++ */
++static void ath9k_hif_usb_free_rx_remain_skb(struct hif_device_usb *hif_dev)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&hif_dev->rx_lock, flags);
++	if (hif_dev->remain_skb) {
++		dev_kfree_skb_any(hif_dev->remain_skb);
++		hif_dev->remain_skb = NULL;
++		hif_dev->rx_remain_len = 0;
++		RX_STAT_INC(hif_dev, skb_dropped);
++	}
++	spin_unlock_irqrestore(&hif_dev->rx_lock, flags);
++}
++
+ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ 				    struct sk_buff *skb)
+ {
+@@ -868,6 +886,7 @@ err:
+ static void ath9k_hif_usb_dealloc_rx_urbs(struct hif_device_usb *hif_dev)
+ {
+ 	usb_kill_anchored_urbs(&hif_dev->rx_submitted);
++	ath9k_hif_usb_free_rx_remain_skb(hif_dev);
+ }
+ 
+ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 0a069bc7f1567..df59706197124 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -5834,6 +5834,11 @@ static s32 brcmf_get_assoc_ies(struct brcmf_cfg80211_info *cfg,
+ 		(struct brcmf_cfg80211_assoc_ielen_le *)cfg->extra_buf;
+ 	req_len = le32_to_cpu(assoc_info->req_len);
+ 	resp_len = le32_to_cpu(assoc_info->resp_len);
++	if (req_len > WL_EXTRA_BUF_MAX || resp_len > WL_EXTRA_BUF_MAX) {
++		bphy_err(drvr, "invalid lengths in assoc info: req %u resp %u\n",
++			 req_len, resp_len);
++		return -EINVAL;
++	}
+ 	if (req_len) {
+ 		err = brcmf_fil_iovar_data_get(ifp, "assoc_req_ies",
+ 					       cfg->extra_buf,
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index 419eaa5cf0b50..79d08e5d9a81c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -1372,13 +1372,13 @@ static void iwl_ini_get_rxf_data(struct iwl_fw_runtime *fwrt,
+ 	if (!data)
+ 		return;
+ 
++	memset(data, 0, sizeof(*data));
++
+ 	/* make sure only one bit is set in only one fid */
+ 	if (WARN_ONCE(hweight_long(fid1) + hweight_long(fid2) != 1,
+ 		      "fid1=%x, fid2=%x\n", fid1, fid2))
+ 		return;
+ 
+-	memset(data, 0, sizeof(*data));
+-
+ 	if (fid1) {
+ 		fifo_idx = ffs(fid1) - 1;
+ 		if (WARN_ONCE(fifo_idx >= MAX_NUM_LMAC, "fifo_idx=%d\n",
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c b/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
+index 267ad4eddb5c0..24d6ed3513ce5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
+@@ -344,8 +344,10 @@ static void *iwl_dbgfs_fw_info_seq_next(struct seq_file *seq,
+ 	const struct iwl_fw *fw = priv->fwrt->fw;
+ 
+ 	*pos = ++state->pos;
+-	if (*pos >= fw->ucode_capa.n_cmd_versions)
++	if (*pos >= fw->ucode_capa.n_cmd_versions) {
++		kfree(state);
+ 		return NULL;
++	}
+ 
+ 	return state;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index 3c931b1b2a0bb..fdf2c6ea41d96 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -191,6 +191,12 @@ static int iwl_dbg_tlv_alloc_buf_alloc(struct iwl_trans *trans,
+ 	    alloc_id != IWL_FW_INI_ALLOCATION_ID_INTERNAL)
+ 		goto err;
+ 
++	if (buf_location == IWL_FW_INI_LOCATION_DRAM_PATH &&
++	    alloc->req_size == 0) {
++		IWL_ERR(trans, "WRT: Invalid DRAM buffer allocation requested size (0)\n");
++		return -EINVAL;
++	}
++
+ 	trans->dbg.fw_mon_cfg[alloc_id] = *alloc;
+ 
+ 	return 0;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+index 3395c46759883..36f75d5946304 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+@@ -1885,6 +1885,11 @@ static ssize_t iwl_dbgfs_mem_read(struct file *file, char __user *user_buf,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	if (iwl_rx_packet_payload_len(hcmd.resp_pkt) < sizeof(*rsp)) {
++		ret = -EIO;
++		goto out;
++	}
++
+ 	rsp = (void *)hcmd.resp_pkt->data;
+ 	if (le32_to_cpu(rsp->status) != DEBUG_MEM_STATUS_SUCCESS) {
+ 		ret = -ENXIO;
+@@ -1962,6 +1967,11 @@ static ssize_t iwl_dbgfs_mem_write(struct file *file,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	if (iwl_rx_packet_payload_len(hcmd.resp_pkt) < sizeof(*rsp)) {
++		ret = -EIO;
++		goto out;
++	}
++
+ 	rsp = (void *)hcmd.resp_pkt->data;
+ 	if (rsp->status != DEBUG_MEM_STATUS_SUCCESS) {
+ 		ret = -ENXIO;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index daec61a60fec5..7f8b7f7697cfd 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -620,7 +620,6 @@ static int iwl_pcie_set_hw_ready(struct iwl_trans *trans)
+ int iwl_pcie_prepare_card_hw(struct iwl_trans *trans)
+ {
+ 	int ret;
+-	int t = 0;
+ 	int iter;
+ 
+ 	IWL_DEBUG_INFO(trans, "iwl_trans_prepare_card_hw enter\n");
+@@ -635,6 +634,8 @@ int iwl_pcie_prepare_card_hw(struct iwl_trans *trans)
+ 	usleep_range(1000, 2000);
+ 
+ 	for (iter = 0; iter < 10; iter++) {
++		int t = 0;
++
+ 		/* If HW is not ready, prepare the conditions to check again */
+ 		iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG,
+ 			    CSR_HW_IF_CONFIG_REG_PREPARE);
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+index 3b3cb3a6e2e89..6d13327d690e0 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+@@ -1702,6 +1702,7 @@ struct rtl8xxxu_fileops rtl8192eu_fops = {
+ 	.rx_desc_size = sizeof(struct rtl8xxxu_rxdesc24),
+ 	.has_s0s1 = 0,
+ 	.gen2_thermal_meter = 1,
++	.needs_full_init = 1,
+ 	.adda_1t_init = 0x0fc01616,
+ 	.adda_1t_path_on = 0x0fc01616,
+ 	.adda_2t_path_on_a = 0x0fc01616,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/debug.c b/drivers/net/wireless/realtek/rtlwifi/debug.c
+index 0b1bc04cb6adb..9eb26dfe4ca92 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/debug.c
++++ b/drivers/net/wireless/realtek/rtlwifi/debug.c
+@@ -278,8 +278,8 @@ static ssize_t rtl_debugfs_set_write_reg(struct file *filp,
+ 
+ 	tmp_len = (count > sizeof(tmp) - 1 ? sizeof(tmp) - 1 : count);
+ 
+-	if (!buffer || copy_from_user(tmp, buffer, tmp_len))
+-		return count;
++	if (copy_from_user(tmp, buffer, tmp_len))
++		return -EFAULT;
+ 
+ 	tmp[tmp_len] = '\0';
+ 
+@@ -287,7 +287,7 @@ static ssize_t rtl_debugfs_set_write_reg(struct file *filp,
+ 	num = sscanf(tmp, "%x %x %x", &addr, &val, &len);
+ 
+ 	if (num !=  3)
+-		return count;
++		return -EINVAL;
+ 
+ 	switch (len) {
+ 	case 1:
+@@ -375,8 +375,8 @@ static ssize_t rtl_debugfs_set_write_rfreg(struct file *filp,
+ 
+ 	tmp_len = (count > sizeof(tmp) - 1 ? sizeof(tmp) - 1 : count);
+ 
+-	if (!buffer || copy_from_user(tmp, buffer, tmp_len))
+-		return count;
++	if (copy_from_user(tmp, buffer, tmp_len))
++		return -EFAULT;
+ 
+ 	tmp[tmp_len] = '\0';
+ 
+@@ -386,7 +386,7 @@ static ssize_t rtl_debugfs_set_write_rfreg(struct file *filp,
+ 	if (num != 4) {
+ 		rtl_dbg(rtlpriv, COMP_ERR, DBG_DMESG,
+ 			"Format is <path> <addr> <mask> <data>\n");
+-		return count;
++		return -EINVAL;
+ 	}
+ 
+ 	rtl_set_rfreg(hw, path, addr, bitmask, data);
+diff --git a/drivers/net/wireless/realtek/rtw88/mac.c b/drivers/net/wireless/realtek/rtw88/mac.c
+index 59028b121b00e..a6d554a9dd995 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac.c
++++ b/drivers/net/wireless/realtek/rtw88/mac.c
+@@ -233,7 +233,7 @@ static int rtw_pwr_seq_parser(struct rtw_dev *rtwdev,
+ 
+ 		ret = rtw_sub_pwr_seq_parser(rtwdev, intf_mask, cut_mask, cmd);
+ 		if (ret)
+-			return -EBUSY;
++			return ret;
+ 
+ 		idx++;
+ 	} while (1);
+@@ -247,6 +247,7 @@ static int rtw_mac_power_switch(struct rtw_dev *rtwdev, bool pwr_on)
+ 	const struct rtw_pwr_seq_cmd **pwr_seq;
+ 	u8 rpwm;
+ 	bool cur_pwr;
++	int ret;
+ 
+ 	if (rtw_chip_wcpu_11ac(rtwdev)) {
+ 		rpwm = rtw_read8(rtwdev, rtwdev->hci.rpwm_addr);
+@@ -270,8 +271,9 @@ static int rtw_mac_power_switch(struct rtw_dev *rtwdev, bool pwr_on)
+ 		return -EALREADY;
+ 
+ 	pwr_seq = pwr_on ? chip->pwr_on_seq : chip->pwr_off_seq;
+-	if (rtw_pwr_seq_parser(rtwdev, pwr_seq))
+-		return -EINVAL;
++	ret = rtw_pwr_seq_parser(rtwdev, pwr_seq);
++	if (ret)
++		return ret;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index a4b6aa932a8fe..07c41a149328a 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4416,11 +4416,19 @@ static void nvme_fw_act_work(struct work_struct *work)
+ 	nvme_get_fw_slot_info(ctrl);
+ }
+ 
+-static void nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result)
++static u32 nvme_aer_type(u32 result)
+ {
+-	u32 aer_notice_type = (result & 0xff00) >> 8;
++	return result & 0x7;
++}
+ 
+-	trace_nvme_async_event(ctrl, aer_notice_type);
++static u32 nvme_aer_subtype(u32 result)
++{
++	return (result & 0xff00) >> 8;
++}
++
++static void nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result)
++{
++	u32 aer_notice_type = nvme_aer_subtype(result);
+ 
+ 	switch (aer_notice_type) {
+ 	case NVME_AER_NOTICE_NS_CHANGED:
+@@ -4451,24 +4459,40 @@ static void nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result)
+ 	}
+ }
+ 
++static void nvme_handle_aer_persistent_error(struct nvme_ctrl *ctrl)
++{
++	dev_warn(ctrl->device, "resetting controller due to AER\n");
++	nvme_reset_ctrl(ctrl);
++}
++
+ void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status,
+ 		volatile union nvme_result *res)
+ {
+ 	u32 result = le32_to_cpu(res->u32);
+-	u32 aer_type = result & 0x07;
++	u32 aer_type = nvme_aer_type(result);
++	u32 aer_subtype = nvme_aer_subtype(result);
+ 
+ 	if (le16_to_cpu(status) >> 1 != NVME_SC_SUCCESS)
+ 		return;
+ 
++	trace_nvme_async_event(ctrl, result);
+ 	switch (aer_type) {
+ 	case NVME_AER_NOTICE:
+ 		nvme_handle_aen_notice(ctrl, result);
+ 		break;
+ 	case NVME_AER_ERROR:
++		/*
++		 * For a persistent internal error, don't run async_event_work
++		 * to submit a new AER. The controller reset will do it.
++		 */
++		if (aer_subtype == NVME_AER_ERROR_PERSIST_INT_ERR) {
++			nvme_handle_aer_persistent_error(ctrl);
++			return;
++		}
++		fallthrough;
+ 	case NVME_AER_SMART:
+ 	case NVME_AER_CSS:
+ 	case NVME_AER_VS:
+-		trace_nvme_async_event(ctrl, aer_type);
+ 		ctrl->aen_result = result;
+ 		break;
+ 	default:
+diff --git a/drivers/nvme/host/trace.h b/drivers/nvme/host/trace.h
+index aa8b0f86b2be1..b258f7b8788e1 100644
+--- a/drivers/nvme/host/trace.h
++++ b/drivers/nvme/host/trace.h
+@@ -127,15 +127,12 @@ TRACE_EVENT(nvme_async_event,
+ 	),
+ 	TP_printk("nvme%d: NVME_AEN=%#08x [%s]",
+ 		__entry->ctrl_id, __entry->result,
+-		__print_symbolic(__entry->result,
+-		aer_name(NVME_AER_NOTICE_NS_CHANGED),
+-		aer_name(NVME_AER_NOTICE_ANA),
+-		aer_name(NVME_AER_NOTICE_FW_ACT_STARTING),
+-		aer_name(NVME_AER_NOTICE_DISC_CHANGED),
+-		aer_name(NVME_AER_ERROR),
+-		aer_name(NVME_AER_SMART),
+-		aer_name(NVME_AER_CSS),
+-		aer_name(NVME_AER_VS))
++		__print_symbolic(__entry->result & 0x7,
++			aer_name(NVME_AER_ERROR),
++			aer_name(NVME_AER_SMART),
++			aer_name(NVME_AER_NOTICE),
++			aer_name(NVME_AER_CSS),
++			aer_name(NVME_AER_VS))
+ 	)
+ );
+ 
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index 3da067a8311e5..80a208fb34f52 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -570,10 +570,11 @@ fcloop_fcp_recv_work(struct work_struct *work)
+ 	struct fcloop_fcpreq *tfcp_req =
+ 		container_of(work, struct fcloop_fcpreq, fcp_rcv_work);
+ 	struct nvmefc_fcp_req *fcpreq = tfcp_req->fcpreq;
++	unsigned long flags;
+ 	int ret = 0;
+ 	bool aborted = false;
+ 
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	switch (tfcp_req->inistate) {
+ 	case INI_IO_START:
+ 		tfcp_req->inistate = INI_IO_ACTIVE;
+@@ -582,11 +583,11 @@ fcloop_fcp_recv_work(struct work_struct *work)
+ 		aborted = true;
+ 		break;
+ 	default:
+-		spin_unlock_irq(&tfcp_req->reqlock);
++		spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 		WARN_ON(1);
+ 		return;
+ 	}
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	if (unlikely(aborted))
+ 		ret = -ECANCELED;
+@@ -607,8 +608,9 @@ fcloop_fcp_abort_recv_work(struct work_struct *work)
+ 		container_of(work, struct fcloop_fcpreq, abort_rcv_work);
+ 	struct nvmefc_fcp_req *fcpreq;
+ 	bool completed = false;
++	unsigned long flags;
+ 
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	fcpreq = tfcp_req->fcpreq;
+ 	switch (tfcp_req->inistate) {
+ 	case INI_IO_ABORTED:
+@@ -617,11 +619,11 @@ fcloop_fcp_abort_recv_work(struct work_struct *work)
+ 		completed = true;
+ 		break;
+ 	default:
+-		spin_unlock_irq(&tfcp_req->reqlock);
++		spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 		WARN_ON(1);
+ 		return;
+ 	}
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	if (unlikely(completed)) {
+ 		/* remove reference taken in original abort downcall */
+@@ -633,9 +635,9 @@ fcloop_fcp_abort_recv_work(struct work_struct *work)
+ 		nvmet_fc_rcv_fcp_abort(tfcp_req->tport->targetport,
+ 					&tfcp_req->tgt_fcp_req);
+ 
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	tfcp_req->fcpreq = NULL;
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	fcloop_call_host_done(fcpreq, tfcp_req, -ECANCELED);
+ 	/* call_host_done releases reference for abort downcall */
+@@ -651,11 +653,12 @@ fcloop_tgt_fcprqst_done_work(struct work_struct *work)
+ 	struct fcloop_fcpreq *tfcp_req =
+ 		container_of(work, struct fcloop_fcpreq, tio_done_work);
+ 	struct nvmefc_fcp_req *fcpreq;
++	unsigned long flags;
+ 
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	fcpreq = tfcp_req->fcpreq;
+ 	tfcp_req->inistate = INI_IO_COMPLETED;
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	fcloop_call_host_done(fcpreq, tfcp_req, tfcp_req->status);
+ }
+@@ -759,13 +762,14 @@ fcloop_fcp_op(struct nvmet_fc_target_port *tgtport,
+ 	u32 rsplen = 0, xfrlen = 0;
+ 	int fcp_err = 0, active, aborted;
+ 	u8 op = tgt_fcpreq->op;
++	unsigned long flags;
+ 
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	fcpreq = tfcp_req->fcpreq;
+ 	active = tfcp_req->active;
+ 	aborted = tfcp_req->aborted;
+ 	tfcp_req->active = true;
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	if (unlikely(active))
+ 		/* illegal - call while i/o active */
+@@ -773,9 +777,9 @@ fcloop_fcp_op(struct nvmet_fc_target_port *tgtport,
+ 
+ 	if (unlikely(aborted)) {
+ 		/* target transport has aborted i/o prior */
+-		spin_lock_irq(&tfcp_req->reqlock);
++		spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 		tfcp_req->active = false;
+-		spin_unlock_irq(&tfcp_req->reqlock);
++		spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 		tgt_fcpreq->transferred_length = 0;
+ 		tgt_fcpreq->fcp_error = -ECANCELED;
+ 		tgt_fcpreq->done(tgt_fcpreq);
+@@ -832,9 +836,9 @@ fcloop_fcp_op(struct nvmet_fc_target_port *tgtport,
+ 		break;
+ 	}
+ 
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	tfcp_req->active = false;
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	tgt_fcpreq->transferred_length = xfrlen;
+ 	tgt_fcpreq->fcp_error = fcp_err;
+@@ -848,15 +852,16 @@ fcloop_tgt_fcp_abort(struct nvmet_fc_target_port *tgtport,
+ 			struct nvmefc_tgt_fcp_req *tgt_fcpreq)
+ {
+ 	struct fcloop_fcpreq *tfcp_req = tgt_fcp_req_to_fcpreq(tgt_fcpreq);
++	unsigned long flags;
+ 
+ 	/*
+ 	 * mark aborted only in case there were 2 threads in transport
+ 	 * (one doing io, other doing abort) and only kills ops posted
+ 	 * after the abort request
+ 	 */
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	tfcp_req->aborted = true;
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	tfcp_req->status = NVME_SC_INTERNAL;
+ 
+@@ -898,6 +903,7 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
+ 	struct fcloop_ini_fcpreq *inireq = fcpreq->private;
+ 	struct fcloop_fcpreq *tfcp_req;
+ 	bool abortio = true;
++	unsigned long flags;
+ 
+ 	spin_lock(&inireq->inilock);
+ 	tfcp_req = inireq->tfcp_req;
+@@ -910,7 +916,7 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
+ 		return;
+ 
+ 	/* break initiator/target relationship for io */
+-	spin_lock_irq(&tfcp_req->reqlock);
++	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+ 	switch (tfcp_req->inistate) {
+ 	case INI_IO_START:
+ 	case INI_IO_ACTIVE:
+@@ -920,11 +926,11 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
+ 		abortio = false;
+ 		break;
+ 	default:
+-		spin_unlock_irq(&tfcp_req->reqlock);
++		spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 		WARN_ON(1);
+ 		return;
+ 	}
+-	spin_unlock_irq(&tfcp_req->reqlock);
++	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+ 	if (abortio)
+ 		/* leave the reference while the work item is scheduled */
+diff --git a/drivers/of/device.c b/drivers/of/device.c
+index 1122daa8e2736..3a547793135c3 100644
+--- a/drivers/of/device.c
++++ b/drivers/of/device.c
+@@ -264,12 +264,15 @@ int of_device_request_module(struct device *dev)
+ 	if (size < 0)
+ 		return size;
+ 
+-	str = kmalloc(size + 1, GFP_KERNEL);
++	/* Reserve an additional byte for the trailing '\0' */
++	size++;
++
++	str = kmalloc(size, GFP_KERNEL);
+ 	if (!str)
+ 		return -ENOMEM;
+ 
+ 	of_device_get_modalias(dev, str, size);
+-	str[size] = '\0';
++	str[size - 1] = '\0';
+ 	ret = request_module(str);
+ 	kfree(str);
+ 
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index ceb4815379cd4..8117f2dad86c4 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -1272,6 +1272,13 @@ DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_SYNOPSYS, 0xabcd,
+ static int __init imx6_pcie_init(void)
+ {
+ #ifdef CONFIG_ARM
++	struct device_node *np;
++
++	np = of_find_matching_node(NULL, imx6_pcie_of_match);
++	if (!np)
++		return -ENODEV;
++	of_node_put(np);
++
+ 	/*
+ 	 * Since probe() can be deferred we need to make sure that
+ 	 * hook_fault_code is not called after __init memory is freed
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 5fbd80908a99a..c68e14271c02c 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1210,11 +1210,9 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
+ 	val |= BIT(4);
+ 	writel(val, pcie->parf + PCIE20_PARF_MHI_CLOCK_RESET_CTRL);
+ 
+-	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+-		val = readl(pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT);
+-		val |= BIT(31);
+-		writel(val, pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT);
+-	}
++	val = readl(pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2);
++	val |= BIT(31);
++	writel(val, pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2);
+ 
+ 	return 0;
+ err_disable_clocks:
+diff --git a/drivers/pci/hotplug/pciehp_pci.c b/drivers/pci/hotplug/pciehp_pci.c
+index d17f3bf36f709..ad12515a4a121 100644
+--- a/drivers/pci/hotplug/pciehp_pci.c
++++ b/drivers/pci/hotplug/pciehp_pci.c
+@@ -63,7 +63,14 @@ int pciehp_configure_device(struct controller *ctrl)
+ 
+ 	pci_assign_unassigned_bridge_resources(bridge);
+ 	pcie_bus_configure_settings(parent);
++
++	/*
++	 * Release reset_lock during driver binding
++	 * to avoid AB-BA deadlock with device_lock.
++	 */
++	up_read(&ctrl->reset_lock);
+ 	pci_bus_add_devices(parent);
++	down_read_nested(&ctrl->reset_lock, ctrl->depth);
+ 
+  out:
+ 	pci_unlock_rescan_remove();
+@@ -104,7 +111,15 @@ void pciehp_unconfigure_device(struct controller *ctrl, bool presence)
+ 	list_for_each_entry_safe_reverse(dev, temp, &parent->devices,
+ 					 bus_list) {
+ 		pci_dev_get(dev);
++
++		/*
++		 * Release reset_lock during driver unbinding
++		 * to avoid AB-BA deadlock with device_lock.
++		 */
++		up_read(&ctrl->reset_lock);
+ 		pci_stop_and_remove_bus_device(dev);
++		down_read_nested(&ctrl->reset_lock, ctrl->depth);
++
+ 		/*
+ 		 * Ensure that no new Requests will be generated from
+ 		 * the device.
+diff --git a/drivers/pci/pcie/edr.c b/drivers/pci/pcie/edr.c
+index a6b9b479b97ad..87734e4c3c204 100644
+--- a/drivers/pci/pcie/edr.c
++++ b/drivers/pci/pcie/edr.c
+@@ -193,6 +193,7 @@ send_ost:
+ 	 */
+ 	if (estate == PCI_ERS_RESULT_RECOVERED) {
+ 		pci_dbg(edev, "DPC port successfully recovered\n");
++		pcie_clear_device_status(edev);
+ 		acpi_send_edr_status(pdev, edev, EDR_OST_SUCCESS);
+ 	} else {
+ 		pci_dbg(edev, "DPC port recovery failed\n");
+diff --git a/drivers/phy/tegra/xusb.c b/drivers/phy/tegra/xusb.c
+index 181a1be5f4917..d07f33ec79397 100644
+--- a/drivers/phy/tegra/xusb.c
++++ b/drivers/phy/tegra/xusb.c
+@@ -775,6 +775,7 @@ static int tegra_xusb_add_usb2_port(struct tegra_xusb_padctl *padctl,
+ 	usb2->base.lane = usb2->base.ops->map(&usb2->base);
+ 	if (IS_ERR(usb2->base.lane)) {
+ 		err = PTR_ERR(usb2->base.lane);
++		tegra_xusb_port_unregister(&usb2->base);
+ 		goto out;
+ 	}
+ 
+@@ -841,6 +842,7 @@ static int tegra_xusb_add_ulpi_port(struct tegra_xusb_padctl *padctl,
+ 	ulpi->base.lane = ulpi->base.ops->map(&ulpi->base);
+ 	if (IS_ERR(ulpi->base.lane)) {
+ 		err = PTR_ERR(ulpi->base.lane);
++		tegra_xusb_port_unregister(&ulpi->base);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index b96fbc8dba09d..55a18cd0c298f 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -327,6 +327,22 @@ static const struct ts_dmi_data dexp_ursus_7w_data = {
+ 	.properties	= dexp_ursus_7w_props,
+ };
+ 
++static const struct property_entry dexp_ursus_kx210i_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 5),
++	PROPERTY_ENTRY_U32("touchscreen-min-y",  2),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1720),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1137),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-dexp-ursus-kx210i.fw"),
++	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
++	PROPERTY_ENTRY_BOOL("silead,home-button"),
++	{ }
++};
++
++static const struct ts_dmi_data dexp_ursus_kx210i_data = {
++	.acpi_name	= "MSSL1680:00",
++	.properties	= dexp_ursus_kx210i_props,
++};
++
+ static const struct property_entry digma_citi_e200_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-size-x", 1980),
+ 	PROPERTY_ENTRY_U32("touchscreen-size-y", 1500),
+@@ -381,6 +397,11 @@ static const struct ts_dmi_data glavey_tm800a550l_data = {
+ 	.properties	= glavey_tm800a550l_props,
+ };
+ 
++static const struct ts_dmi_data gdix1002_00_upside_down_data = {
++	.acpi_name	= "GDIX1002:00",
++	.properties	= gdix1001_upside_down_props,
++};
++
+ static const struct property_entry gp_electronic_t701_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-size-x", 960),
+ 	PROPERTY_ENTRY_U32("touchscreen-size-y", 640),
+@@ -1118,6 +1139,14 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "7W"),
+ 		},
+ 	},
++	{
++		/* DEXP Ursus KX210i */
++		.driver_data = (void *)&dexp_ursus_kx210i_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "INSYDE Corp."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "S107I"),
++		},
++	},
+ 	{
+ 		/* Digma Citi E200 */
+ 		.driver_data = (void *)&digma_citi_e200_data,
+@@ -1227,6 +1256,18 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BIOS_VERSION, "jumperx.T87.KFBNEEA"),
+ 		},
+ 	},
++	{
++		/* Juno Tablet */
++		.driver_data = (void *)&gdix1002_00_upside_down_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Default string"),
++			/* Both product- and board-name being "Default string" is somewhat rare */
++			DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
++			DMI_MATCH(DMI_BOARD_NAME, "Default string"),
++			/* Above matches are too generic, add partial bios-version match */
++			DMI_MATCH(DMI_BIOS_VERSION, "JP2V1."),
++		},
++	},
+ 	{
+ 		/* Mediacom WinPad 7.0 W700 (same hw as Wintron surftab 7") */
+ 		.driver_data = (void *)&trekstor_surftab_wintron70_data,
+diff --git a/drivers/power/supply/generic-adc-battery.c b/drivers/power/supply/generic-adc-battery.c
+index 58f09314741a7..09bb4ff291cb0 100644
+--- a/drivers/power/supply/generic-adc-battery.c
++++ b/drivers/power/supply/generic-adc-battery.c
+@@ -138,6 +138,9 @@ static int read_channel(struct gab *adc_bat, enum power_supply_property psp,
+ 			result);
+ 	if (ret < 0)
+ 		pr_err("read channel error\n");
++	else
++		*result *= 1000;
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/pwm/pwm-meson.c b/drivers/pwm/pwm-meson.c
+index 237bb8e065933..0283163ddbe8e 100644
+--- a/drivers/pwm/pwm-meson.c
++++ b/drivers/pwm/pwm-meson.c
+@@ -424,7 +424,7 @@ static const struct meson_pwm_data pwm_axg_ee_data = {
+ };
+ 
+ static const char * const pwm_axg_ao_parent_names[] = {
+-	"aoclk81", "xtal", "fclk_div4", "fclk_div5"
++	"xtal", "axg_ao_clk81", "fclk_div4", "fclk_div5"
+ };
+ 
+ static const struct meson_pwm_data pwm_axg_ao_data = {
+@@ -433,7 +433,7 @@ static const struct meson_pwm_data pwm_axg_ao_data = {
+ };
+ 
+ static const char * const pwm_g12a_ao_ab_parent_names[] = {
+-	"xtal", "aoclk81", "fclk_div4", "fclk_div5"
++	"xtal", "g12a_ao_clk81", "fclk_div4", "fclk_div5"
+ };
+ 
+ static const struct meson_pwm_data pwm_g12a_ao_ab_data = {
+@@ -442,7 +442,7 @@ static const struct meson_pwm_data pwm_g12a_ao_ab_data = {
+ };
+ 
+ static const char * const pwm_g12a_ao_cd_parent_names[] = {
+-	"xtal", "aoclk81",
++	"xtal", "g12a_ao_clk81",
+ };
+ 
+ static const struct meson_pwm_data pwm_g12a_ao_cd_data = {
+diff --git a/drivers/pwm/pwm-mtk-disp.c b/drivers/pwm/pwm-mtk-disp.c
+index 83b8be0209b74..327c780d433da 100644
+--- a/drivers/pwm/pwm-mtk-disp.c
++++ b/drivers/pwm/pwm-mtk-disp.c
+@@ -74,6 +74,19 @@ static int mtk_disp_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	u64 div, rate;
+ 	int err;
+ 
++	err = clk_prepare_enable(mdp->clk_main);
++	if (err < 0) {
++		dev_err(chip->dev, "Can't enable mdp->clk_main: %pe\n", ERR_PTR(err));
++		return err;
++	}
++
++	err = clk_prepare_enable(mdp->clk_mm);
++	if (err < 0) {
++		dev_err(chip->dev, "Can't enable mdp->clk_mm: %pe\n", ERR_PTR(err));
++		clk_disable_unprepare(mdp->clk_main);
++		return err;
++	}
++
+ 	/*
+ 	 * Find period, high_width and clk_div to suit duty_ns and period_ns.
+ 	 * Calculate proper div value to keep period value in the bound.
+@@ -87,8 +100,11 @@ static int mtk_disp_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	rate = clk_get_rate(mdp->clk_main);
+ 	clk_div = div_u64(rate * period_ns, NSEC_PER_SEC) >>
+ 			  PWM_PERIOD_BIT_WIDTH;
+-	if (clk_div > PWM_CLKDIV_MAX)
++	if (clk_div > PWM_CLKDIV_MAX) {
++		clk_disable_unprepare(mdp->clk_mm);
++		clk_disable_unprepare(mdp->clk_main);
+ 		return -EINVAL;
++	}
+ 
+ 	div = NSEC_PER_SEC * (clk_div + 1);
+ 	period = div64_u64(rate * period_ns, div);
+@@ -98,14 +114,17 @@ static int mtk_disp_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	high_width = div64_u64(rate * duty_ns, div);
+ 	value = period | (high_width << PWM_HIGH_WIDTH_SHIFT);
+ 
+-	err = clk_enable(mdp->clk_main);
+-	if (err < 0)
+-		return err;
+-
+-	err = clk_enable(mdp->clk_mm);
+-	if (err < 0) {
+-		clk_disable(mdp->clk_main);
+-		return err;
++	if (mdp->data->bls_debug && !mdp->data->has_commit) {
++		/*
++		 * For MT2701, disable double buffer before writing register
++		 * and select manual mode and use PWM_PERIOD/PWM_HIGH_WIDTH.
++		 */
++		mtk_disp_pwm_update_bits(mdp, mdp->data->bls_debug,
++					 mdp->data->bls_debug_mask,
++					 mdp->data->bls_debug_mask);
++		mtk_disp_pwm_update_bits(mdp, mdp->data->con0,
++					 mdp->data->con0_sel,
++					 mdp->data->con0_sel);
+ 	}
+ 
+ 	mtk_disp_pwm_update_bits(mdp, mdp->data->con0,
+@@ -124,8 +143,8 @@ static int mtk_disp_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 					 0x0);
+ 	}
+ 
+-	clk_disable(mdp->clk_mm);
+-	clk_disable(mdp->clk_main);
++	clk_disable_unprepare(mdp->clk_mm);
++	clk_disable_unprepare(mdp->clk_main);
+ 
+ 	return 0;
+ }
+@@ -135,13 +154,16 @@ static int mtk_disp_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	struct mtk_disp_pwm *mdp = to_mtk_disp_pwm(chip);
+ 	int err;
+ 
+-	err = clk_enable(mdp->clk_main);
+-	if (err < 0)
++	err = clk_prepare_enable(mdp->clk_main);
++	if (err < 0) {
++		dev_err(chip->dev, "Can't enable mdp->clk_main: %pe\n", ERR_PTR(err));
+ 		return err;
++	}
+ 
+-	err = clk_enable(mdp->clk_mm);
++	err = clk_prepare_enable(mdp->clk_mm);
+ 	if (err < 0) {
+-		clk_disable(mdp->clk_main);
++		dev_err(chip->dev, "Can't enable mdp->clk_mm: %pe\n", ERR_PTR(err));
++		clk_disable_unprepare(mdp->clk_main);
+ 		return err;
+ 	}
+ 
+@@ -158,8 +180,8 @@ static void mtk_disp_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	mtk_disp_pwm_update_bits(mdp, DISP_PWM_EN, mdp->data->enable_mask,
+ 				 0x0);
+ 
+-	clk_disable(mdp->clk_mm);
+-	clk_disable(mdp->clk_main);
++	clk_disable_unprepare(mdp->clk_mm);
++	clk_disable_unprepare(mdp->clk_main);
+ }
+ 
+ static const struct pwm_ops mtk_disp_pwm_ops = {
+@@ -194,14 +216,6 @@ static int mtk_disp_pwm_probe(struct platform_device *pdev)
+ 	if (IS_ERR(mdp->clk_mm))
+ 		return PTR_ERR(mdp->clk_mm);
+ 
+-	ret = clk_prepare(mdp->clk_main);
+-	if (ret < 0)
+-		return ret;
+-
+-	ret = clk_prepare(mdp->clk_mm);
+-	if (ret < 0)
+-		goto disable_clk_main;
+-
+ 	mdp->chip.dev = &pdev->dev;
+ 	mdp->chip.ops = &mtk_disp_pwm_ops;
+ 	mdp->chip.base = -1;
+@@ -209,44 +223,22 @@ static int mtk_disp_pwm_probe(struct platform_device *pdev)
+ 
+ 	ret = pwmchip_add(&mdp->chip);
+ 	if (ret < 0) {
+-		dev_err(&pdev->dev, "pwmchip_add() failed: %d\n", ret);
+-		goto disable_clk_mm;
++		dev_err(&pdev->dev, "pwmchip_add() failed: %pe\n", ERR_PTR(ret));
++		return ret;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, mdp);
+ 
+-	/*
+-	 * For MT2701, disable double buffer before writing register
+-	 * and select manual mode and use PWM_PERIOD/PWM_HIGH_WIDTH.
+-	 */
+-	if (!mdp->data->has_commit) {
+-		mtk_disp_pwm_update_bits(mdp, mdp->data->bls_debug,
+-					 mdp->data->bls_debug_mask,
+-					 mdp->data->bls_debug_mask);
+-		mtk_disp_pwm_update_bits(mdp, mdp->data->con0,
+-					 mdp->data->con0_sel,
+-					 mdp->data->con0_sel);
+-	}
+-
+ 	return 0;
+-
+-disable_clk_mm:
+-	clk_unprepare(mdp->clk_mm);
+-disable_clk_main:
+-	clk_unprepare(mdp->clk_main);
+-	return ret;
+ }
+ 
+ static int mtk_disp_pwm_remove(struct platform_device *pdev)
+ {
+ 	struct mtk_disp_pwm *mdp = platform_get_drvdata(pdev);
+-	int ret;
+ 
+-	ret = pwmchip_remove(&mdp->chip);
+-	clk_unprepare(mdp->clk_mm);
+-	clk_unprepare(mdp->clk_main);
++	pwmchip_remove(&mdp->chip);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static const struct mtk_pwm_data mt2701_pwm_data = {
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 6dd698b2d0af3..47a04c5f7a9b8 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -216,6 +216,78 @@ static void regulator_unlock(struct regulator_dev *rdev)
+ 	mutex_unlock(&regulator_nesting_mutex);
+ }
+ 
++/**
++ * regulator_lock_two - lock two regulators
++ * @rdev1:		first regulator
++ * @rdev2:		second regulator
++ * @ww_ctx:		w/w mutex acquire context
++ *
++ * Locks both rdevs using the regulator_ww_class.
++ */
++static void regulator_lock_two(struct regulator_dev *rdev1,
++			       struct regulator_dev *rdev2,
++			       struct ww_acquire_ctx *ww_ctx)
++{
++	struct regulator_dev *tmp;
++	int ret;
++
++	ww_acquire_init(ww_ctx, &regulator_ww_class);
++
++	/* Try to just grab both of them */
++	ret = regulator_lock_nested(rdev1, ww_ctx);
++	WARN_ON(ret);
++	ret = regulator_lock_nested(rdev2, ww_ctx);
++	if (ret != -EDEADLOCK) {
++		WARN_ON(ret);
++		goto exit;
++	}
++
++	while (true) {
++		/*
++		 * Start of loop: rdev1 was locked and rdev2 was contended.
++		 * Need to unlock rdev1, slowly lock rdev2, then try rdev1
++		 * again.
++		 */
++		regulator_unlock(rdev1);
++
++		ww_mutex_lock_slow(&rdev2->mutex, ww_ctx);
++		rdev2->ref_cnt++;
++		rdev2->mutex_owner = current;
++		ret = regulator_lock_nested(rdev1, ww_ctx);
++
++		if (ret == -EDEADLOCK) {
++			/* More contention; swap which needs to be slow */
++			tmp = rdev1;
++			rdev1 = rdev2;
++			rdev2 = tmp;
++		} else {
++			WARN_ON(ret);
++			break;
++		}
++	}
++
++exit:
++	ww_acquire_done(ww_ctx);
++}
++
++/**
++ * regulator_unlock_two - unlock two regulators
++ * @rdev1:		first regulator
++ * @rdev2:		second regulator
++ * @ww_ctx:		w/w mutex acquire context
++ *
++ * The inverse of regulator_lock_two().
++ */
++
++static void regulator_unlock_two(struct regulator_dev *rdev1,
++				 struct regulator_dev *rdev2,
++				 struct ww_acquire_ctx *ww_ctx)
++{
++	regulator_unlock(rdev2);
++	regulator_unlock(rdev1);
++	ww_acquire_fini(ww_ctx);
++}
++
+ static bool regulator_supply_is_couple(struct regulator_dev *rdev)
+ {
+ 	struct regulator_dev *c_rdev;
+@@ -343,6 +415,7 @@ static void regulator_lock_dependent(struct regulator_dev *rdev,
+ 			ww_mutex_lock_slow(&new_contended_rdev->mutex, ww_ctx);
+ 			old_contended_rdev = new_contended_rdev;
+ 			old_contended_rdev->ref_cnt++;
++			old_contended_rdev->mutex_owner = current;
+ 		}
+ 
+ 		err = regulator_lock_recursive(rdev,
+@@ -1459,8 +1532,8 @@ static int set_machine_constraints(struct regulator_dev *rdev)
+ 
+ /**
+  * set_supply - set regulator supply regulator
+- * @rdev: regulator name
+- * @supply_rdev: supply regulator name
++ * @rdev: regulator (locked)
++ * @supply_rdev: supply regulator (locked))
+  *
+  * Called by platform initialisation code to set the supply regulator for this
+  * regulator. This ensures that a regulators supply will also be enabled by the
+@@ -1632,6 +1705,8 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ 	struct regulator *regulator;
+ 	int err = 0;
+ 
++	lockdep_assert_held_once(&rdev->mutex.base);
++
+ 	if (dev) {
+ 		char buf[REG_STR_SIZE];
+ 		int size;
+@@ -1659,9 +1734,7 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ 	regulator->rdev = rdev;
+ 	regulator->supply_name = supply_name;
+ 
+-	regulator_lock(rdev);
+ 	list_add(&regulator->list, &rdev->consumer_list);
+-	regulator_unlock(rdev);
+ 
+ 	if (dev) {
+ 		regulator->dev = dev;
+@@ -1827,6 +1900,7 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ {
+ 	struct regulator_dev *r;
+ 	struct device *dev = rdev->dev.parent;
++	struct ww_acquire_ctx ww_ctx;
+ 	int ret = 0;
+ 
+ 	/* No supply to resolve? */
+@@ -1893,23 +1967,23 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 	 * between rdev->supply null check and setting rdev->supply in
+ 	 * set_supply() from concurrent tasks.
+ 	 */
+-	regulator_lock(rdev);
++	regulator_lock_two(rdev, r, &ww_ctx);
+ 
+ 	/* Supply just resolved by a concurrent task? */
+ 	if (rdev->supply) {
+-		regulator_unlock(rdev);
++		regulator_unlock_two(rdev, r, &ww_ctx);
+ 		put_device(&r->dev);
+ 		goto out;
+ 	}
+ 
+ 	ret = set_supply(rdev, r);
+ 	if (ret < 0) {
+-		regulator_unlock(rdev);
++		regulator_unlock_two(rdev, r, &ww_ctx);
+ 		put_device(&r->dev);
+ 		goto out;
+ 	}
+ 
+-	regulator_unlock(rdev);
++	regulator_unlock_two(rdev, r, &ww_ctx);
+ 
+ 	/*
+ 	 * In set_machine_constraints() we may have turned this regulator on
+@@ -2022,7 +2096,9 @@ struct regulator *_regulator_get(struct device *dev, const char *id,
+ 		return regulator;
+ 	}
+ 
++	regulator_lock(rdev);
+ 	regulator = create_regulator(rdev, dev, id);
++	regulator_unlock(rdev);
+ 	if (regulator == NULL) {
+ 		regulator = ERR_PTR(-ENOMEM);
+ 		module_put(rdev->owner);
+@@ -5800,6 +5876,7 @@ static void regulator_summary_lock(struct ww_acquire_ctx *ww_ctx)
+ 			ww_mutex_lock_slow(&new_contended_rdev->mutex, ww_ctx);
+ 			old_contended_rdev = new_contended_rdev;
+ 			old_contended_rdev->ref_cnt++;
++			old_contended_rdev->mutex_owner = current;
+ 		}
+ 
+ 		err = regulator_summary_lock_all(ww_ctx,
+diff --git a/drivers/regulator/stm32-pwr.c b/drivers/regulator/stm32-pwr.c
+index 2a42acb7c24e9..e5dd4db6403b2 100644
+--- a/drivers/regulator/stm32-pwr.c
++++ b/drivers/regulator/stm32-pwr.c
+@@ -129,17 +129,16 @@ static const struct regulator_desc stm32_pwr_desc[] = {
+ 
+ static int stm32_pwr_regulator_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np = pdev->dev.of_node;
+ 	struct stm32_pwr_reg *priv;
+ 	void __iomem *base;
+ 	struct regulator_dev *rdev;
+ 	struct regulator_config config = { };
+ 	int i, ret = 0;
+ 
+-	base = of_iomap(np, 0);
+-	if (!base) {
++	base = devm_platform_ioremap_resource(pdev, 0);
++	if (IS_ERR(base)) {
+ 		dev_err(&pdev->dev, "Unable to map IO memory\n");
+-		return -ENOMEM;
++		return PTR_ERR(base);
+ 	}
+ 
+ 	config.dev = &pdev->dev;
+diff --git a/drivers/remoteproc/st_remoteproc.c b/drivers/remoteproc/st_remoteproc.c
+index a3268d95a50e6..e6bd3c7a950a2 100644
+--- a/drivers/remoteproc/st_remoteproc.c
++++ b/drivers/remoteproc/st_remoteproc.c
+@@ -129,6 +129,7 @@ static int st_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
+ 	while (of_phandle_iterator_next(&it) == 0) {
+ 		rmem = of_reserved_mem_lookup(it.node);
+ 		if (!rmem) {
++			of_node_put(it.node);
+ 			dev_err(dev, "unable to acquire memory-region\n");
+ 			return -EINVAL;
+ 		}
+@@ -150,8 +151,10 @@ static int st_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
+ 							   it.node->name);
+ 		}
+ 
+-		if (!mem)
++		if (!mem) {
++			of_node_put(it.node);
+ 			return -ENOMEM;
++		}
+ 
+ 		rproc_add_carveout(rproc, mem);
+ 		index++;
+diff --git a/drivers/remoteproc/stm32_rproc.c b/drivers/remoteproc/stm32_rproc.c
+index d2414cc1d90d6..24760d8ea6374 100644
+--- a/drivers/remoteproc/stm32_rproc.c
++++ b/drivers/remoteproc/stm32_rproc.c
+@@ -231,11 +231,13 @@ static int stm32_rproc_parse_memory_regions(struct rproc *rproc)
+ 	while (of_phandle_iterator_next(&it) == 0) {
+ 		rmem = of_reserved_mem_lookup(it.node);
+ 		if (!rmem) {
++			of_node_put(it.node);
+ 			dev_err(dev, "unable to acquire memory-region\n");
+ 			return -EINVAL;
+ 		}
+ 
+ 		if (stm32_rproc_pa_to_da(rproc, rmem->base, &da) < 0) {
++			of_node_put(it.node);
+ 			dev_err(dev, "memory region not valid %pa\n",
+ 				&rmem->base);
+ 			return -EINVAL;
+@@ -262,8 +264,10 @@ static int stm32_rproc_parse_memory_regions(struct rproc *rproc)
+ 							   it.node->name);
+ 		}
+ 
+-		if (!mem)
++		if (!mem) {
++			of_node_put(it.node);
+ 			return -ENOMEM;
++		}
+ 
+ 		rproc_add_carveout(rproc, mem);
+ 		index++;
+diff --git a/drivers/rtc/rtc-meson-vrtc.c b/drivers/rtc/rtc-meson-vrtc.c
+index e6bd0808a092b..18ff8439b5bb5 100644
+--- a/drivers/rtc/rtc-meson-vrtc.c
++++ b/drivers/rtc/rtc-meson-vrtc.c
+@@ -23,7 +23,7 @@ static int meson_vrtc_read_time(struct device *dev, struct rtc_time *tm)
+ 	struct timespec64 time;
+ 
+ 	dev_dbg(dev, "%s\n", __func__);
+-	ktime_get_raw_ts64(&time);
++	ktime_get_real_ts64(&time);
+ 	rtc_time64_to_tm(time.tv_sec, tm);
+ 
+ 	return 0;
+@@ -96,7 +96,7 @@ static int __maybe_unused meson_vrtc_suspend(struct device *dev)
+ 		long alarm_secs;
+ 		struct timespec64 time;
+ 
+-		ktime_get_raw_ts64(&time);
++		ktime_get_real_ts64(&time);
+ 		local_time = time.tv_sec;
+ 
+ 		dev_dbg(dev, "alarm_time = %lus, local_time=%lus\n",
+diff --git a/drivers/rtc/rtc-omap.c b/drivers/rtc/rtc-omap.c
+index c20fc7937dfa8..18ae2a4f26eab 100644
+--- a/drivers/rtc/rtc-omap.c
++++ b/drivers/rtc/rtc-omap.c
+@@ -25,6 +25,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/rtc.h>
++#include <linux/rtc/rtc-omap.h>
+ 
+ /*
+  * The OMAP RTC is a year/month/day/hours/minutes/seconds BCD clock
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index 9f26f55e4988a..792f8f5688085 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -3000,7 +3000,7 @@ static int _dasd_requeue_request(struct dasd_ccw_req *cqr)
+ 		return 0;
+ 	spin_lock_irq(&cqr->dq->lock);
+ 	req = (struct request *) cqr->callback_data;
+-	blk_mq_requeue_request(req, false);
++	blk_mq_requeue_request(req, true);
+ 	spin_unlock_irq(&cqr->dq->lock);
+ 
+ 	return 0;
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 17200b453cbbb..1bb3c96a04bd6 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -10477,7 +10477,7 @@ lpfc_sli4_pci_mem_setup(struct lpfc_hba *phba)
+ 				goto out_iounmap_all;
+ 		} else {
+ 			error = -ENOMEM;
+-			goto out_iounmap_all;
++			goto out_iounmap_ctrl;
+ 		}
+ 	}
+ 
+@@ -10495,7 +10495,7 @@ lpfc_sli4_pci_mem_setup(struct lpfc_hba *phba)
+ 			dev_err(&pdev->dev,
+ 			   "ioremap failed for SLI4 HBA dpp registers.\n");
+ 			error = -ENOMEM;
+-			goto out_iounmap_ctrl;
++			goto out_iounmap_all;
+ 		}
+ 		phba->pci_bar4_memmap_p = phba->sli4_hba.dpp_regs_memmap_p;
+ 	}
+@@ -10520,9 +10520,11 @@ lpfc_sli4_pci_mem_setup(struct lpfc_hba *phba)
+ 	return 0;
+ 
+ out_iounmap_all:
+-	iounmap(phba->sli4_hba.drbl_regs_memmap_p);
++	if (phba->sli4_hba.drbl_regs_memmap_p)
++		iounmap(phba->sli4_hba.drbl_regs_memmap_p);
+ out_iounmap_ctrl:
+-	iounmap(phba->sli4_hba.ctrl_regs_memmap_p);
++	if (phba->sli4_hba.ctrl_regs_memmap_p)
++		iounmap(phba->sli4_hba.ctrl_regs_memmap_p);
+ out_iounmap_conf:
+ 	iounmap(phba->sli4_hba.conf_regs_memmap_p);
+ 
+diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
+index daffa36988aee..810d803406761 100644
+--- a/drivers/scsi/megaraid.c
++++ b/drivers/scsi/megaraid.c
+@@ -1443,6 +1443,7 @@ mega_cmd_done(adapter_t *adapter, u8 completed[], int nstatus, int status)
+ 		 */
+ 		if (cmdid == CMDID_INT_CMDS) {
+ 			scb = &adapter->int_scb;
++			cmd = scb->cmd;
+ 
+ 			list_del_init(&scb->list);
+ 			scb->state = SCB_FREE;
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 299d0369e4f08..7df0106f132ee 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -2456,6 +2456,9 @@ static void __qedi_remove(struct pci_dev *pdev, int mode)
+ 		qedi_ops->ll2->stop(qedi->cdev);
+ 	}
+ 
++	cancel_delayed_work_sync(&qedi->recovery_work);
++	cancel_delayed_work_sync(&qedi->board_disable_work);
++
+ 	qedi_free_iscsi_pf_param(qedi);
+ 
+ 	rval = qedi_ops->common->update_drv_state(qedi->cdev, false);
+diff --git a/drivers/soc/ti/pm33xx.c b/drivers/soc/ti/pm33xx.c
+index dc21aa855a458..44ec0048911cd 100644
+--- a/drivers/soc/ti/pm33xx.c
++++ b/drivers/soc/ti/pm33xx.c
+@@ -19,6 +19,7 @@
+ #include <linux/of_address.h>
+ #include <linux/platform_data/pm33xx.h>
+ #include <linux/platform_device.h>
++#include <linux/pm_runtime.h>
+ #include <linux/rtc.h>
+ #include <linux/rtc/rtc-omap.h>
+ #include <linux/sizes.h>
+@@ -528,7 +529,7 @@ static int am33xx_pm_probe(struct platform_device *pdev)
+ 
+ 	ret = am33xx_pm_alloc_sram();
+ 	if (ret)
+-		return ret;
++		goto err_wkup_m3_ipc_put;
+ 
+ 	ret = am33xx_pm_rtc_setup();
+ 	if (ret)
+@@ -555,28 +556,41 @@ static int am33xx_pm_probe(struct platform_device *pdev)
+ 	suspend_wfi_flags |= WFI_FLAG_WAKE_M3;
+ #endif /* CONFIG_SUSPEND */
+ 
++	pm_runtime_enable(dev);
++	ret = pm_runtime_get_sync(dev);
++	if (ret < 0) {
++		pm_runtime_put_noidle(dev);
++		goto err_pm_runtime_disable;
++	}
++
+ 	ret = pm_ops->init(am33xx_do_sram_idle);
+ 	if (ret) {
+ 		dev_err(dev, "Unable to call core pm init!\n");
+ 		ret = -ENODEV;
+-		goto err_put_wkup_m3_ipc;
++		goto err_pm_runtime_put;
+ 	}
+ 
+ 	return 0;
+ 
+-err_put_wkup_m3_ipc:
+-	wkup_m3_ipc_put(m3_ipc);
++err_pm_runtime_put:
++	pm_runtime_put_sync(dev);
++err_pm_runtime_disable:
++	pm_runtime_disable(dev);
+ err_unsetup_rtc:
+ 	iounmap(rtc_base_virt);
+ 	clk_put(rtc_fck);
+ err_free_sram:
+ 	am33xx_pm_free_sram();
+ 	pm33xx_dev = NULL;
++err_wkup_m3_ipc_put:
++	wkup_m3_ipc_put(m3_ipc);
+ 	return ret;
+ }
+ 
+ static int am33xx_pm_remove(struct platform_device *pdev)
+ {
++	pm_runtime_put_sync(&pdev->dev);
++	pm_runtime_disable(&pdev->dev);
+ 	if (pm_ops->deinit)
+ 		pm_ops->deinit();
+ 	suspend_set_ops(NULL);
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 2e1255bf1b429..23d50a19ae271 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1355,17 +1355,30 @@ static int cqspi_remove(struct platform_device *pdev)
+ static int cqspi_suspend(struct device *dev)
+ {
+ 	struct cqspi_st *cqspi = dev_get_drvdata(dev);
++	struct spi_master *master = dev_get_drvdata(dev);
++	int ret;
+ 
++	ret = spi_master_suspend(master);
+ 	cqspi_controller_enable(cqspi, 0);
+-	return 0;
++
++	clk_disable_unprepare(cqspi->clk);
++
++	return ret;
+ }
+ 
+ static int cqspi_resume(struct device *dev)
+ {
+ 	struct cqspi_st *cqspi = dev_get_drvdata(dev);
++	struct spi_master *master = dev_get_drvdata(dev);
+ 
+-	cqspi_controller_enable(cqspi, 1);
+-	return 0;
++	clk_prepare_enable(cqspi->clk);
++	cqspi_wait_idle(cqspi);
++	cqspi_controller_init(cqspi);
++
++	cqspi->current_cs = -1;
++	cqspi->sclk = 0;
++
++	return spi_master_resume(master);
+ }
+ 
+ static const struct dev_pm_ops cqspi__dev_pm_ops = {
+diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
+index bdf94cc7be1af..1bad0ceac81b4 100644
+--- a/drivers/spi/spi-fsl-spi.c
++++ b/drivers/spi/spi-fsl-spi.c
+@@ -207,8 +207,8 @@ static int mspi_apply_qe_mode_quirks(struct spi_mpc8xxx_cs *cs,
+ 				struct spi_device *spi,
+ 				int bits_per_word)
+ {
+-	/* QE uses Little Endian for words > 8
+-	 * so transform all words > 8 into 8 bits
++	/* CPM/QE uses Little Endian for words > 8
++	 * so transform 16 and 32 bits words into 8 bits
+ 	 * Unfortnatly that doesn't work for LSB so
+ 	 * reject these for now */
+ 	/* Note: 32 bits word, LSB works iff
+@@ -216,9 +216,11 @@ static int mspi_apply_qe_mode_quirks(struct spi_mpc8xxx_cs *cs,
+ 	if (spi->mode & SPI_LSB_FIRST &&
+ 	    bits_per_word > 8)
+ 		return -EINVAL;
+-	if (bits_per_word > 8)
++	if (bits_per_word <= 8)
++		return bits_per_word;
++	if (bits_per_word == 16 || bits_per_word == 32)
+ 		return 8; /* pretend its 8 bits */
+-	return bits_per_word;
++	return -EINVAL;
+ }
+ 
+ static int fsl_spi_setup_transfer(struct spi_device *spi,
+@@ -248,7 +250,7 @@ static int fsl_spi_setup_transfer(struct spi_device *spi,
+ 		bits_per_word = mspi_apply_cpu_mode_quirks(cs, spi,
+ 							   mpc8xxx_spi,
+ 							   bits_per_word);
+-	else if (mpc8xxx_spi->flags & SPI_QE)
++	else
+ 		bits_per_word = mspi_apply_qe_mode_quirks(cs, spi,
+ 							  bits_per_word);
+ 
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 74b3b6ca15efb..bbc420865f0fd 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -1554,9 +1554,8 @@ spi_imx_prepare_message(struct spi_master *master, struct spi_message *msg)
+ 	struct spi_imx_data *spi_imx = spi_master_get_devdata(master);
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(spi_imx->dev);
++	ret = pm_runtime_resume_and_get(spi_imx->dev);
+ 	if (ret < 0) {
+-		pm_runtime_put_noidle(spi_imx->dev);
+ 		dev_err(spi_imx->dev, "failed to enable clock\n");
+ 		return ret;
+ 	}
+@@ -1766,13 +1765,10 @@ static int spi_imx_remove(struct platform_device *pdev)
+ 	spi_bitbang_stop(&spi_imx->bitbang);
+ 
+ 	ret = pm_runtime_get_sync(spi_imx->dev);
+-	if (ret < 0) {
+-		pm_runtime_put_noidle(spi_imx->dev);
+-		dev_err(spi_imx->dev, "failed to enable clock\n");
+-		return ret;
+-	}
+-
+-	writel(0, spi_imx->base + MXC_CSPICTRL);
++	if (ret >= 0)
++		writel(0, spi_imx->base + MXC_CSPICTRL);
++	else
++		dev_warn(spi_imx->dev, "failed to enable clock, skip hw disable\n");
+ 
+ 	pm_runtime_dont_use_autosuspend(spi_imx->dev);
+ 	pm_runtime_put_sync(spi_imx->dev);
+diff --git a/drivers/spi/spi-qup.c b/drivers/spi/spi-qup.c
+index f3877eeb3da65..8bf58510cca6d 100644
+--- a/drivers/spi/spi-qup.c
++++ b/drivers/spi/spi-qup.c
+@@ -1276,18 +1276,22 @@ static int spi_qup_remove(struct platform_device *pdev)
+ 	struct spi_qup *controller = spi_master_get_devdata(master);
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(&pdev->dev);
+-	if (ret < 0)
+-		return ret;
++	ret = pm_runtime_get_sync(&pdev->dev);
+ 
+-	ret = spi_qup_set_state(controller, QUP_STATE_RESET);
+-	if (ret)
+-		return ret;
++	if (ret >= 0) {
++		ret = spi_qup_set_state(controller, QUP_STATE_RESET);
++		if (ret)
++			dev_warn(&pdev->dev, "failed to reset controller (%pe)\n",
++				 ERR_PTR(ret));
+ 
+-	spi_qup_release_dma(master);
++		clk_disable_unprepare(controller->cclk);
++		clk_disable_unprepare(controller->iclk);
++	} else {
++		dev_warn(&pdev->dev, "failed to resume, skip hw disable (%pe)\n",
++			 ERR_PTR(ret));
++	}
+ 
+-	clk_disable_unprepare(controller->cclk);
+-	clk_disable_unprepare(controller->iclk);
++	spi_qup_release_dma(master);
+ 
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+diff --git a/drivers/spmi/spmi.c b/drivers/spmi/spmi.c
+index c16b60f645a4d..8ca7e004a53dc 100644
+--- a/drivers/spmi/spmi.c
++++ b/drivers/spmi/spmi.c
+@@ -348,7 +348,8 @@ static int spmi_drv_remove(struct device *dev)
+ 	const struct spmi_driver *sdrv = to_spmi_driver(dev->driver);
+ 
+ 	pm_runtime_get_sync(dev);
+-	sdrv->remove(to_spmi_device(dev));
++	if (sdrv->remove)
++		sdrv->remove(to_spmi_device(dev));
+ 	pm_runtime_put_noidle(dev);
+ 
+ 	pm_runtime_disable(dev);
+diff --git a/drivers/staging/iio/resolver/ad2s1210.c b/drivers/staging/iio/resolver/ad2s1210.c
+index 74adb82f37c30..a19cfb2998c93 100644
+--- a/drivers/staging/iio/resolver/ad2s1210.c
++++ b/drivers/staging/iio/resolver/ad2s1210.c
+@@ -101,7 +101,7 @@ struct ad2s1210_state {
+ static const int ad2s1210_mode_vals[4][2] = {
+ 	[MOD_POS] = { 0, 0 },
+ 	[MOD_VEL] = { 0, 1 },
+-	[MOD_CONFIG] = { 1, 0 },
++	[MOD_CONFIG] = { 1, 1 },
+ };
+ 
+ static inline void ad2s1210_set_mode(enum ad2s1210_mode mode,
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index e384ea8d72801..f6a29a7078625 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -1077,6 +1077,8 @@ static int rkvdec_remove(struct platform_device *pdev)
+ {
+ 	struct rkvdec_dev *rkvdec = platform_get_drvdata(pdev);
+ 
++	cancel_delayed_work_sync(&rkvdec->watchdog_work);
++
+ 	rkvdec_v4l2_cleanup(rkvdec);
+ 	pm_runtime_disable(&pdev->dev);
+ 	pm_runtime_dont_use_autosuspend(&pdev->dev);
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+index 99c27d6b42333..291f98251f7f7 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+@@ -770,6 +770,7 @@ static int _rtl92e_sta_up(struct net_device *dev, bool is_silent_reset)
+ 	else
+ 		netif_wake_queue(dev);
+ 
++	priv->bfirst_after_down = false;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index a237f1cf9bd60..6bb8403580729 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -4084,9 +4084,12 @@ static void iscsit_release_commands_from_conn(struct iscsi_conn *conn)
+ 	list_for_each_entry_safe(cmd, cmd_tmp, &tmp_list, i_conn_node) {
+ 		struct se_cmd *se_cmd = &cmd->se_cmd;
+ 
+-		if (se_cmd->se_tfo != NULL) {
+-			spin_lock_irq(&se_cmd->t_state_lock);
+-			if (se_cmd->transport_state & CMD_T_ABORTED) {
++		if (!se_cmd->se_tfo)
++			continue;
++
++		spin_lock_irq(&se_cmd->t_state_lock);
++		if (se_cmd->transport_state & CMD_T_ABORTED) {
++			if (!(se_cmd->transport_state & CMD_T_TAS))
+ 				/*
+ 				 * LIO's abort path owns the cleanup for this,
+ 				 * so put it back on the list and let
+@@ -4094,11 +4097,10 @@ static void iscsit_release_commands_from_conn(struct iscsi_conn *conn)
+ 				 */
+ 				list_move_tail(&cmd->i_conn_node,
+ 					       &conn->conn_cmd_list);
+-			} else {
+-				se_cmd->transport_state |= CMD_T_FABRIC_STOP;
+-			}
+-			spin_unlock_irq(&se_cmd->t_state_lock);
++		} else {
++			se_cmd->transport_state |= CMD_T_FABRIC_STOP;
+ 		}
++		spin_unlock_irq(&se_cmd->t_state_lock);
+ 	}
+ 	spin_unlock_bh(&conn->cmd_lock);
+ 
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index 1eded5c4ebda6..4664330fb55dd 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -724,11 +724,24 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
+ {
+ 	struct se_device *dev;
+ 	struct se_lun *xcopy_lun;
++	int i;
+ 
+ 	dev = hba->backend->ops->alloc_device(hba, name);
+ 	if (!dev)
+ 		return NULL;
+ 
++	dev->queues = kcalloc(nr_cpu_ids, sizeof(*dev->queues), GFP_KERNEL);
++	if (!dev->queues) {
++		dev->transport->free_device(dev);
++		return NULL;
++	}
++
++	dev->queue_cnt = nr_cpu_ids;
++	for (i = 0; i < dev->queue_cnt; i++) {
++		INIT_LIST_HEAD(&dev->queues[i].state_list);
++		spin_lock_init(&dev->queues[i].lock);
++	}
++
+ 	dev->se_hba = hba;
+ 	dev->transport = hba->backend->ops;
+ 	dev->transport_flags = dev->transport->transport_flags_default;
+@@ -738,9 +751,7 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
+ 	INIT_LIST_HEAD(&dev->dev_sep_list);
+ 	INIT_LIST_HEAD(&dev->dev_tmr_list);
+ 	INIT_LIST_HEAD(&dev->delayed_cmd_list);
+-	INIT_LIST_HEAD(&dev->state_list);
+ 	INIT_LIST_HEAD(&dev->qf_cmd_list);
+-	spin_lock_init(&dev->execute_task_lock);
+ 	spin_lock_init(&dev->delayed_cmd_lock);
+ 	spin_lock_init(&dev->dev_reservation_lock);
+ 	spin_lock_init(&dev->se_port_lock);
+@@ -759,6 +770,7 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
+ 	spin_lock_init(&dev->t10_alua.lba_map_lock);
+ 
+ 	INIT_WORK(&dev->delayed_cmd_work, target_do_delayed_work);
++	mutex_init(&dev->lun_reset_mutex);
+ 
+ 	dev->t10_wwn.t10_dev = dev;
+ 	dev->t10_alua.t10_dev = dev;
+@@ -1014,6 +1026,7 @@ void target_free_device(struct se_device *dev)
+ 	if (dev->transport->free_prot)
+ 		dev->transport->free_prot(dev);
+ 
++	kfree(dev->queues);
+ 	dev->transport->free_device(dev);
+ }
+ 
+diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
+index eaf8551ebc612..47c5f26e6012d 100644
+--- a/drivers/target/target_core_sbc.c
++++ b/drivers/target/target_core_sbc.c
+@@ -1438,7 +1438,7 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
+ 			if (rc) {
+ 				kunmap_atomic(daddr - dsg->offset);
+ 				kunmap_atomic(paddr - psg->offset);
+-				cmd->bad_sector = sector;
++				cmd->sense_info = sector;
+ 				return rc;
+ 			}
+ next:
+diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
+index 3efd5a3bd69d1..a2b18a98d6718 100644
+--- a/drivers/target/target_core_tmr.c
++++ b/drivers/target/target_core_tmr.c
+@@ -121,57 +121,61 @@ void core_tmr_abort_task(
+ 	unsigned long flags;
+ 	bool rc;
+ 	u64 ref_tag;
+-
+-	spin_lock_irqsave(&dev->execute_task_lock, flags);
+-	list_for_each_entry_safe(se_cmd, next, &dev->state_list, state_list) {
+-
+-		if (se_sess != se_cmd->se_sess)
+-			continue;
+-
+-		/* skip task management functions, including tmr->task_cmd */
+-		if (se_cmd->se_cmd_flags & SCF_SCSI_TMR_CDB)
+-			continue;
+-
+-		ref_tag = se_cmd->tag;
+-		if (tmr->ref_task_tag != ref_tag)
+-			continue;
+-
+-		printk("ABORT_TASK: Found referenced %s task_tag: %llu\n",
+-			se_cmd->se_tfo->fabric_name, ref_tag);
+-
+-		spin_lock(&se_sess->sess_cmd_lock);
+-		rc = __target_check_io_state(se_cmd, se_sess, 0);
+-		spin_unlock(&se_sess->sess_cmd_lock);
+-		if (!rc)
+-			continue;
+-
+-		list_move_tail(&se_cmd->state_list, &aborted_list);
+-		se_cmd->state_active = false;
+-
+-		spin_unlock_irqrestore(&dev->execute_task_lock, flags);
+-
+-		/*
+-		 * Ensure that this ABORT request is visible to the LU RESET
+-		 * code.
+-		 */
+-		if (!tmr->tmr_dev)
+-			WARN_ON_ONCE(transport_lookup_tmr_lun(tmr->task_cmd) <
+-					0);
+-
+-		if (dev->transport->tmr_notify)
+-			dev->transport->tmr_notify(dev, TMR_ABORT_TASK,
+-						   &aborted_list);
+-
+-		list_del_init(&se_cmd->state_list);
+-		target_put_cmd_and_wait(se_cmd);
+-
+-		printk("ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for"
+-				" ref_tag: %llu\n", ref_tag);
+-		tmr->response = TMR_FUNCTION_COMPLETE;
+-		atomic_long_inc(&dev->aborts_complete);
+-		return;
++	int i;
++
++	for (i = 0; i < dev->queue_cnt; i++) {
++		spin_lock_irqsave(&dev->queues[i].lock, flags);
++		list_for_each_entry_safe(se_cmd, next, &dev->queues[i].state_list,
++					 state_list) {
++			if (se_sess != se_cmd->se_sess)
++				continue;
++
++			/*
++			 * skip task management functions, including
++			 * tmr->task_cmd
++			 */
++			if (se_cmd->se_cmd_flags & SCF_SCSI_TMR_CDB)
++				continue;
++
++			ref_tag = se_cmd->tag;
++			if (tmr->ref_task_tag != ref_tag)
++				continue;
++
++			pr_err("ABORT_TASK: Found referenced %s task_tag: %llu\n",
++			       se_cmd->se_tfo->fabric_name, ref_tag);
++
++			spin_lock(&se_sess->sess_cmd_lock);
++			rc = __target_check_io_state(se_cmd, se_sess, 0);
++			spin_unlock(&se_sess->sess_cmd_lock);
++			if (!rc)
++				continue;
++
++			list_move_tail(&se_cmd->state_list, &aborted_list);
++			se_cmd->state_active = false;
++			spin_unlock_irqrestore(&dev->queues[i].lock, flags);
++
++			/*
++			 * Ensure that this ABORT request is visible to the LU
++			 * RESET code.
++			 */
++			if (!tmr->tmr_dev)
++				WARN_ON_ONCE(transport_lookup_tmr_lun(tmr->task_cmd) < 0);
++
++			if (dev->transport->tmr_notify)
++				dev->transport->tmr_notify(dev, TMR_ABORT_TASK,
++							   &aborted_list);
++
++			list_del_init(&se_cmd->state_list);
++			target_put_cmd_and_wait(se_cmd);
++
++			pr_err("ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for ref_tag: %llu\n",
++			       ref_tag);
++			tmr->response = TMR_FUNCTION_COMPLETE;
++			atomic_long_inc(&dev->aborts_complete);
++			return;
++		}
++		spin_unlock_irqrestore(&dev->queues[i].lock, flags);
+ 	}
+-	spin_unlock_irqrestore(&dev->execute_task_lock, flags);
+ 
+ 	if (dev->transport->tmr_notify)
+ 		dev->transport->tmr_notify(dev, TMR_ABORT_TASK, &aborted_list);
+@@ -198,14 +202,23 @@ static void core_tmr_drain_tmr_list(
+ 	 * LUN_RESET tmr..
+ 	 */
+ 	spin_lock_irqsave(&dev->se_tmr_lock, flags);
+-	if (tmr)
+-		list_del_init(&tmr->tmr_list);
+ 	list_for_each_entry_safe(tmr_p, tmr_pp, &dev->dev_tmr_list, tmr_list) {
++		if (tmr_p == tmr)
++			continue;
++
+ 		cmd = tmr_p->task_cmd;
+ 		if (!cmd) {
+ 			pr_err("Unable to locate struct se_cmd for TMR\n");
+ 			continue;
+ 		}
++
++		/*
++		 * We only execute one LUN_RESET at a time so we can't wait
++		 * on them below.
++		 */
++		if (tmr_p->function == TMR_LUN_RESET)
++			continue;
++
+ 		/*
+ 		 * If this function was called with a valid pr_res_key
+ 		 * parameter (eg: for PROUT PREEMPT_AND_ABORT service action
+@@ -273,7 +286,7 @@ static void core_tmr_drain_state_list(
+ 	struct se_session *sess;
+ 	struct se_cmd *cmd, *next;
+ 	unsigned long flags;
+-	int rc;
++	int rc, i;
+ 
+ 	/*
+ 	 * Complete outstanding commands with TASK_ABORTED SAM status.
+@@ -297,35 +310,39 @@ static void core_tmr_drain_state_list(
+ 	 * Note that this seems to be independent of TAS (Task Aborted Status)
+ 	 * in the Control Mode Page.
+ 	 */
+-	spin_lock_irqsave(&dev->execute_task_lock, flags);
+-	list_for_each_entry_safe(cmd, next, &dev->state_list, state_list) {
+-		/*
+-		 * For PREEMPT_AND_ABORT usage, only process commands
+-		 * with a matching reservation key.
+-		 */
+-		if (target_check_cdb_and_preempt(preempt_and_abort_list, cmd))
+-			continue;
+-
+-		/*
+-		 * Not aborting PROUT PREEMPT_AND_ABORT CDB..
+-		 */
+-		if (prout_cmd == cmd)
+-			continue;
+-
+-		sess = cmd->se_sess;
+-		if (WARN_ON_ONCE(!sess))
+-			continue;
+-
+-		spin_lock(&sess->sess_cmd_lock);
+-		rc = __target_check_io_state(cmd, tmr_sess, tas);
+-		spin_unlock(&sess->sess_cmd_lock);
+-		if (!rc)
+-			continue;
+-
+-		list_move_tail(&cmd->state_list, &drain_task_list);
+-		cmd->state_active = false;
++	for (i = 0; i < dev->queue_cnt; i++) {
++		spin_lock_irqsave(&dev->queues[i].lock, flags);
++		list_for_each_entry_safe(cmd, next, &dev->queues[i].state_list,
++					 state_list) {
++			/*
++			 * For PREEMPT_AND_ABORT usage, only process commands
++			 * with a matching reservation key.
++			 */
++			if (target_check_cdb_and_preempt(preempt_and_abort_list,
++							 cmd))
++				continue;
++
++			/*
++			 * Not aborting PROUT PREEMPT_AND_ABORT CDB..
++			 */
++			if (prout_cmd == cmd)
++				continue;
++
++			sess = cmd->se_sess;
++			if (WARN_ON_ONCE(!sess))
++				continue;
++
++			spin_lock(&sess->sess_cmd_lock);
++			rc = __target_check_io_state(cmd, tmr_sess, tas);
++			spin_unlock(&sess->sess_cmd_lock);
++			if (!rc)
++				continue;
++
++			list_move_tail(&cmd->state_list, &drain_task_list);
++			cmd->state_active = false;
++		}
++		spin_unlock_irqrestore(&dev->queues[i].lock, flags);
+ 	}
+-	spin_unlock_irqrestore(&dev->execute_task_lock, flags);
+ 
+ 	if (dev->transport->tmr_notify)
+ 		dev->transport->tmr_notify(dev, preempt_and_abort_list ?
+@@ -382,14 +399,25 @@ int core_tmr_lun_reset(
+ 				tmr_nacl->initiatorname);
+ 		}
+ 	}
++
++
++	/*
++	 * We only allow one reset or preempt and abort to execute at a time
++	 * to prevent one call from claiming all the cmds causing a second
++	 * call from returning while cmds it should have waited on are still
++	 * running.
++	 */
++	mutex_lock(&dev->lun_reset_mutex);
++
+ 	pr_debug("LUN_RESET: %s starting for [%s], tas: %d\n",
+ 		(preempt_and_abort_list) ? "Preempt" : "TMR",
+ 		dev->transport->name, tas);
+-
+ 	core_tmr_drain_tmr_list(dev, tmr, preempt_and_abort_list);
+ 	core_tmr_drain_state_list(dev, prout_cmd, tmr_sess, tas,
+ 				preempt_and_abort_list);
+ 
++	mutex_unlock(&dev->lun_reset_mutex);
++
+ 	/*
+ 	 * Clear any legacy SPC-2 reservation when called during
+ 	 * LOGICAL UNIT RESET
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index bca3a32a4bfb7..2e97937f005ff 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -650,12 +650,12 @@ static void target_remove_from_state_list(struct se_cmd *cmd)
+ 	if (!dev)
+ 		return;
+ 
+-	spin_lock_irqsave(&dev->execute_task_lock, flags);
++	spin_lock_irqsave(&dev->queues[cmd->cpuid].lock, flags);
+ 	if (cmd->state_active) {
+ 		list_del(&cmd->state_list);
+ 		cmd->state_active = false;
+ 	}
+-	spin_unlock_irqrestore(&dev->execute_task_lock, flags);
++	spin_unlock_irqrestore(&dev->queues[cmd->cpuid].lock, flags);
+ }
+ 
+ /*
+@@ -866,10 +866,7 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status)
+ 
+ 	INIT_WORK(&cmd->work, success ? target_complete_ok_work :
+ 		  target_complete_failure_work);
+-	if (cmd->se_cmd_flags & SCF_USE_CPUID)
+-		queue_work_on(cmd->cpuid, target_completion_wq, &cmd->work);
+-	else
+-		queue_work(target_completion_wq, &cmd->work);
++	queue_work_on(cmd->cpuid, target_completion_wq, &cmd->work);
+ }
+ EXPORT_SYMBOL(target_complete_cmd);
+ 
+@@ -904,12 +901,13 @@ static void target_add_to_state_list(struct se_cmd *cmd)
+ 	struct se_device *dev = cmd->se_dev;
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&dev->execute_task_lock, flags);
++	spin_lock_irqsave(&dev->queues[cmd->cpuid].lock, flags);
+ 	if (!cmd->state_active) {
+-		list_add_tail(&cmd->state_list, &dev->state_list);
++		list_add_tail(&cmd->state_list,
++			      &dev->queues[cmd->cpuid].state_list);
+ 		cmd->state_active = true;
+ 	}
+-	spin_unlock_irqrestore(&dev->execute_task_lock, flags);
++	spin_unlock_irqrestore(&dev->queues[cmd->cpuid].lock, flags);
+ }
+ 
+ /*
+@@ -1397,6 +1395,9 @@ void transport_init_se_cmd(
+ 	cmd->sense_buffer = sense_buffer;
+ 	cmd->orig_fe_lun = unpacked_lun;
+ 
++	if (!(cmd->se_cmd_flags & SCF_USE_CPUID))
++		cmd->cpuid = raw_smp_processor_id();
++
+ 	cmd->state_active = false;
+ }
+ EXPORT_SYMBOL(transport_init_se_cmd);
+@@ -1614,6 +1615,9 @@ int target_submit_cmd_map_sgls(struct se_cmd *se_cmd, struct se_session *se_sess
+ 	BUG_ON(!se_tpg);
+ 	BUG_ON(se_cmd->se_tfo || se_cmd->se_sess);
+ 	BUG_ON(in_interrupt());
++
++	if (flags & TARGET_SCF_USE_CPUID)
++		se_cmd->se_cmd_flags |= SCF_USE_CPUID;
+ 	/*
+ 	 * Initialize se_cmd for target operation.  From this point
+ 	 * exceptions are handled by sending exception status via
+@@ -1623,11 +1627,6 @@ int target_submit_cmd_map_sgls(struct se_cmd *se_cmd, struct se_session *se_sess
+ 				data_length, data_dir, task_attr, sense,
+ 				unpacked_lun);
+ 
+-	if (flags & TARGET_SCF_USE_CPUID)
+-		se_cmd->se_cmd_flags |= SCF_USE_CPUID;
+-	else
+-		se_cmd->cpuid = WORK_CPU_UNBOUND;
+-
+ 	if (flags & TARGET_SCF_UNKNOWN_SIZE)
+ 		se_cmd->unknown_data_length = 1;
+ 	/*
+@@ -3131,14 +3130,14 @@ bool transport_wait_for_tasks(struct se_cmd *cmd)
+ }
+ EXPORT_SYMBOL(transport_wait_for_tasks);
+ 
+-struct sense_info {
++struct sense_detail {
+ 	u8 key;
+ 	u8 asc;
+ 	u8 ascq;
+-	bool add_sector_info;
++	bool add_sense_info;
+ };
+ 
+-static const struct sense_info sense_info_table[] = {
++static const struct sense_detail sense_detail_table[] = {
+ 	[TCM_NO_SENSE] = {
+ 		.key = NOT_READY
+ 	},
+@@ -3238,19 +3237,19 @@ static const struct sense_info sense_info_table[] = {
+ 		.key = ABORTED_COMMAND,
+ 		.asc = 0x10,
+ 		.ascq = 0x01, /* LOGICAL BLOCK GUARD CHECK FAILED */
+-		.add_sector_info = true,
++		.add_sense_info = true,
+ 	},
+ 	[TCM_LOGICAL_BLOCK_APP_TAG_CHECK_FAILED] = {
+ 		.key = ABORTED_COMMAND,
+ 		.asc = 0x10,
+ 		.ascq = 0x02, /* LOGICAL BLOCK APPLICATION TAG CHECK FAILED */
+-		.add_sector_info = true,
++		.add_sense_info = true,
+ 	},
+ 	[TCM_LOGICAL_BLOCK_REF_TAG_CHECK_FAILED] = {
+ 		.key = ABORTED_COMMAND,
+ 		.asc = 0x10,
+ 		.ascq = 0x03, /* LOGICAL BLOCK REFERENCE TAG CHECK FAILED */
+-		.add_sector_info = true,
++		.add_sense_info = true,
+ 	},
+ 	[TCM_COPY_TARGET_DEVICE_NOT_REACHABLE] = {
+ 		.key = COPY_ABORTED,
+@@ -3298,42 +3297,42 @@ static const struct sense_info sense_info_table[] = {
+  */
+ static void translate_sense_reason(struct se_cmd *cmd, sense_reason_t reason)
+ {
+-	const struct sense_info *si;
++	const struct sense_detail *sd;
+ 	u8 *buffer = cmd->sense_buffer;
+ 	int r = (__force int)reason;
+ 	u8 key, asc, ascq;
+ 	bool desc_format = target_sense_desc_format(cmd->se_dev);
+ 
+-	if (r < ARRAY_SIZE(sense_info_table) && sense_info_table[r].key)
+-		si = &sense_info_table[r];
++	if (r < ARRAY_SIZE(sense_detail_table) && sense_detail_table[r].key)
++		sd = &sense_detail_table[r];
+ 	else
+-		si = &sense_info_table[(__force int)
++		sd = &sense_detail_table[(__force int)
+ 				       TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE];
+ 
+-	key = si->key;
++	key = sd->key;
+ 	if (reason == TCM_CHECK_CONDITION_UNIT_ATTENTION) {
+ 		if (!core_scsi3_ua_for_check_condition(cmd, &key, &asc,
+ 						       &ascq)) {
+ 			cmd->scsi_status = SAM_STAT_BUSY;
+ 			return;
+ 		}
+-	} else if (si->asc == 0) {
++	} else if (sd->asc == 0) {
+ 		WARN_ON_ONCE(cmd->scsi_asc == 0);
+ 		asc = cmd->scsi_asc;
+ 		ascq = cmd->scsi_ascq;
+ 	} else {
+-		asc = si->asc;
+-		ascq = si->ascq;
++		asc = sd->asc;
++		ascq = sd->ascq;
+ 	}
+ 
+ 	cmd->se_cmd_flags |= SCF_EMULATED_TASK_SENSE;
+ 	cmd->scsi_status = SAM_STAT_CHECK_CONDITION;
+ 	cmd->scsi_sense_length  = TRANSPORT_SENSE_BUFFER;
+ 	scsi_build_sense_buffer(desc_format, buffer, key, asc, ascq);
+-	if (si->add_sector_info)
++	if (sd->add_sense_info)
+ 		WARN_ON_ONCE(scsi_set_sense_information(buffer,
+ 							cmd->scsi_sense_length,
+-							cmd->bad_sector) < 0);
++							cmd->sense_info) < 0);
+ }
+ 
+ int
+diff --git a/drivers/target/tcm_fc/tfc_cmd.c b/drivers/target/tcm_fc/tfc_cmd.c
+index a7ed56602c6cd..8936a094f8461 100644
+--- a/drivers/target/tcm_fc/tfc_cmd.c
++++ b/drivers/target/tcm_fc/tfc_cmd.c
+@@ -551,7 +551,7 @@ static void ft_send_work(struct work_struct *work)
+ 	if (target_submit_cmd(&cmd->se_cmd, cmd->sess->se_sess, fcp->fc_cdb,
+ 			      &cmd->ft_sense_buffer[0], scsilun_to_int(&fcp->fc_lun),
+ 			      ntohl(fcp->fc_dl), task_attr, data_dir,
+-			      TARGET_SCF_ACK_KREF | TARGET_SCF_USE_CPUID))
++			      TARGET_SCF_ACK_KREF))
+ 		goto err;
+ 
+ 	pr_debug("r_ctl %x target_submit_cmd %p\n", fh->fh_r_ctl, cmd);
+diff --git a/drivers/thermal/mtk_thermal.c b/drivers/thermal/mtk_thermal.c
+index 0bd7aa564bc25..9fe169dbed887 100644
+--- a/drivers/thermal/mtk_thermal.c
++++ b/drivers/thermal/mtk_thermal.c
+@@ -1026,7 +1026,12 @@ static int mtk_thermal_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	auxadc_base = of_iomap(auxadc, 0);
++	auxadc_base = devm_of_iomap(&pdev->dev, auxadc, 0, NULL);
++	if (IS_ERR(auxadc_base)) {
++		of_node_put(auxadc);
++		return PTR_ERR(auxadc_base);
++	}
++
+ 	auxadc_phys_base = of_get_phys_base(auxadc);
+ 
+ 	of_node_put(auxadc);
+@@ -1042,7 +1047,12 @@ static int mtk_thermal_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	apmixed_base = of_iomap(apmixedsys, 0);
++	apmixed_base = devm_of_iomap(&pdev->dev, apmixedsys, 0, NULL);
++	if (IS_ERR(apmixed_base)) {
++		of_node_put(apmixedsys);
++		return PTR_ERR(apmixed_base);
++	}
++
+ 	apmixed_phys_base = of_get_phys_base(apmixedsys);
+ 
+ 	of_node_put(apmixedsys);
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index f5063499f9cf6..23b014b8c9199 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -50,6 +50,7 @@
+ #include <linux/netdevice.h>
+ #include <linux/etherdevice.h>
+ #include <linux/gsmmux.h>
++#include "tty.h"
+ 
+ static int debug;
+ module_param(debug, int, 0600);
+diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c
+index 48c64e68017cd..697199a3ca019 100644
+--- a/drivers/tty/n_hdlc.c
++++ b/drivers/tty/n_hdlc.c
+@@ -100,6 +100,7 @@
+ 
+ #include <asm/termios.h>
+ #include <linux/uaccess.h>
++#include "tty.h"
+ 
+ /*
+  * Buffers for individual HDLC frames
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index 12dde01e576b5..8e7931d935438 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -49,6 +49,7 @@
+ #include <linux/module.h>
+ #include <linux/ratelimit.h>
+ #include <linux/vmalloc.h>
++#include "tty.h"
+ 
+ /*
+  * Until this number of characters is queued in the xmit buffer, select will
+diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
+index 16498f5fba64d..ca3e5a6c1a497 100644
+--- a/drivers/tty/pty.c
++++ b/drivers/tty/pty.c
+@@ -29,6 +29,7 @@
+ #include <linux/file.h>
+ #include <linux/ioctl.h>
+ #include <linux/compat.h>
++#include "tty.h"
+ 
+ #undef TTY_DEBUG_HANGUP
+ #ifdef TTY_DEBUG_HANGUP
+diff --git a/drivers/tty/serial/8250/8250.h b/drivers/tty/serial/8250/8250.h
+index b6dc9003b8c4a..0771cd2265813 100644
+--- a/drivers/tty/serial/8250/8250.h
++++ b/drivers/tty/serial/8250/8250.h
+@@ -330,6 +330,13 @@ extern int serial8250_rx_dma(struct uart_8250_port *);
+ extern void serial8250_rx_dma_flush(struct uart_8250_port *);
+ extern int serial8250_request_dma(struct uart_8250_port *);
+ extern void serial8250_release_dma(struct uart_8250_port *);
++
++static inline bool serial8250_tx_dma_running(struct uart_8250_port *p)
++{
++	struct uart_8250_dma *dma = p->dma;
++
++	return dma && dma->tx_running;
++}
+ #else
+ static inline int serial8250_tx_dma(struct uart_8250_port *p)
+ {
+@@ -345,6 +352,11 @@ static inline int serial8250_request_dma(struct uart_8250_port *p)
+ 	return -1;
+ }
+ static inline void serial8250_release_dma(struct uart_8250_port *p) { }
++
++static inline bool serial8250_tx_dma_running(struct uart_8250_port *p)
++{
++	return false;
++}
+ #endif
+ 
+ static inline int ns16550a_goto_highspeed(struct uart_8250_port *up)
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 1f231fcda657b..b19908779e3b8 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -15,6 +15,7 @@
+ #include <linux/moduleparam.h>
+ #include <linux/ioport.h>
+ #include <linux/init.h>
++#include <linux/irq.h>
+ #include <linux/console.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/sysrq.h>
+@@ -1889,6 +1890,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+ 	unsigned char status;
+ 	unsigned long flags;
+ 	struct uart_8250_port *up = up_to_u8250p(port);
++	struct tty_port *tport = &port->state->port;
+ 	bool skip_rx = false;
+ 
+ 	if (iir & UART_IIR_NO_INT)
+@@ -1912,6 +1914,8 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+ 		skip_rx = true;
+ 
+ 	if (status & (UART_LSR_DR | UART_LSR_BI) && !skip_rx) {
++		if (irqd_is_wakeup_set(irq_get_irq_data(port->irq)))
++			pm_wakeup_event(tport->tty->dev, 0);
+ 		if (!up->dma || handle_rx_dma(up, iir))
+ 			status = serial8250_rx_chars(up, status);
+ 	}
+@@ -1967,19 +1971,25 @@ static int serial8250_tx_threshold_handle_irq(struct uart_port *port)
+ static unsigned int serial8250_tx_empty(struct uart_port *port)
+ {
+ 	struct uart_8250_port *up = up_to_u8250p(port);
++	unsigned int result = 0;
+ 	unsigned long flags;
+ 	unsigned int lsr;
+ 
+ 	serial8250_rpm_get(up);
+ 
+ 	spin_lock_irqsave(&port->lock, flags);
+-	lsr = serial_port_in(port, UART_LSR);
+-	up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
++	if (!serial8250_tx_dma_running(up)) {
++		lsr = serial_port_in(port, UART_LSR);
++		up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
++
++		if ((lsr & BOTH_EMPTY) == BOTH_EMPTY)
++			result = TIOCSER_TEMT;
++	}
+ 	spin_unlock_irqrestore(&port->lock, flags);
+ 
+ 	serial8250_rpm_put(up);
+ 
+-	return (lsr & BOTH_EMPTY) == BOTH_EMPTY ? TIOCSER_TEMT : 0;
++	return result;
+ }
+ 
+ unsigned int serial8250_do_get_mctrl(struct uart_port *port)
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index f481c260b7049..a2efa81471f30 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1220,7 +1220,7 @@ static inline int lpuart_start_rx_dma(struct lpuart_port *sport)
+ 	 * 10ms at any baud rate.
+ 	 */
+ 	sport->rx_dma_rng_buf_len = (DMA_RX_TIMEOUT * baud /  bits / 1000) * 2;
+-	sport->rx_dma_rng_buf_len = (1 << (fls(sport->rx_dma_rng_buf_len) - 1));
++	sport->rx_dma_rng_buf_len = (1 << fls(sport->rx_dma_rng_buf_len));
+ 	if (sport->rx_dma_rng_buf_len < 16)
+ 		sport->rx_dma_rng_buf_len = 16;
+ 
+diff --git a/drivers/tty/tty.h b/drivers/tty/tty.h
+new file mode 100644
+index 0000000000000..1908f27a795a0
+--- /dev/null
++++ b/drivers/tty/tty.h
+@@ -0,0 +1,117 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * TTY core internal functions
++ */
++
++#ifndef _TTY_INTERNAL_H
++#define _TTY_INTERNAL_H
++
++#define tty_msg(fn, tty, f, ...) \
++	fn("%s %s: " f, tty_driver_name(tty), tty_name(tty), ##__VA_ARGS__)
++
++#define tty_debug(tty, f, ...)	tty_msg(pr_debug, tty, f, ##__VA_ARGS__)
++#define tty_info(tty, f, ...)	tty_msg(pr_info, tty, f, ##__VA_ARGS__)
++#define tty_notice(tty, f, ...)	tty_msg(pr_notice, tty, f, ##__VA_ARGS__)
++#define tty_warn(tty, f, ...)	tty_msg(pr_warn, tty, f, ##__VA_ARGS__)
++#define tty_err(tty, f, ...)	tty_msg(pr_err, tty, f, ##__VA_ARGS__)
++
++#define tty_info_ratelimited(tty, f, ...) \
++		tty_msg(pr_info_ratelimited, tty, f, ##__VA_ARGS__)
++
++/*
++ * Lock subclasses for tty locks
++ *
++ * TTY_LOCK_NORMAL is for normal ttys and master ptys.
++ * TTY_LOCK_SLAVE is for slave ptys only.
++ *
++ * Lock subclasses are necessary for handling nested locking with pty pairs.
++ * tty locks which use nested locking:
++ *
++ * legacy_mutex - Nested tty locks are necessary for releasing pty pairs.
++ *		  The stable lock order is master pty first, then slave pty.
++ * termios_rwsem - The stable lock order is tty_buffer lock->termios_rwsem.
++ *		   Subclassing this lock enables the slave pty to hold its
++ *		   termios_rwsem when claiming the master tty_buffer lock.
++ * tty_buffer lock - slave ptys can claim nested buffer lock when handling
++ *		     signal chars. The stable lock order is slave pty, then
++ *		     master.
++ */
++enum {
++	TTY_LOCK_NORMAL = 0,
++	TTY_LOCK_SLAVE,
++};
++
++/* Values for tty->flow_change */
++#define TTY_THROTTLE_SAFE	1
++#define TTY_UNTHROTTLE_SAFE	2
++
++static inline void __tty_set_flow_change(struct tty_struct *tty, int val)
++{
++	tty->flow_change = val;
++}
++
++static inline void tty_set_flow_change(struct tty_struct *tty, int val)
++{
++	tty->flow_change = val;
++	smp_mb();
++}
++
++int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout);
++void tty_ldisc_unlock(struct tty_struct *tty);
++
++int __tty_check_change(struct tty_struct *tty, int sig);
++int tty_check_change(struct tty_struct *tty);
++void __stop_tty(struct tty_struct *tty);
++void __start_tty(struct tty_struct *tty);
++void tty_write_unlock(struct tty_struct *tty);
++int tty_write_lock(struct tty_struct *tty, int ndelay);
++void tty_vhangup_session(struct tty_struct *tty);
++void tty_open_proc_set_tty(struct file *filp, struct tty_struct *tty);
++int tty_signal_session_leader(struct tty_struct *tty, int exit_session);
++void session_clear_tty(struct pid *session);
++void tty_buffer_free_all(struct tty_port *port);
++void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld);
++void tty_buffer_init(struct tty_port *port);
++void tty_buffer_set_lock_subclass(struct tty_port *port);
++bool tty_buffer_restart_work(struct tty_port *port);
++bool tty_buffer_cancel_work(struct tty_port *port);
++void tty_buffer_flush_work(struct tty_port *port);
++speed_t tty_termios_input_baud_rate(struct ktermios *termios);
++void tty_ldisc_hangup(struct tty_struct *tty, bool reset);
++int tty_ldisc_reinit(struct tty_struct *tty, int disc);
++long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
++long tty_jobctrl_ioctl(struct tty_struct *tty, struct tty_struct *real_tty,
++		       struct file *file, unsigned int cmd, unsigned long arg);
++void tty_default_fops(struct file_operations *fops);
++struct tty_struct *alloc_tty_struct(struct tty_driver *driver, int idx);
++int tty_alloc_file(struct file *file);
++void tty_add_file(struct tty_struct *tty, struct file *file);
++void tty_free_file(struct file *file);
++int tty_release(struct inode *inode, struct file *filp);
++
++#define tty_is_writelocked(tty)  (mutex_is_locked(&tty->atomic_write_lock))
++
++int tty_ldisc_setup(struct tty_struct *tty, struct tty_struct *o_tty);
++void tty_ldisc_release(struct tty_struct *tty);
++int __must_check tty_ldisc_init(struct tty_struct *tty);
++void tty_ldisc_deinit(struct tty_struct *tty);
++
++void tty_sysctl_init(void);
++
++/* tty_audit.c */
++#ifdef CONFIG_AUDIT
++void tty_audit_add_data(struct tty_struct *tty, const void *data, size_t size);
++void tty_audit_tiocsti(struct tty_struct *tty, char ch);
++#else
++static inline void tty_audit_add_data(struct tty_struct *tty, const void *data,
++				      size_t size)
++{
++}
++static inline void tty_audit_tiocsti(struct tty_struct *tty, char ch)
++{
++}
++#endif
++
++ssize_t redirected_tty_write(struct kiocb *, struct iov_iter *);
++
++#endif
+diff --git a/drivers/tty/tty_audit.c b/drivers/tty/tty_audit.c
+index 9f906a5b8e810..9b30edee71fe9 100644
+--- a/drivers/tty/tty_audit.c
++++ b/drivers/tty/tty_audit.c
+@@ -10,6 +10,7 @@
+ #include <linux/audit.h>
+ #include <linux/slab.h>
+ #include <linux/tty.h>
++#include "tty.h"
+ 
+ struct tty_audit_buf {
+ 	struct mutex mutex;	/* Protects all data below */
+diff --git a/drivers/tty/tty_baudrate.c b/drivers/tty/tty_baudrate.c
+index 84fec3c62d6a4..9d0093d84e085 100644
+--- a/drivers/tty/tty_baudrate.c
++++ b/drivers/tty/tty_baudrate.c
+@@ -8,6 +8,7 @@
+ #include <linux/termios.h>
+ #include <linux/tty.h>
+ #include <linux/export.h>
++#include "tty.h"
+ 
+ 
+ /*
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index 5bbc2e010b483..9f23798155573 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -17,7 +17,7 @@
+ #include <linux/delay.h>
+ #include <linux/module.h>
+ #include <linux/ratelimit.h>
+-
++#include "tty.h"
+ 
+ #define MIN_TTYB_SIZE	256
+ #define TTYB_ALIGN_MASK	255
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index c37d2657308cd..094e82a12d298 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -108,6 +108,7 @@
+ 
+ #include <linux/kmod.h>
+ #include <linux/nsproxy.h>
++#include "tty.h"
+ 
+ #undef TTY_DEBUG_HANGUP
+ #ifdef TTY_DEBUG_HANGUP
+@@ -941,13 +942,13 @@ static ssize_t tty_read(struct kiocb *iocb, struct iov_iter *to)
+ 	return i;
+ }
+ 
+-static void tty_write_unlock(struct tty_struct *tty)
++void tty_write_unlock(struct tty_struct *tty)
+ {
+ 	mutex_unlock(&tty->atomic_write_lock);
+ 	wake_up_interruptible_poll(&tty->write_wait, EPOLLOUT);
+ }
+ 
+-static int tty_write_lock(struct tty_struct *tty, int ndelay)
++int tty_write_lock(struct tty_struct *tty, int ndelay)
+ {
+ 	if (!mutex_trylock(&tty->atomic_write_lock)) {
+ 		if (ndelay)
+diff --git a/drivers/tty/tty_ioctl.c b/drivers/tty/tty_ioctl.c
+index 803da2d111c8c..68b07250dcb60 100644
+--- a/drivers/tty/tty_ioctl.c
++++ b/drivers/tty/tty_ioctl.c
+@@ -21,6 +21,7 @@
+ #include <linux/bitops.h>
+ #include <linux/mutex.h>
+ #include <linux/compat.h>
++#include "tty.h"
+ 
+ #include <asm/io.h>
+ #include <linux/uaccess.h>
+@@ -397,21 +398,42 @@ static int set_termios(struct tty_struct *tty, void __user *arg, int opt)
+ 	tmp_termios.c_ispeed = tty_termios_input_baud_rate(&tmp_termios);
+ 	tmp_termios.c_ospeed = tty_termios_baud_rate(&tmp_termios);
+ 
+-	ld = tty_ldisc_ref(tty);
++	if (opt & (TERMIOS_FLUSH|TERMIOS_WAIT)) {
++retry_write_wait:
++		retval = wait_event_interruptible(tty->write_wait, !tty_chars_in_buffer(tty));
++		if (retval < 0)
++			return retval;
+ 
+-	if (ld != NULL) {
+-		if ((opt & TERMIOS_FLUSH) && ld->ops->flush_buffer)
+-			ld->ops->flush_buffer(tty);
+-		tty_ldisc_deref(ld);
+-	}
++		if (tty_write_lock(tty, 0) < 0)
++			goto retry_write_wait;
+ 
+-	if (opt & TERMIOS_WAIT) {
+-		tty_wait_until_sent(tty, 0);
+-		if (signal_pending(current))
+-			return -ERESTARTSYS;
+-	}
++		/* Racing writer? */
++		if (tty_chars_in_buffer(tty)) {
++			tty_write_unlock(tty);
++			goto retry_write_wait;
++		}
+ 
+-	tty_set_termios(tty, &tmp_termios);
++		ld = tty_ldisc_ref(tty);
++		if (ld != NULL) {
++			if ((opt & TERMIOS_FLUSH) && ld->ops->flush_buffer)
++				ld->ops->flush_buffer(tty);
++			tty_ldisc_deref(ld);
++		}
++
++		if ((opt & TERMIOS_WAIT) && tty->ops->wait_until_sent) {
++			tty->ops->wait_until_sent(tty, 0);
++			if (signal_pending(current)) {
++				tty_write_unlock(tty);
++				return -ERESTARTSYS;
++			}
++		}
++
++		tty_set_termios(tty, &tmp_termios);
++
++		tty_write_unlock(tty);
++	} else {
++		tty_set_termios(tty, &tmp_termios);
++	}
+ 
+ 	/* FIXME: Arguably if tmp_termios == tty->termios AND the
+ 	   actual requested termios was not tmp_termios then we may
+diff --git a/drivers/tty/tty_jobctrl.c b/drivers/tty/tty_jobctrl.c
+index aa6d0537b379e..95d67613b25b6 100644
+--- a/drivers/tty/tty_jobctrl.c
++++ b/drivers/tty/tty_jobctrl.c
+@@ -11,6 +11,7 @@
+ #include <linux/tty.h>
+ #include <linux/fcntl.h>
+ #include <linux/uaccess.h>
++#include "tty.h"
+ 
+ static int is_ignored(int sig)
+ {
+diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c
+index fe37ec331289b..c23938b8628d1 100644
+--- a/drivers/tty/tty_ldisc.c
++++ b/drivers/tty/tty_ldisc.c
+@@ -19,6 +19,7 @@
+ #include <linux/seq_file.h>
+ #include <linux/uaccess.h>
+ #include <linux/ratelimit.h>
++#include "tty.h"
+ 
+ #undef LDISC_DEBUG_HANGUP
+ 
+diff --git a/drivers/tty/tty_mutex.c b/drivers/tty/tty_mutex.c
+index 2640635ee177d..393518a24cfe2 100644
+--- a/drivers/tty/tty_mutex.c
++++ b/drivers/tty/tty_mutex.c
+@@ -4,6 +4,7 @@
+ #include <linux/kallsyms.h>
+ #include <linux/semaphore.h>
+ #include <linux/sched.h>
++#include "tty.h"
+ 
+ /* Legacy tty mutex glue */
+ 
+diff --git a/drivers/tty/tty_port.c b/drivers/tty/tty_port.c
+index ea80bf872f543..cbb56f725bc4a 100644
+--- a/drivers/tty/tty_port.c
++++ b/drivers/tty/tty_port.c
+@@ -18,6 +18,7 @@
+ #include <linux/delay.h>
+ #include <linux/module.h>
+ #include <linux/serdev.h>
++#include "tty.h"
+ 
+ static int tty_port_default_receive_buf(struct tty_port *port,
+ 					const unsigned char *p,
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index f26dd1f054f21..3d18599c5b9e4 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -1090,7 +1090,7 @@ static int ci_hdrc_probe(struct platform_device *pdev)
+ 	ret = ci_usb_phy_init(ci);
+ 	if (ret) {
+ 		dev_err(dev, "unable to init phy: %d\n", ret);
+-		return ret;
++		goto ulpi_exit;
+ 	}
+ 
+ 	ci->hw_bank.phys = res->start;
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index d73f624ed42a3..5709b959b1d93 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1567,13 +1567,11 @@ static int dwc3_probe(struct platform_device *pdev)
+ 	spin_lock_init(&dwc->lock);
+ 	mutex_init(&dwc->mutex);
+ 
++	pm_runtime_get_noresume(dev);
+ 	pm_runtime_set_active(dev);
+ 	pm_runtime_use_autosuspend(dev);
+ 	pm_runtime_set_autosuspend_delay(dev, DWC3_DEFAULT_AUTOSUSPEND_DELAY);
+ 	pm_runtime_enable(dev);
+-	ret = pm_runtime_get_sync(dev);
+-	if (ret < 0)
+-		goto err1;
+ 
+ 	pm_runtime_forbid(dev);
+ 
+@@ -1633,12 +1631,10 @@ err3:
+ 	dwc3_free_event_buffers(dwc);
+ 
+ err2:
+-	pm_runtime_allow(&pdev->dev);
+-
+-err1:
+-	pm_runtime_put_sync(&pdev->dev);
+-	pm_runtime_disable(&pdev->dev);
+-
++	pm_runtime_allow(dev);
++	pm_runtime_disable(dev);
++	pm_runtime_set_suspended(dev);
++	pm_runtime_put_noidle(dev);
+ disable_clks:
+ 	clk_bulk_disable_unprepare(dwc->num_clks, dwc->clks);
+ assert_reset:
+@@ -1659,6 +1655,7 @@ static int dwc3_remove(struct platform_device *pdev)
+ 	dwc3_core_exit(dwc);
+ 	dwc3_ulpi_exit(dwc);
+ 
++	pm_runtime_allow(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_set_suspended(&pdev->dev);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 01cecde76140b..4e3b451ed749e 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3726,15 +3726,8 @@ static void dwc3_gadget_interrupt(struct dwc3 *dwc,
+ 		break;
+ 	case DWC3_DEVICE_EVENT_EOPF:
+ 		/* It changed to be suspend event for version 2.30a and above */
+-		if (!DWC3_VER_IS_PRIOR(DWC3, 230A)) {
+-			/*
+-			 * Ignore suspend event until the gadget enters into
+-			 * USB_STATE_CONFIGURED state.
+-			 */
+-			if (dwc->gadget->state >= USB_STATE_CONFIGURED)
+-				dwc3_gadget_suspend_interrupt(dwc,
+-						event->event_info);
+-		}
++		if (!DWC3_VER_IS_PRIOR(DWC3, 230A))
++			dwc3_gadget_suspend_interrupt(dwc, event->event_info);
+ 		break;
+ 	case DWC3_DEVICE_EVENT_SOF:
+ 	case DWC3_DEVICE_EVENT_ERRATIC_ERROR:
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 601829a6b4bad..a10f41c4a3f2f 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2568,6 +2568,7 @@ static int renesas_usb3_remove(struct platform_device *pdev)
+ 	debugfs_remove_recursive(usb3->dentry);
+ 	device_remove_file(&pdev->dev, &dev_attr_role);
+ 
++	cancel_work_sync(&usb3->role_work);
+ 	usb_role_switch_unregister(usb3->role_sw);
+ 
+ 	usb_del_gadget_udc(&usb3->gadget);
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index 3ebc8c5416e30..66d5f6a85c848 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -2154,7 +2154,7 @@ static int tegra_xudc_gadget_vbus_draw(struct usb_gadget *gadget,
+ 
+ 	dev_dbg(xudc->dev, "%s: %u mA\n", __func__, m_a);
+ 
+-	if (xudc->curr_usbphy->chg_type == SDP_TYPE)
++	if (xudc->curr_usbphy && xudc->curr_usbphy->chg_type == SDP_TYPE)
+ 		ret = usb_phy_set_power(xudc->curr_usbphy, m_a);
+ 
+ 	return ret;
+diff --git a/drivers/usb/host/xhci-debugfs.c b/drivers/usb/host/xhci-debugfs.c
+index dc832ddf7033f..bd40caeeb21c6 100644
+--- a/drivers/usb/host/xhci-debugfs.c
++++ b/drivers/usb/host/xhci-debugfs.c
+@@ -133,6 +133,7 @@ static void xhci_debugfs_regset(struct xhci_hcd *xhci, u32 base,
+ 	regset->regs = regs;
+ 	regset->nregs = nregs;
+ 	regset->base = hcd->regs + base;
++	regset->dev = hcd->self.controller;
+ 
+ 	debugfs_create_regset32((const char *)rgs->name, 0444, parent, regset);
+ }
+diff --git a/drivers/usb/host/xhci-rcar.c b/drivers/usb/host/xhci-rcar.c
+index 9888ba7d85b6a..cfafa1c50adea 100644
+--- a/drivers/usb/host/xhci-rcar.c
++++ b/drivers/usb/host/xhci-rcar.c
+@@ -75,7 +75,6 @@ MODULE_FIRMWARE(XHCI_RCAR_FIRMWARE_NAME_V3);
+ 
+ /* For soc_device_attribute */
+ #define RCAR_XHCI_FIRMWARE_V2   BIT(0) /* FIRMWARE V2 */
+-#define RCAR_XHCI_FIRMWARE_V3   BIT(1) /* FIRMWARE V3 */
+ 
+ static const struct soc_device_attribute rcar_quirks_match[]  = {
+ 	{
+@@ -147,8 +146,6 @@ static int xhci_rcar_download_firmware(struct usb_hcd *hcd)
+ 
+ 	if (quirks & RCAR_XHCI_FIRMWARE_V2)
+ 		firmware_name = XHCI_RCAR_FIRMWARE_NAME_V2;
+-	else if (quirks & RCAR_XHCI_FIRMWARE_V3)
+-		firmware_name = XHCI_RCAR_FIRMWARE_NAME_V3;
+ 	else
+ 		firmware_name = priv->firmware_name;
+ 
+diff --git a/drivers/usb/mtu3/mtu3_qmu.c b/drivers/usb/mtu3/mtu3_qmu.c
+index 2ea3157ddb6e2..e65586147965d 100644
+--- a/drivers/usb/mtu3/mtu3_qmu.c
++++ b/drivers/usb/mtu3/mtu3_qmu.c
+@@ -210,6 +210,7 @@ static struct qmu_gpd *advance_enq_gpd(struct mtu3_gpd_ring *ring)
+ 	return ring->enqueue;
+ }
+ 
++/* @dequeue may be NULL if ring is unallocated or freed */
+ static struct qmu_gpd *advance_deq_gpd(struct mtu3_gpd_ring *ring)
+ {
+ 	if (ring->dequeue < ring->end)
+@@ -484,7 +485,7 @@ static void qmu_done_tx(struct mtu3 *mtu, u8 epnum)
+ 	dev_dbg(mtu->dev, "%s EP%d, last=%p, current=%p, enq=%p\n",
+ 		__func__, epnum, gpd, gpd_current, ring->enqueue);
+ 
+-	while (gpd != gpd_current && !GET_GPD_HWO(gpd)) {
++	while (gpd && gpd != gpd_current && !GET_GPD_HWO(gpd)) {
+ 
+ 		mreq = next_request(mep);
+ 
+@@ -523,7 +524,7 @@ static void qmu_done_rx(struct mtu3 *mtu, u8 epnum)
+ 	dev_dbg(mtu->dev, "%s EP%d, last=%p, current=%p, enq=%p\n",
+ 		__func__, epnum, gpd, gpd_current, ring->enqueue);
+ 
+-	while (gpd != gpd_current && !GET_GPD_HWO(gpd)) {
++	while (gpd && gpd != gpd_current && !GET_GPD_HWO(gpd)) {
+ 
+ 		mreq = next_request(mep);
+ 
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index da8b7bd39703e..5b474efeab6ab 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -595,6 +595,11 @@ static void option_instat_callback(struct urb *urb);
+ #define SIERRA_VENDOR_ID			0x1199
+ #define SIERRA_PRODUCT_EM9191			0x90d3
+ 
++/* UNISOC (Spreadtrum) products */
++#define UNISOC_VENDOR_ID			0x1782
++/* TOZED LT70-C based on UNISOC SL8563 uses UNISOC's vendor ID */
++#define TOZED_PRODUCT_LT70C			0x4055
++
+ /* Device flags */
+ 
+ /* Highest interface number which can be used with NCTRL() and RSVD() */
+@@ -2225,6 +2230,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
+ 	{ } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/drivers/watchdog/dw_wdt.c b/drivers/watchdog/dw_wdt.c
+index 32d0e1781e63c..3cd1182819809 100644
+--- a/drivers/watchdog/dw_wdt.c
++++ b/drivers/watchdog/dw_wdt.c
+@@ -638,7 +638,7 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
+ 
+ 	ret = dw_wdt_init_timeouts(dw_wdt, dev);
+ 	if (ret)
+-		goto out_disable_clk;
++		goto out_assert_rst;
+ 
+ 	wdd = &dw_wdt->wdd;
+ 	wdd->ops = &dw_wdt_ops;
+@@ -669,12 +669,15 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
+ 
+ 	ret = watchdog_register_device(wdd);
+ 	if (ret)
+-		goto out_disable_pclk;
++		goto out_assert_rst;
+ 
+ 	dw_wdt_dbgfs_init(dw_wdt);
+ 
+ 	return 0;
+ 
++out_assert_rst:
++	reset_control_assert(dw_wdt->rst);
++
+ out_disable_pclk:
+ 	clk_disable_unprepare(dw_wdt->pclk);
+ 
+diff --git a/drivers/xen/pcpu.c b/drivers/xen/pcpu.c
+index 9cf7085a260b4..4581217e31fea 100644
+--- a/drivers/xen/pcpu.c
++++ b/drivers/xen/pcpu.c
+@@ -58,6 +58,7 @@ struct pcpu {
+ 	struct list_head list;
+ 	struct device dev;
+ 	uint32_t cpu_id;
++	uint32_t acpi_id;
+ 	uint32_t flags;
+ };
+ 
+@@ -249,6 +250,7 @@ static struct pcpu *create_and_register_pcpu(struct xenpf_pcpuinfo *info)
+ 
+ 	INIT_LIST_HEAD(&pcpu->list);
+ 	pcpu->cpu_id = info->xen_cpuid;
++	pcpu->acpi_id = info->acpi_id;
+ 	pcpu->flags = info->flags;
+ 
+ 	/* Need hold on xen_pcpu_lock before pcpu list manipulations */
+@@ -416,3 +418,21 @@ err1:
+ 	return ret;
+ }
+ arch_initcall(xen_pcpu_init);
++
++#ifdef CONFIG_ACPI
++bool __init xen_processor_present(uint32_t acpi_id)
++{
++	const struct pcpu *pcpu;
++	bool online = false;
++
++	mutex_lock(&xen_pcpu_lock);
++	list_for_each_entry(pcpu, &xen_pcpus, list)
++		if (pcpu->acpi_id == acpi_id) {
++			online = pcpu->flags & XEN_PCPU_FLAGS_ONLINE;
++			break;
++		}
++	mutex_unlock(&xen_pcpu_lock);
++
++	return online;
++}
++#endif
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 826fae22a8cc9..fdca4262f806a 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -218,6 +218,7 @@ static void afs_apply_status(struct afs_operation *op,
+ 			set_bit(AFS_VNODE_ZAP_DATA, &vnode->flags);
+ 		}
+ 		change_size = true;
++		data_changed = true;
+ 	} else if (vnode->status.type == AFS_FTYPE_DIR) {
+ 		/* Expected directory change is handled elsewhere so
+ 		 * that we can locally edit the directory and save on a
+diff --git a/fs/btrfs/block-rsv.c b/fs/btrfs/block-rsv.c
+index bc920afe23bf0..eb41dc2f6b40c 100644
+--- a/fs/btrfs/block-rsv.c
++++ b/fs/btrfs/block-rsv.c
+@@ -121,7 +121,8 @@ static u64 block_rsv_release_bytes(struct btrfs_fs_info *fs_info,
+ 	} else {
+ 		num_bytes = 0;
+ 	}
+-	if (block_rsv->qgroup_rsv_reserved >= block_rsv->qgroup_rsv_size) {
++	if (qgroup_to_release_ret &&
++	    block_rsv->qgroup_rsv_reserved >= block_rsv->qgroup_rsv_size) {
+ 		qgroup_to_release = block_rsv->qgroup_rsv_reserved -
+ 				    block_rsv->qgroup_rsv_size;
+ 		block_rsv->qgroup_rsv_reserved = block_rsv->qgroup_rsv_size;
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 5addd1e36a8ee..3e55245e54e7c 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -5160,10 +5160,12 @@ int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ int btrfs_prev_leaf(struct btrfs_root *root, struct btrfs_path *path)
+ {
+ 	struct btrfs_key key;
++	struct btrfs_key orig_key;
+ 	struct btrfs_disk_key found_key;
+ 	int ret;
+ 
+ 	btrfs_item_key_to_cpu(path->nodes[0], &key, 0);
++	orig_key = key;
+ 
+ 	if (key.offset > 0) {
+ 		key.offset--;
+@@ -5180,8 +5182,36 @@ int btrfs_prev_leaf(struct btrfs_root *root, struct btrfs_path *path)
+ 
+ 	btrfs_release_path(path);
+ 	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
+-	if (ret < 0)
++	if (ret <= 0)
+ 		return ret;
++
++	/*
++	 * Previous key not found. Even if we were at slot 0 of the leaf we had
++	 * before releasing the path and calling btrfs_search_slot(), we now may
++	 * be in a slot pointing to the same original key - this can happen if
++	 * after we released the path, one of more items were moved from a
++	 * sibling leaf into the front of the leaf we had due to an insertion
++	 * (see push_leaf_right()).
++	 * If we hit this case and our slot is > 0 and just decrement the slot
++	 * so that the caller does not process the same key again, which may or
++	 * may not break the caller, depending on its logic.
++	 */
++	if (path->slots[0] < btrfs_header_nritems(path->nodes[0])) {
++		btrfs_item_key(path->nodes[0], &found_key, path->slots[0]);
++		ret = comp_keys(&found_key, &orig_key);
++		if (ret == 0) {
++			if (path->slots[0] > 0) {
++				path->slots[0]--;
++				return 0;
++			}
++			/*
++			 * At slot 0, same key as before, it means orig_key is
++			 * the lowest, leftmost, key in the tree. We're done.
++			 */
++			return 1;
++		}
++	}
++
+ 	btrfs_item_key(path->nodes[0], &found_key, 0);
+ 	ret = comp_keys(&found_key, &key);
+ 	/*
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 10686b494f0a9..63bf68e0b0061 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3702,6 +3702,11 @@ static long btrfs_ioctl_scrub(struct file *file, void __user *arg)
+ 	if (IS_ERR(sa))
+ 		return PTR_ERR(sa);
+ 
++	if (sa->flags & ~BTRFS_SCRUB_SUPPORTED_FLAGS) {
++		ret = -EOPNOTSUPP;
++		goto out;
++	}
++
+ 	if (!(sa->flags & BTRFS_SCRUB_READONLY)) {
+ 		ret = mnt_want_write_file(file);
+ 		if (ret)
+diff --git a/fs/btrfs/print-tree.c b/fs/btrfs/print-tree.c
+index c62771f3af8c6..e98ba4e091b3b 100644
+--- a/fs/btrfs/print-tree.c
++++ b/fs/btrfs/print-tree.c
+@@ -147,10 +147,10 @@ static void print_extent_item(struct extent_buffer *eb, int slot, int type)
+ 			pr_cont("shared data backref parent %llu count %u\n",
+ 			       offset, btrfs_shared_data_ref_count(eb, sref));
+ 			/*
+-			 * offset is supposed to be a tree block which
+-			 * must be aligned to nodesize.
++			 * Offset is supposed to be a tree block which must be
++			 * aligned to sectorsize.
+ 			 */
+-			if (!IS_ALIGNED(offset, eb->fs_info->nodesize))
++			if (!IS_ALIGNED(offset, eb->fs_info->sectorsize))
+ 				pr_info(
+ 			"\t\t\t(parent %llu not aligned to sectorsize %u)\n",
+ 				     offset, eb->fs_info->sectorsize);
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 120c7cb11b02a..015b7b37edee5 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1784,7 +1784,7 @@ smb2_copychunk_range(const unsigned int xid,
+ 		pcchunk->SourceOffset = cpu_to_le64(src_off);
+ 		pcchunk->TargetOffset = cpu_to_le64(dest_off);
+ 		pcchunk->Length =
+-			cpu_to_le32(min_t(u32, len, tcon->max_bytes_chunk));
++			cpu_to_le32(min_t(u64, len, tcon->max_bytes_chunk));
+ 
+ 		/* Request server copy to target from src identified by key */
+ 		kfree(retbuf);
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 67a7ec9456866..ce52f708a403d 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -232,7 +232,7 @@ struct erofs_inode {
+ 
+ 	unsigned char datalayout;
+ 	unsigned char inode_isize;
+-	unsigned short xattr_isize;
++	unsigned int xattr_isize;
+ 
+ 	unsigned int xattr_shared_count;
+ 	unsigned int *xattr_shared_xattrs;
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index 14d2de35110cc..f18194fd8d770 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -179,6 +179,10 @@ static int legacy_load_cluster_from_disk(struct z_erofs_maprecorder *m,
+ 	case Z_EROFS_VLE_CLUSTER_TYPE_PLAIN:
+ 	case Z_EROFS_VLE_CLUSTER_TYPE_HEAD:
+ 		m->clusterofs = le16_to_cpu(di->di_clusterofs);
++		if (m->clusterofs >= 1 << vi->z_logical_clusterbits) {
++			DBG_BUGON(1);
++			return -EFSCORRUPTED;
++		}
+ 		m->pblk = le32_to_cpu(di->di_u.blkaddr);
+ 		break;
+ 	default:
+diff --git a/fs/ext4/acl.c b/fs/ext4/acl.c
+index 68aaed48315ff..76f634d185f10 100644
+--- a/fs/ext4/acl.c
++++ b/fs/ext4/acl.c
+@@ -242,7 +242,6 @@ retry:
+ 	handle = ext4_journal_start(inode, EXT4_HT_XATTR, credits);
+ 	if (IS_ERR(handle))
+ 		return PTR_ERR(handle);
+-	ext4_fc_start_update(inode);
+ 
+ 	if ((type == ACL_TYPE_ACCESS) && acl) {
+ 		error = posix_acl_update_mode(inode, &mode, &acl);
+@@ -260,7 +259,6 @@ retry:
+ 	}
+ out_stop:
+ 	ext4_journal_stop(handle);
+-	ext4_fc_stop_update(inode);
+ 	if (error == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
+ 		goto retry;
+ 	return error;
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index 1afd60fcd7723..50a0e90e8af9b 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -303,6 +303,22 @@ struct ext4_group_desc * ext4_get_group_desc(struct super_block *sb,
+ 	return desc;
+ }
+ 
++static ext4_fsblk_t ext4_valid_block_bitmap_padding(struct super_block *sb,
++						    ext4_group_t block_group,
++						    struct buffer_head *bh)
++{
++	ext4_grpblk_t next_zero_bit;
++	unsigned long bitmap_size = sb->s_blocksize * 8;
++	unsigned int offset = num_clusters_in_group(sb, block_group);
++
++	if (bitmap_size <= offset)
++		return 0;
++
++	next_zero_bit = ext4_find_next_zero_bit(bh->b_data, bitmap_size, offset);
++
++	return (next_zero_bit < bitmap_size ? next_zero_bit : 0);
++}
++
+ /*
+  * Return the block number which was discovered to be invalid, or 0 if
+  * the block bitmap is valid.
+@@ -401,6 +417,15 @@ static int ext4_validate_block_bitmap(struct super_block *sb,
+ 					EXT4_GROUP_INFO_BBITMAP_CORRUPT);
+ 		return -EFSCORRUPTED;
+ 	}
++	blk = ext4_valid_block_bitmap_padding(sb, block_group, bh);
++	if (unlikely(blk != 0)) {
++		ext4_unlock_group(sb, block_group);
++		ext4_error(sb, "bg %u: block %llu: padding at end of block bitmap is not set",
++			   block_group, blk);
++		ext4_mark_group_bitmap_corrupted(sb, block_group,
++						 EXT4_GROUP_INFO_BBITMAP_CORRUPT);
++		return -EFSCORRUPTED;
++	}
+ 	set_buffer_verified(bh);
+ verified:
+ 	ext4_unlock_group(sb, block_group);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index bf0872bb34f69..2c2e1cc43e0e8 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4694,7 +4694,6 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ 		     FALLOC_FL_INSERT_RANGE))
+ 		return -EOPNOTSUPP;
+ 
+-	ext4_fc_start_update(inode);
+ 	inode_lock(inode);
+ 	ret = ext4_convert_inline_data(inode);
+ 	inode_unlock(inode);
+@@ -4764,7 +4763,6 @@ out:
+ 	inode_unlock(inode);
+ 	trace_ext4_fallocate_exit(inode, offset, max_blocks, ret);
+ exit:
+-	ext4_fc_stop_update(inode);
+ 	return ret;
+ }
+ 
+@@ -5807,7 +5805,8 @@ int ext4_clu_mapped(struct inode *inode, ext4_lblk_t lclu)
+ 	 * mapped - no physical clusters have been allocated, and the
+ 	 * file has no extents
+ 	 */
+-	if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA))
++	if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA) ||
++	    ext4_has_inline_data(inode))
+ 		return 0;
+ 
+ 	/* search for the extent closest to the first block in the cluster */
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index aa99a3659edfc..fee54ab42bbaa 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -269,14 +269,12 @@ static void __es_find_extent_range(struct inode *inode,
+ 
+ 	/* see if the extent has been cached */
+ 	es->es_lblk = es->es_len = es->es_pblk = 0;
+-	if (tree->cache_es) {
+-		es1 = tree->cache_es;
+-		if (in_range(lblk, es1->es_lblk, es1->es_len)) {
+-			es_debug("%u cached by [%u/%u) %llu %x\n",
+-				 lblk, es1->es_lblk, es1->es_len,
+-				 ext4_es_pblock(es1), ext4_es_status(es1));
+-			goto out;
+-		}
++	es1 = READ_ONCE(tree->cache_es);
++	if (es1 && in_range(lblk, es1->es_lblk, es1->es_len)) {
++		es_debug("%u cached by [%u/%u) %llu %x\n",
++			 lblk, es1->es_lblk, es1->es_len,
++			 ext4_es_pblock(es1), ext4_es_status(es1));
++		goto out;
+ 	}
+ 
+ 	es1 = __es_tree_search(&tree->root, lblk);
+@@ -295,7 +293,7 @@ out:
+ 	}
+ 
+ 	if (es1 && matching_fn(es1)) {
+-		tree->cache_es = es1;
++		WRITE_ONCE(tree->cache_es, es1);
+ 		es->es_lblk = es1->es_lblk;
+ 		es->es_len = es1->es_len;
+ 		es->es_pblk = es1->es_pblk;
+@@ -934,14 +932,12 @@ int ext4_es_lookup_extent(struct inode *inode, ext4_lblk_t lblk,
+ 
+ 	/* find extent in cache firstly */
+ 	es->es_lblk = es->es_len = es->es_pblk = 0;
+-	if (tree->cache_es) {
+-		es1 = tree->cache_es;
+-		if (in_range(lblk, es1->es_lblk, es1->es_len)) {
+-			es_debug("%u cached by [%u/%u)\n",
+-				 lblk, es1->es_lblk, es1->es_len);
+-			found = 1;
+-			goto out;
+-		}
++	es1 = READ_ONCE(tree->cache_es);
++	if (es1 && in_range(lblk, es1->es_lblk, es1->es_len)) {
++		es_debug("%u cached by [%u/%u)\n",
++			 lblk, es1->es_lblk, es1->es_len);
++		found = 1;
++		goto out;
+ 	}
+ 
+ 	node = tree->root.rb_node;
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 0f61e0aa85d6f..f42cc1fe0ba1d 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -260,7 +260,6 @@ static ssize_t ext4_buffered_write_iter(struct kiocb *iocb,
+ 	if (iocb->ki_flags & IOCB_NOWAIT)
+ 		return -EOPNOTSUPP;
+ 
+-	ext4_fc_start_update(inode);
+ 	inode_lock(inode);
+ 	ret = ext4_write_checks(iocb, from);
+ 	if (ret <= 0)
+@@ -272,7 +271,6 @@ static ssize_t ext4_buffered_write_iter(struct kiocb *iocb,
+ 
+ out:
+ 	inode_unlock(inode);
+-	ext4_fc_stop_update(inode);
+ 	if (likely(ret > 0)) {
+ 		iocb->ki_pos += ret;
+ 		ret = generic_write_sync(iocb, ret);
+@@ -559,9 +557,7 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 			goto out;
+ 		}
+ 
+-		ext4_fc_start_update(inode);
+ 		ret = ext4_orphan_add(handle, inode);
+-		ext4_fc_stop_update(inode);
+ 		if (ret) {
+ 			ext4_journal_stop(handle);
+ 			goto out;
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 0758f606f0065..979935c078fb8 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -32,6 +32,7 @@ static int get_max_inline_xattr_value_size(struct inode *inode,
+ 	struct ext4_xattr_ibody_header *header;
+ 	struct ext4_xattr_entry *entry;
+ 	struct ext4_inode *raw_inode;
++	void *end;
+ 	int free, min_offs;
+ 
+ 	if (!EXT4_INODE_HAS_XATTR_SPACE(inode))
+@@ -55,14 +56,23 @@ static int get_max_inline_xattr_value_size(struct inode *inode,
+ 	raw_inode = ext4_raw_inode(iloc);
+ 	header = IHDR(inode, raw_inode);
+ 	entry = IFIRST(header);
++	end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
+ 
+ 	/* Compute min_offs. */
+-	for (; !IS_LAST_ENTRY(entry); entry = EXT4_XATTR_NEXT(entry)) {
++	while (!IS_LAST_ENTRY(entry)) {
++		void *next = EXT4_XATTR_NEXT(entry);
++
++		if (next >= end) {
++			EXT4_ERROR_INODE(inode,
++					 "corrupt xattr in inline inode");
++			return 0;
++		}
+ 		if (!entry->e_value_inum && entry->e_value_size) {
+ 			size_t offs = le16_to_cpu(entry->e_value_offs);
+ 			if (offs < min_offs)
+ 				min_offs = offs;
+ 		}
++		entry = next;
+ 	}
+ 	free = min_offs -
+ 		((void *)entry - (void *)IFIRST(header)) - sizeof(__u32);
+@@ -348,7 +358,7 @@ static int ext4_update_inline_data(handle_t *handle, struct inode *inode,
+ 
+ 	error = ext4_xattr_ibody_get(inode, i.name_index, i.name,
+ 				     value, len);
+-	if (error == -ENODATA)
++	if (error < 0)
+ 		goto out;
+ 
+ 	BUFFER_TRACE(is.iloc.bh, "get_write_access");
+@@ -1172,6 +1182,7 @@ static int ext4_finish_convert_inline_dir(handle_t *handle,
+ 		ext4_initialize_dirent_tail(dir_block,
+ 					    inode->i_sb->s_blocksize);
+ 	set_buffer_uptodate(dir_block);
++	unlock_buffer(dir_block);
+ 	err = ext4_handle_dirty_dirblock(handle, inode, dir_block);
+ 	if (err)
+ 		return err;
+@@ -1245,6 +1256,7 @@ static int ext4_convert_inline_data_nolock(handle_t *handle,
+ 	if (!S_ISDIR(inode->i_mode)) {
+ 		memcpy(data_bh->b_data, buf, inline_size);
+ 		set_buffer_uptodate(data_bh);
++		unlock_buffer(data_bh);
+ 		error = ext4_handle_dirty_metadata(handle,
+ 						   inode, data_bh);
+ 	} else {
+@@ -1252,7 +1264,6 @@ static int ext4_convert_inline_data_nolock(handle_t *handle,
+ 						       buf, inline_size);
+ 	}
+ 
+-	unlock_buffer(data_bh);
+ out_restore:
+ 	if (error)
+ 		ext4_restore_inline_data(handle, inode, iloc, buf, inline_size);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 9bd5f8b0511b2..735109b9e88da 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -3564,7 +3564,7 @@ static int ext4_iomap_overwrite_begin(struct inode *inode, loff_t offset,
+ 	 */
+ 	flags &= ~IOMAP_WRITE;
+ 	ret = ext4_iomap_begin(inode, offset, length, flags, iomap, srcmap);
+-	WARN_ON_ONCE(iomap->type != IOMAP_MAPPED);
++	WARN_ON_ONCE(!ret && iomap->type != IOMAP_MAPPED);
+ 	return ret;
+ }
+ 
+@@ -5437,7 +5437,7 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ 		if (error)
+ 			return error;
+ 	}
+-	ext4_fc_start_update(inode);
++
+ 	if ((ia_valid & ATTR_UID && !uid_eq(attr->ia_uid, inode->i_uid)) ||
+ 	    (ia_valid & ATTR_GID && !gid_eq(attr->ia_gid, inode->i_gid))) {
+ 		handle_t *handle;
+@@ -5461,7 +5461,6 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ 
+ 		if (error) {
+ 			ext4_journal_stop(handle);
+-			ext4_fc_stop_update(inode);
+ 			return error;
+ 		}
+ 		/* Update corresponding info in inode so that everything is in
+@@ -5473,7 +5472,6 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ 		error = ext4_mark_inode_dirty(handle, inode);
+ 		ext4_journal_stop(handle);
+ 		if (unlikely(error)) {
+-			ext4_fc_stop_update(inode);
+ 			return error;
+ 		}
+ 	}
+@@ -5488,12 +5486,10 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ 			struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ 
+ 			if (attr->ia_size > sbi->s_bitmap_maxbytes) {
+-				ext4_fc_stop_update(inode);
+ 				return -EFBIG;
+ 			}
+ 		}
+ 		if (!S_ISREG(inode->i_mode)) {
+-			ext4_fc_stop_update(inode);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -5619,7 +5615,6 @@ err_out:
+ 		ext4_std_error(inode->i_sb, error);
+ 	if (!error)
+ 		error = rc;
+-	ext4_fc_stop_update(inode);
+ 	return error;
+ }
+ 
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 53bdc67a815f6..1171618f6549a 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -1322,13 +1322,7 @@ out:
+ 
+ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ {
+-	long ret;
+-
+-	ext4_fc_start_update(file_inode(filp));
+-	ret = __ext4_ioctl(filp, cmd, arg);
+-	ext4_fc_stop_update(file_inode(filp));
+-
+-	return ret;
++	return __ext4_ioctl(filp, cmd, arg);
+ }
+ 
+ #ifdef CONFIG_COMPAT
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 843840c2aced9..a7c42e4bfc5ec 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -4250,7 +4250,11 @@ ext4_mb_release_group_pa(struct ext4_buddy *e4b,
+ 	trace_ext4_mb_release_group_pa(sb, pa);
+ 	BUG_ON(pa->pa_deleted == 0);
+ 	ext4_get_group_no_and_offset(sb, pa->pa_pstart, &group, &bit);
+-	BUG_ON(group != e4b->bd_group && pa->pa_len != 0);
++	if (unlikely(group != e4b->bd_group && pa->pa_len != 0)) {
++		ext4_warning(sb, "bad group: expected %u, group %u, pa_start %llu",
++			     e4b->bd_group, group, pa->pa_pstart);
++		return 0;
++	}
+ 	mb_free_blocks(pa->pa_inode, e4b, bit, pa->pa_len);
+ 	atomic_add(pa->pa_len, &EXT4_SB(sb)->s_mb_discarded);
+ 	trace_ext4_mballoc_discard(sb, NULL, group, bit, pa->pa_len);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index e940fb07ef2e9..8694be5132415 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -2831,11 +2831,9 @@ static __le16 ext4_group_desc_csum(struct super_block *sb, __u32 block_group,
+ 	crc = crc16(crc, (__u8 *)gdp, offset);
+ 	offset += sizeof(gdp->bg_checksum); /* skip checksum */
+ 	/* for checksum of struct ext4_group_desc do the rest...*/
+-	if (ext4_has_feature_64bit(sb) &&
+-	    offset < le16_to_cpu(sbi->s_es->s_desc_size))
++	if (ext4_has_feature_64bit(sb) && offset < sbi->s_desc_size)
+ 		crc = crc16(crc, (__u8 *)gdp + offset,
+-			    le16_to_cpu(sbi->s_es->s_desc_size) -
+-				offset);
++			    sbi->s_desc_size - offset);
+ 
+ out:
+ 	return cpu_to_le16(crc);
+@@ -6030,9 +6028,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	}
+ 
+ #ifdef CONFIG_QUOTA
+-	/* Release old quota file names */
+-	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+-		kfree(old_opts.s_qf_names[i]);
+ 	if (enable_quota) {
+ 		if (sb_any_quota_suspended(sb))
+ 			dquot_resume(sb, -1);
+@@ -6042,6 +6037,9 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 				goto restore_opts;
+ 		}
+ 	}
++	/* Release old quota file names */
++	for (i = 0; i < EXT4_MAXQUOTAS; i++)
++		kfree(old_opts.s_qf_names[i]);
+ #endif
+ 	if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks)
+ 		ext4_release_system_zone(sb);
+@@ -6061,6 +6059,13 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	return 0;
+ 
+ restore_opts:
++	/*
++	 * If there was a failing r/w to ro transition, we may need to
++	 * re-enable quota
++	 */
++	if ((sb->s_flags & SB_RDONLY) && !(old_sb_flags & SB_RDONLY) &&
++	    sb_any_quota_suspended(sb))
++		dquot_resume(sb, -1);
+ 	sb->s_flags = old_sb_flags;
+ 	sbi->s_mount_opt = old_opts.s_mount_opt;
+ 	sbi->s_mount_opt2 = old_opts.s_mount_opt2;
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 28fa9a64dc4be..abcba0255109e 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -2554,6 +2554,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 		.in_inode = !!entry->e_value_inum,
+ 	};
+ 	struct ext4_xattr_ibody_header *header = IHDR(inode, raw_inode);
++	int needs_kvfree = 0;
+ 	int error;
+ 
+ 	is = kzalloc(sizeof(struct ext4_xattr_ibody_find), GFP_NOFS);
+@@ -2576,7 +2577,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 			error = -ENOMEM;
+ 			goto out;
+ 		}
+-
++		needs_kvfree = 1;
+ 		error = ext4_xattr_inode_get(inode, entry, buffer, value_size);
+ 		if (error)
+ 			goto out;
+@@ -2615,7 +2616,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 
+ out:
+ 	kfree(b_entry_name);
+-	if (entry->e_value_inum && buffer)
++	if (needs_kvfree && buffer)
+ 		kvfree(buffer);
+ 	if (is)
+ 		brelse(is->iloc.bh);
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 1541da5ace85e..1be9de40f0b5a 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1391,6 +1391,12 @@ continue_unlock:
+ 		if (!PageDirty(cc->rpages[i]))
+ 			goto continue_unlock;
+ 
++		if (PageWriteback(cc->rpages[i])) {
++			if (wbc->sync_mode == WB_SYNC_NONE)
++				goto continue_unlock;
++			f2fs_wait_on_page_writeback(cc->rpages[i], DATA, true, true);
++		}
++
+ 		if (!clear_page_dirty_for_io(cc->rpages[i]))
+ 			goto continue_unlock;
+ 
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index db26e87b8f0dd..e9481c940895c 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -849,6 +849,8 @@ void f2fs_submit_merged_ipu_write(struct f2fs_sb_info *sbi,
+ 	bool found = false;
+ 	struct bio *target = bio ? *bio : NULL;
+ 
++	f2fs_bug_on(sbi, !target && !page);
++
+ 	for (temp = HOT; temp < NR_TEMP_TYPE && !found; temp++) {
+ 		struct f2fs_bio_info *io = sbi->write_io[DATA] + temp;
+ 		struct list_head *head = &io->bio_list;
+@@ -2917,7 +2919,8 @@ out:
+ 
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+ 		f2fs_submit_merged_write(sbi, DATA);
+-		f2fs_submit_merged_ipu_write(sbi, bio, NULL);
++		if (bio && *bio)
++			f2fs_submit_merged_ipu_write(sbi, bio, NULL);
+ 		submitted = NULL;
+ 	}
+ 
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index c03fdda1bddf6..62b7848f1f71e 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1153,7 +1153,6 @@ struct f2fs_dev_info {
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 	unsigned int nr_blkz;		/* Total number of zones */
+ 	unsigned long *blkz_seq;	/* Bitmap indicating sequential zones */
+-	block_t *zone_capacity_blocks;  /* Array of zone capacity in blks */
+ #endif
+ };
+ 
+@@ -1422,6 +1421,7 @@ struct f2fs_sb_info {
+ 	unsigned int meta_ino_num;		/* meta inode number*/
+ 	unsigned int log_blocks_per_seg;	/* log2 blocks per segment */
+ 	unsigned int blocks_per_seg;		/* blocks per segment */
++	unsigned int unusable_blocks_per_sec;	/* unusable blocks per section */
+ 	unsigned int segs_per_sec;		/* segments per section */
+ 	unsigned int secs_per_zone;		/* sections per zone */
+ 	unsigned int total_sections;		/* total section count */
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index d56fcace18211..a0d8aa52b696b 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -3013,15 +3013,16 @@ int f2fs_transfer_project_quota(struct inode *inode, kprojid_t kprojid)
+ 	struct dquot *transfer_to[MAXQUOTAS] = {};
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct super_block *sb = sbi->sb;
+-	int err = 0;
++	int err;
+ 
+ 	transfer_to[PRJQUOTA] = dqget(sb, make_kqid_projid(kprojid));
+-	if (!IS_ERR(transfer_to[PRJQUOTA])) {
+-		err = __dquot_transfer(inode, transfer_to);
+-		if (err)
+-			set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
+-		dqput(transfer_to[PRJQUOTA]);
+-	}
++	if (IS_ERR(transfer_to[PRJQUOTA]))
++		return PTR_ERR(transfer_to[PRJQUOTA]);
++
++	err = __dquot_transfer(inode, transfer_to);
++	if (err)
++		set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
++	dqput(transfer_to[PRJQUOTA]);
+ 	return err;
+ }
+ 
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 72b109685db47..98263180c0ead 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -969,12 +969,20 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 			goto out;
+ 	}
+ 
++	/*
++	 * Copied from ext4_rename: we need to protect against old.inode
++	 * directory getting converted from inline directory format into
++	 * a normal one.
++	 */
++	if (S_ISDIR(old_inode->i_mode))
++		inode_lock_nested(old_inode, I_MUTEX_NONDIR2);
++
+ 	err = -ENOENT;
+ 	old_entry = f2fs_find_entry(old_dir, &old_dentry->d_name, &old_page);
+ 	if (!old_entry) {
+ 		if (IS_ERR(old_page))
+ 			err = PTR_ERR(old_page);
+-		goto out;
++		goto out_unlock_old;
+ 	}
+ 
+ 	if (S_ISDIR(old_inode->i_mode)) {
+@@ -1082,6 +1090,9 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 
+ 	f2fs_unlock_op(sbi);
+ 
++	if (S_ISDIR(old_inode->i_mode))
++		inode_unlock(old_inode);
++
+ 	if (IS_DIRSYNC(old_dir) || IS_DIRSYNC(new_dir))
+ 		f2fs_sync_fs(sbi->sb, 1);
+ 
+@@ -1096,6 +1107,9 @@ out_dir:
+ 		f2fs_put_page(old_dir_page, 0);
+ out_old:
+ 	f2fs_put_page(old_page, 0);
++out_unlock_old:
++	if (S_ISDIR(old_inode->i_mode))
++		inode_unlock(old_inode);
+ out:
+ 	if (whiteout)
+ 		iput(whiteout);
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 7c90d93f4e435..a27a934292715 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -4957,54 +4957,6 @@ int f2fs_check_write_pointer(struct f2fs_sb_info *sbi)
+ 	return 0;
+ }
+ 
+-static bool is_conv_zone(struct f2fs_sb_info *sbi, unsigned int zone_idx,
+-						unsigned int dev_idx)
+-{
+-	if (!bdev_is_zoned(FDEV(dev_idx).bdev))
+-		return true;
+-	return !test_bit(zone_idx, FDEV(dev_idx).blkz_seq);
+-}
+-
+-/* Return the zone index in the given device */
+-static unsigned int get_zone_idx(struct f2fs_sb_info *sbi, unsigned int secno,
+-					int dev_idx)
+-{
+-	block_t sec_start_blkaddr = START_BLOCK(sbi, GET_SEG_FROM_SEC(sbi, secno));
+-
+-	return (sec_start_blkaddr - FDEV(dev_idx).start_blk) >>
+-						sbi->log_blocks_per_blkz;
+-}
+-
+-/*
+- * Return the usable segments in a section based on the zone's
+- * corresponding zone capacity. Zone is equal to a section.
+- */
+-static inline unsigned int f2fs_usable_zone_segs_in_sec(
+-		struct f2fs_sb_info *sbi, unsigned int segno)
+-{
+-	unsigned int dev_idx, zone_idx, unusable_segs_in_sec;
+-
+-	dev_idx = f2fs_target_device_index(sbi, START_BLOCK(sbi, segno));
+-	zone_idx = get_zone_idx(sbi, GET_SEC_FROM_SEG(sbi, segno), dev_idx);
+-
+-	/* Conventional zone's capacity is always equal to zone size */
+-	if (is_conv_zone(sbi, zone_idx, dev_idx))
+-		return sbi->segs_per_sec;
+-
+-	/*
+-	 * If the zone_capacity_blocks array is NULL, then zone capacity
+-	 * is equal to the zone size for all zones
+-	 */
+-	if (!FDEV(dev_idx).zone_capacity_blocks)
+-		return sbi->segs_per_sec;
+-
+-	/* Get the segment count beyond zone capacity block */
+-	unusable_segs_in_sec = (sbi->blocks_per_blkz -
+-				FDEV(dev_idx).zone_capacity_blocks[zone_idx]) >>
+-				sbi->log_blocks_per_seg;
+-	return sbi->segs_per_sec - unusable_segs_in_sec;
+-}
+-
+ /*
+  * Return the number of usable blocks in a segment. The number of blocks
+  * returned is always equal to the number of blocks in a segment for
+@@ -5017,26 +4969,15 @@ static inline unsigned int f2fs_usable_zone_blks_in_seg(
+ 			struct f2fs_sb_info *sbi, unsigned int segno)
+ {
+ 	block_t seg_start, sec_start_blkaddr, sec_cap_blkaddr;
+-	unsigned int zone_idx, dev_idx, secno;
+-
+-	secno = GET_SEC_FROM_SEG(sbi, segno);
+-	seg_start = START_BLOCK(sbi, segno);
+-	dev_idx = f2fs_target_device_index(sbi, seg_start);
+-	zone_idx = get_zone_idx(sbi, secno, dev_idx);
+-
+-	/*
+-	 * Conventional zone's capacity is always equal to zone size,
+-	 * so, blocks per segment is unchanged.
+-	 */
+-	if (is_conv_zone(sbi, zone_idx, dev_idx))
+-		return sbi->blocks_per_seg;
++	unsigned int secno;
+ 
+-	if (!FDEV(dev_idx).zone_capacity_blocks)
++	if (!sbi->unusable_blocks_per_sec)
+ 		return sbi->blocks_per_seg;
+ 
++	secno = GET_SEC_FROM_SEG(sbi, segno);
++	seg_start = START_BLOCK(sbi, segno);
+ 	sec_start_blkaddr = START_BLOCK(sbi, GET_SEG_FROM_SEC(sbi, secno));
+-	sec_cap_blkaddr = sec_start_blkaddr +
+-				FDEV(dev_idx).zone_capacity_blocks[zone_idx];
++	sec_cap_blkaddr = sec_start_blkaddr + CAP_BLKS_PER_SEC(sbi);
+ 
+ 	/*
+ 	 * If segment starts before zone capacity and spans beyond
+@@ -5068,11 +5009,6 @@ static inline unsigned int f2fs_usable_zone_blks_in_seg(struct f2fs_sb_info *sbi
+ 	return 0;
+ }
+ 
+-static inline unsigned int f2fs_usable_zone_segs_in_sec(struct f2fs_sb_info *sbi,
+-							unsigned int segno)
+-{
+-	return 0;
+-}
+ #endif
+ unsigned int f2fs_usable_blks_in_seg(struct f2fs_sb_info *sbi,
+ 					unsigned int segno)
+@@ -5087,7 +5023,7 @@ unsigned int f2fs_usable_segs_in_sec(struct f2fs_sb_info *sbi,
+ 					unsigned int segno)
+ {
+ 	if (f2fs_sb_has_blkzoned(sbi))
+-		return f2fs_usable_zone_segs_in_sec(sbi, segno);
++		return CAP_SEGS_PER_SEC(sbi);
+ 
+ 	return sbi->segs_per_sec;
+ }
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index eafd89f0c77e8..979296b835b5a 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -101,6 +101,12 @@ static inline void sanity_check_seg_type(struct f2fs_sb_info *sbi,
+ 		GET_SEGNO_FROM_SEG0(sbi, blk_addr)))
+ #define BLKS_PER_SEC(sbi)					\
+ 	((sbi)->segs_per_sec * (sbi)->blocks_per_seg)
++#define CAP_BLKS_PER_SEC(sbi)					\
++	((sbi)->segs_per_sec * (sbi)->blocks_per_seg -		\
++	 (sbi)->unusable_blocks_per_sec)
++#define CAP_SEGS_PER_SEC(sbi)					\
++	((sbi)->segs_per_sec - ((sbi)->unusable_blocks_per_sec >>\
++	(sbi)->log_blocks_per_seg))
+ #define GET_SEC_FROM_SEG(sbi, segno)				\
+ 	(((segno) == -1) ? -1: (segno) / (sbi)->segs_per_sec)
+ #define GET_SEG_FROM_SEC(sbi, secno)				\
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 0bba5c72fc77e..9a74d60f61dba 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1242,7 +1242,6 @@ static void destroy_device_list(struct f2fs_sb_info *sbi)
+ 		blkdev_put(FDEV(i).bdev, FMODE_EXCL);
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 		kvfree(FDEV(i).blkz_seq);
+-		kfree(FDEV(i).zone_capacity_blocks);
+ #endif
+ 	}
+ 	kvfree(sbi->devs);
+@@ -3199,24 +3198,29 @@ static int init_percpu_info(struct f2fs_sb_info *sbi)
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 
+ struct f2fs_report_zones_args {
++	struct f2fs_sb_info *sbi;
+ 	struct f2fs_dev_info *dev;
+-	bool zone_cap_mismatch;
+ };
+ 
+ static int f2fs_report_zone_cb(struct blk_zone *zone, unsigned int idx,
+ 			      void *data)
+ {
+ 	struct f2fs_report_zones_args *rz_args = data;
++	block_t unusable_blocks = (zone->len - zone->capacity) >>
++					F2FS_LOG_SECTORS_PER_BLOCK;
+ 
+ 	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+ 		return 0;
+ 
+ 	set_bit(idx, rz_args->dev->blkz_seq);
+-	rz_args->dev->zone_capacity_blocks[idx] = zone->capacity >>
+-						F2FS_LOG_SECTORS_PER_BLOCK;
+-	if (zone->len != zone->capacity && !rz_args->zone_cap_mismatch)
+-		rz_args->zone_cap_mismatch = true;
+-
++	if (!rz_args->sbi->unusable_blocks_per_sec) {
++		rz_args->sbi->unusable_blocks_per_sec = unusable_blocks;
++		return 0;
++	}
++	if (rz_args->sbi->unusable_blocks_per_sec != unusable_blocks) {
++		f2fs_err(rz_args->sbi, "F2FS supports single zone capacity\n");
++		return -EINVAL;
++	}
+ 	return 0;
+ }
+ 
+@@ -3250,26 +3254,13 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
+ 	if (!FDEV(devi).blkz_seq)
+ 		return -ENOMEM;
+ 
+-	/* Get block zones type and zone-capacity */
+-	FDEV(devi).zone_capacity_blocks = f2fs_kzalloc(sbi,
+-					FDEV(devi).nr_blkz * sizeof(block_t),
+-					GFP_KERNEL);
+-	if (!FDEV(devi).zone_capacity_blocks)
+-		return -ENOMEM;
+-
++	rep_zone_arg.sbi = sbi;
+ 	rep_zone_arg.dev = &FDEV(devi);
+-	rep_zone_arg.zone_cap_mismatch = false;
+ 
+ 	ret = blkdev_report_zones(bdev, 0, BLK_ALL_ZONES, f2fs_report_zone_cb,
+ 				  &rep_zone_arg);
+ 	if (ret < 0)
+ 		return ret;
+-
+-	if (!rep_zone_arg.zone_cap_mismatch) {
+-		kfree(FDEV(devi).zone_capacity_blocks);
+-		FDEV(devi).zone_capacity_blocks = NULL;
+-	}
+-
+ 	return 0;
+ }
+ #endif
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index 46c15dd2405c6..045a3bd520cae 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -700,7 +700,7 @@ void wbc_detach_inode(struct writeback_control *wbc)
+ 		 * is okay.  The main goal is avoiding keeping an inode on
+ 		 * the wrong wb for an extended period of time.
+ 		 */
+-		if (hweight32(history) > WB_FRN_HIST_THR_SLOTS)
++		if (hweight16(history) > WB_FRN_HIST_THR_SLOTS)
+ 			inode_switch_wbs(inode, max_id);
+ 	}
+ 
+@@ -884,6 +884,16 @@ restart:
+ 			continue;
+ 		}
+ 
++		/*
++		 * If wb_tryget fails, the wb has been shutdown, skip it.
++		 *
++		 * Pin @wb so that it stays on @bdi->wb_list.  This allows
++		 * continuing iteration from @wb after dropping and
++		 * regrabbing rcu read lock.
++		 */
++		if (!wb_tryget(wb))
++			continue;
++
+ 		/* alloc failed, execute synchronously using on-stack fallback */
+ 		work = &fallback_work;
+ 		*work = *base_work;
+@@ -892,13 +902,6 @@ restart:
+ 		work->done = &fallback_work_done;
+ 
+ 		wb_queue_work(wb, work);
+-
+-		/*
+-		 * Pin @wb so that it stays on @bdi->wb_list.  This allows
+-		 * continuing iteration from @wb after dropping and
+-		 * regrabbing rcu read lock.
+-		 */
+-		wb_get(wb);
+ 		last_wb = wb;
+ 
+ 		rcu_read_unlock();
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 6689d235de8a4..fee325d62bfd9 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -757,6 +757,7 @@ int jbd2_fc_begin_commit(journal_t *journal, tid_t tid)
+ 	}
+ 	journal->j_flags |= JBD2_FAST_COMMIT_ONGOING;
+ 	write_unlock(&journal->j_state_lock);
++	jbd2_journal_lock_updates(journal);
+ 
+ 	return 0;
+ }
+@@ -768,6 +769,7 @@ EXPORT_SYMBOL(jbd2_fc_begin_commit);
+  */
+ static int __jbd2_fc_end_commit(journal_t *journal, tid_t tid, bool fallback)
+ {
++	jbd2_journal_unlock_updates(journal);
+ 	if (journal->j_fc_cleanup_callback)
+ 		journal->j_fc_cleanup_callback(journal, 0);
+ 	write_lock(&journal->j_state_lock);
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 1923528154b52..1baf2d607268f 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -2378,6 +2378,9 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh,
+ 			spin_unlock(&jh->b_state_lock);
+ 			write_unlock(&journal->j_state_lock);
+ 			jbd2_journal_put_journal_head(jh);
++			/* Already zapped buffer? Nothing to do... */
++			if (!bh->b_bdev)
++				return 0;
+ 			return -EBUSY;
+ 		}
+ 		/*
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 628e030f8e3ba..ff6ca05a9d441 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -67,6 +67,8 @@
+ 
+ #define OPENOWNER_POOL_SIZE	8
+ 
++static void nfs4_state_start_reclaim_reboot(struct nfs_client *clp);
++
+ const nfs4_stateid zero_stateid = {
+ 	{ .data = { 0 } },
+ 	.type = NFS4_SPECIAL_STATEID_TYPE,
+@@ -330,6 +332,8 @@ do_confirm:
+ 	status = nfs4_proc_create_session(clp, cred);
+ 	if (status != 0)
+ 		goto out;
++	if (!(clp->cl_exchange_flags & EXCHGID4_FLAG_CONFIRMED_R))
++		nfs4_state_start_reclaim_reboot(clp);
+ 	nfs41_finish_session_reset(clp);
+ 	nfs_mark_client_ready(clp, NFS_CS_READY);
+ out:
+diff --git a/fs/nilfs2/bmap.c b/fs/nilfs2/bmap.c
+index 5900879d5693c..8ebb69c4ad186 100644
+--- a/fs/nilfs2/bmap.c
++++ b/fs/nilfs2/bmap.c
+@@ -67,20 +67,28 @@ int nilfs_bmap_lookup_at_level(struct nilfs_bmap *bmap, __u64 key, int level,
+ 
+ 	down_read(&bmap->b_sem);
+ 	ret = bmap->b_ops->bop_lookup(bmap, key, level, ptrp);
+-	if (ret < 0) {
+-		ret = nilfs_bmap_convert_error(bmap, __func__, ret);
++	if (ret < 0)
+ 		goto out;
+-	}
++
+ 	if (NILFS_BMAP_USE_VBN(bmap)) {
+ 		ret = nilfs_dat_translate(nilfs_bmap_get_dat(bmap), *ptrp,
+ 					  &blocknr);
+ 		if (!ret)
+ 			*ptrp = blocknr;
++		else if (ret == -ENOENT) {
++			/*
++			 * If there was no valid entry in DAT for the block
++			 * address obtained by b_ops->bop_lookup, then pass
++			 * internal code -EINVAL to nilfs_bmap_convert_error
++			 * to treat it as metadata corruption.
++			 */
++			ret = -EINVAL;
++		}
+ 	}
+ 
+  out:
+ 	up_read(&bmap->b_sem);
+-	return ret;
++	return nilfs_bmap_convert_error(bmap, __func__, ret);
+ }
+ 
+ int nilfs_bmap_lookup_contig(struct nilfs_bmap *bmap, __u64 key, __u64 *ptrp,
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index fff2cdc69e5ee..cdaca232ac4d6 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -2044,6 +2044,9 @@ static int nilfs_segctor_do_construct(struct nilfs_sc_info *sci, int mode)
+ 	struct the_nilfs *nilfs = sci->sc_super->s_fs_info;
+ 	int err;
+ 
++	if (sb_rdonly(sci->sc_super))
++		return -EROFS;
++
+ 	nilfs_sc_cstage_set(sci, NILFS_ST_INIT);
+ 	sci->sc_cno = nilfs->ns_cno;
+ 
+@@ -2729,7 +2732,7 @@ static void nilfs_segctor_write_out(struct nilfs_sc_info *sci)
+ 
+ 		flush_work(&sci->sc_iput_work);
+ 
+-	} while (ret && retrycount-- > 0);
++	} while (ret && ret != -EROFS && retrycount-- > 0);
+ }
+ 
+ /**
+diff --git a/fs/notify/inotify/inotify_fsnotify.c b/fs/notify/inotify/inotify_fsnotify.c
+index 1901d799909b8..66991c7fef9e2 100644
+--- a/fs/notify/inotify/inotify_fsnotify.c
++++ b/fs/notify/inotify/inotify_fsnotify.c
+@@ -64,7 +64,7 @@ int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 	struct fsnotify_event *fsn_event;
+ 	struct fsnotify_group *group = inode_mark->group;
+ 	int ret;
+-	int len = 0;
++	int len = 0, wd;
+ 	int alloc_len = sizeof(struct inotify_event_info);
+ 	struct mem_cgroup *old_memcg;
+ 
+@@ -79,6 +79,13 @@ int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 	i_mark = container_of(inode_mark, struct inotify_inode_mark,
+ 			      fsn_mark);
+ 
++	/*
++	 * We can be racing with mark being detached. Don't report event with
++	 * invalid wd.
++	 */
++	wd = READ_ONCE(i_mark->wd);
++	if (wd == -1)
++		return 0;
+ 	/*
+ 	 * Whoever is interested in the event, pays for the allocation. Do not
+ 	 * trigger OOM killer in the target monitoring memcg as it may have
+@@ -109,7 +116,7 @@ int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 	fsn_event = &event->fse;
+ 	fsnotify_init_event(fsn_event, 0);
+ 	event->mask = mask;
+-	event->wd = i_mark->wd;
++	event->wd = wd;
+ 	event->sync_cookie = cookie;
+ 	event->name_len = len;
+ 	if (len)
+diff --git a/fs/pstore/pmsg.c b/fs/pstore/pmsg.c
+index 18cf94b597e05..d8542ec2f38c6 100644
+--- a/fs/pstore/pmsg.c
++++ b/fs/pstore/pmsg.c
+@@ -7,10 +7,9 @@
+ #include <linux/device.h>
+ #include <linux/fs.h>
+ #include <linux/uaccess.h>
+-#include <linux/rtmutex.h>
+ #include "internal.h"
+ 
+-static DEFINE_RT_MUTEX(pmsg_lock);
++static DEFINE_MUTEX(pmsg_lock);
+ 
+ static ssize_t write_pmsg(struct file *file, const char __user *buf,
+ 			  size_t count, loff_t *ppos)
+@@ -29,9 +28,9 @@ static ssize_t write_pmsg(struct file *file, const char __user *buf,
+ 	if (!access_ok(buf, count))
+ 		return -EFAULT;
+ 
+-	rt_mutex_lock(&pmsg_lock);
++	mutex_lock(&pmsg_lock);
+ 	ret = psinfo->write_user(&record, buf);
+-	rt_mutex_unlock(&pmsg_lock);
++	mutex_unlock(&pmsg_lock);
+ 	return ret ? ret : count;
+ }
+ 
+diff --git a/fs/reiserfs/xattr_security.c b/fs/reiserfs/xattr_security.c
+index 59d87f9f72fb4..159af6c26f4bd 100644
+--- a/fs/reiserfs/xattr_security.c
++++ b/fs/reiserfs/xattr_security.c
+@@ -81,11 +81,15 @@ int reiserfs_security_write(struct reiserfs_transaction_handle *th,
+ 			    struct inode *inode,
+ 			    struct reiserfs_security_handle *sec)
+ {
++	char xattr_name[XATTR_NAME_MAX + 1] = XATTR_SECURITY_PREFIX;
+ 	int error;
+-	if (strlen(sec->name) < sizeof(XATTR_SECURITY_PREFIX))
++
++	if (XATTR_SECURITY_PREFIX_LEN + strlen(sec->name) > XATTR_NAME_MAX)
+ 		return -EINVAL;
+ 
+-	error = reiserfs_xattr_set_handle(th, inode, sec->name, sec->value,
++	strlcat(xattr_name, sec->name, sizeof(xattr_name));
++
++	error = reiserfs_xattr_set_handle(th, inode, xattr_name, sec->value,
+ 					  sec->length, XATTR_CREATE);
+ 	if (error == -ENODATA || error == -EOPNOTSUPP)
+ 		error = 0;
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 6039943877e10..bc562b1072d3e 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -425,6 +425,7 @@ static int do_tmpfile(struct inode *dir, struct dentry *dentry,
+ 	mutex_unlock(&dir_ui->ui_mutex);
+ 
+ 	ubifs_release_budget(c, &req);
++	fscrypt_free_filename(&nm);
+ 
+ 	return 0;
+ 
+diff --git a/fs/ubifs/tnc.c b/fs/ubifs/tnc.c
+index 07470449b9602..2313c7174e3c0 100644
+--- a/fs/ubifs/tnc.c
++++ b/fs/ubifs/tnc.c
+@@ -44,6 +44,33 @@ enum {
+ 	NOT_ON_MEDIA = 3,
+ };
+ 
++static void do_insert_old_idx(struct ubifs_info *c,
++			      struct ubifs_old_idx *old_idx)
++{
++	struct ubifs_old_idx *o;
++	struct rb_node **p, *parent = NULL;
++
++	p = &c->old_idx.rb_node;
++	while (*p) {
++		parent = *p;
++		o = rb_entry(parent, struct ubifs_old_idx, rb);
++		if (old_idx->lnum < o->lnum)
++			p = &(*p)->rb_left;
++		else if (old_idx->lnum > o->lnum)
++			p = &(*p)->rb_right;
++		else if (old_idx->offs < o->offs)
++			p = &(*p)->rb_left;
++		else if (old_idx->offs > o->offs)
++			p = &(*p)->rb_right;
++		else {
++			ubifs_err(c, "old idx added twice!");
++			kfree(old_idx);
++		}
++	}
++	rb_link_node(&old_idx->rb, parent, p);
++	rb_insert_color(&old_idx->rb, &c->old_idx);
++}
++
+ /**
+  * insert_old_idx - record an index node obsoleted since the last commit start.
+  * @c: UBIFS file-system description object
+@@ -69,35 +96,15 @@ enum {
+  */
+ static int insert_old_idx(struct ubifs_info *c, int lnum, int offs)
+ {
+-	struct ubifs_old_idx *old_idx, *o;
+-	struct rb_node **p, *parent = NULL;
++	struct ubifs_old_idx *old_idx;
+ 
+ 	old_idx = kmalloc(sizeof(struct ubifs_old_idx), GFP_NOFS);
+ 	if (unlikely(!old_idx))
+ 		return -ENOMEM;
+ 	old_idx->lnum = lnum;
+ 	old_idx->offs = offs;
++	do_insert_old_idx(c, old_idx);
+ 
+-	p = &c->old_idx.rb_node;
+-	while (*p) {
+-		parent = *p;
+-		o = rb_entry(parent, struct ubifs_old_idx, rb);
+-		if (lnum < o->lnum)
+-			p = &(*p)->rb_left;
+-		else if (lnum > o->lnum)
+-			p = &(*p)->rb_right;
+-		else if (offs < o->offs)
+-			p = &(*p)->rb_left;
+-		else if (offs > o->offs)
+-			p = &(*p)->rb_right;
+-		else {
+-			ubifs_err(c, "old idx added twice!");
+-			kfree(old_idx);
+-			return 0;
+-		}
+-	}
+-	rb_link_node(&old_idx->rb, parent, p);
+-	rb_insert_color(&old_idx->rb, &c->old_idx);
+ 	return 0;
+ }
+ 
+@@ -199,23 +206,6 @@ static struct ubifs_znode *copy_znode(struct ubifs_info *c,
+ 	__set_bit(DIRTY_ZNODE, &zn->flags);
+ 	__clear_bit(COW_ZNODE, &zn->flags);
+ 
+-	ubifs_assert(c, !ubifs_zn_obsolete(znode));
+-	__set_bit(OBSOLETE_ZNODE, &znode->flags);
+-
+-	if (znode->level != 0) {
+-		int i;
+-		const int n = zn->child_cnt;
+-
+-		/* The children now have new parent */
+-		for (i = 0; i < n; i++) {
+-			struct ubifs_zbranch *zbr = &zn->zbranch[i];
+-
+-			if (zbr->znode)
+-				zbr->znode->parent = zn;
+-		}
+-	}
+-
+-	atomic_long_inc(&c->dirty_zn_cnt);
+ 	return zn;
+ }
+ 
+@@ -233,6 +223,42 @@ static int add_idx_dirt(struct ubifs_info *c, int lnum, int dirt)
+ 	return ubifs_add_dirt(c, lnum, dirt);
+ }
+ 
++/**
++ * replace_znode - replace old znode with new znode.
++ * @c: UBIFS file-system description object
++ * @new_zn: new znode
++ * @old_zn: old znode
++ * @zbr: the branch of parent znode
++ *
++ * Replace old znode with new znode in TNC.
++ */
++static void replace_znode(struct ubifs_info *c, struct ubifs_znode *new_zn,
++			  struct ubifs_znode *old_zn, struct ubifs_zbranch *zbr)
++{
++	ubifs_assert(c, !ubifs_zn_obsolete(old_zn));
++	__set_bit(OBSOLETE_ZNODE, &old_zn->flags);
++
++	if (old_zn->level != 0) {
++		int i;
++		const int n = new_zn->child_cnt;
++
++		/* The children now have new parent */
++		for (i = 0; i < n; i++) {
++			struct ubifs_zbranch *child = &new_zn->zbranch[i];
++
++			if (child->znode)
++				child->znode->parent = new_zn;
++		}
++	}
++
++	zbr->znode = new_zn;
++	zbr->lnum = 0;
++	zbr->offs = 0;
++	zbr->len = 0;
++
++	atomic_long_inc(&c->dirty_zn_cnt);
++}
++
+ /**
+  * dirty_cow_znode - ensure a znode is not being committed.
+  * @c: UBIFS file-system description object
+@@ -265,28 +291,32 @@ static struct ubifs_znode *dirty_cow_znode(struct ubifs_info *c,
+ 		return zn;
+ 
+ 	if (zbr->len) {
+-		err = insert_old_idx(c, zbr->lnum, zbr->offs);
+-		if (unlikely(err))
+-			/*
+-			 * Obsolete znodes will be freed by tnc_destroy_cnext()
+-			 * or free_obsolete_znodes(), copied up znodes should
+-			 * be added back to tnc and freed by
+-			 * ubifs_destroy_tnc_subtree().
+-			 */
++		struct ubifs_old_idx *old_idx;
++
++		old_idx = kmalloc(sizeof(struct ubifs_old_idx), GFP_NOFS);
++		if (unlikely(!old_idx)) {
++			err = -ENOMEM;
+ 			goto out;
++		}
++		old_idx->lnum = zbr->lnum;
++		old_idx->offs = zbr->offs;
++
+ 		err = add_idx_dirt(c, zbr->lnum, zbr->len);
+-	} else
+-		err = 0;
++		if (err) {
++			kfree(old_idx);
++			goto out;
++		}
+ 
+-out:
+-	zbr->znode = zn;
+-	zbr->lnum = 0;
+-	zbr->offs = 0;
+-	zbr->len = 0;
++		do_insert_old_idx(c, old_idx);
++	}
++
++	replace_znode(c, zn, znode, zbr);
+ 
+-	if (unlikely(err))
+-		return ERR_PTR(err);
+ 	return zn;
++
++out:
++	kfree(zn);
++	return ERR_PTR(err);
+ }
+ 
+ /**
+diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
+index 9ea83d80eb6f9..dcbd41048b4e7 100644
+--- a/include/asm-generic/io.h
++++ b/include/asm-generic/io.h
+@@ -190,7 +190,7 @@ static inline u64 readq(const volatile void __iomem *addr)
+ 	u64 val;
+ 
+ 	__io_br();
+-	val = __le64_to_cpu(__raw_readq(addr));
++	val = __le64_to_cpu((__le64 __force)__raw_readq(addr));
+ 	__io_ar(val);
+ 	return val;
+ }
+@@ -233,7 +233,7 @@ static inline void writel(u32 value, volatile void __iomem *addr)
+ static inline void writeq(u64 value, volatile void __iomem *addr)
+ {
+ 	__io_bw();
+-	__raw_writeq(__cpu_to_le64(value), addr);
++	__raw_writeq((u64 __force)__cpu_to_le64(value), addr);
+ 	__io_aw();
+ }
+ #endif
+diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
+index 69b24fe92cbf1..5e96bad548047 100644
+--- a/include/linux/blk-crypto.h
++++ b/include/linux/blk-crypto.h
+@@ -97,8 +97,8 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
+ int blk_crypto_start_using_key(const struct blk_crypto_key *key,
+ 			       struct request_queue *q);
+ 
+-int blk_crypto_evict_key(struct request_queue *q,
+-			 const struct blk_crypto_key *key);
++void blk_crypto_evict_key(struct request_queue *q,
++			  const struct blk_crypto_key *key);
+ 
+ bool blk_crypto_config_supported(struct request_queue *q,
+ 				 const struct blk_crypto_config *cfg);
+diff --git a/include/linux/mailbox/zynqmp-ipi-message.h b/include/linux/mailbox/zynqmp-ipi-message.h
+index 35ce84c8ca02c..31d8046d945e7 100644
+--- a/include/linux/mailbox/zynqmp-ipi-message.h
++++ b/include/linux/mailbox/zynqmp-ipi-message.h
+@@ -9,7 +9,7 @@
+  * @data: message payload
+  *
+  * This is the structure for data used in mbox_send_message
+- * the maximum length of data buffer is fixed to 12 bytes.
++ * the maximum length of data buffer is fixed to 32 bytes.
+  * Client is supposed to be aware of this.
+  */
+ struct zynqmp_ipi_message {
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 6ca97729b54a4..88dbb20090805 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -8331,7 +8331,8 @@ struct mlx5_ifc_alloc_flow_counter_in_bits {
+ 	u8         reserved_at_20[0x10];
+ 	u8         op_mod[0x10];
+ 
+-	u8         reserved_at_40[0x38];
++	u8         reserved_at_40[0x33];
++	u8         flow_counter_bulk_log_size[0x5];
+ 	u8         flow_counter_bulk[0x8];
+ };
+ 
+diff --git a/include/linux/netfilter/nfnetlink.h b/include/linux/netfilter/nfnetlink.h
+index 791d516e1e880..0518ca72b7616 100644
+--- a/include/linux/netfilter/nfnetlink.h
++++ b/include/linux/netfilter/nfnetlink.h
+@@ -39,7 +39,6 @@ struct nfnetlink_subsystem {
+ 	int (*commit)(struct net *net, struct sk_buff *skb);
+ 	int (*abort)(struct net *net, struct sk_buff *skb,
+ 		     enum nfnl_abort_action action);
+-	void (*cleanup)(struct net *net);
+ 	bool (*valid_genid)(struct net *net, u32 genid);
+ };
+ 
+diff --git a/include/linux/nvme.h b/include/linux/nvme.h
+index fe39ed9e9303e..f454dd1003347 100644
+--- a/include/linux/nvme.h
++++ b/include/linux/nvme.h
+@@ -602,6 +602,10 @@ enum {
+ 	NVME_AER_VS			= 7,
+ };
+ 
++enum {
++	NVME_AER_ERROR_PERSIST_INT_ERR	= 0x03,
++};
++
+ enum {
+ 	NVME_AER_NOTICE_NS_CHANGED	= 0x00,
+ 	NVME_AER_NOTICE_FW_ACT_STARTING = 0x01,
+diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h
+index 913aa60228b16..8e284161b65e5 100644
+--- a/include/linux/posix-timers.h
++++ b/include/linux/posix-timers.h
+@@ -4,6 +4,7 @@
+ 
+ #include <linux/spinlock.h>
+ #include <linux/list.h>
++#include <linux/mutex.h>
+ #include <linux/alarmtimer.h>
+ #include <linux/timerqueue.h>
+ #include <linux/task_work.h>
+@@ -63,16 +64,18 @@ static inline int clockid_to_fd(const clockid_t clk)
+  * cpu_timer - Posix CPU timer representation for k_itimer
+  * @node:	timerqueue node to queue in the task/sig
+  * @head:	timerqueue head on which this timer is queued
+- * @task:	Pointer to target task
++ * @pid:	Pointer to target task PID
+  * @elist:	List head for the expiry list
+  * @firing:	Timer is currently firing
++ * @handling:	Pointer to the task which handles expiry
+  */
+ struct cpu_timer {
+-	struct timerqueue_node	node;
+-	struct timerqueue_head	*head;
+-	struct pid		*pid;
+-	struct list_head	elist;
+-	int			firing;
++	struct timerqueue_node		node;
++	struct timerqueue_head		*head;
++	struct pid			*pid;
++	struct list_head		elist;
++	int				firing;
++	struct task_struct __rcu	*handling;
+ };
+ 
+ static inline bool cpu_timer_enqueue(struct timerqueue_head *head,
+@@ -129,10 +132,12 @@ struct posix_cputimers {
+ /**
+  * posix_cputimers_work - Container for task work based posix CPU timer expiry
+  * @work:	The task work to be scheduled
++ * @mutex:	Mutex held around expiry in context of this task work
+  * @scheduled:  @work has been scheduled already, no further processing
+  */
+ struct posix_cputimers_work {
+ 	struct callback_head	work;
++	struct mutex		mutex;
+ 	unsigned int		scheduled;
+ };
+ 
+diff --git a/include/linux/printk.h b/include/linux/printk.h
+index fe7eb2351610d..344f6da3d4c36 100644
+--- a/include/linux/printk.h
++++ b/include/linux/printk.h
+@@ -623,4 +623,23 @@ static inline void print_hex_dump_debug(const char *prefix_str, int prefix_type,
+ #define print_hex_dump_bytes(prefix_str, prefix_type, buf, len)	\
+ 	print_hex_dump_debug(prefix_str, prefix_type, 16, 1, buf, len, true)
+ 
++#ifdef CONFIG_PRINTK
++extern void __printk_safe_enter(void);
++extern void __printk_safe_exit(void);
++/*
++ * The printk_deferred_enter/exit macros are available only as a hack for
++ * some code paths that need to defer all printk console printing. Interrupts
++ * must be disabled for the deferred duration.
++ */
++#define printk_deferred_enter __printk_safe_enter
++#define printk_deferred_exit __printk_safe_exit
++#else
++static inline void printk_deferred_enter(void)
++{
++}
++static inline void printk_deferred_exit(void)
++{
++}
++#endif
++
+ #endif
+diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h
+index df696efdd6753..256dff36cf720 100644
+--- a/include/linux/sunrpc/sched.h
++++ b/include/linux/sunrpc/sched.h
+@@ -90,8 +90,7 @@ struct rpc_task {
+ #endif
+ 	unsigned char		tk_priority : 2,/* Task priority */
+ 				tk_garb_retry : 2,
+-				tk_cred_retry : 2,
+-				tk_rebind_retry : 2;
++				tk_cred_retry : 2;
+ };
+ 
+ typedef void			(*rpc_action)(struct rpc_task *);
+diff --git a/include/linux/tick.h b/include/linux/tick.h
+index 7340613c7eff7..a90a8f7759a26 100644
+--- a/include/linux/tick.h
++++ b/include/linux/tick.h
+@@ -211,6 +211,7 @@ extern void tick_nohz_dep_set_signal(struct signal_struct *signal,
+ 				     enum tick_dep_bits bit);
+ extern void tick_nohz_dep_clear_signal(struct signal_struct *signal,
+ 				       enum tick_dep_bits bit);
++extern bool tick_nohz_cpu_hotpluggable(unsigned int cpu);
+ 
+ /*
+  * The below are tick_nohz_[set,clear]_dep() wrappers that optimize off-cases
+@@ -275,6 +276,7 @@ static inline void tick_nohz_full_add_cpus_to(struct cpumask *mask) { }
+ 
+ static inline void tick_nohz_dep_set_cpu(int cpu, enum tick_dep_bits bit) { }
+ static inline void tick_nohz_dep_clear_cpu(int cpu, enum tick_dep_bits bit) { }
++static inline bool tick_nohz_cpu_hotpluggable(unsigned int cpu) { return true; }
+ 
+ static inline void tick_dep_set(enum tick_dep_bits bit) { }
+ static inline void tick_dep_clear(enum tick_dep_bits bit) { }
+diff --git a/include/linux/tty.h b/include/linux/tty.h
+index 5972f43b9d5ae..e51d75f5165b5 100644
+--- a/include/linux/tty.h
++++ b/include/linux/tty.h
+@@ -16,30 +16,6 @@
+ #include <linux/llist.h>
+ 
+ 
+-/*
+- * Lock subclasses for tty locks
+- *
+- * TTY_LOCK_NORMAL is for normal ttys and master ptys.
+- * TTY_LOCK_SLAVE is for slave ptys only.
+- *
+- * Lock subclasses are necessary for handling nested locking with pty pairs.
+- * tty locks which use nested locking:
+- *
+- * legacy_mutex - Nested tty locks are necessary for releasing pty pairs.
+- *		  The stable lock order is master pty first, then slave pty.
+- * termios_rwsem - The stable lock order is tty_buffer lock->termios_rwsem.
+- *		   Subclassing this lock enables the slave pty to hold its
+- *		   termios_rwsem when claiming the master tty_buffer lock.
+- * tty_buffer lock - slave ptys can claim nested buffer lock when handling
+- *		     signal chars. The stable lock order is slave pty, then
+- *		     master.
+- */
+-
+-enum {
+-	TTY_LOCK_NORMAL = 0,
+-	TTY_LOCK_SLAVE,
+-};
+-
+ /*
+  * (Note: the *_driver.minor_start values 1, 64, 128, 192 are
+  * hardcoded at present.)
+@@ -374,21 +350,6 @@ struct tty_file_private {
+ #define TTY_LDISC_CHANGING	20	/* Change pending - non-block IO */
+ #define TTY_LDISC_HALTED	22	/* Line discipline is halted */
+ 
+-/* Values for tty->flow_change */
+-#define TTY_THROTTLE_SAFE 1
+-#define TTY_UNTHROTTLE_SAFE 2
+-
+-static inline void __tty_set_flow_change(struct tty_struct *tty, int val)
+-{
+-	tty->flow_change = val;
+-}
+-
+-static inline void tty_set_flow_change(struct tty_struct *tty, int val)
+-{
+-	tty->flow_change = val;
+-	smp_mb();
+-}
+-
+ static inline bool tty_io_nonblock(struct tty_struct *tty, struct file *file)
+ {
+ 	return file->f_flags & O_NONBLOCK ||
+@@ -419,9 +380,6 @@ extern const char *tty_name(const struct tty_struct *tty);
+ extern struct tty_struct *tty_kopen(dev_t device);
+ extern void tty_kclose(struct tty_struct *tty);
+ extern int tty_dev_name_to_number(const char *name, dev_t *number);
+-extern int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout);
+-extern void tty_ldisc_unlock(struct tty_struct *tty);
+-extern ssize_t redirected_tty_write(struct kiocb *, struct iov_iter *);
+ #else
+ static inline void tty_kref_put(struct tty_struct *tty)
+ { }
+@@ -474,11 +432,7 @@ static inline struct tty_struct *tty_kref_get(struct tty_struct *tty)
+ 
+ extern const char *tty_driver_name(const struct tty_struct *tty);
+ extern void tty_wait_until_sent(struct tty_struct *tty, long timeout);
+-extern int __tty_check_change(struct tty_struct *tty, int sig);
+-extern int tty_check_change(struct tty_struct *tty);
+-extern void __stop_tty(struct tty_struct *tty);
+ extern void stop_tty(struct tty_struct *tty);
+-extern void __start_tty(struct tty_struct *tty);
+ extern void start_tty(struct tty_struct *tty);
+ extern int tty_register_driver(struct tty_driver *driver);
+ extern int tty_unregister_driver(struct tty_driver *driver);
+@@ -503,23 +457,11 @@ extern int tty_do_resize(struct tty_struct *tty, struct winsize *ws);
+ extern int is_current_pgrp_orphaned(void);
+ extern void tty_hangup(struct tty_struct *tty);
+ extern void tty_vhangup(struct tty_struct *tty);
+-extern void tty_vhangup_session(struct tty_struct *tty);
+ extern int tty_hung_up_p(struct file *filp);
+ extern void do_SAK(struct tty_struct *tty);
+ extern void __do_SAK(struct tty_struct *tty);
+-extern void tty_open_proc_set_tty(struct file *filp, struct tty_struct *tty);
+-extern int tty_signal_session_leader(struct tty_struct *tty, int exit_session);
+-extern void session_clear_tty(struct pid *session);
+ extern void no_tty(void);
+-extern void tty_buffer_free_all(struct tty_port *port);
+-extern void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld);
+-extern void tty_buffer_init(struct tty_port *port);
+-extern void tty_buffer_set_lock_subclass(struct tty_port *port);
+-extern bool tty_buffer_restart_work(struct tty_port *port);
+-extern bool tty_buffer_cancel_work(struct tty_port *port);
+-extern void tty_buffer_flush_work(struct tty_port *port);
+ extern speed_t tty_termios_baud_rate(struct ktermios *termios);
+-extern speed_t tty_termios_input_baud_rate(struct ktermios *termios);
+ extern void tty_termios_encode_baud_rate(struct ktermios *termios,
+ 						speed_t ibaud, speed_t obaud);
+ extern void tty_encode_baud_rate(struct tty_struct *tty,
+@@ -547,27 +489,16 @@ extern int tty_set_termios(struct tty_struct *tty, struct ktermios *kt);
+ extern struct tty_ldisc *tty_ldisc_ref(struct tty_struct *);
+ extern void tty_ldisc_deref(struct tty_ldisc *);
+ extern struct tty_ldisc *tty_ldisc_ref_wait(struct tty_struct *);
+-extern void tty_ldisc_hangup(struct tty_struct *tty, bool reset);
+-extern int tty_ldisc_reinit(struct tty_struct *tty, int disc);
+ extern const struct seq_operations tty_ldiscs_seq_ops;
+ 
+ extern void tty_wakeup(struct tty_struct *tty);
+ extern void tty_ldisc_flush(struct tty_struct *tty);
+ 
+-extern long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
+ extern int tty_mode_ioctl(struct tty_struct *tty, struct file *file,
+ 			unsigned int cmd, unsigned long arg);
+-extern long tty_jobctrl_ioctl(struct tty_struct *tty, struct tty_struct *real_tty,
+-			      struct file *file, unsigned int cmd, unsigned long arg);
+ extern int tty_perform_flush(struct tty_struct *tty, unsigned long arg);
+-extern void tty_default_fops(struct file_operations *fops);
+-extern struct tty_struct *alloc_tty_struct(struct tty_driver *driver, int idx);
+-extern int tty_alloc_file(struct file *file);
+-extern void tty_add_file(struct tty_struct *tty, struct file *file);
+-extern void tty_free_file(struct file *file);
+ extern struct tty_struct *tty_init_dev(struct tty_driver *driver, int idx);
+ extern void tty_release_struct(struct tty_struct *tty, int idx);
+-extern int tty_release(struct inode *inode, struct file *filp);
+ extern void tty_init_termios(struct tty_struct *tty);
+ extern void tty_save_termios(struct tty_struct *tty);
+ extern int tty_standard_install(struct tty_driver *driver,
+@@ -575,8 +506,6 @@ extern int tty_standard_install(struct tty_driver *driver,
+ 
+ extern struct mutex tty_mutex;
+ 
+-#define tty_is_writelocked(tty)  (mutex_is_locked(&tty->atomic_write_lock))
+-
+ extern void tty_port_init(struct tty_port *port);
+ extern void tty_port_link_device(struct tty_port *port,
+ 		struct tty_driver *driver, unsigned index);
+@@ -714,10 +643,6 @@ static inline int tty_port_users(struct tty_port *port)
+ extern int tty_register_ldisc(int disc, struct tty_ldisc_ops *new_ldisc);
+ extern int tty_unregister_ldisc(int disc);
+ extern int tty_set_ldisc(struct tty_struct *tty, int disc);
+-extern int tty_ldisc_setup(struct tty_struct *tty, struct tty_struct *o_tty);
+-extern void tty_ldisc_release(struct tty_struct *tty);
+-extern int __must_check tty_ldisc_init(struct tty_struct *tty);
+-extern void tty_ldisc_deinit(struct tty_struct *tty);
+ extern int tty_ldisc_receive_buf(struct tty_ldisc *ld, const unsigned char *p,
+ 				 char *f, int count);
+ 
+@@ -731,20 +656,10 @@ static inline void n_tty_init(void) { }
+ 
+ /* tty_audit.c */
+ #ifdef CONFIG_AUDIT
+-extern void tty_audit_add_data(struct tty_struct *tty, const void *data,
+-			       size_t size);
+ extern void tty_audit_exit(void);
+ extern void tty_audit_fork(struct signal_struct *sig);
+-extern void tty_audit_tiocsti(struct tty_struct *tty, char ch);
+ extern int tty_audit_push(void);
+ #else
+-static inline void tty_audit_add_data(struct tty_struct *tty, const void *data,
+-				      size_t size)
+-{
+-}
+-static inline void tty_audit_tiocsti(struct tty_struct *tty, char ch)
+-{
+-}
+ static inline void tty_audit_exit(void)
+ {
+ }
+@@ -786,16 +701,4 @@ static inline void proc_tty_register_driver(struct tty_driver *d) {}
+ static inline void proc_tty_unregister_driver(struct tty_driver *d) {}
+ #endif
+ 
+-#define tty_msg(fn, tty, f, ...) \
+-	fn("%s %s: " f, tty_driver_name(tty), tty_name(tty), ##__VA_ARGS__)
+-
+-#define tty_debug(tty, f, ...)	tty_msg(pr_debug, tty, f, ##__VA_ARGS__)
+-#define tty_info(tty, f, ...)	tty_msg(pr_info, tty, f, ##__VA_ARGS__)
+-#define tty_notice(tty, f, ...)	tty_msg(pr_notice, tty, f, ##__VA_ARGS__)
+-#define tty_warn(tty, f, ...)	tty_msg(pr_warn, tty, f, ##__VA_ARGS__)
+-#define tty_err(tty, f, ...)	tty_msg(pr_err, tty, f, ##__VA_ARGS__)
+-
+-#define tty_info_ratelimited(tty, f, ...) \
+-		tty_msg(pr_info_ratelimited, tty, f, ##__VA_ARGS__)
+-
+ #endif
+diff --git a/include/linux/vt_buffer.h b/include/linux/vt_buffer.h
+index 848db1b1569ff..919d999a8c1db 100644
+--- a/include/linux/vt_buffer.h
++++ b/include/linux/vt_buffer.h
+@@ -16,7 +16,7 @@
+ 
+ #include <linux/string.h>
+ 
+-#if defined(CONFIG_VGA_CONSOLE) || defined(CONFIG_MDA_CONSOLE)
++#if IS_ENABLED(CONFIG_VGA_CONSOLE) || IS_ENABLED(CONFIG_MDA_CONSOLE)
+ #include <asm/vga.h>
+ #endif
+ 
+diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
+index 26de0cae2a0a8..2fa9b311e5663 100644
+--- a/include/linux/workqueue.h
++++ b/include/linux/workqueue.h
+@@ -29,7 +29,7 @@ void delayed_work_timer_fn(struct timer_list *t);
+ 
+ enum {
+ 	WORK_STRUCT_PENDING_BIT	= 0,	/* work item is pending execution */
+-	WORK_STRUCT_DELAYED_BIT	= 1,	/* work item is delayed */
++	WORK_STRUCT_INACTIVE_BIT= 1,	/* work item is inactive */
+ 	WORK_STRUCT_PWQ_BIT	= 2,	/* data points to pwq */
+ 	WORK_STRUCT_LINKED_BIT	= 3,	/* next work is linked to this one */
+ #ifdef CONFIG_DEBUG_OBJECTS_WORK
+@@ -42,7 +42,7 @@ enum {
+ 	WORK_STRUCT_COLOR_BITS	= 4,
+ 
+ 	WORK_STRUCT_PENDING	= 1 << WORK_STRUCT_PENDING_BIT,
+-	WORK_STRUCT_DELAYED	= 1 << WORK_STRUCT_DELAYED_BIT,
++	WORK_STRUCT_INACTIVE	= 1 << WORK_STRUCT_INACTIVE_BIT,
+ 	WORK_STRUCT_PWQ		= 1 << WORK_STRUCT_PWQ_BIT,
+ 	WORK_STRUCT_LINKED	= 1 << WORK_STRUCT_LINKED_BIT,
+ #ifdef CONFIG_DEBUG_OBJECTS_WORK
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index e66fee99ed3ea..564fbe0c865fd 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -507,6 +507,7 @@ struct nft_set_binding {
+ };
+ 
+ enum nft_trans_phase;
++void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set);
+ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			      struct nft_set_binding *binding,
+ 			      enum nft_trans_phase phase);
+diff --git a/include/net/scm.h b/include/net/scm.h
+index 1ce365f4c2560..585adc1346bd0 100644
+--- a/include/net/scm.h
++++ b/include/net/scm.h
+@@ -105,16 +105,27 @@ static inline void scm_passec(struct socket *sock, struct msghdr *msg, struct sc
+ 		}
+ 	}
+ }
++
++static inline bool scm_has_secdata(struct socket *sock)
++{
++	return test_bit(SOCK_PASSSEC, &sock->flags);
++}
+ #else
+ static inline void scm_passec(struct socket *sock, struct msghdr *msg, struct scm_cookie *scm)
+ { }
++
++static inline bool scm_has_secdata(struct socket *sock)
++{
++	return false;
++}
+ #endif /* CONFIG_SECURITY_NETWORK */
+ 
+ static __inline__ void scm_recv(struct socket *sock, struct msghdr *msg,
+ 				struct scm_cookie *scm, int flags)
+ {
+ 	if (!msg->msg_control) {
+-		if (test_bit(SOCK_PASSCRED, &sock->flags) || scm->fp)
++		if (test_bit(SOCK_PASSCRED, &sock->flags) || scm->fp ||
++		    scm_has_secdata(sock))
+ 			msg->msg_flags |= MSG_CTRUNC;
+ 		scm_destroy(scm);
+ 		return;
+diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
+index c9a47d3d8f503..5a63e3b12335a 100644
+--- a/include/net/xsk_buff_pool.h
++++ b/include/net/xsk_buff_pool.h
+@@ -150,13 +150,8 @@ static inline bool xp_desc_crosses_non_contig_pg(struct xsk_buff_pool *pool,
+ 	if (likely(!cross_pg))
+ 		return false;
+ 
+-	if (pool->dma_pages_cnt) {
+-		return !(pool->dma_pages[addr >> PAGE_SHIFT] &
+-			 XSK_NEXT_PG_CONTIG_MASK);
+-	}
+-
+-	/* skb path */
+-	return addr + len > pool->addrs_cnt;
++	return pool->dma_pages_cnt &&
++	       !(pool->dma_pages[addr >> PAGE_SHIFT] & XSK_NEXT_PG_CONTIG_MASK);
+ }
+ 
+ static inline u64 xp_aligned_extract_addr(struct xsk_buff_pool *pool, u64 addr)
+diff --git a/include/soc/bcm2835/raspberrypi-firmware.h b/include/soc/bcm2835/raspberrypi-firmware.h
+index fdfef7fe40df9..73ad784fca966 100644
+--- a/include/soc/bcm2835/raspberrypi-firmware.h
++++ b/include/soc/bcm2835/raspberrypi-firmware.h
+@@ -142,6 +142,8 @@ int rpi_firmware_property_list(struct rpi_firmware *fw,
+ 			       void *data, size_t tag_size);
+ void rpi_firmware_put(struct rpi_firmware *fw);
+ struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node);
++struct rpi_firmware *devm_rpi_firmware_get(struct device *dev,
++					   struct device_node *firmware_node);
+ #else
+ static inline int rpi_firmware_property(struct rpi_firmware *fw, u32 tag,
+ 					void *data, size_t len)
+@@ -160,6 +162,12 @@ static inline struct rpi_firmware *rpi_firmware_get(struct device_node *firmware
+ {
+ 	return NULL;
+ }
++
++static inline struct rpi_firmware *devm_rpi_firmware_get(struct device *dev,
++					struct device_node *firmware_node)
++{
++	return NULL;
++}
+ #endif
+ 
+ #endif /* __SOC_RASPBERRY_FIRMWARE_H__ */
+diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
+index 18a5dcd275f88..e681c712fcca2 100644
+--- a/include/target/target_core_base.h
++++ b/include/target/target_core_base.h
+@@ -540,7 +540,11 @@ struct se_cmd {
+ 	struct scatterlist	*t_prot_sg;
+ 	unsigned int		t_prot_nents;
+ 	sense_reason_t		pi_err;
+-	sector_t		bad_sector;
++	u64			sense_info;
++	/*
++	 * CPU LIO will execute the cmd on. Defaults to the CPU the cmd is
++	 * initialized on. Drivers can override.
++	 */
+ 	int			cpuid;
+ };
+ 
+@@ -761,6 +765,11 @@ struct se_dev_stat_grps {
+ 	struct config_group scsi_lu_group;
+ };
+ 
++struct se_device_queue {
++	struct list_head	state_list;
++	spinlock_t		lock;
++};
++
+ struct se_device {
+ 	/* RELATIVE TARGET PORT IDENTIFER Counter */
+ 	u16			dev_rpti_counter;
+@@ -794,7 +803,6 @@ struct se_device {
+ 	atomic_t		dev_qf_count;
+ 	u32			export_count;
+ 	spinlock_t		delayed_cmd_lock;
+-	spinlock_t		execute_task_lock;
+ 	spinlock_t		dev_reservation_lock;
+ 	unsigned int		dev_reservation_flags;
+ #define DRF_SPC2_RESERVATIONS			0x00000001
+@@ -814,7 +822,6 @@ struct se_device {
+ 	struct work_struct	qf_work_queue;
+ 	struct work_struct	delayed_cmd_work;
+ 	struct list_head	delayed_cmd_list;
+-	struct list_head	state_list;
+ 	struct list_head	qf_cmd_list;
+ 	/* Pointer to associated SE HBA */
+ 	struct se_hba		*se_hba;
+@@ -841,6 +848,9 @@ struct se_device {
+ 	/* For se_lun->lun_se_dev RCU read-side critical access */
+ 	u32			hba_index;
+ 	struct rcu_head		rcu_head;
++	int			queue_cnt;
++	struct se_device_queue	*queues;
++	struct mutex		lun_reset_mutex;
+ };
+ 
+ struct se_hba {
+diff --git a/include/trace/events/qrtr.h b/include/trace/events/qrtr.h
+index b1de14c3bb934..441132c67133f 100644
+--- a/include/trace/events/qrtr.h
++++ b/include/trace/events/qrtr.h
+@@ -10,15 +10,16 @@
+ 
+ TRACE_EVENT(qrtr_ns_service_announce_new,
+ 
+-	TP_PROTO(__le32 service, __le32 instance, __le32 node, __le32 port),
++	TP_PROTO(unsigned int service, unsigned int instance,
++		 unsigned int node, unsigned int port),
+ 
+ 	TP_ARGS(service, instance, node, port),
+ 
+ 	TP_STRUCT__entry(
+-		__field(__le32, service)
+-		__field(__le32, instance)
+-		__field(__le32, node)
+-		__field(__le32, port)
++		__field(unsigned int, service)
++		__field(unsigned int, instance)
++		__field(unsigned int, node)
++		__field(unsigned int, port)
+ 	),
+ 
+ 	TP_fast_assign(
+@@ -36,15 +37,16 @@ TRACE_EVENT(qrtr_ns_service_announce_new,
+ 
+ TRACE_EVENT(qrtr_ns_service_announce_del,
+ 
+-	TP_PROTO(__le32 service, __le32 instance, __le32 node, __le32 port),
++	TP_PROTO(unsigned int service, unsigned int instance,
++		 unsigned int node, unsigned int port),
+ 
+ 	TP_ARGS(service, instance, node, port),
+ 
+ 	TP_STRUCT__entry(
+-		__field(__le32, service)
+-		__field(__le32, instance)
+-		__field(__le32, node)
+-		__field(__le32, port)
++		__field(unsigned int, service)
++		__field(unsigned int, instance)
++		__field(unsigned int, node)
++		__field(unsigned int, port)
+ 	),
+ 
+ 	TP_fast_assign(
+@@ -62,15 +64,16 @@ TRACE_EVENT(qrtr_ns_service_announce_del,
+ 
+ TRACE_EVENT(qrtr_ns_server_add,
+ 
+-	TP_PROTO(__le32 service, __le32 instance, __le32 node, __le32 port),
++	TP_PROTO(unsigned int service, unsigned int instance,
++		 unsigned int node, unsigned int port),
+ 
+ 	TP_ARGS(service, instance, node, port),
+ 
+ 	TP_STRUCT__entry(
+-		__field(__le32, service)
+-		__field(__le32, instance)
+-		__field(__le32, node)
+-		__field(__le32, port)
++		__field(unsigned int, service)
++		__field(unsigned int, instance)
++		__field(unsigned int, node)
++		__field(unsigned int, port)
+ 	),
+ 
+ 	TP_fast_assign(
+diff --git a/include/trace/events/timer.h b/include/trace/events/timer.h
+index 19abb6c3eb73f..40e9b5a12732d 100644
+--- a/include/trace/events/timer.h
++++ b/include/trace/events/timer.h
+@@ -368,7 +368,8 @@ TRACE_EVENT(itimer_expire,
+ 		tick_dep_name(PERF_EVENTS)		\
+ 		tick_dep_name(SCHED)			\
+ 		tick_dep_name(CLOCK_UNSTABLE)		\
+-		tick_dep_name_end(RCU)
++		tick_dep_name(RCU)			\
++		tick_dep_name_end(RCU_EXP)
+ 
+ #undef tick_dep_name
+ #undef tick_dep_mask_name
+diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h
+index 2c39d15a2beb4..e65300d63d7c4 100644
+--- a/include/uapi/linux/btrfs.h
++++ b/include/uapi/linux/btrfs.h
+@@ -181,6 +181,7 @@ struct btrfs_scrub_progress {
+ };
+ 
+ #define BTRFS_SCRUB_READONLY	1
++#define BTRFS_SCRUB_SUPPORTED_FLAGS	(BTRFS_SCRUB_READONLY)
+ struct btrfs_ioctl_scrub_args {
+ 	__u64 devid;				/* in */
+ 	__u64 start;				/* in */
+diff --git a/include/uapi/linux/const.h b/include/uapi/linux/const.h
+index af2a44c08683d..a429381e7ca50 100644
+--- a/include/uapi/linux/const.h
++++ b/include/uapi/linux/const.h
+@@ -28,7 +28,7 @@
+ #define _BITUL(x)	(_UL(1) << (x))
+ #define _BITULL(x)	(_ULL(1) << (x))
+ 
+-#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1)
++#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (__typeof__(x))(a) - 1)
+ #define __ALIGN_KERNEL_MASK(x, mask)	(((x) + (mask)) & ~(mask))
+ 
+ #define __KERNEL_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+diff --git a/include/xen/xen.h b/include/xen/xen.h
+index 43efba045acc7..5a6a2ab675bed 100644
+--- a/include/xen/xen.h
++++ b/include/xen/xen.h
+@@ -61,4 +61,15 @@ void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
+ #include <xen/balloon.h>
+ #endif
+ 
++#if defined(CONFIG_XEN_DOM0) && defined(CONFIG_ACPI) && defined(CONFIG_X86)
++bool __init xen_processor_present(uint32_t acpi_id);
++#else
++#include <linux/bug.h>
++static inline bool xen_processor_present(uint32_t acpi_id)
++{
++	BUG();
++	return false;
++}
++#endif
++
+ #endif	/* _XEN_XEN_H */
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 6d92e393e1bc6..d3593a520bb72 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -1516,7 +1516,7 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 		goto out;
+ 	}
+ 
+-	if (ctx.optlen > max_optlen || ctx.optlen < 0) {
++	if (optval && (ctx.optlen > max_optlen || ctx.optlen < 0)) {
+ 		ret = -EFAULT;
+ 		goto out;
+ 	}
+@@ -1530,8 +1530,11 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 	}
+ 
+ 	if (ctx.optlen != 0) {
+-		if (copy_to_user(optval, ctx.optval, ctx.optlen) ||
+-		    put_user(ctx.optlen, optlen)) {
++		if (optval && copy_to_user(optval, ctx.optval, ctx.optlen)) {
++			ret = -EFAULT;
++			goto out;
++		}
++		if (put_user(ctx.optlen, optlen)) {
+ 			ret = -EFAULT;
+ 			goto out;
+ 		}
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 5a96a9dd51e4c..6876796a8de0c 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2775,17 +2775,13 @@ static int check_stack_read(struct bpf_verifier_env *env,
+ 	}
+ 	/* Variable offset is prohibited for unprivileged mode for simplicity
+ 	 * since it requires corresponding support in Spectre masking for stack
+-	 * ALU. See also retrieve_ptr_limit().
++	 * ALU. See also retrieve_ptr_limit(). The check in
++	 * check_stack_access_for_ptr_arithmetic() called by
++	 * adjust_ptr_min_max_vals() prevents users from creating stack pointers
++	 * with variable offsets, therefore no check is required here. Further,
++	 * just checking it here would be insufficient as speculative stack
++	 * writes could still lead to unsafe speculative behaviour.
+ 	 */
+-	if (!env->bypass_spec_v1 && var_off) {
+-		char tn_buf[48];
+-
+-		tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
+-		verbose(env, "R%d variable offset stack access prohibited for !root, var_off=%s\n",
+-				ptr_regno, tn_buf);
+-		return -EACCES;
+-	}
+-
+ 	if (!var_off) {
+ 		off += reg->var_off.value;
+ 		err = check_stack_read_fixed_off(env, state, off, size,
+@@ -9550,10 +9546,11 @@ static int propagate_precision(struct bpf_verifier_env *env,
+ 		state_reg = state->regs;
+ 		for (i = 0; i < BPF_REG_FP; i++, state_reg++) {
+ 			if (state_reg->type != SCALAR_VALUE ||
+-			    !state_reg->precise)
++			    !state_reg->precise ||
++			    !(state_reg->live & REG_LIVE_READ))
+ 				continue;
+ 			if (env->log.level & BPF_LOG_LEVEL2)
+-				verbose(env, "frame %d: propagating r%d\n", i, fr);
++				verbose(env, "frame %d: propagating r%d\n", fr, i);
+ 			err = mark_chain_precision_frame(env, fr, i);
+ 			if (err < 0)
+ 				return err;
+@@ -9564,11 +9561,12 @@ static int propagate_precision(struct bpf_verifier_env *env,
+ 				continue;
+ 			state_reg = &state->stack[i].spilled_ptr;
+ 			if (state_reg->type != SCALAR_VALUE ||
+-			    !state_reg->precise)
++			    !state_reg->precise ||
++			    !(state_reg->live & REG_LIVE_READ))
+ 				continue;
+ 			if (env->log.level & BPF_LOG_LEVEL2)
+ 				verbose(env, "frame %d: propagating fp%d\n",
+-					(-i - 1) * BPF_REG_SIZE, fr);
++					fr, (-i - 1) * BPF_REG_SIZE);
+ 			err = mark_chain_precision_stack_frame(env, fr, i);
+ 			if (err < 0)
+ 				return err;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index f0df3bc0e6415..53f36bbaf0c66 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -8925,8 +8925,8 @@ __perf_event_account_interrupt(struct perf_event *event, int throttle)
+ 		hwc->interrupts = 1;
+ 	} else {
+ 		hwc->interrupts++;
+-		if (unlikely(throttle
+-			     && hwc->interrupts >= max_samples_per_tick)) {
++		if (unlikely(throttle &&
++			     hwc->interrupts > max_samples_per_tick)) {
+ 			__this_cpu_inc(perf_throttled_count);
+ 			tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_PERF_EVENTS);
+ 			hwc->interrupts = MAX_INTERRUPTS;
+diff --git a/kernel/fork.c b/kernel/fork.c
+index a5bc0c6a00fd1..c6a289317e89b 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -441,6 +441,9 @@ void put_task_stack(struct task_struct *tsk)
+ 
+ void free_task(struct task_struct *tsk)
+ {
++#ifdef CONFIG_SECCOMP
++	WARN_ON_ONCE(tsk->seccomp.filter);
++#endif
+ 	scs_release(tsk);
+ 
+ #ifndef CONFIG_THREAD_INFO_IN_TASK
+@@ -2248,12 +2251,6 @@ static __latent_entropy struct task_struct *copy_process(
+ 
+ 	spin_lock(&current->sighand->siglock);
+ 
+-	/*
+-	 * Copy seccomp details explicitly here, in case they were changed
+-	 * before holding sighand lock.
+-	 */
+-	copy_seccomp(p);
+-
+ 	rseq_fork(p, clone_flags);
+ 
+ 	/* Don't start children in a dying pid namespace */
+@@ -2268,6 +2265,14 @@ static __latent_entropy struct task_struct *copy_process(
+ 		goto bad_fork_cancel_cgroup;
+ 	}
+ 
++	/* No more failure paths after this point. */
++
++	/*
++	 * Copy seccomp details explicitly here, in case they were changed
++	 * before holding sighand lock.
++	 */
++	copy_seccomp(p);
++
+ 	init_task_pid_links(p);
+ 	if (likely(p->pid)) {
+ 		ptrace_init_task(p, (clone_flags & CLONE_PTRACE) || trace);
+diff --git a/kernel/kheaders.c b/kernel/kheaders.c
+index 8f69772af77b4..42163c9e94e55 100644
+--- a/kernel/kheaders.c
++++ b/kernel/kheaders.c
+@@ -26,15 +26,15 @@ asm (
+ "	.popsection				\n"
+ );
+ 
+-extern char kernel_headers_data;
+-extern char kernel_headers_data_end;
++extern char kernel_headers_data[];
++extern char kernel_headers_data_end[];
+ 
+ static ssize_t
+ ikheaders_read(struct file *file,  struct kobject *kobj,
+ 	       struct bin_attribute *bin_attr,
+ 	       char *buf, loff_t off, size_t len)
+ {
+-	memcpy(buf, &kernel_headers_data + off, len);
++	memcpy(buf, &kernel_headers_data[off], len);
+ 	return len;
+ }
+ 
+@@ -48,8 +48,8 @@ static struct bin_attribute kheaders_attr __ro_after_init = {
+ 
+ static int __init ikheaders_init(void)
+ {
+-	kheaders_attr.size = (&kernel_headers_data_end -
+-			      &kernel_headers_data);
++	kheaders_attr.size = (kernel_headers_data_end -
++			      kernel_headers_data);
+ 	return sysfs_create_bin_file(kernel_kobj, &kheaders_attr);
+ }
+ 
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 9cce4e13af414..30e1d7fedb5fd 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -964,6 +964,7 @@ void __rcu_irq_enter_check_tick(void)
+ 	}
+ 	raw_spin_unlock_rcu_node(rdp->mynode);
+ }
++NOKPROBE_SYMBOL(__rcu_irq_enter_check_tick);
+ #endif /* CONFIG_NO_HZ_FULL */
+ 
+ /**
+diff --git a/kernel/relay.c b/kernel/relay.c
+index 067769b80d4ab..f6826dec21033 100644
+--- a/kernel/relay.c
++++ b/kernel/relay.c
+@@ -1077,7 +1077,8 @@ static size_t relay_file_read_start_pos(struct rchan_buf *buf)
+ 	size_t subbuf_size = buf->chan->subbuf_size;
+ 	size_t n_subbufs = buf->chan->n_subbufs;
+ 	size_t consumed = buf->subbufs_consumed % n_subbufs;
+-	size_t read_pos = consumed * subbuf_size + buf->bytes_consumed;
++	size_t read_pos = (consumed * subbuf_size + buf->bytes_consumed)
++			% (n_subbufs * subbuf_size);
+ 
+ 	read_subbuf = read_pos / subbuf_size;
+ 	padding = buf->padding[read_subbuf];
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 5d76edd0ad9c2..bede1e608d959 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -782,6 +782,8 @@ static u64 collect_timerqueue(struct timerqueue_head *head,
+ 			return expires;
+ 
+ 		ctmr->firing = 1;
++		/* See posix_cpu_timer_wait_running() */
++		rcu_assign_pointer(ctmr->handling, current);
+ 		cpu_timer_dequeue(ctmr);
+ 		list_add_tail(&ctmr->elist, firing);
+ 	}
+@@ -1097,7 +1099,49 @@ static void handle_posix_cpu_timers(struct task_struct *tsk);
+ #ifdef CONFIG_POSIX_CPU_TIMERS_TASK_WORK
+ static void posix_cpu_timers_work(struct callback_head *work)
+ {
++	struct posix_cputimers_work *cw = container_of(work, typeof(*cw), work);
++
++	mutex_lock(&cw->mutex);
+ 	handle_posix_cpu_timers(current);
++	mutex_unlock(&cw->mutex);
++}
++
++/*
++ * Invoked from the posix-timer core when a cancel operation failed because
++ * the timer is marked firing. The caller holds rcu_read_lock(), which
++ * protects the timer and the task which is expiring it from being freed.
++ */
++static void posix_cpu_timer_wait_running(struct k_itimer *timr)
++{
++	struct task_struct *tsk = rcu_dereference(timr->it.cpu.handling);
++
++	/* Has the handling task completed expiry already? */
++	if (!tsk)
++		return;
++
++	/* Ensure that the task cannot go away */
++	get_task_struct(tsk);
++	/* Now drop the RCU protection so the mutex can be locked */
++	rcu_read_unlock();
++	/* Wait on the expiry mutex */
++	mutex_lock(&tsk->posix_cputimers_work.mutex);
++	/* Release it immediately again. */
++	mutex_unlock(&tsk->posix_cputimers_work.mutex);
++	/* Drop the task reference. */
++	put_task_struct(tsk);
++	/* Relock RCU so the callsite is balanced */
++	rcu_read_lock();
++}
++
++static void posix_cpu_timer_wait_running_nsleep(struct k_itimer *timr)
++{
++	/* Ensure that timr->it.cpu.handling task cannot go away */
++	rcu_read_lock();
++	spin_unlock_irq(&timr->it_lock);
++	posix_cpu_timer_wait_running(timr);
++	rcu_read_unlock();
++	/* @timr is on stack and is valid */
++	spin_lock_irq(&timr->it_lock);
+ }
+ 
+ /*
+@@ -1113,6 +1157,7 @@ void clear_posix_cputimers_work(struct task_struct *p)
+ 	       sizeof(p->posix_cputimers_work.work));
+ 	init_task_work(&p->posix_cputimers_work.work,
+ 		       posix_cpu_timers_work);
++	mutex_init(&p->posix_cputimers_work.mutex);
+ 	p->posix_cputimers_work.scheduled = false;
+ }
+ 
+@@ -1191,6 +1236,18 @@ static inline void __run_posix_cpu_timers(struct task_struct *tsk)
+ 	lockdep_posixtimer_exit();
+ }
+ 
++static void posix_cpu_timer_wait_running(struct k_itimer *timr)
++{
++	cpu_relax();
++}
++
++static void posix_cpu_timer_wait_running_nsleep(struct k_itimer *timr)
++{
++	spin_unlock_irq(&timr->it_lock);
++	cpu_relax();
++	spin_lock_irq(&timr->it_lock);
++}
++
+ static inline bool posix_cpu_timers_work_scheduled(struct task_struct *tsk)
+ {
+ 	return false;
+@@ -1299,6 +1356,8 @@ static void handle_posix_cpu_timers(struct task_struct *tsk)
+ 		 */
+ 		if (likely(cpu_firing >= 0))
+ 			cpu_timer_fire(timer);
++		/* See posix_cpu_timer_wait_running() */
++		rcu_assign_pointer(timer->it.cpu.handling, NULL);
+ 		spin_unlock(&timer->it_lock);
+ 	}
+ }
+@@ -1434,23 +1493,16 @@ static int do_cpu_nanosleep(const clockid_t which_clock, int flags,
+ 		expires = cpu_timer_getexpires(&timer.it.cpu);
+ 		error = posix_cpu_timer_set(&timer, 0, &zero_it, &it);
+ 		if (!error) {
+-			/*
+-			 * Timer is now unarmed, deletion can not fail.
+-			 */
++			/* Timer is now unarmed, deletion can not fail. */
+ 			posix_cpu_timer_del(&timer);
++		} else {
++			while (error == TIMER_RETRY) {
++				posix_cpu_timer_wait_running_nsleep(&timer);
++				error = posix_cpu_timer_del(&timer);
++			}
+ 		}
+-		spin_unlock_irq(&timer.it_lock);
+ 
+-		while (error == TIMER_RETRY) {
+-			/*
+-			 * We need to handle case when timer was or is in the
+-			 * middle of firing. In other cases we already freed
+-			 * resources.
+-			 */
+-			spin_lock_irq(&timer.it_lock);
+-			error = posix_cpu_timer_del(&timer);
+-			spin_unlock_irq(&timer.it_lock);
+-		}
++		spin_unlock_irq(&timer.it_lock);
+ 
+ 		if ((it.it_value.tv_sec | it.it_value.tv_nsec) == 0) {
+ 			/*
+@@ -1560,6 +1612,7 @@ const struct k_clock clock_posix_cpu = {
+ 	.timer_del		= posix_cpu_timer_del,
+ 	.timer_get		= posix_cpu_timer_get,
+ 	.timer_rearm		= posix_cpu_timer_rearm,
++	.timer_wait_running	= posix_cpu_timer_wait_running,
+ };
+ 
+ const struct k_clock clock_process = {
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index 724ca7eb1a6e8..d089627f2f2b4 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -846,6 +846,10 @@ static struct k_itimer *timer_wait_running(struct k_itimer *timer,
+ 	rcu_read_lock();
+ 	unlock_timer(timer, *flags);
+ 
++	/*
++	 * kc->timer_wait_running() might drop RCU lock. So @timer
++	 * cannot be touched anymore after the function returns!
++	 */
+ 	if (!WARN_ON_ONCE(!kc->timer_wait_running))
+ 		kc->timer_wait_running(timer);
+ 
+diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
+index 36d7464c89625..a9530e866e5f1 100644
+--- a/kernel/time/tick-broadcast.c
++++ b/kernel/time/tick-broadcast.c
+@@ -331,7 +331,7 @@ static void tick_handle_periodic_broadcast(struct clock_event_device *dev)
+ 	bc_local = tick_do_periodic_broadcast();
+ 
+ 	if (clockevent_state_oneshot(dev)) {
+-		ktime_t next = ktime_add(dev->next_event, tick_period);
++		ktime_t next = ktime_add_ns(dev->next_event, TICK_NSEC);
+ 
+ 		clockevents_program_event(dev, next, true);
+ 	}
+diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
+index 6c9c342dd0e53..2b7448ae5b478 100644
+--- a/kernel/time/tick-common.c
++++ b/kernel/time/tick-common.c
+@@ -30,7 +30,6 @@ DEFINE_PER_CPU(struct tick_device, tick_cpu_device);
+  * Tick next event: keeps track of the tick time
+  */
+ ktime_t tick_next_period;
+-ktime_t tick_period;
+ 
+ /*
+  * tick_do_timer_cpu is a timer core internal variable which holds the CPU NR
+@@ -88,7 +87,7 @@ static void tick_periodic(int cpu)
+ 		write_seqcount_begin(&jiffies_seq);
+ 
+ 		/* Keep track of the next tick event */
+-		tick_next_period = ktime_add(tick_next_period, tick_period);
++		tick_next_period = ktime_add_ns(tick_next_period, TICK_NSEC);
+ 
+ 		do_timer(1);
+ 		write_seqcount_end(&jiffies_seq);
+@@ -127,7 +126,7 @@ void tick_handle_periodic(struct clock_event_device *dev)
+ 		 * Setup the next period for devices, which do not have
+ 		 * periodic mode:
+ 		 */
+-		next = ktime_add(next, tick_period);
++		next = ktime_add_ns(next, TICK_NSEC);
+ 
+ 		if (!clockevents_program_event(dev, next, false))
+ 			return;
+@@ -173,7 +172,7 @@ void tick_setup_periodic(struct clock_event_device *dev, int broadcast)
+ 		for (;;) {
+ 			if (!clockevents_program_event(dev, next, false))
+ 				return;
+-			next = ktime_add(next, tick_period);
++			next = ktime_add_ns(next, TICK_NSEC);
+ 		}
+ 	}
+ }
+@@ -217,10 +216,19 @@ static void tick_setup_device(struct tick_device *td,
+ 		 * this cpu:
+ 		 */
+ 		if (tick_do_timer_cpu == TICK_DO_TIMER_BOOT) {
++			ktime_t next_p;
++			u32 rem;
++
+ 			tick_do_timer_cpu = cpu;
+ 
+-			tick_next_period = ktime_get();
+-			tick_period = NSEC_PER_SEC / HZ;
++			next_p = ktime_get();
++			div_u64_rem(next_p, TICK_NSEC, &rem);
++			if (rem) {
++				next_p -= rem;
++				next_p += TICK_NSEC;
++			}
++
++			tick_next_period = next_p;
+ #ifdef CONFIG_NO_HZ_FULL
+ 			/*
+ 			 * The boot CPU may be nohz_full, in which case set
+diff --git a/kernel/time/tick-internal.h b/kernel/time/tick-internal.h
+index 5294f5b1f9550..e61c1244e7d46 100644
+--- a/kernel/time/tick-internal.h
++++ b/kernel/time/tick-internal.h
+@@ -15,7 +15,6 @@
+ 
+ DECLARE_PER_CPU(struct tick_device, tick_cpu_device);
+ extern ktime_t tick_next_period;
+-extern ktime_t tick_period;
+ extern int tick_do_timer_cpu __read_mostly;
+ 
+ extern void tick_setup_periodic(struct clock_event_device *dev, int broadcast);
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index 92fb738813f39..17dc3f53efef8 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -53,49 +53,67 @@ static ktime_t last_jiffies_update;
+  */
+ static void tick_do_update_jiffies64(ktime_t now)
+ {
+-	unsigned long ticks = 0;
++	unsigned long ticks = 1;
+ 	ktime_t delta;
+ 
+ 	/*
+-	 * Do a quick check without holding jiffies_lock:
+-	 * The READ_ONCE() pairs with two updates done later in this function.
++	 * Do a quick check without holding jiffies_lock. The READ_ONCE()
++	 * pairs with the update done later in this function.
++	 *
++	 * This is also an intentional data race which is even safe on
++	 * 32bit in theory. If there is a concurrent update then the check
++	 * might give a random answer. It does not matter because if it
++	 * returns then the concurrent update is already taking care, if it
++	 * falls through then it will pointlessly contend on jiffies_lock.
++	 *
++	 * Though there is one nasty case on 32bit due to store tearing of
++	 * the 64bit value. If the first 32bit store makes the quick check
++	 * return on all other CPUs and the writing CPU context gets
++	 * delayed to complete the second store (scheduled out on virt)
++	 * then jiffies can become stale for up to ~2^32 nanoseconds
++	 * without noticing. After that point all CPUs will wait for
++	 * jiffies lock.
++	 *
++	 * OTOH, this is not any different than the situation with NOHZ=off
++	 * where one CPU is responsible for updating jiffies and
++	 * timekeeping. If that CPU goes out for lunch then all other CPUs
++	 * will operate on stale jiffies until it decides to come back.
+ 	 */
+-	delta = ktime_sub(now, READ_ONCE(last_jiffies_update));
+-	if (delta < tick_period)
++	if (ktime_before(now, READ_ONCE(tick_next_period)))
+ 		return;
+ 
+ 	/* Reevaluate with jiffies_lock held */
+ 	raw_spin_lock(&jiffies_lock);
++	if (ktime_before(now, tick_next_period)) {
++		raw_spin_unlock(&jiffies_lock);
++		return;
++	}
++
+ 	write_seqcount_begin(&jiffies_seq);
+ 
+-	delta = ktime_sub(now, last_jiffies_update);
+-	if (delta >= tick_period) {
++	delta = ktime_sub(now, tick_next_period);
++	if (unlikely(delta >= TICK_NSEC)) {
++		/* Slow path for long idle sleep times */
++		s64 incr = TICK_NSEC;
+ 
+-		delta = ktime_sub(delta, tick_period);
+-		/* Pairs with the lockless read in this function. */
+-		WRITE_ONCE(last_jiffies_update,
+-			   ktime_add(last_jiffies_update, tick_period));
++		ticks += ktime_divns(delta, incr);
+ 
+-		/* Slow path for long timeouts */
+-		if (unlikely(delta >= tick_period)) {
+-			s64 incr = ktime_to_ns(tick_period);
++		last_jiffies_update = ktime_add_ns(last_jiffies_update,
++						   incr * ticks);
++	} else {
++		last_jiffies_update = ktime_add_ns(last_jiffies_update,
++						   TICK_NSEC);
++	}
+ 
+-			ticks = ktime_divns(delta, incr);
++	do_timer(ticks);
+ 
+-			/* Pairs with the lockless read in this function. */
+-			WRITE_ONCE(last_jiffies_update,
+-				   ktime_add_ns(last_jiffies_update,
+-						incr * ticks));
+-		}
+-		do_timer(++ticks);
++	/*
++	 * Keep the tick_next_period variable up to date.  WRITE_ONCE()
++	 * pairs with the READ_ONCE() in the lockless quick check above.
++	 */
++	WRITE_ONCE(tick_next_period,
++		   ktime_add_ns(last_jiffies_update, TICK_NSEC));
+ 
+-		/* Keep the tick_next_period variable up to date */
+-		tick_next_period = ktime_add(last_jiffies_update, tick_period);
+-	} else {
+-		write_seqcount_end(&jiffies_seq);
+-		raw_spin_unlock(&jiffies_lock);
+-		return;
+-	}
+ 	write_seqcount_end(&jiffies_seq);
+ 	raw_spin_unlock(&jiffies_lock);
+ 	update_wall_time();
+@@ -213,6 +231,11 @@ static bool check_tick_dependency(atomic_t *dep)
+ 		return true;
+ 	}
+ 
++	if (val & TICK_DEP_MASK_RCU_EXP) {
++		trace_tick_stop(0, TICK_DEP_MASK_RCU_EXP);
++		return true;
++	}
++
+ 	return false;
+ }
+ 
+@@ -426,7 +449,7 @@ void __init tick_nohz_full_setup(cpumask_var_t cpumask)
+ 	tick_nohz_full_running = true;
+ }
+ 
+-static int tick_nohz_cpu_down(unsigned int cpu)
++bool tick_nohz_cpu_hotpluggable(unsigned int cpu)
+ {
+ 	/*
+ 	 * The tick_do_timer_cpu CPU handles housekeeping duty (unbound
+@@ -434,8 +457,13 @@ static int tick_nohz_cpu_down(unsigned int cpu)
+ 	 * CPUs. It must remain online when nohz full is enabled.
+ 	 */
+ 	if (tick_nohz_full_running && tick_do_timer_cpu == cpu)
+-		return -EBUSY;
+-	return 0;
++		return false;
++	return true;
++}
++
++static int tick_nohz_cpu_down(unsigned int cpu)
++{
++	return tick_nohz_cpu_hotpluggable(cpu) ? 0 : -EBUSY;
+ }
+ 
+ void __init tick_nohz_init(void)
+@@ -660,7 +688,7 @@ static void tick_nohz_restart(struct tick_sched *ts, ktime_t now)
+ 	hrtimer_set_expires(&ts->sched_timer, ts->last_tick);
+ 
+ 	/* Forward the time to expire in the future */
+-	hrtimer_forward(&ts->sched_timer, now, tick_period);
++	hrtimer_forward(&ts->sched_timer, now, TICK_NSEC);
+ 
+ 	if (ts->nohz_mode == NOHZ_MODE_HIGHRES) {
+ 		hrtimer_start_expires(&ts->sched_timer,
+@@ -1222,7 +1250,7 @@ static void tick_nohz_handler(struct clock_event_device *dev)
+ 	if (unlikely(ts->tick_stopped))
+ 		return;
+ 
+-	hrtimer_forward(&ts->sched_timer, now, tick_period);
++	hrtimer_forward(&ts->sched_timer, now, TICK_NSEC);
+ 	tick_program_event(hrtimer_get_expires(&ts->sched_timer), 1);
+ }
+ 
+@@ -1259,7 +1287,7 @@ static void tick_nohz_switch_to_nohz(void)
+ 	next = tick_init_jiffy_update();
+ 
+ 	hrtimer_set_expires(&ts->sched_timer, next);
+-	hrtimer_forward_now(&ts->sched_timer, tick_period);
++	hrtimer_forward_now(&ts->sched_timer, TICK_NSEC);
+ 	tick_program_event(hrtimer_get_expires(&ts->sched_timer), 1);
+ 	tick_nohz_activate(ts, NOHZ_MODE_LOWRES);
+ }
+@@ -1325,7 +1353,7 @@ static enum hrtimer_restart tick_sched_timer(struct hrtimer *timer)
+ 	if (unlikely(ts->tick_stopped))
+ 		return HRTIMER_NORESTART;
+ 
+-	hrtimer_forward(timer, now, tick_period);
++	hrtimer_forward(timer, now, TICK_NSEC);
+ 
+ 	return HRTIMER_RESTART;
+ }
+@@ -1359,13 +1387,13 @@ void tick_setup_sched_timer(void)
+ 
+ 	/* Offset the tick to avert jiffies_lock contention. */
+ 	if (sched_skew_tick) {
+-		u64 offset = ktime_to_ns(tick_period) >> 1;
++		u64 offset = TICK_NSEC >> 1;
+ 		do_div(offset, num_possible_cpus());
+ 		offset *= smp_processor_id();
+ 		hrtimer_add_expires_ns(&ts->sched_timer, offset);
+ 	}
+ 
+-	hrtimer_forward(&ts->sched_timer, now, tick_period);
++	hrtimer_forward(&ts->sched_timer, now, TICK_NSEC);
+ 	hrtimer_start_expires(&ts->sched_timer, HRTIMER_MODE_ABS_PINNED_HARD);
+ 	tick_nohz_activate(ts, NOHZ_MODE_HIGHRES);
+ }
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 21b07c7c6ee5e..f08904914166b 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1644,6 +1644,8 @@ static void rb_free_cpu_buffer(struct ring_buffer_per_cpu *cpu_buffer)
+ 	struct list_head *head = cpu_buffer->pages;
+ 	struct buffer_page *bpage, *tmp;
+ 
++	irq_work_sync(&cpu_buffer->irq_work.work);
++
+ 	free_buffer_page(cpu_buffer->reader_page);
+ 
+ 	if (head) {
+@@ -1750,6 +1752,8 @@ ring_buffer_free(struct trace_buffer *buffer)
+ 
+ 	cpuhp_state_remove_instance(CPUHP_TRACE_RB_PREPARE, &buffer->node);
+ 
++	irq_work_sync(&buffer->irq_work.work);
++
+ 	for_each_buffer_cpu(buffer, cpu)
+ 		rb_free_cpu_buffer(buffer->buffers[cpu]);
+ 
+@@ -5047,6 +5051,9 @@ void ring_buffer_reset_cpu(struct trace_buffer *buffer, int cpu)
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_reset_cpu);
+ 
++/* Flag to ensure proper resetting of atomic variables */
++#define RESET_BIT	(1 << 30)
++
+ /**
+  * ring_buffer_reset_cpu - reset a ring buffer per CPU buffer
+  * @buffer: The ring buffer to reset a per cpu buffer of
+@@ -5063,20 +5070,27 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
+ 	for_each_online_buffer_cpu(buffer, cpu) {
+ 		cpu_buffer = buffer->buffers[cpu];
+ 
+-		atomic_inc(&cpu_buffer->resize_disabled);
++		atomic_add(RESET_BIT, &cpu_buffer->resize_disabled);
+ 		atomic_inc(&cpu_buffer->record_disabled);
+ 	}
+ 
+ 	/* Make sure all commits have finished */
+ 	synchronize_rcu();
+ 
+-	for_each_online_buffer_cpu(buffer, cpu) {
++	for_each_buffer_cpu(buffer, cpu) {
+ 		cpu_buffer = buffer->buffers[cpu];
+ 
++		/*
++		 * If a CPU came online during the synchronize_rcu(), then
++		 * ignore it.
++		 */
++		if (!(atomic_read(&cpu_buffer->resize_disabled) & RESET_BIT))
++			continue;
++
+ 		reset_disabled_cpu_buffer(cpu_buffer);
+ 
+ 		atomic_dec(&cpu_buffer->record_disabled);
+-		atomic_dec(&cpu_buffer->resize_disabled);
++		atomic_sub(RESET_BIT, &cpu_buffer->resize_disabled);
+ 	}
+ 
+ 	mutex_unlock(&buffer->mutex);
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 0cc2a62e88f9e..b9041ab881bc8 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -207,7 +207,7 @@ struct pool_workqueue {
+ 						/* L: nr of in_flight works */
+ 	int			nr_active;	/* L: nr of active works */
+ 	int			max_active;	/* L: max active works */
+-	struct list_head	delayed_works;	/* L: delayed works */
++	struct list_head	inactive_works;	/* L: inactive works */
+ 	struct list_head	pwqs_node;	/* WR: node on wq->pwqs */
+ 	struct list_head	mayday_node;	/* MD: node on wq->maydays */
+ 
+@@ -1145,7 +1145,7 @@ static void put_pwq_unlocked(struct pool_workqueue *pwq)
+ 	}
+ }
+ 
+-static void pwq_activate_delayed_work(struct work_struct *work)
++static void pwq_activate_inactive_work(struct work_struct *work)
+ {
+ 	struct pool_workqueue *pwq = get_work_pwq(work);
+ 
+@@ -1153,16 +1153,16 @@ static void pwq_activate_delayed_work(struct work_struct *work)
+ 	if (list_empty(&pwq->pool->worklist))
+ 		pwq->pool->watchdog_ts = jiffies;
+ 	move_linked_works(work, &pwq->pool->worklist, NULL);
+-	__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
++	__clear_bit(WORK_STRUCT_INACTIVE_BIT, work_data_bits(work));
+ 	pwq->nr_active++;
+ }
+ 
+-static void pwq_activate_first_delayed(struct pool_workqueue *pwq)
++static void pwq_activate_first_inactive(struct pool_workqueue *pwq)
+ {
+-	struct work_struct *work = list_first_entry(&pwq->delayed_works,
++	struct work_struct *work = list_first_entry(&pwq->inactive_works,
+ 						    struct work_struct, entry);
+ 
+-	pwq_activate_delayed_work(work);
++	pwq_activate_inactive_work(work);
+ }
+ 
+ /**
+@@ -1185,10 +1185,10 @@ static void pwq_dec_nr_in_flight(struct pool_workqueue *pwq, int color)
+ 	pwq->nr_in_flight[color]--;
+ 
+ 	pwq->nr_active--;
+-	if (!list_empty(&pwq->delayed_works)) {
+-		/* one down, submit a delayed one */
++	if (!list_empty(&pwq->inactive_works)) {
++		/* one down, submit an inactive one */
+ 		if (pwq->nr_active < pwq->max_active)
+-			pwq_activate_first_delayed(pwq);
++			pwq_activate_first_inactive(pwq);
+ 	}
+ 
+ 	/* is flush in progress and are we at the flushing tip? */
+@@ -1290,14 +1290,14 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
+ 		debug_work_deactivate(work);
+ 
+ 		/*
+-		 * A delayed work item cannot be grabbed directly because
++		 * An inactive work item cannot be grabbed directly because
+ 		 * it might have linked NO_COLOR work items which, if left
+-		 * on the delayed_list, will confuse pwq->nr_active
++		 * on the inactive_works list, will confuse pwq->nr_active
+ 		 * management later on and cause stall.  Make sure the work
+ 		 * item is activated before grabbing.
+ 		 */
+-		if (*work_data_bits(work) & WORK_STRUCT_DELAYED)
+-			pwq_activate_delayed_work(work);
++		if (*work_data_bits(work) & WORK_STRUCT_INACTIVE)
++			pwq_activate_inactive_work(work);
+ 
+ 		list_del_init(&work->entry);
+ 		pwq_dec_nr_in_flight(pwq, get_work_color(work));
+@@ -1496,8 +1496,8 @@ retry:
+ 		if (list_empty(worklist))
+ 			pwq->pool->watchdog_ts = jiffies;
+ 	} else {
+-		work_flags |= WORK_STRUCT_DELAYED;
+-		worklist = &pwq->delayed_works;
++		work_flags |= WORK_STRUCT_INACTIVE;
++		worklist = &pwq->inactive_works;
+ 	}
+ 
+ 	debug_work_activate(work);
+@@ -2534,7 +2534,7 @@ repeat:
+ 			/*
+ 			 * The above execution of rescued work items could
+ 			 * have created more to rescue through
+-			 * pwq_activate_first_delayed() or chained
++			 * pwq_activate_first_inactive() or chained
+ 			 * queueing.  Let's put @pwq back on mayday list so
+ 			 * that such back-to-back work items, which may be
+ 			 * being used to relieve memory pressure, don't
+@@ -2960,7 +2960,7 @@ reflush:
+ 		bool drained;
+ 
+ 		raw_spin_lock_irq(&pwq->pool->lock);
+-		drained = !pwq->nr_active && list_empty(&pwq->delayed_works);
++		drained = !pwq->nr_active && list_empty(&pwq->inactive_works);
+ 		raw_spin_unlock_irq(&pwq->pool->lock);
+ 
+ 		if (drained)
+@@ -3714,7 +3714,7 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
+  * @pwq: target pool_workqueue
+  *
+  * If @pwq isn't freezing, set @pwq->max_active to the associated
+- * workqueue's saved_max_active and activate delayed work items
++ * workqueue's saved_max_active and activate inactive work items
+  * accordingly.  If @pwq is freezing, clear @pwq->max_active to zero.
+  */
+ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
+@@ -3743,9 +3743,9 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
+ 
+ 		pwq->max_active = wq->saved_max_active;
+ 
+-		while (!list_empty(&pwq->delayed_works) &&
++		while (!list_empty(&pwq->inactive_works) &&
+ 		       pwq->nr_active < pwq->max_active) {
+-			pwq_activate_first_delayed(pwq);
++			pwq_activate_first_inactive(pwq);
+ 			kick = true;
+ 		}
+ 
+@@ -3776,7 +3776,7 @@ static void init_pwq(struct pool_workqueue *pwq, struct workqueue_struct *wq,
+ 	pwq->wq = wq;
+ 	pwq->flush_color = -1;
+ 	pwq->refcnt = 1;
+-	INIT_LIST_HEAD(&pwq->delayed_works);
++	INIT_LIST_HEAD(&pwq->inactive_works);
+ 	INIT_LIST_HEAD(&pwq->pwqs_node);
+ 	INIT_LIST_HEAD(&pwq->mayday_node);
+ 	INIT_WORK(&pwq->unbound_release_work, pwq_unbound_release_workfn);
+@@ -4363,7 +4363,7 @@ static bool pwq_busy(struct pool_workqueue *pwq)
+ 
+ 	if ((pwq != pwq->wq->dfl_pwq) && (pwq->refcnt > 1))
+ 		return true;
+-	if (pwq->nr_active || !list_empty(&pwq->delayed_works))
++	if (pwq->nr_active || !list_empty(&pwq->inactive_works))
+ 		return true;
+ 
+ 	return false;
+@@ -4559,7 +4559,7 @@ bool workqueue_congested(int cpu, struct workqueue_struct *wq)
+ 	else
+ 		pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
+ 
+-	ret = !list_empty(&pwq->delayed_works);
++	ret = !list_empty(&pwq->inactive_works);
+ 	preempt_enable();
+ 	rcu_read_unlock();
+ 
+@@ -4755,11 +4755,11 @@ static void show_pwq(struct pool_workqueue *pwq)
+ 		pr_cont("\n");
+ 	}
+ 
+-	if (!list_empty(&pwq->delayed_works)) {
++	if (!list_empty(&pwq->inactive_works)) {
+ 		bool comma = false;
+ 
+-		pr_info("    delayed:");
+-		list_for_each_entry(work, &pwq->delayed_works, entry) {
++		pr_info("    inactive:");
++		list_for_each_entry(work, &pwq->inactive_works, entry) {
+ 			pr_cont_work(comma, work);
+ 			comma = !(*work_data_bits(work) & WORK_STRUCT_LINKED);
+ 		}
+@@ -4789,7 +4789,7 @@ void show_workqueue_state(void)
+ 		bool idle = true;
+ 
+ 		for_each_pwq(pwq, wq) {
+-			if (pwq->nr_active || !list_empty(&pwq->delayed_works)) {
++			if (pwq->nr_active || !list_empty(&pwq->inactive_works)) {
+ 				idle = false;
+ 				break;
+ 			}
+@@ -4801,7 +4801,7 @@ void show_workqueue_state(void)
+ 
+ 		for_each_pwq(pwq, wq) {
+ 			raw_spin_lock_irqsave(&pwq->pool->lock, flags);
+-			if (pwq->nr_active || !list_empty(&pwq->delayed_works))
++			if (pwq->nr_active || !list_empty(&pwq->inactive_works))
+ 				show_pwq(pwq);
+ 			raw_spin_unlock_irqrestore(&pwq->pool->lock, flags);
+ 			/*
+@@ -4816,16 +4816,19 @@ void show_workqueue_state(void)
+ 	for_each_pool(pool, pi) {
+ 		struct worker *worker;
+ 		bool first = true;
++		unsigned long hung = 0;
+ 
+ 		raw_spin_lock_irqsave(&pool->lock, flags);
+ 		if (pool->nr_workers == pool->nr_idle)
+ 			goto next_pool;
+ 
++		/* How long the first pending work is waiting for a worker. */
++		if (!list_empty(&pool->worklist))
++			hung = jiffies_to_msecs(jiffies - pool->watchdog_ts) / 1000;
++
+ 		pr_info("pool %d:", pool->id);
+ 		pr_cont_pool_info(pool);
+-		pr_cont(" hung=%us workers=%d",
+-			jiffies_to_msecs(jiffies - pool->watchdog_ts) / 1000,
+-			pool->nr_workers);
++		pr_cont(" hung=%lus workers=%d", hung, pool->nr_workers);
+ 		if (pool->manager)
+ 			pr_cont(" manager: %d",
+ 				task_pid_nr(pool->manager->task));
+@@ -5176,7 +5179,7 @@ EXPORT_SYMBOL_GPL(work_on_cpu_safe);
+  * freeze_workqueues_begin - begin freezing workqueues
+  *
+  * Start freezing workqueues.  After this function returns, all freezable
+- * workqueues will queue new works to their delayed_works list instead of
++ * workqueues will queue new works to their inactive_works list instead of
+  * pool->worklist.
+  *
+  * CONTEXT:
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index 71bdc167a9ee7..824337ec36aa8 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -219,10 +219,6 @@ static struct debug_obj *__alloc_object(struct hlist_head *list)
+ 	return obj;
+ }
+ 
+-/*
+- * Allocate a new object. If the pool is empty, switch off the debugger.
+- * Must be called with interrupts disabled.
+- */
+ static struct debug_obj *
+ alloc_object(void *addr, struct debug_bucket *b, const struct debug_obj_descr *descr)
+ {
+@@ -555,31 +551,74 @@ static void debug_object_is_on_stack(void *addr, int onstack)
+ 	WARN_ON(1);
+ }
+ 
++static struct debug_obj *lookup_object_or_alloc(void *addr, struct debug_bucket *b,
++						const struct debug_obj_descr *descr,
++						bool onstack, bool alloc_ifstatic)
++{
++	struct debug_obj *obj = lookup_object(addr, b);
++	enum debug_obj_state state = ODEBUG_STATE_NONE;
++
++	if (likely(obj))
++		return obj;
++
++	/*
++	 * debug_object_init() unconditionally allocates untracked
++	 * objects. It does not matter whether it is a static object or
++	 * not.
++	 *
++	 * debug_object_assert_init() and debug_object_activate() allow
++	 * allocation only if the descriptor callback confirms that the
++	 * object is static and considered initialized. For non-static
++	 * objects the allocation needs to be done from the fixup callback.
++	 */
++	if (unlikely(alloc_ifstatic)) {
++		if (!descr->is_static_object || !descr->is_static_object(addr))
++			return ERR_PTR(-ENOENT);
++		/* Statically allocated objects are considered initialized */
++		state = ODEBUG_STATE_INIT;
++	}
++
++	obj = alloc_object(addr, b, descr);
++	if (likely(obj)) {
++		obj->state = state;
++		debug_object_is_on_stack(addr, onstack);
++		return obj;
++	}
++
++	/* Out of memory. Do the cleanup outside of the locked region */
++	debug_objects_enabled = 0;
++	return NULL;
++}
++
++static void debug_objects_fill_pool(void)
++{
++	/*
++	 * On RT enabled kernels the pool refill must happen in preemptible
++	 * context:
++	 */
++	if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible())
++		fill_pool();
++}
++
+ static void
+ __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack)
+ {
+ 	enum debug_obj_state state;
+-	bool check_stack = false;
+ 	struct debug_bucket *db;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+ 
+-	fill_pool();
++	debug_objects_fill_pool();
+ 
+ 	db = get_bucket((unsigned long) addr);
+ 
+ 	raw_spin_lock_irqsave(&db->lock, flags);
+ 
+-	obj = lookup_object(addr, db);
+-	if (!obj) {
+-		obj = alloc_object(addr, db, descr);
+-		if (!obj) {
+-			debug_objects_enabled = 0;
+-			raw_spin_unlock_irqrestore(&db->lock, flags);
+-			debug_objects_oom();
+-			return;
+-		}
+-		check_stack = true;
++	obj = lookup_object_or_alloc(addr, db, descr, onstack, false);
++	if (unlikely(!obj)) {
++		raw_spin_unlock_irqrestore(&db->lock, flags);
++		debug_objects_oom();
++		return;
+ 	}
+ 
+ 	switch (obj->state) {
+@@ -605,8 +644,6 @@ __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack
+ 	}
+ 
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
+-	if (check_stack)
+-		debug_object_is_on_stack(addr, onstack);
+ }
+ 
+ /**
+@@ -646,24 +683,24 @@ EXPORT_SYMBOL_GPL(debug_object_init_on_stack);
+  */
+ int debug_object_activate(void *addr, const struct debug_obj_descr *descr)
+ {
++	struct debug_obj o = { .object = addr, .state = ODEBUG_STATE_NOTAVAILABLE, .descr = descr };
+ 	enum debug_obj_state state;
+ 	struct debug_bucket *db;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+ 	int ret;
+-	struct debug_obj o = { .object = addr,
+-			       .state = ODEBUG_STATE_NOTAVAILABLE,
+-			       .descr = descr };
+ 
+ 	if (!debug_objects_enabled)
+ 		return 0;
+ 
++	debug_objects_fill_pool();
++
+ 	db = get_bucket((unsigned long) addr);
+ 
+ 	raw_spin_lock_irqsave(&db->lock, flags);
+ 
+-	obj = lookup_object(addr, db);
+-	if (obj) {
++	obj = lookup_object_or_alloc(addr, db, descr, false, true);
++	if (likely(!IS_ERR_OR_NULL(obj))) {
+ 		bool print_object = false;
+ 
+ 		switch (obj->state) {
+@@ -696,24 +733,16 @@ int debug_object_activate(void *addr, const struct debug_obj_descr *descr)
+ 
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
+ 
+-	/*
+-	 * We are here when a static object is activated. We
+-	 * let the type specific code confirm whether this is
+-	 * true or not. if true, we just make sure that the
+-	 * static object is tracked in the object tracker. If
+-	 * not, this must be a bug, so we try to fix it up.
+-	 */
+-	if (descr->is_static_object && descr->is_static_object(addr)) {
+-		/* track this static object */
+-		debug_object_init(addr, descr);
+-		debug_object_activate(addr, descr);
+-	} else {
+-		debug_print_object(&o, "activate");
+-		ret = debug_object_fixup(descr->fixup_activate, addr,
+-					ODEBUG_STATE_NOTAVAILABLE);
+-		return ret ? 0 : -EINVAL;
++	/* If NULL the allocation has hit OOM */
++	if (!obj) {
++		debug_objects_oom();
++		return 0;
+ 	}
+-	return 0;
++
++	/* Object is neither static nor tracked. It's not initialized */
++	debug_print_object(&o, "activate");
++	ret = debug_object_fixup(descr->fixup_activate, addr, ODEBUG_STATE_NOTAVAILABLE);
++	return ret ? 0 : -EINVAL;
+ }
+ EXPORT_SYMBOL_GPL(debug_object_activate);
+ 
+@@ -867,6 +896,7 @@ EXPORT_SYMBOL_GPL(debug_object_free);
+  */
+ void debug_object_assert_init(void *addr, const struct debug_obj_descr *descr)
+ {
++	struct debug_obj o = { .object = addr, .state = ODEBUG_STATE_NOTAVAILABLE, .descr = descr };
+ 	struct debug_bucket *db;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+@@ -874,34 +904,25 @@ void debug_object_assert_init(void *addr, const struct debug_obj_descr *descr)
+ 	if (!debug_objects_enabled)
+ 		return;
+ 
++	debug_objects_fill_pool();
++
+ 	db = get_bucket((unsigned long) addr);
+ 
+ 	raw_spin_lock_irqsave(&db->lock, flags);
++	obj = lookup_object_or_alloc(addr, db, descr, false, true);
++	raw_spin_unlock_irqrestore(&db->lock, flags);
++	if (likely(!IS_ERR_OR_NULL(obj)))
++		return;
+ 
+-	obj = lookup_object(addr, db);
++	/* If NULL the allocation has hit OOM */
+ 	if (!obj) {
+-		struct debug_obj o = { .object = addr,
+-				       .state = ODEBUG_STATE_NOTAVAILABLE,
+-				       .descr = descr };
+-
+-		raw_spin_unlock_irqrestore(&db->lock, flags);
+-		/*
+-		 * Maybe the object is static, and we let the type specific
+-		 * code confirm. Track this static object if true, else invoke
+-		 * fixup.
+-		 */
+-		if (descr->is_static_object && descr->is_static_object(addr)) {
+-			/* Track this static object */
+-			debug_object_init(addr, descr);
+-		} else {
+-			debug_print_object(&o, "assert_init");
+-			debug_object_fixup(descr->fixup_assert_init, addr,
+-					   ODEBUG_STATE_NOTAVAILABLE);
+-		}
++		debug_objects_oom();
+ 		return;
+ 	}
+ 
+-	raw_spin_unlock_irqrestore(&db->lock, flags);
++	/* Object is neither tracked nor static. It's not initialized. */
++	debug_print_object(&o, "assert_init");
++	debug_object_fixup(descr->fixup_assert_init, addr, ODEBUG_STATE_NOTAVAILABLE);
+ }
+ EXPORT_SYMBOL_GPL(debug_object_assert_init);
+ 
+diff --git a/mm/backing-dev.c b/mm/backing-dev.c
+index ca770a783a9f9..b28f629c35271 100644
+--- a/mm/backing-dev.c
++++ b/mm/backing-dev.c
+@@ -378,6 +378,15 @@ static void wb_exit(struct bdi_writeback *wb)
+ static DEFINE_SPINLOCK(cgwb_lock);
+ static struct workqueue_struct *cgwb_release_wq;
+ 
++static void cgwb_free_rcu(struct rcu_head *rcu_head)
++{
++	struct bdi_writeback *wb = container_of(rcu_head,
++			struct bdi_writeback, rcu);
++
++	percpu_ref_exit(&wb->refcnt);
++	kfree(wb);
++}
++
+ static void cgwb_release_workfn(struct work_struct *work)
+ {
+ 	struct bdi_writeback *wb = container_of(work, struct bdi_writeback,
+@@ -397,7 +406,7 @@ static void cgwb_release_workfn(struct work_struct *work)
+ 	fprop_local_destroy_percpu(&wb->memcg_completions);
+ 	percpu_ref_exit(&wb->refcnt);
+ 	wb_exit(wb);
+-	kfree_rcu(wb, rcu);
++	call_rcu(&wb->rcu, cgwb_free_rcu);
+ }
+ 
+ static void cgwb_release(struct percpu_ref *refcnt)
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 1fd41b91a1a88..d85435db35f37 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -5973,7 +5973,21 @@ static void __build_all_zonelists(void *data)
+ 	int nid;
+ 	int __maybe_unused cpu;
+ 	pg_data_t *self = data;
++	unsigned long flags;
+ 
++	/*
++	 * Explicitly disable this CPU's interrupts before taking seqlock
++	 * to prevent any IRQ handler from calling into the page allocator
++	 * (e.g. GFP_ATOMIC) that could hit zonelist_iter_begin and livelock.
++	 */
++	local_irq_save(flags);
++	/*
++	 * Explicitly disable this CPU's synchronous printk() before taking
++	 * seqlock to prevent any printk() from trying to hold port->lock, for
++	 * tty_insert_flip_string_and_push_buffer() on other CPU might be
++	 * calling kmalloc(GFP_ATOMIC | __GFP_NOWARN) with port->lock held.
++	 */
++	printk_deferred_enter();
+ 	write_seqlock(&zonelist_update_seq);
+ 
+ #ifdef CONFIG_NUMA
+@@ -6008,6 +6022,8 @@ static void __build_all_zonelists(void *data)
+ 	}
+ 
+ 	write_sequnlock(&zonelist_update_seq);
++	printk_deferred_exit();
++	local_irq_restore(flags);
+ }
+ 
+ static noinline void __init
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index 86a1c99025ea0..929f85c6cf112 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -365,7 +365,7 @@ static int vlan_dev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 
+ 	switch (cmd) {
+ 	case SIOCSHWTSTAMP:
+-		if (!net_eq(dev_net(dev), &init_net))
++		if (!net_eq(dev_net(dev), dev_net(real_dev)))
+ 			break;
+ 		fallthrough;
+ 	case SIOCGMIIPHY:
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 73779af2fed61..4dcc1a8a8954f 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -996,7 +996,14 @@ static int hci_sock_ioctl(struct socket *sock, unsigned int cmd,
+ 	if (hci_sock_gen_cookie(sk)) {
+ 		struct sk_buff *skb;
+ 
+-		if (capable(CAP_NET_ADMIN))
++		/* Perform careful checks before setting the HCI_SOCK_TRUSTED
++		 * flag. Make sure that not only the current task but also
++		 * the socket opener has the required capability, since
++		 * privileged programs can be tricked into making ioctl calls
++		 * on HCI sockets, and the socket should not be marked as
++		 * trusted simply because the ioctl caller is privileged.
++		 */
++		if (sk_capable(sk, CAP_NET_ADMIN))
+ 			hci_sock_set_flag(sk, HCI_SOCK_TRUSTED);
+ 
+ 		/* Send event to monitor */
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 09cdefe5e1c83..fb6b3f2ae1921 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -4750,6 +4750,9 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
+ 			skb = alloc_skb(0, GFP_ATOMIC);
+ 	} else {
+ 		skb = skb_clone(orig_skb, GFP_ATOMIC);
++
++		if (skb_orphan_frags_rx(skb, GFP_ATOMIC))
++			return;
+ 	}
+ 	if (!skb)
+ 		return;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 0dbf950de832f..1e07df2821773 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1564,9 +1564,19 @@ struct sk_buff *__ip_make_skb(struct sock *sk,
+ 	cork->dst = NULL;
+ 	skb_dst_set(skb, &rt->dst);
+ 
+-	if (iph->protocol == IPPROTO_ICMP)
+-		icmp_out_count(net, ((struct icmphdr *)
+-			skb_transport_header(skb))->type);
++	if (iph->protocol == IPPROTO_ICMP) {
++		u8 icmp_type;
++
++		/* For such sockets, transhdrlen is zero when do ip_append_data(),
++		 * so icmphdr does not in skb linear region and can not get icmp_type
++		 * by icmp_hdr(skb)->type.
++		 */
++		if (sk->sk_type == SOCK_RAW && !inet_sk(sk)->hdrincl)
++			icmp_type = fl4->fl4_icmp_type;
++		else
++			icmp_type = icmp_hdr(skb)->type;
++		icmp_out_count(net, icmp_type);
++	}
+ 
+ 	ip_cork_release(cork);
+ out:
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index 1ce486a9bc076..9806bd56b95f1 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -1094,12 +1094,13 @@ tx_err:
+ 
+ static void ipip6_tunnel_bind_dev(struct net_device *dev)
+ {
++	struct ip_tunnel *tunnel = netdev_priv(dev);
++	int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+ 	struct net_device *tdev = NULL;
+-	struct ip_tunnel *tunnel;
++	int hlen = LL_MAX_HEADER;
+ 	const struct iphdr *iph;
+ 	struct flowi4 fl4;
+ 
+-	tunnel = netdev_priv(dev);
+ 	iph = &tunnel->parms.iph;
+ 
+ 	if (iph->daddr) {
+@@ -1122,14 +1123,15 @@ static void ipip6_tunnel_bind_dev(struct net_device *dev)
+ 		tdev = __dev_get_by_index(tunnel->net, tunnel->parms.link);
+ 
+ 	if (tdev && !netif_is_l3_master(tdev)) {
+-		int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+ 		int mtu;
+ 
+ 		mtu = tdev->mtu - t_hlen;
+ 		if (mtu < IPV6_MIN_MTU)
+ 			mtu = IPV6_MIN_MTU;
+ 		WRITE_ONCE(dev->mtu, mtu);
++		hlen = tdev->hard_header_len + tdev->needed_headroom;
+ 	}
++	dev->needed_headroom = t_hlen + hlen;
+ }
+ 
+ static void ipip6_tunnel_update(struct ip_tunnel *t, struct ip_tunnel_parm *p,
+diff --git a/net/ncsi/ncsi-aen.c b/net/ncsi/ncsi-aen.c
+index b635c194f0a85..62fb1031763d1 100644
+--- a/net/ncsi/ncsi-aen.c
++++ b/net/ncsi/ncsi-aen.c
+@@ -165,6 +165,7 @@ static int ncsi_aen_handler_cr(struct ncsi_dev_priv *ndp,
+ 	nc->state = NCSI_CHANNEL_INACTIVE;
+ 	list_add_tail_rcu(&nc->link, &ndp->channel_queue);
+ 	spin_unlock_irqrestore(&ndp->lock, flags);
++	nc->modes[NCSI_MODE_TX_ENABLE].enable = 0;
+ 
+ 	return ncsi_process_next_channel(ndp);
+ }
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 2143edafba772..fe51cedd9cc3c 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4479,12 +4479,24 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 	}
+ }
+ 
++void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
++{
++	if (nft_set_is_anonymous(set))
++		nft_clear(ctx->net, set);
++
++	set->use++;
++}
++EXPORT_SYMBOL_GPL(nf_tables_activate_set);
++
+ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			      struct nft_set_binding *binding,
+ 			      enum nft_trans_phase phase)
+ {
+ 	switch (phase) {
+ 	case NFT_TRANS_PREPARE:
++		if (nft_set_is_anonymous(set))
++			nft_deactivate_next(ctx->net, set);
++
+ 		set->use--;
+ 		return;
+ 	case NFT_TRANS_ABORT:
+@@ -7442,6 +7454,8 @@ static int nf_tables_validate(struct net *net)
+ 			if (nft_table_validate(net, table) < 0)
+ 				return -EAGAIN;
+ 		}
++
++		nft_validate_state_update(net, NFT_VALIDATE_SKIP);
+ 		break;
+ 	}
+ 
+@@ -8273,11 +8287,6 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 	return 0;
+ }
+ 
+-static void nf_tables_cleanup(struct net *net)
+-{
+-	nft_validate_state_update(net, NFT_VALIDATE_SKIP);
+-}
+-
+ static int nf_tables_abort(struct net *net, struct sk_buff *skb,
+ 			   enum nfnl_abort_action action)
+ {
+@@ -8309,7 +8318,6 @@ static const struct nfnetlink_subsystem nf_tables_subsys = {
+ 	.cb		= nf_tables_cb,
+ 	.commit		= nf_tables_commit,
+ 	.abort		= nf_tables_abort,
+-	.cleanup	= nf_tables_cleanup,
+ 	.valid_genid	= nf_tables_valid_genid,
+ 	.owner		= THIS_MODULE,
+ };
+diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
+index d3df66a39b5e0..edf386a020b9e 100644
+--- a/net/netfilter/nfnetlink.c
++++ b/net/netfilter/nfnetlink.c
+@@ -530,8 +530,6 @@ done:
+ 			goto replay_abort;
+ 		}
+ 	}
+-	if (ss->cleanup)
+-		ss->cleanup(net);
+ 
+ 	nfnl_err_deliver(&err_list, oskb);
+ 	kfree_skb(skb);
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 8c45e01fecdd8..038588d4d80e1 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -233,7 +233,7 @@ static void nft_dynset_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_dynset *priv = nft_expr_priv(expr);
+ 
+-	priv->set->use++;
++	nf_tables_activate_set(ctx, priv->set);
+ }
+ 
+ static void nft_dynset_destroy(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
+index b0f558b4fea54..8bc008ff00cb7 100644
+--- a/net/netfilter/nft_lookup.c
++++ b/net/netfilter/nft_lookup.c
+@@ -132,7 +132,7 @@ static void nft_lookup_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_lookup *priv = nft_expr_priv(expr);
+ 
+-	priv->set->use++;
++	nf_tables_activate_set(ctx, priv->set);
+ }
+ 
+ static void nft_lookup_destroy(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c
+index bc104d36d3bb2..25157d8cc2504 100644
+--- a/net/netfilter/nft_objref.c
++++ b/net/netfilter/nft_objref.c
+@@ -180,7 +180,7 @@ static void nft_objref_map_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_objref_map *priv = nft_expr_priv(expr);
+ 
+-	priv->set->use++;
++	nf_tables_activate_set(ctx, priv->set);
+ }
+ 
+ static void nft_objref_map_destroy(const struct nft_ctx *ctx,
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 2104fbdd63d29..eedb16517f16a 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1744,7 +1744,8 @@ static int netlink_getsockopt(struct socket *sock, int level, int optname,
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct netlink_sock *nlk = nlk_sk(sk);
+-	int len, val, err;
++	unsigned int flag;
++	int len, val;
+ 
+ 	if (level != SOL_NETLINK)
+ 		return -ENOPROTOOPT;
+@@ -1756,39 +1757,17 @@ static int netlink_getsockopt(struct socket *sock, int level, int optname,
+ 
+ 	switch (optname) {
+ 	case NETLINK_PKTINFO:
+-		if (len < sizeof(int))
+-			return -EINVAL;
+-		len = sizeof(int);
+-		val = nlk->flags & NETLINK_F_RECV_PKTINFO ? 1 : 0;
+-		if (put_user(len, optlen) ||
+-		    put_user(val, optval))
+-			return -EFAULT;
+-		err = 0;
++		flag = NETLINK_F_RECV_PKTINFO;
+ 		break;
+ 	case NETLINK_BROADCAST_ERROR:
+-		if (len < sizeof(int))
+-			return -EINVAL;
+-		len = sizeof(int);
+-		val = nlk->flags & NETLINK_F_BROADCAST_SEND_ERROR ? 1 : 0;
+-		if (put_user(len, optlen) ||
+-		    put_user(val, optval))
+-			return -EFAULT;
+-		err = 0;
++		flag = NETLINK_F_BROADCAST_SEND_ERROR;
+ 		break;
+ 	case NETLINK_NO_ENOBUFS:
+-		if (len < sizeof(int))
+-			return -EINVAL;
+-		len = sizeof(int);
+-		val = nlk->flags & NETLINK_F_RECV_NO_ENOBUFS ? 1 : 0;
+-		if (put_user(len, optlen) ||
+-		    put_user(val, optval))
+-			return -EFAULT;
+-		err = 0;
++		flag = NETLINK_F_RECV_NO_ENOBUFS;
+ 		break;
+ 	case NETLINK_LIST_MEMBERSHIPS: {
+-		int pos, idx, shift;
++		int pos, idx, shift, err = 0;
+ 
+-		err = 0;
+ 		netlink_lock_table();
+ 		for (pos = 0; pos * 8 < nlk->ngroups; pos += sizeof(u32)) {
+ 			if (len - pos < sizeof(u32))
+@@ -1805,40 +1784,32 @@ static int netlink_getsockopt(struct socket *sock, int level, int optname,
+ 		if (put_user(ALIGN(nlk->ngroups / 8, sizeof(u32)), optlen))
+ 			err = -EFAULT;
+ 		netlink_unlock_table();
+-		break;
++		return err;
+ 	}
+ 	case NETLINK_CAP_ACK:
+-		if (len < sizeof(int))
+-			return -EINVAL;
+-		len = sizeof(int);
+-		val = nlk->flags & NETLINK_F_CAP_ACK ? 1 : 0;
+-		if (put_user(len, optlen) ||
+-		    put_user(val, optval))
+-			return -EFAULT;
+-		err = 0;
++		flag = NETLINK_F_CAP_ACK;
+ 		break;
+ 	case NETLINK_EXT_ACK:
+-		if (len < sizeof(int))
+-			return -EINVAL;
+-		len = sizeof(int);
+-		val = nlk->flags & NETLINK_F_EXT_ACK ? 1 : 0;
+-		if (put_user(len, optlen) || put_user(val, optval))
+-			return -EFAULT;
+-		err = 0;
++		flag = NETLINK_F_EXT_ACK;
+ 		break;
+ 	case NETLINK_GET_STRICT_CHK:
+-		if (len < sizeof(int))
+-			return -EINVAL;
+-		len = sizeof(int);
+-		val = nlk->flags & NETLINK_F_STRICT_CHK ? 1 : 0;
+-		if (put_user(len, optlen) || put_user(val, optval))
+-			return -EFAULT;
+-		err = 0;
++		flag = NETLINK_F_STRICT_CHK;
+ 		break;
+ 	default:
+-		err = -ENOPROTOOPT;
++		return -ENOPROTOOPT;
+ 	}
+-	return err;
++
++	if (len < sizeof(int))
++		return -EINVAL;
++
++	len = sizeof(int);
++	val = nlk->flags & flag ? 1 : 0;
++
++	if (put_user(len, optlen) ||
++	    copy_to_user(optval, &val, len))
++		return -EFAULT;
++
++	return 0;
+ }
+ 
+ static void netlink_cmsg_recv_pktinfo(struct msghdr *msg, struct sk_buff *skb)
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 3716797c55643..2e766490a739b 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -269,7 +269,8 @@ static void packet_cached_dev_reset(struct packet_sock *po)
+ 
+ static bool packet_use_direct_xmit(const struct packet_sock *po)
+ {
+-	return po->xmit == packet_direct_xmit;
++	/* Paired with WRITE_ONCE() in packet_setsockopt() */
++	return READ_ONCE(po->xmit) == packet_direct_xmit;
+ }
+ 
+ static u16 packet_pick_tx_queue(struct sk_buff *skb)
+@@ -1995,7 +1996,7 @@ retry:
+ 		goto retry;
+ 	}
+ 
+-	if (!dev_validate_header(dev, skb->data, len)) {
++	if (!dev_validate_header(dev, skb->data, len) || !skb->len) {
+ 		err = -EINVAL;
+ 		goto out_unlock;
+ 	}
+@@ -2145,7 +2146,7 @@ static int packet_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	sll = &PACKET_SKB_CB(skb)->sa.ll;
+ 	sll->sll_hatype = dev->type;
+ 	sll->sll_pkttype = skb->pkt_type;
+-	if (unlikely(po->origdev))
++	if (unlikely(packet_sock_flag(po, PACKET_SOCK_ORIGDEV)))
+ 		sll->sll_ifindex = orig_dev->ifindex;
+ 	else
+ 		sll->sll_ifindex = dev->ifindex;
+@@ -2418,7 +2419,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	sll->sll_hatype = dev->type;
+ 	sll->sll_protocol = skb->protocol;
+ 	sll->sll_pkttype = skb->pkt_type;
+-	if (unlikely(po->origdev))
++	if (unlikely(packet_sock_flag(po, PACKET_SOCK_ORIGDEV)))
+ 		sll->sll_ifindex = orig_dev->ifindex;
+ 	else
+ 		sll->sll_ifindex = dev->ifindex;
+@@ -2825,7 +2826,8 @@ tpacket_error:
+ 		packet_inc_pending(&po->tx_ring);
+ 
+ 		status = TP_STATUS_SEND_REQUEST;
+-		err = po->xmit(skb);
++		/* Paired with WRITE_ONCE() in packet_setsockopt() */
++		err = READ_ONCE(po->xmit)(skb);
+ 		if (unlikely(err != 0)) {
+ 			if (err > 0)
+ 				err = net_xmit_errno(err);
+@@ -3028,7 +3030,8 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 		virtio_net_hdr_set_proto(skb, &vnet_hdr);
+ 	}
+ 
+-	err = po->xmit(skb);
++	/* Paired with WRITE_ONCE() in packet_setsockopt() */
++	err = READ_ONCE(po->xmit)(skb);
+ 	if (unlikely(err != 0)) {
+ 		if (err > 0)
+ 			err = net_xmit_errno(err);
+@@ -3482,7 +3485,7 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 		memcpy(msg->msg_name, &PACKET_SKB_CB(skb)->sa, copy_len);
+ 	}
+ 
+-	if (pkt_sk(sk)->auxdata) {
++	if (packet_sock_flag(pkt_sk(sk), PACKET_SOCK_AUXDATA)) {
+ 		struct tpacket_auxdata aux;
+ 
+ 		aux.tp_status = TP_STATUS_USER;
+@@ -3866,9 +3869,7 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ 		if (copy_from_sockptr(&val, optval, sizeof(val)))
+ 			return -EFAULT;
+ 
+-		lock_sock(sk);
+-		po->auxdata = !!val;
+-		release_sock(sk);
++		packet_sock_flag_set(po, PACKET_SOCK_AUXDATA, val);
+ 		return 0;
+ 	}
+ 	case PACKET_ORIGDEV:
+@@ -3880,9 +3881,7 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ 		if (copy_from_sockptr(&val, optval, sizeof(val)))
+ 			return -EFAULT;
+ 
+-		lock_sock(sk);
+-		po->origdev = !!val;
+-		release_sock(sk);
++		packet_sock_flag_set(po, PACKET_SOCK_ORIGDEV, val);
+ 		return 0;
+ 	}
+ 	case PACKET_VNET_HDR:
+@@ -3979,7 +3978,8 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ 		if (copy_from_sockptr(&val, optval, sizeof(val)))
+ 			return -EFAULT;
+ 
+-		po->xmit = val ? packet_direct_xmit : dev_queue_xmit;
++		/* Paired with all lockless reads of po->xmit */
++		WRITE_ONCE(po->xmit, val ? packet_direct_xmit : dev_queue_xmit);
+ 		return 0;
+ 	}
+ 	default:
+@@ -4030,10 +4030,10 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
+ 
+ 		break;
+ 	case PACKET_AUXDATA:
+-		val = po->auxdata;
++		val = packet_sock_flag(po, PACKET_SOCK_AUXDATA);
+ 		break;
+ 	case PACKET_ORIGDEV:
+-		val = po->origdev;
++		val = packet_sock_flag(po, PACKET_SOCK_ORIGDEV);
+ 		break;
+ 	case PACKET_VNET_HDR:
+ 		val = po->has_vnet_hdr;
+diff --git a/net/packet/diag.c b/net/packet/diag.c
+index 07812ae5ca073..d704c7bf51b20 100644
+--- a/net/packet/diag.c
++++ b/net/packet/diag.c
+@@ -23,9 +23,9 @@ static int pdiag_put_info(const struct packet_sock *po, struct sk_buff *nlskb)
+ 	pinfo.pdi_flags = 0;
+ 	if (po->running)
+ 		pinfo.pdi_flags |= PDI_RUNNING;
+-	if (po->auxdata)
++	if (packet_sock_flag(po, PACKET_SOCK_AUXDATA))
+ 		pinfo.pdi_flags |= PDI_AUXDATA;
+-	if (po->origdev)
++	if (packet_sock_flag(po, PACKET_SOCK_ORIGDEV))
+ 		pinfo.pdi_flags |= PDI_ORIGDEV;
+ 	if (po->has_vnet_hdr)
+ 		pinfo.pdi_flags |= PDI_VNETHDR;
+diff --git a/net/packet/internal.h b/net/packet/internal.h
+index 7af1e9179385f..3938cb413d5d3 100644
+--- a/net/packet/internal.h
++++ b/net/packet/internal.h
+@@ -116,10 +116,9 @@ struct packet_sock {
+ 	int			copy_thresh;
+ 	spinlock_t		bind_lock;
+ 	struct mutex		pg_vec_lock;
++	unsigned long		flags;
+ 	unsigned int		running;	/* bind_lock must be held */
+-	unsigned int		auxdata:1,	/* writer must hold sock lock */
+-				origdev:1,
+-				has_vnet_hdr:1,
++	unsigned int		has_vnet_hdr:1, /* writer must hold sock lock */
+ 				tp_loss:1,
+ 				tp_tx_has_off:1;
+ 	int			pressure;
+@@ -144,4 +143,25 @@ static struct packet_sock *pkt_sk(struct sock *sk)
+ 	return (struct packet_sock *)sk;
+ }
+ 
++enum packet_sock_flags {
++	PACKET_SOCK_ORIGDEV,
++	PACKET_SOCK_AUXDATA,
++};
++
++static inline void packet_sock_flag_set(struct packet_sock *po,
++					enum packet_sock_flags flag,
++					bool val)
++{
++	if (val)
++		set_bit(flag, &po->flags);
++	else
++		clear_bit(flag, &po->flags);
++}
++
++static inline bool packet_sock_flag(const struct packet_sock *po,
++				    enum packet_sock_flags flag)
++{
++	return test_bit(flag, &po->flags);
++}
++
+ #endif
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index a670553159abe..1882fea719035 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -753,7 +753,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ 		fallthrough;
+ 	case 1:
+ 		if (p.call.timeouts.hard > 0) {
+-			j = msecs_to_jiffies(p.call.timeouts.hard);
++			j = p.call.timeouts.hard * HZ;
+ 			now = jiffies;
+ 			j += now;
+ 			WRITE_ONCE(call->expect_term_by, j);
+diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
+index 24d561d8d9c97..25dad1921baf2 100644
+--- a/net/sched/act_mirred.c
++++ b/net/sched/act_mirred.c
+@@ -244,7 +244,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
+ 		goto out;
+ 	}
+ 
+-	if (unlikely(!(dev->flags & IFF_UP))) {
++	if (unlikely(!(dev->flags & IFF_UP)) || !netif_carrier_ok(dev)) {
+ 		net_notice_ratelimited("tc mirred to Houston: device %s is down\n",
+ 				       dev->name);
+ 		goto out;
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index c410a736301bc..53d315ed94307 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1466,6 +1466,7 @@ static int tcf_block_bind(struct tcf_block *block,
+ 
+ err_unroll:
+ 	list_for_each_entry_safe(block_cb, next, &bo->cb_list, list) {
++		list_del(&block_cb->driver_list);
+ 		if (i-- > 0) {
+ 			list_del(&block_cb->list);
+ 			tcf_block_playback_offloads(block, block_cb->cb,
+diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
+index 2fb76fc0cc31b..5a1274199fe33 100644
+--- a/net/sched/sch_fq.c
++++ b/net/sched/sch_fq.c
+@@ -779,13 +779,17 @@ static int fq_resize(struct Qdisc *sch, u32 log)
+ 	return 0;
+ }
+ 
++static struct netlink_range_validation iq_range = {
++	.max = INT_MAX,
++};
++
+ static const struct nla_policy fq_policy[TCA_FQ_MAX + 1] = {
+ 	[TCA_FQ_UNSPEC]			= { .strict_start_type = TCA_FQ_TIMER_SLACK },
+ 
+ 	[TCA_FQ_PLIMIT]			= { .type = NLA_U32 },
+ 	[TCA_FQ_FLOW_PLIMIT]		= { .type = NLA_U32 },
+ 	[TCA_FQ_QUANTUM]		= { .type = NLA_U32 },
+-	[TCA_FQ_INITIAL_QUANTUM]	= { .type = NLA_U32 },
++	[TCA_FQ_INITIAL_QUANTUM]	= NLA_POLICY_FULL_RANGE(NLA_U32, &iq_range),
+ 	[TCA_FQ_RATE_ENABLE]		= { .type = NLA_U32 },
+ 	[TCA_FQ_FLOW_DEFAULT_RATE]	= { .type = NLA_U32 },
+ 	[TCA_FQ_FLOW_MAX_RATE]		= { .type = NLA_U32 },
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index c6e8bd78e35d6..e1ce0f261f0be 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1967,9 +1967,6 @@ call_bind_status(struct rpc_task *task)
+ 			status = -EOPNOTSUPP;
+ 			break;
+ 		}
+-		if (task->tk_rebind_retry == 0)
+-			break;
+-		task->tk_rebind_retry--;
+ 		rpc_delay(task, 3*HZ);
+ 		goto retry_timeout;
+ 	case -ENOBUFS:
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index f0f55fbd13752..a00890962e115 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -796,7 +796,6 @@ rpc_init_task_statistics(struct rpc_task *task)
+ 	/* Initialize retry counters */
+ 	task->tk_garb_retry = 2;
+ 	task->tk_cred_retry = 2;
+-	task->tk_rebind_retry = 2;
+ 
+ 	/* starting timestamp */
+ 	task->tk_start = ktime_get();
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index 3c7ce60fe9a5a..a76d43787549f 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -155,6 +155,7 @@ static inline bool xp_unaligned_validate_desc(struct xsk_buff_pool *pool,
+ 		return false;
+ 
+ 	if (base_addr >= pool->addrs_cnt || addr >= pool->addrs_cnt ||
++	    addr + desc->len > pool->addrs_cnt ||
+ 	    xp_desc_crosses_non_contig_pg(pool, addr, desc->len))
+ 		return false;
+ 
+diff --git a/scripts/gdb/linux/clk.py b/scripts/gdb/linux/clk.py
+index 061aecfa294e6..7a01fdc3e8446 100644
+--- a/scripts/gdb/linux/clk.py
++++ b/scripts/gdb/linux/clk.py
+@@ -41,6 +41,8 @@ are cached and potentially out of date"""
+             self.show_subtree(child, level + 1)
+ 
+     def invoke(self, arg, from_tty):
++        if utils.gdb_eval_or_none("clk_root_list") is None:
++            raise gdb.GdbError("No clocks registered")
+         gdb.write("                                 enable  prepare  protect               \n")
+         gdb.write("   clock                          count    count    count        rate   \n")
+         gdb.write("------------------------------------------------------------------------\n")
+diff --git a/scripts/gdb/linux/genpd.py b/scripts/gdb/linux/genpd.py
+index 39cd1abd85590..b53649c0a77a6 100644
+--- a/scripts/gdb/linux/genpd.py
++++ b/scripts/gdb/linux/genpd.py
+@@ -5,7 +5,7 @@
+ import gdb
+ import sys
+ 
+-from linux.utils import CachedType
++from linux.utils import CachedType, gdb_eval_or_none
+ from linux.lists import list_for_each_entry
+ 
+ generic_pm_domain_type = CachedType('struct generic_pm_domain')
+@@ -70,6 +70,8 @@ Output is similar to /sys/kernel/debug/pm_genpd/pm_genpd_summary'''
+             gdb.write('    %-50s  %s\n' % (kobj_path, rtpm_status_str(dev)))
+ 
+     def invoke(self, arg, from_tty):
++        if gdb_eval_or_none("&gpd_list") is None:
++            raise gdb.GdbError("No power domain(s) registered")
+         gdb.write('domain                          status          children\n');
+         gdb.write('    /device                                             runtime status\n');
+         gdb.write('----------------------------------------------------------------------\n');
+diff --git a/scripts/gdb/linux/timerlist.py b/scripts/gdb/linux/timerlist.py
+index 071d0dd5a6349..51def847f1ef9 100644
+--- a/scripts/gdb/linux/timerlist.py
++++ b/scripts/gdb/linux/timerlist.py
+@@ -73,7 +73,7 @@ def print_cpu(hrtimer_bases, cpu, max_clock_bases):
+     ts = cpus.per_cpu(tick_sched_ptr, cpu)
+ 
+     text = "cpu: {}\n".format(cpu)
+-    for i in xrange(max_clock_bases):
++    for i in range(max_clock_bases):
+         text += " clock {}:\n".format(i)
+         text += print_base(cpu_base['clock_base'][i])
+ 
+@@ -158,6 +158,8 @@ def pr_cpumask(mask):
+     num_bytes = (nr_cpu_ids + 7) / 8
+     buf = utils.read_memoryview(inf, bits, num_bytes).tobytes()
+     buf = binascii.b2a_hex(buf)
++    if type(buf) is not str:
++        buf=buf.decode()
+ 
+     chunks = []
+     i = num_bytes
+diff --git a/scripts/gdb/linux/utils.py b/scripts/gdb/linux/utils.py
+index ff7c1799d588f..db59f986c7fdd 100644
+--- a/scripts/gdb/linux/utils.py
++++ b/scripts/gdb/linux/utils.py
+@@ -89,7 +89,10 @@ def get_target_endianness():
+ 
+ 
+ def read_memoryview(inf, start, length):
+-    return memoryview(inf.read_memory(start, length))
++    m = inf.read_memory(start, length)
++    if type(m) is memoryview:
++        return m
++    return memoryview(m)
+ 
+ 
+ def read_u16(buffer, offset):
+diff --git a/security/selinux/Makefile b/security/selinux/Makefile
+index 4d8e0e8adf0b1..ee1ddda964478 100644
+--- a/security/selinux/Makefile
++++ b/security/selinux/Makefile
+@@ -21,8 +21,8 @@ ccflags-y := -I$(srctree)/security/selinux -I$(srctree)/security/selinux/include
+ $(addprefix $(obj)/,$(selinux-y)): $(obj)/flask.h
+ 
+ quiet_cmd_flask = GEN     $(obj)/flask.h $(obj)/av_permissions.h
+-      cmd_flask = scripts/selinux/genheaders/genheaders $(obj)/flask.h $(obj)/av_permissions.h
++      cmd_flask = $< $(obj)/flask.h $(obj)/av_permissions.h
+ 
+ targets += flask.h av_permissions.h
+-$(obj)/flask.h: $(src)/include/classmap.h FORCE
++$(obj)/flask.h $(obj)/av_permissions.h &: scripts/selinux/genheaders/genheaders FORCE
+ 	$(call if_changed,flask)
+diff --git a/sound/oss/dmasound/dmasound.h b/sound/oss/dmasound/dmasound.h
+index c1c52b479da26..ad8ce6a1c25c7 100644
+--- a/sound/oss/dmasound/dmasound.h
++++ b/sound/oss/dmasound/dmasound.h
+@@ -88,11 +88,7 @@ static inline int ioctl_return(int __user *addr, int value)
+      */
+ 
+ extern int dmasound_init(void);
+-#ifdef MODULE
+ extern void dmasound_deinit(void);
+-#else
+-#define dmasound_deinit()	do { } while (0)
+-#endif
+ 
+ /* description of the set-up applies to either hard or soft settings */
+ 
+@@ -114,9 +110,7 @@ typedef struct {
+     void *(*dma_alloc)(unsigned int, gfp_t);
+     void (*dma_free)(void *, unsigned int);
+     int (*irqinit)(void);
+-#ifdef MODULE
+     void (*irqcleanup)(void);
+-#endif
+     void (*init)(void);
+     void (*silence)(void);
+     int (*setFormat)(int);
+diff --git a/sound/oss/dmasound/dmasound_core.c b/sound/oss/dmasound/dmasound_core.c
+index 38f25e97538fa..7454b058dda54 100644
+--- a/sound/oss/dmasound/dmasound_core.c
++++ b/sound/oss/dmasound/dmasound_core.c
+@@ -206,12 +206,10 @@ module_param(writeBufSize, int, 0);
+ 
+ MODULE_LICENSE("GPL");
+ 
+-#ifdef MODULE
+ static int sq_unit = -1;
+ static int mixer_unit = -1;
+ static int state_unit = -1;
+ static int irq_installed;
+-#endif /* MODULE */
+ 
+ /* control over who can modify resources shared between play/record */
+ static fmode_t shared_resource_owner;
+@@ -391,9 +389,6 @@ static const struct file_operations mixer_fops =
+ 
+ static void mixer_init(void)
+ {
+-#ifndef MODULE
+-	int mixer_unit;
+-#endif
+ 	mixer_unit = register_sound_mixer(&mixer_fops, -1);
+ 	if (mixer_unit < 0)
+ 		return;
+@@ -1176,9 +1171,6 @@ static const struct file_operations sq_fops =
+ static int sq_init(void)
+ {
+ 	const struct file_operations *fops = &sq_fops;
+-#ifndef MODULE
+-	int sq_unit;
+-#endif
+ 
+ 	sq_unit = register_sound_dsp(fops, -1);
+ 	if (sq_unit < 0) {
+@@ -1380,9 +1372,6 @@ static const struct file_operations state_fops = {
+ 
+ static int state_init(void)
+ {
+-#ifndef MODULE
+-	int state_unit;
+-#endif
+ 	state_unit = register_sound_special(&state_fops, SND_DEV_STATUS);
+ 	if (state_unit < 0)
+ 		return state_unit ;
+@@ -1400,10 +1389,9 @@ static int state_init(void)
+ int dmasound_init(void)
+ {
+ 	int res ;
+-#ifdef MODULE
++
+ 	if (irq_installed)
+ 		return -EBUSY;
+-#endif
+ 
+ 	/* Set up sound queue, /dev/audio and /dev/dsp. */
+ 
+@@ -1422,9 +1410,7 @@ int dmasound_init(void)
+ 		printk(KERN_ERR "DMA sound driver: Interrupt initialization failed\n");
+ 		return -ENODEV;
+ 	}
+-#ifdef MODULE
+ 	irq_installed = 1;
+-#endif
+ 
+ 	printk(KERN_INFO "%s DMA sound driver rev %03d installed\n",
+ 		dmasound.mach.name, (DMASOUND_CORE_REVISION<<4) +
+@@ -1438,8 +1424,6 @@ int dmasound_init(void)
+ 	return 0;
+ }
+ 
+-#ifdef MODULE
+-
+ void dmasound_deinit(void)
+ {
+ 	if (irq_installed) {
+@@ -1458,9 +1442,7 @@ void dmasound_deinit(void)
+ 		unregister_sound_dsp(sq_unit);
+ }
+ 
+-#else /* !MODULE */
+-
+-static int dmasound_setup(char *str)
++static int __maybe_unused dmasound_setup(char *str)
+ {
+ 	int ints[6], size;
+ 
+@@ -1503,8 +1485,6 @@ static int dmasound_setup(char *str)
+ 
+ __setup("dmasound=", dmasound_setup);
+ 
+-#endif /* !MODULE */
+-
+     /*
+      *  Conversion tables
+      */
+@@ -1591,9 +1571,7 @@ char dmasound_alaw2dma8[] = {
+ 
+ EXPORT_SYMBOL(dmasound);
+ EXPORT_SYMBOL(dmasound_init);
+-#ifdef MODULE
+ EXPORT_SYMBOL(dmasound_deinit);
+-#endif
+ EXPORT_SYMBOL(dmasound_write_sq);
+ EXPORT_SYMBOL(dmasound_catchRadius);
+ #ifdef HAS_8BIT_TABLES
+diff --git a/sound/soc/codecs/es8316.c b/sound/soc/codecs/es8316.c
+index 609459077f9d9..bc3d46617a113 100644
+--- a/sound/soc/codecs/es8316.c
++++ b/sound/soc/codecs/es8316.c
+@@ -807,15 +807,14 @@ static int es8316_i2c_probe(struct i2c_client *i2c_client,
+ 	es8316->irq = i2c_client->irq;
+ 	mutex_init(&es8316->lock);
+ 
+-	ret = devm_request_threaded_irq(dev, es8316->irq, NULL, es8316_irq,
+-					IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
+-					"es8316", es8316);
+-	if (ret == 0) {
+-		/* Gets re-enabled by es8316_set_jack() */
+-		disable_irq(es8316->irq);
+-	} else {
+-		dev_warn(dev, "Failed to get IRQ %d: %d\n", es8316->irq, ret);
+-		es8316->irq = -ENXIO;
++	if (es8316->irq > 0) {
++		ret = devm_request_threaded_irq(dev, es8316->irq, NULL, es8316_irq,
++						IRQF_TRIGGER_HIGH | IRQF_ONESHOT | IRQF_NO_AUTOEN,
++						"es8316", es8316);
++		if (ret) {
++			dev_warn(dev, "Failed to get IRQ %d: %d\n", es8316->irq, ret);
++			es8316->irq = -ENXIO;
++		}
+ 	}
+ 
+ 	return devm_snd_soc_register_component(&i2c_client->dev,
+diff --git a/sound/soc/fsl/fsl_mqs.c b/sound/soc/fsl/fsl_mqs.c
+index 0d4efbed41dab..c33439650823b 100644
+--- a/sound/soc/fsl/fsl_mqs.c
++++ b/sound/soc/fsl/fsl_mqs.c
+@@ -204,10 +204,10 @@ static int fsl_mqs_probe(struct platform_device *pdev)
+ 		}
+ 
+ 		mqs_priv->regmap = syscon_node_to_regmap(gpr_np);
++		of_node_put(gpr_np);
+ 		if (IS_ERR(mqs_priv->regmap)) {
+ 			dev_err(&pdev->dev, "failed to get gpr regmap\n");
+-			ret = PTR_ERR(mqs_priv->regmap);
+-			goto err_free_gpr_np;
++			return PTR_ERR(mqs_priv->regmap);
+ 		}
+ 	} else {
+ 		regs = devm_platform_ioremap_resource(pdev, 0);
+@@ -236,8 +236,7 @@ static int fsl_mqs_probe(struct platform_device *pdev)
+ 	if (IS_ERR(mqs_priv->mclk)) {
+ 		dev_err(&pdev->dev, "failed to get the clock: %ld\n",
+ 			PTR_ERR(mqs_priv->mclk));
+-		ret = PTR_ERR(mqs_priv->mclk);
+-		goto err_free_gpr_np;
++		return PTR_ERR(mqs_priv->mclk);
+ 	}
+ 
+ 	dev_set_drvdata(&pdev->dev, mqs_priv);
+@@ -246,13 +245,9 @@ static int fsl_mqs_probe(struct platform_device *pdev)
+ 	ret = devm_snd_soc_register_component(&pdev->dev, &soc_codec_fsl_mqs,
+ 			&fsl_mqs_dai, 1);
+ 	if (ret)
+-		goto err_free_gpr_np;
+-	return 0;
+-
+-err_free_gpr_np:
+-	of_node_put(gpr_np);
++		return ret;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int fsl_mqs_remove(struct platform_device *pdev)
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 8a99cb6dfcd69..9a5ab96f917d3 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -393,6 +393,18 @@ static int byt_rt5640_aif1_hw_params(struct snd_pcm_substream *substream,
+ 
+ /* Please keep this list alphabetically sorted */
+ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
++	{	/* Acer Iconia One 7 B1-750 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Insyde"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "VESPA2"),
++		},
++		.driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
++					BYT_RT5640_JD_SRC_JD1_IN4P |
++					BYT_RT5640_OVCD_TH_1500UA |
++					BYT_RT5640_OVCD_SF_0P75 |
++					BYT_RT5640_SSP0_AIF1 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{	/* Acer Iconia Tab 8 W1-810 */
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Acer"),
+diff --git a/sound/usb/caiaq/input.c b/sound/usb/caiaq/input.c
+index 1e2cf2f08eecd..84f26dce7f5d0 100644
+--- a/sound/usb/caiaq/input.c
++++ b/sound/usb/caiaq/input.c
+@@ -804,6 +804,7 @@ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev)
+ 
+ 	default:
+ 		/* no input methods supported on this device */
++		ret = -EINVAL;
+ 		goto exit_free_idev;
+ 	}
+ 
+diff --git a/tools/bpf/bpftool/json_writer.c b/tools/bpf/bpftool/json_writer.c
+index 7fea83bedf488..bca5dd0a59e34 100644
+--- a/tools/bpf/bpftool/json_writer.c
++++ b/tools/bpf/bpftool/json_writer.c
+@@ -80,9 +80,6 @@ static void jsonw_puts(json_writer_t *self, const char *str)
+ 		case '"':
+ 			fputs("\\\"", self->out);
+ 			break;
+-		case '\'':
+-			fputs("\\\'", self->out);
+-			break;
+ 		default:
+ 			putc(*str, self->out);
+ 		}
+diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c
+index 8608cd68cdd07..13d614b16f6c2 100644
+--- a/tools/bpf/bpftool/xlated_dumper.c
++++ b/tools/bpf/bpftool/xlated_dumper.c
+@@ -363,8 +363,15 @@ void dump_xlated_for_graph(struct dump_data *dd, void *buf_start, void *buf_end,
+ 	struct bpf_insn *insn_start = buf_start;
+ 	struct bpf_insn *insn_end = buf_end;
+ 	struct bpf_insn *cur = insn_start;
++	bool double_insn = false;
+ 
+ 	for (; cur <= insn_end; cur++) {
++		if (double_insn) {
++			double_insn = false;
++			continue;
++		}
++		double_insn = cur->code == (BPF_LD | BPF_IMM | BPF_DW);
++
+ 		printf("% 4d: ", (int)(cur - insn_start + start_idx));
+ 		print_bpf_insn(&cbs, cur, true);
+ 		if (cur != insn_end)
+diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
+index d3b5f5faf8c14..02e5774cabb6e 100644
+--- a/tools/perf/builtin-sched.c
++++ b/tools/perf/builtin-sched.c
+@@ -670,7 +670,7 @@ static void create_tasks(struct perf_sched *sched)
+ 	err = pthread_attr_init(&attr);
+ 	BUG_ON(err);
+ 	err = pthread_attr_setstacksize(&attr,
+-			(size_t) max(16 * 1024, PTHREAD_STACK_MIN));
++			(size_t) max(16 * 1024, (int)PTHREAD_STACK_MIN));
+ 	BUG_ON(err);
+ 	err = pthread_mutex_lock(&sched->start_work_mutex);
+ 	BUG_ON(err);
+diff --git a/tools/perf/pmu-events/arch/powerpc/power9/other.json b/tools/perf/pmu-events/arch/powerpc/power9/other.json
+index 3f69422c21f99..f10bd554521a0 100644
+--- a/tools/perf/pmu-events/arch/powerpc/power9/other.json
++++ b/tools/perf/pmu-events/arch/powerpc/power9/other.json
+@@ -1417,7 +1417,7 @@
+   {
+     "EventCode": "0x45054",
+     "EventName": "PM_FMA_CMPL",
+-    "BriefDescription": "two flops operation completed (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only. "
++    "BriefDescription": "two flops operation completed (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only."
+   },
+   {
+     "EventCode": "0x201E8",
+@@ -2017,7 +2017,7 @@
+   {
+     "EventCode": "0xC0BC",
+     "EventName": "PM_LSU_FLUSH_OTHER",
+-    "BriefDescription": "Other LSU flushes including: Sync (sync ack from L2 caused search of LRQ for oldest snooped load, This will either signal a Precise Flush of the oldest snooped loa or a Flush Next PPC); Data Valid Flush Next (several cases of this, one example is store and reload are lined up such that a store-hit-reload scenario exists and the CDF has already launched and has gotten bad/stale data); Bad Data Valid Flush Next (might be a few cases of this, one example is a larxa (D$ hit) return data and dval but can't allocate to LMQ (LMQ full or other reason). Already gave dval but can't watch it for snoop_hit_larx. Need to take the “bad dval” back and flush all younger ops)"
++    "BriefDescription": "Other LSU flushes including: Sync (sync ack from L2 caused search of LRQ for oldest snooped load, This will either signal a Precise Flush of the oldest snooped loa or a Flush Next PPC); Data Valid Flush Next (several cases of this, one example is store and reload are lined up such that a store-hit-reload scenario exists and the CDF has already launched and has gotten bad/stale data); Bad Data Valid Flush Next (might be a few cases of this, one example is a larxa (D$ hit) return data and dval but can't allocate to LMQ (LMQ full or other reason). Already gave dval but can't watch it for snoop_hit_larx. Need to take the 'bad dval' back and flush all younger ops)"
+   },
+   {
+     "EventCode": "0x5094",
+diff --git a/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json b/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json
+index d0265f255de2b..723bffa41c448 100644
+--- a/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json
++++ b/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json
+@@ -442,7 +442,7 @@
+   {
+     "EventCode": "0x4D052",
+     "EventName": "PM_2FLOP_CMPL",
+-    "BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg "
++    "BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg"
+   },
+   {
+     "EventCode": "0x1F142",
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index 18452f12510c0..41dd4c266cc00 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -2279,6 +2279,7 @@ static int find_entire_kern_cb(void *arg, const char *name __maybe_unused,
+ 			       char type, u64 start)
+ {
+ 	struct sym_args *args = arg;
++	u64 size;
+ 
+ 	if (!kallsyms__is_function(type))
+ 		return 0;
+@@ -2288,7 +2289,9 @@ static int find_entire_kern_cb(void *arg, const char *name __maybe_unused,
+ 		args->start = start;
+ 	}
+ 	/* Don't know exactly where the kernel ends, so we add a page */
+-	args->size = round_up(start, page_size) + page_size - args->start;
++	size = round_up(start, page_size) + page_size - args->start;
++	if (size > args->size)
++		args->size = size;
+ 
+ 	return 0;
+ }
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index e4c485f92c028..48fda1a19ab5b 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -1639,6 +1639,8 @@ static void intel_pt_calc_cbr(struct intel_pt_decoder *decoder)
+ 
+ 	decoder->cbr = cbr;
+ 	decoder->cbr_cyc_to_tsc = decoder->max_non_turbo_ratio_fp / cbr;
++	decoder->cyc_ref_timestamp = decoder->timestamp;
++	decoder->cycle_cnt = 0;
+ 
+ 	intel_pt_mtc_cyc_cnt_cbr(decoder);
+ }
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index ac45da0302a73..d322305bc1828 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -1670,7 +1670,7 @@ static int perf_pmu__new_caps(struct list_head *list, char *name, char *value)
+ 	return 0;
+ 
+ free_name:
+-	zfree(caps->name);
++	zfree(&caps->name);
+ free_caps:
+ 	free(caps);
+ 
+diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
+index 5e9e96452b9e6..42806102010bb 100644
+--- a/tools/perf/util/sort.c
++++ b/tools/perf/util/sort.c
+@@ -873,8 +873,7 @@ static int hist_entry__dso_to_filter(struct hist_entry *he, int type,
+ static int64_t
+ sort__sym_from_cmp(struct hist_entry *left, struct hist_entry *right)
+ {
+-	struct addr_map_symbol *from_l = &left->branch_info->from;
+-	struct addr_map_symbol *from_r = &right->branch_info->from;
++	struct addr_map_symbol *from_l, *from_r;
+ 
+ 	if (!left->branch_info || !right->branch_info)
+ 		return cmp_null(left->branch_info, right->branch_info);
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 5221f272f85c6..b171d134ce87a 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -548,7 +548,7 @@ static int elf_read_build_id(Elf *elf, void *bf, size_t size)
+ 				size_t sz = min(size, descsz);
+ 				memcpy(bf, ptr, sz);
+ 				memset(bf + sz, 0, size - sz);
+-				err = descsz;
++				err = sz;
+ 				break;
+ 			}
+ 		}
+diff --git a/tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c b/tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c
+index 643dfa35419c1..48dc5827daa7f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c
++++ b/tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c
+@@ -56,8 +56,9 @@ static bool assert_storage_noexist(struct bpf_map *map, const void *key)
+ 
+ static bool connect_send(const char *cgroup_path)
+ {
+-	bool res = true;
+ 	int server_fd = -1, client_fd = -1;
++	char message[] = "message";
++	bool res = true;
+ 
+ 	if (join_cgroup(cgroup_path))
+ 		goto out_clean;
+@@ -70,7 +71,10 @@ static bool connect_send(const char *cgroup_path)
+ 	if (client_fd < 0)
+ 		goto out_clean;
+ 
+-	if (send(client_fd, "message", strlen("message"), 0) < 0)
++	if (send(client_fd, &message, sizeof(message), 0) < 0)
++		goto out_clean;
++
++	if (read(server_fd, &message, sizeof(message)) < 0)
+ 		goto out_clean;
+ 
+ 	res = false;
+diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
+index 56ccbeae0638d..c20d0a7ecbe63 100644
+--- a/tools/testing/selftests/resctrl/fill_buf.c
++++ b/tools/testing/selftests/resctrl/fill_buf.c
+@@ -68,6 +68,8 @@ static void *malloc_and_init_memory(size_t s)
+ 	size_t s64;
+ 
+ 	void *p = memalign(PAGE_SIZE, s);
++	if (!p)
++		return NULL;
+ 
+ 	p64 = (uint64_t *)p;
+ 	s64 = s / sizeof(uint64_t);
+diff --git a/tools/testing/selftests/resctrl/mba_test.c b/tools/testing/selftests/resctrl/mba_test.c
+index 6449fbd96096a..6cfddd1d43558 100644
+--- a/tools/testing/selftests/resctrl/mba_test.c
++++ b/tools/testing/selftests/resctrl/mba_test.c
+@@ -28,6 +28,7 @@ static int mba_setup(int num, ...)
+ 	struct resctrl_val_param *p;
+ 	char allocation_str[64];
+ 	va_list param;
++	int ret;
+ 
+ 	va_start(param, num);
+ 	p = va_arg(param, struct resctrl_val_param *);
+@@ -45,7 +46,11 @@ static int mba_setup(int num, ...)
+ 
+ 	sprintf(allocation_str, "%d", allocation);
+ 
+-	write_schemata(p->ctrlgrp, allocation_str, p->cpu_no, p->resctrl_val);
++	ret = write_schemata(p->ctrlgrp, allocation_str, p->cpu_no,
++			     p->resctrl_val);
++	if (ret < 0)
++		return ret;
++
+ 	allocation -= ALLOCATION_STEP;
+ 
+ 	return 0;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-05-17 11:25 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-05-17 11:25 UTC (permalink / raw
  To: gentoo-commits

commit:     f0be2afc205c71e13111e110cf77d7e1e21cfaf9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 17 11:25:09 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 17 11:25:09 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f0be2afc

Remove redundant patch

Removed:
1520_fs-enable-link-security-restrictions-by-default.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   4 -
 ...nf-tables-make-deleted-anon-sets-inactive.patch | 121 ---------------------
 2 files changed, 125 deletions(-)

diff --git a/0000_README b/0000_README
index 90d28c5e..a665318d 100644
--- a/0000_README
+++ b/0000_README
@@ -771,10 +771,6 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
-Patch:  1520_fs-enable-link-security-restrictions-by-default.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=c1592a89942e9678f7d9c8030efa777c0d57edab
-Desc:   netfilter: nf_tables: deactivate anonymous set from preparation phase
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1520_nf-tables-make-deleted-anon-sets-inactive.patch b/1520_nf-tables-make-deleted-anon-sets-inactive.patch
deleted file mode 100644
index cd75de5c..00000000
--- a/1520_nf-tables-make-deleted-anon-sets-inactive.patch
+++ /dev/null
@@ -1,121 +0,0 @@
-From c1592a89942e9678f7d9c8030efa777c0d57edab Mon Sep 17 00:00:00 2001
-From: Pablo Neira Ayuso <pablo@netfilter.org>
-Date: Tue, 2 May 2023 10:25:24 +0200
-Subject: netfilter: nf_tables: deactivate anonymous set from preparation phase
-
-Toggle deleted anonymous sets as inactive in the next generation, so
-users cannot perform any update on it. Clear the generation bitmask
-in case the transaction is aborted.
-
-The following KASAN splat shows a set element deletion for a bound
-anonymous set that has been already removed in the same transaction.
-
-[   64.921510] ==================================================================
-[   64.923123] BUG: KASAN: wild-memory-access in nf_tables_commit+0xa24/0x1490 [nf_tables]
-[   64.924745] Write of size 8 at addr dead000000000122 by task test/890
-[   64.927903] CPU: 3 PID: 890 Comm: test Not tainted 6.3.0+ #253
-[   64.931120] Call Trace:
-[   64.932699]  <TASK>
-[   64.934292]  dump_stack_lvl+0x33/0x50
-[   64.935908]  ? nf_tables_commit+0xa24/0x1490 [nf_tables]
-[   64.937551]  kasan_report+0xda/0x120
-[   64.939186]  ? nf_tables_commit+0xa24/0x1490 [nf_tables]
-[   64.940814]  nf_tables_commit+0xa24/0x1490 [nf_tables]
-[   64.942452]  ? __kasan_slab_alloc+0x2d/0x60
-[   64.944070]  ? nf_tables_setelem_notify+0x190/0x190 [nf_tables]
-[   64.945710]  ? kasan_set_track+0x21/0x30
-[   64.947323]  nfnetlink_rcv_batch+0x709/0xd90 [nfnetlink]
-[   64.948898]  ? nfnetlink_rcv_msg+0x480/0x480 [nfnetlink]
-
-Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
----
- include/net/netfilter/nf_tables.h |  1 +
- net/netfilter/nf_tables_api.c     | 12 ++++++++++++
- net/netfilter/nft_dynset.c        |  2 +-
- net/netfilter/nft_lookup.c        |  2 +-
- net/netfilter/nft_objref.c        |  2 +-
- 5 files changed, 16 insertions(+), 3 deletions(-)
-
-diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
-index 3ed21d2d56590..2e24ea1d744c2 100644
---- a/include/net/netfilter/nf_tables.h
-+++ b/include/net/netfilter/nf_tables.h
-@@ -619,6 +619,7 @@ struct nft_set_binding {
- };
- 
- enum nft_trans_phase;
-+void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set);
- void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
- 			      struct nft_set_binding *binding,
- 			      enum nft_trans_phase phase);
-diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
-index 8b6c61a2196cb..59fb8320ab4d7 100644
---- a/net/netfilter/nf_tables_api.c
-+++ b/net/netfilter/nf_tables_api.c
-@@ -5127,12 +5127,24 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
- 	}
- }
- 
-+void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
-+{
-+	if (nft_set_is_anonymous(set))
-+		nft_clear(ctx->net, set);
-+
-+	set->use++;
-+}
-+EXPORT_SYMBOL_GPL(nf_tables_activate_set);
-+
- void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
- 			      struct nft_set_binding *binding,
- 			      enum nft_trans_phase phase)
- {
- 	switch (phase) {
- 	case NFT_TRANS_PREPARE:
-+		if (nft_set_is_anonymous(set))
-+			nft_deactivate_next(ctx->net, set);
-+
- 		set->use--;
- 		return;
- 	case NFT_TRANS_ABORT:
-diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
-index 274579b1696e0..bd19c7aec92ee 100644
---- a/net/netfilter/nft_dynset.c
-+++ b/net/netfilter/nft_dynset.c
-@@ -342,7 +342,7 @@ static void nft_dynset_activate(const struct nft_ctx *ctx,
- {
- 	struct nft_dynset *priv = nft_expr_priv(expr);
- 
--	priv->set->use++;
-+	nf_tables_activate_set(ctx, priv->set);
- }
- 
- static void nft_dynset_destroy(const struct nft_ctx *ctx,
-diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
-index cecf8ab90e58f..03ef4fdaa460b 100644
---- a/net/netfilter/nft_lookup.c
-+++ b/net/netfilter/nft_lookup.c
-@@ -167,7 +167,7 @@ static void nft_lookup_activate(const struct nft_ctx *ctx,
- {
- 	struct nft_lookup *priv = nft_expr_priv(expr);
- 
--	priv->set->use++;
-+	nf_tables_activate_set(ctx, priv->set);
- }
- 
- static void nft_lookup_destroy(const struct nft_ctx *ctx,
-diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c
-index cb37169608bab..a48dd5b5d45b1 100644
---- a/net/netfilter/nft_objref.c
-+++ b/net/netfilter/nft_objref.c
-@@ -185,7 +185,7 @@ static void nft_objref_map_activate(const struct nft_ctx *ctx,
- {
- 	struct nft_objref_map *priv = nft_expr_priv(expr);
- 
--	priv->set->use++;
-+	nf_tables_activate_set(ctx, priv->set);
- }
- 
- static void nft_objref_map_destroy(const struct nft_ctx *ctx,
--- 
-cgit 
-


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-05-30 12:56 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-05-30 12:56 UTC (permalink / raw
  To: gentoo-commits

commit:     43efe29bf4fafaee7fcb400d8d6ea62272a8066c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue May 30 12:55:51 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue May 30 12:55:51 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=43efe29b

Linux patch 5.10.181

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1180_linux-5.10.181.patch | 8078 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 8082 insertions(+)

diff --git a/0000_README b/0000_README
index a665318d..366c0d03 100644
--- a/0000_README
+++ b/0000_README
@@ -763,6 +763,10 @@ Patch:  1179_linux-5.10.180.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.180
 
+Patch:  1180_linux-5.10.181.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.181
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1180_linux-5.10.181.patch b/1180_linux-5.10.181.patch
new file mode 100644
index 00000000..05689177
--- /dev/null
+++ b/1180_linux-5.10.181.patch
@@ -0,0 +1,8078 @@
+diff --git a/Documentation/devicetree/bindings/usb/cdns,usb3.yaml b/Documentation/devicetree/bindings/usb/cdns,usb3.yaml
+index d6af2794d4448..703921e6bcaf9 100644
+--- a/Documentation/devicetree/bindings/usb/cdns,usb3.yaml
++++ b/Documentation/devicetree/bindings/usb/cdns,usb3.yaml
+@@ -59,7 +59,7 @@ properties:
+     description:
+       size of memory intended as internal memory for endpoints
+       buffers expressed in KB
+-    $ref: /schemas/types.yaml#/definitions/uint32
++    $ref: /schemas/types.yaml#/definitions/uint16
+ 
+   cdns,phyrst-a-enable:
+     description: Enable resetting of PHY if Rx fail is detected
+diff --git a/Makefile b/Makefile
+index c2f8e1644abdc..4e8289113a81f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 180
++SUBLEVEL = 181
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+index ccf66adbbf623..07c15c1ce9d4c 100644
+--- a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+@@ -1102,7 +1102,7 @@
+ 		};
+ 	};
+ 
+-	sai2a_sleep_pins_c: sai2a-2 {
++	sai2a_sleep_pins_c: sai2a-sleep-2 {
+ 		pins {
+ 			pinmux = <STM32_PINMUX('D', 13, ANALOG)>, /* SAI2_SCK_A */
+ 				 <STM32_PINMUX('D', 11, ANALOG)>, /* SAI2_SD_A */
+diff --git a/arch/arm/mach-sa1100/jornada720_ssp.c b/arch/arm/mach-sa1100/jornada720_ssp.c
+index 1dbe98948ce30..9627c4cf3e41d 100644
+--- a/arch/arm/mach-sa1100/jornada720_ssp.c
++++ b/arch/arm/mach-sa1100/jornada720_ssp.c
+@@ -1,5 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+-/**
++/*
+  *  arch/arm/mac-sa1100/jornada720_ssp.c
+  *
+  *  Copyright (C) 2006/2007 Kristoffer Ericson <Kristoffer.Ericson@gmail.com>
+@@ -26,6 +26,7 @@ static unsigned long jornada_ssp_flags;
+ 
+ /**
+  * jornada_ssp_reverse - reverses input byte
++ * @byte: input byte to reverse
+  *
+  * we need to reverse all data we receive from the mcu due to its physical location
+  * returns : 01110111 -> 11101110
+@@ -46,6 +47,7 @@ EXPORT_SYMBOL(jornada_ssp_reverse);
+ 
+ /**
+  * jornada_ssp_byte - waits for ready ssp bus and sends byte
++ * @byte: input byte to transmit
+  *
+  * waits for fifo buffer to clear and then transmits, if it doesn't then we will
+  * timeout after <timeout> rounds. Needs mcu running before its called.
+@@ -77,6 +79,7 @@ EXPORT_SYMBOL(jornada_ssp_byte);
+ 
+ /**
+  * jornada_ssp_inout - decide if input is command or trading byte
++ * @byte: input byte to send (may be %TXDUMMY)
+  *
+  * returns : (jornada_ssp_byte(byte)) on success
+  *         : %-ETIMEDOUT on timeout failure
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
+index f6287f174355c..24f9e8fd0c8b8 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
+@@ -98,11 +98,17 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		ethphy: ethernet-phy@4 {
++		ethphy: ethernet-phy@4 { /* AR8033 or ADIN1300 */
+ 			compatible = "ethernet-phy-ieee802.3-c22";
+ 			reg = <4>;
+ 			reset-gpios = <&gpio1 9 GPIO_ACTIVE_LOW>;
+ 			reset-assert-us = <10000>;
++			/*
++			 * Deassert delay:
++			 * ADIN1300 requires 5ms.
++			 * AR8033   requires 1ms.
++			 */
++			reset-deassert-us = <20000>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 02b5f6f1d331e..159cdd03e7c01 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -1771,8 +1771,11 @@
+ 				interrupts = <0 131 IRQ_TYPE_LEVEL_HIGH>;
+ 				phys = <&hsusb_phy1>, <&ssusb_phy_0>;
+ 				phy-names = "usb2-phy", "usb3-phy";
++				snps,hird-threshold = /bits/ 8 <0>;
+ 				snps,dis_u2_susphy_quirk;
+ 				snps,dis_enblslpm_quirk;
++				snps,is-utmi-l1-suspend;
++				tx-fifo-resize;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm64/include/asm/hyp_image.h b/arch/arm64/include/asm/hyp_image.h
+index daa1a1da539e7..e068427560510 100644
+--- a/arch/arm64/include/asm/hyp_image.h
++++ b/arch/arm64/include/asm/hyp_image.h
+@@ -31,6 +31,9 @@
+  */
+ #define KVM_NVHE_ALIAS(sym)	kvm_nvhe_sym(sym) = sym;
+ 
++/* Defines a linker script alias for KVM nVHE hyp symbols */
++#define KVM_NVHE_ALIAS_HYP(first, sec)	kvm_nvhe_sym(first) = kvm_nvhe_sym(sec);
++
+ #endif /* LINKER_SCRIPT */
+ 
+ #endif /* __ARM64_HYP_IMAGE_H__ */
+diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
+index c615b285ff5b3..48e43b29a2d5f 100644
+--- a/arch/arm64/kernel/image-vars.h
++++ b/arch/arm64/kernel/image-vars.h
+@@ -103,6 +103,17 @@ KVM_NVHE_ALIAS(gic_nonsecure_priorities);
+ KVM_NVHE_ALIAS(__start___kvm_ex_table);
+ KVM_NVHE_ALIAS(__stop___kvm_ex_table);
+ 
++/* Position-independent library routines */
++KVM_NVHE_ALIAS_HYP(clear_page, __pi_clear_page);
++KVM_NVHE_ALIAS_HYP(copy_page, __pi_copy_page);
++KVM_NVHE_ALIAS_HYP(memcpy, __pi_memcpy);
++KVM_NVHE_ALIAS_HYP(memset, __pi_memset);
++
++#ifdef CONFIG_KASAN
++KVM_NVHE_ALIAS_HYP(__memcpy, __pi_memcpy);
++KVM_NVHE_ALIAS_HYP(__memset, __pi_memset);
++#endif
++
+ #endif /* CONFIG_KVM */
+ 
+ #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
+diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
+index ddde15fe85f2f..230bba1a6716b 100644
+--- a/arch/arm64/kvm/hyp/nvhe/Makefile
++++ b/arch/arm64/kvm/hyp/nvhe/Makefile
+@@ -6,9 +6,13 @@
+ asflags-y := -D__KVM_NVHE_HYPERVISOR__
+ ccflags-y := -D__KVM_NVHE_HYPERVISOR__
+ 
++lib-objs := clear_page.o copy_page.o memcpy.o memset.o
++lib-objs := $(addprefix ../../../lib/, $(lib-objs))
++
+ obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o hyp-main.o
+ obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
+ 	 ../fpsimd.o ../hyp-entry.o
++obj-y += $(lib-objs)
+ 
+ ##
+ ## Build rules for compiling nVHE hyp code
+diff --git a/arch/m68k/kernel/signal.c b/arch/m68k/kernel/signal.c
+index 5d12736b4b281..131e87a55cbb7 100644
+--- a/arch/m68k/kernel/signal.c
++++ b/arch/m68k/kernel/signal.c
+@@ -882,11 +882,17 @@ static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *
+ }
+ 
+ static inline void __user *
+-get_sigframe(struct ksignal *ksig, size_t frame_size)
++get_sigframe(struct ksignal *ksig, struct pt_regs *tregs, size_t frame_size)
+ {
+ 	unsigned long usp = sigsp(rdusp(), ksig);
++	unsigned long gap = 0;
+ 
+-	return (void __user *)((usp - frame_size) & -8UL);
++	if (CPU_IS_020_OR_030 && tregs->format == 0xb) {
++		/* USP is unreliable so use worst-case value */
++		gap = 256;
++	}
++
++	return (void __user *)((usp - gap - frame_size) & -8UL);
+ }
+ 
+ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+@@ -904,7 +910,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+ 		return -EFAULT;
+ 	}
+ 
+-	frame = get_sigframe(ksig, sizeof(*frame) + fsize);
++	frame = get_sigframe(ksig, tregs, sizeof(*frame) + fsize);
+ 
+ 	if (fsize)
+ 		err |= copy_to_user (frame + 1, regs + 1, fsize);
+@@ -976,7 +982,7 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+ 		return -EFAULT;
+ 	}
+ 
+-	frame = get_sigframe(ksig, sizeof(*frame));
++	frame = get_sigframe(ksig, tregs, sizeof(*frame));
+ 
+ 	if (fsize)
+ 		err |= copy_to_user (&frame->uc.uc_extra, regs + 1, fsize);
+diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
+index 99663fc1f997f..cc9d096d5cc46 100644
+--- a/arch/parisc/include/asm/cacheflush.h
++++ b/arch/parisc/include/asm/cacheflush.h
+@@ -57,6 +57,11 @@ extern void flush_dcache_page(struct page *page);
+ 
+ #define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
+ #define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)
++#define flush_dcache_mmap_lock_irqsave(mapping, flags)		\
++		xa_lock_irqsave(&mapping->i_pages, flags)
++#define flush_dcache_mmap_unlock_irqrestore(mapping, flags)	\
++		xa_unlock_irqrestore(&mapping->i_pages, flags)
++
+ 
+ #define flush_icache_page(vma,page)	do { 		\
+ 	flush_kernel_dcache_page(page);			\
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index c81ab0cb89255..efa8d2a678a32 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -327,6 +327,7 @@ void flush_dcache_page(struct page *page)
+ 	struct vm_area_struct *mpnt;
+ 	unsigned long offset;
+ 	unsigned long addr, old_addr = 0;
++	unsigned long flags;
+ 	pgoff_t pgoff;
+ 
+ 	if (mapping && !mapping_mapped(mapping)) {
+@@ -346,7 +347,7 @@ void flush_dcache_page(struct page *page)
+ 	 * declared as MAP_PRIVATE or MAP_SHARED), so we only need
+ 	 * to flush one address here for them all to become coherent */
+ 
+-	flush_dcache_mmap_lock(mapping);
++	flush_dcache_mmap_lock_irqsave(mapping, flags);
+ 	vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
+ 		offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
+ 		addr = mpnt->vm_start + offset;
+@@ -369,7 +370,7 @@ void flush_dcache_page(struct page *page)
+ 			old_addr = addr;
+ 		}
+ 	}
+-	flush_dcache_mmap_unlock(mapping);
++	flush_dcache_mmap_unlock_irqrestore(mapping, flags);
+ }
+ EXPORT_SYMBOL(flush_dcache_page);
+ 
+diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
+index 5e4381280c97b..c14ee40302d85 100644
+--- a/arch/parisc/kernel/process.c
++++ b/arch/parisc/kernel/process.c
+@@ -123,13 +123,18 @@ void machine_power_off(void)
+ 	/* It seems we have no way to power the system off via
+ 	 * software. The user has to press the button himself. */
+ 
+-	printk(KERN_EMERG "System shut down completed.\n"
+-	       "Please power this system off now.");
++	printk("Power off or press RETURN to reboot.\n");
+ 
+ 	/* prevent soft lockup/stalled CPU messages for endless loop. */
+ 	rcu_sysrq_start();
+ 	lockup_detector_soft_poweroff();
+-	for (;;);
++	while (1) {
++		/* reboot if user presses RETURN key */
++		if (pdc_iodc_getc() == 13) {
++			printk("Rebooting...\n");
++			machine_restart(NULL);
++		}
++	}
+ }
+ 
+ void (*pm_power_off)(void);
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index 2fad7867af100..bd09050dc0af7 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -305,8 +305,8 @@ static void handle_break(struct pt_regs *regs)
+ #endif
+ 
+ #ifdef CONFIG_KGDB
+-	if (unlikely(iir == PARISC_KGDB_COMPILED_BREAK_INSN ||
+-		iir == PARISC_KGDB_BREAK_INSN)) {
++	if (unlikely((iir == PARISC_KGDB_COMPILED_BREAK_INSN ||
++		iir == PARISC_KGDB_BREAK_INSN)) && !user_mode(regs)) {
+ 		kgdb_handle_exception(9, SIGTRAP, 0, regs);
+ 		return;
+ 	}
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index ae4ba6a6745d4..5f0a2fa611fa2 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -1064,8 +1064,8 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
+ 				  pte_t entry, unsigned long address, int psize)
+ {
+ 	struct mm_struct *mm = vma->vm_mm;
+-	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
+-					      _PAGE_RW | _PAGE_EXEC);
++	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_SOFT_DIRTY |
++					      _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
+ 
+ 	unsigned long change = pte_val(entry) ^ pte_val(*ptep);
+ 	/*
+diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
+index 9abe842dbd843..14b52718917f6 100644
+--- a/arch/x86/include/asm/intel-family.h
++++ b/arch/x86/include/asm/intel-family.h
+@@ -98,6 +98,11 @@
+ #define	INTEL_FAM6_LAKEFIELD		0x8A
+ #define INTEL_FAM6_ALDERLAKE		0x97
+ #define INTEL_FAM6_ALDERLAKE_L		0x9A
++#define INTEL_FAM6_ALDERLAKE_N		0xBE
++
++#define INTEL_FAM6_RAPTORLAKE		0xB7
++#define INTEL_FAM6_RAPTORLAKE_P		0xBA
++#define INTEL_FAM6_RAPTORLAKE_S		0xBF
+ 
+ /* "Small Core" Processors (Atom) */
+ 
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index 37d48ab3d077c..58d17c01d4593 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -79,7 +79,7 @@ int detect_extended_topology_early(struct cpuinfo_x86 *c)
+ 	 * initial apic id, which also represents 32-bit extended x2apic id.
+ 	 */
+ 	c->initial_apicid = edx;
+-	smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
++	smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx));
+ #endif
+ 	return 0;
+ }
+@@ -109,7 +109,8 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
+ 	 */
+ 	cpuid_count(leaf, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
+ 	c->initial_apicid = edx;
+-	core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
++	core_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
++	smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx));
+ 	core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+ 	die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
+ 	pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
+index b4964300153a1..b9736aac20eef 100644
+--- a/arch/x86/kernel/dumpstack.c
++++ b/arch/x86/kernel/dumpstack.c
+@@ -195,7 +195,6 @@ void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
+ 	printk("%sCall Trace:\n", log_lvl);
+ 
+ 	unwind_start(&state, task, regs, stack);
+-	stack = stack ? : get_stack_pointer(task, regs);
+ 	regs = unwind_get_entry_regs(&state, &partial);
+ 
+ 	/*
+@@ -214,9 +213,13 @@ void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
+ 	 * - hardirq stack
+ 	 * - entry stack
+ 	 */
+-	for ( ; stack; stack = PTR_ALIGN(stack_info.next_sp, sizeof(long))) {
++	for (stack = stack ?: get_stack_pointer(task, regs);
++	     stack;
++	     stack = stack_info.next_sp) {
+ 		const char *stack_name;
+ 
++		stack = PTR_ALIGN(stack, sizeof(long));
++
+ 		if (get_stack_info(stack, task, &stack_info, &visit_mask)) {
+ 			/*
+ 			 * We weren't on a valid stack.  It's possible that
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index 63d8c6c7d1254..ff3b0d8fe0486 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -9,6 +9,7 @@
+ #include <linux/sched/task.h>
+ 
+ #include <asm/set_memory.h>
++#include <asm/cpu_device_id.h>
+ #include <asm/e820/api.h>
+ #include <asm/init.h>
+ #include <asm/page.h>
+@@ -254,6 +255,24 @@ static void __init probe_page_size_mask(void)
+ 	}
+ }
+ 
++#define INTEL_MATCH(_model) { .vendor  = X86_VENDOR_INTEL,	\
++			      .family  = 6,			\
++			      .model = _model,			\
++			    }
++/*
++ * INVLPG may not properly flush Global entries
++ * on these CPUs when PCIDs are enabled.
++ */
++static const struct x86_cpu_id invlpg_miss_ids[] = {
++	INTEL_MATCH(INTEL_FAM6_ALDERLAKE   ),
++	INTEL_MATCH(INTEL_FAM6_ALDERLAKE_L ),
++	INTEL_MATCH(INTEL_FAM6_ALDERLAKE_N ),
++	INTEL_MATCH(INTEL_FAM6_RAPTORLAKE  ),
++	INTEL_MATCH(INTEL_FAM6_RAPTORLAKE_P),
++	INTEL_MATCH(INTEL_FAM6_RAPTORLAKE_S),
++	{}
++};
++
+ static void setup_pcid(void)
+ {
+ 	if (!IS_ENABLED(CONFIG_X86_64))
+@@ -262,6 +281,12 @@ static void setup_pcid(void)
+ 	if (!boot_cpu_has(X86_FEATURE_PCID))
+ 		return;
+ 
++	if (x86_match_cpu(invlpg_miss_ids)) {
++		pr_info("Incomplete global flushes, disabling PCID");
++		setup_clear_cpu_cap(X86_FEATURE_PCID);
++		return;
++	}
++
+ 	if (boot_cpu_has(X86_FEATURE_PGE)) {
+ 		/*
+ 		 * This can't be cr4_set_bits_and_update_boot() -- the
+diff --git a/drivers/acpi/acpica/dbnames.c b/drivers/acpi/acpica/dbnames.c
+index 3615e1a6efd8a..b91155ea9c343 100644
+--- a/drivers/acpi/acpica/dbnames.c
++++ b/drivers/acpi/acpica/dbnames.c
+@@ -652,6 +652,9 @@ acpi_status acpi_db_display_objects(char *obj_type_arg, char *display_count_arg)
+ 		object_info =
+ 		    ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_object_info));
+ 
++		if (!object_info)
++			return (AE_NO_MEMORY);
++
+ 		/* Walk the namespace from the root */
+ 
+ 		(void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
+diff --git a/drivers/acpi/acpica/dswstate.c b/drivers/acpi/acpica/dswstate.c
+index 809a0c0536b59..f9ba7695be147 100644
+--- a/drivers/acpi/acpica/dswstate.c
++++ b/drivers/acpi/acpica/dswstate.c
+@@ -576,9 +576,14 @@ acpi_ds_init_aml_walk(struct acpi_walk_state *walk_state,
+ 	ACPI_FUNCTION_TRACE(ds_init_aml_walk);
+ 
+ 	walk_state->parser_state.aml =
+-	    walk_state->parser_state.aml_start = aml_start;
+-	walk_state->parser_state.aml_end =
+-	    walk_state->parser_state.pkg_end = aml_start + aml_length;
++	    walk_state->parser_state.aml_start =
++	    walk_state->parser_state.aml_end =
++	    walk_state->parser_state.pkg_end = aml_start;
++	/* Avoid undefined behavior: applying zero offset to null pointer */
++	if (aml_length != 0) {
++		walk_state->parser_state.aml_end += aml_length;
++		walk_state->parser_state.pkg_end += aml_length;
++	}
+ 
+ 	/* The next_op of the next_walk will be the beginning of the method */
+ 
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 4707d1808ca54..487884420fb0d 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -1114,6 +1114,7 @@ static void acpi_ec_remove_query_handlers(struct acpi_ec *ec,
+ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit)
+ {
+ 	acpi_ec_remove_query_handlers(ec, false, query_bit);
++	flush_workqueue(ec_query_wq);
+ }
+ EXPORT_SYMBOL_GPL(acpi_ec_remove_query_handler);
+ 
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 9a874a58d690c..cb859febd03cf 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -4352,6 +4352,13 @@ void device_set_of_node_from_dev(struct device *dev, const struct device *dev2)
+ }
+ EXPORT_SYMBOL_GPL(device_set_of_node_from_dev);
+ 
++void device_set_node(struct device *dev, struct fwnode_handle *fwnode)
++{
++	dev->fwnode = fwnode;
++	dev->of_node = to_of_node(fwnode);
++}
++EXPORT_SYMBOL_GPL(device_set_node);
++
+ int device_match_name(struct device *dev, const void *name)
+ {
+ 	return sysfs_streq(dev_name(dev), name);
+diff --git a/drivers/base/regmap/regcache.c b/drivers/base/regmap/regcache.c
+index 7f4b3b62492ca..7fdd702e564ae 100644
+--- a/drivers/base/regmap/regcache.c
++++ b/drivers/base/regmap/regcache.c
+@@ -343,6 +343,9 @@ int regcache_sync(struct regmap *map)
+ 	const char *name;
+ 	bool bypass;
+ 
++	if (WARN_ON(map->cache_type == REGCACHE_NONE))
++		return -EINVAL;
++
+ 	BUG_ON(!map->cache_ops);
+ 
+ 	map->lock(map->lock_arg);
+@@ -412,6 +415,9 @@ int regcache_sync_region(struct regmap *map, unsigned int min,
+ 	const char *name;
+ 	bool bypass;
+ 
++	if (WARN_ON(map->cache_type == REGCACHE_NONE))
++		return -EINVAL;
++
+ 	BUG_ON(!map->cache_ops);
+ 
+ 	map->lock(map->lock_arg);
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index 25db095e943b7..35b390a785dd4 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -1738,6 +1738,11 @@ static int null_init_tag_set(struct nullb *nullb, struct blk_mq_tag_set *set)
+ 
+ static int null_validate_conf(struct nullb_device *dev)
+ {
++	if (dev->queue_mode == NULL_Q_RQ) {
++		pr_err("legacy IO path is no longer available\n");
++		return -EINVAL;
++	}
++
+ 	dev->blocksize = round_down(dev->blocksize, 512);
+ 	dev->blocksize = clamp_t(unsigned int, dev->blocksize, 512, 4096);
+ 
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index d263eac784daa..636db3b7e470b 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -6,6 +6,7 @@
+  *  Copyright (C) 2015  Intel Corporation
+  */
+ 
++#include <linux/efi.h>
+ #include <linux/module.h>
+ #include <linux/firmware.h>
+ #include <asm/unaligned.h>
+@@ -32,6 +33,43 @@
+ /* For kmalloc-ing the fw-name array instead of putting it on the stack */
+ typedef char bcm_fw_name[BCM_FW_NAME_LEN];
+ 
++#ifdef CONFIG_EFI
++static int btbcm_set_bdaddr_from_efi(struct hci_dev *hdev)
++{
++	efi_guid_t guid = EFI_GUID(0x74b00bd9, 0x805a, 0x4d61, 0xb5, 0x1f,
++				   0x43, 0x26, 0x81, 0x23, 0xd1, 0x13);
++	bdaddr_t efi_bdaddr, bdaddr;
++	efi_status_t status;
++	unsigned long len;
++	int ret;
++
++	if (!efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
++		return -EOPNOTSUPP;
++
++	len = sizeof(efi_bdaddr);
++	status = efi.get_variable(L"BDADDR", &guid, NULL, &len, &efi_bdaddr);
++	if (status != EFI_SUCCESS)
++		return -ENXIO;
++
++	if (len != sizeof(efi_bdaddr))
++		return -EIO;
++
++	baswap(&bdaddr, &efi_bdaddr);
++
++	ret = btbcm_set_bdaddr(hdev, &bdaddr);
++	if (ret)
++		return ret;
++
++	bt_dev_info(hdev, "BCM: Using EFI device address (%pMR)", &bdaddr);
++	return 0;
++}
++#else
++static int btbcm_set_bdaddr_from_efi(struct hci_dev *hdev)
++{
++	return -EOPNOTSUPP;
++}
++#endif
++
+ int btbcm_check_bdaddr(struct hci_dev *hdev)
+ {
+ 	struct hci_rp_read_bd_addr *bda;
+@@ -85,9 +123,12 @@ int btbcm_check_bdaddr(struct hci_dev *hdev)
+ 	    !bacmp(&bda->bdaddr, BDADDR_BCM4345C5) ||
+ 	    !bacmp(&bda->bdaddr, BDADDR_BCM43430A0) ||
+ 	    !bacmp(&bda->bdaddr, BDADDR_BCM43341B)) {
+-		bt_dev_info(hdev, "BCM: Using default device address (%pMR)",
+-			    &bda->bdaddr);
+-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
++		/* Try falling back to BDADDR EFI variable */
++		if (btbcm_set_bdaddr_from_efi(hdev) != 0) {
++			bt_dev_info(hdev, "BCM: Using default device address (%pMR)",
++				    &bda->bdaddr);
++			set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
++		}
+ 	}
+ 
+ 	kfree_skb(skb);
+diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c
+index 14fad16d371fb..3e1bb28b7efdf 100644
+--- a/drivers/char/tpm/tpm_tis.c
++++ b/drivers/char/tpm/tpm_tis.c
+@@ -83,6 +83,22 @@ static const struct dmi_system_id tpm_tis_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T490s"),
+ 		},
+ 	},
++	{
++		.callback = tpm_tis_disable_irq,
++		.ident = "ThinkStation P360 Tiny",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkStation P360 Tiny"),
++		},
++	},
++	{
++		.callback = tpm_tis_disable_irq,
++		.ident = "ThinkPad L490",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L490"),
++		},
++	},
+ 	{}
+ };
+ 
+diff --git a/drivers/clk/tegra/clk-tegra20.c b/drivers/clk/tegra/clk-tegra20.c
+index d60ee6e318a55..fb1da5d63f4b2 100644
+--- a/drivers/clk/tegra/clk-tegra20.c
++++ b/drivers/clk/tegra/clk-tegra20.c
+@@ -18,24 +18,24 @@
+ #define MISC_CLK_ENB 0x48
+ 
+ #define OSC_CTRL 0x50
+-#define OSC_CTRL_OSC_FREQ_MASK (3<<30)
+-#define OSC_CTRL_OSC_FREQ_13MHZ (0<<30)
+-#define OSC_CTRL_OSC_FREQ_19_2MHZ (1<<30)
+-#define OSC_CTRL_OSC_FREQ_12MHZ (2<<30)
+-#define OSC_CTRL_OSC_FREQ_26MHZ (3<<30)
+-#define OSC_CTRL_MASK (0x3f2 | OSC_CTRL_OSC_FREQ_MASK)
+-
+-#define OSC_CTRL_PLL_REF_DIV_MASK (3<<28)
+-#define OSC_CTRL_PLL_REF_DIV_1		(0<<28)
+-#define OSC_CTRL_PLL_REF_DIV_2		(1<<28)
+-#define OSC_CTRL_PLL_REF_DIV_4		(2<<28)
++#define OSC_CTRL_OSC_FREQ_MASK (3u<<30)
++#define OSC_CTRL_OSC_FREQ_13MHZ (0u<<30)
++#define OSC_CTRL_OSC_FREQ_19_2MHZ (1u<<30)
++#define OSC_CTRL_OSC_FREQ_12MHZ (2u<<30)
++#define OSC_CTRL_OSC_FREQ_26MHZ (3u<<30)
++#define OSC_CTRL_MASK (0x3f2u | OSC_CTRL_OSC_FREQ_MASK)
++
++#define OSC_CTRL_PLL_REF_DIV_MASK	(3u<<28)
++#define OSC_CTRL_PLL_REF_DIV_1		(0u<<28)
++#define OSC_CTRL_PLL_REF_DIV_2		(1u<<28)
++#define OSC_CTRL_PLL_REF_DIV_4		(2u<<28)
+ 
+ #define OSC_FREQ_DET 0x58
+-#define OSC_FREQ_DET_TRIG (1<<31)
++#define OSC_FREQ_DET_TRIG (1u<<31)
+ 
+ #define OSC_FREQ_DET_STATUS 0x5c
+-#define OSC_FREQ_DET_BUSY (1<<31)
+-#define OSC_FREQ_DET_CNT_MASK 0xFFFF
++#define OSC_FREQ_DET_BUSYu (1<<31)
++#define OSC_FREQ_DET_CNT_MASK 0xFFFFu
+ 
+ #define TEGRA20_CLK_PERIPH_BANKS	3
+ 
+diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
+index 840754dcc6ca4..5a877d76078f7 100644
+--- a/drivers/firmware/arm_sdei.c
++++ b/drivers/firmware/arm_sdei.c
+@@ -44,6 +44,8 @@ static asmlinkage void (*sdei_firmware_call)(unsigned long function_id,
+ /* entry point from firmware to arch asm code */
+ static unsigned long sdei_entry_point;
+ 
++static int sdei_hp_state;
++
+ struct sdei_event {
+ 	/* These three are protected by the sdei_list_lock */
+ 	struct list_head	list;
+@@ -302,8 +304,6 @@ int sdei_mask_local_cpu(void)
+ {
+ 	int err;
+ 
+-	WARN_ON_ONCE(preemptible());
+-
+ 	err = invoke_sdei_fn(SDEI_1_0_FN_SDEI_PE_MASK, 0, 0, 0, 0, 0, NULL);
+ 	if (err && err != -EIO) {
+ 		pr_warn_once("failed to mask CPU[%u]: %d\n",
+@@ -316,6 +316,7 @@ int sdei_mask_local_cpu(void)
+ 
+ static void _ipi_mask_cpu(void *ignored)
+ {
++	WARN_ON_ONCE(preemptible());
+ 	sdei_mask_local_cpu();
+ }
+ 
+@@ -323,8 +324,6 @@ int sdei_unmask_local_cpu(void)
+ {
+ 	int err;
+ 
+-	WARN_ON_ONCE(preemptible());
+-
+ 	err = invoke_sdei_fn(SDEI_1_0_FN_SDEI_PE_UNMASK, 0, 0, 0, 0, 0, NULL);
+ 	if (err && err != -EIO) {
+ 		pr_warn_once("failed to unmask CPU[%u]: %d\n",
+@@ -337,6 +336,7 @@ int sdei_unmask_local_cpu(void)
+ 
+ static void _ipi_unmask_cpu(void *ignored)
+ {
++	WARN_ON_ONCE(preemptible());
+ 	sdei_unmask_local_cpu();
+ }
+ 
+@@ -344,6 +344,8 @@ static void _ipi_private_reset(void *ignored)
+ {
+ 	int err;
+ 
++	WARN_ON_ONCE(preemptible());
++
+ 	err = invoke_sdei_fn(SDEI_1_0_FN_SDEI_PRIVATE_RESET, 0, 0, 0, 0, 0,
+ 			     NULL);
+ 	if (err && err != -EIO)
+@@ -390,8 +392,6 @@ static void _local_event_enable(void *data)
+ 	int err;
+ 	struct sdei_crosscall_args *arg = data;
+ 
+-	WARN_ON_ONCE(preemptible());
+-
+ 	err = sdei_api_event_enable(arg->event->event_num);
+ 
+ 	sdei_cross_call_return(arg, err);
+@@ -480,8 +480,6 @@ static void _local_event_unregister(void *data)
+ 	int err;
+ 	struct sdei_crosscall_args *arg = data;
+ 
+-	WARN_ON_ONCE(preemptible());
+-
+ 	err = sdei_api_event_unregister(arg->event->event_num);
+ 
+ 	sdei_cross_call_return(arg, err);
+@@ -562,8 +560,6 @@ static void _local_event_register(void *data)
+ 	struct sdei_registered_event *reg;
+ 	struct sdei_crosscall_args *arg = data;
+ 
+-	WARN_ON(preemptible());
+-
+ 	reg = per_cpu_ptr(arg->event->private_registered, smp_processor_id());
+ 	err = sdei_api_event_register(arg->event->event_num, sdei_entry_point,
+ 				      reg, 0, 0);
+@@ -718,6 +714,8 @@ static int sdei_pm_notifier(struct notifier_block *nb, unsigned long action,
+ {
+ 	int rv;
+ 
++	WARN_ON_ONCE(preemptible());
++
+ 	switch (action) {
+ 	case CPU_PM_ENTER:
+ 		rv = sdei_mask_local_cpu();
+@@ -766,7 +764,7 @@ static int sdei_device_freeze(struct device *dev)
+ 	int err;
+ 
+ 	/* unregister private events */
+-	cpuhp_remove_state(CPUHP_AP_ARM_SDEI_STARTING);
++	cpuhp_remove_state(sdei_entry_point);
+ 
+ 	err = sdei_unregister_shared();
+ 	if (err)
+@@ -787,12 +785,15 @@ static int sdei_device_thaw(struct device *dev)
+ 		return err;
+ 	}
+ 
+-	err = cpuhp_setup_state(CPUHP_AP_ARM_SDEI_STARTING, "SDEI",
++	err = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "SDEI",
+ 				&sdei_cpuhp_up, &sdei_cpuhp_down);
+-	if (err)
++	if (err < 0) {
+ 		pr_warn("Failed to re-register CPU hotplug notifier...\n");
++		return err;
++	}
+ 
+-	return err;
++	sdei_hp_state = err;
++	return 0;
+ }
+ 
+ static int sdei_device_restore(struct device *dev)
+@@ -824,7 +825,7 @@ static int sdei_reboot_notifier(struct notifier_block *nb, unsigned long action,
+ 	 * We are going to reset the interface, after this there is no point
+ 	 * doing work when we take CPUs offline.
+ 	 */
+-	cpuhp_remove_state(CPUHP_AP_ARM_SDEI_STARTING);
++	cpuhp_remove_state(sdei_hp_state);
+ 
+ 	sdei_platform_reset();
+ 
+@@ -1004,13 +1005,15 @@ static int sdei_probe(struct platform_device *pdev)
+ 		goto remove_cpupm;
+ 	}
+ 
+-	err = cpuhp_setup_state(CPUHP_AP_ARM_SDEI_STARTING, "SDEI",
++	err = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "SDEI",
+ 				&sdei_cpuhp_up, &sdei_cpuhp_down);
+-	if (err) {
++	if (err < 0) {
+ 		pr_warn("Failed to register CPU hotplug notifier...\n");
+ 		goto remove_reboot;
+ 	}
+ 
++	sdei_hp_state = err;
++
+ 	return 0;
+ 
+ remove_reboot:
+diff --git a/drivers/gpio/gpio-mockup.c b/drivers/gpio/gpio-mockup.c
+index 876027fdefc95..e20b60432e01d 100644
+--- a/drivers/gpio/gpio-mockup.c
++++ b/drivers/gpio/gpio-mockup.c
+@@ -370,7 +370,7 @@ static void gpio_mockup_debugfs_setup(struct device *dev,
+ 		priv->offset = i;
+ 		priv->desc = &gc->gpiodev->descs[i];
+ 
+-		debugfs_create_file(name, 0200, chip->dbg_dir, priv,
++		debugfs_create_file(name, 0600, chip->dbg_dir, priv,
+ 				    &gpio_mockup_debugfs_ops);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+index 930d2b7d34489..9dd41eaf32cb5 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+@@ -406,11 +406,8 @@ static enum bp_result get_gpio_i2c_info(
+ 	info->i2c_slave_address = record->i2c_slave_addr;
+ 
+ 	/* TODO: check how to get register offset for en, Y, etc. */
+-	info->gpio_info.clk_a_register_index =
+-			le16_to_cpu(
+-			header->gpio_pin[table_index].data_a_reg_index);
+-	info->gpio_info.clk_a_shift =
+-			header->gpio_pin[table_index].gpio_bitshift;
++	info->gpio_info.clk_a_register_index = le16_to_cpu(pin->data_a_reg_index);
++	info->gpio_info.clk_a_shift = pin->gpio_bitshift;
+ 
+ 	return BP_RESULT_OK;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
+index e2e79025825f8..a54a309879246 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
+@@ -1011,7 +1011,7 @@ static void dce_transform_set_pixel_storage_depth(
+ 		color_depth = COLOR_DEPTH_101010;
+ 		pixel_depth = 0;
+ 		expan_mode  = 1;
+-		BREAK_TO_DEBUGGER();
++		DC_LOG_DC("The pixel depth %d is not valid, set COLOR_DEPTH_101010 instead.", depth);
+ 		break;
+ 	}
+ 
+@@ -1025,8 +1025,7 @@ static void dce_transform_set_pixel_storage_depth(
+ 	if (!(xfm_dce->lb_pixel_depth_supported & depth)) {
+ 		/*we should use unsupported capabilities
+ 		 *  unless it is required by w/a*/
+-		DC_LOG_WARNING("%s: Capability not supported",
+-			__func__);
++		DC_LOG_DC("%s: Capability not supported", __func__);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
+index 19fb1d93a4f07..0c806e99e8690 100644
+--- a/drivers/gpu/drm/drm_mipi_dsi.c
++++ b/drivers/gpu/drm/drm_mipi_dsi.c
+@@ -221,7 +221,7 @@ mipi_dsi_device_register_full(struct mipi_dsi_host *host,
+ 		return dsi;
+ 	}
+ 
+-	dsi->dev.of_node = info->node;
++	device_set_node(&dsi->dev, of_fwnode_handle(info->node));
+ 	dsi->channel = info->channel;
+ 	strlcpy(dsi->name, info->type, sizeof(dsi->name));
+ 
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.h b/drivers/gpu/drm/exynos/exynos_drm_g2d.h
+index 74ea3c26deadc..1a5ae781b56c6 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_g2d.h
++++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.h
+@@ -34,11 +34,11 @@ static inline int exynos_g2d_exec_ioctl(struct drm_device *dev, void *data,
+ 	return -ENODEV;
+ }
+ 
+-int g2d_open(struct drm_device *drm_dev, struct drm_file *file)
++static inline int g2d_open(struct drm_device *drm_dev, struct drm_file *file)
+ {
+ 	return 0;
+ }
+ 
+-void g2d_close(struct drm_device *drm_dev, struct drm_file *file)
++static inline void g2d_close(struct drm_device *drm_dev, struct drm_file *file)
+ { }
+ #endif
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 1c1931f5c958b..7f633f8b3239a 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -2281,6 +2281,11 @@ static int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
+ 		pipe_config->dsc.slice_count =
+ 			drm_dp_dsc_sink_max_slice_count(intel_dp->dsc_dpcd,
+ 							true);
++		if (!pipe_config->dsc.slice_count) {
++			drm_dbg_kms(&dev_priv->drm, "Unsupported Slice Count %d\n",
++				    pipe_config->dsc.slice_count);
++			return -EINVAL;
++		}
+ 	} else {
+ 		u16 dsc_max_output_bpp;
+ 		u8 dsc_dp_slice_count;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+index 108882bbd2b8b..7aa6accb74ad3 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+@@ -51,11 +51,6 @@
+ #define   INTF_TPG_RGB_MAPPING          0x11C
+ #define   INTF_PROG_FETCH_START         0x170
+ #define   INTF_PROG_ROT_START           0x174
+-
+-#define   INTF_FRAME_LINE_COUNT_EN      0x0A8
+-#define   INTF_FRAME_COUNT              0x0AC
+-#define   INTF_LINE_COUNT               0x0B0
+-
+ #define   INTF_MUX                      0x25C
+ 
+ static const struct dpu_intf_cfg *_intf_offset(enum dpu_intf intf,
+diff --git a/drivers/gpu/drm/msm/dp/dp_audio.c b/drivers/gpu/drm/msm/dp/dp_audio.c
+index d7e4a39a904e2..0eaaaa94563a3 100644
+--- a/drivers/gpu/drm/msm/dp/dp_audio.c
++++ b/drivers/gpu/drm/msm/dp/dp_audio.c
+@@ -577,6 +577,18 @@ static struct hdmi_codec_pdata codec_data = {
+ 	.i2s = 1,
+ };
+ 
++void dp_unregister_audio_driver(struct device *dev, struct dp_audio *dp_audio)
++{
++	struct dp_audio_private *audio_priv;
++
++	audio_priv = container_of(dp_audio, struct dp_audio_private, dp_audio);
++
++	if (audio_priv->audio_pdev) {
++		platform_device_unregister(audio_priv->audio_pdev);
++		audio_priv->audio_pdev = NULL;
++	}
++}
++
+ int dp_register_audio_driver(struct device *dev,
+ 		struct dp_audio *dp_audio)
+ {
+diff --git a/drivers/gpu/drm/msm/dp/dp_audio.h b/drivers/gpu/drm/msm/dp/dp_audio.h
+index 84e5f4a5d26ba..4ab78880af829 100644
+--- a/drivers/gpu/drm/msm/dp/dp_audio.h
++++ b/drivers/gpu/drm/msm/dp/dp_audio.h
+@@ -53,6 +53,8 @@ struct dp_audio *dp_audio_get(struct platform_device *pdev,
+ int dp_register_audio_driver(struct device *dev,
+ 		struct dp_audio *dp_audio);
+ 
++void dp_unregister_audio_driver(struct device *dev, struct dp_audio *dp_audio);
++
+ /**
+  * dp_audio_put()
+  *
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 1c3dcbc6cce8c..0bcccf422192c 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -276,6 +276,7 @@ static void dp_display_unbind(struct device *dev, struct device *master,
+ 	kthread_stop(dp->ev_tsk);
+ 
+ 	dp_power_client_deinit(dp->power);
++	dp_unregister_audio_driver(dev, dp->audio);
+ 	dp_aux_unregister(dp->aux);
+ 	priv->dp = NULL;
+ }
+diff --git a/drivers/gpu/drm/tegra/sor.c b/drivers/gpu/drm/tegra/sor.c
+index 32c83f2e386ca..9d60d1c4cfcea 100644
+--- a/drivers/gpu/drm/tegra/sor.c
++++ b/drivers/gpu/drm/tegra/sor.c
+@@ -1153,7 +1153,7 @@ static int tegra_sor_compute_config(struct tegra_sor *sor,
+ 				    struct drm_dp_link *link)
+ {
+ 	const u64 f = 100000, link_rate = link->rate * 1000;
+-	const u64 pclk = mode->clock * 1000;
++	const u64 pclk = (u64)mode->clock * 1000;
+ 	u64 input, output, watermark, num;
+ 	struct tegra_sor_params params;
+ 	u32 num_syms_per_line;
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index f5ea8e1d84452..2e32a21bbcbfc 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -828,8 +828,7 @@ static int hidpp_unifying_init(struct hidpp_device *hidpp)
+ 	if (ret)
+ 		return ret;
+ 
+-	snprintf(hdev->uniq, sizeof(hdev->uniq), "%04x-%4phD",
+-		 hdev->product, &serial);
++	snprintf(hdev->uniq, sizeof(hdev->uniq), "%4phD", &serial);
+ 	dbg_hid("HID++ Unifying: Got serial: %s\n", hdev->uniq);
+ 
+ 	name = hidpp_unifying_get_name(hidpp);
+@@ -922,6 +921,54 @@ print_version:
+ 	return 0;
+ }
+ 
++/* -------------------------------------------------------------------------- */
++/* 0x0003: Device Information                                                 */
++/* -------------------------------------------------------------------------- */
++
++#define HIDPP_PAGE_DEVICE_INFORMATION			0x0003
++
++#define CMD_GET_DEVICE_INFO				0x00
++
++static int hidpp_get_serial(struct hidpp_device *hidpp, u32 *serial)
++{
++	struct hidpp_report response;
++	u8 feature_type;
++	u8 feature_index;
++	int ret;
++
++	ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_DEVICE_INFORMATION,
++				     &feature_index,
++				     &feature_type);
++	if (ret)
++		return ret;
++
++	ret = hidpp_send_fap_command_sync(hidpp, feature_index,
++					  CMD_GET_DEVICE_INFO,
++					  NULL, 0, &response);
++	if (ret)
++		return ret;
++
++	/* See hidpp_unifying_get_serial() */
++	*serial = *((u32 *)&response.rap.params[1]);
++	return 0;
++}
++
++static int hidpp_serial_init(struct hidpp_device *hidpp)
++{
++	struct hid_device *hdev = hidpp->hid_dev;
++	u32 serial;
++	int ret;
++
++	ret = hidpp_get_serial(hidpp, &serial);
++	if (ret)
++		return ret;
++
++	snprintf(hdev->uniq, sizeof(hdev->uniq), "%4phD", &serial);
++	dbg_hid("HID++ DeviceInformation: Got serial: %s\n", hdev->uniq);
++
++	return 0;
++}
++
+ /* -------------------------------------------------------------------------- */
+ /* 0x0005: GetDeviceNameType                                                  */
+ /* -------------------------------------------------------------------------- */
+@@ -3855,6 +3902,8 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 
+ 	if (hidpp->quirks & HIDPP_QUIRK_UNIFYING)
+ 		hidpp_unifying_init(hidpp);
++	else if (hid_is_usb(hidpp->hid_dev))
++		hidpp_serial_init(hidpp);
+ 
+ 	connected = hidpp_root_get_protocol_version(hidpp) == 0;
+ 	atomic_set(&hidpp->connected, connected);
+diff --git a/drivers/hid/wacom.h b/drivers/hid/wacom.h
+index 203d27d198b81..3f8b24a57014b 100644
+--- a/drivers/hid/wacom.h
++++ b/drivers/hid/wacom.h
+@@ -91,6 +91,7 @@
+ #include <linux/leds.h>
+ #include <linux/usb/input.h>
+ #include <linux/power_supply.h>
++#include <linux/timer.h>
+ #include <asm/unaligned.h>
+ 
+ /*
+@@ -167,6 +168,7 @@ struct wacom {
+ 	struct delayed_work init_work;
+ 	struct wacom_remote *remote;
+ 	struct work_struct mode_change_work;
++	struct timer_list idleprox_timer;
+ 	bool generic_has_leds;
+ 	struct wacom_leds {
+ 		struct wacom_group_leds *groups;
+@@ -239,4 +241,5 @@ struct wacom_led *wacom_led_find(struct wacom *wacom, unsigned int group,
+ struct wacom_led *wacom_led_next(struct wacom *wacom, struct wacom_led *cur);
+ int wacom_equivalent_usage(int usage);
+ int wacom_initialize_leds(struct wacom *wacom);
++void wacom_idleprox_timeout(struct timer_list *list);
+ #endif
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index b42785fdf7ed5..a93070f5b214c 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2781,6 +2781,7 @@ static int wacom_probe(struct hid_device *hdev,
+ 	INIT_WORK(&wacom->battery_work, wacom_battery_work);
+ 	INIT_WORK(&wacom->remote_work, wacom_remote_work);
+ 	INIT_WORK(&wacom->mode_change_work, wacom_mode_change_work);
++	timer_setup(&wacom->idleprox_timer, &wacom_idleprox_timeout, TIMER_DEFERRABLE);
+ 
+ 	/* ask for the report descriptor to be loaded by HID */
+ 	error = hid_parse(hdev);
+@@ -2825,6 +2826,7 @@ static void wacom_remove(struct hid_device *hdev)
+ 	cancel_work_sync(&wacom->battery_work);
+ 	cancel_work_sync(&wacom->remote_work);
+ 	cancel_work_sync(&wacom->mode_change_work);
++	del_timer_sync(&wacom->idleprox_timer);
+ 	if (hdev->bus == BUS_BLUETOOTH)
+ 		device_remove_file(&hdev->dev, &dev_attr_speed);
+ 
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 6c64165fae13e..37754a1f733b4 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -11,6 +11,7 @@
+ #include "wacom_wac.h"
+ #include "wacom.h"
+ #include <linux/input/mt.h>
++#include <linux/jiffies.h>
+ 
+ /* resolution for penabled devices */
+ #define WACOM_PL_RES		20
+@@ -41,6 +42,43 @@ static int wacom_numbered_button_to_key(int n);
+ 
+ static void wacom_update_led(struct wacom *wacom, int button_count, int mask,
+ 			     int group);
++
++static void wacom_force_proxout(struct wacom_wac *wacom_wac)
++{
++	struct input_dev *input = wacom_wac->pen_input;
++
++	wacom_wac->shared->stylus_in_proximity = 0;
++
++	input_report_key(input, BTN_TOUCH, 0);
++	input_report_key(input, BTN_STYLUS, 0);
++	input_report_key(input, BTN_STYLUS2, 0);
++	input_report_key(input, BTN_STYLUS3, 0);
++	input_report_key(input, wacom_wac->tool[0], 0);
++	if (wacom_wac->serial[0]) {
++		input_report_abs(input, ABS_MISC, 0);
++	}
++	input_report_abs(input, ABS_PRESSURE, 0);
++
++	wacom_wac->tool[0] = 0;
++	wacom_wac->id[0] = 0;
++	wacom_wac->serial[0] = 0;
++
++	input_sync(input);
++}
++
++void wacom_idleprox_timeout(struct timer_list *list)
++{
++	struct wacom *wacom = from_timer(wacom, list, idleprox_timer);
++	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
++
++	if (!wacom_wac->hid_data.sense_state) {
++		return;
++	}
++
++	hid_warn(wacom->hdev, "%s: tool appears to be hung in-prox. forcing it out.\n", __func__);
++	wacom_force_proxout(wacom_wac);
++}
++
+ /*
+  * Percent of battery capacity for Graphire.
+  * 8th value means AC online and show 100% capacity.
+@@ -675,11 +713,14 @@ static int wacom_intuos_get_tool_type(int tool_id)
+ 	case 0x802: /* Intuos4/5 13HD/24HD General Pen */
+ 	case 0x8e2: /* IntuosHT2 pen */
+ 	case 0x022:
++	case 0x200: /* Pro Pen 3 */
++	case 0x04200: /* Pro Pen 3 */
+ 	case 0x10842: /* MobileStudio Pro Pro Pen slim */
+ 	case 0x14802: /* Intuos4/5 13HD/24HD Classic Pen */
+ 	case 0x16802: /* Cintiq 13HD Pro Pen */
+ 	case 0x18802: /* DTH2242 Pen */
+ 	case 0x10802: /* Intuos4/5 13HD/24HD General Pen */
++	case 0x80842: /* Intuos Pro and Cintiq Pro 3D Pen */
+ 		tool_type = BTN_TOOL_PEN;
+ 		break;
+ 
+@@ -1927,18 +1968,7 @@ static void wacom_map_usage(struct input_dev *input, struct hid_usage *usage,
+ static void wacom_wac_battery_usage_mapping(struct hid_device *hdev,
+ 		struct hid_field *field, struct hid_usage *usage)
+ {
+-	struct wacom *wacom = hid_get_drvdata(hdev);
+-	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+-	struct wacom_features *features = &wacom_wac->features;
+-	unsigned equivalent_usage = wacom_equivalent_usage(usage->hid);
+-
+-	switch (equivalent_usage) {
+-	case HID_DG_BATTERYSTRENGTH:
+-	case WACOM_HID_WD_BATTERY_LEVEL:
+-	case WACOM_HID_WD_BATTERY_CHARGING:
+-		features->quirks |= WACOM_QUIRK_BATTERY;
+-		break;
+-	}
++	return;
+ }
+ 
+ static void wacom_wac_battery_event(struct hid_device *hdev, struct hid_field *field,
+@@ -1959,18 +1989,21 @@ static void wacom_wac_battery_event(struct hid_device *hdev, struct hid_field *f
+ 			wacom_wac->hid_data.bat_connected = 1;
+ 			wacom_wac->hid_data.bat_status = WACOM_POWER_SUPPLY_STATUS_AUTO;
+ 		}
++		wacom_wac->features.quirks |= WACOM_QUIRK_BATTERY;
+ 		break;
+ 	case WACOM_HID_WD_BATTERY_LEVEL:
+ 		value = value * 100 / (field->logical_maximum - field->logical_minimum);
+ 		wacom_wac->hid_data.battery_capacity = value;
+ 		wacom_wac->hid_data.bat_connected = 1;
+ 		wacom_wac->hid_data.bat_status = WACOM_POWER_SUPPLY_STATUS_AUTO;
++		wacom_wac->features.quirks |= WACOM_QUIRK_BATTERY;
+ 		break;
+ 	case WACOM_HID_WD_BATTERY_CHARGING:
+ 		wacom_wac->hid_data.bat_charging = value;
+ 		wacom_wac->hid_data.ps_connected = value;
+ 		wacom_wac->hid_data.bat_connected = 1;
+ 		wacom_wac->hid_data.bat_status = WACOM_POWER_SUPPLY_STATUS_AUTO;
++		wacom_wac->features.quirks |= WACOM_QUIRK_BATTERY;
+ 		break;
+ 	}
+ }
+@@ -1986,18 +2019,15 @@ static void wacom_wac_battery_report(struct hid_device *hdev,
+ {
+ 	struct wacom *wacom = hid_get_drvdata(hdev);
+ 	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+-	struct wacom_features *features = &wacom_wac->features;
+ 
+-	if (features->quirks & WACOM_QUIRK_BATTERY) {
+-		int status = wacom_wac->hid_data.bat_status;
+-		int capacity = wacom_wac->hid_data.battery_capacity;
+-		bool charging = wacom_wac->hid_data.bat_charging;
+-		bool connected = wacom_wac->hid_data.bat_connected;
+-		bool powered = wacom_wac->hid_data.ps_connected;
++	int status = wacom_wac->hid_data.bat_status;
++	int capacity = wacom_wac->hid_data.battery_capacity;
++	bool charging = wacom_wac->hid_data.bat_charging;
++	bool connected = wacom_wac->hid_data.bat_connected;
++	bool powered = wacom_wac->hid_data.ps_connected;
+ 
+-		wacom_notify_battery(wacom_wac, status, capacity, charging,
+-				     connected, powered);
+-	}
++	wacom_notify_battery(wacom_wac, status, capacity, charging,
++			     connected, powered);
+ }
+ 
+ static void wacom_wac_pad_usage_mapping(struct hid_device *hdev,
+@@ -2339,6 +2369,7 @@ static void wacom_wac_pen_event(struct hid_device *hdev, struct hid_field *field
+ 		value = field->logical_maximum - value;
+ 		break;
+ 	case HID_DG_INRANGE:
++		mod_timer(&wacom->idleprox_timer, jiffies + msecs_to_jiffies(100));
+ 		wacom_wac->hid_data.inrange_state = value;
+ 		if (!(features->quirks & WACOM_QUIRK_SENSE))
+ 			wacom_wac->hid_data.sense_state = value;
+@@ -4812,6 +4843,10 @@ static const struct wacom_features wacom_features_0x3c6 =
+ static const struct wacom_features wacom_features_0x3c8 =
+ 	{ "Wacom Intuos BT M", 21600, 13500, 4095, 63,
+ 	  INTUOSHT3_BT, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4 };
++static const struct wacom_features wacom_features_0x3dd =
++	{ "Wacom Intuos Pro S", 31920, 19950, 8191, 63,
++	  INTUOSP2S_BT, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES, 7,
++	  .touch_max = 10 };
+ 
+ static const struct wacom_features wacom_features_HID_ANY_ID =
+ 	{ "Wacom HID", .type = HID_GENERIC, .oVid = HID_ANY_ID, .oPid = HID_ANY_ID };
+@@ -4991,6 +5026,7 @@ const struct hid_device_id wacom_ids[] = {
+ 	{ BT_DEVICE_WACOM(0x393) },
+ 	{ BT_DEVICE_WACOM(0x3c6) },
+ 	{ BT_DEVICE_WACOM(0x3c8) },
++	{ BT_DEVICE_WACOM(0x3dd) },
+ 	{ USB_DEVICE_WACOM(0x4001) },
+ 	{ USB_DEVICE_WACOM(0x4004) },
+ 	{ USB_DEVICE_WACOM(0x5000) },
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+index 3309b1344ffc0..3e74f5aed20d7 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etr.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+@@ -926,7 +926,7 @@ tmc_etr_buf_insert_barrier_packet(struct etr_buf *etr_buf, u64 offset)
+ 
+ 	len = tmc_etr_buf_get_data(etr_buf, offset,
+ 				   CORESIGHT_BARRIER_PKT_SIZE, &bufp);
+-	if (WARN_ON(len < CORESIGHT_BARRIER_PKT_SIZE))
++	if (WARN_ON(len < 0 || len < CORESIGHT_BARRIER_PKT_SIZE))
+ 		return -EINVAL;
+ 	coresight_insert_barrier_packet(bufp);
+ 	return offset + CORESIGHT_BARRIER_PKT_SIZE;
+diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
+index 4688a6657c875..3bd0dcde8576d 100644
+--- a/drivers/infiniband/core/user_mad.c
++++ b/drivers/infiniband/core/user_mad.c
+@@ -131,6 +131,11 @@ struct ib_umad_packet {
+ 	struct ib_user_mad mad;
+ };
+ 
++struct ib_rmpp_mad_hdr {
++	struct ib_mad_hdr	mad_hdr;
++	struct ib_rmpp_hdr      rmpp_hdr;
++} __packed;
++
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/ib_umad.h>
+ 
+@@ -494,11 +499,11 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
+ 			     size_t count, loff_t *pos)
+ {
+ 	struct ib_umad_file *file = filp->private_data;
++	struct ib_rmpp_mad_hdr *rmpp_mad_hdr;
+ 	struct ib_umad_packet *packet;
+ 	struct ib_mad_agent *agent;
+ 	struct rdma_ah_attr ah_attr;
+ 	struct ib_ah *ah;
+-	struct ib_rmpp_mad *rmpp_mad;
+ 	__be64 *tid;
+ 	int ret, data_len, hdr_len, copy_offset, rmpp_active;
+ 	u8 base_version;
+@@ -506,7 +511,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
+ 	if (count < hdr_size(file) + IB_MGMT_RMPP_HDR)
+ 		return -EINVAL;
+ 
+-	packet = kzalloc(sizeof *packet + IB_MGMT_RMPP_HDR, GFP_KERNEL);
++	packet = kzalloc(sizeof(*packet) + IB_MGMT_RMPP_HDR, GFP_KERNEL);
+ 	if (!packet)
+ 		return -ENOMEM;
+ 
+@@ -560,13 +565,13 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
+ 		goto err_up;
+ 	}
+ 
+-	rmpp_mad = (struct ib_rmpp_mad *) packet->mad.data;
+-	hdr_len = ib_get_mad_data_offset(rmpp_mad->mad_hdr.mgmt_class);
++	rmpp_mad_hdr = (struct ib_rmpp_mad_hdr *)packet->mad.data;
++	hdr_len = ib_get_mad_data_offset(rmpp_mad_hdr->mad_hdr.mgmt_class);
+ 
+-	if (ib_is_mad_class_rmpp(rmpp_mad->mad_hdr.mgmt_class)
++	if (ib_is_mad_class_rmpp(rmpp_mad_hdr->mad_hdr.mgmt_class)
+ 	    && ib_mad_kernel_rmpp_agent(agent)) {
+ 		copy_offset = IB_MGMT_RMPP_HDR;
+-		rmpp_active = ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) &
++		rmpp_active = ib_get_rmpp_flags(&rmpp_mad_hdr->rmpp_hdr) &
+ 						IB_MGMT_RMPP_FLAG_ACTIVE;
+ 	} else {
+ 		copy_offset = IB_MGMT_MAD_HDR;
+@@ -615,12 +620,12 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
+ 		tid = &((struct ib_mad_hdr *) packet->msg->mad)->tid;
+ 		*tid = cpu_to_be64(((u64) agent->hi_tid) << 32 |
+ 				   (be64_to_cpup(tid) & 0xffffffff));
+-		rmpp_mad->mad_hdr.tid = *tid;
++		rmpp_mad_hdr->mad_hdr.tid = *tid;
+ 	}
+ 
+ 	if (!ib_mad_kernel_rmpp_agent(agent)
+-	   && ib_is_mad_class_rmpp(rmpp_mad->mad_hdr.mgmt_class)
+-	   && (ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) & IB_MGMT_RMPP_FLAG_ACTIVE)) {
++	    && ib_is_mad_class_rmpp(rmpp_mad_hdr->mad_hdr.mgmt_class)
++	    && (ib_get_rmpp_flags(&rmpp_mad_hdr->rmpp_hdr) & IB_MGMT_RMPP_FLAG_ACTIVE)) {
+ 		spin_lock_irq(&file->send_lock);
+ 		list_add_tail(&packet->list, &file->send_list);
+ 		spin_unlock_irq(&file->send_lock);
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 70dedc0f7827c..0bd55e1fca372 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -489,6 +489,9 @@ struct xboxone_init_packet {
+ 	}
+ 
+ 
++#define GIP_WIRED_INTF_DATA 0
++#define GIP_WIRED_INTF_AUDIO 1
++
+ /*
+  * This packet is required for all Xbox One pads with 2015
+  * or later firmware installed (or present from the factory).
+@@ -1813,7 +1816,7 @@ static int xpad_probe(struct usb_interface *intf, const struct usb_device_id *id
+ 	}
+ 
+ 	if (xpad->xtype == XTYPE_XBOXONE &&
+-	    intf->cur_altsetting->desc.bInterfaceNumber != 0) {
++	    intf->cur_altsetting->desc.bInterfaceNumber != GIP_WIRED_INTF_DATA) {
+ 		/*
+ 		 * The Xbox One controller lists three interfaces all with the
+ 		 * same interface class, subclass and protocol. Differentiate by
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index bc4cbc7542ce2..982c42c873102 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -162,6 +162,18 @@ static void queue_inc_cons(struct arm_smmu_ll_queue *q)
+ 	q->cons = Q_OVF(q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);
+ }
+ 
++static void queue_sync_cons_ovf(struct arm_smmu_queue *q)
++{
++	struct arm_smmu_ll_queue *llq = &q->llq;
++
++	if (likely(Q_OVF(llq->prod) == Q_OVF(llq->cons)))
++		return;
++
++	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
++		      Q_IDX(llq, llq->cons);
++	queue_sync_cons_out(q);
++}
++
+ static int queue_sync_prod_in(struct arm_smmu_queue *q)
+ {
+ 	u32 prod;
+@@ -1380,8 +1392,7 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
+ 	} while (!queue_empty(llq));
+ 
+ 	/* Sync our overflow flag, as we believe we're up to speed */
+-	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
+-		    Q_IDX(llq, llq->cons);
++	queue_sync_cons_ovf(q);
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -1439,9 +1450,7 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
+ 	} while (!queue_empty(llq));
+ 
+ 	/* Sync our overflow flag, as we believe we're up to speed */
+-	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
+-		      Q_IDX(llq, llq->cons);
+-	queue_sync_cons_out(q);
++	queue_sync_cons_ovf(q);
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 63f7173b241f0..1598a1ddbf694 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -32,12 +32,26 @@ static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
+ 
+ static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
+ {
+-	unsigned int last_s2cr = ARM_SMMU_GR0_S2CR(smmu->num_mapping_groups - 1);
+ 	struct qcom_smmu *qsmmu = to_qcom_smmu(smmu);
++	unsigned int last_s2cr;
+ 	u32 reg;
+ 	u32 smr;
+ 	int i;
+ 
++	/*
++	 * Some platforms support more than the Arm SMMU architected maximum of
++	 * 128 stream matching groups. For unknown reasons, the additional
++	 * groups don't exhibit the same behavior as the architected registers,
++	 * so limit the groups to 128 until the behavior is fixed for the other
++	 * groups.
++	 */
++	if (smmu->num_mapping_groups > 128) {
++		dev_notice(smmu->dev, "\tLimiting the stream matching groups to 128\n");
++		smmu->num_mapping_groups = 128;
++	}
++
++	last_s2cr = ARM_SMMU_GR0_S2CR(smmu->num_mapping_groups - 1);
++
+ 	/*
+ 	 * With some firmware versions writes to S2CR of type FAULT are
+ 	 * ignored, and writing BYPASS will end up written as FAULT in the
+diff --git a/drivers/mcb/mcb-pci.c b/drivers/mcb/mcb-pci.c
+index dc88232d9af83..53d9202ff9a7c 100644
+--- a/drivers/mcb/mcb-pci.c
++++ b/drivers/mcb/mcb-pci.c
+@@ -31,7 +31,7 @@ static int mcb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ 	struct resource *res;
+ 	struct priv *priv;
+-	int ret;
++	int ret, table_size;
+ 	unsigned long flags;
+ 
+ 	priv = devm_kzalloc(&pdev->dev, sizeof(struct priv), GFP_KERNEL);
+@@ -90,7 +90,30 @@ static int mcb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	if (ret < 0)
+ 		goto out_mcb_bus;
+ 
+-	dev_dbg(&pdev->dev, "Found %d cells\n", ret);
++	table_size = ret;
++
++	if (table_size < CHAM_HEADER_SIZE) {
++		/* Release the previous resources */
++		devm_iounmap(&pdev->dev, priv->base);
++		devm_release_mem_region(&pdev->dev, priv->mapbase, CHAM_HEADER_SIZE);
++
++		/* Then, allocate it again with the actual chameleon table size */
++		res = devm_request_mem_region(&pdev->dev, priv->mapbase,
++						table_size,
++						KBUILD_MODNAME);
++		if (!res) {
++			dev_err(&pdev->dev, "Failed to request PCI memory\n");
++			ret = -EBUSY;
++			goto out_mcb_bus;
++		}
++
++		priv->base = devm_ioremap(&pdev->dev, priv->mapbase, table_size);
++		if (!priv->base) {
++			dev_err(&pdev->dev, "Cannot ioremap\n");
++			ret = -ENOMEM;
++			goto out_mcb_bus;
++		}
++	}
+ 
+ 	mcb_bus_add_devices(priv->bus);
+ 
+diff --git a/drivers/media/pci/netup_unidvb/netup_unidvb_core.c b/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
+index 77bae14685513..a71814e2772d1 100644
+--- a/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
++++ b/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
+@@ -697,7 +697,7 @@ static void netup_unidvb_dma_fini(struct netup_unidvb_dev *ndev, int num)
+ 	netup_unidvb_dma_enable(dma, 0);
+ 	msleep(50);
+ 	cancel_work_sync(&dma->work);
+-	del_timer(&dma->timeout);
++	del_timer_sync(&dma->timeout);
+ }
+ 
+ static int netup_unidvb_dma_setup(struct netup_unidvb_dev *ndev)
+diff --git a/drivers/media/radio/radio-shark.c b/drivers/media/radio/radio-shark.c
+index 8230da828d0ee..127a3be0e0f07 100644
+--- a/drivers/media/radio/radio-shark.c
++++ b/drivers/media/radio/radio-shark.c
+@@ -316,6 +316,16 @@ static int usb_shark_probe(struct usb_interface *intf,
+ {
+ 	struct shark_device *shark;
+ 	int retval = -ENOMEM;
++	static const u8 ep_addresses[] = {
++		SHARK_IN_EP | USB_DIR_IN,
++		SHARK_OUT_EP | USB_DIR_OUT,
++		0};
++
++	/* Are the expected endpoints present? */
++	if (!usb_check_int_endpoints(intf, ep_addresses)) {
++		dev_err(&intf->dev, "Invalid radioSHARK device\n");
++		return -EINVAL;
++	}
+ 
+ 	shark = kzalloc(sizeof(struct shark_device), GFP_KERNEL);
+ 	if (!shark)
+diff --git a/drivers/media/radio/radio-shark2.c b/drivers/media/radio/radio-shark2.c
+index d150f12382c60..f1c5c0a6a335c 100644
+--- a/drivers/media/radio/radio-shark2.c
++++ b/drivers/media/radio/radio-shark2.c
+@@ -282,6 +282,16 @@ static int usb_shark_probe(struct usb_interface *intf,
+ {
+ 	struct shark_device *shark;
+ 	int retval = -ENOMEM;
++	static const u8 ep_addresses[] = {
++		SHARK_IN_EP | USB_DIR_IN,
++		SHARK_OUT_EP | USB_DIR_OUT,
++		0};
++
++	/* Are the expected endpoints present? */
++	if (!usb_check_int_endpoints(intf, ep_addresses)) {
++		dev_err(&intf->dev, "Invalid radioSHARK2 device\n");
++		return -EINVAL;
++	}
+ 
+ 	shark = kzalloc(sizeof(struct shark_device), GFP_KERNEL);
+ 	if (!shark)
+diff --git a/drivers/memstick/host/r592.c b/drivers/memstick/host/r592.c
+index eaa2a94d18be4..dd06c18495eb6 100644
+--- a/drivers/memstick/host/r592.c
++++ b/drivers/memstick/host/r592.c
+@@ -828,7 +828,7 @@ static void r592_remove(struct pci_dev *pdev)
+ 	/* Stop the processing thread.
+ 	That ensures that we won't take any more requests */
+ 	kthread_stop(dev->io_thread);
+-
++	del_timer_sync(&dev->detect_timer);
+ 	r592_enable_device(dev, false);
+ 
+ 	while (!error && dev->req) {
+diff --git a/drivers/message/fusion/mptlan.c b/drivers/message/fusion/mptlan.c
+index 7d3784aa20e58..90cc3cd49a5ee 100644
+--- a/drivers/message/fusion/mptlan.c
++++ b/drivers/message/fusion/mptlan.c
+@@ -1430,7 +1430,9 @@ mptlan_remove(struct pci_dev *pdev)
+ {
+ 	MPT_ADAPTER 		*ioc = pci_get_drvdata(pdev);
+ 	struct net_device	*dev = ioc->netdev;
++	struct mpt_lan_priv *priv = netdev_priv(dev);
+ 
++	cancel_delayed_work_sync(&priv->post_buckets_task);
+ 	if(dev != NULL) {
+ 		unregister_netdev(dev);
+ 		free_netdev(dev);
+diff --git a/drivers/mfd/dln2.c b/drivers/mfd/dln2.c
+index 852129ea07666..fc65f9e25fda8 100644
+--- a/drivers/mfd/dln2.c
++++ b/drivers/mfd/dln2.c
+@@ -836,6 +836,7 @@ out_stop_rx:
+ 	dln2_stop_rx_urbs(dln2);
+ 
+ out_free:
++	usb_put_dev(dln2->usb_dev);
+ 	dln2_free(dln2);
+ 
+ 	return ret;
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index c40b92f8d16bf..381e6cdd603a1 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -3537,7 +3537,11 @@ static int bond_slave_netdev_event(unsigned long event,
+ 		unblock_netpoll_tx();
+ 		break;
+ 	case NETDEV_FEAT_CHANGE:
+-		bond_compute_features(bond);
++		if (!bond->notifier_ctx) {
++			bond->notifier_ctx = true;
++			bond_compute_features(bond);
++			bond->notifier_ctx = false;
++		}
+ 		break;
+ 	case NETDEV_RESEND_IGMP:
+ 		/* Propagate to master device */
+@@ -5360,6 +5364,8 @@ static int bond_init(struct net_device *bond_dev)
+ 	if (!bond->wq)
+ 		return -ENOMEM;
+ 
++	bond->notifier_ctx = false;
++
+ 	spin_lock_init(&bond->stats_lock);
+ 	netdev_lockdep_set_classes(bond_dev);
+ 
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index 9d7445f6ef143..197390dfc6abc 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -70,10 +70,12 @@ MODULE_DESCRIPTION("CAN driver for Kvaser CAN/PCIe devices");
+ #define KVASER_PCIEFD_SYSID_BUILD_REG (KVASER_PCIEFD_SYSID_BASE + 0x14)
+ /* Shared receive buffer registers */
+ #define KVASER_PCIEFD_SRB_BASE 0x1f200
++#define KVASER_PCIEFD_SRB_FIFO_LAST_REG (KVASER_PCIEFD_SRB_BASE + 0x1f4)
+ #define KVASER_PCIEFD_SRB_CMD_REG (KVASER_PCIEFD_SRB_BASE + 0x200)
+ #define KVASER_PCIEFD_SRB_IEN_REG (KVASER_PCIEFD_SRB_BASE + 0x204)
+ #define KVASER_PCIEFD_SRB_IRQ_REG (KVASER_PCIEFD_SRB_BASE + 0x20c)
+ #define KVASER_PCIEFD_SRB_STAT_REG (KVASER_PCIEFD_SRB_BASE + 0x210)
++#define KVASER_PCIEFD_SRB_RX_NR_PACKETS_REG (KVASER_PCIEFD_SRB_BASE + 0x214)
+ #define KVASER_PCIEFD_SRB_CTRL_REG (KVASER_PCIEFD_SRB_BASE + 0x218)
+ /* EPCS flash controller registers */
+ #define KVASER_PCIEFD_SPI_BASE 0x1fc00
+@@ -110,6 +112,9 @@ MODULE_DESCRIPTION("CAN driver for Kvaser CAN/PCIe devices");
+ /* DMA support */
+ #define KVASER_PCIEFD_SRB_STAT_DMA BIT(24)
+ 
++/* SRB current packet level */
++#define KVASER_PCIEFD_SRB_RX_NR_PACKETS_MASK 0xff
++
+ /* DMA Enable */
+ #define KVASER_PCIEFD_SRB_CTRL_DMA_ENABLE BIT(0)
+ 
+@@ -528,7 +533,7 @@ static int kvaser_pciefd_set_tx_irq(struct kvaser_pciefd_can *can)
+ 	      KVASER_PCIEFD_KCAN_IRQ_TOF | KVASER_PCIEFD_KCAN_IRQ_ABD |
+ 	      KVASER_PCIEFD_KCAN_IRQ_TAE | KVASER_PCIEFD_KCAN_IRQ_TAL |
+ 	      KVASER_PCIEFD_KCAN_IRQ_FDIC | KVASER_PCIEFD_KCAN_IRQ_BPP |
+-	      KVASER_PCIEFD_KCAN_IRQ_TAR | KVASER_PCIEFD_KCAN_IRQ_TFD;
++	      KVASER_PCIEFD_KCAN_IRQ_TAR;
+ 
+ 	iowrite32(msk, can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 
+@@ -556,6 +561,8 @@ static void kvaser_pciefd_setup_controller(struct kvaser_pciefd_can *can)
+ 
+ 	if (can->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
+ 		mode |= KVASER_PCIEFD_KCAN_MODE_LOM;
++	else
++		mode &= ~KVASER_PCIEFD_KCAN_MODE_LOM;
+ 
+ 	mode |= KVASER_PCIEFD_KCAN_MODE_EEN;
+ 	mode |= KVASER_PCIEFD_KCAN_MODE_EPEN;
+@@ -574,7 +581,7 @@ static void kvaser_pciefd_start_controller_flush(struct kvaser_pciefd_can *can)
+ 
+ 	spin_lock_irqsave(&can->lock, irq);
+ 	iowrite32(-1, can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
+-	iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD | KVASER_PCIEFD_KCAN_IRQ_TFD,
++	iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD,
+ 		  can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 
+ 	status = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_STAT_REG);
+@@ -617,7 +624,7 @@ static int kvaser_pciefd_bus_on(struct kvaser_pciefd_can *can)
+ 	iowrite32(0, can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 	iowrite32(-1, can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
+ 
+-	iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD | KVASER_PCIEFD_KCAN_IRQ_TFD,
++	iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD,
+ 		  can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 
+ 	mode = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
+@@ -721,6 +728,7 @@ static int kvaser_pciefd_stop(struct net_device *netdev)
+ 		iowrite32(0, can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 		del_timer(&can->bec_poll_timer);
+ 	}
++	can->can.state = CAN_STATE_STOPPED;
+ 	close_candev(netdev);
+ 
+ 	return ret;
+@@ -1003,8 +1011,7 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
+ 		SET_NETDEV_DEV(netdev, &pcie->pci->dev);
+ 
+ 		iowrite32(-1, can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
+-		iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD |
+-			  KVASER_PCIEFD_KCAN_IRQ_TFD,
++		iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD,
+ 			  can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 
+ 		pcie->can[i] = can;
+@@ -1054,6 +1061,7 @@ static int kvaser_pciefd_setup_dma(struct kvaser_pciefd *pcie)
+ {
+ 	int i;
+ 	u32 srb_status;
++	u32 srb_packet_count;
+ 	dma_addr_t dma_addr[KVASER_PCIEFD_DMA_COUNT];
+ 
+ 	/* Disable the DMA */
+@@ -1081,6 +1089,15 @@ static int kvaser_pciefd_setup_dma(struct kvaser_pciefd *pcie)
+ 		  KVASER_PCIEFD_SRB_CMD_RDB1,
+ 		  pcie->reg_base + KVASER_PCIEFD_SRB_CMD_REG);
+ 
++	/* Empty Rx FIFO */
++	srb_packet_count = ioread32(pcie->reg_base + KVASER_PCIEFD_SRB_RX_NR_PACKETS_REG) &
++			   KVASER_PCIEFD_SRB_RX_NR_PACKETS_MASK;
++	while (srb_packet_count) {
++		/* Drop current packet in FIFO */
++		ioread32(pcie->reg_base + KVASER_PCIEFD_SRB_FIFO_LAST_REG);
++		srb_packet_count--;
++	}
++
+ 	srb_status = ioread32(pcie->reg_base + KVASER_PCIEFD_SRB_STAT_REG);
+ 	if (!(srb_status & KVASER_PCIEFD_SRB_STAT_DI)) {
+ 		dev_err(&pcie->pci->dev, "DMA not idle before enabling\n");
+@@ -1423,9 +1440,6 @@ static int kvaser_pciefd_handle_status_packet(struct kvaser_pciefd *pcie,
+ 		cmd = KVASER_PCIEFD_KCAN_CMD_AT;
+ 		cmd |= ++can->cmd_seq << KVASER_PCIEFD_KCAN_CMD_SEQ_SHIFT;
+ 		iowrite32(cmd, can->reg_base + KVASER_PCIEFD_KCAN_CMD_REG);
+-
+-		iowrite32(KVASER_PCIEFD_KCAN_IRQ_TFD,
+-			  can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+ 	} else if (p->header[0] & KVASER_PCIEFD_SPACK_IDET &&
+ 		   p->header[0] & KVASER_PCIEFD_SPACK_IRM &&
+ 		   cmdseq == (p->header[1] & KVASER_PCIEFD_PACKET_SEQ_MSK) &&
+@@ -1714,15 +1728,6 @@ static int kvaser_pciefd_transmit_irq(struct kvaser_pciefd_can *can)
+ 	if (irq & KVASER_PCIEFD_KCAN_IRQ_TOF)
+ 		netdev_err(can->can.dev, "Tx FIFO overflow\n");
+ 
+-	if (irq & KVASER_PCIEFD_KCAN_IRQ_TFD) {
+-		u8 count = ioread32(can->reg_base +
+-				    KVASER_PCIEFD_KCAN_TX_NPACKETS_REG) & 0xff;
+-
+-		if (count == 0)
+-			iowrite32(KVASER_PCIEFD_KCAN_CTRL_EFLUSH,
+-				  can->reg_base + KVASER_PCIEFD_KCAN_CTRL_REG);
+-	}
+-
+ 	if (irq & KVASER_PCIEFD_KCAN_IRQ_BPP)
+ 		netdev_err(can->can.dev,
+ 			   "Fail to change bittiming, when not in reset mode\n");
+@@ -1824,6 +1829,11 @@ static int kvaser_pciefd_probe(struct pci_dev *pdev,
+ 	if (err)
+ 		goto err_teardown_can_ctrls;
+ 
++	err = request_irq(pcie->pci->irq, kvaser_pciefd_irq_handler,
++			  IRQF_SHARED, KVASER_PCIEFD_DRV_NAME, pcie);
++	if (err)
++		goto err_teardown_can_ctrls;
++
+ 	iowrite32(KVASER_PCIEFD_SRB_IRQ_DPD0 | KVASER_PCIEFD_SRB_IRQ_DPD1,
+ 		  pcie->reg_base + KVASER_PCIEFD_SRB_IRQ_REG);
+ 
+@@ -1844,11 +1854,6 @@ static int kvaser_pciefd_probe(struct pci_dev *pdev,
+ 	iowrite32(KVASER_PCIEFD_SRB_CMD_RDB1,
+ 		  pcie->reg_base + KVASER_PCIEFD_SRB_CMD_REG);
+ 
+-	err = request_irq(pcie->pci->irq, kvaser_pciefd_irq_handler,
+-			  IRQF_SHARED, KVASER_PCIEFD_DRV_NAME, pcie);
+-	if (err)
+-		goto err_teardown_can_ctrls;
+-
+ 	err = kvaser_pciefd_reg_candev(pcie);
+ 	if (err)
+ 		goto err_free_irq;
+@@ -1856,6 +1861,8 @@ static int kvaser_pciefd_probe(struct pci_dev *pdev,
+ 	return 0;
+ 
+ err_free_irq:
++	/* Disable PCI interrupts */
++	iowrite32(0, pcie->reg_base + KVASER_PCIEFD_IEN_REG);
+ 	free_irq(pcie->pci->irq, pcie);
+ 
+ err_teardown_can_ctrls:
+diff --git a/drivers/net/ethernet/3com/3c589_cs.c b/drivers/net/ethernet/3com/3c589_cs.c
+index 09816e84314d0..0197ef6f15826 100644
+--- a/drivers/net/ethernet/3com/3c589_cs.c
++++ b/drivers/net/ethernet/3com/3c589_cs.c
+@@ -195,6 +195,7 @@ static int tc589_probe(struct pcmcia_device *link)
+ {
+ 	struct el3_private *lp;
+ 	struct net_device *dev;
++	int ret;
+ 
+ 	dev_dbg(&link->dev, "3c589_attach()\n");
+ 
+@@ -218,7 +219,15 @@ static int tc589_probe(struct pcmcia_device *link)
+ 
+ 	dev->ethtool_ops = &netdev_ethtool_ops;
+ 
+-	return tc589_config(link);
++	ret = tc589_config(link);
++	if (ret)
++		goto err_free_netdev;
++
++	return 0;
++
++err_free_netdev:
++	free_netdev(dev);
++	return ret;
+ }
+ 
+ static void tc589_detach(struct pcmcia_device *link)
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 7667cbb5adfd6..145488449f133 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -3397,7 +3397,7 @@ err_clk_disable:
+ 	return ret;
+ }
+ 
+-static void bcmgenet_netif_stop(struct net_device *dev)
++static void bcmgenet_netif_stop(struct net_device *dev, bool stop_phy)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 
+@@ -3412,7 +3412,8 @@ static void bcmgenet_netif_stop(struct net_device *dev)
+ 	/* Disable MAC transmit. TX DMA disabled must be done before this */
+ 	umac_enable_set(priv, CMD_TX_EN, false);
+ 
+-	phy_stop(dev->phydev);
++	if (stop_phy)
++		phy_stop(dev->phydev);
+ 	bcmgenet_disable_rx_napi(priv);
+ 	bcmgenet_intr_disable(priv);
+ 
+@@ -3438,7 +3439,7 @@ static int bcmgenet_close(struct net_device *dev)
+ 
+ 	netif_dbg(priv, ifdown, dev, "bcmgenet_close\n");
+ 
+-	bcmgenet_netif_stop(dev);
++	bcmgenet_netif_stop(dev, false);
+ 
+ 	/* Really kill the PHY state machine and disconnect from it */
+ 	phy_disconnect(dev->phydev);
+@@ -4240,7 +4241,7 @@ static int bcmgenet_suspend(struct device *d)
+ 
+ 	netif_device_detach(dev);
+ 
+-	bcmgenet_netif_stop(dev);
++	bcmgenet_netif_stop(dev, true);
+ 
+ 	if (!device_may_wakeup(d))
+ 		phy_suspend(dev->phydev);
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 686bb873125cc..e18b3b72fc0df 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3850,9 +3850,11 @@ fec_drv_remove(struct platform_device *pdev)
+ 	struct device_node *np = pdev->dev.of_node;
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(&pdev->dev);
++	ret = pm_runtime_get_sync(&pdev->dev);
+ 	if (ret < 0)
+-		return ret;
++		dev_err(&pdev->dev,
++			"Failed to resume device in remove callback (%pe)\n",
++			ERR_PTR(ret));
+ 
+ 	cancel_work_sync(&fep->tx_timeout_work);
+ 	fec_ptp_stop(pdev);
+@@ -3865,8 +3867,13 @@ fec_drv_remove(struct platform_device *pdev)
+ 		of_phy_deregister_fixed_link(np);
+ 	of_node_put(fep->phy_node);
+ 
+-	clk_disable_unprepare(fep->clk_ahb);
+-	clk_disable_unprepare(fep->clk_ipg);
++	/* After pm_runtime_get_sync() failed, the clks are still off, so skip
++	 * disabling them again.
++	 */
++	if (ret >= 0) {
++		clk_disable_unprepare(fep->clk_ahb);
++		clk_disable_unprepare(fep->clk_ipg);
++	}
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 2070e26a3a358..1ec1709446bab 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -7023,12 +7023,15 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
+ 	/* If it is not PF reset or FLR, the firmware will disable the MAC,
+ 	 * so it only need to stop phy here.
+ 	 */
+-	if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) &&
+-	    hdev->reset_type != HNAE3_FUNC_RESET &&
+-	    hdev->reset_type != HNAE3_FLR_RESET) {
+-		hclge_mac_stop_phy(hdev);
+-		hclge_update_link_status(hdev);
+-		return;
++	if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state)) {
++		hclge_pfc_pause_en_cfg(hdev, HCLGE_PFC_TX_RX_DISABLE,
++				       HCLGE_PFC_DISABLE);
++		if (hdev->reset_type != HNAE3_FUNC_RESET &&
++		    hdev->reset_type != HNAE3_FLR_RESET) {
++			hclge_mac_stop_phy(hdev);
++			hclge_update_link_status(hdev);
++			return;
++		}
+ 	}
+ 
+ 	for (i = 0; i < handle->kinfo.num_tqps; i++)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 9168e39b63641..b3ceaaaeacaeb 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -169,8 +169,8 @@ int hclge_mac_pause_en_cfg(struct hclge_dev *hdev, bool tx, bool rx)
+ 	return hclge_cmd_send(&hdev->hw, &desc, 1);
+ }
+ 
+-static int hclge_pfc_pause_en_cfg(struct hclge_dev *hdev, u8 tx_rx_bitmap,
+-				  u8 pfc_bitmap)
++int hclge_pfc_pause_en_cfg(struct hclge_dev *hdev, u8 tx_rx_bitmap,
++			   u8 pfc_bitmap)
+ {
+ 	struct hclge_desc desc;
+ 	struct hclge_pfc_en_cmd *pfc = (struct hclge_pfc_en_cmd *)desc.data;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
+index bb2a2d8e92591..42932c879b360 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
+@@ -117,6 +117,9 @@ struct hclge_bp_to_qs_map_cmd {
+ 	u32 rsvd1;
+ };
+ 
++#define HCLGE_PFC_DISABLE	0
++#define HCLGE_PFC_TX_RX_DISABLE	0
++
+ struct hclge_pfc_en_cmd {
+ 	u8 tx_rx_en_bitmap;
+ 	u8 pri_en_bitmap;
+@@ -164,6 +167,8 @@ void hclge_tm_schd_info_update(struct hclge_dev *hdev, u8 num_tc);
+ void hclge_tm_pfc_info_update(struct hclge_dev *hdev);
+ int hclge_tm_dwrr_cfg(struct hclge_dev *hdev);
+ int hclge_tm_init_hw(struct hclge_dev *hdev, bool init);
++int hclge_pfc_pause_en_cfg(struct hclge_dev *hdev, u8 tx_rx_bitmap,
++			   u8 pfc_bitmap);
+ int hclge_mac_pause_en_cfg(struct hclge_dev *hdev, bool tx, bool rx);
+ int hclge_pause_addr_cfg(struct hclge_dev *hdev, const u8 *mac_addr);
+ int hclge_pfc_rx_stats_get(struct hclge_dev *hdev, u64 *stats);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index f7f3e4bbc4770..7d05915c35e38 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1772,7 +1772,10 @@ static int hclgevf_reset_wait(struct hclgevf_dev *hdev)
+ 	 * might happen in case reset assertion was made by PF. Yes, this also
+ 	 * means we might end up waiting bit more even for VF reset.
+ 	 */
+-	msleep(5000);
++	if (hdev->reset_type == HNAE3_VF_FULL_RESET)
++		msleep(5000);
++	else
++		msleep(500);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/igb/e1000_mac.c b/drivers/net/ethernet/intel/igb/e1000_mac.c
+index fd8eb2f9ab9dc..57e813405b311 100644
+--- a/drivers/net/ethernet/intel/igb/e1000_mac.c
++++ b/drivers/net/ethernet/intel/igb/e1000_mac.c
+@@ -426,7 +426,7 @@ void igb_mta_set(struct e1000_hw *hw, u32 hash_value)
+ static u32 igb_hash_mc_addr(struct e1000_hw *hw, u8 *mc_addr)
+ {
+ 	u32 hash_value, hash_mask;
+-	u8 bit_shift = 0;
++	u8 bit_shift = 1;
+ 
+ 	/* Register count multiplied by bits per register */
+ 	hash_mask = (hw->mac.mta_reg_count * 32) - 1;
+@@ -434,7 +434,7 @@ static u32 igb_hash_mc_addr(struct e1000_hw *hw, u8 *mc_addr)
+ 	/* For a mc_filter_type of 0, bit_shift is the number of left-shifts
+ 	 * where 0xFF would still fall within the hash mask.
+ 	 */
+-	while (hash_mask >> bit_shift != 0xFF)
++	while (hash_mask >> bit_shift != 0xFF && bit_shift < 4)
+ 		bit_shift++;
+ 
+ 	/* The portion of the address that is used for the hash table
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+index d5d7a2f374935..a0a6dadbcc909 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+@@ -526,9 +526,7 @@ static void otx2_sqe_add_ext(struct otx2_nic *pfvf, struct otx2_snd_queue *sq,
+ 				htons(ext->lso_sb - skb_network_offset(skb));
+ 		} else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) {
+ 			ext->lso_format = pfvf->hw.lso_tsov6_idx;
+-
+-			ipv6_hdr(skb)->payload_len =
+-				htons(ext->lso_sb - skb_network_offset(skb));
++			ipv6_hdr(skb)->payload_len = htons(tcp_hdrlen(skb));
+ 		} else if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
+ 			__be16 l3_proto = vlan_get_protocol(skb);
+ 			struct udphdr *udph = udp_hdr(skb);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+index d5868670f8a58..53ac2383327e5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+@@ -137,20 +137,22 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
+ 	for (i = 0; i < c->num_tc; i++)
+ 		busy |= mlx5e_poll_tx_cq(&c->sq[i].cq, budget);
+ 
++	/* budget=0 means we may be in IRQ context, do as little as possible */
++	if (unlikely(!budget))
++		goto out;
++
+ 	busy |= mlx5e_poll_xdpsq_cq(&c->xdpsq.cq);
+ 
+ 	if (c->xdp)
+ 		busy |= mlx5e_poll_xdpsq_cq(&c->rq_xdpsq.cq);
+ 
+-	if (likely(budget)) { /* budget=0 means: don't poll rx rings */
+-		if (xsk_open)
+-			work_done = mlx5e_poll_rx_cq(&xskrq->cq, budget);
++	if (xsk_open)
++		work_done = mlx5e_poll_rx_cq(&xskrq->cq, budget);
+ 
+-		if (likely(budget - work_done))
+-			work_done += mlx5e_poll_rx_cq(&rq->cq, budget - work_done);
++	if (likely(budget - work_done))
++		work_done += mlx5e_poll_rx_cq(&rq->cq, budget - work_done);
+ 
+-		busy |= work_done == budget;
+-	}
++	busy |= work_done == budget;
+ 
+ 	mlx5e_poll_ico_cq(&c->icosq.cq);
+ 	if (mlx5e_poll_ico_cq(&c->async_icosq.cq))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
+index bced2efe9bef4..abd066e952286 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
+@@ -110,7 +110,8 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev)
+ 	priv->devs[idx] = dev;
+ 	devcom = mlx5_devcom_alloc(priv, idx);
+ 	if (!devcom) {
+-		kfree(priv);
++		if (new_priv)
++			kfree(priv);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 112eaef186e19..da4ca0f67e9ce 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -887,7 +887,7 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
+ 
+ 	dev->dm = mlx5_dm_create(dev);
+ 	if (IS_ERR(dev->dm))
+-		mlx5_core_warn(dev, "Failed to init device memory%d\n", err);
++		mlx5_core_warn(dev, "Failed to init device memory %ld\n", PTR_ERR(dev->dm));
+ 
+ 	dev->tracer = mlx5_fw_tracer_create(dev);
+ 	dev->hv_vhca = mlx5_hv_vhca_create(dev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
+index b01aaec75622f..3b5deb0fe7eb2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
+@@ -112,7 +112,8 @@ static u32 dr_ste_crc32_calc(const void *input_data, size_t length)
+ {
+ 	u32 crc = crc32(0, input_data, length);
+ 
+-	return (__force u32)htonl(crc);
++	return (__force u32)((crc >> 24) & 0xff) | ((crc << 8) & 0xff0000) |
++			    ((crc >> 8) & 0xff00) | ((crc << 24) & 0xff000000);
+ }
+ 
+ u32 mlx5dr_ste_calc_hash_index(u8 *hw_ste_p, struct mlx5dr_ste_htbl *htbl)
+diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c
+index 2fc10a36afa4a..e14dd1051e58c 100644
+--- a/drivers/net/ethernet/nvidia/forcedeth.c
++++ b/drivers/net/ethernet/nvidia/forcedeth.c
+@@ -6138,6 +6138,7 @@ static int nv_probe(struct pci_dev *pci_dev, const struct pci_device_id *id)
+ 	return 0;
+ 
+ out_error:
++	nv_mgmt_release_sema(dev);
+ 	if (phystate_orig)
+ 		writel(phystate|NVREG_ADAPTCTL_RUNNING, base + NvRegAdapterControl);
+ out_freering:
+diff --git a/drivers/net/ethernet/pasemi/pasemi_mac.c b/drivers/net/ethernet/pasemi/pasemi_mac.c
+index 040a15a828b41..c1d7bd168f1d1 100644
+--- a/drivers/net/ethernet/pasemi/pasemi_mac.c
++++ b/drivers/net/ethernet/pasemi/pasemi_mac.c
+@@ -1423,7 +1423,7 @@ static void pasemi_mac_queue_csdesc(const struct sk_buff *skb,
+ 	write_dma_reg(PAS_DMA_TXCHAN_INCR(txring->chan.chno), 2);
+ }
+ 
+-static int pasemi_mac_start_tx(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t pasemi_mac_start_tx(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct pasemi_mac * const mac = netdev_priv(dev);
+ 	struct pasemi_mac_txring * const txring = tx_ring(mac);
+diff --git a/drivers/net/ethernet/sun/cassini.c b/drivers/net/ethernet/sun/cassini.c
+index 9ff894ba8d3ea..d245f6e21e8ca 100644
+--- a/drivers/net/ethernet/sun/cassini.c
++++ b/drivers/net/ethernet/sun/cassini.c
+@@ -5122,6 +5122,8 @@ err_out_iounmap:
+ 		cas_shutdown(cp);
+ 	mutex_unlock(&cp->pm_mutex);
+ 
++	vfree(cp->fw_data);
++
+ 	pci_iounmap(pdev, cp->regs);
+ 
+ 
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index a33149ee0ddcf..0a5b5ff597c6f 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -437,6 +437,9 @@ static int ipvlan_process_v4_outbound(struct sk_buff *skb)
+ 		goto err;
+ 	}
+ 	skb_dst_set(skb, &rt->dst);
++
++	memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
++
+ 	err = ip_local_out(net, skb->sk, skb);
+ 	if (unlikely(net_xmit_eval(err)))
+ 		dev->stats.tx_errors++;
+@@ -475,6 +478,9 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
+ 		goto err;
+ 	}
+ 	skb_dst_set(skb, dst);
++
++	memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
++
+ 	err = ip6_local_out(net, skb->sk, skb);
+ 	if (unlikely(net_xmit_eval(err)))
+ 		dev->stats.tx_errors++;
+diff --git a/drivers/net/mdio/mdio-mvusb.c b/drivers/net/mdio/mdio-mvusb.c
+index d5eabddfdf51b..11e048136ac23 100644
+--- a/drivers/net/mdio/mdio-mvusb.c
++++ b/drivers/net/mdio/mdio-mvusb.c
+@@ -73,6 +73,7 @@ static int mvusb_mdio_probe(struct usb_interface *interface,
+ 	struct device *dev = &interface->dev;
+ 	struct mvusb_mdio *mvusb;
+ 	struct mii_bus *mdio;
++	int ret;
+ 
+ 	mdio = devm_mdiobus_alloc_size(dev, sizeof(*mvusb));
+ 	if (!mdio)
+@@ -93,7 +94,15 @@ static int mvusb_mdio_probe(struct usb_interface *interface,
+ 	mdio->write = mvusb_mdio_write;
+ 
+ 	usb_set_intfdata(interface, mvusb);
+-	return of_mdiobus_register(mdio, dev->of_node);
++	ret = of_mdiobus_register(mdio, dev->of_node);
++	if (ret)
++		goto put_dev;
++
++	return 0;
++
++put_dev:
++	usb_put_dev(mvusb->udev);
++	return ret;
+ }
+ 
+ static void mvusb_mdio_disconnect(struct usb_interface *interface)
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index c8031e297faf4..5fabcd15ef77a 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -41,6 +41,7 @@
+ #define DP83867_STRAP_STS1	0x006E
+ #define DP83867_STRAP_STS2	0x006f
+ #define DP83867_RGMIIDCTL	0x0086
++#define DP83867_DSP_FFE_CFG	0x012c
+ #define DP83867_RXFCFG		0x0134
+ #define DP83867_RXFPMD1	0x0136
+ #define DP83867_RXFPMD2	0x0137
+@@ -807,8 +808,27 @@ static int dp83867_phy_reset(struct phy_device *phydev)
+ 
+ 	usleep_range(10, 20);
+ 
+-	return phy_modify(phydev, MII_DP83867_PHYCTRL,
++	err = phy_modify(phydev, MII_DP83867_PHYCTRL,
+ 			 DP83867_PHYCR_FORCE_LINK_GOOD, 0);
++	if (err < 0)
++		return err;
++
++	/* Configure the DSP Feedforward Equalizer Configuration register to
++	 * improve short cable (< 1 meter) performance. This will not affect
++	 * long cable performance.
++	 */
++	err = phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_DSP_FFE_CFG,
++			    0x0e81);
++	if (err < 0)
++		return err;
++
++	err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESTART);
++	if (err < 0)
++		return err;
++
++	usleep_range(10, 20);
++
++	return 0;
+ }
+ 
+ static void dp83867_link_change_notify(struct phy_device *phydev)
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index e14fa72791b0e..ffac713afa551 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -2563,6 +2563,7 @@ static struct phy_driver vsc85xx_driver[] = {
+ module_phy_driver(vsc85xx_driver);
+ 
+ static struct mdio_device_id __maybe_unused vsc85xx_tbl[] = {
++	{ PHY_ID_VSC8502, 0xfffffff0, },
+ 	{ PHY_ID_VSC8504, 0xfffffff0, },
+ 	{ PHY_ID_VSC8514, 0xfffffff0, },
+ 	{ PHY_ID_VSC8530, 0xfffffff0, },
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index d9018d9fe3106..f9b3eb2d8d8b0 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -713,9 +713,8 @@ static ssize_t tap_get_user(struct tap_queue *q, void *msg_control,
+ 	skb_probe_transport_header(skb);
+ 
+ 	/* Move network header to the right position for VLAN tagged packets */
+-	if ((skb->protocol == htons(ETH_P_8021Q) ||
+-	     skb->protocol == htons(ETH_P_8021AD)) &&
+-	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0)
++	if (eth_type_vlan(skb->protocol) &&
++	    vlan_get_protocol_and_depth(skb, skb->protocol, &depth) != 0)
+ 		skb_set_network_header(skb, depth);
+ 
+ 	rcu_read_lock();
+@@ -1165,9 +1164,8 @@ static int tap_get_user_xdp(struct tap_queue *q, struct xdp_buff *xdp)
+ 	}
+ 
+ 	/* Move network header to the right position for VLAN tagged packets */
+-	if ((skb->protocol == htons(ETH_P_8021Q) ||
+-	     skb->protocol == htons(ETH_P_8021AD)) &&
+-	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0)
++	if (eth_type_vlan(skb->protocol) &&
++	    vlan_get_protocol_and_depth(skb, skb->protocol, &depth) != 0)
+ 		skb_set_network_header(skb, depth);
+ 
+ 	rcu_read_lock();
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 7117d559a32e4..8a1619695421b 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -1624,6 +1624,7 @@ static int team_init(struct net_device *dev)
+ 
+ 	team->dev = dev;
+ 	team_set_no_mode(team);
++	team->notifier_ctx = false;
+ 
+ 	team->pcpu_stats = netdev_alloc_pcpu_stats(struct team_pcpu_stats);
+ 	if (!team->pcpu_stats)
+@@ -3016,7 +3017,11 @@ static int team_device_event(struct notifier_block *unused,
+ 		team_del_slave(port->team->dev, dev);
+ 		break;
+ 	case NETDEV_FEAT_CHANGE:
+-		team_compute_features(port->team);
++		if (!port->team->notifier_ctx) {
++			port->team->notifier_ctx = true;
++			team_compute_features(port->team);
++			port->team->notifier_ctx = false;
++		}
+ 		break;
+ 	case NETDEV_PRECHANGEMTU:
+ 		/* Forbid to change mtu of underlaying device */
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index ab91fa5b0194d..57b1e6dc62f07 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -180,9 +180,12 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
+ 	else
+ 		min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
+ 
+-	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
+-	if (max == 0)
++	if (le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize) == 0)
+ 		max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */
++	else
++		max = clamp_t(u32, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize),
++			      USB_CDC_NCM_NTB_MIN_OUT_SIZE,
++			      CDC_NCM_NTB_MAX_SIZE_TX);
+ 
+ 	/* some devices set dwNtbOutMaxSize too low for the above default */
+ 	min = min(min, max);
+@@ -1230,6 +1233,9 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
+ 			 * further.
+ 			 */
+ 			if (skb_out == NULL) {
++				/* If even the smallest allocation fails, abort. */
++				if (ctx->tx_curr_size == USB_CDC_NCM_NTB_MIN_OUT_SIZE)
++					goto alloc_failed;
+ 				ctx->tx_low_mem_max_cnt = min(ctx->tx_low_mem_max_cnt + 1,
+ 							      (unsigned)CDC_NCM_LOW_MEM_MAX_CNT);
+ 				ctx->tx_low_mem_val = ctx->tx_low_mem_max_cnt;
+@@ -1248,13 +1254,8 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
+ 			skb_out = alloc_skb(ctx->tx_curr_size, GFP_ATOMIC);
+ 
+ 			/* No allocation possible so we will abort */
+-			if (skb_out == NULL) {
+-				if (skb != NULL) {
+-					dev_kfree_skb_any(skb);
+-					dev->net->stats.tx_dropped++;
+-				}
+-				goto exit_no_skb;
+-			}
++			if (!skb_out)
++				goto alloc_failed;
+ 			ctx->tx_low_mem_val--;
+ 		}
+ 		if (ctx->is_ndp16) {
+@@ -1447,6 +1448,11 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
+ 
+ 	return skb_out;
+ 
++alloc_failed:
++	if (skb) {
++		dev_kfree_skb_any(skb);
++		dev->net->stats.tx_dropped++;
++	}
+ exit_no_skb:
+ 	/* Start timer, if there is a remaining non-empty skb */
+ 	if (ctx->tx_curr_skb != NULL && n > 0)
+diff --git a/drivers/net/wireless/ath/ath.h b/drivers/net/wireless/ath/ath.h
+index f083fb9038c36..f02a308a9ffc5 100644
+--- a/drivers/net/wireless/ath/ath.h
++++ b/drivers/net/wireless/ath/ath.h
+@@ -96,11 +96,13 @@ struct ath_keyval {
+ 	u8 kv_type;
+ 	u8 kv_pad;
+ 	u16 kv_len;
+-	u8 kv_val[16]; /* TK */
+-	u8 kv_mic[8]; /* Michael MIC key */
+-	u8 kv_txmic[8]; /* Michael MIC TX key (used only if the hardware
+-			 * supports both MIC keys in the same key cache entry;
+-			 * in that case, kv_mic is the RX key) */
++	struct_group(kv_values,
++		u8 kv_val[16]; /* TK */
++		u8 kv_mic[8]; /* Michael MIC key */
++		u8 kv_txmic[8]; /* Michael MIC TX key (used only if the hardware
++				 * supports both MIC keys in the same key cache entry;
++				 * in that case, kv_mic is the RX key) */
++	);
+ };
+ 
+ enum ath_cipher {
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 578fdc446bc03..583bcf148403b 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -324,10 +324,10 @@ int ath11k_dp_rxbufs_replenish(struct ath11k_base *ab, int mac_id,
+ 			goto fail_free_skb;
+ 
+ 		spin_lock_bh(&rx_ring->idr_lock);
+-		buf_id = idr_alloc(&rx_ring->bufs_idr, skb, 0,
+-				   rx_ring->bufs_max * 3, GFP_ATOMIC);
++		buf_id = idr_alloc(&rx_ring->bufs_idr, skb, 1,
++				   (rx_ring->bufs_max * 3) + 1, GFP_ATOMIC);
+ 		spin_unlock_bh(&rx_ring->idr_lock);
+-		if (buf_id < 0)
++		if (buf_id <= 0)
+ 			goto fail_dma_unmap;
+ 
+ 		desc = ath11k_hal_srng_src_get_next_entry(ab, srng);
+@@ -2564,6 +2564,9 @@ try_again:
+ 				   cookie);
+ 		mac_id = FIELD_GET(DP_RXDMA_BUF_COOKIE_PDEV_ID, cookie);
+ 
++		if (unlikely(buf_id == 0))
++			continue;
++
+ 		ar = ab->pdevs[mac_id].ar;
+ 		rx_ring = &ar->dp.rx_refill_buf_ring;
+ 		spin_lock_bh(&rx_ring->idr_lock);
+diff --git a/drivers/net/wireless/ath/key.c b/drivers/net/wireless/ath/key.c
+index 61b59a804e308..b7b61d4f02bae 100644
+--- a/drivers/net/wireless/ath/key.c
++++ b/drivers/net/wireless/ath/key.c
+@@ -503,7 +503,7 @@ int ath_key_config(struct ath_common *common,
+ 
+ 	hk.kv_len = key->keylen;
+ 	if (key->keylen)
+-		memcpy(hk.kv_val, key->key, key->keylen);
++		memcpy(&hk.kv_values, key->key, key->keylen);
+ 
+ 	if (!(key->flags & IEEE80211_KEY_FLAG_PAIRWISE)) {
+ 		switch (vif->type) {
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index df59706197124..baf5f0afe802e 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -1350,13 +1350,14 @@ static int brcmf_set_pmk(struct brcmf_if *ifp, const u8 *pmk_data, u16 pmk_len)
+ {
+ 	struct brcmf_pub *drvr = ifp->drvr;
+ 	struct brcmf_wsec_pmk_le pmk;
+-	int i, err;
++	int err;
++
++	memset(&pmk, 0, sizeof(pmk));
+ 
+-	/* convert to firmware key format */
+-	pmk.key_len = cpu_to_le16(pmk_len << 1);
+-	pmk.flags = cpu_to_le16(BRCMF_WSEC_PASSPHRASE);
+-	for (i = 0; i < pmk_len; i++)
+-		snprintf(&pmk.key[2 * i], 3, "%02x", pmk_data[i]);
++	/* pass pmk directly */
++	pmk.key_len = cpu_to_le16(pmk_len);
++	pmk.flags = cpu_to_le16(0);
++	memcpy(pmk.key, pmk_data, pmk_len);
+ 
+ 	/* store psk in firmware */
+ 	err = brcmf_fil_cmd_data_set(ifp, BRCMF_C_SET_WSEC_PMK,
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/sta.c b/drivers/net/wireless/intel/iwlwifi/dvm/sta.c
+index e622948661fa8..b307f0e527779 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/sta.c
+@@ -1086,6 +1086,7 @@ static int iwlagn_send_sta_key(struct iwl_priv *priv,
+ {
+ 	__le16 key_flags;
+ 	struct iwl_addsta_cmd sta_cmd;
++	size_t to_copy;
+ 	int i;
+ 
+ 	spin_lock_bh(&priv->sta_lock);
+@@ -1105,7 +1106,9 @@ static int iwlagn_send_sta_key(struct iwl_priv *priv,
+ 		sta_cmd.key.tkip_rx_tsc_byte2 = tkip_iv32;
+ 		for (i = 0; i < 5; i++)
+ 			sta_cmd.key.tkip_rx_ttak[i] = cpu_to_le16(tkip_p1k[i]);
+-		memcpy(sta_cmd.key.key, keyconf->key, keyconf->keylen);
++		/* keyconf may contain MIC rx/tx keys which iwl does not use */
++		to_copy = min_t(size_t, sizeof(sta_cmd.key.key), keyconf->keylen);
++		memcpy(sta_cmd.key.key, keyconf->key, to_copy);
+ 		break;
+ 	case WLAN_CIPHER_SUITE_WEP104:
+ 		key_flags |= STA_KEY_FLG_KEY_SIZE_MSK;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
+index 60296a754af26..34be3f75c2e96 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
+@@ -502,6 +502,11 @@ iwl_mvm_update_mcc(struct iwl_mvm *mvm, const char *alpha2,
+ 		struct iwl_mcc_update_resp *mcc_resp = (void *)pkt->data;
+ 
+ 		n_channels =  __le32_to_cpu(mcc_resp->n_channels);
++		if (iwl_rx_packet_payload_len(pkt) !=
++		    struct_size(mcc_resp, channels, n_channels)) {
++			resp_cp = ERR_PTR(-EINVAL);
++			goto exit;
++		}
+ 		resp_len = sizeof(struct iwl_mcc_update_resp) +
+ 			   n_channels * sizeof(__le32);
+ 		resp_cp = kmemdup(mcc_resp, resp_len, GFP_KERNEL);
+@@ -513,6 +518,11 @@ iwl_mvm_update_mcc(struct iwl_mvm *mvm, const char *alpha2,
+ 		struct iwl_mcc_update_resp_v3 *mcc_resp_v3 = (void *)pkt->data;
+ 
+ 		n_channels =  __le32_to_cpu(mcc_resp_v3->n_channels);
++		if (iwl_rx_packet_payload_len(pkt) !=
++		    struct_size(mcc_resp_v3, channels, n_channels)) {
++			resp_cp = ERR_PTR(-EINVAL);
++			goto exit;
++		}
+ 		resp_len = sizeof(struct iwl_mcc_update_resp) +
+ 			   n_channels * sizeof(__le32);
+ 		resp_cp = kzalloc(resp_len, GFP_KERNEL);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 4e43efd5d1ea1..dc0a507213ca6 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -1214,6 +1214,9 @@ static void iwl_pci_remove(struct pci_dev *pdev)
+ {
+ 	struct iwl_trans *trans = pci_get_drvdata(pdev);
+ 
++	if (!trans)
++		return;
++
+ 	iwl_drv_stop(trans->drv);
+ 
+ 	iwl_trans_pcie_free(trans);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 7f8b7f7697cfd..fac7cc75bc31e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -2835,7 +2835,7 @@ static bool iwl_write_to_user_buf(char __user *user_buf, ssize_t count,
+ 				  void *buf, ssize_t *size,
+ 				  ssize_t *bytes_copied)
+ {
+-	int buf_size_left = count - *bytes_copied;
++	ssize_t buf_size_left = count - *bytes_copied;
+ 
+ 	buf_size_left = buf_size_left - (buf_size_left % sizeof(u32));
+ 	if (*size > buf_size_left)
+diff --git a/drivers/phy/st/phy-miphy28lp.c b/drivers/phy/st/phy-miphy28lp.c
+index 068160a34f5cc..e30305b77f0d1 100644
+--- a/drivers/phy/st/phy-miphy28lp.c
++++ b/drivers/phy/st/phy-miphy28lp.c
+@@ -9,6 +9,7 @@
+ 
+ #include <linux/platform_device.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+@@ -484,19 +485,11 @@ static inline void miphy28lp_pcie_config_gen(struct miphy28lp_phy *miphy_phy)
+ 
+ static inline int miphy28lp_wait_compensation(struct miphy28lp_phy *miphy_phy)
+ {
+-	unsigned long finish = jiffies + 5 * HZ;
+ 	u8 val;
+ 
+ 	/* Waiting for Compensation to complete */
+-	do {
+-		val = readb_relaxed(miphy_phy->base + MIPHY_COMP_FSM_6);
+-
+-		if (time_after_eq(jiffies, finish))
+-			return -EBUSY;
+-		cpu_relax();
+-	} while (!(val & COMP_DONE));
+-
+-	return 0;
++	return readb_relaxed_poll_timeout(miphy_phy->base + MIPHY_COMP_FSM_6,
++					  val, val & COMP_DONE, 1, 5 * USEC_PER_SEC);
+ }
+ 
+ 
+@@ -805,7 +798,6 @@ static inline void miphy28lp_configure_usb3(struct miphy28lp_phy *miphy_phy)
+ 
+ static inline int miphy_is_ready(struct miphy28lp_phy *miphy_phy)
+ {
+-	unsigned long finish = jiffies + 5 * HZ;
+ 	u8 mask = HFC_PLL | HFC_RDY;
+ 	u8 val;
+ 
+@@ -816,21 +808,14 @@ static inline int miphy_is_ready(struct miphy28lp_phy *miphy_phy)
+ 	if (miphy_phy->type == PHY_TYPE_SATA)
+ 		mask |= PHY_RDY;
+ 
+-	do {
+-		val = readb_relaxed(miphy_phy->base + MIPHY_STATUS_1);
+-		if ((val & mask) != mask)
+-			cpu_relax();
+-		else
+-			return 0;
+-	} while (!time_after_eq(jiffies, finish));
+-
+-	return -EBUSY;
++	return readb_relaxed_poll_timeout(miphy_phy->base + MIPHY_STATUS_1,
++					  val, (val & mask) == mask, 1,
++					  5 * USEC_PER_SEC);
+ }
+ 
+ static int miphy_osc_is_ready(struct miphy28lp_phy *miphy_phy)
+ {
+ 	struct miphy28lp_dev *miphy_dev = miphy_phy->phydev;
+-	unsigned long finish = jiffies + 5 * HZ;
+ 	u32 val;
+ 
+ 	if (!miphy_phy->osc_rdy)
+@@ -839,17 +824,10 @@ static int miphy_osc_is_ready(struct miphy28lp_phy *miphy_phy)
+ 	if (!miphy_phy->syscfg_reg[SYSCFG_STATUS])
+ 		return -EINVAL;
+ 
+-	do {
+-		regmap_read(miphy_dev->regmap,
+-				miphy_phy->syscfg_reg[SYSCFG_STATUS], &val);
+-
+-		if ((val & MIPHY_OSC_RDY) != MIPHY_OSC_RDY)
+-			cpu_relax();
+-		else
+-			return 0;
+-	} while (!time_after_eq(jiffies, finish));
+-
+-	return -EBUSY;
++	return regmap_read_poll_timeout(miphy_dev->regmap,
++					miphy_phy->syscfg_reg[SYSCFG_STATUS],
++					val, val & MIPHY_OSC_RDY, 1,
++					5 * USEC_PER_SEC);
+ }
+ 
+ static int miphy28lp_get_resource_byname(struct device_node *child,
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 72a2bcf3ab32b..c08dd4e6d35ad 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -1681,7 +1681,7 @@ static int bq27xxx_battery_read_health(struct bq27xxx_device_info *di)
+ 	return POWER_SUPPLY_HEALTH_GOOD;
+ }
+ 
+-void bq27xxx_battery_update(struct bq27xxx_device_info *di)
++static void bq27xxx_battery_update_unlocked(struct bq27xxx_device_info *di)
+ {
+ 	struct bq27xxx_reg_cache cache = {0, };
+ 	bool has_ci_flag = di->opts & BQ27XXX_O_HAS_CI;
+@@ -1732,6 +1732,16 @@ void bq27xxx_battery_update(struct bq27xxx_device_info *di)
+ 		di->cache = cache;
+ 
+ 	di->last_update = jiffies;
++
++	if (!di->removed && poll_interval > 0)
++		mod_delayed_work(system_wq, &di->work, poll_interval * HZ);
++}
++
++void bq27xxx_battery_update(struct bq27xxx_device_info *di)
++{
++	mutex_lock(&di->lock);
++	bq27xxx_battery_update_unlocked(di);
++	mutex_unlock(&di->lock);
+ }
+ EXPORT_SYMBOL_GPL(bq27xxx_battery_update);
+ 
+@@ -1742,9 +1752,6 @@ static void bq27xxx_battery_poll(struct work_struct *work)
+ 				     work.work);
+ 
+ 	bq27xxx_battery_update(di);
+-
+-	if (poll_interval > 0)
+-		schedule_delayed_work(&di->work, poll_interval * HZ);
+ }
+ 
+ /*
+@@ -1919,10 +1926,8 @@ static int bq27xxx_battery_get_property(struct power_supply *psy,
+ 	struct bq27xxx_device_info *di = power_supply_get_drvdata(psy);
+ 
+ 	mutex_lock(&di->lock);
+-	if (time_is_before_jiffies(di->last_update + 5 * HZ)) {
+-		cancel_delayed_work_sync(&di->work);
+-		bq27xxx_battery_poll(&di->work.work);
+-	}
++	if (time_is_before_jiffies(di->last_update + 5 * HZ))
++		bq27xxx_battery_update_unlocked(di);
+ 	mutex_unlock(&di->lock);
+ 
+ 	if (psp != POWER_SUPPLY_PROP_PRESENT && di->cache.flags < 0)
+@@ -2058,22 +2063,18 @@ EXPORT_SYMBOL_GPL(bq27xxx_battery_setup);
+ 
+ void bq27xxx_battery_teardown(struct bq27xxx_device_info *di)
+ {
+-	/*
+-	 * power_supply_unregister call bq27xxx_battery_get_property which
+-	 * call bq27xxx_battery_poll.
+-	 * Make sure that bq27xxx_battery_poll will not call
+-	 * schedule_delayed_work again after unregister (which cause OOPS).
+-	 */
+-	poll_interval = 0;
+-
+-	cancel_delayed_work_sync(&di->work);
+-
+-	power_supply_unregister(di->bat);
+-
+ 	mutex_lock(&bq27xxx_list_lock);
+ 	list_del(&di->list);
+ 	mutex_unlock(&bq27xxx_list_lock);
+ 
++	/* Set removed to avoid bq27xxx_battery_update() re-queuing the work */
++	mutex_lock(&di->lock);
++	di->removed = true;
++	mutex_unlock(&di->lock);
++
++	cancel_delayed_work_sync(&di->work);
++
++	power_supply_unregister(di->bat);
+ 	mutex_destroy(&di->lock);
+ }
+ EXPORT_SYMBOL_GPL(bq27xxx_battery_teardown);
+diff --git a/drivers/power/supply/bq27xxx_battery_i2c.c b/drivers/power/supply/bq27xxx_battery_i2c.c
+index 3012eb13a08cb..0e32efb10ee78 100644
+--- a/drivers/power/supply/bq27xxx_battery_i2c.c
++++ b/drivers/power/supply/bq27xxx_battery_i2c.c
+@@ -179,7 +179,7 @@ static int bq27xxx_battery_i2c_probe(struct i2c_client *client,
+ 	i2c_set_clientdata(client, di);
+ 
+ 	if (client->irq) {
+-		ret = devm_request_threaded_irq(&client->dev, client->irq,
++		ret = request_threaded_irq(client->irq,
+ 				NULL, bq27xxx_battery_irq_handler_thread,
+ 				IRQF_ONESHOT,
+ 				di->name, di);
+@@ -209,6 +209,7 @@ static int bq27xxx_battery_i2c_remove(struct i2c_client *client)
+ {
+ 	struct bq27xxx_device_info *di = i2c_get_clientdata(client);
+ 
++	free_irq(client->irq, di);
+ 	bq27xxx_battery_teardown(di);
+ 
+ 	mutex_lock(&battery_mutex);
+diff --git a/drivers/power/supply/power_supply_leds.c b/drivers/power/supply/power_supply_leds.c
+index d69880cc35931..b7a2778f878de 100644
+--- a/drivers/power/supply/power_supply_leds.c
++++ b/drivers/power/supply/power_supply_leds.c
+@@ -34,8 +34,9 @@ static void power_supply_update_bat_leds(struct power_supply *psy)
+ 		led_trigger_event(psy->charging_full_trig, LED_FULL);
+ 		led_trigger_event(psy->charging_trig, LED_OFF);
+ 		led_trigger_event(psy->full_trig, LED_FULL);
+-		led_trigger_event(psy->charging_blink_full_solid_trig,
+-			LED_FULL);
++		/* Going from blink to LED on requires a LED_OFF event to stop blink */
++		led_trigger_event(psy->charging_blink_full_solid_trig, LED_OFF);
++		led_trigger_event(psy->charging_blink_full_solid_trig, LED_FULL);
+ 		break;
+ 	case POWER_SUPPLY_STATUS_CHARGING:
+ 		led_trigger_event(psy->charging_full_trig, LED_FULL);
+diff --git a/drivers/power/supply/sbs-charger.c b/drivers/power/supply/sbs-charger.c
+index fbfb6a6209617..a30eb42e379a4 100644
+--- a/drivers/power/supply/sbs-charger.c
++++ b/drivers/power/supply/sbs-charger.c
+@@ -25,7 +25,7 @@
+ #define SBS_CHARGER_REG_STATUS			0x13
+ #define SBS_CHARGER_REG_ALARM_WARNING		0x16
+ 
+-#define SBS_CHARGER_STATUS_CHARGE_INHIBITED	BIT(1)
++#define SBS_CHARGER_STATUS_CHARGE_INHIBITED	BIT(0)
+ #define SBS_CHARGER_STATUS_RES_COLD		BIT(9)
+ #define SBS_CHARGER_STATUS_RES_HOT		BIT(10)
+ #define SBS_CHARGER_STATUS_BATTERY_PRESENT	BIT(14)
+diff --git a/drivers/remoteproc/stm32_rproc.c b/drivers/remoteproc/stm32_rproc.c
+index 24760d8ea6374..df784fec124f6 100644
+--- a/drivers/remoteproc/stm32_rproc.c
++++ b/drivers/remoteproc/stm32_rproc.c
+@@ -301,8 +301,16 @@ static void stm32_rproc_mb_vq_work(struct work_struct *work)
+ 	struct stm32_mbox *mb = container_of(work, struct stm32_mbox, vq_work);
+ 	struct rproc *rproc = dev_get_drvdata(mb->client.dev);
+ 
++	mutex_lock(&rproc->lock);
++
++	if (rproc->state != RPROC_RUNNING)
++		goto unlock_mutex;
++
+ 	if (rproc_vq_interrupt(rproc, mb->vq_id) == IRQ_NONE)
+ 		dev_dbg(&rproc->dev, "no message found in vq%d\n", mb->vq_id);
++
++unlock_mutex:
++	mutex_unlock(&rproc->lock);
+ }
+ 
+ static void stm32_rproc_mb_callback(struct mbox_client *cl, void *data)
+diff --git a/drivers/s390/cio/qdio.h b/drivers/s390/cio/qdio.h
+index cd2df4ff8e0ef..919d106141664 100644
+--- a/drivers/s390/cio/qdio.h
++++ b/drivers/s390/cio/qdio.h
+@@ -88,15 +88,15 @@ enum qdio_irq_states {
+ static inline int do_sqbs(u64 token, unsigned char state, int queue,
+ 			  int *start, int *count)
+ {
+-	register unsigned long _ccq asm ("0") = *count;
+-	register unsigned long _token asm ("1") = token;
+ 	unsigned long _queuestart = ((unsigned long)queue << 32) | *start;
++	unsigned long _ccq = *count;
+ 
+ 	asm volatile(
+-		"	.insn	rsy,0xeb000000008A,%1,0,0(%2)"
+-		: "+d" (_ccq), "+d" (_queuestart)
+-		: "d" ((unsigned long)state), "d" (_token)
+-		: "memory", "cc");
++		"	lgr	1,%[token]\n"
++		"	.insn	rsy,0xeb000000008a,%[qs],%[ccq],0(%[state])"
++		: [ccq] "+&d" (_ccq), [qs] "+&d" (_queuestart)
++		: [state] "a" ((unsigned long)state), [token] "d" (token)
++		: "memory", "cc", "1");
+ 	*count = _ccq & 0xff;
+ 	*start = _queuestart & 0xff;
+ 
+@@ -106,16 +106,17 @@ static inline int do_sqbs(u64 token, unsigned char state, int queue,
+ static inline int do_eqbs(u64 token, unsigned char *state, int queue,
+ 			  int *start, int *count, int ack)
+ {
+-	register unsigned long _ccq asm ("0") = *count;
+-	register unsigned long _token asm ("1") = token;
+ 	unsigned long _queuestart = ((unsigned long)queue << 32) | *start;
+ 	unsigned long _state = (unsigned long)ack << 63;
++	unsigned long _ccq = *count;
+ 
+ 	asm volatile(
+-		"	.insn	rrf,0xB99c0000,%1,%2,0,0"
+-		: "+d" (_ccq), "+d" (_queuestart), "+d" (_state)
+-		: "d" (_token)
+-		: "memory", "cc");
++		"	lgr	1,%[token]\n"
++		"	.insn	rrf,0xb99c0000,%[qs],%[state],%[ccq],0"
++		: [ccq] "+&d" (_ccq), [qs] "+&d" (_queuestart),
++		  [state] "+&d" (_state)
++		: [token] "d" (token)
++		: "memory", "cc", "1");
+ 	*count = _ccq & 0xff;
+ 	*start = _queuestart & 0xff;
+ 	*state = _state & 0xff;
+diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
+index 3e29c26f01856..e3c55fc2363ac 100644
+--- a/drivers/s390/cio/qdio_main.c
++++ b/drivers/s390/cio/qdio_main.c
+@@ -31,38 +31,41 @@ MODULE_DESCRIPTION("QDIO base support");
+ MODULE_LICENSE("GPL");
+ 
+ static inline int do_siga_sync(unsigned long schid,
+-			       unsigned int out_mask, unsigned int in_mask,
++			       unsigned long out_mask, unsigned long in_mask,
+ 			       unsigned int fc)
+ {
+-	register unsigned long __fc asm ("0") = fc;
+-	register unsigned long __schid asm ("1") = schid;
+-	register unsigned long out asm ("2") = out_mask;
+-	register unsigned long in asm ("3") = in_mask;
+ 	int cc;
+ 
+ 	asm volatile(
++		"	lgr	0,%[fc]\n"
++		"	lgr	1,%[schid]\n"
++		"	lgr	2,%[out]\n"
++		"	lgr	3,%[in]\n"
+ 		"	siga	0\n"
+-		"	ipm	%0\n"
+-		"	srl	%0,28\n"
+-		: "=d" (cc)
+-		: "d" (__fc), "d" (__schid), "d" (out), "d" (in) : "cc");
++		"	ipm	%[cc]\n"
++		"	srl	%[cc],28\n"
++		: [cc] "=&d" (cc)
++		: [fc] "d" (fc), [schid] "d" (schid),
++		  [out] "d" (out_mask), [in] "d" (in_mask)
++		: "cc", "0", "1", "2", "3");
+ 	return cc;
+ }
+ 
+-static inline int do_siga_input(unsigned long schid, unsigned int mask,
+-				unsigned int fc)
++static inline int do_siga_input(unsigned long schid, unsigned long mask,
++				unsigned long fc)
+ {
+-	register unsigned long __fc asm ("0") = fc;
+-	register unsigned long __schid asm ("1") = schid;
+-	register unsigned long __mask asm ("2") = mask;
+ 	int cc;
+ 
+ 	asm volatile(
++		"	lgr	0,%[fc]\n"
++		"	lgr	1,%[schid]\n"
++		"	lgr	2,%[mask]\n"
+ 		"	siga	0\n"
+-		"	ipm	%0\n"
+-		"	srl	%0,28\n"
+-		: "=d" (cc)
+-		: "d" (__fc), "d" (__schid), "d" (__mask) : "cc");
++		"	ipm	%[cc]\n"
++		"	srl	%[cc],28\n"
++		: [cc] "=&d" (cc)
++		: [fc] "d" (fc), [schid] "d" (schid), [mask] "d" (mask)
++		: "cc", "0", "1", "2");
+ 	return cc;
+ }
+ 
+@@ -78,23 +81,24 @@ static inline int do_siga_input(unsigned long schid, unsigned int mask,
+  * Note: For IQDC unicast queues only the highest priority queue is processed.
+  */
+ static inline int do_siga_output(unsigned long schid, unsigned long mask,
+-				 unsigned int *bb, unsigned int fc,
++				 unsigned int *bb, unsigned long fc,
+ 				 unsigned long aob)
+ {
+-	register unsigned long __fc asm("0") = fc;
+-	register unsigned long __schid asm("1") = schid;
+-	register unsigned long __mask asm("2") = mask;
+-	register unsigned long __aob asm("3") = aob;
+ 	int cc;
+ 
+ 	asm volatile(
++		"	lgr	0,%[fc]\n"
++		"	lgr	1,%[schid]\n"
++		"	lgr	2,%[mask]\n"
++		"	lgr	3,%[aob]\n"
+ 		"	siga	0\n"
+-		"	ipm	%0\n"
+-		"	srl	%0,28\n"
+-		: "=d" (cc), "+d" (__fc), "+d" (__aob)
+-		: "d" (__schid), "d" (__mask)
+-		: "cc");
+-	*bb = __fc >> 31;
++		"	lgr	%[fc],0\n"
++		"	ipm	%[cc]\n"
++		"	srl	%[cc],28\n"
++		: [cc] "=&d" (cc), [fc] "+&d" (fc)
++		: [schid] "d" (schid), [mask] "d" (mask), [aob] "d" (aob)
++		: "cc", "0", "1", "2", "3");
++	*bb = fc >> 31;
+ 	return cc;
+ }
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index fbc76d69ea0b4..2b77cbbcdccb6 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -2159,10 +2159,13 @@ lpfc_debugfs_lockstat_write(struct file *file, const char __user *buf,
+ 	char mybuf[64];
+ 	char *pbuf;
+ 	int i;
++	size_t bsize;
+ 
+ 	memset(mybuf, 0, sizeof(mybuf));
+ 
+-	if (copy_from_user(mybuf, buf, nbytes))
++	bsize = min(nbytes, (sizeof(mybuf) - 1));
++
++	if (copy_from_user(mybuf, buf, bsize))
+ 		return -EFAULT;
+ 	pbuf = &mybuf[0];
+ 
+@@ -2183,7 +2186,7 @@ lpfc_debugfs_lockstat_write(struct file *file, const char __user *buf,
+ 			qp->lock_conflict.wq_access = 0;
+ 		}
+ 	}
+-	return nbytes;
++	return bsize;
+ }
+ #endif
+ 
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index e38aebcabb26f..70b4868fe2f7d 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1756,7 +1756,7 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
+ 
+ 	length = scsi_bufflen(scmnd);
+ 	payload = (struct vmbus_packet_mpb_array *)&cmd_request->mpb;
+-	payload_sz = sizeof(cmd_request->mpb);
++	payload_sz = 0;
+ 
+ 	if (sg_count) {
+ 		unsigned int hvpgoff = 0;
+@@ -1764,10 +1764,10 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
+ 		unsigned int hvpg_count = HVPFN_UP(offset_in_hvpg + length);
+ 		u64 hvpfn;
+ 
+-		if (hvpg_count > MAX_PAGE_BUFFER_COUNT) {
++		payload_sz = (hvpg_count * sizeof(u64) +
++			      sizeof(struct vmbus_packet_mpb_array));
+ 
+-			payload_sz = (hvpg_count * sizeof(u64) +
+-				      sizeof(struct vmbus_packet_mpb_array));
++		if (hvpg_count > MAX_PAGE_BUFFER_COUNT) {
+ 			payload = kzalloc(payload_sz, GFP_ATOMIC);
+ 			if (!payload)
+ 				return SCSI_MLQUEUE_DEVICE_BUSY;
+diff --git a/drivers/spi/spi-fsl-cpm.c b/drivers/spi/spi-fsl-cpm.c
+index ee905880769e6..7832ce330b29d 100644
+--- a/drivers/spi/spi-fsl-cpm.c
++++ b/drivers/spi/spi-fsl-cpm.c
+@@ -21,6 +21,7 @@
+ #include <linux/spi/spi.h>
+ #include <linux/types.h>
+ #include <linux/platform_device.h>
++#include <linux/byteorder/generic.h>
+ 
+ #include "spi-fsl-cpm.h"
+ #include "spi-fsl-lib.h"
+@@ -120,6 +121,21 @@ int fsl_spi_cpm_bufs(struct mpc8xxx_spi *mspi,
+ 		mspi->rx_dma = mspi->dma_dummy_rx;
+ 		mspi->map_rx_dma = 0;
+ 	}
++	if (t->bits_per_word == 16 && t->tx_buf) {
++		const u16 *src = t->tx_buf;
++		u16 *dst;
++		int i;
++
++		dst = kmalloc(t->len, GFP_KERNEL);
++		if (!dst)
++			return -ENOMEM;
++
++		for (i = 0; i < t->len >> 1; i++)
++			dst[i] = cpu_to_le16p(src + i);
++
++		mspi->tx = dst;
++		mspi->map_tx_dma = 1;
++	}
+ 
+ 	if (mspi->map_tx_dma) {
+ 		void *nonconst_tx = (void *)mspi->tx; /* shut up gcc */
+@@ -173,6 +189,13 @@ void fsl_spi_cpm_bufs_complete(struct mpc8xxx_spi *mspi)
+ 	if (mspi->map_rx_dma)
+ 		dma_unmap_single(dev, mspi->rx_dma, t->len, DMA_FROM_DEVICE);
+ 	mspi->xfer_in_progress = NULL;
++
++	if (t->bits_per_word == 16 && t->rx_buf) {
++		int i;
++
++		for (i = 0; i < t->len; i += 2)
++			le16_to_cpus(t->rx_buf + i);
++	}
+ }
+ EXPORT_SYMBOL_GPL(fsl_spi_cpm_bufs_complete);
+ 
+diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
+index 1bad0ceac81b4..63302e21e574c 100644
+--- a/drivers/spi/spi-fsl-spi.c
++++ b/drivers/spi/spi-fsl-spi.c
+@@ -203,26 +203,6 @@ static int mspi_apply_cpu_mode_quirks(struct spi_mpc8xxx_cs *cs,
+ 	return bits_per_word;
+ }
+ 
+-static int mspi_apply_qe_mode_quirks(struct spi_mpc8xxx_cs *cs,
+-				struct spi_device *spi,
+-				int bits_per_word)
+-{
+-	/* CPM/QE uses Little Endian for words > 8
+-	 * so transform 16 and 32 bits words into 8 bits
+-	 * Unfortnatly that doesn't work for LSB so
+-	 * reject these for now */
+-	/* Note: 32 bits word, LSB works iff
+-	 * tfcr/rfcr is set to CPMFCR_GBL */
+-	if (spi->mode & SPI_LSB_FIRST &&
+-	    bits_per_word > 8)
+-		return -EINVAL;
+-	if (bits_per_word <= 8)
+-		return bits_per_word;
+-	if (bits_per_word == 16 || bits_per_word == 32)
+-		return 8; /* pretend its 8 bits */
+-	return -EINVAL;
+-}
+-
+ static int fsl_spi_setup_transfer(struct spi_device *spi,
+ 					struct spi_transfer *t)
+ {
+@@ -250,9 +230,6 @@ static int fsl_spi_setup_transfer(struct spi_device *spi,
+ 		bits_per_word = mspi_apply_cpu_mode_quirks(cs, spi,
+ 							   mpc8xxx_spi,
+ 							   bits_per_word);
+-	else
+-		bits_per_word = mspi_apply_qe_mode_quirks(cs, spi,
+-							  bits_per_word);
+ 
+ 	if (bits_per_word < 0)
+ 		return bits_per_word;
+@@ -370,14 +347,30 @@ static int fsl_spi_do_one_msg(struct spi_master *master,
+ 	 * In CPU mode, optimize large byte transfers to use larger
+ 	 * bits_per_word values to reduce number of interrupts taken.
+ 	 */
+-	if (!(mpc8xxx_spi->flags & SPI_CPM_MODE)) {
+-		list_for_each_entry(t, &m->transfers, transfer_list) {
++	list_for_each_entry(t, &m->transfers, transfer_list) {
++		if (!(mpc8xxx_spi->flags & SPI_CPM_MODE)) {
+ 			if (t->len < 256 || t->bits_per_word != 8)
+ 				continue;
+ 			if ((t->len & 3) == 0)
+ 				t->bits_per_word = 32;
+ 			else if ((t->len & 1) == 0)
+ 				t->bits_per_word = 16;
++		} else {
++			/*
++			 * CPM/QE uses Little Endian for words > 8
++			 * so transform 16 and 32 bits words into 8 bits
++			 * Unfortnatly that doesn't work for LSB so
++			 * reject these for now
++			 * Note: 32 bits word, LSB works iff
++			 * tfcr/rfcr is set to CPMFCR_GBL
++			 */
++			if (m->spi->mode & SPI_LSB_FIRST && t->bits_per_word > 8)
++				return -EINVAL;
++			if (t->bits_per_word == 16 || t->bits_per_word == 32)
++				t->bits_per_word = 8; /* pretend its 8 bits */
++			if (t->bits_per_word == 8 && t->len >= 256 &&
++			    (mpc8xxx_spi->flags & SPI_CPM1))
++				t->bits_per_word = 16;
+ 		}
+ 	}
+ 
+@@ -635,8 +628,14 @@ static struct spi_master *fsl_spi_probe(struct device *dev,
+ 	if (mpc8xxx_spi->type == TYPE_GRLIB)
+ 		fsl_spi_grlib_probe(dev);
+ 
+-	master->bits_per_word_mask =
+-		(SPI_BPW_RANGE_MASK(4, 16) | SPI_BPW_MASK(32)) &
++	if (mpc8xxx_spi->flags & SPI_CPM_MODE)
++		master->bits_per_word_mask =
++			(SPI_BPW_RANGE_MASK(4, 8) | SPI_BPW_MASK(16) | SPI_BPW_MASK(32));
++	else
++		master->bits_per_word_mask =
++			(SPI_BPW_RANGE_MASK(4, 16) | SPI_BPW_MASK(32));
++
++	master->bits_per_word_mask &=
+ 		SPI_BPW_RANGE_MASK(1, mpc8xxx_spi->max_bits_per_word);
+ 
+ 	if (mpc8xxx_spi->flags & SPI_QE_CPU_MODE)
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index bbc420865f0fd..21297cc62571a 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -242,6 +242,18 @@ static bool spi_imx_can_dma(struct spi_master *master, struct spi_device *spi,
+ 	return true;
+ }
+ 
++/*
++ * Note the number of natively supported chip selects for MX51 is 4. Some
++ * devices may have less actual SS pins but the register map supports 4. When
++ * using gpio chip selects the cs values passed into the macros below can go
++ * outside the range 0 - 3. We therefore need to limit the cs value to avoid
++ * corrupting bits outside the allocated locations.
++ *
++ * The simplest way to do this is to just mask the cs bits to 2 bits. This
++ * still allows all 4 native chip selects to work as well as gpio chip selects
++ * (which can use any of the 4 chip select configurations).
++ */
++
+ #define MX51_ECSPI_CTRL		0x08
+ #define MX51_ECSPI_CTRL_ENABLE		(1 <<  0)
+ #define MX51_ECSPI_CTRL_XCH		(1 <<  2)
+@@ -250,16 +262,16 @@ static bool spi_imx_can_dma(struct spi_master *master, struct spi_device *spi,
+ #define MX51_ECSPI_CTRL_DRCTL(drctl)	((drctl) << 16)
+ #define MX51_ECSPI_CTRL_POSTDIV_OFFSET	8
+ #define MX51_ECSPI_CTRL_PREDIV_OFFSET	12
+-#define MX51_ECSPI_CTRL_CS(cs)		((cs) << 18)
++#define MX51_ECSPI_CTRL_CS(cs)		((cs & 3) << 18)
+ #define MX51_ECSPI_CTRL_BL_OFFSET	20
+ #define MX51_ECSPI_CTRL_BL_MASK		(0xfff << 20)
+ 
+ #define MX51_ECSPI_CONFIG	0x0c
+-#define MX51_ECSPI_CONFIG_SCLKPHA(cs)	(1 << ((cs) +  0))
+-#define MX51_ECSPI_CONFIG_SCLKPOL(cs)	(1 << ((cs) +  4))
+-#define MX51_ECSPI_CONFIG_SBBCTRL(cs)	(1 << ((cs) +  8))
+-#define MX51_ECSPI_CONFIG_SSBPOL(cs)	(1 << ((cs) + 12))
+-#define MX51_ECSPI_CONFIG_SCLKCTL(cs)	(1 << ((cs) + 20))
++#define MX51_ECSPI_CONFIG_SCLKPHA(cs)	(1 << ((cs & 3) +  0))
++#define MX51_ECSPI_CONFIG_SCLKPOL(cs)	(1 << ((cs & 3) +  4))
++#define MX51_ECSPI_CONFIG_SBBCTRL(cs)	(1 << ((cs & 3) +  8))
++#define MX51_ECSPI_CONFIG_SSBPOL(cs)	(1 << ((cs & 3) + 12))
++#define MX51_ECSPI_CONFIG_SCLKCTL(cs)	(1 << ((cs & 3) + 20))
+ 
+ #define MX51_ECSPI_INT		0x10
+ #define MX51_ECSPI_INT_TEEN		(1 <<  0)
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+index 291f98251f7f7..4c201679fc081 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+@@ -50,9 +50,9 @@ static const struct rtl819x_ops rtl819xp_ops = {
+ };
+ 
+ static struct pci_device_id rtl8192_pci_id_tbl[] = {
+-	{RTL_PCI_DEVICE(0x10ec, 0x8192, rtl819xp_ops)},
+-	{RTL_PCI_DEVICE(0x07aa, 0x0044, rtl819xp_ops)},
+-	{RTL_PCI_DEVICE(0x07aa, 0x0047, rtl819xp_ops)},
++	{PCI_DEVICE(0x10ec, 0x8192)},
++	{PCI_DEVICE(0x07aa, 0x0044)},
++	{PCI_DEVICE(0x07aa, 0x0047)},
+ 	{}
+ };
+ 
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.h b/drivers/staging/rtl8192e/rtl8192e/rtl_core.h
+index 736f1a824cd2e..7bbd884aa5f13 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.h
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.h
+@@ -55,11 +55,6 @@
+ #define IS_HARDWARE_TYPE_8192SE(_priv)		\
+ 	(((struct r8192_priv *)rtllib_priv(dev))->card_8192 == NIC_8192SE)
+ 
+-#define RTL_PCI_DEVICE(vend, dev, cfg) \
+-	.vendor = (vend), .device = (dev), \
+-	.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID, \
+-	.driver_data = (kernel_ulong_t)&(cfg)
+-
+ #define TOTAL_CAM_ENTRY		32
+ #define CAM_CONTENT_COUNT	8
+ 
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 6bb8403580729..075e2a6fb474f 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -4385,6 +4385,9 @@ int iscsit_close_session(struct iscsi_session *sess)
+ 	iscsit_stop_time2retain_timer(sess);
+ 	spin_unlock_bh(&se_tpg->session_lock);
+ 
++	if (sess->sess_ops->ErrorRecoveryLevel == 2)
++		iscsit_free_connection_recovery_entries(sess);
++
+ 	/*
+ 	 * transport_deregister_session_configfs() will clear the
+ 	 * struct se_node_acl->nacl_sess pointer now as a iscsi_np process context
+@@ -4412,9 +4415,6 @@ int iscsit_close_session(struct iscsi_session *sess)
+ 
+ 	transport_deregister_session(sess->se_sess);
+ 
+-	if (sess->sess_ops->ErrorRecoveryLevel == 2)
+-		iscsit_free_connection_recovery_entries(sess);
+-
+ 	iscsit_free_all_ooo_cmdsns(sess);
+ 
+ 	spin_lock_bh(&se_tpg->session_lock);
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index 0a7e9491b4d14..43f2eed6df78e 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -1165,6 +1165,7 @@ void serial8250_unregister_port(int line)
+ 		uart->port.type = PORT_UNKNOWN;
+ 		uart->port.dev = &serial8250_isa_devs->dev;
+ 		uart->capabilities = 0;
++		serial8250_init_port(uart);
+ 		serial8250_apply_quirks(uart);
+ 		uart_add_one_port(&serial8250_reg, &uart->port);
+ 	} else {
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 2d0e7c7e408dc..5c2adf14049b7 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -40,9 +40,19 @@
+ #define PCI_DEVICE_ID_COMMTECH_4224PCIE		0x0020
+ #define PCI_DEVICE_ID_COMMTECH_4228PCIE		0x0021
+ #define PCI_DEVICE_ID_COMMTECH_4222PCIE		0x0022
++
+ #define PCI_DEVICE_ID_EXAR_XR17V4358		0x4358
+ #define PCI_DEVICE_ID_EXAR_XR17V8358		0x8358
+ 
++#define PCI_SUBDEVICE_ID_USR_2980		0x0128
++#define PCI_SUBDEVICE_ID_USR_2981		0x0129
++
++#define PCI_DEVICE_ID_SEALEVEL_710xC		0x1001
++#define PCI_DEVICE_ID_SEALEVEL_720xC		0x1002
++#define PCI_DEVICE_ID_SEALEVEL_740xC		0x1004
++#define PCI_DEVICE_ID_SEALEVEL_780xC		0x1008
++#define PCI_DEVICE_ID_SEALEVEL_716xC		0x1010
++
+ #define UART_EXAR_INT0		0x80
+ #define UART_EXAR_8XMODE	0x88	/* 8X sampling rate select */
+ #define UART_EXAR_SLEEP		0x8b	/* Sleep mode */
+@@ -596,7 +606,14 @@ exar_pci_probe(struct pci_dev *pcidev, const struct pci_device_id *ent)
+ 
+ 	maxnr = pci_resource_len(pcidev, bar) >> (board->reg_shift + 3);
+ 
+-	nr_ports = board->num_ports ? board->num_ports : pcidev->device & 0x0f;
++	if (pcidev->vendor == PCI_VENDOR_ID_ACCESSIO)
++		nr_ports = BIT(((pcidev->device & 0x38) >> 3) - 1);
++	else if (board->num_ports)
++		nr_ports = board->num_ports;
++	else if (pcidev->vendor == PCI_VENDOR_ID_SEALEVEL)
++		nr_ports = pcidev->device & 0xff;
++	else
++		nr_ports = pcidev->device & 0x0f;
+ 
+ 	priv = devm_kzalloc(&pcidev->dev, struct_size(priv, line, nr_ports), GFP_KERNEL);
+ 	if (!priv)
+@@ -695,22 +712,6 @@ static int __maybe_unused exar_resume(struct device *dev)
+ 
+ static SIMPLE_DEV_PM_OPS(exar_pci_pm, exar_suspend, exar_resume);
+ 
+-static const struct exar8250_board acces_com_2x = {
+-	.num_ports	= 2,
+-	.setup		= pci_xr17c154_setup,
+-};
+-
+-static const struct exar8250_board acces_com_4x = {
+-	.num_ports	= 4,
+-	.setup		= pci_xr17c154_setup,
+-};
+-
+-static const struct exar8250_board acces_com_8x = {
+-	.num_ports	= 8,
+-	.setup		= pci_xr17c154_setup,
+-};
+-
+-
+ static const struct exar8250_board pbn_fastcom335_2 = {
+ 	.num_ports	= 2,
+ 	.setup		= pci_fastcom335_setup,
+@@ -794,14 +795,23 @@ static const struct exar8250_board pbn_exar_XR17V8358 = {
+ 		(kernel_ulong_t)&bd			\
+ 	}
+ 
++#define USR_DEVICE(devid, sdevid, bd) {			\
++	PCI_DEVICE_SUB(					\
++		PCI_VENDOR_ID_USR,			\
++		PCI_DEVICE_ID_EXAR_##devid,		\
++		PCI_VENDOR_ID_EXAR,			\
++		PCI_SUBDEVICE_ID_USR_##sdevid), 0, 0,	\
++		(kernel_ulong_t)&bd			\
++	}
++
+ static const struct pci_device_id exar_pci_tbl[] = {
+-	EXAR_DEVICE(ACCESSIO, COM_2S, acces_com_2x),
+-	EXAR_DEVICE(ACCESSIO, COM_4S, acces_com_4x),
+-	EXAR_DEVICE(ACCESSIO, COM_8S, acces_com_8x),
+-	EXAR_DEVICE(ACCESSIO, COM232_8, acces_com_8x),
+-	EXAR_DEVICE(ACCESSIO, COM_2SM, acces_com_2x),
+-	EXAR_DEVICE(ACCESSIO, COM_4SM, acces_com_4x),
+-	EXAR_DEVICE(ACCESSIO, COM_8SM, acces_com_8x),
++	EXAR_DEVICE(ACCESSIO, COM_2S, pbn_exar_XR17C15x),
++	EXAR_DEVICE(ACCESSIO, COM_4S, pbn_exar_XR17C15x),
++	EXAR_DEVICE(ACCESSIO, COM_8S, pbn_exar_XR17C15x),
++	EXAR_DEVICE(ACCESSIO, COM232_8, pbn_exar_XR17C15x),
++	EXAR_DEVICE(ACCESSIO, COM_2SM, pbn_exar_XR17C15x),
++	EXAR_DEVICE(ACCESSIO, COM_4SM, pbn_exar_XR17C15x),
++	EXAR_DEVICE(ACCESSIO, COM_8SM, pbn_exar_XR17C15x),
+ 
+ 	CONNECT_DEVICE(XR17C152, UART_2_232, pbn_connect),
+ 	CONNECT_DEVICE(XR17C154, UART_4_232, pbn_connect),
+@@ -818,6 +828,10 @@ static const struct pci_device_id exar_pci_tbl[] = {
+ 
+ 	IBM_DEVICE(XR17C152, SATURN_SERIAL_ONE_PORT, pbn_exar_ibm_saturn),
+ 
++	/* USRobotics USR298x-OEM PCI Modems */
++	USR_DEVICE(XR17C152, 2980, pbn_exar_XR17C15x),
++	USR_DEVICE(XR17C152, 2981, pbn_exar_XR17C15x),
++
+ 	/* Exar Corp. XR17C15[248] Dual/Quad/Octal UART */
+ 	EXAR_DEVICE(EXAR, XR17C152, pbn_exar_XR17C15x),
+ 	EXAR_DEVICE(EXAR, XR17C154, pbn_exar_XR17C15x),
+@@ -837,6 +851,12 @@ static const struct pci_device_id exar_pci_tbl[] = {
+ 	EXAR_DEVICE(COMMTECH, 4224PCI335, pbn_fastcom335_4),
+ 	EXAR_DEVICE(COMMTECH, 2324PCI335, pbn_fastcom335_4),
+ 	EXAR_DEVICE(COMMTECH, 2328PCI335, pbn_fastcom335_8),
++
++	EXAR_DEVICE(SEALEVEL, 710xC, pbn_exar_XR17V35x),
++	EXAR_DEVICE(SEALEVEL, 720xC, pbn_exar_XR17V35x),
++	EXAR_DEVICE(SEALEVEL, 740xC, pbn_exar_XR17V35x),
++	EXAR_DEVICE(SEALEVEL, 780xC, pbn_exar_XR17V35x),
++	EXAR_DEVICE(SEALEVEL, 716xC, pbn_exar_XR17V35x),
+ 	{ 0, }
+ };
+ MODULE_DEVICE_TABLE(pci, exar_pci_tbl);
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index b6656898699d1..9617f7ad332d1 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -1839,6 +1839,8 @@ pci_moxa_setup(struct serial_private *priv,
+ #define PCI_SUBDEVICE_ID_SIIG_DUAL_30	0x2530
+ #define PCI_VENDOR_ID_ADVANTECH		0x13fe
+ #define PCI_DEVICE_ID_INTEL_CE4100_UART 0x2e66
++#define PCI_DEVICE_ID_ADVANTECH_PCI1600	0x1600
++#define PCI_DEVICE_ID_ADVANTECH_PCI1600_1611	0x1611
+ #define PCI_DEVICE_ID_ADVANTECH_PCI3620	0x3620
+ #define PCI_DEVICE_ID_ADVANTECH_PCI3618	0x3618
+ #define PCI_DEVICE_ID_ADVANTECH_PCIf618	0xf618
+@@ -4185,6 +4187,9 @@ static SIMPLE_DEV_PM_OPS(pciserial_pm_ops, pciserial_suspend_one,
+ 			 pciserial_resume_one);
+ 
+ static const struct pci_device_id serial_pci_tbl[] = {
++	{	PCI_VENDOR_ID_ADVANTECH, PCI_DEVICE_ID_ADVANTECH_PCI1600,
++		PCI_DEVICE_ID_ADVANTECH_PCI1600_1611, PCI_ANY_ID, 0, 0,
++		pbn_b0_4_921600 },
+ 	/* Advantech use PCI_DEVICE_ID_ADVANTECH_PCI3620 (0x3620) as 'PCI_SUBVENDOR_ID' */
+ 	{	PCI_VENDOR_ID_ADVANTECH, PCI_DEVICE_ID_ADVANTECH_PCI3620,
+ 		PCI_DEVICE_ID_ADVANTECH_PCI3620, 0x0001, 0, 0,
+diff --git a/drivers/tty/serial/arc_uart.c b/drivers/tty/serial/arc_uart.c
+index 17c3fc398fc65..6f7a7d2dcf3aa 100644
+--- a/drivers/tty/serial/arc_uart.c
++++ b/drivers/tty/serial/arc_uart.c
+@@ -609,10 +609,11 @@ static int arc_serial_probe(struct platform_device *pdev)
+ 	}
+ 	uart->baud = val;
+ 
+-	port->membase = of_iomap(np, 0);
+-	if (!port->membase)
++	port->membase = devm_platform_ioremap_resource(pdev, 0);
++	if (IS_ERR(port->membase)) {
+ 		/* No point of dev_err since UART itself is hosed here */
+-		return -ENXIO;
++		return PTR_ERR(port->membase);
++	}
+ 
+ 	port->irq = irq_of_parse_and_map(np, 0);
+ 
+diff --git a/drivers/tty/vt/vc_screen.c b/drivers/tty/vt/vc_screen.c
+index 1dc07f9214d57..01c96537fa36b 100644
+--- a/drivers/tty/vt/vc_screen.c
++++ b/drivers/tty/vt/vc_screen.c
+@@ -656,10 +656,17 @@ vcs_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
+ 			}
+ 		}
+ 
+-		/* The vcs_size might have changed while we slept to grab
+-		 * the user buffer, so recheck.
++		/* The vc might have been freed or vcs_size might have changed
++		 * while we slept to grab the user buffer, so recheck.
+ 		 * Return data written up to now on failure.
+ 		 */
++		vc = vcs_vc(inode, &viewed);
++		if (!vc) {
++			if (written)
++				break;
++			ret = -ENXIO;
++			goto unlock_out;
++		}
+ 		size = vcs_size(vc, attr, false);
+ 		if (size < 0) {
+ 			if (written)
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 49f59d53b4b26..76ff182427bc6 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -1898,6 +1898,8 @@ static int usbtmc_ioctl_request(struct usbtmc_device_data *data,
+ 
+ 	if (request.req.wLength > USBTMC_BUFSIZE)
+ 		return -EMSGSIZE;
++	if (request.req.wLength == 0)	/* Length-0 requests are never IN */
++		request.req.bRequestType &= ~USB_DIR_IN;
+ 
+ 	is_in = request.req.bRequestType & USB_DIR_IN;
+ 
+diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
+index db4de5367737a..c4cd9d46f9e3c 100644
+--- a/drivers/usb/core/usb.c
++++ b/drivers/usb/core/usb.c
+@@ -207,6 +207,82 @@ int usb_find_common_endpoints_reverse(struct usb_host_interface *alt,
+ }
+ EXPORT_SYMBOL_GPL(usb_find_common_endpoints_reverse);
+ 
++/**
++ * usb_find_endpoint() - Given an endpoint address, search for the endpoint's
++ * usb_host_endpoint structure in an interface's current altsetting.
++ * @intf: the interface whose current altsetting should be searched
++ * @ep_addr: the endpoint address (number and direction) to find
++ *
++ * Search the altsetting's list of endpoints for one with the specified address.
++ *
++ * Return: Pointer to the usb_host_endpoint if found, %NULL otherwise.
++ */
++static const struct usb_host_endpoint *usb_find_endpoint(
++		const struct usb_interface *intf, unsigned int ep_addr)
++{
++	int n;
++	const struct usb_host_endpoint *ep;
++
++	n = intf->cur_altsetting->desc.bNumEndpoints;
++	ep = intf->cur_altsetting->endpoint;
++	for (; n > 0; (--n, ++ep)) {
++		if (ep->desc.bEndpointAddress == ep_addr)
++			return ep;
++	}
++	return NULL;
++}
++
++/**
++ * usb_check_bulk_endpoints - Check whether an interface's current altsetting
++ * contains a set of bulk endpoints with the given addresses.
++ * @intf: the interface whose current altsetting should be searched
++ * @ep_addrs: 0-terminated array of the endpoint addresses (number and
++ * direction) to look for
++ *
++ * Search for endpoints with the specified addresses and check their types.
++ *
++ * Return: %true if all the endpoints are found and are bulk, %false otherwise.
++ */
++bool usb_check_bulk_endpoints(
++		const struct usb_interface *intf, const u8 *ep_addrs)
++{
++	const struct usb_host_endpoint *ep;
++
++	for (; *ep_addrs; ++ep_addrs) {
++		ep = usb_find_endpoint(intf, *ep_addrs);
++		if (!ep || !usb_endpoint_xfer_bulk(&ep->desc))
++			return false;
++	}
++	return true;
++}
++EXPORT_SYMBOL_GPL(usb_check_bulk_endpoints);
++
++/**
++ * usb_check_int_endpoints - Check whether an interface's current altsetting
++ * contains a set of interrupt endpoints with the given addresses.
++ * @intf: the interface whose current altsetting should be searched
++ * @ep_addrs: 0-terminated array of the endpoint addresses (number and
++ * direction) to look for
++ *
++ * Search for endpoints with the specified addresses and check their types.
++ *
++ * Return: %true if all the endpoints are found and are interrupt,
++ * %false otherwise.
++ */
++bool usb_check_int_endpoints(
++		const struct usb_interface *intf, const u8 *ep_addrs)
++{
++	const struct usb_host_endpoint *ep;
++
++	for (; *ep_addrs; ++ep_addrs) {
++		ep = usb_find_endpoint(intf, *ep_addrs);
++		if (!ep || !usb_endpoint_xfer_int(&ep->desc))
++			return false;
++	}
++	return true;
++}
++EXPORT_SYMBOL_GPL(usb_check_int_endpoints);
++
+ /**
+  * usb_find_alt_setting() - Given a configuration, find the alternate setting
+  * for the given interface.
+diff --git a/drivers/usb/dwc3/debugfs.c b/drivers/usb/dwc3/debugfs.c
+index 3ebe3e6c284d2..da8b62db49fb2 100644
+--- a/drivers/usb/dwc3/debugfs.c
++++ b/drivers/usb/dwc3/debugfs.c
+@@ -327,6 +327,11 @@ static int dwc3_lsp_show(struct seq_file *s, void *unused)
+ 	unsigned int		current_mode;
+ 	unsigned long		flags;
+ 	u32			reg;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	reg = dwc3_readl(dwc->regs, DWC3_GSTS);
+@@ -345,6 +350,8 @@ static int dwc3_lsp_show(struct seq_file *s, void *unused)
+ 	}
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -390,6 +397,11 @@ static int dwc3_mode_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = s->private;
+ 	unsigned long		flags;
+ 	u32			reg;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	reg = dwc3_readl(dwc->regs, DWC3_GCTL);
+@@ -409,6 +421,8 @@ static int dwc3_mode_show(struct seq_file *s, void *unused)
+ 		seq_printf(s, "UNKNOWN %08x\n", DWC3_GCTL_PRTCAP(reg));
+ 	}
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -458,6 +472,11 @@ static int dwc3_testmode_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = s->private;
+ 	unsigned long		flags;
+ 	u32			reg;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+@@ -488,6 +507,8 @@ static int dwc3_testmode_show(struct seq_file *s, void *unused)
+ 		seq_printf(s, "UNKNOWN %d\n", reg);
+ 	}
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -504,6 +525,7 @@ static ssize_t dwc3_testmode_write(struct file *file,
+ 	unsigned long		flags;
+ 	u32			testmode = 0;
+ 	char			buf[32];
++	int			ret;
+ 
+ 	if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
+ 		return -EFAULT;
+@@ -521,10 +543,16 @@ static ssize_t dwc3_testmode_write(struct file *file,
+ 	else
+ 		testmode = 0;
+ 
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
++
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	dwc3_gadget_set_test_mode(dwc, testmode);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return count;
+ }
+ 
+@@ -543,12 +571,18 @@ static int dwc3_link_state_show(struct seq_file *s, void *unused)
+ 	enum dwc3_link_state	state;
+ 	u32			reg;
+ 	u8			speed;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	reg = dwc3_readl(dwc->regs, DWC3_GSTS);
+ 	if (DWC3_GSTS_CURMOD(reg) != DWC3_GSTS_CURMOD_DEVICE) {
+ 		seq_puts(s, "Not available\n");
+ 		spin_unlock_irqrestore(&dwc->lock, flags);
++		pm_runtime_put_sync(dwc->dev);
+ 		return 0;
+ 	}
+ 
+@@ -561,6 +595,8 @@ static int dwc3_link_state_show(struct seq_file *s, void *unused)
+ 		   dwc3_gadget_hs_link_string(state));
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -579,6 +615,7 @@ static ssize_t dwc3_link_state_write(struct file *file,
+ 	char			buf[32];
+ 	u32			reg;
+ 	u8			speed;
++	int			ret;
+ 
+ 	if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
+ 		return -EFAULT;
+@@ -598,10 +635,15 @@ static ssize_t dwc3_link_state_write(struct file *file,
+ 	else
+ 		return -EINVAL;
+ 
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
++
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	reg = dwc3_readl(dwc->regs, DWC3_GSTS);
+ 	if (DWC3_GSTS_CURMOD(reg) != DWC3_GSTS_CURMOD_DEVICE) {
+ 		spin_unlock_irqrestore(&dwc->lock, flags);
++		pm_runtime_put_sync(dwc->dev);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -611,12 +653,15 @@ static ssize_t dwc3_link_state_write(struct file *file,
+ 	if (speed < DWC3_DSTS_SUPERSPEED &&
+ 	    state != DWC3_LINK_STATE_RECOV) {
+ 		spin_unlock_irqrestore(&dwc->lock, flags);
++		pm_runtime_put_sync(dwc->dev);
+ 		return -EINVAL;
+ 	}
+ 
+ 	dwc3_gadget_set_link_state(dwc, state);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return count;
+ }
+ 
+@@ -640,6 +685,11 @@ static int dwc3_tx_fifo_size_show(struct seq_file *s, void *unused)
+ 	unsigned long		flags;
+ 	int			mdwidth;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_TXFIFO);
+@@ -654,6 +704,8 @@ static int dwc3_tx_fifo_size_show(struct seq_file *s, void *unused)
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -664,6 +716,11 @@ static int dwc3_rx_fifo_size_show(struct seq_file *s, void *unused)
+ 	unsigned long		flags;
+ 	int			mdwidth;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_RXFIFO);
+@@ -678,6 +735,8 @@ static int dwc3_rx_fifo_size_show(struct seq_file *s, void *unused)
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -687,12 +746,19 @@ static int dwc3_tx_request_queue_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = dep->dwc;
+ 	unsigned long		flags;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_TXREQQ);
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -702,12 +768,19 @@ static int dwc3_rx_request_queue_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = dep->dwc;
+ 	unsigned long		flags;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_RXREQQ);
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -717,12 +790,19 @@ static int dwc3_rx_info_queue_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = dep->dwc;
+ 	unsigned long		flags;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_RXINFOQ);
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -732,12 +812,19 @@ static int dwc3_descriptor_fetch_queue_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = dep->dwc;
+ 	unsigned long		flags;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_DESCFETCHQ);
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -747,12 +834,19 @@ static int dwc3_event_queue_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = dep->dwc;
+ 	unsigned long		flags;
+ 	u32			val;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	val = dwc3_core_fifo_space(dep, DWC3_EVENTQ);
+ 	seq_printf(s, "%u\n", val);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -797,6 +891,11 @@ static int dwc3_trb_ring_show(struct seq_file *s, void *unused)
+ 	struct dwc3		*dwc = dep->dwc;
+ 	unsigned long		flags;
+ 	int			i;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	if (dep->number <= 1) {
+@@ -826,6 +925,8 @@ static int dwc3_trb_ring_show(struct seq_file *s, void *unused)
+ out:
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -838,6 +939,11 @@ static int dwc3_ep_info_register_show(struct seq_file *s, void *unused)
+ 	u32			lower_32_bits;
+ 	u32			upper_32_bits;
+ 	u32			reg;
++	int			ret;
++
++	ret = pm_runtime_resume_and_get(dwc->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	reg = DWC3_GDBGLSPMUX_EPSELECT(dep->number);
+@@ -850,6 +956,8 @@ static int dwc3_ep_info_register_show(struct seq_file *s, void *unused)
+ 	seq_printf(s, "0x%016llx\n", ep_info);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	pm_runtime_put_sync(dwc->dev);
++
+ 	return 0;
+ }
+ 
+@@ -911,6 +1019,7 @@ void dwc3_debugfs_init(struct dwc3 *dwc)
+ 	dwc->regset->regs = dwc3_regs;
+ 	dwc->regset->nregs = ARRAY_SIZE(dwc3_regs);
+ 	dwc->regset->base = dwc->regs - DWC3_GLOBALS_REGS_START;
++	dwc->regset->dev = dwc->dev;
+ 
+ 	root = debugfs_create_dir(dev_name(dwc->dev), usb_debug_root);
+ 	dwc->root = root;
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index 64ef97ab9274a..5e5f699a434f4 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -17,6 +17,7 @@
+ #include <linux/etherdevice.h>
+ #include <linux/ethtool.h>
+ #include <linux/if_vlan.h>
++#include <linux/string_helpers.h>
+ 
+ #include "u_ether.h"
+ 
+@@ -974,6 +975,8 @@ int gether_get_host_addr_cdc(struct net_device *net, char *host_addr, int len)
+ 	dev = netdev_priv(net);
+ 	snprintf(host_addr, len, "%pm", dev->host_mac);
+ 
++	string_upper(host_addr, host_addr);
++
+ 	return strlen(host_addr);
+ }
+ EXPORT_SYMBOL_GPL(gether_get_host_addr_cdc);
+diff --git a/drivers/usb/host/uhci-pci.c b/drivers/usb/host/uhci-pci.c
+index 9b88745d247f5..3316533b8bc29 100644
+--- a/drivers/usb/host/uhci-pci.c
++++ b/drivers/usb/host/uhci-pci.c
+@@ -119,11 +119,13 @@ static int uhci_pci_init(struct usb_hcd *hcd)
+ 
+ 	uhci->rh_numports = uhci_count_ports(hcd);
+ 
+-	/* Intel controllers report the OverCurrent bit active on.
+-	 * VIA controllers report it active off, so we'll adjust the
+-	 * bit value.  (It's not standardized in the UHCI spec.)
++	/*
++	 * Intel controllers report the OverCurrent bit active on.  VIA
++	 * and ZHAOXIN controllers report it active off, so we'll adjust
++	 * the bit value.  (It's not standardized in the UHCI spec.)
+ 	 */
+-	if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_VIA)
++	if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_VIA ||
++			to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_ZHAOXIN)
+ 		uhci->oc_low = 1;
+ 
+ 	/* HP's server management chip requires a longer port reset delay. */
+diff --git a/drivers/usb/misc/sisusbvga/sisusb.c b/drivers/usb/misc/sisusbvga/sisusb.c
+index f08de33d9ff38..8ed803c4a251d 100644
+--- a/drivers/usb/misc/sisusbvga/sisusb.c
++++ b/drivers/usb/misc/sisusbvga/sisusb.c
+@@ -3014,6 +3014,20 @@ static int sisusb_probe(struct usb_interface *intf,
+ 	struct usb_device *dev = interface_to_usbdev(intf);
+ 	struct sisusb_usb_data *sisusb;
+ 	int retval = 0, i;
++	static const u8 ep_addresses[] = {
++		SISUSB_EP_GFX_IN | USB_DIR_IN,
++		SISUSB_EP_GFX_OUT | USB_DIR_OUT,
++		SISUSB_EP_GFX_BULK_OUT | USB_DIR_OUT,
++		SISUSB_EP_GFX_LBULK_OUT | USB_DIR_OUT,
++		SISUSB_EP_BRIDGE_IN | USB_DIR_IN,
++		SISUSB_EP_BRIDGE_OUT | USB_DIR_OUT,
++		0};
++
++	/* Are the expected endpoints present? */
++	if (!usb_check_bulk_endpoints(intf, ep_addresses)) {
++		dev_err(&intf->dev, "Invalid USB2VGA device\n");
++		return -EINVAL;
++	}
+ 
+ 	dev_info(&dev->dev, "USB2VGA dongle found at address %d\n",
+ 			dev->devnum);
+diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
+index e5a971b83e3f5..b8e1109f0e0d4 100644
+--- a/drivers/usb/storage/scsiglue.c
++++ b/drivers/usb/storage/scsiglue.c
+@@ -407,22 +407,25 @@ static DEF_SCSI_QCMD(queuecommand)
+  ***********************************************************************/
+ 
+ /* Command timeout and abort */
+-static int command_abort(struct scsi_cmnd *srb)
++static int command_abort_matching(struct us_data *us, struct scsi_cmnd *srb_match)
+ {
+-	struct us_data *us = host_to_us(srb->device->host);
+-
+-	usb_stor_dbg(us, "%s called\n", __func__);
+-
+ 	/*
+ 	 * us->srb together with the TIMED_OUT, RESETTING, and ABORTING
+ 	 * bits are protected by the host lock.
+ 	 */
+ 	scsi_lock(us_to_host(us));
+ 
+-	/* Is this command still active? */
+-	if (us->srb != srb) {
++	/* is there any active pending command to abort ? */
++	if (!us->srb) {
+ 		scsi_unlock(us_to_host(us));
+ 		usb_stor_dbg(us, "-- nothing to abort\n");
++		return SUCCESS;
++	}
++
++	/* Does the command match the passed srb if any ? */
++	if (srb_match && us->srb != srb_match) {
++		scsi_unlock(us_to_host(us));
++		usb_stor_dbg(us, "-- pending command mismatch\n");
+ 		return FAILED;
+ 	}
+ 
+@@ -445,6 +448,14 @@ static int command_abort(struct scsi_cmnd *srb)
+ 	return SUCCESS;
+ }
+ 
++static int command_abort(struct scsi_cmnd *srb)
++{
++	struct us_data *us = host_to_us(srb->device->host);
++
++	usb_stor_dbg(us, "%s called\n", __func__);
++	return command_abort_matching(us, srb);
++}
++
+ /*
+  * This invokes the transport reset mechanism to reset the state of the
+  * device
+@@ -456,6 +467,9 @@ static int device_reset(struct scsi_cmnd *srb)
+ 
+ 	usb_stor_dbg(us, "%s called\n", __func__);
+ 
++	/* abort any pending command before reset */
++	command_abort_matching(us, NULL);
++
+ 	/* lock the device pointers and do the reset */
+ 	mutex_lock(&(us->dev_mutex));
+ 	result = us->transport_reset(us);
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index 07b6561720687..0d4b1c0eeefb3 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -503,6 +503,10 @@ static ssize_t pin_assignment_show(struct device *dev,
+ 
+ 	mutex_unlock(&dp->lock);
+ 
++	/* get_current_pin_assignments can return 0 when no matching pin assignments are found */
++	if (len == 0)
++		len++;
++
+ 	buf[len - 1] = '\n';
+ 	return len;
+ }
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 8333c80b5f7c1..cf0e6a80815ae 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -1126,7 +1126,21 @@ static bool svdm_consume_svids(struct tcpm_port *port, const u32 *p, int cnt)
+ 		pmdata->svids[pmdata->nsvids++] = svid;
+ 		tcpm_log(port, "SVID %d: 0x%x", pmdata->nsvids, svid);
+ 	}
+-	return true;
++
++	/*
++	 * PD3.0 Spec 6.4.4.3.2: The SVIDs are returned 2 per VDO (see Table
++	 * 6-43), and can be returned maximum 6 VDOs per response (see Figure
++	 * 6-19). If the Respondersupports 12 or more SVID then the Discover
++	 * SVIDs Command Shall be executed multiple times until a Discover
++	 * SVIDs VDO is returned ending either with a SVID value of 0x0000 in
++	 * the last part of the last VDO or with a VDO containing two SVIDs
++	 * with values of 0x0000.
++	 *
++	 * However, some odd dockers support SVIDs less than 12 but without
++	 * 0x0000 in the last VDO, so we need to break the Discover SVIDs
++	 * request and return false here.
++	 */
++	return cnt == 7;
+ abort:
+ 	tcpm_log(port, "SVID_DISCOVERY_MAX(%d) too low!", SVID_DISCOVERY_MAX);
+ 	return false;
+diff --git a/drivers/video/fbdev/arcfb.c b/drivers/video/fbdev/arcfb.c
+index 1447324ed0b64..08da436265d92 100644
+--- a/drivers/video/fbdev/arcfb.c
++++ b/drivers/video/fbdev/arcfb.c
+@@ -523,7 +523,7 @@ static int arcfb_probe(struct platform_device *dev)
+ 
+ 	info = framebuffer_alloc(sizeof(struct arcfb_par), &dev->dev);
+ 	if (!info)
+-		goto err;
++		goto err_fb_alloc;
+ 
+ 	info->screen_base = (char __iomem *)videomemory;
+ 	info->fbops = &arcfb_ops;
+@@ -535,7 +535,7 @@ static int arcfb_probe(struct platform_device *dev)
+ 
+ 	if (!dio_addr || !cio_addr || !c2io_addr) {
+ 		printk(KERN_WARNING "no IO addresses supplied\n");
+-		goto err1;
++		goto err_addr;
+ 	}
+ 	par->dio_addr = dio_addr;
+ 	par->cio_addr = cio_addr;
+@@ -551,12 +551,12 @@ static int arcfb_probe(struct platform_device *dev)
+ 			printk(KERN_INFO
+ 				"arcfb: Failed req IRQ %d\n", par->irq);
+ 			retval = -EBUSY;
+-			goto err1;
++			goto err_addr;
+ 		}
+ 	}
+ 	retval = register_framebuffer(info);
+ 	if (retval < 0)
+-		goto err1;
++		goto err_register_fb;
+ 	platform_set_drvdata(dev, info);
+ 	fb_info(info, "Arc frame buffer device, using %dK of video memory\n",
+ 		videomemorysize >> 10);
+@@ -580,9 +580,12 @@ static int arcfb_probe(struct platform_device *dev)
+ 	}
+ 
+ 	return 0;
+-err1:
++
++err_register_fb:
++	free_irq(par->irq, info);
++err_addr:
+ 	framebuffer_release(info);
+-err:
++err_fb_alloc:
+ 	vfree(videomemory);
+ 	return retval;
+ }
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index d9eec1b60e665..0de7b867714a7 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -27,6 +27,8 @@
+ #include <video/udlfb.h>
+ #include "edid.h"
+ 
++#define OUT_EP_NUM	1	/* The endpoint number we will use */
++
+ static const struct fb_fix_screeninfo dlfb_fix = {
+ 	.id =           "udlfb",
+ 	.type =         FB_TYPE_PACKED_PIXELS,
+@@ -1651,7 +1653,7 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ 	struct fb_info *info;
+ 	int retval;
+ 	struct usb_device *usbdev = interface_to_usbdev(intf);
+-	struct usb_endpoint_descriptor *out;
++	static u8 out_ep[] = {OUT_EP_NUM + USB_DIR_OUT, 0};
+ 
+ 	/* usb initialization */
+ 	dlfb = kzalloc(sizeof(*dlfb), GFP_KERNEL);
+@@ -1665,9 +1667,9 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ 	dlfb->udev = usb_get_dev(usbdev);
+ 	usb_set_intfdata(intf, dlfb);
+ 
+-	retval = usb_find_common_endpoints(intf->cur_altsetting, NULL, &out, NULL, NULL);
+-	if (retval) {
+-		dev_err(&intf->dev, "Device should have at lease 1 bulk endpoint!\n");
++	if (!usb_check_bulk_endpoints(intf, out_ep)) {
++		dev_err(&intf->dev, "Invalid DisplayLink device!\n");
++		retval = -EINVAL;
+ 		goto error;
+ 	}
+ 
+@@ -1926,7 +1928,8 @@ retry:
+ 		}
+ 
+ 		/* urb->transfer_buffer_length set to actual before submit */
+-		usb_fill_bulk_urb(urb, dlfb->udev, usb_sndbulkpipe(dlfb->udev, 1),
++		usb_fill_bulk_urb(urb, dlfb->udev,
++			usb_sndbulkpipe(dlfb->udev, OUT_EP_NUM),
+ 			buf, size, dlfb_urb_completion, unode);
+ 		urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+ 
+diff --git a/drivers/watchdog/sp5100_tco.c b/drivers/watchdog/sp5100_tco.c
+index a730ecbf78cd5..0db77c90b4b66 100644
+--- a/drivers/watchdog/sp5100_tco.c
++++ b/drivers/watchdog/sp5100_tco.c
+@@ -104,6 +104,10 @@ static int tco_timer_start(struct watchdog_device *wdd)
+ 	val |= SP5100_WDT_START_STOP_BIT;
+ 	writel(val, SP5100_WDT_CONTROL(tco->tcobase));
+ 
++	/* This must be a distinct write. */
++	val |= SP5100_WDT_TRIGGER_BIT;
++	writel(val, SP5100_WDT_CONTROL(tco->tcobase));
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
+index 3b5a8e2c4d475..3c435e1bca124 100644
+--- a/drivers/xen/pvcalls-back.c
++++ b/drivers/xen/pvcalls-back.c
+@@ -321,8 +321,10 @@ static struct sock_mapping *pvcalls_new_active_socket(
+ 	void *page;
+ 
+ 	map = kzalloc(sizeof(*map), GFP_KERNEL);
+-	if (map == NULL)
++	if (map == NULL) {
++		sock_release(sock);
+ 		return NULL;
++	}
+ 
+ 	map->fedata = fedata;
+ 	map->sock = sock;
+@@ -414,10 +416,8 @@ static int pvcalls_back_connect(struct xenbus_device *dev,
+ 					req->u.connect.ref,
+ 					req->u.connect.evtchn,
+ 					sock);
+-	if (!map) {
++	if (!map)
+ 		ret = -EFAULT;
+-		sock_release(sock);
+-	}
+ 
+ out:
+ 	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+@@ -558,7 +558,6 @@ static void __pvcalls_back_accept(struct work_struct *work)
+ 					sock);
+ 	if (!map) {
+ 		ret = -EFAULT;
+-		sock_release(sock);
+ 		goto out_error;
+ 	}
+ 
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 2a7778a88f03b..60b7a227624d5 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4535,7 +4535,11 @@ static void btrfs_destroy_delalloc_inodes(struct btrfs_root *root)
+ 		 */
+ 		inode = igrab(&btrfs_inode->vfs_inode);
+ 		if (inode) {
++			unsigned int nofs_flag;
++
++			nofs_flag = memalloc_nofs_save();
+ 			invalidate_inode_pages2(inode->i_mapping);
++			memalloc_nofs_restore(nofs_flag);
+ 			iput(inode);
+ 		}
+ 		spin_lock(&root->delalloc_lock);
+@@ -4640,7 +4644,12 @@ static void btrfs_cleanup_bg_io(struct btrfs_block_group *cache)
+ 
+ 	inode = cache->io_ctl.inode;
+ 	if (inode) {
++		unsigned int nofs_flag;
++
++		nofs_flag = memalloc_nofs_save();
+ 		invalidate_inode_pages2(inode->i_mapping);
++		memalloc_nofs_restore(nofs_flag);
++
+ 		BTRFS_I(inode)->generation = 0;
+ 		cache->io_ctl.inode = NULL;
+ 		iput(inode);
+@@ -4780,3 +4789,58 @@ static int btrfs_cleanup_transaction(struct btrfs_fs_info *fs_info)
+ 
+ 	return 0;
+ }
++
++int btrfs_find_highest_objectid(struct btrfs_root *root, u64 *objectid)
++{
++	struct btrfs_path *path;
++	int ret;
++	struct extent_buffer *l;
++	struct btrfs_key search_key;
++	struct btrfs_key found_key;
++	int slot;
++
++	path = btrfs_alloc_path();
++	if (!path)
++		return -ENOMEM;
++
++	search_key.objectid = BTRFS_LAST_FREE_OBJECTID;
++	search_key.type = -1;
++	search_key.offset = (u64)-1;
++	ret = btrfs_search_slot(NULL, root, &search_key, path, 0, 0);
++	if (ret < 0)
++		goto error;
++	BUG_ON(ret == 0); /* Corruption */
++	if (path->slots[0] > 0) {
++		slot = path->slots[0] - 1;
++		l = path->nodes[0];
++		btrfs_item_key_to_cpu(l, &found_key, slot);
++		*objectid = max_t(u64, found_key.objectid,
++				  BTRFS_FIRST_FREE_OBJECTID - 1);
++	} else {
++		*objectid = BTRFS_FIRST_FREE_OBJECTID - 1;
++	}
++	ret = 0;
++error:
++	btrfs_free_path(path);
++	return ret;
++}
++
++int btrfs_find_free_objectid(struct btrfs_root *root, u64 *objectid)
++{
++	int ret;
++	mutex_lock(&root->objectid_mutex);
++
++	if (unlikely(root->highest_objectid >= BTRFS_LAST_FREE_OBJECTID)) {
++		btrfs_warn(root->fs_info,
++			   "the objectid of root %llu reaches its highest value",
++			   root->root_key.objectid);
++		ret = -ENOSPC;
++		goto out;
++	}
++
++	*objectid = ++root->highest_objectid;
++	ret = 0;
++out:
++	mutex_unlock(&root->objectid_mutex);
++	return ret;
++}
+diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
+index 182540bdcea0f..e3b96944ce10c 100644
+--- a/fs/btrfs/disk-io.h
++++ b/fs/btrfs/disk-io.h
+@@ -131,6 +131,8 @@ struct btrfs_root *btrfs_create_tree(struct btrfs_trans_handle *trans,
+ int btree_lock_page_hook(struct page *page, void *data,
+ 				void (*flush_fn)(void *));
+ int btrfs_get_num_tolerated_disk_barrier_failures(u64 flags);
++int btrfs_find_free_objectid(struct btrfs_root *root, u64 *objectid);
++int btrfs_find_highest_objectid(struct btrfs_root *root, u64 *objectid);
+ int __init btrfs_end_io_wq_init(void);
+ void __cold btrfs_end_io_wq_exit(void);
+ 
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index ba280707d5ec2..4989c60b1df9c 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -794,15 +794,16 @@ static int __load_free_space_cache(struct btrfs_root *root, struct inode *inode,
+ 			}
+ 			spin_lock(&ctl->tree_lock);
+ 			ret = link_free_space(ctl, e);
+-			ctl->total_bitmaps++;
+-			ctl->op->recalc_thresholds(ctl);
+-			spin_unlock(&ctl->tree_lock);
+ 			if (ret) {
++				spin_unlock(&ctl->tree_lock);
+ 				btrfs_err(fs_info,
+ 					"Duplicate entries in free space cache, dumping");
+ 				kmem_cache_free(btrfs_free_space_cachep, e);
+ 				goto free_cache;
+ 			}
++			ctl->total_bitmaps++;
++			ctl->op->recalc_thresholds(ctl);
++			spin_unlock(&ctl->tree_lock);
+ 			list_add_tail(&e->list, &bitmaps);
+ 		}
+ 
+diff --git a/fs/btrfs/inode-map.c b/fs/btrfs/inode-map.c
+index 76d2e43817eae..c74340d22624e 100644
+--- a/fs/btrfs/inode-map.c
++++ b/fs/btrfs/inode-map.c
+@@ -525,58 +525,3 @@ out:
+ 	extent_changeset_free(data_reserved);
+ 	return ret;
+ }
+-
+-int btrfs_find_highest_objectid(struct btrfs_root *root, u64 *objectid)
+-{
+-	struct btrfs_path *path;
+-	int ret;
+-	struct extent_buffer *l;
+-	struct btrfs_key search_key;
+-	struct btrfs_key found_key;
+-	int slot;
+-
+-	path = btrfs_alloc_path();
+-	if (!path)
+-		return -ENOMEM;
+-
+-	search_key.objectid = BTRFS_LAST_FREE_OBJECTID;
+-	search_key.type = -1;
+-	search_key.offset = (u64)-1;
+-	ret = btrfs_search_slot(NULL, root, &search_key, path, 0, 0);
+-	if (ret < 0)
+-		goto error;
+-	BUG_ON(ret == 0); /* Corruption */
+-	if (path->slots[0] > 0) {
+-		slot = path->slots[0] - 1;
+-		l = path->nodes[0];
+-		btrfs_item_key_to_cpu(l, &found_key, slot);
+-		*objectid = max_t(u64, found_key.objectid,
+-				  BTRFS_FIRST_FREE_OBJECTID - 1);
+-	} else {
+-		*objectid = BTRFS_FIRST_FREE_OBJECTID - 1;
+-	}
+-	ret = 0;
+-error:
+-	btrfs_free_path(path);
+-	return ret;
+-}
+-
+-int btrfs_find_free_objectid(struct btrfs_root *root, u64 *objectid)
+-{
+-	int ret;
+-	mutex_lock(&root->objectid_mutex);
+-
+-	if (unlikely(root->highest_objectid >= BTRFS_LAST_FREE_OBJECTID)) {
+-		btrfs_warn(root->fs_info,
+-			   "the objectid of root %llu reaches its highest value",
+-			   root->root_key.objectid);
+-		ret = -ENOSPC;
+-		goto out;
+-	}
+-
+-	*objectid = ++root->highest_objectid;
+-	ret = 0;
+-out:
+-	mutex_unlock(&root->objectid_mutex);
+-	return ret;
+-}
+diff --git a/fs/btrfs/inode-map.h b/fs/btrfs/inode-map.h
+index 7a962811dffe0..629baf9aefb15 100644
+--- a/fs/btrfs/inode-map.h
++++ b/fs/btrfs/inode-map.h
+@@ -10,7 +10,4 @@ int btrfs_find_free_ino(struct btrfs_root *root, u64 *objectid);
+ int btrfs_save_ino_cache(struct btrfs_root *root,
+ 			 struct btrfs_trans_handle *trans);
+ 
+-int btrfs_find_free_objectid(struct btrfs_root *root, u64 *objectid);
+-int btrfs_find_highest_objectid(struct btrfs_root *root, u64 *objectid);
+-
+ #endif
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 779b7745cdc48..c900a39666e38 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -6273,7 +6273,7 @@ static int btrfs_mknod(struct inode *dir, struct dentry *dentry,
+ 	if (IS_ERR(trans))
+ 		return PTR_ERR(trans);
+ 
+-	err = btrfs_find_free_ino(root, &objectid);
++	err = btrfs_find_free_objectid(root, &objectid);
+ 	if (err)
+ 		goto out_unlock;
+ 
+@@ -6337,7 +6337,7 @@ static int btrfs_create(struct inode *dir, struct dentry *dentry,
+ 	if (IS_ERR(trans))
+ 		return PTR_ERR(trans);
+ 
+-	err = btrfs_find_free_ino(root, &objectid);
++	err = btrfs_find_free_objectid(root, &objectid);
+ 	if (err)
+ 		goto out_unlock;
+ 
+@@ -6481,7 +6481,7 @@ static int btrfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+ 	if (IS_ERR(trans))
+ 		return PTR_ERR(trans);
+ 
+-	err = btrfs_find_free_ino(root, &objectid);
++	err = btrfs_find_free_objectid(root, &objectid);
+ 	if (err)
+ 		goto out_fail;
+ 
+@@ -9135,7 +9135,7 @@ static int btrfs_whiteout_for_rename(struct btrfs_trans_handle *trans,
+ 	u64 objectid;
+ 	u64 index;
+ 
+-	ret = btrfs_find_free_ino(root, &objectid);
++	ret = btrfs_find_free_objectid(root, &objectid);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -9631,7 +9631,7 @@ static int btrfs_symlink(struct inode *dir, struct dentry *dentry,
+ 	if (IS_ERR(trans))
+ 		return PTR_ERR(trans);
+ 
+-	err = btrfs_find_free_ino(root, &objectid);
++	err = btrfs_find_free_objectid(root, &objectid);
+ 	if (err)
+ 		goto out_unlock;
+ 
+@@ -9962,7 +9962,7 @@ static int btrfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
+ 	if (IS_ERR(trans))
+ 		return PTR_ERR(trans);
+ 
+-	ret = btrfs_find_free_ino(root, &objectid);
++	ret = btrfs_find_free_objectid(root, &objectid);
+ 	if (ret)
+ 		goto out;
+ 
+diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
+index 734873be56a74..8e6fc45ccc9eb 100644
+--- a/fs/ceph/snap.c
++++ b/fs/ceph/snap.c
+@@ -1008,6 +1008,19 @@ skip_inode:
+ 				continue;
+ 			adjust_snap_realm_parent(mdsc, child, realm->ino);
+ 		}
++	} else {
++		/*
++		 * In the non-split case both 'num_split_inos' and
++		 * 'num_split_realms' should be 0, making this a no-op.
++		 * However the MDS happens to populate 'split_realms' list
++		 * in one of the UPDATE op cases by mistake.
++		 *
++		 * Skip both lists just in case to ensure that 'p' is
++		 * positioned at the start of realm info, as expected by
++		 * ceph_update_snap_trace().
++		 */
++		p += sizeof(u64) * num_split_inos;
++		p += sizeof(u64) * num_split_realms;
+ 	}
+ 
+ 	/*
+diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h
+index 5136b7289e8da..f06367cfd7641 100644
+--- a/fs/ext2/ext2.h
++++ b/fs/ext2/ext2.h
+@@ -177,6 +177,7 @@ static inline struct ext2_sb_info *EXT2_SB(struct super_block *sb)
+ #define EXT2_MIN_BLOCK_SIZE		1024
+ #define	EXT2_MAX_BLOCK_SIZE		4096
+ #define EXT2_MIN_BLOCK_LOG_SIZE		  10
++#define EXT2_MAX_BLOCK_LOG_SIZE		  16
+ #define EXT2_BLOCK_SIZE(s)		((s)->s_blocksize)
+ #define	EXT2_ADDR_PER_BLOCK(s)		(EXT2_BLOCK_SIZE(s) / sizeof (__u32))
+ #define EXT2_BLOCK_SIZE_BITS(s)		((s)->s_blocksize_bits)
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index 9a6475b2ab28b..ab01ec7ac48c5 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -950,6 +950,13 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ 		goto failed_mount;
+ 	}
+ 
++	if (le32_to_cpu(es->s_log_block_size) >
++	    (EXT2_MAX_BLOCK_LOG_SIZE - BLOCK_SIZE_BITS)) {
++		ext2_msg(sb, KERN_ERR,
++			 "Invalid log block size: %u",
++			 le32_to_cpu(es->s_log_block_size));
++		goto failed_mount;
++	}
+ 	blocksize = BLOCK_SIZE << le32_to_cpu(sbi->s_es->s_log_block_size);
+ 
+ 	if (test_opt(sb, DAX)) {
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index 50a0e90e8af9b..a43167042b6b1 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -319,6 +319,22 @@ static ext4_fsblk_t ext4_valid_block_bitmap_padding(struct super_block *sb,
+ 	return (next_zero_bit < bitmap_size ? next_zero_bit : 0);
+ }
+ 
++struct ext4_group_info *ext4_get_group_info(struct super_block *sb,
++					    ext4_group_t group)
++{
++	 struct ext4_group_info **grp_info;
++	 long indexv, indexh;
++
++	 if (unlikely(group >= EXT4_SB(sb)->s_groups_count)) {
++		 ext4_error(sb, "invalid group %u", group);
++		 return NULL;
++	 }
++	 indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb));
++	 indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1);
++	 grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv);
++	 return grp_info[indexh];
++}
++
+ /*
+  * Return the block number which was discovered to be invalid, or 0 if
+  * the block bitmap is valid.
+@@ -393,7 +409,7 @@ static int ext4_validate_block_bitmap(struct super_block *sb,
+ 
+ 	if (buffer_verified(bh))
+ 		return 0;
+-	if (EXT4_MB_GRP_BBITMAP_CORRUPT(grp))
++	if (!grp || EXT4_MB_GRP_BBITMAP_CORRUPT(grp))
+ 		return -EFSCORRUPTED;
+ 
+ 	ext4_lock_group(sb, block_group);
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 246573a4e8041..84a240025aa46 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1535,12 +1535,15 @@ struct ext4_sb_info {
+ 	atomic_t s_bal_success;	/* we found long enough chunks */
+ 	atomic_t s_bal_allocated;	/* in blocks */
+ 	atomic_t s_bal_ex_scanned;	/* total extents scanned */
++	atomic_t s_bal_groups_scanned;	/* number of groups scanned */
+ 	atomic_t s_bal_goals;	/* goal hits */
+ 	atomic_t s_bal_breaks;	/* too long searches */
+ 	atomic_t s_bal_2orders;	/* 2^order hits */
+-	spinlock_t s_bal_lock;
+-	unsigned long s_mb_buddies_generated;
+-	unsigned long long s_mb_generation_time;
++	atomic64_t s_bal_cX_groups_considered[4];
++	atomic64_t s_bal_cX_hits[4];
++	atomic64_t s_bal_cX_failed[4];		/* cX loop didn't find blocks */
++	atomic_t s_mb_buddies_generated;	/* number of buddies generated */
++	atomic64_t s_mb_generation_time;
+ 	atomic_t s_mb_lost_chunks;
+ 	atomic_t s_mb_preallocated;
+ 	atomic_t s_mb_discarded;
+@@ -2548,6 +2551,8 @@ extern void ext4_check_blocks_bitmap(struct super_block *);
+ extern struct ext4_group_desc * ext4_get_group_desc(struct super_block * sb,
+ 						    ext4_group_t block_group,
+ 						    struct buffer_head ** bh);
++extern struct ext4_group_info *ext4_get_group_info(struct super_block *sb,
++						   ext4_group_t group);
+ extern int ext4_should_retry_alloc(struct super_block *sb, int *retries);
+ 
+ extern struct buffer_head *ext4_read_block_bitmap_nowait(struct super_block *sb,
+@@ -2785,6 +2790,7 @@ int ext4_fc_record_regions(struct super_block *sb, int ino,
+ extern const struct seq_operations ext4_mb_seq_groups_ops;
+ extern long ext4_mb_stats;
+ extern long ext4_mb_max_to_scan;
++extern int ext4_seq_mb_stats_show(struct seq_file *seq, void *offset);
+ extern int ext4_mb_init(struct super_block *);
+ extern int ext4_mb_release(struct super_block *);
+ extern ext4_fsblk_t ext4_mb_new_blocks(handle_t *,
+@@ -3194,19 +3200,6 @@ static inline void ext4_isize_set(struct ext4_inode *raw_inode, loff_t i_size)
+ 	raw_inode->i_size_high = cpu_to_le32(i_size >> 32);
+ }
+ 
+-static inline
+-struct ext4_group_info *ext4_get_group_info(struct super_block *sb,
+-					    ext4_group_t group)
+-{
+-	 struct ext4_group_info **grp_info;
+-	 long indexv, indexh;
+-	 BUG_ON(group >= EXT4_SB(sb)->s_groups_count);
+-	 indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb));
+-	 indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1);
+-	 grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv);
+-	 return grp_info[indexh];
+-}
+-
+ /*
+  * Reading s_groups_count requires using smp_rmb() afterwards.  See
+  * the locking protocol documented in the comments of ext4_group_add()
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index c53c9b1322049..d178543ca13f1 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -91,7 +91,7 @@ static int ext4_validate_inode_bitmap(struct super_block *sb,
+ 
+ 	if (buffer_verified(bh))
+ 		return 0;
+-	if (EXT4_MB_GRP_IBITMAP_CORRUPT(grp))
++	if (!grp || EXT4_MB_GRP_IBITMAP_CORRUPT(grp))
+ 		return -EFSCORRUPTED;
+ 
+ 	ext4_lock_group(sb, block_group);
+@@ -293,7 +293,7 @@ void ext4_free_inode(handle_t *handle, struct inode *inode)
+ 	}
+ 	if (!(sbi->s_mount_state & EXT4_FC_REPLAY)) {
+ 		grp = ext4_get_group_info(sb, block_group);
+-		if (unlikely(EXT4_MB_GRP_IBITMAP_CORRUPT(grp))) {
++		if (!grp || unlikely(EXT4_MB_GRP_IBITMAP_CORRUPT(grp))) {
+ 			fatal = -EFSCORRUPTED;
+ 			goto error_return;
+ 		}
+@@ -1045,7 +1045,7 @@ got_group:
+ 			 * Skip groups with already-known suspicious inode
+ 			 * tables
+ 			 */
+-			if (EXT4_MB_GRP_IBITMAP_CORRUPT(grp))
++			if (!grp || EXT4_MB_GRP_IBITMAP_CORRUPT(grp))
+ 				goto next_group;
+ 		}
+ 
+@@ -1180,6 +1180,10 @@ got:
+ 
+ 		if (!(sbi->s_mount_state & EXT4_FC_REPLAY)) {
+ 			grp = ext4_get_group_info(sb, group);
++			if (!grp) {
++				err = -EFSCORRUPTED;
++				goto out;
++			}
+ 			down_read(&grp->alloc_sem); /*
+ 						     * protect vs itable
+ 						     * lazyinit
+@@ -1523,7 +1527,7 @@ int ext4_init_inode_table(struct super_block *sb, ext4_group_t group,
+ 	}
+ 
+ 	gdp = ext4_get_group_desc(sb, group, &group_desc_bh);
+-	if (!gdp)
++	if (!gdp || !grp)
+ 		goto out;
+ 
+ 	/*
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index a7c42e4bfc5ec..8a51448b76700 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -684,6 +684,8 @@ static int __mb_check_buddy(struct ext4_buddy *e4b, char *file,
+ 	MB_CHECK_ASSERT(e4b->bd_info->bb_fragments == fragments);
+ 
+ 	grp = ext4_get_group_info(sb, e4b->bd_group);
++	if (!grp)
++		return NULL;
+ 	list_for_each(cur, &grp->bb_prealloc_list) {
+ 		ext4_group_t groupnr;
+ 		struct ext4_prealloc_space *pa;
+@@ -767,9 +769,9 @@ mb_set_largest_free_order(struct super_block *sb, struct ext4_group_info *grp)
+ 
+ static noinline_for_stack
+ void ext4_mb_generate_buddy(struct super_block *sb,
+-				void *buddy, void *bitmap, ext4_group_t group)
++			    void *buddy, void *bitmap, ext4_group_t group,
++			    struct ext4_group_info *grp)
+ {
+-	struct ext4_group_info *grp = ext4_get_group_info(sb, group);
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	ext4_grpblk_t max = EXT4_CLUSTERS_PER_GROUP(sb);
+ 	ext4_grpblk_t i = 0;
+@@ -816,28 +818,8 @@ void ext4_mb_generate_buddy(struct super_block *sb,
+ 	clear_bit(EXT4_GROUP_INFO_NEED_INIT_BIT, &(grp->bb_state));
+ 
+ 	period = get_cycles() - period;
+-	spin_lock(&sbi->s_bal_lock);
+-	sbi->s_mb_buddies_generated++;
+-	sbi->s_mb_generation_time += period;
+-	spin_unlock(&sbi->s_bal_lock);
+-}
+-
+-static void mb_regenerate_buddy(struct ext4_buddy *e4b)
+-{
+-	int count;
+-	int order = 1;
+-	void *buddy;
+-
+-	while ((buddy = mb_find_buddy(e4b, order++, &count))) {
+-		ext4_set_bits(buddy, 0, count);
+-	}
+-	e4b->bd_info->bb_fragments = 0;
+-	memset(e4b->bd_info->bb_counters, 0,
+-		sizeof(*e4b->bd_info->bb_counters) *
+-		(e4b->bd_sb->s_blocksize_bits + 2));
+-
+-	ext4_mb_generate_buddy(e4b->bd_sb, e4b->bd_buddy,
+-		e4b->bd_bitmap, e4b->bd_group);
++	atomic_inc(&sbi->s_mb_buddies_generated);
++	atomic64_add(period, &sbi->s_mb_generation_time);
+ }
+ 
+ /* The buddy information is attached the buddy cache inode
+@@ -909,6 +891,8 @@ static int ext4_mb_init_cache(struct page *page, char *incore, gfp_t gfp)
+ 			break;
+ 
+ 		grinfo = ext4_get_group_info(sb, group);
++		if (!grinfo)
++			continue;
+ 		/*
+ 		 * If page is uptodate then we came here after online resize
+ 		 * which added some new uninitialized group info structs, so
+@@ -974,6 +958,10 @@ static int ext4_mb_init_cache(struct page *page, char *incore, gfp_t gfp)
+ 				group, page->index, i * blocksize);
+ 			trace_ext4_mb_buddy_bitmap_load(sb, group);
+ 			grinfo = ext4_get_group_info(sb, group);
++			if (!grinfo) {
++				err = -EFSCORRUPTED;
++				goto out;
++			}
+ 			grinfo->bb_fragments = 0;
+ 			memset(grinfo->bb_counters, 0,
+ 			       sizeof(*grinfo->bb_counters) *
+@@ -984,7 +972,7 @@ static int ext4_mb_init_cache(struct page *page, char *incore, gfp_t gfp)
+ 			ext4_lock_group(sb, group);
+ 			/* init the buddy */
+ 			memset(data, 0xff, blocksize);
+-			ext4_mb_generate_buddy(sb, data, incore, group);
++			ext4_mb_generate_buddy(sb, data, incore, group, grinfo);
+ 			ext4_unlock_group(sb, group);
+ 			incore = NULL;
+ 		} else {
+@@ -1098,6 +1086,9 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group, gfp_t gfp)
+ 	might_sleep();
+ 	mb_debug(sb, "init group %u\n", group);
+ 	this_grp = ext4_get_group_info(sb, group);
++	if (!this_grp)
++		return -EFSCORRUPTED;
++
+ 	/*
+ 	 * This ensures that we don't reinit the buddy cache
+ 	 * page which map to the group from which we are already
+@@ -1172,6 +1163,8 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
+ 
+ 	blocks_per_page = PAGE_SIZE / sb->s_blocksize;
+ 	grp = ext4_get_group_info(sb, group);
++	if (!grp)
++		return -EFSCORRUPTED;
+ 
+ 	e4b->bd_blkbits = sb->s_blocksize_bits;
+ 	e4b->bd_info = grp;
+@@ -1512,7 +1505,6 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b,
+ 				sb, e4b->bd_group,
+ 				EXT4_GROUP_INFO_BBITMAP_CORRUPT);
+ 		}
+-		mb_regenerate_buddy(e4b);
+ 		goto done;
+ 	}
+ 
+@@ -1885,7 +1877,9 @@ int ext4_mb_find_by_goal(struct ext4_allocation_context *ac,
+ 	struct ext4_group_info *grp = ext4_get_group_info(ac->ac_sb, group);
+ 	struct ext4_free_extent ex;
+ 
+-	if (!(ac->ac_flags & EXT4_MB_HINT_TRY_GOAL))
++	if (!grp)
++		return -EFSCORRUPTED;
++	if (!(ac->ac_flags & (EXT4_MB_HINT_TRY_GOAL | EXT4_MB_HINT_GOAL_ONLY)))
+ 		return 0;
+ 	if (grp->bb_free == 0)
+ 		return 0;
+@@ -2109,7 +2103,7 @@ static bool ext4_mb_good_group(struct ext4_allocation_context *ac,
+ 
+ 	BUG_ON(cr < 0 || cr >= 4);
+ 
+-	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp)))
++	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp) || !grp))
+ 		return false;
+ 
+ 	free = grp->bb_free;
+@@ -2172,6 +2166,10 @@ static int ext4_mb_good_group_nolock(struct ext4_allocation_context *ac,
+ 	ext4_grpblk_t free;
+ 	int ret = 0;
+ 
++	if (!grp)
++		return -EFSCORRUPTED;
++	if (sbi->s_mb_stats)
++		atomic64_inc(&sbi->s_bal_cX_groups_considered[ac->ac_criteria]);
+ 	if (should_lock)
+ 		ext4_lock_group(sb, group);
+ 	free = grp->bb_free;
+@@ -2242,7 +2240,7 @@ ext4_group_t ext4_mb_prefetch(struct super_block *sb, ext4_group_t group,
+ 		 * prefetch once, so we avoid getblk() call, which can
+ 		 * be expensive.
+ 		 */
+-		if (!EXT4_MB_GRP_TEST_AND_SET_READ(grp) &&
++		if (gdp && grp && !EXT4_MB_GRP_TEST_AND_SET_READ(grp) &&
+ 		    EXT4_MB_GRP_NEED_INIT(grp) &&
+ 		    ext4_free_group_clusters(sb, gdp) > 0 &&
+ 		    !(ext4_has_group_desc_csum(sb) &&
+@@ -2286,7 +2284,7 @@ void ext4_mb_prefetch_fini(struct super_block *sb, ext4_group_t group,
+ 		group--;
+ 		grp = ext4_get_group_info(sb, group);
+ 
+-		if (EXT4_MB_GRP_NEED_INIT(grp) &&
++		if (grp && gdp && EXT4_MB_GRP_NEED_INIT(grp) &&
+ 		    ext4_free_group_clusters(sb, gdp) > 0 &&
+ 		    !(ext4_has_group_desc_csum(sb) &&
+ 		      (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)))) {
+@@ -2446,6 +2444,9 @@ repeat:
+ 			if (ac->ac_status != AC_STATUS_CONTINUE)
+ 				break;
+ 		}
++		/* Processed all groups and haven't found blocks */
++		if (sbi->s_mb_stats && i == ngroups)
++			atomic64_inc(&sbi->s_bal_cX_failed[cr]);
+ 	}
+ 
+ 	if (ac->ac_b_ex.fe_len > 0 && ac->ac_status != AC_STATUS_FOUND &&
+@@ -2475,6 +2476,9 @@ repeat:
+ 			goto repeat;
+ 		}
+ 	}
++
++	if (sbi->s_mb_stats && ac->ac_status == AC_STATUS_FOUND)
++		atomic64_inc(&sbi->s_bal_cX_hits[ac->ac_criteria]);
+ out:
+ 	if (!err && ac->ac_status != AC_STATUS_FOUND && first_err)
+ 		err = first_err;
+@@ -2538,6 +2542,8 @@ static int ext4_mb_seq_groups_show(struct seq_file *seq, void *v)
+ 		sizeof(struct ext4_group_info);
+ 
+ 	grinfo = ext4_get_group_info(sb, group);
++	if (!grinfo)
++		return 0;
+ 	/* Load the group info in memory only if not already loaded. */
+ 	if (unlikely(EXT4_MB_GRP_NEED_INIT(grinfo))) {
+ 		err = ext4_mb_load_buddy(sb, group, &e4b);
+@@ -2548,7 +2554,7 @@ static int ext4_mb_seq_groups_show(struct seq_file *seq, void *v)
+ 		buddy_loaded = 1;
+ 	}
+ 
+-	memcpy(&sg, ext4_get_group_info(sb, group), i);
++	memcpy(&sg, grinfo, i);
+ 
+ 	if (buddy_loaded)
+ 		ext4_mb_unload_buddy(&e4b);
+@@ -2574,6 +2580,67 @@ const struct seq_operations ext4_mb_seq_groups_ops = {
+ 	.show   = ext4_mb_seq_groups_show,
+ };
+ 
++int ext4_seq_mb_stats_show(struct seq_file *seq, void *offset)
++{
++	struct super_block *sb = (struct super_block *)seq->private;
++	struct ext4_sb_info *sbi = EXT4_SB(sb);
++
++	seq_puts(seq, "mballoc:\n");
++	if (!sbi->s_mb_stats) {
++		seq_puts(seq, "\tmb stats collection turned off.\n");
++		seq_puts(seq, "\tTo enable, please write \"1\" to sysfs file mb_stats.\n");
++		return 0;
++	}
++	seq_printf(seq, "\treqs: %u\n", atomic_read(&sbi->s_bal_reqs));
++	seq_printf(seq, "\tsuccess: %u\n", atomic_read(&sbi->s_bal_success));
++
++	seq_printf(seq, "\tgroups_scanned: %u\n",  atomic_read(&sbi->s_bal_groups_scanned));
++
++	seq_puts(seq, "\tcr0_stats:\n");
++	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[0]));
++	seq_printf(seq, "\t\tgroups_considered: %llu\n",
++		   atomic64_read(&sbi->s_bal_cX_groups_considered[0]));
++	seq_printf(seq, "\t\tuseless_loops: %llu\n",
++		   atomic64_read(&sbi->s_bal_cX_failed[0]));
++
++	seq_puts(seq, "\tcr1_stats:\n");
++	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[1]));
++	seq_printf(seq, "\t\tgroups_considered: %llu\n",
++		   atomic64_read(&sbi->s_bal_cX_groups_considered[1]));
++	seq_printf(seq, "\t\tuseless_loops: %llu\n",
++		   atomic64_read(&sbi->s_bal_cX_failed[1]));
++
++	seq_puts(seq, "\tcr2_stats:\n");
++	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[2]));
++	seq_printf(seq, "\t\tgroups_considered: %llu\n",
++		   atomic64_read(&sbi->s_bal_cX_groups_considered[2]));
++	seq_printf(seq, "\t\tuseless_loops: %llu\n",
++		   atomic64_read(&sbi->s_bal_cX_failed[2]));
++
++	seq_puts(seq, "\tcr3_stats:\n");
++	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[3]));
++	seq_printf(seq, "\t\tgroups_considered: %llu\n",
++		   atomic64_read(&sbi->s_bal_cX_groups_considered[3]));
++	seq_printf(seq, "\t\tuseless_loops: %llu\n",
++		   atomic64_read(&sbi->s_bal_cX_failed[3]));
++	seq_printf(seq, "\textents_scanned: %u\n", atomic_read(&sbi->s_bal_ex_scanned));
++	seq_printf(seq, "\t\tgoal_hits: %u\n", atomic_read(&sbi->s_bal_goals));
++	seq_printf(seq, "\t\t2^n_hits: %u\n", atomic_read(&sbi->s_bal_2orders));
++	seq_printf(seq, "\t\tbreaks: %u\n", atomic_read(&sbi->s_bal_breaks));
++	seq_printf(seq, "\t\tlost: %u\n", atomic_read(&sbi->s_mb_lost_chunks));
++
++	seq_printf(seq, "\tbuddies_generated: %u/%u\n",
++		   atomic_read(&sbi->s_mb_buddies_generated),
++		   ext4_get_groups_count(sb));
++	seq_printf(seq, "\tbuddies_time_used: %llu\n",
++		   atomic64_read(&sbi->s_mb_generation_time));
++	seq_printf(seq, "\tpreallocated: %u\n",
++		   atomic_read(&sbi->s_mb_preallocated));
++	seq_printf(seq, "\tdiscarded: %u\n",
++		   atomic_read(&sbi->s_mb_discarded));
++	return 0;
++}
++
+ static struct kmem_cache *get_groupinfo_cache(int blocksize_bits)
+ {
+ 	int cache_index = blocksize_bits - EXT4_MIN_BLOCK_LOG_SIZE;
+@@ -2764,8 +2831,12 @@ static int ext4_mb_init_backend(struct super_block *sb)
+ 
+ err_freebuddy:
+ 	cachep = get_groupinfo_cache(sb->s_blocksize_bits);
+-	while (i-- > 0)
+-		kmem_cache_free(cachep, ext4_get_group_info(sb, i));
++	while (i-- > 0) {
++		struct ext4_group_info *grp = ext4_get_group_info(sb, i);
++
++		if (grp)
++			kmem_cache_free(cachep, grp);
++	}
+ 	i = sbi->s_group_info_size;
+ 	rcu_read_lock();
+ 	group_info = rcu_dereference(sbi->s_group_info);
+@@ -2874,7 +2945,6 @@ int ext4_mb_init(struct super_block *sb)
+ 	} while (i <= sb->s_blocksize_bits + 1);
+ 
+ 	spin_lock_init(&sbi->s_md_lock);
+-	spin_lock_init(&sbi->s_bal_lock);
+ 	sbi->s_mb_free_pending = 0;
+ 	INIT_LIST_HEAD(&sbi->s_freed_data_list);
+ 
+@@ -2973,6 +3043,8 @@ int ext4_mb_release(struct super_block *sb)
+ 		for (i = 0; i < ngroups; i++) {
+ 			cond_resched();
+ 			grinfo = ext4_get_group_info(sb, i);
++			if (!grinfo)
++				continue;
+ 			mb_group_bb_bitmap_free(grinfo);
+ 			ext4_lock_group(sb, i);
+ 			count = ext4_mb_cleanup_pa(grinfo);
+@@ -3002,17 +3074,18 @@ int ext4_mb_release(struct super_block *sb)
+ 				atomic_read(&sbi->s_bal_reqs),
+ 				atomic_read(&sbi->s_bal_success));
+ 		ext4_msg(sb, KERN_INFO,
+-		      "mballoc: %u extents scanned, %u goal hits, "
++		      "mballoc: %u extents scanned, %u groups scanned, %u goal hits, "
+ 				"%u 2^N hits, %u breaks, %u lost",
+ 				atomic_read(&sbi->s_bal_ex_scanned),
++				atomic_read(&sbi->s_bal_groups_scanned),
+ 				atomic_read(&sbi->s_bal_goals),
+ 				atomic_read(&sbi->s_bal_2orders),
+ 				atomic_read(&sbi->s_bal_breaks),
+ 				atomic_read(&sbi->s_mb_lost_chunks));
+ 		ext4_msg(sb, KERN_INFO,
+-		       "mballoc: %lu generated and it took %Lu",
+-				sbi->s_mb_buddies_generated,
+-				sbi->s_mb_generation_time);
++		       "mballoc: %u generated and it took %llu",
++				atomic_read(&sbi->s_mb_buddies_generated),
++				atomic64_read(&sbi->s_mb_generation_time));
+ 		ext4_msg(sb, KERN_INFO,
+ 		       "mballoc: %u preallocated, %u discarded",
+ 				atomic_read(&sbi->s_mb_preallocated),
+@@ -3439,6 +3512,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
+ 				struct ext4_allocation_request *ar)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
++	struct ext4_super_block *es = sbi->s_es;
+ 	int bsbits, max;
+ 	ext4_lblk_t end;
+ 	loff_t size, start_off;
+@@ -3619,18 +3693,21 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
+ 	ac->ac_g_ex.fe_len = EXT4_NUM_B2C(sbi, size);
+ 
+ 	/* define goal start in order to merge */
+-	if (ar->pright && (ar->lright == (start + size))) {
++	if (ar->pright && (ar->lright == (start + size)) &&
++	    ar->pright >= size &&
++	    ar->pright - size >= le32_to_cpu(es->s_first_data_block)) {
+ 		/* merge to the right */
+ 		ext4_get_group_no_and_offset(ac->ac_sb, ar->pright - size,
+-						&ac->ac_f_ex.fe_group,
+-						&ac->ac_f_ex.fe_start);
++						&ac->ac_g_ex.fe_group,
++						&ac->ac_g_ex.fe_start);
+ 		ac->ac_flags |= EXT4_MB_HINT_TRY_GOAL;
+ 	}
+-	if (ar->pleft && (ar->lleft + 1 == start)) {
++	if (ar->pleft && (ar->lleft + 1 == start) &&
++	    ar->pleft + 1 < ext4_blocks_count(es)) {
+ 		/* merge to the left */
+ 		ext4_get_group_no_and_offset(ac->ac_sb, ar->pleft + 1,
+-						&ac->ac_f_ex.fe_group,
+-						&ac->ac_f_ex.fe_start);
++						&ac->ac_g_ex.fe_group,
++						&ac->ac_g_ex.fe_start);
+ 		ac->ac_flags |= EXT4_MB_HINT_TRY_GOAL;
+ 	}
+ 
+@@ -3642,12 +3719,13 @@ static void ext4_mb_collect_stats(struct ext4_allocation_context *ac)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
+ 
+-	if (sbi->s_mb_stats && ac->ac_g_ex.fe_len > 1) {
++	if (sbi->s_mb_stats && ac->ac_g_ex.fe_len >= 1) {
+ 		atomic_inc(&sbi->s_bal_reqs);
+ 		atomic_add(ac->ac_b_ex.fe_len, &sbi->s_bal_allocated);
+ 		if (ac->ac_b_ex.fe_len >= ac->ac_o_ex.fe_len)
+ 			atomic_inc(&sbi->s_bal_success);
+ 		atomic_add(ac->ac_found, &sbi->s_bal_ex_scanned);
++		atomic_add(ac->ac_groups_scanned, &sbi->s_bal_groups_scanned);
+ 		if (ac->ac_g_ex.fe_start == ac->ac_b_ex.fe_start &&
+ 				ac->ac_g_ex.fe_group == ac->ac_b_ex.fe_group)
+ 			atomic_inc(&sbi->s_bal_goals);
+@@ -3722,6 +3800,7 @@ static void ext4_mb_use_inode_pa(struct ext4_allocation_context *ac,
+ 	BUG_ON(start < pa->pa_pstart);
+ 	BUG_ON(end > pa->pa_pstart + EXT4_C2B(sbi, pa->pa_len));
+ 	BUG_ON(pa->pa_free < len);
++	BUG_ON(ac->ac_b_ex.fe_len <= 0);
+ 	pa->pa_free -= len;
+ 
+ 	mb_debug(ac->ac_sb, "use %llu/%d from inode pa %p\n", start, len, pa);
+@@ -3884,6 +3963,8 @@ static void ext4_mb_generate_from_freelist(struct super_block *sb, void *bitmap,
+ 	struct ext4_free_data *entry;
+ 
+ 	grp = ext4_get_group_info(sb, group);
++	if (!grp)
++		return;
+ 	n = rb_first(&(grp->bb_free_root));
+ 
+ 	while (n) {
+@@ -3911,6 +3992,9 @@ void ext4_mb_generate_from_pa(struct super_block *sb, void *bitmap,
+ 	int preallocated = 0;
+ 	int len;
+ 
++	if (!grp)
++		return;
++
+ 	/* all form of preallocation discards first load group,
+ 	 * so the only competing code is preallocation use.
+ 	 * we don't need any locking here
+@@ -4046,10 +4130,8 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 	pa = ac->ac_pa;
+ 
+ 	if (ac->ac_b_ex.fe_len < ac->ac_g_ex.fe_len) {
+-		int winl;
+-		int wins;
+-		int win;
+-		int offs;
++		int new_bex_start;
++		int new_bex_end;
+ 
+ 		/* we can't allocate as much as normalizer wants.
+ 		 * so, found space must get proper lstart
+@@ -4057,26 +4139,40 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 		BUG_ON(ac->ac_g_ex.fe_logical > ac->ac_o_ex.fe_logical);
+ 		BUG_ON(ac->ac_g_ex.fe_len < ac->ac_o_ex.fe_len);
+ 
+-		/* we're limited by original request in that
+-		 * logical block must be covered any way
+-		 * winl is window we can move our chunk within */
+-		winl = ac->ac_o_ex.fe_logical - ac->ac_g_ex.fe_logical;
++		/*
++		 * Use the below logic for adjusting best extent as it keeps
++		 * fragmentation in check while ensuring logical range of best
++		 * extent doesn't overflow out of goal extent:
++		 *
++		 * 1. Check if best ex can be kept at end of goal and still
++		 *    cover original start
++		 * 2. Else, check if best ex can be kept at start of goal and
++		 *    still cover original start
++		 * 3. Else, keep the best ex at start of original request.
++		 */
++		new_bex_end = ac->ac_g_ex.fe_logical +
++			EXT4_C2B(sbi, ac->ac_g_ex.fe_len);
++		new_bex_start = new_bex_end - EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
++		if (ac->ac_o_ex.fe_logical >= new_bex_start)
++			goto adjust_bex;
+ 
+-		/* also, we should cover whole original request */
+-		wins = EXT4_C2B(sbi, ac->ac_b_ex.fe_len - ac->ac_o_ex.fe_len);
++		new_bex_start = ac->ac_g_ex.fe_logical;
++		new_bex_end =
++			new_bex_start + EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
++		if (ac->ac_o_ex.fe_logical < new_bex_end)
++			goto adjust_bex;
+ 
+-		/* the smallest one defines real window */
+-		win = min(winl, wins);
++		new_bex_start = ac->ac_o_ex.fe_logical;
++		new_bex_end =
++			new_bex_start + EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
+ 
+-		offs = ac->ac_o_ex.fe_logical %
+-			EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
+-		if (offs && offs < win)
+-			win = offs;
++adjust_bex:
++		ac->ac_b_ex.fe_logical = new_bex_start;
+ 
+-		ac->ac_b_ex.fe_logical = ac->ac_o_ex.fe_logical -
+-			EXT4_NUM_B2C(sbi, win);
+ 		BUG_ON(ac->ac_o_ex.fe_logical < ac->ac_b_ex.fe_logical);
+ 		BUG_ON(ac->ac_o_ex.fe_len > ac->ac_b_ex.fe_len);
++		BUG_ON(new_bex_end > (ac->ac_g_ex.fe_logical +
++				      EXT4_C2B(sbi, ac->ac_g_ex.fe_len)));
+ 	}
+ 
+ 	/* preallocation can change ac_b_ex, thus we store actually
+@@ -4102,6 +4198,8 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 
+ 	ei = EXT4_I(ac->ac_inode);
+ 	grp = ext4_get_group_info(sb, ac->ac_b_ex.fe_group);
++	if (!grp)
++		return;
+ 
+ 	pa->pa_obj_lock = &ei->i_prealloc_lock;
+ 	pa->pa_inode = ac->ac_inode;
+@@ -4155,6 +4253,8 @@ ext4_mb_new_group_pa(struct ext4_allocation_context *ac)
+ 	atomic_add(pa->pa_free, &EXT4_SB(sb)->s_mb_preallocated);
+ 
+ 	grp = ext4_get_group_info(sb, ac->ac_b_ex.fe_group);
++	if (!grp)
++		return;
+ 	lg = ac->ac_lg;
+ 	BUG_ON(lg == NULL);
+ 
+@@ -4283,6 +4383,8 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
+ 	int err;
+ 	int free = 0;
+ 
++	if (!grp)
++		return 0;
+ 	mb_debug(sb, "discard preallocation for group %u\n", group);
+ 	if (list_empty(&grp->bb_prealloc_list))
+ 		goto out_dbg;
+@@ -4520,6 +4622,9 @@ static inline void ext4_mb_show_pa(struct super_block *sb)
+ 		struct ext4_prealloc_space *pa;
+ 		ext4_grpblk_t start;
+ 		struct list_head *cur;
++
++		if (!grp)
++			continue;
+ 		ext4_lock_group(sb, i);
+ 		list_for_each(cur, &grp->bb_prealloc_list) {
+ 			pa = list_entry(cur, struct ext4_prealloc_space,
+@@ -5323,6 +5428,7 @@ static void ext4_mb_clear_bb(handle_t *handle, struct inode *inode,
+ 	struct buffer_head *bitmap_bh = NULL;
+ 	struct super_block *sb = inode->i_sb;
+ 	struct ext4_group_desc *gdp;
++	struct ext4_group_info *grp;
+ 	unsigned int overflow;
+ 	ext4_grpblk_t bit;
+ 	struct buffer_head *gd_bh;
+@@ -5348,8 +5454,8 @@ do_more:
+ 	overflow = 0;
+ 	ext4_get_group_no_and_offset(sb, block, &block_group, &bit);
+ 
+-	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(
+-			ext4_get_group_info(sb, block_group))))
++	grp = ext4_get_group_info(sb, block_group);
++	if (unlikely(!grp || EXT4_MB_GRP_BBITMAP_CORRUPT(grp)))
+ 		return;
+ 
+ 	/*
+@@ -5937,6 +6043,8 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 
+ 	for (group = first_group; group <= last_group; group++) {
+ 		grp = ext4_get_group_info(sb, group);
++		if (!grp)
++			continue;
+ 		/* We only do this if the grp has never been initialized */
+ 		if (unlikely(EXT4_MB_GRP_NEED_INIT(grp))) {
+ 			ret = ext4_mb_init_group(sb, group, GFP_NOFS);
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index bc364c119af6a..7a9a8ed1de66c 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -39,28 +39,36 @@ static void ext4_mmp_csum_set(struct super_block *sb, struct mmp_struct *mmp)
+  * Write the MMP block using REQ_SYNC to try to get the block on-disk
+  * faster.
+  */
+-static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
++static int write_mmp_block_thawed(struct super_block *sb,
++				  struct buffer_head *bh)
+ {
+ 	struct mmp_struct *mmp = (struct mmp_struct *)(bh->b_data);
+ 
+-	/*
+-	 * We protect against freezing so that we don't create dirty buffers
+-	 * on frozen filesystem.
+-	 */
+-	sb_start_write(sb);
+ 	ext4_mmp_csum_set(sb, mmp);
+ 	lock_buffer(bh);
+ 	bh->b_end_io = end_buffer_write_sync;
+ 	get_bh(bh);
+ 	submit_bh(REQ_OP_WRITE, REQ_SYNC | REQ_META | REQ_PRIO, bh);
+ 	wait_on_buffer(bh);
+-	sb_end_write(sb);
+ 	if (unlikely(!buffer_uptodate(bh)))
+ 		return -EIO;
+-
+ 	return 0;
+ }
+ 
++static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
++{
++	int err;
++
++	/*
++	 * We protect against freezing so that we don't create dirty buffers
++	 * on frozen filesystem.
++	 */
++	sb_start_write(sb);
++	err = write_mmp_block_thawed(sb, bh);
++	sb_end_write(sb);
++	return err;
++}
++
+ /*
+  * Read the MMP block. It _must_ be read from disk and hence we clear the
+  * uptodate flag on the buffer.
+@@ -290,6 +298,7 @@ int ext4_multi_mount_protect(struct super_block *sb,
+ 	if (mmp_block < le32_to_cpu(es->s_first_data_block) ||
+ 	    mmp_block >= ext4_blocks_count(es)) {
+ 		ext4_warning(sb, "Invalid MMP block in superblock");
++		retval = -EINVAL;
+ 		goto failed;
+ 	}
+ 
+@@ -315,6 +324,7 @@ int ext4_multi_mount_protect(struct super_block *sb,
+ 
+ 	if (seq == EXT4_MMP_SEQ_FSCK) {
+ 		dump_mmp_msg(sb, mmp, "fsck is running on the filesystem");
++		retval = -EBUSY;
+ 		goto failed;
+ 	}
+ 
+@@ -328,6 +338,7 @@ int ext4_multi_mount_protect(struct super_block *sb,
+ 
+ 	if (schedule_timeout_interruptible(HZ * wait_time) != 0) {
+ 		ext4_warning(sb, "MMP startup interrupted, failing mount\n");
++		retval = -ETIMEDOUT;
+ 		goto failed;
+ 	}
+ 
+@@ -338,6 +349,7 @@ int ext4_multi_mount_protect(struct super_block *sb,
+ 	if (seq != le32_to_cpu(mmp->mmp_seq)) {
+ 		dump_mmp_msg(sb, mmp,
+ 			     "Device is already active on another node.");
++		retval = -EBUSY;
+ 		goto failed;
+ 	}
+ 
+@@ -348,7 +360,11 @@ skip:
+ 	seq = mmp_new_seq();
+ 	mmp->mmp_seq = cpu_to_le32(seq);
+ 
+-	retval = write_mmp_block(sb, bh);
++	/*
++	 * On mount / remount we are protected against fs freezing (by s_umount
++	 * semaphore) and grabbing freeze protection upsets lockdep
++	 */
++	retval = write_mmp_block_thawed(sb, bh);
+ 	if (retval)
+ 		goto failed;
+ 
+@@ -357,6 +373,7 @@ skip:
+ 	 */
+ 	if (schedule_timeout_interruptible(HZ * wait_time) != 0) {
+ 		ext4_warning(sb, "MMP startup interrupted, failing mount");
++		retval = -ETIMEDOUT;
+ 		goto failed;
+ 	}
+ 
+@@ -367,6 +384,7 @@ skip:
+ 	if (seq != le32_to_cpu(mmp->mmp_seq)) {
+ 		dump_mmp_msg(sb, mmp,
+ 			     "Device is already active on another node.");
++		retval = -EBUSY;
+ 		goto failed;
+ 	}
+ 
+@@ -383,6 +401,7 @@ skip:
+ 		EXT4_SB(sb)->s_mmp_tsk = NULL;
+ 		ext4_warning(sb, "Unable to create kmmpd thread for %s.",
+ 			     sb->s_id);
++		retval = -ENOMEM;
+ 		goto failed;
+ 	}
+ 
+@@ -390,5 +409,5 @@ skip:
+ 
+ failed:
+ 	brelse(bh);
+-	return 1;
++	return retval;
+ }
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 8694be5132415..d89750e90bc4b 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1011,6 +1011,8 @@ void ext4_mark_group_bitmap_corrupted(struct super_block *sb,
+ 	struct ext4_group_desc *gdp = ext4_get_group_desc(sb, group, NULL);
+ 	int ret;
+ 
++	if (!grp || !gdp)
++		return;
+ 	if (flags & EXT4_GROUP_INFO_BBITMAP_CORRUPT) {
+ 		ret = ext4_test_and_set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT,
+ 					    &grp->bb_state);
+@@ -4777,9 +4779,11 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	needs_recovery = (es->s_last_orphan != 0 ||
+ 			  ext4_has_feature_journal_needs_recovery(sb));
+ 
+-	if (ext4_has_feature_mmp(sb) && !sb_rdonly(sb))
+-		if (ext4_multi_mount_protect(sb, le64_to_cpu(es->s_mmp_block)))
++	if (ext4_has_feature_mmp(sb) && !sb_rdonly(sb)) {
++		err = ext4_multi_mount_protect(sb, le64_to_cpu(es->s_mmp_block));
++		if (err)
+ 			goto failed_mount3a;
++	}
+ 
+ 	/*
+ 	 * The first inode we look at is the journal inode.  Don't try
+@@ -5792,11 +5796,12 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	unsigned long old_sb_flags, vfs_flags;
+ 	struct ext4_mount_options old_opts;
+-	int enable_quota = 0;
+ 	ext4_group_t g;
+ 	unsigned int journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
+ 	int err = 0;
++	int enable_rw = 0;
+ #ifdef CONFIG_QUOTA
++	int enable_quota = 0;
+ 	int i, j;
+ 	char *to_free[EXT4_MAXQUOTAS];
+ #endif
+@@ -5987,14 +5992,16 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 			if (err)
+ 				goto restore_opts;
+ 
+-			sb->s_flags &= ~SB_RDONLY;
+-			if (ext4_has_feature_mmp(sb))
+-				if (ext4_multi_mount_protect(sb,
+-						le64_to_cpu(es->s_mmp_block))) {
+-					err = -EROFS;
++			enable_rw = 1;
++			if (ext4_has_feature_mmp(sb)) {
++				err = ext4_multi_mount_protect(sb,
++						le64_to_cpu(es->s_mmp_block));
++				if (err)
+ 					goto restore_opts;
+-				}
++			}
++#ifdef CONFIG_QUOTA
+ 			enable_quota = 1;
++#endif
+ 		}
+ 	}
+ 
+@@ -6044,6 +6051,9 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks)
+ 		ext4_release_system_zone(sb);
+ 
++	if (enable_rw)
++		sb->s_flags &= ~SB_RDONLY;
++
+ 	if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
+ 		ext4_stop_mmpd(sbi);
+ 
+diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
+index ce74cde6d8faa..b0bb4a92c9c94 100644
+--- a/fs/ext4/sysfs.c
++++ b/fs/ext4/sysfs.c
+@@ -539,6 +539,8 @@ int ext4_register_sysfs(struct super_block *sb)
+ 					ext4_fc_info_show, sb);
+ 		proc_create_seq_data("mb_groups", S_IRUGO, sbi->s_proc,
+ 				&ext4_mb_seq_groups_ops, sb);
++		proc_create_single_data("mb_stats", 0444, sbi->s_proc,
++				ext4_seq_mb_stats_show, sb);
+ 	}
+ 	return 0;
+ }
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index cd46a64ace1b3..8ca549cc975e4 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -309,8 +309,15 @@ static int __f2fs_write_meta_page(struct page *page,
+ 
+ 	trace_f2fs_writepage(page, META);
+ 
+-	if (unlikely(f2fs_cp_error(sbi)))
++	if (unlikely(f2fs_cp_error(sbi))) {
++		if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
++			ClearPageUptodate(page);
++			dec_page_count(sbi, F2FS_DIRTY_META);
++			unlock_page(page);
++			return 0;
++		}
+ 		goto redirty_out;
++	}
+ 	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
+ 		goto redirty_out;
+ 	if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
+@@ -1283,7 +1290,8 @@ void f2fs_wait_on_all_pages(struct f2fs_sb_info *sbi, int type)
+ 		if (!get_pages(sbi, type))
+ 			break;
+ 
+-		if (unlikely(f2fs_cp_error(sbi)))
++		if (unlikely(f2fs_cp_error(sbi) &&
++			!is_sbi_flag_set(sbi, SBI_IS_CLOSE)))
+ 			break;
+ 
+ 		if (type == F2FS_DIRTY_META)
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index e9481c940895c..e0533cffbb076 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2814,7 +2814,8 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
+ 		 * don't drop any dirty dentry pages for keeping lastest
+ 		 * directory structure.
+ 		 */
+-		if (S_ISDIR(inode->i_mode))
++		if (S_ISDIR(inode->i_mode) &&
++				!is_sbi_flag_set(sbi, SBI_IS_CLOSE))
+ 			goto redirty_out;
+ 		goto out;
+ 	}
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index db28c240dae35..87f8110884663 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -405,6 +405,7 @@ static int inode_go_demote_ok(const struct gfs2_glock *gl)
+ 
+ static int gfs2_dinode_in(struct gfs2_inode *ip, const void *buf)
+ {
++	struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+ 	const struct gfs2_dinode *str = buf;
+ 	struct timespec64 atime;
+ 	u16 height, depth;
+@@ -444,7 +445,7 @@ static int gfs2_dinode_in(struct gfs2_inode *ip, const void *buf)
+ 	/* i_diskflags and i_eattr must be set before gfs2_set_inode_flags() */
+ 	gfs2_set_inode_flags(&ip->i_inode);
+ 	height = be16_to_cpu(str->di_height);
+-	if (unlikely(height > GFS2_MAX_META_HEIGHT))
++	if (unlikely(height > sdp->sd_max_height))
+ 		goto corrupt;
+ 	ip->i_height = (u8)height;
+ 
+diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c
+index c60d5ceb0d31c..7e1d889dcc07a 100644
+--- a/fs/hfsplus/inode.c
++++ b/fs/hfsplus/inode.c
+@@ -497,7 +497,11 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
+ 	if (type == HFSPLUS_FOLDER) {
+ 		struct hfsplus_cat_folder *folder = &entry.folder;
+ 
+-		WARN_ON(fd->entrylength < sizeof(struct hfsplus_cat_folder));
++		if (fd->entrylength < sizeof(struct hfsplus_cat_folder)) {
++			pr_err("bad catalog folder entry\n");
++			res = -EIO;
++			goto out;
++		}
+ 		hfs_bnode_read(fd->bnode, &entry, fd->entryoffset,
+ 					sizeof(struct hfsplus_cat_folder));
+ 		hfsplus_get_perms(inode, &folder->permissions, 1);
+@@ -517,7 +521,11 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
+ 	} else if (type == HFSPLUS_FILE) {
+ 		struct hfsplus_cat_file *file = &entry.file;
+ 
+-		WARN_ON(fd->entrylength < sizeof(struct hfsplus_cat_file));
++		if (fd->entrylength < sizeof(struct hfsplus_cat_file)) {
++			pr_err("bad catalog file entry\n");
++			res = -EIO;
++			goto out;
++		}
+ 		hfs_bnode_read(fd->bnode, &entry, fd->entryoffset,
+ 					sizeof(struct hfsplus_cat_file));
+ 
+@@ -548,6 +556,7 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
+ 		pr_err("bad catalog entry used to create inode\n");
+ 		res = -EIO;
+ 	}
++out:
+ 	return res;
+ }
+ 
+@@ -556,6 +565,7 @@ int hfsplus_cat_write_inode(struct inode *inode)
+ 	struct inode *main_inode = inode;
+ 	struct hfs_find_data fd;
+ 	hfsplus_cat_entry entry;
++	int res = 0;
+ 
+ 	if (HFSPLUS_IS_RSRC(inode))
+ 		main_inode = HFSPLUS_I(inode)->rsrc_inode;
+@@ -574,7 +584,11 @@ int hfsplus_cat_write_inode(struct inode *inode)
+ 	if (S_ISDIR(main_inode->i_mode)) {
+ 		struct hfsplus_cat_folder *folder = &entry.folder;
+ 
+-		WARN_ON(fd.entrylength < sizeof(struct hfsplus_cat_folder));
++		if (fd.entrylength < sizeof(struct hfsplus_cat_folder)) {
++			pr_err("bad catalog folder entry\n");
++			res = -EIO;
++			goto out;
++		}
+ 		hfs_bnode_read(fd.bnode, &entry, fd.entryoffset,
+ 					sizeof(struct hfsplus_cat_folder));
+ 		/* simple node checks? */
+@@ -599,7 +613,11 @@ int hfsplus_cat_write_inode(struct inode *inode)
+ 	} else {
+ 		struct hfsplus_cat_file *file = &entry.file;
+ 
+-		WARN_ON(fd.entrylength < sizeof(struct hfsplus_cat_file));
++		if (fd.entrylength < sizeof(struct hfsplus_cat_file)) {
++			pr_err("bad catalog file entry\n");
++			res = -EIO;
++			goto out;
++		}
+ 		hfs_bnode_read(fd.bnode, &entry, fd.entryoffset,
+ 					sizeof(struct hfsplus_cat_file));
+ 		hfsplus_inode_write_fork(inode, &file->data_fork);
+@@ -620,5 +638,5 @@ int hfsplus_cat_write_inode(struct inode *inode)
+ 	set_bit(HFSPLUS_I_CAT_DIRTY, &HFSPLUS_I(inode)->flags);
+ out:
+ 	hfs_find_exit(&fd);
+-	return 0;
++	return res;
+ }
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index fb594edc0837c..042f4512e47d9 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -921,6 +921,7 @@ void nilfs_evict_inode(struct inode *inode)
+ 	struct nilfs_transaction_info ti;
+ 	struct super_block *sb = inode->i_sb;
+ 	struct nilfs_inode_info *ii = NILFS_I(inode);
++	struct the_nilfs *nilfs;
+ 	int ret;
+ 
+ 	if (inode->i_nlink || !ii->i_root || unlikely(is_bad_inode(inode))) {
+@@ -933,6 +934,23 @@ void nilfs_evict_inode(struct inode *inode)
+ 
+ 	truncate_inode_pages_final(&inode->i_data);
+ 
++	nilfs = sb->s_fs_info;
++	if (unlikely(sb_rdonly(sb) || !nilfs->ns_writer)) {
++		/*
++		 * If this inode is about to be disposed after the file system
++		 * has been degraded to read-only due to file system corruption
++		 * or after the writer has been detached, do not make any
++		 * changes that cause writes, just clear it.
++		 * Do this check after read-locking ns_segctor_sem by
++		 * nilfs_transaction_begin() in order to avoid a race with
++		 * the writer detach operation.
++		 */
++		clear_inode(inode);
++		nilfs_clear_inode(inode);
++		nilfs_transaction_abort(sb);
++		return;
++	}
++
+ 	/* TODO: some of the following operations may fail.  */
+ 	nilfs_truncate_bmap(ii, 0);
+ 	nilfs_mark_inode_dirty(inode);
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index df1f6b7aa7979..d6a0e719b1ad9 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -242,6 +242,7 @@ static int ocfs2_mknod(struct inode *dir,
+ 	int want_meta = 0;
+ 	int xattr_credits = 0;
+ 	struct ocfs2_security_xattr_info si = {
++		.name = NULL,
+ 		.enable = 1,
+ 	};
+ 	int did_quota_inode = 0;
+@@ -1801,6 +1802,7 @@ static int ocfs2_symlink(struct inode *dir,
+ 	int want_clusters = 0;
+ 	int xattr_credits = 0;
+ 	struct ocfs2_security_xattr_info si = {
++		.name = NULL,
+ 		.enable = 1,
+ 	};
+ 	int did_quota = 0, did_quota_inode = 0;
+diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c
+index 9ccd19d8f7b18..10df2e1dfef72 100644
+--- a/fs/ocfs2/xattr.c
++++ b/fs/ocfs2/xattr.c
+@@ -7260,9 +7260,21 @@ static int ocfs2_xattr_security_set(const struct xattr_handler *handler,
+ static int ocfs2_initxattrs(struct inode *inode, const struct xattr *xattr_array,
+ 		     void *fs_info)
+ {
++	struct ocfs2_security_xattr_info *si = fs_info;
+ 	const struct xattr *xattr;
+ 	int err = 0;
+ 
++	if (si) {
++		si->value = kmemdup(xattr_array->value, xattr_array->value_len,
++				    GFP_KERNEL);
++		if (!si->value)
++			return -ENOMEM;
++
++		si->name = xattr_array->name;
++		si->value_len = xattr_array->value_len;
++		return 0;
++	}
++
+ 	for (xattr = xattr_array; xattr->name != NULL; xattr++) {
+ 		err = ocfs2_xattr_set(inode, OCFS2_XATTR_INDEX_SECURITY,
+ 				      xattr->name, xattr->value,
+@@ -7278,13 +7290,23 @@ int ocfs2_init_security_get(struct inode *inode,
+ 			    const struct qstr *qstr,
+ 			    struct ocfs2_security_xattr_info *si)
+ {
++	int ret;
++
+ 	/* check whether ocfs2 support feature xattr */
+ 	if (!ocfs2_supports_xattr(OCFS2_SB(dir->i_sb)))
+ 		return -EOPNOTSUPP;
+-	if (si)
+-		return security_old_inode_init_security(inode, dir, qstr,
+-							&si->name, &si->value,
+-							&si->value_len);
++	if (si) {
++		ret = security_inode_init_security(inode, dir, qstr,
++						   &ocfs2_initxattrs, si);
++		/*
++		 * security_inode_init_security() does not return -EOPNOTSUPP,
++		 * we have to check the xattr ourselves.
++		 */
++		if (!ret && !si->name)
++			si->enable = 0;
++
++		return ret;
++	}
+ 
+ 	return security_inode_init_security(inode, dir, qstr,
+ 					    &ocfs2_initxattrs, NULL);
+diff --git a/fs/statfs.c b/fs/statfs.c
+index 59f33752c1311..d42b44dc0e493 100644
+--- a/fs/statfs.c
++++ b/fs/statfs.c
+@@ -130,6 +130,7 @@ static int do_statfs_native(struct kstatfs *st, struct statfs __user *p)
+ 	if (sizeof(buf) == sizeof(*st))
+ 		memcpy(&buf, st, sizeof(*st));
+ 	else {
++		memset(&buf, 0, sizeof(buf));
+ 		if (sizeof buf.f_blocks == 4) {
+ 			if ((st->f_blocks | st->f_bfree | st->f_bavail |
+ 			     st->f_bsize | st->f_frsize) &
+@@ -158,7 +159,6 @@ static int do_statfs_native(struct kstatfs *st, struct statfs __user *p)
+ 		buf.f_namelen = st->f_namelen;
+ 		buf.f_frsize = st->f_frsize;
+ 		buf.f_flags = st->f_flags;
+-		memset(buf.f_spare, 0, sizeof(buf.f_spare));
+ 	}
+ 	if (copy_to_user(p, &buf, sizeof(buf)))
+ 		return -EFAULT;
+@@ -171,6 +171,7 @@ static int do_statfs64(struct kstatfs *st, struct statfs64 __user *p)
+ 	if (sizeof(buf) == sizeof(*st))
+ 		memcpy(&buf, st, sizeof(*st));
+ 	else {
++		memset(&buf, 0, sizeof(buf));
+ 		buf.f_type = st->f_type;
+ 		buf.f_bsize = st->f_bsize;
+ 		buf.f_blocks = st->f_blocks;
+@@ -182,7 +183,6 @@ static int do_statfs64(struct kstatfs *st, struct statfs64 __user *p)
+ 		buf.f_namelen = st->f_namelen;
+ 		buf.f_frsize = st->f_frsize;
+ 		buf.f_flags = st->f_flags;
+-		memset(buf.f_spare, 0, sizeof(buf.f_spare));
+ 	}
+ 	if (copy_to_user(p, &buf, sizeof(buf)))
+ 		return -EFAULT;
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index fc945f9df2c1d..cb87247da5ba1 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -115,7 +115,6 @@ enum cpuhp_state {
+ 	CPUHP_AP_PERF_X86_CSTATE_STARTING,
+ 	CPUHP_AP_PERF_XTENSA_STARTING,
+ 	CPUHP_AP_MIPS_OP_LOONGSON3_STARTING,
+-	CPUHP_AP_ARM_SDEI_STARTING,
+ 	CPUHP_AP_ARM_VFP_STARTING,
+ 	CPUHP_AP_ARM64_DEBUG_MONITORS_STARTING,
+ 	CPUHP_AP_PERF_ARM_HW_BREAKPOINT_STARTING,
+diff --git a/include/linux/device.h b/include/linux/device.h
+index 5dc0f81e4f9d4..4f7e0c85e11fa 100644
+--- a/include/linux/device.h
++++ b/include/linux/device.h
+@@ -818,6 +818,7 @@ int device_online(struct device *dev);
+ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode);
+ void set_secondary_fwnode(struct device *dev, struct fwnode_handle *fwnode);
+ void device_set_of_node_from_dev(struct device *dev, const struct device *dev2);
++void device_set_node(struct device *dev, struct fwnode_handle *fwnode);
+ 
+ static inline int dev_num_vf(struct device *dev)
+ {
+diff --git a/include/linux/dim.h b/include/linux/dim.h
+index 6c5733981563e..f343bc9aa2ec9 100644
+--- a/include/linux/dim.h
++++ b/include/linux/dim.h
+@@ -236,8 +236,9 @@ void dim_park_tired(struct dim *dim);
+  *
+  * Calculate the delta between two samples (in data rates).
+  * Takes into consideration counter wrap-around.
++ * Returned boolean indicates whether curr_stats are reliable.
+  */
+-void dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
++bool dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
+ 		    struct dim_stats *curr_stats);
+ 
+ /**
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 8ce9e5c61ede8..f9e25d0a7b9c8 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1348,29 +1348,29 @@ extern int send_sigurg(struct fown_struct *fown);
+  * sb->s_flags.  Note that these mirror the equivalent MS_* flags where
+  * represented in both.
+  */
+-#define SB_RDONLY	 1	/* Mount read-only */
+-#define SB_NOSUID	 2	/* Ignore suid and sgid bits */
+-#define SB_NODEV	 4	/* Disallow access to device special files */
+-#define SB_NOEXEC	 8	/* Disallow program execution */
+-#define SB_SYNCHRONOUS	16	/* Writes are synced at once */
+-#define SB_MANDLOCK	64	/* Allow mandatory locks on an FS */
+-#define SB_DIRSYNC	128	/* Directory modifications are synchronous */
+-#define SB_NOATIME	1024	/* Do not update access times. */
+-#define SB_NODIRATIME	2048	/* Do not update directory access times */
+-#define SB_SILENT	32768
+-#define SB_POSIXACL	(1<<16)	/* VFS does not apply the umask */
+-#define SB_INLINECRYPT	(1<<17)	/* Use blk-crypto for encrypted files */
+-#define SB_KERNMOUNT	(1<<22) /* this is a kern_mount call */
+-#define SB_I_VERSION	(1<<23) /* Update inode I_version field */
+-#define SB_LAZYTIME	(1<<25) /* Update the on-disk [acm]times lazily */
++#define SB_RDONLY       BIT(0)	/* Mount read-only */
++#define SB_NOSUID       BIT(1)	/* Ignore suid and sgid bits */
++#define SB_NODEV        BIT(2)	/* Disallow access to device special files */
++#define SB_NOEXEC       BIT(3)	/* Disallow program execution */
++#define SB_SYNCHRONOUS  BIT(4)	/* Writes are synced at once */
++#define SB_MANDLOCK     BIT(6)	/* Allow mandatory locks on an FS */
++#define SB_DIRSYNC      BIT(7)	/* Directory modifications are synchronous */
++#define SB_NOATIME      BIT(10)	/* Do not update access times. */
++#define SB_NODIRATIME   BIT(11)	/* Do not update directory access times */
++#define SB_SILENT       BIT(15)
++#define SB_POSIXACL     BIT(16)	/* VFS does not apply the umask */
++#define SB_INLINECRYPT  BIT(17)	/* Use blk-crypto for encrypted files */
++#define SB_KERNMOUNT    BIT(22)	/* this is a kern_mount call */
++#define SB_I_VERSION    BIT(23)	/* Update inode I_version field */
++#define SB_LAZYTIME     BIT(25)	/* Update the on-disk [acm]times lazily */
+ 
+ /* These sb flags are internal to the kernel */
+-#define SB_SUBMOUNT     (1<<26)
+-#define SB_FORCE    	(1<<27)
+-#define SB_NOSEC	(1<<28)
+-#define SB_BORN		(1<<29)
+-#define SB_ACTIVE	(1<<30)
+-#define SB_NOUSER	(1<<31)
++#define SB_SUBMOUNT     BIT(26)
++#define SB_FORCE        BIT(27)
++#define SB_NOSEC        BIT(28)
++#define SB_BORN         BIT(29)
++#define SB_ACTIVE       BIT(30)
++#define SB_NOUSER       BIT(31)
+ 
+ /* These flags relate to encoding and casefolding */
+ #define SB_ENC_STRICT_MODE_FL	(1 << 0)
+diff --git a/include/linux/if_team.h b/include/linux/if_team.h
+index add607943c956..5dd1657947b75 100644
+--- a/include/linux/if_team.h
++++ b/include/linux/if_team.h
+@@ -208,6 +208,7 @@ struct team {
+ 	bool queue_override_enabled;
+ 	struct list_head *qom_lists; /* array of queue override mapping lists */
+ 	bool port_mtu_change_allowed;
++	bool notifier_ctx;
+ 	struct {
+ 		unsigned int count;
+ 		unsigned int interval; /* in ms */
+diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
+index 41a518336673b..4e7e72f3da5bd 100644
+--- a/include/linux/if_vlan.h
++++ b/include/linux/if_vlan.h
+@@ -626,6 +626,23 @@ static inline __be16 vlan_get_protocol(const struct sk_buff *skb)
+ 	return __vlan_get_protocol(skb, skb->protocol, NULL);
+ }
+ 
++/* This version of __vlan_get_protocol() also pulls mac header in skb->head */
++static inline __be16 vlan_get_protocol_and_depth(struct sk_buff *skb,
++						 __be16 type, int *depth)
++{
++	int maclen;
++
++	type = __vlan_get_protocol(skb, type, &maclen);
++
++	if (type) {
++		if (!pskb_may_pull(skb, maclen))
++			type = 0;
++		else if (depth)
++			*depth = maclen;
++	}
++	return type;
++}
++
+ /* A getter for the SKB protocol field which will handle VLAN tags consistently
+  * whether VLAN acceleration is enabled or not.
+  */
+diff --git a/include/linux/power/bq27xxx_battery.h b/include/linux/power/bq27xxx_battery.h
+index 8d5f4f40fb418..705b94bd091e3 100644
+--- a/include/linux/power/bq27xxx_battery.h
++++ b/include/linux/power/bq27xxx_battery.h
+@@ -67,6 +67,7 @@ struct bq27xxx_device_info {
+ 	struct bq27xxx_access_methods bus;
+ 	struct bq27xxx_reg_cache cache;
+ 	int charge_design_full;
++	bool removed;
+ 	unsigned long last_update;
+ 	struct delayed_work work;
+ 	struct power_supply *bat;
+diff --git a/include/linux/sched/task_stack.h b/include/linux/sched/task_stack.h
+index d10150587d819..f24575942dabe 100644
+--- a/include/linux/sched/task_stack.h
++++ b/include/linux/sched/task_stack.h
+@@ -23,7 +23,7 @@ static inline void *task_stack_page(const struct task_struct *task)
+ 
+ #define setup_thread_stack(new,old)	do { } while(0)
+ 
+-static inline unsigned long *end_of_stack(const struct task_struct *task)
++static __always_inline unsigned long *end_of_stack(const struct task_struct *task)
+ {
+ #ifdef CONFIG_STACK_GROWSUP
+ 	return (unsigned long *)((unsigned long)task->stack + THREAD_SIZE) - 1;
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index 568be613bdb31..bc59237727033 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -279,6 +279,11 @@ void usb_put_intf(struct usb_interface *intf);
+ #define USB_MAXINTERFACES	32
+ #define USB_MAXIADS		(USB_MAXINTERFACES/2)
+ 
++bool usb_check_bulk_endpoints(
++		const struct usb_interface *intf, const u8 *ep_addrs);
++bool usb_check_int_endpoints(
++		const struct usb_interface *intf, const u8 *ep_addrs);
++
+ /*
+  * USB Resume Timer: Every Host controller driver should drive the resume
+  * signalling on the bus for the amount of time defined by this macro.
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index d9cc3f5602fb2..a248caff969f5 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -216,6 +216,7 @@ struct bonding {
+ 	struct   bond_up_slave __rcu *usable_slaves;
+ 	struct   bond_up_slave __rcu *all_slaves;
+ 	bool     force_primary;
++	bool     notifier_ctx;
+ 	s32      slave_cnt; /* never change this value outside the attach/detach wrappers */
+ 	int     (*recv_probe)(const struct sk_buff *, struct bonding *,
+ 			      struct slave *);
+diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
+index d609e957a3ec0..c02c3bb0fe091 100644
+--- a/include/net/ip_vs.h
++++ b/include/net/ip_vs.h
+@@ -549,8 +549,10 @@ struct ip_vs_conn {
+ 	 */
+ 	struct ip_vs_app        *app;           /* bound ip_vs_app object */
+ 	void                    *app_data;      /* Application private data */
+-	struct ip_vs_seq        in_seq;         /* incoming seq. struct */
+-	struct ip_vs_seq        out_seq;        /* outgoing seq. struct */
++	struct_group(sync_conn_opt,
++		struct ip_vs_seq  in_seq;       /* incoming seq. struct */
++		struct ip_vs_seq  out_seq;      /* outgoing seq. struct */
++	);
+ 
+ 	const struct ip_vs_pe	*pe;
+ 	char			*pe_data;
+diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
+index d8b320cf54ba0..4a4a5270ff6f2 100644
+--- a/include/net/netns/ipv4.h
++++ b/include/net/netns/ipv4.h
+@@ -71,7 +71,6 @@ struct netns_ipv4 {
+ 	struct sock		*mc_autojoin_sk;
+ 
+ 	struct inet_peer_base	*peers;
+-	struct sock  * __percpu	*tcp_sk;
+ 	struct fqdir		*fqdir;
+ #ifdef CONFIG_NETFILTER
+ 	struct xt_table		*iptable_filter;
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 61cd19ee51f4e..a62677be74528 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -1320,11 +1320,6 @@ void mini_qdisc_pair_init(struct mini_Qdisc_pair *miniqp, struct Qdisc *qdisc,
+ void mini_qdisc_pair_block_init(struct mini_Qdisc_pair *miniqp,
+ 				struct tcf_block *block);
+ 
+-static inline int skb_tc_reinsert(struct sk_buff *skb, struct tcf_result *res)
+-{
+-	return res->ingress ? netif_receive_skb(skb) : dev_queue_xmit(skb);
+-}
+-
+ /* Make sure qdisc is no longer in SCHED state. */
+ static inline void qdisc_synchronize(const struct Qdisc *q)
+ {
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 1d8529311d6f9..651dc0a7bbd58 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2535,7 +2535,7 @@ static inline void sock_recv_ts_and_drops(struct msghdr *msg, struct sock *sk,
+ 		__sock_recv_ts_and_drops(msg, sk, skb);
+ 	else if (unlikely(sock_flag(sk, SOCK_TIMESTAMP)))
+ 		sock_write_timestamp(sk, skb->tstamp);
+-	else if (unlikely(sk->sk_stamp == SK_DEFAULT_STAMP))
++	else if (unlikely(sock_read_timestamp(sk) == SK_DEFAULT_STAMP))
+ 		sock_write_timestamp(sk, 0);
+ }
+ 
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 9a8d98639b20f..d213b86a48227 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -381,6 +381,7 @@ void tcp_update_metrics(struct sock *sk);
+ void tcp_init_metrics(struct sock *sk);
+ void tcp_metrics_init(void);
+ bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst);
++void __tcp_close(struct sock *sk, long timeout);
+ void tcp_close(struct sock *sk, long timeout);
+ void tcp_init_sock(struct sock *sk);
+ void tcp_init_transfer(struct sock *sk, int bpf_op, struct sk_buff *skb);
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 8a9943d935f14..726a2dbb407f1 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1198,6 +1198,8 @@ int __xfrm_sk_clone_policy(struct sock *sk, const struct sock *osk);
+ 
+ static inline int xfrm_sk_clone_policy(struct sock *sk, const struct sock *osk)
+ {
++	if (!sk_fullsock(osk))
++		return 0;
+ 	sk->sk_policy[0] = NULL;
+ 	sk->sk_policy[1] = NULL;
+ 	if (unlikely(osk->sk_policy[0] || osk->sk_policy[1]))
+diff --git a/include/uapi/sound/skl-tplg-interface.h b/include/uapi/sound/skl-tplg-interface.h
+index a93c0decfdd53..215ce16b37d2b 100644
+--- a/include/uapi/sound/skl-tplg-interface.h
++++ b/include/uapi/sound/skl-tplg-interface.h
+@@ -66,7 +66,8 @@ enum skl_ch_cfg {
+ 	SKL_CH_CFG_DUAL_MONO = 9,
+ 	SKL_CH_CFG_I2S_DUAL_STEREO_0 = 10,
+ 	SKL_CH_CFG_I2S_DUAL_STEREO_1 = 11,
+-	SKL_CH_CFG_4_CHANNEL = 12,
++	SKL_CH_CFG_7_1 = 12,
++	SKL_CH_CFG_4_CHANNEL = SKL_CH_CFG_7_1,
+ 	SKL_CH_CFG_INVALID
+ };
+ 
+diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
+index 8aaaaef99f09f..f753965726205 100644
+--- a/kernel/bpf/bpf_local_storage.c
++++ b/kernel/bpf/bpf_local_storage.c
+@@ -48,11 +48,21 @@ owner_storage(struct bpf_local_storage_map *smap, void *owner)
+ 	return map->ops->map_owner_storage_ptr(owner);
+ }
+ 
++static bool selem_linked_to_storage_lockless(const struct bpf_local_storage_elem *selem)
++{
++	return !hlist_unhashed_lockless(&selem->snode);
++}
++
+ static bool selem_linked_to_storage(const struct bpf_local_storage_elem *selem)
+ {
+ 	return !hlist_unhashed(&selem->snode);
+ }
+ 
++static bool selem_linked_to_map_lockless(const struct bpf_local_storage_elem *selem)
++{
++	return !hlist_unhashed_lockless(&selem->map_node);
++}
++
+ static bool selem_linked_to_map(const struct bpf_local_storage_elem *selem)
+ {
+ 	return !hlist_unhashed(&selem->map_node);
+@@ -140,7 +150,7 @@ static void __bpf_selem_unlink_storage(struct bpf_local_storage_elem *selem)
+ 	struct bpf_local_storage *local_storage;
+ 	bool free_local_storage = false;
+ 
+-	if (unlikely(!selem_linked_to_storage(selem)))
++	if (unlikely(!selem_linked_to_storage_lockless(selem)))
+ 		/* selem has already been unlinked from sk */
+ 		return;
+ 
+@@ -167,7 +177,7 @@ void bpf_selem_unlink_map(struct bpf_local_storage_elem *selem)
+ 	struct bpf_local_storage_map *smap;
+ 	struct bpf_local_storage_map_bucket *b;
+ 
+-	if (unlikely(!selem_linked_to_map(selem)))
++	if (unlikely(!selem_linked_to_map_lockless(selem)))
+ 		/* selem has already be unlinked from smap */
+ 		return;
+ 
+@@ -365,7 +375,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
+ 		err = check_flags(old_sdata, map_flags);
+ 		if (err)
+ 			return ERR_PTR(err);
+-		if (old_sdata && selem_linked_to_storage(SELEM(old_sdata))) {
++		if (old_sdata && selem_linked_to_storage_lockless(SELEM(old_sdata))) {
+ 			copy_map_value_locked(&smap->map, old_sdata->data,
+ 					      value, false);
+ 			return old_sdata;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 6876796a8de0c..fd2082a9bf81b 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -11146,7 +11146,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ 					insn_buf[cnt++] = BPF_ALU64_IMM(BPF_RSH,
+ 									insn->dst_reg,
+ 									shift);
+-				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
++				insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg,
+ 								(1ULL << size * 8) - 1);
+ 			}
+ 		}
+diff --git a/kernel/rcu/refscale.c b/kernel/rcu/refscale.c
+index 952595c678b37..4e419ca6d6114 100644
+--- a/kernel/rcu/refscale.c
++++ b/kernel/rcu/refscale.c
+@@ -625,7 +625,7 @@ ref_scale_cleanup(void)
+ static int
+ ref_scale_shutdown(void *arg)
+ {
+-	wait_event(shutdown_wq, shutdown_start);
++	wait_event_idle(shutdown_wq, shutdown_start);
+ 
+ 	smp_mb(); // Wake before output.
+ 	ref_scale_cleanup();
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index ef6570137dcd5..401c1f331cafa 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -707,9 +707,11 @@ static int rcu_print_task_exp_stall(struct rcu_node *rnp)
+ 	int ndetected = 0;
+ 	struct task_struct *t;
+ 
+-	if (!READ_ONCE(rnp->exp_tasks))
+-		return 0;
+ 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
++	if (!rnp->exp_tasks) {
++		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
++		return 0;
++	}
+ 	t = list_entry(rnp->exp_tasks->prev,
+ 		       struct task_struct, rcu_node_entry);
+ 	list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
+diff --git a/lib/cpu_rmap.c b/lib/cpu_rmap.c
+index f08d9c56f712e..e77f12bb3c774 100644
+--- a/lib/cpu_rmap.c
++++ b/lib/cpu_rmap.c
+@@ -232,7 +232,8 @@ void free_irq_cpu_rmap(struct cpu_rmap *rmap)
+ 
+ 	for (index = 0; index < rmap->used; index++) {
+ 		glue = rmap->obj[index];
+-		irq_set_affinity_notifier(glue->notify.irq, NULL);
++		if (glue)
++			irq_set_affinity_notifier(glue->notify.irq, NULL);
+ 	}
+ 
+ 	cpu_rmap_put(rmap);
+@@ -268,6 +269,7 @@ static void irq_cpu_rmap_release(struct kref *ref)
+ 		container_of(ref, struct irq_glue, notify.kref);
+ 
+ 	cpu_rmap_put(glue->rmap);
++	glue->rmap->obj[glue->index] = NULL;
+ 	kfree(glue);
+ }
+ 
+@@ -297,6 +299,7 @@ int irq_cpu_rmap_add(struct cpu_rmap *rmap, int irq)
+ 	rc = irq_set_affinity_notifier(irq, &glue->notify);
+ 	if (rc) {
+ 		cpu_rmap_put(glue->rmap);
++		rmap->obj[glue->index] = NULL;
+ 		kfree(glue);
+ 	}
+ 	return rc;
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index 824337ec36aa8..4c39678c03ee5 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -129,7 +129,7 @@ static const char *obj_states[ODEBUG_STATE_MAX] = {
+ 
+ static void fill_pool(void)
+ {
+-	gfp_t gfp = GFP_ATOMIC | __GFP_NORETRY | __GFP_NOWARN;
++	gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+ 
+diff --git a/lib/dim/dim.c b/lib/dim/dim.c
+index 38045d6d05381..e89aaf07bde50 100644
+--- a/lib/dim/dim.c
++++ b/lib/dim/dim.c
+@@ -54,7 +54,7 @@ void dim_park_tired(struct dim *dim)
+ }
+ EXPORT_SYMBOL(dim_park_tired);
+ 
+-void dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
++bool dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
+ 		    struct dim_stats *curr_stats)
+ {
+ 	/* u32 holds up to 71 minutes, should be enough */
+@@ -66,7 +66,7 @@ void dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
+ 			     start->comp_ctr);
+ 
+ 	if (!delta_us)
+-		return;
++		return false;
+ 
+ 	curr_stats->ppms = DIV_ROUND_UP(npkts * USEC_PER_MSEC, delta_us);
+ 	curr_stats->bpms = DIV_ROUND_UP(nbytes * USEC_PER_MSEC, delta_us);
+@@ -79,5 +79,6 @@ void dim_calc_stats(struct dim_sample *start, struct dim_sample *end,
+ 	else
+ 		curr_stats->cpe_ratio = 0;
+ 
++	return true;
+ }
+ EXPORT_SYMBOL(dim_calc_stats);
+diff --git a/lib/dim/net_dim.c b/lib/dim/net_dim.c
+index dae3b51ac3d9b..0e4f3a686f1de 100644
+--- a/lib/dim/net_dim.c
++++ b/lib/dim/net_dim.c
+@@ -227,7 +227,8 @@ void net_dim(struct dim *dim, struct dim_sample end_sample)
+ 				  dim->start_sample.event_ctr);
+ 		if (nevents < DIM_NEVENTS)
+ 			break;
+-		dim_calc_stats(&dim->start_sample, &end_sample, &curr_stats);
++		if (!dim_calc_stats(&dim->start_sample, &end_sample, &curr_stats))
++			break;
+ 		if (net_dim_decision(&curr_stats, dim)) {
+ 			dim->state = DIM_APPLY_NEW_PROFILE;
+ 			schedule_work(&dim->work);
+diff --git a/lib/dim/rdma_dim.c b/lib/dim/rdma_dim.c
+index f7e26c7b4749f..d32c8b105adc9 100644
+--- a/lib/dim/rdma_dim.c
++++ b/lib/dim/rdma_dim.c
+@@ -88,7 +88,8 @@ void rdma_dim(struct dim *dim, u64 completions)
+ 		nevents = curr_sample->event_ctr - dim->start_sample.event_ctr;
+ 		if (nevents < DIM_NEVENTS)
+ 			break;
+-		dim_calc_stats(&dim->start_sample, curr_sample, &curr_stats);
++		if (!dim_calc_stats(&dim->start_sample, curr_sample, &curr_stats))
++			break;
+ 		if (rdma_dim_decision(&curr_stats, dim)) {
+ 			dim->state = DIM_APPLY_NEW_PROFILE;
+ 			schedule_work(&dim->work);
+diff --git a/mm/backing-dev.c b/mm/backing-dev.c
+index b28f629c35271..dd08ab928e071 100644
+--- a/mm/backing-dev.c
++++ b/mm/backing-dev.c
+@@ -404,7 +404,6 @@ static void cgwb_release_workfn(struct work_struct *work)
+ 	blkcg_unpin_online(blkcg);
+ 
+ 	fprop_local_destroy_percpu(&wb->memcg_completions);
+-	percpu_ref_exit(&wb->refcnt);
+ 	wb_exit(wb);
+ 	call_rcu(&wb->rcu, cgwb_free_rcu);
+ }
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index 929f85c6cf112..8edac9307868a 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -108,8 +108,8 @@ static netdev_tx_t vlan_dev_hard_start_xmit(struct sk_buff *skb,
+ 	 * NOTE: THIS ASSUMES DIX ETHERNET, SPECIFICALLY NOT SUPPORTING
+ 	 * OTHER THINGS LIKE FDDI/TokenRing/802.3 SNAPs...
+ 	 */
+-	if (veth->h_vlan_proto != vlan->vlan_proto ||
+-	    vlan->flags & VLAN_FLAG_REORDER_HDR) {
++	if (vlan->flags & VLAN_FLAG_REORDER_HDR ||
++	    veth->h_vlan_proto != vlan->vlan_proto) {
+ 		u16 vlan_tci;
+ 		vlan_tci = vlan->vlan_id;
+ 		vlan_tci |= vlan_dev_get_egress_qos_mask(dev, skb->priority);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index f9d2ce9cee369..b85ce276e2a3c 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -4689,7 +4689,6 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn,
+ 
+ 	chan = l2cap_get_chan_by_scid(conn, scid);
+ 	if (!chan) {
+-		mutex_unlock(&conn->chan_lock);
+ 		return 0;
+ 	}
+ 
+diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c
+index e28ffadd13719..4610f3a13966f 100644
+--- a/net/bridge/br_forward.c
++++ b/net/bridge/br_forward.c
+@@ -43,7 +43,7 @@ int br_dev_queue_push_xmit(struct net *net, struct sock *sk, struct sk_buff *skb
+ 	     skb->protocol == htons(ETH_P_8021AD))) {
+ 		int depth;
+ 
+-		if (!__vlan_get_protocol(skb, skb->protocol, &depth))
++		if (!vlan_get_protocol_and_depth(skb, skb->protocol, &depth))
+ 			goto drop;
+ 
+ 		skb_set_network_header(skb, depth);
+diff --git a/net/bridge/br_private_tunnel.h b/net/bridge/br_private_tunnel.h
+index c54cc26211d7c..f6c65dc088d60 100644
+--- a/net/bridge/br_private_tunnel.h
++++ b/net/bridge/br_private_tunnel.h
+@@ -27,6 +27,10 @@ int br_process_vlan_tunnel_info(const struct net_bridge *br,
+ int br_get_vlan_tunnel_info_size(struct net_bridge_vlan_group *vg);
+ int br_fill_vlan_tunnel_info(struct sk_buff *skb,
+ 			     struct net_bridge_vlan_group *vg);
++bool vlan_tunid_inrange(const struct net_bridge_vlan *v_curr,
++			const struct net_bridge_vlan *v_last);
++int br_vlan_tunnel_info(const struct net_bridge_port *p, int cmd,
++			u16 vid, u32 tun_id, bool *changed);
+ 
+ #ifdef CONFIG_BRIDGE_VLAN_FILTERING
+ /* br_vlan_tunnel.c */
+@@ -43,10 +47,6 @@ int br_handle_ingress_vlan_tunnel(struct sk_buff *skb,
+ 				  struct net_bridge_vlan_group *vg);
+ int br_handle_egress_vlan_tunnel(struct sk_buff *skb,
+ 				 struct net_bridge_vlan *vlan);
+-bool vlan_tunid_inrange(const struct net_bridge_vlan *v_curr,
+-			const struct net_bridge_vlan *v_last);
+-int br_vlan_tunnel_info(const struct net_bridge_port *p, int cmd,
+-			u16 vid, u32 tun_id, bool *changed);
+ #else
+ static inline int vlan_tunnel_init(struct net_bridge_vlan_group *vg)
+ {
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 4360f33278c1e..4fcd8162fad4a 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1016,7 +1016,7 @@ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 	int noblock = flags & MSG_DONTWAIT;
+ 	int ret = 0;
+ 
+-	if (flags & ~(MSG_DONTWAIT | MSG_TRUNC | MSG_PEEK))
++	if (flags & ~(MSG_DONTWAIT | MSG_TRUNC | MSG_PEEK | MSG_CMSG_COMPAT))
+ 		return -EINVAL;
+ 
+ 	if (!so->bound)
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 709141abd1318..76cd5f43faf7a 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -798,7 +798,7 @@ static int j1939_sk_recvmsg(struct socket *sock, struct msghdr *msg,
+ 	struct j1939_sk_buff_cb *skcb;
+ 	int ret = 0;
+ 
+-	if (flags & ~(MSG_DONTWAIT | MSG_ERRQUEUE))
++	if (flags & ~(MSG_DONTWAIT | MSG_ERRQUEUE | MSG_CMSG_COMPAT))
+ 		return -EINVAL;
+ 
+ 	if (flags & MSG_ERRQUEUE)
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index bc92683fdcdb4..9e77695d1bdc2 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -799,18 +799,21 @@ __poll_t datagram_poll(struct file *file, struct socket *sock,
+ {
+ 	struct sock *sk = sock->sk;
+ 	__poll_t mask;
++	u8 shutdown;
+ 
+ 	sock_poll_wait(file, sock, wait);
+ 	mask = 0;
+ 
+ 	/* exceptional events? */
+-	if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
++	if (READ_ONCE(sk->sk_err) ||
++	    !skb_queue_empty_lockless(&sk->sk_error_queue))
+ 		mask |= EPOLLERR |
+ 			(sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+ 
+-	if (sk->sk_shutdown & RCV_SHUTDOWN)
++	shutdown = READ_ONCE(sk->sk_shutdown);
++	if (shutdown & RCV_SHUTDOWN)
+ 		mask |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM;
+-	if (sk->sk_shutdown == SHUTDOWN_MASK)
++	if (shutdown == SHUTDOWN_MASK)
+ 		mask |= EPOLLHUP;
+ 
+ 	/* readable? */
+@@ -819,10 +822,12 @@ __poll_t datagram_poll(struct file *file, struct socket *sock,
+ 
+ 	/* Connection-based need to check for termination and startup */
+ 	if (connection_based(sk)) {
+-		if (sk->sk_state == TCP_CLOSE)
++		int state = READ_ONCE(sk->sk_state);
++
++		if (state == TCP_CLOSE)
+ 			mask |= EPOLLHUP;
+ 		/* connection hasn't started yet? */
+-		if (sk->sk_state == TCP_SYN_SENT)
++		if (state == TCP_SYN_SENT)
+ 			return mask;
+ 	}
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 413c2a08d79db..29e6e11c481c6 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2628,6 +2628,8 @@ int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask,
+ 	bool active = false;
+ 	unsigned int nr_ids;
+ 
++	WARN_ON_ONCE(index >= dev->num_tx_queues);
++
+ 	if (dev->num_tc) {
+ 		/* Do not allow XPS on subordinate device directly */
+ 		num_tc = dev->num_tc;
+@@ -3320,7 +3322,7 @@ __be16 skb_network_protocol(struct sk_buff *skb, int *depth)
+ 		type = eth->h_proto;
+ 	}
+ 
+-	return __vlan_get_protocol(skb, type, depth);
++	return vlan_get_protocol_and_depth(skb, type, depth);
+ }
+ 
+ /**
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index fb6b3f2ae1921..e203172b9b9e7 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -4751,8 +4751,10 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
+ 	} else {
+ 		skb = skb_clone(orig_skb, GFP_ATOMIC);
+ 
+-		if (skb_orphan_frags_rx(skb, GFP_ATOMIC))
++		if (skb_orphan_frags_rx(skb, GFP_ATOMIC)) {
++			kfree_skb(skb);
+ 			return;
++		}
+ 	}
+ 	if (!skb)
+ 		return;
+diff --git a/net/core/stream.c b/net/core/stream.c
+index cd60746877b1e..422ee97e4f2be 100644
+--- a/net/core/stream.c
++++ b/net/core/stream.c
+@@ -73,8 +73,8 @@ int sk_stream_wait_connect(struct sock *sk, long *timeo_p)
+ 		add_wait_queue(sk_sleep(sk), &wait);
+ 		sk->sk_write_pending++;
+ 		done = sk_wait_event(sk, timeo_p,
+-				     !sk->sk_err &&
+-				     !((1 << sk->sk_state) &
++				     !READ_ONCE(sk->sk_err) &&
++				     !((1 << READ_ONCE(sk->sk_state)) &
+ 				       ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)), &wait);
+ 		remove_wait_queue(sk_sleep(sk), &wait);
+ 		sk->sk_write_pending--;
+@@ -87,9 +87,9 @@ EXPORT_SYMBOL(sk_stream_wait_connect);
+  * sk_stream_closing - Return 1 if we still have things to send in our buffers.
+  * @sk: socket to verify
+  */
+-static inline int sk_stream_closing(struct sock *sk)
++static int sk_stream_closing(const struct sock *sk)
+ {
+-	return (1 << sk->sk_state) &
++	return (1 << READ_ONCE(sk->sk_state)) &
+ 	       (TCPF_FIN_WAIT1 | TCPF_CLOSING | TCPF_LAST_ACK);
+ }
+ 
+@@ -142,8 +142,8 @@ int sk_stream_wait_memory(struct sock *sk, long *timeo_p)
+ 
+ 		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+ 		sk->sk_write_pending++;
+-		sk_wait_event(sk, &current_timeo, sk->sk_err ||
+-						  (sk->sk_shutdown & SEND_SHUTDOWN) ||
++		sk_wait_event(sk, &current_timeo, READ_ONCE(sk->sk_err) ||
++						  (READ_ONCE(sk->sk_shutdown) & SEND_SHUTDOWN) ||
+ 						  (sk_stream_memory_free(sk) &&
+ 						  !vm_wait), &wait);
+ 		sk->sk_write_pending--;
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 8dab0d311aba3..800c2c7607e1a 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -884,7 +884,7 @@ int inet_shutdown(struct socket *sock, int how)
+ 		   EPOLLHUP, even on eg. unconnected UDP sockets -- RR */
+ 		fallthrough;
+ 	default:
+-		sk->sk_shutdown |= how;
++		WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | how);
+ 		if (sk->sk_prot->shutdown)
+ 			sk->sk_prot->shutdown(sk, how);
+ 		break;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 1e07df2821773..6fd04f2f8b40c 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1723,7 +1723,7 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
+ 			   tcp_hdr(skb)->source, tcp_hdr(skb)->dest,
+ 			   arg->uid);
+ 	security_skb_classify_flow(skb, flowi4_to_flowi_common(&fl4));
+-	rt = ip_route_output_key(net, &fl4);
++	rt = ip_route_output_flow(net, &fl4, sk);
+ 	if (IS_ERR(rt))
+ 		return;
+ 
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 6a0560a735ce4..eecce63ba25e3 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -506,6 +506,7 @@ __poll_t tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ 	__poll_t mask;
+ 	struct sock *sk = sock->sk;
+ 	const struct tcp_sock *tp = tcp_sk(sk);
++	u8 shutdown;
+ 	int state;
+ 
+ 	sock_poll_wait(file, sock, wait);
+@@ -548,9 +549,10 @@ __poll_t tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ 	 * NOTE. Check for TCP_CLOSE is added. The goal is to prevent
+ 	 * blocking on fresh not-connected or disconnected socket. --ANK
+ 	 */
+-	if (sk->sk_shutdown == SHUTDOWN_MASK || state == TCP_CLOSE)
++	shutdown = READ_ONCE(sk->sk_shutdown);
++	if (shutdown == SHUTDOWN_MASK || state == TCP_CLOSE)
+ 		mask |= EPOLLHUP;
+-	if (sk->sk_shutdown & RCV_SHUTDOWN)
++	if (shutdown & RCV_SHUTDOWN)
+ 		mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;
+ 
+ 	/* Connected or passive Fast Open socket? */
+@@ -566,7 +568,7 @@ __poll_t tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ 		if (tcp_stream_is_readable(tp, target, sk))
+ 			mask |= EPOLLIN | EPOLLRDNORM;
+ 
+-		if (!(sk->sk_shutdown & SEND_SHUTDOWN)) {
++		if (!(shutdown & SEND_SHUTDOWN)) {
+ 			if (__sk_stream_is_writeable(sk, 1)) {
+ 				mask |= EPOLLOUT | EPOLLWRNORM;
+ 			} else {  /* send SIGIO later */
+@@ -2510,14 +2512,13 @@ bool tcp_check_oom(struct sock *sk, int shift)
+ 	return too_many_orphans || out_of_socket_memory;
+ }
+ 
+-void tcp_close(struct sock *sk, long timeout)
++void __tcp_close(struct sock *sk, long timeout)
+ {
+ 	struct sk_buff *skb;
+ 	int data_was_unread = 0;
+ 	int state;
+ 
+-	lock_sock(sk);
+-	sk->sk_shutdown = SHUTDOWN_MASK;
++	WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK);
+ 
+ 	if (sk->sk_state == TCP_LISTEN) {
+ 		tcp_set_state(sk, TCP_CLOSE);
+@@ -2680,6 +2681,12 @@ adjudge_to_death:
+ out:
+ 	bh_unlock_sock(sk);
+ 	local_bh_enable();
++}
++
++void tcp_close(struct sock *sk, long timeout)
++{
++	lock_sock(sk);
++	__tcp_close(sk, timeout);
+ 	release_sock(sk);
+ 	sock_put(sk);
+ }
+@@ -2777,7 +2784,7 @@ int tcp_disconnect(struct sock *sk, int flags)
+ 	if (!(sk->sk_userlocks & SOCK_BINDADDR_LOCK))
+ 		inet_reset_saddr(sk);
+ 
+-	sk->sk_shutdown = 0;
++	WRITE_ONCE(sk->sk_shutdown, 0);
+ 	sock_reset_flag(sk, SOCK_DONE);
+ 	tp->srtt_us = 0;
+ 	tp->mdev_us = jiffies_to_usecs(TCP_TIMEOUT_INIT);
+@@ -4164,7 +4171,7 @@ void tcp_done(struct sock *sk)
+ 	if (req)
+ 		reqsk_fastopen_remove(sk, req, false);
+ 
+-	sk->sk_shutdown = SHUTDOWN_MASK;
++	WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK);
+ 
+ 	if (!sock_flag(sk, SOCK_DEAD))
+ 		sk->sk_state_change(sk);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 926e29e84b40b..d0ca1fc325cd6 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -262,7 +262,7 @@ static int tcp_bpf_wait_data(struct sock *sk, struct sk_psock *psock,
+ 	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+ 	ret = sk_wait_event(sk, &timeo,
+ 			    !list_empty(&psock->ingress_msg) ||
+-			    !skb_queue_empty(&sk->sk_receive_queue), &wait);
++			    !skb_queue_empty_lockless(&sk->sk_receive_queue), &wait);
+ 	sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+ 	remove_wait_queue(sk_sleep(sk), &wait);
+ 	return ret;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 541758cd0b81f..b98b7920c4029 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4323,7 +4323,7 @@ void tcp_fin(struct sock *sk)
+ 
+ 	inet_csk_schedule_ack(sk);
+ 
+-	sk->sk_shutdown |= RCV_SHUTDOWN;
++	WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | RCV_SHUTDOWN);
+ 	sock_set_flag(sk, SOCK_DONE);
+ 
+ 	switch (sk->sk_state) {
+@@ -6504,7 +6504,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
+ 			break;
+ 
+ 		tcp_set_state(sk, TCP_FIN_WAIT2);
+-		sk->sk_shutdown |= SEND_SHUTDOWN;
++		WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | SEND_SHUTDOWN);
+ 
+ 		sk_dst_confirm(sk);
+ 
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 8bd7b1ec3b6a3..270b20e0907c2 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -91,6 +91,8 @@ static int tcp_v4_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
+ struct inet_hashinfo tcp_hashinfo;
+ EXPORT_SYMBOL(tcp_hashinfo);
+ 
++static DEFINE_PER_CPU(struct sock *, ipv4_tcp_sk);
++
+ static u32 tcp_v4_init_seq(const struct sk_buff *skb)
+ {
+ 	return secure_tcp_seq(ip_hdr(skb)->daddr,
+@@ -794,13 +796,18 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
+ 	arg.tos = ip_hdr(skb)->tos;
+ 	arg.uid = sock_net_uid(net, sk && sk_fullsock(sk) ? sk : NULL);
+ 	local_bh_disable();
+-	ctl_sk = this_cpu_read(*net->ipv4.tcp_sk);
++	ctl_sk = this_cpu_read(ipv4_tcp_sk);
++	sock_net_set(ctl_sk, net);
+ 	if (sk) {
+ 		ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ?
+ 				   inet_twsk(sk)->tw_mark : sk->sk_mark;
+ 		ctl_sk->sk_priority = (sk->sk_state == TCP_TIME_WAIT) ?
+ 				   inet_twsk(sk)->tw_priority : sk->sk_priority;
+ 		transmit_time = tcp_transmit_time(sk);
++		xfrm_sk_clone_policy(ctl_sk, sk);
++	} else {
++		ctl_sk->sk_mark = 0;
++		ctl_sk->sk_priority = 0;
+ 	}
+ 	ip_send_unicast_reply(ctl_sk,
+ 			      skb, &TCP_SKB_CB(skb)->header.h4.opt,
+@@ -808,7 +815,8 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
+ 			      &arg, arg.iov[0].iov_len,
+ 			      transmit_time);
+ 
+-	ctl_sk->sk_mark = 0;
++	xfrm_sk_free_policy(ctl_sk);
++	sock_net_set(ctl_sk, &init_net);
+ 	__TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
+ 	__TCP_INC_STATS(net, TCP_MIB_OUTRSTS);
+ 	local_bh_enable();
+@@ -892,7 +900,8 @@ static void tcp_v4_send_ack(const struct sock *sk,
+ 	arg.tos = tos;
+ 	arg.uid = sock_net_uid(net, sk_fullsock(sk) ? sk : NULL);
+ 	local_bh_disable();
+-	ctl_sk = this_cpu_read(*net->ipv4.tcp_sk);
++	ctl_sk = this_cpu_read(ipv4_tcp_sk);
++	sock_net_set(ctl_sk, net);
+ 	ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ?
+ 			   inet_twsk(sk)->tw_mark : sk->sk_mark;
+ 	ctl_sk->sk_priority = (sk->sk_state == TCP_TIME_WAIT) ?
+@@ -904,7 +913,7 @@ static void tcp_v4_send_ack(const struct sock *sk,
+ 			      &arg, arg.iov[0].iov_len,
+ 			      transmit_time);
+ 
+-	ctl_sk->sk_mark = 0;
++	sock_net_set(ctl_sk, &init_net);
+ 	__TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
+ 	local_bh_enable();
+ }
+@@ -2828,41 +2837,14 @@ EXPORT_SYMBOL(tcp_prot);
+ 
+ static void __net_exit tcp_sk_exit(struct net *net)
+ {
+-	int cpu;
+-
+ 	if (net->ipv4.tcp_congestion_control)
+ 		bpf_module_put(net->ipv4.tcp_congestion_control,
+ 			       net->ipv4.tcp_congestion_control->owner);
+-
+-	for_each_possible_cpu(cpu)
+-		inet_ctl_sock_destroy(*per_cpu_ptr(net->ipv4.tcp_sk, cpu));
+-	free_percpu(net->ipv4.tcp_sk);
+ }
+ 
+ static int __net_init tcp_sk_init(struct net *net)
+ {
+-	int res, cpu, cnt;
+-
+-	net->ipv4.tcp_sk = alloc_percpu(struct sock *);
+-	if (!net->ipv4.tcp_sk)
+-		return -ENOMEM;
+-
+-	for_each_possible_cpu(cpu) {
+-		struct sock *sk;
+-
+-		res = inet_ctl_sock_create(&sk, PF_INET, SOCK_RAW,
+-					   IPPROTO_TCP, net);
+-		if (res)
+-			goto fail;
+-		sock_set_flag(sk, SOCK_USE_WRITE_QUEUE);
+-
+-		/* Please enforce IP_DF and IPID==0 for RST and
+-		 * ACK sent in SYN-RECV and TIME-WAIT state.
+-		 */
+-		inet_sk(sk)->pmtudisc = IP_PMTUDISC_DO;
+-
+-		*per_cpu_ptr(net->ipv4.tcp_sk, cpu) = sk;
+-	}
++	int cnt;
+ 
+ 	net->ipv4.sysctl_tcp_ecn = 2;
+ 	net->ipv4.sysctl_tcp_ecn_fallback = 1;
+@@ -2947,10 +2929,6 @@ static int __net_init tcp_sk_init(struct net *net)
+ 		net->ipv4.tcp_congestion_control = &tcp_reno;
+ 
+ 	return 0;
+-fail:
+-	tcp_sk_exit(net);
+-
+-	return res;
+ }
+ 
+ static void __net_exit tcp_sk_exit_batch(struct list_head *net_exit_list)
+@@ -3027,6 +3005,24 @@ static void __init bpf_iter_register(void)
+ 
+ void __init tcp_v4_init(void)
+ {
++	int cpu, res;
++
++	for_each_possible_cpu(cpu) {
++		struct sock *sk;
++
++		res = inet_ctl_sock_create(&sk, PF_INET, SOCK_RAW,
++					   IPPROTO_TCP, &init_net);
++		if (res)
++			panic("Failed to create the TCP control socket.\n");
++		sock_set_flag(sk, SOCK_USE_WRITE_QUEUE);
++
++		/* Please enforce IP_DF and IPID==0 for RST and
++		 * ACK sent in SYN-RECV and TIME-WAIT state.
++		 */
++		inet_sk(sk)->pmtudisc = IP_PMTUDISC_DO;
++
++		per_cpu(ipv4_tcp_sk, cpu) = sk;
++	}
+ 	if (register_pernet_subsys(&tcp_sk_ops))
+ 		panic("Failed to create the TCP control socket.\n");
+ 
+diff --git a/net/ipv4/udplite.c b/net/ipv4/udplite.c
+index cfb36655a5fda..edf82a4001630 100644
+--- a/net/ipv4/udplite.c
++++ b/net/ipv4/udplite.c
+@@ -62,6 +62,8 @@ struct proto 	udplite_prot = {
+ 	.get_port	   = udp_v4_get_port,
+ 	.memory_allocated  = &udp_memory_allocated,
+ 	.sysctl_mem	   = sysctl_udp_mem,
++	.sysctl_wmem_offset = offsetof(struct net, ipv4.sysctl_udp_wmem_min),
++	.sysctl_rmem_offset = offsetof(struct net, ipv4.sysctl_udp_rmem_min),
+ 	.obj_size	   = sizeof(struct udp_sock),
+ 	.h.udp_table	   = &udplite_table,
+ };
+diff --git a/net/ipv6/exthdrs_core.c b/net/ipv6/exthdrs_core.c
+index da46c42846765..49e31e4ae7b7f 100644
+--- a/net/ipv6/exthdrs_core.c
++++ b/net/ipv6/exthdrs_core.c
+@@ -143,6 +143,8 @@ int ipv6_find_tlv(const struct sk_buff *skb, int offset, int type)
+ 			optlen = 1;
+ 			break;
+ 		default:
++			if (len < 2)
++				goto bad;
+ 			optlen = nh[offset + 1] + 2;
+ 			if (optlen > len)
+ 				goto bad;
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 2332b5b81c551..7b50e1811678e 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -1015,12 +1015,14 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 					    ntohl(tun_id),
+ 					    ntohl(md->u.index), truncate,
+ 					    false);
++			proto = htons(ETH_P_ERSPAN);
+ 		} else if (md->version == 2) {
+ 			erspan_build_header_v2(skb,
+ 					       ntohl(tun_id),
+ 					       md->u.md2.dir,
+ 					       get_hwid(&md->u.md2),
+ 					       truncate, false);
++			proto = htons(ETH_P_ERSPAN2);
+ 		} else {
+ 			goto tx_err;
+ 		}
+@@ -1043,24 +1045,25 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 			break;
+ 		}
+ 
+-		if (t->parms.erspan_ver == 1)
++		if (t->parms.erspan_ver == 1) {
+ 			erspan_build_header(skb, ntohl(t->parms.o_key),
+ 					    t->parms.index,
+ 					    truncate, false);
+-		else if (t->parms.erspan_ver == 2)
++			proto = htons(ETH_P_ERSPAN);
++		} else if (t->parms.erspan_ver == 2) {
+ 			erspan_build_header_v2(skb, ntohl(t->parms.o_key),
+ 					       t->parms.dir,
+ 					       t->parms.hwid,
+ 					       truncate, false);
+-		else
++			proto = htons(ETH_P_ERSPAN2);
++		} else {
+ 			goto tx_err;
++		}
+ 
+ 		fl6.daddr = t->parms.raddr;
+ 	}
+ 
+ 	/* Push GRE header. */
+-	proto = (t->parms.erspan_ver == 1) ? htons(ETH_P_ERSPAN)
+-					   : htons(ETH_P_ERSPAN2);
+ 	gre_build_header(skb, 8, TUNNEL_SEQ, proto, 0, htonl(atomic_fetch_inc(&t->o_seqno)));
+ 
+ 	/* TooBig packet may have updated dst->dev's mtu */
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 2347740d3cc7c..fe29bc66aeac7 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -984,7 +984,10 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32
+ 	 * Underlying function will use this to retrieve the network
+ 	 * namespace
+ 	 */
+-	dst = ip6_dst_lookup_flow(sock_net(ctl_sk), ctl_sk, &fl6, NULL);
++	if (sk && sk->sk_state != TCP_TIME_WAIT)
++		dst = ip6_dst_lookup_flow(net, sk, &fl6, NULL); /*sk's xfrm_policy can be referred*/
++	else
++		dst = ip6_dst_lookup_flow(net, ctl_sk, &fl6, NULL);
+ 	if (!IS_ERR(dst)) {
+ 		skb_dst_set(buff, dst);
+ 		ip6_xmit(ctl_sk, buff, &fl6, fl6.flowi6_mark, NULL,
+diff --git a/net/ipv6/udplite.c b/net/ipv6/udplite.c
+index b6482e04dad0e..26199f743791c 100644
+--- a/net/ipv6/udplite.c
++++ b/net/ipv6/udplite.c
+@@ -57,6 +57,8 @@ struct proto udplitev6_prot = {
+ 	.get_port	   = udp_v6_get_port,
+ 	.memory_allocated  = &udp_memory_allocated,
+ 	.sysctl_mem	   = sysctl_udp_mem,
++	.sysctl_wmem_offset = offsetof(struct net, ipv4.sysctl_udp_wmem_min),
++	.sysctl_rmem_offset = offsetof(struct net, ipv4.sysctl_udp_rmem_min),
+ 	.obj_size	   = sizeof(struct udp6_sock),
+ 	.h.udp_table	   = &udplite_table,
+ };
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 8bc7d399987b2..fff2bd5f03e37 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -1944,7 +1944,8 @@ static u32 gen_reqid(struct net *net)
+ }
+ 
+ static int
+-parse_ipsecrequest(struct xfrm_policy *xp, struct sadb_x_ipsecrequest *rq)
++parse_ipsecrequest(struct xfrm_policy *xp, struct sadb_x_policy *pol,
++		   struct sadb_x_ipsecrequest *rq)
+ {
+ 	struct net *net = xp_net(xp);
+ 	struct xfrm_tmpl *t = xp->xfrm_vec + xp->xfrm_nr;
+@@ -1962,9 +1963,12 @@ parse_ipsecrequest(struct xfrm_policy *xp, struct sadb_x_ipsecrequest *rq)
+ 	if ((mode = pfkey_mode_to_xfrm(rq->sadb_x_ipsecrequest_mode)) < 0)
+ 		return -EINVAL;
+ 	t->mode = mode;
+-	if (rq->sadb_x_ipsecrequest_level == IPSEC_LEVEL_USE)
++	if (rq->sadb_x_ipsecrequest_level == IPSEC_LEVEL_USE) {
++		if ((mode == XFRM_MODE_TUNNEL || mode == XFRM_MODE_BEET) &&
++		    pol->sadb_x_policy_dir == IPSEC_DIR_OUTBOUND)
++			return -EINVAL;
+ 		t->optional = 1;
+-	else if (rq->sadb_x_ipsecrequest_level == IPSEC_LEVEL_UNIQUE) {
++	} else if (rq->sadb_x_ipsecrequest_level == IPSEC_LEVEL_UNIQUE) {
+ 		t->reqid = rq->sadb_x_ipsecrequest_reqid;
+ 		if (t->reqid > IPSEC_MANUAL_REQID_MAX)
+ 			t->reqid = 0;
+@@ -2006,7 +2010,7 @@ parse_ipsecrequests(struct xfrm_policy *xp, struct sadb_x_policy *pol)
+ 		    rq->sadb_x_ipsecrequest_len < sizeof(*rq))
+ 			return -EINVAL;
+ 
+-		if ((err = parse_ipsecrequest(xp, rq)) < 0)
++		if ((err = parse_ipsecrequest(xp, pol, rq)) < 0)
+ 			return err;
+ 		len -= rq->sadb_x_ipsecrequest_len;
+ 		rq = (void*)((u8*)rq + rq->sadb_x_ipsecrequest_len);
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 99a37c411323e..01e26698285a0 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -582,7 +582,8 @@ static int llc_ui_wait_for_disc(struct sock *sk, long timeout)
+ 
+ 	add_wait_queue(sk_sleep(sk), &wait);
+ 	while (1) {
+-		if (sk_wait_event(sk, &timeout, sk->sk_state == TCP_CLOSE, &wait))
++		if (sk_wait_event(sk, &timeout,
++				  READ_ONCE(sk->sk_state) == TCP_CLOSE, &wait))
+ 			break;
+ 		rc = -ERESTARTSYS;
+ 		if (signal_pending(current))
+@@ -602,7 +603,8 @@ static bool llc_ui_wait_for_conn(struct sock *sk, long timeout)
+ 
+ 	add_wait_queue(sk_sleep(sk), &wait);
+ 	while (1) {
+-		if (sk_wait_event(sk, &timeout, sk->sk_state != TCP_SYN_SENT, &wait))
++		if (sk_wait_event(sk, &timeout,
++				  READ_ONCE(sk->sk_state) != TCP_SYN_SENT, &wait))
+ 			break;
+ 		if (signal_pending(current) || !timeout)
+ 			break;
+@@ -621,7 +623,7 @@ static int llc_ui_wait_for_busy_core(struct sock *sk, long timeout)
+ 	while (1) {
+ 		rc = 0;
+ 		if (sk_wait_event(sk, &timeout,
+-				  (sk->sk_shutdown & RCV_SHUTDOWN) ||
++				  (READ_ONCE(sk->sk_shutdown) & RCV_SHUTDOWN) ||
+ 				  (!llc_data_accept_state(llc->state) &&
+ 				   !llc->remote_busy_flag &&
+ 				   !llc->p_flag), &wait))
+diff --git a/net/mac80211/trace.h b/net/mac80211/trace.h
+index 89723907a0945..5ddaa7c824773 100644
+--- a/net/mac80211/trace.h
++++ b/net/mac80211/trace.h
+@@ -67,7 +67,7 @@
+ 			__entry->min_freq_offset = (c)->chan ? (c)->chan->freq_offset : 0;	\
+ 			__entry->min_chan_width = (c)->width;				\
+ 			__entry->min_center_freq1 = (c)->center_freq1;			\
+-			__entry->freq1_offset = (c)->freq1_offset;			\
++			__entry->min_freq1_offset = (c)->freq1_offset;			\
+ 			__entry->min_center_freq2 = (c)->center_freq2;
+ #define MIN_CHANDEF_PR_FMT	" min_control:%d.%03d MHz min_width:%d min_center: %d.%03d/%d MHz"
+ #define MIN_CHANDEF_PR_ARG	__entry->min_control_freq, __entry->min_freq_offset,	\
+diff --git a/net/netfilter/core.c b/net/netfilter/core.c
+index 60332fdb6dd44..5b7578adbf0f1 100644
+--- a/net/netfilter/core.c
++++ b/net/netfilter/core.c
+@@ -674,9 +674,11 @@ void nf_conntrack_destroy(struct nf_conntrack *nfct)
+ 
+ 	rcu_read_lock();
+ 	ct_hook = rcu_dereference(nf_ct_hook);
+-	BUG_ON(ct_hook == NULL);
+-	ct_hook->destroy(nfct);
++	if (ct_hook)
++		ct_hook->destroy(nfct);
+ 	rcu_read_unlock();
++
++	WARN_ON(!ct_hook);
+ }
+ EXPORT_SYMBOL(nf_conntrack_destroy);
+ 
+diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
+index daab857c52a80..fc8db03d3efca 100644
+--- a/net/netfilter/ipvs/ip_vs_sync.c
++++ b/net/netfilter/ipvs/ip_vs_sync.c
+@@ -603,7 +603,7 @@ static void ip_vs_sync_conn_v0(struct netns_ipvs *ipvs, struct ip_vs_conn *cp,
+ 	if (cp->flags & IP_VS_CONN_F_SEQ_MASK) {
+ 		struct ip_vs_sync_conn_options *opt =
+ 			(struct ip_vs_sync_conn_options *)&s[1];
+-		memcpy(opt, &cp->in_seq, sizeof(*opt));
++		memcpy(opt, &cp->sync_conn_opt, sizeof(*opt));
+ 	}
+ 
+ 	m->nr_conns++;
+diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c
+index e12b52019a550..b613de96ad855 100644
+--- a/net/netfilter/nf_conntrack_standalone.c
++++ b/net/netfilter/nf_conntrack_standalone.c
+@@ -1170,11 +1170,12 @@ static int __init nf_conntrack_standalone_init(void)
+ 	nf_conntrack_htable_size_user = nf_conntrack_htable_size;
+ #endif
+ 
++	nf_conntrack_init_end();
++
+ 	ret = register_pernet_subsys(&nf_conntrack_net_ops);
+ 	if (ret < 0)
+ 		goto out_pernet;
+ 
+-	nf_conntrack_init_end();
+ 	return 0;
+ 
+ out_pernet:
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 4b9a499fe8f4d..1ffb24f4c74ca 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -220,7 +220,7 @@ static int nft_rbtree_gc_elem(const struct nft_set *__set,
+ {
+ 	struct nft_set *set = (struct nft_set *)__set;
+ 	struct rb_node *prev = rb_prev(&rbe->node);
+-	struct nft_rbtree_elem *rbe_prev;
++	struct nft_rbtree_elem *rbe_prev = NULL;
+ 	struct nft_set_gc_batch *gcb;
+ 
+ 	gcb = nft_set_gc_batch_check(set, NULL, GFP_ATOMIC);
+@@ -228,17 +228,21 @@ static int nft_rbtree_gc_elem(const struct nft_set *__set,
+ 		return -ENOMEM;
+ 
+ 	/* search for expired end interval coming before this element. */
+-	do {
++	while (prev) {
+ 		rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
+ 		if (nft_rbtree_interval_end(rbe_prev))
+ 			break;
+ 
+ 		prev = rb_prev(prev);
+-	} while (prev != NULL);
++	}
++
++	if (rbe_prev) {
++		rb_erase(&rbe_prev->node, &priv->root);
++		atomic_dec(&set->nelems);
++	}
+ 
+-	rb_erase(&rbe_prev->node, &priv->root);
+ 	rb_erase(&rbe->node, &priv->root);
+-	atomic_sub(2, &set->nelems);
++	atomic_dec(&set->nelems);
+ 
+ 	nft_set_gc_batch_add(gcb, rbe);
+ 	nft_set_gc_batch_complete(gcb);
+@@ -267,7 +271,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 			       struct nft_set_ext **ext)
+ {
+ 	struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL;
+-	struct rb_node *node, *parent, **p, *first = NULL;
++	struct rb_node *node, *next, *parent, **p, *first = NULL;
+ 	struct nft_rbtree *priv = nft_set_priv(set);
+ 	u8 genmask = nft_genmask_next(net);
+ 	int d, err;
+@@ -306,7 +310,9 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 	 * Values stored in the tree are in reversed order, starting from
+ 	 * highest to lowest value.
+ 	 */
+-	for (node = first; node != NULL; node = rb_next(node)) {
++	for (node = first; node != NULL; node = next) {
++		next = rb_next(node);
++
+ 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
+ 
+ 		if (!nft_set_elem_active(&rbe->ext, genmask))
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index eedb16517f16a..651f8ca912af0 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1992,7 +1992,7 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 
+ 	skb_free_datagram(sk, skb);
+ 
+-	if (nlk->cb_running &&
++	if (READ_ONCE(nlk->cb_running) &&
+ 	    atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) {
+ 		ret = netlink_dump(sk);
+ 		if (ret) {
+@@ -2304,7 +2304,7 @@ static int netlink_dump(struct sock *sk)
+ 	if (cb->done)
+ 		cb->done(cb);
+ 
+-	nlk->cb_running = false;
++	WRITE_ONCE(nlk->cb_running, false);
+ 	module = cb->module;
+ 	skb = cb->skb;
+ 	mutex_unlock(nlk->cb_mutex);
+@@ -2367,7 +2367,7 @@ int __netlink_dump_start(struct sock *ssk, struct sk_buff *skb,
+ 			goto error_put;
+ 	}
+ 
+-	nlk->cb_running = true;
++	WRITE_ONCE(nlk->cb_running, true);
+ 	nlk->dump_done_errno = INT_MAX;
+ 
+ 	mutex_unlock(nlk->cb_mutex);
+@@ -2653,7 +2653,7 @@ static int netlink_native_seq_show(struct seq_file *seq, void *v)
+ 			   nlk->groups ? (u32)nlk->groups[0] : 0,
+ 			   sk_rmem_alloc_get(s),
+ 			   sk_wmem_alloc_get(s),
+-			   nlk->cb_running,
++			   READ_ONCE(nlk->cb_running),
+ 			   refcount_read(&s->sk_refcnt),
+ 			   atomic_read(&s->sk_drops),
+ 			   sock_i_ino(s)
+diff --git a/net/nsh/nsh.c b/net/nsh/nsh.c
+index e9ca007718b7e..0f23e5e8e03eb 100644
+--- a/net/nsh/nsh.c
++++ b/net/nsh/nsh.c
+@@ -77,13 +77,12 @@ static struct sk_buff *nsh_gso_segment(struct sk_buff *skb,
+ 				       netdev_features_t features)
+ {
+ 	struct sk_buff *segs = ERR_PTR(-EINVAL);
++	u16 mac_offset = skb->mac_header;
+ 	unsigned int nsh_len, mac_len;
+ 	__be16 proto;
+-	int nhoff;
+ 
+ 	skb_reset_network_header(skb);
+ 
+-	nhoff = skb->network_header - skb->mac_header;
+ 	mac_len = skb->mac_len;
+ 
+ 	if (unlikely(!pskb_may_pull(skb, NSH_BASE_HDR_LEN)))
+@@ -108,15 +107,14 @@ static struct sk_buff *nsh_gso_segment(struct sk_buff *skb,
+ 	segs = skb_mac_gso_segment(skb, features);
+ 	if (IS_ERR_OR_NULL(segs)) {
+ 		skb_gso_error_unwind(skb, htons(ETH_P_NSH), nsh_len,
+-				     skb->network_header - nhoff,
+-				     mac_len);
++				     mac_offset, mac_len);
+ 		goto out;
+ 	}
+ 
+ 	for (skb = segs; skb; skb = skb->next) {
+ 		skb->protocol = htons(ETH_P_NSH);
+ 		__skb_push(skb, nsh_len);
+-		skb_set_mac_header(skb, -nhoff);
++		skb->mac_header = mac_offset;
+ 		skb->network_header = skb->mac_header + mac_len;
+ 		skb->mac_len = mac_len;
+ 	}
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 2e766490a739b..3c05414cd3f83 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1897,10 +1897,8 @@ static void packet_parse_headers(struct sk_buff *skb, struct socket *sock)
+ 	/* Move network header to the right position for VLAN tagged packets */
+ 	if (likely(skb->dev->type == ARPHRD_ETHER) &&
+ 	    eth_type_vlan(skb->protocol) &&
+-	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0) {
+-		if (pskb_may_pull(skb, depth))
+-			skb_set_network_header(skb, depth);
+-	}
++	    vlan_get_protocol_and_depth(skb, skb->protocol, &depth) != 0)
++		skb_set_network_header(skb, depth);
+ 
+ 	skb_probe_transport_header(skb);
+ }
+diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
+index 25dad1921baf2..91a19460cb578 100644
+--- a/net/sched/act_mirred.c
++++ b/net/sched/act_mirred.c
+@@ -28,8 +28,8 @@
+ static LIST_HEAD(mirred_list);
+ static DEFINE_SPINLOCK(mirred_list_lock);
+ 
+-#define MIRRED_RECURSION_LIMIT    4
+-static DEFINE_PER_CPU(unsigned int, mirred_rec_level);
++#define MIRRED_NEST_LIMIT    4
++static DEFINE_PER_CPU(unsigned int, mirred_nest_level);
+ 
+ static bool tcf_mirred_is_act_redirect(int action)
+ {
+@@ -206,6 +206,25 @@ release_idr:
+ 	return err;
+ }
+ 
++static bool is_mirred_nested(void)
++{
++	return unlikely(__this_cpu_read(mirred_nest_level) > 1);
++}
++
++static int tcf_mirred_forward(bool want_ingress, struct sk_buff *skb)
++{
++	int err;
++
++	if (!want_ingress)
++		err = dev_queue_xmit(skb);
++	else if (is_mirred_nested())
++		err = netif_rx(skb);
++	else
++		err = netif_receive_skb(skb);
++
++	return err;
++}
++
+ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
+ 			  struct tcf_result *res)
+ {
+@@ -213,7 +232,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
+ 	struct sk_buff *skb2 = skb;
+ 	bool m_mac_header_xmit;
+ 	struct net_device *dev;
+-	unsigned int rec_level;
++	unsigned int nest_level;
+ 	int retval, err = 0;
+ 	bool use_reinsert;
+ 	bool want_ingress;
+@@ -224,11 +243,11 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
+ 	int mac_len;
+ 	bool at_nh;
+ 
+-	rec_level = __this_cpu_inc_return(mirred_rec_level);
+-	if (unlikely(rec_level > MIRRED_RECURSION_LIMIT)) {
++	nest_level = __this_cpu_inc_return(mirred_nest_level);
++	if (unlikely(nest_level > MIRRED_NEST_LIMIT)) {
+ 		net_warn_ratelimited("Packet exceeded mirred recursion limit on dev %s\n",
+ 				     netdev_name(skb->dev));
+-		__this_cpu_dec(mirred_rec_level);
++		__this_cpu_dec(mirred_nest_level);
+ 		return TC_ACT_SHOT;
+ 	}
+ 
+@@ -295,25 +314,22 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
+ 		/* let's the caller reinsert the packet, if possible */
+ 		if (use_reinsert) {
+ 			res->ingress = want_ingress;
+-			if (skb_tc_reinsert(skb, res))
++			err = tcf_mirred_forward(res->ingress, skb);
++			if (err)
+ 				tcf_action_inc_overlimit_qstats(&m->common);
+-			__this_cpu_dec(mirred_rec_level);
++			__this_cpu_dec(mirred_nest_level);
+ 			return TC_ACT_CONSUMED;
+ 		}
+ 	}
+ 
+-	if (!want_ingress)
+-		err = dev_queue_xmit(skb2);
+-	else
+-		err = netif_receive_skb(skb2);
+-
++	err = tcf_mirred_forward(want_ingress, skb2);
+ 	if (err) {
+ out:
+ 		tcf_action_inc_overlimit_qstats(&m->common);
+ 		if (tcf_mirred_is_act_redirect(m_eaction))
+ 			retval = TC_ACT_SHOT;
+ 	}
+-	__this_cpu_dec(mirred_rec_level);
++	__this_cpu_dec(mirred_nest_level);
+ 
+ 	return retval;
+ }
+diff --git a/net/smc/smc_close.c b/net/smc/smc_close.c
+index 84102db5bb314..149a59ecd299f 100644
+--- a/net/smc/smc_close.c
++++ b/net/smc/smc_close.c
+@@ -64,8 +64,8 @@ static void smc_close_stream_wait(struct smc_sock *smc, long timeout)
+ 
+ 		rc = sk_wait_event(sk, &timeout,
+ 				   !smc_tx_prepared_sends(&smc->conn) ||
+-				   sk->sk_err == ECONNABORTED ||
+-				   sk->sk_err == ECONNRESET ||
++				   READ_ONCE(sk->sk_err) == ECONNABORTED ||
++				   READ_ONCE(sk->sk_err) == ECONNRESET ||
+ 				   smc->conn.killed,
+ 				   &wait);
+ 		if (rc)
+diff --git a/net/smc/smc_rx.c b/net/smc/smc_rx.c
+index 7f7e983e42b1f..3757aff6c2f00 100644
+--- a/net/smc/smc_rx.c
++++ b/net/smc/smc_rx.c
+@@ -203,9 +203,9 @@ int smc_rx_wait(struct smc_sock *smc, long *timeo,
+ 	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+ 	add_wait_queue(sk_sleep(sk), &wait);
+ 	rc = sk_wait_event(sk, timeo,
+-			   sk->sk_err ||
++			   READ_ONCE(sk->sk_err) ||
+ 			   cflags->peer_conn_abort ||
+-			   sk->sk_shutdown & RCV_SHUTDOWN ||
++			   READ_ONCE(sk->sk_shutdown) & RCV_SHUTDOWN ||
+ 			   conn->killed ||
+ 			   fcrit(conn),
+ 			   &wait);
+diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
+index 52ef1fca0b604..2429f9fc7e0e7 100644
+--- a/net/smc/smc_tx.c
++++ b/net/smc/smc_tx.c
+@@ -110,8 +110,8 @@ static int smc_tx_wait(struct smc_sock *smc, int flags)
+ 			break; /* at least 1 byte of free & no urgent data */
+ 		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+ 		sk_wait_event(sk, &timeo,
+-			      sk->sk_err ||
+-			      (sk->sk_shutdown & SEND_SHUTDOWN) ||
++			      READ_ONCE(sk->sk_err) ||
++			      (READ_ONCE(sk->sk_shutdown) & SEND_SHUTDOWN) ||
+ 			      smc_cdc_rxed_any_close(conn) ||
+ 			      (atomic_read(&conn->sndbuf_space) &&
+ 			       !conn->urg_tx_pend),
+diff --git a/net/socket.c b/net/socket.c
+index 8657112a687a4..84223419da862 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2764,7 +2764,7 @@ static int do_recvmmsg(int fd, struct mmsghdr __user *mmsg,
+ 		 * error to return on the next call or if the
+ 		 * app asks about it using getsockopt(SO_ERROR).
+ 		 */
+-		sock->sk->sk_err = -err;
++		WRITE_ONCE(sock->sk->sk_err, -err);
+ 	}
+ out_put:
+ 	fput_light(sock->file, fput_needed);
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index af657a482ad2d..495ebe7fad6dd 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -995,7 +995,7 @@ static int __svc_register(struct net *net, const char *progname,
+ #endif
+ 	}
+ 
+-	trace_svc_register(progname, version, protocol, port, family, error);
++	trace_svc_register(progname, version, family, protocol, port, error);
+ 	return error;
+ }
+ 
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index 72c31ef985eb3..91e678fa3feb5 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -525,6 +525,19 @@ int tipc_bearer_mtu(struct net *net, u32 bearer_id)
+ 	return mtu;
+ }
+ 
++int tipc_bearer_min_mtu(struct net *net, u32 bearer_id)
++{
++	int mtu = TIPC_MIN_BEARER_MTU;
++	struct tipc_bearer *b;
++
++	rcu_read_lock();
++	b = bearer_get(net, bearer_id);
++	if (b)
++		mtu += b->encap_hlen;
++	rcu_read_unlock();
++	return mtu;
++}
++
+ /* tipc_bearer_xmit_skb - sends buffer to destination over bearer
+  */
+ void tipc_bearer_xmit_skb(struct net *net, u32 bearer_id,
+@@ -1122,8 +1135,8 @@ int __tipc_nl_bearer_set(struct sk_buff *skb, struct genl_info *info)
+ 				return -EINVAL;
+ 			}
+ #ifdef CONFIG_TIPC_MEDIA_UDP
+-			if (tipc_udp_mtu_bad(nla_get_u32
+-					     (props[TIPC_NLA_PROP_MTU]))) {
++			if (nla_get_u32(props[TIPC_NLA_PROP_MTU]) <
++			    b->encap_hlen + TIPC_MIN_BEARER_MTU) {
+ 				NL_SET_ERR_MSG(info->extack,
+ 					       "MTU value is out-of-range");
+ 				return -EINVAL;
+diff --git a/net/tipc/bearer.h b/net/tipc/bearer.h
+index bc0023119da2f..711a50f449934 100644
+--- a/net/tipc/bearer.h
++++ b/net/tipc/bearer.h
+@@ -93,7 +93,8 @@ struct tipc_bearer;
+  * @raw2addr: convert from raw addr format to media addr format
+  * @priority: default link (and bearer) priority
+  * @tolerance: default time (in ms) before declaring link failure
+- * @window: default window (in packets) before declaring link congestion
++ * @min_win: minimum window (in packets) before declaring link congestion
++ * @max_win: maximum window (in packets) before declaring link congestion
+  * @mtu: max packet size bearer can support for media type not dependent on
+  * underlying device MTU
+  * @type_id: TIPC media identifier
+@@ -138,12 +139,16 @@ struct tipc_media {
+  * @pt: packet type for bearer
+  * @rcu: rcu struct for tipc_bearer
+  * @priority: default link priority for bearer
+- * @window: default window size for bearer
++ * @min_win: minimum window (in packets) before declaring link congestion
++ * @max_win: maximum window (in packets) before declaring link congestion
+  * @tolerance: default link tolerance for bearer
+  * @domain: network domain to which links can be established
+  * @identity: array index of this bearer within TIPC bearer array
+- * @link_req: ptr to (optional) structure making periodic link setup requests
++ * @disc: ptr to link setup request
+  * @net_plane: network plane ('A' through 'H') currently associated with bearer
++ * @encap_hlen: encap headers length
++ * @up: bearer up flag (bit 0)
++ * @refcnt: tipc_bearer reference counter
+  *
+  * Note: media-specific code is responsible for initialization of the fields
+  * indicated below when a bearer is enabled; TIPC's generic bearer code takes
+@@ -166,6 +171,7 @@ struct tipc_bearer {
+ 	u32 identity;
+ 	struct tipc_discoverer *disc;
+ 	char net_plane;
++	u16 encap_hlen;
+ 	unsigned long up;
+ 	refcount_t refcnt;
+ };
+@@ -228,6 +234,7 @@ int tipc_bearer_setup(void);
+ void tipc_bearer_cleanup(void);
+ void tipc_bearer_stop(struct net *net);
+ int tipc_bearer_mtu(struct net *net, u32 bearer_id);
++int tipc_bearer_min_mtu(struct net *net, u32 bearer_id);
+ bool tipc_bearer_bcast_support(struct net *net, u32 bearer_id);
+ void tipc_bearer_xmit_skb(struct net *net, u32 bearer_id,
+ 			  struct sk_buff *skb,
+diff --git a/net/tipc/crypto.h b/net/tipc/crypto.h
+index e71193bd5e369..ce7d4cc8a9e0c 100644
+--- a/net/tipc/crypto.h
++++ b/net/tipc/crypto.h
+@@ -1,5 +1,5 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+-/**
++/*
+  * net/tipc/crypto.h: Include file for TIPC crypto
+  *
+  * Copyright (c) 2019, Ericsson AB
+@@ -53,7 +53,7 @@
+ #define TIPC_AES_GCM_IV_SIZE		12
+ #define TIPC_AES_GCM_TAG_SIZE		16
+ 
+-/**
++/*
+  * TIPC crypto modes:
+  * - CLUSTER_KEY:
+  *	One single key is used for both TX & RX in all nodes in the cluster.
+@@ -69,7 +69,7 @@ enum {
+ extern int sysctl_tipc_max_tfms __read_mostly;
+ extern int sysctl_tipc_key_exchange_enabled __read_mostly;
+ 
+-/**
++/*
+  * TIPC encryption message format:
+  *
+  *     3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index c1e56d1f21b38..dbb1bc722ba9b 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -2164,7 +2164,7 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 	struct tipc_msg *hdr = buf_msg(skb);
+ 	struct tipc_gap_ack_blks *ga = NULL;
+ 	bool reply = msg_probe(hdr), retransmitted = false;
+-	u32 dlen = msg_data_sz(hdr), glen = 0;
++	u32 dlen = msg_data_sz(hdr), glen = 0, msg_max;
+ 	u16 peers_snd_nxt =  msg_next_sent(hdr);
+ 	u16 peers_tol = msg_link_tolerance(hdr);
+ 	u16 peers_prio = msg_linkprio(hdr);
+@@ -2203,6 +2203,9 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 	switch (mtyp) {
+ 	case RESET_MSG:
+ 	case ACTIVATE_MSG:
++		msg_max = msg_max_pkt(hdr);
++		if (msg_max < tipc_bearer_min_mtu(l->net, l->bearer_id))
++			break;
+ 		/* Complete own link name with peer's interface name */
+ 		if_name =  strrchr(l->name, ':') + 1;
+ 		if (sizeof(l->name) - (if_name - l->name) <= TIPC_MAX_IF_NAME)
+@@ -2247,8 +2250,8 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 		l->peer_session = msg_session(hdr);
+ 		l->in_session = true;
+ 		l->peer_bearer_id = msg_bearer_id(hdr);
+-		if (l->mtu > msg_max_pkt(hdr))
+-			l->mtu = msg_max_pkt(hdr);
++		if (l->mtu > msg_max)
++			l->mtu = msg_max;
+ 		break;
+ 
+ 	case STATE_MSG:
+diff --git a/net/tipc/name_distr.h b/net/tipc/name_distr.h
+index 092323158f060..e231e6964d611 100644
+--- a/net/tipc/name_distr.h
++++ b/net/tipc/name_distr.h
+@@ -46,7 +46,7 @@
+  * @type: name sequence type
+  * @lower: name sequence lower bound
+  * @upper: name sequence upper bound
+- * @ref: publishing port reference
++ * @port: publishing port reference
+  * @key: publication key
+  *
+  * ===> All fields are stored in network byte order. <===
+diff --git a/net/tipc/name_table.h b/net/tipc/name_table.h
+index 8064e1986e2c8..5a82a01369d67 100644
+--- a/net/tipc/name_table.h
++++ b/net/tipc/name_table.h
+@@ -60,8 +60,8 @@ struct tipc_group;
+  * @key: publication key, unique across the cluster
+  * @id: publication id
+  * @binding_node: all publications from the same node which bound this one
+- * - Remote publications: in node->publ_list
+- *   Used by node/name distr to withdraw publications when node is lost
++ * - Remote publications: in node->publ_list;
++ * Used by node/name distr to withdraw publications when node is lost
+  * - Local/node scope publications: in name_table->node_scope list
+  * - Local/cluster scope publications: in name_table->cluster_scope list
+  * @binding_sock: all publications from the same socket which bound this one
+@@ -92,13 +92,16 @@ struct publication {
+ 
+ /**
+  * struct name_table - table containing all existing port name publications
+- * @seq_hlist: name sequence hash lists
++ * @services: name sequence hash lists
+  * @node_scope: all local publications with node scope
+  *               - used by name_distr during re-init of name table
+  * @cluster_scope: all local publications with cluster scope
+  *               - used by name_distr to send bulk updates to new nodes
+  *               - used by name_distr during re-init of name table
++ * @cluster_scope_lock: lock for accessing @cluster_scope
+  * @local_publ_count: number of publications issued by this node
++ * @rc_dests: destination node counter
++ * @snd_nxt: next sequence number to be used
+  */
+ struct name_table {
+ 	struct hlist_head services[TIPC_NAMETBL_SIZE];
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 8f3c9fbb99165..7cf9b40b5c73b 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -300,9 +300,9 @@ static void tsk_rej_rx_queue(struct sock *sk, int error)
+ 		tipc_sk_respond(sk, skb, error);
+ }
+ 
+-static bool tipc_sk_connected(struct sock *sk)
++static bool tipc_sk_connected(const struct sock *sk)
+ {
+-	return sk->sk_state == TIPC_ESTABLISHED;
++	return READ_ONCE(sk->sk_state) == TIPC_ESTABLISHED;
+ }
+ 
+ /* tipc_sk_type_connectionless - check if the socket is datagram socket
+diff --git a/net/tipc/subscr.h b/net/tipc/subscr.h
+index 6ebbec1bedd1a..63bdce9358fe6 100644
+--- a/net/tipc/subscr.h
++++ b/net/tipc/subscr.h
+@@ -47,12 +47,15 @@ struct tipc_conn;
+ 
+ /**
+  * struct tipc_subscription - TIPC network topology subscription object
+- * @subscriber: pointer to its subscriber
+- * @seq: name sequence associated with subscription
++ * @kref: reference count for this subscription
++ * @net: network namespace associated with subscription
+  * @timer: timer governing subscription duration (optional)
+- * @nameseq_list: adjacent subscriptions in name sequence's subscription list
++ * @service_list: adjacent subscriptions in name sequence's subscription list
+  * @sub_list: adjacent subscriptions in subscriber's subscription list
+  * @evt: template for events generated by subscription
++ * @conid: connection identifier of topology server
++ * @inactive: true if this subscription is inactive
++ * @lock: serialize up/down and timer events
+  */
+ struct tipc_subscription {
+ 	struct kref kref;
+@@ -63,7 +66,7 @@ struct tipc_subscription {
+ 	struct tipc_event evt;
+ 	int conid;
+ 	bool inactive;
+-	spinlock_t lock; /* serialize up/down and timer events */
++	spinlock_t lock;
+ };
+ 
+ struct tipc_subscription *tipc_sub_subscribe(struct net *net,
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index a236281082726..3e47501f024fd 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -730,8 +730,8 @@ static int tipc_udp_enable(struct net *net, struct tipc_bearer *b,
+ 			udp_conf.local_ip.s_addr = local.ipv4.s_addr;
+ 		udp_conf.use_udp_checksums = false;
+ 		ub->ifindex = dev->ifindex;
+-		if (tipc_mtu_bad(dev, sizeof(struct iphdr) +
+-				      sizeof(struct udphdr))) {
++		b->encap_hlen = sizeof(struct iphdr) + sizeof(struct udphdr);
++		if (tipc_mtu_bad(dev, b->encap_hlen)) {
+ 			err = -EINVAL;
+ 			goto err;
+ 		}
+@@ -752,6 +752,7 @@ static int tipc_udp_enable(struct net *net, struct tipc_bearer *b,
+ 		else
+ 			udp_conf.local_ip6 = local.ipv6;
+ 		ub->ifindex = dev->ifindex;
++		b->encap_hlen = sizeof(struct ipv6hdr) + sizeof(struct udphdr);
+ 		b->mtu = 1280;
+ #endif
+ 	} else {
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 54863e68f3040..7ee3c8b03a39e 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -92,7 +92,8 @@ int wait_on_pending_writer(struct sock *sk, long *timeo)
+ 			break;
+ 		}
+ 
+-		if (sk_wait_event(sk, timeo, !sk->sk_write_pending, &wait))
++		if (sk_wait_event(sk, timeo,
++				  !READ_ONCE(sk->sk_write_pending), &wait))
+ 			break;
+ 	}
+ 	remove_wait_queue(sk_sleep(sk), &wait);
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 28721e9575b75..2fe0efcbfed16 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -529,7 +529,7 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ 	/* Clear state */
+ 	unix_state_lock(sk);
+ 	sock_orphan(sk);
+-	sk->sk_shutdown = SHUTDOWN_MASK;
++	WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK);
+ 	path	     = u->path;
+ 	u->path.dentry = NULL;
+ 	u->path.mnt = NULL;
+@@ -547,7 +547,7 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ 		if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) {
+ 			unix_state_lock(skpair);
+ 			/* No more writes */
+-			skpair->sk_shutdown = SHUTDOWN_MASK;
++			WRITE_ONCE(skpair->sk_shutdown, SHUTDOWN_MASK);
+ 			if (!skb_queue_empty(&sk->sk_receive_queue) || embrion)
+ 				skpair->sk_err = ECONNRESET;
+ 			unix_state_unlock(skpair);
+@@ -1236,7 +1236,7 @@ static long unix_wait_for_peer(struct sock *other, long timeo)
+ 
+ 	sched = !sock_flag(other, SOCK_DEAD) &&
+ 		!(other->sk_shutdown & RCV_SHUTDOWN) &&
+-		unix_recvq_full(other);
++		unix_recvq_full_lockless(other);
+ 
+ 	unix_state_unlock(other);
+ 
+@@ -2581,7 +2581,7 @@ static int unix_shutdown(struct socket *sock, int mode)
+ 	++mode;
+ 
+ 	unix_state_lock(sk);
+-	sk->sk_shutdown |= mode;
++	WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | mode);
+ 	other = unix_peer(sk);
+ 	if (other)
+ 		sock_hold(other);
+@@ -2598,7 +2598,7 @@ static int unix_shutdown(struct socket *sock, int mode)
+ 		if (mode&SEND_SHUTDOWN)
+ 			peer_mode |= RCV_SHUTDOWN;
+ 		unix_state_lock(other);
+-		other->sk_shutdown |= peer_mode;
++		WRITE_ONCE(other->sk_shutdown, other->sk_shutdown | peer_mode);
+ 		unix_state_unlock(other);
+ 		other->sk_state_change(other);
+ 		if (peer_mode == SHUTDOWN_MASK)
+@@ -2717,16 +2717,18 @@ static __poll_t unix_poll(struct file *file, struct socket *sock, poll_table *wa
+ {
+ 	struct sock *sk = sock->sk;
+ 	__poll_t mask;
++	u8 shutdown;
+ 
+ 	sock_poll_wait(file, sock, wait);
+ 	mask = 0;
++	shutdown = READ_ONCE(sk->sk_shutdown);
+ 
+ 	/* exceptional events? */
+ 	if (sk->sk_err)
+ 		mask |= EPOLLERR;
+-	if (sk->sk_shutdown == SHUTDOWN_MASK)
++	if (shutdown == SHUTDOWN_MASK)
+ 		mask |= EPOLLHUP;
+-	if (sk->sk_shutdown & RCV_SHUTDOWN)
++	if (shutdown & RCV_SHUTDOWN)
+ 		mask |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM;
+ 
+ 	/* readable? */
+@@ -2754,18 +2756,20 @@ static __poll_t unix_dgram_poll(struct file *file, struct socket *sock,
+ 	struct sock *sk = sock->sk, *other;
+ 	unsigned int writable;
+ 	__poll_t mask;
++	u8 shutdown;
+ 
+ 	sock_poll_wait(file, sock, wait);
+ 	mask = 0;
++	shutdown = READ_ONCE(sk->sk_shutdown);
+ 
+ 	/* exceptional events? */
+ 	if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ 		mask |= EPOLLERR |
+ 			(sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+ 
+-	if (sk->sk_shutdown & RCV_SHUTDOWN)
++	if (shutdown & RCV_SHUTDOWN)
+ 		mask |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM;
+-	if (sk->sk_shutdown == SHUTDOWN_MASK)
++	if (shutdown == SHUTDOWN_MASK)
+ 		mask |= EPOLLHUP;
+ 
+ 	/* readable? */
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 7829a5018ef9f..ce14374bbacad 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1372,7 +1372,7 @@ static int vsock_stream_connect(struct socket *sock, struct sockaddr *addr,
+ 			vsock_transport_cancel_pkt(vsk);
+ 			vsock_remove_connected(vsk);
+ 			goto out_wait;
+-		} else if (timeout == 0) {
++		} else if ((sk->sk_state != TCP_ESTABLISHED) && (timeout == 0)) {
+ 			err = -ETIMEDOUT;
+ 			sk->sk_state = TCP_CLOSE;
+ 			sock->state = SS_UNCONNECTED;
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+index e4f21a6924153..da518b4ca84c6 100644
+--- a/net/xfrm/xfrm_interface.c
++++ b/net/xfrm/xfrm_interface.c
+@@ -207,52 +207,6 @@ static void xfrmi_scrub_packet(struct sk_buff *skb, bool xnet)
+ 	skb->mark = 0;
+ }
+ 
+-static int xfrmi_input(struct sk_buff *skb, int nexthdr, __be32 spi,
+-		       int encap_type, unsigned short family)
+-{
+-	struct sec_path *sp;
+-
+-	sp = skb_sec_path(skb);
+-	if (sp && (sp->len || sp->olen) &&
+-	    !xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family))
+-		goto discard;
+-
+-	XFRM_SPI_SKB_CB(skb)->family = family;
+-	if (family == AF_INET) {
+-		XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct iphdr, daddr);
+-		XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4 = NULL;
+-	} else {
+-		XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct ipv6hdr, daddr);
+-		XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = NULL;
+-	}
+-
+-	return xfrm_input(skb, nexthdr, spi, encap_type);
+-discard:
+-	kfree_skb(skb);
+-	return 0;
+-}
+-
+-static int xfrmi4_rcv(struct sk_buff *skb)
+-{
+-	return xfrmi_input(skb, ip_hdr(skb)->protocol, 0, 0, AF_INET);
+-}
+-
+-static int xfrmi6_rcv(struct sk_buff *skb)
+-{
+-	return xfrmi_input(skb, skb_network_header(skb)[IP6CB(skb)->nhoff],
+-			   0, 0, AF_INET6);
+-}
+-
+-static int xfrmi4_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
+-{
+-	return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET);
+-}
+-
+-static int xfrmi6_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
+-{
+-	return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET6);
+-}
+-
+ static int xfrmi_rcv_cb(struct sk_buff *skb, int err)
+ {
+ 	const struct xfrm_mode *inner_mode;
+@@ -826,8 +780,8 @@ static struct pernet_operations xfrmi_net_ops = {
+ };
+ 
+ static struct xfrm6_protocol xfrmi_esp6_protocol __read_mostly = {
+-	.handler	=	xfrmi6_rcv,
+-	.input_handler	=	xfrmi6_input,
++	.handler	=	xfrm6_rcv,
++	.input_handler	=	xfrm_input,
+ 	.cb_handler	=	xfrmi_rcv_cb,
+ 	.err_handler	=	xfrmi6_err,
+ 	.priority	=	10,
+@@ -877,8 +831,8 @@ static struct xfrm6_tunnel xfrmi_ip6ip_handler __read_mostly = {
+ #endif
+ 
+ static struct xfrm4_protocol xfrmi_esp4_protocol __read_mostly = {
+-	.handler	=	xfrmi4_rcv,
+-	.input_handler	=	xfrmi4_input,
++	.handler	=	xfrm4_rcv,
++	.input_handler	=	xfrm_input,
+ 	.cb_handler	=	xfrmi_rcv_cb,
+ 	.err_handler	=	xfrmi4_err,
+ 	.priority	=	10,
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index d15aa62887de0..2956854928537 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3677,12 +3677,6 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 		}
+ 		xfrm_nr = ti;
+ 
+-		if (net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK &&
+-		    !xfrm_nr) {
+-			XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOSTATES);
+-			goto reject;
+-		}
+-
+ 		if (npols > 1) {
+ 			xfrm_tmpl_sort(stp, tpp, xfrm_nr, family);
+ 			tpp = stp;
+@@ -3710,9 +3704,6 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 			goto reject;
+ 		}
+ 
+-		if (if_id)
+-			secpath_reset(skb);
+-
+ 		xfrm_pols_put(pols, npols);
+ 		return 1;
+ 	}
+diff --git a/samples/bpf/hbm.c b/samples/bpf/hbm.c
+index ff4c533dfac29..8e48489b96ae9 100644
+--- a/samples/bpf/hbm.c
++++ b/samples/bpf/hbm.c
+@@ -308,6 +308,7 @@ static int run_bpf_prog(char *prog, int cg_id)
+ 		fout = fopen(fname, "w");
+ 		fprintf(fout, "id:%d\n", cg_id);
+ 		fprintf(fout, "ERROR: Could not lookup queue_stats\n");
++		fclose(fout);
+ 	} else if (stats_flag && qstats.lastPacketTime >
+ 		   qstats.firstPacketTime) {
+ 		long long delta_us = (qstats.lastPacketTime -
+diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
+index cce12e1971d85..ec692af8ce9eb 100644
+--- a/scripts/recordmcount.c
++++ b/scripts/recordmcount.c
+@@ -102,6 +102,7 @@ static ssize_t uwrite(void const *const buf, size_t const count)
+ {
+ 	size_t cnt = count;
+ 	off_t idx = 0;
++	void *p = NULL;
+ 
+ 	file_updated = 1;
+ 
+@@ -109,7 +110,10 @@ static ssize_t uwrite(void const *const buf, size_t const count)
+ 		off_t aoffset = (file_ptr + count) - file_end;
+ 
+ 		if (aoffset > file_append_size) {
+-			file_append = realloc(file_append, aoffset);
++			p = realloc(file_append, aoffset);
++			if (!p)
++				free(file_append);
++			file_append = p;
+ 			file_append_size = aoffset;
+ 		}
+ 		if (!file_append) {
+diff --git a/sound/firewire/digi00x/digi00x-stream.c b/sound/firewire/digi00x/digi00x-stream.c
+index 405d6903bfbc3..62a54f5ab84d7 100644
+--- a/sound/firewire/digi00x/digi00x-stream.c
++++ b/sound/firewire/digi00x/digi00x-stream.c
+@@ -259,8 +259,10 @@ int snd_dg00x_stream_init_duplex(struct snd_dg00x *dg00x)
+ 		return err;
+ 
+ 	err = init_stream(dg00x, &dg00x->tx_stream);
+-	if (err < 0)
++	if (err < 0) {
+ 		destroy_stream(dg00x, &dg00x->rx_stream);
++		return err;
++	}
+ 
+ 	err = amdtp_domain_init(&dg00x->domain);
+ 	if (err < 0) {
+diff --git a/sound/hda/hdac_device.c b/sound/hda/hdac_device.c
+index b7e5032b61c97..bfd8585776767 100644
+--- a/sound/hda/hdac_device.c
++++ b/sound/hda/hdac_device.c
+@@ -611,7 +611,7 @@ EXPORT_SYMBOL_GPL(snd_hdac_power_up_pm);
+ int snd_hdac_keep_power_up(struct hdac_device *codec)
+ {
+ 	if (!atomic_inc_not_zero(&codec->in_pm)) {
+-		int ret = pm_runtime_get_if_in_use(&codec->dev);
++		int ret = pm_runtime_get_if_active(&codec->dev, true);
+ 		if (!ret)
+ 			return -1;
+ 		if (ret < 0)
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index 8ee3be7bbd24e..35113fa84a0fd 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -1153,8 +1153,8 @@ static bool path_has_mixer(struct hda_codec *codec, int path_idx, int ctl_type)
+ 	return path && path->ctls[ctl_type];
+ }
+ 
+-static const char * const channel_name[4] = {
+-	"Front", "Surround", "CLFE", "Side"
++static const char * const channel_name[] = {
++	"Front", "Surround", "CLFE", "Side", "Back",
+ };
+ 
+ /* give some appropriate ctl name prefix for the given line out channel */
+@@ -1180,7 +1180,7 @@ static const char *get_line_out_pfx(struct hda_codec *codec, int ch,
+ 
+ 	/* multi-io channels */
+ 	if (ch >= cfg->line_outs)
+-		return channel_name[ch];
++		goto fixed_name;
+ 
+ 	switch (cfg->line_out_type) {
+ 	case AUTO_PIN_SPEAKER_OUT:
+@@ -1232,6 +1232,7 @@ static const char *get_line_out_pfx(struct hda_codec *codec, int ch,
+ 	if (cfg->line_outs == 1 && !spec->multi_ios)
+ 		return "Line Out";
+ 
++ fixed_name:
+ 	if (ch >= ARRAY_SIZE(channel_name)) {
+ 		snd_BUG();
+ 		return "PCM";
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 6057084da4cf8..6d67cca4cfa69 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -1272,6 +1272,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
+ 	SND_PCI_QUIRK(0x1458, 0xA026, "Gigabyte G1.Sniper Z97", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x1458, 0xA036, "Gigabyte GA-Z170X-Gaming 7", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x3842, 0x1038, "EVGA X99 Classified", QUIRK_R3DI),
++	SND_PCI_QUIRK(0x3842, 0x104b, "EVGA X299 Dark", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x3842, 0x1055, "EVGA Z390 DARK", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x1102, 0x0013, "Recon3D", QUIRK_R3D),
+ 	SND_PCI_QUIRK(0x1102, 0x0018, "Recon3D", QUIRK_R3D),
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 1afe9cddb69eb..e4366fea9e274 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -4374,6 +4374,11 @@ HDA_CODEC_ENTRY(0x10de009d, "GPU 9d HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009e, "GPU 9e HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009f, "GPU 9f HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a0, "GPU a0 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a3, "GPU a3 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a4, "GPU a4 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a5, "GPU a5 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a6, "GPU a6 HDMI/DP",	patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a7, "GPU a7 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de8001, "MCP73 HDMI",	patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x10de8067, "MCP67/68 HDMI",	patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x11069f80, "VX900 HDMI/DP",	patch_via_hdmi),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 18309fa17fb87..21c8b474a4dfb 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8944,7 +8944,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x802f, "HP Z240", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8077, "HP", ALC256_FIXUP_HP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x103c, 0x8158, "HP", ALC256_FIXUP_HP_HEADSET_MIC),
+-	SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++	SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC295_FIXUP_HP_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC),
+ 	SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x827f, "HP x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+@@ -9044,6 +9044,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x1043, 0x1c62, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1c92, "ASUS ROG Strix G15", ALC285_FIXUP_ASUS_G533Z_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
+@@ -9136,6 +9137,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x7717, "Clevo NS70PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x7718, "Clevo L140PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x7724, "Clevo L140AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -11158,6 +11160,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
+ 	SND_PCI_QUIRK(0x103c, 0x870c, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ 	SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
++	SND_PCI_QUIRK(0x103c, 0x872b, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ 	SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x877e, "HP 288 Pro G6", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2),
+@@ -11184,6 +11187,8 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32f7, "Lenovo ThinkCentre M90", ALC897_FIXUP_HEADSET_MIC_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x3321, "Lenovo ThinkCentre M70 Gen4", ALC897_FIXUP_HEADSET_MIC_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x331b, "Lenovo ThinkCentre M90 Gen4", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x3742, "Lenovo TianYi510Pro-14IOB", ALC897_FIXUP_HEADSET_MIC_PIN2),
+ 	SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo Ideapad Y550P", ALC662_FIXUP_IDEAPAD),
+ 	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Ideapad Y550", ALC662_FIXUP_IDEAPAD),
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index e0fda244a942c..29ed301c6f066 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -419,6 +419,7 @@ static int line6_parse_audio_format_rates_quirk(struct snd_usb_audio *chip,
+ 	case USB_ID(0x0e41, 0x4248): /* Line6 Helix >= fw 2.82 */
+ 	case USB_ID(0x0e41, 0x4249): /* Line6 Helix Rack >= fw 2.82 */
+ 	case USB_ID(0x0e41, 0x424a): /* Line6 Helix LT >= fw 2.82 */
++	case USB_ID(0x0e41, 0x424b): /* Line6 Pod Go */
+ 	case USB_ID(0x19f7, 0x0011): /* Rode Rodecaster Pro */
+ 		return set_fixed_rate(fp, 48000, SNDRV_PCM_RATE_48000);
+ 	}
+diff --git a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+index e7d48cb563c0e..ae6af354a81db 100644
+--- a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
++++ b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+@@ -70,8 +70,8 @@ static int max_freq_mode;
+  */
+ static unsigned long max_frequency;
+ 
+-static unsigned long long tsc_at_measure_start;
+-static unsigned long long tsc_at_measure_end;
++static unsigned long long *tsc_at_measure_start;
++static unsigned long long *tsc_at_measure_end;
+ static unsigned long long *mperf_previous_count;
+ static unsigned long long *aperf_previous_count;
+ static unsigned long long *mperf_current_count;
+@@ -169,7 +169,7 @@ static int mperf_get_count_percent(unsigned int id, double *percent,
+ 	aperf_diff = aperf_current_count[cpu] - aperf_previous_count[cpu];
+ 
+ 	if (max_freq_mode == MAX_FREQ_TSC_REF) {
+-		tsc_diff = tsc_at_measure_end - tsc_at_measure_start;
++		tsc_diff = tsc_at_measure_end[cpu] - tsc_at_measure_start[cpu];
+ 		*percent = 100.0 * mperf_diff / tsc_diff;
+ 		dprint("%s: TSC Ref - mperf_diff: %llu, tsc_diff: %llu\n",
+ 		       mperf_cstates[id].name, mperf_diff, tsc_diff);
+@@ -206,7 +206,7 @@ static int mperf_get_count_freq(unsigned int id, unsigned long long *count,
+ 
+ 	if (max_freq_mode == MAX_FREQ_TSC_REF) {
+ 		/* Calculate max_freq from TSC count */
+-		tsc_diff = tsc_at_measure_end - tsc_at_measure_start;
++		tsc_diff = tsc_at_measure_end[cpu] - tsc_at_measure_start[cpu];
+ 		time_diff = timespec_diff_us(time_start, time_end);
+ 		max_frequency = tsc_diff / time_diff;
+ 	}
+@@ -225,33 +225,27 @@ static int mperf_get_count_freq(unsigned int id, unsigned long long *count,
+ static int mperf_start(void)
+ {
+ 	int cpu;
+-	unsigned long long dbg;
+ 
+ 	clock_gettime(CLOCK_REALTIME, &time_start);
+-	mperf_get_tsc(&tsc_at_measure_start);
+ 
+-	for (cpu = 0; cpu < cpu_count; cpu++)
++	for (cpu = 0; cpu < cpu_count; cpu++) {
++		mperf_get_tsc(&tsc_at_measure_start[cpu]);
+ 		mperf_init_stats(cpu);
++	}
+ 
+-	mperf_get_tsc(&dbg);
+-	dprint("TSC diff: %llu\n", dbg - tsc_at_measure_start);
+ 	return 0;
+ }
+ 
+ static int mperf_stop(void)
+ {
+-	unsigned long long dbg;
+ 	int cpu;
+ 
+-	for (cpu = 0; cpu < cpu_count; cpu++)
++	for (cpu = 0; cpu < cpu_count; cpu++) {
+ 		mperf_measure_stats(cpu);
++		mperf_get_tsc(&tsc_at_measure_end[cpu]);
++	}
+ 
+-	mperf_get_tsc(&tsc_at_measure_end);
+ 	clock_gettime(CLOCK_REALTIME, &time_end);
+-
+-	mperf_get_tsc(&dbg);
+-	dprint("TSC diff: %llu\n", dbg - tsc_at_measure_end);
+-
+ 	return 0;
+ }
+ 
+@@ -353,7 +347,8 @@ struct cpuidle_monitor *mperf_register(void)
+ 	aperf_previous_count = calloc(cpu_count, sizeof(unsigned long long));
+ 	mperf_current_count = calloc(cpu_count, sizeof(unsigned long long));
+ 	aperf_current_count = calloc(cpu_count, sizeof(unsigned long long));
+-
++	tsc_at_measure_start = calloc(cpu_count, sizeof(unsigned long long));
++	tsc_at_measure_end = calloc(cpu_count, sizeof(unsigned long long));
+ 	mperf_monitor.name_len = strlen(mperf_monitor.name);
+ 	return &mperf_monitor;
+ }
+@@ -364,6 +359,8 @@ void mperf_unregister(void)
+ 	free(aperf_previous_count);
+ 	free(mperf_current_count);
+ 	free(aperf_current_count);
++	free(tsc_at_measure_start);
++	free(tsc_at_measure_end);
+ 	free(is_valid);
+ }
+ 
+diff --git a/tools/testing/selftests/memfd/fuse_test.c b/tools/testing/selftests/memfd/fuse_test.c
+index b018e835737df..cda63164d9d35 100644
+--- a/tools/testing/selftests/memfd/fuse_test.c
++++ b/tools/testing/selftests/memfd/fuse_test.c
+@@ -22,6 +22,7 @@
+ #include <linux/falloc.h>
+ #include <linux/fcntl.h>
+ #include <linux/memfd.h>
++#include <linux/types.h>
+ #include <sched.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index 8f42e17db5d09..1681016373076 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -68,7 +68,7 @@ setup()
+ cleanup()
+ {
+ 	$IP link del dev dummy0 &> /dev/null
+-	ip netns del ns1
++	ip netns del ns1 &> /dev/null
+ 	ip netns del ns2 &> /dev/null
+ }
+ 
+diff --git a/tools/testing/selftests/net/forwarding/tc_actions.sh b/tools/testing/selftests/net/forwarding/tc_actions.sh
+index d9eca227136bb..1e27031288c81 100755
+--- a/tools/testing/selftests/net/forwarding/tc_actions.sh
++++ b/tools/testing/selftests/net/forwarding/tc_actions.sh
+@@ -3,7 +3,7 @@
+ 
+ ALL_TESTS="gact_drop_and_ok_test mirred_egress_redirect_test \
+ 	mirred_egress_mirror_test matchall_mirred_egress_mirror_test \
+-	gact_trap_test"
++	gact_trap_test mirred_egress_to_ingress_tcp_test"
+ NUM_NETIFS=4
+ source tc_common.sh
+ source lib.sh
+@@ -153,6 +153,52 @@ gact_trap_test()
+ 	log_test "trap ($tcflags)"
+ }
+ 
++mirred_egress_to_ingress_tcp_test()
++{
++	local tmpfile=$(mktemp) tmpfile1=$(mktemp)
++
++	RET=0
++	dd conv=sparse status=none if=/dev/zero bs=1M count=2 of=$tmpfile
++	tc filter add dev $h1 protocol ip pref 100 handle 100 egress flower \
++		$tcflags ip_proto tcp src_ip 192.0.2.1 dst_ip 192.0.2.2 \
++			action ct commit nat src addr 192.0.2.2 pipe \
++			action ct clear pipe \
++			action ct commit nat dst addr 192.0.2.1 pipe \
++			action ct clear pipe \
++			action skbedit ptype host pipe \
++			action mirred ingress redirect dev $h1
++	tc filter add dev $h1 protocol ip pref 101 handle 101 egress flower \
++		$tcflags ip_proto icmp \
++			action mirred ingress redirect dev $h1
++	tc filter add dev $h1 protocol ip pref 102 handle 102 ingress flower \
++		ip_proto icmp \
++			action drop
++
++	ip vrf exec v$h1 nc --recv-only -w10 -l -p 12345 -o $tmpfile1  &
++	local rpid=$!
++	ip vrf exec v$h1 nc -w1 --send-only 192.0.2.2 12345 <$tmpfile
++	wait -n $rpid
++	cmp -s $tmpfile $tmpfile1
++	check_err $? "server output check failed"
++
++	$MZ $h1 -c 10 -p 64 -a $h1mac -b $h1mac -A 192.0.2.1 -B 192.0.2.1 \
++		-t icmp "ping,id=42,seq=5" -q
++	tc_check_packets "dev $h1 egress" 101 10
++	check_err $? "didn't mirred redirect ICMP"
++	tc_check_packets "dev $h1 ingress" 102 10
++	check_err $? "didn't drop mirred ICMP"
++	local overlimits=$(tc_rule_stats_get ${h1} 101 egress .overlimits)
++	test ${overlimits} = 10
++	check_err $? "wrong overlimits, expected 10 got ${overlimits}"
++
++	tc filter del dev $h1 egress protocol ip pref 100 handle 100 flower
++	tc filter del dev $h1 egress protocol ip pref 101 handle 101 flower
++	tc filter del dev $h1 ingress protocol ip pref 102 handle 102 flower
++
++	rm -f $tmpfile $tmpfile1
++	log_test "mirred_egress_to_ingress_tcp ($tcflags)"
++}
++
+ setup_prepare()
+ {
+ 	h1=${NETIFS[p1]}


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-06-05 11:50 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-06-05 11:50 UTC (permalink / raw
  To: gentoo-commits

commit:     26f8bf35b6fb51a168b7ee50e07ac0f8b7f60a0d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun  5 11:49:57 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun  5 11:49:57 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=26f8bf35

Linux patch 5.10.182

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1181_linux-5.10.182.patch | 1150 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1154 insertions(+)

diff --git a/0000_README b/0000_README
index 366c0d03..0400be74 100644
--- a/0000_README
+++ b/0000_README
@@ -767,6 +767,10 @@ Patch:  1180_linux-5.10.181.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.181
 
+Patch:  1181_linux-5.10.182.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.182
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1181_linux-5.10.182.patch b/1181_linux-5.10.182.patch
new file mode 100644
index 00000000..75cca61c
--- /dev/null
+++ b/1181_linux-5.10.182.patch
@@ -0,0 +1,1150 @@
+diff --git a/Makefile b/Makefile
+index 4e8289113a81f..2f0efde219023 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 181
++SUBLEVEL = 182
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
+index 14b52718917f6..0de49e33d422e 100644
+--- a/arch/x86/include/asm/intel-family.h
++++ b/arch/x86/include/asm/intel-family.h
+@@ -104,6 +104,8 @@
+ #define INTEL_FAM6_RAPTORLAKE_P		0xBA
+ #define INTEL_FAM6_RAPTORLAKE_S		0xBF
+ 
++#define INTEL_FAM6_RAPTORLAKE		0xB7
++
+ /* "Small Core" Processors (Atom) */
+ 
+ #define INTEL_FAM6_ATOM_BONNELL		0x1C /* Diamondville, Pineview */
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index b403c7f063b00..dbae98f096580 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -2267,24 +2267,23 @@ static void binder_deferred_fd_close(int fd)
+ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 					      struct binder_thread *thread,
+ 					      struct binder_buffer *buffer,
+-					      binder_size_t failed_at,
++					      binder_size_t off_end_offset,
+ 					      bool is_failure)
+ {
+ 	int debug_id = buffer->debug_id;
+-	binder_size_t off_start_offset, buffer_offset, off_end_offset;
++	binder_size_t off_start_offset, buffer_offset;
+ 
+ 	binder_debug(BINDER_DEBUG_TRANSACTION,
+ 		     "%d buffer release %d, size %zd-%zd, failed at %llx\n",
+ 		     proc->pid, buffer->debug_id,
+ 		     buffer->data_size, buffer->offsets_size,
+-		     (unsigned long long)failed_at);
++		     (unsigned long long)off_end_offset);
+ 
+ 	if (buffer->target_node)
+ 		binder_dec_node(buffer->target_node, 1, 0);
+ 
+ 	off_start_offset = ALIGN(buffer->data_size, sizeof(void *));
+-	off_end_offset = is_failure && failed_at ? failed_at :
+-				off_start_offset + buffer->offsets_size;
++
+ 	for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
+ 	     buffer_offset += sizeof(binder_size_t)) {
+ 		struct binder_object_header *hdr;
+@@ -2444,6 +2443,21 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 	}
+ }
+ 
++/* Clean up all the objects in the buffer */
++static inline void binder_release_entire_buffer(struct binder_proc *proc,
++						struct binder_thread *thread,
++						struct binder_buffer *buffer,
++						bool is_failure)
++{
++	binder_size_t off_end_offset;
++
++	off_end_offset = ALIGN(buffer->data_size, sizeof(void *));
++	off_end_offset += buffer->offsets_size;
++
++	binder_transaction_buffer_release(proc, thread, buffer,
++					  off_end_offset, is_failure);
++}
++
+ static int binder_translate_binder(struct flat_binder_object *fp,
+ 				   struct binder_transaction *t,
+ 				   struct binder_thread *thread)
+@@ -3926,7 +3940,7 @@ binder_free_buf(struct binder_proc *proc,
+ 		binder_node_inner_unlock(buf_node);
+ 	}
+ 	trace_binder_transaction_buffer_release(buffer);
+-	binder_transaction_buffer_release(proc, thread, buffer, 0, is_failure);
++	binder_release_entire_buffer(proc, thread, buffer, is_failure);
+ 	binder_alloc_free_buf(&proc->alloc, buffer);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
+index abd066e952286..438be215bbd45 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
+@@ -3,6 +3,7 @@
+ 
+ #include <linux/mlx5/vport.h>
+ #include "lib/devcom.h"
++#include "mlx5_core.h"
+ 
+ static LIST_HEAD(devcom_list);
+ 
+@@ -14,7 +15,7 @@ static LIST_HEAD(devcom_list);
+ struct mlx5_devcom_component {
+ 	struct {
+ 		void *data;
+-	} device[MLX5_MAX_PORTS];
++	} device[MLX5_DEVCOM_PORTS_SUPPORTED];
+ 
+ 	mlx5_devcom_event_handler_t handler;
+ 	struct rw_semaphore sem;
+@@ -25,7 +26,7 @@ struct mlx5_devcom_list {
+ 	struct list_head list;
+ 
+ 	struct mlx5_devcom_component components[MLX5_DEVCOM_NUM_COMPONENTS];
+-	struct mlx5_core_dev *devs[MLX5_MAX_PORTS];
++	struct mlx5_core_dev *devs[MLX5_DEVCOM_PORTS_SUPPORTED];
+ };
+ 
+ struct mlx5_devcom {
+@@ -74,13 +75,16 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev)
+ 
+ 	if (!mlx5_core_is_pf(dev))
+ 		return NULL;
++	if (MLX5_CAP_GEN(dev, num_lag_ports) != MLX5_DEVCOM_PORTS_SUPPORTED)
++		return NULL;
+ 
++	mlx5_dev_list_lock();
+ 	sguid0 = mlx5_query_nic_system_image_guid(dev);
+ 	list_for_each_entry(iter, &devcom_list, list) {
+ 		struct mlx5_core_dev *tmp_dev = NULL;
+ 
+ 		idx = -1;
+-		for (i = 0; i < MLX5_MAX_PORTS; i++) {
++		for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++) {
+ 			if (iter->devs[i])
+ 				tmp_dev = iter->devs[i];
+ 			else
+@@ -100,8 +104,10 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev)
+ 
+ 	if (!priv) {
+ 		priv = mlx5_devcom_list_alloc();
+-		if (!priv)
+-			return ERR_PTR(-ENOMEM);
++		if (!priv) {
++			devcom = ERR_PTR(-ENOMEM);
++			goto out;
++		}
+ 
+ 		idx = 0;
+ 		new_priv = true;
+@@ -112,12 +118,14 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev)
+ 	if (!devcom) {
+ 		if (new_priv)
+ 			kfree(priv);
+-		return ERR_PTR(-ENOMEM);
++		devcom = ERR_PTR(-ENOMEM);
++		goto out;
+ 	}
+ 
+ 	if (new_priv)
+ 		list_add(&priv->list, &devcom_list);
+-
++out:
++	mlx5_dev_list_unlock();
+ 	return devcom;
+ }
+ 
+@@ -130,20 +138,23 @@ void mlx5_devcom_unregister_device(struct mlx5_devcom *devcom)
+ 	if (IS_ERR_OR_NULL(devcom))
+ 		return;
+ 
++	mlx5_dev_list_lock();
+ 	priv = devcom->priv;
+ 	priv->devs[devcom->idx] = NULL;
+ 
+ 	kfree(devcom);
+ 
+-	for (i = 0; i < MLX5_MAX_PORTS; i++)
++	for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++)
+ 		if (priv->devs[i])
+ 			break;
+ 
+-	if (i != MLX5_MAX_PORTS)
+-		return;
++	if (i != MLX5_DEVCOM_PORTS_SUPPORTED)
++		goto out;
+ 
+ 	list_del(&priv->list);
+ 	kfree(priv);
++out:
++	mlx5_dev_list_unlock();
+ }
+ 
+ void mlx5_devcom_register_component(struct mlx5_devcom *devcom,
+@@ -192,7 +203,7 @@ int mlx5_devcom_send_event(struct mlx5_devcom *devcom,
+ 
+ 	comp = &devcom->priv->components[id];
+ 	down_write(&comp->sem);
+-	for (i = 0; i < MLX5_MAX_PORTS; i++)
++	for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++)
+ 		if (i != devcom->idx && comp->device[i].data) {
+ 			err = comp->handler(event, comp->device[i].data,
+ 					    event_data);
+@@ -240,7 +251,7 @@ void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom,
+ 		return NULL;
+ 	}
+ 
+-	for (i = 0; i < MLX5_MAX_PORTS; i++)
++	for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++)
+ 		if (i != devcom->idx)
+ 			break;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h
+index 939d5bf1581b5..94313c18bb647 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h
+@@ -6,6 +6,8 @@
+ 
+ #include <linux/mlx5/driver.h>
+ 
++#define MLX5_DEVCOM_PORTS_SUPPORTED 2
++
+ enum mlx5_devcom_components {
+ 	MLX5_DEVCOM_ESW_OFFLOADS,
+ 
+diff --git a/drivers/net/phy/mscc/mscc.h b/drivers/net/phy/mscc/mscc.h
+index c2023f93c0b24..79117d281c1ec 100644
+--- a/drivers/net/phy/mscc/mscc.h
++++ b/drivers/net/phy/mscc/mscc.h
+@@ -175,6 +175,7 @@ enum rgmii_clock_delay {
+ #define VSC8502_RGMII_CNTL		  20
+ #define VSC8502_RGMII_RX_DELAY_MASK	  0x0070
+ #define VSC8502_RGMII_TX_DELAY_MASK	  0x0007
++#define VSC8502_RGMII_RX_CLK_DISABLE	  0x0800
+ 
+ #define MSCC_PHY_WOL_LOWER_MAC_ADDR	  21
+ #define MSCC_PHY_WOL_MID_MAC_ADDR	  22
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index ffac713afa551..c64ac142509a5 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -527,14 +527,27 @@ out_unlock:
+  *  * 2.0 ns (which causes the data to be sampled at exactly half way between
+  *    clock transitions at 1000 Mbps) if delays should be enabled
+  */
+-static int vsc85xx_rgmii_set_skews(struct phy_device *phydev, u32 rgmii_cntl,
+-				   u16 rgmii_rx_delay_mask,
+-				   u16 rgmii_tx_delay_mask)
++static int vsc85xx_update_rgmii_cntl(struct phy_device *phydev, u32 rgmii_cntl,
++				     u16 rgmii_rx_delay_mask,
++				     u16 rgmii_tx_delay_mask)
+ {
+ 	u16 rgmii_rx_delay_pos = ffs(rgmii_rx_delay_mask) - 1;
+ 	u16 rgmii_tx_delay_pos = ffs(rgmii_tx_delay_mask) - 1;
+ 	u16 reg_val = 0;
+-	int rc;
++	u16 mask = 0;
++	int rc = 0;
++
++	/* For traffic to pass, the VSC8502 family needs the RX_CLK disable bit
++	 * to be unset for all PHY modes, so do that as part of the paged
++	 * register modification.
++	 * For some family members (like VSC8530/31/40/41) this bit is reserved
++	 * and read-only, and the RX clock is enabled by default.
++	 */
++	if (rgmii_cntl == VSC8502_RGMII_CNTL)
++		mask |= VSC8502_RGMII_RX_CLK_DISABLE;
++
++	if (phy_interface_is_rgmii(phydev))
++		mask |= rgmii_rx_delay_mask | rgmii_tx_delay_mask;
+ 
+ 	mutex_lock(&phydev->lock);
+ 
+@@ -545,10 +558,9 @@ static int vsc85xx_rgmii_set_skews(struct phy_device *phydev, u32 rgmii_cntl,
+ 	    phydev->interface == PHY_INTERFACE_MODE_RGMII_ID)
+ 		reg_val |= RGMII_CLK_DELAY_2_0_NS << rgmii_tx_delay_pos;
+ 
+-	rc = phy_modify_paged(phydev, MSCC_PHY_PAGE_EXTENDED_2,
+-			      rgmii_cntl,
+-			      rgmii_rx_delay_mask | rgmii_tx_delay_mask,
+-			      reg_val);
++	if (mask)
++		rc = phy_modify_paged(phydev, MSCC_PHY_PAGE_EXTENDED_2,
++				      rgmii_cntl, mask, reg_val);
+ 
+ 	mutex_unlock(&phydev->lock);
+ 
+@@ -557,19 +569,11 @@ static int vsc85xx_rgmii_set_skews(struct phy_device *phydev, u32 rgmii_cntl,
+ 
+ static int vsc85xx_default_config(struct phy_device *phydev)
+ {
+-	int rc;
+-
+ 	phydev->mdix_ctrl = ETH_TP_MDI_AUTO;
+ 
+-	if (phy_interface_mode_is_rgmii(phydev->interface)) {
+-		rc = vsc85xx_rgmii_set_skews(phydev, VSC8502_RGMII_CNTL,
+-					     VSC8502_RGMII_RX_DELAY_MASK,
+-					     VSC8502_RGMII_TX_DELAY_MASK);
+-		if (rc)
+-			return rc;
+-	}
+-
+-	return 0;
++	return vsc85xx_update_rgmii_cntl(phydev, VSC8502_RGMII_CNTL,
++					 VSC8502_RGMII_RX_DELAY_MASK,
++					 VSC8502_RGMII_TX_DELAY_MASK);
+ }
+ 
+ static int vsc85xx_get_tunable(struct phy_device *phydev,
+@@ -1646,13 +1650,11 @@ static int vsc8584_config_init(struct phy_device *phydev)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (phy_interface_is_rgmii(phydev)) {
+-		ret = vsc85xx_rgmii_set_skews(phydev, VSC8572_RGMII_CNTL,
+-					      VSC8572_RGMII_RX_DELAY_MASK,
+-					      VSC8572_RGMII_TX_DELAY_MASK);
+-		if (ret)
+-			return ret;
+-	}
++	ret = vsc85xx_update_rgmii_cntl(phydev, VSC8572_RGMII_CNTL,
++					VSC8572_RGMII_RX_DELAY_MASK,
++					VSC8572_RGMII_TX_DELAY_MASK);
++	if (ret)
++		return ret;
+ 
+ 	ret = genphy_soft_reset(phydev);
+ 	if (ret)
+diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c
+index 338dd82007e4e..5769b36851c34 100644
+--- a/drivers/power/supply/bq24190_charger.c
++++ b/drivers/power/supply/bq24190_charger.c
+@@ -1203,8 +1203,19 @@ static void bq24190_input_current_limit_work(struct work_struct *work)
+ 	struct bq24190_dev_info *bdi =
+ 		container_of(work, struct bq24190_dev_info,
+ 			     input_current_limit_work.work);
++	union power_supply_propval val;
++	int ret;
+ 
+-	power_supply_set_input_current_limit_from_supplier(bdi->charger);
++	ret = power_supply_get_property_from_supplier(bdi->charger,
++						      POWER_SUPPLY_PROP_CURRENT_MAX,
++						      &val);
++	if (ret)
++		return;
++
++	bq24190_charger_set_property(bdi->charger,
++				     POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT,
++				     &val);
++	power_supply_changed(bdi->charger);
+ }
+ 
+ /* Sync the input-current-limit with our parent supply (if we have one) */
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index c08dd4e6d35ad..235647b21af71 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -1507,14 +1507,6 @@ static int bq27xxx_battery_read_charge(struct bq27xxx_device_info *di, u8 reg)
+  */
+ static inline int bq27xxx_battery_read_nac(struct bq27xxx_device_info *di)
+ {
+-	int flags;
+-
+-	if (di->opts & BQ27XXX_O_ZERO) {
+-		flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, true);
+-		if (flags >= 0 && (flags & BQ27000_FLAG_CI))
+-			return -ENODATA;
+-	}
+-
+ 	return bq27xxx_battery_read_charge(di, BQ27XXX_REG_NAC);
+ }
+ 
+@@ -1668,6 +1660,18 @@ static bool bq27xxx_battery_dead(struct bq27xxx_device_info *di, u16 flags)
+ 		return flags & (BQ27XXX_FLAG_SOC1 | BQ27XXX_FLAG_SOCF);
+ }
+ 
++/*
++ * Returns true if reported battery capacity is inaccurate
++ */
++static bool bq27xxx_battery_capacity_inaccurate(struct bq27xxx_device_info *di,
++						 u16 flags)
++{
++	if (di->opts & BQ27XXX_O_HAS_CI)
++		return (flags & BQ27000_FLAG_CI);
++	else
++		return false;
++}
++
+ static int bq27xxx_battery_read_health(struct bq27xxx_device_info *di)
+ {
+ 	/* Unlikely but important to return first */
+@@ -1677,14 +1681,89 @@ static int bq27xxx_battery_read_health(struct bq27xxx_device_info *di)
+ 		return POWER_SUPPLY_HEALTH_COLD;
+ 	if (unlikely(bq27xxx_battery_dead(di, di->cache.flags)))
+ 		return POWER_SUPPLY_HEALTH_DEAD;
++	if (unlikely(bq27xxx_battery_capacity_inaccurate(di, di->cache.flags)))
++		return POWER_SUPPLY_HEALTH_CALIBRATION_REQUIRED;
+ 
+ 	return POWER_SUPPLY_HEALTH_GOOD;
+ }
+ 
++static bool bq27xxx_battery_is_full(struct bq27xxx_device_info *di, int flags)
++{
++	if (di->opts & BQ27XXX_O_ZERO)
++		return (flags & BQ27000_FLAG_FC);
++	else if (di->opts & BQ27Z561_O_BITS)
++		return (flags & BQ27Z561_FLAG_FC);
++	else
++		return (flags & BQ27XXX_FLAG_FC);
++}
++
++/*
++ * Return the battery average current in µA and the status
++ * Note that current can be negative signed as well
++ * Or 0 if something fails.
++ */
++static int bq27xxx_battery_current_and_status(
++	struct bq27xxx_device_info *di,
++	union power_supply_propval *val_curr,
++	union power_supply_propval *val_status,
++	struct bq27xxx_reg_cache *cache)
++{
++	bool single_flags = (di->opts & BQ27XXX_O_ZERO);
++	int curr;
++	int flags;
++
++	curr = bq27xxx_read(di, BQ27XXX_REG_AI, false);
++	if (curr < 0) {
++		dev_err(di->dev, "error reading current\n");
++		return curr;
++	}
++
++	if (cache) {
++		flags = cache->flags;
++	} else {
++		flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, single_flags);
++		if (flags < 0) {
++			dev_err(di->dev, "error reading flags\n");
++			return flags;
++		}
++	}
++
++	if (di->opts & BQ27XXX_O_ZERO) {
++		if (!(flags & BQ27000_FLAG_CHGS)) {
++			dev_dbg(di->dev, "negative current!\n");
++			curr = -curr;
++		}
++
++		curr = curr * BQ27XXX_CURRENT_CONSTANT / BQ27XXX_RS;
++	} else {
++		/* Other gauges return signed value */
++		curr = (int)((s16)curr) * 1000;
++	}
++
++	if (val_curr)
++		val_curr->intval = curr;
++
++	if (val_status) {
++		if (curr > 0) {
++			val_status->intval = POWER_SUPPLY_STATUS_CHARGING;
++		} else if (curr < 0) {
++			val_status->intval = POWER_SUPPLY_STATUS_DISCHARGING;
++		} else {
++			if (bq27xxx_battery_is_full(di, flags))
++				val_status->intval = POWER_SUPPLY_STATUS_FULL;
++			else
++				val_status->intval =
++					POWER_SUPPLY_STATUS_NOT_CHARGING;
++		}
++	}
++
++	return 0;
++}
++
+ static void bq27xxx_battery_update_unlocked(struct bq27xxx_device_info *di)
+ {
++	union power_supply_propval status = di->last_status;
+ 	struct bq27xxx_reg_cache cache = {0, };
+-	bool has_ci_flag = di->opts & BQ27XXX_O_HAS_CI;
+ 	bool has_singe_flag = di->opts & BQ27XXX_O_ZERO;
+ 
+ 	cache.flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, has_singe_flag);
+@@ -1692,41 +1771,40 @@ static void bq27xxx_battery_update_unlocked(struct bq27xxx_device_info *di)
+ 		cache.flags = -1; /* read error */
+ 	if (cache.flags >= 0) {
+ 		cache.temperature = bq27xxx_battery_read_temperature(di);
+-		if (has_ci_flag && (cache.flags & BQ27000_FLAG_CI)) {
+-			dev_info_once(di->dev, "battery is not calibrated! ignoring capacity values\n");
+-			cache.capacity = -ENODATA;
+-			cache.energy = -ENODATA;
+-			cache.time_to_empty = -ENODATA;
+-			cache.time_to_empty_avg = -ENODATA;
+-			cache.time_to_full = -ENODATA;
+-			cache.charge_full = -ENODATA;
+-			cache.health = -ENODATA;
+-		} else {
+-			if (di->regs[BQ27XXX_REG_TTE] != INVALID_REG_ADDR)
+-				cache.time_to_empty = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTE);
+-			if (di->regs[BQ27XXX_REG_TTECP] != INVALID_REG_ADDR)
+-				cache.time_to_empty_avg = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTECP);
+-			if (di->regs[BQ27XXX_REG_TTF] != INVALID_REG_ADDR)
+-				cache.time_to_full = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTF);
+-
+-			cache.charge_full = bq27xxx_battery_read_fcc(di);
+-			cache.capacity = bq27xxx_battery_read_soc(di);
+-			if (di->regs[BQ27XXX_REG_AE] != INVALID_REG_ADDR)
+-				cache.energy = bq27xxx_battery_read_energy(di);
+-			di->cache.flags = cache.flags;
+-			cache.health = bq27xxx_battery_read_health(di);
+-		}
++		if (di->regs[BQ27XXX_REG_TTE] != INVALID_REG_ADDR)
++			cache.time_to_empty = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTE);
++		if (di->regs[BQ27XXX_REG_TTECP] != INVALID_REG_ADDR)
++			cache.time_to_empty_avg = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTECP);
++		if (di->regs[BQ27XXX_REG_TTF] != INVALID_REG_ADDR)
++			cache.time_to_full = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTF);
++
++		cache.charge_full = bq27xxx_battery_read_fcc(di);
++		cache.capacity = bq27xxx_battery_read_soc(di);
++		if (di->regs[BQ27XXX_REG_AE] != INVALID_REG_ADDR)
++			cache.energy = bq27xxx_battery_read_energy(di);
++		di->cache.flags = cache.flags;
++		cache.health = bq27xxx_battery_read_health(di);
+ 		if (di->regs[BQ27XXX_REG_CYCT] != INVALID_REG_ADDR)
+ 			cache.cycle_count = bq27xxx_battery_read_cyct(di);
+ 
++		/*
++		 * On gauges with signed current reporting the current must be
++		 * checked to detect charging <-> discharging status changes.
++		 */
++		if (!(di->opts & BQ27XXX_O_ZERO))
++			bq27xxx_battery_current_and_status(di, NULL, &status, &cache);
++
+ 		/* We only have to read charge design full once */
+ 		if (di->charge_design_full <= 0)
+ 			di->charge_design_full = bq27xxx_battery_read_dcap(di);
+ 	}
+ 
+ 	if ((di->cache.capacity != cache.capacity) ||
+-	    (di->cache.flags != cache.flags))
++	    (di->cache.flags != cache.flags) ||
++	    (di->last_status.intval != status.intval)) {
++		di->last_status.intval = status.intval;
+ 		power_supply_changed(di->bat);
++	}
+ 
+ 	if (memcmp(&di->cache, &cache, sizeof(cache)) != 0)
+ 		di->cache = cache;
+@@ -1754,39 +1832,6 @@ static void bq27xxx_battery_poll(struct work_struct *work)
+ 	bq27xxx_battery_update(di);
+ }
+ 
+-/*
+- * Return the battery average current in µA
+- * Note that current can be negative signed as well
+- * Or 0 if something fails.
+- */
+-static int bq27xxx_battery_current(struct bq27xxx_device_info *di,
+-				   union power_supply_propval *val)
+-{
+-	int curr;
+-	int flags;
+-
+-	curr = bq27xxx_read(di, BQ27XXX_REG_AI, false);
+-	if (curr < 0) {
+-		dev_err(di->dev, "error reading current\n");
+-		return curr;
+-	}
+-
+-	if (di->opts & BQ27XXX_O_ZERO) {
+-		flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, true);
+-		if (flags & BQ27000_FLAG_CHGS) {
+-			dev_dbg(di->dev, "negative current!\n");
+-			curr = -curr;
+-		}
+-
+-		val->intval = curr * BQ27XXX_CURRENT_CONSTANT / BQ27XXX_RS;
+-	} else {
+-		/* Other gauges return signed value */
+-		val->intval = (int)((s16)curr) * 1000;
+-	}
+-
+-	return 0;
+-}
+-
+ /*
+  * Get the average power in µW
+  * Return < 0 if something fails.
+@@ -1813,43 +1858,6 @@ static int bq27xxx_battery_pwr_avg(struct bq27xxx_device_info *di,
+ 	return 0;
+ }
+ 
+-static int bq27xxx_battery_status(struct bq27xxx_device_info *di,
+-				  union power_supply_propval *val)
+-{
+-	int status;
+-
+-	if (di->opts & BQ27XXX_O_ZERO) {
+-		if (di->cache.flags & BQ27000_FLAG_FC)
+-			status = POWER_SUPPLY_STATUS_FULL;
+-		else if (di->cache.flags & BQ27000_FLAG_CHGS)
+-			status = POWER_SUPPLY_STATUS_CHARGING;
+-		else
+-			status = POWER_SUPPLY_STATUS_DISCHARGING;
+-	} else if (di->opts & BQ27Z561_O_BITS) {
+-		if (di->cache.flags & BQ27Z561_FLAG_FC)
+-			status = POWER_SUPPLY_STATUS_FULL;
+-		else if (di->cache.flags & BQ27Z561_FLAG_DIS_CH)
+-			status = POWER_SUPPLY_STATUS_DISCHARGING;
+-		else
+-			status = POWER_SUPPLY_STATUS_CHARGING;
+-	} else {
+-		if (di->cache.flags & BQ27XXX_FLAG_FC)
+-			status = POWER_SUPPLY_STATUS_FULL;
+-		else if (di->cache.flags & BQ27XXX_FLAG_DSC)
+-			status = POWER_SUPPLY_STATUS_DISCHARGING;
+-		else
+-			status = POWER_SUPPLY_STATUS_CHARGING;
+-	}
+-
+-	if ((status == POWER_SUPPLY_STATUS_DISCHARGING) &&
+-	    (power_supply_am_i_supplied(di->bat) > 0))
+-		status = POWER_SUPPLY_STATUS_NOT_CHARGING;
+-
+-	val->intval = status;
+-
+-	return 0;
+-}
+-
+ static int bq27xxx_battery_capacity_level(struct bq27xxx_device_info *di,
+ 					  union power_supply_propval *val)
+ {
+@@ -1935,7 +1943,7 @@ static int bq27xxx_battery_get_property(struct power_supply *psy,
+ 
+ 	switch (psp) {
+ 	case POWER_SUPPLY_PROP_STATUS:
+-		ret = bq27xxx_battery_status(di, val);
++		ret = bq27xxx_battery_current_and_status(di, NULL, val, NULL);
+ 		break;
+ 	case POWER_SUPPLY_PROP_VOLTAGE_NOW:
+ 		ret = bq27xxx_battery_voltage(di, val);
+@@ -1944,7 +1952,7 @@ static int bq27xxx_battery_get_property(struct power_supply *psy,
+ 		val->intval = di->cache.flags < 0 ? 0 : 1;
+ 		break;
+ 	case POWER_SUPPLY_PROP_CURRENT_NOW:
+-		ret = bq27xxx_battery_current(di, val);
++		ret = bq27xxx_battery_current_and_status(di, val, NULL, NULL);
+ 		break;
+ 	case POWER_SUPPLY_PROP_CAPACITY:
+ 		ret = bq27xxx_simple_value(di->cache.capacity, val);
+@@ -2014,8 +2022,8 @@ static void bq27xxx_external_power_changed(struct power_supply *psy)
+ {
+ 	struct bq27xxx_device_info *di = power_supply_get_drvdata(psy);
+ 
+-	cancel_delayed_work_sync(&di->work);
+-	schedule_delayed_work(&di->work, 0);
++	/* After charger plug in/out wait 0.5s for things to stabilize */
++	mod_delayed_work(system_wq, &di->work, HZ / 2);
+ }
+ 
+ int bq27xxx_battery_setup(struct bq27xxx_device_info *di)
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index 2b644590fa8e0..53e5b3e04be13 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -375,46 +375,49 @@ int power_supply_is_system_supplied(void)
+ }
+ EXPORT_SYMBOL_GPL(power_supply_is_system_supplied);
+ 
+-static int __power_supply_get_supplier_max_current(struct device *dev,
+-						   void *data)
++struct psy_get_supplier_prop_data {
++	struct power_supply *psy;
++	enum power_supply_property psp;
++	union power_supply_propval *val;
++};
++
++static int __power_supply_get_supplier_property(struct device *dev, void *_data)
+ {
+-	union power_supply_propval ret = {0,};
+ 	struct power_supply *epsy = dev_get_drvdata(dev);
+-	struct power_supply *psy = data;
++	struct psy_get_supplier_prop_data *data = _data;
+ 
+-	if (__power_supply_is_supplied_by(epsy, psy))
+-		if (!epsy->desc->get_property(epsy,
+-					      POWER_SUPPLY_PROP_CURRENT_MAX,
+-					      &ret))
+-			return ret.intval;
++	if (__power_supply_is_supplied_by(epsy, data->psy))
++		if (!epsy->desc->get_property(epsy, data->psp, data->val))
++			return 1; /* Success */
+ 
+-	return 0;
++	return 0; /* Continue iterating */
+ }
+ 
+-int power_supply_set_input_current_limit_from_supplier(struct power_supply *psy)
++int power_supply_get_property_from_supplier(struct power_supply *psy,
++					    enum power_supply_property psp,
++					    union power_supply_propval *val)
+ {
+-	union power_supply_propval val = {0,};
+-	int curr;
+-
+-	if (!psy->desc->set_property)
+-		return -EINVAL;
++	struct psy_get_supplier_prop_data data = {
++		.psy = psy,
++		.psp = psp,
++		.val = val,
++	};
++	int ret;
+ 
+ 	/*
+ 	 * This function is not intended for use with a supply with multiple
+-	 * suppliers, we simply pick the first supply to report a non 0
+-	 * max-current.
++	 * suppliers, we simply pick the first supply to report the psp.
+ 	 */
+-	curr = class_for_each_device(power_supply_class, NULL, psy,
+-				      __power_supply_get_supplier_max_current);
+-	if (curr <= 0)
+-		return (curr == 0) ? -ENODEV : curr;
+-
+-	val.intval = curr;
++	ret = class_for_each_device(power_supply_class, NULL, &data,
++				    __power_supply_get_supplier_property);
++	if (ret < 0)
++		return ret;
++	if (ret == 0)
++		return -ENODEV;
+ 
+-	return psy->desc->set_property(psy,
+-				POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT, &val);
++	return 0;
+ }
+-EXPORT_SYMBOL_GPL(power_supply_set_input_current_limit_from_supplier);
++EXPORT_SYMBOL_GPL(power_supply_get_property_from_supplier);
+ 
+ int power_supply_set_battery_charged(struct power_supply *psy)
+ {
+diff --git a/drivers/regulator/helpers.c b/drivers/regulator/helpers.c
+index e4bb09bbd3fa6..a356f84b1285b 100644
+--- a/drivers/regulator/helpers.c
++++ b/drivers/regulator/helpers.c
+@@ -879,3 +879,68 @@ bool regulator_is_equal(struct regulator *reg1, struct regulator *reg2)
+ 	return reg1->rdev == reg2->rdev;
+ }
+ EXPORT_SYMBOL_GPL(regulator_is_equal);
++
++static int find_closest_bigger(unsigned int target, const unsigned int *table,
++			       unsigned int num_sel, unsigned int *sel)
++{
++	unsigned int s, tmp, max, maxsel = 0;
++	bool found = false;
++
++	max = table[0];
++
++	for (s = 0; s < num_sel; s++) {
++		if (table[s] > max) {
++			max = table[s];
++			maxsel = s;
++		}
++		if (table[s] >= target) {
++			if (!found || table[s] - target < tmp - target) {
++				tmp = table[s];
++				*sel = s;
++				found = true;
++				if (tmp == target)
++					break;
++			}
++		}
++	}
++
++	if (!found) {
++		*sel = maxsel;
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
++/**
++ * regulator_set_ramp_delay_regmap - set_ramp_delay() helper
++ *
++ * @rdev: regulator to operate on
++ *
++ * Regulators that use regmap for their register I/O can set the ramp_reg
++ * and ramp_mask fields in their descriptor and then use this as their
++ * set_ramp_delay operation, saving some code.
++ */
++int regulator_set_ramp_delay_regmap(struct regulator_dev *rdev, int ramp_delay)
++{
++	int ret;
++	unsigned int sel;
++
++	if (!rdev->desc->n_ramp_values)
++		return -EINVAL;
++
++	ret = find_closest_bigger(ramp_delay, rdev->desc->ramp_delay_table,
++				  rdev->desc->n_ramp_values, &sel);
++
++	if (ret) {
++		dev_warn(rdev_get_dev(rdev),
++			 "Can't set ramp-delay %u, setting %u\n", ramp_delay,
++			 rdev->desc->ramp_delay_table[sel]);
++	}
++
++	sel <<= ffs(rdev->desc->ramp_mask) - 1;
++
++	return regmap_update_bits(rdev->regmap, rdev->desc->ramp_reg,
++				  rdev->desc->ramp_mask, sel);
++}
++EXPORT_SYMBOL_GPL(regulator_set_ramp_delay_regmap);
+diff --git a/drivers/regulator/pca9450-regulator.c b/drivers/regulator/pca9450-regulator.c
+index d38109cc3a011..b3d206ebb2894 100644
+--- a/drivers/regulator/pca9450-regulator.c
++++ b/drivers/regulator/pca9450-regulator.c
+@@ -65,32 +65,9 @@ static const struct regmap_config pca9450_regmap_config = {
+  * 10: 25mV/4usec
+  * 11: 25mV/8usec
+  */
+-static int pca9450_dvs_set_ramp_delay(struct regulator_dev *rdev,
+-				      int ramp_delay)
+-{
+-	int id = rdev_get_id(rdev);
+-	unsigned int ramp_value;
+-
+-	switch (ramp_delay) {
+-	case 1 ... 3125:
+-		ramp_value = BUCK1_RAMP_3P125MV;
+-		break;
+-	case 3126 ... 6250:
+-		ramp_value = BUCK1_RAMP_6P25MV;
+-		break;
+-	case 6251 ... 12500:
+-		ramp_value = BUCK1_RAMP_12P5MV;
+-		break;
+-	case 12501 ... 25000:
+-		ramp_value = BUCK1_RAMP_25MV;
+-		break;
+-	default:
+-		ramp_value = BUCK1_RAMP_25MV;
+-	}
+-
+-	return regmap_update_bits(rdev->regmap, PCA9450_REG_BUCK1CTRL + id * 3,
+-				  BUCK1_RAMP_MASK, ramp_value << 6);
+-}
++static const unsigned int pca9450_dvs_buck_ramp_table[] = {
++	25000, 12500, 6250, 3125
++};
+ 
+ static const struct regulator_ops pca9450_dvs_buck_regulator_ops = {
+ 	.enable = regulator_enable_regmap,
+@@ -100,7 +77,7 @@ static const struct regulator_ops pca9450_dvs_buck_regulator_ops = {
+ 	.set_voltage_sel = regulator_set_voltage_sel_regmap,
+ 	.get_voltage_sel = regulator_get_voltage_sel_regmap,
+ 	.set_voltage_time_sel = regulator_set_voltage_time_sel,
+-	.set_ramp_delay = pca9450_dvs_set_ramp_delay,
++	.set_ramp_delay	= regulator_set_ramp_delay_regmap,
+ };
+ 
+ static const struct regulator_ops pca9450_buck_regulator_ops = {
+@@ -251,6 +228,10 @@ static const struct pca9450_regulator_desc pca9450a_regulators[] = {
+ 			.vsel_mask = BUCK1OUT_DVS0_MASK,
+ 			.enable_reg = PCA9450_REG_BUCK1CTRL,
+ 			.enable_mask = BUCK1_ENMODE_MASK,
++			.ramp_reg = PCA9450_REG_BUCK1CTRL,
++			.ramp_mask = BUCK1_RAMP_MASK,
++			.ramp_delay_table = pca9450_dvs_buck_ramp_table,
++			.n_ramp_values = ARRAY_SIZE(pca9450_dvs_buck_ramp_table),
+ 			.owner = THIS_MODULE,
+ 			.of_parse_cb = pca9450_set_dvs_levels,
+ 		},
+@@ -275,7 +256,11 @@ static const struct pca9450_regulator_desc pca9450a_regulators[] = {
+ 			.vsel_reg = PCA9450_REG_BUCK2OUT_DVS0,
+ 			.vsel_mask = BUCK2OUT_DVS0_MASK,
+ 			.enable_reg = PCA9450_REG_BUCK2CTRL,
+-			.enable_mask = BUCK1_ENMODE_MASK,
++			.enable_mask = BUCK2_ENMODE_MASK,
++			.ramp_reg = PCA9450_REG_BUCK2CTRL,
++			.ramp_mask = BUCK2_RAMP_MASK,
++			.ramp_delay_table = pca9450_dvs_buck_ramp_table,
++			.n_ramp_values = ARRAY_SIZE(pca9450_dvs_buck_ramp_table),
+ 			.owner = THIS_MODULE,
+ 			.of_parse_cb = pca9450_set_dvs_levels,
+ 		},
+@@ -301,6 +286,10 @@ static const struct pca9450_regulator_desc pca9450a_regulators[] = {
+ 			.vsel_mask = BUCK3OUT_DVS0_MASK,
+ 			.enable_reg = PCA9450_REG_BUCK3CTRL,
+ 			.enable_mask = BUCK3_ENMODE_MASK,
++			.ramp_reg = PCA9450_REG_BUCK3CTRL,
++			.ramp_mask = BUCK3_RAMP_MASK,
++			.ramp_delay_table = pca9450_dvs_buck_ramp_table,
++			.n_ramp_values = ARRAY_SIZE(pca9450_dvs_buck_ramp_table),
+ 			.owner = THIS_MODULE,
+ 			.of_parse_cb = pca9450_set_dvs_levels,
+ 		},
+@@ -477,6 +466,10 @@ static const struct pca9450_regulator_desc pca9450bc_regulators[] = {
+ 			.vsel_mask = BUCK1OUT_DVS0_MASK,
+ 			.enable_reg = PCA9450_REG_BUCK1CTRL,
+ 			.enable_mask = BUCK1_ENMODE_MASK,
++			.ramp_reg = PCA9450_REG_BUCK1CTRL,
++			.ramp_mask = BUCK1_RAMP_MASK,
++			.ramp_delay_table = pca9450_dvs_buck_ramp_table,
++			.n_ramp_values = ARRAY_SIZE(pca9450_dvs_buck_ramp_table),
+ 			.owner = THIS_MODULE,
+ 			.of_parse_cb = pca9450_set_dvs_levels,
+ 		},
+@@ -501,7 +494,11 @@ static const struct pca9450_regulator_desc pca9450bc_regulators[] = {
+ 			.vsel_reg = PCA9450_REG_BUCK2OUT_DVS0,
+ 			.vsel_mask = BUCK2OUT_DVS0_MASK,
+ 			.enable_reg = PCA9450_REG_BUCK2CTRL,
+-			.enable_mask = BUCK1_ENMODE_MASK,
++			.enable_mask = BUCK2_ENMODE_MASK,
++			.ramp_reg = PCA9450_REG_BUCK2CTRL,
++			.ramp_mask = BUCK2_RAMP_MASK,
++			.ramp_delay_table = pca9450_dvs_buck_ramp_table,
++			.n_ramp_values = ARRAY_SIZE(pca9450_dvs_buck_ramp_table),
+ 			.owner = THIS_MODULE,
+ 			.of_parse_cb = pca9450_set_dvs_levels,
+ 		},
+diff --git a/include/linux/power/bq27xxx_battery.h b/include/linux/power/bq27xxx_battery.h
+index 705b94bd091e3..63964196a436e 100644
+--- a/include/linux/power/bq27xxx_battery.h
++++ b/include/linux/power/bq27xxx_battery.h
+@@ -2,6 +2,8 @@
+ #ifndef __LINUX_BQ27X00_BATTERY_H__
+ #define __LINUX_BQ27X00_BATTERY_H__
+ 
++#include <linux/power_supply.h>
++
+ enum bq27xxx_chip {
+ 	BQ27000 = 1, /* bq27000, bq27200 */
+ 	BQ27010, /* bq27010, bq27210 */
+@@ -69,6 +71,7 @@ struct bq27xxx_device_info {
+ 	int charge_design_full;
+ 	bool removed;
+ 	unsigned long last_update;
++	union power_supply_propval last_status;
+ 	struct delayed_work work;
+ 	struct power_supply *bat;
+ 	struct list_head list;
+diff --git a/include/linux/power_supply.h b/include/linux/power_supply.h
+index 81a55e974feb1..e6fe2f581bdaf 100644
+--- a/include/linux/power_supply.h
++++ b/include/linux/power_supply.h
+@@ -413,8 +413,9 @@ power_supply_temp2resist_simple(struct power_supply_resistance_temp_table *table
+ 				int table_len, int temp);
+ extern void power_supply_changed(struct power_supply *psy);
+ extern int power_supply_am_i_supplied(struct power_supply *psy);
+-extern int power_supply_set_input_current_limit_from_supplier(
+-					 struct power_supply *psy);
++int power_supply_get_property_from_supplier(struct power_supply *psy,
++					    enum power_supply_property psp,
++					    union power_supply_propval *val);
+ extern int power_supply_set_battery_charged(struct power_supply *psy);
+ 
+ #ifdef CONFIG_POWER_SUPPLY
+diff --git a/include/linux/regulator/driver.h b/include/linux/regulator/driver.h
+index 11cade73726ce..633e7a2ab01d0 100644
+--- a/include/linux/regulator/driver.h
++++ b/include/linux/regulator/driver.h
+@@ -370,6 +370,10 @@ struct regulator_desc {
+ 	unsigned int pull_down_reg;
+ 	unsigned int pull_down_mask;
+ 	unsigned int pull_down_val_on;
++	unsigned int ramp_reg;
++	unsigned int ramp_mask;
++	const unsigned int *ramp_delay_table;
++	unsigned int n_ramp_values;
+ 
+ 	unsigned int enable_time;
+ 
+@@ -532,6 +536,7 @@ int regulator_set_current_limit_regmap(struct regulator_dev *rdev,
+ 				       int min_uA, int max_uA);
+ int regulator_get_current_limit_regmap(struct regulator_dev *rdev);
+ void *regulator_get_init_drvdata(struct regulator_init_data *reg_init_data);
++int regulator_set_ramp_delay_regmap(struct regulator_dev *rdev, int ramp_delay);
+ 
+ /*
+  * Helper functions intended to be used by regulator drivers prior registering
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 4b775af572688..8d1173577fb5c 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -75,6 +75,7 @@ struct ipcm_cookie {
+ 	__be32			addr;
+ 	int			oif;
+ 	struct ip_options_rcu	*opt;
++	__u8			protocol;
+ 	__u8			ttl;
+ 	__s16			tos;
+ 	char			priority;
+@@ -95,6 +96,7 @@ static inline void ipcm_init_sk(struct ipcm_cookie *ipcm,
+ 	ipcm->sockc.tsflags = inet->sk.sk_tsflags;
+ 	ipcm->oif = inet->sk.sk_bound_dev_if;
+ 	ipcm->addr = inet->inet_saddr;
++	ipcm->protocol = inet->inet_num;
+ }
+ 
+ #define IPCB(skb) ((struct inet_skb_parm*)((skb)->cb))
+diff --git a/include/uapi/linux/in.h b/include/uapi/linux/in.h
+index d1b327036ae43..3960bc3da6b30 100644
+--- a/include/uapi/linux/in.h
++++ b/include/uapi/linux/in.h
+@@ -159,6 +159,8 @@ struct in_addr {
+ #define MCAST_MSFILTER			48
+ #define IP_MULTICAST_ALL		49
+ #define IP_UNICAST_IF			50
++#define IP_LOCAL_PORT_RANGE		51
++#define IP_PROTOCOL			52
+ 
+ #define MCAST_EXCLUDE	0
+ #define MCAST_INCLUDE	1
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 4dcc1a8a8954f..eafb2bebc12cb 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -980,6 +980,34 @@ static int hci_sock_ioctl(struct socket *sock, unsigned int cmd,
+ 
+ 	BT_DBG("cmd %x arg %lx", cmd, arg);
+ 
++	/* Make sure the cmd is valid before doing anything */
++	switch (cmd) {
++	case HCIGETDEVLIST:
++	case HCIGETDEVINFO:
++	case HCIGETCONNLIST:
++	case HCIDEVUP:
++	case HCIDEVDOWN:
++	case HCIDEVRESET:
++	case HCIDEVRESTAT:
++	case HCISETSCAN:
++	case HCISETAUTH:
++	case HCISETENCRYPT:
++	case HCISETPTYPE:
++	case HCISETLINKPOL:
++	case HCISETLINKMODE:
++	case HCISETACLMTU:
++	case HCISETSCOMTU:
++	case HCIINQUIRY:
++	case HCISETRAW:
++	case HCIGETCONNINFO:
++	case HCIGETAUTHINFO:
++	case HCIBLOCKADDR:
++	case HCIUNBLOCKADDR:
++		break;
++	default:
++		return -ENOIOCTLCMD;
++	}
++
+ 	lock_sock(sk);
+ 
+ 	if (hci_pi(sk)->channel != HCI_CHANNEL_RAW) {
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index 4cc39c62af55d..1b35afd326b8d 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -317,7 +317,14 @@ int ip_cmsg_send(struct sock *sk, struct msghdr *msg, struct ipcm_cookie *ipc,
+ 			ipc->tos = val;
+ 			ipc->priority = rt_tos2priority(ipc->tos);
+ 			break;
+-
++		case IP_PROTOCOL:
++			if (cmsg->cmsg_len != CMSG_LEN(sizeof(int)))
++				return -EINVAL;
++			val = *(int *)CMSG_DATA(cmsg);
++			if (val < 1 || val > 255)
++				return -EINVAL;
++			ipc->protocol = val;
++			break;
+ 		default:
+ 			return -EINVAL;
+ 		}
+@@ -1724,6 +1731,9 @@ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
+ 	case IP_MINTTL:
+ 		val = inet->min_ttl;
+ 		break;
++	case IP_PROTOCOL:
++		val = inet_sk(sk)->inet_num;
++		break;
+ 	default:
+ 		release_sock(sk);
+ 		return -ENOPROTOOPT;
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index 4899ebe569eb6..650da4d8f7ad1 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -559,6 +559,9 @@ static int raw_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	}
+ 
+ 	ipcm_init_sk(&ipc, inet);
++	/* Keep backward compat */
++	if (hdrincl)
++		ipc.protocol = IPPROTO_RAW;
+ 
+ 	if (msg->msg_controllen) {
+ 		err = ip_cmsg_send(sk, msg, &ipc, false);
+@@ -626,7 +629,7 @@ static int raw_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 
+ 	flowi4_init_output(&fl4, ipc.oif, ipc.sockc.mark, tos,
+ 			   RT_SCOPE_UNIVERSE,
+-			   hdrincl ? IPPROTO_RAW : sk->sk_protocol,
++			   hdrincl ? ipc.protocol : sk->sk_protocol,
+ 			   inet_sk_flowi_flags(sk) |
+ 			    (hdrincl ? FLOWI_FLAG_KNOWN_NH : 0),
+ 			   daddr, saddr, 0, 0, sk->sk_uid);
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 69f0f9c05d028..7ff06fa7ed19a 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -828,7 +828,8 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 
+ 		if (!proto)
+ 			proto = inet->inet_num;
+-		else if (proto != inet->inet_num)
++		else if (proto != inet->inet_num &&
++			 inet->inet_num != IPPROTO_RAW)
+ 			return -EINVAL;
+ 
+ 		if (proto > 255)
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index c9ca857f1068d..6a055a2216831 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -1493,9 +1493,6 @@ static const struct nla_policy ct_nla_policy[CTA_MAX+1] = {
+ 
+ static int ctnetlink_flush_iterate(struct nf_conn *ct, void *data)
+ {
+-	if (test_bit(IPS_OFFLOAD_BIT, &ct->status))
+-		return 0;
+-
+ 	return ctnetlink_filter_match(ct, data);
+ }
+ 
+@@ -1561,11 +1558,6 @@ static int ctnetlink_del_conntrack(struct net *net, struct sock *ctnl,
+ 
+ 	ct = nf_ct_tuplehash_to_ctrack(h);
+ 
+-	if (test_bit(IPS_OFFLOAD_BIT, &ct->status)) {
+-		nf_ct_put(ct);
+-		return -EBUSY;
+-	}
+-
+ 	if (cda[CTA_ID]) {
+ 		__be32 id = nla_get_be32(cda[CTA_ID]);
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-06-09 11:31 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-06-09 11:31 UTC (permalink / raw
  To: gentoo-commits

commit:     8dfb506604eb11598c91416cae76df3c49940ac4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jun  9 11:31:03 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jun  9 11:31:03 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8dfb5066

Linux patch 5.10.183

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1182_linux-5.10.183.patch | 4298 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4302 insertions(+)

diff --git a/0000_README b/0000_README
index 0400be74..3ea4f952 100644
--- a/0000_README
+++ b/0000_README
@@ -771,6 +771,10 @@ Patch:  1181_linux-5.10.182.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.182
 
+Patch:  1182_linux-5.10.183.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.183
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1182_linux-5.10.183.patch b/1182_linux-5.10.183.patch
new file mode 100644
index 00000000..f8ae706c
--- /dev/null
+++ b/1182_linux-5.10.183.patch
@@ -0,0 +1,4298 @@
+diff --git a/Documentation/devicetree/bindings/sound/tas2562.yaml b/Documentation/devicetree/bindings/sound/tas2562.yaml
+index 27f7132ba2ef0..6ccb346d4a4d5 100644
+--- a/Documentation/devicetree/bindings/sound/tas2562.yaml
++++ b/Documentation/devicetree/bindings/sound/tas2562.yaml
+@@ -50,7 +50,9 @@ properties:
+     description: TDM TX current sense time slot.
+ 
+   '#sound-dai-cells':
+-    const: 1
++    # The codec has a single DAI, the #sound-dai-cells=<1>; case is left in for backward
++    # compatibility but is deprecated.
++    enum: [0, 1]
+ 
+ required:
+   - compatible
+@@ -67,7 +69,7 @@ examples:
+      codec: codec@4c {
+        compatible = "ti,tas2562";
+        reg = <0x4c>;
+-       #sound-dai-cells = <1>;
++       #sound-dai-cells = <0>;
+        interrupt-parent = <&gpio1>;
+        interrupts = <14>;
+        shutdown-gpios = <&gpio1 15 0>;
+diff --git a/Documentation/devicetree/bindings/sound/tas2764.yaml b/Documentation/devicetree/bindings/sound/tas2764.yaml
+index 5bf8c76ecda11..1ffe1a01668fe 100644
+--- a/Documentation/devicetree/bindings/sound/tas2764.yaml
++++ b/Documentation/devicetree/bindings/sound/tas2764.yaml
+@@ -46,7 +46,9 @@ properties:
+     description: TDM TX voltage sense time slot.
+ 
+   '#sound-dai-cells':
+-    const: 1
++    # The codec has a single DAI, the #sound-dai-cells=<1>; case is left in for backward
++    # compatibility but is deprecated.
++    enum: [0, 1]
+ 
+ required:
+   - compatible
+@@ -63,7 +65,7 @@ examples:
+      codec: codec@38 {
+        compatible = "ti,tas2764";
+        reg = <0x38>;
+-       #sound-dai-cells = <1>;
++       #sound-dai-cells = <0>;
+        interrupt-parent = <&gpio1>;
+        interrupts = <14>;
+        reset-gpios = <&gpio1 15 0>;
+diff --git a/Documentation/devicetree/bindings/sound/tas2770.yaml b/Documentation/devicetree/bindings/sound/tas2770.yaml
+index 07e7f9951d2ed..f3d0ca067bea4 100644
+--- a/Documentation/devicetree/bindings/sound/tas2770.yaml
++++ b/Documentation/devicetree/bindings/sound/tas2770.yaml
+@@ -52,7 +52,9 @@ properties:
+       - 1 # Falling edge
+ 
+   '#sound-dai-cells':
+-    const: 1
++    # The codec has a single DAI, the #sound-dai-cells=<1>; case is left in for backward
++    # compatibility but is deprecated.
++    enum: [0, 1]
+ 
+ required:
+   - compatible
+@@ -69,7 +71,7 @@ examples:
+      codec: codec@41 {
+        compatible = "ti,tas2770";
+        reg = <0x41>;
+-       #sound-dai-cells = <1>;
++       #sound-dai-cells = <0>;
+        interrupt-parent = <&gpio1>;
+        interrupts = <14>;
+        reset-gpio = <&gpio1 15 0>;
+diff --git a/Makefile b/Makefile
+index 2f0efde219023..28115681fffda 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 182
++SUBLEVEL = 183
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+@@ -808,6 +808,10 @@ endif
+ KBUILD_CFLAGS += $(call cc-disable-warning, unused-but-set-variable)
+ 
+ KBUILD_CFLAGS += $(call cc-disable-warning, unused-const-variable)
++
++# These result in bogus false positives
++KBUILD_CFLAGS += $(call cc-disable-warning, dangling-pointer)
++
+ ifdef CONFIG_FRAME_POINTER
+ KBUILD_CFLAGS	+= -fno-omit-frame-pointer -fno-optimize-sibling-calls
+ else
+diff --git a/arch/arm/boot/dts/stm32f7-pinctrl.dtsi b/arch/arm/boot/dts/stm32f7-pinctrl.dtsi
+index fe4cfda72a476..4e1b8b3359e21 100644
+--- a/arch/arm/boot/dts/stm32f7-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stm32f7-pinctrl.dtsi
+@@ -284,6 +284,88 @@
+ 					slew-rate = <2>;
+ 				};
+ 			};
++
++			can1_pins_a: can1-0 {
++				pins1 {
++					pinmux = <STM32_PINMUX('A', 12, AF9)>; /* CAN1_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('A', 11, AF9)>; /* CAN1_RX */
++					bias-pull-up;
++				};
++			};
++
++			can1_pins_b: can1-1 {
++				pins1 {
++					pinmux = <STM32_PINMUX('B', 9, AF9)>; /* CAN1_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('B', 8, AF9)>; /* CAN1_RX */
++					bias-pull-up;
++				};
++			};
++
++			can1_pins_c: can1-2 {
++				pins1 {
++					pinmux = <STM32_PINMUX('D', 1, AF9)>; /* CAN1_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('D', 0, AF9)>; /* CAN1_RX */
++					bias-pull-up;
++
++				};
++			};
++
++			can1_pins_d: can1-3 {
++				pins1 {
++					pinmux = <STM32_PINMUX('H', 13, AF9)>; /* CAN1_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('H', 14, AF9)>; /* CAN1_RX */
++					bias-pull-up;
++
++				};
++			};
++
++			can2_pins_a: can2-0 {
++				pins1 {
++					pinmux = <STM32_PINMUX('B', 6, AF9)>; /* CAN2_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('B', 5, AF9)>; /* CAN2_RX */
++					bias-pull-up;
++				};
++			};
++
++			can2_pins_b: can2-1 {
++				pins1 {
++					pinmux = <STM32_PINMUX('B', 13, AF9)>; /* CAN2_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('B', 12, AF9)>; /* CAN2_RX */
++					bias-pull-up;
++				};
++			};
++
++			can3_pins_a: can3-0 {
++				pins1 {
++					pinmux = <STM32_PINMUX('A', 15, AF11)>; /* CAN3_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('A', 8, AF11)>; /* CAN3_RX */
++					bias-pull-up;
++				};
++			};
++
++			can3_pins_b: can3-1 {
++				pins1 {
++					pinmux = <STM32_PINMUX('B', 4, AF11)>;  /* CAN3_TX */
++				};
++				pins2 {
++					pinmux = <STM32_PINMUX('B', 3, AF11)>; /* CAN3_RX */
++					bias-pull-up;
++				};
++			};
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/configs/multi_v7_defconfig b/arch/arm/configs/multi_v7_defconfig
+index a611b0c1e540f..07b7a2b76cb42 100644
+--- a/arch/arm/configs/multi_v7_defconfig
++++ b/arch/arm/configs/multi_v7_defconfig
+@@ -672,7 +672,6 @@ CONFIG_DRM_IMX_LDB=m
+ CONFIG_DRM_IMX_HDMI=m
+ CONFIG_DRM_ATMEL_HLCDC=m
+ CONFIG_DRM_RCAR_DU=m
+-CONFIG_DRM_RCAR_LVDS=y
+ CONFIG_DRM_SUN4I=m
+ CONFIG_DRM_MSM=m
+ CONFIG_DRM_FSL_DCU=m
+diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
+index d2bd0df2318d6..7e90f17a0676c 100644
+--- a/arch/arm/kernel/unwind.c
++++ b/arch/arm/kernel/unwind.c
+@@ -300,6 +300,29 @@ static int unwind_exec_pop_subset_r0_to_r3(struct unwind_ctrl_block *ctrl,
+ 	return URC_OK;
+ }
+ 
++static unsigned long unwind_decode_uleb128(struct unwind_ctrl_block *ctrl)
++{
++	unsigned long bytes = 0;
++	unsigned long insn;
++	unsigned long result = 0;
++
++	/*
++	 * unwind_get_byte() will advance `ctrl` one instruction at a time, so
++	 * loop until we get an instruction byte where bit 7 is not set.
++	 *
++	 * Note: This decodes a maximum of 4 bytes to output 28 bits data where
++	 * max is 0xfffffff: that will cover a vsp increment of 1073742336, hence
++	 * it is sufficient for unwinding the stack.
++	 */
++	do {
++		insn = unwind_get_byte(ctrl);
++		result |= (insn & 0x7f) << (bytes * 7);
++		bytes++;
++	} while (!!(insn & 0x80) && (bytes != sizeof(result)));
++
++	return result;
++}
++
+ /*
+  * Execute the current unwind instruction.
+  */
+@@ -353,7 +376,7 @@ static int unwind_exec_insn(struct unwind_ctrl_block *ctrl)
+ 		if (ret)
+ 			goto error;
+ 	} else if (insn == 0xb2) {
+-		unsigned long uleb128 = unwind_get_byte(ctrl);
++		unsigned long uleb128 = unwind_decode_uleb128(ctrl);
+ 
+ 		ctrl->vrs[SP] += 0x204 + (uleb128 << 2);
+ 	} else {
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index 2be856731e817..d8baedd160de0 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -402,8 +402,8 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
+ 	}
+ }
+ 
+-#define VM_FAULT_BADMAP		0x010000
+-#define VM_FAULT_BADACCESS	0x020000
++#define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
++#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
+ 
+ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
+ 				  unsigned int mm_flags, unsigned long vm_flags,
+diff --git a/arch/x86/boot/boot.h b/arch/x86/boot/boot.h
+index ca866f1cca2ef..4d79391bb7873 100644
+--- a/arch/x86/boot/boot.h
++++ b/arch/x86/boot/boot.h
+@@ -110,66 +110,78 @@ typedef unsigned int addr_t;
+ 
+ static inline u8 rdfs8(addr_t addr)
+ {
++	u8 *ptr = (u8 *)absolute_pointer(addr);
+ 	u8 v;
+-	asm volatile("movb %%fs:%1,%0" : "=q" (v) : "m" (*(u8 *)addr));
++	asm volatile("movb %%fs:%1,%0" : "=q" (v) : "m" (*ptr));
+ 	return v;
+ }
+ static inline u16 rdfs16(addr_t addr)
+ {
++	u16 *ptr = (u16 *)absolute_pointer(addr);
+ 	u16 v;
+-	asm volatile("movw %%fs:%1,%0" : "=r" (v) : "m" (*(u16 *)addr));
++	asm volatile("movw %%fs:%1,%0" : "=r" (v) : "m" (*ptr));
+ 	return v;
+ }
+ static inline u32 rdfs32(addr_t addr)
+ {
++	u32 *ptr = (u32 *)absolute_pointer(addr);
+ 	u32 v;
+-	asm volatile("movl %%fs:%1,%0" : "=r" (v) : "m" (*(u32 *)addr));
++	asm volatile("movl %%fs:%1,%0" : "=r" (v) : "m" (*ptr));
+ 	return v;
+ }
+ 
+ static inline void wrfs8(u8 v, addr_t addr)
+ {
+-	asm volatile("movb %1,%%fs:%0" : "+m" (*(u8 *)addr) : "qi" (v));
++	u8 *ptr = (u8 *)absolute_pointer(addr);
++	asm volatile("movb %1,%%fs:%0" : "+m" (*ptr) : "qi" (v));
+ }
+ static inline void wrfs16(u16 v, addr_t addr)
+ {
+-	asm volatile("movw %1,%%fs:%0" : "+m" (*(u16 *)addr) : "ri" (v));
++	u16 *ptr = (u16 *)absolute_pointer(addr);
++	asm volatile("movw %1,%%fs:%0" : "+m" (*ptr) : "ri" (v));
+ }
+ static inline void wrfs32(u32 v, addr_t addr)
+ {
+-	asm volatile("movl %1,%%fs:%0" : "+m" (*(u32 *)addr) : "ri" (v));
++	u32 *ptr = (u32 *)absolute_pointer(addr);
++	asm volatile("movl %1,%%fs:%0" : "+m" (*ptr) : "ri" (v));
+ }
+ 
+ static inline u8 rdgs8(addr_t addr)
+ {
++	u8 *ptr = (u8 *)absolute_pointer(addr);
+ 	u8 v;
+-	asm volatile("movb %%gs:%1,%0" : "=q" (v) : "m" (*(u8 *)addr));
++	asm volatile("movb %%gs:%1,%0" : "=q" (v) : "m" (*ptr));
+ 	return v;
+ }
+ static inline u16 rdgs16(addr_t addr)
+ {
++	u16 *ptr = (u16 *)absolute_pointer(addr);
+ 	u16 v;
+-	asm volatile("movw %%gs:%1,%0" : "=r" (v) : "m" (*(u16 *)addr));
++	asm volatile("movw %%gs:%1,%0" : "=r" (v) : "m" (*ptr));
+ 	return v;
+ }
+ static inline u32 rdgs32(addr_t addr)
+ {
++	u32 *ptr = (u32 *)absolute_pointer(addr);
+ 	u32 v;
+-	asm volatile("movl %%gs:%1,%0" : "=r" (v) : "m" (*(u32 *)addr));
++	asm volatile("movl %%gs:%1,%0" : "=r" (v) : "m" (*ptr));
+ 	return v;
+ }
+ 
+ static inline void wrgs8(u8 v, addr_t addr)
+ {
+-	asm volatile("movb %1,%%gs:%0" : "+m" (*(u8 *)addr) : "qi" (v));
++	u8 *ptr = (u8 *)absolute_pointer(addr);
++	asm volatile("movb %1,%%gs:%0" : "+m" (*ptr) : "qi" (v));
+ }
+ static inline void wrgs16(u16 v, addr_t addr)
+ {
+-	asm volatile("movw %1,%%gs:%0" : "+m" (*(u16 *)addr) : "ri" (v));
++	u16 *ptr = (u16 *)absolute_pointer(addr);
++	asm volatile("movw %1,%%gs:%0" : "+m" (*ptr) : "ri" (v));
+ }
+ static inline void wrgs32(u32 v, addr_t addr)
+ {
+-	asm volatile("movl %1,%%gs:%0" : "+m" (*(u32 *)addr) : "ri" (v));
++	u32 *ptr = (u32 *)absolute_pointer(addr);
++	asm volatile("movl %1,%%gs:%0" : "+m" (*ptr) : "ri" (v));
+ }
+ 
+ /* Note: these only return true/false, not a signed return value! */
+diff --git a/arch/x86/boot/main.c b/arch/x86/boot/main.c
+index e3add857c2c9d..c421af5a3cdce 100644
+--- a/arch/x86/boot/main.c
++++ b/arch/x86/boot/main.c
+@@ -33,7 +33,7 @@ static void copy_boot_params(void)
+ 		u16 cl_offset;
+ 	};
+ 	const struct old_cmdline * const oldcmd =
+-		(const struct old_cmdline *)OLD_CL_ADDRESS;
++		absolute_pointer(OLD_CL_ADDRESS);
+ 
+ 	BUILD_BUG_ON(sizeof(boot_params) != 4096);
+ 	memcpy(&boot_params.hdr, &hdr, sizeof(hdr));
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 5fbae8cc06977..34670943f5435 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1588,6 +1588,9 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type)
+ 			allowed = !!test_bit(index - start, bitmap);
+ 			break;
+ 		}
++
++		/* Note, VM-Exits that go down the "slow" path are accounted below. */
++		++vcpu->stat.exits;
+ 	}
+ 
+ out:
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index cf9b7ac362025..bd713a214d7a4 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -316,9 +316,10 @@ int public_key_verify_signature(const struct public_key *pkey,
+ 	struct crypto_wait cwait;
+ 	struct crypto_akcipher *tfm;
+ 	struct akcipher_request *req;
+-	struct scatterlist src_sg[2];
++	struct scatterlist src_sg;
+ 	char alg_name[CRYPTO_MAX_ALG_NAME];
+-	char *key, *ptr;
++	char *buf, *ptr;
++	size_t buf_len;
+ 	int ret;
+ 
+ 	pr_devel("==>%s()\n", __func__);
+@@ -342,34 +343,37 @@ int public_key_verify_signature(const struct public_key *pkey,
+ 	if (!req)
+ 		goto error_free_tfm;
+ 
+-	key = kmalloc(pkey->keylen + sizeof(u32) * 2 + pkey->paramlen,
+-		      GFP_KERNEL);
+-	if (!key)
++	buf_len = max_t(size_t, pkey->keylen + sizeof(u32) * 2 + pkey->paramlen,
++			sig->s_size + sig->digest_size);
++
++	buf = kmalloc(buf_len, GFP_KERNEL);
++	if (!buf)
+ 		goto error_free_req;
+ 
+-	memcpy(key, pkey->key, pkey->keylen);
+-	ptr = key + pkey->keylen;
++	memcpy(buf, pkey->key, pkey->keylen);
++	ptr = buf + pkey->keylen;
+ 	ptr = pkey_pack_u32(ptr, pkey->algo);
+ 	ptr = pkey_pack_u32(ptr, pkey->paramlen);
+ 	memcpy(ptr, pkey->params, pkey->paramlen);
+ 
+ 	if (pkey->key_is_private)
+-		ret = crypto_akcipher_set_priv_key(tfm, key, pkey->keylen);
++		ret = crypto_akcipher_set_priv_key(tfm, buf, pkey->keylen);
+ 	else
+-		ret = crypto_akcipher_set_pub_key(tfm, key, pkey->keylen);
++		ret = crypto_akcipher_set_pub_key(tfm, buf, pkey->keylen);
+ 	if (ret)
+-		goto error_free_key;
++		goto error_free_buf;
+ 
+ 	if (strcmp(pkey->pkey_algo, "sm2") == 0 && sig->data_size) {
+ 		ret = cert_sig_digest_update(sig, tfm);
+ 		if (ret)
+-			goto error_free_key;
++			goto error_free_buf;
+ 	}
+ 
+-	sg_init_table(src_sg, 2);
+-	sg_set_buf(&src_sg[0], sig->s, sig->s_size);
+-	sg_set_buf(&src_sg[1], sig->digest, sig->digest_size);
+-	akcipher_request_set_crypt(req, src_sg, NULL, sig->s_size,
++	memcpy(buf, sig->s, sig->s_size);
++	memcpy(buf + sig->s_size, sig->digest, sig->digest_size);
++
++	sg_init_one(&src_sg, buf, sig->s_size + sig->digest_size);
++	akcipher_request_set_crypt(req, &src_sg, NULL, sig->s_size,
+ 				   sig->digest_size);
+ 	crypto_init_wait(&cwait);
+ 	akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
+@@ -377,8 +381,8 @@ int public_key_verify_signature(const struct public_key *pkey,
+ 				      crypto_req_done, &cwait);
+ 	ret = crypto_wait_req(crypto_akcipher_verify(req), &cwait);
+ 
+-error_free_key:
+-	kfree(key);
++error_free_buf:
++	kfree(buf);
+ error_free_req:
+ 	akcipher_request_free(req);
+ error_free_tfm:
+diff --git a/drivers/acpi/thermal.c b/drivers/acpi/thermal.c
+index 859b1de31ddc0..d62bf6df78f81 100644
+--- a/drivers/acpi/thermal.c
++++ b/drivers/acpi/thermal.c
+@@ -1120,8 +1120,6 @@ static int acpi_thermal_resume(struct device *dev)
+ 		return -EINVAL;
+ 
+ 	for (i = 0; i < ACPI_THERMAL_MAX_ACTIVE; i++) {
+-		if (!(&tz->trips.active[i]))
+-			break;
+ 		if (!tz->trips.active[i].flags.valid)
+ 			break;
+ 		tz->trips.active[i].flags.enabled = 1;
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index f1755efd30a25..dfa090ccd21c6 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -2742,18 +2742,36 @@ static unsigned int atapi_xlat(struct ata_queued_cmd *qc)
+ 	return 0;
+ }
+ 
+-static struct ata_device *ata_find_dev(struct ata_port *ap, int devno)
++static struct ata_device *ata_find_dev(struct ata_port *ap, unsigned int devno)
+ {
+-	if (!sata_pmp_attached(ap)) {
+-		if (likely(devno >= 0 &&
+-			   devno < ata_link_max_devices(&ap->link)))
++	/*
++	 * For the non-PMP case, ata_link_max_devices() returns 1 (SATA case),
++	 * or 2 (IDE master + slave case). However, the former case includes
++	 * libsas hosted devices which are numbered per scsi host, leading
++	 * to devno potentially being larger than 0 but with each struct
++	 * ata_device having its own struct ata_port and struct ata_link.
++	 * To accommodate these, ignore devno and always use device number 0.
++	 */
++	if (likely(!sata_pmp_attached(ap))) {
++		int link_max_devices = ata_link_max_devices(&ap->link);
++
++		if (link_max_devices == 1)
++			return &ap->link.device[0];
++
++		if (devno < link_max_devices)
+ 			return &ap->link.device[devno];
+-	} else {
+-		if (likely(devno >= 0 &&
+-			   devno < ap->nr_pmp_links))
+-			return &ap->pmp_link[devno].device[0];
++
++		return NULL;
+ 	}
+ 
++	/*
++	 * For PMP-attached devices, the device number corresponds to C
++	 * (channel) of SCSI [H:C:I:L], indicating the port pmp link
++	 * for the device.
++	 */
++	if (devno < ap->nr_pmp_links)
++		return &ap->pmp_link[devno].device[0];
++
+ 	return NULL;
+ }
+ 
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 55a30afc14a00..2a3c3dfefdcec 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1998,6 +1998,8 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
+ 	size_t val_count = val_len / val_bytes;
+ 	size_t chunk_count, chunk_bytes;
+ 	size_t chunk_regs = val_count;
++	size_t max_data = map->max_raw_write - map->format.reg_bytes -
++			map->format.pad_bytes;
+ 	int ret, i;
+ 
+ 	if (!val_count)
+@@ -2005,8 +2007,8 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
+ 
+ 	if (map->use_single_write)
+ 		chunk_regs = 1;
+-	else if (map->max_raw_write && val_len > map->max_raw_write)
+-		chunk_regs = map->max_raw_write / val_bytes;
++	else if (map->max_raw_write && val_len > max_data)
++		chunk_regs = max_data / val_bytes;
+ 
+ 	chunk_count = val_count / chunk_regs;
+ 	chunk_bytes = chunk_regs * val_bytes;
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index dbcd903ba128f..b6940f0a9c905 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1624,7 +1624,7 @@ static int nbd_dev_dbg_init(struct nbd_device *nbd)
+ 		return -EIO;
+ 
+ 	dir = debugfs_create_dir(nbd_name(nbd), nbd_dbg_dir);
+-	if (!dir) {
++	if (IS_ERR(dir)) {
+ 		dev_err(nbd_to_dev(nbd), "Failed to create debugfs dir for '%s'\n",
+ 			nbd_name(nbd));
+ 		return -EIO;
+@@ -1650,7 +1650,7 @@ static int nbd_dbg_init(void)
+ 	struct dentry *dbg_dir;
+ 
+ 	dbg_dir = debugfs_create_dir("nbd", NULL);
+-	if (!dbg_dir)
++	if (IS_ERR(dbg_dir))
+ 		return -EIO;
+ 
+ 	nbd_dbg_dir = dbg_dir;
+diff --git a/drivers/block/rnbd/rnbd-proto.h b/drivers/block/rnbd/rnbd-proto.h
+index ca166241452c2..cb11855455dde 100644
+--- a/drivers/block/rnbd/rnbd-proto.h
++++ b/drivers/block/rnbd/rnbd-proto.h
+@@ -234,7 +234,7 @@ static inline u32 rnbd_to_bio_flags(u32 rnbd_opf)
+ 		bio_opf = REQ_OP_WRITE;
+ 		break;
+ 	case RNBD_OP_FLUSH:
+-		bio_opf = REQ_OP_FLUSH | REQ_PREFLUSH;
++		bio_opf = REQ_OP_WRITE | REQ_PREFLUSH;
+ 		break;
+ 	case RNBD_OP_DISCARD:
+ 		bio_opf = REQ_OP_DISCARD;
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index d65fff4e2ebe9..512c867495ea5 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -764,8 +764,11 @@ static int tpm_tis_probe_irq_single(struct tpm_chip *chip, u32 intmask,
+ 	int rc;
+ 	u32 int_status;
+ 
+-	if (devm_request_irq(chip->dev.parent, irq, tis_int_handler, flags,
+-			     dev_name(&chip->dev), chip) != 0) {
++
++	rc = devm_request_threaded_irq(chip->dev.parent, irq, NULL,
++				       tis_int_handler, IRQF_ONESHOT | flags,
++				       dev_name(&chip->dev), chip);
++	if (rc) {
+ 		dev_info(&chip->dev, "Unable to request irq: %d for probe\n",
+ 			 irq);
+ 		return -1;
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 856d867f46ebb..8e2672ec6e038 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -156,6 +156,7 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 	struct sev_device *sev;
+ 	unsigned int phys_lsb, phys_msb;
+ 	unsigned int reg, ret = 0;
++	int buf_len;
+ 
+ 	if (!psp || !psp->sev_data)
+ 		return -ENODEV;
+@@ -165,18 +166,27 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 
+ 	sev = psp->sev_data;
+ 
+-	if (data && WARN_ON_ONCE(!virt_addr_valid(data)))
++	buf_len = sev_cmd_buffer_len(cmd);
++	if (WARN_ON_ONCE(!data != !buf_len))
+ 		return -EINVAL;
+ 
++	/*
++	 * Copy the incoming data to driver's scratch buffer as __pa() will not
++	 * work for some memory, e.g. vmalloc'd addresses, and @data may not be
++	 * physically contiguous.
++	 */
++	if (data)
++		memcpy(sev->cmd_buf, data, buf_len);
++
+ 	/* Get the physical address of the command buffer */
+-	phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0;
+-	phys_msb = data ? upper_32_bits(__psp_pa(data)) : 0;
++	phys_lsb = data ? lower_32_bits(__psp_pa(sev->cmd_buf)) : 0;
++	phys_msb = data ? upper_32_bits(__psp_pa(sev->cmd_buf)) : 0;
+ 
+ 	dev_dbg(sev->dev, "sev command id %#x buffer 0x%08x%08x timeout %us\n",
+ 		cmd, phys_msb, phys_lsb, psp_timeout);
+ 
+ 	print_hex_dump_debug("(in):  ", DUMP_PREFIX_OFFSET, 16, 2, data,
+-			     sev_cmd_buffer_len(cmd), false);
++			     buf_len, false);
+ 
+ 	iowrite32(phys_lsb, sev->io_regs + sev->vdata->cmdbuff_addr_lo_reg);
+ 	iowrite32(phys_msb, sev->io_regs + sev->vdata->cmdbuff_addr_hi_reg);
+@@ -212,7 +222,14 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 	}
+ 
+ 	print_hex_dump_debug("(out): ", DUMP_PREFIX_OFFSET, 16, 2, data,
+-			     sev_cmd_buffer_len(cmd), false);
++			     buf_len, false);
++
++	/*
++	 * Copy potential output from the PSP back to data.  Do this even on
++	 * failure in case the caller wants to glean something from the error.
++	 */
++	if (data)
++		memcpy(data, sev->cmd_buf, buf_len);
+ 
+ 	return ret;
+ }
+@@ -974,6 +991,10 @@ int sev_dev_init(struct psp_device *psp)
+ 	if (!sev)
+ 		goto e_err;
+ 
++	sev->cmd_buf = (void *)devm_get_free_pages(dev, GFP_KERNEL, 0);
++	if (!sev->cmd_buf)
++		goto e_sev;
++
+ 	psp->sev_data = sev;
+ 
+ 	sev->dev = dev;
+@@ -985,7 +1006,7 @@ int sev_dev_init(struct psp_device *psp)
+ 	if (!sev->vdata) {
+ 		ret = -ENODEV;
+ 		dev_err(dev, "sev: missing driver data\n");
+-		goto e_sev;
++		goto e_buf;
+ 	}
+ 
+ 	psp_set_sev_irq_handler(psp, sev_irq_handler, sev);
+@@ -1000,6 +1021,8 @@ int sev_dev_init(struct psp_device *psp)
+ 
+ e_irq:
+ 	psp_clear_sev_irq_handler(psp);
++e_buf:
++	devm_free_pages(dev, (unsigned long)sev->cmd_buf);
+ e_sev:
+ 	devm_kfree(dev, sev);
+ e_err:
+diff --git a/drivers/crypto/ccp/sev-dev.h b/drivers/crypto/ccp/sev-dev.h
+index 3b0cd0f854df9..0fd21433f6277 100644
+--- a/drivers/crypto/ccp/sev-dev.h
++++ b/drivers/crypto/ccp/sev-dev.h
+@@ -51,6 +51,8 @@ struct sev_device {
+ 	u8 api_major;
+ 	u8 api_minor;
+ 	u8 build;
++
++	void *cmd_buf;
+ };
+ 
+ int sev_dev_init(struct psp_device *psp);
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 1fe006cc643e7..861be862a775a 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -678,7 +678,8 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 		if (!desc) {
+ 			dev_err(chan2dev(chan), "can't get descriptor\n");
+ 			if (first)
+-				list_splice_init(&first->descs_list, &atchan->free_descs_list);
++				list_splice_tail_init(&first->descs_list,
++						      &atchan->free_descs_list);
+ 			goto spin_unlock;
+ 		}
+ 
+@@ -766,7 +767,8 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
+ 		if (!desc) {
+ 			dev_err(chan2dev(chan), "can't get descriptor\n");
+ 			if (first)
+-				list_splice_init(&first->descs_list, &atchan->free_descs_list);
++				list_splice_tail_init(&first->descs_list,
++						      &atchan->free_descs_list);
+ 			spin_unlock_irqrestore(&atchan->lock, irqflags);
+ 			return NULL;
+ 		}
+@@ -968,6 +970,8 @@ at_xdmac_prep_interleaved(struct dma_chan *chan,
+ 							NULL,
+ 							src_addr, dst_addr,
+ 							xt, xt->sgl);
++		if (!first)
++			return NULL;
+ 
+ 		/* Length of the block is (BLEN+1) microblocks. */
+ 		for (i = 0; i < xt->numf - 1; i++)
+@@ -998,8 +1002,9 @@ at_xdmac_prep_interleaved(struct dma_chan *chan,
+ 							       src_addr, dst_addr,
+ 							       xt, chunk);
+ 			if (!desc) {
+-				list_splice_init(&first->descs_list,
+-						 &atchan->free_descs_list);
++				if (first)
++					list_splice_tail_init(&first->descs_list,
++							      &atchan->free_descs_list);
+ 				return NULL;
+ 			}
+ 
+@@ -1077,7 +1082,8 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+ 		if (!desc) {
+ 			dev_err(chan2dev(chan), "can't get descriptor\n");
+ 			if (first)
+-				list_splice_init(&first->descs_list, &atchan->free_descs_list);
++				list_splice_tail_init(&first->descs_list,
++						      &atchan->free_descs_list);
+ 			return NULL;
+ 		}
+ 
+@@ -1251,8 +1257,8 @@ at_xdmac_prep_dma_memset_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 						   sg_dma_len(sg),
+ 						   value);
+ 		if (!desc && first)
+-			list_splice_init(&first->descs_list,
+-					 &atchan->free_descs_list);
++			list_splice_tail_init(&first->descs_list,
++					      &atchan->free_descs_list);
+ 
+ 		if (!first)
+ 			first = desc;
+@@ -1527,20 +1533,6 @@ spin_unlock:
+ 	return ret;
+ }
+ 
+-/* Call must be protected by lock. */
+-static void at_xdmac_remove_xfer(struct at_xdmac_chan *atchan,
+-				    struct at_xdmac_desc *desc)
+-{
+-	dev_dbg(chan2dev(&atchan->chan), "%s: desc 0x%p\n", __func__, desc);
+-
+-	/*
+-	 * Remove the transfer from the transfer list then move the transfer
+-	 * descriptors into the free descriptors list.
+-	 */
+-	list_del(&desc->xfer_node);
+-	list_splice_init(&desc->descs_list, &atchan->free_descs_list);
+-}
+-
+ static void at_xdmac_advance_work(struct at_xdmac_chan *atchan)
+ {
+ 	struct at_xdmac_desc	*desc;
+@@ -1651,17 +1643,20 @@ static void at_xdmac_tasklet(struct tasklet_struct *t)
+ 		}
+ 
+ 		txd = &desc->tx_dma_desc;
+-
+-		at_xdmac_remove_xfer(atchan, desc);
++		dma_cookie_complete(txd);
++		/* Remove the transfer from the transfer list. */
++		list_del(&desc->xfer_node);
+ 		spin_unlock_irq(&atchan->lock);
+ 
+-		dma_cookie_complete(txd);
+ 		if (txd->flags & DMA_PREP_INTERRUPT)
+ 			dmaengine_desc_get_callback_invoke(txd, NULL);
+ 
+ 		dma_run_dependencies(txd);
+ 
+ 		spin_lock_irq(&atchan->lock);
++		/* Move the xfer descriptors into the free descriptors list. */
++		list_splice_tail_init(&desc->descs_list,
++				      &atchan->free_descs_list);
+ 		at_xdmac_advance_work(atchan);
+ 		spin_unlock_irq(&atchan->lock);
+ 	}
+@@ -1808,8 +1803,11 @@ static int at_xdmac_device_terminate_all(struct dma_chan *chan)
+ 		cpu_relax();
+ 
+ 	/* Cancel all pending transfers. */
+-	list_for_each_entry_safe(desc, _desc, &atchan->xfers_list, xfer_node)
+-		at_xdmac_remove_xfer(atchan, desc);
++	list_for_each_entry_safe(desc, _desc, &atchan->xfers_list, xfer_node) {
++		list_del(&desc->xfer_node);
++		list_splice_tail_init(&desc->descs_list,
++				      &atchan->free_descs_list);
++	}
+ 
+ 	clear_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
+ 	clear_bit(AT_XDMAC_CHAN_IS_CYCLIC, &atchan->status);
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index 5bbae99f2d34e..6f697b3f2c184 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -1050,7 +1050,7 @@ static bool _trigger(struct pl330_thread *thrd)
+ 	return true;
+ }
+ 
+-static bool _start(struct pl330_thread *thrd)
++static bool pl330_start_thread(struct pl330_thread *thrd)
+ {
+ 	switch (_state(thrd)) {
+ 	case PL330_STATE_FAULT_COMPLETING:
+@@ -1704,7 +1704,7 @@ static int pl330_update(struct pl330_dmac *pl330)
+ 			thrd->req_running = -1;
+ 
+ 			/* Get going again ASAP */
+-			_start(thrd);
++			pl330_start_thread(thrd);
+ 
+ 			/* For now, just make a list of callbacks to be done */
+ 			list_add_tail(&descdone->rqd, &pl330->req_done);
+@@ -2091,7 +2091,7 @@ static void pl330_tasklet(struct tasklet_struct *t)
+ 	} else {
+ 		/* Make sure the PL330 Channel thread is active */
+ 		spin_lock(&pch->thread->dmac->lock);
+-		_start(pch->thread);
++		pl330_start_thread(pch->thread);
+ 		spin_unlock(&pch->thread->dmac->lock);
+ 	}
+ 
+@@ -2109,7 +2109,7 @@ static void pl330_tasklet(struct tasklet_struct *t)
+ 			if (power_down) {
+ 				pch->active = true;
+ 				spin_lock(&pch->thread->dmac->lock);
+-				_start(pch->thread);
++				pl330_start_thread(pch->thread);
+ 				spin_unlock(&pch->thread->dmac->lock);
+ 				power_down = false;
+ 			}
+diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
+index ecab6287c1c39..b81390d6ebd38 100644
+--- a/drivers/gpu/drm/msm/msm_iommu.c
++++ b/drivers/gpu/drm/msm/msm_iommu.c
+@@ -155,7 +155,12 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
+ 	/* Get the pagetable configuration from the domain */
+ 	if (adreno_smmu->cookie)
+ 		ttbr1_cfg = adreno_smmu->get_ttbr1_cfg(adreno_smmu->cookie);
+-	if (!ttbr1_cfg)
++
++	/*
++	 * If you hit this WARN_ONCE() you are probably missing an entry in
++	 * qcom_smmu_impl_of_match[] in arm-smmu-qcom.c
++	 */
++	if (WARN_ONCE(!ttbr1_cfg, "No per-process page tables"))
+ 		return ERR_PTR(-ENODEV);
+ 
+ 	pagetable = kzalloc(sizeof(*pagetable), GFP_KERNEL);
+diff --git a/drivers/gpu/drm/rcar-du/Kconfig b/drivers/gpu/drm/rcar-du/Kconfig
+index b47e74421e347..3e588ddba2457 100644
+--- a/drivers/gpu/drm/rcar-du/Kconfig
++++ b/drivers/gpu/drm/rcar-du/Kconfig
+@@ -4,8 +4,6 @@ config DRM_RCAR_DU
+ 	depends on DRM && OF
+ 	depends on ARM || ARM64
+ 	depends on ARCH_RENESAS || COMPILE_TEST
+-	imply DRM_RCAR_CMM
+-	imply DRM_RCAR_LVDS
+ 	select DRM_KMS_HELPER
+ 	select DRM_KMS_CMA_HELPER
+ 	select DRM_GEM_CMA_HELPER
+@@ -14,13 +12,17 @@ config DRM_RCAR_DU
+ 	  Choose this option if you have an R-Car chipset.
+ 	  If M is selected the module will be called rcar-du-drm.
+ 
+-config DRM_RCAR_CMM
+-	tristate "R-Car DU Color Management Module (CMM) Support"
+-	depends on DRM && OF
++config DRM_RCAR_USE_CMM
++	bool "R-Car DU Color Management Module (CMM) Support"
+ 	depends on DRM_RCAR_DU
++	default DRM_RCAR_DU
+ 	help
+ 	  Enable support for R-Car Color Management Module (CMM).
+ 
++config DRM_RCAR_CMM
++	def_tristate DRM_RCAR_DU
++	depends on DRM_RCAR_USE_CMM
++
+ config DRM_RCAR_DW_HDMI
+ 	tristate "R-Car Gen3 and RZ/G2 DU HDMI Encoder Support"
+ 	depends on DRM && OF
+@@ -28,15 +30,20 @@ config DRM_RCAR_DW_HDMI
+ 	help
+ 	  Enable support for R-Car Gen3 or RZ/G2 internal HDMI encoder.
+ 
++config DRM_RCAR_USE_LVDS
++	bool "R-Car DU LVDS Encoder Support"
++	depends on DRM_BRIDGE && OF
++	default DRM_RCAR_DU
++	help
++	  Enable support for the R-Car Display Unit embedded LVDS encoders.
++
+ config DRM_RCAR_LVDS
+-	tristate "R-Car DU LVDS Encoder Support"
+-	depends on DRM && DRM_BRIDGE && OF
++	def_tristate DRM_RCAR_DU
++	depends on DRM_RCAR_USE_LVDS
+ 	select DRM_KMS_HELPER
+ 	select DRM_PANEL
+ 	select OF_FLATTREE
+ 	select OF_OVERLAY
+-	help
+-	  Enable support for the R-Car Display Unit embedded LVDS encoders.
+ 
+ config DRM_RCAR_VSP
+ 	bool "R-Car DU VSP Compositor Support" if ARM
+diff --git a/drivers/hid/hid-google-hammer.c b/drivers/hid/hid-google-hammer.c
+index 0476301983964..2f4c5b45d4096 100644
+--- a/drivers/hid/hid-google-hammer.c
++++ b/drivers/hid/hid-google-hammer.c
+@@ -532,6 +532,8 @@ static const struct hid_device_id hammer_devices[] = {
+ 		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_EEL) },
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ 		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) },
++	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
++		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_JEWEL) },
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ 		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MAGNEMITE) },
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 1d1306a6004e6..2b658d820b800 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -491,6 +491,7 @@
+ #define USB_DEVICE_ID_GOOGLE_MOONBALL	0x5044
+ #define USB_DEVICE_ID_GOOGLE_DON	0x5050
+ #define USB_DEVICE_ID_GOOGLE_EEL	0x5057
++#define USB_DEVICE_ID_GOOGLE_JEWEL	0x5061
+ 
+ #define USB_VENDOR_ID_GOTOP		0x08f2
+ #define USB_DEVICE_ID_SUPER_Q2		0x007f
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 37754a1f733b4..1bfcc94c1d234 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -831,7 +831,7 @@ static int wacom_intuos_inout(struct wacom_wac *wacom)
+ 	/* Enter report */
+ 	if ((data[1] & 0xfc) == 0xc0) {
+ 		/* serial number of the tool */
+-		wacom->serial[idx] = ((data[3] & 0x0f) << 28) +
++		wacom->serial[idx] = ((__u64)(data[3] & 0x0f) << 28) +
+ 			(data[4] << 20) + (data[5] << 12) +
+ 			(data[6] << 4) + (data[7] >> 4);
+ 
+diff --git a/drivers/iio/adc/ad7192.c b/drivers/iio/adc/ad7192.c
+index 1b8baba9d4d64..614c5e807bb8d 100644
+--- a/drivers/iio/adc/ad7192.c
++++ b/drivers/iio/adc/ad7192.c
+@@ -835,10 +835,6 @@ static const struct iio_info ad7195_info = {
+ 	__AD719x_CHANNEL(_si, _channel1, -1, _address, NULL, IIO_VOLTAGE, \
+ 		BIT(IIO_CHAN_INFO_SCALE), ad7192_calibsys_ext_info)
+ 
+-#define AD719x_SHORTED_CHANNEL(_si, _channel1, _address) \
+-	__AD719x_CHANNEL(_si, _channel1, -1, _address, "shorted", IIO_VOLTAGE, \
+-		BIT(IIO_CHAN_INFO_SCALE), ad7192_calibsys_ext_info)
+-
+ #define AD719x_TEMP_CHANNEL(_si, _address) \
+ 	__AD719x_CHANNEL(_si, 0, -1, _address, NULL, IIO_TEMP, 0, NULL)
+ 
+@@ -846,7 +842,7 @@ static const struct iio_chan_spec ad7192_channels[] = {
+ 	AD719x_DIFF_CHANNEL(0, 1, 2, AD7192_CH_AIN1P_AIN2M),
+ 	AD719x_DIFF_CHANNEL(1, 3, 4, AD7192_CH_AIN3P_AIN4M),
+ 	AD719x_TEMP_CHANNEL(2, AD7192_CH_TEMP),
+-	AD719x_SHORTED_CHANNEL(3, 2, AD7192_CH_AIN2P_AIN2M),
++	AD719x_DIFF_CHANNEL(3, 2, 2, AD7192_CH_AIN2P_AIN2M),
+ 	AD719x_CHANNEL(4, 1, AD7192_CH_AIN1),
+ 	AD719x_CHANNEL(5, 2, AD7192_CH_AIN2),
+ 	AD719x_CHANNEL(6, 3, AD7192_CH_AIN3),
+@@ -860,7 +856,7 @@ static const struct iio_chan_spec ad7193_channels[] = {
+ 	AD719x_DIFF_CHANNEL(2, 5, 6, AD7193_CH_AIN5P_AIN6M),
+ 	AD719x_DIFF_CHANNEL(3, 7, 8, AD7193_CH_AIN7P_AIN8M),
+ 	AD719x_TEMP_CHANNEL(4, AD7193_CH_TEMP),
+-	AD719x_SHORTED_CHANNEL(5, 2, AD7193_CH_AIN2P_AIN2M),
++	AD719x_DIFF_CHANNEL(5, 2, 2, AD7193_CH_AIN2P_AIN2M),
+ 	AD719x_CHANNEL(6, 1, AD7193_CH_AIN1),
+ 	AD719x_CHANNEL(7, 2, AD7193_CH_AIN2),
+ 	AD719x_CHANNEL(8, 3, AD7193_CH_AIN3),
+diff --git a/drivers/iio/adc/mxs-lradc-adc.c b/drivers/iio/adc/mxs-lradc-adc.c
+index c480cb489c1a3..c37e39b96f0e7 100644
+--- a/drivers/iio/adc/mxs-lradc-adc.c
++++ b/drivers/iio/adc/mxs-lradc-adc.c
+@@ -757,13 +757,13 @@ static int mxs_lradc_adc_probe(struct platform_device *pdev)
+ 
+ 	ret = mxs_lradc_adc_trigger_init(iio);
+ 	if (ret)
+-		goto err_trig;
++		return ret;
+ 
+ 	ret = iio_triggered_buffer_setup(iio, &iio_pollfunc_store_time,
+ 					 &mxs_lradc_adc_trigger_handler,
+ 					 &mxs_lradc_adc_buffer_ops);
+ 	if (ret)
+-		return ret;
++		goto err_trig;
+ 
+ 	adc->vref_mv = mxs_lradc_adc_vref_mv[lradc->soc];
+ 
+@@ -801,9 +801,9 @@ static int mxs_lradc_adc_probe(struct platform_device *pdev)
+ 
+ err_dev:
+ 	mxs_lradc_adc_hw_stop(adc);
+-	mxs_lradc_adc_trigger_remove(iio);
+-err_trig:
+ 	iio_triggered_buffer_cleanup(iio);
++err_trig:
++	mxs_lradc_adc_trigger_remove(iio);
+ 	return ret;
+ }
+ 
+@@ -814,8 +814,8 @@ static int mxs_lradc_adc_remove(struct platform_device *pdev)
+ 
+ 	iio_device_unregister(iio);
+ 	mxs_lradc_adc_hw_stop(adc);
+-	mxs_lradc_adc_trigger_remove(iio);
+ 	iio_triggered_buffer_cleanup(iio);
++	mxs_lradc_adc_trigger_remove(iio);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iio/dac/Makefile b/drivers/iio/dac/Makefile
+index 2fc4811677240..09506d248c9eb 100644
+--- a/drivers/iio/dac/Makefile
++++ b/drivers/iio/dac/Makefile
+@@ -16,7 +16,7 @@ obj-$(CONFIG_AD5592R_BASE) += ad5592r-base.o
+ obj-$(CONFIG_AD5592R) += ad5592r.o
+ obj-$(CONFIG_AD5593R) += ad5593r.o
+ obj-$(CONFIG_AD5755) += ad5755.o
+-obj-$(CONFIG_AD5755) += ad5758.o
++obj-$(CONFIG_AD5758) += ad5758.o
+ obj-$(CONFIG_AD5761) += ad5761.o
+ obj-$(CONFIG_AD5764) += ad5764.o
+ obj-$(CONFIG_AD5770R) += ad5770r.o
+diff --git a/drivers/iio/dac/mcp4725.c b/drivers/iio/dac/mcp4725.c
+index beb9a15b7c744..0c0e726ae054a 100644
+--- a/drivers/iio/dac/mcp4725.c
++++ b/drivers/iio/dac/mcp4725.c
+@@ -47,12 +47,18 @@ static int __maybe_unused mcp4725_suspend(struct device *dev)
+ 	struct mcp4725_data *data = iio_priv(i2c_get_clientdata(
+ 		to_i2c_client(dev)));
+ 	u8 outbuf[2];
++	int ret;
+ 
+ 	outbuf[0] = (data->powerdown_mode + 1) << 4;
+ 	outbuf[1] = 0;
+ 	data->powerdown = true;
+ 
+-	return i2c_master_send(data->client, outbuf, 2);
++	ret = i2c_master_send(data->client, outbuf, 2);
++	if (ret < 0)
++		return ret;
++	else if (ret != 2)
++		return -EIO;
++	return 0;
+ }
+ 
+ static int __maybe_unused mcp4725_resume(struct device *dev)
+@@ -60,13 +66,19 @@ static int __maybe_unused mcp4725_resume(struct device *dev)
+ 	struct mcp4725_data *data = iio_priv(i2c_get_clientdata(
+ 		to_i2c_client(dev)));
+ 	u8 outbuf[2];
++	int ret;
+ 
+ 	/* restore previous DAC value */
+ 	outbuf[0] = (data->dac_value >> 8) & 0xf;
+ 	outbuf[1] = data->dac_value & 0xff;
+ 	data->powerdown = false;
+ 
+-	return i2c_master_send(data->client, outbuf, 2);
++	ret = i2c_master_send(data->client, outbuf, 2);
++	if (ret < 0)
++		return ret;
++	else if (ret != 2)
++		return -EIO;
++	return 0;
+ }
+ static SIMPLE_DEV_PM_OPS(mcp4725_pm_ops, mcp4725_suspend, mcp4725_resume);
+ 
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c
+index 99576b2c171f4..32d7f83642303 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c
+@@ -275,9 +275,14 @@ static int inv_icm42600_buffer_preenable(struct iio_dev *indio_dev)
+ {
+ 	struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev);
+ 	struct device *dev = regmap_get_device(st->map);
++	struct inv_icm42600_timestamp *ts = iio_priv(indio_dev);
+ 
+ 	pm_runtime_get_sync(dev);
+ 
++	mutex_lock(&st->lock);
++	inv_icm42600_timestamp_reset(ts);
++	mutex_unlock(&st->lock);
++
+ 	return 0;
+ }
+ 
+@@ -375,7 +380,6 @@ static int inv_icm42600_buffer_postdisable(struct iio_dev *indio_dev)
+ 	struct device *dev = regmap_get_device(st->map);
+ 	unsigned int sensor;
+ 	unsigned int *watermark;
+-	struct inv_icm42600_timestamp *ts;
+ 	struct inv_icm42600_sensor_conf conf = INV_ICM42600_SENSOR_CONF_INIT;
+ 	unsigned int sleep_temp = 0;
+ 	unsigned int sleep_sensor = 0;
+@@ -385,11 +389,9 @@ static int inv_icm42600_buffer_postdisable(struct iio_dev *indio_dev)
+ 	if (indio_dev == st->indio_gyro) {
+ 		sensor = INV_ICM42600_SENSOR_GYRO;
+ 		watermark = &st->fifo.watermark.gyro;
+-		ts = iio_priv(st->indio_gyro);
+ 	} else if (indio_dev == st->indio_accel) {
+ 		sensor = INV_ICM42600_SENSOR_ACCEL;
+ 		watermark = &st->fifo.watermark.accel;
+-		ts = iio_priv(st->indio_accel);
+ 	} else {
+ 		return -EINVAL;
+ 	}
+@@ -417,8 +419,6 @@ static int inv_icm42600_buffer_postdisable(struct iio_dev *indio_dev)
+ 	if (!st->fifo.on)
+ 		ret = inv_icm42600_set_temp_conf(st, false, &sleep_temp);
+ 
+-	inv_icm42600_timestamp_reset(ts);
+-
+ out_unlock:
+ 	mutex_unlock(&st->lock);
+ 
+diff --git a/drivers/iio/light/vcnl4035.c b/drivers/iio/light/vcnl4035.c
+index 1bd85e21fd114..6e38a33f55c71 100644
+--- a/drivers/iio/light/vcnl4035.c
++++ b/drivers/iio/light/vcnl4035.c
+@@ -8,6 +8,7 @@
+  * TODO: Proximity
+  */
+ #include <linux/bitops.h>
++#include <linux/bitfield.h>
+ #include <linux/i2c.h>
+ #include <linux/module.h>
+ #include <linux/pm_runtime.h>
+@@ -42,6 +43,7 @@
+ #define VCNL4035_ALS_PERS_MASK		GENMASK(3, 2)
+ #define VCNL4035_INT_ALS_IF_H_MASK	BIT(12)
+ #define VCNL4035_INT_ALS_IF_L_MASK	BIT(13)
++#define VCNL4035_DEV_ID_MASK		GENMASK(7, 0)
+ 
+ /* Default values */
+ #define VCNL4035_MODE_ALS_ENABLE	BIT(0)
+@@ -415,6 +417,7 @@ static int vcnl4035_init(struct vcnl4035_data *data)
+ 		return ret;
+ 	}
+ 
++	id = FIELD_GET(VCNL4035_DEV_ID_MASK, id);
+ 	if (id != VCNL4035_DEV_ID_VAL) {
+ 		dev_err(&data->client->dev, "Wrong id, got %x, expected %x\n",
+ 			id, VCNL4035_DEV_ID_VAL);
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 10d77f50f818b..2a973a1390a4a 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -469,7 +469,6 @@ static int bnxt_re_create_fence_mr(struct bnxt_re_pd *pd)
+ 	struct bnxt_re_mr *mr = NULL;
+ 	dma_addr_t dma_addr = 0;
+ 	struct ib_mw *mw;
+-	u64 pbl_tbl;
+ 	int rc;
+ 
+ 	dma_addr = dma_map_single(dev, fence->va, BNXT_RE_FENCE_BYTES,
+@@ -504,9 +503,8 @@ static int bnxt_re_create_fence_mr(struct bnxt_re_pd *pd)
+ 	mr->ib_mr.lkey = mr->qplib_mr.lkey;
+ 	mr->qplib_mr.va = (u64)(unsigned long)fence->va;
+ 	mr->qplib_mr.total_size = BNXT_RE_FENCE_BYTES;
+-	pbl_tbl = dma_addr;
+-	rc = bnxt_qplib_reg_mr(&rdev->qplib_res, &mr->qplib_mr, &pbl_tbl,
+-			       BNXT_RE_FENCE_PBL_SIZE, false, PAGE_SIZE);
++	rc = bnxt_qplib_reg_mr(&rdev->qplib_res, &mr->qplib_mr, NULL,
++			       BNXT_RE_FENCE_PBL_SIZE, PAGE_SIZE);
+ 	if (rc) {
+ 		ibdev_err(&rdev->ibdev, "Failed to register fence-MR\n");
+ 		goto fail;
+@@ -3249,9 +3247,7 @@ static int bnxt_re_process_raw_qp_pkt_rx(struct bnxt_re_qp *gsi_qp,
+ 	udwr.remote_qkey = gsi_sqp->qplib_qp.qkey;
+ 
+ 	/* post data received  in the send queue */
+-	rc = bnxt_re_post_send_shadow_qp(rdev, gsi_sqp, swr);
+-
+-	return 0;
++	return bnxt_re_post_send_shadow_qp(rdev, gsi_sqp, swr);
+ }
+ 
+ static void bnxt_re_process_res_rawqp1_wc(struct ib_wc *wc,
+@@ -3588,7 +3584,6 @@ struct ib_mr *bnxt_re_get_dma_mr(struct ib_pd *ib_pd, int mr_access_flags)
+ 	struct bnxt_re_pd *pd = container_of(ib_pd, struct bnxt_re_pd, ib_pd);
+ 	struct bnxt_re_dev *rdev = pd->rdev;
+ 	struct bnxt_re_mr *mr;
+-	u64 pbl = 0;
+ 	int rc;
+ 
+ 	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
+@@ -3607,7 +3602,7 @@ struct ib_mr *bnxt_re_get_dma_mr(struct ib_pd *ib_pd, int mr_access_flags)
+ 
+ 	mr->qplib_mr.hwq.level = PBL_LVL_MAX;
+ 	mr->qplib_mr.total_size = -1; /* Infinte length */
+-	rc = bnxt_qplib_reg_mr(&rdev->qplib_res, &mr->qplib_mr, &pbl, 0, false,
++	rc = bnxt_qplib_reg_mr(&rdev->qplib_res, &mr->qplib_mr, NULL, 0,
+ 			       PAGE_SIZE);
+ 	if (rc)
+ 		goto fail_mr;
+@@ -3778,19 +3773,6 @@ int bnxt_re_dealloc_mw(struct ib_mw *ib_mw)
+ 	return rc;
+ }
+ 
+-static int fill_umem_pbl_tbl(struct ib_umem *umem, u64 *pbl_tbl_orig,
+-			     int page_shift)
+-{
+-	u64 *pbl_tbl = pbl_tbl_orig;
+-	u64 page_size =  BIT_ULL(page_shift);
+-	struct ib_block_iter biter;
+-
+-	rdma_umem_for_each_dma_block(umem, &biter, page_size)
+-		*pbl_tbl++ = rdma_block_iter_dma_address(&biter);
+-
+-	return pbl_tbl - pbl_tbl_orig;
+-}
+-
+ /* uverbs */
+ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
+ 				  u64 virt_addr, int mr_access_flags,
+@@ -3800,7 +3782,6 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
+ 	struct bnxt_re_dev *rdev = pd->rdev;
+ 	struct bnxt_re_mr *mr;
+ 	struct ib_umem *umem;
+-	u64 *pbl_tbl = NULL;
+ 	unsigned long page_size;
+ 	int umem_pgs, rc;
+ 
+@@ -3854,30 +3835,18 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
+ 	}
+ 
+ 	umem_pgs = ib_umem_num_dma_blocks(umem, page_size);
+-	pbl_tbl = kcalloc(umem_pgs, sizeof(*pbl_tbl), GFP_KERNEL);
+-	if (!pbl_tbl) {
+-		rc = -ENOMEM;
+-		goto free_umem;
+-	}
+-
+-	/* Map umem buf ptrs to the PBL */
+-	umem_pgs = fill_umem_pbl_tbl(umem, pbl_tbl, order_base_2(page_size));
+-	rc = bnxt_qplib_reg_mr(&rdev->qplib_res, &mr->qplib_mr, pbl_tbl,
+-			       umem_pgs, false, page_size);
++	rc = bnxt_qplib_reg_mr(&rdev->qplib_res, &mr->qplib_mr, umem,
++			       umem_pgs, page_size);
+ 	if (rc) {
+ 		ibdev_err(&rdev->ibdev, "Failed to register user MR");
+-		goto fail;
++		goto free_umem;
+ 	}
+ 
+-	kfree(pbl_tbl);
+-
+ 	mr->ib_mr.lkey = mr->qplib_mr.lkey;
+ 	mr->ib_mr.rkey = mr->qplib_mr.lkey;
+ 	atomic_inc(&rdev->mr_count);
+ 
+ 	return &mr->ib_mr;
+-fail:
+-	kfree(pbl_tbl);
+ free_umem:
+ 	ib_umem_release(umem);
+ free_mrw:
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index bd153aa7e9ab3..b26a89187a192 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -2041,6 +2041,12 @@ int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
+ 	u32 pg_sz_lvl;
+ 	int rc;
+ 
++	if (!cq->dpi) {
++		dev_err(&rcfw->pdev->dev,
++			"FP: CREATE_CQ failed due to NULL DPI\n");
++		return -EINVAL;
++	}
++
+ 	hwq_attr.res = res;
+ 	hwq_attr.depth = cq->max_wqe;
+ 	hwq_attr.stride = sizeof(struct cq_base);
+@@ -2052,11 +2058,6 @@ int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
+ 
+ 	RCFW_CMD_PREP(req, CREATE_CQ, cmd_flags);
+ 
+-	if (!cq->dpi) {
+-		dev_err(&rcfw->pdev->dev,
+-			"FP: CREATE_CQ failed due to NULL DPI\n");
+-		return -EINVAL;
+-	}
+ 	req.dpi = cpu_to_le32(cq->dpi->dpi);
+ 	req.cq_handle = cpu_to_le64(cq->cq_handle);
+ 	req.cq_size = cpu_to_le32(cq->hwq.max_elements);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+index 754dcebeb4ca1..123ea759f2826 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+@@ -215,17 +215,9 @@ int bnxt_qplib_alloc_init_hwq(struct bnxt_qplib_hwq *hwq,
+ 			return -EINVAL;
+ 		hwq_attr->sginfo->npages = npages;
+ 	} else {
+-		unsigned long sginfo_num_pages = ib_umem_num_dma_blocks(
+-			hwq_attr->sginfo->umem, hwq_attr->sginfo->pgsize);
+-
++		npages = ib_umem_num_dma_blocks(hwq_attr->sginfo->umem,
++						hwq_attr->sginfo->pgsize);
+ 		hwq->is_user = true;
+-		npages = sginfo_num_pages;
+-		npages = (npages * PAGE_SIZE) /
+-			  BIT_ULL(hwq_attr->sginfo->pgshft);
+-		if ((sginfo_num_pages * PAGE_SIZE) %
+-		     BIT_ULL(hwq_attr->sginfo->pgshft))
+-			if (!npages)
+-				npages++;
+ 	}
+ 
+ 	if (npages == MAX_PBL_LVL_0_PGS) {
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index 64d44f51db4b6..f53d94c812ec8 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -650,16 +650,15 @@ int bnxt_qplib_dereg_mrw(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mrw,
+ }
+ 
+ int bnxt_qplib_reg_mr(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mr,
+-		      u64 *pbl_tbl, int num_pbls, bool block, u32 buf_pg_size)
++		      struct ib_umem *umem, int num_pbls, u32 buf_pg_size)
+ {
+ 	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+ 	struct bnxt_qplib_hwq_attr hwq_attr = {};
+ 	struct bnxt_qplib_sg_info sginfo = {};
+ 	struct creq_register_mr_resp resp;
+ 	struct cmdq_register_mr req;
+-	int pg_ptrs, pages, i, rc;
+ 	u16 cmd_flags = 0, level;
+-	dma_addr_t **pbl_ptr;
++	int pages, rc, pg_ptrs;
+ 	u32 pg_size;
+ 
+ 	if (num_pbls) {
+@@ -680,26 +679,21 @@ int bnxt_qplib_reg_mr(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mr,
+ 		/* Free the hwq if it already exist, must be a rereg */
+ 		if (mr->hwq.max_elements)
+ 			bnxt_qplib_free_hwq(res, &mr->hwq);
+-		/* Use system PAGE_SIZE */
+ 		hwq_attr.res = res;
+ 		hwq_attr.depth = pages;
+-		hwq_attr.stride = PAGE_SIZE;
++		hwq_attr.stride = sizeof(dma_addr_t);
+ 		hwq_attr.type = HWQ_TYPE_MR;
+ 		hwq_attr.sginfo = &sginfo;
++		hwq_attr.sginfo->umem = umem;
+ 		hwq_attr.sginfo->npages = pages;
+-		hwq_attr.sginfo->pgsize = PAGE_SIZE;
+-		hwq_attr.sginfo->pgshft = PAGE_SHIFT;
++		hwq_attr.sginfo->pgsize = buf_pg_size;
++		hwq_attr.sginfo->pgshft = ilog2(buf_pg_size);
+ 		rc = bnxt_qplib_alloc_init_hwq(&mr->hwq, &hwq_attr);
+ 		if (rc) {
+ 			dev_err(&res->pdev->dev,
+ 				"SP: Reg MR memory allocation failed\n");
+ 			return -ENOMEM;
+ 		}
+-		/* Write to the hwq */
+-		pbl_ptr = (dma_addr_t **)mr->hwq.pbl_ptr;
+-		for (i = 0; i < num_pbls; i++)
+-			pbl_ptr[PTR_PG(i)][PTR_IDX(i)] =
+-				(pbl_tbl[i] & PAGE_MASK) | PTU_PTE_VALID;
+ 	}
+ 
+ 	RCFW_CMD_PREP(req, REGISTER_MR, cmd_flags);
+@@ -711,7 +705,7 @@ int bnxt_qplib_reg_mr(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mr,
+ 		req.pbl = 0;
+ 		pg_size = PAGE_SIZE;
+ 	} else {
+-		level = mr->hwq.level + 1;
++		level = mr->hwq.level;
+ 		req.pbl = cpu_to_le64(mr->hwq.pbl[PBL_LVL_0].pg_map_arr[0]);
+ 	}
+ 	pg_size = buf_pg_size ? buf_pg_size : PAGE_SIZE;
+@@ -728,7 +722,7 @@ int bnxt_qplib_reg_mr(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mr,
+ 	req.mr_size = cpu_to_le64(mr->total_size);
+ 
+ 	rc = bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+-					  (void *)&resp, NULL, block);
++					  (void *)&resp, NULL, false);
+ 	if (rc)
+ 		goto fail;
+ 
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.h b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+index 967890cd81f27..bc228340684f4 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+@@ -254,7 +254,7 @@ int bnxt_qplib_alloc_mrw(struct bnxt_qplib_res *res,
+ int bnxt_qplib_dereg_mrw(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mrw,
+ 			 bool block);
+ int bnxt_qplib_reg_mr(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mr,
+-		      u64 *pbl_tbl, int num_pbls, bool block, u32 buf_pg_size);
++		      struct ib_umem *umem, int num_pbls, u32 buf_pg_size);
+ int bnxt_qplib_free_mrw(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mr);
+ int bnxt_qplib_alloc_fast_reg_mr(struct bnxt_qplib_res *res,
+ 				 struct bnxt_qplib_mrw *mr, int max);
+diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
+index 2ece682c7835b..9cf051818725c 100644
+--- a/drivers/infiniband/hw/efa/efa_verbs.c
++++ b/drivers/infiniband/hw/efa/efa_verbs.c
+@@ -1328,7 +1328,7 @@ static int pbl_continuous_initialize(struct efa_dev *dev,
+  */
+ static int pbl_indirect_initialize(struct efa_dev *dev, struct pbl_context *pbl)
+ {
+-	u32 size_in_pages = DIV_ROUND_UP(pbl->pbl_buf_size_in_bytes, PAGE_SIZE);
++	u32 size_in_pages = DIV_ROUND_UP(pbl->pbl_buf_size_in_bytes, EFA_CHUNK_PAYLOAD_SIZE);
+ 	struct scatterlist *sgl;
+ 	int sg_dma_cnt, err;
+ 
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index f216a86d9c817..0a061a196b531 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -3914,8 +3914,7 @@ int amd_iommu_activate_guest_mode(void *data)
+ 	struct irte_ga *entry = (struct irte_ga *) ir_data->entry;
+ 	u64 valid;
+ 
+-	if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) ||
+-	    !entry || entry->lo.fields_vapic.guest_mode)
++	if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) || !entry)
+ 		return 0;
+ 
+ 	valid = entry->lo.fields_vapic.valid;
+diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
+index e5d86b7177dec..12551dc117148 100644
+--- a/drivers/iommu/rockchip-iommu.c
++++ b/drivers/iommu/rockchip-iommu.c
+@@ -1218,18 +1218,20 @@ static int rk_iommu_probe(struct platform_device *pdev)
+ 	for (i = 0; i < iommu->num_irq; i++) {
+ 		int irq = platform_get_irq(pdev, i);
+ 
+-		if (irq < 0)
+-			return irq;
++		if (irq < 0) {
++			err = irq;
++			goto err_pm_disable;
++		}
+ 
+ 		err = devm_request_irq(iommu->dev, irq, rk_iommu_irq,
+ 				       IRQF_SHARED, dev_name(dev), iommu);
+-		if (err) {
+-			pm_runtime_disable(dev);
+-			goto err_remove_sysfs;
+-		}
++		if (err)
++			goto err_pm_disable;
+ 	}
+ 
+ 	return 0;
++err_pm_disable:
++	pm_runtime_disable(dev);
+ err_remove_sysfs:
+ 	iommu_device_sysfs_remove(&iommu->iommu);
+ err_put_group:
+diff --git a/drivers/mailbox/mailbox-test.c b/drivers/mailbox/mailbox-test.c
+index 4555d678fadda..abcee58e851c2 100644
+--- a/drivers/mailbox/mailbox-test.c
++++ b/drivers/mailbox/mailbox-test.c
+@@ -12,6 +12,7 @@
+ #include <linux/kernel.h>
+ #include <linux/mailbox_client.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+ #include <linux/poll.h>
+@@ -38,6 +39,7 @@ struct mbox_test_device {
+ 	char			*signal;
+ 	char			*message;
+ 	spinlock_t		lock;
++	struct mutex		mutex;
+ 	wait_queue_head_t	waitq;
+ 	struct fasync_struct	*async_queue;
+ 	struct dentry		*root_debugfs_dir;
+@@ -95,6 +97,7 @@ static ssize_t mbox_test_message_write(struct file *filp,
+ 				       size_t count, loff_t *ppos)
+ {
+ 	struct mbox_test_device *tdev = filp->private_data;
++	char *message;
+ 	void *data;
+ 	int ret;
+ 
+@@ -110,10 +113,13 @@ static ssize_t mbox_test_message_write(struct file *filp,
+ 		return -EINVAL;
+ 	}
+ 
+-	tdev->message = kzalloc(MBOX_MAX_MSG_LEN, GFP_KERNEL);
+-	if (!tdev->message)
++	message = kzalloc(MBOX_MAX_MSG_LEN, GFP_KERNEL);
++	if (!message)
+ 		return -ENOMEM;
+ 
++	mutex_lock(&tdev->mutex);
++
++	tdev->message = message;
+ 	ret = copy_from_user(tdev->message, userbuf, count);
+ 	if (ret) {
+ 		ret = -EFAULT;
+@@ -144,6 +150,8 @@ out:
+ 	kfree(tdev->message);
+ 	tdev->signal = NULL;
+ 
++	mutex_unlock(&tdev->mutex);
++
+ 	return ret < 0 ? ret : count;
+ }
+ 
+@@ -392,6 +400,7 @@ static int mbox_test_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, tdev);
+ 
+ 	spin_lock_init(&tdev->lock);
++	mutex_init(&tdev->mutex);
+ 
+ 	if (tdev->rx_channel) {
+ 		tdev->rx_buffer = devm_kzalloc(&pdev->dev,
+diff --git a/drivers/media/dvb-core/dvb_ca_en50221.c b/drivers/media/dvb-core/dvb_ca_en50221.c
+index fd476536d32ed..dec036e0336cb 100644
+--- a/drivers/media/dvb-core/dvb_ca_en50221.c
++++ b/drivers/media/dvb-core/dvb_ca_en50221.c
+@@ -151,6 +151,12 @@ struct dvb_ca_private {
+ 
+ 	/* mutex serializing ioctls */
+ 	struct mutex ioctl_mutex;
++
++	/* A mutex used when a device is disconnected */
++	struct mutex remove_mutex;
++
++	/* Whether the device is disconnected */
++	int exit;
+ };
+ 
+ static void dvb_ca_private_free(struct dvb_ca_private *ca)
+@@ -187,7 +193,7 @@ static void dvb_ca_en50221_thread_wakeup(struct dvb_ca_private *ca);
+ static int dvb_ca_en50221_read_data(struct dvb_ca_private *ca, int slot,
+ 				    u8 *ebuf, int ecount);
+ static int dvb_ca_en50221_write_data(struct dvb_ca_private *ca, int slot,
+-				     u8 *ebuf, int ecount);
++				     u8 *ebuf, int ecount, int size_write_flag);
+ 
+ /**
+  * Safely find needle in haystack.
+@@ -370,7 +376,7 @@ static int dvb_ca_en50221_link_init(struct dvb_ca_private *ca, int slot)
+ 	ret = dvb_ca_en50221_wait_if_status(ca, slot, STATUSREG_FR, HZ / 10);
+ 	if (ret)
+ 		return ret;
+-	ret = dvb_ca_en50221_write_data(ca, slot, buf, 2);
++	ret = dvb_ca_en50221_write_data(ca, slot, buf, 2, CMDREG_SW);
+ 	if (ret != 2)
+ 		return -EIO;
+ 	ret = ca->pub->write_cam_control(ca->pub, slot, CTRLIF_COMMAND, IRQEN);
+@@ -778,11 +784,13 @@ exit:
+  * @buf: The data in this buffer is treated as a complete link-level packet to
+  *	 be written.
+  * @bytes_write: Size of ebuf.
++ * @size_write_flag: A flag on Command Register which says whether the link size
++ * information will be writen or not.
+  *
+  * return: Number of bytes written, or < 0 on error.
+  */
+ static int dvb_ca_en50221_write_data(struct dvb_ca_private *ca, int slot,
+-				     u8 *buf, int bytes_write)
++				     u8 *buf, int bytes_write, int size_write_flag)
+ {
+ 	struct dvb_ca_slot *sl = &ca->slot_info[slot];
+ 	int status;
+@@ -817,7 +825,7 @@ static int dvb_ca_en50221_write_data(struct dvb_ca_private *ca, int slot,
+ 
+ 	/* OK, set HC bit */
+ 	status = ca->pub->write_cam_control(ca->pub, slot, CTRLIF_COMMAND,
+-					    IRQEN | CMDREG_HC);
++					    IRQEN | CMDREG_HC | size_write_flag);
+ 	if (status)
+ 		goto exit;
+ 
+@@ -1505,7 +1513,7 @@ static ssize_t dvb_ca_en50221_io_write(struct file *file,
+ 
+ 			mutex_lock(&sl->slot_lock);
+ 			status = dvb_ca_en50221_write_data(ca, slot, fragbuf,
+-							   fraglen + 2);
++							   fraglen + 2, 0);
+ 			mutex_unlock(&sl->slot_lock);
+ 			if (status == (fraglen + 2)) {
+ 				written = 1;
+@@ -1706,12 +1714,22 @@ static int dvb_ca_en50221_io_open(struct inode *inode, struct file *file)
+ 
+ 	dprintk("%s\n", __func__);
+ 
+-	if (!try_module_get(ca->pub->owner))
++	mutex_lock(&ca->remove_mutex);
++
++	if (ca->exit) {
++		mutex_unlock(&ca->remove_mutex);
++		return -ENODEV;
++	}
++
++	if (!try_module_get(ca->pub->owner)) {
++		mutex_unlock(&ca->remove_mutex);
+ 		return -EIO;
++	}
+ 
+ 	err = dvb_generic_open(inode, file);
+ 	if (err < 0) {
+ 		module_put(ca->pub->owner);
++		mutex_unlock(&ca->remove_mutex);
+ 		return err;
+ 	}
+ 
+@@ -1736,6 +1754,7 @@ static int dvb_ca_en50221_io_open(struct inode *inode, struct file *file)
+ 
+ 	dvb_ca_private_get(ca);
+ 
++	mutex_unlock(&ca->remove_mutex);
+ 	return 0;
+ }
+ 
+@@ -1755,6 +1774,8 @@ static int dvb_ca_en50221_io_release(struct inode *inode, struct file *file)
+ 
+ 	dprintk("%s\n", __func__);
+ 
++	mutex_lock(&ca->remove_mutex);
++
+ 	/* mark the CA device as closed */
+ 	ca->open = 0;
+ 	dvb_ca_en50221_thread_update_delay(ca);
+@@ -1765,6 +1786,13 @@ static int dvb_ca_en50221_io_release(struct inode *inode, struct file *file)
+ 
+ 	dvb_ca_private_put(ca);
+ 
++	if (dvbdev->users == 1 && ca->exit == 1) {
++		mutex_unlock(&ca->remove_mutex);
++		wake_up(&dvbdev->wait_queue);
++	} else {
++		mutex_unlock(&ca->remove_mutex);
++	}
++
+ 	return err;
+ }
+ 
+@@ -1888,6 +1916,7 @@ int dvb_ca_en50221_init(struct dvb_adapter *dvb_adapter,
+ 	}
+ 
+ 	mutex_init(&ca->ioctl_mutex);
++	mutex_init(&ca->remove_mutex);
+ 
+ 	if (signal_pending(current)) {
+ 		ret = -EINTR;
+@@ -1930,6 +1959,14 @@ void dvb_ca_en50221_release(struct dvb_ca_en50221 *pubca)
+ 
+ 	dprintk("%s\n", __func__);
+ 
++	mutex_lock(&ca->remove_mutex);
++	ca->exit = 1;
++	mutex_unlock(&ca->remove_mutex);
++
++	if (ca->dvbdev->users < 1)
++		wait_event(ca->dvbdev->wait_queue,
++				ca->dvbdev->users == 1);
++
+ 	/* shutdown the thread if there was one */
+ 	kthread_stop(ca->thread);
+ 
+diff --git a/drivers/media/dvb-core/dvb_demux.c b/drivers/media/dvb-core/dvb_demux.c
+index 5fde1d38b3e34..80b495982f63c 100644
+--- a/drivers/media/dvb-core/dvb_demux.c
++++ b/drivers/media/dvb-core/dvb_demux.c
+@@ -125,12 +125,12 @@ static inline int dvb_dmx_swfilter_payload(struct dvb_demux_feed *feed,
+ 
+ 	cc = buf[3] & 0x0f;
+ 	ccok = ((feed->cc + 1) & 0x0f) == cc;
+-	feed->cc = cc;
+ 	if (!ccok) {
+ 		set_buf_flags(feed, DMX_BUFFER_FLAG_DISCONTINUITY_DETECTED);
+ 		dprintk_sect_loss("missed packet: %d instead of %d!\n",
+ 				  cc, (feed->cc + 1) & 0x0f);
+ 	}
++	feed->cc = cc;
+ 
+ 	if (buf[1] & 0x40)	// PUSI ?
+ 		feed->peslen = 0xfffa;
+@@ -310,7 +310,6 @@ static int dvb_dmx_swfilter_section_packet(struct dvb_demux_feed *feed,
+ 
+ 	cc = buf[3] & 0x0f;
+ 	ccok = ((feed->cc + 1) & 0x0f) == cc;
+-	feed->cc = cc;
+ 
+ 	if (buf[3] & 0x20) {
+ 		/* adaption field present, check for discontinuity_indicator */
+@@ -346,6 +345,7 @@ static int dvb_dmx_swfilter_section_packet(struct dvb_demux_feed *feed,
+ 		feed->pusi_seen = false;
+ 		dvb_dmx_swfilter_section_new(feed);
+ 	}
++	feed->cc = cc;
+ 
+ 	if (buf[1] & 0x40) {
+ 		/* PUSI=1 (is set), section boundary is here */
+diff --git a/drivers/media/dvb-core/dvb_frontend.c b/drivers/media/dvb-core/dvb_frontend.c
+index b04638321b75b..ad3e42a4eaf73 100644
+--- a/drivers/media/dvb-core/dvb_frontend.c
++++ b/drivers/media/dvb-core/dvb_frontend.c
+@@ -292,14 +292,22 @@ static int dvb_frontend_get_event(struct dvb_frontend *fe,
+ 	}
+ 
+ 	if (events->eventw == events->eventr) {
+-		int ret;
++		struct wait_queue_entry wait;
++		int ret = 0;
+ 
+ 		if (flags & O_NONBLOCK)
+ 			return -EWOULDBLOCK;
+ 
+-		ret = wait_event_interruptible(events->wait_queue,
+-					       dvb_frontend_test_event(fepriv, events));
+-
++		init_waitqueue_entry(&wait, current);
++		add_wait_queue(&events->wait_queue, &wait);
++		while (!dvb_frontend_test_event(fepriv, events)) {
++			wait_woken(&wait, TASK_INTERRUPTIBLE, 0);
++			if (signal_pending(current)) {
++				ret = -ERESTARTSYS;
++				break;
++			}
++		}
++		remove_wait_queue(&events->wait_queue, &wait);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+diff --git a/drivers/media/dvb-core/dvb_net.c b/drivers/media/dvb-core/dvb_net.c
+index dddebea644bb8..c594b1bdfcaa5 100644
+--- a/drivers/media/dvb-core/dvb_net.c
++++ b/drivers/media/dvb-core/dvb_net.c
+@@ -1564,15 +1564,43 @@ static long dvb_net_ioctl(struct file *file,
+ 	return dvb_usercopy(file, cmd, arg, dvb_net_do_ioctl);
+ }
+ 
++static int locked_dvb_net_open(struct inode *inode, struct file *file)
++{
++	struct dvb_device *dvbdev = file->private_data;
++	struct dvb_net *dvbnet = dvbdev->priv;
++	int ret;
++
++	if (mutex_lock_interruptible(&dvbnet->remove_mutex))
++		return -ERESTARTSYS;
++
++	if (dvbnet->exit) {
++		mutex_unlock(&dvbnet->remove_mutex);
++		return -ENODEV;
++	}
++
++	ret = dvb_generic_open(inode, file);
++
++	mutex_unlock(&dvbnet->remove_mutex);
++
++	return ret;
++}
++
+ static int dvb_net_close(struct inode *inode, struct file *file)
+ {
+ 	struct dvb_device *dvbdev = file->private_data;
+ 	struct dvb_net *dvbnet = dvbdev->priv;
+ 
++	mutex_lock(&dvbnet->remove_mutex);
++
+ 	dvb_generic_release(inode, file);
+ 
+-	if(dvbdev->users == 1 && dvbnet->exit == 1)
++	if (dvbdev->users == 1 && dvbnet->exit == 1) {
++		mutex_unlock(&dvbnet->remove_mutex);
+ 		wake_up(&dvbdev->wait_queue);
++	} else {
++		mutex_unlock(&dvbnet->remove_mutex);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1580,7 +1608,7 @@ static int dvb_net_close(struct inode *inode, struct file *file)
+ static const struct file_operations dvb_net_fops = {
+ 	.owner = THIS_MODULE,
+ 	.unlocked_ioctl = dvb_net_ioctl,
+-	.open =	dvb_generic_open,
++	.open =	locked_dvb_net_open,
+ 	.release = dvb_net_close,
+ 	.llseek = noop_llseek,
+ };
+@@ -1599,10 +1627,13 @@ void dvb_net_release (struct dvb_net *dvbnet)
+ {
+ 	int i;
+ 
++	mutex_lock(&dvbnet->remove_mutex);
+ 	dvbnet->exit = 1;
++	mutex_unlock(&dvbnet->remove_mutex);
++
+ 	if (dvbnet->dvbdev->users < 1)
+ 		wait_event(dvbnet->dvbdev->wait_queue,
+-				dvbnet->dvbdev->users==1);
++				dvbnet->dvbdev->users == 1);
+ 
+ 	dvb_unregister_device(dvbnet->dvbdev);
+ 
+@@ -1621,6 +1652,7 @@ int dvb_net_init (struct dvb_adapter *adap, struct dvb_net *dvbnet,
+ 	int i;
+ 
+ 	mutex_init(&dvbnet->ioctl_mutex);
++	mutex_init(&dvbnet->remove_mutex);
+ 	dvbnet->demux = dmx;
+ 
+ 	for (i=0; i<DVB_NET_DEVICES_MAX; i++)
+diff --git a/drivers/media/dvb-frontends/mn88443x.c b/drivers/media/dvb-frontends/mn88443x.c
+index fff212c0bf3b5..05894deb8a19a 100644
+--- a/drivers/media/dvb-frontends/mn88443x.c
++++ b/drivers/media/dvb-frontends/mn88443x.c
+@@ -800,7 +800,7 @@ MODULE_DEVICE_TABLE(i2c, mn88443x_i2c_id);
+ static struct i2c_driver mn88443x_driver = {
+ 	.driver = {
+ 		.name = "mn88443x",
+-		.of_match_table = of_match_ptr(mn88443x_of_match),
++		.of_match_table = mn88443x_of_match,
+ 	},
+ 	.probe    = mn88443x_probe,
+ 	.remove   = mn88443x_remove,
+diff --git a/drivers/media/pci/netup_unidvb/netup_unidvb_core.c b/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
+index a71814e2772d1..7c5061953ee82 100644
+--- a/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
++++ b/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
+@@ -887,12 +887,7 @@ static int netup_unidvb_initdev(struct pci_dev *pci_dev,
+ 		ndev->lmmio0, (u32)pci_resource_len(pci_dev, 0),
+ 		ndev->lmmio1, (u32)pci_resource_len(pci_dev, 1),
+ 		pci_dev->irq);
+-	if (request_irq(pci_dev->irq, netup_unidvb_isr, IRQF_SHARED,
+-			"netup_unidvb", pci_dev) < 0) {
+-		dev_err(&pci_dev->dev,
+-			"%s(): can't get IRQ %d\n", __func__, pci_dev->irq);
+-		goto irq_request_err;
+-	}
++
+ 	ndev->dma_size = 2 * 188 *
+ 		NETUP_DMA_BLOCKS_COUNT * NETUP_DMA_PACKETS_COUNT;
+ 	ndev->dma_virt = dma_alloc_coherent(&pci_dev->dev,
+@@ -933,6 +928,14 @@ static int netup_unidvb_initdev(struct pci_dev *pci_dev,
+ 		dev_err(&pci_dev->dev, "netup_unidvb: DMA setup failed\n");
+ 		goto dma_setup_err;
+ 	}
++
++	if (request_irq(pci_dev->irq, netup_unidvb_isr, IRQF_SHARED,
++			"netup_unidvb", pci_dev) < 0) {
++		dev_err(&pci_dev->dev,
++			"%s(): can't get IRQ %d\n", __func__, pci_dev->irq);
++		goto dma_setup_err;
++	}
++
+ 	dev_info(&pci_dev->dev,
+ 		"netup_unidvb: device has been initialized\n");
+ 	return 0;
+@@ -951,8 +954,6 @@ spi_setup_err:
+ 	dma_free_coherent(&pci_dev->dev, ndev->dma_size,
+ 			ndev->dma_virt, ndev->dma_phys);
+ dma_alloc_err:
+-	free_irq(pci_dev->irq, pci_dev);
+-irq_request_err:
+ 	iounmap(ndev->lmmio1);
+ pci_bar1_error:
+ 	iounmap(ndev->lmmio0);
+diff --git a/drivers/media/platform/rcar-vin/rcar-dma.c b/drivers/media/platform/rcar-vin/rcar-dma.c
+index 692dea300b0de..63c61c704446b 100644
+--- a/drivers/media/platform/rcar-vin/rcar-dma.c
++++ b/drivers/media/platform/rcar-vin/rcar-dma.c
+@@ -645,11 +645,9 @@ static int rvin_setup(struct rvin_dev *vin)
+ 	case V4L2_FIELD_SEQ_TB:
+ 	case V4L2_FIELD_SEQ_BT:
+ 	case V4L2_FIELD_NONE:
+-		vnmc = VNMC_IM_ODD_EVEN;
+-		progressive = true;
+-		break;
+ 	case V4L2_FIELD_ALTERNATE:
+ 		vnmc = VNMC_IM_ODD_EVEN;
++		progressive = true;
+ 		break;
+ 	default:
+ 		vnmc = VNMC_IM_ODD;
+diff --git a/drivers/media/platform/ti-vpe/cal.h b/drivers/media/platform/ti-vpe/cal.h
+index 4123405ee0cf7..20d07311d2223 100644
+--- a/drivers/media/platform/ti-vpe/cal.h
++++ b/drivers/media/platform/ti-vpe/cal.h
+@@ -215,7 +215,7 @@ static inline void cal_write(struct cal_dev *cal, u32 offset, u32 val)
+ 	iowrite32(val, cal->base + offset);
+ }
+ 
+-static inline u32 cal_read_field(struct cal_dev *cal, u32 offset, u32 mask)
++static __always_inline u32 cal_read_field(struct cal_dev *cal, u32 offset, u32 mask)
+ {
+ 	return FIELD_GET(mask, cal_read(cal, offset));
+ }
+diff --git a/drivers/media/usb/dvb-usb-v2/ce6230.c b/drivers/media/usb/dvb-usb-v2/ce6230.c
+index 44540de1a2066..d3b5cb4a24daf 100644
+--- a/drivers/media/usb/dvb-usb-v2/ce6230.c
++++ b/drivers/media/usb/dvb-usb-v2/ce6230.c
+@@ -101,6 +101,10 @@ static int ce6230_i2c_master_xfer(struct i2c_adapter *adap,
+ 		if (num > i + 1 && (msg[i+1].flags & I2C_M_RD)) {
+ 			if (msg[i].addr ==
+ 				ce6230_zl10353_config.demod_address) {
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				req.cmd = DEMOD_READ;
+ 				req.value = msg[i].addr >> 1;
+ 				req.index = msg[i].buf[0];
+@@ -117,6 +121,10 @@ static int ce6230_i2c_master_xfer(struct i2c_adapter *adap,
+ 		} else {
+ 			if (msg[i].addr ==
+ 				ce6230_zl10353_config.demod_address) {
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				req.cmd = DEMOD_WRITE;
+ 				req.value = msg[i].addr >> 1;
+ 				req.index = msg[i].buf[0];
+diff --git a/drivers/media/usb/dvb-usb-v2/ec168.c b/drivers/media/usb/dvb-usb-v2/ec168.c
+index 7ed0ab9e429b1..0e4773fc025c9 100644
+--- a/drivers/media/usb/dvb-usb-v2/ec168.c
++++ b/drivers/media/usb/dvb-usb-v2/ec168.c
+@@ -115,6 +115,10 @@ static int ec168_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 	while (i < num) {
+ 		if (num > i + 1 && (msg[i+1].flags & I2C_M_RD)) {
+ 			if (msg[i].addr == ec168_ec100_config.demod_address) {
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				req.cmd = READ_DEMOD;
+ 				req.value = 0;
+ 				req.index = 0xff00 + msg[i].buf[0]; /* reg */
+@@ -131,6 +135,10 @@ static int ec168_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 			}
+ 		} else {
+ 			if (msg[i].addr == ec168_ec100_config.demod_address) {
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				req.cmd = WRITE_DEMOD;
+ 				req.value = msg[i].buf[1]; /* val */
+ 				req.index = 0xff00 + msg[i].buf[0]; /* reg */
+@@ -139,6 +147,10 @@ static int ec168_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 				ret = ec168_ctrl_msg(d, &req);
+ 				i += 1;
+ 			} else {
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				req.cmd = WRITE_I2C;
+ 				req.value = msg[i].buf[0]; /* val */
+ 				req.index = 0x0100 + msg[i].addr; /* I2C addr */
+diff --git a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
+index c278b9b0f1024..70a2f04942164 100644
+--- a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
++++ b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
+@@ -176,6 +176,10 @@ static int rtl28xxu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 			ret = -EOPNOTSUPP;
+ 			goto err_mutex_unlock;
+ 		} else if (msg[0].addr == 0x10) {
++			if (msg[0].len < 1 || msg[1].len < 1) {
++				ret = -EOPNOTSUPP;
++				goto err_mutex_unlock;
++			}
+ 			/* method 1 - integrated demod */
+ 			if (msg[0].buf[0] == 0x00) {
+ 				/* return demod page from driver cache */
+@@ -189,6 +193,10 @@ static int rtl28xxu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 				ret = rtl28xxu_ctrl_msg(d, &req);
+ 			}
+ 		} else if (msg[0].len < 2) {
++			if (msg[0].len < 1) {
++				ret = -EOPNOTSUPP;
++				goto err_mutex_unlock;
++			}
+ 			/* method 2 - old I2C */
+ 			req.value = (msg[0].buf[0] << 8) | (msg[0].addr << 1);
+ 			req.index = CMD_I2C_RD;
+@@ -217,8 +225,16 @@ static int rtl28xxu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 			ret = -EOPNOTSUPP;
+ 			goto err_mutex_unlock;
+ 		} else if (msg[0].addr == 0x10) {
++			if (msg[0].len < 1) {
++				ret = -EOPNOTSUPP;
++				goto err_mutex_unlock;
++			}
+ 			/* method 1 - integrated demod */
+ 			if (msg[0].buf[0] == 0x00) {
++				if (msg[0].len < 2) {
++					ret = -EOPNOTSUPP;
++					goto err_mutex_unlock;
++				}
+ 				/* save demod page for later demod access */
+ 				dev->page = msg[0].buf[1];
+ 				ret = 0;
+@@ -231,6 +247,10 @@ static int rtl28xxu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 				ret = rtl28xxu_ctrl_msg(d, &req);
+ 			}
+ 		} else if ((msg[0].len < 23) && (!dev->new_i2c_write)) {
++			if (msg[0].len < 1) {
++				ret = -EOPNOTSUPP;
++				goto err_mutex_unlock;
++			}
+ 			/* method 2 - old I2C */
+ 			req.value = (msg[0].buf[0] << 8) | (msg[0].addr << 1);
+ 			req.index = CMD_I2C_WR;
+diff --git a/drivers/media/usb/dvb-usb/az6027.c b/drivers/media/usb/dvb-usb/az6027.c
+index 32b4ee65c2802..991f4510aaebb 100644
+--- a/drivers/media/usb/dvb-usb/az6027.c
++++ b/drivers/media/usb/dvb-usb/az6027.c
+@@ -988,6 +988,10 @@ static int az6027_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int n
+ 			/* write/read request */
+ 			if (i + 1 < num && (msg[i + 1].flags & I2C_M_RD)) {
+ 				req = 0xB9;
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				index = (((msg[i].buf[0] << 8) & 0xff00) | (msg[i].buf[1] & 0x00ff));
+ 				value = msg[i].addr + (msg[i].len << 8);
+ 				length = msg[i + 1].len + 6;
+@@ -1001,6 +1005,10 @@ static int az6027_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int n
+ 
+ 				/* demod 16bit addr */
+ 				req = 0xBD;
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				index = (((msg[i].buf[0] << 8) & 0xff00) | (msg[i].buf[1] & 0x00ff));
+ 				value = msg[i].addr + (2 << 8);
+ 				length = msg[i].len - 2;
+@@ -1026,6 +1034,10 @@ static int az6027_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int n
+ 			} else {
+ 
+ 				req = 0xBD;
++				if (msg[i].len < 1) {
++					i = -EOPNOTSUPP;
++					break;
++				}
+ 				index = msg[i].buf[0] & 0x00FF;
+ 				value = msg[i].addr + (1 << 8);
+ 				length = msg[i].len - 1;
+diff --git a/drivers/media/usb/dvb-usb/digitv.c b/drivers/media/usb/dvb-usb/digitv.c
+index 4e3b3c064bcfb..e56efebd4f0a1 100644
+--- a/drivers/media/usb/dvb-usb/digitv.c
++++ b/drivers/media/usb/dvb-usb/digitv.c
+@@ -63,6 +63,10 @@ static int digitv_i2c_xfer(struct i2c_adapter *adap,struct i2c_msg msg[],int num
+ 		warn("more than 2 i2c messages at a time is not handled yet. TODO.");
+ 
+ 	for (i = 0; i < num; i++) {
++		if (msg[i].len < 1) {
++			i = -EOPNOTSUPP;
++			break;
++		}
+ 		/* write/read request */
+ 		if (i+1 < num && (msg[i+1].flags & I2C_M_RD)) {
+ 			if (digitv_ctrl_msg(d, USB_READ_COFDM, msg[i].buf[0], NULL, 0,
+diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
+index aa929db56db1f..3c4ac998d040f 100644
+--- a/drivers/media/usb/dvb-usb/dw2102.c
++++ b/drivers/media/usb/dvb-usb/dw2102.c
+@@ -946,7 +946,7 @@ static int su3000_read_mac_address(struct dvb_usb_device *d, u8 mac[6])
+ 	for (i = 0; i < 6; i++) {
+ 		obuf[1] = 0xf0 + i;
+ 		if (i2c_transfer(&d->i2c_adap, msg, 2) != 2)
+-			break;
++			return -1;
+ 		else
+ 			mac[i] = ibuf[0];
+ 	}
+diff --git a/drivers/media/usb/ttusb-dec/ttusb_dec.c b/drivers/media/usb/ttusb-dec/ttusb_dec.c
+index df6c5e4a0f058..68f88143c8a6e 100644
+--- a/drivers/media/usb/ttusb-dec/ttusb_dec.c
++++ b/drivers/media/usb/ttusb-dec/ttusb_dec.c
+@@ -1551,8 +1551,7 @@ static void ttusb_dec_exit_dvb(struct ttusb_dec *dec)
+ 	dvb_dmx_release(&dec->demux);
+ 	if (dec->fe) {
+ 		dvb_unregister_frontend(dec->fe);
+-		if (dec->fe->ops.release)
+-			dec->fe->ops.release(dec->fe);
++		dvb_frontend_detach(dec->fe);
+ 	}
+ 	dvb_unregister_adapter(&dec->adapter);
+ }
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 67a51f69cf9aa..2488a9a67d18a 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1675,8 +1675,10 @@ static void fastrpc_notify_users(struct fastrpc_user *user)
+ 	struct fastrpc_invoke_ctx *ctx;
+ 
+ 	spin_lock(&user->lock);
+-	list_for_each_entry(ctx, &user->pending, node)
++	list_for_each_entry(ctx, &user->pending, node) {
++		ctx->retval = -EPIPE;
+ 		complete(&ctx->work);
++	}
+ 	spin_unlock(&user->lock);
+ }
+ 
+@@ -1686,7 +1688,9 @@ static void fastrpc_rpmsg_remove(struct rpmsg_device *rpdev)
+ 	struct fastrpc_user *user;
+ 	unsigned long flags;
+ 
++	/* No invocations past this point */
+ 	spin_lock_irqsave(&cctx->lock, flags);
++	cctx->rpdev = NULL;
+ 	list_for_each_entry(user, &cctx->users, user)
+ 		fastrpc_notify_users(user);
+ 	spin_unlock_irqrestore(&cctx->lock, flags);
+@@ -1694,7 +1698,6 @@ static void fastrpc_rpmsg_remove(struct rpmsg_device *rpdev)
+ 	misc_deregister(&cctx->miscdev);
+ 	of_platform_depopulate(&rpdev->dev);
+ 
+-	cctx->rpdev = NULL;
+ 	fastrpc_channel_ctx_put(cctx);
+ }
+ 
+diff --git a/drivers/mmc/host/vub300.c b/drivers/mmc/host/vub300.c
+index 72f65f32abbc7..7dc0e91dabfc7 100644
+--- a/drivers/mmc/host/vub300.c
++++ b/drivers/mmc/host/vub300.c
+@@ -1715,6 +1715,9 @@ static void construct_request_response(struct vub300_mmc_host *vub300,
+ 	int bytes = 3 & less_cmd;
+ 	int words = less_cmd >> 2;
+ 	u8 *r = vub300->resp.response.command_response;
++
++	if (!resp_len)
++		return;
+ 	if (bytes == 3) {
+ 		cmd->resp[words] = (r[1 + (words << 2)] << 24)
+ 			| (r[2 + (words << 2)] << 16)
+diff --git a/drivers/mtd/nand/raw/ingenic/ingenic_ecc.h b/drivers/mtd/nand/raw/ingenic/ingenic_ecc.h
+index 2cda439b5e11b..017868f59f222 100644
+--- a/drivers/mtd/nand/raw/ingenic/ingenic_ecc.h
++++ b/drivers/mtd/nand/raw/ingenic/ingenic_ecc.h
+@@ -36,25 +36,25 @@ int ingenic_ecc_correct(struct ingenic_ecc *ecc,
+ void ingenic_ecc_release(struct ingenic_ecc *ecc);
+ struct ingenic_ecc *of_ingenic_ecc_get(struct device_node *np);
+ #else /* CONFIG_MTD_NAND_INGENIC_ECC */
+-int ingenic_ecc_calculate(struct ingenic_ecc *ecc,
++static inline int ingenic_ecc_calculate(struct ingenic_ecc *ecc,
+ 			  struct ingenic_ecc_params *params,
+ 			  const u8 *buf, u8 *ecc_code)
+ {
+ 	return -ENODEV;
+ }
+ 
+-int ingenic_ecc_correct(struct ingenic_ecc *ecc,
++static inline int ingenic_ecc_correct(struct ingenic_ecc *ecc,
+ 			struct ingenic_ecc_params *params, u8 *buf,
+ 			u8 *ecc_code)
+ {
+ 	return -ENODEV;
+ }
+ 
+-void ingenic_ecc_release(struct ingenic_ecc *ecc)
++static inline void ingenic_ecc_release(struct ingenic_ecc *ecc)
+ {
+ }
+ 
+-struct ingenic_ecc *of_ingenic_ecc_get(struct device_node *np)
++static inline struct ingenic_ecc *of_ingenic_ecc_get(struct device_node *np)
+ {
+ 	return ERR_PTR(-ENODEV);
+ }
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index dce35f81e0a55..2ef1a5adfcfc1 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -2443,6 +2443,12 @@ static int marvell_nfc_setup_interface(struct nand_chip *chip, int chipnr,
+ 			NDTR1_WAIT_MODE;
+ 	}
+ 
++	/*
++	 * Reset nfc->selected_chip so the next command will cause the timing
++	 * registers to be updated in marvell_nfc_select_target().
++	 */
++	nfc->selected_chip = NULL;
++
+ 	return 0;
+ }
+ 
+@@ -2885,10 +2891,6 @@ static int marvell_nfc_init(struct marvell_nfc *nfc)
+ 		regmap_update_bits(sysctrl_base, GENCONF_CLK_GATING_CTRL,
+ 				   GENCONF_CLK_GATING_CTRL_ND_GATE,
+ 				   GENCONF_CLK_GATING_CTRL_ND_GATE);
+-
+-		regmap_update_bits(sysctrl_base, GENCONF_ND_CLK_CTRL,
+-				   GENCONF_ND_CLK_CTRL_EN,
+-				   GENCONF_ND_CLK_CTRL_EN);
+ 	}
+ 
+ 	/* Configure the DMA if appropriate */
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 321c821876f65..8b2c8546f4c99 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -5547,7 +5547,7 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
+ 		goto out;
+ 	}
+ 	if (chip->reset)
+-		usleep_range(1000, 2000);
++		usleep_range(10000, 20000);
+ 
+ 	err = mv88e6xxx_detect(chip);
+ 	if (err)
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+index 43fdd111235a6..ca7372369b3e6 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+@@ -1312,7 +1312,7 @@ static enum xgbe_mode xgbe_phy_status_aneg(struct xgbe_prv_data *pdata)
+ 	return pdata->phy_if.phy_impl.an_outcome(pdata);
+ }
+ 
+-static void xgbe_phy_status_result(struct xgbe_prv_data *pdata)
++static bool xgbe_phy_status_result(struct xgbe_prv_data *pdata)
+ {
+ 	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
+ 	enum xgbe_mode mode;
+@@ -1347,8 +1347,13 @@ static void xgbe_phy_status_result(struct xgbe_prv_data *pdata)
+ 
+ 	pdata->phy.duplex = DUPLEX_FULL;
+ 
+-	if (xgbe_set_mode(pdata, mode) && pdata->an_again)
++	if (!xgbe_set_mode(pdata, mode))
++		return false;
++
++	if (pdata->an_again)
+ 		xgbe_phy_reconfig_aneg(pdata);
++
++	return true;
+ }
+ 
+ static void xgbe_phy_status(struct xgbe_prv_data *pdata)
+@@ -1378,7 +1383,8 @@ static void xgbe_phy_status(struct xgbe_prv_data *pdata)
+ 			return;
+ 		}
+ 
+-		xgbe_phy_status_result(pdata);
++		if (xgbe_phy_status_result(pdata))
++			return;
+ 
+ 		if (test_bit(XGBE_LINK_INIT, &pdata->dev_state))
+ 			clear_bit(XGBE_LINK_INIT, &pdata->dev_state);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index 0a011a41c039e..5273644fb2bf9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -483,7 +483,7 @@ static void poll_trace(struct mlx5_fw_tracer *tracer,
+ 				(u64)timestamp_low;
+ 		break;
+ 	default:
+-		if (tracer_event->event_id >= tracer->str_db.first_string_trace ||
++		if (tracer_event->event_id >= tracer->str_db.first_string_trace &&
+ 		    tracer_event->event_id <= tracer->str_db.first_string_trace +
+ 					      tracer->str_db.num_string_trace) {
+ 			tracer_event->type = TRACER_EVENT_TYPE_STRING;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index da4ca0f67e9ce..22907f6364f54 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -783,7 +783,6 @@ static int mlx5_pci_init(struct mlx5_core_dev *dev, struct pci_dev *pdev,
+ 	}
+ 
+ 	mlx5_pci_vsc_init(dev);
+-	dev->caps.embedded_cpu = mlx5_read_embedded_cpu(dev);
+ 	return 0;
+ 
+ err_clr_master:
+@@ -978,6 +977,7 @@ static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot)
+ 		goto err_cmd_cleanup;
+ 	}
+ 
++	dev->caps.embedded_cpu = mlx5_read_embedded_cpu(dev);
+ 	mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_UP);
+ 
+ 	err = mlx5_core_enable_hca(dev, 0);
+diff --git a/drivers/net/ethernet/sun/cassini.c b/drivers/net/ethernet/sun/cassini.c
+index d245f6e21e8ca..b929c6f6ce514 100644
+--- a/drivers/net/ethernet/sun/cassini.c
++++ b/drivers/net/ethernet/sun/cassini.c
+@@ -1325,7 +1325,7 @@ static void cas_init_rx_dma(struct cas *cp)
+ 	writel(val, cp->regs + REG_RX_PAGE_SIZE);
+ 
+ 	/* enable the header parser if desired */
+-	if (CAS_HP_FIRMWARE == cas_prog_null)
++	if (&CAS_HP_FIRMWARE[0] == &cas_prog_null[0])
+ 		return;
+ 
+ 	val = CAS_BASE(HP_CFG_NUM_CPU, CAS_NCPUS > 63 ? 0 : CAS_NCPUS);
+@@ -3793,7 +3793,7 @@ static void cas_reset(struct cas *cp, int blkflag)
+ 
+ 	/* program header parser */
+ 	if ((cp->cas_flags & CAS_FLAG_TARGET_ABORT) ||
+-	    (CAS_HP_ALT_FIRMWARE == cas_prog_null)) {
++	    (&CAS_HP_ALT_FIRMWARE[0] == &cas_prog_null[0])) {
+ 		cas_load_firmware(cp, CAS_HP_FIRMWARE);
+ 	} else {
+ 		cas_load_firmware(cp, CAS_HP_ALT_FIRMWARE);
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 070910567c44e..53f1cd0bfaf42 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1260,7 +1260,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x2001, 0x7e3d, 4)},	/* D-Link DWM-222 A2 */
+ 	{QMI_FIXED_INTF(0x2020, 0x2031, 4)},	/* Olicard 600 */
+ 	{QMI_FIXED_INTF(0x2020, 0x2033, 4)},	/* BroadMobi BM806U */
+-	{QMI_FIXED_INTF(0x2020, 0x2060, 4)},	/* BroadMobi BM818 */
++	{QMI_QUIRK_SET_DTR(0x2020, 0x2060, 4)},	/* BroadMobi BM818 */
+ 	{QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)},    /* Sierra Wireless MC7700 */
+ 	{QMI_FIXED_INTF(0x114f, 0x68a2, 8)},    /* Sierra Wireless MC7750 */
+ 	{QMI_FIXED_INTF(0x1199, 0x68a2, 8)},	/* Sierra Wireless MC7710 in QMI mode */
+diff --git a/drivers/net/wireless/ath/ath6kl/htc.h b/drivers/net/wireless/ath/ath6kl/htc.h
+index 112d8a9b8d431..d3534a29c4f05 100644
+--- a/drivers/net/wireless/ath/ath6kl/htc.h
++++ b/drivers/net/wireless/ath/ath6kl/htc.h
+@@ -153,12 +153,19 @@
+  * implementations.
+  */
+ struct htc_frame_hdr {
+-	u8 eid;
+-	u8 flags;
+-
+-	/* length of data (including trailer) that follows the header */
+-	__le16 payld_len;
+-
++	struct_group_tagged(htc_frame_look_ahead, header,
++		union {
++			struct {
++				u8 eid;
++				u8 flags;
++
++				/* length of data (including trailer) that follows the header */
++				__le16 payld_len;
++
++			};
++			u32 word;
++		};
++	);
+ 	/* end of 4-byte lookahead */
+ 
+ 	u8 ctrl[2];
+diff --git a/drivers/net/wireless/ath/ath6kl/htc_mbox.c b/drivers/net/wireless/ath/ath6kl/htc_mbox.c
+index 998947ef63b6e..e3874421c4c0c 100644
+--- a/drivers/net/wireless/ath/ath6kl/htc_mbox.c
++++ b/drivers/net/wireless/ath/ath6kl/htc_mbox.c
+@@ -2260,19 +2260,16 @@ int ath6kl_htc_rxmsg_pending_handler(struct htc_target *target,
+ static struct htc_packet *htc_wait_for_ctrl_msg(struct htc_target *target)
+ {
+ 	struct htc_packet *packet = NULL;
+-	struct htc_frame_hdr *htc_hdr;
+-	u32 look_ahead;
++	struct htc_frame_look_ahead look_ahead;
+ 
+-	if (ath6kl_hif_poll_mboxmsg_rx(target->dev, &look_ahead,
++	if (ath6kl_hif_poll_mboxmsg_rx(target->dev, &look_ahead.word,
+ 				       HTC_TARGET_RESPONSE_TIMEOUT))
+ 		return NULL;
+ 
+ 	ath6kl_dbg(ATH6KL_DBG_HTC,
+-		   "htc rx wait ctrl look_ahead 0x%X\n", look_ahead);
+-
+-	htc_hdr = (struct htc_frame_hdr *)&look_ahead;
++		   "htc rx wait ctrl look_ahead 0x%X\n", look_ahead.word);
+ 
+-	if (htc_hdr->eid != ENDPOINT_0)
++	if (look_ahead.eid != ENDPOINT_0)
+ 		return NULL;
+ 
+ 	packet = htc_get_control_buf(target, false);
+@@ -2281,8 +2278,8 @@ static struct htc_packet *htc_wait_for_ctrl_msg(struct htc_target *target)
+ 		return NULL;
+ 
+ 	packet->info.rx.rx_flags = 0;
+-	packet->info.rx.exp_hdr = look_ahead;
+-	packet->act_len = le16_to_cpu(htc_hdr->payld_len) + HTC_HDR_LENGTH;
++	packet->info.rx.exp_hdr = look_ahead.word;
++	packet->act_len = le16_to_cpu(look_ahead.payld_len) + HTC_HDR_LENGTH;
+ 
+ 	if (packet->act_len > packet->buf_len)
+ 		goto fail_ctrl_rx;
+diff --git a/drivers/net/wireless/broadcom/b43/b43.h b/drivers/net/wireless/broadcom/b43/b43.h
+index 9fc7c088a539e..67b4bac048e58 100644
+--- a/drivers/net/wireless/broadcom/b43/b43.h
++++ b/drivers/net/wireless/broadcom/b43/b43.h
+@@ -651,7 +651,7 @@ struct b43_iv {
+ 	union {
+ 		__be16 d16;
+ 		__be32 d32;
+-	} data __packed;
++	} __packed data;
+ } __packed;
+ 
+ 
+diff --git a/drivers/net/wireless/broadcom/b43legacy/b43legacy.h b/drivers/net/wireless/broadcom/b43legacy/b43legacy.h
+index 6b0cec467938f..f49365d14619f 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/b43legacy.h
++++ b/drivers/net/wireless/broadcom/b43legacy/b43legacy.h
+@@ -379,7 +379,7 @@ struct b43legacy_iv {
+ 	union {
+ 		__be16 d16;
+ 		__be32 d32;
+-	} data __packed;
++	} __packed data;
+ } __packed;
+ 
+ #define B43legacy_PHYMODE(phytype)	(1 << (phytype))
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+index 0ed4d67308d78..fe1e4c4c17c42 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+@@ -1346,6 +1346,7 @@ struct rtl8xxxu_priv {
+ 	u32 rege9c;
+ 	u32 regeb4;
+ 	u32 regebc;
++	u32 regrcr;
+ 	int next_mbox;
+ 	int nr_out_eps;
+ 
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index deef1c09de319..004778faf3d07 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -4045,6 +4045,7 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
+ 		RCR_ACCEPT_MGMT_FRAME | RCR_HTC_LOC_CTRL |
+ 		RCR_APPEND_PHYSTAT | RCR_APPEND_ICV | RCR_APPEND_MIC;
+ 	rtl8xxxu_write32(priv, REG_RCR, val32);
++	priv->regrcr = val32;
+ 
+ 	/*
+ 	 * Accept all multicast
+@@ -5999,7 +6000,7 @@ static void rtl8xxxu_configure_filter(struct ieee80211_hw *hw,
+ 				      unsigned int *total_flags, u64 multicast)
+ {
+ 	struct rtl8xxxu_priv *priv = hw->priv;
+-	u32 rcr = rtl8xxxu_read32(priv, REG_RCR);
++	u32 rcr = priv->regrcr;
+ 
+ 	dev_dbg(&priv->udev->dev, "%s: changed_flags %08x, total_flags %08x\n",
+ 		__func__, changed_flags, *total_flags);
+@@ -6045,6 +6046,7 @@ static void rtl8xxxu_configure_filter(struct ieee80211_hw *hw,
+ 	 */
+ 
+ 	rtl8xxxu_write32(priv, REG_RCR, rcr);
++	priv->regrcr = rcr;
+ 
+ 	*total_flags &= (FIF_ALLMULTI | FIF_FCSFAIL | FIF_BCN_PRBRESP_PROMISC |
+ 			 FIF_CONTROL | FIF_OTHER_BSS | FIF_PSPOLL |
+diff --git a/drivers/s390/crypto/pkey_api.c b/drivers/s390/crypto/pkey_api.c
+index dd84995049b91..870e00effe439 100644
+--- a/drivers/s390/crypto/pkey_api.c
++++ b/drivers/s390/crypto/pkey_api.c
+@@ -1271,6 +1271,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
+ 			return PTR_ERR(kkey);
+ 		rc = pkey_keyblob2pkey(kkey, ktp.keylen, &ktp.protkey);
+ 		DEBUG_DBG("%s pkey_keyblob2pkey()=%d\n", __func__, rc);
++		memzero_explicit(kkey, ktp.keylen);
+ 		kfree(kkey);
+ 		if (rc)
+ 			break;
+@@ -1404,6 +1405,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
+ 					kkey, ktp.keylen, &ktp.protkey);
+ 		DEBUG_DBG("%s pkey_keyblob2pkey2()=%d\n", __func__, rc);
+ 		kfree(apqns);
++		memzero_explicit(kkey, ktp.keylen);
+ 		kfree(kkey);
+ 		if (rc)
+ 			break;
+@@ -1530,6 +1532,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
+ 					protkey, &protkeylen);
+ 		DEBUG_DBG("%s pkey_keyblob2pkey3()=%d\n", __func__, rc);
+ 		kfree(apqns);
++		memzero_explicit(kkey, ktp.keylen);
+ 		kfree(kkey);
+ 		if (rc) {
+ 			kfree(protkey);
+diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
+index 701b61ec76eed..6524e1fe54d2e 100644
+--- a/drivers/scsi/Kconfig
++++ b/drivers/scsi/Kconfig
+@@ -444,7 +444,7 @@ config SCSI_MVUMI
+ 
+ config SCSI_DPT_I2O
+ 	tristate "Adaptec I2O RAID support "
+-	depends on SCSI && PCI && VIRT_TO_BUS
++	depends on SCSI && PCI
+ 	help
+ 	  This driver supports all of Adaptec's I2O based RAID controllers as 
+ 	  well as the DPT SmartRaid V cards.  This is an Adaptec maintained
+diff --git a/drivers/scsi/dpt_i2o.c b/drivers/scsi/dpt_i2o.c
+index 4251212acbbe9..43ec5657a9353 100644
+--- a/drivers/scsi/dpt_i2o.c
++++ b/drivers/scsi/dpt_i2o.c
+@@ -56,7 +56,7 @@ MODULE_DESCRIPTION("Adaptec I2O RAID Driver");
+ #include <linux/mutex.h>
+ 
+ #include <asm/processor.h>	/* for boot_cpu_data */
+-#include <asm/io.h>		/* for virt_to_bus, etc. */
++#include <asm/io.h>
+ 
+ #include <scsi/scsi.h>
+ #include <scsi/scsi_cmnd.h>
+@@ -582,51 +582,6 @@ static int adpt_show_info(struct seq_file *m, struct Scsi_Host *host)
+ 	return 0;
+ }
+ 
+-/*
+- *	Turn a pointer to ioctl reply data into an u32 'context'
+- */
+-static u32 adpt_ioctl_to_context(adpt_hba * pHba, void *reply)
+-{
+-#if BITS_PER_LONG == 32
+-	return (u32)(unsigned long)reply;
+-#else
+-	ulong flags = 0;
+-	u32 nr, i;
+-
+-	spin_lock_irqsave(pHba->host->host_lock, flags);
+-	nr = ARRAY_SIZE(pHba->ioctl_reply_context);
+-	for (i = 0; i < nr; i++) {
+-		if (pHba->ioctl_reply_context[i] == NULL) {
+-			pHba->ioctl_reply_context[i] = reply;
+-			break;
+-		}
+-	}
+-	spin_unlock_irqrestore(pHba->host->host_lock, flags);
+-	if (i >= nr) {
+-		printk(KERN_WARNING"%s: Too many outstanding "
+-				"ioctl commands\n", pHba->name);
+-		return (u32)-1;
+-	}
+-
+-	return i;
+-#endif
+-}
+-
+-/*
+- *	Go from an u32 'context' to a pointer to ioctl reply data.
+- */
+-static void *adpt_ioctl_from_context(adpt_hba *pHba, u32 context)
+-{
+-#if BITS_PER_LONG == 32
+-	return (void *)(unsigned long)context;
+-#else
+-	void *p = pHba->ioctl_reply_context[context];
+-	pHba->ioctl_reply_context[context] = NULL;
+-
+-	return p;
+-#endif
+-}
+-
+ /*===========================================================================
+  * Error Handling routines
+  *===========================================================================
+@@ -1648,208 +1603,6 @@ static int adpt_close(struct inode *inode, struct file *file)
+ 	return 0;
+ }
+ 
+-
+-static int adpt_i2o_passthru(adpt_hba* pHba, u32 __user *arg)
+-{
+-	u32 msg[MAX_MESSAGE_SIZE];
+-	u32* reply = NULL;
+-	u32 size = 0;
+-	u32 reply_size = 0;
+-	u32 __user *user_msg = arg;
+-	u32 __user * user_reply = NULL;
+-	void **sg_list = NULL;
+-	u32 sg_offset = 0;
+-	u32 sg_count = 0;
+-	int sg_index = 0;
+-	u32 i = 0;
+-	u32 rcode = 0;
+-	void *p = NULL;
+-	dma_addr_t addr;
+-	ulong flags = 0;
+-
+-	memset(&msg, 0, MAX_MESSAGE_SIZE*4);
+-	// get user msg size in u32s 
+-	if(get_user(size, &user_msg[0])){
+-		return -EFAULT;
+-	}
+-	size = size>>16;
+-
+-	user_reply = &user_msg[size];
+-	if(size > MAX_MESSAGE_SIZE){
+-		return -EFAULT;
+-	}
+-	size *= 4; // Convert to bytes
+-
+-	/* Copy in the user's I2O command */
+-	if(copy_from_user(msg, user_msg, size)) {
+-		return -EFAULT;
+-	}
+-	get_user(reply_size, &user_reply[0]);
+-	reply_size = reply_size>>16;
+-	if(reply_size > REPLY_FRAME_SIZE){
+-		reply_size = REPLY_FRAME_SIZE;
+-	}
+-	reply_size *= 4;
+-	reply = kzalloc(REPLY_FRAME_SIZE*4, GFP_KERNEL);
+-	if(reply == NULL) {
+-		printk(KERN_WARNING"%s: Could not allocate reply buffer\n",pHba->name);
+-		return -ENOMEM;
+-	}
+-	sg_offset = (msg[0]>>4)&0xf;
+-	msg[2] = 0x40000000; // IOCTL context
+-	msg[3] = adpt_ioctl_to_context(pHba, reply);
+-	if (msg[3] == (u32)-1) {
+-		rcode = -EBUSY;
+-		goto free;
+-	}
+-
+-	sg_list = kcalloc(pHba->sg_tablesize, sizeof(*sg_list), GFP_KERNEL);
+-	if (!sg_list) {
+-		rcode = -ENOMEM;
+-		goto free;
+-	}
+-	if(sg_offset) {
+-		// TODO add 64 bit API
+-		struct sg_simple_element *sg =  (struct sg_simple_element*) (msg+sg_offset);
+-		sg_count = (size - sg_offset*4) / sizeof(struct sg_simple_element);
+-		if (sg_count > pHba->sg_tablesize){
+-			printk(KERN_DEBUG"%s:IOCTL SG List too large (%u)\n", pHba->name,sg_count);
+-			rcode = -EINVAL;
+-			goto free;
+-		}
+-
+-		for(i = 0; i < sg_count; i++) {
+-			int sg_size;
+-
+-			if (!(sg[i].flag_count & 0x10000000 /*I2O_SGL_FLAGS_SIMPLE_ADDRESS_ELEMENT*/)) {
+-				printk(KERN_DEBUG"%s:Bad SG element %d - not simple (%x)\n",pHba->name,i,  sg[i].flag_count);
+-				rcode = -EINVAL;
+-				goto cleanup;
+-			}
+-			sg_size = sg[i].flag_count & 0xffffff;      
+-			/* Allocate memory for the transfer */
+-			p = dma_alloc_coherent(&pHba->pDev->dev, sg_size, &addr, GFP_KERNEL);
+-			if(!p) {
+-				printk(KERN_DEBUG"%s: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
+-						pHba->name,sg_size,i,sg_count);
+-				rcode = -ENOMEM;
+-				goto cleanup;
+-			}
+-			sg_list[sg_index++] = p; // sglist indexed with input frame, not our internal frame.
+-			/* Copy in the user's SG buffer if necessary */
+-			if(sg[i].flag_count & 0x04000000 /*I2O_SGL_FLAGS_DIR*/) {
+-				// sg_simple_element API is 32 bit
+-				if (copy_from_user(p,(void __user *)(ulong)sg[i].addr_bus, sg_size)) {
+-					printk(KERN_DEBUG"%s: Could not copy SG buf %d FROM user\n",pHba->name,i);
+-					rcode = -EFAULT;
+-					goto cleanup;
+-				}
+-			}
+-			/* sg_simple_element API is 32 bit, but addr < 4GB */
+-			sg[i].addr_bus = addr;
+-		}
+-	}
+-
+-	do {
+-		/*
+-		 * Stop any new commands from enterring the
+-		 * controller while processing the ioctl
+-		 */
+-		if (pHba->host) {
+-			scsi_block_requests(pHba->host);
+-			spin_lock_irqsave(pHba->host->host_lock, flags);
+-		}
+-		rcode = adpt_i2o_post_wait(pHba, msg, size, FOREVER);
+-		if (rcode != 0)
+-			printk("adpt_i2o_passthru: post wait failed %d %p\n",
+-					rcode, reply);
+-		if (pHba->host) {
+-			spin_unlock_irqrestore(pHba->host->host_lock, flags);
+-			scsi_unblock_requests(pHba->host);
+-		}
+-	} while (rcode == -ETIMEDOUT);
+-
+-	if(rcode){
+-		goto cleanup;
+-	}
+-
+-	if(sg_offset) {
+-	/* Copy back the Scatter Gather buffers back to user space */
+-		u32 j;
+-		// TODO add 64 bit API
+-		struct sg_simple_element* sg;
+-		int sg_size;
+-
+-		// re-acquire the original message to handle correctly the sg copy operation
+-		memset(&msg, 0, MAX_MESSAGE_SIZE*4); 
+-		// get user msg size in u32s 
+-		if(get_user(size, &user_msg[0])){
+-			rcode = -EFAULT; 
+-			goto cleanup; 
+-		}
+-		size = size>>16;
+-		size *= 4;
+-		if (size > MAX_MESSAGE_SIZE) {
+-			rcode = -EINVAL;
+-			goto cleanup;
+-		}
+-		/* Copy in the user's I2O command */
+-		if (copy_from_user (msg, user_msg, size)) {
+-			rcode = -EFAULT;
+-			goto cleanup;
+-		}
+-		sg_count = (size - sg_offset*4) / sizeof(struct sg_simple_element);
+-
+-		// TODO add 64 bit API
+-		sg 	 = (struct sg_simple_element*)(msg + sg_offset);
+-		for (j = 0; j < sg_count; j++) {
+-			/* Copy out the SG list to user's buffer if necessary */
+-			if(! (sg[j].flag_count & 0x4000000 /*I2O_SGL_FLAGS_DIR*/)) {
+-				sg_size = sg[j].flag_count & 0xffffff; 
+-				// sg_simple_element API is 32 bit
+-				if (copy_to_user((void __user *)(ulong)sg[j].addr_bus,sg_list[j], sg_size)) {
+-					printk(KERN_WARNING"%s: Could not copy %p TO user %x\n",pHba->name, sg_list[j], sg[j].addr_bus);
+-					rcode = -EFAULT;
+-					goto cleanup;
+-				}
+-			}
+-		}
+-	} 
+-
+-	/* Copy back the reply to user space */
+-	if (reply_size) {
+-		// we wrote our own values for context - now restore the user supplied ones
+-		if(copy_from_user(reply+2, user_msg+2, sizeof(u32)*2)) {
+-			printk(KERN_WARNING"%s: Could not copy message context FROM user\n",pHba->name);
+-			rcode = -EFAULT;
+-		}
+-		if(copy_to_user(user_reply, reply, reply_size)) {
+-			printk(KERN_WARNING"%s: Could not copy reply TO user\n",pHba->name);
+-			rcode = -EFAULT;
+-		}
+-	}
+-
+-
+-cleanup:
+-	if (rcode != -ETIME && rcode != -EINTR) {
+-		struct sg_simple_element *sg =
+-				(struct sg_simple_element*) (msg +sg_offset);
+-		while(sg_index) {
+-			if(sg_list[--sg_index]) {
+-				dma_free_coherent(&pHba->pDev->dev,
+-					sg[sg_index].flag_count & 0xffffff,
+-					sg_list[sg_index],
+-					sg[sg_index].addr_bus);
+-			}
+-		}
+-	}
+-
+-free:
+-	kfree(sg_list);
+-	kfree(reply);
+-	return rcode;
+-}
+-
+ #if defined __ia64__ 
+ static void adpt_ia64_info(sysInfo_S* si)
+ {
+@@ -1976,8 +1729,6 @@ static int adpt_ioctl(struct inode *inode, struct file *file, uint cmd, ulong ar
+ 			return -EFAULT;
+ 		}
+ 		break;
+-	case I2OUSRCMD:
+-		return adpt_i2o_passthru(pHba, argp);
+ 
+ 	case DPT_CTRLINFO:{
+ 		drvrHBAinfo_S HbaInfo;
+@@ -2114,7 +1865,7 @@ static irqreturn_t adpt_isr(int irq, void *dev_id)
+ 		} else {
+ 			/* Ick, we should *never* be here */
+ 			printk(KERN_ERR "dpti: reply frame not from pool\n");
+-			reply = (u8 *)bus_to_virt(m);
++			continue;
+ 		}
+ 
+ 		if (readl(reply) & MSG_FAIL) {
+@@ -2134,13 +1885,6 @@ static irqreturn_t adpt_isr(int irq, void *dev_id)
+ 			adpt_send_nop(pHba, old_m);
+ 		} 
+ 		context = readl(reply+8);
+-		if(context & 0x40000000){ // IOCTL
+-			void *p = adpt_ioctl_from_context(pHba, readl(reply+12));
+-			if( p != NULL) {
+-				memcpy_fromio(p, reply, REPLY_FRAME_SIZE * 4);
+-			}
+-			// All IOCTLs will also be post wait
+-		}
+ 		if(context & 0x80000000){ // Post wait message
+ 			status = readl(reply+16);
+ 			if(status  >> 24){
+@@ -2148,16 +1892,14 @@ static irqreturn_t adpt_isr(int irq, void *dev_id)
+ 			} else {
+ 				status = I2O_POST_WAIT_OK;
+ 			}
+-			if(!(context & 0x40000000)) {
+-				/*
+-				 * The request tag is one less than the command tag
+-				 * as the firmware might treat a 0 tag as invalid
+-				 */
+-				cmd = scsi_host_find_tag(pHba->host,
+-							 readl(reply + 12) - 1);
+-				if(cmd != NULL) {
+-					printk(KERN_WARNING"%s: Apparent SCSI cmd in Post Wait Context - cmd=%p context=%x\n", pHba->name, cmd, context);
+-				}
++			/*
++			 * The request tag is one less than the command tag
++			 * as the firmware might treat a 0 tag as invalid
++			 */
++			cmd = scsi_host_find_tag(pHba->host,
++						 readl(reply + 12) - 1);
++			if(cmd != NULL) {
++				printk(KERN_WARNING"%s: Apparent SCSI cmd in Post Wait Context - cmd=%p context=%x\n", pHba->name, cmd, context);
+ 			}
+ 			adpt_i2o_post_wait_complete(context, status);
+ 		} else { // SCSI message
+diff --git a/drivers/scsi/dpti.h b/drivers/scsi/dpti.h
+index 8a079e8d7f65f..0565533e8095a 100644
+--- a/drivers/scsi/dpti.h
++++ b/drivers/scsi/dpti.h
+@@ -248,7 +248,6 @@ typedef struct _adpt_hba {
+ 	void __iomem *FwDebugBLEDflag_P;// Virtual Addr Of FW Debug BLED
+ 	void __iomem *FwDebugBLEDvalue_P;// Virtual Addr Of FW Debug BLED
+ 	u32 FwDebugFlags;
+-	u32 *ioctl_reply_context[4];
+ } adpt_hba;
+ 
+ struct sg_simple_element {
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 6f3d29d16d1f4..99b90031500b2 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1490,6 +1490,7 @@ static int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
+ 		 */
+ 		SCSI_LOG_MLQUEUE(3, scmd_printk(KERN_INFO, cmd,
+ 			"queuecommand : device blocked\n"));
++		atomic_dec(&cmd->device->iorequest_cnt);
+ 		return SCSI_MLQUEUE_DEVICE_BUSY;
+ 	}
+ 
+@@ -1522,6 +1523,7 @@ static int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
+ 	trace_scsi_dispatch_cmd_start(cmd);
+ 	rtn = host->hostt->queuecommand(host, cmd);
+ 	if (rtn) {
++		atomic_dec(&cmd->device->iorequest_cnt);
+ 		trace_scsi_dispatch_cmd_error(cmd, rtn);
+ 		if (rtn != SCSI_MLQUEUE_DEVICE_BUSY &&
+ 		    rtn != SCSI_MLQUEUE_TARGET_BUSY)
+diff --git a/drivers/scsi/stex.c b/drivers/scsi/stex.c
+index a3bce11ed4b4b..fa607f2182500 100644
+--- a/drivers/scsi/stex.c
++++ b/drivers/scsi/stex.c
+@@ -109,7 +109,9 @@ enum {
+ 	TASK_ATTRIBUTE_HEADOFQUEUE		= 0x1,
+ 	TASK_ATTRIBUTE_ORDERED			= 0x2,
+ 	TASK_ATTRIBUTE_ACA			= 0x4,
++};
+ 
++enum {
+ 	SS_STS_NORMAL				= 0x80000000,
+ 	SS_STS_DONE				= 0x40000000,
+ 	SS_STS_HANDSHAKE			= 0x20000000,
+@@ -121,7 +123,9 @@ enum {
+ 	SS_I2H_REQUEST_RESET			= 0x2000,
+ 
+ 	SS_MU_OPERATIONAL			= 0x80000000,
++};
+ 
++enum {
+ 	STEX_CDB_LENGTH				= 16,
+ 	STATUS_VAR_LEN				= 128,
+ 
+diff --git a/drivers/tty/serial/8250/8250_tegra.c b/drivers/tty/serial/8250/8250_tegra.c
+index c0ffad1572c6c..b6694ddfc4eaf 100644
+--- a/drivers/tty/serial/8250/8250_tegra.c
++++ b/drivers/tty/serial/8250/8250_tegra.c
+@@ -111,13 +111,15 @@ static int tegra_uart_probe(struct platform_device *pdev)
+ 
+ 	ret = serial8250_register_8250_port(&port8250);
+ 	if (ret < 0)
+-		goto err_clkdisable;
++		goto err_ctrl_assert;
+ 
+ 	platform_set_drvdata(pdev, uart);
+ 	uart->line = ret;
+ 
+ 	return 0;
+ 
++err_ctrl_assert:
++	reset_control_assert(uart->rst);
+ err_clkdisable:
+ 	clk_disable_unprepare(uart->clk);
+ 
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index a2efa81471f30..ca22a11258217 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1455,34 +1455,36 @@ static void lpuart_break_ctl(struct uart_port *port, int break_state)
+ 
+ static void lpuart32_break_ctl(struct uart_port *port, int break_state)
+ {
+-	unsigned long temp, modem;
+-	struct tty_struct *tty;
+-	unsigned int cflag = 0;
+-
+-	tty = tty_port_tty_get(&port->state->port);
+-	if (tty) {
+-		cflag = tty->termios.c_cflag;
+-		tty_kref_put(tty);
+-	}
++	unsigned long temp;
+ 
+-	temp = lpuart32_read(port, UARTCTRL) & ~UARTCTRL_SBK;
+-	modem = lpuart32_read(port, UARTMODIR);
++	temp = lpuart32_read(port, UARTCTRL);
+ 
++	/*
++	 * LPUART IP now has two known bugs, one is CTS has higher priority than the
++	 * break signal, which causes the break signal sending through UARTCTRL_SBK
++	 * may impacted by the CTS input if the HW flow control is enabled. It
++	 * exists on all platforms we support in this driver.
++	 * Another bug is i.MX8QM LPUART may have an additional break character
++	 * being sent after SBK was cleared.
++	 * To avoid above two bugs, we use Transmit Data Inversion function to send
++	 * the break signal instead of UARTCTRL_SBK.
++	 */
+ 	if (break_state != 0) {
+-		temp |= UARTCTRL_SBK;
+ 		/*
+-		 * LPUART CTS has higher priority than SBK, need to disable CTS before
+-		 * asserting SBK to avoid any interference if flow control is enabled.
++		 * Disable the transmitter to prevent any data from being sent out
++		 * during break, then invert the TX line to send break.
+ 		 */
+-		if (cflag & CRTSCTS && modem & UARTMODIR_TXCTSE)
+-			lpuart32_write(port, modem & ~UARTMODIR_TXCTSE, UARTMODIR);
++		temp &= ~UARTCTRL_TE;
++		lpuart32_write(port, temp, UARTCTRL);
++		temp |= UARTCTRL_TXINV;
++		lpuart32_write(port, temp, UARTCTRL);
+ 	} else {
+-		/* Re-enable the CTS when break off. */
+-		if (cflag & CRTSCTS && !(modem & UARTMODIR_TXCTSE))
+-			lpuart32_write(port, modem | UARTMODIR_TXCTSE, UARTMODIR);
++		/* Disable the TXINV to turn off break and re-enable transmitter. */
++		temp &= ~UARTCTRL_TXINV;
++		lpuart32_write(port, temp, UARTCTRL);
++		temp |= UARTCTRL_TE;
++		lpuart32_write(port, temp, UARTCTRL);
+ 	}
+-
+-	lpuart32_write(port, temp, UARTCTRL);
+ }
+ 
+ static void lpuart_setup_watermark(struct lpuart_port *sport)
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 8c48c9f801be2..b17acab77fe26 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -3609,6 +3609,7 @@ static void ffs_func_unbind(struct usb_configuration *c,
+ 	/* Drain any pending AIO completions */
+ 	drain_workqueue(ffs->io_completion_wq);
+ 
++	ffs_event_add(ffs, FUNCTIONFS_UNBIND);
+ 	if (!--opts->refcnt)
+ 		functionfs_unbind(ffs);
+ 
+@@ -3633,7 +3634,6 @@ static void ffs_func_unbind(struct usb_configuration *c,
+ 	func->function.ssp_descriptors = NULL;
+ 	func->interfaces_nums = NULL;
+ 
+-	ffs_event_add(ffs, FUNCTIONFS_UNBIND);
+ }
+ 
+ static struct usb_function *ffs_alloc(struct usb_function_instance *fi)
+diff --git a/drivers/video/fbdev/core/bitblit.c b/drivers/video/fbdev/core/bitblit.c
+index 9725ecd1255ba..8e095b0982db4 100644
+--- a/drivers/video/fbdev/core/bitblit.c
++++ b/drivers/video/fbdev/core/bitblit.c
+@@ -247,6 +247,9 @@ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ 
+ 	cursor.set = 0;
+ 
++	if (!vc->vc_font.data)
++		return;
++
+  	c = scr_readw((u16 *) vc->vc_pos);
+ 	attribute = get_attribute(info, c);
+ 	src = vc->vc_font.data + ((c & charmask) * (w * vc->vc_font.height));
+diff --git a/drivers/video/fbdev/core/modedb.c b/drivers/video/fbdev/core/modedb.c
+index 6473e0dfe1464..e78ec7f728463 100644
+--- a/drivers/video/fbdev/core/modedb.c
++++ b/drivers/video/fbdev/core/modedb.c
+@@ -257,6 +257,11 @@ static const struct fb_videomode modedb[] = {
+ 	{ NULL, 72, 480, 300, 33386, 40, 24, 11, 19, 80, 3, 0,
+ 		FB_VMODE_DOUBLE },
+ 
++	/* 1920x1080 @ 60 Hz, 67.3 kHz hsync */
++	{ NULL, 60, 1920, 1080, 6734, 148, 88, 36, 4, 44, 5, 0,
++		FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
++		FB_VMODE_NONINTERLACED },
++
+ 	/* 1920x1200 @ 60 Hz, 74.5 Khz hsync */
+ 	{ NULL, 60, 1920, 1200, 5177, 128, 336, 1, 38, 208, 3,
+ 		FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
+diff --git a/drivers/video/fbdev/stifb.c b/drivers/video/fbdev/stifb.c
+index ef8a4c5fc6875..63f51783352dc 100644
+--- a/drivers/video/fbdev/stifb.c
++++ b/drivers/video/fbdev/stifb.c
+@@ -1413,6 +1413,7 @@ out_err1:
+ 	iounmap(info->screen_base);
+ out_err0:
+ 	kfree(fb);
++	sti->info = NULL;
+ 	return -ENXIO;
+ }
+ 
+diff --git a/drivers/watchdog/menz69_wdt.c b/drivers/watchdog/menz69_wdt.c
+index 8973f98bc6a56..bca0938f3429f 100644
+--- a/drivers/watchdog/menz69_wdt.c
++++ b/drivers/watchdog/menz69_wdt.c
+@@ -98,14 +98,6 @@ static const struct watchdog_ops men_z069_ops = {
+ 	.set_timeout = men_z069_wdt_set_timeout,
+ };
+ 
+-static struct watchdog_device men_z069_wdt = {
+-	.info = &men_z069_info,
+-	.ops = &men_z069_ops,
+-	.timeout = MEN_Z069_DEFAULT_TIMEOUT,
+-	.min_timeout = 1,
+-	.max_timeout = MEN_Z069_WDT_COUNTER_MAX / MEN_Z069_TIMER_FREQ,
+-};
+-
+ static int men_z069_probe(struct mcb_device *dev,
+ 			  const struct mcb_device_id *id)
+ {
+@@ -125,15 +117,19 @@ static int men_z069_probe(struct mcb_device *dev,
+ 		goto release_mem;
+ 
+ 	drv->mem = mem;
++	drv->wdt.info = &men_z069_info;
++	drv->wdt.ops = &men_z069_ops;
++	drv->wdt.timeout = MEN_Z069_DEFAULT_TIMEOUT;
++	drv->wdt.min_timeout = 1;
++	drv->wdt.max_timeout = MEN_Z069_WDT_COUNTER_MAX / MEN_Z069_TIMER_FREQ;
+ 
+-	drv->wdt = men_z069_wdt;
+ 	watchdog_init_timeout(&drv->wdt, 0, &dev->dev);
+ 	watchdog_set_nowayout(&drv->wdt, nowayout);
+ 	watchdog_set_drvdata(&drv->wdt, drv);
+ 	drv->wdt.parent = &dev->dev;
+ 	mcb_set_drvdata(dev, drv);
+ 
+-	return watchdog_register_device(&men_z069_wdt);
++	return watchdog_register_device(&drv->wdt);
+ 
+ release_mem:
+ 	mcb_release_mem(mem);
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 3e55245e54e7c..41a7ace9998e4 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -3872,6 +3872,7 @@ static int push_leaf_right(struct btrfs_trans_handle *trans, struct btrfs_root
+ 
+ 	if (check_sibling_keys(left, right)) {
+ 		ret = -EUCLEAN;
++		btrfs_abort_transaction(trans, ret);
+ 		btrfs_tree_unlock(right);
+ 		free_extent_buffer(right);
+ 		return ret;
+@@ -4116,6 +4117,7 @@ static int push_leaf_left(struct btrfs_trans_handle *trans, struct btrfs_root
+ 
+ 	if (check_sibling_keys(left, right)) {
+ 		ret = -EUCLEAN;
++		btrfs_abort_transaction(trans, ret);
+ 		goto out;
+ 	}
+ 	return __push_leaf_left(path, min_data_size,
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 60b7a227624d5..5a114cad988a6 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -220,7 +220,7 @@ static void csum_tree_block(struct extent_buffer *buf, u8 *result)
+ 	crypto_shash_update(shash, kaddr + BTRFS_CSUM_SIZE,
+ 			    PAGE_SIZE - BTRFS_CSUM_SIZE);
+ 
+-	for (i = 1; i < num_pages; i++) {
++	for (i = 1; i < num_pages && INLINE_EXTENT_BUFFER_PAGES > 1; i++) {
+ 		kaddr = page_address(buf->pages[i]);
+ 		crypto_shash_update(shash, kaddr, PAGE_SIZE);
+ 	}
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 84a240025aa46..c8fdf127d843b 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -980,11 +980,13 @@ do {									       \
+  *			  where the second inode has larger inode number
+  *			  than the first
+  *  I_DATA_SEM_QUOTA  - Used for quota inodes only
++ *  I_DATA_SEM_EA     - Used for ea_inodes only
+  */
+ enum {
+ 	I_DATA_SEM_NORMAL = 0,
+ 	I_DATA_SEM_OTHER,
+ 	I_DATA_SEM_QUOTA,
++	I_DATA_SEM_EA
+ };
+ 
+ 
+@@ -2849,7 +2851,8 @@ typedef enum {
+ 	EXT4_IGET_NORMAL =	0,
+ 	EXT4_IGET_SPECIAL =	0x0001, /* OK to iget a system inode */
+ 	EXT4_IGET_HANDLE = 	0x0002,	/* Inode # is from a handle */
+-	EXT4_IGET_BAD =		0x0004  /* Allow to iget a bad inode */
++	EXT4_IGET_BAD =		0x0004, /* Allow to iget a bad inode */
++	EXT4_IGET_EA_INODE =	0x0008	/* Inode should contain an EA value */
+ } ext4_iget_flags;
+ 
+ extern struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 735109b9e88da..d3143746bcdbf 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4680,6 +4680,24 @@ static inline u64 ext4_inode_peek_iversion(const struct inode *inode)
+ 		return inode_peek_iversion(inode);
+ }
+ 
++static const char *check_igot_inode(struct inode *inode, ext4_iget_flags flags)
++
++{
++	if (flags & EXT4_IGET_EA_INODE) {
++		if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL))
++			return "missing EA_INODE flag";
++		if (ext4_test_inode_state(inode, EXT4_STATE_XATTR) ||
++		    EXT4_I(inode)->i_file_acl)
++			return "ea_inode with extended attributes";
++	} else {
++		if ((EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL))
++			return "unexpected EA_INODE flag";
++	}
++	if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD))
++		return "unexpected bad inode w/o EXT4_IGET_BAD";
++	return NULL;
++}
++
+ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 			  ext4_iget_flags flags, const char *function,
+ 			  unsigned int line)
+@@ -4688,6 +4706,7 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 	struct ext4_inode *raw_inode;
+ 	struct ext4_inode_info *ei;
+ 	struct inode *inode;
++	const char *err_str;
+ 	journal_t *journal = EXT4_SB(sb)->s_journal;
+ 	long ret;
+ 	loff_t size;
+@@ -4711,8 +4730,14 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 	inode = iget_locked(sb, ino);
+ 	if (!inode)
+ 		return ERR_PTR(-ENOMEM);
+-	if (!(inode->i_state & I_NEW))
++	if (!(inode->i_state & I_NEW)) {
++		if ((err_str = check_igot_inode(inode, flags)) != NULL) {
++			ext4_error_inode(inode, function, line, 0, err_str);
++			iput(inode);
++			return ERR_PTR(-EFSCORRUPTED);
++		}
+ 		return inode;
++	}
+ 
+ 	ei = EXT4_I(inode);
+ 	iloc.bh = NULL;
+@@ -4981,10 +5006,9 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 	if (IS_CASEFOLDED(inode) && !ext4_has_feature_casefold(inode->i_sb))
+ 		ext4_error_inode(inode, function, line, 0,
+ 				 "casefold flag without casefold feature");
+-	if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD)) {
+-		ext4_error_inode(inode, function, line, 0,
+-				 "bad inode without EXT4_IGET_BAD flag");
+-		ret = -EUCLEAN;
++	if ((err_str = check_igot_inode(inode, flags)) != NULL) {
++		ext4_error_inode(inode, function, line, 0, err_str);
++		ret = -EFSCORRUPTED;
+ 		goto bad_inode;
+ 	}
+ 
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index d89750e90bc4b..f72896384dbc9 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -6005,18 +6005,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 		}
+ 	}
+ 
+-	/*
+-	 * Reinitialize lazy itable initialization thread based on
+-	 * current settings
+-	 */
+-	if (sb_rdonly(sb) || !test_opt(sb, INIT_INODE_TABLE))
+-		ext4_unregister_li_request(sb);
+-	else {
+-		ext4_group_t first_not_zeroed;
+-		first_not_zeroed = ext4_has_uninit_itable(sb);
+-		ext4_register_li_request(sb, first_not_zeroed);
+-	}
+-
+ 	/*
+ 	 * Handle creation of system zone data early because it can fail.
+ 	 * Releasing of existing data is done when we are sure remount will
+@@ -6054,6 +6042,18 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	if (enable_rw)
+ 		sb->s_flags &= ~SB_RDONLY;
+ 
++	/*
++	 * Reinitialize lazy itable initialization thread based on
++	 * current settings
++	 */
++	if (sb_rdonly(sb) || !test_opt(sb, INIT_INODE_TABLE))
++		ext4_unregister_li_request(sb);
++	else {
++		ext4_group_t first_not_zeroed;
++		first_not_zeroed = ext4_has_uninit_itable(sb);
++		ext4_register_li_request(sb, first_not_zeroed);
++	}
++
+ 	if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
+ 		ext4_stop_mmpd(sbi);
+ 
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index abcba0255109e..10b2f570d4003 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -123,7 +123,11 @@ ext4_expand_inode_array(struct ext4_xattr_inode_array **ea_inode_array,
+ #ifdef CONFIG_LOCKDEP
+ void ext4_xattr_inode_set_class(struct inode *ea_inode)
+ {
++	struct ext4_inode_info *ei = EXT4_I(ea_inode);
++
+ 	lockdep_set_subclass(&ea_inode->i_rwsem, 1);
++	(void) ei;	/* shut up clang warning if !CONFIG_LOCKDEP */
++	lockdep_set_subclass(&ei->i_data_sem, I_DATA_SEM_EA);
+ }
+ #endif
+ 
+@@ -397,7 +401,7 @@ static int ext4_xattr_inode_iget(struct inode *parent, unsigned long ea_ino,
+ 		return -EFSCORRUPTED;
+ 	}
+ 
+-	inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_NORMAL);
++	inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_EA_INODE);
+ 	if (IS_ERR(inode)) {
+ 		err = PTR_ERR(inode);
+ 		ext4_error(parent->i_sb,
+@@ -405,23 +409,6 @@ static int ext4_xattr_inode_iget(struct inode *parent, unsigned long ea_ino,
+ 			   err);
+ 		return err;
+ 	}
+-
+-	if (is_bad_inode(inode)) {
+-		ext4_error(parent->i_sb,
+-			   "error while reading EA inode %lu is_bad_inode",
+-			   ea_ino);
+-		err = -EIO;
+-		goto error;
+-	}
+-
+-	if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) {
+-		ext4_error(parent->i_sb,
+-			   "EA inode %lu does not have EXT4_EA_INODE_FL flag",
+-			    ea_ino);
+-		err = -EINVAL;
+-		goto error;
+-	}
+-
+ 	ext4_xattr_inode_set_class(inode);
+ 
+ 	/*
+@@ -442,9 +429,6 @@ static int ext4_xattr_inode_iget(struct inode *parent, unsigned long ea_ino,
+ 
+ 	*ea_inode = inode;
+ 	return 0;
+-error:
+-	iput(inode);
+-	return err;
+ }
+ 
+ /* Remove entry from mbcache when EA inode is getting evicted */
+@@ -1500,11 +1484,11 @@ ext4_xattr_inode_cache_find(struct inode *inode, const void *value,
+ 
+ 	while (ce) {
+ 		ea_inode = ext4_iget(inode->i_sb, ce->e_value,
+-				     EXT4_IGET_NORMAL);
+-		if (!IS_ERR(ea_inode) &&
+-		    !is_bad_inode(ea_inode) &&
+-		    (EXT4_I(ea_inode)->i_flags & EXT4_EA_INODE_FL) &&
+-		    i_size_read(ea_inode) == value_len &&
++				     EXT4_IGET_EA_INODE);
++		if (IS_ERR(ea_inode))
++			goto next_entry;
++		ext4_xattr_inode_set_class(ea_inode);
++		if (i_size_read(ea_inode) == value_len &&
+ 		    !ext4_xattr_inode_read(ea_inode, ea_data, value_len) &&
+ 		    !ext4_xattr_inode_verify_hashes(ea_inode, NULL, ea_data,
+ 						    value_len) &&
+@@ -1514,9 +1498,8 @@ ext4_xattr_inode_cache_find(struct inode *inode, const void *value,
+ 			kvfree(ea_data);
+ 			return ea_inode;
+ 		}
+-
+-		if (!IS_ERR(ea_inode))
+-			iput(ea_inode);
++		iput(ea_inode);
++	next_entry:
+ 		ce = mb_cache_entry_find_next(ea_inode_cache, ce);
+ 	}
+ 	kvfree(ea_data);
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 5cb7e771b57ab..e01b6a2d12d30 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1416,6 +1416,14 @@ static void gfs2_evict_inode(struct inode *inode)
+ 	if (inode->i_nlink || sb_rdonly(sb))
+ 		goto out;
+ 
++	/*
++	 * In case of an incomplete mount, gfs2_evict_inode() may be called for
++	 * system files without having an active journal to write to.  In that
++	 * case, skip the filesystem evict.
++	 */
++	if (!sdp->sd_jdesc)
++		goto out;
++
+ 	gfs2_holder_mark_uninitialized(&gh);
+ 	ret = evict_should_delete(inode, &gh);
+ 	if (ret == SHOULD_DEFER_EVICTION)
+diff --git a/include/media/dvb_net.h b/include/media/dvb_net.h
+index 5e31d37f25fac..cc01dffcc9f35 100644
+--- a/include/media/dvb_net.h
++++ b/include/media/dvb_net.h
+@@ -41,6 +41,9 @@
+  * @exit:		flag to indicate when the device is being removed.
+  * @demux:		pointer to &struct dmx_demux.
+  * @ioctl_mutex:	protect access to this struct.
++ * @remove_mutex:	mutex that avoids a race condition between a callback
++ *			called when the hardware is disconnected and the
++ *			file_operations of dvb_net.
+  *
+  * Currently, the core supports up to %DVB_NET_DEVICES_MAX (10) network
+  * devices.
+@@ -53,6 +56,7 @@ struct dvb_net {
+ 	unsigned int exit:1;
+ 	struct dmx_demux *demux;
+ 	struct mutex ioctl_mutex;
++	struct mutex remove_mutex;
+ };
+ 
+ /**
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 651dc0a7bbd58..3da0601b573ed 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -326,6 +326,7 @@ struct bpf_local_storage;
+   *	@sk_cgrp_data: cgroup data for this cgroup
+   *	@sk_memcg: this socket's memory cgroup association
+   *	@sk_write_pending: a write to stream socket waits to start
++  *	@sk_wait_pending: number of threads blocked on this socket
+   *	@sk_state_change: callback to indicate change in the state of the sock
+   *	@sk_data_ready: callback to indicate there is data to be processed
+   *	@sk_write_space: callback to indicate there is bf sending space available
+@@ -410,6 +411,7 @@ struct sock {
+ 	unsigned int		sk_napi_id;
+ #endif
+ 	int			sk_rcvbuf;
++	int			sk_wait_pending;
+ 
+ 	struct sk_filter __rcu	*sk_filter;
+ 	union {
+@@ -1095,6 +1097,7 @@ static inline void sock_rps_reset_rxhash(struct sock *sk)
+ 
+ #define sk_wait_event(__sk, __timeo, __condition, __wait)		\
+ 	({	int __rc;						\
++		__sk->sk_wait_pending++;				\
+ 		release_sock(__sk);					\
+ 		__rc = __condition;					\
+ 		if (!__rc) {						\
+@@ -1104,6 +1107,7 @@ static inline void sock_rps_reset_rxhash(struct sock *sk)
+ 		}							\
+ 		sched_annotate_sleep();					\
+ 		lock_sock(__sk);					\
++		__sk->sk_wait_pending--;				\
+ 		__rc = __condition;					\
+ 		__rc;							\
+ 	})
+diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
+index 6d41e20c47ced..d4a69b83902e1 100644
+--- a/kernel/trace/trace_probe.h
++++ b/kernel/trace/trace_probe.h
+@@ -301,7 +301,7 @@ trace_probe_primary_from_call(struct trace_event_call *call)
+ {
+ 	struct trace_probe_event *tpe = trace_probe_event_from_call(call);
+ 
+-	return list_first_entry(&tpe->probes, struct trace_probe, list);
++	return list_first_entry_or_null(&tpe->probes, struct trace_probe, list);
+ }
+ 
+ static inline struct list_head *trace_probe_probe_list(struct trace_probe *tp)
+diff --git a/lib/test_firmware.c b/lib/test_firmware.c
+index 76550d2e2edc7..581ee3fcdd5c2 100644
+--- a/lib/test_firmware.c
++++ b/lib/test_firmware.c
+@@ -41,6 +41,7 @@ struct test_batched_req {
+ 	bool sent;
+ 	const struct firmware *fw;
+ 	const char *name;
++	const char *fw_buf;
+ 	struct completion completion;
+ 	struct task_struct *task;
+ 	struct device *dev;
+@@ -143,8 +144,14 @@ static void __test_release_all_firmware(void)
+ 
+ 	for (i = 0; i < test_fw_config->num_requests; i++) {
+ 		req = &test_fw_config->reqs[i];
+-		if (req->fw)
++		if (req->fw) {
++			if (req->fw_buf) {
++				kfree_const(req->fw_buf);
++				req->fw_buf = NULL;
++			}
+ 			release_firmware(req->fw);
++			req->fw = NULL;
++		}
+ 	}
+ 
+ 	vfree(test_fw_config->reqs);
+@@ -589,6 +596,8 @@ static ssize_t trigger_request_store(struct device *dev,
+ 
+ 	mutex_lock(&test_fw_mutex);
+ 	release_firmware(test_firmware);
++	if (test_fw_config->reqs)
++		__test_release_all_firmware();
+ 	test_firmware = NULL;
+ 	rc = request_firmware(&test_firmware, name, dev);
+ 	if (rc) {
+@@ -689,6 +698,8 @@ static ssize_t trigger_async_request_store(struct device *dev,
+ 	mutex_lock(&test_fw_mutex);
+ 	release_firmware(test_firmware);
+ 	test_firmware = NULL;
++	if (test_fw_config->reqs)
++		__test_release_all_firmware();
+ 	rc = request_firmware_nowait(THIS_MODULE, 1, name, dev, GFP_KERNEL,
+ 				     NULL, trigger_async_request_cb);
+ 	if (rc) {
+@@ -731,6 +742,8 @@ static ssize_t trigger_custom_fallback_store(struct device *dev,
+ 
+ 	mutex_lock(&test_fw_mutex);
+ 	release_firmware(test_firmware);
++	if (test_fw_config->reqs)
++		__test_release_all_firmware();
+ 	test_firmware = NULL;
+ 	rc = request_firmware_nowait(THIS_MODULE, FW_ACTION_NOHOTPLUG, name,
+ 				     dev, GFP_KERNEL, NULL,
+@@ -793,6 +806,8 @@ static int test_fw_run_batch_request(void *data)
+ 						 test_fw_config->buf_size);
+ 		if (!req->fw)
+ 			kfree(test_buf);
++		else
++			req->fw_buf = test_buf;
+ 	} else {
+ 		req->rc = test_fw_config->req_firmware(&req->fw,
+ 						       req->name,
+@@ -848,6 +863,7 @@ static ssize_t trigger_batched_requests_store(struct device *dev,
+ 		req->fw = NULL;
+ 		req->idx = i;
+ 		req->name = test_fw_config->name;
++		req->fw_buf = NULL;
+ 		req->dev = dev;
+ 		init_completion(&req->completion);
+ 		req->task = kthread_run(test_fw_run_batch_request, req,
+@@ -947,6 +963,7 @@ ssize_t trigger_batched_requests_async_store(struct device *dev,
+ 	for (i = 0; i < test_fw_config->num_requests; i++) {
+ 		req = &test_fw_config->reqs[i];
+ 		req->name = test_fw_config->name;
++		req->fw_buf = NULL;
+ 		req->fw = NULL;
+ 		req->idx = i;
+ 		init_completion(&req->completion);
+diff --git a/net/atm/resources.c b/net/atm/resources.c
+index 53236986dfe09..3ad39ae971323 100644
+--- a/net/atm/resources.c
++++ b/net/atm/resources.c
+@@ -403,6 +403,7 @@ done:
+ 	return error;
+ }
+ 
++#ifdef CONFIG_PROC_FS
+ void *atm_dev_seq_start(struct seq_file *seq, loff_t *pos)
+ {
+ 	mutex_lock(&atm_dev_mutex);
+@@ -418,3 +419,4 @@ void *atm_dev_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ {
+ 	return seq_list_next(v, &atm_devs, pos);
+ }
++#endif
+diff --git a/net/core/sock.c b/net/core/sock.c
+index c5ae520d4a69c..9b013d052a722 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2016,7 +2016,6 @@ void sk_setup_caps(struct sock *sk, struct dst_entry *dst)
+ {
+ 	u32 max_segs = 1;
+ 
+-	sk_dst_set(sk, dst);
+ 	sk->sk_route_caps = dst->dev->features | sk->sk_route_forced_caps;
+ 	if (sk->sk_route_caps & NETIF_F_GSO)
+ 		sk->sk_route_caps |= NETIF_F_GSO_SOFTWARE;
+@@ -2031,6 +2030,7 @@ void sk_setup_caps(struct sock *sk, struct dst_entry *dst)
+ 		}
+ 	}
+ 	sk->sk_gso_max_segs = max_segs;
++	sk_dst_set(sk, dst);
+ }
+ EXPORT_SYMBOL_GPL(sk_setup_caps);
+ 
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 800c2c7607e1a..acb4887351daf 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -584,6 +584,7 @@ static long inet_wait_for_connect(struct sock *sk, long timeo, int writebias)
+ 
+ 	add_wait_queue(sk_sleep(sk), &wait);
+ 	sk->sk_write_pending += writebias;
++	sk->sk_wait_pending++;
+ 
+ 	/* Basic assumption: if someone sets sk->sk_err, he _must_
+ 	 * change state of the socket from TCP_SYN_*.
+@@ -599,6 +600,7 @@ static long inet_wait_for_connect(struct sock *sk, long timeo, int writebias)
+ 	}
+ 	remove_wait_queue(sk_sleep(sk), &wait);
+ 	sk->sk_write_pending -= writebias;
++	sk->sk_wait_pending--;
+ 	return timeo;
+ }
+ 
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index e05dd87848f78..406305aaec904 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -839,6 +839,7 @@ struct sock *inet_csk_clone_lock(const struct sock *sk,
+ 	if (newsk) {
+ 		struct inet_connection_sock *newicsk = inet_csk(newsk);
+ 
++		newsk->sk_wait_pending = 0;
+ 		inet_sk_set_state(newsk, TCP_SYN_RECV);
+ 		newicsk->icsk_bind_hash = NULL;
+ 
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index eecce63ba25e3..82abbf1929851 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2748,6 +2748,12 @@ int tcp_disconnect(struct sock *sk, int flags)
+ 	int old_state = sk->sk_state;
+ 	u32 seq;
+ 
++	/* Deny disconnect if other threads are blocked in sk_wait_event()
++	 * or inet_wait_for_connect().
++	 */
++	if (sk->sk_wait_pending)
++		return -EBUSY;
++
+ 	if (old_state != TCP_CLOSE)
+ 		tcp_set_state(sk, TCP_CLOSE);
+ 
+@@ -3713,7 +3719,8 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
+ 	switch (optname) {
+ 	case TCP_MAXSEG:
+ 		val = tp->mss_cache;
+-		if (!val && ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)))
++		if (tp->rx_opt.user_mss &&
++		    ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)))
+ 			val = tp->rx_opt.user_mss;
+ 		if (tp->repair)
+ 			val = tp->rx_opt.mss_clamp;
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 6a055a2216831..ceb7c988edefa 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -2968,7 +2968,9 @@ nla_put_failure:
+ 	return -1;
+ }
+ 
++#if IS_ENABLED(CONFIG_NF_NAT)
+ static const union nf_inet_addr any_addr;
++#endif
+ 
+ static __be32 nf_expect_get_id(const struct nf_conntrack_expect *exp)
+ {
+@@ -3458,10 +3460,12 @@ ctnetlink_change_expect(struct nf_conntrack_expect *x,
+ 	return 0;
+ }
+ 
++#if IS_ENABLED(CONFIG_NF_NAT)
+ static const struct nla_policy exp_nat_nla_policy[CTA_EXPECT_NAT_MAX+1] = {
+ 	[CTA_EXPECT_NAT_DIR]	= { .type = NLA_U32 },
+ 	[CTA_EXPECT_NAT_TUPLE]	= { .type = NLA_NESTED },
+ };
++#endif
+ 
+ static int
+ ctnetlink_parse_expect_nat(const struct nlattr *attr,
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 651f8ca912af0..99c869d8d3044 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1781,7 +1781,7 @@ static int netlink_getsockopt(struct socket *sock, int level, int optname,
+ 				break;
+ 			}
+ 		}
+-		if (put_user(ALIGN(nlk->ngroups / 8, sizeof(u32)), optlen))
++		if (put_user(ALIGN(BITS_TO_BYTES(nlk->ngroups), sizeof(u32)), optlen))
+ 			err = -EFAULT;
+ 		netlink_unlock_table();
+ 		return err;
+diff --git a/net/netrom/nr_subr.c b/net/netrom/nr_subr.c
+index 3f99b432ea707..e2d2af924cff4 100644
+--- a/net/netrom/nr_subr.c
++++ b/net/netrom/nr_subr.c
+@@ -123,7 +123,7 @@ void nr_write_internal(struct sock *sk, int frametype)
+ 	unsigned char  *dptr;
+ 	int len, timeout;
+ 
+-	len = NR_NETWORK_LEN + NR_TRANSPORT_LEN;
++	len = NR_TRANSPORT_LEN;
+ 
+ 	switch (frametype & 0x0F) {
+ 	case NR_CONNREQ:
+@@ -141,7 +141,8 @@ void nr_write_internal(struct sock *sk, int frametype)
+ 		return;
+ 	}
+ 
+-	if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL)
++	skb = alloc_skb(NR_NETWORK_LEN + len, GFP_ATOMIC);
++	if (!skb)
+ 		return;
+ 
+ 	/*
+@@ -149,7 +150,7 @@ void nr_write_internal(struct sock *sk, int frametype)
+ 	 */
+ 	skb_reserve(skb, NR_NETWORK_LEN);
+ 
+-	dptr = skb_put(skb, skb_tailroom(skb));
++	dptr = skb_put(skb, len);
+ 
+ 	switch (frametype & 0x0F) {
+ 	case NR_CONNREQ:
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 3c05414cd3f83..c7129616dd530 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3157,6 +3157,9 @@ static int packet_do_bind(struct sock *sk, const char *name, int ifindex,
+ 
+ 	lock_sock(sk);
+ 	spin_lock(&po->bind_lock);
++	if (!proto)
++		proto = po->num;
++
+ 	rcu_read_lock();
+ 
+ 	if (po->fanout) {
+@@ -3259,7 +3262,7 @@ static int packet_bind_spkt(struct socket *sock, struct sockaddr *uaddr,
+ 	memcpy(name, uaddr->sa_data, sizeof(uaddr->sa_data));
+ 	name[sizeof(uaddr->sa_data)] = 0;
+ 
+-	return packet_do_bind(sk, name, 0, pkt_sk(sk)->num);
++	return packet_do_bind(sk, name, 0, 0);
+ }
+ 
+ static int packet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+@@ -3276,8 +3279,7 @@ static int packet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len
+ 	if (sll->sll_family != AF_PACKET)
+ 		return -EINVAL;
+ 
+-	return packet_do_bind(sk, NULL, sll->sll_ifindex,
+-			      sll->sll_protocol ? : pkt_sk(sk)->num);
++	return packet_do_bind(sk, NULL, sll->sll_ifindex, sll->sll_protocol);
+ }
+ 
+ static struct proto packet_proto = {
+diff --git a/net/packet/diag.c b/net/packet/diag.c
+index d704c7bf51b20..a68a84574c739 100644
+--- a/net/packet/diag.c
++++ b/net/packet/diag.c
+@@ -143,7 +143,7 @@ static int sk_diag_fill(struct sock *sk, struct sk_buff *skb,
+ 	rp = nlmsg_data(nlh);
+ 	rp->pdiag_family = AF_PACKET;
+ 	rp->pdiag_type = sk->sk_type;
+-	rp->pdiag_num = ntohs(po->num);
++	rp->pdiag_num = ntohs(READ_ONCE(po->num));
+ 	rp->pdiag_ino = sk_ino;
+ 	sock_diag_save_cookie(sk, rp->pdiag_cookie);
+ 
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 35ee6d8226e61..caf1a05bfbde4 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -1086,6 +1086,9 @@ static int fl_set_geneve_opt(const struct nlattr *nla, struct fl_flow_key *key,
+ 	if (option_len > sizeof(struct geneve_opt))
+ 		data_len = option_len - sizeof(struct geneve_opt);
+ 
++	if (key->enc_opts.len > FLOW_DIS_TUN_OPTS_MAX - 4)
++		return -ERANGE;
++
+ 	opt = (struct geneve_opt *)&key->enc_opts.data[key->enc_opts.len];
+ 	memset(opt, 0xff, option_len);
+ 	opt->length = data_len / 4;
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 54e2309315eb5..2084724c36ad3 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1223,7 +1223,12 @@ static struct Qdisc *qdisc_create(struct net_device *dev,
+ 	sch->parent = parent;
+ 
+ 	if (handle == TC_H_INGRESS) {
+-		sch->flags |= TCQ_F_INGRESS;
++		if (!(sch->flags & TCQ_F_INGRESS)) {
++			NL_SET_ERR_MSG(extack,
++				       "Specified parent ID is reserved for ingress and clsact Qdiscs");
++			err = -EINVAL;
++			goto err_out3;
++		}
+ 		handle = TC_H_MAKE(TC_H_INGRESS, 0);
+ 	} else {
+ 		if (handle == 0) {
+@@ -1584,11 +1589,20 @@ replay:
+ 					NL_SET_ERR_MSG(extack, "Invalid qdisc name");
+ 					return -EINVAL;
+ 				}
++				if (q->flags & TCQ_F_INGRESS) {
++					NL_SET_ERR_MSG(extack,
++						       "Cannot regraft ingress or clsact Qdiscs");
++					return -EINVAL;
++				}
+ 				if (q == p ||
+ 				    (p && check_loop(q, p, 0))) {
+ 					NL_SET_ERR_MSG(extack, "Qdisc parent/child loop detected");
+ 					return -ELOOP;
+ 				}
++				if (clid == TC_H_INGRESS) {
++					NL_SET_ERR_MSG(extack, "Ingress cannot graft directly");
++					return -EINVAL;
++				}
+ 				qdisc_refcount_inc(q);
+ 				goto graft;
+ 			} else {
+diff --git a/net/sched/sch_ingress.c b/net/sched/sch_ingress.c
+index 84838128b9c5b..e43a454993723 100644
+--- a/net/sched/sch_ingress.c
++++ b/net/sched/sch_ingress.c
+@@ -80,6 +80,9 @@ static int ingress_init(struct Qdisc *sch, struct nlattr *opt,
+ 	struct net_device *dev = qdisc_dev(sch);
+ 	int err;
+ 
++	if (sch->parent != TC_H_INGRESS)
++		return -EOPNOTSUPP;
++
+ 	net_inc_ingress_queue();
+ 
+ 	mini_qdisc_pair_init(&q->miniqp, sch, &dev->miniq_ingress);
+@@ -101,6 +104,9 @@ static void ingress_destroy(struct Qdisc *sch)
+ {
+ 	struct ingress_sched_data *q = qdisc_priv(sch);
+ 
++	if (sch->parent != TC_H_INGRESS)
++		return;
++
+ 	tcf_block_put_ext(q->block, sch, &q->block_info);
+ 	net_dec_ingress_queue();
+ }
+@@ -134,7 +140,7 @@ static struct Qdisc_ops ingress_qdisc_ops __read_mostly = {
+ 	.cl_ops			=	&ingress_class_ops,
+ 	.id			=	"ingress",
+ 	.priv_size		=	sizeof(struct ingress_sched_data),
+-	.static_flags		=	TCQ_F_CPUSTATS,
++	.static_flags		=	TCQ_F_INGRESS | TCQ_F_CPUSTATS,
+ 	.init			=	ingress_init,
+ 	.destroy		=	ingress_destroy,
+ 	.dump			=	ingress_dump,
+@@ -219,6 +225,9 @@ static int clsact_init(struct Qdisc *sch, struct nlattr *opt,
+ 	struct net_device *dev = qdisc_dev(sch);
+ 	int err;
+ 
++	if (sch->parent != TC_H_CLSACT)
++		return -EOPNOTSUPP;
++
+ 	net_inc_ingress_queue();
+ 	net_inc_egress_queue();
+ 
+@@ -248,6 +257,9 @@ static void clsact_destroy(struct Qdisc *sch)
+ {
+ 	struct clsact_sched_data *q = qdisc_priv(sch);
+ 
++	if (sch->parent != TC_H_CLSACT)
++		return;
++
+ 	tcf_block_put_ext(q->egress_block, sch, &q->egress_block_info);
+ 	tcf_block_put_ext(q->ingress_block, sch, &q->ingress_block_info);
+ 
+@@ -269,7 +281,7 @@ static struct Qdisc_ops clsact_qdisc_ops __read_mostly = {
+ 	.cl_ops			=	&clsact_class_ops,
+ 	.id			=	"clsact",
+ 	.priv_size		=	sizeof(struct clsact_sched_data),
+-	.static_flags		=	TCQ_F_CPUSTATS,
++	.static_flags		=	TCQ_F_INGRESS | TCQ_F_CPUSTATS,
+ 	.init			=	clsact_init,
+ 	.destroy		=	clsact_destroy,
+ 	.dump			=	ingress_dump,
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 2956854928537..d3b128b74a382 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3240,7 +3240,7 @@ xfrm_secpath_reject(int idx, struct sk_buff *skb, const struct flowi *fl)
+ 
+ static inline int
+ xfrm_state_ok(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x,
+-	      unsigned short family)
++	      unsigned short family, u32 if_id)
+ {
+ 	if (xfrm_state_kern(x))
+ 		return tmpl->optional && !xfrm_state_addr_cmp(tmpl, x, tmpl->encap_family);
+@@ -3251,7 +3251,8 @@ xfrm_state_ok(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x,
+ 		(tmpl->allalgs || (tmpl->aalgos & (1<<x->props.aalgo)) ||
+ 		 !(xfrm_id_proto_match(tmpl->id.proto, IPSEC_PROTO_ANY))) &&
+ 		!(x->props.mode != XFRM_MODE_TRANSPORT &&
+-		  xfrm_state_addr_cmp(tmpl, x, family));
++		  xfrm_state_addr_cmp(tmpl, x, family)) &&
++		(if_id == 0 || if_id == x->if_id);
+ }
+ 
+ /*
+@@ -3263,7 +3264,7 @@ xfrm_state_ok(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x,
+  */
+ static inline int
+ xfrm_policy_ok(const struct xfrm_tmpl *tmpl, const struct sec_path *sp, int start,
+-	       unsigned short family)
++	       unsigned short family, u32 if_id)
+ {
+ 	int idx = start;
+ 
+@@ -3273,7 +3274,7 @@ xfrm_policy_ok(const struct xfrm_tmpl *tmpl, const struct sec_path *sp, int star
+ 	} else
+ 		start = -1;
+ 	for (; idx < sp->len; idx++) {
+-		if (xfrm_state_ok(tmpl, sp->xvec[idx], family))
++		if (xfrm_state_ok(tmpl, sp->xvec[idx], family, if_id))
+ 			return ++idx;
+ 		if (sp->xvec[idx]->props.mode != XFRM_MODE_TRANSPORT) {
+ 			if (start == -1)
+@@ -3689,7 +3690,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 		 * are implied between each two transformations.
+ 		 */
+ 		for (i = xfrm_nr-1, k = 0; i >= 0; i--) {
+-			k = xfrm_policy_ok(tpp[i], sp, k, family);
++			k = xfrm_policy_ok(tpp[i], sp, k, family, if_id);
+ 			if (k < 0) {
+ 				if (k < -1)
+ 					/* "-2 - errored_index" returned */
+diff --git a/security/selinux/Makefile b/security/selinux/Makefile
+index ee1ddda964478..332a1c752b497 100644
+--- a/security/selinux/Makefile
++++ b/security/selinux/Makefile
+@@ -24,5 +24,9 @@ quiet_cmd_flask = GEN     $(obj)/flask.h $(obj)/av_permissions.h
+       cmd_flask = $< $(obj)/flask.h $(obj)/av_permissions.h
+ 
+ targets += flask.h av_permissions.h
+-$(obj)/flask.h $(obj)/av_permissions.h &: scripts/selinux/genheaders/genheaders FORCE
++# once make >= 4.3 is required, we can use grouped targets in the rule below,
++# which basically involves adding both headers and a '&' before the colon, see
++# the example below:
++#   $(obj)/flask.h $(obj)/av_permissions.h &: scripts/selinux/...
++$(obj)/flask.h: scripts/selinux/genheaders/genheaders FORCE
+ 	$(call if_changed,flask)
+diff --git a/sound/core/oss/pcm_plugin.h b/sound/core/oss/pcm_plugin.h
+index 46e273bd4a786..50a6b50f5db4c 100644
+--- a/sound/core/oss/pcm_plugin.h
++++ b/sound/core/oss/pcm_plugin.h
+@@ -141,6 +141,14 @@ int snd_pcm_area_copy(const struct snd_pcm_channel_area *src_channel,
+ 
+ void *snd_pcm_plug_buf_alloc(struct snd_pcm_substream *plug, snd_pcm_uframes_t size);
+ void snd_pcm_plug_buf_unlock(struct snd_pcm_substream *plug, void *ptr);
++#else
++
++static inline snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *handle, snd_pcm_uframes_t drv_size) { return drv_size; }
++static inline snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *handle, snd_pcm_uframes_t clt_size) { return clt_size; }
++static inline int snd_pcm_plug_slave_format(int format, const struct snd_mask *format_mask) { return format; }
++
++#endif
++
+ snd_pcm_sframes_t snd_pcm_oss_write3(struct snd_pcm_substream *substream,
+ 				     const char *ptr, snd_pcm_uframes_t size,
+ 				     int in_kernel);
+@@ -151,14 +159,6 @@ snd_pcm_sframes_t snd_pcm_oss_writev3(struct snd_pcm_substream *substream,
+ snd_pcm_sframes_t snd_pcm_oss_readv3(struct snd_pcm_substream *substream,
+ 				     void **bufs, snd_pcm_uframes_t frames);
+ 
+-#else
+-
+-static inline snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *handle, snd_pcm_uframes_t drv_size) { return drv_size; }
+-static inline snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *handle, snd_pcm_uframes_t clt_size) { return clt_size; }
+-static inline int snd_pcm_plug_slave_format(int format, const struct snd_mask *format_mask) { return format; }
+-
+-#endif
+-
+ #ifdef PLUGIN_DEBUG
+ #define pdprintf(fmt, args...) printk(KERN_DEBUG "plugin: " fmt, ##args)
+ #else
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index de1fe604905f3..1f641712233ef 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -264,6 +264,7 @@ enum {
+ 	AZX_DRIVER_ATI,
+ 	AZX_DRIVER_ATIHDMI,
+ 	AZX_DRIVER_ATIHDMI_NS,
++	AZX_DRIVER_GFHDMI,
+ 	AZX_DRIVER_VIA,
+ 	AZX_DRIVER_SIS,
+ 	AZX_DRIVER_ULI,
+@@ -386,6 +387,7 @@ static const char * const driver_short_names[] = {
+ 	[AZX_DRIVER_ATI] = "HDA ATI SB",
+ 	[AZX_DRIVER_ATIHDMI] = "HDA ATI HDMI",
+ 	[AZX_DRIVER_ATIHDMI_NS] = "HDA ATI HDMI",
++	[AZX_DRIVER_GFHDMI] = "HDA GF HDMI",
+ 	[AZX_DRIVER_VIA] = "HDA VIA VT82xx",
+ 	[AZX_DRIVER_SIS] = "HDA SIS966",
+ 	[AZX_DRIVER_ULI] = "HDA ULI M5461",
+@@ -1783,6 +1785,12 @@ static int default_bdl_pos_adj(struct azx *chip)
+ 	}
+ 
+ 	switch (chip->driver_type) {
++	/*
++	 * increase the bdl size for Glenfly Gpus for hardware
++	 * limitation on hdac interrupt interval
++	 */
++	case AZX_DRIVER_GFHDMI:
++		return 128;
+ 	case AZX_DRIVER_ICH:
+ 	case AZX_DRIVER_PCH:
+ 		return 1;
+@@ -1902,6 +1910,12 @@ static int azx_first_init(struct azx *chip)
+ 		pci_write_config_dword(pci, PCI_BASE_ADDRESS_1, 0);
+ 	}
+ #endif
++	/*
++	 * Fix response write request not synced to memory when handle
++	 * hdac interrupt on Glenfly Gpus
++	 */
++	if (chip->driver_type == AZX_DRIVER_GFHDMI)
++		bus->polling_mode = 1;
+ 
+ 	err = pci_request_regions(pci, "ICH HD audio");
+ 	if (err < 0)
+@@ -2011,6 +2025,7 @@ static int azx_first_init(struct azx *chip)
+ 			chip->playback_streams = ATIHDMI_NUM_PLAYBACK;
+ 			chip->capture_streams = ATIHDMI_NUM_CAPTURE;
+ 			break;
++		case AZX_DRIVER_GFHDMI:
+ 		case AZX_DRIVER_GENERIC:
+ 		default:
+ 			chip->playback_streams = ICH6_NUM_PLAYBACK;
+@@ -2756,6 +2771,12 @@ static const struct pci_device_id azx_ids[] = {
+ 	{ PCI_DEVICE(0x1002, 0xab38),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ 	  AZX_DCAPS_PM_RUNTIME },
++	/* GLENFLY */
++	{ PCI_DEVICE(0x6766, PCI_ANY_ID),
++	  .class = PCI_CLASS_MULTIMEDIA_HD_AUDIO << 8,
++	  .class_mask = 0xffffff,
++	  .driver_data = AZX_DRIVER_GFHDMI | AZX_DCAPS_POSFIX_LPIB |
++	  AZX_DCAPS_NO_MSI | AZX_DCAPS_NO_64BIT },
+ 	/* VIA VT8251/VT8237A */
+ 	{ PCI_DEVICE(0x1106, 0x3288), .driver_data = AZX_DRIVER_VIA },
+ 	/* VIA GFX VT7122/VX900 */
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index e4366fea9e274..c19afe4861949 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -4287,6 +4287,22 @@ static int patch_via_hdmi(struct hda_codec *codec)
+ 	return patch_simple_hdmi(codec, VIAHDMI_CVT_NID, VIAHDMI_PIN_NID);
+ }
+ 
++static int patch_gf_hdmi(struct hda_codec *codec)
++{
++	int err;
++
++	err = patch_generic_hdmi(codec);
++	if (err)
++		return err;
++
++	/*
++	 * Glenfly GPUs have two codecs, stream switches from one codec to
++	 * another, need to do actual clean-ups in codec_cleanup_stream
++	 */
++	codec->no_sticky_stream = 1;
++	return 0;
++}
++
+ /*
+  * patch entries
+  */
+@@ -4381,6 +4397,12 @@ HDA_CODEC_ENTRY(0x10de00a6, "GPU a6 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a7, "GPU a7 HDMI/DP",	patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de8001, "MCP73 HDMI",	patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x10de8067, "MCP67/68 HDMI",	patch_nvhdmi_2ch),
++HDA_CODEC_ENTRY(0x67663d82, "Arise 82 HDMI/DP",	patch_gf_hdmi),
++HDA_CODEC_ENTRY(0x67663d83, "Arise 83 HDMI/DP",	patch_gf_hdmi),
++HDA_CODEC_ENTRY(0x67663d84, "Arise 84 HDMI/DP",	patch_gf_hdmi),
++HDA_CODEC_ENTRY(0x67663d85, "Arise 85 HDMI/DP",	patch_gf_hdmi),
++HDA_CODEC_ENTRY(0x67663d86, "Arise 86 HDMI/DP",	patch_gf_hdmi),
++HDA_CODEC_ENTRY(0x67663d87, "Arise 87 HDMI/DP",	patch_gf_hdmi),
+ HDA_CODEC_ENTRY(0x11069f80, "VX900 HDMI/DP",	patch_via_hdmi),
+ HDA_CODEC_ENTRY(0x11069f81, "VX900 HDMI/DP",	patch_via_hdmi),
+ HDA_CODEC_ENTRY(0x11069f84, "VX11 HDMI/DP",	patch_generic_hdmi),
+diff --git a/sound/soc/codecs/ssm2602.c b/sound/soc/codecs/ssm2602.c
+index 9051602466146..c7a90c34d8f08 100644
+--- a/sound/soc/codecs/ssm2602.c
++++ b/sound/soc/codecs/ssm2602.c
+@@ -53,6 +53,18 @@ static const struct reg_default ssm2602_reg[SSM2602_CACHEREGNUM] = {
+ 	{ .reg = 0x09, .def = 0x0000 }
+ };
+ 
++/*
++ * ssm2602 register patch
++ * Workaround for playback distortions after power up: activates digital
++ * core, and then powers on output, DAC, and whole chip at the same time
++ */
++
++static const struct reg_sequence ssm2602_patch[] = {
++	{ SSM2602_ACTIVE, 0x01 },
++	{ SSM2602_PWR,    0x07 },
++	{ SSM2602_RESET,  0x00 },
++};
++
+ 
+ /*Appending several "None"s just for OSS mixer use*/
+ static const char *ssm2602_input_select[] = {
+@@ -589,6 +601,9 @@ static int ssm260x_component_probe(struct snd_soc_component *component)
+ 		return ret;
+ 	}
+ 
++	regmap_register_patch(ssm2602->regmap, ssm2602_patch,
++			      ARRAY_SIZE(ssm2602_patch));
++
+ 	/* set the update bits */
+ 	regmap_update_bits(ssm2602->regmap, SSM2602_LINVOL,
+ 			    LINVOL_LRIN_BOTH, LINVOL_LRIN_BOTH);
+diff --git a/sound/soc/dwc/dwc-i2s.c b/sound/soc/dwc/dwc-i2s.c
+index 36da0f01571a1..5469399abcb44 100644
+--- a/sound/soc/dwc/dwc-i2s.c
++++ b/sound/soc/dwc/dwc-i2s.c
+@@ -132,13 +132,13 @@ static irqreturn_t i2s_irq_handler(int irq, void *dev_id)
+ 
+ 		/* Error Handling: TX */
+ 		if (isr[i] & ISR_TXFO) {
+-			dev_err(dev->dev, "TX overrun (ch_id=%d)\n", i);
++			dev_err_ratelimited(dev->dev, "TX overrun (ch_id=%d)\n", i);
+ 			irq_valid = true;
+ 		}
+ 
+ 		/* Error Handling: TX */
+ 		if (isr[i] & ISR_RXFO) {
+-			dev_err(dev->dev, "RX overrun (ch_id=%d)\n", i);
++			dev_err_ratelimited(dev->dev, "RX overrun (ch_id=%d)\n", i);
+ 			irq_valid = true;
+ 		}
+ 	}
+diff --git a/tools/testing/selftests/net/mptcp/Makefile b/tools/testing/selftests/net/mptcp/Makefile
+index 00bb158b4a5d2..7072ef1c0ae74 100644
+--- a/tools/testing/selftests/net/mptcp/Makefile
++++ b/tools/testing/selftests/net/mptcp/Makefile
+@@ -10,7 +10,7 @@ TEST_PROGS := mptcp_connect.sh pm_netlink.sh mptcp_join.sh diag.sh \
+ 
+ TEST_GEN_FILES = mptcp_connect pm_nl_ctl
+ 
+-TEST_FILES := settings
++TEST_FILES := mptcp_lib.sh settings
+ 
+ EXTRA_CLEAN := *.pcap
+ 
+diff --git a/tools/testing/selftests/net/mptcp/diag.sh b/tools/testing/selftests/net/mptcp/diag.sh
+index 39edce4f541c2..34577d469f583 100755
+--- a/tools/testing/selftests/net/mptcp/diag.sh
++++ b/tools/testing/selftests/net/mptcp/diag.sh
+@@ -1,6 +1,8 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
++. "$(dirname "${0}")/mptcp_lib.sh"
++
+ rndh=$(printf %x $sec)-$(mktemp -u XXXXXX)
+ ns="ns1-$rndh"
+ ksft_skip=4
+@@ -28,6 +30,8 @@ cleanup()
+ 	done
+ }
+ 
++mptcp_lib_check_mptcp
++
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ 	echo "SKIP: Could not run test without ip tool"
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+index 987a914ee0df2..fb89298bdde4c 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+@@ -1,6 +1,8 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
++. "$(dirname "${0}")/mptcp_lib.sh"
++
+ time_start=$(date +%s)
+ 
+ optstring="S:R:d:e:l:r:h4cm:f:t"
+@@ -131,6 +133,8 @@ cleanup()
+ 	done
+ }
+ 
++mptcp_lib_check_mptcp
++
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ 	echo "SKIP: Could not run test without ip tool"
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index 08f53d86dedcb..94b15bb28e110 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -1,6 +1,8 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
++. "$(dirname "${0}")/mptcp_lib.sh"
++
+ ret=0
+ sin=""
+ sout=""
+@@ -88,6 +90,8 @@ for arg in "$@"; do
+ 	fi
+ done
+ 
++mptcp_lib_check_mptcp
++
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ 	echo "SKIP: Could not run test without ip tool"
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_lib.sh b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+new file mode 100644
+index 0000000000000..3286536b79d55
+--- /dev/null
++++ b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+@@ -0,0 +1,40 @@
++#! /bin/bash
++# SPDX-License-Identifier: GPL-2.0
++
++readonly KSFT_FAIL=1
++readonly KSFT_SKIP=4
++
++# SELFTESTS_MPTCP_LIB_EXPECT_ALL_FEATURES env var can be set when validating all
++# features using the last version of the kernel and the selftests to make sure
++# a test is not being skipped by mistake.
++mptcp_lib_expect_all_features() {
++	[ "${SELFTESTS_MPTCP_LIB_EXPECT_ALL_FEATURES:-}" = "1" ]
++}
++
++# $1: msg
++mptcp_lib_fail_if_expected_feature() {
++	if mptcp_lib_expect_all_features; then
++		echo "ERROR: missing feature: ${*}"
++		exit ${KSFT_FAIL}
++	fi
++
++	return 1
++}
++
++# $1: file
++mptcp_lib_has_file() {
++	local f="${1}"
++
++	if [ -f "${f}" ]; then
++		return 0
++	fi
++
++	mptcp_lib_fail_if_expected_feature "${f} file not found"
++}
++
++mptcp_lib_check_mptcp() {
++	if ! mptcp_lib_has_file "/proc/sys/net/mptcp/enabled"; then
++		echo "SKIP: MPTCP support is not available"
++		exit ${KSFT_SKIP}
++	fi
++}
+diff --git a/tools/testing/selftests/net/mptcp/pm_netlink.sh b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+index 15f4f46ca3a90..f7cdba0a97a90 100755
+--- a/tools/testing/selftests/net/mptcp/pm_netlink.sh
++++ b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+@@ -1,6 +1,8 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
++. "$(dirname "${0}")/mptcp_lib.sh"
++
+ ksft_skip=4
+ ret=0
+ 
+@@ -34,6 +36,8 @@ cleanup()
+ 	ip netns del $ns1
+ }
+ 
++mptcp_lib_check_mptcp
++
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ 	echo "SKIP: Could not run test without ip tool"
+diff --git a/tools/testing/selftests/net/mptcp/simult_flows.sh b/tools/testing/selftests/net/mptcp/simult_flows.sh
+index 8fcb289278182..b51afba244be5 100755
+--- a/tools/testing/selftests/net/mptcp/simult_flows.sh
++++ b/tools/testing/selftests/net/mptcp/simult_flows.sh
+@@ -1,6 +1,8 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
++. "$(dirname "${0}")/mptcp_lib.sh"
++
+ rndh=$(printf %x $sec)-$(mktemp -u XXXXXX)
+ ns1="ns1-$rndh"
+ ns2="ns2-$rndh"
+@@ -31,6 +33,8 @@ cleanup()
+ 	done
+ }
+ 
++mptcp_lib_check_mptcp
++
+ ip -Version > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ 	echo "SKIP: Could not run test without ip tool"


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-06-14 10:20 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-06-14 10:20 UTC (permalink / raw
  To: gentoo-commits

commit:     6d617305feb21ff027697aa0e5fcc277b67b23b3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 14 10:19:54 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 14 10:19:54 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6d617305

Linux patch 5.10.184

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1183_linux-5.10.184.patch | 4008 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4012 insertions(+)

diff --git a/0000_README b/0000_README
index 3ea4f952..3b319458 100644
--- a/0000_README
+++ b/0000_README
@@ -775,6 +775,10 @@ Patch:  1182_linux-5.10.183.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.183
 
+Patch:  1183_linux-5.10.184.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.184
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1183_linux-5.10.184.patch b/1183_linux-5.10.184.patch
new file mode 100644
index 00000000..ad847cfe
--- /dev/null
+++ b/1183_linux-5.10.184.patch
@@ -0,0 +1,4008 @@
+diff --git a/Makefile b/Makefile
+index 28115681fffda..3450b061e8d69 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 183
++SUBLEVEL = 184
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
+index 27ad767915390..fd0e09033a7c3 100644
+--- a/arch/mips/include/asm/atomic.h
++++ b/arch/mips/include/asm/atomic.h
+@@ -203,7 +203,7 @@ ATOMIC_OPS(atomic64, xor, s64, ^=, xor, lld, scd)
+  * The function returns the old value of @v minus @i.
+  */
+ #define ATOMIC_SIP_OP(pfx, type, op, ll, sc)				\
+-static __inline__ int pfx##_sub_if_positive(type i, pfx##_t * v)	\
++static __inline__ type pfx##_sub_if_positive(type i, pfx##_t * v)	\
+ {									\
+ 	type temp, result;						\
+ 									\
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index c192bd7305dc6..b28fabfc91bf7 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -22,6 +22,7 @@ config RISCV
+ 	select ARCH_HAS_GIGANTIC_PAGE
+ 	select ARCH_HAS_KCOV
+ 	select ARCH_HAS_MMIOWB
++	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+ 	select ARCH_HAS_PTE_SPECIAL
+ 	select ARCH_HAS_SET_DIRECT_MAP
+ 	select ARCH_HAS_SET_MEMORY
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 9255b642d6adb..105ad23dff063 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -232,7 +232,9 @@ enum {
+ 
+ 	/* 1/64k is granular enough and can easily be handled w/ u32 */
+ 	WEIGHT_ONE		= 1 << 16,
++};
+ 
++enum {
+ 	/*
+ 	 * As vtime is used to calculate the cost of each IO, it needs to
+ 	 * be fairly high precision.  For example, it should be able to
+@@ -256,6 +258,11 @@ enum {
+ 	VRATE_MIN		= VTIME_PER_USEC * VRATE_MIN_PPM / MILLION,
+ 	VRATE_CLAMP_ADJ_PCT	= 4,
+ 
++	/* switch iff the conditions are met for longer than this */
++	AUTOP_CYCLE_NSEC	= 10LLU * NSEC_PER_SEC,
++};
++
++enum {
+ 	/* if IOs end up waiting for requests, issue less */
+ 	RQ_WAIT_BUSY_PCT	= 5,
+ 
+@@ -294,9 +301,6 @@ enum {
+ 	/* don't let cmds which take a very long time pin lagging for too long */
+ 	MAX_LAGGING_PERIODS	= 10,
+ 
+-	/* switch iff the conditions are met for longer than this */
+-	AUTOP_CYCLE_NSEC	= 10LLU * NSEC_PER_SEC,
+-
+ 	/*
+ 	 * Count IO size in 4k pages.  The 12bit shift helps keeping
+ 	 * size-proportional components of cost calculation in closer
+diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h
+index 1ce8973569933..7cc6feb17e972 100644
+--- a/drivers/ata/ahci.h
++++ b/drivers/ata/ahci.h
+@@ -24,6 +24,7 @@
+ #include <linux/libata.h>
+ #include <linux/phy/phy.h>
+ #include <linux/regulator/consumer.h>
++#include <linux/bits.h>
+ 
+ /* Enclosure Management Control */
+ #define EM_CTRL_MSG_TYPE              0x000f0000
+@@ -54,12 +55,12 @@ enum {
+ 	AHCI_PORT_PRIV_FBS_DMA_SZ	= AHCI_CMD_SLOT_SZ +
+ 					  AHCI_CMD_TBL_AR_SZ +
+ 					  (AHCI_RX_FIS_SZ * 16),
+-	AHCI_IRQ_ON_SG		= (1 << 31),
+-	AHCI_CMD_ATAPI		= (1 << 5),
+-	AHCI_CMD_WRITE		= (1 << 6),
+-	AHCI_CMD_PREFETCH	= (1 << 7),
+-	AHCI_CMD_RESET		= (1 << 8),
+-	AHCI_CMD_CLR_BUSY	= (1 << 10),
++	AHCI_IRQ_ON_SG		= BIT(31),
++	AHCI_CMD_ATAPI		= BIT(5),
++	AHCI_CMD_WRITE		= BIT(6),
++	AHCI_CMD_PREFETCH	= BIT(7),
++	AHCI_CMD_RESET		= BIT(8),
++	AHCI_CMD_CLR_BUSY	= BIT(10),
+ 
+ 	RX_FIS_PIO_SETUP	= 0x20,	/* offset of PIO Setup FIS data */
+ 	RX_FIS_D2H_REG		= 0x40,	/* offset of D2H Register FIS data */
+@@ -77,37 +78,37 @@ enum {
+ 	HOST_CAP2		= 0x24, /* host capabilities, extended */
+ 
+ 	/* HOST_CTL bits */
+-	HOST_RESET		= (1 << 0),  /* reset controller; self-clear */
+-	HOST_IRQ_EN		= (1 << 1),  /* global IRQ enable */
+-	HOST_MRSM		= (1 << 2),  /* MSI Revert to Single Message */
+-	HOST_AHCI_EN		= (1 << 31), /* AHCI enabled */
++	HOST_RESET		= BIT(0),  /* reset controller; self-clear */
++	HOST_IRQ_EN		= BIT(1),  /* global IRQ enable */
++	HOST_MRSM		= BIT(2),  /* MSI Revert to Single Message */
++	HOST_AHCI_EN		= BIT(31), /* AHCI enabled */
+ 
+ 	/* HOST_CAP bits */
+-	HOST_CAP_SXS		= (1 << 5),  /* Supports External SATA */
+-	HOST_CAP_EMS		= (1 << 6),  /* Enclosure Management support */
+-	HOST_CAP_CCC		= (1 << 7),  /* Command Completion Coalescing */
+-	HOST_CAP_PART		= (1 << 13), /* Partial state capable */
+-	HOST_CAP_SSC		= (1 << 14), /* Slumber state capable */
+-	HOST_CAP_PIO_MULTI	= (1 << 15), /* PIO multiple DRQ support */
+-	HOST_CAP_FBS		= (1 << 16), /* FIS-based switching support */
+-	HOST_CAP_PMP		= (1 << 17), /* Port Multiplier support */
+-	HOST_CAP_ONLY		= (1 << 18), /* Supports AHCI mode only */
+-	HOST_CAP_CLO		= (1 << 24), /* Command List Override support */
+-	HOST_CAP_LED		= (1 << 25), /* Supports activity LED */
+-	HOST_CAP_ALPM		= (1 << 26), /* Aggressive Link PM support */
+-	HOST_CAP_SSS		= (1 << 27), /* Staggered Spin-up */
+-	HOST_CAP_MPS		= (1 << 28), /* Mechanical presence switch */
+-	HOST_CAP_SNTF		= (1 << 29), /* SNotification register */
+-	HOST_CAP_NCQ		= (1 << 30), /* Native Command Queueing */
+-	HOST_CAP_64		= (1 << 31), /* PCI DAC (64-bit DMA) support */
++	HOST_CAP_SXS		= BIT(5),  /* Supports External SATA */
++	HOST_CAP_EMS		= BIT(6),  /* Enclosure Management support */
++	HOST_CAP_CCC		= BIT(7),  /* Command Completion Coalescing */
++	HOST_CAP_PART		= BIT(13), /* Partial state capable */
++	HOST_CAP_SSC		= BIT(14), /* Slumber state capable */
++	HOST_CAP_PIO_MULTI	= BIT(15), /* PIO multiple DRQ support */
++	HOST_CAP_FBS		= BIT(16), /* FIS-based switching support */
++	HOST_CAP_PMP		= BIT(17), /* Port Multiplier support */
++	HOST_CAP_ONLY		= BIT(18), /* Supports AHCI mode only */
++	HOST_CAP_CLO		= BIT(24), /* Command List Override support */
++	HOST_CAP_LED		= BIT(25), /* Supports activity LED */
++	HOST_CAP_ALPM		= BIT(26), /* Aggressive Link PM support */
++	HOST_CAP_SSS		= BIT(27), /* Staggered Spin-up */
++	HOST_CAP_MPS		= BIT(28), /* Mechanical presence switch */
++	HOST_CAP_SNTF		= BIT(29), /* SNotification register */
++	HOST_CAP_NCQ		= BIT(30), /* Native Command Queueing */
++	HOST_CAP_64		= BIT(31), /* PCI DAC (64-bit DMA) support */
+ 
+ 	/* HOST_CAP2 bits */
+-	HOST_CAP2_BOH		= (1 << 0),  /* BIOS/OS handoff supported */
+-	HOST_CAP2_NVMHCI	= (1 << 1),  /* NVMHCI supported */
+-	HOST_CAP2_APST		= (1 << 2),  /* Automatic partial to slumber */
+-	HOST_CAP2_SDS		= (1 << 3),  /* Support device sleep */
+-	HOST_CAP2_SADM		= (1 << 4),  /* Support aggressive DevSlp */
+-	HOST_CAP2_DESO		= (1 << 5),  /* DevSlp from slumber only */
++	HOST_CAP2_BOH		= BIT(0),  /* BIOS/OS handoff supported */
++	HOST_CAP2_NVMHCI	= BIT(1),  /* NVMHCI supported */
++	HOST_CAP2_APST		= BIT(2),  /* Automatic partial to slumber */
++	HOST_CAP2_SDS		= BIT(3),  /* Support device sleep */
++	HOST_CAP2_SADM		= BIT(4),  /* Support aggressive DevSlp */
++	HOST_CAP2_DESO		= BIT(5),  /* DevSlp from slumber only */
+ 
+ 	/* registers for each SATA port */
+ 	PORT_LST_ADDR		= 0x00, /* command list DMA addr */
+@@ -129,24 +130,24 @@ enum {
+ 	PORT_DEVSLP		= 0x44, /* device sleep */
+ 
+ 	/* PORT_IRQ_{STAT,MASK} bits */
+-	PORT_IRQ_COLD_PRES	= (1 << 31), /* cold presence detect */
+-	PORT_IRQ_TF_ERR		= (1 << 30), /* task file error */
+-	PORT_IRQ_HBUS_ERR	= (1 << 29), /* host bus fatal error */
+-	PORT_IRQ_HBUS_DATA_ERR	= (1 << 28), /* host bus data error */
+-	PORT_IRQ_IF_ERR		= (1 << 27), /* interface fatal error */
+-	PORT_IRQ_IF_NONFATAL	= (1 << 26), /* interface non-fatal error */
+-	PORT_IRQ_OVERFLOW	= (1 << 24), /* xfer exhausted available S/G */
+-	PORT_IRQ_BAD_PMP	= (1 << 23), /* incorrect port multiplier */
+-
+-	PORT_IRQ_PHYRDY		= (1 << 22), /* PhyRdy changed */
+-	PORT_IRQ_DEV_ILCK	= (1 << 7), /* device interlock */
+-	PORT_IRQ_CONNECT	= (1 << 6), /* port connect change status */
+-	PORT_IRQ_SG_DONE	= (1 << 5), /* descriptor processed */
+-	PORT_IRQ_UNK_FIS	= (1 << 4), /* unknown FIS rx'd */
+-	PORT_IRQ_SDB_FIS	= (1 << 3), /* Set Device Bits FIS rx'd */
+-	PORT_IRQ_DMAS_FIS	= (1 << 2), /* DMA Setup FIS rx'd */
+-	PORT_IRQ_PIOS_FIS	= (1 << 1), /* PIO Setup FIS rx'd */
+-	PORT_IRQ_D2H_REG_FIS	= (1 << 0), /* D2H Register FIS rx'd */
++	PORT_IRQ_COLD_PRES	= BIT(31), /* cold presence detect */
++	PORT_IRQ_TF_ERR		= BIT(30), /* task file error */
++	PORT_IRQ_HBUS_ERR	= BIT(29), /* host bus fatal error */
++	PORT_IRQ_HBUS_DATA_ERR	= BIT(28), /* host bus data error */
++	PORT_IRQ_IF_ERR		= BIT(27), /* interface fatal error */
++	PORT_IRQ_IF_NONFATAL	= BIT(26), /* interface non-fatal error */
++	PORT_IRQ_OVERFLOW	= BIT(24), /* xfer exhausted available S/G */
++	PORT_IRQ_BAD_PMP	= BIT(23), /* incorrect port multiplier */
++
++	PORT_IRQ_PHYRDY		= BIT(22), /* PhyRdy changed */
++	PORT_IRQ_DEV_ILCK	= BIT(7),  /* device interlock */
++	PORT_IRQ_CONNECT	= BIT(6),  /* port connect change status */
++	PORT_IRQ_SG_DONE	= BIT(5),  /* descriptor processed */
++	PORT_IRQ_UNK_FIS	= BIT(4),  /* unknown FIS rx'd */
++	PORT_IRQ_SDB_FIS	= BIT(3),  /* Set Device Bits FIS rx'd */
++	PORT_IRQ_DMAS_FIS	= BIT(2),  /* DMA Setup FIS rx'd */
++	PORT_IRQ_PIOS_FIS	= BIT(1),  /* PIO Setup FIS rx'd */
++	PORT_IRQ_D2H_REG_FIS	= BIT(0),  /* D2H Register FIS rx'd */
+ 
+ 	PORT_IRQ_FREEZE		= PORT_IRQ_HBUS_ERR |
+ 				  PORT_IRQ_IF_ERR |
+@@ -162,34 +163,34 @@ enum {
+ 				  PORT_IRQ_PIOS_FIS | PORT_IRQ_D2H_REG_FIS,
+ 
+ 	/* PORT_CMD bits */
+-	PORT_CMD_ASP		= (1 << 27), /* Aggressive Slumber/Partial */
+-	PORT_CMD_ALPE		= (1 << 26), /* Aggressive Link PM enable */
+-	PORT_CMD_ATAPI		= (1 << 24), /* Device is ATAPI */
+-	PORT_CMD_FBSCP		= (1 << 22), /* FBS Capable Port */
+-	PORT_CMD_ESP		= (1 << 21), /* External Sata Port */
+-	PORT_CMD_HPCP		= (1 << 18), /* HotPlug Capable Port */
+-	PORT_CMD_PMP		= (1 << 17), /* PMP attached */
+-	PORT_CMD_LIST_ON	= (1 << 15), /* cmd list DMA engine running */
+-	PORT_CMD_FIS_ON		= (1 << 14), /* FIS DMA engine running */
+-	PORT_CMD_FIS_RX		= (1 << 4), /* Enable FIS receive DMA engine */
+-	PORT_CMD_CLO		= (1 << 3), /* Command list override */
+-	PORT_CMD_POWER_ON	= (1 << 2), /* Power up device */
+-	PORT_CMD_SPIN_UP	= (1 << 1), /* Spin up device */
+-	PORT_CMD_START		= (1 << 0), /* Enable port DMA engine */
+-
+-	PORT_CMD_ICC_MASK	= (0xf << 28), /* i/f ICC state mask */
+-	PORT_CMD_ICC_ACTIVE	= (0x1 << 28), /* Put i/f in active state */
+-	PORT_CMD_ICC_PARTIAL	= (0x2 << 28), /* Put i/f in partial state */
+-	PORT_CMD_ICC_SLUMBER	= (0x6 << 28), /* Put i/f in slumber state */
++	PORT_CMD_ASP		= BIT(27), /* Aggressive Slumber/Partial */
++	PORT_CMD_ALPE		= BIT(26), /* Aggressive Link PM enable */
++	PORT_CMD_ATAPI		= BIT(24), /* Device is ATAPI */
++	PORT_CMD_FBSCP		= BIT(22), /* FBS Capable Port */
++	PORT_CMD_ESP		= BIT(21), /* External Sata Port */
++	PORT_CMD_HPCP		= BIT(18), /* HotPlug Capable Port */
++	PORT_CMD_PMP		= BIT(17), /* PMP attached */
++	PORT_CMD_LIST_ON	= BIT(15), /* cmd list DMA engine running */
++	PORT_CMD_FIS_ON		= BIT(14), /* FIS DMA engine running */
++	PORT_CMD_FIS_RX		= BIT(4),  /* Enable FIS receive DMA engine */
++	PORT_CMD_CLO		= BIT(3),  /* Command list override */
++	PORT_CMD_POWER_ON	= BIT(2),  /* Power up device */
++	PORT_CMD_SPIN_UP	= BIT(1),  /* Spin up device */
++	PORT_CMD_START		= BIT(0),  /* Enable port DMA engine */
++
++	PORT_CMD_ICC_MASK	= (0xfu << 28), /* i/f ICC state mask */
++	PORT_CMD_ICC_ACTIVE	= (0x1u << 28), /* Put i/f in active state */
++	PORT_CMD_ICC_PARTIAL	= (0x2u << 28), /* Put i/f in partial state */
++	PORT_CMD_ICC_SLUMBER	= (0x6u << 28), /* Put i/f in slumber state */
+ 
+ 	/* PORT_FBS bits */
+ 	PORT_FBS_DWE_OFFSET	= 16, /* FBS device with error offset */
+ 	PORT_FBS_ADO_OFFSET	= 12, /* FBS active dev optimization offset */
+ 	PORT_FBS_DEV_OFFSET	= 8,  /* FBS device to issue offset */
+ 	PORT_FBS_DEV_MASK	= (0xf << PORT_FBS_DEV_OFFSET),  /* FBS.DEV */
+-	PORT_FBS_SDE		= (1 << 2), /* FBS single device error */
+-	PORT_FBS_DEC		= (1 << 1), /* FBS device error clear */
+-	PORT_FBS_EN		= (1 << 0), /* Enable FBS */
++	PORT_FBS_SDE		= BIT(2), /* FBS single device error */
++	PORT_FBS_DEC		= BIT(1), /* FBS device error clear */
++	PORT_FBS_EN		= BIT(0), /* Enable FBS */
+ 
+ 	/* PORT_DEVSLP bits */
+ 	PORT_DEVSLP_DM_OFFSET	= 25,             /* DITO multiplier offset */
+@@ -197,52 +198,52 @@ enum {
+ 	PORT_DEVSLP_DITO_OFFSET	= 15,             /* DITO offset */
+ 	PORT_DEVSLP_MDAT_OFFSET	= 10,             /* Minimum assertion time */
+ 	PORT_DEVSLP_DETO_OFFSET	= 2,              /* DevSlp exit timeout */
+-	PORT_DEVSLP_DSP		= (1 << 1),       /* DevSlp present */
+-	PORT_DEVSLP_ADSE	= (1 << 0),       /* Aggressive DevSlp enable */
++	PORT_DEVSLP_DSP		= BIT(1),         /* DevSlp present */
++	PORT_DEVSLP_ADSE	= BIT(0),         /* Aggressive DevSlp enable */
+ 
+ 	/* hpriv->flags bits */
+ 
+ #define AHCI_HFLAGS(flags)		.private_data	= (void *)(flags)
+ 
+-	AHCI_HFLAG_NO_NCQ		= (1 << 0),
+-	AHCI_HFLAG_IGN_IRQ_IF_ERR	= (1 << 1), /* ignore IRQ_IF_ERR */
+-	AHCI_HFLAG_IGN_SERR_INTERNAL	= (1 << 2), /* ignore SERR_INTERNAL */
+-	AHCI_HFLAG_32BIT_ONLY		= (1 << 3), /* force 32bit */
+-	AHCI_HFLAG_MV_PATA		= (1 << 4), /* PATA port */
+-	AHCI_HFLAG_NO_MSI		= (1 << 5), /* no PCI MSI */
+-	AHCI_HFLAG_NO_PMP		= (1 << 6), /* no PMP */
+-	AHCI_HFLAG_SECT255		= (1 << 8), /* max 255 sectors */
+-	AHCI_HFLAG_YES_NCQ		= (1 << 9), /* force NCQ cap on */
+-	AHCI_HFLAG_NO_SUSPEND		= (1 << 10), /* don't suspend */
+-	AHCI_HFLAG_SRST_TOUT_IS_OFFLINE	= (1 << 11), /* treat SRST timeout as
+-							link offline */
+-	AHCI_HFLAG_NO_SNTF		= (1 << 12), /* no sntf */
+-	AHCI_HFLAG_NO_FPDMA_AA		= (1 << 13), /* no FPDMA AA */
+-	AHCI_HFLAG_YES_FBS		= (1 << 14), /* force FBS cap on */
+-	AHCI_HFLAG_DELAY_ENGINE		= (1 << 15), /* do not start engine on
+-						        port start (wait until
+-						        error-handling stage) */
+-	AHCI_HFLAG_NO_DEVSLP		= (1 << 17), /* no device sleep */
+-	AHCI_HFLAG_NO_FBS		= (1 << 18), /* no FBS */
++	AHCI_HFLAG_NO_NCQ		= BIT(0),
++	AHCI_HFLAG_IGN_IRQ_IF_ERR	= BIT(1), /* ignore IRQ_IF_ERR */
++	AHCI_HFLAG_IGN_SERR_INTERNAL	= BIT(2), /* ignore SERR_INTERNAL */
++	AHCI_HFLAG_32BIT_ONLY		= BIT(3), /* force 32bit */
++	AHCI_HFLAG_MV_PATA		= BIT(4), /* PATA port */
++	AHCI_HFLAG_NO_MSI		= BIT(5), /* no PCI MSI */
++	AHCI_HFLAG_NO_PMP		= BIT(6), /* no PMP */
++	AHCI_HFLAG_SECT255		= BIT(8), /* max 255 sectors */
++	AHCI_HFLAG_YES_NCQ		= BIT(9), /* force NCQ cap on */
++	AHCI_HFLAG_NO_SUSPEND		= BIT(10), /* don't suspend */
++	AHCI_HFLAG_SRST_TOUT_IS_OFFLINE	= BIT(11), /* treat SRST timeout as
++						      link offline */
++	AHCI_HFLAG_NO_SNTF		= BIT(12), /* no sntf */
++	AHCI_HFLAG_NO_FPDMA_AA		= BIT(13), /* no FPDMA AA */
++	AHCI_HFLAG_YES_FBS		= BIT(14), /* force FBS cap on */
++	AHCI_HFLAG_DELAY_ENGINE		= BIT(15), /* do not start engine on
++						      port start (wait until
++						      error-handling stage) */
++	AHCI_HFLAG_NO_DEVSLP		= BIT(17), /* no device sleep */
++	AHCI_HFLAG_NO_FBS		= BIT(18), /* no FBS */
+ 
+ #ifdef CONFIG_PCI_MSI
+-	AHCI_HFLAG_MULTI_MSI		= (1 << 20), /* per-port MSI(-X) */
++	AHCI_HFLAG_MULTI_MSI		= BIT(20), /* per-port MSI(-X) */
+ #else
+ 	/* compile out MSI infrastructure */
+ 	AHCI_HFLAG_MULTI_MSI		= 0,
+ #endif
+-	AHCI_HFLAG_WAKE_BEFORE_STOP	= (1 << 22), /* wake before DMA stop */
+-	AHCI_HFLAG_YES_ALPM		= (1 << 23), /* force ALPM cap on */
+-	AHCI_HFLAG_NO_WRITE_TO_RO	= (1 << 24), /* don't write to read
+-							only registers */
+-	AHCI_HFLAG_IS_MOBILE		= (1 << 25), /* mobile chipset, use
+-							SATA_MOBILE_LPM_POLICY
+-							as default lpm_policy */
+-	AHCI_HFLAG_SUSPEND_PHYS		= (1 << 26), /* handle PHYs during
+-							suspend/resume */
+-	AHCI_HFLAG_IGN_NOTSUPP_POWER_ON	= (1 << 27), /* ignore -EOPNOTSUPP
+-							from phy_power_on() */
+-	AHCI_HFLAG_NO_SXS		= (1 << 28), /* SXS not supported */
++	AHCI_HFLAG_WAKE_BEFORE_STOP	= BIT(22), /* wake before DMA stop */
++	AHCI_HFLAG_YES_ALPM		= BIT(23), /* force ALPM cap on */
++	AHCI_HFLAG_NO_WRITE_TO_RO	= BIT(24), /* don't write to read
++						      only registers */
++	AHCI_HFLAG_IS_MOBILE            = BIT(25), /* mobile chipset, use
++						      SATA_MOBILE_LPM_POLICY
++						      as default lpm_policy */
++	AHCI_HFLAG_SUSPEND_PHYS		= BIT(26), /* handle PHYs during
++						      suspend/resume */
++	AHCI_HFLAG_IGN_NOTSUPP_POWER_ON	= BIT(27), /* ignore -EOPNOTSUPP
++						      from phy_power_on() */
++	AHCI_HFLAG_NO_SXS		= BIT(28), /* SXS not supported */
+ 
+ 	/* ap->flags bits */
+ 
+@@ -258,22 +259,22 @@ enum {
+ 	EM_MAX_RETRY			= 5,
+ 
+ 	/* em_ctl bits */
+-	EM_CTL_RST		= (1 << 9), /* Reset */
+-	EM_CTL_TM		= (1 << 8), /* Transmit Message */
+-	EM_CTL_MR		= (1 << 0), /* Message Received */
+-	EM_CTL_ALHD		= (1 << 26), /* Activity LED */
+-	EM_CTL_XMT		= (1 << 25), /* Transmit Only */
+-	EM_CTL_SMB		= (1 << 24), /* Single Message Buffer */
+-	EM_CTL_SGPIO		= (1 << 19), /* SGPIO messages supported */
+-	EM_CTL_SES		= (1 << 18), /* SES-2 messages supported */
+-	EM_CTL_SAFTE		= (1 << 17), /* SAF-TE messages supported */
+-	EM_CTL_LED		= (1 << 16), /* LED messages supported */
++	EM_CTL_RST		= BIT(9), /* Reset */
++	EM_CTL_TM		= BIT(8), /* Transmit Message */
++	EM_CTL_MR		= BIT(0), /* Message Received */
++	EM_CTL_ALHD		= BIT(26), /* Activity LED */
++	EM_CTL_XMT		= BIT(25), /* Transmit Only */
++	EM_CTL_SMB		= BIT(24), /* Single Message Buffer */
++	EM_CTL_SGPIO		= BIT(19), /* SGPIO messages supported */
++	EM_CTL_SES		= BIT(18), /* SES-2 messages supported */
++	EM_CTL_SAFTE		= BIT(17), /* SAF-TE messages supported */
++	EM_CTL_LED		= BIT(16), /* LED messages supported */
+ 
+ 	/* em message type */
+-	EM_MSG_TYPE_LED		= (1 << 0), /* LED */
+-	EM_MSG_TYPE_SAFTE	= (1 << 1), /* SAF-TE */
+-	EM_MSG_TYPE_SES2	= (1 << 2), /* SES-2 */
+-	EM_MSG_TYPE_SGPIO	= (1 << 3), /* SGPIO */
++	EM_MSG_TYPE_LED		= BIT(0), /* LED */
++	EM_MSG_TYPE_SAFTE	= BIT(1), /* SAF-TE */
++	EM_MSG_TYPE_SES2	= BIT(2), /* SES-2 */
++	EM_MSG_TYPE_SGPIO	= BIT(3), /* SGPIO */
+ };
+ 
+ struct ahci_cmd_hdr {
+diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
+index 9617688b58b32..408c7428cca5e 100644
+--- a/drivers/block/Kconfig
++++ b/drivers/block/Kconfig
+@@ -293,15 +293,6 @@ config BLK_DEV_SKD
+ 
+ 	Use device /dev/skd$N amd /dev/skd$Np$M.
+ 
+-config BLK_DEV_SX8
+-	tristate "Promise SATA SX8 support"
+-	depends on PCI
+-	help
+-	  Saying Y or M here will enable support for the 
+-	  Promise SATA SX8 controllers.
+-
+-	  Use devices /dev/sx8/$N and /dev/sx8/$Np$M.
+-
+ config BLK_DEV_RAM
+ 	tristate "RAM block device support"
+ 	help
+diff --git a/drivers/block/Makefile b/drivers/block/Makefile
+index a3170859e01d4..24427da7dd64a 100644
+--- a/drivers/block/Makefile
++++ b/drivers/block/Makefile
+@@ -29,8 +29,6 @@ obj-$(CONFIG_BLK_DEV_NBD)	+= nbd.o
+ obj-$(CONFIG_BLK_DEV_CRYPTOLOOP) += cryptoloop.o
+ obj-$(CONFIG_VIRTIO_BLK)	+= virtio_blk.o
+ 
+-obj-$(CONFIG_BLK_DEV_SX8)	+= sx8.o
+-
+ obj-$(CONFIG_XEN_BLKDEV_FRONTEND)	+= xen-blkfront.o
+ obj-$(CONFIG_XEN_BLKDEV_BACKEND)	+= xen-blkback/
+ obj-$(CONFIG_BLK_DEV_DRBD)     += drbd/
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 932d4bb8e4035..63491748dc8d7 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -1397,14 +1397,30 @@ static bool rbd_obj_is_tail(struct rbd_obj_request *obj_req)
+ /*
+  * Must be called after rbd_obj_calc_img_extents().
+  */
+-static bool rbd_obj_copyup_enabled(struct rbd_obj_request *obj_req)
++static void rbd_obj_set_copyup_enabled(struct rbd_obj_request *obj_req)
+ {
+-	if (!obj_req->num_img_extents ||
+-	    (rbd_obj_is_entire(obj_req) &&
+-	     !obj_req->img_request->snapc->num_snaps))
+-		return false;
++	rbd_assert(obj_req->img_request->snapc);
+ 
+-	return true;
++	if (obj_req->img_request->op_type == OBJ_OP_DISCARD) {
++		dout("%s %p objno %llu discard\n", __func__, obj_req,
++		     obj_req->ex.oe_objno);
++		return;
++	}
++
++	if (!obj_req->num_img_extents) {
++		dout("%s %p objno %llu not overlapping\n", __func__, obj_req,
++		     obj_req->ex.oe_objno);
++		return;
++	}
++
++	if (rbd_obj_is_entire(obj_req) &&
++	    !obj_req->img_request->snapc->num_snaps) {
++		dout("%s %p objno %llu entire\n", __func__, obj_req,
++		     obj_req->ex.oe_objno);
++		return;
++	}
++
++	obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED;
+ }
+ 
+ static u64 rbd_obj_img_extents_bytes(struct rbd_obj_request *obj_req)
+@@ -1505,6 +1521,7 @@ __rbd_obj_add_osd_request(struct rbd_obj_request *obj_req,
+ static struct ceph_osd_request *
+ rbd_obj_add_osd_request(struct rbd_obj_request *obj_req, int num_ops)
+ {
++	rbd_assert(obj_req->img_request->snapc);
+ 	return __rbd_obj_add_osd_request(obj_req, obj_req->img_request->snapc,
+ 					 num_ops);
+ }
+@@ -1641,15 +1658,18 @@ static void rbd_img_request_init(struct rbd_img_request *img_request,
+ 	mutex_init(&img_request->state_mutex);
+ }
+ 
++/*
++ * Only snap_id is captured here, for reads.  For writes, snapshot
++ * context is captured in rbd_img_object_requests() after exclusive
++ * lock is ensured to be held.
++ */
+ static void rbd_img_capture_header(struct rbd_img_request *img_req)
+ {
+ 	struct rbd_device *rbd_dev = img_req->rbd_dev;
+ 
+ 	lockdep_assert_held(&rbd_dev->header_rwsem);
+ 
+-	if (rbd_img_is_write(img_req))
+-		img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc);
+-	else
++	if (!rbd_img_is_write(img_req))
+ 		img_req->snap_id = rbd_dev->spec->snap_id;
+ 
+ 	if (rbd_dev_parent_get(rbd_dev))
+@@ -2296,9 +2316,6 @@ static int rbd_obj_init_write(struct rbd_obj_request *obj_req)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (rbd_obj_copyup_enabled(obj_req))
+-		obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED;
+-
+ 	obj_req->write_state = RBD_OBJ_WRITE_START;
+ 	return 0;
+ }
+@@ -2404,8 +2421,6 @@ static int rbd_obj_init_zeroout(struct rbd_obj_request *obj_req)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (rbd_obj_copyup_enabled(obj_req))
+-		obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED;
+ 	if (!obj_req->num_img_extents) {
+ 		obj_req->flags |= RBD_OBJ_FLAG_NOOP_FOR_NONEXISTENT;
+ 		if (rbd_obj_is_entire(obj_req))
+@@ -3351,6 +3366,7 @@ again:
+ 	case RBD_OBJ_WRITE_START:
+ 		rbd_assert(!*result);
+ 
++		rbd_obj_set_copyup_enabled(obj_req);
+ 		if (rbd_obj_write_is_noop(obj_req))
+ 			return true;
+ 
+@@ -3537,9 +3553,19 @@ static int rbd_img_exclusive_lock(struct rbd_img_request *img_req)
+ 
+ static void rbd_img_object_requests(struct rbd_img_request *img_req)
+ {
++	struct rbd_device *rbd_dev = img_req->rbd_dev;
+ 	struct rbd_obj_request *obj_req;
+ 
+ 	rbd_assert(!img_req->pending.result && !img_req->pending.num_pending);
++	rbd_assert(!need_exclusive_lock(img_req) ||
++		   __rbd_is_lock_owner(rbd_dev));
++
++	if (rbd_img_is_write(img_req)) {
++		rbd_assert(!img_req->snapc);
++		down_read(&rbd_dev->header_rwsem);
++		img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc);
++		up_read(&rbd_dev->header_rwsem);
++	}
+ 
+ 	for_each_obj_request(img_req, obj_req) {
+ 		int result = 0;
+@@ -3557,7 +3583,6 @@ static void rbd_img_object_requests(struct rbd_img_request *img_req)
+ 
+ static bool rbd_img_advance(struct rbd_img_request *img_req, int *result)
+ {
+-	struct rbd_device *rbd_dev = img_req->rbd_dev;
+ 	int ret;
+ 
+ again:
+@@ -3578,9 +3603,6 @@ again:
+ 		if (*result)
+ 			return true;
+ 
+-		rbd_assert(!need_exclusive_lock(img_req) ||
+-			   __rbd_is_lock_owner(rbd_dev));
+-
+ 		rbd_img_object_requests(img_req);
+ 		if (!img_req->pending.num_pending) {
+ 			*result = img_req->pending.result;
+@@ -4038,6 +4060,10 @@ static int rbd_post_acquire_action(struct rbd_device *rbd_dev)
+ {
+ 	int ret;
+ 
++	ret = rbd_dev_refresh(rbd_dev);
++	if (ret)
++		return ret;
++
+ 	if (rbd_dev->header.features & RBD_FEATURE_OBJECT_MAP) {
+ 		ret = rbd_object_map_open(rbd_dev);
+ 		if (ret)
+diff --git a/drivers/block/sx8.c b/drivers/block/sx8.c
+deleted file mode 100644
+index 4478eb7efee0b..0000000000000
+--- a/drivers/block/sx8.c
++++ /dev/null
+@@ -1,1586 +0,0 @@
+-/*
+- *  sx8.c: Driver for Promise SATA SX8 looks-like-I2O hardware
+- *
+- *  Copyright 2004-2005 Red Hat, Inc.
+- *
+- *  Author/maintainer:  Jeff Garzik <jgarzik@pobox.com>
+- *
+- *  This file is subject to the terms and conditions of the GNU General Public
+- *  License.  See the file "COPYING" in the main directory of this archive
+- *  for more details.
+- */
+-
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-#include <linux/init.h>
+-#include <linux/pci.h>
+-#include <linux/slab.h>
+-#include <linux/spinlock.h>
+-#include <linux/blk-mq.h>
+-#include <linux/sched.h>
+-#include <linux/interrupt.h>
+-#include <linux/compiler.h>
+-#include <linux/workqueue.h>
+-#include <linux/bitops.h>
+-#include <linux/delay.h>
+-#include <linux/ktime.h>
+-#include <linux/hdreg.h>
+-#include <linux/dma-mapping.h>
+-#include <linux/completion.h>
+-#include <linux/scatterlist.h>
+-#include <asm/io.h>
+-#include <linux/uaccess.h>
+-
+-#if 0
+-#define CARM_DEBUG
+-#define CARM_VERBOSE_DEBUG
+-#else
+-#undef CARM_DEBUG
+-#undef CARM_VERBOSE_DEBUG
+-#endif
+-#undef CARM_NDEBUG
+-
+-#define DRV_NAME "sx8"
+-#define DRV_VERSION "1.0"
+-#define PFX DRV_NAME ": "
+-
+-MODULE_AUTHOR("Jeff Garzik");
+-MODULE_LICENSE("GPL");
+-MODULE_DESCRIPTION("Promise SATA SX8 block driver");
+-MODULE_VERSION(DRV_VERSION);
+-
+-/*
+- * SX8 hardware has a single message queue for all ATA ports.
+- * When this driver was written, the hardware (firmware?) would
+- * corrupt data eventually, if more than one request was outstanding.
+- * As one can imagine, having 8 ports bottlenecking on a single
+- * command hurts performance.
+- *
+- * Based on user reports, later versions of the hardware (firmware?)
+- * seem to be able to survive with more than one command queued.
+- *
+- * Therefore, we default to the safe option -- 1 command -- but
+- * allow the user to increase this.
+- *
+- * SX8 should be able to support up to ~60 queued commands (CARM_MAX_REQ),
+- * but problems seem to occur when you exceed ~30, even on newer hardware.
+- */
+-static int max_queue = 1;
+-module_param(max_queue, int, 0444);
+-MODULE_PARM_DESC(max_queue, "Maximum number of queued commands. (min==1, max==30, safe==1)");
+-
+-
+-#define NEXT_RESP(idx)	((idx + 1) % RMSG_Q_LEN)
+-
+-/* 0xf is just arbitrary, non-zero noise; this is sorta like poisoning */
+-#define TAG_ENCODE(tag)	(((tag) << 16) | 0xf)
+-#define TAG_DECODE(tag)	(((tag) >> 16) & 0x1f)
+-#define TAG_VALID(tag)	((((tag) & 0xf) == 0xf) && (TAG_DECODE(tag) < 32))
+-
+-/* note: prints function name for you */
+-#ifdef CARM_DEBUG
+-#define DPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __func__, ## args)
+-#ifdef CARM_VERBOSE_DEBUG
+-#define VPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __func__, ## args)
+-#else
+-#define VPRINTK(fmt, args...)
+-#endif	/* CARM_VERBOSE_DEBUG */
+-#else
+-#define DPRINTK(fmt, args...)
+-#define VPRINTK(fmt, args...)
+-#endif	/* CARM_DEBUG */
+-
+-#ifdef CARM_NDEBUG
+-#define assert(expr)
+-#else
+-#define assert(expr) \
+-        if(unlikely(!(expr))) {                                   \
+-        printk(KERN_ERR "Assertion failed! %s,%s,%s,line=%d\n", \
+-	#expr, __FILE__, __func__, __LINE__);          \
+-        }
+-#endif
+-
+-/* defines only for the constants which don't work well as enums */
+-struct carm_host;
+-
+-enum {
+-	/* adapter-wide limits */
+-	CARM_MAX_PORTS		= 8,
+-	CARM_SHM_SIZE		= (4096 << 7),
+-	CARM_MINORS_PER_MAJOR	= 256 / CARM_MAX_PORTS,
+-	CARM_MAX_WAIT_Q		= CARM_MAX_PORTS + 1,
+-
+-	/* command message queue limits */
+-	CARM_MAX_REQ		= 64,	       /* max command msgs per host */
+-	CARM_MSG_LOW_WATER	= (CARM_MAX_REQ / 4),	     /* refill mark */
+-
+-	/* S/G limits, host-wide and per-request */
+-	CARM_MAX_REQ_SG		= 32,	     /* max s/g entries per request */
+-	CARM_MAX_HOST_SG	= 600,		/* max s/g entries per host */
+-	CARM_SG_LOW_WATER	= (CARM_MAX_HOST_SG / 4),   /* re-fill mark */
+-
+-	/* hardware registers */
+-	CARM_IHQP		= 0x1c,
+-	CARM_INT_STAT		= 0x10, /* interrupt status */
+-	CARM_INT_MASK		= 0x14, /* interrupt mask */
+-	CARM_HMUC		= 0x18, /* host message unit control */
+-	RBUF_ADDR_LO		= 0x20, /* response msg DMA buf low 32 bits */
+-	RBUF_ADDR_HI		= 0x24, /* response msg DMA buf high 32 bits */
+-	RBUF_BYTE_SZ		= 0x28,
+-	CARM_RESP_IDX		= 0x2c,
+-	CARM_CMS0		= 0x30, /* command message size reg 0 */
+-	CARM_LMUC		= 0x48,
+-	CARM_HMPHA		= 0x6c,
+-	CARM_INITC		= 0xb5,
+-
+-	/* bits in CARM_INT_{STAT,MASK} */
+-	INT_RESERVED		= 0xfffffff0,
+-	INT_WATCHDOG		= (1 << 3),	/* watchdog timer */
+-	INT_Q_OVERFLOW		= (1 << 2),	/* cmd msg q overflow */
+-	INT_Q_AVAILABLE		= (1 << 1),	/* cmd msg q has free space */
+-	INT_RESPONSE		= (1 << 0),	/* response msg available */
+-	INT_ACK_MASK		= INT_WATCHDOG | INT_Q_OVERFLOW,
+-	INT_DEF_MASK		= INT_RESERVED | INT_Q_OVERFLOW |
+-				  INT_RESPONSE,
+-
+-	/* command messages, and related register bits */
+-	CARM_HAVE_RESP		= 0x01,
+-	CARM_MSG_READ		= 1,
+-	CARM_MSG_WRITE		= 2,
+-	CARM_MSG_VERIFY		= 3,
+-	CARM_MSG_GET_CAPACITY	= 4,
+-	CARM_MSG_FLUSH		= 5,
+-	CARM_MSG_IOCTL		= 6,
+-	CARM_MSG_ARRAY		= 8,
+-	CARM_MSG_MISC		= 9,
+-	CARM_CME		= (1 << 2),
+-	CARM_RME		= (1 << 1),
+-	CARM_WZBC		= (1 << 0),
+-	CARM_RMI		= (1 << 0),
+-	CARM_Q_FULL		= (1 << 3),
+-	CARM_MSG_SIZE		= 288,
+-	CARM_Q_LEN		= 48,
+-
+-	/* CARM_MSG_IOCTL messages */
+-	CARM_IOC_SCAN_CHAN	= 5,	/* scan channels for devices */
+-	CARM_IOC_GET_TCQ	= 13,	/* get tcq/ncq depth */
+-	CARM_IOC_SET_TCQ	= 14,	/* set tcq/ncq depth */
+-
+-	IOC_SCAN_CHAN_NODEV	= 0x1f,
+-	IOC_SCAN_CHAN_OFFSET	= 0x40,
+-
+-	/* CARM_MSG_ARRAY messages */
+-	CARM_ARRAY_INFO		= 0,
+-
+-	ARRAY_NO_EXIST		= (1 << 31),
+-
+-	/* response messages */
+-	RMSG_SZ			= 8,	/* sizeof(struct carm_response) */
+-	RMSG_Q_LEN		= 48,	/* resp. msg list length */
+-	RMSG_OK			= 1,	/* bit indicating msg was successful */
+-					/* length of entire resp. msg buffer */
+-	RBUF_LEN		= RMSG_SZ * RMSG_Q_LEN,
+-
+-	PDC_SHM_SIZE		= (4096 << 7), /* length of entire h/w buffer */
+-
+-	/* CARM_MSG_MISC messages */
+-	MISC_GET_FW_VER		= 2,
+-	MISC_ALLOC_MEM		= 3,
+-	MISC_SET_TIME		= 5,
+-
+-	/* MISC_GET_FW_VER feature bits */
+-	FW_VER_4PORT		= (1 << 2), /* 1=4 ports, 0=8 ports */
+-	FW_VER_NON_RAID		= (1 << 1), /* 1=non-RAID firmware, 0=RAID */
+-	FW_VER_ZCR		= (1 << 0), /* zero channel RAID (whatever that is) */
+-
+-	/* carm_host flags */
+-	FL_NON_RAID		= FW_VER_NON_RAID,
+-	FL_4PORT		= FW_VER_4PORT,
+-	FL_FW_VER_MASK		= (FW_VER_NON_RAID | FW_VER_4PORT),
+-	FL_DYN_MAJOR		= (1 << 17),
+-};
+-
+-enum {
+-	CARM_SG_BOUNDARY	= 0xffffUL,	    /* s/g segment boundary */
+-};
+-
+-enum scatter_gather_types {
+-	SGT_32BIT		= 0,
+-	SGT_64BIT		= 1,
+-};
+-
+-enum host_states {
+-	HST_INVALID,		/* invalid state; never used */
+-	HST_ALLOC_BUF,		/* setting up master SHM area */
+-	HST_ERROR,		/* we never leave here */
+-	HST_PORT_SCAN,		/* start dev scan */
+-	HST_DEV_SCAN_START,	/* start per-device probe */
+-	HST_DEV_SCAN,		/* continue per-device probe */
+-	HST_DEV_ACTIVATE,	/* activate devices we found */
+-	HST_PROBE_FINISHED,	/* probe is complete */
+-	HST_PROBE_START,	/* initiate probe */
+-	HST_SYNC_TIME,		/* tell firmware what time it is */
+-	HST_GET_FW_VER,		/* get firmware version, adapter port cnt */
+-};
+-
+-#ifdef CARM_DEBUG
+-static const char *state_name[] = {
+-	"HST_INVALID",
+-	"HST_ALLOC_BUF",
+-	"HST_ERROR",
+-	"HST_PORT_SCAN",
+-	"HST_DEV_SCAN_START",
+-	"HST_DEV_SCAN",
+-	"HST_DEV_ACTIVATE",
+-	"HST_PROBE_FINISHED",
+-	"HST_PROBE_START",
+-	"HST_SYNC_TIME",
+-	"HST_GET_FW_VER",
+-};
+-#endif
+-
+-struct carm_port {
+-	unsigned int			port_no;
+-	struct gendisk			*disk;
+-	struct carm_host		*host;
+-
+-	/* attached device characteristics */
+-	u64				capacity;
+-	char				name[41];
+-	u16				dev_geom_head;
+-	u16				dev_geom_sect;
+-	u16				dev_geom_cyl;
+-};
+-
+-struct carm_request {
+-	int				n_elem;
+-	unsigned int			msg_type;
+-	unsigned int			msg_subtype;
+-	unsigned int			msg_bucket;
+-	struct scatterlist		sg[CARM_MAX_REQ_SG];
+-};
+-
+-struct carm_host {
+-	unsigned long			flags;
+-	void				__iomem *mmio;
+-	void				*shm;
+-	dma_addr_t			shm_dma;
+-
+-	int				major;
+-	int				id;
+-	char				name[32];
+-
+-	spinlock_t			lock;
+-	struct pci_dev			*pdev;
+-	unsigned int			state;
+-	u32				fw_ver;
+-
+-	struct blk_mq_tag_set		tag_set;
+-	struct request_queue		*oob_q;
+-	unsigned int			n_oob;
+-
+-	unsigned int			hw_sg_used;
+-
+-	unsigned int			resp_idx;
+-
+-	unsigned int			wait_q_prod;
+-	unsigned int			wait_q_cons;
+-	struct request_queue		*wait_q[CARM_MAX_WAIT_Q];
+-
+-	void				*msg_base;
+-	dma_addr_t			msg_dma;
+-
+-	int				cur_scan_dev;
+-	unsigned long			dev_active;
+-	unsigned long			dev_present;
+-	struct carm_port		port[CARM_MAX_PORTS];
+-
+-	struct work_struct		fsm_task;
+-
+-	struct completion		probe_comp;
+-};
+-
+-struct carm_response {
+-	__le32 ret_handle;
+-	__le32 status;
+-}  __attribute__((packed));
+-
+-struct carm_msg_sg {
+-	__le32 start;
+-	__le32 len;
+-}  __attribute__((packed));
+-
+-struct carm_msg_rw {
+-	u8 type;
+-	u8 id;
+-	u8 sg_count;
+-	u8 sg_type;
+-	__le32 handle;
+-	__le32 lba;
+-	__le16 lba_count;
+-	__le16 lba_high;
+-	struct carm_msg_sg sg[32];
+-}  __attribute__((packed));
+-
+-struct carm_msg_allocbuf {
+-	u8 type;
+-	u8 subtype;
+-	u8 n_sg;
+-	u8 sg_type;
+-	__le32 handle;
+-	__le32 addr;
+-	__le32 len;
+-	__le32 evt_pool;
+-	__le32 n_evt;
+-	__le32 rbuf_pool;
+-	__le32 n_rbuf;
+-	__le32 msg_pool;
+-	__le32 n_msg;
+-	struct carm_msg_sg sg[8];
+-}  __attribute__((packed));
+-
+-struct carm_msg_ioctl {
+-	u8 type;
+-	u8 subtype;
+-	u8 array_id;
+-	u8 reserved1;
+-	__le32 handle;
+-	__le32 data_addr;
+-	u32 reserved2;
+-}  __attribute__((packed));
+-
+-struct carm_msg_sync_time {
+-	u8 type;
+-	u8 subtype;
+-	u16 reserved1;
+-	__le32 handle;
+-	u32 reserved2;
+-	__le32 timestamp;
+-}  __attribute__((packed));
+-
+-struct carm_msg_get_fw_ver {
+-	u8 type;
+-	u8 subtype;
+-	u16 reserved1;
+-	__le32 handle;
+-	__le32 data_addr;
+-	u32 reserved2;
+-}  __attribute__((packed));
+-
+-struct carm_fw_ver {
+-	__le32 version;
+-	u8 features;
+-	u8 reserved1;
+-	u16 reserved2;
+-}  __attribute__((packed));
+-
+-struct carm_array_info {
+-	__le32 size;
+-
+-	__le16 size_hi;
+-	__le16 stripe_size;
+-
+-	__le32 mode;
+-
+-	__le16 stripe_blk_sz;
+-	__le16 reserved1;
+-
+-	__le16 cyl;
+-	__le16 head;
+-
+-	__le16 sect;
+-	u8 array_id;
+-	u8 reserved2;
+-
+-	char name[40];
+-
+-	__le32 array_status;
+-
+-	/* device list continues beyond this point? */
+-}  __attribute__((packed));
+-
+-static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent);
+-static void carm_remove_one (struct pci_dev *pdev);
+-static int carm_bdev_getgeo(struct block_device *bdev, struct hd_geometry *geo);
+-
+-static const struct pci_device_id carm_pci_tbl[] = {
+-	{ PCI_VENDOR_ID_PROMISE, 0x8000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
+-	{ PCI_VENDOR_ID_PROMISE, 0x8002, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
+-	{ }	/* terminate list */
+-};
+-MODULE_DEVICE_TABLE(pci, carm_pci_tbl);
+-
+-static struct pci_driver carm_driver = {
+-	.name		= DRV_NAME,
+-	.id_table	= carm_pci_tbl,
+-	.probe		= carm_init_one,
+-	.remove		= carm_remove_one,
+-};
+-
+-static const struct block_device_operations carm_bd_ops = {
+-	.owner		= THIS_MODULE,
+-	.getgeo		= carm_bdev_getgeo,
+-};
+-
+-static unsigned int carm_host_id;
+-static unsigned long carm_major_alloc;
+-
+-
+-
+-static int carm_bdev_getgeo(struct block_device *bdev, struct hd_geometry *geo)
+-{
+-	struct carm_port *port = bdev->bd_disk->private_data;
+-
+-	geo->heads = (u8) port->dev_geom_head;
+-	geo->sectors = (u8) port->dev_geom_sect;
+-	geo->cylinders = port->dev_geom_cyl;
+-	return 0;
+-}
+-
+-static const u32 msg_sizes[] = { 32, 64, 128, CARM_MSG_SIZE };
+-
+-static inline int carm_lookup_bucket(u32 msg_size)
+-{
+-	int i;
+-
+-	for (i = 0; i < ARRAY_SIZE(msg_sizes); i++)
+-		if (msg_size <= msg_sizes[i])
+-			return i;
+-
+-	return -ENOENT;
+-}
+-
+-static void carm_init_buckets(void __iomem *mmio)
+-{
+-	unsigned int i;
+-
+-	for (i = 0; i < ARRAY_SIZE(msg_sizes); i++)
+-		writel(msg_sizes[i], mmio + CARM_CMS0 + (4 * i));
+-}
+-
+-static inline void *carm_ref_msg(struct carm_host *host,
+-				 unsigned int msg_idx)
+-{
+-	return host->msg_base + (msg_idx * CARM_MSG_SIZE);
+-}
+-
+-static inline dma_addr_t carm_ref_msg_dma(struct carm_host *host,
+-					  unsigned int msg_idx)
+-{
+-	return host->msg_dma + (msg_idx * CARM_MSG_SIZE);
+-}
+-
+-static int carm_send_msg(struct carm_host *host,
+-			 struct carm_request *crq, unsigned tag)
+-{
+-	void __iomem *mmio = host->mmio;
+-	u32 msg = (u32) carm_ref_msg_dma(host, tag);
+-	u32 cm_bucket = crq->msg_bucket;
+-	u32 tmp;
+-	int rc = 0;
+-
+-	VPRINTK("ENTER\n");
+-
+-	tmp = readl(mmio + CARM_HMUC);
+-	if (tmp & CARM_Q_FULL) {
+-#if 0
+-		tmp = readl(mmio + CARM_INT_MASK);
+-		tmp |= INT_Q_AVAILABLE;
+-		writel(tmp, mmio + CARM_INT_MASK);
+-		readl(mmio + CARM_INT_MASK);	/* flush */
+-#endif
+-		DPRINTK("host msg queue full\n");
+-		rc = -EBUSY;
+-	} else {
+-		writel(msg | (cm_bucket << 1), mmio + CARM_IHQP);
+-		readl(mmio + CARM_IHQP);	/* flush */
+-	}
+-
+-	return rc;
+-}
+-
+-static int carm_array_info (struct carm_host *host, unsigned int array_idx)
+-{
+-	struct carm_msg_ioctl *ioc;
+-	u32 msg_data;
+-	dma_addr_t msg_dma;
+-	struct carm_request *crq;
+-	struct request *rq;
+-	int rc;
+-
+-	rq = blk_mq_alloc_request(host->oob_q, REQ_OP_DRV_OUT, 0);
+-	if (IS_ERR(rq)) {
+-		rc = -ENOMEM;
+-		goto err_out;
+-	}
+-	crq = blk_mq_rq_to_pdu(rq);
+-
+-	ioc = carm_ref_msg(host, rq->tag);
+-	msg_dma = carm_ref_msg_dma(host, rq->tag);
+-	msg_data = (u32) (msg_dma + sizeof(struct carm_array_info));
+-
+-	crq->msg_type = CARM_MSG_ARRAY;
+-	crq->msg_subtype = CARM_ARRAY_INFO;
+-	rc = carm_lookup_bucket(sizeof(struct carm_msg_ioctl) +
+-				sizeof(struct carm_array_info));
+-	BUG_ON(rc < 0);
+-	crq->msg_bucket = (u32) rc;
+-
+-	memset(ioc, 0, sizeof(*ioc));
+-	ioc->type	= CARM_MSG_ARRAY;
+-	ioc->subtype	= CARM_ARRAY_INFO;
+-	ioc->array_id	= (u8) array_idx;
+-	ioc->handle	= cpu_to_le32(TAG_ENCODE(rq->tag));
+-	ioc->data_addr	= cpu_to_le32(msg_data);
+-
+-	spin_lock_irq(&host->lock);
+-	assert(host->state == HST_DEV_SCAN_START ||
+-	       host->state == HST_DEV_SCAN);
+-	spin_unlock_irq(&host->lock);
+-
+-	DPRINTK("blk_execute_rq_nowait, tag == %u\n", rq->tag);
+-	blk_execute_rq_nowait(host->oob_q, NULL, rq, true, NULL);
+-
+-	return 0;
+-
+-err_out:
+-	spin_lock_irq(&host->lock);
+-	host->state = HST_ERROR;
+-	spin_unlock_irq(&host->lock);
+-	return rc;
+-}
+-
+-typedef unsigned int (*carm_sspc_t)(struct carm_host *, unsigned int, void *);
+-
+-static int carm_send_special (struct carm_host *host, carm_sspc_t func)
+-{
+-	struct request *rq;
+-	struct carm_request *crq;
+-	struct carm_msg_ioctl *ioc;
+-	void *mem;
+-	unsigned int msg_size;
+-	int rc;
+-
+-	rq = blk_mq_alloc_request(host->oob_q, REQ_OP_DRV_OUT, 0);
+-	if (IS_ERR(rq))
+-		return -ENOMEM;
+-	crq = blk_mq_rq_to_pdu(rq);
+-
+-	mem = carm_ref_msg(host, rq->tag);
+-
+-	msg_size = func(host, rq->tag, mem);
+-
+-	ioc = mem;
+-	crq->msg_type = ioc->type;
+-	crq->msg_subtype = ioc->subtype;
+-	rc = carm_lookup_bucket(msg_size);
+-	BUG_ON(rc < 0);
+-	crq->msg_bucket = (u32) rc;
+-
+-	DPRINTK("blk_execute_rq_nowait, tag == %u\n", rq->tag);
+-	blk_execute_rq_nowait(host->oob_q, NULL, rq, true, NULL);
+-
+-	return 0;
+-}
+-
+-static unsigned int carm_fill_sync_time(struct carm_host *host,
+-					unsigned int idx, void *mem)
+-{
+-	struct carm_msg_sync_time *st = mem;
+-
+-	time64_t tv = ktime_get_real_seconds();
+-
+-	memset(st, 0, sizeof(*st));
+-	st->type	= CARM_MSG_MISC;
+-	st->subtype	= MISC_SET_TIME;
+-	st->handle	= cpu_to_le32(TAG_ENCODE(idx));
+-	st->timestamp	= cpu_to_le32(tv);
+-
+-	return sizeof(struct carm_msg_sync_time);
+-}
+-
+-static unsigned int carm_fill_alloc_buf(struct carm_host *host,
+-					unsigned int idx, void *mem)
+-{
+-	struct carm_msg_allocbuf *ab = mem;
+-
+-	memset(ab, 0, sizeof(*ab));
+-	ab->type	= CARM_MSG_MISC;
+-	ab->subtype	= MISC_ALLOC_MEM;
+-	ab->handle	= cpu_to_le32(TAG_ENCODE(idx));
+-	ab->n_sg	= 1;
+-	ab->sg_type	= SGT_32BIT;
+-	ab->addr	= cpu_to_le32(host->shm_dma + (PDC_SHM_SIZE >> 1));
+-	ab->len		= cpu_to_le32(PDC_SHM_SIZE >> 1);
+-	ab->evt_pool	= cpu_to_le32(host->shm_dma + (16 * 1024));
+-	ab->n_evt	= cpu_to_le32(1024);
+-	ab->rbuf_pool	= cpu_to_le32(host->shm_dma);
+-	ab->n_rbuf	= cpu_to_le32(RMSG_Q_LEN);
+-	ab->msg_pool	= cpu_to_le32(host->shm_dma + RBUF_LEN);
+-	ab->n_msg	= cpu_to_le32(CARM_Q_LEN);
+-	ab->sg[0].start	= cpu_to_le32(host->shm_dma + (PDC_SHM_SIZE >> 1));
+-	ab->sg[0].len	= cpu_to_le32(65536);
+-
+-	return sizeof(struct carm_msg_allocbuf);
+-}
+-
+-static unsigned int carm_fill_scan_channels(struct carm_host *host,
+-					    unsigned int idx, void *mem)
+-{
+-	struct carm_msg_ioctl *ioc = mem;
+-	u32 msg_data = (u32) (carm_ref_msg_dma(host, idx) +
+-			      IOC_SCAN_CHAN_OFFSET);
+-
+-	memset(ioc, 0, sizeof(*ioc));
+-	ioc->type	= CARM_MSG_IOCTL;
+-	ioc->subtype	= CARM_IOC_SCAN_CHAN;
+-	ioc->handle	= cpu_to_le32(TAG_ENCODE(idx));
+-	ioc->data_addr	= cpu_to_le32(msg_data);
+-
+-	/* fill output data area with "no device" default values */
+-	mem += IOC_SCAN_CHAN_OFFSET;
+-	memset(mem, IOC_SCAN_CHAN_NODEV, CARM_MAX_PORTS);
+-
+-	return IOC_SCAN_CHAN_OFFSET + CARM_MAX_PORTS;
+-}
+-
+-static unsigned int carm_fill_get_fw_ver(struct carm_host *host,
+-					 unsigned int idx, void *mem)
+-{
+-	struct carm_msg_get_fw_ver *ioc = mem;
+-	u32 msg_data = (u32) (carm_ref_msg_dma(host, idx) + sizeof(*ioc));
+-
+-	memset(ioc, 0, sizeof(*ioc));
+-	ioc->type	= CARM_MSG_MISC;
+-	ioc->subtype	= MISC_GET_FW_VER;
+-	ioc->handle	= cpu_to_le32(TAG_ENCODE(idx));
+-	ioc->data_addr	= cpu_to_le32(msg_data);
+-
+-	return sizeof(struct carm_msg_get_fw_ver) +
+-	       sizeof(struct carm_fw_ver);
+-}
+-
+-static inline void carm_push_q (struct carm_host *host, struct request_queue *q)
+-{
+-	unsigned int idx = host->wait_q_prod % CARM_MAX_WAIT_Q;
+-
+-	blk_mq_stop_hw_queues(q);
+-	VPRINTK("STOPPED QUEUE %p\n", q);
+-
+-	host->wait_q[idx] = q;
+-	host->wait_q_prod++;
+-	BUG_ON(host->wait_q_prod == host->wait_q_cons); /* overrun */
+-}
+-
+-static inline struct request_queue *carm_pop_q(struct carm_host *host)
+-{
+-	unsigned int idx;
+-
+-	if (host->wait_q_prod == host->wait_q_cons)
+-		return NULL;
+-
+-	idx = host->wait_q_cons % CARM_MAX_WAIT_Q;
+-	host->wait_q_cons++;
+-
+-	return host->wait_q[idx];
+-}
+-
+-static inline void carm_round_robin(struct carm_host *host)
+-{
+-	struct request_queue *q = carm_pop_q(host);
+-	if (q) {
+-		blk_mq_start_hw_queues(q);
+-		VPRINTK("STARTED QUEUE %p\n", q);
+-	}
+-}
+-
+-static inline enum dma_data_direction carm_rq_dir(struct request *rq)
+-{
+-	return op_is_write(req_op(rq)) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+-}
+-
+-static blk_status_t carm_queue_rq(struct blk_mq_hw_ctx *hctx,
+-				  const struct blk_mq_queue_data *bd)
+-{
+-	struct request_queue *q = hctx->queue;
+-	struct request *rq = bd->rq;
+-	struct carm_port *port = q->queuedata;
+-	struct carm_host *host = port->host;
+-	struct carm_request *crq = blk_mq_rq_to_pdu(rq);
+-	struct carm_msg_rw *msg;
+-	struct scatterlist *sg;
+-	int i, n_elem = 0, rc;
+-	unsigned int msg_size;
+-	u32 tmp;
+-
+-	crq->n_elem = 0;
+-	sg_init_table(crq->sg, CARM_MAX_REQ_SG);
+-
+-	blk_mq_start_request(rq);
+-
+-	spin_lock_irq(&host->lock);
+-	if (req_op(rq) == REQ_OP_DRV_OUT)
+-		goto send_msg;
+-
+-	/* get scatterlist from block layer */
+-	sg = &crq->sg[0];
+-	n_elem = blk_rq_map_sg(q, rq, sg);
+-	if (n_elem <= 0)
+-		goto out_ioerr;
+-
+-	/* map scatterlist to PCI bus addresses */
+-	n_elem = dma_map_sg(&host->pdev->dev, sg, n_elem, carm_rq_dir(rq));
+-	if (n_elem <= 0)
+-		goto out_ioerr;
+-
+-	/* obey global hardware limit on S/G entries */
+-	if (host->hw_sg_used >= CARM_MAX_HOST_SG - n_elem)
+-		goto out_resource;
+-
+-	crq->n_elem = n_elem;
+-	host->hw_sg_used += n_elem;
+-
+-	/*
+-	 * build read/write message
+-	 */
+-
+-	VPRINTK("build msg\n");
+-	msg = (struct carm_msg_rw *) carm_ref_msg(host, rq->tag);
+-
+-	if (rq_data_dir(rq) == WRITE) {
+-		msg->type = CARM_MSG_WRITE;
+-		crq->msg_type = CARM_MSG_WRITE;
+-	} else {
+-		msg->type = CARM_MSG_READ;
+-		crq->msg_type = CARM_MSG_READ;
+-	}
+-
+-	msg->id		= port->port_no;
+-	msg->sg_count	= n_elem;
+-	msg->sg_type	= SGT_32BIT;
+-	msg->handle	= cpu_to_le32(TAG_ENCODE(rq->tag));
+-	msg->lba	= cpu_to_le32(blk_rq_pos(rq) & 0xffffffff);
+-	tmp		= (blk_rq_pos(rq) >> 16) >> 16;
+-	msg->lba_high	= cpu_to_le16( (u16) tmp );
+-	msg->lba_count	= cpu_to_le16(blk_rq_sectors(rq));
+-
+-	msg_size = sizeof(struct carm_msg_rw) - sizeof(msg->sg);
+-	for (i = 0; i < n_elem; i++) {
+-		struct carm_msg_sg *carm_sg = &msg->sg[i];
+-		carm_sg->start = cpu_to_le32(sg_dma_address(&crq->sg[i]));
+-		carm_sg->len = cpu_to_le32(sg_dma_len(&crq->sg[i]));
+-		msg_size += sizeof(struct carm_msg_sg);
+-	}
+-
+-	rc = carm_lookup_bucket(msg_size);
+-	BUG_ON(rc < 0);
+-	crq->msg_bucket = (u32) rc;
+-send_msg:
+-	/*
+-	 * queue read/write message to hardware
+-	 */
+-	VPRINTK("send msg, tag == %u\n", rq->tag);
+-	rc = carm_send_msg(host, crq, rq->tag);
+-	if (rc) {
+-		host->hw_sg_used -= n_elem;
+-		goto out_resource;
+-	}
+-
+-	spin_unlock_irq(&host->lock);
+-	return BLK_STS_OK;
+-out_resource:
+-	dma_unmap_sg(&host->pdev->dev, &crq->sg[0], n_elem, carm_rq_dir(rq));
+-	carm_push_q(host, q);
+-	spin_unlock_irq(&host->lock);
+-	return BLK_STS_DEV_RESOURCE;
+-out_ioerr:
+-	carm_round_robin(host);
+-	spin_unlock_irq(&host->lock);
+-	return BLK_STS_IOERR;
+-}
+-
+-static void carm_handle_array_info(struct carm_host *host,
+-				   struct carm_request *crq, u8 *mem,
+-				   blk_status_t error)
+-{
+-	struct carm_port *port;
+-	u8 *msg_data = mem + sizeof(struct carm_array_info);
+-	struct carm_array_info *desc = (struct carm_array_info *) msg_data;
+-	u64 lo, hi;
+-	int cur_port;
+-	size_t slen;
+-
+-	DPRINTK("ENTER\n");
+-
+-	if (error)
+-		goto out;
+-	if (le32_to_cpu(desc->array_status) & ARRAY_NO_EXIST)
+-		goto out;
+-
+-	cur_port = host->cur_scan_dev;
+-
+-	/* should never occur */
+-	if ((cur_port < 0) || (cur_port >= CARM_MAX_PORTS)) {
+-		printk(KERN_ERR PFX "BUG: cur_scan_dev==%d, array_id==%d\n",
+-		       cur_port, (int) desc->array_id);
+-		goto out;
+-	}
+-
+-	port = &host->port[cur_port];
+-
+-	lo = (u64) le32_to_cpu(desc->size);
+-	hi = (u64) le16_to_cpu(desc->size_hi);
+-
+-	port->capacity = lo | (hi << 32);
+-	port->dev_geom_head = le16_to_cpu(desc->head);
+-	port->dev_geom_sect = le16_to_cpu(desc->sect);
+-	port->dev_geom_cyl = le16_to_cpu(desc->cyl);
+-
+-	host->dev_active |= (1 << cur_port);
+-
+-	strncpy(port->name, desc->name, sizeof(port->name));
+-	port->name[sizeof(port->name) - 1] = 0;
+-	slen = strlen(port->name);
+-	while (slen && (port->name[slen - 1] == ' ')) {
+-		port->name[slen - 1] = 0;
+-		slen--;
+-	}
+-
+-	printk(KERN_INFO DRV_NAME "(%s): port %u device %Lu sectors\n",
+-	       pci_name(host->pdev), port->port_no,
+-	       (unsigned long long) port->capacity);
+-	printk(KERN_INFO DRV_NAME "(%s): port %u device \"%s\"\n",
+-	       pci_name(host->pdev), port->port_no, port->name);
+-
+-out:
+-	assert(host->state == HST_DEV_SCAN);
+-	schedule_work(&host->fsm_task);
+-}
+-
+-static void carm_handle_scan_chan(struct carm_host *host,
+-				  struct carm_request *crq, u8 *mem,
+-				  blk_status_t error)
+-{
+-	u8 *msg_data = mem + IOC_SCAN_CHAN_OFFSET;
+-	unsigned int i, dev_count = 0;
+-	int new_state = HST_DEV_SCAN_START;
+-
+-	DPRINTK("ENTER\n");
+-
+-	if (error) {
+-		new_state = HST_ERROR;
+-		goto out;
+-	}
+-
+-	/* TODO: scan and support non-disk devices */
+-	for (i = 0; i < 8; i++)
+-		if (msg_data[i] == 0) { /* direct-access device (disk) */
+-			host->dev_present |= (1 << i);
+-			dev_count++;
+-		}
+-
+-	printk(KERN_INFO DRV_NAME "(%s): found %u interesting devices\n",
+-	       pci_name(host->pdev), dev_count);
+-
+-out:
+-	assert(host->state == HST_PORT_SCAN);
+-	host->state = new_state;
+-	schedule_work(&host->fsm_task);
+-}
+-
+-static void carm_handle_generic(struct carm_host *host,
+-				struct carm_request *crq, blk_status_t error,
+-				int cur_state, int next_state)
+-{
+-	DPRINTK("ENTER\n");
+-
+-	assert(host->state == cur_state);
+-	if (error)
+-		host->state = HST_ERROR;
+-	else
+-		host->state = next_state;
+-	schedule_work(&host->fsm_task);
+-}
+-
+-static inline void carm_handle_resp(struct carm_host *host,
+-				    __le32 ret_handle_le, u32 status)
+-{
+-	u32 handle = le32_to_cpu(ret_handle_le);
+-	unsigned int msg_idx;
+-	struct request *rq;
+-	struct carm_request *crq;
+-	blk_status_t error = (status == RMSG_OK) ? 0 : BLK_STS_IOERR;
+-	u8 *mem;
+-
+-	VPRINTK("ENTER, handle == 0x%x\n", handle);
+-
+-	if (unlikely(!TAG_VALID(handle))) {
+-		printk(KERN_ERR DRV_NAME "(%s): BUG: invalid tag 0x%x\n",
+-		       pci_name(host->pdev), handle);
+-		return;
+-	}
+-
+-	msg_idx = TAG_DECODE(handle);
+-	VPRINTK("tag == %u\n", msg_idx);
+-
+-	rq = blk_mq_tag_to_rq(host->tag_set.tags[0], msg_idx);
+-	crq = blk_mq_rq_to_pdu(rq);
+-
+-	/* fast path */
+-	if (likely(crq->msg_type == CARM_MSG_READ ||
+-		   crq->msg_type == CARM_MSG_WRITE)) {
+-		dma_unmap_sg(&host->pdev->dev, &crq->sg[0], crq->n_elem,
+-			     carm_rq_dir(rq));
+-		goto done;
+-	}
+-
+-	mem = carm_ref_msg(host, msg_idx);
+-
+-	switch (crq->msg_type) {
+-	case CARM_MSG_IOCTL: {
+-		switch (crq->msg_subtype) {
+-		case CARM_IOC_SCAN_CHAN:
+-			carm_handle_scan_chan(host, crq, mem, error);
+-			goto done;
+-		default:
+-			/* unknown / invalid response */
+-			goto err_out;
+-		}
+-		break;
+-	}
+-
+-	case CARM_MSG_MISC: {
+-		switch (crq->msg_subtype) {
+-		case MISC_ALLOC_MEM:
+-			carm_handle_generic(host, crq, error,
+-					    HST_ALLOC_BUF, HST_SYNC_TIME);
+-			goto done;
+-		case MISC_SET_TIME:
+-			carm_handle_generic(host, crq, error,
+-					    HST_SYNC_TIME, HST_GET_FW_VER);
+-			goto done;
+-		case MISC_GET_FW_VER: {
+-			struct carm_fw_ver *ver = (struct carm_fw_ver *)
+-				(mem + sizeof(struct carm_msg_get_fw_ver));
+-			if (!error) {
+-				host->fw_ver = le32_to_cpu(ver->version);
+-				host->flags |= (ver->features & FL_FW_VER_MASK);
+-			}
+-			carm_handle_generic(host, crq, error,
+-					    HST_GET_FW_VER, HST_PORT_SCAN);
+-			goto done;
+-		}
+-		default:
+-			/* unknown / invalid response */
+-			goto err_out;
+-		}
+-		break;
+-	}
+-
+-	case CARM_MSG_ARRAY: {
+-		switch (crq->msg_subtype) {
+-		case CARM_ARRAY_INFO:
+-			carm_handle_array_info(host, crq, mem, error);
+-			break;
+-		default:
+-			/* unknown / invalid response */
+-			goto err_out;
+-		}
+-		break;
+-	}
+-
+-	default:
+-		/* unknown / invalid response */
+-		goto err_out;
+-	}
+-
+-	return;
+-
+-err_out:
+-	printk(KERN_WARNING DRV_NAME "(%s): BUG: unhandled message type %d/%d\n",
+-	       pci_name(host->pdev), crq->msg_type, crq->msg_subtype);
+-	error = BLK_STS_IOERR;
+-done:
+-	host->hw_sg_used -= crq->n_elem;
+-	blk_mq_end_request(blk_mq_rq_from_pdu(crq), error);
+-
+-	if (host->hw_sg_used <= CARM_SG_LOW_WATER)
+-		carm_round_robin(host);
+-}
+-
+-static inline void carm_handle_responses(struct carm_host *host)
+-{
+-	void __iomem *mmio = host->mmio;
+-	struct carm_response *resp = (struct carm_response *) host->shm;
+-	unsigned int work = 0;
+-	unsigned int idx = host->resp_idx % RMSG_Q_LEN;
+-
+-	while (1) {
+-		u32 status = le32_to_cpu(resp[idx].status);
+-
+-		if (status == 0xffffffff) {
+-			VPRINTK("ending response on index %u\n", idx);
+-			writel(idx << 3, mmio + CARM_RESP_IDX);
+-			break;
+-		}
+-
+-		/* response to a message we sent */
+-		else if ((status & (1 << 31)) == 0) {
+-			VPRINTK("handling msg response on index %u\n", idx);
+-			carm_handle_resp(host, resp[idx].ret_handle, status);
+-			resp[idx].status = cpu_to_le32(0xffffffff);
+-		}
+-
+-		/* asynchronous events the hardware throws our way */
+-		else if ((status & 0xff000000) == (1 << 31)) {
+-			u8 *evt_type_ptr = (u8 *) &resp[idx];
+-			u8 evt_type = *evt_type_ptr;
+-			printk(KERN_WARNING DRV_NAME "(%s): unhandled event type %d\n",
+-			       pci_name(host->pdev), (int) evt_type);
+-			resp[idx].status = cpu_to_le32(0xffffffff);
+-		}
+-
+-		idx = NEXT_RESP(idx);
+-		work++;
+-	}
+-
+-	VPRINTK("EXIT, work==%u\n", work);
+-	host->resp_idx += work;
+-}
+-
+-static irqreturn_t carm_interrupt(int irq, void *__host)
+-{
+-	struct carm_host *host = __host;
+-	void __iomem *mmio;
+-	u32 mask;
+-	int handled = 0;
+-	unsigned long flags;
+-
+-	if (!host) {
+-		VPRINTK("no host\n");
+-		return IRQ_NONE;
+-	}
+-
+-	spin_lock_irqsave(&host->lock, flags);
+-
+-	mmio = host->mmio;
+-
+-	/* reading should also clear interrupts */
+-	mask = readl(mmio + CARM_INT_STAT);
+-
+-	if (mask == 0 || mask == 0xffffffff) {
+-		VPRINTK("no work, mask == 0x%x\n", mask);
+-		goto out;
+-	}
+-
+-	if (mask & INT_ACK_MASK)
+-		writel(mask, mmio + CARM_INT_STAT);
+-
+-	if (unlikely(host->state == HST_INVALID)) {
+-		VPRINTK("not initialized yet, mask = 0x%x\n", mask);
+-		goto out;
+-	}
+-
+-	if (mask & CARM_HAVE_RESP) {
+-		handled = 1;
+-		carm_handle_responses(host);
+-	}
+-
+-out:
+-	spin_unlock_irqrestore(&host->lock, flags);
+-	VPRINTK("EXIT\n");
+-	return IRQ_RETVAL(handled);
+-}
+-
+-static void carm_fsm_task (struct work_struct *work)
+-{
+-	struct carm_host *host =
+-		container_of(work, struct carm_host, fsm_task);
+-	unsigned long flags;
+-	unsigned int state;
+-	int rc, i, next_dev;
+-	int reschedule = 0;
+-	int new_state = HST_INVALID;
+-
+-	spin_lock_irqsave(&host->lock, flags);
+-	state = host->state;
+-	spin_unlock_irqrestore(&host->lock, flags);
+-
+-	DPRINTK("ENTER, state == %s\n", state_name[state]);
+-
+-	switch (state) {
+-	case HST_PROBE_START:
+-		new_state = HST_ALLOC_BUF;
+-		reschedule = 1;
+-		break;
+-
+-	case HST_ALLOC_BUF:
+-		rc = carm_send_special(host, carm_fill_alloc_buf);
+-		if (rc) {
+-			new_state = HST_ERROR;
+-			reschedule = 1;
+-		}
+-		break;
+-
+-	case HST_SYNC_TIME:
+-		rc = carm_send_special(host, carm_fill_sync_time);
+-		if (rc) {
+-			new_state = HST_ERROR;
+-			reschedule = 1;
+-		}
+-		break;
+-
+-	case HST_GET_FW_VER:
+-		rc = carm_send_special(host, carm_fill_get_fw_ver);
+-		if (rc) {
+-			new_state = HST_ERROR;
+-			reschedule = 1;
+-		}
+-		break;
+-
+-	case HST_PORT_SCAN:
+-		rc = carm_send_special(host, carm_fill_scan_channels);
+-		if (rc) {
+-			new_state = HST_ERROR;
+-			reschedule = 1;
+-		}
+-		break;
+-
+-	case HST_DEV_SCAN_START:
+-		host->cur_scan_dev = -1;
+-		new_state = HST_DEV_SCAN;
+-		reschedule = 1;
+-		break;
+-
+-	case HST_DEV_SCAN:
+-		next_dev = -1;
+-		for (i = host->cur_scan_dev + 1; i < CARM_MAX_PORTS; i++)
+-			if (host->dev_present & (1 << i)) {
+-				next_dev = i;
+-				break;
+-			}
+-
+-		if (next_dev >= 0) {
+-			host->cur_scan_dev = next_dev;
+-			rc = carm_array_info(host, next_dev);
+-			if (rc) {
+-				new_state = HST_ERROR;
+-				reschedule = 1;
+-			}
+-		} else {
+-			new_state = HST_DEV_ACTIVATE;
+-			reschedule = 1;
+-		}
+-		break;
+-
+-	case HST_DEV_ACTIVATE: {
+-		int activated = 0;
+-		for (i = 0; i < CARM_MAX_PORTS; i++)
+-			if (host->dev_active & (1 << i)) {
+-				struct carm_port *port = &host->port[i];
+-				struct gendisk *disk = port->disk;
+-
+-				set_capacity(disk, port->capacity);
+-				add_disk(disk);
+-				activated++;
+-			}
+-
+-		printk(KERN_INFO DRV_NAME "(%s): %d ports activated\n",
+-		       pci_name(host->pdev), activated);
+-
+-		new_state = HST_PROBE_FINISHED;
+-		reschedule = 1;
+-		break;
+-	}
+-
+-	case HST_PROBE_FINISHED:
+-		complete(&host->probe_comp);
+-		break;
+-
+-	case HST_ERROR:
+-		/* FIXME: TODO */
+-		break;
+-
+-	default:
+-		/* should never occur */
+-		printk(KERN_ERR PFX "BUG: unknown state %d\n", state);
+-		assert(0);
+-		break;
+-	}
+-
+-	if (new_state != HST_INVALID) {
+-		spin_lock_irqsave(&host->lock, flags);
+-		host->state = new_state;
+-		spin_unlock_irqrestore(&host->lock, flags);
+-	}
+-	if (reschedule)
+-		schedule_work(&host->fsm_task);
+-}
+-
+-static int carm_init_wait(void __iomem *mmio, u32 bits, unsigned int test_bit)
+-{
+-	unsigned int i;
+-
+-	for (i = 0; i < 50000; i++) {
+-		u32 tmp = readl(mmio + CARM_LMUC);
+-		udelay(100);
+-
+-		if (test_bit) {
+-			if ((tmp & bits) == bits)
+-				return 0;
+-		} else {
+-			if ((tmp & bits) == 0)
+-				return 0;
+-		}
+-
+-		cond_resched();
+-	}
+-
+-	printk(KERN_ERR PFX "carm_init_wait timeout, bits == 0x%x, test_bit == %s\n",
+-	       bits, test_bit ? "yes" : "no");
+-	return -EBUSY;
+-}
+-
+-static void carm_init_responses(struct carm_host *host)
+-{
+-	void __iomem *mmio = host->mmio;
+-	unsigned int i;
+-	struct carm_response *resp = (struct carm_response *) host->shm;
+-
+-	for (i = 0; i < RMSG_Q_LEN; i++)
+-		resp[i].status = cpu_to_le32(0xffffffff);
+-
+-	writel(0, mmio + CARM_RESP_IDX);
+-}
+-
+-static int carm_init_host(struct carm_host *host)
+-{
+-	void __iomem *mmio = host->mmio;
+-	u32 tmp;
+-	u8 tmp8;
+-	int rc;
+-
+-	DPRINTK("ENTER\n");
+-
+-	writel(0, mmio + CARM_INT_MASK);
+-
+-	tmp8 = readb(mmio + CARM_INITC);
+-	if (tmp8 & 0x01) {
+-		tmp8 &= ~0x01;
+-		writeb(tmp8, mmio + CARM_INITC);
+-		readb(mmio + CARM_INITC);	/* flush */
+-
+-		DPRINTK("snooze...\n");
+-		msleep(5000);
+-	}
+-
+-	tmp = readl(mmio + CARM_HMUC);
+-	if (tmp & CARM_CME) {
+-		DPRINTK("CME bit present, waiting\n");
+-		rc = carm_init_wait(mmio, CARM_CME, 1);
+-		if (rc) {
+-			DPRINTK("EXIT, carm_init_wait 1 failed\n");
+-			return rc;
+-		}
+-	}
+-	if (tmp & CARM_RME) {
+-		DPRINTK("RME bit present, waiting\n");
+-		rc = carm_init_wait(mmio, CARM_RME, 1);
+-		if (rc) {
+-			DPRINTK("EXIT, carm_init_wait 2 failed\n");
+-			return rc;
+-		}
+-	}
+-
+-	tmp &= ~(CARM_RME | CARM_CME);
+-	writel(tmp, mmio + CARM_HMUC);
+-	readl(mmio + CARM_HMUC);	/* flush */
+-
+-	rc = carm_init_wait(mmio, CARM_RME | CARM_CME, 0);
+-	if (rc) {
+-		DPRINTK("EXIT, carm_init_wait 3 failed\n");
+-		return rc;
+-	}
+-
+-	carm_init_buckets(mmio);
+-
+-	writel(host->shm_dma & 0xffffffff, mmio + RBUF_ADDR_LO);
+-	writel((host->shm_dma >> 16) >> 16, mmio + RBUF_ADDR_HI);
+-	writel(RBUF_LEN, mmio + RBUF_BYTE_SZ);
+-
+-	tmp = readl(mmio + CARM_HMUC);
+-	tmp |= (CARM_RME | CARM_CME | CARM_WZBC);
+-	writel(tmp, mmio + CARM_HMUC);
+-	readl(mmio + CARM_HMUC);	/* flush */
+-
+-	rc = carm_init_wait(mmio, CARM_RME | CARM_CME, 1);
+-	if (rc) {
+-		DPRINTK("EXIT, carm_init_wait 4 failed\n");
+-		return rc;
+-	}
+-
+-	writel(0, mmio + CARM_HMPHA);
+-	writel(INT_DEF_MASK, mmio + CARM_INT_MASK);
+-
+-	carm_init_responses(host);
+-
+-	/* start initialization, probing state machine */
+-	spin_lock_irq(&host->lock);
+-	assert(host->state == HST_INVALID);
+-	host->state = HST_PROBE_START;
+-	spin_unlock_irq(&host->lock);
+-	schedule_work(&host->fsm_task);
+-
+-	DPRINTK("EXIT\n");
+-	return 0;
+-}
+-
+-static const struct blk_mq_ops carm_mq_ops = {
+-	.queue_rq	= carm_queue_rq,
+-};
+-
+-static int carm_init_disk(struct carm_host *host, unsigned int port_no)
+-{
+-	struct carm_port *port = &host->port[port_no];
+-	struct gendisk *disk;
+-	struct request_queue *q;
+-
+-	port->host = host;
+-	port->port_no = port_no;
+-
+-	disk = alloc_disk(CARM_MINORS_PER_MAJOR);
+-	if (!disk)
+-		return -ENOMEM;
+-
+-	port->disk = disk;
+-	sprintf(disk->disk_name, DRV_NAME "/%u",
+-		(unsigned int)host->id * CARM_MAX_PORTS + port_no);
+-	disk->major = host->major;
+-	disk->first_minor = port_no * CARM_MINORS_PER_MAJOR;
+-	disk->fops = &carm_bd_ops;
+-	disk->private_data = port;
+-
+-	q = blk_mq_init_queue(&host->tag_set);
+-	if (IS_ERR(q))
+-		return PTR_ERR(q);
+-
+-	blk_queue_max_segments(q, CARM_MAX_REQ_SG);
+-	blk_queue_segment_boundary(q, CARM_SG_BOUNDARY);
+-
+-	q->queuedata = port;
+-	disk->queue = q;
+-	return 0;
+-}
+-
+-static void carm_free_disk(struct carm_host *host, unsigned int port_no)
+-{
+-	struct carm_port *port = &host->port[port_no];
+-	struct gendisk *disk = port->disk;
+-
+-	if (!disk)
+-		return;
+-
+-	if (disk->flags & GENHD_FL_UP)
+-		del_gendisk(disk);
+-	if (disk->queue)
+-		blk_cleanup_queue(disk->queue);
+-	put_disk(disk);
+-}
+-
+-static int carm_init_shm(struct carm_host *host)
+-{
+-	host->shm = dma_alloc_coherent(&host->pdev->dev, CARM_SHM_SIZE,
+-				       &host->shm_dma, GFP_KERNEL);
+-	if (!host->shm)
+-		return -ENOMEM;
+-
+-	host->msg_base = host->shm + RBUF_LEN;
+-	host->msg_dma = host->shm_dma + RBUF_LEN;
+-
+-	memset(host->shm, 0xff, RBUF_LEN);
+-	memset(host->msg_base, 0, PDC_SHM_SIZE - RBUF_LEN);
+-
+-	return 0;
+-}
+-
+-static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
+-{
+-	struct carm_host *host;
+-	int rc;
+-	struct request_queue *q;
+-	unsigned int i;
+-
+-	printk_once(KERN_DEBUG DRV_NAME " version " DRV_VERSION "\n");
+-
+-	rc = pci_enable_device(pdev);
+-	if (rc)
+-		return rc;
+-
+-	rc = pci_request_regions(pdev, DRV_NAME);
+-	if (rc)
+-		goto err_out;
+-
+-	rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+-	if (rc) {
+-		printk(KERN_ERR DRV_NAME "(%s): DMA mask failure\n",
+-			pci_name(pdev));
+-		goto err_out_regions;
+-	}
+-
+-	host = kzalloc(sizeof(*host), GFP_KERNEL);
+-	if (!host) {
+-		printk(KERN_ERR DRV_NAME "(%s): memory alloc failure\n",
+-		       pci_name(pdev));
+-		rc = -ENOMEM;
+-		goto err_out_regions;
+-	}
+-
+-	host->pdev = pdev;
+-	spin_lock_init(&host->lock);
+-	INIT_WORK(&host->fsm_task, carm_fsm_task);
+-	init_completion(&host->probe_comp);
+-
+-	host->mmio = ioremap(pci_resource_start(pdev, 0),
+-			     pci_resource_len(pdev, 0));
+-	if (!host->mmio) {
+-		printk(KERN_ERR DRV_NAME "(%s): MMIO alloc failure\n",
+-		       pci_name(pdev));
+-		rc = -ENOMEM;
+-		goto err_out_kfree;
+-	}
+-
+-	rc = carm_init_shm(host);
+-	if (rc) {
+-		printk(KERN_ERR DRV_NAME "(%s): DMA SHM alloc failure\n",
+-		       pci_name(pdev));
+-		goto err_out_iounmap;
+-	}
+-
+-	memset(&host->tag_set, 0, sizeof(host->tag_set));
+-	host->tag_set.ops = &carm_mq_ops;
+-	host->tag_set.cmd_size = sizeof(struct carm_request);
+-	host->tag_set.nr_hw_queues = 1;
+-	host->tag_set.nr_maps = 1;
+-	host->tag_set.queue_depth = max_queue;
+-	host->tag_set.numa_node = NUMA_NO_NODE;
+-	host->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+-
+-	rc = blk_mq_alloc_tag_set(&host->tag_set);
+-	if (rc)
+-		goto err_out_dma_free;
+-
+-	q = blk_mq_init_queue(&host->tag_set);
+-	if (IS_ERR(q)) {
+-		rc = PTR_ERR(q);
+-		blk_mq_free_tag_set(&host->tag_set);
+-		goto err_out_dma_free;
+-	}
+-
+-	host->oob_q = q;
+-	q->queuedata = host;
+-
+-	/*
+-	 * Figure out which major to use: 160, 161, or dynamic
+-	 */
+-	if (!test_and_set_bit(0, &carm_major_alloc))
+-		host->major = 160;
+-	else if (!test_and_set_bit(1, &carm_major_alloc))
+-		host->major = 161;
+-	else
+-		host->flags |= FL_DYN_MAJOR;
+-
+-	host->id = carm_host_id;
+-	sprintf(host->name, DRV_NAME "%d", carm_host_id);
+-
+-	rc = register_blkdev(host->major, host->name);
+-	if (rc < 0)
+-		goto err_out_free_majors;
+-	if (host->flags & FL_DYN_MAJOR)
+-		host->major = rc;
+-
+-	for (i = 0; i < CARM_MAX_PORTS; i++) {
+-		rc = carm_init_disk(host, i);
+-		if (rc)
+-			goto err_out_blkdev_disks;
+-	}
+-
+-	pci_set_master(pdev);
+-
+-	rc = request_irq(pdev->irq, carm_interrupt, IRQF_SHARED, DRV_NAME, host);
+-	if (rc) {
+-		printk(KERN_ERR DRV_NAME "(%s): irq alloc failure\n",
+-		       pci_name(pdev));
+-		goto err_out_blkdev_disks;
+-	}
+-
+-	rc = carm_init_host(host);
+-	if (rc)
+-		goto err_out_free_irq;
+-
+-	DPRINTK("waiting for probe_comp\n");
+-	wait_for_completion(&host->probe_comp);
+-
+-	printk(KERN_INFO "%s: pci %s, ports %d, io %llx, irq %u, major %d\n",
+-	       host->name, pci_name(pdev), (int) CARM_MAX_PORTS,
+-	       (unsigned long long)pci_resource_start(pdev, 0),
+-		   pdev->irq, host->major);
+-
+-	carm_host_id++;
+-	pci_set_drvdata(pdev, host);
+-	return 0;
+-
+-err_out_free_irq:
+-	free_irq(pdev->irq, host);
+-err_out_blkdev_disks:
+-	for (i = 0; i < CARM_MAX_PORTS; i++)
+-		carm_free_disk(host, i);
+-	unregister_blkdev(host->major, host->name);
+-err_out_free_majors:
+-	if (host->major == 160)
+-		clear_bit(0, &carm_major_alloc);
+-	else if (host->major == 161)
+-		clear_bit(1, &carm_major_alloc);
+-	blk_cleanup_queue(host->oob_q);
+-	blk_mq_free_tag_set(&host->tag_set);
+-err_out_dma_free:
+-	dma_free_coherent(&pdev->dev, CARM_SHM_SIZE, host->shm, host->shm_dma);
+-err_out_iounmap:
+-	iounmap(host->mmio);
+-err_out_kfree:
+-	kfree(host);
+-err_out_regions:
+-	pci_release_regions(pdev);
+-err_out:
+-	pci_disable_device(pdev);
+-	return rc;
+-}
+-
+-static void carm_remove_one (struct pci_dev *pdev)
+-{
+-	struct carm_host *host = pci_get_drvdata(pdev);
+-	unsigned int i;
+-
+-	if (!host) {
+-		printk(KERN_ERR PFX "BUG: no host data for PCI(%s)\n",
+-		       pci_name(pdev));
+-		return;
+-	}
+-
+-	free_irq(pdev->irq, host);
+-	for (i = 0; i < CARM_MAX_PORTS; i++)
+-		carm_free_disk(host, i);
+-	unregister_blkdev(host->major, host->name);
+-	if (host->major == 160)
+-		clear_bit(0, &carm_major_alloc);
+-	else if (host->major == 161)
+-		clear_bit(1, &carm_major_alloc);
+-	blk_cleanup_queue(host->oob_q);
+-	blk_mq_free_tag_set(&host->tag_set);
+-	dma_free_coherent(&pdev->dev, CARM_SHM_SIZE, host->shm, host->shm_dma);
+-	iounmap(host->mmio);
+-	kfree(host);
+-	pci_release_regions(pdev);
+-	pci_disable_device(pdev);
+-}
+-
+-module_pci_driver(carm_driver);
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 5347fc465ce89..bc0850d3f7d28 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -78,7 +78,8 @@ enum qca_flags {
+ 	QCA_HW_ERROR_EVENT,
+ 	QCA_SSR_TRIGGERED,
+ 	QCA_BT_OFF,
+-	QCA_ROM_FW
++	QCA_ROM_FW,
++	QCA_DEBUGFS_CREATED,
+ };
+ 
+ enum qca_capabilities {
+@@ -633,6 +634,9 @@ static void qca_debugfs_init(struct hci_dev *hdev)
+ 	if (!hdev->debugfs)
+ 		return;
+ 
++	if (test_and_set_bit(QCA_DEBUGFS_CREATED, &qca->flags))
++		return;
++
+ 	ibs_dir = debugfs_create_dir("ibs", hdev->debugfs);
+ 
+ 	/* read only */
+diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
+index 9bcd0eebc6d7e..2e030a308b6e6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vi.c
++++ b/drivers/gpu/drm/amd/amdgpu/vi.c
+@@ -329,8 +329,15 @@ static u32 vi_get_xclk(struct amdgpu_device *adev)
+ 	u32 reference_clock = adev->clock.spll.reference_freq;
+ 	u32 tmp;
+ 
+-	if (adev->flags & AMD_IS_APU)
+-		return reference_clock;
++	if (adev->flags & AMD_IS_APU) {
++		switch (adev->asic_type) {
++		case CHIP_STONEY:
++			/* vbios says 48Mhz, but the actual freq is 100Mhz */
++			return 10000;
++		default:
++			return reference_clock;
++		}
++	}
+ 
+ 	tmp = RREG32_SMC(ixCG_CLKPIN_CNTL_2);
+ 	if (REG_GET_FIELD(tmp, CG_CLKPIN_CNTL_2, MUX_TCLK_TO_XCLK))
+diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c
+index 25c269bc46815..b6062833370f1 100644
+--- a/drivers/gpu/drm/drm_atomic_uapi.c
++++ b/drivers/gpu/drm/drm_atomic_uapi.c
+@@ -75,15 +75,17 @@ int drm_atomic_set_mode_for_crtc(struct drm_crtc_state *state,
+ 	state->mode_blob = NULL;
+ 
+ 	if (mode) {
++		struct drm_property_blob *blob;
++
+ 		drm_mode_convert_to_umode(&umode, mode);
+-		state->mode_blob =
+-			drm_property_create_blob(state->crtc->dev,
+-		                                 sizeof(umode),
+-		                                 &umode);
+-		if (IS_ERR(state->mode_blob))
+-			return PTR_ERR(state->mode_blob);
++		blob = drm_property_create_blob(crtc->dev,
++						sizeof(umode), &umode);
++		if (IS_ERR(blob))
++			return PTR_ERR(blob);
+ 
+ 		drm_mode_copy(&state->mode, mode);
++
++		state->mode_blob = blob;
+ 		state->enable = true;
+ 		DRM_DEBUG_ATOMIC("Set [MODE:%s] for [CRTC:%d:%s] state %p\n",
+ 				 mode->name, crtc->base.id, crtc->name, state);
+diff --git a/drivers/i2c/busses/i2c-sprd.c b/drivers/i2c/busses/i2c-sprd.c
+index 8ead7e021008c..a520aa06d2cb5 100644
+--- a/drivers/i2c/busses/i2c-sprd.c
++++ b/drivers/i2c/busses/i2c-sprd.c
+@@ -576,12 +576,14 @@ static int sprd_i2c_remove(struct platform_device *pdev)
+ 	struct sprd_i2c *i2c_dev = platform_get_drvdata(pdev);
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(i2c_dev->dev);
++	ret = pm_runtime_get_sync(i2c_dev->dev);
+ 	if (ret < 0)
+-		return ret;
++		dev_err(&pdev->dev, "Failed to resume device (%pe)\n", ERR_PTR(ret));
+ 
+ 	i2c_del_adapter(&i2c_dev->adap);
+-	clk_disable_unprepare(i2c_dev->clk);
++
++	if (ret >= 0)
++		clk_disable_unprepare(i2c_dev->clk);
+ 
+ 	pm_runtime_put_noidle(i2c_dev->dev);
+ 	pm_runtime_disable(i2c_dev->dev);
+diff --git a/drivers/infiniband/hw/i40iw/i40iw.h b/drivers/infiniband/hw/i40iw/i40iw.h
+index 832b80de004fb..13545fcdc5ad6 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw.h
++++ b/drivers/infiniband/hw/i40iw/i40iw.h
+@@ -422,9 +422,8 @@ void i40iw_manage_arp_cache(struct i40iw_device *iwdev,
+ 			    bool ipv4,
+ 			    u32 action);
+ 
+-int i40iw_manage_apbvt(struct i40iw_device *iwdev,
+-		       u16 accel_local_port,
+-		       bool add_port);
++enum i40iw_status_code i40iw_manage_apbvt(struct i40iw_device *iwdev,
++					  u16 accel_local_port, bool add_port);
+ 
+ struct i40iw_cqp_request *i40iw_get_cqp_request(struct i40iw_cqp *cqp, bool wait);
+ void i40iw_free_cqp_request(struct i40iw_cqp *cqp, struct i40iw_cqp_request *cqp_request);
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 0bd55e1fca372..b99318fb58dc6 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -262,7 +262,6 @@ static const struct xpad_device {
+ 	{ 0x1430, 0xf801, "RedOctane Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x146b, 0x0601, "BigBen Interactive XBOX 360 Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x146b, 0x0604, "Bigben Interactive DAIJA Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+-	{ 0x1532, 0x0037, "Razer Sabertooth", 0, XTYPE_XBOX360 },
+ 	{ 0x1532, 0x0a00, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+ 	{ 0x1532, 0x0a03, "Razer Wildcat", 0, XTYPE_XBOXONE },
+ 	{ 0x15e4, 0x3f00, "Power A Mini Pro Elite", 0, XTYPE_XBOX360 },
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 2e53ea261e014..598fcb99f6c9e 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -674,10 +674,11 @@ static void process_packet_head_v4(struct psmouse *psmouse)
+ 	struct input_dev *dev = psmouse->dev;
+ 	struct elantech_data *etd = psmouse->private;
+ 	unsigned char *packet = psmouse->packet;
+-	int id = ((packet[3] & 0xe0) >> 5) - 1;
++	int id;
+ 	int pres, traces;
+ 
+-	if (id < 0)
++	id = ((packet[3] & 0xe0) >> 5) - 1;
++	if (id < 0 || id >= ETP_MAX_FINGERS)
+ 		return;
+ 
+ 	etd->mt[id].x = ((packet[1] & 0x0f) << 8) | packet[2];
+@@ -707,7 +708,7 @@ static void process_packet_motion_v4(struct psmouse *psmouse)
+ 	int id, sid;
+ 
+ 	id = ((packet[0] & 0xe0) >> 5) - 1;
+-	if (id < 0)
++	if (id < 0 || id >= ETP_MAX_FINGERS)
+ 		return;
+ 
+ 	sid = ((packet[3] & 0xe0) >> 5) - 1;
+@@ -728,7 +729,7 @@ static void process_packet_motion_v4(struct psmouse *psmouse)
+ 	input_report_abs(dev, ABS_MT_POSITION_X, etd->mt[id].x);
+ 	input_report_abs(dev, ABS_MT_POSITION_Y, etd->mt[id].y);
+ 
+-	if (sid >= 0) {
++	if (sid >= 0 && sid < ETP_MAX_FINGERS) {
+ 		etd->mt[sid].x += delta_x2 * weight;
+ 		etd->mt[sid].y -= delta_y2 * weight;
+ 		input_mt_slot(dev, sid);
+diff --git a/drivers/misc/eeprom/Kconfig b/drivers/misc/eeprom/Kconfig
+index 0f791bfdc1f58..c92f2cdf40263 100644
+--- a/drivers/misc/eeprom/Kconfig
++++ b/drivers/misc/eeprom/Kconfig
+@@ -6,6 +6,7 @@ config EEPROM_AT24
+ 	depends on I2C && SYSFS
+ 	select NVMEM
+ 	select NVMEM_SYSFS
++	select REGMAP
+ 	select REGMAP_I2C
+ 	help
+ 	  Enable this driver to get read/write support to most I2C EEPROMs
+diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
+index deeed50a42c05..f5ab0bff4ac29 100644
+--- a/drivers/net/dsa/lan9303-core.c
++++ b/drivers/net/dsa/lan9303-core.c
+@@ -1187,8 +1187,6 @@ static int lan9303_port_fdb_add(struct dsa_switch *ds, int port,
+ 	struct lan9303 *chip = ds->priv;
+ 
+ 	dev_dbg(chip->dev, "%s(%d, %pM, %d)\n", __func__, port, addr, vid);
+-	if (vid)
+-		return -EOPNOTSUPP;
+ 
+ 	return lan9303_alr_add_port(chip, addr, port, false);
+ }
+@@ -1200,8 +1198,6 @@ static int lan9303_port_fdb_del(struct dsa_switch *ds, int port,
+ 	struct lan9303 *chip = ds->priv;
+ 
+ 	dev_dbg(chip->dev, "%s(%d, %pM, %d)\n", __func__, port, addr, vid);
+-	if (vid)
+-		return -EOPNOTSUPP;
+ 	lan9303_alr_del_port(chip, addr, port);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 3a9fcf942a6de..d8366351cf14a 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -8337,6 +8337,9 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
+ 		goto err_out;
+ 	}
+ 
++	if (BNXT_VF(bp))
++		bnxt_hwrm_func_qcfg(bp);
++
+ 	rc = bnxt_setup_vnic(bp, 0);
+ 	if (rc)
+ 		goto err_out;
+@@ -12101,26 +12104,37 @@ static void bnxt_cfg_ntp_filters(struct bnxt *bp)
+ 
+ #endif /* CONFIG_RFS_ACCEL */
+ 
+-static int bnxt_udp_tunnel_sync(struct net_device *netdev, unsigned int table)
++static int bnxt_udp_tunnel_set_port(struct net_device *netdev, unsigned int table,
++				    unsigned int entry, struct udp_tunnel_info *ti)
+ {
+ 	struct bnxt *bp = netdev_priv(netdev);
+-	struct udp_tunnel_info ti;
+ 	unsigned int cmd;
+ 
+-	udp_tunnel_nic_get_port(netdev, table, 0, &ti);
+-	if (ti.type == UDP_TUNNEL_TYPE_VXLAN)
++	if (ti->type == UDP_TUNNEL_TYPE_VXLAN)
+ 		cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN;
+ 	else
+ 		cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE;
+ 
+-	if (ti.port)
+-		return bnxt_hwrm_tunnel_dst_port_alloc(bp, ti.port, cmd);
++	return bnxt_hwrm_tunnel_dst_port_alloc(bp, ti->port, cmd);
++}
++
++static int bnxt_udp_tunnel_unset_port(struct net_device *netdev, unsigned int table,
++				      unsigned int entry, struct udp_tunnel_info *ti)
++{
++	struct bnxt *bp = netdev_priv(netdev);
++	unsigned int cmd;
++
++	if (ti->type == UDP_TUNNEL_TYPE_VXLAN)
++		cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN;
++	else
++		cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE;
+ 
+ 	return bnxt_hwrm_tunnel_dst_port_free(bp, cmd);
+ }
+ 
+ static const struct udp_tunnel_nic_info bnxt_udp_tunnels = {
+-	.sync_table	= bnxt_udp_tunnel_sync,
++	.set_port	= bnxt_udp_tunnel_set_port,
++	.unset_port	= bnxt_udp_tunnel_unset_port,
+ 	.flags		= UDP_TUNNEL_NIC_INFO_MAY_SLEEP |
+ 			  UDP_TUNNEL_NIC_INFO_OPEN_ONLY,
+ 	.tables		= {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 1e67e86fc3344..2984234df67eb 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -3440,7 +3440,7 @@ static int bnxt_reset(struct net_device *dev, u32 *flags)
+ 		}
+ 	}
+ 
+-	if (req & BNXT_FW_RESET_AP) {
++	if (!BNXT_CHIP_P4_PLUS(bp) && (req & BNXT_FW_RESET_AP)) {
+ 		/* This feature is not supported in older firmware versions */
+ 		if (bp->hwrm_spec_code >= 0x10803) {
+ 			if (!bnxt_firmware_reset_ap(dev)) {
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_alloc.h b/drivers/net/ethernet/intel/i40e/i40e_alloc.h
+index cb8689222c8b7..55ba6b690ab6c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_alloc.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_alloc.h
+@@ -20,16 +20,11 @@ enum i40e_memory_type {
+ };
+ 
+ /* prototype for functions used for dynamic memory allocation */
+-i40e_status i40e_allocate_dma_mem(struct i40e_hw *hw,
+-					    struct i40e_dma_mem *mem,
+-					    enum i40e_memory_type type,
+-					    u64 size, u32 alignment);
+-i40e_status i40e_free_dma_mem(struct i40e_hw *hw,
+-					struct i40e_dma_mem *mem);
+-i40e_status i40e_allocate_virt_mem(struct i40e_hw *hw,
+-					     struct i40e_virt_mem *mem,
+-					     u32 size);
+-i40e_status i40e_free_virt_mem(struct i40e_hw *hw,
+-					 struct i40e_virt_mem *mem);
++int i40e_allocate_dma_mem(struct i40e_hw *hw, struct i40e_dma_mem *mem,
++			  enum i40e_memory_type type, u64 size, u32 alignment);
++int i40e_free_dma_mem(struct i40e_hw *hw, struct i40e_dma_mem *mem);
++int i40e_allocate_virt_mem(struct i40e_hw *hw, struct i40e_virt_mem *mem,
++			   u32 size);
++int i40e_free_virt_mem(struct i40e_hw *hw, struct i40e_virt_mem *mem);
+ 
+ #endif /* _I40E_ALLOC_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.c b/drivers/net/ethernet/intel/ice/ice_fltr.c
+index 2418d4fff037f..e27b4de7e7aa3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_fltr.c
++++ b/drivers/net/ethernet/intel/ice/ice_fltr.c
+@@ -128,7 +128,7 @@ void ice_fltr_remove_all(struct ice_vsi *vsi)
+  * @mac: MAC address to add
+  * @action: filter action
+  */
+-int
++enum ice_status
+ ice_fltr_add_mac_to_list(struct ice_vsi *vsi, struct list_head *list,
+ 			 const u8 *mac, enum ice_sw_fwd_act_type action)
+ {
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c
+index 07824bf9d68d9..0157bcd2efffa 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_l2.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c
+@@ -1902,7 +1902,7 @@ void qed_get_vport_stats(struct qed_dev *cdev, struct qed_eth_stats *stats)
+ {
+ 	u32 i;
+ 
+-	if (!cdev) {
++	if (!cdev || cdev->recov_in_prog) {
+ 		memset(stats, 0, sizeof(*stats));
+ 		return;
+ 	}
+diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h
+index f313fd7303316..3251d58a263fa 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede.h
++++ b/drivers/net/ethernet/qlogic/qede/qede.h
+@@ -273,6 +273,10 @@ struct qede_dev {
+ #define QEDE_ERR_WARN			3
+ 
+ 	struct qede_dump_info		dump_info;
++	struct delayed_work		periodic_task;
++	unsigned long			stats_coal_ticks;
++	u32				stats_coal_usecs;
++	spinlock_t			stats_lock; /* lock for vport stats access */
+ };
+ 
+ enum QEDE_STATE {
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
+index bedbb85a179ae..db104e035ba18 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
+@@ -426,6 +426,8 @@ static void qede_get_ethtool_stats(struct net_device *dev,
+ 		}
+ 	}
+ 
++	spin_lock(&edev->stats_lock);
++
+ 	for (i = 0; i < QEDE_NUM_STATS; i++) {
+ 		if (qede_is_irrelevant_stat(edev, i))
+ 			continue;
+@@ -435,6 +437,8 @@ static void qede_get_ethtool_stats(struct net_device *dev,
+ 		buf++;
+ 	}
+ 
++	spin_unlock(&edev->stats_lock);
++
+ 	__qede_unlock(edev);
+ }
+ 
+@@ -815,6 +819,7 @@ out:
+ 
+ 	coal->rx_coalesce_usecs = rx_coal;
+ 	coal->tx_coalesce_usecs = tx_coal;
++	coal->stats_block_coalesce_usecs = edev->stats_coal_usecs;
+ 
+ 	return rc;
+ }
+@@ -827,6 +832,19 @@ static int qede_set_coalesce(struct net_device *dev,
+ 	int i, rc = 0;
+ 	u16 rxc, txc;
+ 
++	if (edev->stats_coal_usecs != coal->stats_block_coalesce_usecs) {
++		edev->stats_coal_usecs = coal->stats_block_coalesce_usecs;
++		if (edev->stats_coal_usecs) {
++			edev->stats_coal_ticks = usecs_to_jiffies(edev->stats_coal_usecs);
++			schedule_delayed_work(&edev->periodic_task, 0);
++
++			DP_INFO(edev, "Configured stats coal ticks=%lu jiffies\n",
++				edev->stats_coal_ticks);
++		} else {
++			cancel_delayed_work_sync(&edev->periodic_task);
++		}
++	}
++
+ 	if (!netif_running(dev)) {
+ 		DP_INFO(edev, "Interface is down\n");
+ 		return -EINVAL;
+@@ -2106,7 +2124,8 @@ err:
+ }
+ 
+ static const struct ethtool_ops qede_ethtool_ops = {
+-	.supported_coalesce_params	= ETHTOOL_COALESCE_USECS,
++	.supported_coalesce_params	= ETHTOOL_COALESCE_USECS |
++					  ETHTOOL_COALESCE_STATS_BLOCK_USECS,
+ 	.get_link_ksettings		= qede_get_link_ksettings,
+ 	.set_link_ksettings		= qede_set_link_ksettings,
+ 	.get_drvinfo			= qede_get_drvinfo,
+@@ -2155,7 +2174,8 @@ static const struct ethtool_ops qede_ethtool_ops = {
+ };
+ 
+ static const struct ethtool_ops qede_vf_ethtool_ops = {
+-	.supported_coalesce_params	= ETHTOOL_COALESCE_USECS,
++	.supported_coalesce_params	= ETHTOOL_COALESCE_USECS |
++					  ETHTOOL_COALESCE_STATS_BLOCK_USECS,
+ 	.get_link_ksettings		= qede_get_link_ksettings,
+ 	.get_drvinfo			= qede_get_drvinfo,
+ 	.get_msglevel			= qede_get_msglevel,
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index e93f06e4a1729..681ec142c23de 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -313,6 +313,8 @@ void qede_fill_by_demand_stats(struct qede_dev *edev)
+ 
+ 	edev->ops->get_vport_stats(edev->cdev, &stats);
+ 
++	spin_lock(&edev->stats_lock);
++
+ 	p_common->no_buff_discards = stats.common.no_buff_discards;
+ 	p_common->packet_too_big_discard = stats.common.packet_too_big_discard;
+ 	p_common->ttl0_discard = stats.common.ttl0_discard;
+@@ -410,6 +412,8 @@ void qede_fill_by_demand_stats(struct qede_dev *edev)
+ 		p_ah->tx_1519_to_max_byte_packets =
+ 		    stats.ah.tx_1519_to_max_byte_packets;
+ 	}
++
++	spin_unlock(&edev->stats_lock);
+ }
+ 
+ static void qede_get_stats64(struct net_device *dev,
+@@ -418,9 +422,10 @@ static void qede_get_stats64(struct net_device *dev,
+ 	struct qede_dev *edev = netdev_priv(dev);
+ 	struct qede_stats_common *p_common;
+ 
+-	qede_fill_by_demand_stats(edev);
+ 	p_common = &edev->stats.common;
+ 
++	spin_lock(&edev->stats_lock);
++
+ 	stats->rx_packets = p_common->rx_ucast_pkts + p_common->rx_mcast_pkts +
+ 			    p_common->rx_bcast_pkts;
+ 	stats->tx_packets = p_common->tx_ucast_pkts + p_common->tx_mcast_pkts +
+@@ -440,6 +445,8 @@ static void qede_get_stats64(struct net_device *dev,
+ 		stats->collisions = edev->stats.bb.tx_total_collisions;
+ 	stats->rx_crc_errors = p_common->rx_crc_errors;
+ 	stats->rx_frame_errors = p_common->rx_align_errors;
++
++	spin_unlock(&edev->stats_lock);
+ }
+ 
+ #ifdef CONFIG_QED_SRIOV
+@@ -1001,6 +1008,23 @@ static void qede_unlock(struct qede_dev *edev)
+ 	rtnl_unlock();
+ }
+ 
++static void qede_periodic_task(struct work_struct *work)
++{
++	struct qede_dev *edev = container_of(work, struct qede_dev,
++					     periodic_task.work);
++
++	qede_fill_by_demand_stats(edev);
++	schedule_delayed_work(&edev->periodic_task, edev->stats_coal_ticks);
++}
++
++static void qede_init_periodic_task(struct qede_dev *edev)
++{
++	INIT_DELAYED_WORK(&edev->periodic_task, qede_periodic_task);
++	spin_lock_init(&edev->stats_lock);
++	edev->stats_coal_usecs = USEC_PER_SEC;
++	edev->stats_coal_ticks = usecs_to_jiffies(USEC_PER_SEC);
++}
++
+ static void qede_sp_task(struct work_struct *work)
+ {
+ 	struct qede_dev *edev = container_of(work, struct qede_dev,
+@@ -1020,6 +1044,7 @@ static void qede_sp_task(struct work_struct *work)
+ 	 */
+ 
+ 	if (test_and_clear_bit(QEDE_SP_RECOVERY, &edev->sp_flags)) {
++		cancel_delayed_work_sync(&edev->periodic_task);
+ #ifdef CONFIG_QED_SRIOV
+ 		/* SRIOV must be disabled outside the lock to avoid a deadlock.
+ 		 * The recovery of the active VFs is currently not supported.
+@@ -1216,6 +1241,7 @@ static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level,
+ 		 */
+ 		INIT_DELAYED_WORK(&edev->sp_task, qede_sp_task);
+ 		mutex_init(&edev->qede_lock);
++		qede_init_periodic_task(edev);
+ 
+ 		rc = register_netdev(edev->ndev);
+ 		if (rc) {
+@@ -1240,6 +1266,11 @@ static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level,
+ 	edev->rx_copybreak = QEDE_RX_HDR_SIZE;
+ 
+ 	qede_log_probe(edev);
++
++	/* retain user config (for example - after recovery) */
++	if (edev->stats_coal_usecs)
++		schedule_delayed_work(&edev->periodic_task, 0);
++
+ 	return 0;
+ 
+ err4:
+@@ -1308,6 +1339,7 @@ static void __qede_remove(struct pci_dev *pdev, enum qede_remove_mode mode)
+ 		unregister_netdev(ndev);
+ 
+ 		cancel_delayed_work_sync(&edev->sp_task);
++		cancel_delayed_work_sync(&edev->periodic_task);
+ 
+ 		edev->ops->common->set_power_state(cdev, PCI_D0);
+ 
+diff --git a/drivers/net/ethernet/sfc/ef100_tx.c b/drivers/net/ethernet/sfc/ef100_tx.c
+index a90e5a9d2a37a..6ddda1a5e5363 100644
+--- a/drivers/net/ethernet/sfc/ef100_tx.c
++++ b/drivers/net/ethernet/sfc/ef100_tx.c
+@@ -333,7 +333,8 @@ void ef100_ev_tx(struct efx_channel *channel, const efx_qword_t *p_event)
+  * Returns 0 on success, error code otherwise. In case of an error this
+  * function will free the SKB.
+  */
+-int ef100_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb)
++netdev_tx_t ef100_enqueue_skb(struct efx_tx_queue *tx_queue,
++			      struct sk_buff *skb)
+ {
+ 	unsigned int old_insert_count = tx_queue->insert_count;
+ 	struct efx_nic *efx = tx_queue->efx;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index b26617026e831..4364f73b501da 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -779,7 +779,10 @@ void mt7615_mac_sta_poll(struct mt7615_dev *dev)
+ 
+ 		msta = list_first_entry(&sta_poll_list, struct mt7615_sta,
+ 					poll_list);
++
++		spin_lock_bh(&dev->sta_poll_lock);
+ 		list_del_init(&msta->poll_list);
++		spin_unlock_bh(&dev->sta_poll_lock);
+ 
+ 		addr = mt7615_mac_wtbl_addr(dev, msta->wcid.idx) + 19 * 4;
+ 
+diff --git a/drivers/pinctrl/meson/pinctrl-meson-axg.c b/drivers/pinctrl/meson/pinctrl-meson-axg.c
+index 072765db93d70..505466b06629f 100644
+--- a/drivers/pinctrl/meson/pinctrl-meson-axg.c
++++ b/drivers/pinctrl/meson/pinctrl-meson-axg.c
+@@ -400,6 +400,7 @@ static struct meson_pmx_group meson_axg_periphs_groups[] = {
+ 	GPIO_GROUP(GPIOA_15),
+ 	GPIO_GROUP(GPIOA_16),
+ 	GPIO_GROUP(GPIOA_17),
++	GPIO_GROUP(GPIOA_18),
+ 	GPIO_GROUP(GPIOA_19),
+ 	GPIO_GROUP(GPIOA_20),
+ 
+diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
+index cb6427fb9f3d1..6d5c9cb83592f 100644
+--- a/drivers/s390/block/dasd_ioctl.c
++++ b/drivers/s390/block/dasd_ioctl.c
+@@ -505,10 +505,10 @@ static int __dasd_ioctl_information(struct dasd_block *block,
+ 
+ 	memcpy(dasd_info->type, base->discipline->name, 4);
+ 
+-	spin_lock_irqsave(&block->queue_lock, flags);
++	spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags);
+ 	list_for_each(l, &base->ccw_queue)
+ 		dasd_info->chanq_len++;
+-	spin_unlock_irqrestore(&block->queue_lock, flags);
++	spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), flags);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/spi/spi-qup.c b/drivers/spi/spi-qup.c
+index 8bf58510cca6d..2cc9bb413c108 100644
+--- a/drivers/spi/spi-qup.c
++++ b/drivers/spi/spi-qup.c
+@@ -1030,23 +1030,8 @@ static int spi_qup_probe(struct platform_device *pdev)
+ 		return -ENXIO;
+ 	}
+ 
+-	ret = clk_prepare_enable(cclk);
+-	if (ret) {
+-		dev_err(dev, "cannot enable core clock\n");
+-		return ret;
+-	}
+-
+-	ret = clk_prepare_enable(iclk);
+-	if (ret) {
+-		clk_disable_unprepare(cclk);
+-		dev_err(dev, "cannot enable iface clock\n");
+-		return ret;
+-	}
+-
+ 	master = spi_alloc_master(dev, sizeof(struct spi_qup));
+ 	if (!master) {
+-		clk_disable_unprepare(cclk);
+-		clk_disable_unprepare(iclk);
+ 		dev_err(dev, "cannot allocate master\n");
+ 		return -ENOMEM;
+ 	}
+@@ -1092,6 +1077,19 @@ static int spi_qup_probe(struct platform_device *pdev)
+ 	spin_lock_init(&controller->lock);
+ 	init_completion(&controller->done);
+ 
++	ret = clk_prepare_enable(cclk);
++	if (ret) {
++		dev_err(dev, "cannot enable core clock\n");
++		goto error_dma;
++	}
++
++	ret = clk_prepare_enable(iclk);
++	if (ret) {
++		clk_disable_unprepare(cclk);
++		dev_err(dev, "cannot enable iface clock\n");
++		goto error_dma;
++	}
++
+ 	iomode = readl_relaxed(base + QUP_IO_M_MODES);
+ 
+ 	size = QUP_IO_M_OUTPUT_BLOCK_SIZE(iomode);
+@@ -1121,7 +1119,7 @@ static int spi_qup_probe(struct platform_device *pdev)
+ 	ret = spi_qup_set_state(controller, QUP_STATE_RESET);
+ 	if (ret) {
+ 		dev_err(dev, "cannot set RESET state\n");
+-		goto error_dma;
++		goto error_clk;
+ 	}
+ 
+ 	writel_relaxed(0, base + QUP_OPERATIONAL);
+@@ -1145,7 +1143,7 @@ static int spi_qup_probe(struct platform_device *pdev)
+ 	ret = devm_request_irq(dev, irq, spi_qup_qup_irq,
+ 			       IRQF_TRIGGER_HIGH, pdev->name, controller);
+ 	if (ret)
+-		goto error_dma;
++		goto error_clk;
+ 
+ 	pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
+ 	pm_runtime_use_autosuspend(dev);
+@@ -1160,11 +1158,12 @@ static int spi_qup_probe(struct platform_device *pdev)
+ 
+ disable_pm:
+ 	pm_runtime_disable(&pdev->dev);
++error_clk:
++	clk_disable_unprepare(cclk);
++	clk_disable_unprepare(iclk);
+ error_dma:
+ 	spi_qup_release_dma(master);
+ error:
+-	clk_disable_unprepare(cclk);
+-	clk_disable_unprepare(iclk);
+ 	spi_master_put(master);
+ 	return ret;
+ }
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+index 4c201679fc081..291f98251f7f7 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+@@ -50,9 +50,9 @@ static const struct rtl819x_ops rtl819xp_ops = {
+ };
+ 
+ static struct pci_device_id rtl8192_pci_id_tbl[] = {
+-	{PCI_DEVICE(0x10ec, 0x8192)},
+-	{PCI_DEVICE(0x07aa, 0x0044)},
+-	{PCI_DEVICE(0x07aa, 0x0047)},
++	{RTL_PCI_DEVICE(0x10ec, 0x8192, rtl819xp_ops)},
++	{RTL_PCI_DEVICE(0x07aa, 0x0044, rtl819xp_ops)},
++	{RTL_PCI_DEVICE(0x07aa, 0x0047, rtl819xp_ops)},
+ 	{}
+ };
+ 
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.h b/drivers/staging/rtl8192e/rtl8192e/rtl_core.h
+index 7bbd884aa5f13..736f1a824cd2e 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.h
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.h
+@@ -55,6 +55,11 @@
+ #define IS_HARDWARE_TYPE_8192SE(_priv)		\
+ 	(((struct r8192_priv *)rtllib_priv(dev))->card_8192 == NIC_8192SE)
+ 
++#define RTL_PCI_DEVICE(vend, dev, cfg) \
++	.vendor = (vend), .device = (dev), \
++	.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID, \
++	.driver_data = (kernel_ulong_t)&(cfg)
++
+ #define TOTAL_CAM_ENTRY		32
+ #define CAM_CONTENT_COUNT	8
+ 
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 3d378da119e7a..178cf90fb3e5a 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -147,12 +147,11 @@ vchiq_blocking_bulk_transfer(unsigned int handle, void *data,
+ 	unsigned int size, enum vchiq_bulk_dir dir);
+ 
+ #define VCHIQ_INIT_RETRIES 10
+-enum vchiq_status vchiq_initialise(struct vchiq_instance **instance_out)
++int vchiq_initialise(struct vchiq_instance **instance_out)
+ {
+-	enum vchiq_status status = VCHIQ_ERROR;
+ 	struct vchiq_state *state;
+ 	struct vchiq_instance *instance = NULL;
+-	int i;
++	int i, ret;
+ 
+ 	vchiq_log_trace(vchiq_core_log_level, "%s called", __func__);
+ 
+@@ -169,6 +168,7 @@ enum vchiq_status vchiq_initialise(struct vchiq_instance **instance_out)
+ 	if (i == VCHIQ_INIT_RETRIES) {
+ 		vchiq_log_error(vchiq_core_log_level,
+ 			"%s: videocore not initialized\n", __func__);
++		ret = -ENOTCONN;
+ 		goto failed;
+ 	} else if (i > 0) {
+ 		vchiq_log_warning(vchiq_core_log_level,
+@@ -180,6 +180,7 @@ enum vchiq_status vchiq_initialise(struct vchiq_instance **instance_out)
+ 	if (!instance) {
+ 		vchiq_log_error(vchiq_core_log_level,
+ 			"%s: error allocating vchiq instance\n", __func__);
++		ret = -ENOMEM;
+ 		goto failed;
+ 	}
+ 
+@@ -190,13 +191,13 @@ enum vchiq_status vchiq_initialise(struct vchiq_instance **instance_out)
+ 
+ 	*instance_out = instance;
+ 
+-	status = VCHIQ_SUCCESS;
++	ret = 0;
+ 
+ failed:
+ 	vchiq_log_trace(vchiq_core_log_level,
+-		"%s(%p): returning %d", __func__, instance, status);
++		"%s(%p): returning %d", __func__, instance, ret);
+ 
+-	return status;
++	return ret;
+ }
+ EXPORT_SYMBOL(vchiq_initialise);
+ 
+@@ -2223,6 +2224,7 @@ vchiq_keepalive_thread_func(void *v)
+ 	enum vchiq_status status;
+ 	struct vchiq_instance *instance;
+ 	unsigned int ka_handle;
++	int ret;
+ 
+ 	struct vchiq_service_params_kernel params = {
+ 		.fourcc      = VCHIQ_MAKE_FOURCC('K', 'E', 'E', 'P'),
+@@ -2231,10 +2233,10 @@ vchiq_keepalive_thread_func(void *v)
+ 		.version_min = KEEPALIVE_VER_MIN
+ 	};
+ 
+-	status = vchiq_initialise(&instance);
+-	if (status != VCHIQ_SUCCESS) {
++	ret = vchiq_initialise(&instance);
++	if (ret) {
+ 		vchiq_log_error(vchiq_susp_log_level,
+-			"%s vchiq_initialise failed %d", __func__, status);
++			"%s vchiq_initialise failed %d", __func__, ret);
+ 		goto exit;
+ 	}
+ 
+@@ -2313,7 +2315,7 @@ vchiq_arm_init_state(struct vchiq_state *state,
+ 	return VCHIQ_SUCCESS;
+ }
+ 
+-enum vchiq_status
++int
+ vchiq_use_internal(struct vchiq_state *state, struct vchiq_service *service,
+ 		   enum USE_TYPE_E use_type)
+ {
+@@ -2373,7 +2375,7 @@ out:
+ 	return ret;
+ }
+ 
+-enum vchiq_status
++int
+ vchiq_release_internal(struct vchiq_state *state, struct vchiq_service *service)
+ {
+ 	struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state);
+diff --git a/drivers/tee/amdtee/amdtee_if.h b/drivers/tee/amdtee/amdtee_if.h
+index ff48c3e473750..e2014e21530ac 100644
+--- a/drivers/tee/amdtee/amdtee_if.h
++++ b/drivers/tee/amdtee/amdtee_if.h
+@@ -118,16 +118,18 @@ struct tee_cmd_unmap_shared_mem {
+ 
+ /**
+  * struct tee_cmd_load_ta - load Trusted Application (TA) binary into TEE
+- * @low_addr:    [in] bits [31:0] of the physical address of the TA binary
+- * @hi_addr:     [in] bits [63:32] of the physical address of the TA binary
+- * @size:        [in] size of TA binary in bytes
+- * @ta_handle:   [out] return handle of the loaded TA
++ * @low_addr:       [in] bits [31:0] of the physical address of the TA binary
++ * @hi_addr:        [in] bits [63:32] of the physical address of the TA binary
++ * @size:           [in] size of TA binary in bytes
++ * @ta_handle:      [out] return handle of the loaded TA
++ * @return_origin:  [out] origin of return code after TEE processing
+  */
+ struct tee_cmd_load_ta {
+ 	u32 low_addr;
+ 	u32 hi_addr;
+ 	u32 size;
+ 	u32 ta_handle;
++	u32 return_origin;
+ };
+ 
+ /**
+diff --git a/drivers/tee/amdtee/call.c b/drivers/tee/amdtee/call.c
+index 07f36ac834c88..63d428423e904 100644
+--- a/drivers/tee/amdtee/call.c
++++ b/drivers/tee/amdtee/call.c
+@@ -423,19 +423,23 @@ int handle_load_ta(void *data, u32 size, struct tee_ioctl_open_session_arg *arg)
+ 	if (ret) {
+ 		arg->ret_origin = TEEC_ORIGIN_COMMS;
+ 		arg->ret = TEEC_ERROR_COMMUNICATION;
+-	} else if (arg->ret == TEEC_SUCCESS) {
+-		ret = get_ta_refcount(load_cmd.ta_handle);
+-		if (!ret) {
+-			arg->ret_origin = TEEC_ORIGIN_COMMS;
+-			arg->ret = TEEC_ERROR_OUT_OF_MEMORY;
+-
+-			/* Unload the TA on error */
+-			unload_cmd.ta_handle = load_cmd.ta_handle;
+-			psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA,
+-					    (void *)&unload_cmd,
+-					    sizeof(unload_cmd), &ret);
+-		} else {
+-			set_session_id(load_cmd.ta_handle, 0, &arg->session);
++	} else {
++		arg->ret_origin = load_cmd.return_origin;
++
++		if (arg->ret == TEEC_SUCCESS) {
++			ret = get_ta_refcount(load_cmd.ta_handle);
++			if (!ret) {
++				arg->ret_origin = TEEC_ORIGIN_COMMS;
++				arg->ret = TEEC_ERROR_OUT_OF_MEMORY;
++
++				/* Unload the TA on error */
++				unload_cmd.ta_handle = load_cmd.ta_handle;
++				psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA,
++						    (void *)&unload_cmd,
++						    sizeof(unload_cmd), &ret);
++			} else {
++				set_session_id(load_cmd.ta_handle, 0, &arg->session);
++			}
+ 		}
+ 	}
+ 	mutex_unlock(&ta_refcount_mutex);
+diff --git a/drivers/usb/core/buffer.c b/drivers/usb/core/buffer.c
+index 6cf22c27f2d24..be8738750948e 100644
+--- a/drivers/usb/core/buffer.c
++++ b/drivers/usb/core/buffer.c
+@@ -170,3 +170,44 @@ void hcd_buffer_free(
+ 	}
+ 	dma_free_coherent(hcd->self.sysdev, size, addr, dma);
+ }
++
++void *hcd_buffer_alloc_pages(struct usb_hcd *hcd,
++		size_t size, gfp_t mem_flags, dma_addr_t *dma)
++{
++	if (size == 0)
++		return NULL;
++
++	if (hcd->localmem_pool)
++		return gen_pool_dma_alloc_align(hcd->localmem_pool,
++				size, dma, PAGE_SIZE);
++
++	/* some USB hosts just use PIO */
++	if (!hcd_uses_dma(hcd)) {
++		*dma = DMA_MAPPING_ERROR;
++		return (void *)__get_free_pages(mem_flags,
++				get_order(size));
++	}
++
++	return dma_alloc_coherent(hcd->self.sysdev,
++			size, dma, mem_flags);
++}
++
++void hcd_buffer_free_pages(struct usb_hcd *hcd,
++		size_t size, void *addr, dma_addr_t dma)
++{
++	if (!addr)
++		return;
++
++	if (hcd->localmem_pool) {
++		gen_pool_free(hcd->localmem_pool,
++				(unsigned long)addr, size);
++		return;
++	}
++
++	if (!hcd_uses_dma(hcd)) {
++		free_pages((unsigned long)addr, get_order(size));
++		return;
++	}
++
++	dma_free_coherent(hcd->self.sysdev, size, addr, dma);
++}
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index 73b60f013b205..2fe29319de441 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -173,6 +173,7 @@ static int connected(struct usb_dev_state *ps)
+ static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count)
+ {
+ 	struct usb_dev_state *ps = usbm->ps;
++	struct usb_hcd *hcd = bus_to_hcd(ps->dev->bus);
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&ps->lock, flags);
+@@ -181,8 +182,8 @@ static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count)
+ 		list_del(&usbm->memlist);
+ 		spin_unlock_irqrestore(&ps->lock, flags);
+ 
+-		usb_free_coherent(ps->dev, usbm->size, usbm->mem,
+-				usbm->dma_handle);
++		hcd_buffer_free_pages(hcd, usbm->size,
++				usbm->mem, usbm->dma_handle);
+ 		usbfs_decrease_memory_usage(
+ 			usbm->size + sizeof(struct usb_memory));
+ 		kfree(usbm);
+@@ -221,7 +222,7 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
+ 	size_t size = vma->vm_end - vma->vm_start;
+ 	void *mem;
+ 	unsigned long flags;
+-	dma_addr_t dma_handle;
++	dma_addr_t dma_handle = DMA_MAPPING_ERROR;
+ 	int ret;
+ 
+ 	ret = usbfs_increase_memory_usage(size + sizeof(struct usb_memory));
+@@ -234,8 +235,8 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
+ 		goto error_decrease_mem;
+ 	}
+ 
+-	mem = usb_alloc_coherent(ps->dev, size, GFP_USER | __GFP_NOWARN,
+-			&dma_handle);
++	mem = hcd_buffer_alloc_pages(hcd,
++			size, GFP_USER | __GFP_NOWARN, &dma_handle);
+ 	if (!mem) {
+ 		ret = -ENOMEM;
+ 		goto error_free_usbm;
+@@ -251,7 +252,14 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
+ 	usbm->vma_use_count = 1;
+ 	INIT_LIST_HEAD(&usbm->memlist);
+ 
+-	if (hcd->localmem_pool || !hcd_uses_dma(hcd)) {
++	/*
++	 * In DMA-unavailable cases, hcd_buffer_alloc_pages allocates
++	 * normal pages and assigns DMA_MAPPING_ERROR to dma_handle. Check
++	 * whether we are in such cases, and then use remap_pfn_range (or
++	 * dma_mmap_coherent) to map normal (or DMA) pages into the user
++	 * space, respectively.
++	 */
++	if (dma_handle == DMA_MAPPING_ERROR) {
+ 		if (remap_pfn_range(vma, vma->vm_start,
+ 				    virt_to_phys(usbm->mem) >> PAGE_SHIFT,
+ 				    size, vma->vm_page_prot) < 0) {
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 1f9a1554ce5f4..de110363af521 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -1621,17 +1621,25 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
+ 			r = -EFAULT;
+ 			break;
+ 		}
+-		if (s.num > 0xffff) {
+-			r = -EINVAL;
+-			break;
++		if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
++			vq->last_avail_idx = s.num & 0xffff;
++			vq->last_used_idx = (s.num >> 16) & 0xffff;
++		} else {
++			if (s.num > 0xffff) {
++				r = -EINVAL;
++				break;
++			}
++			vq->last_avail_idx = s.num;
+ 		}
+-		vq->last_avail_idx = s.num;
+ 		/* Forget the cached index value. */
+ 		vq->avail_idx = vq->last_avail_idx;
+ 		break;
+ 	case VHOST_GET_VRING_BASE:
+ 		s.index = idx;
+-		s.num = vq->last_avail_idx;
++		if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED))
++			s.num = (u32)vq->last_avail_idx | ((u32)vq->last_used_idx << 16);
++		else
++			s.num = vq->last_avail_idx;
+ 		if (copy_to_user(argp, &s, sizeof s))
+ 			r = -EFAULT;
+ 		break;
+diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
+index 8f80d6b0d843e..e00347f2b4d30 100644
+--- a/drivers/vhost/vhost.h
++++ b/drivers/vhost/vhost.h
+@@ -87,13 +87,17 @@ struct vhost_virtqueue {
+ 	/* The routine to call when the Guest pings us, or timeout. */
+ 	vhost_work_fn_t handle_kick;
+ 
+-	/* Last available index we saw. */
++	/* Last available index we saw.
++	 * Values are limited to 0x7fff, and the high bit is used as
++	 * a wrap counter when using VIRTIO_F_RING_PACKED. */
+ 	u16 last_avail_idx;
+ 
+ 	/* Caches available index value from user. */
+ 	u16 avail_idx;
+ 
+-	/* Last index we used. */
++	/* Last index we used.
++	 * Values are limited to 0x7fff, and the high bit is used as
++	 * a wrap counter when using VIRTIO_F_RING_PACKED. */
+ 	u16 last_used_idx;
+ 
+ 	/* Used flags */
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 159795059547f..a59d6293a32b2 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -1313,6 +1313,7 @@ static int afs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+ 	op->dentry	= dentry;
+ 	op->create.mode	= S_IFDIR | mode;
+ 	op->create.reason = afs_edit_dir_for_mkdir;
++	op->mtime	= current_time(dir);
+ 	op->ops		= &afs_mkdir_operation;
+ 	return afs_do_sync_operation(op);
+ }
+@@ -1616,6 +1617,7 @@ static int afs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
+ 	op->dentry	= dentry;
+ 	op->create.mode	= S_IFREG | mode;
+ 	op->create.reason = afs_edit_dir_for_create;
++	op->mtime	= current_time(dir);
+ 	op->ops		= &afs_create_operation;
+ 	return afs_do_sync_operation(op);
+ 
+@@ -1745,6 +1747,7 @@ static int afs_symlink(struct inode *dir, struct dentry *dentry,
+ 	op->ops			= &afs_symlink_operation;
+ 	op->create.reason	= afs_edit_dir_for_symlink;
+ 	op->create.symlink	= content;
++	op->mtime		= current_time(dir);
+ 	return afs_do_sync_operation(op);
+ 
+ error:
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index c21545c5b34bd..93db4486a9433 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1895,7 +1895,7 @@ again:
+ 	list_splice(&reloc_roots, &rc->reloc_roots);
+ 
+ 	if (!err)
+-		btrfs_commit_transaction(trans);
++		err = btrfs_commit_transaction(trans);
+ 	else
+ 		btrfs_end_transaction(trans);
+ 	return err;
+@@ -3270,8 +3270,12 @@ int prepare_to_relocate(struct reloc_control *rc)
+ 		 */
+ 		return PTR_ERR(trans);
+ 	}
+-	btrfs_commit_transaction(trans);
+-	return 0;
++
++	ret = btrfs_commit_transaction(trans);
++	if (ret)
++		unset_reloc_control(rc);
++
++	return ret;
+ }
+ 
+ static noinline_for_stack int relocate_block_group(struct reloc_control *rc)
+@@ -3443,7 +3447,9 @@ restart:
+ 		err = PTR_ERR(trans);
+ 		goto out_free;
+ 	}
+-	btrfs_commit_transaction(trans);
++	ret = btrfs_commit_transaction(trans);
++	if (ret && !err)
++		err = ret;
+ out_free:
+ 	ret = clean_dirty_subvols(rc);
+ 	if (ret < 0 && !err)
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 210496dc2fd49..e1fda3923944e 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1636,6 +1636,7 @@ void ceph_flush_snaps(struct ceph_inode_info *ci,
+ 	struct inode *inode = &ci->vfs_inode;
+ 	struct ceph_mds_client *mdsc = ceph_inode_to_client(inode)->mdsc;
+ 	struct ceph_mds_session *session = NULL;
++	bool need_put = false;
+ 	int mds;
+ 
+ 	dout("ceph_flush_snaps %p\n", inode);
+@@ -1687,8 +1688,13 @@ out:
+ 	}
+ 	/* we flushed them all; remove this inode from the queue */
+ 	spin_lock(&mdsc->snap_flush_lock);
++	if (!list_empty(&ci->i_snap_flush_item))
++		need_put = true;
+ 	list_del_init(&ci->i_snap_flush_item);
+ 	spin_unlock(&mdsc->snap_flush_lock);
++
++	if (need_put)
++		iput(inode);
+ }
+ 
+ /*
+diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
+index 8e6fc45ccc9eb..db464682b2cb2 100644
+--- a/fs/ceph/snap.c
++++ b/fs/ceph/snap.c
+@@ -647,8 +647,10 @@ int __ceph_finish_cap_snap(struct ceph_inode_info *ci,
+ 	     capsnap->size);
+ 
+ 	spin_lock(&mdsc->snap_flush_lock);
+-	if (list_empty(&ci->i_snap_flush_item))
++	if (list_empty(&ci->i_snap_flush_item)) {
++		ihold(inode);
+ 		list_add_tail(&ci->i_snap_flush_item, &mdsc->snap_flush_list);
++	}
+ 	spin_unlock(&mdsc->snap_flush_lock);
+ 	return 1;  /* caller may want to ceph_flush_snaps */
+ }
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index f72896384dbc9..84b4fc9833e38 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -5799,7 +5799,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	ext4_group_t g;
+ 	unsigned int journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
+ 	int err = 0;
+-	int enable_rw = 0;
+ #ifdef CONFIG_QUOTA
+ 	int enable_quota = 0;
+ 	int i, j;
+@@ -5992,7 +5991,7 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 			if (err)
+ 				goto restore_opts;
+ 
+-			enable_rw = 1;
++			sb->s_flags &= ~SB_RDONLY;
+ 			if (ext4_has_feature_mmp(sb)) {
+ 				err = ext4_multi_mount_protect(sb,
+ 						le64_to_cpu(es->s_mmp_block));
+@@ -6039,9 +6038,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks)
+ 		ext4_release_system_zone(sb);
+ 
+-	if (enable_rw)
+-		sb->s_flags &= ~SB_RDONLY;
+-
+ 	/*
+ 	 * Reinitialize lazy itable initialization thread based on
+ 	 * current settings
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 10b2f570d4003..ed39101dc7b6f 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1999,8 +1999,9 @@ inserted:
+ 			else {
+ 				u32 ref;
+ 
++#ifdef EXT4_XATTR_DEBUG
+ 				WARN_ON_ONCE(dquot_initialize_needed(inode));
+-
++#endif
+ 				/* The old block is released after updating
+ 				   the inode. */
+ 				error = dquot_alloc_block(inode,
+@@ -2062,8 +2063,9 @@ inserted:
+ 			/* We need to allocate a new block */
+ 			ext4_fsblk_t goal, block;
+ 
++#ifdef EXT4_XATTR_DEBUG
+ 			WARN_ON_ONCE(dquot_initialize_needed(inode));
+-
++#endif
+ 			goal = ext4_group_first_block_no(sb,
+ 						EXT4_I(inode)->i_block_group);
+ 			block = ext4_new_meta_blocks(handle, inode, goal, 0,
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index a7e7d68256e00..2501c11d8c468 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -403,9 +403,9 @@ out:
+ 	if (!strcmp(a->attr.name, "iostat_period_ms")) {
+ 		if (t < MIN_IOSTAT_PERIOD_MS || t > MAX_IOSTAT_PERIOD_MS)
+ 			return -EINVAL;
+-		spin_lock(&sbi->iostat_lock);
++		spin_lock_irq(&sbi->iostat_lock);
+ 		sbi->iostat_period_ms = (unsigned int)t;
+-		spin_unlock(&sbi->iostat_lock);
++		spin_unlock_irq(&sbi->iostat_lock);
+ 		return count;
+ 	}
+ 
+diff --git a/fs/xfs/xfs_buf_item_recover.c b/fs/xfs/xfs_buf_item_recover.c
+index b374c9cee1177..a053b0bf79308 100644
+--- a/fs/xfs/xfs_buf_item_recover.c
++++ b/fs/xfs/xfs_buf_item_recover.c
+@@ -924,6 +924,16 @@ xlog_recover_buf_commit_pass2(
+ 	if (lsn && lsn != -1 && XFS_LSN_CMP(lsn, current_lsn) >= 0) {
+ 		trace_xfs_log_recover_buf_skip(log, buf_f);
+ 		xlog_recover_validate_buf_type(mp, bp, buf_f, NULLCOMMITLSN);
++
++		/*
++		 * We're skipping replay of this buffer log item due to the log
++		 * item LSN being behind the ondisk buffer.  Verify the buffer
++		 * contents since we aren't going to run the write verifier.
++		 */
++		if (bp->b_ops) {
++			bp->b_ops->verify_read(bp);
++			error = bp->b_error;
++		}
+ 		goto out_release;
+ 	}
+ 
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 9ef63bc14b002..24fe2cd4b0e8d 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -744,8 +744,11 @@ static inline void rps_record_sock_flow(struct rps_sock_flow_table *table,
+ 		/* We only give a hint, preemption can change CPU under us */
+ 		val |= raw_smp_processor_id();
+ 
+-		if (table->ents[index] != val)
+-			table->ents[index] = val;
++		/* The following WRITE_ONCE() is paired with the READ_ONCE()
++		 * here, and another one in get_rps_cpu().
++		 */
++		if (READ_ONCE(table->ents[index]) != val)
++			WRITE_ONCE(table->ents[index], val);
+ 	}
+ }
+ 
+diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
+index c0cf20b19e637..528be670006f4 100644
+--- a/include/linux/usb/hcd.h
++++ b/include/linux/usb/hcd.h
+@@ -504,6 +504,11 @@ void *hcd_buffer_alloc(struct usb_bus *bus, size_t size,
+ void hcd_buffer_free(struct usb_bus *bus, size_t size,
+ 	void *addr, dma_addr_t dma);
+ 
++void *hcd_buffer_alloc_pages(struct usb_hcd *hcd,
++		size_t size, gfp_t mem_flags, dma_addr_t *dma);
++void hcd_buffer_free_pages(struct usb_hcd *hcd,
++		size_t size, void *addr, dma_addr_t dma);
++
+ /* generic bus glue, needed for host controllers that don't use PCI */
+ extern irqreturn_t usb_hcd_irq(int irq, void *__hcd);
+ 
+diff --git a/include/net/bond_alb.h b/include/net/bond_alb.h
+index 191c36afa1f4a..9dc082b2d5430 100644
+--- a/include/net/bond_alb.h
++++ b/include/net/bond_alb.h
+@@ -156,8 +156,8 @@ int bond_alb_init_slave(struct bonding *bond, struct slave *slave);
+ void bond_alb_deinit_slave(struct bonding *bond, struct slave *slave);
+ void bond_alb_handle_link_change(struct bonding *bond, struct slave *slave, char link);
+ void bond_alb_handle_active_change(struct bonding *bond, struct slave *new_slave);
+-int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev);
+-int bond_tlb_xmit(struct sk_buff *skb, struct net_device *bond_dev);
++netdev_tx_t bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev);
++netdev_tx_t bond_tlb_xmit(struct sk_buff *skb, struct net_device *bond_dev);
+ struct slave *bond_xmit_alb_slave_get(struct bonding *bond,
+ 				      struct sk_buff *skb);
+ struct slave *bond_xmit_tlb_slave_get(struct bonding *bond,
+diff --git a/include/net/neighbour.h b/include/net/neighbour.h
+index d5767e25509cc..abb22cfd4827f 100644
+--- a/include/net/neighbour.h
++++ b/include/net/neighbour.h
+@@ -174,7 +174,7 @@ struct pneigh_entry {
+ 	struct net_device	*dev;
+ 	u8			flags;
+ 	u8			protocol;
+-	u8			key[];
++	u32			key[];
+ };
+ 
+ /*
+diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
+index 50d5ffbad473e..ba781e0aaf566 100644
+--- a/include/net/pkt_sched.h
++++ b/include/net/pkt_sched.h
+@@ -129,6 +129,8 @@ static inline void qdisc_run(struct Qdisc *q)
+ 	}
+ }
+ 
++extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
++
+ /* Calculate maximal size of packet seen by hard_start_xmit
+    routine of this device.
+  */
+diff --git a/include/net/rpl.h b/include/net/rpl.h
+index 308ef0a05caef..30fe780d1e7c8 100644
+--- a/include/net/rpl.h
++++ b/include/net/rpl.h
+@@ -23,9 +23,6 @@ static inline int rpl_init(void)
+ static inline void rpl_exit(void) {}
+ #endif
+ 
+-/* Worst decompression memory usage ipv6 address (16) + pad 7 */
+-#define IPV6_RPL_SRH_WORST_SWAP_SIZE (sizeof(struct in6_addr) + 7)
+-
+ size_t ipv6_rpl_srh_size(unsigned char n, unsigned char cmpri,
+ 			 unsigned char cmpre);
+ 
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 3da0601b573ed..51b499d745499 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1073,8 +1073,12 @@ static inline void sock_rps_record_flow(const struct sock *sk)
+ 		 * OR	an additional socket flag
+ 		 * [1] : sk_state and sk_prot are in the same cache line.
+ 		 */
+-		if (sk->sk_state == TCP_ESTABLISHED)
+-			sock_rps_record_flow_hash(sk->sk_rxhash);
++		if (sk->sk_state == TCP_ESTABLISHED) {
++			/* This READ_ONCE() is paired with the WRITE_ONCE()
++			 * from sock_rps_save_rxhash() and sock_rps_reset_rxhash().
++			 */
++			sock_rps_record_flow_hash(READ_ONCE(sk->sk_rxhash));
++		}
+ 	}
+ #endif
+ }
+@@ -1083,15 +1087,19 @@ static inline void sock_rps_save_rxhash(struct sock *sk,
+ 					const struct sk_buff *skb)
+ {
+ #ifdef CONFIG_RPS
+-	if (unlikely(sk->sk_rxhash != skb->hash))
+-		sk->sk_rxhash = skb->hash;
++	/* The following WRITE_ONCE() is paired with the READ_ONCE()
++	 * here, and another one in sock_rps_record_flow().
++	 */
++	if (unlikely(READ_ONCE(sk->sk_rxhash) != skb->hash))
++		WRITE_ONCE(sk->sk_rxhash, skb->hash);
+ #endif
+ }
+ 
+ static inline void sock_rps_reset_rxhash(struct sock *sk)
+ {
+ #ifdef CONFIG_RPS
+-	sk->sk_rxhash = 0;
++	/* Paired with READ_ONCE() in sock_rps_record_flow() */
++	WRITE_ONCE(sk->sk_rxhash, 0);
+ #endif
+ }
+ 
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 94e51d36fb497..9e90d1e7af2c8 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1128,13 +1128,23 @@ static const struct bpf_func_proto bpf_send_signal_thread_proto = {
+ 
+ BPF_CALL_3(bpf_d_path, struct path *, path, char *, buf, u32, sz)
+ {
++	struct path copy;
+ 	long len;
+ 	char *p;
+ 
+ 	if (!sz)
+ 		return 0;
+ 
+-	p = d_path(path, buf, sz);
++	/*
++	 * The path pointer is verified as trusted and safe to use,
++	 * but let's double check it's valid anyway to workaround
++	 * potentially broken verifier.
++	 */
++	len = copy_from_kernel_nofault(&copy, path, sizeof(*path));
++	if (len < 0)
++		return len;
++
++	p = d_path(&copy, buf, sz);
+ 	if (IS_ERR(p)) {
+ 		len = PTR_ERR(p);
+ 	} else {
+diff --git a/lib/cpu_rmap.c b/lib/cpu_rmap.c
+index e77f12bb3c774..1833ad73de6fc 100644
+--- a/lib/cpu_rmap.c
++++ b/lib/cpu_rmap.c
+@@ -268,8 +268,8 @@ static void irq_cpu_rmap_release(struct kref *ref)
+ 	struct irq_glue *glue =
+ 		container_of(ref, struct irq_glue, notify.kref);
+ 
+-	cpu_rmap_put(glue->rmap);
+ 	glue->rmap->obj[glue->index] = NULL;
++	cpu_rmap_put(glue->rmap);
+ 	kfree(glue);
+ }
+ 
+diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c
+index 338e4e9c33b8a..ddd3b4c70a516 100644
+--- a/net/batman-adv/distributed-arp-table.c
++++ b/net/batman-adv/distributed-arp-table.c
+@@ -102,7 +102,6 @@ static void batadv_dat_purge(struct work_struct *work);
+  */
+ static void batadv_dat_start_timer(struct batadv_priv *bat_priv)
+ {
+-	INIT_DELAYED_WORK(&bat_priv->dat.work, batadv_dat_purge);
+ 	queue_delayed_work(batadv_event_workqueue, &bat_priv->dat.work,
+ 			   msecs_to_jiffies(10000));
+ }
+@@ -822,6 +821,7 @@ int batadv_dat_init(struct batadv_priv *bat_priv)
+ 	if (!bat_priv->dat.hash)
+ 		return -ENOMEM;
+ 
++	INIT_DELAYED_WORK(&bat_priv->dat.work, batadv_dat_purge);
+ 	batadv_dat_start_timer(bat_priv);
+ 
+ 	batadv_tvlv_handler_register(bat_priv, batadv_dat_tvlv_ogm_handler_v1,
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 08c473aa0113d..bd6f20ef13f35 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -2685,10 +2685,10 @@ int hci_remove_link_key(struct hci_dev *hdev, bdaddr_t *bdaddr)
+ 
+ int hci_remove_ltk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 bdaddr_type)
+ {
+-	struct smp_ltk *k;
++	struct smp_ltk *k, *tmp;
+ 	int removed = 0;
+ 
+-	list_for_each_entry_rcu(k, &hdev->long_term_keys, list) {
++	list_for_each_entry_safe(k, tmp, &hdev->long_term_keys, list) {
+ 		if (bacmp(bdaddr, &k->bdaddr) || k->bdaddr_type != bdaddr_type)
+ 			continue;
+ 
+@@ -2704,9 +2704,9 @@ int hci_remove_ltk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 bdaddr_type)
+ 
+ void hci_remove_irk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 addr_type)
+ {
+-	struct smp_irk *k;
++	struct smp_irk *k, *tmp;
+ 
+-	list_for_each_entry_rcu(k, &hdev->identity_resolving_keys, list) {
++	list_for_each_entry_safe(k, tmp, &hdev->identity_resolving_keys, list) {
+ 		if (bacmp(bdaddr, &k->bdaddr) || k->addr_type != addr_type)
+ 			continue;
+ 
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index b85ce276e2a3c..568f0f072b3df 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -4303,6 +4303,10 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn,
+ 	result = __le16_to_cpu(rsp->result);
+ 	status = __le16_to_cpu(rsp->status);
+ 
++	if (result == L2CAP_CR_SUCCESS && (dcid < L2CAP_CID_DYN_START ||
++					   dcid > L2CAP_CID_DYN_END))
++		return -EPROTO;
++
+ 	BT_DBG("dcid 0x%4.4x scid 0x%4.4x result 0x%2.2x status 0x%2.2x",
+ 	       dcid, scid, result, status);
+ 
+@@ -4334,6 +4338,11 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn,
+ 
+ 	switch (result) {
+ 	case L2CAP_CR_SUCCESS:
++		if (__l2cap_get_chan_by_dcid(conn, dcid)) {
++			err = -EBADSLT;
++			break;
++		}
++
+ 		l2cap_state_change(chan, BT_CONFIG);
+ 		chan->ident = 0;
+ 		chan->dcid = dcid;
+@@ -4659,7 +4668,9 @@ static inline int l2cap_disconnect_req(struct l2cap_conn *conn,
+ 
+ 	chan->ops->set_shutdown(chan);
+ 
++	l2cap_chan_unlock(chan);
+ 	mutex_lock(&conn->chan_lock);
++	l2cap_chan_lock(chan);
+ 	l2cap_chan_del(chan, ECONNRESET);
+ 	mutex_unlock(&conn->chan_lock);
+ 
+@@ -4698,7 +4709,9 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn,
+ 		return 0;
+ 	}
+ 
++	l2cap_chan_unlock(chan);
+ 	mutex_lock(&conn->chan_lock);
++	l2cap_chan_lock(chan);
+ 	l2cap_chan_del(chan, 0);
+ 	mutex_unlock(&conn->chan_lock);
+ 
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index 9da8fbc81c04a..9169ef174ff09 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -122,7 +122,7 @@ static void j1939_can_recv(struct sk_buff *iskb, void *data)
+ #define J1939_CAN_ID CAN_EFF_FLAG
+ #define J1939_CAN_MASK (CAN_EFF_FLAG | CAN_RTR_FLAG)
+ 
+-static DEFINE_SPINLOCK(j1939_netdev_lock);
++static DEFINE_MUTEX(j1939_netdev_lock);
+ 
+ static struct j1939_priv *j1939_priv_create(struct net_device *ndev)
+ {
+@@ -216,7 +216,7 @@ static void __j1939_rx_release(struct kref *kref)
+ 	j1939_can_rx_unregister(priv);
+ 	j1939_ecu_unmap_all(priv);
+ 	j1939_priv_set(priv->ndev, NULL);
+-	spin_unlock(&j1939_netdev_lock);
++	mutex_unlock(&j1939_netdev_lock);
+ }
+ 
+ /* get pointer to priv without increasing ref counter */
+@@ -244,9 +244,9 @@ static struct j1939_priv *j1939_priv_get_by_ndev(struct net_device *ndev)
+ {
+ 	struct j1939_priv *priv;
+ 
+-	spin_lock(&j1939_netdev_lock);
++	mutex_lock(&j1939_netdev_lock);
+ 	priv = j1939_priv_get_by_ndev_locked(ndev);
+-	spin_unlock(&j1939_netdev_lock);
++	mutex_unlock(&j1939_netdev_lock);
+ 
+ 	return priv;
+ }
+@@ -256,14 +256,14 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
+ 	struct j1939_priv *priv, *priv_new;
+ 	int ret;
+ 
+-	spin_lock(&j1939_netdev_lock);
++	mutex_lock(&j1939_netdev_lock);
+ 	priv = j1939_priv_get_by_ndev_locked(ndev);
+ 	if (priv) {
+ 		kref_get(&priv->rx_kref);
+-		spin_unlock(&j1939_netdev_lock);
++		mutex_unlock(&j1939_netdev_lock);
+ 		return priv;
+ 	}
+-	spin_unlock(&j1939_netdev_lock);
++	mutex_unlock(&j1939_netdev_lock);
+ 
+ 	priv = j1939_priv_create(ndev);
+ 	if (!priv)
+@@ -273,29 +273,31 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
+ 	spin_lock_init(&priv->j1939_socks_lock);
+ 	INIT_LIST_HEAD(&priv->j1939_socks);
+ 
+-	spin_lock(&j1939_netdev_lock);
++	mutex_lock(&j1939_netdev_lock);
+ 	priv_new = j1939_priv_get_by_ndev_locked(ndev);
+ 	if (priv_new) {
+ 		/* Someone was faster than us, use their priv and roll
+ 		 * back our's.
+ 		 */
+ 		kref_get(&priv_new->rx_kref);
+-		spin_unlock(&j1939_netdev_lock);
++		mutex_unlock(&j1939_netdev_lock);
+ 		dev_put(ndev);
+ 		kfree(priv);
+ 		return priv_new;
+ 	}
+ 	j1939_priv_set(ndev, priv);
+-	spin_unlock(&j1939_netdev_lock);
+ 
+ 	ret = j1939_can_rx_register(priv);
+ 	if (ret < 0)
+ 		goto out_priv_put;
+ 
++	mutex_unlock(&j1939_netdev_lock);
+ 	return priv;
+ 
+  out_priv_put:
+ 	j1939_priv_set(ndev, NULL);
++	mutex_unlock(&j1939_netdev_lock);
++
+ 	dev_put(ndev);
+ 	kfree(priv);
+ 
+@@ -304,7 +306,7 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
+ 
+ void j1939_netdev_stop(struct j1939_priv *priv)
+ {
+-	kref_put_lock(&priv->rx_kref, __j1939_rx_release, &j1939_netdev_lock);
++	kref_put_mutex(&priv->rx_kref, __j1939_rx_release, &j1939_netdev_lock);
+ 	j1939_priv_put(priv);
+ }
+ 
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 76cd5f43faf7a..906a08d38c1c8 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -1013,6 +1013,11 @@ void j1939_sk_errqueue(struct j1939_session *session,
+ 
+ void j1939_sk_send_loop_abort(struct sock *sk, int err)
+ {
++	struct j1939_sock *jsk = j1939_sk(sk);
++
++	if (jsk->state & J1939_SOCK_ERRQUEUE)
++		return;
++
+ 	sk->sk_err = err;
+ 
+ 	sk->sk_error_report(sk);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 29e6e11c481c6..f4aad9b00cc90 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4385,8 +4385,10 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
+ 		u32 next_cpu;
+ 		u32 ident;
+ 
+-		/* First check into global flow table if there is a match */
+-		ident = sock_flow_table->ents[hash & sock_flow_table->mask];
++		/* First check into global flow table if there is a match.
++		 * This READ_ONCE() pairs with WRITE_ONCE() from rps_record_sock_flow().
++		 */
++		ident = READ_ONCE(sock_flow_table->ents[hash & sock_flow_table->mask]);
+ 		if ((ident ^ hash) & ~rps_cpu_mask)
+ 			goto try_rps;
+ 
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 3a34e9768bff0..5aa8bde3e9c8e 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -31,7 +31,6 @@
+ static int two = 2;
+ static int four = 4;
+ static int thousand = 1000;
+-static int gso_max_segs = GSO_MAX_SEGS;
+ static int tcp_retr1_max = 255;
+ static int ip_local_port_range_min[] = { 1, 1 };
+ static int ip_local_port_range_max[] = { 65535, 65535 };
+@@ -1193,7 +1192,6 @@ static struct ctl_table ipv4_net_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dou8vec_minmax,
+ 		.extra1		= SYSCTL_ONE,
+-		.extra2		= &gso_max_segs,
+ 	},
+ 	{
+ 		.procname	= "tcp_min_rtt_wlen",
+diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c
+index 4932dea9820ba..cdad9019c77c4 100644
+--- a/net/ipv6/exthdrs.c
++++ b/net/ipv6/exthdrs.c
+@@ -552,24 +552,6 @@ looped_back:
+ 		return -1;
+ 	}
+ 
+-	if (skb_cloned(skb)) {
+-		if (pskb_expand_head(skb, IPV6_RPL_SRH_WORST_SWAP_SIZE, 0,
+-				     GFP_ATOMIC)) {
+-			__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+-					IPSTATS_MIB_OUTDISCARDS);
+-			kfree_skb(skb);
+-			return -1;
+-		}
+-	} else {
+-		err = skb_cow_head(skb, IPV6_RPL_SRH_WORST_SWAP_SIZE);
+-		if (unlikely(err)) {
+-			kfree_skb(skb);
+-			return -1;
+-		}
+-	}
+-
+-	hdr = (struct ipv6_rpl_sr_hdr *)skb_transport_header(skb);
+-
+ 	if (!pskb_may_pull(skb, ipv6_rpl_srh_size(n, hdr->cmpri,
+ 						  hdr->cmpre))) {
+ 		kfree_skb(skb);
+@@ -615,6 +597,17 @@ looped_back:
+ 	skb_pull(skb, ((hdr->hdrlen + 1) << 3));
+ 	skb_postpull_rcsum(skb, oldhdr,
+ 			   sizeof(struct ipv6hdr) + ((hdr->hdrlen + 1) << 3));
++	if (unlikely(!hdr->segments_left)) {
++		if (pskb_expand_head(skb, sizeof(struct ipv6hdr) + ((chdr->hdrlen + 1) << 3), 0,
++				     GFP_ATOMIC)) {
++			__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_OUTDISCARDS);
++			kfree_skb(skb);
++			kfree(buf);
++			return -1;
++		}
++
++		oldhdr = ipv6_hdr(skb);
++	}
+ 	skb_push(skb, ((chdr->hdrlen + 1) << 3) + sizeof(struct ipv6hdr));
+ 	skb_reset_network_header(skb);
+ 	skb_mac_header_rebuild(skb);
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index 1bf6ab83644b3..55ac0cc12657c 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -1704,6 +1704,14 @@ call_ad(struct sock *ctnl, struct sk_buff *skb, struct ip_set *set,
+ 	bool eexist = flags & IPSET_FLAG_EXIST, retried = false;
+ 
+ 	do {
++		if (retried) {
++			__ip_set_get(set);
++			nfnl_unlock(NFNL_SUBSYS_IPSET);
++			cond_resched();
++			nfnl_lock(NFNL_SUBSYS_IPSET);
++			__ip_set_put(set);
++		}
++
+ 		ip_set_lock(set);
+ 		ret = set->variant->uadt(set, tb, adt, &lineno, flags, retried);
+ 		ip_set_unlock(set);
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 193a18bfddc0a..f82a234ac53a1 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -2075,6 +2075,9 @@ static int nf_confirm_cthelper(struct sk_buff *skb, struct nf_conn *ct,
+ 		return 0;
+ 
+ 	helper = rcu_dereference(help->helper);
++	if (!helper)
++		return 0;
++
+ 	if (!(helper->flags & NF_CT_HELPER_F_USERSPACE))
+ 		return 0;
+ 
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 53d315ed94307..befe42aad04ba 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -41,8 +41,6 @@
+ #include <net/tc_act/tc_gate.h>
+ #include <net/flow_offload.h>
+ 
+-extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
+-
+ /* The list of all installed classifier types */
+ static LIST_HEAD(tcf_proto_base);
+ 
+@@ -2774,6 +2772,7 @@ static int tc_chain_tmplt_add(struct tcf_chain *chain, struct net *net,
+ 		return PTR_ERR(ops);
+ 	if (!ops->tmplt_create || !ops->tmplt_destroy || !ops->tmplt_dump) {
+ 		NL_SET_ERR_MSG(extack, "Chain templates are not supported with specified classifier");
++		module_put(ops->owner);
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
+index cf04f70e96bf1..4f6b5b6fba3ed 100644
+--- a/net/sched/sch_fq_pie.c
++++ b/net/sched/sch_fq_pie.c
+@@ -201,6 +201,11 @@ out:
+ 	return NET_XMIT_CN;
+ }
+ 
++static struct netlink_range_validation fq_pie_q_range = {
++	.min = 1,
++	.max = 1 << 20,
++};
++
+ static const struct nla_policy fq_pie_policy[TCA_FQ_PIE_MAX + 1] = {
+ 	[TCA_FQ_PIE_LIMIT]		= {.type = NLA_U32},
+ 	[TCA_FQ_PIE_FLOWS]		= {.type = NLA_U32},
+@@ -208,7 +213,8 @@ static const struct nla_policy fq_pie_policy[TCA_FQ_PIE_MAX + 1] = {
+ 	[TCA_FQ_PIE_TUPDATE]		= {.type = NLA_U32},
+ 	[TCA_FQ_PIE_ALPHA]		= {.type = NLA_U32},
+ 	[TCA_FQ_PIE_BETA]		= {.type = NLA_U32},
+-	[TCA_FQ_PIE_QUANTUM]		= {.type = NLA_U32},
++	[TCA_FQ_PIE_QUANTUM]		=
++			NLA_POLICY_FULL_RANGE(NLA_U32, &fq_pie_q_range),
+ 	[TCA_FQ_PIE_MEMORY_LIMIT]	= {.type = NLA_U32},
+ 	[TCA_FQ_PIE_ECN_PROB]		= {.type = NLA_U32},
+ 	[TCA_FQ_PIE_ECN]		= {.type = NLA_U32},
+diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
+index 0ef15f8fba902..d5ee961ca72d5 100644
+--- a/net/smc/smc_llc.c
++++ b/net/smc/smc_llc.c
+@@ -716,6 +716,8 @@ static int smc_llc_add_link_cont(struct smc_link *link,
+ 	addc_llc->num_rkeys = *num_rkeys_todo;
+ 	n = *num_rkeys_todo;
+ 	for (i = 0; i < min_t(u8, n, SMC_LLC_RKEYS_PER_CONT_MSG); i++) {
++		while (*buf_pos && !(*buf_pos)->used)
++			*buf_pos = smc_llc_get_next_rmb(lgr, buf_lst, *buf_pos);
+ 		if (!*buf_pos) {
+ 			addc_llc->num_rkeys = addc_llc->num_rkeys -
+ 					      *num_rkeys_todo;
+@@ -731,8 +733,6 @@ static int smc_llc_add_link_cont(struct smc_link *link,
+ 
+ 		(*num_rkeys_todo)--;
+ 		*buf_pos = smc_llc_get_next_rmb(lgr, buf_lst, *buf_pos);
+-		while (*buf_pos && !(*buf_pos)->used)
+-			*buf_pos = smc_llc_get_next_rmb(lgr, buf_lst, *buf_pos);
+ 	}
+ 	addc_llc->hd.common.type = SMC_LLC_ADD_LINK_CONT;
+ 	addc_llc->hd.length = sizeof(struct smc_llc_msg_add_link_cont);
+diff --git a/scripts/gcc-plugins/gcc-common.h b/scripts/gcc-plugins/gcc-common.h
+index 9ad76b7f3f10e..6d4563b8a52c6 100644
+--- a/scripts/gcc-plugins/gcc-common.h
++++ b/scripts/gcc-plugins/gcc-common.h
+@@ -108,7 +108,13 @@
+ #include "varasm.h"
+ #include "stor-layout.h"
+ #include "internal-fn.h"
++#endif
++
++#include "gimple.h"
++
++#if BUILDING_GCC_VERSION >= 4009
+ #include "gimple-expr.h"
++#include "gimple-iterator.h"
+ #include "gimple-fold.h"
+ #include "context.h"
+ #include "tree-ssa-alias.h"
+@@ -124,13 +130,10 @@
+ #include "gimplify.h"
+ #endif
+ 
+-#include "gimple.h"
+-
+ #if BUILDING_GCC_VERSION >= 4009
+ #include "tree-ssa-operands.h"
+ #include "tree-phinodes.h"
+ #include "tree-cfg.h"
+-#include "gimple-iterator.h"
+ #include "gimple-ssa.h"
+ #include "ssa-iterators.h"
+ #endif
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 21c8b474a4dfb..8a42262dd7faf 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -11162,6 +11162,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ 	SND_PCI_QUIRK(0x103c, 0x872b, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ 	SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
++	SND_PCI_QUIRK(0x103c, 0x8768, "HP Slim Desktop S01", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x877e, "HP 288 Pro G6", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
+@@ -11183,6 +11184,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x14cd, 0x5003, "USI", ALC662_FIXUP_USI_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC662_FIXUP_LENOVO_MULTI_CODECS),
+ 	SND_PCI_QUIRK(0x17aa, 0x1057, "Lenovo P360", ALC897_FIXUP_HEADSET_MIC_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x1064, "Lenovo P3 Tower", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32ca, "Lenovo ThinkCentre M80", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN),
+diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c
+index 15b3f47fbfa35..9f66f6dc2c67f 100644
+--- a/sound/soc/codecs/wsa881x.c
++++ b/sound/soc/codecs/wsa881x.c
+@@ -646,7 +646,6 @@ static struct regmap_config wsa881x_regmap_config = {
+ 	.readable_reg = wsa881x_readable_register,
+ 	.reg_format_endian = REGMAP_ENDIAN_NATIVE,
+ 	.val_format_endian = REGMAP_ENDIAN_NATIVE,
+-	.can_multi_write = true,
+ };
+ 
+ enum {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-06-14 10:34 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-06-14 10:34 UTC (permalink / raw
  To: gentoo-commits

commit:     7d35705c5ee679272267b1bcfe1eb0d2b16e8e9e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 14 10:34:40 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 14 10:34:40 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7d35705c

Remove redundant patch

Removed:
2950_gcc-plugins-reorg-gimple-incl-for-GCC-13.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 ----
 ..._gcc-plugins-reorg-gimple-incl-for-GCC-13.patch | 26 ----------------------
 2 files changed, 30 deletions(-)

diff --git a/0000_README b/0000_README
index 3b319458..8d2598a6 100644
--- a/0000_README
+++ b/0000_README
@@ -803,10 +803,6 @@ Patch:  2940_gcc-plugins-drop-std-gnu-plus-plus-to-fix-GCC-13-build.patch
 From:   https://lore.kernel.org/all/20230201230009.2252783-1-sam@gentoo.org/
 Desc:   gcc-plugins: drop -std=gnu++11 to fix GCC 13 build
 
-Patch:  2950_gcc-plugins-reorg-gimple-incl-for-GCC-13.patch
-From:   https://git.kernel.org
-Desc:   gcc-plugins: Reorganize gimple includes for GCC 13
-
 Patch:  3000_Support-printing-firmware-info.patch
 From:   https://bugs.gentoo.org/732852
 Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev

diff --git a/2950_gcc-plugins-reorg-gimple-incl-for-GCC-13.patch b/2950_gcc-plugins-reorg-gimple-incl-for-GCC-13.patch
deleted file mode 100644
index 3c94f239..00000000
--- a/2950_gcc-plugins-reorg-gimple-incl-for-GCC-13.patch
+++ /dev/null
@@ -1,26 +0,0 @@
---- a/scripts/gcc-plugins/gcc-common.h	2023-04-27 09:30:13.021916723 -0400
-+++ b/scripts/gcc-plugins/gcc-common.h	2023-04-27 09:31:15.088866298 -0400
-@@ -108,7 +108,9 @@
- #include "varasm.h"
- #include "stor-layout.h"
- #include "internal-fn.h"
-+#include "gimple.h"
- #include "gimple-expr.h"
-+#include "gimple-iterator.h"
- #include "gimple-fold.h"
- #include "context.h"
- #include "tree-ssa-alias.h"
-@@ -124,13 +126,10 @@
- #include "gimplify.h"
- #endif
- 
--#include "gimple.h"
--
- #if BUILDING_GCC_VERSION >= 4009
- #include "tree-ssa-operands.h"
- #include "tree-phinodes.h"
- #include "tree-cfg.h"
--#include "gimple-iterator.h"
- #include "gimple-ssa.h"
- #include "ssa-iterators.h"
- #endif


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-06-21 14:55 Alice Ferrazzi
  0 siblings, 0 replies; 289+ messages in thread
From: Alice Ferrazzi @ 2023-06-21 14:55 UTC (permalink / raw
  To: gentoo-commits

commit:     0ece0912e9cd80f7f8fc1ebcad4ef357e2f3804d
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 21 14:54:43 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Jun 21 14:54:43 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0ece0912

Linux patch 5.10.185

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README               |     4 +
 1184_linux-5.10.185.patch | 16047 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 16051 insertions(+)

diff --git a/0000_README b/0000_README
index 8d2598a6..feeba516 100644
--- a/0000_README
+++ b/0000_README
@@ -779,6 +779,10 @@ Patch:  1183_linux-5.10.184.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.184
 
+Patch:  1184_linux-5.10.185.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.185
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1184_linux-5.10.185.patch b/1184_linux-5.10.185.patch
new file mode 100644
index 00000000..4bc1e1df
--- /dev/null
+++ b/1184_linux-5.10.185.patch
@@ -0,0 +1,16047 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index eb437d659f2c4..1f4a0523dc1ab 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -866,10 +866,6 @@
+ 
+ 	debugpat	[X86] Enable PAT debugging
+ 
+-	decnet.addr=	[HW,NET]
+-			Format: <area>[,<node>]
+-			See also Documentation/networking/decnet.rst.
+-
+ 	default_hugepagesz=
+ 			[HW] The size of the default HugeTLB page. This is
+ 			the size represented by the legacy /proc/ hugepages
+diff --git a/Documentation/admin-guide/sysctl/net.rst b/Documentation/admin-guide/sysctl/net.rst
+index 7f553859dba82..4f1a686b4b64b 100644
+--- a/Documentation/admin-guide/sysctl/net.rst
++++ b/Documentation/admin-guide/sysctl/net.rst
+@@ -34,13 +34,14 @@ Table : Subdirectories in /proc/sys/net
+  ========= =================== = ========== ==================
+  Directory Content               Directory  Content
+  ========= =================== = ========== ==================
+- core      General parameter     appletalk  Appletalk protocol
+- unix      Unix domain sockets   netrom     NET/ROM
+- 802       E802 protocol         ax25       AX25
+- ethernet  Ethernet protocol     rose       X.25 PLP layer
+- ipv4      IP version 4          x25        X.25 protocol
+- bridge    Bridging              decnet     DEC net
+- ipv6      IP version 6          tipc       TIPC
++ 802       E802 protocol         mptcp     Multipath TCP
++ appletalk Appletalk protocol    netfilter Network Filter
++ ax25      AX25                  netrom     NET/ROM
++ bridge    Bridging              rose      X.25 PLP layer
++ core      General parameter     tipc      TIPC
++ ethernet  Ethernet protocol     unix      Unix domain sockets
++ ipv4      IP version 4          x25       X.25 protocol
++ ipv6      IP version 6
+  ========= =================== = ========== ==================
+ 
+ 1. /proc/sys/net/core - Network core options
+diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst
+index 741aa37dc1819..2a7444e3a4c21 100644
+--- a/Documentation/core-api/kernel-api.rst
++++ b/Documentation/core-api/kernel-api.rst
+@@ -24,11 +24,8 @@ String Conversions
+ .. kernel-doc:: lib/vsprintf.c
+    :export:
+ 
+-.. kernel-doc:: include/linux/kernel.h
+-   :functions: kstrtol
+-
+-.. kernel-doc:: include/linux/kernel.h
+-   :functions: kstrtoul
++.. kernel-doc:: include/linux/kstrtox.h
++   :functions: kstrtol kstrtoul
+ 
+ .. kernel-doc:: lib/kstrtox.c
+    :export:
+diff --git a/Documentation/networking/decnet.rst b/Documentation/networking/decnet.rst
+deleted file mode 100644
+index b8bc11ff8370d..0000000000000
+--- a/Documentation/networking/decnet.rst
++++ /dev/null
+@@ -1,243 +0,0 @@
+-.. SPDX-License-Identifier: GPL-2.0
+-
+-=========================================
+-Linux DECnet Networking Layer Information
+-=========================================
+-
+-1. Other documentation....
+-==========================
+-
+-   - Project Home Pages
+-     - http://www.chygwyn.com/				   - Kernel info
+-     - http://linux-decnet.sourceforge.net/                - Userland tools
+-     - http://www.sourceforge.net/projects/linux-decnet/   - Status page
+-
+-2. Configuring the kernel
+-=========================
+-
+-Be sure to turn on the following options:
+-
+-    - CONFIG_DECNET (obviously)
+-    - CONFIG_PROC_FS (to see what's going on)
+-    - CONFIG_SYSCTL (for easy configuration)
+-
+-if you want to try out router support (not properly debugged yet)
+-you'll need the following options as well...
+-
+-    - CONFIG_DECNET_ROUTER (to be able to add/delete routes)
+-    - CONFIG_NETFILTER (will be required for the DECnet routing daemon)
+-
+-Don't turn on SIOCGIFCONF support for DECnet unless you are really sure
+-that you need it, in general you won't and it can cause ifconfig to
+-malfunction.
+-
+-Run time configuration has changed slightly from the 2.4 system. If you
+-want to configure an endnode, then the simplified procedure is as follows:
+-
+- - Set the MAC address on your ethernet card before starting _any_ other
+-   network protocols.
+-
+-As soon as your network card is brought into the UP state, DECnet should
+-start working. If you need something more complicated or are unsure how
+-to set the MAC address, see the next section. Also all configurations which
+-worked with 2.4 will work under 2.5 with no change.
+-
+-3. Command line options
+-=======================
+-
+-You can set a DECnet address on the kernel command line for compatibility
+-with the 2.4 configuration procedure, but in general it's not needed any more.
+-If you do st a DECnet address on the command line, it has only one purpose
+-which is that its added to the addresses on the loopback device.
+-
+-With 2.4 kernels, DECnet would only recognise addresses as local if they
+-were added to the loopback device. In 2.5, any local interface address
+-can be used to loop back to the local machine. Of course this does not
+-prevent you adding further addresses to the loopback device if you
+-want to.
+-
+-N.B. Since the address list of an interface determines the addresses for
+-which "hello" messages are sent, if you don't set an address on the loopback
+-interface then you won't see any entries in /proc/net/neigh for the local
+-host until such time as you start a connection. This doesn't affect the
+-operation of the local communications in any other way though.
+-
+-The kernel command line takes options looking like the following::
+-
+-    decnet.addr=1,2
+-
+-the two numbers are the node address 1,2 = 1.2 For 2.2.xx kernels
+-and early 2.3.xx kernels, you must use a comma when specifying the
+-DECnet address like this. For more recent 2.3.xx kernels, you may
+-use almost any character except space, although a `.` would be the most
+-obvious choice :-)
+-
+-There used to be a third number specifying the node type. This option
+-has gone away in favour of a per interface node type. This is now set
+-using /proc/sys/net/decnet/conf/<dev>/forwarding. This file can be
+-set with a single digit, 0=EndNode, 1=L1 Router and  2=L2 Router.
+-
+-There are also equivalent options for modules. The node address can
+-also be set through the /proc/sys/net/decnet/ files, as can other system
+-parameters.
+-
+-Currently the only supported devices are ethernet and ip_gre. The
+-ethernet address of your ethernet card has to be set according to the DECnet
+-address of the node in order for it to be autoconfigured (and then appear in
+-/proc/net/decnet_dev). There is a utility available at the above
+-FTP sites called dn2ethaddr which can compute the correct ethernet
+-address to use. The address can be set by ifconfig either before or
+-at the time the device is brought up. If you are using RedHat you can
+-add the line::
+-
+-    MACADDR=AA:00:04:00:03:04
+-
+-or something similar, to /etc/sysconfig/network-scripts/ifcfg-eth0 or
+-wherever your network card's configuration lives. Setting the MAC address
+-of your ethernet card to an address starting with "hi-ord" will cause a
+-DECnet address which matches to be added to the interface (which you can
+-verify with iproute2).
+-
+-The default device for routing can be set through the /proc filesystem
+-by setting /proc/sys/net/decnet/default_device to the
+-device you want DECnet to route packets out of when no specific route
+-is available. Usually this will be eth0, for example::
+-
+-    echo -n "eth0" >/proc/sys/net/decnet/default_device
+-
+-If you don't set the default device, then it will default to the first
+-ethernet card which has been autoconfigured as described above. You can
+-confirm that by looking in the default_device file of course.
+-
+-There is a list of what the other files under /proc/sys/net/decnet/ do
+-on the kernel patch web site (shown above).
+-
+-4. Run time kernel configuration
+-================================
+-
+-
+-This is either done through the sysctl/proc interface (see the kernel web
+-pages for details on what the various options do) or through the iproute2
+-package in the same way as IPv4/6 configuration is performed.
+-
+-Documentation for iproute2 is included with the package, although there is
+-as yet no specific section on DECnet, most of the features apply to both
+-IP and DECnet, albeit with DECnet addresses instead of IP addresses and
+-a reduced functionality.
+-
+-If you want to configure a DECnet router you'll need the iproute2 package
+-since its the _only_ way to add and delete routes currently. Eventually
+-there will be a routing daemon to send and receive routing messages for
+-each interface and update the kernel routing tables accordingly. The
+-routing daemon will use netfilter to listen to routing packets, and
+-rtnetlink to update the kernels routing tables.
+-
+-The DECnet raw socket layer has been removed since it was there purely
+-for use by the routing daemon which will now use netfilter (a much cleaner
+-and more generic solution) instead.
+-
+-5. How can I tell if its working?
+-=================================
+-
+-Here is a quick guide of what to look for in order to know if your DECnet
+-kernel subsystem is working.
+-
+-   - Is the node address set (see /proc/sys/net/decnet/node_address)
+-   - Is the node of the correct type
+-     (see /proc/sys/net/decnet/conf/<dev>/forwarding)
+-   - Is the Ethernet MAC address of each Ethernet card set to match
+-     the DECnet address. If in doubt use the dn2ethaddr utility available
+-     at the ftp archive.
+-   - If the previous two steps are satisfied, and the Ethernet card is up,
+-     you should find that it is listed in /proc/net/decnet_dev and also
+-     that it appears as a directory in /proc/sys/net/decnet/conf/. The
+-     loopback device (lo) should also appear and is required to communicate
+-     within a node.
+-   - If you have any DECnet routers on your network, they should appear
+-     in /proc/net/decnet_neigh, otherwise this file will only contain the
+-     entry for the node itself (if it doesn't check to see if lo is up).
+-   - If you want to send to any node which is not listed in the
+-     /proc/net/decnet_neigh file, you'll need to set the default device
+-     to point to an Ethernet card with connection to a router. This is
+-     again done with the /proc/sys/net/decnet/default_device file.
+-   - Try starting a simple server and client, like the dnping/dnmirror
+-     over the loopback interface. With luck they should communicate.
+-     For this step and those after, you'll need the DECnet library
+-     which can be obtained from the above ftp sites as well as the
+-     actual utilities themselves.
+-   - If this seems to work, then try talking to a node on your local
+-     network, and see if you can obtain the same results.
+-   - At this point you are on your own... :-)
+-
+-6. How to send a bug report
+-===========================
+-
+-If you've found a bug and want to report it, then there are several things
+-you can do to help me work out exactly what it is that is wrong. Useful
+-information (_most_ of which _is_ _essential_) includes:
+-
+- - What kernel version are you running ?
+- - What version of the patch are you running ?
+- - How far though the above set of tests can you get ?
+- - What is in the /proc/decnet* files and /proc/sys/net/decnet/* files ?
+- - Which services are you running ?
+- - Which client caused the problem ?
+- - How much data was being transferred ?
+- - Was the network congested ?
+- - How can the problem be reproduced ?
+- - Can you use tcpdump to get a trace ? (N.B. Most (all?) versions of
+-   tcpdump don't understand how to dump DECnet properly, so including
+-   the hex listing of the packet contents is _essential_, usually the -x flag.
+-   You may also need to increase the length grabbed with the -s flag. The
+-   -e flag also provides very useful information (ethernet MAC addresses))
+-
+-7. MAC FAQ
+-==========
+-
+-A quick FAQ on ethernet MAC addresses to explain how Linux and DECnet
+-interact and how to get the best performance from your hardware.
+-
+-Ethernet cards are designed to normally only pass received network frames
+-to a host computer when they are addressed to it, or to the broadcast address.
+-
+-Linux has an interface which allows the setting of extra addresses for
+-an ethernet card to listen to. If the ethernet card supports it, the
+-filtering operation will be done in hardware, if not the extra unwanted packets
+-received will be discarded by the host computer. In the latter case,
+-significant processor time and bus bandwidth can be used up on a busy
+-network (see the NAPI documentation for a longer explanation of these
+-effects).
+-
+-DECnet makes use of this interface to allow running DECnet on an ethernet
+-card which has already been configured using TCP/IP (presumably using the
+-built in MAC address of the card, as usual) and/or to allow multiple DECnet
+-addresses on each physical interface. If you do this, be aware that if your
+-ethernet card doesn't support perfect hashing in its MAC address filter
+-then your computer will be doing more work than required. Some cards
+-will simply set themselves into promiscuous mode in order to receive
+-packets from the DECnet specified addresses. So if you have one of these
+-cards its better to set the MAC address of the card as described above
+-to gain the best efficiency. Better still is to use a card which supports
+-NAPI as well.
+-
+-
+-8. Mailing list
+-===============
+-
+-If you are keen to get involved in development, or want to ask questions
+-about configuration, or even just report bugs, then there is a mailing
+-list that you can join, details are at:
+-
+-http://sourceforge.net/mail/?group_id=4993
+-
+-9. Legal Info
+-=============
+-
+-The Linux DECnet project team have placed their code under the GPL. The
+-software is provided "as is" and without warranty express or implied.
+-DECnet is a trademark of Compaq. This software is not a product of
+-Compaq. We acknowledge the help of people at Compaq in providing extra
+-documentation above and beyond what was previously publicly available.
+-
+-Steve Whitehouse <SteveW@ACM.org>
+-
+diff --git a/Documentation/networking/index.rst b/Documentation/networking/index.rst
+index 63ef386afd0ab..9fab919ed530f 100644
+--- a/Documentation/networking/index.rst
++++ b/Documentation/networking/index.rst
+@@ -46,7 +46,6 @@ Contents:
+    cdc_mbim
+    dccp
+    dctcp
+-   decnet
+    dns_resolver
+    driver
+    eql
+diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst b/Documentation/userspace-api/ioctl/ioctl-number.rst
+index 55a2d9b2ce33c..a7373d4e3984c 100644
+--- a/Documentation/userspace-api/ioctl/ioctl-number.rst
++++ b/Documentation/userspace-api/ioctl/ioctl-number.rst
+@@ -303,7 +303,6 @@ Code  Seq#    Include File                                           Comments
+ 0x89  00-06  arch/x86/include/asm/sockios.h
+ 0x89  0B-DF  linux/sockios.h
+ 0x89  E0-EF  linux/sockios.h                                         SIOCPROTOPRIVATE range
+-0x89  E0-EF  linux/dn.h                                              PROTOPRIVATE range
+ 0x89  F0-FF  linux/sockios.h                                         SIOCDEVPRIVATE range
+ 0x8B  all    linux/wireless.h
+ 0x8C  00-3F                                                          WiNRADiO driver
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 6c5efc4013ab5..cdb5f1f22f4c4 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -4905,13 +4905,6 @@ F:	include/linux/tfrc.h
+ F:	include/uapi/linux/dccp.h
+ F:	net/dccp/
+ 
+-DECnet NETWORK LAYER
+-L:	linux-decnet-user@lists.sourceforge.net
+-S:	Orphan
+-W:	http://linux-decnet.sourceforge.net
+-F:	Documentation/networking/decnet.rst
+-F:	net/decnet/
+-
+ DECSTATION PLATFORM SUPPORT
+ M:	"Maciej W. Rozycki" <macro@linux-mips.org>
+ L:	linux-mips@vger.kernel.org
+diff --git a/Makefile b/Makefile
+index 3450b061e8d69..73e6cc1992c21 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 184
++SUBLEVEL = 185
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/vexpress-v2p-ca5s.dts b/arch/arm/boot/dts/vexpress-v2p-ca5s.dts
+index 7aa64ae257798..6fc48016fe8ed 100644
+--- a/arch/arm/boot/dts/vexpress-v2p-ca5s.dts
++++ b/arch/arm/boot/dts/vexpress-v2p-ca5s.dts
+@@ -132,6 +132,7 @@
+ 		reg = <0x2c0f0000 0x1000>;
+ 		interrupts = <0 84 4>;
+ 		cache-level = <2>;
++		cache-unified;
+ 	};
+ 
+ 	pmu {
+diff --git a/arch/mips/alchemy/common/dbdma.c b/arch/mips/alchemy/common/dbdma.c
+index 4ca2c28878e0f..e9ee9ab90a0c6 100644
+--- a/arch/mips/alchemy/common/dbdma.c
++++ b/arch/mips/alchemy/common/dbdma.c
+@@ -30,6 +30,7 @@
+  *
+  */
+ 
++#include <linux/dma-map-ops.h> /* for dma_default_coherent */
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/slab.h>
+@@ -623,17 +624,18 @@ u32 au1xxx_dbdma_put_source(u32 chanid, dma_addr_t buf, int nbytes, u32 flags)
+ 		dp->dscr_cmd0 &= ~DSCR_CMD0_IE;
+ 
+ 	/*
+-	 * There is an errata on the Au1200/Au1550 parts that could result
+-	 * in "stale" data being DMA'ed. It has to do with the snoop logic on
+-	 * the cache eviction buffer.  DMA_NONCOHERENT is on by default for
+-	 * these parts. If it is fixed in the future, these dma_cache_inv will
+-	 * just be nothing more than empty macros. See io.h.
++	 * There is an erratum on certain Au1200/Au1550 revisions that could
++	 * result in "stale" data being DMA'ed. It has to do with the snoop
++	 * logic on the cache eviction buffer.  dma_default_coherent is set
++	 * to false on these parts.
+ 	 */
+-	dma_cache_wback_inv((unsigned long)buf, nbytes);
++	if (!dma_default_coherent)
++		dma_cache_wback_inv(KSEG0ADDR(buf), nbytes);
+ 	dp->dscr_cmd0 |= DSCR_CMD0_V;	/* Let it rip */
+ 	wmb(); /* drain writebuffer */
+ 	dma_cache_wback_inv((unsigned long)dp, sizeof(*dp));
+ 	ctp->chan_ptr->ddma_dbell = 0;
++	wmb(); /* force doorbell write out to dma engine */
+ 
+ 	/* Get next descriptor pointer. */
+ 	ctp->put_ptr = phys_to_virt(DSCR_GET_NXTPTR(dp->dscr_nxtptr));
+@@ -685,17 +687,18 @@ u32 au1xxx_dbdma_put_dest(u32 chanid, dma_addr_t buf, int nbytes, u32 flags)
+ 			  dp->dscr_source1, dp->dscr_dest0, dp->dscr_dest1);
+ #endif
+ 	/*
+-	 * There is an errata on the Au1200/Au1550 parts that could result in
+-	 * "stale" data being DMA'ed. It has to do with the snoop logic on the
+-	 * cache eviction buffer.  DMA_NONCOHERENT is on by default for these
+-	 * parts. If it is fixed in the future, these dma_cache_inv will just
+-	 * be nothing more than empty macros. See io.h.
++	 * There is an erratum on certain Au1200/Au1550 revisions that could
++	 * result in "stale" data being DMA'ed. It has to do with the snoop
++	 * logic on the cache eviction buffer.  dma_default_coherent is set
++	 * to false on these parts.
+ 	 */
+-	dma_cache_inv((unsigned long)buf, nbytes);
++	if (!dma_default_coherent)
++		dma_cache_inv(KSEG0ADDR(buf), nbytes);
+ 	dp->dscr_cmd0 |= DSCR_CMD0_V;	/* Let it rip */
+ 	wmb(); /* drain writebuffer */
+ 	dma_cache_wback_inv((unsigned long)dp, sizeof(*dp));
+ 	ctp->chan_ptr->ddma_dbell = 0;
++	wmb(); /* force doorbell write out to dma engine */
+ 
+ 	/* Get next descriptor pointer. */
+ 	ctp->put_ptr = phys_to_virt(DSCR_GET_NXTPTR(dp->dscr_nxtptr));
+diff --git a/arch/mips/configs/decstation_64_defconfig b/arch/mips/configs/decstation_64_defconfig
+index 85f1955b4b004..4a81297e21a72 100644
+--- a/arch/mips/configs/decstation_64_defconfig
++++ b/arch/mips/configs/decstation_64_defconfig
+@@ -53,8 +53,6 @@ CONFIG_IPV6_SUBTREES=y
+ CONFIG_NETWORK_SECMARK=y
+ CONFIG_IP_SCTP=m
+ CONFIG_VLAN_8021Q=m
+-CONFIG_DECNET=m
+-CONFIG_DECNET_ROUTER=y
+ # CONFIG_WIRELESS is not set
+ # CONFIG_UEVENT_HELPER is not set
+ # CONFIG_FW_LOADER is not set
+diff --git a/arch/mips/configs/decstation_defconfig b/arch/mips/configs/decstation_defconfig
+index 30a6eafdb1d01..fd35454bae4ce 100644
+--- a/arch/mips/configs/decstation_defconfig
++++ b/arch/mips/configs/decstation_defconfig
+@@ -49,8 +49,6 @@ CONFIG_IPV6_SUBTREES=y
+ CONFIG_NETWORK_SECMARK=y
+ CONFIG_IP_SCTP=m
+ CONFIG_VLAN_8021Q=m
+-CONFIG_DECNET=m
+-CONFIG_DECNET_ROUTER=y
+ # CONFIG_WIRELESS is not set
+ # CONFIG_UEVENT_HELPER is not set
+ # CONFIG_FW_LOADER is not set
+diff --git a/arch/mips/configs/decstation_r4k_defconfig b/arch/mips/configs/decstation_r4k_defconfig
+index e2b58dbf4aa9a..7ed8f4c7cbdd9 100644
+--- a/arch/mips/configs/decstation_r4k_defconfig
++++ b/arch/mips/configs/decstation_r4k_defconfig
+@@ -48,8 +48,6 @@ CONFIG_IPV6_SUBTREES=y
+ CONFIG_NETWORK_SECMARK=y
+ CONFIG_IP_SCTP=m
+ CONFIG_VLAN_8021Q=m
+-CONFIG_DECNET=m
+-CONFIG_DECNET_ROUTER=y
+ # CONFIG_WIRELESS is not set
+ # CONFIG_UEVENT_HELPER is not set
+ # CONFIG_FW_LOADER is not set
+diff --git a/arch/mips/configs/gpr_defconfig b/arch/mips/configs/gpr_defconfig
+index 9085f4d6c6985..5b9b59a8fe68e 100644
+--- a/arch/mips/configs/gpr_defconfig
++++ b/arch/mips/configs/gpr_defconfig
+@@ -69,7 +69,6 @@ CONFIG_IP_NF_RAW=m
+ CONFIG_IP_NF_ARPTABLES=m
+ CONFIG_IP_NF_ARPFILTER=m
+ CONFIG_IP_NF_ARP_MANGLE=m
+-CONFIG_DECNET_NF_GRABULATOR=m
+ CONFIG_BRIDGE_NF_EBTABLES=m
+ CONFIG_BRIDGE_EBT_BROUTE=m
+ CONFIG_BRIDGE_EBT_T_FILTER=m
+@@ -99,7 +98,6 @@ CONFIG_ATM_MPOA=m
+ CONFIG_ATM_BR2684=m
+ CONFIG_BRIDGE=m
+ CONFIG_VLAN_8021Q=m
+-CONFIG_DECNET=m
+ CONFIG_LLC2=m
+ CONFIG_ATALK=m
+ CONFIG_DEV_APPLETALK=m
+diff --git a/arch/mips/configs/mtx1_defconfig b/arch/mips/configs/mtx1_defconfig
+index 914af125a7fa2..d6578536d77cc 100644
+--- a/arch/mips/configs/mtx1_defconfig
++++ b/arch/mips/configs/mtx1_defconfig
+@@ -117,7 +117,6 @@ CONFIG_IP6_NF_FILTER=m
+ CONFIG_IP6_NF_TARGET_REJECT=m
+ CONFIG_IP6_NF_MANGLE=m
+ CONFIG_IP6_NF_RAW=m
+-CONFIG_DECNET_NF_GRABULATOR=m
+ CONFIG_BRIDGE_NF_EBTABLES=m
+ CONFIG_BRIDGE_EBT_BROUTE=m
+ CONFIG_BRIDGE_EBT_T_FILTER=m
+@@ -147,7 +146,6 @@ CONFIG_ATM_MPOA=m
+ CONFIG_ATM_BR2684=m
+ CONFIG_BRIDGE=m
+ CONFIG_VLAN_8021Q=m
+-CONFIG_DECNET=m
+ CONFIG_LLC2=m
+ CONFIG_ATALK=m
+ CONFIG_DEV_APPLETALK=m
+diff --git a/arch/mips/configs/nlm_xlp_defconfig b/arch/mips/configs/nlm_xlp_defconfig
+index 72a211d2d556f..45421a05bb0e6 100644
+--- a/arch/mips/configs/nlm_xlp_defconfig
++++ b/arch/mips/configs/nlm_xlp_defconfig
+@@ -200,7 +200,6 @@ CONFIG_IP6_NF_TARGET_REJECT=m
+ CONFIG_IP6_NF_MANGLE=m
+ CONFIG_IP6_NF_RAW=m
+ CONFIG_IP6_NF_SECURITY=m
+-CONFIG_DECNET_NF_GRABULATOR=m
+ CONFIG_BRIDGE_NF_EBTABLES=m
+ CONFIG_BRIDGE_EBT_BROUTE=m
+ CONFIG_BRIDGE_EBT_T_FILTER=m
+@@ -234,7 +233,6 @@ CONFIG_ATM_BR2684=m
+ CONFIG_BRIDGE=m
+ CONFIG_VLAN_8021Q=m
+ CONFIG_VLAN_8021Q_GVRP=y
+-CONFIG_DECNET=m
+ CONFIG_LLC2=m
+ CONFIG_ATALK=m
+ CONFIG_DEV_APPLETALK=m
+diff --git a/arch/mips/configs/nlm_xlr_defconfig b/arch/mips/configs/nlm_xlr_defconfig
+index 4ecb157e56d42..1081972c81da8 100644
+--- a/arch/mips/configs/nlm_xlr_defconfig
++++ b/arch/mips/configs/nlm_xlr_defconfig
+@@ -198,7 +198,6 @@ CONFIG_IP6_NF_TARGET_REJECT=m
+ CONFIG_IP6_NF_MANGLE=m
+ CONFIG_IP6_NF_RAW=m
+ CONFIG_IP6_NF_SECURITY=m
+-CONFIG_DECNET_NF_GRABULATOR=m
+ CONFIG_BRIDGE_NF_EBTABLES=m
+ CONFIG_BRIDGE_EBT_BROUTE=m
+ CONFIG_BRIDGE_EBT_T_FILTER=m
+@@ -232,7 +231,6 @@ CONFIG_ATM_BR2684=m
+ CONFIG_BRIDGE=m
+ CONFIG_VLAN_8021Q=m
+ CONFIG_VLAN_8021Q_GVRP=y
+-CONFIG_DECNET=m
+ CONFIG_LLC2=m
+ CONFIG_ATALK=m
+ CONFIG_DEV_APPLETALK=m
+diff --git a/arch/mips/configs/rm200_defconfig b/arch/mips/configs/rm200_defconfig
+index 30d7c3db884e4..2d0506159dfa8 100644
+--- a/arch/mips/configs/rm200_defconfig
++++ b/arch/mips/configs/rm200_defconfig
+@@ -116,7 +116,6 @@ CONFIG_IP6_NF_FILTER=m
+ CONFIG_IP6_NF_TARGET_REJECT=m
+ CONFIG_IP6_NF_MANGLE=m
+ CONFIG_IP6_NF_RAW=m
+-CONFIG_DECNET_NF_GRABULATOR=m
+ CONFIG_BRIDGE_NF_EBTABLES=m
+ CONFIG_BRIDGE_EBT_BROUTE=m
+ CONFIG_BRIDGE_EBT_T_FILTER=m
+@@ -137,7 +136,6 @@ CONFIG_BRIDGE_EBT_REDIRECT=m
+ CONFIG_BRIDGE_EBT_SNAT=m
+ CONFIG_BRIDGE_EBT_LOG=m
+ CONFIG_BRIDGE=m
+-CONFIG_DECNET=m
+ CONFIG_NET_SCHED=y
+ CONFIG_NET_SCH_CBQ=m
+ CONFIG_NET_SCH_HTB=m
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index 9d11f68a9e8bb..b49ad569e4690 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -154,10 +154,6 @@ static unsigned long __init init_initrd(void)
+ 		pr_err("initrd start must be page aligned\n");
+ 		goto disable;
+ 	}
+-	if (initrd_start < PAGE_OFFSET) {
+-		pr_err("initrd start < PAGE_OFFSET\n");
+-		goto disable;
+-	}
+ 
+ 	/*
+ 	 * Sanitize initrd addresses. For example firmware
+@@ -170,6 +166,11 @@ static unsigned long __init init_initrd(void)
+ 	initrd_end = (unsigned long)__va(end);
+ 	initrd_start = (unsigned long)__va(__pa(initrd_start));
+ 
++	if (initrd_start < PAGE_OFFSET) {
++		pr_err("initrd start < PAGE_OFFSET\n");
++		goto disable;
++	}
++
+ 	ROOT_DEV = Root_RAM0;
+ 	return PFN_UP(end);
+ disable:
+diff --git a/arch/nios2/boot/dts/10m50_devboard.dts b/arch/nios2/boot/dts/10m50_devboard.dts
+index 56339bef3247d..0e7e5b0dd685c 100644
+--- a/arch/nios2/boot/dts/10m50_devboard.dts
++++ b/arch/nios2/boot/dts/10m50_devboard.dts
+@@ -97,7 +97,7 @@
+ 			rx-fifo-depth = <8192>;
+ 			tx-fifo-depth = <8192>;
+ 			address-bits = <48>;
+-			max-frame-size = <1518>;
++			max-frame-size = <1500>;
+ 			local-mac-address = [00 00 00 00 00 00];
+ 			altr,has-supplementary-unicast;
+ 			altr,enable-sup-addr = <1>;
+diff --git a/arch/nios2/boot/dts/3c120_devboard.dts b/arch/nios2/boot/dts/3c120_devboard.dts
+index d10fb81686c7e..3ee3169063797 100644
+--- a/arch/nios2/boot/dts/3c120_devboard.dts
++++ b/arch/nios2/boot/dts/3c120_devboard.dts
+@@ -106,7 +106,7 @@
+ 				interrupt-names = "rx_irq", "tx_irq";
+ 				rx-fifo-depth = <8192>;
+ 				tx-fifo-depth = <8192>;
+-				max-frame-size = <1518>;
++				max-frame-size = <1500>;
+ 				local-mac-address = [ 00 00 00 00 00 00 ];
+ 				phy-mode = "rgmii-id";
+ 				phy-handle = <&phy0>;
+diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c
+index 36610a5c029fc..fc90ccfee1572 100644
+--- a/arch/parisc/kernel/pci-dma.c
++++ b/arch/parisc/kernel/pci-dma.c
+@@ -446,11 +446,27 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
+ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
+ 		enum dma_data_direction dir)
+ {
++	/*
++	 * fdc: The data cache line is written back to memory, if and only if
++	 * it is dirty, and then invalidated from the data cache.
++	 */
+ 	flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size);
+ }
+ 
+ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
+ 		enum dma_data_direction dir)
+ {
+-	flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size);
++	unsigned long addr = (unsigned long) phys_to_virt(paddr);
++
++	switch (dir) {
++	case DMA_TO_DEVICE:
++	case DMA_BIDIRECTIONAL:
++		flush_kernel_dcache_range(addr, size);
++		return;
++	case DMA_FROM_DEVICE:
++		purge_kernel_dcache_range_asm(addr, addr + size);
++		return;
++	default:
++		BUG();
++	}
+ }
+diff --git a/arch/powerpc/configs/ppc6xx_defconfig b/arch/powerpc/configs/ppc6xx_defconfig
+index 66e9a0fd64ff2..021da6736570e 100644
+--- a/arch/powerpc/configs/ppc6xx_defconfig
++++ b/arch/powerpc/configs/ppc6xx_defconfig
+@@ -245,8 +245,6 @@ CONFIG_ATM_LANE=m
+ CONFIG_ATM_BR2684=m
+ CONFIG_BRIDGE=m
+ CONFIG_VLAN_8021Q=m
+-CONFIG_DECNET=m
+-CONFIG_DECNET_ROUTER=y
+ CONFIG_ATALK=m
+ CONFIG_DEV_APPLETALK=m
+ CONFIG_IPDDP=m
+diff --git a/arch/powerpc/purgatory/Makefile b/arch/powerpc/purgatory/Makefile
+index 348f595810523..d08239ae2bcd2 100644
+--- a/arch/powerpc/purgatory/Makefile
++++ b/arch/powerpc/purgatory/Makefile
+@@ -4,6 +4,11 @@ KASAN_SANITIZE := n
+ 
+ targets += trampoline_$(BITS).o purgatory.ro kexec-purgatory.c
+ 
++# When profile-guided optimization is enabled, llvm emits two different
++# overlapping text sections, which is not supported by kexec. Remove profile
++# optimization flags.
++KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS))
++
+ LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined
+ 
+ $(obj)/purgatory.ro: $(obj)/trampoline_$(BITS).o FORCE
+diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
+index 00c6dce14bd2b..55930ca498e93 100644
+--- a/arch/um/kernel/um_arch.c
++++ b/arch/um/kernel/um_arch.c
+@@ -387,12 +387,12 @@ void text_poke_sync(void)
+ {
+ }
+ 
+-#ifdef CONFIG_PM_SLEEP
+ void uml_pm_wake(void)
+ {
+ 	pm_system_wakeup();
+ }
+ 
++#ifdef CONFIG_PM_SLEEP
+ static int init_pm_wake_signal(void)
+ {
+ 	/*
+diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
+index ebaf329a23688..dc0b91c1db04b 100644
+--- a/arch/x86/purgatory/Makefile
++++ b/arch/x86/purgatory/Makefile
+@@ -14,6 +14,11 @@ $(obj)/sha256.o: $(srctree)/lib/crypto/sha256.c FORCE
+ 
+ CFLAGS_sha256.o := -D__DISABLE_EXPORTS
+ 
++# When profile-guided optimization is enabled, llvm emits two different
++# overlapping text sections, which is not supported by kexec. Remove profile
++# optimization flags.
++KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS))
++
+ # When linking purgatory.ro with -r unresolved symbols are not checked,
+ # also link a purgatory.chk binary without -r to check for unresolved symbols.
+ PURGATORY_LDFLAGS := -e purgatory_start -nostdlib -z nodefaultlib
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 6f33d62331b1f..d68a8ca2161fb 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -792,7 +792,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
+ 		ring_req->u.rw.handle = info->handle;
+ 		ring_req->operation = rq_data_dir(req) ?
+ 			BLKIF_OP_WRITE : BLKIF_OP_READ;
+-		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
++		if (req_op(req) == REQ_OP_FLUSH ||
++		    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
+ 			/*
+ 			 * Ideally we can do an unordered flush-to-disk.
+ 			 * In case the backend onlysupports barriers, use that.
+diff --git a/drivers/char/agp/parisc-agp.c b/drivers/char/agp/parisc-agp.c
+index d68d05d5d3838..514f9f287a781 100644
+--- a/drivers/char/agp/parisc-agp.c
++++ b/drivers/char/agp/parisc-agp.c
+@@ -90,6 +90,9 @@ parisc_agp_tlbflush(struct agp_memory *mem)
+ {
+ 	struct _parisc_agp_info *info = &parisc_agp_info;
+ 
++	/* force fdc ops to be visible to IOMMU */
++	asm_io_sync();
++
+ 	writeq(info->gart_base | ilog2(info->gart_size), info->ioc_regs+IOC_PCOM);
+ 	readq(info->ioc_regs+IOC_PCOM);	/* flush */
+ }
+@@ -158,6 +161,7 @@ parisc_agp_insert_memory(struct agp_memory *mem, off_t pg_start, int type)
+ 			info->gatt[j] =
+ 				parisc_agp_mask_memory(agp_bridge,
+ 					paddr, type);
++			asm_io_fdc(&info->gatt[j]);
+ 		}
+ 	}
+ 
+@@ -191,7 +195,16 @@ static unsigned long
+ parisc_agp_mask_memory(struct agp_bridge_data *bridge, dma_addr_t addr,
+ 		       int type)
+ {
+-	return SBA_PDIR_VALID_BIT | addr;
++	unsigned ci;			/* coherent index */
++	dma_addr_t pa;
++
++	pa = addr & IOVP_MASK;
++	asm("lci 0(%1), %0" : "=r" (ci) : "r" (phys_to_virt(pa)));
++
++	pa |= (ci >> PAGE_SHIFT) & 0xff;/* move CI (8 bits) into lowest byte */
++	pa |= SBA_PDIR_VALID_BIT;	/* set "valid" bit */
++
++	return cpu_to_le64(pa);
+ }
+ 
+ static void
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 0e3ff5c3766ed..72410a2d4e6bf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -6702,8 +6702,10 @@ static int gfx_v10_0_kiq_resume(struct amdgpu_device *adev)
+ 		return r;
+ 
+ 	r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
+-	if (unlikely(r != 0))
++	if (unlikely(r != 0)) {
++		amdgpu_bo_unreserve(ring->mqd_obj);
+ 		return r;
++	}
+ 
+ 	gfx_v10_0_kiq_init_queue(ring);
+ 	amdgpu_bo_kunmap(ring->mqd_obj);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 629671f66b319..acef2227d992b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3800,8 +3800,10 @@ static int gfx_v9_0_kiq_resume(struct amdgpu_device *adev)
+ 		return r;
+ 
+ 	r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
+-	if (unlikely(r != 0))
++	if (unlikely(r != 0)) {
++		amdgpu_bo_unreserve(ring->mqd_obj);
+ 		return r;
++	}
+ 
+ 	gfx_v9_0_kiq_init_queue(ring);
+ 	amdgpu_bo_kunmap(ring->mqd_obj);
+diff --git a/drivers/gpu/drm/i915/display/intel_bw.c b/drivers/gpu/drm/i915/display/intel_bw.c
+index bd060404d2495..4b5a30ac84bc3 100644
+--- a/drivers/gpu/drm/i915/display/intel_bw.c
++++ b/drivers/gpu/drm/i915/display/intel_bw.c
+@@ -20,76 +20,9 @@ struct intel_qgv_point {
+ struct intel_qgv_info {
+ 	struct intel_qgv_point points[I915_NUM_QGV_POINTS];
+ 	u8 num_points;
+-	u8 num_channels;
+ 	u8 t_bl;
+-	enum intel_dram_type dram_type;
+ };
+ 
+-static int icl_pcode_read_mem_global_info(struct drm_i915_private *dev_priv,
+-					  struct intel_qgv_info *qi)
+-{
+-	u32 val = 0;
+-	int ret;
+-
+-	ret = sandybridge_pcode_read(dev_priv,
+-				     ICL_PCODE_MEM_SUBSYSYSTEM_INFO |
+-				     ICL_PCODE_MEM_SS_READ_GLOBAL_INFO,
+-				     &val, NULL);
+-	if (ret)
+-		return ret;
+-
+-	if (IS_GEN(dev_priv, 12)) {
+-		switch (val & 0xf) {
+-		case 0:
+-			qi->dram_type = INTEL_DRAM_DDR4;
+-			break;
+-		case 3:
+-			qi->dram_type = INTEL_DRAM_LPDDR4;
+-			break;
+-		case 4:
+-			qi->dram_type = INTEL_DRAM_DDR3;
+-			break;
+-		case 5:
+-			qi->dram_type = INTEL_DRAM_LPDDR3;
+-			break;
+-		default:
+-			MISSING_CASE(val & 0xf);
+-			break;
+-		}
+-	} else if (IS_GEN(dev_priv, 11)) {
+-		switch (val & 0xf) {
+-		case 0:
+-			qi->dram_type = INTEL_DRAM_DDR4;
+-			break;
+-		case 1:
+-			qi->dram_type = INTEL_DRAM_DDR3;
+-			break;
+-		case 2:
+-			qi->dram_type = INTEL_DRAM_LPDDR3;
+-			break;
+-		case 3:
+-			qi->dram_type = INTEL_DRAM_LPDDR4;
+-			break;
+-		default:
+-			MISSING_CASE(val & 0xf);
+-			break;
+-		}
+-	} else {
+-		MISSING_CASE(INTEL_GEN(dev_priv));
+-		qi->dram_type = INTEL_DRAM_LPDDR3; /* Conservative default */
+-	}
+-
+-	qi->num_channels = (val & 0xf0) >> 4;
+-	qi->num_points = (val & 0xf00) >> 8;
+-
+-	if (IS_GEN(dev_priv, 12))
+-		qi->t_bl = qi->dram_type == INTEL_DRAM_DDR4 ? 4 : 16;
+-	else if (IS_GEN(dev_priv, 11))
+-		qi->t_bl = qi->dram_type == INTEL_DRAM_DDR4 ? 4 : 8;
+-
+-	return 0;
+-}
+-
+ static int icl_pcode_read_qgv_point_info(struct drm_i915_private *dev_priv,
+ 					 struct intel_qgv_point *sp,
+ 					 int point)
+@@ -139,11 +72,15 @@ int icl_pcode_restrict_qgv_points(struct drm_i915_private *dev_priv,
+ static int icl_get_qgv_points(struct drm_i915_private *dev_priv,
+ 			      struct intel_qgv_info *qi)
+ {
++	const struct dram_info *dram_info = &dev_priv->dram_info;
+ 	int i, ret;
+ 
+-	ret = icl_pcode_read_mem_global_info(dev_priv, qi);
+-	if (ret)
+-		return ret;
++	qi->num_points = dram_info->num_qgv_points;
++
++	if (IS_GEN(dev_priv, 12))
++		qi->t_bl = dev_priv->dram_info.type == INTEL_DRAM_DDR4 ? 4 : 16;
++	else if (IS_GEN(dev_priv, 11))
++		qi->t_bl = dev_priv->dram_info.type == INTEL_DRAM_DDR4 ? 4 : 8;
+ 
+ 	if (drm_WARN_ON(&dev_priv->drm,
+ 			qi->num_points > ARRAY_SIZE(qi->points)))
+@@ -209,7 +146,7 @@ static int icl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel
+ {
+ 	struct intel_qgv_info qi = {};
+ 	bool is_y_tile = true; /* assume y tile may be used */
+-	int num_channels;
++	int num_channels = dev_priv->dram_info.num_channels;
+ 	int deinterleave;
+ 	int ipqdepth, ipqdepthpch;
+ 	int dclk_max;
+@@ -222,7 +159,6 @@ static int icl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel
+ 			    "Failed to get memory subsystem information, ignoring bandwidth limits");
+ 		return ret;
+ 	}
+-	num_channels = qi.num_channels;
+ 
+ 	deinterleave = DIV_ROUND_UP(num_channels, is_y_tile ? 4 : 2);
+ 	dclk_max = icl_sagv_max_dclk(&qi);
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index 382cf048eefe0..c72a26af181c0 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -84,6 +84,7 @@
+ #include "intel_gvt.h"
+ #include "intel_memory_region.h"
+ #include "intel_pm.h"
++#include "intel_sideband.h"
+ #include "vlv_suspend.h"
+ 
+ static struct drm_driver driver;
+@@ -608,6 +609,9 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
+ 		goto err_msi;
+ 
+ 	intel_opregion_setup(dev_priv);
++
++	intel_pcode_init(dev_priv);
++
+ 	/*
+ 	 * Fill the dram structure to get the system raw bandwidth and
+ 	 * dram info. This will be used for memory latency calculation.
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 6909901b35513..a8b65bab82c82 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -1148,6 +1148,7 @@ struct drm_i915_private {
+ 			INTEL_DRAM_LPDDR3,
+ 			INTEL_DRAM_LPDDR4
+ 		} type;
++		u8 num_qgv_points;
+ 	} dram_info;
+ 
+ 	struct intel_bw_info {
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 04157d8ced320..728a46489f9ca 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -9235,6 +9235,9 @@ enum {
+ #define     GEN9_SAGV_DISABLE			0x0
+ #define     GEN9_SAGV_IS_DISABLED		0x1
+ #define     GEN9_SAGV_ENABLE			0x3
++#define   DG1_PCODE_STATUS			0x7E
++#define     DG1_UNCORE_GET_INIT_STATUS		0x0
++#define     DG1_UNCORE_INIT_STATUS_COMPLETE	0x1
+ #define GEN12_PCODE_READ_SAGV_BLOCK_TIME_US	0x23
+ #define GEN6_PCODE_DATA				_MMIO(0x138128)
+ #define   GEN6_PCODE_FREQ_IA_RATIO_SHIFT	8
+diff --git a/drivers/gpu/drm/i915/intel_dram.c b/drivers/gpu/drm/i915/intel_dram.c
+index 8aa12cad93ce3..26e109c649843 100644
+--- a/drivers/gpu/drm/i915/intel_dram.c
++++ b/drivers/gpu/drm/i915/intel_dram.c
+@@ -5,6 +5,7 @@
+ 
+ #include "i915_drv.h"
+ #include "intel_dram.h"
++#include "intel_sideband.h"
+ 
+ struct dram_dimm_info {
+ 	u8 size, width, ranks;
+@@ -433,6 +434,81 @@ static int bxt_get_dram_info(struct drm_i915_private *i915)
+ 	return 0;
+ }
+ 
++static int icl_pcode_read_mem_global_info(struct drm_i915_private *dev_priv)
++{
++	struct dram_info *dram_info = &dev_priv->dram_info;
++	u32 val = 0;
++	int ret;
++
++	ret = sandybridge_pcode_read(dev_priv,
++				     ICL_PCODE_MEM_SUBSYSYSTEM_INFO |
++				     ICL_PCODE_MEM_SS_READ_GLOBAL_INFO,
++				     &val, NULL);
++	if (ret)
++		return ret;
++
++	if (IS_GEN(dev_priv, 12)) {
++		switch (val & 0xf) {
++		case 0:
++			dram_info->type = INTEL_DRAM_DDR4;
++			break;
++		case 3:
++			dram_info->type = INTEL_DRAM_LPDDR4;
++			break;
++		case 4:
++			dram_info->type = INTEL_DRAM_DDR3;
++			break;
++		case 5:
++			dram_info->type = INTEL_DRAM_LPDDR3;
++			break;
++		default:
++			MISSING_CASE(val & 0xf);
++			return -1;
++		}
++	} else {
++		switch (val & 0xf) {
++		case 0:
++			dram_info->type = INTEL_DRAM_DDR4;
++			break;
++		case 1:
++			dram_info->type = INTEL_DRAM_DDR3;
++			break;
++		case 2:
++			dram_info->type = INTEL_DRAM_LPDDR3;
++			break;
++		case 3:
++			dram_info->type = INTEL_DRAM_LPDDR4;
++			break;
++		default:
++			MISSING_CASE(val & 0xf);
++			return -1;
++		}
++	}
++
++	dram_info->num_channels = (val & 0xf0) >> 4;
++	dram_info->num_qgv_points = (val & 0xf00) >> 8;
++
++	return 0;
++}
++
++static int gen11_get_dram_info(struct drm_i915_private *i915)
++{
++	int ret = skl_get_dram_info(i915);
++
++	if (ret)
++		return ret;
++
++	return icl_pcode_read_mem_global_info(i915);
++}
++
++static int gen12_get_dram_info(struct drm_i915_private *i915)
++{
++	/* Always needed for GEN12+ */
++	i915->dram_info.is_16gb_dimm = true;
++
++	return icl_pcode_read_mem_global_info(i915);
++}
++
+ void intel_dram_detect(struct drm_i915_private *i915)
+ {
+ 	struct dram_info *dram_info = &i915->dram_info;
+@@ -448,7 +524,11 @@ void intel_dram_detect(struct drm_i915_private *i915)
+ 	if (INTEL_GEN(i915) < 9 || !HAS_DISPLAY(i915))
+ 		return;
+ 
+-	if (IS_GEN9_LP(i915))
++	if (INTEL_GEN(i915) >= 12)
++		ret = gen12_get_dram_info(i915);
++	else if (INTEL_GEN(i915) >= 11)
++		ret = gen11_get_dram_info(i915);
++	else if (IS_GEN9_LP(i915))
+ 		ret = bxt_get_dram_info(i915);
+ 	else
+ 		ret = skl_get_dram_info(i915);
+diff --git a/drivers/gpu/drm/i915/intel_sideband.c b/drivers/gpu/drm/i915/intel_sideband.c
+index 5b32792621232..02ebf5a04a9bc 100644
+--- a/drivers/gpu/drm/i915/intel_sideband.c
++++ b/drivers/gpu/drm/i915/intel_sideband.c
+@@ -555,3 +555,18 @@ out:
+ 	return ret ? ret : status;
+ #undef COND
+ }
++
++void intel_pcode_init(struct drm_i915_private *i915)
++{
++	int ret;
++
++	if (!IS_DGFX(i915))
++		return;
++
++	ret = skl_pcode_request(i915, DG1_PCODE_STATUS,
++				DG1_UNCORE_GET_INIT_STATUS,
++				DG1_UNCORE_INIT_STATUS_COMPLETE,
++				DG1_UNCORE_INIT_STATUS_COMPLETE, 50);
++	if (ret)
++		drm_err(&i915->drm, "Pcode did not report uncore initialization completion!\n");
++}
+diff --git a/drivers/gpu/drm/i915/intel_sideband.h b/drivers/gpu/drm/i915/intel_sideband.h
+index 7fb95745a4449..094c7b19c5d42 100644
+--- a/drivers/gpu/drm/i915/intel_sideband.h
++++ b/drivers/gpu/drm/i915/intel_sideband.h
+@@ -138,4 +138,6 @@ int sandybridge_pcode_write_timeout(struct drm_i915_private *i915, u32 mbox,
+ int skl_pcode_request(struct drm_i915_private *i915, u32 mbox, u32 request,
+ 		      u32 reply_mask, u32 reply, int timeout_base_ms);
+ 
++void intel_pcode_init(struct drm_i915_private *i915);
++
+ #endif /* _INTEL_SIDEBAND_H */
+diff --git a/drivers/gpu/drm/nouveau/nouveau_acpi.c b/drivers/gpu/drm/nouveau/nouveau_acpi.c
+index 69a84d0197d0a..7b946d44ab2c1 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_acpi.c
++++ b/drivers/gpu/drm/nouveau/nouveau_acpi.c
+@@ -220,6 +220,9 @@ static void nouveau_dsm_pci_probe(struct pci_dev *pdev, acpi_handle *dhandle_out
+ 	int optimus_funcs;
+ 	struct pci_dev *parent_pdev;
+ 
++	if (pdev->vendor != PCI_VENDOR_ID_NVIDIA)
++		return;
++
+ 	*has_pr3 = false;
+ 	parent_pdev = pci_upstream_bridge(pdev);
+ 	if (parent_pdev) {
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index 9542fc63e7968..b8884272d65e0 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -726,7 +726,8 @@ out:
+ #endif
+ 
+ 	nouveau_connector_set_edid(nv_connector, edid);
+-	nouveau_connector_set_encoder(connector, nv_encoder);
++	if (nv_encoder)
++		nouveau_connector_set_encoder(connector, nv_encoder);
+ 	return status;
+ }
+ 
+@@ -946,7 +947,7 @@ nouveau_connector_get_modes(struct drm_connector *connector)
+ 	/* Determine display colour depth for everything except LVDS now,
+ 	 * DP requires this before mode_valid() is called.
+ 	 */
+-	if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS)
++	if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS && nv_connector->native_mode)
+ 		nouveau_connector_detect_depth(connector);
+ 
+ 	/* Find the native mode if this is a digital panel, if we didn't
+@@ -967,7 +968,7 @@ nouveau_connector_get_modes(struct drm_connector *connector)
+ 	 * "native" mode as some VBIOS tables require us to use the
+ 	 * pixel clock as part of the lookup...
+ 	 */
+-	if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS)
++	if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS && nv_connector->native_mode)
+ 		nouveau_connector_detect_depth(connector);
+ 
+ 	if (nv_encoder->dcb->type == DCB_OUTPUT_TV)
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index ac96b6ab44c07..8e15ff95b809b 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -124,10 +124,16 @@ nouveau_name(struct drm_device *dev)
+ static inline bool
+ nouveau_cli_work_ready(struct dma_fence *fence)
+ {
+-	if (!dma_fence_is_signaled(fence))
+-		return false;
+-	dma_fence_put(fence);
+-	return true;
++	bool ret = true;
++
++	spin_lock_irq(fence->lock);
++	if (!dma_fence_is_signaled_locked(fence))
++		ret = false;
++	spin_unlock_irq(fence->lock);
++
++	if (ret == true)
++		dma_fence_put(fence);
++	return ret;
+ }
+ 
+ static void
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index fdcad8d6a5a07..db24f7dfa00f7 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -3069,7 +3069,7 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
+ 	route->path_rec->traffic_class = tos;
+ 	route->path_rec->mtu = iboe_get_mtu(ndev->mtu);
+ 	route->path_rec->rate_selector = IB_SA_EQ;
+-	route->path_rec->rate = iboe_get_rate(ndev);
++	route->path_rec->rate = IB_RATE_PORT_CURRENT;
+ 	dev_put(ndev);
+ 	route->path_rec->packet_life_time_selector = IB_SA_EQ;
+ 	/* In case ACK timeout is set, use this value to calculate
+@@ -4719,7 +4719,7 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
+ 	if (!ndev)
+ 		return -ENODEV;
+ 
+-	ib.rec.rate = iboe_get_rate(ndev);
++	ib.rec.rate = IB_RATE_PORT_CURRENT;
+ 	ib.rec.hop_limit = 1;
+ 	ib.rec.mtu = iboe_get_mtu(ndev->mtu);
+ 
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index d7c90da9ce7f1..09cf470c08d65 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -1842,8 +1842,13 @@ static int modify_qp(struct uverbs_attr_bundle *attrs,
+ 		attr->path_mtu = cmd->base.path_mtu;
+ 	if (cmd->base.attr_mask & IB_QP_PATH_MIG_STATE)
+ 		attr->path_mig_state = cmd->base.path_mig_state;
+-	if (cmd->base.attr_mask & IB_QP_QKEY)
++	if (cmd->base.attr_mask & IB_QP_QKEY) {
++		if (cmd->base.qkey & IB_QP_SET_QKEY && !capable(CAP_NET_RAW)) {
++			ret = -EPERM;
++			goto release_qp;
++		}
+ 		attr->qkey = cmd->base.qkey;
++	}
+ 	if (cmd->base.attr_mask & IB_QP_RQ_PSN)
+ 		attr->rq_psn = cmd->base.rq_psn;
+ 	if (cmd->base.attr_mask & IB_QP_SQ_PSN)
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 4bb7c642f80c4..099f5acc749e5 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -222,8 +222,12 @@ static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_queue *ev_queue,
+ 	spin_lock_irq(&ev_queue->lock);
+ 
+ 	while (list_empty(&ev_queue->event_list)) {
+-		spin_unlock_irq(&ev_queue->lock);
++		if (ev_queue->is_closed) {
++			spin_unlock_irq(&ev_queue->lock);
++			return -EIO;
++		}
+ 
++		spin_unlock_irq(&ev_queue->lock);
+ 		if (filp->f_flags & O_NONBLOCK)
+ 			return -EAGAIN;
+ 
+@@ -233,12 +237,6 @@ static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_queue *ev_queue,
+ 			return -ERESTARTSYS;
+ 
+ 		spin_lock_irq(&ev_queue->lock);
+-
+-		/* If device was disassociated and no event exists set an error */
+-		if (list_empty(&ev_queue->event_list) && ev_queue->is_closed) {
+-			spin_unlock_irq(&ev_queue->lock);
+-			return -EIO;
+-		}
+ 	}
+ 
+ 	event = list_entry(ev_queue->event_list.next, struct ib_uverbs_event, list);
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 5ef37902e96b5..39ba7005f2c4c 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -4746,6 +4746,9 @@ const struct mlx5_ib_profile raw_eth_profile = {
+ 	STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR,
+ 		     mlx5_ib_stage_post_ib_reg_umr_init,
+ 		     NULL),
++	STAGE_CREATE(MLX5_IB_STAGE_DELAY_DROP,
++		     mlx5_ib_stage_delay_drop_init,
++		     mlx5_ib_stage_delay_drop_cleanup),
+ 	STAGE_CREATE(MLX5_IB_STAGE_RESTRACK,
+ 		     mlx5_ib_restrack_init,
+ 		     NULL),
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 99c1b3553e6e0..4c938d841f768 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -192,6 +192,9 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 	spin_lock_init(&qp->rq.producer_lock);
+ 	spin_lock_init(&qp->rq.consumer_lock);
+ 
++	skb_queue_head_init(&qp->req_pkts);
++	skb_queue_head_init(&qp->resp_pkts);
++
+ 	atomic_set(&qp->ssn, 0);
+ 	atomic_set(&qp->skb_out, 0);
+ }
+@@ -247,12 +250,8 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 	qp->req.opcode		= -1;
+ 	qp->comp.opcode		= -1;
+ 
+-	skb_queue_head_init(&qp->req_pkts);
+-
+-	rxe_init_task(rxe, &qp->req.task, qp,
+-		      rxe_requester, "req");
+-	rxe_init_task(rxe, &qp->comp.task, qp,
+-		      rxe_completer, "comp");
++	rxe_init_task(&qp->req.task, qp, rxe_requester);
++	rxe_init_task(&qp->comp.task, qp, rxe_completer);
+ 
+ 	qp->qp_timeout_jiffies = 0; /* Can't be set for UD/UC in modify_qp */
+ 	if (init->qp_type == IB_QPT_RC) {
+@@ -296,10 +295,7 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 		}
+ 	}
+ 
+-	skb_queue_head_init(&qp->resp_pkts);
+-
+-	rxe_init_task(rxe, &qp->resp.task, qp,
+-		      rxe_responder, "resp");
++	rxe_init_task(&qp->resp.task, qp, rxe_responder);
+ 
+ 	qp->resp.opcode		= OPCODE_NONE;
+ 	qp->resp.msn		= 0;
+diff --git a/drivers/infiniband/sw/rxe/rxe_task.c b/drivers/infiniband/sw/rxe/rxe_task.c
+index 568cf56c236bc..5aa69947a9791 100644
+--- a/drivers/infiniband/sw/rxe/rxe_task.c
++++ b/drivers/infiniband/sw/rxe/rxe_task.c
+@@ -95,13 +95,10 @@ void rxe_do_task(struct tasklet_struct *t)
+ 	task->ret = ret;
+ }
+ 
+-int rxe_init_task(void *obj, struct rxe_task *task,
+-		  void *arg, int (*func)(void *), char *name)
++int rxe_init_task(struct rxe_task *task, void *arg, int (*func)(void *))
+ {
+-	task->obj	= obj;
+ 	task->arg	= arg;
+ 	task->func	= func;
+-	snprintf(task->name, sizeof(task->name), "%s", name);
+ 	task->destroyed	= false;
+ 
+ 	tasklet_setup(&task->tasklet, rxe_do_task);
+diff --git a/drivers/infiniband/sw/rxe/rxe_task.h b/drivers/infiniband/sw/rxe/rxe_task.h
+index 11d183fd33386..b3dfd970d1dc6 100644
+--- a/drivers/infiniband/sw/rxe/rxe_task.h
++++ b/drivers/infiniband/sw/rxe/rxe_task.h
+@@ -19,14 +19,12 @@ enum {
+  * called again.
+  */
+ struct rxe_task {
+-	void			*obj;
+ 	struct tasklet_struct	tasklet;
+ 	int			state;
+ 	spinlock_t		state_lock; /* spinlock for task state */
+ 	void			*arg;
+ 	int			(*func)(void *arg);
+ 	int			ret;
+-	char			name[16];
+ 	bool			destroyed;
+ };
+ 
+@@ -35,8 +33,7 @@ struct rxe_task {
+  *	arg  => parameter to pass to fcn
+  *	func => function to call until it returns != 0
+  */
+-int rxe_init_task(void *obj, struct rxe_task *task,
+-		  void *arg, int (*func)(void *), char *name);
++int rxe_init_task(struct rxe_task *task, void *arg, int (*func)(void *));
+ 
+ /* cleanup task */
+ void rxe_cleanup_task(struct rxe_task *task);
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 2d0d966fba2c8..7cd90604502ec 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -656,9 +656,13 @@ static int
+ isert_connect_error(struct rdma_cm_id *cma_id)
+ {
+ 	struct isert_conn *isert_conn = cma_id->qp->qp_context;
++	struct isert_np *isert_np = cma_id->context;
+ 
+ 	ib_drain_qp(isert_conn->qp);
++
++	mutex_lock(&isert_np->mutex);
+ 	list_del_init(&isert_conn->node);
++	mutex_unlock(&isert_np->mutex);
+ 	isert_conn->cm_id = NULL;
+ 	isert_put_conn(isert_conn);
+ 
+@@ -2421,6 +2425,7 @@ isert_free_np(struct iscsi_np *np)
+ {
+ 	struct isert_np *isert_np = np->np_context;
+ 	struct isert_conn *isert_conn, *n;
++	LIST_HEAD(drop_conn_list);
+ 
+ 	if (isert_np->cm_id)
+ 		rdma_destroy_id(isert_np->cm_id);
+@@ -2440,7 +2445,7 @@ isert_free_np(struct iscsi_np *np)
+ 					 node) {
+ 			isert_info("cleaning isert_conn %p state (%d)\n",
+ 				   isert_conn, isert_conn->state);
+-			isert_connect_release(isert_conn);
++			list_move_tail(&isert_conn->node, &drop_conn_list);
+ 		}
+ 	}
+ 
+@@ -2451,11 +2456,16 @@ isert_free_np(struct iscsi_np *np)
+ 					 node) {
+ 			isert_info("cleaning isert_conn %p state (%d)\n",
+ 				   isert_conn, isert_conn->state);
+-			isert_connect_release(isert_conn);
++			list_move_tail(&isert_conn->node, &drop_conn_list);
+ 		}
+ 	}
+ 	mutex_unlock(&isert_np->mutex);
+ 
++	list_for_each_entry_safe(isert_conn, n, &drop_conn_list, node) {
++		list_del_init(&isert_conn->node);
++		isert_connect_release(isert_conn);
++	}
++
+ 	np->np_context = NULL;
+ 	kfree(isert_np);
+ }
+@@ -2550,8 +2560,6 @@ static void isert_wait_conn(struct iscsi_conn *conn)
+ 	isert_put_unsol_pending_cmds(conn);
+ 	isert_wait4cmds(conn);
+ 	isert_wait4logout(isert_conn);
+-
+-	queue_work(isert_release_wq, &isert_conn->release_work);
+ }
+ 
+ static void isert_free_conn(struct iscsi_conn *conn)
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
+index 4629bb758126a..76b993e8d672f 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs.c
+@@ -37,8 +37,10 @@ struct rtrs_iu *rtrs_iu_alloc(u32 queue_size, size_t size, gfp_t gfp_mask,
+ 			goto err;
+ 
+ 		iu->dma_addr = ib_dma_map_single(dma_dev, iu->buf, size, dir);
+-		if (ib_dma_mapping_error(dma_dev, iu->dma_addr))
++		if (ib_dma_mapping_error(dma_dev, iu->dma_addr)) {
++			kfree(iu->buf);
+ 			goto err;
++		}
+ 
+ 		iu->cqe.done  = done;
+ 		iu->size      = size;
+diff --git a/drivers/irqchip/irq-gic-common.c b/drivers/irqchip/irq-gic-common.c
+index f47b41dfd0238..b8a10f2e72da7 100644
+--- a/drivers/irqchip/irq-gic-common.c
++++ b/drivers/irqchip/irq-gic-common.c
+@@ -29,7 +29,13 @@ void gic_enable_of_quirks(const struct device_node *np,
+ 			  const struct gic_quirk *quirks, void *data)
+ {
+ 	for (; quirks->desc; quirks++) {
+-		if (!of_device_is_compatible(np, quirks->compatible))
++		if (!quirks->compatible && !quirks->property)
++			continue;
++		if (quirks->compatible &&
++		    !of_device_is_compatible(np, quirks->compatible))
++			continue;
++		if (quirks->property &&
++		    !of_property_read_bool(np, quirks->property))
+ 			continue;
+ 		if (quirks->init(data))
+ 			pr_info("GIC: enabling workaround for %s\n",
+@@ -41,7 +47,7 @@ void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks,
+ 		void *data)
+ {
+ 	for (; quirks->desc; quirks++) {
+-		if (quirks->compatible)
++		if (quirks->compatible || quirks->property)
+ 			continue;
+ 		if (quirks->iidr != (quirks->mask & iidr))
+ 			continue;
+diff --git a/drivers/irqchip/irq-gic-common.h b/drivers/irqchip/irq-gic-common.h
+index ccba8b0fe0f58..b42572d88f9f7 100644
+--- a/drivers/irqchip/irq-gic-common.h
++++ b/drivers/irqchip/irq-gic-common.h
+@@ -13,6 +13,7 @@
+ struct gic_quirk {
+ 	const char *desc;
+ 	const char *compatible;
++	const char *property;
+ 	bool (*init)(void *data);
+ 	u32 iidr;
+ 	u32 mask;
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 2805969e4f15a..c1f8c1be84856 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -35,6 +35,7 @@
+ 
+ #define FLAGS_WORKAROUND_GICR_WAKER_MSM8996	(1ULL << 0)
+ #define FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539	(1ULL << 1)
++#define FLAGS_WORKAROUND_MTK_GICR_SAVE		(1ULL << 2)
+ 
+ #define GIC_IRQ_TYPE_PARTITION	(GIC_IRQ_TYPE_LPI + 1)
+ 
+@@ -1587,6 +1588,15 @@ static bool gic_enable_quirk_msm8996(void *data)
+ 	return true;
+ }
+ 
++static bool gic_enable_quirk_mtk_gicr(void *data)
++{
++	struct gic_chip_data *d = data;
++
++	d->flags |= FLAGS_WORKAROUND_MTK_GICR_SAVE;
++
++	return true;
++}
++
+ static bool gic_enable_quirk_cavium_38539(void *data)
+ {
+ 	struct gic_chip_data *d = data;
+@@ -1622,6 +1632,11 @@ static const struct gic_quirk gic_quirks[] = {
+ 		.compatible = "qcom,msm8996-gic-v3",
+ 		.init	= gic_enable_quirk_msm8996,
+ 	},
++	{
++		.desc	= "GICv3: Mediatek Chromebook GICR save problem",
++		.property = "mediatek,broken-save-restore-fw",
++		.init	= gic_enable_quirk_mtk_gicr,
++	},
+ 	{
+ 		.desc	= "GICv3: HIP06 erratum 161010803",
+ 		.iidr	= 0x0204043b,
+@@ -1658,6 +1673,11 @@ static void gic_enable_nmi_support(void)
+ 	if (!gic_prio_masking_enabled())
+ 		return;
+ 
++	if (gic_data.flags & FLAGS_WORKAROUND_MTK_GICR_SAVE) {
++		pr_warn("Skipping NMI enable due to firmware issues\n");
++		return;
++	}
++
+ 	ppi_nmi_refs = kcalloc(gic_data.ppi_nr, sizeof(*ppi_nmi_refs), GFP_KERNEL);
+ 	if (!ppi_nmi_refs)
+ 		return;
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index fea2d82300b0d..2ff8a1b776fb4 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -37,6 +37,7 @@
+ #include <media/tuner.h>
+ 
+ static DEFINE_MUTEX(dvbdev_mutex);
++static LIST_HEAD(dvbdevfops_list);
+ static int dvbdev_debug;
+ 
+ module_param(dvbdev_debug, int, 0644);
+@@ -462,14 +463,15 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 			enum dvb_device_type type, int demux_sink_pads)
+ {
+ 	struct dvb_device *dvbdev;
+-	struct file_operations *dvbdevfops;
++	struct file_operations *dvbdevfops = NULL;
++	struct dvbdevfops_node *node = NULL, *new_node = NULL;
+ 	struct device *clsdev;
+ 	int minor;
+ 	int id, ret;
+ 
+ 	mutex_lock(&dvbdev_register_lock);
+ 
+-	if ((id = dvbdev_get_free_id (adap, type)) < 0){
++	if ((id = dvbdev_get_free_id (adap, type)) < 0) {
+ 		mutex_unlock(&dvbdev_register_lock);
+ 		*pdvbdev = NULL;
+ 		pr_err("%s: couldn't find free device id\n", __func__);
+@@ -477,18 +479,45 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 	}
+ 
+ 	*pdvbdev = dvbdev = kzalloc(sizeof(*dvbdev), GFP_KERNEL);
+-
+ 	if (!dvbdev){
+ 		mutex_unlock(&dvbdev_register_lock);
+ 		return -ENOMEM;
+ 	}
+ 
+-	dvbdevfops = kmemdup(template->fops, sizeof(*dvbdevfops), GFP_KERNEL);
++	/*
++	 * When a device of the same type is probe()d more than once,
++	 * the first allocated fops are used. This prevents memory leaks
++	 * that can occur when the same device is probe()d repeatedly.
++	 */
++	list_for_each_entry(node, &dvbdevfops_list, list_head) {
++		if (node->fops->owner == adap->module &&
++				node->type == type &&
++				node->template == template) {
++			dvbdevfops = node->fops;
++			break;
++		}
++	}
+ 
+-	if (!dvbdevfops){
+-		kfree (dvbdev);
+-		mutex_unlock(&dvbdev_register_lock);
+-		return -ENOMEM;
++	if (dvbdevfops == NULL) {
++		dvbdevfops = kmemdup(template->fops, sizeof(*dvbdevfops), GFP_KERNEL);
++		if (!dvbdevfops) {
++			kfree(dvbdev);
++			mutex_unlock(&dvbdev_register_lock);
++			return -ENOMEM;
++		}
++
++		new_node = kzalloc(sizeof(struct dvbdevfops_node), GFP_KERNEL);
++		if (!new_node) {
++			kfree(dvbdevfops);
++			kfree(dvbdev);
++			mutex_unlock(&dvbdev_register_lock);
++			return -ENOMEM;
++		}
++
++		new_node->fops = dvbdevfops;
++		new_node->type = type;
++		new_node->template = template;
++		list_add_tail (&new_node->list_head, &dvbdevfops_list);
+ 	}
+ 
+ 	memcpy(dvbdev, template, sizeof(struct dvb_device));
+@@ -499,19 +528,20 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 	dvbdev->priv = priv;
+ 	dvbdev->fops = dvbdevfops;
+ 	init_waitqueue_head (&dvbdev->wait_queue);
+-
+ 	dvbdevfops->owner = adap->module;
+-
+ 	list_add_tail (&dvbdev->list_head, &adap->device_list);
+-
+ 	down_write(&minor_rwsem);
+ #ifdef CONFIG_DVB_DYNAMIC_MINORS
+ 	for (minor = 0; minor < MAX_DVB_MINORS; minor++)
+ 		if (dvb_minors[minor] == NULL)
+ 			break;
+-
+ 	if (minor == MAX_DVB_MINORS) {
+-		kfree(dvbdevfops);
++		if (new_node) {
++			list_del (&new_node->list_head);
++			kfree(dvbdevfops);
++			kfree(new_node);
++		}
++		list_del (&dvbdev->list_head);
+ 		kfree(dvbdev);
+ 		up_write(&minor_rwsem);
+ 		mutex_unlock(&dvbdev_register_lock);
+@@ -520,36 +550,47 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ #else
+ 	minor = nums2minor(adap->num, type, id);
+ #endif
+-
+ 	dvbdev->minor = minor;
+ 	dvb_minors[minor] = dvb_device_get(dvbdev);
+ 	up_write(&minor_rwsem);
+-
+ 	ret = dvb_register_media_device(dvbdev, type, minor, demux_sink_pads);
+ 	if (ret) {
+ 		pr_err("%s: dvb_register_media_device failed to create the mediagraph\n",
+ 		      __func__);
+-
++		if (new_node) {
++			list_del (&new_node->list_head);
++			kfree(dvbdevfops);
++			kfree(new_node);
++		}
+ 		dvb_media_device_free(dvbdev);
+-		kfree(dvbdevfops);
++		list_del (&dvbdev->list_head);
+ 		kfree(dvbdev);
+ 		mutex_unlock(&dvbdev_register_lock);
+ 		return ret;
+ 	}
+ 
+-	mutex_unlock(&dvbdev_register_lock);
+-
+ 	clsdev = device_create(dvb_class, adap->device,
+ 			       MKDEV(DVB_MAJOR, minor),
+ 			       dvbdev, "dvb%d.%s%d", adap->num, dnames[type], id);
+ 	if (IS_ERR(clsdev)) {
+ 		pr_err("%s: failed to create device dvb%d.%s%d (%ld)\n",
+ 		       __func__, adap->num, dnames[type], id, PTR_ERR(clsdev));
++		if (new_node) {
++			list_del (&new_node->list_head);
++			kfree(dvbdevfops);
++			kfree(new_node);
++		}
++		dvb_media_device_free(dvbdev);
++		list_del (&dvbdev->list_head);
++		kfree(dvbdev);
++		mutex_unlock(&dvbdev_register_lock);
+ 		return PTR_ERR(clsdev);
+ 	}
++
+ 	dprintk("DVB: register adapter%d/%s%d @ minor: %i (0x%02x)\n",
+ 		adap->num, dnames[type], id, minor, minor);
+ 
++	mutex_unlock(&dvbdev_register_lock);
+ 	return 0;
+ }
+ EXPORT_SYMBOL(dvb_register_device);
+@@ -578,7 +619,6 @@ static void dvb_free_device(struct kref *ref)
+ {
+ 	struct dvb_device *dvbdev = container_of(ref, struct dvb_device, ref);
+ 
+-	kfree (dvbdev->fops);
+ 	kfree (dvbdev);
+ }
+ 
+@@ -1084,9 +1124,17 @@ error:
+ 
+ static void __exit exit_dvbdev(void)
+ {
++	struct dvbdevfops_node *node, *next;
++
+ 	class_destroy(dvb_class);
+ 	cdev_del(&dvb_device_cdev);
+ 	unregister_chrdev_region(MKDEV(DVB_MAJOR, 0), MAX_DVB_MINORS);
++
++	list_for_each_entry_safe(node, next, &dvbdevfops_list, list_head) {
++		list_del (&node->list_head);
++		kfree(node->fops);
++		kfree(node);
++	}
+ }
+ 
+ subsys_initcall(init_dvbdev);
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 6622e32621874..599b7317b59a5 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -253,6 +253,7 @@ static ssize_t power_ro_lock_store(struct device *dev,
+ 		goto out_put;
+ 	}
+ 	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_BOOT_WP;
++	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
+ 	blk_execute_rq(mq->queue, NULL, req, 0);
+ 	ret = req_to_mmc_queue_req(req)->drv_op_result;
+ 	blk_put_request(req);
+@@ -638,6 +639,7 @@ static int mmc_blk_ioctl_cmd(struct mmc_blk_data *md,
+ 	idatas[0] = idata;
+ 	req_to_mmc_queue_req(req)->drv_op =
+ 		rpmb ? MMC_DRV_OP_IOCTL_RPMB : MMC_DRV_OP_IOCTL;
++	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
+ 	req_to_mmc_queue_req(req)->drv_op_data = idatas;
+ 	req_to_mmc_queue_req(req)->ioc_count = 1;
+ 	blk_execute_rq(mq->queue, NULL, req, 0);
+@@ -707,6 +709,7 @@ static int mmc_blk_ioctl_multi_cmd(struct mmc_blk_data *md,
+ 	}
+ 	req_to_mmc_queue_req(req)->drv_op =
+ 		rpmb ? MMC_DRV_OP_IOCTL_RPMB : MMC_DRV_OP_IOCTL;
++	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
+ 	req_to_mmc_queue_req(req)->drv_op_data = idata;
+ 	req_to_mmc_queue_req(req)->ioc_count = num_of_cmds;
+ 	blk_execute_rq(mq->queue, NULL, req, 0);
+@@ -2749,6 +2752,7 @@ static int mmc_dbg_card_status_get(void *data, u64 *val)
+ 	if (IS_ERR(req))
+ 		return PTR_ERR(req);
+ 	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_CARD_STATUS;
++	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
+ 	blk_execute_rq(mq->queue, NULL, req, 0);
+ 	ret = req_to_mmc_queue_req(req)->drv_op_result;
+ 	if (ret >= 0) {
+@@ -2787,6 +2791,7 @@ static int mmc_ext_csd_open(struct inode *inode, struct file *filp)
+ 		goto out_free;
+ 	}
+ 	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_EXT_CSD;
++	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
+ 	req_to_mmc_queue_req(req)->drv_op_data = &ext_csd;
+ 	blk_execute_rq(mq->queue, NULL, req, 0);
+ 	err = req_to_mmc_queue_req(req)->drv_op_result;
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+index 8d92dc6bc9945..d7215bd772f3e 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+@@ -196,8 +196,8 @@ int enetc_setup_tc_cbs(struct net_device *ndev, void *type_data)
+ 	int bw_sum = 0;
+ 	u8 bw;
+ 
+-	prio_top = netdev_get_prio_tc_map(ndev, tc_nums - 1);
+-	prio_next = netdev_get_prio_tc_map(ndev, tc_nums - 2);
++	prio_top = tc_nums - 1;
++	prio_next = tc_nums - 2;
+ 
+ 	/* Support highest prio and second prio tc in cbs mode */
+ 	if (tc != prio_top && tc != prio_next)
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index a994a2970ab24..6a6b5f6e8276d 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -398,7 +398,7 @@ void iavf_set_ethtool_ops(struct net_device *netdev);
+ void iavf_update_stats(struct iavf_adapter *adapter);
+ void iavf_reset_interrupt_capability(struct iavf_adapter *adapter);
+ int iavf_init_interrupt_scheme(struct iavf_adapter *adapter);
+-void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask);
++void iavf_irq_enable_queues(struct iavf_adapter *adapter);
+ void iavf_free_all_tx_resources(struct iavf_adapter *adapter);
+ void iavf_free_all_rx_resources(struct iavf_adapter *adapter);
+ 
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index ae96b552a3bb3..e45f3a1a11f36 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -234,21 +234,18 @@ static void iavf_irq_disable(struct iavf_adapter *adapter)
+ }
+ 
+ /**
+- * iavf_irq_enable_queues - Enable interrupt for specified queues
++ * iavf_irq_enable_queues - Enable interrupt for all queues
+  * @adapter: board private structure
+- * @mask: bitmap of queues to enable
+  **/
+-void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask)
++void iavf_irq_enable_queues(struct iavf_adapter *adapter)
+ {
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	int i;
+ 
+ 	for (i = 1; i < adapter->num_msix_vectors; i++) {
+-		if (mask & BIT(i - 1)) {
+-			wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1),
+-			     IAVF_VFINT_DYN_CTLN1_INTENA_MASK |
+-			     IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK);
+-		}
++		wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1),
++		     IAVF_VFINT_DYN_CTLN1_INTENA_MASK |
++		     IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK);
+ 	}
+ }
+ 
+@@ -262,7 +259,7 @@ void iavf_irq_enable(struct iavf_adapter *adapter, bool flush)
+ 	struct iavf_hw *hw = &adapter->hw;
+ 
+ 	iavf_misc_irq_enable(adapter);
+-	iavf_irq_enable_queues(adapter, ~0);
++	iavf_irq_enable_queues(adapter);
+ 
+ 	if (flush)
+ 		iavf_flush(hw);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_register.h b/drivers/net/ethernet/intel/iavf/iavf_register.h
+index bf793332fc9d5..a19e88898a0bb 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_register.h
++++ b/drivers/net/ethernet/intel/iavf/iavf_register.h
+@@ -40,7 +40,7 @@
+ #define IAVF_VFINT_DYN_CTL01_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTL01_INTENA_SHIFT)
+ #define IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT 3
+ #define IAVF_VFINT_DYN_CTL01_ITR_INDX_MASK IAVF_MASK(0x3, IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT)
+-#define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
++#define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...63 */ /* Reset: VFR */
+ #define IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT 0
+ #define IAVF_VFINT_DYN_CTLN1_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT)
+ #define IAVF_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT 2
+diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+index 5e3b0a5843a8e..d9de3b8115431 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
++++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+@@ -822,6 +822,8 @@ static int igb_set_eeprom(struct net_device *netdev,
+ 		 */
+ 		ret_val = hw->nvm.ops.read(hw, last_word, 1,
+ 				   &eeprom_buff[last_word - first_word]);
++		if (ret_val)
++			goto out;
+ 	}
+ 
+ 	/* Device's eeprom is always little-endian, word addressable */
+@@ -841,6 +843,7 @@ static int igb_set_eeprom(struct net_device *netdev,
+ 		hw->nvm.ops.update(hw);
+ 
+ 	igb_set_fw_version(adapter);
++out:
+ 	kfree(eeprom_buff);
+ 	return ret_val;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index 9886a30e9723c..449f5224d1aeb 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -1385,7 +1385,8 @@ static int nix_check_txschq_alloc_req(struct rvu *rvu, int lvl, u16 pcifunc,
+ 		free_cnt = rvu_rsrc_free_count(&txsch->schq);
+ 	}
+ 
+-	if (free_cnt < req_schq || req_schq > MAX_TXSCHQ_PER_FUNC)
++	if (free_cnt < req_schq || req->schq[lvl] > MAX_TXSCHQ_PER_FUNC ||
++	    req->schq_contig[lvl] > MAX_TXSCHQ_PER_FUNC)
+ 		return NIX_AF_ERR_TLX_ALLOC_FAIL;
+ 
+ 	/* If contiguous queues are needed, check for availability */
+diff --git a/drivers/net/ipvlan/ipvlan_l3s.c b/drivers/net/ipvlan/ipvlan_l3s.c
+index 71712ea25403d..d5b05e8032199 100644
+--- a/drivers/net/ipvlan/ipvlan_l3s.c
++++ b/drivers/net/ipvlan/ipvlan_l3s.c
+@@ -102,6 +102,10 @@ static unsigned int ipvlan_nf_input(void *priv, struct sk_buff *skb,
+ 
+ 	skb->dev = addr->master->dev;
+ 	skb->skb_iif = skb->dev->ifindex;
++#if IS_ENABLED(CONFIG_IPV6)
++	if (addr->atype == IPVL_IPV6)
++		IP6CB(skb)->iif = skb->dev->ifindex;
++#endif
+ 	len = skb->len + ETH_HLEN;
+ 	ipvlan_count_rx(addr->master, len, true, false);
+ out:
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 53f1cd0bfaf42..de6c561535346 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1156,7 +1156,9 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x05c6, 0x9080, 8)},
+ 	{QMI_FIXED_INTF(0x05c6, 0x9083, 3)},
+ 	{QMI_FIXED_INTF(0x05c6, 0x9084, 4)},
++	{QMI_QUIRK_SET_DTR(0x05c6, 0x9091, 2)},	/* Compal RXM-G1 */
+ 	{QMI_FIXED_INTF(0x05c6, 0x90b2, 3)},    /* ublox R410M */
++	{QMI_QUIRK_SET_DTR(0x05c6, 0x90db, 2)},	/* Compal RXM-G1 */
+ 	{QMI_FIXED_INTF(0x05c6, 0x920d, 0)},
+ 	{QMI_FIXED_INTF(0x05c6, 0x920d, 5)},
+ 	{QMI_QUIRK_SET_DTR(0x05c6, 0x9625, 4)},	/* YUGA CLM920-NC5 */
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index b965eb6a4bb17..24c53cc0c112f 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -341,6 +341,9 @@ static int lapbeth_new_device(struct net_device *dev)
+ 
+ 	ASSERT_RTNL();
+ 
++	if (dev->type != ARPHRD_ETHER)
++		return -EINVAL;
++
+ 	ndev = alloc_netdev(sizeof(*lapbeth), "lapb%d", NET_NAME_UNKNOWN,
+ 			    lapbeth_setup);
+ 	if (!ndev)
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index 74637bd0433e6..b4a5cbdae904e 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -507,6 +507,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
+ 	{ KE_KEY, 0x71, { KEY_F13 } }, /* General-purpose button */
+ 	{ KE_IGNORE, 0x79, },  /* Charger type dectection notification */
+ 	{ KE_KEY, 0x7a, { KEY_ALS_TOGGLE } }, /* Ambient Light Sensor Toggle */
++	{ KE_IGNORE, 0x7B, }, /* Charger connect/disconnect notification */
+ 	{ KE_KEY, 0x7c, { KEY_MICMUTE } },
+ 	{ KE_KEY, 0x7D, { KEY_BLUETOOTH } }, /* Bluetooth Enable */
+ 	{ KE_KEY, 0x7E, { KEY_BLUETOOTH } }, /* Bluetooth Disable */
+@@ -532,6 +533,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
+ 	{ KE_KEY, 0xA6, { KEY_SWITCHVIDEOMODE } }, /* SDSP CRT + TV + HDMI */
+ 	{ KE_KEY, 0xA7, { KEY_SWITCHVIDEOMODE } }, /* SDSP LCD + CRT + TV + HDMI */
+ 	{ KE_KEY, 0xB5, { KEY_CALC } },
++	{ KE_IGNORE, 0xC0, }, /* External display connect/disconnect notification */
+ 	{ KE_KEY, 0xC4, { KEY_KBDILLUMUP } },
+ 	{ KE_KEY, 0xC5, { KEY_KBDILLUMDOWN } },
+ 	{ KE_IGNORE, 0xC6, },  /* Ambient Light Sensor notification */
+diff --git a/drivers/power/supply/ab8500_btemp.c b/drivers/power/supply/ab8500_btemp.c
+index 4417d64c31f97..5a1adceb6974d 100644
+--- a/drivers/power/supply/ab8500_btemp.c
++++ b/drivers/power/supply/ab8500_btemp.c
+@@ -921,10 +921,8 @@ static int ab8500_btemp_get_ext_psy_data(struct device *dev, void *data)
+  */
+ static void ab8500_btemp_external_power_changed(struct power_supply *psy)
+ {
+-	struct ab8500_btemp *di = power_supply_get_drvdata(psy);
+-
+-	class_for_each_device(power_supply_class, NULL,
+-		di->btemp_psy, ab8500_btemp_get_ext_psy_data);
++	class_for_each_device(power_supply_class, NULL, psy,
++			      ab8500_btemp_get_ext_psy_data);
+ }
+ 
+ /* ab8500 btemp driver interrupts and their respective isr */
+diff --git a/drivers/power/supply/ab8500_fg.c b/drivers/power/supply/ab8500_fg.c
+index a6b4a94c27662..a88590563647e 100644
+--- a/drivers/power/supply/ab8500_fg.c
++++ b/drivers/power/supply/ab8500_fg.c
+@@ -2380,10 +2380,8 @@ out:
+  */
+ static void ab8500_fg_external_power_changed(struct power_supply *psy)
+ {
+-	struct ab8500_fg *di = power_supply_get_drvdata(psy);
+-
+-	class_for_each_device(power_supply_class, NULL,
+-		di->fg_psy, ab8500_fg_get_ext_psy_data);
++	class_for_each_device(power_supply_class, NULL, psy,
++			      ab8500_fg_get_ext_psy_data);
+ }
+ 
+ /**
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 235647b21af71..0673e0fe0ffbd 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -1018,10 +1018,8 @@ static int poll_interval_param_set(const char *val, const struct kernel_param *k
+ 		return ret;
+ 
+ 	mutex_lock(&bq27xxx_list_lock);
+-	list_for_each_entry(di, &bq27xxx_battery_devices, list) {
+-		cancel_delayed_work_sync(&di->work);
+-		schedule_delayed_work(&di->work, 0);
+-	}
++	list_for_each_entry(di, &bq27xxx_battery_devices, list)
++		mod_delayed_work(system_wq, &di->work, 0);
+ 	mutex_unlock(&bq27xxx_list_lock);
+ 
+ 	return ret;
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index 53e5b3e04be13..5c8c117b396e7 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -347,6 +347,10 @@ static int __power_supply_is_system_supplied(struct device *dev, void *data)
+ 	struct power_supply *psy = dev_get_drvdata(dev);
+ 	unsigned int *count = data;
+ 
++	if (!psy->desc->get_property(psy, POWER_SUPPLY_PROP_SCOPE, &ret))
++		if (ret.intval == POWER_SUPPLY_SCOPE_DEVICE)
++			return 0;
++
+ 	(*count)++;
+ 	if (psy->desc->type != POWER_SUPPLY_TYPE_BATTERY)
+ 		if (!psy->desc->get_property(psy, POWER_SUPPLY_PROP_ONLINE,
+@@ -365,8 +369,8 @@ int power_supply_is_system_supplied(void)
+ 				      __power_supply_is_system_supplied);
+ 
+ 	/*
+-	 * If no power class device was found at all, most probably we are
+-	 * running on a desktop system, so assume we are on mains power.
++	 * If no system scope power class device was found at all, most probably we
++	 * are running on a desktop system, so assume we are on mains power.
+ 	 */
+ 	if (count == 0)
+ 		return 1;
+diff --git a/drivers/power/supply/power_supply_sysfs.c b/drivers/power/supply/power_supply_sysfs.c
+index a616b9d8f43c5..2b1df9c339699 100644
+--- a/drivers/power/supply/power_supply_sysfs.c
++++ b/drivers/power/supply/power_supply_sysfs.c
+@@ -276,7 +276,8 @@ static ssize_t power_supply_show_property(struct device *dev,
+ 
+ 		if (ret < 0) {
+ 			if (ret == -ENODATA)
+-				dev_dbg(dev, "driver has no data for `%s' property\n",
++				dev_dbg_ratelimited(dev,
++					"driver has no data for `%s' property\n",
+ 					attr->attr.name);
+ 			else if (ret != -ENODEV && ret != -EAGAIN)
+ 				dev_err_ratelimited(dev,
+diff --git a/drivers/power/supply/sc27xx_fuel_gauge.c b/drivers/power/supply/sc27xx_fuel_gauge.c
+index 1ae8374e1cebe..3bf4b263950d7 100644
+--- a/drivers/power/supply/sc27xx_fuel_gauge.c
++++ b/drivers/power/supply/sc27xx_fuel_gauge.c
+@@ -733,13 +733,6 @@ static int sc27xx_fgu_set_property(struct power_supply *psy,
+ 	return ret;
+ }
+ 
+-static void sc27xx_fgu_external_power_changed(struct power_supply *psy)
+-{
+-	struct sc27xx_fgu_data *data = power_supply_get_drvdata(psy);
+-
+-	power_supply_changed(data->battery);
+-}
+-
+ static int sc27xx_fgu_property_is_writeable(struct power_supply *psy,
+ 					    enum power_supply_property psp)
+ {
+@@ -774,7 +767,7 @@ static const struct power_supply_desc sc27xx_fgu_desc = {
+ 	.num_properties		= ARRAY_SIZE(sc27xx_fgu_props),
+ 	.get_property		= sc27xx_fgu_get_property,
+ 	.set_property		= sc27xx_fgu_set_property,
+-	.external_power_changed	= sc27xx_fgu_external_power_changed,
++	.external_power_changed	= power_supply_changed,
+ 	.property_is_writeable	= sc27xx_fgu_property_is_writeable,
+ 	.no_thermal		= true,
+ };
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 47a04c5f7a9b8..f5ab74683b58a 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -5032,7 +5032,7 @@ static void rdev_init_debugfs(struct regulator_dev *rdev)
+ 	}
+ 
+ 	rdev->debugfs = debugfs_create_dir(rname, debugfs_root);
+-	if (!rdev->debugfs) {
++	if (IS_ERR(rdev->debugfs)) {
+ 		rdev_warn(rdev, "Failed to create debugfs directory\n");
+ 		return;
+ 	}
+@@ -5937,7 +5937,7 @@ static int __init regulator_init(void)
+ 	ret = class_register(&regulator_class);
+ 
+ 	debugfs_root = debugfs_create_dir("regulator", NULL);
+-	if (!debugfs_root)
++	if (IS_ERR(debugfs_root))
+ 		pr_warn("regulator: Failed to create debugfs directory\n");
+ 
+ #ifdef CONFIG_DEBUG_FS
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index fd004c9db9dc0..0d9201a2999de 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -975,7 +975,9 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr,
+ static int dspi_setup(struct spi_device *spi)
+ {
+ 	struct fsl_dspi *dspi = spi_controller_get_devdata(spi->controller);
++	u32 period_ns = DIV_ROUND_UP(NSEC_PER_SEC, spi->max_speed_hz);
+ 	unsigned char br = 0, pbr = 0, pcssck = 0, cssck = 0;
++	u32 quarter_period_ns = DIV_ROUND_UP(period_ns, 4);
+ 	u32 cs_sck_delay = 0, sck_cs_delay = 0;
+ 	struct fsl_dspi_platform_data *pdata;
+ 	unsigned char pasc = 0, asc = 0;
+@@ -1003,6 +1005,19 @@ static int dspi_setup(struct spi_device *spi)
+ 		sck_cs_delay = pdata->sck_cs_delay;
+ 	}
+ 
++	/* Since tCSC and tASC apply to continuous transfers too, avoid SCK
++	 * glitches of half a cycle by never allowing tCSC + tASC to go below
++	 * half a SCK period.
++	 */
++	if (cs_sck_delay < quarter_period_ns)
++		cs_sck_delay = quarter_period_ns;
++	if (sck_cs_delay < quarter_period_ns)
++		sck_cs_delay = quarter_period_ns;
++
++	dev_dbg(&spi->dev,
++		"DSPI controller timing params: CS-to-SCK delay %u ns, SCK-to-CS delay %u ns\n",
++		cs_sck_delay, sck_cs_delay);
++
+ 	clkrate = clk_get_rate(dspi->clk);
+ 	hz_to_spi_baud(&pbr, &br, spi->max_speed_hz, clkrate);
+ 
+diff --git a/drivers/tty/serial/lantiq.c b/drivers/tty/serial/lantiq.c
+index 62813e421f124..ee5fd4d845c81 100644
+--- a/drivers/tty/serial/lantiq.c
++++ b/drivers/tty/serial/lantiq.c
+@@ -274,6 +274,7 @@ lqasc_err_int(int irq, void *_port)
+ 	struct ltq_uart_port *ltq_port = to_ltq_uart_port(port);
+ 
+ 	spin_lock_irqsave(&ltq_port->lock, flags);
++	__raw_writel(ASC_IRNCR_EIR, port->membase + LTQ_ASC_IRNCR);
+ 	/* clear any pending interrupts */
+ 	asc_update_bits(0, ASCWHBSTATE_CLRPE | ASCWHBSTATE_CLRFE |
+ 		ASCWHBSTATE_CLRROE, port->membase + LTQ_ASC_WHBSTATE);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 4e3b451ed749e..164910237669f 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -180,6 +180,7 @@ static void dwc3_gadget_del_and_unmap_request(struct dwc3_ep *dep,
+ 	list_del(&req->list);
+ 	req->remaining = 0;
+ 	req->needs_extra_trb = false;
++	req->num_trbs = 0;
+ 
+ 	if (req->request.status == -EINPROGRESS)
+ 		req->request.status = status;
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 5b474efeab6ab..243f97a307793 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -248,6 +248,8 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_VENDOR_ID			0x2c7c
+ /* These Quectel products use Quectel's vendor ID */
+ #define QUECTEL_PRODUCT_EC21			0x0121
++#define QUECTEL_PRODUCT_EM061K_LTA		0x0123
++#define QUECTEL_PRODUCT_EM061K_LMS		0x0124
+ #define QUECTEL_PRODUCT_EC25			0x0125
+ #define QUECTEL_PRODUCT_EG91			0x0191
+ #define QUECTEL_PRODUCT_EG95			0x0195
+@@ -266,6 +268,8 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_RM520N			0x0801
+ #define QUECTEL_PRODUCT_EC200U			0x0901
+ #define QUECTEL_PRODUCT_EC200S_CN		0x6002
++#define QUECTEL_PRODUCT_EM061K_LWW		0x6008
++#define QUECTEL_PRODUCT_EM061K_LCN		0x6009
+ #define QUECTEL_PRODUCT_EC200T			0x6026
+ #define QUECTEL_PRODUCT_RM500K			0x7001
+ 
+@@ -1189,6 +1193,18 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LMS, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LMS, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LMS, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LTA, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LTA, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LTA, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LWW, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LWW, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LWW, 0xff, 0xff, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
+diff --git a/fs/afs/vl_probe.c b/fs/afs/vl_probe.c
+index d1c7068b4346f..58452b86e6727 100644
+--- a/fs/afs/vl_probe.c
++++ b/fs/afs/vl_probe.c
+@@ -115,8 +115,8 @@ responded:
+ 		}
+ 	}
+ 
+-	if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) &&
+-	    rtt_us < server->probe.rtt) {
++	rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us);
++	if (rtt_us < server->probe.rtt) {
+ 		server->probe.rtt = rtt_us;
+ 		server->rtt = rtt_us;
+ 		alist->preferred = index;
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 889a598b17f6b..d0fecbd28232f 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -2279,10 +2279,20 @@ again:
+ 	}
+ 
+ 	ret = inc_block_group_ro(cache, 0);
+-	if (!do_chunk_alloc || ret == -ETXTBSY)
+-		goto unlock_out;
+ 	if (!ret)
+ 		goto out;
++	if (ret == -ETXTBSY)
++		goto unlock_out;
++
++	/*
++	 * Skip chunk alloction if the bg is SYSTEM, this is to avoid system
++	 * chunk allocation storm to exhaust the system chunk array.  Otherwise
++	 * we still want to try our best to mark the block group read-only.
++	 */
++	if (!do_chunk_alloc && ret == -ENOSPC &&
++	    (cache->flags & BTRFS_BLOCK_GROUP_SYSTEM))
++		goto unlock_out;
++
+ 	alloc_flags = btrfs_get_alloc_profile(fs_info, cache->space_info->flags);
+ 	ret = btrfs_chunk_alloc(trans, alloc_flags, CHUNK_ALLOC_FORCE);
+ 	if (ret < 0)
+diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
+index 2de1d8247494e..cbea4f572155e 100644
+--- a/fs/btrfs/file-item.c
++++ b/fs/btrfs/file-item.c
+@@ -602,7 +602,9 @@ blk_status_t btrfs_csum_one_bio(struct btrfs_inode *inode, struct bio *bio,
+ 				sums = kvzalloc(btrfs_ordered_sum_size(fs_info,
+ 						      bytes_left), GFP_KERNEL);
+ 				memalloc_nofs_restore(nofs_flag);
+-				BUG_ON(!sums); /* -ENOMEM */
++				if (!sums)
++					return BLK_STS_RESOURCE;
++
+ 				sums->len = bytes_left;
+ 				ordered = btrfs_lookup_ordered_extent(inode,
+ 								offset);
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 88b9a5394561e..715a0329ba277 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -3559,13 +3559,20 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+ 		ret = btrfs_inc_block_group_ro(cache, sctx->is_dev_replace);
+ 		if (ret == 0) {
+ 			ro_set = 1;
+-		} else if (ret == -ENOSPC && !sctx->is_dev_replace) {
++		} else if (ret == -ENOSPC && !sctx->is_dev_replace &&
++			   !(cache->flags & BTRFS_BLOCK_GROUP_RAID56_MASK)) {
+ 			/*
+ 			 * btrfs_inc_block_group_ro return -ENOSPC when it
+ 			 * failed in creating new chunk for metadata.
+ 			 * It is not a problem for scrub, because
+ 			 * metadata are always cowed, and our scrub paused
+ 			 * commit_transactions.
++			 *
++			 * For RAID56 chunks, we have to mark them read-only
++			 * for scrub, as later we would use our own cache
++			 * out of RAID56 realm.
++			 * Thus we want the RAID56 bg to be marked RO to
++			 * prevent RMW from screwing up out cache.
+ 			 */
+ 			ro_set = 0;
+ 		} else if (ret == -ETXTBSY) {
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 13d4c3d504126..5ce1ea1f452b1 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1817,7 +1817,11 @@ static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
+ {
+ 	int ret = default_wake_function(wq_entry, mode, sync, key);
+ 
+-	list_del_init(&wq_entry->entry);
++	/*
++	 * Pairs with list_empty_careful in ep_poll, and ensures future loop
++	 * iterations see the cause of this wakeup.
++	 */
++	list_del_init_careful(&wq_entry->entry);
+ 	return ret;
+ }
+ 
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index a43167042b6b1..4efe71efe1277 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -322,17 +322,15 @@ static ext4_fsblk_t ext4_valid_block_bitmap_padding(struct super_block *sb,
+ struct ext4_group_info *ext4_get_group_info(struct super_block *sb,
+ 					    ext4_group_t group)
+ {
+-	 struct ext4_group_info **grp_info;
+-	 long indexv, indexh;
+-
+-	 if (unlikely(group >= EXT4_SB(sb)->s_groups_count)) {
+-		 ext4_error(sb, "invalid group %u", group);
+-		 return NULL;
+-	 }
+-	 indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb));
+-	 indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1);
+-	 grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv);
+-	 return grp_info[indexh];
++	struct ext4_group_info **grp_info;
++	long indexv, indexh;
++
++	if (unlikely(group >= EXT4_SB(sb)->s_groups_count))
++		return NULL;
++	indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb));
++	indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1);
++	grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv);
++	return grp_info[indexh];
+ }
+ 
+ /*
+diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
+index e00e184b12615..1776121677e28 100644
+--- a/fs/nilfs2/btnode.c
++++ b/fs/nilfs2/btnode.c
+@@ -285,6 +285,14 @@ void nilfs_btnode_abort_change_key(struct address_space *btnc,
+ 	if (nbh == NULL) {	/* blocksize == pagesize */
+ 		xa_erase_irq(&btnc->i_pages, newkey);
+ 		unlock_page(ctxt->bh->b_page);
+-	} else
+-		brelse(nbh);
++	} else {
++		/*
++		 * When canceling a buffer that a prepare operation has
++		 * allocated to copy a node block to another location, use
++		 * nilfs_btnode_delete() to initialize and release the buffer
++		 * so that the buffer flags will not be in an inconsistent
++		 * state when it is reallocated.
++		 */
++		nilfs_btnode_delete(nbh);
++	}
+ }
+diff --git a/fs/nilfs2/sufile.c b/fs/nilfs2/sufile.c
+index 51f4cb060231f..b3abe69382fd0 100644
+--- a/fs/nilfs2/sufile.c
++++ b/fs/nilfs2/sufile.c
+@@ -779,6 +779,15 @@ int nilfs_sufile_resize(struct inode *sufile, __u64 newnsegs)
+ 			goto out_header;
+ 
+ 		sui->ncleansegs -= nsegs - newnsegs;
++
++		/*
++		 * If the sufile is successfully truncated, immediately adjust
++		 * the segment allocation space while locking the semaphore
++		 * "mi_sem" so that nilfs_sufile_alloc() never allocates
++		 * segments in the truncated space.
++		 */
++		sui->allocmax = newnsegs - 1;
++		sui->allocmin = 0;
+ 	}
+ 
+ 	kaddr = kmap_atomic(header_bh->b_page);
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index d8d08fa5c3406..1566e86e8e555 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -405,6 +405,18 @@ unsigned long nilfs_nrsvsegs(struct the_nilfs *nilfs, unsigned long nsegs)
+ 				  100));
+ }
+ 
++/**
++ * nilfs_max_segment_count - calculate the maximum number of segments
++ * @nilfs: nilfs object
++ */
++static u64 nilfs_max_segment_count(struct the_nilfs *nilfs)
++{
++	u64 max_count = U64_MAX;
++
++	do_div(max_count, nilfs->ns_blocks_per_segment);
++	return min_t(u64, max_count, ULONG_MAX);
++}
++
+ void nilfs_set_nsegments(struct the_nilfs *nilfs, unsigned long nsegs)
+ {
+ 	nilfs->ns_nsegments = nsegs;
+@@ -414,6 +426,8 @@ void nilfs_set_nsegments(struct the_nilfs *nilfs, unsigned long nsegs)
+ static int nilfs_store_disk_layout(struct the_nilfs *nilfs,
+ 				   struct nilfs_super_block *sbp)
+ {
++	u64 nsegments, nblocks;
++
+ 	if (le32_to_cpu(sbp->s_rev_level) < NILFS_MIN_SUPP_REV) {
+ 		nilfs_err(nilfs->ns_sb,
+ 			  "unsupported revision (superblock rev.=%d.%d, current rev.=%d.%d). Please check the version of mkfs.nilfs(2).",
+@@ -457,7 +471,35 @@ static int nilfs_store_disk_layout(struct the_nilfs *nilfs,
+ 		return -EINVAL;
+ 	}
+ 
+-	nilfs_set_nsegments(nilfs, le64_to_cpu(sbp->s_nsegments));
++	nsegments = le64_to_cpu(sbp->s_nsegments);
++	if (nsegments > nilfs_max_segment_count(nilfs)) {
++		nilfs_err(nilfs->ns_sb,
++			  "segment count %llu exceeds upper limit (%llu segments)",
++			  (unsigned long long)nsegments,
++			  (unsigned long long)nilfs_max_segment_count(nilfs));
++		return -EINVAL;
++	}
++
++	nblocks = (u64)i_size_read(nilfs->ns_sb->s_bdev->bd_inode) >>
++		nilfs->ns_sb->s_blocksize_bits;
++	if (nblocks) {
++		u64 min_block_count = nsegments * nilfs->ns_blocks_per_segment;
++		/*
++		 * To avoid failing to mount early device images without a
++		 * second superblock, exclude that block count from the
++		 * "min_block_count" calculation.
++		 */
++
++		if (nblocks < min_block_count) {
++			nilfs_err(nilfs->ns_sb,
++				  "total number of segment blocks %llu exceeds device size (%llu blocks)",
++				  (unsigned long long)min_block_count,
++				  (unsigned long long)nblocks);
++			return -EINVAL;
++		}
++	}
++
++	nilfs_set_nsegments(nilfs, nsegments);
+ 	nilfs->ns_crc_seed = le32_to_cpu(sbp->s_crc_seed);
+ 	return 0;
+ }
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index ca00cac5a12f7..df36d84aedc48 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -2103,14 +2103,20 @@ static long ocfs2_fallocate(struct file *file, int mode, loff_t offset,
+ 	struct ocfs2_space_resv sr;
+ 	int change_size = 1;
+ 	int cmd = OCFS2_IOC_RESVSP64;
++	int ret = 0;
+ 
+ 	if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
+ 		return -EOPNOTSUPP;
+ 	if (!ocfs2_writes_unwritten_extents(osb))
+ 		return -EOPNOTSUPP;
+ 
+-	if (mode & FALLOC_FL_KEEP_SIZE)
++	if (mode & FALLOC_FL_KEEP_SIZE) {
+ 		change_size = 0;
++	} else {
++		ret = inode_newsize_ok(inode, offset + len);
++		if (ret)
++			return ret;
++	}
+ 
+ 	if (mode & FALLOC_FL_PUNCH_HOLE)
+ 		cmd = OCFS2_IOC_UNRESVSP64;
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 04ce30ae44044..dc21d35527abc 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -955,8 +955,10 @@ static void ocfs2_disable_quotas(struct ocfs2_super *osb)
+ 	for (type = 0; type < OCFS2_MAXQUOTAS; type++) {
+ 		if (!sb_has_quota_loaded(sb, type))
+ 			continue;
+-		oinfo = sb_dqinfo(sb, type)->dqi_priv;
+-		cancel_delayed_work_sync(&oinfo->dqi_sync_work);
++		if (!sb_has_quota_suspended(sb, type)) {
++			oinfo = sb_dqinfo(sb, type)->dqi_priv;
++			cancel_delayed_work_sync(&oinfo->dqi_sync_work);
++		}
+ 		inode = igrab(sb->s_dquot.files[type]);
+ 		/* Turn off quotas. This will remove all dquot structures from
+ 		 * memory and so they will be automatically synced to global
+diff --git a/include/linux/kernel.h b/include/linux/kernel.h
+index 66948e1bf4fa6..cdd6ed5bbcf20 100644
+--- a/include/linux/kernel.h
++++ b/include/linux/kernel.h
+@@ -10,6 +10,7 @@
+ #include <linux/types.h>
+ #include <linux/compiler.h>
+ #include <linux/bitops.h>
++#include <linux/kstrtox.h>
+ #include <linux/log2.h>
+ #include <linux/minmax.h>
+ #include <linux/typecheck.h>
+@@ -329,148 +330,6 @@ extern bool oops_may_print(void);
+ void do_exit(long error_code) __noreturn;
+ void complete_and_exit(struct completion *, long) __noreturn;
+ 
+-/* Internal, do not use. */
+-int __must_check _kstrtoul(const char *s, unsigned int base, unsigned long *res);
+-int __must_check _kstrtol(const char *s, unsigned int base, long *res);
+-
+-int __must_check kstrtoull(const char *s, unsigned int base, unsigned long long *res);
+-int __must_check kstrtoll(const char *s, unsigned int base, long long *res);
+-
+-/**
+- * kstrtoul - convert a string to an unsigned long
+- * @s: The start of the string. The string must be null-terminated, and may also
+- *  include a single newline before its terminating null. The first character
+- *  may also be a plus sign, but not a minus sign.
+- * @base: The number base to use. The maximum supported base is 16. If base is
+- *  given as 0, then the base of the string is automatically detected with the
+- *  conventional semantics - If it begins with 0x the number will be parsed as a
+- *  hexadecimal (case insensitive), if it otherwise begins with 0, it will be
+- *  parsed as an octal number. Otherwise it will be parsed as a decimal.
+- * @res: Where to write the result of the conversion on success.
+- *
+- * Returns 0 on success, -ERANGE on overflow and -EINVAL on parsing error.
+- * Preferred over simple_strtoul(). Return code must be checked.
+-*/
+-static inline int __must_check kstrtoul(const char *s, unsigned int base, unsigned long *res)
+-{
+-	/*
+-	 * We want to shortcut function call, but
+-	 * __builtin_types_compatible_p(unsigned long, unsigned long long) = 0.
+-	 */
+-	if (sizeof(unsigned long) == sizeof(unsigned long long) &&
+-	    __alignof__(unsigned long) == __alignof__(unsigned long long))
+-		return kstrtoull(s, base, (unsigned long long *)res);
+-	else
+-		return _kstrtoul(s, base, res);
+-}
+-
+-/**
+- * kstrtol - convert a string to a long
+- * @s: The start of the string. The string must be null-terminated, and may also
+- *  include a single newline before its terminating null. The first character
+- *  may also be a plus sign or a minus sign.
+- * @base: The number base to use. The maximum supported base is 16. If base is
+- *  given as 0, then the base of the string is automatically detected with the
+- *  conventional semantics - If it begins with 0x the number will be parsed as a
+- *  hexadecimal (case insensitive), if it otherwise begins with 0, it will be
+- *  parsed as an octal number. Otherwise it will be parsed as a decimal.
+- * @res: Where to write the result of the conversion on success.
+- *
+- * Returns 0 on success, -ERANGE on overflow and -EINVAL on parsing error.
+- * Preferred over simple_strtol(). Return code must be checked.
+- */
+-static inline int __must_check kstrtol(const char *s, unsigned int base, long *res)
+-{
+-	/*
+-	 * We want to shortcut function call, but
+-	 * __builtin_types_compatible_p(long, long long) = 0.
+-	 */
+-	if (sizeof(long) == sizeof(long long) &&
+-	    __alignof__(long) == __alignof__(long long))
+-		return kstrtoll(s, base, (long long *)res);
+-	else
+-		return _kstrtol(s, base, res);
+-}
+-
+-int __must_check kstrtouint(const char *s, unsigned int base, unsigned int *res);
+-int __must_check kstrtoint(const char *s, unsigned int base, int *res);
+-
+-static inline int __must_check kstrtou64(const char *s, unsigned int base, u64 *res)
+-{
+-	return kstrtoull(s, base, res);
+-}
+-
+-static inline int __must_check kstrtos64(const char *s, unsigned int base, s64 *res)
+-{
+-	return kstrtoll(s, base, res);
+-}
+-
+-static inline int __must_check kstrtou32(const char *s, unsigned int base, u32 *res)
+-{
+-	return kstrtouint(s, base, res);
+-}
+-
+-static inline int __must_check kstrtos32(const char *s, unsigned int base, s32 *res)
+-{
+-	return kstrtoint(s, base, res);
+-}
+-
+-int __must_check kstrtou16(const char *s, unsigned int base, u16 *res);
+-int __must_check kstrtos16(const char *s, unsigned int base, s16 *res);
+-int __must_check kstrtou8(const char *s, unsigned int base, u8 *res);
+-int __must_check kstrtos8(const char *s, unsigned int base, s8 *res);
+-int __must_check kstrtobool(const char *s, bool *res);
+-
+-int __must_check kstrtoull_from_user(const char __user *s, size_t count, unsigned int base, unsigned long long *res);
+-int __must_check kstrtoll_from_user(const char __user *s, size_t count, unsigned int base, long long *res);
+-int __must_check kstrtoul_from_user(const char __user *s, size_t count, unsigned int base, unsigned long *res);
+-int __must_check kstrtol_from_user(const char __user *s, size_t count, unsigned int base, long *res);
+-int __must_check kstrtouint_from_user(const char __user *s, size_t count, unsigned int base, unsigned int *res);
+-int __must_check kstrtoint_from_user(const char __user *s, size_t count, unsigned int base, int *res);
+-int __must_check kstrtou16_from_user(const char __user *s, size_t count, unsigned int base, u16 *res);
+-int __must_check kstrtos16_from_user(const char __user *s, size_t count, unsigned int base, s16 *res);
+-int __must_check kstrtou8_from_user(const char __user *s, size_t count, unsigned int base, u8 *res);
+-int __must_check kstrtos8_from_user(const char __user *s, size_t count, unsigned int base, s8 *res);
+-int __must_check kstrtobool_from_user(const char __user *s, size_t count, bool *res);
+-
+-static inline int __must_check kstrtou64_from_user(const char __user *s, size_t count, unsigned int base, u64 *res)
+-{
+-	return kstrtoull_from_user(s, count, base, res);
+-}
+-
+-static inline int __must_check kstrtos64_from_user(const char __user *s, size_t count, unsigned int base, s64 *res)
+-{
+-	return kstrtoll_from_user(s, count, base, res);
+-}
+-
+-static inline int __must_check kstrtou32_from_user(const char __user *s, size_t count, unsigned int base, u32 *res)
+-{
+-	return kstrtouint_from_user(s, count, base, res);
+-}
+-
+-static inline int __must_check kstrtos32_from_user(const char __user *s, size_t count, unsigned int base, s32 *res)
+-{
+-	return kstrtoint_from_user(s, count, base, res);
+-}
+-
+-/*
+- * Use kstrto<foo> instead.
+- *
+- * NOTE: simple_strto<foo> does not check for the range overflow and,
+- *	 depending on the input, may give interesting results.
+- *
+- * Use these functions if and only if you cannot use kstrto<foo>, because
+- * the conversion ends on the first non-digit character, which may be far
+- * beyond the supported range. It might be useful to parse the strings like
+- * 10x50 or 12:21 without altering original string or temporary buffer in use.
+- * Keep in mind above caveat.
+- */
+-
+-extern unsigned long simple_strtoul(const char *,char **,unsigned int);
+-extern long simple_strtol(const char *,char **,unsigned int);
+-extern unsigned long long simple_strtoull(const char *,char **,unsigned int);
+-extern long long simple_strtoll(const char *,char **,unsigned int);
+-
+ extern int num_to_str(char *buf, int size,
+ 		      unsigned long long num, unsigned int width);
+ 
+diff --git a/include/linux/kstrtox.h b/include/linux/kstrtox.h
+new file mode 100644
+index 0000000000000..529974e22ea79
+--- /dev/null
++++ b/include/linux/kstrtox.h
+@@ -0,0 +1,155 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _LINUX_KSTRTOX_H
++#define _LINUX_KSTRTOX_H
++
++#include <linux/compiler.h>
++#include <linux/types.h>
++
++/* Internal, do not use. */
++int __must_check _kstrtoul(const char *s, unsigned int base, unsigned long *res);
++int __must_check _kstrtol(const char *s, unsigned int base, long *res);
++
++int __must_check kstrtoull(const char *s, unsigned int base, unsigned long long *res);
++int __must_check kstrtoll(const char *s, unsigned int base, long long *res);
++
++/**
++ * kstrtoul - convert a string to an unsigned long
++ * @s: The start of the string. The string must be null-terminated, and may also
++ *  include a single newline before its terminating null. The first character
++ *  may also be a plus sign, but not a minus sign.
++ * @base: The number base to use. The maximum supported base is 16. If base is
++ *  given as 0, then the base of the string is automatically detected with the
++ *  conventional semantics - If it begins with 0x the number will be parsed as a
++ *  hexadecimal (case insensitive), if it otherwise begins with 0, it will be
++ *  parsed as an octal number. Otherwise it will be parsed as a decimal.
++ * @res: Where to write the result of the conversion on success.
++ *
++ * Returns 0 on success, -ERANGE on overflow and -EINVAL on parsing error.
++ * Preferred over simple_strtoul(). Return code must be checked.
++*/
++static inline int __must_check kstrtoul(const char *s, unsigned int base, unsigned long *res)
++{
++	/*
++	 * We want to shortcut function call, but
++	 * __builtin_types_compatible_p(unsigned long, unsigned long long) = 0.
++	 */
++	if (sizeof(unsigned long) == sizeof(unsigned long long) &&
++	    __alignof__(unsigned long) == __alignof__(unsigned long long))
++		return kstrtoull(s, base, (unsigned long long *)res);
++	else
++		return _kstrtoul(s, base, res);
++}
++
++/**
++ * kstrtol - convert a string to a long
++ * @s: The start of the string. The string must be null-terminated, and may also
++ *  include a single newline before its terminating null. The first character
++ *  may also be a plus sign or a minus sign.
++ * @base: The number base to use. The maximum supported base is 16. If base is
++ *  given as 0, then the base of the string is automatically detected with the
++ *  conventional semantics - If it begins with 0x the number will be parsed as a
++ *  hexadecimal (case insensitive), if it otherwise begins with 0, it will be
++ *  parsed as an octal number. Otherwise it will be parsed as a decimal.
++ * @res: Where to write the result of the conversion on success.
++ *
++ * Returns 0 on success, -ERANGE on overflow and -EINVAL on parsing error.
++ * Preferred over simple_strtol(). Return code must be checked.
++ */
++static inline int __must_check kstrtol(const char *s, unsigned int base, long *res)
++{
++	/*
++	 * We want to shortcut function call, but
++	 * __builtin_types_compatible_p(long, long long) = 0.
++	 */
++	if (sizeof(long) == sizeof(long long) &&
++	    __alignof__(long) == __alignof__(long long))
++		return kstrtoll(s, base, (long long *)res);
++	else
++		return _kstrtol(s, base, res);
++}
++
++int __must_check kstrtouint(const char *s, unsigned int base, unsigned int *res);
++int __must_check kstrtoint(const char *s, unsigned int base, int *res);
++
++static inline int __must_check kstrtou64(const char *s, unsigned int base, u64 *res)
++{
++	return kstrtoull(s, base, res);
++}
++
++static inline int __must_check kstrtos64(const char *s, unsigned int base, s64 *res)
++{
++	return kstrtoll(s, base, res);
++}
++
++static inline int __must_check kstrtou32(const char *s, unsigned int base, u32 *res)
++{
++	return kstrtouint(s, base, res);
++}
++
++static inline int __must_check kstrtos32(const char *s, unsigned int base, s32 *res)
++{
++	return kstrtoint(s, base, res);
++}
++
++int __must_check kstrtou16(const char *s, unsigned int base, u16 *res);
++int __must_check kstrtos16(const char *s, unsigned int base, s16 *res);
++int __must_check kstrtou8(const char *s, unsigned int base, u8 *res);
++int __must_check kstrtos8(const char *s, unsigned int base, s8 *res);
++int __must_check kstrtobool(const char *s, bool *res);
++
++int __must_check kstrtoull_from_user(const char __user *s, size_t count, unsigned int base, unsigned long long *res);
++int __must_check kstrtoll_from_user(const char __user *s, size_t count, unsigned int base, long long *res);
++int __must_check kstrtoul_from_user(const char __user *s, size_t count, unsigned int base, unsigned long *res);
++int __must_check kstrtol_from_user(const char __user *s, size_t count, unsigned int base, long *res);
++int __must_check kstrtouint_from_user(const char __user *s, size_t count, unsigned int base, unsigned int *res);
++int __must_check kstrtoint_from_user(const char __user *s, size_t count, unsigned int base, int *res);
++int __must_check kstrtou16_from_user(const char __user *s, size_t count, unsigned int base, u16 *res);
++int __must_check kstrtos16_from_user(const char __user *s, size_t count, unsigned int base, s16 *res);
++int __must_check kstrtou8_from_user(const char __user *s, size_t count, unsigned int base, u8 *res);
++int __must_check kstrtos8_from_user(const char __user *s, size_t count, unsigned int base, s8 *res);
++int __must_check kstrtobool_from_user(const char __user *s, size_t count, bool *res);
++
++static inline int __must_check kstrtou64_from_user(const char __user *s, size_t count, unsigned int base, u64 *res)
++{
++	return kstrtoull_from_user(s, count, base, res);
++}
++
++static inline int __must_check kstrtos64_from_user(const char __user *s, size_t count, unsigned int base, s64 *res)
++{
++	return kstrtoll_from_user(s, count, base, res);
++}
++
++static inline int __must_check kstrtou32_from_user(const char __user *s, size_t count, unsigned int base, u32 *res)
++{
++	return kstrtouint_from_user(s, count, base, res);
++}
++
++static inline int __must_check kstrtos32_from_user(const char __user *s, size_t count, unsigned int base, s32 *res)
++{
++	return kstrtoint_from_user(s, count, base, res);
++}
++
++/*
++ * Use kstrto<foo> instead.
++ *
++ * NOTE: simple_strto<foo> does not check for the range overflow and,
++ *	 depending on the input, may give interesting results.
++ *
++ * Use these functions if and only if you cannot use kstrto<foo>, because
++ * the conversion ends on the first non-digit character, which may be far
++ * beyond the supported range. It might be useful to parse the strings like
++ * 10x50 or 12:21 without altering original string or temporary buffer in use.
++ * Keep in mind above caveat.
++ */
++
++extern unsigned long simple_strtoul(const char *,char **,unsigned int);
++extern long simple_strtol(const char *,char **,unsigned int);
++extern unsigned long long simple_strtoull(const char *,char **,unsigned int);
++extern long long simple_strtoll(const char *,char **,unsigned int);
++
++static inline int strtobool(const char *s, bool *res)
++{
++	return kstrtobool(s, res);
++}
++
++#endif	/* _LINUX_KSTRTOX_H */
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 24fe2cd4b0e8d..8f03cc42bd43f 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -1759,7 +1759,6 @@ enum netdev_ml_priv_type {
+  *	@tipc_ptr:	TIPC specific data
+  *	@atalk_ptr:	AppleTalk link
+  *	@ip_ptr:	IPv4 specific data
+- *	@dn_ptr:	DECnet specific data
+  *	@ip6_ptr:	IPv6 specific data
+  *	@ax25_ptr:	AX.25 specific data
+  *	@ieee80211_ptr:	IEEE 802.11 specific data, assign before registering
+@@ -2038,9 +2037,6 @@ struct net_device {
+ 	void 			*atalk_ptr;
+ #endif
+ 	struct in_device __rcu	*ip_ptr;
+-#if IS_ENABLED(CONFIG_DECNET)
+-	struct dn_dev __rcu     *dn_ptr;
+-#endif
+ 	struct inet6_dev __rcu	*ip6_ptr;
+ #if IS_ENABLED(CONFIG_AX25)
+ 	void			*ax25_ptr;
+diff --git a/include/linux/netfilter.h b/include/linux/netfilter.h
+index 0101747de5493..8b229abf1435c 100644
+--- a/include/linux/netfilter.h
++++ b/include/linux/netfilter.h
+@@ -237,11 +237,6 @@ static inline int nf_hook(u_int8_t pf, unsigned int hook, struct net *net,
+ 		hook_head = rcu_dereference(net->nf.hooks_bridge[hook]);
+ #endif
+ 		break;
+-#if IS_ENABLED(CONFIG_DECNET)
+-	case NFPROTO_DECNET:
+-		hook_head = rcu_dereference(net->nf.hooks_decnet[hook]);
+-		break;
+-#endif
+ 	default:
+ 		WARN_ON_ONCE(1);
+ 		break;
+diff --git a/include/linux/netfilter_defs.h b/include/linux/netfilter_defs.h
+index 8dddfb151f004..a5f7bef1b3a47 100644
+--- a/include/linux/netfilter_defs.h
++++ b/include/linux/netfilter_defs.h
+@@ -7,14 +7,6 @@
+ /* in/out/forward only */
+ #define NF_ARP_NUMHOOKS 3
+ 
+-/* max hook is NF_DN_ROUTE (6), also see uapi/linux/netfilter_decnet.h */
+-#define NF_DN_NUMHOOKS 7
+-
+-#if IS_ENABLED(CONFIG_DECNET)
+-/* Largest hook number + 1, see uapi/linux/netfilter_decnet.h */
+-#define NF_MAX_HOOKS	NF_DN_NUMHOOKS
+-#else
+ #define NF_MAX_HOOKS	NF_INET_NUMHOOKS
+-#endif
+ 
+ #endif
+diff --git a/include/linux/string.h b/include/linux/string.h
+index b1f3894a0a3e4..0cef345a6e87a 100644
+--- a/include/linux/string.h
++++ b/include/linux/string.h
+@@ -2,7 +2,6 @@
+ #ifndef _LINUX_STRING_H_
+ #define _LINUX_STRING_H_
+ 
+-
+ #include <linux/compiler.h>	/* for inline */
+ #include <linux/types.h>	/* for size_t */
+ #include <linux/stddef.h>	/* for NULL */
+@@ -183,12 +182,6 @@ extern char **argv_split(gfp_t gfp, const char *str, int *argcp);
+ extern void argv_free(char **argv);
+ 
+ extern bool sysfs_streq(const char *s1, const char *s2);
+-extern int kstrtobool(const char *s, bool *res);
+-static inline int strtobool(const char *s, bool *res)
+-{
+-	return kstrtobool(s, res);
+-}
+-
+ int match_string(const char * const *array, size_t n, const char *string);
+ int __sysfs_match_string(const char * const *array, size_t n, const char *s);
+ 
+diff --git a/include/linux/sunrpc/cache.h b/include/linux/sunrpc/cache.h
+index d0965e2997b09..b134b2b3371cf 100644
+--- a/include/linux/sunrpc/cache.h
++++ b/include/linux/sunrpc/cache.h
+@@ -14,6 +14,7 @@
+ #include <linux/kref.h>
+ #include <linux/slab.h>
+ #include <linux/atomic.h>
++#include <linux/kstrtox.h>
+ #include <linux/proc_fs.h>
+ 
+ /*
+diff --git a/include/media/dvbdev.h b/include/media/dvbdev.h
+index c493a3ffe45c8..91ff2d5fcd084 100644
+--- a/include/media/dvbdev.h
++++ b/include/media/dvbdev.h
+@@ -189,6 +189,21 @@ struct dvb_device {
+ 	void *priv;
+ };
+ 
++/**
++ * struct dvbdevfops_node - fops nodes registered in dvbdevfops_list
++ *
++ * @fops:		Dynamically allocated fops for ->owner registration
++ * @type:		type of dvb_device
++ * @template:		dvb_device used for registration
++ * @list_head:		list_head for dvbdevfops_list
++ */
++struct dvbdevfops_node {
++	struct file_operations *fops;
++	enum dvb_device_type type;
++	const struct dvb_device *template;
++	struct list_head list_head;
++};
++
+ /**
+  * dvb_device_get - Increase dvb_device reference
+  *
+diff --git a/include/net/dn.h b/include/net/dn.h
+deleted file mode 100644
+index 56ab0726c641a..0000000000000
+--- a/include/net/dn.h
++++ /dev/null
+@@ -1,231 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _NET_DN_H
+-#define _NET_DN_H
+-
+-#include <linux/dn.h>
+-#include <net/sock.h>
+-#include <net/flow.h>
+-#include <asm/byteorder.h>
+-#include <asm/unaligned.h>
+-
+-struct dn_scp                                   /* Session Control Port */
+-{
+-        unsigned char           state;
+-#define DN_O     1                      /* Open                 */
+-#define DN_CR    2                      /* Connect Receive      */
+-#define DN_DR    3                      /* Disconnect Reject    */
+-#define DN_DRC   4                      /* Discon. Rej. Complete*/
+-#define DN_CC    5                      /* Connect Confirm      */
+-#define DN_CI    6                      /* Connect Initiate     */
+-#define DN_NR    7                      /* No resources         */
+-#define DN_NC    8                      /* No communication     */
+-#define DN_CD    9                      /* Connect Delivery     */
+-#define DN_RJ    10                     /* Rejected             */
+-#define DN_RUN   11                     /* Running              */
+-#define DN_DI    12                     /* Disconnect Initiate  */
+-#define DN_DIC   13                     /* Disconnect Complete  */
+-#define DN_DN    14                     /* Disconnect Notificat */
+-#define DN_CL    15                     /* Closed               */
+-#define DN_CN    16                     /* Closed Notification  */
+-
+-        __le16          addrloc;
+-        __le16          addrrem;
+-        __u16          numdat;
+-        __u16          numoth;
+-        __u16          numoth_rcv;
+-        __u16          numdat_rcv;
+-        __u16          ackxmt_dat;
+-        __u16          ackxmt_oth;
+-        __u16          ackrcv_dat;
+-        __u16          ackrcv_oth;
+-        __u8           flowrem_sw;
+-	__u8           flowloc_sw;
+-#define DN_SEND         2
+-#define DN_DONTSEND     1
+-#define DN_NOCHANGE     0
+-	__u16		flowrem_dat;
+-	__u16		flowrem_oth;
+-	__u16		flowloc_dat;
+-	__u16		flowloc_oth;
+-	__u8		services_rem;
+-	__u8		services_loc;
+-	__u8		info_rem;
+-	__u8		info_loc;
+-
+-	__u16		segsize_rem;
+-	__u16		segsize_loc;
+-
+-	__u8		nonagle;
+-	__u8		multi_ireq;
+-	__u8		accept_mode;
+-	unsigned long		seg_total; /* Running total of current segment */
+-
+-	struct optdata_dn     conndata_in;
+-	struct optdata_dn     conndata_out;
+-	struct optdata_dn     discdata_in;
+-	struct optdata_dn     discdata_out;
+-        struct accessdata_dn  accessdata;
+-
+-        struct sockaddr_dn addr; /* Local address  */
+-	struct sockaddr_dn peer; /* Remote address */
+-
+-	/*
+-	 * In this case the RTT estimation is not specified in the
+-	 * docs, nor is any back off algorithm. Here we follow well
+-	 * known tcp algorithms with a few small variations.
+-	 *
+-	 * snd_window: Max number of packets we send before we wait for
+-	 *             an ack to come back. This will become part of a
+-	 *             more complicated scheme when we support flow
+-	 *             control.
+-	 *
+-	 * nsp_srtt:   Round-Trip-Time (x8) in jiffies. This is a rolling
+-	 *             average.
+-	 * nsp_rttvar: Round-Trip-Time-Varience (x4) in jiffies. This is the
+-	 *             varience of the smoothed average (but calculated in
+-	 *             a simpler way than for normal statistical varience
+-	 *             calculations).
+-	 *
+-	 * nsp_rxtshift: Backoff counter. Value is zero normally, each time
+-	 *               a packet is lost is increases by one until an ack
+-	 *               is received. Its used to index an array of backoff
+-	 *               multipliers.
+-	 */
+-#define NSP_MIN_WINDOW 1
+-#define NSP_MAX_WINDOW (0x07fe)
+-	unsigned long max_window;
+-	unsigned long snd_window;
+-#define NSP_INITIAL_SRTT (HZ)
+-	unsigned long nsp_srtt;
+-#define NSP_INITIAL_RTTVAR (HZ*3)
+-	unsigned long nsp_rttvar;
+-#define NSP_MAXRXTSHIFT 12
+-	unsigned long nsp_rxtshift;
+-
+-	/*
+-	 * Output queues, one for data, one for otherdata/linkservice
+-	 */
+-	struct sk_buff_head data_xmit_queue;
+-	struct sk_buff_head other_xmit_queue;
+-
+-	/*
+-	 * Input queue for other data
+-	 */
+-	struct sk_buff_head other_receive_queue;
+-	int other_report;
+-
+-	/*
+-	 * Stuff to do with the slow timer
+-	 */
+-	unsigned long stamp;          /* time of last transmit */
+-	unsigned long persist;
+-	int (*persist_fxn)(struct sock *sk);
+-	unsigned long keepalive;
+-	void (*keepalive_fxn)(struct sock *sk);
+-
+-};
+-
+-static inline struct dn_scp *DN_SK(struct sock *sk)
+-{
+-	return (struct dn_scp *)(sk + 1);
+-}
+-
+-/*
+- * src,dst : Source and Destination DECnet addresses
+- * hops : Number of hops through the network
+- * dst_port, src_port : NSP port numbers
+- * services, info : Useful data extracted from conninit messages
+- * rt_flags : Routing flags byte
+- * nsp_flags : NSP layer flags byte
+- * segsize : Size of segment
+- * segnum : Number, for data, otherdata and linkservice
+- * xmit_count : Number of times we've transmitted this skb
+- * stamp : Time stamp of most recent transmission, used in RTT calculations
+- * iif: Input interface number
+- *
+- * As a general policy, this structure keeps all addresses in network
+- * byte order, and all else in host byte order. Thus dst, src, dst_port
+- * and src_port are in network order. All else is in host order.
+- * 
+- */
+-#define DN_SKB_CB(skb) ((struct dn_skb_cb *)(skb)->cb)
+-struct dn_skb_cb {
+-	__le16 dst;
+-	__le16 src;
+-	__u16 hops;
+-	__le16 dst_port;
+-	__le16 src_port;
+-	__u8 services;
+-	__u8 info;
+-	__u8 rt_flags;
+-	__u8 nsp_flags;
+-	__u16 segsize;
+-	__u16 segnum;
+-	__u16 xmit_count;
+-	unsigned long stamp;
+-	int iif;
+-};
+-
+-static inline __le16 dn_eth2dn(unsigned char *ethaddr)
+-{
+-	return get_unaligned((__le16 *)(ethaddr + 4));
+-}
+-
+-static inline __le16 dn_saddr2dn(struct sockaddr_dn *saddr)
+-{
+-	return *(__le16 *)saddr->sdn_nodeaddr;
+-}
+-
+-static inline void dn_dn2eth(unsigned char *ethaddr, __le16 addr)
+-{
+-	__u16 a = le16_to_cpu(addr);
+-	ethaddr[0] = 0xAA;
+-	ethaddr[1] = 0x00;
+-	ethaddr[2] = 0x04;
+-	ethaddr[3] = 0x00;
+-	ethaddr[4] = (__u8)(a & 0xff);
+-	ethaddr[5] = (__u8)(a >> 8);
+-}
+-
+-static inline void dn_sk_ports_copy(struct flowidn *fld, struct dn_scp *scp)
+-{
+-	fld->fld_sport = scp->addrloc;
+-	fld->fld_dport = scp->addrrem;
+-}
+-
+-unsigned int dn_mss_from_pmtu(struct net_device *dev, int mtu);
+-void dn_register_sysctl(void);
+-void dn_unregister_sysctl(void);
+-
+-#define DN_MENUVER_ACC 0x01
+-#define DN_MENUVER_USR 0x02
+-#define DN_MENUVER_PRX 0x04
+-#define DN_MENUVER_UIC 0x08
+-
+-struct sock *dn_sklist_find_listener(struct sockaddr_dn *addr);
+-struct sock *dn_find_by_skb(struct sk_buff *skb);
+-#define DN_ASCBUF_LEN 9
+-char *dn_addr2asc(__u16, char *);
+-int dn_destroy_timer(struct sock *sk);
+-
+-int dn_sockaddr2username(struct sockaddr_dn *addr, unsigned char *buf,
+-			 unsigned char type);
+-int dn_username2sockaddr(unsigned char *data, int len, struct sockaddr_dn *addr,
+-			 unsigned char *type);
+-
+-void dn_start_slow_timer(struct sock *sk);
+-void dn_stop_slow_timer(struct sock *sk);
+-
+-extern __le16 decnet_address;
+-extern int decnet_debug_level;
+-extern int decnet_time_wait;
+-extern int decnet_dn_count;
+-extern int decnet_di_count;
+-extern int decnet_dr_count;
+-extern int decnet_no_fc_max_cwnd;
+-
+-extern long sysctl_decnet_mem[3];
+-extern int sysctl_decnet_wmem[3];
+-extern int sysctl_decnet_rmem[3];
+-
+-#endif /* _NET_DN_H */
+diff --git a/include/net/dn_dev.h b/include/net/dn_dev.h
+deleted file mode 100644
+index 595b4f6c1eb10..0000000000000
+--- a/include/net/dn_dev.h
++++ /dev/null
+@@ -1,199 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _NET_DN_DEV_H
+-#define _NET_DN_DEV_H
+-
+-
+-struct dn_dev;
+-
+-struct dn_ifaddr {
+-	struct dn_ifaddr __rcu *ifa_next;
+-	struct dn_dev    *ifa_dev;
+-	__le16            ifa_local;
+-	__le16            ifa_address;
+-	__u32             ifa_flags;
+-	__u8              ifa_scope;
+-	char              ifa_label[IFNAMSIZ];
+-	struct rcu_head   rcu;
+-};
+-
+-#define DN_DEV_S_RU  0 /* Run - working normally   */
+-#define DN_DEV_S_CR  1 /* Circuit Rejected         */
+-#define DN_DEV_S_DS  2 /* Data Link Start          */
+-#define DN_DEV_S_RI  3 /* Routing Layer Initialize */
+-#define DN_DEV_S_RV  4 /* Routing Layer Verify     */
+-#define DN_DEV_S_RC  5 /* Routing Layer Complete   */
+-#define DN_DEV_S_OF  6 /* Off                      */
+-#define DN_DEV_S_HA  7 /* Halt                     */
+-
+-
+-/*
+- * The dn_dev_parms structure contains the set of parameters
+- * for each device (hence inclusion in the dn_dev structure)
+- * and an array is used to store the default types of supported
+- * device (in dn_dev.c).
+- *
+- * The type field matches the ARPHRD_ constants and is used in
+- * searching the list for supported devices when new devices
+- * come up.
+- *
+- * The mode field is used to find out if a device is broadcast,
+- * multipoint, or pointopoint. Please note that DECnet thinks
+- * different ways about devices to the rest of the kernel
+- * so the normal IFF_xxx flags are invalid here. For devices
+- * which can be any combination of the previously mentioned
+- * attributes, you can set this on a per device basis by
+- * installing an up() routine.
+- *
+- * The device state field, defines the initial state in which the
+- * device will come up. In the dn_dev structure, it is the actual
+- * state.
+- *
+- * Things have changed here. I've killed timer1 since it's a user space
+- * issue for a user space routing deamon to sort out. The kernel does
+- * not need to be bothered with it.
+- *
+- * Timers:
+- * t2 - Rate limit timer, min time between routing and hello messages
+- * t3 - Hello timer, send hello messages when it expires
+- *
+- * Callbacks:
+- * up() - Called to initialize device, return value can veto use of
+- *        device with DECnet.
+- * down() - Called to turn device off when it goes down
+- * timer3() - Called once for each ifaddr when timer 3 goes off
+- * 
+- * sysctl - Hook for sysctl things
+- *
+- */
+-struct dn_dev_parms {
+-	int type;	          /* ARPHRD_xxx                         */
+-	int mode;	          /* Broadcast, Unicast, Mulitpoint     */
+-#define DN_DEV_BCAST  1
+-#define DN_DEV_UCAST  2
+-#define DN_DEV_MPOINT 4
+-	int state;                /* Initial state                      */
+-	int forwarding;	          /* 0=EndNode, 1=L1Router, 2=L2Router  */
+-	unsigned long t2;         /* Default value of t2                */
+-	unsigned long t3;         /* Default value of t3                */
+-	int priority;             /* Priority to be a router            */
+-	char *name;               /* Name for sysctl                    */
+-	int  (*up)(struct net_device *);
+-	void (*down)(struct net_device *);
+-	void (*timer3)(struct net_device *, struct dn_ifaddr *ifa);
+-	void *sysctl;
+-};
+-
+-
+-struct dn_dev {
+-	struct dn_ifaddr __rcu *ifa_list;
+-	struct net_device *dev;
+-	struct dn_dev_parms parms;
+-	char use_long;
+-	struct timer_list timer;
+-	unsigned long t3;
+-	struct neigh_parms *neigh_parms;
+-	__u8 addr[ETH_ALEN];
+-	struct neighbour *router; /* Default router on circuit */
+-	struct neighbour *peer;   /* Peer on pointopoint links */
+-	unsigned long uptime;     /* Time device went up in jiffies */
+-};
+-
+-struct dn_short_packet {
+-	__u8    msgflg;
+-	__le16 dstnode;
+-	__le16 srcnode;
+-	__u8   forward;
+-} __packed;
+-
+-struct dn_long_packet {
+-	__u8   msgflg;
+-	__u8   d_area;
+-	__u8   d_subarea;
+-	__u8   d_id[6];
+-	__u8   s_area;
+-	__u8   s_subarea;
+-	__u8   s_id[6];
+-	__u8   nl2;
+-	__u8   visit_ct;
+-	__u8   s_class;
+-	__u8   pt;
+-} __packed;
+-
+-/*------------------------- DRP - Routing messages ---------------------*/
+-
+-struct endnode_hello_message {
+-	__u8   msgflg;
+-	__u8   tiver[3];
+-	__u8   id[6];
+-	__u8   iinfo;
+-	__le16 blksize;
+-	__u8   area;
+-	__u8   seed[8];
+-	__u8   neighbor[6];
+-	__le16 timer;
+-	__u8   mpd;
+-	__u8   datalen;
+-	__u8   data[2];
+-} __packed;
+-
+-struct rtnode_hello_message {
+-	__u8   msgflg;
+-	__u8   tiver[3];
+-	__u8   id[6];
+-	__u8   iinfo;
+-	__le16  blksize;
+-	__u8   priority;
+-	__u8   area;
+-	__le16  timer;
+-	__u8   mpd;
+-} __packed;
+-
+-
+-void dn_dev_init(void);
+-void dn_dev_cleanup(void);
+-
+-int dn_dev_ioctl(unsigned int cmd, void __user *arg);
+-
+-void dn_dev_devices_off(void);
+-void dn_dev_devices_on(void);
+-
+-void dn_dev_init_pkt(struct sk_buff *skb);
+-void dn_dev_veri_pkt(struct sk_buff *skb);
+-void dn_dev_hello(struct sk_buff *skb);
+-
+-void dn_dev_up(struct net_device *);
+-void dn_dev_down(struct net_device *);
+-
+-int dn_dev_set_default(struct net_device *dev, int force);
+-struct net_device *dn_dev_get_default(void);
+-int dn_dev_bind_default(__le16 *addr);
+-
+-int register_dnaddr_notifier(struct notifier_block *nb);
+-int unregister_dnaddr_notifier(struct notifier_block *nb);
+-
+-static inline int dn_dev_islocal(struct net_device *dev, __le16 addr)
+-{
+-	struct dn_dev *dn_db;
+-	struct dn_ifaddr *ifa;
+-	int res = 0;
+-
+-	rcu_read_lock();
+-	dn_db = rcu_dereference(dev->dn_ptr);
+-	if (dn_db == NULL) {
+-		printk(KERN_DEBUG "dn_dev_islocal: Called for non DECnet device\n");
+-		goto out;
+-	}
+-
+-	for (ifa = rcu_dereference(dn_db->ifa_list);
+-	     ifa != NULL;
+-	     ifa = rcu_dereference(ifa->ifa_next))
+-		if ((addr ^ ifa->ifa_local) == 0) {
+-			res = 1;
+-			break;
+-		}
+-out:
+-	rcu_read_unlock();
+-	return res;
+-}
+-
+-#endif /* _NET_DN_DEV_H */
+diff --git a/include/net/dn_fib.h b/include/net/dn_fib.h
+deleted file mode 100644
+index ccc6e9df178bd..0000000000000
+--- a/include/net/dn_fib.h
++++ /dev/null
+@@ -1,167 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _NET_DN_FIB_H
+-#define _NET_DN_FIB_H
+-
+-#include <linux/netlink.h>
+-#include <linux/refcount.h>
+-
+-extern const struct nla_policy rtm_dn_policy[];
+-
+-struct dn_fib_res {
+-	struct fib_rule *r;
+-	struct dn_fib_info *fi;
+-	unsigned char prefixlen;
+-	unsigned char nh_sel;
+-	unsigned char type;
+-	unsigned char scope;
+-};
+-
+-struct dn_fib_nh {
+-	struct net_device	*nh_dev;
+-	unsigned int		nh_flags;
+-	unsigned char		nh_scope;
+-	int			nh_weight;
+-	int			nh_power;
+-	int			nh_oif;
+-	__le16			nh_gw;
+-};
+-
+-struct dn_fib_info {
+-	struct dn_fib_info	*fib_next;
+-	struct dn_fib_info	*fib_prev;
+-	int 			fib_treeref;
+-	refcount_t		fib_clntref;
+-	int			fib_dead;
+-	unsigned int		fib_flags;
+-	int			fib_protocol;
+-	__le16			fib_prefsrc;
+-	__u32			fib_priority;
+-	__u32			fib_metrics[RTAX_MAX];
+-	int			fib_nhs;
+-	int			fib_power;
+-	struct dn_fib_nh	fib_nh[0];
+-#define dn_fib_dev		fib_nh[0].nh_dev
+-};
+-
+-
+-#define DN_FIB_RES_RESET(res)	((res).nh_sel = 0)
+-#define DN_FIB_RES_NH(res)	((res).fi->fib_nh[(res).nh_sel])
+-
+-#define DN_FIB_RES_PREFSRC(res)	((res).fi->fib_prefsrc ? : __dn_fib_res_prefsrc(&res))
+-#define DN_FIB_RES_GW(res)	(DN_FIB_RES_NH(res).nh_gw)
+-#define DN_FIB_RES_DEV(res)	(DN_FIB_RES_NH(res).nh_dev)
+-#define DN_FIB_RES_OIF(res)	(DN_FIB_RES_NH(res).nh_oif)
+-
+-typedef struct {
+-	__le16	datum;
+-} dn_fib_key_t;
+-
+-typedef struct {
+-	__le16	datum;
+-} dn_fib_hash_t;
+-
+-typedef struct {
+-	__u16	datum;
+-} dn_fib_idx_t;
+-
+-struct dn_fib_node {
+-	struct dn_fib_node *fn_next;
+-	struct dn_fib_info *fn_info;
+-#define DN_FIB_INFO(f) ((f)->fn_info)
+-	dn_fib_key_t	fn_key;
+-	u8		fn_type;
+-	u8		fn_scope;
+-	u8		fn_state;
+-};
+-
+-
+-struct dn_fib_table {
+-	struct hlist_node hlist;
+-	u32 n;
+-
+-	int (*insert)(struct dn_fib_table *t, struct rtmsg *r, 
+-			struct nlattr *attrs[], struct nlmsghdr *n,
+-			struct netlink_skb_parms *req);
+-	int (*delete)(struct dn_fib_table *t, struct rtmsg *r,
+-			struct nlattr *attrs[], struct nlmsghdr *n,
+-			struct netlink_skb_parms *req);
+-	int (*lookup)(struct dn_fib_table *t, const struct flowidn *fld,
+-			struct dn_fib_res *res);
+-	int (*flush)(struct dn_fib_table *t);
+-	int (*dump)(struct dn_fib_table *t, struct sk_buff *skb, struct netlink_callback *cb);
+-
+-	unsigned char data[];
+-};
+-
+-#ifdef CONFIG_DECNET_ROUTER
+-/*
+- * dn_fib.c
+- */
+-void dn_fib_init(void);
+-void dn_fib_cleanup(void);
+-
+-int dn_fib_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg);
+-struct dn_fib_info *dn_fib_create_info(const struct rtmsg *r,
+-				       struct nlattr *attrs[],
+-				       const struct nlmsghdr *nlh, int *errp);
+-int dn_fib_semantic_match(int type, struct dn_fib_info *fi,
+-			  const struct flowidn *fld, struct dn_fib_res *res);
+-void dn_fib_release_info(struct dn_fib_info *fi);
+-void dn_fib_flush(void);
+-void dn_fib_select_multipath(const struct flowidn *fld, struct dn_fib_res *res);
+-
+-/*
+- * dn_tables.c
+- */
+-struct dn_fib_table *dn_fib_get_table(u32 n, int creat);
+-struct dn_fib_table *dn_fib_empty_table(void);
+-void dn_fib_table_init(void);
+-void dn_fib_table_cleanup(void);
+-
+-/*
+- * dn_rules.c
+- */
+-void dn_fib_rules_init(void);
+-void dn_fib_rules_cleanup(void);
+-unsigned int dnet_addr_type(__le16 addr);
+-int dn_fib_lookup(struct flowidn *fld, struct dn_fib_res *res);
+-
+-int dn_fib_dump(struct sk_buff *skb, struct netlink_callback *cb);
+-
+-void dn_fib_free_info(struct dn_fib_info *fi);
+-
+-static inline void dn_fib_info_put(struct dn_fib_info *fi)
+-{
+-	if (refcount_dec_and_test(&fi->fib_clntref))
+-		dn_fib_free_info(fi);
+-}
+-
+-static inline void dn_fib_res_put(struct dn_fib_res *res)
+-{
+-	if (res->fi)
+-		dn_fib_info_put(res->fi);
+-	if (res->r)
+-		fib_rule_put(res->r);
+-}
+-
+-#else /* Endnode */
+-
+-#define dn_fib_init()  do { } while(0)
+-#define dn_fib_cleanup() do { } while(0)
+-
+-#define dn_fib_lookup(fl, res) (-ESRCH)
+-#define dn_fib_info_put(fi) do { } while(0)
+-#define dn_fib_select_multipath(fl, res) do { } while(0)
+-#define dn_fib_rules_policy(saddr,res,flags) (0)
+-#define dn_fib_res_put(res) do { } while(0)
+-
+-#endif /* CONFIG_DECNET_ROUTER */
+-
+-static inline __le16 dnet_make_mask(int n)
+-{
+-	if (n)
+-		return cpu_to_le16(~((1 << (16 - n)) - 1));
+-	return cpu_to_le16(0);
+-}
+-
+-#endif /* _NET_DN_FIB_H */
+diff --git a/include/net/dn_neigh.h b/include/net/dn_neigh.h
+deleted file mode 100644
+index 2e3e7793973a8..0000000000000
+--- a/include/net/dn_neigh.h
++++ /dev/null
+@@ -1,30 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _NET_DN_NEIGH_H
+-#define _NET_DN_NEIGH_H
+-
+-/*
+- * The position of the first two fields of
+- * this structure are critical - SJW
+- */
+-struct dn_neigh {
+-        struct neighbour n;
+-	__le16 addr;
+-        unsigned long flags;
+-#define DN_NDFLAG_R1    0x0001 /* Router L1      */
+-#define DN_NDFLAG_R2    0x0002 /* Router L2      */
+-#define DN_NDFLAG_P3    0x0004 /* Phase III Node */
+-        unsigned long blksize;
+-	__u8 priority;
+-};
+-
+-void dn_neigh_init(void);
+-void dn_neigh_cleanup(void);
+-int dn_neigh_router_hello(struct net *net, struct sock *sk, struct sk_buff *skb);
+-int dn_neigh_endnode_hello(struct net *net, struct sock *sk, struct sk_buff *skb);
+-void dn_neigh_pointopoint_hello(struct sk_buff *skb);
+-int dn_neigh_elist(struct net_device *dev, unsigned char *ptr, int n);
+-int dn_to_neigh_output(struct net *net, struct sock *sk, struct sk_buff *skb);
+-
+-extern struct neigh_table dn_neigh_table;
+-
+-#endif /* _NET_DN_NEIGH_H */
+diff --git a/include/net/dn_nsp.h b/include/net/dn_nsp.h
+deleted file mode 100644
+index f83932b864a93..0000000000000
+--- a/include/net/dn_nsp.h
++++ /dev/null
+@@ -1,195 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-or-later */
+-#ifndef _NET_DN_NSP_H
+-#define _NET_DN_NSP_H
+-/******************************************************************************
+-    (c) 1995-1998 E.M. Serrat		emserrat@geocities.com
+-    
+-*******************************************************************************/
+-/* dn_nsp.c functions prototyping */
+-
+-void dn_nsp_send_data_ack(struct sock *sk);
+-void dn_nsp_send_oth_ack(struct sock *sk);
+-void dn_send_conn_ack(struct sock *sk);
+-void dn_send_conn_conf(struct sock *sk, gfp_t gfp);
+-void dn_nsp_send_disc(struct sock *sk, unsigned char type,
+-		      unsigned short reason, gfp_t gfp);
+-void dn_nsp_return_disc(struct sk_buff *skb, unsigned char type,
+-			unsigned short reason);
+-void dn_nsp_send_link(struct sock *sk, unsigned char lsflags, char fcval);
+-void dn_nsp_send_conninit(struct sock *sk, unsigned char flags);
+-
+-void dn_nsp_output(struct sock *sk);
+-int dn_nsp_check_xmit_queue(struct sock *sk, struct sk_buff *skb,
+-			    struct sk_buff_head *q, unsigned short acknum);
+-void dn_nsp_queue_xmit(struct sock *sk, struct sk_buff *skb, gfp_t gfp,
+-		       int oob);
+-unsigned long dn_nsp_persist(struct sock *sk);
+-int dn_nsp_xmit_timeout(struct sock *sk);
+-
+-int dn_nsp_rx(struct sk_buff *);
+-int dn_nsp_backlog_rcv(struct sock *sk, struct sk_buff *skb);
+-
+-struct sk_buff *dn_alloc_skb(struct sock *sk, int size, gfp_t pri);
+-struct sk_buff *dn_alloc_send_skb(struct sock *sk, size_t *size, int noblock,
+-				  long timeo, int *err);
+-
+-#define NSP_REASON_OK 0		/* No error */
+-#define NSP_REASON_NR 1		/* No resources */
+-#define NSP_REASON_UN 2		/* Unrecognised node name */
+-#define NSP_REASON_SD 3		/* Node shutting down */
+-#define NSP_REASON_ID 4		/* Invalid destination end user */
+-#define NSP_REASON_ER 5		/* End user lacks resources */
+-#define NSP_REASON_OB 6		/* Object too busy */
+-#define NSP_REASON_US 7		/* Unspecified error */
+-#define NSP_REASON_TP 8		/* Third-Party abort */
+-#define NSP_REASON_EA 9		/* End user has aborted the link */
+-#define NSP_REASON_IF 10	/* Invalid node name format */
+-#define NSP_REASON_LS 11	/* Local node shutdown */
+-#define NSP_REASON_LL 32	/* Node lacks logical-link resources */
+-#define NSP_REASON_LE 33	/* End user lacks logical-link resources */
+-#define NSP_REASON_UR 34	/* Unacceptable RQSTRID or PASSWORD field */
+-#define NSP_REASON_UA 36	/* Unacceptable ACCOUNT field */
+-#define NSP_REASON_TM 38	/* End user timed out logical link */
+-#define NSP_REASON_NU 39	/* Node unreachable */
+-#define NSP_REASON_NL 41	/* No-link message */
+-#define NSP_REASON_DC 42	/* Disconnect confirm */
+-#define NSP_REASON_IO 43	/* Image data field overflow */
+-
+-#define NSP_DISCINIT 0x38
+-#define NSP_DISCCONF 0x48
+-
+-/*------------------------- NSP - messages ------------------------------*/
+-/* Data Messages */
+-/*---------------*/
+-
+-/* Data Messages    (data segment/interrupt/link service)               */
+-
+-struct nsp_data_seg_msg {
+-	__u8   msgflg;
+-	__le16 dstaddr;
+-	__le16 srcaddr;
+-} __packed;
+-
+-struct nsp_data_opt_msg {
+-	__le16 acknum;
+-	__le16 segnum;
+-	__le16 lsflgs;
+-} __packed;
+-
+-struct nsp_data_opt_msg1 {
+-	__le16 acknum;
+-	__le16 segnum;
+-} __packed;
+-
+-
+-/* Acknowledgment Message (data/other data)                             */
+-struct nsp_data_ack_msg {
+-	__u8   msgflg;
+-	__le16 dstaddr;
+-	__le16 srcaddr;
+-	__le16 acknum;
+-} __packed;
+-
+-/* Connect Acknowledgment Message */
+-struct  nsp_conn_ack_msg {
+-	__u8 msgflg;
+-	__le16 dstaddr;
+-} __packed;
+-
+-
+-/* Connect Initiate/Retransmit Initiate/Connect Confirm */
+-struct  nsp_conn_init_msg {
+-	__u8   msgflg;
+-#define NSP_CI      0x18            /* Connect Initiate     */
+-#define NSP_RCI     0x68            /* Retrans. Conn Init   */
+-	__le16 dstaddr;
+-	__le16 srcaddr;
+-	__u8   services;
+-#define NSP_FC_NONE   0x00            /* Flow Control None    */
+-#define NSP_FC_SRC    0x04            /* Seg Req. Count       */
+-#define NSP_FC_SCMC   0x08            /* Sess. Control Mess   */
+-#define NSP_FC_MASK   0x0c            /* FC type mask         */
+-	__u8   info;
+-	__le16 segsize;
+-} __packed;
+-
+-/* Disconnect Initiate/Disconnect Confirm */
+-struct  nsp_disconn_init_msg {
+-	__u8   msgflg;
+-	__le16 dstaddr;
+-	__le16 srcaddr;
+-	__le16 reason;
+-} __packed;
+-
+-
+-
+-struct  srcobj_fmt {
+-	__u8   format;
+-	__u8   task;
+-	__le16 grpcode;
+-	__le16 usrcode;
+-	__u8   dlen;
+-} __packed;
+-
+-/*
+- * A collection of functions for manipulating the sequence
+- * numbers used in NSP. Similar in operation to the functions
+- * of the same name in TCP.
+- */
+-static __inline__ int dn_before(__u16 seq1, __u16 seq2)
+-{
+-        seq1 &= 0x0fff;
+-        seq2 &= 0x0fff;
+-
+-        return (int)((seq1 - seq2) & 0x0fff) > 2048;
+-}
+-
+-
+-static __inline__ int dn_after(__u16 seq1, __u16 seq2)
+-{
+-        seq1 &= 0x0fff;
+-        seq2 &= 0x0fff;
+-
+-        return (int)((seq2 - seq1) & 0x0fff) > 2048;
+-}
+-
+-static __inline__ int dn_equal(__u16 seq1, __u16 seq2)
+-{
+-        return ((seq1 ^ seq2) & 0x0fff) == 0;
+-}
+-
+-static __inline__ int dn_before_or_equal(__u16 seq1, __u16 seq2)
+-{
+-	return (dn_before(seq1, seq2) || dn_equal(seq1, seq2));
+-}
+-
+-static __inline__ void seq_add(__u16 *seq, __u16 off)
+-{
+-        (*seq) += off;
+-        (*seq) &= 0x0fff;
+-}
+-
+-static __inline__ int seq_next(__u16 seq1, __u16 seq2)
+-{
+-	return dn_equal(seq1 + 1, seq2);
+-}
+-
+-/*
+- * Can we delay the ack ?
+- */
+-static __inline__ int sendack(__u16 seq)
+-{
+-        return (int)((seq & 0x1000) ? 0 : 1);
+-}
+-
+-/*
+- * Is socket congested ?
+- */
+-static __inline__ int dn_congested(struct sock *sk)
+-{
+-        return atomic_read(&sk->sk_rmem_alloc) > (sk->sk_rcvbuf >> 1);
+-}
+-
+-#define DN_MAX_NSP_DATA_HEADER (11)
+-
+-#endif /* _NET_DN_NSP_H */
+diff --git a/include/net/dn_route.h b/include/net/dn_route.h
+deleted file mode 100644
+index 6f1e94ac0bdfc..0000000000000
+--- a/include/net/dn_route.h
++++ /dev/null
+@@ -1,115 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-or-later */
+-#ifndef _NET_DN_ROUTE_H
+-#define _NET_DN_ROUTE_H
+-
+-/******************************************************************************
+-    (c) 1995-1998 E.M. Serrat		emserrat@geocities.com
+-    
+-*******************************************************************************/
+-
+-struct sk_buff *dn_alloc_skb(struct sock *sk, int size, gfp_t pri);
+-int dn_route_output_sock(struct dst_entry __rcu **pprt, struct flowidn *,
+-			 struct sock *sk, int flags);
+-int dn_cache_dump(struct sk_buff *skb, struct netlink_callback *cb);
+-void dn_rt_cache_flush(int delay);
+-int dn_route_rcv(struct sk_buff *skb, struct net_device *dev,
+-		 struct packet_type *pt, struct net_device *orig_dev);
+-
+-/* Masks for flags field */
+-#define DN_RT_F_PID 0x07 /* Mask for packet type                      */
+-#define DN_RT_F_PF  0x80 /* Padding Follows                           */
+-#define DN_RT_F_VER 0x40 /* Version =0 discard packet if ==1          */
+-#define DN_RT_F_IE  0x20 /* Intra Ethernet, Reserved in short pkt     */
+-#define DN_RT_F_RTS 0x10 /* Packet is being returned to sender        */
+-#define DN_RT_F_RQR 0x08 /* Return packet to sender upon non-delivery */
+-
+-/* Mask for types of routing packets */
+-#define DN_RT_PKT_MSK   0x06
+-/* Types of routing packets */
+-#define DN_RT_PKT_SHORT 0x02 /* Short routing packet */
+-#define DN_RT_PKT_LONG  0x06 /* Long routing packet  */
+-
+-/* Mask for control/routing selection */
+-#define DN_RT_PKT_CNTL  0x01 /* Set to 1 if a control packet  */
+-/* Types of control packets */
+-#define DN_RT_CNTL_MSK  0x0f /* Mask for control packets      */
+-#define DN_RT_PKT_INIT  0x01 /* Initialisation packet         */
+-#define DN_RT_PKT_VERI  0x03 /* Verification Message          */
+-#define DN_RT_PKT_HELO  0x05 /* Hello and Test Message        */
+-#define DN_RT_PKT_L1RT  0x07 /* Level 1 Routing Message       */
+-#define DN_RT_PKT_L2RT  0x09 /* Level 2 Routing Message       */
+-#define DN_RT_PKT_ERTH  0x0b /* Ethernet Router Hello         */
+-#define DN_RT_PKT_EEDH  0x0d /* Ethernet EndNode Hello        */
+-
+-/* Values for info field in hello message */
+-#define DN_RT_INFO_TYPE 0x03 /* Type mask                     */
+-#define DN_RT_INFO_L1RT 0x02 /* L1 Router                     */
+-#define DN_RT_INFO_L2RT 0x01 /* L2 Router                     */
+-#define DN_RT_INFO_ENDN 0x03 /* EndNode                       */
+-#define DN_RT_INFO_VERI 0x04 /* Verification Reqd.            */
+-#define DN_RT_INFO_RJCT 0x08 /* Reject Flag, Reserved         */
+-#define DN_RT_INFO_VFLD 0x10 /* Verification Failed, Reserved */
+-#define DN_RT_INFO_NOML 0x20 /* No Multicast traffic accepted */
+-#define DN_RT_INFO_BLKR 0x40 /* Blocking Requested            */
+-
+-/*
+- * The fl structure is what we used to look up the route.
+- * The rt_saddr & rt_daddr entries are the same as key.saddr & key.daddr
+- * except for local input routes, where the rt_saddr = fl.fld_dst and
+- * rt_daddr = fl.fld_src to allow the route to be used for returning
+- * packets to the originating host.
+- */
+-struct dn_route {
+-	struct dst_entry dst;
+-	struct dn_route __rcu *dn_next;
+-
+-	struct neighbour *n;
+-
+-	struct flowidn fld;
+-
+-	__le16 rt_saddr;
+-	__le16 rt_daddr;
+-	__le16 rt_gateway;
+-	__le16 rt_local_src;	/* Source used for forwarding packets */
+-	__le16 rt_src_map;
+-	__le16 rt_dst_map;
+-
+-	unsigned int rt_flags;
+-	unsigned int rt_type;
+-};
+-
+-static inline bool dn_is_input_route(struct dn_route *rt)
+-{
+-	return rt->fld.flowidn_iif != 0;
+-}
+-
+-static inline bool dn_is_output_route(struct dn_route *rt)
+-{
+-	return rt->fld.flowidn_iif == 0;
+-}
+-
+-void dn_route_init(void);
+-void dn_route_cleanup(void);
+-
+-#include <net/sock.h>
+-#include <linux/if_arp.h>
+-
+-static inline void dn_rt_send(struct sk_buff *skb)
+-{
+-	dev_queue_xmit(skb);
+-}
+-
+-static inline void dn_rt_finish_output(struct sk_buff *skb, char *dst, char *src)
+-{
+-	struct net_device *dev = skb->dev;
+-
+-	if ((dev->type != ARPHRD_ETHER) && (dev->type != ARPHRD_LOOPBACK))
+-		dst = NULL;
+-
+-	if (dev_hard_header(skb, dev, ETH_P_DNA_RT, dst, src, skb->len) >= 0)
+-		dn_rt_send(skb);
+-	else
+-		kfree_skb(skb);
+-}
+-
+-#endif /* _NET_DN_ROUTE_H */
+diff --git a/include/net/dst.h b/include/net/dst.h
+index ae2cf57d796b9..48e613420b952 100644
+--- a/include/net/dst.h
++++ b/include/net/dst.h
+@@ -235,12 +235,6 @@ static inline void dst_use_noref(struct dst_entry *dst, unsigned long time)
+ 	}
+ }
+ 
+-static inline void dst_hold_and_use(struct dst_entry *dst, unsigned long time)
+-{
+-	dst_hold(dst);
+-	dst_use_noref(dst, time);
+-}
+-
+ static inline struct dst_entry *dst_clone(struct dst_entry *dst)
+ {
+ 	if (dst)
+diff --git a/include/net/flow.h b/include/net/flow.h
+index 39d0cedcddeee..236d0941c69f0 100644
+--- a/include/net/flow.h
++++ b/include/net/flow.h
+@@ -54,11 +54,6 @@ union flowi_uli {
+ 		__u8	code;
+ 	} icmpt;
+ 
+-	struct {
+-		__le16	dport;
+-		__le16	sport;
+-	} dnports;
+-
+ 	__be32		spi;
+ 	__be32		gre_key;
+ 
+@@ -156,27 +151,11 @@ struct flowi6 {
+ 	__u32			mp_hash;
+ } __attribute__((__aligned__(BITS_PER_LONG/8)));
+ 
+-struct flowidn {
+-	struct flowi_common	__fl_common;
+-#define flowidn_oif		__fl_common.flowic_oif
+-#define flowidn_iif		__fl_common.flowic_iif
+-#define flowidn_mark		__fl_common.flowic_mark
+-#define flowidn_scope		__fl_common.flowic_scope
+-#define flowidn_proto		__fl_common.flowic_proto
+-#define flowidn_flags		__fl_common.flowic_flags
+-	__le16			daddr;
+-	__le16			saddr;
+-	union flowi_uli		uli;
+-#define fld_sport		uli.ports.sport
+-#define fld_dport		uli.ports.dport
+-} __attribute__((__aligned__(BITS_PER_LONG/8)));
+-
+ struct flowi {
+ 	union {
+ 		struct flowi_common	__fl_common;
+ 		struct flowi4		ip4;
+ 		struct flowi6		ip6;
+-		struct flowidn		dn;
+ 	} u;
+ #define flowi_oif	u.__fl_common.flowic_oif
+ #define flowi_iif	u.__fl_common.flowic_iif
+@@ -210,11 +189,6 @@ static inline struct flowi_common *flowi6_to_flowi_common(struct flowi6 *fl6)
+ 	return &(flowi6_to_flowi(fl6)->u.__fl_common);
+ }
+ 
+-static inline struct flowi *flowidn_to_flowi(struct flowidn *fldn)
+-{
+-	return container_of(fldn, struct flowi, u.dn);
+-}
+-
+ __u32 __get_hash_from_flowi6(const struct flowi6 *fl6, struct flow_keys *keys);
+ 
+ #endif
+diff --git a/include/net/neighbour.h b/include/net/neighbour.h
+index abb22cfd4827f..0e9d33e4439ec 100644
+--- a/include/net/neighbour.h
++++ b/include/net/neighbour.h
+@@ -260,11 +260,6 @@ static inline void *neighbour_priv(const struct neighbour *n)
+ 
+ extern const struct nla_policy nda_policy[];
+ 
+-static inline bool neigh_key_eq16(const struct neighbour *n, const void *pkey)
+-{
+-	return *(const u16 *)n->primary_key == *(const u16 *)pkey;
+-}
+-
+ static inline bool neigh_key_eq32(const struct neighbour *n, const void *pkey)
+ {
+ 	return *(const u32 *)n->primary_key == *(const u32 *)pkey;
+@@ -314,8 +309,6 @@ void neigh_table_init(int index, struct neigh_table *tbl);
+ int neigh_table_clear(int index, struct neigh_table *tbl);
+ struct neighbour *neigh_lookup(struct neigh_table *tbl, const void *pkey,
+ 			       struct net_device *dev);
+-struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net,
+-				     const void *pkey);
+ struct neighbour *__neigh_create(struct neigh_table *tbl, const void *pkey,
+ 				 struct net_device *dev, bool want_ref);
+ static inline struct neighbour *neigh_create(struct neigh_table *tbl,
+diff --git a/include/net/netns/netfilter.h b/include/net/netns/netfilter.h
+index ca043342c0ebe..2e57312ac5893 100644
+--- a/include/net/netns/netfilter.h
++++ b/include/net/netns/netfilter.h
+@@ -25,9 +25,6 @@ struct netns_nf {
+ #ifdef CONFIG_NETFILTER_FAMILY_BRIDGE
+ 	struct nf_hook_entries __rcu *hooks_bridge[NF_INET_NUMHOOKS];
+ #endif
+-#if IS_ENABLED(CONFIG_DECNET)
+-	struct nf_hook_entries __rcu *hooks_decnet[NF_DN_NUMHOOKS];
+-#endif
+ #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4)
+ 	bool			defrag_ipv4;
+ #endif
+diff --git a/include/rdma/ib_addr.h b/include/rdma/ib_addr.h
+index b0e636ac66900..8c5c9582c4fb9 100644
+--- a/include/rdma/ib_addr.h
++++ b/include/rdma/ib_addr.h
+@@ -193,29 +193,6 @@ static inline enum ib_mtu iboe_get_mtu(int mtu)
+ 		return 0;
+ }
+ 
+-static inline int iboe_get_rate(struct net_device *dev)
+-{
+-	struct ethtool_link_ksettings cmd;
+-	int err;
+-
+-	rtnl_lock();
+-	err = __ethtool_get_link_ksettings(dev, &cmd);
+-	rtnl_unlock();
+-	if (err)
+-		return IB_RATE_PORT_CURRENT;
+-
+-	if (cmd.base.speed >= 40000)
+-		return IB_RATE_40_GBPS;
+-	else if (cmd.base.speed >= 30000)
+-		return IB_RATE_30_GBPS;
+-	else if (cmd.base.speed >= 20000)
+-		return IB_RATE_20_GBPS;
+-	else if (cmd.base.speed >= 10000)
+-		return IB_RATE_10_GBPS;
+-	else
+-		return IB_RATE_PORT_CURRENT;
+-}
+-
+ static inline int rdma_link_local_addr(struct in6_addr *addr)
+ {
+ 	if (addr->s6_addr32[0] == htonl(0xfe800000) &&
+diff --git a/include/sound/soc-dpcm.h b/include/sound/soc-dpcm.h
+index 0f6c50b17bba8..bd8795198a7d6 100644
+--- a/include/sound/soc-dpcm.h
++++ b/include/sound/soc-dpcm.h
+@@ -121,6 +121,10 @@ int snd_soc_dpcm_can_be_free_stop(struct snd_soc_pcm_runtime *fe,
+ int snd_soc_dpcm_can_be_params(struct snd_soc_pcm_runtime *fe,
+ 		struct snd_soc_pcm_runtime *be, int stream);
+ 
++/* can this BE perform prepare */
++int snd_soc_dpcm_can_be_prepared(struct snd_soc_pcm_runtime *fe,
++				 struct snd_soc_pcm_runtime *be, int stream);
++
+ /* is the current PCM operation for this FE ? */
+ int snd_soc_dpcm_fe_can_update(struct snd_soc_pcm_runtime *fe, int stream);
+ 
+diff --git a/include/uapi/linux/dn.h b/include/uapi/linux/dn.h
+deleted file mode 100644
+index 36ca71bd8bbe2..0000000000000
+--- a/include/uapi/linux/dn.h
++++ /dev/null
+@@ -1,149 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+-#ifndef _LINUX_DN_H
+-#define _LINUX_DN_H
+-
+-#include <linux/ioctl.h>
+-#include <linux/types.h>
+-#include <linux/if_ether.h>
+-
+-/*
+-
+-	DECnet Data Structures and Constants
+-
+-*/
+-
+-/* 
+- * DNPROTO_NSP can't be the same as SOL_SOCKET, 
+- * so increment each by one (compared to ULTRIX)
+- */
+-#define DNPROTO_NSP     2                       /* NSP protocol number       */
+-#define DNPROTO_ROU     3                       /* Routing protocol number   */
+-#define DNPROTO_NML     4                       /* Net mgt protocol number   */
+-#define DNPROTO_EVL     5                       /* Evl protocol number (usr) */
+-#define DNPROTO_EVR     6                       /* Evl protocol number (evl) */
+-#define DNPROTO_NSPT    7                       /* NSP trace protocol number */
+-
+-
+-#define DN_ADDL		2
+-#define DN_MAXADDL	2 /* ULTRIX headers have 20 here, but pathworks has 2 */
+-#define DN_MAXOPTL	16
+-#define DN_MAXOBJL	16
+-#define DN_MAXACCL	40
+-#define DN_MAXALIASL	128
+-#define DN_MAXNODEL	256
+-#define DNBUFSIZE	65023
+-
+-/* 
+- * SET/GET Socket options  - must match the DSO_ numbers below
+- */
+-#define SO_CONDATA      1
+-#define SO_CONACCESS    2
+-#define SO_PROXYUSR     3
+-#define SO_LINKINFO     7
+-
+-#define DSO_CONDATA     1        /* Set/Get connect data                */
+-#define DSO_DISDATA     10       /* Set/Get disconnect data             */
+-#define DSO_CONACCESS   2        /* Set/Get connect access data         */
+-#define DSO_ACCEPTMODE  4        /* Set/Get accept mode                 */
+-#define DSO_CONACCEPT   5        /* Accept deferred connection          */
+-#define DSO_CONREJECT   6        /* Reject deferred connection          */
+-#define DSO_LINKINFO    7        /* Set/Get link information            */
+-#define DSO_STREAM      8        /* Set socket type to stream           */
+-#define DSO_SEQPACKET   9        /* Set socket type to sequenced packet */
+-#define DSO_MAXWINDOW   11       /* Maximum window size allowed         */
+-#define DSO_NODELAY	12       /* Turn off nagle                      */
+-#define DSO_CORK        13       /* Wait for more data!                 */
+-#define DSO_SERVICES	14       /* NSP Services field                  */
+-#define DSO_INFO	15       /* NSP Info field                      */
+-#define DSO_MAX         15       /* Maximum option number               */
+-
+-
+-/* LINK States */
+-#define LL_INACTIVE	0
+-#define LL_CONNECTING	1
+-#define LL_RUNNING	2
+-#define LL_DISCONNECTING 3
+-
+-#define ACC_IMMED 0
+-#define ACC_DEFER 1
+-
+-#define SDF_WILD        1                  /* Wild card object          */
+-#define SDF_PROXY       2                  /* Addr eligible for proxy   */
+-#define SDF_UICPROXY    4                  /* Use uic-based proxy       */
+-
+-/* Structures */
+-
+-
+-struct dn_naddr {
+-	__le16		a_len;
+-	__u8 a_addr[DN_MAXADDL]; /* Two bytes little endian */
+-};
+-
+-struct sockaddr_dn {
+-	__u16		sdn_family;
+-	__u8		sdn_flags;
+-	__u8		sdn_objnum;
+-	__le16		sdn_objnamel;
+-	__u8		sdn_objname[DN_MAXOBJL];
+-	struct   dn_naddr	sdn_add;
+-};
+-#define sdn_nodeaddrl   sdn_add.a_len   /* Node address length  */
+-#define sdn_nodeaddr    sdn_add.a_addr  /* Node address         */
+-
+-
+-
+-/*
+- * DECnet set/get DSO_CONDATA, DSO_DISDATA (optional data) structure
+- */
+-struct optdata_dn {
+-        __le16  opt_status;     /* Extended status return */
+-#define opt_sts opt_status
+-        __le16  opt_optl;       /* Length of user data    */
+-        __u8   opt_data[16];   /* User data              */
+-};
+-
+-struct accessdata_dn {
+-	__u8		acc_accl;
+-	__u8		acc_acc[DN_MAXACCL];
+-	__u8 		acc_passl;
+-	__u8		acc_pass[DN_MAXACCL];
+-	__u8 		acc_userl;
+-	__u8		acc_user[DN_MAXACCL];
+-};
+-
+-/*
+- * DECnet logical link information structure
+- */
+-struct linkinfo_dn {
+-        __u16  idn_segsize;    /* Segment size for link */
+-        __u8   idn_linkstate;  /* Logical link state    */
+-};
+-
+-/*
+- * Ethernet address format (for DECnet)
+- */
+-union etheraddress {
+-        __u8 dne_addr[ETH_ALEN];      /* Full ethernet address */
+-  struct {
+-                __u8 dne_hiord[4];    /* DECnet HIORD prefix   */
+-                __u8 dne_nodeaddr[2]; /* DECnet node address   */
+-  } dne_remote;
+-};
+-
+-
+-/*
+- * DECnet physical socket address format
+- */
+-struct dn_addr {
+-        __le16 dna_family;      /* AF_DECnet               */
+-        union etheraddress dna_netaddr; /* DECnet ethernet address */
+-};
+-
+-#define DECNET_IOCTL_BASE 0x89 /* PROTOPRIVATE range */
+-
+-#define SIOCSNETADDR  _IOW(DECNET_IOCTL_BASE, 0xe0, struct dn_naddr)
+-#define SIOCGNETADDR  _IOR(DECNET_IOCTL_BASE, 0xe1, struct dn_naddr)
+-#define OSIOCSNETADDR _IOW(DECNET_IOCTL_BASE, 0xe0, int)
+-#define OSIOCGNETADDR _IOR(DECNET_IOCTL_BASE, 0xe1, int)
+-
+-#endif /* _LINUX_DN_H */
+diff --git a/include/uapi/linux/netfilter_decnet.h b/include/uapi/linux/netfilter_decnet.h
+deleted file mode 100644
+index 3c77f54560f21..0000000000000
+--- a/include/uapi/linux/netfilter_decnet.h
++++ /dev/null
+@@ -1,72 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+-#ifndef __LINUX_DECNET_NETFILTER_H
+-#define __LINUX_DECNET_NETFILTER_H
+-
+-/* DECnet-specific defines for netfilter. 
+- * This file (C) Steve Whitehouse 1999 derived from the
+- * ipv4 netfilter header file which is
+- * (C)1998 Rusty Russell -- This code is GPL.
+- */
+-
+-#include <linux/netfilter.h>
+-
+-/* only for userspace compatibility */
+-#ifndef __KERNEL__
+-
+-#include <limits.h> /* for INT_MIN, INT_MAX */
+-
+-/* kernel define is in netfilter_defs.h */
+-#define NF_DN_NUMHOOKS		7
+-#endif /* ! __KERNEL__ */
+-
+-/* DECnet Hooks */
+-/* After promisc drops, checksum checks. */
+-#define NF_DN_PRE_ROUTING	0
+-/* If the packet is destined for this box. */
+-#define NF_DN_LOCAL_IN		1
+-/* If the packet is destined for another interface. */
+-#define NF_DN_FORWARD		2
+-/* Packets coming from a local process. */
+-#define NF_DN_LOCAL_OUT		3
+-/* Packets about to hit the wire. */
+-#define NF_DN_POST_ROUTING	4
+-/* Input Hello Packets */
+-#define NF_DN_HELLO		5
+-/* Input Routing Packets */
+-#define NF_DN_ROUTE		6
+-
+-enum nf_dn_hook_priorities {
+-	NF_DN_PRI_FIRST = INT_MIN,
+-	NF_DN_PRI_CONNTRACK = -200,
+-	NF_DN_PRI_MANGLE = -150,
+-	NF_DN_PRI_NAT_DST = -100,
+-	NF_DN_PRI_FILTER = 0,
+-	NF_DN_PRI_NAT_SRC = 100,
+-	NF_DN_PRI_DNRTMSG = 200,
+-	NF_DN_PRI_LAST = INT_MAX,
+-};
+-
+-struct nf_dn_rtmsg {
+-	int nfdn_ifindex;
+-};
+-
+-#define NFDN_RTMSG(r) ((unsigned char *)(r) + NLMSG_ALIGN(sizeof(struct nf_dn_rtmsg)))
+-
+-#ifndef __KERNEL__
+-/* backwards compatibility for userspace */
+-#define DNRMG_L1_GROUP 0x01
+-#define DNRMG_L2_GROUP 0x02
+-#endif
+-
+-enum {
+-	DNRNG_NLGRP_NONE,
+-#define DNRNG_NLGRP_NONE	DNRNG_NLGRP_NONE
+-	DNRNG_NLGRP_L1,
+-#define DNRNG_NLGRP_L1		DNRNG_NLGRP_L1
+-	DNRNG_NLGRP_L2,
+-#define DNRNG_NLGRP_L2		DNRNG_NLGRP_L2
+-	__DNRNG_NLGRP_MAX
+-};
+-#define DNRNG_NLGRP_MAX	(__DNRNG_NLGRP_MAX - 1)
+-
+-#endif /*__LINUX_DECNET_NETFILTER_H*/
+diff --git a/include/uapi/linux/netlink.h b/include/uapi/linux/netlink.h
+index 3d94269bbfa87..49751d5fee88a 100644
+--- a/include/uapi/linux/netlink.h
++++ b/include/uapi/linux/netlink.h
+@@ -20,7 +20,7 @@
+ #define NETLINK_CONNECTOR	11
+ #define NETLINK_NETFILTER	12	/* netfilter subsystem */
+ #define NETLINK_IP6_FW		13
+-#define NETLINK_DNRTMSG		14	/* DECnet routing messages */
++#define NETLINK_DNRTMSG		14	/* DECnet routing messages (obsolete) */
+ #define NETLINK_KOBJECT_UEVENT	15	/* Kernel messages to userspace */
+ #define NETLINK_GENERIC		16
+ /* leave room for NETLINK_DM (DM Events) */
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index fd799567fc23a..58e0631143984 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -5966,6 +5966,8 @@ static int io_poll_update(struct io_kiocb *req, unsigned int issue_flags)
+ 	struct io_kiocb *preq;
+ 	int ret2, ret = 0;
+ 
++	io_ring_submit_lock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
++
+ 	spin_lock(&ctx->completion_lock);
+ 	preq = io_poll_find(ctx, req->poll_update.old_user_data, true);
+ 	if (!preq || !io_poll_disarm(preq)) {
+@@ -5997,6 +5999,7 @@ out:
+ 		req_set_fail(req);
+ 	/* complete update request, we're done with it */
+ 	io_req_complete(req, ret);
++	io_ring_submit_unlock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
+ 	return 0;
+ }
+ 
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 684c16849eff3..f123b7f642498 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -6138,19 +6138,18 @@ err:
+ static void cgroup_css_set_put_fork(struct kernel_clone_args *kargs)
+ 	__releases(&cgroup_threadgroup_rwsem) __releases(&cgroup_mutex)
+ {
++	struct cgroup *cgrp = kargs->cgrp;
++	struct css_set *cset = kargs->cset;
++
+ 	cgroup_threadgroup_change_end(current);
+ 
+-	if (kargs->flags & CLONE_INTO_CGROUP) {
+-		struct cgroup *cgrp = kargs->cgrp;
+-		struct css_set *cset = kargs->cset;
++	if (cset) {
++		put_css_set(cset);
++		kargs->cset = NULL;
++	}
+ 
++	if (kargs->flags & CLONE_INTO_CGROUP) {
+ 		mutex_unlock(&cgroup_mutex);
+-
+-		if (cset) {
+-			put_css_set(cset);
+-			kargs->cset = NULL;
+-		}
+-
+ 		if (cgrp) {
+ 			cgroup_put(cgrp);
+ 			kargs->cgrp = NULL;
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index b9c857782adae..804dc508d9269 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -910,10 +910,22 @@ static int kexec_purgatory_setup_sechdrs(struct purgatory_info *pi,
+ 		}
+ 
+ 		offset = ALIGN(offset, align);
++
++		/*
++		 * Check if the segment contains the entry point, if so,
++		 * calculate the value of image->start based on it.
++		 * If the compiler has produced more than one .text section
++		 * (Eg: .text.hot), they are generally after the main .text
++		 * section, and they shall not be used to calculate
++		 * image->start. So do not re-calculate image->start if it
++		 * is not set to the initial value, and warn the user so they
++		 * have a chance to fix their purgatory's linker script.
++		 */
+ 		if (sechdrs[i].sh_flags & SHF_EXECINSTR &&
+ 		    pi->ehdr->e_entry >= sechdrs[i].sh_addr &&
+ 		    pi->ehdr->e_entry < (sechdrs[i].sh_addr
+-					 + sechdrs[i].sh_size)) {
++					 + sechdrs[i].sh_size) &&
++		    !WARN_ON(kbuf->image->start != pi->ehdr->e_entry)) {
+ 			kbuf->image->start -= sechdrs[i].sh_addr;
+ 			kbuf->image->start += kbuf->mem + offset;
+ 		}
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 30e1d7fedb5fd..eec8e2f7537eb 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3281,6 +3281,30 @@ static void kfree_rcu_work(struct work_struct *work)
+ 	}
+ }
+ 
++static bool
++need_offload_krc(struct kfree_rcu_cpu *krcp)
++{
++	int i;
++
++	for (i = 0; i < FREE_N_CHANNELS; i++)
++		if (krcp->bkvhead[i])
++			return true;
++
++	return !!krcp->head;
++}
++
++static bool
++need_wait_for_krwp_work(struct kfree_rcu_cpu_work *krwp)
++{
++	int i;
++
++	for (i = 0; i < FREE_N_CHANNELS; i++)
++		if (krwp->bkvhead_free[i])
++			return true;
++
++	return !!krwp->head_free;
++}
++
+ /*
+  * Schedule the kfree batch RCU work to run in workqueue context after a GP.
+  *
+@@ -3298,16 +3322,13 @@ static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
+ 	for (i = 0; i < KFREE_N_BATCHES; i++) {
+ 		krwp = &(krcp->krw_arr[i]);
+ 
+-		/*
+-		 * Try to detach bkvhead or head and attach it over any
+-		 * available corresponding free channel. It can be that
+-		 * a previous RCU batch is in progress, it means that
+-		 * immediately to queue another one is not possible so
+-		 * return false to tell caller to retry.
+-		 */
+-		if ((krcp->bkvhead[0] && !krwp->bkvhead_free[0]) ||
+-			(krcp->bkvhead[1] && !krwp->bkvhead_free[1]) ||
+-				(krcp->head && !krwp->head_free)) {
++		// Try to detach bulk_head or head and attach it, only when
++		// all channels are free.  Any channel is not free means at krwp
++		// there is on-going rcu work to handle krwp's free business.
++		if (need_wait_for_krwp_work(krwp))
++			continue;
++
++		if (need_offload_krc(krcp)) {
+ 			// Channel 1 corresponds to SLAB ptrs.
+ 			// Channel 2 corresponds to vmalloc ptrs.
+ 			for (j = 0; j < FREE_N_CHANNELS; j++) {
+@@ -3334,12 +3355,12 @@ static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
+ 			 */
+ 			queue_rcu_work(system_wq, &krwp->rcu_work);
+ 		}
+-
+-		// Repeat if any "free" corresponding channel is still busy.
+-		if (krcp->bkvhead[0] || krcp->bkvhead[1] || krcp->head)
+-			repeat = true;
+ 	}
+ 
++	// Repeat if any "free" corresponding channel is still busy.
++	if (need_offload_krc(krcp))
++		repeat = true;
++
+ 	return !repeat;
+ }
+ 
+diff --git a/lib/kstrtox.c b/lib/kstrtox.c
+index 8504526541c13..53fa01fb75092 100644
+--- a/lib/kstrtox.c
++++ b/lib/kstrtox.c
+@@ -14,11 +14,12 @@
+  */
+ #include <linux/ctype.h>
+ #include <linux/errno.h>
+-#include <linux/kernel.h>
+-#include <linux/math64.h>
+ #include <linux/export.h>
++#include <linux/kstrtox.h>
++#include <linux/math64.h>
+ #include <linux/types.h>
+ #include <linux/uaccess.h>
++
+ #include "kstrtox.h"
+ 
+ const char *_parse_integer_fixup_radix(const char *s, unsigned int *base)
+diff --git a/lib/parser.c b/lib/parser.c
+index f5b3e5d7a7f95..5c37d6345cb0a 100644
+--- a/lib/parser.c
++++ b/lib/parser.c
+@@ -6,6 +6,7 @@
+ #include <linux/ctype.h>
+ #include <linux/types.h>
+ #include <linux/export.h>
++#include <linux/kstrtox.h>
+ #include <linux/parser.h>
+ #include <linux/slab.h>
+ #include <linux/string.h>
+diff --git a/lib/test_firmware.c b/lib/test_firmware.c
+index 581ee3fcdd5c2..ed0455a9ded87 100644
+--- a/lib/test_firmware.c
++++ b/lib/test_firmware.c
+@@ -22,6 +22,7 @@
+ #include <linux/slab.h>
+ #include <linux/uaccess.h>
+ #include <linux/delay.h>
++#include <linux/kstrtox.h>
+ #include <linux/kthread.h>
+ #include <linux/vmalloc.h>
+ #include <linux/efi_embedded_fw.h>
+@@ -320,16 +321,26 @@ static ssize_t config_test_show_str(char *dst,
+ 	return len;
+ }
+ 
+-static int test_dev_config_update_bool(const char *buf, size_t size,
++static inline int __test_dev_config_update_bool(const char *buf, size_t size,
+ 				       bool *cfg)
+ {
+ 	int ret;
+ 
+-	mutex_lock(&test_fw_mutex);
+-	if (strtobool(buf, cfg) < 0)
++	if (kstrtobool(buf, cfg) < 0)
+ 		ret = -EINVAL;
+ 	else
+ 		ret = size;
++
++	return ret;
++}
++
++static int test_dev_config_update_bool(const char *buf, size_t size,
++				       bool *cfg)
++{
++	int ret;
++
++	mutex_lock(&test_fw_mutex);
++	ret = __test_dev_config_update_bool(buf, size, cfg);
+ 	mutex_unlock(&test_fw_mutex);
+ 
+ 	return ret;
+@@ -340,7 +351,8 @@ static ssize_t test_dev_config_show_bool(char *buf, bool val)
+ 	return snprintf(buf, PAGE_SIZE, "%d\n", val);
+ }
+ 
+-static int test_dev_config_update_size_t(const char *buf,
++static int __test_dev_config_update_size_t(
++					 const char *buf,
+ 					 size_t size,
+ 					 size_t *cfg)
+ {
+@@ -351,9 +363,7 @@ static int test_dev_config_update_size_t(const char *buf,
+ 	if (ret)
+ 		return ret;
+ 
+-	mutex_lock(&test_fw_mutex);
+ 	*(size_t *)cfg = new;
+-	mutex_unlock(&test_fw_mutex);
+ 
+ 	/* Always return full write size even if we didn't consume all */
+ 	return size;
+@@ -369,24 +379,30 @@ static ssize_t test_dev_config_show_int(char *buf, int val)
+ 	return snprintf(buf, PAGE_SIZE, "%d\n", val);
+ }
+ 
+-static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
++static int __test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
+ {
++	u8 val;
+ 	int ret;
+-	long new;
+ 
+-	ret = kstrtol(buf, 10, &new);
++	ret = kstrtou8(buf, 10, &val);
+ 	if (ret)
+ 		return ret;
+ 
+-	if (new > U8_MAX)
+-		return -EINVAL;
++	*(u8 *)cfg = val;
++
++	/* Always return full write size even if we didn't consume all */
++	return size;
++}
++
++static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
++{
++	int ret;
+ 
+ 	mutex_lock(&test_fw_mutex);
+-	*(u8 *)cfg = new;
++	ret = __test_dev_config_update_u8(buf, size, cfg);
+ 	mutex_unlock(&test_fw_mutex);
+ 
+-	/* Always return full write size even if we didn't consume all */
+-	return size;
++	return ret;
+ }
+ 
+ static ssize_t test_dev_config_show_u8(char *buf, u8 val)
+@@ -415,10 +431,10 @@ static ssize_t config_num_requests_store(struct device *dev,
+ 		mutex_unlock(&test_fw_mutex);
+ 		goto out;
+ 	}
+-	mutex_unlock(&test_fw_mutex);
+ 
+-	rc = test_dev_config_update_u8(buf, count,
+-				       &test_fw_config->num_requests);
++	rc = __test_dev_config_update_u8(buf, count,
++					 &test_fw_config->num_requests);
++	mutex_unlock(&test_fw_mutex);
+ 
+ out:
+ 	return rc;
+@@ -462,10 +478,10 @@ static ssize_t config_buf_size_store(struct device *dev,
+ 		mutex_unlock(&test_fw_mutex);
+ 		goto out;
+ 	}
+-	mutex_unlock(&test_fw_mutex);
+ 
+-	rc = test_dev_config_update_size_t(buf, count,
+-					   &test_fw_config->buf_size);
++	rc = __test_dev_config_update_size_t(buf, count,
++					     &test_fw_config->buf_size);
++	mutex_unlock(&test_fw_mutex);
+ 
+ out:
+ 	return rc;
+@@ -492,10 +508,10 @@ static ssize_t config_file_offset_store(struct device *dev,
+ 		mutex_unlock(&test_fw_mutex);
+ 		goto out;
+ 	}
+-	mutex_unlock(&test_fw_mutex);
+ 
+-	rc = test_dev_config_update_size_t(buf, count,
+-					   &test_fw_config->file_offset);
++	rc = __test_dev_config_update_size_t(buf, count,
++					     &test_fw_config->file_offset);
++	mutex_unlock(&test_fw_mutex);
+ 
+ out:
+ 	return rc;
+@@ -847,6 +863,11 @@ static ssize_t trigger_batched_requests_store(struct device *dev,
+ 
+ 	mutex_lock(&test_fw_mutex);
+ 
++	if (test_fw_config->reqs) {
++		rc = -EBUSY;
++		goto out_bail;
++	}
++
+ 	test_fw_config->reqs =
+ 		vzalloc(array3_size(sizeof(struct test_batched_req),
+ 				    test_fw_config->num_requests, 2));
+@@ -946,6 +967,11 @@ ssize_t trigger_batched_requests_async_store(struct device *dev,
+ 
+ 	mutex_lock(&test_fw_mutex);
+ 
++	if (test_fw_config->reqs) {
++		rc = -EBUSY;
++		goto out_bail;
++	}
++
+ 	test_fw_config->reqs =
+ 		vzalloc(array3_size(sizeof(struct test_batched_req),
+ 				    test_fw_config->num_requests, 2));
+diff --git a/lib/test_kmod.c b/lib/test_kmod.c
+index c637f6b5053a9..c282728de3af0 100644
+--- a/lib/test_kmod.c
++++ b/lib/test_kmod.c
+@@ -877,20 +877,17 @@ static int test_dev_config_update_uint_sync(struct kmod_test_device *test_dev,
+ 					    int (*test_sync)(struct kmod_test_device *test_dev))
+ {
+ 	int ret;
+-	unsigned long new;
++	unsigned int val;
+ 	unsigned int old_val;
+ 
+-	ret = kstrtoul(buf, 10, &new);
++	ret = kstrtouint(buf, 10, &val);
+ 	if (ret)
+ 		return ret;
+ 
+-	if (new > UINT_MAX)
+-		return -EINVAL;
+-
+ 	mutex_lock(&test_dev->config_mutex);
+ 
+ 	old_val = *config;
+-	*(unsigned int *)config = new;
++	*(unsigned int *)config = val;
+ 
+ 	ret = test_sync(test_dev);
+ 	if (ret) {
+@@ -914,18 +911,18 @@ static int test_dev_config_update_uint_range(struct kmod_test_device *test_dev,
+ 					     unsigned int min,
+ 					     unsigned int max)
+ {
++	unsigned int val;
+ 	int ret;
+-	unsigned long new;
+ 
+-	ret = kstrtoul(buf, 10, &new);
++	ret = kstrtouint(buf, 10, &val);
+ 	if (ret)
+ 		return ret;
+ 
+-	if (new < min || new > max)
++	if (val < min || val > max)
+ 		return -EINVAL;
+ 
+ 	mutex_lock(&test_dev->config_mutex);
+-	*config = new;
++	*config = val;
+ 	mutex_unlock(&test_dev->config_mutex);
+ 
+ 	/* Always return full write size even if we didn't consume all */
+@@ -936,18 +933,15 @@ static int test_dev_config_update_int(struct kmod_test_device *test_dev,
+ 				      const char *buf, size_t size,
+ 				      int *config)
+ {
++	int val;
+ 	int ret;
+-	long new;
+ 
+-	ret = kstrtol(buf, 10, &new);
++	ret = kstrtoint(buf, 10, &val);
+ 	if (ret)
+ 		return ret;
+ 
+-	if (new < INT_MIN || new > INT_MAX)
+-		return -EINVAL;
+-
+ 	mutex_lock(&test_dev->config_mutex);
+-	*config = new;
++	*config = val;
+ 	mutex_unlock(&test_dev->config_mutex);
+ 	/* Always return full write size even if we didn't consume all */
+ 	return size;
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index f0633f9a91166..9ec9e1e677051 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1788,39 +1788,112 @@ int remove_memory(int nid, u64 start, u64 size)
+ }
+ EXPORT_SYMBOL_GPL(remove_memory);
+ 
++static int try_offline_memory_block(struct memory_block *mem, void *arg)
++{
++	uint8_t online_type = MMOP_ONLINE_KERNEL;
++	uint8_t **online_types = arg;
++	struct page *page;
++	int rc;
++
++	/*
++	 * Sense the online_type via the zone of the memory block. Offlining
++	 * with multiple zones within one memory block will be rejected
++	 * by offlining code ... so we don't care about that.
++	 */
++	page = pfn_to_online_page(section_nr_to_pfn(mem->start_section_nr));
++	if (page && zone_idx(page_zone(page)) == ZONE_MOVABLE)
++		online_type = MMOP_ONLINE_MOVABLE;
++
++	rc = device_offline(&mem->dev);
++	/*
++	 * Default is MMOP_OFFLINE - change it only if offlining succeeded,
++	 * so try_reonline_memory_block() can do the right thing.
++	 */
++	if (!rc)
++		**online_types = online_type;
++
++	(*online_types)++;
++	/* Ignore if already offline. */
++	return rc < 0 ? rc : 0;
++}
++
++static int try_reonline_memory_block(struct memory_block *mem, void *arg)
++{
++	uint8_t **online_types = arg;
++	int rc;
++
++	if (**online_types != MMOP_OFFLINE) {
++		mem->online_type = **online_types;
++		rc = device_online(&mem->dev);
++		if (rc < 0)
++			pr_warn("%s: Failed to re-online memory: %d",
++				__func__, rc);
++	}
++
++	/* Continue processing all remaining memory blocks. */
++	(*online_types)++;
++	return 0;
++}
++
+ /*
+- * Try to offline and remove a memory block. Might take a long time to
+- * finish in case memory is still in use. Primarily useful for memory devices
+- * that logically unplugged all memory (so it's no longer in use) and want to
+- * offline + remove the memory block.
++ * Try to offline and remove memory. Might take a long time to finish in case
++ * memory is still in use. Primarily useful for memory devices that logically
++ * unplugged all memory (so it's no longer in use) and want to offline + remove
++ * that memory.
+  */
+ int offline_and_remove_memory(int nid, u64 start, u64 size)
+ {
+-	struct memory_block *mem;
+-	int rc = -EINVAL;
++	const unsigned long mb_count = size / memory_block_size_bytes();
++	uint8_t *online_types, *tmp;
++	int rc;
+ 
+ 	if (!IS_ALIGNED(start, memory_block_size_bytes()) ||
+-	    size != memory_block_size_bytes())
+-		return rc;
++	    !IS_ALIGNED(size, memory_block_size_bytes()) || !size)
++		return -EINVAL;
++
++	/*
++	 * We'll remember the old online type of each memory block, so we can
++	 * try to revert whatever we did when offlining one memory block fails
++	 * after offlining some others succeeded.
++	 */
++	online_types = kmalloc_array(mb_count, sizeof(*online_types),
++				     GFP_KERNEL);
++	if (!online_types)
++		return -ENOMEM;
++	/*
++	 * Initialize all states to MMOP_OFFLINE, so when we abort processing in
++	 * try_offline_memory_block(), we'll skip all unprocessed blocks in
++	 * try_reonline_memory_block().
++	 */
++	memset(online_types, MMOP_OFFLINE, mb_count);
+ 
+ 	lock_device_hotplug();
+-	mem = find_memory_block(__pfn_to_section(PFN_DOWN(start)));
+-	if (mem)
+-		rc = device_offline(&mem->dev);
+-	/* Ignore if the device is already offline. */
+-	if (rc > 0)
+-		rc = 0;
++
++	tmp = online_types;
++	rc = walk_memory_blocks(start, size, &tmp, try_offline_memory_block);
+ 
+ 	/*
+-	 * In case we succeeded to offline the memory block, remove it.
++	 * In case we succeeded to offline all memory, remove it.
+ 	 * This cannot fail as it cannot get onlined in the meantime.
+ 	 */
+ 	if (!rc) {
+ 		rc = try_remove_memory(nid, start, size);
+-		WARN_ON_ONCE(rc);
++		if (rc)
++			pr_err("%s: Failed to remove memory: %d", __func__, rc);
++	}
++
++	/*
++	 * Rollback what we did. While memory onlining might theoretically fail
++	 * (nacked by a notifier), it barely ever happens.
++	 */
++	if (rc) {
++		tmp = online_types;
++		walk_memory_blocks(start, size, &tmp,
++				   try_reonline_memory_block);
+ 	}
+ 	unlock_device_hotplug();
+ 
++	kfree(online_types);
+ 	return rc;
+ }
+ EXPORT_SYMBOL_GPL(offline_and_remove_memory);
+diff --git a/net/Kconfig b/net/Kconfig
+index d6567162c1cfc..a22c3fb885647 100644
+--- a/net/Kconfig
++++ b/net/Kconfig
+@@ -204,7 +204,6 @@ config BRIDGE_NETFILTER
+ source "net/netfilter/Kconfig"
+ source "net/ipv4/netfilter/Kconfig"
+ source "net/ipv6/netfilter/Kconfig"
+-source "net/decnet/netfilter/Kconfig"
+ source "net/bridge/netfilter/Kconfig"
+ 
+ endif
+@@ -221,7 +220,6 @@ source "net/802/Kconfig"
+ source "net/bridge/Kconfig"
+ source "net/dsa/Kconfig"
+ source "net/8021q/Kconfig"
+-source "net/decnet/Kconfig"
+ source "net/llc/Kconfig"
+ source "drivers/net/appletalk/Kconfig"
+ source "net/x25/Kconfig"
+diff --git a/net/Makefile b/net/Makefile
+index 5744bf1997fd0..45c03aa92ace7 100644
+--- a/net/Makefile
++++ b/net/Makefile
+@@ -39,7 +39,6 @@ obj-$(CONFIG_AF_KCM)		+= kcm/
+ obj-$(CONFIG_STREAM_PARSER)	+= strparser/
+ obj-$(CONFIG_ATM)		+= atm/
+ obj-$(CONFIG_L2TP)		+= l2tp/
+-obj-$(CONFIG_DECNET)		+= decnet/
+ obj-$(CONFIG_PHONET)		+= phonet/
+ ifneq ($(CONFIG_VLAN_8021Q),)
+ obj-y				+= 8021q/
+diff --git a/net/batman-adv/gateway_common.c b/net/batman-adv/gateway_common.c
+index 16cd9450ceb14..d65ff10076d67 100644
+--- a/net/batman-adv/gateway_common.c
++++ b/net/batman-adv/gateway_common.c
+@@ -10,7 +10,7 @@
+ #include <linux/atomic.h>
+ #include <linux/byteorder/generic.h>
+ #include <linux/errno.h>
+-#include <linux/kernel.h>
++#include <linux/kstrtox.h>
+ #include <linux/limits.h>
+ #include <linux/math64.h>
+ #include <linux/netdevice.h>
+diff --git a/net/core/dev.c b/net/core/dev.c
+index f4aad9b00cc90..3fc27b52bf429 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -10300,9 +10300,7 @@ void netdev_run_todo(void)
+ 		BUG_ON(!list_empty(&dev->ptype_specific));
+ 		WARN_ON(rcu_access_pointer(dev->ip_ptr));
+ 		WARN_ON(rcu_access_pointer(dev->ip6_ptr));
+-#if IS_ENABLED(CONFIG_DECNET)
+-		WARN_ON(dev->dn_ptr);
+-#endif
++
+ 		if (dev->priv_destructor)
+ 			dev->priv_destructor(dev);
+ 		if (dev->needs_free_netdev)
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 82ccc3eebe71d..3b642c412cf32 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -571,37 +571,6 @@ struct neighbour *neigh_lookup(struct neigh_table *tbl, const void *pkey,
+ }
+ EXPORT_SYMBOL(neigh_lookup);
+ 
+-struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net,
+-				     const void *pkey)
+-{
+-	struct neighbour *n;
+-	unsigned int key_len = tbl->key_len;
+-	u32 hash_val;
+-	struct neigh_hash_table *nht;
+-
+-	NEIGH_CACHE_STAT_INC(tbl, lookups);
+-
+-	rcu_read_lock_bh();
+-	nht = rcu_dereference_bh(tbl->nht);
+-	hash_val = tbl->hash(pkey, NULL, nht->hash_rnd) >> (32 - nht->hash_shift);
+-
+-	for (n = rcu_dereference_bh(nht->hash_buckets[hash_val]);
+-	     n != NULL;
+-	     n = rcu_dereference_bh(n->next)) {
+-		if (!memcmp(n->primary_key, pkey, key_len) &&
+-		    net_eq(dev_net(n->dev), net)) {
+-			if (!refcount_inc_not_zero(&n->refcnt))
+-				n = NULL;
+-			NEIGH_CACHE_STAT_INC(tbl, hits);
+-			break;
+-		}
+-	}
+-
+-	rcu_read_unlock_bh();
+-	return n;
+-}
+-EXPORT_SYMBOL(neigh_lookup_nodev);
+-
+ static struct neighbour *
+ ___neigh_create(struct neigh_table *tbl, const void *pkey,
+ 		struct net_device *dev, u8 flags,
+@@ -1802,9 +1771,6 @@ static struct neigh_table *neigh_find_table(int family)
+ 	case AF_INET6:
+ 		tbl = neigh_tables[NEIGH_ND_TABLE];
+ 		break;
+-	case AF_DECnet:
+-		tbl = neigh_tables[NEIGH_DN_TABLE];
+-		break;
+ 	}
+ 
+ 	return tbl;
+diff --git a/net/decnet/Kconfig b/net/decnet/Kconfig
+deleted file mode 100644
+index 24336bdb10546..0000000000000
+--- a/net/decnet/Kconfig
++++ /dev/null
+@@ -1,43 +0,0 @@
+-# SPDX-License-Identifier: GPL-2.0-only
+-#
+-# DECnet configuration
+-#
+-config DECNET
+-	tristate "DECnet Support"
+-	help
+-	  The DECnet networking protocol was used in many products made by
+-	  Digital (now Compaq).  It provides reliable stream and sequenced
+-	  packet communications over which run a variety of services similar
+-	  to those which run over TCP/IP.
+-
+-	  To find some tools to use with the kernel layer support, please
+-	  look at Patrick Caulfield's web site:
+-	  <http://linux-decnet.sourceforge.net/>.
+-
+-	  More detailed documentation is available in
+-	  <file:Documentation/networking/decnet.rst>.
+-
+-	  Be sure to say Y to "/proc file system support" and "Sysctl support"
+-	  below when using DECnet, since you will need sysctl support to aid
+-	  in configuration at run time.
+-
+-	  The DECnet code is also available as a module ( = code which can be
+-	  inserted in and removed from the running kernel whenever you want).
+-	  The module is called decnet.
+-
+-config DECNET_ROUTER
+-	bool "DECnet: router support"
+-	depends on DECNET
+-	select FIB_RULES
+-	help
+-	  Add support for turning your DECnet Endnode into a level 1 or 2
+-	  router.  This is an experimental, but functional option.  If you
+-	  do say Y here, then make sure that you also say Y to "Kernel/User
+-	  network link driver", "Routing messages" and "Network packet
+-	  filtering".  The first two are required to allow configuration via
+-	  rtnetlink (you will need Alexey Kuznetsov's iproute2 package
+-	  from <ftp://ftp.tux.org/pub/net/ip-routing/>). The "Network packet
+-	  filtering" option will be required for the forthcoming routing daemon
+-	  to work.
+-
+-	  See <file:Documentation/networking/decnet.rst> for more information.
+diff --git a/net/decnet/Makefile b/net/decnet/Makefile
+deleted file mode 100644
+index 07b38e441b2d0..0000000000000
+--- a/net/decnet/Makefile
++++ /dev/null
+@@ -1,10 +0,0 @@
+-# SPDX-License-Identifier: GPL-2.0
+-
+-obj-$(CONFIG_DECNET) += decnet.o
+-
+-decnet-y := af_decnet.o dn_nsp_in.o dn_nsp_out.o \
+-	    dn_route.o dn_dev.o dn_neigh.o dn_timer.o
+-decnet-$(CONFIG_DECNET_ROUTER) += dn_fib.o dn_rules.o dn_table.o
+-decnet-y += sysctl_net_decnet.o
+-
+-obj-$(CONFIG_NETFILTER) += netfilter/
+diff --git a/net/decnet/README b/net/decnet/README
+deleted file mode 100644
+index 60e7ec88c81fd..0000000000000
+--- a/net/decnet/README
++++ /dev/null
+@@ -1,8 +0,0 @@
+-                       Linux DECnet Project
+-                      ======================
+-
+-The documentation for this kernel subsystem is available in the
+-Documentation/networking subdirectory of this distribution and also
+-on line at http://www.chygwyn.com/DECnet/
+-
+-Steve Whitehouse <SteveW@ACM.org>
+diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c
+deleted file mode 100644
+index 7d542eb461729..0000000000000
+--- a/net/decnet/af_decnet.c
++++ /dev/null
+@@ -1,2400 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-or-later
+-
+-/*
+- * DECnet       An implementation of the DECnet protocol suite for the LINUX
+- *              operating system.  DECnet is implemented using the  BSD Socket
+- *              interface as the means of communication with the user level.
+- *
+- *              DECnet Socket Layer Interface
+- *
+- * Authors:     Eduardo Marcelo Serrat <emserrat@geocities.com>
+- *              Patrick Caulfield <patrick@pandh.demon.co.uk>
+- *
+- * Changes:
+- *        Steve Whitehouse: Copied from Eduardo Serrat and Patrick Caulfield's
+- *                          version of the code. Original copyright preserved
+- *                          below.
+- *        Steve Whitehouse: Some bug fixes, cleaning up some code to make it
+- *                          compatible with my routing layer.
+- *        Steve Whitehouse: Merging changes from Eduardo Serrat and Patrick
+- *                          Caulfield.
+- *        Steve Whitehouse: Further bug fixes, checking module code still works
+- *                          with new routing layer.
+- *        Steve Whitehouse: Additional set/get_sockopt() calls.
+- *        Steve Whitehouse: Fixed TIOCINQ ioctl to be same as Eduardo's new
+- *                          code.
+- *        Steve Whitehouse: recvmsg() changed to try and behave in a POSIX like
+- *                          way. Didn't manage it entirely, but its better.
+- *        Steve Whitehouse: ditto for sendmsg().
+- *        Steve Whitehouse: A selection of bug fixes to various things.
+- *        Steve Whitehouse: Added TIOCOUTQ ioctl.
+- *        Steve Whitehouse: Fixes to username2sockaddr & sockaddr2username.
+- *        Steve Whitehouse: Fixes to connect() error returns.
+- *       Patrick Caulfield: Fixes to delayed acceptance logic.
+- *         David S. Miller: New socket locking
+- *        Steve Whitehouse: Socket list hashing/locking
+- *         Arnaldo C. Melo: use capable, not suser
+- *        Steve Whitehouse: Removed unused code. Fix to use sk->allocation
+- *                          when required.
+- *       Patrick Caulfield: /proc/net/decnet now has object name/number
+- *        Steve Whitehouse: Fixed local port allocation, hashed sk list
+- *          Matthew Wilcox: Fixes for dn_ioctl()
+- *        Steve Whitehouse: New connect/accept logic to allow timeouts and
+- *                          prepare for sendpage etc.
+- */
+-
+-
+-/******************************************************************************
+-    (c) 1995-1998 E.M. Serrat		emserrat@geocities.com
+-
+-
+-HISTORY:
+-
+-Version           Kernel     Date       Author/Comments
+--------           ------     ----       ---------------
+-Version 0.0.1     2.0.30    01-dic-97	Eduardo Marcelo Serrat
+-					(emserrat@geocities.com)
+-
+-					First Development of DECnet Socket La-
+-					yer for Linux. Only supports outgoing
+-					connections.
+-
+-Version 0.0.2	  2.1.105   20-jun-98   Patrick J. Caulfield
+-					(patrick@pandh.demon.co.uk)
+-
+-					Port to new kernel development version.
+-
+-Version 0.0.3     2.1.106   25-jun-98   Eduardo Marcelo Serrat
+-					(emserrat@geocities.com)
+-					_
+-					Added support for incoming connections
+-					so we can start developing server apps
+-					on Linux.
+-					-
+-					Module Support
+-Version 0.0.4     2.1.109   21-jul-98   Eduardo Marcelo Serrat
+-				       (emserrat@geocities.com)
+-				       _
+-					Added support for X11R6.4. Now we can
+-					use DECnet transport for X on Linux!!!
+-				       -
+-Version 0.0.5    2.1.110   01-aug-98   Eduardo Marcelo Serrat
+-				       (emserrat@geocities.com)
+-				       Removed bugs on flow control
+-				       Removed bugs on incoming accessdata
+-				       order
+-				       -
+-Version 0.0.6    2.1.110   07-aug-98   Eduardo Marcelo Serrat
+-				       dn_recvmsg fixes
+-
+-					Patrick J. Caulfield
+-				       dn_bind fixes
+-*******************************************************************************/
+-
+-#include <linux/module.h>
+-#include <linux/errno.h>
+-#include <linux/types.h>
+-#include <linux/slab.h>
+-#include <linux/socket.h>
+-#include <linux/in.h>
+-#include <linux/kernel.h>
+-#include <linux/sched/signal.h>
+-#include <linux/timer.h>
+-#include <linux/string.h>
+-#include <linux/sockios.h>
+-#include <linux/net.h>
+-#include <linux/netdevice.h>
+-#include <linux/inet.h>
+-#include <linux/route.h>
+-#include <linux/netfilter.h>
+-#include <linux/seq_file.h>
+-#include <net/sock.h>
+-#include <net/tcp_states.h>
+-#include <net/flow.h>
+-#include <asm/ioctls.h>
+-#include <linux/capability.h>
+-#include <linux/mm.h>
+-#include <linux/interrupt.h>
+-#include <linux/proc_fs.h>
+-#include <linux/stat.h>
+-#include <linux/init.h>
+-#include <linux/poll.h>
+-#include <linux/jiffies.h>
+-#include <net/net_namespace.h>
+-#include <net/neighbour.h>
+-#include <net/dst.h>
+-#include <net/fib_rules.h>
+-#include <net/tcp.h>
+-#include <net/dn.h>
+-#include <net/dn_nsp.h>
+-#include <net/dn_dev.h>
+-#include <net/dn_route.h>
+-#include <net/dn_fib.h>
+-#include <net/dn_neigh.h>
+-
+-struct dn_sock {
+-	struct sock sk;
+-	struct dn_scp scp;
+-};
+-
+-static void dn_keepalive(struct sock *sk);
+-
+-#define DN_SK_HASH_SHIFT 8
+-#define DN_SK_HASH_SIZE (1 << DN_SK_HASH_SHIFT)
+-#define DN_SK_HASH_MASK (DN_SK_HASH_SIZE - 1)
+-
+-
+-static const struct proto_ops dn_proto_ops;
+-static DEFINE_RWLOCK(dn_hash_lock);
+-static struct hlist_head dn_sk_hash[DN_SK_HASH_SIZE];
+-static struct hlist_head dn_wild_sk;
+-static atomic_long_t decnet_memory_allocated;
+-
+-static int __dn_setsockopt(struct socket *sock, int level, int optname,
+-		sockptr_t optval, unsigned int optlen, int flags);
+-static int __dn_getsockopt(struct socket *sock, int level, int optname, char __user *optval, int __user *optlen, int flags);
+-
+-static struct hlist_head *dn_find_list(struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	if (scp->addr.sdn_flags & SDF_WILD)
+-		return hlist_empty(&dn_wild_sk) ? &dn_wild_sk : NULL;
+-
+-	return &dn_sk_hash[le16_to_cpu(scp->addrloc) & DN_SK_HASH_MASK];
+-}
+-
+-/*
+- * Valid ports are those greater than zero and not already in use.
+- */
+-static int check_port(__le16 port)
+-{
+-	struct sock *sk;
+-
+-	if (port == 0)
+-		return -1;
+-
+-	sk_for_each(sk, &dn_sk_hash[le16_to_cpu(port) & DN_SK_HASH_MASK]) {
+-		struct dn_scp *scp = DN_SK(sk);
+-		if (scp->addrloc == port)
+-			return -1;
+-	}
+-	return 0;
+-}
+-
+-static unsigned short port_alloc(struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	static unsigned short port = 0x2000;
+-	unsigned short i_port = port;
+-
+-	while(check_port(cpu_to_le16(++port)) != 0) {
+-		if (port == i_port)
+-			return 0;
+-	}
+-
+-	scp->addrloc = cpu_to_le16(port);
+-
+-	return 1;
+-}
+-
+-/*
+- * Since this is only ever called from user
+- * level, we don't need a write_lock() version
+- * of this.
+- */
+-static int dn_hash_sock(struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct hlist_head *list;
+-	int rv = -EUSERS;
+-
+-	BUG_ON(sk_hashed(sk));
+-
+-	write_lock_bh(&dn_hash_lock);
+-
+-	if (!scp->addrloc && !port_alloc(sk))
+-		goto out;
+-
+-	rv = -EADDRINUSE;
+-	if ((list = dn_find_list(sk)) == NULL)
+-		goto out;
+-
+-	sk_add_node(sk, list);
+-	rv = 0;
+-out:
+-	write_unlock_bh(&dn_hash_lock);
+-	return rv;
+-}
+-
+-static void dn_unhash_sock(struct sock *sk)
+-{
+-	write_lock(&dn_hash_lock);
+-	sk_del_node_init(sk);
+-	write_unlock(&dn_hash_lock);
+-}
+-
+-static void dn_unhash_sock_bh(struct sock *sk)
+-{
+-	write_lock_bh(&dn_hash_lock);
+-	sk_del_node_init(sk);
+-	write_unlock_bh(&dn_hash_lock);
+-}
+-
+-static struct hlist_head *listen_hash(struct sockaddr_dn *addr)
+-{
+-	int i;
+-	unsigned int hash = addr->sdn_objnum;
+-
+-	if (hash == 0) {
+-		hash = addr->sdn_objnamel;
+-		for(i = 0; i < le16_to_cpu(addr->sdn_objnamel); i++) {
+-			hash ^= addr->sdn_objname[i];
+-			hash ^= (hash << 3);
+-		}
+-	}
+-
+-	return &dn_sk_hash[hash & DN_SK_HASH_MASK];
+-}
+-
+-/*
+- * Called to transform a socket from bound (i.e. with a local address)
+- * into a listening socket (doesn't need a local port number) and rehashes
+- * based upon the object name/number.
+- */
+-static void dn_rehash_sock(struct sock *sk)
+-{
+-	struct hlist_head *list;
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	if (scp->addr.sdn_flags & SDF_WILD)
+-		return;
+-
+-	write_lock_bh(&dn_hash_lock);
+-	sk_del_node_init(sk);
+-	DN_SK(sk)->addrloc = 0;
+-	list = listen_hash(&DN_SK(sk)->addr);
+-	sk_add_node(sk, list);
+-	write_unlock_bh(&dn_hash_lock);
+-}
+-
+-int dn_sockaddr2username(struct sockaddr_dn *sdn, unsigned char *buf, unsigned char type)
+-{
+-	int len = 2;
+-
+-	*buf++ = type;
+-
+-	switch (type) {
+-	case 0:
+-		*buf++ = sdn->sdn_objnum;
+-		break;
+-	case 1:
+-		*buf++ = 0;
+-		*buf++ = le16_to_cpu(sdn->sdn_objnamel);
+-		memcpy(buf, sdn->sdn_objname, le16_to_cpu(sdn->sdn_objnamel));
+-		len = 3 + le16_to_cpu(sdn->sdn_objnamel);
+-		break;
+-	case 2:
+-		memset(buf, 0, 5);
+-		buf += 5;
+-		*buf++ = le16_to_cpu(sdn->sdn_objnamel);
+-		memcpy(buf, sdn->sdn_objname, le16_to_cpu(sdn->sdn_objnamel));
+-		len = 7 + le16_to_cpu(sdn->sdn_objnamel);
+-		break;
+-	}
+-
+-	return len;
+-}
+-
+-/*
+- * On reception of usernames, we handle types 1 and 0 for destination
+- * addresses only. Types 2 and 4 are used for source addresses, but the
+- * UIC, GIC are ignored and they are both treated the same way. Type 3
+- * is never used as I've no idea what its purpose might be or what its
+- * format is.
+- */
+-int dn_username2sockaddr(unsigned char *data, int len, struct sockaddr_dn *sdn, unsigned char *fmt)
+-{
+-	unsigned char type;
+-	int size = len;
+-	int namel = 12;
+-
+-	sdn->sdn_objnum = 0;
+-	sdn->sdn_objnamel = cpu_to_le16(0);
+-	memset(sdn->sdn_objname, 0, DN_MAXOBJL);
+-
+-	if (len < 2)
+-		return -1;
+-
+-	len -= 2;
+-	*fmt = *data++;
+-	type = *data++;
+-
+-	switch (*fmt) {
+-	case 0:
+-		sdn->sdn_objnum = type;
+-		return 2;
+-	case 1:
+-		namel = 16;
+-		break;
+-	case 2:
+-		len  -= 4;
+-		data += 4;
+-		break;
+-	case 4:
+-		len  -= 8;
+-		data += 8;
+-		break;
+-	default:
+-		return -1;
+-	}
+-
+-	len -= 1;
+-
+-	if (len < 0)
+-		return -1;
+-
+-	sdn->sdn_objnamel = cpu_to_le16(*data++);
+-	len -= le16_to_cpu(sdn->sdn_objnamel);
+-
+-	if ((len < 0) || (le16_to_cpu(sdn->sdn_objnamel) > namel))
+-		return -1;
+-
+-	memcpy(sdn->sdn_objname, data, le16_to_cpu(sdn->sdn_objnamel));
+-
+-	return size - len;
+-}
+-
+-struct sock *dn_sklist_find_listener(struct sockaddr_dn *addr)
+-{
+-	struct hlist_head *list = listen_hash(addr);
+-	struct sock *sk;
+-
+-	read_lock(&dn_hash_lock);
+-	sk_for_each(sk, list) {
+-		struct dn_scp *scp = DN_SK(sk);
+-		if (sk->sk_state != TCP_LISTEN)
+-			continue;
+-		if (scp->addr.sdn_objnum) {
+-			if (scp->addr.sdn_objnum != addr->sdn_objnum)
+-				continue;
+-		} else {
+-			if (addr->sdn_objnum)
+-				continue;
+-			if (scp->addr.sdn_objnamel != addr->sdn_objnamel)
+-				continue;
+-			if (memcmp(scp->addr.sdn_objname, addr->sdn_objname, le16_to_cpu(addr->sdn_objnamel)) != 0)
+-				continue;
+-		}
+-		sock_hold(sk);
+-		read_unlock(&dn_hash_lock);
+-		return sk;
+-	}
+-
+-	sk = sk_head(&dn_wild_sk);
+-	if (sk) {
+-		if (sk->sk_state == TCP_LISTEN)
+-			sock_hold(sk);
+-		else
+-			sk = NULL;
+-	}
+-
+-	read_unlock(&dn_hash_lock);
+-	return sk;
+-}
+-
+-struct sock *dn_find_by_skb(struct sk_buff *skb)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	struct sock *sk;
+-	struct dn_scp *scp;
+-
+-	read_lock(&dn_hash_lock);
+-	sk_for_each(sk, &dn_sk_hash[le16_to_cpu(cb->dst_port) & DN_SK_HASH_MASK]) {
+-		scp = DN_SK(sk);
+-		if (cb->src != dn_saddr2dn(&scp->peer))
+-			continue;
+-		if (cb->dst_port != scp->addrloc)
+-			continue;
+-		if (scp->addrrem && (cb->src_port != scp->addrrem))
+-			continue;
+-		sock_hold(sk);
+-		goto found;
+-	}
+-	sk = NULL;
+-found:
+-	read_unlock(&dn_hash_lock);
+-	return sk;
+-}
+-
+-
+-
+-static void dn_destruct(struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	skb_queue_purge(&scp->data_xmit_queue);
+-	skb_queue_purge(&scp->other_xmit_queue);
+-	skb_queue_purge(&scp->other_receive_queue);
+-
+-	dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1));
+-}
+-
+-static unsigned long dn_memory_pressure;
+-
+-static void dn_enter_memory_pressure(struct sock *sk)
+-{
+-	if (!dn_memory_pressure) {
+-		dn_memory_pressure = 1;
+-	}
+-}
+-
+-static struct proto dn_proto = {
+-	.name			= "NSP",
+-	.owner			= THIS_MODULE,
+-	.enter_memory_pressure	= dn_enter_memory_pressure,
+-	.memory_pressure	= &dn_memory_pressure,
+-	.memory_allocated	= &decnet_memory_allocated,
+-	.sysctl_mem		= sysctl_decnet_mem,
+-	.sysctl_wmem		= sysctl_decnet_wmem,
+-	.sysctl_rmem		= sysctl_decnet_rmem,
+-	.max_header		= DN_MAX_NSP_DATA_HEADER + 64,
+-	.obj_size		= sizeof(struct dn_sock),
+-};
+-
+-static struct sock *dn_alloc_sock(struct net *net, struct socket *sock, gfp_t gfp, int kern)
+-{
+-	struct dn_scp *scp;
+-	struct sock *sk = sk_alloc(net, PF_DECnet, gfp, &dn_proto, kern);
+-
+-	if  (!sk)
+-		goto out;
+-
+-	if (sock)
+-		sock->ops = &dn_proto_ops;
+-	sock_init_data(sock, sk);
+-
+-	sk->sk_backlog_rcv = dn_nsp_backlog_rcv;
+-	sk->sk_destruct    = dn_destruct;
+-	sk->sk_no_check_tx = 1;
+-	sk->sk_family      = PF_DECnet;
+-	sk->sk_protocol    = 0;
+-	sk->sk_allocation  = gfp;
+-	sk->sk_sndbuf	   = READ_ONCE(sysctl_decnet_wmem[1]);
+-	sk->sk_rcvbuf	   = READ_ONCE(sysctl_decnet_rmem[1]);
+-
+-	/* Initialization of DECnet Session Control Port		*/
+-	scp = DN_SK(sk);
+-	scp->state	= DN_O;		/* Open			*/
+-	scp->numdat	= 1;		/* Next data seg to tx	*/
+-	scp->numoth	= 1;		/* Next oth data to tx  */
+-	scp->ackxmt_dat = 0;		/* Last data seg ack'ed */
+-	scp->ackxmt_oth = 0;		/* Last oth data ack'ed */
+-	scp->ackrcv_dat = 0;		/* Highest data ack recv*/
+-	scp->ackrcv_oth = 0;		/* Last oth data ack rec*/
+-	scp->flowrem_sw = DN_SEND;
+-	scp->flowloc_sw = DN_SEND;
+-	scp->flowrem_dat = 0;
+-	scp->flowrem_oth = 1;
+-	scp->flowloc_dat = 0;
+-	scp->flowloc_oth = 1;
+-	scp->services_rem = 0;
+-	scp->services_loc = 1 | NSP_FC_NONE;
+-	scp->info_rem = 0;
+-	scp->info_loc = 0x03; /* NSP version 4.1 */
+-	scp->segsize_rem = 230 - DN_MAX_NSP_DATA_HEADER; /* Default: Updated by remote segsize */
+-	scp->nonagle = 0;
+-	scp->multi_ireq = 1;
+-	scp->accept_mode = ACC_IMMED;
+-	scp->addr.sdn_family    = AF_DECnet;
+-	scp->peer.sdn_family    = AF_DECnet;
+-	scp->accessdata.acc_accl = 5;
+-	memcpy(scp->accessdata.acc_acc, "LINUX", 5);
+-
+-	scp->max_window   = NSP_MAX_WINDOW;
+-	scp->snd_window   = NSP_MIN_WINDOW;
+-	scp->nsp_srtt     = NSP_INITIAL_SRTT;
+-	scp->nsp_rttvar   = NSP_INITIAL_RTTVAR;
+-	scp->nsp_rxtshift = 0;
+-
+-	skb_queue_head_init(&scp->data_xmit_queue);
+-	skb_queue_head_init(&scp->other_xmit_queue);
+-	skb_queue_head_init(&scp->other_receive_queue);
+-
+-	scp->persist = 0;
+-	scp->persist_fxn = NULL;
+-	scp->keepalive = 10 * HZ;
+-	scp->keepalive_fxn = dn_keepalive;
+-
+-	dn_start_slow_timer(sk);
+-out:
+-	return sk;
+-}
+-
+-/*
+- * Keepalive timer.
+- * FIXME: Should respond to SO_KEEPALIVE etc.
+- */
+-static void dn_keepalive(struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	/*
+-	 * By checking the other_data transmit queue is empty
+-	 * we are double checking that we are not sending too
+-	 * many of these keepalive frames.
+-	 */
+-	if (skb_queue_empty(&scp->other_xmit_queue))
+-		dn_nsp_send_link(sk, DN_NOCHANGE, 0);
+-}
+-
+-
+-/*
+- * Timer for shutdown/destroyed sockets.
+- * When socket is dead & no packets have been sent for a
+- * certain amount of time, they are removed by this
+- * routine. Also takes care of sending out DI & DC
+- * frames at correct times.
+- */
+-int dn_destroy_timer(struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	scp->persist = dn_nsp_persist(sk);
+-
+-	switch (scp->state) {
+-	case DN_DI:
+-		dn_nsp_send_disc(sk, NSP_DISCINIT, 0, GFP_ATOMIC);
+-		if (scp->nsp_rxtshift >= decnet_di_count)
+-			scp->state = DN_CN;
+-		return 0;
+-
+-	case DN_DR:
+-		dn_nsp_send_disc(sk, NSP_DISCINIT, 0, GFP_ATOMIC);
+-		if (scp->nsp_rxtshift >= decnet_dr_count)
+-			scp->state = DN_DRC;
+-		return 0;
+-
+-	case DN_DN:
+-		if (scp->nsp_rxtshift < decnet_dn_count) {
+-			/* printk(KERN_DEBUG "dn_destroy_timer: DN\n"); */
+-			dn_nsp_send_disc(sk, NSP_DISCCONF, NSP_REASON_DC,
+-					 GFP_ATOMIC);
+-			return 0;
+-		}
+-	}
+-
+-	scp->persist = (HZ * decnet_time_wait);
+-
+-	if (sk->sk_socket)
+-		return 0;
+-
+-	if (time_after_eq(jiffies, scp->stamp + HZ * decnet_time_wait)) {
+-		dn_unhash_sock(sk);
+-		sock_put(sk);
+-		return 1;
+-	}
+-
+-	return 0;
+-}
+-
+-static void dn_destroy_sock(struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	scp->nsp_rxtshift = 0; /* reset back off */
+-
+-	if (sk->sk_socket) {
+-		if (sk->sk_socket->state != SS_UNCONNECTED)
+-			sk->sk_socket->state = SS_DISCONNECTING;
+-	}
+-
+-	sk->sk_state = TCP_CLOSE;
+-
+-	switch (scp->state) {
+-	case DN_DN:
+-		dn_nsp_send_disc(sk, NSP_DISCCONF, NSP_REASON_DC,
+-				 sk->sk_allocation);
+-		scp->persist_fxn = dn_destroy_timer;
+-		scp->persist = dn_nsp_persist(sk);
+-		break;
+-	case DN_CR:
+-		scp->state = DN_DR;
+-		goto disc_reject;
+-	case DN_RUN:
+-		scp->state = DN_DI;
+-		fallthrough;
+-	case DN_DI:
+-	case DN_DR:
+-disc_reject:
+-		dn_nsp_send_disc(sk, NSP_DISCINIT, 0, sk->sk_allocation);
+-		fallthrough;
+-	case DN_NC:
+-	case DN_NR:
+-	case DN_RJ:
+-	case DN_DIC:
+-	case DN_CN:
+-	case DN_DRC:
+-	case DN_CI:
+-	case DN_CD:
+-		scp->persist_fxn = dn_destroy_timer;
+-		scp->persist = dn_nsp_persist(sk);
+-		break;
+-	default:
+-		printk(KERN_DEBUG "DECnet: dn_destroy_sock passed socket in invalid state\n");
+-		fallthrough;
+-	case DN_O:
+-		dn_stop_slow_timer(sk);
+-
+-		dn_unhash_sock_bh(sk);
+-		sock_put(sk);
+-
+-		break;
+-	}
+-}
+-
+-char *dn_addr2asc(__u16 addr, char *buf)
+-{
+-	unsigned short node, area;
+-
+-	node = addr & 0x03ff;
+-	area = addr >> 10;
+-	sprintf(buf, "%hd.%hd", area, node);
+-
+-	return buf;
+-}
+-
+-
+-
+-static int dn_create(struct net *net, struct socket *sock, int protocol,
+-		     int kern)
+-{
+-	struct sock *sk;
+-
+-	if (protocol < 0 || protocol > U8_MAX)
+-		return -EINVAL;
+-
+-	if (!net_eq(net, &init_net))
+-		return -EAFNOSUPPORT;
+-
+-	switch (sock->type) {
+-	case SOCK_SEQPACKET:
+-		if (protocol != DNPROTO_NSP)
+-			return -EPROTONOSUPPORT;
+-		break;
+-	case SOCK_STREAM:
+-		break;
+-	default:
+-		return -ESOCKTNOSUPPORT;
+-	}
+-
+-
+-	if ((sk = dn_alloc_sock(net, sock, GFP_KERNEL, kern)) == NULL)
+-		return -ENOBUFS;
+-
+-	sk->sk_protocol = protocol;
+-
+-	return 0;
+-}
+-
+-
+-static int
+-dn_release(struct socket *sock)
+-{
+-	struct sock *sk = sock->sk;
+-
+-	if (sk) {
+-		sock_orphan(sk);
+-		sock_hold(sk);
+-		lock_sock(sk);
+-		dn_destroy_sock(sk);
+-		release_sock(sk);
+-		sock_put(sk);
+-	}
+-
+-	return 0;
+-}
+-
+-static int dn_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+-{
+-	struct sock *sk = sock->sk;
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct sockaddr_dn *saddr = (struct sockaddr_dn *)uaddr;
+-	struct net_device *dev, *ldev;
+-	int rv;
+-
+-	if (addr_len != sizeof(struct sockaddr_dn))
+-		return -EINVAL;
+-
+-	if (saddr->sdn_family != AF_DECnet)
+-		return -EINVAL;
+-
+-	if (le16_to_cpu(saddr->sdn_nodeaddrl) && (le16_to_cpu(saddr->sdn_nodeaddrl) != 2))
+-		return -EINVAL;
+-
+-	if (le16_to_cpu(saddr->sdn_objnamel) > DN_MAXOBJL)
+-		return -EINVAL;
+-
+-	if (saddr->sdn_flags & ~SDF_WILD)
+-		return -EINVAL;
+-
+-	if (!capable(CAP_NET_BIND_SERVICE) && (saddr->sdn_objnum ||
+-	    (saddr->sdn_flags & SDF_WILD)))
+-		return -EACCES;
+-
+-	if (!(saddr->sdn_flags & SDF_WILD)) {
+-		if (le16_to_cpu(saddr->sdn_nodeaddrl)) {
+-			rcu_read_lock();
+-			ldev = NULL;
+-			for_each_netdev_rcu(&init_net, dev) {
+-				if (!dev->dn_ptr)
+-					continue;
+-				if (dn_dev_islocal(dev, dn_saddr2dn(saddr))) {
+-					ldev = dev;
+-					break;
+-				}
+-			}
+-			rcu_read_unlock();
+-			if (ldev == NULL)
+-				return -EADDRNOTAVAIL;
+-		}
+-	}
+-
+-	rv = -EINVAL;
+-	lock_sock(sk);
+-	if (sock_flag(sk, SOCK_ZAPPED)) {
+-		memcpy(&scp->addr, saddr, addr_len);
+-		sock_reset_flag(sk, SOCK_ZAPPED);
+-
+-		rv = dn_hash_sock(sk);
+-		if (rv)
+-			sock_set_flag(sk, SOCK_ZAPPED);
+-	}
+-	release_sock(sk);
+-
+-	return rv;
+-}
+-
+-
+-static int dn_auto_bind(struct socket *sock)
+-{
+-	struct sock *sk = sock->sk;
+-	struct dn_scp *scp = DN_SK(sk);
+-	int rv;
+-
+-	sock_reset_flag(sk, SOCK_ZAPPED);
+-
+-	scp->addr.sdn_flags  = 0;
+-	scp->addr.sdn_objnum = 0;
+-
+-	/*
+-	 * This stuff is to keep compatibility with Eduardo's
+-	 * patch. I hope I can dispense with it shortly...
+-	 */
+-	if ((scp->accessdata.acc_accl != 0) &&
+-		(scp->accessdata.acc_accl <= 12)) {
+-
+-		scp->addr.sdn_objnamel = cpu_to_le16(scp->accessdata.acc_accl);
+-		memcpy(scp->addr.sdn_objname, scp->accessdata.acc_acc, le16_to_cpu(scp->addr.sdn_objnamel));
+-
+-		scp->accessdata.acc_accl = 0;
+-		memset(scp->accessdata.acc_acc, 0, 40);
+-	}
+-	/* End of compatibility stuff */
+-
+-	scp->addr.sdn_add.a_len = cpu_to_le16(2);
+-	rv = dn_dev_bind_default((__le16 *)scp->addr.sdn_add.a_addr);
+-	if (rv == 0) {
+-		rv = dn_hash_sock(sk);
+-		if (rv)
+-			sock_set_flag(sk, SOCK_ZAPPED);
+-	}
+-
+-	return rv;
+-}
+-
+-static int dn_confirm_accept(struct sock *sk, long *timeo, gfp_t allocation)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+-	int err;
+-
+-	if (scp->state != DN_CR)
+-		return -EINVAL;
+-
+-	scp->state = DN_CC;
+-	scp->segsize_loc = dst_metric_advmss(__sk_dst_get(sk));
+-	dn_send_conn_conf(sk, allocation);
+-
+-	add_wait_queue(sk_sleep(sk), &wait);
+-	for(;;) {
+-		release_sock(sk);
+-		if (scp->state == DN_CC)
+-			*timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, *timeo);
+-		lock_sock(sk);
+-		err = 0;
+-		if (scp->state == DN_RUN)
+-			break;
+-		err = sock_error(sk);
+-		if (err)
+-			break;
+-		err = sock_intr_errno(*timeo);
+-		if (signal_pending(current))
+-			break;
+-		err = -EAGAIN;
+-		if (!*timeo)
+-			break;
+-	}
+-	remove_wait_queue(sk_sleep(sk), &wait);
+-	if (err == 0) {
+-		sk->sk_socket->state = SS_CONNECTED;
+-	} else if (scp->state != DN_CC) {
+-		sk->sk_socket->state = SS_UNCONNECTED;
+-	}
+-	return err;
+-}
+-
+-static int dn_wait_run(struct sock *sk, long *timeo)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+-	int err = 0;
+-
+-	if (scp->state == DN_RUN)
+-		goto out;
+-
+-	if (!*timeo)
+-		return -EALREADY;
+-
+-	add_wait_queue(sk_sleep(sk), &wait);
+-	for(;;) {
+-		release_sock(sk);
+-		if (scp->state == DN_CI || scp->state == DN_CC)
+-			*timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, *timeo);
+-		lock_sock(sk);
+-		err = 0;
+-		if (scp->state == DN_RUN)
+-			break;
+-		err = sock_error(sk);
+-		if (err)
+-			break;
+-		err = sock_intr_errno(*timeo);
+-		if (signal_pending(current))
+-			break;
+-		err = -ETIMEDOUT;
+-		if (!*timeo)
+-			break;
+-	}
+-	remove_wait_queue(sk_sleep(sk), &wait);
+-out:
+-	if (err == 0) {
+-		sk->sk_socket->state = SS_CONNECTED;
+-	} else if (scp->state != DN_CI && scp->state != DN_CC) {
+-		sk->sk_socket->state = SS_UNCONNECTED;
+-	}
+-	return err;
+-}
+-
+-static int __dn_connect(struct sock *sk, struct sockaddr_dn *addr, int addrlen, long *timeo, int flags)
+-{
+-	struct socket *sock = sk->sk_socket;
+-	struct dn_scp *scp = DN_SK(sk);
+-	int err = -EISCONN;
+-	struct flowidn fld;
+-	struct dst_entry *dst;
+-
+-	if (sock->state == SS_CONNECTED)
+-		goto out;
+-
+-	if (sock->state == SS_CONNECTING) {
+-		err = 0;
+-		if (scp->state == DN_RUN) {
+-			sock->state = SS_CONNECTED;
+-			goto out;
+-		}
+-		err = -ECONNREFUSED;
+-		if (scp->state != DN_CI && scp->state != DN_CC) {
+-			sock->state = SS_UNCONNECTED;
+-			goto out;
+-		}
+-		return dn_wait_run(sk, timeo);
+-	}
+-
+-	err = -EINVAL;
+-	if (scp->state != DN_O)
+-		goto out;
+-
+-	if (addr == NULL || addrlen != sizeof(struct sockaddr_dn))
+-		goto out;
+-	if (addr->sdn_family != AF_DECnet)
+-		goto out;
+-	if (addr->sdn_flags & SDF_WILD)
+-		goto out;
+-
+-	if (sock_flag(sk, SOCK_ZAPPED)) {
+-		err = dn_auto_bind(sk->sk_socket);
+-		if (err)
+-			goto out;
+-	}
+-
+-	memcpy(&scp->peer, addr, sizeof(struct sockaddr_dn));
+-
+-	err = -EHOSTUNREACH;
+-	memset(&fld, 0, sizeof(fld));
+-	fld.flowidn_oif = sk->sk_bound_dev_if;
+-	fld.daddr = dn_saddr2dn(&scp->peer);
+-	fld.saddr = dn_saddr2dn(&scp->addr);
+-	dn_sk_ports_copy(&fld, scp);
+-	fld.flowidn_proto = DNPROTO_NSP;
+-	if (dn_route_output_sock(&sk->sk_dst_cache, &fld, sk, flags) < 0)
+-		goto out;
+-	dst = __sk_dst_get(sk);
+-	sk->sk_route_caps = dst->dev->features;
+-	sock->state = SS_CONNECTING;
+-	scp->state = DN_CI;
+-	scp->segsize_loc = dst_metric_advmss(dst);
+-
+-	dn_nsp_send_conninit(sk, NSP_CI);
+-	err = -EINPROGRESS;
+-	if (*timeo) {
+-		err = dn_wait_run(sk, timeo);
+-	}
+-out:
+-	return err;
+-}
+-
+-static int dn_connect(struct socket *sock, struct sockaddr *uaddr, int addrlen, int flags)
+-{
+-	struct sockaddr_dn *addr = (struct sockaddr_dn *)uaddr;
+-	struct sock *sk = sock->sk;
+-	int err;
+-	long timeo = sock_sndtimeo(sk, flags & O_NONBLOCK);
+-
+-	lock_sock(sk);
+-	err = __dn_connect(sk, addr, addrlen, &timeo, 0);
+-	release_sock(sk);
+-
+-	return err;
+-}
+-
+-static inline int dn_check_state(struct sock *sk, struct sockaddr_dn *addr, int addrlen, long *timeo, int flags)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	switch (scp->state) {
+-	case DN_RUN:
+-		return 0;
+-	case DN_CR:
+-		return dn_confirm_accept(sk, timeo, sk->sk_allocation);
+-	case DN_CI:
+-	case DN_CC:
+-		return dn_wait_run(sk, timeo);
+-	case DN_O:
+-		return __dn_connect(sk, addr, addrlen, timeo, flags);
+-	}
+-
+-	return -EINVAL;
+-}
+-
+-
+-static void dn_access_copy(struct sk_buff *skb, struct accessdata_dn *acc)
+-{
+-	unsigned char *ptr = skb->data;
+-
+-	acc->acc_userl = *ptr++;
+-	memcpy(&acc->acc_user, ptr, acc->acc_userl);
+-	ptr += acc->acc_userl;
+-
+-	acc->acc_passl = *ptr++;
+-	memcpy(&acc->acc_pass, ptr, acc->acc_passl);
+-	ptr += acc->acc_passl;
+-
+-	acc->acc_accl = *ptr++;
+-	memcpy(&acc->acc_acc, ptr, acc->acc_accl);
+-
+-	skb_pull(skb, acc->acc_accl + acc->acc_passl + acc->acc_userl + 3);
+-
+-}
+-
+-static void dn_user_copy(struct sk_buff *skb, struct optdata_dn *opt)
+-{
+-	unsigned char *ptr = skb->data;
+-	u16 len = *ptr++; /* yes, it's 8bit on the wire */
+-
+-	BUG_ON(len > 16); /* we've checked the contents earlier */
+-	opt->opt_optl   = cpu_to_le16(len);
+-	opt->opt_status = 0;
+-	memcpy(opt->opt_data, ptr, len);
+-	skb_pull(skb, len + 1);
+-}
+-
+-static struct sk_buff *dn_wait_for_connect(struct sock *sk, long *timeo)
+-{
+-	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+-	struct sk_buff *skb = NULL;
+-	int err = 0;
+-
+-	add_wait_queue(sk_sleep(sk), &wait);
+-	for(;;) {
+-		release_sock(sk);
+-		skb = skb_dequeue(&sk->sk_receive_queue);
+-		if (skb == NULL) {
+-			*timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, *timeo);
+-			skb = skb_dequeue(&sk->sk_receive_queue);
+-		}
+-		lock_sock(sk);
+-		if (skb != NULL)
+-			break;
+-		err = -EINVAL;
+-		if (sk->sk_state != TCP_LISTEN)
+-			break;
+-		err = sock_intr_errno(*timeo);
+-		if (signal_pending(current))
+-			break;
+-		err = -EAGAIN;
+-		if (!*timeo)
+-			break;
+-	}
+-	remove_wait_queue(sk_sleep(sk), &wait);
+-
+-	return skb == NULL ? ERR_PTR(err) : skb;
+-}
+-
+-static int dn_accept(struct socket *sock, struct socket *newsock, int flags,
+-		     bool kern)
+-{
+-	struct sock *sk = sock->sk, *newsk;
+-	struct sk_buff *skb = NULL;
+-	struct dn_skb_cb *cb;
+-	unsigned char menuver;
+-	int err = 0;
+-	unsigned char type;
+-	long timeo = sock_rcvtimeo(sk, flags & O_NONBLOCK);
+-	struct dst_entry *dst;
+-
+-	lock_sock(sk);
+-
+-	if (sk->sk_state != TCP_LISTEN || DN_SK(sk)->state != DN_O) {
+-		release_sock(sk);
+-		return -EINVAL;
+-	}
+-
+-	skb = skb_dequeue(&sk->sk_receive_queue);
+-	if (skb == NULL) {
+-		skb = dn_wait_for_connect(sk, &timeo);
+-		if (IS_ERR(skb)) {
+-			release_sock(sk);
+-			return PTR_ERR(skb);
+-		}
+-	}
+-
+-	cb = DN_SKB_CB(skb);
+-	sk_acceptq_removed(sk);
+-	newsk = dn_alloc_sock(sock_net(sk), newsock, sk->sk_allocation, kern);
+-	if (newsk == NULL) {
+-		release_sock(sk);
+-		kfree_skb(skb);
+-		return -ENOBUFS;
+-	}
+-	release_sock(sk);
+-
+-	dst = skb_dst(skb);
+-	sk_dst_set(newsk, dst);
+-	skb_dst_set(skb, NULL);
+-
+-	DN_SK(newsk)->state        = DN_CR;
+-	DN_SK(newsk)->addrrem      = cb->src_port;
+-	DN_SK(newsk)->services_rem = cb->services;
+-	DN_SK(newsk)->info_rem     = cb->info;
+-	DN_SK(newsk)->segsize_rem  = cb->segsize;
+-	DN_SK(newsk)->accept_mode  = DN_SK(sk)->accept_mode;
+-
+-	if (DN_SK(newsk)->segsize_rem < 230)
+-		DN_SK(newsk)->segsize_rem = 230;
+-
+-	if ((DN_SK(newsk)->services_rem & NSP_FC_MASK) == NSP_FC_NONE)
+-		DN_SK(newsk)->max_window = decnet_no_fc_max_cwnd;
+-
+-	newsk->sk_state  = TCP_LISTEN;
+-	memcpy(&(DN_SK(newsk)->addr), &(DN_SK(sk)->addr), sizeof(struct sockaddr_dn));
+-
+-	/*
+-	 * If we are listening on a wild socket, we don't want
+-	 * the newly created socket on the wrong hash queue.
+-	 */
+-	DN_SK(newsk)->addr.sdn_flags &= ~SDF_WILD;
+-
+-	skb_pull(skb, dn_username2sockaddr(skb->data, skb->len, &(DN_SK(newsk)->addr), &type));
+-	skb_pull(skb, dn_username2sockaddr(skb->data, skb->len, &(DN_SK(newsk)->peer), &type));
+-	*(__le16 *)(DN_SK(newsk)->peer.sdn_add.a_addr) = cb->src;
+-	*(__le16 *)(DN_SK(newsk)->addr.sdn_add.a_addr) = cb->dst;
+-
+-	menuver = *skb->data;
+-	skb_pull(skb, 1);
+-
+-	if (menuver & DN_MENUVER_ACC)
+-		dn_access_copy(skb, &(DN_SK(newsk)->accessdata));
+-
+-	if (menuver & DN_MENUVER_USR)
+-		dn_user_copy(skb, &(DN_SK(newsk)->conndata_in));
+-
+-	if (menuver & DN_MENUVER_PRX)
+-		DN_SK(newsk)->peer.sdn_flags |= SDF_PROXY;
+-
+-	if (menuver & DN_MENUVER_UIC)
+-		DN_SK(newsk)->peer.sdn_flags |= SDF_UICPROXY;
+-
+-	kfree_skb(skb);
+-
+-	memcpy(&(DN_SK(newsk)->conndata_out), &(DN_SK(sk)->conndata_out),
+-		sizeof(struct optdata_dn));
+-	memcpy(&(DN_SK(newsk)->discdata_out), &(DN_SK(sk)->discdata_out),
+-		sizeof(struct optdata_dn));
+-
+-	lock_sock(newsk);
+-	err = dn_hash_sock(newsk);
+-	if (err == 0) {
+-		sock_reset_flag(newsk, SOCK_ZAPPED);
+-		dn_send_conn_ack(newsk);
+-
+-		/*
+-		 * Here we use sk->sk_allocation since although the conn conf is
+-		 * for the newsk, the context is the old socket.
+-		 */
+-		if (DN_SK(newsk)->accept_mode == ACC_IMMED)
+-			err = dn_confirm_accept(newsk, &timeo,
+-						sk->sk_allocation);
+-	}
+-	release_sock(newsk);
+-	return err;
+-}
+-
+-
+-static int dn_getname(struct socket *sock, struct sockaddr *uaddr,int peer)
+-{
+-	struct sockaddr_dn *sa = (struct sockaddr_dn *)uaddr;
+-	struct sock *sk = sock->sk;
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	lock_sock(sk);
+-
+-	if (peer) {
+-		if ((sock->state != SS_CONNECTED &&
+-		     sock->state != SS_CONNECTING) &&
+-		    scp->accept_mode == ACC_IMMED) {
+-			release_sock(sk);
+-			return -ENOTCONN;
+-		}
+-
+-		memcpy(sa, &scp->peer, sizeof(struct sockaddr_dn));
+-	} else {
+-		memcpy(sa, &scp->addr, sizeof(struct sockaddr_dn));
+-	}
+-
+-	release_sock(sk);
+-
+-	return sizeof(struct sockaddr_dn);
+-}
+-
+-
+-static __poll_t dn_poll(struct file *file, struct socket *sock, poll_table  *wait)
+-{
+-	struct sock *sk = sock->sk;
+-	struct dn_scp *scp = DN_SK(sk);
+-	__poll_t mask = datagram_poll(file, sock, wait);
+-
+-	if (!skb_queue_empty_lockless(&scp->other_receive_queue))
+-		mask |= EPOLLRDBAND;
+-
+-	return mask;
+-}
+-
+-static int dn_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+-{
+-	struct sock *sk = sock->sk;
+-	struct dn_scp *scp = DN_SK(sk);
+-	int err = -EOPNOTSUPP;
+-	long amount = 0;
+-	struct sk_buff *skb;
+-	int val;
+-
+-	switch(cmd)
+-	{
+-	case SIOCGIFADDR:
+-	case SIOCSIFADDR:
+-		return dn_dev_ioctl(cmd, (void __user *)arg);
+-
+-	case SIOCATMARK:
+-		lock_sock(sk);
+-		val = !skb_queue_empty(&scp->other_receive_queue);
+-		if (scp->state != DN_RUN)
+-			val = -ENOTCONN;
+-		release_sock(sk);
+-		return val;
+-
+-	case TIOCOUTQ:
+-		amount = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
+-		if (amount < 0)
+-			amount = 0;
+-		err = put_user(amount, (int __user *)arg);
+-		break;
+-
+-	case TIOCINQ:
+-		lock_sock(sk);
+-		skb = skb_peek(&scp->other_receive_queue);
+-		if (skb) {
+-			amount = skb->len;
+-		} else {
+-			skb_queue_walk(&sk->sk_receive_queue, skb)
+-				amount += skb->len;
+-		}
+-		release_sock(sk);
+-		err = put_user(amount, (int __user *)arg);
+-		break;
+-
+-	default:
+-		err = -ENOIOCTLCMD;
+-		break;
+-	}
+-
+-	return err;
+-}
+-
+-static int dn_listen(struct socket *sock, int backlog)
+-{
+-	struct sock *sk = sock->sk;
+-	int err = -EINVAL;
+-
+-	lock_sock(sk);
+-
+-	if (sock_flag(sk, SOCK_ZAPPED))
+-		goto out;
+-
+-	if ((DN_SK(sk)->state != DN_O) || (sk->sk_state == TCP_LISTEN))
+-		goto out;
+-
+-	sk->sk_max_ack_backlog = backlog;
+-	sk->sk_ack_backlog     = 0;
+-	sk->sk_state           = TCP_LISTEN;
+-	err                 = 0;
+-	dn_rehash_sock(sk);
+-
+-out:
+-	release_sock(sk);
+-
+-	return err;
+-}
+-
+-
+-static int dn_shutdown(struct socket *sock, int how)
+-{
+-	struct sock *sk = sock->sk;
+-	struct dn_scp *scp = DN_SK(sk);
+-	int err = -ENOTCONN;
+-
+-	lock_sock(sk);
+-
+-	if (sock->state == SS_UNCONNECTED)
+-		goto out;
+-
+-	err = 0;
+-	if (sock->state == SS_DISCONNECTING)
+-		goto out;
+-
+-	err = -EINVAL;
+-	if (scp->state == DN_O)
+-		goto out;
+-
+-	if (how != SHUT_RDWR)
+-		goto out;
+-
+-	sk->sk_shutdown = SHUTDOWN_MASK;
+-	dn_destroy_sock(sk);
+-	err = 0;
+-
+-out:
+-	release_sock(sk);
+-
+-	return err;
+-}
+-
+-static int dn_setsockopt(struct socket *sock, int level, int optname,
+-		sockptr_t optval, unsigned int optlen)
+-{
+-	struct sock *sk = sock->sk;
+-	int err;
+-
+-	lock_sock(sk);
+-	err = __dn_setsockopt(sock, level, optname, optval, optlen, 0);
+-	release_sock(sk);
+-#ifdef CONFIG_NETFILTER
+-	/* we need to exclude all possible ENOPROTOOPTs except default case */
+-	if (err == -ENOPROTOOPT && optname != DSO_LINKINFO &&
+-	    optname != DSO_STREAM && optname != DSO_SEQPACKET)
+-		err = nf_setsockopt(sk, PF_DECnet, optname, optval, optlen);
+-#endif
+-
+-	return err;
+-}
+-
+-static int __dn_setsockopt(struct socket *sock, int level, int optname,
+-		sockptr_t optval, unsigned int optlen, int flags)
+-{
+-	struct	sock *sk = sock->sk;
+-	struct dn_scp *scp = DN_SK(sk);
+-	long timeo;
+-	union {
+-		struct optdata_dn opt;
+-		struct accessdata_dn acc;
+-		int mode;
+-		unsigned long win;
+-		int val;
+-		unsigned char services;
+-		unsigned char info;
+-	} u;
+-	int err;
+-
+-	if (optlen && sockptr_is_null(optval))
+-		return -EINVAL;
+-
+-	if (optlen > sizeof(u))
+-		return -EINVAL;
+-
+-	if (copy_from_sockptr(&u, optval, optlen))
+-		return -EFAULT;
+-
+-	switch (optname) {
+-	case DSO_CONDATA:
+-		if (sock->state == SS_CONNECTED)
+-			return -EISCONN;
+-		if ((scp->state != DN_O) && (scp->state != DN_CR))
+-			return -EINVAL;
+-
+-		if (optlen != sizeof(struct optdata_dn))
+-			return -EINVAL;
+-
+-		if (le16_to_cpu(u.opt.opt_optl) > 16)
+-			return -EINVAL;
+-
+-		memcpy(&scp->conndata_out, &u.opt, optlen);
+-		break;
+-
+-	case DSO_DISDATA:
+-		if (sock->state != SS_CONNECTED &&
+-		    scp->accept_mode == ACC_IMMED)
+-			return -ENOTCONN;
+-
+-		if (optlen != sizeof(struct optdata_dn))
+-			return -EINVAL;
+-
+-		if (le16_to_cpu(u.opt.opt_optl) > 16)
+-			return -EINVAL;
+-
+-		memcpy(&scp->discdata_out, &u.opt, optlen);
+-		break;
+-
+-	case DSO_CONACCESS:
+-		if (sock->state == SS_CONNECTED)
+-			return -EISCONN;
+-		if (scp->state != DN_O)
+-			return -EINVAL;
+-
+-		if (optlen != sizeof(struct accessdata_dn))
+-			return -EINVAL;
+-
+-		if ((u.acc.acc_accl > DN_MAXACCL) ||
+-		    (u.acc.acc_passl > DN_MAXACCL) ||
+-		    (u.acc.acc_userl > DN_MAXACCL))
+-			return -EINVAL;
+-
+-		memcpy(&scp->accessdata, &u.acc, optlen);
+-		break;
+-
+-	case DSO_ACCEPTMODE:
+-		if (sock->state == SS_CONNECTED)
+-			return -EISCONN;
+-		if (scp->state != DN_O)
+-			return -EINVAL;
+-
+-		if (optlen != sizeof(int))
+-			return -EINVAL;
+-
+-		if ((u.mode != ACC_IMMED) && (u.mode != ACC_DEFER))
+-			return -EINVAL;
+-
+-		scp->accept_mode = (unsigned char)u.mode;
+-		break;
+-
+-	case DSO_CONACCEPT:
+-		if (scp->state != DN_CR)
+-			return -EINVAL;
+-		timeo = sock_rcvtimeo(sk, 0);
+-		err = dn_confirm_accept(sk, &timeo, sk->sk_allocation);
+-		return err;
+-
+-	case DSO_CONREJECT:
+-		if (scp->state != DN_CR)
+-			return -EINVAL;
+-
+-		scp->state = DN_DR;
+-		sk->sk_shutdown = SHUTDOWN_MASK;
+-		dn_nsp_send_disc(sk, 0x38, 0, sk->sk_allocation);
+-		break;
+-
+-	case DSO_MAXWINDOW:
+-		if (optlen != sizeof(unsigned long))
+-			return -EINVAL;
+-		if (u.win > NSP_MAX_WINDOW)
+-			u.win = NSP_MAX_WINDOW;
+-		if (u.win == 0)
+-			return -EINVAL;
+-		scp->max_window = u.win;
+-		if (scp->snd_window > u.win)
+-			scp->snd_window = u.win;
+-		break;
+-
+-	case DSO_NODELAY:
+-		if (optlen != sizeof(int))
+-			return -EINVAL;
+-		if (scp->nonagle == TCP_NAGLE_CORK)
+-			return -EINVAL;
+-		scp->nonagle = (u.val == 0) ? 0 : TCP_NAGLE_OFF;
+-		/* if (scp->nonagle == 1) { Push pending frames } */
+-		break;
+-
+-	case DSO_CORK:
+-		if (optlen != sizeof(int))
+-			return -EINVAL;
+-		if (scp->nonagle == TCP_NAGLE_OFF)
+-			return -EINVAL;
+-		scp->nonagle = (u.val == 0) ? 0 : TCP_NAGLE_CORK;
+-		/* if (scp->nonagle == 0) { Push pending frames } */
+-		break;
+-
+-	case DSO_SERVICES:
+-		if (optlen != sizeof(unsigned char))
+-			return -EINVAL;
+-		if ((u.services & ~NSP_FC_MASK) != 0x01)
+-			return -EINVAL;
+-		if ((u.services & NSP_FC_MASK) == NSP_FC_MASK)
+-			return -EINVAL;
+-		scp->services_loc = u.services;
+-		break;
+-
+-	case DSO_INFO:
+-		if (optlen != sizeof(unsigned char))
+-			return -EINVAL;
+-		if (u.info & 0xfc)
+-			return -EINVAL;
+-		scp->info_loc = u.info;
+-		break;
+-
+-	case DSO_LINKINFO:
+-	case DSO_STREAM:
+-	case DSO_SEQPACKET:
+-	default:
+-		return -ENOPROTOOPT;
+-	}
+-
+-	return 0;
+-}
+-
+-static int dn_getsockopt(struct socket *sock, int level, int optname, char __user *optval, int __user *optlen)
+-{
+-	struct sock *sk = sock->sk;
+-	int err;
+-
+-	lock_sock(sk);
+-	err = __dn_getsockopt(sock, level, optname, optval, optlen, 0);
+-	release_sock(sk);
+-#ifdef CONFIG_NETFILTER
+-	if (err == -ENOPROTOOPT && optname != DSO_STREAM &&
+-	    optname != DSO_SEQPACKET && optname != DSO_CONACCEPT &&
+-	    optname != DSO_CONREJECT) {
+-		int len;
+-
+-		if (get_user(len, optlen))
+-			return -EFAULT;
+-
+-		err = nf_getsockopt(sk, PF_DECnet, optname, optval, &len);
+-		if (err >= 0)
+-			err = put_user(len, optlen);
+-	}
+-#endif
+-
+-	return err;
+-}
+-
+-static int __dn_getsockopt(struct socket *sock, int level,int optname, char __user *optval,int __user *optlen, int flags)
+-{
+-	struct	sock *sk = sock->sk;
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct linkinfo_dn link;
+-	unsigned int r_len;
+-	void *r_data = NULL;
+-	unsigned int val;
+-
+-	if(get_user(r_len , optlen))
+-		return -EFAULT;
+-
+-	switch (optname) {
+-	case DSO_CONDATA:
+-		if (r_len > sizeof(struct optdata_dn))
+-			r_len = sizeof(struct optdata_dn);
+-		r_data = &scp->conndata_in;
+-		break;
+-
+-	case DSO_DISDATA:
+-		if (r_len > sizeof(struct optdata_dn))
+-			r_len = sizeof(struct optdata_dn);
+-		r_data = &scp->discdata_in;
+-		break;
+-
+-	case DSO_CONACCESS:
+-		if (r_len > sizeof(struct accessdata_dn))
+-			r_len = sizeof(struct accessdata_dn);
+-		r_data = &scp->accessdata;
+-		break;
+-
+-	case DSO_ACCEPTMODE:
+-		if (r_len > sizeof(unsigned char))
+-			r_len = sizeof(unsigned char);
+-		r_data = &scp->accept_mode;
+-		break;
+-
+-	case DSO_LINKINFO:
+-		if (r_len > sizeof(struct linkinfo_dn))
+-			r_len = sizeof(struct linkinfo_dn);
+-
+-		memset(&link, 0, sizeof(link));
+-
+-		switch (sock->state) {
+-		case SS_CONNECTING:
+-			link.idn_linkstate = LL_CONNECTING;
+-			break;
+-		case SS_DISCONNECTING:
+-			link.idn_linkstate = LL_DISCONNECTING;
+-			break;
+-		case SS_CONNECTED:
+-			link.idn_linkstate = LL_RUNNING;
+-			break;
+-		default:
+-			link.idn_linkstate = LL_INACTIVE;
+-		}
+-
+-		link.idn_segsize = scp->segsize_rem;
+-		r_data = &link;
+-		break;
+-
+-	case DSO_MAXWINDOW:
+-		if (r_len > sizeof(unsigned long))
+-			r_len = sizeof(unsigned long);
+-		r_data = &scp->max_window;
+-		break;
+-
+-	case DSO_NODELAY:
+-		if (r_len > sizeof(int))
+-			r_len = sizeof(int);
+-		val = (scp->nonagle == TCP_NAGLE_OFF);
+-		r_data = &val;
+-		break;
+-
+-	case DSO_CORK:
+-		if (r_len > sizeof(int))
+-			r_len = sizeof(int);
+-		val = (scp->nonagle == TCP_NAGLE_CORK);
+-		r_data = &val;
+-		break;
+-
+-	case DSO_SERVICES:
+-		if (r_len > sizeof(unsigned char))
+-			r_len = sizeof(unsigned char);
+-		r_data = &scp->services_rem;
+-		break;
+-
+-	case DSO_INFO:
+-		if (r_len > sizeof(unsigned char))
+-			r_len = sizeof(unsigned char);
+-		r_data = &scp->info_rem;
+-		break;
+-
+-	case DSO_STREAM:
+-	case DSO_SEQPACKET:
+-	case DSO_CONACCEPT:
+-	case DSO_CONREJECT:
+-	default:
+-		return -ENOPROTOOPT;
+-	}
+-
+-	if (r_data) {
+-		if (copy_to_user(optval, r_data, r_len))
+-			return -EFAULT;
+-		if (put_user(r_len, optlen))
+-			return -EFAULT;
+-	}
+-
+-	return 0;
+-}
+-
+-
+-static int dn_data_ready(struct sock *sk, struct sk_buff_head *q, int flags, int target)
+-{
+-	struct sk_buff *skb;
+-	int len = 0;
+-
+-	if (flags & MSG_OOB)
+-		return !skb_queue_empty(q) ? 1 : 0;
+-
+-	skb_queue_walk(q, skb) {
+-		struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-		len += skb->len;
+-
+-		if (cb->nsp_flags & 0x40) {
+-			/* SOCK_SEQPACKET reads to EOM */
+-			if (sk->sk_type == SOCK_SEQPACKET)
+-				return 1;
+-			/* so does SOCK_STREAM unless WAITALL is specified */
+-			if (!(flags & MSG_WAITALL))
+-				return 1;
+-		}
+-
+-		/* minimum data length for read exceeded */
+-		if (len >= target)
+-			return 1;
+-	}
+-
+-	return 0;
+-}
+-
+-
+-static int dn_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+-		      int flags)
+-{
+-	struct sock *sk = sock->sk;
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct sk_buff_head *queue = &sk->sk_receive_queue;
+-	size_t target = size > 1 ? 1 : 0;
+-	size_t copied = 0;
+-	int rv = 0;
+-	struct sk_buff *skb, *n;
+-	struct dn_skb_cb *cb = NULL;
+-	unsigned char eor = 0;
+-	long timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
+-
+-	lock_sock(sk);
+-
+-	if (sock_flag(sk, SOCK_ZAPPED)) {
+-		rv = -EADDRNOTAVAIL;
+-		goto out;
+-	}
+-
+-	if (sk->sk_shutdown & RCV_SHUTDOWN) {
+-		rv = 0;
+-		goto out;
+-	}
+-
+-	rv = dn_check_state(sk, NULL, 0, &timeo, flags);
+-	if (rv)
+-		goto out;
+-
+-	if (flags & ~(MSG_CMSG_COMPAT|MSG_PEEK|MSG_OOB|MSG_WAITALL|MSG_DONTWAIT|MSG_NOSIGNAL)) {
+-		rv = -EOPNOTSUPP;
+-		goto out;
+-	}
+-
+-	if (flags & MSG_OOB)
+-		queue = &scp->other_receive_queue;
+-
+-	if (flags & MSG_WAITALL)
+-		target = size;
+-
+-
+-	/*
+-	 * See if there is data ready to read, sleep if there isn't
+-	 */
+-	for(;;) {
+-		DEFINE_WAIT_FUNC(wait, woken_wake_function);
+-
+-		if (sk->sk_err)
+-			goto out;
+-
+-		if (!skb_queue_empty(&scp->other_receive_queue)) {
+-			if (!(flags & MSG_OOB)) {
+-				msg->msg_flags |= MSG_OOB;
+-				if (!scp->other_report) {
+-					scp->other_report = 1;
+-					goto out;
+-				}
+-			}
+-		}
+-
+-		if (scp->state != DN_RUN)
+-			goto out;
+-
+-		if (signal_pending(current)) {
+-			rv = sock_intr_errno(timeo);
+-			goto out;
+-		}
+-
+-		if (dn_data_ready(sk, queue, flags, target))
+-			break;
+-
+-		if (flags & MSG_DONTWAIT) {
+-			rv = -EWOULDBLOCK;
+-			goto out;
+-		}
+-
+-		add_wait_queue(sk_sleep(sk), &wait);
+-		sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+-		sk_wait_event(sk, &timeo, dn_data_ready(sk, queue, flags, target), &wait);
+-		sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+-		remove_wait_queue(sk_sleep(sk), &wait);
+-	}
+-
+-	skb_queue_walk_safe(queue, skb, n) {
+-		unsigned int chunk = skb->len;
+-		cb = DN_SKB_CB(skb);
+-
+-		if ((chunk + copied) > size)
+-			chunk = size - copied;
+-
+-		if (memcpy_to_msg(msg, skb->data, chunk)) {
+-			rv = -EFAULT;
+-			break;
+-		}
+-		copied += chunk;
+-
+-		if (!(flags & MSG_PEEK))
+-			skb_pull(skb, chunk);
+-
+-		eor = cb->nsp_flags & 0x40;
+-
+-		if (skb->len == 0) {
+-			skb_unlink(skb, queue);
+-			kfree_skb(skb);
+-			/*
+-			 * N.B. Don't refer to skb or cb after this point
+-			 * in loop.
+-			 */
+-			if ((scp->flowloc_sw == DN_DONTSEND) && !dn_congested(sk)) {
+-				scp->flowloc_sw = DN_SEND;
+-				dn_nsp_send_link(sk, DN_SEND, 0);
+-			}
+-		}
+-
+-		if (eor) {
+-			if (sk->sk_type == SOCK_SEQPACKET)
+-				break;
+-			if (!(flags & MSG_WAITALL))
+-				break;
+-		}
+-
+-		if (flags & MSG_OOB)
+-			break;
+-
+-		if (copied >= target)
+-			break;
+-	}
+-
+-	rv = copied;
+-
+-
+-	if (eor && (sk->sk_type == SOCK_SEQPACKET))
+-		msg->msg_flags |= MSG_EOR;
+-
+-out:
+-	if (rv == 0)
+-		rv = (flags & MSG_PEEK) ? -sk->sk_err : sock_error(sk);
+-
+-	if ((rv >= 0) && msg->msg_name) {
+-		__sockaddr_check_size(sizeof(struct sockaddr_dn));
+-		memcpy(msg->msg_name, &scp->peer, sizeof(struct sockaddr_dn));
+-		msg->msg_namelen = sizeof(struct sockaddr_dn);
+-	}
+-
+-	release_sock(sk);
+-
+-	return rv;
+-}
+-
+-
+-static inline int dn_queue_too_long(struct dn_scp *scp, struct sk_buff_head *queue, int flags)
+-{
+-	unsigned char fctype = scp->services_rem & NSP_FC_MASK;
+-	if (skb_queue_len(queue) >= scp->snd_window)
+-		return 1;
+-	if (fctype != NSP_FC_NONE) {
+-		if (flags & MSG_OOB) {
+-			if (scp->flowrem_oth == 0)
+-				return 1;
+-		} else {
+-			if (scp->flowrem_dat == 0)
+-				return 1;
+-		}
+-	}
+-	return 0;
+-}
+-
+-/*
+- * The DECnet spec requires that the "routing layer" accepts packets which
+- * are at least 230 bytes in size. This excludes any headers which the NSP
+- * layer might add, so we always assume that we'll be using the maximal
+- * length header on data packets. The variation in length is due to the
+- * inclusion (or not) of the two 16 bit acknowledgement fields so it doesn't
+- * make much practical difference.
+- */
+-unsigned int dn_mss_from_pmtu(struct net_device *dev, int mtu)
+-{
+-	unsigned int mss = 230 - DN_MAX_NSP_DATA_HEADER;
+-	if (dev) {
+-		struct dn_dev *dn_db = rcu_dereference_raw(dev->dn_ptr);
+-		mtu -= LL_RESERVED_SPACE(dev);
+-		if (dn_db->use_long)
+-			mtu -= 21;
+-		else
+-			mtu -= 6;
+-		mtu -= DN_MAX_NSP_DATA_HEADER;
+-	} else {
+-		/*
+-		 * 21 = long header, 16 = guess at MAC header length
+-		 */
+-		mtu -= (21 + DN_MAX_NSP_DATA_HEADER + 16);
+-	}
+-	if (mtu > mss)
+-		mss = mtu;
+-	return mss;
+-}
+-
+-static inline unsigned int dn_current_mss(struct sock *sk, int flags)
+-{
+-	struct dst_entry *dst = __sk_dst_get(sk);
+-	struct dn_scp *scp = DN_SK(sk);
+-	int mss_now = min_t(int, scp->segsize_loc, scp->segsize_rem);
+-
+-	/* Other data messages are limited to 16 bytes per packet */
+-	if (flags & MSG_OOB)
+-		return 16;
+-
+-	/* This works out the maximum size of segment we can send out */
+-	if (dst) {
+-		u32 mtu = dst_mtu(dst);
+-		mss_now = min_t(int, dn_mss_from_pmtu(dst->dev, mtu), mss_now);
+-	}
+-
+-	return mss_now;
+-}
+-
+-/*
+- * N.B. We get the timeout wrong here, but then we always did get it
+- * wrong before and this is another step along the road to correcting
+- * it. It ought to get updated each time we pass through the routine,
+- * but in practise it probably doesn't matter too much for now.
+- */
+-static inline struct sk_buff *dn_alloc_send_pskb(struct sock *sk,
+-			      unsigned long datalen, int noblock,
+-			      int *errcode)
+-{
+-	struct sk_buff *skb = sock_alloc_send_skb(sk, datalen,
+-						   noblock, errcode);
+-	if (skb) {
+-		skb->protocol = htons(ETH_P_DNA_RT);
+-		skb->pkt_type = PACKET_OUTGOING;
+-	}
+-	return skb;
+-}
+-
+-static int dn_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+-{
+-	struct sock *sk = sock->sk;
+-	struct dn_scp *scp = DN_SK(sk);
+-	size_t mss;
+-	struct sk_buff_head *queue = &scp->data_xmit_queue;
+-	int flags = msg->msg_flags;
+-	int err = 0;
+-	size_t sent = 0;
+-	int addr_len = msg->msg_namelen;
+-	DECLARE_SOCKADDR(struct sockaddr_dn *, addr, msg->msg_name);
+-	struct sk_buff *skb = NULL;
+-	struct dn_skb_cb *cb;
+-	size_t len;
+-	unsigned char fctype;
+-	long timeo;
+-
+-	if (flags & ~(MSG_TRYHARD|MSG_OOB|MSG_DONTWAIT|MSG_EOR|MSG_NOSIGNAL|MSG_MORE|MSG_CMSG_COMPAT))
+-		return -EOPNOTSUPP;
+-
+-	if (addr_len && (addr_len != sizeof(struct sockaddr_dn)))
+-		return -EINVAL;
+-
+-	lock_sock(sk);
+-	timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT);
+-	/*
+-	 * The only difference between stream sockets and sequenced packet
+-	 * sockets is that the stream sockets always behave as if MSG_EOR
+-	 * has been set.
+-	 */
+-	if (sock->type == SOCK_STREAM) {
+-		if (flags & MSG_EOR) {
+-			err = -EINVAL;
+-			goto out;
+-		}
+-		flags |= MSG_EOR;
+-	}
+-
+-
+-	err = dn_check_state(sk, addr, addr_len, &timeo, flags);
+-	if (err)
+-		goto out_err;
+-
+-	if (sk->sk_shutdown & SEND_SHUTDOWN) {
+-		err = -EPIPE;
+-		if (!(flags & MSG_NOSIGNAL))
+-			send_sig(SIGPIPE, current, 0);
+-		goto out_err;
+-	}
+-
+-	if ((flags & MSG_TRYHARD) && sk->sk_dst_cache)
+-		dst_negative_advice(sk);
+-
+-	mss = scp->segsize_rem;
+-	fctype = scp->services_rem & NSP_FC_MASK;
+-
+-	mss = dn_current_mss(sk, flags);
+-
+-	if (flags & MSG_OOB) {
+-		queue = &scp->other_xmit_queue;
+-		if (size > mss) {
+-			err = -EMSGSIZE;
+-			goto out;
+-		}
+-	}
+-
+-	scp->persist_fxn = dn_nsp_xmit_timeout;
+-
+-	while(sent < size) {
+-		err = sock_error(sk);
+-		if (err)
+-			goto out;
+-
+-		if (signal_pending(current)) {
+-			err = sock_intr_errno(timeo);
+-			goto out;
+-		}
+-
+-		/*
+-		 * Calculate size that we wish to send.
+-		 */
+-		len = size - sent;
+-
+-		if (len > mss)
+-			len = mss;
+-
+-		/*
+-		 * Wait for queue size to go down below the window
+-		 * size.
+-		 */
+-		if (dn_queue_too_long(scp, queue, flags)) {
+-			DEFINE_WAIT_FUNC(wait, woken_wake_function);
+-
+-			if (flags & MSG_DONTWAIT) {
+-				err = -EWOULDBLOCK;
+-				goto out;
+-			}
+-
+-			add_wait_queue(sk_sleep(sk), &wait);
+-			sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+-			sk_wait_event(sk, &timeo,
+-				      !dn_queue_too_long(scp, queue, flags), &wait);
+-			sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+-			remove_wait_queue(sk_sleep(sk), &wait);
+-			continue;
+-		}
+-
+-		/*
+-		 * Get a suitably sized skb.
+-		 * 64 is a bit of a hack really, but its larger than any
+-		 * link-layer headers and has served us well as a good
+-		 * guess as to their real length.
+-		 */
+-		skb = dn_alloc_send_pskb(sk, len + 64 + DN_MAX_NSP_DATA_HEADER,
+-					 flags & MSG_DONTWAIT, &err);
+-
+-		if (err)
+-			break;
+-
+-		if (!skb)
+-			continue;
+-
+-		cb = DN_SKB_CB(skb);
+-
+-		skb_reserve(skb, 64 + DN_MAX_NSP_DATA_HEADER);
+-
+-		if (memcpy_from_msg(skb_put(skb, len), msg, len)) {
+-			err = -EFAULT;
+-			goto out;
+-		}
+-
+-		if (flags & MSG_OOB) {
+-			cb->nsp_flags = 0x30;
+-			if (fctype != NSP_FC_NONE)
+-				scp->flowrem_oth--;
+-		} else {
+-			cb->nsp_flags = 0x00;
+-			if (scp->seg_total == 0)
+-				cb->nsp_flags |= 0x20;
+-
+-			scp->seg_total += len;
+-
+-			if (((sent + len) == size) && (flags & MSG_EOR)) {
+-				cb->nsp_flags |= 0x40;
+-				scp->seg_total = 0;
+-				if (fctype == NSP_FC_SCMC)
+-					scp->flowrem_dat--;
+-			}
+-			if (fctype == NSP_FC_SRC)
+-				scp->flowrem_dat--;
+-		}
+-
+-		sent += len;
+-		dn_nsp_queue_xmit(sk, skb, sk->sk_allocation, flags & MSG_OOB);
+-		skb = NULL;
+-
+-		scp->persist = dn_nsp_persist(sk);
+-
+-	}
+-out:
+-
+-	kfree_skb(skb);
+-
+-	release_sock(sk);
+-
+-	return sent ? sent : err;
+-
+-out_err:
+-	err = sk_stream_error(sk, flags, err);
+-	release_sock(sk);
+-	return err;
+-}
+-
+-static int dn_device_event(struct notifier_block *this, unsigned long event,
+-			   void *ptr)
+-{
+-	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+-
+-	if (!net_eq(dev_net(dev), &init_net))
+-		return NOTIFY_DONE;
+-
+-	switch (event) {
+-	case NETDEV_UP:
+-		dn_dev_up(dev);
+-		break;
+-	case NETDEV_DOWN:
+-		dn_dev_down(dev);
+-		break;
+-	default:
+-		break;
+-	}
+-
+-	return NOTIFY_DONE;
+-}
+-
+-static struct notifier_block dn_dev_notifier = {
+-	.notifier_call = dn_device_event,
+-};
+-
+-static struct packet_type dn_dix_packet_type __read_mostly = {
+-	.type =		cpu_to_be16(ETH_P_DNA_RT),
+-	.func =		dn_route_rcv,
+-};
+-
+-#ifdef CONFIG_PROC_FS
+-struct dn_iter_state {
+-	int bucket;
+-};
+-
+-static struct sock *dn_socket_get_first(struct seq_file *seq)
+-{
+-	struct dn_iter_state *state = seq->private;
+-	struct sock *n = NULL;
+-
+-	for(state->bucket = 0;
+-	    state->bucket < DN_SK_HASH_SIZE;
+-	    ++state->bucket) {
+-		n = sk_head(&dn_sk_hash[state->bucket]);
+-		if (n)
+-			break;
+-	}
+-
+-	return n;
+-}
+-
+-static struct sock *dn_socket_get_next(struct seq_file *seq,
+-				       struct sock *n)
+-{
+-	struct dn_iter_state *state = seq->private;
+-
+-	n = sk_next(n);
+-	while (!n) {
+-		if (++state->bucket >= DN_SK_HASH_SIZE)
+-			break;
+-		n = sk_head(&dn_sk_hash[state->bucket]);
+-	}
+-	return n;
+-}
+-
+-static struct sock *socket_get_idx(struct seq_file *seq, loff_t *pos)
+-{
+-	struct sock *sk = dn_socket_get_first(seq);
+-
+-	if (sk) {
+-		while(*pos && (sk = dn_socket_get_next(seq, sk)))
+-			--*pos;
+-	}
+-	return *pos ? NULL : sk;
+-}
+-
+-static void *dn_socket_get_idx(struct seq_file *seq, loff_t pos)
+-{
+-	void *rc;
+-	read_lock_bh(&dn_hash_lock);
+-	rc = socket_get_idx(seq, &pos);
+-	if (!rc) {
+-		read_unlock_bh(&dn_hash_lock);
+-	}
+-	return rc;
+-}
+-
+-static void *dn_socket_seq_start(struct seq_file *seq, loff_t *pos)
+-{
+-	return *pos ? dn_socket_get_idx(seq, *pos - 1) : SEQ_START_TOKEN;
+-}
+-
+-static void *dn_socket_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+-{
+-	void *rc;
+-
+-	if (v == SEQ_START_TOKEN) {
+-		rc = dn_socket_get_idx(seq, 0);
+-		goto out;
+-	}
+-
+-	rc = dn_socket_get_next(seq, v);
+-	if (rc)
+-		goto out;
+-	read_unlock_bh(&dn_hash_lock);
+-out:
+-	++*pos;
+-	return rc;
+-}
+-
+-static void dn_socket_seq_stop(struct seq_file *seq, void *v)
+-{
+-	if (v && v != SEQ_START_TOKEN)
+-		read_unlock_bh(&dn_hash_lock);
+-}
+-
+-#define IS_NOT_PRINTABLE(x) ((x) < 32 || (x) > 126)
+-
+-static void dn_printable_object(struct sockaddr_dn *dn, unsigned char *buf)
+-{
+-	int i;
+-
+-	switch (le16_to_cpu(dn->sdn_objnamel)) {
+-	case 0:
+-		sprintf(buf, "%d", dn->sdn_objnum);
+-		break;
+-	default:
+-		for (i = 0; i < le16_to_cpu(dn->sdn_objnamel); i++) {
+-			buf[i] = dn->sdn_objname[i];
+-			if (IS_NOT_PRINTABLE(buf[i]))
+-				buf[i] = '.';
+-		}
+-		buf[i] = 0;
+-	}
+-}
+-
+-static char *dn_state2asc(unsigned char state)
+-{
+-	switch (state) {
+-	case DN_O:
+-		return "OPEN";
+-	case DN_CR:
+-		return "  CR";
+-	case DN_DR:
+-		return "  DR";
+-	case DN_DRC:
+-		return " DRC";
+-	case DN_CC:
+-		return "  CC";
+-	case DN_CI:
+-		return "  CI";
+-	case DN_NR:
+-		return "  NR";
+-	case DN_NC:
+-		return "  NC";
+-	case DN_CD:
+-		return "  CD";
+-	case DN_RJ:
+-		return "  RJ";
+-	case DN_RUN:
+-		return " RUN";
+-	case DN_DI:
+-		return "  DI";
+-	case DN_DIC:
+-		return " DIC";
+-	case DN_DN:
+-		return "  DN";
+-	case DN_CL:
+-		return "  CL";
+-	case DN_CN:
+-		return "  CN";
+-	}
+-
+-	return "????";
+-}
+-
+-static inline void dn_socket_format_entry(struct seq_file *seq, struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	char buf1[DN_ASCBUF_LEN];
+-	char buf2[DN_ASCBUF_LEN];
+-	char local_object[DN_MAXOBJL+3];
+-	char remote_object[DN_MAXOBJL+3];
+-
+-	dn_printable_object(&scp->addr, local_object);
+-	dn_printable_object(&scp->peer, remote_object);
+-
+-	seq_printf(seq,
+-		   "%6s/%04X %04d:%04d %04d:%04d %01d %-16s "
+-		   "%6s/%04X %04d:%04d %04d:%04d %01d %-16s %4s %s\n",
+-		   dn_addr2asc(le16_to_cpu(dn_saddr2dn(&scp->addr)), buf1),
+-		   scp->addrloc,
+-		   scp->numdat,
+-		   scp->numoth,
+-		   scp->ackxmt_dat,
+-		   scp->ackxmt_oth,
+-		   scp->flowloc_sw,
+-		   local_object,
+-		   dn_addr2asc(le16_to_cpu(dn_saddr2dn(&scp->peer)), buf2),
+-		   scp->addrrem,
+-		   scp->numdat_rcv,
+-		   scp->numoth_rcv,
+-		   scp->ackrcv_dat,
+-		   scp->ackrcv_oth,
+-		   scp->flowrem_sw,
+-		   remote_object,
+-		   dn_state2asc(scp->state),
+-		   ((scp->accept_mode == ACC_IMMED) ? "IMMED" : "DEFER"));
+-}
+-
+-static int dn_socket_seq_show(struct seq_file *seq, void *v)
+-{
+-	if (v == SEQ_START_TOKEN) {
+-		seq_puts(seq, "Local                                              Remote\n");
+-	} else {
+-		dn_socket_format_entry(seq, v);
+-	}
+-	return 0;
+-}
+-
+-static const struct seq_operations dn_socket_seq_ops = {
+-	.start	= dn_socket_seq_start,
+-	.next	= dn_socket_seq_next,
+-	.stop	= dn_socket_seq_stop,
+-	.show	= dn_socket_seq_show,
+-};
+-#endif
+-
+-static const struct net_proto_family	dn_family_ops = {
+-	.family =	AF_DECnet,
+-	.create =	dn_create,
+-	.owner	=	THIS_MODULE,
+-};
+-
+-static const struct proto_ops dn_proto_ops = {
+-	.family =	AF_DECnet,
+-	.owner =	THIS_MODULE,
+-	.release =	dn_release,
+-	.bind =		dn_bind,
+-	.connect =	dn_connect,
+-	.socketpair =	sock_no_socketpair,
+-	.accept =	dn_accept,
+-	.getname =	dn_getname,
+-	.poll =		dn_poll,
+-	.ioctl =	dn_ioctl,
+-	.listen =	dn_listen,
+-	.shutdown =	dn_shutdown,
+-	.setsockopt =	dn_setsockopt,
+-	.getsockopt =	dn_getsockopt,
+-	.sendmsg =	dn_sendmsg,
+-	.recvmsg =	dn_recvmsg,
+-	.mmap =		sock_no_mmap,
+-	.sendpage =	sock_no_sendpage,
+-};
+-
+-MODULE_DESCRIPTION("The Linux DECnet Network Protocol");
+-MODULE_AUTHOR("Linux DECnet Project Team");
+-MODULE_LICENSE("GPL");
+-MODULE_ALIAS_NETPROTO(PF_DECnet);
+-
+-static const char banner[] __initconst = KERN_INFO
+-"NET4: DECnet for Linux: V.2.5.68s (C) 1995-2003 Linux DECnet Project Team\n";
+-
+-static int __init decnet_init(void)
+-{
+-	int rc;
+-
+-	printk(banner);
+-
+-	rc = proto_register(&dn_proto, 1);
+-	if (rc != 0)
+-		goto out;
+-
+-	dn_neigh_init();
+-	dn_dev_init();
+-	dn_route_init();
+-	dn_fib_init();
+-
+-	sock_register(&dn_family_ops);
+-	dev_add_pack(&dn_dix_packet_type);
+-	register_netdevice_notifier(&dn_dev_notifier);
+-
+-	proc_create_seq_private("decnet", 0444, init_net.proc_net,
+-			&dn_socket_seq_ops, sizeof(struct dn_iter_state),
+-			NULL);
+-	dn_register_sysctl();
+-out:
+-	return rc;
+-
+-}
+-module_init(decnet_init);
+-
+-/*
+- * Prevent DECnet module unloading until its fixed properly.
+- * Requires an audit of the code to check for memory leaks and
+- * initialisation problems etc.
+- */
+-#if 0
+-static void __exit decnet_exit(void)
+-{
+-	sock_unregister(AF_DECnet);
+-	rtnl_unregister_all(PF_DECnet);
+-	dev_remove_pack(&dn_dix_packet_type);
+-
+-	dn_unregister_sysctl();
+-
+-	unregister_netdevice_notifier(&dn_dev_notifier);
+-
+-	dn_route_cleanup();
+-	dn_dev_cleanup();
+-	dn_neigh_cleanup();
+-	dn_fib_cleanup();
+-
+-	remove_proc_entry("decnet", init_net.proc_net);
+-
+-	proto_unregister(&dn_proto);
+-
+-	rcu_barrier(); /* Wait for completion of call_rcu()'s */
+-}
+-module_exit(decnet_exit);
+-#endif
+diff --git a/net/decnet/dn_dev.c b/net/decnet/dn_dev.c
+deleted file mode 100644
+index 15d42353f1a38..0000000000000
+--- a/net/decnet/dn_dev.c
++++ /dev/null
+@@ -1,1435 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * DECnet       An implementation of the DECnet protocol suite for the LINUX
+- *              operating system.  DECnet is implemented using the  BSD Socket
+- *              interface as the means of communication with the user level.
+- *
+- *              DECnet Device Layer
+- *
+- * Authors:     Steve Whitehouse <SteveW@ACM.org>
+- *              Eduardo Marcelo Serrat <emserrat@geocities.com>
+- *
+- * Changes:
+- *          Steve Whitehouse : Devices now see incoming frames so they
+- *                             can mark on who it came from.
+- *          Steve Whitehouse : Fixed bug in creating neighbours. Each neighbour
+- *                             can now have a device specific setup func.
+- *          Steve Whitehouse : Added /proc/sys/net/decnet/conf/<dev>/
+- *          Steve Whitehouse : Fixed bug which sometimes killed timer
+- *          Steve Whitehouse : Multiple ifaddr support
+- *          Steve Whitehouse : SIOCGIFCONF is now a compile time option
+- *          Steve Whitehouse : /proc/sys/net/decnet/conf/<sys>/forwarding
+- *          Steve Whitehouse : Removed timer1 - it's a user space issue now
+- *         Patrick Caulfield : Fixed router hello message format
+- *          Steve Whitehouse : Got rid of constant sizes for blksize for
+- *                             devices. All mtu based now.
+- */
+-
+-#include <linux/capability.h>
+-#include <linux/module.h>
+-#include <linux/moduleparam.h>
+-#include <linux/init.h>
+-#include <linux/net.h>
+-#include <linux/netdevice.h>
+-#include <linux/proc_fs.h>
+-#include <linux/seq_file.h>
+-#include <linux/timer.h>
+-#include <linux/string.h>
+-#include <linux/if_addr.h>
+-#include <linux/if_arp.h>
+-#include <linux/if_ether.h>
+-#include <linux/skbuff.h>
+-#include <linux/sysctl.h>
+-#include <linux/notifier.h>
+-#include <linux/slab.h>
+-#include <linux/jiffies.h>
+-#include <linux/uaccess.h>
+-#include <net/net_namespace.h>
+-#include <net/neighbour.h>
+-#include <net/dst.h>
+-#include <net/flow.h>
+-#include <net/fib_rules.h>
+-#include <net/netlink.h>
+-#include <net/dn.h>
+-#include <net/dn_dev.h>
+-#include <net/dn_route.h>
+-#include <net/dn_neigh.h>
+-#include <net/dn_fib.h>
+-
+-#define DN_IFREQ_SIZE (offsetof(struct ifreq, ifr_ifru) + sizeof(struct sockaddr_dn))
+-
+-static char dn_rt_all_end_mcast[ETH_ALEN] = {0xAB,0x00,0x00,0x04,0x00,0x00};
+-static char dn_rt_all_rt_mcast[ETH_ALEN]  = {0xAB,0x00,0x00,0x03,0x00,0x00};
+-static char dn_hiord[ETH_ALEN]            = {0xAA,0x00,0x04,0x00,0x00,0x00};
+-static unsigned char dn_eco_version[3]    = {0x02,0x00,0x00};
+-
+-extern struct neigh_table dn_neigh_table;
+-
+-/*
+- * decnet_address is kept in network order.
+- */
+-__le16 decnet_address = 0;
+-
+-static DEFINE_SPINLOCK(dndev_lock);
+-static struct net_device *decnet_default_device;
+-static BLOCKING_NOTIFIER_HEAD(dnaddr_chain);
+-
+-static struct dn_dev *dn_dev_create(struct net_device *dev, int *err);
+-static void dn_dev_delete(struct net_device *dev);
+-static void dn_ifaddr_notify(int event, struct dn_ifaddr *ifa);
+-
+-static int dn_eth_up(struct net_device *);
+-static void dn_eth_down(struct net_device *);
+-static void dn_send_brd_hello(struct net_device *dev, struct dn_ifaddr *ifa);
+-static void dn_send_ptp_hello(struct net_device *dev, struct dn_ifaddr *ifa);
+-
+-static struct dn_dev_parms dn_dev_list[] =  {
+-{
+-	.type =		ARPHRD_ETHER, /* Ethernet */
+-	.mode =		DN_DEV_BCAST,
+-	.state =	DN_DEV_S_RU,
+-	.t2 =		1,
+-	.t3 =		10,
+-	.name =		"ethernet",
+-	.up =		dn_eth_up,
+-	.down = 	dn_eth_down,
+-	.timer3 =	dn_send_brd_hello,
+-},
+-{
+-	.type =		ARPHRD_IPGRE, /* DECnet tunneled over GRE in IP */
+-	.mode =		DN_DEV_BCAST,
+-	.state =	DN_DEV_S_RU,
+-	.t2 =		1,
+-	.t3 =		10,
+-	.name =		"ipgre",
+-	.timer3 =	dn_send_brd_hello,
+-},
+-#if 0
+-{
+-	.type =		ARPHRD_X25, /* Bog standard X.25 */
+-	.mode =		DN_DEV_UCAST,
+-	.state =	DN_DEV_S_DS,
+-	.t2 =		1,
+-	.t3 =		120,
+-	.name =		"x25",
+-	.timer3 =	dn_send_ptp_hello,
+-},
+-#endif
+-#if 0
+-{
+-	.type =		ARPHRD_PPP, /* DECnet over PPP */
+-	.mode =		DN_DEV_BCAST,
+-	.state =	DN_DEV_S_RU,
+-	.t2 =		1,
+-	.t3 =		10,
+-	.name =		"ppp",
+-	.timer3 =	dn_send_brd_hello,
+-},
+-#endif
+-{
+-	.type =		ARPHRD_DDCMP, /* DECnet over DDCMP */
+-	.mode =		DN_DEV_UCAST,
+-	.state =	DN_DEV_S_DS,
+-	.t2 =		1,
+-	.t3 =		120,
+-	.name =		"ddcmp",
+-	.timer3 =	dn_send_ptp_hello,
+-},
+-{
+-	.type =		ARPHRD_LOOPBACK, /* Loopback interface - always last */
+-	.mode =		DN_DEV_BCAST,
+-	.state =	DN_DEV_S_RU,
+-	.t2 =		1,
+-	.t3 =		10,
+-	.name =		"loopback",
+-	.timer3 =	dn_send_brd_hello,
+-}
+-};
+-
+-#define DN_DEV_LIST_SIZE ARRAY_SIZE(dn_dev_list)
+-
+-#define DN_DEV_PARMS_OFFSET(x) offsetof(struct dn_dev_parms, x)
+-
+-#ifdef CONFIG_SYSCTL
+-
+-static int min_t2[] = { 1 };
+-static int max_t2[] = { 60 }; /* No max specified, but this seems sensible */
+-static int min_t3[] = { 1 };
+-static int max_t3[] = { 8191 }; /* Must fit in 16 bits when multiplied by BCT3MULT or T3MULT */
+-
+-static int min_priority[1];
+-static int max_priority[] = { 127 }; /* From DECnet spec */
+-
+-static int dn_forwarding_proc(struct ctl_table *, int, void *, size_t *,
+-		loff_t *);
+-static struct dn_dev_sysctl_table {
+-	struct ctl_table_header *sysctl_header;
+-	struct ctl_table dn_dev_vars[5];
+-} dn_dev_sysctl = {
+-	NULL,
+-	{
+-	{
+-		.procname = "forwarding",
+-		.data = (void *)DN_DEV_PARMS_OFFSET(forwarding),
+-		.maxlen = sizeof(int),
+-		.mode = 0644,
+-		.proc_handler = dn_forwarding_proc,
+-	},
+-	{
+-		.procname = "priority",
+-		.data = (void *)DN_DEV_PARMS_OFFSET(priority),
+-		.maxlen = sizeof(int),
+-		.mode = 0644,
+-		.proc_handler = proc_dointvec_minmax,
+-		.extra1 = &min_priority,
+-		.extra2 = &max_priority
+-	},
+-	{
+-		.procname = "t2",
+-		.data = (void *)DN_DEV_PARMS_OFFSET(t2),
+-		.maxlen = sizeof(int),
+-		.mode = 0644,
+-		.proc_handler = proc_dointvec_minmax,
+-		.extra1 = &min_t2,
+-		.extra2 = &max_t2
+-	},
+-	{
+-		.procname = "t3",
+-		.data = (void *)DN_DEV_PARMS_OFFSET(t3),
+-		.maxlen = sizeof(int),
+-		.mode = 0644,
+-		.proc_handler = proc_dointvec_minmax,
+-		.extra1 = &min_t3,
+-		.extra2 = &max_t3
+-	},
+-	{ }
+-	},
+-};
+-
+-static void dn_dev_sysctl_register(struct net_device *dev, struct dn_dev_parms *parms)
+-{
+-	struct dn_dev_sysctl_table *t;
+-	int i;
+-
+-	char path[sizeof("net/decnet/conf/") + IFNAMSIZ];
+-
+-	t = kmemdup(&dn_dev_sysctl, sizeof(*t), GFP_KERNEL);
+-	if (t == NULL)
+-		return;
+-
+-	for(i = 0; i < ARRAY_SIZE(t->dn_dev_vars) - 1; i++) {
+-		long offset = (long)t->dn_dev_vars[i].data;
+-		t->dn_dev_vars[i].data = ((char *)parms) + offset;
+-	}
+-
+-	snprintf(path, sizeof(path), "net/decnet/conf/%s",
+-		dev? dev->name : parms->name);
+-
+-	t->dn_dev_vars[0].extra1 = (void *)dev;
+-
+-	t->sysctl_header = register_net_sysctl(&init_net, path, t->dn_dev_vars);
+-	if (t->sysctl_header == NULL)
+-		kfree(t);
+-	else
+-		parms->sysctl = t;
+-}
+-
+-static void dn_dev_sysctl_unregister(struct dn_dev_parms *parms)
+-{
+-	if (parms->sysctl) {
+-		struct dn_dev_sysctl_table *t = parms->sysctl;
+-		parms->sysctl = NULL;
+-		unregister_net_sysctl_table(t->sysctl_header);
+-		kfree(t);
+-	}
+-}
+-
+-static int dn_forwarding_proc(struct ctl_table *table, int write,
+-		void *buffer, size_t *lenp, loff_t *ppos)
+-{
+-#ifdef CONFIG_DECNET_ROUTER
+-	struct net_device *dev = table->extra1;
+-	struct dn_dev *dn_db;
+-	int err;
+-	int tmp, old;
+-
+-	if (table->extra1 == NULL)
+-		return -EINVAL;
+-
+-	dn_db = rcu_dereference_raw(dev->dn_ptr);
+-	old = dn_db->parms.forwarding;
+-
+-	err = proc_dointvec(table, write, buffer, lenp, ppos);
+-
+-	if ((err >= 0) && write) {
+-		if (dn_db->parms.forwarding < 0)
+-			dn_db->parms.forwarding = 0;
+-		if (dn_db->parms.forwarding > 2)
+-			dn_db->parms.forwarding = 2;
+-		/*
+-		 * What an ugly hack this is... its works, just. It
+-		 * would be nice if sysctl/proc were just that little
+-		 * bit more flexible so I don't have to write a special
+-		 * routine, or suffer hacks like this - SJW
+-		 */
+-		tmp = dn_db->parms.forwarding;
+-		dn_db->parms.forwarding = old;
+-		if (dn_db->parms.down)
+-			dn_db->parms.down(dev);
+-		dn_db->parms.forwarding = tmp;
+-		if (dn_db->parms.up)
+-			dn_db->parms.up(dev);
+-	}
+-
+-	return err;
+-#else
+-	return -EINVAL;
+-#endif
+-}
+-
+-#else /* CONFIG_SYSCTL */
+-static void dn_dev_sysctl_unregister(struct dn_dev_parms *parms)
+-{
+-}
+-static void dn_dev_sysctl_register(struct net_device *dev, struct dn_dev_parms *parms)
+-{
+-}
+-
+-#endif /* CONFIG_SYSCTL */
+-
+-static inline __u16 mtu2blksize(struct net_device *dev)
+-{
+-	u32 blksize = dev->mtu;
+-	if (blksize > 0xffff)
+-		blksize = 0xffff;
+-
+-	if (dev->type == ARPHRD_ETHER ||
+-	    dev->type == ARPHRD_PPP ||
+-	    dev->type == ARPHRD_IPGRE ||
+-	    dev->type == ARPHRD_LOOPBACK)
+-		blksize -= 2;
+-
+-	return (__u16)blksize;
+-}
+-
+-static struct dn_ifaddr *dn_dev_alloc_ifa(void)
+-{
+-	struct dn_ifaddr *ifa;
+-
+-	ifa = kzalloc(sizeof(*ifa), GFP_KERNEL);
+-
+-	return ifa;
+-}
+-
+-static void dn_dev_free_ifa(struct dn_ifaddr *ifa)
+-{
+-	kfree_rcu(ifa, rcu);
+-}
+-
+-static void dn_dev_del_ifa(struct dn_dev *dn_db, struct dn_ifaddr __rcu **ifap, int destroy)
+-{
+-	struct dn_ifaddr *ifa1 = rtnl_dereference(*ifap);
+-	unsigned char mac_addr[6];
+-	struct net_device *dev = dn_db->dev;
+-
+-	ASSERT_RTNL();
+-
+-	*ifap = ifa1->ifa_next;
+-
+-	if (dn_db->dev->type == ARPHRD_ETHER) {
+-		if (ifa1->ifa_local != dn_eth2dn(dev->dev_addr)) {
+-			dn_dn2eth(mac_addr, ifa1->ifa_local);
+-			dev_mc_del(dev, mac_addr);
+-		}
+-	}
+-
+-	dn_ifaddr_notify(RTM_DELADDR, ifa1);
+-	blocking_notifier_call_chain(&dnaddr_chain, NETDEV_DOWN, ifa1);
+-	if (destroy) {
+-		dn_dev_free_ifa(ifa1);
+-
+-		if (dn_db->ifa_list == NULL)
+-			dn_dev_delete(dn_db->dev);
+-	}
+-}
+-
+-static int dn_dev_insert_ifa(struct dn_dev *dn_db, struct dn_ifaddr *ifa)
+-{
+-	struct net_device *dev = dn_db->dev;
+-	struct dn_ifaddr *ifa1;
+-	unsigned char mac_addr[6];
+-
+-	ASSERT_RTNL();
+-
+-	/* Check for duplicates */
+-	for (ifa1 = rtnl_dereference(dn_db->ifa_list);
+-	     ifa1 != NULL;
+-	     ifa1 = rtnl_dereference(ifa1->ifa_next)) {
+-		if (ifa1->ifa_local == ifa->ifa_local)
+-			return -EEXIST;
+-	}
+-
+-	if (dev->type == ARPHRD_ETHER) {
+-		if (ifa->ifa_local != dn_eth2dn(dev->dev_addr)) {
+-			dn_dn2eth(mac_addr, ifa->ifa_local);
+-			dev_mc_add(dev, mac_addr);
+-		}
+-	}
+-
+-	ifa->ifa_next = dn_db->ifa_list;
+-	rcu_assign_pointer(dn_db->ifa_list, ifa);
+-
+-	dn_ifaddr_notify(RTM_NEWADDR, ifa);
+-	blocking_notifier_call_chain(&dnaddr_chain, NETDEV_UP, ifa);
+-
+-	return 0;
+-}
+-
+-static int dn_dev_set_ifa(struct net_device *dev, struct dn_ifaddr *ifa)
+-{
+-	struct dn_dev *dn_db = rtnl_dereference(dev->dn_ptr);
+-	int rv;
+-
+-	if (dn_db == NULL) {
+-		int err;
+-		dn_db = dn_dev_create(dev, &err);
+-		if (dn_db == NULL)
+-			return err;
+-	}
+-
+-	ifa->ifa_dev = dn_db;
+-
+-	if (dev->flags & IFF_LOOPBACK)
+-		ifa->ifa_scope = RT_SCOPE_HOST;
+-
+-	rv = dn_dev_insert_ifa(dn_db, ifa);
+-	if (rv)
+-		dn_dev_free_ifa(ifa);
+-	return rv;
+-}
+-
+-
+-int dn_dev_ioctl(unsigned int cmd, void __user *arg)
+-{
+-	char buffer[DN_IFREQ_SIZE];
+-	struct ifreq *ifr = (struct ifreq *)buffer;
+-	struct sockaddr_dn *sdn = (struct sockaddr_dn *)&ifr->ifr_addr;
+-	struct dn_dev *dn_db;
+-	struct net_device *dev;
+-	struct dn_ifaddr *ifa = NULL;
+-	struct dn_ifaddr __rcu **ifap = NULL;
+-	int ret = 0;
+-
+-	if (copy_from_user(ifr, arg, DN_IFREQ_SIZE))
+-		return -EFAULT;
+-	ifr->ifr_name[IFNAMSIZ-1] = 0;
+-
+-	dev_load(&init_net, ifr->ifr_name);
+-
+-	switch (cmd) {
+-	case SIOCGIFADDR:
+-		break;
+-	case SIOCSIFADDR:
+-		if (!capable(CAP_NET_ADMIN))
+-			return -EACCES;
+-		if (sdn->sdn_family != AF_DECnet)
+-			return -EINVAL;
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	rtnl_lock();
+-
+-	if ((dev = __dev_get_by_name(&init_net, ifr->ifr_name)) == NULL) {
+-		ret = -ENODEV;
+-		goto done;
+-	}
+-
+-	if ((dn_db = rtnl_dereference(dev->dn_ptr)) != NULL) {
+-		for (ifap = &dn_db->ifa_list;
+-		     (ifa = rtnl_dereference(*ifap)) != NULL;
+-		     ifap = &ifa->ifa_next)
+-			if (strcmp(ifr->ifr_name, ifa->ifa_label) == 0)
+-				break;
+-	}
+-
+-	if (ifa == NULL && cmd != SIOCSIFADDR) {
+-		ret = -EADDRNOTAVAIL;
+-		goto done;
+-	}
+-
+-	switch (cmd) {
+-	case SIOCGIFADDR:
+-		*((__le16 *)sdn->sdn_nodeaddr) = ifa->ifa_local;
+-		if (copy_to_user(arg, ifr, DN_IFREQ_SIZE))
+-			ret = -EFAULT;
+-		break;
+-
+-	case SIOCSIFADDR:
+-		if (!ifa) {
+-			if ((ifa = dn_dev_alloc_ifa()) == NULL) {
+-				ret = -ENOBUFS;
+-				break;
+-			}
+-			memcpy(ifa->ifa_label, dev->name, IFNAMSIZ);
+-		} else {
+-			if (ifa->ifa_local == dn_saddr2dn(sdn))
+-				break;
+-			dn_dev_del_ifa(dn_db, ifap, 0);
+-		}
+-
+-		ifa->ifa_local = ifa->ifa_address = dn_saddr2dn(sdn);
+-
+-		ret = dn_dev_set_ifa(dev, ifa);
+-	}
+-done:
+-	rtnl_unlock();
+-
+-	return ret;
+-}
+-
+-struct net_device *dn_dev_get_default(void)
+-{
+-	struct net_device *dev;
+-
+-	spin_lock(&dndev_lock);
+-	dev = decnet_default_device;
+-	if (dev) {
+-		if (dev->dn_ptr)
+-			dev_hold(dev);
+-		else
+-			dev = NULL;
+-	}
+-	spin_unlock(&dndev_lock);
+-
+-	return dev;
+-}
+-
+-int dn_dev_set_default(struct net_device *dev, int force)
+-{
+-	struct net_device *old = NULL;
+-	int rv = -EBUSY;
+-	if (!dev->dn_ptr)
+-		return -ENODEV;
+-
+-	spin_lock(&dndev_lock);
+-	if (force || decnet_default_device == NULL) {
+-		old = decnet_default_device;
+-		decnet_default_device = dev;
+-		rv = 0;
+-	}
+-	spin_unlock(&dndev_lock);
+-
+-	if (old)
+-		dev_put(old);
+-	return rv;
+-}
+-
+-static void dn_dev_check_default(struct net_device *dev)
+-{
+-	spin_lock(&dndev_lock);
+-	if (dev == decnet_default_device) {
+-		decnet_default_device = NULL;
+-	} else {
+-		dev = NULL;
+-	}
+-	spin_unlock(&dndev_lock);
+-
+-	if (dev)
+-		dev_put(dev);
+-}
+-
+-/*
+- * Called with RTNL
+- */
+-static struct dn_dev *dn_dev_by_index(int ifindex)
+-{
+-	struct net_device *dev;
+-	struct dn_dev *dn_dev = NULL;
+-
+-	dev = __dev_get_by_index(&init_net, ifindex);
+-	if (dev)
+-		dn_dev = rtnl_dereference(dev->dn_ptr);
+-
+-	return dn_dev;
+-}
+-
+-static const struct nla_policy dn_ifa_policy[IFA_MAX+1] = {
+-	[IFA_ADDRESS]		= { .type = NLA_U16 },
+-	[IFA_LOCAL]		= { .type = NLA_U16 },
+-	[IFA_LABEL]		= { .type = NLA_STRING,
+-				    .len = IFNAMSIZ - 1 },
+-	[IFA_FLAGS]		= { .type = NLA_U32 },
+-};
+-
+-static int dn_nl_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh,
+-			 struct netlink_ext_ack *extack)
+-{
+-	struct net *net = sock_net(skb->sk);
+-	struct nlattr *tb[IFA_MAX+1];
+-	struct dn_dev *dn_db;
+-	struct ifaddrmsg *ifm;
+-	struct dn_ifaddr *ifa;
+-	struct dn_ifaddr __rcu **ifap;
+-	int err = -EINVAL;
+-
+-	if (!netlink_capable(skb, CAP_NET_ADMIN))
+-		return -EPERM;
+-
+-	if (!net_eq(net, &init_net))
+-		goto errout;
+-
+-	err = nlmsg_parse_deprecated(nlh, sizeof(*ifm), tb, IFA_MAX,
+-				     dn_ifa_policy, extack);
+-	if (err < 0)
+-		goto errout;
+-
+-	err = -ENODEV;
+-	ifm = nlmsg_data(nlh);
+-	if ((dn_db = dn_dev_by_index(ifm->ifa_index)) == NULL)
+-		goto errout;
+-
+-	err = -EADDRNOTAVAIL;
+-	for (ifap = &dn_db->ifa_list;
+-	     (ifa = rtnl_dereference(*ifap)) != NULL;
+-	     ifap = &ifa->ifa_next) {
+-		if (tb[IFA_LOCAL] &&
+-		    nla_memcmp(tb[IFA_LOCAL], &ifa->ifa_local, 2))
+-			continue;
+-
+-		if (tb[IFA_LABEL] && nla_strcmp(tb[IFA_LABEL], ifa->ifa_label))
+-			continue;
+-
+-		dn_dev_del_ifa(dn_db, ifap, 1);
+-		return 0;
+-	}
+-
+-errout:
+-	return err;
+-}
+-
+-static int dn_nl_newaddr(struct sk_buff *skb, struct nlmsghdr *nlh,
+-			 struct netlink_ext_ack *extack)
+-{
+-	struct net *net = sock_net(skb->sk);
+-	struct nlattr *tb[IFA_MAX+1];
+-	struct net_device *dev;
+-	struct dn_dev *dn_db;
+-	struct ifaddrmsg *ifm;
+-	struct dn_ifaddr *ifa;
+-	int err;
+-
+-	if (!netlink_capable(skb, CAP_NET_ADMIN))
+-		return -EPERM;
+-
+-	if (!net_eq(net, &init_net))
+-		return -EINVAL;
+-
+-	err = nlmsg_parse_deprecated(nlh, sizeof(*ifm), tb, IFA_MAX,
+-				     dn_ifa_policy, extack);
+-	if (err < 0)
+-		return err;
+-
+-	if (tb[IFA_LOCAL] == NULL)
+-		return -EINVAL;
+-
+-	ifm = nlmsg_data(nlh);
+-	if ((dev = __dev_get_by_index(&init_net, ifm->ifa_index)) == NULL)
+-		return -ENODEV;
+-
+-	if ((dn_db = rtnl_dereference(dev->dn_ptr)) == NULL) {
+-		dn_db = dn_dev_create(dev, &err);
+-		if (!dn_db)
+-			return err;
+-	}
+-
+-	if ((ifa = dn_dev_alloc_ifa()) == NULL)
+-		return -ENOBUFS;
+-
+-	if (tb[IFA_ADDRESS] == NULL)
+-		tb[IFA_ADDRESS] = tb[IFA_LOCAL];
+-
+-	ifa->ifa_local = nla_get_le16(tb[IFA_LOCAL]);
+-	ifa->ifa_address = nla_get_le16(tb[IFA_ADDRESS]);
+-	ifa->ifa_flags = tb[IFA_FLAGS] ? nla_get_u32(tb[IFA_FLAGS]) :
+-					 ifm->ifa_flags;
+-	ifa->ifa_scope = ifm->ifa_scope;
+-	ifa->ifa_dev = dn_db;
+-
+-	if (tb[IFA_LABEL])
+-		nla_strlcpy(ifa->ifa_label, tb[IFA_LABEL], IFNAMSIZ);
+-	else
+-		memcpy(ifa->ifa_label, dev->name, IFNAMSIZ);
+-
+-	err = dn_dev_insert_ifa(dn_db, ifa);
+-	if (err)
+-		dn_dev_free_ifa(ifa);
+-
+-	return err;
+-}
+-
+-static inline size_t dn_ifaddr_nlmsg_size(void)
+-{
+-	return NLMSG_ALIGN(sizeof(struct ifaddrmsg))
+-	       + nla_total_size(IFNAMSIZ) /* IFA_LABEL */
+-	       + nla_total_size(2) /* IFA_ADDRESS */
+-	       + nla_total_size(2) /* IFA_LOCAL */
+-	       + nla_total_size(4); /* IFA_FLAGS */
+-}
+-
+-static int dn_nl_fill_ifaddr(struct sk_buff *skb, struct dn_ifaddr *ifa,
+-			     u32 portid, u32 seq, int event, unsigned int flags)
+-{
+-	struct ifaddrmsg *ifm;
+-	struct nlmsghdr *nlh;
+-	u32 ifa_flags = ifa->ifa_flags | IFA_F_PERMANENT;
+-
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*ifm), flags);
+-	if (nlh == NULL)
+-		return -EMSGSIZE;
+-
+-	ifm = nlmsg_data(nlh);
+-	ifm->ifa_family = AF_DECnet;
+-	ifm->ifa_prefixlen = 16;
+-	ifm->ifa_flags = ifa_flags;
+-	ifm->ifa_scope = ifa->ifa_scope;
+-	ifm->ifa_index = ifa->ifa_dev->dev->ifindex;
+-
+-	if ((ifa->ifa_address &&
+-	     nla_put_le16(skb, IFA_ADDRESS, ifa->ifa_address)) ||
+-	    (ifa->ifa_local &&
+-	     nla_put_le16(skb, IFA_LOCAL, ifa->ifa_local)) ||
+-	    (ifa->ifa_label[0] &&
+-	     nla_put_string(skb, IFA_LABEL, ifa->ifa_label)) ||
+-	     nla_put_u32(skb, IFA_FLAGS, ifa_flags))
+-		goto nla_put_failure;
+-	nlmsg_end(skb, nlh);
+-	return 0;
+-
+-nla_put_failure:
+-	nlmsg_cancel(skb, nlh);
+-	return -EMSGSIZE;
+-}
+-
+-static void dn_ifaddr_notify(int event, struct dn_ifaddr *ifa)
+-{
+-	struct sk_buff *skb;
+-	int err = -ENOBUFS;
+-
+-	skb = alloc_skb(dn_ifaddr_nlmsg_size(), GFP_KERNEL);
+-	if (skb == NULL)
+-		goto errout;
+-
+-	err = dn_nl_fill_ifaddr(skb, ifa, 0, 0, event, 0);
+-	if (err < 0) {
+-		/* -EMSGSIZE implies BUG in dn_ifaddr_nlmsg_size() */
+-		WARN_ON(err == -EMSGSIZE);
+-		kfree_skb(skb);
+-		goto errout;
+-	}
+-	rtnl_notify(skb, &init_net, 0, RTNLGRP_DECnet_IFADDR, NULL, GFP_KERNEL);
+-	return;
+-errout:
+-	if (err < 0)
+-		rtnl_set_sk_err(&init_net, RTNLGRP_DECnet_IFADDR, err);
+-}
+-
+-static int dn_nl_dump_ifaddr(struct sk_buff *skb, struct netlink_callback *cb)
+-{
+-	struct net *net = sock_net(skb->sk);
+-	int idx, dn_idx = 0, skip_ndevs, skip_naddr;
+-	struct net_device *dev;
+-	struct dn_dev *dn_db;
+-	struct dn_ifaddr *ifa;
+-
+-	if (!net_eq(net, &init_net))
+-		return 0;
+-
+-	skip_ndevs = cb->args[0];
+-	skip_naddr = cb->args[1];
+-
+-	idx = 0;
+-	rcu_read_lock();
+-	for_each_netdev_rcu(&init_net, dev) {
+-		if (idx < skip_ndevs)
+-			goto cont;
+-		else if (idx > skip_ndevs) {
+-			/* Only skip over addresses for first dev dumped
+-			 * in this iteration (idx == skip_ndevs) */
+-			skip_naddr = 0;
+-		}
+-
+-		if ((dn_db = rcu_dereference(dev->dn_ptr)) == NULL)
+-			goto cont;
+-
+-		for (ifa = rcu_dereference(dn_db->ifa_list), dn_idx = 0; ifa;
+-		     ifa = rcu_dereference(ifa->ifa_next), dn_idx++) {
+-			if (dn_idx < skip_naddr)
+-				continue;
+-
+-			if (dn_nl_fill_ifaddr(skb, ifa, NETLINK_CB(cb->skb).portid,
+-					      cb->nlh->nlmsg_seq, RTM_NEWADDR,
+-					      NLM_F_MULTI) < 0)
+-				goto done;
+-		}
+-cont:
+-		idx++;
+-	}
+-done:
+-	rcu_read_unlock();
+-	cb->args[0] = idx;
+-	cb->args[1] = dn_idx;
+-
+-	return skb->len;
+-}
+-
+-static int dn_dev_get_first(struct net_device *dev, __le16 *addr)
+-{
+-	struct dn_dev *dn_db;
+-	struct dn_ifaddr *ifa;
+-	int rv = -ENODEV;
+-
+-	rcu_read_lock();
+-	dn_db = rcu_dereference(dev->dn_ptr);
+-	if (dn_db == NULL)
+-		goto out;
+-
+-	ifa = rcu_dereference(dn_db->ifa_list);
+-	if (ifa != NULL) {
+-		*addr = ifa->ifa_local;
+-		rv = 0;
+-	}
+-out:
+-	rcu_read_unlock();
+-	return rv;
+-}
+-
+-/*
+- * Find a default address to bind to.
+- *
+- * This is one of those areas where the initial VMS concepts don't really
+- * map onto the Linux concepts, and since we introduced multiple addresses
+- * per interface we have to cope with slightly odd ways of finding out what
+- * "our address" really is. Mostly it's not a problem; for this we just guess
+- * a sensible default. Eventually the routing code will take care of all the
+- * nasties for us I hope.
+- */
+-int dn_dev_bind_default(__le16 *addr)
+-{
+-	struct net_device *dev;
+-	int rv;
+-	dev = dn_dev_get_default();
+-last_chance:
+-	if (dev) {
+-		rv = dn_dev_get_first(dev, addr);
+-		dev_put(dev);
+-		if (rv == 0 || dev == init_net.loopback_dev)
+-			return rv;
+-	}
+-	dev = init_net.loopback_dev;
+-	dev_hold(dev);
+-	goto last_chance;
+-}
+-
+-static void dn_send_endnode_hello(struct net_device *dev, struct dn_ifaddr *ifa)
+-{
+-	struct endnode_hello_message *msg;
+-	struct sk_buff *skb = NULL;
+-	__le16 *pktlen;
+-	struct dn_dev *dn_db = rcu_dereference_raw(dev->dn_ptr);
+-
+-	if ((skb = dn_alloc_skb(NULL, sizeof(*msg), GFP_ATOMIC)) == NULL)
+-		return;
+-
+-	skb->dev = dev;
+-
+-	msg = skb_put(skb, sizeof(*msg));
+-
+-	msg->msgflg  = 0x0D;
+-	memcpy(msg->tiver, dn_eco_version, 3);
+-	dn_dn2eth(msg->id, ifa->ifa_local);
+-	msg->iinfo   = DN_RT_INFO_ENDN;
+-	msg->blksize = cpu_to_le16(mtu2blksize(dev));
+-	msg->area    = 0x00;
+-	memset(msg->seed, 0, 8);
+-	memcpy(msg->neighbor, dn_hiord, ETH_ALEN);
+-
+-	if (dn_db->router) {
+-		struct dn_neigh *dn = (struct dn_neigh *)dn_db->router;
+-		dn_dn2eth(msg->neighbor, dn->addr);
+-	}
+-
+-	msg->timer   = cpu_to_le16((unsigned short)dn_db->parms.t3);
+-	msg->mpd     = 0x00;
+-	msg->datalen = 0x02;
+-	memset(msg->data, 0xAA, 2);
+-
+-	pktlen = skb_push(skb, 2);
+-	*pktlen = cpu_to_le16(skb->len - 2);
+-
+-	skb_reset_network_header(skb);
+-
+-	dn_rt_finish_output(skb, dn_rt_all_rt_mcast, msg->id);
+-}
+-
+-
+-#define DRDELAY (5 * HZ)
+-
+-static int dn_am_i_a_router(struct dn_neigh *dn, struct dn_dev *dn_db, struct dn_ifaddr *ifa)
+-{
+-	/* First check time since device went up */
+-	if (time_before(jiffies, dn_db->uptime + DRDELAY))
+-		return 0;
+-
+-	/* If there is no router, then yes... */
+-	if (!dn_db->router)
+-		return 1;
+-
+-	/* otherwise only if we have a higher priority or.. */
+-	if (dn->priority < dn_db->parms.priority)
+-		return 1;
+-
+-	/* if we have equal priority and a higher node number */
+-	if (dn->priority != dn_db->parms.priority)
+-		return 0;
+-
+-	if (le16_to_cpu(dn->addr) < le16_to_cpu(ifa->ifa_local))
+-		return 1;
+-
+-	return 0;
+-}
+-
+-static void dn_send_router_hello(struct net_device *dev, struct dn_ifaddr *ifa)
+-{
+-	int n;
+-	struct dn_dev *dn_db = rcu_dereference_raw(dev->dn_ptr);
+-	struct dn_neigh *dn = (struct dn_neigh *)dn_db->router;
+-	struct sk_buff *skb;
+-	size_t size;
+-	unsigned char *ptr;
+-	unsigned char *i1, *i2;
+-	__le16 *pktlen;
+-	char *src;
+-
+-	if (mtu2blksize(dev) < (26 + 7))
+-		return;
+-
+-	n = mtu2blksize(dev) - 26;
+-	n /= 7;
+-
+-	if (n > 32)
+-		n = 32;
+-
+-	size = 2 + 26 + 7 * n;
+-
+-	if ((skb = dn_alloc_skb(NULL, size, GFP_ATOMIC)) == NULL)
+-		return;
+-
+-	skb->dev = dev;
+-	ptr = skb_put(skb, size);
+-
+-	*ptr++ = DN_RT_PKT_CNTL | DN_RT_PKT_ERTH;
+-	*ptr++ = 2; /* ECO */
+-	*ptr++ = 0;
+-	*ptr++ = 0;
+-	dn_dn2eth(ptr, ifa->ifa_local);
+-	src = ptr;
+-	ptr += ETH_ALEN;
+-	*ptr++ = dn_db->parms.forwarding == 1 ?
+-			DN_RT_INFO_L1RT : DN_RT_INFO_L2RT;
+-	*((__le16 *)ptr) = cpu_to_le16(mtu2blksize(dev));
+-	ptr += 2;
+-	*ptr++ = dn_db->parms.priority; /* Priority */
+-	*ptr++ = 0; /* Area: Reserved */
+-	*((__le16 *)ptr) = cpu_to_le16((unsigned short)dn_db->parms.t3);
+-	ptr += 2;
+-	*ptr++ = 0; /* MPD: Reserved */
+-	i1 = ptr++;
+-	memset(ptr, 0, 7); /* Name: Reserved */
+-	ptr += 7;
+-	i2 = ptr++;
+-
+-	n = dn_neigh_elist(dev, ptr, n);
+-
+-	*i2 = 7 * n;
+-	*i1 = 8 + *i2;
+-
+-	skb_trim(skb, (27 + *i2));
+-
+-	pktlen = skb_push(skb, 2);
+-	*pktlen = cpu_to_le16(skb->len - 2);
+-
+-	skb_reset_network_header(skb);
+-
+-	if (dn_am_i_a_router(dn, dn_db, ifa)) {
+-		struct sk_buff *skb2 = skb_copy(skb, GFP_ATOMIC);
+-		if (skb2) {
+-			dn_rt_finish_output(skb2, dn_rt_all_end_mcast, src);
+-		}
+-	}
+-
+-	dn_rt_finish_output(skb, dn_rt_all_rt_mcast, src);
+-}
+-
+-static void dn_send_brd_hello(struct net_device *dev, struct dn_ifaddr *ifa)
+-{
+-	struct dn_dev *dn_db = rcu_dereference_raw(dev->dn_ptr);
+-
+-	if (dn_db->parms.forwarding == 0)
+-		dn_send_endnode_hello(dev, ifa);
+-	else
+-		dn_send_router_hello(dev, ifa);
+-}
+-
+-static void dn_send_ptp_hello(struct net_device *dev, struct dn_ifaddr *ifa)
+-{
+-	int tdlen = 16;
+-	int size = dev->hard_header_len + 2 + 4 + tdlen;
+-	struct sk_buff *skb = dn_alloc_skb(NULL, size, GFP_ATOMIC);
+-	int i;
+-	unsigned char *ptr;
+-	char src[ETH_ALEN];
+-
+-	if (skb == NULL)
+-		return ;
+-
+-	skb->dev = dev;
+-	skb_push(skb, dev->hard_header_len);
+-	ptr = skb_put(skb, 2 + 4 + tdlen);
+-
+-	*ptr++ = DN_RT_PKT_HELO;
+-	*((__le16 *)ptr) = ifa->ifa_local;
+-	ptr += 2;
+-	*ptr++ = tdlen;
+-
+-	for(i = 0; i < tdlen; i++)
+-		*ptr++ = 0252;
+-
+-	dn_dn2eth(src, ifa->ifa_local);
+-	dn_rt_finish_output(skb, dn_rt_all_rt_mcast, src);
+-}
+-
+-static int dn_eth_up(struct net_device *dev)
+-{
+-	struct dn_dev *dn_db = rcu_dereference_raw(dev->dn_ptr);
+-
+-	if (dn_db->parms.forwarding == 0)
+-		dev_mc_add(dev, dn_rt_all_end_mcast);
+-	else
+-		dev_mc_add(dev, dn_rt_all_rt_mcast);
+-
+-	dn_db->use_long = 1;
+-
+-	return 0;
+-}
+-
+-static void dn_eth_down(struct net_device *dev)
+-{
+-	struct dn_dev *dn_db = rcu_dereference_raw(dev->dn_ptr);
+-
+-	if (dn_db->parms.forwarding == 0)
+-		dev_mc_del(dev, dn_rt_all_end_mcast);
+-	else
+-		dev_mc_del(dev, dn_rt_all_rt_mcast);
+-}
+-
+-static void dn_dev_set_timer(struct net_device *dev);
+-
+-static void dn_dev_timer_func(struct timer_list *t)
+-{
+-	struct dn_dev *dn_db = from_timer(dn_db, t, timer);
+-	struct net_device *dev;
+-	struct dn_ifaddr *ifa;
+-
+-	rcu_read_lock();
+-	dev = dn_db->dev;
+-	if (dn_db->t3 <= dn_db->parms.t2) {
+-		if (dn_db->parms.timer3) {
+-			for (ifa = rcu_dereference(dn_db->ifa_list);
+-			     ifa;
+-			     ifa = rcu_dereference(ifa->ifa_next)) {
+-				if (!(ifa->ifa_flags & IFA_F_SECONDARY))
+-					dn_db->parms.timer3(dev, ifa);
+-			}
+-		}
+-		dn_db->t3 = dn_db->parms.t3;
+-	} else {
+-		dn_db->t3 -= dn_db->parms.t2;
+-	}
+-	rcu_read_unlock();
+-	dn_dev_set_timer(dev);
+-}
+-
+-static void dn_dev_set_timer(struct net_device *dev)
+-{
+-	struct dn_dev *dn_db = rcu_dereference_raw(dev->dn_ptr);
+-
+-	if (dn_db->parms.t2 > dn_db->parms.t3)
+-		dn_db->parms.t2 = dn_db->parms.t3;
+-
+-	dn_db->timer.expires = jiffies + (dn_db->parms.t2 * HZ);
+-
+-	add_timer(&dn_db->timer);
+-}
+-
+-static struct dn_dev *dn_dev_create(struct net_device *dev, int *err)
+-{
+-	int i;
+-	struct dn_dev_parms *p = dn_dev_list;
+-	struct dn_dev *dn_db;
+-
+-	for(i = 0; i < DN_DEV_LIST_SIZE; i++, p++) {
+-		if (p->type == dev->type)
+-			break;
+-	}
+-
+-	*err = -ENODEV;
+-	if (i == DN_DEV_LIST_SIZE)
+-		return NULL;
+-
+-	*err = -ENOBUFS;
+-	if ((dn_db = kzalloc(sizeof(struct dn_dev), GFP_ATOMIC)) == NULL)
+-		return NULL;
+-
+-	memcpy(&dn_db->parms, p, sizeof(struct dn_dev_parms));
+-
+-	rcu_assign_pointer(dev->dn_ptr, dn_db);
+-	dn_db->dev = dev;
+-	timer_setup(&dn_db->timer, dn_dev_timer_func, 0);
+-
+-	dn_db->uptime = jiffies;
+-
+-	dn_db->neigh_parms = neigh_parms_alloc(dev, &dn_neigh_table);
+-	if (!dn_db->neigh_parms) {
+-		RCU_INIT_POINTER(dev->dn_ptr, NULL);
+-		kfree(dn_db);
+-		return NULL;
+-	}
+-
+-	if (dn_db->parms.up) {
+-		if (dn_db->parms.up(dev) < 0) {
+-			neigh_parms_release(&dn_neigh_table, dn_db->neigh_parms);
+-			dev->dn_ptr = NULL;
+-			kfree(dn_db);
+-			return NULL;
+-		}
+-	}
+-
+-	dn_dev_sysctl_register(dev, &dn_db->parms);
+-
+-	dn_dev_set_timer(dev);
+-
+-	*err = 0;
+-	return dn_db;
+-}
+-
+-
+-/*
+- * This processes a device up event. We only start up
+- * the loopback device & ethernet devices with correct
+- * MAC addresses automatically. Others must be started
+- * specifically.
+- *
+- * FIXME: How should we configure the loopback address ? If we could dispense
+- * with using decnet_address here and for autobind, it will be one less thing
+- * for users to worry about setting up.
+- */
+-
+-void dn_dev_up(struct net_device *dev)
+-{
+-	struct dn_ifaddr *ifa;
+-	__le16 addr = decnet_address;
+-	int maybe_default = 0;
+-	struct dn_dev *dn_db = rtnl_dereference(dev->dn_ptr);
+-
+-	if ((dev->type != ARPHRD_ETHER) && (dev->type != ARPHRD_LOOPBACK))
+-		return;
+-
+-	/*
+-	 * Need to ensure that loopback device has a dn_db attached to it
+-	 * to allow creation of neighbours against it, even though it might
+-	 * not have a local address of its own. Might as well do the same for
+-	 * all autoconfigured interfaces.
+-	 */
+-	if (dn_db == NULL) {
+-		int err;
+-		dn_db = dn_dev_create(dev, &err);
+-		if (dn_db == NULL)
+-			return;
+-	}
+-
+-	if (dev->type == ARPHRD_ETHER) {
+-		if (memcmp(dev->dev_addr, dn_hiord, 4) != 0)
+-			return;
+-		addr = dn_eth2dn(dev->dev_addr);
+-		maybe_default = 1;
+-	}
+-
+-	if (addr == 0)
+-		return;
+-
+-	if ((ifa = dn_dev_alloc_ifa()) == NULL)
+-		return;
+-
+-	ifa->ifa_local = ifa->ifa_address = addr;
+-	ifa->ifa_flags = 0;
+-	ifa->ifa_scope = RT_SCOPE_UNIVERSE;
+-	strcpy(ifa->ifa_label, dev->name);
+-
+-	dn_dev_set_ifa(dev, ifa);
+-
+-	/*
+-	 * Automagically set the default device to the first automatically
+-	 * configured ethernet card in the system.
+-	 */
+-	if (maybe_default) {
+-		dev_hold(dev);
+-		if (dn_dev_set_default(dev, 0))
+-			dev_put(dev);
+-	}
+-}
+-
+-static void dn_dev_delete(struct net_device *dev)
+-{
+-	struct dn_dev *dn_db = rtnl_dereference(dev->dn_ptr);
+-
+-	if (dn_db == NULL)
+-		return;
+-
+-	del_timer_sync(&dn_db->timer);
+-	dn_dev_sysctl_unregister(&dn_db->parms);
+-	dn_dev_check_default(dev);
+-	neigh_ifdown(&dn_neigh_table, dev);
+-
+-	if (dn_db->parms.down)
+-		dn_db->parms.down(dev);
+-
+-	dev->dn_ptr = NULL;
+-
+-	neigh_parms_release(&dn_neigh_table, dn_db->neigh_parms);
+-	neigh_ifdown(&dn_neigh_table, dev);
+-
+-	if (dn_db->router)
+-		neigh_release(dn_db->router);
+-	if (dn_db->peer)
+-		neigh_release(dn_db->peer);
+-
+-	kfree(dn_db);
+-}
+-
+-void dn_dev_down(struct net_device *dev)
+-{
+-	struct dn_dev *dn_db = rtnl_dereference(dev->dn_ptr);
+-	struct dn_ifaddr *ifa;
+-
+-	if (dn_db == NULL)
+-		return;
+-
+-	while ((ifa = rtnl_dereference(dn_db->ifa_list)) != NULL) {
+-		dn_dev_del_ifa(dn_db, &dn_db->ifa_list, 0);
+-		dn_dev_free_ifa(ifa);
+-	}
+-
+-	dn_dev_delete(dev);
+-}
+-
+-void dn_dev_init_pkt(struct sk_buff *skb)
+-{
+-}
+-
+-void dn_dev_veri_pkt(struct sk_buff *skb)
+-{
+-}
+-
+-void dn_dev_hello(struct sk_buff *skb)
+-{
+-}
+-
+-void dn_dev_devices_off(void)
+-{
+-	struct net_device *dev;
+-
+-	rtnl_lock();
+-	for_each_netdev(&init_net, dev)
+-		dn_dev_down(dev);
+-	rtnl_unlock();
+-
+-}
+-
+-void dn_dev_devices_on(void)
+-{
+-	struct net_device *dev;
+-
+-	rtnl_lock();
+-	for_each_netdev(&init_net, dev) {
+-		if (dev->flags & IFF_UP)
+-			dn_dev_up(dev);
+-	}
+-	rtnl_unlock();
+-}
+-
+-int register_dnaddr_notifier(struct notifier_block *nb)
+-{
+-	return blocking_notifier_chain_register(&dnaddr_chain, nb);
+-}
+-
+-int unregister_dnaddr_notifier(struct notifier_block *nb)
+-{
+-	return blocking_notifier_chain_unregister(&dnaddr_chain, nb);
+-}
+-
+-#ifdef CONFIG_PROC_FS
+-static inline int is_dn_dev(struct net_device *dev)
+-{
+-	return dev->dn_ptr != NULL;
+-}
+-
+-static void *dn_dev_seq_start(struct seq_file *seq, loff_t *pos)
+-	__acquires(RCU)
+-{
+-	int i;
+-	struct net_device *dev;
+-
+-	rcu_read_lock();
+-
+-	if (*pos == 0)
+-		return SEQ_START_TOKEN;
+-
+-	i = 1;
+-	for_each_netdev_rcu(&init_net, dev) {
+-		if (!is_dn_dev(dev))
+-			continue;
+-
+-		if (i++ == *pos)
+-			return dev;
+-	}
+-
+-	return NULL;
+-}
+-
+-static void *dn_dev_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+-{
+-	struct net_device *dev;
+-
+-	++*pos;
+-
+-	dev = v;
+-	if (v == SEQ_START_TOKEN)
+-		dev = net_device_entry(&init_net.dev_base_head);
+-
+-	for_each_netdev_continue_rcu(&init_net, dev) {
+-		if (!is_dn_dev(dev))
+-			continue;
+-
+-		return dev;
+-	}
+-
+-	return NULL;
+-}
+-
+-static void dn_dev_seq_stop(struct seq_file *seq, void *v)
+-	__releases(RCU)
+-{
+-	rcu_read_unlock();
+-}
+-
+-static char *dn_type2asc(char type)
+-{
+-	switch (type) {
+-	case DN_DEV_BCAST:
+-		return "B";
+-	case DN_DEV_UCAST:
+-		return "U";
+-	case DN_DEV_MPOINT:
+-		return "M";
+-	}
+-
+-	return "?";
+-}
+-
+-static int dn_dev_seq_show(struct seq_file *seq, void *v)
+-{
+-	if (v == SEQ_START_TOKEN)
+-		seq_puts(seq, "Name     Flags T1   Timer1 T3   Timer3 BlkSize Pri State DevType    Router Peer\n");
+-	else {
+-		struct net_device *dev = v;
+-		char peer_buf[DN_ASCBUF_LEN];
+-		char router_buf[DN_ASCBUF_LEN];
+-		struct dn_dev *dn_db = rcu_dereference(dev->dn_ptr);
+-
+-		seq_printf(seq, "%-8s %1s     %04u %04u   %04lu %04lu"
+-				"   %04hu    %03d %02x    %-10s %-7s %-7s\n",
+-				dev->name,
+-				dn_type2asc(dn_db->parms.mode),
+-				0, 0,
+-				dn_db->t3, dn_db->parms.t3,
+-				mtu2blksize(dev),
+-				dn_db->parms.priority,
+-				dn_db->parms.state, dn_db->parms.name,
+-				dn_db->router ? dn_addr2asc(le16_to_cpu(*(__le16 *)dn_db->router->primary_key), router_buf) : "",
+-				dn_db->peer ? dn_addr2asc(le16_to_cpu(*(__le16 *)dn_db->peer->primary_key), peer_buf) : "");
+-	}
+-	return 0;
+-}
+-
+-static const struct seq_operations dn_dev_seq_ops = {
+-	.start	= dn_dev_seq_start,
+-	.next	= dn_dev_seq_next,
+-	.stop	= dn_dev_seq_stop,
+-	.show	= dn_dev_seq_show,
+-};
+-#endif /* CONFIG_PROC_FS */
+-
+-static int addr[2];
+-module_param_array(addr, int, NULL, 0444);
+-MODULE_PARM_DESC(addr, "The DECnet address of this machine: area,node");
+-
+-void __init dn_dev_init(void)
+-{
+-	if (addr[0] > 63 || addr[0] < 0) {
+-		printk(KERN_ERR "DECnet: Area must be between 0 and 63");
+-		return;
+-	}
+-
+-	if (addr[1] > 1023 || addr[1] < 0) {
+-		printk(KERN_ERR "DECnet: Node must be between 0 and 1023");
+-		return;
+-	}
+-
+-	decnet_address = cpu_to_le16((addr[0] << 10) | addr[1]);
+-
+-	dn_dev_devices_on();
+-
+-	rtnl_register_module(THIS_MODULE, PF_DECnet, RTM_NEWADDR,
+-			     dn_nl_newaddr, NULL, 0);
+-	rtnl_register_module(THIS_MODULE, PF_DECnet, RTM_DELADDR,
+-			     dn_nl_deladdr, NULL, 0);
+-	rtnl_register_module(THIS_MODULE, PF_DECnet, RTM_GETADDR,
+-			     NULL, dn_nl_dump_ifaddr, 0);
+-
+-	proc_create_seq("decnet_dev", 0444, init_net.proc_net, &dn_dev_seq_ops);
+-
+-#ifdef CONFIG_SYSCTL
+-	{
+-		int i;
+-		for(i = 0; i < DN_DEV_LIST_SIZE; i++)
+-			dn_dev_sysctl_register(NULL, &dn_dev_list[i]);
+-	}
+-#endif /* CONFIG_SYSCTL */
+-}
+-
+-void __exit dn_dev_cleanup(void)
+-{
+-#ifdef CONFIG_SYSCTL
+-	{
+-		int i;
+-		for(i = 0; i < DN_DEV_LIST_SIZE; i++)
+-			dn_dev_sysctl_unregister(&dn_dev_list[i]);
+-	}
+-#endif /* CONFIG_SYSCTL */
+-
+-	remove_proc_entry("decnet_dev", init_net.proc_net);
+-
+-	dn_dev_devices_off();
+-}
+diff --git a/net/decnet/dn_fib.c b/net/decnet/dn_fib.c
+deleted file mode 100644
+index 77fbf8e9df4b7..0000000000000
+--- a/net/decnet/dn_fib.c
++++ /dev/null
+@@ -1,799 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * DECnet       An implementation of the DECnet protocol suite for the LINUX
+- *              operating system.  DECnet is implemented using the  BSD Socket
+- *              interface as the means of communication with the user level.
+- *
+- *              DECnet Routing Forwarding Information Base (Glue/Info List)
+- *
+- * Author:      Steve Whitehouse <SteveW@ACM.org>
+- *
+- *
+- * Changes:
+- *              Alexey Kuznetsov : SMP locking changes
+- *              Steve Whitehouse : Rewrote it... Well to be more correct, I
+- *                                 copied most of it from the ipv4 fib code.
+- *              Steve Whitehouse : Updated it in style and fixed a few bugs
+- *                                 which were fixed in the ipv4 code since
+- *                                 this code was copied from it.
+- *
+- */
+-#include <linux/string.h>
+-#include <linux/net.h>
+-#include <linux/socket.h>
+-#include <linux/slab.h>
+-#include <linux/sockios.h>
+-#include <linux/init.h>
+-#include <linux/skbuff.h>
+-#include <linux/netlink.h>
+-#include <linux/rtnetlink.h>
+-#include <linux/proc_fs.h>
+-#include <linux/netdevice.h>
+-#include <linux/timer.h>
+-#include <linux/spinlock.h>
+-#include <linux/atomic.h>
+-#include <linux/uaccess.h>
+-#include <net/neighbour.h>
+-#include <net/dst.h>
+-#include <net/flow.h>
+-#include <net/fib_rules.h>
+-#include <net/dn.h>
+-#include <net/dn_route.h>
+-#include <net/dn_fib.h>
+-#include <net/dn_neigh.h>
+-#include <net/dn_dev.h>
+-#include <net/rtnh.h>
+-
+-#define RT_MIN_TABLE 1
+-
+-#define for_fib_info() { struct dn_fib_info *fi;\
+-	for(fi = dn_fib_info_list; fi; fi = fi->fib_next)
+-#define endfor_fib_info() }
+-
+-#define for_nexthops(fi) { int nhsel; const struct dn_fib_nh *nh;\
+-	for(nhsel = 0, nh = (fi)->fib_nh; nhsel < (fi)->fib_nhs; nh++, nhsel++)
+-
+-#define change_nexthops(fi) { int nhsel; struct dn_fib_nh *nh;\
+-	for(nhsel = 0, nh = (struct dn_fib_nh *)((fi)->fib_nh); nhsel < (fi)->fib_nhs; nh++, nhsel++)
+-
+-#define endfor_nexthops(fi) }
+-
+-static DEFINE_SPINLOCK(dn_fib_multipath_lock);
+-static struct dn_fib_info *dn_fib_info_list;
+-static DEFINE_SPINLOCK(dn_fib_info_lock);
+-
+-static struct
+-{
+-	int error;
+-	u8 scope;
+-} dn_fib_props[RTN_MAX+1] = {
+-	[RTN_UNSPEC] =      { .error = 0,       .scope = RT_SCOPE_NOWHERE },
+-	[RTN_UNICAST] =     { .error = 0,       .scope = RT_SCOPE_UNIVERSE },
+-	[RTN_LOCAL] =       { .error = 0,       .scope = RT_SCOPE_HOST },
+-	[RTN_BROADCAST] =   { .error = -EINVAL, .scope = RT_SCOPE_NOWHERE },
+-	[RTN_ANYCAST] =     { .error = -EINVAL, .scope = RT_SCOPE_NOWHERE },
+-	[RTN_MULTICAST] =   { .error = -EINVAL, .scope = RT_SCOPE_NOWHERE },
+-	[RTN_BLACKHOLE] =   { .error = -EINVAL, .scope = RT_SCOPE_UNIVERSE },
+-	[RTN_UNREACHABLE] = { .error = -EHOSTUNREACH, .scope = RT_SCOPE_UNIVERSE },
+-	[RTN_PROHIBIT] =    { .error = -EACCES, .scope = RT_SCOPE_UNIVERSE },
+-	[RTN_THROW] =       { .error = -EAGAIN, .scope = RT_SCOPE_UNIVERSE },
+-	[RTN_NAT] =         { .error = 0,       .scope = RT_SCOPE_NOWHERE },
+-	[RTN_XRESOLVE] =    { .error = -EINVAL, .scope = RT_SCOPE_NOWHERE },
+-};
+-
+-static int dn_fib_sync_down(__le16 local, struct net_device *dev, int force);
+-static int dn_fib_sync_up(struct net_device *dev);
+-
+-void dn_fib_free_info(struct dn_fib_info *fi)
+-{
+-	if (fi->fib_dead == 0) {
+-		printk(KERN_DEBUG "DECnet: BUG! Attempt to free alive dn_fib_info\n");
+-		return;
+-	}
+-
+-	change_nexthops(fi) {
+-		if (nh->nh_dev)
+-			dev_put(nh->nh_dev);
+-		nh->nh_dev = NULL;
+-	} endfor_nexthops(fi);
+-	kfree(fi);
+-}
+-
+-void dn_fib_release_info(struct dn_fib_info *fi)
+-{
+-	spin_lock(&dn_fib_info_lock);
+-	if (fi && --fi->fib_treeref == 0) {
+-		if (fi->fib_next)
+-			fi->fib_next->fib_prev = fi->fib_prev;
+-		if (fi->fib_prev)
+-			fi->fib_prev->fib_next = fi->fib_next;
+-		if (fi == dn_fib_info_list)
+-			dn_fib_info_list = fi->fib_next;
+-		fi->fib_dead = 1;
+-		dn_fib_info_put(fi);
+-	}
+-	spin_unlock(&dn_fib_info_lock);
+-}
+-
+-static inline int dn_fib_nh_comp(const struct dn_fib_info *fi, const struct dn_fib_info *ofi)
+-{
+-	const struct dn_fib_nh *onh = ofi->fib_nh;
+-
+-	for_nexthops(fi) {
+-		if (nh->nh_oif != onh->nh_oif ||
+-			nh->nh_gw != onh->nh_gw ||
+-			nh->nh_scope != onh->nh_scope ||
+-			nh->nh_weight != onh->nh_weight ||
+-			((nh->nh_flags^onh->nh_flags)&~RTNH_F_DEAD))
+-				return -1;
+-		onh++;
+-	} endfor_nexthops(fi);
+-	return 0;
+-}
+-
+-static inline struct dn_fib_info *dn_fib_find_info(const struct dn_fib_info *nfi)
+-{
+-	for_fib_info() {
+-		if (fi->fib_nhs != nfi->fib_nhs)
+-			continue;
+-		if (nfi->fib_protocol == fi->fib_protocol &&
+-			nfi->fib_prefsrc == fi->fib_prefsrc &&
+-			nfi->fib_priority == fi->fib_priority &&
+-			memcmp(nfi->fib_metrics, fi->fib_metrics, sizeof(fi->fib_metrics)) == 0 &&
+-			((nfi->fib_flags^fi->fib_flags)&~RTNH_F_DEAD) == 0 &&
+-			(nfi->fib_nhs == 0 || dn_fib_nh_comp(fi, nfi) == 0))
+-				return fi;
+-	} endfor_fib_info();
+-	return NULL;
+-}
+-
+-static int dn_fib_count_nhs(const struct nlattr *attr)
+-{
+-	struct rtnexthop *nhp = nla_data(attr);
+-	int nhs = 0, nhlen = nla_len(attr);
+-
+-	while (rtnh_ok(nhp, nhlen)) {
+-		nhs++;
+-		nhp = rtnh_next(nhp, &nhlen);
+-	}
+-
+-	/* leftover implies invalid nexthop configuration, discard it */
+-	return nhlen > 0 ? 0 : nhs;
+-}
+-
+-static int dn_fib_get_nhs(struct dn_fib_info *fi, const struct nlattr *attr,
+-			  const struct rtmsg *r)
+-{
+-	struct rtnexthop *nhp = nla_data(attr);
+-	int nhlen = nla_len(attr);
+-
+-	change_nexthops(fi) {
+-		int attrlen;
+-
+-		if (!rtnh_ok(nhp, nhlen))
+-			return -EINVAL;
+-
+-		nh->nh_flags  = (r->rtm_flags&~0xFF) | nhp->rtnh_flags;
+-		nh->nh_oif    = nhp->rtnh_ifindex;
+-		nh->nh_weight = nhp->rtnh_hops + 1;
+-
+-		attrlen = rtnh_attrlen(nhp);
+-		if (attrlen > 0) {
+-			struct nlattr *gw_attr;
+-
+-			gw_attr = nla_find((struct nlattr *) (nhp + 1), attrlen, RTA_GATEWAY);
+-			nh->nh_gw = gw_attr ? nla_get_le16(gw_attr) : 0;
+-		}
+-
+-		nhp = rtnh_next(nhp, &nhlen);
+-	} endfor_nexthops(fi);
+-
+-	return 0;
+-}
+-
+-
+-static int dn_fib_check_nh(const struct rtmsg *r, struct dn_fib_info *fi, struct dn_fib_nh *nh)
+-{
+-	int err;
+-
+-	if (nh->nh_gw) {
+-		struct flowidn fld;
+-		struct dn_fib_res res;
+-
+-		if (nh->nh_flags&RTNH_F_ONLINK) {
+-			struct net_device *dev;
+-
+-			if (r->rtm_scope >= RT_SCOPE_LINK)
+-				return -EINVAL;
+-			if (dnet_addr_type(nh->nh_gw) != RTN_UNICAST)
+-				return -EINVAL;
+-			if ((dev = __dev_get_by_index(&init_net, nh->nh_oif)) == NULL)
+-				return -ENODEV;
+-			if (!(dev->flags&IFF_UP))
+-				return -ENETDOWN;
+-			nh->nh_dev = dev;
+-			dev_hold(dev);
+-			nh->nh_scope = RT_SCOPE_LINK;
+-			return 0;
+-		}
+-
+-		memset(&fld, 0, sizeof(fld));
+-		fld.daddr = nh->nh_gw;
+-		fld.flowidn_oif = nh->nh_oif;
+-		fld.flowidn_scope = r->rtm_scope + 1;
+-
+-		if (fld.flowidn_scope < RT_SCOPE_LINK)
+-			fld.flowidn_scope = RT_SCOPE_LINK;
+-
+-		if ((err = dn_fib_lookup(&fld, &res)) != 0)
+-			return err;
+-
+-		err = -EINVAL;
+-		if (res.type != RTN_UNICAST && res.type != RTN_LOCAL)
+-			goto out;
+-		nh->nh_scope = res.scope;
+-		nh->nh_oif = DN_FIB_RES_OIF(res);
+-		nh->nh_dev = DN_FIB_RES_DEV(res);
+-		if (nh->nh_dev == NULL)
+-			goto out;
+-		dev_hold(nh->nh_dev);
+-		err = -ENETDOWN;
+-		if (!(nh->nh_dev->flags & IFF_UP))
+-			goto out;
+-		err = 0;
+-out:
+-		dn_fib_res_put(&res);
+-		return err;
+-	} else {
+-		struct net_device *dev;
+-
+-		if (nh->nh_flags&(RTNH_F_PERVASIVE|RTNH_F_ONLINK))
+-			return -EINVAL;
+-
+-		dev = __dev_get_by_index(&init_net, nh->nh_oif);
+-		if (dev == NULL || dev->dn_ptr == NULL)
+-			return -ENODEV;
+-		if (!(dev->flags&IFF_UP))
+-			return -ENETDOWN;
+-		nh->nh_dev = dev;
+-		dev_hold(nh->nh_dev);
+-		nh->nh_scope = RT_SCOPE_HOST;
+-	}
+-
+-	return 0;
+-}
+-
+-
+-struct dn_fib_info *dn_fib_create_info(const struct rtmsg *r, struct nlattr *attrs[],
+-				       const struct nlmsghdr *nlh, int *errp)
+-{
+-	int err;
+-	struct dn_fib_info *fi = NULL;
+-	struct dn_fib_info *ofi;
+-	int nhs = 1;
+-
+-	if (r->rtm_type > RTN_MAX)
+-		goto err_inval;
+-
+-	if (dn_fib_props[r->rtm_type].scope > r->rtm_scope)
+-		goto err_inval;
+-
+-	if (attrs[RTA_MULTIPATH] &&
+-	    (nhs = dn_fib_count_nhs(attrs[RTA_MULTIPATH])) == 0)
+-		goto err_inval;
+-
+-	fi = kzalloc(struct_size(fi, fib_nh, nhs), GFP_KERNEL);
+-	err = -ENOBUFS;
+-	if (fi == NULL)
+-		goto failure;
+-
+-	fi->fib_protocol = r->rtm_protocol;
+-	fi->fib_nhs = nhs;
+-	fi->fib_flags = r->rtm_flags;
+-
+-	if (attrs[RTA_PRIORITY])
+-		fi->fib_priority = nla_get_u32(attrs[RTA_PRIORITY]);
+-
+-	if (attrs[RTA_METRICS]) {
+-		struct nlattr *attr;
+-		int rem;
+-
+-		nla_for_each_nested(attr, attrs[RTA_METRICS], rem) {
+-			int type = nla_type(attr);
+-
+-			if (type) {
+-				if (type > RTAX_MAX || type == RTAX_CC_ALGO ||
+-				    nla_len(attr) < 4)
+-					goto err_inval;
+-
+-				fi->fib_metrics[type-1] = nla_get_u32(attr);
+-			}
+-		}
+-	}
+-
+-	if (attrs[RTA_PREFSRC])
+-		fi->fib_prefsrc = nla_get_le16(attrs[RTA_PREFSRC]);
+-
+-	if (attrs[RTA_MULTIPATH]) {
+-		if ((err = dn_fib_get_nhs(fi, attrs[RTA_MULTIPATH], r)) != 0)
+-			goto failure;
+-
+-		if (attrs[RTA_OIF] &&
+-		    fi->fib_nh->nh_oif != nla_get_u32(attrs[RTA_OIF]))
+-			goto err_inval;
+-
+-		if (attrs[RTA_GATEWAY] &&
+-		    fi->fib_nh->nh_gw != nla_get_le16(attrs[RTA_GATEWAY]))
+-			goto err_inval;
+-	} else {
+-		struct dn_fib_nh *nh = fi->fib_nh;
+-
+-		if (attrs[RTA_OIF])
+-			nh->nh_oif = nla_get_u32(attrs[RTA_OIF]);
+-
+-		if (attrs[RTA_GATEWAY])
+-			nh->nh_gw = nla_get_le16(attrs[RTA_GATEWAY]);
+-
+-		nh->nh_flags = r->rtm_flags;
+-		nh->nh_weight = 1;
+-	}
+-
+-	if (r->rtm_type == RTN_NAT) {
+-		if (!attrs[RTA_GATEWAY] || nhs != 1 || attrs[RTA_OIF])
+-			goto err_inval;
+-
+-		fi->fib_nh->nh_gw = nla_get_le16(attrs[RTA_GATEWAY]);
+-		goto link_it;
+-	}
+-
+-	if (dn_fib_props[r->rtm_type].error) {
+-		if (attrs[RTA_GATEWAY] || attrs[RTA_OIF] || attrs[RTA_MULTIPATH])
+-			goto err_inval;
+-
+-		goto link_it;
+-	}
+-
+-	if (r->rtm_scope > RT_SCOPE_HOST)
+-		goto err_inval;
+-
+-	if (r->rtm_scope == RT_SCOPE_HOST) {
+-		struct dn_fib_nh *nh = fi->fib_nh;
+-
+-		/* Local address is added */
+-		if (nhs != 1 || nh->nh_gw)
+-			goto err_inval;
+-		nh->nh_scope = RT_SCOPE_NOWHERE;
+-		nh->nh_dev = dev_get_by_index(&init_net, fi->fib_nh->nh_oif);
+-		err = -ENODEV;
+-		if (nh->nh_dev == NULL)
+-			goto failure;
+-	} else {
+-		change_nexthops(fi) {
+-			if ((err = dn_fib_check_nh(r, fi, nh)) != 0)
+-				goto failure;
+-		} endfor_nexthops(fi)
+-	}
+-
+-	if (fi->fib_prefsrc) {
+-		if (r->rtm_type != RTN_LOCAL || !attrs[RTA_DST] ||
+-		    fi->fib_prefsrc != nla_get_le16(attrs[RTA_DST]))
+-			if (dnet_addr_type(fi->fib_prefsrc) != RTN_LOCAL)
+-				goto err_inval;
+-	}
+-
+-link_it:
+-	if ((ofi = dn_fib_find_info(fi)) != NULL) {
+-		fi->fib_dead = 1;
+-		dn_fib_free_info(fi);
+-		ofi->fib_treeref++;
+-		return ofi;
+-	}
+-
+-	fi->fib_treeref++;
+-	refcount_set(&fi->fib_clntref, 1);
+-	spin_lock(&dn_fib_info_lock);
+-	fi->fib_next = dn_fib_info_list;
+-	fi->fib_prev = NULL;
+-	if (dn_fib_info_list)
+-		dn_fib_info_list->fib_prev = fi;
+-	dn_fib_info_list = fi;
+-	spin_unlock(&dn_fib_info_lock);
+-	return fi;
+-
+-err_inval:
+-	err = -EINVAL;
+-
+-failure:
+-	*errp = err;
+-	if (fi) {
+-		fi->fib_dead = 1;
+-		dn_fib_free_info(fi);
+-	}
+-
+-	return NULL;
+-}
+-
+-int dn_fib_semantic_match(int type, struct dn_fib_info *fi, const struct flowidn *fld, struct dn_fib_res *res)
+-{
+-	int err = dn_fib_props[type].error;
+-
+-	if (err == 0) {
+-		if (fi->fib_flags & RTNH_F_DEAD)
+-			return 1;
+-
+-		res->fi = fi;
+-
+-		switch (type) {
+-		case RTN_NAT:
+-			DN_FIB_RES_RESET(*res);
+-			refcount_inc(&fi->fib_clntref);
+-			return 0;
+-		case RTN_UNICAST:
+-		case RTN_LOCAL:
+-			for_nexthops(fi) {
+-				if (nh->nh_flags & RTNH_F_DEAD)
+-					continue;
+-				if (!fld->flowidn_oif ||
+-				    fld->flowidn_oif == nh->nh_oif)
+-					break;
+-			}
+-			if (nhsel < fi->fib_nhs) {
+-				res->nh_sel = nhsel;
+-				refcount_inc(&fi->fib_clntref);
+-				return 0;
+-			}
+-			endfor_nexthops(fi);
+-			res->fi = NULL;
+-			return 1;
+-		default:
+-			net_err_ratelimited("DECnet: impossible routing event : dn_fib_semantic_match type=%d\n",
+-					    type);
+-			res->fi = NULL;
+-			return -EINVAL;
+-		}
+-	}
+-	return err;
+-}
+-
+-void dn_fib_select_multipath(const struct flowidn *fld, struct dn_fib_res *res)
+-{
+-	struct dn_fib_info *fi = res->fi;
+-	int w;
+-
+-	spin_lock_bh(&dn_fib_multipath_lock);
+-	if (fi->fib_power <= 0) {
+-		int power = 0;
+-		change_nexthops(fi) {
+-			if (!(nh->nh_flags&RTNH_F_DEAD)) {
+-				power += nh->nh_weight;
+-				nh->nh_power = nh->nh_weight;
+-			}
+-		} endfor_nexthops(fi);
+-		fi->fib_power = power;
+-		if (power < 0) {
+-			spin_unlock_bh(&dn_fib_multipath_lock);
+-			res->nh_sel = 0;
+-			return;
+-		}
+-	}
+-
+-	w = jiffies % fi->fib_power;
+-
+-	change_nexthops(fi) {
+-		if (!(nh->nh_flags&RTNH_F_DEAD) && nh->nh_power) {
+-			if ((w -= nh->nh_power) <= 0) {
+-				nh->nh_power--;
+-				fi->fib_power--;
+-				res->nh_sel = nhsel;
+-				spin_unlock_bh(&dn_fib_multipath_lock);
+-				return;
+-			}
+-		}
+-	} endfor_nexthops(fi);
+-	res->nh_sel = 0;
+-	spin_unlock_bh(&dn_fib_multipath_lock);
+-}
+-
+-static inline u32 rtm_get_table(struct nlattr *attrs[], u8 table)
+-{
+-	if (attrs[RTA_TABLE])
+-		table = nla_get_u32(attrs[RTA_TABLE]);
+-
+-	return table;
+-}
+-
+-static int dn_fib_rtm_delroute(struct sk_buff *skb, struct nlmsghdr *nlh,
+-			       struct netlink_ext_ack *extack)
+-{
+-	struct net *net = sock_net(skb->sk);
+-	struct dn_fib_table *tb;
+-	struct rtmsg *r = nlmsg_data(nlh);
+-	struct nlattr *attrs[RTA_MAX+1];
+-	int err;
+-
+-	if (!netlink_capable(skb, CAP_NET_ADMIN))
+-		return -EPERM;
+-
+-	if (!net_eq(net, &init_net))
+-		return -EINVAL;
+-
+-	err = nlmsg_parse_deprecated(nlh, sizeof(*r), attrs, RTA_MAX,
+-				     rtm_dn_policy, extack);
+-	if (err < 0)
+-		return err;
+-
+-	tb = dn_fib_get_table(rtm_get_table(attrs, r->rtm_table), 0);
+-	if (!tb)
+-		return -ESRCH;
+-
+-	return tb->delete(tb, r, attrs, nlh, &NETLINK_CB(skb));
+-}
+-
+-static int dn_fib_rtm_newroute(struct sk_buff *skb, struct nlmsghdr *nlh,
+-			       struct netlink_ext_ack *extack)
+-{
+-	struct net *net = sock_net(skb->sk);
+-	struct dn_fib_table *tb;
+-	struct rtmsg *r = nlmsg_data(nlh);
+-	struct nlattr *attrs[RTA_MAX+1];
+-	int err;
+-
+-	if (!netlink_capable(skb, CAP_NET_ADMIN))
+-		return -EPERM;
+-
+-	if (!net_eq(net, &init_net))
+-		return -EINVAL;
+-
+-	err = nlmsg_parse_deprecated(nlh, sizeof(*r), attrs, RTA_MAX,
+-				     rtm_dn_policy, extack);
+-	if (err < 0)
+-		return err;
+-
+-	tb = dn_fib_get_table(rtm_get_table(attrs, r->rtm_table), 1);
+-	if (!tb)
+-		return -ENOBUFS;
+-
+-	return tb->insert(tb, r, attrs, nlh, &NETLINK_CB(skb));
+-}
+-
+-static void fib_magic(int cmd, int type, __le16 dst, int dst_len, struct dn_ifaddr *ifa)
+-{
+-	struct dn_fib_table *tb;
+-	struct {
+-		struct nlmsghdr nlh;
+-		struct rtmsg rtm;
+-	} req;
+-	struct {
+-		struct nlattr hdr;
+-		__le16 dst;
+-	} dst_attr = {
+-		.dst = dst,
+-	};
+-	struct {
+-		struct nlattr hdr;
+-		__le16 prefsrc;
+-	} prefsrc_attr = {
+-		.prefsrc = ifa->ifa_local,
+-	};
+-	struct {
+-		struct nlattr hdr;
+-		u32 oif;
+-	} oif_attr = {
+-		.oif = ifa->ifa_dev->dev->ifindex,
+-	};
+-	struct nlattr *attrs[RTA_MAX+1] = {
+-		[RTA_DST] = (struct nlattr *) &dst_attr,
+-		[RTA_PREFSRC] = (struct nlattr * ) &prefsrc_attr,
+-		[RTA_OIF] = (struct nlattr *) &oif_attr,
+-	};
+-
+-	memset(&req.rtm, 0, sizeof(req.rtm));
+-
+-	if (type == RTN_UNICAST)
+-		tb = dn_fib_get_table(RT_MIN_TABLE, 1);
+-	else
+-		tb = dn_fib_get_table(RT_TABLE_LOCAL, 1);
+-
+-	if (tb == NULL)
+-		return;
+-
+-	req.nlh.nlmsg_len = sizeof(req);
+-	req.nlh.nlmsg_type = cmd;
+-	req.nlh.nlmsg_flags = NLM_F_REQUEST|NLM_F_CREATE|NLM_F_APPEND;
+-	req.nlh.nlmsg_pid = 0;
+-	req.nlh.nlmsg_seq = 0;
+-
+-	req.rtm.rtm_dst_len = dst_len;
+-	req.rtm.rtm_table = tb->n;
+-	req.rtm.rtm_protocol = RTPROT_KERNEL;
+-	req.rtm.rtm_scope = (type != RTN_LOCAL ? RT_SCOPE_LINK : RT_SCOPE_HOST);
+-	req.rtm.rtm_type = type;
+-
+-	if (cmd == RTM_NEWROUTE)
+-		tb->insert(tb, &req.rtm, attrs, &req.nlh, NULL);
+-	else
+-		tb->delete(tb, &req.rtm, attrs, &req.nlh, NULL);
+-}
+-
+-static void dn_fib_add_ifaddr(struct dn_ifaddr *ifa)
+-{
+-
+-	fib_magic(RTM_NEWROUTE, RTN_LOCAL, ifa->ifa_local, 16, ifa);
+-
+-#if 0
+-	if (!(dev->flags&IFF_UP))
+-		return;
+-	/* In the future, we will want to add default routes here */
+-
+-#endif
+-}
+-
+-static void dn_fib_del_ifaddr(struct dn_ifaddr *ifa)
+-{
+-	int found_it = 0;
+-	struct net_device *dev;
+-	struct dn_dev *dn_db;
+-	struct dn_ifaddr *ifa2;
+-
+-	ASSERT_RTNL();
+-
+-	/* Scan device list */
+-	rcu_read_lock();
+-	for_each_netdev_rcu(&init_net, dev) {
+-		dn_db = rcu_dereference(dev->dn_ptr);
+-		if (dn_db == NULL)
+-			continue;
+-		for (ifa2 = rcu_dereference(dn_db->ifa_list);
+-		     ifa2 != NULL;
+-		     ifa2 = rcu_dereference(ifa2->ifa_next)) {
+-			if (ifa2->ifa_local == ifa->ifa_local) {
+-				found_it = 1;
+-				break;
+-			}
+-		}
+-	}
+-	rcu_read_unlock();
+-
+-	if (found_it == 0) {
+-		fib_magic(RTM_DELROUTE, RTN_LOCAL, ifa->ifa_local, 16, ifa);
+-
+-		if (dnet_addr_type(ifa->ifa_local) != RTN_LOCAL) {
+-			if (dn_fib_sync_down(ifa->ifa_local, NULL, 0))
+-				dn_fib_flush();
+-		}
+-	}
+-}
+-
+-static void dn_fib_disable_addr(struct net_device *dev, int force)
+-{
+-	if (dn_fib_sync_down(0, dev, force))
+-		dn_fib_flush();
+-	dn_rt_cache_flush(0);
+-	neigh_ifdown(&dn_neigh_table, dev);
+-}
+-
+-static int dn_fib_dnaddr_event(struct notifier_block *this, unsigned long event, void *ptr)
+-{
+-	struct dn_ifaddr *ifa = (struct dn_ifaddr *)ptr;
+-
+-	switch (event) {
+-	case NETDEV_UP:
+-		dn_fib_add_ifaddr(ifa);
+-		dn_fib_sync_up(ifa->ifa_dev->dev);
+-		dn_rt_cache_flush(-1);
+-		break;
+-	case NETDEV_DOWN:
+-		dn_fib_del_ifaddr(ifa);
+-		if (ifa->ifa_dev && ifa->ifa_dev->ifa_list == NULL) {
+-			dn_fib_disable_addr(ifa->ifa_dev->dev, 1);
+-		} else {
+-			dn_rt_cache_flush(-1);
+-		}
+-		break;
+-	}
+-	return NOTIFY_DONE;
+-}
+-
+-static int dn_fib_sync_down(__le16 local, struct net_device *dev, int force)
+-{
+-	int ret = 0;
+-	int scope = RT_SCOPE_NOWHERE;
+-
+-	if (force)
+-		scope = -1;
+-
+-	for_fib_info() {
+-		/*
+-		 * This makes no sense for DECnet.... we will almost
+-		 * certainly have more than one local address the same
+-		 * over all our interfaces. It needs thinking about
+-		 * some more.
+-		 */
+-		if (local && fi->fib_prefsrc == local) {
+-			fi->fib_flags |= RTNH_F_DEAD;
+-			ret++;
+-		} else if (dev && fi->fib_nhs) {
+-			int dead = 0;
+-
+-			change_nexthops(fi) {
+-				if (nh->nh_flags&RTNH_F_DEAD)
+-					dead++;
+-				else if (nh->nh_dev == dev &&
+-						nh->nh_scope != scope) {
+-					spin_lock_bh(&dn_fib_multipath_lock);
+-					nh->nh_flags |= RTNH_F_DEAD;
+-					fi->fib_power -= nh->nh_power;
+-					nh->nh_power = 0;
+-					spin_unlock_bh(&dn_fib_multipath_lock);
+-					dead++;
+-				}
+-			} endfor_nexthops(fi)
+-			if (dead == fi->fib_nhs) {
+-				fi->fib_flags |= RTNH_F_DEAD;
+-				ret++;
+-			}
+-		}
+-	} endfor_fib_info();
+-	return ret;
+-}
+-
+-
+-static int dn_fib_sync_up(struct net_device *dev)
+-{
+-	int ret = 0;
+-
+-	if (!(dev->flags&IFF_UP))
+-		return 0;
+-
+-	for_fib_info() {
+-		int alive = 0;
+-
+-		change_nexthops(fi) {
+-			if (!(nh->nh_flags&RTNH_F_DEAD)) {
+-				alive++;
+-				continue;
+-			}
+-			if (nh->nh_dev == NULL || !(nh->nh_dev->flags&IFF_UP))
+-				continue;
+-			if (nh->nh_dev != dev || dev->dn_ptr == NULL)
+-				continue;
+-			alive++;
+-			spin_lock_bh(&dn_fib_multipath_lock);
+-			nh->nh_power = 0;
+-			nh->nh_flags &= ~RTNH_F_DEAD;
+-			spin_unlock_bh(&dn_fib_multipath_lock);
+-		} endfor_nexthops(fi);
+-
+-		if (alive > 0) {
+-			fi->fib_flags &= ~RTNH_F_DEAD;
+-			ret++;
+-		}
+-	} endfor_fib_info();
+-	return ret;
+-}
+-
+-static struct notifier_block dn_fib_dnaddr_notifier = {
+-	.notifier_call = dn_fib_dnaddr_event,
+-};
+-
+-void __exit dn_fib_cleanup(void)
+-{
+-	dn_fib_table_cleanup();
+-	dn_fib_rules_cleanup();
+-
+-	unregister_dnaddr_notifier(&dn_fib_dnaddr_notifier);
+-}
+-
+-
+-void __init dn_fib_init(void)
+-{
+-	dn_fib_table_init();
+-	dn_fib_rules_init();
+-
+-	register_dnaddr_notifier(&dn_fib_dnaddr_notifier);
+-
+-	rtnl_register_module(THIS_MODULE, PF_DECnet, RTM_NEWROUTE,
+-			     dn_fib_rtm_newroute, NULL, 0);
+-	rtnl_register_module(THIS_MODULE, PF_DECnet, RTM_DELROUTE,
+-			     dn_fib_rtm_delroute, NULL, 0);
+-}
+diff --git a/net/decnet/dn_neigh.c b/net/decnet/dn_neigh.c
+deleted file mode 100644
+index 94b306f6d5511..0000000000000
+--- a/net/decnet/dn_neigh.c
++++ /dev/null
+@@ -1,605 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * DECnet       An implementation of the DECnet protocol suite for the LINUX
+- *              operating system.  DECnet is implemented using the  BSD Socket
+- *              interface as the means of communication with the user level.
+- *
+- *              DECnet Neighbour Functions (Adjacency Database and
+- *                                                        On-Ethernet Cache)
+- *
+- * Author:      Steve Whitehouse <SteveW@ACM.org>
+- *
+- *
+- * Changes:
+- *     Steve Whitehouse     : Fixed router listing routine
+- *     Steve Whitehouse     : Added error_report functions
+- *     Steve Whitehouse     : Added default router detection
+- *     Steve Whitehouse     : Hop counts in outgoing messages
+- *     Steve Whitehouse     : Fixed src/dst in outgoing messages so
+- *                            forwarding now stands a good chance of
+- *                            working.
+- *     Steve Whitehouse     : Fixed neighbour states (for now anyway).
+- *     Steve Whitehouse     : Made error_report functions dummies. This
+- *                            is not the right place to return skbs.
+- *     Steve Whitehouse     : Convert to seq_file
+- *
+- */
+-
+-#include <linux/net.h>
+-#include <linux/module.h>
+-#include <linux/socket.h>
+-#include <linux/if_arp.h>
+-#include <linux/slab.h>
+-#include <linux/if_ether.h>
+-#include <linux/init.h>
+-#include <linux/proc_fs.h>
+-#include <linux/string.h>
+-#include <linux/netfilter_decnet.h>
+-#include <linux/spinlock.h>
+-#include <linux/seq_file.h>
+-#include <linux/rcupdate.h>
+-#include <linux/jhash.h>
+-#include <linux/atomic.h>
+-#include <net/net_namespace.h>
+-#include <net/neighbour.h>
+-#include <net/dst.h>
+-#include <net/flow.h>
+-#include <net/dn.h>
+-#include <net/dn_dev.h>
+-#include <net/dn_neigh.h>
+-#include <net/dn_route.h>
+-
+-static int dn_neigh_construct(struct neighbour *);
+-static void dn_neigh_error_report(struct neighbour *, struct sk_buff *);
+-static int dn_neigh_output(struct neighbour *neigh, struct sk_buff *skb);
+-
+-/*
+- * Operations for adding the link layer header.
+- */
+-static const struct neigh_ops dn_neigh_ops = {
+-	.family =		AF_DECnet,
+-	.error_report =		dn_neigh_error_report,
+-	.output =		dn_neigh_output,
+-	.connected_output =	dn_neigh_output,
+-};
+-
+-static u32 dn_neigh_hash(const void *pkey,
+-			 const struct net_device *dev,
+-			 __u32 *hash_rnd)
+-{
+-	return jhash_2words(*(__u16 *)pkey, 0, hash_rnd[0]);
+-}
+-
+-static bool dn_key_eq(const struct neighbour *neigh, const void *pkey)
+-{
+-	return neigh_key_eq16(neigh, pkey);
+-}
+-
+-struct neigh_table dn_neigh_table = {
+-	.family =			PF_DECnet,
+-	.entry_size =			NEIGH_ENTRY_SIZE(sizeof(struct dn_neigh)),
+-	.key_len =			sizeof(__le16),
+-	.protocol =			cpu_to_be16(ETH_P_DNA_RT),
+-	.hash =				dn_neigh_hash,
+-	.key_eq =			dn_key_eq,
+-	.constructor =			dn_neigh_construct,
+-	.id =				"dn_neigh_cache",
+-	.parms ={
+-		.tbl =			&dn_neigh_table,
+-		.reachable_time =	30 * HZ,
+-		.data = {
+-			[NEIGH_VAR_MCAST_PROBES] = 0,
+-			[NEIGH_VAR_UCAST_PROBES] = 0,
+-			[NEIGH_VAR_APP_PROBES] = 0,
+-			[NEIGH_VAR_RETRANS_TIME] = 1 * HZ,
+-			[NEIGH_VAR_BASE_REACHABLE_TIME] = 30 * HZ,
+-			[NEIGH_VAR_DELAY_PROBE_TIME] = 5 * HZ,
+-			[NEIGH_VAR_GC_STALETIME] = 60 * HZ,
+-			[NEIGH_VAR_QUEUE_LEN_BYTES] = SK_WMEM_MAX,
+-			[NEIGH_VAR_PROXY_QLEN] = 0,
+-			[NEIGH_VAR_ANYCAST_DELAY] = 0,
+-			[NEIGH_VAR_PROXY_DELAY] = 0,
+-			[NEIGH_VAR_LOCKTIME] = 1 * HZ,
+-		},
+-	},
+-	.gc_interval =			30 * HZ,
+-	.gc_thresh1 =			128,
+-	.gc_thresh2 =			512,
+-	.gc_thresh3 =			1024,
+-};
+-
+-static int dn_neigh_construct(struct neighbour *neigh)
+-{
+-	struct net_device *dev = neigh->dev;
+-	struct dn_neigh *dn = container_of(neigh, struct dn_neigh, n);
+-	struct dn_dev *dn_db;
+-	struct neigh_parms *parms;
+-
+-	rcu_read_lock();
+-	dn_db = rcu_dereference(dev->dn_ptr);
+-	if (dn_db == NULL) {
+-		rcu_read_unlock();
+-		return -EINVAL;
+-	}
+-
+-	parms = dn_db->neigh_parms;
+-	if (!parms) {
+-		rcu_read_unlock();
+-		return -EINVAL;
+-	}
+-
+-	__neigh_parms_put(neigh->parms);
+-	neigh->parms = neigh_parms_clone(parms);
+-	rcu_read_unlock();
+-
+-	neigh->ops = &dn_neigh_ops;
+-	neigh->nud_state = NUD_NOARP;
+-	neigh->output = neigh->ops->connected_output;
+-
+-	if ((dev->type == ARPHRD_IPGRE) || (dev->flags & IFF_POINTOPOINT))
+-		memcpy(neigh->ha, dev->broadcast, dev->addr_len);
+-	else if ((dev->type == ARPHRD_ETHER) || (dev->type == ARPHRD_LOOPBACK))
+-		dn_dn2eth(neigh->ha, dn->addr);
+-	else {
+-		net_dbg_ratelimited("Trying to create neigh for hw %d\n",
+-				    dev->type);
+-		return -EINVAL;
+-	}
+-
+-	/*
+-	 * Make an estimate of the remote block size by assuming that its
+-	 * two less then the device mtu, which it true for ethernet (and
+-	 * other things which support long format headers) since there is
+-	 * an extra length field (of 16 bits) which isn't part of the
+-	 * ethernet headers and which the DECnet specs won't admit is part
+-	 * of the DECnet routing headers either.
+-	 *
+-	 * If we over estimate here its no big deal, the NSP negotiations
+-	 * will prevent us from sending packets which are too large for the
+-	 * remote node to handle. In any case this figure is normally updated
+-	 * by a hello message in most cases.
+-	 */
+-	dn->blksize = dev->mtu - 2;
+-
+-	return 0;
+-}
+-
+-static void dn_neigh_error_report(struct neighbour *neigh, struct sk_buff *skb)
+-{
+-	printk(KERN_DEBUG "dn_neigh_error_report: called\n");
+-	kfree_skb(skb);
+-}
+-
+-static int dn_neigh_output(struct neighbour *neigh, struct sk_buff *skb)
+-{
+-	struct dst_entry *dst = skb_dst(skb);
+-	struct dn_route *rt = (struct dn_route *)dst;
+-	struct net_device *dev = neigh->dev;
+-	char mac_addr[ETH_ALEN];
+-	unsigned int seq;
+-	int err;
+-
+-	dn_dn2eth(mac_addr, rt->rt_local_src);
+-	do {
+-		seq = read_seqbegin(&neigh->ha_lock);
+-		err = dev_hard_header(skb, dev, ntohs(skb->protocol),
+-				      neigh->ha, mac_addr, skb->len);
+-	} while (read_seqretry(&neigh->ha_lock, seq));
+-
+-	if (err >= 0)
+-		err = dev_queue_xmit(skb);
+-	else {
+-		kfree_skb(skb);
+-		err = -EINVAL;
+-	}
+-	return err;
+-}
+-
+-static int dn_neigh_output_packet(struct net *net, struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dst_entry *dst = skb_dst(skb);
+-	struct dn_route *rt = (struct dn_route *)dst;
+-	struct neighbour *neigh = rt->n;
+-
+-	return neigh->output(neigh, skb);
+-}
+-
+-/*
+- * For talking to broadcast devices: Ethernet & PPP
+- */
+-static int dn_long_output(struct neighbour *neigh, struct sock *sk,
+-			  struct sk_buff *skb)
+-{
+-	struct net_device *dev = neigh->dev;
+-	int headroom = dev->hard_header_len + sizeof(struct dn_long_packet) + 3;
+-	unsigned char *data;
+-	struct dn_long_packet *lp;
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-
+-
+-	if (skb_headroom(skb) < headroom) {
+-		struct sk_buff *skb2 = skb_realloc_headroom(skb, headroom);
+-		if (skb2 == NULL) {
+-			net_crit_ratelimited("dn_long_output: no memory\n");
+-			kfree_skb(skb);
+-			return -ENOBUFS;
+-		}
+-		consume_skb(skb);
+-		skb = skb2;
+-		net_info_ratelimited("dn_long_output: Increasing headroom\n");
+-	}
+-
+-	data = skb_push(skb, sizeof(struct dn_long_packet) + 3);
+-	lp = (struct dn_long_packet *)(data+3);
+-
+-	*((__le16 *)data) = cpu_to_le16(skb->len - 2);
+-	*(data + 2) = 1 | DN_RT_F_PF; /* Padding */
+-
+-	lp->msgflg   = DN_RT_PKT_LONG|(cb->rt_flags&(DN_RT_F_IE|DN_RT_F_RQR|DN_RT_F_RTS));
+-	lp->d_area   = lp->d_subarea = 0;
+-	dn_dn2eth(lp->d_id, cb->dst);
+-	lp->s_area   = lp->s_subarea = 0;
+-	dn_dn2eth(lp->s_id, cb->src);
+-	lp->nl2      = 0;
+-	lp->visit_ct = cb->hops & 0x3f;
+-	lp->s_class  = 0;
+-	lp->pt       = 0;
+-
+-	skb_reset_network_header(skb);
+-
+-	return NF_HOOK(NFPROTO_DECNET, NF_DN_POST_ROUTING,
+-		       &init_net, sk, skb, NULL, neigh->dev,
+-		       dn_neigh_output_packet);
+-}
+-
+-/*
+- * For talking to pointopoint and multidrop devices: DDCMP and X.25
+- */
+-static int dn_short_output(struct neighbour *neigh, struct sock *sk,
+-			   struct sk_buff *skb)
+-{
+-	struct net_device *dev = neigh->dev;
+-	int headroom = dev->hard_header_len + sizeof(struct dn_short_packet) + 2;
+-	struct dn_short_packet *sp;
+-	unsigned char *data;
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-
+-
+-	if (skb_headroom(skb) < headroom) {
+-		struct sk_buff *skb2 = skb_realloc_headroom(skb, headroom);
+-		if (skb2 == NULL) {
+-			net_crit_ratelimited("dn_short_output: no memory\n");
+-			kfree_skb(skb);
+-			return -ENOBUFS;
+-		}
+-		consume_skb(skb);
+-		skb = skb2;
+-		net_info_ratelimited("dn_short_output: Increasing headroom\n");
+-	}
+-
+-	data = skb_push(skb, sizeof(struct dn_short_packet) + 2);
+-	*((__le16 *)data) = cpu_to_le16(skb->len - 2);
+-	sp = (struct dn_short_packet *)(data+2);
+-
+-	sp->msgflg     = DN_RT_PKT_SHORT|(cb->rt_flags&(DN_RT_F_RQR|DN_RT_F_RTS));
+-	sp->dstnode    = cb->dst;
+-	sp->srcnode    = cb->src;
+-	sp->forward    = cb->hops & 0x3f;
+-
+-	skb_reset_network_header(skb);
+-
+-	return NF_HOOK(NFPROTO_DECNET, NF_DN_POST_ROUTING,
+-		       &init_net, sk, skb, NULL, neigh->dev,
+-		       dn_neigh_output_packet);
+-}
+-
+-/*
+- * For talking to DECnet phase III nodes
+- * Phase 3 output is the same as short output, execpt that
+- * it clears the area bits before transmission.
+- */
+-static int dn_phase3_output(struct neighbour *neigh, struct sock *sk,
+-			    struct sk_buff *skb)
+-{
+-	struct net_device *dev = neigh->dev;
+-	int headroom = dev->hard_header_len + sizeof(struct dn_short_packet) + 2;
+-	struct dn_short_packet *sp;
+-	unsigned char *data;
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-
+-	if (skb_headroom(skb) < headroom) {
+-		struct sk_buff *skb2 = skb_realloc_headroom(skb, headroom);
+-		if (skb2 == NULL) {
+-			net_crit_ratelimited("dn_phase3_output: no memory\n");
+-			kfree_skb(skb);
+-			return -ENOBUFS;
+-		}
+-		consume_skb(skb);
+-		skb = skb2;
+-		net_info_ratelimited("dn_phase3_output: Increasing headroom\n");
+-	}
+-
+-	data = skb_push(skb, sizeof(struct dn_short_packet) + 2);
+-	*((__le16 *)data) = cpu_to_le16(skb->len - 2);
+-	sp = (struct dn_short_packet *)(data + 2);
+-
+-	sp->msgflg   = DN_RT_PKT_SHORT|(cb->rt_flags&(DN_RT_F_RQR|DN_RT_F_RTS));
+-	sp->dstnode  = cb->dst & cpu_to_le16(0x03ff);
+-	sp->srcnode  = cb->src & cpu_to_le16(0x03ff);
+-	sp->forward  = cb->hops & 0x3f;
+-
+-	skb_reset_network_header(skb);
+-
+-	return NF_HOOK(NFPROTO_DECNET, NF_DN_POST_ROUTING,
+-		       &init_net, sk, skb, NULL, neigh->dev,
+-		       dn_neigh_output_packet);
+-}
+-
+-int dn_to_neigh_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dst_entry *dst = skb_dst(skb);
+-	struct dn_route *rt = (struct dn_route *) dst;
+-	struct neighbour *neigh = rt->n;
+-	struct dn_neigh *dn = container_of(neigh, struct dn_neigh, n);
+-	struct dn_dev *dn_db;
+-	bool use_long;
+-
+-	rcu_read_lock();
+-	dn_db = rcu_dereference(neigh->dev->dn_ptr);
+-	if (dn_db == NULL) {
+-		rcu_read_unlock();
+-		return -EINVAL;
+-	}
+-	use_long = dn_db->use_long;
+-	rcu_read_unlock();
+-
+-	if (dn->flags & DN_NDFLAG_P3)
+-		return dn_phase3_output(neigh, sk, skb);
+-	if (use_long)
+-		return dn_long_output(neigh, sk, skb);
+-	else
+-		return dn_short_output(neigh, sk, skb);
+-}
+-
+-/*
+- * Unfortunately, the neighbour code uses the device in its hash
+- * function, so we don't get any advantage from it. This function
+- * basically does a neigh_lookup(), but without comparing the device
+- * field. This is required for the On-Ethernet cache
+- */
+-
+-/*
+- * Pointopoint link receives a hello message
+- */
+-void dn_neigh_pointopoint_hello(struct sk_buff *skb)
+-{
+-	kfree_skb(skb);
+-}
+-
+-/*
+- * Ethernet router hello message received
+- */
+-int dn_neigh_router_hello(struct net *net, struct sock *sk, struct sk_buff *skb)
+-{
+-	struct rtnode_hello_message *msg = (struct rtnode_hello_message *)skb->data;
+-
+-	struct neighbour *neigh;
+-	struct dn_neigh *dn;
+-	struct dn_dev *dn_db;
+-	__le16 src;
+-
+-	src = dn_eth2dn(msg->id);
+-
+-	neigh = __neigh_lookup(&dn_neigh_table, &src, skb->dev, 1);
+-
+-	dn = container_of(neigh, struct dn_neigh, n);
+-
+-	if (neigh) {
+-		write_lock(&neigh->lock);
+-
+-		neigh->used = jiffies;
+-		dn_db = rcu_dereference(neigh->dev->dn_ptr);
+-
+-		if (!(neigh->nud_state & NUD_PERMANENT)) {
+-			neigh->updated = jiffies;
+-
+-			if (neigh->dev->type == ARPHRD_ETHER)
+-				memcpy(neigh->ha, &eth_hdr(skb)->h_source, ETH_ALEN);
+-
+-			dn->blksize  = le16_to_cpu(msg->blksize);
+-			dn->priority = msg->priority;
+-
+-			dn->flags &= ~DN_NDFLAG_P3;
+-
+-			switch (msg->iinfo & DN_RT_INFO_TYPE) {
+-			case DN_RT_INFO_L1RT:
+-				dn->flags &=~DN_NDFLAG_R2;
+-				dn->flags |= DN_NDFLAG_R1;
+-				break;
+-			case DN_RT_INFO_L2RT:
+-				dn->flags |= DN_NDFLAG_R2;
+-			}
+-		}
+-
+-		/* Only use routers in our area */
+-		if ((le16_to_cpu(src)>>10) == (le16_to_cpu((decnet_address))>>10)) {
+-			if (!dn_db->router) {
+-				dn_db->router = neigh_clone(neigh);
+-			} else {
+-				if (msg->priority > ((struct dn_neigh *)dn_db->router)->priority)
+-					neigh_release(xchg(&dn_db->router, neigh_clone(neigh)));
+-			}
+-		}
+-		write_unlock(&neigh->lock);
+-		neigh_release(neigh);
+-	}
+-
+-	kfree_skb(skb);
+-	return 0;
+-}
+-
+-/*
+- * Endnode hello message received
+- */
+-int dn_neigh_endnode_hello(struct net *net, struct sock *sk, struct sk_buff *skb)
+-{
+-	struct endnode_hello_message *msg = (struct endnode_hello_message *)skb->data;
+-	struct neighbour *neigh;
+-	struct dn_neigh *dn;
+-	__le16 src;
+-
+-	src = dn_eth2dn(msg->id);
+-
+-	neigh = __neigh_lookup(&dn_neigh_table, &src, skb->dev, 1);
+-
+-	dn = container_of(neigh, struct dn_neigh, n);
+-
+-	if (neigh) {
+-		write_lock(&neigh->lock);
+-
+-		neigh->used = jiffies;
+-
+-		if (!(neigh->nud_state & NUD_PERMANENT)) {
+-			neigh->updated = jiffies;
+-
+-			if (neigh->dev->type == ARPHRD_ETHER)
+-				memcpy(neigh->ha, &eth_hdr(skb)->h_source, ETH_ALEN);
+-			dn->flags   &= ~(DN_NDFLAG_R1 | DN_NDFLAG_R2);
+-			dn->blksize  = le16_to_cpu(msg->blksize);
+-			dn->priority = 0;
+-		}
+-
+-		write_unlock(&neigh->lock);
+-		neigh_release(neigh);
+-	}
+-
+-	kfree_skb(skb);
+-	return 0;
+-}
+-
+-static char *dn_find_slot(char *base, int max, int priority)
+-{
+-	int i;
+-	unsigned char *min = NULL;
+-
+-	base += 6; /* skip first id */
+-
+-	for(i = 0; i < max; i++) {
+-		if (!min || (*base < *min))
+-			min = base;
+-		base += 7; /* find next priority */
+-	}
+-
+-	if (!min)
+-		return NULL;
+-
+-	return (*min < priority) ? (min - 6) : NULL;
+-}
+-
+-struct elist_cb_state {
+-	struct net_device *dev;
+-	unsigned char *ptr;
+-	unsigned char *rs;
+-	int t, n;
+-};
+-
+-static void neigh_elist_cb(struct neighbour *neigh, void *_info)
+-{
+-	struct elist_cb_state *s = _info;
+-	struct dn_neigh *dn;
+-
+-	if (neigh->dev != s->dev)
+-		return;
+-
+-	dn = container_of(neigh, struct dn_neigh, n);
+-	if (!(dn->flags & (DN_NDFLAG_R1|DN_NDFLAG_R2)))
+-		return;
+-
+-	if (s->t == s->n)
+-		s->rs = dn_find_slot(s->ptr, s->n, dn->priority);
+-	else
+-		s->t++;
+-	if (s->rs == NULL)
+-		return;
+-
+-	dn_dn2eth(s->rs, dn->addr);
+-	s->rs += 6;
+-	*(s->rs) = neigh->nud_state & NUD_CONNECTED ? 0x80 : 0x0;
+-	*(s->rs) |= dn->priority;
+-	s->rs++;
+-}
+-
+-int dn_neigh_elist(struct net_device *dev, unsigned char *ptr, int n)
+-{
+-	struct elist_cb_state state;
+-
+-	state.dev = dev;
+-	state.t = 0;
+-	state.n = n;
+-	state.ptr = ptr;
+-	state.rs = ptr;
+-
+-	neigh_for_each(&dn_neigh_table, neigh_elist_cb, &state);
+-
+-	return state.t;
+-}
+-
+-
+-#ifdef CONFIG_PROC_FS
+-
+-static inline void dn_neigh_format_entry(struct seq_file *seq,
+-					 struct neighbour *n)
+-{
+-	struct dn_neigh *dn = container_of(n, struct dn_neigh, n);
+-	char buf[DN_ASCBUF_LEN];
+-
+-	read_lock(&n->lock);
+-	seq_printf(seq, "%-7s %s%s%s   %02x    %02d  %07ld %-8s\n",
+-		   dn_addr2asc(le16_to_cpu(dn->addr), buf),
+-		   (dn->flags&DN_NDFLAG_R1) ? "1" : "-",
+-		   (dn->flags&DN_NDFLAG_R2) ? "2" : "-",
+-		   (dn->flags&DN_NDFLAG_P3) ? "3" : "-",
+-		   dn->n.nud_state,
+-		   refcount_read(&dn->n.refcnt),
+-		   dn->blksize,
+-		   (dn->n.dev) ? dn->n.dev->name : "?");
+-	read_unlock(&n->lock);
+-}
+-
+-static int dn_neigh_seq_show(struct seq_file *seq, void *v)
+-{
+-	if (v == SEQ_START_TOKEN) {
+-		seq_puts(seq, "Addr    Flags State Use Blksize Dev\n");
+-	} else {
+-		dn_neigh_format_entry(seq, v);
+-	}
+-
+-	return 0;
+-}
+-
+-static void *dn_neigh_seq_start(struct seq_file *seq, loff_t *pos)
+-{
+-	return neigh_seq_start(seq, pos, &dn_neigh_table,
+-			       NEIGH_SEQ_NEIGH_ONLY);
+-}
+-
+-static const struct seq_operations dn_neigh_seq_ops = {
+-	.start = dn_neigh_seq_start,
+-	.next  = neigh_seq_next,
+-	.stop  = neigh_seq_stop,
+-	.show  = dn_neigh_seq_show,
+-};
+-#endif
+-
+-void __init dn_neigh_init(void)
+-{
+-	neigh_table_init(NEIGH_DN_TABLE, &dn_neigh_table);
+-	proc_create_net("decnet_neigh", 0444, init_net.proc_net,
+-			&dn_neigh_seq_ops, sizeof(struct neigh_seq_state));
+-}
+-
+-void __exit dn_neigh_cleanup(void)
+-{
+-	remove_proc_entry("decnet_neigh", init_net.proc_net);
+-	neigh_table_clear(NEIGH_DN_TABLE, &dn_neigh_table);
+-}
+diff --git a/net/decnet/dn_nsp_in.c b/net/decnet/dn_nsp_in.c
+deleted file mode 100644
+index c97bdca5ec30f..0000000000000
+--- a/net/decnet/dn_nsp_in.c
++++ /dev/null
+@@ -1,906 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-or-later
+-/*
+- * DECnet       An implementation of the DECnet protocol suite for the LINUX
+- *              operating system.  DECnet is implemented using the  BSD Socket
+- *              interface as the means of communication with the user level.
+- *
+- *              DECnet Network Services Protocol (Input)
+- *
+- * Author:      Eduardo Marcelo Serrat <emserrat@geocities.com>
+- *
+- * Changes:
+- *
+- *    Steve Whitehouse:  Split into dn_nsp_in.c and dn_nsp_out.c from
+- *                       original dn_nsp.c.
+- *    Steve Whitehouse:  Updated to work with my new routing architecture.
+- *    Steve Whitehouse:  Add changes from Eduardo Serrat's patches.
+- *    Steve Whitehouse:  Put all ack handling code in a common routine.
+- *    Steve Whitehouse:  Put other common bits into dn_nsp_rx()
+- *    Steve Whitehouse:  More checks on skb->len to catch bogus packets
+- *                       Fixed various race conditions and possible nasties.
+- *    Steve Whitehouse:  Now handles returned conninit frames.
+- *     David S. Miller:  New socket locking
+- *    Steve Whitehouse:  Fixed lockup when socket filtering was enabled.
+- *         Paul Koning:  Fix to push CC sockets into RUN when acks are
+- *                       received.
+- *    Steve Whitehouse:
+- *   Patrick Caulfield:  Checking conninits for correctness & sending of error
+- *                       responses.
+- *    Steve Whitehouse:  Added backlog congestion level return codes.
+- *   Patrick Caulfield:
+- *    Steve Whitehouse:  Added flow control support (outbound)
+- *    Steve Whitehouse:  Prepare for nonlinear skbs
+- */
+-
+-/******************************************************************************
+-    (c) 1995-1998 E.M. Serrat		emserrat@geocities.com
+-
+-*******************************************************************************/
+-
+-#include <linux/errno.h>
+-#include <linux/types.h>
+-#include <linux/socket.h>
+-#include <linux/in.h>
+-#include <linux/kernel.h>
+-#include <linux/timer.h>
+-#include <linux/string.h>
+-#include <linux/sockios.h>
+-#include <linux/net.h>
+-#include <linux/netdevice.h>
+-#include <linux/inet.h>
+-#include <linux/route.h>
+-#include <linux/slab.h>
+-#include <net/sock.h>
+-#include <net/tcp_states.h>
+-#include <linux/fcntl.h>
+-#include <linux/mm.h>
+-#include <linux/termios.h>
+-#include <linux/interrupt.h>
+-#include <linux/proc_fs.h>
+-#include <linux/stat.h>
+-#include <linux/init.h>
+-#include <linux/poll.h>
+-#include <linux/netfilter_decnet.h>
+-#include <net/neighbour.h>
+-#include <net/dst.h>
+-#include <net/dn.h>
+-#include <net/dn_nsp.h>
+-#include <net/dn_dev.h>
+-#include <net/dn_route.h>
+-
+-extern int decnet_log_martians;
+-
+-static void dn_log_martian(struct sk_buff *skb, const char *msg)
+-{
+-	if (decnet_log_martians) {
+-		char *devname = skb->dev ? skb->dev->name : "???";
+-		struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-		net_info_ratelimited("DECnet: Martian packet (%s) dev=%s src=0x%04hx dst=0x%04hx srcport=0x%04hx dstport=0x%04hx\n",
+-				     msg, devname,
+-				     le16_to_cpu(cb->src),
+-				     le16_to_cpu(cb->dst),
+-				     le16_to_cpu(cb->src_port),
+-				     le16_to_cpu(cb->dst_port));
+-	}
+-}
+-
+-/*
+- * For this function we've flipped the cross-subchannel bit
+- * if the message is an otherdata or linkservice message. Thus
+- * we can use it to work out what to update.
+- */
+-static void dn_ack(struct sock *sk, struct sk_buff *skb, unsigned short ack)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	unsigned short type = ((ack >> 12) & 0x0003);
+-	int wakeup = 0;
+-
+-	switch (type) {
+-	case 0: /* ACK - Data */
+-		if (dn_after(ack, scp->ackrcv_dat)) {
+-			scp->ackrcv_dat = ack & 0x0fff;
+-			wakeup |= dn_nsp_check_xmit_queue(sk, skb,
+-							  &scp->data_xmit_queue,
+-							  ack);
+-		}
+-		break;
+-	case 1: /* NAK - Data */
+-		break;
+-	case 2: /* ACK - OtherData */
+-		if (dn_after(ack, scp->ackrcv_oth)) {
+-			scp->ackrcv_oth = ack & 0x0fff;
+-			wakeup |= dn_nsp_check_xmit_queue(sk, skb,
+-							  &scp->other_xmit_queue,
+-							  ack);
+-		}
+-		break;
+-	case 3: /* NAK - OtherData */
+-		break;
+-	}
+-
+-	if (wakeup && !sock_flag(sk, SOCK_DEAD))
+-		sk->sk_state_change(sk);
+-}
+-
+-/*
+- * This function is a universal ack processor.
+- */
+-static int dn_process_ack(struct sock *sk, struct sk_buff *skb, int oth)
+-{
+-	__le16 *ptr = (__le16 *)skb->data;
+-	int len = 0;
+-	unsigned short ack;
+-
+-	if (skb->len < 2)
+-		return len;
+-
+-	if ((ack = le16_to_cpu(*ptr)) & 0x8000) {
+-		skb_pull(skb, 2);
+-		ptr++;
+-		len += 2;
+-		if ((ack & 0x4000) == 0) {
+-			if (oth)
+-				ack ^= 0x2000;
+-			dn_ack(sk, skb, ack);
+-		}
+-	}
+-
+-	if (skb->len < 2)
+-		return len;
+-
+-	if ((ack = le16_to_cpu(*ptr)) & 0x8000) {
+-		skb_pull(skb, 2);
+-		len += 2;
+-		if ((ack & 0x4000) == 0) {
+-			if (oth)
+-				ack ^= 0x2000;
+-			dn_ack(sk, skb, ack);
+-		}
+-	}
+-
+-	return len;
+-}
+-
+-
+-/**
+- * dn_check_idf - Check an image data field format is correct.
+- * @pptr: Pointer to pointer to image data
+- * @len: Pointer to length of image data
+- * @max: The maximum allowed length of the data in the image data field
+- * @follow_on: Check that this many bytes exist beyond the end of the image data
+- *
+- * Returns: 0 if ok, -1 on error
+- */
+-static inline int dn_check_idf(unsigned char **pptr, int *len, unsigned char max, unsigned char follow_on)
+-{
+-	unsigned char *ptr = *pptr;
+-	unsigned char flen = *ptr++;
+-
+-	(*len)--;
+-	if (flen > max)
+-		return -1;
+-	if ((flen + follow_on) > *len)
+-		return -1;
+-
+-	*len -= flen;
+-	*pptr = ptr + flen;
+-	return 0;
+-}
+-
+-/*
+- * Table of reason codes to pass back to node which sent us a badly
+- * formed message, plus text messages for the log. A zero entry in
+- * the reason field means "don't reply" otherwise a disc init is sent with
+- * the specified reason code.
+- */
+-static struct {
+-	unsigned short reason;
+-	const char *text;
+-} ci_err_table[] = {
+- { 0,             "CI: Truncated message" },
+- { NSP_REASON_ID, "CI: Destination username error" },
+- { NSP_REASON_ID, "CI: Destination username type" },
+- { NSP_REASON_US, "CI: Source username error" },
+- { 0,             "CI: Truncated at menuver" },
+- { 0,             "CI: Truncated before access or user data" },
+- { NSP_REASON_IO, "CI: Access data format error" },
+- { NSP_REASON_IO, "CI: User data format error" }
+-};
+-
+-/*
+- * This function uses a slightly different lookup method
+- * to find its sockets, since it searches on object name/number
+- * rather than port numbers. Various tests are done to ensure that
+- * the incoming data is in the correct format before it is queued to
+- * a socket.
+- */
+-static struct sock *dn_find_listener(struct sk_buff *skb, unsigned short *reason)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	struct nsp_conn_init_msg *msg = (struct nsp_conn_init_msg *)skb->data;
+-	struct sockaddr_dn dstaddr;
+-	struct sockaddr_dn srcaddr;
+-	unsigned char type = 0;
+-	int dstlen;
+-	int srclen;
+-	unsigned char *ptr;
+-	int len;
+-	int err = 0;
+-	unsigned char menuver;
+-
+-	memset(&dstaddr, 0, sizeof(struct sockaddr_dn));
+-	memset(&srcaddr, 0, sizeof(struct sockaddr_dn));
+-
+-	/*
+-	 * 1. Decode & remove message header
+-	 */
+-	cb->src_port = msg->srcaddr;
+-	cb->dst_port = msg->dstaddr;
+-	cb->services = msg->services;
+-	cb->info     = msg->info;
+-	cb->segsize  = le16_to_cpu(msg->segsize);
+-
+-	if (!pskb_may_pull(skb, sizeof(*msg)))
+-		goto err_out;
+-
+-	skb_pull(skb, sizeof(*msg));
+-
+-	len = skb->len;
+-	ptr = skb->data;
+-
+-	/*
+-	 * 2. Check destination end username format
+-	 */
+-	dstlen = dn_username2sockaddr(ptr, len, &dstaddr, &type);
+-	err++;
+-	if (dstlen < 0)
+-		goto err_out;
+-
+-	err++;
+-	if (type > 1)
+-		goto err_out;
+-
+-	len -= dstlen;
+-	ptr += dstlen;
+-
+-	/*
+-	 * 3. Check source end username format
+-	 */
+-	srclen = dn_username2sockaddr(ptr, len, &srcaddr, &type);
+-	err++;
+-	if (srclen < 0)
+-		goto err_out;
+-
+-	len -= srclen;
+-	ptr += srclen;
+-	err++;
+-	if (len < 1)
+-		goto err_out;
+-
+-	menuver = *ptr;
+-	ptr++;
+-	len--;
+-
+-	/*
+-	 * 4. Check that optional data actually exists if menuver says it does
+-	 */
+-	err++;
+-	if ((menuver & (DN_MENUVER_ACC | DN_MENUVER_USR)) && (len < 1))
+-		goto err_out;
+-
+-	/*
+-	 * 5. Check optional access data format
+-	 */
+-	err++;
+-	if (menuver & DN_MENUVER_ACC) {
+-		if (dn_check_idf(&ptr, &len, 39, 1))
+-			goto err_out;
+-		if (dn_check_idf(&ptr, &len, 39, 1))
+-			goto err_out;
+-		if (dn_check_idf(&ptr, &len, 39, (menuver & DN_MENUVER_USR) ? 1 : 0))
+-			goto err_out;
+-	}
+-
+-	/*
+-	 * 6. Check optional user data format
+-	 */
+-	err++;
+-	if (menuver & DN_MENUVER_USR) {
+-		if (dn_check_idf(&ptr, &len, 16, 0))
+-			goto err_out;
+-	}
+-
+-	/*
+-	 * 7. Look up socket based on destination end username
+-	 */
+-	return dn_sklist_find_listener(&dstaddr);
+-err_out:
+-	dn_log_martian(skb, ci_err_table[err].text);
+-	*reason = ci_err_table[err].reason;
+-	return NULL;
+-}
+-
+-
+-static void dn_nsp_conn_init(struct sock *sk, struct sk_buff *skb)
+-{
+-	if (sk_acceptq_is_full(sk)) {
+-		kfree_skb(skb);
+-		return;
+-	}
+-
+-	sk_acceptq_added(sk);
+-	skb_queue_tail(&sk->sk_receive_queue, skb);
+-	sk->sk_state_change(sk);
+-}
+-
+-static void dn_nsp_conn_conf(struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	struct dn_scp *scp = DN_SK(sk);
+-	unsigned char *ptr;
+-
+-	if (skb->len < 4)
+-		goto out;
+-
+-	ptr = skb->data;
+-	cb->services = *ptr++;
+-	cb->info = *ptr++;
+-	cb->segsize = le16_to_cpu(*(__le16 *)ptr);
+-
+-	if ((scp->state == DN_CI) || (scp->state == DN_CD)) {
+-		scp->persist = 0;
+-		scp->addrrem = cb->src_port;
+-		sk->sk_state = TCP_ESTABLISHED;
+-		scp->state = DN_RUN;
+-		scp->services_rem = cb->services;
+-		scp->info_rem = cb->info;
+-		scp->segsize_rem = cb->segsize;
+-
+-		if ((scp->services_rem & NSP_FC_MASK) == NSP_FC_NONE)
+-			scp->max_window = decnet_no_fc_max_cwnd;
+-
+-		if (skb->len > 0) {
+-			u16 dlen = *skb->data;
+-			if ((dlen <= 16) && (dlen <= skb->len)) {
+-				scp->conndata_in.opt_optl = cpu_to_le16(dlen);
+-				skb_copy_from_linear_data_offset(skb, 1,
+-					      scp->conndata_in.opt_data, dlen);
+-			}
+-		}
+-		dn_nsp_send_link(sk, DN_NOCHANGE, 0);
+-		if (!sock_flag(sk, SOCK_DEAD))
+-			sk->sk_state_change(sk);
+-	}
+-
+-out:
+-	kfree_skb(skb);
+-}
+-
+-static void dn_nsp_conn_ack(struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	if (scp->state == DN_CI) {
+-		scp->state = DN_CD;
+-		scp->persist = 0;
+-	}
+-
+-	kfree_skb(skb);
+-}
+-
+-static void dn_nsp_disc_init(struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	unsigned short reason;
+-
+-	if (skb->len < 2)
+-		goto out;
+-
+-	reason = le16_to_cpu(*(__le16 *)skb->data);
+-	skb_pull(skb, 2);
+-
+-	scp->discdata_in.opt_status = cpu_to_le16(reason);
+-	scp->discdata_in.opt_optl   = 0;
+-	memset(scp->discdata_in.opt_data, 0, 16);
+-
+-	if (skb->len > 0) {
+-		u16 dlen = *skb->data;
+-		if ((dlen <= 16) && (dlen <= skb->len)) {
+-			scp->discdata_in.opt_optl = cpu_to_le16(dlen);
+-			skb_copy_from_linear_data_offset(skb, 1, scp->discdata_in.opt_data, dlen);
+-		}
+-	}
+-
+-	scp->addrrem = cb->src_port;
+-	sk->sk_state = TCP_CLOSE;
+-
+-	switch (scp->state) {
+-	case DN_CI:
+-	case DN_CD:
+-		scp->state = DN_RJ;
+-		sk->sk_err = ECONNREFUSED;
+-		break;
+-	case DN_RUN:
+-		sk->sk_shutdown |= SHUTDOWN_MASK;
+-		scp->state = DN_DN;
+-		break;
+-	case DN_DI:
+-		scp->state = DN_DIC;
+-		break;
+-	}
+-
+-	if (!sock_flag(sk, SOCK_DEAD)) {
+-		if (sk->sk_socket->state != SS_UNCONNECTED)
+-			sk->sk_socket->state = SS_DISCONNECTING;
+-		sk->sk_state_change(sk);
+-	}
+-
+-	/*
+-	 * It appears that its possible for remote machines to send disc
+-	 * init messages with no port identifier if we are in the CI and
+-	 * possibly also the CD state. Obviously we shouldn't reply with
+-	 * a message if we don't know what the end point is.
+-	 */
+-	if (scp->addrrem) {
+-		dn_nsp_send_disc(sk, NSP_DISCCONF, NSP_REASON_DC, GFP_ATOMIC);
+-	}
+-	scp->persist_fxn = dn_destroy_timer;
+-	scp->persist = dn_nsp_persist(sk);
+-
+-out:
+-	kfree_skb(skb);
+-}
+-
+-/*
+- * disc_conf messages are also called no_resources or no_link
+- * messages depending upon the "reason" field.
+- */
+-static void dn_nsp_disc_conf(struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	unsigned short reason;
+-
+-	if (skb->len != 2)
+-		goto out;
+-
+-	reason = le16_to_cpu(*(__le16 *)skb->data);
+-
+-	sk->sk_state = TCP_CLOSE;
+-
+-	switch (scp->state) {
+-	case DN_CI:
+-		scp->state = DN_NR;
+-		break;
+-	case DN_DR:
+-		if (reason == NSP_REASON_DC)
+-			scp->state = DN_DRC;
+-		if (reason == NSP_REASON_NL)
+-			scp->state = DN_CN;
+-		break;
+-	case DN_DI:
+-		scp->state = DN_DIC;
+-		break;
+-	case DN_RUN:
+-		sk->sk_shutdown |= SHUTDOWN_MASK;
+-		fallthrough;
+-	case DN_CC:
+-		scp->state = DN_CN;
+-	}
+-
+-	if (!sock_flag(sk, SOCK_DEAD)) {
+-		if (sk->sk_socket->state != SS_UNCONNECTED)
+-			sk->sk_socket->state = SS_DISCONNECTING;
+-		sk->sk_state_change(sk);
+-	}
+-
+-	scp->persist_fxn = dn_destroy_timer;
+-	scp->persist = dn_nsp_persist(sk);
+-
+-out:
+-	kfree_skb(skb);
+-}
+-
+-static void dn_nsp_linkservice(struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	unsigned short segnum;
+-	unsigned char lsflags;
+-	signed char fcval;
+-	int wake_up = 0;
+-	char *ptr = skb->data;
+-	unsigned char fctype = scp->services_rem & NSP_FC_MASK;
+-
+-	if (skb->len != 4)
+-		goto out;
+-
+-	segnum = le16_to_cpu(*(__le16 *)ptr);
+-	ptr += 2;
+-	lsflags = *(unsigned char *)ptr++;
+-	fcval = *ptr;
+-
+-	/*
+-	 * Here we ignore erronous packets which should really
+-	 * should cause a connection abort. It is not critical
+-	 * for now though.
+-	 */
+-	if (lsflags & 0xf8)
+-		goto out;
+-
+-	if (seq_next(scp->numoth_rcv, segnum)) {
+-		seq_add(&scp->numoth_rcv, 1);
+-		switch(lsflags & 0x04) { /* FCVAL INT */
+-		case 0x00: /* Normal Request */
+-			switch(lsflags & 0x03) { /* FCVAL MOD */
+-			case 0x00: /* Request count */
+-				if (fcval < 0) {
+-					unsigned char p_fcval = -fcval;
+-					if ((scp->flowrem_dat > p_fcval) &&
+-					    (fctype == NSP_FC_SCMC)) {
+-						scp->flowrem_dat -= p_fcval;
+-					}
+-				} else if (fcval > 0) {
+-					scp->flowrem_dat += fcval;
+-					wake_up = 1;
+-				}
+-				break;
+-			case 0x01: /* Stop outgoing data */
+-				scp->flowrem_sw = DN_DONTSEND;
+-				break;
+-			case 0x02: /* Ok to start again */
+-				scp->flowrem_sw = DN_SEND;
+-				dn_nsp_output(sk);
+-				wake_up = 1;
+-			}
+-			break;
+-		case 0x04: /* Interrupt Request */
+-			if (fcval > 0) {
+-				scp->flowrem_oth += fcval;
+-				wake_up = 1;
+-			}
+-			break;
+-		}
+-		if (wake_up && !sock_flag(sk, SOCK_DEAD))
+-			sk->sk_state_change(sk);
+-	}
+-
+-	dn_nsp_send_oth_ack(sk);
+-
+-out:
+-	kfree_skb(skb);
+-}
+-
+-/*
+- * Copy of sock_queue_rcv_skb (from sock.h) without
+- * bh_lock_sock() (its already held when this is called) which
+- * also allows data and other data to be queued to a socket.
+- */
+-static __inline__ int dn_queue_skb(struct sock *sk, struct sk_buff *skb, int sig, struct sk_buff_head *queue)
+-{
+-	int err;
+-
+-	/* Cast skb->rcvbuf to unsigned... It's pointless, but reduces
+-	   number of warnings when compiling with -W --ANK
+-	 */
+-	if (atomic_read(&sk->sk_rmem_alloc) + skb->truesize >=
+-	    (unsigned int)sk->sk_rcvbuf) {
+-		err = -ENOMEM;
+-		goto out;
+-	}
+-
+-	err = sk_filter(sk, skb);
+-	if (err)
+-		goto out;
+-
+-	skb_set_owner_r(skb, sk);
+-	skb_queue_tail(queue, skb);
+-
+-	if (!sock_flag(sk, SOCK_DEAD))
+-		sk->sk_data_ready(sk);
+-out:
+-	return err;
+-}
+-
+-static void dn_nsp_otherdata(struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	unsigned short segnum;
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	int queued = 0;
+-
+-	if (skb->len < 2)
+-		goto out;
+-
+-	cb->segnum = segnum = le16_to_cpu(*(__le16 *)skb->data);
+-	skb_pull(skb, 2);
+-
+-	if (seq_next(scp->numoth_rcv, segnum)) {
+-
+-		if (dn_queue_skb(sk, skb, SIGURG, &scp->other_receive_queue) == 0) {
+-			seq_add(&scp->numoth_rcv, 1);
+-			scp->other_report = 0;
+-			queued = 1;
+-		}
+-	}
+-
+-	dn_nsp_send_oth_ack(sk);
+-out:
+-	if (!queued)
+-		kfree_skb(skb);
+-}
+-
+-static void dn_nsp_data(struct sock *sk, struct sk_buff *skb)
+-{
+-	int queued = 0;
+-	unsigned short segnum;
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	if (skb->len < 2)
+-		goto out;
+-
+-	cb->segnum = segnum = le16_to_cpu(*(__le16 *)skb->data);
+-	skb_pull(skb, 2);
+-
+-	if (seq_next(scp->numdat_rcv, segnum)) {
+-		if (dn_queue_skb(sk, skb, SIGIO, &sk->sk_receive_queue) == 0) {
+-			seq_add(&scp->numdat_rcv, 1);
+-			queued = 1;
+-		}
+-
+-		if ((scp->flowloc_sw == DN_SEND) && dn_congested(sk)) {
+-			scp->flowloc_sw = DN_DONTSEND;
+-			dn_nsp_send_link(sk, DN_DONTSEND, 0);
+-		}
+-	}
+-
+-	dn_nsp_send_data_ack(sk);
+-out:
+-	if (!queued)
+-		kfree_skb(skb);
+-}
+-
+-/*
+- * If one of our conninit messages is returned, this function
+- * deals with it. It puts the socket into the NO_COMMUNICATION
+- * state.
+- */
+-static void dn_returned_conn_init(struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	if (scp->state == DN_CI) {
+-		scp->state = DN_NC;
+-		sk->sk_state = TCP_CLOSE;
+-		if (!sock_flag(sk, SOCK_DEAD))
+-			sk->sk_state_change(sk);
+-	}
+-
+-	kfree_skb(skb);
+-}
+-
+-static int dn_nsp_no_socket(struct sk_buff *skb, unsigned short reason)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	int ret = NET_RX_DROP;
+-
+-	/* Must not reply to returned packets */
+-	if (cb->rt_flags & DN_RT_F_RTS)
+-		goto out;
+-
+-	if ((reason != NSP_REASON_OK) && ((cb->nsp_flags & 0x0c) == 0x08)) {
+-		switch (cb->nsp_flags & 0x70) {
+-		case 0x10:
+-		case 0x60: /* (Retransmitted) Connect Init */
+-			dn_nsp_return_disc(skb, NSP_DISCINIT, reason);
+-			ret = NET_RX_SUCCESS;
+-			break;
+-		case 0x20: /* Connect Confirm */
+-			dn_nsp_return_disc(skb, NSP_DISCCONF, reason);
+-			ret = NET_RX_SUCCESS;
+-			break;
+-		}
+-	}
+-
+-out:
+-	kfree_skb(skb);
+-	return ret;
+-}
+-
+-static int dn_nsp_rx_packet(struct net *net, struct sock *sk2,
+-			    struct sk_buff *skb)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	struct sock *sk = NULL;
+-	unsigned char *ptr = (unsigned char *)skb->data;
+-	unsigned short reason = NSP_REASON_NL;
+-
+-	if (!pskb_may_pull(skb, 2))
+-		goto free_out;
+-
+-	skb_reset_transport_header(skb);
+-	cb->nsp_flags = *ptr++;
+-
+-	if (decnet_debug_level & 2)
+-		printk(KERN_DEBUG "dn_nsp_rx: Message type 0x%02x\n", (int)cb->nsp_flags);
+-
+-	if (cb->nsp_flags & 0x83)
+-		goto free_out;
+-
+-	/*
+-	 * Filter out conninits and useless packet types
+-	 */
+-	if ((cb->nsp_flags & 0x0c) == 0x08) {
+-		switch (cb->nsp_flags & 0x70) {
+-		case 0x00: /* NOP */
+-		case 0x70: /* Reserved */
+-		case 0x50: /* Reserved, Phase II node init */
+-			goto free_out;
+-		case 0x10:
+-		case 0x60:
+-			if (unlikely(cb->rt_flags & DN_RT_F_RTS))
+-				goto free_out;
+-			sk = dn_find_listener(skb, &reason);
+-			goto got_it;
+-		}
+-	}
+-
+-	if (!pskb_may_pull(skb, 3))
+-		goto free_out;
+-
+-	/*
+-	 * Grab the destination address.
+-	 */
+-	cb->dst_port = *(__le16 *)ptr;
+-	cb->src_port = 0;
+-	ptr += 2;
+-
+-	/*
+-	 * If not a connack, grab the source address too.
+-	 */
+-	if (pskb_may_pull(skb, 5)) {
+-		cb->src_port = *(__le16 *)ptr;
+-		ptr += 2;
+-		skb_pull(skb, 5);
+-	}
+-
+-	/*
+-	 * Returned packets...
+-	 * Swap src & dst and look up in the normal way.
+-	 */
+-	if (unlikely(cb->rt_flags & DN_RT_F_RTS)) {
+-		swap(cb->dst_port, cb->src_port);
+-		swap(cb->dst, cb->src);
+-	}
+-
+-	/*
+-	 * Find the socket to which this skb is destined.
+-	 */
+-	sk = dn_find_by_skb(skb);
+-got_it:
+-	if (sk != NULL) {
+-		struct dn_scp *scp = DN_SK(sk);
+-
+-		/* Reset backoff */
+-		scp->nsp_rxtshift = 0;
+-
+-		/*
+-		 * We linearize everything except data segments here.
+-		 */
+-		if (cb->nsp_flags & ~0x60) {
+-			if (unlikely(skb_linearize(skb)))
+-				goto free_out;
+-		}
+-
+-		return sk_receive_skb(sk, skb, 0);
+-	}
+-
+-	return dn_nsp_no_socket(skb, reason);
+-
+-free_out:
+-	kfree_skb(skb);
+-	return NET_RX_DROP;
+-}
+-
+-int dn_nsp_rx(struct sk_buff *skb)
+-{
+-	return NF_HOOK(NFPROTO_DECNET, NF_DN_LOCAL_IN,
+-		       &init_net, NULL, skb, skb->dev, NULL,
+-		       dn_nsp_rx_packet);
+-}
+-
+-/*
+- * This is the main receive routine for sockets. It is called
+- * from the above when the socket is not busy, and also from
+- * sock_release() when there is a backlog queued up.
+- */
+-int dn_nsp_backlog_rcv(struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-
+-	if (cb->rt_flags & DN_RT_F_RTS) {
+-		if (cb->nsp_flags == 0x18 || cb->nsp_flags == 0x68)
+-			dn_returned_conn_init(sk, skb);
+-		else
+-			kfree_skb(skb);
+-		return NET_RX_SUCCESS;
+-	}
+-
+-	/*
+-	 * Control packet.
+-	 */
+-	if ((cb->nsp_flags & 0x0c) == 0x08) {
+-		switch (cb->nsp_flags & 0x70) {
+-		case 0x10:
+-		case 0x60:
+-			dn_nsp_conn_init(sk, skb);
+-			break;
+-		case 0x20:
+-			dn_nsp_conn_conf(sk, skb);
+-			break;
+-		case 0x30:
+-			dn_nsp_disc_init(sk, skb);
+-			break;
+-		case 0x40:
+-			dn_nsp_disc_conf(sk, skb);
+-			break;
+-		}
+-
+-	} else if (cb->nsp_flags == 0x24) {
+-		/*
+-		 * Special for connacks, 'cos they don't have
+-		 * ack data or ack otherdata info.
+-		 */
+-		dn_nsp_conn_ack(sk, skb);
+-	} else {
+-		int other = 1;
+-
+-		/* both data and ack frames can kick a CC socket into RUN */
+-		if ((scp->state == DN_CC) && !sock_flag(sk, SOCK_DEAD)) {
+-			scp->state = DN_RUN;
+-			sk->sk_state = TCP_ESTABLISHED;
+-			sk->sk_state_change(sk);
+-		}
+-
+-		if ((cb->nsp_flags & 0x1c) == 0)
+-			other = 0;
+-		if (cb->nsp_flags == 0x04)
+-			other = 0;
+-
+-		/*
+-		 * Read out ack data here, this applies equally
+-		 * to data, other data, link serivce and both
+-		 * ack data and ack otherdata.
+-		 */
+-		dn_process_ack(sk, skb, other);
+-
+-		/*
+-		 * If we've some sort of data here then call a
+-		 * suitable routine for dealing with it, otherwise
+-		 * the packet is an ack and can be discarded.
+-		 */
+-		if ((cb->nsp_flags & 0x0c) == 0) {
+-
+-			if (scp->state != DN_RUN)
+-				goto free_out;
+-
+-			switch (cb->nsp_flags) {
+-			case 0x10: /* LS */
+-				dn_nsp_linkservice(sk, skb);
+-				break;
+-			case 0x30: /* OD */
+-				dn_nsp_otherdata(sk, skb);
+-				break;
+-			default:
+-				dn_nsp_data(sk, skb);
+-			}
+-
+-		} else { /* Ack, chuck it out here */
+-free_out:
+-			kfree_skb(skb);
+-		}
+-	}
+-
+-	return NET_RX_SUCCESS;
+-}
+diff --git a/net/decnet/dn_nsp_out.c b/net/decnet/dn_nsp_out.c
+deleted file mode 100644
+index 00f2ed721ec13..0000000000000
+--- a/net/decnet/dn_nsp_out.c
++++ /dev/null
+@@ -1,695 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-or-later
+-/*
+- * DECnet       An implementation of the DECnet protocol suite for the LINUX
+- *              operating system.  DECnet is implemented using the  BSD Socket
+- *              interface as the means of communication with the user level.
+- *
+- *              DECnet Network Services Protocol (Output)
+- *
+- * Author:      Eduardo Marcelo Serrat <emserrat@geocities.com>
+- *
+- * Changes:
+- *
+- *    Steve Whitehouse:  Split into dn_nsp_in.c and dn_nsp_out.c from
+- *                       original dn_nsp.c.
+- *    Steve Whitehouse:  Updated to work with my new routing architecture.
+- *    Steve Whitehouse:  Added changes from Eduardo Serrat's patches.
+- *    Steve Whitehouse:  Now conninits have the "return" bit set.
+- *    Steve Whitehouse:  Fixes to check alloc'd skbs are non NULL!
+- *                       Moved output state machine into one function
+- *    Steve Whitehouse:  New output state machine
+- *         Paul Koning:  Connect Confirm message fix.
+- *      Eduardo Serrat:  Fix to stop dn_nsp_do_disc() sending malformed packets.
+- *    Steve Whitehouse:  dn_nsp_output() and friends needed a spring clean
+- *    Steve Whitehouse:  Moved dn_nsp_send() in here from route.h
+- */
+-
+-/******************************************************************************
+-    (c) 1995-1998 E.M. Serrat		emserrat@geocities.com
+-
+-*******************************************************************************/
+-
+-#include <linux/errno.h>
+-#include <linux/types.h>
+-#include <linux/socket.h>
+-#include <linux/in.h>
+-#include <linux/kernel.h>
+-#include <linux/timer.h>
+-#include <linux/string.h>
+-#include <linux/sockios.h>
+-#include <linux/net.h>
+-#include <linux/netdevice.h>
+-#include <linux/inet.h>
+-#include <linux/route.h>
+-#include <linux/slab.h>
+-#include <net/sock.h>
+-#include <linux/fcntl.h>
+-#include <linux/mm.h>
+-#include <linux/termios.h>
+-#include <linux/interrupt.h>
+-#include <linux/proc_fs.h>
+-#include <linux/stat.h>
+-#include <linux/init.h>
+-#include <linux/poll.h>
+-#include <linux/if_packet.h>
+-#include <net/neighbour.h>
+-#include <net/dst.h>
+-#include <net/flow.h>
+-#include <net/dn.h>
+-#include <net/dn_nsp.h>
+-#include <net/dn_dev.h>
+-#include <net/dn_route.h>
+-
+-
+-static int nsp_backoff[NSP_MAXRXTSHIFT + 1] = { 1, 2, 4, 8, 16, 32, 64, 64, 64, 64, 64, 64, 64 };
+-
+-static void dn_nsp_send(struct sk_buff *skb)
+-{
+-	struct sock *sk = skb->sk;
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct dst_entry *dst;
+-	struct flowidn fld;
+-
+-	skb_reset_transport_header(skb);
+-	scp->stamp = jiffies;
+-
+-	dst = sk_dst_check(sk, 0);
+-	if (dst) {
+-try_again:
+-		skb_dst_set(skb, dst);
+-		dst_output(&init_net, skb->sk, skb);
+-		return;
+-	}
+-
+-	memset(&fld, 0, sizeof(fld));
+-	fld.flowidn_oif = sk->sk_bound_dev_if;
+-	fld.saddr = dn_saddr2dn(&scp->addr);
+-	fld.daddr = dn_saddr2dn(&scp->peer);
+-	dn_sk_ports_copy(&fld, scp);
+-	fld.flowidn_proto = DNPROTO_NSP;
+-	if (dn_route_output_sock(&sk->sk_dst_cache, &fld, sk, 0) == 0) {
+-		dst = sk_dst_get(sk);
+-		sk->sk_route_caps = dst->dev->features;
+-		goto try_again;
+-	}
+-
+-	sk->sk_err = EHOSTUNREACH;
+-	if (!sock_flag(sk, SOCK_DEAD))
+-		sk->sk_state_change(sk);
+-}
+-
+-
+-/*
+- * If sk == NULL, then we assume that we are supposed to be making
+- * a routing layer skb. If sk != NULL, then we are supposed to be
+- * creating an skb for the NSP layer.
+- *
+- * The eventual aim is for each socket to have a cached header size
+- * for its outgoing packets, and to set hdr from this when sk != NULL.
+- */
+-struct sk_buff *dn_alloc_skb(struct sock *sk, int size, gfp_t pri)
+-{
+-	struct sk_buff *skb;
+-	int hdr = 64;
+-
+-	if ((skb = alloc_skb(size + hdr, pri)) == NULL)
+-		return NULL;
+-
+-	skb->protocol = htons(ETH_P_DNA_RT);
+-	skb->pkt_type = PACKET_OUTGOING;
+-
+-	if (sk)
+-		skb_set_owner_w(skb, sk);
+-
+-	skb_reserve(skb, hdr);
+-
+-	return skb;
+-}
+-
+-/*
+- * Calculate persist timer based upon the smoothed round
+- * trip time and the variance. Backoff according to the
+- * nsp_backoff[] array.
+- */
+-unsigned long dn_nsp_persist(struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	unsigned long t = ((scp->nsp_srtt >> 2) + scp->nsp_rttvar) >> 1;
+-
+-	t *= nsp_backoff[scp->nsp_rxtshift];
+-
+-	if (t < HZ) t = HZ;
+-	if (t > (600*HZ)) t = (600*HZ);
+-
+-	if (scp->nsp_rxtshift < NSP_MAXRXTSHIFT)
+-		scp->nsp_rxtshift++;
+-
+-	/* printk(KERN_DEBUG "rxtshift %lu, t=%lu\n", scp->nsp_rxtshift, t); */
+-
+-	return t;
+-}
+-
+-/*
+- * This is called each time we get an estimate for the rtt
+- * on the link.
+- */
+-static void dn_nsp_rtt(struct sock *sk, long rtt)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	long srtt = (long)scp->nsp_srtt;
+-	long rttvar = (long)scp->nsp_rttvar;
+-	long delta;
+-
+-	/*
+-	 * If the jiffies clock flips over in the middle of timestamp
+-	 * gathering this value might turn out negative, so we make sure
+-	 * that is it always positive here.
+-	 */
+-	if (rtt < 0)
+-		rtt = -rtt;
+-	/*
+-	 * Add new rtt to smoothed average
+-	 */
+-	delta = ((rtt << 3) - srtt);
+-	srtt += (delta >> 3);
+-	if (srtt >= 1)
+-		scp->nsp_srtt = (unsigned long)srtt;
+-	else
+-		scp->nsp_srtt = 1;
+-
+-	/*
+-	 * Add new rtt varience to smoothed varience
+-	 */
+-	delta >>= 1;
+-	rttvar += ((((delta>0)?(delta):(-delta)) - rttvar) >> 2);
+-	if (rttvar >= 1)
+-		scp->nsp_rttvar = (unsigned long)rttvar;
+-	else
+-		scp->nsp_rttvar = 1;
+-
+-	/* printk(KERN_DEBUG "srtt=%lu rttvar=%lu\n", scp->nsp_srtt, scp->nsp_rttvar); */
+-}
+-
+-/**
+- * dn_nsp_clone_and_send - Send a data packet by cloning it
+- * @skb: The packet to clone and transmit
+- * @gfp: memory allocation flag
+- *
+- * Clone a queued data or other data packet and transmit it.
+- *
+- * Returns: The number of times the packet has been sent previously
+- */
+-static inline unsigned int dn_nsp_clone_and_send(struct sk_buff *skb,
+-					     gfp_t gfp)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	struct sk_buff *skb2;
+-	int ret = 0;
+-
+-	if ((skb2 = skb_clone(skb, gfp)) != NULL) {
+-		ret = cb->xmit_count;
+-		cb->xmit_count++;
+-		cb->stamp = jiffies;
+-		skb2->sk = skb->sk;
+-		dn_nsp_send(skb2);
+-	}
+-
+-	return ret;
+-}
+-
+-/**
+- * dn_nsp_output - Try and send something from socket queues
+- * @sk: The socket whose queues are to be investigated
+- *
+- * Try and send the packet on the end of the data and other data queues.
+- * Other data gets priority over data, and if we retransmit a packet we
+- * reduce the window by dividing it in two.
+- *
+- */
+-void dn_nsp_output(struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct sk_buff *skb;
+-	unsigned int reduce_win = 0;
+-
+-	/*
+-	 * First we check for otherdata/linkservice messages
+-	 */
+-	if ((skb = skb_peek(&scp->other_xmit_queue)) != NULL)
+-		reduce_win = dn_nsp_clone_and_send(skb, GFP_ATOMIC);
+-
+-	/*
+-	 * If we may not send any data, we don't.
+-	 * If we are still trying to get some other data down the
+-	 * channel, we don't try and send any data.
+-	 */
+-	if (reduce_win || (scp->flowrem_sw != DN_SEND))
+-		goto recalc_window;
+-
+-	if ((skb = skb_peek(&scp->data_xmit_queue)) != NULL)
+-		reduce_win = dn_nsp_clone_and_send(skb, GFP_ATOMIC);
+-
+-	/*
+-	 * If we've sent any frame more than once, we cut the
+-	 * send window size in half. There is always a minimum
+-	 * window size of one available.
+-	 */
+-recalc_window:
+-	if (reduce_win) {
+-		scp->snd_window >>= 1;
+-		if (scp->snd_window < NSP_MIN_WINDOW)
+-			scp->snd_window = NSP_MIN_WINDOW;
+-	}
+-}
+-
+-int dn_nsp_xmit_timeout(struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	dn_nsp_output(sk);
+-
+-	if (!skb_queue_empty(&scp->data_xmit_queue) ||
+-	    !skb_queue_empty(&scp->other_xmit_queue))
+-		scp->persist = dn_nsp_persist(sk);
+-
+-	return 0;
+-}
+-
+-static inline __le16 *dn_mk_common_header(struct dn_scp *scp, struct sk_buff *skb, unsigned char msgflag, int len)
+-{
+-	unsigned char *ptr = skb_push(skb, len);
+-
+-	BUG_ON(len < 5);
+-
+-	*ptr++ = msgflag;
+-	*((__le16 *)ptr) = scp->addrrem;
+-	ptr += 2;
+-	*((__le16 *)ptr) = scp->addrloc;
+-	ptr += 2;
+-	return (__le16 __force *)ptr;
+-}
+-
+-static __le16 *dn_mk_ack_header(struct sock *sk, struct sk_buff *skb, unsigned char msgflag, int hlen, int other)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	unsigned short acknum = scp->numdat_rcv & 0x0FFF;
+-	unsigned short ackcrs = scp->numoth_rcv & 0x0FFF;
+-	__le16 *ptr;
+-
+-	BUG_ON(hlen < 9);
+-
+-	scp->ackxmt_dat = acknum;
+-	scp->ackxmt_oth = ackcrs;
+-	acknum |= 0x8000;
+-	ackcrs |= 0x8000;
+-
+-	/* If this is an "other data/ack" message, swap acknum and ackcrs */
+-	if (other)
+-		swap(acknum, ackcrs);
+-
+-	/* Set "cross subchannel" bit in ackcrs */
+-	ackcrs |= 0x2000;
+-
+-	ptr = dn_mk_common_header(scp, skb, msgflag, hlen);
+-
+-	*ptr++ = cpu_to_le16(acknum);
+-	*ptr++ = cpu_to_le16(ackcrs);
+-
+-	return ptr;
+-}
+-
+-static __le16 *dn_nsp_mk_data_header(struct sock *sk, struct sk_buff *skb, int oth)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	__le16 *ptr = dn_mk_ack_header(sk, skb, cb->nsp_flags, 11, oth);
+-
+-	if (unlikely(oth)) {
+-		cb->segnum = scp->numoth;
+-		seq_add(&scp->numoth, 1);
+-	} else {
+-		cb->segnum = scp->numdat;
+-		seq_add(&scp->numdat, 1);
+-	}
+-	*(ptr++) = cpu_to_le16(cb->segnum);
+-
+-	return ptr;
+-}
+-
+-void dn_nsp_queue_xmit(struct sock *sk, struct sk_buff *skb,
+-			gfp_t gfp, int oth)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	unsigned long t = ((scp->nsp_srtt >> 2) + scp->nsp_rttvar) >> 1;
+-
+-	cb->xmit_count = 0;
+-	dn_nsp_mk_data_header(sk, skb, oth);
+-
+-	/*
+-	 * Slow start: If we have been idle for more than
+-	 * one RTT, then reset window to min size.
+-	 */
+-	if ((jiffies - scp->stamp) > t)
+-		scp->snd_window = NSP_MIN_WINDOW;
+-
+-	if (oth)
+-		skb_queue_tail(&scp->other_xmit_queue, skb);
+-	else
+-		skb_queue_tail(&scp->data_xmit_queue, skb);
+-
+-	if (scp->flowrem_sw != DN_SEND)
+-		return;
+-
+-	dn_nsp_clone_and_send(skb, gfp);
+-}
+-
+-
+-int dn_nsp_check_xmit_queue(struct sock *sk, struct sk_buff *skb, struct sk_buff_head *q, unsigned short acknum)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct sk_buff *skb2, *n, *ack = NULL;
+-	int wakeup = 0;
+-	int try_retrans = 0;
+-	unsigned long reftime = cb->stamp;
+-	unsigned long pkttime;
+-	unsigned short xmit_count;
+-	unsigned short segnum;
+-
+-	skb_queue_walk_safe(q, skb2, n) {
+-		struct dn_skb_cb *cb2 = DN_SKB_CB(skb2);
+-
+-		if (dn_before_or_equal(cb2->segnum, acknum))
+-			ack = skb2;
+-
+-		/* printk(KERN_DEBUG "ack: %s %04x %04x\n", ack ? "ACK" : "SKIP", (int)cb2->segnum, (int)acknum); */
+-
+-		if (ack == NULL)
+-			continue;
+-
+-		/* printk(KERN_DEBUG "check_xmit_queue: %04x, %d\n", acknum, cb2->xmit_count); */
+-
+-		/* Does _last_ packet acked have xmit_count > 1 */
+-		try_retrans = 0;
+-		/* Remember to wake up the sending process */
+-		wakeup = 1;
+-		/* Keep various statistics */
+-		pkttime = cb2->stamp;
+-		xmit_count = cb2->xmit_count;
+-		segnum = cb2->segnum;
+-		/* Remove and drop ack'ed packet */
+-		skb_unlink(ack, q);
+-		kfree_skb(ack);
+-		ack = NULL;
+-
+-		/*
+-		 * We don't expect to see acknowledgements for packets we
+-		 * haven't sent yet.
+-		 */
+-		WARN_ON(xmit_count == 0);
+-
+-		/*
+-		 * If the packet has only been sent once, we can use it
+-		 * to calculate the RTT and also open the window a little
+-		 * further.
+-		 */
+-		if (xmit_count == 1) {
+-			if (dn_equal(segnum, acknum))
+-				dn_nsp_rtt(sk, (long)(pkttime - reftime));
+-
+-			if (scp->snd_window < scp->max_window)
+-				scp->snd_window++;
+-		}
+-
+-		/*
+-		 * Packet has been sent more than once. If this is the last
+-		 * packet to be acknowledged then we want to send the next
+-		 * packet in the send queue again (assumes the remote host does
+-		 * go-back-N error control).
+-		 */
+-		if (xmit_count > 1)
+-			try_retrans = 1;
+-	}
+-
+-	if (try_retrans)
+-		dn_nsp_output(sk);
+-
+-	return wakeup;
+-}
+-
+-void dn_nsp_send_data_ack(struct sock *sk)
+-{
+-	struct sk_buff *skb = NULL;
+-
+-	if ((skb = dn_alloc_skb(sk, 9, GFP_ATOMIC)) == NULL)
+-		return;
+-
+-	skb_reserve(skb, 9);
+-	dn_mk_ack_header(sk, skb, 0x04, 9, 0);
+-	dn_nsp_send(skb);
+-}
+-
+-void dn_nsp_send_oth_ack(struct sock *sk)
+-{
+-	struct sk_buff *skb = NULL;
+-
+-	if ((skb = dn_alloc_skb(sk, 9, GFP_ATOMIC)) == NULL)
+-		return;
+-
+-	skb_reserve(skb, 9);
+-	dn_mk_ack_header(sk, skb, 0x14, 9, 1);
+-	dn_nsp_send(skb);
+-}
+-
+-
+-void dn_send_conn_ack (struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct sk_buff *skb = NULL;
+-	struct nsp_conn_ack_msg *msg;
+-
+-	if ((skb = dn_alloc_skb(sk, 3, sk->sk_allocation)) == NULL)
+-		return;
+-
+-	msg = skb_put(skb, 3);
+-	msg->msgflg = 0x24;
+-	msg->dstaddr = scp->addrrem;
+-
+-	dn_nsp_send(skb);
+-}
+-
+-static int dn_nsp_retrans_conn_conf(struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	if (scp->state == DN_CC)
+-		dn_send_conn_conf(sk, GFP_ATOMIC);
+-
+-	return 0;
+-}
+-
+-void dn_send_conn_conf(struct sock *sk, gfp_t gfp)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct sk_buff *skb = NULL;
+-	struct nsp_conn_init_msg *msg;
+-	__u8 len = (__u8)le16_to_cpu(scp->conndata_out.opt_optl);
+-
+-	if ((skb = dn_alloc_skb(sk, 50 + len, gfp)) == NULL)
+-		return;
+-
+-	msg = skb_put(skb, sizeof(*msg));
+-	msg->msgflg = 0x28;
+-	msg->dstaddr = scp->addrrem;
+-	msg->srcaddr = scp->addrloc;
+-	msg->services = scp->services_loc;
+-	msg->info = scp->info_loc;
+-	msg->segsize = cpu_to_le16(scp->segsize_loc);
+-
+-	skb_put_u8(skb, len);
+-
+-	if (len > 0)
+-		skb_put_data(skb, scp->conndata_out.opt_data, len);
+-
+-
+-	dn_nsp_send(skb);
+-
+-	scp->persist = dn_nsp_persist(sk);
+-	scp->persist_fxn = dn_nsp_retrans_conn_conf;
+-}
+-
+-
+-static __inline__ void dn_nsp_do_disc(struct sock *sk, unsigned char msgflg,
+-			unsigned short reason, gfp_t gfp,
+-			struct dst_entry *dst,
+-			int ddl, unsigned char *dd, __le16 rem, __le16 loc)
+-{
+-	struct sk_buff *skb = NULL;
+-	int size = 7 + ddl + ((msgflg == NSP_DISCINIT) ? 1 : 0);
+-	unsigned char *msg;
+-
+-	if ((dst == NULL) || (rem == 0)) {
+-		net_dbg_ratelimited("DECnet: dn_nsp_do_disc: BUG! Please report this to SteveW@ACM.org rem=%u dst=%p\n",
+-				    le16_to_cpu(rem), dst);
+-		return;
+-	}
+-
+-	if ((skb = dn_alloc_skb(sk, size, gfp)) == NULL)
+-		return;
+-
+-	msg = skb_put(skb, size);
+-	*msg++ = msgflg;
+-	*(__le16 *)msg = rem;
+-	msg += 2;
+-	*(__le16 *)msg = loc;
+-	msg += 2;
+-	*(__le16 *)msg = cpu_to_le16(reason);
+-	msg += 2;
+-	if (msgflg == NSP_DISCINIT)
+-		*msg++ = ddl;
+-
+-	if (ddl) {
+-		memcpy(msg, dd, ddl);
+-	}
+-
+-	/*
+-	 * This doesn't go via the dn_nsp_send() function since we need
+-	 * to be able to send disc packets out which have no socket
+-	 * associations.
+-	 */
+-	skb_dst_set(skb, dst_clone(dst));
+-	dst_output(&init_net, skb->sk, skb);
+-}
+-
+-
+-void dn_nsp_send_disc(struct sock *sk, unsigned char msgflg,
+-			unsigned short reason, gfp_t gfp)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	int ddl = 0;
+-
+-	if (msgflg == NSP_DISCINIT)
+-		ddl = le16_to_cpu(scp->discdata_out.opt_optl);
+-
+-	if (reason == 0)
+-		reason = le16_to_cpu(scp->discdata_out.opt_status);
+-
+-	dn_nsp_do_disc(sk, msgflg, reason, gfp, __sk_dst_get(sk), ddl,
+-		scp->discdata_out.opt_data, scp->addrrem, scp->addrloc);
+-}
+-
+-
+-void dn_nsp_return_disc(struct sk_buff *skb, unsigned char msgflg,
+-			unsigned short reason)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	int ddl = 0;
+-	gfp_t gfp = GFP_ATOMIC;
+-
+-	dn_nsp_do_disc(NULL, msgflg, reason, gfp, skb_dst(skb), ddl,
+-			NULL, cb->src_port, cb->dst_port);
+-}
+-
+-
+-void dn_nsp_send_link(struct sock *sk, unsigned char lsflags, char fcval)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct sk_buff *skb;
+-	unsigned char *ptr;
+-	gfp_t gfp = GFP_ATOMIC;
+-
+-	if ((skb = dn_alloc_skb(sk, DN_MAX_NSP_DATA_HEADER + 2, gfp)) == NULL)
+-		return;
+-
+-	skb_reserve(skb, DN_MAX_NSP_DATA_HEADER);
+-	ptr = skb_put(skb, 2);
+-	DN_SKB_CB(skb)->nsp_flags = 0x10;
+-	*ptr++ = lsflags;
+-	*ptr = fcval;
+-
+-	dn_nsp_queue_xmit(sk, skb, gfp, 1);
+-
+-	scp->persist = dn_nsp_persist(sk);
+-	scp->persist_fxn = dn_nsp_xmit_timeout;
+-}
+-
+-static int dn_nsp_retrans_conninit(struct sock *sk)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	if (scp->state == DN_CI)
+-		dn_nsp_send_conninit(sk, NSP_RCI);
+-
+-	return 0;
+-}
+-
+-void dn_nsp_send_conninit(struct sock *sk, unsigned char msgflg)
+-{
+-	struct dn_scp *scp = DN_SK(sk);
+-	struct nsp_conn_init_msg *msg;
+-	unsigned char aux;
+-	unsigned char menuver;
+-	struct dn_skb_cb *cb;
+-	unsigned char type = 1;
+-	gfp_t allocation = (msgflg == NSP_CI) ? sk->sk_allocation : GFP_ATOMIC;
+-	struct sk_buff *skb = dn_alloc_skb(sk, 200, allocation);
+-
+-	if (!skb)
+-		return;
+-
+-	cb  = DN_SKB_CB(skb);
+-	msg = skb_put(skb, sizeof(*msg));
+-
+-	msg->msgflg	= msgflg;
+-	msg->dstaddr	= 0x0000;		/* Remote Node will assign it*/
+-
+-	msg->srcaddr	= scp->addrloc;
+-	msg->services	= scp->services_loc;	/* Requested flow control    */
+-	msg->info	= scp->info_loc;	/* Version Number            */
+-	msg->segsize	= cpu_to_le16(scp->segsize_loc);	/* Max segment size  */
+-
+-	if (scp->peer.sdn_objnum)
+-		type = 0;
+-
+-	skb_put(skb, dn_sockaddr2username(&scp->peer,
+-					  skb_tail_pointer(skb), type));
+-	skb_put(skb, dn_sockaddr2username(&scp->addr,
+-					  skb_tail_pointer(skb), 2));
+-
+-	menuver = DN_MENUVER_ACC | DN_MENUVER_USR;
+-	if (scp->peer.sdn_flags & SDF_PROXY)
+-		menuver |= DN_MENUVER_PRX;
+-	if (scp->peer.sdn_flags & SDF_UICPROXY)
+-		menuver |= DN_MENUVER_UIC;
+-
+-	skb_put_u8(skb, menuver);	/* Menu Version		*/
+-
+-	aux = scp->accessdata.acc_userl;
+-	skb_put_u8(skb, aux);
+-	if (aux > 0)
+-		skb_put_data(skb, scp->accessdata.acc_user, aux);
+-
+-	aux = scp->accessdata.acc_passl;
+-	skb_put_u8(skb, aux);
+-	if (aux > 0)
+-		skb_put_data(skb, scp->accessdata.acc_pass, aux);
+-
+-	aux = scp->accessdata.acc_accl;
+-	skb_put_u8(skb, aux);
+-	if (aux > 0)
+-		skb_put_data(skb, scp->accessdata.acc_acc, aux);
+-
+-	aux = (__u8)le16_to_cpu(scp->conndata_out.opt_optl);
+-	skb_put_u8(skb, aux);
+-	if (aux > 0)
+-		skb_put_data(skb, scp->conndata_out.opt_data, aux);
+-
+-	scp->persist = dn_nsp_persist(sk);
+-	scp->persist_fxn = dn_nsp_retrans_conninit;
+-
+-	cb->rt_flags = DN_RT_F_RQR;
+-
+-	dn_nsp_send(skb);
+-}
+diff --git a/net/decnet/dn_route.c b/net/decnet/dn_route.c
+deleted file mode 100644
+index 4cac31d22a502..0000000000000
+--- a/net/decnet/dn_route.c
++++ /dev/null
+@@ -1,1923 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-or-later
+-/*
+- * DECnet       An implementation of the DECnet protocol suite for the LINUX
+- *              operating system.  DECnet is implemented using the  BSD Socket
+- *              interface as the means of communication with the user level.
+- *
+- *              DECnet Routing Functions (Endnode and Router)
+- *
+- * Authors:     Steve Whitehouse <SteveW@ACM.org>
+- *              Eduardo Marcelo Serrat <emserrat@geocities.com>
+- *
+- * Changes:
+- *              Steve Whitehouse : Fixes to allow "intra-ethernet" and
+- *                                 "return-to-sender" bits on outgoing
+- *                                 packets.
+- *		Steve Whitehouse : Timeouts for cached routes.
+- *              Steve Whitehouse : Use dst cache for input routes too.
+- *              Steve Whitehouse : Fixed error values in dn_send_skb.
+- *              Steve Whitehouse : Rework routing functions to better fit
+- *                                 DECnet routing design
+- *              Alexey Kuznetsov : New SMP locking
+- *              Steve Whitehouse : More SMP locking changes & dn_cache_dump()
+- *              Steve Whitehouse : Prerouting NF hook, now really is prerouting.
+- *				   Fixed possible skb leak in rtnetlink funcs.
+- *              Steve Whitehouse : Dave Miller's dynamic hash table sizing and
+- *                                 Alexey Kuznetsov's finer grained locking
+- *                                 from ipv4/route.c.
+- *              Steve Whitehouse : Routing is now starting to look like a
+- *                                 sensible set of code now, mainly due to
+- *                                 my copying the IPv4 routing code. The
+- *                                 hooks here are modified and will continue
+- *                                 to evolve for a while.
+- *              Steve Whitehouse : Real SMP at last :-) Also new netfilter
+- *                                 stuff. Look out raw sockets your days
+- *                                 are numbered!
+- *              Steve Whitehouse : Added return-to-sender functions. Added
+- *                                 backlog congestion level return codes.
+- *		Steve Whitehouse : Fixed bug where routes were set up with
+- *                                 no ref count on net devices.
+- *              Steve Whitehouse : RCU for the route cache
+- *              Steve Whitehouse : Preparations for the flow cache
+- *              Steve Whitehouse : Prepare for nonlinear skbs
+- */
+-
+-/******************************************************************************
+-    (c) 1995-1998 E.M. Serrat		emserrat@geocities.com
+-
+-*******************************************************************************/
+-
+-#include <linux/errno.h>
+-#include <linux/types.h>
+-#include <linux/socket.h>
+-#include <linux/in.h>
+-#include <linux/kernel.h>
+-#include <linux/sockios.h>
+-#include <linux/net.h>
+-#include <linux/netdevice.h>
+-#include <linux/inet.h>
+-#include <linux/route.h>
+-#include <linux/in_route.h>
+-#include <linux/slab.h>
+-#include <net/sock.h>
+-#include <linux/mm.h>
+-#include <linux/proc_fs.h>
+-#include <linux/seq_file.h>
+-#include <linux/init.h>
+-#include <linux/rtnetlink.h>
+-#include <linux/string.h>
+-#include <linux/netfilter_decnet.h>
+-#include <linux/rcupdate.h>
+-#include <linux/times.h>
+-#include <linux/export.h>
+-#include <asm/errno.h>
+-#include <net/net_namespace.h>
+-#include <net/netlink.h>
+-#include <net/neighbour.h>
+-#include <net/dst.h>
+-#include <net/flow.h>
+-#include <net/fib_rules.h>
+-#include <net/dn.h>
+-#include <net/dn_dev.h>
+-#include <net/dn_nsp.h>
+-#include <net/dn_route.h>
+-#include <net/dn_neigh.h>
+-#include <net/dn_fib.h>
+-
+-struct dn_rt_hash_bucket
+-{
+-	struct dn_route __rcu *chain;
+-	spinlock_t lock;
+-};
+-
+-extern struct neigh_table dn_neigh_table;
+-
+-
+-static unsigned char dn_hiord_addr[6] = {0xAA,0x00,0x04,0x00,0x00,0x00};
+-
+-static const int dn_rt_min_delay = 2 * HZ;
+-static const int dn_rt_max_delay = 10 * HZ;
+-static const int dn_rt_mtu_expires = 10 * 60 * HZ;
+-
+-static unsigned long dn_rt_deadline;
+-
+-static int dn_dst_gc(struct dst_ops *ops);
+-static struct dst_entry *dn_dst_check(struct dst_entry *, __u32);
+-static unsigned int dn_dst_default_advmss(const struct dst_entry *dst);
+-static unsigned int dn_dst_mtu(const struct dst_entry *dst);
+-static void dn_dst_destroy(struct dst_entry *);
+-static void dn_dst_ifdown(struct dst_entry *, struct net_device *dev, int how);
+-static struct dst_entry *dn_dst_negative_advice(struct dst_entry *);
+-static void dn_dst_link_failure(struct sk_buff *);
+-static void dn_dst_update_pmtu(struct dst_entry *dst, struct sock *sk,
+-			       struct sk_buff *skb , u32 mtu,
+-			       bool confirm_neigh);
+-static void dn_dst_redirect(struct dst_entry *dst, struct sock *sk,
+-			    struct sk_buff *skb);
+-static struct neighbour *dn_dst_neigh_lookup(const struct dst_entry *dst,
+-					     struct sk_buff *skb,
+-					     const void *daddr);
+-static int dn_route_input(struct sk_buff *);
+-static void dn_run_flush(struct timer_list *unused);
+-
+-static struct dn_rt_hash_bucket *dn_rt_hash_table;
+-static unsigned int dn_rt_hash_mask;
+-
+-static struct timer_list dn_route_timer;
+-static DEFINE_TIMER(dn_rt_flush_timer, dn_run_flush);
+-int decnet_dst_gc_interval = 2;
+-
+-static struct dst_ops dn_dst_ops = {
+-	.family =		PF_DECnet,
+-	.gc_thresh =		128,
+-	.gc =			dn_dst_gc,
+-	.check =		dn_dst_check,
+-	.default_advmss =	dn_dst_default_advmss,
+-	.mtu =			dn_dst_mtu,
+-	.cow_metrics =		dst_cow_metrics_generic,
+-	.destroy =		dn_dst_destroy,
+-	.ifdown =		dn_dst_ifdown,
+-	.negative_advice =	dn_dst_negative_advice,
+-	.link_failure =		dn_dst_link_failure,
+-	.update_pmtu =		dn_dst_update_pmtu,
+-	.redirect =		dn_dst_redirect,
+-	.neigh_lookup =		dn_dst_neigh_lookup,
+-};
+-
+-static void dn_dst_destroy(struct dst_entry *dst)
+-{
+-	struct dn_route *rt = (struct dn_route *) dst;
+-
+-	if (rt->n)
+-		neigh_release(rt->n);
+-	dst_destroy_metrics_generic(dst);
+-}
+-
+-static void dn_dst_ifdown(struct dst_entry *dst, struct net_device *dev, int how)
+-{
+-	if (how) {
+-		struct dn_route *rt = (struct dn_route *) dst;
+-		struct neighbour *n = rt->n;
+-
+-		if (n && n->dev == dev) {
+-			n->dev = dev_net(dev)->loopback_dev;
+-			dev_hold(n->dev);
+-			dev_put(dev);
+-		}
+-	}
+-}
+-
+-static __inline__ unsigned int dn_hash(__le16 src, __le16 dst)
+-{
+-	__u16 tmp = (__u16 __force)(src ^ dst);
+-	tmp ^= (tmp >> 3);
+-	tmp ^= (tmp >> 5);
+-	tmp ^= (tmp >> 10);
+-	return dn_rt_hash_mask & (unsigned int)tmp;
+-}
+-
+-static void dn_dst_check_expire(struct timer_list *unused)
+-{
+-	int i;
+-	struct dn_route *rt;
+-	struct dn_route __rcu **rtp;
+-	unsigned long now = jiffies;
+-	unsigned long expire = 120 * HZ;
+-
+-	for (i = 0; i <= dn_rt_hash_mask; i++) {
+-		rtp = &dn_rt_hash_table[i].chain;
+-
+-		spin_lock(&dn_rt_hash_table[i].lock);
+-		while ((rt = rcu_dereference_protected(*rtp,
+-						lockdep_is_held(&dn_rt_hash_table[i].lock))) != NULL) {
+-			if (atomic_read(&rt->dst.__refcnt) > 1 ||
+-			    (now - rt->dst.lastuse) < expire) {
+-				rtp = &rt->dn_next;
+-				continue;
+-			}
+-			*rtp = rt->dn_next;
+-			rt->dn_next = NULL;
+-			dst_dev_put(&rt->dst);
+-			dst_release(&rt->dst);
+-		}
+-		spin_unlock(&dn_rt_hash_table[i].lock);
+-
+-		if ((jiffies - now) > 0)
+-			break;
+-	}
+-
+-	mod_timer(&dn_route_timer, now + decnet_dst_gc_interval * HZ);
+-}
+-
+-static int dn_dst_gc(struct dst_ops *ops)
+-{
+-	struct dn_route *rt;
+-	struct dn_route __rcu **rtp;
+-	int i;
+-	unsigned long now = jiffies;
+-	unsigned long expire = 10 * HZ;
+-
+-	for (i = 0; i <= dn_rt_hash_mask; i++) {
+-
+-		spin_lock_bh(&dn_rt_hash_table[i].lock);
+-		rtp = &dn_rt_hash_table[i].chain;
+-
+-		while ((rt = rcu_dereference_protected(*rtp,
+-						lockdep_is_held(&dn_rt_hash_table[i].lock))) != NULL) {
+-			if (atomic_read(&rt->dst.__refcnt) > 1 ||
+-			    (now - rt->dst.lastuse) < expire) {
+-				rtp = &rt->dn_next;
+-				continue;
+-			}
+-			*rtp = rt->dn_next;
+-			rt->dn_next = NULL;
+-			dst_dev_put(&rt->dst);
+-			dst_release(&rt->dst);
+-			break;
+-		}
+-		spin_unlock_bh(&dn_rt_hash_table[i].lock);
+-	}
+-
+-	return 0;
+-}
+-
+-/*
+- * The decnet standards don't impose a particular minimum mtu, what they
+- * do insist on is that the routing layer accepts a datagram of at least
+- * 230 bytes long. Here we have to subtract the routing header length from
+- * 230 to get the minimum acceptable mtu. If there is no neighbour, then we
+- * assume the worst and use a long header size.
+- *
+- * We update both the mtu and the advertised mss (i.e. the segment size we
+- * advertise to the other end).
+- */
+-static void dn_dst_update_pmtu(struct dst_entry *dst, struct sock *sk,
+-			       struct sk_buff *skb, u32 mtu,
+-			       bool confirm_neigh)
+-{
+-	struct dn_route *rt = (struct dn_route *) dst;
+-	struct neighbour *n = rt->n;
+-	u32 min_mtu = 230;
+-	struct dn_dev *dn;
+-
+-	dn = n ? rcu_dereference_raw(n->dev->dn_ptr) : NULL;
+-
+-	if (dn && dn->use_long == 0)
+-		min_mtu -= 6;
+-	else
+-		min_mtu -= 21;
+-
+-	if (dst_metric(dst, RTAX_MTU) > mtu && mtu >= min_mtu) {
+-		if (!(dst_metric_locked(dst, RTAX_MTU))) {
+-			dst_metric_set(dst, RTAX_MTU, mtu);
+-			dst_set_expires(dst, dn_rt_mtu_expires);
+-		}
+-		if (!(dst_metric_locked(dst, RTAX_ADVMSS))) {
+-			u32 mss = mtu - DN_MAX_NSP_DATA_HEADER;
+-			u32 existing_mss = dst_metric_raw(dst, RTAX_ADVMSS);
+-			if (!existing_mss || existing_mss > mss)
+-				dst_metric_set(dst, RTAX_ADVMSS, mss);
+-		}
+-	}
+-}
+-
+-static void dn_dst_redirect(struct dst_entry *dst, struct sock *sk,
+-			    struct sk_buff *skb)
+-{
+-}
+-
+-/*
+- * When a route has been marked obsolete. (e.g. routing cache flush)
+- */
+-static struct dst_entry *dn_dst_check(struct dst_entry *dst, __u32 cookie)
+-{
+-	return NULL;
+-}
+-
+-static struct dst_entry *dn_dst_negative_advice(struct dst_entry *dst)
+-{
+-	dst_release(dst);
+-	return NULL;
+-}
+-
+-static void dn_dst_link_failure(struct sk_buff *skb)
+-{
+-}
+-
+-static inline int compare_keys(struct flowidn *fl1, struct flowidn *fl2)
+-{
+-	return ((fl1->daddr ^ fl2->daddr) |
+-		(fl1->saddr ^ fl2->saddr) |
+-		(fl1->flowidn_mark ^ fl2->flowidn_mark) |
+-		(fl1->flowidn_scope ^ fl2->flowidn_scope) |
+-		(fl1->flowidn_oif ^ fl2->flowidn_oif) |
+-		(fl1->flowidn_iif ^ fl2->flowidn_iif)) == 0;
+-}
+-
+-static int dn_insert_route(struct dn_route *rt, unsigned int hash, struct dn_route **rp)
+-{
+-	struct dn_route *rth;
+-	struct dn_route __rcu **rthp;
+-	unsigned long now = jiffies;
+-
+-	rthp = &dn_rt_hash_table[hash].chain;
+-
+-	spin_lock_bh(&dn_rt_hash_table[hash].lock);
+-	while ((rth = rcu_dereference_protected(*rthp,
+-						lockdep_is_held(&dn_rt_hash_table[hash].lock))) != NULL) {
+-		if (compare_keys(&rth->fld, &rt->fld)) {
+-			/* Put it first */
+-			*rthp = rth->dn_next;
+-			rcu_assign_pointer(rth->dn_next,
+-					   dn_rt_hash_table[hash].chain);
+-			rcu_assign_pointer(dn_rt_hash_table[hash].chain, rth);
+-
+-			dst_hold_and_use(&rth->dst, now);
+-			spin_unlock_bh(&dn_rt_hash_table[hash].lock);
+-
+-			dst_release_immediate(&rt->dst);
+-			*rp = rth;
+-			return 0;
+-		}
+-		rthp = &rth->dn_next;
+-	}
+-
+-	rcu_assign_pointer(rt->dn_next, dn_rt_hash_table[hash].chain);
+-	rcu_assign_pointer(dn_rt_hash_table[hash].chain, rt);
+-
+-	dst_hold_and_use(&rt->dst, now);
+-	spin_unlock_bh(&dn_rt_hash_table[hash].lock);
+-	*rp = rt;
+-	return 0;
+-}
+-
+-static void dn_run_flush(struct timer_list *unused)
+-{
+-	int i;
+-	struct dn_route *rt, *next;
+-
+-	for (i = 0; i < dn_rt_hash_mask; i++) {
+-		spin_lock_bh(&dn_rt_hash_table[i].lock);
+-
+-		if ((rt = xchg((struct dn_route **)&dn_rt_hash_table[i].chain, NULL)) == NULL)
+-			goto nothing_to_declare;
+-
+-		for(; rt; rt = next) {
+-			next = rcu_dereference_raw(rt->dn_next);
+-			RCU_INIT_POINTER(rt->dn_next, NULL);
+-			dst_dev_put(&rt->dst);
+-			dst_release(&rt->dst);
+-		}
+-
+-nothing_to_declare:
+-		spin_unlock_bh(&dn_rt_hash_table[i].lock);
+-	}
+-}
+-
+-static DEFINE_SPINLOCK(dn_rt_flush_lock);
+-
+-void dn_rt_cache_flush(int delay)
+-{
+-	unsigned long now = jiffies;
+-	int user_mode = !in_interrupt();
+-
+-	if (delay < 0)
+-		delay = dn_rt_min_delay;
+-
+-	spin_lock_bh(&dn_rt_flush_lock);
+-
+-	if (del_timer(&dn_rt_flush_timer) && delay > 0 && dn_rt_deadline) {
+-		long tmo = (long)(dn_rt_deadline - now);
+-
+-		if (user_mode && tmo < dn_rt_max_delay - dn_rt_min_delay)
+-			tmo = 0;
+-
+-		if (delay > tmo)
+-			delay = tmo;
+-	}
+-
+-	if (delay <= 0) {
+-		spin_unlock_bh(&dn_rt_flush_lock);
+-		dn_run_flush(NULL);
+-		return;
+-	}
+-
+-	if (dn_rt_deadline == 0)
+-		dn_rt_deadline = now + dn_rt_max_delay;
+-
+-	dn_rt_flush_timer.expires = now + delay;
+-	add_timer(&dn_rt_flush_timer);
+-	spin_unlock_bh(&dn_rt_flush_lock);
+-}
+-
+-/**
+- * dn_return_short - Return a short packet to its sender
+- * @skb: The packet to return
+- *
+- */
+-static int dn_return_short(struct sk_buff *skb)
+-{
+-	struct dn_skb_cb *cb;
+-	unsigned char *ptr;
+-	__le16 *src;
+-	__le16 *dst;
+-
+-	/* Add back headers */
+-	skb_push(skb, skb->data - skb_network_header(skb));
+-
+-	if ((skb = skb_unshare(skb, GFP_ATOMIC)) == NULL)
+-		return NET_RX_DROP;
+-
+-	cb = DN_SKB_CB(skb);
+-	/* Skip packet length and point to flags */
+-	ptr = skb->data + 2;
+-	*ptr++ = (cb->rt_flags & ~DN_RT_F_RQR) | DN_RT_F_RTS;
+-
+-	dst = (__le16 *)ptr;
+-	ptr += 2;
+-	src = (__le16 *)ptr;
+-	ptr += 2;
+-	*ptr = 0; /* Zero hop count */
+-
+-	swap(*src, *dst);
+-
+-	skb->pkt_type = PACKET_OUTGOING;
+-	dn_rt_finish_output(skb, NULL, NULL);
+-	return NET_RX_SUCCESS;
+-}
+-
+-/**
+- * dn_return_long - Return a long packet to its sender
+- * @skb: The long format packet to return
+- *
+- */
+-static int dn_return_long(struct sk_buff *skb)
+-{
+-	struct dn_skb_cb *cb;
+-	unsigned char *ptr;
+-	unsigned char *src_addr, *dst_addr;
+-	unsigned char tmp[ETH_ALEN];
+-
+-	/* Add back all headers */
+-	skb_push(skb, skb->data - skb_network_header(skb));
+-
+-	if ((skb = skb_unshare(skb, GFP_ATOMIC)) == NULL)
+-		return NET_RX_DROP;
+-
+-	cb = DN_SKB_CB(skb);
+-	/* Ignore packet length and point to flags */
+-	ptr = skb->data + 2;
+-
+-	/* Skip padding */
+-	if (*ptr & DN_RT_F_PF) {
+-		char padlen = (*ptr & ~DN_RT_F_PF);
+-		ptr += padlen;
+-	}
+-
+-	*ptr++ = (cb->rt_flags & ~DN_RT_F_RQR) | DN_RT_F_RTS;
+-	ptr += 2;
+-	dst_addr = ptr;
+-	ptr += 8;
+-	src_addr = ptr;
+-	ptr += 6;
+-	*ptr = 0; /* Zero hop count */
+-
+-	/* Swap source and destination */
+-	memcpy(tmp, src_addr, ETH_ALEN);
+-	memcpy(src_addr, dst_addr, ETH_ALEN);
+-	memcpy(dst_addr, tmp, ETH_ALEN);
+-
+-	skb->pkt_type = PACKET_OUTGOING;
+-	dn_rt_finish_output(skb, dst_addr, src_addr);
+-	return NET_RX_SUCCESS;
+-}
+-
+-/**
+- * dn_route_rx_packet - Try and find a route for an incoming packet
+- * @net: The applicable net namespace
+- * @sk: Socket packet transmitted on
+- * @skb: The packet to find a route for
+- *
+- * Returns: result of input function if route is found, error code otherwise
+- */
+-static int dn_route_rx_packet(struct net *net, struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dn_skb_cb *cb;
+-	int err;
+-
+-	if ((err = dn_route_input(skb)) == 0)
+-		return dst_input(skb);
+-
+-	cb = DN_SKB_CB(skb);
+-	if (decnet_debug_level & 4) {
+-		char *devname = skb->dev ? skb->dev->name : "???";
+-
+-		printk(KERN_DEBUG
+-			"DECnet: dn_route_rx_packet: rt_flags=0x%02x dev=%s len=%d src=0x%04hx dst=0x%04hx err=%d type=%d\n",
+-			(int)cb->rt_flags, devname, skb->len,
+-			le16_to_cpu(cb->src), le16_to_cpu(cb->dst),
+-			err, skb->pkt_type);
+-	}
+-
+-	if ((skb->pkt_type == PACKET_HOST) && (cb->rt_flags & DN_RT_F_RQR)) {
+-		switch (cb->rt_flags & DN_RT_PKT_MSK) {
+-		case DN_RT_PKT_SHORT:
+-			return dn_return_short(skb);
+-		case DN_RT_PKT_LONG:
+-			return dn_return_long(skb);
+-		}
+-	}
+-
+-	kfree_skb(skb);
+-	return NET_RX_DROP;
+-}
+-
+-static int dn_route_rx_long(struct sk_buff *skb)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	unsigned char *ptr = skb->data;
+-
+-	if (!pskb_may_pull(skb, 21)) /* 20 for long header, 1 for shortest nsp */
+-		goto drop_it;
+-
+-	skb_pull(skb, 20);
+-	skb_reset_transport_header(skb);
+-
+-	/* Destination info */
+-	ptr += 2;
+-	cb->dst = dn_eth2dn(ptr);
+-	if (memcmp(ptr, dn_hiord_addr, 4) != 0)
+-		goto drop_it;
+-	ptr += 6;
+-
+-
+-	/* Source info */
+-	ptr += 2;
+-	cb->src = dn_eth2dn(ptr);
+-	if (memcmp(ptr, dn_hiord_addr, 4) != 0)
+-		goto drop_it;
+-	ptr += 6;
+-	/* Other junk */
+-	ptr++;
+-	cb->hops = *ptr++; /* Visit Count */
+-
+-	return NF_HOOK(NFPROTO_DECNET, NF_DN_PRE_ROUTING,
+-		       &init_net, NULL, skb, skb->dev, NULL,
+-		       dn_route_rx_packet);
+-
+-drop_it:
+-	kfree_skb(skb);
+-	return NET_RX_DROP;
+-}
+-
+-
+-
+-static int dn_route_rx_short(struct sk_buff *skb)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	unsigned char *ptr = skb->data;
+-
+-	if (!pskb_may_pull(skb, 6)) /* 5 for short header + 1 for shortest nsp */
+-		goto drop_it;
+-
+-	skb_pull(skb, 5);
+-	skb_reset_transport_header(skb);
+-
+-	cb->dst = *(__le16 *)ptr;
+-	ptr += 2;
+-	cb->src = *(__le16 *)ptr;
+-	ptr += 2;
+-	cb->hops = *ptr & 0x3f;
+-
+-	return NF_HOOK(NFPROTO_DECNET, NF_DN_PRE_ROUTING,
+-		       &init_net, NULL, skb, skb->dev, NULL,
+-		       dn_route_rx_packet);
+-
+-drop_it:
+-	kfree_skb(skb);
+-	return NET_RX_DROP;
+-}
+-
+-static int dn_route_discard(struct net *net, struct sock *sk, struct sk_buff *skb)
+-{
+-	/*
+-	 * I know we drop the packet here, but thats considered success in
+-	 * this case
+-	 */
+-	kfree_skb(skb);
+-	return NET_RX_SUCCESS;
+-}
+-
+-static int dn_route_ptp_hello(struct net *net, struct sock *sk, struct sk_buff *skb)
+-{
+-	dn_dev_hello(skb);
+-	dn_neigh_pointopoint_hello(skb);
+-	return NET_RX_SUCCESS;
+-}
+-
+-int dn_route_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt, struct net_device *orig_dev)
+-{
+-	struct dn_skb_cb *cb;
+-	unsigned char flags = 0;
+-	__u16 len = le16_to_cpu(*(__le16 *)skb->data);
+-	struct dn_dev *dn = rcu_dereference(dev->dn_ptr);
+-	unsigned char padlen = 0;
+-
+-	if (!net_eq(dev_net(dev), &init_net))
+-		goto dump_it;
+-
+-	if (dn == NULL)
+-		goto dump_it;
+-
+-	if ((skb = skb_share_check(skb, GFP_ATOMIC)) == NULL)
+-		goto out;
+-
+-	if (!pskb_may_pull(skb, 3))
+-		goto dump_it;
+-
+-	skb_pull(skb, 2);
+-
+-	if (len > skb->len)
+-		goto dump_it;
+-
+-	skb_trim(skb, len);
+-
+-	flags = *skb->data;
+-
+-	cb = DN_SKB_CB(skb);
+-	cb->stamp = jiffies;
+-	cb->iif = dev->ifindex;
+-
+-	/*
+-	 * If we have padding, remove it.
+-	 */
+-	if (flags & DN_RT_F_PF) {
+-		padlen = flags & ~DN_RT_F_PF;
+-		if (!pskb_may_pull(skb, padlen + 1))
+-			goto dump_it;
+-		skb_pull(skb, padlen);
+-		flags = *skb->data;
+-	}
+-
+-	skb_reset_network_header(skb);
+-
+-	/*
+-	 * Weed out future version DECnet
+-	 */
+-	if (flags & DN_RT_F_VER)
+-		goto dump_it;
+-
+-	cb->rt_flags = flags;
+-
+-	if (decnet_debug_level & 1)
+-		printk(KERN_DEBUG
+-			"dn_route_rcv: got 0x%02x from %s [%d %d %d]\n",
+-			(int)flags, dev->name, len, skb->len,
+-			padlen);
+-
+-	if (flags & DN_RT_PKT_CNTL) {
+-		if (unlikely(skb_linearize(skb)))
+-			goto dump_it;
+-
+-		switch (flags & DN_RT_CNTL_MSK) {
+-		case DN_RT_PKT_INIT:
+-			dn_dev_init_pkt(skb);
+-			break;
+-		case DN_RT_PKT_VERI:
+-			dn_dev_veri_pkt(skb);
+-			break;
+-		}
+-
+-		if (dn->parms.state != DN_DEV_S_RU)
+-			goto dump_it;
+-
+-		switch (flags & DN_RT_CNTL_MSK) {
+-		case DN_RT_PKT_HELO:
+-			return NF_HOOK(NFPROTO_DECNET, NF_DN_HELLO,
+-				       &init_net, NULL, skb, skb->dev, NULL,
+-				       dn_route_ptp_hello);
+-
+-		case DN_RT_PKT_L1RT:
+-		case DN_RT_PKT_L2RT:
+-			return NF_HOOK(NFPROTO_DECNET, NF_DN_ROUTE,
+-				       &init_net, NULL, skb, skb->dev, NULL,
+-				       dn_route_discard);
+-		case DN_RT_PKT_ERTH:
+-			return NF_HOOK(NFPROTO_DECNET, NF_DN_HELLO,
+-				       &init_net, NULL, skb, skb->dev, NULL,
+-				       dn_neigh_router_hello);
+-
+-		case DN_RT_PKT_EEDH:
+-			return NF_HOOK(NFPROTO_DECNET, NF_DN_HELLO,
+-				       &init_net, NULL, skb, skb->dev, NULL,
+-				       dn_neigh_endnode_hello);
+-		}
+-	} else {
+-		if (dn->parms.state != DN_DEV_S_RU)
+-			goto dump_it;
+-
+-		skb_pull(skb, 1); /* Pull flags */
+-
+-		switch (flags & DN_RT_PKT_MSK) {
+-		case DN_RT_PKT_LONG:
+-			return dn_route_rx_long(skb);
+-		case DN_RT_PKT_SHORT:
+-			return dn_route_rx_short(skb);
+-		}
+-	}
+-
+-dump_it:
+-	kfree_skb(skb);
+-out:
+-	return NET_RX_DROP;
+-}
+-
+-static int dn_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dst_entry *dst = skb_dst(skb);
+-	struct dn_route *rt = (struct dn_route *)dst;
+-	struct net_device *dev = dst->dev;
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-
+-	int err = -EINVAL;
+-
+-	if (rt->n == NULL)
+-		goto error;
+-
+-	skb->dev = dev;
+-
+-	cb->src = rt->rt_saddr;
+-	cb->dst = rt->rt_daddr;
+-
+-	/*
+-	 * Always set the Intra-Ethernet bit on all outgoing packets
+-	 * originated on this node. Only valid flag from upper layers
+-	 * is return-to-sender-requested. Set hop count to 0 too.
+-	 */
+-	cb->rt_flags &= ~DN_RT_F_RQR;
+-	cb->rt_flags |= DN_RT_F_IE;
+-	cb->hops = 0;
+-
+-	return NF_HOOK(NFPROTO_DECNET, NF_DN_LOCAL_OUT,
+-		       &init_net, sk, skb, NULL, dev,
+-		       dn_to_neigh_output);
+-
+-error:
+-	net_dbg_ratelimited("dn_output: This should not happen\n");
+-
+-	kfree_skb(skb);
+-
+-	return err;
+-}
+-
+-static int dn_forward(struct sk_buff *skb)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	struct dst_entry *dst = skb_dst(skb);
+-	struct dn_dev *dn_db = rcu_dereference(dst->dev->dn_ptr);
+-	struct dn_route *rt;
+-	int header_len;
+-	struct net_device *dev = skb->dev;
+-
+-	if (skb->pkt_type != PACKET_HOST)
+-		goto drop;
+-
+-	/* Ensure that we have enough space for headers */
+-	rt = (struct dn_route *)skb_dst(skb);
+-	header_len = dn_db->use_long ? 21 : 6;
+-	if (skb_cow(skb, LL_RESERVED_SPACE(rt->dst.dev)+header_len))
+-		goto drop;
+-
+-	/*
+-	 * Hop count exceeded.
+-	 */
+-	if (++cb->hops > 30)
+-		goto drop;
+-
+-	skb->dev = rt->dst.dev;
+-
+-	/*
+-	 * If packet goes out same interface it came in on, then set
+-	 * the Intra-Ethernet bit. This has no effect for short
+-	 * packets, so we don't need to test for them here.
+-	 */
+-	cb->rt_flags &= ~DN_RT_F_IE;
+-	if (rt->rt_flags & RTCF_DOREDIRECT)
+-		cb->rt_flags |= DN_RT_F_IE;
+-
+-	return NF_HOOK(NFPROTO_DECNET, NF_DN_FORWARD,
+-		       &init_net, NULL, skb, dev, skb->dev,
+-		       dn_to_neigh_output);
+-
+-drop:
+-	kfree_skb(skb);
+-	return NET_RX_DROP;
+-}
+-
+-/*
+- * Used to catch bugs. This should never normally get
+- * called.
+- */
+-static int dn_rt_bug_out(struct net *net, struct sock *sk, struct sk_buff *skb)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-
+-	net_dbg_ratelimited("dn_rt_bug: skb from:%04x to:%04x\n",
+-			    le16_to_cpu(cb->src), le16_to_cpu(cb->dst));
+-
+-	kfree_skb(skb);
+-
+-	return NET_RX_DROP;
+-}
+-
+-static int dn_rt_bug(struct sk_buff *skb)
+-{
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-
+-	net_dbg_ratelimited("dn_rt_bug: skb from:%04x to:%04x\n",
+-			    le16_to_cpu(cb->src), le16_to_cpu(cb->dst));
+-
+-	kfree_skb(skb);
+-
+-	return NET_RX_DROP;
+-}
+-
+-static unsigned int dn_dst_default_advmss(const struct dst_entry *dst)
+-{
+-	return dn_mss_from_pmtu(dst->dev, dst_mtu(dst));
+-}
+-
+-static unsigned int dn_dst_mtu(const struct dst_entry *dst)
+-{
+-	unsigned int mtu = dst_metric_raw(dst, RTAX_MTU);
+-
+-	return mtu ? : dst->dev->mtu;
+-}
+-
+-static struct neighbour *dn_dst_neigh_lookup(const struct dst_entry *dst,
+-					     struct sk_buff *skb,
+-					     const void *daddr)
+-{
+-	return __neigh_lookup_errno(&dn_neigh_table, daddr, dst->dev);
+-}
+-
+-static int dn_rt_set_next_hop(struct dn_route *rt, struct dn_fib_res *res)
+-{
+-	struct dn_fib_info *fi = res->fi;
+-	struct net_device *dev = rt->dst.dev;
+-	unsigned int mss_metric;
+-	struct neighbour *n;
+-
+-	if (fi) {
+-		if (DN_FIB_RES_GW(*res) &&
+-		    DN_FIB_RES_NH(*res).nh_scope == RT_SCOPE_LINK)
+-			rt->rt_gateway = DN_FIB_RES_GW(*res);
+-		dst_init_metrics(&rt->dst, fi->fib_metrics, true);
+-	}
+-	rt->rt_type = res->type;
+-
+-	if (dev != NULL && rt->n == NULL) {
+-		n = __neigh_lookup_errno(&dn_neigh_table, &rt->rt_gateway, dev);
+-		if (IS_ERR(n))
+-			return PTR_ERR(n);
+-		rt->n = n;
+-	}
+-
+-	if (dst_metric(&rt->dst, RTAX_MTU) > rt->dst.dev->mtu)
+-		dst_metric_set(&rt->dst, RTAX_MTU, rt->dst.dev->mtu);
+-	mss_metric = dst_metric_raw(&rt->dst, RTAX_ADVMSS);
+-	if (mss_metric) {
+-		unsigned int mss = dn_mss_from_pmtu(dev, dst_mtu(&rt->dst));
+-		if (mss_metric > mss)
+-			dst_metric_set(&rt->dst, RTAX_ADVMSS, mss);
+-	}
+-	return 0;
+-}
+-
+-static inline int dn_match_addr(__le16 addr1, __le16 addr2)
+-{
+-	__u16 tmp = le16_to_cpu(addr1) ^ le16_to_cpu(addr2);
+-	int match = 16;
+-	while(tmp) {
+-		tmp >>= 1;
+-		match--;
+-	}
+-	return match;
+-}
+-
+-static __le16 dnet_select_source(const struct net_device *dev, __le16 daddr, int scope)
+-{
+-	__le16 saddr = 0;
+-	struct dn_dev *dn_db;
+-	struct dn_ifaddr *ifa;
+-	int best_match = 0;
+-	int ret;
+-
+-	rcu_read_lock();
+-	dn_db = rcu_dereference(dev->dn_ptr);
+-	for (ifa = rcu_dereference(dn_db->ifa_list);
+-	     ifa != NULL;
+-	     ifa = rcu_dereference(ifa->ifa_next)) {
+-		if (ifa->ifa_scope > scope)
+-			continue;
+-		if (!daddr) {
+-			saddr = ifa->ifa_local;
+-			break;
+-		}
+-		ret = dn_match_addr(daddr, ifa->ifa_local);
+-		if (ret > best_match)
+-			saddr = ifa->ifa_local;
+-		if (best_match == 0)
+-			saddr = ifa->ifa_local;
+-	}
+-	rcu_read_unlock();
+-
+-	return saddr;
+-}
+-
+-static inline __le16 __dn_fib_res_prefsrc(struct dn_fib_res *res)
+-{
+-	return dnet_select_source(DN_FIB_RES_DEV(*res), DN_FIB_RES_GW(*res), res->scope);
+-}
+-
+-static inline __le16 dn_fib_rules_map_destination(__le16 daddr, struct dn_fib_res *res)
+-{
+-	__le16 mask = dnet_make_mask(res->prefixlen);
+-	return (daddr&~mask)|res->fi->fib_nh->nh_gw;
+-}
+-
+-static int dn_route_output_slow(struct dst_entry **pprt, const struct flowidn *oldflp, int try_hard)
+-{
+-	struct flowidn fld = {
+-		.daddr = oldflp->daddr,
+-		.saddr = oldflp->saddr,
+-		.flowidn_scope = RT_SCOPE_UNIVERSE,
+-		.flowidn_mark = oldflp->flowidn_mark,
+-		.flowidn_iif = LOOPBACK_IFINDEX,
+-		.flowidn_oif = oldflp->flowidn_oif,
+-	};
+-	struct dn_route *rt = NULL;
+-	struct net_device *dev_out = NULL, *dev;
+-	struct neighbour *neigh = NULL;
+-	unsigned int hash;
+-	unsigned int flags = 0;
+-	struct dn_fib_res res = { .fi = NULL, .type = RTN_UNICAST };
+-	int err;
+-	int free_res = 0;
+-	__le16 gateway = 0;
+-
+-	if (decnet_debug_level & 16)
+-		printk(KERN_DEBUG
+-		       "dn_route_output_slow: dst=%04x src=%04x mark=%d"
+-		       " iif=%d oif=%d\n", le16_to_cpu(oldflp->daddr),
+-		       le16_to_cpu(oldflp->saddr),
+-		       oldflp->flowidn_mark, LOOPBACK_IFINDEX,
+-		       oldflp->flowidn_oif);
+-
+-	/* If we have an output interface, verify its a DECnet device */
+-	if (oldflp->flowidn_oif) {
+-		dev_out = dev_get_by_index(&init_net, oldflp->flowidn_oif);
+-		err = -ENODEV;
+-		if (dev_out && dev_out->dn_ptr == NULL) {
+-			dev_put(dev_out);
+-			dev_out = NULL;
+-		}
+-		if (dev_out == NULL)
+-			goto out;
+-	}
+-
+-	/* If we have a source address, verify that its a local address */
+-	if (oldflp->saddr) {
+-		err = -EADDRNOTAVAIL;
+-
+-		if (dev_out) {
+-			if (dn_dev_islocal(dev_out, oldflp->saddr))
+-				goto source_ok;
+-			dev_put(dev_out);
+-			goto out;
+-		}
+-		rcu_read_lock();
+-		for_each_netdev_rcu(&init_net, dev) {
+-			if (!dev->dn_ptr)
+-				continue;
+-			if (!dn_dev_islocal(dev, oldflp->saddr))
+-				continue;
+-			if ((dev->flags & IFF_LOOPBACK) &&
+-			    oldflp->daddr &&
+-			    !dn_dev_islocal(dev, oldflp->daddr))
+-				continue;
+-
+-			dev_out = dev;
+-			break;
+-		}
+-		rcu_read_unlock();
+-		if (dev_out == NULL)
+-			goto out;
+-		dev_hold(dev_out);
+-source_ok:
+-		;
+-	}
+-
+-	/* No destination? Assume its local */
+-	if (!fld.daddr) {
+-		fld.daddr = fld.saddr;
+-
+-		if (dev_out)
+-			dev_put(dev_out);
+-		err = -EINVAL;
+-		dev_out = init_net.loopback_dev;
+-		if (!dev_out->dn_ptr)
+-			goto out;
+-		err = -EADDRNOTAVAIL;
+-		dev_hold(dev_out);
+-		if (!fld.daddr) {
+-			fld.daddr =
+-			fld.saddr = dnet_select_source(dev_out, 0,
+-						       RT_SCOPE_HOST);
+-			if (!fld.daddr)
+-				goto out;
+-		}
+-		fld.flowidn_oif = LOOPBACK_IFINDEX;
+-		res.type = RTN_LOCAL;
+-		goto make_route;
+-	}
+-
+-	if (decnet_debug_level & 16)
+-		printk(KERN_DEBUG
+-		       "dn_route_output_slow: initial checks complete."
+-		       " dst=%04x src=%04x oif=%d try_hard=%d\n",
+-		       le16_to_cpu(fld.daddr), le16_to_cpu(fld.saddr),
+-		       fld.flowidn_oif, try_hard);
+-
+-	/*
+-	 * N.B. If the kernel is compiled without router support then
+-	 * dn_fib_lookup() will evaluate to non-zero so this if () block
+-	 * will always be executed.
+-	 */
+-	err = -ESRCH;
+-	if (try_hard || (err = dn_fib_lookup(&fld, &res)) != 0) {
+-		struct dn_dev *dn_db;
+-		if (err != -ESRCH)
+-			goto out;
+-		/*
+-		 * Here the fallback is basically the standard algorithm for
+-		 * routing in endnodes which is described in the DECnet routing
+-		 * docs
+-		 *
+-		 * If we are not trying hard, look in neighbour cache.
+-		 * The result is tested to ensure that if a specific output
+-		 * device/source address was requested, then we honour that
+-		 * here
+-		 */
+-		if (!try_hard) {
+-			neigh = neigh_lookup_nodev(&dn_neigh_table, &init_net, &fld.daddr);
+-			if (neigh) {
+-				if ((oldflp->flowidn_oif &&
+-				    (neigh->dev->ifindex != oldflp->flowidn_oif)) ||
+-				    (oldflp->saddr &&
+-				    (!dn_dev_islocal(neigh->dev,
+-						     oldflp->saddr)))) {
+-					neigh_release(neigh);
+-					neigh = NULL;
+-				} else {
+-					if (dev_out)
+-						dev_put(dev_out);
+-					if (dn_dev_islocal(neigh->dev, fld.daddr)) {
+-						dev_out = init_net.loopback_dev;
+-						res.type = RTN_LOCAL;
+-					} else {
+-						dev_out = neigh->dev;
+-					}
+-					dev_hold(dev_out);
+-					goto select_source;
+-				}
+-			}
+-		}
+-
+-		/* Not there? Perhaps its a local address */
+-		if (dev_out == NULL)
+-			dev_out = dn_dev_get_default();
+-		err = -ENODEV;
+-		if (dev_out == NULL)
+-			goto out;
+-		dn_db = rcu_dereference_raw(dev_out->dn_ptr);
+-		if (!dn_db)
+-			goto e_inval;
+-		/* Possible improvement - check all devices for local addr */
+-		if (dn_dev_islocal(dev_out, fld.daddr)) {
+-			dev_put(dev_out);
+-			dev_out = init_net.loopback_dev;
+-			dev_hold(dev_out);
+-			res.type = RTN_LOCAL;
+-			goto select_source;
+-		}
+-		/* Not local either.... try sending it to the default router */
+-		neigh = neigh_clone(dn_db->router);
+-		BUG_ON(neigh && neigh->dev != dev_out);
+-
+-		/* Ok then, we assume its directly connected and move on */
+-select_source:
+-		if (neigh)
+-			gateway = ((struct dn_neigh *)neigh)->addr;
+-		if (gateway == 0)
+-			gateway = fld.daddr;
+-		if (fld.saddr == 0) {
+-			fld.saddr = dnet_select_source(dev_out, gateway,
+-						       res.type == RTN_LOCAL ?
+-						       RT_SCOPE_HOST :
+-						       RT_SCOPE_LINK);
+-			if (fld.saddr == 0 && res.type != RTN_LOCAL)
+-				goto e_addr;
+-		}
+-		fld.flowidn_oif = dev_out->ifindex;
+-		goto make_route;
+-	}
+-	free_res = 1;
+-
+-	if (res.type == RTN_NAT)
+-		goto e_inval;
+-
+-	if (res.type == RTN_LOCAL) {
+-		if (!fld.saddr)
+-			fld.saddr = fld.daddr;
+-		if (dev_out)
+-			dev_put(dev_out);
+-		dev_out = init_net.loopback_dev;
+-		dev_hold(dev_out);
+-		if (!dev_out->dn_ptr)
+-			goto e_inval;
+-		fld.flowidn_oif = dev_out->ifindex;
+-		if (res.fi)
+-			dn_fib_info_put(res.fi);
+-		res.fi = NULL;
+-		goto make_route;
+-	}
+-
+-	if (res.fi->fib_nhs > 1 && fld.flowidn_oif == 0)
+-		dn_fib_select_multipath(&fld, &res);
+-
+-	/*
+-	 * We could add some logic to deal with default routes here and
+-	 * get rid of some of the special casing above.
+-	 */
+-
+-	if (!fld.saddr)
+-		fld.saddr = DN_FIB_RES_PREFSRC(res);
+-
+-	if (dev_out)
+-		dev_put(dev_out);
+-	dev_out = DN_FIB_RES_DEV(res);
+-	dev_hold(dev_out);
+-	fld.flowidn_oif = dev_out->ifindex;
+-	gateway = DN_FIB_RES_GW(res);
+-
+-make_route:
+-	if (dev_out->flags & IFF_LOOPBACK)
+-		flags |= RTCF_LOCAL;
+-
+-	rt = dst_alloc(&dn_dst_ops, dev_out, 0, DST_OBSOLETE_NONE, 0);
+-	if (rt == NULL)
+-		goto e_nobufs;
+-
+-	rt->dn_next = NULL;
+-	memset(&rt->fld, 0, sizeof(rt->fld));
+-	rt->fld.saddr        = oldflp->saddr;
+-	rt->fld.daddr        = oldflp->daddr;
+-	rt->fld.flowidn_oif  = oldflp->flowidn_oif;
+-	rt->fld.flowidn_iif  = 0;
+-	rt->fld.flowidn_mark = oldflp->flowidn_mark;
+-
+-	rt->rt_saddr      = fld.saddr;
+-	rt->rt_daddr      = fld.daddr;
+-	rt->rt_gateway    = gateway ? gateway : fld.daddr;
+-	rt->rt_local_src  = fld.saddr;
+-
+-	rt->rt_dst_map    = fld.daddr;
+-	rt->rt_src_map    = fld.saddr;
+-
+-	rt->n = neigh;
+-	neigh = NULL;
+-
+-	rt->dst.lastuse = jiffies;
+-	rt->dst.output  = dn_output;
+-	rt->dst.input   = dn_rt_bug;
+-	rt->rt_flags      = flags;
+-	if (flags & RTCF_LOCAL)
+-		rt->dst.input = dn_nsp_rx;
+-
+-	err = dn_rt_set_next_hop(rt, &res);
+-	if (err)
+-		goto e_neighbour;
+-
+-	hash = dn_hash(rt->fld.saddr, rt->fld.daddr);
+-	/* dn_insert_route() increments dst->__refcnt */
+-	dn_insert_route(rt, hash, (struct dn_route **)pprt);
+-
+-done:
+-	if (neigh)
+-		neigh_release(neigh);
+-	if (free_res)
+-		dn_fib_res_put(&res);
+-	if (dev_out)
+-		dev_put(dev_out);
+-out:
+-	return err;
+-
+-e_addr:
+-	err = -EADDRNOTAVAIL;
+-	goto done;
+-e_inval:
+-	err = -EINVAL;
+-	goto done;
+-e_nobufs:
+-	err = -ENOBUFS;
+-	goto done;
+-e_neighbour:
+-	dst_release_immediate(&rt->dst);
+-	goto e_nobufs;
+-}
+-
+-
+-/*
+- * N.B. The flags may be moved into the flowi at some future stage.
+- */
+-static int __dn_route_output_key(struct dst_entry **pprt, const struct flowidn *flp, int flags)
+-{
+-	unsigned int hash = dn_hash(flp->saddr, flp->daddr);
+-	struct dn_route *rt = NULL;
+-
+-	if (!(flags & MSG_TRYHARD)) {
+-		rcu_read_lock_bh();
+-		for (rt = rcu_dereference_bh(dn_rt_hash_table[hash].chain); rt;
+-			rt = rcu_dereference_bh(rt->dn_next)) {
+-			if ((flp->daddr == rt->fld.daddr) &&
+-			    (flp->saddr == rt->fld.saddr) &&
+-			    (flp->flowidn_mark == rt->fld.flowidn_mark) &&
+-			    dn_is_output_route(rt) &&
+-			    (rt->fld.flowidn_oif == flp->flowidn_oif)) {
+-				dst_hold_and_use(&rt->dst, jiffies);
+-				rcu_read_unlock_bh();
+-				*pprt = &rt->dst;
+-				return 0;
+-			}
+-		}
+-		rcu_read_unlock_bh();
+-	}
+-
+-	return dn_route_output_slow(pprt, flp, flags);
+-}
+-
+-static int dn_route_output_key(struct dst_entry **pprt, struct flowidn *flp, int flags)
+-{
+-	int err;
+-
+-	err = __dn_route_output_key(pprt, flp, flags);
+-	if (err == 0 && flp->flowidn_proto) {
+-		*pprt = xfrm_lookup(&init_net, *pprt,
+-				    flowidn_to_flowi(flp), NULL, 0);
+-		if (IS_ERR(*pprt)) {
+-			err = PTR_ERR(*pprt);
+-			*pprt = NULL;
+-		}
+-	}
+-	return err;
+-}
+-
+-int dn_route_output_sock(struct dst_entry __rcu **pprt, struct flowidn *fl, struct sock *sk, int flags)
+-{
+-	int err;
+-
+-	err = __dn_route_output_key(pprt, fl, flags & MSG_TRYHARD);
+-	if (err == 0 && fl->flowidn_proto) {
+-		*pprt = xfrm_lookup(&init_net, *pprt,
+-				    flowidn_to_flowi(fl), sk, 0);
+-		if (IS_ERR(*pprt)) {
+-			err = PTR_ERR(*pprt);
+-			*pprt = NULL;
+-		}
+-	}
+-	return err;
+-}
+-
+-static int dn_route_input_slow(struct sk_buff *skb)
+-{
+-	struct dn_route *rt = NULL;
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	struct net_device *in_dev = skb->dev;
+-	struct net_device *out_dev = NULL;
+-	struct dn_dev *dn_db;
+-	struct neighbour *neigh = NULL;
+-	unsigned int hash;
+-	int flags = 0;
+-	__le16 gateway = 0;
+-	__le16 local_src = 0;
+-	struct flowidn fld = {
+-		.daddr = cb->dst,
+-		.saddr = cb->src,
+-		.flowidn_scope = RT_SCOPE_UNIVERSE,
+-		.flowidn_mark = skb->mark,
+-		.flowidn_iif = skb->dev->ifindex,
+-	};
+-	struct dn_fib_res res = { .fi = NULL, .type = RTN_UNREACHABLE };
+-	int err = -EINVAL;
+-	int free_res = 0;
+-
+-	dev_hold(in_dev);
+-
+-	if ((dn_db = rcu_dereference(in_dev->dn_ptr)) == NULL)
+-		goto out;
+-
+-	/* Zero source addresses are not allowed */
+-	if (fld.saddr == 0)
+-		goto out;
+-
+-	/*
+-	 * In this case we've just received a packet from a source
+-	 * outside ourselves pretending to come from us. We don't
+-	 * allow it any further to prevent routing loops, spoofing and
+-	 * other nasties. Loopback packets already have the dst attached
+-	 * so this only affects packets which have originated elsewhere.
+-	 */
+-	err  = -ENOTUNIQ;
+-	if (dn_dev_islocal(in_dev, cb->src))
+-		goto out;
+-
+-	err = dn_fib_lookup(&fld, &res);
+-	if (err) {
+-		if (err != -ESRCH)
+-			goto out;
+-		/*
+-		 * Is the destination us ?
+-		 */
+-		if (!dn_dev_islocal(in_dev, cb->dst))
+-			goto e_inval;
+-
+-		res.type = RTN_LOCAL;
+-	} else {
+-		__le16 src_map = fld.saddr;
+-		free_res = 1;
+-
+-		out_dev = DN_FIB_RES_DEV(res);
+-		if (out_dev == NULL) {
+-			net_crit_ratelimited("Bug in dn_route_input_slow() No output device\n");
+-			goto e_inval;
+-		}
+-		dev_hold(out_dev);
+-
+-		if (res.r)
+-			src_map = fld.saddr; /* no NAT support for now */
+-
+-		gateway = DN_FIB_RES_GW(res);
+-		if (res.type == RTN_NAT) {
+-			fld.daddr = dn_fib_rules_map_destination(fld.daddr, &res);
+-			dn_fib_res_put(&res);
+-			free_res = 0;
+-			if (dn_fib_lookup(&fld, &res))
+-				goto e_inval;
+-			free_res = 1;
+-			if (res.type != RTN_UNICAST)
+-				goto e_inval;
+-			flags |= RTCF_DNAT;
+-			gateway = fld.daddr;
+-		}
+-		fld.saddr = src_map;
+-	}
+-
+-	switch(res.type) {
+-	case RTN_UNICAST:
+-		/*
+-		 * Forwarding check here, we only check for forwarding
+-		 * being turned off, if you want to only forward intra
+-		 * area, its up to you to set the routing tables up
+-		 * correctly.
+-		 */
+-		if (dn_db->parms.forwarding == 0)
+-			goto e_inval;
+-
+-		if (res.fi->fib_nhs > 1 && fld.flowidn_oif == 0)
+-			dn_fib_select_multipath(&fld, &res);
+-
+-		/*
+-		 * Check for out_dev == in_dev. We use the RTCF_DOREDIRECT
+-		 * flag as a hint to set the intra-ethernet bit when
+-		 * forwarding. If we've got NAT in operation, we don't do
+-		 * this optimisation.
+-		 */
+-		if (out_dev == in_dev && !(flags & RTCF_NAT))
+-			flags |= RTCF_DOREDIRECT;
+-
+-		local_src = DN_FIB_RES_PREFSRC(res);
+-
+-	case RTN_BLACKHOLE:
+-	case RTN_UNREACHABLE:
+-		break;
+-	case RTN_LOCAL:
+-		flags |= RTCF_LOCAL;
+-		fld.saddr = cb->dst;
+-		fld.daddr = cb->src;
+-
+-		/* Routing tables gave us a gateway */
+-		if (gateway)
+-			goto make_route;
+-
+-		/* Packet was intra-ethernet, so we know its on-link */
+-		if (cb->rt_flags & DN_RT_F_IE) {
+-			gateway = cb->src;
+-			goto make_route;
+-		}
+-
+-		/* Use the default router if there is one */
+-		neigh = neigh_clone(dn_db->router);
+-		if (neigh) {
+-			gateway = ((struct dn_neigh *)neigh)->addr;
+-			goto make_route;
+-		}
+-
+-		/* Close eyes and pray */
+-		gateway = cb->src;
+-		goto make_route;
+-	default:
+-		goto e_inval;
+-	}
+-
+-make_route:
+-	rt = dst_alloc(&dn_dst_ops, out_dev, 1, DST_OBSOLETE_NONE, 0);
+-	if (rt == NULL)
+-		goto e_nobufs;
+-
+-	rt->dn_next = NULL;
+-	memset(&rt->fld, 0, sizeof(rt->fld));
+-	rt->rt_saddr      = fld.saddr;
+-	rt->rt_daddr      = fld.daddr;
+-	rt->rt_gateway    = fld.daddr;
+-	if (gateway)
+-		rt->rt_gateway = gateway;
+-	rt->rt_local_src  = local_src ? local_src : rt->rt_saddr;
+-
+-	rt->rt_dst_map    = fld.daddr;
+-	rt->rt_src_map    = fld.saddr;
+-
+-	rt->fld.saddr        = cb->src;
+-	rt->fld.daddr        = cb->dst;
+-	rt->fld.flowidn_oif  = 0;
+-	rt->fld.flowidn_iif  = in_dev->ifindex;
+-	rt->fld.flowidn_mark = fld.flowidn_mark;
+-
+-	rt->n = neigh;
+-	rt->dst.lastuse = jiffies;
+-	rt->dst.output = dn_rt_bug_out;
+-	switch (res.type) {
+-	case RTN_UNICAST:
+-		rt->dst.input = dn_forward;
+-		break;
+-	case RTN_LOCAL:
+-		rt->dst.output = dn_output;
+-		rt->dst.input = dn_nsp_rx;
+-		rt->dst.dev = in_dev;
+-		flags |= RTCF_LOCAL;
+-		break;
+-	default:
+-	case RTN_UNREACHABLE:
+-	case RTN_BLACKHOLE:
+-		rt->dst.input = dst_discard;
+-	}
+-	rt->rt_flags = flags;
+-
+-	err = dn_rt_set_next_hop(rt, &res);
+-	if (err)
+-		goto e_neighbour;
+-
+-	hash = dn_hash(rt->fld.saddr, rt->fld.daddr);
+-	/* dn_insert_route() increments dst->__refcnt */
+-	dn_insert_route(rt, hash, &rt);
+-	skb_dst_set(skb, &rt->dst);
+-
+-done:
+-	if (neigh)
+-		neigh_release(neigh);
+-	if (free_res)
+-		dn_fib_res_put(&res);
+-	dev_put(in_dev);
+-	if (out_dev)
+-		dev_put(out_dev);
+-out:
+-	return err;
+-
+-e_inval:
+-	err = -EINVAL;
+-	goto done;
+-
+-e_nobufs:
+-	err = -ENOBUFS;
+-	goto done;
+-
+-e_neighbour:
+-	dst_release_immediate(&rt->dst);
+-	goto done;
+-}
+-
+-static int dn_route_input(struct sk_buff *skb)
+-{
+-	struct dn_route *rt;
+-	struct dn_skb_cb *cb = DN_SKB_CB(skb);
+-	unsigned int hash = dn_hash(cb->src, cb->dst);
+-
+-	if (skb_dst(skb))
+-		return 0;
+-
+-	rcu_read_lock();
+-	for(rt = rcu_dereference(dn_rt_hash_table[hash].chain); rt != NULL;
+-	    rt = rcu_dereference(rt->dn_next)) {
+-		if ((rt->fld.saddr == cb->src) &&
+-		    (rt->fld.daddr == cb->dst) &&
+-		    (rt->fld.flowidn_oif == 0) &&
+-		    (rt->fld.flowidn_mark == skb->mark) &&
+-		    (rt->fld.flowidn_iif == cb->iif)) {
+-			dst_hold_and_use(&rt->dst, jiffies);
+-			rcu_read_unlock();
+-			skb_dst_set(skb, (struct dst_entry *)rt);
+-			return 0;
+-		}
+-	}
+-	rcu_read_unlock();
+-
+-	return dn_route_input_slow(skb);
+-}
+-
+-static int dn_rt_fill_info(struct sk_buff *skb, u32 portid, u32 seq,
+-			   int event, int nowait, unsigned int flags)
+-{
+-	struct dn_route *rt = (struct dn_route *)skb_dst(skb);
+-	struct rtmsg *r;
+-	struct nlmsghdr *nlh;
+-	long expires;
+-
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*r), flags);
+-	if (!nlh)
+-		return -EMSGSIZE;
+-
+-	r = nlmsg_data(nlh);
+-	r->rtm_family = AF_DECnet;
+-	r->rtm_dst_len = 16;
+-	r->rtm_src_len = 0;
+-	r->rtm_tos = 0;
+-	r->rtm_table = RT_TABLE_MAIN;
+-	r->rtm_type = rt->rt_type;
+-	r->rtm_flags = (rt->rt_flags & ~0xFFFF) | RTM_F_CLONED;
+-	r->rtm_scope = RT_SCOPE_UNIVERSE;
+-	r->rtm_protocol = RTPROT_UNSPEC;
+-
+-	if (rt->rt_flags & RTCF_NOTIFY)
+-		r->rtm_flags |= RTM_F_NOTIFY;
+-
+-	if (nla_put_u32(skb, RTA_TABLE, RT_TABLE_MAIN) < 0 ||
+-	    nla_put_le16(skb, RTA_DST, rt->rt_daddr) < 0)
+-		goto errout;
+-
+-	if (rt->fld.saddr) {
+-		r->rtm_src_len = 16;
+-		if (nla_put_le16(skb, RTA_SRC, rt->fld.saddr) < 0)
+-			goto errout;
+-	}
+-	if (rt->dst.dev &&
+-	    nla_put_u32(skb, RTA_OIF, rt->dst.dev->ifindex) < 0)
+-		goto errout;
+-
+-	/*
+-	 * Note to self - change this if input routes reverse direction when
+-	 * they deal only with inputs and not with replies like they do
+-	 * currently.
+-	 */
+-	if (nla_put_le16(skb, RTA_PREFSRC, rt->rt_local_src) < 0)
+-		goto errout;
+-
+-	if (rt->rt_daddr != rt->rt_gateway &&
+-	    nla_put_le16(skb, RTA_GATEWAY, rt->rt_gateway) < 0)
+-		goto errout;
+-
+-	if (rtnetlink_put_metrics(skb, dst_metrics_ptr(&rt->dst)) < 0)
+-		goto errout;
+-
+-	expires = rt->dst.expires ? rt->dst.expires - jiffies : 0;
+-	if (rtnl_put_cacheinfo(skb, &rt->dst, 0, expires,
+-			       rt->dst.error) < 0)
+-		goto errout;
+-
+-	if (dn_is_input_route(rt) &&
+-	    nla_put_u32(skb, RTA_IIF, rt->fld.flowidn_iif) < 0)
+-		goto errout;
+-
+-	nlmsg_end(skb, nlh);
+-	return 0;
+-
+-errout:
+-	nlmsg_cancel(skb, nlh);
+-	return -EMSGSIZE;
+-}
+-
+-const struct nla_policy rtm_dn_policy[RTA_MAX + 1] = {
+-	[RTA_DST]		= { .type = NLA_U16 },
+-	[RTA_SRC]		= { .type = NLA_U16 },
+-	[RTA_IIF]		= { .type = NLA_U32 },
+-	[RTA_OIF]		= { .type = NLA_U32 },
+-	[RTA_GATEWAY]		= { .type = NLA_U16 },
+-	[RTA_PRIORITY]		= { .type = NLA_U32 },
+-	[RTA_PREFSRC]		= { .type = NLA_U16 },
+-	[RTA_METRICS]		= { .type = NLA_NESTED },
+-	[RTA_MULTIPATH]		= { .type = NLA_NESTED },
+-	[RTA_TABLE]		= { .type = NLA_U32 },
+-	[RTA_MARK]		= { .type = NLA_U32 },
+-};
+-
+-/*
+- * This is called by both endnodes and routers now.
+- */
+-static int dn_cache_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+-			     struct netlink_ext_ack *extack)
+-{
+-	struct net *net = sock_net(in_skb->sk);
+-	struct rtmsg *rtm = nlmsg_data(nlh);
+-	struct dn_route *rt = NULL;
+-	struct dn_skb_cb *cb;
+-	int err;
+-	struct sk_buff *skb;
+-	struct flowidn fld;
+-	struct nlattr *tb[RTA_MAX+1];
+-
+-	if (!net_eq(net, &init_net))
+-		return -EINVAL;
+-
+-	err = nlmsg_parse_deprecated(nlh, sizeof(*rtm), tb, RTA_MAX,
+-				     rtm_dn_policy, extack);
+-	if (err < 0)
+-		return err;
+-
+-	memset(&fld, 0, sizeof(fld));
+-	fld.flowidn_proto = DNPROTO_NSP;
+-
+-	skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+-	if (skb == NULL)
+-		return -ENOBUFS;
+-	skb_reset_mac_header(skb);
+-	cb = DN_SKB_CB(skb);
+-
+-	if (tb[RTA_SRC])
+-		fld.saddr = nla_get_le16(tb[RTA_SRC]);
+-
+-	if (tb[RTA_DST])
+-		fld.daddr = nla_get_le16(tb[RTA_DST]);
+-
+-	if (tb[RTA_IIF])
+-		fld.flowidn_iif = nla_get_u32(tb[RTA_IIF]);
+-
+-	if (fld.flowidn_iif) {
+-		struct net_device *dev;
+-		dev = __dev_get_by_index(&init_net, fld.flowidn_iif);
+-		if (!dev || !dev->dn_ptr) {
+-			kfree_skb(skb);
+-			return -ENODEV;
+-		}
+-		skb->protocol = htons(ETH_P_DNA_RT);
+-		skb->dev = dev;
+-		cb->src = fld.saddr;
+-		cb->dst = fld.daddr;
+-		local_bh_disable();
+-		err = dn_route_input(skb);
+-		local_bh_enable();
+-		memset(cb, 0, sizeof(struct dn_skb_cb));
+-		rt = (struct dn_route *)skb_dst(skb);
+-		if (!err && -rt->dst.error)
+-			err = rt->dst.error;
+-	} else {
+-		if (tb[RTA_OIF])
+-			fld.flowidn_oif = nla_get_u32(tb[RTA_OIF]);
+-
+-		err = dn_route_output_key((struct dst_entry **)&rt, &fld, 0);
+-	}
+-
+-	skb->dev = NULL;
+-	if (err)
+-		goto out_free;
+-	skb_dst_set(skb, &rt->dst);
+-	if (rtm->rtm_flags & RTM_F_NOTIFY)
+-		rt->rt_flags |= RTCF_NOTIFY;
+-
+-	err = dn_rt_fill_info(skb, NETLINK_CB(in_skb).portid, nlh->nlmsg_seq, RTM_NEWROUTE, 0, 0);
+-	if (err < 0) {
+-		err = -EMSGSIZE;
+-		goto out_free;
+-	}
+-
+-	return rtnl_unicast(skb, &init_net, NETLINK_CB(in_skb).portid);
+-
+-out_free:
+-	kfree_skb(skb);
+-	return err;
+-}
+-
+-/*
+- * For routers, this is called from dn_fib_dump, but for endnodes its
+- * called directly from the rtnetlink dispatch table.
+- */
+-int dn_cache_dump(struct sk_buff *skb, struct netlink_callback *cb)
+-{
+-	struct net *net = sock_net(skb->sk);
+-	struct dn_route *rt;
+-	int h, s_h;
+-	int idx, s_idx;
+-	struct rtmsg *rtm;
+-
+-	if (!net_eq(net, &init_net))
+-		return 0;
+-
+-	if (nlmsg_len(cb->nlh) < sizeof(struct rtmsg))
+-		return -EINVAL;
+-
+-	rtm = nlmsg_data(cb->nlh);
+-	if (!(rtm->rtm_flags & RTM_F_CLONED))
+-		return 0;
+-
+-	s_h = cb->args[0];
+-	s_idx = idx = cb->args[1];
+-	for(h = 0; h <= dn_rt_hash_mask; h++) {
+-		if (h < s_h)
+-			continue;
+-		if (h > s_h)
+-			s_idx = 0;
+-		rcu_read_lock_bh();
+-		for(rt = rcu_dereference_bh(dn_rt_hash_table[h].chain), idx = 0;
+-			rt;
+-			rt = rcu_dereference_bh(rt->dn_next), idx++) {
+-			if (idx < s_idx)
+-				continue;
+-			skb_dst_set(skb, dst_clone(&rt->dst));
+-			if (dn_rt_fill_info(skb, NETLINK_CB(cb->skb).portid,
+-					cb->nlh->nlmsg_seq, RTM_NEWROUTE,
+-					1, NLM_F_MULTI) < 0) {
+-				skb_dst_drop(skb);
+-				rcu_read_unlock_bh();
+-				goto done;
+-			}
+-			skb_dst_drop(skb);
+-		}
+-		rcu_read_unlock_bh();
+-	}
+-
+-done:
+-	cb->args[0] = h;
+-	cb->args[1] = idx;
+-	return skb->len;
+-}
+-
+-#ifdef CONFIG_PROC_FS
+-struct dn_rt_cache_iter_state {
+-	int bucket;
+-};
+-
+-static struct dn_route *dn_rt_cache_get_first(struct seq_file *seq)
+-{
+-	struct dn_route *rt = NULL;
+-	struct dn_rt_cache_iter_state *s = seq->private;
+-
+-	for(s->bucket = dn_rt_hash_mask; s->bucket >= 0; --s->bucket) {
+-		rcu_read_lock_bh();
+-		rt = rcu_dereference_bh(dn_rt_hash_table[s->bucket].chain);
+-		if (rt)
+-			break;
+-		rcu_read_unlock_bh();
+-	}
+-	return rt;
+-}
+-
+-static struct dn_route *dn_rt_cache_get_next(struct seq_file *seq, struct dn_route *rt)
+-{
+-	struct dn_rt_cache_iter_state *s = seq->private;
+-
+-	rt = rcu_dereference_bh(rt->dn_next);
+-	while (!rt) {
+-		rcu_read_unlock_bh();
+-		if (--s->bucket < 0)
+-			break;
+-		rcu_read_lock_bh();
+-		rt = rcu_dereference_bh(dn_rt_hash_table[s->bucket].chain);
+-	}
+-	return rt;
+-}
+-
+-static void *dn_rt_cache_seq_start(struct seq_file *seq, loff_t *pos)
+-{
+-	struct dn_route *rt = dn_rt_cache_get_first(seq);
+-
+-	if (rt) {
+-		while(*pos && (rt = dn_rt_cache_get_next(seq, rt)))
+-			--*pos;
+-	}
+-	return *pos ? NULL : rt;
+-}
+-
+-static void *dn_rt_cache_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+-{
+-	struct dn_route *rt = dn_rt_cache_get_next(seq, v);
+-	++*pos;
+-	return rt;
+-}
+-
+-static void dn_rt_cache_seq_stop(struct seq_file *seq, void *v)
+-{
+-	if (v)
+-		rcu_read_unlock_bh();
+-}
+-
+-static int dn_rt_cache_seq_show(struct seq_file *seq, void *v)
+-{
+-	struct dn_route *rt = v;
+-	char buf1[DN_ASCBUF_LEN], buf2[DN_ASCBUF_LEN];
+-
+-	seq_printf(seq, "%-8s %-7s %-7s %04d %04d %04d\n",
+-		   rt->dst.dev ? rt->dst.dev->name : "*",
+-		   dn_addr2asc(le16_to_cpu(rt->rt_daddr), buf1),
+-		   dn_addr2asc(le16_to_cpu(rt->rt_saddr), buf2),
+-		   atomic_read(&rt->dst.__refcnt),
+-		   rt->dst.__use, 0);
+-	return 0;
+-}
+-
+-static const struct seq_operations dn_rt_cache_seq_ops = {
+-	.start	= dn_rt_cache_seq_start,
+-	.next	= dn_rt_cache_seq_next,
+-	.stop	= dn_rt_cache_seq_stop,
+-	.show	= dn_rt_cache_seq_show,
+-};
+-#endif /* CONFIG_PROC_FS */
+-
+-void __init dn_route_init(void)
+-{
+-	int i, goal, order;
+-
+-	dn_dst_ops.kmem_cachep =
+-		kmem_cache_create("dn_dst_cache", sizeof(struct dn_route), 0,
+-				  SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
+-	dst_entries_init(&dn_dst_ops);
+-	timer_setup(&dn_route_timer, dn_dst_check_expire, 0);
+-	dn_route_timer.expires = jiffies + decnet_dst_gc_interval * HZ;
+-	add_timer(&dn_route_timer);
+-
+-	goal = totalram_pages() >> (26 - PAGE_SHIFT);
+-
+-	for(order = 0; (1UL << order) < goal; order++)
+-		/* NOTHING */;
+-
+-	/*
+-	 * Only want 1024 entries max, since the table is very, very unlikely
+-	 * to be larger than that.
+-	 */
+-	while(order && ((((1UL << order) * PAGE_SIZE) /
+-				sizeof(struct dn_rt_hash_bucket)) >= 2048))
+-		order--;
+-
+-	do {
+-		dn_rt_hash_mask = (1UL << order) * PAGE_SIZE /
+-			sizeof(struct dn_rt_hash_bucket);
+-		while(dn_rt_hash_mask & (dn_rt_hash_mask - 1))
+-			dn_rt_hash_mask--;
+-		dn_rt_hash_table = (struct dn_rt_hash_bucket *)
+-			__get_free_pages(GFP_ATOMIC, order);
+-	} while (dn_rt_hash_table == NULL && --order > 0);
+-
+-	if (!dn_rt_hash_table)
+-		panic("Failed to allocate DECnet route cache hash table\n");
+-
+-	printk(KERN_INFO
+-		"DECnet: Routing cache hash table of %u buckets, %ldKbytes\n",
+-		dn_rt_hash_mask,
+-		(long)(dn_rt_hash_mask*sizeof(struct dn_rt_hash_bucket))/1024);
+-
+-	dn_rt_hash_mask--;
+-	for(i = 0; i <= dn_rt_hash_mask; i++) {
+-		spin_lock_init(&dn_rt_hash_table[i].lock);
+-		dn_rt_hash_table[i].chain = NULL;
+-	}
+-
+-	dn_dst_ops.gc_thresh = (dn_rt_hash_mask + 1);
+-
+-	proc_create_seq_private("decnet_cache", 0444, init_net.proc_net,
+-			&dn_rt_cache_seq_ops,
+-			sizeof(struct dn_rt_cache_iter_state), NULL);
+-
+-#ifdef CONFIG_DECNET_ROUTER
+-	rtnl_register_module(THIS_MODULE, PF_DECnet, RTM_GETROUTE,
+-			     dn_cache_getroute, dn_fib_dump, 0);
+-#else
+-	rtnl_register_module(THIS_MODULE, PF_DECnet, RTM_GETROUTE,
+-			     dn_cache_getroute, dn_cache_dump, 0);
+-#endif
+-}
+-
+-void __exit dn_route_cleanup(void)
+-{
+-	del_timer(&dn_route_timer);
+-	dn_run_flush(NULL);
+-
+-	remove_proc_entry("decnet_cache", init_net.proc_net);
+-	dst_entries_destroy(&dn_dst_ops);
+-}
+diff --git a/net/decnet/dn_rules.c b/net/decnet/dn_rules.c
+deleted file mode 100644
+index 4a4e3c17740cb..0000000000000
+--- a/net/decnet/dn_rules.c
++++ /dev/null
+@@ -1,258 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-
+-/*
+- * DECnet       An implementation of the DECnet protocol suite for the LINUX
+- *              operating system.  DECnet is implemented using the  BSD Socket
+- *              interface as the means of communication with the user level.
+- *
+- *              DECnet Routing Forwarding Information Base (Rules)
+- *
+- * Author:      Steve Whitehouse <SteveW@ACM.org>
+- *              Mostly copied from Alexey Kuznetsov's ipv4/fib_rules.c
+- *
+- *
+- * Changes:
+- *              Steve Whitehouse <steve@chygwyn.com>
+- *              Updated for Thomas Graf's generic rules
+- *
+- */
+-#include <linux/net.h>
+-#include <linux/init.h>
+-#include <linux/netlink.h>
+-#include <linux/rtnetlink.h>
+-#include <linux/netdevice.h>
+-#include <linux/spinlock.h>
+-#include <linux/list.h>
+-#include <linux/rcupdate.h>
+-#include <linux/export.h>
+-#include <net/neighbour.h>
+-#include <net/dst.h>
+-#include <net/flow.h>
+-#include <net/fib_rules.h>
+-#include <net/dn.h>
+-#include <net/dn_fib.h>
+-#include <net/dn_neigh.h>
+-#include <net/dn_dev.h>
+-#include <net/dn_route.h>
+-
+-static struct fib_rules_ops *dn_fib_rules_ops;
+-
+-struct dn_fib_rule
+-{
+-	struct fib_rule		common;
+-	unsigned char		dst_len;
+-	unsigned char		src_len;
+-	__le16			src;
+-	__le16			srcmask;
+-	__le16			dst;
+-	__le16			dstmask;
+-	__le16			srcmap;
+-	u8			flags;
+-};
+-
+-
+-int dn_fib_lookup(struct flowidn *flp, struct dn_fib_res *res)
+-{
+-	struct fib_lookup_arg arg = {
+-		.result = res,
+-	};
+-	int err;
+-
+-	err = fib_rules_lookup(dn_fib_rules_ops,
+-			       flowidn_to_flowi(flp), 0, &arg);
+-	res->r = arg.rule;
+-
+-	return err;
+-}
+-
+-static int dn_fib_rule_action(struct fib_rule *rule, struct flowi *flp,
+-			      int flags, struct fib_lookup_arg *arg)
+-{
+-	struct flowidn *fld = &flp->u.dn;
+-	int err = -EAGAIN;
+-	struct dn_fib_table *tbl;
+-
+-	switch(rule->action) {
+-	case FR_ACT_TO_TBL:
+-		break;
+-
+-	case FR_ACT_UNREACHABLE:
+-		err = -ENETUNREACH;
+-		goto errout;
+-
+-	case FR_ACT_PROHIBIT:
+-		err = -EACCES;
+-		goto errout;
+-
+-	case FR_ACT_BLACKHOLE:
+-	default:
+-		err = -EINVAL;
+-		goto errout;
+-	}
+-
+-	tbl = dn_fib_get_table(rule->table, 0);
+-	if (tbl == NULL)
+-		goto errout;
+-
+-	err = tbl->lookup(tbl, fld, (struct dn_fib_res *)arg->result);
+-	if (err > 0)
+-		err = -EAGAIN;
+-errout:
+-	return err;
+-}
+-
+-static const struct nla_policy dn_fib_rule_policy[FRA_MAX+1] = {
+-	FRA_GENERIC_POLICY,
+-};
+-
+-static int dn_fib_rule_match(struct fib_rule *rule, struct flowi *fl, int flags)
+-{
+-	struct dn_fib_rule *r = (struct dn_fib_rule *)rule;
+-	struct flowidn *fld = &fl->u.dn;
+-	__le16 daddr = fld->daddr;
+-	__le16 saddr = fld->saddr;
+-
+-	if (((saddr ^ r->src) & r->srcmask) ||
+-	    ((daddr ^ r->dst) & r->dstmask))
+-		return 0;
+-
+-	return 1;
+-}
+-
+-static int dn_fib_rule_configure(struct fib_rule *rule, struct sk_buff *skb,
+-				 struct fib_rule_hdr *frh,
+-				 struct nlattr **tb,
+-				 struct netlink_ext_ack *extack)
+-{
+-	int err = -EINVAL;
+-	struct dn_fib_rule *r = (struct dn_fib_rule *)rule;
+-
+-	if (frh->tos) {
+-		NL_SET_ERR_MSG(extack, "Invalid tos value");
+-		goto  errout;
+-	}
+-
+-	if (rule->table == RT_TABLE_UNSPEC) {
+-		if (rule->action == FR_ACT_TO_TBL) {
+-			struct dn_fib_table *table;
+-
+-			table = dn_fib_empty_table();
+-			if (table == NULL) {
+-				err = -ENOBUFS;
+-				goto errout;
+-			}
+-
+-			rule->table = table->n;
+-		}
+-	}
+-
+-	if (frh->src_len)
+-		r->src = nla_get_le16(tb[FRA_SRC]);
+-
+-	if (frh->dst_len)
+-		r->dst = nla_get_le16(tb[FRA_DST]);
+-
+-	r->src_len = frh->src_len;
+-	r->srcmask = dnet_make_mask(r->src_len);
+-	r->dst_len = frh->dst_len;
+-	r->dstmask = dnet_make_mask(r->dst_len);
+-	err = 0;
+-errout:
+-	return err;
+-}
+-
+-static int dn_fib_rule_compare(struct fib_rule *rule, struct fib_rule_hdr *frh,
+-			       struct nlattr **tb)
+-{
+-	struct dn_fib_rule *r = (struct dn_fib_rule *)rule;
+-
+-	if (frh->src_len && (r->src_len != frh->src_len))
+-		return 0;
+-
+-	if (frh->dst_len && (r->dst_len != frh->dst_len))
+-		return 0;
+-
+-	if (frh->src_len && (r->src != nla_get_le16(tb[FRA_SRC])))
+-		return 0;
+-
+-	if (frh->dst_len && (r->dst != nla_get_le16(tb[FRA_DST])))
+-		return 0;
+-
+-	return 1;
+-}
+-
+-unsigned int dnet_addr_type(__le16 addr)
+-{
+-	struct flowidn fld = { .daddr = addr };
+-	struct dn_fib_res res;
+-	unsigned int ret = RTN_UNICAST;
+-	struct dn_fib_table *tb = dn_fib_get_table(RT_TABLE_LOCAL, 0);
+-
+-	res.r = NULL;
+-
+-	if (tb) {
+-		if (!tb->lookup(tb, &fld, &res)) {
+-			ret = res.type;
+-			dn_fib_res_put(&res);
+-		}
+-	}
+-	return ret;
+-}
+-
+-static int dn_fib_rule_fill(struct fib_rule *rule, struct sk_buff *skb,
+-			    struct fib_rule_hdr *frh)
+-{
+-	struct dn_fib_rule *r = (struct dn_fib_rule *)rule;
+-
+-	frh->dst_len = r->dst_len;
+-	frh->src_len = r->src_len;
+-	frh->tos = 0;
+-
+-	if ((r->dst_len &&
+-	     nla_put_le16(skb, FRA_DST, r->dst)) ||
+-	    (r->src_len &&
+-	     nla_put_le16(skb, FRA_SRC, r->src)))
+-		goto nla_put_failure;
+-	return 0;
+-
+-nla_put_failure:
+-	return -ENOBUFS;
+-}
+-
+-static void dn_fib_rule_flush_cache(struct fib_rules_ops *ops)
+-{
+-	dn_rt_cache_flush(-1);
+-}
+-
+-static const struct fib_rules_ops __net_initconst dn_fib_rules_ops_template = {
+-	.family		= AF_DECnet,
+-	.rule_size	= sizeof(struct dn_fib_rule),
+-	.addr_size	= sizeof(u16),
+-	.action		= dn_fib_rule_action,
+-	.match		= dn_fib_rule_match,
+-	.configure	= dn_fib_rule_configure,
+-	.compare	= dn_fib_rule_compare,
+-	.fill		= dn_fib_rule_fill,
+-	.flush_cache	= dn_fib_rule_flush_cache,
+-	.nlgroup	= RTNLGRP_DECnet_RULE,
+-	.policy		= dn_fib_rule_policy,
+-	.owner		= THIS_MODULE,
+-	.fro_net	= &init_net,
+-};
+-
+-void __init dn_fib_rules_init(void)
+-{
+-	dn_fib_rules_ops =
+-		fib_rules_register(&dn_fib_rules_ops_template, &init_net);
+-	BUG_ON(IS_ERR(dn_fib_rules_ops));
+-	BUG_ON(fib_default_rule_add(dn_fib_rules_ops, 0x7fff,
+-			            RT_TABLE_MAIN, 0));
+-}
+-
+-void __exit dn_fib_rules_cleanup(void)
+-{
+-	rtnl_lock();
+-	fib_rules_unregister(dn_fib_rules_ops);
+-	rtnl_unlock();
+-	rcu_barrier();
+-}
+diff --git a/net/decnet/dn_table.c b/net/decnet/dn_table.c
+deleted file mode 100644
+index 4086f9c746af4..0000000000000
+--- a/net/decnet/dn_table.c
++++ /dev/null
+@@ -1,929 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * DECnet       An implementation of the DECnet protocol suite for the LINUX
+- *              operating system.  DECnet is implemented using the  BSD Socket
+- *              interface as the means of communication with the user level.
+- *
+- *              DECnet Routing Forwarding Information Base (Routing Tables)
+- *
+- * Author:      Steve Whitehouse <SteveW@ACM.org>
+- *              Mostly copied from the IPv4 routing code
+- *
+- *
+- * Changes:
+- *
+- */
+-#include <linux/string.h>
+-#include <linux/net.h>
+-#include <linux/socket.h>
+-#include <linux/slab.h>
+-#include <linux/sockios.h>
+-#include <linux/init.h>
+-#include <linux/skbuff.h>
+-#include <linux/rtnetlink.h>
+-#include <linux/proc_fs.h>
+-#include <linux/netdevice.h>
+-#include <linux/timer.h>
+-#include <linux/spinlock.h>
+-#include <linux/atomic.h>
+-#include <linux/uaccess.h>
+-#include <linux/route.h> /* RTF_xxx */
+-#include <net/neighbour.h>
+-#include <net/netlink.h>
+-#include <net/tcp.h>
+-#include <net/dst.h>
+-#include <net/flow.h>
+-#include <net/fib_rules.h>
+-#include <net/dn.h>
+-#include <net/dn_route.h>
+-#include <net/dn_fib.h>
+-#include <net/dn_neigh.h>
+-#include <net/dn_dev.h>
+-
+-struct dn_zone
+-{
+-	struct dn_zone		*dz_next;
+-	struct dn_fib_node 	**dz_hash;
+-	int			dz_nent;
+-	int			dz_divisor;
+-	u32			dz_hashmask;
+-#define DZ_HASHMASK(dz)	((dz)->dz_hashmask)
+-	int			dz_order;
+-	__le16			dz_mask;
+-#define DZ_MASK(dz)	((dz)->dz_mask)
+-};
+-
+-struct dn_hash
+-{
+-	struct dn_zone	*dh_zones[17];
+-	struct dn_zone	*dh_zone_list;
+-};
+-
+-#define dz_key_0(key)		((key).datum = 0)
+-
+-#define for_nexthops(fi) { int nhsel; const struct dn_fib_nh *nh;\
+-	for(nhsel = 0, nh = (fi)->fib_nh; nhsel < (fi)->fib_nhs; nh++, nhsel++)
+-
+-#define endfor_nexthops(fi) }
+-
+-#define DN_MAX_DIVISOR 1024
+-#define DN_S_ZOMBIE 1
+-#define DN_S_ACCESSED 2
+-
+-#define DN_FIB_SCAN(f, fp) \
+-for( ; ((f) = *(fp)) != NULL; (fp) = &(f)->fn_next)
+-
+-#define DN_FIB_SCAN_KEY(f, fp, key) \
+-for( ; ((f) = *(fp)) != NULL && dn_key_eq((f)->fn_key, (key)); (fp) = &(f)->fn_next)
+-
+-#define RT_TABLE_MIN 1
+-#define DN_FIB_TABLE_HASHSZ 256
+-static struct hlist_head dn_fib_table_hash[DN_FIB_TABLE_HASHSZ];
+-static DEFINE_RWLOCK(dn_fib_tables_lock);
+-
+-static struct kmem_cache *dn_hash_kmem __read_mostly;
+-static int dn_fib_hash_zombies;
+-
+-static inline dn_fib_idx_t dn_hash(dn_fib_key_t key, struct dn_zone *dz)
+-{
+-	u16 h = le16_to_cpu(key.datum)>>(16 - dz->dz_order);
+-	h ^= (h >> 10);
+-	h ^= (h >> 6);
+-	h &= DZ_HASHMASK(dz);
+-	return *(dn_fib_idx_t *)&h;
+-}
+-
+-static inline dn_fib_key_t dz_key(__le16 dst, struct dn_zone *dz)
+-{
+-	dn_fib_key_t k;
+-	k.datum = dst & DZ_MASK(dz);
+-	return k;
+-}
+-
+-static inline struct dn_fib_node **dn_chain_p(dn_fib_key_t key, struct dn_zone *dz)
+-{
+-	return &dz->dz_hash[dn_hash(key, dz).datum];
+-}
+-
+-static inline struct dn_fib_node *dz_chain(dn_fib_key_t key, struct dn_zone *dz)
+-{
+-	return dz->dz_hash[dn_hash(key, dz).datum];
+-}
+-
+-static inline int dn_key_eq(dn_fib_key_t a, dn_fib_key_t b)
+-{
+-	return a.datum == b.datum;
+-}
+-
+-static inline int dn_key_leq(dn_fib_key_t a, dn_fib_key_t b)
+-{
+-	return a.datum <= b.datum;
+-}
+-
+-static inline void dn_rebuild_zone(struct dn_zone *dz,
+-				   struct dn_fib_node **old_ht,
+-				   int old_divisor)
+-{
+-	struct dn_fib_node *f, **fp, *next;
+-	int i;
+-
+-	for(i = 0; i < old_divisor; i++) {
+-		for(f = old_ht[i]; f; f = next) {
+-			next = f->fn_next;
+-			for(fp = dn_chain_p(f->fn_key, dz);
+-				*fp && dn_key_leq((*fp)->fn_key, f->fn_key);
+-				fp = &(*fp)->fn_next)
+-				/* NOTHING */;
+-			f->fn_next = *fp;
+-			*fp = f;
+-		}
+-	}
+-}
+-
+-static void dn_rehash_zone(struct dn_zone *dz)
+-{
+-	struct dn_fib_node **ht, **old_ht;
+-	int old_divisor, new_divisor;
+-	u32 new_hashmask;
+-
+-	old_divisor = dz->dz_divisor;
+-
+-	switch (old_divisor) {
+-	case 16:
+-		new_divisor = 256;
+-		new_hashmask = 0xFF;
+-		break;
+-	default:
+-		printk(KERN_DEBUG "DECnet: dn_rehash_zone: BUG! %d\n",
+-		       old_divisor);
+-		fallthrough;
+-	case 256:
+-		new_divisor = 1024;
+-		new_hashmask = 0x3FF;
+-		break;
+-	}
+-
+-	ht = kcalloc(new_divisor, sizeof(struct dn_fib_node*), GFP_KERNEL);
+-	if (ht == NULL)
+-		return;
+-
+-	write_lock_bh(&dn_fib_tables_lock);
+-	old_ht = dz->dz_hash;
+-	dz->dz_hash = ht;
+-	dz->dz_hashmask = new_hashmask;
+-	dz->dz_divisor = new_divisor;
+-	dn_rebuild_zone(dz, old_ht, old_divisor);
+-	write_unlock_bh(&dn_fib_tables_lock);
+-	kfree(old_ht);
+-}
+-
+-static void dn_free_node(struct dn_fib_node *f)
+-{
+-	dn_fib_release_info(DN_FIB_INFO(f));
+-	kmem_cache_free(dn_hash_kmem, f);
+-}
+-
+-
+-static struct dn_zone *dn_new_zone(struct dn_hash *table, int z)
+-{
+-	int i;
+-	struct dn_zone *dz = kzalloc(sizeof(struct dn_zone), GFP_KERNEL);
+-	if (!dz)
+-		return NULL;
+-
+-	if (z) {
+-		dz->dz_divisor = 16;
+-		dz->dz_hashmask = 0x0F;
+-	} else {
+-		dz->dz_divisor = 1;
+-		dz->dz_hashmask = 0;
+-	}
+-
+-	dz->dz_hash = kcalloc(dz->dz_divisor, sizeof(struct dn_fib_node *), GFP_KERNEL);
+-	if (!dz->dz_hash) {
+-		kfree(dz);
+-		return NULL;
+-	}
+-
+-	dz->dz_order = z;
+-	dz->dz_mask = dnet_make_mask(z);
+-
+-	for(i = z + 1; i <= 16; i++)
+-		if (table->dh_zones[i])
+-			break;
+-
+-	write_lock_bh(&dn_fib_tables_lock);
+-	if (i>16) {
+-		dz->dz_next = table->dh_zone_list;
+-		table->dh_zone_list = dz;
+-	} else {
+-		dz->dz_next = table->dh_zones[i]->dz_next;
+-		table->dh_zones[i]->dz_next = dz;
+-	}
+-	table->dh_zones[z] = dz;
+-	write_unlock_bh(&dn_fib_tables_lock);
+-	return dz;
+-}
+-
+-
+-static int dn_fib_nh_match(struct rtmsg *r, struct nlmsghdr *nlh, struct nlattr *attrs[], struct dn_fib_info *fi)
+-{
+-	struct rtnexthop *nhp;
+-	int nhlen;
+-
+-	if (attrs[RTA_PRIORITY] &&
+-	    nla_get_u32(attrs[RTA_PRIORITY]) != fi->fib_priority)
+-		return 1;
+-
+-	if (attrs[RTA_OIF] || attrs[RTA_GATEWAY]) {
+-		if ((!attrs[RTA_OIF] || nla_get_u32(attrs[RTA_OIF]) == fi->fib_nh->nh_oif) &&
+-		    (!attrs[RTA_GATEWAY]  || nla_get_le16(attrs[RTA_GATEWAY]) != fi->fib_nh->nh_gw))
+-			return 0;
+-		return 1;
+-	}
+-
+-	if (!attrs[RTA_MULTIPATH])
+-		return 0;
+-
+-	nhp = nla_data(attrs[RTA_MULTIPATH]);
+-	nhlen = nla_len(attrs[RTA_MULTIPATH]);
+-
+-	for_nexthops(fi) {
+-		int attrlen = nhlen - sizeof(struct rtnexthop);
+-		__le16 gw;
+-
+-		if (attrlen < 0 || (nhlen -= nhp->rtnh_len) < 0)
+-			return -EINVAL;
+-		if (nhp->rtnh_ifindex && nhp->rtnh_ifindex != nh->nh_oif)
+-			return 1;
+-		if (attrlen) {
+-			struct nlattr *gw_attr;
+-
+-			gw_attr = nla_find((struct nlattr *) (nhp + 1), attrlen, RTA_GATEWAY);
+-			gw = gw_attr ? nla_get_le16(gw_attr) : 0;
+-
+-			if (gw && gw != nh->nh_gw)
+-				return 1;
+-		}
+-		nhp = RTNH_NEXT(nhp);
+-	} endfor_nexthops(fi);
+-
+-	return 0;
+-}
+-
+-static inline size_t dn_fib_nlmsg_size(struct dn_fib_info *fi)
+-{
+-	size_t payload = NLMSG_ALIGN(sizeof(struct rtmsg))
+-			 + nla_total_size(4) /* RTA_TABLE */
+-			 + nla_total_size(2) /* RTA_DST */
+-			 + nla_total_size(4) /* RTA_PRIORITY */
+-			 + nla_total_size(TCP_CA_NAME_MAX); /* RTAX_CC_ALGO */
+-
+-	/* space for nested metrics */
+-	payload += nla_total_size((RTAX_MAX * nla_total_size(4)));
+-
+-	if (fi->fib_nhs) {
+-		/* Also handles the special case fib_nhs == 1 */
+-
+-		/* each nexthop is packed in an attribute */
+-		size_t nhsize = nla_total_size(sizeof(struct rtnexthop));
+-
+-		/* may contain a gateway attribute */
+-		nhsize += nla_total_size(4);
+-
+-		/* all nexthops are packed in a nested attribute */
+-		payload += nla_total_size(fi->fib_nhs * nhsize);
+-	}
+-
+-	return payload;
+-}
+-
+-static int dn_fib_dump_info(struct sk_buff *skb, u32 portid, u32 seq, int event,
+-			u32 tb_id, u8 type, u8 scope, void *dst, int dst_len,
+-			struct dn_fib_info *fi, unsigned int flags)
+-{
+-	struct rtmsg *rtm;
+-	struct nlmsghdr *nlh;
+-
+-	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*rtm), flags);
+-	if (!nlh)
+-		return -EMSGSIZE;
+-
+-	rtm = nlmsg_data(nlh);
+-	rtm->rtm_family = AF_DECnet;
+-	rtm->rtm_dst_len = dst_len;
+-	rtm->rtm_src_len = 0;
+-	rtm->rtm_tos = 0;
+-	rtm->rtm_table = tb_id;
+-	rtm->rtm_flags = fi->fib_flags;
+-	rtm->rtm_scope = scope;
+-	rtm->rtm_type  = type;
+-	rtm->rtm_protocol = fi->fib_protocol;
+-
+-	if (nla_put_u32(skb, RTA_TABLE, tb_id) < 0)
+-		goto errout;
+-
+-	if (rtm->rtm_dst_len &&
+-	    nla_put(skb, RTA_DST, 2, dst) < 0)
+-		goto errout;
+-
+-	if (fi->fib_priority &&
+-	    nla_put_u32(skb, RTA_PRIORITY, fi->fib_priority) < 0)
+-		goto errout;
+-
+-	if (rtnetlink_put_metrics(skb, fi->fib_metrics) < 0)
+-		goto errout;
+-
+-	if (fi->fib_nhs == 1) {
+-		if (fi->fib_nh->nh_gw &&
+-		    nla_put_le16(skb, RTA_GATEWAY, fi->fib_nh->nh_gw) < 0)
+-			goto errout;
+-
+-		if (fi->fib_nh->nh_oif &&
+-		    nla_put_u32(skb, RTA_OIF, fi->fib_nh->nh_oif) < 0)
+-			goto errout;
+-	}
+-
+-	if (fi->fib_nhs > 1) {
+-		struct rtnexthop *nhp;
+-		struct nlattr *mp_head;
+-
+-		mp_head = nla_nest_start_noflag(skb, RTA_MULTIPATH);
+-		if (!mp_head)
+-			goto errout;
+-
+-		for_nexthops(fi) {
+-			if (!(nhp = nla_reserve_nohdr(skb, sizeof(*nhp))))
+-				goto errout;
+-
+-			nhp->rtnh_flags = nh->nh_flags & 0xFF;
+-			nhp->rtnh_hops = nh->nh_weight - 1;
+-			nhp->rtnh_ifindex = nh->nh_oif;
+-
+-			if (nh->nh_gw &&
+-			    nla_put_le16(skb, RTA_GATEWAY, nh->nh_gw) < 0)
+-				goto errout;
+-
+-			nhp->rtnh_len = skb_tail_pointer(skb) - (unsigned char *)nhp;
+-		} endfor_nexthops(fi);
+-
+-		nla_nest_end(skb, mp_head);
+-	}
+-
+-	nlmsg_end(skb, nlh);
+-	return 0;
+-
+-errout:
+-	nlmsg_cancel(skb, nlh);
+-	return -EMSGSIZE;
+-}
+-
+-
+-static void dn_rtmsg_fib(int event, struct dn_fib_node *f, int z, u32 tb_id,
+-			struct nlmsghdr *nlh, struct netlink_skb_parms *req)
+-{
+-	struct sk_buff *skb;
+-	u32 portid = req ? req->portid : 0;
+-	int err = -ENOBUFS;
+-
+-	skb = nlmsg_new(dn_fib_nlmsg_size(DN_FIB_INFO(f)), GFP_KERNEL);
+-	if (skb == NULL)
+-		goto errout;
+-
+-	err = dn_fib_dump_info(skb, portid, nlh->nlmsg_seq, event, tb_id,
+-			       f->fn_type, f->fn_scope, &f->fn_key, z,
+-			       DN_FIB_INFO(f), 0);
+-	if (err < 0) {
+-		/* -EMSGSIZE implies BUG in dn_fib_nlmsg_size() */
+-		WARN_ON(err == -EMSGSIZE);
+-		kfree_skb(skb);
+-		goto errout;
+-	}
+-	rtnl_notify(skb, &init_net, portid, RTNLGRP_DECnet_ROUTE, nlh, GFP_KERNEL);
+-	return;
+-errout:
+-	if (err < 0)
+-		rtnl_set_sk_err(&init_net, RTNLGRP_DECnet_ROUTE, err);
+-}
+-
+-static __inline__ int dn_hash_dump_bucket(struct sk_buff *skb,
+-				struct netlink_callback *cb,
+-				struct dn_fib_table *tb,
+-				struct dn_zone *dz,
+-				struct dn_fib_node *f)
+-{
+-	int i, s_i;
+-
+-	s_i = cb->args[4];
+-	for(i = 0; f; i++, f = f->fn_next) {
+-		if (i < s_i)
+-			continue;
+-		if (f->fn_state & DN_S_ZOMBIE)
+-			continue;
+-		if (dn_fib_dump_info(skb, NETLINK_CB(cb->skb).portid,
+-				cb->nlh->nlmsg_seq,
+-				RTM_NEWROUTE,
+-				tb->n,
+-				(f->fn_state & DN_S_ZOMBIE) ? 0 : f->fn_type,
+-				f->fn_scope, &f->fn_key, dz->dz_order,
+-				f->fn_info, NLM_F_MULTI) < 0) {
+-			cb->args[4] = i;
+-			return -1;
+-		}
+-	}
+-	cb->args[4] = i;
+-	return skb->len;
+-}
+-
+-static __inline__ int dn_hash_dump_zone(struct sk_buff *skb,
+-				struct netlink_callback *cb,
+-				struct dn_fib_table *tb,
+-				struct dn_zone *dz)
+-{
+-	int h, s_h;
+-
+-	s_h = cb->args[3];
+-	for(h = 0; h < dz->dz_divisor; h++) {
+-		if (h < s_h)
+-			continue;
+-		if (h > s_h)
+-			memset(&cb->args[4], 0, sizeof(cb->args) - 4*sizeof(cb->args[0]));
+-		if (dz->dz_hash == NULL || dz->dz_hash[h] == NULL)
+-			continue;
+-		if (dn_hash_dump_bucket(skb, cb, tb, dz, dz->dz_hash[h]) < 0) {
+-			cb->args[3] = h;
+-			return -1;
+-		}
+-	}
+-	cb->args[3] = h;
+-	return skb->len;
+-}
+-
+-static int dn_fib_table_dump(struct dn_fib_table *tb, struct sk_buff *skb,
+-				struct netlink_callback *cb)
+-{
+-	int m, s_m;
+-	struct dn_zone *dz;
+-	struct dn_hash *table = (struct dn_hash *)tb->data;
+-
+-	s_m = cb->args[2];
+-	read_lock(&dn_fib_tables_lock);
+-	for(dz = table->dh_zone_list, m = 0; dz; dz = dz->dz_next, m++) {
+-		if (m < s_m)
+-			continue;
+-		if (m > s_m)
+-			memset(&cb->args[3], 0, sizeof(cb->args) - 3*sizeof(cb->args[0]));
+-
+-		if (dn_hash_dump_zone(skb, cb, tb, dz) < 0) {
+-			cb->args[2] = m;
+-			read_unlock(&dn_fib_tables_lock);
+-			return -1;
+-		}
+-	}
+-	read_unlock(&dn_fib_tables_lock);
+-	cb->args[2] = m;
+-
+-	return skb->len;
+-}
+-
+-int dn_fib_dump(struct sk_buff *skb, struct netlink_callback *cb)
+-{
+-	struct net *net = sock_net(skb->sk);
+-	unsigned int h, s_h;
+-	unsigned int e = 0, s_e;
+-	struct dn_fib_table *tb;
+-	int dumped = 0;
+-
+-	if (!net_eq(net, &init_net))
+-		return 0;
+-
+-	if (nlmsg_len(cb->nlh) >= sizeof(struct rtmsg) &&
+-		((struct rtmsg *)nlmsg_data(cb->nlh))->rtm_flags&RTM_F_CLONED)
+-			return dn_cache_dump(skb, cb);
+-
+-	s_h = cb->args[0];
+-	s_e = cb->args[1];
+-
+-	for (h = s_h; h < DN_FIB_TABLE_HASHSZ; h++, s_h = 0) {
+-		e = 0;
+-		hlist_for_each_entry(tb, &dn_fib_table_hash[h], hlist) {
+-			if (e < s_e)
+-				goto next;
+-			if (dumped)
+-				memset(&cb->args[2], 0, sizeof(cb->args) -
+-						 2 * sizeof(cb->args[0]));
+-			if (tb->dump(tb, skb, cb) < 0)
+-				goto out;
+-			dumped = 1;
+-next:
+-			e++;
+-		}
+-	}
+-out:
+-	cb->args[1] = e;
+-	cb->args[0] = h;
+-
+-	return skb->len;
+-}
+-
+-static int dn_fib_table_insert(struct dn_fib_table *tb, struct rtmsg *r, struct nlattr *attrs[],
+-			       struct nlmsghdr *n, struct netlink_skb_parms *req)
+-{
+-	struct dn_hash *table = (struct dn_hash *)tb->data;
+-	struct dn_fib_node *new_f, *f, **fp, **del_fp;
+-	struct dn_zone *dz;
+-	struct dn_fib_info *fi;
+-	int z = r->rtm_dst_len;
+-	int type = r->rtm_type;
+-	dn_fib_key_t key;
+-	int err;
+-
+-	if (z > 16)
+-		return -EINVAL;
+-
+-	dz = table->dh_zones[z];
+-	if (!dz && !(dz = dn_new_zone(table, z)))
+-		return -ENOBUFS;
+-
+-	dz_key_0(key);
+-	if (attrs[RTA_DST]) {
+-		__le16 dst = nla_get_le16(attrs[RTA_DST]);
+-		if (dst & ~DZ_MASK(dz))
+-			return -EINVAL;
+-		key = dz_key(dst, dz);
+-	}
+-
+-	if ((fi = dn_fib_create_info(r, attrs, n, &err)) == NULL)
+-		return err;
+-
+-	if (dz->dz_nent > (dz->dz_divisor << 2) &&
+-			dz->dz_divisor > DN_MAX_DIVISOR &&
+-			(z==16 || (1<<z) > dz->dz_divisor))
+-		dn_rehash_zone(dz);
+-
+-	fp = dn_chain_p(key, dz);
+-
+-	DN_FIB_SCAN(f, fp) {
+-		if (dn_key_leq(key, f->fn_key))
+-			break;
+-	}
+-
+-	del_fp = NULL;
+-
+-	if (f && (f->fn_state & DN_S_ZOMBIE) &&
+-			dn_key_eq(f->fn_key, key)) {
+-		del_fp = fp;
+-		fp = &f->fn_next;
+-		f = *fp;
+-		goto create;
+-	}
+-
+-	DN_FIB_SCAN_KEY(f, fp, key) {
+-		if (fi->fib_priority <= DN_FIB_INFO(f)->fib_priority)
+-			break;
+-	}
+-
+-	if (f && dn_key_eq(f->fn_key, key) &&
+-			fi->fib_priority == DN_FIB_INFO(f)->fib_priority) {
+-		struct dn_fib_node **ins_fp;
+-
+-		err = -EEXIST;
+-		if (n->nlmsg_flags & NLM_F_EXCL)
+-			goto out;
+-
+-		if (n->nlmsg_flags & NLM_F_REPLACE) {
+-			del_fp = fp;
+-			fp = &f->fn_next;
+-			f = *fp;
+-			goto replace;
+-		}
+-
+-		ins_fp = fp;
+-		err = -EEXIST;
+-
+-		DN_FIB_SCAN_KEY(f, fp, key) {
+-			if (fi->fib_priority != DN_FIB_INFO(f)->fib_priority)
+-				break;
+-			if (f->fn_type == type &&
+-			    f->fn_scope == r->rtm_scope &&
+-			    DN_FIB_INFO(f) == fi)
+-				goto out;
+-		}
+-
+-		if (!(n->nlmsg_flags & NLM_F_APPEND)) {
+-			fp = ins_fp;
+-			f = *fp;
+-		}
+-	}
+-
+-create:
+-	err = -ENOENT;
+-	if (!(n->nlmsg_flags & NLM_F_CREATE))
+-		goto out;
+-
+-replace:
+-	err = -ENOBUFS;
+-	new_f = kmem_cache_zalloc(dn_hash_kmem, GFP_KERNEL);
+-	if (new_f == NULL)
+-		goto out;
+-
+-	new_f->fn_key = key;
+-	new_f->fn_type = type;
+-	new_f->fn_scope = r->rtm_scope;
+-	DN_FIB_INFO(new_f) = fi;
+-
+-	new_f->fn_next = f;
+-	write_lock_bh(&dn_fib_tables_lock);
+-	*fp = new_f;
+-	write_unlock_bh(&dn_fib_tables_lock);
+-	dz->dz_nent++;
+-
+-	if (del_fp) {
+-		f = *del_fp;
+-		write_lock_bh(&dn_fib_tables_lock);
+-		*del_fp = f->fn_next;
+-		write_unlock_bh(&dn_fib_tables_lock);
+-
+-		if (!(f->fn_state & DN_S_ZOMBIE))
+-			dn_rtmsg_fib(RTM_DELROUTE, f, z, tb->n, n, req);
+-		if (f->fn_state & DN_S_ACCESSED)
+-			dn_rt_cache_flush(-1);
+-		dn_free_node(f);
+-		dz->dz_nent--;
+-	} else {
+-		dn_rt_cache_flush(-1);
+-	}
+-
+-	dn_rtmsg_fib(RTM_NEWROUTE, new_f, z, tb->n, n, req);
+-
+-	return 0;
+-out:
+-	dn_fib_release_info(fi);
+-	return err;
+-}
+-
+-
+-static int dn_fib_table_delete(struct dn_fib_table *tb, struct rtmsg *r, struct nlattr *attrs[],
+-			       struct nlmsghdr *n, struct netlink_skb_parms *req)
+-{
+-	struct dn_hash *table = (struct dn_hash*)tb->data;
+-	struct dn_fib_node **fp, **del_fp, *f;
+-	int z = r->rtm_dst_len;
+-	struct dn_zone *dz;
+-	dn_fib_key_t key;
+-	int matched;
+-
+-
+-	if (z > 16)
+-		return -EINVAL;
+-
+-	if ((dz = table->dh_zones[z]) == NULL)
+-		return -ESRCH;
+-
+-	dz_key_0(key);
+-	if (attrs[RTA_DST]) {
+-		__le16 dst = nla_get_le16(attrs[RTA_DST]);
+-		if (dst & ~DZ_MASK(dz))
+-			return -EINVAL;
+-		key = dz_key(dst, dz);
+-	}
+-
+-	fp = dn_chain_p(key, dz);
+-
+-	DN_FIB_SCAN(f, fp) {
+-		if (dn_key_eq(f->fn_key, key))
+-			break;
+-		if (dn_key_leq(key, f->fn_key))
+-			return -ESRCH;
+-	}
+-
+-	matched = 0;
+-	del_fp = NULL;
+-	DN_FIB_SCAN_KEY(f, fp, key) {
+-		struct dn_fib_info *fi = DN_FIB_INFO(f);
+-
+-		if (f->fn_state & DN_S_ZOMBIE)
+-			return -ESRCH;
+-
+-		matched++;
+-
+-		if (del_fp == NULL &&
+-				(!r->rtm_type || f->fn_type == r->rtm_type) &&
+-				(r->rtm_scope == RT_SCOPE_NOWHERE || f->fn_scope == r->rtm_scope) &&
+-				(!r->rtm_protocol ||
+-					fi->fib_protocol == r->rtm_protocol) &&
+-				dn_fib_nh_match(r, n, attrs, fi) == 0)
+-			del_fp = fp;
+-	}
+-
+-	if (del_fp) {
+-		f = *del_fp;
+-		dn_rtmsg_fib(RTM_DELROUTE, f, z, tb->n, n, req);
+-
+-		if (matched != 1) {
+-			write_lock_bh(&dn_fib_tables_lock);
+-			*del_fp = f->fn_next;
+-			write_unlock_bh(&dn_fib_tables_lock);
+-
+-			if (f->fn_state & DN_S_ACCESSED)
+-				dn_rt_cache_flush(-1);
+-			dn_free_node(f);
+-			dz->dz_nent--;
+-		} else {
+-			f->fn_state |= DN_S_ZOMBIE;
+-			if (f->fn_state & DN_S_ACCESSED) {
+-				f->fn_state &= ~DN_S_ACCESSED;
+-				dn_rt_cache_flush(-1);
+-			}
+-			if (++dn_fib_hash_zombies > 128)
+-				dn_fib_flush();
+-		}
+-
+-		return 0;
+-	}
+-
+-	return -ESRCH;
+-}
+-
+-static inline int dn_flush_list(struct dn_fib_node **fp, int z, struct dn_hash *table)
+-{
+-	int found = 0;
+-	struct dn_fib_node *f;
+-
+-	while((f = *fp) != NULL) {
+-		struct dn_fib_info *fi = DN_FIB_INFO(f);
+-
+-		if (fi && ((f->fn_state & DN_S_ZOMBIE) || (fi->fib_flags & RTNH_F_DEAD))) {
+-			write_lock_bh(&dn_fib_tables_lock);
+-			*fp = f->fn_next;
+-			write_unlock_bh(&dn_fib_tables_lock);
+-
+-			dn_free_node(f);
+-			found++;
+-			continue;
+-		}
+-		fp = &f->fn_next;
+-	}
+-
+-	return found;
+-}
+-
+-static int dn_fib_table_flush(struct dn_fib_table *tb)
+-{
+-	struct dn_hash *table = (struct dn_hash *)tb->data;
+-	struct dn_zone *dz;
+-	int found = 0;
+-
+-	dn_fib_hash_zombies = 0;
+-	for(dz = table->dh_zone_list; dz; dz = dz->dz_next) {
+-		int i;
+-		int tmp = 0;
+-		for(i = dz->dz_divisor-1; i >= 0; i--)
+-			tmp += dn_flush_list(&dz->dz_hash[i], dz->dz_order, table);
+-		dz->dz_nent -= tmp;
+-		found += tmp;
+-	}
+-
+-	return found;
+-}
+-
+-static int dn_fib_table_lookup(struct dn_fib_table *tb, const struct flowidn *flp, struct dn_fib_res *res)
+-{
+-	int err;
+-	struct dn_zone *dz;
+-	struct dn_hash *t = (struct dn_hash *)tb->data;
+-
+-	read_lock(&dn_fib_tables_lock);
+-	for(dz = t->dh_zone_list; dz; dz = dz->dz_next) {
+-		struct dn_fib_node *f;
+-		dn_fib_key_t k = dz_key(flp->daddr, dz);
+-
+-		for(f = dz_chain(k, dz); f; f = f->fn_next) {
+-			if (!dn_key_eq(k, f->fn_key)) {
+-				if (dn_key_leq(k, f->fn_key))
+-					break;
+-				else
+-					continue;
+-			}
+-
+-			f->fn_state |= DN_S_ACCESSED;
+-
+-			if (f->fn_state&DN_S_ZOMBIE)
+-				continue;
+-
+-			if (f->fn_scope < flp->flowidn_scope)
+-				continue;
+-
+-			err = dn_fib_semantic_match(f->fn_type, DN_FIB_INFO(f), flp, res);
+-
+-			if (err == 0) {
+-				res->type = f->fn_type;
+-				res->scope = f->fn_scope;
+-				res->prefixlen = dz->dz_order;
+-				goto out;
+-			}
+-			if (err < 0)
+-				goto out;
+-		}
+-	}
+-	err = 1;
+-out:
+-	read_unlock(&dn_fib_tables_lock);
+-	return err;
+-}
+-
+-
+-struct dn_fib_table *dn_fib_get_table(u32 n, int create)
+-{
+-	struct dn_fib_table *t;
+-	unsigned int h;
+-
+-	if (n < RT_TABLE_MIN)
+-		return NULL;
+-
+-	if (n > RT_TABLE_MAX)
+-		return NULL;
+-
+-	h = n & (DN_FIB_TABLE_HASHSZ - 1);
+-	rcu_read_lock();
+-	hlist_for_each_entry_rcu(t, &dn_fib_table_hash[h], hlist) {
+-		if (t->n == n) {
+-			rcu_read_unlock();
+-			return t;
+-		}
+-	}
+-	rcu_read_unlock();
+-
+-	if (!create)
+-		return NULL;
+-
+-	if (in_interrupt()) {
+-		net_dbg_ratelimited("DECnet: BUG! Attempt to create routing table from interrupt\n");
+-		return NULL;
+-	}
+-
+-	t = kzalloc(sizeof(struct dn_fib_table) + sizeof(struct dn_hash),
+-		    GFP_KERNEL);
+-	if (t == NULL)
+-		return NULL;
+-
+-	t->n = n;
+-	t->insert = dn_fib_table_insert;
+-	t->delete = dn_fib_table_delete;
+-	t->lookup = dn_fib_table_lookup;
+-	t->flush  = dn_fib_table_flush;
+-	t->dump = dn_fib_table_dump;
+-	hlist_add_head_rcu(&t->hlist, &dn_fib_table_hash[h]);
+-
+-	return t;
+-}
+-
+-struct dn_fib_table *dn_fib_empty_table(void)
+-{
+-	u32 id;
+-
+-	for(id = RT_TABLE_MIN; id <= RT_TABLE_MAX; id++)
+-		if (dn_fib_get_table(id, 0) == NULL)
+-			return dn_fib_get_table(id, 1);
+-	return NULL;
+-}
+-
+-void dn_fib_flush(void)
+-{
+-	int flushed = 0;
+-	struct dn_fib_table *tb;
+-	unsigned int h;
+-
+-	for (h = 0; h < DN_FIB_TABLE_HASHSZ; h++) {
+-		hlist_for_each_entry(tb, &dn_fib_table_hash[h], hlist)
+-			flushed += tb->flush(tb);
+-	}
+-
+-	if (flushed)
+-		dn_rt_cache_flush(-1);
+-}
+-
+-void __init dn_fib_table_init(void)
+-{
+-	dn_hash_kmem = kmem_cache_create("dn_fib_info_cache",
+-					sizeof(struct dn_fib_info),
+-					0, SLAB_HWCACHE_ALIGN,
+-					NULL);
+-}
+-
+-void __exit dn_fib_table_cleanup(void)
+-{
+-	struct dn_fib_table *t;
+-	struct hlist_node *next;
+-	unsigned int h;
+-
+-	write_lock(&dn_fib_tables_lock);
+-	for (h = 0; h < DN_FIB_TABLE_HASHSZ; h++) {
+-		hlist_for_each_entry_safe(t, next, &dn_fib_table_hash[h],
+-					  hlist) {
+-			hlist_del(&t->hlist);
+-			kfree(t);
+-		}
+-	}
+-	write_unlock(&dn_fib_tables_lock);
+-}
+diff --git a/net/decnet/dn_timer.c b/net/decnet/dn_timer.c
+deleted file mode 100644
+index aa4155875ca84..0000000000000
+--- a/net/decnet/dn_timer.c
++++ /dev/null
+@@ -1,104 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * DECnet       An implementation of the DECnet protocol suite for the LINUX
+- *              operating system.  DECnet is implemented using the  BSD Socket
+- *              interface as the means of communication with the user level.
+- *
+- *              DECnet Socket Timer Functions
+- *
+- * Author:      Steve Whitehouse <SteveW@ACM.org>
+- *
+- *
+- * Changes:
+- *       Steve Whitehouse      : Made keepalive timer part of the same
+- *                               timer idea.
+- *       Steve Whitehouse      : Added checks for sk->sock_readers
+- *       David S. Miller       : New socket locking
+- *       Steve Whitehouse      : Timer grabs socket ref.
+- */
+-#include <linux/net.h>
+-#include <linux/socket.h>
+-#include <linux/skbuff.h>
+-#include <linux/netdevice.h>
+-#include <linux/timer.h>
+-#include <linux/spinlock.h>
+-#include <net/sock.h>
+-#include <linux/atomic.h>
+-#include <linux/jiffies.h>
+-#include <net/flow.h>
+-#include <net/dn.h>
+-
+-/*
+- * Slow timer is for everything else (n * 500mS)
+- */
+-
+-#define SLOW_INTERVAL (HZ/2)
+-
+-static void dn_slow_timer(struct timer_list *t);
+-
+-void dn_start_slow_timer(struct sock *sk)
+-{
+-	timer_setup(&sk->sk_timer, dn_slow_timer, 0);
+-	sk_reset_timer(sk, &sk->sk_timer, jiffies + SLOW_INTERVAL);
+-}
+-
+-void dn_stop_slow_timer(struct sock *sk)
+-{
+-	sk_stop_timer(sk, &sk->sk_timer);
+-}
+-
+-static void dn_slow_timer(struct timer_list *t)
+-{
+-	struct sock *sk = from_timer(sk, t, sk_timer);
+-	struct dn_scp *scp = DN_SK(sk);
+-
+-	bh_lock_sock(sk);
+-
+-	if (sock_owned_by_user(sk)) {
+-		sk_reset_timer(sk, &sk->sk_timer, jiffies + HZ / 10);
+-		goto out;
+-	}
+-
+-	/*
+-	 * The persist timer is the standard slow timer used for retransmits
+-	 * in both connection establishment and disconnection as well as
+-	 * in the RUN state. The different states are catered for by changing
+-	 * the function pointer in the socket. Setting the timer to a value
+-	 * of zero turns it off. We allow the persist_fxn to turn the
+-	 * timer off in a permant way by returning non-zero, so that
+-	 * timer based routines may remove sockets. This is why we have a
+-	 * sock_hold()/sock_put() around the timer to prevent the socket
+-	 * going away in the middle.
+-	 */
+-	if (scp->persist && scp->persist_fxn) {
+-		if (scp->persist <= SLOW_INTERVAL) {
+-			scp->persist = 0;
+-
+-			if (scp->persist_fxn(sk))
+-				goto out;
+-		} else {
+-			scp->persist -= SLOW_INTERVAL;
+-		}
+-	}
+-
+-	/*
+-	 * Check for keepalive timeout. After the other timer 'cos if
+-	 * the previous timer caused a retransmit, we don't need to
+-	 * do this. scp->stamp is the last time that we sent a packet.
+-	 * The keepalive function sends a link service packet to the
+-	 * other end. If it remains unacknowledged, the standard
+-	 * socket timers will eventually shut the socket down. Each
+-	 * time we do this, scp->stamp will be updated, thus
+-	 * we won't try and send another until scp->keepalive has passed
+-	 * since the last successful transmission.
+-	 */
+-	if (scp->keepalive && scp->keepalive_fxn && (scp->state == DN_RUN)) {
+-		if (time_after_eq(jiffies, scp->stamp + scp->keepalive))
+-			scp->keepalive_fxn(sk);
+-	}
+-
+-	sk_reset_timer(sk, &sk->sk_timer, jiffies + SLOW_INTERVAL);
+-out:
+-	bh_unlock_sock(sk);
+-	sock_put(sk);
+-}
+diff --git a/net/decnet/netfilter/Kconfig b/net/decnet/netfilter/Kconfig
+deleted file mode 100644
+index 14ec4ef95fab1..0000000000000
+--- a/net/decnet/netfilter/Kconfig
++++ /dev/null
+@@ -1,17 +0,0 @@
+-# SPDX-License-Identifier: GPL-2.0-only
+-#
+-# DECnet netfilter configuration
+-#
+-
+-menu "DECnet: Netfilter Configuration"
+-	depends on DECNET && NETFILTER
+-	depends on NETFILTER_ADVANCED
+-
+-config DECNET_NF_GRABULATOR
+-	tristate "Routing message grabulator (for userland routing daemon)"
+-	help
+-	  Enable this module if you want to use the userland DECnet routing
+-	  daemon. You will also need to enable routing support for DECnet
+-	  unless you just want to monitor routing messages from other nodes.
+-
+-endmenu
+diff --git a/net/decnet/netfilter/Makefile b/net/decnet/netfilter/Makefile
+deleted file mode 100644
+index 429c84289d0ff..0000000000000
+--- a/net/decnet/netfilter/Makefile
++++ /dev/null
+@@ -1,6 +0,0 @@
+-# SPDX-License-Identifier: GPL-2.0-only
+-#
+-# Makefile for DECnet netfilter modules
+-#
+-
+-obj-$(CONFIG_DECNET_NF_GRABULATOR) += dn_rtmsg.o
+diff --git a/net/decnet/netfilter/dn_rtmsg.c b/net/decnet/netfilter/dn_rtmsg.c
+deleted file mode 100644
+index 26a9193df7831..0000000000000
+--- a/net/decnet/netfilter/dn_rtmsg.c
++++ /dev/null
+@@ -1,158 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-or-later
+-/*
+- * DECnet       An implementation of the DECnet protocol suite for the LINUX
+- *              operating system.  DECnet is implemented using the  BSD Socket
+- *              interface as the means of communication with the user level.
+- *
+- *              DECnet Routing Message Grabulator
+- *
+- *              (C) 2000 ChyGwyn Limited  -  https://www.chygwyn.com/
+- *
+- * Author:      Steven Whitehouse <steve@chygwyn.com>
+- */
+-#include <linux/module.h>
+-#include <linux/skbuff.h>
+-#include <linux/slab.h>
+-#include <linux/init.h>
+-#include <linux/netdevice.h>
+-#include <linux/netfilter.h>
+-#include <linux/spinlock.h>
+-#include <net/netlink.h>
+-#include <linux/netfilter_decnet.h>
+-
+-#include <net/sock.h>
+-#include <net/flow.h>
+-#include <net/dn.h>
+-#include <net/dn_route.h>
+-
+-static struct sock *dnrmg = NULL;
+-
+-
+-static struct sk_buff *dnrmg_build_message(struct sk_buff *rt_skb, int *errp)
+-{
+-	struct sk_buff *skb = NULL;
+-	size_t size;
+-	sk_buff_data_t old_tail;
+-	struct nlmsghdr *nlh;
+-	unsigned char *ptr;
+-	struct nf_dn_rtmsg *rtm;
+-
+-	size = NLMSG_ALIGN(rt_skb->len) +
+-	       NLMSG_ALIGN(sizeof(struct nf_dn_rtmsg));
+-	skb = nlmsg_new(size, GFP_ATOMIC);
+-	if (!skb) {
+-		*errp = -ENOMEM;
+-		return NULL;
+-	}
+-	old_tail = skb->tail;
+-	nlh = nlmsg_put(skb, 0, 0, 0, size, 0);
+-	if (!nlh) {
+-		kfree_skb(skb);
+-		*errp = -ENOMEM;
+-		return NULL;
+-	}
+-	rtm = (struct nf_dn_rtmsg *)nlmsg_data(nlh);
+-	rtm->nfdn_ifindex = rt_skb->dev->ifindex;
+-	ptr = NFDN_RTMSG(rtm);
+-	skb_copy_from_linear_data(rt_skb, ptr, rt_skb->len);
+-	nlh->nlmsg_len = skb->tail - old_tail;
+-	return skb;
+-}
+-
+-static void dnrmg_send_peer(struct sk_buff *skb)
+-{
+-	struct sk_buff *skb2;
+-	int status = 0;
+-	int group = 0;
+-	unsigned char flags = *skb->data;
+-
+-	switch (flags & DN_RT_CNTL_MSK) {
+-	case DN_RT_PKT_L1RT:
+-		group = DNRNG_NLGRP_L1;
+-		break;
+-	case DN_RT_PKT_L2RT:
+-		group = DNRNG_NLGRP_L2;
+-		break;
+-	default:
+-		return;
+-	}
+-
+-	skb2 = dnrmg_build_message(skb, &status);
+-	if (skb2 == NULL)
+-		return;
+-	NETLINK_CB(skb2).dst_group = group;
+-	netlink_broadcast(dnrmg, skb2, 0, group, GFP_ATOMIC);
+-}
+-
+-
+-static unsigned int dnrmg_hook(void *priv,
+-			struct sk_buff *skb,
+-			const struct nf_hook_state *state)
+-{
+-	dnrmg_send_peer(skb);
+-	return NF_ACCEPT;
+-}
+-
+-
+-#define RCV_SKB_FAIL(err) do { netlink_ack(skb, nlh, (err), NULL); return; } while (0)
+-
+-static inline void dnrmg_receive_user_skb(struct sk_buff *skb)
+-{
+-	struct nlmsghdr *nlh = nlmsg_hdr(skb);
+-
+-	if (skb->len < sizeof(*nlh) ||
+-	    nlh->nlmsg_len < sizeof(*nlh) ||
+-	    skb->len < nlh->nlmsg_len)
+-		return;
+-
+-	if (!netlink_capable(skb, CAP_NET_ADMIN))
+-		RCV_SKB_FAIL(-EPERM);
+-
+-	/* Eventually we might send routing messages too */
+-
+-	RCV_SKB_FAIL(-EINVAL);
+-}
+-
+-static const struct nf_hook_ops dnrmg_ops = {
+-	.hook		= dnrmg_hook,
+-	.pf		= NFPROTO_DECNET,
+-	.hooknum	= NF_DN_ROUTE,
+-	.priority	= NF_DN_PRI_DNRTMSG,
+-};
+-
+-static int __init dn_rtmsg_init(void)
+-{
+-	int rv = 0;
+-	struct netlink_kernel_cfg cfg = {
+-		.groups	= DNRNG_NLGRP_MAX,
+-		.input	= dnrmg_receive_user_skb,
+-	};
+-
+-	dnrmg = netlink_kernel_create(&init_net, NETLINK_DNRTMSG, &cfg);
+-	if (dnrmg == NULL) {
+-		printk(KERN_ERR "dn_rtmsg: Cannot create netlink socket");
+-		return -ENOMEM;
+-	}
+-
+-	rv = nf_register_net_hook(&init_net, &dnrmg_ops);
+-	if (rv) {
+-		netlink_kernel_release(dnrmg);
+-	}
+-
+-	return rv;
+-}
+-
+-static void __exit dn_rtmsg_fini(void)
+-{
+-	nf_unregister_net_hook(&init_net, &dnrmg_ops);
+-	netlink_kernel_release(dnrmg);
+-}
+-
+-
+-MODULE_DESCRIPTION("DECnet Routing Message Grabulator");
+-MODULE_AUTHOR("Steven Whitehouse <steve@chygwyn.com>");
+-MODULE_LICENSE("GPL");
+-MODULE_ALIAS_NET_PF_PROTO(PF_NETLINK, NETLINK_DNRTMSG);
+-
+-module_init(dn_rtmsg_init);
+-module_exit(dn_rtmsg_fini);
+diff --git a/net/decnet/sysctl_net_decnet.c b/net/decnet/sysctl_net_decnet.c
+deleted file mode 100644
+index 67b5ab2657b7c..0000000000000
+--- a/net/decnet/sysctl_net_decnet.c
++++ /dev/null
+@@ -1,362 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * DECnet       An implementation of the DECnet protocol suite for the LINUX
+- *              operating system.  DECnet is implemented using the  BSD Socket
+- *              interface as the means of communication with the user level.
+- *
+- *              DECnet sysctl support functions
+- *
+- * Author:      Steve Whitehouse <SteveW@ACM.org>
+- *
+- *
+- * Changes:
+- * Steve Whitehouse - C99 changes and default device handling
+- * Steve Whitehouse - Memory buffer settings, like the tcp ones
+- *
+- */
+-#include <linux/mm.h>
+-#include <linux/sysctl.h>
+-#include <linux/fs.h>
+-#include <linux/netdevice.h>
+-#include <linux/string.h>
+-#include <net/neighbour.h>
+-#include <net/dst.h>
+-#include <net/flow.h>
+-
+-#include <linux/uaccess.h>
+-
+-#include <net/dn.h>
+-#include <net/dn_dev.h>
+-#include <net/dn_route.h>
+-
+-
+-int decnet_debug_level;
+-int decnet_time_wait = 30;
+-int decnet_dn_count = 1;
+-int decnet_di_count = 3;
+-int decnet_dr_count = 3;
+-int decnet_log_martians = 1;
+-int decnet_no_fc_max_cwnd = NSP_MIN_WINDOW;
+-
+-/* Reasonable defaults, I hope, based on tcp's defaults */
+-long sysctl_decnet_mem[3] = { 768 << 3, 1024 << 3, 1536 << 3 };
+-int sysctl_decnet_wmem[3] = { 4 * 1024, 16 * 1024, 128 * 1024 };
+-int sysctl_decnet_rmem[3] = { 4 * 1024, 87380, 87380 * 2 };
+-
+-#ifdef CONFIG_SYSCTL
+-extern int decnet_dst_gc_interval;
+-static int min_decnet_time_wait[] = { 5 };
+-static int max_decnet_time_wait[] = { 600 };
+-static int min_state_count[] = { 1 };
+-static int max_state_count[] = { NSP_MAXRXTSHIFT };
+-static int min_decnet_dst_gc_interval[] = { 1 };
+-static int max_decnet_dst_gc_interval[] = { 60 };
+-static int min_decnet_no_fc_max_cwnd[] = { NSP_MIN_WINDOW };
+-static int max_decnet_no_fc_max_cwnd[] = { NSP_MAX_WINDOW };
+-static char node_name[7] = "???";
+-
+-static struct ctl_table_header *dn_table_header = NULL;
+-
+-/*
+- * ctype.h :-)
+- */
+-#define ISNUM(x) (((x) >= '0') && ((x) <= '9'))
+-#define ISLOWER(x) (((x) >= 'a') && ((x) <= 'z'))
+-#define ISUPPER(x) (((x) >= 'A') && ((x) <= 'Z'))
+-#define ISALPHA(x) (ISLOWER(x) || ISUPPER(x))
+-#define INVALID_END_CHAR(x) (ISNUM(x) || ISALPHA(x))
+-
+-static void strip_it(char *str)
+-{
+-	for(;;) {
+-		switch (*str) {
+-		case ' ':
+-		case '\n':
+-		case '\r':
+-		case ':':
+-			*str = 0;
+-			fallthrough;
+-		case 0:
+-			return;
+-		}
+-		str++;
+-	}
+-}
+-
+-/*
+- * Simple routine to parse an ascii DECnet address
+- * into a network order address.
+- */
+-static int parse_addr(__le16 *addr, char *str)
+-{
+-	__u16 area, node;
+-
+-	while(*str && !ISNUM(*str)) str++;
+-
+-	if (*str == 0)
+-		return -1;
+-
+-	area = (*str++ - '0');
+-	if (ISNUM(*str)) {
+-		area *= 10;
+-		area += (*str++ - '0');
+-	}
+-
+-	if (*str++ != '.')
+-		return -1;
+-
+-	if (!ISNUM(*str))
+-		return -1;
+-
+-	node = *str++ - '0';
+-	if (ISNUM(*str)) {
+-		node *= 10;
+-		node += (*str++ - '0');
+-	}
+-	if (ISNUM(*str)) {
+-		node *= 10;
+-		node += (*str++ - '0');
+-	}
+-	if (ISNUM(*str)) {
+-		node *= 10;
+-		node += (*str++ - '0');
+-	}
+-
+-	if ((node > 1023) || (area > 63))
+-		return -1;
+-
+-	if (INVALID_END_CHAR(*str))
+-		return -1;
+-
+-	*addr = cpu_to_le16((area << 10) | node);
+-
+-	return 0;
+-}
+-
+-static int dn_node_address_handler(struct ctl_table *table, int write,
+-		void *buffer, size_t *lenp, loff_t *ppos)
+-{
+-	char addr[DN_ASCBUF_LEN];
+-	size_t len;
+-	__le16 dnaddr;
+-
+-	if (!*lenp || (*ppos && !write)) {
+-		*lenp = 0;
+-		return 0;
+-	}
+-
+-	if (write) {
+-		len = (*lenp < DN_ASCBUF_LEN) ? *lenp : (DN_ASCBUF_LEN-1);
+-		memcpy(addr, buffer, len);
+-		addr[len] = 0;
+-		strip_it(addr);
+-
+-		if (parse_addr(&dnaddr, addr))
+-			return -EINVAL;
+-
+-		dn_dev_devices_off();
+-
+-		decnet_address = dnaddr;
+-
+-		dn_dev_devices_on();
+-
+-		*ppos += len;
+-
+-		return 0;
+-	}
+-
+-	dn_addr2asc(le16_to_cpu(decnet_address), addr);
+-	len = strlen(addr);
+-	addr[len++] = '\n';
+-
+-	if (len > *lenp)
+-		len = *lenp;
+-	memcpy(buffer, addr, len);
+-	*lenp = len;
+-	*ppos += len;
+-
+-	return 0;
+-}
+-
+-static int dn_def_dev_handler(struct ctl_table *table, int write,
+-		void *buffer, size_t *lenp, loff_t *ppos)
+-{
+-	size_t len;
+-	struct net_device *dev;
+-	char devname[17];
+-
+-	if (!*lenp || (*ppos && !write)) {
+-		*lenp = 0;
+-		return 0;
+-	}
+-
+-	if (write) {
+-		if (*lenp > 16)
+-			return -E2BIG;
+-
+-		memcpy(devname, buffer, *lenp);
+-		devname[*lenp] = 0;
+-		strip_it(devname);
+-
+-		dev = dev_get_by_name(&init_net, devname);
+-		if (dev == NULL)
+-			return -ENODEV;
+-
+-		if (dev->dn_ptr == NULL) {
+-			dev_put(dev);
+-			return -ENODEV;
+-		}
+-
+-		if (dn_dev_set_default(dev, 1)) {
+-			dev_put(dev);
+-			return -ENODEV;
+-		}
+-		*ppos += *lenp;
+-
+-		return 0;
+-	}
+-
+-	dev = dn_dev_get_default();
+-	if (dev == NULL) {
+-		*lenp = 0;
+-		return 0;
+-	}
+-
+-	strcpy(devname, dev->name);
+-	dev_put(dev);
+-	len = strlen(devname);
+-	devname[len++] = '\n';
+-
+-	if (len > *lenp) len = *lenp;
+-
+-	memcpy(buffer, devname, len);
+-	*lenp = len;
+-	*ppos += len;
+-
+-	return 0;
+-}
+-
+-static struct ctl_table dn_table[] = {
+-	{
+-		.procname = "node_address",
+-		.maxlen = 7,
+-		.mode = 0644,
+-		.proc_handler = dn_node_address_handler,
+-	},
+-	{
+-		.procname = "node_name",
+-		.data = node_name,
+-		.maxlen = 7,
+-		.mode = 0644,
+-		.proc_handler = proc_dostring,
+-	},
+-	{
+-		.procname = "default_device",
+-		.maxlen = 16,
+-		.mode = 0644,
+-		.proc_handler = dn_def_dev_handler,
+-	},
+-	{
+-		.procname = "time_wait",
+-		.data = &decnet_time_wait,
+-		.maxlen = sizeof(int),
+-		.mode = 0644,
+-		.proc_handler = proc_dointvec_minmax,
+-		.extra1 = &min_decnet_time_wait,
+-		.extra2 = &max_decnet_time_wait
+-	},
+-	{
+-		.procname = "dn_count",
+-		.data = &decnet_dn_count,
+-		.maxlen = sizeof(int),
+-		.mode = 0644,
+-		.proc_handler = proc_dointvec_minmax,
+-		.extra1 = &min_state_count,
+-		.extra2 = &max_state_count
+-	},
+-	{
+-		.procname = "di_count",
+-		.data = &decnet_di_count,
+-		.maxlen = sizeof(int),
+-		.mode = 0644,
+-		.proc_handler = proc_dointvec_minmax,
+-		.extra1 = &min_state_count,
+-		.extra2 = &max_state_count
+-	},
+-	{
+-		.procname = "dr_count",
+-		.data = &decnet_dr_count,
+-		.maxlen = sizeof(int),
+-		.mode = 0644,
+-		.proc_handler = proc_dointvec_minmax,
+-		.extra1 = &min_state_count,
+-		.extra2 = &max_state_count
+-	},
+-	{
+-		.procname = "dst_gc_interval",
+-		.data = &decnet_dst_gc_interval,
+-		.maxlen = sizeof(int),
+-		.mode = 0644,
+-		.proc_handler = proc_dointvec_minmax,
+-		.extra1 = &min_decnet_dst_gc_interval,
+-		.extra2 = &max_decnet_dst_gc_interval
+-	},
+-	{
+-		.procname = "no_fc_max_cwnd",
+-		.data = &decnet_no_fc_max_cwnd,
+-		.maxlen = sizeof(int),
+-		.mode = 0644,
+-		.proc_handler = proc_dointvec_minmax,
+-		.extra1 = &min_decnet_no_fc_max_cwnd,
+-		.extra2 = &max_decnet_no_fc_max_cwnd
+-	},
+-       {
+-		.procname = "decnet_mem",
+-		.data = &sysctl_decnet_mem,
+-		.maxlen = sizeof(sysctl_decnet_mem),
+-		.mode = 0644,
+-		.proc_handler = proc_doulongvec_minmax
+-	},
+-	{
+-		.procname = "decnet_rmem",
+-		.data = &sysctl_decnet_rmem,
+-		.maxlen = sizeof(sysctl_decnet_rmem),
+-		.mode = 0644,
+-		.proc_handler = proc_dointvec,
+-	},
+-	{
+-		.procname = "decnet_wmem",
+-		.data = &sysctl_decnet_wmem,
+-		.maxlen = sizeof(sysctl_decnet_wmem),
+-		.mode = 0644,
+-		.proc_handler = proc_dointvec,
+-	},
+-	{
+-		.procname = "debug",
+-		.data = &decnet_debug_level,
+-		.maxlen = sizeof(int),
+-		.mode = 0644,
+-		.proc_handler = proc_dointvec,
+-	},
+-	{ }
+-};
+-
+-void dn_register_sysctl(void)
+-{
+-	dn_table_header = register_net_sysctl(&init_net, "net/decnet", dn_table);
+-}
+-
+-void dn_unregister_sysctl(void)
+-{
+-	unregister_net_sysctl_table(dn_table_header);
+-}
+-
+-#else  /* CONFIG_SYSCTL */
+-void dn_unregister_sysctl(void)
+-{
+-}
+-void dn_register_sysctl(void)
+-{
+-}
+-
+-#endif
+diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c
+index 6ac88fe24a8e0..7fab29f3ce6e8 100644
+--- a/net/ipv6/ping.c
++++ b/net/ipv6/ping.c
+@@ -96,7 +96,8 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	addr_type = ipv6_addr_type(daddr);
+ 	if ((__ipv6_addr_needs_scope_id(addr_type) && !oif) ||
+ 	    (addr_type & IPV6_ADDR_MAPPED) ||
+-	    (oif && sk->sk_bound_dev_if && oif != sk->sk_bound_dev_if))
++	    (oif && sk->sk_bound_dev_if && oif != sk->sk_bound_dev_if &&
++	     l3mdev_master_ifindex_by_index(sock_net(sk), oif) != sk->sk_bound_dev_if))
+ 		return -EINVAL;
+ 
+ 	/* TODO: use ip6_datagram_send_ctl to get options from cmsg */
+diff --git a/net/netfilter/core.c b/net/netfilter/core.c
+index 5b7578adbf0f1..d0c7c396ba715 100644
+--- a/net/netfilter/core.c
++++ b/net/netfilter/core.c
+@@ -300,12 +300,6 @@ nf_hook_entry_head(struct net *net, int pf, unsigned int hooknum,
+ 		if (WARN_ON_ONCE(ARRAY_SIZE(net->nf.hooks_ipv6) <= hooknum))
+ 			return NULL;
+ 		return net->nf.hooks_ipv6 + hooknum;
+-#if IS_ENABLED(CONFIG_DECNET)
+-	case NFPROTO_DECNET:
+-		if (WARN_ON_ONCE(ARRAY_SIZE(net->nf.hooks_decnet) <= hooknum))
+-			return NULL;
+-		return net->nf.hooks_decnet + hooknum;
+-#endif
+ 	default:
+ 		WARN_ON_ONCE(1);
+ 		return NULL;
+@@ -724,10 +718,6 @@ static int __net_init netfilter_net_init(struct net *net)
+ #ifdef CONFIG_NETFILTER_FAMILY_BRIDGE
+ 	__netfilter_net_init(net->nf.hooks_bridge, ARRAY_SIZE(net->nf.hooks_bridge));
+ #endif
+-#if IS_ENABLED(CONFIG_DECNET)
+-	__netfilter_net_init(net->nf.hooks_decnet, ARRAY_SIZE(net->nf.hooks_decnet));
+-#endif
+-
+ #ifdef CONFIG_PROC_FS
+ 	net->nf.proc_netfilter = proc_net_mkdir(net, "netfilter",
+ 						net->proc_net);
+diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
+index edf386a020b9e..bec9ac8b8fbef 100644
+--- a/net/netfilter/nfnetlink.c
++++ b/net/netfilter/nfnetlink.c
+@@ -473,7 +473,8 @@ ack:
+ 			 * processed, this avoids that the same error is
+ 			 * reported several times when replaying the batch.
+ 			 */
+-			if (nfnl_err_add(&err_list, nlh, err, &extack) < 0) {
++			if (err == -ENOMEM ||
++			    nfnl_err_add(&err_list, nlh, err, &extack) < 0) {
+ 				/* We failed to enqueue an error, reset the
+ 				 * list of errors and send OOM to userspace
+ 				 * pointing to the batch header.
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index befe42aad04ba..beedd0d2b5097 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -533,8 +533,8 @@ static void __tcf_chain_put(struct tcf_chain *chain, bool by_act,
+ {
+ 	struct tcf_block *block = chain->block;
+ 	const struct tcf_proto_ops *tmplt_ops;
++	unsigned int refcnt, non_act_refcnt;
+ 	bool free_block = false;
+-	unsigned int refcnt;
+ 	void *tmplt_priv;
+ 
+ 	mutex_lock(&block->lock);
+@@ -554,13 +554,15 @@ static void __tcf_chain_put(struct tcf_chain *chain, bool by_act,
+ 	 * save these to temporary variables.
+ 	 */
+ 	refcnt = --chain->refcnt;
++	non_act_refcnt = refcnt - chain->action_refcnt;
+ 	tmplt_ops = chain->tmplt_ops;
+ 	tmplt_priv = chain->tmplt_priv;
+ 
+-	/* The last dropped non-action reference will trigger notification. */
+-	if (refcnt - chain->action_refcnt == 0 && !by_act) {
+-		tc_chain_notify_delete(tmplt_ops, tmplt_priv, chain->index,
+-				       block, NULL, 0, 0, false);
++	if (non_act_refcnt == chain->explicitly_created && !by_act) {
++		if (non_act_refcnt == 0)
++			tc_chain_notify_delete(tmplt_ops, tmplt_priv,
++					       chain->index, block, NULL, 0, 0,
++					       false);
+ 		/* Last reference to chain, no need to lock. */
+ 		chain->flushing = false;
+ 	}
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index da042bc8b239d..1ac8ff445a6d3 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -716,12 +716,18 @@ static int u32_set_parms(struct net *net, struct tcf_proto *tp,
+ 			 struct nlattr *est, bool ovr,
+ 			 struct netlink_ext_ack *extack)
+ {
+-	int err;
++	int err, ifindex = -1;
+ 
+ 	err = tcf_exts_validate(net, tp, tb, est, &n->exts, ovr, true, extack);
+ 	if (err < 0)
+ 		return err;
+ 
++	if (tb[TCA_U32_INDEV]) {
++		ifindex = tcf_change_indev(net, tb[TCA_U32_INDEV], extack);
++		if (ifindex < 0)
++			return -EINVAL;
++	}
++
+ 	if (tb[TCA_U32_LINK]) {
+ 		u32 handle = nla_get_u32(tb[TCA_U32_LINK]);
+ 		struct tc_u_hnode *ht_down = NULL, *ht_old;
+@@ -756,13 +762,9 @@ static int u32_set_parms(struct net *net, struct tcf_proto *tp,
+ 		tcf_bind_filter(tp, &n->res, base);
+ 	}
+ 
+-	if (tb[TCA_U32_INDEV]) {
+-		int ret;
+-		ret = tcf_change_indev(net, tb[TCA_U32_INDEV], extack);
+-		if (ret < 0)
+-			return -EINVAL;
+-		n->ifindex = ret;
+-	}
++	if (ifindex >= 0)
++		n->ifindex = ifindex;
++
+ 	return 0;
+ }
+ 
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index ee0b2b03657ca..1e82c51657a7e 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -4379,7 +4379,7 @@ enum sctp_disposition sctp_sf_eat_auth(struct net *net,
+ 				    SCTP_AUTH_NEW_KEY, GFP_ATOMIC);
+ 
+ 		if (!ev)
+-			return -ENOMEM;
++			return SCTP_DISPOSITION_NOMEM;
+ 
+ 		sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP,
+ 				SCTP_ULPEVENT(ev));
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index 91e678fa3feb5..df6aba2246fa0 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -1242,7 +1242,7 @@ int tipc_nl_media_get(struct sk_buff *skb, struct genl_info *info)
+ 	struct tipc_nl_msg msg;
+ 	struct tipc_media *media;
+ 	struct sk_buff *rep;
+-	struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
++	struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1];
+ 
+ 	if (!info->attrs[TIPC_NLA_MEDIA])
+ 		return -EINVAL;
+@@ -1291,7 +1291,7 @@ int __tipc_nl_media_set(struct sk_buff *skb, struct genl_info *info)
+ 	int err;
+ 	char *name;
+ 	struct tipc_media *m;
+-	struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
++	struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1];
+ 
+ 	if (!info->attrs[TIPC_NLA_MEDIA])
+ 		return -EINVAL;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8a42262dd7faf..7d6a36b647a73 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -11199,6 +11199,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1b0a, 0x01b8, "ACER Veriton", ALC662_FIXUP_ACER_VERITON),
+ 	SND_PCI_QUIRK(0x1b35, 0x1234, "CZC ET26", ALC662_FIXUP_CZC_ET26),
+ 	SND_PCI_QUIRK(0x1b35, 0x2206, "CZC P10T", ALC662_FIXUP_CZC_P10T),
++	SND_PCI_QUIRK(0x1c6c, 0x1239, "Compaq N14JP6-V2", ALC897_FIXUP_HP_HSMIC_VERB),
+ 
+ #if 0
+ 	/* Below is a quirk table taken from the old code.
+diff --git a/sound/soc/dwc/dwc-i2s.c b/sound/soc/dwc/dwc-i2s.c
+index 5469399abcb44..8e58347dabe81 100644
+--- a/sound/soc/dwc/dwc-i2s.c
++++ b/sound/soc/dwc/dwc-i2s.c
+@@ -183,30 +183,6 @@ static void i2s_stop(struct dw_i2s_dev *dev,
+ 	}
+ }
+ 
+-static int dw_i2s_startup(struct snd_pcm_substream *substream,
+-		struct snd_soc_dai *cpu_dai)
+-{
+-	struct dw_i2s_dev *dev = snd_soc_dai_get_drvdata(cpu_dai);
+-	union dw_i2s_snd_dma_data *dma_data = NULL;
+-
+-	if (!(dev->capability & DWC_I2S_RECORD) &&
+-			(substream->stream == SNDRV_PCM_STREAM_CAPTURE))
+-		return -EINVAL;
+-
+-	if (!(dev->capability & DWC_I2S_PLAY) &&
+-			(substream->stream == SNDRV_PCM_STREAM_PLAYBACK))
+-		return -EINVAL;
+-
+-	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		dma_data = &dev->play_dma_data;
+-	else if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
+-		dma_data = &dev->capture_dma_data;
+-
+-	snd_soc_dai_set_dma_data(cpu_dai, substream, (void *)dma_data);
+-
+-	return 0;
+-}
+-
+ static void dw_i2s_config(struct dw_i2s_dev *dev, int stream)
+ {
+ 	u32 ch_reg;
+@@ -305,12 +281,6 @@ static int dw_i2s_hw_params(struct snd_pcm_substream *substream,
+ 	return 0;
+ }
+ 
+-static void dw_i2s_shutdown(struct snd_pcm_substream *substream,
+-		struct snd_soc_dai *dai)
+-{
+-	snd_soc_dai_set_dma_data(dai, substream, NULL);
+-}
+-
+ static int dw_i2s_prepare(struct snd_pcm_substream *substream,
+ 			  struct snd_soc_dai *dai)
+ {
+@@ -382,8 +352,6 @@ static int dw_i2s_set_fmt(struct snd_soc_dai *cpu_dai, unsigned int fmt)
+ }
+ 
+ static const struct snd_soc_dai_ops dw_i2s_dai_ops = {
+-	.startup	= dw_i2s_startup,
+-	.shutdown	= dw_i2s_shutdown,
+ 	.hw_params	= dw_i2s_hw_params,
+ 	.prepare	= dw_i2s_prepare,
+ 	.trigger	= dw_i2s_trigger,
+@@ -624,6 +592,14 @@ static int dw_configure_dai_by_dt(struct dw_i2s_dev *dev,
+ 
+ }
+ 
++static int dw_i2s_dai_probe(struct snd_soc_dai *dai)
++{
++	struct dw_i2s_dev *dev = snd_soc_dai_get_drvdata(dai);
++
++	snd_soc_dai_init_dma_data(dai, &dev->play_dma_data, &dev->capture_dma_data);
++	return 0;
++}
++
+ static int dw_i2s_probe(struct platform_device *pdev)
+ {
+ 	const struct i2s_platform_data *pdata = pdev->dev.platform_data;
+@@ -642,6 +618,7 @@ static int dw_i2s_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	dw_i2s_dai->ops = &dw_i2s_dai_ops;
++	dw_i2s_dai->probe = dw_i2s_dai_probe;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	dev->i2s_base = devm_ioremap_resource(&pdev->dev, res);
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index fb874f924bbe3..e52c030bd17a2 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -2332,6 +2332,9 @@ int dpcm_be_dai_prepare(struct snd_soc_pcm_runtime *fe, int stream)
+ 		if (!snd_soc_dpcm_be_can_update(fe, be, stream))
+ 			continue;
+ 
++		if (!snd_soc_dpcm_can_be_prepared(fe, be, stream))
++			continue;
++
+ 		if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_HW_PARAMS) &&
+ 		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) &&
+ 		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND) &&
+@@ -2972,3 +2975,20 @@ int snd_soc_dpcm_can_be_params(struct snd_soc_pcm_runtime *fe,
+ 	return snd_soc_dpcm_check_state(fe, be, stream, state, ARRAY_SIZE(state));
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_dpcm_can_be_params);
++
++/*
++ * We can only prepare a BE DAI if any of it's FE are not prepared,
++ * running or paused for the specified stream direction.
++ */
++int snd_soc_dpcm_can_be_prepared(struct snd_soc_pcm_runtime *fe,
++				 struct snd_soc_pcm_runtime *be, int stream)
++{
++	const enum snd_soc_dpcm_state state[] = {
++		SND_SOC_DPCM_STATE_START,
++		SND_SOC_DPCM_STATE_PAUSED,
++		SND_SOC_DPCM_STATE_PREPARE,
++	};
++
++	return snd_soc_dpcm_check_state(fe, be, stream, state, ARRAY_SIZE(state));
++}
++EXPORT_SYMBOL_GPL(snd_soc_dpcm_can_be_prepared);
+diff --git a/tools/gpio/lsgpio.c b/tools/gpio/lsgpio.c
+index 5a05a454d0c97..85a2aa292f5d5 100644
+--- a/tools/gpio/lsgpio.c
++++ b/tools/gpio/lsgpio.c
+@@ -90,7 +90,7 @@ static void print_attributes(struct gpio_v2_line_info *info)
+ 	for (i = 0; i < info->num_attrs; i++) {
+ 		if (info->attrs[i].id == GPIO_V2_LINE_ATTR_ID_DEBOUNCE)
+ 			fprintf(stdout, ", debounce_period=%dusec",
+-				info->attrs[0].debounce_period_us);
++				info->attrs[i].debounce_period_us);
+ 	}
+ }
+ 
+diff --git a/tools/testing/selftests/ptp/testptp.c b/tools/testing/selftests/ptp/testptp.c
+index f7911aaeb0075..aa474febb4712 100644
+--- a/tools/testing/selftests/ptp/testptp.c
++++ b/tools/testing/selftests/ptp/testptp.c
+@@ -492,11 +492,11 @@ int main(int argc, char *argv[])
+ 			interval = t2 - t1;
+ 			offset = (t2 + t1) / 2 - tp;
+ 
+-			printf("system time: %lld.%u\n",
++			printf("system time: %lld.%09u\n",
+ 				(pct+2*i)->sec, (pct+2*i)->nsec);
+-			printf("phc    time: %lld.%u\n",
++			printf("phc    time: %lld.%09u\n",
+ 				(pct+2*i+1)->sec, (pct+2*i+1)->nsec);
+-			printf("system time: %lld.%u\n",
++			printf("system time: %lld.%09u\n",
+ 				(pct+2*i+2)->sec, (pct+2*i+2)->nsec);
+ 			printf("system/phc clock time offset is %" PRId64 " ns\n"
+ 			       "system     clock time delay  is %" PRId64 " ns\n",


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-06-28 10:27 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-06-28 10:27 UTC (permalink / raw
  To: gentoo-commits

commit:     00e29ccbd6c2091e5492e948243675f960de49b6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 28 10:27:23 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 28 10:27:23 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=00e29ccb

Linux patch 5.10.186

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1185_linux-5.10.186.patch | 5240 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5244 insertions(+)

diff --git a/0000_README b/0000_README
index feeba516..e7c30907 100644
--- a/0000_README
+++ b/0000_README
@@ -783,6 +783,10 @@ Patch:  1184_linux-5.10.185.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.185
 
+Patch:  1185_linux-5.10.186.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.186
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1185_linux-5.10.186.patch b/1185_linux-5.10.186.patch
new file mode 100644
index 00000000..2c7ee01d
--- /dev/null
+++ b/1185_linux-5.10.186.patch
@@ -0,0 +1,5240 @@
+diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
+index 06027c6a233ab..ac852f93f8da5 100644
+--- a/Documentation/admin-guide/sysctl/vm.rst
++++ b/Documentation/admin-guide/sysctl/vm.rst
+@@ -948,7 +948,7 @@ how much memory needs to be free before kswapd goes back to sleep.
+ 
+ The unit is in fractions of 10,000. The default value of 10 means the
+ distances between watermarks are 0.1% of the available memory in the
+-node/system. The maximum value is 1000, or 10% of memory.
++node/system. The maximum value is 3000, or 30% of memory.
+ 
+ A high rate of threads entering direct reclaim (allocstall) or kswapd
+ going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate
+diff --git a/Makefile b/Makefile
+index 73e6cc1992c21..bb2be0ed9ff26 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 185
++SUBLEVEL = 186
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/am57xx-cl-som-am57x.dts b/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
+index aed81568a297d..d783d1f6950be 100644
+--- a/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
++++ b/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
+@@ -527,7 +527,7 @@
+ 
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <31 0>;
+-		pendown-gpio = <&gpio1 31 0>;
++		pendown-gpio = <&gpio1 31 GPIO_ACTIVE_LOW>;
+ 
+ 
+ 		ti,x-min = /bits/ 16 <0x0>;
+diff --git a/arch/arm/boot/dts/at91sam9261ek.dts b/arch/arm/boot/dts/at91sam9261ek.dts
+index beed819609e8d..8f3b483bb64dd 100644
+--- a/arch/arm/boot/dts/at91sam9261ek.dts
++++ b/arch/arm/boot/dts/at91sam9261ek.dts
+@@ -156,7 +156,7 @@
+ 					compatible = "ti,ads7843";
+ 					interrupts-extended = <&pioC 2 IRQ_TYPE_EDGE_BOTH>;
+ 					spi-max-frequency = <3000000>;
+-					pendown-gpio = <&pioC 2 GPIO_ACTIVE_HIGH>;
++					pendown-gpio = <&pioC 2 GPIO_ACTIVE_LOW>;
+ 
+ 					ti,x-min = /bits/ 16 <150>;
+ 					ti,x-max = /bits/ 16 <3830>;
+diff --git a/arch/arm/boot/dts/imx7d-pico-hobbit.dts b/arch/arm/boot/dts/imx7d-pico-hobbit.dts
+index d917dc4f2f227..6ad39dca70096 100644
+--- a/arch/arm/boot/dts/imx7d-pico-hobbit.dts
++++ b/arch/arm/boot/dts/imx7d-pico-hobbit.dts
+@@ -64,7 +64,7 @@
+ 		interrupt-parent = <&gpio2>;
+ 		interrupts = <7 0>;
+ 		spi-max-frequency = <1000000>;
+-		pendown-gpio = <&gpio2 7 0>;
++		pendown-gpio = <&gpio2 7 GPIO_ACTIVE_LOW>;
+ 		vcc-supply = <&reg_3p3v>;
+ 		ti,x-min = /bits/ 16 <0>;
+ 		ti,x-max = /bits/ 16 <4095>;
+diff --git a/arch/arm/boot/dts/imx7d-sdb.dts b/arch/arm/boot/dts/imx7d-sdb.dts
+index 6d562ebe90295..d3b49a5b30b72 100644
+--- a/arch/arm/boot/dts/imx7d-sdb.dts
++++ b/arch/arm/boot/dts/imx7d-sdb.dts
+@@ -198,7 +198,7 @@
+ 		pinctrl-0 = <&pinctrl_tsc2046_pendown>;
+ 		interrupt-parent = <&gpio2>;
+ 		interrupts = <29 0>;
+-		pendown-gpio = <&gpio2 29 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio2 29 GPIO_ACTIVE_LOW>;
+ 		touchscreen-max-pressure = <255>;
+ 		wakeup-source;
+ 	};
+diff --git a/arch/arm/boot/dts/omap3-cm-t3x.dtsi b/arch/arm/boot/dts/omap3-cm-t3x.dtsi
+index e61b8a2bfb7de..51baedf1603bd 100644
+--- a/arch/arm/boot/dts/omap3-cm-t3x.dtsi
++++ b/arch/arm/boot/dts/omap3-cm-t3x.dtsi
+@@ -227,7 +227,7 @@
+ 
+ 		interrupt-parent = <&gpio2>;
+ 		interrupts = <25 0>;		/* gpio_57 */
+-		pendown-gpio = <&gpio2 25 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio2 25 GPIO_ACTIVE_LOW>;
+ 
+ 		ti,x-min = /bits/ 16 <0x0>;
+ 		ti,x-max = /bits/ 16 <0x0fff>;
+diff --git a/arch/arm/boot/dts/omap3-devkit8000-lcd-common.dtsi b/arch/arm/boot/dts/omap3-devkit8000-lcd-common.dtsi
+index 3decc2d78a6ca..a7f99ae0c1fe9 100644
+--- a/arch/arm/boot/dts/omap3-devkit8000-lcd-common.dtsi
++++ b/arch/arm/boot/dts/omap3-devkit8000-lcd-common.dtsi
+@@ -54,7 +54,7 @@
+ 
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <27 0>;		/* gpio_27 */
+-		pendown-gpio = <&gpio1 27 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio1 27 GPIO_ACTIVE_LOW>;
+ 
+ 		ti,x-min = /bits/ 16 <0x0>;
+ 		ti,x-max = /bits/ 16 <0x0fff>;
+diff --git a/arch/arm/boot/dts/omap3-lilly-a83x.dtsi b/arch/arm/boot/dts/omap3-lilly-a83x.dtsi
+index 73d477898ec2a..06e7cf96c6639 100644
+--- a/arch/arm/boot/dts/omap3-lilly-a83x.dtsi
++++ b/arch/arm/boot/dts/omap3-lilly-a83x.dtsi
+@@ -311,7 +311,7 @@
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <8 0>;   /* boot6 / gpio_8 */
+ 		spi-max-frequency = <1000000>;
+-		pendown-gpio = <&gpio1 8 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio1 8 GPIO_ACTIVE_LOW>;
+ 		vcc-supply = <&reg_vcc3>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&tsc2048_pins>;
+diff --git a/arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi b/arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi
+index 1d6e88f99eb31..c3570acc35fad 100644
+--- a/arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi
++++ b/arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi
+@@ -149,7 +149,7 @@
+ 
+ 		interrupt-parent = <&gpio4>;
+ 		interrupts = <18 0>;			/* gpio_114 */
+-		pendown-gpio = <&gpio4 18 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio4 18 GPIO_ACTIVE_LOW>;
+ 
+ 		ti,x-min = /bits/ 16 <0x0>;
+ 		ti,x-max = /bits/ 16 <0x0fff>;
+diff --git a/arch/arm/boot/dts/omap3-overo-common-lcd43.dtsi b/arch/arm/boot/dts/omap3-overo-common-lcd43.dtsi
+index 7e30f9d45790e..d95a0e130058c 100644
+--- a/arch/arm/boot/dts/omap3-overo-common-lcd43.dtsi
++++ b/arch/arm/boot/dts/omap3-overo-common-lcd43.dtsi
+@@ -160,7 +160,7 @@
+ 
+ 		interrupt-parent = <&gpio4>;
+ 		interrupts = <18 0>;			/* gpio_114 */
+-		pendown-gpio = <&gpio4 18 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio4 18 GPIO_ACTIVE_LOW>;
+ 
+ 		ti,x-min = /bits/ 16 <0x0>;
+ 		ti,x-max = /bits/ 16 <0x0fff>;
+diff --git a/arch/arm/boot/dts/omap3-pandora-common.dtsi b/arch/arm/boot/dts/omap3-pandora-common.dtsi
+index 37608af6c07f5..ca6d777ebf843 100644
+--- a/arch/arm/boot/dts/omap3-pandora-common.dtsi
++++ b/arch/arm/boot/dts/omap3-pandora-common.dtsi
+@@ -651,7 +651,7 @@
+ 		pinctrl-0 = <&penirq_pins>;
+ 		interrupt-parent = <&gpio3>;
+ 		interrupts = <30 IRQ_TYPE_NONE>;	/* GPIO_94 */
+-		pendown-gpio = <&gpio3 30 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio3 30 GPIO_ACTIVE_LOW>;
+ 		vcc-supply = <&vaux4>;
+ 
+ 		ti,x-min = /bits/ 16 <0>;
+diff --git a/arch/arm/boot/dts/omap5-cm-t54.dts b/arch/arm/boot/dts/omap5-cm-t54.dts
+index ca759b7b8a580..e62ea8b6d53fd 100644
+--- a/arch/arm/boot/dts/omap5-cm-t54.dts
++++ b/arch/arm/boot/dts/omap5-cm-t54.dts
+@@ -354,7 +354,7 @@
+ 
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <15 0>;			/* gpio1_wk15 */
+-		pendown-gpio = <&gpio1 15 GPIO_ACTIVE_HIGH>;
++		pendown-gpio = <&gpio1 15 GPIO_ACTIVE_LOW>;
+ 
+ 
+ 		ti,x-min = /bits/ 16 <0x0>;
+diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
+index 06755fad38304..9fea6e9768096 100644
+--- a/arch/arm64/include/asm/sysreg.h
++++ b/arch/arm64/include/asm/sysreg.h
+@@ -104,8 +104,14 @@
+ #define SB_BARRIER_INSN			__SYS_BARRIER_INSN(0, 7, 31)
+ 
+ #define SYS_DC_ISW			sys_insn(1, 0, 7, 6, 2)
++#define SYS_DC_IGSW			sys_insn(1, 0, 7, 6, 4)
++#define SYS_DC_IGDSW			sys_insn(1, 0, 7, 6, 6)
+ #define SYS_DC_CSW			sys_insn(1, 0, 7, 10, 2)
++#define SYS_DC_CGSW			sys_insn(1, 0, 7, 10, 4)
++#define SYS_DC_CGDSW			sys_insn(1, 0, 7, 10, 6)
+ #define SYS_DC_CISW			sys_insn(1, 0, 7, 14, 2)
++#define SYS_DC_CIGSW			sys_insn(1, 0, 7, 14, 4)
++#define SYS_DC_CIGDSW			sys_insn(1, 0, 7, 14, 6)
+ 
+ /*
+  * System registers, organised loosely by encoding but grouped together
+diff --git a/arch/s390/purgatory/Makefile b/arch/s390/purgatory/Makefile
+index 21c4ebe29b9a2..a93c9aba834be 100644
+--- a/arch/s390/purgatory/Makefile
++++ b/arch/s390/purgatory/Makefile
+@@ -25,6 +25,7 @@ KBUILD_CFLAGS += -Wno-pointer-sign -Wno-sign-compare
+ KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding
+ KBUILD_CFLAGS += -c -MD -Os -m64 -msoft-float -fno-common
+ KBUILD_CFLAGS += -fno-stack-protector
++KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+ KBUILD_CFLAGS += $(CLANG_FLAGS)
+ KBUILD_CFLAGS += $(call cc-option,-fno-PIE)
+ KBUILD_AFLAGS := $(filter-out -DCC_USING_EXPOLINE,$(KBUILD_AFLAGS))
+diff --git a/arch/x86/kernel/apic/x2apic_phys.c b/arch/x86/kernel/apic/x2apic_phys.c
+index 032a00e5d9fa6..76c80e191a1b1 100644
+--- a/arch/x86/kernel/apic/x2apic_phys.c
++++ b/arch/x86/kernel/apic/x2apic_phys.c
+@@ -97,7 +97,10 @@ static void init_x2apic_ldr(void)
+ 
+ static int x2apic_phys_probe(void)
+ {
+-	if (x2apic_mode && (x2apic_phys || x2apic_fadt_phys()))
++	if (!x2apic_mode)
++		return 0;
++
++	if (x2apic_phys || x2apic_fadt_phys())
+ 		return 1;
+ 
+ 	return apic == &apic_x2apic_phys;
+diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
+index 6e6b39710e5fa..a8cdbbe67bb53 100644
+--- a/arch/x86/mm/kaslr.c
++++ b/arch/x86/mm/kaslr.c
+@@ -172,10 +172,10 @@ void __meminit init_trampoline_kaslr(void)
+ 		set_p4d(p4d_tramp,
+ 			__p4d(_KERNPG_TABLE | __pa(pud_page_tramp)));
+ 
+-		set_pgd(&trampoline_pgd_entry,
+-			__pgd(_KERNPG_TABLE | __pa(p4d_page_tramp)));
++		trampoline_pgd_entry =
++			__pgd(_KERNPG_TABLE | __pa(p4d_page_tramp));
+ 	} else {
+-		set_pgd(&trampoline_pgd_entry,
+-			__pgd(_KERNPG_TABLE | __pa(pud_page_tramp)));
++		trampoline_pgd_entry =
++			__pgd(_KERNPG_TABLE | __pa(pud_page_tramp));
+ 	}
+ }
+diff --git a/drivers/base/regmap/regmap-spi-avmm.c b/drivers/base/regmap/regmap-spi-avmm.c
+index ad1da83e849fe..67f89937219c3 100644
+--- a/drivers/base/regmap/regmap-spi-avmm.c
++++ b/drivers/base/regmap/regmap-spi-avmm.c
+@@ -666,7 +666,7 @@ static const struct regmap_bus regmap_spi_avmm_bus = {
+ 	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
+ 	.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
+ 	.max_raw_read = SPI_AVMM_VAL_SIZE * MAX_READ_CNT,
+-	.max_raw_write = SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT,
++	.max_raw_write = SPI_AVMM_REG_SIZE + SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT,
+ 	.free_context = spi_avmm_bridge_ctx_free,
+ };
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 3e01a3ac652d1..d10f621085e2e 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1595,9 +1595,14 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
+ 	}
+ 
+ 	if (gc->irq.parent_handler) {
+-		void *data = gc->irq.parent_handler_data ?: gc;
+-
+ 		for (i = 0; i < gc->irq.num_parents; i++) {
++			void *data;
++
++			if (gc->irq.per_parent_data)
++				data = gc->irq.parent_handler_data_array[i];
++			else
++				data = gc->irq.parent_handler_data ?: gc;
++
+ 			/*
+ 			 * The parent IRQ chip is already using the chip_data
+ 			 * for this IRQ chip, so our callbacks simply use the
+@@ -1787,6 +1792,14 @@ int gpiochip_irqchip_add_domain(struct gpio_chip *gc,
+ 	gc->to_irq = gpiochip_to_irq;
+ 	gc->irq.domain = domain;
+ 
++	/*
++	 * Using barrier() here to prevent compiler from reordering
++	 * gc->irq.initialized before adding irqdomain.
++	 */
++	barrier();
++
++	gc->irq.initialized = true;
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(gpiochip_irqchip_add_domain);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 3ca1ee396e4c6..0bdc83d899463 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -7430,6 +7430,12 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 		if (acrtc_state->abm_level != dm_old_crtc_state->abm_level)
+ 			bundle->stream_update.abm_level = &acrtc_state->abm_level;
+ 
++		mutex_lock(&dm->dc_lock);
++		if ((acrtc_state->update_type > UPDATE_TYPE_FAST) &&
++				acrtc_state->stream->link->psr_settings.psr_allow_active)
++			amdgpu_dm_psr_disable(acrtc_state->stream);
++		mutex_unlock(&dm->dc_lock);
++
+ 		/*
+ 		 * If FreeSync state on the stream has changed then we need to
+ 		 * re-adjust the min/max bounds now that DC doesn't handle this
+@@ -7444,9 +7450,6 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 			spin_unlock_irqrestore(&pcrtc->dev->event_lock, flags);
+ 		}
+ 		mutex_lock(&dm->dc_lock);
+-		if ((acrtc_state->update_type > UPDATE_TYPE_FAST) &&
+-				acrtc_state->stream->link->psr_settings.psr_allow_active)
+-			amdgpu_dm_psr_disable(acrtc_state->stream);
+ 
+ 		dc_commit_updates_for_stream(dm->dc,
+ 						     bundle->surface_updates,
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.c b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+index 967a5cdc120e3..81211a9d9d0a9 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_g2d.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+@@ -1332,7 +1332,7 @@ int exynos_g2d_exec_ioctl(struct drm_device *drm_dev, void *data,
+ 	/* Let the runqueue know that there is work to do. */
+ 	queue_work(g2d->g2d_workq, &g2d->runqueue_work);
+ 
+-	if (runqueue_node->async)
++	if (req->async)
+ 		goto out;
+ 
+ 	wait_for_completion(&runqueue_node->complete);
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_vidi.c b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+index e5662bdcbbde3..e96436e11a36c 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_vidi.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+@@ -468,8 +468,6 @@ static int vidi_remove(struct platform_device *pdev)
+ 	if (ctx->raw_edid != (struct edid *)fake_edid_info) {
+ 		kfree(ctx->raw_edid);
+ 		ctx->raw_edid = NULL;
+-
+-		return -EINVAL;
+ 	}
+ 
+ 	component_del(&pdev->dev, &vidi_component_ops);
+diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
+index e5c4271e64ede..75053917d2137 100644
+--- a/drivers/gpu/drm/radeon/radeon_gem.c
++++ b/drivers/gpu/drm/radeon/radeon_gem.c
+@@ -385,7 +385,6 @@ int radeon_gem_set_domain_ioctl(struct drm_device *dev, void *data,
+ 	struct radeon_device *rdev = dev->dev_private;
+ 	struct drm_radeon_gem_set_domain *args = data;
+ 	struct drm_gem_object *gobj;
+-	struct radeon_bo *robj;
+ 	int r;
+ 
+ 	/* for now if someone requests domain CPU -
+@@ -398,13 +397,12 @@ int radeon_gem_set_domain_ioctl(struct drm_device *dev, void *data,
+ 		up_read(&rdev->exclusive_lock);
+ 		return -ENOENT;
+ 	}
+-	robj = gem_to_radeon_bo(gobj);
+ 
+ 	r = radeon_gem_set_domain(gobj, args->read_domains, args->write_domain);
+ 
+ 	drm_gem_object_put(gobj);
+ 	up_read(&rdev->exclusive_lock);
+-	r = radeon_gem_handle_lockup(robj->rdev, r);
++	r = radeon_gem_handle_lockup(rdev, r);
+ 	return r;
+ }
+ 
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index a93070f5b214c..36cb456709ed7 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2419,8 +2419,13 @@ static int wacom_parse_and_register(struct wacom *wacom, bool wireless)
+ 		goto fail_quirks;
+ 	}
+ 
+-	if (features->device_type & WACOM_DEVICETYPE_WL_MONITOR)
++	if (features->device_type & WACOM_DEVICETYPE_WL_MONITOR) {
+ 		error = hid_hw_open(hdev);
++		if (error) {
++			hid_err(hdev, "hw open failed\n");
++			goto fail_quirks;
++		}
++	}
+ 
+ 	wacom_set_shared_values(wacom_wac);
+ 	devres_close_group(&hdev->dev, wacom);
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 5b902adb0d1bf..0c6c54061088e 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -765,11 +765,22 @@ static void vmbus_wait_for_unload(void)
+ 		if (completion_done(&vmbus_connection.unload_event))
+ 			goto completed;
+ 
+-		for_each_online_cpu(cpu) {
++		for_each_present_cpu(cpu) {
+ 			struct hv_per_cpu_context *hv_cpu
+ 				= per_cpu_ptr(hv_context.cpu_context, cpu);
+ 
++			/*
++			 * In a CoCo VM the synic_message_page is not allocated
++			 * in hv_synic_alloc(). Instead it is set/cleared in
++			 * hv_synic_enable_regs() and hv_synic_disable_regs()
++			 * such that it is set only when the CPU is online. If
++			 * not all present CPUs are online, the message page
++			 * might be NULL, so skip such CPUs.
++			 */
+ 			page_addr = hv_cpu->synic_message_page;
++			if (!page_addr)
++				continue;
++
+ 			msg = (struct hv_message *)page_addr
+ 				+ VMBUS_MESSAGE_SINT;
+ 
+@@ -803,11 +814,14 @@ completed:
+ 	 * maybe-pending messages on all CPUs to be able to receive new
+ 	 * messages after we reconnect.
+ 	 */
+-	for_each_online_cpu(cpu) {
++	for_each_present_cpu(cpu) {
+ 		struct hv_per_cpu_context *hv_cpu
+ 			= per_cpu_ptr(hv_context.cpu_context, cpu);
+ 
+ 		page_addr = hv_cpu->synic_message_page;
++		if (!page_addr)
++			continue;
++
+ 		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
+ 		msg->header.message_type = HVMSG_NONE;
+ 	}
+diff --git a/drivers/i2c/busses/i2c-imx-lpi2c.c b/drivers/i2c/busses/i2c-imx-lpi2c.c
+index d45ec26d51cb9..c688f11ae5c9f 100644
+--- a/drivers/i2c/busses/i2c-imx-lpi2c.c
++++ b/drivers/i2c/busses/i2c-imx-lpi2c.c
+@@ -200,8 +200,8 @@ static void lpi2c_imx_stop(struct lpi2c_imx_struct *lpi2c_imx)
+ /* CLKLO = I2C_CLK_RATIO * CLKHI, SETHOLD = CLKHI, DATAVD = CLKHI/2 */
+ static int lpi2c_imx_config(struct lpi2c_imx_struct *lpi2c_imx)
+ {
+-	u8 prescale, filt, sethold, clkhi, clklo, datavd;
+-	unsigned int clk_rate, clk_cycle;
++	u8 prescale, filt, sethold, datavd;
++	unsigned int clk_rate, clk_cycle, clkhi, clklo;
+ 	enum lpi2c_imx_pincfg pincfg;
+ 	unsigned int temp;
+ 
+diff --git a/drivers/input/misc/soc_button_array.c b/drivers/input/misc/soc_button_array.c
+index 31c02c2019c1c..67a134c8448d2 100644
+--- a/drivers/input/misc/soc_button_array.c
++++ b/drivers/input/misc/soc_button_array.c
+@@ -108,6 +108,27 @@ static const struct dmi_system_id dmi_use_low_level_irq[] = {
+ 	{} /* Terminating entry */
+ };
+ 
++/*
++ * Some devices have a wrong entry which points to a GPIO which is
++ * required in another driver, so this driver must not claim it.
++ */
++static const struct dmi_system_id dmi_invalid_acpi_index[] = {
++	{
++		/*
++		 * Lenovo Yoga Book X90F / X90L, the PNP0C40 home button entry
++		 * points to a GPIO which is not a home button and which is
++		 * required by the lenovo-yogabook driver.
++		 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
++		},
++		.driver_data = (void *)1l,
++	},
++	{} /* Terminating entry */
++};
++
+ /*
+  * Get the Nth GPIO number from the ACPI object.
+  */
+@@ -137,6 +158,8 @@ soc_button_device_create(struct platform_device *pdev,
+ 	struct platform_device *pd;
+ 	struct gpio_keys_button *gpio_keys;
+ 	struct gpio_keys_platform_data *gpio_keys_pdata;
++	const struct dmi_system_id *dmi_id;
++	int invalid_acpi_index = -1;
+ 	int error, gpio, irq;
+ 	int n_buttons = 0;
+ 
+@@ -154,10 +177,17 @@ soc_button_device_create(struct platform_device *pdev,
+ 	gpio_keys = (void *)(gpio_keys_pdata + 1);
+ 	n_buttons = 0;
+ 
++	dmi_id = dmi_first_match(dmi_invalid_acpi_index);
++	if (dmi_id)
++		invalid_acpi_index = (long)dmi_id->driver_data;
++
+ 	for (info = button_info; info->name; info++) {
+ 		if (info->autorepeat != autorepeat)
+ 			continue;
+ 
++		if (info->acpi_index == invalid_acpi_index)
++			continue;
++
+ 		error = soc_button_lookup_gpio(&pdev->dev, info->acpi_index, &gpio, &irq);
+ 		if (error || irq < 0) {
+ 			/*
+diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c
+index e23aa608f66f6..97b479223fe52 100644
+--- a/drivers/media/cec/core/cec-adap.c
++++ b/drivers/media/cec/core/cec-adap.c
+@@ -1085,7 +1085,8 @@ void cec_received_msg_ts(struct cec_adapter *adap,
+ 	mutex_lock(&adap->lock);
+ 	dprintk(2, "%s: %*ph\n", __func__, msg->len, msg->msg);
+ 
+-	adap->last_initiator = 0xff;
++	if (!adap->transmit_in_progress)
++		adap->last_initiator = 0xff;
+ 
+ 	/* Check if this message was for us (directed or broadcast). */
+ 	if (!cec_msg_is_broadcast(msg))
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index 19a6b55e344fe..e89bd6f4b317c 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -970,11 +970,8 @@ static irqreturn_t meson_mmc_irq(int irq, void *dev_id)
+ 	if (status & (IRQ_END_OF_CHAIN | IRQ_RESP_STATUS)) {
+ 		if (data && !cmd->error)
+ 			data->bytes_xfered = data->blksz * data->blocks;
+-		if (meson_mmc_bounce_buf_read(data) ||
+-		    meson_mmc_get_next_command(cmd))
+-			ret = IRQ_WAKE_THREAD;
+-		else
+-			ret = IRQ_HANDLED;
++
++		return IRQ_WAKE_THREAD;
+ 	}
+ 
+ out:
+@@ -986,9 +983,6 @@ out:
+ 		writel(start, host->regs + SD_EMMC_START);
+ 	}
+ 
+-	if (ret == IRQ_HANDLED)
+-		meson_mmc_request_done(host->mmc, cmd->mrq);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
+index 5d83c8e7bf5cf..9cfffcecc9eb2 100644
+--- a/drivers/mmc/host/mmci.c
++++ b/drivers/mmc/host/mmci.c
+@@ -1728,7 +1728,8 @@ static void mmci_set_max_busy_timeout(struct mmc_host *mmc)
+ 		return;
+ 
+ 	if (host->variant->busy_timeout && mmc->actual_clock)
+-		max_busy_timeout = ~0UL / (mmc->actual_clock / MSEC_PER_SEC);
++		max_busy_timeout = U32_MAX / DIV_ROUND_UP(mmc->actual_clock,
++							  MSEC_PER_SEC);
+ 
+ 	mmc->max_busy_timeout = max_busy_timeout;
+ }
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index d71c113f428f6..2c9ea5ed0b2fc 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2443,7 +2443,7 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ 
+ 	host->irq = platform_get_irq(pdev, 0);
+ 	if (host->irq < 0) {
+-		ret = -EINVAL;
++		ret = host->irq;
+ 		goto host_free;
+ 	}
+ 
+diff --git a/drivers/mmc/host/mvsdio.c b/drivers/mmc/host/mvsdio.c
+index 629efbe639c4f..b4f6a0a2fcb51 100644
+--- a/drivers/mmc/host/mvsdio.c
++++ b/drivers/mmc/host/mvsdio.c
+@@ -704,7 +704,7 @@ static int mvsd_probe(struct platform_device *pdev)
+ 	}
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0)
+-		return -ENXIO;
++		return irq;
+ 
+ 	mmc = mmc_alloc_host(sizeof(struct mvsd_host), &pdev->dev);
+ 	if (!mmc) {
+diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
+index 6aa0537f1f847..eb978b75d8e78 100644
+--- a/drivers/mmc/host/omap.c
++++ b/drivers/mmc/host/omap.c
+@@ -1344,7 +1344,7 @@ static int mmc_omap_probe(struct platform_device *pdev)
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0)
+-		return -ENXIO;
++		return irq;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	host->virt_base = devm_ioremap_resource(&pdev->dev, res);
+diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
+index 5b6ede81fc9f2..098075449ccd0 100644
+--- a/drivers/mmc/host/omap_hsmmc.c
++++ b/drivers/mmc/host/omap_hsmmc.c
+@@ -1832,9 +1832,11 @@ static int omap_hsmmc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	irq = platform_get_irq(pdev, 0);
+-	if (res == NULL || irq < 0)
++	if (!res)
+ 		return -ENXIO;
++	irq = platform_get_irq(pdev, 0);
++	if (irq < 0)
++		return irq;
+ 
+ 	base = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(base))
+diff --git a/drivers/mmc/host/owl-mmc.c b/drivers/mmc/host/owl-mmc.c
+index 3d4abf175b1d8..8a40cf8a92db7 100644
+--- a/drivers/mmc/host/owl-mmc.c
++++ b/drivers/mmc/host/owl-mmc.c
+@@ -640,7 +640,7 @@ static int owl_mmc_probe(struct platform_device *pdev)
+ 
+ 	owl_host->irq = platform_get_irq(pdev, 0);
+ 	if (owl_host->irq < 0) {
+-		ret = -EINVAL;
++		ret = owl_host->irq;
+ 		goto err_release_channel;
+ 	}
+ 
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index a2cdb37fcbbec..2a28101777c6f 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -876,7 +876,7 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
+ 	host->ops	= &sdhci_acpi_ops_dflt;
+ 	host->irq	= platform_get_irq(pdev, 0);
+ 	if (host->irq < 0) {
+-		err = -EINVAL;
++		err = host->irq;
+ 		goto err_free;
+ 	}
+ 
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index ad2e73f9a58f4..3366956a4ff18 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -2228,6 +2228,9 @@ static inline void sdhci_msm_get_of_property(struct platform_device *pdev,
+ 		msm_host->ddr_config = DDR_CONFIG_POR_VAL;
+ 
+ 	of_property_read_u32(node, "qcom,dll-config", &msm_host->dll_config);
++
++	if (of_device_is_compatible(node, "qcom,msm8916-sdhci"))
++		host->quirks2 |= SDHCI_QUIRK2_BROKEN_64_BIT_DMA;
+ }
+ 
+ static int sdhci_msm_gcc_reset(struct device *dev, struct sdhci_host *host)
+diff --git a/drivers/mmc/host/sh_mmcif.c b/drivers/mmc/host/sh_mmcif.c
+index e5e457037235a..5dec9e239c9bf 100644
+--- a/drivers/mmc/host/sh_mmcif.c
++++ b/drivers/mmc/host/sh_mmcif.c
+@@ -1398,7 +1398,7 @@ static int sh_mmcif_probe(struct platform_device *pdev)
+ 	irq[0] = platform_get_irq(pdev, 0);
+ 	irq[1] = platform_get_irq_optional(pdev, 1);
+ 	if (irq[0] < 0)
+-		return -ENXIO;
++		return irq[0];
+ 
+ 	reg = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(reg))
+diff --git a/drivers/mmc/host/usdhi6rol0.c b/drivers/mmc/host/usdhi6rol0.c
+index b9b79b1089a00..4f22ecef9be50 100644
+--- a/drivers/mmc/host/usdhi6rol0.c
++++ b/drivers/mmc/host/usdhi6rol0.c
+@@ -1747,8 +1747,10 @@ static int usdhi6_probe(struct platform_device *pdev)
+ 	irq_cd = platform_get_irq_byname(pdev, "card detect");
+ 	irq_sd = platform_get_irq_byname(pdev, "data");
+ 	irq_sdio = platform_get_irq_byname(pdev, "SDIO");
+-	if (irq_sd < 0 || irq_sdio < 0)
+-		return -ENODEV;
++	if (irq_sd < 0)
++		return irq_sd;
++	if (irq_sdio < 0)
++		return irq_sdio;
+ 
+ 	mmc = mmc_alloc_host(sizeof(struct usdhi6_host), dev);
+ 	if (!mmc)
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index d3b42adef057b..4056ca4255be7 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -966,7 +966,7 @@ mt753x_cpu_port_enable(struct dsa_switch *ds, int port)
+ 	mt7530_rmw(priv, MT7530_MFC, UNM_FFP_MASK, UNM_FFP(BIT(port)));
+ 
+ 	/* Set CPU port number */
+-	if (priv->id == ID_MT7621)
++	if (priv->id == ID_MT7530 || priv->id == ID_MT7621)
+ 		mt7530_rmw(priv, MT7530_MFC, CPU_MASK, CPU_EN | CPU_PORT(port));
+ 
+ 	/* CPU port gets connected to all user ports of
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 89697cb09d1c0..81be560a26431 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -1136,8 +1136,8 @@ static struct sk_buff *be_lancer_xmit_workarounds(struct be_adapter *adapter,
+ 	eth_hdr_len = ntohs(skb->protocol) == ETH_P_8021Q ?
+ 						VLAN_ETH_HLEN : ETH_HLEN;
+ 	if (skb->len <= 60 &&
+-	    (lancer_chip(adapter) || skb_vlan_tag_present(skb)) &&
+-	    is_ipv4_pkt(skb)) {
++	    (lancer_chip(adapter) || BE3_chip(adapter) ||
++	     skb_vlan_tag_present(skb)) && is_ipv4_pkt(skb)) {
+ 		ip = (struct iphdr *)ip_hdr(skb);
+ 		pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len));
+ 	}
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index 36bcb5db3be97..44fa959ebcaa5 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -574,8 +574,7 @@ qcaspi_spi_thread(void *data)
+ 	while (!kthread_should_stop()) {
+ 		set_current_state(TASK_INTERRUPTIBLE);
+ 		if ((qca->intr_req == qca->intr_svc) &&
+-		    (qca->txr.skb[qca->txr.head] == NULL) &&
+-		    (qca->sync == QCASPI_SYNC_READY))
++		    !qca->txr.skb[qca->txr.head])
+ 			schedule();
+ 
+ 		set_current_state(TASK_RUNNING);
+diff --git a/drivers/net/ieee802154/mac802154_hwsim.c b/drivers/net/ieee802154/mac802154_hwsim.c
+index 97981cf7661ad..344f63bc94a21 100644
+--- a/drivers/net/ieee802154/mac802154_hwsim.c
++++ b/drivers/net/ieee802154/mac802154_hwsim.c
+@@ -522,7 +522,7 @@ static int hwsim_del_edge_nl(struct sk_buff *msg, struct genl_info *info)
+ static int hwsim_set_edge_lqi(struct sk_buff *msg, struct genl_info *info)
+ {
+ 	struct nlattr *edge_attrs[MAC802154_HWSIM_EDGE_ATTR_MAX + 1];
+-	struct hwsim_edge_info *einfo;
++	struct hwsim_edge_info *einfo, *einfo_old;
+ 	struct hwsim_phy *phy_v0;
+ 	struct hwsim_edge *e;
+ 	u32 v0, v1;
+@@ -560,8 +560,10 @@ static int hwsim_set_edge_lqi(struct sk_buff *msg, struct genl_info *info)
+ 	list_for_each_entry_rcu(e, &phy_v0->edges, list) {
+ 		if (e->endpoint->idx == v1) {
+ 			einfo->lqi = lqi;
+-			rcu_assign_pointer(e->info, einfo);
++			einfo_old = rcu_replace_pointer(e->info, einfo,
++							lockdep_is_held(&hwsim_phys_lock));
+ 			rcu_read_unlock();
++			kfree_rcu(einfo_old, rcu);
+ 			mutex_unlock(&hwsim_phys_lock);
+ 			return 0;
+ 		}
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index 5fabcd15ef77a..834bf63dc2009 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -802,7 +802,7 @@ static int dp83867_phy_reset(struct phy_device *phydev)
+ {
+ 	int err;
+ 
+-	err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESTART);
++	err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESET);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/drivers/nfc/nfcsim.c b/drivers/nfc/nfcsim.c
+index dd27c85190d34..b42d386350b72 100644
+--- a/drivers/nfc/nfcsim.c
++++ b/drivers/nfc/nfcsim.c
+@@ -336,10 +336,6 @@ static struct dentry *nfcsim_debugfs_root;
+ static void nfcsim_debugfs_init(void)
+ {
+ 	nfcsim_debugfs_root = debugfs_create_dir("nfcsim", NULL);
+-
+-	if (!nfcsim_debugfs_root)
+-		pr_err("Could not create debugfs entry\n");
+-
+ }
+ 
+ static void nfcsim_debugfs_remove(void)
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 4353443b89d81..2d6c77dcc815c 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -520,19 +520,10 @@ struct hv_dr_state {
+ 	struct hv_pcidev_description func[];
+ };
+ 
+-enum hv_pcichild_state {
+-	hv_pcichild_init = 0,
+-	hv_pcichild_requirements,
+-	hv_pcichild_resourced,
+-	hv_pcichild_ejecting,
+-	hv_pcichild_maximum
+-};
+-
+ struct hv_pci_dev {
+ 	/* List protected by pci_rescan_remove_lock */
+ 	struct list_head list_entry;
+ 	refcount_t refs;
+-	enum hv_pcichild_state state;
+ 	struct pci_slot *pci_slot;
+ 	struct hv_pcidev_description desc;
+ 	bool reported_missing;
+@@ -1237,6 +1228,11 @@ static void hv_irq_unmask(struct irq_data *data)
+ 	pbus = pdev->bus;
+ 	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
+ 	int_desc = data->chip_data;
++	if (!int_desc) {
++		dev_warn(&hbus->hdev->device, "%s() can not unmask irq %u\n",
++			 __func__, data->irq);
++		return;
++	}
+ 
+ 	spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags);
+ 
+@@ -1553,12 +1549,6 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 		hv_pci_onchannelcallback(hbus);
+ 		spin_unlock_irqrestore(&channel->sched_lock, flags);
+ 
+-		if (hpdev->state == hv_pcichild_ejecting) {
+-			dev_err_once(&hbus->hdev->device,
+-				     "the device is being ejected\n");
+-			goto enable_tasklet;
+-		}
+-
+ 		udelay(100);
+ 	}
+ 
+@@ -2378,8 +2368,6 @@ static void hv_eject_device_work(struct work_struct *work)
+ 	hpdev = container_of(work, struct hv_pci_dev, wrk);
+ 	hbus = hpdev->hbus;
+ 
+-	WARN_ON(hpdev->state != hv_pcichild_ejecting);
+-
+ 	/*
+ 	 * Ejection can come before or after the PCI bus has been set up, so
+ 	 * attempt to find it and tear down the bus state, if it exists.  This
+@@ -2438,7 +2426,6 @@ static void hv_pci_eject_device(struct hv_pci_dev *hpdev)
+ 		return;
+ 	}
+ 
+-	hpdev->state = hv_pcichild_ejecting;
+ 	get_pcichild(hpdev);
+ 	INIT_WORK(&hpdev->wrk, hv_eject_device_work);
+ 	get_hvpcibus(hbus);
+@@ -2842,8 +2829,10 @@ static int hv_pci_enter_d0(struct hv_device *hdev)
+ 	struct pci_bus_d0_entry *d0_entry;
+ 	struct hv_pci_compl comp_pkt;
+ 	struct pci_packet *pkt;
++	bool retry = true;
+ 	int ret;
+ 
++enter_d0_retry:
+ 	/*
+ 	 * Tell the host that the bus is ready to use, and moved into the
+ 	 * powered-on state.  This includes telling the host which region
+@@ -2870,6 +2859,38 @@ static int hv_pci_enter_d0(struct hv_device *hdev)
+ 	if (ret)
+ 		goto exit;
+ 
++	/*
++	 * In certain case (Kdump) the pci device of interest was
++	 * not cleanly shut down and resource is still held on host
++	 * side, the host could return invalid device status.
++	 * We need to explicitly request host to release the resource
++	 * and try to enter D0 again.
++	 */
++	if (comp_pkt.completion_status < 0 && retry) {
++		retry = false;
++
++		dev_err(&hdev->device, "Retrying D0 Entry\n");
++
++		/*
++		 * Hv_pci_bus_exit() calls hv_send_resource_released()
++		 * to free up resources of its child devices.
++		 * In the kdump kernel we need to set the
++		 * wslot_res_allocated to 255 so it scans all child
++		 * devices to release resources allocated in the
++		 * normal kernel before panic happened.
++		 */
++		hbus->wslot_res_allocated = 255;
++
++		ret = hv_pci_bus_exit(hdev, true);
++
++		if (ret == 0) {
++			kfree(pkt);
++			goto enter_d0_retry;
++		}
++		dev_err(&hdev->device,
++			"Retrying D0 failed with ret %d\n", ret);
++	}
++
+ 	if (comp_pkt.completion_status < 0) {
+ 		dev_err(&hdev->device,
+ 			"PCI Pass-through VSP failed D0 Entry with status %x\n",
+@@ -2912,6 +2933,24 @@ static int hv_pci_query_relations(struct hv_device *hdev)
+ 	if (!ret)
+ 		ret = wait_for_response(hdev, &comp);
+ 
++	/*
++	 * In the case of fast device addition/removal, it's possible that
++	 * vmbus_sendpacket() or wait_for_response() returns -ENODEV but we
++	 * already got a PCI_BUS_RELATIONS* message from the host and the
++	 * channel callback already scheduled a work to hbus->wq, which can be
++	 * running pci_devices_present_work() -> survey_child_resources() ->
++	 * complete(&hbus->survey_event), even after hv_pci_query_relations()
++	 * exits and the stack variable 'comp' is no longer valid; as a result,
++	 * a hang or a page fault may happen when the complete() calls
++	 * raw_spin_lock_irqsave(). Flush hbus->wq before we exit from
++	 * hv_pci_query_relations() to avoid the issues. Note: if 'ret' is
++	 * -ENODEV, there can't be any more work item scheduled to hbus->wq
++	 * after the flush_workqueue(): see vmbus_onoffer_rescind() ->
++	 * vmbus_reset_channel_cb(), vmbus_rescind_cleanup() ->
++	 * channel->rescind = true.
++	 */
++	flush_workqueue(hbus->wq);
++
+ 	return ret;
+ }
+ 
+@@ -3107,7 +3146,6 @@ static int hv_pci_probe(struct hv_device *hdev,
+ 	struct hv_pcibus_device *hbus;
+ 	u16 dom_req, dom;
+ 	char *name;
+-	bool enter_d0_retry = true;
+ 	int ret;
+ 
+ 	/*
+@@ -3228,47 +3266,11 @@ static int hv_pci_probe(struct hv_device *hdev,
+ 	if (ret)
+ 		goto free_fwnode;
+ 
+-retry:
+ 	ret = hv_pci_query_relations(hdev);
+ 	if (ret)
+ 		goto free_irq_domain;
+ 
+ 	ret = hv_pci_enter_d0(hdev);
+-	/*
+-	 * In certain case (Kdump) the pci device of interest was
+-	 * not cleanly shut down and resource is still held on host
+-	 * side, the host could return invalid device status.
+-	 * We need to explicitly request host to release the resource
+-	 * and try to enter D0 again.
+-	 * Since the hv_pci_bus_exit() call releases structures
+-	 * of all its child devices, we need to start the retry from
+-	 * hv_pci_query_relations() call, requesting host to send
+-	 * the synchronous child device relations message before this
+-	 * information is needed in hv_send_resources_allocated()
+-	 * call later.
+-	 */
+-	if (ret == -EPROTO && enter_d0_retry) {
+-		enter_d0_retry = false;
+-
+-		dev_err(&hdev->device, "Retrying D0 Entry\n");
+-
+-		/*
+-		 * Hv_pci_bus_exit() calls hv_send_resources_released()
+-		 * to free up resources of its child devices.
+-		 * In the kdump kernel we need to set the
+-		 * wslot_res_allocated to 255 so it scans all child
+-		 * devices to release resources allocated in the
+-		 * normal kernel before panic happened.
+-		 */
+-		hbus->wslot_res_allocated = 255;
+-		ret = hv_pci_bus_exit(hdev, true);
+-
+-		if (ret == 0)
+-			goto retry;
+-
+-		dev_err(&hdev->device,
+-			"Retrying D0 failed with ret %d\n", ret);
+-	}
+ 	if (ret)
+ 		goto free_irq_domain;
+ 
+diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
+index 33280ca181e95..6f9c81db6e429 100644
+--- a/drivers/s390/cio/device.c
++++ b/drivers/s390/cio/device.c
+@@ -1385,6 +1385,7 @@ void ccw_device_set_notoper(struct ccw_device *cdev)
+ enum io_sch_action {
+ 	IO_SCH_UNREG,
+ 	IO_SCH_ORPH_UNREG,
++	IO_SCH_UNREG_CDEV,
+ 	IO_SCH_ATTACH,
+ 	IO_SCH_UNREG_ATTACH,
+ 	IO_SCH_ORPH_ATTACH,
+@@ -1417,7 +1418,7 @@ static enum io_sch_action sch_get_action(struct subchannel *sch)
+ 	}
+ 	if ((sch->schib.pmcw.pam & sch->opm) == 0) {
+ 		if (ccw_device_notify(cdev, CIO_NO_PATH) != NOTIFY_OK)
+-			return IO_SCH_UNREG;
++			return IO_SCH_UNREG_CDEV;
+ 		return IO_SCH_DISC;
+ 	}
+ 	if (device_is_disconnected(cdev))
+@@ -1479,6 +1480,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
+ 	case IO_SCH_ORPH_ATTACH:
+ 		ccw_device_set_disconnected(cdev);
+ 		break;
++	case IO_SCH_UNREG_CDEV:
+ 	case IO_SCH_UNREG_ATTACH:
+ 	case IO_SCH_UNREG:
+ 		if (!cdev)
+@@ -1512,6 +1514,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
+ 		if (rc)
+ 			goto out;
+ 		break;
++	case IO_SCH_UNREG_CDEV:
+ 	case IO_SCH_UNREG_ATTACH:
+ 		spin_lock_irqsave(sch->lock, flags);
+ 		if (cdev->private->flags.resuming) {
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index 5d98611dd999d..c5ff6e8c45be0 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -906,9 +906,14 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
+ 	ret = fsl_lpspi_dma_init(&pdev->dev, fsl_lpspi, controller);
+ 	if (ret == -EPROBE_DEFER)
+ 		goto out_pm_get;
+-
+ 	if (ret < 0)
+ 		dev_err(&pdev->dev, "dma setup error %d, use pio\n", ret);
++	else
++		/*
++		 * disable LPSPI module IRQ when enable DMA mode successfully,
++		 * to prevent the unexpected LPSPI module IRQ events.
++		 */
++		disable_irq(irq);
+ 
+ 	ret = devm_spi_register_controller(&pdev->dev, controller);
+ 	if (ret < 0) {
+diff --git a/drivers/target/iscsi/iscsi_target_nego.c b/drivers/target/iscsi/iscsi_target_nego.c
+index 8b40f10976ff8..3931565018880 100644
+--- a/drivers/target/iscsi/iscsi_target_nego.c
++++ b/drivers/target/iscsi/iscsi_target_nego.c
+@@ -1079,6 +1079,7 @@ int iscsi_target_locate_portal(
+ 	iscsi_target_set_sock_callbacks(conn);
+ 
+ 	login->np = np;
++	conn->tpg = NULL;
+ 
+ 	login_req = (struct iscsi_login_req *) login->req;
+ 	payload_length = ntoh24(login_req->dlength);
+@@ -1148,7 +1149,6 @@ int iscsi_target_locate_portal(
+ 	 */
+ 	sessiontype = strncmp(s_buf, DISCOVERY, 9);
+ 	if (!sessiontype) {
+-		conn->tpg = iscsit_global->discovery_tpg;
+ 		if (!login->leading_connection)
+ 			goto get_target;
+ 
+@@ -1165,9 +1165,11 @@ int iscsi_target_locate_portal(
+ 		 * Serialize access across the discovery struct iscsi_portal_group to
+ 		 * process login attempt.
+ 		 */
++		conn->tpg = iscsit_global->discovery_tpg;
+ 		if (iscsit_access_np(np, conn->tpg) < 0) {
+ 			iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+ 				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
++			conn->tpg = NULL;
+ 			ret = -1;
+ 			goto out;
+ 		}
+diff --git a/drivers/usb/gadget/udc/amd5536udc_pci.c b/drivers/usb/gadget/udc/amd5536udc_pci.c
+index c80f9bd51b750..a36913ae31f9e 100644
+--- a/drivers/usb/gadget/udc/amd5536udc_pci.c
++++ b/drivers/usb/gadget/udc/amd5536udc_pci.c
+@@ -170,6 +170,9 @@ static int udc_pci_probe(
+ 		retval = -ENODEV;
+ 		goto err_probe;
+ 	}
++
++	udc = dev;
++
+ 	return 0;
+ 
+ err_probe:
+diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
+index d1a148f0cae33..81992b9a219b2 100644
+--- a/fs/nilfs2/page.c
++++ b/fs/nilfs2/page.c
+@@ -369,7 +369,15 @@ void nilfs_clear_dirty_pages(struct address_space *mapping, bool silent)
+ 			struct page *page = pvec.pages[i];
+ 
+ 			lock_page(page);
+-			nilfs_clear_dirty_page(page, silent);
++
++			/*
++			 * This page may have been removed from the address
++			 * space by truncation or invalidation when the lock
++			 * was acquired.  Skip processing in that case.
++			 */
++			if (likely(page->mapping == mapping))
++				nilfs_clear_dirty_page(page, silent);
++
+ 			unlock_page(page);
+ 		}
+ 		pagevec_release(&pvec);
+diff --git a/fs/nilfs2/segbuf.c b/fs/nilfs2/segbuf.c
+index 1a8729eded8b1..9f435879a0487 100644
+--- a/fs/nilfs2/segbuf.c
++++ b/fs/nilfs2/segbuf.c
+@@ -101,6 +101,12 @@ int nilfs_segbuf_extend_segsum(struct nilfs_segment_buffer *segbuf)
+ 	if (unlikely(!bh))
+ 		return -ENOMEM;
+ 
++	lock_buffer(bh);
++	if (!buffer_uptodate(bh)) {
++		memset(bh->b_data, 0, bh->b_size);
++		set_buffer_uptodate(bh);
++	}
++	unlock_buffer(bh);
+ 	nilfs_segbuf_add_segsum_buffer(segbuf, bh);
+ 	return 0;
+ }
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index cdaca232ac4d6..4a910c8a56913 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -984,10 +984,13 @@ static void nilfs_segctor_fill_in_super_root(struct nilfs_sc_info *sci,
+ 	unsigned int isz, srsz;
+ 
+ 	bh_sr = NILFS_LAST_SEGBUF(&sci->sc_segbufs)->sb_super_root;
++
++	lock_buffer(bh_sr);
+ 	raw_sr = (struct nilfs_super_root *)bh_sr->b_data;
+ 	isz = nilfs->ns_inode_size;
+ 	srsz = NILFS_SR_BYTES(isz);
+ 
++	raw_sr->sr_sum = 0;  /* Ensure initialization within this update */
+ 	raw_sr->sr_bytes = cpu_to_le16(srsz);
+ 	raw_sr->sr_nongc_ctime
+ 		= cpu_to_le64(nilfs_doing_gc() ?
+@@ -1001,6 +1004,8 @@ static void nilfs_segctor_fill_in_super_root(struct nilfs_sc_info *sci,
+ 	nilfs_write_inode_common(nilfs->ns_sufile, (void *)raw_sr +
+ 				 NILFS_SR_SUFILE_OFFSET(isz), 1);
+ 	memset((void *)raw_sr + srsz, 0, nilfs->ns_blocksize - srsz);
++	set_buffer_uptodate(bh_sr);
++	unlock_buffer(bh_sr);
+ }
+ 
+ static void nilfs_redirty_inodes(struct list_head *head)
+@@ -1783,6 +1788,7 @@ static void nilfs_abort_logs(struct list_head *logs, int err)
+ 	list_for_each_entry(segbuf, logs, sb_list) {
+ 		list_for_each_entry(bh, &segbuf->sb_segsum_buffers,
+ 				    b_assoc_buffers) {
++			clear_buffer_uptodate(bh);
+ 			if (bh->b_page != bd_page) {
+ 				if (bd_page)
+ 					end_page_writeback(bd_page);
+@@ -1794,6 +1800,7 @@ static void nilfs_abort_logs(struct list_head *logs, int err)
+ 				    b_assoc_buffers) {
+ 			clear_buffer_async_write(bh);
+ 			if (bh == segbuf->sb_super_root) {
++				clear_buffer_uptodate(bh);
+ 				if (bh->b_page != bd_page) {
+ 					end_page_writeback(bd_page);
+ 					bd_page = bh->b_page;
+diff --git a/fs/nilfs2/super.c b/fs/nilfs2/super.c
+index 9aae60d9a32e6..037456e9c511a 100644
+--- a/fs/nilfs2/super.c
++++ b/fs/nilfs2/super.c
+@@ -372,10 +372,31 @@ static int nilfs_move_2nd_super(struct super_block *sb, loff_t sb2off)
+ 		goto out;
+ 	}
+ 	nsbp = (void *)nsbh->b_data + offset;
+-	memset(nsbp, 0, nilfs->ns_blocksize);
+ 
++	lock_buffer(nsbh);
+ 	if (sb2i >= 0) {
++		/*
++		 * The position of the second superblock only changes by 4KiB,
++		 * which is larger than the maximum superblock data size
++		 * (= 1KiB), so there is no need to use memmove() to allow
++		 * overlap between source and destination.
++		 */
+ 		memcpy(nsbp, nilfs->ns_sbp[sb2i], nilfs->ns_sbsize);
++
++		/*
++		 * Zero fill after copy to avoid overwriting in case of move
++		 * within the same block.
++		 */
++		memset(nsbh->b_data, 0, offset);
++		memset((void *)nsbp + nilfs->ns_sbsize, 0,
++		       nsbh->b_size - offset - nilfs->ns_sbsize);
++	} else {
++		memset(nsbh->b_data, 0, nsbh->b_size);
++	}
++	set_buffer_uptodate(nsbh);
++	unlock_buffer(nsbh);
++
++	if (sb2i >= 0) {
+ 		brelse(nilfs->ns_sbh[sb2i]);
+ 		nilfs->ns_sbh[sb2i] = nsbh;
+ 		nilfs->ns_sbp[sb2i] = nsbp;
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index 1655b7b2a5abe..682f2bf2e5259 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -26,7 +26,7 @@ static const struct file_operations proc_sys_dir_file_operations;
+ static const struct inode_operations proc_sys_dir_operations;
+ 
+ /* shared constants to be used in various sysctls */
+-const int sysctl_vals[] = { 0, 1, INT_MAX };
++const int sysctl_vals[] = { -1, 0, 1, 2, 4, 100, 200, 1000, 3000, INT_MAX };
+ EXPORT_SYMBOL(sysctl_vals);
+ 
+ /* Support for permanently empty directories */
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index 0552a9859a01e..64c93a36a3a92 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -168,11 +168,18 @@ struct gpio_irq_chip {
+ 
+ 	/**
+ 	 * @parent_handler_data:
++	 * @parent_handler_data_array:
+ 	 *
+ 	 * Data associated, and passed to, the handler for the parent
+-	 * interrupt.
++	 * interrupt. Can either be a single pointer if @per_parent_data
++	 * is false, or an array of @num_parents pointers otherwise.  If
++	 * @per_parent_data is true, @parent_handler_data_array cannot be
++	 * NULL.
+ 	 */
+-	void *parent_handler_data;
++	union {
++		void *parent_handler_data;
++		void **parent_handler_data_array;
++	};
+ 
+ 	/**
+ 	 * @num_parents:
+@@ -203,6 +210,14 @@ struct gpio_irq_chip {
+ 	 */
+ 	bool threaded;
+ 
++	/**
++	 * @per_parent_data:
++	 *
++	 * True if parent_handler_data_array describes a @num_parents
++	 * sized array to be used as parent data.
++	 */
++	bool per_parent_data;
++
+ 	/**
+ 	 * @init_hw: optional routine to initialize hardware before
+ 	 * an IRQ chip will be added. This is quite useful when
+diff --git a/include/linux/regulator/pca9450.h b/include/linux/regulator/pca9450.h
+index 71902f41c9199..0c3edff6bdfff 100644
+--- a/include/linux/regulator/pca9450.h
++++ b/include/linux/regulator/pca9450.h
+@@ -196,11 +196,11 @@ enum {
+ 
+ /* PCA9450_REG_LDO3_VOLT bits */
+ #define LDO3_EN_MASK			0xC0
+-#define LDO3OUT_MASK			0x0F
++#define LDO3OUT_MASK			0x1F
+ 
+ /* PCA9450_REG_LDO4_VOLT bits */
+ #define LDO4_EN_MASK			0xC0
+-#define LDO4OUT_MASK			0x0F
++#define LDO4OUT_MASK			0x1F
+ 
+ /* PCA9450_REG_LDO5_VOLT bits */
+ #define LDO5L_EN_MASK			0xC0
+diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
+index 4393de94cb32d..c202a72e16906 100644
+--- a/include/linux/sysctl.h
++++ b/include/linux/sysctl.h
+@@ -38,9 +38,16 @@ struct ctl_table_header;
+ struct ctl_dir;
+ 
+ /* Keep the same order as in fs/proc/proc_sysctl.c */
+-#define SYSCTL_ZERO	((void *)&sysctl_vals[0])
+-#define SYSCTL_ONE	((void *)&sysctl_vals[1])
+-#define SYSCTL_INT_MAX	((void *)&sysctl_vals[2])
++#define SYSCTL_NEG_ONE			((void *)&sysctl_vals[0])
++#define SYSCTL_ZERO			((void *)&sysctl_vals[1])
++#define SYSCTL_ONE			((void *)&sysctl_vals[2])
++#define SYSCTL_TWO			((void *)&sysctl_vals[3])
++#define SYSCTL_FOUR			((void *)&sysctl_vals[4])
++#define SYSCTL_ONE_HUNDRED		((void *)&sysctl_vals[5])
++#define SYSCTL_TWO_HUNDRED		((void *)&sysctl_vals[6])
++#define SYSCTL_ONE_THOUSAND		((void *)&sysctl_vals[7])
++#define SYSCTL_THREE_THOUSAND		((void *)&sysctl_vals[8])
++#define SYSCTL_INT_MAX			((void *)&sysctl_vals[9])
+ 
+ extern const int sysctl_vals[];
+ 
+diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h
+index c3e55a9ae5857..1ddd401a8981f 100644
+--- a/include/net/ip_tunnels.h
++++ b/include/net/ip_tunnels.h
+@@ -378,9 +378,11 @@ static inline int ip_tunnel_encap(struct sk_buff *skb, struct ip_tunnel *t,
+ static inline u8 ip_tunnel_get_dsfield(const struct iphdr *iph,
+ 				       const struct sk_buff *skb)
+ {
+-	if (skb->protocol == htons(ETH_P_IP))
++	__be16 payload_protocol = skb_protocol(skb, true);
++
++	if (payload_protocol == htons(ETH_P_IP))
+ 		return iph->tos;
+-	else if (skb->protocol == htons(ETH_P_IPV6))
++	else if (payload_protocol == htons(ETH_P_IPV6))
+ 		return ipv6_get_dsfield((const struct ipv6hdr *)iph);
+ 	else
+ 		return 0;
+@@ -389,9 +391,11 @@ static inline u8 ip_tunnel_get_dsfield(const struct iphdr *iph,
+ static inline u8 ip_tunnel_get_ttl(const struct iphdr *iph,
+ 				       const struct sk_buff *skb)
+ {
+-	if (skb->protocol == htons(ETH_P_IP))
++	__be16 payload_protocol = skb_protocol(skb, true);
++
++	if (payload_protocol == htons(ETH_P_IP))
+ 		return iph->ttl;
+-	else if (skb->protocol == htons(ETH_P_IPV6))
++	else if (payload_protocol == htons(ETH_P_IPV6))
+ 		return ((const struct ipv6hdr *)iph)->hop_limit;
+ 	else
+ 		return 0;
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 564fbe0c865fd..030237f3d82a6 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -205,7 +205,6 @@ static inline enum nft_registers nft_type_to_reg(enum nft_data_types type)
+ }
+ 
+ int nft_parse_u32_check(const struct nlattr *attr, int max, u32 *dest);
+-unsigned int nft_parse_register(const struct nlattr *attr);
+ int nft_dump_register(struct sk_buff *skb, unsigned int attr, unsigned int reg);
+ 
+ int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len);
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 726a2dbb407f1..7865db2f827e6 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1034,6 +1034,7 @@ struct xfrm_offload {
+ struct sec_path {
+ 	int			len;
+ 	int			olen;
++	int			verified_cnt;
+ 
+ 	struct xfrm_state	*xvec[XFRM_MAX_DEPTH];
+ 	struct xfrm_offload	ovec[XFRM_MAX_OFFLOAD_DEPTH];
+diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
+index 57d795365987d..453255afbe2d5 100644
+--- a/include/trace/events/writeback.h
++++ b/include/trace/events/writeback.h
+@@ -67,7 +67,7 @@ DECLARE_EVENT_CLASS(writeback_page_template,
+ 		strscpy_pad(__entry->name,
+ 			    bdi_dev_name(mapping ? inode_to_bdi(mapping->host) :
+ 					 NULL), 32);
+-		__entry->ino = mapping ? mapping->host->i_ino : 0;
++		__entry->ino = (mapping && mapping->host) ? mapping->host->i_ino : 0;
+ 		__entry->index = page->index;
+ 	),
+ 
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 58e0631143984..f169b46457693 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -581,6 +581,7 @@ struct io_sr_msg {
+ 	size_t				len;
+ 	size_t				done_io;
+ 	struct io_buffer		*kbuf;
++	void __user			*msg_control;
+ };
+ 
+ struct io_open {
+@@ -4718,10 +4719,16 @@ static int io_setup_async_msg(struct io_kiocb *req,
+ static int io_sendmsg_copy_hdr(struct io_kiocb *req,
+ 			       struct io_async_msghdr *iomsg)
+ {
++	struct io_sr_msg *sr = &req->sr_msg;
++	int ret;
++
+ 	iomsg->msg.msg_name = &iomsg->addr;
+ 	iomsg->free_iov = iomsg->fast_iov;
+-	return sendmsg_copy_msghdr(&iomsg->msg, req->sr_msg.umsg,
++	ret = sendmsg_copy_msghdr(&iomsg->msg, req->sr_msg.umsg,
+ 				   req->sr_msg.msg_flags, &iomsg->free_iov);
++	/* save msg_control as sys_sendmsg() overwrites it */
++	sr->msg_control = iomsg->msg.msg_control;
++	return ret;
+ }
+ 
+ static int io_sendmsg_prep_async(struct io_kiocb *req)
+@@ -4778,6 +4785,8 @@ static int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 		if (ret)
+ 			return ret;
+ 		kmsg = &iomsg;
++	} else {
++		kmsg->msg.msg_control = sr->msg_control;
+ 	}
+ 
+ 	flags = req->sr_msg.msg_flags;
+@@ -5044,7 +5053,7 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 	flags = req->sr_msg.msg_flags;
+ 	if (force_nonblock)
+ 		flags |= MSG_DONTWAIT;
+-	if (flags & MSG_WAITALL)
++	if (flags & MSG_WAITALL && !kmsg->msg.msg_controllen)
+ 		min_ret = iov_iter_count(&kmsg->msg.msg_iter);
+ 
+ 	ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.umsg,
+@@ -5055,6 +5064,8 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+ 		if (ret > 0 && io_net_retry(sock, flags)) {
++			kmsg->msg.msg_controllen = 0;
++			kmsg->msg.msg_control = NULL;
+ 			sr->done_io += ret;
+ 			req->flags |= REQ_F_PARTIAL_IO;
+ 			return io_setup_async_msg(req, kmsg);
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index cb80d18a49b56..06c028bdb8d4d 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -604,31 +604,30 @@ static bool btf_name_offset_valid(const struct btf *btf, u32 offset)
+ 		offset < btf->hdr.str_len;
+ }
+ 
+-static bool __btf_name_char_ok(char c, bool first, bool dot_ok)
++static bool __btf_name_char_ok(char c, bool first)
+ {
+ 	if ((first ? !isalpha(c) :
+ 		     !isalnum(c)) &&
+ 	    c != '_' &&
+-	    ((c == '.' && !dot_ok) ||
+-	      c != '.'))
++	    c != '.')
+ 		return false;
+ 	return true;
+ }
+ 
+-static bool __btf_name_valid(const struct btf *btf, u32 offset, bool dot_ok)
++static bool __btf_name_valid(const struct btf *btf, u32 offset)
+ {
+ 	/* offset must be valid */
+ 	const char *src = &btf->strings[offset];
+ 	const char *src_limit;
+ 
+-	if (!__btf_name_char_ok(*src, true, dot_ok))
++	if (!__btf_name_char_ok(*src, true))
+ 		return false;
+ 
+ 	/* set a limit on identifier length */
+ 	src_limit = src + KSYM_NAME_LEN;
+ 	src++;
+ 	while (*src && src < src_limit) {
+-		if (!__btf_name_char_ok(*src, false, dot_ok))
++		if (!__btf_name_char_ok(*src, false))
+ 			return false;
+ 		src++;
+ 	}
+@@ -636,17 +635,14 @@ static bool __btf_name_valid(const struct btf *btf, u32 offset, bool dot_ok)
+ 	return !*src;
+ }
+ 
+-/* Only C-style identifier is permitted. This can be relaxed if
+- * necessary.
+- */
+ static bool btf_name_valid_identifier(const struct btf *btf, u32 offset)
+ {
+-	return __btf_name_valid(btf, offset, false);
++	return __btf_name_valid(btf, offset);
+ }
+ 
+ static bool btf_name_valid_section(const struct btf *btf, u32 offset)
+ {
+-	return __btf_name_valid(btf, offset, true);
++	return __btf_name_valid(btf, offset);
+ }
+ 
+ static const char *__btf_name_by_offset(const struct btf *btf, u32 offset)
+@@ -3417,7 +3413,7 @@ static s32 btf_var_check_meta(struct btf_verifier_env *env,
+ 	}
+ 
+ 	if (!t->name_off ||
+-	    !__btf_name_valid(env->btf, t->name_off, true)) {
++	    !__btf_name_valid(env->btf, t->name_off)) {
+ 		btf_verifier_log_type(env, t, "Invalid name");
+ 		return -EINVAL;
+ 	}
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index fd2082a9bf81b..edb19ada0405d 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2318,6 +2318,11 @@ static void save_register_state(struct bpf_func_state *state,
+ 		scrub_spilled_slot(&state->stack[spi].slot_type[i - 1]);
+ }
+ 
++static bool is_bpf_st_mem(struct bpf_insn *insn)
++{
++	return BPF_CLASS(insn->code) == BPF_ST && BPF_MODE(insn->code) == BPF_MEM;
++}
++
+ /* check_stack_{read,write}_fixed_off functions track spill/fill of registers,
+  * stack boundary and alignment are checked in check_mem_access()
+  */
+@@ -2329,8 +2334,9 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ {
+ 	struct bpf_func_state *cur; /* state of the current function */
+ 	int i, slot = -off - 1, spi = slot / BPF_REG_SIZE, err;
+-	u32 dst_reg = env->prog->insnsi[insn_idx].dst_reg;
++	struct bpf_insn *insn = &env->prog->insnsi[insn_idx];
+ 	struct bpf_reg_state *reg = NULL;
++	u32 dst_reg = insn->dst_reg;
+ 
+ 	err = realloc_func_state(state, round_up(slot + 1, BPF_REG_SIZE),
+ 				 state->acquired_refs, true);
+@@ -2379,6 +2385,16 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 				return err;
+ 		}
+ 		save_register_state(state, spi, reg, size);
++		/* Break the relation on a narrowing spill. */
++		if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
++			state->stack[spi].spilled_ptr.id = 0;
++	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
++		   insn->imm != 0 && env->bpf_capable) {
++		struct bpf_reg_state fake_reg = {};
++
++		__mark_reg_known(&fake_reg, (u32)insn->imm);
++		fake_reg.type = SCALAR_VALUE;
++		save_register_state(state, spi, &fake_reg, size);
+ 	} else if (reg && is_spillable_regtype(reg->type)) {
+ 		/* register containing pointer is being spilled into stack */
+ 		if (size != BPF_REG_SIZE) {
+@@ -2413,7 +2429,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 			state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
+ 
+ 		/* when we zero initialize stack slots mark them as such */
+-		if (reg && register_is_null(reg)) {
++		if ((reg && register_is_null(reg)) ||
++		    (!reg && is_bpf_st_mem(insn) && insn->imm == 0)) {
+ 			/* backtracking doesn't work for STACK_ZERO yet. */
+ 			err = mark_chain_precision(env, value_regno);
+ 			if (err)
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index f123b7f642498..70ed21607e472 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -1712,7 +1712,7 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ {
+ 	struct cgroup *dcgrp = &dst_root->cgrp;
+ 	struct cgroup_subsys *ss;
+-	int ssid, i, ret;
++	int ssid, ret;
+ 	u16 dfl_disable_ss_mask = 0;
+ 
+ 	lockdep_assert_held(&cgroup_mutex);
+@@ -1756,7 +1756,8 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ 		struct cgroup_root *src_root = ss->root;
+ 		struct cgroup *scgrp = &src_root->cgrp;
+ 		struct cgroup_subsys_state *css = cgroup_css(scgrp, ss);
+-		struct css_set *cset;
++		struct css_set *cset, *cset_pos;
++		struct css_task_iter *it;
+ 
+ 		WARN_ON(!css || cgroup_css(dcgrp, ss));
+ 
+@@ -1774,9 +1775,22 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ 		css->cgroup = dcgrp;
+ 
+ 		spin_lock_irq(&css_set_lock);
+-		hash_for_each(css_set_table, i, cset, hlist)
++		WARN_ON(!list_empty(&dcgrp->e_csets[ss->id]));
++		list_for_each_entry_safe(cset, cset_pos, &scgrp->e_csets[ss->id],
++					 e_cset_node[ss->id]) {
+ 			list_move_tail(&cset->e_cset_node[ss->id],
+ 				       &dcgrp->e_csets[ss->id]);
++			/*
++			 * all css_sets of scgrp together in same order to dcgrp,
++			 * patch in-flight iterators to preserve correct iteration.
++			 * since the iterator is always advanced right away and
++			 * finished when it->cset_pos meets it->cset_head, so only
++			 * update it->cset_head is enough here.
++			 */
++			list_for_each_entry(it, &cset->task_iters, iters_node)
++				if (it->cset_head == &scgrp->e_csets[ss->id])
++					it->cset_head = &dcgrp->e_csets[ss->id];
++		}
+ 		spin_unlock_irq(&css_set_lock);
+ 
+ 		/* default hierarchy doesn't enable controllers by default */
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index d981abea0358d..a45f0dd10b9a3 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -111,15 +111,9 @@
+ static int sixty = 60;
+ #endif
+ 
+-static int __maybe_unused neg_one = -1;
+-static int __maybe_unused two = 2;
+-static int __maybe_unused four = 4;
+ static unsigned long zero_ul;
+ static unsigned long one_ul = 1;
+ static unsigned long long_max = LONG_MAX;
+-static int one_hundred = 100;
+-static int two_hundred = 200;
+-static int one_thousand = 1000;
+ #ifdef CONFIG_PRINTK
+ static int ten_thousand = 10000;
+ #endif
+@@ -2010,7 +2004,7 @@ static struct ctl_table kern_table[] = {
+ 		.maxlen		= sizeof(int),
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec_minmax,
+-		.extra1		= &neg_one,
++		.extra1		= SYSCTL_NEG_ONE,
+ 		.extra2		= SYSCTL_ONE,
+ 	},
+ #endif
+@@ -2341,7 +2335,7 @@ static struct ctl_table kern_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec_minmax_sysadmin,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &two,
++		.extra2		= SYSCTL_TWO,
+ 	},
+ #endif
+ 	{
+@@ -2601,7 +2595,7 @@ static struct ctl_table kern_table[] = {
+ 		.maxlen		= sizeof(int),
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec_minmax,
+-		.extra1		= &neg_one,
++		.extra1		= SYSCTL_NEG_ONE,
+ 	},
+ #endif
+ #ifdef CONFIG_RT_MUTEXES
+@@ -2663,7 +2657,7 @@ static struct ctl_table kern_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= perf_cpu_time_max_percent_handler,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &one_hundred,
++		.extra2		= SYSCTL_ONE_HUNDRED,
+ 	},
+ 	{
+ 		.procname	= "perf_event_max_stack",
+@@ -2681,7 +2675,7 @@ static struct ctl_table kern_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= perf_event_max_stack_handler,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &one_thousand,
++		.extra2		= SYSCTL_ONE_THOUSAND,
+ 	},
+ #endif
+ 	{
+@@ -2712,7 +2706,7 @@ static struct ctl_table kern_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= bpf_unpriv_handler,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &two,
++		.extra2		= SYSCTL_TWO,
+ 	},
+ 	{
+ 		.procname	= "bpf_stats_enabled",
+@@ -2755,7 +2749,7 @@ static struct ctl_table vm_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= overcommit_policy_handler,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &two,
++		.extra2		= SYSCTL_TWO,
+ 	},
+ 	{
+ 		.procname	= "panic_on_oom",
+@@ -2764,7 +2758,7 @@ static struct ctl_table vm_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &two,
++		.extra2		= SYSCTL_TWO,
+ 	},
+ 	{
+ 		.procname	= "oom_kill_allocating_task",
+@@ -2809,7 +2803,7 @@ static struct ctl_table vm_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= dirty_background_ratio_handler,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &one_hundred,
++		.extra2		= SYSCTL_ONE_HUNDRED,
+ 	},
+ 	{
+ 		.procname	= "dirty_background_bytes",
+@@ -2826,7 +2820,7 @@ static struct ctl_table vm_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= dirty_ratio_handler,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &one_hundred,
++		.extra2		= SYSCTL_ONE_HUNDRED,
+ 	},
+ 	{
+ 		.procname	= "dirty_bytes",
+@@ -2866,7 +2860,7 @@ static struct ctl_table vm_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &two_hundred,
++		.extra2		= SYSCTL_TWO_HUNDRED,
+ 	},
+ #ifdef CONFIG_NUMA
+ 	{
+@@ -2925,7 +2919,7 @@ static struct ctl_table vm_table[] = {
+ 		.mode		= 0200,
+ 		.proc_handler	= drop_caches_sysctl_handler,
+ 		.extra1		= SYSCTL_ONE,
+-		.extra2		= &four,
++		.extra2		= SYSCTL_FOUR,
+ 	},
+ #ifdef CONFIG_COMPACTION
+ 	{
+@@ -2942,7 +2936,7 @@ static struct ctl_table vm_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &one_hundred,
++		.extra2		= SYSCTL_ONE_HUNDRED,
+ 	},
+ 	{
+ 		.procname	= "extfrag_threshold",
+@@ -2987,7 +2981,7 @@ static struct ctl_table vm_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= watermark_scale_factor_sysctl_handler,
+ 		.extra1		= SYSCTL_ONE,
+-		.extra2		= &one_thousand,
++		.extra2		= SYSCTL_THREE_THOUSAND,
+ 	},
+ 	{
+ 		.procname	= "percpu_pagelist_fraction",
+@@ -3074,7 +3068,7 @@ static struct ctl_table vm_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= sysctl_min_unmapped_ratio_sysctl_handler,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &one_hundred,
++		.extra2		= SYSCTL_ONE_HUNDRED,
+ 	},
+ 	{
+ 		.procname	= "min_slab_ratio",
+@@ -3083,7 +3077,7 @@ static struct ctl_table vm_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= sysctl_min_slab_ratio_sysctl_handler,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &one_hundred,
++		.extra2		= SYSCTL_ONE_HUNDRED,
+ 	},
+ #endif
+ #ifdef CONFIG_SMP
+@@ -3366,7 +3360,7 @@ static struct ctl_table fs_table[] = {
+ 		.mode		= 0600,
+ 		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &two,
++		.extra2		= SYSCTL_TWO,
+ 	},
+ 	{
+ 		.procname	= "protected_regular",
+@@ -3375,7 +3369,7 @@ static struct ctl_table fs_table[] = {
+ 		.mode		= 0600,
+ 		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &two,
++		.extra2		= SYSCTL_TWO,
+ 	},
+ 	{
+ 		.procname	= "suid_dumpable",
+@@ -3384,7 +3378,7 @@ static struct ctl_table fs_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec_minmax_coredump,
+ 		.extra1		= SYSCTL_ZERO,
+-		.extra2		= &two,
++		.extra2		= SYSCTL_TWO,
+ 	},
+ #if defined(CONFIG_BINFMT_MISC) || defined(CONFIG_BINFMT_MISC_MODULE)
+ 	{
+diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
+index 2b7448ae5b478..e883d12dcb0d4 100644
+--- a/kernel/time/tick-common.c
++++ b/kernel/time/tick-common.c
+@@ -216,19 +216,8 @@ static void tick_setup_device(struct tick_device *td,
+ 		 * this cpu:
+ 		 */
+ 		if (tick_do_timer_cpu == TICK_DO_TIMER_BOOT) {
+-			ktime_t next_p;
+-			u32 rem;
+-
+ 			tick_do_timer_cpu = cpu;
+-
+-			next_p = ktime_get();
+-			div_u64_rem(next_p, TICK_NSEC, &rem);
+-			if (rem) {
+-				next_p -= rem;
+-				next_p += TICK_NSEC;
+-			}
+-
+-			tick_next_period = next_p;
++			tick_next_period = ktime_get();
+ #ifdef CONFIG_NO_HZ_FULL
+ 			/*
+ 			 * The boot CPU may be nohz_full, in which case set
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index 17dc3f53efef8..d07de3ff42acc 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -129,8 +129,19 @@ static ktime_t tick_init_jiffy_update(void)
+ 	raw_spin_lock(&jiffies_lock);
+ 	write_seqcount_begin(&jiffies_seq);
+ 	/* Did we start the jiffies update yet ? */
+-	if (last_jiffies_update == 0)
++	if (last_jiffies_update == 0) {
++		u32 rem;
++
++		/*
++		 * Ensure that the tick is aligned to a multiple of
++		 * TICK_NSEC.
++		 */
++		div_u64_rem(tick_next_period, TICK_NSEC, &rem);
++		if (rem)
++			tick_next_period += TICK_NSEC - rem;
++
+ 		last_jiffies_update = tick_next_period;
++	}
+ 	period = last_jiffies_update;
+ 	write_seqcount_end(&jiffies_seq);
+ 	raw_spin_unlock(&jiffies_lock);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 482ec6606b7b5..70526400e05c9 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2178,10 +2178,12 @@ void tracing_reset_online_cpus(struct array_buffer *buf)
+ }
+ 
+ /* Must have trace_types_lock held */
+-void tracing_reset_all_online_cpus(void)
++void tracing_reset_all_online_cpus_unlocked(void)
+ {
+ 	struct trace_array *tr;
+ 
++	lockdep_assert_held(&trace_types_lock);
++
+ 	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
+ 		if (!tr->clear_trace)
+ 			continue;
+@@ -2193,6 +2195,13 @@ void tracing_reset_all_online_cpus(void)
+ 	}
+ }
+ 
++void tracing_reset_all_online_cpus(void)
++{
++	mutex_lock(&trace_types_lock);
++	tracing_reset_all_online_cpus_unlocked();
++	mutex_unlock(&trace_types_lock);
++}
++
+ /*
+  * The tgid_map array maps from pid to tgid; i.e. the value stored at index i
+  * is the tgid last observed corresponding to pid=i.
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 37f616bf5fa93..e5b505b5b7d09 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -725,6 +725,7 @@ int tracing_is_enabled(void);
+ void tracing_reset_online_cpus(struct array_buffer *buf);
+ void tracing_reset_current(int cpu);
+ void tracing_reset_all_online_cpus(void);
++void tracing_reset_all_online_cpus_unlocked(void);
+ int tracing_open_generic(struct inode *inode, struct file *filp);
+ int tracing_open_generic_tr(struct inode *inode, struct file *filp);
+ bool tracing_is_disabled(void);
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index bac13f24a96e5..f8ed66f38175b 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -2661,7 +2661,7 @@ static void trace_module_remove_events(struct module *mod)
+ 	 * over from this module may be passed to the new module events and
+ 	 * unexpected results may occur.
+ 	 */
+-	tracing_reset_all_online_cpus();
++	tracing_reset_all_online_cpus_unlocked();
+ }
+ 
+ static int trace_module_notify(struct notifier_block *self,
+diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
+index 18291ab356570..ee174de0b8f68 100644
+--- a/kernel/trace/trace_events_synth.c
++++ b/kernel/trace/trace_events_synth.c
+@@ -1363,7 +1363,6 @@ int synth_event_delete(const char *event_name)
+ 	mutex_unlock(&event_mutex);
+ 
+ 	if (mod) {
+-		mutex_lock(&trace_types_lock);
+ 		/*
+ 		 * It is safest to reset the ring buffer if the module
+ 		 * being unloaded registered any events that were
+@@ -1375,7 +1374,6 @@ int synth_event_delete(const char *event_name)
+ 		 * occur.
+ 		 */
+ 		tracing_reset_all_online_cpus();
+-		mutex_unlock(&trace_types_lock);
+ 	}
+ 
+ 	return ret;
+diff --git a/mm/memfd.c b/mm/memfd.c
+index fae4142f7d254..278e5636623e6 100644
+--- a/mm/memfd.c
++++ b/mm/memfd.c
+@@ -330,7 +330,8 @@ SYSCALL_DEFINE2(memfd_create,
+ 
+ 	if (flags & MFD_ALLOW_SEALING) {
+ 		file_seals = memfd_file_seals_ptr(file);
+-		*file_seals &= ~F_SEAL_SEAL;
++		if (file_seals)
++			*file_seals &= ~F_SEAL_SEAL;
+ 	}
+ 
+ 	fd_install(fd, file);
+diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
+index 84257678160a3..dc50764b01807 100644
+--- a/net/ipv4/esp4_offload.c
++++ b/net/ipv4/esp4_offload.c
+@@ -338,6 +338,9 @@ static int esp_xmit(struct xfrm_state *x, struct sk_buff *skb,  netdev_features_
+ 
+ 	secpath_reset(skb);
+ 
++	if (skb_needs_linearize(skb, skb->dev->features) &&
++	    __skb_linearize(skb))
++		return -ENOMEM;
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c
+index ad2afeef4f106..eac206a290d05 100644
+--- a/net/ipv4/xfrm4_input.c
++++ b/net/ipv4/xfrm4_input.c
+@@ -164,6 +164,7 @@ drop:
+ 	kfree_skb(skb);
+ 	return 0;
+ }
++EXPORT_SYMBOL(xfrm4_udp_encap_rcv);
+ 
+ int xfrm4_rcv(struct sk_buff *skb)
+ {
+diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
+index 7608be04d0f58..87dbd53c29a6e 100644
+--- a/net/ipv6/esp6_offload.c
++++ b/net/ipv6/esp6_offload.c
+@@ -372,6 +372,9 @@ static int esp6_xmit(struct xfrm_state *x, struct sk_buff *skb,  netdev_features
+ 
+ 	secpath_reset(skb);
+ 
++	if (skb_needs_linearize(skb, skb->dev->features) &&
++	    __skb_linearize(skb))
++		return -ENOMEM;
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c
+index 04cbeefd89828..4907ab241d6be 100644
+--- a/net/ipv6/xfrm6_input.c
++++ b/net/ipv6/xfrm6_input.c
+@@ -86,6 +86,9 @@ int xfrm6_udp_encap_rcv(struct sock *sk, struct sk_buff *skb)
+ 	__be32 *udpdata32;
+ 	__u16 encap_type = up->encap_type;
+ 
++	if (skb->protocol == htons(ETH_P_IP))
++		return xfrm4_udp_encap_rcv(sk, skb);
++
+ 	/* if this is not encapsulated socket, then just return now */
+ 	if (!encap_type)
+ 		return 1;
+diff --git a/net/netfilter/ipvs/ip_vs_xmit.c b/net/netfilter/ipvs/ip_vs_xmit.c
+index d2e5a8f644b80..cd2130e98836b 100644
+--- a/net/netfilter/ipvs/ip_vs_xmit.c
++++ b/net/netfilter/ipvs/ip_vs_xmit.c
+@@ -1225,6 +1225,7 @@ ip_vs_tunnel_xmit(struct sk_buff *skb, struct ip_vs_conn *cp,
+ 	skb->transport_header = skb->network_header;
+ 
+ 	skb_set_inner_ipproto(skb, next_protocol);
++	skb_set_inner_mac_header(skb, skb_inner_network_offset(skb));
+ 
+ 	if (tun_type == IP_VS_CONN_F_TUNNEL_TYPE_GUE) {
+ 		bool check = false;
+@@ -1373,6 +1374,7 @@ ip_vs_tunnel_xmit_v6(struct sk_buff *skb, struct ip_vs_conn *cp,
+ 	skb->transport_header = skb->network_header;
+ 
+ 	skb_set_inner_ipproto(skb, next_protocol);
++	skb_set_inner_mac_header(skb, skb_inner_network_offset(skb));
+ 
+ 	if (tun_type == IP_VS_CONN_F_TUNNEL_TYPE_GUE) {
+ 		bool check = false;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index fe51cedd9cc3c..e59cad1f7a36b 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -5460,7 +5460,8 @@ static int nf_tables_newsetelem(struct net *net, struct sock *nlsk,
+ 	if (IS_ERR(set))
+ 		return PTR_ERR(set);
+ 
+-	if (!list_empty(&set->bindings) && set->flags & NFT_SET_CONSTANT)
++	if (!list_empty(&set->bindings) &&
++	    (set->flags & (NFT_SET_CONSTANT | NFT_SET_ANONYMOUS)))
+ 		return -EBUSY;
+ 
+ 	nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
+@@ -5666,7 +5667,9 @@ static int nf_tables_delsetelem(struct net *net, struct sock *nlsk,
+ 	set = nft_set_lookup(ctx.table, nla[NFTA_SET_ELEM_LIST_SET], genmask);
+ 	if (IS_ERR(set))
+ 		return PTR_ERR(set);
+-	if (!list_empty(&set->bindings) && set->flags & NFT_SET_CONSTANT)
++
++	if (!list_empty(&set->bindings) &&
++	    (set->flags & (NFT_SET_CONSTANT | NFT_SET_ANONYMOUS)))
+ 		return -EBUSY;
+ 
+ 	if (nla[NFTA_SET_ELEM_LIST_ELEMENTS] == NULL) {
+@@ -8480,28 +8483,24 @@ int nft_parse_u32_check(const struct nlattr *attr, int max, u32 *dest)
+ }
+ EXPORT_SYMBOL_GPL(nft_parse_u32_check);
+ 
+-/**
+- *	nft_parse_register - parse a register value from a netlink attribute
+- *
+- *	@attr: netlink attribute
+- *
+- *	Parse and translate a register value from a netlink attribute.
+- *	Registers used to be 128 bit wide, these register numbers will be
+- *	mapped to the corresponding 32 bit register numbers.
+- */
+-unsigned int nft_parse_register(const struct nlattr *attr)
++static int nft_parse_register(const struct nlattr *attr, u32 *preg)
+ {
+ 	unsigned int reg;
+ 
+ 	reg = ntohl(nla_get_be32(attr));
+ 	switch (reg) {
+ 	case NFT_REG_VERDICT...NFT_REG_4:
+-		return reg * NFT_REG_SIZE / NFT_REG32_SIZE;
++		*preg = reg * NFT_REG_SIZE / NFT_REG32_SIZE;
++		break;
++	case NFT_REG32_00...NFT_REG32_15:
++		*preg = reg + NFT_REG_SIZE / NFT_REG32_SIZE - NFT_REG32_00;
++		break;
+ 	default:
+-		return reg + NFT_REG_SIZE / NFT_REG32_SIZE - NFT_REG32_00;
++		return -ERANGE;
+ 	}
++
++	return 0;
+ }
+-EXPORT_SYMBOL_GPL(nft_parse_register);
+ 
+ /**
+  *	nft_dump_register - dump a register value to a netlink attribute
+@@ -8551,7 +8550,10 @@ int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len)
+ 	u32 reg;
+ 	int err;
+ 
+-	reg = nft_parse_register(attr);
++	err = nft_parse_register(attr, &reg);
++	if (err < 0)
++		return err;
++
+ 	err = nft_validate_register_load(reg, len);
+ 	if (err < 0)
+ 		return err;
+@@ -8620,7 +8622,10 @@ int nft_parse_register_store(const struct nft_ctx *ctx,
+ 	int err;
+ 	u32 reg;
+ 
+-	reg = nft_parse_register(attr);
++	err = nft_parse_register(attr, &reg);
++	if (err < 0)
++		return err;
++
+ 	err = nft_validate_register_store(ctx, reg, data, type, len);
+ 	if (err < 0)
+ 		return err;
+@@ -8978,7 +8983,9 @@ static int __net_init nf_tables_init_net(struct net *net)
+ 
+ static void __net_exit nf_tables_pre_exit_net(struct net *net)
+ {
++	mutex_lock(&net->nft.commit_mutex);
+ 	__nft_release_hooks(net);
++	mutex_unlock(&net->nft.commit_mutex);
+ }
+ 
+ static void __net_exit nf_tables_exit_net(struct net *net)
+diff --git a/net/netfilter/nfnetlink_osf.c b/net/netfilter/nfnetlink_osf.c
+index 51e3953b414c0..9dbaa5ce24e51 100644
+--- a/net/netfilter/nfnetlink_osf.c
++++ b/net/netfilter/nfnetlink_osf.c
+@@ -440,3 +440,4 @@ module_init(nfnl_osf_init);
+ module_exit(nfnl_osf_fini);
+ 
+ MODULE_LICENSE("GPL");
++MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_OSF);
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 30cf0673d6c19..eb5934eb3adfc 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1949,12 +1949,16 @@ static void nft_pipapo_walk(const struct nft_ctx *ctx, struct nft_set *set,
+ 			    struct nft_set_iter *iter)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
++	struct net *net = read_pnet(&set->net);
+ 	struct nft_pipapo_match *m;
+ 	struct nft_pipapo_field *f;
+ 	int i, r;
+ 
+ 	rcu_read_lock();
+-	m = rcu_dereference(priv->match);
++	if (iter->genmask == nft_genmask_cur(net))
++		m = rcu_dereference(priv->match);
++	else
++		m = priv->clone;
+ 
+ 	if (unlikely(!m))
+ 		goto out;
+diff --git a/net/netfilter/xt_osf.c b/net/netfilter/xt_osf.c
+index e1990baf3a3b7..dc9485854002a 100644
+--- a/net/netfilter/xt_osf.c
++++ b/net/netfilter/xt_osf.c
+@@ -71,4 +71,3 @@ MODULE_AUTHOR("Evgeniy Polyakov <zbr@ioremap.net>");
+ MODULE_DESCRIPTION("Passive OS fingerprint matching.");
+ MODULE_ALIAS("ipt_osf");
+ MODULE_ALIAS("ip6t_osf");
+-MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_OSF);
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 2084724c36ad3..fb50e3f3283f9 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1044,12 +1044,12 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ 
+ 	if (parent == NULL) {
+ 		unsigned int i, num_q, ingress;
++		struct netdev_queue *dev_queue;
+ 
+ 		ingress = 0;
+ 		num_q = dev->num_tx_queues;
+ 		if ((q && q->flags & TCQ_F_INGRESS) ||
+ 		    (new && new->flags & TCQ_F_INGRESS)) {
+-			num_q = 1;
+ 			ingress = 1;
+ 			if (!dev_ingress_queue(dev)) {
+ 				NL_SET_ERR_MSG(extack, "Device does not have an ingress queue");
+@@ -1065,18 +1065,18 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ 		if (new && new->ops->attach)
+ 			goto skip;
+ 
+-		for (i = 0; i < num_q; i++) {
+-			struct netdev_queue *dev_queue = dev_ingress_queue(dev);
+-
+-			if (!ingress)
++		if (!ingress) {
++			for (i = 0; i < num_q; i++) {
+ 				dev_queue = netdev_get_tx_queue(dev, i);
++				old = dev_graft_qdisc(dev_queue, new);
+ 
+-			old = dev_graft_qdisc(dev_queue, new);
+-			if (new && i > 0)
+-				qdisc_refcount_inc(new);
+-
+-			if (!ingress)
++				if (new && i > 0)
++					qdisc_refcount_inc(new);
+ 				qdisc_put(old);
++			}
++		} else {
++			dev_queue = dev_ingress_queue(dev);
++			old = dev_graft_qdisc(dev_queue, new);
+ 		}
+ 
+ skip:
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index adc5407fd5d58..be42b1196786b 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -969,6 +969,7 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	sch_tree_lock(sch);
+ 	/* backup q->clg and q->loss_model */
+ 	old_clg = q->clg;
+ 	old_loss_model = q->loss_model;
+@@ -977,7 +978,7 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 		ret = get_loss_clg(q, tb[TCA_NETEM_LOSS]);
+ 		if (ret) {
+ 			q->loss_model = old_loss_model;
+-			return ret;
++			goto unlock;
+ 		}
+ 	} else {
+ 		q->loss_model = CLG_RANDOM;
+@@ -1044,6 +1045,8 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 	/* capping jitter to the range acceptable by tabledist() */
+ 	q->jitter = min_t(s64, abs(q->jitter), INT_MAX);
+ 
++unlock:
++	sch_tree_unlock(sch);
+ 	return ret;
+ 
+ get_table_failure:
+@@ -1053,7 +1056,8 @@ get_table_failure:
+ 	 */
+ 	q->clg = old_clg;
+ 	q->loss_model = old_loss_model;
+-	return ret;
++
++	goto unlock;
+ }
+ 
+ static int netem_init(struct Qdisc *sch, struct nlattr *opt,
+diff --git a/net/xfrm/Makefile b/net/xfrm/Makefile
+index 494aa744bfb9a..08a2870fdd36f 100644
+--- a/net/xfrm/Makefile
++++ b/net/xfrm/Makefile
+@@ -3,6 +3,8 @@
+ # Makefile for the XFRM subsystem.
+ #
+ 
++xfrm_interface-$(CONFIG_XFRM_INTERFACE) += xfrm_interface_core.o
++
+ obj-$(CONFIG_XFRM) := xfrm_policy.o xfrm_state.o xfrm_hash.o \
+ 		      xfrm_input.o xfrm_output.o \
+ 		      xfrm_sysctl.o xfrm_replay.o xfrm_device.o
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index fef99a1c5df10..f3bccab983f05 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -129,6 +129,7 @@ struct sec_path *secpath_set(struct sk_buff *skb)
+ 	memset(sp->ovec, 0, sizeof(sp->ovec));
+ 	sp->olen = 0;
+ 	sp->len = 0;
++	sp->verified_cnt = 0;
+ 
+ 	return sp;
+ }
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+deleted file mode 100644
+index da518b4ca84c6..0000000000000
+--- a/net/xfrm/xfrm_interface.c
++++ /dev/null
+@@ -1,1038 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- *	XFRM virtual interface
+- *
+- *	Copyright (C) 2018 secunet Security Networks AG
+- *
+- *	Author:
+- *	Steffen Klassert <steffen.klassert@secunet.com>
+- */
+-
+-#include <linux/module.h>
+-#include <linux/capability.h>
+-#include <linux/errno.h>
+-#include <linux/types.h>
+-#include <linux/sockios.h>
+-#include <linux/icmp.h>
+-#include <linux/if.h>
+-#include <linux/in.h>
+-#include <linux/ip.h>
+-#include <linux/net.h>
+-#include <linux/in6.h>
+-#include <linux/netdevice.h>
+-#include <linux/if_link.h>
+-#include <linux/if_arp.h>
+-#include <linux/icmpv6.h>
+-#include <linux/init.h>
+-#include <linux/route.h>
+-#include <linux/rtnetlink.h>
+-#include <linux/netfilter_ipv6.h>
+-#include <linux/slab.h>
+-#include <linux/hash.h>
+-
+-#include <linux/uaccess.h>
+-#include <linux/atomic.h>
+-
+-#include <net/icmp.h>
+-#include <net/ip.h>
+-#include <net/ipv6.h>
+-#include <net/ip6_route.h>
+-#include <net/ip_tunnels.h>
+-#include <net/addrconf.h>
+-#include <net/xfrm.h>
+-#include <net/net_namespace.h>
+-#include <net/netns/generic.h>
+-#include <linux/etherdevice.h>
+-
+-static int xfrmi_dev_init(struct net_device *dev);
+-static void xfrmi_dev_setup(struct net_device *dev);
+-static struct rtnl_link_ops xfrmi_link_ops __read_mostly;
+-static unsigned int xfrmi_net_id __read_mostly;
+-static const struct net_device_ops xfrmi_netdev_ops;
+-
+-#define XFRMI_HASH_BITS	8
+-#define XFRMI_HASH_SIZE	BIT(XFRMI_HASH_BITS)
+-
+-struct xfrmi_net {
+-	/* lists for storing interfaces in use */
+-	struct xfrm_if __rcu *xfrmi[XFRMI_HASH_SIZE];
+-};
+-
+-#define for_each_xfrmi_rcu(start, xi) \
+-	for (xi = rcu_dereference(start); xi; xi = rcu_dereference(xi->next))
+-
+-static u32 xfrmi_hash(u32 if_id)
+-{
+-	return hash_32(if_id, XFRMI_HASH_BITS);
+-}
+-
+-static struct xfrm_if *xfrmi_lookup(struct net *net, struct xfrm_state *x)
+-{
+-	struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id);
+-	struct xfrm_if *xi;
+-
+-	for_each_xfrmi_rcu(xfrmn->xfrmi[xfrmi_hash(x->if_id)], xi) {
+-		if (x->if_id == xi->p.if_id &&
+-		    (xi->dev->flags & IFF_UP))
+-			return xi;
+-	}
+-
+-	return NULL;
+-}
+-
+-static struct xfrm_if *xfrmi_decode_session(struct sk_buff *skb,
+-					    unsigned short family)
+-{
+-	struct net_device *dev;
+-	int ifindex = 0;
+-
+-	if (!secpath_exists(skb) || !skb->dev)
+-		return NULL;
+-
+-	switch (family) {
+-	case AF_INET6:
+-		ifindex = inet6_sdif(skb);
+-		break;
+-	case AF_INET:
+-		ifindex = inet_sdif(skb);
+-		break;
+-	}
+-
+-	if (ifindex) {
+-		struct net *net = xs_net(xfrm_input_state(skb));
+-
+-		dev = dev_get_by_index_rcu(net, ifindex);
+-	} else {
+-		dev = skb->dev;
+-	}
+-
+-	if (!dev || !(dev->flags & IFF_UP))
+-		return NULL;
+-	if (dev->netdev_ops != &xfrmi_netdev_ops)
+-		return NULL;
+-
+-	return netdev_priv(dev);
+-}
+-
+-static void xfrmi_link(struct xfrmi_net *xfrmn, struct xfrm_if *xi)
+-{
+-	struct xfrm_if __rcu **xip = &xfrmn->xfrmi[xfrmi_hash(xi->p.if_id)];
+-
+-	rcu_assign_pointer(xi->next , rtnl_dereference(*xip));
+-	rcu_assign_pointer(*xip, xi);
+-}
+-
+-static void xfrmi_unlink(struct xfrmi_net *xfrmn, struct xfrm_if *xi)
+-{
+-	struct xfrm_if __rcu **xip;
+-	struct xfrm_if *iter;
+-
+-	for (xip = &xfrmn->xfrmi[xfrmi_hash(xi->p.if_id)];
+-	     (iter = rtnl_dereference(*xip)) != NULL;
+-	     xip = &iter->next) {
+-		if (xi == iter) {
+-			rcu_assign_pointer(*xip, xi->next);
+-			break;
+-		}
+-	}
+-}
+-
+-static void xfrmi_dev_free(struct net_device *dev)
+-{
+-	struct xfrm_if *xi = netdev_priv(dev);
+-
+-	gro_cells_destroy(&xi->gro_cells);
+-	free_percpu(dev->tstats);
+-}
+-
+-static int xfrmi_create(struct net_device *dev)
+-{
+-	struct xfrm_if *xi = netdev_priv(dev);
+-	struct net *net = dev_net(dev);
+-	struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id);
+-	int err;
+-
+-	dev->rtnl_link_ops = &xfrmi_link_ops;
+-	err = register_netdevice(dev);
+-	if (err < 0)
+-		goto out;
+-
+-	xfrmi_link(xfrmn, xi);
+-
+-	return 0;
+-
+-out:
+-	return err;
+-}
+-
+-static struct xfrm_if *xfrmi_locate(struct net *net, struct xfrm_if_parms *p)
+-{
+-	struct xfrm_if __rcu **xip;
+-	struct xfrm_if *xi;
+-	struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id);
+-
+-	for (xip = &xfrmn->xfrmi[xfrmi_hash(p->if_id)];
+-	     (xi = rtnl_dereference(*xip)) != NULL;
+-	     xip = &xi->next)
+-		if (xi->p.if_id == p->if_id)
+-			return xi;
+-
+-	return NULL;
+-}
+-
+-static void xfrmi_dev_uninit(struct net_device *dev)
+-{
+-	struct xfrm_if *xi = netdev_priv(dev);
+-	struct xfrmi_net *xfrmn = net_generic(xi->net, xfrmi_net_id);
+-
+-	xfrmi_unlink(xfrmn, xi);
+-}
+-
+-static void xfrmi_scrub_packet(struct sk_buff *skb, bool xnet)
+-{
+-	skb->tstamp = 0;
+-	skb->pkt_type = PACKET_HOST;
+-	skb->skb_iif = 0;
+-	skb->ignore_df = 0;
+-	skb_dst_drop(skb);
+-	nf_reset_ct(skb);
+-	nf_reset_trace(skb);
+-
+-	if (!xnet)
+-		return;
+-
+-	ipvs_reset(skb);
+-	secpath_reset(skb);
+-	skb_orphan(skb);
+-	skb->mark = 0;
+-}
+-
+-static int xfrmi_rcv_cb(struct sk_buff *skb, int err)
+-{
+-	const struct xfrm_mode *inner_mode;
+-	struct net_device *dev;
+-	struct xfrm_state *x;
+-	struct xfrm_if *xi;
+-	bool xnet;
+-
+-	if (err && !secpath_exists(skb))
+-		return 0;
+-
+-	x = xfrm_input_state(skb);
+-
+-	xi = xfrmi_lookup(xs_net(x), x);
+-	if (!xi)
+-		return 1;
+-
+-	dev = xi->dev;
+-	skb->dev = dev;
+-
+-	if (err) {
+-		dev->stats.rx_errors++;
+-		dev->stats.rx_dropped++;
+-
+-		return 0;
+-	}
+-
+-	xnet = !net_eq(xi->net, dev_net(skb->dev));
+-
+-	if (xnet) {
+-		inner_mode = &x->inner_mode;
+-
+-		if (x->sel.family == AF_UNSPEC) {
+-			inner_mode = xfrm_ip2inner_mode(x, XFRM_MODE_SKB_CB(skb)->protocol);
+-			if (inner_mode == NULL) {
+-				XFRM_INC_STATS(dev_net(skb->dev),
+-					       LINUX_MIB_XFRMINSTATEMODEERROR);
+-				return -EINVAL;
+-			}
+-		}
+-
+-		if (!xfrm_policy_check(NULL, XFRM_POLICY_IN, skb,
+-				       inner_mode->family))
+-			return -EPERM;
+-	}
+-
+-	xfrmi_scrub_packet(skb, xnet);
+-	dev_sw_netstats_rx_add(dev, skb->len);
+-
+-	return 0;
+-}
+-
+-static int
+-xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
+-{
+-	struct xfrm_if *xi = netdev_priv(dev);
+-	struct net_device_stats *stats = &xi->dev->stats;
+-	struct dst_entry *dst = skb_dst(skb);
+-	unsigned int length = skb->len;
+-	struct net_device *tdev;
+-	struct xfrm_state *x;
+-	int err = -1;
+-	int mtu;
+-
+-	dst_hold(dst);
+-	dst = xfrm_lookup_with_ifid(xi->net, dst, fl, NULL, 0, xi->p.if_id);
+-	if (IS_ERR(dst)) {
+-		err = PTR_ERR(dst);
+-		dst = NULL;
+-		goto tx_err_link_failure;
+-	}
+-
+-	x = dst->xfrm;
+-	if (!x)
+-		goto tx_err_link_failure;
+-
+-	if (x->if_id != xi->p.if_id)
+-		goto tx_err_link_failure;
+-
+-	tdev = dst->dev;
+-
+-	if (tdev == dev) {
+-		stats->collisions++;
+-		net_warn_ratelimited("%s: Local routing loop detected!\n",
+-				     dev->name);
+-		goto tx_err_dst_release;
+-	}
+-
+-	mtu = dst_mtu(dst);
+-	if (skb->len > mtu) {
+-		skb_dst_update_pmtu_no_confirm(skb, mtu);
+-
+-		if (skb->protocol == htons(ETH_P_IPV6)) {
+-			if (mtu < IPV6_MIN_MTU)
+-				mtu = IPV6_MIN_MTU;
+-
+-			if (skb->len > 1280)
+-				icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+-			else
+-				goto xmit;
+-		} else {
+-			if (!(ip_hdr(skb)->frag_off & htons(IP_DF)))
+-				goto xmit;
+-			icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+-				      htonl(mtu));
+-		}
+-
+-		dst_release(dst);
+-		return -EMSGSIZE;
+-	}
+-
+-xmit:
+-	xfrmi_scrub_packet(skb, !net_eq(xi->net, dev_net(dev)));
+-	skb_dst_set(skb, dst);
+-	skb->dev = tdev;
+-
+-	err = dst_output(xi->net, skb->sk, skb);
+-	if (net_xmit_eval(err) == 0) {
+-		struct pcpu_sw_netstats *tstats = this_cpu_ptr(dev->tstats);
+-
+-		u64_stats_update_begin(&tstats->syncp);
+-		tstats->tx_bytes += length;
+-		tstats->tx_packets++;
+-		u64_stats_update_end(&tstats->syncp);
+-	} else {
+-		stats->tx_errors++;
+-		stats->tx_aborted_errors++;
+-	}
+-
+-	return 0;
+-tx_err_link_failure:
+-	stats->tx_carrier_errors++;
+-	dst_link_failure(skb);
+-tx_err_dst_release:
+-	dst_release(dst);
+-	return err;
+-}
+-
+-static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev)
+-{
+-	struct xfrm_if *xi = netdev_priv(dev);
+-	struct net_device_stats *stats = &xi->dev->stats;
+-	struct dst_entry *dst = skb_dst(skb);
+-	struct flowi fl;
+-	int ret;
+-
+-	memset(&fl, 0, sizeof(fl));
+-
+-	switch (skb->protocol) {
+-	case htons(ETH_P_IPV6):
+-		xfrm_decode_session(skb, &fl, AF_INET6);
+-		memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
+-		if (!dst) {
+-			fl.u.ip6.flowi6_oif = dev->ifindex;
+-			fl.u.ip6.flowi6_flags |= FLOWI_FLAG_ANYSRC;
+-			dst = ip6_route_output(dev_net(dev), NULL, &fl.u.ip6);
+-			if (dst->error) {
+-				dst_release(dst);
+-				stats->tx_carrier_errors++;
+-				goto tx_err;
+-			}
+-			skb_dst_set(skb, dst);
+-		}
+-		break;
+-	case htons(ETH_P_IP):
+-		xfrm_decode_session(skb, &fl, AF_INET);
+-		memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+-		if (!dst) {
+-			struct rtable *rt;
+-
+-			fl.u.ip4.flowi4_oif = dev->ifindex;
+-			fl.u.ip4.flowi4_flags |= FLOWI_FLAG_ANYSRC;
+-			rt = __ip_route_output_key(dev_net(dev), &fl.u.ip4);
+-			if (IS_ERR(rt)) {
+-				stats->tx_carrier_errors++;
+-				goto tx_err;
+-			}
+-			skb_dst_set(skb, &rt->dst);
+-		}
+-		break;
+-	default:
+-		goto tx_err;
+-	}
+-
+-	fl.flowi_oif = xi->p.link;
+-
+-	ret = xfrmi_xmit2(skb, dev, &fl);
+-	if (ret < 0)
+-		goto tx_err;
+-
+-	return NETDEV_TX_OK;
+-
+-tx_err:
+-	stats->tx_errors++;
+-	stats->tx_dropped++;
+-	kfree_skb(skb);
+-	return NETDEV_TX_OK;
+-}
+-
+-static int xfrmi4_err(struct sk_buff *skb, u32 info)
+-{
+-	const struct iphdr *iph = (const struct iphdr *)skb->data;
+-	struct net *net = dev_net(skb->dev);
+-	int protocol = iph->protocol;
+-	struct ip_comp_hdr *ipch;
+-	struct ip_esp_hdr *esph;
+-	struct ip_auth_hdr *ah ;
+-	struct xfrm_state *x;
+-	struct xfrm_if *xi;
+-	__be32 spi;
+-
+-	switch (protocol) {
+-	case IPPROTO_ESP:
+-		esph = (struct ip_esp_hdr *)(skb->data+(iph->ihl<<2));
+-		spi = esph->spi;
+-		break;
+-	case IPPROTO_AH:
+-		ah = (struct ip_auth_hdr *)(skb->data+(iph->ihl<<2));
+-		spi = ah->spi;
+-		break;
+-	case IPPROTO_COMP:
+-		ipch = (struct ip_comp_hdr *)(skb->data+(iph->ihl<<2));
+-		spi = htonl(ntohs(ipch->cpi));
+-		break;
+-	default:
+-		return 0;
+-	}
+-
+-	switch (icmp_hdr(skb)->type) {
+-	case ICMP_DEST_UNREACH:
+-		if (icmp_hdr(skb)->code != ICMP_FRAG_NEEDED)
+-			return 0;
+-	case ICMP_REDIRECT:
+-		break;
+-	default:
+-		return 0;
+-	}
+-
+-	x = xfrm_state_lookup(net, skb->mark, (const xfrm_address_t *)&iph->daddr,
+-			      spi, protocol, AF_INET);
+-	if (!x)
+-		return 0;
+-
+-	xi = xfrmi_lookup(net, x);
+-	if (!xi) {
+-		xfrm_state_put(x);
+-		return -1;
+-	}
+-
+-	if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH)
+-		ipv4_update_pmtu(skb, net, info, 0, protocol);
+-	else
+-		ipv4_redirect(skb, net, 0, protocol);
+-	xfrm_state_put(x);
+-
+-	return 0;
+-}
+-
+-static int xfrmi6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+-		    u8 type, u8 code, int offset, __be32 info)
+-{
+-	const struct ipv6hdr *iph = (const struct ipv6hdr *)skb->data;
+-	struct net *net = dev_net(skb->dev);
+-	int protocol = iph->nexthdr;
+-	struct ip_comp_hdr *ipch;
+-	struct ip_esp_hdr *esph;
+-	struct ip_auth_hdr *ah;
+-	struct xfrm_state *x;
+-	struct xfrm_if *xi;
+-	__be32 spi;
+-
+-	switch (protocol) {
+-	case IPPROTO_ESP:
+-		esph = (struct ip_esp_hdr *)(skb->data + offset);
+-		spi = esph->spi;
+-		break;
+-	case IPPROTO_AH:
+-		ah = (struct ip_auth_hdr *)(skb->data + offset);
+-		spi = ah->spi;
+-		break;
+-	case IPPROTO_COMP:
+-		ipch = (struct ip_comp_hdr *)(skb->data + offset);
+-		spi = htonl(ntohs(ipch->cpi));
+-		break;
+-	default:
+-		return 0;
+-	}
+-
+-	if (type != ICMPV6_PKT_TOOBIG &&
+-	    type != NDISC_REDIRECT)
+-		return 0;
+-
+-	x = xfrm_state_lookup(net, skb->mark, (const xfrm_address_t *)&iph->daddr,
+-			      spi, protocol, AF_INET6);
+-	if (!x)
+-		return 0;
+-
+-	xi = xfrmi_lookup(net, x);
+-	if (!xi) {
+-		xfrm_state_put(x);
+-		return -1;
+-	}
+-
+-	if (type == NDISC_REDIRECT)
+-		ip6_redirect(skb, net, skb->dev->ifindex, 0,
+-			     sock_net_uid(net, NULL));
+-	else
+-		ip6_update_pmtu(skb, net, info, 0, 0, sock_net_uid(net, NULL));
+-	xfrm_state_put(x);
+-
+-	return 0;
+-}
+-
+-static int xfrmi_change(struct xfrm_if *xi, const struct xfrm_if_parms *p)
+-{
+-	if (xi->p.link != p->link)
+-		return -EINVAL;
+-
+-	xi->p.if_id = p->if_id;
+-
+-	return 0;
+-}
+-
+-static int xfrmi_update(struct xfrm_if *xi, struct xfrm_if_parms *p)
+-{
+-	struct net *net = xi->net;
+-	struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id);
+-	int err;
+-
+-	xfrmi_unlink(xfrmn, xi);
+-	synchronize_net();
+-	err = xfrmi_change(xi, p);
+-	xfrmi_link(xfrmn, xi);
+-	netdev_state_change(xi->dev);
+-	return err;
+-}
+-
+-static void xfrmi_get_stats64(struct net_device *dev,
+-			       struct rtnl_link_stats64 *s)
+-{
+-	dev_fetch_sw_netstats(s, dev->tstats);
+-
+-	s->rx_dropped = dev->stats.rx_dropped;
+-	s->tx_dropped = dev->stats.tx_dropped;
+-}
+-
+-static int xfrmi_get_iflink(const struct net_device *dev)
+-{
+-	struct xfrm_if *xi = netdev_priv(dev);
+-
+-	return xi->p.link;
+-}
+-
+-
+-static const struct net_device_ops xfrmi_netdev_ops = {
+-	.ndo_init	= xfrmi_dev_init,
+-	.ndo_uninit	= xfrmi_dev_uninit,
+-	.ndo_start_xmit = xfrmi_xmit,
+-	.ndo_get_stats64 = xfrmi_get_stats64,
+-	.ndo_get_iflink = xfrmi_get_iflink,
+-};
+-
+-static void xfrmi_dev_setup(struct net_device *dev)
+-{
+-	dev->netdev_ops 	= &xfrmi_netdev_ops;
+-	dev->header_ops		= &ip_tunnel_header_ops;
+-	dev->type		= ARPHRD_NONE;
+-	dev->mtu		= ETH_DATA_LEN;
+-	dev->min_mtu		= ETH_MIN_MTU;
+-	dev->max_mtu		= IP_MAX_MTU;
+-	dev->flags 		= IFF_NOARP;
+-	dev->needs_free_netdev	= true;
+-	dev->priv_destructor	= xfrmi_dev_free;
+-	netif_keep_dst(dev);
+-
+-	eth_broadcast_addr(dev->broadcast);
+-}
+-
+-static int xfrmi_dev_init(struct net_device *dev)
+-{
+-	struct xfrm_if *xi = netdev_priv(dev);
+-	struct net_device *phydev = __dev_get_by_index(xi->net, xi->p.link);
+-	int err;
+-
+-	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
+-	if (!dev->tstats)
+-		return -ENOMEM;
+-
+-	err = gro_cells_init(&xi->gro_cells, dev);
+-	if (err) {
+-		free_percpu(dev->tstats);
+-		return err;
+-	}
+-
+-	dev->features |= NETIF_F_LLTX;
+-
+-	if (phydev) {
+-		dev->needed_headroom = phydev->needed_headroom;
+-		dev->needed_tailroom = phydev->needed_tailroom;
+-
+-		if (is_zero_ether_addr(dev->dev_addr))
+-			eth_hw_addr_inherit(dev, phydev);
+-		if (is_zero_ether_addr(dev->broadcast))
+-			memcpy(dev->broadcast, phydev->broadcast,
+-			       dev->addr_len);
+-	} else {
+-		eth_hw_addr_random(dev);
+-		eth_broadcast_addr(dev->broadcast);
+-	}
+-
+-	return 0;
+-}
+-
+-static int xfrmi_validate(struct nlattr *tb[], struct nlattr *data[],
+-			 struct netlink_ext_ack *extack)
+-{
+-	return 0;
+-}
+-
+-static void xfrmi_netlink_parms(struct nlattr *data[],
+-			       struct xfrm_if_parms *parms)
+-{
+-	memset(parms, 0, sizeof(*parms));
+-
+-	if (!data)
+-		return;
+-
+-	if (data[IFLA_XFRM_LINK])
+-		parms->link = nla_get_u32(data[IFLA_XFRM_LINK]);
+-
+-	if (data[IFLA_XFRM_IF_ID])
+-		parms->if_id = nla_get_u32(data[IFLA_XFRM_IF_ID]);
+-}
+-
+-static int xfrmi_newlink(struct net *src_net, struct net_device *dev,
+-			struct nlattr *tb[], struct nlattr *data[],
+-			struct netlink_ext_ack *extack)
+-{
+-	struct net *net = dev_net(dev);
+-	struct xfrm_if_parms p = {};
+-	struct xfrm_if *xi;
+-	int err;
+-
+-	xfrmi_netlink_parms(data, &p);
+-	if (!p.if_id) {
+-		NL_SET_ERR_MSG(extack, "if_id must be non zero");
+-		return -EINVAL;
+-	}
+-
+-	xi = xfrmi_locate(net, &p);
+-	if (xi)
+-		return -EEXIST;
+-
+-	xi = netdev_priv(dev);
+-	xi->p = p;
+-	xi->net = net;
+-	xi->dev = dev;
+-
+-	err = xfrmi_create(dev);
+-	return err;
+-}
+-
+-static void xfrmi_dellink(struct net_device *dev, struct list_head *head)
+-{
+-	unregister_netdevice_queue(dev, head);
+-}
+-
+-static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
+-			   struct nlattr *data[],
+-			   struct netlink_ext_ack *extack)
+-{
+-	struct xfrm_if *xi = netdev_priv(dev);
+-	struct net *net = xi->net;
+-	struct xfrm_if_parms p = {};
+-
+-	xfrmi_netlink_parms(data, &p);
+-	if (!p.if_id) {
+-		NL_SET_ERR_MSG(extack, "if_id must be non zero");
+-		return -EINVAL;
+-	}
+-
+-	xi = xfrmi_locate(net, &p);
+-	if (!xi) {
+-		xi = netdev_priv(dev);
+-	} else {
+-		if (xi->dev != dev)
+-			return -EEXIST;
+-	}
+-
+-	return xfrmi_update(xi, &p);
+-}
+-
+-static size_t xfrmi_get_size(const struct net_device *dev)
+-{
+-	return
+-		/* IFLA_XFRM_LINK */
+-		nla_total_size(4) +
+-		/* IFLA_XFRM_IF_ID */
+-		nla_total_size(4) +
+-		0;
+-}
+-
+-static int xfrmi_fill_info(struct sk_buff *skb, const struct net_device *dev)
+-{
+-	struct xfrm_if *xi = netdev_priv(dev);
+-	struct xfrm_if_parms *parm = &xi->p;
+-
+-	if (nla_put_u32(skb, IFLA_XFRM_LINK, parm->link) ||
+-	    nla_put_u32(skb, IFLA_XFRM_IF_ID, parm->if_id))
+-		goto nla_put_failure;
+-	return 0;
+-
+-nla_put_failure:
+-	return -EMSGSIZE;
+-}
+-
+-static struct net *xfrmi_get_link_net(const struct net_device *dev)
+-{
+-	struct xfrm_if *xi = netdev_priv(dev);
+-
+-	return xi->net;
+-}
+-
+-static const struct nla_policy xfrmi_policy[IFLA_XFRM_MAX + 1] = {
+-	[IFLA_XFRM_LINK]	= { .type = NLA_U32 },
+-	[IFLA_XFRM_IF_ID]	= { .type = NLA_U32 },
+-};
+-
+-static struct rtnl_link_ops xfrmi_link_ops __read_mostly = {
+-	.kind		= "xfrm",
+-	.maxtype	= IFLA_XFRM_MAX,
+-	.policy		= xfrmi_policy,
+-	.priv_size	= sizeof(struct xfrm_if),
+-	.setup		= xfrmi_dev_setup,
+-	.validate	= xfrmi_validate,
+-	.newlink	= xfrmi_newlink,
+-	.dellink	= xfrmi_dellink,
+-	.changelink	= xfrmi_changelink,
+-	.get_size	= xfrmi_get_size,
+-	.fill_info	= xfrmi_fill_info,
+-	.get_link_net	= xfrmi_get_link_net,
+-};
+-
+-static void __net_exit xfrmi_exit_batch_net(struct list_head *net_exit_list)
+-{
+-	struct net *net;
+-	LIST_HEAD(list);
+-
+-	rtnl_lock();
+-	list_for_each_entry(net, net_exit_list, exit_list) {
+-		struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id);
+-		struct xfrm_if __rcu **xip;
+-		struct xfrm_if *xi;
+-		int i;
+-
+-		for (i = 0; i < XFRMI_HASH_SIZE; i++) {
+-			for (xip = &xfrmn->xfrmi[i];
+-			     (xi = rtnl_dereference(*xip)) != NULL;
+-			     xip = &xi->next)
+-				unregister_netdevice_queue(xi->dev, &list);
+-		}
+-	}
+-	unregister_netdevice_many(&list);
+-	rtnl_unlock();
+-}
+-
+-static struct pernet_operations xfrmi_net_ops = {
+-	.exit_batch = xfrmi_exit_batch_net,
+-	.id   = &xfrmi_net_id,
+-	.size = sizeof(struct xfrmi_net),
+-};
+-
+-static struct xfrm6_protocol xfrmi_esp6_protocol __read_mostly = {
+-	.handler	=	xfrm6_rcv,
+-	.input_handler	=	xfrm_input,
+-	.cb_handler	=	xfrmi_rcv_cb,
+-	.err_handler	=	xfrmi6_err,
+-	.priority	=	10,
+-};
+-
+-static struct xfrm6_protocol xfrmi_ah6_protocol __read_mostly = {
+-	.handler	=	xfrm6_rcv,
+-	.input_handler	=	xfrm_input,
+-	.cb_handler	=	xfrmi_rcv_cb,
+-	.err_handler	=	xfrmi6_err,
+-	.priority	=	10,
+-};
+-
+-static struct xfrm6_protocol xfrmi_ipcomp6_protocol __read_mostly = {
+-	.handler	=	xfrm6_rcv,
+-	.input_handler	=	xfrm_input,
+-	.cb_handler	=	xfrmi_rcv_cb,
+-	.err_handler	=	xfrmi6_err,
+-	.priority	=	10,
+-};
+-
+-#if IS_REACHABLE(CONFIG_INET6_XFRM_TUNNEL)
+-static int xfrmi6_rcv_tunnel(struct sk_buff *skb)
+-{
+-	const xfrm_address_t *saddr;
+-	__be32 spi;
+-
+-	saddr = (const xfrm_address_t *)&ipv6_hdr(skb)->saddr;
+-	spi = xfrm6_tunnel_spi_lookup(dev_net(skb->dev), saddr);
+-
+-	return xfrm6_rcv_spi(skb, IPPROTO_IPV6, spi, NULL);
+-}
+-
+-static struct xfrm6_tunnel xfrmi_ipv6_handler __read_mostly = {
+-	.handler	=	xfrmi6_rcv_tunnel,
+-	.cb_handler	=	xfrmi_rcv_cb,
+-	.err_handler	=	xfrmi6_err,
+-	.priority	=	2,
+-};
+-
+-static struct xfrm6_tunnel xfrmi_ip6ip_handler __read_mostly = {
+-	.handler	=	xfrmi6_rcv_tunnel,
+-	.cb_handler	=	xfrmi_rcv_cb,
+-	.err_handler	=	xfrmi6_err,
+-	.priority	=	2,
+-};
+-#endif
+-
+-static struct xfrm4_protocol xfrmi_esp4_protocol __read_mostly = {
+-	.handler	=	xfrm4_rcv,
+-	.input_handler	=	xfrm_input,
+-	.cb_handler	=	xfrmi_rcv_cb,
+-	.err_handler	=	xfrmi4_err,
+-	.priority	=	10,
+-};
+-
+-static struct xfrm4_protocol xfrmi_ah4_protocol __read_mostly = {
+-	.handler	=	xfrm4_rcv,
+-	.input_handler	=	xfrm_input,
+-	.cb_handler	=	xfrmi_rcv_cb,
+-	.err_handler	=	xfrmi4_err,
+-	.priority	=	10,
+-};
+-
+-static struct xfrm4_protocol xfrmi_ipcomp4_protocol __read_mostly = {
+-	.handler	=	xfrm4_rcv,
+-	.input_handler	=	xfrm_input,
+-	.cb_handler	=	xfrmi_rcv_cb,
+-	.err_handler	=	xfrmi4_err,
+-	.priority	=	10,
+-};
+-
+-#if IS_REACHABLE(CONFIG_INET_XFRM_TUNNEL)
+-static int xfrmi4_rcv_tunnel(struct sk_buff *skb)
+-{
+-	return xfrm4_rcv_spi(skb, IPPROTO_IPIP, ip_hdr(skb)->saddr);
+-}
+-
+-static struct xfrm_tunnel xfrmi_ipip_handler __read_mostly = {
+-	.handler	=	xfrmi4_rcv_tunnel,
+-	.cb_handler	=	xfrmi_rcv_cb,
+-	.err_handler	=	xfrmi4_err,
+-	.priority	=	3,
+-};
+-
+-static struct xfrm_tunnel xfrmi_ipip6_handler __read_mostly = {
+-	.handler	=	xfrmi4_rcv_tunnel,
+-	.cb_handler	=	xfrmi_rcv_cb,
+-	.err_handler	=	xfrmi4_err,
+-	.priority	=	2,
+-};
+-#endif
+-
+-static int __init xfrmi4_init(void)
+-{
+-	int err;
+-
+-	err = xfrm4_protocol_register(&xfrmi_esp4_protocol, IPPROTO_ESP);
+-	if (err < 0)
+-		goto xfrm_proto_esp_failed;
+-	err = xfrm4_protocol_register(&xfrmi_ah4_protocol, IPPROTO_AH);
+-	if (err < 0)
+-		goto xfrm_proto_ah_failed;
+-	err = xfrm4_protocol_register(&xfrmi_ipcomp4_protocol, IPPROTO_COMP);
+-	if (err < 0)
+-		goto xfrm_proto_comp_failed;
+-#if IS_REACHABLE(CONFIG_INET_XFRM_TUNNEL)
+-	err = xfrm4_tunnel_register(&xfrmi_ipip_handler, AF_INET);
+-	if (err < 0)
+-		goto xfrm_tunnel_ipip_failed;
+-	err = xfrm4_tunnel_register(&xfrmi_ipip6_handler, AF_INET6);
+-	if (err < 0)
+-		goto xfrm_tunnel_ipip6_failed;
+-#endif
+-
+-	return 0;
+-
+-#if IS_REACHABLE(CONFIG_INET_XFRM_TUNNEL)
+-xfrm_tunnel_ipip6_failed:
+-	xfrm4_tunnel_deregister(&xfrmi_ipip_handler, AF_INET);
+-xfrm_tunnel_ipip_failed:
+-	xfrm4_protocol_deregister(&xfrmi_ipcomp4_protocol, IPPROTO_COMP);
+-#endif
+-xfrm_proto_comp_failed:
+-	xfrm4_protocol_deregister(&xfrmi_ah4_protocol, IPPROTO_AH);
+-xfrm_proto_ah_failed:
+-	xfrm4_protocol_deregister(&xfrmi_esp4_protocol, IPPROTO_ESP);
+-xfrm_proto_esp_failed:
+-	return err;
+-}
+-
+-static void xfrmi4_fini(void)
+-{
+-#if IS_REACHABLE(CONFIG_INET_XFRM_TUNNEL)
+-	xfrm4_tunnel_deregister(&xfrmi_ipip6_handler, AF_INET6);
+-	xfrm4_tunnel_deregister(&xfrmi_ipip_handler, AF_INET);
+-#endif
+-	xfrm4_protocol_deregister(&xfrmi_ipcomp4_protocol, IPPROTO_COMP);
+-	xfrm4_protocol_deregister(&xfrmi_ah4_protocol, IPPROTO_AH);
+-	xfrm4_protocol_deregister(&xfrmi_esp4_protocol, IPPROTO_ESP);
+-}
+-
+-static int __init xfrmi6_init(void)
+-{
+-	int err;
+-
+-	err = xfrm6_protocol_register(&xfrmi_esp6_protocol, IPPROTO_ESP);
+-	if (err < 0)
+-		goto xfrm_proto_esp_failed;
+-	err = xfrm6_protocol_register(&xfrmi_ah6_protocol, IPPROTO_AH);
+-	if (err < 0)
+-		goto xfrm_proto_ah_failed;
+-	err = xfrm6_protocol_register(&xfrmi_ipcomp6_protocol, IPPROTO_COMP);
+-	if (err < 0)
+-		goto xfrm_proto_comp_failed;
+-#if IS_REACHABLE(CONFIG_INET6_XFRM_TUNNEL)
+-	err = xfrm6_tunnel_register(&xfrmi_ipv6_handler, AF_INET6);
+-	if (err < 0)
+-		goto xfrm_tunnel_ipv6_failed;
+-	err = xfrm6_tunnel_register(&xfrmi_ip6ip_handler, AF_INET);
+-	if (err < 0)
+-		goto xfrm_tunnel_ip6ip_failed;
+-#endif
+-
+-	return 0;
+-
+-#if IS_REACHABLE(CONFIG_INET6_XFRM_TUNNEL)
+-xfrm_tunnel_ip6ip_failed:
+-	xfrm6_tunnel_deregister(&xfrmi_ipv6_handler, AF_INET6);
+-xfrm_tunnel_ipv6_failed:
+-	xfrm6_protocol_deregister(&xfrmi_ipcomp6_protocol, IPPROTO_COMP);
+-#endif
+-xfrm_proto_comp_failed:
+-	xfrm6_protocol_deregister(&xfrmi_ah6_protocol, IPPROTO_AH);
+-xfrm_proto_ah_failed:
+-	xfrm6_protocol_deregister(&xfrmi_esp6_protocol, IPPROTO_ESP);
+-xfrm_proto_esp_failed:
+-	return err;
+-}
+-
+-static void xfrmi6_fini(void)
+-{
+-#if IS_REACHABLE(CONFIG_INET6_XFRM_TUNNEL)
+-	xfrm6_tunnel_deregister(&xfrmi_ip6ip_handler, AF_INET);
+-	xfrm6_tunnel_deregister(&xfrmi_ipv6_handler, AF_INET6);
+-#endif
+-	xfrm6_protocol_deregister(&xfrmi_ipcomp6_protocol, IPPROTO_COMP);
+-	xfrm6_protocol_deregister(&xfrmi_ah6_protocol, IPPROTO_AH);
+-	xfrm6_protocol_deregister(&xfrmi_esp6_protocol, IPPROTO_ESP);
+-}
+-
+-static const struct xfrm_if_cb xfrm_if_cb = {
+-	.decode_session =	xfrmi_decode_session,
+-};
+-
+-static int __init xfrmi_init(void)
+-{
+-	const char *msg;
+-	int err;
+-
+-	pr_info("IPsec XFRM device driver\n");
+-
+-	msg = "tunnel device";
+-	err = register_pernet_device(&xfrmi_net_ops);
+-	if (err < 0)
+-		goto pernet_dev_failed;
+-
+-	msg = "xfrm4 protocols";
+-	err = xfrmi4_init();
+-	if (err < 0)
+-		goto xfrmi4_failed;
+-
+-	msg = "xfrm6 protocols";
+-	err = xfrmi6_init();
+-	if (err < 0)
+-		goto xfrmi6_failed;
+-
+-
+-	msg = "netlink interface";
+-	err = rtnl_link_register(&xfrmi_link_ops);
+-	if (err < 0)
+-		goto rtnl_link_failed;
+-
+-	xfrm_if_register_cb(&xfrm_if_cb);
+-
+-	return err;
+-
+-rtnl_link_failed:
+-	xfrmi6_fini();
+-xfrmi6_failed:
+-	xfrmi4_fini();
+-xfrmi4_failed:
+-	unregister_pernet_device(&xfrmi_net_ops);
+-pernet_dev_failed:
+-	pr_err("xfrmi init: failed to register %s\n", msg);
+-	return err;
+-}
+-
+-static void __exit xfrmi_fini(void)
+-{
+-	xfrm_if_unregister_cb();
+-	rtnl_link_unregister(&xfrmi_link_ops);
+-	xfrmi4_fini();
+-	xfrmi6_fini();
+-	unregister_pernet_device(&xfrmi_net_ops);
+-}
+-
+-module_init(xfrmi_init);
+-module_exit(xfrmi_fini);
+-MODULE_LICENSE("GPL");
+-MODULE_ALIAS_RTNL_LINK("xfrm");
+-MODULE_ALIAS_NETDEV("xfrm0");
+-MODULE_AUTHOR("Steffen Klassert");
+-MODULE_DESCRIPTION("XFRM virtual interface");
+diff --git a/net/xfrm/xfrm_interface_core.c b/net/xfrm/xfrm_interface_core.c
+new file mode 100644
+index 0000000000000..e4f21a6924153
+--- /dev/null
++++ b/net/xfrm/xfrm_interface_core.c
+@@ -0,0 +1,1084 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ *	XFRM virtual interface
++ *
++ *	Copyright (C) 2018 secunet Security Networks AG
++ *
++ *	Author:
++ *	Steffen Klassert <steffen.klassert@secunet.com>
++ */
++
++#include <linux/module.h>
++#include <linux/capability.h>
++#include <linux/errno.h>
++#include <linux/types.h>
++#include <linux/sockios.h>
++#include <linux/icmp.h>
++#include <linux/if.h>
++#include <linux/in.h>
++#include <linux/ip.h>
++#include <linux/net.h>
++#include <linux/in6.h>
++#include <linux/netdevice.h>
++#include <linux/if_link.h>
++#include <linux/if_arp.h>
++#include <linux/icmpv6.h>
++#include <linux/init.h>
++#include <linux/route.h>
++#include <linux/rtnetlink.h>
++#include <linux/netfilter_ipv6.h>
++#include <linux/slab.h>
++#include <linux/hash.h>
++
++#include <linux/uaccess.h>
++#include <linux/atomic.h>
++
++#include <net/icmp.h>
++#include <net/ip.h>
++#include <net/ipv6.h>
++#include <net/ip6_route.h>
++#include <net/ip_tunnels.h>
++#include <net/addrconf.h>
++#include <net/xfrm.h>
++#include <net/net_namespace.h>
++#include <net/netns/generic.h>
++#include <linux/etherdevice.h>
++
++static int xfrmi_dev_init(struct net_device *dev);
++static void xfrmi_dev_setup(struct net_device *dev);
++static struct rtnl_link_ops xfrmi_link_ops __read_mostly;
++static unsigned int xfrmi_net_id __read_mostly;
++static const struct net_device_ops xfrmi_netdev_ops;
++
++#define XFRMI_HASH_BITS	8
++#define XFRMI_HASH_SIZE	BIT(XFRMI_HASH_BITS)
++
++struct xfrmi_net {
++	/* lists for storing interfaces in use */
++	struct xfrm_if __rcu *xfrmi[XFRMI_HASH_SIZE];
++};
++
++#define for_each_xfrmi_rcu(start, xi) \
++	for (xi = rcu_dereference(start); xi; xi = rcu_dereference(xi->next))
++
++static u32 xfrmi_hash(u32 if_id)
++{
++	return hash_32(if_id, XFRMI_HASH_BITS);
++}
++
++static struct xfrm_if *xfrmi_lookup(struct net *net, struct xfrm_state *x)
++{
++	struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id);
++	struct xfrm_if *xi;
++
++	for_each_xfrmi_rcu(xfrmn->xfrmi[xfrmi_hash(x->if_id)], xi) {
++		if (x->if_id == xi->p.if_id &&
++		    (xi->dev->flags & IFF_UP))
++			return xi;
++	}
++
++	return NULL;
++}
++
++static struct xfrm_if *xfrmi_decode_session(struct sk_buff *skb,
++					    unsigned short family)
++{
++	struct net_device *dev;
++	int ifindex = 0;
++
++	if (!secpath_exists(skb) || !skb->dev)
++		return NULL;
++
++	switch (family) {
++	case AF_INET6:
++		ifindex = inet6_sdif(skb);
++		break;
++	case AF_INET:
++		ifindex = inet_sdif(skb);
++		break;
++	}
++
++	if (ifindex) {
++		struct net *net = xs_net(xfrm_input_state(skb));
++
++		dev = dev_get_by_index_rcu(net, ifindex);
++	} else {
++		dev = skb->dev;
++	}
++
++	if (!dev || !(dev->flags & IFF_UP))
++		return NULL;
++	if (dev->netdev_ops != &xfrmi_netdev_ops)
++		return NULL;
++
++	return netdev_priv(dev);
++}
++
++static void xfrmi_link(struct xfrmi_net *xfrmn, struct xfrm_if *xi)
++{
++	struct xfrm_if __rcu **xip = &xfrmn->xfrmi[xfrmi_hash(xi->p.if_id)];
++
++	rcu_assign_pointer(xi->next , rtnl_dereference(*xip));
++	rcu_assign_pointer(*xip, xi);
++}
++
++static void xfrmi_unlink(struct xfrmi_net *xfrmn, struct xfrm_if *xi)
++{
++	struct xfrm_if __rcu **xip;
++	struct xfrm_if *iter;
++
++	for (xip = &xfrmn->xfrmi[xfrmi_hash(xi->p.if_id)];
++	     (iter = rtnl_dereference(*xip)) != NULL;
++	     xip = &iter->next) {
++		if (xi == iter) {
++			rcu_assign_pointer(*xip, xi->next);
++			break;
++		}
++	}
++}
++
++static void xfrmi_dev_free(struct net_device *dev)
++{
++	struct xfrm_if *xi = netdev_priv(dev);
++
++	gro_cells_destroy(&xi->gro_cells);
++	free_percpu(dev->tstats);
++}
++
++static int xfrmi_create(struct net_device *dev)
++{
++	struct xfrm_if *xi = netdev_priv(dev);
++	struct net *net = dev_net(dev);
++	struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id);
++	int err;
++
++	dev->rtnl_link_ops = &xfrmi_link_ops;
++	err = register_netdevice(dev);
++	if (err < 0)
++		goto out;
++
++	xfrmi_link(xfrmn, xi);
++
++	return 0;
++
++out:
++	return err;
++}
++
++static struct xfrm_if *xfrmi_locate(struct net *net, struct xfrm_if_parms *p)
++{
++	struct xfrm_if __rcu **xip;
++	struct xfrm_if *xi;
++	struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id);
++
++	for (xip = &xfrmn->xfrmi[xfrmi_hash(p->if_id)];
++	     (xi = rtnl_dereference(*xip)) != NULL;
++	     xip = &xi->next)
++		if (xi->p.if_id == p->if_id)
++			return xi;
++
++	return NULL;
++}
++
++static void xfrmi_dev_uninit(struct net_device *dev)
++{
++	struct xfrm_if *xi = netdev_priv(dev);
++	struct xfrmi_net *xfrmn = net_generic(xi->net, xfrmi_net_id);
++
++	xfrmi_unlink(xfrmn, xi);
++}
++
++static void xfrmi_scrub_packet(struct sk_buff *skb, bool xnet)
++{
++	skb->tstamp = 0;
++	skb->pkt_type = PACKET_HOST;
++	skb->skb_iif = 0;
++	skb->ignore_df = 0;
++	skb_dst_drop(skb);
++	nf_reset_ct(skb);
++	nf_reset_trace(skb);
++
++	if (!xnet)
++		return;
++
++	ipvs_reset(skb);
++	secpath_reset(skb);
++	skb_orphan(skb);
++	skb->mark = 0;
++}
++
++static int xfrmi_input(struct sk_buff *skb, int nexthdr, __be32 spi,
++		       int encap_type, unsigned short family)
++{
++	struct sec_path *sp;
++
++	sp = skb_sec_path(skb);
++	if (sp && (sp->len || sp->olen) &&
++	    !xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family))
++		goto discard;
++
++	XFRM_SPI_SKB_CB(skb)->family = family;
++	if (family == AF_INET) {
++		XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct iphdr, daddr);
++		XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4 = NULL;
++	} else {
++		XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct ipv6hdr, daddr);
++		XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = NULL;
++	}
++
++	return xfrm_input(skb, nexthdr, spi, encap_type);
++discard:
++	kfree_skb(skb);
++	return 0;
++}
++
++static int xfrmi4_rcv(struct sk_buff *skb)
++{
++	return xfrmi_input(skb, ip_hdr(skb)->protocol, 0, 0, AF_INET);
++}
++
++static int xfrmi6_rcv(struct sk_buff *skb)
++{
++	return xfrmi_input(skb, skb_network_header(skb)[IP6CB(skb)->nhoff],
++			   0, 0, AF_INET6);
++}
++
++static int xfrmi4_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
++{
++	return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET);
++}
++
++static int xfrmi6_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
++{
++	return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET6);
++}
++
++static int xfrmi_rcv_cb(struct sk_buff *skb, int err)
++{
++	const struct xfrm_mode *inner_mode;
++	struct net_device *dev;
++	struct xfrm_state *x;
++	struct xfrm_if *xi;
++	bool xnet;
++
++	if (err && !secpath_exists(skb))
++		return 0;
++
++	x = xfrm_input_state(skb);
++
++	xi = xfrmi_lookup(xs_net(x), x);
++	if (!xi)
++		return 1;
++
++	dev = xi->dev;
++	skb->dev = dev;
++
++	if (err) {
++		dev->stats.rx_errors++;
++		dev->stats.rx_dropped++;
++
++		return 0;
++	}
++
++	xnet = !net_eq(xi->net, dev_net(skb->dev));
++
++	if (xnet) {
++		inner_mode = &x->inner_mode;
++
++		if (x->sel.family == AF_UNSPEC) {
++			inner_mode = xfrm_ip2inner_mode(x, XFRM_MODE_SKB_CB(skb)->protocol);
++			if (inner_mode == NULL) {
++				XFRM_INC_STATS(dev_net(skb->dev),
++					       LINUX_MIB_XFRMINSTATEMODEERROR);
++				return -EINVAL;
++			}
++		}
++
++		if (!xfrm_policy_check(NULL, XFRM_POLICY_IN, skb,
++				       inner_mode->family))
++			return -EPERM;
++	}
++
++	xfrmi_scrub_packet(skb, xnet);
++	dev_sw_netstats_rx_add(dev, skb->len);
++
++	return 0;
++}
++
++static int
++xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
++{
++	struct xfrm_if *xi = netdev_priv(dev);
++	struct net_device_stats *stats = &xi->dev->stats;
++	struct dst_entry *dst = skb_dst(skb);
++	unsigned int length = skb->len;
++	struct net_device *tdev;
++	struct xfrm_state *x;
++	int err = -1;
++	int mtu;
++
++	dst_hold(dst);
++	dst = xfrm_lookup_with_ifid(xi->net, dst, fl, NULL, 0, xi->p.if_id);
++	if (IS_ERR(dst)) {
++		err = PTR_ERR(dst);
++		dst = NULL;
++		goto tx_err_link_failure;
++	}
++
++	x = dst->xfrm;
++	if (!x)
++		goto tx_err_link_failure;
++
++	if (x->if_id != xi->p.if_id)
++		goto tx_err_link_failure;
++
++	tdev = dst->dev;
++
++	if (tdev == dev) {
++		stats->collisions++;
++		net_warn_ratelimited("%s: Local routing loop detected!\n",
++				     dev->name);
++		goto tx_err_dst_release;
++	}
++
++	mtu = dst_mtu(dst);
++	if (skb->len > mtu) {
++		skb_dst_update_pmtu_no_confirm(skb, mtu);
++
++		if (skb->protocol == htons(ETH_P_IPV6)) {
++			if (mtu < IPV6_MIN_MTU)
++				mtu = IPV6_MIN_MTU;
++
++			if (skb->len > 1280)
++				icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++			else
++				goto xmit;
++		} else {
++			if (!(ip_hdr(skb)->frag_off & htons(IP_DF)))
++				goto xmit;
++			icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
++				      htonl(mtu));
++		}
++
++		dst_release(dst);
++		return -EMSGSIZE;
++	}
++
++xmit:
++	xfrmi_scrub_packet(skb, !net_eq(xi->net, dev_net(dev)));
++	skb_dst_set(skb, dst);
++	skb->dev = tdev;
++
++	err = dst_output(xi->net, skb->sk, skb);
++	if (net_xmit_eval(err) == 0) {
++		struct pcpu_sw_netstats *tstats = this_cpu_ptr(dev->tstats);
++
++		u64_stats_update_begin(&tstats->syncp);
++		tstats->tx_bytes += length;
++		tstats->tx_packets++;
++		u64_stats_update_end(&tstats->syncp);
++	} else {
++		stats->tx_errors++;
++		stats->tx_aborted_errors++;
++	}
++
++	return 0;
++tx_err_link_failure:
++	stats->tx_carrier_errors++;
++	dst_link_failure(skb);
++tx_err_dst_release:
++	dst_release(dst);
++	return err;
++}
++
++static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev)
++{
++	struct xfrm_if *xi = netdev_priv(dev);
++	struct net_device_stats *stats = &xi->dev->stats;
++	struct dst_entry *dst = skb_dst(skb);
++	struct flowi fl;
++	int ret;
++
++	memset(&fl, 0, sizeof(fl));
++
++	switch (skb->protocol) {
++	case htons(ETH_P_IPV6):
++		xfrm_decode_session(skb, &fl, AF_INET6);
++		memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
++		if (!dst) {
++			fl.u.ip6.flowi6_oif = dev->ifindex;
++			fl.u.ip6.flowi6_flags |= FLOWI_FLAG_ANYSRC;
++			dst = ip6_route_output(dev_net(dev), NULL, &fl.u.ip6);
++			if (dst->error) {
++				dst_release(dst);
++				stats->tx_carrier_errors++;
++				goto tx_err;
++			}
++			skb_dst_set(skb, dst);
++		}
++		break;
++	case htons(ETH_P_IP):
++		xfrm_decode_session(skb, &fl, AF_INET);
++		memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
++		if (!dst) {
++			struct rtable *rt;
++
++			fl.u.ip4.flowi4_oif = dev->ifindex;
++			fl.u.ip4.flowi4_flags |= FLOWI_FLAG_ANYSRC;
++			rt = __ip_route_output_key(dev_net(dev), &fl.u.ip4);
++			if (IS_ERR(rt)) {
++				stats->tx_carrier_errors++;
++				goto tx_err;
++			}
++			skb_dst_set(skb, &rt->dst);
++		}
++		break;
++	default:
++		goto tx_err;
++	}
++
++	fl.flowi_oif = xi->p.link;
++
++	ret = xfrmi_xmit2(skb, dev, &fl);
++	if (ret < 0)
++		goto tx_err;
++
++	return NETDEV_TX_OK;
++
++tx_err:
++	stats->tx_errors++;
++	stats->tx_dropped++;
++	kfree_skb(skb);
++	return NETDEV_TX_OK;
++}
++
++static int xfrmi4_err(struct sk_buff *skb, u32 info)
++{
++	const struct iphdr *iph = (const struct iphdr *)skb->data;
++	struct net *net = dev_net(skb->dev);
++	int protocol = iph->protocol;
++	struct ip_comp_hdr *ipch;
++	struct ip_esp_hdr *esph;
++	struct ip_auth_hdr *ah ;
++	struct xfrm_state *x;
++	struct xfrm_if *xi;
++	__be32 spi;
++
++	switch (protocol) {
++	case IPPROTO_ESP:
++		esph = (struct ip_esp_hdr *)(skb->data+(iph->ihl<<2));
++		spi = esph->spi;
++		break;
++	case IPPROTO_AH:
++		ah = (struct ip_auth_hdr *)(skb->data+(iph->ihl<<2));
++		spi = ah->spi;
++		break;
++	case IPPROTO_COMP:
++		ipch = (struct ip_comp_hdr *)(skb->data+(iph->ihl<<2));
++		spi = htonl(ntohs(ipch->cpi));
++		break;
++	default:
++		return 0;
++	}
++
++	switch (icmp_hdr(skb)->type) {
++	case ICMP_DEST_UNREACH:
++		if (icmp_hdr(skb)->code != ICMP_FRAG_NEEDED)
++			return 0;
++	case ICMP_REDIRECT:
++		break;
++	default:
++		return 0;
++	}
++
++	x = xfrm_state_lookup(net, skb->mark, (const xfrm_address_t *)&iph->daddr,
++			      spi, protocol, AF_INET);
++	if (!x)
++		return 0;
++
++	xi = xfrmi_lookup(net, x);
++	if (!xi) {
++		xfrm_state_put(x);
++		return -1;
++	}
++
++	if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH)
++		ipv4_update_pmtu(skb, net, info, 0, protocol);
++	else
++		ipv4_redirect(skb, net, 0, protocol);
++	xfrm_state_put(x);
++
++	return 0;
++}
++
++static int xfrmi6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
++		    u8 type, u8 code, int offset, __be32 info)
++{
++	const struct ipv6hdr *iph = (const struct ipv6hdr *)skb->data;
++	struct net *net = dev_net(skb->dev);
++	int protocol = iph->nexthdr;
++	struct ip_comp_hdr *ipch;
++	struct ip_esp_hdr *esph;
++	struct ip_auth_hdr *ah;
++	struct xfrm_state *x;
++	struct xfrm_if *xi;
++	__be32 spi;
++
++	switch (protocol) {
++	case IPPROTO_ESP:
++		esph = (struct ip_esp_hdr *)(skb->data + offset);
++		spi = esph->spi;
++		break;
++	case IPPROTO_AH:
++		ah = (struct ip_auth_hdr *)(skb->data + offset);
++		spi = ah->spi;
++		break;
++	case IPPROTO_COMP:
++		ipch = (struct ip_comp_hdr *)(skb->data + offset);
++		spi = htonl(ntohs(ipch->cpi));
++		break;
++	default:
++		return 0;
++	}
++
++	if (type != ICMPV6_PKT_TOOBIG &&
++	    type != NDISC_REDIRECT)
++		return 0;
++
++	x = xfrm_state_lookup(net, skb->mark, (const xfrm_address_t *)&iph->daddr,
++			      spi, protocol, AF_INET6);
++	if (!x)
++		return 0;
++
++	xi = xfrmi_lookup(net, x);
++	if (!xi) {
++		xfrm_state_put(x);
++		return -1;
++	}
++
++	if (type == NDISC_REDIRECT)
++		ip6_redirect(skb, net, skb->dev->ifindex, 0,
++			     sock_net_uid(net, NULL));
++	else
++		ip6_update_pmtu(skb, net, info, 0, 0, sock_net_uid(net, NULL));
++	xfrm_state_put(x);
++
++	return 0;
++}
++
++static int xfrmi_change(struct xfrm_if *xi, const struct xfrm_if_parms *p)
++{
++	if (xi->p.link != p->link)
++		return -EINVAL;
++
++	xi->p.if_id = p->if_id;
++
++	return 0;
++}
++
++static int xfrmi_update(struct xfrm_if *xi, struct xfrm_if_parms *p)
++{
++	struct net *net = xi->net;
++	struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id);
++	int err;
++
++	xfrmi_unlink(xfrmn, xi);
++	synchronize_net();
++	err = xfrmi_change(xi, p);
++	xfrmi_link(xfrmn, xi);
++	netdev_state_change(xi->dev);
++	return err;
++}
++
++static void xfrmi_get_stats64(struct net_device *dev,
++			       struct rtnl_link_stats64 *s)
++{
++	dev_fetch_sw_netstats(s, dev->tstats);
++
++	s->rx_dropped = dev->stats.rx_dropped;
++	s->tx_dropped = dev->stats.tx_dropped;
++}
++
++static int xfrmi_get_iflink(const struct net_device *dev)
++{
++	struct xfrm_if *xi = netdev_priv(dev);
++
++	return xi->p.link;
++}
++
++
++static const struct net_device_ops xfrmi_netdev_ops = {
++	.ndo_init	= xfrmi_dev_init,
++	.ndo_uninit	= xfrmi_dev_uninit,
++	.ndo_start_xmit = xfrmi_xmit,
++	.ndo_get_stats64 = xfrmi_get_stats64,
++	.ndo_get_iflink = xfrmi_get_iflink,
++};
++
++static void xfrmi_dev_setup(struct net_device *dev)
++{
++	dev->netdev_ops 	= &xfrmi_netdev_ops;
++	dev->header_ops		= &ip_tunnel_header_ops;
++	dev->type		= ARPHRD_NONE;
++	dev->mtu		= ETH_DATA_LEN;
++	dev->min_mtu		= ETH_MIN_MTU;
++	dev->max_mtu		= IP_MAX_MTU;
++	dev->flags 		= IFF_NOARP;
++	dev->needs_free_netdev	= true;
++	dev->priv_destructor	= xfrmi_dev_free;
++	netif_keep_dst(dev);
++
++	eth_broadcast_addr(dev->broadcast);
++}
++
++static int xfrmi_dev_init(struct net_device *dev)
++{
++	struct xfrm_if *xi = netdev_priv(dev);
++	struct net_device *phydev = __dev_get_by_index(xi->net, xi->p.link);
++	int err;
++
++	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
++	if (!dev->tstats)
++		return -ENOMEM;
++
++	err = gro_cells_init(&xi->gro_cells, dev);
++	if (err) {
++		free_percpu(dev->tstats);
++		return err;
++	}
++
++	dev->features |= NETIF_F_LLTX;
++
++	if (phydev) {
++		dev->needed_headroom = phydev->needed_headroom;
++		dev->needed_tailroom = phydev->needed_tailroom;
++
++		if (is_zero_ether_addr(dev->dev_addr))
++			eth_hw_addr_inherit(dev, phydev);
++		if (is_zero_ether_addr(dev->broadcast))
++			memcpy(dev->broadcast, phydev->broadcast,
++			       dev->addr_len);
++	} else {
++		eth_hw_addr_random(dev);
++		eth_broadcast_addr(dev->broadcast);
++	}
++
++	return 0;
++}
++
++static int xfrmi_validate(struct nlattr *tb[], struct nlattr *data[],
++			 struct netlink_ext_ack *extack)
++{
++	return 0;
++}
++
++static void xfrmi_netlink_parms(struct nlattr *data[],
++			       struct xfrm_if_parms *parms)
++{
++	memset(parms, 0, sizeof(*parms));
++
++	if (!data)
++		return;
++
++	if (data[IFLA_XFRM_LINK])
++		parms->link = nla_get_u32(data[IFLA_XFRM_LINK]);
++
++	if (data[IFLA_XFRM_IF_ID])
++		parms->if_id = nla_get_u32(data[IFLA_XFRM_IF_ID]);
++}
++
++static int xfrmi_newlink(struct net *src_net, struct net_device *dev,
++			struct nlattr *tb[], struct nlattr *data[],
++			struct netlink_ext_ack *extack)
++{
++	struct net *net = dev_net(dev);
++	struct xfrm_if_parms p = {};
++	struct xfrm_if *xi;
++	int err;
++
++	xfrmi_netlink_parms(data, &p);
++	if (!p.if_id) {
++		NL_SET_ERR_MSG(extack, "if_id must be non zero");
++		return -EINVAL;
++	}
++
++	xi = xfrmi_locate(net, &p);
++	if (xi)
++		return -EEXIST;
++
++	xi = netdev_priv(dev);
++	xi->p = p;
++	xi->net = net;
++	xi->dev = dev;
++
++	err = xfrmi_create(dev);
++	return err;
++}
++
++static void xfrmi_dellink(struct net_device *dev, struct list_head *head)
++{
++	unregister_netdevice_queue(dev, head);
++}
++
++static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
++			   struct nlattr *data[],
++			   struct netlink_ext_ack *extack)
++{
++	struct xfrm_if *xi = netdev_priv(dev);
++	struct net *net = xi->net;
++	struct xfrm_if_parms p = {};
++
++	xfrmi_netlink_parms(data, &p);
++	if (!p.if_id) {
++		NL_SET_ERR_MSG(extack, "if_id must be non zero");
++		return -EINVAL;
++	}
++
++	xi = xfrmi_locate(net, &p);
++	if (!xi) {
++		xi = netdev_priv(dev);
++	} else {
++		if (xi->dev != dev)
++			return -EEXIST;
++	}
++
++	return xfrmi_update(xi, &p);
++}
++
++static size_t xfrmi_get_size(const struct net_device *dev)
++{
++	return
++		/* IFLA_XFRM_LINK */
++		nla_total_size(4) +
++		/* IFLA_XFRM_IF_ID */
++		nla_total_size(4) +
++		0;
++}
++
++static int xfrmi_fill_info(struct sk_buff *skb, const struct net_device *dev)
++{
++	struct xfrm_if *xi = netdev_priv(dev);
++	struct xfrm_if_parms *parm = &xi->p;
++
++	if (nla_put_u32(skb, IFLA_XFRM_LINK, parm->link) ||
++	    nla_put_u32(skb, IFLA_XFRM_IF_ID, parm->if_id))
++		goto nla_put_failure;
++	return 0;
++
++nla_put_failure:
++	return -EMSGSIZE;
++}
++
++static struct net *xfrmi_get_link_net(const struct net_device *dev)
++{
++	struct xfrm_if *xi = netdev_priv(dev);
++
++	return xi->net;
++}
++
++static const struct nla_policy xfrmi_policy[IFLA_XFRM_MAX + 1] = {
++	[IFLA_XFRM_LINK]	= { .type = NLA_U32 },
++	[IFLA_XFRM_IF_ID]	= { .type = NLA_U32 },
++};
++
++static struct rtnl_link_ops xfrmi_link_ops __read_mostly = {
++	.kind		= "xfrm",
++	.maxtype	= IFLA_XFRM_MAX,
++	.policy		= xfrmi_policy,
++	.priv_size	= sizeof(struct xfrm_if),
++	.setup		= xfrmi_dev_setup,
++	.validate	= xfrmi_validate,
++	.newlink	= xfrmi_newlink,
++	.dellink	= xfrmi_dellink,
++	.changelink	= xfrmi_changelink,
++	.get_size	= xfrmi_get_size,
++	.fill_info	= xfrmi_fill_info,
++	.get_link_net	= xfrmi_get_link_net,
++};
++
++static void __net_exit xfrmi_exit_batch_net(struct list_head *net_exit_list)
++{
++	struct net *net;
++	LIST_HEAD(list);
++
++	rtnl_lock();
++	list_for_each_entry(net, net_exit_list, exit_list) {
++		struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id);
++		struct xfrm_if __rcu **xip;
++		struct xfrm_if *xi;
++		int i;
++
++		for (i = 0; i < XFRMI_HASH_SIZE; i++) {
++			for (xip = &xfrmn->xfrmi[i];
++			     (xi = rtnl_dereference(*xip)) != NULL;
++			     xip = &xi->next)
++				unregister_netdevice_queue(xi->dev, &list);
++		}
++	}
++	unregister_netdevice_many(&list);
++	rtnl_unlock();
++}
++
++static struct pernet_operations xfrmi_net_ops = {
++	.exit_batch = xfrmi_exit_batch_net,
++	.id   = &xfrmi_net_id,
++	.size = sizeof(struct xfrmi_net),
++};
++
++static struct xfrm6_protocol xfrmi_esp6_protocol __read_mostly = {
++	.handler	=	xfrmi6_rcv,
++	.input_handler	=	xfrmi6_input,
++	.cb_handler	=	xfrmi_rcv_cb,
++	.err_handler	=	xfrmi6_err,
++	.priority	=	10,
++};
++
++static struct xfrm6_protocol xfrmi_ah6_protocol __read_mostly = {
++	.handler	=	xfrm6_rcv,
++	.input_handler	=	xfrm_input,
++	.cb_handler	=	xfrmi_rcv_cb,
++	.err_handler	=	xfrmi6_err,
++	.priority	=	10,
++};
++
++static struct xfrm6_protocol xfrmi_ipcomp6_protocol __read_mostly = {
++	.handler	=	xfrm6_rcv,
++	.input_handler	=	xfrm_input,
++	.cb_handler	=	xfrmi_rcv_cb,
++	.err_handler	=	xfrmi6_err,
++	.priority	=	10,
++};
++
++#if IS_REACHABLE(CONFIG_INET6_XFRM_TUNNEL)
++static int xfrmi6_rcv_tunnel(struct sk_buff *skb)
++{
++	const xfrm_address_t *saddr;
++	__be32 spi;
++
++	saddr = (const xfrm_address_t *)&ipv6_hdr(skb)->saddr;
++	spi = xfrm6_tunnel_spi_lookup(dev_net(skb->dev), saddr);
++
++	return xfrm6_rcv_spi(skb, IPPROTO_IPV6, spi, NULL);
++}
++
++static struct xfrm6_tunnel xfrmi_ipv6_handler __read_mostly = {
++	.handler	=	xfrmi6_rcv_tunnel,
++	.cb_handler	=	xfrmi_rcv_cb,
++	.err_handler	=	xfrmi6_err,
++	.priority	=	2,
++};
++
++static struct xfrm6_tunnel xfrmi_ip6ip_handler __read_mostly = {
++	.handler	=	xfrmi6_rcv_tunnel,
++	.cb_handler	=	xfrmi_rcv_cb,
++	.err_handler	=	xfrmi6_err,
++	.priority	=	2,
++};
++#endif
++
++static struct xfrm4_protocol xfrmi_esp4_protocol __read_mostly = {
++	.handler	=	xfrmi4_rcv,
++	.input_handler	=	xfrmi4_input,
++	.cb_handler	=	xfrmi_rcv_cb,
++	.err_handler	=	xfrmi4_err,
++	.priority	=	10,
++};
++
++static struct xfrm4_protocol xfrmi_ah4_protocol __read_mostly = {
++	.handler	=	xfrm4_rcv,
++	.input_handler	=	xfrm_input,
++	.cb_handler	=	xfrmi_rcv_cb,
++	.err_handler	=	xfrmi4_err,
++	.priority	=	10,
++};
++
++static struct xfrm4_protocol xfrmi_ipcomp4_protocol __read_mostly = {
++	.handler	=	xfrm4_rcv,
++	.input_handler	=	xfrm_input,
++	.cb_handler	=	xfrmi_rcv_cb,
++	.err_handler	=	xfrmi4_err,
++	.priority	=	10,
++};
++
++#if IS_REACHABLE(CONFIG_INET_XFRM_TUNNEL)
++static int xfrmi4_rcv_tunnel(struct sk_buff *skb)
++{
++	return xfrm4_rcv_spi(skb, IPPROTO_IPIP, ip_hdr(skb)->saddr);
++}
++
++static struct xfrm_tunnel xfrmi_ipip_handler __read_mostly = {
++	.handler	=	xfrmi4_rcv_tunnel,
++	.cb_handler	=	xfrmi_rcv_cb,
++	.err_handler	=	xfrmi4_err,
++	.priority	=	3,
++};
++
++static struct xfrm_tunnel xfrmi_ipip6_handler __read_mostly = {
++	.handler	=	xfrmi4_rcv_tunnel,
++	.cb_handler	=	xfrmi_rcv_cb,
++	.err_handler	=	xfrmi4_err,
++	.priority	=	2,
++};
++#endif
++
++static int __init xfrmi4_init(void)
++{
++	int err;
++
++	err = xfrm4_protocol_register(&xfrmi_esp4_protocol, IPPROTO_ESP);
++	if (err < 0)
++		goto xfrm_proto_esp_failed;
++	err = xfrm4_protocol_register(&xfrmi_ah4_protocol, IPPROTO_AH);
++	if (err < 0)
++		goto xfrm_proto_ah_failed;
++	err = xfrm4_protocol_register(&xfrmi_ipcomp4_protocol, IPPROTO_COMP);
++	if (err < 0)
++		goto xfrm_proto_comp_failed;
++#if IS_REACHABLE(CONFIG_INET_XFRM_TUNNEL)
++	err = xfrm4_tunnel_register(&xfrmi_ipip_handler, AF_INET);
++	if (err < 0)
++		goto xfrm_tunnel_ipip_failed;
++	err = xfrm4_tunnel_register(&xfrmi_ipip6_handler, AF_INET6);
++	if (err < 0)
++		goto xfrm_tunnel_ipip6_failed;
++#endif
++
++	return 0;
++
++#if IS_REACHABLE(CONFIG_INET_XFRM_TUNNEL)
++xfrm_tunnel_ipip6_failed:
++	xfrm4_tunnel_deregister(&xfrmi_ipip_handler, AF_INET);
++xfrm_tunnel_ipip_failed:
++	xfrm4_protocol_deregister(&xfrmi_ipcomp4_protocol, IPPROTO_COMP);
++#endif
++xfrm_proto_comp_failed:
++	xfrm4_protocol_deregister(&xfrmi_ah4_protocol, IPPROTO_AH);
++xfrm_proto_ah_failed:
++	xfrm4_protocol_deregister(&xfrmi_esp4_protocol, IPPROTO_ESP);
++xfrm_proto_esp_failed:
++	return err;
++}
++
++static void xfrmi4_fini(void)
++{
++#if IS_REACHABLE(CONFIG_INET_XFRM_TUNNEL)
++	xfrm4_tunnel_deregister(&xfrmi_ipip6_handler, AF_INET6);
++	xfrm4_tunnel_deregister(&xfrmi_ipip_handler, AF_INET);
++#endif
++	xfrm4_protocol_deregister(&xfrmi_ipcomp4_protocol, IPPROTO_COMP);
++	xfrm4_protocol_deregister(&xfrmi_ah4_protocol, IPPROTO_AH);
++	xfrm4_protocol_deregister(&xfrmi_esp4_protocol, IPPROTO_ESP);
++}
++
++static int __init xfrmi6_init(void)
++{
++	int err;
++
++	err = xfrm6_protocol_register(&xfrmi_esp6_protocol, IPPROTO_ESP);
++	if (err < 0)
++		goto xfrm_proto_esp_failed;
++	err = xfrm6_protocol_register(&xfrmi_ah6_protocol, IPPROTO_AH);
++	if (err < 0)
++		goto xfrm_proto_ah_failed;
++	err = xfrm6_protocol_register(&xfrmi_ipcomp6_protocol, IPPROTO_COMP);
++	if (err < 0)
++		goto xfrm_proto_comp_failed;
++#if IS_REACHABLE(CONFIG_INET6_XFRM_TUNNEL)
++	err = xfrm6_tunnel_register(&xfrmi_ipv6_handler, AF_INET6);
++	if (err < 0)
++		goto xfrm_tunnel_ipv6_failed;
++	err = xfrm6_tunnel_register(&xfrmi_ip6ip_handler, AF_INET);
++	if (err < 0)
++		goto xfrm_tunnel_ip6ip_failed;
++#endif
++
++	return 0;
++
++#if IS_REACHABLE(CONFIG_INET6_XFRM_TUNNEL)
++xfrm_tunnel_ip6ip_failed:
++	xfrm6_tunnel_deregister(&xfrmi_ipv6_handler, AF_INET6);
++xfrm_tunnel_ipv6_failed:
++	xfrm6_protocol_deregister(&xfrmi_ipcomp6_protocol, IPPROTO_COMP);
++#endif
++xfrm_proto_comp_failed:
++	xfrm6_protocol_deregister(&xfrmi_ah6_protocol, IPPROTO_AH);
++xfrm_proto_ah_failed:
++	xfrm6_protocol_deregister(&xfrmi_esp6_protocol, IPPROTO_ESP);
++xfrm_proto_esp_failed:
++	return err;
++}
++
++static void xfrmi6_fini(void)
++{
++#if IS_REACHABLE(CONFIG_INET6_XFRM_TUNNEL)
++	xfrm6_tunnel_deregister(&xfrmi_ip6ip_handler, AF_INET);
++	xfrm6_tunnel_deregister(&xfrmi_ipv6_handler, AF_INET6);
++#endif
++	xfrm6_protocol_deregister(&xfrmi_ipcomp6_protocol, IPPROTO_COMP);
++	xfrm6_protocol_deregister(&xfrmi_ah6_protocol, IPPROTO_AH);
++	xfrm6_protocol_deregister(&xfrmi_esp6_protocol, IPPROTO_ESP);
++}
++
++static const struct xfrm_if_cb xfrm_if_cb = {
++	.decode_session =	xfrmi_decode_session,
++};
++
++static int __init xfrmi_init(void)
++{
++	const char *msg;
++	int err;
++
++	pr_info("IPsec XFRM device driver\n");
++
++	msg = "tunnel device";
++	err = register_pernet_device(&xfrmi_net_ops);
++	if (err < 0)
++		goto pernet_dev_failed;
++
++	msg = "xfrm4 protocols";
++	err = xfrmi4_init();
++	if (err < 0)
++		goto xfrmi4_failed;
++
++	msg = "xfrm6 protocols";
++	err = xfrmi6_init();
++	if (err < 0)
++		goto xfrmi6_failed;
++
++
++	msg = "netlink interface";
++	err = rtnl_link_register(&xfrmi_link_ops);
++	if (err < 0)
++		goto rtnl_link_failed;
++
++	xfrm_if_register_cb(&xfrm_if_cb);
++
++	return err;
++
++rtnl_link_failed:
++	xfrmi6_fini();
++xfrmi6_failed:
++	xfrmi4_fini();
++xfrmi4_failed:
++	unregister_pernet_device(&xfrmi_net_ops);
++pernet_dev_failed:
++	pr_err("xfrmi init: failed to register %s\n", msg);
++	return err;
++}
++
++static void __exit xfrmi_fini(void)
++{
++	xfrm_if_unregister_cb();
++	rtnl_link_unregister(&xfrmi_link_ops);
++	xfrmi4_fini();
++	xfrmi6_fini();
++	unregister_pernet_device(&xfrmi_net_ops);
++}
++
++module_init(xfrmi_init);
++module_exit(xfrmi_fini);
++MODULE_LICENSE("GPL");
++MODULE_ALIAS_RTNL_LINK("xfrm");
++MODULE_ALIAS_NETDEV("xfrm0");
++MODULE_AUTHOR("Steffen Klassert");
++MODULE_DESCRIPTION("XFRM virtual interface");
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index d3b128b74a382..465d28341ed6d 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3277,6 +3277,13 @@ xfrm_policy_ok(const struct xfrm_tmpl *tmpl, const struct sec_path *sp, int star
+ 		if (xfrm_state_ok(tmpl, sp->xvec[idx], family, if_id))
+ 			return ++idx;
+ 		if (sp->xvec[idx]->props.mode != XFRM_MODE_TRANSPORT) {
++			if (idx < sp->verified_cnt) {
++				/* Secpath entry previously verified, consider optional and
++				 * continue searching
++				 */
++				continue;
++			}
++
+ 			if (start == -1)
+ 				start = -2-idx;
+ 			break;
+@@ -3688,6 +3695,9 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 		 * Order is _important_. Later we will implement
+ 		 * some barriers, but at the moment barriers
+ 		 * are implied between each two transformations.
++		 * Upon success, marks secpath entries as having been
++		 * verified to allow them to be skipped in future policy
++		 * checks (e.g. nested tunnels).
+ 		 */
+ 		for (i = xfrm_nr-1, k = 0; i >= 0; i--) {
+ 			k = xfrm_policy_ok(tpp[i], sp, k, family, if_id);
+@@ -3706,6 +3716,8 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 		}
+ 
+ 		xfrm_pols_put(pols, npols);
++		sp->verified_cnt = k;
++
+ 		return 1;
+ 	}
+ 	XFRM_INC_STATS(net, LINUX_MIB_XFRMINPOLBLOCK);
+diff --git a/sound/soc/codecs/nau8824.c b/sound/soc/codecs/nau8824.c
+index a95fe3fff1db8..9b22219a76937 100644
+--- a/sound/soc/codecs/nau8824.c
++++ b/sound/soc/codecs/nau8824.c
+@@ -1896,6 +1896,30 @@ static const struct dmi_system_id nau8824_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH),
+ 	},
++	{
++		/* Positivo CW14Q01P */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"),
++			DMI_MATCH(DMI_BOARD_NAME, "CW14Q01P"),
++		},
++		.driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH),
++	},
++	{
++		/* Positivo K1424G */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"),
++			DMI_MATCH(DMI_BOARD_NAME, "K1424G"),
++		},
++		.driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH),
++	},
++	{
++		/* Positivo N14ZP74G */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"),
++			DMI_MATCH(DMI_BOARD_NAME, "N14ZP74G"),
++		},
++		.driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH),
++	},
+ 	{}
+ };
+ 
+diff --git a/sound/soc/generic/simple-card.c b/sound/soc/generic/simple-card.c
+index d916ec69c24ff..ac97e8b7978c7 100644
+--- a/sound/soc/generic/simple-card.c
++++ b/sound/soc/generic/simple-card.c
+@@ -410,6 +410,7 @@ static int simple_for_each_link(struct asoc_simple_priv *priv,
+ 
+ 			if (ret < 0) {
+ 				of_node_put(codec);
++				of_node_put(plat);
+ 				of_node_put(np);
+ 				goto error;
+ 			}
+diff --git a/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c b/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
+index c2aa6f26738b4..bf82b923c5fe5 100644
+--- a/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
++++ b/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
+@@ -1,13 +1,14 @@
+ {
+ 	"bounds checks mixing signed and unsigned, positive bounds",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, 2),
+ 	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 3),
+@@ -17,20 +18,21 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.errstr = "unbounded min value",
+ 	.result = REJECT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, -1),
+ 	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3),
+@@ -40,20 +42,21 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.errstr = "unbounded min value",
+ 	.result = REJECT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned, variant 2",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, -1),
+ 	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 5),
+@@ -65,20 +68,21 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.errstr = "unbounded min value",
+ 	.result = REJECT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned, variant 3",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, -1),
+ 	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 4),
+@@ -89,20 +93,21 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.errstr = "unbounded min value",
+ 	.result = REJECT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned, variant 4",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, 1),
+ 	BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2),
+@@ -112,19 +117,20 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.result = ACCEPT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned, variant 5",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, -1),
+ 	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 5),
+@@ -135,17 +141,20 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.errstr = "unbounded min value",
+ 	.result = REJECT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned, variant 6",
+ 	.insns = {
++	BPF_MOV64_REG(BPF_REG_9, BPF_REG_1),
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_9),
+ 	BPF_MOV64_IMM(BPF_REG_2, 0),
+ 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -512),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_6, -1),
+ 	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_6, 5),
+@@ -163,13 +172,14 @@
+ {
+ 	"bounds checks mixing signed and unsigned, variant 7",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, 1024 * 1024 * 1024),
+ 	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3),
+@@ -179,19 +189,20 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.result = ACCEPT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned, variant 8",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, -1),
+ 	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
+@@ -203,20 +214,21 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.errstr = "unbounded min value",
+ 	.result = REJECT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned, variant 9",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_LD_IMM64(BPF_REG_2, -9223372036854775808ULL),
+ 	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
+@@ -228,19 +240,20 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.result = ACCEPT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned, variant 10",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, 0),
+ 	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
+@@ -252,20 +265,21 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.errstr = "unbounded min value",
+ 	.result = REJECT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned, variant 11",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, -1),
+ 	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
+@@ -278,20 +292,21 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.errstr = "unbounded min value",
+ 	.result = REJECT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned, variant 12",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, -6),
+ 	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
+@@ -303,20 +318,21 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.errstr = "unbounded min value",
+ 	.result = REJECT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned, variant 13",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, 2),
+ 	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
+@@ -331,7 +347,7 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.errstr = "unbounded min value",
+ 	.result = REJECT,
+ },
+@@ -340,13 +356,14 @@
+ 	.insns = {
+ 	BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1,
+ 		    offsetof(struct __sk_buff, mark)),
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, -1),
+ 	BPF_MOV64_IMM(BPF_REG_8, 2),
+@@ -360,20 +377,21 @@
+ 	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, -3),
+ 	BPF_JMP_IMM(BPF_JA, 0, 0, -7),
+ 	},
+-	.fixup_map_hash_8b = { 4 },
++	.fixup_map_hash_8b = { 6 },
+ 	.errstr = "unbounded min value",
+ 	.result = REJECT,
+ },
+ {
+ 	"bounds checks mixing signed and unsigned, variant 15",
+ 	.insns = {
++	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+ 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+ 	BPF_MOV64_IMM(BPF_REG_2, -6),
+ 	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
+@@ -387,7 +405,7 @@
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_8b = { 3 },
++	.fixup_map_hash_8b = { 5 },
+ 	.errstr = "unbounded min value",
+ 	.result = REJECT,
+ },
+diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
+index 4a11ea2261cbe..e13b0fb63333f 100755
+--- a/tools/testing/selftests/net/fcnal-test.sh
++++ b/tools/testing/selftests/net/fcnal-test.sh
+@@ -81,6 +81,13 @@ NSC_CMD="ip netns exec ${NSC}"
+ 
+ which ping6 > /dev/null 2>&1 && ping6=$(which ping6) || ping6=$(which ping)
+ 
++# Check if FIPS mode is enabled
++if [ -f /proc/sys/crypto/fips_enabled ]; then
++	fips_enabled=`cat /proc/sys/crypto/fips_enabled`
++else
++	fips_enabled=0
++fi
++
+ ################################################################################
+ # utilities
+ 
+@@ -1139,7 +1146,7 @@ ipv4_tcp_novrf()
+ 	run_cmd nettest -d ${NSA_DEV} -r ${a}
+ 	log_test_addr ${a} $? 1 "No server, device client, local conn"
+ 
+-	ipv4_tcp_md5_novrf
++	[ "$fips_enabled" = "1" ] || ipv4_tcp_md5_novrf
+ }
+ 
+ ipv4_tcp_vrf()
+@@ -1193,9 +1200,11 @@ ipv4_tcp_vrf()
+ 	log_test_addr ${a} $? 1 "Global server, local connection"
+ 
+ 	# run MD5 tests
+-	setup_vrf_dup
+-	ipv4_tcp_md5
+-	cleanup_vrf_dup
++	if [ "$fips_enabled" = "0" ]; then
++		setup_vrf_dup
++		ipv4_tcp_md5
++		cleanup_vrf_dup
++	fi
+ 
+ 	#
+ 	# enable VRF global server
+@@ -2611,7 +2620,7 @@ ipv6_tcp_novrf()
+ 		log_test_addr ${a} $? 1 "No server, device client, local conn"
+ 	done
+ 
+-	ipv6_tcp_md5_novrf
++	[ "$fips_enabled" = "1" ] || ipv6_tcp_md5_novrf
+ }
+ 
+ ipv6_tcp_vrf()
+@@ -2681,9 +2690,11 @@ ipv6_tcp_vrf()
+ 	log_test_addr ${a} $? 1 "Global server, local connection"
+ 
+ 	# run MD5 tests
+-	setup_vrf_dup
+-	ipv6_tcp_md5
+-	cleanup_vrf_dup
++	if [ "$fips_enabled" = "0" ]; then
++		setup_vrf_dup
++		ipv6_tcp_md5
++		cleanup_vrf_dup
++	fi
+ 
+ 	#
+ 	# enable VRF global server
+diff --git a/tools/testing/selftests/net/mptcp/config b/tools/testing/selftests/net/mptcp/config
+index 741a1c4f4ae8f..1a4c11a444d95 100644
+--- a/tools/testing/selftests/net/mptcp/config
++++ b/tools/testing/selftests/net/mptcp/config
+@@ -1,3 +1,4 @@
++CONFIG_KALLSYMS=y
+ CONFIG_MPTCP=y
+ CONFIG_IPV6=y
+ CONFIG_MPTCP_IPV6=y
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index 94b15bb28e110..d205828d75753 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -130,6 +130,22 @@ do_ping()
+ 	fi
+ }
+ 
++# $1: ns ; $2: counter
++get_counter()
++{
++	local ns="${1}"
++	local counter="${2}"
++	local count
++
++	count=$(ip netns exec ${ns} nstat -asz "${counter}" | awk 'NR==1 {next} {print $2}')
++	if [ -z "${count}" ]; then
++		mptcp_lib_fail_if_expected_feature "${counter} counter"
++		return 1
++	fi
++
++	echo "${count}"
++}
++
+ do_transfer()
+ {
+ 	listener_ns="$1"
+@@ -291,9 +307,10 @@ chk_join_nr()
+ 	local dump_stats
+ 
+ 	printf "%02u %-36s %s" "$TEST_COUNT" "$msg" "syn"
+-	count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinSynRx | awk '{print $2}'`
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$syn_nr" ]; then
++	count=$(get_counter ${ns1} "MPTcpExtMPJoinSynRx")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$syn_nr" ]; then
+ 		echo "[fail] got $count JOIN[s] syn expected $syn_nr"
+ 		ret=1
+ 		dump_stats=1
+@@ -302,9 +319,10 @@ chk_join_nr()
+ 	fi
+ 
+ 	echo -n " - synack"
+-	count=`ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinSynAckRx | awk '{print $2}'`
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$syn_ack_nr" ]; then
++	count=$(get_counter ${ns2} "MPTcpExtMPJoinSynAckRx")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$syn_ack_nr" ]; then
+ 		echo "[fail] got $count JOIN[s] synack expected $syn_ack_nr"
+ 		ret=1
+ 		dump_stats=1
+@@ -313,9 +331,10 @@ chk_join_nr()
+ 	fi
+ 
+ 	echo -n " - ack"
+-	count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinAckRx | awk '{print $2}'`
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$ack_nr" ]; then
++	count=$(get_counter ${ns1} "MPTcpExtMPJoinAckRx")
++	if [ -z "$count" ]; then
++		echo "[skip]"
++	elif [ "$count" != "$ack_nr" ]; then
+ 		echo "[fail] got $count JOIN[s] ack expected $ack_nr"
+ 		ret=1
+ 		dump_stats=1
+@@ -338,9 +357,10 @@ chk_add_nr()
+ 	local dump_stats
+ 
+ 	printf "%-39s %s" " " "add"
+-	count=`ip netns exec $ns2 nstat -as | grep MPTcpExtAddAddr | awk '{print $2}'`
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$add_nr" ]; then
++	count=$(get_counter ${ns2} "MPTcpExtAddAddr")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$add_nr" ]; then
+ 		echo "[fail] got $count ADD_ADDR[s] expected $add_nr"
+ 		ret=1
+ 		dump_stats=1
+@@ -349,9 +369,10 @@ chk_add_nr()
+ 	fi
+ 
+ 	echo -n " - echo  "
+-	count=`ip netns exec $ns1 nstat -as | grep MPTcpExtEchoAdd | awk '{print $2}'`
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$echo_nr" ]; then
++	count=$(get_counter ${ns1} "MPTcpExtEchoAdd")
++	if [ -z "$count" ]; then
++		echo "[skip]"
++	elif [ "$count" != "$echo_nr" ]; then
+ 		echo "[fail] got $count ADD_ADDR echo[s] expected $echo_nr"
+ 		ret=1
+ 		dump_stats=1
+@@ -375,9 +396,10 @@ chk_rm_nr()
+ 	local dump_stats
+ 
+ 	printf "%-39s %s" " " "rm "
+-	count=`ip netns exec $ns1 nstat -as | grep MPTcpExtRmAddr | awk '{print $2}'`
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$rm_addr_nr" ]; then
++	count=$(get_counter ${ns1} "MPTcpExtRmAddr")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$rm_addr_nr" ]; then
+ 		echo "[fail] got $count RM_ADDR[s] expected $rm_addr_nr"
+ 		ret=1
+ 		dump_stats=1
+@@ -386,9 +408,10 @@ chk_rm_nr()
+ 	fi
+ 
+ 	echo -n " - sf    "
+-	count=`ip netns exec $ns2 nstat -as | grep MPTcpExtRmSubflow | awk '{print $2}'`
+-	[ -z "$count" ] && count=0
+-	if [ "$count" != "$rm_subflow_nr" ]; then
++	count=$(get_counter ${ns2} "MPTcpExtRmSubflow")
++	if [ -z "$count" ]; then
++		echo "[skip]"
++	elif [ "$count" != "$rm_subflow_nr" ]; then
+ 		echo "[fail] got $count RM_SUBFLOW[s] expected $rm_subflow_nr"
+ 		ret=1
+ 		dump_stats=1
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_lib.sh b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+index 3286536b79d55..f32045b23b893 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_lib.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+@@ -38,3 +38,67 @@ mptcp_lib_check_mptcp() {
+ 		exit ${KSFT_SKIP}
+ 	fi
+ }
++
++mptcp_lib_check_kallsyms() {
++	if ! mptcp_lib_has_file "/proc/kallsyms"; then
++		echo "SKIP: CONFIG_KALLSYMS is missing"
++		exit ${KSFT_SKIP}
++	fi
++}
++
++# Internal: use mptcp_lib_kallsyms_has() instead
++__mptcp_lib_kallsyms_has() {
++	local sym="${1}"
++
++	mptcp_lib_check_kallsyms
++
++	grep -q " ${sym}" /proc/kallsyms
++}
++
++# $1: part of a symbol to look at, add '$' at the end for full name
++mptcp_lib_kallsyms_has() {
++	local sym="${1}"
++
++	if __mptcp_lib_kallsyms_has "${sym}"; then
++		return 0
++	fi
++
++	mptcp_lib_fail_if_expected_feature "${sym} symbol not found"
++}
++
++# $1: part of a symbol to look at, add '$' at the end for full name
++mptcp_lib_kallsyms_doesnt_have() {
++	local sym="${1}"
++
++	if ! __mptcp_lib_kallsyms_has "${sym}"; then
++		return 0
++	fi
++
++	mptcp_lib_fail_if_expected_feature "${sym} symbol has been found"
++}
++
++# !!!AVOID USING THIS!!!
++# Features might not land in the expected version and features can be backported
++#
++# $1: kernel version, e.g. 6.3
++mptcp_lib_kversion_ge() {
++	local exp_maj="${1%.*}"
++	local exp_min="${1#*.}"
++	local v maj min
++
++	# If the kernel has backported features, set this env var to 1:
++	if [ "${SELFTESTS_MPTCP_LIB_NO_KVERSION_CHECK:-}" = "1" ]; then
++		return 0
++	fi
++
++	v=$(uname -r | cut -d'.' -f1,2)
++	maj=${v%.*}
++	min=${v#*.}
++
++	if   [ "${maj}" -gt "${exp_maj}" ] ||
++	   { [ "${maj}" -eq "${exp_maj}" ] && [ "${min}" -ge "${exp_min}" ]; }; then
++		return 0
++	fi
++
++	mptcp_lib_fail_if_expected_feature "kernel version ${1} lower than ${v}"
++}
+diff --git a/tools/testing/selftests/net/mptcp/pm_netlink.sh b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+index f7cdba0a97a90..fff6f74ebe160 100755
+--- a/tools/testing/selftests/net/mptcp/pm_netlink.sh
++++ b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+@@ -73,8 +73,12 @@ check()
+ }
+ 
+ check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "defaults addr list"
+-check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
++
++default_limits="$(ip netns exec $ns1 ./pm_nl_ctl limits)"
++if mptcp_lib_expect_all_features; then
++	check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
+ subflows 0" "defaults limits"
++fi
+ 
+ ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.1
+ ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.2 flags subflow dev lo
+@@ -120,12 +124,10 @@ ip netns exec $ns1 ./pm_nl_ctl flush
+ check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "flush addrs"
+ 
+ ip netns exec $ns1 ./pm_nl_ctl limits 9 1
+-check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
+-subflows 0" "rcv addrs above hard limit"
++check "ip netns exec $ns1 ./pm_nl_ctl limits" "$default_limits" "rcv addrs above hard limit"
+ 
+ ip netns exec $ns1 ./pm_nl_ctl limits 1 9
+-check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
+-subflows 0" "subflows above hard limit"
++check "ip netns exec $ns1 ./pm_nl_ctl limits" "$default_limits" "subflows above hard limit"
+ 
+ ip netns exec $ns1 ./pm_nl_ctl limits 8 8
+ check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 8
+diff --git a/tools/testing/selftests/net/vrf-xfrm-tests.sh b/tools/testing/selftests/net/vrf-xfrm-tests.sh
+index 184da81f554ff..452638ae8aed8 100755
+--- a/tools/testing/selftests/net/vrf-xfrm-tests.sh
++++ b/tools/testing/selftests/net/vrf-xfrm-tests.sh
+@@ -264,60 +264,60 @@ setup_xfrm()
+ 	ip -netns host1 xfrm state add src ${HOST1_4} dst ${HOST2_4} \
+ 	    proto esp spi ${SPI_1} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_1} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_1} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_1} 96 \
++	    enc 'cbc(aes)' ${ENC_1} \
+ 	    sel src ${h1_4} dst ${h2_4} ${devarg}
+ 
+ 	ip -netns host2 xfrm state add src ${HOST1_4} dst ${HOST2_4} \
+ 	    proto esp spi ${SPI_1} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_1} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_1} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_1} 96 \
++	    enc 'cbc(aes)' ${ENC_1} \
+ 	    sel src ${h1_4} dst ${h2_4}
+ 
+ 
+ 	ip -netns host1 xfrm state add src ${HOST2_4} dst ${HOST1_4} \
+ 	    proto esp spi ${SPI_2} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_2} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_2} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_2} 96 \
++	    enc 'cbc(aes)' ${ENC_2} \
+ 	    sel src ${h2_4} dst ${h1_4} ${devarg}
+ 
+ 	ip -netns host2 xfrm state add src ${HOST2_4} dst ${HOST1_4} \
+ 	    proto esp spi ${SPI_2} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_2} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_2} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_2} 96 \
++	    enc 'cbc(aes)' ${ENC_2} \
+ 	    sel src ${h2_4} dst ${h1_4}
+ 
+ 
+ 	ip -6 -netns host1 xfrm state add src ${HOST1_6} dst ${HOST2_6} \
+ 	    proto esp spi ${SPI_1} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_1} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_1} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_1} 96 \
++	    enc 'cbc(aes)' ${ENC_1} \
+ 	    sel src ${h1_6} dst ${h2_6} ${devarg}
+ 
+ 	ip -6 -netns host2 xfrm state add src ${HOST1_6} dst ${HOST2_6} \
+ 	    proto esp spi ${SPI_1} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_1} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_1} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_1} 96 \
++	    enc 'cbc(aes)' ${ENC_1} \
+ 	    sel src ${h1_6} dst ${h2_6}
+ 
+ 
+ 	ip -6 -netns host1 xfrm state add src ${HOST2_6} dst ${HOST1_6} \
+ 	    proto esp spi ${SPI_2} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_2} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_2} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_2} 96 \
++	    enc 'cbc(aes)' ${ENC_2} \
+ 	    sel src ${h2_6} dst ${h1_6} ${devarg}
+ 
+ 	ip -6 -netns host2 xfrm state add src ${HOST2_6} dst ${HOST1_6} \
+ 	    proto esp spi ${SPI_2} reqid 0 mode tunnel \
+ 	    replay-window 4 replay-oseq 0x4 \
+-	    auth-trunc 'hmac(md5)' ${AUTH_2} 96 \
+-	    enc 'cbc(des3_ede)' ${ENC_2} \
++	    auth-trunc 'hmac(sha1)' ${AUTH_2} 96 \
++	    enc 'cbc(aes)' ${ENC_2} \
+ 	    sel src ${h2_6} dst ${h1_6}
+ }
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-07-24 20:28 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-07-24 20:28 UTC (permalink / raw
  To: gentoo-commits

commit:     8bb51a2d5ea125fba25c9c700b67605021d3f673
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jul 24 20:28:15 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jul 24 20:28:15 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8bb51a2d

Linux patch 5.10.187

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 +
 1186_linux-5.10.187.patch | 323 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 327 insertions(+)

diff --git a/0000_README b/0000_README
index e7c30907..19c3539a 100644
--- a/0000_README
+++ b/0000_README
@@ -787,6 +787,10 @@ Patch:  1185_linux-5.10.186.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.186
 
+Patch:  1186_linux-5.10.187.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.187
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1186_linux-5.10.187.patch b/1186_linux-5.10.187.patch
new file mode 100644
index 00000000..0d508e12
--- /dev/null
+++ b/1186_linux-5.10.187.patch
@@ -0,0 +1,323 @@
+diff --git a/Makefile b/Makefile
+index bb2be0ed9ff26..2aaf3b0b9250b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 186
++SUBLEVEL = 187
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
+index 509cc0262fdc2..394605e59f2bf 100644
+--- a/arch/x86/include/asm/microcode.h
++++ b/arch/x86/include/asm/microcode.h
+@@ -5,6 +5,7 @@
+ #include <asm/cpu.h>
+ #include <linux/earlycpio.h>
+ #include <linux/initrd.h>
++#include <asm/microcode_amd.h>
+ 
+ struct ucode_patch {
+ 	struct list_head plist;
+diff --git a/arch/x86/include/asm/microcode_amd.h b/arch/x86/include/asm/microcode_amd.h
+index a645b25ee442a..403a8e76b310c 100644
+--- a/arch/x86/include/asm/microcode_amd.h
++++ b/arch/x86/include/asm/microcode_amd.h
+@@ -48,11 +48,13 @@ extern void __init load_ucode_amd_bsp(unsigned int family);
+ extern void load_ucode_amd_ap(unsigned int family);
+ extern int __init save_microcode_in_initrd_amd(unsigned int family);
+ void reload_ucode_amd(unsigned int cpu);
++extern void amd_check_microcode(void);
+ #else
+ static inline void __init load_ucode_amd_bsp(unsigned int family) {}
+ static inline void load_ucode_amd_ap(unsigned int family) {}
+ static inline int __init
+ save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; }
+ static inline void reload_ucode_amd(unsigned int cpu) {}
++static inline void amd_check_microcode(void) {}
+ #endif
+ #endif /* _ASM_X86_MICROCODE_AMD_H */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index f71a177b6b185..3fab152809ab1 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -497,6 +497,7 @@
+ #define MSR_AMD64_DE_CFG		0xc0011029
+ #define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT	 1
+ #define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE	BIT_ULL(MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT)
++#define MSR_AMD64_DE_CFG_ZEN2_FP_BACKUP_FIX_BIT 9
+ 
+ #define MSR_AMD64_BU_CFG2		0xc001102a
+ #define MSR_AMD64_IBSFETCHCTL		0xc0011030
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 89a9b77544765..3d99a823ffac7 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -28,11 +28,6 @@
+ 
+ #include "cpu.h"
+ 
+-static const int amd_erratum_383[];
+-static const int amd_erratum_400[];
+-static const int amd_erratum_1054[];
+-static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum);
+-
+ /*
+  * nodes_per_socket: Stores the number of nodes per socket.
+  * Refer to Fam15h Models 00-0fh BKDG - CPUID Fn8000_001E_ECX
+@@ -40,6 +35,78 @@ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum);
+  */
+ static u32 nodes_per_socket = 1;
+ 
++/*
++ * AMD errata checking
++ *
++ * Errata are defined as arrays of ints using the AMD_LEGACY_ERRATUM() or
++ * AMD_OSVW_ERRATUM() macros. The latter is intended for newer errata that
++ * have an OSVW id assigned, which it takes as first argument. Both take a
++ * variable number of family-specific model-stepping ranges created by
++ * AMD_MODEL_RANGE().
++ *
++ * Example:
++ *
++ * const int amd_erratum_319[] =
++ *	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x10, 0x2, 0x1, 0x4, 0x2),
++ *			   AMD_MODEL_RANGE(0x10, 0x8, 0x0, 0x8, 0x0),
++ *			   AMD_MODEL_RANGE(0x10, 0x9, 0x0, 0x9, 0x0));
++ */
++
++#define AMD_LEGACY_ERRATUM(...)		{ -1, __VA_ARGS__, 0 }
++#define AMD_OSVW_ERRATUM(osvw_id, ...)	{ osvw_id, __VA_ARGS__, 0 }
++#define AMD_MODEL_RANGE(f, m_start, s_start, m_end, s_end) \
++	((f << 24) | (m_start << 16) | (s_start << 12) | (m_end << 4) | (s_end))
++#define AMD_MODEL_RANGE_FAMILY(range)	(((range) >> 24) & 0xff)
++#define AMD_MODEL_RANGE_START(range)	(((range) >> 12) & 0xfff)
++#define AMD_MODEL_RANGE_END(range)	((range) & 0xfff)
++
++static const int amd_erratum_400[] =
++	AMD_OSVW_ERRATUM(1, AMD_MODEL_RANGE(0xf, 0x41, 0x2, 0xff, 0xf),
++			    AMD_MODEL_RANGE(0x10, 0x2, 0x1, 0xff, 0xf));
++
++static const int amd_erratum_383[] =
++	AMD_OSVW_ERRATUM(3, AMD_MODEL_RANGE(0x10, 0, 0, 0xff, 0xf));
++
++/* #1054: Instructions Retired Performance Counter May Be Inaccurate */
++static const int amd_erratum_1054[] =
++	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0, 0, 0x2f, 0xf));
++
++static const int amd_zenbleed[] =
++	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0x30, 0x0, 0x4f, 0xf),
++			   AMD_MODEL_RANGE(0x17, 0x60, 0x0, 0x7f, 0xf),
++			   AMD_MODEL_RANGE(0x17, 0xa0, 0x0, 0xaf, 0xf));
++
++static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
++{
++	int osvw_id = *erratum++;
++	u32 range;
++	u32 ms;
++
++	if (osvw_id >= 0 && osvw_id < 65536 &&
++	    cpu_has(cpu, X86_FEATURE_OSVW)) {
++		u64 osvw_len;
++
++		rdmsrl(MSR_AMD64_OSVW_ID_LENGTH, osvw_len);
++		if (osvw_id < osvw_len) {
++			u64 osvw_bits;
++
++			rdmsrl(MSR_AMD64_OSVW_STATUS + (osvw_id >> 6),
++			    osvw_bits);
++			return osvw_bits & (1ULL << (osvw_id & 0x3f));
++		}
++	}
++
++	/* OSVW unavailable or ID unknown, match family-model-stepping range */
++	ms = (cpu->x86_model << 4) | cpu->x86_stepping;
++	while ((range = *erratum++))
++		if ((cpu->x86 == AMD_MODEL_RANGE_FAMILY(range)) &&
++		    (ms >= AMD_MODEL_RANGE_START(range)) &&
++		    (ms <= AMD_MODEL_RANGE_END(range)))
++			return true;
++
++	return false;
++}
++
+ static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p)
+ {
+ 	u32 gprs[8] = { 0 };
+@@ -968,6 +1035,47 @@ static void init_amd_zn(struct cpuinfo_x86 *c)
+ 	}
+ }
+ 
++static bool cpu_has_zenbleed_microcode(void)
++{
++	u32 good_rev = 0;
++
++	switch (boot_cpu_data.x86_model) {
++	case 0x30 ... 0x3f: good_rev = 0x0830107a; break;
++	case 0x60 ... 0x67: good_rev = 0x0860010b; break;
++	case 0x68 ... 0x6f: good_rev = 0x08608105; break;
++	case 0x70 ... 0x7f: good_rev = 0x08701032; break;
++	case 0xa0 ... 0xaf: good_rev = 0x08a00008; break;
++
++	default:
++		return false;
++		break;
++	}
++
++	if (boot_cpu_data.microcode < good_rev)
++		return false;
++
++	return true;
++}
++
++static void zenbleed_check(struct cpuinfo_x86 *c)
++{
++	if (!cpu_has_amd_erratum(c, amd_zenbleed))
++		return;
++
++	if (cpu_has(c, X86_FEATURE_HYPERVISOR))
++		return;
++
++	if (!cpu_has(c, X86_FEATURE_AVX))
++		return;
++
++	if (!cpu_has_zenbleed_microcode()) {
++		pr_notice_once("Zenbleed: please update your microcode for the most optimal fix\n");
++		msr_set_bit(MSR_AMD64_DE_CFG, MSR_AMD64_DE_CFG_ZEN2_FP_BACKUP_FIX_BIT);
++	} else {
++		msr_clear_bit(MSR_AMD64_DE_CFG, MSR_AMD64_DE_CFG_ZEN2_FP_BACKUP_FIX_BIT);
++	}
++}
++
+ static void init_amd(struct cpuinfo_x86 *c)
+ {
+ 	early_init_amd(c);
+@@ -1058,6 +1166,8 @@ static void init_amd(struct cpuinfo_x86 *c)
+ 		msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);
+ 
+ 	check_null_seg_clears_base(c);
++
++	zenbleed_check(c);
+ }
+ 
+ #ifdef CONFIG_X86_32
+@@ -1153,73 +1263,6 @@ static const struct cpu_dev amd_cpu_dev = {
+ 
+ cpu_dev_register(amd_cpu_dev);
+ 
+-/*
+- * AMD errata checking
+- *
+- * Errata are defined as arrays of ints using the AMD_LEGACY_ERRATUM() or
+- * AMD_OSVW_ERRATUM() macros. The latter is intended for newer errata that
+- * have an OSVW id assigned, which it takes as first argument. Both take a
+- * variable number of family-specific model-stepping ranges created by
+- * AMD_MODEL_RANGE().
+- *
+- * Example:
+- *
+- * const int amd_erratum_319[] =
+- *	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x10, 0x2, 0x1, 0x4, 0x2),
+- *			   AMD_MODEL_RANGE(0x10, 0x8, 0x0, 0x8, 0x0),
+- *			   AMD_MODEL_RANGE(0x10, 0x9, 0x0, 0x9, 0x0));
+- */
+-
+-#define AMD_LEGACY_ERRATUM(...)		{ -1, __VA_ARGS__, 0 }
+-#define AMD_OSVW_ERRATUM(osvw_id, ...)	{ osvw_id, __VA_ARGS__, 0 }
+-#define AMD_MODEL_RANGE(f, m_start, s_start, m_end, s_end) \
+-	((f << 24) | (m_start << 16) | (s_start << 12) | (m_end << 4) | (s_end))
+-#define AMD_MODEL_RANGE_FAMILY(range)	(((range) >> 24) & 0xff)
+-#define AMD_MODEL_RANGE_START(range)	(((range) >> 12) & 0xfff)
+-#define AMD_MODEL_RANGE_END(range)	((range) & 0xfff)
+-
+-static const int amd_erratum_400[] =
+-	AMD_OSVW_ERRATUM(1, AMD_MODEL_RANGE(0xf, 0x41, 0x2, 0xff, 0xf),
+-			    AMD_MODEL_RANGE(0x10, 0x2, 0x1, 0xff, 0xf));
+-
+-static const int amd_erratum_383[] =
+-	AMD_OSVW_ERRATUM(3, AMD_MODEL_RANGE(0x10, 0, 0, 0xff, 0xf));
+-
+-/* #1054: Instructions Retired Performance Counter May Be Inaccurate */
+-static const int amd_erratum_1054[] =
+-	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0, 0, 0x2f, 0xf));
+-
+-static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
+-{
+-	int osvw_id = *erratum++;
+-	u32 range;
+-	u32 ms;
+-
+-	if (osvw_id >= 0 && osvw_id < 65536 &&
+-	    cpu_has(cpu, X86_FEATURE_OSVW)) {
+-		u64 osvw_len;
+-
+-		rdmsrl(MSR_AMD64_OSVW_ID_LENGTH, osvw_len);
+-		if (osvw_id < osvw_len) {
+-			u64 osvw_bits;
+-
+-			rdmsrl(MSR_AMD64_OSVW_STATUS + (osvw_id >> 6),
+-			    osvw_bits);
+-			return osvw_bits & (1ULL << (osvw_id & 0x3f));
+-		}
+-	}
+-
+-	/* OSVW unavailable or ID unknown, match family-model-stepping range */
+-	ms = (cpu->x86_model << 4) | cpu->x86_stepping;
+-	while ((range = *erratum++))
+-		if ((cpu->x86 == AMD_MODEL_RANGE_FAMILY(range)) &&
+-		    (ms >= AMD_MODEL_RANGE_START(range)) &&
+-		    (ms <= AMD_MODEL_RANGE_END(range)))
+-			return true;
+-
+-	return false;
+-}
+-
+ void set_dr_addr_mask(unsigned long mask, int dr)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_BPEXT))
+@@ -1238,3 +1281,15 @@ void set_dr_addr_mask(unsigned long mask, int dr)
+ 		break;
+ 	}
+ }
++
++static void zenbleed_check_cpu(void *unused)
++{
++	struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
++
++	zenbleed_check(c);
++}
++
++void amd_check_microcode(void)
++{
++	on_each_cpu(zenbleed_check_cpu, NULL, 1);
++}
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index e2dee60108460..f41781d06a5f3 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -2165,6 +2165,8 @@ void microcode_check(struct cpuinfo_x86 *prev_info)
+ 
+ 	perf_check_microcode();
+ 
++	amd_check_microcode();
++
+ 	store_cpu_caps(&curr_info);
+ 
+ 	if (!memcmp(&prev_info->x86_capability, &curr_info.x86_capability,
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index d3bce6d380ed6..936085d819b10 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -700,7 +700,7 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ 	rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+ 
+ 	/* need to apply patch? */
+-	if (rev >= mc_amd->hdr.patch_id) {
++	if (rev > mc_amd->hdr.patch_id) {
+ 		ret = UCODE_OK;
+ 		goto out;
+ 	}


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-07-27 11:50 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-07-27 11:50 UTC (permalink / raw
  To: gentoo-commits

commit:     887888d9c0193e587f655c5726fe16c2b16e40cf
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jul 27 11:50:06 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jul 27 11:50:06 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=887888d9

Linux patch 5.10.188

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1187_linux-5.10.188.patch | 21150 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 21154 insertions(+)

diff --git a/0000_README b/0000_README
index 19c3539a..1cb8b638 100644
--- a/0000_README
+++ b/0000_README
@@ -791,6 +791,10 @@ Patch:  1186_linux-5.10.187.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.187
 
+Patch:  1187_linux-5.10.188.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.188
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1187_linux-5.10.188.patch b/1187_linux-5.10.188.patch
new file mode 100644
index 00000000..a08863eb
--- /dev/null
+++ b/1187_linux-5.10.188.patch
@@ -0,0 +1,21150 @@
+diff --git a/Documentation/filesystems/autofs-mount-control.rst b/Documentation/filesystems/autofs-mount-control.rst
+index bf4b511cdbe85..b5a379d25c40b 100644
+--- a/Documentation/filesystems/autofs-mount-control.rst
++++ b/Documentation/filesystems/autofs-mount-control.rst
+@@ -196,7 +196,7 @@ information and return operation results::
+ 		    struct args_ismountpoint	ismountpoint;
+ 	    };
+ 
+-	    char path[0];
++	    char path[];
+     };
+ 
+ The ioctlfd field is a mount point file descriptor of an autofs mount
+diff --git a/Documentation/filesystems/autofs.rst b/Documentation/filesystems/autofs.rst
+index 681c6a492bc0c..1b495768e7aaf 100644
+--- a/Documentation/filesystems/autofs.rst
++++ b/Documentation/filesystems/autofs.rst
+@@ -467,7 +467,7 @@ Each ioctl is passed a pointer to an `autofs_dev_ioctl` structure::
+ 			struct args_ismountpoint	ismountpoint;
+ 		};
+ 
+-                char path[0];
++                char path[];
+         };
+ 
+ For the **OPEN_MOUNT** and **IS_MOUNTPOINT** commands, the target
+diff --git a/Documentation/filesystems/directory-locking.rst b/Documentation/filesystems/directory-locking.rst
+index 504ba940c36c1..dccd61c7c5c3b 100644
+--- a/Documentation/filesystems/directory-locking.rst
++++ b/Documentation/filesystems/directory-locking.rst
+@@ -22,12 +22,11 @@ exclusive.
+ 3) object removal.  Locking rules: caller locks parent, finds victim,
+ locks victim and calls the method.  Locks are exclusive.
+ 
+-4) rename() that is _not_ cross-directory.  Locking rules: caller locks
+-the parent and finds source and target.  In case of exchange (with
+-RENAME_EXCHANGE in flags argument) lock both.  In any case,
+-if the target already exists, lock it.  If the source is a non-directory,
+-lock it.  If we need to lock both, lock them in inode pointer order.
+-Then call the method.  All locks are exclusive.
++4) rename() that is _not_ cross-directory.  Locking rules: caller locks the
++parent and finds source and target.  We lock both (provided they exist).  If we
++need to lock two inodes of different type (dir vs non-dir), we lock directory
++first.  If we need to lock two inodes of the same type, lock them in inode
++pointer order.  Then call the method.  All locks are exclusive.
+ NB: we might get away with locking the source (and target in exchange
+ case) shared.
+ 
+@@ -44,15 +43,17 @@ All locks are exclusive.
+ rules:
+ 
+ 	* lock the filesystem
+-	* lock parents in "ancestors first" order.
++	* lock parents in "ancestors first" order. If one is not ancestor of
++	  the other, lock them in inode pointer order.
+ 	* find source and target.
+ 	* if old parent is equal to or is a descendent of target
+ 	  fail with -ENOTEMPTY
+ 	* if new parent is equal to or is a descendent of source
+ 	  fail with -ELOOP
+-	* If it's an exchange, lock both the source and the target.
+-	* If the target exists, lock it.  If the source is a non-directory,
+-	  lock it.  If we need to lock both, do so in inode pointer order.
++	* Lock both the source and the target provided they exist. If we
++	  need to lock two inodes of different type (dir vs non-dir), we lock
++	  the directory first. If we need to lock two inodes of the same type,
++	  lock them in inode pointer order.
+ 	* call the method.
+ 
+ All ->i_rwsem are taken exclusive.  Again, we might get away with locking
+@@ -66,8 +67,9 @@ If no directory is its own ancestor, the scheme above is deadlock-free.
+ 
+ Proof:
+ 
+-	First of all, at any moment we have a partial ordering of the
+-	objects - A < B iff A is an ancestor of B.
++	First of all, at any moment we have a linear ordering of the
++	objects - A < B iff (A is an ancestor of B) or (B is not an ancestor
++        of A and ptr(A) < ptr(B)).
+ 
+ 	That ordering can change.  However, the following is true:
+ 
+diff --git a/Documentation/networking/af_xdp.rst b/Documentation/networking/af_xdp.rst
+index 2ccc5644cc98a..70623cb135d3c 100644
+--- a/Documentation/networking/af_xdp.rst
++++ b/Documentation/networking/af_xdp.rst
+@@ -433,6 +433,15 @@ start N bytes into the buffer leaving the first N bytes for the
+ application to use. The final option is the flags field, but it will
+ be dealt with in separate sections for each UMEM flag.
+ 
++SO_BINDTODEVICE setsockopt
++--------------------------
++
++This is a generic SOL_SOCKET option that can be used to tie AF_XDP
++socket to a particular network interface.  It is useful when a socket
++is created by a privileged process and passed to a non-privileged one.
++Once the option is set, kernel will refuse attempts to bind that socket
++to a different interface.  Updating the value requires CAP_NET_RAW.
++
+ XDP_STATISTICS getsockopt
+ -------------------------
+ 
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index df26cf4110ef5..252212998378e 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -713,6 +713,31 @@ tcp_syncookies - INTEGER
+ 	network connections you can set this knob to 2 to enable
+ 	unconditionally generation of syncookies.
+ 
++tcp_migrate_req - BOOLEAN
++	The incoming connection is tied to a specific listening socket when
++	the initial SYN packet is received during the three-way handshake.
++	When a listener is closed, in-flight request sockets during the
++	handshake and established sockets in the accept queue are aborted.
++
++	If the listener has SO_REUSEPORT enabled, other listeners on the
++	same port should have been able to accept such connections. This
++	option makes it possible to migrate such child sockets to another
++	listener after close() or shutdown().
++
++	The BPF_SK_REUSEPORT_SELECT_OR_MIGRATE type of eBPF program should
++	usually be used to define the policy to pick an alive listener.
++	Otherwise, the kernel will randomly pick an alive listener only if
++	this option is enabled.
++
++	Note that migration between listeners with different settings may
++	crash applications. Let's say migration happens from listener A to
++	B, and only B has TCP_SAVE_SYN enabled. B cannot read SYN data from
++	the requests migrated from A. To avoid such a situation, cancel
++	migration by returning SK_DROP in the type of eBPF program, or
++	disable this option.
++
++	Default: 0
++
+ tcp_fastopen - INTEGER
+ 	Enable TCP Fast Open (RFC7413) to send and accept data in the opening
+ 	SYN packet.
+diff --git a/Makefile b/Makefile
+index 2aaf3b0b9250b..7f9e1667d87d8 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 187
++SUBLEVEL = 188
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
+index 660b14ce13179..12c120e436a24 100644
+--- a/arch/alpha/include/asm/pgtable.h
++++ b/arch/alpha/include/asm/pgtable.h
+@@ -241,8 +241,10 @@ pmd_page_vaddr(pmd_t pmd)
+ #define pud_page(pud)	(mem_map + ((pud_val(pud) & _PFN_MASK) >> 32))
+ #endif
+ 
+-extern inline unsigned long pud_page_vaddr(pud_t pgd)
+-{ return PAGE_OFFSET + ((pud_val(pgd) & _PFN_MASK) >> (32-PAGE_SHIFT)); }
++extern inline pmd_t *pud_pgtable(pud_t pgd)
++{
++	return (pmd_t *)(PAGE_OFFSET + ((pud_val(pgd) & _PFN_MASK) >> (32-PAGE_SHIFT)));
++}
+ 
+ extern inline int pte_none(pte_t pte)		{ return !pte_val(pte); }
+ extern inline int pte_present(pte_t pte)	{ return pte_val(pte) & _PAGE_VALID; }
+@@ -292,7 +294,7 @@ extern inline pte_t pte_mkyoung(pte_t pte)	{ pte_val(pte) |= __ACCESS_BITS; retu
+ /* Find an entry in the second-level page table.. */
+ extern inline pmd_t * pmd_offset(pud_t * dir, unsigned long address)
+ {
+-	pmd_t *ret = (pmd_t *) pud_page_vaddr(*dir) + ((address >> PMD_SHIFT) & (PTRS_PER_PAGE - 1));
++	pmd_t *ret = pud_pgtable(*dir) + ((address >> PMD_SHIFT) & (PTRS_PER_PAGE - 1));
+ 	smp_rmb(); /* see above */
+ 	return ret;
+ }
+diff --git a/arch/arc/include/asm/linkage.h b/arch/arc/include/asm/linkage.h
+index c9434ff3aa4ce..8a3fb71e9cfad 100644
+--- a/arch/arc/include/asm/linkage.h
++++ b/arch/arc/include/asm/linkage.h
+@@ -8,6 +8,10 @@
+ 
+ #include <asm/dwarf.h>
+ 
++#define ASM_NL		 `	/* use '`' to mark new line in macro */
++#define __ALIGN		.align 4
++#define __ALIGN_STR	__stringify(__ALIGN)
++
+ #ifdef __ASSEMBLY__
+ 
+ .macro ST2 e, o, off
+@@ -28,10 +32,6 @@
+ #endif
+ .endm
+ 
+-#define ASM_NL		 `	/* use '`' to mark new line in macro */
+-#define __ALIGN		.align 4
+-#define __ALIGN_STR	__stringify(__ALIGN)
+-
+ /* annotation for data we want in DCCM - if enabled in .config */
+ .macro ARCFP_DATA nm
+ #ifdef CONFIG_ARC_HAS_DCCM
+diff --git a/arch/arm/boot/dts/bcm53015-meraki-mr26.dts b/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
+index 14f58033efeb9..ca2266b936ee2 100644
+--- a/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
++++ b/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
+@@ -128,7 +128,7 @@
+ 
+ 			fixed-link {
+ 				speed = <1000>;
+-				duplex-full;
++				full-duplex;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm53016-meraki-mr32.dts b/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
+index 577a4dc604d93..edf9910100b02 100644
+--- a/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
++++ b/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
+@@ -212,7 +212,7 @@
+ 
+ 			fixed-link {
+ 				speed = <1000>;
+-				duplex-full;
++				full-duplex;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index 9fdad20c40d17..4e9bb10f37d0f 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -532,7 +532,6 @@
+ 				  "spi_lr_session_done",
+ 				  "spi_lr_overread";
+ 		clocks = <&iprocmed>;
+-		clock-names = "iprocmed";
+ 		num-cs = <2>;
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm/boot/dts/iwg20d-q7-common.dtsi b/arch/arm/boot/dts/iwg20d-q7-common.dtsi
+index 63cafd220dba1..358f5477deef6 100644
+--- a/arch/arm/boot/dts/iwg20d-q7-common.dtsi
++++ b/arch/arm/boot/dts/iwg20d-q7-common.dtsi
+@@ -49,7 +49,7 @@
+ 	lcd_backlight: backlight {
+ 		compatible = "pwm-backlight";
+ 
+-		pwms = <&pwm3 0 5000000 0>;
++		pwms = <&pwm3 0 5000000>;
+ 		brightness-levels = <0 4 8 16 32 64 128 255>;
+ 		default-brightness-level = <7>;
+ 		enable-gpios = <&gpio5 14 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/meson8.dtsi b/arch/arm/boot/dts/meson8.dtsi
+index 08533116a39ce..0d045add81658 100644
+--- a/arch/arm/boot/dts/meson8.dtsi
++++ b/arch/arm/boot/dts/meson8.dtsi
+@@ -611,13 +611,13 @@
+ 
+ &uart_B {
+ 	compatible = "amlogic,meson8-uart";
+-	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clocks = <&xtal>, <&clkc CLKID_UART1>, <&clkc CLKID_CLK81>;
+ 	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_C {
+ 	compatible = "amlogic,meson8-uart";
+-	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clocks = <&xtal>, <&clkc CLKID_UART2>, <&clkc CLKID_CLK81>;
+ 	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+diff --git a/arch/arm/boot/dts/meson8b.dtsi b/arch/arm/boot/dts/meson8b.dtsi
+index f6eb7c803174e..af2454c9f77a4 100644
+--- a/arch/arm/boot/dts/meson8b.dtsi
++++ b/arch/arm/boot/dts/meson8b.dtsi
+@@ -599,13 +599,13 @@
+ 
+ &uart_B {
+ 	compatible = "amlogic,meson8b-uart";
+-	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clocks = <&xtal>, <&clkc CLKID_UART1>, <&clkc CLKID_CLK81>;
+ 	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_C {
+ 	compatible = "amlogic,meson8b-uart";
+-	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clocks = <&xtal>, <&clkc CLKID_UART2>, <&clkc CLKID_CLK81>;
+ 	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+diff --git a/arch/arm/boot/dts/omap3-gta04a5one.dts b/arch/arm/boot/dts/omap3-gta04a5one.dts
+index 9db9fe67cd63b..95df45cc70c09 100644
+--- a/arch/arm/boot/dts/omap3-gta04a5one.dts
++++ b/arch/arm/boot/dts/omap3-gta04a5one.dts
+@@ -5,9 +5,11 @@
+ 
+ #include "omap3-gta04a5.dts"
+ 
+-&omap3_pmx_core {
++/ {
+ 	model = "Goldelico GTA04A5/Letux 2804 with OneNAND";
++};
+ 
++&omap3_pmx_core {
+ 	gpmc_pins: pinmux_gpmc_pins {
+ 		pinctrl-single,pins = <
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+index fd0cd10cb0931..2c391065135e3 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -120,10 +120,13 @@
+ 
+ 	sound {
+ 		compatible = "audio-graph-card";
+-		routing =
+-			"MIC_IN", "Capture",
+-			"Capture", "Mic Bias",
+-			"Playback", "HP_OUT";
++		widgets = "Headphone", "Headphone Jack",
++			  "Line", "Line In Jack",
++			  "Microphone", "Microphone Jack";
++		routing = "Headphone Jack", "HP_OUT",
++			  "LINE_IN", "Line In Jack",
++			  "MIC_IN", "Microphone Jack",
++			  "Microphone Jack", "Mic Bias";
+ 		dais = <&sai2a_port &sai2b_port>;
+ 		status = "okay";
+ 	};
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+index 723b39bb2129c..d8547307a9505 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+@@ -88,7 +88,7 @@
+ 
+ 	sound {
+ 		compatible = "audio-graph-card";
+-		label = "STM32MP1-AV96-HDMI";
++		label = "STM32-AV96-HDMI";
+ 		dais = <&sai2a_port>;
+ 		status = "okay";
+ 	};
+@@ -232,6 +232,12 @@
+ 			};
+ 		};
+ 	};
++
++	dh_mac_eeprom: eeprom@53 {
++		compatible = "atmel,24c02";
++		reg = <0x53>;
++		pagesize = <16>;
++	};
+ };
+ 
+ &ltdc {
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+index 5af32140e128b..7dba02e9ba6da 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+@@ -167,12 +167,6 @@
+ 			status = "disabled";
+ 		};
+ 	};
+-
+-	eeprom@53 {
+-		compatible = "atmel,24c02";
+-		reg = <0x53>;
+-		pagesize = <16>;
+-	};
+ };
+ 
+ &iwdg2 {
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+index 47df8ac67cf1a..75869d6a1ab24 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+@@ -406,7 +406,7 @@
+ 	i2s2_port: port {
+ 		i2s2_endpoint: endpoint {
+ 			remote-endpoint = <&sii9022_tx_endpoint>;
+-			format = "i2s";
++			dai-format = "i2s";
+ 			mclk-fs = <256>;
+ 		};
+ 	};
+diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
+index 2b85d175e9996..4487aea88477d 100644
+--- a/arch/arm/include/asm/pgtable-3level.h
++++ b/arch/arm/include/asm/pgtable-3level.h
+@@ -130,7 +130,7 @@
+ 		flush_pmd_entry(pudp);	\
+ 	} while (0)
+ 
+-static inline pmd_t *pud_page_vaddr(pud_t pud)
++static inline pmd_t *pud_pgtable(pud_t pud)
+ {
+ 	return __va(pud_val(pud) & PHYS_MASK & (s32)PAGE_MASK);
+ }
+diff --git a/arch/arm/mach-ep93xx/timer-ep93xx.c b/arch/arm/mach-ep93xx/timer-ep93xx.c
+index dd4b164d18317..a9efa7bc2fa12 100644
+--- a/arch/arm/mach-ep93xx/timer-ep93xx.c
++++ b/arch/arm/mach-ep93xx/timer-ep93xx.c
+@@ -9,6 +9,7 @@
+ #include <linux/io.h>
+ #include <asm/mach/time.h>
+ #include "soc.h"
++#include "platform.h"
+ 
+ /*************************************************************************
+  * Timer handling for EP93xx
+@@ -60,7 +61,7 @@ static u64 notrace ep93xx_read_sched_clock(void)
+ 	return ret;
+ }
+ 
+-u64 ep93xx_clocksource_read(struct clocksource *c)
++static u64 ep93xx_clocksource_read(struct clocksource *c)
+ {
+ 	u64 ret;
+ 
+diff --git a/arch/arm/mach-omap2/board-generic.c b/arch/arm/mach-omap2/board-generic.c
+index 1610c567a6a3a..10d2f078e4a8e 100644
+--- a/arch/arm/mach-omap2/board-generic.c
++++ b/arch/arm/mach-omap2/board-generic.c
+@@ -13,6 +13,7 @@
+ #include <linux/of_platform.h>
+ #include <linux/irqdomain.h>
+ #include <linux/clocksource.h>
++#include <linux/clockchips.h>
+ 
+ #include <asm/setup.h>
+ #include <asm/mach/arch.h>
+diff --git a/arch/arm/mach-orion5x/board-dt.c b/arch/arm/mach-orion5x/board-dt.c
+index 3d36f1d951964..3f651df3a71cf 100644
+--- a/arch/arm/mach-orion5x/board-dt.c
++++ b/arch/arm/mach-orion5x/board-dt.c
+@@ -63,6 +63,9 @@ static void __init orion5x_dt_init(void)
+ 	if (of_machine_is_compatible("maxtor,shared-storage-2"))
+ 		mss2_init();
+ 
++	if (of_machine_is_compatible("lacie,d2-network"))
++		d2net_init();
++
+ 	of_platform_default_populate(NULL, orion5x_auxdata_lookup, NULL);
+ }
+ 
+diff --git a/arch/arm/mach-orion5x/common.h b/arch/arm/mach-orion5x/common.h
+index eb96009e21c4c..b9cfdb4564568 100644
+--- a/arch/arm/mach-orion5x/common.h
++++ b/arch/arm/mach-orion5x/common.h
+@@ -75,6 +75,12 @@ extern void mss2_init(void);
+ static inline void mss2_init(void) {}
+ #endif
+ 
++#ifdef CONFIG_MACH_D2NET_DT
++void d2net_init(void);
++#else
++static inline void d2net_init(void) {}
++#endif
++
+ /*****************************************************************************
+  * Helpers to access Orion registers
+  ****************************************************************************/
+diff --git a/arch/arm/probes/kprobes/checkers-common.c b/arch/arm/probes/kprobes/checkers-common.c
+index 4d720990cf2a3..eba7ac4725c02 100644
+--- a/arch/arm/probes/kprobes/checkers-common.c
++++ b/arch/arm/probes/kprobes/checkers-common.c
+@@ -40,7 +40,7 @@ enum probes_insn checker_stack_use_imm_0xx(probes_opcode_t insn,
+  * Different from other insn uses imm8, the real addressing offset of
+  * STRD in T32 encoding should be imm8 * 4. See ARMARM description.
+  */
+-enum probes_insn checker_stack_use_t32strd(probes_opcode_t insn,
++static enum probes_insn checker_stack_use_t32strd(probes_opcode_t insn,
+ 		struct arch_probes_insn *asi,
+ 		const struct decode_header *h)
+ {
+diff --git a/arch/arm/probes/kprobes/core.c b/arch/arm/probes/kprobes/core.c
+index e513d8a467760..c0ed172893787 100644
+--- a/arch/arm/probes/kprobes/core.c
++++ b/arch/arm/probes/kprobes/core.c
+@@ -231,7 +231,7 @@ singlestep(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb)
+  * kprobe, and that level is reserved for user kprobe handlers, so we can't
+  * risk encountering a new kprobe in an interrupt handler.
+  */
+-void __kprobes kprobe_handler(struct pt_regs *regs)
++static void __kprobes kprobe_handler(struct pt_regs *regs)
+ {
+ 	struct kprobe *p, *cur;
+ 	struct kprobe_ctlblk *kcb;
+diff --git a/arch/arm/probes/kprobes/opt-arm.c b/arch/arm/probes/kprobes/opt-arm.c
+index c78180172120f..e20304f1d8bc9 100644
+--- a/arch/arm/probes/kprobes/opt-arm.c
++++ b/arch/arm/probes/kprobes/opt-arm.c
+@@ -145,8 +145,6 @@ __arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
+ 	}
+ }
+ 
+-extern void kprobe_handler(struct pt_regs *regs);
+-
+ static void
+ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+ {
+diff --git a/arch/arm/probes/kprobes/test-core.c b/arch/arm/probes/kprobes/test-core.c
+index c562832b86272..171c7076b89f4 100644
+--- a/arch/arm/probes/kprobes/test-core.c
++++ b/arch/arm/probes/kprobes/test-core.c
+@@ -720,7 +720,7 @@ static const char coverage_register_lookup[16] = {
+ 	[REG_TYPE_NOSPPCX]	= COVERAGE_ANY_REG | COVERAGE_SP,
+ };
+ 
+-unsigned coverage_start_registers(const struct decode_header *h)
++static unsigned coverage_start_registers(const struct decode_header *h)
+ {
+ 	unsigned regs = 0;
+ 	int i;
+diff --git a/arch/arm/probes/kprobes/test-core.h b/arch/arm/probes/kprobes/test-core.h
+index 19a5b2add41e1..805116c2ec27c 100644
+--- a/arch/arm/probes/kprobes/test-core.h
++++ b/arch/arm/probes/kprobes/test-core.h
+@@ -453,3 +453,7 @@ void kprobe_thumb32_test_cases(void);
+ #else
+ void kprobe_arm_test_cases(void);
+ #endif
++
++void __kprobes_test_case_start(void);
++void __kprobes_test_case_end_16(void);
++void __kprobes_test_case_end_32(void);
+diff --git a/arch/arm64/boot/dts/microchip/sparx5.dtsi b/arch/arm64/boot/dts/microchip/sparx5.dtsi
+index 3cb01c39c3c80..8dd679fbeed1c 100644
+--- a/arch/arm64/boot/dts/microchip/sparx5.dtsi
++++ b/arch/arm64/boot/dts/microchip/sparx5.dtsi
+@@ -61,7 +61,7 @@
+ 		interrupt-affinity = <&cpu0>, <&cpu1>;
+ 	};
+ 
+-	psci {
++	psci: psci {
+ 		compatible = "arm,psci-0.2";
+ 		method = "smc";
+ 	};
+diff --git a/arch/arm64/boot/dts/microchip/sparx5_pcb_common.dtsi b/arch/arm64/boot/dts/microchip/sparx5_pcb_common.dtsi
+index 9d1a082de3e29..32bb76b3202a0 100644
+--- a/arch/arm64/boot/dts/microchip/sparx5_pcb_common.dtsi
++++ b/arch/arm64/boot/dts/microchip/sparx5_pcb_common.dtsi
+@@ -6,6 +6,18 @@
+ /dts-v1/;
+ #include "sparx5.dtsi"
+ 
++&psci {
++	status = "disabled";
++};
++
++&cpu0 {
++	enable-method = "spin-table";
++};
++
++&cpu1 {
++	enable-method = "spin-table";
++};
++
+ &uart0 {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts b/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts
+index f6ddf17ada81b..861b356a982b7 100644
+--- a/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts
++++ b/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts
+@@ -26,7 +26,7 @@
+ 
+ 	v1p05: v1p05-regulator {
+ 		compatible = "regulator-fixed";
+-		reglator-name = "v1p05";
++		regulator-name = "v1p05";
+ 		regulator-always-on;
+ 		regulator-boot-on;
+ 
+@@ -38,7 +38,7 @@
+ 
+ 	v12_poe: v12-poe-regulator {
+ 		compatible = "regulator-fixed";
+-		reglator-name = "v12_poe";
++		regulator-name = "v12_poe";
+ 		regulator-always-on;
+ 		regulator-boot-on;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index c32e4a3833f23..5b79e4a373311 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -1006,7 +1006,7 @@
+ 			};
+ 		};
+ 
+-		camss: camss@1b00000 {
++		camss: camss@1b0ac00 {
+ 			compatible = "qcom,msm8916-camss";
+ 			reg = <0x01b0ac00 0x200>,
+ 				<0x01b00030 0x4>,
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index aeb5762566e91..caaf7102f5798 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -489,7 +489,7 @@
+ 			reg = <0xfc4ab000 0x4>;
+ 		};
+ 
+-		spmi_bus: spmi@fc4c0000 {
++		spmi_bus: spmi@fc4cf000 {
+ 			compatible = "qcom,spmi-pmic-arb";
+ 			reg = <0xfc4cf000 0x1000>,
+ 			      <0xfc4cb000 0x1000>,
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 159cdd03e7c01..73f7490911c92 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -956,7 +956,7 @@
+ 			};
+ 		};
+ 
+-		camss: camss@a00000 {
++		camss: camss@a34000 {
+ 			compatible = "qcom,msm8996-camss";
+ 			reg = <0x00a34000 0x1000>,
+ 			      <0x00a00030 0x4>,
+diff --git a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+index 05e64bfad0235..24d0a1337ae1c 100644
+--- a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
++++ b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+@@ -270,7 +270,7 @@
+ 	};
+ 
+ 	scif1_pins: scif1 {
+-		groups = "scif1_data_b", "scif1_ctrl";
++		groups = "scif1_data_b";
+ 		function = "scif1";
+ 	};
+ 
+@@ -330,7 +330,6 @@
+ &scif1 {
+ 	pinctrl-0 = <&scif1_pins>;
+ 	pinctrl-names = "default";
+-	uart-has-rtscts;
+ 
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+index 909ab6661aef5..4ec5e955c33c2 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+@@ -19,25 +19,25 @@
+ &wkup_pmx2 {
+ 	mcu_cpsw_pins_default: mcu-cpsw-pins-default {
+ 		pinctrl-single,pins = <
+-			J721E_WKUP_IOPAD(0x0068, PIN_OUTPUT, 0) /* MCU_RGMII1_TX_CTL */
+-			J721E_WKUP_IOPAD(0x006c, PIN_INPUT, 0) /* MCU_RGMII1_RX_CTL */
+-			J721E_WKUP_IOPAD(0x0070, PIN_OUTPUT, 0) /* MCU_RGMII1_TD3 */
+-			J721E_WKUP_IOPAD(0x0074, PIN_OUTPUT, 0) /* MCU_RGMII1_TD2 */
+-			J721E_WKUP_IOPAD(0x0078, PIN_OUTPUT, 0) /* MCU_RGMII1_TD1 */
+-			J721E_WKUP_IOPAD(0x007c, PIN_OUTPUT, 0) /* MCU_RGMII1_TD0 */
+-			J721E_WKUP_IOPAD(0x0088, PIN_INPUT, 0) /* MCU_RGMII1_RD3 */
+-			J721E_WKUP_IOPAD(0x008c, PIN_INPUT, 0) /* MCU_RGMII1_RD2 */
+-			J721E_WKUP_IOPAD(0x0090, PIN_INPUT, 0) /* MCU_RGMII1_RD1 */
+-			J721E_WKUP_IOPAD(0x0094, PIN_INPUT, 0) /* MCU_RGMII1_RD0 */
+-			J721E_WKUP_IOPAD(0x0080, PIN_OUTPUT, 0) /* MCU_RGMII1_TXC */
+-			J721E_WKUP_IOPAD(0x0084, PIN_INPUT, 0) /* MCU_RGMII1_RXC */
++			J721E_WKUP_IOPAD(0x0000, PIN_OUTPUT, 0) /* MCU_RGMII1_TX_CTL */
++			J721E_WKUP_IOPAD(0x0004, PIN_INPUT, 0) /* MCU_RGMII1_RX_CTL */
++			J721E_WKUP_IOPAD(0x0008, PIN_OUTPUT, 0) /* MCU_RGMII1_TD3 */
++			J721E_WKUP_IOPAD(0x000c, PIN_OUTPUT, 0) /* MCU_RGMII1_TD2 */
++			J721E_WKUP_IOPAD(0x0010, PIN_OUTPUT, 0) /* MCU_RGMII1_TD1 */
++			J721E_WKUP_IOPAD(0x0014, PIN_OUTPUT, 0) /* MCU_RGMII1_TD0 */
++			J721E_WKUP_IOPAD(0x0020, PIN_INPUT, 0) /* MCU_RGMII1_RD3 */
++			J721E_WKUP_IOPAD(0x0024, PIN_INPUT, 0) /* MCU_RGMII1_RD2 */
++			J721E_WKUP_IOPAD(0x0028, PIN_INPUT, 0) /* MCU_RGMII1_RD1 */
++			J721E_WKUP_IOPAD(0x002c, PIN_INPUT, 0) /* MCU_RGMII1_RD0 */
++			J721E_WKUP_IOPAD(0x0018, PIN_OUTPUT, 0) /* MCU_RGMII1_TXC */
++			J721E_WKUP_IOPAD(0x001c, PIN_INPUT, 0) /* MCU_RGMII1_RXC */
+ 		>;
+ 	};
+ 
+ 	mcu_mdio_pins_default: mcu-mdio1-pins-default {
+ 		pinctrl-single,pins = <
+-			J721E_WKUP_IOPAD(0x009c, PIN_OUTPUT, 0) /* (L1) MCU_MDIO0_MDC */
+-			J721E_WKUP_IOPAD(0x0098, PIN_INPUT, 0) /* (L4) MCU_MDIO0_MDIO */
++			J721E_WKUP_IOPAD(0x0034, PIN_OUTPUT, 0) /* (L1) MCU_MDIO0_MDC */
++			J721E_WKUP_IOPAD(0x0030, PIN_INPUT, 0) /* (L4) MCU_MDIO0_MDIO */
+ 		>;
+ 	};
+ };
+diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
+index 0756191f44f64..59c3facb8a560 100644
+--- a/arch/arm64/include/asm/exception.h
++++ b/arch/arm64/include/asm/exception.h
+@@ -8,16 +8,11 @@
+ #define __ASM_EXCEPTION_H
+ 
+ #include <asm/esr.h>
+-#include <asm/kprobes.h>
+ #include <asm/ptrace.h>
+ 
+ #include <linux/interrupt.h>
+ 
+-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ #define __exception_irq_entry	__irq_entry
+-#else
+-#define __exception_irq_entry	__kprobes
+-#endif
+ 
+ static inline u32 disr_to_esr(u64 disr)
+ {
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index 3f74db7b0a31d..4eedfd784cf63 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -633,9 +633,9 @@ static inline phys_addr_t pud_page_paddr(pud_t pud)
+ 	return __pud_to_phys(pud);
+ }
+ 
+-static inline unsigned long pud_page_vaddr(pud_t pud)
++static inline pmd_t *pud_pgtable(pud_t pud)
+ {
+-	return (unsigned long)__va(pud_page_paddr(pud));
++	return (pmd_t *)__va(pud_page_paddr(pud));
+ }
+ 
+ /* Find an entry in the second-level page table. */
+@@ -694,9 +694,9 @@ static inline phys_addr_t p4d_page_paddr(p4d_t p4d)
+ 	return __p4d_to_phys(p4d);
+ }
+ 
+-static inline unsigned long p4d_page_vaddr(p4d_t p4d)
++static inline pud_t *p4d_pgtable(p4d_t p4d)
+ {
+-	return (unsigned long)__va(p4d_page_paddr(p4d));
++	return (pud_t *)__va(p4d_page_paddr(p4d));
+ }
+ 
+ /* Find an entry in the frst-level page table. */
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 3284709ef5676..78f9fb638c9cd 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -421,7 +421,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
+ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
+ 				  phys_addr_t size, pgprot_t prot)
+ {
+-	if ((virt >= PAGE_END) && (virt < VMALLOC_START)) {
++	if (virt < PAGE_OFFSET) {
+ 		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
+ 			&phys, virt);
+ 		return;
+@@ -448,7 +448,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
+ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
+ 				phys_addr_t size, pgprot_t prot)
+ {
+-	if ((virt >= PAGE_END) && (virt < VMALLOC_START)) {
++	if (virt < PAGE_OFFSET) {
+ 		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
+ 			&phys, virt);
+ 		return;
+diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
+index 9f64fdfbf2750..6e5c387566573 100644
+--- a/arch/ia64/include/asm/pgtable.h
++++ b/arch/ia64/include/asm/pgtable.h
+@@ -279,7 +279,7 @@ extern unsigned long VMALLOC_END;
+ #define pud_bad(pud)			(!ia64_phys_addr_valid(pud_val(pud)))
+ #define pud_present(pud)		(pud_val(pud) != 0UL)
+ #define pud_clear(pudp)			(pud_val(*(pudp)) = 0UL)
+-#define pud_page_vaddr(pud)		((unsigned long) __va(pud_val(pud) & _PFN_MASK))
++#define pud_pgtable(pud)		((pmd_t *) __va(pud_val(pud) & _PFN_MASK))
+ #define pud_page(pud)			virt_to_page((pud_val(pud) + PAGE_OFFSET))
+ 
+ #if CONFIG_PGTABLE_LEVELS == 4
+@@ -287,7 +287,7 @@ extern unsigned long VMALLOC_END;
+ #define p4d_bad(p4d)			(!ia64_phys_addr_valid(p4d_val(p4d)))
+ #define p4d_present(p4d)		(p4d_val(p4d) != 0UL)
+ #define p4d_clear(p4dp)			(p4d_val(*(p4dp)) = 0UL)
+-#define p4d_page_vaddr(p4d)		((unsigned long) __va(p4d_val(p4d) & _PFN_MASK))
++#define p4d_pgtable(p4d)		((pud_t *) __va(p4d_val(p4d) & _PFN_MASK))
+ #define p4d_page(p4d)			virt_to_page((p4d_val(p4d) + PAGE_OFFSET))
+ #endif
+ 
+diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h
+index 8076467eff4b0..956c80874f98b 100644
+--- a/arch/m68k/include/asm/motorola_pgtable.h
++++ b/arch/m68k/include/asm/motorola_pgtable.h
+@@ -129,7 +129,7 @@ static inline void pud_set(pud_t *pudp, pmd_t *pmdp)
+ 
+ #define __pte_page(pte) ((unsigned long)__va(pte_val(pte) & PAGE_MASK))
+ #define pmd_page_vaddr(pmd) ((unsigned long)__va(pmd_val(pmd) & _TABLE_MASK))
+-#define pud_page_vaddr(pud) ((unsigned long)__va(pud_val(pud) & _TABLE_MASK))
++#define pud_pgtable(pud) ((pmd_t *)__va(pud_val(pud) & _TABLE_MASK))
+ 
+ 
+ #define pte_none(pte)		(!pte_val(pte))
+diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
+index 1e7d6ce9d8d62..b865edff2670e 100644
+--- a/arch/mips/include/asm/pgtable-64.h
++++ b/arch/mips/include/asm/pgtable-64.h
+@@ -210,9 +210,9 @@ static inline void p4d_clear(p4d_t *p4dp)
+ 	p4d_val(*p4dp) = (unsigned long)invalid_pud_table;
+ }
+ 
+-static inline unsigned long p4d_page_vaddr(p4d_t p4d)
++static inline pud_t *p4d_pgtable(p4d_t p4d)
+ {
+-	return p4d_val(p4d);
++	return (pud_t *)p4d_val(p4d);
+ }
+ 
+ #define p4d_phys(p4d)		virt_to_phys((void *)p4d_val(p4d))
+@@ -314,9 +314,9 @@ static inline void pud_clear(pud_t *pudp)
+ #endif
+ 
+ #ifndef __PAGETABLE_PMD_FOLDED
+-static inline unsigned long pud_page_vaddr(pud_t pud)
++static inline pmd_t *pud_pgtable(pud_t pud)
+ {
+-	return pud_val(pud);
++	return (pmd_t *)pud_val(pud);
+ }
+ #define pud_phys(pud)		virt_to_phys((void *)pud_val(pud))
+ #define pud_page(pud)		(pfn_to_page(pud_phys(pud) >> PAGE_SHIFT))
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index d120201910acf..f8d1933bfe823 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1721,7 +1721,10 @@ static inline void decode_cpucfg(struct cpuinfo_mips *c)
+ 
+ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ {
++	c->cputype = CPU_LOONGSON64;
++
+ 	/* All Loongson processors covered here define ExcCode 16 as GSExc. */
++	decode_configs(c);
+ 	c->options |= MIPS_CPU_GSEXCEX;
+ 
+ 	switch (c->processor_id & PRID_IMP_MASK) {
+@@ -1731,7 +1734,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ 		case PRID_REV_LOONGSON2K_R1_1:
+ 		case PRID_REV_LOONGSON2K_R1_2:
+ 		case PRID_REV_LOONGSON2K_R1_3:
+-			c->cputype = CPU_LOONGSON64;
+ 			__cpu_name[cpu] = "Loongson-2K";
+ 			set_elf_platform(cpu, "gs264e");
+ 			set_isa(c, MIPS_CPU_ISA_M64R2);
+@@ -1744,14 +1746,12 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ 		switch (c->processor_id & PRID_REV_MASK) {
+ 		case PRID_REV_LOONGSON3A_R2_0:
+ 		case PRID_REV_LOONGSON3A_R2_1:
+-			c->cputype = CPU_LOONGSON64;
+ 			__cpu_name[cpu] = "ICT Loongson-3";
+ 			set_elf_platform(cpu, "loongson3a");
+ 			set_isa(c, MIPS_CPU_ISA_M64R2);
+ 			break;
+ 		case PRID_REV_LOONGSON3A_R3_0:
+ 		case PRID_REV_LOONGSON3A_R3_1:
+-			c->cputype = CPU_LOONGSON64;
+ 			__cpu_name[cpu] = "ICT Loongson-3";
+ 			set_elf_platform(cpu, "loongson3a");
+ 			set_isa(c, MIPS_CPU_ISA_M64R2);
+@@ -1771,7 +1771,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ 		c->ases &= ~MIPS_ASE_VZ; /* VZ of Loongson-3A2000/3000 is incomplete */
+ 		break;
+ 	case PRID_IMP_LOONGSON_64G:
+-		c->cputype = CPU_LOONGSON64;
+ 		__cpu_name[cpu] = "ICT Loongson-3";
+ 		set_elf_platform(cpu, "loongson3a");
+ 		set_isa(c, MIPS_CPU_ISA_M64R2);
+@@ -1781,8 +1780,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ 		panic("Unknown Loongson Processor ID!");
+ 		break;
+ 	}
+-
+-	decode_configs(c);
+ }
+ #else
+ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu) { }
+diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
+index 8964798b8274e..ade591927cbff 100644
+--- a/arch/parisc/include/asm/pgtable.h
++++ b/arch/parisc/include/asm/pgtable.h
+@@ -330,8 +330,8 @@ static inline void pmd_clear(pmd_t *pmd) {
+ 
+ 
+ #if CONFIG_PGTABLE_LEVELS == 3
+-#define pud_page_vaddr(pud) ((unsigned long) __va(pud_address(pud)))
+-#define pud_page(pud)	virt_to_page((void *)pud_page_vaddr(pud))
++#define pud_pgtable(pud) ((pmd_t *) __va(pud_address(pud)))
++#define pud_page(pud)	virt_to_page((void *)pud_pgtable(pud))
+ 
+ /* For 64 bit we have three level tables */
+ 
+diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
+index 52abca88b5b2b..e03fb91544206 100644
+--- a/arch/powerpc/Kconfig.debug
++++ b/arch/powerpc/Kconfig.debug
+@@ -234,7 +234,7 @@ config PPC_EARLY_DEBUG_40x
+ 
+ config PPC_EARLY_DEBUG_CPM
+ 	bool "Early serial debugging for Freescale CPM-based serial ports"
+-	depends on SERIAL_CPM
++	depends on SERIAL_CPM=y
+ 	help
+ 	  Select this to enable early debugging for Freescale chips
+ 	  using a CPM-based serial port.  This assumes that the bootwrapper
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index a3f66ade09b32..912e64ab5f249 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -429,3 +429,11 @@ checkbin:
+ 		echo -n '*** Please use a different binutils version.' ; \
+ 		false ; \
+ 	fi
++	@if test "x${CONFIG_FTRACE_MCOUNT_USE_RECORDMCOUNT}" = "xy" -a \
++		"x${CONFIG_LD_IS_BFD}" = "xy" -a \
++		"${CONFIG_LD_VERSION}" = "23700" ; then \
++		echo -n '*** binutils 2.37 drops unused section symbols, which recordmcount ' ; \
++		echo 'is unable to handle.' ; \
++		echo '*** Please use a different binutils version.' ; \
++		false ; \
++	fi
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index 71e2c524f1eea..2b4af824bdc55 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -1030,8 +1030,15 @@ extern struct page *p4d_page(p4d_t p4d);
+ /* Pointers in the page table tree are physical addresses */
+ #define __pgtable_ptr_val(ptr)	__pa(ptr)
+ 
+-#define pud_page_vaddr(pud)	__va(pud_val(pud) & ~PUD_MASKED_BITS)
+-#define p4d_page_vaddr(p4d)	__va(p4d_val(p4d) & ~P4D_MASKED_BITS)
++static inline pud_t *p4d_pgtable(p4d_t p4d)
++{
++	return (pud_t *)__va(p4d_val(p4d) & ~P4D_MASKED_BITS);
++}
++
++static inline pmd_t *pud_pgtable(pud_t pud)
++{
++	return (pmd_t *)__va(pud_val(pud) & ~PUD_MASKED_BITS);
++}
+ 
+ #define pte_ERROR(e) \
+ 	pr_err("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
+diff --git a/arch/powerpc/include/asm/nohash/64/pgtable-4k.h b/arch/powerpc/include/asm/nohash/64/pgtable-4k.h
+index fe2f4c9acd9ed..10f5cf444d72a 100644
+--- a/arch/powerpc/include/asm/nohash/64/pgtable-4k.h
++++ b/arch/powerpc/include/asm/nohash/64/pgtable-4k.h
+@@ -56,10 +56,14 @@
+ #define p4d_none(p4d)		(!p4d_val(p4d))
+ #define p4d_bad(p4d)		(p4d_val(p4d) == 0)
+ #define p4d_present(p4d)	(p4d_val(p4d) != 0)
+-#define p4d_page_vaddr(p4d)	(p4d_val(p4d) & ~P4D_MASKED_BITS)
+ 
+ #ifndef __ASSEMBLY__
+ 
++static inline pud_t *p4d_pgtable(p4d_t p4d)
++{
++	return (pud_t *) (p4d_val(p4d) & ~P4D_MASKED_BITS);
++}
++
+ static inline void p4d_clear(p4d_t *p4dp)
+ {
+ 	*p4dp = __p4d(0);
+diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
+index 1eacff0fff029..a4d475c0fc2c0 100644
+--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
+@@ -164,7 +164,11 @@ static inline void pud_clear(pud_t *pudp)
+ #define	pud_bad(pud)		(!is_kernel_addr(pud_val(pud)) \
+ 				 || (pud_val(pud) & PUD_BAD_BITS))
+ #define pud_present(pud)	(pud_val(pud) != 0)
+-#define pud_page_vaddr(pud)	(pud_val(pud) & ~PUD_MASKED_BITS)
++
++static inline pmd_t *pud_pgtable(pud_t pud)
++{
++	return (pmd_t *)(pud_val(pud) & ~PUD_MASKED_BITS);
++}
+ 
+ extern struct page *pud_page(pud_t pud);
+ 
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index 5f0a2fa611fa2..3728c17de87e9 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -783,9 +783,9 @@ static void free_pud_table(pud_t *pud_start, p4d_t *p4d)
+ }
+ 
+ static void remove_pte_table(pte_t *pte_start, unsigned long addr,
+-			     unsigned long end)
++			     unsigned long end, bool direct)
+ {
+-	unsigned long next;
++	unsigned long next, pages = 0;
+ 	pte_t *pte;
+ 
+ 	pte = pte_start + pte_index(addr);
+@@ -807,13 +807,16 @@ static void remove_pte_table(pte_t *pte_start, unsigned long addr,
+ 		}
+ 
+ 		pte_clear(&init_mm, addr, pte);
++		pages++;
+ 	}
++	if (direct)
++		update_page_count(mmu_virtual_psize, -pages);
+ }
+ 
+ static void __meminit remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
+-			     unsigned long end)
++				       unsigned long end, bool direct)
+ {
+-	unsigned long next;
++	unsigned long next, pages = 0;
+ 	pte_t *pte_base;
+ 	pmd_t *pmd;
+ 
+@@ -831,19 +834,22 @@ static void __meminit remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
+ 				continue;
+ 			}
+ 			pte_clear(&init_mm, addr, (pte_t *)pmd);
++			pages++;
+ 			continue;
+ 		}
+ 
+ 		pte_base = (pte_t *)pmd_page_vaddr(*pmd);
+-		remove_pte_table(pte_base, addr, next);
++		remove_pte_table(pte_base, addr, next, direct);
+ 		free_pte_table(pte_base, pmd);
+ 	}
++	if (direct)
++		update_page_count(MMU_PAGE_2M, -pages);
+ }
+ 
+ static void __meminit remove_pud_table(pud_t *pud_start, unsigned long addr,
+-			     unsigned long end)
++				       unsigned long end, bool direct)
+ {
+-	unsigned long next;
++	unsigned long next, pages = 0;
+ 	pmd_t *pmd_base;
+ 	pud_t *pud;
+ 
+@@ -861,16 +867,20 @@ static void __meminit remove_pud_table(pud_t *pud_start, unsigned long addr,
+ 				continue;
+ 			}
+ 			pte_clear(&init_mm, addr, (pte_t *)pud);
++			pages++;
+ 			continue;
+ 		}
+ 
+-		pmd_base = (pmd_t *)pud_page_vaddr(*pud);
+-		remove_pmd_table(pmd_base, addr, next);
++		pmd_base = pud_pgtable(*pud);
++		remove_pmd_table(pmd_base, addr, next, direct);
+ 		free_pmd_table(pmd_base, pud);
+ 	}
++	if (direct)
++		update_page_count(MMU_PAGE_1G, -pages);
+ }
+ 
+-static void __meminit remove_pagetable(unsigned long start, unsigned long end)
++static void __meminit remove_pagetable(unsigned long start, unsigned long end,
++				       bool direct)
+ {
+ 	unsigned long addr, next;
+ 	pud_t *pud_base;
+@@ -898,8 +908,8 @@ static void __meminit remove_pagetable(unsigned long start, unsigned long end)
+ 			continue;
+ 		}
+ 
+-		pud_base = (pud_t *)p4d_page_vaddr(*p4d);
+-		remove_pud_table(pud_base, addr, next);
++		pud_base = p4d_pgtable(*p4d);
++		remove_pud_table(pud_base, addr, next, direct);
+ 		free_pud_table(pud_base, p4d);
+ 	}
+ 
+@@ -922,7 +932,7 @@ int __meminit radix__create_section_mapping(unsigned long start,
+ 
+ int __meminit radix__remove_section_mapping(unsigned long start, unsigned long end)
+ {
+-	remove_pagetable(start, end);
++	remove_pagetable(start, end, true);
+ 	return 0;
+ }
+ #endif /* CONFIG_MEMORY_HOTPLUG */
+@@ -958,7 +968,7 @@ int __meminit radix__vmemmap_create_mapping(unsigned long start,
+ #ifdef CONFIG_MEMORY_HOTPLUG
+ void __meminit radix__vmemmap_remove_mapping(unsigned long start, unsigned long page_size)
+ {
+-	remove_pagetable(start, start + page_size);
++	remove_pagetable(start, start + page_size, false);
+ }
+ #endif
+ #endif
+@@ -1156,7 +1166,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ 	pmd_t *pmd;
+ 	int i;
+ 
+-	pmd = (pmd_t *)pud_page_vaddr(*pud);
++	pmd = pud_pgtable(*pud);
+ 	pud_clear(pud);
+ 
+ 	flush_tlb_kernel_range(addr, addr + PUD_SIZE);
+diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
+index 386be136026e8..b76cd49d521b9 100644
+--- a/arch/powerpc/mm/init_64.c
++++ b/arch/powerpc/mm/init_64.c
+@@ -188,7 +188,7 @@ static bool altmap_cross_boundary(struct vmem_altmap *altmap, unsigned long star
+ 	unsigned long nr_pfn = page_size / sizeof(struct page);
+ 	unsigned long start_pfn = page_to_pfn((struct page *)start);
+ 
+-	if ((start_pfn + nr_pfn) > altmap->end_pfn)
++	if ((start_pfn + nr_pfn - 1) > altmap->end_pfn)
+ 		return true;
+ 
+ 	if (start_pfn < altmap->base_pfn)
+diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
+index aefc2bfdf1049..175aabf101e87 100644
+--- a/arch/powerpc/mm/pgtable_64.c
++++ b/arch/powerpc/mm/pgtable_64.c
+@@ -106,7 +106,7 @@ struct page *p4d_page(p4d_t p4d)
+ 			VM_WARN_ON(!p4d_huge(p4d));
+ 		return pte_page(p4d_pte(p4d));
+ 	}
+-	return virt_to_page(p4d_page_vaddr(p4d));
++	return virt_to_page(p4d_pgtable(p4d));
+ }
+ #endif
+ 
+@@ -117,7 +117,7 @@ struct page *pud_page(pud_t pud)
+ 			VM_WARN_ON(!pud_huge(pud));
+ 		return pte_page(pud_pte(pud));
+ 	}
+-	return virt_to_page(pud_page_vaddr(pud));
++	return virt_to_page(pud_pgtable(pud));
+ }
+ 
+ /*
+diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c
+index 28aac933a4391..e3e52ff2cbf58 100644
+--- a/arch/powerpc/platforms/powernv/pci-sriov.c
++++ b/arch/powerpc/platforms/powernv/pci-sriov.c
+@@ -600,12 +600,12 @@ static void pnv_pci_sriov_disable(struct pci_dev *pdev)
+ 	struct pnv_iov_data   *iov;
+ 
+ 	iov = pnv_iov_get(pdev);
+-	num_vfs = iov->num_vfs;
+-	base_pe = iov->vf_pe_arr[0].pe_number;
+-
+ 	if (WARN_ON(!iov))
+ 		return;
+ 
++	num_vfs = iov->num_vfs;
++	base_pe = iov->vf_pe_arr[0].pe_number;
++
+ 	/* Release VF PEs */
+ 	pnv_ioda_release_vf_PE(pdev);
+ 
+diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
+index f3b0da64c6c8f..0e863f3f7187a 100644
+--- a/arch/riscv/include/asm/pgtable-64.h
++++ b/arch/riscv/include/asm/pgtable-64.h
+@@ -60,9 +60,9 @@ static inline void pud_clear(pud_t *pudp)
+ 	set_pud(pudp, __pud(0));
+ }
+ 
+-static inline unsigned long pud_page_vaddr(pud_t pud)
++static inline pmd_t *pud_pgtable(pud_t pud)
+ {
+-	return (unsigned long)pfn_to_virt(pud_val(pud) >> _PAGE_PFN_SHIFT);
++	return (pmd_t *)pfn_to_virt(pud_val(pud) >> _PAGE_PFN_SHIFT);
+ }
+ 
+ static inline struct page *pud_page(pud_t pud)
+diff --git a/arch/riscv/net/bpf_jit.h b/arch/riscv/net/bpf_jit.h
+index 75c1e99968675..ef336fe160044 100644
+--- a/arch/riscv/net/bpf_jit.h
++++ b/arch/riscv/net/bpf_jit.h
+@@ -69,6 +69,7 @@ struct rv_jit_context {
+ 	struct bpf_prog *prog;
+ 	u16 *insns;		/* RV insns */
+ 	int ninsns;
++	int prologue_len;
+ 	int epilogue_offset;
+ 	int *offset;		/* BPF to RV */
+ 	unsigned long flags;
+@@ -214,8 +215,8 @@ static inline int rv_offset(int insn, int off, struct rv_jit_context *ctx)
+ 	int from, to;
+ 
+ 	off++; /* BPF branch is from PC+1, RV is from PC */
+-	from = (insn > 0) ? ctx->offset[insn - 1] : 0;
+-	to = (insn + off > 0) ? ctx->offset[insn + off - 1] : 0;
++	from = (insn > 0) ? ctx->offset[insn - 1] : ctx->prologue_len;
++	to = (insn + off > 0) ? ctx->offset[insn + off - 1] : ctx->prologue_len;
+ 	return ninsns_rvoff(to - from);
+ }
+ 
+diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
+index c113ae818b14e..053dc83e323b6 100644
+--- a/arch/riscv/net/bpf_jit_comp64.c
++++ b/arch/riscv/net/bpf_jit_comp64.c
+@@ -1144,16 +1144,3 @@ void bpf_jit_build_epilogue(struct rv_jit_context *ctx)
+ {
+ 	__build_epilogue(false, ctx);
+ }
+-
+-void *bpf_jit_alloc_exec(unsigned long size)
+-{
+-	return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START,
+-				    BPF_JIT_REGION_END, GFP_KERNEL,
+-				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+-				    __builtin_return_address(0));
+-}
+-
+-void bpf_jit_free_exec(void *addr)
+-{
+-	return vfree(addr);
+-}
+diff --git a/arch/riscv/net/bpf_jit_core.c b/arch/riscv/net/bpf_jit_core.c
+index cbf7d2414886e..ef17bc8055d4c 100644
+--- a/arch/riscv/net/bpf_jit_core.c
++++ b/arch/riscv/net/bpf_jit_core.c
+@@ -83,6 +83,12 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 		prog = orig_prog;
+ 		goto out_offset;
+ 	}
++
++	if (build_body(ctx, extra_pass, NULL)) {
++		prog = orig_prog;
++		goto out_offset;
++	}
++
+ 	for (i = 0; i < prog->len; i++) {
+ 		prev_ninsns += 32;
+ 		ctx->offset[i] = prev_ninsns;
+@@ -91,11 +97,15 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 	for (i = 0; i < NR_JIT_ITERATIONS; i++) {
+ 		pass++;
+ 		ctx->ninsns = 0;
++
++		bpf_jit_build_prologue(ctx);
++		ctx->prologue_len = ctx->ninsns;
++
+ 		if (build_body(ctx, extra_pass, ctx->offset)) {
+ 			prog = orig_prog;
+ 			goto out_offset;
+ 		}
+-		bpf_jit_build_prologue(ctx);
++
+ 		ctx->epilogue_offset = ctx->ninsns;
+ 		bpf_jit_build_epilogue(ctx);
+ 
+@@ -153,6 +163,10 @@ skip_init_ctx:
+ 	bpf_flush_icache(jit_data->header, ctx->insns + ctx->ninsns);
+ 
+ 	if (!prog->is_func || extra_pass) {
++		bpf_jit_binary_lock_ro(jit_data->header);
++		for (i = 0; i < prog->len; i++)
++			ctx->offset[i] = ninsns_rvoff(ctx->offset[i]);
++		bpf_prog_fill_jited_linfo(prog, ctx->offset);
+ out_offset:
+ 		kfree(ctx->offset);
+ 		kfree(jit_data);
+@@ -165,3 +179,16 @@ out:
+ 					   tmp : orig_prog);
+ 	return prog;
+ }
++
++void *bpf_jit_alloc_exec(unsigned long size)
++{
++	return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START,
++				    BPF_JIT_REGION_END, GFP_KERNEL,
++				    PAGE_KERNEL, 0, NUMA_NO_NODE,
++				    __builtin_return_address(0));
++}
++
++void bpf_jit_free_exec(void *addr)
++{
++	return vfree(addr);
++}
+diff --git a/arch/s390/Makefile b/arch/s390/Makefile
+index a8cb00f30a7c3..39ffcd4389f10 100644
+--- a/arch/s390/Makefile
++++ b/arch/s390/Makefile
+@@ -29,6 +29,7 @@ KBUILD_CFLAGS_DECOMPRESSOR += -fno-delete-null-pointer-checks -msoft-float
+ KBUILD_CFLAGS_DECOMPRESSOR += -fno-asynchronous-unwind-tables
+ KBUILD_CFLAGS_DECOMPRESSOR += -ffreestanding
+ KBUILD_CFLAGS_DECOMPRESSOR += -fno-stack-protector
++KBUILD_CFLAGS_DECOMPRESSOR += -fPIE
+ KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-disable-warning, address-of-packed-member)
+ KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),-g)
+ KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO_DWARF4), $(call cc-option, -gdwarf-4,))
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 7ffc73ba220fb..7a326d03087ab 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -2005,6 +2005,10 @@ static unsigned long kvm_s390_next_dirty_cmma(struct kvm_memslots *slots,
+ 		ms = slots->memslots + slotidx;
+ 		ofs = 0;
+ 	}
++
++	if (cur_gfn < ms->base_gfn)
++		ofs = 0;
++
+ 	ofs = find_next_bit(kvm_second_dirty_bitmap(ms), ms->npages, ofs);
+ 	while ((slotidx > 0) && (ofs >= ms->npages)) {
+ 		slotidx--;
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index ff58decfef5e8..192eacc8fbb7a 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -168,7 +168,8 @@ static int setup_apcb00(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
+ 			    sizeof(struct kvm_s390_apcb0)))
+ 		return -EFAULT;
+ 
+-	bitmap_and(apcb_s, apcb_s, apcb_h, sizeof(struct kvm_s390_apcb0));
++	bitmap_and(apcb_s, apcb_s, apcb_h,
++		   BITS_PER_BYTE * sizeof(struct kvm_s390_apcb0));
+ 
+ 	return 0;
+ }
+@@ -190,7 +191,8 @@ static int setup_apcb11(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
+ 			    sizeof(struct kvm_s390_apcb1)))
+ 		return -EFAULT;
+ 
+-	bitmap_and(apcb_s, apcb_s, apcb_h, sizeof(struct kvm_s390_apcb1));
++	bitmap_and(apcb_s, apcb_s, apcb_h,
++		   BITS_PER_BYTE * sizeof(struct kvm_s390_apcb1));
+ 
+ 	return 0;
+ }
+diff --git a/arch/sh/drivers/dma/dma-sh.c b/arch/sh/drivers/dma/dma-sh.c
+index 96c626c2cd0a4..306fba1564e5e 100644
+--- a/arch/sh/drivers/dma/dma-sh.c
++++ b/arch/sh/drivers/dma/dma-sh.c
+@@ -18,6 +18,18 @@
+ #include <cpu/dma-register.h>
+ #include <cpu/dma.h>
+ 
++/*
++ * Some of the SoCs feature two DMAC modules. In such a case, the channels are
++ * distributed equally among them.
++ */
++#ifdef	SH_DMAC_BASE1
++#define	SH_DMAC_NR_MD_CH	(CONFIG_NR_ONCHIP_DMA_CHANNELS / 2)
++#else
++#define	SH_DMAC_NR_MD_CH	CONFIG_NR_ONCHIP_DMA_CHANNELS
++#endif
++
++#define	SH_DMAC_CH_SZ		0x10
++
+ /*
+  * Define the default configuration for dual address memory-memory transfer.
+  * The 0x400 value represents auto-request, external->external.
+@@ -29,7 +41,7 @@ static unsigned long dma_find_base(unsigned int chan)
+ 	unsigned long base = SH_DMAC_BASE0;
+ 
+ #ifdef SH_DMAC_BASE1
+-	if (chan >= 6)
++	if (chan >= SH_DMAC_NR_MD_CH)
+ 		base = SH_DMAC_BASE1;
+ #endif
+ 
+@@ -40,13 +52,13 @@ static unsigned long dma_base_addr(unsigned int chan)
+ {
+ 	unsigned long base = dma_find_base(chan);
+ 
+-	/* Normalize offset calculation */
+-	if (chan >= 9)
+-		chan -= 6;
+-	if (chan >= 4)
+-		base += 0x10;
++	chan = (chan % SH_DMAC_NR_MD_CH) * SH_DMAC_CH_SZ;
++
++	/* DMAOR is placed inside the channel register space. Step over it. */
++	if (chan >= DMAOR)
++		base += SH_DMAC_CH_SZ;
+ 
+-	return base + (chan * 0x10);
++	return base + chan;
+ }
+ 
+ #ifdef CONFIG_SH_DMA_IRQ_MULTI
+@@ -250,12 +262,11 @@ static int sh_dmac_get_dma_residue(struct dma_channel *chan)
+ #define NR_DMAOR	1
+ #endif
+ 
+-/*
+- * DMAOR bases are broken out amongst channel groups. DMAOR0 manages
+- * channels 0 - 5, DMAOR1 6 - 11 (optional).
+- */
+-#define dmaor_read_reg(n)		__raw_readw(dma_find_base((n)*6))
+-#define dmaor_write_reg(n, data)	__raw_writew(data, dma_find_base(n)*6)
++#define dmaor_read_reg(n)		__raw_readw(dma_find_base((n) * \
++						    SH_DMAC_NR_MD_CH) + DMAOR)
++#define dmaor_write_reg(n, data)	__raw_writew(data, \
++						     dma_find_base((n) * \
++						     SH_DMAC_NR_MD_CH) + DMAOR)
+ 
+ static inline int dmaor_reset(int no)
+ {
+diff --git a/arch/sh/include/asm/pgtable-3level.h b/arch/sh/include/asm/pgtable-3level.h
+index 82d74472dfcda..cdced80a7ffa3 100644
+--- a/arch/sh/include/asm/pgtable-3level.h
++++ b/arch/sh/include/asm/pgtable-3level.h
+@@ -32,9 +32,9 @@ typedef struct { unsigned long long pmd; } pmd_t;
+ #define pmd_val(x)	((x).pmd)
+ #define __pmd(x)	((pmd_t) { (x) } )
+ 
+-static inline unsigned long pud_page_vaddr(pud_t pud)
++static inline pmd_t *pud_pgtable(pud_t pud)
+ {
+-	return pud_val(pud);
++	return (pmd_t *)(unsigned long)pud_val(pud);
+ }
+ 
+ /* only used by the stubbed out hugetlb gup code, should never be called */
+diff --git a/arch/sh/kernel/cpu/sh2/probe.c b/arch/sh/kernel/cpu/sh2/probe.c
+index d342ea08843f6..70a07f4f2142f 100644
+--- a/arch/sh/kernel/cpu/sh2/probe.c
++++ b/arch/sh/kernel/cpu/sh2/probe.c
+@@ -21,7 +21,7 @@ static int __init scan_cache(unsigned long node, const char *uname,
+ 	if (!of_flat_dt_is_compatible(node, "jcore,cache"))
+ 		return 0;
+ 
+-	j2_ccr_base = (u32 __iomem *)of_flat_dt_translate_address(node);
++	j2_ccr_base = ioremap(of_flat_dt_translate_address(node), 4);
+ 
+ 	return 1;
+ }
+diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
+index 632cdb959542c..7d1d10a8fd937 100644
+--- a/arch/sparc/include/asm/pgtable_32.h
++++ b/arch/sparc/include/asm/pgtable_32.h
+@@ -152,13 +152,13 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+ 	return (unsigned long)__nocache_va(v << 4);
+ }
+ 
+-static inline unsigned long pud_page_vaddr(pud_t pud)
++static inline pmd_t *pud_pgtable(pud_t pud)
+ {
+ 	if (srmmu_device_memory(pud_val(pud))) {
+-		return ~0;
++		return (pmd_t *)~0;
+ 	} else {
+ 		unsigned long v = pud_val(pud) & SRMMU_PTD_PMASK;
+-		return (unsigned long)__nocache_va(v << 4);
++		return (pmd_t *)__nocache_va(v << 4);
+ 	}
+ }
+ 
+diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
+index 7ef6affa105e4..5a1efd600f770 100644
+--- a/arch/sparc/include/asm/pgtable_64.h
++++ b/arch/sparc/include/asm/pgtable_64.h
+@@ -845,23 +845,23 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+ 	return ((unsigned long) __va(pfn << PAGE_SHIFT));
+ }
+ 
+-static inline unsigned long pud_page_vaddr(pud_t pud)
++static inline pmd_t *pud_pgtable(pud_t pud)
+ {
+ 	pte_t pte = __pte(pud_val(pud));
+ 	unsigned long pfn;
+ 
+ 	pfn = pte_pfn(pte);
+ 
+-	return ((unsigned long) __va(pfn << PAGE_SHIFT));
++	return ((pmd_t *) __va(pfn << PAGE_SHIFT));
+ }
+ 
+ #define pmd_page(pmd) 			virt_to_page((void *)pmd_page_vaddr(pmd))
+-#define pud_page(pud) 			virt_to_page((void *)pud_page_vaddr(pud))
++#define pud_page(pud)			virt_to_page((void *)pud_pgtable(pud))
+ #define pmd_clear(pmdp)			(pmd_val(*(pmdp)) = 0UL)
+ #define pud_present(pud)		(pud_val(pud) != 0U)
+ #define pud_clear(pudp)			(pud_val(*(pudp)) = 0UL)
+-#define p4d_page_vaddr(p4d)		\
+-	((unsigned long) __va(p4d_val(p4d)))
++#define p4d_pgtable(p4d)		\
++	((pud_t *) __va(p4d_val(p4d)))
+ #define p4d_present(p4d)		(p4d_val(p4d) != 0U)
+ #define p4d_clear(p4dp)			(p4d_val(*(p4dp)) = 0UL)
+ 
+diff --git a/arch/um/Makefile b/arch/um/Makefile
+index 7756151413393..56e5320da7624 100644
+--- a/arch/um/Makefile
++++ b/arch/um/Makefile
+@@ -147,7 +147,7 @@ export LDFLAGS_vmlinux := $(LDFLAGS_EXECSTACK)
+ # When cleaning we don't include .config, so we don't include
+ # TT or skas makefiles and don't clean skas_ptregs.h.
+ CLEAN_FILES += linux x.i gmon.out
+-MRPROPER_FILES += arch/$(SUBARCH)/include/generated
++MRPROPER_FILES += $(HOST_DIR)/include/generated
+ 
+ archclean:
+ 	@find . \( -name '*.bb' -o -name '*.bbg' -o -name '*.da' \
+diff --git a/arch/um/include/asm/pgtable-3level.h b/arch/um/include/asm/pgtable-3level.h
+index 7e6a4180db9d3..091bff319ccdf 100644
+--- a/arch/um/include/asm/pgtable-3level.h
++++ b/arch/um/include/asm/pgtable-3level.h
+@@ -84,7 +84,7 @@ static inline void pud_clear (pud_t *pud)
+ }
+ 
+ #define pud_page(pud) phys_to_page(pud_val(pud) & PAGE_MASK)
+-#define pud_page_vaddr(pud) ((unsigned long) __va(pud_val(pud) & PAGE_MASK))
++#define pud_pgtable(pud) ((pmd_t *) __va(pud_val(pud) & PAGE_MASK))
+ 
+ static inline unsigned long pte_pfn(pte_t pte)
+ {
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index 52eba415928a3..afc955340f81c 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -364,7 +364,7 @@ static int amd_pmu_hw_config(struct perf_event *event)
+ 
+ 	/* pass precise event sampling to ibs: */
+ 	if (event->attr.precise_ip && get_ibs_caps())
+-		return -ENOENT;
++		return forward_event_to_ibs(event);
+ 
+ 	if (has_branch_stack(event))
+ 		return -EOPNOTSUPP;
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index 8a85658a24cc1..354d52e17ef55 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -202,7 +202,7 @@ static struct perf_ibs *get_ibs_pmu(int type)
+ }
+ 
+ /*
+- * Use IBS for precise event sampling:
++ * core pmu config -> IBS config
+  *
+  *  perf record -a -e cpu-cycles:p ...    # use ibs op counting cycle count
+  *  perf record -a -e r076:p ...          # same as -e cpu-cycles:p
+@@ -211,25 +211,9 @@ static struct perf_ibs *get_ibs_pmu(int type)
+  * IbsOpCntCtl (bit 19) of IBS Execution Control Register (IbsOpCtl,
+  * MSRC001_1033) is used to select either cycle or micro-ops counting
+  * mode.
+- *
+- * The rip of IBS samples has skid 0. Thus, IBS supports precise
+- * levels 1 and 2 and the PERF_EFLAGS_EXACT is set. In rare cases the
+- * rip is invalid when IBS was not able to record the rip correctly.
+- * We clear PERF_EFLAGS_EXACT and take the rip from pt_regs then.
+- *
+  */
+-static int perf_ibs_precise_event(struct perf_event *event, u64 *config)
++static int core_pmu_ibs_config(struct perf_event *event, u64 *config)
+ {
+-	switch (event->attr.precise_ip) {
+-	case 0:
+-		return -ENOENT;
+-	case 1:
+-	case 2:
+-		break;
+-	default:
+-		return -EOPNOTSUPP;
+-	}
+-
+ 	switch (event->attr.type) {
+ 	case PERF_TYPE_HARDWARE:
+ 		switch (event->attr.config) {
+@@ -255,22 +239,37 @@ static int perf_ibs_precise_event(struct perf_event *event, u64 *config)
+ 	return -EOPNOTSUPP;
+ }
+ 
++/*
++ * The rip of IBS samples has skid 0. Thus, IBS supports precise
++ * levels 1 and 2 and the PERF_EFLAGS_EXACT is set. In rare cases the
++ * rip is invalid when IBS was not able to record the rip correctly.
++ * We clear PERF_EFLAGS_EXACT and take the rip from pt_regs then.
++ */
++int forward_event_to_ibs(struct perf_event *event)
++{
++	u64 config = 0;
++
++	if (!event->attr.precise_ip || event->attr.precise_ip > 2)
++		return -EOPNOTSUPP;
++
++	if (!core_pmu_ibs_config(event, &config)) {
++		event->attr.type = perf_ibs_op.pmu.type;
++		event->attr.config = config;
++	}
++	return -ENOENT;
++}
++
+ static int perf_ibs_init(struct perf_event *event)
+ {
+ 	struct hw_perf_event *hwc = &event->hw;
+ 	struct perf_ibs *perf_ibs;
+ 	u64 max_cnt, config;
+-	int ret;
+ 
+ 	perf_ibs = get_ibs_pmu(event->attr.type);
+-	if (perf_ibs) {
+-		config = event->attr.config;
+-	} else {
+-		perf_ibs = &perf_ibs_op;
+-		ret = perf_ibs_precise_event(event, &config);
+-		if (ret)
+-			return ret;
+-	}
++	if (!perf_ibs)
++		return -ENOENT;
++
++	config = event->attr.config;
+ 
+ 	if (event->pmu != &perf_ibs->pmu)
+ 		return -ENOENT;
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index b9a7fd0a27e2d..a4e4bbb7795d3 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -412,8 +412,10 @@ struct pebs_xmm {
+ 
+ #ifdef CONFIG_X86_LOCAL_APIC
+ extern u32 get_ibs_caps(void);
++extern int forward_event_to_ibs(struct perf_event *event);
+ #else
+ static inline u32 get_ibs_caps(void) { return 0; }
++static inline int forward_event_to_ibs(struct perf_event *event) { return -ENOENT; }
+ #endif
+ 
+ #ifdef CONFIG_PERF_EVENTS
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index 87de9f2d71cf2..9bacde3ff514a 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -865,9 +865,9 @@ static inline int pud_present(pud_t pud)
+ 	return pud_flags(pud) & _PAGE_PRESENT;
+ }
+ 
+-static inline unsigned long pud_page_vaddr(pud_t pud)
++static inline pmd_t *pud_pgtable(pud_t pud)
+ {
+-	return (unsigned long)__va(pud_val(pud) & pud_pfn_mask(pud));
++	return (pmd_t *)__va(pud_val(pud) & pud_pfn_mask(pud));
+ }
+ 
+ /*
+@@ -906,9 +906,9 @@ static inline int p4d_present(p4d_t p4d)
+ 	return p4d_flags(p4d) & _PAGE_PRESENT;
+ }
+ 
+-static inline unsigned long p4d_page_vaddr(p4d_t p4d)
++static inline pud_t *p4d_pgtable(p4d_t p4d)
+ {
+-	return (unsigned long)__va(p4d_val(p4d) & p4d_pfn_mask(p4d));
++	return (pud_t *)__va(p4d_val(p4d) & p4d_pfn_mask(p4d));
+ }
+ 
+ /*
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 56d0399a0cd16..dd520b44e89cc 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -235,8 +235,8 @@ static inline void native_pgd_clear(pgd_t *pgd)
+ 
+ #define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val((pte)) })
+ #define __pmd_to_swp_entry(pmd)		((swp_entry_t) { pmd_val((pmd)) })
+-#define __swp_entry_to_pte(x)		((pte_t) { .pte = (x).val })
+-#define __swp_entry_to_pmd(x)		((pmd_t) { .pmd = (x).val })
++#define __swp_entry_to_pte(x)		(__pte((x).val))
++#define __swp_entry_to_pmd(x)		(__pmd((x).val))
+ 
+ extern int kern_addr_valid(unsigned long addr);
+ extern void cleanup_highmap(void);
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index 1a943743cfe4b..1e73b6fae3b4c 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -715,11 +715,15 @@ unlock:
+ static void show_rdt_tasks(struct rdtgroup *r, struct seq_file *s)
+ {
+ 	struct task_struct *p, *t;
++	pid_t pid;
+ 
+ 	rcu_read_lock();
+ 	for_each_process_thread(p, t) {
+-		if (is_closid_match(t, r) || is_rmid_match(t, r))
+-			seq_printf(s, "%d\n", t->pid);
++		if (is_closid_match(t, r) || is_rmid_match(t, r)) {
++			pid = task_pid_vnr(t);
++			if (pid)
++				seq_printf(s, "%d\n", pid);
++		}
+ 	}
+ 	rcu_read_unlock();
+ }
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index bda89ecc7799f..d2403da17842b 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -100,6 +100,17 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
+ DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
+ EXPORT_PER_CPU_SYMBOL(cpu_info);
+ 
++struct mwait_cpu_dead {
++	unsigned int	control;
++	unsigned int	status;
++};
++
++/*
++ * Cache line aligned data for mwait_play_dead(). Separate on purpose so
++ * that it's unlikely to be touched by other CPUs.
++ */
++static DEFINE_PER_CPU_ALIGNED(struct mwait_cpu_dead, mwait_cpu_dead);
++
+ /* Logical package management. We might want to allocate that dynamically */
+ unsigned int __max_logical_packages __read_mostly;
+ EXPORT_SYMBOL(__max_logical_packages);
+@@ -1674,10 +1685,10 @@ EXPORT_SYMBOL_GPL(cond_wakeup_cpu0);
+  */
+ static inline void mwait_play_dead(void)
+ {
++	struct mwait_cpu_dead *md = this_cpu_ptr(&mwait_cpu_dead);
+ 	unsigned int eax, ebx, ecx, edx;
+ 	unsigned int highest_cstate = 0;
+ 	unsigned int highest_subcstate = 0;
+-	void *mwait_ptr;
+ 	int i;
+ 
+ 	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+@@ -1712,13 +1723,6 @@ static inline void mwait_play_dead(void)
+ 			(highest_subcstate - 1);
+ 	}
+ 
+-	/*
+-	 * This should be a memory location in a cache line which is
+-	 * unlikely to be touched by other processors.  The actual
+-	 * content is immaterial as it is not actually modified in any way.
+-	 */
+-	mwait_ptr = &current_thread_info()->flags;
+-
+ 	wbinvd();
+ 
+ 	while (1) {
+@@ -1730,9 +1734,9 @@ static inline void mwait_play_dead(void)
+ 		 * case where we return around the loop.
+ 		 */
+ 		mb();
+-		clflush(mwait_ptr);
++		clflush(md);
+ 		mb();
+-		__monitor(mwait_ptr, 0, 0);
++		__monitor(md, 0, 0);
+ 		mb();
+ 		__mwait(eax, 0);
+ 
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 20951ab522a1d..acf4e50c5988b 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -193,8 +193,8 @@ static void sync_global_pgds_l4(unsigned long start, unsigned long end)
+ 			spin_lock(pgt_lock);
+ 
+ 			if (!p4d_none(*p4d_ref) && !p4d_none(*p4d))
+-				BUG_ON(p4d_page_vaddr(*p4d)
+-				       != p4d_page_vaddr(*p4d_ref));
++				BUG_ON(p4d_pgtable(*p4d)
++				       != p4d_pgtable(*p4d_ref));
+ 
+ 			if (p4d_none(*p4d))
+ 				set_p4d(p4d, *p4d_ref);
+diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
+index 40baa90e74f4c..217dda690ed82 100644
+--- a/arch/x86/mm/pat/set_memory.c
++++ b/arch/x86/mm/pat/set_memory.c
+@@ -1126,7 +1126,7 @@ static void __unmap_pmd_range(pud_t *pud, pmd_t *pmd,
+ 			      unsigned long start, unsigned long end)
+ {
+ 	if (unmap_pte_range(pmd, start, end))
+-		if (try_to_free_pmd_page((pmd_t *)pud_page_vaddr(*pud)))
++		if (try_to_free_pmd_page(pud_pgtable(*pud)))
+ 			pud_clear(pud);
+ }
+ 
+@@ -1170,7 +1170,7 @@ static void unmap_pmd_range(pud_t *pud, unsigned long start, unsigned long end)
+ 	 * Try again to free the PMD page if haven't succeeded above.
+ 	 */
+ 	if (!pud_none(*pud))
+-		if (try_to_free_pmd_page((pmd_t *)pud_page_vaddr(*pud)))
++		if (try_to_free_pmd_page(pud_pgtable(*pud)))
+ 			pud_clear(pud);
+ }
+ 
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index f6a9e2e366425..204b25ee26f0b 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -805,7 +805,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ 	pte_t *pte;
+ 	int i;
+ 
+-	pmd = (pmd_t *)pud_page_vaddr(*pud);
++	pmd = pud_pgtable(*pud);
+ 	pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
+ 	if (!pmd_sv)
+ 		return 0;
+diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
+index 08d70c868c130..1270de83435eb 100644
+--- a/arch/xtensa/platforms/iss/network.c
++++ b/arch/xtensa/platforms/iss/network.c
+@@ -231,7 +231,7 @@ static int tuntap_probe(struct iss_net_private *lp, int index, char *init)
+ 
+ 	init += sizeof(TRANSPORT_TUNTAP_NAME) - 1;
+ 	if (*init == ',') {
+-		rem = split_if_spec(init + 1, &mac_str, &dev_name);
++		rem = split_if_spec(init + 1, &mac_str, &dev_name, NULL);
+ 		if (rem != NULL) {
+ 			pr_err("%s: extra garbage on specification : '%s'\n",
+ 			       dev->name, rem);
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 105ad23dff063..7ba7c4e4e4c93 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2426,6 +2426,7 @@ static u64 adjust_inuse_and_calc_cost(struct ioc_gq *iocg, u64 vtime,
+ 	u32 hwi, adj_step;
+ 	s64 margin;
+ 	u64 cost, new_inuse;
++	unsigned long flags;
+ 
+ 	current_hweight(iocg, NULL, &hwi);
+ 	old_hwi = hwi;
+@@ -2444,11 +2445,11 @@ static u64 adjust_inuse_and_calc_cost(struct ioc_gq *iocg, u64 vtime,
+ 	    iocg->inuse == iocg->active)
+ 		return cost;
+ 
+-	spin_lock_irq(&ioc->lock);
++	spin_lock_irqsave(&ioc->lock, flags);
+ 
+ 	/* we own inuse only when @iocg is in the normal active state */
+ 	if (iocg->abs_vdebt || list_empty(&iocg->active_list)) {
+-		spin_unlock_irq(&ioc->lock);
++		spin_unlock_irqrestore(&ioc->lock, flags);
+ 		return cost;
+ 	}
+ 
+@@ -2469,7 +2470,7 @@ static u64 adjust_inuse_and_calc_cost(struct ioc_gq *iocg, u64 vtime,
+ 	} while (time_after64(vtime + cost, now->vnow) &&
+ 		 iocg->inuse != iocg->active);
+ 
+-	spin_unlock_irq(&ioc->lock);
++	spin_unlock_irqrestore(&ioc->lock, flags);
+ 
+ 	TRACE_IOCG_PATH(inuse_adjust, iocg, now,
+ 			old_inuse, iocg->inuse, old_hwi, hwi);
+diff --git a/block/partitions/amiga.c b/block/partitions/amiga.c
+index 9526491d9aed1..a99ec7f1a1749 100644
+--- a/block/partitions/amiga.c
++++ b/block/partitions/amiga.c
+@@ -11,10 +11,18 @@
+ #define pr_fmt(fmt) fmt
+ 
+ #include <linux/types.h>
++#include <linux/mm_types.h>
++#include <linux/overflow.h>
+ #include <linux/affs_hardblocks.h>
+ 
+ #include "check.h"
+ 
++/* magic offsets in partition DosEnvVec */
++#define NR_HD	3
++#define NR_SECT	5
++#define LO_CYL	9
++#define HI_CYL	10
++
+ static __inline__ u32
+ checksum_block(__be32 *m, int size)
+ {
+@@ -31,8 +39,12 @@ int amiga_partition(struct parsed_partitions *state)
+ 	unsigned char *data;
+ 	struct RigidDiskBlock *rdb;
+ 	struct PartitionBlock *pb;
+-	int start_sect, nr_sects, blk, part, res = 0;
+-	int blksize = 1;	/* Multiplier for disk block size */
++	u64 start_sect, nr_sects;
++	sector_t blk, end_sect;
++	u32 cylblk;		/* rdb_CylBlocks = nr_heads*sect_per_track */
++	u32 nr_hd, nr_sect, lo_cyl, hi_cyl;
++	int part, res = 0;
++	unsigned int blksize = 1;	/* Multiplier for disk block size */
+ 	int slot = 1;
+ 	char b[BDEVNAME_SIZE];
+ 
+@@ -41,7 +53,7 @@ int amiga_partition(struct parsed_partitions *state)
+ 			goto rdb_done;
+ 		data = read_part_sector(state, blk, &sect);
+ 		if (!data) {
+-			pr_err("Dev %s: unable to read RDB block %d\n",
++			pr_err("Dev %s: unable to read RDB block %llu\n",
+ 			       bdevname(state->bdev, b), blk);
+ 			res = -1;
+ 			goto rdb_done;
+@@ -58,12 +70,12 @@ int amiga_partition(struct parsed_partitions *state)
+ 		*(__be32 *)(data+0xdc) = 0;
+ 		if (checksum_block((__be32 *)data,
+ 				be32_to_cpu(rdb->rdb_SummedLongs) & 0x7F)==0) {
+-			pr_err("Trashed word at 0xd0 in block %d ignored in checksum calculation\n",
++			pr_err("Trashed word at 0xd0 in block %llu ignored in checksum calculation\n",
+ 			       blk);
+ 			break;
+ 		}
+ 
+-		pr_err("Dev %s: RDB in block %d has bad checksum\n",
++		pr_err("Dev %s: RDB in block %llu has bad checksum\n",
+ 		       bdevname(state->bdev, b), blk);
+ 	}
+ 
+@@ -79,11 +91,16 @@ int amiga_partition(struct parsed_partitions *state)
+ 	}
+ 	blk = be32_to_cpu(rdb->rdb_PartitionList);
+ 	put_dev_sector(sect);
+-	for (part = 1; blk>0 && part<=16; part++, put_dev_sector(sect)) {
+-		blk *= blksize;	/* Read in terms partition table understands */
++	for (part = 1; (s32) blk>0 && part<=16; part++, put_dev_sector(sect)) {
++		/* Read in terms partition table understands */
++		if (check_mul_overflow(blk, (sector_t) blksize, &blk)) {
++			pr_err("Dev %s: overflow calculating partition block %llu! Skipping partitions %u and beyond\n",
++				bdevname(state->bdev, b), blk, part);
++			break;
++		}
+ 		data = read_part_sector(state, blk, &sect);
+ 		if (!data) {
+-			pr_err("Dev %s: unable to read partition block %d\n",
++			pr_err("Dev %s: unable to read partition block %llu\n",
+ 			       bdevname(state->bdev, b), blk);
+ 			res = -1;
+ 			goto rdb_done;
+@@ -95,19 +112,70 @@ int amiga_partition(struct parsed_partitions *state)
+ 		if (checksum_block((__be32 *)pb, be32_to_cpu(pb->pb_SummedLongs) & 0x7F) != 0 )
+ 			continue;
+ 
+-		/* Tell Kernel about it */
++		/* RDB gives us more than enough rope to hang ourselves with,
++		 * many times over (2^128 bytes if all fields max out).
++		 * Some careful checks are in order, so check for potential
++		 * overflows.
++		 * We are multiplying four 32 bit numbers to one sector_t!
++		 */
++
++		nr_hd   = be32_to_cpu(pb->pb_Environment[NR_HD]);
++		nr_sect = be32_to_cpu(pb->pb_Environment[NR_SECT]);
++
++		/* CylBlocks is total number of blocks per cylinder */
++		if (check_mul_overflow(nr_hd, nr_sect, &cylblk)) {
++			pr_err("Dev %s: heads*sects %u overflows u32, skipping partition!\n",
++				bdevname(state->bdev, b), cylblk);
++			continue;
++		}
++
++		/* check for consistency with RDB defined CylBlocks */
++		if (cylblk > be32_to_cpu(rdb->rdb_CylBlocks)) {
++			pr_warn("Dev %s: cylblk %u > rdb_CylBlocks %u!\n",
++				bdevname(state->bdev, b), cylblk,
++				be32_to_cpu(rdb->rdb_CylBlocks));
++		}
++
++		/* RDB allows for variable logical block size -
++		 * normalize to 512 byte blocks and check result.
++		 */
++
++		if (check_mul_overflow(cylblk, blksize, &cylblk)) {
++			pr_err("Dev %s: partition %u bytes per cyl. overflows u32, skipping partition!\n",
++				bdevname(state->bdev, b), part);
++			continue;
++		}
++
++		/* Calculate partition start and end. Limit of 32 bit on cylblk
++		 * guarantees no overflow occurs if LBD support is enabled.
++		 */
++
++		lo_cyl = be32_to_cpu(pb->pb_Environment[LO_CYL]);
++		start_sect = ((u64) lo_cyl * cylblk);
++
++		hi_cyl = be32_to_cpu(pb->pb_Environment[HI_CYL]);
++		nr_sects = (((u64) hi_cyl - lo_cyl + 1) * cylblk);
+ 
+-		nr_sects = (be32_to_cpu(pb->pb_Environment[10]) + 1 -
+-			    be32_to_cpu(pb->pb_Environment[9])) *
+-			   be32_to_cpu(pb->pb_Environment[3]) *
+-			   be32_to_cpu(pb->pb_Environment[5]) *
+-			   blksize;
+ 		if (!nr_sects)
+ 			continue;
+-		start_sect = be32_to_cpu(pb->pb_Environment[9]) *
+-			     be32_to_cpu(pb->pb_Environment[3]) *
+-			     be32_to_cpu(pb->pb_Environment[5]) *
+-			     blksize;
++
++		/* Warn user if partition end overflows u32 (AmigaDOS limit) */
++
++		if ((start_sect + nr_sects) > UINT_MAX) {
++			pr_warn("Dev %s: partition %u (%llu-%llu) needs 64 bit device support!\n",
++				bdevname(state->bdev, b), part,
++				start_sect, start_sect + nr_sects);
++		}
++
++		if (check_add_overflow(start_sect, nr_sects, &end_sect)) {
++			pr_err("Dev %s: partition %u (%llu-%llu) needs LBD device support, skipping partition!\n",
++				bdevname(state->bdev, b), part,
++				start_sect, end_sect);
++			continue;
++		}
++
++		/* Tell Kernel about it */
++
+ 		put_partition(state,slot++,start_sect,nr_sects);
+ 		{
+ 			/* Be even more informative to aid mounting */
+diff --git a/drivers/acpi/button.c b/drivers/acpi/button.c
+index 0d93a5ef4d071..4861aad1a9e93 100644
+--- a/drivers/acpi/button.c
++++ b/drivers/acpi/button.c
+@@ -82,6 +82,15 @@ static const struct dmi_system_id dmi_lid_quirks[] = {
+ 		},
+ 		.driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_DISABLED,
+ 	},
++	{
++		/* Nextbook Ares 8A tablet, _LID device always reports lid closed */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "CherryTrail"),
++			DMI_MATCH(DMI_BIOS_VERSION, "M882"),
++		},
++		.driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_DISABLED,
++	},
+ 	{
+ 		/*
+ 		 * Medion Akoya E2215T, notification of the LID device only
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 038542b3a80a7..b02d381e78483 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -332,6 +332,22 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "82BK"),
+ 		},
+ 	},
++	{
++	 .callback = video_detect_force_native,
++	 /* Lenovo ThinkPad X131e (3371 AMD version) */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "3371"),
++		},
++	},
++	{
++	 .callback = video_detect_force_native,
++	 /* Apple iMac11,3 */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "iMac11,3"),
++		},
++	},
+ 	{
+ 	 /* https://bugzilla.redhat.com/show_bug.cgi?id=1217249 */
+ 	 .callback = video_detect_force_native,
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index d0ba5459ce0b9..8a90f08c9682b 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -2763,10 +2763,10 @@ static int genpd_parse_state(struct genpd_power_state *genpd_state,
+ 
+ 	err = of_property_read_u32(state_node, "min-residency-us", &residency);
+ 	if (!err)
+-		genpd_state->residency_ns = 1000 * residency;
++		genpd_state->residency_ns = 1000LL * residency;
+ 
+-	genpd_state->power_on_latency_ns = 1000 * exit_latency;
+-	genpd_state->power_off_latency_ns = 1000 * entry_latency;
++	genpd_state->power_on_latency_ns = 1000LL * exit_latency;
++	genpd_state->power_off_latency_ns = 1000LL * entry_latency;
+ 	genpd_state->fwnode = &state_node->fwnode;
+ 
+ 	return 0;
+diff --git a/drivers/base/regmap/regmap-i2c.c b/drivers/base/regmap/regmap-i2c.c
+index 62b95a9212ae1..051c10e730f92 100644
+--- a/drivers/base/regmap/regmap-i2c.c
++++ b/drivers/base/regmap/regmap-i2c.c
+@@ -242,8 +242,8 @@ static int regmap_i2c_smbus_i2c_read(void *context, const void *reg,
+ static const struct regmap_bus regmap_i2c_smbus_i2c_block = {
+ 	.write = regmap_i2c_smbus_i2c_write,
+ 	.read = regmap_i2c_smbus_i2c_read,
+-	.max_raw_read = I2C_SMBUS_BLOCK_MAX,
+-	.max_raw_write = I2C_SMBUS_BLOCK_MAX,
++	.max_raw_read = I2C_SMBUS_BLOCK_MAX - 1,
++	.max_raw_write = I2C_SMBUS_BLOCK_MAX - 1,
+ };
+ 
+ static int regmap_i2c_smbus_i2c_write_reg16(void *context, const void *data,
+@@ -299,8 +299,8 @@ static int regmap_i2c_smbus_i2c_read_reg16(void *context, const void *reg,
+ static const struct regmap_bus regmap_i2c_smbus_i2c_block_reg16 = {
+ 	.write = regmap_i2c_smbus_i2c_write_reg16,
+ 	.read = regmap_i2c_smbus_i2c_read_reg16,
+-	.max_raw_read = I2C_SMBUS_BLOCK_MAX,
+-	.max_raw_write = I2C_SMBUS_BLOCK_MAX,
++	.max_raw_read = I2C_SMBUS_BLOCK_MAX - 2,
++	.max_raw_write = I2C_SMBUS_BLOCK_MAX - 2,
+ };
+ 
+ static const struct regmap_bus *regmap_get_i2c_bus(struct i2c_client *i2c,
+diff --git a/drivers/base/regmap/regmap-spi-avmm.c b/drivers/base/regmap/regmap-spi-avmm.c
+index 67f89937219c3..ad1da83e849fe 100644
+--- a/drivers/base/regmap/regmap-spi-avmm.c
++++ b/drivers/base/regmap/regmap-spi-avmm.c
+@@ -666,7 +666,7 @@ static const struct regmap_bus regmap_spi_avmm_bus = {
+ 	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
+ 	.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
+ 	.max_raw_read = SPI_AVMM_VAL_SIZE * MAX_READ_CNT,
+-	.max_raw_write = SPI_AVMM_REG_SIZE + SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT,
++	.max_raw_write = SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT,
+ 	.free_context = spi_avmm_bridge_ctx_free,
+ };
+ 
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 2a3c3dfefdcec..55a30afc14a00 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1998,8 +1998,6 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
+ 	size_t val_count = val_len / val_bytes;
+ 	size_t chunk_count, chunk_bytes;
+ 	size_t chunk_regs = val_count;
+-	size_t max_data = map->max_raw_write - map->format.reg_bytes -
+-			map->format.pad_bytes;
+ 	int ret, i;
+ 
+ 	if (!val_count)
+@@ -2007,8 +2005,8 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
+ 
+ 	if (map->use_single_write)
+ 		chunk_regs = 1;
+-	else if (map->max_raw_write && val_len > max_data)
+-		chunk_regs = max_data / val_bytes;
++	else if (map->max_raw_write && val_len > map->max_raw_write)
++		chunk_regs = map->max_raw_write / val_bytes;
+ 
+ 	chunk_count = val_count / chunk_regs;
+ 	chunk_bytes = chunk_regs * val_bytes;
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index b6940f0a9c905..e0f805ca0e727 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1723,7 +1723,8 @@ static int nbd_dev_add(int index)
+ 		if (err == -ENOSPC)
+ 			err = -EEXIST;
+ 	} else {
+-		err = idr_alloc(&nbd_index_idr, nbd, 0, 0, GFP_KERNEL);
++		err = idr_alloc(&nbd_index_idr, nbd, 0,
++				(MINORMASK >> part_shift) + 1, GFP_KERNEL);
+ 		if (err >= 0)
+ 			index = err;
+ 	}
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 4ee20be76508f..4b1641fe30dba 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -1748,7 +1748,7 @@ static u32 sysc_quirk_dispc(struct sysc *ddata, int dispc_offset,
+ 	if (!ddata->module_va)
+ 		return -EIO;
+ 
+-	/* DISP_CONTROL */
++	/* DISP_CONTROL, shut down lcd and digit on disable if enabled */
+ 	val = sysc_read(ddata, dispc_offset + 0x40);
+ 	lcd_en = val & lcd_en_mask;
+ 	digit_en = val & digit_en_mask;
+@@ -1760,7 +1760,7 @@ static u32 sysc_quirk_dispc(struct sysc *ddata, int dispc_offset,
+ 		else
+ 			irq_mask |= BIT(2) | BIT(3);	/* EVSYNC bits */
+ 	}
+-	if (disable & (lcd_en | digit_en))
++	if (disable && (lcd_en || digit_en))
+ 		sysc_write(ddata, dispc_offset + 0x40,
+ 			   val & ~(lcd_en_mask | digit_en_mask));
+ 
+diff --git a/drivers/char/hw_random/imx-rngc.c b/drivers/char/hw_random/imx-rngc.c
+index 9b182e5bfa87a..dbb2da630611a 100644
+--- a/drivers/char/hw_random/imx-rngc.c
++++ b/drivers/char/hw_random/imx-rngc.c
+@@ -110,7 +110,7 @@ static int imx_rngc_self_test(struct imx_rngc *rngc)
+ 	cmd = readl(rngc->base + RNGC_COMMAND);
+ 	writel(cmd | RNGC_CMD_SELF_TEST, rngc->base + RNGC_COMMAND);
+ 
+-	ret = wait_for_completion_timeout(&rngc->rng_op_done, RNGC_TIMEOUT);
++	ret = wait_for_completion_timeout(&rngc->rng_op_done, msecs_to_jiffies(RNGC_TIMEOUT));
+ 	imx_rngc_irq_mask_clear(rngc);
+ 	if (!ret)
+ 		return -ETIMEDOUT;
+@@ -187,9 +187,7 @@ static int imx_rngc_init(struct hwrng *rng)
+ 		cmd = readl(rngc->base + RNGC_COMMAND);
+ 		writel(cmd | RNGC_CMD_SEED, rngc->base + RNGC_COMMAND);
+ 
+-		ret = wait_for_completion_timeout(&rngc->rng_op_done,
+-				RNGC_TIMEOUT);
+-
++		ret = wait_for_completion_timeout(&rngc->rng_op_done, msecs_to_jiffies(RNGC_TIMEOUT));
+ 		if (!ret) {
+ 			ret = -ETIMEDOUT;
+ 			goto err;
+diff --git a/drivers/char/hw_random/st-rng.c b/drivers/char/hw_random/st-rng.c
+index 15ba1e6fae4d2..6e9dfac9fc9f4 100644
+--- a/drivers/char/hw_random/st-rng.c
++++ b/drivers/char/hw_random/st-rng.c
+@@ -42,7 +42,6 @@
+ 
+ struct st_rng_data {
+ 	void __iomem	*base;
+-	struct clk	*clk;
+ 	struct hwrng	ops;
+ };
+ 
+@@ -85,26 +84,18 @@ static int st_rng_probe(struct platform_device *pdev)
+ 	if (IS_ERR(base))
+ 		return PTR_ERR(base);
+ 
+-	clk = devm_clk_get(&pdev->dev, NULL);
++	clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ 	if (IS_ERR(clk))
+ 		return PTR_ERR(clk);
+ 
+-	ret = clk_prepare_enable(clk);
+-	if (ret)
+-		return ret;
+-
+ 	ddata->ops.priv	= (unsigned long)ddata;
+ 	ddata->ops.read	= st_rng_read;
+ 	ddata->ops.name	= pdev->name;
+ 	ddata->base	= base;
+-	ddata->clk	= clk;
+-
+-	dev_set_drvdata(&pdev->dev, ddata);
+ 
+ 	ret = devm_hwrng_register(&pdev->dev, &ddata->ops);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to register HW RNG\n");
+-		clk_disable_unprepare(clk);
+ 		return ret;
+ 	}
+ 
+@@ -113,15 +104,6 @@ static int st_rng_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static int st_rng_remove(struct platform_device *pdev)
+-{
+-	struct st_rng_data *ddata = dev_get_drvdata(&pdev->dev);
+-
+-	clk_disable_unprepare(ddata->clk);
+-
+-	return 0;
+-}
+-
+ static const struct of_device_id st_rng_match[] __maybe_unused = {
+ 	{ .compatible = "st,rng" },
+ 	{},
+@@ -134,7 +116,6 @@ static struct platform_driver st_rng_driver = {
+ 		.of_match_table = of_match_ptr(st_rng_match),
+ 	},
+ 	.probe = st_rng_probe,
+-	.remove = st_rng_remove
+ };
+ 
+ module_platform_driver(st_rng_driver);
+diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
+index a90001e02bf7a..3a194eb3ce8ad 100644
+--- a/drivers/char/hw_random/virtio-rng.c
++++ b/drivers/char/hw_random/virtio-rng.c
+@@ -4,6 +4,7 @@
+  *  Copyright (C) 2007, 2008 Rusty Russell IBM Corporation
+  */
+ 
++#include <asm/barrier.h>
+ #include <linux/err.h>
+ #include <linux/hw_random.h>
+ #include <linux/scatterlist.h>
+@@ -18,71 +19,111 @@ static DEFINE_IDA(rng_index_ida);
+ struct virtrng_info {
+ 	struct hwrng hwrng;
+ 	struct virtqueue *vq;
+-	struct completion have_data;
+ 	char name[25];
+-	unsigned int data_avail;
+ 	int index;
+-	bool busy;
+ 	bool hwrng_register_done;
+ 	bool hwrng_removed;
++	/* data transfer */
++	struct completion have_data;
++	unsigned int data_avail;
++	unsigned int data_idx;
++	/* minimal size returned by rng_buffer_size() */
++#if SMP_CACHE_BYTES < 32
++	u8 data[32];
++#else
++	u8 data[SMP_CACHE_BYTES];
++#endif
+ };
+ 
+ static void random_recv_done(struct virtqueue *vq)
+ {
+ 	struct virtrng_info *vi = vq->vdev->priv;
++	unsigned int len;
+ 
+ 	/* We can get spurious callbacks, e.g. shared IRQs + virtio_pci. */
+-	if (!virtqueue_get_buf(vi->vq, &vi->data_avail))
++	if (!virtqueue_get_buf(vi->vq, &len))
+ 		return;
+ 
++	smp_store_release(&vi->data_avail, len);
+ 	complete(&vi->have_data);
+ }
+ 
+-/* The host will fill any buffer we give it with sweet, sweet randomness. */
+-static void register_buffer(struct virtrng_info *vi, u8 *buf, size_t size)
++static void request_entropy(struct virtrng_info *vi)
+ {
+ 	struct scatterlist sg;
+ 
+-	sg_init_one(&sg, buf, size);
++	reinit_completion(&vi->have_data);
++	vi->data_idx = 0;
++
++	sg_init_one(&sg, vi->data, sizeof(vi->data));
+ 
+ 	/* There should always be room for one buffer. */
+-	virtqueue_add_inbuf(vi->vq, &sg, 1, buf, GFP_KERNEL);
++	virtqueue_add_inbuf(vi->vq, &sg, 1, vi->data, GFP_KERNEL);
+ 
+ 	virtqueue_kick(vi->vq);
+ }
+ 
++static unsigned int copy_data(struct virtrng_info *vi, void *buf,
++			      unsigned int size)
++{
++	size = min_t(unsigned int, size, vi->data_avail);
++	memcpy(buf, vi->data + vi->data_idx, size);
++	vi->data_idx += size;
++	vi->data_avail -= size;
++	if (vi->data_avail == 0)
++		request_entropy(vi);
++	return size;
++}
++
+ static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
+ {
+ 	int ret;
+ 	struct virtrng_info *vi = (struct virtrng_info *)rng->priv;
++	unsigned int chunk;
++	size_t read;
+ 
+ 	if (vi->hwrng_removed)
+ 		return -ENODEV;
+ 
+-	if (!vi->busy) {
+-		vi->busy = true;
+-		reinit_completion(&vi->have_data);
+-		register_buffer(vi, buf, size);
++	read = 0;
++
++	/* copy available data */
++	if (smp_load_acquire(&vi->data_avail)) {
++		chunk = copy_data(vi, buf, size);
++		size -= chunk;
++		read += chunk;
+ 	}
+ 
+ 	if (!wait)
+-		return 0;
+-
+-	ret = wait_for_completion_killable(&vi->have_data);
+-	if (ret < 0)
+-		return ret;
++		return read;
++
++	/* We have already copied available entropy,
++	 * so either size is 0 or data_avail is 0
++	 */
++	while (size != 0) {
++		/* data_avail is 0 but a request is pending */
++		ret = wait_for_completion_killable(&vi->have_data);
++		if (ret < 0)
++			return ret;
++		/* if vi->data_avail is 0, we have been interrupted
++		 * by a cleanup, but buffer stays in the queue
++		 */
++		if (vi->data_avail == 0)
++			return read;
+ 
+-	vi->busy = false;
++		chunk = copy_data(vi, buf + read, size);
++		size -= chunk;
++		read += chunk;
++	}
+ 
+-	return vi->data_avail;
++	return read;
+ }
+ 
+ static void virtio_cleanup(struct hwrng *rng)
+ {
+ 	struct virtrng_info *vi = (struct virtrng_info *)rng->priv;
+ 
+-	if (vi->busy)
+-		wait_for_completion(&vi->have_data);
++	complete(&vi->have_data);
+ }
+ 
+ static int probe_common(struct virtio_device *vdev)
+@@ -118,6 +159,9 @@ static int probe_common(struct virtio_device *vdev)
+ 		goto err_find;
+ 	}
+ 
++	/* we always have a pending entropy request */
++	request_entropy(vi);
++
+ 	return 0;
+ 
+ err_find:
+@@ -133,9 +177,9 @@ static void remove_common(struct virtio_device *vdev)
+ 
+ 	vi->hwrng_removed = true;
+ 	vi->data_avail = 0;
++	vi->data_idx = 0;
+ 	complete(&vi->have_data);
+ 	vdev->config->reset(vdev);
+-	vi->busy = false;
+ 	if (vi->hwrng_register_done)
+ 		hwrng_unregister(&vi->hwrng);
+ 	vdev->config->del_vqs(vdev);
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 512c867495ea5..365761055df3e 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -731,7 +731,9 @@ static irqreturn_t tis_int_handler(int dummy, void *dev_id)
+ 		wake_up_interruptible(&priv->int_queue);
+ 
+ 	/* Clear interrupts handled with TPM_EOI */
++	tpm_tis_request_locality(chip, 0);
+ 	rc = tpm_tis_write32(priv, TPM_INT_STATUS(priv->locality), interrupt);
++	tpm_tis_relinquish_locality(chip, 0);
+ 	if (rc < 0)
+ 		return IRQ_NONE;
+ 
+diff --git a/drivers/char/tpm/tpm_vtpm_proxy.c b/drivers/char/tpm/tpm_vtpm_proxy.c
+index 91c772e38bb54..ff2ec71d592ef 100644
+--- a/drivers/char/tpm/tpm_vtpm_proxy.c
++++ b/drivers/char/tpm/tpm_vtpm_proxy.c
+@@ -683,37 +683,21 @@ static struct miscdevice vtpmx_miscdev = {
+ 	.fops = &vtpmx_fops,
+ };
+ 
+-static int vtpmx_init(void)
+-{
+-	return misc_register(&vtpmx_miscdev);
+-}
+-
+-static void vtpmx_cleanup(void)
+-{
+-	misc_deregister(&vtpmx_miscdev);
+-}
+-
+ static int __init vtpm_module_init(void)
+ {
+ 	int rc;
+ 
+-	rc = vtpmx_init();
+-	if (rc) {
+-		pr_err("couldn't create vtpmx device\n");
+-		return rc;
+-	}
+-
+ 	workqueue = create_workqueue("tpm-vtpm");
+ 	if (!workqueue) {
+ 		pr_err("couldn't create workqueue\n");
+-		rc = -ENOMEM;
+-		goto err_vtpmx_cleanup;
++		return -ENOMEM;
+ 	}
+ 
+-	return 0;
+-
+-err_vtpmx_cleanup:
+-	vtpmx_cleanup();
++	rc = misc_register(&vtpmx_miscdev);
++	if (rc) {
++		pr_err("couldn't create vtpmx device\n");
++		destroy_workqueue(workqueue);
++	}
+ 
+ 	return rc;
+ }
+@@ -721,7 +705,7 @@ err_vtpmx_cleanup:
+ static void __exit vtpm_module_exit(void)
+ {
+ 	destroy_workqueue(workqueue);
+-	vtpmx_cleanup();
++	misc_deregister(&vtpmx_miscdev);
+ }
+ 
+ module_init(vtpm_module_init);
+diff --git a/drivers/clk/clk-cdce925.c b/drivers/clk/clk-cdce925.c
+index 308b353815e17..470d91d7314db 100644
+--- a/drivers/clk/clk-cdce925.c
++++ b/drivers/clk/clk-cdce925.c
+@@ -705,6 +705,10 @@ static int cdce925_probe(struct i2c_client *client,
+ 	for (i = 0; i < data->chip_info->num_plls; ++i) {
+ 		pll_clk_name[i] = kasprintf(GFP_KERNEL, "%pOFn.pll%d",
+ 			client->dev.of_node, i);
++		if (!pll_clk_name[i]) {
++			err = -ENOMEM;
++			goto error;
++		}
+ 		init.name = pll_clk_name[i];
+ 		data->pll[i].chip = data;
+ 		data->pll[i].hw.init = &init;
+@@ -746,6 +750,10 @@ static int cdce925_probe(struct i2c_client *client,
+ 	init.num_parents = 1;
+ 	init.parent_names = &parent_name; /* Mux Y1 to input */
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.Y1", client->dev.of_node);
++	if (!init.name) {
++		err = -ENOMEM;
++		goto error;
++	}
+ 	data->clk[0].chip = data;
+ 	data->clk[0].hw.init = &init;
+ 	data->clk[0].index = 0;
+@@ -764,6 +772,10 @@ static int cdce925_probe(struct i2c_client *client,
+ 	for (i = 1; i < data->chip_info->num_outputs; ++i) {
+ 		init.name = kasprintf(GFP_KERNEL, "%pOFn.Y%d",
+ 			client->dev.of_node, i+1);
++		if (!init.name) {
++			err = -ENOMEM;
++			goto error;
++		}
+ 		data->clk[i].chip = data;
+ 		data->clk[i].hw.init = &init;
+ 		data->clk[i].index = i;
+diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
+index 382a0619a0488..4dea29fa901d4 100644
+--- a/drivers/clk/clk-si5341.c
++++ b/drivers/clk/clk-si5341.c
+@@ -19,6 +19,7 @@
+ #include <linux/i2c.h>
+ #include <linux/module.h>
+ #include <linux/regmap.h>
++#include <linux/regulator/consumer.h>
+ #include <linux/slab.h>
+ #include <asm/unaligned.h>
+ 
+@@ -59,6 +60,7 @@ struct clk_si5341_synth {
+ struct clk_si5341_output {
+ 	struct clk_hw hw;
+ 	struct clk_si5341 *data;
++	struct regulator *vddo_reg;
+ 	u8 index;
+ };
+ #define to_clk_si5341_output(_hw) \
+@@ -84,6 +86,7 @@ struct clk_si5341 {
+ struct clk_si5341_output_config {
+ 	u8 out_format_drv_bits;
+ 	u8 out_cm_ampl_bits;
++	u8 vdd_sel_bits;
+ 	bool synth_master;
+ 	bool always_on;
+ };
+@@ -136,6 +139,8 @@ struct clk_si5341_output_config {
+ #define SI5341_OUT_R_REG(output)	\
+ 			((output)->data->reg_rdiv_offset[(output)->index])
+ 
++#define SI5341_OUT_MUX_VDD_SEL_MASK 0x38
++
+ /* Synthesize N divider */
+ #define SI5341_SYNTH_N_NUM(x)	(0x0302 + ((x) * 11))
+ #define SI5341_SYNTH_N_DEN(x)	(0x0308 + ((x) * 11))
+@@ -1250,11 +1255,11 @@ static const struct regmap_config si5341_regmap_config = {
+ 	.volatile_table = &si5341_regmap_volatile,
+ };
+ 
+-static int si5341_dt_parse_dt(struct i2c_client *client,
+-	struct clk_si5341_output_config *config)
++static int si5341_dt_parse_dt(struct clk_si5341 *data,
++			      struct clk_si5341_output_config *config)
+ {
+ 	struct device_node *child;
+-	struct device_node *np = client->dev.of_node;
++	struct device_node *np = data->i2c_client->dev.of_node;
+ 	u32 num;
+ 	u32 val;
+ 
+@@ -1263,13 +1268,13 @@ static int si5341_dt_parse_dt(struct i2c_client *client,
+ 
+ 	for_each_child_of_node(np, child) {
+ 		if (of_property_read_u32(child, "reg", &num)) {
+-			dev_err(&client->dev, "missing reg property of %s\n",
++			dev_err(&data->i2c_client->dev, "missing reg property of %s\n",
+ 				child->name);
+ 			goto put_child;
+ 		}
+ 
+ 		if (num >= SI5341_MAX_NUM_OUTPUTS) {
+-			dev_err(&client->dev, "invalid clkout %d\n", num);
++			dev_err(&data->i2c_client->dev, "invalid clkout %d\n", num);
+ 			goto put_child;
+ 		}
+ 
+@@ -1288,7 +1293,7 @@ static int si5341_dt_parse_dt(struct i2c_client *client,
+ 				config[num].out_format_drv_bits |= 0xc0;
+ 				break;
+ 			default:
+-				dev_err(&client->dev,
++				dev_err(&data->i2c_client->dev,
+ 					"invalid silabs,format %u for %u\n",
+ 					val, num);
+ 				goto put_child;
+@@ -1301,7 +1306,7 @@ static int si5341_dt_parse_dt(struct i2c_client *client,
+ 
+ 		if (!of_property_read_u32(child, "silabs,common-mode", &val)) {
+ 			if (val > 0xf) {
+-				dev_err(&client->dev,
++				dev_err(&data->i2c_client->dev,
+ 					"invalid silabs,common-mode %u\n",
+ 					val);
+ 				goto put_child;
+@@ -1312,7 +1317,7 @@ static int si5341_dt_parse_dt(struct i2c_client *client,
+ 
+ 		if (!of_property_read_u32(child, "silabs,amplitude", &val)) {
+ 			if (val > 0xf) {
+-				dev_err(&client->dev,
++				dev_err(&data->i2c_client->dev,
+ 					"invalid silabs,amplitude %u\n",
+ 					val);
+ 				goto put_child;
+@@ -1329,6 +1334,34 @@ static int si5341_dt_parse_dt(struct i2c_client *client,
+ 
+ 		config[num].always_on =
+ 			of_property_read_bool(child, "always-on");
++
++		config[num].vdd_sel_bits = 0x08;
++		if (data->clk[num].vddo_reg) {
++			int vdd = regulator_get_voltage(data->clk[num].vddo_reg);
++
++			switch (vdd) {
++			case 3300000:
++				config[num].vdd_sel_bits |= 0 << 4;
++				break;
++			case 1800000:
++				config[num].vdd_sel_bits |= 1 << 4;
++				break;
++			case 2500000:
++				config[num].vdd_sel_bits |= 2 << 4;
++				break;
++			default:
++				dev_err(&data->i2c_client->dev,
++					"unsupported vddo voltage %d for %s\n",
++					vdd, child->name);
++				goto put_child;
++			}
++		} else {
++			/* chip seems to default to 2.5V when not set */
++			dev_warn(&data->i2c_client->dev,
++				"no regulator set, defaulting vdd_sel to 2.5V for %s\n",
++				child->name);
++			config[num].vdd_sel_bits |= 2 << 4;
++		}
+ 	}
+ 
+ 	return 0;
+@@ -1417,6 +1450,94 @@ static int si5341_clk_select_active_input(struct clk_si5341 *data)
+ 	return res;
+ }
+ 
++static ssize_t input_present_show(struct device *dev,
++				  struct device_attribute *attr,
++				  char *buf)
++{
++	struct clk_si5341 *data = dev_get_drvdata(dev);
++	u32 status;
++	int res = regmap_read(data->regmap, SI5341_STATUS, &status);
++
++	if (res < 0)
++		return res;
++	res = !(status & SI5341_STATUS_LOSREF);
++	return snprintf(buf, PAGE_SIZE, "%d\n", res);
++}
++static DEVICE_ATTR_RO(input_present);
++
++static ssize_t input_present_sticky_show(struct device *dev,
++					 struct device_attribute *attr,
++					 char *buf)
++{
++	struct clk_si5341 *data = dev_get_drvdata(dev);
++	u32 status;
++	int res = regmap_read(data->regmap, SI5341_STATUS_STICKY, &status);
++
++	if (res < 0)
++		return res;
++	res = !(status & SI5341_STATUS_LOSREF);
++	return snprintf(buf, PAGE_SIZE, "%d\n", res);
++}
++static DEVICE_ATTR_RO(input_present_sticky);
++
++static ssize_t pll_locked_show(struct device *dev,
++			       struct device_attribute *attr,
++			       char *buf)
++{
++	struct clk_si5341 *data = dev_get_drvdata(dev);
++	u32 status;
++	int res = regmap_read(data->regmap, SI5341_STATUS, &status);
++
++	if (res < 0)
++		return res;
++	res = !(status & SI5341_STATUS_LOL);
++	return snprintf(buf, PAGE_SIZE, "%d\n", res);
++}
++static DEVICE_ATTR_RO(pll_locked);
++
++static ssize_t pll_locked_sticky_show(struct device *dev,
++				      struct device_attribute *attr,
++				      char *buf)
++{
++	struct clk_si5341 *data = dev_get_drvdata(dev);
++	u32 status;
++	int res = regmap_read(data->regmap, SI5341_STATUS_STICKY, &status);
++
++	if (res < 0)
++		return res;
++	res = !(status & SI5341_STATUS_LOL);
++	return snprintf(buf, PAGE_SIZE, "%d\n", res);
++}
++static DEVICE_ATTR_RO(pll_locked_sticky);
++
++static ssize_t clear_sticky_store(struct device *dev,
++				  struct device_attribute *attr,
++				  const char *buf, size_t count)
++{
++	struct clk_si5341 *data = dev_get_drvdata(dev);
++	long val;
++
++	if (kstrtol(buf, 10, &val))
++		return -EINVAL;
++	if (val) {
++		int res = regmap_write(data->regmap, SI5341_STATUS_STICKY, 0);
++
++		if (res < 0)
++			return res;
++	}
++	return count;
++}
++static DEVICE_ATTR_WO(clear_sticky);
++
++static const struct attribute *si5341_attributes[] = {
++	&dev_attr_input_present.attr,
++	&dev_attr_input_present_sticky.attr,
++	&dev_attr_pll_locked.attr,
++	&dev_attr_pll_locked_sticky.attr,
++	&dev_attr_clear_sticky.attr,
++	NULL
++};
++
+ static int si5341_probe(struct i2c_client *client,
+ 		const struct i2c_device_id *id)
+ {
+@@ -1424,7 +1545,7 @@ static int si5341_probe(struct i2c_client *client,
+ 	struct clk_init_data init;
+ 	struct clk *input;
+ 	const char *root_clock_name;
+-	const char *synth_clock_names[SI5341_NUM_SYNTH];
++	const char *synth_clock_names[SI5341_NUM_SYNTH] = { NULL };
+ 	int err;
+ 	unsigned int i;
+ 	struct clk_si5341_output_config config[SI5341_MAX_NUM_OUTPUTS];
+@@ -1454,9 +1575,33 @@ static int si5341_probe(struct i2c_client *client,
+ 		}
+ 	}
+ 
+-	err = si5341_dt_parse_dt(client, config);
++	for (i = 0; i < SI5341_MAX_NUM_OUTPUTS; ++i) {
++		char reg_name[10];
++
++		snprintf(reg_name, sizeof(reg_name), "vddo%d", i);
++		data->clk[i].vddo_reg = devm_regulator_get_optional(
++			&client->dev, reg_name);
++		if (IS_ERR(data->clk[i].vddo_reg)) {
++			err = PTR_ERR(data->clk[i].vddo_reg);
++			data->clk[i].vddo_reg = NULL;
++			if (err == -ENODEV)
++				continue;
++			goto cleanup;
++		} else {
++			err = regulator_enable(data->clk[i].vddo_reg);
++			if (err) {
++				dev_err(&client->dev,
++					"failed to enable %s regulator: %d\n",
++					reg_name, err);
++				data->clk[i].vddo_reg = NULL;
++				goto cleanup;
++			}
++		}
++	}
++
++	err = si5341_dt_parse_dt(data, config);
+ 	if (err)
+-		return err;
++		goto cleanup;
+ 
+ 	if (of_property_read_string(client->dev.of_node, "clock-output-names",
+ 			&init.name))
+@@ -1464,21 +1609,23 @@ static int si5341_probe(struct i2c_client *client,
+ 	root_clock_name = init.name;
+ 
+ 	data->regmap = devm_regmap_init_i2c(client, &si5341_regmap_config);
+-	if (IS_ERR(data->regmap))
+-		return PTR_ERR(data->regmap);
++	if (IS_ERR(data->regmap)) {
++		err = PTR_ERR(data->regmap);
++		goto cleanup;
++	}
+ 
+ 	i2c_set_clientdata(client, data);
+ 
+ 	err = si5341_probe_chip_id(data);
+ 	if (err < 0)
+-		return err;
++		goto cleanup;
+ 
+ 	if (of_property_read_bool(client->dev.of_node, "silabs,reprogram")) {
+ 		initialization_required = true;
+ 	} else {
+ 		err = si5341_is_programmed_already(data);
+ 		if (err < 0)
+-			return err;
++			goto cleanup;
+ 
+ 		initialization_required = !err;
+ 	}
+@@ -1487,11 +1634,11 @@ static int si5341_probe(struct i2c_client *client,
+ 		/* Populate the regmap cache in preparation for "cache only" */
+ 		err = si5341_read_settings(data);
+ 		if (err < 0)
+-			return err;
++			goto cleanup;
+ 
+ 		err = si5341_send_preamble(data);
+ 		if (err < 0)
+-			return err;
++			goto cleanup;
+ 
+ 		/*
+ 		 * We intend to send all 'final' register values in a single
+@@ -1504,19 +1651,19 @@ static int si5341_probe(struct i2c_client *client,
+ 		err = si5341_write_multiple(data, si5341_reg_defaults,
+ 					ARRAY_SIZE(si5341_reg_defaults));
+ 		if (err < 0)
+-			return err;
++			goto cleanup;
+ 	}
+ 
+ 	/* Input must be up and running at this point */
+ 	err = si5341_clk_select_active_input(data);
+ 	if (err < 0)
+-		return err;
++		goto cleanup;
+ 
+ 	if (initialization_required) {
+ 		/* PLL configuration is required */
+ 		err = si5341_initialize_pll(data);
+ 		if (err < 0)
+-			return err;
++			goto cleanup;
+ 	}
+ 
+ 	/* Register the PLL */
+@@ -1529,7 +1676,7 @@ static int si5341_probe(struct i2c_client *client,
+ 	err = devm_clk_hw_register(&client->dev, &data->hw);
+ 	if (err) {
+ 		dev_err(&client->dev, "clock registration failed\n");
+-		return err;
++		goto cleanup;
+ 	}
+ 
+ 	init.num_parents = 1;
+@@ -1538,6 +1685,10 @@ static int si5341_probe(struct i2c_client *client,
+ 	for (i = 0; i < data->num_synth; ++i) {
+ 		synth_clock_names[i] = devm_kasprintf(&client->dev, GFP_KERNEL,
+ 				"%s.N%u", client->dev.of_node->name, i);
++		if (!synth_clock_names[i]) {
++			err = -ENOMEM;
++			goto free_clk_names;
++		}
+ 		init.name = synth_clock_names[i];
+ 		data->synth[i].index = i;
+ 		data->synth[i].data = data;
+@@ -1546,6 +1697,7 @@ static int si5341_probe(struct i2c_client *client,
+ 		if (err) {
+ 			dev_err(&client->dev,
+ 				"synth N%u registration failed\n", i);
++			goto free_clk_names;
+ 		}
+ 	}
+ 
+@@ -1555,6 +1707,10 @@ static int si5341_probe(struct i2c_client *client,
+ 	for (i = 0; i < data->num_outputs; ++i) {
+ 		init.name = kasprintf(GFP_KERNEL, "%s.%d",
+ 			client->dev.of_node->name, i);
++		if (!init.name) {
++			err = -ENOMEM;
++			goto free_clk_names;
++		}
+ 		init.flags = config[i].synth_master ? CLK_SET_RATE_PARENT : 0;
+ 		data->clk[i].index = i;
+ 		data->clk[i].data = data;
+@@ -1566,13 +1722,17 @@ static int si5341_probe(struct i2c_client *client,
+ 			regmap_write(data->regmap,
+ 				SI5341_OUT_CM(&data->clk[i]),
+ 				config[i].out_cm_ampl_bits);
++			regmap_update_bits(data->regmap,
++				SI5341_OUT_MUX_SEL(&data->clk[i]),
++				SI5341_OUT_MUX_VDD_SEL_MASK,
++				config[i].vdd_sel_bits);
+ 		}
+ 		err = devm_clk_hw_register(&client->dev, &data->clk[i].hw);
+ 		kfree(init.name); /* clock framework made a copy of the name */
+ 		if (err) {
+ 			dev_err(&client->dev,
+ 				"output %u registration failed\n", i);
+-			return err;
++			goto free_clk_names;
+ 		}
+ 		if (config[i].always_on)
+ 			clk_prepare(data->clk[i].hw.clk);
+@@ -1582,7 +1742,7 @@ static int si5341_probe(struct i2c_client *client,
+ 			data);
+ 	if (err) {
+ 		dev_err(&client->dev, "unable to add clk provider\n");
+-		return err;
++		goto free_clk_names;
+ 	}
+ 
+ 	if (initialization_required) {
+@@ -1590,11 +1750,11 @@ static int si5341_probe(struct i2c_client *client,
+ 		regcache_cache_only(data->regmap, false);
+ 		err = regcache_sync(data->regmap);
+ 		if (err < 0)
+-			return err;
++			goto free_clk_names;
+ 
+ 		err = si5341_finalize_defaults(data);
+ 		if (err < 0)
+-			return err;
++			goto free_clk_names;
+ 	}
+ 
+ 	/* wait for device to report input clock present and PLL lock */
+@@ -1603,20 +1763,47 @@ static int si5341_probe(struct i2c_client *client,
+ 	       10000, 250000);
+ 	if (err) {
+ 		dev_err(&client->dev, "Error waiting for input clock or PLL lock\n");
+-		return err;
++		goto free_clk_names;
+ 	}
+ 
+ 	/* clear sticky alarm bits from initialization */
+ 	err = regmap_write(data->regmap, SI5341_STATUS_STICKY, 0);
+ 	if (err) {
+ 		dev_err(&client->dev, "unable to clear sticky status\n");
+-		return err;
++		goto free_clk_names;
+ 	}
+ 
++	err = sysfs_create_files(&client->dev.kobj, si5341_attributes);
++	if (err)
++		dev_err(&client->dev, "unable to create sysfs files\n");
++
++free_clk_names:
+ 	/* Free the names, clk framework makes copies */
+ 	for (i = 0; i < data->num_synth; ++i)
+ 		 devm_kfree(&client->dev, (void *)synth_clock_names[i]);
+ 
++cleanup:
++	if (err) {
++		for (i = 0; i < SI5341_MAX_NUM_OUTPUTS; ++i) {
++			if (data->clk[i].vddo_reg)
++				regulator_disable(data->clk[i].vddo_reg);
++		}
++	}
++	return err;
++}
++
++static int si5341_remove(struct i2c_client *client)
++{
++	struct clk_si5341 *data = i2c_get_clientdata(client);
++	int i;
++
++	sysfs_remove_files(&client->dev.kobj, si5341_attributes);
++
++	for (i = 0; i < SI5341_MAX_NUM_OUTPUTS; ++i) {
++		if (data->clk[i].vddo_reg)
++			regulator_disable(data->clk[i].vddo_reg);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1646,6 +1833,7 @@ static struct i2c_driver si5341_driver = {
+ 		.of_match_table = clk_si5341_of_match,
+ 	},
+ 	.probe		= si5341_probe,
++	.remove		= si5341_remove,
+ 	.id_table	= si5341_id,
+ };
+ module_i2c_driver(si5341_driver);
+diff --git a/drivers/clk/clk-versaclock5.c b/drivers/clk/clk-versaclock5.c
+index eb597ea7bb87b..3ddb974da039a 100644
+--- a/drivers/clk/clk-versaclock5.c
++++ b/drivers/clk/clk-versaclock5.c
+@@ -906,6 +906,11 @@ static int vc5_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 	}
+ 
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.mux", client->dev.of_node);
++	if (!init.name) {
++		ret = -ENOMEM;
++		goto err_clk;
++	}
++
+ 	init.ops = &vc5_mux_ops;
+ 	init.flags = 0;
+ 	init.parent_names = parent_names;
+@@ -920,6 +925,10 @@ static int vc5_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 		memset(&init, 0, sizeof(init));
+ 		init.name = kasprintf(GFP_KERNEL, "%pOFn.dbl",
+ 				      client->dev.of_node);
++		if (!init.name) {
++			ret = -ENOMEM;
++			goto err_clk;
++		}
+ 		init.ops = &vc5_dbl_ops;
+ 		init.flags = CLK_SET_RATE_PARENT;
+ 		init.parent_names = parent_names;
+@@ -935,6 +944,10 @@ static int vc5_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 	/* Register PFD */
+ 	memset(&init, 0, sizeof(init));
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.pfd", client->dev.of_node);
++	if (!init.name) {
++		ret = -ENOMEM;
++		goto err_clk;
++	}
+ 	init.ops = &vc5_pfd_ops;
+ 	init.flags = CLK_SET_RATE_PARENT;
+ 	init.parent_names = parent_names;
+@@ -952,6 +965,10 @@ static int vc5_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 	/* Register PLL */
+ 	memset(&init, 0, sizeof(init));
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.pll", client->dev.of_node);
++	if (!init.name) {
++		ret = -ENOMEM;
++		goto err_clk;
++	}
+ 	init.ops = &vc5_pll_ops;
+ 	init.flags = CLK_SET_RATE_PARENT;
+ 	init.parent_names = parent_names;
+@@ -971,6 +988,10 @@ static int vc5_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 		memset(&init, 0, sizeof(init));
+ 		init.name = kasprintf(GFP_KERNEL, "%pOFn.fod%d",
+ 				      client->dev.of_node, idx);
++		if (!init.name) {
++			ret = -ENOMEM;
++			goto err_clk;
++		}
+ 		init.ops = &vc5_fod_ops;
+ 		init.flags = CLK_SET_RATE_PARENT;
+ 		init.parent_names = parent_names;
+@@ -989,6 +1010,10 @@ static int vc5_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 	memset(&init, 0, sizeof(init));
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.out0_sel_i2cb",
+ 			      client->dev.of_node);
++	if (!init.name) {
++		ret = -ENOMEM;
++		goto err_clk;
++	}
+ 	init.ops = &vc5_clk_out_ops;
+ 	init.flags = CLK_SET_RATE_PARENT;
+ 	init.parent_names = parent_names;
+@@ -1015,6 +1040,10 @@ static int vc5_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 		memset(&init, 0, sizeof(init));
+ 		init.name = kasprintf(GFP_KERNEL, "%pOFn.out%d",
+ 				      client->dev.of_node, idx + 1);
++		if (!init.name) {
++			ret = -ENOMEM;
++			goto err_clk;
++		}
+ 		init.ops = &vc5_clk_out_ops;
+ 		init.flags = CLK_SET_RATE_PARENT;
+ 		init.parent_names = parent_names;
+diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
+index 8a49e072d6e86..23f37a2cdf3a8 100644
+--- a/drivers/clk/imx/clk-imx8mn.c
++++ b/drivers/clk/imx/clk-imx8mn.c
+@@ -291,7 +291,7 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ 	void __iomem *base;
+ 	int ret;
+ 
+-	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
++	clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws,
+ 					  IMX8MN_CLK_END), GFP_KERNEL);
+ 	if (WARN_ON(!clk_hw_data))
+ 		return -ENOMEM;
+@@ -308,10 +308,10 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MN_CLK_EXT4] = imx_obtain_fixed_clk_hw(np, "clk_ext4");
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx8mn-anatop");
+-	base = of_iomap(np, 0);
++	base = devm_of_iomap(dev, np, 0, NULL);
+ 	of_node_put(np);
+-	if (WARN_ON(!base)) {
+-		ret = -ENOMEM;
++	if (WARN_ON(IS_ERR(base))) {
++		ret = PTR_ERR(base);
+ 		goto unregister_hws;
+ 	}
+ 
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index 72592e35836b3..98a4711ef38d0 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -425,25 +425,22 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np = dev->of_node;
+ 	void __iomem *anatop_base, *ccm_base;
++	int err;
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx8mp-anatop");
+-	anatop_base = of_iomap(np, 0);
++	anatop_base = devm_of_iomap(dev, np, 0, NULL);
+ 	of_node_put(np);
+-	if (WARN_ON(!anatop_base))
+-		return -ENOMEM;
++	if (WARN_ON(IS_ERR(anatop_base)))
++		return PTR_ERR(anatop_base);
+ 
+ 	np = dev->of_node;
+ 	ccm_base = devm_platform_ioremap_resource(pdev, 0);
+-	if (WARN_ON(IS_ERR(ccm_base))) {
+-		iounmap(anatop_base);
++	if (WARN_ON(IS_ERR(ccm_base)))
+ 		return PTR_ERR(ccm_base);
+-	}
+ 
+-	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws, IMX8MP_CLK_END), GFP_KERNEL);
+-	if (WARN_ON(!clk_hw_data)) {
+-		iounmap(anatop_base);
++	clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws, IMX8MP_CLK_END), GFP_KERNEL);
++	if (WARN_ON(!clk_hw_data))
+ 		return -ENOMEM;
+-	}
+ 
+ 	clk_hw_data->num = IMX8MP_CLK_END;
+ 	hws = clk_hw_data->hws;
+@@ -743,7 +740,12 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 
+ 	imx_check_clk_hws(hws, IMX8MP_CLK_END);
+ 
+-	of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
++	err = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
++	if (err < 0) {
++		dev_err(dev, "failed to register hws for i.MX8MP\n");
++		imx_unregister_hw_clocks(hws, IMX8MP_CLK_END);
++		return err;
++	}
+ 
+ 	imx_register_uart_clocks(4);
+ 
+diff --git a/drivers/clk/keystone/sci-clk.c b/drivers/clk/keystone/sci-clk.c
+index 7e1b136e71ae0..8af2a9faa805a 100644
+--- a/drivers/clk/keystone/sci-clk.c
++++ b/drivers/clk/keystone/sci-clk.c
+@@ -302,6 +302,8 @@ static int _sci_clk_build(struct sci_clk_provider *provider,
+ 
+ 	name = kasprintf(GFP_KERNEL, "clk:%d:%d", sci_clk->dev_id,
+ 			 sci_clk->clk_id);
++	if (!name)
++		return -ENOMEM;
+ 
+ 	init.name = name;
+ 
+diff --git a/drivers/clk/qcom/gcc-ipq6018.c b/drivers/clk/qcom/gcc-ipq6018.c
+index 3f9c2f61a5d93..cde62a11f5736 100644
+--- a/drivers/clk/qcom/gcc-ipq6018.c
++++ b/drivers/clk/qcom/gcc-ipq6018.c
+@@ -1654,7 +1654,7 @@ static struct clk_rcg2 sdcc1_apps_clk_src = {
+ 		.name = "sdcc1_apps_clk_src",
+ 		.parent_data = gcc_xo_gpll0_gpll2_gpll0_out_main_div2,
+ 		.num_parents = 4,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+ 
+@@ -4517,24 +4517,24 @@ static const struct qcom_reset_map gcc_ipq6018_resets[] = {
+ 	[GCC_PCIE0_AHB_ARES] = { 0x75040, 5 },
+ 	[GCC_PCIE0_AXI_MASTER_STICKY_ARES] = { 0x75040, 6 },
+ 	[GCC_PCIE0_AXI_SLAVE_STICKY_ARES] = { 0x75040, 7 },
+-	[GCC_PPE_FULL_RESET] = { 0x68014, 0 },
+-	[GCC_UNIPHY0_SOFT_RESET] = { 0x56004, 0 },
++	[GCC_PPE_FULL_RESET] = { .reg = 0x68014, .bitmask = 0xf0000 },
++	[GCC_UNIPHY0_SOFT_RESET] = { .reg = 0x56004, .bitmask = 0x3ff2 },
+ 	[GCC_UNIPHY0_XPCS_RESET] = { 0x56004, 2 },
+-	[GCC_UNIPHY1_SOFT_RESET] = { 0x56104, 0 },
++	[GCC_UNIPHY1_SOFT_RESET] = { .reg = 0x56104, .bitmask = 0x32 },
+ 	[GCC_UNIPHY1_XPCS_RESET] = { 0x56104, 2 },
+-	[GCC_EDMA_HW_RESET] = { 0x68014, 0 },
+-	[GCC_NSSPORT1_RESET] = { 0x68014, 0 },
+-	[GCC_NSSPORT2_RESET] = { 0x68014, 0 },
+-	[GCC_NSSPORT3_RESET] = { 0x68014, 0 },
+-	[GCC_NSSPORT4_RESET] = { 0x68014, 0 },
+-	[GCC_NSSPORT5_RESET] = { 0x68014, 0 },
+-	[GCC_UNIPHY0_PORT1_ARES] = { 0x56004, 0 },
+-	[GCC_UNIPHY0_PORT2_ARES] = { 0x56004, 0 },
+-	[GCC_UNIPHY0_PORT3_ARES] = { 0x56004, 0 },
+-	[GCC_UNIPHY0_PORT4_ARES] = { 0x56004, 0 },
+-	[GCC_UNIPHY0_PORT5_ARES] = { 0x56004, 0 },
+-	[GCC_UNIPHY0_PORT_4_5_RESET] = { 0x56004, 0 },
+-	[GCC_UNIPHY0_PORT_4_RESET] = { 0x56004, 0 },
++	[GCC_EDMA_HW_RESET] = { .reg = 0x68014, .bitmask = 0x300000 },
++	[GCC_NSSPORT1_RESET] = { .reg = 0x68014, .bitmask = 0x1000003 },
++	[GCC_NSSPORT2_RESET] = { .reg = 0x68014, .bitmask = 0x200000c },
++	[GCC_NSSPORT3_RESET] = { .reg = 0x68014, .bitmask = 0x4000030 },
++	[GCC_NSSPORT4_RESET] = { .reg = 0x68014, .bitmask = 0x8000300 },
++	[GCC_NSSPORT5_RESET] = { .reg = 0x68014, .bitmask = 0x10000c00 },
++	[GCC_UNIPHY0_PORT1_ARES] = { .reg = 0x56004, .bitmask = 0x30 },
++	[GCC_UNIPHY0_PORT2_ARES] = { .reg = 0x56004, .bitmask = 0xc0 },
++	[GCC_UNIPHY0_PORT3_ARES] = { .reg = 0x56004, .bitmask = 0x300 },
++	[GCC_UNIPHY0_PORT4_ARES] = { .reg = 0x56004, .bitmask = 0xc00 },
++	[GCC_UNIPHY0_PORT5_ARES] = { .reg = 0x56004, .bitmask = 0x3000 },
++	[GCC_UNIPHY0_PORT_4_5_RESET] = { .reg = 0x56004, .bitmask = 0x3c02 },
++	[GCC_UNIPHY0_PORT_4_RESET] = { .reg = 0x56004, .bitmask = 0xc02 },
+ 	[GCC_LPASS_BCR] = {0x1F000, 0},
+ 	[GCC_UBI32_TBU_BCR] = {0x65000, 0},
+ 	[GCC_LPASS_TBU_BCR] = {0x6C000, 0},
+diff --git a/drivers/clk/qcom/reset.c b/drivers/clk/qcom/reset.c
+index 819d194be8f7b..0e914ec7aeae1 100644
+--- a/drivers/clk/qcom/reset.c
++++ b/drivers/clk/qcom/reset.c
+@@ -13,8 +13,10 @@
+ 
+ static int qcom_reset(struct reset_controller_dev *rcdev, unsigned long id)
+ {
++	struct qcom_reset_controller *rst = to_qcom_reset_controller(rcdev);
++
+ 	rcdev->ops->assert(rcdev, id);
+-	udelay(1);
++	udelay(rst->reset_map[id].udelay ?: 1); /* use 1 us as default */
+ 	rcdev->ops->deassert(rcdev, id);
+ 	return 0;
+ }
+@@ -28,7 +30,7 @@ qcom_reset_assert(struct reset_controller_dev *rcdev, unsigned long id)
+ 
+ 	rst = to_qcom_reset_controller(rcdev);
+ 	map = &rst->reset_map[id];
+-	mask = BIT(map->bit);
++	mask = map->bitmask ? map->bitmask : BIT(map->bit);
+ 
+ 	return regmap_update_bits(rst->regmap, map->reg, mask, mask);
+ }
+@@ -42,7 +44,7 @@ qcom_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id)
+ 
+ 	rst = to_qcom_reset_controller(rcdev);
+ 	map = &rst->reset_map[id];
+-	mask = BIT(map->bit);
++	mask = map->bitmask ? map->bitmask : BIT(map->bit);
+ 
+ 	return regmap_update_bits(rst->regmap, map->reg, mask, 0);
+ }
+diff --git a/drivers/clk/qcom/reset.h b/drivers/clk/qcom/reset.h
+index 2a08b5e282c77..9a47c838d9b1b 100644
+--- a/drivers/clk/qcom/reset.h
++++ b/drivers/clk/qcom/reset.h
+@@ -11,6 +11,8 @@
+ struct qcom_reset_map {
+ 	unsigned int reg;
+ 	u8 bit;
++	u8 udelay;
++	u32 bitmask;
+ };
+ 
+ struct regmap;
+diff --git a/drivers/clk/tegra/clk-tegra124-emc.c b/drivers/clk/tegra/clk-tegra124-emc.c
+index 733a962ff521a..15f728edc54b5 100644
+--- a/drivers/clk/tegra/clk-tegra124-emc.c
++++ b/drivers/clk/tegra/clk-tegra124-emc.c
+@@ -455,6 +455,7 @@ static int load_timings_from_dt(struct tegra_clk_emc *tegra,
+ 		err = load_one_timing_from_dt(tegra, timing, child);
+ 		if (err) {
+ 			of_node_put(child);
++			kfree(tegra->timings);
+ 			return err;
+ 		}
+ 
+@@ -506,6 +507,7 @@ struct clk *tegra_clk_register_emc(void __iomem *base, struct device_node *np,
+ 		err = load_timings_from_dt(tegra, node, node_ram_code);
+ 		if (err) {
+ 			of_node_put(node);
++			kfree(tegra);
+ 			return ERR_PTR(err);
+ 		}
+ 	}
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 864c484bde1b4..157abc46dcf44 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -267,6 +267,9 @@ static const char * __init clkctrl_get_clock_name(struct device_node *np,
+ 	if (clkctrl_name && !legacy_naming) {
+ 		clock_name = kasprintf(GFP_KERNEL, "%s-clkctrl:%04x:%d",
+ 				       clkctrl_name, offset, index);
++		if (!clock_name)
++			return NULL;
++
+ 		strreplace(clock_name, '_', '-');
+ 
+ 		return clock_name;
+@@ -598,6 +601,10 @@ static void __init _ti_omap4_clkctrl_setup(struct device_node *node)
+ 	if (clkctrl_name) {
+ 		provider->clkdm_name = kasprintf(GFP_KERNEL,
+ 						 "%s_clkdm", clkctrl_name);
++		if (!provider->clkdm_name) {
++			kfree(provider);
++			return;
++		}
+ 		goto clkdm_found;
+ 	}
+ 
+diff --git a/drivers/clocksource/timer-cadence-ttc.c b/drivers/clocksource/timer-cadence-ttc.c
+index 4efd0cf3b602d..0d52e28fea4de 100644
+--- a/drivers/clocksource/timer-cadence-ttc.c
++++ b/drivers/clocksource/timer-cadence-ttc.c
+@@ -486,10 +486,10 @@ static int __init ttc_timer_probe(struct platform_device *pdev)
+ 	 * and use it. Note that the event timer uses the interrupt and it's the
+ 	 * 2nd TTC hence the irq_of_parse_and_map(,1)
+ 	 */
+-	timer_baseaddr = of_iomap(timer, 0);
+-	if (!timer_baseaddr) {
++	timer_baseaddr = devm_of_iomap(&pdev->dev, timer, 0, NULL);
++	if (IS_ERR(timer_baseaddr)) {
+ 		pr_err("ERROR: invalid timer base address\n");
+-		return -ENXIO;
++		return PTR_ERR(timer_baseaddr);
+ 	}
+ 
+ 	irq = irq_of_parse_and_map(timer, 1);
+@@ -513,20 +513,27 @@ static int __init ttc_timer_probe(struct platform_device *pdev)
+ 	clk_ce = of_clk_get(timer, clksel);
+ 	if (IS_ERR(clk_ce)) {
+ 		pr_err("ERROR: timer input clock not found\n");
+-		return PTR_ERR(clk_ce);
++		ret = PTR_ERR(clk_ce);
++		goto put_clk_cs;
+ 	}
+ 
+ 	ret = ttc_setup_clocksource(clk_cs, timer_baseaddr, timer_width);
+ 	if (ret)
+-		return ret;
++		goto put_clk_ce;
+ 
+ 	ret = ttc_setup_clockevent(clk_ce, timer_baseaddr + 4, irq);
+ 	if (ret)
+-		return ret;
++		goto put_clk_ce;
+ 
+ 	pr_info("%pOFn #0 at %p, irq=%d\n", timer, timer_baseaddr, irq);
+ 
+ 	return 0;
++
++put_clk_ce:
++	clk_put(clk_ce);
++put_clk_cs:
++	clk_put(clk_cs);
++	return ret;
+ }
+ 
+ static const struct of_device_id ttc_timer_of_match[] = {
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 1686705bee7bd..4b06b81d8bb0a 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -777,6 +777,8 @@ static ssize_t store_energy_performance_preference(
+ 			err = cpufreq_start_governor(policy);
+ 			if (!ret)
+ 				ret = err;
++		} else {
++			ret = 0;
+ 		}
+ 	}
+ 
+diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c
+index 596a8c74e40a5..8dc10f9988948 100644
+--- a/drivers/crypto/marvell/cesa/cipher.c
++++ b/drivers/crypto/marvell/cesa/cipher.c
+@@ -287,7 +287,7 @@ static int mv_cesa_des_setkey(struct crypto_skcipher *cipher, const u8 *key,
+ static int mv_cesa_des3_ede_setkey(struct crypto_skcipher *cipher,
+ 				   const u8 *key, unsigned int len)
+ {
+-	struct mv_cesa_des_ctx *ctx = crypto_skcipher_ctx(cipher);
++	struct mv_cesa_des3_ctx *ctx = crypto_skcipher_ctx(cipher);
+ 	int err;
+ 
+ 	err = verify_skcipher_des3_key(cipher, key);
+diff --git a/drivers/crypto/nx/Makefile b/drivers/crypto/nx/Makefile
+index bc89a20e5d9d8..351822a598f97 100644
+--- a/drivers/crypto/nx/Makefile
++++ b/drivers/crypto/nx/Makefile
+@@ -1,7 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ obj-$(CONFIG_CRYPTO_DEV_NX_ENCRYPT) += nx-crypto.o
+ nx-crypto-objs := nx.o \
+-		  nx_debugfs.o \
+ 		  nx-aes-cbc.o \
+ 		  nx-aes-ecb.o \
+ 		  nx-aes-gcm.o \
+@@ -11,6 +10,7 @@ nx-crypto-objs := nx.o \
+ 		  nx-sha256.o \
+ 		  nx-sha512.o
+ 
++nx-crypto-$(CONFIG_DEBUG_FS) += nx_debugfs.o
+ obj-$(CONFIG_CRYPTO_DEV_NX_COMPRESS_PSERIES) += nx-compress-pseries.o nx-compress.o
+ obj-$(CONFIG_CRYPTO_DEV_NX_COMPRESS_POWERNV) += nx-compress-powernv.o nx-compress.o
+ nx-compress-objs := nx-842.o
+diff --git a/drivers/crypto/nx/nx.h b/drivers/crypto/nx/nx.h
+index c6233173c612e..2697baebb6a35 100644
+--- a/drivers/crypto/nx/nx.h
++++ b/drivers/crypto/nx/nx.h
+@@ -170,8 +170,8 @@ struct nx_sg *nx_walk_and_build(struct nx_sg *, unsigned int,
+ void nx_debugfs_init(struct nx_crypto_driver *);
+ void nx_debugfs_fini(struct nx_crypto_driver *);
+ #else
+-#define NX_DEBUGFS_INIT(drv)	(0)
+-#define NX_DEBUGFS_FINI(drv)	(0)
++#define NX_DEBUGFS_INIT(drv)	do {} while (0)
++#define NX_DEBUGFS_FINI(drv)	do {} while (0)
+ #endif
+ 
+ #define NX_PAGE_NUM(x)		((u64)(x) & 0xfffffffffffff000ULL)
+diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
+index a02777c93c07b..0541b7e4d5c66 100644
+--- a/drivers/dax/bus.c
++++ b/drivers/dax/bus.c
+@@ -403,18 +403,34 @@ static void unregister_dev_dax(void *dev)
+ 	put_device(dev);
+ }
+ 
++static void dax_region_free(struct kref *kref)
++{
++	struct dax_region *dax_region;
++
++	dax_region = container_of(kref, struct dax_region, kref);
++	kfree(dax_region);
++}
++
++void dax_region_put(struct dax_region *dax_region)
++{
++	kref_put(&dax_region->kref, dax_region_free);
++}
++EXPORT_SYMBOL_GPL(dax_region_put);
++
+ /* a return value >= 0 indicates this invocation invalidated the id */
+ static int __free_dev_dax_id(struct dev_dax *dev_dax)
+ {
+-	struct dax_region *dax_region = dev_dax->region;
+ 	struct device *dev = &dev_dax->dev;
++	struct dax_region *dax_region;
+ 	int rc = dev_dax->id;
+ 
+ 	device_lock_assert(dev);
+ 
+-	if (is_static(dax_region) || dev_dax->id < 0)
++	if (!dev_dax->dyn_id || dev_dax->id < 0)
+ 		return -1;
++	dax_region = dev_dax->region;
+ 	ida_free(&dax_region->ida, dev_dax->id);
++	dax_region_put(dax_region);
+ 	dev_dax->id = -1;
+ 	return rc;
+ }
+@@ -430,6 +446,20 @@ static int free_dev_dax_id(struct dev_dax *dev_dax)
+ 	return rc;
+ }
+ 
++static int alloc_dev_dax_id(struct dev_dax *dev_dax)
++{
++	struct dax_region *dax_region = dev_dax->region;
++	int id;
++
++	id = ida_alloc(&dax_region->ida, GFP_KERNEL);
++	if (id < 0)
++		return id;
++	kref_get(&dax_region->kref);
++	dev_dax->dyn_id = true;
++	dev_dax->id = id;
++	return id;
++}
++
+ static ssize_t delete_store(struct device *dev, struct device_attribute *attr,
+ 		const char *buf, size_t len)
+ {
+@@ -517,20 +547,6 @@ static const struct attribute_group *dax_region_attribute_groups[] = {
+ 	NULL,
+ };
+ 
+-static void dax_region_free(struct kref *kref)
+-{
+-	struct dax_region *dax_region;
+-
+-	dax_region = container_of(kref, struct dax_region, kref);
+-	kfree(dax_region);
+-}
+-
+-void dax_region_put(struct dax_region *dax_region)
+-{
+-	kref_put(&dax_region->kref, dax_region_free);
+-}
+-EXPORT_SYMBOL_GPL(dax_region_put);
+-
+ static void dax_region_unregister(void *region)
+ {
+ 	struct dax_region *dax_region = region;
+@@ -592,10 +608,12 @@ EXPORT_SYMBOL_GPL(alloc_dax_region);
+ static void dax_mapping_release(struct device *dev)
+ {
+ 	struct dax_mapping *mapping = to_dax_mapping(dev);
+-	struct dev_dax *dev_dax = to_dev_dax(dev->parent);
++	struct device *parent = dev->parent;
++	struct dev_dax *dev_dax = to_dev_dax(parent);
+ 
+ 	ida_free(&dev_dax->ida, mapping->id);
+ 	kfree(mapping);
++	put_device(parent);
+ }
+ 
+ static void unregister_dax_mapping(void *data)
+@@ -735,6 +753,7 @@ static int devm_register_dax_mapping(struct dev_dax *dev_dax, int range_id)
+ 	dev = &mapping->dev;
+ 	device_initialize(dev);
+ 	dev->parent = &dev_dax->dev;
++	get_device(dev->parent);
+ 	dev->type = &dax_mapping_type;
+ 	dev_set_name(dev, "mapping%d", mapping->id);
+ 	rc = device_add(dev);
+@@ -1267,12 +1286,10 @@ static const struct attribute_group *dax_attribute_groups[] = {
+ static void dev_dax_release(struct device *dev)
+ {
+ 	struct dev_dax *dev_dax = to_dev_dax(dev);
+-	struct dax_region *dax_region = dev_dax->region;
+ 	struct dax_device *dax_dev = dev_dax->dax_dev;
+ 
+ 	put_dax(dax_dev);
+ 	free_dev_dax_id(dev_dax);
+-	dax_region_put(dax_region);
+ 	kfree(dev_dax->pgmap);
+ 	kfree(dev_dax);
+ }
+@@ -1296,6 +1313,7 @@ struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data)
+ 	if (!dev_dax)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	dev_dax->region = dax_region;
+ 	if (is_static(dax_region)) {
+ 		if (dev_WARN_ONCE(parent, data->id < 0,
+ 				"dynamic id specified to static region\n")) {
+@@ -1311,13 +1329,11 @@ struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data)
+ 			goto err_id;
+ 		}
+ 
+-		rc = ida_alloc(&dax_region->ida, GFP_KERNEL);
++		rc = alloc_dev_dax_id(dev_dax);
+ 		if (rc < 0)
+ 			goto err_id;
+-		dev_dax->id = rc;
+ 	}
+ 
+-	dev_dax->region = dax_region;
+ 	dev = &dev_dax->dev;
+ 	device_initialize(dev);
+ 	dev_set_name(dev, "dax%d.%d", dax_region->id, dev_dax->id);
+@@ -1355,7 +1371,6 @@ struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data)
+ 	dev_dax->target_node = dax_region->target_node;
+ 	dev_dax->align = dax_region->align;
+ 	ida_init(&dev_dax->ida);
+-	kref_get(&dax_region->kref);
+ 
+ 	inode = dax_inode(dax_dev);
+ 	dev->devt = inode->i_rdev;
+diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h
+index 1c974b7caae6e..afcada6fd2eda 100644
+--- a/drivers/dax/dax-private.h
++++ b/drivers/dax/dax-private.h
+@@ -52,7 +52,8 @@ struct dax_mapping {
+  * @region - parent region
+  * @dax_dev - core dax functionality
+  * @target_node: effective numa node if dev_dax memory range is onlined
+- * @id: ida allocated id
++ * @dyn_id: is this a dynamic or statically created instance
++ * @id: ida allocated id when the dax_region is not static
+  * @ida: mapping id allocator
+  * @dev - device core
+  * @pgmap - pgmap for memmap setup / lifetime (driver owned)
+@@ -64,6 +65,7 @@ struct dev_dax {
+ 	struct dax_device *dax_dev;
+ 	unsigned int align;
+ 	int target_node;
++	bool dyn_id;
+ 	int id;
+ 	struct ida ida;
+ 	struct device dev;
+diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c
+index 356610404bb40..fa08dec389dc1 100644
+--- a/drivers/extcon/extcon.c
++++ b/drivers/extcon/extcon.c
+@@ -196,6 +196,14 @@ static const struct __extcon_info {
+  * @attr_name:		"name" sysfs entry
+  * @attr_state:		"state" sysfs entry
+  * @attrs:		the array pointing to attr_name and attr_state for attr_g
++ * @usb_propval:	the array of USB connector properties
++ * @chg_propval:	the array of charger connector properties
++ * @jack_propval:	the array of jack connector properties
++ * @disp_propval:	the array of display connector properties
++ * @usb_bits:		the bit array of the USB connector property capabilities
++ * @chg_bits:		the bit array of the charger connector property capabilities
++ * @jack_bits:		the bit array of the jack connector property capabilities
++ * @disp_bits:		the bit array of the display connector property capabilities
+  */
+ struct extcon_cable {
+ 	struct extcon_dev *edev;
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index 78a446cb43486..f432fe7cb60d7 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -622,7 +622,7 @@ svc_create_memory_pool(struct platform_device *pdev,
+ 	end = rounddown(sh_memory->addr + sh_memory->size, PAGE_SIZE);
+ 	paddr = begin;
+ 	size = end - begin;
+-	va = memremap(paddr, size, MEMREMAP_WC);
++	va = devm_memremap(dev, paddr, size, MEMREMAP_WC);
+ 	if (!va) {
+ 		dev_err(dev, "fail to remap shared memory\n");
+ 		return ERR_PTR(-EINVAL);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 45b1f00c59680..8445bb7ae06ab 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -2229,14 +2229,14 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
+ 	uint64_t eaddr;
+ 
+ 	/* validate the parameters */
+-	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||
+-	    size == 0 || size & ~PAGE_MASK)
++	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK || size & ~PAGE_MASK)
++		return -EINVAL;
++	if (saddr + size <= saddr || offset + size <= offset)
+ 		return -EINVAL;
+ 
+ 	/* make sure object fit at this offset */
+ 	eaddr = saddr + size - 1;
+-	if (saddr >= eaddr ||
+-	    (bo && offset + size > amdgpu_bo_size(bo)) ||
++	if ((bo && offset + size > amdgpu_bo_size(bo)) ||
+ 	    (eaddr >= adev->vm_manager.max_pfn << AMDGPU_GPU_PAGE_SHIFT))
+ 		return -EINVAL;
+ 
+@@ -2295,14 +2295,14 @@ int amdgpu_vm_bo_replace_map(struct amdgpu_device *adev,
+ 	int r;
+ 
+ 	/* validate the parameters */
+-	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||
+-	    size == 0 || size & ~PAGE_MASK)
++	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK || size & ~PAGE_MASK)
++		return -EINVAL;
++	if (saddr + size <= saddr || offset + size <= offset)
+ 		return -EINVAL;
+ 
+ 	/* make sure object fit at this offset */
+ 	eaddr = saddr + size - 1;
+-	if (saddr >= eaddr ||
+-	    (bo && offset + size > amdgpu_bo_size(bo)) ||
++	if ((bo && offset + size > amdgpu_bo_size(bo)) ||
+ 	    (eaddr >= adev->vm_manager.max_pfn << AMDGPU_GPU_PAGE_SHIFT))
+ 		return -EINVAL;
+ 
+@@ -3252,6 +3252,10 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 	long timeout = msecs_to_jiffies(2000);
+ 	int r;
+ 
++	/* No valid flags defined yet */
++	if (args->in.flags)
++		return -EINVAL;
++
+ 	switch (args->in.op) {
+ 	case AMDGPU_VM_OP_RESERVE_VMID:
+ 		/* We only have requirement to reserve vmid from gfxhub */
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+index 3b6f5963180d5..dadeb2013fd9a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+@@ -113,18 +113,19 @@ static struct kfd_mem_obj *allocate_mqd(struct kfd_dev *kfd,
+ 			&(mqd_mem_obj->gtt_mem),
+ 			&(mqd_mem_obj->gpu_addr),
+ 			(void *)&(mqd_mem_obj->cpu_ptr), true);
++
++		if (retval) {
++			kfree(mqd_mem_obj);
++			return NULL;
++		}
+ 	} else {
+ 		retval = kfd_gtt_sa_allocate(kfd, sizeof(struct v9_mqd),
+ 				&mqd_mem_obj);
+-	}
+-
+-	if (retval) {
+-		kfree(mqd_mem_obj);
+-		return NULL;
++		if (retval)
++			return NULL;
+ 	}
+ 
+ 	return mqd_mem_obj;
+-
+ }
+ 
+ static void init_mqd(struct mqd_manager *mm, void **mqd,
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 7e0a55aa2b180..099542dd31544 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1855,9 +1855,6 @@ static enum surface_update_type det_surface_update(const struct dc *dc,
+ 	enum surface_update_type overall_type = UPDATE_TYPE_FAST;
+ 	union surface_update_flags *update_flags = &u->surface->update_flags;
+ 
+-	if (u->flip_addr)
+-		update_flags->bits.addr_update = 1;
+-
+ 	if (!is_surface_in_context(context, u->surface) || u->surface->force_full_update) {
+ 		update_flags->raw = 0xFFFFFFFF;
+ 		return UPDATE_TYPE_FULL;
+diff --git a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
+index c6a8d6c54621e..882b4e2816b53 100644
+--- a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
++++ b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
+@@ -347,7 +347,7 @@ struct dmub_srv {
+  * of a firmware to know if feature or functionality is supported or present.
+  */
+ #define DMUB_FW_VERSION(major, minor, revision) \
+-	((((major) & 0xFF) << 24) | (((minor) & 0xFF) << 16) | ((revision) & 0xFFFF))
++	((((major) & 0xFF) << 24) | (((minor) & 0xFF) << 16) | (((revision) & 0xFF) << 8))
+ 
+ /**
+  * dmub_srv_create() - creates the DMUB service.
+diff --git a/drivers/gpu/drm/bridge/tc358768.c b/drivers/gpu/drm/bridge/tc358768.c
+index 8ed8302d6bbb4..b4a69b2104514 100644
+--- a/drivers/gpu/drm/bridge/tc358768.c
++++ b/drivers/gpu/drm/bridge/tc358768.c
+@@ -9,6 +9,8 @@
+ #include <linux/gpio/consumer.h>
+ #include <linux/i2c.h>
+ #include <linux/kernel.h>
++#include <linux/media-bus-format.h>
++#include <linux/minmax.h>
+ #include <linux/module.h>
+ #include <linux/regmap.h>
+ #include <linux/regulator/consumer.h>
+@@ -147,6 +149,7 @@ struct tc358768_priv {
+ 
+ 	u32 pd_lines; /* number of Parallel Port Input Data Lines */
+ 	u32 dsi_lanes; /* number of DSI Lanes */
++	u32 dsi_bpp; /* number of Bits Per Pixel over DSI */
+ 
+ 	/* Parameters for PLL programming */
+ 	u32 fbd;	/* PLL feedback divider */
+@@ -279,12 +282,12 @@ static void tc358768_hw_disable(struct tc358768_priv *priv)
+ 
+ static u32 tc358768_pll_to_pclk(struct tc358768_priv *priv, u32 pll_clk)
+ {
+-	return (u32)div_u64((u64)pll_clk * priv->dsi_lanes, priv->pd_lines);
++	return (u32)div_u64((u64)pll_clk * priv->dsi_lanes, priv->dsi_bpp);
+ }
+ 
+ static u32 tc358768_pclk_to_pll(struct tc358768_priv *priv, u32 pclk)
+ {
+-	return (u32)div_u64((u64)pclk * priv->pd_lines, priv->dsi_lanes);
++	return (u32)div_u64((u64)pclk * priv->dsi_bpp, priv->dsi_lanes);
+ }
+ 
+ static int tc358768_calc_pll(struct tc358768_priv *priv,
+@@ -329,13 +332,17 @@ static int tc358768_calc_pll(struct tc358768_priv *priv,
+ 		u32 fbd;
+ 
+ 		for (fbd = 0; fbd < 512; ++fbd) {
+-			u32 pll, diff;
++			u32 pll, diff, pll_in;
+ 
+ 			pll = (u32)div_u64((u64)refclk * (fbd + 1), divisor);
+ 
+ 			if (pll >= max_pll || pll < min_pll)
+ 				continue;
+ 
++			pll_in = (u32)div_u64((u64)refclk, prd + 1);
++			if (pll_in < 4000000)
++				continue;
++
+ 			diff = max(pll, target_pll) - min(pll, target_pll);
+ 
+ 			if (diff < best_diff) {
+@@ -417,6 +424,7 @@ static int tc358768_dsi_host_attach(struct mipi_dsi_host *host,
+ 	priv->output.panel = panel;
+ 
+ 	priv->dsi_lanes = dev->lanes;
++	priv->dsi_bpp = mipi_dsi_pixel_format_to_bpp(dev->format);
+ 
+ 	/* get input ep (port0/endpoint0) */
+ 	ret = -EINVAL;
+@@ -428,7 +436,7 @@ static int tc358768_dsi_host_attach(struct mipi_dsi_host *host,
+ 	}
+ 
+ 	if (ret)
+-		priv->pd_lines = mipi_dsi_pixel_format_to_bpp(dev->format);
++		priv->pd_lines = priv->dsi_bpp;
+ 
+ 	drm_bridge_add(&priv->bridge);
+ 
+@@ -626,6 +634,7 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 	struct tc358768_priv *priv = bridge_to_tc358768(bridge);
+ 	struct mipi_dsi_device *dsi_dev = priv->output.dev;
+ 	u32 val, val2, lptxcnt, hact, data_type;
++	s32 raw_val;
+ 	const struct drm_display_mode *mode;
+ 	u32 dsibclk_nsk, dsiclk_nsk, ui_nsk, phy_delay_nsk;
+ 	u32 dsiclk, dsibclk;
+@@ -719,25 +728,26 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 
+ 	/* 38ns < TCLK_PREPARE < 95ns */
+ 	val = tc358768_ns_to_cnt(65, dsibclk_nsk) - 1;
+-	/* TCLK_PREPARE > 300ns */
+-	val2 = tc358768_ns_to_cnt(300 + tc358768_to_ns(3 * ui_nsk),
+-				  dsibclk_nsk);
+-	val |= (val2 - tc358768_to_ns(phy_delay_nsk - dsibclk_nsk)) << 8;
++	/* TCLK_PREPARE + TCLK_ZERO > 300ns */
++	val2 = tc358768_ns_to_cnt(300 - tc358768_to_ns(2 * ui_nsk),
++				  dsibclk_nsk) - 2;
++	val |= val2 << 8;
+ 	dev_dbg(priv->dev, "TCLK_HEADERCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_TCLK_HEADERCNT, val);
+ 
+-	/* TCLK_TRAIL > 60ns + 3*UI */
+-	val = 60 + tc358768_to_ns(3 * ui_nsk);
+-	val = tc358768_ns_to_cnt(val, dsibclk_nsk) - 5;
++	/* TCLK_TRAIL > 60ns AND TEOT <= 105 ns + 12*UI */
++	raw_val = tc358768_ns_to_cnt(60 + tc358768_to_ns(2 * ui_nsk), dsibclk_nsk) - 5;
++	val = clamp(raw_val, 0, 127);
+ 	dev_dbg(priv->dev, "TCLK_TRAILCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_TCLK_TRAILCNT, val);
+ 
+ 	/* 40ns + 4*UI < THS_PREPARE < 85ns + 6*UI */
+ 	val = 50 + tc358768_to_ns(4 * ui_nsk);
+ 	val = tc358768_ns_to_cnt(val, dsibclk_nsk) - 1;
+-	/* THS_ZERO > 145ns + 10*UI */
+-	val2 = tc358768_ns_to_cnt(145 - tc358768_to_ns(ui_nsk), dsibclk_nsk);
+-	val |= (val2 - tc358768_to_ns(phy_delay_nsk)) << 8;
++	/* THS_PREPARE + THS_ZERO > 145ns + 10*UI */
++	raw_val = tc358768_ns_to_cnt(145 - tc358768_to_ns(3 * ui_nsk), dsibclk_nsk) - 10;
++	val2 = clamp(raw_val, 0, 127);
++	val |= val2 << 8;
+ 	dev_dbg(priv->dev, "THS_HEADERCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_THS_HEADERCNT, val);
+ 
+@@ -753,9 +763,10 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 	dev_dbg(priv->dev, "TCLK_POSTCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_TCLK_POSTCNT, val);
+ 
+-	/* 60ns + 4*UI < THS_PREPARE < 105ns + 12*UI */
+-	val = tc358768_ns_to_cnt(60 + tc358768_to_ns(15 * ui_nsk),
+-				 dsibclk_nsk) - 5;
++	/* max(60ns + 4*UI, 8*UI) < THS_TRAILCNT < 105ns + 12*UI */
++	raw_val = tc358768_ns_to_cnt(60 + tc358768_to_ns(18 * ui_nsk),
++				     dsibclk_nsk) - 4;
++	val = clamp(raw_val, 0, 15);
+ 	dev_dbg(priv->dev, "THS_TRAILCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_THS_TRAILCNT, val);
+ 
+@@ -769,7 +780,7 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 
+ 	/* TXTAGOCNT[26:16] RXTASURECNT[10:0] */
+ 	val = tc358768_to_ns((lptxcnt + 1) * dsibclk_nsk * 4);
+-	val = tc358768_ns_to_cnt(val, dsibclk_nsk) - 1;
++	val = tc358768_ns_to_cnt(val, dsibclk_nsk) / 4 - 1;
+ 	val2 = tc358768_ns_to_cnt(tc358768_to_ns((lptxcnt + 1) * dsibclk_nsk),
+ 				  dsibclk_nsk) - 2;
+ 	val |= val2 << 16;
+@@ -819,8 +830,7 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 	val = TC358768_DSI_CONFW_MODE_SET | TC358768_DSI_CONFW_ADDR_DSI_CONTROL;
+ 	val |= (dsi_dev->lanes - 1) << 1;
+ 
+-	if (!(dsi_dev->mode_flags & MIPI_DSI_MODE_LPM))
+-		val |= TC358768_DSI_CONTROL_TXMD;
++	val |= TC358768_DSI_CONTROL_TXMD;
+ 
+ 	if (!(dsi_dev->mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS))
+ 		val |= TC358768_DSI_CONTROL_HSCKMD;
+@@ -866,6 +876,44 @@ static void tc358768_bridge_enable(struct drm_bridge *bridge)
+ 	}
+ }
+ 
++#define MAX_INPUT_SEL_FORMATS	1
++
++static u32 *
++tc358768_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
++				   struct drm_bridge_state *bridge_state,
++				   struct drm_crtc_state *crtc_state,
++				   struct drm_connector_state *conn_state,
++				   u32 output_fmt,
++				   unsigned int *num_input_fmts)
++{
++	struct tc358768_priv *priv = bridge_to_tc358768(bridge);
++	u32 *input_fmts;
++
++	*num_input_fmts = 0;
++
++	input_fmts = kcalloc(MAX_INPUT_SEL_FORMATS, sizeof(*input_fmts),
++			     GFP_KERNEL);
++	if (!input_fmts)
++		return NULL;
++
++	switch (priv->pd_lines) {
++	case 16:
++		input_fmts[0] = MEDIA_BUS_FMT_RGB565_1X16;
++		break;
++	case 18:
++		input_fmts[0] = MEDIA_BUS_FMT_RGB666_1X18;
++		break;
++	default:
++	case 24:
++		input_fmts[0] = MEDIA_BUS_FMT_RGB888_1X24;
++		break;
++	};
++
++	*num_input_fmts = MAX_INPUT_SEL_FORMATS;
++
++	return input_fmts;
++}
++
+ static const struct drm_bridge_funcs tc358768_bridge_funcs = {
+ 	.attach = tc358768_bridge_attach,
+ 	.mode_valid = tc358768_bridge_mode_valid,
+@@ -873,6 +921,11 @@ static const struct drm_bridge_funcs tc358768_bridge_funcs = {
+ 	.enable = tc358768_bridge_enable,
+ 	.disable = tc358768_bridge_disable,
+ 	.post_disable = tc358768_bridge_post_disable,
++
++	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
++	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
++	.atomic_reset = drm_atomic_helper_bridge_reset,
++	.atomic_get_input_bus_fmts = tc358768_atomic_get_input_bus_fmts,
+ };
+ 
+ static const struct drm_bridge_timings default_tc358768_timings = {
+diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
+index 98b659981f1ad..b10ba50577358 100644
+--- a/drivers/gpu/drm/drm_atomic.c
++++ b/drivers/gpu/drm/drm_atomic.c
+@@ -98,6 +98,12 @@ drm_atomic_state_init(struct drm_device *dev, struct drm_atomic_state *state)
+ 	if (!state->planes)
+ 		goto fail;
+ 
++	/*
++	 * Because drm_atomic_state can be committed asynchronously we need our
++	 * own reference and cannot rely on the on implied by drm_file in the
++	 * ioctl call.
++	 */
++	drm_dev_get(dev);
+ 	state->dev = dev;
+ 
+ 	DRM_DEBUG_ATOMIC("Allocated atomic state %p\n", state);
+@@ -257,7 +263,8 @@ EXPORT_SYMBOL(drm_atomic_state_clear);
+ void __drm_atomic_state_free(struct kref *ref)
+ {
+ 	struct drm_atomic_state *state = container_of(ref, typeof(*state), ref);
+-	struct drm_mode_config *config = &state->dev->mode_config;
++	struct drm_device *dev = state->dev;
++	struct drm_mode_config *config = &dev->mode_config;
+ 
+ 	drm_atomic_state_clear(state);
+ 
+@@ -269,6 +276,8 @@ void __drm_atomic_state_free(struct kref *ref)
+ 		drm_atomic_state_default_release(state);
+ 		kfree(state);
+ 	}
++
++	drm_dev_put(dev);
+ }
+ EXPORT_SYMBOL(__drm_atomic_state_free);
+ 
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 7fc8e7000046c..0fde260b7edd8 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1113,7 +1113,16 @@ disable_outputs(struct drm_device *dev, struct drm_atomic_state *old_state)
+ 			continue;
+ 
+ 		ret = drm_crtc_vblank_get(crtc);
+-		WARN_ONCE(ret != -EINVAL, "driver forgot to call drm_crtc_vblank_off()\n");
++		/*
++		 * Self-refresh is not a true "disable"; ensure vblank remains
++		 * enabled.
++		 */
++		if (new_crtc_state->self_refresh_active)
++			WARN_ONCE(ret != 0,
++				  "driver disabled vblank in self-refresh\n");
++		else
++			WARN_ONCE(ret != -EINVAL,
++				  "driver forgot to call drm_crtc_vblank_off()\n");
+ 		if (ret == 0)
+ 			drm_crtc_vblank_put(crtc);
+ 	}
+diff --git a/drivers/gpu/drm/drm_client_modeset.c b/drivers/gpu/drm/drm_client_modeset.c
+index b7e9e1c2564cd..d5fd418236246 100644
+--- a/drivers/gpu/drm/drm_client_modeset.c
++++ b/drivers/gpu/drm/drm_client_modeset.c
+@@ -308,6 +308,9 @@ static bool drm_client_target_cloned(struct drm_device *dev,
+ 	can_clone = true;
+ 	dmt_mode = drm_mode_find_dmt(dev, 1024, 768, 60, false);
+ 
++	if (!dmt_mode)
++		goto fail;
++
+ 	for (i = 0; i < connector_count; i++) {
+ 		if (!enabled[i])
+ 			continue;
+@@ -323,11 +326,13 @@ static bool drm_client_target_cloned(struct drm_device *dev,
+ 		if (!modes[i])
+ 			can_clone = false;
+ 	}
++	kfree(dmt_mode);
+ 
+ 	if (can_clone) {
+ 		DRM_DEBUG_KMS("can clone using 1024x768\n");
+ 		return true;
+ 	}
++fail:
+ 	DRM_INFO("kms: can't enable cloning when we probably wanted to.\n");
+ 	return false;
+ }
+@@ -859,6 +864,7 @@ int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width,
+ 				break;
+ 			}
+ 
++			kfree(modeset->mode);
+ 			modeset->mode = drm_mode_duplicate(dev, mode);
+ 			drm_connector_get(connector);
+ 			modeset->connectors[modeset->num_connectors++] = connector;
+diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
+index 375c79e23ca59..eb104de16fa73 100644
+--- a/drivers/gpu/drm/drm_gem_vram_helper.c
++++ b/drivers/gpu/drm/drm_gem_vram_helper.c
+@@ -41,7 +41,7 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
+  * the frame's scanout buffer or the cursor image. If there's no more space
+  * left in VRAM, inactive GEM objects can be moved to system memory.
+  *
+- * To initialize the VRAM helper library call drmm_vram_helper_alloc_mm().
++ * To initialize the VRAM helper library call drmm_vram_helper_init().
+  * The function allocates and initializes an instance of &struct drm_vram_mm
+  * in &struct drm_device.vram_mm . Use &DRM_GEM_VRAM_DRIVER to initialize
+  * &struct drm_driver and  &DRM_VRAM_MM_FILE_OPERATIONS to initialize
+@@ -69,7 +69,7 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
+  *		// setup device, vram base and size
+  *		// ...
+  *
+- *		ret = drmm_vram_helper_alloc_mm(dev, vram_base, vram_size);
++ *		ret = drmm_vram_helper_init(dev, vram_base, vram_size);
+  *		if (ret)
+  *			return ret;
+  *		return 0;
+@@ -82,7 +82,7 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
+  * to userspace.
+  *
+  * You don't have to clean up the instance of VRAM MM.
+- * drmm_vram_helper_alloc_mm() is a managed interface that installs a
++ * drmm_vram_helper_init() is a managed interface that installs a
+  * clean-up handler to run during the DRM device's release.
+  *
+  * For drawing or scanout operations, rsp. buffer objects have to be pinned
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index 5afb3c544653c..4c64e2d4f6500 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -1262,6 +1262,8 @@ static const struct drm_crtc_helper_funcs dpu_crtc_helper_funcs = {
+ struct drm_crtc *dpu_crtc_init(struct drm_device *dev, struct drm_plane *plane,
+ 				struct drm_plane *cursor)
+ {
++	struct msm_drm_private *priv = dev->dev_private;
++	struct dpu_kms *dpu_kms = to_dpu_kms(priv->kms);
+ 	struct drm_crtc *crtc = NULL;
+ 	struct dpu_crtc *dpu_crtc = NULL;
+ 	int i;
+@@ -1293,7 +1295,8 @@ struct drm_crtc *dpu_crtc_init(struct drm_device *dev, struct drm_plane *plane,
+ 
+ 	drm_crtc_helper_add(crtc, &dpu_crtc_helper_funcs);
+ 
+-	drm_crtc_enable_color_mgmt(crtc, 0, true, 0);
++	if (dpu_kms->catalog->dspp_count)
++		drm_crtc_enable_color_mgmt(crtc, 0, true, 0);
+ 
+ 	/* save user friendly CRTC name for later */
+ 	snprintf(dpu_crtc->name, DPU_CRTC_NAME_SIZE, "crtc%u", crtc->base.id);
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 0bcccf422192c..4da8cea76a115 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -1267,9 +1267,9 @@ static int dp_display_remove(struct platform_device *pdev)
+ 	dp = container_of(g_dp_display,
+ 			struct dp_display_private, dp_display);
+ 
++	component_del(&pdev->dev, &dp_display_comp_ops);
+ 	dp_display_deinit_sub_modules(dp);
+ 
+-	component_del(&pdev->dev, &dp_display_comp_ops);
+ 	platform_set_drvdata(pdev, NULL);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c b/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
+index 16dbf0f353eda..1f5fb1547730d 100644
+--- a/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
++++ b/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
+@@ -192,15 +192,15 @@ static int sharp_nt_panel_enable(struct drm_panel *panel)
+ }
+ 
+ static const struct drm_display_mode default_mode = {
+-	.clock = 41118,
++	.clock = (540 + 48 + 32 + 80) * (960 + 3 + 10 + 15) * 60 / 1000,
+ 	.hdisplay = 540,
+ 	.hsync_start = 540 + 48,
+-	.hsync_end = 540 + 48 + 80,
+-	.htotal = 540 + 48 + 80 + 32,
++	.hsync_end = 540 + 48 + 32,
++	.htotal = 540 + 48 + 32 + 80,
+ 	.vdisplay = 960,
+ 	.vsync_start = 960 + 3,
+-	.vsync_end = 960 + 3 + 15,
+-	.vtotal = 960 + 3 + 15 + 1,
++	.vsync_end = 960 + 3 + 10,
++	.vtotal = 960 + 3 + 10 + 15,
+ };
+ 
+ static int sharp_nt_panel_get_modes(struct drm_panel *panel,
+@@ -280,6 +280,7 @@ static int sharp_nt_panel_probe(struct mipi_dsi_device *dsi)
+ 	dsi->lanes = 2;
+ 	dsi->format = MIPI_DSI_FMT_RGB888;
+ 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
++			MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
+ 			MIPI_DSI_MODE_VIDEO_HSE |
+ 			MIPI_DSI_CLOCK_NON_CONTINUOUS |
+ 			MIPI_DSI_MODE_EOT_PACKET;
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 1a87cc445b5e1..7b69f81444ebd 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -704,8 +704,8 @@ static const struct panel_desc ampire_am_480272h3tmqw_t01h = {
+ 	.num_modes = 1,
+ 	.bpc = 8,
+ 	.size = {
+-		.width = 105,
+-		.height = 67,
++		.width = 99,
++		.height = 58,
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+ };
+@@ -2091,6 +2091,7 @@ static const struct panel_desc innolux_at043tn24 = {
+ 		.height = 54,
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
++	.connector_type = DRM_MODE_CONNECTOR_DPI,
+ 	.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE,
+ };
+ 
+@@ -3152,6 +3153,7 @@ static const struct drm_display_mode powertip_ph800480t013_idf02_mode = {
+ 	.vsync_start = 480 + 49,
+ 	.vsync_end = 480 + 49 + 2,
+ 	.vtotal = 480 + 49 + 2 + 22,
++	.flags = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC,
+ };
+ 
+ static const struct panel_desc powertip_ph800480t013_idf02  = {
+diff --git a/drivers/gpu/drm/radeon/ci_dpm.c b/drivers/gpu/drm/radeon/ci_dpm.c
+index 886e9959496fe..f98df826972c9 100644
+--- a/drivers/gpu/drm/radeon/ci_dpm.c
++++ b/drivers/gpu/drm/radeon/ci_dpm.c
+@@ -5541,6 +5541,7 @@ static int ci_parse_power_table(struct radeon_device *rdev)
+ 	u8 frev, crev;
+ 	u8 *power_state_offset;
+ 	struct ci_ps *ps;
++	int ret;
+ 
+ 	if (!atom_parse_data_header(mode_info->atom_context, index, NULL,
+ 				   &frev, &crev, &data_offset))
+@@ -5570,11 +5571,15 @@ static int ci_parse_power_table(struct radeon_device *rdev)
+ 		non_clock_array_index = power_state->v2.nonClockInfoIndex;
+ 		non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
+ 			&non_clock_info_array->nonClockInfo[non_clock_array_index];
+-		if (!rdev->pm.power_state[i].clock_info)
+-			return -EINVAL;
++		if (!rdev->pm.power_state[i].clock_info) {
++			ret = -EINVAL;
++			goto err_free_ps;
++		}
+ 		ps = kzalloc(sizeof(struct ci_ps), GFP_KERNEL);
+-		if (ps == NULL)
+-			return -ENOMEM;
++		if (ps == NULL) {
++			ret = -ENOMEM;
++			goto err_free_ps;
++		}
+ 		rdev->pm.dpm.ps[i].ps_priv = ps;
+ 		ci_parse_pplib_non_clock_info(rdev, &rdev->pm.dpm.ps[i],
+ 					      non_clock_info,
+@@ -5614,6 +5619,12 @@ static int ci_parse_power_table(struct radeon_device *rdev)
+ 	}
+ 
+ 	return 0;
++
++err_free_ps:
++	for (i = 0; i < rdev->pm.dpm.num_ps; i++)
++		kfree(rdev->pm.dpm.ps[i].ps_priv);
++	kfree(rdev->pm.dpm.ps);
++	return ret;
+ }
+ 
+ static int ci_get_vbios_boot_values(struct radeon_device *rdev,
+@@ -5702,25 +5713,26 @@ int ci_dpm_init(struct radeon_device *rdev)
+ 
+ 	ret = ci_get_vbios_boot_values(rdev, &pi->vbios_boot_state);
+ 	if (ret) {
+-		ci_dpm_fini(rdev);
++		kfree(rdev->pm.dpm.priv);
+ 		return ret;
+ 	}
+ 
+ 	ret = r600_get_platform_caps(rdev);
+ 	if (ret) {
+-		ci_dpm_fini(rdev);
++		kfree(rdev->pm.dpm.priv);
+ 		return ret;
+ 	}
+ 
+ 	ret = r600_parse_extended_power_table(rdev);
+ 	if (ret) {
+-		ci_dpm_fini(rdev);
++		kfree(rdev->pm.dpm.priv);
+ 		return ret;
+ 	}
+ 
+ 	ret = ci_parse_power_table(rdev);
+ 	if (ret) {
+-		ci_dpm_fini(rdev);
++		kfree(rdev->pm.dpm.priv);
++		r600_free_extended_power_table(rdev);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/radeon/cypress_dpm.c b/drivers/gpu/drm/radeon/cypress_dpm.c
+index 35b177d777913..7120710d188fa 100644
+--- a/drivers/gpu/drm/radeon/cypress_dpm.c
++++ b/drivers/gpu/drm/radeon/cypress_dpm.c
+@@ -559,8 +559,12 @@ static int cypress_populate_mclk_value(struct radeon_device *rdev,
+ 						     ASIC_INTERNAL_MEMORY_SS, vco_freq)) {
+ 			u32 reference_clock = rdev->clock.mpll.reference_freq;
+ 			u32 decoded_ref = rv740_get_decoded_reference_divider(dividers.ref_div);
+-			u32 clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
+-			u32 clk_v = ss.percentage *
++			u32 clk_s, clk_v;
++
++			if (!decoded_ref)
++				return -EINVAL;
++			clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
++			clk_v = ss.percentage *
+ 				(0x4000 * dividers.whole_fb_div + 0x800 * dividers.frac_fb_div) / (clk_s * 625);
+ 
+ 			mpll_ss1 &= ~CLKV_MASK;
+diff --git a/drivers/gpu/drm/radeon/ni_dpm.c b/drivers/gpu/drm/radeon/ni_dpm.c
+index a5218747742ba..f79b348d36ad9 100644
+--- a/drivers/gpu/drm/radeon/ni_dpm.c
++++ b/drivers/gpu/drm/radeon/ni_dpm.c
+@@ -2240,8 +2240,12 @@ static int ni_populate_mclk_value(struct radeon_device *rdev,
+ 						     ASIC_INTERNAL_MEMORY_SS, vco_freq)) {
+ 			u32 reference_clock = rdev->clock.mpll.reference_freq;
+ 			u32 decoded_ref = rv740_get_decoded_reference_divider(dividers.ref_div);
+-			u32 clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
+-			u32 clk_v = ss.percentage *
++			u32 clk_s, clk_v;
++
++			if (!decoded_ref)
++				return -EINVAL;
++			clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
++			clk_v = ss.percentage *
+ 				(0x4000 * dividers.whole_fb_div + 0x800 * dividers.frac_fb_div) / (clk_s * 625);
+ 
+ 			mpll_ss1 &= ~CLKV_MASK;
+diff --git a/drivers/gpu/drm/radeon/rv740_dpm.c b/drivers/gpu/drm/radeon/rv740_dpm.c
+index 327d65a76e1f4..79b2de65e905e 100644
+--- a/drivers/gpu/drm/radeon/rv740_dpm.c
++++ b/drivers/gpu/drm/radeon/rv740_dpm.c
+@@ -250,8 +250,12 @@ int rv740_populate_mclk_value(struct radeon_device *rdev,
+ 						     ASIC_INTERNAL_MEMORY_SS, vco_freq)) {
+ 			u32 reference_clock = rdev->clock.mpll.reference_freq;
+ 			u32 decoded_ref = rv740_get_decoded_reference_divider(dividers.ref_div);
+-			u32 clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
+-			u32 clk_v = 0x40000 * ss.percentage *
++			u32 clk_s, clk_v;
++
++			if (!decoded_ref)
++				return -EINVAL;
++			clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
++			clk_v = 0x40000 * ss.percentage *
+ 				(dividers.whole_fb_div + (dividers.frac_fb_div / 8)) / (clk_s * 10000);
+ 
+ 			mpll_ss1 &= ~CLKV_MASK;
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index af98bfcde5189..65dde9df9793e 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -702,13 +702,13 @@ static void vop_crtc_atomic_disable(struct drm_crtc *crtc,
+ 	if (crtc->state->self_refresh_active)
+ 		rockchip_drm_set_win_enabled(crtc, false);
+ 
++	if (crtc->state->self_refresh_active)
++		goto out;
++
+ 	mutex_lock(&vop->vop_lock);
+ 
+ 	drm_crtc_vblank_off(crtc);
+ 
+-	if (crtc->state->self_refresh_active)
+-		goto out;
+-
+ 	/*
+ 	 * Vop standby will take effect at end of current frame,
+ 	 * if dsp hold valid irq happen, it means standby complete.
+@@ -740,9 +740,9 @@ static void vop_crtc_atomic_disable(struct drm_crtc *crtc,
+ 	vop_core_clks_disable(vop);
+ 	pm_runtime_put(vop->dev);
+ 
+-out:
+ 	mutex_unlock(&vop->vop_lock);
+ 
++out:
+ 	if (crtc->state->event && !crtc->state->active) {
+ 		spin_lock_irq(&crtc->dev->event_lock);
+ 		drm_crtc_send_vblank_event(crtc, crtc->state->event);
+diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+index 9f06dec0fc61d..bb43196d5d83e 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+@@ -777,21 +777,19 @@ static irqreturn_t sun4i_tcon_handler(int irq, void *private)
+ static int sun4i_tcon_init_clocks(struct device *dev,
+ 				  struct sun4i_tcon *tcon)
+ {
+-	tcon->clk = devm_clk_get(dev, "ahb");
++	tcon->clk = devm_clk_get_enabled(dev, "ahb");
+ 	if (IS_ERR(tcon->clk)) {
+ 		dev_err(dev, "Couldn't get the TCON bus clock\n");
+ 		return PTR_ERR(tcon->clk);
+ 	}
+-	clk_prepare_enable(tcon->clk);
+ 
+ 	if (tcon->quirks->has_channel_0) {
+-		tcon->sclk0 = devm_clk_get(dev, "tcon-ch0");
++		tcon->sclk0 = devm_clk_get_enabled(dev, "tcon-ch0");
+ 		if (IS_ERR(tcon->sclk0)) {
+ 			dev_err(dev, "Couldn't get the TCON channel 0 clock\n");
+ 			return PTR_ERR(tcon->sclk0);
+ 		}
+ 	}
+-	clk_prepare_enable(tcon->sclk0);
+ 
+ 	if (tcon->quirks->has_channel_1) {
+ 		tcon->sclk1 = devm_clk_get(dev, "tcon-ch1");
+@@ -804,12 +802,6 @@ static int sun4i_tcon_init_clocks(struct device *dev,
+ 	return 0;
+ }
+ 
+-static void sun4i_tcon_free_clocks(struct sun4i_tcon *tcon)
+-{
+-	clk_disable_unprepare(tcon->sclk0);
+-	clk_disable_unprepare(tcon->clk);
+-}
+-
+ static int sun4i_tcon_init_irq(struct device *dev,
+ 			       struct sun4i_tcon *tcon)
+ {
+@@ -1224,14 +1216,14 @@ static int sun4i_tcon_bind(struct device *dev, struct device *master,
+ 	ret = sun4i_tcon_init_regmap(dev, tcon);
+ 	if (ret) {
+ 		dev_err(dev, "Couldn't init our TCON regmap\n");
+-		goto err_free_clocks;
++		goto err_assert_reset;
+ 	}
+ 
+ 	if (tcon->quirks->has_channel_0) {
+ 		ret = sun4i_dclk_create(dev, tcon);
+ 		if (ret) {
+ 			dev_err(dev, "Couldn't create our TCON dot clock\n");
+-			goto err_free_clocks;
++			goto err_assert_reset;
+ 		}
+ 	}
+ 
+@@ -1294,8 +1286,6 @@ static int sun4i_tcon_bind(struct device *dev, struct device *master,
+ err_free_dotclock:
+ 	if (tcon->quirks->has_channel_0)
+ 		sun4i_dclk_free(tcon);
+-err_free_clocks:
+-	sun4i_tcon_free_clocks(tcon);
+ err_assert_reset:
+ 	reset_control_assert(tcon->lcd_rst);
+ 	return ret;
+@@ -1309,7 +1299,6 @@ static void sun4i_tcon_unbind(struct device *dev, struct device *master,
+ 	list_del(&tcon->list);
+ 	if (tcon->quirks->has_channel_0)
+ 		sun4i_dclk_free(tcon);
+-	sun4i_tcon_free_clocks(tcon);
+ }
+ 
+ static const struct component_ops sun4i_tcon_ops = {
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 2e32a21bbcbfc..6ac0c2d9a147c 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -4009,7 +4009,7 @@ static const struct hid_device_id hidpp_devices[] = {
+ 	{ /* wireless touchpad T651 */
+ 	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
+ 		USB_DEVICE_ID_LOGITECH_T651),
+-	  .driver_data = HIDPP_QUIRK_CLASS_WTP },
++	  .driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT },
+ 	{ /* Mouse Logitech Anywhere MX */
+ 	  LDJ_DEVICE(0x1017), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
+ 	{ /* Mouse Logitech Cube */
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 1bfcc94c1d234..de6fe98668700 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1307,7 +1307,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 	struct input_dev *pen_input = wacom->pen_input;
+ 	unsigned char *data = wacom->data;
+ 	int number_of_valid_frames = 0;
+-	int time_interval = 15000000;
++	ktime_t time_interval = 15000000;
+ 	ktime_t time_packet_received = ktime_get();
+ 	int i;
+ 
+@@ -1341,7 +1341,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 	if (number_of_valid_frames) {
+ 		if (wacom->hid_data.time_delayed)
+ 			time_interval = ktime_get() - wacom->hid_data.time_delayed;
+-		time_interval /= number_of_valid_frames;
++		time_interval = div_u64(time_interval, number_of_valid_frames);
+ 		wacom->hid_data.time_delayed = time_packet_received;
+ 	}
+ 
+@@ -1352,7 +1352,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 		bool range = frame[0] & 0x20;
+ 		bool invert = frame[0] & 0x10;
+ 		int frames_number_reversed = number_of_valid_frames - i - 1;
+-		int event_timestamp = time_packet_received - frames_number_reversed * time_interval;
++		ktime_t event_timestamp = time_packet_received - frames_number_reversed * time_interval;
+ 
+ 		if (!valid)
+ 			continue;
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index 88badfbae999c..166731292c359 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -320,7 +320,7 @@ struct hid_data {
+ 	int bat_connected;
+ 	int ps_connected;
+ 	bool pad_input_event_flag;
+-	int time_delayed;
++	ktime_t time_delayed;
+ };
+ 
+ struct wacom_remote_data {
+diff --git a/drivers/hwmon/gsc-hwmon.c b/drivers/hwmon/gsc-hwmon.c
+index f29ce49294daf..89d036bf88df7 100644
+--- a/drivers/hwmon/gsc-hwmon.c
++++ b/drivers/hwmon/gsc-hwmon.c
+@@ -82,8 +82,8 @@ static ssize_t pwm_auto_point_temp_store(struct device *dev,
+ 	if (kstrtol(buf, 10, &temp))
+ 		return -EINVAL;
+ 
+-	temp = clamp_val(temp, 0, 10000);
+-	temp = DIV_ROUND_CLOSEST(temp, 10);
++	temp = clamp_val(temp, 0, 100000);
++	temp = DIV_ROUND_CLOSEST(temp, 100);
+ 
+ 	regs[0] = temp & 0xff;
+ 	regs[1] = (temp >> 8) & 0xff;
+@@ -100,7 +100,7 @@ static ssize_t pwm_auto_point_pwm_show(struct device *dev,
+ {
+ 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
+ 
+-	return sprintf(buf, "%d\n", 255 * (50 + (attr->index * 10)) / 100);
++	return sprintf(buf, "%d\n", 255 * (50 + (attr->index * 10)));
+ }
+ 
+ static SENSOR_DEVICE_ATTR_RO(pwm1_auto_point1_pwm, pwm_auto_point_pwm, 0);
+diff --git a/drivers/hwmon/pmbus/adm1275.c b/drivers/hwmon/pmbus/adm1275.c
+index e7997f37b2666..c0618205758e9 100644
+--- a/drivers/hwmon/pmbus/adm1275.c
++++ b/drivers/hwmon/pmbus/adm1275.c
+@@ -37,10 +37,13 @@ enum chips { adm1075, adm1272, adm1275, adm1276, adm1278, adm1293, adm1294 };
+ 
+ #define ADM1272_IRANGE			BIT(0)
+ 
++#define ADM1278_TSFILT			BIT(15)
+ #define ADM1278_TEMP1_EN		BIT(3)
+ #define ADM1278_VIN_EN			BIT(2)
+ #define ADM1278_VOUT_EN			BIT(1)
+ 
++#define ADM1278_PMON_DEFCONFIG		(ADM1278_VOUT_EN | ADM1278_TEMP1_EN | ADM1278_TSFILT)
++
+ #define ADM1293_IRANGE_25		0
+ #define ADM1293_IRANGE_50		BIT(6)
+ #define ADM1293_IRANGE_100		BIT(7)
+@@ -462,6 +465,22 @@ static const struct i2c_device_id adm1275_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, adm1275_id);
+ 
++/* Enable VOUT & TEMP1 if not enabled (disabled by default) */
++static int adm1275_enable_vout_temp(struct i2c_client *client, int config)
++{
++	int ret;
++
++	if ((config & ADM1278_PMON_DEFCONFIG) != ADM1278_PMON_DEFCONFIG) {
++		config |= ADM1278_PMON_DEFCONFIG;
++		ret = i2c_smbus_write_word_data(client, ADM1275_PMON_CONFIG, config);
++		if (ret < 0) {
++			dev_err(&client->dev, "Failed to enable VOUT/TEMP1 monitoring\n");
++			return ret;
++		}
++	}
++	return 0;
++}
++
+ static int adm1275_probe(struct i2c_client *client)
+ {
+ 	s32 (*config_read_fn)(const struct i2c_client *client, u8 reg);
+@@ -475,6 +494,7 @@ static int adm1275_probe(struct i2c_client *client)
+ 	int vindex = -1, voindex = -1, cindex = -1, pindex = -1;
+ 	int tindex = -1;
+ 	u32 shunt;
++	u32 avg;
+ 
+ 	if (!i2c_check_functionality(client->adapter,
+ 				     I2C_FUNC_SMBUS_READ_BYTE_DATA
+@@ -611,24 +631,13 @@ static int adm1275_probe(struct i2c_client *client)
+ 		tindex = 8;
+ 
+ 		info->func[0] |= PMBUS_HAVE_PIN | PMBUS_HAVE_STATUS_INPUT |
+-			PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT;
+-
+-		/* Enable VOUT if not enabled (it is disabled by default) */
+-		if (!(config & ADM1278_VOUT_EN)) {
+-			config |= ADM1278_VOUT_EN;
+-			ret = i2c_smbus_write_byte_data(client,
+-							ADM1275_PMON_CONFIG,
+-							config);
+-			if (ret < 0) {
+-				dev_err(&client->dev,
+-					"Failed to enable VOUT monitoring\n");
+-				return -ENODEV;
+-			}
+-		}
++			PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT |
++			PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP;
++
++		ret = adm1275_enable_vout_temp(client, config);
++		if (ret)
++			return ret;
+ 
+-		if (config & ADM1278_TEMP1_EN)
+-			info->func[0] |=
+-				PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP;
+ 		if (config & ADM1278_VIN_EN)
+ 			info->func[0] |= PMBUS_HAVE_VIN;
+ 		break;
+@@ -685,19 +694,9 @@ static int adm1275_probe(struct i2c_client *client)
+ 			PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT |
+ 			PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP;
+ 
+-		/* Enable VOUT & TEMP1 if not enabled (disabled by default) */
+-		if ((config & (ADM1278_VOUT_EN | ADM1278_TEMP1_EN)) !=
+-		    (ADM1278_VOUT_EN | ADM1278_TEMP1_EN)) {
+-			config |= ADM1278_VOUT_EN | ADM1278_TEMP1_EN;
+-			ret = i2c_smbus_write_byte_data(client,
+-							ADM1275_PMON_CONFIG,
+-							config);
+-			if (ret < 0) {
+-				dev_err(&client->dev,
+-					"Failed to enable VOUT monitoring\n");
+-				return -ENODEV;
+-			}
+-		}
++		ret = adm1275_enable_vout_temp(client, config);
++		if (ret)
++			return ret;
+ 
+ 		if (config & ADM1278_VIN_EN)
+ 			info->func[0] |= PMBUS_HAVE_VIN;
+@@ -758,6 +757,43 @@ static int adm1275_probe(struct i2c_client *client)
+ 		return -ENODEV;
+ 	}
+ 
++	if (data->have_power_sampling &&
++	    of_property_read_u32(client->dev.of_node,
++				 "adi,power-sample-average", &avg) == 0) {
++		if (!avg || avg > ADM1275_SAMPLES_AVG_MAX ||
++		    BIT(__fls(avg)) != avg) {
++			dev_err(&client->dev,
++				"Invalid number of power samples");
++			return -EINVAL;
++		}
++		ret = adm1275_write_pmon_config(data, client, true,
++						ilog2(avg));
++		if (ret < 0) {
++			dev_err(&client->dev,
++				"Setting power sample averaging failed with error %d",
++				ret);
++			return ret;
++		}
++	}
++
++	if (of_property_read_u32(client->dev.of_node,
++				"adi,volt-curr-sample-average", &avg) == 0) {
++		if (!avg || avg > ADM1275_SAMPLES_AVG_MAX ||
++		    BIT(__fls(avg)) != avg) {
++			dev_err(&client->dev,
++				"Invalid number of voltage/current samples");
++			return -EINVAL;
++		}
++		ret = adm1275_write_pmon_config(data, client, false,
++						ilog2(avg));
++		if (ret < 0) {
++			dev_err(&client->dev,
++				"Setting voltage and current sample averaging failed with error %d",
++				ret);
++			return ret;
++		}
++	}
++
+ 	if (voindex < 0)
+ 		voindex = vindex;
+ 	if (vindex >= 0) {
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index 5ddc8103503b5..c4b805b045316 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -1376,13 +1376,8 @@ static int coresight_remove_match(struct device *dev, void *data)
+ 		if (csdev->dev.fwnode == conn->child_fwnode) {
+ 			iterator->orphan = true;
+ 			coresight_remove_links(iterator, conn);
+-			/*
+-			 * Drop the reference to the handle for the remote
+-			 * device acquired in parsing the connections from
+-			 * platform data.
+-			 */
+-			fwnode_handle_put(conn->child_fwnode);
+-			conn->child_fwnode = NULL;
++
++			conn->child_dev = NULL;
+ 			/* No need to continue */
+ 			break;
+ 		}
+diff --git a/drivers/i2c/busses/i2c-qup.c b/drivers/i2c/busses/i2c-qup.c
+index 5a47915869ae4..576c12670bd88 100644
+--- a/drivers/i2c/busses/i2c-qup.c
++++ b/drivers/i2c/busses/i2c-qup.c
+@@ -1752,16 +1752,21 @@ nodma:
+ 	if (!clk_freq || clk_freq > I2C_MAX_FAST_MODE_PLUS_FREQ) {
+ 		dev_err(qup->dev, "clock frequency not supported %d\n",
+ 			clk_freq);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto fail_dma;
+ 	}
+ 
+ 	qup->base = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(qup->base))
+-		return PTR_ERR(qup->base);
++	if (IS_ERR(qup->base)) {
++		ret = PTR_ERR(qup->base);
++		goto fail_dma;
++	}
+ 
+ 	qup->irq = platform_get_irq(pdev, 0);
+-	if (qup->irq < 0)
+-		return qup->irq;
++	if (qup->irq < 0) {
++		ret = qup->irq;
++		goto fail_dma;
++	}
+ 
+ 	if (has_acpi_companion(qup->dev)) {
+ 		ret = device_property_read_u32(qup->dev,
+@@ -1775,13 +1780,15 @@ nodma:
+ 		qup->clk = devm_clk_get(qup->dev, "core");
+ 		if (IS_ERR(qup->clk)) {
+ 			dev_err(qup->dev, "Could not get core clock\n");
+-			return PTR_ERR(qup->clk);
++			ret = PTR_ERR(qup->clk);
++			goto fail_dma;
+ 		}
+ 
+ 		qup->pclk = devm_clk_get(qup->dev, "iface");
+ 		if (IS_ERR(qup->pclk)) {
+ 			dev_err(qup->dev, "Could not get iface clock\n");
+-			return PTR_ERR(qup->pclk);
++			ret = PTR_ERR(qup->pclk);
++			goto fail_dma;
+ 		}
+ 		qup_i2c_enable_clocks(qup);
+ 		src_clk_freq = clk_get_rate(qup->clk);
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index 3b564e68130b5..568e97c3896d1 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -375,6 +375,9 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
+ 	struct xiic_i2c *i2c = dev_id;
+ 	u32 pend, isr, ier;
+ 	u32 clr = 0;
++	int xfer_more = 0;
++	int wakeup_req = 0;
++	int wakeup_code = 0;
+ 
+ 	/* Get the interrupt Status from the IPIF. There is no clearing of
+ 	 * interrupts in the IPIF. Interrupts must be cleared at the source.
+@@ -411,10 +414,16 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
+ 		 */
+ 		xiic_reinit(i2c);
+ 
+-		if (i2c->rx_msg)
+-			xiic_wakeup(i2c, STATE_ERROR);
+-		if (i2c->tx_msg)
+-			xiic_wakeup(i2c, STATE_ERROR);
++		if (i2c->rx_msg) {
++			wakeup_req = 1;
++			wakeup_code = STATE_ERROR;
++		}
++		if (i2c->tx_msg) {
++			wakeup_req = 1;
++			wakeup_code = STATE_ERROR;
++		}
++		/* don't try to handle other events */
++		goto out;
+ 	}
+ 	if (pend & XIIC_INTR_RX_FULL_MASK) {
+ 		/* Receive register/FIFO is full */
+@@ -448,8 +457,7 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
+ 				i2c->tx_msg++;
+ 				dev_dbg(i2c->adap.dev.parent,
+ 					"%s will start next...\n", __func__);
+-
+-				__xiic_start_xfer(i2c);
++				xfer_more = 1;
+ 			}
+ 		}
+ 	}
+@@ -463,11 +471,13 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
+ 		if (!i2c->tx_msg)
+ 			goto out;
+ 
+-		if ((i2c->nmsgs == 1) && !i2c->rx_msg &&
+-			xiic_tx_space(i2c) == 0)
+-			xiic_wakeup(i2c, STATE_DONE);
++		wakeup_req = 1;
++
++		if (i2c->nmsgs == 1 && !i2c->rx_msg &&
++		    xiic_tx_space(i2c) == 0)
++			wakeup_code = STATE_DONE;
+ 		else
+-			xiic_wakeup(i2c, STATE_ERROR);
++			wakeup_code = STATE_ERROR;
+ 	}
+ 	if (pend & (XIIC_INTR_TX_EMPTY_MASK | XIIC_INTR_TX_HALF_MASK)) {
+ 		/* Transmit register/FIFO is empty or ½ empty */
+@@ -491,7 +501,7 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
+ 			if (i2c->nmsgs > 1) {
+ 				i2c->nmsgs--;
+ 				i2c->tx_msg++;
+-				__xiic_start_xfer(i2c);
++				xfer_more = 1;
+ 			} else {
+ 				xiic_irq_dis(i2c, XIIC_INTR_TX_HALF_MASK);
+ 
+@@ -509,6 +519,13 @@ out:
+ 	dev_dbg(i2c->adap.dev.parent, "%s clr: 0x%x\n", __func__, clr);
+ 
+ 	xiic_setreg32(i2c, XIIC_IISR_OFFSET, clr);
++	if (xfer_more)
++		__xiic_start_xfer(i2c);
++	if (wakeup_req)
++		xiic_wakeup(i2c, wakeup_code);
++
++	WARN_ON(xfer_more && wakeup_req);
++
+ 	mutex_unlock(&i2c->lock);
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/iio/adc/meson_saradc.c b/drivers/iio/adc/meson_saradc.c
+index e03988698755f..e771299fac9df 100644
+--- a/drivers/iio/adc/meson_saradc.c
++++ b/drivers/iio/adc/meson_saradc.c
+@@ -71,7 +71,7 @@
+ 	#define MESON_SAR_ADC_REG3_PANEL_DETECT_COUNT_MASK	GENMASK(20, 18)
+ 	#define MESON_SAR_ADC_REG3_PANEL_DETECT_FILTER_TB_MASK	GENMASK(17, 16)
+ 	#define MESON_SAR_ADC_REG3_ADC_CLK_DIV_SHIFT		10
+-	#define MESON_SAR_ADC_REG3_ADC_CLK_DIV_WIDTH		5
++	#define MESON_SAR_ADC_REG3_ADC_CLK_DIV_WIDTH		6
+ 	#define MESON_SAR_ADC_REG3_BLOCK_DLY_SEL_MASK		GENMASK(9, 8)
+ 	#define MESON_SAR_ADC_REG3_BLOCK_DLY_MASK		GENMASK(7, 0)
+ 
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index db24f7dfa00f7..805678f6fe576 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1792,6 +1792,14 @@ static void cma_cancel_operation(struct rdma_id_private *id_priv,
+ {
+ 	switch (state) {
+ 	case RDMA_CM_ADDR_QUERY:
++		/*
++		 * We can avoid doing the rdma_addr_cancel() based on state,
++		 * only RDMA_CM_ADDR_QUERY has a work that could still execute.
++		 * Notice that the addr_handler work could still be exiting
++		 * outside this state, however due to the interaction with the
++		 * handler_mutex the work is guaranteed not to touch id_priv
++		 * during exit.
++		 */
+ 		rdma_addr_cancel(&id_priv->id.route.addr.dev_addr);
+ 		break;
+ 	case RDMA_CM_ROUTE_QUERY:
+@@ -3401,6 +3409,21 @@ int rdma_resolve_addr(struct rdma_cm_id *id, struct sockaddr *src_addr,
+ 		if (dst_addr->sa_family == AF_IB) {
+ 			ret = cma_resolve_ib_addr(id_priv);
+ 		} else {
++			/*
++			 * The FSM can return back to RDMA_CM_ADDR_BOUND after
++			 * rdma_resolve_ip() is called, eg through the error
++			 * path in addr_handler(). If this happens the existing
++			 * request must be canceled before issuing a new one.
++			 * Since canceling a request is a bit slow and this
++			 * oddball path is rare, keep track once a request has
++			 * been issued. The track turns out to be a permanent
++			 * state since this is the only cancel as it is
++			 * immediately before rdma_resolve_ip().
++			 */
++			if (id_priv->used_resolve_ip)
++				rdma_addr_cancel(&id->route.addr.dev_addr);
++			else
++				id_priv->used_resolve_ip = 1;
+ 			ret = rdma_resolve_ip(cma_src_addr(id_priv), dst_addr,
+ 					      &id->route.addr.dev_addr,
+ 					      timeout_ms, addr_handler,
+diff --git a/drivers/infiniband/core/cma_priv.h b/drivers/infiniband/core/cma_priv.h
+index caece96ebcf5f..b53f4fa5e3fb5 100644
+--- a/drivers/infiniband/core/cma_priv.h
++++ b/drivers/infiniband/core/cma_priv.h
+@@ -89,6 +89,7 @@ struct rdma_id_private {
+ 	u8			reuseaddr;
+ 	u8			afonly;
+ 	u8			timeout;
++	u8 used_resolve_ip;
+ 	enum ib_gid_type	gid_type;
+ 
+ 	/*
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index 5b7abcf102fe9..3c29fd04b3016 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -600,6 +600,17 @@ struct ib_device *_ib_alloc_device(size_t size)
+ 	init_completion(&device->unreg_completion);
+ 	INIT_WORK(&device->unregistration_work, ib_unregister_work);
+ 
++	device->uverbs_ex_cmd_mask =
++		BIT_ULL(IB_USER_VERBS_EX_CMD_CREATE_FLOW) |
++		BIT_ULL(IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL) |
++		BIT_ULL(IB_USER_VERBS_EX_CMD_CREATE_WQ) |
++		BIT_ULL(IB_USER_VERBS_EX_CMD_DESTROY_FLOW) |
++		BIT_ULL(IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL) |
++		BIT_ULL(IB_USER_VERBS_EX_CMD_DESTROY_WQ) |
++		BIT_ULL(IB_USER_VERBS_EX_CMD_MODIFY_CQ) |
++		BIT_ULL(IB_USER_VERBS_EX_CMD_MODIFY_WQ) |
++		BIT_ULL(IB_USER_VERBS_EX_CMD_QUERY_DEVICE);
++
+ 	return device;
+ }
+ EXPORT_SYMBOL(_ib_alloc_device);
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 09cf470c08d65..158f9eadc4e95 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -3778,7 +3778,7 @@ const struct uapi_definition uverbs_def_write_intf[] = {
+ 			IB_USER_VERBS_EX_CMD_MODIFY_CQ,
+ 			ib_uverbs_ex_modify_cq,
+ 			UAPI_DEF_WRITE_I(struct ib_uverbs_ex_modify_cq),
+-			UAPI_DEF_METHOD_NEEDS_FN(create_cq))),
++			UAPI_DEF_METHOD_NEEDS_FN(modify_cq))),
+ 
+ 	DECLARE_UVERBS_OBJECT(
+ 		UVERBS_OBJECT_DEVICE,
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 9ef6aea29ff16..8a618769915d5 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -294,15 +294,21 @@ static void bnxt_re_start_irq(void *handle, struct bnxt_msix_entry *ent)
+ 	for (indx = 0; indx < rdev->num_msix; indx++)
+ 		rdev->msix_entries[indx].vector = ent[indx].vector;
+ 
+-	bnxt_qplib_rcfw_start_irq(rcfw, msix_ent[BNXT_RE_AEQ_IDX].vector,
+-				  false);
++	rc = bnxt_qplib_rcfw_start_irq(rcfw, msix_ent[BNXT_RE_AEQ_IDX].vector,
++				       false);
++	if (rc) {
++		ibdev_warn(&rdev->ibdev, "Failed to reinit CREQ\n");
++		return;
++	}
+ 	for (indx = BNXT_RE_NQ_IDX ; indx < rdev->num_msix; indx++) {
+ 		nq = &rdev->nq[indx - 1];
+ 		rc = bnxt_qplib_nq_start_irq(nq, indx - 1,
+ 					     msix_ent[indx].vector, false);
+-		if (rc)
++		if (rc) {
+ 			ibdev_warn(&rdev->ibdev, "Failed to reinit NQ index %d\n",
+ 				   indx - 1);
++			return;
++		}
+ 	}
+ }
+ 
+@@ -1185,12 +1191,6 @@ static int bnxt_re_update_gid(struct bnxt_re_dev *rdev)
+ 	if (!ib_device_try_get(&rdev->ibdev))
+ 		return 0;
+ 
+-	if (!sgid_tbl) {
+-		ibdev_err(&rdev->ibdev, "QPLIB: SGID table not allocated");
+-		rc = -EINVAL;
+-		goto out;
+-	}
+-
+ 	for (index = 0; index < sgid_tbl->active; index++) {
+ 		gid_idx = sgid_tbl->hw_id[index];
+ 
+@@ -1208,7 +1208,7 @@ static int bnxt_re_update_gid(struct bnxt_re_dev *rdev)
+ 		rc = bnxt_qplib_update_sgid(sgid_tbl, &gid, gid_idx,
+ 					    rdev->qplib_res.netdev->dev_addr);
+ 	}
+-out:
++
+ 	ib_device_put(&rdev->ibdev);
+ 	return rc;
+ }
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index b26a89187a192..d44b6a5c90b57 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -404,6 +404,9 @@ static irqreturn_t bnxt_qplib_nq_irq(int irq, void *dev_instance)
+ 
+ void bnxt_qplib_nq_stop_irq(struct bnxt_qplib_nq *nq, bool kill)
+ {
++	if (!nq->requested)
++		return;
++
+ 	tasklet_disable(&nq->nq_tasklet);
+ 	/* Mask h/w interrupt */
+ 	bnxt_qplib_ring_nq_db(&nq->nq_db.dbinfo, nq->res->cctx, false);
+@@ -411,11 +414,12 @@ void bnxt_qplib_nq_stop_irq(struct bnxt_qplib_nq *nq, bool kill)
+ 	synchronize_irq(nq->msix_vec);
+ 	if (kill)
+ 		tasklet_kill(&nq->nq_tasklet);
+-	if (nq->requested) {
+-		irq_set_affinity_hint(nq->msix_vec, NULL);
+-		free_irq(nq->msix_vec, nq);
+-		nq->requested = false;
+-	}
++
++	irq_set_affinity_hint(nq->msix_vec, NULL);
++	free_irq(nq->msix_vec, nq);
++	kfree(nq->name);
++	nq->name = NULL;
++	nq->requested = false;
+ }
+ 
+ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
+@@ -441,6 +445,7 @@ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
+ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
+ 			    int msix_vector, bool need_init)
+ {
++	struct bnxt_qplib_res *res = nq->res;
+ 	int rc;
+ 
+ 	if (nq->requested)
+@@ -452,10 +457,17 @@ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
+ 	else
+ 		tasklet_enable(&nq->nq_tasklet);
+ 
+-	snprintf(nq->name, sizeof(nq->name), "bnxt_qplib_nq-%d", nq_indx);
++	nq->name = kasprintf(GFP_KERNEL, "bnxt_re-nq-%d@pci:%s",
++			     nq_indx, pci_name(res->pdev));
++	if (!nq->name)
++		return -ENOMEM;
+ 	rc = request_irq(nq->msix_vec, bnxt_qplib_nq_irq, 0, nq->name, nq);
+-	if (rc)
++	if (rc) {
++		kfree(nq->name);
++		nq->name = NULL;
++		tasklet_disable(&nq->nq_tasklet);
+ 		return rc;
++	}
+ 
+ 	cpumask_clear(&nq->mask);
+ 	cpumask_set_cpu(nq_indx, &nq->mask);
+@@ -466,7 +478,7 @@ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
+ 			 nq->msix_vec, nq_indx);
+ 	}
+ 	nq->requested = true;
+-	bnxt_qplib_ring_nq_db(&nq->nq_db.dbinfo, nq->res->cctx, true);
++	bnxt_qplib_ring_nq_db(&nq->nq_db.dbinfo, res->cctx, true);
+ 
+ 	return rc;
+ }
+@@ -1599,7 +1611,7 @@ static int bnxt_qplib_put_inline(struct bnxt_qplib_qp *qp,
+ 		il_src = (void *)wqe->sg_list[indx].addr;
+ 		t_len += len;
+ 		if (t_len > qp->max_inline_data)
+-			goto bad;
++			return -ENOMEM;
+ 		while (len) {
+ 			if (pull_dst) {
+ 				pull_dst = false;
+@@ -1623,8 +1635,6 @@ static int bnxt_qplib_put_inline(struct bnxt_qplib_qp *qp,
+ 	}
+ 
+ 	return t_len;
+-bad:
+-	return -ENOMEM;
+ }
+ 
+ static u32 bnxt_qplib_put_sges(struct bnxt_qplib_hwq *hwq,
+@@ -2054,7 +2064,7 @@ int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
+ 	hwq_attr.sginfo = &cq->sg_info;
+ 	rc = bnxt_qplib_alloc_init_hwq(&cq->hwq, &hwq_attr);
+ 	if (rc)
+-		goto exit;
++		return rc;
+ 
+ 	RCFW_CMD_PREP(req, CREATE_CQ, cmd_flags);
+ 
+@@ -2095,7 +2105,6 @@ int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
+ 
+ fail:
+ 	bnxt_qplib_free_hwq(res, &cq->hwq);
+-exit:
+ 	return rc;
+ }
+ 
+@@ -2723,11 +2732,8 @@ static int bnxt_qplib_cq_process_terminal(struct bnxt_qplib_cq *cq,
+ 
+ 	qp = (struct bnxt_qplib_qp *)((unsigned long)
+ 				      le64_to_cpu(hwcqe->qp_handle));
+-	if (!qp) {
+-		dev_err(&cq->hwq.pdev->dev,
+-			"FP: CQ Process terminal qp is NULL\n");
++	if (!qp)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* Must block new posting of SQ and RQ */
+ 	qp->state = CMDQ_MODIFY_QP_NEW_STATE_ERR;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index f50784405e27e..667f93d90045e 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -469,7 +469,7 @@ typedef int (*srqn_handler_t)(struct bnxt_qplib_nq *nq,
+ struct bnxt_qplib_nq {
+ 	struct pci_dev			*pdev;
+ 	struct bnxt_qplib_res		*res;
+-	char				name[32];
++	char				*name;
+ 	struct bnxt_qplib_hwq		hwq;
+ 	struct bnxt_qplib_nq_db		nq_db;
+ 	u16				ring_id;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index 5759027914b01..2b0c3a86293cf 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -181,7 +181,7 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw, struct cmdq_base *req,
+ 	} while (size > 0);
+ 	cmdq->seq_num++;
+ 
+-	cmdq_prod = hwq->prod;
++	cmdq_prod = hwq->prod & 0xFFFF;
+ 	if (test_bit(FIRMWARE_FIRST_FLAG, &cmdq->flags)) {
+ 		/* The very first doorbell write
+ 		 * is required to set this flag
+@@ -295,7 +295,8 @@ static int bnxt_qplib_process_func_event(struct bnxt_qplib_rcfw *rcfw,
+ }
+ 
+ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+-				       struct creq_qp_event *qp_event)
++				       struct creq_qp_event *qp_event,
++				       u32 *num_wait)
+ {
+ 	struct creq_qp_error_notification *err_event;
+ 	struct bnxt_qplib_hwq *hwq = &rcfw->cmdq.hwq;
+@@ -304,6 +305,7 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ 	u16 cbit, blocked = 0;
+ 	struct pci_dev *pdev;
+ 	unsigned long flags;
++	u32 wait_cmds = 0;
+ 	__le16  mcookie;
+ 	u16 cookie;
+ 	int rc = 0;
+@@ -363,9 +365,10 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ 		crsqe->req_size = 0;
+ 
+ 		if (!blocked)
+-			wake_up(&rcfw->cmdq.waitq);
++			wait_cmds++;
+ 		spin_unlock_irqrestore(&hwq->lock, flags);
+ 	}
++	*num_wait += wait_cmds;
+ 	return rc;
+ }
+ 
+@@ -379,6 +382,7 @@ static void bnxt_qplib_service_creq(struct tasklet_struct *t)
+ 	struct creq_base *creqe;
+ 	u32 sw_cons, raw_cons;
+ 	unsigned long flags;
++	u32 num_wakeup = 0;
+ 
+ 	/* Service the CREQ until budget is over */
+ 	spin_lock_irqsave(&hwq->lock, flags);
+@@ -397,7 +401,8 @@ static void bnxt_qplib_service_creq(struct tasklet_struct *t)
+ 		switch (type) {
+ 		case CREQ_BASE_TYPE_QP_EVENT:
+ 			bnxt_qplib_process_qp_event
+-				(rcfw, (struct creq_qp_event *)creqe);
++				(rcfw, (struct creq_qp_event *)creqe,
++				 &num_wakeup);
+ 			creq->stats.creq_qp_event_processed++;
+ 			break;
+ 		case CREQ_BASE_TYPE_FUNC_EVENT:
+@@ -425,6 +430,8 @@ static void bnxt_qplib_service_creq(struct tasklet_struct *t)
+ 				      rcfw->res->cctx, true);
+ 	}
+ 	spin_unlock_irqrestore(&hwq->lock, flags);
++	if (num_wakeup)
++		wake_up_nr(&rcfw->cmdq.waitq, num_wakeup);
+ }
+ 
+ static irqreturn_t bnxt_qplib_creq_irq(int irq, void *dev_instance)
+@@ -595,7 +602,7 @@ int bnxt_qplib_alloc_rcfw_channel(struct bnxt_qplib_res *res,
+ 		rcfw->cmdq_depth = BNXT_QPLIB_CMDQE_MAX_CNT_8192;
+ 
+ 	sginfo.pgsize = bnxt_qplib_cmdqe_page_size(rcfw->cmdq_depth);
+-	hwq_attr.depth = rcfw->cmdq_depth;
++	hwq_attr.depth = rcfw->cmdq_depth & 0x7FFFFFFF;
+ 	hwq_attr.stride = BNXT_QPLIB_CMDQE_UNITS;
+ 	hwq_attr.type = HWQ_TYPE_CTX;
+ 	if (bnxt_qplib_alloc_init_hwq(&cmdq->hwq, &hwq_attr)) {
+@@ -633,6 +640,10 @@ void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill)
+ 	struct bnxt_qplib_creq_ctx *creq;
+ 
+ 	creq = &rcfw->creq;
++
++	if (!creq->requested)
++		return;
++
+ 	tasklet_disable(&creq->creq_tasklet);
+ 	/* Mask h/w interrupts */
+ 	bnxt_qplib_ring_nq_db(&creq->creq_db.dbinfo, rcfw->res->cctx, false);
+@@ -641,10 +652,10 @@ void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill)
+ 	if (kill)
+ 		tasklet_kill(&creq->creq_tasklet);
+ 
+-	if (creq->requested) {
+-		free_irq(creq->msix_vec, rcfw);
+-		creq->requested = false;
+-	}
++	free_irq(creq->msix_vec, rcfw);
++	kfree(creq->irq_name);
++	creq->irq_name = NULL;
++	creq->requested = false;
+ }
+ 
+ void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw)
+@@ -676,9 +687,11 @@ int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector,
+ 			      bool need_init)
+ {
+ 	struct bnxt_qplib_creq_ctx *creq;
++	struct bnxt_qplib_res *res;
+ 	int rc;
+ 
+ 	creq = &rcfw->creq;
++	res = rcfw->res;
+ 
+ 	if (creq->requested)
+ 		return -EFAULT;
+@@ -688,13 +701,22 @@ int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector,
+ 		tasklet_setup(&creq->creq_tasklet, bnxt_qplib_service_creq);
+ 	else
+ 		tasklet_enable(&creq->creq_tasklet);
++
++	creq->irq_name = kasprintf(GFP_KERNEL, "bnxt_re-creq@pci:%s",
++				   pci_name(res->pdev));
++	if (!creq->irq_name)
++		return -ENOMEM;
+ 	rc = request_irq(creq->msix_vec, bnxt_qplib_creq_irq, 0,
+-			 "bnxt_qplib_creq", rcfw);
+-	if (rc)
++			 creq->irq_name, rcfw);
++	if (rc) {
++		kfree(creq->irq_name);
++		creq->irq_name = NULL;
++		tasklet_disable(&creq->creq_tasklet);
+ 		return rc;
++	}
+ 	creq->requested = true;
+ 
+-	bnxt_qplib_ring_nq_db(&creq->creq_db.dbinfo, rcfw->res->cctx, true);
++	bnxt_qplib_ring_nq_db(&creq->creq_db.dbinfo, res->cctx, true);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+index 6953f4e53dd20..7df7170c80e06 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+@@ -172,6 +172,7 @@ struct bnxt_qplib_creq_ctx {
+ 	u16				ring_id;
+ 	int				msix_vec;
+ 	bool				requested; /*irq handler installed */
++	char				*irq_name;
+ };
+ 
+ /* RCFW Communication Channels */
+diff --git a/drivers/infiniband/hw/efa/efa_main.c b/drivers/infiniband/hw/efa/efa_main.c
+index ffdd18f4217f5..cd41cd114ab63 100644
+--- a/drivers/infiniband/hw/efa/efa_main.c
++++ b/drivers/infiniband/hw/efa/efa_main.c
+@@ -326,9 +326,6 @@ static int efa_ib_device_add(struct efa_dev *dev)
+ 		(1ull << IB_USER_VERBS_CMD_CREATE_AH) |
+ 		(1ull << IB_USER_VERBS_CMD_DESTROY_AH);
+ 
+-	dev->ibdev.uverbs_ex_cmd_mask =
+-		(1ull << IB_USER_VERBS_EX_CMD_QUERY_DEVICE);
+-
+ 	ib_set_device_ops(&dev->ibdev, &efa_dev_ops);
+ 
+ 	err = ib_register_device(&dev->ibdev, "efa_%d", &pdev->dev);
+diff --git a/drivers/infiniband/hw/hfi1/ipoib_tx.c b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+index 956fc3fd88b99..1880484681357 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib_tx.c
++++ b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+@@ -251,11 +251,11 @@ static int hfi1_ipoib_build_ulp_payload(struct ipoib_txreq *tx,
+ 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ 
+ 		ret = sdma_txadd_page(dd,
+-				      NULL,
+ 				      txreq,
+ 				      skb_frag_page(frag),
+ 				      frag->bv_offset,
+-				      skb_frag_size(frag));
++				      skb_frag_size(frag),
++				      NULL, NULL, NULL);
+ 		if (unlikely(ret))
+ 			break;
+ 	}
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
+index d331184ded308..a501b7a682fca 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
+@@ -60,8 +60,7 @@ static int mmu_notifier_range_start(struct mmu_notifier *,
+ 		const struct mmu_notifier_range *);
+ static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *,
+ 					   unsigned long, unsigned long);
+-static void do_remove(struct mmu_rb_handler *handler,
+-		      struct list_head *del_list);
++static void release_immediate(struct kref *refcount);
+ static void handle_remove(struct work_struct *work);
+ 
+ static const struct mmu_notifier_ops mn_opts = {
+@@ -144,7 +143,11 @@ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler)
+ 	}
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	do_remove(handler, &del_list);
++	while (!list_empty(&del_list)) {
++		rbnode = list_first_entry(&del_list, struct mmu_rb_node, list);
++		list_del(&rbnode->list);
++		kref_put(&rbnode->refcount, release_immediate);
++	}
+ 
+ 	/* Now the mm may be freed. */
+ 	mmdrop(handler->mn.mm);
+@@ -172,12 +175,6 @@ int hfi1_mmu_rb_insert(struct mmu_rb_handler *handler,
+ 	}
+ 	__mmu_int_rb_insert(mnode, &handler->root);
+ 	list_add_tail(&mnode->list, &handler->lru_list);
+-
+-	ret = handler->ops->insert(handler->ops_arg, mnode);
+-	if (ret) {
+-		__mmu_int_rb_remove(mnode, &handler->root);
+-		list_del(&mnode->list); /* remove from LRU list */
+-	}
+ 	mnode->handler = handler;
+ unlock:
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+@@ -221,6 +218,48 @@ static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler,
+ 	return node;
+ }
+ 
++/*
++ * Must NOT call while holding mnode->handler->lock.
++ * mnode->handler->ops->remove() may sleep and mnode->handler->lock is a
++ * spinlock.
++ */
++static void release_immediate(struct kref *refcount)
++{
++	struct mmu_rb_node *mnode =
++		container_of(refcount, struct mmu_rb_node, refcount);
++	mnode->handler->ops->remove(mnode->handler->ops_arg, mnode);
++}
++
++/* Caller must hold mnode->handler->lock */
++static void release_nolock(struct kref *refcount)
++{
++	struct mmu_rb_node *mnode =
++		container_of(refcount, struct mmu_rb_node, refcount);
++	list_move(&mnode->list, &mnode->handler->del_list);
++	queue_work(mnode->handler->wq, &mnode->handler->del_work);
++}
++
++/*
++ * struct mmu_rb_node->refcount kref_put() callback.
++ * Adds mmu_rb_node to mmu_rb_node->handler->del_list and queues
++ * handler->del_work on handler->wq.
++ * Does not remove mmu_rb_node from handler->lru_list or handler->rb_root.
++ * Acquires mmu_rb_node->handler->lock; do not call while already holding
++ * handler->lock.
++ */
++void hfi1_mmu_rb_release(struct kref *refcount)
++{
++	struct mmu_rb_node *mnode =
++		container_of(refcount, struct mmu_rb_node, refcount);
++	struct mmu_rb_handler *handler = mnode->handler;
++	unsigned long flags;
++
++	spin_lock_irqsave(&handler->lock, flags);
++	list_move(&mnode->list, &mnode->handler->del_list);
++	spin_unlock_irqrestore(&handler->lock, flags);
++	queue_work(handler->wq, &handler->del_work);
++}
++
+ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ {
+ 	struct mmu_rb_node *rbnode, *ptr;
+@@ -235,6 +274,10 @@ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ 
+ 	spin_lock_irqsave(&handler->lock, flags);
+ 	list_for_each_entry_safe(rbnode, ptr, &handler->lru_list, list) {
++		/* refcount == 1 implies mmu_rb_handler has only rbnode ref */
++		if (kref_read(&rbnode->refcount) > 1)
++			continue;
++
+ 		if (handler->ops->evict(handler->ops_arg, rbnode, evict_arg,
+ 					&stop)) {
+ 			__mmu_int_rb_remove(rbnode, &handler->root);
+@@ -247,7 +290,7 @@ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+ 	list_for_each_entry_safe(rbnode, ptr, &del_list, list) {
+-		handler->ops->remove(handler->ops_arg, rbnode);
++		kref_put(&rbnode->refcount, release_immediate);
+ 	}
+ }
+ 
+@@ -259,7 +302,6 @@ static int mmu_notifier_range_start(struct mmu_notifier *mn,
+ 	struct rb_root_cached *root = &handler->root;
+ 	struct mmu_rb_node *node, *ptr = NULL;
+ 	unsigned long flags;
+-	bool added = false;
+ 
+ 	spin_lock_irqsave(&handler->lock, flags);
+ 	for (node = __mmu_int_rb_iter_first(root, range->start, range->end-1);
+@@ -268,38 +310,16 @@ static int mmu_notifier_range_start(struct mmu_notifier *mn,
+ 		ptr = __mmu_int_rb_iter_next(node, range->start,
+ 					     range->end - 1);
+ 		trace_hfi1_mmu_mem_invalidate(node->addr, node->len);
+-		if (handler->ops->invalidate(handler->ops_arg, node)) {
+-			__mmu_int_rb_remove(node, root);
+-			/* move from LRU list to delete list */
+-			list_move(&node->list, &handler->del_list);
+-			added = true;
+-		}
++		/* Remove from rb tree and lru_list. */
++		__mmu_int_rb_remove(node, root);
++		list_del_init(&node->list);
++		kref_put(&node->refcount, release_nolock);
+ 	}
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	if (added)
+-		queue_work(handler->wq, &handler->del_work);
+-
+ 	return 0;
+ }
+ 
+-/*
+- * Call the remove function for the given handler and the list.  This
+- * is expected to be called with a delete list extracted from handler.
+- * The caller should not be holding the handler lock.
+- */
+-static void do_remove(struct mmu_rb_handler *handler,
+-		      struct list_head *del_list)
+-{
+-	struct mmu_rb_node *node;
+-
+-	while (!list_empty(del_list)) {
+-		node = list_first_entry(del_list, struct mmu_rb_node, list);
+-		list_del(&node->list);
+-		handler->ops->remove(handler->ops_arg, node);
+-	}
+-}
+-
+ /*
+  * Work queue function to remove all nodes that have been queued up to
+  * be removed.  The key feature is that mm->mmap_lock is not being held
+@@ -312,11 +332,16 @@ static void handle_remove(struct work_struct *work)
+ 						del_work);
+ 	struct list_head del_list;
+ 	unsigned long flags;
++	struct mmu_rb_node *node;
+ 
+ 	/* remove anything that is queued to get removed */
+ 	spin_lock_irqsave(&handler->lock, flags);
+ 	list_replace_init(&handler->del_list, &del_list);
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	do_remove(handler, &del_list);
++	while (!list_empty(&del_list)) {
++		node = list_first_entry(&del_list, struct mmu_rb_node, list);
++		list_del(&node->list);
++		handler->ops->remove(handler->ops_arg, node);
++	}
+ }
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.h b/drivers/infiniband/hw/hfi1/mmu_rb.h
+index 0265d81c62061..be85537d23267 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.h
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.h
+@@ -57,6 +57,7 @@ struct mmu_rb_node {
+ 	struct rb_node node;
+ 	struct mmu_rb_handler *handler;
+ 	struct list_head list;
++	struct kref refcount;
+ };
+ 
+ /*
+@@ -92,6 +93,8 @@ int hfi1_mmu_rb_register(void *ops_arg,
+ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler);
+ int hfi1_mmu_rb_insert(struct mmu_rb_handler *handler,
+ 		       struct mmu_rb_node *mnode);
++void hfi1_mmu_rb_release(struct kref *refcount);
++
+ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg);
+ struct mmu_rb_node *hfi1_mmu_rb_get_first(struct mmu_rb_handler *handler,
+ 					  unsigned long addr,
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index 061562627dae4..2dc97de434a5e 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -1635,7 +1635,20 @@ static inline void sdma_unmap_desc(
+ 	struct hfi1_devdata *dd,
+ 	struct sdma_desc *descp)
+ {
+-	system_descriptor_complete(dd, descp);
++	switch (sdma_mapping_type(descp)) {
++	case SDMA_MAP_SINGLE:
++		dma_unmap_single(&dd->pcidev->dev, sdma_mapping_addr(descp),
++				 sdma_mapping_len(descp), DMA_TO_DEVICE);
++		break;
++	case SDMA_MAP_PAGE:
++		dma_unmap_page(&dd->pcidev->dev, sdma_mapping_addr(descp),
++			       sdma_mapping_len(descp), DMA_TO_DEVICE);
++		break;
++	}
++
++	if (descp->pinning_ctx && descp->ctx_put)
++		descp->ctx_put(descp->pinning_ctx);
++	descp->pinning_ctx = NULL;
+ }
+ 
+ /*
+@@ -3155,8 +3168,8 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
+ 
+ 		/* Add descriptor for coalesce buffer */
+ 		tx->desc_limit = MAX_DESC;
+-		return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, NULL, tx,
+-					 addr, tx->tlen);
++		return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, tx,
++					 addr, tx->tlen, NULL, NULL, NULL);
+ 	}
+ 
+ 	return 1;
+@@ -3187,8 +3200,7 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ {
+ 	int rval = 0;
+ 
+-	tx->num_desc++;
+-	if ((unlikely(tx->num_desc == tx->desc_limit))) {
++	if ((unlikely(tx->num_desc + 1 == tx->desc_limit))) {
+ 		rval = _extend_sdma_tx_descs(dd, tx);
+ 		if (rval) {
+ 			__sdma_txclean(dd, tx);
+@@ -3200,9 +3212,10 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ 	make_tx_sdma_desc(
+ 		tx,
+ 		SDMA_MAP_NONE,
+-		NULL,
+ 		dd->sdma_pad_phys,
+-		sizeof(u32) - (tx->packet_len & (sizeof(u32) - 1)));
++		sizeof(u32) - (tx->packet_len & (sizeof(u32) - 1)),
++		NULL, NULL, NULL);
++	tx->num_desc++;
+ 	_sdma_close_tx(dd, tx);
+ 	return rval;
+ }
+diff --git a/drivers/infiniband/hw/hfi1/sdma.h b/drivers/infiniband/hw/hfi1/sdma.h
+index 7d4f316ac6e43..7611f09d78dca 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.h
++++ b/drivers/infiniband/hw/hfi1/sdma.h
+@@ -635,9 +635,11 @@ static inline dma_addr_t sdma_mapping_addr(struct sdma_desc *d)
+ static inline void make_tx_sdma_desc(
+ 	struct sdma_txreq *tx,
+ 	int type,
+-	void *pinning_ctx,
+ 	dma_addr_t addr,
+-	size_t len)
++	size_t len,
++	void *pinning_ctx,
++	void (*ctx_get)(void *),
++	void (*ctx_put)(void *))
+ {
+ 	struct sdma_desc *desc = &tx->descp[tx->num_desc];
+ 
+@@ -654,7 +656,11 @@ static inline void make_tx_sdma_desc(
+ 				<< SDMA_DESC0_PHY_ADDR_SHIFT) |
+ 			(((u64)len & SDMA_DESC0_BYTE_COUNT_MASK)
+ 				<< SDMA_DESC0_BYTE_COUNT_SHIFT);
++
+ 	desc->pinning_ctx = pinning_ctx;
++	desc->ctx_put = ctx_put;
++	if (pinning_ctx && ctx_get)
++		ctx_get(pinning_ctx);
+ }
+ 
+ /* helper to extend txreq */
+@@ -674,32 +680,34 @@ static inline void sdma_txclean(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ static inline void _sdma_close_tx(struct hfi1_devdata *dd,
+ 				  struct sdma_txreq *tx)
+ {
+-	tx->descp[tx->num_desc].qw[0] |=
+-		SDMA_DESC0_LAST_DESC_FLAG;
+-	tx->descp[tx->num_desc].qw[1] |=
+-		dd->default_desc1;
++	u16 last_desc = tx->num_desc - 1;
++
++	tx->descp[last_desc].qw[0] |= SDMA_DESC0_LAST_DESC_FLAG;
++	tx->descp[last_desc].qw[1] |= dd->default_desc1;
+ 	if (tx->flags & SDMA_TXREQ_F_URGENT)
+-		tx->descp[tx->num_desc].qw[1] |=
+-			(SDMA_DESC1_HEAD_TO_HOST_FLAG |
+-			 SDMA_DESC1_INT_REQ_FLAG);
++		tx->descp[last_desc].qw[1] |= (SDMA_DESC1_HEAD_TO_HOST_FLAG |
++					       SDMA_DESC1_INT_REQ_FLAG);
+ }
+ 
+ static inline int _sdma_txadd_daddr(
+ 	struct hfi1_devdata *dd,
+ 	int type,
+-	void *pinning_ctx,
+ 	struct sdma_txreq *tx,
+ 	dma_addr_t addr,
+-	u16 len)
++	u16 len,
++	void *pinning_ctx,
++	void (*ctx_get)(void *),
++	void (*ctx_put)(void *))
+ {
+ 	int rval = 0;
+ 
+ 	make_tx_sdma_desc(
+ 		tx,
+ 		type,
+-		pinning_ctx,
+-		addr, len);
++		addr, len,
++		pinning_ctx, ctx_get, ctx_put);
+ 	WARN_ON(len > tx->tlen);
++	tx->num_desc++;
+ 	tx->tlen -= len;
+ 	/* special cases for last */
+ 	if (!tx->tlen) {
+@@ -711,18 +719,24 @@ static inline int _sdma_txadd_daddr(
+ 			_sdma_close_tx(dd, tx);
+ 		}
+ 	}
+-	tx->num_desc++;
+ 	return rval;
+ }
+ 
+ /**
+  * sdma_txadd_page() - add a page to the sdma_txreq
+  * @dd: the device to use for mapping
+- * @pinning_ctx: context to be released at descriptor retirement
+  * @tx: tx request to which the page is added
+  * @page: page to map
+  * @offset: offset within the page
+  * @len: length in bytes
++ * @pinning_ctx: context to be stored on struct sdma_desc .pinning_ctx. Not
++ *               added if coalesce buffer is used. E.g. pointer to pinned-page
++ *               cache entry for the sdma_desc.
++ * @ctx_get: optional function to take reference to @pinning_ctx. Not called if
++ *           @pinning_ctx is NULL.
++ * @ctx_put: optional function to release reference to @pinning_ctx after
++ *           sdma_desc completes. May be called in interrupt context so must
++ *           not sleep. Not called if @pinning_ctx is NULL.
+  *
+  * This is used to add a page/offset/length descriptor.
+  *
+@@ -734,11 +748,13 @@ static inline int _sdma_txadd_daddr(
+  */
+ static inline int sdma_txadd_page(
+ 	struct hfi1_devdata *dd,
+-	void *pinning_ctx,
+ 	struct sdma_txreq *tx,
+ 	struct page *page,
+ 	unsigned long offset,
+-	u16 len)
++	u16 len,
++	void *pinning_ctx,
++	void (*ctx_get)(void *),
++	void (*ctx_put)(void *))
+ {
+ 	dma_addr_t addr;
+ 	int rval;
+@@ -762,7 +778,8 @@ static inline int sdma_txadd_page(
+ 		return -ENOSPC;
+ 	}
+ 
+-	return _sdma_txadd_daddr(dd, SDMA_MAP_PAGE, pinning_ctx, tx, addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_PAGE, tx, addr, len,
++				 pinning_ctx, ctx_get, ctx_put);
+ }
+ 
+ /**
+@@ -796,8 +813,8 @@ static inline int sdma_txadd_daddr(
+ 			return rval;
+ 	}
+ 
+-	return _sdma_txadd_daddr(dd, SDMA_MAP_NONE, NULL, tx,
+-				 addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_NONE, tx, addr, len,
++				 NULL, NULL, NULL);
+ }
+ 
+ /**
+@@ -843,7 +860,8 @@ static inline int sdma_txadd_kvaddr(
+ 		return -ENOSPC;
+ 	}
+ 
+-	return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, NULL, tx, addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, tx, addr, len,
++				 NULL, NULL, NULL);
+ }
+ 
+ struct iowait_work;
+@@ -1094,6 +1112,4 @@ u16 sdma_get_descq_cnt(void);
+ extern uint mod_num_sdma;
+ 
+ void sdma_update_lmc(struct hfi1_devdata *dd, u64 mask, u32 lid);
+-
+-void system_descriptor_complete(struct hfi1_devdata *dd, struct sdma_desc *descp);
+ #endif
+diff --git a/drivers/infiniband/hw/hfi1/sdma_txreq.h b/drivers/infiniband/hw/hfi1/sdma_txreq.h
+index 4204650cebc29..fb091b5834b5d 100644
+--- a/drivers/infiniband/hw/hfi1/sdma_txreq.h
++++ b/drivers/infiniband/hw/hfi1/sdma_txreq.h
+@@ -62,6 +62,8 @@ struct sdma_desc {
+ 	/* private:  don't use directly */
+ 	u64 qw[2];
+ 	void *pinning_ctx;
++	/* Release reference to @pinning_ctx. May be called in interrupt context. Must not sleep. */
++	void (*ctx_put)(void *ctx);
+ };
+ 
+ /**
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
+index 1eb5a44a4ae6a..a67791187d46d 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.c
++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
+@@ -103,18 +103,14 @@ static int defer_packet_queue(
+ static void activate_packet_queue(struct iowait *wait, int reason);
+ static bool sdma_rb_filter(struct mmu_rb_node *node, unsigned long addr,
+ 			   unsigned long len);
+-static int sdma_rb_insert(void *arg, struct mmu_rb_node *mnode);
+ static int sdma_rb_evict(void *arg, struct mmu_rb_node *mnode,
+ 			 void *arg2, bool *stop);
+ static void sdma_rb_remove(void *arg, struct mmu_rb_node *mnode);
+-static int sdma_rb_invalidate(void *arg, struct mmu_rb_node *mnode);
+ 
+ static struct mmu_rb_ops sdma_rb_ops = {
+ 	.filter = sdma_rb_filter,
+-	.insert = sdma_rb_insert,
+ 	.evict = sdma_rb_evict,
+ 	.remove = sdma_rb_remove,
+-	.invalidate = sdma_rb_invalidate
+ };
+ 
+ static int add_system_pages_to_sdma_packet(struct user_sdma_request *req,
+@@ -202,9 +198,7 @@ int hfi1_user_sdma_alloc_queues(struct hfi1_ctxtdata *uctxt,
+ 	if (!pq->reqs)
+ 		goto pq_reqs_nomem;
+ 
+-	pq->req_in_use = kcalloc(BITS_TO_LONGS(hfi1_sdma_comp_ring_size),
+-				 sizeof(*pq->req_in_use),
+-				 GFP_KERNEL);
++	pq->req_in_use = bitmap_zalloc(hfi1_sdma_comp_ring_size, GFP_KERNEL);
+ 	if (!pq->req_in_use)
+ 		goto pq_reqs_no_in_use;
+ 
+@@ -251,7 +245,7 @@ cq_comps_nomem:
+ cq_nomem:
+ 	kmem_cache_destroy(pq->txreq_cache);
+ pq_txreq_nomem:
+-	kfree(pq->req_in_use);
++	bitmap_free(pq->req_in_use);
+ pq_reqs_no_in_use:
+ 	kfree(pq->reqs);
+ pq_reqs_nomem:
+@@ -290,15 +284,15 @@ int hfi1_user_sdma_free_queues(struct hfi1_filedata *fd,
+ 		spin_unlock(&fd->pq_rcu_lock);
+ 		synchronize_srcu(&fd->pq_srcu);
+ 		/* at this point there can be no more new requests */
+-		if (pq->handler)
+-			hfi1_mmu_rb_unregister(pq->handler);
+ 		iowait_sdma_drain(&pq->busy);
+ 		/* Wait until all requests have been freed. */
+ 		wait_event_interruptible(
+ 			pq->wait,
+ 			!atomic_read(&pq->n_reqs));
+ 		kfree(pq->reqs);
+-		kfree(pq->req_in_use);
++		if (pq->handler)
++			hfi1_mmu_rb_unregister(pq->handler);
++		bitmap_free(pq->req_in_use);
+ 		kmem_cache_destroy(pq->txreq_cache);
+ 		flush_pq_iowait(pq);
+ 		kfree(pq);
+@@ -1318,25 +1312,17 @@ static void free_system_node(struct sdma_mmu_node *node)
+ 	kfree(node);
+ }
+ 
+-static inline void acquire_node(struct sdma_mmu_node *node)
+-{
+-	atomic_inc(&node->refcount);
+-	WARN_ON(atomic_read(&node->refcount) < 0);
+-}
+-
+-static inline void release_node(struct mmu_rb_handler *handler,
+-				struct sdma_mmu_node *node)
+-{
+-	atomic_dec(&node->refcount);
+-	WARN_ON(atomic_read(&node->refcount) < 0);
+-}
+-
++/*
++ * kref_get()'s an additional kref on the returned rb_node to prevent rb_node
++ * from being released until after rb_node is assigned to an SDMA descriptor
++ * (struct sdma_desc) under add_system_iovec_to_sdma_packet(), even if the
++ * virtual address range for rb_node is invalidated between now and then.
++ */
+ static struct sdma_mmu_node *find_system_node(struct mmu_rb_handler *handler,
+ 					      unsigned long start,
+ 					      unsigned long end)
+ {
+ 	struct mmu_rb_node *rb_node;
+-	struct sdma_mmu_node *node;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&handler->lock, flags);
+@@ -1345,11 +1331,12 @@ static struct sdma_mmu_node *find_system_node(struct mmu_rb_handler *handler,
+ 		spin_unlock_irqrestore(&handler->lock, flags);
+ 		return NULL;
+ 	}
+-	node = container_of(rb_node, struct sdma_mmu_node, rb);
+-	acquire_node(node);
++
++	/* "safety" kref to prevent release before add_system_iovec_to_sdma_packet() */
++	kref_get(&rb_node->refcount);
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	return node;
++	return container_of(rb_node, struct sdma_mmu_node, rb);
+ }
+ 
+ static int pin_system_pages(struct user_sdma_request *req,
+@@ -1398,6 +1385,13 @@ retry:
+ 	return 0;
+ }
+ 
++/*
++ * kref refcount on *node_p will be 2 on successful addition: one kref from
++ * kref_init() for mmu_rb_handler and one kref to prevent *node_p from being
++ * released until after *node_p is assigned to an SDMA descriptor (struct
++ * sdma_desc) under add_system_iovec_to_sdma_packet(), even if the virtual
++ * address range for *node_p is invalidated between now and then.
++ */
+ static int add_system_pinning(struct user_sdma_request *req,
+ 			      struct sdma_mmu_node **node_p,
+ 			      unsigned long start, unsigned long len)
+@@ -1411,6 +1405,12 @@ static int add_system_pinning(struct user_sdma_request *req,
+ 	if (!node)
+ 		return -ENOMEM;
+ 
++	/* First kref "moves" to mmu_rb_handler */
++	kref_init(&node->rb.refcount);
++
++	/* "safety" kref to prevent release before add_system_iovec_to_sdma_packet() */
++	kref_get(&node->rb.refcount);
++
+ 	node->pq = pq;
+ 	ret = pin_system_pages(req, start, len, node, PFN_DOWN(len));
+ 	if (ret == 0) {
+@@ -1474,15 +1474,15 @@ static int get_system_cache_entry(struct user_sdma_request *req,
+ 			return 0;
+ 		}
+ 
+-		SDMA_DBG(req, "prepend: node->rb.addr %lx, node->refcount %d",
+-			 node->rb.addr, atomic_read(&node->refcount));
++		SDMA_DBG(req, "prepend: node->rb.addr %lx, node->rb.refcount %d",
++			 node->rb.addr, kref_read(&node->rb.refcount));
+ 		prepend_len = node->rb.addr - start;
+ 
+ 		/*
+ 		 * This node will not be returned, instead a new node
+ 		 * will be. So release the reference.
+ 		 */
+-		release_node(handler, node);
++		kref_put(&node->rb.refcount, hfi1_mmu_rb_release);
+ 
+ 		/* Prepend a node to cover the beginning of the allocation */
+ 		ret = add_system_pinning(req, node_p, start, prepend_len);
+@@ -1494,6 +1494,20 @@ static int get_system_cache_entry(struct user_sdma_request *req,
+ 	}
+ }
+ 
++static void sdma_mmu_rb_node_get(void *ctx)
++{
++	struct mmu_rb_node *node = ctx;
++
++	kref_get(&node->refcount);
++}
++
++static void sdma_mmu_rb_node_put(void *ctx)
++{
++	struct sdma_mmu_node *node = ctx;
++
++	kref_put(&node->rb.refcount, hfi1_mmu_rb_release);
++}
++
+ static int add_mapping_to_sdma_packet(struct user_sdma_request *req,
+ 				      struct user_sdma_txreq *tx,
+ 				      struct sdma_mmu_node *cache_entry,
+@@ -1537,9 +1551,12 @@ static int add_mapping_to_sdma_packet(struct user_sdma_request *req,
+ 			ctx = cache_entry;
+ 		}
+ 
+-		ret = sdma_txadd_page(pq->dd, ctx, &tx->txreq,
++		ret = sdma_txadd_page(pq->dd, &tx->txreq,
+ 				      cache_entry->pages[page_index],
+-				      page_offset, from_this_page);
++				      page_offset, from_this_page,
++				      ctx,
++				      sdma_mmu_rb_node_get,
++				      sdma_mmu_rb_node_put);
+ 		if (ret) {
+ 			/*
+ 			 * When there's a failure, the entire request is freed by
+@@ -1561,8 +1578,6 @@ static int add_system_iovec_to_sdma_packet(struct user_sdma_request *req,
+ 					   struct user_sdma_iovec *iovec,
+ 					   size_t from_this_iovec)
+ {
+-	struct mmu_rb_handler *handler = req->pq->handler;
+-
+ 	while (from_this_iovec > 0) {
+ 		struct sdma_mmu_node *cache_entry;
+ 		size_t from_this_cache_entry;
+@@ -1583,15 +1598,15 @@ static int add_system_iovec_to_sdma_packet(struct user_sdma_request *req,
+ 
+ 		ret = add_mapping_to_sdma_packet(req, tx, cache_entry, start,
+ 						 from_this_cache_entry);
++
++		/*
++		 * Done adding cache_entry to zero or more sdma_desc. Can
++		 * kref_put() the "safety" kref taken under
++		 * get_system_cache_entry().
++		 */
++		kref_put(&cache_entry->rb.refcount, hfi1_mmu_rb_release);
++
+ 		if (ret) {
+-			/*
+-			 * We're guaranteed that there will be no descriptor
+-			 * completion callback that releases this node
+-			 * because only the last descriptor referencing it
+-			 * has a context attached, and a failure means the
+-			 * last descriptor was never added.
+-			 */
+-			release_node(handler, cache_entry);
+ 			SDMA_DBG(req, "add system segment failed %d", ret);
+ 			return ret;
+ 		}
+@@ -1642,42 +1657,12 @@ static int add_system_pages_to_sdma_packet(struct user_sdma_request *req,
+ 	return 0;
+ }
+ 
+-void system_descriptor_complete(struct hfi1_devdata *dd,
+-				struct sdma_desc *descp)
+-{
+-	switch (sdma_mapping_type(descp)) {
+-	case SDMA_MAP_SINGLE:
+-		dma_unmap_single(&dd->pcidev->dev, sdma_mapping_addr(descp),
+-				 sdma_mapping_len(descp), DMA_TO_DEVICE);
+-		break;
+-	case SDMA_MAP_PAGE:
+-		dma_unmap_page(&dd->pcidev->dev, sdma_mapping_addr(descp),
+-			       sdma_mapping_len(descp), DMA_TO_DEVICE);
+-		break;
+-	}
+-
+-	if (descp->pinning_ctx) {
+-		struct sdma_mmu_node *node = descp->pinning_ctx;
+-
+-		release_node(node->rb.handler, node);
+-	}
+-}
+-
+ static bool sdma_rb_filter(struct mmu_rb_node *node, unsigned long addr,
+ 			   unsigned long len)
+ {
+ 	return (bool)(node->addr == addr);
+ }
+ 
+-static int sdma_rb_insert(void *arg, struct mmu_rb_node *mnode)
+-{
+-	struct sdma_mmu_node *node =
+-		container_of(mnode, struct sdma_mmu_node, rb);
+-
+-	atomic_inc(&node->refcount);
+-	return 0;
+-}
+-
+ /*
+  * Return 1 to remove the node from the rb tree and call the remove op.
+  *
+@@ -1690,10 +1675,6 @@ static int sdma_rb_evict(void *arg, struct mmu_rb_node *mnode,
+ 		container_of(mnode, struct sdma_mmu_node, rb);
+ 	struct evict_data *evict_data = evict_arg;
+ 
+-	/* is this node still being used? */
+-	if (atomic_read(&node->refcount))
+-		return 0; /* keep this node */
+-
+ 	/* this node will be evicted, add its pages to our count */
+ 	evict_data->cleared += node->npages;
+ 
+@@ -1711,13 +1692,3 @@ static void sdma_rb_remove(void *arg, struct mmu_rb_node *mnode)
+ 
+ 	free_system_node(node);
+ }
+-
+-static int sdma_rb_invalidate(void *arg, struct mmu_rb_node *mnode)
+-{
+-	struct sdma_mmu_node *node =
+-		container_of(mnode, struct sdma_mmu_node, rb);
+-
+-	if (!atomic_read(&node->refcount))
+-		return 1;
+-	return 0;
+-}
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.h b/drivers/infiniband/hw/hfi1/user_sdma.h
+index 9d417aacfa8b7..b2b26b71fcef0 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.h
++++ b/drivers/infiniband/hw/hfi1/user_sdma.h
+@@ -145,7 +145,6 @@ struct hfi1_user_sdma_comp_q {
+ struct sdma_mmu_node {
+ 	struct mmu_rb_node rb;
+ 	struct hfi1_user_sdma_pkt_q *pq;
+-	atomic_t refcount;
+ 	struct page **pages;
+ 	unsigned int npages;
+ };
+diff --git a/drivers/infiniband/hw/hfi1/vnic_sdma.c b/drivers/infiniband/hw/hfi1/vnic_sdma.c
+index 7658c620a125c..ab8bcdf104475 100644
+--- a/drivers/infiniband/hw/hfi1/vnic_sdma.c
++++ b/drivers/infiniband/hw/hfi1/vnic_sdma.c
+@@ -106,11 +106,11 @@ static noinline int build_vnic_ulp_payload(struct sdma_engine *sde,
+ 
+ 		/* combine physically continuous fragments later? */
+ 		ret = sdma_txadd_page(sde->dd,
+-				      NULL,
+ 				      &tx->txreq,
+ 				      skb_frag_page(frag),
+ 				      skb_frag_off(frag),
+-				      skb_frag_size(frag));
++				      skb_frag_size(frag),
++				      NULL, NULL, NULL);
+ 		if (unlikely(ret))
+ 			goto bail_txadd;
+ 	}
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c
+index 455d533dd7c4a..c493d7644b577 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
++++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
+@@ -36,9 +36,9 @@
+ #include "hns_roce_device.h"
+ #include "hns_roce_cmd.h"
+ 
+-#define CMD_POLL_TOKEN		0xffff
+-#define CMD_MAX_NUM		32
+-#define CMD_TOKEN_MASK		0x1f
++#define CMD_POLL_TOKEN 0xffff
++#define CMD_MAX_NUM 32
++#define CMD_TOKEN_MASK 0x1f
+ 
+ static int hns_roce_cmd_mbox_post_hw(struct hns_roce_dev *hr_dev, u64 in_param,
+ 				     u64 out_param, u32 in_modifier,
+@@ -93,8 +93,8 @@ static int hns_roce_cmd_mbox_poll(struct hns_roce_dev *hr_dev, u64 in_param,
+ void hns_roce_cmd_event(struct hns_roce_dev *hr_dev, u16 token, u8 status,
+ 			u64 out_param)
+ {
+-	struct hns_roce_cmd_context
+-		*context = &hr_dev->cmd.context[token & hr_dev->cmd.token_mask];
++	struct hns_roce_cmd_context *context =
++		&hr_dev->cmd.context[token % hr_dev->cmd.max_cmds];
+ 
+ 	if (token != context->token)
+ 		return;
+@@ -164,8 +164,8 @@ static int hns_roce_cmd_mbox_wait(struct hns_roce_dev *hr_dev, u64 in_param,
+ 	int ret;
+ 
+ 	down(&hr_dev->cmd.event_sem);
+-	ret = __hns_roce_cmd_mbox_wait(hr_dev, in_param, out_param,
+-				       in_modifier, op_modifier, op, timeout);
++	ret = __hns_roce_cmd_mbox_wait(hr_dev, in_param, out_param, in_modifier,
++				       op_modifier, op, timeout);
+ 	up(&hr_dev->cmd.event_sem);
+ 
+ 	return ret;
+@@ -231,9 +231,8 @@ int hns_roce_cmd_use_events(struct hns_roce_dev *hr_dev)
+ 	struct hns_roce_cmdq *hr_cmd = &hr_dev->cmd;
+ 	int i;
+ 
+-	hr_cmd->context = kmalloc_array(hr_cmd->max_cmds,
+-					sizeof(*hr_cmd->context),
+-					GFP_KERNEL);
++	hr_cmd->context =
++		kcalloc(hr_cmd->max_cmds, sizeof(*hr_cmd->context), GFP_KERNEL);
+ 	if (!hr_cmd->context)
+ 		return -ENOMEM;
+ 
+@@ -262,8 +261,8 @@ void hns_roce_cmd_use_polling(struct hns_roce_dev *hr_dev)
+ 	hr_cmd->use_events = 0;
+ }
+ 
+-struct hns_roce_cmd_mailbox
+-	*hns_roce_alloc_cmd_mailbox(struct hns_roce_dev *hr_dev)
++struct hns_roce_cmd_mailbox *
++hns_roce_alloc_cmd_mailbox(struct hns_roce_dev *hr_dev)
+ {
+ 	struct hns_roce_cmd_mailbox *mailbox;
+ 
+@@ -271,8 +270,8 @@ struct hns_roce_cmd_mailbox
+ 	if (!mailbox)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	mailbox->buf = dma_pool_alloc(hr_dev->cmd.pool, GFP_KERNEL,
+-				      &mailbox->dma);
++	mailbox->buf =
++		dma_pool_alloc(hr_dev->cmd.pool, GFP_KERNEL, &mailbox->dma);
+ 	if (!mailbox->buf) {
+ 		kfree(mailbox);
+ 		return ERR_PTR(-ENOMEM);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.h b/drivers/infiniband/hw/hns/hns_roce_cmd.h
+index 1915bacaded0a..8e63b827f28cc 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cmd.h
++++ b/drivers/infiniband/hw/hns/hns_roce_cmd.h
+@@ -143,8 +143,8 @@ int hns_roce_cmd_mbox(struct hns_roce_dev *hr_dev, u64 in_param, u64 out_param,
+ 		      unsigned long in_modifier, u8 op_modifier, u16 op,
+ 		      unsigned long timeout);
+ 
+-struct hns_roce_cmd_mailbox
+-	*hns_roce_alloc_cmd_mailbox(struct hns_roce_dev *hr_dev);
++struct hns_roce_cmd_mailbox *
++hns_roce_alloc_cmd_mailbox(struct hns_roce_dev *hr_dev);
+ void hns_roce_free_cmd_mailbox(struct hns_roce_dev *hr_dev,
+ 			       struct hns_roce_cmd_mailbox *mailbox);
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
+index 8a6bded9c11cb..9200e6477e1ed 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
+@@ -41,9 +41,9 @@
+ 
+ static int alloc_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
+ {
++	struct ib_device *ibdev = &hr_dev->ib_dev;
+ 	struct hns_roce_cmd_mailbox *mailbox;
+ 	struct hns_roce_cq_table *cq_table;
+-	struct ib_device *ibdev = &hr_dev->ib_dev;
+ 	u64 mtts[MTT_MIN_COUNT] = { 0 };
+ 	dma_addr_t dma_handle;
+ 	int ret;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index d9aa7424d2902..09b5e4935c2ca 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -46,8 +46,6 @@
+ 
+ #define HNS_ROCE_IB_MIN_SQ_STRIDE		6
+ 
+-#define HNS_ROCE_BA_SIZE			(32 * 4096)
+-
+ #define BA_BYTE_LEN				8
+ 
+ /* Hardware specification only for v1 engine */
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index c880a8be7e3cd..854b41c14774d 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -36,9 +36,6 @@
+ #include "hns_roce_hem.h"
+ #include "hns_roce_common.h"
+ 
+-#define DMA_ADDR_T_SHIFT		12
+-#define BT_BA_SHIFT			32
+-
+ #define HEM_INDEX_BUF			BIT(0)
+ #define HEM_INDEX_L0			BIT(1)
+ #define HEM_INDEX_L1			BIT(2)
+@@ -198,9 +195,9 @@ int hns_roce_calc_hem_mhop(struct hns_roce_dev *hr_dev,
+ {
+ 	struct device *dev = hr_dev->dev;
+ 	u32 chunk_ba_num;
++	u32 chunk_size;
+ 	u32 table_idx;
+ 	u32 bt_num;
+-	u32 chunk_size;
+ 
+ 	if (get_hem_table_config(hr_dev, mhop, table->type))
+ 		return -EINVAL;
+@@ -260,7 +257,6 @@ static struct hns_roce_hem *hns_roce_alloc_hem(struct hns_roce_dev *hr_dev,
+ 	if (!hem)
+ 		return NULL;
+ 
+-	hem->refcount = 0;
+ 	INIT_LIST_HEAD(&hem->chunk_list);
+ 
+ 	order = get_order(hem_alloc_size);
+@@ -327,81 +323,6 @@ void hns_roce_free_hem(struct hns_roce_dev *hr_dev, struct hns_roce_hem *hem)
+ 	kfree(hem);
+ }
+ 
+-static int hns_roce_set_hem(struct hns_roce_dev *hr_dev,
+-			    struct hns_roce_hem_table *table, unsigned long obj)
+-{
+-	spinlock_t *lock = &hr_dev->bt_cmd_lock;
+-	struct device *dev = hr_dev->dev;
+-	long end;
+-	unsigned long flags;
+-	struct hns_roce_hem_iter iter;
+-	void __iomem *bt_cmd;
+-	__le32 bt_cmd_val[2];
+-	__le32 bt_cmd_h = 0;
+-	__le32 bt_cmd_l;
+-	u64 bt_ba;
+-	int ret = 0;
+-
+-	/* Find the HEM(Hardware Entry Memory) entry */
+-	unsigned long i = (obj & (table->num_obj - 1)) /
+-			  (table->table_chunk_size / table->obj_size);
+-
+-	switch (table->type) {
+-	case HEM_TYPE_QPC:
+-	case HEM_TYPE_MTPT:
+-	case HEM_TYPE_CQC:
+-	case HEM_TYPE_SRQC:
+-		roce_set_field(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_MDF_M,
+-			ROCEE_BT_CMD_H_ROCEE_BT_CMD_MDF_S, table->type);
+-		break;
+-	default:
+-		return ret;
+-	}
+-
+-	roce_set_field(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_IN_MDF_M,
+-		       ROCEE_BT_CMD_H_ROCEE_BT_CMD_IN_MDF_S, obj);
+-	roce_set_bit(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_S, 0);
+-	roce_set_bit(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_HW_SYNS_S, 1);
+-
+-	/* Currently iter only a chunk */
+-	for (hns_roce_hem_first(table->hem[i], &iter);
+-	     !hns_roce_hem_last(&iter); hns_roce_hem_next(&iter)) {
+-		bt_ba = hns_roce_hem_addr(&iter) >> DMA_ADDR_T_SHIFT;
+-
+-		spin_lock_irqsave(lock, flags);
+-
+-		bt_cmd = hr_dev->reg_base + ROCEE_BT_CMD_H_REG;
+-
+-		end = HW_SYNC_TIMEOUT_MSECS;
+-		while (end > 0) {
+-			if (!(readl(bt_cmd) >> BT_CMD_SYNC_SHIFT))
+-				break;
+-
+-			mdelay(HW_SYNC_SLEEP_TIME_INTERVAL);
+-			end -= HW_SYNC_SLEEP_TIME_INTERVAL;
+-		}
+-
+-		if (end <= 0) {
+-			dev_err(dev, "Write bt_cmd err,hw_sync is not zero.\n");
+-			spin_unlock_irqrestore(lock, flags);
+-			return -EBUSY;
+-		}
+-
+-		bt_cmd_l = cpu_to_le32(bt_ba);
+-		roce_set_field(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_BA_H_M,
+-			       ROCEE_BT_CMD_H_ROCEE_BT_CMD_BA_H_S,
+-			       bt_ba >> BT_BA_SHIFT);
+-
+-		bt_cmd_val[0] = bt_cmd_l;
+-		bt_cmd_val[1] = bt_cmd_h;
+-		hns_roce_write64_k(bt_cmd_val,
+-				   hr_dev->reg_base + ROCEE_BT_CMD_L_REG);
+-		spin_unlock_irqrestore(lock, flags);
+-	}
+-
+-	return ret;
+-}
+-
+ static int calc_hem_config(struct hns_roce_dev *hr_dev,
+ 			   struct hns_roce_hem_table *table, unsigned long obj,
+ 			   struct hns_roce_hem_mhop *mhop,
+@@ -607,7 +528,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+ 
+ 	mutex_lock(&table->mutex);
+ 	if (table->hem[index.buf]) {
+-		++table->hem[index.buf]->refcount;
++		refcount_inc(&table->hem[index.buf]->refcount);
+ 		goto out;
+ 	}
+ 
+@@ -626,7 +547,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+ 		}
+ 	}
+ 
+-	++table->hem[index.buf]->refcount;
++	refcount_set(&table->hem[index.buf]->refcount, 1);
+ 	goto out;
+ 
+ err_alloc:
+@@ -640,8 +561,8 @@ int hns_roce_table_get(struct hns_roce_dev *hr_dev,
+ 		       struct hns_roce_hem_table *table, unsigned long obj)
+ {
+ 	struct device *dev = hr_dev->dev;
+-	int ret = 0;
+ 	unsigned long i;
++	int ret = 0;
+ 
+ 	if (hns_roce_check_whether_mhop(hr_dev, table->type))
+ 		return hns_roce_table_mhop_get(hr_dev, table, obj);
+@@ -652,7 +573,7 @@ int hns_roce_table_get(struct hns_roce_dev *hr_dev,
+ 	mutex_lock(&table->mutex);
+ 
+ 	if (table->hem[i]) {
+-		++table->hem[i]->refcount;
++		refcount_inc(&table->hem[i]->refcount);
+ 		goto out;
+ 	}
+ 
+@@ -667,15 +588,16 @@ int hns_roce_table_get(struct hns_roce_dev *hr_dev,
+ 	}
+ 
+ 	/* Set HEM base address(128K/page, pa) to Hardware */
+-	if (hns_roce_set_hem(hr_dev, table, obj)) {
++	ret = hr_dev->hw->set_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT);
++	if (ret) {
+ 		hns_roce_free_hem(hr_dev, table->hem[i]);
+ 		table->hem[i] = NULL;
+-		ret = -ENODEV;
+-		dev_err(dev, "set HEM base address to HW failed.\n");
++		dev_err(dev, "set HEM base address to HW failed, ret = %d.\n",
++			ret);
+ 		goto out;
+ 	}
+ 
+-	++table->hem[i]->refcount;
++	refcount_set(&table->hem[i]->refcount, 1);
+ out:
+ 	mutex_unlock(&table->mutex);
+ 	return ret;
+@@ -742,11 +664,11 @@ static void hns_roce_table_mhop_put(struct hns_roce_dev *hr_dev,
+ 		return;
+ 	}
+ 
+-	mutex_lock(&table->mutex);
+-	if (check_refcount && (--table->hem[index.buf]->refcount > 0)) {
+-		mutex_unlock(&table->mutex);
++	if (!check_refcount)
++		mutex_lock(&table->mutex);
++	else if (!refcount_dec_and_mutex_lock(&table->hem[index.buf]->refcount,
++					      &table->mutex))
+ 		return;
+-	}
+ 
+ 	clear_mhop_hem(hr_dev, table, obj, &mhop, &index);
+ 	free_mhop_hem(hr_dev, table, &mhop, &index);
+@@ -768,16 +690,15 @@ void hns_roce_table_put(struct hns_roce_dev *hr_dev,
+ 	i = (obj & (table->num_obj - 1)) /
+ 	    (table->table_chunk_size / table->obj_size);
+ 
+-	mutex_lock(&table->mutex);
++	if (!refcount_dec_and_mutex_lock(&table->hem[i]->refcount,
++					 &table->mutex))
++		return;
+ 
+-	if (--table->hem[i]->refcount == 0) {
+-		/* Clear HEM base address */
+-		if (hr_dev->hw->clear_hem(hr_dev, table, obj, 0))
+-			dev_warn(dev, "Clear HEM base address failed.\n");
++	if (hr_dev->hw->clear_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT))
++		dev_warn(dev, "failed to clear HEM base address.\n");
+ 
+-		hns_roce_free_hem(hr_dev, table->hem[i]);
+-		table->hem[i] = NULL;
+-	}
++	hns_roce_free_hem(hr_dev, table->hem[i]);
++	table->hem[i] = NULL;
+ 
+ 	mutex_unlock(&table->mutex);
+ }
+@@ -789,14 +710,14 @@ void *hns_roce_table_find(struct hns_roce_dev *hr_dev,
+ 	struct hns_roce_hem_chunk *chunk;
+ 	struct hns_roce_hem_mhop mhop;
+ 	struct hns_roce_hem *hem;
+-	void *addr = NULL;
+ 	unsigned long mhop_obj = obj;
+ 	unsigned long obj_per_chunk;
+ 	unsigned long idx_offset;
+ 	int offset, dma_offset;
++	void *addr = NULL;
++	u32 hem_idx = 0;
+ 	int length;
+ 	int i, j;
+-	u32 hem_idx = 0;
+ 
+ 	if (!table->lowmem)
+ 		return NULL;
+@@ -966,8 +887,8 @@ static void hns_roce_cleanup_mhop_hem_table(struct hns_roce_dev *hr_dev,
+ {
+ 	struct hns_roce_hem_mhop mhop;
+ 	u32 buf_chunk_size;
+-	int i;
+ 	u64 obj;
++	int i;
+ 
+ 	if (hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop))
+ 		return;
+@@ -1298,8 +1219,8 @@ static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
+ 				  const struct hns_roce_buf_region *regions,
+ 				  int region_cnt)
+ {
+-	struct roce_hem_item *hem, *temp_hem, *root_hem;
+ 	struct list_head temp_list[HNS_ROCE_MAX_BT_REGION];
++	struct roce_hem_item *hem, *temp_hem, *root_hem;
+ 	const struct hns_roce_buf_region *r;
+ 	struct list_head temp_root;
+ 	struct list_head temp_btm;
+@@ -1404,8 +1325,8 @@ int hns_roce_hem_list_request(struct hns_roce_dev *hr_dev,
+ {
+ 	const struct hns_roce_buf_region *r;
+ 	int ofs, end;
+-	int ret;
+ 	int unit;
++	int ret;
+ 	int i;
+ 
+ 	if (region_cnt > HNS_ROCE_MAX_BT_REGION) {
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.h b/drivers/infiniband/hw/hns/hns_roce_hem.h
+index b34c940077bb5..b7617786b1005 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.h
+@@ -34,9 +34,7 @@
+ #ifndef _HNS_ROCE_HEM_H
+ #define _HNS_ROCE_HEM_H
+ 
+-#define HW_SYNC_SLEEP_TIME_INTERVAL	20
+-#define HW_SYNC_TIMEOUT_MSECS           (25 * HW_SYNC_SLEEP_TIME_INTERVAL)
+-#define BT_CMD_SYNC_SHIFT		31
++#define HEM_HOP_STEP_DIRECT 0xff
+ 
+ enum {
+ 	/* MAP HEM(Hardware Entry Memory) */
+@@ -73,11 +71,6 @@ enum {
+ 	(type >= HEM_TYPE_MTT && hop_num == 1) || \
+ 	(type >= HEM_TYPE_MTT && hop_num == HNS_ROCE_HOP_NUM_0))
+ 
+-enum {
+-	 HNS_ROCE_HEM_PAGE_SHIFT = 12,
+-	 HNS_ROCE_HEM_PAGE_SIZE  = 1 << HNS_ROCE_HEM_PAGE_SHIFT,
+-};
+-
+ struct hns_roce_hem_chunk {
+ 	struct list_head	 list;
+ 	int			 npages;
+@@ -87,8 +80,8 @@ struct hns_roce_hem_chunk {
+ };
+ 
+ struct hns_roce_hem {
+-	struct list_head	 chunk_list;
+-	int			 refcount;
++	struct list_head chunk_list;
++	refcount_t refcount;
+ };
+ 
+ struct hns_roce_hem_iter {
+@@ -174,4 +167,4 @@ static inline dma_addr_t hns_roce_hem_addr(struct hns_roce_hem_iter *iter)
+ 	return sg_dma_address(&iter->chunk->mem[iter->page_idx]);
+ }
+ 
+-#endif /*_HNS_ROCE_HEM_H*/
++#endif /* _HNS_ROCE_HEM_H */
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+index 5f4d8a32ed6d9..6f9b024d4ff7c 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+@@ -239,7 +239,7 @@ static int hns_roce_v1_post_send(struct ib_qp *ibqp,
+ 				break;
+ 			}
+ 
+-			/*Ctrl field, ctrl set type: sig, solic, imm, fence */
++			/* Ctrl field, ctrl set type: sig, solic, imm, fence */
+ 			/* SO wait for conforming application scenarios */
+ 			ctrl->flag |= (wr->send_flags & IB_SEND_SIGNALED ?
+ 				      cpu_to_le32(HNS_ROCE_WQE_CQ_NOTIFY) : 0) |
+@@ -300,7 +300,7 @@ static int hns_roce_v1_post_send(struct ib_qp *ibqp,
+ 				}
+ 				ctrl->flag |= cpu_to_le32(HNS_ROCE_WQE_INLINE);
+ 			} else {
+-				/*sqe num is two */
++				/* sqe num is two */
+ 				for (i = 0; i < wr->num_sge; i++)
+ 					set_data_seg(dseg + i, wr->sg_list + i);
+ 
+@@ -450,6 +450,82 @@ static void hns_roce_set_db_event_mode(struct hns_roce_dev *hr_dev,
+ 	roce_write(hr_dev, ROCEE_GLB_CFG_REG, val);
+ }
+ 
++static int hns_roce_v1_set_hem(struct hns_roce_dev *hr_dev,
++			       struct hns_roce_hem_table *table, int obj,
++			       int step_idx)
++{
++	spinlock_t *lock = &hr_dev->bt_cmd_lock;
++	struct device *dev = hr_dev->dev;
++	struct hns_roce_hem_iter iter;
++	void __iomem *bt_cmd;
++	__le32 bt_cmd_val[2];
++	__le32 bt_cmd_h = 0;
++	unsigned long flags;
++	__le32 bt_cmd_l;
++	int ret = 0;
++	u64 bt_ba;
++	long end;
++
++	/* Find the HEM(Hardware Entry Memory) entry */
++	unsigned long i = (obj & (table->num_obj - 1)) /
++			  (table->table_chunk_size / table->obj_size);
++
++	switch (table->type) {
++	case HEM_TYPE_QPC:
++	case HEM_TYPE_MTPT:
++	case HEM_TYPE_CQC:
++	case HEM_TYPE_SRQC:
++		roce_set_field(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_MDF_M,
++			ROCEE_BT_CMD_H_ROCEE_BT_CMD_MDF_S, table->type);
++		break;
++	default:
++		return ret;
++	}
++
++	roce_set_field(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_IN_MDF_M,
++		       ROCEE_BT_CMD_H_ROCEE_BT_CMD_IN_MDF_S, obj);
++	roce_set_bit(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_S, 0);
++	roce_set_bit(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_HW_SYNS_S, 1);
++
++	/* Currently iter only a chunk */
++	for (hns_roce_hem_first(table->hem[i], &iter);
++	     !hns_roce_hem_last(&iter); hns_roce_hem_next(&iter)) {
++		bt_ba = hns_roce_hem_addr(&iter) >> HNS_HW_PAGE_SHIFT;
++
++		spin_lock_irqsave(lock, flags);
++
++		bt_cmd = hr_dev->reg_base + ROCEE_BT_CMD_H_REG;
++
++		end = HW_SYNC_TIMEOUT_MSECS;
++		while (end > 0) {
++			if (!(readl(bt_cmd) >> BT_CMD_SYNC_SHIFT))
++				break;
++
++			mdelay(HW_SYNC_SLEEP_TIME_INTERVAL);
++			end -= HW_SYNC_SLEEP_TIME_INTERVAL;
++		}
++
++		if (end <= 0) {
++			dev_err(dev, "Write bt_cmd err,hw_sync is not zero.\n");
++			spin_unlock_irqrestore(lock, flags);
++			return -EBUSY;
++		}
++
++		bt_cmd_l = cpu_to_le32(bt_ba);
++		roce_set_field(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_BA_H_M,
++			       ROCEE_BT_CMD_H_ROCEE_BT_CMD_BA_H_S,
++			       upper_32_bits(bt_ba));
++
++		bt_cmd_val[0] = bt_cmd_l;
++		bt_cmd_val[1] = bt_cmd_h;
++		hns_roce_write64_k(bt_cmd_val,
++				   hr_dev->reg_base + ROCEE_BT_CMD_L_REG);
++		spin_unlock_irqrestore(lock, flags);
++	}
++
++	return ret;
++}
++
+ static void hns_roce_set_db_ext_mode(struct hns_roce_dev *hr_dev, u32 sdb_mode,
+ 				     u32 odb_mode)
+ {
+@@ -1165,7 +1241,7 @@ static int hns_roce_raq_init(struct hns_roce_dev *hr_dev)
+ 	}
+ 	raq->e_raq_buf->map = addr;
+ 
+-	/* Configure raq extended address. 48bit 4K align*/
++	/* Configure raq extended address. 48bit 4K align */
+ 	roce_write(hr_dev, ROCEE_EXT_RAQ_REG, raq->e_raq_buf->map >> 12);
+ 
+ 	/* Configure raq_shift */
+@@ -2062,11 +2138,6 @@ static void hns_roce_v1_write_cqc(struct hns_roce_dev *hr_dev,
+ 		       CQ_CONTEXT_CQC_BYTE_32_CQ_CONS_IDX_S, 0);
+ }
+ 
+-static int hns_roce_v1_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
+-{
+-	return -EOPNOTSUPP;
+-}
+-
+ static int hns_roce_v1_req_notify_cq(struct ib_cq *ibcq,
+ 				     enum ib_cq_notify_flags flags)
+ {
+@@ -2765,7 +2836,6 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ 		roce_set_field(context->qpc_bytes_16,
+ 			       QP_CONTEXT_QPC_BYTES_16_QP_NUM_M,
+ 			       QP_CONTEXT_QPC_BYTES_16_QP_NUM_S, hr_qp->qpn);
+-
+ 	} else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_INIT) {
+ 		roce_set_field(context->qpc_bytes_4,
+ 			       QP_CONTEXT_QPC_BYTES_4_TRANSPORT_SERVICE_TYPE_M,
+@@ -3798,7 +3868,6 @@ static int hns_roce_v1_aeq_int(struct hns_roce_dev *hr_dev,
+ 	int event_type;
+ 
+ 	while ((aeqe = next_aeqe_sw_v1(eq))) {
+-
+ 		/* Make sure we read the AEQ entry after we have checked the
+ 		 * ownership bit
+ 		 */
+@@ -3903,7 +3972,6 @@ static int hns_roce_v1_ceq_int(struct hns_roce_dev *hr_dev,
+ 	u32 cqn;
+ 
+ 	while ((ceqe = next_ceqe_sw_v1(eq))) {
+-
+ 		/* Make sure we read CEQ entry after we have checked the
+ 		 * ownership bit
+ 		 */
+@@ -4347,7 +4415,6 @@ static void hns_roce_v1_cleanup_eq_table(struct hns_roce_dev *hr_dev)
+ 
+ static const struct ib_device_ops hns_roce_v1_dev_ops = {
+ 	.destroy_qp = hns_roce_v1_destroy_qp,
+-	.modify_cq = hns_roce_v1_modify_cq,
+ 	.poll_cq = hns_roce_v1_poll_cq,
+ 	.post_recv = hns_roce_v1_post_recv,
+ 	.post_send = hns_roce_v1_post_send,
+@@ -4367,7 +4434,7 @@ static const struct hns_roce_hw hns_roce_hw_v1 = {
+ 	.set_mtu = hns_roce_v1_set_mtu,
+ 	.write_mtpt = hns_roce_v1_write_mtpt,
+ 	.write_cqc = hns_roce_v1_write_cqc,
+-	.modify_cq = hns_roce_v1_modify_cq,
++	.set_hem = hns_roce_v1_set_hem,
+ 	.clear_hem = hns_roce_v1_clear_hem,
+ 	.modify_qp = hns_roce_v1_modify_qp,
+ 	.query_qp = hns_roce_v1_query_qp,
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
+index ffd0156080f52..9ff1a41ddec3f 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
+@@ -419,7 +419,7 @@ struct hns_roce_wqe_data_seg {
+ 
+ struct hns_roce_wqe_raddr_seg {
+ 	__le32 rkey;
+-	__le32 len;/* reserved */
++	__le32 len; /* reserved */
+ 	__le64 raddr;
+ };
+ 
+@@ -1042,6 +1042,11 @@ struct hns_roce_db_table {
+ 	struct hns_roce_ext_db *ext_db;
+ };
+ 
++#define HW_SYNC_SLEEP_TIME_INTERVAL 20
++#define HW_SYNC_TIMEOUT_MSECS (25 * HW_SYNC_SLEEP_TIME_INTERVAL)
++#define BT_CMD_SYNC_SHIFT 31
++#define HNS_ROCE_BA_SIZE (32 * 4096)
++
+ struct hns_roce_bt_table {
+ 	struct hns_roce_buf_list qpc_buf;
+ 	struct hns_roce_buf_list mtpt_buf;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 76ed547b76ea7..322f341f41458 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -1028,8 +1028,8 @@ static int hns_roce_v2_rst_process_cmd(struct hns_roce_dev *hr_dev)
+ 	struct hns_roce_v2_priv *priv = hr_dev->priv;
+ 	struct hnae3_handle *handle = priv->handle;
+ 	const struct hnae3_ae_ops *ops = handle->ae_algo->ops;
+-	unsigned long instance_stage;	/* the current instance stage */
+-	unsigned long reset_stage;	/* the current reset stage */
++	unsigned long instance_stage; /* the current instance stage */
++	unsigned long reset_stage; /* the current reset stage */
+ 	unsigned long reset_cnt;
+ 	bool sw_resetting;
+ 	bool hw_resetting;
+@@ -2434,7 +2434,6 @@ static int hns_roce_init_link_table(struct hns_roce_dev *hr_dev,
+ 		if (i < (pg_num - 1))
+ 			entry[i].blk_ba1_nxt_ptr |=
+ 				(i + 1) << HNS_ROCE_LINK_TABLE_NXT_PTR_S;
+-
+ 	}
+ 	link_tbl->npages = pg_num;
+ 	link_tbl->pg_sz = buf_chk_sz;
+@@ -5540,16 +5539,14 @@ static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ 		case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
+ 			hns_roce_cq_event(hr_dev, cqn, event_type);
+ 			break;
+-		case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
+-			break;
+ 		case HNS_ROCE_EVENT_TYPE_MB:
+ 			hns_roce_cmd_event(hr_dev,
+ 					le16_to_cpu(aeqe->event.cmd.token),
+ 					aeqe->event.cmd.status,
+ 					le64_to_cpu(aeqe->event.cmd.out_param));
+ 			break;
++		case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
+ 		case HNS_ROCE_EVENT_TYPE_CEQ_OVERFLOW:
+-			break;
+ 		case HNS_ROCE_EVENT_TYPE_FLR:
+ 			break;
+ 		default:
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index 8a92faeb3d237..8948d2b5577d5 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -440,7 +440,7 @@ struct hns_roce_srq_context {
+ #define SRQC_BYTE_60_SRQ_DB_RECORD_ADDR_S 1
+ #define SRQC_BYTE_60_SRQ_DB_RECORD_ADDR_M GENMASK(31, 1)
+ 
+-enum{
++enum {
+ 	V2_MPT_ST_VALID = 0x1,
+ 	V2_MPT_ST_FREE	= 0x2,
+ };
+@@ -1076,9 +1076,9 @@ struct hns_roce_v2_ud_send_wqe {
+ 	__le32	dmac;
+ 	__le32	byte_48;
+ 	u8	dgid[GID_LEN_V2];
+-
+ };
+-#define	V2_UD_SEND_WQE_BYTE_4_OPCODE_S 0
++
++#define V2_UD_SEND_WQE_BYTE_4_OPCODE_S 0
+ #define V2_UD_SEND_WQE_BYTE_4_OPCODE_M GENMASK(4, 0)
+ 
+ #define	V2_UD_SEND_WQE_BYTE_4_OWNER_S 7
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index 1e8b3e4ef1b17..90cbd15f64415 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -511,8 +511,6 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
+ 		(1ULL << IB_USER_VERBS_CMD_QUERY_QP) |
+ 		(1ULL << IB_USER_VERBS_CMD_DESTROY_QP);
+ 
+-	ib_dev->uverbs_ex_cmd_mask |= (1ULL << IB_USER_VERBS_EX_CMD_MODIFY_CQ);
+-
+ 	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_REREG_MR) {
+ 		ib_dev->uverbs_cmd_mask |= (1ULL << IB_USER_VERBS_CMD_REREG_MR);
+ 		ib_set_device_ops(ib_dev, &hns_roce_dev_mr_ops);
+@@ -584,8 +582,8 @@ error_failed_setup_mtu_mac:
+ 
+ static int hns_roce_init_hem(struct hns_roce_dev *hr_dev)
+ {
+-	int ret;
+ 	struct device *dev = hr_dev->dev;
++	int ret;
+ 
+ 	ret = hns_roce_init_hem_table(hr_dev, &hr_dev->mr_table.mtpt_table,
+ 				      HEM_TYPE_MTPT, hr_dev->caps.mtpt_entry_sz,
+@@ -725,8 +723,8 @@ err_unmap_dmpt:
+  */
+ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
+ {
+-	int ret;
+ 	struct device *dev = hr_dev->dev;
++	int ret;
+ 
+ 	spin_lock_init(&hr_dev->sm_lock);
+ 	spin_lock_init(&hr_dev->bt_cmd_lock);
+@@ -849,8 +847,8 @@ void hns_roce_handle_device_err(struct hns_roce_dev *hr_dev)
+ 
+ int hns_roce_init(struct hns_roce_dev *hr_dev)
+ {
+-	int ret;
+ 	struct device *dev = hr_dev->dev;
++	int ret;
+ 
+ 	if (hr_dev->hw->reset) {
+ 		ret = hr_dev->hw->reset(hr_dev, true);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index 1c342a7bd7dff..d5b3b10e0a807 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -167,10 +167,10 @@ static void hns_roce_mr_free(struct hns_roce_dev *hr_dev,
+ static int hns_roce_mr_enable(struct hns_roce_dev *hr_dev,
+ 			      struct hns_roce_mr *mr)
+ {
+-	int ret;
+ 	unsigned long mtpt_idx = key_to_hw_index(mr->key);
+-	struct device *dev = hr_dev->dev;
+ 	struct hns_roce_cmd_mailbox *mailbox;
++	struct device *dev = hr_dev->dev;
++	int ret;
+ 
+ 	/* Allocate mailbox memory */
+ 	mailbox = hns_roce_alloc_cmd_mailbox(hr_dev);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 6fe98af7741b5..c42c6761382d1 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -114,8 +114,8 @@ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
+ static void hns_roce_ib_qp_event(struct hns_roce_qp *hr_qp,
+ 				 enum hns_roce_event type)
+ {
+-	struct ib_event event;
+ 	struct ib_qp *ibqp = &hr_qp->ibqp;
++	struct ib_event event;
+ 
+ 	if (ibqp->event_handler) {
+ 		event.device = ibqp->device;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
+index 08df97e0a6654..02e2416b5fed6 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
+@@ -245,7 +245,6 @@ static int alloc_srq_idx(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
+ 			err = -ENOMEM;
+ 			goto err_idx_mtr;
+ 		}
+-
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
+index 05c7200751e50..c62cdd6456962 100644
+--- a/drivers/infiniband/hw/mlx4/main.c
++++ b/drivers/infiniband/hw/mlx4/main.c
+@@ -2682,8 +2682,6 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
+ 
+ 	ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_ops);
+ 	ibdev->ib_dev.uverbs_ex_cmd_mask |=
+-		(1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ) |
+-		(1ull << IB_USER_VERBS_EX_CMD_QUERY_DEVICE) |
+ 		(1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ) |
+ 		(1ull << IB_USER_VERBS_EX_CMD_CREATE_QP);
+ 
+@@ -2691,15 +2689,8 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
+ 	    ((mlx4_ib_port_link_layer(&ibdev->ib_dev, 1) ==
+ 	    IB_LINK_LAYER_ETHERNET) ||
+ 	    (mlx4_ib_port_link_layer(&ibdev->ib_dev, 2) ==
+-	    IB_LINK_LAYER_ETHERNET))) {
+-		ibdev->ib_dev.uverbs_ex_cmd_mask |=
+-			(1ull << IB_USER_VERBS_EX_CMD_CREATE_WQ)	  |
+-			(1ull << IB_USER_VERBS_EX_CMD_MODIFY_WQ)	  |
+-			(1ull << IB_USER_VERBS_EX_CMD_DESTROY_WQ)	  |
+-			(1ull << IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL) |
+-			(1ull << IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL);
++	    IB_LINK_LAYER_ETHERNET)))
+ 		ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_wq_ops);
+-	}
+ 
+ 	if (dev->caps.flags & MLX4_DEV_CAP_FLAG_MEM_WINDOW ||
+ 	    dev->caps.bmme_flags & MLX4_BMME_FLAG_TYPE_2_WIN) {
+@@ -2718,9 +2709,6 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
+ 
+ 	if (check_flow_steering_support(dev)) {
+ 		ibdev->steering_support = MLX4_STEERING_MODE_DEVICE_MANAGED;
+-		ibdev->ib_dev.uverbs_ex_cmd_mask	|=
+-			(1ull << IB_USER_VERBS_EX_CMD_CREATE_FLOW) |
+-			(1ull << IB_USER_VERBS_EX_CMD_DESTROY_FLOW);
+ 		ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_fs_ops);
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 39ba7005f2c4c..215d6618839be 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -4180,14 +4180,10 @@ static int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev)
+ 		(1ull << IB_USER_VERBS_CMD_DESTROY_SRQ)		|
+ 		(1ull << IB_USER_VERBS_CMD_CREATE_XSRQ)		|
+ 		(1ull << IB_USER_VERBS_CMD_OPEN_QP);
+-	dev->ib_dev.uverbs_ex_cmd_mask =
+-		(1ull << IB_USER_VERBS_EX_CMD_QUERY_DEVICE)	|
++	dev->ib_dev.uverbs_ex_cmd_mask |=
+ 		(1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ)	|
+ 		(1ull << IB_USER_VERBS_EX_CMD_CREATE_QP)	|
+-		(1ull << IB_USER_VERBS_EX_CMD_MODIFY_QP)	|
+-		(1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ)	|
+-		(1ull << IB_USER_VERBS_EX_CMD_CREATE_FLOW)	|
+-		(1ull << IB_USER_VERBS_EX_CMD_DESTROY_FLOW);
++		(1ull << IB_USER_VERBS_EX_CMD_MODIFY_QP);
+ 
+ 	if (MLX5_CAP_GEN(mdev, ipoib_enhanced_offloads) &&
+ 	    IS_ENABLED(CONFIG_MLX5_CORE_IPOIB))
+@@ -4290,12 +4286,6 @@ static int mlx5_ib_roce_init(struct mlx5_ib_dev *dev)
+ 	ll = mlx5_port_type_cap_to_rdma_ll(port_type_cap);
+ 
+ 	if (ll == IB_LINK_LAYER_ETHERNET) {
+-		dev->ib_dev.uverbs_ex_cmd_mask |=
+-			(1ull << IB_USER_VERBS_EX_CMD_CREATE_WQ) |
+-			(1ull << IB_USER_VERBS_EX_CMD_MODIFY_WQ) |
+-			(1ull << IB_USER_VERBS_EX_CMD_DESTROY_WQ) |
+-			(1ull << IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL) |
+-			(1ull << IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL);
+ 		ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_common_roce_ops);
+ 
+ 		port_num = mlx5_core_native_port_num(dev->mdev) - 1;
+diff --git a/drivers/input/misc/adxl34x.c b/drivers/input/misc/adxl34x.c
+index 4cc4e8ff42b33..ad035c342cd3b 100644
+--- a/drivers/input/misc/adxl34x.c
++++ b/drivers/input/misc/adxl34x.c
+@@ -811,8 +811,7 @@ struct adxl34x *adxl34x_probe(struct device *dev, int irq,
+ 	AC_WRITE(ac, POWER_CTL, 0);
+ 
+ 	err = request_threaded_irq(ac->irq, NULL, adxl34x_irq,
+-				   IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
+-				   dev_name(dev), ac);
++				   IRQF_ONESHOT, dev_name(dev), ac);
+ 	if (err) {
+ 		dev_err(dev, "irq %d busy?\n", ac->irq);
+ 		goto err_free_mem;
+diff --git a/drivers/input/misc/drv260x.c b/drivers/input/misc/drv260x.c
+index 79d7fa710a714..54002d1a446b7 100644
+--- a/drivers/input/misc/drv260x.c
++++ b/drivers/input/misc/drv260x.c
+@@ -435,6 +435,7 @@ static int drv260x_init(struct drv260x_data *haptics)
+ 	}
+ 
+ 	do {
++		usleep_range(15000, 15500);
+ 		error = regmap_read(haptics->regmap, DRV260X_GO, &cal_buf);
+ 		if (error) {
+ 			dev_err(&haptics->client->dev,
+diff --git a/drivers/irqchip/irq-jcore-aic.c b/drivers/irqchip/irq-jcore-aic.c
+index 033bccb41455c..b9dcc8e78c750 100644
+--- a/drivers/irqchip/irq-jcore-aic.c
++++ b/drivers/irqchip/irq-jcore-aic.c
+@@ -68,6 +68,7 @@ static int __init aic_irq_of_init(struct device_node *node,
+ 	unsigned min_irq = JCORE_AIC2_MIN_HWIRQ;
+ 	unsigned dom_sz = JCORE_AIC_MAX_HWIRQ+1;
+ 	struct irq_domain *domain;
++	int ret;
+ 
+ 	pr_info("Initializing J-Core AIC\n");
+ 
+@@ -100,11 +101,17 @@ static int __init aic_irq_of_init(struct device_node *node,
+ 	jcore_aic.irq_unmask = noop;
+ 	jcore_aic.name = "AIC";
+ 
+-	domain = irq_domain_add_linear(node, dom_sz, &jcore_aic_irqdomain_ops,
++	ret = irq_alloc_descs(-1, min_irq, dom_sz - min_irq,
++			      of_node_to_nid(node));
++
++	if (ret < 0)
++		return ret;
++
++	domain = irq_domain_add_legacy(node, dom_sz - min_irq, min_irq, min_irq,
++				       &jcore_aic_irqdomain_ops,
+ 				       &jcore_aic);
+ 	if (!domain)
+ 		return -ENOMEM;
+-	irq_create_strict_mappings(domain, min_irq, min_irq, dom_sz - min_irq);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/leds/trigger/ledtrig-netdev.c b/drivers/leds/trigger/ledtrig-netdev.c
+index d5e774d830215..f4d670ec30bcb 100644
+--- a/drivers/leds/trigger/ledtrig-netdev.c
++++ b/drivers/leds/trigger/ledtrig-netdev.c
+@@ -318,6 +318,9 @@ static int netdev_trig_notify(struct notifier_block *nb,
+ 	clear_bit(NETDEV_LED_MODE_LINKUP, &trigger_data->mode);
+ 	switch (evt) {
+ 	case NETDEV_CHANGENAME:
++		if (netif_carrier_ok(dev))
++			set_bit(NETDEV_LED_MODE_LINKUP, &trigger_data->mode);
++		fallthrough;
+ 	case NETDEV_REGISTER:
+ 		if (trigger_data->net_dev)
+ 			dev_put(trigger_data->net_dev);
+diff --git a/drivers/mailbox/ti-msgmgr.c b/drivers/mailbox/ti-msgmgr.c
+index 0130628f4d9db..535fe73ce3109 100644
+--- a/drivers/mailbox/ti-msgmgr.c
++++ b/drivers/mailbox/ti-msgmgr.c
+@@ -385,14 +385,20 @@ static int ti_msgmgr_send_data(struct mbox_chan *chan, void *data)
+ 		/* Ensure all unused data is 0 */
+ 		data_trail &= 0xFFFFFFFF >> (8 * (sizeof(u32) - trail_bytes));
+ 		writel(data_trail, data_reg);
+-		data_reg++;
++		data_reg += sizeof(u32);
+ 	}
++
+ 	/*
+ 	 * 'data_reg' indicates next register to write. If we did not already
+ 	 * write on tx complete reg(last reg), we must do so for transmit
++	 * In addition, we also need to make sure all intermediate data
++	 * registers(if any required), are reset to 0 for TISCI backward
++	 * compatibility to be maintained.
+ 	 */
+-	if (data_reg <= qinst->queue_buff_end)
+-		writel(0, qinst->queue_buff_end);
++	while (data_reg <= qinst->queue_buff_end) {
++		writel(0, data_reg);
++		data_reg += sizeof(u32);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index b47c00dea0f20..24c57bb85b359 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -885,7 +885,7 @@ static struct btree *mca_cannibalize(struct cache_set *c, struct btree_op *op,
+  * cannibalize_bucket() will take. This means every time we unlock the root of
+  * the btree, we need to release this lock if we have it held.
+  */
+-static void bch_cannibalize_unlock(struct cache_set *c)
++void bch_cannibalize_unlock(struct cache_set *c)
+ {
+ 	spin_lock(&c->btree_cannibalize_lock);
+ 	if (c->btree_cache_alloc_lock == current) {
+@@ -1090,10 +1090,12 @@ struct btree *__bch_btree_node_alloc(struct cache_set *c, struct btree_op *op,
+ 				     struct btree *parent)
+ {
+ 	BKEY_PADDED(key) k;
+-	struct btree *b = ERR_PTR(-EAGAIN);
++	struct btree *b;
+ 
+ 	mutex_lock(&c->bucket_lock);
+ retry:
++	/* return ERR_PTR(-EAGAIN) when it fails */
++	b = ERR_PTR(-EAGAIN);
+ 	if (__bch_bucket_alloc_set(c, RESERVE_BTREE, &k.key, wait))
+ 		goto err;
+ 
+@@ -1138,7 +1140,7 @@ static struct btree *btree_node_alloc_replacement(struct btree *b,
+ {
+ 	struct btree *n = bch_btree_node_alloc(b->c, op, b->level, b->parent);
+ 
+-	if (!IS_ERR_OR_NULL(n)) {
++	if (!IS_ERR(n)) {
+ 		mutex_lock(&n->write_lock);
+ 		bch_btree_sort_into(&b->keys, &n->keys, &b->c->sort);
+ 		bkey_copy_key(&n->key, &b->key);
+@@ -1340,7 +1342,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
+ 	memset(new_nodes, 0, sizeof(new_nodes));
+ 	closure_init_stack(&cl);
+ 
+-	while (nodes < GC_MERGE_NODES && !IS_ERR_OR_NULL(r[nodes].b))
++	while (nodes < GC_MERGE_NODES && !IS_ERR(r[nodes].b))
+ 		keys += r[nodes++].keys;
+ 
+ 	blocks = btree_default_blocks(b->c) * 2 / 3;
+@@ -1352,7 +1354,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
+ 
+ 	for (i = 0; i < nodes; i++) {
+ 		new_nodes[i] = btree_node_alloc_replacement(r[i].b, NULL);
+-		if (IS_ERR_OR_NULL(new_nodes[i]))
++		if (IS_ERR(new_nodes[i]))
+ 			goto out_nocoalesce;
+ 	}
+ 
+@@ -1487,7 +1489,7 @@ out_nocoalesce:
+ 	bch_keylist_free(&keylist);
+ 
+ 	for (i = 0; i < nodes; i++)
+-		if (!IS_ERR_OR_NULL(new_nodes[i])) {
++		if (!IS_ERR(new_nodes[i])) {
+ 			btree_node_free(new_nodes[i]);
+ 			rw_unlock(true, new_nodes[i]);
+ 		}
+@@ -1669,7 +1671,7 @@ static int bch_btree_gc_root(struct btree *b, struct btree_op *op,
+ 	if (should_rewrite) {
+ 		n = btree_node_alloc_replacement(b, NULL);
+ 
+-		if (!IS_ERR_OR_NULL(n)) {
++		if (!IS_ERR(n)) {
+ 			bch_btree_node_write_sync(n);
+ 
+ 			bch_btree_set_root(n);
+@@ -1968,6 +1970,15 @@ static int bch_btree_check_thread(void *arg)
+ 			c->gc_stats.nodes++;
+ 			bch_btree_op_init(&op, 0);
+ 			ret = bcache_btree(check_recurse, p, c->root, &op);
++			/*
++			 * The op may be added to cache_set's btree_cache_wait
++			 * in mca_cannibalize(), must ensure it is removed from
++			 * the list and release btree_cache_alloc_lock before
++			 * free op memory.
++			 * Otherwise, the btree_cache_wait will be damaged.
++			 */
++			bch_cannibalize_unlock(c);
++			finish_wait(&c->btree_cache_wait, &(&op)->wait);
+ 			if (ret)
+ 				goto out;
+ 		}
+diff --git a/drivers/md/bcache/btree.h b/drivers/md/bcache/btree.h
+index 1b5fdbc0d83eb..a2920bbfcad56 100644
+--- a/drivers/md/bcache/btree.h
++++ b/drivers/md/bcache/btree.h
+@@ -282,6 +282,7 @@ void bch_initial_gc_finish(struct cache_set *c);
+ void bch_moving_gc(struct cache_set *c);
+ int bch_btree_check(struct cache_set *c);
+ void bch_initial_mark_key(struct cache_set *c, int level, struct bkey *k);
++void bch_cannibalize_unlock(struct cache_set *c);
+ 
+ static inline void wake_up_gc(struct cache_set *c)
+ {
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 7f5ea25096430..e8c8077667f9e 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1748,7 +1748,7 @@ static void cache_set_flush(struct closure *cl)
+ 	if (!IS_ERR_OR_NULL(c->gc_thread))
+ 		kthread_stop(c->gc_thread);
+ 
+-	if (!IS_ERR_OR_NULL(c->root))
++	if (!IS_ERR(c->root))
+ 		list_add(&c->root->list, &c->btree_cache);
+ 
+ 	/*
+@@ -2112,7 +2112,7 @@ static int run_cache_set(struct cache_set *c)
+ 
+ 		err = "cannot allocate new btree root";
+ 		c->root = __bch_btree_node_alloc(c, NULL, 0, true, NULL);
+-		if (IS_ERR_OR_NULL(c->root))
++		if (IS_ERR(c->root))
+ 			goto err;
+ 
+ 		mutex_lock(&c->root->write_lock);
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 3aa73da2c67bd..6324c922f6ba4 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -834,6 +834,16 @@ static int bch_root_node_dirty_init(struct cache_set *c,
+ 	if (ret < 0)
+ 		pr_warn("sectors dirty init failed, ret=%d!\n", ret);
+ 
++	/*
++	 * The op may be added to cache_set's btree_cache_wait
++	 * in mca_cannibalize(), must ensure it is removed from
++	 * the list and release btree_cache_alloc_lock before
++	 * free op memory.
++	 * Otherwise, the btree_cache_wait will be damaged.
++	 */
++	bch_cannibalize_unlock(c);
++	finish_wait(&c->btree_cache_wait, &(&op.op)->wait);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 20afc0aec1778..f843ade442dec 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -54,14 +54,7 @@ __acquires(bitmap->lock)
+ {
+ 	unsigned char *mappage;
+ 
+-	if (page >= bitmap->pages) {
+-		/* This can happen if bitmap_start_sync goes beyond
+-		 * End-of-device while looking for a whole page.
+-		 * It is harmless.
+-		 */
+-		return -EINVAL;
+-	}
+-
++	WARN_ON_ONCE(page >= bitmap->pages);
+ 	if (bitmap->bp[page].hijacked) /* it's hijacked, don't try to alloc */
+ 		return 0;
+ 
+@@ -1365,6 +1358,14 @@ __acquires(bitmap->lock)
+ 	sector_t csize;
+ 	int err;
+ 
++	if (page >= bitmap->pages) {
++		/*
++		 * This can happen if bitmap_start_sync goes beyond
++		 * End-of-device while looking for a whole page or
++		 * user set a huge number to sysfs bitmap_set_bits.
++		 */
++		return NULL;
++	}
+ 	err = md_bitmap_checkpage(bitmap, page, create, 0);
+ 
+ 	if (bitmap->bp[page].hijacked ||
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 1553c2495841b..ae0a857d6076a 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -3890,8 +3890,9 @@ int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale)
+ static ssize_t
+ safe_delay_show(struct mddev *mddev, char *page)
+ {
+-	int msec = (mddev->safemode_delay*1000)/HZ;
+-	return sprintf(page, "%d.%03d\n", msec/1000, msec%1000);
++	unsigned int msec = ((unsigned long)mddev->safemode_delay*1000)/HZ;
++
++	return sprintf(page, "%u.%03u\n", msec/1000, msec%1000);
+ }
+ static ssize_t
+ safe_delay_store(struct mddev *mddev, const char *cbuf, size_t len)
+@@ -3903,7 +3904,7 @@ safe_delay_store(struct mddev *mddev, const char *cbuf, size_t len)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (strict_strtoul_scaled(cbuf, &msec, 3) < 0)
++	if (strict_strtoul_scaled(cbuf, &msec, 3) < 0 || msec > UINT_MAX / HZ)
+ 		return -EINVAL;
+ 	if (msec == 0)
+ 		mddev->safemode_delay = 0;
+@@ -4573,6 +4574,8 @@ max_corrected_read_errors_store(struct mddev *mddev, const char *buf, size_t len
+ 	rv = kstrtouint(buf, 10, &n);
+ 	if (rv < 0)
+ 		return rv;
++	if (n > INT_MAX)
++		return -EINVAL;
+ 	atomic_set(&mddev->max_corr_read_errors, n);
+ 	return len;
+ }
+@@ -4887,11 +4890,21 @@ action_store(struct mddev *mddev, const char *page, size_t len)
+ 			return -EINVAL;
+ 		err = mddev_lock(mddev);
+ 		if (!err) {
+-			if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
++			if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
+ 				err =  -EBUSY;
+-			else {
++			} else if (mddev->reshape_position == MaxSector ||
++				   mddev->pers->check_reshape == NULL ||
++				   mddev->pers->check_reshape(mddev)) {
+ 				clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 				err = mddev->pers->start_reshape(mddev);
++			} else {
++				/*
++				 * If reshape is still in progress, and
++				 * md_check_recovery() can continue to reshape,
++				 * don't restart reshape because data can be
++				 * corrupted for raid456.
++				 */
++				clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 			}
+ 			mddev_unlock(mddev);
+ 		}
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index a20332e755e81..ee2cfd6c2dfbd 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -274,6 +274,18 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
+ 		goto abort;
+ 	}
+ 
++	if (conf->layout == RAID0_ORIG_LAYOUT) {
++		for (i = 1; i < conf->nr_strip_zones; i++) {
++			sector_t first_sector = conf->strip_zone[i-1].zone_end;
++
++			sector_div(first_sector, mddev->chunk_sectors);
++			zone = conf->strip_zone + i;
++			/* disk_shift is first disk index used in the zone */
++			zone->disk_shift = sector_div(first_sector,
++						      zone->nb_dev);
++		}
++	}
++
+ 	pr_debug("md/raid0:%s: done.\n", mdname(mddev));
+ 	*private_conf = conf;
+ 
+@@ -427,6 +439,20 @@ static void raid0_free(struct mddev *mddev, void *priv)
+ 	kfree(conf);
+ }
+ 
++/*
++ * Convert disk_index to the disk order in which it is read/written.
++ *  For example, if we have 4 disks, they are numbered 0,1,2,3. If we
++ *  write the disks starting at disk 3, then the read/write order would
++ *  be disk 3, then 0, then 1, and then disk 2 and we want map_disk_shift()
++ *  to map the disks as follows 0,1,2,3 => 1,2,3,0. So disk 0 would map
++ *  to 1, 1 to 2, 2 to 3, and 3 to 0. That way we can compare disks in
++ *  that 'output' space to understand the read/write disk ordering.
++ */
++static int map_disk_shift(int disk_index, int num_disks, int disk_shift)
++{
++	return ((disk_index + num_disks - disk_shift) % num_disks);
++}
++
+ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
+ {
+ 	struct r0conf *conf = mddev->private;
+@@ -440,7 +466,9 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
+ 	sector_t end_disk_offset;
+ 	unsigned int end_disk_index;
+ 	unsigned int disk;
++	sector_t orig_start, orig_end;
+ 
++	orig_start = start;
+ 	zone = find_zone(conf, &start);
+ 
+ 	if (bio_end_sector(bio) > zone->zone_end) {
+@@ -454,6 +482,7 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
+ 	} else
+ 		end = bio_end_sector(bio);
+ 
++	orig_end = end;
+ 	if (zone != conf->strip_zone)
+ 		end = end - zone[-1].zone_end;
+ 
+@@ -465,13 +494,26 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
+ 	last_stripe_index = end;
+ 	sector_div(last_stripe_index, stripe_size);
+ 
+-	start_disk_index = (int)(start - first_stripe_index * stripe_size) /
+-		mddev->chunk_sectors;
++	/* In the first zone the original and alternate layouts are the same */
++	if ((conf->layout == RAID0_ORIG_LAYOUT) && (zone != conf->strip_zone)) {
++		sector_div(orig_start, mddev->chunk_sectors);
++		start_disk_index = sector_div(orig_start, zone->nb_dev);
++		start_disk_index = map_disk_shift(start_disk_index,
++						  zone->nb_dev,
++						  zone->disk_shift);
++		sector_div(orig_end, mddev->chunk_sectors);
++		end_disk_index = sector_div(orig_end, zone->nb_dev);
++		end_disk_index = map_disk_shift(end_disk_index,
++						zone->nb_dev, zone->disk_shift);
++	} else {
++		start_disk_index = (int)(start - first_stripe_index * stripe_size) /
++			mddev->chunk_sectors;
++		end_disk_index = (int)(end - last_stripe_index * stripe_size) /
++			mddev->chunk_sectors;
++	}
+ 	start_disk_offset = ((int)(start - first_stripe_index * stripe_size) %
+ 		mddev->chunk_sectors) +
+ 		first_stripe_index * mddev->chunk_sectors;
+-	end_disk_index = (int)(end - last_stripe_index * stripe_size) /
+-		mddev->chunk_sectors;
+ 	end_disk_offset = ((int)(end - last_stripe_index * stripe_size) %
+ 		mddev->chunk_sectors) +
+ 		last_stripe_index * mddev->chunk_sectors;
+@@ -480,18 +522,22 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
+ 		sector_t dev_start, dev_end;
+ 		struct bio *discard_bio = NULL;
+ 		struct md_rdev *rdev;
++		int compare_disk;
++
++		compare_disk = map_disk_shift(disk, zone->nb_dev,
++					      zone->disk_shift);
+ 
+-		if (disk < start_disk_index)
++		if (compare_disk < start_disk_index)
+ 			dev_start = (first_stripe_index + 1) *
+ 				mddev->chunk_sectors;
+-		else if (disk > start_disk_index)
++		else if (compare_disk > start_disk_index)
+ 			dev_start = first_stripe_index * mddev->chunk_sectors;
+ 		else
+ 			dev_start = start_disk_offset;
+ 
+-		if (disk < end_disk_index)
++		if (compare_disk < end_disk_index)
+ 			dev_end = (last_stripe_index + 1) * mddev->chunk_sectors;
+-		else if (disk > end_disk_index)
++		else if (compare_disk > end_disk_index)
+ 			dev_end = last_stripe_index * mddev->chunk_sectors;
+ 		else
+ 			dev_end = end_disk_offset;
+diff --git a/drivers/md/raid0.h b/drivers/md/raid0.h
+index 3816e5477db1e..8cc761ca74230 100644
+--- a/drivers/md/raid0.h
++++ b/drivers/md/raid0.h
+@@ -6,6 +6,7 @@ struct strip_zone {
+ 	sector_t zone_end;	/* Start of the next zone (in sectors) */
+ 	sector_t dev_start;	/* Zone offset in real dev (in sectors) */
+ 	int	 nb_dev;	/* # of devices attached to the zone */
++	int	 disk_shift;	/* start disk for the original layout */
+ };
+ 
+ /* Linux 3.14 (20d0189b101) made an unintended change to
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 6a0459f9fafbc..55144f7d93037 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -751,8 +751,16 @@ static struct md_rdev *read_balance(struct r10conf *conf,
+ 		disk = r10_bio->devs[slot].devnum;
+ 		rdev = rcu_dereference(conf->mirrors[disk].replacement);
+ 		if (rdev == NULL || test_bit(Faulty, &rdev->flags) ||
+-		    r10_bio->devs[slot].addr + sectors > rdev->recovery_offset)
++		    r10_bio->devs[slot].addr + sectors >
++		    rdev->recovery_offset) {
++			/*
++			 * Read replacement first to prevent reading both rdev
++			 * and replacement as NULL during replacement replace
++			 * rdev.
++			 */
++			smp_mb();
+ 			rdev = rcu_dereference(conf->mirrors[disk].rdev);
++		}
+ 		if (rdev == NULL ||
+ 		    test_bit(Faulty, &rdev->flags))
+ 			continue;
+@@ -894,6 +902,7 @@ static void flush_pending_writes(struct r10conf *conf)
+ 			else
+ 				submit_bio_noacct(bio);
+ 			bio = next;
++			cond_resched();
+ 		}
+ 		blk_finish_plug(&plug);
+ 	} else
+@@ -1087,6 +1096,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
+ 		else
+ 			submit_bio_noacct(bio);
+ 		bio = next;
++		cond_resched();
+ 	}
+ 	kfree(plug);
+ }
+@@ -1346,9 +1356,15 @@ retry_write:
+ 
+ 	for (i = 0;  i < conf->copies; i++) {
+ 		int d = r10_bio->devs[i].devnum;
+-		struct md_rdev *rdev = rcu_dereference(conf->mirrors[d].rdev);
+-		struct md_rdev *rrdev = rcu_dereference(
+-			conf->mirrors[d].replacement);
++		struct md_rdev *rdev, *rrdev;
++
++		rrdev = rcu_dereference(conf->mirrors[d].replacement);
++		/*
++		 * Read replacement first to prevent reading both rdev and
++		 * replacement as NULL during replacement replace rdev.
++		 */
++		smp_mb();
++		rdev = rcu_dereference(conf->mirrors[d].rdev);
+ 		if (rdev == rrdev)
+ 			rrdev = NULL;
+ 		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
+@@ -3037,7 +3053,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 			int must_sync;
+ 			int any_working;
+ 			int need_recover = 0;
+-			int need_replace = 0;
+ 			struct raid10_info *mirror = &conf->mirrors[i];
+ 			struct md_rdev *mrdev, *mreplace;
+ 
+@@ -3049,11 +3064,10 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 			    !test_bit(Faulty, &mrdev->flags) &&
+ 			    !test_bit(In_sync, &mrdev->flags))
+ 				need_recover = 1;
+-			if (mreplace != NULL &&
+-			    !test_bit(Faulty, &mreplace->flags))
+-				need_replace = 1;
++			if (mreplace && test_bit(Faulty, &mreplace->flags))
++				mreplace = NULL;
+ 
+-			if (!need_recover && !need_replace) {
++			if (!need_recover && !mreplace) {
+ 				rcu_read_unlock();
+ 				continue;
+ 			}
+@@ -3069,8 +3083,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 				rcu_read_unlock();
+ 				continue;
+ 			}
+-			if (mreplace && test_bit(Faulty, &mreplace->flags))
+-				mreplace = NULL;
+ 			/* Unless we are doing a full sync, or a replacement
+ 			 * we only need to recover the block if it is set in
+ 			 * the bitmap
+@@ -3193,11 +3205,11 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 				bio = r10_bio->devs[1].repl_bio;
+ 				if (bio)
+ 					bio->bi_end_io = NULL;
+-				/* Note: if need_replace, then bio
++				/* Note: if replace is not NULL, then bio
+ 				 * cannot be NULL as r10buf_pool_alloc will
+ 				 * have allocated it.
+ 				 */
+-				if (!need_replace)
++				if (!mreplace)
+ 					break;
+ 				bio->bi_next = biolist;
+ 				biolist = bio;
+diff --git a/drivers/media/cec/i2c/Kconfig b/drivers/media/cec/i2c/Kconfig
+index 70432a1d69186..d912d143fb312 100644
+--- a/drivers/media/cec/i2c/Kconfig
++++ b/drivers/media/cec/i2c/Kconfig
+@@ -5,6 +5,7 @@
+ config CEC_CH7322
+ 	tristate "Chrontel CH7322 CEC controller"
+ 	depends on I2C
++	select REGMAP
+ 	select REGMAP_I2C
+ 	select CEC_CORE
+ 	help
+diff --git a/drivers/media/platform/qcom/venus/helpers.c b/drivers/media/platform/qcom/venus/helpers.c
+index 5ca3920237c5a..5fdce5f07364e 100644
+--- a/drivers/media/platform/qcom/venus/helpers.c
++++ b/drivers/media/platform/qcom/venus/helpers.c
+@@ -917,8 +917,8 @@ static u32 get_framesize_raw_yuv420_tp10_ubwc(u32 width, u32 height)
+ 	u32 extradata = SZ_16K;
+ 	u32 size;
+ 
+-	y_stride = ALIGN(ALIGN(width, 192) * 4 / 3, 256);
+-	uv_stride = ALIGN(ALIGN(width, 192) * 4 / 3, 256);
++	y_stride = ALIGN(width * 4 / 3, 256);
++	uv_stride = ALIGN(width * 4 / 3, 256);
+ 	y_sclines = ALIGN(height, 16);
+ 	uv_sclines = ALIGN((height + 1) >> 1, 16);
+ 
+diff --git a/drivers/media/usb/dvb-usb-v2/az6007.c b/drivers/media/usb/dvb-usb-v2/az6007.c
+index 62ee09f28a0bc..7524c90f5da61 100644
+--- a/drivers/media/usb/dvb-usb-v2/az6007.c
++++ b/drivers/media/usb/dvb-usb-v2/az6007.c
+@@ -202,7 +202,8 @@ static int az6007_rc_query(struct dvb_usb_device *d)
+ 	unsigned code;
+ 	enum rc_proto proto;
+ 
+-	az6007_read(d, AZ6007_READ_IR, 0, 0, st->data, 10);
++	if (az6007_read(d, AZ6007_READ_IR, 0, 0, st->data, 10) < 0)
++		return -EIO;
+ 
+ 	if (st->data[1] == 0x44)
+ 		return 0;
+diff --git a/drivers/media/usb/siano/smsusb.c b/drivers/media/usb/siano/smsusb.c
+index 1babfe6e2c361..5c223b5498b4b 100644
+--- a/drivers/media/usb/siano/smsusb.c
++++ b/drivers/media/usb/siano/smsusb.c
+@@ -179,7 +179,8 @@ static void smsusb_stop_streaming(struct smsusb_device_t *dev)
+ 
+ 	for (i = 0; i < MAX_URBS; i++) {
+ 		usb_kill_urb(&dev->surbs[i].urb);
+-		cancel_work_sync(&dev->surbs[i].wq);
++		if (dev->surbs[i].wq.func)
++			cancel_work_sync(&dev->surbs[i].wq);
+ 
+ 		if (dev->surbs[i].cb) {
+ 			smscore_putbuffer(dev->coredev, dev->surbs[i].cb);
+diff --git a/drivers/memory/brcmstb_dpfe.c b/drivers/memory/brcmstb_dpfe.c
+index f43ba69fbb3e3..2daae2e0cb19e 100644
+--- a/drivers/memory/brcmstb_dpfe.c
++++ b/drivers/memory/brcmstb_dpfe.c
+@@ -434,15 +434,17 @@ static void __finalize_command(struct brcmstb_dpfe_priv *priv)
+ static int __send_command(struct brcmstb_dpfe_priv *priv, unsigned int cmd,
+ 			  u32 result[])
+ {
+-	const u32 *msg = priv->dpfe_api->command[cmd];
+ 	void __iomem *regs = priv->regs;
+ 	unsigned int i, chksum, chksum_idx;
++	const u32 *msg;
+ 	int ret = 0;
+ 	u32 resp;
+ 
+ 	if (cmd >= DPFE_CMD_MAX)
+ 		return -1;
+ 
++	msg = priv->dpfe_api->command[cmd];
++
+ 	mutex_lock(&priv->lock);
+ 
+ 	/* Wait for DCPU to become ready */
+diff --git a/drivers/memstick/host/r592.c b/drivers/memstick/host/r592.c
+index dd06c18495eb6..0e37c6a5ee36c 100644
+--- a/drivers/memstick/host/r592.c
++++ b/drivers/memstick/host/r592.c
+@@ -44,12 +44,10 @@ static const char *tpc_names[] = {
+  * memstick_debug_get_tpc_name - debug helper that returns string for
+  * a TPC number
+  */
+-const char *memstick_debug_get_tpc_name(int tpc)
++static __maybe_unused const char *memstick_debug_get_tpc_name(int tpc)
+ {
+ 	return tpc_names[tpc-1];
+ }
+-EXPORT_SYMBOL(memstick_debug_get_tpc_name);
+-
+ 
+ /* Read a register*/
+ static inline u32 r592_read_reg(struct r592_device *dev, int address)
+diff --git a/drivers/mfd/intel-lpss-acpi.c b/drivers/mfd/intel-lpss-acpi.c
+index 045cbf0cbe53a..993e305a232c5 100644
+--- a/drivers/mfd/intel-lpss-acpi.c
++++ b/drivers/mfd/intel-lpss-acpi.c
+@@ -114,6 +114,9 @@ static int intel_lpss_acpi_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	info->mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!info->mem)
++		return -ENODEV;
++
+ 	info->irq = platform_get_irq(pdev, 0);
+ 
+ 	ret = intel_lpss_probe(&pdev->dev, info);
+diff --git a/drivers/mfd/rt5033.c b/drivers/mfd/rt5033.c
+index 48381d9bf7403..302115dabff4b 100644
+--- a/drivers/mfd/rt5033.c
++++ b/drivers/mfd/rt5033.c
+@@ -41,9 +41,6 @@ static const struct mfd_cell rt5033_devs[] = {
+ 	{
+ 		.name = "rt5033-charger",
+ 		.of_compatible = "richtek,rt5033-charger",
+-	}, {
+-		.name = "rt5033-battery",
+-		.of_compatible = "richtek,rt5033-battery",
+ 	}, {
+ 		.name = "rt5033-led",
+ 		.of_compatible = "richtek,rt5033-led",
+diff --git a/drivers/mfd/stmfx.c b/drivers/mfd/stmfx.c
+index 988e2ba6dd0f3..b45d7b0b842c5 100644
+--- a/drivers/mfd/stmfx.c
++++ b/drivers/mfd/stmfx.c
+@@ -330,9 +330,8 @@ static int stmfx_chip_init(struct i2c_client *client)
+ 	stmfx->vdd = devm_regulator_get_optional(&client->dev, "vdd");
+ 	ret = PTR_ERR_OR_ZERO(stmfx->vdd);
+ 	if (ret) {
+-		if (ret == -ENODEV)
+-			stmfx->vdd = NULL;
+-		else
++		stmfx->vdd = NULL;
++		if (ret != -ENODEV)
+ 			return dev_err_probe(&client->dev, ret, "Failed to get VDD regulator\n");
+ 	}
+ 
+@@ -387,7 +386,7 @@ static int stmfx_chip_init(struct i2c_client *client)
+ 
+ err:
+ 	if (stmfx->vdd)
+-		return regulator_disable(stmfx->vdd);
++		regulator_disable(stmfx->vdd);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/mfd/stmpe.c b/drivers/mfd/stmpe.c
+index 508349399f8af..7f758fb60c1fa 100644
+--- a/drivers/mfd/stmpe.c
++++ b/drivers/mfd/stmpe.c
+@@ -1494,9 +1494,9 @@ int stmpe_probe(struct stmpe_client_info *ci, enum stmpe_partnum partnum)
+ 
+ int stmpe_remove(struct stmpe *stmpe)
+ {
+-	if (!IS_ERR(stmpe->vio))
++	if (!IS_ERR(stmpe->vio) && regulator_is_enabled(stmpe->vio))
+ 		regulator_disable(stmpe->vio);
+-	if (!IS_ERR(stmpe->vcc))
++	if (!IS_ERR(stmpe->vcc) && regulator_is_enabled(stmpe->vcc))
+ 		regulator_disable(stmpe->vcc);
+ 
+ 	__stmpe_disable(stmpe, STMPE_BLOCK_ADC);
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 2488a9a67d18a..eed047e971e70 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1106,7 +1106,7 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl,
+ 
+ 	sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE, 4, 0);
+ 	if (init.attrs)
+-		sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_ATTR, 6, 0);
++		sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_ATTR, 4, 0);
+ 
+ 	err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE,
+ 				      sc, args);
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index 48eec5fe7397b..6c4c85eb71479 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -727,6 +727,10 @@ static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
+ 	struct pci_dev *pdev = test->pdev;
+ 
+ 	mutex_lock(&test->mutex);
++
++	reinit_completion(&test->irq_raised);
++	test->last_irq = -ENODATA;
++
+ 	switch (cmd) {
+ 	case PCITEST_BAR:
+ 		bar = arg;
+@@ -935,6 +939,9 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev)
+ 	if (id < 0)
+ 		return;
+ 
++	pci_endpoint_test_release_irq(test);
++	pci_endpoint_test_free_irq_vectors(test);
++
+ 	misc_deregister(&test->miscdev);
+ 	kfree(misc_device->name);
+ 	kfree(test->name);
+@@ -944,9 +951,6 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev)
+ 			pci_iounmap(pdev, test->bar[bar]);
+ 	}
+ 
+-	pci_endpoint_test_release_irq(test);
+-	pci_endpoint_test_free_irq_vectors(test);
+-
+ 	pci_release_regions(pdev);
+ 	pci_disable_device(pdev);
+ }
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index c8c0f50a2076d..afe8d8c5fa8a2 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -99,6 +99,20 @@ static const struct mmc_fixup __maybe_unused mmc_blk_fixups[] = {
+ 	MMC_FIXUP("V10016", CID_MANFID_KINGSTON, CID_OEMID_ANY, add_quirk_mmc,
+ 		  MMC_QUIRK_TRIM_BROKEN),
+ 
++	/*
++	 * Kingston EMMC04G-M627 advertises TRIM but it does not seems to
++	 * support being used to offload WRITE_ZEROES.
++	 */
++	MMC_FIXUP("M62704", CID_MANFID_KINGSTON, 0x0100, add_quirk_mmc,
++		  MMC_QUIRK_TRIM_BROKEN),
++
++	/*
++	 * Micron MTFC4GACAJCN-1M advertises TRIM but it does not seems to
++	 * support being used to offload WRITE_ZEROES.
++	 */
++	MMC_FIXUP("Q2J54A", CID_MANFID_MICRON, 0x014e, add_quirk_mmc,
++		  MMC_QUIRK_TRIM_BROKEN),
++
+ 	/*
+ 	 * Some SD cards reports discard support while they don't
+ 	 */
+diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
+index 9cfffcecc9eb2..a72ca7416138a 100644
+--- a/drivers/mmc/host/mmci.c
++++ b/drivers/mmc/host/mmci.c
+@@ -2386,6 +2386,7 @@ static struct amba_driver mmci_driver = {
+ 	.drv		= {
+ 		.name	= DRIVER_NAME,
+ 		.pm	= &mmci_dev_pm_ops,
++		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
+ 	},
+ 	.probe		= mmci_probe,
+ 	.remove		= mmci_remove,
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 133f0d3764804..bad01cc6823f6 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1145,6 +1145,8 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
+ 		}
+ 	}
+ 
++	sdhci_config_dma(host);
++
+ 	if (host->flags & SDHCI_REQ_USE_DMA) {
+ 		int sg_cnt = sdhci_pre_dma_transfer(host, data, COOKIE_MAPPED);
+ 
+@@ -1164,8 +1166,6 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
+ 		}
+ 	}
+ 
+-	sdhci_config_dma(host);
+-
+ 	if (!(host->flags & SDHCI_REQ_USE_DMA)) {
+ 		int flags;
+ 
+diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
+index 228612d82f311..ee3976b7e197e 100644
+--- a/drivers/mtd/nand/raw/meson_nand.c
++++ b/drivers/mtd/nand/raw/meson_nand.c
+@@ -72,6 +72,7 @@
+ #define GENCMDIADDRH(aih, addr)		((aih) | (((addr) >> 16) & 0xffff))
+ 
+ #define DMA_DIR(dir)		((dir) ? NFC_CMD_N2M : NFC_CMD_M2N)
++#define DMA_ADDR_ALIGN		8
+ 
+ #define ECC_CHECK_RETURN_FF	(-1)
+ 
+@@ -838,6 +839,9 @@ static int meson_nfc_read_oob(struct nand_chip *nand, int page)
+ 
+ static bool meson_nfc_is_buffer_dma_safe(const void *buffer)
+ {
++	if ((uintptr_t)buffer % DMA_ADDR_ALIGN)
++		return false;
++
+ 	if (virt_addr_valid(buffer) && (!object_is_on_stack(buffer)))
+ 		return true;
+ 	return false;
+diff --git a/drivers/net/dsa/vitesse-vsc73xx-core.c b/drivers/net/dsa/vitesse-vsc73xx-core.c
+index 19ce4aa0973b4..80eadf509c0a9 100644
+--- a/drivers/net/dsa/vitesse-vsc73xx-core.c
++++ b/drivers/net/dsa/vitesse-vsc73xx-core.c
+@@ -1025,17 +1025,17 @@ static int vsc73xx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+ 	struct vsc73xx *vsc = ds->priv;
+ 
+ 	return vsc73xx_write(vsc, VSC73XX_BLOCK_MAC, port,
+-			     VSC73XX_MAXLEN, new_mtu);
++			     VSC73XX_MAXLEN, new_mtu + ETH_HLEN + ETH_FCS_LEN);
+ }
+ 
+ /* According to application not "VSC7398 Jumbo Frames" setting
+- * up the MTU to 9.6 KB does not affect the performance on standard
++ * up the frame size to 9.6 KB does not affect the performance on standard
+  * frames. It is clear from the application note that
+  * "9.6 kilobytes" == 9600 bytes.
+  */
+ static int vsc73xx_get_max_mtu(struct dsa_switch *ds, int port)
+ {
+-	return 9600;
++	return 9600 - ETH_HLEN - ETH_FCS_LEN;
+ }
+ 
+ static const struct dsa_switch_ops vsc73xx_ds_ops = {
+diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
+index 5f8769aa469d9..d59ea5148c16c 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_com.c
++++ b/drivers/net/ethernet/amazon/ena/ena_com.c
+@@ -35,6 +35,8 @@
+ 
+ #define ENA_REGS_ADMIN_INTR_MASK 1
+ 
++#define ENA_MAX_BACKOFF_DELAY_EXP 16U
++
+ #define ENA_MIN_ADMIN_POLL_US 100
+ 
+ #define ENA_MAX_ADMIN_POLL_US 5000
+@@ -522,6 +524,7 @@ static int ena_com_comp_status_to_errno(u8 comp_status)
+ 
+ static void ena_delay_exponential_backoff_us(u32 exp, u32 delay_us)
+ {
++	exp = min_t(u32, exp, ENA_MAX_BACKOFF_DELAY_EXP);
+ 	delay_us = max_t(u32, ENA_MIN_ADMIN_POLL_US, delay_us);
+ 	delay_us = min_t(u32, delay_us * (1U << exp), ENA_MAX_ADMIN_POLL_US);
+ 	usleep_range(delay_us, 2 * delay_us);
+diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
+index bb999e67d7736..ab8ee93316354 100644
+--- a/drivers/net/ethernet/broadcom/bgmac.c
++++ b/drivers/net/ethernet/broadcom/bgmac.c
+@@ -1492,8 +1492,6 @@ int bgmac_enet_probe(struct bgmac *bgmac)
+ 
+ 	bgmac->in_init = true;
+ 
+-	bgmac_chip_intrs_off(bgmac);
+-
+ 	net_dev->irq = bgmac->irq;
+ 	SET_NETDEV_DEV(net_dev, bgmac->dev);
+ 	dev_set_drvdata(bgmac->dev, bgmac);
+@@ -1511,6 +1509,8 @@ int bgmac_enet_probe(struct bgmac *bgmac)
+ 	 */
+ 	bgmac_clk_enable(bgmac, 0);
+ 
++	bgmac_chip_intrs_off(bgmac);
++
+ 	/* This seems to be fixing IRQ by assigning OOB #6 to the core */
+ 	if (!(bgmac->feature_flags & BGMAC_FEAT_IDM_MASK)) {
+ 		if (bgmac->feature_flags & BGMAC_FEAT_IRQ_ID_OOB_6)
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 4b875838a6467..99aba64f03c2f 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -624,5 +624,7 @@ void bcmgenet_mii_exit(struct net_device *dev)
+ 	if (of_phy_is_fixed_link(dn))
+ 		of_phy_deregister_fixed_link(dn);
+ 	of_node_put(priv->phy_dn);
++	clk_prepare_enable(priv->clk);
+ 	platform_device_unregister(priv->mii_pdev);
++	clk_disable_unprepare(priv->clk);
+ }
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index 613ca6124e3ce..d14f37be1eb3e 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -224,6 +224,7 @@ MODULE_AUTHOR("David S. Miller (davem@redhat.com) and Jeff Garzik (jgarzik@pobox
+ MODULE_DESCRIPTION("Broadcom Tigon3 ethernet driver");
+ MODULE_LICENSE("GPL");
+ MODULE_FIRMWARE(FIRMWARE_TG3);
++MODULE_FIRMWARE(FIRMWARE_TG357766);
+ MODULE_FIRMWARE(FIRMWARE_TG3TSO);
+ MODULE_FIRMWARE(FIRMWARE_TG3TSO5);
+ 
+diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
+index e0449cc24fbdb..cbfd007449351 100644
+--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
++++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
+@@ -516,6 +516,9 @@ static int gve_get_link_ksettings(struct net_device *netdev,
+ 		err = gve_adminq_report_link_speed(priv);
+ 
+ 	cmd->base.speed = priv->link_speed;
++
++	cmd->base.duplex = DUPLEX_FULL;
++
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index 4680a2fe6d3cc..05cd70579c169 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -968,7 +968,7 @@ static int iavf_set_channels(struct net_device *netdev,
+ 	}
+ 	if (i == IAVF_RESET_WAIT_COMPLETE_COUNT) {
+ 		adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+-		adapter->num_active_queues = num_req;
++		adapter->num_req_queues = 0;
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index e45f3a1a11f36..b64801bc216bb 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1377,19 +1377,16 @@ static int iavf_alloc_q_vectors(struct iavf_adapter *adapter)
+ static void iavf_free_q_vectors(struct iavf_adapter *adapter)
+ {
+ 	int q_idx, num_q_vectors;
+-	int napi_vectors;
+ 
+ 	if (!adapter->q_vectors)
+ 		return;
+ 
+ 	num_q_vectors = adapter->num_msix_vectors - NONQ_VECS;
+-	napi_vectors = adapter->num_active_queues;
+ 
+ 	for (q_idx = 0; q_idx < num_q_vectors; q_idx++) {
+ 		struct iavf_q_vector *q_vector = &adapter->q_vectors[q_idx];
+ 
+-		if (q_idx < napi_vectors)
+-			netif_napi_del(&q_vector->napi);
++		netif_napi_del(&q_vector->napi);
+ 	}
+ 	kfree(adapter->q_vectors);
+ 	adapter->q_vectors = NULL;
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index c5f465814dec3..4465982100127 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -9453,6 +9453,11 @@ static pci_ers_result_t igb_io_error_detected(struct pci_dev *pdev,
+ 	struct net_device *netdev = pci_get_drvdata(pdev);
+ 	struct igb_adapter *adapter = netdev_priv(netdev);
+ 
++	if (state == pci_channel_io_normal) {
++		dev_warn(&pdev->dev, "Non-correctable non-fatal error reported.\n");
++		return PCI_ERS_RESULT_CAN_RECOVER;
++	}
++
+ 	netif_device_detach(netdev);
+ 
+ 	if (state == pci_channel_io_perm_failure)
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index 970dd878d8a76..33f64c80335d3 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -13,6 +13,7 @@
+ #include <linux/ptp_clock_kernel.h>
+ #include <linux/timecounter.h>
+ #include <linux/net_tstamp.h>
++#include <linux/bitfield.h>
+ 
+ #include "igc_hw.h"
+ 
+@@ -209,6 +210,10 @@ struct igc_adapter {
+ 	struct ptp_clock *ptp_clock;
+ 	struct ptp_clock_info ptp_caps;
+ 	struct work_struct ptp_tx_work;
++	/* Access to ptp_tx_skb and ptp_tx_start are protected by the
++	 * ptp_tx_lock.
++	 */
++	spinlock_t ptp_tx_lock;
+ 	struct sk_buff *ptp_tx_skb;
+ 	struct hwtstamp_config tstamp_config;
+ 	unsigned long ptp_tx_start;
+@@ -272,6 +277,33 @@ extern char igc_driver_name[];
+ #define IGC_MRQC_RSS_FIELD_IPV4_UDP	0x00400000
+ #define IGC_MRQC_RSS_FIELD_IPV6_UDP	0x00800000
+ 
++/* RX-desc Write-Back format RSS Type's */
++enum igc_rss_type_num {
++	IGC_RSS_TYPE_NO_HASH		= 0,
++	IGC_RSS_TYPE_HASH_TCP_IPV4	= 1,
++	IGC_RSS_TYPE_HASH_IPV4		= 2,
++	IGC_RSS_TYPE_HASH_TCP_IPV6	= 3,
++	IGC_RSS_TYPE_HASH_IPV6_EX	= 4,
++	IGC_RSS_TYPE_HASH_IPV6		= 5,
++	IGC_RSS_TYPE_HASH_TCP_IPV6_EX	= 6,
++	IGC_RSS_TYPE_HASH_UDP_IPV4	= 7,
++	IGC_RSS_TYPE_HASH_UDP_IPV6	= 8,
++	IGC_RSS_TYPE_HASH_UDP_IPV6_EX	= 9,
++	IGC_RSS_TYPE_MAX		= 10,
++};
++#define IGC_RSS_TYPE_MAX_TABLE		16
++#define IGC_RSS_TYPE_MASK		GENMASK(3,0) /* 4-bits (3:0) = mask 0x0F */
++
++/* igc_rss_type - Rx descriptor RSS type field */
++static inline u32 igc_rss_type(const union igc_adv_rx_desc *rx_desc)
++{
++	/* RSS Type 4-bits (3:0) number: 0-9 (above 9 is reserved)
++	 * Accessing the same bits via u16 (wb.lower.lo_dword.hs_rss.pkt_info)
++	 * is slightly slower than via u32 (wb.lower.lo_dword.data)
++	 */
++	return le32_get_bits(rx_desc->wb.lower.lo_dword.data, IGC_RSS_TYPE_MASK);
++}
++
+ /* Interrupt defines */
+ #define IGC_START_ITR			648 /* ~6000 ints/sec */
+ #define IGC_4K_ITR			980
+@@ -361,7 +393,6 @@ enum igc_state_t {
+ 	__IGC_TESTING,
+ 	__IGC_RESETTING,
+ 	__IGC_DOWN,
+-	__IGC_PTP_TX_IN_PROGRESS,
+ };
+ 
+ enum igc_tx_flags {
+diff --git a/drivers/net/ethernet/intel/igc/igc_ethtool.c b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+index da259cd59adda..d28ac3a025ab1 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ethtool.c
++++ b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+@@ -1673,6 +1673,8 @@ static int igc_ethtool_get_link_ksettings(struct net_device *netdev,
+ 	/* twisted pair */
+ 	cmd->base.port = PORT_TP;
+ 	cmd->base.phy_address = hw->phy.addr;
++	ethtool_link_ksettings_add_link_mode(cmd, supported, TP);
++	ethtool_link_ksettings_add_link_mode(cmd, advertising, TP);
+ 
+ 	/* advertising link modes */
+ 	if (hw->phy.autoneg_advertised & ADVERTISE_10_HALF)
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 3aa0efb542aaf..631ce793fb2ec 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -600,7 +600,6 @@ static void igc_configure_tx_ring(struct igc_adapter *adapter,
+ 	/* disable the queue */
+ 	wr32(IGC_TXDCTL(reg_idx), 0);
+ 	wrfl();
+-	mdelay(10);
+ 
+ 	wr32(IGC_TDLEN(reg_idx),
+ 	     ring->count * sizeof(union igc_adv_tx_desc));
+@@ -906,7 +905,7 @@ static __le32 igc_tx_launchtime(struct igc_ring *ring, ktime_t txtime,
+ 	ktime_t base_time = adapter->base_time;
+ 	ktime_t now = ktime_get_clocktai();
+ 	ktime_t baset_est, end_of_cycle;
+-	u32 launchtime;
++	s32 launchtime;
+ 	s64 n;
+ 
+ 	n = div64_s64(ktime_sub_ns(now, base_time), cycle_time);
+@@ -919,7 +918,7 @@ static __le32 igc_tx_launchtime(struct igc_ring *ring, ktime_t txtime,
+ 			*first_flag = true;
+ 			ring->last_ff_cycle = baset_est;
+ 
+-			if (ktime_compare(txtime, ring->last_tx_cycle) > 0)
++			if (ktime_compare(end_of_cycle, ring->last_tx_cycle) > 0)
+ 				*insert_empty = true;
+ 		}
+ 	}
+@@ -1467,9 +1466,10 @@ done:
+ 		 * the other timer registers before skipping the
+ 		 * timestamping request.
+ 		 */
+-		if (adapter->tstamp_config.tx_type == HWTSTAMP_TX_ON &&
+-		    !test_and_set_bit_lock(__IGC_PTP_TX_IN_PROGRESS,
+-					   &adapter->state)) {
++		unsigned long flags;
++
++		spin_lock_irqsave(&adapter->ptp_tx_lock, flags);
++		if (adapter->tstamp_config.tx_type == HWTSTAMP_TX_ON && !adapter->ptp_tx_skb) {
+ 			skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ 			tx_flags |= IGC_TX_FLAGS_TSTAMP;
+ 
+@@ -1478,6 +1478,8 @@ done:
+ 		} else {
+ 			adapter->tx_hwtstamp_skipped++;
+ 		}
++
++		spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags);
+ 	}
+ 
+ 	/* record initial flags and protocol */
+@@ -1569,14 +1571,36 @@ static void igc_rx_checksum(struct igc_ring *ring,
+ 		   le32_to_cpu(rx_desc->wb.upper.status_error));
+ }
+ 
++/* Mapping HW RSS Type to enum pkt_hash_types */
++static const enum pkt_hash_types igc_rss_type_table[IGC_RSS_TYPE_MAX_TABLE] = {
++	[IGC_RSS_TYPE_NO_HASH]		= PKT_HASH_TYPE_L2,
++	[IGC_RSS_TYPE_HASH_TCP_IPV4]	= PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_IPV4]	= PKT_HASH_TYPE_L3,
++	[IGC_RSS_TYPE_HASH_TCP_IPV6]	= PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_IPV6_EX]	= PKT_HASH_TYPE_L3,
++	[IGC_RSS_TYPE_HASH_IPV6]	= PKT_HASH_TYPE_L3,
++	[IGC_RSS_TYPE_HASH_TCP_IPV6_EX] = PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_UDP_IPV4]	= PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_UDP_IPV6]	= PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_UDP_IPV6_EX] = PKT_HASH_TYPE_L4,
++	[10] = PKT_HASH_TYPE_NONE, /* RSS Type above 9 "Reserved" by HW  */
++	[11] = PKT_HASH_TYPE_NONE, /* keep array sized for SW bit-mask   */
++	[12] = PKT_HASH_TYPE_NONE, /* to handle future HW revisons       */
++	[13] = PKT_HASH_TYPE_NONE,
++	[14] = PKT_HASH_TYPE_NONE,
++	[15] = PKT_HASH_TYPE_NONE,
++};
++
+ static inline void igc_rx_hash(struct igc_ring *ring,
+ 			       union igc_adv_rx_desc *rx_desc,
+ 			       struct sk_buff *skb)
+ {
+-	if (ring->netdev->features & NETIF_F_RXHASH)
+-		skb_set_hash(skb,
+-			     le32_to_cpu(rx_desc->wb.lower.hi_dword.rss),
+-			     PKT_HASH_TYPE_L3);
++	if (ring->netdev->features & NETIF_F_RXHASH) {
++		u32 rss_hash = le32_to_cpu(rx_desc->wb.lower.hi_dword.rss);
++		u32 rss_type = igc_rss_type(rx_desc);
++
++		skb_set_hash(skb, rss_hash, igc_rss_type_table[rss_type]);
++	}
+ }
+ 
+ /**
+@@ -5257,6 +5281,7 @@ static int igc_probe(struct pci_dev *pdev,
+ 	netdev->features |= NETIF_F_TSO;
+ 	netdev->features |= NETIF_F_TSO6;
+ 	netdev->features |= NETIF_F_TSO_ECN;
++	netdev->features |= NETIF_F_RXHASH;
+ 	netdev->features |= NETIF_F_RXCSUM;
+ 	netdev->features |= NETIF_F_HW_CSUM;
+ 	netdev->features |= NETIF_F_SCTP_CRC;
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index ef53f7665b58c..25b238c6a675c 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -323,6 +323,7 @@ static int igc_ptp_set_timestamp_mode(struct igc_adapter *adapter,
+ 	return 0;
+ }
+ 
++/* Requires adapter->ptp_tx_lock held by caller. */
+ static void igc_ptp_tx_timeout(struct igc_adapter *adapter)
+ {
+ 	struct igc_hw *hw = &adapter->hw;
+@@ -330,7 +331,6 @@ static void igc_ptp_tx_timeout(struct igc_adapter *adapter)
+ 	dev_kfree_skb_any(adapter->ptp_tx_skb);
+ 	adapter->ptp_tx_skb = NULL;
+ 	adapter->tx_hwtstamp_timeouts++;
+-	clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
+ 	/* Clear the tx valid bit in TSYNCTXCTL register to enable interrupt. */
+ 	rd32(IGC_TXSTMPH);
+ 	netdev_warn(adapter->netdev, "Tx timestamp timeout\n");
+@@ -338,20 +338,20 @@ static void igc_ptp_tx_timeout(struct igc_adapter *adapter)
+ 
+ void igc_ptp_tx_hang(struct igc_adapter *adapter)
+ {
+-	bool timeout = time_is_before_jiffies(adapter->ptp_tx_start +
+-					      IGC_PTP_TX_TIMEOUT);
++	unsigned long flags;
+ 
+-	if (!test_bit(__IGC_PTP_TX_IN_PROGRESS, &adapter->state))
+-		return;
++	spin_lock_irqsave(&adapter->ptp_tx_lock, flags);
+ 
+-	/* If we haven't received a timestamp within the timeout, it is
+-	 * reasonable to assume that it will never occur, so we can unlock the
+-	 * timestamp bit when this occurs.
+-	 */
+-	if (timeout) {
+-		cancel_work_sync(&adapter->ptp_tx_work);
+-		igc_ptp_tx_timeout(adapter);
+-	}
++	if (!adapter->ptp_tx_skb)
++		goto unlock;
++
++	if (time_is_after_jiffies(adapter->ptp_tx_start + IGC_PTP_TX_TIMEOUT))
++		goto unlock;
++
++	igc_ptp_tx_timeout(adapter);
++
++unlock:
++	spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags);
+ }
+ 
+ /**
+@@ -361,6 +361,8 @@ void igc_ptp_tx_hang(struct igc_adapter *adapter)
+  * If we were asked to do hardware stamping and such a time stamp is
+  * available, then it must have been for this skb here because we only
+  * allow only one such packet into the queue.
++ *
++ * Context: Expects adapter->ptp_tx_lock to be held by caller.
+  */
+ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
+ {
+@@ -396,13 +398,7 @@ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
+ 	shhwtstamps.hwtstamp =
+ 		ktime_add_ns(shhwtstamps.hwtstamp, adjust);
+ 
+-	/* Clear the lock early before calling skb_tstamp_tx so that
+-	 * applications are not woken up before the lock bit is clear. We use
+-	 * a copy of the skb pointer to ensure other threads can't change it
+-	 * while we're notifying the stack.
+-	 */
+ 	adapter->ptp_tx_skb = NULL;
+-	clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
+ 
+ 	/* Notify the stack and free the skb after we've unlocked */
+ 	skb_tstamp_tx(skb, &shhwtstamps);
+@@ -413,24 +409,33 @@ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
+  * igc_ptp_tx_work
+  * @work: pointer to work struct
+  *
+- * This work function polls the TSYNCTXCTL valid bit to determine when a
+- * timestamp has been taken for the current stored skb.
++ * This work function checks the TSYNCTXCTL valid bit to determine when
++ * a timestamp has been taken for the current stored skb.
+  */
+ static void igc_ptp_tx_work(struct work_struct *work)
+ {
+ 	struct igc_adapter *adapter = container_of(work, struct igc_adapter,
+ 						   ptp_tx_work);
+ 	struct igc_hw *hw = &adapter->hw;
++	unsigned long flags;
+ 	u32 tsynctxctl;
+ 
+-	if (!test_bit(__IGC_PTP_TX_IN_PROGRESS, &adapter->state))
+-		return;
++	spin_lock_irqsave(&adapter->ptp_tx_lock, flags);
++
++	if (!adapter->ptp_tx_skb)
++		goto unlock;
+ 
+ 	tsynctxctl = rd32(IGC_TSYNCTXCTL);
+-	if (WARN_ON_ONCE(!(tsynctxctl & IGC_TSYNCTXCTL_TXTT_0)))
+-		return;
++	tsynctxctl &= IGC_TSYNCTXCTL_TXTT_0;
++	if (!tsynctxctl) {
++		WARN_ONCE(1, "Received a TSTAMP interrupt but no TSTAMP is ready.\n");
++		goto unlock;
++	}
+ 
+ 	igc_ptp_tx_hwtstamp(adapter);
++
++unlock:
++	spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags);
+ }
+ 
+ /**
+@@ -506,6 +511,7 @@ void igc_ptp_init(struct igc_adapter *adapter)
+ 		return;
+ 	}
+ 
++	spin_lock_init(&adapter->ptp_tx_lock);
+ 	spin_lock_init(&adapter->tmreg_lock);
+ 	INIT_WORK(&adapter->ptp_tx_work, igc_ptp_tx_work);
+ 
+@@ -559,7 +565,6 @@ void igc_ptp_suspend(struct igc_adapter *adapter)
+ 	cancel_work_sync(&adapter->ptp_tx_work);
+ 	dev_kfree_skb_any(adapter->ptp_tx_skb);
+ 	adapter->ptp_tx_skb = NULL;
+-	clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
+ 
+ 	if (pci_device_is_present(adapter->pdev))
+ 		igc_ptp_time_save(adapter);
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index f5567d485e91a..3656a3937eca6 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -1471,7 +1471,7 @@ static void mvneta_defaults_set(struct mvneta_port *pp)
+ 			 */
+ 			if (txq_number == 1)
+ 				txq_map = (cpu == pp->rxq_def) ?
+-					MVNETA_CPU_TXQ_ACCESS(1) : 0;
++					MVNETA_CPU_TXQ_ACCESS(0) : 0;
+ 
+ 		} else {
+ 			txq_map = MVNETA_CPU_TXQ_ACCESS_ALL_MASK;
+@@ -4165,7 +4165,7 @@ static void mvneta_percpu_elect(struct mvneta_port *pp)
+ 		 */
+ 		if (txq_number == 1)
+ 			txq_map = (cpu == elected_cpu) ?
+-				MVNETA_CPU_TXQ_ACCESS(1) : 0;
++				MVNETA_CPU_TXQ_ACCESS(0) : 0;
+ 		else
+ 			txq_map = mvreg_read(pp, MVNETA_CPU_MAP(cpu)) &
+ 				MVNETA_CPU_TXQ_ACCESS_ALL_MASK;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index fc6d785b98ddd..ec9a291e866c7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -20,6 +20,7 @@
+ #define	PCI_DEVID_OCTEONTX2_RVU_AF		0xA065
+ 
+ /* Subsystem Device ID */
++#define PCI_SUBSYS_DEVID_98XX                  0xB100
+ #define PCI_SUBSYS_DEVID_96XX                  0xB200
+ 
+ /* PCI BAR nos */
+@@ -403,6 +404,16 @@ static inline bool is_rvu_96xx_B0(struct rvu *rvu)
+ 		(pdev->subsystem_device == PCI_SUBSYS_DEVID_96XX);
+ }
+ 
++static inline bool is_rvu_supports_nix1(struct rvu *rvu)
++{
++	struct pci_dev *pdev = rvu->pdev;
++
++	if (pdev->subsystem_device == PCI_SUBSYS_DEVID_98XX)
++		return true;
++
++	return false;
++}
++
+ /* Function Prototypes
+  * RVU
+  */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index 6c6b411e78fd8..83743e15326d7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -84,7 +84,7 @@ static void rvu_map_cgx_nix_block(struct rvu *rvu, int pf,
+ 	p2x = cgx_lmac_get_p2x(cgx_id, lmac_id);
+ 	/* Firmware sets P2X_SELECT as either NIX0 or NIX1 */
+ 	pfvf->nix_blkaddr = BLKADDR_NIX0;
+-	if (p2x == CMR_P2X_SEL_NIX1)
++	if (is_rvu_supports_nix1(rvu) && p2x == CMR_P2X_SEL_NIX1)
+ 		pfvf->nix_blkaddr = BLKADDR_NIX1;
+ }
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 54aeb276b9a0a..000dd89c4baff 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -1311,8 +1311,9 @@ static int otx2_init_hw_resources(struct otx2_nic *pf)
+ 	if (err)
+ 		goto err_free_npa_lf;
+ 
+-	/* Enable backpressure */
+-	otx2_nix_config_bp(pf, true);
++	/* Enable backpressure for CGX mapped PF/VFs */
++	if (!is_otx2_lbkvf(pf->pdev))
++		otx2_nix_config_bp(pf, true);
+ 
+ 	/* Init Auras and pools used by NIX RQ, for free buffer ptrs */
+ 	err = otx2_rq_aura_pool_init(pf);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c
+index e51f60b55daa4..2da90f6649d17 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c
+@@ -194,6 +194,7 @@ static int accel_fs_tcp_create_groups(struct mlx5e_flow_table *ft,
+ 	in = kvzalloc(inlen, GFP_KERNEL);
+ 	if  (!in || !ft->g) {
+ 		kfree(ft->g);
++		ft->g = NULL;
+ 		kvfree(in);
+ 		return -ENOMEM;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 16846442717dc..c6a81a51530d2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1334,7 +1334,8 @@ static void remove_unready_flow(struct mlx5e_tc_flow *flow)
+ 	uplink_priv = &rpriv->uplink_priv;
+ 
+ 	mutex_lock(&uplink_priv->unready_flows_lock);
+-	unready_flow_del(flow);
++	if (flow_flag_test(flow, NOT_READY))
++		unready_flow_del(flow);
+ 	mutex_unlock(&uplink_priv->unready_flows_lock);
+ }
+ 
+@@ -1475,8 +1476,7 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
+ 
+ 	mlx5e_put_flow_tunnel_id(flow);
+ 
+-	if (flow_flag_test(flow, NOT_READY))
+-		remove_unready_flow(flow);
++	remove_unready_flow(flow);
+ 
+ 	if (mlx5e_is_offloaded_flow(flow)) {
+ 		if (flow_flag_test(flow, SLOW))
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index 481f89d193f77..50cb1c5251f71 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -83,6 +83,18 @@ static int lan743x_csr_light_reset(struct lan743x_adapter *adapter)
+ 				  !(data & HW_CFG_LRST_), 100000, 10000000);
+ }
+ 
++static int lan743x_csr_wait_for_bit_atomic(struct lan743x_adapter *adapter,
++					   int offset, u32 bit_mask,
++					   int target_value, int udelay_min,
++					   int udelay_max, int count)
++{
++	u32 data;
++
++	return readx_poll_timeout_atomic(LAN743X_CSR_READ_OP, offset, data,
++					 target_value == !!(data & bit_mask),
++					 udelay_max, udelay_min * count);
++}
++
+ static int lan743x_csr_wait_for_bit(struct lan743x_adapter *adapter,
+ 				    int offset, u32 bit_mask,
+ 				    int target_value, int usleep_min,
+@@ -678,8 +690,8 @@ static int lan743x_dp_write(struct lan743x_adapter *adapter,
+ 	u32 dp_sel;
+ 	int i;
+ 
+-	if (lan743x_csr_wait_for_bit(adapter, DP_SEL, DP_SEL_DPRDY_,
+-				     1, 40, 100, 100))
++	if (lan743x_csr_wait_for_bit_atomic(adapter, DP_SEL, DP_SEL_DPRDY_,
++					    1, 40, 100, 100))
+ 		return -EIO;
+ 	dp_sel = lan743x_csr_read(adapter, DP_SEL);
+ 	dp_sel &= ~DP_SEL_MASK_;
+@@ -690,8 +702,9 @@ static int lan743x_dp_write(struct lan743x_adapter *adapter,
+ 		lan743x_csr_write(adapter, DP_ADDR, addr + i);
+ 		lan743x_csr_write(adapter, DP_DATA_0, buf[i]);
+ 		lan743x_csr_write(adapter, DP_CMD, DP_CMD_WRITE_);
+-		if (lan743x_csr_wait_for_bit(adapter, DP_SEL, DP_SEL_DPRDY_,
+-					     1, 40, 100, 100))
++		if (lan743x_csr_wait_for_bit_atomic(adapter, DP_SEL,
++						    DP_SEL_DPRDY_,
++						    1, 40, 100, 100))
+ 			return -EIO;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index fcd4213c99b83..098772601df8c 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -433,11 +433,6 @@ static void ionic_qcqs_free(struct ionic_lif *lif)
+ static void ionic_link_qcq_interrupts(struct ionic_qcq *src_qcq,
+ 				      struct ionic_qcq *n_qcq)
+ {
+-	if (WARN_ON(n_qcq->flags & IONIC_QCQ_F_INTR)) {
+-		ionic_intr_free(n_qcq->cq.lif->ionic, n_qcq->intr.index);
+-		n_qcq->flags &= ~IONIC_QCQ_F_INTR;
+-	}
+-
+ 	n_qcq->intr.vector = src_qcq->intr.vector;
+ 	n_qcq->intr.index = src_qcq->intr.index;
+ }
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index 32654fe1f8b59..3f53b5ea78410 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -1297,8 +1297,10 @@ static void efx_ef10_fini_nic(struct efx_nic *efx)
+ {
+ 	struct efx_ef10_nic_data *nic_data = efx->nic_data;
+ 
++	spin_lock_bh(&efx->stats_lock);
+ 	kfree(nic_data->mc_stats);
+ 	nic_data->mc_stats = NULL;
++	spin_unlock_bh(&efx->stats_lock);
+ }
+ 
+ static int efx_ef10_init_nic(struct efx_nic *efx)
+@@ -1836,9 +1838,14 @@ static size_t efx_ef10_update_stats_pf(struct efx_nic *efx, u64 *full_stats,
+ 
+ 	efx_ef10_get_stat_mask(efx, mask);
+ 
+-	efx_nic_copy_stats(efx, nic_data->mc_stats);
+-	efx_nic_update_stats(efx_ef10_stat_desc, EF10_STAT_COUNT,
+-			     mask, stats, nic_data->mc_stats, false);
++	/* If NIC was fini'd (probably resetting), then we can't read
++	 * updated stats right now.
++	 */
++	if (nic_data->mc_stats) {
++		efx_nic_copy_stats(efx, nic_data->mc_stats);
++		efx_nic_update_stats(efx_ef10_stat_desc, EF10_STAT_COUNT,
++				     mask, stats, nic_data->mc_stats, false);
++	}
+ 
+ 	/* Update derived statistics */
+ 	efx_nic_fix_nodesc_drop_stat(efx,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index de66406c50572..83e9a4d019c16 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -5254,12 +5254,6 @@ int stmmac_dvr_remove(struct device *dev)
+ 	netif_carrier_off(ndev);
+ 	unregister_netdev(ndev);
+ 
+-	/* Serdes power down needs to happen after VLAN filter
+-	 * is deleted that is triggered by unregister_netdev().
+-	 */
+-	if (priv->plat->serdes_powerdown)
+-		priv->plat->serdes_powerdown(ndev, priv->plat->bsp_priv);
+-
+ #ifdef CONFIG_DEBUG_FS
+ 	stmmac_exit_fs(ndev);
+ #endif
+diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c
+index a6a455c326288..73efc8b453643 100644
+--- a/drivers/net/ethernet/ti/cpsw_ale.c
++++ b/drivers/net/ethernet/ti/cpsw_ale.c
+@@ -104,23 +104,37 @@ struct cpsw_ale_dev_id {
+ 
+ static inline int cpsw_ale_get_field(u32 *ale_entry, u32 start, u32 bits)
+ {
+-	int idx;
++	int idx, idx2;
++	u32 hi_val = 0;
+ 
+ 	idx    = start / 32;
++	idx2 = (start + bits - 1) / 32;
++	/* Check if bits to be fetched exceed a word */
++	if (idx != idx2) {
++		idx2 = 2 - idx2; /* flip */
++		hi_val = ale_entry[idx2] << ((idx2 * 32) - start);
++	}
+ 	start -= idx * 32;
+ 	idx    = 2 - idx; /* flip */
+-	return (ale_entry[idx] >> start) & BITMASK(bits);
++	return (hi_val + (ale_entry[idx] >> start)) & BITMASK(bits);
+ }
+ 
+ static inline void cpsw_ale_set_field(u32 *ale_entry, u32 start, u32 bits,
+ 				      u32 value)
+ {
+-	int idx;
++	int idx, idx2;
+ 
+ 	value &= BITMASK(bits);
+-	idx    = start / 32;
++	idx = start / 32;
++	idx2 = (start + bits - 1) / 32;
++	/* Check if bits to be set exceed a word */
++	if (idx != idx2) {
++		idx2 = 2 - idx2; /* flip */
++		ale_entry[idx2] &= ~(BITMASK(bits + start - (idx2 * 32)));
++		ale_entry[idx2] |= (value >> ((idx2 * 32) - start));
++	}
+ 	start -= idx * 32;
+-	idx    = 2 - idx; /* flip */
++	idx = 2 - idx; /* flip */
+ 	ale_entry[idx] &= ~(BITMASK(bits) << start);
+ 	ale_entry[idx] |=  (value << start);
+ }
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 3d91baf2e55aa..9d362283196aa 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -2009,6 +2009,11 @@ static int axienet_probe(struct platform_device *pdev)
+ 		goto cleanup_clk;
+ 	}
+ 
++	/* Reset core now that clocks are enabled, prior to accessing MDIO */
++	ret = __axienet_device_reset(lp);
++	if (ret)
++		goto cleanup_clk;
++
+ 	/* Autodetect the need for 64-bit DMA pointers.
+ 	 * When the IP is configured for a bus width bigger than 32 bits,
+ 	 * writing the MSB registers is mandatory, even if they are all 0.
+@@ -2055,11 +2060,6 @@ static int axienet_probe(struct platform_device *pdev)
+ 	lp->coalesce_count_rx = XAXIDMA_DFT_RX_THRESHOLD;
+ 	lp->coalesce_count_tx = XAXIDMA_DFT_TX_THRESHOLD;
+ 
+-	/* Reset core now that clocks are enabled, prior to accessing MDIO */
+-	ret = __axienet_device_reset(lp);
+-	if (ret)
+-		goto cleanup_clk;
+-
+ 	ret = axienet_mdio_setup(lp);
+ 	if (ret)
+ 		dev_warn(&pdev->dev,
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 1c46bc4d27058..05ea3a18552b6 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -291,7 +291,9 @@ static void __gtp_encap_destroy(struct sock *sk)
+ 			gtp->sk1u = NULL;
+ 		udp_sk(sk)->encap_type = 0;
+ 		rcu_assign_sk_user_data(sk, NULL);
++		release_sock(sk);
+ 		sock_put(sk);
++		return;
+ 	}
+ 	release_sock(sk);
+ }
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index 0a5b5ff597c6f..ab09d110760ec 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -586,7 +586,8 @@ static int ipvlan_xmit_mode_l3(struct sk_buff *skb, struct net_device *dev)
+ 				consume_skb(skb);
+ 				return NET_XMIT_DROP;
+ 			}
+-			return ipvlan_rcv_frame(addr, &skb, true);
++			ipvlan_rcv_frame(addr, &skb, true);
++			return NET_XMIT_SUCCESS;
+ 		}
+ 	}
+ out:
+@@ -612,7 +613,8 @@ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
+ 					consume_skb(skb);
+ 					return NET_XMIT_DROP;
+ 				}
+-				return ipvlan_rcv_frame(addr, &skb, true);
++				ipvlan_rcv_frame(addr, &skb, true);
++				return NET_XMIT_SUCCESS;
+ 			}
+ 		}
+ 		skb = skb_share_check(skb, GFP_ATOMIC);
+@@ -624,7 +626,8 @@ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
+ 		 * the skb for the main-dev. At the RX side we just return
+ 		 * RX_PASS for it to be processed further on the stack.
+ 		 */
+-		return dev_forward_skb(ipvlan->phy_dev, skb);
++		dev_forward_skb(ipvlan->phy_dev, skb);
++		return NET_XMIT_SUCCESS;
+ 
+ 	} else if (is_multicast_ether_addr(eth->h_dest)) {
+ 		skb_reset_mac_header(skb);
+diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c
+index 9bbecf4d159b4..bcf354719745c 100644
+--- a/drivers/net/netdevsim/dev.c
++++ b/drivers/net/netdevsim/dev.c
+@@ -149,13 +149,10 @@ static ssize_t nsim_dev_trap_fa_cookie_write(struct file *file,
+ 	cookie_len = (count - 1) / 2;
+ 	if ((count - 1) % 2)
+ 		return -EINVAL;
+-	buf = kmalloc(count, GFP_KERNEL | __GFP_NOWARN);
+-	if (!buf)
+-		return -ENOMEM;
+ 
+-	ret = simple_write_to_buffer(buf, count, ppos, data, count);
+-	if (ret < 0)
+-		goto free_buf;
++	buf = memdup_user(data, count);
++	if (IS_ERR(buf))
++		return PTR_ERR(buf);
+ 
+ 	fa_cookie = kmalloc(sizeof(*fa_cookie) + cookie_len,
+ 			    GFP_KERNEL | __GFP_NOWARN);
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index e771e0e8a9bc6..095d16ceafcf8 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -3024,23 +3024,30 @@ static int __init phy_init(void)
+ {
+ 	int rc;
+ 
++	ethtool_set_ethtool_phy_ops(&phy_ethtool_phy_ops);
++
+ 	rc = mdio_bus_init();
+ 	if (rc)
+-		return rc;
++		goto err_ethtool_phy_ops;
+ 
+-	ethtool_set_ethtool_phy_ops(&phy_ethtool_phy_ops);
+ 	features_init();
+ 
+ 	rc = phy_driver_register(&genphy_c45_driver, THIS_MODULE);
+ 	if (rc)
+-		goto err_c45;
++		goto err_mdio_bus;
+ 
+ 	rc = phy_driver_register(&genphy_driver, THIS_MODULE);
+-	if (rc) {
+-		phy_driver_unregister(&genphy_c45_driver);
++	if (rc)
++		goto err_c45;
++
++	return 0;
++
+ err_c45:
+-		mdio_bus_exit();
+-	}
++	phy_driver_unregister(&genphy_c45_driver);
++err_mdio_bus:
++	mdio_bus_exit();
++err_ethtool_phy_ops:
++	ethtool_set_ethtool_phy_ops(NULL);
+ 
+ 	return rc;
+ }
+diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c
+index ee5058445d06e..05a75b5a8b680 100644
+--- a/drivers/net/ppp/pptp.c
++++ b/drivers/net/ppp/pptp.c
+@@ -24,6 +24,7 @@
+ #include <linux/in.h>
+ #include <linux/ip.h>
+ #include <linux/rcupdate.h>
++#include <linux/security.h>
+ #include <linux/spinlock.h>
+ 
+ #include <net/sock.h>
+@@ -128,6 +129,23 @@ static void del_chan(struct pppox_sock *sock)
+ 	spin_unlock(&chan_lock);
+ }
+ 
++static struct rtable *pptp_route_output(struct pppox_sock *po,
++					struct flowi4 *fl4)
++{
++	struct sock *sk = &po->sk;
++	struct net *net;
++
++	net = sock_net(sk);
++	flowi4_init_output(fl4, sk->sk_bound_dev_if, sk->sk_mark, 0,
++			   RT_SCOPE_UNIVERSE, IPPROTO_GRE, 0,
++			   po->proto.pptp.dst_addr.sin_addr.s_addr,
++			   po->proto.pptp.src_addr.sin_addr.s_addr,
++			   0, 0, sock_net_uid(net, sk));
++	security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4));
++
++	return ip_route_output_flow(net, fl4, sk);
++}
++
+ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ {
+ 	struct sock *sk = (struct sock *) chan->private;
+@@ -151,11 +169,7 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ 	if (sk_pppox(po)->sk_state & PPPOX_DEAD)
+ 		goto tx_error;
+ 
+-	rt = ip_route_output_ports(net, &fl4, NULL,
+-				   opt->dst_addr.sin_addr.s_addr,
+-				   opt->src_addr.sin_addr.s_addr,
+-				   0, 0, IPPROTO_GRE,
+-				   RT_TOS(0), sk->sk_bound_dev_if);
++	rt = pptp_route_output(po, &fl4);
+ 	if (IS_ERR(rt))
+ 		goto tx_error;
+ 
+@@ -440,12 +454,7 @@ static int pptp_connect(struct socket *sock, struct sockaddr *uservaddr,
+ 	po->chan.private = sk;
+ 	po->chan.ops = &pptp_chan_ops;
+ 
+-	rt = ip_route_output_ports(sock_net(sk), &fl4, sk,
+-				   opt->dst_addr.sin_addr.s_addr,
+-				   opt->src_addr.sin_addr.s_addr,
+-				   0, 0,
+-				   IPPROTO_GRE, RT_CONN_FLAGS(sk),
+-				   sk->sk_bound_dev_if);
++	rt = pptp_route_output(po, &fl4);
+ 	if (IS_ERR(rt)) {
+ 		error = -EHOSTUNREACH;
+ 		goto end;
+diff --git a/drivers/net/wireguard/netlink.c b/drivers/net/wireguard/netlink.c
+index 5c804bcabfe6b..f5bc279c9a8c2 100644
+--- a/drivers/net/wireguard/netlink.c
++++ b/drivers/net/wireguard/netlink.c
+@@ -546,6 +546,7 @@ static int wg_set_device(struct sk_buff *skb, struct genl_info *info)
+ 		u8 *private_key = nla_data(info->attrs[WGDEVICE_A_PRIVATE_KEY]);
+ 		u8 public_key[NOISE_PUBLIC_KEY_LEN];
+ 		struct wg_peer *peer, *temp;
++		bool send_staged_packets;
+ 
+ 		if (!crypto_memneq(wg->static_identity.static_private,
+ 				   private_key, NOISE_PUBLIC_KEY_LEN))
+@@ -564,14 +565,17 @@ static int wg_set_device(struct sk_buff *skb, struct genl_info *info)
+ 		}
+ 
+ 		down_write(&wg->static_identity.lock);
+-		wg_noise_set_static_identity_private_key(&wg->static_identity,
+-							 private_key);
+-		list_for_each_entry_safe(peer, temp, &wg->peer_list,
+-					 peer_list) {
++		send_staged_packets = !wg->static_identity.has_identity && netif_running(wg->dev);
++		wg_noise_set_static_identity_private_key(&wg->static_identity, private_key);
++		send_staged_packets = send_staged_packets && wg->static_identity.has_identity;
++
++		wg_cookie_checker_precompute_device_keys(&wg->cookie_checker);
++		list_for_each_entry_safe(peer, temp, &wg->peer_list, peer_list) {
+ 			wg_noise_precompute_static_static(peer);
+ 			wg_noise_expire_current_peer_keypairs(peer);
++			if (send_staged_packets)
++				wg_packet_send_staged_packets(peer);
+ 		}
+-		wg_cookie_checker_precompute_device_keys(&wg->cookie_checker);
+ 		up_write(&wg->static_identity.lock);
+ 	}
+ skip_set_private_key:
+diff --git a/drivers/net/wireguard/queueing.c b/drivers/net/wireguard/queueing.c
+index 8084e7408c0ae..26d235d152352 100644
+--- a/drivers/net/wireguard/queueing.c
++++ b/drivers/net/wireguard/queueing.c
+@@ -28,6 +28,7 @@ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
+ 	int ret;
+ 
+ 	memset(queue, 0, sizeof(*queue));
++	queue->last_cpu = -1;
+ 	ret = ptr_ring_init(&queue->ring, len, GFP_KERNEL);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/net/wireguard/queueing.h b/drivers/net/wireguard/queueing.h
+index e2388107f7fdc..a2e702f8c5826 100644
+--- a/drivers/net/wireguard/queueing.h
++++ b/drivers/net/wireguard/queueing.h
+@@ -119,20 +119,17 @@ static inline int wg_cpumask_choose_online(int *stored_cpu, unsigned int id)
+ 	return cpu;
+ }
+ 
+-/* This function is racy, in the sense that next is unlocked, so it could return
+- * the same CPU twice. A race-free version of this would be to instead store an
+- * atomic sequence number, do an increment-and-return, and then iterate through
+- * every possible CPU until we get to that index -- choose_cpu. However that's
+- * a bit slower, and it doesn't seem like this potential race actually
+- * introduces any performance loss, so we live with it.
++/* This function is racy, in the sense that it's called while last_cpu is
++ * unlocked, so it could return the same CPU twice. Adding locking or using
++ * atomic sequence numbers is slower though, and the consequences of racing are
++ * harmless, so live with it.
+  */
+-static inline int wg_cpumask_next_online(int *next)
++static inline int wg_cpumask_next_online(int *last_cpu)
+ {
+-	int cpu = *next;
+-
+-	while (unlikely(!cpumask_test_cpu(cpu, cpu_online_mask)))
+-		cpu = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
+-	*next = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
++	int cpu = cpumask_next(*last_cpu, cpu_online_mask);
++	if (cpu >= nr_cpu_ids)
++		cpu = cpumask_first(cpu_online_mask);
++	*last_cpu = cpu;
+ 	return cpu;
+ }
+ 
+@@ -161,7 +158,7 @@ static inline void wg_prev_queue_drop_peeked(struct prev_queue *queue)
+ 
+ static inline int wg_queue_enqueue_per_device_and_peer(
+ 	struct crypt_queue *device_queue, struct prev_queue *peer_queue,
+-	struct sk_buff *skb, struct workqueue_struct *wq, int *next_cpu)
++	struct sk_buff *skb, struct workqueue_struct *wq)
+ {
+ 	int cpu;
+ 
+@@ -175,7 +172,7 @@ static inline int wg_queue_enqueue_per_device_and_peer(
+ 	/* Then we queue it up in the device queue, which consumes the
+ 	 * packet as soon as it can.
+ 	 */
+-	cpu = wg_cpumask_next_online(next_cpu);
++	cpu = wg_cpumask_next_online(&device_queue->last_cpu);
+ 	if (unlikely(ptr_ring_produce_bh(&device_queue->ring, skb)))
+ 		return -EPIPE;
+ 	queue_work_on(cpu, wq, &per_cpu_ptr(device_queue->worker, cpu)->work);
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
+index 7b8df406c7737..f500aaf678370 100644
+--- a/drivers/net/wireguard/receive.c
++++ b/drivers/net/wireguard/receive.c
+@@ -531,7 +531,7 @@ static void wg_packet_consume_data(struct wg_device *wg, struct sk_buff *skb)
+ 		goto err;
+ 
+ 	ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue, &peer->rx_queue, skb,
+-						   wg->packet_crypt_wq, &wg->decrypt_queue.last_cpu);
++						   wg->packet_crypt_wq);
+ 	if (unlikely(ret == -EPIPE))
+ 		wg_queue_enqueue_per_peer_rx(skb, PACKET_STATE_DEAD);
+ 	if (likely(!ret || ret == -EPIPE)) {
+diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c
+index 5368f7c35b4bf..95c853b59e1da 100644
+--- a/drivers/net/wireguard/send.c
++++ b/drivers/net/wireguard/send.c
+@@ -318,7 +318,7 @@ static void wg_packet_create_data(struct wg_peer *peer, struct sk_buff *first)
+ 		goto err;
+ 
+ 	ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue, &peer->tx_queue, first,
+-						   wg->packet_crypt_wq, &wg->encrypt_queue.last_cpu);
++						   wg->packet_crypt_wq);
+ 	if (unlikely(ret == -EPIPE))
+ 		wg_queue_enqueue_per_peer_tx(first, PACKET_STATE_DEAD);
+ err:
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 67faf62999ded..3170c54c97b74 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -6044,7 +6044,7 @@ static int ath11k_mac_setup_channels_rates(struct ath11k *ar,
+ 	}
+ 
+ 	if (supported_bands & WMI_HOST_WLAN_5G_CAP) {
+-		if (reg_cap->high_5ghz_chan >= ATH11K_MAX_6G_FREQ) {
++		if (reg_cap->high_5ghz_chan >= ATH11K_MIN_6G_FREQ) {
+ 			channels = kmemdup(ath11k_6ghz_channels,
+ 					   sizeof(ath11k_6ghz_channels), GFP_KERNEL);
+ 			if (!channels) {
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_hw.c b/drivers/net/wireless/ath/ath9k/ar9003_hw.c
+index 42f00a2a8c800..cf5648188459c 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_hw.c
++++ b/drivers/net/wireless/ath/ath9k/ar9003_hw.c
+@@ -1099,17 +1099,22 @@ static bool ath9k_hw_verify_hang(struct ath_hw *ah, unsigned int queue)
+ {
+ 	u32 dma_dbg_chain, dma_dbg_complete;
+ 	u8 dcu_chain_state, dcu_complete_state;
++	unsigned int dbg_reg, reg_offset;
+ 	int i;
+ 
+-	for (i = 0; i < NUM_STATUS_READS; i++) {
+-		if (queue < 6)
+-			dma_dbg_chain = REG_READ(ah, AR_DMADBG_4);
+-		else
+-			dma_dbg_chain = REG_READ(ah, AR_DMADBG_5);
++	if (queue < 6) {
++		dbg_reg = AR_DMADBG_4;
++		reg_offset = queue * 5;
++	} else {
++		dbg_reg = AR_DMADBG_5;
++		reg_offset = (queue - 6) * 5;
++	}
+ 
++	for (i = 0; i < NUM_STATUS_READS; i++) {
++		dma_dbg_chain = REG_READ(ah, dbg_reg);
+ 		dma_dbg_complete = REG_READ(ah, AR_DMADBG_6);
+ 
+-		dcu_chain_state = (dma_dbg_chain >> (5 * queue)) & 0x1f;
++		dcu_chain_state = (dma_dbg_chain >> reg_offset) & 0x1f;
+ 		dcu_complete_state = dma_dbg_complete & 0x3;
+ 
+ 		if ((dcu_chain_state != 0x6) || (dcu_complete_state != 0x1))
+@@ -1128,6 +1133,7 @@ static bool ar9003_hw_detect_mac_hang(struct ath_hw *ah)
+ 	u8 dcu_chain_state, dcu_complete_state;
+ 	bool dcu_wait_frdone = false;
+ 	unsigned long chk_dcu = 0;
++	unsigned int reg_offset;
+ 	unsigned int i = 0;
+ 
+ 	dma_dbg_4 = REG_READ(ah, AR_DMADBG_4);
+@@ -1139,12 +1145,15 @@ static bool ar9003_hw_detect_mac_hang(struct ath_hw *ah)
+ 		goto exit;
+ 
+ 	for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) {
+-		if (i < 6)
++		if (i < 6) {
+ 			chk_dbg = dma_dbg_4;
+-		else
++			reg_offset = i * 5;
++		} else {
+ 			chk_dbg = dma_dbg_5;
++			reg_offset = (i - 6) * 5;
++		}
+ 
+-		dcu_chain_state = (chk_dbg >> (5 * i)) & 0x1f;
++		dcu_chain_state = (chk_dbg >> reg_offset) & 0x1f;
+ 		if (dcu_chain_state == 0x6) {
+ 			dcu_wait_frdone = true;
+ 			chk_dcu |= BIT(i);
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index fe62ff668f757..99667aba289df 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -114,7 +114,13 @@ static void htc_process_conn_rsp(struct htc_target *target,
+ 
+ 	if (svc_rspmsg->status == HTC_SERVICE_SUCCESS) {
+ 		epid = svc_rspmsg->endpoint_id;
+-		if (epid < 0 || epid >= ENDPOINT_MAX)
++
++		/* Check that the received epid for the endpoint to attach
++		 * a new service is valid. ENDPOINT0 can't be used here as it
++		 * is already reserved for HTC_CTRL_RSVD_SVC service and thus
++		 * should not be modified.
++		 */
++		if (epid <= ENDPOINT0 || epid >= ENDPOINT_MAX)
+ 			return;
+ 
+ 		service_id = be16_to_cpu(svc_rspmsg->service_id);
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index ac354dfc50559..b2cfc483515c0 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -203,7 +203,7 @@ void ath_cancel_work(struct ath_softc *sc)
+ void ath_restart_work(struct ath_softc *sc)
+ {
+ 	ieee80211_queue_delayed_work(sc->hw, &sc->hw_check_work,
+-				     ATH_HW_CHECK_POLL_INT);
++				     msecs_to_jiffies(ATH_HW_CHECK_POLL_INT));
+ 
+ 	if (AR_SREV_9340(sc->sc_ah) || AR_SREV_9330(sc->sc_ah))
+ 		ieee80211_queue_delayed_work(sc->hw, &sc->hw_pll_work,
+@@ -850,7 +850,7 @@ static bool ath9k_txq_list_has_key(struct list_head *txq_list, u32 keyix)
+ static bool ath9k_txq_has_key(struct ath_softc *sc, u32 keyix)
+ {
+ 	struct ath_hw *ah = sc->sc_ah;
+-	int i;
++	int i, j;
+ 	struct ath_txq *txq;
+ 	bool key_in_use = false;
+ 
+@@ -868,8 +868,9 @@ static bool ath9k_txq_has_key(struct ath_softc *sc, u32 keyix)
+ 		if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) {
+ 			int idx = txq->txq_tailidx;
+ 
+-			while (!key_in_use &&
+-			       !list_empty(&txq->txq_fifo[idx])) {
++			for (j = 0; !key_in_use &&
++			     !list_empty(&txq->txq_fifo[idx]) &&
++			     j < ATH_TXFIFO_DEPTH; j++) {
+ 				key_in_use = ath9k_txq_list_has_key(
+ 					&txq->txq_fifo[idx], keyix);
+ 				INCR(idx, ATH_TXFIFO_DEPTH);
+@@ -2243,7 +2244,7 @@ void __ath9k_flush(struct ieee80211_hw *hw, u32 queues, bool drop,
+ 	}
+ 
+ 	ieee80211_queue_delayed_work(hw, &sc->hw_check_work,
+-				     ATH_HW_CHECK_POLL_INT);
++				     msecs_to_jiffies(ATH_HW_CHECK_POLL_INT));
+ }
+ 
+ static bool ath9k_tx_frames_pending(struct ieee80211_hw *hw)
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index 19345b8f7bfd5..d652c647d56b5 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -221,6 +221,10 @@ static void ath9k_wmi_ctrl_rx(void *priv, struct sk_buff *skb,
+ 	if (unlikely(wmi->stopped))
+ 		goto free_skb;
+ 
++	/* Validate the obtained SKB. */
++	if (unlikely(skb->len < sizeof(struct wmi_cmd_hdr)))
++		goto free_skb;
++
+ 	hdr = (struct wmi_cmd_hdr *) skb->data;
+ 	cmd_id = be16_to_cpu(hdr->command_id);
+ 
+diff --git a/drivers/net/wireless/atmel/atmel_cs.c b/drivers/net/wireless/atmel/atmel_cs.c
+index 368eebefa741e..e64f108d288bb 100644
+--- a/drivers/net/wireless/atmel/atmel_cs.c
++++ b/drivers/net/wireless/atmel/atmel_cs.c
+@@ -73,6 +73,7 @@ struct local_info {
+ static int atmel_probe(struct pcmcia_device *p_dev)
+ {
+ 	struct local_info *local;
++	int ret;
+ 
+ 	dev_dbg(&p_dev->dev, "atmel_attach()\n");
+ 
+@@ -83,8 +84,16 @@ static int atmel_probe(struct pcmcia_device *p_dev)
+ 
+ 	p_dev->priv = local;
+ 
+-	return atmel_config(p_dev);
+-} /* atmel_attach */
++	ret = atmel_config(p_dev);
++	if (ret)
++		goto err_free_priv;
++
++	return 0;
++
++err_free_priv:
++	kfree(p_dev->priv);
++	return ret;
++}
+ 
+ static void atmel_detach(struct pcmcia_device *link)
+ {
+diff --git a/drivers/net/wireless/cisco/airo.c b/drivers/net/wireless/cisco/airo.c
+index 8c9c6bfbaeee7..aa1d12f6f5c3b 100644
+--- a/drivers/net/wireless/cisco/airo.c
++++ b/drivers/net/wireless/cisco/airo.c
+@@ -6150,8 +6150,11 @@ static int airo_get_rate(struct net_device *dev,
+ {
+ 	struct airo_info *local = dev->ml_priv;
+ 	StatusRid status_rid;		/* Card status info */
++	int ret;
+ 
+-	readStatusRid(local, &status_rid, 1);
++	ret = readStatusRid(local, &status_rid, 1);
++	if (ret)
++		return -EBUSY;
+ 
+ 	vwrq->value = le16_to_cpu(status_rid.currentXmitRate) * 500000;
+ 	/* If more than one rate, set auto */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 7c61d179895b3..5b173f21e87bf 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -1174,8 +1174,11 @@ static void iwl_mvm_queue_state_change(struct iwl_op_mode *op_mode,
+ 		mvmtxq = iwl_mvm_txq_from_mac80211(txq);
+ 		mvmtxq->stopped = !start;
+ 
+-		if (start && mvmsta->sta_state != IEEE80211_STA_NOTEXIST)
++		if (start && mvmsta->sta_state != IEEE80211_STA_NOTEXIST) {
++			local_bh_disable();
+ 			iwl_mvm_mac_itxq_xmit(mvm->hw, txq);
++			local_bh_enable();
++		}
+ 	}
+ 
+ out:
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 09f870c48a4f6..141581fa74c82 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -2590,7 +2590,7 @@ int iwl_mvm_sta_rx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 	}
+ 
+ 	if (iwl_mvm_has_new_rx_api(mvm) && start) {
+-		u16 reorder_buf_size = buf_size * sizeof(baid_data->entries[0]);
++		u32 reorder_buf_size = buf_size * sizeof(baid_data->entries[0]);
+ 
+ 		/* sparse doesn't like the __align() so don't check */
+ #ifndef __CHECKER__
+diff --git a/drivers/net/wireless/intersil/orinoco/orinoco_cs.c b/drivers/net/wireless/intersil/orinoco/orinoco_cs.c
+index a956f965a1e5e..03bfd2482656c 100644
+--- a/drivers/net/wireless/intersil/orinoco/orinoco_cs.c
++++ b/drivers/net/wireless/intersil/orinoco/orinoco_cs.c
+@@ -96,6 +96,7 @@ orinoco_cs_probe(struct pcmcia_device *link)
+ {
+ 	struct orinoco_private *priv;
+ 	struct orinoco_pccard *card;
++	int ret;
+ 
+ 	priv = alloc_orinocodev(sizeof(*card), &link->dev,
+ 				orinoco_cs_hard_reset, NULL);
+@@ -107,8 +108,16 @@ orinoco_cs_probe(struct pcmcia_device *link)
+ 	card->p_dev = link;
+ 	link->priv = priv;
+ 
+-	return orinoco_cs_config(link);
+-}				/* orinoco_cs_attach */
++	ret = orinoco_cs_config(link);
++	if (ret)
++		goto err_free_orinocodev;
++
++	return 0;
++
++err_free_orinocodev:
++	free_orinocodev(priv);
++	return ret;
++}
+ 
+ static void orinoco_cs_detach(struct pcmcia_device *link)
+ {
+diff --git a/drivers/net/wireless/intersil/orinoco/spectrum_cs.c b/drivers/net/wireless/intersil/orinoco/spectrum_cs.c
+index 291ef97ed45ec..841d623c621ac 100644
+--- a/drivers/net/wireless/intersil/orinoco/spectrum_cs.c
++++ b/drivers/net/wireless/intersil/orinoco/spectrum_cs.c
+@@ -157,6 +157,7 @@ spectrum_cs_probe(struct pcmcia_device *link)
+ {
+ 	struct orinoco_private *priv;
+ 	struct orinoco_pccard *card;
++	int ret;
+ 
+ 	priv = alloc_orinocodev(sizeof(*card), &link->dev,
+ 				spectrum_cs_hard_reset,
+@@ -169,8 +170,16 @@ spectrum_cs_probe(struct pcmcia_device *link)
+ 	card->p_dev = link;
+ 	link->priv = priv;
+ 
+-	return spectrum_cs_config(link);
+-}				/* spectrum_cs_attach */
++	ret = spectrum_cs_config(link);
++	if (ret)
++		goto err_free_orinocodev;
++
++	return 0;
++
++err_free_orinocodev:
++	free_orinocodev(priv);
++	return ret;
++}
+ 
+ static void spectrum_cs_detach(struct pcmcia_device *link)
+ {
+diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c
+index c2a685f63e959..78ef40e315b5c 100644
+--- a/drivers/net/wireless/marvell/mwifiex/scan.c
++++ b/drivers/net/wireless/marvell/mwifiex/scan.c
+@@ -2200,9 +2200,9 @@ int mwifiex_ret_802_11_scan(struct mwifiex_private *priv,
+ 
+ 	if (nd_config) {
+ 		adapter->nd_info =
+-			kzalloc(sizeof(struct cfg80211_wowlan_nd_match) +
+-				sizeof(struct cfg80211_wowlan_nd_match *) *
+-				scan_rsp->number_of_sets, GFP_ATOMIC);
++			kzalloc(struct_size(adapter->nd_info, matches,
++					    scan_rsp->number_of_sets),
++				GFP_ATOMIC);
+ 
+ 		if (adapter->nd_info)
+ 			adapter->nd_info->n_matches = scan_rsp->number_of_sets;
+diff --git a/drivers/net/wireless/microchip/wilc1000/hif.c b/drivers/net/wireless/microchip/wilc1000/hif.c
+index b25847799138b..884f45e627a72 100644
+--- a/drivers/net/wireless/microchip/wilc1000/hif.c
++++ b/drivers/net/wireless/microchip/wilc1000/hif.c
+@@ -470,6 +470,9 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 		int rsn_ie_len = sizeof(struct element) + rsn_ie[1];
+ 		int offset = 8;
+ 
++		param->mode_802_11i = 2;
++		param->rsn_found = true;
++
+ 		/* extract RSN capabilities */
+ 		if (offset < rsn_ie_len) {
+ 			/* skip over pairwise suites */
+@@ -479,11 +482,8 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 				/* skip over authentication suites */
+ 				offset += (rsn_ie[offset] * 4) + 2;
+ 
+-				if (offset + 1 < rsn_ie_len) {
+-					param->mode_802_11i = 2;
+-					param->rsn_found = true;
++				if (offset + 1 < rsn_ie_len)
+ 					memcpy(param->rsn_cap, &rsn_ie[offset], 2);
+-				}
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/ray_cs.c b/drivers/net/wireless/ray_cs.c
+index 091eea0d958d1..bf1282702761f 100644
+--- a/drivers/net/wireless/ray_cs.c
++++ b/drivers/net/wireless/ray_cs.c
+@@ -270,13 +270,14 @@ static int ray_probe(struct pcmcia_device *p_dev)
+ {
+ 	ray_dev_t *local;
+ 	struct net_device *dev;
++	int ret;
+ 
+ 	dev_dbg(&p_dev->dev, "ray_attach()\n");
+ 
+ 	/* Allocate space for private device-specific data */
+ 	dev = alloc_etherdev(sizeof(ray_dev_t));
+ 	if (!dev)
+-		goto fail_alloc_dev;
++		return -ENOMEM;
+ 
+ 	local = netdev_priv(dev);
+ 	local->finder = p_dev;
+@@ -313,11 +314,16 @@ static int ray_probe(struct pcmcia_device *p_dev)
+ 	timer_setup(&local->timer, NULL, 0);
+ 
+ 	this_device = p_dev;
+-	return ray_config(p_dev);
++	ret = ray_config(p_dev);
++	if (ret)
++		goto err_free_dev;
++
++	return 0;
+ 
+-fail_alloc_dev:
+-	return -ENOMEM;
+-} /* ray_attach */
++err_free_dev:
++	free_netdev(dev);
++	return ret;
++}
+ 
+ static void ray_detach(struct pcmcia_device *link)
+ {
+@@ -1641,38 +1647,34 @@ static void authenticate_timeout(struct timer_list *t)
+ /*===========================================================================*/
+ static int parse_addr(char *in_str, UCHAR *out)
+ {
++	int i, k;
+ 	int len;
+-	int i, j, k;
+-	int status;
+ 
+ 	if (in_str == NULL)
+ 		return 0;
+-	if ((len = strlen(in_str)) < 2)
++	len = strnlen(in_str, ADDRLEN * 2 + 1) - 1;
++	if (len < 1)
+ 		return 0;
+ 	memset(out, 0, ADDRLEN);
+ 
+-	status = 1;
+-	j = len - 1;
+-	if (j > 12)
+-		j = 12;
+ 	i = 5;
+ 
+-	while (j > 0) {
+-		if ((k = hex_to_bin(in_str[j--])) != -1)
++	while (len > 0) {
++		if ((k = hex_to_bin(in_str[len--])) != -1)
+ 			out[i] = k;
+ 		else
+ 			return 0;
+ 
+-		if (j == 0)
++		if (len == 0)
+ 			break;
+-		if ((k = hex_to_bin(in_str[j--])) != -1)
++		if ((k = hex_to_bin(in_str[len--])) != -1)
+ 			out[i] += k << 4;
+ 		else
+ 			return 0;
+ 		if (!i--)
+ 			break;
+ 	}
+-	return status;
++	return 1;
+ }
+ 
+ /*===========================================================================*/
+diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio.c b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+index 8108f941ccd3f..b1d3aea10d7df 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_sdio.c
++++ b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+@@ -1463,10 +1463,8 @@ static void rsi_shutdown(struct device *dev)
+ 
+ 	rsi_dbg(ERR_ZONE, "SDIO Bus shutdown =====>\n");
+ 
+-	if (hw) {
+-		struct cfg80211_wowlan *wowlan = hw->wiphy->wowlan_config;
+-
+-		if (rsi_config_wowlan(adapter, wowlan))
++	if (hw && hw->wiphy && hw->wiphy->wowlan_config) {
++		if (rsi_config_wowlan(adapter, hw->wiphy->wowlan_config))
+ 			rsi_dbg(ERR_ZONE, "Failed to configure WoWLAN\n");
+ 	}
+ 
+@@ -1481,9 +1479,6 @@ static void rsi_shutdown(struct device *dev)
+ 	if (sdev->write_fail)
+ 		rsi_dbg(INFO_ZONE, "###### Device is not ready #######\n");
+ 
+-	if (rsi_set_sdio_pm_caps(adapter))
+-		rsi_dbg(INFO_ZONE, "Setting power management caps failed\n");
+-
+ 	rsi_dbg(INFO_ZONE, "***** RSI module shut down *****\n");
+ }
+ 
+diff --git a/drivers/net/wireless/wl3501_cs.c b/drivers/net/wireless/wl3501_cs.c
+index ccf6344ed6fd2..4c408fd7c1594 100644
+--- a/drivers/net/wireless/wl3501_cs.c
++++ b/drivers/net/wireless/wl3501_cs.c
+@@ -134,7 +134,7 @@ static const struct {
+ 
+ /**
+  * iw_valid_channel - validate channel in regulatory domain
+- * @reg_comain: regulatory domain
++ * @reg_domain: regulatory domain
+  * @channel: channel to validate
+  *
+  * Returns 0 if invalid in the specified regulatory domain, non-zero if valid.
+@@ -458,11 +458,9 @@ out:
+ /**
+  * wl3501_send_pkt - Send a packet.
+  * @this: Card
+- *
+- * Send a packet.
+- *
+- * data = Ethernet raw frame.  (e.g. data[0] - data[5] is Dest MAC Addr,
++ * @data: Ethernet raw frame.  (e.g. data[0] - data[5] is Dest MAC Addr,
+  *                                   data[6] - data[11] is Src MAC Addr)
++ * @len: Packet length
+  * Ref: IEEE 802.11
+  */
+ static int wl3501_send_pkt(struct wl3501_card *this, u8 *data, u16 len)
+@@ -1864,6 +1862,7 @@ static int wl3501_probe(struct pcmcia_device *p_dev)
+ {
+ 	struct net_device *dev;
+ 	struct wl3501_card *this;
++	int ret;
+ 
+ 	/* The io structure describes IO port mapping */
+ 	p_dev->resource[0]->end	= 16;
+@@ -1875,8 +1874,7 @@ static int wl3501_probe(struct pcmcia_device *p_dev)
+ 
+ 	dev = alloc_etherdev(sizeof(struct wl3501_card));
+ 	if (!dev)
+-		goto out_link;
+-
++		return -ENOMEM;
+ 
+ 	dev->netdev_ops		= &wl3501_netdev_ops;
+ 	dev->watchdog_timeo	= 5 * HZ;
+@@ -1889,9 +1887,15 @@ static int wl3501_probe(struct pcmcia_device *p_dev)
+ 	netif_stop_queue(dev);
+ 	p_dev->priv = dev;
+ 
+-	return wl3501_config(p_dev);
+-out_link:
+-	return -ENOMEM;
++	ret = wl3501_config(p_dev);
++	if (ret)
++		goto out_free_etherdev;
++
++	return 0;
++
++out_free_etherdev:
++	free_netdev(dev);
++	return ret;
+ }
+ 
+ static int wl3501_config(struct pcmcia_device *link)
+@@ -1947,8 +1951,7 @@ static int wl3501_config(struct pcmcia_device *link)
+ 		goto failed;
+ 	}
+ 
+-	for (i = 0; i < 6; i++)
+-		dev->dev_addr[i] = ((char *)&this->mac_addr)[i];
++	eth_hw_addr_set(dev, this->mac_addr);
+ 
+ 	/* print probe information */
+ 	printk(KERN_INFO "%s: wl3501 @ 0x%3.3x, IRQ %d, "
+diff --git a/drivers/ntb/hw/amd/ntb_hw_amd.c b/drivers/ntb/hw/amd/ntb_hw_amd.c
+index 71428d8cbcfc5..ac401ad7884a6 100644
+--- a/drivers/ntb/hw/amd/ntb_hw_amd.c
++++ b/drivers/ntb/hw/amd/ntb_hw_amd.c
+@@ -1344,12 +1344,17 @@ static struct pci_driver amd_ntb_pci_driver = {
+ 
+ static int __init amd_ntb_pci_driver_init(void)
+ {
++	int ret;
+ 	pr_info("%s %s\n", NTB_DESC, NTB_VER);
+ 
+ 	if (debugfs_initialized())
+ 		debugfs_dir = debugfs_create_dir(KBUILD_MODNAME, NULL);
+ 
+-	return pci_register_driver(&amd_ntb_pci_driver);
++	ret = pci_register_driver(&amd_ntb_pci_driver);
++	if (ret)
++		debugfs_remove_recursive(debugfs_dir);
++
++	return ret;
+ }
+ module_init(amd_ntb_pci_driver_init);
+ 
+diff --git a/drivers/ntb/hw/idt/ntb_hw_idt.c b/drivers/ntb/hw/idt/ntb_hw_idt.c
+index d54261f508519..99711dd0b6e8e 100644
+--- a/drivers/ntb/hw/idt/ntb_hw_idt.c
++++ b/drivers/ntb/hw/idt/ntb_hw_idt.c
+@@ -2902,6 +2902,7 @@ static struct pci_driver idt_pci_driver = {
+ 
+ static int __init idt_pci_driver_init(void)
+ {
++	int ret;
+ 	pr_info("%s %s\n", NTB_DESC, NTB_VER);
+ 
+ 	/* Create the top DebugFS directory if the FS is initialized */
+@@ -2909,7 +2910,11 @@ static int __init idt_pci_driver_init(void)
+ 		dbgfs_topdir = debugfs_create_dir(KBUILD_MODNAME, NULL);
+ 
+ 	/* Register the NTB hardware driver to handle the PCI device */
+-	return pci_register_driver(&idt_pci_driver);
++	ret = pci_register_driver(&idt_pci_driver);
++	if (ret)
++		debugfs_remove_recursive(dbgfs_topdir);
++
++	return ret;
+ }
+ module_init(idt_pci_driver_init);
+ 
+diff --git a/drivers/ntb/hw/intel/ntb_hw_gen1.c b/drivers/ntb/hw/intel/ntb_hw_gen1.c
+index 093dd20057b92..4f1add57d81de 100644
+--- a/drivers/ntb/hw/intel/ntb_hw_gen1.c
++++ b/drivers/ntb/hw/intel/ntb_hw_gen1.c
+@@ -2068,12 +2068,17 @@ static struct pci_driver intel_ntb_pci_driver = {
+ 
+ static int __init intel_ntb_pci_driver_init(void)
+ {
++	int ret;
+ 	pr_info("%s %s\n", NTB_DESC, NTB_VER);
+ 
+ 	if (debugfs_initialized())
+ 		debugfs_dir = debugfs_create_dir(KBUILD_MODNAME, NULL);
+ 
+-	return pci_register_driver(&intel_ntb_pci_driver);
++	ret = pci_register_driver(&intel_ntb_pci_driver);
++	if (ret)
++		debugfs_remove_recursive(debugfs_dir);
++
++	return ret;
+ }
+ module_init(intel_ntb_pci_driver_init);
+ 
+diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
+index 4a02561cfb965..d18cb44765603 100644
+--- a/drivers/ntb/ntb_transport.c
++++ b/drivers/ntb/ntb_transport.c
+@@ -412,7 +412,7 @@ int ntb_transport_register_client_dev(char *device_name)
+ 
+ 		rc = device_register(dev);
+ 		if (rc) {
+-			kfree(client_dev);
++			put_device(dev);
+ 			goto err;
+ 		}
+ 
+diff --git a/drivers/ntb/test/ntb_tool.c b/drivers/ntb/test/ntb_tool.c
+index 5ee0afa621a95..eeeb4b1c97d2c 100644
+--- a/drivers/ntb/test/ntb_tool.c
++++ b/drivers/ntb/test/ntb_tool.c
+@@ -998,6 +998,8 @@ static int tool_init_mws(struct tool_ctx *tc)
+ 		tc->peers[pidx].outmws =
+ 			devm_kcalloc(&tc->ntb->dev, tc->peers[pidx].outmw_cnt,
+ 				   sizeof(*tc->peers[pidx].outmws), GFP_KERNEL);
++		if (tc->peers[pidx].outmws == NULL)
++			return -ENOMEM;
+ 
+ 		for (widx = 0; widx < tc->peers[pidx].outmw_cnt; widx++) {
+ 			tc->peers[pidx].outmws[widx].pidx = pidx;
+diff --git a/drivers/nubus/proc.c b/drivers/nubus/proc.c
+index 88e1f9a0faafd..78cf0e7b53d5b 100644
+--- a/drivers/nubus/proc.c
++++ b/drivers/nubus/proc.c
+@@ -137,6 +137,18 @@ static int nubus_proc_rsrc_show(struct seq_file *m, void *v)
+ 	return 0;
+ }
+ 
++static int nubus_rsrc_proc_open(struct inode *inode, struct file *file)
++{
++	return single_open(file, nubus_proc_rsrc_show, inode);
++}
++
++static const struct proc_ops nubus_rsrc_proc_ops = {
++	.proc_open	= nubus_rsrc_proc_open,
++	.proc_read	= seq_read,
++	.proc_lseek	= seq_lseek,
++	.proc_release	= single_release,
++};
++
+ void nubus_proc_add_rsrc_mem(struct proc_dir_entry *procdir,
+ 			     const struct nubus_dirent *ent,
+ 			     unsigned int size)
+@@ -152,8 +164,8 @@ void nubus_proc_add_rsrc_mem(struct proc_dir_entry *procdir,
+ 		pde_data = nubus_proc_alloc_pde_data(nubus_dirptr(ent), size);
+ 	else
+ 		pde_data = NULL;
+-	proc_create_single_data(name, S_IFREG | 0444, procdir,
+-			nubus_proc_rsrc_show, pde_data);
++	proc_create_data(name, S_IFREG | 0444, procdir,
++			 &nubus_rsrc_proc_ops, pde_data);
+ }
+ 
+ void nubus_proc_add_rsrc(struct proc_dir_entry *procdir,
+@@ -166,9 +178,9 @@ void nubus_proc_add_rsrc(struct proc_dir_entry *procdir,
+ 		return;
+ 
+ 	snprintf(name, sizeof(name), "%x", ent->type);
+-	proc_create_single_data(name, S_IFREG | 0444, procdir,
+-			nubus_proc_rsrc_show,
+-			nubus_proc_alloc_pde_data(data, 0));
++	proc_create_data(name, S_IFREG | 0444, procdir,
++			 &nubus_rsrc_proc_ops,
++			 nubus_proc_alloc_pde_data(data, 0));
+ }
+ 
+ /*
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index c47512da9872a..3aaead9b3a570 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -968,7 +968,8 @@ static void nvme_pci_complete_rq(struct request *req)
+ 
+ 	if (blk_integrity_rq(req))
+ 		dma_unmap_page(dev->dev, iod->meta_dma,
+-			       rq_integrity_vec(req)->bv_len, rq_data_dir(req));
++			       rq_integrity_vec(req)->bv_len, rq_dma_dir(req));
++
+ 	if (blk_rq_nr_phys_segments(req))
+ 		nvme_unmap_data(dev, req);
+ 	nvme_complete_rq(req);
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index fb96d37a135c1..4d8d15ac51ef4 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -12,6 +12,8 @@
+ 
+ #include "pcie-cadence.h"
+ 
++#define LINK_RETRAIN_TIMEOUT HZ
++
+ static u64 bar_max_size[] = {
+ 	[RP_BAR0] = _ULL(128 * SZ_2G),
+ 	[RP_BAR1] = SZ_2G,
+@@ -77,6 +79,27 @@ static struct pci_ops cdns_pcie_host_ops = {
+ 	.write		= pci_generic_config_write,
+ };
+ 
++static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie)
++{
++	u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
++	unsigned long end_jiffies;
++	u16 lnk_stat;
++
++	/* Wait for link training to complete. Exit after timeout. */
++	end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
++	do {
++		lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
++		if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
++			break;
++		usleep_range(0, 1000);
++	} while (time_before(jiffies, end_jiffies));
++
++	if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
++		return 0;
++
++	return -ETIMEDOUT;
++}
++
+ static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
+ {
+ 	struct device *dev = pcie->dev;
+@@ -118,6 +141,10 @@ static int cdns_pcie_retrain(struct cdns_pcie *pcie)
+ 		cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
+ 				    lnk_ctl);
+ 
++		ret = cdns_pcie_host_training_complete(pcie);
++		if (ret)
++			return ret;
++
+ 		ret = cdns_pcie_host_wait_for_link(pcie);
+ 	}
+ 	return ret;
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index c68e14271c02c..737cc9d6fa6ab 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -771,6 +771,8 @@ static int qcom_pcie_get_resources_2_4_0(struct qcom_pcie *pcie)
+ 			return PTR_ERR(res->phy_ahb_reset);
+ 	}
+ 
++	dw_pcie_dbi_ro_wr_dis(pci);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pci/controller/pci-ftpci100.c b/drivers/pci/controller/pci-ftpci100.c
+index aefef1986201a..80cfea5d9f122 100644
+--- a/drivers/pci/controller/pci-ftpci100.c
++++ b/drivers/pci/controller/pci-ftpci100.c
+@@ -442,22 +442,12 @@ static int faraday_pci_probe(struct platform_device *pdev)
+ 	p->dev = dev;
+ 
+ 	/* Retrieve and enable optional clocks */
+-	clk = devm_clk_get(dev, "PCLK");
++	clk = devm_clk_get_enabled(dev, "PCLK");
+ 	if (IS_ERR(clk))
+ 		return PTR_ERR(clk);
+-	ret = clk_prepare_enable(clk);
+-	if (ret) {
+-		dev_err(dev, "could not prepare PCLK\n");
+-		return ret;
+-	}
+-	p->bus_clk = devm_clk_get(dev, "PCICLK");
++	p->bus_clk = devm_clk_get_enabled(dev, "PCICLK");
+ 	if (IS_ERR(p->bus_clk))
+ 		return PTR_ERR(p->bus_clk);
+-	ret = clk_prepare_enable(p->bus_clk);
+-	if (ret) {
+-		dev_err(dev, "could not prepare PCICLK\n");
+-		return ret;
+-	}
+ 
+ 	p->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(p->base))
+diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c
+index 379cde59988cf..dba8bdc3fc942 100644
+--- a/drivers/pci/controller/pcie-rockchip-ep.c
++++ b/drivers/pci/controller/pcie-rockchip-ep.c
+@@ -125,6 +125,7 @@ static void rockchip_pcie_prog_ep_ob_atu(struct rockchip_pcie *rockchip, u8 fn,
+ static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
+ 					 struct pci_epf_header *hdr)
+ {
++	u32 reg;
+ 	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
+ 	struct rockchip_pcie *rockchip = &ep->rockchip;
+ 
+@@ -137,8 +138,9 @@ static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
+ 				    PCIE_CORE_CONFIG_VENDOR);
+ 	}
+ 
+-	rockchip_pcie_write(rockchip, hdr->deviceid << 16,
+-			    ROCKCHIP_PCIE_EP_FUNC_BASE(fn) + PCI_VENDOR_ID);
++	reg = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_DID_VID);
++	reg = (reg & 0xFFFF) | (hdr->deviceid << 16);
++	rockchip_pcie_write(rockchip, reg, PCIE_EP_CONFIG_DID_VID);
+ 
+ 	rockchip_pcie_write(rockchip,
+ 			    hdr->revid |
+@@ -312,15 +314,15 @@ static int rockchip_pcie_ep_set_msi(struct pci_epc *epc, u8 fn,
+ {
+ 	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
+ 	struct rockchip_pcie *rockchip = &ep->rockchip;
+-	u16 flags;
++	u32 flags;
+ 
+ 	flags = rockchip_pcie_read(rockchip,
+ 				   ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
+ 				   ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
+ 	flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK;
+ 	flags |=
+-	   ((multi_msg_cap << 1) <<  ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET) |
+-	   PCI_MSI_FLAGS_64BIT;
++	   (multi_msg_cap << ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET) |
++	   (PCI_MSI_FLAGS_64BIT << ROCKCHIP_PCIE_EP_MSI_FLAGS_OFFSET);
+ 	flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP;
+ 	rockchip_pcie_write(rockchip, flags,
+ 			    ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
+@@ -332,7 +334,7 @@ static int rockchip_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
+ {
+ 	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
+ 	struct rockchip_pcie *rockchip = &ep->rockchip;
+-	u16 flags;
++	u32 flags;
+ 
+ 	flags = rockchip_pcie_read(rockchip,
+ 				   ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
+@@ -345,48 +347,25 @@ static int rockchip_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
+ }
+ 
+ static void rockchip_pcie_ep_assert_intx(struct rockchip_pcie_ep *ep, u8 fn,
+-					 u8 intx, bool is_asserted)
++					 u8 intx, bool do_assert)
+ {
+ 	struct rockchip_pcie *rockchip = &ep->rockchip;
+-	u32 r = ep->max_regions - 1;
+-	u32 offset;
+-	u32 status;
+-	u8 msg_code;
+-
+-	if (unlikely(ep->irq_pci_addr != ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR ||
+-		     ep->irq_pci_fn != fn)) {
+-		rockchip_pcie_prog_ep_ob_atu(rockchip, fn, r,
+-					     AXI_WRAPPER_NOR_MSG,
+-					     ep->irq_phys_addr, 0, 0);
+-		ep->irq_pci_addr = ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR;
+-		ep->irq_pci_fn = fn;
+-	}
+ 
+ 	intx &= 3;
+-	if (is_asserted) {
++
++	if (do_assert) {
+ 		ep->irq_pending |= BIT(intx);
+-		msg_code = ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTA + intx;
++		rockchip_pcie_write(rockchip,
++				    PCIE_CLIENT_INT_IN_ASSERT |
++				    PCIE_CLIENT_INT_PEND_ST_PEND,
++				    PCIE_CLIENT_LEGACY_INT_CTRL);
+ 	} else {
+ 		ep->irq_pending &= ~BIT(intx);
+-		msg_code = ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTA + intx;
++		rockchip_pcie_write(rockchip,
++				    PCIE_CLIENT_INT_IN_DEASSERT |
++				    PCIE_CLIENT_INT_PEND_ST_NORMAL,
++				    PCIE_CLIENT_LEGACY_INT_CTRL);
+ 	}
+-
+-	status = rockchip_pcie_read(rockchip,
+-				    ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
+-				    ROCKCHIP_PCIE_EP_CMD_STATUS);
+-	status &= ROCKCHIP_PCIE_EP_CMD_STATUS_IS;
+-
+-	if ((status != 0) ^ (ep->irq_pending != 0)) {
+-		status ^= ROCKCHIP_PCIE_EP_CMD_STATUS_IS;
+-		rockchip_pcie_write(rockchip, status,
+-				    ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
+-				    ROCKCHIP_PCIE_EP_CMD_STATUS);
+-	}
+-
+-	offset =
+-	   ROCKCHIP_PCIE_MSG_ROUTING(ROCKCHIP_PCIE_MSG_ROUTING_LOCAL_INTX) |
+-	   ROCKCHIP_PCIE_MSG_CODE(msg_code) | ROCKCHIP_PCIE_MSG_NO_DATA;
+-	writel(0, ep->irq_cpu_addr + offset);
+ }
+ 
+ static int rockchip_pcie_ep_send_legacy_irq(struct rockchip_pcie_ep *ep, u8 fn,
+@@ -416,7 +395,7 @@ static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn,
+ 					 u8 interrupt_num)
+ {
+ 	struct rockchip_pcie *rockchip = &ep->rockchip;
+-	u16 flags, mme, data, data_mask;
++	u32 flags, mme, data, data_mask;
+ 	u8 msi_count;
+ 	u64 pci_addr, pci_addr_mask = 0xff;
+ 
+@@ -506,6 +485,7 @@ static const struct pci_epc_features rockchip_pcie_epc_features = {
+ 	.linkup_notifier = false,
+ 	.msi_capable = true,
+ 	.msix_capable = false,
++	.align = 256,
+ };
+ 
+ static const struct pci_epc_features*
+@@ -631,6 +611,9 @@ static int rockchip_pcie_ep_probe(struct platform_device *pdev)
+ 
+ 	ep->irq_pci_addr = ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR;
+ 
++	rockchip_pcie_write(rockchip, PCIE_CLIENT_CONF_ENABLE,
++			    PCIE_CLIENT_CONFIG);
++
+ 	return 0;
+ err_epc_mem_exit:
+ 	pci_epc_mem_exit(epc);
+diff --git a/drivers/pci/controller/pcie-rockchip.c b/drivers/pci/controller/pcie-rockchip.c
+index 990a00e08bc5b..1aa84035a8bc7 100644
+--- a/drivers/pci/controller/pcie-rockchip.c
++++ b/drivers/pci/controller/pcie-rockchip.c
+@@ -14,6 +14,7 @@
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+ #include <linux/gpio/consumer.h>
++#include <linux/iopoll.h>
+ #include <linux/of_pci.h>
+ #include <linux/phy/phy.h>
+ #include <linux/platform_device.h>
+@@ -153,6 +154,12 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
+ }
+ EXPORT_SYMBOL_GPL(rockchip_pcie_parse_dt);
+ 
++#define rockchip_pcie_read_addr(addr) rockchip_pcie_read(rockchip, addr)
++/* 100 ms max wait time for PHY PLLs to lock */
++#define RK_PHY_PLL_LOCK_TIMEOUT_US 100000
++/* Sleep should be less than 20ms */
++#define RK_PHY_PLL_LOCK_SLEEP_US 1000
++
+ int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
+ {
+ 	struct device *dev = rockchip->dev;
+@@ -254,6 +261,16 @@ int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
+ 		}
+ 	}
+ 
++	err = readx_poll_timeout(rockchip_pcie_read_addr,
++				 PCIE_CLIENT_SIDE_BAND_STATUS,
++				 regs, !(regs & PCIE_CLIENT_PHY_ST),
++				 RK_PHY_PLL_LOCK_SLEEP_US,
++				 RK_PHY_PLL_LOCK_TIMEOUT_US);
++	if (err) {
++		dev_err(dev, "PHY PLLs could not lock, %d\n", err);
++		goto err_power_off_phy;
++	}
++
+ 	/*
+ 	 * Please don't reorder the deassert sequence of the following
+ 	 * four reset pins.
+diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h
+index c7d0178fc8c23..76a5f96bfd0a7 100644
+--- a/drivers/pci/controller/pcie-rockchip.h
++++ b/drivers/pci/controller/pcie-rockchip.h
+@@ -37,6 +37,13 @@
+ #define   PCIE_CLIENT_MODE_EP            HIWORD_UPDATE(0x0040, 0)
+ #define   PCIE_CLIENT_GEN_SEL_1		  HIWORD_UPDATE(0x0080, 0)
+ #define   PCIE_CLIENT_GEN_SEL_2		  HIWORD_UPDATE_BIT(0x0080)
++#define PCIE_CLIENT_LEGACY_INT_CTRL	(PCIE_CLIENT_BASE + 0x0c)
++#define   PCIE_CLIENT_INT_IN_ASSERT		HIWORD_UPDATE_BIT(0x0002)
++#define   PCIE_CLIENT_INT_IN_DEASSERT		HIWORD_UPDATE(0x0002, 0)
++#define   PCIE_CLIENT_INT_PEND_ST_PEND		HIWORD_UPDATE_BIT(0x0001)
++#define   PCIE_CLIENT_INT_PEND_ST_NORMAL	HIWORD_UPDATE(0x0001, 0)
++#define PCIE_CLIENT_SIDE_BAND_STATUS	(PCIE_CLIENT_BASE + 0x20)
++#define   PCIE_CLIENT_PHY_ST			BIT(12)
+ #define PCIE_CLIENT_DEBUG_OUT_0		(PCIE_CLIENT_BASE + 0x3c)
+ #define   PCIE_CLIENT_DEBUG_LTSSM_MASK		GENMASK(5, 0)
+ #define   PCIE_CLIENT_DEBUG_LTSSM_L1		0x18
+@@ -132,6 +139,8 @@
+ #define PCIE_RC_RP_ATS_BASE		0x400000
+ #define PCIE_RC_CONFIG_NORMAL_BASE	0x800000
+ #define PCIE_RC_CONFIG_BASE		0xa00000
++#define PCIE_EP_CONFIG_BASE		0xa00000
++#define PCIE_EP_CONFIG_DID_VID		(PCIE_EP_CONFIG_BASE + 0x00)
+ #define PCIE_RC_CONFIG_RID_CCR		(PCIE_RC_CONFIG_BASE + 0x08)
+ #define   PCIE_RC_CONFIG_SCC_SHIFT		16
+ #define PCIE_RC_CONFIG_DCR		(PCIE_RC_CONFIG_BASE + 0xc4)
+@@ -223,6 +232,7 @@
+ #define ROCKCHIP_PCIE_EP_CMD_STATUS			0x4
+ #define   ROCKCHIP_PCIE_EP_CMD_STATUS_IS		BIT(19)
+ #define ROCKCHIP_PCIE_EP_MSI_CTRL_REG			0x90
++#define   ROCKCHIP_PCIE_EP_MSI_FLAGS_OFFSET		16
+ #define   ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET		17
+ #define   ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK		GENMASK(19, 17)
+ #define   ROCKCHIP_PCIE_EP_MSI_CTRL_MME_OFFSET		20
+@@ -230,7 +240,6 @@
+ #define   ROCKCHIP_PCIE_EP_MSI_CTRL_ME				BIT(16)
+ #define   ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP	BIT(24)
+ #define ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR				0x1
+-#define ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR		0x3
+ #define ROCKCHIP_PCIE_EP_FUNC_BASE(fn)	(((fn) << 12) & GENMASK(19, 12))
+ #define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
+ 	(PCIE_RC_RP_ATS_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
+diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
+index 529c348084401..32baba1b7f131 100644
+--- a/drivers/pci/hotplug/pciehp_ctrl.c
++++ b/drivers/pci/hotplug/pciehp_ctrl.c
+@@ -256,6 +256,14 @@ void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events)
+ 	present = pciehp_card_present(ctrl);
+ 	link_active = pciehp_check_link_active(ctrl);
+ 	if (present <= 0 && link_active <= 0) {
++		if (ctrl->state == BLINKINGON_STATE) {
++			ctrl->state = OFF_STATE;
++			cancel_delayed_work(&ctrl->button_work);
++			pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
++					      INDICATOR_NOOP);
++			ctrl_info(ctrl, "Slot(%s): Card not present\n",
++				  slot_name(ctrl));
++		}
+ 		mutex_unlock(&ctrl->state_lock);
+ 		return;
+ 	}
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index d37013d007b6e..1f8106ec70945 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2830,13 +2830,13 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
+ 	{
+ 		/*
+ 		 * Downstream device is not accessible after putting a root port
+-		 * into D3cold and back into D0 on Elo i2.
++		 * into D3cold and back into D0 on Elo Continental Z2 board
+ 		 */
+-		.ident = "Elo i2",
++		.ident = "Elo Continental Z2",
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Elo Touch Solutions"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Elo i2"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "RevB"),
++			DMI_MATCH(DMI_BOARD_VENDOR, "Elo Touch Solutions"),
++			DMI_MATCH(DMI_BOARD_NAME, "Geminilake"),
++			DMI_MATCH(DMI_BOARD_VERSION, "Continental Z2"),
+ 		},
+ 	},
+ #endif
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index ac0557a305aff..51da8ba67d216 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -993,21 +993,24 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ 
+ 	down_read(&pci_bus_sem);
+ 	mutex_lock(&aspm_lock);
+-	/*
+-	 * All PCIe functions are in one slot, remove one function will remove
+-	 * the whole slot, so just wait until we are the last function left.
+-	 */
+-	if (!list_empty(&parent->subordinate->devices))
+-		goto out;
+ 
+ 	link = parent->link_state;
+ 	root = link->root;
+ 	parent_link = link->parent;
+ 
+-	/* All functions are removed, so just disable ASPM for the link */
++	/*
++	 * link->downstream is a pointer to the pci_dev of function 0.  If
++	 * we remove that function, the pci_dev is about to be deallocated,
++	 * so we can't use link->downstream again.  Free the link state to
++	 * avoid this.
++	 *
++	 * If we're removing a non-0 function, it's possible we could
++	 * retain the link state, but PCIe r6.0, sec 7.5.3.7, recommends
++	 * programming the same ASPM Control value for all functions of
++	 * multi-function devices, so disable ASPM for all of them.
++	 */
+ 	pcie_config_aspm_link(link, 0);
+ 	list_del(&link->sibling);
+-	/* Clock PM is for endpoint device */
+ 	free_link_state(link);
+ 
+ 	/* Recheck latencies and configure upstream links */
+@@ -1015,7 +1018,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ 		pcie_update_aspm_capable(root);
+ 		pcie_config_aspm_path(parent_link);
+ 	}
+-out:
++
+ 	mutex_unlock(&aspm_lock);
+ 	up_read(&pci_bus_sem);
+ }
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index c1ebd5e12b06e..c0d1134811915 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4123,6 +4123,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9220,
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c49 */
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9230,
+ 			 quirk_dma_func1_alias);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9235,
++			 quirk_dma_func1_alias);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TTI, 0x0642,
+ 			 quirk_dma_func1_alias);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TTI, 0x0645,
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index bb019e3839888..36061aaf026c8 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -1254,9 +1254,10 @@ static int arm_cmn_init_dtc(struct arm_cmn *cmn, struct arm_cmn_node *dn, int id
+ 	if (dtc->irq < 0)
+ 		return dtc->irq;
+ 
+-	writel_relaxed(0, dtc->base + CMN_DT_PMCR);
++	writel_relaxed(CMN_DT_DTC_CTL_DT_EN, dtc->base + CMN_DT_DTC_CTL);
++	writel_relaxed(CMN_DT_PMCR_PMU_EN | CMN_DT_PMCR_OVFL_INTR_EN, dtc->base + CMN_DT_PMCR);
++	writeq_relaxed(0, dtc->base + CMN_DT_PMCCNTR);
+ 	writel_relaxed(0x1ff, dtc->base + CMN_DT_PMOVSR_CLR);
+-	writel_relaxed(CMN_DT_PMCR_OVFL_INTR_EN, dtc->base + CMN_DT_PMCR);
+ 
+ 	/* We do at least know that a DTC's XP must be in that DTC's domain */
+ 	xp = arm_cmn_node_to_xp(dn);
+@@ -1303,7 +1304,7 @@ static int arm_cmn_init_dtcs(struct arm_cmn *cmn)
+ 			dn->type = CMN_TYPE_RNI;
+ 	}
+ 
+-	writel_relaxed(CMN_DT_DTC_CTL_DT_EN, cmn->dtc[0].base + CMN_DT_DTC_CTL);
++	arm_cmn_set_state(cmn, CMN_STATE_DISABLED);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/phy/tegra/xusb.c b/drivers/phy/tegra/xusb.c
+index d07f33ec79397..8f11b293c48d1 100644
+--- a/drivers/phy/tegra/xusb.c
++++ b/drivers/phy/tegra/xusb.c
+@@ -556,6 +556,7 @@ static void tegra_xusb_port_unregister(struct tegra_xusb_port *port)
+ 		usb_role_switch_unregister(port->usb_role_sw);
+ 		cancel_work_sync(&port->usb_phy_work);
+ 		usb_remove_phy(&port->usb_phy);
++		port->usb_phy.dev->driver = NULL;
+ 	}
+ 
+ 	if (port->ops->remove)
+@@ -662,6 +663,9 @@ static int tegra_xusb_setup_usb_role_switch(struct tegra_xusb_port *port)
+ 	port->dev.driver = devm_kzalloc(&port->dev,
+ 					sizeof(struct device_driver),
+ 					GFP_KERNEL);
++	if (!port->dev.driver)
++		return -ENOMEM;
++
+ 	port->dev.driver->owner	 = THIS_MODULE;
+ 
+ 	port->usb_role_sw = usb_role_switch_register(&port->dev,
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index c7ae9f900b532..e3f49d0ed0298 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -359,10 +359,8 @@ static int bcm2835_of_gpio_ranges_fallback(struct gpio_chip *gc,
+ 	if (!pctldev)
+ 		return 0;
+ 
+-	gpiochip_add_pin_range(gc, pinctrl_dev_get_devname(pctldev), 0, 0,
+-			       gc->ngpio);
+-
+-	return 0;
++	return gpiochip_add_pin_range(gc, pinctrl_dev_get_devname(pctldev), 0, 0,
++				      gc->ngpio);
+ }
+ 
+ static const struct gpio_chip bcm2835_gpio_chip = {
+diff --git a/drivers/pinctrl/intel/pinctrl-cherryview.c b/drivers/pinctrl/intel/pinctrl-cherryview.c
+index 2ed17cdf946d1..44caada37b71f 100644
+--- a/drivers/pinctrl/intel/pinctrl-cherryview.c
++++ b/drivers/pinctrl/intel/pinctrl-cherryview.c
+@@ -945,11 +945,6 @@ static int chv_config_get(struct pinctrl_dev *pctldev, unsigned int pin,
+ 
+ 		break;
+ 
+-	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
+-		if (!(ctrl1 & CHV_PADCTRL1_ODEN))
+-			return -EINVAL;
+-		break;
+-
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE: {
+ 		u32 cfg;
+ 
+@@ -959,6 +954,16 @@ static int chv_config_get(struct pinctrl_dev *pctldev, unsigned int pin,
+ 			return -EINVAL;
+ 
+ 		break;
++
++	case PIN_CONFIG_DRIVE_PUSH_PULL:
++		if (ctrl1 & CHV_PADCTRL1_ODEN)
++			return -EINVAL;
++		break;
++
++	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
++		if (!(ctrl1 & CHV_PADCTRL1_ODEN))
++			return -EINVAL;
++		break;
+ 	}
+ 
+ 	default:
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 82b658a3c220a..3a05ebb9aa253 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -126,6 +126,14 @@ static int amd_gpio_set_debounce(struct gpio_chip *gc, unsigned offset,
+ 	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
+ 
+ 	raw_spin_lock_irqsave(&gpio_dev->lock, flags);
++
++	/* Use special handling for Pin0 debounce */
++	if (offset == 0) {
++		pin_reg = readl(gpio_dev->base + WAKE_INT_MASTER_REG);
++		if (pin_reg & INTERNAL_GPIO0_DEBOUNCE)
++			debounce = 0;
++	}
++
+ 	pin_reg = readl(gpio_dev->base + offset * 4);
+ 
+ 	if (debounce) {
+@@ -181,18 +189,6 @@ static int amd_gpio_set_debounce(struct gpio_chip *gc, unsigned offset,
+ 	return ret;
+ }
+ 
+-static int amd_gpio_set_config(struct gpio_chip *gc, unsigned offset,
+-			       unsigned long config)
+-{
+-	u32 debounce;
+-
+-	if (pinconf_to_config_param(config) != PIN_CONFIG_INPUT_DEBOUNCE)
+-		return -ENOTSUPP;
+-
+-	debounce = pinconf_to_config_argument(config);
+-	return amd_gpio_set_debounce(gc, offset, debounce);
+-}
+-
+ #ifdef CONFIG_DEBUG_FS
+ static void amd_gpio_dbg_show(struct seq_file *s, struct gpio_chip *gc)
+ {
+@@ -215,6 +211,7 @@ static void amd_gpio_dbg_show(struct seq_file *s, struct gpio_chip *gc)
+ 	char *output_value;
+ 	char *output_enable;
+ 
++	seq_printf(s, "WAKE_INT_MASTER_REG: 0x%08x\n", readl(gpio_dev->base + WAKE_INT_MASTER_REG));
+ 	for (bank = 0; bank < gpio_dev->hwbank_num; bank++) {
+ 		seq_printf(s, "GPIO bank%d\t", bank);
+ 
+@@ -667,7 +664,7 @@ static int amd_pinconf_get(struct pinctrl_dev *pctldev,
+ }
+ 
+ static int amd_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+-				unsigned long *configs, unsigned num_configs)
++			   unsigned long *configs, unsigned int num_configs)
+ {
+ 	int i;
+ 	u32 arg;
+@@ -757,6 +754,20 @@ static int amd_pinconf_group_set(struct pinctrl_dev *pctldev,
+ 	return 0;
+ }
+ 
++static int amd_gpio_set_config(struct gpio_chip *gc, unsigned int pin,
++			       unsigned long config)
++{
++	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
++
++	if (pinconf_to_config_param(config) == PIN_CONFIG_INPUT_DEBOUNCE) {
++		u32 debounce = pinconf_to_config_argument(config);
++
++		return amd_gpio_set_debounce(gc, pin, debounce);
++	}
++
++	return amd_pinconf_set(gpio_dev->pctrl, pin, &config, 1);
++}
++
+ static const struct pinconf_ops amd_pinconf_ops = {
+ 	.pin_config_get		= amd_pinconf_get,
+ 	.pin_config_set		= amd_pinconf_set,
+@@ -784,9 +795,9 @@ static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
+ 
+ 		raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+ 
+-		pin_reg = readl(gpio_dev->base + i * 4);
++		pin_reg = readl(gpio_dev->base + pin * 4);
+ 		pin_reg &= ~mask;
+-		writel(pin_reg, gpio_dev->base + i * 4);
++		writel(pin_reg, gpio_dev->base + pin * 4);
+ 
+ 		raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ 	}
+diff --git a/drivers/pinctrl/pinctrl-amd.h b/drivers/pinctrl/pinctrl-amd.h
+index 95e7634240422..9f95ec9e2201a 100644
+--- a/drivers/pinctrl/pinctrl-amd.h
++++ b/drivers/pinctrl/pinctrl-amd.h
+@@ -17,6 +17,7 @@
+ #define AMD_GPIO_PINS_BANK3     32
+ 
+ #define WAKE_INT_MASTER_REG 0xfc
++#define INTERNAL_GPIO0_DEBOUNCE (1 << 15)
+ #define EOI_MASK (1 << 29)
+ 
+ #define WAKE_INT_STATUS_REG0 0x2f8
+diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
+index 315a6c4d9ade0..bf8aa0ea35d1b 100644
+--- a/drivers/pinctrl/pinctrl-at91-pio4.c
++++ b/drivers/pinctrl/pinctrl-at91-pio4.c
+@@ -1083,6 +1083,8 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
+ 		/* Pin naming convention: P(bank_name)(bank_pin_number). */
+ 		pin_desc[i].name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "P%c%d",
+ 						  bank + 'A', line);
++		if (!pin_desc[i].name)
++			return -ENOMEM;
+ 
+ 		group->name = group_names[i] = pin_desc[i].name;
+ 		group->pin = pin_desc[i].number;
+diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
+index 1f80b26281628..567c28705cb1b 100644
+--- a/drivers/platform/x86/wmi.c
++++ b/drivers/platform/x86/wmi.c
+@@ -40,7 +40,7 @@ MODULE_LICENSE("GPL");
+ static LIST_HEAD(wmi_block_list);
+ 
+ struct guid_block {
+-	char guid[16];
++	guid_t guid;
+ 	union {
+ 		char object_id[2];
+ 		struct {
+@@ -121,7 +121,7 @@ static bool find_guid(const char *guid_string, struct wmi_block **out)
+ 	list_for_each_entry(wblock, &wmi_block_list, list) {
+ 		block = &wblock->gblock;
+ 
+-		if (memcmp(block->guid, &guid_input, 16) == 0) {
++		if (guid_equal(&block->guid, &guid_input)) {
+ 			if (out)
+ 				*out = wblock;
+ 			return true;
+@@ -130,11 +130,20 @@ static bool find_guid(const char *guid_string, struct wmi_block **out)
+ 	return false;
+ }
+ 
++static bool guid_parse_and_compare(const char *string, const guid_t *guid)
++{
++	guid_t guid_input;
++
++	if (guid_parse(string, &guid_input))
++		return false;
++
++	return guid_equal(&guid_input, guid);
++}
++
+ static const void *find_guid_context(struct wmi_block *wblock,
+ 				      struct wmi_driver *wdriver)
+ {
+ 	const struct wmi_device_id *id;
+-	guid_t guid_input;
+ 
+ 	if (wblock == NULL || wdriver == NULL)
+ 		return NULL;
+@@ -143,9 +152,7 @@ static const void *find_guid_context(struct wmi_block *wblock,
+ 
+ 	id = wdriver->id_table;
+ 	while (*id->guid_string) {
+-		if (guid_parse(id->guid_string, &guid_input))
+-			continue;
+-		if (!memcmp(wblock->gblock.guid, &guid_input, 16))
++		if (guid_parse_and_compare(id->guid_string, &wblock->gblock.guid))
+ 			return id->context;
+ 		id++;
+ 	}
+@@ -457,7 +464,7 @@ EXPORT_SYMBOL_GPL(wmi_set_block);
+ 
+ static void wmi_dump_wdg(const struct guid_block *g)
+ {
+-	pr_info("%pUL:\n", g->guid);
++	pr_info("%pUL:\n", &g->guid);
+ 	if (g->flags & ACPI_WMI_EVENT)
+ 		pr_info("\tnotify_id: 0x%02X\n", g->notify_id);
+ 	else
+@@ -539,7 +546,7 @@ wmi_notify_handler handler, void *data)
+ 	list_for_each_entry(block, &wmi_block_list, list) {
+ 		acpi_status wmi_status;
+ 
+-		if (memcmp(block->gblock.guid, &guid_input, 16) == 0) {
++		if (guid_equal(&block->gblock.guid, &guid_input)) {
+ 			if (block->handler &&
+ 			    block->handler != wmi_notify_debug)
+ 				return AE_ALREADY_ACQUIRED;
+@@ -579,7 +586,7 @@ acpi_status wmi_remove_notify_handler(const char *guid)
+ 	list_for_each_entry(block, &wmi_block_list, list) {
+ 		acpi_status wmi_status;
+ 
+-		if (memcmp(block->gblock.guid, &guid_input, 16) == 0) {
++		if (guid_equal(&block->gblock.guid, &guid_input)) {
+ 			if (!block->handler ||
+ 			    block->handler == wmi_notify_debug)
+ 				return AE_NULL_ENTRY;
+@@ -615,7 +622,6 @@ acpi_status wmi_get_event_data(u32 event, struct acpi_buffer *out)
+ {
+ 	struct acpi_object_list input;
+ 	union acpi_object params[1];
+-	struct guid_block *gblock;
+ 	struct wmi_block *wblock;
+ 
+ 	input.count = 1;
+@@ -624,7 +630,7 @@ acpi_status wmi_get_event_data(u32 event, struct acpi_buffer *out)
+ 	params[0].integer.value = event;
+ 
+ 	list_for_each_entry(wblock, &wmi_block_list, list) {
+-		gblock = &wblock->gblock;
++		struct guid_block *gblock = &wblock->gblock;
+ 
+ 		if ((gblock->flags & ACPI_WMI_EVENT) &&
+ 			(gblock->notify_id == event))
+@@ -685,7 +691,7 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
+ {
+ 	struct wmi_block *wblock = dev_to_wblock(dev);
+ 
+-	return sprintf(buf, "wmi:%pUL\n", wblock->gblock.guid);
++	return sprintf(buf, "wmi:%pUL\n", &wblock->gblock.guid);
+ }
+ static DEVICE_ATTR_RO(modalias);
+ 
+@@ -694,7 +700,7 @@ static ssize_t guid_show(struct device *dev, struct device_attribute *attr,
+ {
+ 	struct wmi_block *wblock = dev_to_wblock(dev);
+ 
+-	return sprintf(buf, "%pUL\n", wblock->gblock.guid);
++	return sprintf(buf, "%pUL\n", &wblock->gblock.guid);
+ }
+ static DEVICE_ATTR_RO(guid);
+ 
+@@ -777,10 +783,10 @@ static int wmi_dev_uevent(struct device *dev, struct kobj_uevent_env *env)
+ {
+ 	struct wmi_block *wblock = dev_to_wblock(dev);
+ 
+-	if (add_uevent_var(env, "MODALIAS=wmi:%pUL", wblock->gblock.guid))
++	if (add_uevent_var(env, "MODALIAS=wmi:%pUL", &wblock->gblock.guid))
+ 		return -ENOMEM;
+ 
+-	if (add_uevent_var(env, "WMI_GUID=%pUL", wblock->gblock.guid))
++	if (add_uevent_var(env, "WMI_GUID=%pUL", &wblock->gblock.guid))
+ 		return -ENOMEM;
+ 
+ 	return 0;
+@@ -804,11 +810,7 @@ static int wmi_dev_match(struct device *dev, struct device_driver *driver)
+ 		return 0;
+ 
+ 	while (*id->guid_string) {
+-		guid_t driver_guid;
+-
+-		if (WARN_ON(guid_parse(id->guid_string, &driver_guid)))
+-			continue;
+-		if (!memcmp(&driver_guid, wblock->gblock.guid, 16))
++		if (guid_parse_and_compare(id->guid_string, &wblock->gblock.guid))
+ 			return 1;
+ 
+ 		id++;
+@@ -1042,7 +1044,6 @@ static const struct device_type wmi_type_data = {
+ };
+ 
+ static int wmi_create_device(struct device *wmi_bus_dev,
+-			     const struct guid_block *gblock,
+ 			     struct wmi_block *wblock,
+ 			     struct acpi_device *device)
+ {
+@@ -1050,12 +1051,12 @@ static int wmi_create_device(struct device *wmi_bus_dev,
+ 	char method[5];
+ 	int result;
+ 
+-	if (gblock->flags & ACPI_WMI_EVENT) {
++	if (wblock->gblock.flags & ACPI_WMI_EVENT) {
+ 		wblock->dev.dev.type = &wmi_type_event;
+ 		goto out_init;
+ 	}
+ 
+-	if (gblock->flags & ACPI_WMI_METHOD) {
++	if (wblock->gblock.flags & ACPI_WMI_METHOD) {
+ 		wblock->dev.dev.type = &wmi_type_method;
+ 		mutex_init(&wblock->char_mutex);
+ 		goto out_init;
+@@ -1105,7 +1106,7 @@ static int wmi_create_device(struct device *wmi_bus_dev,
+ 	wblock->dev.dev.bus = &wmi_bus_type;
+ 	wblock->dev.dev.parent = wmi_bus_dev;
+ 
+-	dev_set_name(&wblock->dev.dev, "%pUL", gblock->guid);
++	dev_set_name(&wblock->dev.dev, "%pUL", &wblock->gblock.guid);
+ 
+ 	device_initialize(&wblock->dev.dev);
+ 
+@@ -1125,12 +1126,12 @@ static void wmi_free_devices(struct acpi_device *device)
+ 	}
+ }
+ 
+-static bool guid_already_parsed(struct acpi_device *device, const u8 *guid)
++static bool guid_already_parsed(struct acpi_device *device, const guid_t *guid)
+ {
+ 	struct wmi_block *wblock;
+ 
+ 	list_for_each_entry(wblock, &wmi_block_list, list) {
+-		if (memcmp(wblock->gblock.guid, guid, 16) == 0) {
++		if (guid_equal(&wblock->gblock.guid, guid)) {
+ 			/*
+ 			 * Because we historically didn't track the relationship
+ 			 * between GUIDs and ACPI nodes, we don't know whether
+@@ -1185,7 +1186,7 @@ static int parse_wdg(struct device *wmi_bus_dev, struct acpi_device *device)
+ 		 * case yet, so for now, we'll just ignore the duplicate
+ 		 * for device creation.
+ 		 */
+-		if (guid_already_parsed(device, gblock[i].guid))
++		if (guid_already_parsed(device, &gblock[i].guid))
+ 			continue;
+ 
+ 		wblock = kzalloc(sizeof(struct wmi_block), GFP_KERNEL);
+@@ -1197,7 +1198,7 @@ static int parse_wdg(struct device *wmi_bus_dev, struct acpi_device *device)
+ 		wblock->acpi_device = device;
+ 		wblock->gblock = gblock[i];
+ 
+-		retval = wmi_create_device(wmi_bus_dev, &gblock[i], wblock, device);
++		retval = wmi_create_device(wmi_bus_dev, wblock, device);
+ 		if (retval) {
+ 			kfree(wblock);
+ 			continue;
+@@ -1222,7 +1223,7 @@ static int parse_wdg(struct device *wmi_bus_dev, struct acpi_device *device)
+ 		retval = device_add(&wblock->dev.dev);
+ 		if (retval) {
+ 			dev_err(wmi_bus_dev, "failed to register %pUL\n",
+-				wblock->gblock.guid);
++				&wblock->gblock.guid);
+ 			if (debug_event)
+ 				wmi_method_enable(wblock, 0);
+ 			list_del(&wblock->list);
+@@ -1282,12 +1283,11 @@ acpi_wmi_ec_space_handler(u32 function, acpi_physical_address address,
+ static void acpi_wmi_notify_handler(acpi_handle handle, u32 event,
+ 				    void *context)
+ {
+-	struct guid_block *block;
+ 	struct wmi_block *wblock;
+ 	bool found_it = false;
+ 
+ 	list_for_each_entry(wblock, &wmi_block_list, list) {
+-		block = &wblock->gblock;
++		struct guid_block *block = &wblock->gblock;
+ 
+ 		if (wblock->acpi_device->handle == handle &&
+ 		    (block->flags & ACPI_WMI_EVENT) &&
+@@ -1336,7 +1336,7 @@ static void acpi_wmi_notify_handler(acpi_handle handle, u32 event,
+ 	}
+ 
+ 	if (debug_event)
+-		pr_info("DEBUG Event GUID: %pUL\n", wblock->gblock.guid);
++		pr_info("DEBUG Event GUID: %pUL\n", &wblock->gblock.guid);
+ 
+ 	acpi_bus_generate_netlink_event(
+ 		wblock->acpi_device->pnp.device_class,
+diff --git a/drivers/powercap/Kconfig b/drivers/powercap/Kconfig
+index bc228725346b4..0e4b2c214a70a 100644
+--- a/drivers/powercap/Kconfig
++++ b/drivers/powercap/Kconfig
+@@ -18,10 +18,12 @@ if POWERCAP
+ # Client driver configurations go here.
+ config INTEL_RAPL_CORE
+ 	tristate
++	depends on PCI
++	select IOSF_MBI
+ 
+ config INTEL_RAPL
+ 	tristate "Intel RAPL Support via MSR Interface"
+-	depends on X86 && IOSF_MBI
++	depends on X86 && PCI
+ 	select INTEL_RAPL_CORE
+ 	help
+ 	  This enables support for the Intel Running Average Power Limit (RAPL)
+diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c
+index 1646808d354ce..6b68e5ed20812 100644
+--- a/drivers/powercap/intel_rapl_msr.c
++++ b/drivers/powercap/intel_rapl_msr.c
+@@ -22,7 +22,6 @@
+ #include <linux/processor.h>
+ #include <linux/platform_device.h>
+ 
+-#include <asm/iosf_mbi.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
+ 
+diff --git a/drivers/pwm/pwm-imx-tpm.c b/drivers/pwm/pwm-imx-tpm.c
+index fcdf6befb8389..871527b78aa46 100644
+--- a/drivers/pwm/pwm-imx-tpm.c
++++ b/drivers/pwm/pwm-imx-tpm.c
+@@ -403,6 +403,13 @@ static int __maybe_unused pwm_imx_tpm_suspend(struct device *dev)
+ 	if (tpm->enable_count > 0)
+ 		return -EBUSY;
+ 
++	/*
++	 * Force 'real_period' to be zero to force period update code
++	 * can be executed after system resume back, since suspend causes
++	 * the period related registers to become their reset values.
++	 */
++	tpm->real_period = 0;
++
+ 	clk_disable_unprepare(tpm->clk);
+ 
+ 	return 0;
+diff --git a/drivers/pwm/sysfs.c b/drivers/pwm/sysfs.c
+index 9903c3a7ecedc..b8417a8d2ef97 100644
+--- a/drivers/pwm/sysfs.c
++++ b/drivers/pwm/sysfs.c
+@@ -424,6 +424,13 @@ static int pwm_class_resume_npwm(struct device *parent, unsigned int npwm)
+ 		if (!export)
+ 			continue;
+ 
++		/* If pwmchip was not enabled before suspend, do nothing. */
++		if (!export->suspend.enabled) {
++			/* release lock taken in pwm_class_get_state */
++			mutex_unlock(&export->lock);
++			continue;
++		}
++
+ 		state.enabled = export->suspend.enabled;
+ 		ret = pwm_class_apply_state(export, pwm, &state);
+ 		if (ret < 0)
+@@ -448,7 +455,17 @@ static int __maybe_unused pwm_class_suspend(struct device *parent)
+ 		if (!export)
+ 			continue;
+ 
++		/*
++		 * If pwmchip was not enabled before suspend, save
++		 * state for resume time and do nothing else.
++		 */
+ 		export->suspend = state;
++		if (!state.enabled) {
++			/* release lock taken in pwm_class_get_state */
++			mutex_unlock(&export->lock);
++			continue;
++		}
++
+ 		state.enabled = false;
+ 		ret = pwm_class_apply_state(export, pwm, &state);
+ 		if (ret < 0) {
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index f5ab74683b58a..52b75779dbb7e 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1751,19 +1751,17 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ 
+ 	if (err != -EEXIST)
+ 		regulator->debugfs = debugfs_create_dir(supply_name, rdev->debugfs);
+-	if (!regulator->debugfs) {
++	if (IS_ERR(regulator->debugfs))
+ 		rdev_dbg(rdev, "Failed to create debugfs directory\n");
+-	} else {
+-		debugfs_create_u32("uA_load", 0444, regulator->debugfs,
+-				   &regulator->uA_load);
+-		debugfs_create_u32("min_uV", 0444, regulator->debugfs,
+-				   &regulator->voltage[PM_SUSPEND_ON].min_uV);
+-		debugfs_create_u32("max_uV", 0444, regulator->debugfs,
+-				   &regulator->voltage[PM_SUSPEND_ON].max_uV);
+-		debugfs_create_file("constraint_flags", 0444,
+-				    regulator->debugfs, regulator,
+-				    &constraint_flags_fops);
+-	}
++
++	debugfs_create_u32("uA_load", 0444, regulator->debugfs,
++			   &regulator->uA_load);
++	debugfs_create_u32("min_uV", 0444, regulator->debugfs,
++			   &regulator->voltage[PM_SUSPEND_ON].min_uV);
++	debugfs_create_u32("max_uV", 0444, regulator->debugfs,
++			   &regulator->voltage[PM_SUSPEND_ON].max_uV);
++	debugfs_create_file("constraint_flags", 0444, regulator->debugfs,
++			    regulator, &constraint_flags_fops);
+ 
+ 	/*
+ 	 * Check now if the regulator is an always on regulator - if
+@@ -5032,10 +5030,8 @@ static void rdev_init_debugfs(struct regulator_dev *rdev)
+ 	}
+ 
+ 	rdev->debugfs = debugfs_create_dir(rname, debugfs_root);
+-	if (IS_ERR(rdev->debugfs)) {
+-		rdev_warn(rdev, "Failed to create debugfs directory\n");
+-		return;
+-	}
++	if (IS_ERR(rdev->debugfs))
++		rdev_dbg(rdev, "Failed to create debugfs directory\n");
+ 
+ 	debugfs_create_u32("use_count", 0444, rdev->debugfs,
+ 			   &rdev->use_count);
+@@ -5938,7 +5934,7 @@ static int __init regulator_init(void)
+ 
+ 	debugfs_root = debugfs_create_dir("regulator", NULL);
+ 	if (IS_ERR(debugfs_root))
+-		pr_warn("regulator: Failed to create debugfs directory\n");
++		pr_debug("regulator: Failed to create debugfs directory\n");
+ 
+ #ifdef CONFIG_DEBUG_FS
+ 	debugfs_create_file("supply_map", 0444, debugfs_root, NULL,
+diff --git a/drivers/rtc/rtc-st-lpc.c b/drivers/rtc/rtc-st-lpc.c
+index 7d53f7e2febcc..c4ea3f3f08844 100644
+--- a/drivers/rtc/rtc-st-lpc.c
++++ b/drivers/rtc/rtc-st-lpc.c
+@@ -228,7 +228,7 @@ static int st_rtc_probe(struct platform_device *pdev)
+ 	enable_irq_wake(rtc->irq);
+ 	disable_irq(rtc->irq);
+ 
+-	rtc->clk = clk_get(&pdev->dev, NULL);
++	rtc->clk = devm_clk_get(&pdev->dev, NULL);
+ 	if (IS_ERR(rtc->clk)) {
+ 		dev_err(&pdev->dev, "Unable to request clock\n");
+ 		return PTR_ERR(rtc->clk);
+diff --git a/drivers/s390/net/qeth_l3_sys.c b/drivers/s390/net/qeth_l3_sys.c
+index 997fbb7006a7c..316f8622f3ccb 100644
+--- a/drivers/s390/net/qeth_l3_sys.c
++++ b/drivers/s390/net/qeth_l3_sys.c
+@@ -652,7 +652,7 @@ static QETH_DEVICE_ATTR(vipa_add4, add4, 0644,
+ static ssize_t qeth_l3_dev_vipa_del4_store(struct device *dev,
+ 		struct device_attribute *attr, const char *buf, size_t count)
+ {
+-	return qeth_l3_vipa_store(dev, buf, true, count, QETH_PROT_IPV4);
++	return qeth_l3_vipa_store(dev, buf, false, count, QETH_PROT_IPV4);
+ }
+ 
+ static QETH_DEVICE_ATTR(vipa_del4, del4, 0200, NULL,
+diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c
+index fb6444d0409cf..211a25351e7d4 100644
+--- a/drivers/scsi/3w-xxxx.c
++++ b/drivers/scsi/3w-xxxx.c
+@@ -2308,8 +2308,10 @@ static int tw_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	TW_DISABLE_INTERRUPTS(tw_dev);
+ 
+ 	/* Initialize the card */
+-	if (tw_reset_sequence(tw_dev))
++	if (tw_reset_sequence(tw_dev)) {
++		retval = -EINVAL;
+ 		goto out_release_mem_region;
++	}
+ 
+ 	/* Set host specific parameters */
+ 	host->max_id = TW_MAX_UNITS;
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index f48ef47546f4d..b33cb1172f31d 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3042,9 +3042,8 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ 	 * addresses of our queues
+ 	 */
+ 	if (!qedf->p_cpuq) {
+-		status = -EINVAL;
+ 		QEDF_ERR(&qedf->dbg_ctx, "p_cpuq is NULL.\n");
+-		goto mem_alloc_failure;
++		return -EINVAL;
+ 	}
+ 
+ 	qedf->global_queues = kzalloc((sizeof(struct global_queue *)
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 61b9dc511d904..12e27ee8c5c73 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -2698,6 +2698,7 @@ static void
+ qla2x00_terminate_rport_io(struct fc_rport *rport)
+ {
+ 	fc_port_t *fcport = *(fc_port_t **)rport->dd_data;
++	scsi_qla_host_t *vha;
+ 
+ 	if (!fcport)
+ 		return;
+@@ -2707,9 +2708,12 @@ qla2x00_terminate_rport_io(struct fc_rport *rport)
+ 
+ 	if (test_bit(ABORT_ISP_ACTIVE, &fcport->vha->dpc_flags))
+ 		return;
++	vha = fcport->vha;
+ 
+ 	if (unlikely(pci_channel_offline(fcport->vha->hw->pdev))) {
+ 		qla2x00_abort_all_cmds(fcport->vha, DID_NO_CONNECT << 16);
++		qla2x00_eh_wait_for_pending_commands(fcport->vha, fcport->d_id.b24,
++			0, WAIT_TARGET);
+ 		return;
+ 	}
+ 	/*
+@@ -2724,6 +2728,15 @@ qla2x00_terminate_rport_io(struct fc_rport *rport)
+ 		else
+ 			qla2x00_port_logout(fcport->vha, fcport);
+ 	}
++
++	/* check for any straggling io left behind */
++	if (qla2x00_eh_wait_for_pending_commands(fcport->vha, fcport->d_id.b24, 0, WAIT_TARGET)) {
++		ql_log(ql_log_warn, vha, 0x300b,
++		       "IO not return.  Resetting. \n");
++		set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
++		qla2xxx_wake_dpc(vha);
++		qla2x00_wait_for_chip_reset(vha);
++	}
+ }
+ 
+ static int
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index 1fd292a6ac881..804cac4c34769 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -268,6 +268,10 @@ qla2x00_process_els(struct bsg_job *bsg_job)
+ 
+ 	if (bsg_request->msgcode == FC_BSG_RPT_ELS) {
+ 		rport = fc_bsg_to_rport(bsg_job);
++		if (!rport) {
++			rval = -ENOMEM;
++			goto done;
++		}
+ 		fcport = *(fc_port_t **) rport->dd_data;
+ 		host = rport_to_shost(rport);
+ 		vha = shost_priv(host);
+@@ -2541,6 +2545,8 @@ qla24xx_bsg_request(struct bsg_job *bsg_job)
+ 
+ 	if (bsg_request->msgcode == FC_BSG_RPT_ELS) {
+ 		rport = fc_bsg_to_rport(bsg_job);
++		if (!rport)
++			return ret;
+ 		host = rport_to_shost(rport);
+ 		vha = shost_priv(host);
+ 	} else {
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 6afce455b9d8d..06b0ad2b51bb4 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -639,7 +639,6 @@ typedef struct srb {
+ 	struct iocb_resource iores;
+ 	struct kref cmd_kref;	/* need to migrate ref_count over to this */
+ 	void *priv;
+-	wait_queue_head_t nvme_ls_waitq;
+ 	struct fc_port *fcport;
+ 	struct scsi_qla_host *vha;
+ 	unsigned int start_timer:1;
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 422ff67038d17..3d1a53ba86ac8 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -5107,7 +5107,7 @@ static void qla_get_login_template(scsi_qla_host_t *vha)
+ 	__be32 *q;
+ 
+ 	memset(ha->init_cb, 0, ha->init_cb_size);
+-	sz = min_t(int, sizeof(struct fc_els_flogi), ha->init_cb_size);
++	sz = min_t(int, sizeof(struct fc_els_csp), ha->init_cb_size);
+ 	rval = qla24xx_get_port_login_templ(vha, ha->init_cb_dma,
+ 					    ha->init_cb, sz);
+ 	if (rval != QLA_SUCCESS) {
+diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
+index e80e41b6c9e1d..7e8b59a0954bb 100644
+--- a/drivers/scsi/qla2xxx/qla_inline.h
++++ b/drivers/scsi/qla2xxx/qla_inline.h
+@@ -109,11 +109,13 @@ qla2x00_set_fcport_disc_state(fc_port_t *fcport, int state)
+ {
+ 	int old_val;
+ 	uint8_t shiftbits, mask;
++	uint8_t port_dstate_str_sz;
+ 
+ 	/* This will have to change when the max no. of states > 16 */
+ 	shiftbits = 4;
+ 	mask = (1 << shiftbits) - 1;
+ 
++	port_dstate_str_sz = sizeof(port_dstate_str) / sizeof(char *);
+ 	fcport->disc_state = state;
+ 	while (1) {
+ 		old_val = atomic_read(&fcport->shadow_disc_state);
+@@ -121,7 +123,8 @@ qla2x00_set_fcport_disc_state(fc_port_t *fcport, int state)
+ 		    old_val, (old_val << shiftbits) | state)) {
+ 			ql_dbg(ql_dbg_disc, fcport->vha, 0x2134,
+ 			    "FCPort %8phC disc_state transition: %s to %s - portid=%06x.\n",
+-			    fcport->port_name, port_dstate_str[old_val & mask],
++			    fcport->port_name, (old_val & mask) < port_dstate_str_sz ?
++				    port_dstate_str[old_val & mask] : "Unknown",
+ 			    port_dstate_str[state], fcport->d_id.b24);
+ 			return;
+ 		}
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index e54cc2a761dd4..54fc0afbc02ac 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -601,7 +601,8 @@ qla24xx_build_scsi_type_6_iocbs(srb_t *sp, struct cmd_type_6 *cmd_pkt,
+ 	put_unaligned_le32(COMMAND_TYPE_6, &cmd_pkt->entry_type);
+ 
+ 	/* No data transfer */
+-	if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE) {
++	if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE ||
++	    tot_dsds == 0) {
+ 		cmd_pkt->byte_count = cpu_to_le32(0);
+ 		return 0;
+ 	}
+@@ -3713,7 +3714,7 @@ qla2x00_start_sp(srb_t *sp)
+ 	spin_lock_irqsave(qp->qp_lock_ptr, flags);
+ 	pkt = __qla2x00_alloc_iocbs(sp->qpair, sp);
+ 	if (!pkt) {
+-		rval = EAGAIN;
++		rval = -EAGAIN;
+ 		ql_log(ql_log_warn, vha, 0x700c,
+ 		    "qla2x00_alloc_iocbs failed.\n");
+ 		goto done;
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 695dd89be3307..8b0c8f9bdef08 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -331,7 +331,6 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
+ 	if (rval != QLA_SUCCESS) {
+ 		ql_log(ql_log_warn, vha, 0x700e,
+ 		    "qla2x00_start_sp failed = %d\n", rval);
+-		wake_up(&sp->nvme_ls_waitq);
+ 		sp->priv = NULL;
+ 		priv->sp = NULL;
+ 		qla2x00_rel_sp(sp);
+@@ -590,7 +589,6 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
+ 	if (!sp)
+ 		return -EBUSY;
+ 
+-	init_waitqueue_head(&sp->nvme_ls_waitq);
+ 	kref_init(&sp->cmd_kref);
+ 	spin_lock_init(&priv->cmd_lock);
+ 	sp->priv = priv;
+@@ -608,7 +606,6 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
+ 	if (rval != QLA_SUCCESS) {
+ 		ql_log(ql_log_warn, vha, 0x212d,
+ 		    "qla2x00_start_nvme_mq failed = %d\n", rval);
+-		wake_up(&sp->nvme_ls_waitq);
+ 		sp->priv = NULL;
+ 		priv->sp = NULL;
+ 		qla2xxx_rel_qpair_sp(sp->qpair, sp);
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 38b8ff87ec0a7..cbc5af26303a3 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -4877,7 +4877,8 @@ struct scsi_qla_host *qla2x00_create_host(struct scsi_host_template *sht,
+ 	}
+ 	INIT_DELAYED_WORK(&vha->scan.scan_work, qla_scan_work_fn);
+ 
+-	sprintf(vha->host_str, "%s_%lu", QLA2XXX_DRIVER_NAME, vha->host_no);
++	snprintf(vha->host_str, sizeof(vha->host_str), "%s_%lu",
++		 QLA2XXX_DRIVER_NAME, vha->host_no);
+ 	ql_dbg(ql_dbg_init, vha, 0x0041,
+ 	    "Allocated the host=%p hw=%p vha=%p dev_name=%s",
+ 	    vha->host, vha->hw, vha,
+diff --git a/drivers/soc/amlogic/meson-secure-pwrc.c b/drivers/soc/amlogic/meson-secure-pwrc.c
+index fff92e2f39744..090a326664756 100644
+--- a/drivers/soc/amlogic/meson-secure-pwrc.c
++++ b/drivers/soc/amlogic/meson-secure-pwrc.c
+@@ -103,7 +103,7 @@ static struct meson_secure_pwrc_domain_desc a1_pwrc_domains[] = {
+ 	SEC_PD(ACODEC,	0),
+ 	SEC_PD(AUDIO,	0),
+ 	SEC_PD(OTP,	0),
+-	SEC_PD(DMA,	0),
++	SEC_PD(DMA,	GENPD_FLAG_ALWAYS_ON | GENPD_FLAG_IRQ_SAFE),
+ 	SEC_PD(SD_EMMC,	0),
+ 	SEC_PD(RAMA,	0),
+ 	/* SRAMB is used as ATF runtime memory, and should be always on */
+diff --git a/drivers/soc/fsl/qe/Kconfig b/drivers/soc/fsl/qe/Kconfig
+index 357c5800b112f..7afa796dbbb89 100644
+--- a/drivers/soc/fsl/qe/Kconfig
++++ b/drivers/soc/fsl/qe/Kconfig
+@@ -39,6 +39,7 @@ config QE_TDM
+ 
+ config QE_USB
+ 	bool
++	depends on QUICC_ENGINE
+ 	default y if USB_FSL_QE
+ 	help
+ 	  QE USB Controller support
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 766b00350e391..2c734ea0784b7 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -1369,13 +1369,9 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 		res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ 						   "mspi");
+ 
+-	if (res) {
+-		qspi->base[MSPI]  = devm_ioremap_resource(dev, res);
+-		if (IS_ERR(qspi->base[MSPI]))
+-			return PTR_ERR(qspi->base[MSPI]);
+-	} else {
+-		return 0;
+-	}
++	qspi->base[MSPI]  = devm_ioremap_resource(dev, res);
++	if (IS_ERR(qspi->base[MSPI]))
++		return PTR_ERR(qspi->base[MSPI]);
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "bspi");
+ 	if (res) {
+diff --git a/drivers/spi/spi-bcm63xx.c b/drivers/spi/spi-bcm63xx.c
+index 96d075e633f43..d36384fef0d71 100644
+--- a/drivers/spi/spi-bcm63xx.c
++++ b/drivers/spi/spi-bcm63xx.c
+@@ -126,7 +126,7 @@ enum bcm63xx_regs_spi {
+ 	SPI_MSG_DATA_SIZE,
+ };
+ 
+-#define BCM63XX_SPI_MAX_PREPEND		15
++#define BCM63XX_SPI_MAX_PREPEND		7
+ 
+ #define BCM63XX_SPI_MAX_CS		8
+ #define BCM63XX_SPI_BUS_NUM		0
+diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
+index 01ef79f15b024..be259c685cc80 100644
+--- a/drivers/spi/spi-geni-qcom.c
++++ b/drivers/spi/spi-geni-qcom.c
+@@ -32,7 +32,7 @@
+ #define CS_DEMUX_OUTPUT_SEL	GENMASK(3, 0)
+ 
+ #define SE_SPI_TRANS_CFG	0x25c
+-#define CS_TOGGLE		BIT(0)
++#define CS_TOGGLE		BIT(1)
+ 
+ #define SE_SPI_WORD_LEN		0x268
+ #define WORD_LEN_MSK		GENMASK(9, 0)
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_cmd.c b/drivers/staging/media/atomisp/pci/atomisp_cmd.c
+index 20c19e08968ea..613bd96202242 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_cmd.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_cmd.c
+@@ -5243,7 +5243,7 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
+ 	int (*configure_pp_input)(struct atomisp_sub_device *asd,
+ 				  unsigned int width, unsigned int height) =
+ 				      configure_pp_input_nop;
+-	u16 stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
++	u16 stream_index;
+ 	const struct atomisp_in_fmt_conv *fc;
+ 	int ret, i;
+ 
+@@ -5252,6 +5252,7 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
+ 			__func__, vdev->name);
+ 		return -EINVAL;
+ 	}
++	stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
+ 
+ 	v4l2_fh_init(&fh.vfh, vdev);
+ 
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+index c9ee85037644f..f0387486eb174 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+@@ -1198,7 +1198,7 @@ static int gmin_get_config_dsm_var(struct device *dev,
+ 	dev_info(dev, "found _DSM entry for '%s': %s\n", var,
+ 		 cur->string.pointer);
+ 	strscpy(out, cur->string.pointer, *out_len);
+-	*out_len = strlen(cur->string.pointer);
++	*out_len = strlen(out);
+ 
+ 	ACPI_FREE(obj);
+ 	return 0;
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
+index 8a0648fd7c813..4615e4cae718b 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
+@@ -1123,7 +1123,7 @@ int __atomisp_reqbufs(struct file *file, void *fh,
+ 	struct ia_css_frame *frame;
+ 	struct videobuf_vmalloc_memory *vm_mem;
+ 	u16 source_pad = atomisp_subdev_source_pad(vdev);
+-	u16 stream_id = atomisp_source_pad_to_stream_id(asd, source_pad);
++	u16 stream_id;
+ 	int ret = 0, i = 0;
+ 
+ 	if (!asd) {
+@@ -1131,6 +1131,7 @@ int __atomisp_reqbufs(struct file *file, void *fh,
+ 			__func__, vdev->name);
+ 		return -EINVAL;
+ 	}
++	stream_id = atomisp_source_pad_to_stream_id(asd, source_pad);
+ 
+ 	if (req->count == 0) {
+ 		mutex_lock(&pipe->capq.vb_lock);
+diff --git a/drivers/thermal/mtk_thermal.c b/drivers/thermal/mtk_thermal.c
+index 9fe169dbed887..0bd7aa564bc25 100644
+--- a/drivers/thermal/mtk_thermal.c
++++ b/drivers/thermal/mtk_thermal.c
+@@ -1026,12 +1026,7 @@ static int mtk_thermal_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	auxadc_base = devm_of_iomap(&pdev->dev, auxadc, 0, NULL);
+-	if (IS_ERR(auxadc_base)) {
+-		of_node_put(auxadc);
+-		return PTR_ERR(auxadc_base);
+-	}
+-
++	auxadc_base = of_iomap(auxadc, 0);
+ 	auxadc_phys_base = of_get_phys_base(auxadc);
+ 
+ 	of_node_put(auxadc);
+@@ -1047,12 +1042,7 @@ static int mtk_thermal_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	apmixed_base = devm_of_iomap(&pdev->dev, apmixedsys, 0, NULL);
+-	if (IS_ERR(apmixed_base)) {
+-		of_node_put(apmixedsys);
+-		return PTR_ERR(apmixed_base);
+-	}
+-
++	apmixed_base = of_iomap(apmixedsys, 0);
+ 	apmixed_phys_base = of_get_phys_base(apmixedsys);
+ 
+ 	of_node_put(apmixedsys);
+diff --git a/drivers/thermal/sun8i_thermal.c b/drivers/thermal/sun8i_thermal.c
+index f8b13071a6f42..e053b06280172 100644
+--- a/drivers/thermal/sun8i_thermal.c
++++ b/drivers/thermal/sun8i_thermal.c
+@@ -318,6 +318,11 @@ out:
+ 	return ret;
+ }
+ 
++static void sun8i_ths_reset_control_assert(void *data)
++{
++	reset_control_assert(data);
++}
++
+ static int sun8i_ths_resource_init(struct ths_device *tmdev)
+ {
+ 	struct device *dev = tmdev->dev;
+@@ -338,47 +343,35 @@ static int sun8i_ths_resource_init(struct ths_device *tmdev)
+ 		if (IS_ERR(tmdev->reset))
+ 			return PTR_ERR(tmdev->reset);
+ 
+-		tmdev->bus_clk = devm_clk_get(&pdev->dev, "bus");
++		ret = reset_control_deassert(tmdev->reset);
++		if (ret)
++			return ret;
++
++		ret = devm_add_action_or_reset(dev, sun8i_ths_reset_control_assert,
++					       tmdev->reset);
++		if (ret)
++			return ret;
++
++		tmdev->bus_clk = devm_clk_get_enabled(&pdev->dev, "bus");
+ 		if (IS_ERR(tmdev->bus_clk))
+ 			return PTR_ERR(tmdev->bus_clk);
+ 	}
+ 
+ 	if (tmdev->chip->has_mod_clk) {
+-		tmdev->mod_clk = devm_clk_get(&pdev->dev, "mod");
++		tmdev->mod_clk = devm_clk_get_enabled(&pdev->dev, "mod");
+ 		if (IS_ERR(tmdev->mod_clk))
+ 			return PTR_ERR(tmdev->mod_clk);
+ 	}
+ 
+-	ret = reset_control_deassert(tmdev->reset);
+-	if (ret)
+-		return ret;
+-
+-	ret = clk_prepare_enable(tmdev->bus_clk);
+-	if (ret)
+-		goto assert_reset;
+-
+ 	ret = clk_set_rate(tmdev->mod_clk, 24000000);
+ 	if (ret)
+-		goto bus_disable;
+-
+-	ret = clk_prepare_enable(tmdev->mod_clk);
+-	if (ret)
+-		goto bus_disable;
++		return ret;
+ 
+ 	ret = sun8i_ths_calibrate(tmdev);
+ 	if (ret)
+-		goto mod_disable;
++		return ret;
+ 
+ 	return 0;
+-
+-mod_disable:
+-	clk_disable_unprepare(tmdev->mod_clk);
+-bus_disable:
+-	clk_disable_unprepare(tmdev->bus_clk);
+-assert_reset:
+-	reset_control_assert(tmdev->reset);
+-
+-	return ret;
+ }
+ 
+ static int sun8i_h3_thermal_init(struct ths_device *tmdev)
+@@ -529,17 +522,6 @@ static int sun8i_ths_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static int sun8i_ths_remove(struct platform_device *pdev)
+-{
+-	struct ths_device *tmdev = platform_get_drvdata(pdev);
+-
+-	clk_disable_unprepare(tmdev->mod_clk);
+-	clk_disable_unprepare(tmdev->bus_clk);
+-	reset_control_assert(tmdev->reset);
+-
+-	return 0;
+-}
+-
+ static const struct ths_thermal_chip sun8i_a83t_ths = {
+ 	.sensor_num = 3,
+ 	.scale = 705,
+@@ -641,7 +623,6 @@ MODULE_DEVICE_TABLE(of, of_ths_match);
+ 
+ static struct platform_driver ths_driver = {
+ 	.probe = sun8i_ths_probe,
+-	.remove = sun8i_ths_remove,
+ 	.driver = {
+ 		.name = "sun8i-thermal",
+ 		.of_match_table = of_ths_match,
+diff --git a/drivers/tty/serial/8250/8250.h b/drivers/tty/serial/8250/8250.h
+index 0771cd2265813..61b11490ae5be 100644
+--- a/drivers/tty/serial/8250/8250.h
++++ b/drivers/tty/serial/8250/8250.h
+@@ -87,7 +87,6 @@ struct serial8250_config {
+ #define UART_BUG_TXEN	(1 << 1)	/* UART has buggy TX IIR status */
+ #define UART_BUG_NOMSR	(1 << 2)	/* UART has buggy MSR status bits (Au1x00) */
+ #define UART_BUG_THRE	(1 << 3)	/* UART has buggy THRE reassertion */
+-#define UART_BUG_PARITY	(1 << 4)	/* UART mishandles parity if FIFO enabled */
+ #define UART_BUG_TXRACE	(1 << 5)	/* UART Tx fails to set remote DR */
+ 
+ 
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 483fff3a95c9e..e26ac3f42e05c 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -653,6 +653,8 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 	if ((lsr & UART_LSR_OE) && up->overrun_backoff_time_ms > 0) {
+ 		unsigned long delay;
+ 
++		/* Synchronize UART_IER access against the console. */
++		spin_lock(&port->lock);
+ 		up->ier = port->serial_in(port, UART_IER);
+ 		if (up->ier & (UART_IER_RLSI | UART_IER_RDI)) {
+ 			port->ops->stop_rx(port);
+@@ -662,6 +664,7 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 			 */
+ 			cancel_delayed_work(&up->overrun_backoff);
+ 		}
++		spin_unlock(&port->lock);
+ 
+ 		delay = msecs_to_jiffies(up->overrun_backoff_time_ms);
+ 		schedule_delayed_work(&up->overrun_backoff, delay);
+@@ -1469,7 +1472,9 @@ static int omap8250_probe(struct platform_device *pdev)
+ err:
+ 	pm_runtime_dont_use_autosuspend(&pdev->dev);
+ 	pm_runtime_put_sync(&pdev->dev);
++	flush_work(&priv->qos_work);
+ 	pm_runtime_disable(&pdev->dev);
++	cpu_latency_qos_remove_request(&priv->pm_qos_request);
+ 	return ret;
+ }
+ 
+@@ -1516,25 +1521,35 @@ static int omap8250_suspend(struct device *dev)
+ {
+ 	struct omap8250_priv *priv = dev_get_drvdata(dev);
+ 	struct uart_8250_port *up = serial8250_get_port(priv->line);
++	int err;
+ 
+ 	serial8250_suspend_port(priv->line);
+ 
+-	pm_runtime_get_sync(dev);
++	err = pm_runtime_resume_and_get(dev);
++	if (err)
++		return err;
+ 	if (!device_may_wakeup(dev))
+ 		priv->wer = 0;
+ 	serial_out(up, UART_OMAP_WER, priv->wer);
+-	pm_runtime_mark_last_busy(dev);
+-	pm_runtime_put_autosuspend(dev);
+-
++	err = pm_runtime_force_suspend(dev);
+ 	flush_work(&priv->qos_work);
+-	return 0;
++
++	return err;
+ }
+ 
+ static int omap8250_resume(struct device *dev)
+ {
+ 	struct omap8250_priv *priv = dev_get_drvdata(dev);
++	int err;
+ 
++	err = pm_runtime_force_resume(dev);
++	if (err)
++		return err;
+ 	serial8250_resume_port(priv->line);
++	/* Paired with pm_runtime_resume_and_get() in omap8250_suspend() */
++	pm_runtime_mark_last_busy(dev);
++	pm_runtime_put_autosuspend(dev);
++
+ 	return 0;
+ }
+ #else
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 9617f7ad332d1..fd857d4343266 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -1044,14 +1044,6 @@ static int pci_oxsemi_tornado_init(struct pci_dev *dev)
+ 	return number_uarts;
+ }
+ 
+-static int pci_asix_setup(struct serial_private *priv,
+-		  const struct pciserial_board *board,
+-		  struct uart_8250_port *port, int idx)
+-{
+-	port->bugs |= UART_BUG_PARITY;
+-	return pci_default_setup(priv, board, port, idx);
+-}
+-
+ /* Quatech devices have their own extra interface features */
+ 
+ struct quatech_feature {
+@@ -1874,7 +1866,6 @@ pci_moxa_setup(struct serial_private *priv,
+ #define PCI_DEVICE_ID_WCH_CH355_4S	0x7173
+ #define PCI_VENDOR_ID_AGESTAR		0x5372
+ #define PCI_DEVICE_ID_AGESTAR_9375	0x6872
+-#define PCI_VENDOR_ID_ASIX		0x9710
+ #define PCI_DEVICE_ID_BROADCOM_TRUMANAGE 0x160a
+ #define PCI_DEVICE_ID_AMCC_ADDIDATA_APCI7800 0x818e
+ 
+@@ -2684,16 +2675,6 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
+ 		.exit		= pci_wch_ch38x_exit,
+ 		.setup          = pci_wch_ch38x_setup,
+ 	},
+-	/*
+-	 * ASIX devices with FIFO bug
+-	 */
+-	{
+-		.vendor		= PCI_VENDOR_ID_ASIX,
+-		.device		= PCI_ANY_ID,
+-		.subvendor	= PCI_ANY_ID,
+-		.subdevice	= PCI_ANY_ID,
+-		.setup		= pci_asix_setup,
+-	},
+ 	/*
+ 	 * Broadcom TruManage (NetXtreme)
+ 	 */
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index b19908779e3b8..432a438929e64 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2577,11 +2577,8 @@ static unsigned char serial8250_compute_lcr(struct uart_8250_port *up,
+ 
+ 	if (c_cflag & CSTOPB)
+ 		cval |= UART_LCR_STOP;
+-	if (c_cflag & PARENB) {
++	if (c_cflag & PARENB)
+ 		cval |= UART_LCR_PARITY;
+-		if (up->bugs & UART_BUG_PARITY)
+-			up->fifo_bug = true;
+-	}
+ 	if (!(c_cflag & PARODD))
+ 		cval |= UART_LCR_EPAR;
+ #ifdef CMSPAR
+@@ -2744,8 +2741,7 @@ serial8250_do_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	up->lcr = cval;					/* Save computed LCR */
+ 
+ 	if (up->capabilities & UART_CAP_FIFO && port->fifosize > 1) {
+-		/* NOTE: If fifo_bug is not set, a user can set RX_trigger. */
+-		if ((baud < 2400 && !up->dma) || up->fifo_bug) {
++		if (baud < 2400 && !up->dma) {
+ 			up->fcr &= ~UART_FCR_TRIGGER_MASK;
+ 			up->fcr |= UART_FCR_TRIGGER_1;
+ 		}
+@@ -3081,8 +3077,7 @@ static int do_set_rxtrig(struct tty_port *port, unsigned char bytes)
+ 	struct uart_8250_port *up = up_to_u8250p(uport);
+ 	int rxtrig;
+ 
+-	if (!(up->capabilities & UART_CAP_FIFO) || uport->fifosize <= 1 ||
+-	    up->fifo_bug)
++	if (!(up->capabilities & UART_CAP_FIFO) || uport->fifosize <= 1)
+ 		return -EINVAL;
+ 
+ 	rxtrig = bytes_to_fcr_rxtrig(up, bytes);
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 02fd0e79c8f70..a1249bed66369 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -873,11 +873,11 @@ static void atmel_complete_tx_dma(void *arg)
+ 
+ 	port->icount.tx += atmel_port->tx_len;
+ 
+-	spin_lock_irq(&atmel_port->lock_tx);
++	spin_lock(&atmel_port->lock_tx);
+ 	async_tx_ack(atmel_port->desc_tx);
+ 	atmel_port->cookie_tx = -EINVAL;
+ 	atmel_port->desc_tx = NULL;
+-	spin_unlock_irq(&atmel_port->lock_tx);
++	spin_unlock(&atmel_port->lock_tx);
+ 
+ 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ 		uart_write_wakeup(port);
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index ca22a11258217..a4cf00756681a 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2589,6 +2589,7 @@ OF_EARLYCON_DECLARE(lpuart, "fsl,vf610-lpuart", lpuart_early_console_setup);
+ OF_EARLYCON_DECLARE(lpuart32, "fsl,ls1021a-lpuart", lpuart32_early_console_setup);
+ OF_EARLYCON_DECLARE(lpuart32, "fsl,ls1028a-lpuart", ls1028a_early_console_setup);
+ OF_EARLYCON_DECLARE(lpuart32, "fsl,imx7ulp-lpuart", lpuart32_imx_early_console_setup);
++OF_EARLYCON_DECLARE(lpuart32, "fsl,imx8ulp-lpuart", lpuart32_imx_early_console_setup);
+ OF_EARLYCON_DECLARE(lpuart32, "fsl,imx8qxp-lpuart", lpuart32_imx_early_console_setup);
+ EARLYCON_DECLARE(lpuart, lpuart_early_console_setup);
+ EARLYCON_DECLARE(lpuart32, lpuart32_early_console_setup);
+diff --git a/drivers/tty/serial/samsung_tty.c b/drivers/tty/serial/samsung_tty.c
+index 263c33260d8a8..fa5b1321d9b15 100644
+--- a/drivers/tty/serial/samsung_tty.c
++++ b/drivers/tty/serial/samsung_tty.c
+@@ -1313,8 +1313,12 @@ static unsigned int s3c24xx_serial_getclk(struct s3c24xx_uart_port *ourport,
+ 			continue;
+ 
+ 		rate = clk_get_rate(clk);
+-		if (!rate)
++		if (!rate) {
++			dev_err(ourport->port.dev,
++				"Failed to get clock rate for %s.\n", clkname);
++			clk_put(clk);
+ 			continue;
++		}
+ 
+ 		if (ourport->info->has_divslot) {
+ 			unsigned long div = rate / req_baud;
+@@ -1340,10 +1344,18 @@ static unsigned int s3c24xx_serial_getclk(struct s3c24xx_uart_port *ourport,
+ 			calc_deviation = -calc_deviation;
+ 
+ 		if (calc_deviation < deviation) {
++			/*
++			 * If we find a better clk, release the previous one, if
++			 * any.
++			 */
++			if (!IS_ERR(*best_clk))
++				clk_put(*best_clk);
+ 			*best_clk = clk;
+ 			best_quot = quot;
+ 			*clk_num = cnt;
+ 			deviation = calc_deviation;
++		} else {
++			clk_put(clk);
+ 		}
+ 	}
+ 
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index 2fe29319de441..1b95035d179f3 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -734,6 +734,7 @@ static int driver_resume(struct usb_interface *intf)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_PM
+ /* The following routines apply to the entire device, not interfaces */
+ void usbfs_notify_suspend(struct usb_device *udev)
+ {
+@@ -752,6 +753,7 @@ void usbfs_notify_resume(struct usb_device *udev)
+ 	}
+ 	mutex_unlock(&usbfs_mutex);
+ }
++#endif
+ 
+ struct usb_driver usbfs_driver = {
+ 	.name =		"usbfs",
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index d0f9b7c296b0d..69ec06efd7f25 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -805,7 +805,7 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ 
+ 	ret = dwc3_meson_g12a_otg_init(pdev, priv);
+ 	if (ret)
+-		goto err_phys_power;
++		goto err_plat_depopulate;
+ 
+ 	pm_runtime_set_active(dev);
+ 	pm_runtime_enable(dev);
+@@ -813,6 +813,9 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++err_plat_depopulate:
++	of_platform_depopulate(dev);
++
+ err_phys_power:
+ 	for (i = 0 ; i < PHY_COUNT ; ++i)
+ 		phy_power_off(priv->phys[i]);
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index dac13fe978110..ec8c43231746e 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -722,6 +722,7 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 	struct device		*dev = &pdev->dev;
+ 	struct dwc3_qcom	*qcom;
+ 	struct resource		*res, *parent_res = NULL;
++	struct resource		local_res;
+ 	int			ret, i;
+ 	bool			ignore_pipe_clk;
+ 
+@@ -772,9 +773,8 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 	if (np) {
+ 		parent_res = res;
+ 	} else {
+-		parent_res = kmemdup(res, sizeof(struct resource), GFP_KERNEL);
+-		if (!parent_res)
+-			return -ENOMEM;
++		memcpy(&local_res, res, sizeof(struct resource));
++		parent_res = &local_res;
+ 
+ 		parent_res->start = res->start +
+ 			qcom->acpi_pdata->qscratch_base_offset;
+@@ -786,9 +786,10 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 			if (IS_ERR_OR_NULL(qcom->urs_usb)) {
+ 				dev_err(dev, "failed to create URS USB platdev\n");
+ 				if (!qcom->urs_usb)
+-					return -ENODEV;
++					ret = -ENODEV;
+ 				else
+-					return PTR_ERR(qcom->urs_usb);
++					ret = PTR_ERR(qcom->urs_usb);
++				goto clk_disable;
+ 			}
+ 		}
+ 	}
+@@ -869,10 +870,14 @@ reset_assert:
+ static int dwc3_qcom_remove(struct platform_device *pdev)
+ {
+ 	struct dwc3_qcom *qcom = platform_get_drvdata(pdev);
++	struct device_node *np = pdev->dev.of_node;
+ 	struct device *dev = &pdev->dev;
+ 	int i;
+ 
+-	of_platform_depopulate(dev);
++	if (np)
++		of_platform_depopulate(&pdev->dev);
++	else
++		platform_device_put(pdev);
+ 
+ 	for (i = qcom->num_clocks - 1; i >= 0; i--) {
+ 		clk_disable_unprepare(qcom->clks[i]);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 164910237669f..221738b644269 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2215,7 +2215,9 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 	ret = pm_runtime_get_sync(dwc->dev);
+ 	if (!ret || ret < 0) {
+ 		pm_runtime_put(dwc->dev);
+-		return 0;
++		if (ret < 0)
++			pm_runtime_set_suspended(dwc->dev);
++		return ret;
+ 	}
+ 
+ 	if (dwc->pullups_connected == is_on) {
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index 7b54e814aefb1..3b5a6430e2418 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -1421,10 +1421,19 @@ EXPORT_SYMBOL_GPL(gserial_disconnect);
+ 
+ void gserial_suspend(struct gserial *gser)
+ {
+-	struct gs_port	*port = gser->ioport;
++	struct gs_port	*port;
+ 	unsigned long	flags;
+ 
+-	spin_lock_irqsave(&port->port_lock, flags);
++	spin_lock_irqsave(&serial_port_lock, flags);
++	port = gser->ioport;
++
++	if (!port) {
++		spin_unlock_irqrestore(&serial_port_lock, flags);
++		return;
++	}
++
++	spin_lock(&port->port_lock);
++	spin_unlock(&serial_port_lock);
+ 	port->suspended = true;
+ 	spin_unlock_irqrestore(&port->port_lock, flags);
+ }
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index a8a9addb4d253..390bdf823e088 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -2146,7 +2146,7 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
+ {
+ 	u32 temp, port_offset, port_count;
+ 	int i;
+-	u8 major_revision, minor_revision;
++	u8 major_revision, minor_revision, tmp_minor_revision;
+ 	struct xhci_hub *rhub;
+ 	struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
+ 	struct xhci_port_cap *port_cap;
+@@ -2166,6 +2166,15 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
+ 		 */
+ 		if (minor_revision > 0x00 && minor_revision < 0x10)
+ 			minor_revision <<= 4;
++		/*
++		 * Some zhaoxin's xHCI controller that follow usb3.1 spec
++		 * but only support Gen1.
++		 */
++		if (xhci->quirks & XHCI_ZHAOXIN_HOST) {
++			tmp_minor_revision = minor_revision;
++			minor_revision = 0;
++		}
++
+ 	} else if (major_revision <= 0x02) {
+ 		rhub = &xhci->usb2_rhub;
+ 	} else {
+@@ -2175,10 +2184,6 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
+ 		/* Ignoring port protocol we can't understand. FIXME */
+ 		return;
+ 	}
+-	rhub->maj_rev = XHCI_EXT_PORT_MAJOR(temp);
+-
+-	if (rhub->min_rev < minor_revision)
+-		rhub->min_rev = minor_revision;
+ 
+ 	/* Port offset and count in the third dword, see section 7.2 */
+ 	temp = readl(addr + 2);
+@@ -2197,8 +2202,6 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
+ 	if (xhci->num_port_caps > max_caps)
+ 		return;
+ 
+-	port_cap->maj_rev = major_revision;
+-	port_cap->min_rev = minor_revision;
+ 	port_cap->psi_count = XHCI_EXT_PORT_PSIC(temp);
+ 
+ 	if (port_cap->psi_count) {
+@@ -2219,6 +2222,11 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
+ 				  XHCI_EXT_PORT_PSIV(port_cap->psi[i - 1])))
+ 				port_cap->psi_uid_count++;
+ 
++			if (xhci->quirks & XHCI_ZHAOXIN_HOST &&
++			    major_revision == 0x03 &&
++			    XHCI_EXT_PORT_PSIV(port_cap->psi[i]) >= 5)
++				minor_revision = tmp_minor_revision;
++
+ 			xhci_dbg(xhci, "PSIV:%d PSIE:%d PLT:%d PFD:%d LP:%d PSIM:%d\n",
+ 				  XHCI_EXT_PORT_PSIV(port_cap->psi[i]),
+ 				  XHCI_EXT_PORT_PSIE(port_cap->psi[i]),
+@@ -2228,6 +2236,15 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
+ 				  XHCI_EXT_PORT_PSIM(port_cap->psi[i]));
+ 		}
+ 	}
++
++	rhub->maj_rev = major_revision;
++
++	if (rhub->min_rev < minor_revision)
++		rhub->min_rev = minor_revision;
++
++	port_cap->maj_rev = major_revision;
++	port_cap->min_rev = minor_revision;
++
+ 	/* cache usb2 port capabilities */
+ 	if (major_revision < 0x03 && xhci->num_ext_caps < max_caps)
+ 		xhci->ext_caps[xhci->num_ext_caps++] = temp;
+@@ -2472,8 +2489,12 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ 	 * and our use of dma addresses in the trb_address_map radix tree needs
+ 	 * TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.
+ 	 */
+-	xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+-			TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
++	if (xhci->quirks & XHCI_ZHAOXIN_TRB_FETCH)
++		xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
++				TRB_SEGMENT_SIZE * 2, TRB_SEGMENT_SIZE * 2, xhci->page_size * 2);
++	else
++		xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
++				TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
+ 
+ 	/* See Table 46 and Note on Figure 55 */
+ 	xhci->device_pool = dma_pool_create("xHCI input/output contexts", dev,
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index aff65cefead2f..8034e643a4afd 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -330,6 +330,18 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	     pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_4))
+ 		xhci->quirks |= XHCI_NO_SOFT_RETRY;
+ 
++	if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) {
++		xhci->quirks |= XHCI_ZHAOXIN_HOST;
++
++		if (pdev->device == 0x9202) {
++			xhci->quirks |= XHCI_RESET_ON_RESUME;
++			xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH;
++		}
++
++		if (pdev->device == 0x9203)
++			xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH;
++	}
++
+ 	/* xHC spec requires PCI devices to support D3hot and D3cold */
+ 	if (xhci->hci_version >= 0x120)
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index c7749f6e34745..6a7c05940e661 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1895,6 +1895,8 @@ struct xhci_hcd {
+ #define XHCI_EP_CTX_BROKEN_DCS	BIT_ULL(42)
+ #define XHCI_SUSPEND_RESUME_CLKS	BIT_ULL(43)
+ #define XHCI_RESET_TO_DEFAULT	BIT_ULL(44)
++#define XHCI_ZHAOXIN_TRB_FETCH	BIT_ULL(45)
++#define XHCI_ZHAOXIN_HOST	BIT_ULL(46)
+ 
+ 	unsigned int		num_active_eps;
+ 	unsigned int		limit_active_eps;
+diff --git a/drivers/usb/phy/phy-tahvo.c b/drivers/usb/phy/phy-tahvo.c
+index a3e043e3e4aae..d0672b6712985 100644
+--- a/drivers/usb/phy/phy-tahvo.c
++++ b/drivers/usb/phy/phy-tahvo.c
+@@ -395,7 +395,7 @@ static int tahvo_usb_probe(struct platform_device *pdev)
+ 
+ 	tu->irq = ret = platform_get_irq(pdev, 0);
+ 	if (ret < 0)
+-		return ret;
++		goto err_remove_phy;
+ 	ret = request_threaded_irq(tu->irq, NULL, tahvo_usb_vbus_interrupt,
+ 				   IRQF_ONESHOT,
+ 				   "tahvo-vbus", tu);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 243f97a307793..625d9dc776bed 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1151,6 +1151,10 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x90fa),
+ 	  .driver_info = RSVD(3) },
+ 	/* u-blox products */
++	{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1311) },	/* u-blox LARA-R6 01B */
++	{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1312),		/* u-blox LARA-R6 01B (RMNET) */
++	  .driver_info = RSVD(4) },
++	{ USB_DEVICE_INTERFACE_CLASS(UBLOX_VENDOR_ID, 0x1313, 0xff) },	/* u-blox LARA-R6 01B (ECM) */
+ 	{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1341) },	/* u-blox LARA-L6 */
+ 	{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1342),		/* u-blox LARA-L6 (RMNET) */
+ 	  .driver_info = RSVD(4) },
+diff --git a/drivers/video/fbdev/au1200fb.c b/drivers/video/fbdev/au1200fb.c
+index a8a0a448cdb5e..80f54111baec1 100644
+--- a/drivers/video/fbdev/au1200fb.c
++++ b/drivers/video/fbdev/au1200fb.c
+@@ -1732,6 +1732,9 @@ static int au1200fb_drv_probe(struct platform_device *dev)
+ 
+ 	/* Now hook interrupt too */
+ 	irq = platform_get_irq(dev, 0);
++	if (irq < 0)
++		return irq;
++
+ 	ret = request_irq(irq, au1200fb_handle_irq,
+ 			  IRQF_SHARED, "lcd", (void *)dev);
+ 	if (ret) {
+diff --git a/drivers/video/fbdev/imsttfb.c b/drivers/video/fbdev/imsttfb.c
+index e04411701ec85..1b2fb8ed76237 100644
+--- a/drivers/video/fbdev/imsttfb.c
++++ b/drivers/video/fbdev/imsttfb.c
+@@ -1346,7 +1346,7 @@ static const struct fb_ops imsttfb_ops = {
+ 	.fb_ioctl 	= imsttfb_ioctl,
+ };
+ 
+-static void init_imstt(struct fb_info *info)
++static int init_imstt(struct fb_info *info)
+ {
+ 	struct imstt_par *par = info->par;
+ 	__u32 i, tmp, *ip, *end;
+@@ -1419,7 +1419,7 @@ static void init_imstt(struct fb_info *info)
+ 	    || !(compute_imstt_regvals(par, info->var.xres, info->var.yres))) {
+ 		printk("imsttfb: %ux%ux%u not supported\n", info->var.xres, info->var.yres, info->var.bits_per_pixel);
+ 		framebuffer_release(info);
+-		return;
++		return -ENODEV;
+ 	}
+ 
+ 	sprintf(info->fix.id, "IMS TT (%s)", par->ramdac == IBM ? "IBM" : "TVP");
+@@ -1455,12 +1455,13 @@ static void init_imstt(struct fb_info *info)
+ 
+ 	if (register_framebuffer(info) < 0) {
+ 		framebuffer_release(info);
+-		return;
++		return -ENODEV;
+ 	}
+ 
+ 	tmp = (read_reg_le32(par->dc_regs, SSTATUS) & 0x0f00) >> 8;
+ 	fb_info(info, "%s frame buffer; %uMB vram; chip version %u\n",
+ 		info->fix.id, info->fix.smem_len >> 20, tmp);
++	return 0;
+ }
+ 
+ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+@@ -1469,6 +1470,7 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct imstt_par *par;
+ 	struct fb_info *info;
+ 	struct device_node *dp;
++	int ret = -ENOMEM;
+ 	
+ 	dp = pci_device_to_OF_node(pdev);
+ 	if(dp)
+@@ -1504,23 +1506,37 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		default:
+ 			printk(KERN_INFO "imsttfb: Device 0x%x unknown, "
+ 					 "contact maintainer.\n", pdev->device);
+-			release_mem_region(addr, size);
+-			framebuffer_release(info);
+-			return -ENODEV;
++			ret = -ENODEV;
++			goto error;
+ 	}
+ 
+ 	info->fix.smem_start = addr;
+ 	info->screen_base = (__u8 *)ioremap(addr, par->ramdac == IBM ?
+ 					    0x400000 : 0x800000);
++	if (!info->screen_base)
++		goto error;
+ 	info->fix.mmio_start = addr + 0x800000;
+ 	par->dc_regs = ioremap(addr + 0x800000, 0x1000);
++	if (!par->dc_regs)
++		goto error;
+ 	par->cmap_regs_phys = addr + 0x840000;
+ 	par->cmap_regs = (__u8 *)ioremap(addr + 0x840000, 0x1000);
++	if (!par->cmap_regs)
++		goto error;
+ 	info->pseudo_palette = par->palette;
+-	init_imstt(info);
+-
+-	pci_set_drvdata(pdev, info);
+-	return 0;
++	ret = init_imstt(info);
++	if (!ret)
++		pci_set_drvdata(pdev, info);
++	return ret;
++
++error:
++	if (par->dc_regs)
++		iounmap(par->dc_regs);
++	if (info->screen_base)
++		iounmap(info->screen_base);
++	release_mem_region(addr, size);
++	framebuffer_release(info);
++	return ret;
+ }
+ 
+ static void imsttfb_remove(struct pci_dev *pdev)
+diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
+index 564bd0407ed81..d663e080b1571 100644
+--- a/drivers/video/fbdev/imxfb.c
++++ b/drivers/video/fbdev/imxfb.c
+@@ -602,10 +602,10 @@ static int imxfb_activate_var(struct fb_var_screeninfo *var, struct fb_info *inf
+ 	if (var->hsync_len < 1    || var->hsync_len > 64)
+ 		printk(KERN_ERR "%s: invalid hsync_len %d\n",
+ 			info->fix.id, var->hsync_len);
+-	if (var->left_margin > 255)
++	if (var->left_margin < 3  || var->left_margin > 255)
+ 		printk(KERN_ERR "%s: invalid left_margin %d\n",
+ 			info->fix.id, var->left_margin);
+-	if (var->right_margin > 255)
++	if (var->right_margin < 1 || var->right_margin > 255)
+ 		printk(KERN_ERR "%s: invalid right_margin %d\n",
+ 			info->fix.id, var->right_margin);
+ 	if (var->yres < 1 || var->yres > ymax_mask)
+diff --git a/drivers/video/fbdev/omap/lcd_mipid.c b/drivers/video/fbdev/omap/lcd_mipid.c
+index a75ae0c9b14c7..d1cd8785d011d 100644
+--- a/drivers/video/fbdev/omap/lcd_mipid.c
++++ b/drivers/video/fbdev/omap/lcd_mipid.c
+@@ -563,11 +563,15 @@ static int mipid_spi_probe(struct spi_device *spi)
+ 
+ 	r = mipid_detect(md);
+ 	if (r < 0)
+-		return r;
++		goto free_md;
+ 
+ 	omapfb_register_panel(&md->panel);
+ 
+ 	return 0;
++
++free_md:
++	kfree(md);
++	return r;
+ }
+ 
+ static int mipid_spi_remove(struct spi_device *spi)
+diff --git a/drivers/w1/slaves/w1_therm.c b/drivers/w1/slaves/w1_therm.c
+index 6546d029c7fd6..3888643a22f60 100644
+--- a/drivers/w1/slaves/w1_therm.c
++++ b/drivers/w1/slaves/w1_therm.c
+@@ -1094,29 +1094,26 @@ static int convert_t(struct w1_slave *sl, struct therm_info *info)
+ 
+ 			w1_write_8(dev_master, W1_CONVERT_TEMP);
+ 
+-			if (strong_pullup) { /*some device need pullup */
++			if (SLAVE_FEATURES(sl) & W1_THERM_POLL_COMPLETION) {
++				ret = w1_poll_completion(dev_master, W1_POLL_CONVERT_TEMP);
++				if (ret) {
++					dev_dbg(&sl->dev, "%s: Timeout\n", __func__);
++					goto mt_unlock;
++				}
++				mutex_unlock(&dev_master->bus_mutex);
++			} else if (!strong_pullup) { /*no device need pullup */
+ 				sleep_rem = msleep_interruptible(t_conv);
+ 				if (sleep_rem != 0) {
+ 					ret = -EINTR;
+ 					goto mt_unlock;
+ 				}
+ 				mutex_unlock(&dev_master->bus_mutex);
+-			} else { /*no device need pullup */
+-				if (SLAVE_FEATURES(sl) & W1_THERM_POLL_COMPLETION) {
+-					ret = w1_poll_completion(dev_master, W1_POLL_CONVERT_TEMP);
+-					if (ret) {
+-						dev_dbg(&sl->dev, "%s: Timeout\n", __func__);
+-						goto mt_unlock;
+-					}
+-					mutex_unlock(&dev_master->bus_mutex);
+-				} else {
+-					/* Fixed delay */
+-					mutex_unlock(&dev_master->bus_mutex);
+-					sleep_rem = msleep_interruptible(t_conv);
+-					if (sleep_rem != 0) {
+-						ret = -EINTR;
+-						goto dec_refcnt;
+-					}
++			} else { /*some device need pullup */
++				mutex_unlock(&dev_master->bus_mutex);
++				sleep_rem = msleep_interruptible(t_conv);
++				if (sleep_rem != 0) {
++					ret = -EINTR;
++					goto dec_refcnt;
+ 				}
+ 			}
+ 			ret = read_scratchpad(sl, info);
+diff --git a/drivers/w1/w1.c b/drivers/w1/w1.c
+index 15842377c8d2c..1c1a9438f4b6b 100644
+--- a/drivers/w1/w1.c
++++ b/drivers/w1/w1.c
+@@ -1228,10 +1228,10 @@ err_out_exit_init:
+ 
+ static void __exit w1_fini(void)
+ {
+-	struct w1_master *dev;
++	struct w1_master *dev, *n;
+ 
+ 	/* Set netlink removal messages and some cleanup */
+-	list_for_each_entry(dev, &w1_masters, w1_master_entry)
++	list_for_each_entry_safe(dev, n, &w1_masters, w1_master_entry)
+ 		__w1_remove_master_device(dev);
+ 
+ 	w1_fini_netlink();
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index d0fecbd28232f..c4e3c1a5de059 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -77,14 +77,21 @@ static u64 btrfs_reduce_alloc_profile(struct btrfs_fs_info *fs_info, u64 flags)
+ 	}
+ 	allowed &= flags;
+ 
+-	if (allowed & BTRFS_BLOCK_GROUP_RAID6)
++	/* Select the highest-redundancy RAID level. */
++	if (allowed & BTRFS_BLOCK_GROUP_RAID1C4)
++		allowed = BTRFS_BLOCK_GROUP_RAID1C4;
++	else if (allowed & BTRFS_BLOCK_GROUP_RAID6)
+ 		allowed = BTRFS_BLOCK_GROUP_RAID6;
++	else if (allowed & BTRFS_BLOCK_GROUP_RAID1C3)
++		allowed = BTRFS_BLOCK_GROUP_RAID1C3;
+ 	else if (allowed & BTRFS_BLOCK_GROUP_RAID5)
+ 		allowed = BTRFS_BLOCK_GROUP_RAID5;
+ 	else if (allowed & BTRFS_BLOCK_GROUP_RAID10)
+ 		allowed = BTRFS_BLOCK_GROUP_RAID10;
+ 	else if (allowed & BTRFS_BLOCK_GROUP_RAID1)
+ 		allowed = BTRFS_BLOCK_GROUP_RAID1;
++	else if (allowed & BTRFS_BLOCK_GROUP_DUP)
++		allowed = BTRFS_BLOCK_GROUP_DUP;
+ 	else if (allowed & BTRFS_BLOCK_GROUP_RAID0)
+ 		allowed = BTRFS_BLOCK_GROUP_RAID0;
+ 
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 5a114cad988a6..608b939a4d287 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2256,6 +2256,9 @@ static int btrfs_init_csum_hash(struct btrfs_fs_info *fs_info, u16 csum_type)
+ 		if (!strstr(crypto_shash_driver_name(csum_shash), "generic"))
+ 			set_bit(BTRFS_FS_CSUM_IMPL_FAST, &fs_info->flags);
+ 		break;
++	case BTRFS_CSUM_TYPE_XXHASH:
++		set_bit(BTRFS_FS_CSUM_IMPL_FAST, &fs_info->flags);
++		break;
+ 	default:
+ 		break;
+ 	}
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 3fc689154bb5b..828a7ff4aebe7 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1270,7 +1270,9 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 		goto out;
+ 	}
+ 
++	spin_lock(&fs_info->trans_lock);
+ 	list_del(&quota_root->dirty_list);
++	spin_unlock(&fs_info->trans_lock);
+ 
+ 	btrfs_tree_lock(quota_root->node);
+ 	btrfs_clean_tree_block(quota_root->node);
+@@ -4374,4 +4376,5 @@ void btrfs_qgroup_destroy_extent_records(struct btrfs_transaction *trans)
+ 		ulist_free(entry->old_roots);
+ 		kfree(entry);
+ 	}
++	*root = RB_ROOT;
+ }
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index e1fda3923944e..432dc2a16e282 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -3574,6 +3574,15 @@ static void handle_cap_grant(struct inode *inode,
+ 	}
+ 	BUG_ON(cap->issued & ~cap->implemented);
+ 
++	/* don't let check_caps skip sending a response to MDS for revoke msgs */
++	if (le32_to_cpu(grant->op) == CEPH_CAP_OP_REVOKE) {
++		cap->mds_wanted = 0;
++		if (cap == ci->i_auth_cap)
++			check_caps = 1; /* check auth cap only */
++		else
++			check_caps = 2; /* check all caps */
++	}
++
+ 	if (extra_info->inline_version > 0 &&
+ 	    extra_info->inline_version >= ci->i_inline_version) {
+ 		ci->i_inline_version = extra_info->inline_version;
+diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
+index a10d2bcfe75a8..edce0b25cd90e 100644
+--- a/fs/dlm/plock.c
++++ b/fs/dlm/plock.c
+@@ -363,7 +363,9 @@ int dlm_posix_get(dlm_lockspace_t *lockspace, u64 number, struct file *file,
+ 		locks_init_lock(fl);
+ 		fl->fl_type = (op->info.ex) ? F_WRLCK : F_RDLCK;
+ 		fl->fl_flags = FL_POSIX;
+-		fl->fl_pid = -op->info.pid;
++		fl->fl_pid = op->info.pid;
++		if (op->info.nodeid != dlm_our_nodeid())
++			fl->fl_pid = -fl->fl_pid;
+ 		fl->fl_start = op->info.start;
+ 		fl->fl_end = op->info.end;
+ 		rv = 0;
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 8cb2cf612e49b..9cff927382599 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -629,7 +629,7 @@ hitted:
+ 	tight &= (clt->mode >= COLLECT_PRIMARY_HOOKED &&
+ 		  clt->mode != COLLECT_PRIMARY_FOLLOWED_NOINPLACE);
+ 
+-	cur = end - min_t(unsigned int, offset + end - map->m_la, end);
++	cur = end - min_t(erofs_off_t, offset + end - map->m_la, end);
+ 	if (!(map->m_flags & EROFS_MAP_MAPPED)) {
+ 		zero_user_segment(page, cur, end);
+ 		goto next_part;
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index f18194fd8d770..a5537a9f8f367 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -215,7 +215,7 @@ static int unpack_compacted_index(struct z_erofs_maprecorder *m,
+ 	int i;
+ 	u8 *in, type;
+ 
+-	if (1 << amortizedshift == 4)
++	if (1 << amortizedshift == 4 && lclusterbits <= 14)
+ 		vcnt = 2;
+ 	else if (1 << amortizedshift == 2 && lclusterbits == 12)
+ 		vcnt = 16;
+@@ -273,7 +273,6 @@ static int compacted_load_cluster_from_disk(struct z_erofs_maprecorder *m,
+ {
+ 	struct inode *const inode = m->inode;
+ 	struct erofs_inode *const vi = EROFS_I(inode);
+-	const unsigned int lclusterbits = vi->z_logical_clusterbits;
+ 	const erofs_off_t ebase = ALIGN(iloc(EROFS_I_SB(inode), vi->nid) +
+ 					vi->inode_isize + vi->xattr_isize, 8) +
+ 		sizeof(struct z_erofs_map_header);
+@@ -283,9 +282,6 @@ static int compacted_load_cluster_from_disk(struct z_erofs_maprecorder *m,
+ 	erofs_off_t pos;
+ 	int err;
+ 
+-	if (lclusterbits != 12)
+-		return -EOPNOTSUPP;
+-
+ 	if (lcn >= totalidx)
+ 		return -EINVAL;
+ 
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index 237983cd8cdc2..c2bb2ff3fbb6b 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -649,6 +649,14 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
+ 
+ 	ext4_update_inode_fsync_trans(handle, inode, 1);
+ 	count = ar.len;
++
++	/*
++	 * Update reserved blocks/metadata blocks after successful block
++	 * allocation which had been deferred till now.
++	 */
++	if (flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE)
++		ext4_da_update_reserve_space(inode, count, 1);
++
+ got_it:
+ 	map->m_flags |= EXT4_MAP_MAPPED;
+ 	map->m_pblk = le32_to_cpu(chain[depth-1].key);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index d3143746bcdbf..365c4d3a434ab 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -654,16 +654,6 @@ found:
+ 			 */
+ 			ext4_clear_inode_state(inode, EXT4_STATE_EXT_MIGRATE);
+ 		}
+-
+-		/*
+-		 * Update reserved blocks/metadata blocks after successful
+-		 * block allocation which had been deferred till now. We don't
+-		 * support fallocate for non extent files. So we can update
+-		 * reserve space here.
+-		 */
+-		if ((retval > 0) &&
+-			(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE))
+-			ext4_da_update_reserve_space(inode, retval, 1);
+ 	}
+ 
+ 	if (retval > 0) {
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 8a51448b76700..3a71928846712 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -5549,8 +5549,8 @@ do_more:
+ 		 * them with group lock_held
+ 		 */
+ 		if (test_opt(sb, DISCARD)) {
+-			err = ext4_issue_discard(sb, block_group, bit, count,
+-						 NULL);
++			err = ext4_issue_discard(sb, block_group, bit,
++						 count_clusters, NULL);
+ 			if (err && err != -EOPNOTSUPP)
+ 				ext4_msg(sb, KERN_WARNING, "discard request in"
+ 					 " group:%u block:%d count:%lu failed"
+@@ -5634,12 +5634,6 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+ 
+ 	sbi = EXT4_SB(sb);
+ 
+-	if (sbi->s_mount_state & EXT4_FC_REPLAY) {
+-		ext4_free_blocks_simple(inode, block, count);
+-		return;
+-	}
+-
+-	might_sleep();
+ 	if (bh) {
+ 		if (block)
+ 			BUG_ON(block != bh->b_blocknr);
+@@ -5647,6 +5641,13 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+ 			block = bh->b_blocknr;
+ 	}
+ 
++	if (sbi->s_mount_state & EXT4_FC_REPLAY) {
++		ext4_free_blocks_simple(inode, block, EXT4_NUM_B2C(sbi, count));
++		return;
++	}
++
++	might_sleep();
++
+ 	if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
+ 	    !ext4_inode_block_valid(inode, block, count)) {
+ 		ext4_error(sb, "Freeing blocks not in datazone - "
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 45f719c1e0023..ca3d2a33a08c8 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3863,19 +3863,10 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 			return retval;
+ 	}
+ 
+-	/*
+-	 * We need to protect against old.inode directory getting converted
+-	 * from inline directory format into a normal one.
+-	 */
+-	if (S_ISDIR(old.inode->i_mode))
+-		inode_lock_nested(old.inode, I_MUTEX_NONDIR2);
+-
+ 	old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de,
+ 				 &old.inlined);
+-	if (IS_ERR(old.bh)) {
+-		retval = PTR_ERR(old.bh);
+-		goto unlock_moved_dir;
+-	}
++	if (IS_ERR(old.bh))
++		return PTR_ERR(old.bh);
+ 
+ 	/*
+ 	 *  Check for inode number is _not_ due to possible IO errors.
+@@ -4065,10 +4056,6 @@ release_bh:
+ 	brelse(old.bh);
+ 	brelse(new.bh);
+ 
+-unlock_moved_dir:
+-	if (S_ISDIR(old.inode->i_mode))
+-		inode_unlock(old.inode);
+-
+ 	return retval;
+ }
+ 
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 84b4fc9833e38..e386d67cff9d1 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1091,6 +1091,12 @@ static void ext4_blkdev_remove(struct ext4_sb_info *sbi)
+ 	struct block_device *bdev;
+ 	bdev = sbi->s_journal_bdev;
+ 	if (bdev) {
++		/*
++		 * Invalidate the journal device's buffers.  We don't want them
++		 * floating about in memory - the physical journal device may
++		 * hotswapped, and it breaks the `ro-after' testing code.
++		 */
++		invalidate_bdev(bdev);
+ 		ext4_blkdev_put(bdev);
+ 		sbi->s_journal_bdev = NULL;
+ 	}
+@@ -1230,13 +1236,7 @@ static void ext4_put_super(struct super_block *sb)
+ 	sync_blockdev(sb->s_bdev);
+ 	invalidate_bdev(sb->s_bdev);
+ 	if (sbi->s_journal_bdev && sbi->s_journal_bdev != sb->s_bdev) {
+-		/*
+-		 * Invalidate the journal device's buffers.  We don't want them
+-		 * floating about in memory - the physical journal device may
+-		 * hotswapped, and it breaks the `ro-after' testing code.
+-		 */
+ 		sync_blockdev(sbi->s_journal_bdev);
+-		invalidate_bdev(sbi->s_journal_bdev);
+ 		ext4_blkdev_remove(sbi);
+ 	}
+ 
+@@ -5206,6 +5206,7 @@ failed_mount:
+ 	brelse(bh);
+ 	ext4_blkdev_remove(sbi);
+ out_fail:
++	invalidate_bdev(sb->s_bdev);
+ 	sb->s_fs_info = NULL;
+ 	kfree(sbi->s_blockgroup_lock);
+ out_free_base:
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index ed39101dc7b6f..5bf858fc271fc 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1725,6 +1725,20 @@ static int ext4_xattr_set_entry(struct ext4_xattr_info *i,
+ 		memmove(here, (void *)here + size,
+ 			(void *)last - (void *)here + sizeof(__u32));
+ 		memset(last, 0, size);
++
++		/*
++		 * Update i_inline_off - moved ibody region might contain
++		 * system.data attribute.  Handling a failure here won't
++		 * cause other complications for setting an xattr.
++		 */
++		if (!is_block && ext4_has_inline_data(inode)) {
++			ret = ext4_find_inline_data_nolock(inode);
++			if (ret) {
++				ext4_warning_inode(inode,
++					"unable to update i_inline_off");
++				goto out;
++			}
++		}
+ 	} else if (s->not_found) {
+ 		/* Insert new name. */
+ 		size_t size = EXT4_XATTR_LEN(name_len);
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 62b7848f1f71e..83ebc860508b0 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3483,7 +3483,7 @@ block_t f2fs_start_bidx_of_node(unsigned int node_ofs, struct inode *inode);
+ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, bool background, bool force,
+ 			unsigned int segno);
+ void f2fs_build_gc_manager(struct f2fs_sb_info *sbi);
+-int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count);
++int f2fs_resize_fs(struct file *filp, __u64 block_count);
+ int __init f2fs_create_garbage_collection_cache(void);
+ void f2fs_destroy_garbage_collection_cache(void);
+ 
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index a0d8aa52b696b..6e91be5b8c30f 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -3356,7 +3356,7 @@ static int f2fs_ioc_resize_fs(struct file *filp, unsigned long arg)
+ 			   sizeof(block_count)))
+ 		return -EFAULT;
+ 
+-	return f2fs_resize_fs(sbi, block_count);
++	return f2fs_resize_fs(filp, block_count);
+ }
+ 
+ static int f2fs_ioc_enable_verity(struct file *filp, unsigned long arg)
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 66ac048cc8998..8e4daee0171f8 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -7,6 +7,7 @@
+  */
+ #include <linux/fs.h>
+ #include <linux/module.h>
++#include <linux/mount.h>
+ #include <linux/backing-dev.h>
+ #include <linux/init.h>
+ #include <linux/f2fs_fs.h>
+@@ -1976,8 +1977,9 @@ static void update_fs_metadata(struct f2fs_sb_info *sbi, int secs)
+ 	}
+ }
+ 
+-int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count)
++int f2fs_resize_fs(struct file *filp, __u64 block_count)
+ {
++	struct f2fs_sb_info *sbi = F2FS_I_SB(file_inode(filp));
+ 	__u64 old_block_count, shrunk_blocks;
+ 	struct cp_control cpc = { CP_RESIZE, 0, 0, 0 };
+ 	unsigned int secs;
+@@ -2015,12 +2017,18 @@ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count)
+ 		return -EINVAL;
+ 	}
+ 
++	err = mnt_want_write_file(filp);
++	if (err)
++		return err;
++
+ 	shrunk_blocks = old_block_count - block_count;
+ 	secs = div_u64(shrunk_blocks, BLKS_PER_SEC(sbi));
+ 
+ 	/* stop other GC */
+-	if (!down_write_trylock(&sbi->gc_lock))
+-		return -EAGAIN;
++	if (!down_write_trylock(&sbi->gc_lock)) {
++		err = -EAGAIN;
++		goto out_drop_write;
++	}
+ 
+ 	/* stop CP to protect MAIN_SEC in free_segment_range */
+ 	f2fs_lock_op(sbi);
+@@ -2040,10 +2048,18 @@ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count)
+ out_unlock:
+ 	f2fs_unlock_op(sbi);
+ 	up_write(&sbi->gc_lock);
++out_drop_write:
++	mnt_drop_write_file(filp);
+ 	if (err)
+ 		return err;
+ 
+ 	freeze_super(sbi->sb);
++
++	if (f2fs_readonly(sbi->sb)) {
++		thaw_super(sbi->sb);
++		return -EROFS;
++	}
++
+ 	down_write(&sbi->gc_lock);
+ 	mutex_lock(&sbi->cp_mutex);
+ 
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 98263180c0ead..72b109685db47 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -969,20 +969,12 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 			goto out;
+ 	}
+ 
+-	/*
+-	 * Copied from ext4_rename: we need to protect against old.inode
+-	 * directory getting converted from inline directory format into
+-	 * a normal one.
+-	 */
+-	if (S_ISDIR(old_inode->i_mode))
+-		inode_lock_nested(old_inode, I_MUTEX_NONDIR2);
+-
+ 	err = -ENOENT;
+ 	old_entry = f2fs_find_entry(old_dir, &old_dentry->d_name, &old_page);
+ 	if (!old_entry) {
+ 		if (IS_ERR(old_page))
+ 			err = PTR_ERR(old_page);
+-		goto out_unlock_old;
++		goto out;
+ 	}
+ 
+ 	if (S_ISDIR(old_inode->i_mode)) {
+@@ -1090,9 +1082,6 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 
+ 	f2fs_unlock_op(sbi);
+ 
+-	if (S_ISDIR(old_inode->i_mode))
+-		inode_unlock(old_inode);
+-
+ 	if (IS_DIRSYNC(old_dir) || IS_DIRSYNC(new_dir))
+ 		f2fs_sync_fs(sbi->sb, 1);
+ 
+@@ -1107,9 +1096,6 @@ out_dir:
+ 		f2fs_put_page(old_dir_page, 0);
+ out_old:
+ 	f2fs_put_page(old_page, 0);
+-out_unlock_old:
+-	if (S_ISDIR(old_inode->i_mode))
+-		inode_unlock(old_inode);
+ out:
+ 	if (whiteout)
+ 		iput(whiteout);
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index c63274d4b74b0..02cb1c806c3ed 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -884,8 +884,10 @@ static int truncate_dnode(struct dnode_of_data *dn)
+ 	dn->ofs_in_node = 0;
+ 	f2fs_truncate_data_blocks(dn);
+ 	err = truncate_node(dn);
+-	if (err)
++	if (err) {
++		f2fs_put_page(page, 1);
+ 		return err;
++	}
+ 
+ 	return 1;
+ }
+diff --git a/fs/fs_context.c b/fs/fs_context.c
+index 740322dff4a30..e6ef7e040743b 100644
+--- a/fs/fs_context.c
++++ b/fs/fs_context.c
+@@ -543,7 +543,8 @@ static int legacy_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 			return -ENOMEM;
+ 	}
+ 
+-	ctx->legacy_data[size++] = ',';
++	if (size)
++		ctx->legacy_data[size++] = ',';
+ 	len = strlen(param->key);
+ 	memcpy(ctx->legacy_data + size, param->key, len);
+ 	size += len;
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index e3b9b7d188e67..b0c701c007c68 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -249,7 +249,7 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags)
+ 			spin_unlock(&fi->lock);
+ 		}
+ 		kfree(forget);
+-		if (ret == -ENOMEM)
++		if (ret == -ENOMEM || ret == -EINTR)
+ 			goto out;
+ 		if (ret || fuse_invalid_attr(&outarg.attr) ||
+ 		    fuse_stale_inode(inode, outarg.generation, &outarg.attr))
+diff --git a/fs/inode.c b/fs/inode.c
+index 7ec90788d8be9..311237e8d3595 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -1015,6 +1015,48 @@ void discard_new_inode(struct inode *inode)
+ }
+ EXPORT_SYMBOL(discard_new_inode);
+ 
++/**
++ * lock_two_inodes - lock two inodes (may be regular files but also dirs)
++ *
++ * Lock any non-NULL argument. The caller must make sure that if he is passing
++ * in two directories, one is not ancestor of the other.  Zero, one or two
++ * objects may be locked by this function.
++ *
++ * @inode1: first inode to lock
++ * @inode2: second inode to lock
++ * @subclass1: inode lock subclass for the first lock obtained
++ * @subclass2: inode lock subclass for the second lock obtained
++ */
++void lock_two_inodes(struct inode *inode1, struct inode *inode2,
++		     unsigned subclass1, unsigned subclass2)
++{
++	if (!inode1 || !inode2) {
++		/*
++		 * Make sure @subclass1 will be used for the acquired lock.
++		 * This is not strictly necessary (no current caller cares) but
++		 * let's keep things consistent.
++		 */
++		if (!inode1)
++			swap(inode1, inode2);
++		goto lock;
++	}
++
++	/*
++	 * If one object is directory and the other is not, we must make sure
++	 * to lock directory first as the other object may be its child.
++	 */
++	if (S_ISDIR(inode2->i_mode) == S_ISDIR(inode1->i_mode)) {
++		if (inode1 > inode2)
++			swap(inode1, inode2);
++	} else if (!S_ISDIR(inode1->i_mode))
++		swap(inode1, inode2);
++lock:
++	if (inode1)
++		inode_lock_nested(inode1, subclass1);
++	if (inode2 && inode2 != inode1)
++		inode_lock_nested(inode2, subclass2);
++}
++
+ /**
+  * lock_two_nondirectories - take two i_mutexes on non-directory objects
+  *
+diff --git a/fs/internal.h b/fs/internal.h
+index d5d9fcdae10c4..39193250a4ed2 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -150,6 +150,8 @@ extern long prune_icache_sb(struct super_block *sb, struct shrink_control *sc);
+ extern void inode_add_lru(struct inode *inode);
+ extern int dentry_needs_remove_privs(struct dentry *dentry);
+ bool in_group_or_capable(const struct inode *inode, kgid_t gid);
++void lock_two_inodes(struct inode *inode1, struct inode *inode2,
++		     unsigned subclass1, unsigned subclass2);
+ 
+ /*
+  * fs-writeback.c
+diff --git a/fs/jffs2/build.c b/fs/jffs2/build.c
+index 837cd55fd4c5e..6ae9d6fefb861 100644
+--- a/fs/jffs2/build.c
++++ b/fs/jffs2/build.c
+@@ -211,7 +211,10 @@ static int jffs2_build_filesystem(struct jffs2_sb_info *c)
+ 		ic->scan_dents = NULL;
+ 		cond_resched();
+ 	}
+-	jffs2_build_xattr_subsystem(c);
++	ret = jffs2_build_xattr_subsystem(c);
++	if (ret)
++		goto exit;
++
+ 	c->flags &= ~JFFS2_SB_FLAG_BUILDING;
+ 
+ 	dbg_fsbuild("FS build complete\n");
+diff --git a/fs/jffs2/xattr.c b/fs/jffs2/xattr.c
+index da3e18503c658..acb4492f5970c 100644
+--- a/fs/jffs2/xattr.c
++++ b/fs/jffs2/xattr.c
+@@ -772,10 +772,10 @@ void jffs2_clear_xattr_subsystem(struct jffs2_sb_info *c)
+ }
+ 
+ #define XREF_TMPHASH_SIZE	(128)
+-void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c)
++int jffs2_build_xattr_subsystem(struct jffs2_sb_info *c)
+ {
+ 	struct jffs2_xattr_ref *ref, *_ref;
+-	struct jffs2_xattr_ref *xref_tmphash[XREF_TMPHASH_SIZE];
++	struct jffs2_xattr_ref **xref_tmphash;
+ 	struct jffs2_xattr_datum *xd, *_xd;
+ 	struct jffs2_inode_cache *ic;
+ 	struct jffs2_raw_node_ref *raw;
+@@ -784,9 +784,12 @@ void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c)
+ 
+ 	BUG_ON(!(c->flags & JFFS2_SB_FLAG_BUILDING));
+ 
++	xref_tmphash = kcalloc(XREF_TMPHASH_SIZE,
++			       sizeof(struct jffs2_xattr_ref *), GFP_KERNEL);
++	if (!xref_tmphash)
++		return -ENOMEM;
++
+ 	/* Phase.1 : Merge same xref */
+-	for (i=0; i < XREF_TMPHASH_SIZE; i++)
+-		xref_tmphash[i] = NULL;
+ 	for (ref=c->xref_temp; ref; ref=_ref) {
+ 		struct jffs2_xattr_ref *tmp;
+ 
+@@ -884,6 +887,8 @@ void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c)
+ 		     "%u of xref (%u dead, %u orphan) found.\n",
+ 		     xdatum_count, xdatum_unchecked_count, xdatum_orphan_count,
+ 		     xref_count, xref_dead_count, xref_orphan_count);
++	kfree(xref_tmphash);
++	return 0;
+ }
+ 
+ struct jffs2_xattr_datum *jffs2_setup_xattr_datum(struct jffs2_sb_info *c,
+diff --git a/fs/jffs2/xattr.h b/fs/jffs2/xattr.h
+index 720007b2fd65d..1b5030a3349db 100644
+--- a/fs/jffs2/xattr.h
++++ b/fs/jffs2/xattr.h
+@@ -71,7 +71,7 @@ static inline int is_xattr_ref_dead(struct jffs2_xattr_ref *ref)
+ #ifdef CONFIG_JFFS2_FS_XATTR
+ 
+ extern void jffs2_init_xattr_subsystem(struct jffs2_sb_info *c);
+-extern void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c);
++extern int jffs2_build_xattr_subsystem(struct jffs2_sb_info *c);
+ extern void jffs2_clear_xattr_subsystem(struct jffs2_sb_info *c);
+ 
+ extern struct jffs2_xattr_datum *jffs2_setup_xattr_datum(struct jffs2_sb_info *c,
+@@ -103,7 +103,7 @@ extern ssize_t jffs2_listxattr(struct dentry *, char *, size_t);
+ #else
+ 
+ #define jffs2_init_xattr_subsystem(c)
+-#define jffs2_build_xattr_subsystem(c)
++#define jffs2_build_xattr_subsystem(c)		(0)
+ #define jffs2_clear_xattr_subsystem(c)
+ 
+ #define jffs2_xattr_do_crccheck_inode(c, ic)
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 501263355ef48..bd9af2be352fc 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -178,7 +178,13 @@ int dbMount(struct inode *ipbmap)
+ 	dbmp_le = (struct dbmap_disk *) mp->data;
+ 	bmp->db_mapsize = le64_to_cpu(dbmp_le->dn_mapsize);
+ 	bmp->db_nfree = le64_to_cpu(dbmp_le->dn_nfree);
++
+ 	bmp->db_l2nbperpage = le32_to_cpu(dbmp_le->dn_l2nbperpage);
++	if (bmp->db_l2nbperpage > L2PSIZE - L2MINBLOCKSIZE) {
++		err = -EINVAL;
++		goto err_release_metapage;
++	}
++
+ 	bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
+ 	if (!bmp->db_numag) {
+ 		err = -EINVAL;
+diff --git a/fs/jfs/jfs_filsys.h b/fs/jfs/jfs_filsys.h
+index b5d702df7111a..33ef13a0b1108 100644
+--- a/fs/jfs/jfs_filsys.h
++++ b/fs/jfs/jfs_filsys.h
+@@ -122,7 +122,9 @@
+ #define NUM_INODE_PER_IAG	INOSPERIAG
+ 
+ #define MINBLOCKSIZE		512
++#define L2MINBLOCKSIZE		9
+ #define MAXBLOCKSIZE		4096
++#define L2MAXBLOCKSIZE		12
+ #define	MAXFILESIZE		((s64)1 << 52)
+ 
+ #define JFS_LINK_MAX		0xffffffff
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index 8b3c86a502daa..c91ee05cce74f 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -679,7 +679,9 @@ static struct kernfs_node *__kernfs_new_node(struct kernfs_root *root,
+ 	return kn;
+ 
+  err_out3:
++	spin_lock(&kernfs_idr_lock);
+ 	idr_remove(&root->ino_idr, (u32)kernfs_ino(kn));
++	spin_unlock(&kernfs_idr_lock);
+  err_out2:
+ 	kmem_cache_free(kernfs_node_cache, kn);
+  err_out1:
+diff --git a/fs/namei.c b/fs/namei.c
+index 3d98db9802a77..bc5e633a5954e 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -2782,8 +2782,8 @@ struct dentry *lock_rename(struct dentry *p1, struct dentry *p2)
+ 		return p;
+ 	}
+ 
+-	inode_lock_nested(p1->d_inode, I_MUTEX_PARENT);
+-	inode_lock_nested(p2->d_inode, I_MUTEX_PARENT2);
++	lock_two_inodes(p1->d_inode, p2->d_inode,
++			I_MUTEX_PARENT, I_MUTEX_PARENT2);
+ 	return NULL;
+ }
+ EXPORT_SYMBOL(lock_rename);
+@@ -4264,7 +4264,7 @@ SYSCALL_DEFINE2(link, const char __user *, oldname, const char __user *, newname
+  *	   sb->s_vfs_rename_mutex. We might be more accurate, but that's another
+  *	   story.
+  *	c) we have to lock _four_ objects - parents and victim (if it exists),
+- *	   and source (if it is not a directory).
++ *	   and source.
+  *	   And that - after we got ->i_mutex on parents (until then we don't know
+  *	   whether the target exists).  Solution: try to be smart with locking
+  *	   order for inodes.  We rely on the fact that tree topology may change
+@@ -4341,10 +4341,16 @@ int vfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 
+ 	take_dentry_name_snapshot(&old_name, old_dentry);
+ 	dget(new_dentry);
+-	if (!is_dir || (flags & RENAME_EXCHANGE))
+-		lock_two_nondirectories(source, target);
+-	else if (target)
+-		inode_lock(target);
++	/*
++	 * Lock all moved children. Moved directories may need to change parent
++	 * pointer so they need the lock to prevent against concurrent
++	 * directory changes moving parent pointer. For regular files we've
++	 * historically always done this. The lockdep locking subclasses are
++	 * somewhat arbitrary but RENAME_EXCHANGE in particular can swap
++	 * regular files and directories so it's difficult to tell which
++	 * subclasses to use.
++	 */
++	lock_two_inodes(source, target, I_MUTEX_NORMAL, I_MUTEX_NONDIR2);
+ 
+ 	error = -EBUSY;
+ 	if (is_local_mountpoint(old_dentry) || is_local_mountpoint(new_dentry))
+@@ -4388,9 +4394,8 @@ int vfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 			d_exchange(old_dentry, new_dentry);
+ 	}
+ out:
+-	if (!is_dir || (flags & RENAME_EXCHANGE))
+-		unlock_two_nondirectories(source, target);
+-	else if (target)
++	inode_unlock(source);
++	if (target)
+ 		inode_unlock(target);
+ 	dput(new_dentry);
+ 	if (!error) {
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index bca5d1bdd79bd..b9567cc8698ed 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -926,6 +926,7 @@ out:
+ out_noaction:
+ 	return ret;
+ session_recover:
++	set_bit(NFS4_SLOT_TBL_DRAINING, &session->fc_slot_table.slot_tbl_state);
+ 	nfs4_schedule_session_recovery(session, status);
+ 	dprintk("%s ERROR: %d Reset session\n", __func__, status);
+ 	nfs41_sequence_free_slot(res);
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 1e3410a9b03eb..c7e8e641d3e5f 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3705,7 +3705,7 @@ nfsd4_encode_open(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_op
+ 		p = xdr_reserve_space(xdr, 32);
+ 		if (!p)
+ 			return nfserr_resource;
+-		*p++ = cpu_to_be32(0);
++		*p++ = cpu_to_be32(open->op_recall);
+ 
+ 		/*
+ 		 * TODO: space_limit's in delegations
+diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
+index 18e014fa06480..84de9f97bbc09 100644
+--- a/fs/notify/fanotify/fanotify_user.c
++++ b/fs/notify/fanotify/fanotify_user.c
+@@ -1090,8 +1090,11 @@ static int fanotify_test_fid(struct path *path, __kernel_fsid_t *fsid)
+ 	return 0;
+ }
+ 
+-static int fanotify_events_supported(struct path *path, __u64 mask)
++static int fanotify_events_supported(struct path *path, __u64 mask,
++				     unsigned int flags)
+ {
++	unsigned int mark_type = flags & FANOTIFY_MARK_TYPE_BITS;
++
+ 	/*
+ 	 * Some filesystems such as 'proc' acquire unusual locks when opening
+ 	 * files. For them fanotify permission events have high chances of
+@@ -1103,6 +1106,21 @@ static int fanotify_events_supported(struct path *path, __u64 mask)
+ 	if (mask & FANOTIFY_PERM_EVENTS &&
+ 	    path->mnt->mnt_sb->s_type->fs_flags & FS_DISALLOW_NOTIFY_PERM)
+ 		return -EINVAL;
++
++	/*
++	 * mount and sb marks are not allowed on kernel internal pseudo fs,
++	 * like pipe_mnt, because that would subscribe to events on all the
++	 * anonynous pipes in the system.
++	 *
++	 * SB_NOUSER covers all of the internal pseudo fs whose objects are not
++	 * exposed to user's mount namespace, but there are other SB_KERNMOUNT
++	 * fs, like nsfs, debugfs, for which the value of allowing sb and mount
++	 * mark is questionable. For now we leave them alone.
++	 */
++	if (mark_type != FAN_MARK_INODE &&
++	    path->mnt->mnt_sb->s_flags & SB_NOUSER)
++		return -EINVAL;
++
+ 	return 0;
+ }
+ 
+@@ -1218,7 +1236,7 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
+ 		goto fput_and_out;
+ 
+ 	if (flags & FAN_MARK_ADD) {
+-		ret = fanotify_events_supported(&path, mask);
++		ret = fanotify_events_supported(&path, mask, flags);
+ 		if (ret)
+ 			goto path_put_and_out;
+ 	}
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index e466c58f9ec4c..7ef3c87f8a23d 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -475,6 +475,7 @@ static int ovl_link_up(struct ovl_copy_up_ctx *c)
+ 			/* Restore timestamps on parent (best effort) */
+ 			ovl_set_timestamps(upperdir, &c->pstat);
+ 			ovl_dentry_set_upper_alias(c->dentry);
++			ovl_dentry_update_reval(c->dentry, upper);
+ 		}
+ 	}
+ 	inode_unlock(udir);
+@@ -762,6 +763,7 @@ static int ovl_do_copy_up(struct ovl_copy_up_ctx *c)
+ 		inode_unlock(udir);
+ 
+ 		ovl_dentry_set_upper_alias(c->dentry);
++		ovl_dentry_update_reval(c->dentry, ovl_dentry_upper(c->dentry));
+ 	}
+ 
+ out:
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 8ebd9f2b1c95b..a7021c87bfcb0 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -266,8 +266,7 @@ static int ovl_instantiate(struct dentry *dentry, struct inode *inode,
+ 
+ 	ovl_dir_modified(dentry->d_parent, false);
+ 	ovl_dentry_set_upper_alias(dentry);
+-	ovl_dentry_update_reval(dentry, newdentry,
+-			DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE);
++	ovl_dentry_init_reval(dentry, newdentry);
+ 
+ 	if (!hardlink) {
+ 		/*
+diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
+index 44118f0ab0b31..f981283177ecd 100644
+--- a/fs/overlayfs/export.c
++++ b/fs/overlayfs/export.c
+@@ -324,8 +324,7 @@ static struct dentry *ovl_obtain_alias(struct super_block *sb,
+ 	if (upper_alias)
+ 		ovl_dentry_set_upper_alias(dentry);
+ 
+-	ovl_dentry_update_reval(dentry, upper,
+-			DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE);
++	ovl_dentry_init_reval(dentry, upper);
+ 
+ 	return d_instantiate_anon(dentry, inode);
+ 
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index 092812c2f118a..ff5284b86bd56 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -1095,8 +1095,7 @@ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
+ 			ovl_set_flag(OVL_UPPERDATA, inode);
+ 	}
+ 
+-	ovl_dentry_update_reval(dentry, upperdentry,
+-			DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE);
++	ovl_dentry_init_reval(dentry, upperdentry);
+ 
+ 	revert_creds(old_cred);
+ 	if (origin_path) {
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 898de3bf884e4..26f91868fbdaf 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -257,8 +257,10 @@ bool ovl_index_all(struct super_block *sb);
+ bool ovl_verify_lower(struct super_block *sb);
+ struct ovl_entry *ovl_alloc_entry(unsigned int numlower);
+ bool ovl_dentry_remote(struct dentry *dentry);
+-void ovl_dentry_update_reval(struct dentry *dentry, struct dentry *upperdentry,
+-			     unsigned int mask);
++void ovl_dentry_update_reval(struct dentry *dentry, struct dentry *realdentry);
++void ovl_dentry_init_reval(struct dentry *dentry, struct dentry *upperdentry);
++void ovl_dentry_init_flags(struct dentry *dentry, struct dentry *upperdentry,
++			   unsigned int mask);
+ bool ovl_dentry_weird(struct dentry *dentry);
+ enum ovl_path_type ovl_path_type(struct dentry *dentry);
+ void ovl_path_upper(struct dentry *dentry, struct path *path);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index e3cd5a00f880d..5d7df839902df 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -1868,7 +1868,7 @@ static struct dentry *ovl_get_root(struct super_block *sb,
+ 	ovl_dentry_set_flag(OVL_E_CONNECTED, root);
+ 	ovl_set_upperdata(d_inode(root));
+ 	ovl_inode_init(d_inode(root), &oip, ino, fsid);
+-	ovl_dentry_update_reval(root, upperdentry, DCACHE_OP_WEAK_REVALIDATE);
++	ovl_dentry_init_flags(root, upperdentry, DCACHE_OP_WEAK_REVALIDATE);
+ 
+ 	return root;
+ }
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index e8b14d2c180c6..060f9c99d9b33 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -90,14 +90,30 @@ struct ovl_entry *ovl_alloc_entry(unsigned int numlower)
+ 	return oe;
+ }
+ 
++#define OVL_D_REVALIDATE (DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE)
++
+ bool ovl_dentry_remote(struct dentry *dentry)
+ {
+-	return dentry->d_flags &
+-		(DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE);
++	return dentry->d_flags & OVL_D_REVALIDATE;
++}
++
++void ovl_dentry_update_reval(struct dentry *dentry, struct dentry *realdentry)
++{
++	if (!ovl_dentry_remote(realdentry))
++		return;
++
++	spin_lock(&dentry->d_lock);
++	dentry->d_flags |= realdentry->d_flags & OVL_D_REVALIDATE;
++	spin_unlock(&dentry->d_lock);
++}
++
++void ovl_dentry_init_reval(struct dentry *dentry, struct dentry *upperdentry)
++{
++	return ovl_dentry_init_flags(dentry, upperdentry, OVL_D_REVALIDATE);
+ }
+ 
+-void ovl_dentry_update_reval(struct dentry *dentry, struct dentry *upperdentry,
+-			     unsigned int mask)
++void ovl_dentry_init_flags(struct dentry *dentry, struct dentry *upperdentry,
++			   unsigned int mask)
+ {
+ 	struct ovl_entry *oe = OVL_E(dentry);
+ 	unsigned int i, flags = 0;
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index 184cb97c83bdd..b6183f1f4ebcf 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -577,6 +577,8 @@ struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size,
+ 	raw_spin_lock_init(&prz->buffer_lock);
+ 	prz->flags = flags;
+ 	prz->label = kstrdup(label, GFP_KERNEL);
++	if (!prz->label)
++		goto err;
+ 
+ 	ret = persistent_ram_buffer_map(start, size, prz, memtype);
+ 	if (ret)
+diff --git a/fs/ramfs/inode.c b/fs/ramfs/inode.c
+index ee179a81b3da8..23d2c333066f6 100644
+--- a/fs/ramfs/inode.c
++++ b/fs/ramfs/inode.c
+@@ -264,7 +264,7 @@ int ramfs_init_fs_context(struct fs_context *fc)
+ 	return 0;
+ }
+ 
+-static void ramfs_kill_sb(struct super_block *sb)
++void ramfs_kill_sb(struct super_block *sb)
+ {
+ 	kfree(sb->s_fs_info);
+ 	kill_litter_super(sb);
+diff --git a/include/asm-generic/pgtable-nop4d.h b/include/asm-generic/pgtable-nop4d.h
+index ce2cbb3c380ff..2f1d0aad645cf 100644
+--- a/include/asm-generic/pgtable-nop4d.h
++++ b/include/asm-generic/pgtable-nop4d.h
+@@ -42,7 +42,7 @@ static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
+ #define __p4d(x)				((p4d_t) { __pgd(x) })
+ 
+ #define pgd_page(pgd)				(p4d_page((p4d_t){ pgd }))
+-#define pgd_page_vaddr(pgd)			(p4d_page_vaddr((p4d_t){ pgd }))
++#define pgd_page_vaddr(pgd)			((unsigned long)(p4d_pgtable((p4d_t){ pgd })))
+ 
+ /*
+  * allocating and freeing a p4d is trivial: the 1-entry p4d is
+diff --git a/include/asm-generic/pgtable-nopmd.h b/include/asm-generic/pgtable-nopmd.h
+index 3e13acd019aef..10789cf51d160 100644
+--- a/include/asm-generic/pgtable-nopmd.h
++++ b/include/asm-generic/pgtable-nopmd.h
+@@ -51,7 +51,7 @@ static inline pmd_t * pmd_offset(pud_t * pud, unsigned long address)
+ #define __pmd(x)				((pmd_t) { __pud(x) } )
+ 
+ #define pud_page(pud)				(pmd_page((pmd_t){ pud }))
+-#define pud_page_vaddr(pud)			(pmd_page_vaddr((pmd_t){ pud }))
++#define pud_pgtable(pud)			((pmd_t *)(pmd_page_vaddr((pmd_t){ pud })))
+ 
+ /*
+  * allocating and freeing a pmd is trivial: the 1-entry pmd is
+diff --git a/include/asm-generic/pgtable-nopud.h b/include/asm-generic/pgtable-nopud.h
+index a9d751fbda9e8..eb70c6d7ceff2 100644
+--- a/include/asm-generic/pgtable-nopud.h
++++ b/include/asm-generic/pgtable-nopud.h
+@@ -49,7 +49,7 @@ static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
+ #define __pud(x)				((pud_t) { __p4d(x) })
+ 
+ #define p4d_page(p4d)				(pud_page((pud_t){ p4d }))
+-#define p4d_page_vaddr(p4d)			(pud_page_vaddr((pud_t){ p4d }))
++#define p4d_pgtable(p4d)			((pud_t *)(pud_pgtable((pud_t){ p4d })))
+ 
+ /*
+  * allocating and freeing a pud is trivial: the 1-entry pud is
+diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
+index 91b9669785418..53702b83ce5f1 100644
+--- a/include/linux/bpf-cgroup.h
++++ b/include/linux/bpf-cgroup.h
+@@ -158,6 +158,10 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 				       int __user *optlen, int max_optlen,
+ 				       int retval);
+ 
++int __cgroup_bpf_run_filter_getsockopt_kern(struct sock *sk, int level,
++					    int optname, void *optval,
++					    int *optlen, int retval);
++
+ static inline enum bpf_cgroup_storage_type cgroup_storage_type(
+ 	struct bpf_map *map)
+ {
+@@ -404,10 +408,23 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
+ ({									       \
+ 	int __ret = retval;						       \
+ 	if (cgroup_bpf_enabled)						       \
+-		__ret = __cgroup_bpf_run_filter_getsockopt(sock, level,	       \
+-							   optname, optval,    \
+-							   optlen, max_optlen, \
+-							   retval);	       \
++		if (!(sock)->sk_prot->bpf_bypass_getsockopt ||		       \
++		    !INDIRECT_CALL_INET_1((sock)->sk_prot->bpf_bypass_getsockopt, \
++					tcp_bpf_bypass_getsockopt,	       \
++					level, optname))		       \
++			__ret = __cgroup_bpf_run_filter_getsockopt(	       \
++				sock, level, optname, optval, optlen,	       \
++				max_optlen, retval);			       \
++	__ret;								       \
++})
++
++#define BPF_CGROUP_RUN_PROG_GETSOCKOPT_KERN(sock, level, optname, optval,      \
++					    optlen, retval)		       \
++({									       \
++	int __ret = retval;						       \
++	if (cgroup_bpf_enabled)						       \
++		__ret = __cgroup_bpf_run_filter_getsockopt_kern(	       \
++			sock, level, optname, optval, optlen, retval);	       \
+ 	__ret;								       \
+ })
+ 
+@@ -493,6 +510,8 @@ static inline int bpf_percpu_cgroup_storage_update(struct bpf_map *map,
+ #define BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen) ({ 0; })
+ #define BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock, level, optname, optval, \
+ 				       optlen, max_optlen, retval) ({ retval; })
++#define BPF_CGROUP_RUN_PROG_GETSOCKOPT_KERN(sock, level, optname, optval, \
++					    optlen, retval) ({ retval; })
+ #define BPF_CGROUP_RUN_PROG_SETSOCKOPT(sock, level, optname, optval, optlen, \
+ 				       kernel_optval) ({ 0; })
+ 
+diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
+index 99209f50915f4..b060514bf25d2 100644
+--- a/include/linux/etherdevice.h
++++ b/include/linux/etherdevice.h
+@@ -299,6 +299,18 @@ static inline void ether_addr_copy(u8 *dst, const u8 *src)
+ #endif
+ }
+ 
++/**
++ * eth_hw_addr_set - Assign Ethernet address to a net_device
++ * @dev: pointer to net_device structure
++ * @addr: address to assign
++ *
++ * Assign given address to the net_device, addr_assign_type is not changed.
++ */
++static inline void eth_hw_addr_set(struct net_device *dev, const u8 *addr)
++{
++	ether_addr_copy(dev->dev_addr, addr);
++}
++
+ /**
+  * eth_hw_addr_inherit - Copy dev_addr from another net_device
+  * @dst: pointer to net_device to copy dev_addr to
+diff --git a/include/linux/indirect_call_wrapper.h b/include/linux/indirect_call_wrapper.h
+index 54c02c84906ab..cfcfef37b2f1a 100644
+--- a/include/linux/indirect_call_wrapper.h
++++ b/include/linux/indirect_call_wrapper.h
+@@ -60,4 +60,10 @@
+ #define INDIRECT_CALL_INET(f, f2, f1, ...) f(__VA_ARGS__)
+ #endif
+ 
++#if IS_ENABLED(CONFIG_INET)
++#define INDIRECT_CALL_INET_1(f, f1, ...) INDIRECT_CALL_1(f, f1, __VA_ARGS__)
++#else
++#define INDIRECT_CALL_INET_1(f, f1, ...) f(__VA_ARGS__)
++#endif
++
+ #endif
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 8f03cc42bd43f..302abfc2a1f63 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -4474,6 +4474,24 @@ void __hw_addr_unsync_dev(struct netdev_hw_addr_list *list,
+ void __hw_addr_init(struct netdev_hw_addr_list *list);
+ 
+ /* Functions used for device addresses handling */
++static inline void
++__dev_addr_set(struct net_device *dev, const u8 *addr, size_t len)
++{
++	memcpy(dev->dev_addr, addr, len);
++}
++
++static inline void dev_addr_set(struct net_device *dev, const u8 *addr)
++{
++	__dev_addr_set(dev, addr, dev->addr_len);
++}
++
++static inline void
++dev_addr_mod(struct net_device *dev, unsigned int offset,
++	     const u8 *addr, size_t len)
++{
++	memcpy(&dev->dev_addr[offset], addr, len);
++}
++
+ int dev_addr_add(struct net_device *dev, const unsigned char *addr,
+ 		 unsigned char addr_type);
+ int dev_addr_del(struct net_device *dev, const unsigned char *addr,
+diff --git a/include/linux/nmi.h b/include/linux/nmi.h
+index f700ff2df074e..0db377ff8f608 100644
+--- a/include/linux/nmi.h
++++ b/include/linux/nmi.h
+@@ -197,7 +197,7 @@ u64 hw_nmi_get_sample_period(int watchdog_thresh);
+ #endif
+ 
+ #if defined(CONFIG_HARDLOCKUP_CHECK_TIMESTAMP) && \
+-    defined(CONFIG_HARDLOCKUP_DETECTOR)
++    defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
+ void watchdog_update_hrtimer_threshold(u64 period);
+ #else
+ static inline void watchdog_update_hrtimer_threshold(u64 period) { }
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 4cc42ad2f6c52..550e1cdb473fa 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -1719,6 +1719,7 @@ static inline struct pci_dev *pci_get_class(unsigned int class,
+ #define pci_dev_put(dev)	do { } while (0)
+ 
+ static inline void pci_set_master(struct pci_dev *dev) { }
++static inline void pci_clear_master(struct pci_dev *dev) { }
+ static inline int pci_enable_device(struct pci_dev *dev) { return -EIO; }
+ static inline void pci_disable_device(struct pci_dev *dev) { }
+ static inline int pcim_enable_device(struct pci_dev *pdev) { return -EIO; }
+diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
+index 9def1ac19546b..f924468d84ec4 100644
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -89,7 +89,7 @@ static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
+ #ifndef pmd_offset
+ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
+ {
+-	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
++	return pud_pgtable(*pud) + pmd_index(address);
+ }
+ #define pmd_offset pmd_offset
+ #endif
+@@ -97,7 +97,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
+ #ifndef pud_offset
+ static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
+ {
+-	return (pud_t *)p4d_page_vaddr(*p4d) + pud_index(address);
++	return p4d_pgtable(*p4d) + pud_index(address);
+ }
+ #define pud_offset pud_offset
+ #endif
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index c0b6ec6bf65b7..ef236dbaa2945 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -256,18 +256,14 @@ void generic_pipe_buf_release(struct pipe_inode_info *, struct pipe_buffer *);
+ 
+ extern const struct pipe_buf_operations nosteal_pipe_buf_ops;
+ 
+-#ifdef CONFIG_WATCH_QUEUE
+ unsigned long account_pipe_buffers(struct user_struct *user,
+ 				   unsigned long old, unsigned long new);
+ bool too_many_pipe_buffers_soft(unsigned long user_bufs);
+ bool too_many_pipe_buffers_hard(unsigned long user_bufs);
+ bool pipe_is_unprivileged_user(void);
+-#endif
+ 
+ /* for F_SETPIPE_SZ and F_GETPIPE_SZ */
+-#ifdef CONFIG_WATCH_QUEUE
+ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots);
+-#endif
+ long pipe_fcntl(struct file *, unsigned int, unsigned long arg);
+ struct pipe_inode_info *get_pipe_info(struct file *file, bool for_splice);
+ 
+diff --git a/include/linux/ramfs.h b/include/linux/ramfs.h
+index 917528d102c4e..d506dc63dd47c 100644
+--- a/include/linux/ramfs.h
++++ b/include/linux/ramfs.h
+@@ -7,6 +7,7 @@
+ struct inode *ramfs_get_inode(struct super_block *sb, const struct inode *dir,
+ 	 umode_t mode, dev_t dev);
+ extern int ramfs_init_fs_context(struct fs_context *fc);
++extern void ramfs_kill_sb(struct super_block *sb);
+ 
+ #ifdef CONFIG_MMU
+ static inline int
+diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
+index ae60f838ebb92..2c634010cc7bd 100644
+--- a/include/linux/sched/signal.h
++++ b/include/linux/sched/signal.h
+@@ -125,7 +125,7 @@ struct signal_struct {
+ #ifdef CONFIG_POSIX_TIMERS
+ 
+ 	/* POSIX.1b Interval Timers */
+-	int			posix_timer_id;
++	unsigned int		next_posix_timer_id;
+ 	struct list_head	posix_timers;
+ 
+ 	/* ITIMER_REAL timer for the process */
+diff --git a/include/linux/serial_8250.h b/include/linux/serial_8250.h
+index 92f3b778d8c20..abb928361270e 100644
+--- a/include/linux/serial_8250.h
++++ b/include/linux/serial_8250.h
+@@ -98,7 +98,6 @@ struct uart_8250_port {
+ 	struct list_head	list;		/* ports on this IRQ */
+ 	u32			capabilities;	/* port capabilities */
+ 	unsigned short		bugs;		/* port bugs */
+-	bool			fifo_bug;	/* min RX trigger if enabled */
+ 	unsigned int		tx_loadsz;	/* transmit fifo load size */
+ 	unsigned char		acr;
+ 	unsigned char		fcr;
+diff --git a/include/linux/tcp.h b/include/linux/tcp.h
+index 6e3340379d85f..11a98144bda0b 100644
+--- a/include/linux/tcp.h
++++ b/include/linux/tcp.h
+@@ -473,7 +473,7 @@ static inline void fastopen_queue_tune(struct sock *sk, int backlog)
+ 	struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue;
+ 	int somaxconn = READ_ONCE(sock_net(sk)->core.sysctl_somaxconn);
+ 
+-	queue->fastopenq.max_qlen = min_t(unsigned int, backlog, somaxconn);
++	WRITE_ONCE(queue->fastopenq.max_qlen, min_t(unsigned int, backlog, somaxconn));
+ }
+ 
+ static inline void tcp_move_syn(struct tcp_sock *tp,
+diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
+index 2fa9b311e5663..460c58fa011ae 100644
+--- a/include/linux/workqueue.h
++++ b/include/linux/workqueue.h
+@@ -73,7 +73,6 @@ enum {
+ 	WORK_OFFQ_FLAG_BASE	= WORK_STRUCT_COLOR_SHIFT,
+ 
+ 	__WORK_OFFQ_CANCELING	= WORK_OFFQ_FLAG_BASE,
+-	WORK_OFFQ_CANCELING	= (1 << __WORK_OFFQ_CANCELING),
+ 
+ 	/*
+ 	 * When a work item is off queue, its high bits point to the last
+@@ -84,12 +83,6 @@ enum {
+ 	WORK_OFFQ_POOL_SHIFT	= WORK_OFFQ_FLAG_BASE + WORK_OFFQ_FLAG_BITS,
+ 	WORK_OFFQ_LEFT		= BITS_PER_LONG - WORK_OFFQ_POOL_SHIFT,
+ 	WORK_OFFQ_POOL_BITS	= WORK_OFFQ_LEFT <= 31 ? WORK_OFFQ_LEFT : 31,
+-	WORK_OFFQ_POOL_NONE	= (1LU << WORK_OFFQ_POOL_BITS) - 1,
+-
+-	/* convenience constants */
+-	WORK_STRUCT_FLAG_MASK	= (1UL << WORK_STRUCT_FLAG_BITS) - 1,
+-	WORK_STRUCT_WQ_DATA_MASK = ~WORK_STRUCT_FLAG_MASK,
+-	WORK_STRUCT_NO_POOL	= (unsigned long)WORK_OFFQ_POOL_NONE << WORK_OFFQ_POOL_SHIFT,
+ 
+ 	/* bit mask for work_busy() return values */
+ 	WORK_BUSY_PENDING	= 1 << 0,
+@@ -99,6 +92,14 @@ enum {
+ 	WORKER_DESC_LEN		= 24,
+ };
+ 
++/* Convenience constants - of type 'unsigned long', not 'enum'! */
++#define WORK_OFFQ_CANCELING	(1ul << __WORK_OFFQ_CANCELING)
++#define WORK_OFFQ_POOL_NONE	((1ul << WORK_OFFQ_POOL_BITS) - 1)
++#define WORK_STRUCT_NO_POOL	(WORK_OFFQ_POOL_NONE << WORK_OFFQ_POOL_SHIFT)
++
++#define WORK_STRUCT_FLAG_MASK    ((1ul << WORK_STRUCT_FLAG_BITS) - 1)
++#define WORK_STRUCT_WQ_DATA_MASK (~WORK_STRUCT_FLAG_MASK)
++
+ struct work_struct {
+ 	atomic_long_t data;
+ 	struct list_head entry;
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 030237f3d82a6..fb3c5f6907506 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -382,7 +382,8 @@ struct nft_set_ops {
+ 	int				(*init)(const struct nft_set *set,
+ 						const struct nft_set_desc *desc,
+ 						const struct nlattr * const nla[]);
+-	void				(*destroy)(const struct nft_set *set);
++	void				(*destroy)(const struct nft_ctx *ctx,
++						   const struct nft_set *set);
+ 	void				(*gc_init)(const struct nft_set *set);
+ 
+ 	unsigned int			elemsize;
+@@ -686,6 +687,8 @@ void *nft_set_elem_init(const struct nft_set *set,
+ 			u64 timeout, u64 expiration, gfp_t gfp);
+ void nft_set_elem_destroy(const struct nft_set *set, void *elem,
+ 			  bool destroy_expr);
++void nf_tables_set_elem_destroy(const struct nft_ctx *ctx,
++				const struct nft_set *set, void *elem);
+ 
+ /**
+  *	struct nft_set_gc_batch_head - nf_tables set garbage collection batch
+@@ -777,6 +780,7 @@ struct nft_expr_type {
+ 
+ enum nft_trans_phase {
+ 	NFT_TRANS_PREPARE,
++	NFT_TRANS_PREPARE_ERROR,
+ 	NFT_TRANS_ABORT,
+ 	NFT_TRANS_COMMIT,
+ 	NFT_TRANS_RELEASE
+@@ -907,7 +911,10 @@ static inline struct nft_userdata *nft_userdata(const struct nft_rule *rule)
+ 	return (void *)&rule->data[rule->dlen];
+ }
+ 
+-void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *rule);
++void nft_rule_expr_activate(const struct nft_ctx *ctx, struct nft_rule *rule);
++void nft_rule_expr_deactivate(const struct nft_ctx *ctx, struct nft_rule *rule,
++			      enum nft_trans_phase phase);
++void nf_tables_rule_destroy(const struct nft_ctx *ctx, struct nft_rule *rule);
+ 
+ static inline void nft_set_elem_update_expr(const struct nft_set_ext *ext,
+ 					    struct nft_regs *regs,
+@@ -966,6 +973,8 @@ struct nft_chain {
+ };
+ 
+ int nft_chain_validate(const struct nft_ctx *ctx, const struct nft_chain *chain);
++int nf_tables_bind_chain(const struct nft_ctx *ctx, struct nft_chain *chain);
++void nf_tables_unbind_chain(const struct nft_ctx *ctx, struct nft_chain *chain);
+ 
+ enum nft_chain_types {
+ 	NFT_CHAIN_T_DEFAULT = 0,
+@@ -1002,11 +1011,17 @@ int nft_chain_validate_dependency(const struct nft_chain *chain,
+ int nft_chain_validate_hooks(const struct nft_chain *chain,
+                              unsigned int hook_flags);
+ 
++static inline bool nft_chain_binding(const struct nft_chain *chain)
++{
++	return chain->flags & NFT_CHAIN_BINDING;
++}
++
+ static inline bool nft_chain_is_bound(struct nft_chain *chain)
+ {
+ 	return (chain->flags & NFT_CHAIN_BINDING) && chain->bound;
+ }
+ 
++int nft_chain_add(struct nft_table *table, struct nft_chain *chain);
+ void nft_chain_del(struct nft_chain *chain);
+ void nf_tables_chain_destroy(struct nft_ctx *ctx);
+ 
+@@ -1414,6 +1429,7 @@ static inline void nft_set_elem_clear_busy(struct nft_set_ext *ext)
+  *	struct nft_trans - nf_tables object update in transaction
+  *
+  *	@list: used internally
++ *	@binding_list: list of objects with possible bindings
+  *	@msg_type: message type
+  *	@put_net: ctx->net needs to be put
+  *	@ctx: transaction context
+@@ -1421,6 +1437,7 @@ static inline void nft_set_elem_clear_busy(struct nft_set_ext *ext)
+  */
+ struct nft_trans {
+ 	struct list_head		list;
++	struct list_head		binding_list;
+ 	int				msg_type;
+ 	bool				put_net;
+ 	struct nft_ctx			ctx;
+@@ -1431,6 +1448,7 @@ struct nft_trans_rule {
+ 	struct nft_rule			*rule;
+ 	struct nft_flow_rule		*flow;
+ 	u32				rule_id;
++	bool				bound;
+ };
+ 
+ #define nft_trans_rule(trans)	\
+@@ -1439,6 +1457,8 @@ struct nft_trans_rule {
+ 	(((struct nft_trans_rule *)trans->data)->flow)
+ #define nft_trans_rule_id(trans)	\
+ 	(((struct nft_trans_rule *)trans->data)->rule_id)
++#define nft_trans_rule_bound(trans)	\
++	(((struct nft_trans_rule *)trans->data)->bound)
+ 
+ struct nft_trans_set {
+ 	struct nft_set			*set;
+@@ -1454,13 +1474,17 @@ struct nft_trans_set {
+ 	(((struct nft_trans_set *)trans->data)->bound)
+ 
+ struct nft_trans_chain {
++	struct nft_chain		*chain;
+ 	bool				update;
+ 	char				*name;
+ 	struct nft_stats __percpu	*stats;
+ 	u8				policy;
++	bool				bound;
+ 	u32				chain_id;
+ };
+ 
++#define nft_trans_chain(trans)	\
++	(((struct nft_trans_chain *)trans->data)->chain)
+ #define nft_trans_chain_update(trans)	\
+ 	(((struct nft_trans_chain *)trans->data)->update)
+ #define nft_trans_chain_name(trans)	\
+@@ -1469,6 +1493,8 @@ struct nft_trans_chain {
+ 	(((struct nft_trans_chain *)trans->data)->stats)
+ #define nft_trans_chain_policy(trans)	\
+ 	(((struct nft_trans_chain *)trans->data)->policy)
++#define nft_trans_chain_bound(trans)	\
++	(((struct nft_trans_chain *)trans->data)->bound)
+ #define nft_trans_chain_id(trans)	\
+ 	(((struct nft_trans_chain *)trans->data)->chain_id)
+ 
+@@ -1535,4 +1561,15 @@ void nf_tables_trans_destroy_flush_work(void);
+ int nf_msecs_to_jiffies64(const struct nlattr *nla, u64 *result);
+ __be64 nf_jiffies64_to_msecs(u64 input);
+ 
++struct nftables_pernet {
++	struct list_head	tables;
++	struct list_head	commit_list;
++	struct list_head	binding_list;
++	struct list_head	module_list;
++	struct list_head	notify_list;
++	struct mutex		commit_mutex;
++	unsigned int		base_seq;
++	u8			validate_state;
++};
++
+ #endif /* _NET_NF_TABLES_H */
+diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
+index 4a4a5270ff6f2..9b0d8649ae5b8 100644
+--- a/include/net/netns/ipv4.h
++++ b/include/net/netns/ipv4.h
+@@ -131,6 +131,7 @@ struct netns_ipv4 {
+ 	u8 sysctl_tcp_syn_retries;
+ 	u8 sysctl_tcp_synack_retries;
+ 	u8 sysctl_tcp_syncookies;
++	u8 sysctl_tcp_migrate_req;
+ 	int sysctl_tcp_reordering;
+ 	u8 sysctl_tcp_retries1;
+ 	u8 sysctl_tcp_retries2;
+diff --git a/include/net/netns/nftables.h b/include/net/netns/nftables.h
+index 6c0806bd8d1e6..8c77832d02404 100644
+--- a/include/net/netns/nftables.h
++++ b/include/net/netns/nftables.h
+@@ -5,14 +5,7 @@
+ #include <linux/list.h>
+ 
+ struct netns_nftables {
+-	struct list_head	tables;
+-	struct list_head	commit_list;
+-	struct list_head	module_list;
+-	struct list_head	notify_list;
+-	struct mutex		commit_mutex;
+-	unsigned int		base_seq;
+ 	u8			gencursor;
+-	u8			validate_state;
+ };
+ 
+ #endif
+diff --git a/include/net/nfc/nfc.h b/include/net/nfc/nfc.h
+index 2cd3a261bcbcf..32890e43f06cc 100644
+--- a/include/net/nfc/nfc.h
++++ b/include/net/nfc/nfc.h
+@@ -266,7 +266,7 @@ struct sk_buff *nfc_alloc_send_skb(struct nfc_dev *dev, struct sock *sk,
+ struct sk_buff *nfc_alloc_recv_skb(unsigned int size, gfp_t gfp);
+ 
+ int nfc_set_remote_general_bytes(struct nfc_dev *dev,
+-				 u8 *gt, u8 gt_len);
++				 const u8 *gt, u8 gt_len);
+ u8 *nfc_get_local_general_bytes(struct nfc_dev *dev, size_t *gb_len);
+ 
+ int nfc_fw_download_done(struct nfc_dev *dev, const char *firmware_name,
+@@ -280,7 +280,7 @@ int nfc_dep_link_is_up(struct nfc_dev *dev, u32 target_idx,
+ 		       u8 comm_mode, u8 rf_mode);
+ 
+ int nfc_tm_activated(struct nfc_dev *dev, u32 protocol, u8 comm_mode,
+-		     u8 *gb, size_t gb_len);
++		     const u8 *gb, size_t gb_len);
+ int nfc_tm_deactivated(struct nfc_dev *dev);
+ int nfc_tm_data_received(struct nfc_dev *dev, struct sk_buff *skb);
+ 
+diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
+index ba781e0aaf566..e186b2bd8c860 100644
+--- a/include/net/pkt_sched.h
++++ b/include/net/pkt_sched.h
+@@ -136,7 +136,7 @@ extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
+  */
+ static inline unsigned int psched_mtu(const struct net_device *dev)
+ {
+-	return dev->mtu + dev->hard_header_len;
++	return READ_ONCE(dev->mtu) + dev->hard_header_len;
+ }
+ 
+ static inline struct net *qdisc_net(struct Qdisc *q)
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 51b499d745499..1fb5c535537c1 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1207,6 +1207,8 @@ struct proto {
+ 
+ 	int			(*backlog_rcv) (struct sock *sk,
+ 						struct sk_buff *skb);
++	bool			(*bpf_bypass_getsockopt)(int level,
++							 int optname);
+ 
+ 	void		(*release_cb)(struct sock *sk);
+ 
+@@ -1932,6 +1934,7 @@ static inline void sock_graft(struct sock *sk, struct socket *parent)
+ }
+ 
+ kuid_t sock_i_uid(struct sock *sk);
++unsigned long __sock_i_ino(struct sock *sk);
+ unsigned long sock_i_ino(struct sock *sk);
+ 
+ static inline kuid_t sock_net_uid(const struct net *net, const struct sock *sk)
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index d213b86a48227..dcca41f3a2240 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -389,6 +389,7 @@ __poll_t tcp_poll(struct file *file, struct socket *sock,
+ 		      struct poll_table_struct *wait);
+ int tcp_getsockopt(struct sock *sk, int level, int optname,
+ 		   char __user *optval, int __user *optlen);
++bool tcp_bpf_bypass_getsockopt(int level, int optname);
+ int tcp_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
+ 		   unsigned int optlen);
+ void tcp_set_keepalive(struct sock *sk, int val);
+@@ -1450,25 +1451,38 @@ void tcp_leave_memory_pressure(struct sock *sk);
+ static inline int keepalive_intvl_when(const struct tcp_sock *tp)
+ {
+ 	struct net *net = sock_net((struct sock *)tp);
++	int val;
+ 
+-	return tp->keepalive_intvl ? :
+-		READ_ONCE(net->ipv4.sysctl_tcp_keepalive_intvl);
++	/* Paired with WRITE_ONCE() in tcp_sock_set_keepintvl()
++	 * and do_tcp_setsockopt().
++	 */
++	val = READ_ONCE(tp->keepalive_intvl);
++
++	return val ? : READ_ONCE(net->ipv4.sysctl_tcp_keepalive_intvl);
+ }
+ 
+ static inline int keepalive_time_when(const struct tcp_sock *tp)
+ {
+ 	struct net *net = sock_net((struct sock *)tp);
++	int val;
+ 
+-	return tp->keepalive_time ? :
+-		READ_ONCE(net->ipv4.sysctl_tcp_keepalive_time);
++	/* Paired with WRITE_ONCE() in tcp_sock_set_keepidle_locked() */
++	val = READ_ONCE(tp->keepalive_time);
++
++	return val ? : READ_ONCE(net->ipv4.sysctl_tcp_keepalive_time);
+ }
+ 
+ static inline int keepalive_probes(const struct tcp_sock *tp)
+ {
+ 	struct net *net = sock_net((struct sock *)tp);
++	int val;
+ 
+-	return tp->keepalive_probes ? :
+-		READ_ONCE(net->ipv4.sysctl_tcp_keepalive_probes);
++	/* Paired with WRITE_ONCE() in tcp_sock_set_keepcnt()
++	 * and do_tcp_setsockopt().
++	 */
++	val = READ_ONCE(tp->keepalive_probes);
++
++	return val ? : READ_ONCE(net->ipv4.sysctl_tcp_keepalive_probes);
+ }
+ 
+ static inline u32 keepalive_time_elapsed(const struct tcp_sock *tp)
+@@ -1977,7 +1991,11 @@ void __tcp_v4_send_check(struct sk_buff *skb, __be32 saddr, __be32 daddr);
+ static inline u32 tcp_notsent_lowat(const struct tcp_sock *tp)
+ {
+ 	struct net *net = sock_net((struct sock *)tp);
+-	return tp->notsent_lowat ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat);
++	u32 val;
++
++	val = READ_ONCE(tp->notsent_lowat);
++
++	return val ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat);
+ }
+ 
+ /* @wake is one when sk_stream_write_space() calls us.
+diff --git a/include/trace/events/timer.h b/include/trace/events/timer.h
+index 40e9b5a12732d..5c540ccabcac9 100644
+--- a/include/trace/events/timer.h
++++ b/include/trace/events/timer.h
+@@ -156,7 +156,11 @@ DEFINE_EVENT(timer_class, timer_cancel,
+ 		{ HRTIMER_MODE_ABS_SOFT,	"ABS|SOFT"	},	\
+ 		{ HRTIMER_MODE_REL_SOFT,	"REL|SOFT"	},	\
+ 		{ HRTIMER_MODE_ABS_PINNED_SOFT,	"ABS|PINNED|SOFT" },	\
+-		{ HRTIMER_MODE_REL_PINNED_SOFT,	"REL|PINNED|SOFT" })
++		{ HRTIMER_MODE_REL_PINNED_SOFT,	"REL|PINNED|SOFT" },	\
++		{ HRTIMER_MODE_ABS_HARD,	"ABS|HARD" },		\
++		{ HRTIMER_MODE_REL_HARD,	"REL|HARD" },		\
++		{ HRTIMER_MODE_ABS_PINNED_HARD, "ABS|PINNED|HARD" },	\
++		{ HRTIMER_MODE_REL_PINNED_HARD,	"REL|PINNED|HARD" })
+ 
+ /**
+  * hrtimer_init - called when the hrtimer is initialized
+diff --git a/include/uapi/linux/affs_hardblocks.h b/include/uapi/linux/affs_hardblocks.h
+index 5e2fb8481252a..a5aff2eb5f708 100644
+--- a/include/uapi/linux/affs_hardblocks.h
++++ b/include/uapi/linux/affs_hardblocks.h
+@@ -7,42 +7,42 @@
+ /* Just the needed definitions for the RDB of an Amiga HD. */
+ 
+ struct RigidDiskBlock {
+-	__u32	rdb_ID;
++	__be32	rdb_ID;
+ 	__be32	rdb_SummedLongs;
+-	__s32	rdb_ChkSum;
+-	__u32	rdb_HostID;
++	__be32	rdb_ChkSum;
++	__be32	rdb_HostID;
+ 	__be32	rdb_BlockBytes;
+-	__u32	rdb_Flags;
+-	__u32	rdb_BadBlockList;
++	__be32	rdb_Flags;
++	__be32	rdb_BadBlockList;
+ 	__be32	rdb_PartitionList;
+-	__u32	rdb_FileSysHeaderList;
+-	__u32	rdb_DriveInit;
+-	__u32	rdb_Reserved1[6];
+-	__u32	rdb_Cylinders;
+-	__u32	rdb_Sectors;
+-	__u32	rdb_Heads;
+-	__u32	rdb_Interleave;
+-	__u32	rdb_Park;
+-	__u32	rdb_Reserved2[3];
+-	__u32	rdb_WritePreComp;
+-	__u32	rdb_ReducedWrite;
+-	__u32	rdb_StepRate;
+-	__u32	rdb_Reserved3[5];
+-	__u32	rdb_RDBBlocksLo;
+-	__u32	rdb_RDBBlocksHi;
+-	__u32	rdb_LoCylinder;
+-	__u32	rdb_HiCylinder;
+-	__u32	rdb_CylBlocks;
+-	__u32	rdb_AutoParkSeconds;
+-	__u32	rdb_HighRDSKBlock;
+-	__u32	rdb_Reserved4;
++	__be32	rdb_FileSysHeaderList;
++	__be32	rdb_DriveInit;
++	__be32	rdb_Reserved1[6];
++	__be32	rdb_Cylinders;
++	__be32	rdb_Sectors;
++	__be32	rdb_Heads;
++	__be32	rdb_Interleave;
++	__be32	rdb_Park;
++	__be32	rdb_Reserved2[3];
++	__be32	rdb_WritePreComp;
++	__be32	rdb_ReducedWrite;
++	__be32	rdb_StepRate;
++	__be32	rdb_Reserved3[5];
++	__be32	rdb_RDBBlocksLo;
++	__be32	rdb_RDBBlocksHi;
++	__be32	rdb_LoCylinder;
++	__be32	rdb_HiCylinder;
++	__be32	rdb_CylBlocks;
++	__be32	rdb_AutoParkSeconds;
++	__be32	rdb_HighRDSKBlock;
++	__be32	rdb_Reserved4;
+ 	char	rdb_DiskVendor[8];
+ 	char	rdb_DiskProduct[16];
+ 	char	rdb_DiskRevision[4];
+ 	char	rdb_ControllerVendor[8];
+ 	char	rdb_ControllerProduct[16];
+ 	char	rdb_ControllerRevision[4];
+-	__u32	rdb_Reserved5[10];
++	__be32	rdb_Reserved5[10];
+ };
+ 
+ #define	IDNAME_RIGIDDISK	0x5244534B	/* "RDSK" */
+@@ -50,16 +50,16 @@ struct RigidDiskBlock {
+ struct PartitionBlock {
+ 	__be32	pb_ID;
+ 	__be32	pb_SummedLongs;
+-	__s32	pb_ChkSum;
+-	__u32	pb_HostID;
++	__be32	pb_ChkSum;
++	__be32	pb_HostID;
+ 	__be32	pb_Next;
+-	__u32	pb_Flags;
+-	__u32	pb_Reserved1[2];
+-	__u32	pb_DevFlags;
++	__be32	pb_Flags;
++	__be32	pb_Reserved1[2];
++	__be32	pb_DevFlags;
+ 	__u8	pb_DriveName[32];
+-	__u32	pb_Reserved2[15];
++	__be32	pb_Reserved2[15];
+ 	__be32	pb_Environment[17];
+-	__u32	pb_EReserved[15];
++	__be32	pb_EReserved[15];
+ };
+ 
+ #define	IDNAME_PARTITION	0x50415254	/* "PART" */
+diff --git a/include/uapi/linux/auto_dev-ioctl.h b/include/uapi/linux/auto_dev-ioctl.h
+index 62e625356dc81..08be539605fca 100644
+--- a/include/uapi/linux/auto_dev-ioctl.h
++++ b/include/uapi/linux/auto_dev-ioctl.h
+@@ -109,7 +109,7 @@ struct autofs_dev_ioctl {
+ 		struct args_ismountpoint	ismountpoint;
+ 	};
+ 
+-	char path[0];
++	char path[];
+ };
+ 
+ static inline void init_autofs_dev_ioctl(struct autofs_dev_ioctl *in)
+diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
+index b28817c59fdf2..55b8c4b824797 100644
+--- a/include/uapi/linux/videodev2.h
++++ b/include/uapi/linux/videodev2.h
+@@ -1644,7 +1644,7 @@ struct v4l2_input {
+ 	__u8	     name[32];		/*  Label */
+ 	__u32	     type;		/*  Type of input */
+ 	__u32	     audioset;		/*  Associated audios (bitfield) */
+-	__u32        tuner;             /*  enum v4l2_tuner_type */
++	__u32        tuner;             /*  Tuner index */
+ 	v4l2_std_id  std;
+ 	__u32	     status;
+ 	__u32	     capabilities;
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index f169b46457693..51e6ebe72caf9 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -1521,6 +1521,8 @@ static void io_kill_timeout(struct io_kiocb *req, int status)
+ 
+ static void io_queue_deferred(struct io_ring_ctx *ctx)
+ {
++	lockdep_assert_held(&ctx->completion_lock);
++
+ 	while (!list_empty(&ctx->defer_list)) {
+ 		struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
+ 						struct io_defer_entry, list);
+@@ -1572,14 +1574,24 @@ static void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
+ 		io_queue_deferred(ctx);
+ }
+ 
+-static inline void io_commit_cqring(struct io_ring_ctx *ctx)
++static inline bool io_commit_needs_flush(struct io_ring_ctx *ctx)
++{
++	return ctx->off_timeout_used || ctx->drain_active;
++}
++
++static inline void __io_commit_cqring(struct io_ring_ctx *ctx)
+ {
+-	if (unlikely(ctx->off_timeout_used || ctx->drain_active))
+-		__io_commit_cqring_flush(ctx);
+ 	/* order cqe stores with ring update */
+ 	smp_store_release(&ctx->rings->cq.tail, ctx->cached_cq_tail);
+ }
+ 
++static inline void io_commit_cqring(struct io_ring_ctx *ctx)
++{
++	if (unlikely(io_commit_needs_flush(ctx)))
++		__io_commit_cqring_flush(ctx);
++	__io_commit_cqring(ctx);
++}
++
+ static inline bool io_sqring_full(struct io_ring_ctx *ctx)
+ {
+ 	struct io_rings *r = ctx->rings;
+@@ -2202,9 +2214,12 @@ static void tctx_task_work(struct callback_head *cb)
+ 			}
+ 			req->io_task_work.func(req, &locked);
+ 			node = next;
++			if (unlikely(need_resched())) {
++				ctx_flush_and_put(ctx, &locked);
++				ctx = NULL;
++				cond_resched();
++			}
+ 		} while (node);
+-
+-		cond_resched();
+ 	}
+ 
+ 	ctx_flush_and_put(ctx, &locked);
+@@ -2518,7 +2533,12 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ 			io_req_free_batch(&rb, req, &ctx->submit_state);
+ 	}
+ 
+-	io_commit_cqring(ctx);
++	if (io_commit_needs_flush(ctx)) {
++		spin_lock(&ctx->completion_lock);
++		__io_commit_cqring_flush(ctx);
++		spin_unlock(&ctx->completion_lock);
++	}
++	__io_commit_cqring(ctx);
+ 	io_cqring_ev_posted_iopoll(ctx);
+ 	io_req_free_batch_finish(ctx, &rb);
+ }
+@@ -7608,7 +7628,7 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 					  struct io_wait_queue *iowq,
+ 					  ktime_t *timeout)
+ {
+-	int ret;
++	int token, ret;
+ 
+ 	/* make sure we run task_work before checking for signals */
+ 	ret = io_run_task_work_sig();
+@@ -7618,9 +7638,17 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 	if (test_bit(0, &ctx->check_cq_overflow))
+ 		return 1;
+ 
++	/*
++	 * Use io_schedule_prepare/finish, so cpufreq can take into account
++	 * that the task is waiting for IO - turns out to be important for low
++	 * QD IO.
++	 */
++	token = io_schedule_prepare();
++	ret = 1;
+ 	if (!schedule_hrtimeout(timeout, HRTIMER_MODE_ABS))
+-		return -ETIME;
+-	return 1;
++		ret = -ETIME;
++	io_schedule_finish(token);
++	return ret;
+ }
+ 
+ /*
+@@ -9526,7 +9554,18 @@ static void io_ring_exit_work(struct work_struct *work)
+ 			/* there is little hope left, don't run it too often */
+ 			interval = HZ * 60;
+ 		}
+-	} while (!wait_for_completion_timeout(&ctx->ref_comp, interval));
++		/*
++		 * This is really an uninterruptible wait, as it has to be
++		 * complete. But it's also run from a kworker, which doesn't
++		 * take signals, so it's fine to make it interruptible. This
++		 * avoids scenarios where we knowingly can wait much longer
++		 * on completions, for example if someone does a SIGSTOP on
++		 * a task that needs to finish task_work to make this loop
++		 * complete. That's a synthetic situation that should not
++		 * cause a stuck task backtrace, and hence a potential panic
++		 * on stuck tasks if that is enabled.
++		 */
++	} while (!wait_for_completion_interruptible_timeout(&ctx->ref_comp, interval));
+ 
+ 	init_completion(&exit.completion);
+ 	init_task_work(&exit.task_work, io_tctx_exit_cb);
+@@ -9551,7 +9590,12 @@ static void io_ring_exit_work(struct work_struct *work)
+ 		wake_up_process(node->task);
+ 
+ 		mutex_unlock(&ctx->uring_lock);
+-		wait_for_completion(&exit.completion);
++		/*
++		 * See comment above for
++		 * wait_for_completion_interruptible_timeout() on why this
++		 * wait is marked as interruptible.
++		 */
++		wait_for_completion_interruptible(&exit.completion);
+ 		mutex_lock(&ctx->uring_lock);
+ 	}
+ 	mutex_unlock(&ctx->uring_lock);
+diff --git a/kernel/bpf/bpf_lru_list.c b/kernel/bpf/bpf_lru_list.c
+index d99e89f113c43..3dabdd137d102 100644
+--- a/kernel/bpf/bpf_lru_list.c
++++ b/kernel/bpf/bpf_lru_list.c
+@@ -41,7 +41,12 @@ static struct list_head *local_pending_list(struct bpf_lru_locallist *loc_l)
+ /* bpf_lru_node helpers */
+ static bool bpf_lru_node_is_ref(const struct bpf_lru_node *node)
+ {
+-	return node->ref;
++	return READ_ONCE(node->ref);
++}
++
++static void bpf_lru_node_clear_ref(struct bpf_lru_node *node)
++{
++	WRITE_ONCE(node->ref, 0);
+ }
+ 
+ static void bpf_lru_list_count_inc(struct bpf_lru_list *l,
+@@ -89,7 +94,7 @@ static void __bpf_lru_node_move_in(struct bpf_lru_list *l,
+ 
+ 	bpf_lru_list_count_inc(l, tgt_type);
+ 	node->type = tgt_type;
+-	node->ref = 0;
++	bpf_lru_node_clear_ref(node);
+ 	list_move(&node->list, &l->lists[tgt_type]);
+ }
+ 
+@@ -110,7 +115,7 @@ static void __bpf_lru_node_move(struct bpf_lru_list *l,
+ 		bpf_lru_list_count_inc(l, tgt_type);
+ 		node->type = tgt_type;
+ 	}
+-	node->ref = 0;
++	bpf_lru_node_clear_ref(node);
+ 
+ 	/* If the moving node is the next_inactive_rotation candidate,
+ 	 * move the next_inactive_rotation pointer also.
+@@ -353,7 +358,7 @@ static void __local_list_add_pending(struct bpf_lru *lru,
+ 	*(u32 *)((void *)node + lru->hash_offset) = hash;
+ 	node->cpu = cpu;
+ 	node->type = BPF_LRU_LOCAL_LIST_T_PENDING;
+-	node->ref = 0;
++	bpf_lru_node_clear_ref(node);
+ 	list_add(&node->list, local_pending_list(loc_l));
+ }
+ 
+@@ -419,7 +424,7 @@ static struct bpf_lru_node *bpf_percpu_lru_pop_free(struct bpf_lru *lru,
+ 	if (!list_empty(free_list)) {
+ 		node = list_first_entry(free_list, struct bpf_lru_node, list);
+ 		*(u32 *)((void *)node + lru->hash_offset) = hash;
+-		node->ref = 0;
++		bpf_lru_node_clear_ref(node);
+ 		__bpf_lru_node_move(l, node, BPF_LRU_LIST_T_INACTIVE);
+ 	}
+ 
+@@ -522,7 +527,7 @@ static void bpf_common_lru_push_free(struct bpf_lru *lru,
+ 		}
+ 
+ 		node->type = BPF_LRU_LOCAL_LIST_T_FREE;
+-		node->ref = 0;
++		bpf_lru_node_clear_ref(node);
+ 		list_move(&node->list, local_free_list(loc_l));
+ 
+ 		raw_spin_unlock_irqrestore(&loc_l->lock, flags);
+@@ -568,7 +573,7 @@ static void bpf_common_lru_populate(struct bpf_lru *lru, void *buf,
+ 
+ 		node = (struct bpf_lru_node *)(buf + node_offset);
+ 		node->type = BPF_LRU_LIST_T_FREE;
+-		node->ref = 0;
++		bpf_lru_node_clear_ref(node);
+ 		list_add(&node->list, &l->lists[BPF_LRU_LIST_T_FREE]);
+ 		buf += elem_size;
+ 	}
+@@ -594,7 +599,7 @@ again:
+ 		node = (struct bpf_lru_node *)(buf + node_offset);
+ 		node->cpu = cpu;
+ 		node->type = BPF_LRU_LIST_T_FREE;
+-		node->ref = 0;
++		bpf_lru_node_clear_ref(node);
+ 		list_add(&node->list, &l->lists[BPF_LRU_LIST_T_FREE]);
+ 		i++;
+ 		buf += elem_size;
+diff --git a/kernel/bpf/bpf_lru_list.h b/kernel/bpf/bpf_lru_list.h
+index 6b12f06ee18c3..9c12ee453c616 100644
+--- a/kernel/bpf/bpf_lru_list.h
++++ b/kernel/bpf/bpf_lru_list.h
+@@ -63,11 +63,8 @@ struct bpf_lru {
+ 
+ static inline void bpf_lru_node_set_ref(struct bpf_lru_node *node)
+ {
+-	/* ref is an approximation on access frequency.  It does not
+-	 * have to be very accurate.  Hence, no protection is used.
+-	 */
+-	if (!node->ref)
+-		node->ref = 1;
++	if (!READ_ONCE(node->ref))
++		WRITE_ONCE(node->ref, 1);
+ }
+ 
+ int bpf_lru_init(struct bpf_lru *lru, bool percpu, u32 hash_offset,
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index d3593a520bb72..85927c2aa3433 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -1546,6 +1546,52 @@ out:
+ 	sockopt_free_buf(&ctx);
+ 	return ret;
+ }
++
++int __cgroup_bpf_run_filter_getsockopt_kern(struct sock *sk, int level,
++					    int optname, void *optval,
++					    int *optlen, int retval)
++{
++	struct cgroup *cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
++	struct bpf_sockopt_kern ctx = {
++		.sk = sk,
++		.level = level,
++		.optname = optname,
++		.retval = retval,
++		.optlen = *optlen,
++		.optval = optval,
++		.optval_end = optval + *optlen,
++	};
++	int ret;
++
++	/* Note that __cgroup_bpf_run_filter_getsockopt doesn't copy
++	 * user data back into BPF buffer when reval != 0. This is
++	 * done as an optimization to avoid extra copy, assuming
++	 * kernel won't populate the data in case of an error.
++	 * Here we always pass the data and memset() should
++	 * be called if that data shouldn't be "exported".
++	 */
++
++	ret = BPF_PROG_RUN_ARRAY(cgrp->bpf.effective[BPF_CGROUP_GETSOCKOPT],
++				 &ctx, BPF_PROG_RUN);
++	if (!ret)
++		return -EPERM;
++
++	if (ctx.optlen > *optlen)
++		return -EFAULT;
++
++	/* BPF programs only allowed to set retval to 0, not some
++	 * arbitrary value.
++	 */
++	if (ctx.retval != 0 && ctx.retval != retval)
++		return -EFAULT;
++
++	/* BPF programs can shrink the buffer, export the modifications.
++	 */
++	if (ctx.optlen != 0)
++		*optlen = ctx.optlen;
++
++	return ctx.retval;
++}
+ #endif
+ 
+ static ssize_t sysctl_cpy_dir(const struct ctl_dir *dir, char **bufp,
+diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
+index 762df6108c589..473dc04591b8e 100644
+--- a/kernel/kcsan/core.c
++++ b/kernel/kcsan/core.c
+@@ -1035,7 +1035,9 @@ EXPORT_SYMBOL(__tsan_init);
+ DEFINE_TSAN_ATOMIC_OPS(8);
+ DEFINE_TSAN_ATOMIC_OPS(16);
+ DEFINE_TSAN_ATOMIC_OPS(32);
++#ifdef CONFIG_64BIT
+ DEFINE_TSAN_ATOMIC_OPS(64);
++#endif
+ 
+ void __tsan_atomic_thread_fence(int memorder);
+ void __tsan_atomic_thread_fence(int memorder)
+diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
+index 7a8104d489971..3a37fc62dc95f 100644
+--- a/kernel/kexec_core.c
++++ b/kernel/kexec_core.c
+@@ -1029,6 +1029,7 @@ int crash_shrink_memory(unsigned long new_size)
+ 	start = crashk_res.start;
+ 	end = crashk_res.end;
+ 	old_size = (end == 0) ? 0 : end - start + 1;
++	new_size = roundup(new_size, KEXEC_CRASH_MEM_ALIGN);
+ 	if (new_size >= old_size) {
+ 		ret = (new_size == old_size) ? 0 : -EINVAL;
+ 		goto unlock;
+@@ -1040,9 +1041,7 @@ int crash_shrink_memory(unsigned long new_size)
+ 		goto unlock;
+ 	}
+ 
+-	start = roundup(start, KEXEC_CRASH_MEM_ALIGN);
+-	end = roundup(start + new_size, KEXEC_CRASH_MEM_ALIGN);
+-
++	end = start + new_size;
+ 	crash_free_reserved_phys_range(end, crashk_res.end);
+ 
+ 	if ((start == end) && (crashk_res.parent != NULL))
+diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
+index 2819b95479af9..6c05365ed80fc 100644
+--- a/kernel/rcu/rcuscale.c
++++ b/kernel/rcu/rcuscale.c
+@@ -49,8 +49,8 @@ MODULE_AUTHOR("Paul E. McKenney <paulmck@linux.ibm.com>");
+ 	pr_alert("%s" SCALE_FLAG " %s\n", scale_type, s)
+ #define VERBOSE_SCALEOUT_STRING(s) \
+ 	do { if (verbose) pr_alert("%s" SCALE_FLAG " %s\n", scale_type, s); } while (0)
+-#define VERBOSE_SCALEOUT_ERRSTRING(s) \
+-	do { if (verbose) pr_alert("%s" SCALE_FLAG "!!! %s\n", scale_type, s); } while (0)
++#define SCALEOUT_ERRSTRING(s) \
++	pr_alert("%s" SCALE_FLAG "!!! %s\n", scale_type, s)
+ 
+ /*
+  * The intended use cases for the nreaders and nwriters module parameters
+@@ -457,7 +457,7 @@ retry:
+ 	if (gp_async) {
+ 		cur_ops->gp_barrier();
+ 	}
+-	writer_n_durations[me] = i_max;
++	writer_n_durations[me] = i_max + 1;
+ 	torture_kthread_stopping("rcu_scale_writer");
+ 	return 0;
+ }
+@@ -470,89 +470,6 @@ rcu_scale_print_module_parms(struct rcu_scale_ops *cur_ops, const char *tag)
+ 		 scale_type, tag, nrealreaders, nrealwriters, verbose, shutdown);
+ }
+ 
+-static void
+-rcu_scale_cleanup(void)
+-{
+-	int i;
+-	int j;
+-	int ngps = 0;
+-	u64 *wdp;
+-	u64 *wdpp;
+-
+-	/*
+-	 * Would like warning at start, but everything is expedited
+-	 * during the mid-boot phase, so have to wait till the end.
+-	 */
+-	if (rcu_gp_is_expedited() && !rcu_gp_is_normal() && !gp_exp)
+-		VERBOSE_SCALEOUT_ERRSTRING("All grace periods expedited, no normal ones to measure!");
+-	if (rcu_gp_is_normal() && gp_exp)
+-		VERBOSE_SCALEOUT_ERRSTRING("All grace periods normal, no expedited ones to measure!");
+-	if (gp_exp && gp_async)
+-		VERBOSE_SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!");
+-
+-	if (torture_cleanup_begin())
+-		return;
+-	if (!cur_ops) {
+-		torture_cleanup_end();
+-		return;
+-	}
+-
+-	if (reader_tasks) {
+-		for (i = 0; i < nrealreaders; i++)
+-			torture_stop_kthread(rcu_scale_reader,
+-					     reader_tasks[i]);
+-		kfree(reader_tasks);
+-	}
+-
+-	if (writer_tasks) {
+-		for (i = 0; i < nrealwriters; i++) {
+-			torture_stop_kthread(rcu_scale_writer,
+-					     writer_tasks[i]);
+-			if (!writer_n_durations)
+-				continue;
+-			j = writer_n_durations[i];
+-			pr_alert("%s%s writer %d gps: %d\n",
+-				 scale_type, SCALE_FLAG, i, j);
+-			ngps += j;
+-		}
+-		pr_alert("%s%s start: %llu end: %llu duration: %llu gps: %d batches: %ld\n",
+-			 scale_type, SCALE_FLAG,
+-			 t_rcu_scale_writer_started, t_rcu_scale_writer_finished,
+-			 t_rcu_scale_writer_finished -
+-			 t_rcu_scale_writer_started,
+-			 ngps,
+-			 rcuscale_seq_diff(b_rcu_gp_test_finished,
+-					   b_rcu_gp_test_started));
+-		for (i = 0; i < nrealwriters; i++) {
+-			if (!writer_durations)
+-				break;
+-			if (!writer_n_durations)
+-				continue;
+-			wdpp = writer_durations[i];
+-			if (!wdpp)
+-				continue;
+-			for (j = 0; j <= writer_n_durations[i]; j++) {
+-				wdp = &wdpp[j];
+-				pr_alert("%s%s %4d writer-duration: %5d %llu\n",
+-					scale_type, SCALE_FLAG,
+-					i, j, *wdp);
+-				if (j % 100 == 0)
+-					schedule_timeout_uninterruptible(1);
+-			}
+-			kfree(writer_durations[i]);
+-		}
+-		kfree(writer_tasks);
+-		kfree(writer_durations);
+-		kfree(writer_n_durations);
+-	}
+-
+-	/* Do torture-type-specific cleanup operations.  */
+-	if (cur_ops->cleanup != NULL)
+-		cur_ops->cleanup();
+-
+-	torture_cleanup_end();
+-}
+-
+ /*
+  * Return the number if non-negative.  If -1, the number of CPUs.
+  * If less than -1, that much less than the number of CPUs, but
+@@ -572,21 +489,6 @@ static int compute_real(int n)
+ 	return nr;
+ }
+ 
+-/*
+- * RCU scalability shutdown kthread.  Just waits to be awakened, then shuts
+- * down system.
+- */
+-static int
+-rcu_scale_shutdown(void *arg)
+-{
+-	wait_event(shutdown_wq,
+-		   atomic_read(&n_rcu_scale_writer_finished) >= nrealwriters);
+-	smp_mb(); /* Wake before output. */
+-	rcu_scale_cleanup();
+-	kernel_power_off();
+-	return -EINVAL;
+-}
+-
+ /*
+  * kfree_rcu() scalability tests: Start a kfree_rcu() loop on all CPUs for number
+  * of iterations and measure total time and number of GP for all iterations to complete.
+@@ -693,8 +595,8 @@ kfree_scale_cleanup(void)
+ static int
+ kfree_scale_shutdown(void *arg)
+ {
+-	wait_event(shutdown_wq,
+-		   atomic_read(&n_kfree_scale_thread_ended) >= kfree_nrealthreads);
++	wait_event_idle(shutdown_wq,
++			atomic_read(&n_kfree_scale_thread_ended) >= kfree_nrealthreads);
+ 
+ 	smp_mb(); /* Wake before output. */
+ 
+@@ -748,6 +650,108 @@ unwind:
+ 	return firsterr;
+ }
+ 
++static void
++rcu_scale_cleanup(void)
++{
++	int i;
++	int j;
++	int ngps = 0;
++	u64 *wdp;
++	u64 *wdpp;
++
++	/*
++	 * Would like warning at start, but everything is expedited
++	 * during the mid-boot phase, so have to wait till the end.
++	 */
++	if (rcu_gp_is_expedited() && !rcu_gp_is_normal() && !gp_exp)
++		SCALEOUT_ERRSTRING("All grace periods expedited, no normal ones to measure!");
++	if (rcu_gp_is_normal() && gp_exp)
++		SCALEOUT_ERRSTRING("All grace periods normal, no expedited ones to measure!");
++	if (gp_exp && gp_async)
++		SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!");
++
++	if (kfree_rcu_test) {
++		kfree_scale_cleanup();
++		return;
++	}
++
++	if (torture_cleanup_begin())
++		return;
++	if (!cur_ops) {
++		torture_cleanup_end();
++		return;
++	}
++
++	if (reader_tasks) {
++		for (i = 0; i < nrealreaders; i++)
++			torture_stop_kthread(rcu_scale_reader,
++					     reader_tasks[i]);
++		kfree(reader_tasks);
++	}
++
++	if (writer_tasks) {
++		for (i = 0; i < nrealwriters; i++) {
++			torture_stop_kthread(rcu_scale_writer,
++					     writer_tasks[i]);
++			if (!writer_n_durations)
++				continue;
++			j = writer_n_durations[i];
++			pr_alert("%s%s writer %d gps: %d\n",
++				 scale_type, SCALE_FLAG, i, j);
++			ngps += j;
++		}
++		pr_alert("%s%s start: %llu end: %llu duration: %llu gps: %d batches: %ld\n",
++			 scale_type, SCALE_FLAG,
++			 t_rcu_scale_writer_started, t_rcu_scale_writer_finished,
++			 t_rcu_scale_writer_finished -
++			 t_rcu_scale_writer_started,
++			 ngps,
++			 rcuscale_seq_diff(b_rcu_gp_test_finished,
++					   b_rcu_gp_test_started));
++		for (i = 0; i < nrealwriters; i++) {
++			if (!writer_durations)
++				break;
++			if (!writer_n_durations)
++				continue;
++			wdpp = writer_durations[i];
++			if (!wdpp)
++				continue;
++			for (j = 0; j < writer_n_durations[i]; j++) {
++				wdp = &wdpp[j];
++				pr_alert("%s%s %4d writer-duration: %5d %llu\n",
++					scale_type, SCALE_FLAG,
++					i, j, *wdp);
++				if (j % 100 == 0)
++					schedule_timeout_uninterruptible(1);
++			}
++			kfree(writer_durations[i]);
++		}
++		kfree(writer_tasks);
++		kfree(writer_durations);
++		kfree(writer_n_durations);
++	}
++
++	/* Do torture-type-specific cleanup operations.  */
++	if (cur_ops->cleanup != NULL)
++		cur_ops->cleanup();
++
++	torture_cleanup_end();
++}
++
++/*
++ * RCU scalability shutdown kthread.  Just waits to be awakened, then shuts
++ * down system.
++ */
++static int
++rcu_scale_shutdown(void *arg)
++{
++	wait_event_idle(shutdown_wq, atomic_read(&n_rcu_scale_writer_finished) >= nrealwriters);
++	smp_mb(); /* Wake before output. */
++	rcu_scale_cleanup();
++	kernel_power_off();
++	return -EINVAL;
++}
++
+ static int __init
+ rcu_scale_init(void)
+ {
+@@ -803,7 +807,7 @@ rcu_scale_init(void)
+ 	reader_tasks = kcalloc(nrealreaders, sizeof(reader_tasks[0]),
+ 			       GFP_KERNEL);
+ 	if (reader_tasks == NULL) {
+-		VERBOSE_SCALEOUT_ERRSTRING("out of memory");
++		SCALEOUT_ERRSTRING("out of memory");
+ 		firsterr = -ENOMEM;
+ 		goto unwind;
+ 	}
+@@ -823,7 +827,7 @@ rcu_scale_init(void)
+ 		kcalloc(nrealwriters, sizeof(*writer_n_durations),
+ 			GFP_KERNEL);
+ 	if (!writer_tasks || !writer_durations || !writer_n_durations) {
+-		VERBOSE_SCALEOUT_ERRSTRING("out of memory");
++		SCALEOUT_ERRSTRING("out of memory");
+ 		firsterr = -ENOMEM;
+ 		goto unwind;
+ 	}
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index c66d47685b28e..23101ebbbe1ef 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -801,7 +801,7 @@ static DEFINE_IRQ_WORK(rcu_tasks_trace_iw, rcu_read_unlock_iw);
+ /* If we are the last reader, wake up the grace-period kthread. */
+ void rcu_read_unlock_trace_special(struct task_struct *t, int nesting)
+ {
+-	int nq = t->trc_reader_special.b.need_qs;
++	int nq = READ_ONCE(t->trc_reader_special.b.need_qs);
+ 
+ 	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB) &&
+ 	    t->trc_reader_special.b.need_mb)
+@@ -841,33 +841,25 @@ static void trc_read_check_handler(void *t_in)
+ 
+ 	// If the task is no longer running on this CPU, leave.
+ 	if (unlikely(texp != t)) {
+-		if (WARN_ON_ONCE(atomic_dec_and_test(&trc_n_readers_need_end)))
+-			wake_up(&trc_wait);
+ 		goto reset_ipi; // Already on holdout list, so will check later.
+ 	}
+ 
+ 	// If the task is not in a read-side critical section, and
+ 	// if this is the last reader, awaken the grace-period kthread.
+-	if (likely(!t->trc_reader_nesting)) {
+-		if (WARN_ON_ONCE(atomic_dec_and_test(&trc_n_readers_need_end)))
+-			wake_up(&trc_wait);
+-		// Mark as checked after decrement to avoid false
+-		// positives on the above WARN_ON_ONCE().
++	if (likely(!READ_ONCE(t->trc_reader_nesting))) {
+ 		WRITE_ONCE(t->trc_reader_checked, true);
+ 		goto reset_ipi;
+ 	}
+ 	// If we are racing with an rcu_read_unlock_trace(), try again later.
+-	if (unlikely(t->trc_reader_nesting < 0)) {
+-		if (WARN_ON_ONCE(atomic_dec_and_test(&trc_n_readers_need_end)))
+-			wake_up(&trc_wait);
++	if (unlikely(READ_ONCE(t->trc_reader_nesting) < 0))
+ 		goto reset_ipi;
+-	}
+ 	WRITE_ONCE(t->trc_reader_checked, true);
+ 
+ 	// Get here if the task is in a read-side critical section.  Set
+ 	// its state so that it will awaken the grace-period kthread upon
+ 	// exit from that critical section.
+-	WARN_ON_ONCE(t->trc_reader_special.b.need_qs);
++	atomic_inc(&trc_n_readers_need_end); // One more to wait on.
++	WARN_ON_ONCE(READ_ONCE(t->trc_reader_special.b.need_qs));
+ 	WRITE_ONCE(t->trc_reader_special.b.need_qs, true);
+ 
+ reset_ipi:
+@@ -904,6 +896,7 @@ static bool trc_inspect_reader(struct task_struct *t, void *arg)
+ 			n_heavy_reader_ofl_updates++;
+ 		in_qs = true;
+ 	} else {
++		// The task is not running, so C-language access is safe.
+ 		in_qs = likely(!t->trc_reader_nesting);
+ 	}
+ 
+@@ -918,7 +911,7 @@ static bool trc_inspect_reader(struct task_struct *t, void *arg)
+ 	// state so that it will awaken the grace-period kthread upon exit
+ 	// from that critical section.
+ 	atomic_inc(&trc_n_readers_need_end); // One more to wait on.
+-	WARN_ON_ONCE(t->trc_reader_special.b.need_qs);
++	WARN_ON_ONCE(READ_ONCE(t->trc_reader_special.b.need_qs));
+ 	WRITE_ONCE(t->trc_reader_special.b.need_qs, true);
+ 	return true;
+ }
+@@ -936,7 +929,7 @@ static void trc_wait_for_one_reader(struct task_struct *t,
+ 	// The current task had better be in a quiescent state.
+ 	if (t == current) {
+ 		t->trc_reader_checked = true;
+-		WARN_ON_ONCE(t->trc_reader_nesting);
++		WARN_ON_ONCE(READ_ONCE(t->trc_reader_nesting));
+ 		return;
+ 	}
+ 
+@@ -959,21 +952,15 @@ static void trc_wait_for_one_reader(struct task_struct *t,
+ 		if (per_cpu(trc_ipi_to_cpu, cpu) || t->trc_ipi_to_cpu >= 0)
+ 			return;
+ 
+-		atomic_inc(&trc_n_readers_need_end);
+ 		per_cpu(trc_ipi_to_cpu, cpu) = true;
+ 		t->trc_ipi_to_cpu = cpu;
+ 		rcu_tasks_trace.n_ipis++;
+-		if (smp_call_function_single(cpu,
+-					     trc_read_check_handler, t, 0)) {
++		if (smp_call_function_single(cpu, trc_read_check_handler, t, 0)) {
+ 			// Just in case there is some other reason for
+ 			// failure than the target CPU being offline.
+ 			rcu_tasks_trace.n_ipis_fails++;
+ 			per_cpu(trc_ipi_to_cpu, cpu) = false;
+ 			t->trc_ipi_to_cpu = cpu;
+-			if (atomic_dec_and_test(&trc_n_readers_need_end)) {
+-				WARN_ON_ONCE(1);
+-				wake_up(&trc_wait);
+-			}
+ 		}
+ 	}
+ }
+@@ -1046,8 +1033,8 @@ static void show_stalled_task_trace(struct task_struct *t, bool *firstreport)
+ 		 ".I"[READ_ONCE(t->trc_ipi_to_cpu) > 0],
+ 		 ".i"[is_idle_task(t)],
+ 		 ".N"[cpu > 0 && tick_nohz_full_cpu(cpu)],
+-		 t->trc_reader_nesting,
+-		 " N"[!!t->trc_reader_special.b.need_qs],
++		 READ_ONCE(t->trc_reader_nesting),
++		 " N"[!!READ_ONCE(t->trc_reader_special.b.need_qs)],
+ 		 cpu);
+ 	sched_show_task(t);
+ }
+@@ -1141,7 +1128,7 @@ static void rcu_tasks_trace_postgp(struct rcu_tasks *rtp)
+ static void exit_tasks_rcu_finish_trace(struct task_struct *t)
+ {
+ 	WRITE_ONCE(t->trc_reader_checked, true);
+-	WARN_ON_ONCE(t->trc_reader_nesting);
++	WARN_ON_ONCE(READ_ONCE(t->trc_reader_nesting));
+ 	WRITE_ONCE(t->trc_reader_nesting, 0);
+ 	if (WARN_ON_ONCE(READ_ONCE(t->trc_reader_special.b.need_qs)))
+ 		rcu_read_unlock_trace_special(t, 0);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 45c1d03aff735..d53f57ac76094 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -9883,7 +9883,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
+ 		.sd		= sd,
+ 		.dst_cpu	= this_cpu,
+ 		.dst_rq		= this_rq,
+-		.dst_grpmask    = sched_group_span(sd->groups),
++		.dst_grpmask    = group_balance_mask(sd->groups),
+ 		.idle		= idle,
+ 		.loop_break	= sched_nr_migrate_break,
+ 		.cpus		= cpus,
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index d089627f2f2b4..29569b1c3d8c8 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -140,25 +140,30 @@ static struct k_itimer *posix_timer_by_id(timer_t id)
+ static int posix_timer_add(struct k_itimer *timer)
+ {
+ 	struct signal_struct *sig = current->signal;
+-	int first_free_id = sig->posix_timer_id;
+ 	struct hlist_head *head;
+-	int ret = -ENOENT;
++	unsigned int cnt, id;
+ 
+-	do {
++	/*
++	 * FIXME: Replace this by a per signal struct xarray once there is
++	 * a plan to handle the resulting CRIU regression gracefully.
++	 */
++	for (cnt = 0; cnt <= INT_MAX; cnt++) {
+ 		spin_lock(&hash_lock);
+-		head = &posix_timers_hashtable[hash(sig, sig->posix_timer_id)];
+-		if (!__posix_timers_find(head, sig, sig->posix_timer_id)) {
++		id = sig->next_posix_timer_id;
++
++		/* Write the next ID back. Clamp it to the positive space */
++		sig->next_posix_timer_id = (id + 1) & INT_MAX;
++
++		head = &posix_timers_hashtable[hash(sig, id)];
++		if (!__posix_timers_find(head, sig, id)) {
+ 			hlist_add_head_rcu(&timer->t_hash, head);
+-			ret = sig->posix_timer_id;
++			spin_unlock(&hash_lock);
++			return id;
+ 		}
+-		if (++sig->posix_timer_id < 0)
+-			sig->posix_timer_id = 0;
+-		if ((sig->posix_timer_id == first_free_id) && (ret == -ENOENT))
+-			/* Loop over all possible ids completed */
+-			ret = -EAGAIN;
+ 		spin_unlock(&hash_lock);
+-	} while (ret == -ENOENT);
+-	return ret;
++	}
++	/* POSIX return code when no timer ID could be allocated */
++	return -EAGAIN;
+ }
+ 
+ static inline void unlock_timer(struct k_itimer *timr, unsigned long flags)
+@@ -1037,27 +1042,52 @@ retry_delete:
+ }
+ 
+ /*
+- * return timer owned by the process, used by exit_itimers
++ * Delete a timer if it is armed, remove it from the hash and schedule it
++ * for RCU freeing.
+  */
+ static void itimer_delete(struct k_itimer *timer)
+ {
+-retry_delete:
+-	spin_lock_irq(&timer->it_lock);
++	unsigned long flags;
++
++	/*
++	 * irqsave is required to make timer_wait_running() work.
++	 */
++	spin_lock_irqsave(&timer->it_lock, flags);
+ 
++retry_delete:
++	/*
++	 * Even if the timer is not longer accessible from other tasks
++	 * it still might be armed and queued in the underlying timer
++	 * mechanism. Worse, that timer mechanism might run the expiry
++	 * function concurrently.
++	 */
+ 	if (timer_delete_hook(timer) == TIMER_RETRY) {
+-		spin_unlock_irq(&timer->it_lock);
++		/*
++		 * Timer is expired concurrently, prevent livelocks
++		 * and pointless spinning on RT.
++		 *
++		 * timer_wait_running() drops timer::it_lock, which opens
++		 * the possibility for another task to delete the timer.
++		 *
++		 * That's not possible here because this is invoked from
++		 * do_exit() only for the last thread of the thread group.
++		 * So no other task can access and delete that timer.
++		 */
++		if (WARN_ON_ONCE(timer_wait_running(timer, &flags) != timer))
++			return;
++
+ 		goto retry_delete;
+ 	}
+ 	list_del(&timer->list);
+ 
+-	spin_unlock_irq(&timer->it_lock);
++	spin_unlock_irqrestore(&timer->it_lock, flags);
+ 	release_posix_timer(timer, IT_ID_SET);
+ }
+ 
+ /*
+- * This is called by do_exit or de_thread, only when nobody else can
+- * modify the signal->posix_timers list. Yet we need sighand->siglock
+- * to prevent the race with /proc/pid/timers.
++ * Invoked from do_exit() when the last thread of a thread group exits.
++ * At that point no other task can access the timers of the dying
++ * task anymore.
+  */
+ void exit_itimers(struct task_struct *tsk)
+ {
+@@ -1067,10 +1097,12 @@ void exit_itimers(struct task_struct *tsk)
+ 	if (list_empty(&tsk->signal->posix_timers))
+ 		return;
+ 
++	/* Protect against concurrent read via /proc/$PID/timers */
+ 	spin_lock_irq(&tsk->sighand->siglock);
+ 	list_replace_init(&tsk->signal->posix_timers, &timers);
+ 	spin_unlock_irq(&tsk->sighand->siglock);
+ 
++	/* The timers are not longer accessible via tsk::signal */
+ 	while (!list_empty(&timers)) {
+ 		tmr = list_first_entry(&timers, struct k_itimer, list);
+ 		itimer_delete(tmr);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 3dab978c156d2..31fec924b7c48 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -1091,7 +1091,7 @@ struct ftrace_page {
+ 	struct ftrace_page	*next;
+ 	struct dyn_ftrace	*records;
+ 	int			index;
+-	int			size;
++	int			order;
+ };
+ 
+ #define ENTRY_SIZE sizeof(struct dyn_ftrace)
+@@ -3188,7 +3188,7 @@ static int ftrace_allocate_records(struct ftrace_page *pg, int count)
+ 	ftrace_number_of_groups++;
+ 
+ 	cnt = (PAGE_SIZE << order) / ENTRY_SIZE;
+-	pg->size = cnt;
++	pg->order = order;
+ 
+ 	if (cnt > count)
+ 		cnt = count;
+@@ -3196,12 +3196,27 @@ static int ftrace_allocate_records(struct ftrace_page *pg, int count)
+ 	return cnt;
+ }
+ 
++static void ftrace_free_pages(struct ftrace_page *pages)
++{
++	struct ftrace_page *pg = pages;
++
++	while (pg) {
++		if (pg->records) {
++			free_pages((unsigned long)pg->records, pg->order);
++			ftrace_number_of_pages -= 1 << pg->order;
++		}
++		pages = pg->next;
++		kfree(pg);
++		pg = pages;
++		ftrace_number_of_groups--;
++	}
++}
++
+ static struct ftrace_page *
+ ftrace_allocate_pages(unsigned long num_to_init)
+ {
+ 	struct ftrace_page *start_pg;
+ 	struct ftrace_page *pg;
+-	int order;
+ 	int cnt;
+ 
+ 	if (!num_to_init)
+@@ -3235,17 +3250,7 @@ ftrace_allocate_pages(unsigned long num_to_init)
+ 	return start_pg;
+ 
+  free_pages:
+-	pg = start_pg;
+-	while (pg) {
+-		order = get_count_order(pg->size / ENTRIES_PER_PAGE);
+-		if (order >= 0)
+-			free_pages((unsigned long)pg->records, order);
+-		start_pg = pg->next;
+-		kfree(pg);
+-		pg = start_pg;
+-		ftrace_number_of_pages -= 1 << order;
+-		ftrace_number_of_groups--;
+-	}
++	ftrace_free_pages(start_pg);
+ 	pr_info("ftrace: FAILED to allocate memory for functions\n");
+ 	return NULL;
+ }
+@@ -6191,9 +6196,11 @@ static int ftrace_process_locs(struct module *mod,
+ 			       unsigned long *start,
+ 			       unsigned long *end)
+ {
++	struct ftrace_page *pg_unuse = NULL;
+ 	struct ftrace_page *start_pg;
+ 	struct ftrace_page *pg;
+ 	struct dyn_ftrace *rec;
++	unsigned long skipped = 0;
+ 	unsigned long count;
+ 	unsigned long *p;
+ 	unsigned long addr;
+@@ -6239,6 +6246,7 @@ static int ftrace_process_locs(struct module *mod,
+ 	p = start;
+ 	pg = start_pg;
+ 	while (p < end) {
++		unsigned long end_offset;
+ 		addr = ftrace_call_adjust(*p++);
+ 		/*
+ 		 * Some architecture linkers will pad between
+@@ -6246,10 +6254,13 @@ static int ftrace_process_locs(struct module *mod,
+ 		 * object files to satisfy alignments.
+ 		 * Skip any NULL pointers.
+ 		 */
+-		if (!addr)
++		if (!addr) {
++			skipped++;
+ 			continue;
++		}
+ 
+-		if (pg->index == pg->size) {
++		end_offset = (pg->index+1) * sizeof(pg->records[0]);
++		if (end_offset > PAGE_SIZE << pg->order) {
+ 			/* We should have allocated enough */
+ 			if (WARN_ON(!pg->next))
+ 				break;
+@@ -6260,8 +6271,10 @@ static int ftrace_process_locs(struct module *mod,
+ 		rec->ip = addr;
+ 	}
+ 
+-	/* We should have used all pages */
+-	WARN_ON(pg->next);
++	if (pg->next) {
++		pg_unuse = pg->next;
++		pg->next = NULL;
++	}
+ 
+ 	/* Assign the last page to ftrace_pages */
+ 	ftrace_pages = pg;
+@@ -6283,6 +6296,11 @@ static int ftrace_process_locs(struct module *mod,
+  out:
+ 	mutex_unlock(&ftrace_lock);
+ 
++	/* We should have used all pages unless we skipped some */
++	if (pg_unuse) {
++		WARN_ON(!skipped);
++		ftrace_free_pages(pg_unuse);
++	}
+ 	return ret;
+ }
+ 
+@@ -6418,7 +6436,6 @@ void ftrace_release_mod(struct module *mod)
+ 	struct ftrace_page **last_pg;
+ 	struct ftrace_page *tmp_page = NULL;
+ 	struct ftrace_page *pg;
+-	int order;
+ 
+ 	mutex_lock(&ftrace_lock);
+ 
+@@ -6469,12 +6486,12 @@ void ftrace_release_mod(struct module *mod)
+ 		/* Needs to be called outside of ftrace_lock */
+ 		clear_mod_from_hashes(pg);
+ 
+-		order = get_count_order(pg->size / ENTRIES_PER_PAGE);
+-		if (order >= 0)
+-			free_pages((unsigned long)pg->records, order);
++		if (pg->records) {
++			free_pages((unsigned long)pg->records, pg->order);
++			ftrace_number_of_pages -= 1 << pg->order;
++		}
+ 		tmp_page = pg->next;
+ 		kfree(pg);
+-		ftrace_number_of_pages -= 1 << order;
+ 		ftrace_number_of_groups--;
+ 	}
+ }
+@@ -6792,7 +6809,6 @@ void ftrace_free_mem(struct module *mod, void *start_ptr, void *end_ptr)
+ 	struct ftrace_mod_map *mod_map = NULL;
+ 	struct ftrace_init_func *func, *func_next;
+ 	struct list_head clear_hash;
+-	int order;
+ 
+ 	INIT_LIST_HEAD(&clear_hash);
+ 
+@@ -6830,10 +6846,10 @@ void ftrace_free_mem(struct module *mod, void *start_ptr, void *end_ptr)
+ 		ftrace_update_tot_cnt--;
+ 		if (!pg->index) {
+ 			*last_pg = pg->next;
+-			order = get_count_order(pg->size / ENTRIES_PER_PAGE);
+-			if (order >= 0)
+-				free_pages((unsigned long)pg->records, order);
+-			ftrace_number_of_pages -= 1 << order;
++			if (pg->records) {
++				free_pages((unsigned long)pg->records, pg->order);
++				ftrace_number_of_pages -= 1 << pg->order;
++			}
+ 			ftrace_number_of_groups--;
+ 			kfree(pg);
+ 			pg = container_of(last_pg, struct ftrace_page, next);
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index f08904914166b..593e446f6c487 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -4954,28 +4954,34 @@ unsigned long ring_buffer_size(struct trace_buffer *buffer, int cpu)
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_size);
+ 
++static void rb_clear_buffer_page(struct buffer_page *page)
++{
++	local_set(&page->write, 0);
++	local_set(&page->entries, 0);
++	rb_init_page(page->page);
++	page->read = 0;
++}
++
+ static void
+ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
+ {
++	struct buffer_page *page;
++
+ 	rb_head_page_deactivate(cpu_buffer);
+ 
+ 	cpu_buffer->head_page
+ 		= list_entry(cpu_buffer->pages, struct buffer_page, list);
+-	local_set(&cpu_buffer->head_page->write, 0);
+-	local_set(&cpu_buffer->head_page->entries, 0);
+-	local_set(&cpu_buffer->head_page->page->commit, 0);
+-
+-	cpu_buffer->head_page->read = 0;
++	rb_clear_buffer_page(cpu_buffer->head_page);
++	list_for_each_entry(page, cpu_buffer->pages, list) {
++		rb_clear_buffer_page(page);
++	}
+ 
+ 	cpu_buffer->tail_page = cpu_buffer->head_page;
+ 	cpu_buffer->commit_page = cpu_buffer->head_page;
+ 
+ 	INIT_LIST_HEAD(&cpu_buffer->reader_page->list);
+ 	INIT_LIST_HEAD(&cpu_buffer->new_pages);
+-	local_set(&cpu_buffer->reader_page->write, 0);
+-	local_set(&cpu_buffer->reader_page->entries, 0);
+-	local_set(&cpu_buffer->reader_page->page->commit, 0);
+-	cpu_buffer->reader_page->read = 0;
++	rb_clear_buffer_page(cpu_buffer->reader_page);
+ 
+ 	local_set(&cpu_buffer->entries_bytes, 0);
+ 	local_set(&cpu_buffer->overrun, 0);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 70526400e05c9..7867fc39c4fc5 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6250,6 +6250,7 @@ static int tracing_release_pipe(struct inode *inode, struct file *file)
+ 	mutex_unlock(&trace_types_lock);
+ 
+ 	free_cpumask_var(iter->started);
++	kfree(iter->temp);
+ 	mutex_destroy(&iter->mutex);
+ 	kfree(iter);
+ 
+@@ -7529,7 +7530,7 @@ static const struct file_operations tracing_err_log_fops = {
+ 	.open           = tracing_err_log_open,
+ 	.write		= tracing_err_log_write,
+ 	.read           = seq_read,
+-	.llseek         = seq_lseek,
++	.llseek         = tracing_lseek,
+ 	.release        = tracing_err_log_release,
+ };
+ 
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 9ed65191888ef..059a106e62bec 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -5817,13 +5817,16 @@ static int event_hist_trigger_func(struct event_command *cmd_ops,
+ 	if (get_named_trigger_data(trigger_data))
+ 		goto enable;
+ 
+-	if (has_hist_vars(hist_data))
+-		save_hist_vars(hist_data);
+-
+ 	ret = create_actions(hist_data);
+ 	if (ret)
+ 		goto out_unreg;
+ 
++	if (has_hist_vars(hist_data) || hist_data->n_var_refs) {
++		ret = save_hist_vars(hist_data);
++		if (ret)
++			goto out_unreg;
++	}
++
+ 	ret = tracing_map_init(hist_data->map);
+ 	if (ret)
+ 		goto out_unreg;
+diff --git a/kernel/trace/trace_probe_tmpl.h b/kernel/trace/trace_probe_tmpl.h
+index e5282828f4a60..29348874ebde7 100644
+--- a/kernel/trace/trace_probe_tmpl.h
++++ b/kernel/trace/trace_probe_tmpl.h
+@@ -143,6 +143,8 @@ stage3:
+ array:
+ 	/* the last stage: Loop on array */
+ 	if (code->op == FETCH_OP_LP_ARRAY) {
++		if (ret < 0)
++			ret = 0;
+ 		total += ret;
+ 		if (++i < code->param) {
+ 			code = s3;
+diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
+index 247bf0b1582ca..1e8a49dc956e2 100644
+--- a/kernel/watchdog_hld.c
++++ b/kernel/watchdog_hld.c
+@@ -114,14 +114,14 @@ static void watchdog_overflow_callback(struct perf_event *event,
+ 	/* Ensure the watchdog never gets throttled */
+ 	event->hw.interrupts = 0;
+ 
++	if (!watchdog_check_timestamp())
++		return;
++
+ 	if (__this_cpu_read(watchdog_nmi_touch) == true) {
+ 		__this_cpu_write(watchdog_nmi_touch, false);
+ 		return;
+ 	}
+ 
+-	if (!watchdog_check_timestamp())
+-		return;
+-
+ 	/* check for a hardlockup
+ 	 * This is done by making sure our timer interrupt
+ 	 * is incrementing.  The timer interrupt should have
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index b9041ab881bc8..fa0a0e59b3851 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -679,12 +679,17 @@ static void clear_work_data(struct work_struct *work)
+ 	set_work_data(work, WORK_STRUCT_NO_POOL, 0);
+ }
+ 
++static inline struct pool_workqueue *work_struct_pwq(unsigned long data)
++{
++	return (struct pool_workqueue *)(data & WORK_STRUCT_WQ_DATA_MASK);
++}
++
+ static struct pool_workqueue *get_work_pwq(struct work_struct *work)
+ {
+ 	unsigned long data = atomic_long_read(&work->data);
+ 
+ 	if (data & WORK_STRUCT_PWQ)
+-		return (void *)(data & WORK_STRUCT_WQ_DATA_MASK);
++		return work_struct_pwq(data);
+ 	else
+ 		return NULL;
+ }
+@@ -712,8 +717,7 @@ static struct worker_pool *get_work_pool(struct work_struct *work)
+ 	assert_rcu_or_pool_mutex();
+ 
+ 	if (data & WORK_STRUCT_PWQ)
+-		return ((struct pool_workqueue *)
+-			(data & WORK_STRUCT_WQ_DATA_MASK))->pool;
++		return work_struct_pwq(data)->pool;
+ 
+ 	pool_id = data >> WORK_OFFQ_POOL_SHIFT;
+ 	if (pool_id == WORK_OFFQ_POOL_NONE)
+@@ -734,8 +738,7 @@ static int get_work_pool_id(struct work_struct *work)
+ 	unsigned long data = atomic_long_read(&work->data);
+ 
+ 	if (data & WORK_STRUCT_PWQ)
+-		return ((struct pool_workqueue *)
+-			(data & WORK_STRUCT_WQ_DATA_MASK))->pool->id;
++		return work_struct_pwq(data)->pool->id;
+ 
+ 	return data >> WORK_OFFQ_POOL_SHIFT;
+ }
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index 4c39678c03ee5..4dd9283f6fea0 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -501,6 +501,15 @@ static void debug_print_object(struct debug_obj *obj, char *msg)
+ 	const struct debug_obj_descr *descr = obj->descr;
+ 	static int limit;
+ 
++	/*
++	 * Don't report if lookup_object_or_alloc() by the current thread
++	 * failed because lookup_object_or_alloc()/debug_objects_oom() by a
++	 * concurrent thread turned off debug_objects_enabled and cleared
++	 * the hash buckets.
++	 */
++	if (!debug_objects_enabled)
++		return;
++
+ 	if (limit < 5 && descr != descr_test) {
+ 		void *hint = descr->debug_hint ?
+ 			descr->debug_hint(obj->object) : NULL;
+diff --git a/lib/test_firmware.c b/lib/test_firmware.c
+index ed0455a9ded87..25dc9eb6c902b 100644
+--- a/lib/test_firmware.c
++++ b/lib/test_firmware.c
+@@ -183,7 +183,7 @@ static int __kstrncpy(char **dst, const char *name, size_t count, gfp_t gfp)
+ {
+ 	*dst = kstrndup(name, count, gfp);
+ 	if (!*dst)
+-		return -ENOSPC;
++		return -ENOMEM;
+ 	return count;
+ }
+ 
+@@ -606,7 +606,7 @@ static ssize_t trigger_request_store(struct device *dev,
+ 
+ 	name = kstrndup(buf, count, GFP_KERNEL);
+ 	if (!name)
+-		return -ENOSPC;
++		return -ENOMEM;
+ 
+ 	pr_info("loading '%s'\n", name);
+ 
+@@ -654,7 +654,7 @@ static ssize_t trigger_request_platform_store(struct device *dev,
+ 
+ 	name = kstrndup(buf, count, GFP_KERNEL);
+ 	if (!name)
+-		return -ENOSPC;
++		return -ENOMEM;
+ 
+ 	pr_info("inserting test platform fw '%s'\n", name);
+ 	efi_embedded_fw.name = name;
+@@ -707,7 +707,7 @@ static ssize_t trigger_async_request_store(struct device *dev,
+ 
+ 	name = kstrndup(buf, count, GFP_KERNEL);
+ 	if (!name)
+-		return -ENOSPC;
++		return -ENOMEM;
+ 
+ 	pr_info("loading '%s'\n", name);
+ 
+@@ -752,7 +752,7 @@ static ssize_t trigger_custom_fallback_store(struct device *dev,
+ 
+ 	name = kstrndup(buf, count, GFP_KERNEL);
+ 	if (!name)
+-		return -ENOSPC;
++		return -ENOMEM;
+ 
+ 	pr_info("loading '%s' using custom fallback mechanism\n", name);
+ 
+@@ -803,7 +803,7 @@ static int test_fw_run_batch_request(void *data)
+ 
+ 		test_buf = kzalloc(TEST_FIRMWARE_BUF_SIZE, GFP_KERNEL);
+ 		if (!test_buf)
+-			return -ENOSPC;
++			return -ENOMEM;
+ 
+ 		if (test_fw_config->partial)
+ 			req->rc = request_partial_firmware_into_buf
+diff --git a/lib/ts_bm.c b/lib/ts_bm.c
+index 4cf250031f0f0..352ae837e0317 100644
+--- a/lib/ts_bm.c
++++ b/lib/ts_bm.c
+@@ -60,10 +60,12 @@ static unsigned int bm_find(struct ts_config *conf, struct ts_state *state)
+ 	struct ts_bm *bm = ts_config_priv(conf);
+ 	unsigned int i, text_len, consumed = state->offset;
+ 	const u8 *text;
+-	int shift = bm->patlen - 1, bs;
++	int bs;
+ 	const u8 icase = conf->flags & TS_IGNORECASE;
+ 
+ 	for (;;) {
++		int shift = bm->patlen - 1;
++
+ 		text_len = conf->get_next_block(consumed, &text, conf, state);
+ 
+ 		if (unlikely(text_len == 0))
+diff --git a/mm/shmem.c b/mm/shmem.c
+index d3d8c5e7a296b..cfa8f43cb3a62 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -4128,7 +4128,7 @@ static struct file_system_type shmem_fs_type = {
+ 	.name		= "tmpfs",
+ 	.init_fs_context = ramfs_init_fs_context,
+ 	.parameters	= ramfs_fs_parameters,
+-	.kill_sb	= kill_litter_super,
++	.kill_sb	= ramfs_kill_sb,
+ 	.fs_flags	= FS_USERNS_MOUNT,
+ };
+ 
+diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
+index 1d87bf51f3840..e35488fde9c85 100644
+--- a/net/bridge/br_if.c
++++ b/net/bridge/br_if.c
+@@ -157,8 +157,9 @@ void br_manage_promisc(struct net_bridge *br)
+ 			 * This lets us disable promiscuous mode and write
+ 			 * this config to hw.
+ 			 */
+-			if (br->auto_cnt == 0 ||
+-			    (br->auto_cnt == 1 && br_auto_port(p)))
++			if ((p->dev->priv_flags & IFF_UNICAST_FLT) &&
++			    (br->auto_cnt == 0 ||
++			     (br->auto_cnt == 1 && br_auto_port(p))))
+ 				br_port_clear_promisc(p);
+ 			else
+ 				br_port_set_promisc(p);
+diff --git a/net/bridge/br_stp_if.c b/net/bridge/br_stp_if.c
+index ba55851fe132c..3326dfced68ab 100644
+--- a/net/bridge/br_stp_if.c
++++ b/net/bridge/br_stp_if.c
+@@ -201,6 +201,9 @@ int br_stp_set_enabled(struct net_bridge *br, unsigned long val,
+ {
+ 	ASSERT_RTNL();
+ 
++	if (!net_eq(dev_net(br->dev), &init_net))
++		NL_SET_ERR_MSG_MOD(extack, "STP does not work in non-root netns");
++
+ 	if (br_mrp_enabled(br)) {
+ 		NL_SET_ERR_MSG_MOD(extack,
+ 				   "STP can't be enabled if MRP is already enabled");
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index ddba4e12da783..2388c619f29ca 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -1521,6 +1521,12 @@ static int bcm_release(struct socket *sock)
+ 
+ 	lock_sock(sk);
+ 
++#if IS_ENABLED(CONFIG_PROC_FS)
++	/* remove procfs entry */
++	if (net->can.bcmproc_dir && bo->bcm_proc_read)
++		remove_proc_entry(bo->procname, net->can.bcmproc_dir);
++#endif /* CONFIG_PROC_FS */
++
+ 	list_for_each_entry_safe(op, next, &bo->tx_ops, list)
+ 		bcm_remove_op(op);
+ 
+@@ -1556,12 +1562,6 @@ static int bcm_release(struct socket *sock)
+ 	list_for_each_entry_safe(op, next, &bo->rx_ops, list)
+ 		bcm_remove_op(op);
+ 
+-#if IS_ENABLED(CONFIG_PROC_FS)
+-	/* remove procfs entry */
+-	if (net->can.bcmproc_dir && bo->bcm_proc_read)
+-		remove_proc_entry(bo->procname, net->can.bcmproc_dir);
+-#endif /* CONFIG_PROC_FS */
+-
+ 	/* remove device reference */
+ 	if (bo->bound) {
+ 		bo->bound   = 0;
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 4fcd8162fad4a..16ebc187af1c6 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -990,8 +990,9 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 		/* wait for complete transmission of current pdu */
+ 		wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
+ 
+-		if (sk->sk_err)
+-			return -sk->sk_err;
++		err = sock_error(sk);
++		if (err)
++			return err;
+ 	}
+ 
+ 	return size;
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 72047750dcd96..00c6944ed6342 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -8092,7 +8092,10 @@ EXPORT_SYMBOL_GPL(devlink_free);
+ 
+ static void devlink_port_type_warn(struct work_struct *work)
+ {
+-	WARN(true, "Type was not set for devlink port.");
++	struct devlink_port *port = container_of(to_delayed_work(work),
++						 struct devlink_port,
++						 type_warn_dw);
++	dev_warn(port->devlink->dev, "Type was not set for devlink port.");
+ }
+ 
+ static bool devlink_port_type_should_warn(struct devlink_port *devlink_port)
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 3c9c2d6e3b92e..d3c03ebf06a5b 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -929,24 +929,27 @@ static inline int rtnl_vfinfo_size(const struct net_device *dev,
+ 			 nla_total_size(sizeof(struct ifla_vf_rate)) +
+ 			 nla_total_size(sizeof(struct ifla_vf_link_state)) +
+ 			 nla_total_size(sizeof(struct ifla_vf_rss_query_en)) +
+-			 nla_total_size(0) + /* nest IFLA_VF_STATS */
+-			 /* IFLA_VF_STATS_RX_PACKETS */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_TX_PACKETS */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_RX_BYTES */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_TX_BYTES */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_BROADCAST */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_MULTICAST */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_RX_DROPPED */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_TX_DROPPED */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+ 			 nla_total_size(sizeof(struct ifla_vf_trust)));
++		if (~ext_filter_mask & RTEXT_FILTER_SKIP_STATS) {
++			size += num_vfs *
++				(nla_total_size(0) + /* nest IFLA_VF_STATS */
++				 /* IFLA_VF_STATS_RX_PACKETS */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_TX_PACKETS */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_RX_BYTES */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_TX_BYTES */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_BROADCAST */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_MULTICAST */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_RX_DROPPED */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_TX_DROPPED */
++				 nla_total_size_64bit(sizeof(__u64)));
++		}
+ 		return size;
+ 	} else
+ 		return 0;
+@@ -1221,7 +1224,8 @@ static noinline_for_stack int rtnl_fill_stats(struct sk_buff *skb,
+ static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
+ 					       struct net_device *dev,
+ 					       int vfs_num,
+-					       struct nlattr *vfinfo)
++					       struct nlattr *vfinfo,
++					       u32 ext_filter_mask)
+ {
+ 	struct ifla_vf_rss_query_en vf_rss_query_en;
+ 	struct nlattr *vf, *vfstats, *vfvlanlist;
+@@ -1327,33 +1331,35 @@ static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
+ 		goto nla_put_vf_failure;
+ 	}
+ 	nla_nest_end(skb, vfvlanlist);
+-	memset(&vf_stats, 0, sizeof(vf_stats));
+-	if (dev->netdev_ops->ndo_get_vf_stats)
+-		dev->netdev_ops->ndo_get_vf_stats(dev, vfs_num,
+-						&vf_stats);
+-	vfstats = nla_nest_start_noflag(skb, IFLA_VF_STATS);
+-	if (!vfstats)
+-		goto nla_put_vf_failure;
+-	if (nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_PACKETS,
+-			      vf_stats.rx_packets, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_PACKETS,
+-			      vf_stats.tx_packets, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_BYTES,
+-			      vf_stats.rx_bytes, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_BYTES,
+-			      vf_stats.tx_bytes, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_BROADCAST,
+-			      vf_stats.broadcast, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_MULTICAST,
+-			      vf_stats.multicast, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_DROPPED,
+-			      vf_stats.rx_dropped, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_DROPPED,
+-			      vf_stats.tx_dropped, IFLA_VF_STATS_PAD)) {
+-		nla_nest_cancel(skb, vfstats);
+-		goto nla_put_vf_failure;
++	if (~ext_filter_mask & RTEXT_FILTER_SKIP_STATS) {
++		memset(&vf_stats, 0, sizeof(vf_stats));
++		if (dev->netdev_ops->ndo_get_vf_stats)
++			dev->netdev_ops->ndo_get_vf_stats(dev, vfs_num,
++							  &vf_stats);
++		vfstats = nla_nest_start_noflag(skb, IFLA_VF_STATS);
++		if (!vfstats)
++			goto nla_put_vf_failure;
++		if (nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_PACKETS,
++				      vf_stats.rx_packets, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_PACKETS,
++				      vf_stats.tx_packets, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_BYTES,
++				      vf_stats.rx_bytes, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_BYTES,
++				      vf_stats.tx_bytes, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_BROADCAST,
++				      vf_stats.broadcast, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_MULTICAST,
++				      vf_stats.multicast, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_DROPPED,
++				      vf_stats.rx_dropped, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_DROPPED,
++				      vf_stats.tx_dropped, IFLA_VF_STATS_PAD)) {
++			nla_nest_cancel(skb, vfstats);
++			goto nla_put_vf_failure;
++		}
++		nla_nest_end(skb, vfstats);
+ 	}
+-	nla_nest_end(skb, vfstats);
+ 	nla_nest_end(skb, vf);
+ 	return 0;
+ 
+@@ -1386,7 +1392,7 @@ static noinline_for_stack int rtnl_fill_vf(struct sk_buff *skb,
+ 		return -EMSGSIZE;
+ 
+ 	for (i = 0; i < num_vfs; i++) {
+-		if (rtnl_fill_vfinfo(skb, dev, i, vfinfo))
++		if (rtnl_fill_vfinfo(skb, dev, i, vfinfo, ext_filter_mask))
+ 			return -EMSGSIZE;
+ 	}
+ 
+@@ -3883,7 +3889,7 @@ static int nlmsg_populate_fdb_fill(struct sk_buff *skb,
+ 	ndm->ndm_ifindex = dev->ifindex;
+ 	ndm->ndm_state   = ndm_state;
+ 
+-	if (nla_put(skb, NDA_LLADDR, ETH_ALEN, addr))
++	if (nla_put(skb, NDA_LLADDR, dev->addr_len, addr))
+ 		goto nla_put_failure;
+ 	if (vid)
+ 		if (nla_put(skb, NDA_VLAN, sizeof(u16), &vid))
+@@ -3897,10 +3903,10 @@ nla_put_failure:
+ 	return -EMSGSIZE;
+ }
+ 
+-static inline size_t rtnl_fdb_nlmsg_size(void)
++static inline size_t rtnl_fdb_nlmsg_size(const struct net_device *dev)
+ {
+ 	return NLMSG_ALIGN(sizeof(struct ndmsg)) +
+-	       nla_total_size(ETH_ALEN) +	/* NDA_LLADDR */
++	       nla_total_size(dev->addr_len) +	/* NDA_LLADDR */
+ 	       nla_total_size(sizeof(u16)) +	/* NDA_VLAN */
+ 	       0;
+ }
+@@ -3912,7 +3918,7 @@ static void rtnl_fdb_notify(struct net_device *dev, u8 *addr, u16 vid, int type,
+ 	struct sk_buff *skb;
+ 	int err = -ENOBUFS;
+ 
+-	skb = nlmsg_new(rtnl_fdb_nlmsg_size(), GFP_ATOMIC);
++	skb = nlmsg_new(rtnl_fdb_nlmsg_size(dev), GFP_ATOMIC);
+ 	if (!skb)
+ 		goto errout;
+ 
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index e203172b9b9e7..b10285d06a2ca 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3685,6 +3685,11 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
+ 
+ 	skb_push(skb, -skb_network_offset(skb) + offset);
+ 
++	/* Ensure the head is writeable before touching the shared info */
++	err = skb_unclone(skb, GFP_ATOMIC);
++	if (err)
++		goto err_linearize;
++
+ 	skb_shinfo(skb)->frag_list = NULL;
+ 
+ 	while (list_skb) {
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 9b013d052a722..4e00c6e2cb431 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2175,13 +2175,24 @@ kuid_t sock_i_uid(struct sock *sk)
+ }
+ EXPORT_SYMBOL(sock_i_uid);
+ 
+-unsigned long sock_i_ino(struct sock *sk)
++unsigned long __sock_i_ino(struct sock *sk)
+ {
+ 	unsigned long ino;
+ 
+-	read_lock_bh(&sk->sk_callback_lock);
++	read_lock(&sk->sk_callback_lock);
+ 	ino = sk->sk_socket ? SOCK_INODE(sk->sk_socket)->i_ino : 0;
+-	read_unlock_bh(&sk->sk_callback_lock);
++	read_unlock(&sk->sk_callback_lock);
++	return ino;
++}
++EXPORT_SYMBOL(__sock_i_ino);
++
++unsigned long sock_i_ino(struct sock *sk)
++{
++	unsigned long ino;
++
++	local_bh_disable();
++	ino = __sock_i_ino(sk);
++	local_bh_enable();
+ 	return ino;
+ }
+ EXPORT_SYMBOL(sock_i_ino);
+diff --git a/net/dsa/tag_sja1105.c b/net/dsa/tag_sja1105.c
+index 50496013cdb7f..07876160edd2b 100644
+--- a/net/dsa/tag_sja1105.c
++++ b/net/dsa/tag_sja1105.c
+@@ -48,8 +48,8 @@ static void sja1105_meta_unpack(const struct sk_buff *skb,
+ 	 * a unified unpacking command for both device series.
+ 	 */
+ 	packing(buf,     &meta->tstamp,     31, 0, 4, UNPACK, 0);
+-	packing(buf + 4, &meta->dmac_byte_4, 7, 0, 1, UNPACK, 0);
+-	packing(buf + 5, &meta->dmac_byte_3, 7, 0, 1, UNPACK, 0);
++	packing(buf + 4, &meta->dmac_byte_3, 7, 0, 1, UNPACK, 0);
++	packing(buf + 5, &meta->dmac_byte_4, 7, 0, 1, UNPACK, 0);
+ 	packing(buf + 6, &meta->source_port, 7, 0, 1, UNPACK, 0);
+ 	packing(buf + 7, &meta->switch_id,   7, 0, 1, UNPACK, 0);
+ }
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 20d7381378418..28252029bd798 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -1134,7 +1134,7 @@ static int esp_init_authenc(struct xfrm_state *x)
+ 	err = crypto_aead_setkey(aead, key, keylen);
+ 
+ free_key:
+-	kfree(key);
++	kfree_sensitive(key);
+ 
+ error:
+ 	return err;
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 406305aaec904..5f71a1c74e7e0 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -740,7 +740,8 @@ static void reqsk_timer_handler(struct timer_list *t)
+ 	if (inet_sk_state_load(sk_listener) != TCP_LISTEN)
+ 		goto drop;
+ 
+-	max_syn_ack_retries = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_synack_retries;
++	max_syn_ack_retries = READ_ONCE(icsk->icsk_syn_retries) ? :
++		READ_ONCE(net->ipv4.sysctl_tcp_synack_retries);
+ 	/* Normally all the openreqs are young and become mature
+ 	 * (i.e. converted to established socket) for first timeout.
+ 	 * If synack was not acknowledged for 1 second, it means
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 79bf550c9dfc5..ad050f8476b8e 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -571,20 +571,8 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
+ 	spin_lock(lock);
+ 	if (osk) {
+ 		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
+-		ret = sk_hashed(osk);
+-		if (ret) {
+-			/* Before deleting the node, we insert a new one to make
+-			 * sure that the look-up-sk process would not miss either
+-			 * of them and that at least one node would exist in ehash
+-			 * table all the time. Otherwise there's a tiny chance
+-			 * that lookup process could find nothing in ehash table.
+-			 */
+-			__sk_nulls_add_node_tail_rcu(sk, list);
+-			sk_nulls_del_node_init_rcu(osk);
+-		}
+-		goto unlock;
+-	}
+-	if (found_dup_sk) {
++		ret = sk_nulls_del_node_init_rcu(osk);
++	} else if (found_dup_sk) {
+ 		*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
+ 		if (*found_dup_sk)
+ 			ret = false;
+@@ -593,7 +581,6 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
+ 	if (ret)
+ 		__sk_nulls_add_node_rcu(sk, list);
+ 
+-unlock:
+ 	spin_unlock(lock);
+ 
+ 	return ret;
+diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
+index a00102d7c7fd4..c411c87ae865f 100644
+--- a/net/ipv4/inet_timewait_sock.c
++++ b/net/ipv4/inet_timewait_sock.c
+@@ -81,10 +81,10 @@ void inet_twsk_put(struct inet_timewait_sock *tw)
+ }
+ EXPORT_SYMBOL_GPL(inet_twsk_put);
+ 
+-static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
+-					struct hlist_nulls_head *list)
++static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
++				   struct hlist_nulls_head *list)
+ {
+-	hlist_nulls_add_tail_rcu(&tw->tw_node, list);
++	hlist_nulls_add_head_rcu(&tw->tw_node, list);
+ }
+ 
+ static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw,
+@@ -120,7 +120,7 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
+ 
+ 	spin_lock(lock);
+ 
+-	inet_twsk_add_node_tail_rcu(tw, &ehead->chain);
++	inet_twsk_add_node_rcu(tw, &ehead->chain);
+ 
+ 	/* Step 3: Remove SK from hash chain */
+ 	if (__sk_nulls_del_node_init_rcu(sk))
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 5aa8bde3e9c8e..59ba518a85b9c 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -878,6 +878,15 @@ static struct ctl_table ipv4_net_table[] = {
+ 		.proc_handler	= proc_dou8vec_minmax,
+ 	},
+ #endif
++	{
++		.procname	= "tcp_migrate_req",
++		.data		= &init_net.ipv4.sysctl_tcp_migrate_req,
++		.maxlen		= sizeof(u8),
++		.mode		= 0644,
++		.proc_handler	= proc_dou8vec_minmax,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_ONE
++	},
+ 	{
+ 		.procname	= "tcp_reordering",
+ 		.data		= &init_net.ipv4.sysctl_tcp_reordering,
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 82abbf1929851..3dd9b76f40559 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3072,7 +3072,7 @@ int tcp_sock_set_syncnt(struct sock *sk, int val)
+ 		return -EINVAL;
+ 
+ 	lock_sock(sk);
+-	inet_csk(sk)->icsk_syn_retries = val;
++	WRITE_ONCE(inet_csk(sk)->icsk_syn_retries, val);
+ 	release_sock(sk);
+ 	return 0;
+ }
+@@ -3081,7 +3081,7 @@ EXPORT_SYMBOL(tcp_sock_set_syncnt);
+ void tcp_sock_set_user_timeout(struct sock *sk, u32 val)
+ {
+ 	lock_sock(sk);
+-	inet_csk(sk)->icsk_user_timeout = val;
++	WRITE_ONCE(inet_csk(sk)->icsk_user_timeout, val);
+ 	release_sock(sk);
+ }
+ EXPORT_SYMBOL(tcp_sock_set_user_timeout);
+@@ -3093,7 +3093,8 @@ int tcp_sock_set_keepidle_locked(struct sock *sk, int val)
+ 	if (val < 1 || val > MAX_TCP_KEEPIDLE)
+ 		return -EINVAL;
+ 
+-	tp->keepalive_time = val * HZ;
++	/* Paired with WRITE_ONCE() in keepalive_time_when() */
++	WRITE_ONCE(tp->keepalive_time, val * HZ);
+ 	if (sock_flag(sk, SOCK_KEEPOPEN) &&
+ 	    !((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))) {
+ 		u32 elapsed = keepalive_time_elapsed(tp);
+@@ -3125,7 +3126,7 @@ int tcp_sock_set_keepintvl(struct sock *sk, int val)
+ 		return -EINVAL;
+ 
+ 	lock_sock(sk);
+-	tcp_sk(sk)->keepalive_intvl = val * HZ;
++	WRITE_ONCE(tcp_sk(sk)->keepalive_intvl, val * HZ);
+ 	release_sock(sk);
+ 	return 0;
+ }
+@@ -3137,7 +3138,8 @@ int tcp_sock_set_keepcnt(struct sock *sk, int val)
+ 		return -EINVAL;
+ 
+ 	lock_sock(sk);
+-	tcp_sk(sk)->keepalive_probes = val;
++	/* Paired with READ_ONCE() in keepalive_probes() */
++	WRITE_ONCE(tcp_sk(sk)->keepalive_probes, val);
+ 	release_sock(sk);
+ 	return 0;
+ }
+@@ -3323,19 +3325,19 @@ static int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 		if (val < 1 || val > MAX_TCP_KEEPINTVL)
+ 			err = -EINVAL;
+ 		else
+-			tp->keepalive_intvl = val * HZ;
++			WRITE_ONCE(tp->keepalive_intvl, val * HZ);
+ 		break;
+ 	case TCP_KEEPCNT:
+ 		if (val < 1 || val > MAX_TCP_KEEPCNT)
+ 			err = -EINVAL;
+ 		else
+-			tp->keepalive_probes = val;
++			WRITE_ONCE(tp->keepalive_probes, val);
+ 		break;
+ 	case TCP_SYNCNT:
+ 		if (val < 1 || val > MAX_TCP_SYNCNT)
+ 			err = -EINVAL;
+ 		else
+-			icsk->icsk_syn_retries = val;
++			WRITE_ONCE(icsk->icsk_syn_retries, val);
+ 		break;
+ 
+ 	case TCP_SAVE_SYN:
+@@ -3348,18 +3350,18 @@ static int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 
+ 	case TCP_LINGER2:
+ 		if (val < 0)
+-			tp->linger2 = -1;
++			WRITE_ONCE(tp->linger2, -1);
+ 		else if (val > TCP_FIN_TIMEOUT_MAX / HZ)
+-			tp->linger2 = TCP_FIN_TIMEOUT_MAX;
++			WRITE_ONCE(tp->linger2, TCP_FIN_TIMEOUT_MAX);
+ 		else
+-			tp->linger2 = val * HZ;
++			WRITE_ONCE(tp->linger2, val * HZ);
+ 		break;
+ 
+ 	case TCP_DEFER_ACCEPT:
+ 		/* Translate value in seconds to number of retransmits */
+-		icsk->icsk_accept_queue.rskq_defer_accept =
+-			secs_to_retrans(val, TCP_TIMEOUT_INIT / HZ,
+-					TCP_RTO_MAX / HZ);
++		WRITE_ONCE(icsk->icsk_accept_queue.rskq_defer_accept,
++			   secs_to_retrans(val, TCP_TIMEOUT_INIT / HZ,
++					   TCP_RTO_MAX / HZ));
+ 		break;
+ 
+ 	case TCP_WINDOW_CLAMP:
+@@ -3391,7 +3393,7 @@ static int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 		if (val < 0)
+ 			err = -EINVAL;
+ 		else
+-			icsk->icsk_user_timeout = val;
++			WRITE_ONCE(icsk->icsk_user_timeout, val);
+ 		break;
+ 
+ 	case TCP_FASTOPEN:
+@@ -3435,7 +3437,7 @@ static int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 		err = tcp_repair_set_window(tp, optval, optlen);
+ 		break;
+ 	case TCP_NOTSENT_LOWAT:
+-		tp->notsent_lowat = val;
++		WRITE_ONCE(tp->notsent_lowat, val);
+ 		sk->sk_write_space(sk);
+ 		break;
+ 	case TCP_INQ:
+@@ -3447,7 +3449,7 @@ static int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 	case TCP_TX_DELAY:
+ 		if (val)
+ 			tcp_enable_tx_delay();
+-		tp->tcp_tx_delay = val;
++		WRITE_ONCE(tp->tcp_tx_delay, val);
+ 		break;
+ 	default:
+ 		err = -ENOPROTOOPT;
+@@ -3741,16 +3743,18 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
+ 		val = keepalive_probes(tp);
+ 		break;
+ 	case TCP_SYNCNT:
+-		val = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_syn_retries;
++		val = READ_ONCE(icsk->icsk_syn_retries) ? :
++			READ_ONCE(net->ipv4.sysctl_tcp_syn_retries);
+ 		break;
+ 	case TCP_LINGER2:
+-		val = tp->linger2;
++		val = READ_ONCE(tp->linger2);
+ 		if (val >= 0)
+ 			val = (val ? : READ_ONCE(net->ipv4.sysctl_tcp_fin_timeout)) / HZ;
+ 		break;
+ 	case TCP_DEFER_ACCEPT:
+-		val = retrans_to_secs(icsk->icsk_accept_queue.rskq_defer_accept,
+-				      TCP_TIMEOUT_INIT / HZ, TCP_RTO_MAX / HZ);
++		val = READ_ONCE(icsk->icsk_accept_queue.rskq_defer_accept);
++		val = retrans_to_secs(val, TCP_TIMEOUT_INIT / HZ,
++				      TCP_RTO_MAX / HZ);
+ 		break;
+ 	case TCP_WINDOW_CLAMP:
+ 		val = tp->window_clamp;
+@@ -3886,11 +3890,11 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
+ 		break;
+ 
+ 	case TCP_USER_TIMEOUT:
+-		val = icsk->icsk_user_timeout;
++		val = READ_ONCE(icsk->icsk_user_timeout);
+ 		break;
+ 
+ 	case TCP_FASTOPEN:
+-		val = icsk->icsk_accept_queue.fastopenq.max_qlen;
++		val = READ_ONCE(icsk->icsk_accept_queue.fastopenq.max_qlen);
+ 		break;
+ 
+ 	case TCP_FASTOPEN_CONNECT:
+@@ -3902,14 +3906,14 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
+ 		break;
+ 
+ 	case TCP_TX_DELAY:
+-		val = tp->tcp_tx_delay;
++		val = READ_ONCE(tp->tcp_tx_delay);
+ 		break;
+ 
+ 	case TCP_TIMESTAMP:
+ 		val = tcp_time_stamp_raw() + tp->tsoffset;
+ 		break;
+ 	case TCP_NOTSENT_LOWAT:
+-		val = tp->notsent_lowat;
++		val = READ_ONCE(tp->notsent_lowat);
+ 		break;
+ 	case TCP_INQ:
+ 		val = tp->recvmsg_inq;
+@@ -3970,6 +3974,8 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
+ 			return -EFAULT;
+ 		lock_sock(sk);
+ 		err = tcp_zerocopy_receive(sk, &zc);
++		err = BPF_CGROUP_RUN_PROG_GETSOCKOPT_KERN(sk, level, optname,
++							  &zc, &len, err);
+ 		release_sock(sk);
+ 		if (len >= offsetofend(struct tcp_zerocopy_receive, err))
+ 			goto zerocopy_rcv_sk_err;
+@@ -4004,6 +4010,18 @@ zerocopy_rcv_out:
+ 	return 0;
+ }
+ 
++bool tcp_bpf_bypass_getsockopt(int level, int optname)
++{
++	/* TCP do_tcp_getsockopt has optimized getsockopt implementation
++	 * to avoid extra socket lock for TCP_ZEROCOPY_RECEIVE.
++	 */
++	if (level == SOL_TCP && optname == TCP_ZEROCOPY_RECEIVE)
++		return true;
++
++	return false;
++}
++EXPORT_SYMBOL(tcp_bpf_bypass_getsockopt);
++
+ int tcp_getsockopt(struct sock *sk, int level, int optname, char __user *optval,
+ 		   int __user *optlen)
+ {
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index 39fb037ce5f3f..92d63cf3e50b9 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -312,6 +312,7 @@ static struct sock *tcp_fastopen_create_child(struct sock *sk,
+ static bool tcp_fastopen_queue_check(struct sock *sk)
+ {
+ 	struct fastopen_queue *fastopenq;
++	int max_qlen;
+ 
+ 	/* Make sure the listener has enabled fastopen, and we don't
+ 	 * exceed the max # of pending TFO requests allowed before trying
+@@ -324,10 +325,11 @@ static bool tcp_fastopen_queue_check(struct sock *sk)
+ 	 * temporarily vs a server not supporting Fast Open at all.
+ 	 */
+ 	fastopenq = &inet_csk(sk)->icsk_accept_queue.fastopenq;
+-	if (fastopenq->max_qlen == 0)
++	max_qlen = READ_ONCE(fastopenq->max_qlen);
++	if (max_qlen == 0)
+ 		return false;
+ 
+-	if (fastopenq->qlen >= fastopenq->max_qlen) {
++	if (fastopenq->qlen >= max_qlen) {
+ 		struct request_sock *req1;
+ 		spin_lock(&fastopenq->lock);
+ 		req1 = fastopenq->rskq_rst_head;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index b98b7920c4029..d6dfbb88dcf5b 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -3560,8 +3560,11 @@ static int tcp_ack_update_window(struct sock *sk, const struct sk_buff *skb, u32
+ static bool __tcp_oow_rate_limited(struct net *net, int mib_idx,
+ 				   u32 *last_oow_ack_time)
+ {
+-	if (*last_oow_ack_time) {
+-		s32 elapsed = (s32)(tcp_jiffies32 - *last_oow_ack_time);
++	/* Paired with the WRITE_ONCE() in this function. */
++	u32 val = READ_ONCE(*last_oow_ack_time);
++
++	if (val) {
++		s32 elapsed = (s32)(tcp_jiffies32 - val);
+ 
+ 		if (0 <= elapsed &&
+ 		    elapsed < READ_ONCE(net->ipv4.sysctl_tcp_invalid_ratelimit)) {
+@@ -3570,7 +3573,10 @@ static bool __tcp_oow_rate_limited(struct net *net, int mib_idx,
+ 		}
+ 	}
+ 
+-	*last_oow_ack_time = tcp_jiffies32;
++	/* Paired with the prior READ_ONCE() and with itself,
++	 * as we might be lockless.
++	 */
++	WRITE_ONCE(*last_oow_ack_time, tcp_jiffies32);
+ 
+ 	return false;	/* not rate-limited: go ahead, send dupack now! */
+ }
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 270b20e0907c2..b40780fde7915 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -960,7 +960,7 @@ static void tcp_v4_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb,
+ 			tcp_rsk(req)->rcv_nxt,
+ 			req->rsk_rcv_wnd >> inet_rsk(req)->rcv_wscale,
+ 			tcp_time_stamp_raw() + tcp_rsk(req)->ts_off,
+-			req->ts_recent,
++			READ_ONCE(req->ts_recent),
+ 			0,
+ 			tcp_md5_do_lookup(sk, l3index, addr, AF_INET),
+ 			inet_rsk(req)->no_srccheck ? IP_REPLY_ARG_NOSRCCHECK : 0,
+@@ -2805,6 +2805,7 @@ struct proto tcp_prot = {
+ 	.shutdown		= tcp_shutdown,
+ 	.setsockopt		= tcp_setsockopt,
+ 	.getsockopt		= tcp_getsockopt,
++	.bpf_bypass_getsockopt	= tcp_bpf_bypass_getsockopt,
+ 	.keepalive		= tcp_set_keepalive,
+ 	.recvmsg		= tcp_recvmsg,
+ 	.sendmsg		= tcp_sendmsg,
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 8d854feebdb00..01e27620b7ee5 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -523,7 +523,7 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
+ 	newtp->max_window = newtp->snd_wnd;
+ 
+ 	if (newtp->rx_opt.tstamp_ok) {
+-		newtp->rx_opt.ts_recent = req->ts_recent;
++		newtp->rx_opt.ts_recent = READ_ONCE(req->ts_recent);
+ 		newtp->rx_opt.ts_recent_stamp = ktime_get_seconds();
+ 		newtp->tcp_header_len = sizeof(struct tcphdr) + TCPOLEN_TSTAMP_ALIGNED;
+ 	} else {
+@@ -586,7 +586,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+ 		tcp_parse_options(sock_net(sk), skb, &tmp_opt, 0, NULL);
+ 
+ 		if (tmp_opt.saw_tstamp) {
+-			tmp_opt.ts_recent = req->ts_recent;
++			tmp_opt.ts_recent = READ_ONCE(req->ts_recent);
+ 			if (tmp_opt.rcv_tsecr)
+ 				tmp_opt.rcv_tsecr -= tcp_rsk(req)->ts_off;
+ 			/* We do not store true stamp, but it is not required,
+@@ -726,8 +726,11 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+ 
+ 	/* In sequence, PAWS is OK. */
+ 
++	/* TODO: We probably should defer ts_recent change once
++	 * we take ownership of @req.
++	 */
+ 	if (tmp_opt.saw_tstamp && !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt))
+-		req->ts_recent = tmp_opt.rcv_tsval;
++		WRITE_ONCE(req->ts_recent, tmp_opt.rcv_tsval);
+ 
+ 	if (TCP_SKB_CB(skb)->seq == tcp_rsk(req)->rcv_isn) {
+ 		/* Truncate SYN, it is out of window starting
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index e4ad274ec7a30..86e896351364e 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -874,7 +874,7 @@ static unsigned int tcp_synack_options(const struct sock *sk,
+ 	if (likely(ireq->tstamp_ok)) {
+ 		opts->options |= OPTION_TS;
+ 		opts->tsval = tcp_skb_timestamp(skb) + tcp_rsk(req)->ts_off;
+-		opts->tsecr = req->ts_recent;
++		opts->tsecr = READ_ONCE(req->ts_recent);
+ 		remaining -= TCPOLEN_TSTAMP_ALIGNED;
+ 	}
+ 	if (likely(ireq->sack_ok)) {
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index 888683f2ff3ee..715fdfa3e2ae9 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -239,7 +239,8 @@ static int tcp_write_timeout(struct sock *sk)
+ 	if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) {
+ 		if (icsk->icsk_retransmits)
+ 			__dst_negative_advice(sk);
+-		retry_until = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_syn_retries;
++		retry_until = icsk->icsk_syn_retries ? :
++			READ_ONCE(net->ipv4.sysctl_tcp_syn_retries);
+ 		expired = icsk->icsk_retransmits >= retry_until;
+ 	} else {
+ 		if (retransmits_timed_out(sk, READ_ONCE(net->ipv4.sysctl_tcp_retries1), 0)) {
+@@ -406,12 +407,15 @@ abort:		tcp_write_err(sk);
+ static void tcp_fastopen_synack_timer(struct sock *sk, struct request_sock *req)
+ {
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+-	int max_retries = icsk->icsk_syn_retries ? :
+-	    sock_net(sk)->ipv4.sysctl_tcp_synack_retries + 1; /* add one more retry for fastopen */
+ 	struct tcp_sock *tp = tcp_sk(sk);
++	int max_retries;
+ 
+ 	req->rsk_ops->syn_ack_timeout(req);
+ 
++	/* add one more retry for fastopen */
++	max_retries = icsk->icsk_syn_retries ? :
++		READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_synack_retries) + 1;
++
+ 	if (req->num_timeout >= max_retries) {
+ 		tcp_write_err(sk);
+ 		return;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index ed1e5bfc97b31..d5d10496b4aef 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -314,9 +314,8 @@ static void addrconf_del_dad_work(struct inet6_ifaddr *ifp)
+ static void addrconf_mod_rs_timer(struct inet6_dev *idev,
+ 				  unsigned long when)
+ {
+-	if (!timer_pending(&idev->rs_timer))
++	if (!mod_timer(&idev->rs_timer, jiffies + when))
+ 		in6_dev_hold(idev);
+-	mod_timer(&idev->rs_timer, jiffies + when);
+ }
+ 
+ static void addrconf_mod_dad_work(struct inet6_ifaddr *ifp,
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index fd1f896115c1e..d01165bb6a32b 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -429,7 +429,10 @@ static struct net_device *icmp6_dev(const struct sk_buff *skb)
+ 	if (unlikely(dev->ifindex == LOOPBACK_IFINDEX || netif_is_l3_master(skb->dev))) {
+ 		const struct rt6_info *rt6 = skb_rt6_info(skb);
+ 
+-		if (rt6)
++		/* The destination could be an external IP in Ext Hdr (SRv6, RPL, etc.),
++		 * and ip6_null_entry could be set to skb if no route is found.
++		 */
++		if (rt6 && rt6->rt6i_idev)
+ 			dev = rt6->rt6i_idev->dev;
+ 	}
+ 
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 7b50e1811678e..2df1036330f80 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -955,7 +955,8 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 		goto tx_err;
+ 
+ 	if (skb->len > dev->mtu + dev->hard_header_len) {
+-		pskb_trim(skb, dev->mtu + dev->hard_header_len);
++		if (pskb_trim(skb, dev->mtu + dev->hard_header_len))
++			goto tx_err;
+ 		truncate = true;
+ 	}
+ 
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index fe29bc66aeac7..79d6f6ea3c546 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1151,7 +1151,7 @@ static void tcp_v6_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb,
+ 			tcp_rsk(req)->rcv_nxt,
+ 			req->rsk_rcv_wnd >> inet_rsk(req)->rcv_wscale,
+ 			tcp_time_stamp_raw() + tcp_rsk(req)->ts_off,
+-			req->ts_recent, sk->sk_bound_dev_if,
++			READ_ONCE(req->ts_recent), sk->sk_bound_dev_if,
+ 			tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->saddr, l3index),
+ 			ipv6_get_dsfield(ipv6_hdr(skb)), 0, sk->sk_priority);
+ }
+@@ -2135,6 +2135,7 @@ struct proto tcpv6_prot = {
+ 	.shutdown		= tcp_shutdown,
+ 	.setsockopt		= tcp_setsockopt,
+ 	.getsockopt		= tcp_getsockopt,
++	.bpf_bypass_getsockopt	= tcp_bpf_bypass_getsockopt,
+ 	.keepalive		= tcp_set_keepalive,
+ 	.recvmsg		= tcp_recvmsg,
+ 	.sendmsg		= tcp_sendmsg,
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 19c0721399d9e..788bb19f32e99 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -87,7 +87,7 @@ static u32 udp6_ehashfn(const struct net *net,
+ 	fhash = __ipv6_addr_jhash(faddr, udp_ipv6_hash_secret);
+ 
+ 	return __inet6_ehashfn(lhash, lport, fhash, fport,
+-			       udp_ipv6_hash_secret + net_hash_mix(net));
++			       udp6_ehash_secret + net_hash_mix(net));
+ }
+ 
+ int udp_v6_get_port(struct sock *sk, unsigned short snum)
+diff --git a/net/llc/llc_input.c b/net/llc/llc_input.c
+index c309b72a58779..7cac441862e21 100644
+--- a/net/llc/llc_input.c
++++ b/net/llc/llc_input.c
+@@ -163,9 +163,6 @@ int llc_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	void (*sta_handler)(struct sk_buff *skb);
+ 	void (*sap_handler)(struct llc_sap *sap, struct sk_buff *skb);
+ 
+-	if (!net_eq(dev_net(dev), &init_net))
+-		goto drop;
+-
+ 	/*
+ 	 * When the interface is in promisc. mode, drop all the crap that it
+ 	 * receives, do not try to analyse it.
+diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c
+index 118f415928aef..32cc91f5ba99f 100644
+--- a/net/netfilter/nf_conntrack_helper.c
++++ b/net/netfilter/nf_conntrack_helper.c
+@@ -404,6 +404,9 @@ int nf_conntrack_helper_register(struct nf_conntrack_helper *me)
+ 	BUG_ON(me->expect_class_max >= NF_CT_MAX_EXPECT_CLASSES);
+ 	BUG_ON(strlen(me->name) > NF_CT_HELPER_NAME_LEN - 1);
+ 
++	if (!nf_ct_helper_hash)
++		return -ENOENT;
++
+ 	if (me->expect_policy->max_expected > NF_CT_EXPECT_MAX_CNT)
+ 		return -EINVAL;
+ 
+@@ -587,4 +590,5 @@ void nf_conntrack_helper_fini(void)
+ {
+ 	nf_ct_extend_unregister(&helper_extend);
+ 	kvfree(nf_ct_helper_hash);
++	nf_ct_helper_hash = NULL;
+ }
+diff --git a/net/netfilter/nf_conntrack_proto_dccp.c b/net/netfilter/nf_conntrack_proto_dccp.c
+index 94001eb51ffe4..a9ae292e932ae 100644
+--- a/net/netfilter/nf_conntrack_proto_dccp.c
++++ b/net/netfilter/nf_conntrack_proto_dccp.c
+@@ -431,9 +431,19 @@ static bool dccp_error(const struct dccp_hdr *dh,
+ 		       struct sk_buff *skb, unsigned int dataoff,
+ 		       const struct nf_hook_state *state)
+ {
++	static const unsigned long require_seq48 = 1 << DCCP_PKT_REQUEST |
++						   1 << DCCP_PKT_RESPONSE |
++						   1 << DCCP_PKT_CLOSEREQ |
++						   1 << DCCP_PKT_CLOSE |
++						   1 << DCCP_PKT_RESET |
++						   1 << DCCP_PKT_SYNC |
++						   1 << DCCP_PKT_SYNCACK;
+ 	unsigned int dccp_len = skb->len - dataoff;
+ 	unsigned int cscov;
+ 	const char *msg;
++	u8 type;
++
++	BUILD_BUG_ON(DCCP_PKT_INVALID >= BITS_PER_LONG);
+ 
+ 	if (dh->dccph_doff * 4 < sizeof(struct dccp_hdr) ||
+ 	    dh->dccph_doff * 4 > dccp_len) {
+@@ -458,10 +468,17 @@ static bool dccp_error(const struct dccp_hdr *dh,
+ 		goto out_invalid;
+ 	}
+ 
+-	if (dh->dccph_type >= DCCP_PKT_INVALID) {
++	type = dh->dccph_type;
++	if (type >= DCCP_PKT_INVALID) {
+ 		msg = "nf_ct_dccp: reserved packet type ";
+ 		goto out_invalid;
+ 	}
++
++	if (test_bit(type, &require_seq48) && !dh->dccph_x) {
++		msg = "nf_ct_dccp: type lacks 48bit sequence numbers";
++		goto out_invalid;
++	}
++
+ 	return false;
+ out_invalid:
+ 	nf_l4proto_log_invalid(skb, state->net, state->pf,
+@@ -469,24 +486,53 @@ out_invalid:
+ 	return true;
+ }
+ 
++struct nf_conntrack_dccp_buf {
++	struct dccp_hdr dh;	 /* generic header part */
++	struct dccp_hdr_ext ext; /* optional depending dh->dccph_x */
++	union {			 /* depends on header type */
++		struct dccp_hdr_ack_bits ack;
++		struct dccp_hdr_request req;
++		struct dccp_hdr_response response;
++		struct dccp_hdr_reset rst;
++	} u;
++};
++
++static struct dccp_hdr *
++dccp_header_pointer(const struct sk_buff *skb, int offset, const struct dccp_hdr *dh,
++		    struct nf_conntrack_dccp_buf *buf)
++{
++	unsigned int hdrlen = __dccp_hdr_len(dh);
++
++	if (hdrlen > sizeof(*buf))
++		return NULL;
++
++	return skb_header_pointer(skb, offset, hdrlen, buf);
++}
++
+ int nf_conntrack_dccp_packet(struct nf_conn *ct, struct sk_buff *skb,
+ 			     unsigned int dataoff,
+ 			     enum ip_conntrack_info ctinfo,
+ 			     const struct nf_hook_state *state)
+ {
+ 	enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo);
+-	struct dccp_hdr _dh, *dh;
++	struct nf_conntrack_dccp_buf _dh;
+ 	u_int8_t type, old_state, new_state;
+ 	enum ct_dccp_roles role;
+ 	unsigned int *timeouts;
++	struct dccp_hdr *dh;
+ 
+-	dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh);
++	dh = skb_header_pointer(skb, dataoff, sizeof(*dh), &_dh.dh);
+ 	if (!dh)
+ 		return NF_DROP;
+ 
+ 	if (dccp_error(dh, skb, dataoff, state))
+ 		return -NF_ACCEPT;
+ 
++	/* pull again, including possible 48 bit sequences and subtype header */
++	dh = dccp_header_pointer(skb, dataoff, dh, &_dh);
++	if (!dh)
++		return NF_DROP;
++
+ 	type = dh->dccph_type;
+ 	if (!nf_ct_is_confirmed(ct) && !dccp_new(ct, skb, dh))
+ 		return -NF_ACCEPT;
+diff --git a/net/netfilter/nf_conntrack_sip.c b/net/netfilter/nf_conntrack_sip.c
+index 78fd9122b70c7..751df19fe0f8a 100644
+--- a/net/netfilter/nf_conntrack_sip.c
++++ b/net/netfilter/nf_conntrack_sip.c
+@@ -611,7 +611,7 @@ int ct_sip_parse_numerical_param(const struct nf_conn *ct, const char *dptr,
+ 	start += strlen(name);
+ 	*val = simple_strtoul(start, &end, 0);
+ 	if (start == end)
+-		return 0;
++		return -1;
+ 	if (matchoff && matchlen) {
+ 		*matchoff = start - dptr;
+ 		*matchlen = end - start;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index e59cad1f7a36b..356416564d9f4 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -21,10 +21,13 @@
+ #include <net/netfilter/nf_tables.h>
+ #include <net/netfilter/nf_tables_offload.h>
+ #include <net/net_namespace.h>
++#include <net/netns/generic.h>
+ #include <net/sock.h>
+ 
+ #define NFT_MODULE_AUTOLOAD_LIMIT (MODULE_NAME_LEN - sizeof("nft-expr-255-"))
+ 
++unsigned int nf_tables_net_id __read_mostly;
++
+ static LIST_HEAD(nf_tables_expressions);
+ static LIST_HEAD(nf_tables_objects);
+ static LIST_HEAD(nf_tables_flowtables);
+@@ -103,7 +106,9 @@ static const u8 nft2audit_op[NFT_MSG_MAX] = { // enum nf_tables_msg_types
+ 
+ static void nft_validate_state_update(struct net *net, u8 new_validate_state)
+ {
+-	switch (net->nft.validate_state) {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
++
++	switch (nft_net->validate_state) {
+ 	case NFT_VALIDATE_SKIP:
+ 		WARN_ON_ONCE(new_validate_state == NFT_VALIDATE_DO);
+ 		break;
+@@ -114,7 +119,7 @@ static void nft_validate_state_update(struct net *net, u8 new_validate_state)
+ 			return;
+ 	}
+ 
+-	net->nft.validate_state = new_validate_state;
++	nft_net->validate_state = new_validate_state;
+ }
+ static void nf_tables_trans_destroy_work(struct work_struct *w);
+ static DECLARE_WORK(trans_destroy_work, nf_tables_trans_destroy_work);
+@@ -150,6 +155,7 @@ static struct nft_trans *nft_trans_alloc_gfp(const struct nft_ctx *ctx,
+ 		return NULL;
+ 
+ 	INIT_LIST_HEAD(&trans->list);
++	INIT_LIST_HEAD(&trans->binding_list);
+ 	trans->msg_type = msg_type;
+ 	trans->ctx	= *ctx;
+ 
+@@ -162,34 +168,107 @@ static struct nft_trans *nft_trans_alloc(const struct nft_ctx *ctx,
+ 	return nft_trans_alloc_gfp(ctx, msg_type, size, GFP_KERNEL);
+ }
+ 
+-static void nft_trans_destroy(struct nft_trans *trans)
++static void nft_trans_list_del(struct nft_trans *trans)
+ {
+ 	list_del(&trans->list);
++	list_del(&trans->binding_list);
++}
++
++static void nft_trans_destroy(struct nft_trans *trans)
++{
++	nft_trans_list_del(trans);
+ 	kfree(trans);
+ }
+ 
+-static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
++static void __nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set,
++				 bool bind)
+ {
++	struct nftables_pernet *nft_net;
+ 	struct net *net = ctx->net;
+ 	struct nft_trans *trans;
+ 
+ 	if (!nft_set_is_anonymous(set))
+ 		return;
+ 
+-	list_for_each_entry_reverse(trans, &net->nft.commit_list, list) {
++	nft_net = net_generic(net, nf_tables_net_id);
++	list_for_each_entry_reverse(trans, &nft_net->commit_list, list) {
+ 		switch (trans->msg_type) {
+ 		case NFT_MSG_NEWSET:
+ 			if (nft_trans_set(trans) == set)
+-				nft_trans_set_bound(trans) = true;
++				nft_trans_set_bound(trans) = bind;
+ 			break;
+ 		case NFT_MSG_NEWSETELEM:
+ 			if (nft_trans_elem_set(trans) == set)
+-				nft_trans_elem_set_bound(trans) = true;
++				nft_trans_elem_set_bound(trans) = bind;
+ 			break;
+ 		}
+ 	}
+ }
+ 
++static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
++{
++	return __nft_set_trans_bind(ctx, set, true);
++}
++
++static void nft_set_trans_unbind(const struct nft_ctx *ctx, struct nft_set *set)
++{
++	return __nft_set_trans_bind(ctx, set, false);
++}
++
++static void __nft_chain_trans_bind(const struct nft_ctx *ctx,
++				   struct nft_chain *chain, bool bind)
++{
++	struct nftables_pernet *nft_net;
++	struct net *net = ctx->net;
++	struct nft_trans *trans;
++
++	if (!nft_chain_binding(chain))
++		return;
++
++	nft_net = net_generic(net, nf_tables_net_id);
++	list_for_each_entry_reverse(trans, &nft_net->commit_list, list) {
++		switch (trans->msg_type) {
++		case NFT_MSG_NEWCHAIN:
++			if (nft_trans_chain(trans) == chain)
++				nft_trans_chain_bound(trans) = bind;
++			break;
++		case NFT_MSG_NEWRULE:
++			if (trans->ctx.chain == chain)
++				nft_trans_rule_bound(trans) = bind;
++			break;
++		}
++	}
++}
++
++static void nft_chain_trans_bind(const struct nft_ctx *ctx,
++				 struct nft_chain *chain)
++{
++	__nft_chain_trans_bind(ctx, chain, true);
++}
++
++int nf_tables_bind_chain(const struct nft_ctx *ctx, struct nft_chain *chain)
++{
++	if (!nft_chain_binding(chain))
++		return 0;
++
++	if (nft_chain_binding(ctx->chain))
++		return -EOPNOTSUPP;
++
++	if (chain->bound)
++		return -EBUSY;
++
++	chain->bound = true;
++	chain->use++;
++	nft_chain_trans_bind(ctx, chain);
++
++	return 0;
++}
++
++void nf_tables_unbind_chain(const struct nft_ctx *ctx, struct nft_chain *chain)
++{
++	__nft_chain_trans_bind(ctx, chain, false);
++}
++
+ static int nft_netdev_register_hooks(struct net *net,
+ 				     struct list_head *hook_list)
+ {
+@@ -270,6 +349,27 @@ static void nf_tables_unregister_hook(struct net *net,
+ 		nf_unregister_net_hook(net, &basechain->ops);
+ }
+ 
++static void nft_trans_commit_list_add_tail(struct net *net, struct nft_trans *trans)
++{
++	struct nftables_pernet *nft_net;
++
++	nft_net = net_generic(net, nf_tables_net_id);
++
++	switch (trans->msg_type) {
++	case NFT_MSG_NEWSET:
++		if (nft_set_is_anonymous(nft_trans_set(trans)))
++			list_add_tail(&trans->binding_list, &nft_net->binding_list);
++		break;
++	case NFT_MSG_NEWCHAIN:
++		if (!nft_trans_chain_update(trans) &&
++		    nft_chain_binding(nft_trans_chain(trans)))
++			list_add_tail(&trans->binding_list, &nft_net->binding_list);
++		break;
++	}
++
++	list_add_tail(&trans->list, &nft_net->commit_list);
++}
++
+ static int nft_trans_table_add(struct nft_ctx *ctx, int msg_type)
+ {
+ 	struct nft_trans *trans;
+@@ -281,7 +381,7 @@ static int nft_trans_table_add(struct nft_ctx *ctx, int msg_type)
+ 	if (msg_type == NFT_MSG_NEWTABLE)
+ 		nft_activate_next(ctx->net, ctx->table);
+ 
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 	return 0;
+ }
+ 
+@@ -313,8 +413,9 @@ static struct nft_trans *nft_trans_chain_add(struct nft_ctx *ctx, int msg_type)
+ 				ntohl(nla_get_be32(ctx->nla[NFTA_CHAIN_ID]));
+ 		}
+ 	}
++	nft_trans_chain(trans) = ctx->chain;
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
+ 	return trans;
+ }
+ 
+@@ -332,8 +433,7 @@ static int nft_delchain(struct nft_ctx *ctx)
+ 	return 0;
+ }
+ 
+-static void nft_rule_expr_activate(const struct nft_ctx *ctx,
+-				   struct nft_rule *rule)
++void nft_rule_expr_activate(const struct nft_ctx *ctx, struct nft_rule *rule)
+ {
+ 	struct nft_expr *expr;
+ 
+@@ -346,9 +446,8 @@ static void nft_rule_expr_activate(const struct nft_ctx *ctx,
+ 	}
+ }
+ 
+-static void nft_rule_expr_deactivate(const struct nft_ctx *ctx,
+-				     struct nft_rule *rule,
+-				     enum nft_trans_phase phase)
++void nft_rule_expr_deactivate(const struct nft_ctx *ctx, struct nft_rule *rule,
++			      enum nft_trans_phase phase)
+ {
+ 	struct nft_expr *expr;
+ 
+@@ -387,7 +486,7 @@ static struct nft_trans *nft_trans_rule_add(struct nft_ctx *ctx, int msg_type,
+ 			ntohl(nla_get_be32(ctx->nla[NFTA_RULE_ID]));
+ 	}
+ 	nft_trans_rule(trans) = rule;
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+ 	return trans;
+ }
+@@ -453,11 +552,36 @@ static int nft_trans_set_add(const struct nft_ctx *ctx, int msg_type,
+ 		nft_activate_next(ctx->net, set);
+ 	}
+ 	nft_trans_set(trans) = set;
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+ 	return 0;
+ }
+ 
++static void nft_setelem_data_deactivate(const struct net *net,
++					const struct nft_set *set,
++					struct nft_set_elem *elem);
++
++static int nft_mapelem_deactivate(const struct nft_ctx *ctx,
++				  struct nft_set *set,
++				  const struct nft_set_iter *iter,
++				  struct nft_set_elem *elem)
++{
++	nft_setelem_data_deactivate(ctx->net, set, elem);
++
++	return 0;
++}
++
++static void nft_map_deactivate(const struct nft_ctx *ctx, struct nft_set *set)
++{
++	struct nft_set_iter iter = {
++		.genmask	= nft_genmask_next(ctx->net),
++		.fn		= nft_mapelem_deactivate,
++	};
++
++	set->ops->walk(ctx, set, &iter);
++	WARN_ON_ONCE(iter.err);
++}
++
+ static int nft_delset(const struct nft_ctx *ctx, struct nft_set *set)
+ {
+ 	int err;
+@@ -466,6 +590,9 @@ static int nft_delset(const struct nft_ctx *ctx, struct nft_set *set)
+ 	if (err < 0)
+ 		return err;
+ 
++	if (set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
++		nft_map_deactivate(ctx, set);
++
+ 	nft_deactivate_next(ctx->net, set);
+ 	ctx->table->use--;
+ 
+@@ -485,7 +612,7 @@ static int nft_trans_obj_add(struct nft_ctx *ctx, int msg_type,
+ 		nft_activate_next(ctx->net, obj);
+ 
+ 	nft_trans_obj(trans) = obj;
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+ 	return 0;
+ }
+@@ -519,7 +646,7 @@ static int nft_trans_flowtable_add(struct nft_ctx *ctx, int msg_type,
+ 
+ 	INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans));
+ 	nft_trans_flowtable(trans) = flowtable;
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+ 	return 0;
+ }
+@@ -547,13 +674,15 @@ static struct nft_table *nft_table_lookup(const struct net *net,
+ 					  const struct nlattr *nla,
+ 					  u8 family, u8 genmask)
+ {
++	struct nftables_pernet *nft_net;
+ 	struct nft_table *table;
+ 
+ 	if (nla == NULL)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	list_for_each_entry_rcu(table, &net->nft.tables, list,
+-				lockdep_is_held(&net->nft.commit_mutex)) {
++	nft_net = net_generic(net, nf_tables_net_id);
++	list_for_each_entry_rcu(table, &nft_net->tables, list,
++				lockdep_is_held(&nft_net->commit_mutex)) {
+ 		if (!nla_strcmp(nla, table->name) &&
+ 		    table->family == family &&
+ 		    nft_active_genmask(table, genmask))
+@@ -567,9 +696,11 @@ static struct nft_table *nft_table_lookup_byhandle(const struct net *net,
+ 						   const struct nlattr *nla,
+ 						   u8 genmask)
+ {
++	struct nftables_pernet *nft_net;
+ 	struct nft_table *table;
+ 
+-	list_for_each_entry(table, &net->nft.tables, list) {
++	nft_net = net_generic(net, nf_tables_net_id);
++	list_for_each_entry(table, &nft_net->tables, list) {
+ 		if (be64_to_cpu(nla_get_be64(nla)) == table->handle &&
+ 		    nft_active_genmask(table, genmask))
+ 			return table;
+@@ -621,6 +752,7 @@ struct nft_module_request {
+ static int nft_request_module(struct net *net, const char *fmt, ...)
+ {
+ 	char module_name[MODULE_NAME_LEN];
++	struct nftables_pernet *nft_net;
+ 	struct nft_module_request *req;
+ 	va_list args;
+ 	int ret;
+@@ -631,7 +763,8 @@ static int nft_request_module(struct net *net, const char *fmt, ...)
+ 	if (ret >= MODULE_NAME_LEN)
+ 		return 0;
+ 
+-	list_for_each_entry(req, &net->nft.module_list, list) {
++	nft_net = net_generic(net, nf_tables_net_id);
++	list_for_each_entry(req, &nft_net->module_list, list) {
+ 		if (!strcmp(req->module, module_name)) {
+ 			if (req->done)
+ 				return 0;
+@@ -647,7 +780,7 @@ static int nft_request_module(struct net *net, const char *fmt, ...)
+ 
+ 	req->done = false;
+ 	strlcpy(req->module, module_name, MODULE_NAME_LEN);
+-	list_add_tail(&req->list, &net->nft.module_list);
++	list_add_tail(&req->list, &nft_net->module_list);
+ 
+ 	return -EAGAIN;
+ }
+@@ -685,7 +818,9 @@ nf_tables_chain_type_lookup(struct net *net, const struct nlattr *nla,
+ 
+ static __be16 nft_base_seq(const struct net *net)
+ {
+-	return htons(net->nft.base_seq & 0xffff);
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
++
++	return htons(nft_net->base_seq & 0xffff);
+ }
+ 
+ static const struct nla_policy nft_table_policy[NFTA_TABLE_MAX + 1] = {
+@@ -743,6 +878,7 @@ static void nft_notify_enqueue(struct sk_buff *skb, bool report,
+ 
+ static void nf_tables_table_notify(const struct nft_ctx *ctx, int event)
+ {
++	struct nftables_pernet *nft_net;
+ 	struct sk_buff *skb;
+ 	int err;
+ 
+@@ -761,7 +897,8 @@ static void nf_tables_table_notify(const struct nft_ctx *ctx, int event)
+ 		goto err;
+ 	}
+ 
+-	nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list);
++	nft_net = net_generic(ctx->net, nf_tables_net_id);
++	nft_notify_enqueue(skb, ctx->report, &nft_net->notify_list);
+ 	return;
+ err:
+ 	nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS);
+@@ -771,15 +908,17 @@ static int nf_tables_dump_tables(struct sk_buff *skb,
+ 				 struct netlink_callback *cb)
+ {
+ 	const struct nfgenmsg *nfmsg = nlmsg_data(cb->nlh);
++	struct nftables_pernet *nft_net;
+ 	const struct nft_table *table;
+ 	unsigned int idx = 0, s_idx = cb->args[0];
+ 	struct net *net = sock_net(skb->sk);
+ 	int family = nfmsg->nfgen_family;
+ 
+ 	rcu_read_lock();
+-	cb->seq = net->nft.base_seq;
++	nft_net = net_generic(net, nf_tables_net_id);
++	cb->seq = nft_net->base_seq;
+ 
+-	list_for_each_entry_rcu(table, &net->nft.tables, list) {
++	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+ 			continue;
+ 
+@@ -954,7 +1093,7 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
+ 		goto err;
+ 
+ 	nft_trans_table_update(trans) = true;
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 	return 0;
+ err:
+ 	nft_trans_destroy(trans);
+@@ -1017,6 +1156,7 @@ static int nf_tables_newtable(struct net *net, struct sock *nlsk,
+ 			      const struct nlattr * const nla[],
+ 			      struct netlink_ext_ack *extack)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
+ 	u8 genmask = nft_genmask_next(net);
+ 	int family = nfmsg->nfgen_family;
+@@ -1026,7 +1166,7 @@ static int nf_tables_newtable(struct net *net, struct sock *nlsk,
+ 	u32 flags = 0;
+ 	int err;
+ 
+-	lockdep_assert_held(&net->nft.commit_mutex);
++	lockdep_assert_held(&nft_net->commit_mutex);
+ 	attr = nla[NFTA_TABLE_NAME];
+ 	table = nft_table_lookup(net, attr, family, genmask);
+ 	if (IS_ERR(table)) {
+@@ -1084,7 +1224,7 @@ static int nf_tables_newtable(struct net *net, struct sock *nlsk,
+ 	if (err < 0)
+ 		goto err_trans;
+ 
+-	list_add_tail_rcu(&table->list, &net->nft.tables);
++	list_add_tail_rcu(&table->list, &nft_net->tables);
+ 	return 0;
+ err_trans:
+ 	rhltable_destroy(&table->chains_ht);
+@@ -1172,11 +1312,12 @@ out:
+ 
+ static int nft_flush(struct nft_ctx *ctx, int family)
+ {
++	struct nftables_pernet *nft_net = net_generic(ctx->net, nf_tables_net_id);
+ 	struct nft_table *table, *nt;
+ 	const struct nlattr * const *nla = ctx->nla;
+ 	int err = 0;
+ 
+-	list_for_each_entry_safe(table, nt, &ctx->net->nft.tables, list) {
++	list_for_each_entry_safe(table, nt, &nft_net->tables, list) {
+ 		if (family != AF_UNSPEC && table->family != family)
+ 			continue;
+ 
+@@ -1291,7 +1432,9 @@ nft_chain_lookup_byhandle(const struct nft_table *table, u64 handle, u8 genmask)
+ static bool lockdep_commit_lock_is_held(const struct net *net)
+ {
+ #ifdef CONFIG_PROVE_LOCKING
+-	return lockdep_is_held(&net->nft.commit_mutex);
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
++
++	return lockdep_is_held(&nft_net->commit_mutex);
+ #else
+ 	return true;
+ #endif
+@@ -1494,6 +1637,7 @@ nla_put_failure:
+ 
+ static void nf_tables_chain_notify(const struct nft_ctx *ctx, int event)
+ {
++	struct nftables_pernet *nft_net;
+ 	struct sk_buff *skb;
+ 	int err;
+ 
+@@ -1513,7 +1657,8 @@ static void nf_tables_chain_notify(const struct nft_ctx *ctx, int event)
+ 		goto err;
+ 	}
+ 
+-	nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list);
++	nft_net = net_generic(ctx->net, nf_tables_net_id);
++	nft_notify_enqueue(skb, ctx->report, &nft_net->notify_list);
+ 	return;
+ err:
+ 	nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS);
+@@ -1528,11 +1673,13 @@ static int nf_tables_dump_chains(struct sk_buff *skb,
+ 	unsigned int idx = 0, s_idx = cb->args[0];
+ 	struct net *net = sock_net(skb->sk);
+ 	int family = nfmsg->nfgen_family;
++	struct nftables_pernet *nft_net;
+ 
+ 	rcu_read_lock();
+-	cb->seq = net->nft.base_seq;
++	nft_net = net_generic(net, nf_tables_net_id);
++	cb->seq = nft_net->base_seq;
+ 
+-	list_for_each_entry_rcu(table, &net->nft.tables, list) {
++	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+ 			continue;
+ 
+@@ -1847,11 +1994,12 @@ static int nft_chain_parse_hook(struct net *net,
+ 				struct nft_chain_hook *hook, u8 family,
+ 				bool autoload)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nlattr *ha[NFTA_HOOK_MAX + 1];
+ 	const struct nft_chain_type *type;
+ 	int err;
+ 
+-	lockdep_assert_held(&net->nft.commit_mutex);
++	lockdep_assert_held(&nft_net->commit_mutex);
+ 	lockdep_nfnl_nft_mutex_not_held();
+ 
+ 	err = nla_parse_nested_deprecated(ha, NFTA_HOOK_MAX,
+@@ -1981,7 +2129,7 @@ static int nft_basechain_init(struct nft_base_chain *basechain, u8 family,
+ 	return 0;
+ }
+ 
+-static int nft_chain_add(struct nft_table *table, struct nft_chain *chain)
++int nft_chain_add(struct nft_table *table, struct nft_chain *chain)
+ {
+ 	int err;
+ 
+@@ -2244,6 +2392,7 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 
+ 	if (nla[NFTA_CHAIN_HANDLE] &&
+ 	    nla[NFTA_CHAIN_NAME]) {
++		struct nftables_pernet *nft_net = net_generic(ctx->net, nf_tables_net_id);
+ 		struct nft_trans *tmp;
+ 		char *name;
+ 
+@@ -2253,7 +2402,7 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 			goto err;
+ 
+ 		err = -EEXIST;
+-		list_for_each_entry(tmp, &ctx->net->nft.commit_list, list) {
++		list_for_each_entry(tmp, &nft_net->commit_list, list) {
+ 			if (tmp->msg_type == NFT_MSG_NEWCHAIN &&
+ 			    tmp->ctx.table == table &&
+ 			    nft_trans_chain_update(tmp) &&
+@@ -2267,7 +2416,7 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 
+ 		nft_trans_chain_name(trans) = name;
+ 	}
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+ 	return 0;
+ err:
+@@ -2278,17 +2427,19 @@ err:
+ 
+ static struct nft_chain *nft_chain_lookup_byid(const struct net *net,
+ 					       const struct nft_table *table,
+-					       const struct nlattr *nla)
++					       const struct nlattr *nla, u8 genmask)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	u32 id = ntohl(nla_get_be32(nla));
+ 	struct nft_trans *trans;
+ 
+-	list_for_each_entry(trans, &net->nft.commit_list, list) {
++	list_for_each_entry(trans, &nft_net->commit_list, list) {
+ 		struct nft_chain *chain = trans->ctx.chain;
+ 
+ 		if (trans->msg_type == NFT_MSG_NEWCHAIN &&
+ 		    chain->table == table &&
+-		    id == nft_trans_chain_id(trans))
++		    id == nft_trans_chain_id(trans) &&
++		    nft_active_genmask(chain, genmask))
+ 			return chain;
+ 	}
+ 	return ERR_PTR(-ENOENT);
+@@ -2299,6 +2450,7 @@ static int nf_tables_newchain(struct net *net, struct sock *nlsk,
+ 			      const struct nlattr * const nla[],
+ 			      struct netlink_ext_ack *extack)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
+ 	u8 genmask = nft_genmask_next(net);
+ 	int family = nfmsg->nfgen_family;
+@@ -2310,7 +2462,7 @@ static int nf_tables_newchain(struct net *net, struct sock *nlsk,
+ 	u64 handle = 0;
+ 	u32 flags = 0;
+ 
+-	lockdep_assert_held(&net->nft.commit_mutex);
++	lockdep_assert_held(&nft_net->commit_mutex);
+ 
+ 	table = nft_table_lookup(net, nla[NFTA_CHAIN_TABLE], family, genmask);
+ 	if (IS_ERR(table)) {
+@@ -2848,6 +3000,7 @@ nla_put_failure:
+ static void nf_tables_rule_notify(const struct nft_ctx *ctx,
+ 				  const struct nft_rule *rule, int event)
+ {
++	struct nftables_pernet *nft_net = net_generic(ctx->net, nf_tables_net_id);
+ 	struct sk_buff *skb;
+ 	int err;
+ 
+@@ -2867,7 +3020,7 @@ static void nf_tables_rule_notify(const struct nft_ctx *ctx,
+ 		goto err;
+ 	}
+ 
+-	nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list);
++	nft_notify_enqueue(skb, ctx->report, &nft_net->notify_list);
+ 	return;
+ err:
+ 	nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS);
+@@ -2925,11 +3078,13 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
+ 	unsigned int idx = 0;
+ 	struct net *net = sock_net(skb->sk);
+ 	int family = nfmsg->nfgen_family;
++	struct nftables_pernet *nft_net;
+ 
+ 	rcu_read_lock();
+-	cb->seq = net->nft.base_seq;
++	nft_net = net_generic(net, nf_tables_net_id);
++	cb->seq = nft_net->base_seq;
+ 
+-	list_for_each_entry_rcu(table, &net->nft.tables, list) {
++	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+ 			continue;
+ 
+@@ -3076,8 +3231,7 @@ err_fill_rule_info:
+ 	return err;
+ }
+ 
+-static void nf_tables_rule_destroy(const struct nft_ctx *ctx,
+-				   struct nft_rule *rule)
++void nf_tables_rule_destroy(const struct nft_ctx *ctx, struct nft_rule *rule)
+ {
+ 	struct nft_expr *expr, *next;
+ 
+@@ -3094,7 +3248,7 @@ static void nf_tables_rule_destroy(const struct nft_ctx *ctx,
+ 	kfree(rule);
+ }
+ 
+-void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *rule)
++static void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *rule)
+ {
+ 	nft_rule_expr_deactivate(ctx, rule, NFT_TRANS_RELEASE);
+ 	nf_tables_rule_destroy(ctx, rule);
+@@ -3145,6 +3299,8 @@ static int nft_table_validate(struct net *net, const struct nft_table *table)
+ 		err = nft_chain_validate(&ctx, chain);
+ 		if (err < 0)
+ 			return err;
++
++		cond_resched();
+ 	}
+ 
+ 	return 0;
+@@ -3161,6 +3317,7 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 			     const struct nlattr * const nla[],
+ 			     struct netlink_ext_ack *extack)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
+ 	u8 genmask = nft_genmask_next(net);
+ 	struct nft_expr_info *info = NULL;
+@@ -3178,7 +3335,7 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 	int err, rem;
+ 	u64 handle, pos_handle;
+ 
+-	lockdep_assert_held(&net->nft.commit_mutex);
++	lockdep_assert_held(&nft_net->commit_mutex);
+ 
+ 	table = nft_table_lookup(net, nla[NFTA_RULE_TABLE], family, genmask);
+ 	if (IS_ERR(table)) {
+@@ -3197,7 +3354,8 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 			return -EOPNOTSUPP;
+ 
+ 	} else if (nla[NFTA_RULE_CHAIN_ID]) {
+-		chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID]);
++		chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID],
++					      genmask);
+ 		if (IS_ERR(chain)) {
+ 			NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN_ID]);
+ 			return PTR_ERR(chain);
+@@ -3351,7 +3509,7 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 	kvfree(info);
+ 	chain->use++;
+ 
+-	if (net->nft.validate_state == NFT_VALIDATE_DO)
++	if (nft_net->validate_state == NFT_VALIDATE_DO)
+ 		return nft_table_validate(net, table);
+ 
+ 	if (chain->flags & NFT_CHAIN_HW_OFFLOAD) {
+@@ -3364,7 +3522,8 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 
+ 	return 0;
+ err2:
+-	nf_tables_rule_release(&ctx, rule);
++	nft_rule_expr_deactivate(&ctx, rule, NFT_TRANS_PREPARE_ERROR);
++	nf_tables_rule_destroy(&ctx, rule);
+ err1:
+ 	for (i = 0; i < n; i++) {
+ 		if (info[i].ops) {
+@@ -3381,10 +3540,11 @@ static struct nft_rule *nft_rule_lookup_byid(const struct net *net,
+ 					     const struct nft_chain *chain,
+ 					     const struct nlattr *nla)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	u32 id = ntohl(nla_get_be32(nla));
+ 	struct nft_trans *trans;
+ 
+-	list_for_each_entry(trans, &net->nft.commit_list, list) {
++	list_for_each_entry(trans, &nft_net->commit_list, list) {
+ 		struct nft_rule *rule = nft_trans_rule(trans);
+ 
+ 		if (trans->msg_type == NFT_MSG_NEWRULE &&
+@@ -3451,6 +3611,8 @@ static int nf_tables_delrule(struct net *net, struct sock *nlsk,
+ 		list_for_each_entry(chain, &table->chains, list) {
+ 			if (!nft_is_active_next(net, chain))
+ 				continue;
++			if (nft_chain_is_bound(chain))
++				continue;
+ 
+ 			ctx.chain = chain;
+ 			err = nft_delrule_by_chain(&ctx);
+@@ -3497,13 +3659,14 @@ nft_select_set_ops(const struct nft_ctx *ctx,
+ 		   const struct nft_set_desc *desc,
+ 		   enum nft_set_policies policy)
+ {
++	struct nftables_pernet *nft_net = net_generic(ctx->net, nf_tables_net_id);
+ 	const struct nft_set_ops *ops, *bops;
+ 	struct nft_set_estimate est, best;
+ 	const struct nft_set_type *type;
+ 	u32 flags = 0;
+ 	int i;
+ 
+-	lockdep_assert_held(&ctx->net->nft.commit_mutex);
++	lockdep_assert_held(&nft_net->commit_mutex);
+ 	lockdep_nfnl_nft_mutex_not_held();
+ 
+ 	if (nla[NFTA_SET_FLAGS] != NULL)
+@@ -3641,10 +3804,11 @@ static struct nft_set *nft_set_lookup_byid(const struct net *net,
+ 					   const struct nft_table *table,
+ 					   const struct nlattr *nla, u8 genmask)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_trans *trans;
+ 	u32 id = ntohl(nla_get_be32(nla));
+ 
+-	list_for_each_entry(trans, &net->nft.commit_list, list) {
++	list_for_each_entry(trans, &nft_net->commit_list, list) {
+ 		if (trans->msg_type == NFT_MSG_NEWSET) {
+ 			struct nft_set *set = nft_trans_set(trans);
+ 
+@@ -3867,6 +4031,7 @@ static void nf_tables_set_notify(const struct nft_ctx *ctx,
+ 				 const struct nft_set *set, int event,
+ 			         gfp_t gfp_flags)
+ {
++	struct nftables_pernet *nft_net = net_generic(ctx->net, nf_tables_net_id);
+ 	struct sk_buff *skb;
+ 	u32 portid = ctx->portid;
+ 	int err;
+@@ -3885,7 +4050,7 @@ static void nf_tables_set_notify(const struct nft_ctx *ctx,
+ 		goto err;
+ 	}
+ 
+-	nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list);
++	nft_notify_enqueue(skb, ctx->report, &nft_net->notify_list);
+ 	return;
+ err:
+ 	nfnetlink_set_err(ctx->net, portid, NFNLGRP_NFTABLES, -ENOBUFS);
+@@ -3898,14 +4063,16 @@ static int nf_tables_dump_sets(struct sk_buff *skb, struct netlink_callback *cb)
+ 	struct nft_table *table, *cur_table = (struct nft_table *)cb->args[2];
+ 	struct net *net = sock_net(skb->sk);
+ 	struct nft_ctx *ctx = cb->data, ctx_set;
++	struct nftables_pernet *nft_net;
+ 
+ 	if (cb->args[1])
+ 		return skb->len;
+ 
+ 	rcu_read_lock();
+-	cb->seq = net->nft.base_seq;
++	nft_net = net_generic(net, nf_tables_net_id);
++	cb->seq = nft_net->base_seq;
+ 
+-	list_for_each_entry_rcu(table, &net->nft.tables, list) {
++	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (ctx->family != NFPROTO_UNSPEC &&
+ 		    ctx->family != table->family)
+ 			continue;
+@@ -4339,7 +4506,7 @@ err_set_expr_alloc:
+ 	if (set->expr)
+ 		nft_expr_destroy(&ctx, set->expr);
+ 
+-	ops->destroy(set);
++	ops->destroy(&ctx, set);
+ err_set_init:
+ 	kfree(set->name);
+ err_set_name:
+@@ -4355,7 +4522,7 @@ static void nft_set_destroy(const struct nft_ctx *ctx, struct nft_set *set)
+ 	if (set->expr)
+ 		nft_expr_destroy(ctx, set->expr);
+ 
+-	set->ops->destroy(set);
++	set->ops->destroy(ctx, set);
+ 	kfree(set->name);
+ 	kvfree(set);
+ }
+@@ -4479,10 +4646,39 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 	}
+ }
+ 
++static void nft_setelem_data_activate(const struct net *net,
++				      const struct nft_set *set,
++				      struct nft_set_elem *elem);
++
++static int nft_mapelem_activate(const struct nft_ctx *ctx,
++				struct nft_set *set,
++				const struct nft_set_iter *iter,
++				struct nft_set_elem *elem)
++{
++	nft_setelem_data_activate(ctx->net, set, elem);
++
++	return 0;
++}
++
++static void nft_map_activate(const struct nft_ctx *ctx, struct nft_set *set)
++{
++	struct nft_set_iter iter = {
++		.genmask	= nft_genmask_next(ctx->net),
++		.fn		= nft_mapelem_activate,
++	};
++
++	set->ops->walk(ctx, set, &iter);
++	WARN_ON_ONCE(iter.err);
++}
++
+ void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
+ {
+-	if (nft_set_is_anonymous(set))
++	if (nft_set_is_anonymous(set)) {
++		if (set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
++			nft_map_activate(ctx, set);
++
+ 		nft_clear(ctx->net, set);
++	}
+ 
+ 	set->use++;
+ }
+@@ -4493,14 +4689,30 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			      enum nft_trans_phase phase)
+ {
+ 	switch (phase) {
+-	case NFT_TRANS_PREPARE:
++	case NFT_TRANS_PREPARE_ERROR:
++		nft_set_trans_unbind(ctx, set);
+ 		if (nft_set_is_anonymous(set))
+ 			nft_deactivate_next(ctx->net, set);
++		else
++			list_del_rcu(&binding->list);
++
++		set->use--;
++		break;
++	case NFT_TRANS_PREPARE:
++		if (nft_set_is_anonymous(set)) {
++			if (set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
++				nft_map_deactivate(ctx, set);
+ 
++			nft_deactivate_next(ctx->net, set);
++		}
+ 		set->use--;
+ 		return;
+ 	case NFT_TRANS_ABORT:
+ 	case NFT_TRANS_RELEASE:
++		if (nft_set_is_anonymous(set) &&
++		    set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
++			nft_map_deactivate(ctx, set);
++
+ 		set->use--;
+ 		fallthrough;
+ 	default:
+@@ -4706,6 +4918,7 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
+ {
+ 	struct nft_set_dump_ctx *dump_ctx = cb->data;
+ 	struct net *net = sock_net(skb->sk);
++	struct nftables_pernet *nft_net;
+ 	struct nft_table *table;
+ 	struct nft_set *set;
+ 	struct nft_set_dump_args args;
+@@ -4716,7 +4929,8 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
+ 	int event;
+ 
+ 	rcu_read_lock();
+-	list_for_each_entry_rcu(table, &net->nft.tables, list) {
++	nft_net = net_generic(net, nf_tables_net_id);
++	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (dump_ctx->ctx.family != NFPROTO_UNSPEC &&
+ 		    dump_ctx->ctx.family != table->family)
+ 			continue;
+@@ -4995,6 +5209,7 @@ static void nf_tables_setelem_notify(const struct nft_ctx *ctx,
+ 				     const struct nft_set_elem *elem,
+ 				     int event, u16 flags)
+ {
++	struct nftables_pernet *nft_net;
+ 	struct net *net = ctx->net;
+ 	u32 portid = ctx->portid;
+ 	struct sk_buff *skb;
+@@ -5014,7 +5229,8 @@ static void nf_tables_setelem_notify(const struct nft_ctx *ctx,
+ 		goto err;
+ 	}
+ 
+-	nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list);
++	nft_net = net_generic(net, nf_tables_net_id);
++	nft_notify_enqueue(skb, ctx->report, &nft_net->notify_list);
+ 	return;
+ err:
+ 	nfnetlink_set_err(net, portid, NFNLGRP_NFTABLES, -ENOBUFS);
+@@ -5103,6 +5319,7 @@ static void nft_set_elem_expr_destroy(const struct nft_ctx *ctx,
+ 	}
+ }
+ 
++/* Drop references and destroy. Called from gc, dynset and abort path. */
+ void nft_set_elem_destroy(const struct nft_set *set, void *elem,
+ 			  bool destroy_expr)
+ {
+@@ -5124,11 +5341,11 @@ void nft_set_elem_destroy(const struct nft_set *set, void *elem,
+ }
+ EXPORT_SYMBOL_GPL(nft_set_elem_destroy);
+ 
+-/* Only called from commit path, nft_set_elem_deactivate() already deals with
+- * the refcounting from the preparation phase.
++/* Destroy element. References have been already dropped in the preparation
++ * path via nft_setelem_data_deactivate().
+  */
+-static void nf_tables_set_elem_destroy(const struct nft_ctx *ctx,
+-				       const struct nft_set *set, void *elem)
++void nf_tables_set_elem_destroy(const struct nft_ctx *ctx,
++				const struct nft_set *set, void *elem)
+ {
+ 	struct nft_set_ext *ext = nft_set_elem_ext(set, elem);
+ 
+@@ -5410,7 +5627,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ 	}
+ 
+ 	nft_trans_elem(trans) = elem;
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 	return 0;
+ 
+ err_set_full:
+@@ -5441,6 +5658,7 @@ static int nf_tables_newsetelem(struct net *net, struct sock *nlsk,
+ 				const struct nlattr * const nla[],
+ 				struct netlink_ext_ack *extack)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	u8 genmask = nft_genmask_next(net);
+ 	const struct nlattr *attr;
+ 	struct nft_set *set;
+@@ -5470,7 +5688,7 @@ static int nf_tables_newsetelem(struct net *net, struct sock *nlsk,
+ 			return err;
+ 	}
+ 
+-	if (net->nft.validate_state == NFT_VALIDATE_DO)
++	if (nft_net->validate_state == NFT_VALIDATE_DO)
+ 		return nft_table_validate(net, ctx.table);
+ 
+ 	return 0;
+@@ -5490,7 +5708,6 @@ static int nf_tables_newsetelem(struct net *net, struct sock *nlsk,
+ void nft_data_hold(const struct nft_data *data, enum nft_data_types type)
+ {
+ 	struct nft_chain *chain;
+-	struct nft_rule *rule;
+ 
+ 	if (type == NFT_DATA_VERDICT) {
+ 		switch (data->verdict.code) {
+@@ -5498,23 +5715,14 @@ void nft_data_hold(const struct nft_data *data, enum nft_data_types type)
+ 		case NFT_GOTO:
+ 			chain = data->verdict.chain;
+ 			chain->use++;
+-
+-			if (!nft_chain_is_bound(chain))
+-				break;
+-
+-			chain->table->use++;
+-			list_for_each_entry(rule, &chain->rules, list)
+-				chain->use++;
+-
+-			nft_chain_add(chain->table, chain);
+ 			break;
+ 		}
+ 	}
+ }
+ 
+-static void nft_set_elem_activate(const struct net *net,
+-				  const struct nft_set *set,
+-				  struct nft_set_elem *elem)
++static void nft_setelem_data_activate(const struct net *net,
++				      const struct nft_set *set,
++				      struct nft_set_elem *elem)
+ {
+ 	const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ 
+@@ -5524,9 +5732,9 @@ static void nft_set_elem_activate(const struct net *net,
+ 		(*nft_set_ext_obj(ext))->use++;
+ }
+ 
+-static void nft_set_elem_deactivate(const struct net *net,
+-				    const struct nft_set *set,
+-				    struct nft_set_elem *elem)
++static void nft_setelem_data_deactivate(const struct net *net,
++					const struct nft_set *set,
++					struct nft_set_elem *elem)
+ {
+ 	const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ 
+@@ -5603,10 +5811,10 @@ static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set,
+ 	kfree(elem.priv);
+ 	elem.priv = priv;
+ 
+-	nft_set_elem_deactivate(ctx->net, set, &elem);
++	nft_setelem_data_deactivate(ctx->net, set, &elem);
+ 
+ 	nft_trans_elem(trans) = elem;
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 	return 0;
+ 
+ fail_ops:
+@@ -5637,10 +5845,10 @@ static int nft_flush_set(const struct nft_ctx *ctx,
+ 	}
+ 	set->ndeact++;
+ 
+-	nft_set_elem_deactivate(ctx->net, set, elem);
++	nft_setelem_data_deactivate(ctx->net, set, elem);
+ 	nft_trans_elem_set(trans) = set;
+ 	nft_trans_elem(trans) = *elem;
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+ 	return 0;
+ err1:
+@@ -5939,7 +6147,7 @@ static int nf_tables_updobj(const struct nft_ctx *ctx,
+ 	nft_trans_obj(trans) = obj;
+ 	nft_trans_obj_update(trans) = true;
+ 	nft_trans_obj_newobj(trans) = newobj;
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+ 	return 0;
+ 
+@@ -6102,6 +6310,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+ 	struct nft_obj_filter *filter = cb->data;
+ 	struct net *net = sock_net(skb->sk);
+ 	int family = nfmsg->nfgen_family;
++	struct nftables_pernet *nft_net;
+ 	struct nft_object *obj;
+ 	bool reset = false;
+ 
+@@ -6109,9 +6318,10 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+ 		reset = true;
+ 
+ 	rcu_read_lock();
+-	cb->seq = net->nft.base_seq;
++	nft_net = net_generic(net, nf_tables_net_id);
++	cb->seq = nft_net->base_seq;
+ 
+-	list_for_each_entry_rcu(table, &net->nft.tables, list) {
++	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+ 			continue;
+ 
+@@ -6134,7 +6344,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+ 				char *buf = kasprintf(GFP_ATOMIC,
+ 						      "%s:%u",
+ 						      table->name,
+-						      net->nft.base_seq);
++						      nft_net->base_seq);
+ 
+ 				audit_log_nfcfg(buf,
+ 						family,
+@@ -6255,8 +6465,11 @@ static int nf_tables_getobj(struct net *net, struct sock *nlsk,
+ 		reset = true;
+ 
+ 	if (reset) {
+-		char *buf = kasprintf(GFP_ATOMIC, "%s:%u",
+-				      table->name, net->nft.base_seq);
++		const struct nftables_pernet *nft_net;
++		char *buf;
++
++		nft_net = net_generic(net, nf_tables_net_id);
++		buf = kasprintf(GFP_ATOMIC, "%s:%u", table->name, nft_net->base_seq);
+ 
+ 		audit_log_nfcfg(buf,
+ 				family,
+@@ -6341,10 +6554,11 @@ void nft_obj_notify(struct net *net, const struct nft_table *table,
+ 		    struct nft_object *obj, u32 portid, u32 seq, int event,
+ 		    int family, int report, gfp_t gfp)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct sk_buff *skb;
+ 	int err;
+ 	char *buf = kasprintf(gfp, "%s:%u",
+-			      table->name, net->nft.base_seq);
++			      table->name, nft_net->base_seq);
+ 
+ 	audit_log_nfcfg(buf,
+ 			family,
+@@ -6370,7 +6584,7 @@ void nft_obj_notify(struct net *net, const struct nft_table *table,
+ 		goto err;
+ 	}
+ 
+-	nft_notify_enqueue(skb, report, &net->nft.notify_list);
++	nft_notify_enqueue(skb, report, &nft_net->notify_list);
+ 	return;
+ err:
+ 	nfnetlink_set_err(net, portid, NFNLGRP_NFTABLES, -ENOBUFS);
+@@ -6432,6 +6646,7 @@ void nf_tables_deactivate_flowtable(const struct nft_ctx *ctx,
+ 				    enum nft_trans_phase phase)
+ {
+ 	switch (phase) {
++	case NFT_TRANS_PREPARE_ERROR:
+ 	case NFT_TRANS_PREPARE:
+ 	case NFT_TRANS_ABORT:
+ 	case NFT_TRANS_RELEASE:
+@@ -6706,7 +6921,7 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh,
+ 	INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans));
+ 	list_splice(&flowtable_hook.list, &nft_trans_flowtable_hooks(trans));
+ 
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+ 	return 0;
+ 
+@@ -6896,7 +7111,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
+ 	list_splice(&flowtable_del_list, &nft_trans_flowtable_hooks(trans));
+ 	nft_flowtable_hook_release(&flowtable_hook);
+ 
+-	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
++	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+ 	return 0;
+ 
+@@ -7022,12 +7237,14 @@ static int nf_tables_dump_flowtable(struct sk_buff *skb,
+ 	struct net *net = sock_net(skb->sk);
+ 	int family = nfmsg->nfgen_family;
+ 	struct nft_flowtable *flowtable;
++	struct nftables_pernet *nft_net;
+ 	const struct nft_table *table;
+ 
+ 	rcu_read_lock();
+-	cb->seq = net->nft.base_seq;
++	nft_net = net_generic(net, nf_tables_net_id);
++	cb->seq = nft_net->base_seq;
+ 
+-	list_for_each_entry_rcu(table, &net->nft.tables, list) {
++	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+ 			continue;
+ 
+@@ -7162,6 +7379,7 @@ static void nf_tables_flowtable_notify(struct nft_ctx *ctx,
+ 				       struct list_head *hook_list,
+ 				       int event)
+ {
++	struct nftables_pernet *nft_net = net_generic(ctx->net, nf_tables_net_id);
+ 	struct sk_buff *skb;
+ 	int err;
+ 
+@@ -7181,7 +7399,7 @@ static void nf_tables_flowtable_notify(struct nft_ctx *ctx,
+ 		goto err;
+ 	}
+ 
+-	nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list);
++	nft_notify_enqueue(skb, ctx->report, &nft_net->notify_list);
+ 	return;
+ err:
+ 	nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS);
+@@ -7206,6 +7424,7 @@ static void nf_tables_flowtable_destroy(struct nft_flowtable *flowtable)
+ static int nf_tables_fill_gen_info(struct sk_buff *skb, struct net *net,
+ 				   u32 portid, u32 seq)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nlmsghdr *nlh;
+ 	char buf[TASK_COMM_LEN];
+ 	int event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_NEWGEN);
+@@ -7215,7 +7434,7 @@ static int nf_tables_fill_gen_info(struct sk_buff *skb, struct net *net,
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	if (nla_put_be32(skb, NFTA_GEN_ID, htonl(net->nft.base_seq)) ||
++	if (nla_put_be32(skb, NFTA_GEN_ID, htonl(nft_net->base_seq)) ||
+ 	    nla_put_be32(skb, NFTA_GEN_PROC_PID, htonl(task_pid_nr(current))) ||
+ 	    nla_put_string(skb, NFTA_GEN_PROC_NAME, get_task_comm(buf, current)))
+ 		goto nla_put_failure;
+@@ -7250,6 +7469,7 @@ static int nf_tables_flowtable_event(struct notifier_block *this,
+ {
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+ 	struct nft_flowtable *flowtable;
++	struct nftables_pernet *nft_net;
+ 	struct nft_table *table;
+ 	struct net *net;
+ 
+@@ -7257,13 +7477,14 @@ static int nf_tables_flowtable_event(struct notifier_block *this,
+ 		return 0;
+ 
+ 	net = dev_net(dev);
+-	mutex_lock(&net->nft.commit_mutex);
+-	list_for_each_entry(table, &net->nft.tables, list) {
++	nft_net = net_generic(net, nf_tables_net_id);
++	mutex_lock(&nft_net->commit_mutex);
++	list_for_each_entry(table, &nft_net->tables, list) {
+ 		list_for_each_entry(flowtable, &table->flowtables, list) {
+ 			nft_flowtable_event(event, dev, flowtable);
+ 		}
+ 	}
+-	mutex_unlock(&net->nft.commit_mutex);
++	mutex_unlock(&nft_net->commit_mutex);
+ 
+ 	return NOTIFY_DONE;
+ }
+@@ -7444,16 +7665,17 @@ static const struct nfnl_callback nf_tables_cb[NFT_MSG_MAX] = {
+ 
+ static int nf_tables_validate(struct net *net)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_table *table;
+ 
+-	switch (net->nft.validate_state) {
++	switch (nft_net->validate_state) {
+ 	case NFT_VALIDATE_SKIP:
+ 		break;
+ 	case NFT_VALIDATE_NEED:
+ 		nft_validate_state_update(net, NFT_VALIDATE_DO);
+ 		fallthrough;
+ 	case NFT_VALIDATE_DO:
+-		list_for_each_entry(table, &net->nft.tables, list) {
++		list_for_each_entry(table, &nft_net->tables, list) {
+ 			if (nft_table_validate(net, table) < 0)
+ 				return -EAGAIN;
+ 		}
+@@ -7586,7 +7808,7 @@ static void nf_tables_trans_destroy_work(struct work_struct *w)
+ 	synchronize_rcu();
+ 
+ 	list_for_each_entry_safe(trans, next, &head, list) {
+-		list_del(&trans->list);
++		nft_trans_list_del(trans);
+ 		nft_commit_release(trans);
+ 	}
+ }
+@@ -7630,9 +7852,10 @@ static int nf_tables_commit_chain_prepare(struct net *net, struct nft_chain *cha
+ 
+ static void nf_tables_commit_chain_prepare_cancel(struct net *net)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_trans *trans, *next;
+ 
+-	list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) {
++	list_for_each_entry_safe(trans, next, &nft_net->commit_list, list) {
+ 		struct nft_chain *chain = trans->ctx.chain;
+ 
+ 		if (trans->msg_type == NFT_MSG_NEWRULE ||
+@@ -7730,10 +7953,11 @@ void nft_chain_del(struct nft_chain *chain)
+ 
+ static void nf_tables_module_autoload_cleanup(struct net *net)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_module_request *req, *next;
+ 
+-	WARN_ON_ONCE(!list_empty(&net->nft.commit_list));
+-	list_for_each_entry_safe(req, next, &net->nft.module_list, list) {
++	WARN_ON_ONCE(!list_empty(&nft_net->commit_list));
++	list_for_each_entry_safe(req, next, &nft_net->module_list, list) {
+ 		WARN_ON_ONCE(!req->done);
+ 		list_del(&req->list);
+ 		kfree(req);
+@@ -7742,6 +7966,7 @@ static void nf_tables_module_autoload_cleanup(struct net *net)
+ 
+ static void nf_tables_commit_release(struct net *net)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_trans *trans;
+ 
+ 	/* all side effects have to be made visible.
+@@ -7751,35 +7976,36 @@ static void nf_tables_commit_release(struct net *net)
+ 	 * Memory reclaim happens asynchronously from work queue
+ 	 * to prevent expensive synchronize_rcu() in commit phase.
+ 	 */
+-	if (list_empty(&net->nft.commit_list)) {
++	if (list_empty(&nft_net->commit_list)) {
+ 		nf_tables_module_autoload_cleanup(net);
+-		mutex_unlock(&net->nft.commit_mutex);
++		mutex_unlock(&nft_net->commit_mutex);
+ 		return;
+ 	}
+ 
+-	trans = list_last_entry(&net->nft.commit_list,
++	trans = list_last_entry(&nft_net->commit_list,
+ 				struct nft_trans, list);
+ 	get_net(trans->ctx.net);
+ 	WARN_ON_ONCE(trans->put_net);
+ 
+ 	trans->put_net = true;
+ 	spin_lock(&nf_tables_destroy_list_lock);
+-	list_splice_tail_init(&net->nft.commit_list, &nf_tables_destroy_list);
++	list_splice_tail_init(&nft_net->commit_list, &nf_tables_destroy_list);
+ 	spin_unlock(&nf_tables_destroy_list_lock);
+ 
+ 	nf_tables_module_autoload_cleanup(net);
+ 	schedule_work(&trans_destroy_work);
+ 
+-	mutex_unlock(&net->nft.commit_mutex);
++	mutex_unlock(&nft_net->commit_mutex);
+ }
+ 
+ static void nft_commit_notify(struct net *net, u32 portid)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct sk_buff *batch_skb = NULL, *nskb, *skb;
+ 	unsigned char *data;
+ 	int len;
+ 
+-	list_for_each_entry_safe(skb, nskb, &net->nft.notify_list, list) {
++	list_for_each_entry_safe(skb, nskb, &nft_net->notify_list, list) {
+ 		if (!batch_skb) {
+ new_batch:
+ 			batch_skb = skb;
+@@ -7805,7 +8031,7 @@ new_batch:
+ 			       NFT_CB(batch_skb).report, GFP_KERNEL);
+ 	}
+ 
+-	WARN_ON_ONCE(!list_empty(&net->nft.notify_list));
++	WARN_ON_ONCE(!list_empty(&nft_net->notify_list));
+ }
+ 
+ static int nf_tables_commit_audit_alloc(struct list_head *adl,
+@@ -7871,6 +8097,7 @@ static void nf_tables_commit_audit_log(struct list_head *adl, u32 generation)
+ 
+ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_trans *trans, *next;
+ 	struct nft_trans_elem *te;
+ 	struct nft_chain *chain;
+@@ -7878,11 +8105,31 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 	LIST_HEAD(adl);
+ 	int err;
+ 
+-	if (list_empty(&net->nft.commit_list)) {
+-		mutex_unlock(&net->nft.commit_mutex);
++	if (list_empty(&nft_net->commit_list)) {
++		mutex_unlock(&nft_net->commit_mutex);
+ 		return 0;
+ 	}
+ 
++	list_for_each_entry(trans, &nft_net->binding_list, binding_list) {
++		switch (trans->msg_type) {
++		case NFT_MSG_NEWSET:
++			if (nft_set_is_anonymous(nft_trans_set(trans)) &&
++			    !nft_trans_set_bound(trans)) {
++				pr_warn_once("nftables ruleset with unbound set\n");
++				return -EINVAL;
++			}
++			break;
++		case NFT_MSG_NEWCHAIN:
++			if (!nft_trans_chain_update(trans) &&
++			    nft_chain_binding(nft_trans_chain(trans)) &&
++			    !nft_trans_chain_bound(trans)) {
++				pr_warn_once("nftables ruleset with unbound chain\n");
++				return -EINVAL;
++			}
++			break;
++		}
++	}
++
+ 	/* 0. Validate ruleset, otherwise roll back for error reporting. */
+ 	if (nf_tables_validate(net) < 0)
+ 		return -EAGAIN;
+@@ -7892,7 +8139,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 		return err;
+ 
+ 	/* 1.  Allocate space for next generation rules_gen_X[] */
+-	list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) {
++	list_for_each_entry_safe(trans, next, &nft_net->commit_list, list) {
+ 		int ret;
+ 
+ 		ret = nf_tables_commit_audit_alloc(&adl, trans->ctx.table);
+@@ -7915,7 +8162,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 	}
+ 
+ 	/* step 2.  Make rules_gen_X visible to packet path */
+-	list_for_each_entry(table, &net->nft.tables, list) {
++	list_for_each_entry(table, &nft_net->tables, list) {
+ 		list_for_each_entry(chain, &table->chains, list)
+ 			nf_tables_commit_chain(net, chain);
+ 	}
+@@ -7924,12 +8171,13 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 	 * Bump generation counter, invalidate any dump in progress.
+ 	 * Cannot fail after this point.
+ 	 */
+-	while (++net->nft.base_seq == 0);
++	while (++nft_net->base_seq == 0)
++		;
+ 
+ 	/* step 3. Start new generation, rules_gen_X now in use. */
+ 	net->nft.gencursor = nft_gencursor_next(net);
+ 
+-	list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) {
++	list_for_each_entry_safe(trans, next, &nft_net->commit_list, list) {
+ 		nf_tables_commit_audit_collect(&adl, trans->ctx.table,
+ 					       trans->msg_type);
+ 		switch (trans->msg_type) {
+@@ -8089,7 +8337,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 
+ 	nft_commit_notify(net, NETLINK_CB(skb).portid);
+ 	nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN);
+-	nf_tables_commit_audit_log(&adl, net->nft.base_seq);
++	nf_tables_commit_audit_log(&adl, nft_net->base_seq);
+ 	nf_tables_commit_release(net);
+ 
+ 	return 0;
+@@ -8097,17 +8345,18 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 
+ static void nf_tables_module_autoload(struct net *net)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_module_request *req, *next;
+ 	LIST_HEAD(module_list);
+ 
+-	list_splice_init(&net->nft.module_list, &module_list);
+-	mutex_unlock(&net->nft.commit_mutex);
++	list_splice_init(&nft_net->module_list, &module_list);
++	mutex_unlock(&nft_net->commit_mutex);
+ 	list_for_each_entry_safe(req, next, &module_list, list) {
+ 		request_module("%s", req->module);
+ 		req->done = true;
+ 	}
+-	mutex_lock(&net->nft.commit_mutex);
+-	list_splice(&module_list, &net->nft.module_list);
++	mutex_lock(&nft_net->commit_mutex);
++	list_splice(&module_list, &nft_net->module_list);
+ }
+ 
+ static void nf_tables_abort_release(struct nft_trans *trans)
+@@ -8144,6 +8393,7 @@ static void nf_tables_abort_release(struct nft_trans *trans)
+ 
+ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_trans *trans, *next;
+ 	struct nft_trans_elem *te;
+ 
+@@ -8151,7 +8401,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 	    nf_tables_validate(net) < 0)
+ 		return -EAGAIN;
+ 
+-	list_for_each_entry_safe_reverse(trans, next, &net->nft.commit_list,
++	list_for_each_entry_safe_reverse(trans, next, &nft_net->commit_list,
+ 					 list) {
+ 		switch (trans->msg_type) {
+ 		case NFT_MSG_NEWTABLE:
+@@ -8176,7 +8426,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 				kfree(nft_trans_chain_name(trans));
+ 				nft_trans_destroy(trans);
+ 			} else {
+-				if (nft_chain_is_bound(trans->ctx.chain)) {
++				if (nft_trans_chain_bound(trans)) {
+ 					nft_trans_destroy(trans);
+ 					break;
+ 				}
+@@ -8193,6 +8443,10 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_NEWRULE:
++			if (nft_trans_rule_bound(trans)) {
++				nft_trans_destroy(trans);
++				break;
++			}
+ 			trans->ctx.chain->use--;
+ 			list_del_rcu(&nft_trans_rule(trans)->list);
+ 			nft_rule_expr_deactivate(&trans->ctx,
+@@ -8216,6 +8470,9 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 		case NFT_MSG_DELSET:
+ 			trans->ctx.table->use++;
+ 			nft_clear(trans->ctx.net, nft_trans_set(trans));
++			if (nft_trans_set(trans)->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
++				nft_map_activate(&trans->ctx, nft_trans_set(trans));
++
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_NEWSETELEM:
+@@ -8230,7 +8487,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 		case NFT_MSG_DELSETELEM:
+ 			te = (struct nft_trans_elem *)trans->data;
+ 
+-			nft_set_elem_activate(net, te->set, &te->elem);
++			nft_setelem_data_activate(net, te->set, &te->elem);
+ 			te->set->ops->activate(net, te->set, &te->elem);
+ 			te->set->ndeact--;
+ 
+@@ -8277,8 +8534,8 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 	synchronize_rcu();
+ 
+ 	list_for_each_entry_safe_reverse(trans, next,
+-					 &net->nft.commit_list, list) {
+-		list_del(&trans->list);
++					 &nft_net->commit_list, list) {
++		nft_trans_list_del(trans);
+ 		nf_tables_abort_release(trans);
+ 	}
+ 
+@@ -8293,22 +8550,24 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ static int nf_tables_abort(struct net *net, struct sk_buff *skb,
+ 			   enum nfnl_abort_action action)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	int ret = __nf_tables_abort(net, action);
+ 
+-	mutex_unlock(&net->nft.commit_mutex);
++	mutex_unlock(&nft_net->commit_mutex);
+ 
+ 	return ret;
+ }
+ 
+ static bool nf_tables_valid_genid(struct net *net, u32 genid)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	bool genid_ok;
+ 
+-	mutex_lock(&net->nft.commit_mutex);
++	mutex_lock(&nft_net->commit_mutex);
+ 
+-	genid_ok = genid == 0 || net->nft.base_seq == genid;
++	genid_ok = genid == 0 || nft_net->base_seq == genid;
+ 	if (!genid_ok)
+-		mutex_unlock(&net->nft.commit_mutex);
++		mutex_unlock(&nft_net->commit_mutex);
+ 
+ 	/* else, commit mutex has to be released by commit or abort function */
+ 	return genid_ok;
+@@ -8657,6 +8916,9 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 
+ 	if (!tb[NFTA_VERDICT_CODE])
+ 		return -EINVAL;
++
++	/* zero padding hole for memcmp */
++	memset(data, 0, sizeof(*data));
+ 	data->verdict.code = ntohl(nla_get_be32(tb[NFTA_VERDICT_CODE]));
+ 
+ 	switch (data->verdict.code) {
+@@ -8682,7 +8944,8 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 						 genmask);
+ 		} else if (tb[NFTA_VERDICT_CHAIN_ID]) {
+ 			chain = nft_chain_lookup_byid(ctx->net, ctx->table,
+-						      tb[NFTA_VERDICT_CHAIN_ID]);
++						      tb[NFTA_VERDICT_CHAIN_ID],
++						      genmask);
+ 			if (IS_ERR(chain))
+ 				return PTR_ERR(chain);
+ 		} else {
+@@ -8712,22 +8975,12 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ static void nft_verdict_uninit(const struct nft_data *data)
+ {
+ 	struct nft_chain *chain;
+-	struct nft_rule *rule;
+ 
+ 	switch (data->verdict.code) {
+ 	case NFT_JUMP:
+ 	case NFT_GOTO:
+ 		chain = data->verdict.chain;
+ 		chain->use--;
+-
+-		if (!nft_chain_is_bound(chain))
+-			break;
+-
+-		chain->table->use--;
+-		list_for_each_entry(rule, &chain->rules, list)
+-			chain->use--;
+-
+-		nft_chain_del(chain);
+ 		break;
+ 	}
+ }
+@@ -8909,19 +9162,19 @@ EXPORT_SYMBOL_GPL(__nft_release_basechain);
+ 
+ static void __nft_release_hooks(struct net *net)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_table *table;
+ 	struct nft_chain *chain;
+ 
+-	list_for_each_entry(table, &net->nft.tables, list) {
++	list_for_each_entry(table, &nft_net->tables, list) {
+ 		list_for_each_entry(chain, &table->chains, list)
+ 			nf_tables_unregister_hook(net, table, chain);
+ 	}
+ }
+ 
+-static void __nft_release_tables(struct net *net)
++static void __nft_release_table(struct net *net, struct nft_table *table)
+ {
+ 	struct nft_flowtable *flowtable, *nf;
+-	struct nft_table *table, *nt;
+ 	struct nft_chain *chain, *nc;
+ 	struct nft_object *obj, *ne;
+ 	struct nft_rule *rule, *nr;
+@@ -8931,79 +9184,101 @@ static void __nft_release_tables(struct net *net)
+ 		.family	= NFPROTO_NETDEV,
+ 	};
+ 
+-	list_for_each_entry_safe(table, nt, &net->nft.tables, list) {
+-		ctx.family = table->family;
+-		ctx.table = table;
+-		list_for_each_entry(chain, &table->chains, list) {
+-			ctx.chain = chain;
+-			list_for_each_entry_safe(rule, nr, &chain->rules, list) {
+-				list_del(&rule->list);
+-				chain->use--;
+-				nf_tables_rule_release(&ctx, rule);
+-			}
+-		}
+-		list_for_each_entry_safe(flowtable, nf, &table->flowtables, list) {
+-			list_del(&flowtable->list);
+-			table->use--;
+-			nf_tables_flowtable_destroy(flowtable);
+-		}
+-		list_for_each_entry_safe(set, ns, &table->sets, list) {
+-			list_del(&set->list);
+-			table->use--;
+-			nft_set_destroy(&ctx, set);
+-		}
+-		list_for_each_entry_safe(obj, ne, &table->objects, list) {
+-			nft_obj_del(obj);
+-			table->use--;
+-			nft_obj_destroy(&ctx, obj);
+-		}
+-		list_for_each_entry_safe(chain, nc, &table->chains, list) {
+-			ctx.chain = chain;
+-			nft_chain_del(chain);
+-			table->use--;
+-			nf_tables_chain_destroy(&ctx);
++	ctx.family = table->family;
++	ctx.table = table;
++	list_for_each_entry(chain, &table->chains, list) {
++		if (nft_chain_is_bound(chain))
++			continue;
++
++		ctx.chain = chain;
++		list_for_each_entry_safe(rule, nr, &chain->rules, list) {
++			list_del(&rule->list);
++			chain->use--;
++			nf_tables_rule_release(&ctx, rule);
+ 		}
+-		list_del(&table->list);
+-		nf_tables_table_destroy(&ctx);
+ 	}
++	list_for_each_entry_safe(flowtable, nf, &table->flowtables, list) {
++		list_del(&flowtable->list);
++		table->use--;
++		nf_tables_flowtable_destroy(flowtable);
++	}
++	list_for_each_entry_safe(set, ns, &table->sets, list) {
++		list_del(&set->list);
++		table->use--;
++		if (set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
++			nft_map_deactivate(&ctx, set);
++
++		nft_set_destroy(&ctx, set);
++	}
++	list_for_each_entry_safe(obj, ne, &table->objects, list) {
++		nft_obj_del(obj);
++		table->use--;
++		nft_obj_destroy(&ctx, obj);
++	}
++	list_for_each_entry_safe(chain, nc, &table->chains, list) {
++		ctx.chain = chain;
++		nft_chain_del(chain);
++		table->use--;
++		nf_tables_chain_destroy(&ctx);
++	}
++	list_del(&table->list);
++	nf_tables_table_destroy(&ctx);
++}
++
++static void __nft_release_tables(struct net *net)
++{
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
++	struct nft_table *table, *nt;
++
++	list_for_each_entry_safe(table, nt, &nft_net->tables, list)
++		__nft_release_table(net, table);
+ }
+ 
+ static int __net_init nf_tables_init_net(struct net *net)
+ {
+-	INIT_LIST_HEAD(&net->nft.tables);
+-	INIT_LIST_HEAD(&net->nft.commit_list);
+-	INIT_LIST_HEAD(&net->nft.module_list);
+-	INIT_LIST_HEAD(&net->nft.notify_list);
+-	mutex_init(&net->nft.commit_mutex);
+-	net->nft.base_seq = 1;
+-	net->nft.validate_state = NFT_VALIDATE_SKIP;
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
++
++	INIT_LIST_HEAD(&nft_net->tables);
++	INIT_LIST_HEAD(&nft_net->commit_list);
++	INIT_LIST_HEAD(&nft_net->binding_list);
++	INIT_LIST_HEAD(&nft_net->module_list);
++	INIT_LIST_HEAD(&nft_net->notify_list);
++	mutex_init(&nft_net->commit_mutex);
++	nft_net->base_seq = 1;
++	nft_net->validate_state = NFT_VALIDATE_SKIP;
+ 
+ 	return 0;
+ }
+ 
+ static void __net_exit nf_tables_pre_exit_net(struct net *net)
+ {
+-	mutex_lock(&net->nft.commit_mutex);
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
++
++	mutex_lock(&nft_net->commit_mutex);
+ 	__nft_release_hooks(net);
+-	mutex_unlock(&net->nft.commit_mutex);
++	mutex_unlock(&nft_net->commit_mutex);
+ }
+ 
+ static void __net_exit nf_tables_exit_net(struct net *net)
+ {
+-	mutex_lock(&net->nft.commit_mutex);
+-	if (!list_empty(&net->nft.commit_list))
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
++
++	mutex_lock(&nft_net->commit_mutex);
++	if (!list_empty(&nft_net->commit_list))
+ 		__nf_tables_abort(net, NFNL_ABORT_NONE);
+ 	__nft_release_tables(net);
+-	mutex_unlock(&net->nft.commit_mutex);
+-	WARN_ON_ONCE(!list_empty(&net->nft.tables));
+-	WARN_ON_ONCE(!list_empty(&net->nft.module_list));
+-	WARN_ON_ONCE(!list_empty(&net->nft.notify_list));
++	mutex_unlock(&nft_net->commit_mutex);
++	WARN_ON_ONCE(!list_empty(&nft_net->tables));
++	WARN_ON_ONCE(!list_empty(&nft_net->module_list));
++	WARN_ON_ONCE(!list_empty(&nft_net->notify_list));
+ }
+ 
+ static struct pernet_operations nf_tables_net_ops = {
+ 	.init		= nf_tables_init_net,
+ 	.pre_exit	= nf_tables_pre_exit_net,
+ 	.exit		= nf_tables_exit_net,
++	.id		= &nf_tables_net_id,
++	.size		= sizeof(struct nftables_pernet),
+ };
+ 
+ static int __init nf_tables_module_init(void)
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index 4e99b1731b3f9..5cfbb29d8a34a 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -7,6 +7,8 @@
+ #include <net/netfilter/nf_tables_offload.h>
+ #include <net/pkt_cls.h>
+ 
++extern unsigned int nf_tables_net_id;
++
+ static struct nft_flow_rule *nft_flow_rule_alloc(int num_actions)
+ {
+ 	struct nft_flow_rule *flow;
+@@ -371,16 +373,18 @@ static void nft_indr_block_cleanup(struct flow_block_cb *block_cb)
+ 	struct nft_base_chain *basechain = block_cb->indr.data;
+ 	struct net_device *dev = block_cb->indr.dev;
+ 	struct netlink_ext_ack extack = {};
++	struct nftables_pernet *nft_net;
+ 	struct net *net = dev_net(dev);
+ 	struct flow_block_offload bo;
+ 
+ 	nft_flow_block_offload_init(&bo, dev_net(dev), FLOW_BLOCK_UNBIND,
+ 				    basechain, &extack);
+-	mutex_lock(&net->nft.commit_mutex);
++	nft_net = net_generic(net, nf_tables_net_id);
++	mutex_lock(&nft_net->commit_mutex);
+ 	list_del(&block_cb->driver_list);
+ 	list_move(&block_cb->list, &bo.cb_list);
+ 	nft_flow_offload_unbind(&bo, basechain);
+-	mutex_unlock(&net->nft.commit_mutex);
++	mutex_unlock(&nft_net->commit_mutex);
+ }
+ 
+ static int nft_indr_block_offload_cmd(struct nft_base_chain *basechain,
+@@ -476,9 +480,10 @@ static int nft_flow_offload_chain(struct nft_chain *chain, u8 *ppolicy,
+ static void nft_flow_rule_offload_abort(struct net *net,
+ 					struct nft_trans *trans)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	int err = 0;
+ 
+-	list_for_each_entry_continue_reverse(trans, &net->nft.commit_list, list) {
++	list_for_each_entry_continue_reverse(trans, &nft_net->commit_list, list) {
+ 		if (trans->ctx.family != NFPROTO_NETDEV)
+ 			continue;
+ 
+@@ -524,11 +529,12 @@ static void nft_flow_rule_offload_abort(struct net *net,
+ 
+ int nft_flow_rule_offload_commit(struct net *net)
+ {
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_trans *trans;
+ 	int err = 0;
+ 	u8 policy;
+ 
+-	list_for_each_entry(trans, &net->nft.commit_list, list) {
++	list_for_each_entry(trans, &nft_net->commit_list, list) {
+ 		if (trans->ctx.family != NFPROTO_NETDEV)
+ 			continue;
+ 
+@@ -580,7 +586,7 @@ int nft_flow_rule_offload_commit(struct net *net)
+ 		}
+ 	}
+ 
+-	list_for_each_entry(trans, &net->nft.commit_list, list) {
++	list_for_each_entry(trans, &nft_net->commit_list, list) {
+ 		if (trans->ctx.family != NFPROTO_NETDEV)
+ 			continue;
+ 
+@@ -600,15 +606,15 @@ int nft_flow_rule_offload_commit(struct net *net)
+ 	return err;
+ }
+ 
+-static struct nft_chain *__nft_offload_get_chain(struct net_device *dev)
++static struct nft_chain *__nft_offload_get_chain(const struct nftables_pernet *nft_net,
++						 struct net_device *dev)
+ {
+ 	struct nft_base_chain *basechain;
+-	struct net *net = dev_net(dev);
+ 	struct nft_hook *hook, *found;
+ 	const struct nft_table *table;
+ 	struct nft_chain *chain;
+ 
+-	list_for_each_entry(table, &net->nft.tables, list) {
++	list_for_each_entry(table, &nft_net->tables, list) {
+ 		if (table->family != NFPROTO_NETDEV)
+ 			continue;
+ 
+@@ -640,19 +646,21 @@ static int nft_offload_netdev_event(struct notifier_block *this,
+ 				    unsigned long event, void *ptr)
+ {
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
++	struct nftables_pernet *nft_net;
+ 	struct net *net = dev_net(dev);
+ 	struct nft_chain *chain;
+ 
+ 	if (event != NETDEV_UNREGISTER)
+ 		return NOTIFY_DONE;
+ 
+-	mutex_lock(&net->nft.commit_mutex);
+-	chain = __nft_offload_get_chain(dev);
++	nft_net = net_generic(net, nf_tables_net_id);
++	mutex_lock(&nft_net->commit_mutex);
++	chain = __nft_offload_get_chain(nft_net, dev);
+ 	if (chain)
+ 		nft_flow_block_chain(nft_base_chain(chain), dev,
+ 				     FLOW_BLOCK_UNBIND);
+ 
+-	mutex_unlock(&net->nft.commit_mutex);
++	mutex_unlock(&nft_net->commit_mutex);
+ 
+ 	return NOTIFY_DONE;
+ }
+diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c
+index 9d5947ab8d4ef..7b0b8fecb2205 100644
+--- a/net/netfilter/nft_byteorder.c
++++ b/net/netfilter/nft_byteorder.c
+@@ -30,11 +30,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
+ 	const struct nft_byteorder *priv = nft_expr_priv(expr);
+ 	u32 *src = &regs->data[priv->sreg];
+ 	u32 *dst = &regs->data[priv->dreg];
+-	union { u32 u32; u16 u16; } *s, *d;
++	u16 *s16, *d16;
+ 	unsigned int i;
+ 
+-	s = (void *)src;
+-	d = (void *)dst;
++	s16 = (void *)src;
++	d16 = (void *)dst;
+ 
+ 	switch (priv->size) {
+ 	case 8: {
+@@ -61,11 +61,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
+ 		switch (priv->op) {
+ 		case NFT_BYTEORDER_NTOH:
+ 			for (i = 0; i < priv->len / 4; i++)
+-				d[i].u32 = ntohl((__force __be32)s[i].u32);
++				dst[i] = ntohl((__force __be32)src[i]);
+ 			break;
+ 		case NFT_BYTEORDER_HTON:
+ 			for (i = 0; i < priv->len / 4; i++)
+-				d[i].u32 = (__force __u32)htonl(s[i].u32);
++				dst[i] = (__force __u32)htonl(src[i]);
+ 			break;
+ 		}
+ 		break;
+@@ -73,11 +73,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
+ 		switch (priv->op) {
+ 		case NFT_BYTEORDER_NTOH:
+ 			for (i = 0; i < priv->len / 2; i++)
+-				d[i].u16 = ntohs((__force __be16)s[i].u16);
++				d16[i] = ntohs((__force __be16)s16[i]);
+ 			break;
+ 		case NFT_BYTEORDER_HTON:
+ 			for (i = 0; i < priv->len / 2; i++)
+-				d[i].u16 = (__force __u16)htons(s[i].u16);
++				d16[i] = (__force __u16)htons(s16[i]);
+ 			break;
+ 		}
+ 		break;
+diff --git a/net/netfilter/nft_chain_filter.c b/net/netfilter/nft_chain_filter.c
+index ff8528ad3dc63..7a9aa57b195bf 100644
+--- a/net/netfilter/nft_chain_filter.c
++++ b/net/netfilter/nft_chain_filter.c
+@@ -2,6 +2,7 @@
+ #include <linux/kernel.h>
+ #include <linux/netdevice.h>
+ #include <net/net_namespace.h>
++#include <net/netns/generic.h>
+ #include <net/netfilter/nf_tables.h>
+ #include <linux/netfilter_ipv4.h>
+ #include <linux/netfilter_ipv6.h>
+@@ -10,6 +11,8 @@
+ #include <net/netfilter/nf_tables_ipv4.h>
+ #include <net/netfilter/nf_tables_ipv6.h>
+ 
++extern unsigned int nf_tables_net_id;
++
+ #ifdef CONFIG_NF_TABLES_IPV4
+ static unsigned int nft_do_chain_ipv4(void *priv,
+ 				      struct sk_buff *skb,
+@@ -355,6 +358,7 @@ static int nf_tables_netdev_event(struct notifier_block *this,
+ 				  unsigned long event, void *ptr)
+ {
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
++	struct nftables_pernet *nft_net;
+ 	struct nft_table *table;
+ 	struct nft_chain *chain, *nr;
+ 	struct nft_ctx ctx = {
+@@ -365,8 +369,9 @@ static int nf_tables_netdev_event(struct notifier_block *this,
+ 	    event != NETDEV_CHANGENAME)
+ 		return NOTIFY_DONE;
+ 
+-	mutex_lock(&ctx.net->nft.commit_mutex);
+-	list_for_each_entry(table, &ctx.net->nft.tables, list) {
++	nft_net = net_generic(ctx.net, nf_tables_net_id);
++	mutex_lock(&nft_net->commit_mutex);
++	list_for_each_entry(table, &nft_net->tables, list) {
+ 		if (table->family != NFPROTO_NETDEV)
+ 			continue;
+ 
+@@ -380,7 +385,7 @@ static int nf_tables_netdev_event(struct notifier_block *this,
+ 			nft_netdev_event(event, dev, &ctx);
+ 		}
+ 	}
+-	mutex_unlock(&ctx.net->nft.commit_mutex);
++	mutex_unlock(&nft_net->commit_mutex);
+ 
+ 	return NOTIFY_DONE;
+ }
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 038588d4d80e1..8d47782b778f1 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -11,6 +11,9 @@
+ #include <linux/netfilter/nf_tables.h>
+ #include <net/netfilter/nf_tables.h>
+ #include <net/netfilter/nf_tables_core.h>
++#include <net/netns/generic.h>
++
++extern unsigned int nf_tables_net_id;
+ 
+ struct nft_dynset {
+ 	struct nft_set			*set;
+@@ -106,13 +109,14 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ 			   const struct nft_expr *expr,
+ 			   const struct nlattr * const tb[])
+ {
++	struct nftables_pernet *nft_net = net_generic(ctx->net, nf_tables_net_id);
+ 	struct nft_dynset *priv = nft_expr_priv(expr);
+ 	u8 genmask = nft_genmask_next(ctx->net);
+ 	struct nft_set *set;
+ 	u64 timeout;
+ 	int err;
+ 
+-	lockdep_assert_held(&ctx->net->nft.commit_mutex);
++	lockdep_assert_held(&nft_net->commit_mutex);
+ 
+ 	if (tb[NFTA_DYNSET_SET_NAME] == NULL ||
+ 	    tb[NFTA_DYNSET_OP] == NULL ||
+diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c
+index fcdbc5ed3f367..6b0efab4fad09 100644
+--- a/net/netfilter/nft_immediate.c
++++ b/net/netfilter/nft_immediate.c
+@@ -76,11 +76,9 @@ static int nft_immediate_init(const struct nft_ctx *ctx,
+ 		switch (priv->data.verdict.code) {
+ 		case NFT_JUMP:
+ 		case NFT_GOTO:
+-			if (nft_chain_is_bound(chain)) {
+-				err = -EBUSY;
+-				goto err1;
+-			}
+-			chain->bound = true;
++			err = nf_tables_bind_chain(ctx, chain);
++			if (err < 0)
++				return err;
+ 			break;
+ 		default:
+ 			break;
+@@ -98,6 +96,31 @@ static void nft_immediate_activate(const struct nft_ctx *ctx,
+ 				   const struct nft_expr *expr)
+ {
+ 	const struct nft_immediate_expr *priv = nft_expr_priv(expr);
++	const struct nft_data *data = &priv->data;
++	struct nft_ctx chain_ctx;
++	struct nft_chain *chain;
++	struct nft_rule *rule;
++
++	if (priv->dreg == NFT_REG_VERDICT) {
++		switch (data->verdict.code) {
++		case NFT_JUMP:
++		case NFT_GOTO:
++			chain = data->verdict.chain;
++			if (!nft_chain_binding(chain))
++				break;
++
++			chain_ctx = *ctx;
++			chain_ctx.chain = chain;
++
++			list_for_each_entry(rule, &chain->rules, list)
++				nft_rule_expr_activate(&chain_ctx, rule);
++
++			nft_clear(ctx->net, chain);
++			break;
++		default:
++			break;
++		}
++	}
+ 
+ 	return nft_data_hold(&priv->data, nft_dreg_to_type(priv->dreg));
+ }
+@@ -107,6 +130,43 @@ static void nft_immediate_deactivate(const struct nft_ctx *ctx,
+ 				     enum nft_trans_phase phase)
+ {
+ 	const struct nft_immediate_expr *priv = nft_expr_priv(expr);
++	const struct nft_data *data = &priv->data;
++	struct nft_ctx chain_ctx;
++	struct nft_chain *chain;
++	struct nft_rule *rule;
++
++	if (priv->dreg == NFT_REG_VERDICT) {
++		switch (data->verdict.code) {
++		case NFT_JUMP:
++		case NFT_GOTO:
++			chain = data->verdict.chain;
++			if (!nft_chain_binding(chain))
++				break;
++
++			chain_ctx = *ctx;
++			chain_ctx.chain = chain;
++
++			list_for_each_entry(rule, &chain->rules, list)
++				nft_rule_expr_deactivate(&chain_ctx, rule, phase);
++
++			switch (phase) {
++			case NFT_TRANS_PREPARE_ERROR:
++				nf_tables_unbind_chain(ctx, chain);
++				fallthrough;
++			case NFT_TRANS_PREPARE:
++				nft_deactivate_next(ctx->net, chain);
++				break;
++			default:
++				nft_chain_del(chain);
++				chain->bound = false;
++				chain->table->use--;
++				break;
++			}
++			break;
++		default:
++			break;
++		}
++	}
+ 
+ 	if (phase == NFT_TRANS_COMMIT)
+ 		return;
+@@ -131,15 +191,27 @@ static void nft_immediate_destroy(const struct nft_ctx *ctx,
+ 	case NFT_GOTO:
+ 		chain = data->verdict.chain;
+ 
+-		if (!nft_chain_is_bound(chain))
++		if (!nft_chain_binding(chain))
++			break;
++
++		/* Rule construction failed, but chain is already bound:
++		 * let the transaction records release this chain and its rules.
++		 */
++		if (chain->bound) {
++			chain->use--;
+ 			break;
++		}
+ 
++		/* Rule has been deleted, release chain and its rules. */
+ 		chain_ctx = *ctx;
+ 		chain_ctx.chain = chain;
+ 
+-		list_for_each_entry_safe(rule, n, &chain->rules, list)
+-			nf_tables_rule_release(&chain_ctx, rule);
+-
++		chain->use--;
++		list_for_each_entry_safe(rule, n, &chain->rules, list) {
++			chain->use--;
++			list_del(&rule->list);
++			nf_tables_rule_destroy(&chain_ctx, rule);
++		}
+ 		nf_tables_chain_destroy(&chain_ctx);
+ 		break;
+ 	default:
+diff --git a/net/netfilter/nft_set_bitmap.c b/net/netfilter/nft_set_bitmap.c
+index 2a81ea4218193..3c63f8acebd8a 100644
+--- a/net/netfilter/nft_set_bitmap.c
++++ b/net/netfilter/nft_set_bitmap.c
+@@ -270,13 +270,14 @@ static int nft_bitmap_init(const struct nft_set *set,
+ 	return 0;
+ }
+ 
+-static void nft_bitmap_destroy(const struct nft_set *set)
++static void nft_bitmap_destroy(const struct nft_ctx *ctx,
++			       const struct nft_set *set)
+ {
+ 	struct nft_bitmap *priv = nft_set_priv(set);
+ 	struct nft_bitmap_elem *be, *n;
+ 
+ 	list_for_each_entry_safe(be, n, &priv->list, head)
+-		nft_set_elem_destroy(set, be, true);
++		nf_tables_set_elem_destroy(ctx, set, be);
+ }
+ 
+ static bool nft_bitmap_estimate(const struct nft_set_desc *desc, u32 features,
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index a5cfb321ae23a..51d3e6f0934a9 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -380,19 +380,31 @@ static int nft_rhash_init(const struct nft_set *set,
+ 	return 0;
+ }
+ 
++struct nft_rhash_ctx {
++	const struct nft_ctx	ctx;
++	const struct nft_set	*set;
++};
++
+ static void nft_rhash_elem_destroy(void *ptr, void *arg)
+ {
+-	nft_set_elem_destroy(arg, ptr, true);
++	struct nft_rhash_ctx *rhash_ctx = arg;
++
++	nf_tables_set_elem_destroy(&rhash_ctx->ctx, rhash_ctx->set, ptr);
+ }
+ 
+-static void nft_rhash_destroy(const struct nft_set *set)
++static void nft_rhash_destroy(const struct nft_ctx *ctx,
++			      const struct nft_set *set)
+ {
+ 	struct nft_rhash *priv = nft_set_priv(set);
++	struct nft_rhash_ctx rhash_ctx = {
++		.ctx	= *ctx,
++		.set	= set,
++	};
+ 
+ 	cancel_delayed_work_sync(&priv->gc_work);
+ 	rcu_barrier();
+ 	rhashtable_free_and_destroy(&priv->ht, nft_rhash_elem_destroy,
+-				    (void *)set);
++				    (void *)&rhash_ctx);
+ }
+ 
+ /* Number of buckets is stored in u32, so cap our result to 1U<<31 */
+@@ -621,7 +633,8 @@ static int nft_hash_init(const struct nft_set *set,
+ 	return 0;
+ }
+ 
+-static void nft_hash_destroy(const struct nft_set *set)
++static void nft_hash_destroy(const struct nft_ctx *ctx,
++			     const struct nft_set *set)
+ {
+ 	struct nft_hash *priv = nft_set_priv(set);
+ 	struct nft_hash_elem *he;
+@@ -631,7 +644,7 @@ static void nft_hash_destroy(const struct nft_set *set)
+ 	for (i = 0; i < priv->buckets; i++) {
+ 		hlist_for_each_entry_safe(he, next, &priv->table[i], node) {
+ 			hlist_del_rcu(&he->node);
+-			nft_set_elem_destroy(set, he, true);
++			nf_tables_set_elem_destroy(ctx, set, he);
+ 		}
+ 	}
+ }
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index eb5934eb3adfc..3be93175b3ffd 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1904,7 +1904,11 @@ static void nft_pipapo_remove(const struct net *net, const struct nft_set *set,
+ 		int i, start, rules_fx;
+ 
+ 		match_start = data;
+-		match_end = (const u8 *)nft_set_ext_key_end(&e->ext)->data;
++
++		if (nft_set_ext_exists(&e->ext, NFT_SET_EXT_KEY_END))
++			match_end = (const u8 *)nft_set_ext_key_end(&e->ext)->data;
++		else
++			match_end = data;
+ 
+ 		start = first_rule;
+ 		rules_fx = rules_f0;
+@@ -2127,10 +2131,12 @@ out_scratch:
+ 
+ /**
+  * nft_set_pipapo_match_destroy() - Destroy elements from key mapping array
++ * @ctx:	context
+  * @set:	nftables API set representation
+  * @m:		matching data pointing to key mapping array
+  */
+-static void nft_set_pipapo_match_destroy(const struct nft_set *set,
++static void nft_set_pipapo_match_destroy(const struct nft_ctx *ctx,
++					 const struct nft_set *set,
+ 					 struct nft_pipapo_match *m)
+ {
+ 	struct nft_pipapo_field *f;
+@@ -2147,15 +2153,17 @@ static void nft_set_pipapo_match_destroy(const struct nft_set *set,
+ 
+ 		e = f->mt[r].e;
+ 
+-		nft_set_elem_destroy(set, e, true);
++		nf_tables_set_elem_destroy(ctx, set, e);
+ 	}
+ }
+ 
+ /**
+  * nft_pipapo_destroy() - Free private data for set and all committed elements
++ * @ctx:	context
+  * @set:	nftables API set representation
+  */
+-static void nft_pipapo_destroy(const struct nft_set *set)
++static void nft_pipapo_destroy(const struct nft_ctx *ctx,
++			       const struct nft_set *set)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+ 	struct nft_pipapo_match *m;
+@@ -2165,7 +2173,7 @@ static void nft_pipapo_destroy(const struct nft_set *set)
+ 	if (m) {
+ 		rcu_barrier();
+ 
+-		nft_set_pipapo_match_destroy(set, m);
++		nft_set_pipapo_match_destroy(ctx, set, m);
+ 
+ #ifdef NFT_PIPAPO_ALIGN
+ 		free_percpu(m->scratch_aligned);
+@@ -2182,7 +2190,7 @@ static void nft_pipapo_destroy(const struct nft_set *set)
+ 		m = priv->clone;
+ 
+ 		if (priv->dirty)
+-			nft_set_pipapo_match_destroy(set, m);
++			nft_set_pipapo_match_destroy(ctx, set, m);
+ 
+ #ifdef NFT_PIPAPO_ALIGN
+ 		free_percpu(priv->clone->scratch_aligned);
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 1ffb24f4c74ca..172b994790a06 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -657,7 +657,8 @@ static int nft_rbtree_init(const struct nft_set *set,
+ 	return 0;
+ }
+ 
+-static void nft_rbtree_destroy(const struct nft_set *set)
++static void nft_rbtree_destroy(const struct nft_ctx *ctx,
++			       const struct nft_set *set)
+ {
+ 	struct nft_rbtree *priv = nft_set_priv(set);
+ 	struct nft_rbtree_elem *rbe;
+@@ -668,7 +669,7 @@ static void nft_rbtree_destroy(const struct nft_set *set)
+ 	while ((node = priv->root.rb_node) != NULL) {
+ 		rb_erase(node, &priv->root);
+ 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
+-		nft_set_elem_destroy(set, rbe, true);
++		nf_tables_set_elem_destroy(ctx, set, rbe);
+ 	}
+ }
+ 
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 99c869d8d3044..9737c3229c12a 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1602,6 +1602,7 @@ out:
+ int netlink_set_err(struct sock *ssk, u32 portid, u32 group, int code)
+ {
+ 	struct netlink_set_err_data info;
++	unsigned long flags;
+ 	struct sock *sk;
+ 	int ret = 0;
+ 
+@@ -1611,12 +1612,12 @@ int netlink_set_err(struct sock *ssk, u32 portid, u32 group, int code)
+ 	/* sk->sk_err wants a positive error value */
+ 	info.code = -code;
+ 
+-	read_lock(&nl_table_lock);
++	read_lock_irqsave(&nl_table_lock, flags);
+ 
+ 	sk_for_each_bound(sk, &nl_table[ssk->sk_protocol].mc_list)
+ 		ret += do_one_set_err(sk, &info);
+ 
+-	read_unlock(&nl_table_lock);
++	read_unlock_irqrestore(&nl_table_lock, flags);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(netlink_set_err);
+diff --git a/net/netlink/diag.c b/net/netlink/diag.c
+index c6255eac305c7..e4f21b1067bcc 100644
+--- a/net/netlink/diag.c
++++ b/net/netlink/diag.c
+@@ -94,6 +94,7 @@ static int __netlink_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
+ 	struct net *net = sock_net(skb->sk);
+ 	struct netlink_diag_req *req;
+ 	struct netlink_sock *nlsk;
++	unsigned long flags;
+ 	struct sock *sk;
+ 	int num = 2;
+ 	int ret = 0;
+@@ -152,7 +153,7 @@ static int __netlink_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
+ 	num++;
+ 
+ mc_list:
+-	read_lock(&nl_table_lock);
++	read_lock_irqsave(&nl_table_lock, flags);
+ 	sk_for_each_bound(sk, &tbl->mc_list) {
+ 		if (sk_hashed(sk))
+ 			continue;
+@@ -167,13 +168,13 @@ mc_list:
+ 				 NETLINK_CB(cb->skb).portid,
+ 				 cb->nlh->nlmsg_seq,
+ 				 NLM_F_MULTI,
+-				 sock_i_ino(sk)) < 0) {
++				 __sock_i_ino(sk)) < 0) {
+ 			ret = 1;
+ 			break;
+ 		}
+ 		num++;
+ 	}
+-	read_unlock(&nl_table_lock);
++	read_unlock_irqrestore(&nl_table_lock, flags);
+ 
+ done:
+ 	cb->args[0] = num;
+diff --git a/net/nfc/core.c b/net/nfc/core.c
+index 2ef56366bd5fe..10a3d740d1553 100644
+--- a/net/nfc/core.c
++++ b/net/nfc/core.c
+@@ -634,7 +634,7 @@ error:
+ 	return rc;
+ }
+ 
+-int nfc_set_remote_general_bytes(struct nfc_dev *dev, u8 *gb, u8 gb_len)
++int nfc_set_remote_general_bytes(struct nfc_dev *dev, const u8 *gb, u8 gb_len)
+ {
+ 	pr_debug("dev_name=%s gb_len=%d\n", dev_name(&dev->dev), gb_len);
+ 
+@@ -663,7 +663,7 @@ int nfc_tm_data_received(struct nfc_dev *dev, struct sk_buff *skb)
+ EXPORT_SYMBOL(nfc_tm_data_received);
+ 
+ int nfc_tm_activated(struct nfc_dev *dev, u32 protocol, u8 comm_mode,
+-		     u8 *gb, size_t gb_len)
++		     const u8 *gb, size_t gb_len)
+ {
+ 	int rc;
+ 
+diff --git a/net/nfc/hci/llc_shdlc.c b/net/nfc/hci/llc_shdlc.c
+index 0eb4ddc056e78..02909e3e91ef1 100644
+--- a/net/nfc/hci/llc_shdlc.c
++++ b/net/nfc/hci/llc_shdlc.c
+@@ -123,7 +123,7 @@ static bool llc_shdlc_x_lteq_y_lt_z(int x, int y, int z)
+ 		return ((y >= x) || (y < z)) ? true : false;
+ }
+ 
+-static struct sk_buff *llc_shdlc_alloc_skb(struct llc_shdlc *shdlc,
++static struct sk_buff *llc_shdlc_alloc_skb(const struct llc_shdlc *shdlc,
+ 					   int payload_len)
+ {
+ 	struct sk_buff *skb;
+@@ -137,7 +137,7 @@ static struct sk_buff *llc_shdlc_alloc_skb(struct llc_shdlc *shdlc,
+ }
+ 
+ /* immediately sends an S frame. */
+-static int llc_shdlc_send_s_frame(struct llc_shdlc *shdlc,
++static int llc_shdlc_send_s_frame(const struct llc_shdlc *shdlc,
+ 				  enum sframe_type sframe_type, int nr)
+ {
+ 	int r;
+@@ -159,7 +159,7 @@ static int llc_shdlc_send_s_frame(struct llc_shdlc *shdlc,
+ }
+ 
+ /* immediately sends an U frame. skb may contain optional payload */
+-static int llc_shdlc_send_u_frame(struct llc_shdlc *shdlc,
++static int llc_shdlc_send_u_frame(const struct llc_shdlc *shdlc,
+ 				  struct sk_buff *skb,
+ 				  enum uframe_modifier uframe_modifier)
+ {
+@@ -361,7 +361,7 @@ static void llc_shdlc_connect_complete(struct llc_shdlc *shdlc, int r)
+ 	wake_up(shdlc->connect_wq);
+ }
+ 
+-static int llc_shdlc_connect_initiate(struct llc_shdlc *shdlc)
++static int llc_shdlc_connect_initiate(const struct llc_shdlc *shdlc)
+ {
+ 	struct sk_buff *skb;
+ 
+@@ -377,7 +377,7 @@ static int llc_shdlc_connect_initiate(struct llc_shdlc *shdlc)
+ 	return llc_shdlc_send_u_frame(shdlc, skb, U_FRAME_RSET);
+ }
+ 
+-static int llc_shdlc_connect_send_ua(struct llc_shdlc *shdlc)
++static int llc_shdlc_connect_send_ua(const struct llc_shdlc *shdlc)
+ {
+ 	struct sk_buff *skb;
+ 
+diff --git a/net/nfc/llcp.h b/net/nfc/llcp.h
+index 97853c9cefc70..a81893bc06ce8 100644
+--- a/net/nfc/llcp.h
++++ b/net/nfc/llcp.h
+@@ -202,7 +202,6 @@ void nfc_llcp_sock_link(struct llcp_sock_list *l, struct sock *s);
+ void nfc_llcp_sock_unlink(struct llcp_sock_list *l, struct sock *s);
+ void nfc_llcp_socket_remote_param_init(struct nfc_llcp_sock *sock);
+ struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev);
+-struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local);
+ int nfc_llcp_local_put(struct nfc_llcp_local *local);
+ u8 nfc_llcp_get_sdp_ssap(struct nfc_llcp_local *local,
+ 			 struct nfc_llcp_sock *sock);
+@@ -221,15 +220,15 @@ struct sock *nfc_llcp_accept_dequeue(struct sock *sk, struct socket *newsock);
+ 
+ /* TLV API */
+ int nfc_llcp_parse_gb_tlv(struct nfc_llcp_local *local,
+-			  u8 *tlv_array, u16 tlv_array_len);
++			  const u8 *tlv_array, u16 tlv_array_len);
+ int nfc_llcp_parse_connection_tlv(struct nfc_llcp_sock *sock,
+-				  u8 *tlv_array, u16 tlv_array_len);
++				  const u8 *tlv_array, u16 tlv_array_len);
+ 
+ /* Commands API */
+ void nfc_llcp_recv(void *data, struct sk_buff *skb, int err);
+-u8 *nfc_llcp_build_tlv(u8 type, u8 *value, u8 value_length, u8 *tlv_length);
++u8 *nfc_llcp_build_tlv(u8 type, const u8 *value, u8 value_length, u8 *tlv_length);
+ struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdres_tlv(u8 tid, u8 sap);
+-struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdreq_tlv(u8 tid, char *uri,
++struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdreq_tlv(u8 tid, const char *uri,
+ 						  size_t uri_len);
+ void nfc_llcp_free_sdp_tlv(struct nfc_llcp_sdp_tlv *sdp);
+ void nfc_llcp_free_sdp_tlv_list(struct hlist_head *sdp_head);
+diff --git a/net/nfc/llcp_commands.c b/net/nfc/llcp_commands.c
+index 475061c79c442..5b8754ae7d3af 100644
+--- a/net/nfc/llcp_commands.c
++++ b/net/nfc/llcp_commands.c
+@@ -15,7 +15,7 @@
+ #include "nfc.h"
+ #include "llcp.h"
+ 
+-static u8 llcp_tlv_length[LLCP_TLV_MAX] = {
++static const u8 llcp_tlv_length[LLCP_TLV_MAX] = {
+ 	0,
+ 	1, /* VERSION */
+ 	2, /* MIUX */
+@@ -29,7 +29,7 @@ static u8 llcp_tlv_length[LLCP_TLV_MAX] = {
+ 
+ };
+ 
+-static u8 llcp_tlv8(u8 *tlv, u8 type)
++static u8 llcp_tlv8(const u8 *tlv, u8 type)
+ {
+ 	if (tlv[0] != type || tlv[1] != llcp_tlv_length[tlv[0]])
+ 		return 0;
+@@ -37,7 +37,7 @@ static u8 llcp_tlv8(u8 *tlv, u8 type)
+ 	return tlv[2];
+ }
+ 
+-static u16 llcp_tlv16(u8 *tlv, u8 type)
++static u16 llcp_tlv16(const u8 *tlv, u8 type)
+ {
+ 	if (tlv[0] != type || tlv[1] != llcp_tlv_length[tlv[0]])
+ 		return 0;
+@@ -46,37 +46,37 @@ static u16 llcp_tlv16(u8 *tlv, u8 type)
+ }
+ 
+ 
+-static u8 llcp_tlv_version(u8 *tlv)
++static u8 llcp_tlv_version(const u8 *tlv)
+ {
+ 	return llcp_tlv8(tlv, LLCP_TLV_VERSION);
+ }
+ 
+-static u16 llcp_tlv_miux(u8 *tlv)
++static u16 llcp_tlv_miux(const u8 *tlv)
+ {
+ 	return llcp_tlv16(tlv, LLCP_TLV_MIUX) & 0x7ff;
+ }
+ 
+-static u16 llcp_tlv_wks(u8 *tlv)
++static u16 llcp_tlv_wks(const u8 *tlv)
+ {
+ 	return llcp_tlv16(tlv, LLCP_TLV_WKS);
+ }
+ 
+-static u16 llcp_tlv_lto(u8 *tlv)
++static u16 llcp_tlv_lto(const u8 *tlv)
+ {
+ 	return llcp_tlv8(tlv, LLCP_TLV_LTO);
+ }
+ 
+-static u8 llcp_tlv_opt(u8 *tlv)
++static u8 llcp_tlv_opt(const u8 *tlv)
+ {
+ 	return llcp_tlv8(tlv, LLCP_TLV_OPT);
+ }
+ 
+-static u8 llcp_tlv_rw(u8 *tlv)
++static u8 llcp_tlv_rw(const u8 *tlv)
+ {
+ 	return llcp_tlv8(tlv, LLCP_TLV_RW) & 0xf;
+ }
+ 
+-u8 *nfc_llcp_build_tlv(u8 type, u8 *value, u8 value_length, u8 *tlv_length)
++u8 *nfc_llcp_build_tlv(u8 type, const u8 *value, u8 value_length, u8 *tlv_length)
+ {
+ 	u8 *tlv, length;
+ 
+@@ -130,7 +130,7 @@ struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdres_tlv(u8 tid, u8 sap)
+ 	return sdres;
+ }
+ 
+-struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdreq_tlv(u8 tid, char *uri,
++struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdreq_tlv(u8 tid, const char *uri,
+ 						  size_t uri_len)
+ {
+ 	struct nfc_llcp_sdp_tlv *sdreq;
+@@ -190,9 +190,10 @@ void nfc_llcp_free_sdp_tlv_list(struct hlist_head *head)
+ }
+ 
+ int nfc_llcp_parse_gb_tlv(struct nfc_llcp_local *local,
+-			  u8 *tlv_array, u16 tlv_array_len)
++			  const u8 *tlv_array, u16 tlv_array_len)
+ {
+-	u8 *tlv = tlv_array, type, length, offset = 0;
++	const u8 *tlv = tlv_array;
++	u8 type, length, offset = 0;
+ 
+ 	pr_debug("TLV array length %d\n", tlv_array_len);
+ 
+@@ -239,9 +240,10 @@ int nfc_llcp_parse_gb_tlv(struct nfc_llcp_local *local,
+ }
+ 
+ int nfc_llcp_parse_connection_tlv(struct nfc_llcp_sock *sock,
+-				  u8 *tlv_array, u16 tlv_array_len)
++				  const u8 *tlv_array, u16 tlv_array_len)
+ {
+-	u8 *tlv = tlv_array, type, length, offset = 0;
++	const u8 *tlv = tlv_array;
++	u8 type, length, offset = 0;
+ 
+ 	pr_debug("TLV array length %d\n", tlv_array_len);
+ 
+@@ -295,7 +297,7 @@ static struct sk_buff *llcp_add_header(struct sk_buff *pdu,
+ 	return pdu;
+ }
+ 
+-static struct sk_buff *llcp_add_tlv(struct sk_buff *pdu, u8 *tlv,
++static struct sk_buff *llcp_add_tlv(struct sk_buff *pdu, const u8 *tlv,
+ 				    u8 tlv_length)
+ {
+ 	/* XXX Add an skb length check */
+@@ -359,6 +361,7 @@ int nfc_llcp_send_symm(struct nfc_dev *dev)
+ 	struct sk_buff *skb;
+ 	struct nfc_llcp_local *local;
+ 	u16 size = 0;
++	int err;
+ 
+ 	pr_debug("Sending SYMM\n");
+ 
+@@ -370,8 +373,10 @@ int nfc_llcp_send_symm(struct nfc_dev *dev)
+ 	size += dev->tx_headroom + dev->tx_tailroom + NFC_HEADER_SIZE;
+ 
+ 	skb = alloc_skb(size, GFP_KERNEL);
+-	if (skb == NULL)
+-		return -ENOMEM;
++	if (skb == NULL) {
++		err = -ENOMEM;
++		goto out;
++	}
+ 
+ 	skb_reserve(skb, dev->tx_headroom + NFC_HEADER_SIZE);
+ 
+@@ -381,17 +386,22 @@ int nfc_llcp_send_symm(struct nfc_dev *dev)
+ 
+ 	nfc_llcp_send_to_raw_sock(local, skb, NFC_DIRECTION_TX);
+ 
+-	return nfc_data_exchange(dev, local->target_idx, skb,
++	err = nfc_data_exchange(dev, local->target_idx, skb,
+ 				 nfc_llcp_recv, local);
++out:
++	nfc_llcp_local_put(local);
++	return err;
+ }
+ 
+ int nfc_llcp_send_connect(struct nfc_llcp_sock *sock)
+ {
+ 	struct nfc_llcp_local *local;
+ 	struct sk_buff *skb;
+-	u8 *service_name_tlv = NULL, service_name_tlv_length;
+-	u8 *miux_tlv = NULL, miux_tlv_length;
+-	u8 *rw_tlv = NULL, rw_tlv_length, rw;
++	const u8 *service_name_tlv = NULL;
++	const u8 *miux_tlv = NULL;
++	const u8 *rw_tlv = NULL;
++	u8 service_name_tlv_length = 0;
++	u8 miux_tlv_length,  rw_tlv_length, rw;
+ 	int err;
+ 	u16 size = 0;
+ 	__be16 miux;
+@@ -465,8 +475,9 @@ int nfc_llcp_send_cc(struct nfc_llcp_sock *sock)
+ {
+ 	struct nfc_llcp_local *local;
+ 	struct sk_buff *skb;
+-	u8 *miux_tlv = NULL, miux_tlv_length;
+-	u8 *rw_tlv = NULL, rw_tlv_length, rw;
++	const u8 *miux_tlv = NULL;
++	const u8 *rw_tlv = NULL;
++	u8 miux_tlv_length, rw_tlv_length, rw;
+ 	int err;
+ 	u16 size = 0;
+ 	__be16 miux;
+diff --git a/net/nfc/llcp_core.c b/net/nfc/llcp_core.c
+index edadebb3efd2a..ddfd159f64e13 100644
+--- a/net/nfc/llcp_core.c
++++ b/net/nfc/llcp_core.c
+@@ -17,6 +17,8 @@
+ static u8 llcp_magic[3] = {0x46, 0x66, 0x6d};
+ 
+ static LIST_HEAD(llcp_devices);
++/* Protects llcp_devices list */
++static DEFINE_SPINLOCK(llcp_devices_lock);
+ 
+ static void nfc_llcp_rx_skb(struct nfc_llcp_local *local, struct sk_buff *skb);
+ 
+@@ -143,7 +145,7 @@ static void nfc_llcp_socket_release(struct nfc_llcp_local *local, bool device,
+ 	write_unlock(&local->raw_sockets.lock);
+ }
+ 
+-struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local)
++static struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local)
+ {
+ 	kref_get(&local->ref);
+ 
+@@ -171,7 +173,6 @@ static void local_release(struct kref *ref)
+ 
+ 	local = container_of(ref, struct nfc_llcp_local, ref);
+ 
+-	list_del(&local->list);
+ 	local_cleanup(local);
+ 	kfree(local);
+ }
+@@ -284,12 +285,33 @@ static void nfc_llcp_sdreq_timer(struct timer_list *t)
+ struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev)
+ {
+ 	struct nfc_llcp_local *local;
++	struct nfc_llcp_local *res = NULL;
+ 
++	spin_lock(&llcp_devices_lock);
+ 	list_for_each_entry(local, &llcp_devices, list)
+-		if (local->dev == dev)
++		if (local->dev == dev) {
++			res = nfc_llcp_local_get(local);
++			break;
++		}
++	spin_unlock(&llcp_devices_lock);
++
++	return res;
++}
++
++static struct nfc_llcp_local *nfc_llcp_remove_local(struct nfc_dev *dev)
++{
++	struct nfc_llcp_local *local, *tmp;
++
++	spin_lock(&llcp_devices_lock);
++	list_for_each_entry_safe(local, tmp, &llcp_devices, list)
++		if (local->dev == dev) {
++			list_del(&local->list);
++			spin_unlock(&llcp_devices_lock);
+ 			return local;
++		}
++	spin_unlock(&llcp_devices_lock);
+ 
+-	pr_debug("No device found\n");
++	pr_warn("Shutting down device not found\n");
+ 
+ 	return NULL;
+ }
+@@ -302,7 +324,7 @@ static char *wks[] = {
+ 	"urn:nfc:sn:snep",
+ };
+ 
+-static int nfc_llcp_wks_sap(char *service_name, size_t service_name_len)
++static int nfc_llcp_wks_sap(const char *service_name, size_t service_name_len)
+ {
+ 	int sap, num_wks;
+ 
+@@ -326,7 +348,7 @@ static int nfc_llcp_wks_sap(char *service_name, size_t service_name_len)
+ 
+ static
+ struct nfc_llcp_sock *nfc_llcp_sock_from_sn(struct nfc_llcp_local *local,
+-					    u8 *sn, size_t sn_len)
++					    const u8 *sn, size_t sn_len)
+ {
+ 	struct sock *sk;
+ 	struct nfc_llcp_sock *llcp_sock, *tmp_sock;
+@@ -523,7 +545,7 @@ static int nfc_llcp_build_gb(struct nfc_llcp_local *local)
+ {
+ 	u8 *gb_cur, version, version_length;
+ 	u8 lto_length, wks_length, miux_length;
+-	u8 *version_tlv = NULL, *lto_tlv = NULL,
++	const u8 *version_tlv = NULL, *lto_tlv = NULL,
+ 	   *wks_tlv = NULL, *miux_tlv = NULL;
+ 	__be16 wks = cpu_to_be16(local->local_wks);
+ 	u8 gb_len = 0;
+@@ -610,12 +632,15 @@ u8 *nfc_llcp_general_bytes(struct nfc_dev *dev, size_t *general_bytes_len)
+ 
+ 	*general_bytes_len = local->gb_len;
+ 
++	nfc_llcp_local_put(local);
++
+ 	return local->gb;
+ }
+ 
+-int nfc_llcp_set_remote_gb(struct nfc_dev *dev, u8 *gb, u8 gb_len)
++int nfc_llcp_set_remote_gb(struct nfc_dev *dev, const u8 *gb, u8 gb_len)
+ {
+ 	struct nfc_llcp_local *local;
++	int err;
+ 
+ 	if (gb_len < 3 || gb_len > NFC_MAX_GT_LEN)
+ 		return -EINVAL;
+@@ -632,35 +657,39 @@ int nfc_llcp_set_remote_gb(struct nfc_dev *dev, u8 *gb, u8 gb_len)
+ 
+ 	if (memcmp(local->remote_gb, llcp_magic, 3)) {
+ 		pr_err("MAC does not support LLCP\n");
+-		return -EINVAL;
++		err = -EINVAL;
++		goto out;
+ 	}
+ 
+-	return nfc_llcp_parse_gb_tlv(local,
++	err = nfc_llcp_parse_gb_tlv(local,
+ 				     &local->remote_gb[3],
+ 				     local->remote_gb_len - 3);
++out:
++	nfc_llcp_local_put(local);
++	return err;
+ }
+ 
+-static u8 nfc_llcp_dsap(struct sk_buff *pdu)
++static u8 nfc_llcp_dsap(const struct sk_buff *pdu)
+ {
+ 	return (pdu->data[0] & 0xfc) >> 2;
+ }
+ 
+-static u8 nfc_llcp_ptype(struct sk_buff *pdu)
++static u8 nfc_llcp_ptype(const struct sk_buff *pdu)
+ {
+ 	return ((pdu->data[0] & 0x03) << 2) | ((pdu->data[1] & 0xc0) >> 6);
+ }
+ 
+-static u8 nfc_llcp_ssap(struct sk_buff *pdu)
++static u8 nfc_llcp_ssap(const struct sk_buff *pdu)
+ {
+ 	return pdu->data[1] & 0x3f;
+ }
+ 
+-static u8 nfc_llcp_ns(struct sk_buff *pdu)
++static u8 nfc_llcp_ns(const struct sk_buff *pdu)
+ {
+ 	return pdu->data[2] >> 4;
+ }
+ 
+-static u8 nfc_llcp_nr(struct sk_buff *pdu)
++static u8 nfc_llcp_nr(const struct sk_buff *pdu)
+ {
+ 	return pdu->data[2] & 0xf;
+ }
+@@ -802,7 +831,7 @@ out:
+ }
+ 
+ static struct nfc_llcp_sock *nfc_llcp_sock_get_sn(struct nfc_llcp_local *local,
+-						  u8 *sn, size_t sn_len)
++						  const u8 *sn, size_t sn_len)
+ {
+ 	struct nfc_llcp_sock *llcp_sock;
+ 
+@@ -816,9 +845,10 @@ static struct nfc_llcp_sock *nfc_llcp_sock_get_sn(struct nfc_llcp_local *local,
+ 	return llcp_sock;
+ }
+ 
+-static u8 *nfc_llcp_connect_sn(struct sk_buff *skb, size_t *sn_len)
++static const u8 *nfc_llcp_connect_sn(const struct sk_buff *skb, size_t *sn_len)
+ {
+-	u8 *tlv = &skb->data[2], type, length;
++	u8 type, length;
++	const u8 *tlv = &skb->data[2];
+ 	size_t tlv_array_len = skb->len - LLCP_HEADER_SIZE, offset = 0;
+ 
+ 	while (offset < tlv_array_len) {
+@@ -876,7 +906,7 @@ static void nfc_llcp_recv_ui(struct nfc_llcp_local *local,
+ }
+ 
+ static void nfc_llcp_recv_connect(struct nfc_llcp_local *local,
+-				  struct sk_buff *skb)
++				  const struct sk_buff *skb)
+ {
+ 	struct sock *new_sk, *parent;
+ 	struct nfc_llcp_sock *sock, *new_sock;
+@@ -894,7 +924,7 @@ static void nfc_llcp_recv_connect(struct nfc_llcp_local *local,
+ 			goto fail;
+ 		}
+ 	} else {
+-		u8 *sn;
++		const u8 *sn;
+ 		size_t sn_len;
+ 
+ 		sn = nfc_llcp_connect_sn(skb, &sn_len);
+@@ -1113,7 +1143,7 @@ static void nfc_llcp_recv_hdlc(struct nfc_llcp_local *local,
+ }
+ 
+ static void nfc_llcp_recv_disc(struct nfc_llcp_local *local,
+-			       struct sk_buff *skb)
++			       const struct sk_buff *skb)
+ {
+ 	struct nfc_llcp_sock *llcp_sock;
+ 	struct sock *sk;
+@@ -1156,7 +1186,8 @@ static void nfc_llcp_recv_disc(struct nfc_llcp_local *local,
+ 	nfc_llcp_sock_put(llcp_sock);
+ }
+ 
+-static void nfc_llcp_recv_cc(struct nfc_llcp_local *local, struct sk_buff *skb)
++static void nfc_llcp_recv_cc(struct nfc_llcp_local *local,
++			     const struct sk_buff *skb)
+ {
+ 	struct nfc_llcp_sock *llcp_sock;
+ 	struct sock *sk;
+@@ -1189,7 +1220,8 @@ static void nfc_llcp_recv_cc(struct nfc_llcp_local *local, struct sk_buff *skb)
+ 	nfc_llcp_sock_put(llcp_sock);
+ }
+ 
+-static void nfc_llcp_recv_dm(struct nfc_llcp_local *local, struct sk_buff *skb)
++static void nfc_llcp_recv_dm(struct nfc_llcp_local *local,
++			     const struct sk_buff *skb)
+ {
+ 	struct nfc_llcp_sock *llcp_sock;
+ 	struct sock *sk;
+@@ -1227,12 +1259,13 @@ static void nfc_llcp_recv_dm(struct nfc_llcp_local *local, struct sk_buff *skb)
+ }
+ 
+ static void nfc_llcp_recv_snl(struct nfc_llcp_local *local,
+-			      struct sk_buff *skb)
++			      const struct sk_buff *skb)
+ {
+ 	struct nfc_llcp_sock *llcp_sock;
+-	u8 dsap, ssap, *tlv, type, length, tid, sap;
++	u8 dsap, ssap, type, length, tid, sap;
++	const u8 *tlv;
+ 	u16 tlv_len, offset;
+-	char *service_name;
++	const char *service_name;
+ 	size_t service_name_len;
+ 	struct nfc_llcp_sdp_tlv *sdp;
+ 	HLIST_HEAD(llc_sdres_list);
+@@ -1523,6 +1556,8 @@ int nfc_llcp_data_received(struct nfc_dev *dev, struct sk_buff *skb)
+ 
+ 	__nfc_llcp_recv(local, skb);
+ 
++	nfc_llcp_local_put(local);
++
+ 	return 0;
+ }
+ 
+@@ -1539,6 +1574,8 @@ void nfc_llcp_mac_is_down(struct nfc_dev *dev)
+ 
+ 	/* Close and purge all existing sockets */
+ 	nfc_llcp_socket_release(local, true, 0);
++
++	nfc_llcp_local_put(local);
+ }
+ 
+ void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx,
+@@ -1564,6 +1601,8 @@ void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx,
+ 		mod_timer(&local->link_timer,
+ 			  jiffies + msecs_to_jiffies(local->remote_lto));
+ 	}
++
++	nfc_llcp_local_put(local);
+ }
+ 
+ int nfc_llcp_register_device(struct nfc_dev *ndev)
+@@ -1614,7 +1653,7 @@ int nfc_llcp_register_device(struct nfc_dev *ndev)
+ 
+ void nfc_llcp_unregister_device(struct nfc_dev *dev)
+ {
+-	struct nfc_llcp_local *local = nfc_llcp_find_local(dev);
++	struct nfc_llcp_local *local = nfc_llcp_remove_local(dev);
+ 
+ 	if (local == NULL) {
+ 		pr_debug("No such device\n");
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index 0b93a17b9f11f..6e1fba2084930 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -99,7 +99,7 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
+ 	}
+ 
+ 	llcp_sock->dev = dev;
+-	llcp_sock->local = nfc_llcp_local_get(local);
++	llcp_sock->local = local;
+ 	llcp_sock->nfc_protocol = llcp_addr.nfc_protocol;
+ 	llcp_sock->service_name_len = min_t(unsigned int,
+ 					    llcp_addr.service_name_len,
+@@ -181,7 +181,7 @@ static int llcp_raw_sock_bind(struct socket *sock, struct sockaddr *addr,
+ 	}
+ 
+ 	llcp_sock->dev = dev;
+-	llcp_sock->local = nfc_llcp_local_get(local);
++	llcp_sock->local = local;
+ 	llcp_sock->nfc_protocol = llcp_addr.nfc_protocol;
+ 
+ 	nfc_llcp_sock_link(&local->raw_sockets, sk);
+@@ -698,24 +698,22 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
+ 	if (dev->dep_link_up == false) {
+ 		ret = -ENOLINK;
+ 		device_unlock(&dev->dev);
+-		goto put_dev;
++		goto sock_llcp_put_local;
+ 	}
+ 	device_unlock(&dev->dev);
+ 
+ 	if (local->rf_mode == NFC_RF_INITIATOR &&
+ 	    addr->target_idx != local->target_idx) {
+ 		ret = -ENOLINK;
+-		goto put_dev;
++		goto sock_llcp_put_local;
+ 	}
+ 
+ 	llcp_sock->dev = dev;
+-	llcp_sock->local = nfc_llcp_local_get(local);
++	llcp_sock->local = local;
+ 	llcp_sock->ssap = nfc_llcp_get_local_ssap(local);
+ 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
+-		nfc_llcp_local_put(llcp_sock->local);
+-		llcp_sock->local = NULL;
+ 		ret = -ENOMEM;
+-		goto put_dev;
++		goto sock_llcp_nullify;
+ 	}
+ 
+ 	llcp_sock->reserved_ssap = llcp_sock->ssap;
+@@ -760,8 +758,13 @@ sock_unlink:
+ 
+ sock_llcp_release:
+ 	nfc_llcp_put_ssap(local, llcp_sock->ssap);
+-	nfc_llcp_local_put(llcp_sock->local);
++
++sock_llcp_nullify:
+ 	llcp_sock->local = NULL;
++	llcp_sock->dev = NULL;
++
++sock_llcp_put_local:
++	nfc_llcp_local_put(local);
+ 
+ put_dev:
+ 	nfc_put_device(dev);
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index e0e1168655118..1c5b3ce1e8b16 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -1039,11 +1039,14 @@ static int nfc_genl_llc_get_params(struct sk_buff *skb, struct genl_info *info)
+ 	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+ 	if (!msg) {
+ 		rc = -ENOMEM;
+-		goto exit;
++		goto put_local;
+ 	}
+ 
+ 	rc = nfc_genl_send_params(msg, local, info->snd_portid, info->snd_seq);
+ 
++put_local:
++	nfc_llcp_local_put(local);
++
+ exit:
+ 	device_unlock(&dev->dev);
+ 
+@@ -1105,7 +1108,7 @@ static int nfc_genl_llc_set_params(struct sk_buff *skb, struct genl_info *info)
+ 	if (info->attrs[NFC_ATTR_LLC_PARAM_LTO]) {
+ 		if (dev->dep_link_up) {
+ 			rc = -EINPROGRESS;
+-			goto exit;
++			goto put_local;
+ 		}
+ 
+ 		local->lto = nla_get_u8(info->attrs[NFC_ATTR_LLC_PARAM_LTO]);
+@@ -1117,6 +1120,9 @@ static int nfc_genl_llc_set_params(struct sk_buff *skb, struct genl_info *info)
+ 	if (info->attrs[NFC_ATTR_LLC_PARAM_MIUX])
+ 		local->miux = cpu_to_be16(miux);
+ 
++put_local:
++	nfc_llcp_local_put(local);
++
+ exit:
+ 	device_unlock(&dev->dev);
+ 
+@@ -1172,7 +1178,7 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
+ 
+ 		if (rc != 0) {
+ 			rc = -EINVAL;
+-			goto exit;
++			goto put_local;
+ 		}
+ 
+ 		if (!sdp_attrs[NFC_SDP_ATTR_URI])
+@@ -1191,7 +1197,7 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
+ 		sdreq = nfc_llcp_build_sdreq_tlv(tid, uri, uri_len);
+ 		if (sdreq == NULL) {
+ 			rc = -ENOMEM;
+-			goto exit;
++			goto put_local;
+ 		}
+ 
+ 		tlvs_len += sdreq->tlv_len;
+@@ -1201,10 +1207,14 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	if (hlist_empty(&sdreq_list)) {
+ 		rc = -EINVAL;
+-		goto exit;
++		goto put_local;
+ 	}
+ 
+ 	rc = nfc_llcp_send_snl_sdreq(local, &sdreq_list, tlvs_len);
++
++put_local:
++	nfc_llcp_local_put(local);
++
+ exit:
+ 	device_unlock(&dev->dev);
+ 
+diff --git a/net/nfc/nfc.h b/net/nfc/nfc.h
+index 889fefd64e56b..0b1e6466f4fbf 100644
+--- a/net/nfc/nfc.h
++++ b/net/nfc/nfc.h
+@@ -48,10 +48,11 @@ void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx,
+ 			u8 comm_mode, u8 rf_mode);
+ int nfc_llcp_register_device(struct nfc_dev *dev);
+ void nfc_llcp_unregister_device(struct nfc_dev *dev);
+-int nfc_llcp_set_remote_gb(struct nfc_dev *dev, u8 *gb, u8 gb_len);
++int nfc_llcp_set_remote_gb(struct nfc_dev *dev, const u8 *gb, u8 gb_len);
+ u8 *nfc_llcp_general_bytes(struct nfc_dev *dev, size_t *general_bytes_len);
+ int nfc_llcp_data_received(struct nfc_dev *dev, struct sk_buff *skb);
+ struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev);
++int nfc_llcp_local_put(struct nfc_llcp_local *local);
+ int __init nfc_llcp_init(void);
+ void nfc_llcp_exit(void);
+ void nfc_llcp_free_sdp_tlv(struct nfc_llcp_sdp_tlv *sdp);
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index db0d3bff19eba..a44101b2f4419 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -26,6 +26,7 @@ static struct tc_action_ops act_pedit_ops;
+ 
+ static const struct nla_policy pedit_policy[TCA_PEDIT_MAX + 1] = {
+ 	[TCA_PEDIT_PARMS]	= { .len = sizeof(struct tc_pedit) },
++	[TCA_PEDIT_PARMS_EX]	= { .len = sizeof(struct tc_pedit) },
+ 	[TCA_PEDIT_KEYS_EX]   = { .type = NLA_NESTED },
+ };
+ 
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index caf1a05bfbde4..dcf21d99f132c 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -778,6 +778,16 @@ static int fl_set_key_port_range(struct nlattr **tb, struct fl_flow_key *key,
+ 		       TCA_FLOWER_KEY_PORT_SRC_MAX, &mask->tp_range.tp_max.src,
+ 		       TCA_FLOWER_UNSPEC, sizeof(key->tp_range.tp_max.src));
+ 
++	if (mask->tp_range.tp_min.dst != mask->tp_range.tp_max.dst) {
++		NL_SET_ERR_MSG(extack,
++			       "Both min and max destination ports must be specified");
++		return -EINVAL;
++	}
++	if (mask->tp_range.tp_min.src != mask->tp_range.tp_max.src) {
++		NL_SET_ERR_MSG(extack,
++			       "Both min and max source ports must be specified");
++		return -EINVAL;
++	}
+ 	if (mask->tp_range.tp_min.dst && mask->tp_range.tp_max.dst &&
+ 	    ntohs(key->tp_range.tp_max.dst) <=
+ 	    ntohs(key->tp_range.tp_min.dst)) {
+diff --git a/net/sched/cls_fw.c b/net/sched/cls_fw.c
+index ec945294626a8..41f0898a5a565 100644
+--- a/net/sched/cls_fw.c
++++ b/net/sched/cls_fw.c
+@@ -210,11 +210,6 @@ static int fw_set_parms(struct net *net, struct tcf_proto *tp,
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (tb[TCA_FW_CLASSID]) {
+-		f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]);
+-		tcf_bind_filter(tp, &f->res, base);
+-	}
+-
+ 	if (tb[TCA_FW_INDEV]) {
+ 		int ret;
+ 		ret = tcf_change_indev(net, tb[TCA_FW_INDEV], extack);
+@@ -231,6 +226,11 @@ static int fw_set_parms(struct net *net, struct tcf_proto *tp,
+ 	} else if (head->mask != 0xFFFFFFFF)
+ 		return err;
+ 
++	if (tb[TCA_FW_CLASSID]) {
++		f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]);
++		tcf_bind_filter(tp, &f->res, base);
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
+index cad7deacf60a4..d5a1e4b237b18 100644
+--- a/net/sched/sch_qfq.c
++++ b/net/sched/sch_qfq.c
+@@ -113,6 +113,7 @@
+ 
+ #define QFQ_MTU_SHIFT		16	/* to support TSO/GSO */
+ #define QFQ_MIN_LMAX		512	/* see qfq_slot_insert */
++#define QFQ_MAX_LMAX		(1UL << QFQ_MTU_SHIFT)
+ 
+ #define QFQ_MAX_AGG_CLASSES	8 /* max num classes per aggregate allowed */
+ 
+@@ -214,9 +215,14 @@ static struct qfq_class *qfq_find_class(struct Qdisc *sch, u32 classid)
+ 	return container_of(clc, struct qfq_class, common);
+ }
+ 
++static struct netlink_range_validation lmax_range = {
++	.min = QFQ_MIN_LMAX,
++	.max = QFQ_MAX_LMAX,
++};
++
+ static const struct nla_policy qfq_policy[TCA_QFQ_MAX + 1] = {
+-	[TCA_QFQ_WEIGHT] = { .type = NLA_U32 },
+-	[TCA_QFQ_LMAX] = { .type = NLA_U32 },
++	[TCA_QFQ_WEIGHT] = NLA_POLICY_RANGE(NLA_U32, 1, QFQ_MAX_WEIGHT),
++	[TCA_QFQ_LMAX] = NLA_POLICY_FULL_RANGE(NLA_U32, &lmax_range),
+ };
+ 
+ /*
+@@ -375,8 +381,13 @@ static int qfq_change_agg(struct Qdisc *sch, struct qfq_class *cl, u32 weight,
+ 			   u32 lmax)
+ {
+ 	struct qfq_sched *q = qdisc_priv(sch);
+-	struct qfq_aggregate *new_agg = qfq_find_agg(q, lmax, weight);
++	struct qfq_aggregate *new_agg;
++
++	/* 'lmax' can range from [QFQ_MIN_LMAX, pktlen + stab overhead] */
++	if (lmax > QFQ_MAX_LMAX)
++		return -EINVAL;
+ 
++	new_agg = qfq_find_agg(q, lmax, weight);
+ 	if (new_agg == NULL) { /* create new aggregate */
+ 		new_agg = kzalloc(sizeof(*new_agg), GFP_ATOMIC);
+ 		if (new_agg == NULL)
+@@ -408,27 +419,25 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ 	}
+ 
+ 	err = nla_parse_nested_deprecated(tb, TCA_QFQ_MAX, tca[TCA_OPTIONS],
+-					  qfq_policy, NULL);
++					  qfq_policy, extack);
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (tb[TCA_QFQ_WEIGHT]) {
++	if (tb[TCA_QFQ_WEIGHT])
+ 		weight = nla_get_u32(tb[TCA_QFQ_WEIGHT]);
+-		if (!weight || weight > (1UL << QFQ_MAX_WSHIFT)) {
+-			pr_notice("qfq: invalid weight %u\n", weight);
+-			return -EINVAL;
+-		}
+-	} else
++	else
+ 		weight = 1;
+ 
+-	if (tb[TCA_QFQ_LMAX])
++	if (tb[TCA_QFQ_LMAX]) {
+ 		lmax = nla_get_u32(tb[TCA_QFQ_LMAX]);
+-	else
++	} else {
++		/* MTU size is user controlled */
+ 		lmax = psched_mtu(qdisc_dev(sch));
+-
+-	if (lmax < QFQ_MIN_LMAX || lmax > (1UL << QFQ_MTU_SHIFT)) {
+-		pr_notice("qfq: invalid max length %u\n", lmax);
+-		return -EINVAL;
++		if (lmax < QFQ_MIN_LMAX || lmax > QFQ_MAX_LMAX) {
++			NL_SET_ERR_MSG_MOD(extack,
++					   "MTU size out of bounds for qfq");
++			return -EINVAL;
++		}
+ 	}
+ 
+ 	inv_w = ONE_FP / weight;
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 35d3eee26ea56..534364bb871a3 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -362,9 +362,9 @@ static void sctp_auto_asconf_init(struct sctp_sock *sp)
+ 	struct net *net = sock_net(&sp->inet.sk);
+ 
+ 	if (net->sctp.default_auto_asconf) {
+-		spin_lock(&net->sctp.addr_wq_lock);
++		spin_lock_bh(&net->sctp.addr_wq_lock);
+ 		list_add_tail(&sp->auto_asconf_list, &net->sctp.auto_asconf_splist);
+-		spin_unlock(&net->sctp.addr_wq_lock);
++		spin_unlock_bh(&net->sctp.addr_wq_lock);
+ 		sp->do_auto_asconf = 1;
+ 	}
+ }
+@@ -8039,6 +8039,22 @@ static int sctp_getsockopt(struct sock *sk, int level, int optname,
+ 	return retval;
+ }
+ 
++static bool sctp_bpf_bypass_getsockopt(int level, int optname)
++{
++	if (level == SOL_SCTP) {
++		switch (optname) {
++		case SCTP_SOCKOPT_PEELOFF:
++		case SCTP_SOCKOPT_PEELOFF_FLAGS:
++		case SCTP_SOCKOPT_CONNECTX3:
++			return true;
++		default:
++			return false;
++		}
++	}
++
++	return false;
++}
++
+ static int sctp_hash(struct sock *sk)
+ {
+ 	/* STUB */
+@@ -9407,6 +9423,7 @@ struct proto sctp_prot = {
+ 	.shutdown    =	sctp_shutdown,
+ 	.setsockopt  =	sctp_setsockopt,
+ 	.getsockopt  =	sctp_getsockopt,
++	.bpf_bypass_getsockopt	= sctp_bpf_bypass_getsockopt,
+ 	.sendmsg     =	sctp_sendmsg,
+ 	.recvmsg     =	sctp_recvmsg,
+ 	.bind        =	sctp_bind,
+@@ -9459,6 +9476,7 @@ struct proto sctpv6_prot = {
+ 	.shutdown	= sctp_shutdown,
+ 	.setsockopt	= sctp_setsockopt,
+ 	.getsockopt	= sctp_getsockopt,
++	.bpf_bypass_getsockopt	= sctp_bpf_bypass_getsockopt,
+ 	.sendmsg	= sctp_sendmsg,
+ 	.recvmsg	= sctp_recvmsg,
+ 	.bind		= sctp_bind,
+diff --git a/net/socket.c b/net/socket.c
+index 84223419da862..f2172b756c0f7 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2137,6 +2137,9 @@ SYSCALL_DEFINE5(setsockopt, int, fd, int, level, int, optname,
+ 	return __sys_setsockopt(fd, level, optname, optval, optlen);
+ }
+ 
++INDIRECT_CALLABLE_DECLARE(bool tcp_bpf_bypass_getsockopt(int level,
++							 int optname));
++
+ /*
+  *	Get a socket option. Because we don't know the option lengths we have
+  *	to pass a user mode parameter for the protocols to sort out.
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 6d5bb8bfed38b..3d5ee042c5015 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -692,12 +692,6 @@ static void svc_tcp_listen_data_ready(struct sock *sk)
+ {
+ 	struct svc_sock	*svsk = (struct svc_sock *)sk->sk_user_data;
+ 
+-	if (svsk) {
+-		/* Refer to svc_setup_socket() for details. */
+-		rmb();
+-		svsk->sk_odata(sk);
+-	}
+-
+ 	/*
+ 	 * This callback may called twice when a new connection
+ 	 * is established as a child socket inherits everything
+@@ -706,13 +700,18 @@ static void svc_tcp_listen_data_ready(struct sock *sk)
+ 	 *    when one of child sockets become ESTABLISHED.
+ 	 * 2) data_ready method of the child socket may be called
+ 	 *    when it receives data before the socket is accepted.
+-	 * In case of 2, we should ignore it silently.
++	 * In case of 2, we should ignore it silently and DO NOT
++	 * dereference svsk.
+ 	 */
+-	if (sk->sk_state == TCP_LISTEN) {
+-		if (svsk) {
+-			set_bit(XPT_CONN, &svsk->sk_xprt.xpt_flags);
+-			svc_xprt_enqueue(&svsk->sk_xprt);
+-		}
++	if (sk->sk_state != TCP_LISTEN)
++		return;
++
++	if (svsk) {
++		/* Refer to svc_setup_socket() for details. */
++		rmb();
++		svsk->sk_odata(sk);
++		set_bit(XPT_CONN, &svsk->sk_xprt.xpt_flags);
++		svc_xprt_enqueue(&svsk->sk_xprt);
+ 	}
+ }
+ 
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index d09dabae56271..671c7f83d5fc3 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -262,117 +262,152 @@ bool cfg80211_is_element_inherited(const struct element *elem,
+ }
+ EXPORT_SYMBOL(cfg80211_is_element_inherited);
+ 
+-static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
+-				  const u8 *subelement, size_t subie_len,
+-				  u8 *new_ie, gfp_t gfp)
++static size_t cfg80211_copy_elem_with_frags(const struct element *elem,
++					    const u8 *ie, size_t ie_len,
++					    u8 **pos, u8 *buf, size_t buf_len)
+ {
+-	u8 *pos, *tmp;
+-	const u8 *tmp_old, *tmp_new;
+-	const struct element *non_inherit_elem;
+-	u8 *sub_copy;
++	if (WARN_ON((u8 *)elem < ie || elem->data > ie + ie_len ||
++		    elem->data + elem->datalen > ie + ie_len))
++		return 0;
+ 
+-	/* copy subelement as we need to change its content to
+-	 * mark an ie after it is processed.
+-	 */
+-	sub_copy = kmemdup(subelement, subie_len, gfp);
+-	if (!sub_copy)
++	if (elem->datalen + 2 > buf + buf_len - *pos)
+ 		return 0;
+ 
+-	pos = &new_ie[0];
++	memcpy(*pos, elem, elem->datalen + 2);
++	*pos += elem->datalen + 2;
+ 
+-	/* set new ssid */
+-	tmp_new = cfg80211_find_ie(WLAN_EID_SSID, sub_copy, subie_len);
+-	if (tmp_new) {
+-		memcpy(pos, tmp_new, tmp_new[1] + 2);
+-		pos += (tmp_new[1] + 2);
++	/* Finish if it is not fragmented  */
++	if (elem->datalen != 255)
++		return *pos - buf;
++
++	ie_len = ie + ie_len - elem->data - elem->datalen;
++	ie = (const u8 *)elem->data + elem->datalen;
++
++	for_each_element(elem, ie, ie_len) {
++		if (elem->id != WLAN_EID_FRAGMENT)
++			break;
++
++		if (elem->datalen + 2 > buf + buf_len - *pos)
++			return 0;
++
++		memcpy(*pos, elem, elem->datalen + 2);
++		*pos += elem->datalen + 2;
++
++		if (elem->datalen != 255)
++			break;
+ 	}
+ 
+-	/* get non inheritance list if exists */
+-	non_inherit_elem =
+-		cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
+-				       sub_copy, subie_len);
++	return *pos - buf;
++}
+ 
+-	/* go through IEs in ie (skip SSID) and subelement,
+-	 * merge them into new_ie
++static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
++				  const u8 *subie, size_t subie_len,
++				  u8 *new_ie, size_t new_ie_len)
++{
++	const struct element *non_inherit_elem, *parent, *sub;
++	u8 *pos = new_ie;
++	u8 id, ext_id;
++	unsigned int match_len;
++
++	non_inherit_elem = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
++						  subie, subie_len);
++
++	/* We copy the elements one by one from the parent to the generated
++	 * elements.
++	 * If they are not inherited (included in subie or in the non
++	 * inheritance element), then we copy all occurrences the first time
++	 * we see this element type.
+ 	 */
+-	tmp_old = cfg80211_find_ie(WLAN_EID_SSID, ie, ielen);
+-	tmp_old = (tmp_old) ? tmp_old + tmp_old[1] + 2 : ie;
+-
+-	while (tmp_old + 2 - ie <= ielen &&
+-	       tmp_old + tmp_old[1] + 2 - ie <= ielen) {
+-		if (tmp_old[0] == 0) {
+-			tmp_old++;
++	for_each_element(parent, ie, ielen) {
++		if (parent->id == WLAN_EID_FRAGMENT)
+ 			continue;
++
++		if (parent->id == WLAN_EID_EXTENSION) {
++			if (parent->datalen < 1)
++				continue;
++
++			id = WLAN_EID_EXTENSION;
++			ext_id = parent->data[0];
++			match_len = 1;
++		} else {
++			id = parent->id;
++			match_len = 0;
+ 		}
+ 
+-		if (tmp_old[0] == WLAN_EID_EXTENSION)
+-			tmp = (u8 *)cfg80211_find_ext_ie(tmp_old[2], sub_copy,
+-							 subie_len);
+-		else
+-			tmp = (u8 *)cfg80211_find_ie(tmp_old[0], sub_copy,
+-						     subie_len);
++		/* Find first occurrence in subie */
++		sub = cfg80211_find_elem_match(id, subie, subie_len,
++					       &ext_id, match_len, 0);
+ 
+-		if (!tmp) {
+-			const struct element *old_elem = (void *)tmp_old;
++		/* Copy from parent if not in subie and inherited */
++		if (!sub &&
++		    cfg80211_is_element_inherited(parent, non_inherit_elem)) {
++			if (!cfg80211_copy_elem_with_frags(parent,
++							   ie, ielen,
++							   &pos, new_ie,
++							   new_ie_len))
++				return 0;
+ 
+-			/* ie in old ie but not in subelement */
+-			if (cfg80211_is_element_inherited(old_elem,
+-							  non_inherit_elem)) {
+-				memcpy(pos, tmp_old, tmp_old[1] + 2);
+-				pos += tmp_old[1] + 2;
+-			}
+-		} else {
+-			/* ie in transmitting ie also in subelement,
+-			 * copy from subelement and flag the ie in subelement
+-			 * as copied (by setting eid field to WLAN_EID_SSID,
+-			 * which is skipped anyway).
+-			 * For vendor ie, compare OUI + type + subType to
+-			 * determine if they are the same ie.
+-			 */
+-			if (tmp_old[0] == WLAN_EID_VENDOR_SPECIFIC) {
+-				if (tmp_old[1] >= 5 && tmp[1] >= 5 &&
+-				    !memcmp(tmp_old + 2, tmp + 2, 5)) {
+-					/* same vendor ie, copy from
+-					 * subelement
+-					 */
+-					memcpy(pos, tmp, tmp[1] + 2);
+-					pos += tmp[1] + 2;
+-					tmp[0] = WLAN_EID_SSID;
+-				} else {
+-					memcpy(pos, tmp_old, tmp_old[1] + 2);
+-					pos += tmp_old[1] + 2;
+-				}
+-			} else {
+-				/* copy ie from subelement into new ie */
+-				memcpy(pos, tmp, tmp[1] + 2);
+-				pos += tmp[1] + 2;
+-				tmp[0] = WLAN_EID_SSID;
+-			}
++			continue;
+ 		}
+ 
+-		if (tmp_old + tmp_old[1] + 2 - ie == ielen)
+-			break;
++		/* Already copied if an earlier element had the same type */
++		if (cfg80211_find_elem_match(id, ie, (u8 *)parent - ie,
++					     &ext_id, match_len, 0))
++			continue;
+ 
+-		tmp_old += tmp_old[1] + 2;
++		/* Not inheriting, copy all similar elements from subie */
++		while (sub) {
++			if (!cfg80211_copy_elem_with_frags(sub,
++							   subie, subie_len,
++							   &pos, new_ie,
++							   new_ie_len))
++				return 0;
++
++			sub = cfg80211_find_elem_match(id,
++						       sub->data + sub->datalen,
++						       subie_len + subie -
++						       (sub->data +
++							sub->datalen),
++						       &ext_id, match_len, 0);
++		}
+ 	}
+ 
+-	/* go through subelement again to check if there is any ie not
+-	 * copied to new ie, skip ssid, capability, bssid-index ie
++	/* The above misses elements that are included in subie but not in the
++	 * parent, so do a pass over subie and append those.
++	 * Skip the non-tx BSSID caps and non-inheritance element.
+ 	 */
+-	tmp_new = sub_copy;
+-	while (tmp_new + 2 - sub_copy <= subie_len &&
+-	       tmp_new + tmp_new[1] + 2 - sub_copy <= subie_len) {
+-		if (!(tmp_new[0] == WLAN_EID_NON_TX_BSSID_CAP ||
+-		      tmp_new[0] == WLAN_EID_SSID)) {
+-			memcpy(pos, tmp_new, tmp_new[1] + 2);
+-			pos += tmp_new[1] + 2;
++	for_each_element(sub, subie, subie_len) {
++		if (sub->id == WLAN_EID_NON_TX_BSSID_CAP)
++			continue;
++
++		if (sub->id == WLAN_EID_FRAGMENT)
++			continue;
++
++		if (sub->id == WLAN_EID_EXTENSION) {
++			if (sub->datalen < 1)
++				continue;
++
++			id = WLAN_EID_EXTENSION;
++			ext_id = sub->data[0];
++			match_len = 1;
++
++			if (ext_id == WLAN_EID_EXT_NON_INHERITANCE)
++				continue;
++		} else {
++			id = sub->id;
++			match_len = 0;
+ 		}
+-		if (tmp_new + tmp_new[1] + 2 - sub_copy == subie_len)
+-			break;
+-		tmp_new += tmp_new[1] + 2;
++
++		/* Processed if one was included in the parent */
++		if (cfg80211_find_elem_match(id, ie, ielen,
++					     &ext_id, match_len, 0))
++			continue;
++
++		if (!cfg80211_copy_elem_with_frags(sub, subie, subie_len,
++						   &pos, new_ie, new_ie_len))
++			return 0;
+ 	}
+ 
+-	kfree(sub_copy);
+ 	return pos - new_ie;
+ }
+ 
+@@ -2170,7 +2205,7 @@ static void cfg80211_parse_mbssid_data(struct wiphy *wiphy,
+ 			new_ie_len = cfg80211_gen_new_ie(ie, ielen,
+ 							 profile,
+ 							 profile_len, new_ie,
+-							 gfp);
++							 IEEE80211_MAX_DATA_LEN);
+ 			if (!new_ie_len)
+ 				continue;
+ 
+diff --git a/net/wireless/wext-core.c b/net/wireless/wext-core.c
+index 76a80a41615be..a57f54bc0e1a7 100644
+--- a/net/wireless/wext-core.c
++++ b/net/wireless/wext-core.c
+@@ -796,6 +796,12 @@ static int ioctl_standard_iw_point(struct iw_point *iwp, unsigned int cmd,
+ 		}
+ 	}
+ 
++	/* Sanity-check to ensure we never end up _allocating_ zero
++	 * bytes of data for extra.
++	 */
++	if (extra_size <= 0)
++		return -EFAULT;
++
+ 	/* kzalloc() ensures NULL-termination for essid_compat. */
+ 	extra = kzalloc(extra_size, GFP_KERNEL);
+ 	if (!extra)
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 691841dc6d334..d04f91f4d09df 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -667,6 +667,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ 	struct sock *sk = sock->sk;
+ 	struct xdp_sock *xs = xdp_sk(sk);
+ 	struct net_device *dev;
++	int bound_dev_if;
+ 	u32 flags, qid;
+ 	int err = 0;
+ 
+@@ -680,6 +681,10 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ 		      XDP_USE_NEED_WAKEUP))
+ 		return -EINVAL;
+ 
++	bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
++	if (bound_dev_if && bound_dev_if != sxdp->sxdp_ifindex)
++		return -EINVAL;
++
+ 	rtnl_lock();
+ 	mutex_lock(&xs->mutex);
+ 	if (xs->state != XSK_READY) {
+diff --git a/samples/bpf/tcp_basertt_kern.c b/samples/bpf/tcp_basertt_kern.c
+index 8dfe09a92feca..822b0742b8154 100644
+--- a/samples/bpf/tcp_basertt_kern.c
++++ b/samples/bpf/tcp_basertt_kern.c
+@@ -47,7 +47,7 @@ int bpf_basertt(struct bpf_sock_ops *skops)
+ 		case BPF_SOCK_OPS_BASE_RTT:
+ 			n = bpf_getsockopt(skops, SOL_TCP, TCP_CONGESTION,
+ 					   cong, sizeof(cong));
+-			if (!n && !__builtin_memcmp(cong, nv, sizeof(nv)+1)) {
++			if (!n && !__builtin_memcmp(cong, nv, sizeof(nv))) {
+ 				/* Set base_rtt to 80us */
+ 				rv = 80;
+ 			} else if (n) {
+diff --git a/samples/ftrace/ftrace-direct-too.c b/samples/ftrace/ftrace-direct-too.c
+index 3927cb880d1ab..4bdd67916ce47 100644
+--- a/samples/ftrace/ftrace-direct-too.c
++++ b/samples/ftrace/ftrace-direct-too.c
+@@ -4,14 +4,14 @@
+ #include <linux/mm.h> /* for handle_mm_fault() */
+ #include <linux/ftrace.h>
+ 
+-extern void my_direct_func(struct vm_area_struct *vma,
+-			   unsigned long address, unsigned int flags);
++extern void my_direct_func(struct vm_area_struct *vma, unsigned long address,
++			   unsigned int flags, struct pt_regs *regs);
+ 
+-void my_direct_func(struct vm_area_struct *vma,
+-			unsigned long address, unsigned int flags)
++void my_direct_func(struct vm_area_struct *vma, unsigned long address,
++		    unsigned int flags, struct pt_regs *regs)
+ {
+-	trace_printk("handle mm fault vma=%p address=%lx flags=%x\n",
+-		     vma, address, flags);
++	trace_printk("handle mm fault vma=%p address=%lx flags=%x regs=%p\n",
++		     vma, address, flags, regs);
+ }
+ 
+ extern void my_tramp(void *);
+@@ -26,7 +26,9 @@ asm (
+ "	pushq %rdi\n"
+ "	pushq %rsi\n"
+ "	pushq %rdx\n"
++"	pushq %rcx\n"
+ "	call my_direct_func\n"
++"	popq %rcx\n"
+ "	popq %rdx\n"
+ "	popq %rsi\n"
+ "	popq %rdi\n"
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index e48742760fec8..78ac98cfa02d4 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -1313,6 +1313,10 @@ static Elf_Sym *find_elf_symbol(struct elf_info *elf, Elf64_Sword addr,
+ 	if (relsym->st_name != 0)
+ 		return relsym;
+ 
++	/*
++	 * Strive to find a better symbol name, but the resulting name may not
++	 * match the symbol referenced in the original code.
++	 */
+ 	relsym_secindex = get_secindex(elf, relsym);
+ 	for (sym = elf->symtab_start; sym < elf->symtab_stop; sym++) {
+ 		if (get_secindex(elf, sym) != relsym_secindex)
+@@ -1617,7 +1621,7 @@ static void default_mismatch_handler(const char *modname, struct elf_info *elf,
+ 
+ static int is_executable_section(struct elf_info* elf, unsigned int section_index)
+ {
+-	if (section_index > elf->num_sections)
++	if (section_index >= elf->num_sections)
+ 		fatal("section_index is outside elf->num_sections!\n");
+ 
+ 	return ((elf->sechdrs[section_index].sh_flags & SHF_EXECINSTR) == SHF_EXECINSTR);
+@@ -1792,19 +1796,33 @@ static int addend_386_rel(struct elf_info *elf, Elf_Shdr *sechdr, Elf_Rela *r)
+ #define	R_ARM_THM_JUMP19	51
+ #endif
+ 
++static int32_t sign_extend32(int32_t value, int index)
++{
++	uint8_t shift = 31 - index;
++
++	return (int32_t)(value << shift) >> shift;
++}
++
+ static int addend_arm_rel(struct elf_info *elf, Elf_Shdr *sechdr, Elf_Rela *r)
+ {
+ 	unsigned int r_typ = ELF_R_TYPE(r->r_info);
++	Elf_Sym *sym = elf->symtab_start + ELF_R_SYM(r->r_info);
++	void *loc = reloc_location(elf, sechdr, r);
++	uint32_t inst;
++	int32_t offset;
+ 
+ 	switch (r_typ) {
+ 	case R_ARM_ABS32:
+-		/* From ARM ABI: (S + A) | T */
+-		r->r_addend = (int)(long)
+-			      (elf->symtab_start + ELF_R_SYM(r->r_info));
++		inst = TO_NATIVE(*(uint32_t *)loc);
++		r->r_addend = inst + sym->st_value;
+ 		break;
+ 	case R_ARM_PC24:
+ 	case R_ARM_CALL:
+ 	case R_ARM_JUMP24:
++		inst = TO_NATIVE(*(uint32_t *)loc);
++		offset = sign_extend32((inst & 0x00ffffff) << 2, 25);
++		r->r_addend = offset + sym->st_value + 8;
++		break;
+ 	case R_ARM_THM_CALL:
+ 	case R_ARM_THM_JUMP24:
+ 	case R_ARM_THM_JUMP19:
+diff --git a/scripts/tags.sh b/scripts/tags.sh
+index b82aebb0c995e..d17801c9d85a0 100755
+--- a/scripts/tags.sh
++++ b/scripts/tags.sh
+@@ -32,6 +32,13 @@ else
+ 	ignore="$ignore ( -path ${tree}tools ) -prune -o"
+ fi
+ 
++# gtags(1) refuses to index any file outside of its current working dir.
++# If gtags indexing is requested and the build output directory is not
++# the kernel source tree, index all files in absolute-path form.
++if [[ "$1" == "gtags" && -n "${tree}" ]]; then
++	tree=$(realpath "$tree")/
++fi
++
+ # Detect if ALLSOURCE_ARCHS is set. If not, we assume SRCARCH
+ if [ "${ALLSOURCE_ARCHS}" = "" ]; then
+ 	ALLSOURCE_ARCHS=${SRCARCH}
+@@ -131,7 +138,7 @@ docscope()
+ 
+ dogtags()
+ {
+-	all_target_sources | gtags -i -f -
++	all_target_sources | gtags -i -C "${tree:-.}" -f - "$PWD"
+ }
+ 
+ # Basic regular expressions with an optional /kind-spec/ for ctags and
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 519656e685822..10896d69c442a 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -909,8 +909,13 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
+ 				goto fail;
+ 			}
+ 
+-			rhashtable_insert_fast(profile->data, &data->head,
+-					       profile->data->p);
++			if (rhashtable_insert_fast(profile->data, &data->head,
++						   profile->data->p)) {
++				kfree_sensitive(data->key);
++				kfree_sensitive(data);
++				info = "failed to insert data to table";
++				goto fail;
++			}
+ 		}
+ 
+ 		if (!unpack_nameX(e, AA_STRUCTEND, NULL)) {
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index 0033364ac404f..8cfc49fa4df5b 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -472,7 +472,9 @@ void evm_inode_post_removexattr(struct dentry *dentry, const char *xattr_name)
+ 
+ /**
+  * evm_inode_setattr - prevent updating an invalid EVM extended attribute
++ * @idmap: idmap of the mount
+  * @dentry: pointer to the affected dentry
++ * @attr: iattr structure containing the new file attributes
+  *
+  * Permit update of file attributes when files have a valid EVM signature,
+  * except in the case of them having an immutable portable signature.
+diff --git a/security/integrity/iint.c b/security/integrity/iint.c
+index 0ba01847e836c..9ed2d5bfbed5e 100644
+--- a/security/integrity/iint.c
++++ b/security/integrity/iint.c
+@@ -43,12 +43,10 @@ static struct integrity_iint_cache *__integrity_iint_find(struct inode *inode)
+ 		else if (inode > iint->inode)
+ 			n = n->rb_right;
+ 		else
+-			break;
++			return iint;
+ 	}
+-	if (!n)
+-		return NULL;
+ 
+-	return iint;
++	return NULL;
+ }
+ 
+ /*
+@@ -121,10 +119,15 @@ struct integrity_iint_cache *integrity_inode_get(struct inode *inode)
+ 		parent = *p;
+ 		test_iint = rb_entry(parent, struct integrity_iint_cache,
+ 				     rb_node);
+-		if (inode < test_iint->inode)
++		if (inode < test_iint->inode) {
+ 			p = &(*p)->rb_left;
+-		else
++		} else if (inode > test_iint->inode) {
+ 			p = &(*p)->rb_right;
++		} else {
++			write_unlock(&integrity_iint_lock);
++			kmem_cache_free(iint_cache, iint);
++			return test_iint;
++		}
+ 	}
+ 
+ 	iint->inode = inode;
+diff --git a/security/integrity/ima/ima_modsig.c b/security/integrity/ima/ima_modsig.c
+index fb25723c65bc4..3e7bee30080f2 100644
+--- a/security/integrity/ima/ima_modsig.c
++++ b/security/integrity/ima/ima_modsig.c
+@@ -89,6 +89,9 @@ int ima_read_modsig(enum ima_hooks func, const void *buf, loff_t buf_len,
+ 
+ /**
+  * ima_collect_modsig - Calculate the file hash without the appended signature.
++ * @modsig: parsed module signature
++ * @buf: data to verify the signature on
++ * @size: data size
+  *
+  * Since the modsig is part of the file contents, the hash used in its signature
+  * isn't the same one ordinarily calculated by IMA. Therefore PKCS7 code
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index 96ecb7d254037..1c403e8a8044c 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -628,6 +628,7 @@ static int get_subaction(struct ima_rule_entry *rule, enum ima_hooks func)
+  * @secid: LSM secid of the task to be validated
+  * @func: IMA hook identifier
+  * @mask: requested action (MAY_READ | MAY_WRITE | MAY_APPEND | MAY_EXEC)
++ * @flags: IMA actions to consider (e.g. IMA_MEASURE | IMA_APPRAISE)
+  * @pcr: set the pcr to extend
+  * @template_desc: the template that should be used for this rule
+  * @keyring: the keyring name, if given, to be used to check in the policy.
+@@ -1515,7 +1516,7 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ 
+ /**
+  * ima_parse_add_rule - add a rule to ima_policy_rules
+- * @rule - ima measurement policy rule
++ * @rule: ima measurement policy rule
+  *
+  * Avoid locking by allowing just one writer at a time in ima_write_policy()
+  * Returns the length of the rule parsed, an error code on failure
+diff --git a/security/keys/request_key.c b/security/keys/request_key.c
+index 07a0ef2baacd8..a7673ad86d18d 100644
+--- a/security/keys/request_key.c
++++ b/security/keys/request_key.c
+@@ -401,17 +401,21 @@ static int construct_alloc_key(struct keyring_search_context *ctx,
+ 	set_bit(KEY_FLAG_USER_CONSTRUCT, &key->flags);
+ 
+ 	if (dest_keyring) {
+-		ret = __key_link_lock(dest_keyring, &ctx->index_key);
++		ret = __key_link_lock(dest_keyring, &key->index_key);
+ 		if (ret < 0)
+ 			goto link_lock_failed;
+-		ret = __key_link_begin(dest_keyring, &ctx->index_key, &edit);
+-		if (ret < 0)
+-			goto link_prealloc_failed;
+ 	}
+ 
+-	/* attach the key to the destination keyring under lock, but we do need
++	/*
++	 * Attach the key to the destination keyring under lock, but we do need
+ 	 * to do another check just in case someone beat us to it whilst we
+-	 * waited for locks */
++	 * waited for locks.
++	 *
++	 * The caller might specify a comparison function which looks for keys
++	 * that do not exactly match but are still equivalent from the caller's
++	 * perspective. The __key_link_begin() operation must be done only after
++	 * an actual key is determined.
++	 */
+ 	mutex_lock(&key_construction_mutex);
+ 
+ 	rcu_read_lock();
+@@ -420,12 +424,16 @@ static int construct_alloc_key(struct keyring_search_context *ctx,
+ 	if (!IS_ERR(key_ref))
+ 		goto key_already_present;
+ 
+-	if (dest_keyring)
++	if (dest_keyring) {
++		ret = __key_link_begin(dest_keyring, &key->index_key, &edit);
++		if (ret < 0)
++			goto link_alloc_failed;
+ 		__key_link(dest_keyring, key, &edit);
++	}
+ 
+ 	mutex_unlock(&key_construction_mutex);
+ 	if (dest_keyring)
+-		__key_link_end(dest_keyring, &ctx->index_key, edit);
++		__key_link_end(dest_keyring, &key->index_key, edit);
+ 	mutex_unlock(&user->cons_lock);
+ 	*_key = key;
+ 	kleave(" = 0 [%d]", key_serial(key));
+@@ -438,10 +446,13 @@ key_already_present:
+ 	mutex_unlock(&key_construction_mutex);
+ 	key = key_ref_to_ptr(key_ref);
+ 	if (dest_keyring) {
++		ret = __key_link_begin(dest_keyring, &key->index_key, &edit);
++		if (ret < 0)
++			goto link_alloc_failed_unlocked;
+ 		ret = __key_link_check_live_key(dest_keyring, key);
+ 		if (ret == 0)
+ 			__key_link(dest_keyring, key, &edit);
+-		__key_link_end(dest_keyring, &ctx->index_key, edit);
++		__key_link_end(dest_keyring, &key->index_key, edit);
+ 		if (ret < 0)
+ 			goto link_check_failed;
+ 	}
+@@ -456,8 +467,10 @@ link_check_failed:
+ 	kleave(" = %d [linkcheck]", ret);
+ 	return ret;
+ 
+-link_prealloc_failed:
+-	__key_link_end(dest_keyring, &ctx->index_key, edit);
++link_alloc_failed:
++	mutex_unlock(&key_construction_mutex);
++link_alloc_failed_unlocked:
++	__key_link_end(dest_keyring, &key->index_key, edit);
+ link_lock_failed:
+ 	mutex_unlock(&user->cons_lock);
+ 	key_put(key);
+diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
+index 4c19d3abddbee..65f68856414a6 100644
+--- a/security/keys/trusted-keys/trusted_tpm2.c
++++ b/security/keys/trusted-keys/trusted_tpm2.c
+@@ -21,7 +21,7 @@ static struct tpm2_hash tpm2_hash_map[] = {
+ };
+ 
+ /**
+- * tpm_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer.
++ * tpm2_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer.
+  *
+  * @buf: an allocated tpm_buf instance
+  * @session_handle: session handle
+diff --git a/sound/core/jack.c b/sound/core/jack.c
+index 45e28db6ea38d..8a9baa084191f 100644
+--- a/sound/core/jack.c
++++ b/sound/core/jack.c
+@@ -364,6 +364,7 @@ void snd_jack_report(struct snd_jack *jack, int status)
+ {
+ 	struct snd_jack_kctl *jack_kctl;
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
++	struct input_dev *idev;
+ 	int i;
+ #endif
+ 
+@@ -375,30 +376,28 @@ void snd_jack_report(struct snd_jack *jack, int status)
+ 					    status & jack_kctl->mask_bits);
+ 
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
+-	mutex_lock(&jack->input_dev_lock);
+-	if (!jack->input_dev) {
+-		mutex_unlock(&jack->input_dev_lock);
++	idev = input_get_device(jack->input_dev);
++	if (!idev)
+ 		return;
+-	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(jack->key); i++) {
+ 		int testbit = SND_JACK_BTN_0 >> i;
+ 
+ 		if (jack->type & testbit)
+-			input_report_key(jack->input_dev, jack->key[i],
++			input_report_key(idev, jack->key[i],
+ 					 status & testbit);
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(jack_switch_types); i++) {
+ 		int testbit = 1 << i;
+ 		if (jack->type & testbit)
+-			input_report_switch(jack->input_dev,
++			input_report_switch(idev,
+ 					    jack_switch_types[i],
+ 					    status & testbit);
+ 	}
+ 
+-	input_sync(jack->input_dev);
+-	mutex_unlock(&jack->input_dev_lock);
++	input_sync(idev);
++	input_put_device(idev);
+ #endif /* CONFIG_SND_JACK_INPUT_DEV */
+ }
+ EXPORT_SYMBOL(snd_jack_report);
+diff --git a/sound/pci/ac97/ac97_codec.c b/sound/pci/ac97/ac97_codec.c
+index cd66632bf1c37..e18572eae5e01 100644
+--- a/sound/pci/ac97/ac97_codec.c
++++ b/sound/pci/ac97/ac97_codec.c
+@@ -2007,8 +2007,8 @@ int snd_ac97_mixer(struct snd_ac97_bus *bus, struct snd_ac97_template *template,
+ 		.dev_disconnect =	snd_ac97_dev_disconnect,
+ 	};
+ 
+-	if (rac97)
+-		*rac97 = NULL;
++	if (!rac97)
++		return -EINVAL;
+ 	if (snd_BUG_ON(!bus || !template))
+ 		return -EINVAL;
+ 	if (snd_BUG_ON(template->num >= 4))
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 7d6a36b647a73..6bfc7e28515a6 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -121,6 +121,7 @@ struct alc_spec {
+ 	unsigned int ultra_low_power:1;
+ 	unsigned int has_hs_key:1;
+ 	unsigned int no_internal_mic_pin:1;
++	unsigned int en_3kpull_low:1;
+ 
+ 	/* for PLL fix */
+ 	hda_nid_t pll_nid;
+@@ -3617,6 +3618,7 @@ static void alc256_shutup(struct hda_codec *codec)
+ 	if (!hp_pin)
+ 		hp_pin = 0x21;
+ 
++	alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+ 	hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+ 
+ 	if (hp_pin_sense)
+@@ -3633,8 +3635,7 @@ static void alc256_shutup(struct hda_codec *codec)
+ 	/* If disable 3k pulldown control for alc257, the Mic detection will not work correctly
+ 	 * when booting with headset plugged. So skip setting it for the codec alc257
+ 	 */
+-	if (codec->core.vendor_id != 0x10ec0236 &&
+-	    codec->core.vendor_id != 0x10ec0257)
++	if (spec->en_3kpull_low)
+ 		alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+ 
+ 	if (!spec->no_shutup_pins)
+@@ -4558,6 +4559,21 @@ static void alc236_fixup_hp_mute_led_coefbit(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc236_fixup_hp_mute_led_coefbit2(struct hda_codec *codec,
++					  const struct hda_fixup *fix, int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++		spec->mute_led_polarity = 0;
++		spec->mute_led_coef.idx = 0x07;
++		spec->mute_led_coef.mask = 1;
++		spec->mute_led_coef.on = 1;
++		spec->mute_led_coef.off = 0;
++		snd_hda_gen_add_mute_led_cdev(codec, coef_mute_led_set);
++	}
++}
++
+ /* turn on/off mic-mute LED per capture hook by coef bit */
+ static int coef_micmute_led_set(struct led_classdev *led_cdev,
+ 				enum led_brightness brightness)
+@@ -6877,6 +6893,7 @@ enum {
+ 	ALC285_FIXUP_HP_GPIO_LED,
+ 	ALC285_FIXUP_HP_MUTE_LED,
+ 	ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED,
++	ALC236_FIXUP_HP_MUTE_LED_COEFBIT2,
+ 	ALC236_FIXUP_HP_GPIO_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
+@@ -8249,6 +8266,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_hp_spectre_x360_mute_led,
+ 	},
++	[ALC236_FIXUP_HP_MUTE_LED_COEFBIT2] = {
++	    .type = HDA_FIXUP_FUNC,
++	    .v.func = alc236_fixup_hp_mute_led_coefbit2,
++	},
+ 	[ALC236_FIXUP_HP_GPIO_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc236_fixup_hp_gpio_led,
+@@ -9003,6 +9024,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
++	SND_PCI_QUIRK(0x103c, 0x887a, "HP Laptop 15s-eq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x888d, "HP ZBook Power 15.6 inch G8 Mobile Workstation PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8895, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
+@@ -10065,6 +10087,8 @@ static int patch_alc269(struct hda_codec *codec)
+ 		spec->shutup = alc256_shutup;
+ 		spec->init_hook = alc256_init;
+ 		spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
++		if (codec->bus->pci->vendor == PCI_VENDOR_ID_AMD)
++			spec->en_3kpull_low = true;
+ 		break;
+ 	case 0x10ec0257:
+ 		spec->codec_variant = ALC269_TYPE_ALC257;
+diff --git a/sound/soc/codecs/es8316.c b/sound/soc/codecs/es8316.c
+index bc3d46617a113..03ad34a275da2 100644
+--- a/sound/soc/codecs/es8316.c
++++ b/sound/soc/codecs/es8316.c
+@@ -52,7 +52,12 @@ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(dac_vol_tlv, -9600, 50, 1);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(adc_vol_tlv, -9600, 50, 1);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_max_gain_tlv, -650, 150, 0);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_min_gain_tlv, -1200, 150, 0);
+-static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_target_tlv, -1650, 150, 0);
++
++static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(alc_target_tlv,
++	0, 10, TLV_DB_SCALE_ITEM(-1650, 150, 0),
++	11, 11, TLV_DB_SCALE_ITEM(-150, 0, 0),
++);
++
+ static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(hpmixer_gain_tlv,
+ 	0, 4, TLV_DB_SCALE_ITEM(-1200, 150, 0),
+ 	8, 11, TLV_DB_SCALE_ITEM(-450, 150, 0),
+@@ -115,7 +120,7 @@ static const struct snd_kcontrol_new es8316_snd_controls[] = {
+ 		       alc_max_gain_tlv),
+ 	SOC_SINGLE_TLV("ALC Capture Min Volume", ES8316_ADC_ALC2, 0, 28, 0,
+ 		       alc_min_gain_tlv),
+-	SOC_SINGLE_TLV("ALC Capture Target Volume", ES8316_ADC_ALC3, 4, 10, 0,
++	SOC_SINGLE_TLV("ALC Capture Target Volume", ES8316_ADC_ALC3, 4, 11, 0,
+ 		       alc_target_tlv),
+ 	SOC_SINGLE("ALC Capture Hold Time", ES8316_ADC_ALC3, 0, 10, 0),
+ 	SOC_SINGLE("ALC Capture Decay Time", ES8316_ADC_ALC4, 4, 10, 0),
+@@ -364,13 +369,11 @@ static int es8316_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+ 	int count = 0;
+ 
+ 	es8316->sysclk = freq;
++	es8316->sysclk_constraints.list = NULL;
++	es8316->sysclk_constraints.count = 0;
+ 
+-	if (freq == 0) {
+-		es8316->sysclk_constraints.list = NULL;
+-		es8316->sysclk_constraints.count = 0;
+-
++	if (freq == 0)
+ 		return 0;
+-	}
+ 
+ 	ret = clk_set_rate(es8316->mclk, freq);
+ 	if (ret)
+@@ -386,8 +389,10 @@ static int es8316_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+ 			es8316->allowed_rates[count++] = freq / ratio;
+ 	}
+ 
+-	es8316->sysclk_constraints.list = es8316->allowed_rates;
+-	es8316->sysclk_constraints.count = count;
++	if (count) {
++		es8316->sysclk_constraints.list = es8316->allowed_rates;
++		es8316->sysclk_constraints.count = count;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 6a5d2b08e2713..03731d14d4757 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -552,7 +552,7 @@ static void fsl_sai_config_disable(struct fsl_sai *sai, int dir)
+ 	u32 xcsr, count = 100;
+ 
+ 	regmap_update_bits(sai->regmap, FSL_SAI_xCSR(tx, ofs),
+-			   FSL_SAI_CSR_TERE, 0);
++			   FSL_SAI_CSR_TERE | FSL_SAI_CSR_BCE, 0);
+ 
+ 	/* TERE will remain set till the end of current frame */
+ 	do {
+diff --git a/sound/soc/fsl/fsl_sai.h b/sound/soc/fsl/fsl_sai.h
+index 8923c680f0e0f..691847d54b17d 100644
+--- a/sound/soc/fsl/fsl_sai.h
++++ b/sound/soc/fsl/fsl_sai.h
+@@ -87,6 +87,7 @@
+ /* SAI Transmit/Receive Control Register */
+ #define FSL_SAI_CSR_TERE	BIT(31)
+ #define FSL_SAI_CSR_SE		BIT(30)
++#define FSL_SAI_CSR_BCE		BIT(28)
+ #define FSL_SAI_CSR_FR		BIT(25)
+ #define FSL_SAI_CSR_SR		BIT(24)
+ #define FSL_SAI_CSR_xF_SHIFT	16
+diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
+index cbdc0a2c09c54..77d8234c7ac49 100644
+--- a/sound/soc/fsl/imx-audmix.c
++++ b/sound/soc/fsl/imx-audmix.c
+@@ -230,6 +230,8 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 
+ 		dai_name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s%s",
+ 					  fe_name_pref, args.np->full_name + 1);
++		if (!dai_name)
++			return -ENOMEM;
+ 
+ 		dev_info(pdev->dev.parent, "DAI FE name:%s\n", dai_name);
+ 
+@@ -238,6 +240,8 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 			capture_dai_name =
+ 				devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s %s",
+ 					       dai_name, "CPU-Capture");
++			if (!capture_dai_name)
++				return -ENOMEM;
+ 		}
+ 
+ 		priv->dai[i].cpus = &dlc[0];
+@@ -268,6 +272,8 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 				       "AUDMIX-Playback-%d", i);
+ 		be_cp = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ 				       "AUDMIX-Capture-%d", i);
++		if (!be_name || !be_pb || !be_cp)
++			return -ENOMEM;
+ 
+ 		priv->dai[num_dai + i].cpus = &dlc[3];
+ 		priv->dai[num_dai + i].codecs = &dlc[4];
+@@ -295,6 +301,9 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 		priv->dapm_routes[i].source =
+ 			devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s %s",
+ 				       dai_name, "CPU-Playback");
++		if (!priv->dapm_routes[i].source)
++			return -ENOMEM;
++
+ 		priv->dapm_routes[i].sink = be_pb;
+ 		priv->dapm_routes[num_dai + i].source   = be_pb;
+ 		priv->dapm_routes[num_dai + i].sink     = be_cp;
+diff --git a/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c b/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
+index 619d6733091cd..3f6d9659ee959 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
++++ b/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
+@@ -1072,6 +1072,10 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
+ 
+ 	afe->dev = &pdev->dev;
+ 
++	irq_id = platform_get_irq(pdev, 0);
++	if (irq_id <= 0)
++		return irq_id < 0 ? irq_id : -ENXIO;
++
+ 	afe->base_addr = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(afe->base_addr))
+ 		return PTR_ERR(afe->base_addr);
+@@ -1158,14 +1162,14 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	comp_hdmi = devm_kzalloc(&pdev->dev, sizeof(*comp_hdmi), GFP_KERNEL);
+ 	if (!comp_hdmi) {
+ 		ret = -ENOMEM;
+-		goto err_pm_disable;
++		goto err_cleanup_components;
+ 	}
+ 
+ 	ret = snd_soc_component_initialize(comp_hdmi,
+ 					   &mt8173_afe_hdmi_dai_component,
+ 					   &pdev->dev);
+ 	if (ret)
+-		goto err_pm_disable;
++		goto err_cleanup_components;
+ 
+ #ifdef CONFIG_DEBUG_FS
+ 	comp_hdmi->debugfs_prefix = "hdmi";
+@@ -1177,14 +1181,11 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_cleanup_components;
+ 
+-	irq_id = platform_get_irq(pdev, 0);
+-	if (irq_id <= 0)
+-		return irq_id < 0 ? irq_id : -ENXIO;
+ 	ret = devm_request_irq(afe->dev, irq_id, mt8173_afe_irq_handler,
+ 			       0, "Afe_ISR_Handle", (void *)afe);
+ 	if (ret) {
+ 		dev_err(afe->dev, "could not request_irq\n");
+-		goto err_pm_disable;
++		goto err_cleanup_components;
+ 	}
+ 
+ 	dev_info(&pdev->dev, "MT8173 AFE driver initialized.\n");
+diff --git a/tools/bpf/bpftool/feature.c b/tools/bpf/bpftool/feature.c
+index 359960a8f1def..5f0b1397798ed 100644
+--- a/tools/bpf/bpftool/feature.c
++++ b/tools/bpf/bpftool/feature.c
+@@ -135,12 +135,12 @@ static void print_end_section(void)
+ 
+ /* Probing functions */
+ 
+-static int read_procfs(const char *path)
++static long read_procfs(const char *path)
+ {
+ 	char *endptr, *line = NULL;
+ 	size_t len = 0;
+ 	FILE *fd;
+-	int res;
++	long res;
+ 
+ 	fd = fopen(path, "r");
+ 	if (!fd)
+@@ -162,7 +162,7 @@ static int read_procfs(const char *path)
+ 
+ static void probe_unprivileged_disabled(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -181,14 +181,14 @@ static void probe_unprivileged_disabled(void)
+ 			printf("Unable to retrieve required privileges for bpf() syscall\n");
+ 			break;
+ 		default:
+-			printf("bpf() syscall restriction has unknown value %d\n", res);
++			printf("bpf() syscall restriction has unknown value %ld\n", res);
+ 		}
+ 	}
+ }
+ 
+ static void probe_jit_enable(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -210,7 +210,7 @@ static void probe_jit_enable(void)
+ 			printf("Unable to retrieve JIT-compiler status\n");
+ 			break;
+ 		default:
+-			printf("JIT-compiler status has unknown value %d\n",
++			printf("JIT-compiler status has unknown value %ld\n",
+ 			       res);
+ 		}
+ 	}
+@@ -218,7 +218,7 @@ static void probe_jit_enable(void)
+ 
+ static void probe_jit_harden(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -240,7 +240,7 @@ static void probe_jit_harden(void)
+ 			printf("Unable to retrieve JIT hardening status\n");
+ 			break;
+ 		default:
+-			printf("JIT hardening status has unknown value %d\n",
++			printf("JIT hardening status has unknown value %ld\n",
+ 			       res);
+ 		}
+ 	}
+@@ -248,7 +248,7 @@ static void probe_jit_harden(void)
+ 
+ static void probe_jit_kallsyms(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -267,14 +267,14 @@ static void probe_jit_kallsyms(void)
+ 			printf("Unable to retrieve JIT kallsyms export status\n");
+ 			break;
+ 		default:
+-			printf("JIT kallsyms exports status has unknown value %d\n", res);
++			printf("JIT kallsyms exports status has unknown value %ld\n", res);
+ 		}
+ 	}
+ }
+ 
+ static void probe_jit_limit(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -287,7 +287,7 @@ static void probe_jit_limit(void)
+ 			printf("Unable to retrieve global memory limit for JIT compiler for unprivileged users\n");
+ 			break;
+ 		default:
+-			printf("Global memory limit for JIT compiler for unprivileged users is %d bytes\n", res);
++			printf("Global memory limit for JIT compiler for unprivileged users is %ld bytes\n", res);
+ 		}
+ 	}
+ }
+diff --git a/tools/include/uapi/linux/tcp.h b/tools/include/uapi/linux/tcp.h
+new file mode 100644
+index 0000000000000..13ceeb395eb8f
+--- /dev/null
++++ b/tools/include/uapi/linux/tcp.h
+@@ -0,0 +1,357 @@
++/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
++/*
++ * INET		An implementation of the TCP/IP protocol suite for the LINUX
++ *		operating system.  INET is implemented using the  BSD Socket
++ *		interface as the means of communication with the user level.
++ *
++ *		Definitions for the TCP protocol.
++ *
++ * Version:	@(#)tcp.h	1.0.2	04/28/93
++ *
++ * Author:	Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
++ *
++ *		This program is free software; you can redistribute it and/or
++ *		modify it under the terms of the GNU General Public License
++ *		as published by the Free Software Foundation; either version
++ *		2 of the License, or (at your option) any later version.
++ */
++#ifndef _UAPI_LINUX_TCP_H
++#define _UAPI_LINUX_TCP_H
++
++#include <linux/types.h>
++#include <asm/byteorder.h>
++#include <linux/socket.h>
++
++struct tcphdr {
++	__be16	source;
++	__be16	dest;
++	__be32	seq;
++	__be32	ack_seq;
++#if defined(__LITTLE_ENDIAN_BITFIELD)
++	__u16	res1:4,
++		doff:4,
++		fin:1,
++		syn:1,
++		rst:1,
++		psh:1,
++		ack:1,
++		urg:1,
++		ece:1,
++		cwr:1;
++#elif defined(__BIG_ENDIAN_BITFIELD)
++	__u16	doff:4,
++		res1:4,
++		cwr:1,
++		ece:1,
++		urg:1,
++		ack:1,
++		psh:1,
++		rst:1,
++		syn:1,
++		fin:1;
++#else
++#error	"Adjust your <asm/byteorder.h> defines"
++#endif	
++	__be16	window;
++	__sum16	check;
++	__be16	urg_ptr;
++};
++
++/*
++ *	The union cast uses a gcc extension to avoid aliasing problems
++ *  (union is compatible to any of its members)
++ *  This means this part of the code is -fstrict-aliasing safe now.
++ */
++union tcp_word_hdr { 
++	struct tcphdr hdr;
++	__be32 		  words[5];
++}; 
++
++#define tcp_flag_word(tp) ( ((union tcp_word_hdr *)(tp))->words [3]) 
++
++enum { 
++	TCP_FLAG_CWR = __constant_cpu_to_be32(0x00800000),
++	TCP_FLAG_ECE = __constant_cpu_to_be32(0x00400000),
++	TCP_FLAG_URG = __constant_cpu_to_be32(0x00200000),
++	TCP_FLAG_ACK = __constant_cpu_to_be32(0x00100000),
++	TCP_FLAG_PSH = __constant_cpu_to_be32(0x00080000),
++	TCP_FLAG_RST = __constant_cpu_to_be32(0x00040000),
++	TCP_FLAG_SYN = __constant_cpu_to_be32(0x00020000),
++	TCP_FLAG_FIN = __constant_cpu_to_be32(0x00010000),
++	TCP_RESERVED_BITS = __constant_cpu_to_be32(0x0F000000),
++	TCP_DATA_OFFSET = __constant_cpu_to_be32(0xF0000000)
++}; 
++
++/*
++ * TCP general constants
++ */
++#define TCP_MSS_DEFAULT		 536U	/* IPv4 (RFC1122, RFC2581) */
++#define TCP_MSS_DESIRED		1220U	/* IPv6 (tunneled), EDNS0 (RFC3226) */
++
++/* TCP socket options */
++#define TCP_NODELAY		1	/* Turn off Nagle's algorithm. */
++#define TCP_MAXSEG		2	/* Limit MSS */
++#define TCP_CORK		3	/* Never send partially complete segments */
++#define TCP_KEEPIDLE		4	/* Start keeplives after this period */
++#define TCP_KEEPINTVL		5	/* Interval between keepalives */
++#define TCP_KEEPCNT		6	/* Number of keepalives before death */
++#define TCP_SYNCNT		7	/* Number of SYN retransmits */
++#define TCP_LINGER2		8	/* Life time of orphaned FIN-WAIT-2 state */
++#define TCP_DEFER_ACCEPT	9	/* Wake up listener only when data arrive */
++#define TCP_WINDOW_CLAMP	10	/* Bound advertised window */
++#define TCP_INFO		11	/* Information about this connection. */
++#define TCP_QUICKACK		12	/* Block/reenable quick acks */
++#define TCP_CONGESTION		13	/* Congestion control algorithm */
++#define TCP_MD5SIG		14	/* TCP MD5 Signature (RFC2385) */
++#define TCP_THIN_LINEAR_TIMEOUTS 16      /* Use linear timeouts for thin streams*/
++#define TCP_THIN_DUPACK         17      /* Fast retrans. after 1 dupack */
++#define TCP_USER_TIMEOUT	18	/* How long for loss retry before timeout */
++#define TCP_REPAIR		19	/* TCP sock is under repair right now */
++#define TCP_REPAIR_QUEUE	20
++#define TCP_QUEUE_SEQ		21
++#define TCP_REPAIR_OPTIONS	22
++#define TCP_FASTOPEN		23	/* Enable FastOpen on listeners */
++#define TCP_TIMESTAMP		24
++#define TCP_NOTSENT_LOWAT	25	/* limit number of unsent bytes in write queue */
++#define TCP_CC_INFO		26	/* Get Congestion Control (optional) info */
++#define TCP_SAVE_SYN		27	/* Record SYN headers for new connections */
++#define TCP_SAVED_SYN		28	/* Get SYN headers recorded for connection */
++#define TCP_REPAIR_WINDOW	29	/* Get/set window parameters */
++#define TCP_FASTOPEN_CONNECT	30	/* Attempt FastOpen with connect */
++#define TCP_ULP			31	/* Attach a ULP to a TCP connection */
++#define TCP_MD5SIG_EXT		32	/* TCP MD5 Signature with extensions */
++#define TCP_FASTOPEN_KEY	33	/* Set the key for Fast Open (cookie) */
++#define TCP_FASTOPEN_NO_COOKIE	34	/* Enable TFO without a TFO cookie */
++#define TCP_ZEROCOPY_RECEIVE	35
++#define TCP_INQ			36	/* Notify bytes available to read as a cmsg on read */
++
++#define TCP_CM_INQ		TCP_INQ
++
++#define TCP_TX_DELAY		37	/* delay outgoing packets by XX usec */
++
++
++#define TCP_REPAIR_ON		1
++#define TCP_REPAIR_OFF		0
++#define TCP_REPAIR_OFF_NO_WP	-1	/* Turn off without window probes */
++
++struct tcp_repair_opt {
++	__u32	opt_code;
++	__u32	opt_val;
++};
++
++struct tcp_repair_window {
++	__u32	snd_wl1;
++	__u32	snd_wnd;
++	__u32	max_window;
++
++	__u32	rcv_wnd;
++	__u32	rcv_wup;
++};
++
++enum {
++	TCP_NO_QUEUE,
++	TCP_RECV_QUEUE,
++	TCP_SEND_QUEUE,
++	TCP_QUEUES_NR,
++};
++
++/* why fastopen failed from client perspective */
++enum tcp_fastopen_client_fail {
++	TFO_STATUS_UNSPEC, /* catch-all */
++	TFO_COOKIE_UNAVAILABLE, /* if not in TFO_CLIENT_NO_COOKIE mode */
++	TFO_DATA_NOT_ACKED, /* SYN-ACK did not ack SYN data */
++	TFO_SYN_RETRANSMITTED, /* SYN-ACK did not ack SYN data after timeout */
++};
++
++/* for TCP_INFO socket option */
++#define TCPI_OPT_TIMESTAMPS	1
++#define TCPI_OPT_SACK		2
++#define TCPI_OPT_WSCALE		4
++#define TCPI_OPT_ECN		8 /* ECN was negociated at TCP session init */
++#define TCPI_OPT_ECN_SEEN	16 /* we received at least one packet with ECT */
++#define TCPI_OPT_SYN_DATA	32 /* SYN-ACK acked data in SYN sent or rcvd */
++
++/*
++ * Sender's congestion state indicating normal or abnormal situations
++ * in the last round of packets sent. The state is driven by the ACK
++ * information and timer events.
++ */
++enum tcp_ca_state {
++	/*
++	 * Nothing bad has been observed recently.
++	 * No apparent reordering, packet loss, or ECN marks.
++	 */
++	TCP_CA_Open = 0,
++#define TCPF_CA_Open	(1<<TCP_CA_Open)
++	/*
++	 * The sender enters disordered state when it has received DUPACKs or
++	 * SACKs in the last round of packets sent. This could be due to packet
++	 * loss or reordering but needs further information to confirm packets
++	 * have been lost.
++	 */
++	TCP_CA_Disorder = 1,
++#define TCPF_CA_Disorder (1<<TCP_CA_Disorder)
++	/*
++	 * The sender enters Congestion Window Reduction (CWR) state when it
++	 * has received ACKs with ECN-ECE marks, or has experienced congestion
++	 * or packet discard on the sender host (e.g. qdisc).
++	 */
++	TCP_CA_CWR = 2,
++#define TCPF_CA_CWR	(1<<TCP_CA_CWR)
++	/*
++	 * The sender is in fast recovery and retransmitting lost packets,
++	 * typically triggered by ACK events.
++	 */
++	TCP_CA_Recovery = 3,
++#define TCPF_CA_Recovery (1<<TCP_CA_Recovery)
++	/*
++	 * The sender is in loss recovery triggered by retransmission timeout.
++	 */
++	TCP_CA_Loss = 4
++#define TCPF_CA_Loss	(1<<TCP_CA_Loss)
++};
++
++struct tcp_info {
++	__u8	tcpi_state;
++	__u8	tcpi_ca_state;
++	__u8	tcpi_retransmits;
++	__u8	tcpi_probes;
++	__u8	tcpi_backoff;
++	__u8	tcpi_options;
++	__u8	tcpi_snd_wscale : 4, tcpi_rcv_wscale : 4;
++	__u8	tcpi_delivery_rate_app_limited:1, tcpi_fastopen_client_fail:2;
++
++	__u32	tcpi_rto;
++	__u32	tcpi_ato;
++	__u32	tcpi_snd_mss;
++	__u32	tcpi_rcv_mss;
++
++	__u32	tcpi_unacked;
++	__u32	tcpi_sacked;
++	__u32	tcpi_lost;
++	__u32	tcpi_retrans;
++	__u32	tcpi_fackets;
++
++	/* Times. */
++	__u32	tcpi_last_data_sent;
++	__u32	tcpi_last_ack_sent;     /* Not remembered, sorry. */
++	__u32	tcpi_last_data_recv;
++	__u32	tcpi_last_ack_recv;
++
++	/* Metrics. */
++	__u32	tcpi_pmtu;
++	__u32	tcpi_rcv_ssthresh;
++	__u32	tcpi_rtt;
++	__u32	tcpi_rttvar;
++	__u32	tcpi_snd_ssthresh;
++	__u32	tcpi_snd_cwnd;
++	__u32	tcpi_advmss;
++	__u32	tcpi_reordering;
++
++	__u32	tcpi_rcv_rtt;
++	__u32	tcpi_rcv_space;
++
++	__u32	tcpi_total_retrans;
++
++	__u64	tcpi_pacing_rate;
++	__u64	tcpi_max_pacing_rate;
++	__u64	tcpi_bytes_acked;    /* RFC4898 tcpEStatsAppHCThruOctetsAcked */
++	__u64	tcpi_bytes_received; /* RFC4898 tcpEStatsAppHCThruOctetsReceived */
++	__u32	tcpi_segs_out;	     /* RFC4898 tcpEStatsPerfSegsOut */
++	__u32	tcpi_segs_in;	     /* RFC4898 tcpEStatsPerfSegsIn */
++
++	__u32	tcpi_notsent_bytes;
++	__u32	tcpi_min_rtt;
++	__u32	tcpi_data_segs_in;	/* RFC4898 tcpEStatsDataSegsIn */
++	__u32	tcpi_data_segs_out;	/* RFC4898 tcpEStatsDataSegsOut */
++
++	__u64   tcpi_delivery_rate;
++
++	__u64	tcpi_busy_time;      /* Time (usec) busy sending data */
++	__u64	tcpi_rwnd_limited;   /* Time (usec) limited by receive window */
++	__u64	tcpi_sndbuf_limited; /* Time (usec) limited by send buffer */
++
++	__u32	tcpi_delivered;
++	__u32	tcpi_delivered_ce;
++
++	__u64	tcpi_bytes_sent;     /* RFC4898 tcpEStatsPerfHCDataOctetsOut */
++	__u64	tcpi_bytes_retrans;  /* RFC4898 tcpEStatsPerfOctetsRetrans */
++	__u32	tcpi_dsack_dups;     /* RFC4898 tcpEStatsStackDSACKDups */
++	__u32	tcpi_reord_seen;     /* reordering events seen */
++
++	__u32	tcpi_rcv_ooopack;    /* Out-of-order packets received */
++
++	__u32	tcpi_snd_wnd;	     /* peer's advertised receive window after
++				      * scaling (bytes)
++				      */
++};
++
++/* netlink attributes types for SCM_TIMESTAMPING_OPT_STATS */
++enum {
++	TCP_NLA_PAD,
++	TCP_NLA_BUSY,		/* Time (usec) busy sending data */
++	TCP_NLA_RWND_LIMITED,	/* Time (usec) limited by receive window */
++	TCP_NLA_SNDBUF_LIMITED,	/* Time (usec) limited by send buffer */
++	TCP_NLA_DATA_SEGS_OUT,	/* Data pkts sent including retransmission */
++	TCP_NLA_TOTAL_RETRANS,	/* Data pkts retransmitted */
++	TCP_NLA_PACING_RATE,    /* Pacing rate in bytes per second */
++	TCP_NLA_DELIVERY_RATE,  /* Delivery rate in bytes per second */
++	TCP_NLA_SND_CWND,       /* Sending congestion window */
++	TCP_NLA_REORDERING,     /* Reordering metric */
++	TCP_NLA_MIN_RTT,        /* minimum RTT */
++	TCP_NLA_RECUR_RETRANS,  /* Recurring retransmits for the current pkt */
++	TCP_NLA_DELIVERY_RATE_APP_LMT, /* delivery rate application limited ? */
++	TCP_NLA_SNDQ_SIZE,	/* Data (bytes) pending in send queue */
++	TCP_NLA_CA_STATE,	/* ca_state of socket */
++	TCP_NLA_SND_SSTHRESH,	/* Slow start size threshold */
++	TCP_NLA_DELIVERED,	/* Data pkts delivered incl. out-of-order */
++	TCP_NLA_DELIVERED_CE,	/* Like above but only ones w/ CE marks */
++	TCP_NLA_BYTES_SENT,	/* Data bytes sent including retransmission */
++	TCP_NLA_BYTES_RETRANS,	/* Data bytes retransmitted */
++	TCP_NLA_DSACK_DUPS,	/* DSACK blocks received */
++	TCP_NLA_REORD_SEEN,	/* reordering events seen */
++	TCP_NLA_SRTT,		/* smoothed RTT in usecs */
++	TCP_NLA_TIMEOUT_REHASH, /* Timeout-triggered rehash attempts */
++	TCP_NLA_BYTES_NOTSENT,	/* Bytes in write queue not yet sent */
++	TCP_NLA_EDT,		/* Earliest departure time (CLOCK_MONOTONIC) */
++};
++
++/* for TCP_MD5SIG socket option */
++#define TCP_MD5SIG_MAXKEYLEN	80
++
++/* tcp_md5sig extension flags for TCP_MD5SIG_EXT */
++#define TCP_MD5SIG_FLAG_PREFIX		0x1	/* address prefix length */
++#define TCP_MD5SIG_FLAG_IFINDEX		0x2	/* ifindex set */
++
++struct tcp_md5sig {
++	struct __kernel_sockaddr_storage tcpm_addr;	/* address associated */
++	__u8	tcpm_flags;				/* extension flags */
++	__u8	tcpm_prefixlen;				/* address prefix */
++	__u16	tcpm_keylen;				/* key length */
++	int	tcpm_ifindex;				/* device index for scope */
++	__u8	tcpm_key[TCP_MD5SIG_MAXKEYLEN];		/* key (binary) */
++};
++
++/* INET_DIAG_MD5SIG */
++struct tcp_diag_md5sig {
++	__u8	tcpm_family;
++	__u8	tcpm_prefixlen;
++	__u16	tcpm_keylen;
++	__be32	tcpm_addr[4];
++	__u8	tcpm_key[TCP_MD5SIG_MAXKEYLEN];
++};
++
++/* setsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, ...) */
++
++#define TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT 0x1
++struct tcp_zerocopy_receive {
++	__u64 address;		/* in: address of mapping */
++	__u32 length;		/* in/out: number of bytes to map/mapped */
++	__u32 recv_skip_hint;	/* out: amount of bytes to skip */
++	__u32 inq; /* out: amount of bytes in read queue */
++	__s32 err; /* out: socket error */
++	__u64 copybuf_address;	/* in: copybuf address (small reads) */
++	__s32 copybuf_len; /* in/out: copybuf bytes avail/used or error */
++	__u32 flags; /* in: flags */
++};
++#endif /* _UAPI_LINUX_TCP_H */
+diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
+index 72b251110c4d7..1c389b0f5499a 100644
+--- a/tools/lib/bpf/bpf_helpers.h
++++ b/tools/lib/bpf/bpf_helpers.h
+@@ -42,16 +42,21 @@
+ /*
+  * Helper macro to manipulate data structures
+  */
+-#ifndef offsetof
+-#define offsetof(TYPE, MEMBER)	((unsigned long)&((TYPE *)0)->MEMBER)
+-#endif
+-#ifndef container_of
++
++/* offsetof() definition that uses __builtin_offset() might not preserve field
++ * offset CO-RE relocation properly, so force-redefine offsetof() using
++ * old-school approach which works with CO-RE correctly
++ */
++#undef offsetof
++#define offsetof(type, member)	((unsigned long)&((type *)0)->member)
++
++/* redefined container_of() to ensure we use the above offsetof() macro */
++#undef container_of
+ #define container_of(ptr, type, member)				\
+ 	({							\
+ 		void *__mptr = (void *)(ptr);			\
+ 		((type *)(__mptr - offsetof(type, member)));	\
+ 	})
+-#endif
+ 
+ /*
+  * Helper macro to throw a compilation error if __bpf_unreachable() gets
+diff --git a/tools/perf/builtin-bench.c b/tools/perf/builtin-bench.c
+index 62a7b7420a448..fb3029495c23c 100644
+--- a/tools/perf/builtin-bench.c
++++ b/tools/perf/builtin-bench.c
+@@ -21,6 +21,7 @@
+ #include "builtin.h"
+ #include "bench/bench.h"
+ 
++#include <locale.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
+@@ -225,7 +226,6 @@ static void run_collection(struct collection *coll)
+ 		if (!bench->fn)
+ 			break;
+ 		printf("# Running %s/%s benchmark...\n", coll->name, bench->name);
+-		fflush(stdout);
+ 
+ 		argv[1] = bench->name;
+ 		run_bench(coll->name, bench->name, bench->fn, 1, argv);
+@@ -246,6 +246,10 @@ int cmd_bench(int argc, const char **argv)
+ 	struct collection *coll;
+ 	int ret = 0;
+ 
++	/* Unbuffered output */
++	setvbuf(stdout, NULL, _IONBF, 0);
++	setlocale(LC_ALL, "");
++
+ 	if (argc < 2) {
+ 		/* No collection specified. */
+ 		print_usage();
+@@ -299,7 +303,6 @@ int cmd_bench(int argc, const char **argv)
+ 
+ 			if (bench_format == BENCH_FORMAT_DEFAULT)
+ 				printf("# Running '%s/%s' benchmark:\n", coll->name, bench->name);
+-			fflush(stdout);
+ 			ret = run_bench(coll->name, bench->name, bench->fn, argc-1, argv+1);
+ 			goto end;
+ 		}
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index 5109d01619eed..85befbacb2a44 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -295,8 +295,7 @@ static inline struct evsel_script *evsel_script(struct evsel *evsel)
+ 	return (struct evsel_script *)evsel->priv;
+ }
+ 
+-static struct evsel_script *perf_evsel_script__new(struct evsel *evsel,
+-							struct perf_data *data)
++static struct evsel_script *evsel_script__new(struct evsel *evsel, struct perf_data *data)
+ {
+ 	struct evsel_script *es = zalloc(sizeof(*es));
+ 
+@@ -316,7 +315,7 @@ out_free:
+ 	return NULL;
+ }
+ 
+-static void perf_evsel_script__delete(struct evsel_script *es)
++static void evsel_script__delete(struct evsel_script *es)
+ {
+ 	zfree(&es->filename);
+ 	fclose(es->fp);
+@@ -324,7 +323,7 @@ static void perf_evsel_script__delete(struct evsel_script *es)
+ 	free(es);
+ }
+ 
+-static int perf_evsel_script__fprintf(struct evsel_script *es, FILE *fp)
++static int evsel_script__fprintf(struct evsel_script *es, FILE *fp)
+ {
+ 	struct stat st;
+ 
+@@ -2147,6 +2146,9 @@ out_put:
+ 	return 0;
+ }
+ 
++// Used when scr->per_event_dump is not set
++static struct evsel_script es_stdout;
++
+ static int process_attr(struct perf_tool *tool, union perf_event *event,
+ 			struct evlist **pevlist)
+ {
+@@ -2155,7 +2157,6 @@ static int process_attr(struct perf_tool *tool, union perf_event *event,
+ 	struct evsel *evsel, *pos;
+ 	u64 sample_type;
+ 	int err;
+-	static struct evsel_script *es;
+ 
+ 	err = perf_event__process_attr(tool, event, pevlist);
+ 	if (err)
+@@ -2165,15 +2166,13 @@ static int process_attr(struct perf_tool *tool, union perf_event *event,
+ 	evsel = evlist__last(*pevlist);
+ 
+ 	if (!evsel->priv) {
+-		if (scr->per_event_dump) {
+-			evsel->priv = perf_evsel_script__new(evsel,
+-						scr->session->data);
+-		} else {
+-			es = zalloc(sizeof(*es));
+-			if (!es)
++		if (scr->per_event_dump) { 
++			evsel->priv = evsel_script__new(evsel, scr->session->data);
++			if (!evsel->priv)
+ 				return -ENOMEM;
+-			es->fp = stdout;
+-			evsel->priv = es;
++		} else { // Replicate what is done in perf_script__setup_per_event_dump()
++			es_stdout.fp = stdout;
++			evsel->priv = &es_stdout;
+ 		}
+ 	}
+ 
+@@ -2422,7 +2421,7 @@ static void perf_script__fclose_per_event_dump(struct perf_script *script)
+ 	evlist__for_each_entry(evlist, evsel) {
+ 		if (!evsel->priv)
+ 			break;
+-		perf_evsel_script__delete(evsel->priv);
++		evsel_script__delete(evsel->priv);
+ 		evsel->priv = NULL;
+ 	}
+ }
+@@ -2442,7 +2441,7 @@ static int perf_script__fopen_per_event_dump(struct perf_script *script)
+ 		if (evsel->priv != NULL)
+ 			continue;
+ 
+-		evsel->priv = perf_evsel_script__new(evsel, script->session->data);
++		evsel->priv = evsel_script__new(evsel, script->session->data);
+ 		if (evsel->priv == NULL)
+ 			goto out_err_fclose;
+ 	}
+@@ -2457,7 +2456,6 @@ out_err_fclose:
+ static int perf_script__setup_per_event_dump(struct perf_script *script)
+ {
+ 	struct evsel *evsel;
+-	static struct evsel_script es_stdout;
+ 
+ 	if (script->per_event_dump)
+ 		return perf_script__fopen_per_event_dump(script);
+@@ -2477,8 +2475,8 @@ static void perf_script__exit_per_event_dump_stats(struct perf_script *script)
+ 	evlist__for_each_entry(script->session->evlist, evsel) {
+ 		struct evsel_script *es = evsel->priv;
+ 
+-		perf_evsel_script__fprintf(es, stdout);
+-		perf_evsel_script__delete(es);
++		evsel_script__fprintf(es, stdout);
++		evsel_script__delete(es);
+ 		evsel->priv = NULL;
+ 	}
+ }
+diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
+index 132bdb3e6c31a..73c911dd0c2ca 100644
+--- a/tools/perf/tests/builtin-test.c
++++ b/tools/perf/tests/builtin-test.c
+@@ -793,6 +793,9 @@ int cmd_test(int argc, const char **argv)
+         if (ret < 0)
+                 return ret;
+ 
++	/* Unbuffered output */
++	setvbuf(stdout, NULL, _IONBF, 0);
++
+ 	argc = parse_options_subcommand(argc, argv, test_options, test_subcommands, test_usage, 0);
+ 	if (argc >= 1 && !strcmp(argv[0], "list"))
+ 		return perf_test__list(argc - 1, argv + 1);
+diff --git a/tools/perf/tests/shell/test_uprobe_from_different_cu.sh b/tools/perf/tests/shell/test_uprobe_from_different_cu.sh
+new file mode 100644
+index 0000000000000..00d2e0e2e0c28
+--- /dev/null
++++ b/tools/perf/tests/shell/test_uprobe_from_different_cu.sh
+@@ -0,0 +1,77 @@
++#!/bin/bash
++# test perf probe of function from different CU
++# SPDX-License-Identifier: GPL-2.0
++
++set -e
++
++temp_dir=$(mktemp -d /tmp/perf-uprobe-different-cu-sh.XXXXXXXXXX)
++
++cleanup()
++{
++	trap - EXIT TERM INT
++	if [[ "${temp_dir}" =~ ^/tmp/perf-uprobe-different-cu-sh.*$ ]]; then
++		echo "--- Cleaning up ---"
++		perf probe -x ${temp_dir}/testfile -d foo
++		rm -f "${temp_dir}/"*
++		rmdir "${temp_dir}"
++	fi
++}
++
++trap_cleanup()
++{
++        cleanup
++        exit 1
++}
++
++trap trap_cleanup EXIT TERM INT
++
++cat > ${temp_dir}/testfile-foo.h << EOF
++struct t
++{
++  int *p;
++  int c;
++};
++
++extern int foo (int i, struct t *t);
++EOF
++
++cat > ${temp_dir}/testfile-foo.c << EOF
++#include "testfile-foo.h"
++
++int
++foo (int i, struct t *t)
++{
++  int j, res = 0;
++  for (j = 0; j < i && j < t->c; j++)
++    res += t->p[j];
++
++  return res;
++}
++EOF
++
++cat > ${temp_dir}/testfile-main.c << EOF
++#include "testfile-foo.h"
++
++static struct t g;
++
++int
++main (int argc, char **argv)
++{
++  int i;
++  int j[argc];
++  g.c = argc;
++  g.p = j;
++  for (i = 0; i < argc; i++)
++    j[i] = (int) argv[i][0];
++  return foo (3, &g);
++}
++EOF
++
++gcc -g -Og -flto -c ${temp_dir}/testfile-foo.c -o ${temp_dir}/testfile-foo.o
++gcc -g -Og -c ${temp_dir}/testfile-main.c -o ${temp_dir}/testfile-main.o
++gcc -g -Og -o ${temp_dir}/testfile ${temp_dir}/testfile-foo.o ${temp_dir}/testfile-main.o
++
++perf probe -x ${temp_dir}/testfile --funcs foo
++perf probe -x ${temp_dir}/testfile foo
++
++cleanup
+diff --git a/tools/perf/util/dwarf-aux.c b/tools/perf/util/dwarf-aux.c
+index f8a10d5148f6f..443374a77c8dc 100644
+--- a/tools/perf/util/dwarf-aux.c
++++ b/tools/perf/util/dwarf-aux.c
+@@ -1081,7 +1081,7 @@ int die_get_varname(Dwarf_Die *vr_die, struct strbuf *buf)
+ 	ret = die_get_typename(vr_die, buf);
+ 	if (ret < 0) {
+ 		pr_debug("Failed to get type, make it unknown.\n");
+-		ret = strbuf_add(buf, " (unknown_type)", 14);
++		ret = strbuf_add(buf, "(unknown_type)", 14);
+ 	}
+ 
+ 	return ret < 0 ? ret : strbuf_addf(buf, "\t%s", dwarf_diename(vr_die));
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c b/tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c
+index 9a8f47fc0b91e..37c5494a0381b 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c
+@@ -2,6 +2,7 @@
+ /* Copyright (c) 2019 Facebook */
+ 
+ #include <linux/err.h>
++#include <netinet/tcp.h>
+ #include <test_progs.h>
+ #include "bpf_dctcp.skel.h"
+ #include "bpf_cubic.skel.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/cls_redirect.c b/tools/testing/selftests/bpf/prog_tests/cls_redirect.c
+index 9781d85cb2239..e075d03ab630a 100644
+--- a/tools/testing/selftests/bpf/prog_tests/cls_redirect.c
++++ b/tools/testing/selftests/bpf/prog_tests/cls_redirect.c
+@@ -7,6 +7,7 @@
+ #include <string.h>
+ 
+ #include <linux/pkt_cls.h>
++#include <netinet/tcp.h>
+ 
+ #include <test_progs.h>
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+index 85f73261fab0a..b8b48cac2ac3d 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ // Copyright (c) 2020 Cloudflare
+ #include <error.h>
++#include <netinet/tcp.h>
+ 
+ #include "test_progs.h"
+ #include "test_skmsg_load_helpers.skel.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c b/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
+index b25c9c45c1484..d5b44b135c00d 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
+@@ -2,6 +2,12 @@
+ #include <test_progs.h>
+ #include "cgroup_helpers.h"
+ 
++#include <linux/tcp.h>
++
++#ifndef SOL_TCP
++#define SOL_TCP IPPROTO_TCP
++#endif
++
+ #define SOL_CUSTOM			0xdeadbeef
+ 
+ static int getsetsockopt(void)
+@@ -11,6 +17,7 @@ static int getsetsockopt(void)
+ 		char u8[4];
+ 		__u32 u32;
+ 		char cc[16]; /* TCP_CA_NAME_MAX */
++		struct tcp_zerocopy_receive zc;
+ 	} buf = {};
+ 	socklen_t optlen;
+ 	char *big_buf = NULL;
+@@ -154,6 +161,27 @@ static int getsetsockopt(void)
+ 		goto err;
+ 	}
+ 
++	/* TCP_ZEROCOPY_RECEIVE triggers */
++	memset(&buf, 0, sizeof(buf));
++	optlen = sizeof(buf.zc);
++	err = getsockopt(fd, SOL_TCP, TCP_ZEROCOPY_RECEIVE, &buf, &optlen);
++	if (err) {
++		log_err("Unexpected getsockopt(TCP_ZEROCOPY_RECEIVE) err=%d errno=%d",
++			err, errno);
++		goto err;
++	}
++
++	memset(&buf, 0, sizeof(buf));
++	buf.zc.address = 12345; /* rejected by BPF */
++	optlen = sizeof(buf.zc);
++	errno = 0;
++	err = getsockopt(fd, SOL_TCP, TCP_ZEROCOPY_RECEIVE, &buf, &optlen);
++	if (errno != EPERM) {
++		log_err("Unexpected getsockopt(TCP_ZEROCOPY_RECEIVE) err=%d errno=%d",
++			err, errno);
++		goto err;
++	}
++
+ 	free(big_buf);
+ 	close(fd);
+ 	return 0;
+diff --git a/tools/testing/selftests/bpf/progs/sockopt_sk.c b/tools/testing/selftests/bpf/progs/sockopt_sk.c
+index 712df7b49cb1a..d3597f81e6e94 100644
+--- a/tools/testing/selftests/bpf/progs/sockopt_sk.c
++++ b/tools/testing/selftests/bpf/progs/sockopt_sk.c
+@@ -1,8 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <string.h>
+-#include <netinet/in.h>
+-#include <netinet/tcp.h>
++#include <linux/tcp.h>
+ #include <linux/bpf.h>
++#include <netinet/in.h>
+ #include <bpf/bpf_helpers.h>
+ 
+ char _license[] SEC("license") = "GPL";
+@@ -12,6 +12,10 @@ __u32 _version SEC("version") = 1;
+ #define PAGE_SIZE 4096
+ #endif
+ 
++#ifndef SOL_TCP
++#define SOL_TCP IPPROTO_TCP
++#endif
++
+ #define SOL_CUSTOM			0xdeadbeef
+ 
+ struct sockopt_sk {
+@@ -57,6 +61,21 @@ int _getsockopt(struct bpf_sockopt *ctx)
+ 		return 1;
+ 	}
+ 
++	if (ctx->level == SOL_TCP && ctx->optname == TCP_ZEROCOPY_RECEIVE) {
++		/* Verify that TCP_ZEROCOPY_RECEIVE triggers.
++		 * It has a custom implementation for performance
++		 * reasons.
++		 */
++
++		if (optval + sizeof(struct tcp_zerocopy_receive) > optval_end)
++			return 0; /* EPERM, bounds check */
++
++		if (((struct tcp_zerocopy_receive *)optval)->address != 0)
++			return 0; /* EPERM, unexpected data */
++
++		return 1;
++	}
++
+ 	if (ctx->level == SOL_IP && ctx->optname == IP_FREEBIND) {
+ 		if (optval + 1 > optval_end)
+ 			return 0; /* EPERM, bounds check */
+diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
+index 238f5f61189ee..1d429d67f8ddc 100644
+--- a/tools/testing/selftests/bpf/test_progs.h
++++ b/tools/testing/selftests/bpf/test_progs.h
+@@ -16,7 +16,6 @@ typedef __u16 __sum16;
+ #include <linux/if_packet.h>
+ #include <linux/ip.h>
+ #include <linux/ipv6.h>
+-#include <netinet/tcp.h>
+ #include <linux/filter.h>
+ #include <linux/perf_event.h>
+ #include <linux/socket.h>
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index 0fb92d9a319b7..961c17b4681e5 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -50,7 +50,7 @@
+ #define MAX_INSNS	BPF_MAXINSNS
+ #define MAX_TEST_INSNS	1000000
+ #define MAX_FIXUPS	8
+-#define MAX_NR_MAPS	20
++#define MAX_NR_MAPS	21
+ #define MAX_TEST_RUNS	8
+ #define POINTER_VALUE	0xcafe4all
+ #define TEST_DATA_LEN	64
+@@ -87,6 +87,7 @@ struct bpf_test {
+ 	int fixup_sk_storage_map[MAX_FIXUPS];
+ 	int fixup_map_event_output[MAX_FIXUPS];
+ 	int fixup_map_reuseport_array[MAX_FIXUPS];
++	int fixup_map_ringbuf[MAX_FIXUPS];
+ 	const char *errstr;
+ 	const char *errstr_unpriv;
+ 	uint32_t insn_processed;
+@@ -640,6 +641,7 @@ static void do_test_fixup(struct bpf_test *test, enum bpf_prog_type prog_type,
+ 	int *fixup_sk_storage_map = test->fixup_sk_storage_map;
+ 	int *fixup_map_event_output = test->fixup_map_event_output;
+ 	int *fixup_map_reuseport_array = test->fixup_map_reuseport_array;
++	int *fixup_map_ringbuf = test->fixup_map_ringbuf;
+ 
+ 	if (test->fill_helper) {
+ 		test->fill_insns = calloc(MAX_TEST_INSNS, sizeof(struct bpf_insn));
+@@ -817,6 +819,14 @@ static void do_test_fixup(struct bpf_test *test, enum bpf_prog_type prog_type,
+ 			fixup_map_reuseport_array++;
+ 		} while (*fixup_map_reuseport_array);
+ 	}
++	if (*fixup_map_ringbuf) {
++		map_fds[20] = create_map(BPF_MAP_TYPE_RINGBUF, 0,
++					   0, 4096);
++		do {
++			prog[*fixup_map_ringbuf].imm = map_fds[20];
++			fixup_map_ringbuf++;
++		} while (*fixup_map_ringbuf);
++	}
+ }
+ 
+ struct libcap {
+diff --git a/tools/testing/selftests/bpf/verifier/spill_fill.c b/tools/testing/selftests/bpf/verifier/spill_fill.c
+index 45d43bf82f269..0b943897aaf6c 100644
+--- a/tools/testing/selftests/bpf/verifier/spill_fill.c
++++ b/tools/testing/selftests/bpf/verifier/spill_fill.c
+@@ -28,6 +28,36 @@
+ 	.result = ACCEPT,
+ 	.result_unpriv = ACCEPT,
+ },
++{
++	"check valid spill/fill, ptr to mem",
++	.insns = {
++	/* reserve 8 byte ringbuf memory */
++	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
++	BPF_LD_MAP_FD(BPF_REG_1, 0),
++	BPF_MOV64_IMM(BPF_REG_2, 8),
++	BPF_MOV64_IMM(BPF_REG_3, 0),
++	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
++	/* store a pointer to the reserved memory in R6 */
++	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
++	/* check whether the reservation was successful */
++	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
++	/* spill R6(mem) into the stack */
++	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8),
++	/* fill it back in R7 */
++	BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_10, -8),
++	/* should be able to access *(R7) = 0 */
++	BPF_ST_MEM(BPF_DW, BPF_REG_7, 0, 0),
++	/* submit the reserved ringbuf memory */
++	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
++	BPF_MOV64_IMM(BPF_REG_2, 0),
++	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.fixup_map_ringbuf = { 1 },
++	.result = ACCEPT,
++	.result_unpriv = ACCEPT,
++},
+ {
+ 	"check corrupted spill/fill",
+ 	.insns = {
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index c3a905923ef29..cbf166df57da7 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -835,6 +835,7 @@ EOF
+ 	fi
+ 
+ 	# clean up any leftovers
++	echo 0 > /sys/bus/netdevsim/del_device
+ 	$probed && rmmod netdevsim
+ 
+ 	if [ $ret -ne 0 ]; then
+diff --git a/tools/testing/selftests/tc-testing/config b/tools/testing/selftests/tc-testing/config
+index b71828df5a6dd..5f581c3a11319 100644
+--- a/tools/testing/selftests/tc-testing/config
++++ b/tools/testing/selftests/tc-testing/config
+@@ -5,6 +5,7 @@ CONFIG_NF_CONNTRACK=m
+ CONFIG_NF_CONNTRACK_MARK=y
+ CONFIG_NF_CONNTRACK_ZONES=y
+ CONFIG_NF_CONNTRACK_LABELS=y
++CONFIG_NF_FLOW_TABLE=m
+ CONFIG_NF_NAT=m
+ 
+ CONFIG_NET_SCHED=y
+diff --git a/tools/testing/selftests/tc-testing/settings b/tools/testing/selftests/tc-testing/settings
+new file mode 100644
+index 0000000000000..e2206265f67c7
+--- /dev/null
++++ b/tools/testing/selftests/tc-testing/settings
+@@ -0,0 +1 @@
++timeout=900
+diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh
+index 8a9461aa0878a..93e44410f170e 100755
+--- a/tools/testing/selftests/wireguard/netns.sh
++++ b/tools/testing/selftests/wireguard/netns.sh
+@@ -502,10 +502,32 @@ n2 bash -c 'printf 0 > /proc/sys/net/ipv4/conf/all/rp_filter'
+ n1 ping -W 1 -c 1 192.168.241.2
+ [[ $(n2 wg show wg0 endpoints) == "$pub1	10.0.0.3:1" ]]
+ 
+-ip1 link del veth1
+-ip1 link del veth3
+-ip1 link del wg0
+-ip2 link del wg0
++ip1 link del dev veth3
++ip1 link del dev wg0
++ip2 link del dev wg0
++
++# Make sure persistent keep alives are sent when an adapter comes up
++ip1 link add dev wg0 type wireguard
++n1 wg set wg0 private-key <(echo "$key1") peer "$pub2" endpoint 10.0.0.1:1 persistent-keepalive 1
++read _ _ tx_bytes < <(n1 wg show wg0 transfer)
++[[ $tx_bytes -eq 0 ]]
++ip1 link set dev wg0 up
++read _ _ tx_bytes < <(n1 wg show wg0 transfer)
++[[ $tx_bytes -gt 0 ]]
++ip1 link del dev wg0
++# This should also happen even if the private key is set later
++ip1 link add dev wg0 type wireguard
++n1 wg set wg0 peer "$pub2" endpoint 10.0.0.1:1 persistent-keepalive 1
++read _ _ tx_bytes < <(n1 wg show wg0 transfer)
++[[ $tx_bytes -eq 0 ]]
++ip1 link set dev wg0 up
++read _ _ tx_bytes < <(n1 wg show wg0 transfer)
++[[ $tx_bytes -eq 0 ]]
++n1 wg set wg0 private-key <(echo "$key1")
++read _ _ tx_bytes < <(n1 wg show wg0 transfer)
++[[ $tx_bytes -gt 0 ]]
++ip1 link del dev veth1
++ip1 link del dev wg0
+ 
+ # We test that Netlink/IPC is working properly by doing things that usually cause split responses
+ ip0 link add dev wg0 type wireguard


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-08-08 18:42 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-08-08 18:42 UTC (permalink / raw
  To: gentoo-commits

commit:     178e5b69267ea201ea26ef487ad8a94f58c34f0f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug  8 18:41:45 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug  8 18:41:45 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=178e5b69

Linux patch 5.10.189

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1188_linux-5.10.189.patch | 2944 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2948 insertions(+)

diff --git a/0000_README b/0000_README
index 1cb8b638..9f251533 100644
--- a/0000_README
+++ b/0000_README
@@ -795,6 +795,10 @@ Patch:  1187_linux-5.10.188.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.188
 
+Patch:  1188_linux-5.10.189.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.189
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1188_linux-5.10.189.patch b/1188_linux-5.10.189.patch
new file mode 100644
index 00000000..ed7e42d8
--- /dev/null
+++ b/1188_linux-5.10.189.patch
@@ -0,0 +1,2944 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 500d5d8937cbb..bfb4f4fada337 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -501,17 +501,18 @@ Description:	information about CPUs heterogeneity.
+ 		cpu_capacity: capacity of cpu#.
+ 
+ What:		/sys/devices/system/cpu/vulnerabilities
++		/sys/devices/system/cpu/vulnerabilities/gather_data_sampling
++		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
++		/sys/devices/system/cpu/vulnerabilities/l1tf
++		/sys/devices/system/cpu/vulnerabilities/mds
+ 		/sys/devices/system/cpu/vulnerabilities/meltdown
++		/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
++		/sys/devices/system/cpu/vulnerabilities/retbleed
++		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v1
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v2
+-		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
+-		/sys/devices/system/cpu/vulnerabilities/l1tf
+-		/sys/devices/system/cpu/vulnerabilities/mds
+ 		/sys/devices/system/cpu/vulnerabilities/srbds
+ 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+-		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
+-		/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
+-		/sys/devices/system/cpu/vulnerabilities/retbleed
+ Date:		January 2018
+ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description:	Information about CPU vulnerabilities
+diff --git a/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst b/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst
+new file mode 100644
+index 0000000000000..264bfa937f7de
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst
+@@ -0,0 +1,109 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++GDS - Gather Data Sampling
++==========================
++
++Gather Data Sampling is a hardware vulnerability which allows unprivileged
++speculative access to data which was previously stored in vector registers.
++
++Problem
++-------
++When a gather instruction performs loads from memory, different data elements
++are merged into the destination vector register. However, when a gather
++instruction that is transiently executed encounters a fault, stale data from
++architectural or internal vector registers may get transiently forwarded to the
++destination vector register instead. This will allow a malicious attacker to
++infer stale data using typical side channel techniques like cache timing
++attacks. GDS is a purely sampling-based attack.
++
++The attacker uses gather instructions to infer the stale vector register data.
++The victim does not need to do anything special other than use the vector
++registers. The victim does not need to use gather instructions to be
++vulnerable.
++
++Because the buffers are shared between Hyper-Threads cross Hyper-Thread attacks
++are possible.
++
++Attack scenarios
++----------------
++Without mitigation, GDS can infer stale data across virtually all
++permission boundaries:
++
++	Non-enclaves can infer SGX enclave data
++	Userspace can infer kernel data
++	Guests can infer data from hosts
++	Guest can infer guest from other guests
++	Users can infer data from other users
++
++Because of this, it is important to ensure that the mitigation stays enabled in
++lower-privilege contexts like guests and when running outside SGX enclaves.
++
++The hardware enforces the mitigation for SGX. Likewise, VMMs should  ensure
++that guests are not allowed to disable the GDS mitigation. If a host erred and
++allowed this, a guest could theoretically disable GDS mitigation, mount an
++attack, and re-enable it.
++
++Mitigation mechanism
++--------------------
++This issue is mitigated in microcode. The microcode defines the following new
++bits:
++
++ ================================   ===   ============================
++ IA32_ARCH_CAPABILITIES[GDS_CTRL]   R/O   Enumerates GDS vulnerability
++                                          and mitigation support.
++ IA32_ARCH_CAPABILITIES[GDS_NO]     R/O   Processor is not vulnerable.
++ IA32_MCU_OPT_CTRL[GDS_MITG_DIS]    R/W   Disables the mitigation
++                                          0 by default.
++ IA32_MCU_OPT_CTRL[GDS_MITG_LOCK]   R/W   Locks GDS_MITG_DIS=0. Writes
++                                          to GDS_MITG_DIS are ignored
++                                          Can't be cleared once set.
++ ================================   ===   ============================
++
++GDS can also be mitigated on systems that don't have updated microcode by
++disabling AVX. This can be done by setting gather_data_sampling="force" or
++"clearcpuid=avx" on the kernel command-line.
++
++If used, these options will disable AVX use by turning off XSAVE YMM support.
++However, the processor will still enumerate AVX support.  Userspace that
++does not follow proper AVX enumeration to check both AVX *and* XSAVE YMM
++support will break.
++
++Mitigation control on the kernel command line
++---------------------------------------------
++The mitigation can be disabled by setting "gather_data_sampling=off" or
++"mitigations=off" on the kernel command line. Not specifying either will default
++to the mitigation being enabled. Specifying "gather_data_sampling=force" will
++use the microcode mitigation when available or disable AVX on affected systems
++where the microcode hasn't been updated to include the mitigation.
++
++GDS System Information
++------------------------
++The kernel provides vulnerability status information through sysfs. For
++GDS this can be accessed by the following sysfs file:
++
++/sys/devices/system/cpu/vulnerabilities/gather_data_sampling
++
++The possible values contained in this file are:
++
++ ============================== =============================================
++ Not affected                   Processor not vulnerable.
++ Vulnerable                     Processor vulnerable and mitigation disabled.
++ Vulnerable: No microcode       Processor vulnerable and microcode is missing
++                                mitigation.
++ Mitigation: AVX disabled,
++ no microcode                   Processor is vulnerable and microcode is missing
++                                mitigation. AVX disabled as mitigation.
++ Mitigation: Microcode          Processor is vulnerable and mitigation is in
++                                effect.
++ Mitigation: Microcode (locked) Processor is vulnerable and mitigation is in
++                                effect and cannot be disabled.
++ Unknown: Dependent on
++ hypervisor status              Running on a virtual guest processor that is
++                                affected but with no way to know if host
++                                processor is mitigated or vulnerable.
++ ============================== =============================================
++
++GDS Default mitigation
++----------------------
++The updated microcode will enable the mitigation by default. The kernel's
++default action is to leave the mitigation enabled.
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index 2adec1e6520a6..84742be223ff8 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -16,3 +16,5 @@ are configurable at compile, boot or run time.
+    multihit.rst
+    special-register-buffer-data-sampling.rst
+    processor_mmio_stale_data.rst
++   gather_data_sampling.rst
++   srso
+diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst
+new file mode 100644
+index 0000000000000..2f923c805802f
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/srso.rst
+@@ -0,0 +1,133 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++Speculative Return Stack Overflow (SRSO)
++========================================
++
++This is a mitigation for the speculative return stack overflow (SRSO)
++vulnerability found on AMD processors. The mechanism is by now the well
++known scenario of poisoning CPU functional units - the Branch Target
++Buffer (BTB) and Return Address Predictor (RAP) in this case - and then
++tricking the elevated privilege domain (the kernel) into leaking
++sensitive data.
++
++AMD CPUs predict RET instructions using a Return Address Predictor (aka
++Return Address Stack/Return Stack Buffer). In some cases, a non-architectural
++CALL instruction (i.e., an instruction predicted to be a CALL but is
++not actually a CALL) can create an entry in the RAP which may be used
++to predict the target of a subsequent RET instruction.
++
++The specific circumstances that lead to this varies by microarchitecture
++but the concern is that an attacker can mis-train the CPU BTB to predict
++non-architectural CALL instructions in kernel space and use this to
++control the speculative target of a subsequent kernel RET, potentially
++leading to information disclosure via a speculative side-channel.
++
++The issue is tracked under CVE-2023-20569.
++
++Affected processors
++-------------------
++
++AMD Zen, generations 1-4. That is, all families 0x17 and 0x19. Older
++processors have not been investigated.
++
++System information and options
++------------------------------
++
++First of all, it is required that the latest microcode be loaded for
++mitigations to be effective.
++
++The sysfs file showing SRSO mitigation status is:
++
++  /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow
++
++The possible values in this file are:
++
++ - 'Not affected'               The processor is not vulnerable
++
++ - 'Vulnerable: no microcode'   The processor is vulnerable, no
++                                microcode extending IBPB functionality
++                                to address the vulnerability has been
++                                applied.
++
++ - 'Mitigation: microcode'      Extended IBPB functionality microcode
++                                patch has been applied. It does not
++                                address User->Kernel and Guest->Host
++                                transitions protection but it does
++                                address User->User and VM->VM attack
++                                vectors.
++
++                                (spec_rstack_overflow=microcode)
++
++ - 'Mitigation: safe RET'       Software-only mitigation. It complements
++                                the extended IBPB microcode patch
++                                functionality by addressing User->Kernel
++                                and Guest->Host transitions protection.
++
++                                Selected by default or by
++                                spec_rstack_overflow=safe-ret
++
++ - 'Mitigation: IBPB'           Similar protection as "safe RET" above
++                                but employs an IBPB barrier on privilege
++                                domain crossings (User->Kernel,
++                                Guest->Host).
++
++                                (spec_rstack_overflow=ibpb)
++
++ - 'Mitigation: IBPB on VMEXIT' Mitigation addressing the cloud provider
++                                scenario - the Guest->Host transitions
++                                only.
++
++                                (spec_rstack_overflow=ibpb-vmexit)
++
++In order to exploit vulnerability, an attacker needs to:
++
++ - gain local access on the machine
++
++ - break kASLR
++
++ - find gadgets in the running kernel in order to use them in the exploit
++
++ - potentially create and pin an additional workload on the sibling
++   thread, depending on the microarchitecture (not necessary on fam 0x19)
++
++ - run the exploit
++
++Considering the performance implications of each mitigation type, the
++default one is 'Mitigation: safe RET' which should take care of most
++attack vectors, including the local User->Kernel one.
++
++As always, the user is advised to keep her/his system up-to-date by
++applying software updates regularly.
++
++The default setting will be reevaluated when needed and especially when
++new attack vectors appear.
++
++As one can surmise, 'Mitigation: safe RET' does come at the cost of some
++performance depending on the workload. If one trusts her/his userspace
++and does not want to suffer the performance impact, one can always
++disable the mitigation with spec_rstack_overflow=off.
++
++Similarly, 'Mitigation: IBPB' is another full mitigation type employing
++an indrect branch prediction barrier after having applied the required
++microcode patch for one's system. This mitigation comes also at
++a performance cost.
++
++Mitigation: safe RET
++--------------------
++
++The mitigation works by ensuring all RET instructions speculate to
++a controlled location, similar to how speculation is controlled in the
++retpoline sequence.  To accomplish this, the __x86_return_thunk forces
++the CPU to mispredict every function return using a 'safe return'
++sequence.
++
++To ensure the safety of this mitigation, the kernel must ensure that the
++safe return sequence is itself free from attacker interference.  In Zen3
++and Zen4, this is accomplished by creating a BTB alias between the
++untraining function srso_untrain_ret_alias() and the safe return
++function srso_safe_ret_alias() which results in evicting a potentially
++poisoned BTB entry and using that safe one for all function returns.
++
++In older Zen1 and Zen2, this is accomplished using a reinterpretation
++technique similar to Retbleed one: srso_untrain_ret() and
++srso_safe_ret().
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 1f4a0523dc1ab..f1f7c068cf65b 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1447,6 +1447,26 @@
+ 			Format: off | on
+ 			default: on
+ 
++	gather_data_sampling=
++			[X86,INTEL] Control the Gather Data Sampling (GDS)
++			mitigation.
++
++			Gather Data Sampling is a hardware vulnerability which
++			allows unprivileged speculative access to data which was
++			previously stored in vector registers.
++
++			This issue is mitigated by default in updated microcode.
++			The mitigation may have a performance impact but can be
++			disabled. On systems without the microcode mitigation
++			disabling AVX serves as a mitigation.
++
++			force:	Disable AVX to mitigate systems without
++				microcode mitigation. No effect if the microcode
++				mitigation is present. Known to cause crashes in
++				userspace with buggy AVX enumeration.
++
++			off:    Disable GDS mitigation.
++
+ 	gcov_persist=	[GCOV] When non-zero (default), profiling data for
+ 			kernel modules is saved and remains accessible via
+ 			debugfs, even when the module is unloaded/reloaded.
+@@ -2887,22 +2907,23 @@
+ 				Disable all optional CPU mitigations.  This
+ 				improves system performance, but it may also
+ 				expose users to several CPU vulnerabilities.
+-				Equivalent to: nopti [X86,PPC]
++				Equivalent to: gather_data_sampling=off [X86]
+ 					       kpti=0 [ARM64]
+-					       nospectre_v1 [X86,PPC]
+-					       nobp=0 [S390]
+-					       nospectre_v2 [X86,PPC,S390,ARM64]
+-					       spectre_v2_user=off [X86]
+-					       spec_store_bypass_disable=off [X86,PPC]
+-					       ssbd=force-off [ARM64]
++					       kvm.nx_huge_pages=off [X86]
+ 					       l1tf=off [X86]
+ 					       mds=off [X86]
+-					       tsx_async_abort=off [X86]
+-					       kvm.nx_huge_pages=off [X86]
++					       mmio_stale_data=off [X86]
+ 					       no_entry_flush [PPC]
+ 					       no_uaccess_flush [PPC]
+-					       mmio_stale_data=off [X86]
++					       nobp=0 [S390]
++					       nopti [X86,PPC]
++					       nospectre_v1 [X86,PPC]
++					       nospectre_v2 [X86,PPC,S390,ARM64]
+ 					       retbleed=off [X86]
++					       spec_store_bypass_disable=off [X86,PPC]
++					       spectre_v2_user=off [X86]
++					       ssbd=force-off [ARM64]
++					       tsx_async_abort=off [X86]
+ 
+ 				Exceptions:
+ 					       This does not have any effect on
+@@ -5120,6 +5141,17 @@
+ 			Not specifying this option is equivalent to
+ 			spectre_v2_user=auto.
+ 
++	spec_rstack_overflow=
++			[X86] Control RAS overflow mitigation on AMD Zen CPUs
++
++			off		- Disable mitigation
++			microcode	- Enable microcode mitigation only
++			safe-ret	- Enable sw-only safe RET mitigation (default)
++			ibpb		- Enable mitigation by issuing IBPB on
++					  kernel entry
++			ibpb-vmexit	- Issue IBPB only on VMEXIT
++					  (cloud-specific mitigation)
++
+ 	spec_store_bypass_disable=
+ 			[HW] Control Speculative Store Bypass (SSB) Disable mitigation
+ 			(Speculative Store Bypass vulnerability)
+diff --git a/Makefile b/Makefile
+index 7f9e1667d87d8..36047436fae33 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 188
++SUBLEVEL = 189
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 632d60e13494c..240277d5626c8 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -290,6 +290,9 @@ config ARCH_HAS_DMA_SET_UNCACHED
+ config ARCH_HAS_DMA_CLEAR_UNCACHED
+ 	bool
+ 
++config ARCH_HAS_CPU_FINALIZE_INIT
++	bool
++
+ # Select if arch init_task must go in the __init_task_data section
+ config ARCH_TASK_STRUCT_ON_STACK
+ 	bool
+diff --git a/arch/alpha/include/asm/bugs.h b/arch/alpha/include/asm/bugs.h
+deleted file mode 100644
+index 78030d1c7e7e0..0000000000000
+--- a/arch/alpha/include/asm/bugs.h
++++ /dev/null
+@@ -1,20 +0,0 @@
+-/*
+- *  include/asm-alpha/bugs.h
+- *
+- *  Copyright (C) 1994  Linus Torvalds
+- */
+-
+-/*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Needs:
+- *	void check_bugs(void);
+- */
+-
+-/*
+- * I don't know of any alpha bugs yet.. Nice chip
+- */
+-
+-static void check_bugs(void)
+-{
+-}
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 985ab0b091a6a..335308aff6ce0 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -4,6 +4,7 @@ config ARM
+ 	default y
+ 	select ARCH_32BIT_OFF_T
+ 	select ARCH_HAS_BINFMT_FLAT
++	select ARCH_HAS_CPU_FINALIZE_INIT if MMU
+ 	select ARCH_HAS_DEBUG_VIRTUAL if MMU
+ 	select ARCH_HAS_DEVMEM_IS_ALLOWED
+ 	select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE
+diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h
+index 97a312ba08401..fe385551edeca 100644
+--- a/arch/arm/include/asm/bugs.h
++++ b/arch/arm/include/asm/bugs.h
+@@ -1,7 +1,5 @@
+ /* SPDX-License-Identifier: GPL-2.0-only */
+ /*
+- *  arch/arm/include/asm/bugs.h
+- *
+  *  Copyright (C) 1995-2003 Russell King
+  */
+ #ifndef __ASM_BUGS_H
+@@ -10,10 +8,8 @@
+ extern void check_writebuffer_bugs(void);
+ 
+ #ifdef CONFIG_MMU
+-extern void check_bugs(void);
+ extern void check_other_bugs(void);
+ #else
+-#define check_bugs() do { } while (0)
+ #define check_other_bugs() do { } while (0)
+ #endif
+ 
+diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
+index 14c8dbbb7d2df..087bce6ec8e9b 100644
+--- a/arch/arm/kernel/bugs.c
++++ b/arch/arm/kernel/bugs.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/init.h>
++#include <linux/cpu.h>
+ #include <asm/bugs.h>
+ #include <asm/proc-fns.h>
+ 
+@@ -11,7 +12,7 @@ void check_other_bugs(void)
+ #endif
+ }
+ 
+-void __init check_bugs(void)
++void __init arch_cpu_finalize_init(void)
+ {
+ 	check_writebuffer_bugs();
+ 	check_other_bugs();
+diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
+index 1d0579bc9d655..975b00b6034b6 100644
+--- a/arch/ia64/Kconfig
++++ b/arch/ia64/Kconfig
+@@ -8,6 +8,7 @@ menu "Processor type and features"
+ 
+ config IA64
+ 	bool
++	select ARCH_HAS_CPU_FINALIZE_INIT
+ 	select ARCH_HAS_DMA_MARK_CLEAN
+ 	select ARCH_MIGHT_HAVE_PC_PARPORT
+ 	select ARCH_MIGHT_HAVE_PC_SERIO
+diff --git a/arch/ia64/include/asm/bugs.h b/arch/ia64/include/asm/bugs.h
+deleted file mode 100644
+index 0d6b9bded56c6..0000000000000
+--- a/arch/ia64/include/asm/bugs.h
++++ /dev/null
+@@ -1,20 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Needs:
+- *	void check_bugs(void);
+- *
+- * Based on <asm-alpha/bugs.h>.
+- *
+- * Modified 1998, 1999, 2003
+- *	David Mosberger-Tang <davidm@hpl.hp.com>,  Hewlett-Packard Co.
+- */
+-#ifndef _ASM_IA64_BUGS_H
+-#define _ASM_IA64_BUGS_H
+-
+-#include <asm/processor.h>
+-
+-extern void check_bugs (void);
+-
+-#endif /* _ASM_IA64_BUGS_H */
+diff --git a/arch/ia64/kernel/setup.c b/arch/ia64/kernel/setup.c
+index dd595fbd80065..e7dd6756d5cb7 100644
+--- a/arch/ia64/kernel/setup.c
++++ b/arch/ia64/kernel/setup.c
+@@ -1071,8 +1071,7 @@ cpu_init (void)
+ 	}
+ }
+ 
+-void __init
+-check_bugs (void)
++void __init arch_cpu_finalize_init(void)
+ {
+ 	ia64_patch_mckinley_e9((unsigned long) __start___mckinley_e9_bundles,
+ 			       (unsigned long) __end___mckinley_e9_bundles);
+diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
+index 372e4e69c43ac..ad7a4ffd65c97 100644
+--- a/arch/m68k/Kconfig
++++ b/arch/m68k/Kconfig
+@@ -4,6 +4,7 @@ config M68K
+ 	default y
+ 	select ARCH_32BIT_OFF_T
+ 	select ARCH_HAS_BINFMT_FLAT
++	select ARCH_HAS_CPU_FINALIZE_INIT if MMU
+ 	select ARCH_HAS_DMA_PREP_COHERENT if HAS_DMA && MMU && !COLDFIRE
+ 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE if HAS_DMA
+ 	select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS
+diff --git a/arch/m68k/include/asm/bugs.h b/arch/m68k/include/asm/bugs.h
+deleted file mode 100644
+index 745530651e0bf..0000000000000
+--- a/arch/m68k/include/asm/bugs.h
++++ /dev/null
+@@ -1,21 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- *  include/asm-m68k/bugs.h
+- *
+- *  Copyright (C) 1994  Linus Torvalds
+- */
+-
+-/*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Needs:
+- *	void check_bugs(void);
+- */
+-
+-#ifdef CONFIG_MMU
+-extern void check_bugs(void);	/* in arch/m68k/kernel/setup.c */
+-#else
+-static void check_bugs(void)
+-{
+-}
+-#endif
+diff --git a/arch/m68k/kernel/setup_mm.c b/arch/m68k/kernel/setup_mm.c
+index ab8aa7be260f3..f8116660417fa 100644
+--- a/arch/m68k/kernel/setup_mm.c
++++ b/arch/m68k/kernel/setup_mm.c
+@@ -10,6 +10,7 @@
+  */
+ 
+ #include <linux/kernel.h>
++#include <linux/cpu.h>
+ #include <linux/mm.h>
+ #include <linux/sched.h>
+ #include <linux/delay.h>
+@@ -523,7 +524,7 @@ static int __init proc_hardware_init(void)
+ module_init(proc_hardware_init);
+ #endif
+ 
+-void check_bugs(void)
++void __init arch_cpu_finalize_init(void)
+ {
+ #if defined(CONFIG_FPU) && !defined(CONFIG_M68KFPU_EMU)
+ 	if (m68k_fputype == 0) {
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 3442bdd4314cb..57839f63074f7 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -4,6 +4,7 @@ config MIPS
+ 	default y
+ 	select ARCH_32BIT_OFF_T if !64BIT
+ 	select ARCH_BINFMT_ELF_STATE if MIPS_FP_SUPPORT
++	select ARCH_HAS_CPU_FINALIZE_INIT
+ 	select ARCH_HAS_FORTIFY_SOURCE
+ 	select ARCH_HAS_KCOV
+ 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE if !EVA
+diff --git a/arch/mips/include/asm/bugs.h b/arch/mips/include/asm/bugs.h
+index d72dc6e1cf3cd..8d4cf29861b87 100644
+--- a/arch/mips/include/asm/bugs.h
++++ b/arch/mips/include/asm/bugs.h
+@@ -1,17 +1,11 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ /*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+  * Copyright (C) 2007  Maciej W. Rozycki
+- *
+- * Needs:
+- *	void check_bugs(void);
+  */
+ #ifndef _ASM_BUGS_H
+ #define _ASM_BUGS_H
+ 
+ #include <linux/bug.h>
+-#include <linux/delay.h>
+ #include <linux/smp.h>
+ 
+ #include <asm/cpu.h>
+@@ -30,17 +24,6 @@ static inline void check_bugs_early(void)
+ 		check_bugs64_early();
+ }
+ 
+-static inline void check_bugs(void)
+-{
+-	unsigned int cpu = smp_processor_id();
+-
+-	cpu_data[cpu].udelay_val = loops_per_jiffy;
+-	check_bugs32();
+-
+-	if (IS_ENABLED(CONFIG_CPU_R4X00_BUGS64))
+-		check_bugs64();
+-}
+-
+ static inline int r4k_daddiu_bug(void)
+ {
+ 	if (!IS_ENABLED(CONFIG_CPU_R4X00_BUGS64))
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index b49ad569e4690..b7eb7dd96e179 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -11,6 +11,8 @@
+  * Copyright (C) 2000, 2001, 2002, 2007	 Maciej W. Rozycki
+  */
+ #include <linux/init.h>
++#include <linux/cpu.h>
++#include <linux/delay.h>
+ #include <linux/ioport.h>
+ #include <linux/export.h>
+ #include <linux/screen_info.h>
+@@ -829,3 +831,14 @@ static int __init setnocoherentio(char *str)
+ }
+ early_param("nocoherentio", setnocoherentio);
+ #endif
++
++void __init arch_cpu_finalize_init(void)
++{
++	unsigned int cpu = smp_processor_id();
++
++	cpu_data[cpu].udelay_val = loops_per_jiffy;
++	check_bugs32();
++
++	if (IS_ENABLED(CONFIG_CPU_R4X00_BUGS64))
++		check_bugs64();
++}
+diff --git a/arch/parisc/include/asm/bugs.h b/arch/parisc/include/asm/bugs.h
+deleted file mode 100644
+index 0a7f9db6bd1c7..0000000000000
+--- a/arch/parisc/include/asm/bugs.h
++++ /dev/null
+@@ -1,20 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- *  include/asm-parisc/bugs.h
+- *
+- *  Copyright (C) 1999	Mike Shaver
+- */
+-
+-/*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Needs:
+- *	void check_bugs(void);
+- */
+-
+-#include <asm/processor.h>
+-
+-static inline void check_bugs(void)
+-{
+-//	identify_cpu(&boot_cpu_data);
+-}
+diff --git a/arch/powerpc/include/asm/bugs.h b/arch/powerpc/include/asm/bugs.h
+deleted file mode 100644
+index 01b8f6ca4dbbc..0000000000000
+--- a/arch/powerpc/include/asm/bugs.h
++++ /dev/null
+@@ -1,15 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-or-later */
+-#ifndef _ASM_POWERPC_BUGS_H
+-#define _ASM_POWERPC_BUGS_H
+-
+-/*
+- */
+-
+-/*
+- * This file is included by 'init/main.c' to check for
+- * architecture-dependent bugs.
+- */
+-
+-static inline void check_bugs(void) { }
+-
+-#endif	/* _ASM_POWERPC_BUGS_H */
+diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
+index b6f3d49991d37..44dffe7ce50ad 100644
+--- a/arch/sh/Kconfig
++++ b/arch/sh/Kconfig
+@@ -5,6 +5,7 @@ config SUPERH
+ 	select ARCH_HAVE_CUSTOM_GPIO_H
+ 	select ARCH_HAVE_NMI_SAFE_CMPXCHG if (GUSA_RB || CPU_SH4A)
+ 	select ARCH_HAS_BINFMT_FLAT if !MMU
++	select ARCH_HAS_CPU_FINALIZE_INIT
+ 	select ARCH_HAS_GIGANTIC_PAGE
+ 	select ARCH_HAS_GCOV_PROFILE_ALL
+ 	select ARCH_HAS_PTE_SPECIAL
+diff --git a/arch/sh/include/asm/bugs.h b/arch/sh/include/asm/bugs.h
+deleted file mode 100644
+index fe52abb69cea3..0000000000000
+--- a/arch/sh/include/asm/bugs.h
++++ /dev/null
+@@ -1,74 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef __ASM_SH_BUGS_H
+-#define __ASM_SH_BUGS_H
+-
+-/*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Needs:
+- *	void check_bugs(void);
+- */
+-
+-/*
+- * I don't know of any Super-H bugs yet.
+- */
+-
+-#include <asm/processor.h>
+-
+-extern void select_idle_routine(void);
+-
+-static void __init check_bugs(void)
+-{
+-	extern unsigned long loops_per_jiffy;
+-	char *p = &init_utsname()->machine[2]; /* "sh" */
+-
+-	select_idle_routine();
+-
+-	current_cpu_data.loops_per_jiffy = loops_per_jiffy;
+-
+-	switch (current_cpu_data.family) {
+-	case CPU_FAMILY_SH2:
+-		*p++ = '2';
+-		break;
+-	case CPU_FAMILY_SH2A:
+-		*p++ = '2';
+-		*p++ = 'a';
+-		break;
+-	case CPU_FAMILY_SH3:
+-		*p++ = '3';
+-		break;
+-	case CPU_FAMILY_SH4:
+-		*p++ = '4';
+-		break;
+-	case CPU_FAMILY_SH4A:
+-		*p++ = '4';
+-		*p++ = 'a';
+-		break;
+-	case CPU_FAMILY_SH4AL_DSP:
+-		*p++ = '4';
+-		*p++ = 'a';
+-		*p++ = 'l';
+-		*p++ = '-';
+-		*p++ = 'd';
+-		*p++ = 's';
+-		*p++ = 'p';
+-		break;
+-	case CPU_FAMILY_UNKNOWN:
+-		/*
+-		 * Specifically use CPU_FAMILY_UNKNOWN rather than
+-		 * default:, so we're able to have the compiler whine
+-		 * about unhandled enumerations.
+-		 */
+-		break;
+-	}
+-
+-	printk("CPU: %s\n", get_cpu_subtype(&current_cpu_data));
+-
+-#ifndef __LITTLE_ENDIAN__
+-	/* 'eb' means 'Endian Big' */
+-	*p++ = 'e';
+-	*p++ = 'b';
+-#endif
+-	*p = '\0';
+-}
+-#endif /* __ASM_SH_BUGS_H */
+diff --git a/arch/sh/include/asm/processor.h b/arch/sh/include/asm/processor.h
+index 3820d698846e0..97af2d9b02693 100644
+--- a/arch/sh/include/asm/processor.h
++++ b/arch/sh/include/asm/processor.h
+@@ -167,6 +167,8 @@ extern unsigned int instruction_size(unsigned int insn);
+ #define instruction_size(insn)	(2)
+ #endif
+ 
++void select_idle_routine(void);
++
+ #endif /* __ASSEMBLY__ */
+ 
+ #include <asm/processor_32.h>
+diff --git a/arch/sh/kernel/idle.c b/arch/sh/kernel/idle.c
+index f59814983bd59..a80b2a5b25c7f 100644
+--- a/arch/sh/kernel/idle.c
++++ b/arch/sh/kernel/idle.c
+@@ -14,6 +14,7 @@
+ #include <linux/irqflags.h>
+ #include <linux/smp.h>
+ #include <linux/atomic.h>
++#include <asm/processor.h>
+ #include <asm/smp.h>
+ #include <asm/bl_bit.h>
+ 
+diff --git a/arch/sh/kernel/setup.c b/arch/sh/kernel/setup.c
+index 556e463a43d22..805c02eef5ace 100644
+--- a/arch/sh/kernel/setup.c
++++ b/arch/sh/kernel/setup.c
+@@ -43,6 +43,7 @@
+ #include <asm/smp.h>
+ #include <asm/mmu_context.h>
+ #include <asm/mmzone.h>
++#include <asm/processor.h>
+ #include <asm/sparsemem.h>
+ #include <asm/platform_early.h>
+ 
+@@ -357,3 +358,57 @@ int test_mode_pin(int pin)
+ {
+ 	return sh_mv.mv_mode_pins() & pin;
+ }
++
++void __init arch_cpu_finalize_init(void)
++{
++	char *p = &init_utsname()->machine[2]; /* "sh" */
++
++	select_idle_routine();
++
++	current_cpu_data.loops_per_jiffy = loops_per_jiffy;
++
++	switch (current_cpu_data.family) {
++	case CPU_FAMILY_SH2:
++		*p++ = '2';
++		break;
++	case CPU_FAMILY_SH2A:
++		*p++ = '2';
++		*p++ = 'a';
++		break;
++	case CPU_FAMILY_SH3:
++		*p++ = '3';
++		break;
++	case CPU_FAMILY_SH4:
++		*p++ = '4';
++		break;
++	case CPU_FAMILY_SH4A:
++		*p++ = '4';
++		*p++ = 'a';
++		break;
++	case CPU_FAMILY_SH4AL_DSP:
++		*p++ = '4';
++		*p++ = 'a';
++		*p++ = 'l';
++		*p++ = '-';
++		*p++ = 'd';
++		*p++ = 's';
++		*p++ = 'p';
++		break;
++	case CPU_FAMILY_UNKNOWN:
++		/*
++		 * Specifically use CPU_FAMILY_UNKNOWN rather than
++		 * default:, so we're able to have the compiler whine
++		 * about unhandled enumerations.
++		 */
++		break;
++	}
++
++	pr_info("CPU: %s\n", get_cpu_subtype(&current_cpu_data));
++
++#ifndef __LITTLE_ENDIAN__
++	/* 'eb' means 'Endian Big' */
++	*p++ = 'e';
++	*p++ = 'b';
++#endif
++	*p = '\0';
++}
+diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
+index b5ed893420591..7e2ce4c8d6571 100644
+--- a/arch/sparc/Kconfig
++++ b/arch/sparc/Kconfig
+@@ -56,6 +56,7 @@ config SPARC
+ config SPARC32
+ 	def_bool !64BIT
+ 	select ARCH_32BIT_OFF_T
++	select ARCH_HAS_CPU_FINALIZE_INIT if !SMP
+ 	select ARCH_HAS_SYNC_DMA_FOR_CPU
+ 	select GENERIC_ATOMIC64
+ 	select CLZ_TAB
+diff --git a/arch/sparc/include/asm/bugs.h b/arch/sparc/include/asm/bugs.h
+deleted file mode 100644
+index 02fa369b9c21f..0000000000000
+--- a/arch/sparc/include/asm/bugs.h
++++ /dev/null
+@@ -1,18 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/* include/asm/bugs.h:  Sparc probes for various bugs.
+- *
+- * Copyright (C) 1996, 2007 David S. Miller (davem@davemloft.net)
+- */
+-
+-#ifdef CONFIG_SPARC32
+-#include <asm/cpudata.h>
+-#endif
+-
+-extern unsigned long loops_per_jiffy;
+-
+-static void __init check_bugs(void)
+-{
+-#if defined(CONFIG_SPARC32) && !defined(CONFIG_SMP)
+-	cpu_data(0).udelay_val = loops_per_jiffy;
+-#endif
+-}
+diff --git a/arch/sparc/kernel/setup_32.c b/arch/sparc/kernel/setup_32.c
+index eea43a1aef1b9..8e2c7c1aad83a 100644
+--- a/arch/sparc/kernel/setup_32.c
++++ b/arch/sparc/kernel/setup_32.c
+@@ -415,3 +415,10 @@ static int __init topology_init(void)
+ }
+ 
+ subsys_initcall(topology_init);
++
++#if defined(CONFIG_SPARC32) && !defined(CONFIG_SMP)
++void __init arch_cpu_finalize_init(void)
++{
++	cpu_data(0).udelay_val = loops_per_jiffy;
++}
++#endif
+diff --git a/arch/um/Kconfig b/arch/um/Kconfig
+index 1c57599b82fa7..eb1c6880bde49 100644
+--- a/arch/um/Kconfig
++++ b/arch/um/Kconfig
+@@ -5,6 +5,7 @@ menu "UML-specific options"
+ config UML
+ 	bool
+ 	default y
++	select ARCH_HAS_CPU_FINALIZE_INIT
+ 	select ARCH_HAS_KCOV
+ 	select ARCH_NO_PREEMPT
+ 	select HAVE_ARCH_AUDITSYSCALL
+diff --git a/arch/um/include/asm/bugs.h b/arch/um/include/asm/bugs.h
+deleted file mode 100644
+index 4473942a08397..0000000000000
+--- a/arch/um/include/asm/bugs.h
++++ /dev/null
+@@ -1,7 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef __UM_BUGS_H
+-#define __UM_BUGS_H
+-
+-void check_bugs(void);
+-
+-#endif
+diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
+index 55930ca498e93..ca5383a8a9889 100644
+--- a/arch/um/kernel/um_arch.c
++++ b/arch/um/kernel/um_arch.c
+@@ -3,6 +3,7 @@
+  * Copyright (C) 2000 - 2007 Jeff Dike (jdike@{addtoit,linux.intel}.com)
+  */
+ 
++#include <linux/cpu.h>
+ #include <linux/delay.h>
+ #include <linux/init.h>
+ #include <linux/mm.h>
+@@ -353,7 +354,7 @@ void __init setup_arch(char **cmdline_p)
+ 	setup_hostinfo(host_info, sizeof host_info);
+ }
+ 
+-void __init check_bugs(void)
++void __init arch_cpu_finalize_init(void)
+ {
+ 	arch_check_bugs();
+ 	os_check_bugs();
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 2284666e8c90c..6dc670e363939 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -59,6 +59,7 @@ config X86
+ 	select ARCH_32BIT_OFF_T			if X86_32
+ 	select ARCH_CLOCKSOURCE_INIT
+ 	select ARCH_HAS_ACPI_TABLE_UPGRADE	if ACPI
++	select ARCH_HAS_CPU_FINALIZE_INIT
+ 	select ARCH_HAS_DEBUG_VIRTUAL
+ 	select ARCH_HAS_DEBUG_VM_PGTABLE	if !X86_PAE
+ 	select ARCH_HAS_DEVMEM_IS_ALLOWED
+@@ -2475,6 +2476,13 @@ config CPU_IBRS_ENTRY
+ 	  This mitigates both spectre_v2 and retbleed at great cost to
+ 	  performance.
+ 
++config CPU_SRSO
++	bool "Mitigate speculative RAS overflow on AMD"
++	depends on CPU_SUP_AMD && X86_64 && RETHUNK
++	default y
++	help
++	  Enable the SRSO mitigation needed on AMD Zen1-4 machines.
++
+ config SLS
+ 	bool "Mitigate Straight-Line-Speculation"
+ 	depends on CC_HAS_SLS && X86_64
+@@ -2484,6 +2492,25 @@ config SLS
+ 	  against straight line speculation. The kernel image might be slightly
+ 	  larger.
+ 
++config GDS_FORCE_MITIGATION
++	bool "Force GDS Mitigation"
++	depends on CPU_SUP_INTEL
++	default n
++	help
++	  Gather Data Sampling (GDS) is a hardware vulnerability which allows
++	  unprivileged speculative access to data which was previously stored in
++	  vector registers.
++
++	  This option is equivalent to setting gather_data_sampling=force on the
++	  command line. The microcode mitigation is used if present, otherwise
++	  AVX is disabled as a mitigation. On affected systems that are missing
++	  the microcode any userspace code that unconditionally uses AVX will
++	  break with this option set.
++
++	  Setting this option on systems not vulnerable to GDS has no effect.
++
++	  If in doubt, say N.
++
+ endif
+ 
+ config ARCH_HAS_ADD_PAGES
+diff --git a/arch/x86/include/asm/bugs.h b/arch/x86/include/asm/bugs.h
+index 92ae283899409..f25ca2d709d40 100644
+--- a/arch/x86/include/asm/bugs.h
++++ b/arch/x86/include/asm/bugs.h
+@@ -4,8 +4,6 @@
+ 
+ #include <asm/processor.h>
+ 
+-extern void check_bugs(void);
+-
+ #if defined(CONFIG_CPU_SUP_INTEL) && defined(CONFIG_X86_32)
+ int ppro_with_ram_bug(void);
+ #else
+diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
+index f4cbc01c0bc46..cc3f62f5d5515 100644
+--- a/arch/x86/include/asm/cpufeature.h
++++ b/arch/x86/include/asm/cpufeature.h
+@@ -31,6 +31,8 @@ enum cpuid_leafs
+ 	CPUID_7_ECX,
+ 	CPUID_8000_0007_EBX,
+ 	CPUID_7_EDX,
++	CPUID_8000_001F_EAX,
++	CPUID_8000_0021_EAX,
+ };
+ 
+ #ifdef CONFIG_X86_FEATURE_NAMES
+@@ -89,8 +91,10 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
+ 	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 16, feature_bit) ||	\
+ 	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 17, feature_bit) ||	\
+ 	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 18, feature_bit) ||	\
++	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 19, feature_bit) ||	\
++	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 20, feature_bit) ||	\
+ 	   REQUIRED_MASK_CHECK					  ||	\
+-	   BUILD_BUG_ON_ZERO(NCAPINTS != 19))
++	   BUILD_BUG_ON_ZERO(NCAPINTS != 21))
+ 
+ #define DISABLED_MASK_BIT_SET(feature_bit)				\
+ 	 ( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK,  0, feature_bit) ||	\
+@@ -112,8 +116,10 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
+ 	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 16, feature_bit) ||	\
+ 	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 17, feature_bit) ||	\
+ 	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 18, feature_bit) ||	\
++	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 19, feature_bit) ||	\
++	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 20, feature_bit) ||	\
+ 	   DISABLED_MASK_CHECK					  ||	\
+-	   BUILD_BUG_ON_ZERO(NCAPINTS != 19))
++	   BUILD_BUG_ON_ZERO(NCAPINTS != 21))
+ 
+ #define cpu_has(c, bit)							\
+ 	(__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 :	\
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 1fcda82635546..2c0c495b9cb62 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -13,8 +13,8 @@
+ /*
+  * Defines x86 CPU feature bits
+  */
+-#define NCAPINTS			19	   /* N 32-bit words worth of info */
+-#define NBUGINTS			1	   /* N 32-bit bug flags */
++#define NCAPINTS			21	   /* N 32-bit words worth of info */
++#define NBUGINTS			2	   /* N 32-bit bug flags */
+ 
+ /*
+  * Note: If the comment begins with a quoted string, that string is used
+@@ -96,7 +96,7 @@
+ #define X86_FEATURE_SYSCALL32		( 3*32+14) /* "" syscall in IA32 userspace */
+ #define X86_FEATURE_SYSENTER32		( 3*32+15) /* "" sysenter in IA32 userspace */
+ #define X86_FEATURE_REP_GOOD		( 3*32+16) /* REP microcode works well */
+-#define X86_FEATURE_SME_COHERENT	( 3*32+17) /* "" AMD hardware-enforced cache coherency */
++/* FREE!                                ( 3*32+17) */
+ #define X86_FEATURE_LFENCE_RDTSC	( 3*32+18) /* "" LFENCE synchronizes RDTSC */
+ #define X86_FEATURE_ACC_POWER		( 3*32+19) /* AMD Accumulated Power Mechanism */
+ #define X86_FEATURE_NOPL		( 3*32+20) /* The NOPL (0F 1F) instructions */
+@@ -201,7 +201,7 @@
+ #define X86_FEATURE_INVPCID_SINGLE	( 7*32+ 7) /* Effectively INVPCID && CR4.PCIDE=1 */
+ #define X86_FEATURE_HW_PSTATE		( 7*32+ 8) /* AMD HW-PState */
+ #define X86_FEATURE_PROC_FEEDBACK	( 7*32+ 9) /* AMD ProcFeedbackInterface */
+-#define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
++/* FREE!                                ( 7*32+10) */
+ #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
+ #define X86_FEATURE_KERNEL_IBRS		( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */
+ #define X86_FEATURE_RSB_VMEXIT		( 7*32+13) /* "" Fill RSB on VM-Exit */
+@@ -211,7 +211,7 @@
+ #define X86_FEATURE_SSBD		( 7*32+17) /* Speculative Store Bypass Disable */
+ #define X86_FEATURE_MBA			( 7*32+18) /* Memory Bandwidth Allocation */
+ #define X86_FEATURE_RSB_CTXSW		( 7*32+19) /* "" Fill RSB on context switches */
+-#define X86_FEATURE_SEV			( 7*32+20) /* AMD Secure Encrypted Virtualization */
++/* FREE!                                ( 7*32+20) */
+ #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
+ #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
+ #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE	( 7*32+23) /* "" Disable Speculative Store Bypass. */
+@@ -236,7 +236,6 @@
+ #define X86_FEATURE_EPT_AD		( 8*32+17) /* Intel Extended Page Table access-dirty bit */
+ #define X86_FEATURE_VMCALL		( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */
+ #define X86_FEATURE_VMW_VMMCALL		( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */
+-#define X86_FEATURE_SEV_ES		( 8*32+20) /* AMD Secure Encrypted Virtualization - Encrypted State */
+ 
+ /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */
+ #define X86_FEATURE_FSGSBASE		( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/
+@@ -302,6 +301,10 @@
+ #define X86_FEATURE_RSB_VMEXIT_LITE	(11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */
+ #define X86_FEATURE_MSR_TSX_CTRL	(11*32+18) /* "" MSR IA32_TSX_CTRL (Intel) implemented */
+ 
++#define X86_FEATURE_SRSO		(11*32+24) /* "" AMD BTB untrain RETs */
++#define X86_FEATURE_SRSO_ALIAS		(11*32+25) /* "" AMD BTB untrain RETs through aliasing */
++#define X86_FEATURE_IBPB_ON_VMEXIT	(11*32+26) /* "" Issue an IBPB only on VMEXIT */
++
+ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
+ #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
+ 
+@@ -393,6 +396,17 @@
+ #define X86_FEATURE_CORE_CAPABILITIES	(18*32+30) /* "" IA32_CORE_CAPABILITIES MSR */
+ #define X86_FEATURE_SPEC_CTRL_SSBD	(18*32+31) /* "" Speculative Store Bypass Disable */
+ 
++/* AMD-defined memory encryption features, CPUID level 0x8000001f (EAX), word 19 */
++#define X86_FEATURE_SME			(19*32+ 0) /* AMD Secure Memory Encryption */
++#define X86_FEATURE_SEV			(19*32+ 1) /* AMD Secure Encrypted Virtualization */
++#define X86_FEATURE_VM_PAGE_FLUSH	(19*32+ 2) /* "" VM Page Flush MSR is supported */
++#define X86_FEATURE_SEV_ES		(19*32+ 3) /* AMD Secure Encrypted Virtualization - Encrypted State */
++#define X86_FEATURE_SME_COHERENT	(19*32+10) /* "" AMD hardware-enforced cache coherency */
++
++#define X86_FEATURE_SBPB		(20*32+27) /* "" Selective Branch Prediction Barrier */
++#define X86_FEATURE_IBPB_BRTYPE		(20*32+28) /* "" MSR_PRED_CMD[IBPB] flushes all branch type predictions */
++#define X86_FEATURE_SRSO_NO		(20*32+29) /* "" CPU is not affected by SRSO */
++
+ /*
+  * BUG word(s)
+  */
+@@ -433,5 +447,8 @@
+ #define X86_BUG_MMIO_UNKNOWN		X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
+ #define X86_BUG_RETBLEED		X86_BUG(27) /* CPU is affected by RETBleed */
+ #define X86_BUG_EIBRS_PBRSB		X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
++#define X86_BUG_GDS			X86_BUG(29) /* CPU is affected by Gather Data Sampling */
+ 
++/* BUG word 2 */
++#define X86_BUG_SRSO			X86_BUG(1*32 + 0) /* AMD SRSO bug */
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
+index 16c24915210d3..3c24378e67bea 100644
+--- a/arch/x86/include/asm/disabled-features.h
++++ b/arch/x86/include/asm/disabled-features.h
+@@ -101,6 +101,8 @@
+ 			 DISABLE_ENQCMD)
+ #define DISABLED_MASK17	0
+ #define DISABLED_MASK18	0
+-#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)
++#define DISABLED_MASK19	0
++#define DISABLED_MASK20	0
++#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)
+ 
+ #endif /* _ASM_X86_DISABLED_FEATURES_H */
+diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
+index 70b9bc5403c5e..94c07151a9e57 100644
+--- a/arch/x86/include/asm/fpu/internal.h
++++ b/arch/x86/include/asm/fpu/internal.h
+@@ -42,7 +42,7 @@ extern int  fpu__exception_code(struct fpu *fpu, int trap_nr);
+ extern void fpu__init_cpu(void);
+ extern void fpu__init_system_xstate(void);
+ extern void fpu__init_cpu_xstate(void);
+-extern void fpu__init_system(struct cpuinfo_x86 *c);
++extern void fpu__init_system(void);
+ extern void fpu__init_check_bugs(void);
+ extern void fpu__resume_cpu(void);
+ extern u64 fpu__get_supported_xfeatures_mask(void);
+diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
+index 8f3fa55d43ce7..4406d58cf7cb6 100644
+--- a/arch/x86/include/asm/mem_encrypt.h
++++ b/arch/x86/include/asm/mem_encrypt.h
+@@ -47,14 +47,13 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
+ 
+ void __init mem_encrypt_free_decrypted_mem(void);
+ 
+-/* Architecture __weak replacement functions */
+-void __init mem_encrypt_init(void);
+-
+ void __init sev_es_init_vc_handling(void);
+ bool sme_active(void);
+ bool sev_active(void);
+ bool sev_es_active(void);
+ 
++void __init mem_encrypt_init(void);
++
+ #define __bss_decrypted __section(".bss..decrypted")
+ 
+ #else	/* !CONFIG_AMD_MEM_ENCRYPT */
+@@ -86,6 +85,8 @@ early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0;
+ 
+ static inline void mem_encrypt_free_decrypted_mem(void) { }
+ 
++static inline void mem_encrypt_init(void) { }
++
+ #define __bss_decrypted
+ 
+ #endif	/* CONFIG_AMD_MEM_ENCRYPT */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 3fab152809ab1..202a52e42a368 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -60,6 +60,7 @@
+ 
+ #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
+ #define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
++#define PRED_CMD_SBPB			BIT(7)	   /* Selective Branch Prediction Barrier */
+ 
+ #define MSR_PPIN_CTL			0x0000004e
+ #define MSR_PPIN			0x0000004f
+@@ -156,6 +157,15 @@
+ 						 * Not susceptible to Post-Barrier
+ 						 * Return Stack Buffer Predictions.
+ 						 */
++#define ARCH_CAP_GDS_CTRL		BIT(25)	/*
++						 * CPU is vulnerable to Gather
++						 * Data Sampling (GDS) and
++						 * has controls for mitigation.
++						 */
++#define ARCH_CAP_GDS_NO			BIT(26)	/*
++						 * CPU is not vulnerable to Gather
++						 * Data Sampling (GDS).
++						 */
+ 
+ #define MSR_IA32_FLUSH_CMD		0x0000010b
+ #define L1D_FLUSH			BIT(0)	/*
+@@ -174,6 +184,8 @@
+ #define MSR_IA32_MCU_OPT_CTRL		0x00000123
+ #define RNGDS_MITG_DIS			BIT(0)
+ #define FB_CLEAR_DIS			BIT(3)	/* CPU Fill buffer clear disable */
++#define GDS_MITG_DIS			BIT(4)	/* Disable GDS mitigation */
++#define GDS_MITG_LOCKED			BIT(5)	/* GDS mitigation locked */
+ 
+ #define MSR_IA32_SYSENTER_CS		0x00000174
+ #define MSR_IA32_SYSENTER_ESP		0x00000175
+@@ -519,6 +531,7 @@
+ #define MSR_AMD64_ICIBSEXTDCTL		0xc001103c
+ #define MSR_AMD64_IBSOPDATA4		0xc001103d
+ #define MSR_AMD64_IBS_REG_COUNT_MAX	8 /* includes MSR_AMD64_IBSBRTARGET */
++#define MSR_AMD64_VM_PAGE_FLUSH		0xc001011e
+ #define MSR_AMD64_SEV_ES_GHCB		0xc0010130
+ #define MSR_AMD64_SEV			0xc0010131
+ #define MSR_AMD64_SEV_ENABLED_BIT	0
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index f14cdf9512493..99fbce2c1c7c1 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -112,7 +112,7 @@
+  * eventually turn into it's own annotation.
+  */
+ .macro ANNOTATE_UNRET_END
+-#ifdef CONFIG_DEBUG_ENTRY
++#if (defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
+ 	ANNOTATE_RETPOLINE_SAFE
+ 	nop
+ #endif
+@@ -173,12 +173,18 @@
+  * where we have a stack but before any RET instruction.
+  */
+ .macro UNTRAIN_RET
+-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY)
++#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
++	defined(CONFIG_CPU_SRSO)
+ 	ANNOTATE_UNRET_END
+ 	ALTERNATIVE_2 "",						\
+ 	              CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET,		\
+ 		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB
+ #endif
++
++#ifdef CONFIG_CPU_SRSO
++	ALTERNATIVE_2 "", "call srso_untrain_ret", X86_FEATURE_SRSO, \
++			  "call srso_untrain_ret_alias", X86_FEATURE_SRSO_ALIAS
++#endif
+ .endm
+ 
+ #else /* __ASSEMBLY__ */
+@@ -191,6 +197,8 @@
+ 
+ extern void __x86_return_thunk(void);
+ extern void zen_untrain_ret(void);
++extern void srso_untrain_ret(void);
++extern void srso_untrain_ret_alias(void);
+ extern void entry_ibpb(void);
+ 
+ #ifdef CONFIG_RETPOLINE
+@@ -300,11 +308,11 @@ void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
+ 		: "memory");
+ }
+ 
++extern u64 x86_pred_cmd;
++
+ static inline void indirect_branch_prediction_barrier(void)
+ {
+-	u64 val = PRED_CMD_IBPB;
+-
+-	alternative_msr_write(MSR_IA32_PRED_CMD, val, X86_FEATURE_USE_IBPB);
++	alternative_msr_write(MSR_IA32_PRED_CMD, x86_pred_cmd, X86_FEATURE_USE_IBPB);
+ }
+ 
+ /* The Intel SPEC CTRL MSR base value cache */
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 60514502ead67..6a9de5b1fe458 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -808,9 +808,11 @@ DECLARE_PER_CPU(u64, msr_misc_features_shadow);
+ #ifdef CONFIG_CPU_SUP_AMD
+ extern u16 amd_get_nb_id(int cpu);
+ extern u32 amd_get_nodes_per_socket(void);
++extern bool cpu_has_ibpb_brtype_microcode(void);
+ #else
+ static inline u16 amd_get_nb_id(int cpu)		{ return 0; }
+ static inline u32 amd_get_nodes_per_socket(void)	{ return 0; }
++static inline bool cpu_has_ibpb_brtype_microcode(void)	{ return false; }
+ #endif
+ 
+ static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)
+diff --git a/arch/x86/include/asm/required-features.h b/arch/x86/include/asm/required-features.h
+index 3ff0d48469f28..9bf60a8b9e9c2 100644
+--- a/arch/x86/include/asm/required-features.h
++++ b/arch/x86/include/asm/required-features.h
+@@ -101,6 +101,8 @@
+ #define REQUIRED_MASK16	0
+ #define REQUIRED_MASK17	0
+ #define REQUIRED_MASK18	0
+-#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)
++#define REQUIRED_MASK19	0
++#define REQUIRED_MASK20	0
++#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)
+ 
+ #endif /* _ASM_X86_REQUIRED_FEATURES_H */
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 3d99a823ffac7..ffecf4e9444ea 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1282,6 +1282,25 @@ void set_dr_addr_mask(unsigned long mask, int dr)
+ 	}
+ }
+ 
++bool cpu_has_ibpb_brtype_microcode(void)
++{
++	switch (boot_cpu_data.x86) {
++	/* Zen1/2 IBPB flushes branch type predictions too. */
++	case 0x17:
++		return boot_cpu_has(X86_FEATURE_AMD_IBPB);
++	case 0x19:
++		/* Poke the MSR bit on Zen3/4 to check its presence. */
++		if (!wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
++			setup_force_cpu_cap(X86_FEATURE_SBPB);
++			return true;
++		} else {
++			return false;
++		}
++	default:
++		return false;
++	}
++}
++
+ static void zenbleed_check_cpu(void *unused)
+ {
+ 	struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index c81b8b029b680..d31639e3ce282 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -9,7 +9,6 @@
+  *	- Andrew D. Balsa (code cleanup).
+  */
+ #include <linux/init.h>
+-#include <linux/utsname.h>
+ #include <linux/cpu.h>
+ #include <linux/module.h>
+ #include <linux/nospec.h>
+@@ -27,8 +26,6 @@
+ #include <asm/msr.h>
+ #include <asm/vmx.h>
+ #include <asm/paravirt.h>
+-#include <asm/alternative.h>
+-#include <asm/set_memory.h>
+ #include <asm/intel-family.h>
+ #include <asm/e820/api.h>
+ #include <asm/hypervisor.h>
+@@ -48,6 +45,8 @@ static void __init md_clear_select_mitigation(void);
+ static void __init taa_select_mitigation(void);
+ static void __init mmio_select_mitigation(void);
+ static void __init srbds_select_mitigation(void);
++static void __init gds_select_mitigation(void);
++static void __init srso_select_mitigation(void);
+ 
+ /* The base value of the SPEC_CTRL MSR without task-specific bits set */
+ u64 x86_spec_ctrl_base;
+@@ -57,6 +56,9 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
+ DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
+ EXPORT_SYMBOL_GPL(x86_spec_ctrl_current);
+ 
++u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
++EXPORT_SYMBOL_GPL(x86_pred_cmd);
++
+ static DEFINE_MUTEX(spec_ctrl_mutex);
+ 
+ /* Update SPEC_CTRL MSR and its cached copy unconditionally */
+@@ -116,21 +118,8 @@ EXPORT_SYMBOL_GPL(mds_idle_clear);
+ DEFINE_STATIC_KEY_FALSE(mmio_stale_data_clear);
+ EXPORT_SYMBOL_GPL(mmio_stale_data_clear);
+ 
+-void __init check_bugs(void)
++void __init cpu_select_mitigations(void)
+ {
+-	identify_boot_cpu();
+-
+-	/*
+-	 * identify_boot_cpu() initialized SMT support information, let the
+-	 * core code know.
+-	 */
+-	cpu_smt_check_topology();
+-
+-	if (!IS_ENABLED(CONFIG_SMP)) {
+-		pr_info("CPU: ");
+-		print_cpu_info(&boot_cpu_data);
+-	}
+-
+ 	/*
+ 	 * Read the SPEC_CTRL MSR to account for reserved bits which may
+ 	 * have unknown values. AMD64_LS_CFG MSR is cached in the early AMD
+@@ -166,39 +155,8 @@ void __init check_bugs(void)
+ 	l1tf_select_mitigation();
+ 	md_clear_select_mitigation();
+ 	srbds_select_mitigation();
+-
+-	arch_smt_update();
+-
+-#ifdef CONFIG_X86_32
+-	/*
+-	 * Check whether we are able to run this kernel safely on SMP.
+-	 *
+-	 * - i386 is no longer supported.
+-	 * - In order to run on anything without a TSC, we need to be
+-	 *   compiled for a i486.
+-	 */
+-	if (boot_cpu_data.x86 < 4)
+-		panic("Kernel requires i486+ for 'invlpg' and other features");
+-
+-	init_utsname()->machine[1] =
+-		'0' + (boot_cpu_data.x86 > 6 ? 6 : boot_cpu_data.x86);
+-	alternative_instructions();
+-
+-	fpu__init_check_bugs();
+-#else /* CONFIG_X86_64 */
+-	alternative_instructions();
+-
+-	/*
+-	 * Make sure the first 2MB area is not mapped by huge pages
+-	 * There are typically fixed size MTRRs in there and overlapping
+-	 * MTRRs into large pages causes slow downs.
+-	 *
+-	 * Right now we don't do that with gbpages because there seems
+-	 * very little benefit for that case.
+-	 */
+-	if (!direct_gbpages)
+-		set_memory_4k((unsigned long)__va(0), 1);
+-#endif
++	gds_select_mitigation();
++	srso_select_mitigation();
+ }
+ 
+ /*
+@@ -656,6 +614,149 @@ static int __init srbds_parse_cmdline(char *str)
+ }
+ early_param("srbds", srbds_parse_cmdline);
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"GDS: " fmt
++
++enum gds_mitigations {
++	GDS_MITIGATION_OFF,
++	GDS_MITIGATION_UCODE_NEEDED,
++	GDS_MITIGATION_FORCE,
++	GDS_MITIGATION_FULL,
++	GDS_MITIGATION_FULL_LOCKED,
++	GDS_MITIGATION_HYPERVISOR,
++};
++
++#if IS_ENABLED(CONFIG_GDS_FORCE_MITIGATION)
++static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FORCE;
++#else
++static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FULL;
++#endif
++
++static const char * const gds_strings[] = {
++	[GDS_MITIGATION_OFF]		= "Vulnerable",
++	[GDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
++	[GDS_MITIGATION_FORCE]		= "Mitigation: AVX disabled, no microcode",
++	[GDS_MITIGATION_FULL]		= "Mitigation: Microcode",
++	[GDS_MITIGATION_FULL_LOCKED]	= "Mitigation: Microcode (locked)",
++	[GDS_MITIGATION_HYPERVISOR]	= "Unknown: Dependent on hypervisor status",
++};
++
++bool gds_ucode_mitigated(void)
++{
++	return (gds_mitigation == GDS_MITIGATION_FULL ||
++		gds_mitigation == GDS_MITIGATION_FULL_LOCKED);
++}
++EXPORT_SYMBOL_GPL(gds_ucode_mitigated);
++
++void update_gds_msr(void)
++{
++	u64 mcu_ctrl_after;
++	u64 mcu_ctrl;
++
++	switch (gds_mitigation) {
++	case GDS_MITIGATION_OFF:
++		rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++		mcu_ctrl |= GDS_MITG_DIS;
++		break;
++	case GDS_MITIGATION_FULL_LOCKED:
++		/*
++		 * The LOCKED state comes from the boot CPU. APs might not have
++		 * the same state. Make sure the mitigation is enabled on all
++		 * CPUs.
++		 */
++	case GDS_MITIGATION_FULL:
++		rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++		mcu_ctrl &= ~GDS_MITG_DIS;
++		break;
++	case GDS_MITIGATION_FORCE:
++	case GDS_MITIGATION_UCODE_NEEDED:
++	case GDS_MITIGATION_HYPERVISOR:
++		return;
++	};
++
++	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++
++	/*
++	 * Check to make sure that the WRMSR value was not ignored. Writes to
++	 * GDS_MITG_DIS will be ignored if this processor is locked but the boot
++	 * processor was not.
++	 */
++	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl_after);
++	WARN_ON_ONCE(mcu_ctrl != mcu_ctrl_after);
++}
++
++static void __init gds_select_mitigation(void)
++{
++	u64 mcu_ctrl;
++
++	if (!boot_cpu_has_bug(X86_BUG_GDS))
++		return;
++
++	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
++		gds_mitigation = GDS_MITIGATION_HYPERVISOR;
++		goto out;
++	}
++
++	if (cpu_mitigations_off())
++		gds_mitigation = GDS_MITIGATION_OFF;
++	/* Will verify below that mitigation _can_ be disabled */
++
++	/* No microcode */
++	if (!(x86_read_arch_cap_msr() & ARCH_CAP_GDS_CTRL)) {
++		if (gds_mitigation == GDS_MITIGATION_FORCE) {
++			/*
++			 * This only needs to be done on the boot CPU so do it
++			 * here rather than in update_gds_msr()
++			 */
++			setup_clear_cpu_cap(X86_FEATURE_AVX);
++			pr_warn("Microcode update needed! Disabling AVX as mitigation.\n");
++		} else {
++			gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
++		}
++		goto out;
++	}
++
++	/* Microcode has mitigation, use it */
++	if (gds_mitigation == GDS_MITIGATION_FORCE)
++		gds_mitigation = GDS_MITIGATION_FULL;
++
++	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++	if (mcu_ctrl & GDS_MITG_LOCKED) {
++		if (gds_mitigation == GDS_MITIGATION_OFF)
++			pr_warn("Mitigation locked. Disable failed.\n");
++
++		/*
++		 * The mitigation is selected from the boot CPU. All other CPUs
++		 * _should_ have the same state. If the boot CPU isn't locked
++		 * but others are then update_gds_msr() will WARN() of the state
++		 * mismatch. If the boot CPU is locked update_gds_msr() will
++		 * ensure the other CPUs have the mitigation enabled.
++		 */
++		gds_mitigation = GDS_MITIGATION_FULL_LOCKED;
++	}
++
++	update_gds_msr();
++out:
++	pr_info("%s\n", gds_strings[gds_mitigation]);
++}
++
++static int __init gds_parse_cmdline(char *str)
++{
++	if (!str)
++		return -EINVAL;
++
++	if (!boot_cpu_has_bug(X86_BUG_GDS))
++		return 0;
++
++	if (!strcmp(str, "off"))
++		gds_mitigation = GDS_MITIGATION_OFF;
++	else if (!strcmp(str, "force"))
++		gds_mitigation = GDS_MITIGATION_FORCE;
++
++	return 0;
++}
++early_param("gather_data_sampling", gds_parse_cmdline);
++
+ #undef pr_fmt
+ #define pr_fmt(fmt)     "Spectre V1 : " fmt
+ 
+@@ -2137,6 +2238,165 @@ static int __init l1tf_cmdline(char *str)
+ }
+ early_param("l1tf", l1tf_cmdline);
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"Speculative Return Stack Overflow: " fmt
++
++enum srso_mitigation {
++	SRSO_MITIGATION_NONE,
++	SRSO_MITIGATION_MICROCODE,
++	SRSO_MITIGATION_SAFE_RET,
++	SRSO_MITIGATION_IBPB,
++	SRSO_MITIGATION_IBPB_ON_VMEXIT,
++};
++
++enum srso_mitigation_cmd {
++	SRSO_CMD_OFF,
++	SRSO_CMD_MICROCODE,
++	SRSO_CMD_SAFE_RET,
++	SRSO_CMD_IBPB,
++	SRSO_CMD_IBPB_ON_VMEXIT,
++};
++
++static const char * const srso_strings[] = {
++	[SRSO_MITIGATION_NONE]           = "Vulnerable",
++	[SRSO_MITIGATION_MICROCODE]      = "Mitigation: microcode",
++	[SRSO_MITIGATION_SAFE_RET]	 = "Mitigation: safe RET",
++	[SRSO_MITIGATION_IBPB]		 = "Mitigation: IBPB",
++	[SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only"
++};
++
++static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
++static enum srso_mitigation_cmd srso_cmd __ro_after_init = SRSO_CMD_SAFE_RET;
++
++static int __init srso_parse_cmdline(char *str)
++{
++	if (!str)
++		return -EINVAL;
++
++	if (!strcmp(str, "off"))
++		srso_cmd = SRSO_CMD_OFF;
++	else if (!strcmp(str, "microcode"))
++		srso_cmd = SRSO_CMD_MICROCODE;
++	else if (!strcmp(str, "safe-ret"))
++		srso_cmd = SRSO_CMD_SAFE_RET;
++	else if (!strcmp(str, "ibpb"))
++		srso_cmd = SRSO_CMD_IBPB;
++	else if (!strcmp(str, "ibpb-vmexit"))
++		srso_cmd = SRSO_CMD_IBPB_ON_VMEXIT;
++	else
++		pr_err("Ignoring unknown SRSO option (%s).", str);
++
++	return 0;
++}
++early_param("spec_rstack_overflow", srso_parse_cmdline);
++
++#define SRSO_NOTICE "WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options."
++
++static void __init srso_select_mitigation(void)
++{
++	bool has_microcode;
++
++	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
++		goto pred_cmd;
++
++	/*
++	 * The first check is for the kernel running as a guest in order
++	 * for guests to verify whether IBPB is a viable mitigation.
++	 */
++	has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) || cpu_has_ibpb_brtype_microcode();
++	if (!has_microcode) {
++		pr_warn("IBPB-extending microcode not applied!\n");
++		pr_warn(SRSO_NOTICE);
++	} else {
++		/*
++		 * Enable the synthetic (even if in a real CPUID leaf)
++		 * flags for guests.
++		 */
++		setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
++
++		/*
++		 * Zen1/2 with SMT off aren't vulnerable after the right
++		 * IBPB microcode has been applied.
++		 */
++		if ((boot_cpu_data.x86 < 0x19) &&
++		    (!cpu_smt_possible() || (cpu_smt_control == CPU_SMT_DISABLED)))
++			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
++	}
++
++	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
++		if (has_microcode) {
++			pr_err("Retbleed IBPB mitigation enabled, using same for SRSO\n");
++			srso_mitigation = SRSO_MITIGATION_IBPB;
++			goto pred_cmd;
++		}
++	}
++
++	switch (srso_cmd) {
++	case SRSO_CMD_OFF:
++		return;
++
++	case SRSO_CMD_MICROCODE:
++		if (has_microcode) {
++			srso_mitigation = SRSO_MITIGATION_MICROCODE;
++			pr_warn(SRSO_NOTICE);
++		}
++		break;
++
++	case SRSO_CMD_SAFE_RET:
++		if (IS_ENABLED(CONFIG_CPU_SRSO)) {
++			/*
++			 * Enable the return thunk for generated code
++			 * like ftrace, static_call, etc.
++			 */
++			setup_force_cpu_cap(X86_FEATURE_RETHUNK);
++
++			if (boot_cpu_data.x86 == 0x19)
++				setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
++			else
++				setup_force_cpu_cap(X86_FEATURE_SRSO);
++			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
++		} else {
++			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
++			goto pred_cmd;
++		}
++		break;
++
++	case SRSO_CMD_IBPB:
++		if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) {
++			if (has_microcode) {
++				setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
++				srso_mitigation = SRSO_MITIGATION_IBPB;
++			}
++		} else {
++			pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n");
++			goto pred_cmd;
++		}
++		break;
++
++	case SRSO_CMD_IBPB_ON_VMEXIT:
++		if (IS_ENABLED(CONFIG_CPU_SRSO)) {
++			if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
++				setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
++				srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
++			}
++		} else {
++			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
++			goto pred_cmd;
++                }
++		break;
++
++	default:
++		break;
++	}
++
++	pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
++
++pred_cmd:
++	if ((boot_cpu_has(X86_FEATURE_SRSO_NO) || srso_cmd == SRSO_CMD_OFF) &&
++	     boot_cpu_has(X86_FEATURE_SBPB))
++		x86_pred_cmd = PRED_CMD_SBPB;
++}
++
+ #undef pr_fmt
+ #define pr_fmt(fmt) fmt
+ 
+@@ -2335,6 +2595,18 @@ static ssize_t retbleed_show_state(char *buf)
+ 	return sprintf(buf, "%s\n", retbleed_strings[retbleed_mitigation]);
+ }
+ 
++static ssize_t gds_show_state(char *buf)
++{
++	return sysfs_emit(buf, "%s\n", gds_strings[gds_mitigation]);
++}
++
++static ssize_t srso_show_state(char *buf)
++{
++	return sysfs_emit(buf, "%s%s\n",
++			  srso_strings[srso_mitigation],
++			  (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
++}
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
+@@ -2384,6 +2656,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 	case X86_BUG_RETBLEED:
+ 		return retbleed_show_state(buf);
+ 
++	case X86_BUG_GDS:
++		return gds_show_state(buf);
++
++	case X86_BUG_SRSO:
++		return srso_show_state(buf);
++
+ 	default:
+ 		break;
+ 	}
+@@ -2448,4 +2726,14 @@ ssize_t cpu_show_retbleed(struct device *dev, struct device_attribute *attr, cha
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_RETBLEED);
+ }
++
++ssize_t cpu_show_gds(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_GDS);
++}
++
++ssize_t cpu_show_spec_rstack_overflow(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_SRSO);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index f41781d06a5f3..c1f2360309120 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -18,11 +18,15 @@
+ #include <linux/init.h>
+ #include <linux/kprobes.h>
+ #include <linux/kgdb.h>
++#include <linux/mem_encrypt.h>
+ #include <linux/smp.h>
++#include <linux/cpu.h>
+ #include <linux/io.h>
+ #include <linux/syscore_ops.h>
+ #include <linux/pgtable.h>
++#include <linux/utsname.h>
+ 
++#include <asm/alternative.h>
+ #include <asm/cmdline.h>
+ #include <asm/stackprotector.h>
+ #include <asm/perf_event.h>
+@@ -58,6 +62,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/uv/uv.h>
++#include <asm/set_memory.h>
+ 
+ #include "cpu.h"
+ 
+@@ -961,6 +966,12 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
+ 	if (c->extended_cpuid_level >= 0x8000000a)
+ 		c->x86_capability[CPUID_8000_000A_EDX] = cpuid_edx(0x8000000a);
+ 
++	if (c->extended_cpuid_level >= 0x8000001f)
++		c->x86_capability[CPUID_8000_001F_EAX] = cpuid_eax(0x8000001f);
++
++	if (c->extended_cpuid_level >= 0x80000021)
++		c->x86_capability[CPUID_8000_0021_EAX] = cpuid_eax(0x80000021);
++
+ 	init_scattered_cpuid_features(c);
+ 	init_speculation_control(c);
+ 
+@@ -1122,6 +1133,12 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ #define MMIO_SBDS	BIT(2)
+ /* CPU is affected by RETbleed, speculating where you would not expect it */
+ #define RETBLEED	BIT(3)
++/* CPU is affected by SMT (cross-thread) return predictions */
++#define SMT_RSB		BIT(4)
++/* CPU is affected by SRSO */
++#define SRSO		BIT(5)
++/* CPU is affected by GDS */
++#define GDS		BIT(6)
+ 
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
+@@ -1134,27 +1151,30 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL_X,	X86_STEPPING_ANY,		MMIO),
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	X86_STEPPING_ANY,		MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS),
+ 	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED | GDS),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED | GDS),
+ 	VULNBL_INTEL_STEPPINGS(CANNONLAKE_L,	X86_STEPPING_ANY,		RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(ICELAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(ICELAKE_D,	X86_STEPPING_ANY,		MMIO),
+-	VULNBL_INTEL_STEPPINGS(ICELAKE_X,	X86_STEPPING_ANY,		MMIO),
+-	VULNBL_INTEL_STEPPINGS(COMETLAKE,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(ICELAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS),
++	VULNBL_INTEL_STEPPINGS(ICELAKE_D,	X86_STEPPING_ANY,		MMIO | GDS),
++	VULNBL_INTEL_STEPPINGS(ICELAKE_X,	X86_STEPPING_ANY,		MMIO | GDS),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS),
+ 	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPINGS(0x0, 0x0),	MMIO | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS),
++	VULNBL_INTEL_STEPPINGS(TIGERLAKE_L,	X86_STEPPING_ANY,		GDS),
++	VULNBL_INTEL_STEPPINGS(TIGERLAKE,	X86_STEPPING_ANY,		GDS),
+ 	VULNBL_INTEL_STEPPINGS(LAKEFIELD,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(ROCKETLAKE,	X86_STEPPING_ANY,		MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(ROCKETLAKE,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS),
+ 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS),
+ 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D,	X86_STEPPING_ANY,		MMIO),
+ 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS),
+ 
+ 	VULNBL_AMD(0x15, RETBLEED),
+ 	VULNBL_AMD(0x16, RETBLEED),
+-	VULNBL_AMD(0x17, RETBLEED),
++	VULNBL_AMD(0x17, RETBLEED | SRSO),
+ 	VULNBL_HYGON(0x18, RETBLEED),
++	VULNBL_AMD(0x19, SRSO),
+ 	{}
+ };
+ 
+@@ -1272,6 +1292,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	    !(ia32_cap & ARCH_CAP_PBRSB_NO))
+ 		setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
+ 
++	/*
++	 * Check if CPU is vulnerable to GDS. If running in a virtual machine on
++	 * an affected processor, the VMM may have disabled the use of GATHER by
++	 * disabling AVX2. The only way to do this in HW is to clear XCR0[2],
++	 * which means that AVX will be disabled.
++	 */
++	if (cpu_matches(cpu_vuln_blacklist, GDS) && !(ia32_cap & ARCH_CAP_GDS_NO) &&
++	    boot_cpu_has(X86_FEATURE_AVX))
++		setup_force_cpu_bug(X86_BUG_GDS);
++
++	if (!cpu_has(c, X86_FEATURE_SRSO_NO)) {
++		if (cpu_matches(cpu_vuln_blacklist, SRSO))
++			setup_force_cpu_bug(X86_BUG_SRSO);
++	}
++
+ 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ 		return;
+ 
+@@ -1413,8 +1448,6 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
+ 
+ 	cpu_set_core_cap_bits(c);
+ 
+-	fpu__init_system(c);
+-
+ #ifdef CONFIG_X86_32
+ 	/*
+ 	 * Regardless of whether PCID is enumerated, the SDM says
+@@ -1792,6 +1825,8 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
+ 	validate_apic_and_package_id(c);
+ 	x86_spec_ctrl_setup_ap();
+ 	update_srbds_msr();
++	if (boot_cpu_has_bug(X86_BUG_GDS))
++		update_gds_msr();
+ }
+ 
+ static __init int setup_noclflush(char *arg)
+@@ -2109,8 +2144,6 @@ void cpu_init(void)
+ 
+ 	doublefault_init_cpu_tss();
+ 
+-	fpu__init_cpu();
+-
+ 	if (is_uv_system())
+ 		uv_cpu_init();
+ 
+@@ -2126,6 +2159,7 @@ void cpu_init_secondary(void)
+ 	 */
+ 	cpu_init_exception_handling();
+ 	cpu_init();
++	fpu__init_cpu();
+ }
+ #endif
+ 
+@@ -2188,3 +2222,69 @@ void arch_smt_update(void)
+ 	/* Check whether IPI broadcasting can be enabled */
+ 	apic_smt_update();
+ }
++
++void __init arch_cpu_finalize_init(void)
++{
++	identify_boot_cpu();
++
++	/*
++	 * identify_boot_cpu() initialized SMT support information, let the
++	 * core code know.
++	 */
++	cpu_smt_check_topology();
++
++	if (!IS_ENABLED(CONFIG_SMP)) {
++		pr_info("CPU: ");
++		print_cpu_info(&boot_cpu_data);
++	}
++
++	cpu_select_mitigations();
++
++	arch_smt_update();
++
++	if (IS_ENABLED(CONFIG_X86_32)) {
++		/*
++		 * Check whether this is a real i386 which is not longer
++		 * supported and fixup the utsname.
++		 */
++		if (boot_cpu_data.x86 < 4)
++			panic("Kernel requires i486+ for 'invlpg' and other features");
++
++		init_utsname()->machine[1] =
++			'0' + (boot_cpu_data.x86 > 6 ? 6 : boot_cpu_data.x86);
++	}
++
++	/*
++	 * Must be before alternatives because it might set or clear
++	 * feature bits.
++	 */
++	fpu__init_system();
++	fpu__init_cpu();
++
++	alternative_instructions();
++
++	if (IS_ENABLED(CONFIG_X86_64)) {
++		/*
++		 * Make sure the first 2MB area is not mapped by huge pages
++		 * There are typically fixed size MTRRs in there and overlapping
++		 * MTRRs into large pages causes slow downs.
++		 *
++		 * Right now we don't do that with gbpages because there seems
++		 * very little benefit for that case.
++		 */
++		if (!direct_gbpages)
++			set_memory_4k((unsigned long)__va(0), 1);
++	} else {
++		fpu__init_check_bugs();
++	}
++
++	/*
++	 * This needs to be called before any devices perform DMA
++	 * operations that might use the SWIOTLB bounce buffers. It will
++	 * mark the bounce buffers as decrypted so that their usage will
++	 * not cause "plain-text" data to be decrypted when accessed. It
++	 * must be called after late_time_init() so that Hyper-V x86/x64
++	 * hypercalls work when the SWIOTLB bounce buffers are decrypted.
++	 */
++	mem_encrypt_init();
++}
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 91df90abc1d9c..66e19380a899d 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -78,9 +78,11 @@ extern void detect_ht(struct cpuinfo_x86 *c);
+ extern void check_null_seg_clears_base(struct cpuinfo_x86 *c);
+ 
+ unsigned int aperfmperf_get_khz(int cpu);
++void cpu_select_mitigations(void);
+ 
+ extern void x86_spec_ctrl_setup_ap(void);
+ extern void update_srbds_msr(void);
++extern void update_gds_msr(void);
+ 
+ extern u64 x86_read_arch_cap_msr(void);
+ 
+diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
+index 82fe492121bb3..f1cd1b6fb99ef 100644
+--- a/arch/x86/kernel/cpu/scattered.c
++++ b/arch/x86/kernel/cpu/scattered.c
+@@ -41,10 +41,6 @@ static const struct cpuid_bit cpuid_bits[] = {
+ 	{ X86_FEATURE_CPB,		CPUID_EDX,  9, 0x80000007, 0 },
+ 	{ X86_FEATURE_PROC_FEEDBACK,    CPUID_EDX, 11, 0x80000007, 0 },
+ 	{ X86_FEATURE_MBA,		CPUID_EBX,  6, 0x80000008, 0 },
+-	{ X86_FEATURE_SME,		CPUID_EAX,  0, 0x8000001f, 0 },
+-	{ X86_FEATURE_SEV,		CPUID_EAX,  1, 0x8000001f, 0 },
+-	{ X86_FEATURE_SEV_ES,		CPUID_EAX,  3, 0x8000001f, 0 },
+-	{ X86_FEATURE_SME_COHERENT,	CPUID_EAX, 10, 0x8000001f, 0 },
+ 	{ 0, 0, 0, 0, 0 }
+ };
+ 
+diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c
+index 3c0a621b97921..ed73f9b407068 100644
+--- a/arch/x86/kernel/fpu/init.c
++++ b/arch/x86/kernel/fpu/init.c
+@@ -49,7 +49,7 @@ void fpu__init_cpu(void)
+ 	fpu__init_cpu_xstate();
+ }
+ 
+-static bool fpu__probe_without_cpuid(void)
++static bool __init fpu__probe_without_cpuid(void)
+ {
+ 	unsigned long cr0;
+ 	u16 fsw, fcw;
+@@ -67,7 +67,7 @@ static bool fpu__probe_without_cpuid(void)
+ 	return fsw == 0 && (fcw & 0x103f) == 0x003f;
+ }
+ 
+-static void fpu__init_system_early_generic(struct cpuinfo_x86 *c)
++static void __init fpu__init_system_early_generic(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_CPUID) &&
+ 	    !test_bit(X86_FEATURE_FPU, (unsigned long *)cpu_caps_cleared)) {
+@@ -237,9 +237,9 @@ static void __init fpu__init_system_ctx_switch(void)
+  * Called on the boot CPU once per system bootup, to set up the initial
+  * FPU state that is later cloned into all processes:
+  */
+-void __init fpu__init_system(struct cpuinfo_x86 *c)
++void __init fpu__init_system(void)
+ {
+-	fpu__init_system_early_generic(c);
++	fpu__init_system_early_generic();
+ 
+ 	/*
+ 	 * The FPU has to be operational for some of the
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index a21cd2381fa8f..4955cb5cc0016 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -133,7 +133,20 @@ SECTIONS
+ 		LOCK_TEXT
+ 		KPROBES_TEXT
+ 		ALIGN_ENTRY_TEXT_BEGIN
++#ifdef CONFIG_CPU_SRSO
++		*(.text.__x86.rethunk_untrain)
++#endif
++
+ 		ENTRY_TEXT
++
++#ifdef CONFIG_CPU_SRSO
++		/*
++		 * See the comment above srso_untrain_ret_alias()'s
++		 * definition.
++		 */
++		. = srso_untrain_ret_alias | (1 << 2) | (1 << 8) | (1 << 14) | (1 << 20);
++		*(.text.__x86.rethunk_safe)
++#endif
+ 		ALIGN_ENTRY_TEXT_END
+ 		SOFTIRQENTRY_TEXT
+ 		STATIC_CALL_TEXT
+@@ -142,13 +155,15 @@ SECTIONS
+ 
+ #ifdef CONFIG_RETPOLINE
+ 		__indirect_thunk_start = .;
+-		*(.text.__x86.*)
++		*(.text.__x86.indirect_thunk)
++		*(.text.__x86.return_thunk)
+ 		__indirect_thunk_end = .;
+ #endif
+ 	} :text =0xcccc
+ 
+ 	/* End of text section, which should occupy whole number of pages */
+ 	_etext = .;
++
+ 	. = ALIGN(PAGE_SIZE);
+ 
+ 	X86_ALIGN_RODATA_BEGIN
+@@ -502,6 +517,21 @@ INIT_PER_CPU(irq_stack_backing_store);
+            "fixed_percpu_data is not at start of per-cpu area");
+ #endif
+ 
++#ifdef CONFIG_RETHUNK
++. = ASSERT((__ret & 0x3f) == 0, "__ret not cacheline-aligned");
++. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
++#endif
++
++#ifdef CONFIG_CPU_SRSO
++/*
++ * GNU ld cannot do XOR so do: (A | B) - (A & B) in order to compute the XOR
++ * of the two function addresses:
++ */
++. = ASSERT(((srso_untrain_ret_alias | srso_safe_ret_alias) -
++		(srso_untrain_ret_alias & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
++		"SRSO function pair won't alias");
++#endif
++
+ #endif /* CONFIG_X86_32 */
+ 
+ #ifdef CONFIG_KEXEC_CORE
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index de4b171cb76bc..8b07e48612d7d 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -491,6 +491,9 @@ void kvm_set_cpu_caps(void)
+ 	    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
+ 		kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
+ 
++	if (cpu_feature_enabled(X86_FEATURE_SRSO_NO))
++		kvm_cpu_cap_set(X86_FEATURE_SRSO_NO);
++
+ 	/*
+ 	 * Hide all SVM features by default, SVM will set the cap bits for
+ 	 * features it emulates and/or exposes for L1.
+diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
+index dc921d76e42e8..1ba9313d26b91 100644
+--- a/arch/x86/kvm/cpuid.h
++++ b/arch/x86/kvm/cpuid.h
+@@ -63,6 +63,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
+ 	[CPUID_8000_0007_EBX] = {0x80000007, 0, CPUID_EBX},
+ 	[CPUID_7_EDX]         = {         7, 0, CPUID_EDX},
+ 	[CPUID_7_1_EAX]       = {         7, 1, CPUID_EAX},
++	[CPUID_8000_0021_EAX] = {0x80000021, 0, CPUID_EAX},
+ };
+ 
+ /*
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 7b2b61309d8a4..8544bca6b3356 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1392,7 +1392,9 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ 
+ 	if (sd->current_vmcb != svm->vmcb) {
+ 		sd->current_vmcb = svm->vmcb;
+-		indirect_branch_prediction_barrier();
++
++		if (!cpu_feature_enabled(X86_FEATURE_IBPB_ON_VMEXIT))
++			indirect_branch_prediction_barrier();
+ 	}
+ 	avic_vcpu_load(vcpu, cpu);
+ }
+diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
+index c18d812d00cd7..a8859c1732580 100644
+--- a/arch/x86/kvm/svm/vmenter.S
++++ b/arch/x86/kvm/svm/vmenter.S
+@@ -137,6 +137,9 @@ SYM_FUNC_START(__svm_vcpu_run)
+ 	 */
+ 	UNTRAIN_RET
+ 
++	/* SRSO */
++	ALTERNATIVE "", "call entry_ibpb", X86_FEATURE_IBPB_ON_VMEXIT
++
+ 	/*
+ 	 * Clear all general purpose registers except RSP and RAX to prevent
+ 	 * speculative use of the guest's values, even those that are reloaded
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 34670943f5435..cf47392005663 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -255,6 +255,8 @@ static struct kmem_cache *x86_fpu_cache;
+ 
+ static struct kmem_cache *x86_emulator_cache;
+ 
++extern bool gds_ucode_mitigated(void);
++
+ /*
+  * When called, it means the previous get/set msr reached an invalid msr.
+  * Return true if we want to ignore/silent this failed msr access.
+@@ -1389,7 +1391,7 @@ static unsigned int num_msr_based_features;
+ 	 ARCH_CAP_SKIP_VMENTRY_L1DFLUSH | ARCH_CAP_SSB_NO | ARCH_CAP_MDS_NO | \
+ 	 ARCH_CAP_PSCHANGE_MC_NO | ARCH_CAP_TSX_CTRL_MSR | ARCH_CAP_TAA_NO | \
+ 	 ARCH_CAP_SBDR_SSDP_NO | ARCH_CAP_FBSDP_NO | ARCH_CAP_PSDP_NO | \
+-	 ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO)
++	 ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO | ARCH_CAP_GDS_NO)
+ 
+ static u64 kvm_get_arch_capabilities(void)
+ {
+@@ -1446,6 +1448,9 @@ static u64 kvm_get_arch_capabilities(void)
+ 		 */
+ 	}
+ 
++	if (!boot_cpu_has_bug(X86_BUG_GDS) || gds_ucode_mitigated())
++		data |= ARCH_CAP_GDS_NO;
++
+ 	return data;
+ }
+ 
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index 1221bb099afb4..5f7eed97487ec 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -9,6 +9,7 @@
+ #include <asm/nospec-branch.h>
+ #include <asm/unwind_hints.h>
+ #include <asm/frame.h>
++#include <asm/nops.h>
+ 
+ 	.section .text.__x86.indirect_thunk
+ 
+@@ -73,6 +74,45 @@ SYM_CODE_END(__x86_indirect_thunk_array)
+  */
+ #ifdef CONFIG_RETHUNK
+ 
++/*
++ * srso_untrain_ret_alias() and srso_safe_ret_alias() are placed at
++ * special addresses:
++ *
++ * - srso_untrain_ret_alias() is 2M aligned
++ * - srso_safe_ret_alias() is also in the same 2M page but bits 2, 8, 14
++ * and 20 in its virtual address are set (while those bits in the
++ * srso_untrain_ret_alias() function are cleared).
++ *
++ * This guarantees that those two addresses will alias in the branch
++ * target buffer of Zen3/4 generations, leading to any potential
++ * poisoned entries at that BTB slot to get evicted.
++ *
++ * As a result, srso_safe_ret_alias() becomes a safe return.
++ */
++#ifdef CONFIG_CPU_SRSO
++	.section .text.__x86.rethunk_untrain
++
++SYM_START(srso_untrain_ret_alias, SYM_L_GLOBAL, SYM_A_NONE)
++	ASM_NOP2
++	lfence
++	jmp __x86_return_thunk
++SYM_FUNC_END(srso_untrain_ret_alias)
++__EXPORT_THUNK(srso_untrain_ret_alias)
++
++	.section .text.__x86.rethunk_safe
++#endif
++
++/* Needs a definition for the __x86_return_thunk alternative below. */
++SYM_START(srso_safe_ret_alias, SYM_L_GLOBAL, SYM_A_NONE)
++#ifdef CONFIG_CPU_SRSO
++	add $8, %_ASM_SP
++	UNWIND_HINT_FUNC
++#endif
++	ANNOTATE_UNRET_SAFE
++	ret
++	int3
++SYM_FUNC_END(srso_safe_ret_alias)
++
+ 	.section .text.__x86.return_thunk
+ 
+ /*
+@@ -85,7 +125,7 @@ SYM_CODE_END(__x86_indirect_thunk_array)
+  *    from re-poisioning the BTB prediction.
+  */
+ 	.align 64
+-	.skip 63, 0xcc
++	.skip 64 - (__ret - zen_untrain_ret), 0xcc
+ SYM_FUNC_START_NOALIGN(zen_untrain_ret);
+ 
+ 	/*
+@@ -117,10 +157,10 @@ SYM_FUNC_START_NOALIGN(zen_untrain_ret);
+ 	 * evicted, __x86_return_thunk will suffer Straight Line Speculation
+ 	 * which will be contained safely by the INT3.
+ 	 */
+-SYM_INNER_LABEL(__x86_return_thunk, SYM_L_GLOBAL)
++SYM_INNER_LABEL(__ret, SYM_L_GLOBAL)
+ 	ret
+ 	int3
+-SYM_CODE_END(__x86_return_thunk)
++SYM_CODE_END(__ret)
+ 
+ 	/*
+ 	 * Ensure the TEST decoding / BTB invalidation is complete.
+@@ -131,11 +171,44 @@ SYM_CODE_END(__x86_return_thunk)
+ 	 * Jump back and execute the RET in the middle of the TEST instruction.
+ 	 * INT3 is for SLS protection.
+ 	 */
+-	jmp __x86_return_thunk
++	jmp __ret
+ 	int3
+ SYM_FUNC_END(zen_untrain_ret)
+ __EXPORT_THUNK(zen_untrain_ret)
+ 
++/*
++ * SRSO untraining sequence for Zen1/2, similar to zen_untrain_ret()
++ * above. On kernel entry, srso_untrain_ret() is executed which is a
++ *
++ * movabs $0xccccccc308c48348,%rax
++ *
++ * and when the return thunk executes the inner label srso_safe_ret()
++ * later, it is a stack manipulation and a RET which is mispredicted and
++ * thus a "safe" one to use.
++ */
++	.align 64
++	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
++SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
++	.byte 0x48, 0xb8
++
++SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
++	add $8, %_ASM_SP
++	ret
++	int3
++	int3
++	int3
++	lfence
++	call srso_safe_ret
++	int3
++SYM_CODE_END(srso_safe_ret)
++SYM_FUNC_END(srso_untrain_ret)
++__EXPORT_THUNK(srso_untrain_ret)
++
++SYM_FUNC_START(__x86_return_thunk)
++	ALTERNATIVE_2 "jmp __ret", "call srso_safe_ret", X86_FEATURE_SRSO, \
++			"call srso_safe_ret_alias", X86_FEATURE_SRSO_ALIAS
++	int3
++SYM_CODE_END(__x86_return_thunk)
+ EXPORT_SYMBOL(__x86_return_thunk)
+ 
+ #endif /* CONFIG_RETHUNK */
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index ff3b0d8fe0486..dd15fdee45366 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -7,6 +7,7 @@
+ #include <linux/swapops.h>
+ #include <linux/kmemleak.h>
+ #include <linux/sched/task.h>
++#include <linux/sched/mm.h>
+ 
+ #include <asm/set_memory.h>
+ #include <asm/cpu_device_id.h>
+@@ -27,6 +28,7 @@
+ #include <asm/pti.h>
+ #include <asm/text-patching.h>
+ #include <asm/memtype.h>
++#include <asm/paravirt.h>
+ 
+ /*
+  * We need to define the tracepoints somewhere, and tlb.c
+@@ -804,9 +806,12 @@ void __init poking_init(void)
+ 	spinlock_t *ptl;
+ 	pte_t *ptep;
+ 
+-	poking_mm = copy_init_mm();
++	poking_mm = mm_alloc();
+ 	BUG_ON(!poking_mm);
+ 
++	/* Xen PV guests need the PGD to be pinned. */
++	paravirt_arch_dup_mmap(NULL, poking_mm);
++
+ 	/*
+ 	 * Randomize the poking address, but make sure that the following page
+ 	 * will be mapped at the same PMD. We need 2 pages, so find space for 3,
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index 64873937cd1d7..755e939db3ed3 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -30,6 +30,7 @@
+ #include <asm/desc.h>
+ #include <asm/cpu.h>
+ #include <asm/io_apic.h>
++#include <asm/fpu/internal.h>
+ 
+ #include <xen/interface/xen.h>
+ #include <xen/interface/vcpu.h>
+@@ -63,6 +64,7 @@ static void cpu_bringup(void)
+ 
+ 	cr4_init();
+ 	cpu_init();
++	fpu__init_cpu();
+ 	touch_softlockup_watchdog();
+ 	preempt_disable();
+ 
+diff --git a/arch/xtensa/include/asm/bugs.h b/arch/xtensa/include/asm/bugs.h
+deleted file mode 100644
+index 69b29d1982494..0000000000000
+--- a/arch/xtensa/include/asm/bugs.h
++++ /dev/null
+@@ -1,18 +0,0 @@
+-/*
+- * include/asm-xtensa/bugs.h
+- *
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Xtensa processors don't have any bugs.  :)
+- *
+- * This file is subject to the terms and conditions of the GNU General
+- * Public License.  See the file "COPYING" in the main directory of
+- * this archive for more details.
+- */
+-
+-#ifndef _XTENSA_BUGS_H
+-#define _XTENSA_BUGS_H
+-
+-static void check_bugs(void) { }
+-
+-#endif /* _XTENSA_BUGS_H */
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 33e0526907ebd..2db1e0e8c1a7d 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -579,6 +579,18 @@ ssize_t __weak cpu_show_retbleed(struct device *dev,
+ 	return sysfs_emit(buf, "Not affected\n");
+ }
+ 
++ssize_t __weak cpu_show_gds(struct device *dev,
++			    struct device_attribute *attr, char *buf)
++{
++	return sysfs_emit(buf, "Not affected\n");
++}
++
++ssize_t __weak cpu_show_spec_rstack_overflow(struct device *dev,
++					     struct device_attribute *attr, char *buf)
++{
++	return sysfs_emit(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+@@ -590,6 +602,8 @@ static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
+ static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
+ static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL);
+ static DEVICE_ATTR(retbleed, 0444, cpu_show_retbleed, NULL);
++static DEVICE_ATTR(gather_data_sampling, 0444, cpu_show_gds, NULL);
++static DEVICE_ATTR(spec_rstack_overflow, 0444, cpu_show_spec_rstack_overflow, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+@@ -603,6 +617,8 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_srbds.attr,
+ 	&dev_attr_mmio_stale_data.attr,
+ 	&dev_attr_retbleed.attr,
++	&dev_attr_gather_data_sampling.attr,
++	&dev_attr_spec_rstack_overflow.attr,
+ 	NULL
+ };
+ 
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index 379ac9ca60b70..1c366ddf62bc5 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -396,7 +396,7 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 	struct gnttab_map_grant_ref *gop = queue->tx_map_ops + *map_ops;
+ 	struct xen_netif_tx_request *txp = first;
+ 
+-	nr_slots = shinfo->nr_frags + 1;
++	nr_slots = shinfo->nr_frags + frag_overflow + 1;
+ 
+ 	copy_count(skb) = 0;
+ 	XENVIF_TX_CB(skb)->split_mask = 0;
+@@ -462,8 +462,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 		}
+ 	}
+ 
+-	for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots;
+-	     shinfo->nr_frags++, gop++) {
++	for (shinfo->nr_frags = 0; nr_slots > 0 && shinfo->nr_frags < MAX_SKB_FRAGS;
++	     shinfo->nr_frags++, gop++, nr_slots--) {
+ 		index = pending_index(queue->pending_cons++);
+ 		pending_idx = queue->pending_ring[index];
+ 		xenvif_tx_create_map_op(queue, pending_idx, txp,
+@@ -476,12 +476,12 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 			txp++;
+ 	}
+ 
+-	if (frag_overflow) {
++	if (nr_slots > 0) {
+ 
+ 		shinfo = skb_shinfo(nskb);
+ 		frags = shinfo->frags;
+ 
+-		for (shinfo->nr_frags = 0; shinfo->nr_frags < frag_overflow;
++		for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots;
+ 		     shinfo->nr_frags++, txp++, gop++) {
+ 			index = pending_index(queue->pending_cons++);
+ 			pending_idx = queue->pending_ring[index];
+@@ -492,6 +492,11 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 		}
+ 
+ 		skb_shinfo(skb)->frag_list = nskb;
++	} else if (nskb) {
++		/* A frag_list skb was allocated but it is no longer needed
++		 * because enough slots were converted to copy ops above.
++		 */
++		kfree_skb(nskb);
+ 	}
+ 
+ 	(*copy_ops) = cop - queue->tx_copy_ops;
+diff --git a/include/asm-generic/bugs.h b/include/asm-generic/bugs.h
+deleted file mode 100644
+index 69021830f078d..0000000000000
+--- a/include/asm-generic/bugs.h
++++ /dev/null
+@@ -1,11 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef __ASM_GENERIC_BUGS_H
+-#define __ASM_GENERIC_BUGS_H
+-/*
+- * This file is included by 'init/main.c' to check for
+- * architecture-dependent bugs.
+- */
+-
+-static inline void check_bugs(void) { }
+-
+-#endif	/* __ASM_GENERIC_BUGS_H */
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 24e5f237244d8..358905a859938 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -70,6 +70,8 @@ extern ssize_t cpu_show_mmio_stale_data(struct device *dev,
+ 					char *buf);
+ extern ssize_t cpu_show_retbleed(struct device *dev,
+ 				 struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_spec_rstack_overflow(struct device *dev,
++					     struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+@@ -188,6 +190,12 @@ void arch_cpu_idle_enter(void);
+ void arch_cpu_idle_exit(void);
+ void arch_cpu_idle_dead(void);
+ 
++#ifdef CONFIG_ARCH_HAS_CPU_FINALIZE_INIT
++void arch_cpu_finalize_init(void);
++#else
++static inline void arch_cpu_finalize_init(void) { }
++#endif
++
+ int cpu_report_state(int cpu);
+ int cpu_check_up_prepare(int cpu);
+ void cpu_set_state_online(int cpu);
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index 2832cc6be062b..e8304e929e283 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -63,6 +63,7 @@ extern void sched_dead(struct task_struct *p);
+ void __noreturn do_task_dead(void);
+ void __noreturn make_task_dead(int signr);
+ 
++extern void mm_cache_init(void);
+ extern void proc_caches_init(void);
+ 
+ extern void fork_init(void);
+@@ -89,7 +90,6 @@ extern void exit_itimers(struct task_struct *);
+ extern pid_t kernel_clone(struct kernel_clone_args *kargs);
+ struct task_struct *create_io_thread(int (*fn)(void *), void *arg, int node);
+ struct task_struct *fork_idle(int);
+-struct mm_struct *copy_init_mm(void);
+ extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
+ extern long kernel_wait4(pid_t, int __user *, int, struct rusage *);
+ int kernel_wait(pid_t pid, int *stat);
+diff --git a/init/main.c b/init/main.c
+index d8bfe61b5a889..298989b0d4e88 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -95,12 +95,10 @@
+ #include <linux/cache.h>
+ #include <linux/rodata_test.h>
+ #include <linux/jump_label.h>
+-#include <linux/mem_encrypt.h>
+ #include <linux/kcsan.h>
+ #include <linux/init_syscalls.h>
+ 
+ #include <asm/io.h>
+-#include <asm/bugs.h>
+ #include <asm/setup.h>
+ #include <asm/sections.h>
+ #include <asm/cacheflush.h>
+@@ -774,8 +772,6 @@ void __init __weak thread_stack_cache_init(void)
+ }
+ #endif
+ 
+-void __init __weak mem_encrypt_init(void) { }
+-
+ void __init __weak poking_init(void) { }
+ 
+ void __init __weak pgtable_cache_init(void) { }
+@@ -839,6 +835,7 @@ static void __init mm_init(void)
+ 	init_espfix_bsp();
+ 	/* Should be run after espfix64 is set up. */
+ 	pti_init();
++	mm_cache_init();
+ }
+ 
+ void __init __weak arch_call_rest_init(void)
+@@ -903,7 +900,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 	sort_main_extable();
+ 	trap_init();
+ 	mm_init();
+-
++	poking_init();
+ 	ftrace_init();
+ 
+ 	/* trace_printk can be enabled here */
+@@ -993,14 +990,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 	 */
+ 	locking_selftest();
+ 
+-	/*
+-	 * This needs to be called before any devices perform DMA
+-	 * operations that might use the SWIOTLB bounce buffers. It will
+-	 * mark the bounce buffers as decrypted so that their usage will
+-	 * not cause "plain-text" data to be decrypted when accessed.
+-	 */
+-	mem_encrypt_init();
+-
+ #ifdef CONFIG_BLK_DEV_INITRD
+ 	if (initrd_start && !initrd_below_start_ok &&
+ 	    page_to_pfn(virt_to_page((void *)initrd_start)) < min_low_pfn) {
+@@ -1017,6 +1006,9 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 		late_time_init();
+ 	sched_clock_init();
+ 	calibrate_delay();
++
++	arch_cpu_finalize_init();
++
+ 	pid_idr_init();
+ 	anon_vma_init();
+ #ifdef CONFIG_X86
+@@ -1043,9 +1035,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 	taskstats_init_early();
+ 	delayacct_init();
+ 
+-	poking_init();
+-	check_bugs();
+-
+ 	acpi_subsystem_init();
+ 	arch_post_acpi_subsys_init();
+ 	sfi_init_late();
+diff --git a/kernel/fork.c b/kernel/fork.c
+index c6a289317e89b..31455f5ab015a 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2424,11 +2424,6 @@ struct task_struct * __init fork_idle(int cpu)
+ 	return task;
+ }
+ 
+-struct mm_struct *copy_init_mm(void)
+-{
+-	return dup_mm(NULL, &init_mm);
+-}
+-
+ /*
+  * This is like kernel_clone(), but shaved down and tailored to just
+  * creating io_uring workers. It returns a created task, or an error pointer.
+@@ -2823,10 +2818,27 @@ static void sighand_ctor(void *data)
+ 	init_waitqueue_head(&sighand->signalfd_wqh);
+ }
+ 
+-void __init proc_caches_init(void)
++void __init mm_cache_init(void)
+ {
+ 	unsigned int mm_size;
+ 
++	/*
++	 * The mm_cpumask is located at the end of mm_struct, and is
++	 * dynamically sized based on the maximum CPU number this system
++	 * can have, taking hotplug into account (nr_cpu_ids).
++	 */
++	mm_size = sizeof(struct mm_struct) + cpumask_size();
++
++	mm_cachep = kmem_cache_create_usercopy("mm_struct",
++			mm_size, ARCH_MIN_MMSTRUCT_ALIGN,
++			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
++			offsetof(struct mm_struct, saved_auxv),
++			sizeof_field(struct mm_struct, saved_auxv),
++			NULL);
++}
++
++void __init proc_caches_init(void)
++{
+ 	sighand_cachep = kmem_cache_create("sighand_cache",
+ 			sizeof(struct sighand_struct), 0,
+ 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
+@@ -2844,19 +2856,6 @@ void __init proc_caches_init(void)
+ 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
+ 			NULL);
+ 
+-	/*
+-	 * The mm_cpumask is located at the end of mm_struct, and is
+-	 * dynamically sized based on the maximum CPU number this system
+-	 * can have, taking hotplug into account (nr_cpu_ids).
+-	 */
+-	mm_size = sizeof(struct mm_struct) + cpumask_size();
+-
+-	mm_cachep = kmem_cache_create_usercopy("mm_struct",
+-			mm_size, ARCH_MIN_MMSTRUCT_ALIGN,
+-			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
+-			offsetof(struct mm_struct, saved_auxv),
+-			sizeof_field(struct mm_struct, saved_auxv),
+-			NULL);
+ 	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
+ 	mmap_init();
+ 	nsproxy_cache_init();
+diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
+index ec53f52a06a58..2ae4d74ee73b4 100644
+--- a/tools/arch/x86/include/asm/cpufeatures.h
++++ b/tools/arch/x86/include/asm/cpufeatures.h
+@@ -13,8 +13,8 @@
+ /*
+  * Defines x86 CPU feature bits
+  */
+-#define NCAPINTS			19	   /* N 32-bit words worth of info */
+-#define NBUGINTS			1	   /* N 32-bit bug flags */
++#define NCAPINTS			20	   /* N 32-bit words worth of info */
++#define NBUGINTS			2	   /* N 32-bit bug flags */
+ 
+ /*
+  * Note: If the comment begins with a quoted string, that string is used
+@@ -96,7 +96,7 @@
+ #define X86_FEATURE_SYSCALL32		( 3*32+14) /* "" syscall in IA32 userspace */
+ #define X86_FEATURE_SYSENTER32		( 3*32+15) /* "" sysenter in IA32 userspace */
+ #define X86_FEATURE_REP_GOOD		( 3*32+16) /* REP microcode works well */
+-#define X86_FEATURE_SME_COHERENT	( 3*32+17) /* "" AMD hardware-enforced cache coherency */
++/* FREE!                                ( 3*32+17) */
+ #define X86_FEATURE_LFENCE_RDTSC	( 3*32+18) /* "" LFENCE synchronizes RDTSC */
+ #define X86_FEATURE_ACC_POWER		( 3*32+19) /* AMD Accumulated Power Mechanism */
+ #define X86_FEATURE_NOPL		( 3*32+20) /* The NOPL (0F 1F) instructions */
+@@ -201,7 +201,7 @@
+ #define X86_FEATURE_INVPCID_SINGLE	( 7*32+ 7) /* Effectively INVPCID && CR4.PCIDE=1 */
+ #define X86_FEATURE_HW_PSTATE		( 7*32+ 8) /* AMD HW-PState */
+ #define X86_FEATURE_PROC_FEEDBACK	( 7*32+ 9) /* AMD ProcFeedbackInterface */
+-#define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
++/* FREE!                                ( 7*32+10) */
+ #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
+ #define X86_FEATURE_KERNEL_IBRS		( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */
+ #define X86_FEATURE_RSB_VMEXIT		( 7*32+13) /* "" Fill RSB on VM-Exit */
+@@ -211,7 +211,7 @@
+ #define X86_FEATURE_SSBD		( 7*32+17) /* Speculative Store Bypass Disable */
+ #define X86_FEATURE_MBA			( 7*32+18) /* Memory Bandwidth Allocation */
+ #define X86_FEATURE_RSB_CTXSW		( 7*32+19) /* "" Fill RSB on context switches */
+-#define X86_FEATURE_SEV			( 7*32+20) /* AMD Secure Encrypted Virtualization */
++/* FREE!                                ( 7*32+20) */
+ #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
+ #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
+ #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE	( 7*32+23) /* "" Disable Speculative Store Bypass. */
+@@ -236,7 +236,6 @@
+ #define X86_FEATURE_EPT_AD		( 8*32+17) /* Intel Extended Page Table access-dirty bit */
+ #define X86_FEATURE_VMCALL		( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */
+ #define X86_FEATURE_VMW_VMMCALL		( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */
+-#define X86_FEATURE_SEV_ES		( 8*32+20) /* AMD Secure Encrypted Virtualization - Encrypted State */
+ 
+ /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */
+ #define X86_FEATURE_FSGSBASE		( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/
+@@ -299,6 +298,7 @@
+ #define X86_FEATURE_RSB_VMEXIT_LITE	(11*32+17) /* "" Fill RSB on VM-Exit when EIBRS is enabled */
+ 
+ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
++#define X86_FEATURE_AVX_VNNI		(12*32+ 4) /* AVX VNNI instructions */
+ #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
+ 
+ /* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */
+@@ -343,6 +343,7 @@
+ #define X86_FEATURE_AVIC		(15*32+13) /* Virtual Interrupt Controller */
+ #define X86_FEATURE_V_VMSAVE_VMLOAD	(15*32+15) /* Virtual VMSAVE VMLOAD */
+ #define X86_FEATURE_VGIF		(15*32+16) /* Virtual GIF */
++#define X86_FEATURE_SVME_ADDR_CHK	(15*32+28) /* "" SVME addr check */
+ 
+ /* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */
+ #define X86_FEATURE_AVX512VBMI		(16*32+ 1) /* AVX512 Vector Bit Manipulation instructions*/
+@@ -389,6 +390,13 @@
+ #define X86_FEATURE_CORE_CAPABILITIES	(18*32+30) /* "" IA32_CORE_CAPABILITIES MSR */
+ #define X86_FEATURE_SPEC_CTRL_SSBD	(18*32+31) /* "" Speculative Store Bypass Disable */
+ 
++/* AMD-defined memory encryption features, CPUID level 0x8000001f (EAX), word 19 */
++#define X86_FEATURE_SME			(19*32+ 0) /* AMD Secure Memory Encryption */
++#define X86_FEATURE_SEV			(19*32+ 1) /* AMD Secure Encrypted Virtualization */
++#define X86_FEATURE_VM_PAGE_FLUSH	(19*32+ 2) /* "" VM Page Flush MSR is supported */
++#define X86_FEATURE_SEV_ES		(19*32+ 3) /* AMD Secure Encrypted Virtualization - Encrypted State */
++#define X86_FEATURE_SME_COHERENT	(19*32+10) /* "" AMD hardware-enforced cache coherency */
++
+ /*
+  * BUG word(s)
+  */
+diff --git a/tools/arch/x86/include/asm/disabled-features.h b/tools/arch/x86/include/asm/disabled-features.h
+index d109c5ec967ce..a50f075244b44 100644
+--- a/tools/arch/x86/include/asm/disabled-features.h
++++ b/tools/arch/x86/include/asm/disabled-features.h
+@@ -104,6 +104,7 @@
+ 			 DISABLE_ENQCMD)
+ #define DISABLED_MASK17	0
+ #define DISABLED_MASK18	0
+-#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)
++#define DISABLED_MASK19	0
++#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 20)
+ 
+ #endif /* _ASM_X86_DISABLED_FEATURES_H */
+diff --git a/tools/arch/x86/include/asm/required-features.h b/tools/arch/x86/include/asm/required-features.h
+index 3ff0d48469f28..b2d504f119370 100644
+--- a/tools/arch/x86/include/asm/required-features.h
++++ b/tools/arch/x86/include/asm/required-features.h
+@@ -101,6 +101,7 @@
+ #define REQUIRED_MASK16	0
+ #define REQUIRED_MASK17	0
+ #define REQUIRED_MASK18	0
+-#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)
++#define REQUIRED_MASK19	0
++#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 20)
+ 
+ #endif /* _ASM_X86_REQUIRED_FEATURES_H */
+diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c
+index d8f47704fd85f..5b915ebb61163 100644
+--- a/tools/objtool/arch/x86/decode.c
++++ b/tools/objtool/arch/x86/decode.c
+@@ -652,5 +652,8 @@ bool arch_is_retpoline(struct symbol *sym)
+ 
+ bool arch_is_rethunk(struct symbol *sym)
+ {
+-	return !strcmp(sym->name, "__x86_return_thunk");
++	return !strcmp(sym->name, "__x86_return_thunk") ||
++	       !strcmp(sym->name, "srso_untrain_ret") ||
++	       !strcmp(sym->name, "srso_safe_ret") ||
++	       !strcmp(sym->name, "__ret");
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-08-11 11:56 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-08-11 11:56 UTC (permalink / raw
  To: gentoo-commits

commit:     b250ab26b0b70ebe4965ba44a1509694220baac5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 11 11:56:19 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Aug 11 11:56:19 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b250ab26

Linux patch 5.10.190

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1189_linux-5.10.190.patch | 17245 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 17249 insertions(+)

diff --git a/0000_README b/0000_README
index 9f251533..573656a0 100644
--- a/0000_README
+++ b/0000_README
@@ -799,6 +799,10 @@ Patch:  1188_linux-5.10.189.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.189
 
+Patch:  1189_linux-5.10.190.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.190
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1189_linux-5.10.190.patch b/1189_linux-5.10.190.patch
new file mode 100644
index 00000000..c50a7d27
--- /dev/null
+++ b/1189_linux-5.10.190.patch
@@ -0,0 +1,17245 @@
+diff --git a/Documentation/admin-guide/security-bugs.rst b/Documentation/admin-guide/security-bugs.rst
+index c32eb786201c1..d6d93e96128ef 100644
+--- a/Documentation/admin-guide/security-bugs.rst
++++ b/Documentation/admin-guide/security-bugs.rst
+@@ -63,31 +63,28 @@ information submitted to the security list and any followup discussions
+ of the report are treated confidentially even after the embargo has been
+ lifted, in perpetuity.
+ 
+-Coordination
+-------------
+-
+-Fixes for sensitive bugs, such as those that might lead to privilege
+-escalations, may need to be coordinated with the private
+-<linux-distros@vs.openwall.org> mailing list so that distribution vendors
+-are well prepared to issue a fixed kernel upon public disclosure of the
+-upstream fix. Distros will need some time to test the proposed patch and
+-will generally request at least a few days of embargo, and vendor update
+-publication prefers to happen Tuesday through Thursday. When appropriate,
+-the security team can assist with this coordination, or the reporter can
+-include linux-distros from the start. In this case, remember to prefix
+-the email Subject line with "[vs]" as described in the linux-distros wiki:
+-<http://oss-security.openwall.org/wiki/mailing-lists/distros#how-to-use-the-lists>
++Coordination with other groups
++------------------------------
++
++The kernel security team strongly recommends that reporters of potential
++security issues NEVER contact the "linux-distros" mailing list until
++AFTER discussing it with the kernel security team.  Do not Cc: both
++lists at once.  You may contact the linux-distros mailing list after a
++fix has been agreed on and you fully understand the requirements that
++doing so will impose on you and the kernel community.
++
++The different lists have different goals and the linux-distros rules do
++not contribute to actually fixing any potential security problems.
+ 
+ CVE assignment
+ --------------
+ 
+-The security team does not normally assign CVEs, nor do we require them
+-for reports or fixes, as this can needlessly complicate the process and
+-may delay the bug handling. If a reporter wishes to have a CVE identifier
+-assigned ahead of public disclosure, they will need to contact the private
+-linux-distros list, described above. When such a CVE identifier is known
+-before a patch is provided, it is desirable to mention it in the commit
+-message if the reporter agrees.
++The security team does not assign CVEs, nor do we require them for
++reports or fixes, as this can needlessly complicate the process and may
++delay the bug handling.  If a reporter wishes to have a CVE identifier
++assigned, they should find one by themselves, for example by contacting
++MITRE directly.  However under no circumstances will a patch inclusion
++be delayed to wait for a CVE identifier to arrive.
+ 
+ Non-disclosure agreements
+ -------------------------
+diff --git a/Makefile b/Makefile
+index 36047436fae33..bd2f457703634 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 189
++SUBLEVEL = 190
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
+index 7858ae5d39df7..7d81992dfafc6 100644
+--- a/arch/arm/boot/dts/imx6qdl.dtsi
++++ b/arch/arm/boot/dts/imx6qdl.dtsi
+@@ -45,6 +45,10 @@
+ 		spi1 = &ecspi2;
+ 		spi2 = &ecspi3;
+ 		spi3 = &ecspi4;
++		usb0 = &usbotg;
++		usb1 = &usbh1;
++		usb2 = &usbh2;
++		usb3 = &usbh3;
+ 		usbphy0 = &usbphy1;
+ 		usbphy1 = &usbphy2;
+ 	};
+diff --git a/arch/arm/boot/dts/imx6sl.dtsi b/arch/arm/boot/dts/imx6sl.dtsi
+index c184a6d5bc420..5b4dfc62030e8 100644
+--- a/arch/arm/boot/dts/imx6sl.dtsi
++++ b/arch/arm/boot/dts/imx6sl.dtsi
+@@ -39,6 +39,9 @@
+ 		spi1 = &ecspi2;
+ 		spi2 = &ecspi3;
+ 		spi3 = &ecspi4;
++		usb0 = &usbotg1;
++		usb1 = &usbotg2;
++		usb2 = &usbh;
+ 		usbphy0 = &usbphy1;
+ 		usbphy1 = &usbphy2;
+ 	};
+diff --git a/arch/arm/boot/dts/imx6sll.dtsi b/arch/arm/boot/dts/imx6sll.dtsi
+index bf5b262b91f91..3659fd5ecfa62 100644
+--- a/arch/arm/boot/dts/imx6sll.dtsi
++++ b/arch/arm/boot/dts/imx6sll.dtsi
+@@ -36,6 +36,8 @@
+ 		spi1 = &ecspi2;
+ 		spi3 = &ecspi3;
+ 		spi4 = &ecspi4;
++		usb0 = &usbotg1;
++		usb1 = &usbotg2;
+ 		usbphy0 = &usbphy1;
+ 		usbphy1 = &usbphy2;
+ 	};
+@@ -49,20 +51,18 @@
+ 			device_type = "cpu";
+ 			reg = <0>;
+ 			next-level-cache = <&L2>;
+-			operating-points = <
++			operating-points =
+ 				/* kHz    uV */
+-				996000  1275000
+-				792000  1175000
+-				396000  1075000
+-				198000	975000
+-			>;
+-			fsl,soc-operating-points = <
++				<996000  1275000>,
++				<792000  1175000>,
++				<396000  1075000>,
++				<198000	  975000>;
++			fsl,soc-operating-points =
+ 				/* ARM kHz      SOC-PU uV */
+-				996000          1175000
+-				792000          1175000
+-				396000          1175000
+-				198000		1175000
+-			>;
++				<996000         1175000>,
++				<792000         1175000>,
++				<396000         1175000>,
++				<198000		1175000>;
+ 			clock-latency = <61036>; /* two CLK32 periods */
+ 			#cooling-cells = <2>;
+ 			clocks = <&clks IMX6SLL_CLK_ARM>,
+@@ -552,7 +552,7 @@
+ 				reg = <0x020ca000 0x1000>;
+ 				interrupts = <GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX6SLL_CLK_USBPHY2>;
+-				phy-reg_3p0-supply = <&reg_3p0>;
++				phy-3p0-supply = <&reg_3p0>;
+ 				fsl,anatop = <&anatop>;
+ 			};
+ 
+diff --git a/arch/arm/boot/dts/imx6sx.dtsi b/arch/arm/boot/dts/imx6sx.dtsi
+index c399919943c34..629c6a7432d9b 100644
+--- a/arch/arm/boot/dts/imx6sx.dtsi
++++ b/arch/arm/boot/dts/imx6sx.dtsi
+@@ -49,6 +49,9 @@
+ 		spi2 = &ecspi3;
+ 		spi3 = &ecspi4;
+ 		spi4 = &ecspi5;
++		usb0 = &usbotg1;
++		usb1 = &usbotg2;
++		usb2 = &usbh;
+ 		usbphy0 = &usbphy1;
+ 		usbphy1 = &usbphy2;
+ 	};
+diff --git a/arch/arm/boot/dts/imx6ul.dtsi b/arch/arm/boot/dts/imx6ul.dtsi
+index c40684ad11b8e..b40dd0c198028 100644
+--- a/arch/arm/boot/dts/imx6ul.dtsi
++++ b/arch/arm/boot/dts/imx6ul.dtsi
+@@ -47,6 +47,8 @@
+ 		spi1 = &ecspi2;
+ 		spi2 = &ecspi3;
+ 		spi3 = &ecspi4;
++		usb0 = &usbotg1;
++		usb1 = &usbotg2;
+ 		usbphy0 = &usbphy1;
+ 		usbphy1 = &usbphy2;
+ 	};
+diff --git a/arch/arm/boot/dts/imx7d.dtsi b/arch/arm/boot/dts/imx7d.dtsi
+index cff875b80b60e..b0bcfa9094a30 100644
+--- a/arch/arm/boot/dts/imx7d.dtsi
++++ b/arch/arm/boot/dts/imx7d.dtsi
+@@ -7,6 +7,12 @@
+ #include <dt-bindings/reset/imx7-reset.h>
+ 
+ / {
++	aliases {
++		usb0 = &usbotg1;
++		usb1 = &usbotg2;
++		usb2 = &usbh;
++	};
++
+ 	cpus {
+ 		cpu0: cpu@0 {
+ 			clock-frequency = <996000000>;
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index 43b39ad9ddcee..334e781663cc2 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -47,6 +47,8 @@
+ 		spi1 = &ecspi2;
+ 		spi2 = &ecspi3;
+ 		spi3 = &ecspi4;
++		usb0 = &usbotg1;
++		usb1 = &usbh;
+ 	};
+ 
+ 	cpus {
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts
+index 46e558ab7729b..f0e8af12442a4 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts
+@@ -129,7 +129,7 @@
+ 	status = "okay";
+ 	clock-frequency = <100000>;
+ 	i2c-sda-falling-time-ns = <890>;  /* hcnt */
+-	i2c-sdl-falling-time-ns = <890>;  /* lcnt */
++	i2c-scl-falling-time-ns = <890>;  /* lcnt */
+ 
+ 	adc@14 {
+ 		compatible = "lltc,ltc2497";
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk_nand.dts b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk_nand.dts
+index f9b4a39683cf4..92ac3c86ebd56 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk_nand.dts
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk_nand.dts
+@@ -162,7 +162,7 @@
+ 	status = "okay";
+ 	clock-frequency = <100000>;
+ 	i2c-sda-falling-time-ns = <890>;  /* hcnt */
+-	i2c-sdl-falling-time-ns = <890>;  /* lcnt */
++	i2c-scl-falling-time-ns = <890>;  /* lcnt */
+ 
+ 	adc@14 {
+ 		compatible = "lltc,ltc2497";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
+index 24f9e8fd0c8b8..9c6c21cc6c6c8 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
+@@ -351,7 +351,7 @@
+ 			MX8MN_IOMUXC_ENET_RXC_ENET1_RGMII_RXC		0x91
+ 			MX8MN_IOMUXC_ENET_RX_CTL_ENET1_RGMII_RX_CTL	0x91
+ 			MX8MN_IOMUXC_ENET_TX_CTL_ENET1_RGMII_TX_CTL	0x1f
+-			MX8MN_IOMUXC_GPIO1_IO09_GPIO1_IO9		0x19
++			MX8MN_IOMUXC_GPIO1_IO09_GPIO1_IO9		0x159
+ 		>;
+ 	};
+ 
+diff --git a/arch/powerpc/include/asm/word-at-a-time.h b/arch/powerpc/include/asm/word-at-a-time.h
+index f3f4710d4ff52..99129b0cd8b8a 100644
+--- a/arch/powerpc/include/asm/word-at-a-time.h
++++ b/arch/powerpc/include/asm/word-at-a-time.h
+@@ -34,7 +34,7 @@ static inline long find_zero(unsigned long mask)
+ 	return leading_zero_bits >> 3;
+ }
+ 
+-static inline bool has_zero(unsigned long val, unsigned long *data, const struct word_at_a_time *c)
++static inline unsigned long has_zero(unsigned long val, unsigned long *data, const struct word_at_a_time *c)
+ {
+ 	unsigned long rhs = val | c->low_bits;
+ 	*data = rhs;
+diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
+index b76cd49d521b9..db040f34c0046 100644
+--- a/arch/powerpc/mm/init_64.c
++++ b/arch/powerpc/mm/init_64.c
+@@ -313,8 +313,7 @@ void __ref vmemmap_free(unsigned long start, unsigned long end,
+ 	start = ALIGN_DOWN(start, page_size);
+ 	if (altmap) {
+ 		alt_start = altmap->base_pfn;
+-		alt_end = altmap->base_pfn + altmap->reserve +
+-			  altmap->free + altmap->alloc + altmap->align;
++		alt_end = altmap->base_pfn + altmap->reserve + altmap->free;
+ 	}
+ 
+ 	pr_debug("vmemmap_free %lx...%lx\n", start, end);
+diff --git a/arch/s390/kernel/sthyi.c b/arch/s390/kernel/sthyi.c
+index 888cc2f166db7..ce6084e28d904 100644
+--- a/arch/s390/kernel/sthyi.c
++++ b/arch/s390/kernel/sthyi.c
+@@ -460,9 +460,9 @@ static int sthyi_update_cache(u64 *rc)
+  *
+  * Fills the destination with system information returned by the STHYI
+  * instruction. The data is generated by emulation or execution of STHYI,
+- * if available. The return value is the condition code that would be
+- * returned, the rc parameter is the return code which is passed in
+- * register R2 + 1.
++ * if available. The return value is either a negative error value or
++ * the condition code that would be returned, the rc parameter is the
++ * return code which is passed in register R2 + 1.
+  */
+ int sthyi_fill(void *dst, u64 *rc)
+ {
+diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
+index 5be68190901f9..8bf72a323e4fa 100644
+--- a/arch/s390/kvm/intercept.c
++++ b/arch/s390/kvm/intercept.c
+@@ -387,8 +387,8 @@ static int handle_partial_execution(struct kvm_vcpu *vcpu)
+  */
+ int handle_sthyi(struct kvm_vcpu *vcpu)
+ {
+-	int reg1, reg2, r = 0;
+-	u64 code, addr, cc = 0, rc = 0;
++	int reg1, reg2, cc = 0, r = 0;
++	u64 code, addr, rc = 0;
+ 	struct sthyi_sctns *sctns = NULL;
+ 
+ 	if (!test_kvm_facility(vcpu->kvm, 74))
+@@ -419,7 +419,10 @@ int handle_sthyi(struct kvm_vcpu *vcpu)
+ 		return -ENOMEM;
+ 
+ 	cc = sthyi_fill(sctns, &rc);
+-
++	if (cc < 0) {
++		free_page((unsigned long)sctns);
++		return cc;
++	}
+ out:
+ 	if (!cc) {
+ 		if (kvm_s390_pv_cpu_is_protected(vcpu)) {
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index 03e561608eed4..b5a60fbb96644 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -2786,6 +2786,7 @@ int s390_replace_asce(struct gmap *gmap)
+ 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+ 	if (!page)
+ 		return -ENOMEM;
++	page->index = 0;
+ 	table = page_to_virt(page);
+ 	memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT));
+ 
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 2c0c495b9cb62..5a54c3685a066 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -451,4 +451,5 @@
+ 
+ /* BUG word 2 */
+ #define X86_BUG_SRSO			X86_BUG(1*32 + 0) /* AMD SRSO bug */
++#define X86_BUG_DIV0			X86_BUG(1*32 + 1) /* AMD DIV0 speculation bug */
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
+index 991a7ad540c72..bd7f5886a7898 100644
+--- a/arch/x86/include/asm/kprobes.h
++++ b/arch/x86/include/asm/kprobes.h
+@@ -58,14 +58,29 @@ struct arch_specific_insn {
+ 	/* copy of the original instruction */
+ 	kprobe_opcode_t *insn;
+ 	/*
+-	 * boostable = false: This instruction type is not boostable.
+-	 * boostable = true: This instruction has been boosted: we have
++	 * boostable = 0: This instruction type is not boostable.
++	 * boostable = 1: This instruction has been boosted: we have
+ 	 * added a relative jump after the instruction copy in insn,
+ 	 * so no single-step and fixup are needed (unless there's
+ 	 * a post_handler).
+ 	 */
+-	bool boostable;
+-	bool if_modifier;
++	unsigned boostable:1;
++	unsigned char size;	/* The size of insn */
++	union {
++		unsigned char opcode;
++		struct {
++			unsigned char type;
++		} jcc;
++		struct {
++			unsigned char type;
++			unsigned char asize;
++		} loop;
++		struct {
++			unsigned char reg;
++		} indirect;
++	};
++	s32 rel32;	/* relative offset must be s32, s16, or s8 */
++	void (*emulate_op)(struct kprobe *p, struct pt_regs *regs);
+ 	/* Number of bytes of text poked */
+ 	int tp_len;
+ };
+@@ -104,7 +119,6 @@ extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
+ extern int kprobe_exceptions_notify(struct notifier_block *self,
+ 				    unsigned long val, void *data);
+ extern int kprobe_int3_handler(struct pt_regs *regs);
+-extern int kprobe_debug_handler(struct pt_regs *regs);
+ 
+ #else
+ 
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 6a9de5b1fe458..2dd9b661a5fd5 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -809,10 +809,12 @@ DECLARE_PER_CPU(u64, msr_misc_features_shadow);
+ extern u16 amd_get_nb_id(int cpu);
+ extern u32 amd_get_nodes_per_socket(void);
+ extern bool cpu_has_ibpb_brtype_microcode(void);
++extern void amd_clear_divider(void);
+ #else
+ static inline u16 amd_get_nb_id(int cpu)		{ return 0; }
+ static inline u32 amd_get_nodes_per_socket(void)	{ return 0; }
+ static inline bool cpu_has_ibpb_brtype_microcode(void)	{ return false; }
++static inline void amd_clear_divider(void)		{ }
+ #endif
+ 
+ static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index ffecf4e9444ea..da38765aa74c4 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -76,6 +76,10 @@ static const int amd_zenbleed[] =
+ 			   AMD_MODEL_RANGE(0x17, 0x60, 0x0, 0x7f, 0xf),
+ 			   AMD_MODEL_RANGE(0x17, 0xa0, 0x0, 0xaf, 0xf));
+ 
++static const int amd_div0[] =
++	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0x00, 0x0, 0x2f, 0xf),
++			   AMD_MODEL_RANGE(0x17, 0x50, 0x0, 0x5f, 0xf));
++
+ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
+ {
+ 	int osvw_id = *erratum++;
+@@ -1168,6 +1172,11 @@ static void init_amd(struct cpuinfo_x86 *c)
+ 	check_null_seg_clears_base(c);
+ 
+ 	zenbleed_check(c);
++
++	if (cpu_has_amd_erratum(c, amd_div0)) {
++		pr_notice_once("AMD Zen1 DIV0 bug detected. Disable SMT for full protection.\n");
++		setup_force_cpu_bug(X86_BUG_DIV0);
++	}
+ }
+ 
+ #ifdef CONFIG_X86_32
+@@ -1312,3 +1321,13 @@ void amd_check_microcode(void)
+ {
+ 	on_each_cpu(zenbleed_check_cpu, NULL, 1);
+ }
++
++/*
++ * Issue a DIV 0/1 insn to clear any division data from previous DIV
++ * operations.
++ */
++void noinstr amd_clear_divider(void)
++{
++	asm volatile(ALTERNATIVE("", "div %2\n\t", X86_BUG_DIV0)
++		     :: "a" (0), "d" (0), "r" (1));
++}
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 5de757099186c..c78b4946385e7 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -133,26 +133,6 @@ void synthesize_relcall(void *dest, void *from, void *to)
+ }
+ NOKPROBE_SYMBOL(synthesize_relcall);
+ 
+-/*
+- * Skip the prefixes of the instruction.
+- */
+-static kprobe_opcode_t *skip_prefixes(kprobe_opcode_t *insn)
+-{
+-	insn_attr_t attr;
+-
+-	attr = inat_get_opcode_attribute((insn_byte_t)*insn);
+-	while (inat_is_legacy_prefix(attr)) {
+-		insn++;
+-		attr = inat_get_opcode_attribute((insn_byte_t)*insn);
+-	}
+-#ifdef CONFIG_X86_64
+-	if (inat_is_rex_prefix(attr))
+-		insn++;
+-#endif
+-	return insn;
+-}
+-NOKPROBE_SYMBOL(skip_prefixes);
+-
+ /*
+  * Returns non-zero if INSN is boostable.
+  * RIP relative instructions are adjusted at copying time in 64 bits mode
+@@ -185,29 +165,28 @@ int can_boost(struct insn *insn, void *addr)
+ 
+ 	opcode = insn->opcode.bytes[0];
+ 
+-	switch (opcode & 0xf0) {
+-	case 0x60:
+-		/* can't boost "bound" */
+-		return (opcode != 0x62);
+-	case 0x70:
+-		return 0; /* can't boost conditional jump */
+-	case 0x90:
+-		return opcode != 0x9a;	/* can't boost call far */
+-	case 0xc0:
+-		/* can't boost software-interruptions */
+-		return (0xc1 < opcode && opcode < 0xcc) || opcode == 0xcf;
+-	case 0xd0:
+-		/* can boost AA* and XLAT */
+-		return (opcode == 0xd4 || opcode == 0xd5 || opcode == 0xd7);
+-	case 0xe0:
+-		/* can boost in/out and absolute jmps */
+-		return ((opcode & 0x04) || opcode == 0xea);
+-	case 0xf0:
+-		/* clear and set flags are boostable */
+-		return (opcode == 0xf5 || (0xf7 < opcode && opcode < 0xfe));
++	switch (opcode) {
++	case 0x62:		/* bound */
++	case 0x70 ... 0x7f:	/* Conditional jumps */
++	case 0x9a:		/* Call far */
++	case 0xc0 ... 0xc1:	/* Grp2 */
++	case 0xcc ... 0xce:	/* software exceptions */
++	case 0xd0 ... 0xd3:	/* Grp2 */
++	case 0xd6:		/* (UD) */
++	case 0xd8 ... 0xdf:	/* ESC */
++	case 0xe0 ... 0xe3:	/* LOOP*, JCXZ */
++	case 0xe8 ... 0xe9:	/* near Call, JMP */
++	case 0xeb:		/* Short JMP */
++	case 0xf0 ... 0xf4:	/* LOCK/REP, HLT */
++	case 0xf6 ... 0xf7:	/* Grp3 */
++	case 0xfe:		/* Grp4 */
++		/* ... are not boostable */
++		return 0;
++	case 0xff:		/* Grp5 */
++		/* Only indirect jmp is boostable */
++		return X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
+ 	default:
+-		/* call is not boostable */
+-		return opcode != 0x9a;
++		return 1;
+ 	}
+ }
+ 
+@@ -326,25 +305,6 @@ static int can_probe(unsigned long paddr)
+ 	return (addr == paddr);
+ }
+ 
+-/*
+- * Returns non-zero if opcode modifies the interrupt flag.
+- */
+-static int is_IF_modifier(kprobe_opcode_t *insn)
+-{
+-	/* Skip prefixes */
+-	insn = skip_prefixes(insn);
+-
+-	switch (*insn) {
+-	case 0xfa:		/* cli */
+-	case 0xfb:		/* sti */
+-	case 0xcf:		/* iret/iretd */
+-	case 0x9d:		/* popf/popfd */
+-		return 1;
+-	}
+-
+-	return 0;
+-}
+-
+ /*
+  * Copy an instruction with recovering modified instruction by kprobes
+  * and adjust the displacement if the instruction uses the %rip-relative
+@@ -412,13 +372,14 @@ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
+ 	return insn->length;
+ }
+ 
+-/* Prepare reljump right after instruction to boost */
+-static int prepare_boost(kprobe_opcode_t *buf, struct kprobe *p,
+-			  struct insn *insn)
++/* Prepare reljump or int3 right after instruction */
++static int prepare_singlestep(kprobe_opcode_t *buf, struct kprobe *p,
++			      struct insn *insn)
+ {
+ 	int len = insn->length;
+ 
+-	if (can_boost(insn, p->addr) &&
++	if (!IS_ENABLED(CONFIG_PREEMPTION) &&
++	    !p->post_handler && can_boost(insn, p->addr) &&
+ 	    MAX_INSN_SIZE - len >= JMP32_INSN_SIZE) {
+ 		/*
+ 		 * These instructions can be executed directly if it
+@@ -427,9 +388,14 @@ static int prepare_boost(kprobe_opcode_t *buf, struct kprobe *p,
+ 		synthesize_reljump(buf + len, p->ainsn.insn + len,
+ 				   p->addr + insn->length);
+ 		len += JMP32_INSN_SIZE;
+-		p->ainsn.boostable = true;
++		p->ainsn.boostable = 1;
+ 	} else {
+-		p->ainsn.boostable = false;
++		/* Otherwise, put an int3 for trapping singlestep */
++		if (MAX_INSN_SIZE - len < INT3_INSN_SIZE)
++			return -ENOSPC;
++
++		buf[len] = INT3_INSN_OPCODE;
++		len += INT3_INSN_SIZE;
+ 	}
+ 
+ 	return len;
+@@ -466,25 +432,290 @@ void free_insn_page(void *page)
+ 	module_memfree(page);
+ }
+ 
++/* Kprobe x86 instruction emulation - only regs->ip or IF flag modifiers */
++
++static void kprobe_emulate_ifmodifiers(struct kprobe *p, struct pt_regs *regs)
++{
++	switch (p->ainsn.opcode) {
++	case 0xfa:	/* cli */
++		regs->flags &= ~(X86_EFLAGS_IF);
++		break;
++	case 0xfb:	/* sti */
++		regs->flags |= X86_EFLAGS_IF;
++		break;
++	case 0x9c:	/* pushf */
++		int3_emulate_push(regs, regs->flags);
++		break;
++	case 0x9d:	/* popf */
++		regs->flags = int3_emulate_pop(regs);
++		break;
++	}
++	regs->ip = regs->ip - INT3_INSN_SIZE + p->ainsn.size;
++}
++NOKPROBE_SYMBOL(kprobe_emulate_ifmodifiers);
++
++static void kprobe_emulate_ret(struct kprobe *p, struct pt_regs *regs)
++{
++	int3_emulate_ret(regs);
++}
++NOKPROBE_SYMBOL(kprobe_emulate_ret);
++
++static void kprobe_emulate_call(struct kprobe *p, struct pt_regs *regs)
++{
++	unsigned long func = regs->ip - INT3_INSN_SIZE + p->ainsn.size;
++
++	func += p->ainsn.rel32;
++	int3_emulate_call(regs, func);
++}
++NOKPROBE_SYMBOL(kprobe_emulate_call);
++
++static nokprobe_inline
++void __kprobe_emulate_jmp(struct kprobe *p, struct pt_regs *regs, bool cond)
++{
++	unsigned long ip = regs->ip - INT3_INSN_SIZE + p->ainsn.size;
++
++	if (cond)
++		ip += p->ainsn.rel32;
++	int3_emulate_jmp(regs, ip);
++}
++
++static void kprobe_emulate_jmp(struct kprobe *p, struct pt_regs *regs)
++{
++	__kprobe_emulate_jmp(p, regs, true);
++}
++NOKPROBE_SYMBOL(kprobe_emulate_jmp);
++
++static const unsigned long jcc_mask[6] = {
++	[0] = X86_EFLAGS_OF,
++	[1] = X86_EFLAGS_CF,
++	[2] = X86_EFLAGS_ZF,
++	[3] = X86_EFLAGS_CF | X86_EFLAGS_ZF,
++	[4] = X86_EFLAGS_SF,
++	[5] = X86_EFLAGS_PF,
++};
++
++static void kprobe_emulate_jcc(struct kprobe *p, struct pt_regs *regs)
++{
++	bool invert = p->ainsn.jcc.type & 1;
++	bool match;
++
++	if (p->ainsn.jcc.type < 0xc) {
++		match = regs->flags & jcc_mask[p->ainsn.jcc.type >> 1];
++	} else {
++		match = ((regs->flags & X86_EFLAGS_SF) >> X86_EFLAGS_SF_BIT) ^
++			((regs->flags & X86_EFLAGS_OF) >> X86_EFLAGS_OF_BIT);
++		if (p->ainsn.jcc.type >= 0xe)
++			match = match || (regs->flags & X86_EFLAGS_ZF);
++	}
++	__kprobe_emulate_jmp(p, regs, (match && !invert) || (!match && invert));
++}
++NOKPROBE_SYMBOL(kprobe_emulate_jcc);
++
++static void kprobe_emulate_loop(struct kprobe *p, struct pt_regs *regs)
++{
++	bool match;
++
++	if (p->ainsn.loop.type != 3) {	/* LOOP* */
++		if (p->ainsn.loop.asize == 32)
++			match = ((*(u32 *)&regs->cx)--) != 0;
++#ifdef CONFIG_X86_64
++		else if (p->ainsn.loop.asize == 64)
++			match = ((*(u64 *)&regs->cx)--) != 0;
++#endif
++		else
++			match = ((*(u16 *)&regs->cx)--) != 0;
++	} else {			/* JCXZ */
++		if (p->ainsn.loop.asize == 32)
++			match = *(u32 *)(&regs->cx) == 0;
++#ifdef CONFIG_X86_64
++		else if (p->ainsn.loop.asize == 64)
++			match = *(u64 *)(&regs->cx) == 0;
++#endif
++		else
++			match = *(u16 *)(&regs->cx) == 0;
++	}
++
++	if (p->ainsn.loop.type == 0)	/* LOOPNE */
++		match = match && !(regs->flags & X86_EFLAGS_ZF);
++	else if (p->ainsn.loop.type == 1)	/* LOOPE */
++		match = match && (regs->flags & X86_EFLAGS_ZF);
++
++	__kprobe_emulate_jmp(p, regs, match);
++}
++NOKPROBE_SYMBOL(kprobe_emulate_loop);
++
++static const int addrmode_regoffs[] = {
++	offsetof(struct pt_regs, ax),
++	offsetof(struct pt_regs, cx),
++	offsetof(struct pt_regs, dx),
++	offsetof(struct pt_regs, bx),
++	offsetof(struct pt_regs, sp),
++	offsetof(struct pt_regs, bp),
++	offsetof(struct pt_regs, si),
++	offsetof(struct pt_regs, di),
++#ifdef CONFIG_X86_64
++	offsetof(struct pt_regs, r8),
++	offsetof(struct pt_regs, r9),
++	offsetof(struct pt_regs, r10),
++	offsetof(struct pt_regs, r11),
++	offsetof(struct pt_regs, r12),
++	offsetof(struct pt_regs, r13),
++	offsetof(struct pt_regs, r14),
++	offsetof(struct pt_regs, r15),
++#endif
++};
++
++static void kprobe_emulate_call_indirect(struct kprobe *p, struct pt_regs *regs)
++{
++	unsigned long offs = addrmode_regoffs[p->ainsn.indirect.reg];
++
++	int3_emulate_call(regs, regs_get_register(regs, offs));
++}
++NOKPROBE_SYMBOL(kprobe_emulate_call_indirect);
++
++static void kprobe_emulate_jmp_indirect(struct kprobe *p, struct pt_regs *regs)
++{
++	unsigned long offs = addrmode_regoffs[p->ainsn.indirect.reg];
++
++	int3_emulate_jmp(regs, regs_get_register(regs, offs));
++}
++NOKPROBE_SYMBOL(kprobe_emulate_jmp_indirect);
++
++static int prepare_emulation(struct kprobe *p, struct insn *insn)
++{
++	insn_byte_t opcode = insn->opcode.bytes[0];
++
++	switch (opcode) {
++	case 0xfa:		/* cli */
++	case 0xfb:		/* sti */
++	case 0x9c:		/* pushfl */
++	case 0x9d:		/* popf/popfd */
++		/*
++		 * IF modifiers must be emulated since it will enable interrupt while
++		 * int3 single stepping.
++		 */
++		p->ainsn.emulate_op = kprobe_emulate_ifmodifiers;
++		p->ainsn.opcode = opcode;
++		break;
++	case 0xc2:	/* ret/lret */
++	case 0xc3:
++	case 0xca:
++	case 0xcb:
++		p->ainsn.emulate_op = kprobe_emulate_ret;
++		break;
++	case 0x9a:	/* far call absolute -- segment is not supported */
++	case 0xea:	/* far jmp absolute -- segment is not supported */
++	case 0xcc:	/* int3 */
++	case 0xcf:	/* iret -- in-kernel IRET is not supported */
++		return -EOPNOTSUPP;
++		break;
++	case 0xe8:	/* near call relative */
++		p->ainsn.emulate_op = kprobe_emulate_call;
++		if (insn->immediate.nbytes == 2)
++			p->ainsn.rel32 = *(s16 *)&insn->immediate.value;
++		else
++			p->ainsn.rel32 = *(s32 *)&insn->immediate.value;
++		break;
++	case 0xeb:	/* short jump relative */
++	case 0xe9:	/* near jump relative */
++		p->ainsn.emulate_op = kprobe_emulate_jmp;
++		if (insn->immediate.nbytes == 1)
++			p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
++		else if (insn->immediate.nbytes == 2)
++			p->ainsn.rel32 = *(s16 *)&insn->immediate.value;
++		else
++			p->ainsn.rel32 = *(s32 *)&insn->immediate.value;
++		break;
++	case 0x70 ... 0x7f:
++		/* 1 byte conditional jump */
++		p->ainsn.emulate_op = kprobe_emulate_jcc;
++		p->ainsn.jcc.type = opcode & 0xf;
++		p->ainsn.rel32 = *(char *)insn->immediate.bytes;
++		break;
++	case 0x0f:
++		opcode = insn->opcode.bytes[1];
++		if ((opcode & 0xf0) == 0x80) {
++			/* 2 bytes Conditional Jump */
++			p->ainsn.emulate_op = kprobe_emulate_jcc;
++			p->ainsn.jcc.type = opcode & 0xf;
++			if (insn->immediate.nbytes == 2)
++				p->ainsn.rel32 = *(s16 *)&insn->immediate.value;
++			else
++				p->ainsn.rel32 = *(s32 *)&insn->immediate.value;
++		} else if (opcode == 0x01 &&
++			   X86_MODRM_REG(insn->modrm.bytes[0]) == 0 &&
++			   X86_MODRM_MOD(insn->modrm.bytes[0]) == 3) {
++			/* VM extensions - not supported */
++			return -EOPNOTSUPP;
++		}
++		break;
++	case 0xe0:	/* Loop NZ */
++	case 0xe1:	/* Loop */
++	case 0xe2:	/* Loop */
++	case 0xe3:	/* J*CXZ */
++		p->ainsn.emulate_op = kprobe_emulate_loop;
++		p->ainsn.loop.type = opcode & 0x3;
++		p->ainsn.loop.asize = insn->addr_bytes * 8;
++		p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
++		break;
++	case 0xff:
++		/*
++		 * Since the 0xff is an extended group opcode, the instruction
++		 * is determined by the MOD/RM byte.
++		 */
++		opcode = insn->modrm.bytes[0];
++		if ((opcode & 0x30) == 0x10) {
++			if ((opcode & 0x8) == 0x8)
++				return -EOPNOTSUPP;	/* far call */
++			/* call absolute, indirect */
++			p->ainsn.emulate_op = kprobe_emulate_call_indirect;
++		} else if ((opcode & 0x30) == 0x20) {
++			if ((opcode & 0x8) == 0x8)
++				return -EOPNOTSUPP;	/* far jmp */
++			/* jmp near absolute indirect */
++			p->ainsn.emulate_op = kprobe_emulate_jmp_indirect;
++		} else
++			break;
++
++		if (insn->addr_bytes != sizeof(unsigned long))
++			return -EOPNOTSUPP;	/* Don't support differnt size */
++		if (X86_MODRM_MOD(opcode) != 3)
++			return -EOPNOTSUPP;	/* TODO: support memory addressing */
++
++		p->ainsn.indirect.reg = X86_MODRM_RM(opcode);
++#ifdef CONFIG_X86_64
++		if (X86_REX_B(insn->rex_prefix.value))
++			p->ainsn.indirect.reg += 8;
++#endif
++		break;
++	default:
++		break;
++	}
++	p->ainsn.size = insn->length;
++
++	return 0;
++}
++
+ static int arch_copy_kprobe(struct kprobe *p)
+ {
+ 	struct insn insn;
+ 	kprobe_opcode_t buf[MAX_INSN_SIZE];
+-	int len;
++	int ret, len;
+ 
+ 	/* Copy an instruction with recovering if other optprobe modifies it.*/
+ 	len = __copy_instruction(buf, p->addr, p->ainsn.insn, &insn);
+ 	if (!len)
+ 		return -EINVAL;
+ 
+-	/*
+-	 * __copy_instruction can modify the displacement of the instruction,
+-	 * but it doesn't affect boostable check.
+-	 */
+-	len = prepare_boost(buf, p, &insn);
++	/* Analyze the opcode and setup emulate functions */
++	ret = prepare_emulation(p, &insn);
++	if (ret < 0)
++		return ret;
+ 
+-	/* Check whether the instruction modifies Interrupt Flag or not */
+-	p->ainsn.if_modifier = is_IF_modifier(buf);
++	/* Add int3 for single-step or booster jmp */
++	len = prepare_singlestep(buf, p, &insn);
++	if (len < 0)
++		return len;
+ 
+ 	/* Also, displacement change doesn't affect the first byte */
+ 	p->opcode = buf[0];
+@@ -507,6 +738,9 @@ int arch_prepare_kprobe(struct kprobe *p)
+ 
+ 	if (!can_probe((unsigned long)p->addr))
+ 		return -EILSEQ;
++
++	memset(&p->ainsn, 0, sizeof(p->ainsn));
++
+ 	/* insn: must be on special executable page on x86. */
+ 	p->ainsn.insn = get_insn_slot();
+ 	if (!p->ainsn.insn)
+@@ -574,29 +808,7 @@ set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
+ {
+ 	__this_cpu_write(current_kprobe, p);
+ 	kcb->kprobe_saved_flags = kcb->kprobe_old_flags
+-		= (regs->flags & (X86_EFLAGS_TF | X86_EFLAGS_IF));
+-	if (p->ainsn.if_modifier)
+-		kcb->kprobe_saved_flags &= ~X86_EFLAGS_IF;
+-}
+-
+-static nokprobe_inline void clear_btf(void)
+-{
+-	if (test_thread_flag(TIF_BLOCKSTEP)) {
+-		unsigned long debugctl = get_debugctlmsr();
+-
+-		debugctl &= ~DEBUGCTLMSR_BTF;
+-		update_debugctlmsr(debugctl);
+-	}
+-}
+-
+-static nokprobe_inline void restore_btf(void)
+-{
+-	if (test_thread_flag(TIF_BLOCKSTEP)) {
+-		unsigned long debugctl = get_debugctlmsr();
+-
+-		debugctl |= DEBUGCTLMSR_BTF;
+-		update_debugctlmsr(debugctl);
+-	}
++		= (regs->flags & X86_EFLAGS_IF);
+ }
+ 
+ void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
+@@ -611,6 +823,26 @@ void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
+ }
+ NOKPROBE_SYMBOL(arch_prepare_kretprobe);
+ 
++static void kprobe_post_process(struct kprobe *cur, struct pt_regs *regs,
++			       struct kprobe_ctlblk *kcb)
++{
++	/* Restore back the original saved kprobes variables and continue. */
++	if (kcb->kprobe_status == KPROBE_REENTER) {
++		/* This will restore both kcb and current_kprobe */
++		restore_previous_kprobe(kcb);
++	} else {
++		/*
++		 * Always update the kcb status because
++		 * reset_curent_kprobe() doesn't update kcb.
++		 */
++		kcb->kprobe_status = KPROBE_HIT_SSDONE;
++		if (cur->post_handler)
++			cur->post_handler(cur, regs, 0);
++		reset_current_kprobe();
++	}
++}
++NOKPROBE_SYMBOL(kprobe_post_process);
++
+ static void setup_singlestep(struct kprobe *p, struct pt_regs *regs,
+ 			     struct kprobe_ctlblk *kcb, int reenter)
+ {
+@@ -618,7 +850,7 @@ static void setup_singlestep(struct kprobe *p, struct pt_regs *regs,
+ 		return;
+ 
+ #if !defined(CONFIG_PREEMPTION)
+-	if (p->ainsn.boostable && !p->post_handler) {
++	if (p->ainsn.boostable) {
+ 		/* Boost up -- we can execute copied instructions directly */
+ 		if (!reenter)
+ 			reset_current_kprobe();
+@@ -637,18 +869,50 @@ static void setup_singlestep(struct kprobe *p, struct pt_regs *regs,
+ 		kcb->kprobe_status = KPROBE_REENTER;
+ 	} else
+ 		kcb->kprobe_status = KPROBE_HIT_SS;
+-	/* Prepare real single stepping */
+-	clear_btf();
+-	regs->flags |= X86_EFLAGS_TF;
++
++	if (p->ainsn.emulate_op) {
++		p->ainsn.emulate_op(p, regs);
++		kprobe_post_process(p, regs, kcb);
++		return;
++	}
++
++	/* Disable interrupt, and set ip register on trampoline */
+ 	regs->flags &= ~X86_EFLAGS_IF;
+-	/* single step inline if the instruction is an int3 */
+-	if (p->opcode == INT3_INSN_OPCODE)
+-		regs->ip = (unsigned long)p->addr;
+-	else
+-		regs->ip = (unsigned long)p->ainsn.insn;
++	regs->ip = (unsigned long)p->ainsn.insn;
+ }
+ NOKPROBE_SYMBOL(setup_singlestep);
+ 
++/*
++ * Called after single-stepping.  p->addr is the address of the
++ * instruction whose first byte has been replaced by the "int3"
++ * instruction.  To avoid the SMP problems that can occur when we
++ * temporarily put back the original opcode to single-step, we
++ * single-stepped a copy of the instruction.  The address of this
++ * copy is p->ainsn.insn. We also doesn't use trap, but "int3" again
++ * right after the copied instruction.
++ * Different from the trap single-step, "int3" single-step can not
++ * handle the instruction which changes the ip register, e.g. jmp,
++ * call, conditional jmp, and the instructions which changes the IF
++ * flags because interrupt must be disabled around the single-stepping.
++ * Such instructions are software emulated, but others are single-stepped
++ * using "int3".
++ *
++ * When the 2nd "int3" handled, the regs->ip and regs->flags needs to
++ * be adjusted, so that we can resume execution on correct code.
++ */
++static void resume_singlestep(struct kprobe *p, struct pt_regs *regs,
++			      struct kprobe_ctlblk *kcb)
++{
++	unsigned long copy_ip = (unsigned long)p->ainsn.insn;
++	unsigned long orig_ip = (unsigned long)p->addr;
++
++	/* Restore saved interrupt flag and ip register */
++	regs->flags |= kcb->kprobe_saved_flags;
++	/* Note that regs->ip is executed int3 so must be a step back */
++	regs->ip += (orig_ip - copy_ip) - INT3_INSN_SIZE;
++}
++NOKPROBE_SYMBOL(resume_singlestep);
++
+ /*
+  * We have reentered the kprobe_handler(), since another probe was hit while
+  * within the handler. We save the original kprobes variables and just single
+@@ -684,6 +948,12 @@ static int reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
+ }
+ NOKPROBE_SYMBOL(reenter_kprobe);
+ 
++static nokprobe_inline int kprobe_is_ss(struct kprobe_ctlblk *kcb)
++{
++	return (kcb->kprobe_status == KPROBE_HIT_SS ||
++		kcb->kprobe_status == KPROBE_REENTER);
++}
++
+ /*
+  * Interrupts are disabled on entry as trap3 is an interrupt gate and they
+  * remain disabled throughout this function.
+@@ -728,7 +998,18 @@ int kprobe_int3_handler(struct pt_regs *regs)
+ 				reset_current_kprobe();
+ 			return 1;
+ 		}
+-	} else if (*addr != INT3_INSN_OPCODE) {
++	} else if (kprobe_is_ss(kcb)) {
++		p = kprobe_running();
++		if ((unsigned long)p->ainsn.insn < regs->ip &&
++		    (unsigned long)p->ainsn.insn + MAX_INSN_SIZE > regs->ip) {
++			/* Most provably this is the second int3 for singlestep */
++			resume_singlestep(p, regs, kcb);
++			kprobe_post_process(p, regs, kcb);
++			return 1;
++		}
++	}
++
++	if (*addr != INT3_INSN_OPCODE) {
+ 		/*
+ 		 * The breakpoint instruction was removed right
+ 		 * after we hit it.  Another cpu has removed
+@@ -801,135 +1082,6 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
+ }
+ NOKPROBE_SYMBOL(trampoline_handler);
+ 
+-/*
+- * Called after single-stepping.  p->addr is the address of the
+- * instruction whose first byte has been replaced by the "int 3"
+- * instruction.  To avoid the SMP problems that can occur when we
+- * temporarily put back the original opcode to single-step, we
+- * single-stepped a copy of the instruction.  The address of this
+- * copy is p->ainsn.insn.
+- *
+- * This function prepares to return from the post-single-step
+- * interrupt.  We have to fix up the stack as follows:
+- *
+- * 0) Except in the case of absolute or indirect jump or call instructions,
+- * the new ip is relative to the copied instruction.  We need to make
+- * it relative to the original instruction.
+- *
+- * 1) If the single-stepped instruction was pushfl, then the TF and IF
+- * flags are set in the just-pushed flags, and may need to be cleared.
+- *
+- * 2) If the single-stepped instruction was a call, the return address
+- * that is atop the stack is the address following the copied instruction.
+- * We need to make it the address following the original instruction.
+- *
+- * If this is the first time we've single-stepped the instruction at
+- * this probepoint, and the instruction is boostable, boost it: add a
+- * jump instruction after the copied instruction, that jumps to the next
+- * instruction after the probepoint.
+- */
+-static void resume_execution(struct kprobe *p, struct pt_regs *regs,
+-			     struct kprobe_ctlblk *kcb)
+-{
+-	unsigned long *tos = stack_addr(regs);
+-	unsigned long copy_ip = (unsigned long)p->ainsn.insn;
+-	unsigned long orig_ip = (unsigned long)p->addr;
+-	kprobe_opcode_t *insn = p->ainsn.insn;
+-
+-	/* Skip prefixes */
+-	insn = skip_prefixes(insn);
+-
+-	regs->flags &= ~X86_EFLAGS_TF;
+-	switch (*insn) {
+-	case 0x9c:	/* pushfl */
+-		*tos &= ~(X86_EFLAGS_TF | X86_EFLAGS_IF);
+-		*tos |= kcb->kprobe_old_flags;
+-		break;
+-	case 0xc2:	/* iret/ret/lret */
+-	case 0xc3:
+-	case 0xca:
+-	case 0xcb:
+-	case 0xcf:
+-	case 0xea:	/* jmp absolute -- ip is correct */
+-		/* ip is already adjusted, no more changes required */
+-		p->ainsn.boostable = true;
+-		goto no_change;
+-	case 0xe8:	/* call relative - Fix return addr */
+-		*tos = orig_ip + (*tos - copy_ip);
+-		break;
+-#ifdef CONFIG_X86_32
+-	case 0x9a:	/* call absolute -- same as call absolute, indirect */
+-		*tos = orig_ip + (*tos - copy_ip);
+-		goto no_change;
+-#endif
+-	case 0xff:
+-		if ((insn[1] & 0x30) == 0x10) {
+-			/*
+-			 * call absolute, indirect
+-			 * Fix return addr; ip is correct.
+-			 * But this is not boostable
+-			 */
+-			*tos = orig_ip + (*tos - copy_ip);
+-			goto no_change;
+-		} else if (((insn[1] & 0x31) == 0x20) ||
+-			   ((insn[1] & 0x31) == 0x21)) {
+-			/*
+-			 * jmp near and far, absolute indirect
+-			 * ip is correct. And this is boostable
+-			 */
+-			p->ainsn.boostable = true;
+-			goto no_change;
+-		}
+-	default:
+-		break;
+-	}
+-
+-	regs->ip += orig_ip - copy_ip;
+-
+-no_change:
+-	restore_btf();
+-}
+-NOKPROBE_SYMBOL(resume_execution);
+-
+-/*
+- * Interrupts are disabled on entry as trap1 is an interrupt gate and they
+- * remain disabled throughout this function.
+- */
+-int kprobe_debug_handler(struct pt_regs *regs)
+-{
+-	struct kprobe *cur = kprobe_running();
+-	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+-
+-	if (!cur)
+-		return 0;
+-
+-	resume_execution(cur, regs, kcb);
+-	regs->flags |= kcb->kprobe_saved_flags;
+-
+-	if ((kcb->kprobe_status != KPROBE_REENTER) && cur->post_handler) {
+-		kcb->kprobe_status = KPROBE_HIT_SSDONE;
+-		cur->post_handler(cur, regs, 0);
+-	}
+-
+-	/* Restore back the original saved kprobes variables and continue. */
+-	if (kcb->kprobe_status == KPROBE_REENTER) {
+-		restore_previous_kprobe(kcb);
+-		goto out;
+-	}
+-	reset_current_kprobe();
+-out:
+-	/*
+-	 * if somebody else is singlestepping across a probe point, flags
+-	 * will have TF set, in which case, continue the remaining processing
+-	 * of do_debug, as if this is not a probe hit.
+-	 */
+-	if (regs->flags & X86_EFLAGS_TF)
+-		return 0;
+-
+-	return 1;
+-}
+-NOKPROBE_SYMBOL(kprobe_debug_handler);
+-
+ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
+ {
+ 	struct kprobe *cur = kprobe_running();
+@@ -947,20 +1099,9 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
+ 		 * normal page fault.
+ 		 */
+ 		regs->ip = (unsigned long)cur->addr;
+-		/*
+-		 * Trap flag (TF) has been set here because this fault
+-		 * happened where the single stepping will be done.
+-		 * So clear it by resetting the current kprobe:
+-		 */
+-		regs->flags &= ~X86_EFLAGS_TF;
+-		/*
+-		 * Since the single step (trap) has been cancelled,
+-		 * we need to restore BTF here.
+-		 */
+-		restore_btf();
+ 
+ 		/*
+-		 * If the TF flag was set before the kprobe hit,
++		 * If the IF flag was set before the kprobe hit,
+ 		 * don't touch it:
+ 		 */
+ 		regs->flags |= kcb->kprobe_old_flags;
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 3780c728345c3..28f5cc0a9decb 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -198,6 +198,8 @@ DEFINE_IDTENTRY(exc_divide_error)
+ {
+ 	do_error_trap(regs, 0, "divide error", X86_TRAP_DE, SIGFPE,
+ 		      FPE_INTDIV, error_get_trap_addr(regs));
++
++	amd_clear_divider();
+ }
+ 
+ DEFINE_IDTENTRY(exc_overflow)
+@@ -917,9 +919,6 @@ static __always_inline void exc_debug_kernel(struct pt_regs *regs,
+ 	if ((dr6 & DR_STEP) && is_sysenter_singlestep(regs))
+ 		dr6 &= ~DR_STEP;
+ 
+-	if (kprobe_debug_handler(regs))
+-		goto out;
+-
+ 	/*
+ 	 * The kernel doesn't use INT1
+ 	 */
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 9aedc7b06da7a..2445c61038954 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -135,8 +135,7 @@ module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
+ #define KVM_VM_CR0_ALWAYS_OFF (X86_CR0_NW | X86_CR0_CD)
+ #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST X86_CR0_NE
+ #define KVM_VM_CR0_ALWAYS_ON				\
+-	(KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST | 	\
+-	 X86_CR0_WP | X86_CR0_PG | X86_CR0_PE)
++	(KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST | X86_CR0_PG | X86_CR0_PE)
+ 
+ #define KVM_VM_CR4_ALWAYS_ON_UNRESTRICTED_GUEST X86_CR4_VMXE
+ #define KVM_PMODE_VM_CR4_ALWAYS_ON (X86_CR4_PAE | X86_CR4_VMXE)
+@@ -1520,6 +1519,11 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 	unsigned long old_rflags;
+ 
++	/*
++	 * Unlike CR0 and CR4, RFLAGS handling requires checking if the vCPU
++	 * is an unrestricted guest in order to mark L2 as needing emulation
++	 * if L1 runs L2 as a restricted guest.
++	 */
+ 	if (is_unrestricted_guest(vcpu)) {
+ 		kvm_register_mark_available(vcpu, VCPU_EXREG_RFLAGS);
+ 		vmx->rflags = rflags;
+@@ -3064,42 +3068,22 @@ void ept_save_pdptrs(struct kvm_vcpu *vcpu)
+ 	kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
+ }
+ 
+-static void ept_update_paging_mode_cr0(unsigned long *hw_cr0,
+-					unsigned long cr0,
+-					struct kvm_vcpu *vcpu)
+-{
+-	struct vcpu_vmx *vmx = to_vmx(vcpu);
+-
+-	if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
+-		vmx_cache_reg(vcpu, VCPU_EXREG_CR3);
+-	if (!(cr0 & X86_CR0_PG)) {
+-		/* From paging/starting to nonpaging */
+-		exec_controls_setbit(vmx, CPU_BASED_CR3_LOAD_EXITING |
+-					  CPU_BASED_CR3_STORE_EXITING);
+-		vcpu->arch.cr0 = cr0;
+-		vmx_set_cr4(vcpu, kvm_read_cr4(vcpu));
+-	} else if (!is_paging(vcpu)) {
+-		/* From nonpaging to paging */
+-		exec_controls_clearbit(vmx, CPU_BASED_CR3_LOAD_EXITING |
+-					    CPU_BASED_CR3_STORE_EXITING);
+-		vcpu->arch.cr0 = cr0;
+-		vmx_set_cr4(vcpu, kvm_read_cr4(vcpu));
+-	}
+-
+-	if (!(cr0 & X86_CR0_WP))
+-		*hw_cr0 &= ~X86_CR0_WP;
+-}
++#define CR3_EXITING_BITS (CPU_BASED_CR3_LOAD_EXITING | \
++			  CPU_BASED_CR3_STORE_EXITING)
+ 
+ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 	unsigned long hw_cr0;
++	u32 tmp;
+ 
+ 	hw_cr0 = (cr0 & ~KVM_VM_CR0_ALWAYS_OFF);
+-	if (is_unrestricted_guest(vcpu))
++	if (enable_unrestricted_guest)
+ 		hw_cr0 |= KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST;
+ 	else {
+ 		hw_cr0 |= KVM_VM_CR0_ALWAYS_ON;
++		if (!enable_ept)
++			hw_cr0 |= X86_CR0_WP;
+ 
+ 		if (vmx->rmode.vm86_active && (cr0 & X86_CR0_PE))
+ 			enter_pmode(vcpu);
+@@ -3117,8 +3101,47 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+ 	}
+ #endif
+ 
+-	if (enable_ept && !is_unrestricted_guest(vcpu))
+-		ept_update_paging_mode_cr0(&hw_cr0, cr0, vcpu);
++	if (enable_ept && !enable_unrestricted_guest) {
++		/*
++		 * Ensure KVM has an up-to-date snapshot of the guest's CR3.  If
++		 * the below code _enables_ CR3 exiting, vmx_cache_reg() will
++		 * (correctly) stop reading vmcs.GUEST_CR3 because it thinks
++		 * KVM's CR3 is installed.
++		 */
++		if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
++			vmx_cache_reg(vcpu, VCPU_EXREG_CR3);
++
++		/*
++		 * When running with EPT but not unrestricted guest, KVM must
++		 * intercept CR3 accesses when paging is _disabled_.  This is
++		 * necessary because restricted guests can't actually run with
++		 * paging disabled, and so KVM stuffs its own CR3 in order to
++		 * run the guest when identity mapped page tables.
++		 *
++		 * Do _NOT_ check the old CR0.PG, e.g. to optimize away the
++		 * update, it may be stale with respect to CR3 interception,
++		 * e.g. after nested VM-Enter.
++		 *
++		 * Lastly, honor L1's desires, i.e. intercept CR3 loads and/or
++		 * stores to forward them to L1, even if KVM does not need to
++		 * intercept them to preserve its identity mapped page tables.
++		 */
++		if (!(cr0 & X86_CR0_PG)) {
++			exec_controls_setbit(vmx, CR3_EXITING_BITS);
++		} else if (!is_guest_mode(vcpu)) {
++			exec_controls_clearbit(vmx, CR3_EXITING_BITS);
++		} else {
++			tmp = exec_controls_get(vmx);
++			tmp &= ~CR3_EXITING_BITS;
++			tmp |= get_vmcs12(vcpu)->cpu_based_vm_exec_control & CR3_EXITING_BITS;
++			exec_controls_set(vmx, tmp);
++		}
++
++		if (!is_paging(vcpu) != !(cr0 & X86_CR0_PG)) {
++			vcpu->arch.cr0 = cr0;
++			vmx_set_cr4(vcpu, kvm_read_cr4(vcpu));
++		}
++	}
+ 
+ 	vmcs_writel(CR0_READ_SHADOW, cr0);
+ 	vmcs_writel(GUEST_CR0, hw_cr0);
+@@ -3213,7 +3236,7 @@ void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ 	unsigned long hw_cr4;
+ 
+ 	hw_cr4 = (cr4_read_shadow() & X86_CR4_MCE) | (cr4 & ~X86_CR4_MCE);
+-	if (is_unrestricted_guest(vcpu))
++	if (enable_unrestricted_guest)
+ 		hw_cr4 |= KVM_VM_CR4_ALWAYS_ON_UNRESTRICTED_GUEST;
+ 	else if (vmx->rmode.vm86_active)
+ 		hw_cr4 |= KVM_RMODE_VM_CR4_ALWAYS_ON;
+@@ -3233,7 +3256,7 @@ void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ 	vcpu->arch.cr4 = cr4;
+ 	kvm_register_mark_available(vcpu, VCPU_EXREG_CR4);
+ 
+-	if (!is_unrestricted_guest(vcpu)) {
++	if (!enable_unrestricted_guest) {
+ 		if (enable_ept) {
+ 			if (!is_paging(vcpu)) {
+ 				hw_cr4 &= ~X86_CR4_PAE;
+diff --git a/drivers/acpi/processor_perflib.c b/drivers/acpi/processor_perflib.c
+index b04a68950ff14..fc42d649c7e4f 100644
+--- a/drivers/acpi/processor_perflib.c
++++ b/drivers/acpi/processor_perflib.c
+@@ -56,6 +56,8 @@ static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
+ {
+ 	acpi_status status = 0;
+ 	unsigned long long ppc = 0;
++	s32 qos_value;
++	int index;
+ 	int ret;
+ 
+ 	if (!pr)
+@@ -75,17 +77,30 @@ static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
+ 		return -ENODEV;
+ 	}
+ 
++	index = ppc;
++
++	if (pr->performance_platform_limit == index ||
++	    ppc >= pr->performance->state_count)
++		return 0;
++
+ 	pr_debug("CPU %d: _PPC is %d - frequency %s limited\n", pr->id,
+-		       (int)ppc, ppc ? "" : "not");
++		 index, index ? "is" : "is not");
+ 
+-	pr->performance_platform_limit = (int)ppc;
++	pr->performance_platform_limit = index;
+ 
+-	if (ppc >= pr->performance->state_count ||
+-	    unlikely(!freq_qos_request_active(&pr->perflib_req)))
++	if (unlikely(!freq_qos_request_active(&pr->perflib_req)))
+ 		return 0;
+ 
+-	ret = freq_qos_update_request(&pr->perflib_req,
+-			pr->performance->states[ppc].core_frequency * 1000);
++	/*
++	 * If _PPC returns 0, it means that all of the available states can be
++	 * used ("no limit").
++	 */
++	if (index == 0)
++		qos_value = FREQ_QOS_MAX_DEFAULT_VALUE;
++	else
++		qos_value = pr->performance->states[index].core_frequency * 1000;
++
++	ret = freq_qos_update_request(&pr->perflib_req, qos_value);
+ 	if (ret < 0) {
+ 		pr_warn("Failed to update perflib freq constraint: CPU%d (%d)\n",
+ 			pr->id, ret);
+@@ -168,9 +183,16 @@ void acpi_processor_ppc_init(struct cpufreq_policy *policy)
+ 		if (!pr)
+ 			continue;
+ 
++		/*
++		 * Reset performance_platform_limit in case there is a stale
++		 * value in it, so as to make it match the "no limit" QoS value
++		 * below.
++		 */
++		pr->performance_platform_limit = 0;
++
+ 		ret = freq_qos_add_request(&policy->constraints,
+-					   &pr->perflib_req,
+-					   FREQ_QOS_MAX, INT_MAX);
++					   &pr->perflib_req, FREQ_QOS_MAX,
++					   FREQ_QOS_MAX_DEFAULT_VALUE);
+ 		if (ret < 0)
+ 			pr_err("Failed to add freq constraint for CPU%d (%d)\n",
+ 			       cpu, ret);
+diff --git a/drivers/ata/pata_ns87415.c b/drivers/ata/pata_ns87415.c
+index 1532b2e3c6720..9217385774400 100644
+--- a/drivers/ata/pata_ns87415.c
++++ b/drivers/ata/pata_ns87415.c
+@@ -260,7 +260,7 @@ static u8 ns87560_check_status(struct ata_port *ap)
+  *	LOCKING:
+  *	Inherited from caller.
+  */
+-void ns87560_tf_read(struct ata_port *ap, struct ata_taskfile *tf)
++static void ns87560_tf_read(struct ata_port *ap, struct ata_taskfile *tf)
+ {
+ 	struct ata_ioports *ioaddr = &ap->ioaddr;
+ 
+diff --git a/drivers/base/power/power.h b/drivers/base/power/power.h
+index 54292cdd7808b..922ed457db191 100644
+--- a/drivers/base/power/power.h
++++ b/drivers/base/power/power.h
+@@ -25,8 +25,11 @@ extern u64 pm_runtime_active_time(struct device *dev);
+ 
+ #define WAKE_IRQ_DEDICATED_ALLOCATED	BIT(0)
+ #define WAKE_IRQ_DEDICATED_MANAGED	BIT(1)
++#define WAKE_IRQ_DEDICATED_REVERSE	BIT(2)
+ #define WAKE_IRQ_DEDICATED_MASK		(WAKE_IRQ_DEDICATED_ALLOCATED | \
+-					 WAKE_IRQ_DEDICATED_MANAGED)
++					 WAKE_IRQ_DEDICATED_MANAGED | \
++					 WAKE_IRQ_DEDICATED_REVERSE)
++#define WAKE_IRQ_DEDICATED_ENABLED	BIT(3)
+ 
+ struct wake_irq {
+ 	struct device *dev;
+@@ -39,7 +42,8 @@ extern void dev_pm_arm_wake_irq(struct wake_irq *wirq);
+ extern void dev_pm_disarm_wake_irq(struct wake_irq *wirq);
+ extern void dev_pm_enable_wake_irq_check(struct device *dev,
+ 					 bool can_change_status);
+-extern void dev_pm_disable_wake_irq_check(struct device *dev);
++extern void dev_pm_disable_wake_irq_check(struct device *dev, bool cond_disable);
++extern void dev_pm_enable_wake_irq_complete(struct device *dev);
+ 
+ #ifdef CONFIG_PM_SLEEP
+ 
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 360094692d29e..fbbc3ed143f27 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -675,6 +675,8 @@ static int rpm_suspend(struct device *dev, int rpmflags)
+ 	if (retval)
+ 		goto fail;
+ 
++	dev_pm_enable_wake_irq_complete(dev);
++
+  no_callback:
+ 	__update_runtime_status(dev, RPM_SUSPENDED);
+ 	pm_runtime_deactivate_timer(dev);
+@@ -720,7 +722,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
+ 	return retval;
+ 
+  fail:
+-	dev_pm_disable_wake_irq_check(dev);
++	dev_pm_disable_wake_irq_check(dev, true);
+ 	__update_runtime_status(dev, RPM_ACTIVE);
+ 	dev->power.deferred_resume = false;
+ 	wake_up_all(&dev->power.wait_queue);
+@@ -903,7 +905,7 @@ static int rpm_resume(struct device *dev, int rpmflags)
+ 
+ 	callback = RPM_GET_CALLBACK(dev, runtime_resume);
+ 
+-	dev_pm_disable_wake_irq_check(dev);
++	dev_pm_disable_wake_irq_check(dev, false);
+ 	retval = rpm_callback(callback, dev);
+ 	if (retval) {
+ 		__update_runtime_status(dev, RPM_SUSPENDED);
+diff --git a/drivers/base/power/wakeirq.c b/drivers/base/power/wakeirq.c
+index 8e021082dba8c..aea690c64e394 100644
+--- a/drivers/base/power/wakeirq.c
++++ b/drivers/base/power/wakeirq.c
+@@ -145,24 +145,7 @@ static irqreturn_t handle_threaded_wake_irq(int irq, void *_wirq)
+ 	return IRQ_HANDLED;
+ }
+ 
+-/**
+- * dev_pm_set_dedicated_wake_irq - Request a dedicated wake-up interrupt
+- * @dev: Device entry
+- * @irq: Device wake-up interrupt
+- *
+- * Unless your hardware has separate wake-up interrupts in addition
+- * to the device IO interrupts, you don't need this.
+- *
+- * Sets up a threaded interrupt handler for a device that has
+- * a dedicated wake-up interrupt in addition to the device IO
+- * interrupt.
+- *
+- * The interrupt starts disabled, and needs to be managed for
+- * the device by the bus code or the device driver using
+- * dev_pm_enable_wake_irq() and dev_pm_disable_wake_irq()
+- * functions.
+- */
+-int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
++static int __dev_pm_set_dedicated_wake_irq(struct device *dev, int irq, unsigned int flag)
+ {
+ 	struct wake_irq *wirq;
+ 	int err;
+@@ -200,7 +183,7 @@ int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
+ 	if (err)
+ 		goto err_free_irq;
+ 
+-	wirq->status = WAKE_IRQ_DEDICATED_ALLOCATED;
++	wirq->status = WAKE_IRQ_DEDICATED_ALLOCATED | flag;
+ 
+ 	return err;
+ 
+@@ -213,8 +196,57 @@ err_free:
+ 
+ 	return err;
+ }
++
++
++/**
++ * dev_pm_set_dedicated_wake_irq - Request a dedicated wake-up interrupt
++ * @dev: Device entry
++ * @irq: Device wake-up interrupt
++ *
++ * Unless your hardware has separate wake-up interrupts in addition
++ * to the device IO interrupts, you don't need this.
++ *
++ * Sets up a threaded interrupt handler for a device that has
++ * a dedicated wake-up interrupt in addition to the device IO
++ * interrupt.
++ *
++ * The interrupt starts disabled, and needs to be managed for
++ * the device by the bus code or the device driver using
++ * dev_pm_enable_wake_irq*() and dev_pm_disable_wake_irq*()
++ * functions.
++ */
++int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
++{
++	return __dev_pm_set_dedicated_wake_irq(dev, irq, 0);
++}
+ EXPORT_SYMBOL_GPL(dev_pm_set_dedicated_wake_irq);
+ 
++/**
++ * dev_pm_set_dedicated_wake_irq_reverse - Request a dedicated wake-up interrupt
++ *                                         with reverse enable ordering
++ * @dev: Device entry
++ * @irq: Device wake-up interrupt
++ *
++ * Unless your hardware has separate wake-up interrupts in addition
++ * to the device IO interrupts, you don't need this.
++ *
++ * Sets up a threaded interrupt handler for a device that has a dedicated
++ * wake-up interrupt in addition to the device IO interrupt. It sets
++ * the status of WAKE_IRQ_DEDICATED_REVERSE to tell rpm_suspend()
++ * to enable dedicated wake-up interrupt after running the runtime suspend
++ * callback for @dev.
++ *
++ * The interrupt starts disabled, and needs to be managed for
++ * the device by the bus code or the device driver using
++ * dev_pm_enable_wake_irq*() and dev_pm_disable_wake_irq*()
++ * functions.
++ */
++int dev_pm_set_dedicated_wake_irq_reverse(struct device *dev, int irq)
++{
++	return __dev_pm_set_dedicated_wake_irq(dev, irq, WAKE_IRQ_DEDICATED_REVERSE);
++}
++EXPORT_SYMBOL_GPL(dev_pm_set_dedicated_wake_irq_reverse);
++
+ /**
+  * dev_pm_enable_wake_irq - Enable device wake-up interrupt
+  * @dev: Device
+@@ -285,25 +317,56 @@ void dev_pm_enable_wake_irq_check(struct device *dev,
+ 	return;
+ 
+ enable:
+-	enable_irq(wirq->irq);
++	if (!can_change_status || !(wirq->status & WAKE_IRQ_DEDICATED_REVERSE)) {
++		enable_irq(wirq->irq);
++		wirq->status |= WAKE_IRQ_DEDICATED_ENABLED;
++	}
+ }
+ 
+ /**
+  * dev_pm_disable_wake_irq_check - Checks and disables wake-up interrupt
+  * @dev: Device
++ * @cond_disable: if set, also check WAKE_IRQ_DEDICATED_REVERSE
+  *
+  * Disables wake-up interrupt conditionally based on status.
+  * Should be only called from rpm_suspend() and rpm_resume() path.
+  */
+-void dev_pm_disable_wake_irq_check(struct device *dev)
++void dev_pm_disable_wake_irq_check(struct device *dev, bool cond_disable)
+ {
+ 	struct wake_irq *wirq = dev->power.wakeirq;
+ 
+ 	if (!wirq || !(wirq->status & WAKE_IRQ_DEDICATED_MASK))
+ 		return;
+ 
+-	if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED)
++	if (cond_disable && (wirq->status & WAKE_IRQ_DEDICATED_REVERSE))
++		return;
++
++	if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED) {
++		wirq->status &= ~WAKE_IRQ_DEDICATED_ENABLED;
+ 		disable_irq_nosync(wirq->irq);
++	}
++}
++
++/**
++ * dev_pm_enable_wake_irq_complete - enable wake IRQ not enabled before
++ * @dev: Device using the wake IRQ
++ *
++ * Enable wake IRQ conditionally based on status, mainly used if want to
++ * enable wake IRQ after running ->runtime_suspend() which depends on
++ * WAKE_IRQ_DEDICATED_REVERSE.
++ *
++ * Should be only called from rpm_suspend() path.
++ */
++void dev_pm_enable_wake_irq_complete(struct device *dev)
++{
++	struct wake_irq *wirq = dev->power.wakeirq;
++
++	if (!wirq || !(wirq->status & WAKE_IRQ_DEDICATED_MASK))
++		return;
++
++	if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED &&
++	    wirq->status & WAKE_IRQ_DEDICATED_REVERSE)
++		enable_irq(wirq->irq);
+ }
+ 
+ /**
+@@ -320,7 +383,7 @@ void dev_pm_arm_wake_irq(struct wake_irq *wirq)
+ 
+ 	if (device_may_wakeup(wirq->dev)) {
+ 		if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED &&
+-		    !pm_runtime_status_suspended(wirq->dev))
++		    !(wirq->status & WAKE_IRQ_DEDICATED_ENABLED))
+ 			enable_irq(wirq->irq);
+ 
+ 		enable_irq_wake(wirq->irq);
+@@ -343,7 +406,7 @@ void dev_pm_disarm_wake_irq(struct wake_irq *wirq)
+ 		disable_irq_wake(wirq->irq);
+ 
+ 		if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED &&
+-		    !pm_runtime_status_suspended(wirq->dev))
++		    !(wirq->status & WAKE_IRQ_DEDICATED_ENABLED))
+ 			disable_irq_nosync(wirq->irq);
+ 	}
+ }
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index d86fbea54652a..7444cc2a6c86d 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -2109,7 +2109,8 @@ static int loop_add(struct loop_device **l, int i)
+ 	lo->tag_set.queue_depth = 128;
+ 	lo->tag_set.numa_node = NUMA_NO_NODE;
+ 	lo->tag_set.cmd_size = sizeof(struct loop_cmd);
+-	lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_STACKING;
++	lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_STACKING |
++		BLK_MQ_F_NO_SCHED;
+ 	lo->tag_set.driver_data = lo;
+ 
+ 	err = blk_mq_alloc_tag_set(&lo->tag_set);
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 365761055df3e..d7c440ac465f3 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -314,6 +314,7 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ 	int size = 0;
+ 	int status;
+ 	u32 expected;
++	int rc;
+ 
+ 	if (count < TPM_HEADER_SIZE) {
+ 		size = -EIO;
+@@ -333,8 +334,13 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ 		goto out;
+ 	}
+ 
+-	size += recv_data(chip, &buf[TPM_HEADER_SIZE],
+-			  expected - TPM_HEADER_SIZE);
++	rc = recv_data(chip, &buf[TPM_HEADER_SIZE],
++		       expected - TPM_HEADER_SIZE);
++	if (rc < 0) {
++		size = rc;
++		goto out;
++	}
++	size += rc;
+ 	if (size < expected) {
+ 		dev_err(&chip->dev, "Unable to read remainder of result\n");
+ 		size = -ETIME;
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 4b06b81d8bb0a..4359ed1d3b7e9 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -443,20 +443,6 @@ static void intel_pstate_init_acpi_perf_limits(struct cpufreq_policy *policy)
+ 			 (u32) cpu->acpi_perf_data.states[i].control);
+ 	}
+ 
+-	/*
+-	 * The _PSS table doesn't contain whole turbo frequency range.
+-	 * This just contains +1 MHZ above the max non turbo frequency,
+-	 * with control value corresponding to max turbo ratio. But
+-	 * when cpufreq set policy is called, it will call with this
+-	 * max frequency, which will cause a reduced performance as
+-	 * this driver uses real max turbo frequency as the max
+-	 * frequency. So correct this frequency in _PSS table to
+-	 * correct max turbo frequency based on the turbo state.
+-	 * Also need to convert to MHz as _PSS freq is in MHz.
+-	 */
+-	if (!global.turbo_disabled)
+-		cpu->acpi_perf_data.states[0].core_frequency =
+-					policy->cpuinfo.max_freq / 1000;
+ 	cpu->valid_pss_table = true;
+ 	pr_debug("_PPC limits will be enforced\n");
+ 
+diff --git a/drivers/gpio/gpio-tps68470.c b/drivers/gpio/gpio-tps68470.c
+index f7f5f770e0fbb..e19eb7c982a13 100644
+--- a/drivers/gpio/gpio-tps68470.c
++++ b/drivers/gpio/gpio-tps68470.c
+@@ -91,13 +91,13 @@ static int tps68470_gpio_output(struct gpio_chip *gc, unsigned int offset,
+ 	struct tps68470_gpio_data *tps68470_gpio = gpiochip_get_data(gc);
+ 	struct regmap *regmap = tps68470_gpio->tps68470_regmap;
+ 
++	/* Set the initial value */
++	tps68470_gpio_set(gc, offset, value);
++
+ 	/* rest are always outputs */
+ 	if (offset >= TPS68470_N_REGULAR_GPIO)
+ 		return 0;
+ 
+-	/* Set the initial value */
+-	tps68470_gpio_set(gc, offset, value);
+-
+ 	return regmap_update_bits(regmap, TPS68470_GPIO_CTL_REG_A(offset),
+ 				 TPS68470_GPIO_MODE_MASK,
+ 				 TPS68470_GPIO_MODE_OUT_CMOS);
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index 0fcba2bc26b8e..9ae0e60ecac30 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -81,7 +81,7 @@ static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit
+ 			 * since we've already mapped it once in
+ 			 * submit_reloc()
+ 			 */
+-			if (WARN_ON(!ptr))
++			if (WARN_ON(IS_ERR_OR_NULL(ptr)))
+ 				return;
+ 
+ 			for (i = 0; i < dwords; i++) {
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
+index 2fb58b7098e4b..3bd2065a9d30e 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
+@@ -200,7 +200,7 @@ static const struct a6xx_shader_block {
+ 	SHADER(A6XX_SP_LB_3_DATA, 0x800),
+ 	SHADER(A6XX_SP_LB_4_DATA, 0x800),
+ 	SHADER(A6XX_SP_LB_5_DATA, 0x200),
+-	SHADER(A6XX_SP_CB_BINDLESS_DATA, 0x2000),
++	SHADER(A6XX_SP_CB_BINDLESS_DATA, 0x800),
+ 	SHADER(A6XX_SP_CB_LEGACY_DATA, 0x280),
+ 	SHADER(A6XX_SP_UAV_DATA, 0x80),
+ 	SHADER(A6XX_SP_INST_TAG, 0x80),
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
+index cf4b9b5964c6c..cd6c3518ba021 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
+@@ -14,19 +14,6 @@
+ 
+ #define	DPU_PERF_DEFAULT_MAX_CORE_CLK_RATE	412500000
+ 
+-/**
+- * enum dpu_core_perf_data_bus_id - data bus identifier
+- * @DPU_CORE_PERF_DATA_BUS_ID_MNOC: DPU/MNOC data bus
+- * @DPU_CORE_PERF_DATA_BUS_ID_LLCC: MNOC/LLCC data bus
+- * @DPU_CORE_PERF_DATA_BUS_ID_EBI: LLCC/EBI data bus
+- */
+-enum dpu_core_perf_data_bus_id {
+-	DPU_CORE_PERF_DATA_BUS_ID_MNOC,
+-	DPU_CORE_PERF_DATA_BUS_ID_LLCC,
+-	DPU_CORE_PERF_DATA_BUS_ID_EBI,
+-	DPU_CORE_PERF_DATA_BUS_ID_MAX,
+-};
+-
+ /**
+  * struct dpu_core_perf_params - definition of performance parameters
+  * @max_per_pipe_ib: maximum instantaneous bandwidth request
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index f673292eec9db..8fe3be20af62f 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -115,7 +115,7 @@ static void ttm_bo_add_mem_to_lru(struct ttm_buffer_object *bo,
+ 	struct ttm_bo_device *bdev = bo->bdev;
+ 	struct ttm_resource_manager *man;
+ 
+-	if (!list_empty(&bo->lru))
++	if (!list_empty(&bo->lru) || bo->pin_count)
+ 		return;
+ 
+ 	if (mem->placement & TTM_PL_FLAG_NO_EVICT)
+@@ -165,7 +165,8 @@ void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo,
+ 	ttm_bo_del_from_lru(bo);
+ 	ttm_bo_add_mem_to_lru(bo, &bo->mem);
+ 
+-	if (bulk && !(bo->mem.placement & TTM_PL_FLAG_NO_EVICT)) {
++	if (bulk && !(bo->mem.placement & TTM_PL_FLAG_NO_EVICT) &&
++	    !bo->pin_count) {
+ 		switch (bo->mem.mem_type) {
+ 		case TTM_PL_TT:
+ 			ttm_bo_bulk_move_set_pos(&bulk->tt[bo->priority], bo);
+@@ -544,8 +545,9 @@ static void ttm_bo_release(struct kref *kref)
+ 		 * shrinkers, now that they are queued for
+ 		 * destruction.
+ 		 */
+-		if (bo->mem.placement & TTM_PL_FLAG_NO_EVICT) {
++		if (bo->mem.placement & TTM_PL_FLAG_NO_EVICT || bo->pin_count) {
+ 			bo->mem.placement &= ~TTM_PL_FLAG_NO_EVICT;
++			bo->pin_count = 0;
+ 			ttm_bo_del_from_lru(bo);
+ 			ttm_bo_add_mem_to_lru(bo, &bo->mem);
+ 		}
+@@ -670,6 +672,13 @@ static bool ttm_bo_evict_swapout_allowable(struct ttm_buffer_object *bo,
+ {
+ 	bool ret = false;
+ 
++	if (bo->pin_count) {
++		*locked = false;
++		if (busy)
++			*busy = false;
++		return false;
++	}
++
+ 	if (bo->base.resv == ctx->resv) {
+ 		dma_resv_assert_held(bo->base.resv);
+ 		if (ctx->flags & TTM_OPT_FLAG_ALLOW_RES_EVICT)
+@@ -1174,6 +1183,7 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
+ 	bo->moving = NULL;
+ 	bo->mem.placement = TTM_PL_FLAG_CACHED;
+ 	bo->acc_size = acc_size;
++	bo->pin_count = 0;
+ 	bo->sg = sg;
+ 	if (resv) {
+ 		bo->base.resv = resv;
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
+index fb2a25f8408fc..1968df9743fcb 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
+@@ -352,7 +352,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
+ 		return -ENOMEM;
+ 
+ 	fbo->base = *bo;
+-	fbo->base.mem.placement |= TTM_PL_FLAG_NO_EVICT;
+ 
+ 	ttm_bo_get(bo);
+ 	fbo->bo = bo;
+@@ -372,6 +371,7 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
+ 	kref_init(&fbo->base.kref);
+ 	fbo->base.destroy = &ttm_transfered_destroy;
+ 	fbo->base.acc_size = 0;
++	fbo->base.pin_count = 1;
+ 	if (bo->type != ttm_bo_type_sg)
+ 		fbo->base.base.resv = &fbo->base.base._resv;
+ 
+diff --git a/drivers/hwmon/nct7802.c b/drivers/hwmon/nct7802.c
+index 604af2f6103a3..88eddb8d61d37 100644
+--- a/drivers/hwmon/nct7802.c
++++ b/drivers/hwmon/nct7802.c
+@@ -708,7 +708,7 @@ static umode_t nct7802_temp_is_visible(struct kobject *kobj,
+ 	if (index >= 38 && index < 46 && !(reg & 0x01))		/* PECI 0 */
+ 		return 0;
+ 
+-	if (index >= 0x46 && (!(reg & 0x02)))			/* PECI 1 */
++	if (index >= 46 && !(reg & 0x02))			/* PECI 1 */
+ 		return 0;
+ 
+ 	return attr->mode;
+diff --git a/drivers/i2c/busses/i2c-ibm_iic.c b/drivers/i2c/busses/i2c-ibm_iic.c
+index 9f71daf6db64b..c073f5b8833a2 100644
+--- a/drivers/i2c/busses/i2c-ibm_iic.c
++++ b/drivers/i2c/busses/i2c-ibm_iic.c
+@@ -694,10 +694,8 @@ static int iic_probe(struct platform_device *ofdev)
+ 	int ret;
+ 
+ 	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+-	if (!dev) {
+-		dev_err(&ofdev->dev, "failed to allocate device data\n");
++	if (!dev)
+ 		return -ENOMEM;
+-	}
+ 
+ 	platform_set_drvdata(ofdev, dev);
+ 
+diff --git a/drivers/i2c/busses/i2c-nomadik.c b/drivers/i2c/busses/i2c-nomadik.c
+index a3363b20f168a..a06c4b76894a9 100644
+--- a/drivers/i2c/busses/i2c-nomadik.c
++++ b/drivers/i2c/busses/i2c-nomadik.c
+@@ -970,12 +970,10 @@ static int nmk_i2c_probe(struct amba_device *adev, const struct amba_id *id)
+ 	struct i2c_vendor_data *vendor = id->data;
+ 	u32 max_fifo_threshold = (vendor->fifodepth / 2) - 1;
+ 
+-	dev = devm_kzalloc(&adev->dev, sizeof(struct nmk_i2c_dev), GFP_KERNEL);
+-	if (!dev) {
+-		dev_err(&adev->dev, "cannot allocate memory\n");
+-		ret = -ENOMEM;
+-		goto err_no_mem;
+-	}
++	dev = devm_kzalloc(&adev->dev, sizeof(*dev), GFP_KERNEL);
++	if (!dev)
++		return -ENOMEM;
++
+ 	dev->vendor = vendor;
+ 	dev->adev = adev;
+ 	nmk_i2c_of_probe(np, dev);
+@@ -996,30 +994,21 @@ static int nmk_i2c_probe(struct amba_device *adev, const struct amba_id *id)
+ 
+ 	dev->virtbase = devm_ioremap(&adev->dev, adev->res.start,
+ 				resource_size(&adev->res));
+-	if (!dev->virtbase) {
+-		ret = -ENOMEM;
+-		goto err_no_mem;
+-	}
++	if (!dev->virtbase)
++		return -ENOMEM;
+ 
+ 	dev->irq = adev->irq[0];
+ 	ret = devm_request_irq(&adev->dev, dev->irq, i2c_irq_handler, 0,
+ 				DRIVER_NAME, dev);
+ 	if (ret) {
+ 		dev_err(&adev->dev, "cannot claim the irq %d\n", dev->irq);
+-		goto err_no_mem;
++		return ret;
+ 	}
+ 
+-	dev->clk = devm_clk_get(&adev->dev, NULL);
++	dev->clk = devm_clk_get_enabled(&adev->dev, NULL);
+ 	if (IS_ERR(dev->clk)) {
+-		dev_err(&adev->dev, "could not get i2c clock\n");
+-		ret = PTR_ERR(dev->clk);
+-		goto err_no_mem;
+-	}
+-
+-	ret = clk_prepare_enable(dev->clk);
+-	if (ret) {
+-		dev_err(&adev->dev, "can't prepare_enable clock\n");
+-		goto err_no_mem;
++		dev_err(&adev->dev, "could enable i2c clock\n");
++		return PTR_ERR(dev->clk);
+ 	}
+ 
+ 	init_hw(dev);
+@@ -1042,22 +1031,15 @@ static int nmk_i2c_probe(struct amba_device *adev, const struct amba_id *id)
+ 
+ 	ret = i2c_add_adapter(adap);
+ 	if (ret)
+-		goto err_no_adap;
++		return ret;
+ 
+ 	pm_runtime_put(&adev->dev);
+ 
+ 	return 0;
+-
+- err_no_adap:
+-	clk_disable_unprepare(dev->clk);
+- err_no_mem:
+-
+-	return ret;
+ }
+ 
+ static void nmk_i2c_remove(struct amba_device *adev)
+ {
+-	struct resource *res = &adev->res;
+ 	struct nmk_i2c_dev *dev = amba_get_drvdata(adev);
+ 
+ 	i2c_del_adapter(&dev->adap);
+@@ -1066,8 +1048,6 @@ static void nmk_i2c_remove(struct amba_device *adev)
+ 	clear_all_interrupts(dev);
+ 	/* disable the controller */
+ 	i2c_clr_bit(dev->virtbase + I2C_CR, I2C_CR_PE);
+-	clk_disable_unprepare(dev->clk);
+-	release_mem_region(res->start, resource_size(res));
+ }
+ 
+ static struct i2c_vendor_data vendor_stn8815 = {
+diff --git a/drivers/i2c/busses/i2c-sh7760.c b/drivers/i2c/busses/i2c-sh7760.c
+index 319d1fa617c88..051b904cb35f6 100644
+--- a/drivers/i2c/busses/i2c-sh7760.c
++++ b/drivers/i2c/busses/i2c-sh7760.c
+@@ -443,9 +443,8 @@ static int sh7760_i2c_probe(struct platform_device *pdev)
+ 		goto out0;
+ 	}
+ 
+-	id = kzalloc(sizeof(struct cami2c), GFP_KERNEL);
++	id = kzalloc(sizeof(*id), GFP_KERNEL);
+ 	if (!id) {
+-		dev_err(&pdev->dev, "no mem for private data\n");
+ 		ret = -ENOMEM;
+ 		goto out0;
+ 	}
+diff --git a/drivers/i2c/busses/i2c-tiny-usb.c b/drivers/i2c/busses/i2c-tiny-usb.c
+index 7279ca0eaa2d0..d1fa9ff5aeab4 100644
+--- a/drivers/i2c/busses/i2c-tiny-usb.c
++++ b/drivers/i2c/busses/i2c-tiny-usb.c
+@@ -226,10 +226,8 @@ static int i2c_tiny_usb_probe(struct usb_interface *interface,
+ 
+ 	/* allocate memory for our device state and initialize it */
+ 	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+-	if (dev == NULL) {
+-		dev_err(&interface->dev, "Out of memory\n");
++	if (!dev)
+ 		goto error;
+-	}
+ 
+ 	dev->usb_dev = usb_get_dev(interface_to_usbdev(interface));
+ 	dev->interface = interface;
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index 255194029e2d8..50b355e34445c 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -530,15 +530,15 @@ static int set_qp_rss(struct mlx4_ib_dev *dev, struct mlx4_ib_rss *rss_ctx,
+ 		return (-EOPNOTSUPP);
+ 	}
+ 
+-	if (ucmd->rx_hash_fields_mask & ~(MLX4_IB_RX_HASH_SRC_IPV4	|
+-					  MLX4_IB_RX_HASH_DST_IPV4	|
+-					  MLX4_IB_RX_HASH_SRC_IPV6	|
+-					  MLX4_IB_RX_HASH_DST_IPV6	|
+-					  MLX4_IB_RX_HASH_SRC_PORT_TCP	|
+-					  MLX4_IB_RX_HASH_DST_PORT_TCP	|
+-					  MLX4_IB_RX_HASH_SRC_PORT_UDP	|
+-					  MLX4_IB_RX_HASH_DST_PORT_UDP  |
+-					  MLX4_IB_RX_HASH_INNER)) {
++	if (ucmd->rx_hash_fields_mask & ~(u64)(MLX4_IB_RX_HASH_SRC_IPV4	|
++					       MLX4_IB_RX_HASH_DST_IPV4	|
++					       MLX4_IB_RX_HASH_SRC_IPV6	|
++					       MLX4_IB_RX_HASH_DST_IPV6	|
++					       MLX4_IB_RX_HASH_SRC_PORT_TCP |
++					       MLX4_IB_RX_HASH_DST_PORT_TCP |
++					       MLX4_IB_RX_HASH_SRC_PORT_UDP |
++					       MLX4_IB_RX_HASH_DST_PORT_UDP |
++					       MLX4_IB_RX_HASH_INNER)) {
+ 		pr_debug("RX Hash fields_mask has unsupported mask (0x%llx)\n",
+ 			 ucmd->rx_hash_fields_mask);
+ 		return (-EOPNOTSUPP);
+diff --git a/drivers/infiniband/hw/mthca/mthca_qp.c b/drivers/infiniband/hw/mthca/mthca_qp.c
+index 08a2a7afafd3d..3f57f7dfb822f 100644
+--- a/drivers/infiniband/hw/mthca/mthca_qp.c
++++ b/drivers/infiniband/hw/mthca/mthca_qp.c
+@@ -1390,7 +1390,7 @@ int mthca_alloc_sqp(struct mthca_dev *dev,
+ 	if (mthca_array_get(&dev->qp_table.qp, mqpn))
+ 		err = -EBUSY;
+ 	else
+-		mthca_array_set(&dev->qp_table.qp, mqpn, qp->sqp);
++		mthca_array_set(&dev->qp_table.qp, mqpn, qp);
+ 	spin_unlock_irq(&dev->qp_table.lock);
+ 
+ 	if (err)
+diff --git a/drivers/irqchip/irq-bcm6345-l1.c b/drivers/irqchip/irq-bcm6345-l1.c
+index 1bd0621c4ce2a..4827a11832478 100644
+--- a/drivers/irqchip/irq-bcm6345-l1.c
++++ b/drivers/irqchip/irq-bcm6345-l1.c
+@@ -82,6 +82,7 @@ struct bcm6345_l1_chip {
+ };
+ 
+ struct bcm6345_l1_cpu {
++	struct bcm6345_l1_chip	*intc;
+ 	void __iomem		*map_base;
+ 	unsigned int		parent_irq;
+ 	u32			enable_cache[];
+@@ -115,17 +116,11 @@ static inline unsigned int cpu_for_irq(struct bcm6345_l1_chip *intc,
+ 
+ static void bcm6345_l1_irq_handle(struct irq_desc *desc)
+ {
+-	struct bcm6345_l1_chip *intc = irq_desc_get_handler_data(desc);
+-	struct bcm6345_l1_cpu *cpu;
++	struct bcm6345_l1_cpu *cpu = irq_desc_get_handler_data(desc);
++	struct bcm6345_l1_chip *intc = cpu->intc;
+ 	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 	unsigned int idx;
+ 
+-#ifdef CONFIG_SMP
+-	cpu = intc->cpus[cpu_logical_map(smp_processor_id())];
+-#else
+-	cpu = intc->cpus[0];
+-#endif
+-
+ 	chained_irq_enter(chip, desc);
+ 
+ 	for (idx = 0; idx < intc->n_words; idx++) {
+@@ -257,6 +252,7 @@ static int __init bcm6345_l1_init_one(struct device_node *dn,
+ 	if (!cpu)
+ 		return -ENOMEM;
+ 
++	cpu->intc = intc;
+ 	cpu->map_base = ioremap(res.start, sz);
+ 	if (!cpu->map_base)
+ 		return -ENOMEM;
+@@ -272,7 +268,7 @@ static int __init bcm6345_l1_init_one(struct device_node *dn,
+ 		return -EINVAL;
+ 	}
+ 	irq_set_chained_handler_and_data(cpu->parent_irq,
+-						bcm6345_l1_irq_handle, intc);
++						bcm6345_l1_irq_handle, cpu);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 5ec091c64d47f..f1fa98e5ea13f 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -267,13 +267,23 @@ static void vpe_to_cpuid_unlock(struct its_vpe *vpe, unsigned long flags)
+ 	raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);
+ }
+ 
++static struct irq_chip its_vpe_irq_chip;
++
+ static int irq_to_cpuid_lock(struct irq_data *d, unsigned long *flags)
+ {
+-	struct its_vlpi_map *map = get_vlpi_map(d);
++	struct its_vpe *vpe = NULL;
+ 	int cpu;
+ 
+-	if (map) {
+-		cpu = vpe_to_cpuid_lock(map->vpe, flags);
++	if (d->chip == &its_vpe_irq_chip) {
++		vpe = irq_data_get_irq_chip_data(d);
++	} else {
++		struct its_vlpi_map *map = get_vlpi_map(d);
++		if (map)
++			vpe = map->vpe;
++	}
++
++	if (vpe) {
++		cpu = vpe_to_cpuid_lock(vpe, flags);
+ 	} else {
+ 		/* Physical LPIs are already locked via the irq_desc lock */
+ 		struct its_device *its_dev = irq_data_get_irq_chip_data(d);
+@@ -287,10 +297,18 @@ static int irq_to_cpuid_lock(struct irq_data *d, unsigned long *flags)
+ 
+ static void irq_to_cpuid_unlock(struct irq_data *d, unsigned long flags)
+ {
+-	struct its_vlpi_map *map = get_vlpi_map(d);
++	struct its_vpe *vpe = NULL;
++
++	if (d->chip == &its_vpe_irq_chip) {
++		vpe = irq_data_get_irq_chip_data(d);
++	} else {
++		struct its_vlpi_map *map = get_vlpi_map(d);
++		if (map)
++			vpe = map->vpe;
++	}
+ 
+-	if (map)
+-		vpe_to_cpuid_unlock(map->vpe, flags);
++	if (vpe)
++		vpe_to_cpuid_unlock(vpe, flags);
+ }
+ 
+ static struct its_collection *valid_col(struct its_collection *col)
+@@ -1422,14 +1440,29 @@ static void wait_for_syncr(void __iomem *rdbase)
+ 		cpu_relax();
+ }
+ 
+-static void direct_lpi_inv(struct irq_data *d)
++static void __direct_lpi_inv(struct irq_data *d, u64 val)
+ {
+-	struct its_vlpi_map *map = get_vlpi_map(d);
+ 	void __iomem *rdbase;
+ 	unsigned long flags;
+-	u64 val;
+ 	int cpu;
+ 
++	/* Target the redistributor this LPI is currently routed to */
++	cpu = irq_to_cpuid_lock(d, &flags);
++	raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
++
++	rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
++	gic_write_lpir(val, rdbase + GICR_INVLPIR);
++	wait_for_syncr(rdbase);
++
++	raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
++	irq_to_cpuid_unlock(d, flags);
++}
++
++static void direct_lpi_inv(struct irq_data *d)
++{
++	struct its_vlpi_map *map = get_vlpi_map(d);
++	u64 val;
++
+ 	if (map) {
+ 		struct its_device *its_dev = irq_data_get_irq_chip_data(d);
+ 
+@@ -1442,15 +1475,7 @@ static void direct_lpi_inv(struct irq_data *d)
+ 		val = d->hwirq;
+ 	}
+ 
+-	/* Target the redistributor this LPI is currently routed to */
+-	cpu = irq_to_cpuid_lock(d, &flags);
+-	raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
+-	rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
+-	gic_write_lpir(val, rdbase + GICR_INVLPIR);
+-
+-	wait_for_syncr(rdbase);
+-	raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
+-	irq_to_cpuid_unlock(d, flags);
++	__direct_lpi_inv(d, val);
+ }
+ 
+ static void lpi_update_config(struct irq_data *d, u8 clr, u8 set)
+@@ -3916,18 +3941,10 @@ static void its_vpe_send_inv(struct irq_data *d)
+ {
+ 	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+ 
+-	if (gic_rdists->has_direct_lpi) {
+-		void __iomem *rdbase;
+-
+-		/* Target the redistributor this VPE is currently known on */
+-		raw_spin_lock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
+-		rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
+-		gic_write_lpir(d->parent_data->hwirq, rdbase + GICR_INVLPIR);
+-		wait_for_syncr(rdbase);
+-		raw_spin_unlock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
+-	} else {
++	if (gic_rdists->has_direct_lpi)
++		__direct_lpi_inv(d, d->parent_data->hwirq);
++	else
+ 		its_vpe_send_cmd(vpe, its_send_inv);
+-	}
+ }
+ 
+ static void its_vpe_mask_irq(struct irq_data *d)
+diff --git a/drivers/isdn/hardware/mISDN/hfcpci.c b/drivers/isdn/hardware/mISDN/hfcpci.c
+index eba58b99cd29d..d6cf01c32a33d 100644
+--- a/drivers/isdn/hardware/mISDN/hfcpci.c
++++ b/drivers/isdn/hardware/mISDN/hfcpci.c
+@@ -839,7 +839,7 @@ hfcpci_fill_fifo(struct bchannel *bch)
+ 		*z1t = cpu_to_le16(new_z1);	/* now send data */
+ 		if (bch->tx_idx < bch->tx_skb->len)
+ 			return;
+-		dev_kfree_skb(bch->tx_skb);
++		dev_kfree_skb_any(bch->tx_skb);
+ 		if (get_next_bframe(bch))
+ 			goto next_t_frame;
+ 		return;
+@@ -895,7 +895,7 @@ hfcpci_fill_fifo(struct bchannel *bch)
+ 	}
+ 	bz->za[new_f1].z1 = cpu_to_le16(new_z1);	/* for next buffer */
+ 	bz->f1 = new_f1;	/* next frame */
+-	dev_kfree_skb(bch->tx_skb);
++	dev_kfree_skb_any(bch->tx_skb);
+ 	get_next_bframe(bch);
+ }
+ 
+@@ -1119,7 +1119,7 @@ tx_birq(struct bchannel *bch)
+ 	if (bch->tx_skb && bch->tx_idx < bch->tx_skb->len)
+ 		hfcpci_fill_fifo(bch);
+ 	else {
+-		dev_kfree_skb(bch->tx_skb);
++		dev_kfree_skb_any(bch->tx_skb);
+ 		if (get_next_bframe(bch))
+ 			hfcpci_fill_fifo(bch);
+ 	}
+@@ -2277,7 +2277,7 @@ _hfcpci_softirq(struct device *dev, void *unused)
+ 		return 0;
+ 
+ 	if (hc->hw.int_m2 & HFCPCI_IRQ_ENABLE) {
+-		spin_lock(&hc->lock);
++		spin_lock_irq(&hc->lock);
+ 		bch = Sel_BCS(hc, hc->hw.bswapped ? 2 : 1);
+ 		if (bch && bch->state == ISDN_P_B_RAW) { /* B1 rx&tx */
+ 			main_rec_hfcpci(bch);
+@@ -2288,7 +2288,7 @@ _hfcpci_softirq(struct device *dev, void *unused)
+ 			main_rec_hfcpci(bch);
+ 			tx_birq(bch);
+ 		}
+-		spin_unlock(&hc->lock);
++		spin_unlock_irq(&hc->lock);
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/md/dm-cache-policy-smq.c b/drivers/md/dm-cache-policy-smq.c
+index b61aac00ff409..859073193f5b4 100644
+--- a/drivers/md/dm-cache-policy-smq.c
++++ b/drivers/md/dm-cache-policy-smq.c
+@@ -854,7 +854,13 @@ struct smq_policy {
+ 
+ 	struct background_tracker *bg_work;
+ 
+-	bool migrations_allowed;
++	bool migrations_allowed:1;
++
++	/*
++	 * If this is set the policy will try and clean the whole cache
++	 * even if the device is not idle.
++	 */
++	bool cleaner:1;
+ };
+ 
+ /*----------------------------------------------------------------*/
+@@ -1133,7 +1139,7 @@ static bool clean_target_met(struct smq_policy *mq, bool idle)
+ 	 * Cache entries may not be populated.  So we cannot rely on the
+ 	 * size of the clean queue.
+ 	 */
+-	if (idle) {
++	if (idle || mq->cleaner) {
+ 		/*
+ 		 * We'd like to clean everything.
+ 		 */
+@@ -1716,11 +1722,9 @@ static void calc_hotspot_params(sector_t origin_size,
+ 		*hotspot_block_size /= 2u;
+ }
+ 
+-static struct dm_cache_policy *__smq_create(dm_cblock_t cache_size,
+-					    sector_t origin_size,
+-					    sector_t cache_block_size,
+-					    bool mimic_mq,
+-					    bool migrations_allowed)
++static struct dm_cache_policy *
++__smq_create(dm_cblock_t cache_size, sector_t origin_size, sector_t cache_block_size,
++	     bool mimic_mq, bool migrations_allowed, bool cleaner)
+ {
+ 	unsigned i;
+ 	unsigned nr_sentinels_per_queue = 2u * NR_CACHE_LEVELS;
+@@ -1807,6 +1811,7 @@ static struct dm_cache_policy *__smq_create(dm_cblock_t cache_size,
+ 		goto bad_btracker;
+ 
+ 	mq->migrations_allowed = migrations_allowed;
++	mq->cleaner = cleaner;
+ 
+ 	return &mq->policy;
+ 
+@@ -1830,21 +1835,24 @@ static struct dm_cache_policy *smq_create(dm_cblock_t cache_size,
+ 					  sector_t origin_size,
+ 					  sector_t cache_block_size)
+ {
+-	return __smq_create(cache_size, origin_size, cache_block_size, false, true);
++	return __smq_create(cache_size, origin_size, cache_block_size,
++			    false, true, false);
+ }
+ 
+ static struct dm_cache_policy *mq_create(dm_cblock_t cache_size,
+ 					 sector_t origin_size,
+ 					 sector_t cache_block_size)
+ {
+-	return __smq_create(cache_size, origin_size, cache_block_size, true, true);
++	return __smq_create(cache_size, origin_size, cache_block_size,
++			    true, true, false);
+ }
+ 
+ static struct dm_cache_policy *cleaner_create(dm_cblock_t cache_size,
+ 					      sector_t origin_size,
+ 					      sector_t cache_block_size)
+ {
+-	return __smq_create(cache_size, origin_size, cache_block_size, false, false);
++	return __smq_create(cache_size, origin_size, cache_block_size,
++			    false, false, true);
+ }
+ 
+ /*----------------------------------------------------------------*/
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index a2d09c9c6e9f7..140bdf2a6ee11 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -3258,8 +3258,7 @@ size_check:
+ 	r = md_start(&rs->md);
+ 	if (r) {
+ 		ti->error = "Failed to start raid array";
+-		mddev_unlock(&rs->md);
+-		goto bad_md_start;
++		goto bad_unlock;
+ 	}
+ 
+ 	/* If raid4/5/6 journal mode explicitly requested (only possible with journal dev) -> set it */
+@@ -3267,8 +3266,7 @@ size_check:
+ 		r = r5c_journal_mode_set(&rs->md, rs->journal_dev.mode);
+ 		if (r) {
+ 			ti->error = "Failed to set raid4/5/6 journal mode";
+-			mddev_unlock(&rs->md);
+-			goto bad_journal_mode_set;
++			goto bad_unlock;
+ 		}
+ 	}
+ 
+@@ -3279,14 +3277,14 @@ size_check:
+ 	if (rs_is_raid456(rs)) {
+ 		r = rs_set_raid456_stripe_cache(rs);
+ 		if (r)
+-			goto bad_stripe_cache;
++			goto bad_unlock;
+ 	}
+ 
+ 	/* Now do an early reshape check */
+ 	if (test_bit(RT_FLAG_RESHAPE_RS, &rs->runtime_flags)) {
+ 		r = rs_check_reshape(rs);
+ 		if (r)
+-			goto bad_check_reshape;
++			goto bad_unlock;
+ 
+ 		/* Restore new, ctr requested layout to perform check */
+ 		rs_config_restore(rs, &rs_layout);
+@@ -3295,7 +3293,7 @@ size_check:
+ 			r = rs->md.pers->check_reshape(&rs->md);
+ 			if (r) {
+ 				ti->error = "Reshape check failed";
+-				goto bad_check_reshape;
++				goto bad_unlock;
+ 			}
+ 		}
+ 	}
+@@ -3306,11 +3304,9 @@ size_check:
+ 	mddev_unlock(&rs->md);
+ 	return 0;
+ 
+-bad_md_start:
+-bad_journal_mode_set:
+-bad_stripe_cache:
+-bad_check_reshape:
++bad_unlock:
+ 	md_stop(&rs->md);
++	mddev_unlock(&rs->md);
+ bad:
+ 	raid_set_free(rs);
+ 
+@@ -3321,7 +3317,9 @@ static void raid_dtr(struct dm_target *ti)
+ {
+ 	struct raid_set *rs = ti->private;
+ 
++	mddev_lock_nointr(&rs->md);
+ 	md_stop(&rs->md);
++	mddev_unlock(&rs->md);
+ 	raid_set_free(rs);
+ }
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index ae0a857d6076a..6efe49f7bdf5e 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -6316,6 +6316,8 @@ static void __md_stop(struct mddev *mddev)
+ 
+ void md_stop(struct mddev *mddev)
+ {
++	lockdep_assert_held(&mddev->reconfig_mutex);
++
+ 	/* stop the array and free an attached data structures.
+ 	 * This is called from dm-raid
+ 	 */
+diff --git a/drivers/mtd/nand/raw/fsl_upm.c b/drivers/mtd/nand/raw/fsl_upm.c
+index d5813b9abc8e7..9f934466dd975 100644
+--- a/drivers/mtd/nand/raw/fsl_upm.c
++++ b/drivers/mtd/nand/raw/fsl_upm.c
+@@ -136,7 +136,7 @@ static int fun_exec_op(struct nand_chip *chip, const struct nand_operation *op,
+ 	unsigned int i;
+ 	int ret;
+ 
+-	if (op->cs > NAND_MAX_CHIPS)
++	if (op->cs >= NAND_MAX_CHIPS)
+ 		return -EINVAL;
+ 
+ 	if (check_only)
+diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
+index ee3976b7e197e..6bb0fca4a91d0 100644
+--- a/drivers/mtd/nand/raw/meson_nand.c
++++ b/drivers/mtd/nand/raw/meson_nand.c
+@@ -1180,7 +1180,6 @@ static int meson_nand_attach_chip(struct nand_chip *nand)
+ 	struct meson_nfc *nfc = nand_get_controller_data(nand);
+ 	struct meson_nfc_nand_chip *meson_chip = to_meson_nand(nand);
+ 	struct mtd_info *mtd = nand_to_mtd(nand);
+-	int nsectors = mtd->writesize / 1024;
+ 	int ret;
+ 
+ 	if (!mtd->name) {
+@@ -1198,7 +1197,7 @@ static int meson_nand_attach_chip(struct nand_chip *nand)
+ 	nand->options |= NAND_NO_SUBPAGE_WRITE;
+ 
+ 	ret = nand_ecc_choose_conf(nand, nfc->data->ecc_caps,
+-				   mtd->oobsize - 2 * nsectors);
++				   mtd->oobsize - 2);
+ 	if (ret) {
+ 		dev_err(nfc->dev, "failed to ECC init\n");
+ 		return -EINVAL;
+diff --git a/drivers/mtd/nand/raw/omap_elm.c b/drivers/mtd/nand/raw/omap_elm.c
+index 4b799521a427a..dad17fa0b514e 100644
+--- a/drivers/mtd/nand/raw/omap_elm.c
++++ b/drivers/mtd/nand/raw/omap_elm.c
+@@ -174,17 +174,17 @@ static void elm_load_syndrome(struct elm_info *info,
+ 			switch (info->bch_type) {
+ 			case BCH8_ECC:
+ 				/* syndrome fragment 0 = ecc[9-12B] */
+-				val = cpu_to_be32(*(u32 *) &ecc[9]);
++				val = (__force u32)cpu_to_be32(*(u32 *)&ecc[9]);
+ 				elm_write_reg(info, offset, val);
+ 
+ 				/* syndrome fragment 1 = ecc[5-8B] */
+ 				offset += 4;
+-				val = cpu_to_be32(*(u32 *) &ecc[5]);
++				val = (__force u32)cpu_to_be32(*(u32 *)&ecc[5]);
+ 				elm_write_reg(info, offset, val);
+ 
+ 				/* syndrome fragment 2 = ecc[1-4B] */
+ 				offset += 4;
+-				val = cpu_to_be32(*(u32 *) &ecc[1]);
++				val = (__force u32)cpu_to_be32(*(u32 *)&ecc[1]);
+ 				elm_write_reg(info, offset, val);
+ 
+ 				/* syndrome fragment 3 = ecc[0B] */
+@@ -194,35 +194,35 @@ static void elm_load_syndrome(struct elm_info *info,
+ 				break;
+ 			case BCH4_ECC:
+ 				/* syndrome fragment 0 = ecc[20-52b] bits */
+-				val = (cpu_to_be32(*(u32 *) &ecc[3]) >> 4) |
++				val = ((__force u32)cpu_to_be32(*(u32 *)&ecc[3]) >> 4) |
+ 					((ecc[2] & 0xf) << 28);
+ 				elm_write_reg(info, offset, val);
+ 
+ 				/* syndrome fragment 1 = ecc[0-20b] bits */
+ 				offset += 4;
+-				val = cpu_to_be32(*(u32 *) &ecc[0]) >> 12;
++				val = (__force u32)cpu_to_be32(*(u32 *)&ecc[0]) >> 12;
+ 				elm_write_reg(info, offset, val);
+ 				break;
+ 			case BCH16_ECC:
+-				val = cpu_to_be32(*(u32 *) &ecc[22]);
++				val = (__force u32)cpu_to_be32(*(u32 *)&ecc[22]);
+ 				elm_write_reg(info, offset, val);
+ 				offset += 4;
+-				val = cpu_to_be32(*(u32 *) &ecc[18]);
++				val = (__force u32)cpu_to_be32(*(u32 *)&ecc[18]);
+ 				elm_write_reg(info, offset, val);
+ 				offset += 4;
+-				val = cpu_to_be32(*(u32 *) &ecc[14]);
++				val = (__force u32)cpu_to_be32(*(u32 *)&ecc[14]);
+ 				elm_write_reg(info, offset, val);
+ 				offset += 4;
+-				val = cpu_to_be32(*(u32 *) &ecc[10]);
++				val = (__force u32)cpu_to_be32(*(u32 *)&ecc[10]);
+ 				elm_write_reg(info, offset, val);
+ 				offset += 4;
+-				val = cpu_to_be32(*(u32 *) &ecc[6]);
++				val = (__force u32)cpu_to_be32(*(u32 *)&ecc[6]);
+ 				elm_write_reg(info, offset, val);
+ 				offset += 4;
+-				val = cpu_to_be32(*(u32 *) &ecc[2]);
++				val = (__force u32)cpu_to_be32(*(u32 *)&ecc[2]);
+ 				elm_write_reg(info, offset, val);
+ 				offset += 4;
+-				val = cpu_to_be32(*(u32 *) &ecc[0]) >> 16;
++				val = (__force u32)cpu_to_be32(*(u32 *)&ecc[0]) >> 16;
+ 				elm_write_reg(info, offset, val);
+ 				break;
+ 			default:
+diff --git a/drivers/mtd/nand/spi/toshiba.c b/drivers/mtd/nand/spi/toshiba.c
+index 6fe7bd2a94d28..daa49c0603681 100644
+--- a/drivers/mtd/nand/spi/toshiba.c
++++ b/drivers/mtd/nand/spi/toshiba.c
+@@ -73,7 +73,7 @@ static int tx58cxgxsxraix_ecc_get_status(struct spinand_device *spinand,
+ {
+ 	struct nand_device *nand = spinand_to_nand(spinand);
+ 	u8 mbf = 0;
+-	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(0x30, &mbf);
++	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(0x30, spinand->scratchbuf);
+ 
+ 	switch (status & STATUS_ECC_MASK) {
+ 	case STATUS_ECC_NO_BITFLIPS:
+@@ -92,7 +92,7 @@ static int tx58cxgxsxraix_ecc_get_status(struct spinand_device *spinand,
+ 		if (spi_mem_exec_op(spinand->spimem, &op))
+ 			return nanddev_get_ecc_requirements(nand)->strength;
+ 
+-		mbf >>= 4;
++		mbf = *(spinand->scratchbuf) >> 4;
+ 
+ 		if (WARN_ON(mbf > nanddev_get_ecc_requirements(nand)->strength || !mbf))
+ 			return nanddev_get_ecc_requirements(nand)->strength;
+diff --git a/drivers/net/Makefile b/drivers/net/Makefile
+index 72e18d505d1ac..64430440c580c 100644
+--- a/drivers/net/Makefile
++++ b/drivers/net/Makefile
+@@ -29,7 +29,7 @@ obj-$(CONFIG_TUN) += tun.o
+ obj-$(CONFIG_TAP) += tap.o
+ obj-$(CONFIG_VETH) += veth.o
+ obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
+-obj-$(CONFIG_VXLAN) += vxlan.o
++obj-$(CONFIG_VXLAN) += vxlan/
+ obj-$(CONFIG_GENEVE) += geneve.o
+ obj-$(CONFIG_BAREUDP) += bareudp.o
+ obj-$(CONFIG_GTP) += gtp.o
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 381e6cdd603a1..a260740269e9f 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1442,6 +1442,11 @@ static void bond_setup_by_slave(struct net_device *bond_dev,
+ 
+ 	memcpy(bond_dev->broadcast, slave_dev->broadcast,
+ 		slave_dev->addr_len);
++
++	if (slave_dev->flags & IFF_POINTOPOINT) {
++		bond_dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
++		bond_dev->flags |= (IFF_POINTOPOINT | IFF_NOARP);
++	}
+ }
+ 
+ /* On bonding slaves other than the currently active slave, suppress
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index a879200eaab02..1f81293f137c9 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -732,6 +732,8 @@ static int gs_can_close(struct net_device *netdev)
+ 	usb_kill_anchored_urbs(&dev->tx_submitted);
+ 	atomic_set(&dev->active_tx_urbs, 0);
+ 
++	dev->can.state = CAN_STATE_STOPPED;
++
+ 	/* reset the device */
+ 	rc = gs_cmd_reset(dev);
+ 	if (rc < 0)
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index c6563d212476a..f2f890e559f3a 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -1301,7 +1301,9 @@ static int bcm_sf2_sw_probe(struct platform_device *pdev)
+ 	if (IS_ERR(priv->clk))
+ 		return PTR_ERR(priv->clk);
+ 
+-	clk_prepare_enable(priv->clk);
++	ret = clk_prepare_enable(priv->clk);
++	if (ret)
++		return ret;
+ 
+ 	priv->clk_mdiv = devm_clk_get_optional(&pdev->dev, "sw_switch_mdiv");
+ 	if (IS_ERR(priv->clk_mdiv)) {
+@@ -1309,7 +1311,9 @@ static int bcm_sf2_sw_probe(struct platform_device *pdev)
+ 		goto out_clk;
+ 	}
+ 
+-	clk_prepare_enable(priv->clk_mdiv);
++	ret = clk_prepare_enable(priv->clk_mdiv);
++	if (ret)
++		goto out_clk;
+ 
+ 	ret = bcm_sf2_sw_rst(priv);
+ 	if (ret) {
+diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
+index ff9f96de74b81..696ce3c5a8ba3 100644
+--- a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
++++ b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
+@@ -1642,8 +1642,11 @@ static int atl1e_tso_csum(struct atl1e_adapter *adapter,
+ 			real_len = (((unsigned char *)ip_hdr(skb) - skb->data)
+ 					+ ntohs(ip_hdr(skb)->tot_len));
+ 
+-			if (real_len < skb->len)
+-				pskb_trim(skb, real_len);
++			if (real_len < skb->len) {
++				err = pskb_trim(skb, real_len);
++				if (err)
++					return err;
++			}
+ 
+ 			hdr_len = (skb_transport_offset(skb) + tcp_hdrlen(skb));
+ 			if (unlikely(skb->len == hdr_len)) {
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 81be560a26431..52b399aa3213d 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -1139,7 +1139,8 @@ static struct sk_buff *be_lancer_xmit_workarounds(struct be_adapter *adapter,
+ 	    (lancer_chip(adapter) || BE3_chip(adapter) ||
+ 	     skb_vlan_tag_present(skb)) && is_ipv4_pkt(skb)) {
+ 		ip = (struct iphdr *)ip_hdr(skb);
+-		pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len));
++		if (unlikely(pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len))))
++			goto tx_drop;
+ 	}
+ 
+ 	/* If vlan tag is already inlined in the packet, skip HW VLAN
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index 5bab885744fc8..d60b8dfe38727 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -53,7 +53,10 @@ static void hclge_tm_info_to_ieee_ets(struct hclge_dev *hdev,
+ 
+ 	for (i = 0; i < HNAE3_MAX_TC; i++) {
+ 		ets->prio_tc[i] = hdev->tm_info.prio_tc[i];
+-		ets->tc_tx_bw[i] = hdev->tm_info.pg_info[0].tc_dwrr[i];
++		if (i < hdev->tm_info.num_tc)
++			ets->tc_tx_bw[i] = hdev->tm_info.pg_info[0].tc_dwrr[i];
++		else
++			ets->tc_tx_bw[i] = 0;
+ 
+ 		if (hdev->tm_info.tc_info[i].tc_sch_mode ==
+ 		    HCLGE_SCH_MODE_SP)
+@@ -105,26 +108,31 @@ static int hclge_dcb_common_validate(struct hclge_dev *hdev, u8 num_tc,
+ 	return 0;
+ }
+ 
+-static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+-			      u8 *tc, bool *changed)
++static u8 hclge_ets_tc_changed(struct hclge_dev *hdev, struct ieee_ets *ets,
++			       bool *changed)
+ {
+-	bool has_ets_tc = false;
+-	u32 total_ets_bw = 0;
+-	u8 max_tc = 0;
+-	int ret;
++	u8 max_tc_id = 0;
+ 	u8 i;
+ 
+ 	for (i = 0; i < HNAE3_MAX_USER_PRIO; i++) {
+ 		if (ets->prio_tc[i] != hdev->tm_info.prio_tc[i])
+ 			*changed = true;
+ 
+-		if (ets->prio_tc[i] > max_tc)
+-			max_tc = ets->prio_tc[i];
++		if (ets->prio_tc[i] > max_tc_id)
++			max_tc_id = ets->prio_tc[i];
+ 	}
+ 
+-	ret = hclge_dcb_common_validate(hdev, max_tc + 1, ets->prio_tc);
+-	if (ret)
+-		return ret;
++	/* return max tc number, max tc id need to plus 1 */
++	return max_tc_id + 1;
++}
++
++static int hclge_ets_sch_mode_validate(struct hclge_dev *hdev,
++				       struct ieee_ets *ets, bool *changed,
++				       u8 tc_num)
++{
++	bool has_ets_tc = false;
++	u32 total_ets_bw = 0;
++	u8 i;
+ 
+ 	for (i = 0; i < HNAE3_MAX_TC; i++) {
+ 		switch (ets->tc_tsa[i]) {
+@@ -134,6 +142,13 @@ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ 				*changed = true;
+ 			break;
+ 		case IEEE_8021QAZ_TSA_ETS:
++			if (i >= tc_num) {
++				dev_err(&hdev->pdev->dev,
++					"tc%u is disabled, cannot set ets bw\n",
++					i);
++				return -EINVAL;
++			}
++
+ 			/* The hardware will switch to sp mode if bandwidth is
+ 			 * 0, so limit ets bandwidth must be greater than 0.
+ 			 */
+@@ -158,7 +173,26 @@ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ 	if (has_ets_tc && total_ets_bw != BW_PERCENT)
+ 		return -EINVAL;
+ 
+-	*tc = max_tc + 1;
++	return 0;
++}
++
++static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
++			      u8 *tc, bool *changed)
++{
++	u8 tc_num;
++	int ret;
++
++	tc_num = hclge_ets_tc_changed(hdev, ets, changed);
++
++	ret = hclge_dcb_common_validate(hdev, tc_num, ets->prio_tc);
++	if (ret)
++		return ret;
++
++	ret = hclge_ets_sch_mode_validate(hdev, ets, changed, tc_num);
++	if (ret)
++		return ret;
++
++	*tc = tc_num;
+ 	if (*tc != hdev->tm_info.num_tc)
+ 		*changed = true;
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index b3ceaaaeacaeb..8c5c5562c0a73 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -651,6 +651,7 @@ static void hclge_tm_tc_info_init(struct hclge_dev *hdev)
+ static void hclge_tm_pg_info_init(struct hclge_dev *hdev)
+ {
+ #define BW_PERCENT	100
++#define DEFAULT_BW_WEIGHT	1
+ 
+ 	u8 i;
+ 
+@@ -672,7 +673,7 @@ static void hclge_tm_pg_info_init(struct hclge_dev *hdev)
+ 		for (k = 0; k < hdev->tm_info.num_tc; k++)
+ 			hdev->tm_info.pg_info[i].tc_dwrr[k] = BW_PERCENT;
+ 		for (; k < HNAE3_MAX_TC; k++)
+-			hdev->tm_info.pg_info[i].tc_dwrr[k] = 0;
++			hdev->tm_info.pg_info[i].tc_dwrr[k] = DEFAULT_BW_WEIGHT;
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+index 989d5c7263d7c..8bcf5902babf7 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+@@ -1839,7 +1839,7 @@ void i40e_dbg_pf_exit(struct i40e_pf *pf)
+ void i40e_dbg_init(void)
+ {
+ 	i40e_dbg_root = debugfs_create_dir(i40e_driver_name, NULL);
+-	if (!i40e_dbg_root)
++	if (IS_ERR(i40e_dbg_root))
+ 		pr_info("init of debugfs failed\n");
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+index 192729546bbfc..a122a267ede53 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+@@ -1135,16 +1135,21 @@ ice_cfg_fdir_xtrct_seq(struct ice_pf *pf, struct ethtool_rx_flow_spec *fsp,
+ 				     ICE_FLOW_FLD_OFF_INVAL);
+ 	}
+ 
+-	/* add filter for outer headers */
+ 	fltr_idx = ice_ethtool_flow_to_fltr(fsp->flow_type & ~FLOW_EXT);
++
++	assign_bit(fltr_idx, hw->fdir_perfect_fltr, perfect_filter);
++
++	/* add filter for outer headers */
+ 	ret = ice_fdir_set_hw_fltr_rule(pf, seg, fltr_idx,
+ 					ICE_FD_HW_SEG_NON_TUN);
+-	if (ret == -EEXIST)
+-		/* Rule already exists, free memory and continue */
+-		devm_kfree(dev, seg);
+-	else if (ret)
++	if (ret == -EEXIST) {
++		/* Rule already exists, free memory and count as success */
++		ret = 0;
++		goto err_exit;
++	} else if (ret) {
+ 		/* could not write filter, free memory */
+ 		goto err_exit;
++	}
+ 
+ 	/* make tunneled filter HW entries if possible */
+ 	memcpy(&tun_seg[1], seg, sizeof(*seg));
+@@ -1159,18 +1164,13 @@ ice_cfg_fdir_xtrct_seq(struct ice_pf *pf, struct ethtool_rx_flow_spec *fsp,
+ 		devm_kfree(dev, tun_seg);
+ 	}
+ 
+-	if (perfect_filter)
+-		set_bit(fltr_idx, hw->fdir_perfect_fltr);
+-	else
+-		clear_bit(fltr_idx, hw->fdir_perfect_fltr);
+-
+ 	return ret;
+ 
+ err_exit:
+ 	devm_kfree(dev, tun_seg);
+ 	devm_kfree(dev, seg);
+ 
+-	return -EOPNOTSUPP;
++	return ret;
+ }
+ 
+ /**
+@@ -1680,7 +1680,9 @@ int ice_add_fdir_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
+ 	}
+ 
+ 	/* input struct is added to the HW filter list */
+-	ice_fdir_update_list_entry(pf, input, fsp->location);
++	ret = ice_fdir_update_list_entry(pf, input, fsp->location);
++	if (ret)
++		goto release_lock;
+ 
+ 	ret = ice_fdir_write_all_fltr(pf, input, true);
+ 	if (ret)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 5c542f5d2b20d..2b100b7b325a5 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -8409,7 +8409,7 @@ static void ixgbe_atr(struct ixgbe_ring *ring,
+ 		struct ixgbe_adapter *adapter = q_vector->adapter;
+ 
+ 		if (unlikely(skb_tail_pointer(skb) < hdr.network +
+-			     VXLAN_HEADROOM))
++			     vxlan_headroom(0)))
+ 			return;
+ 
+ 		/* verify the port is recognized as VXLAN */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
+index a9b45606dbdb7..76ef8a009d6e8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
+@@ -121,7 +121,9 @@ static int mlx5e_ipsec_remove_trailer(struct sk_buff *skb, struct xfrm_state *x)
+ 
+ 	trailer_len = alen + plen + 2;
+ 
+-	pskb_trim(skb, skb->len - trailer_len);
++	ret = pskb_trim(skb, skb->len - trailer_len);
++	if (unlikely(ret))
++		return ret;
+ 	if (skb->protocol == htons(ETH_P_IP)) {
+ 		ipv4hdr->tot_len = htons(ntohs(ipv4hdr->tot_len) - trailer_len);
+ 		ip_send_check(ipv4hdr);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 4bdcceffe9d38..4e8e3797aed08 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -802,7 +802,7 @@ static struct mlx5_flow_table *find_closest_ft_recursive(struct fs_node  *root,
+ 	struct fs_node *iter = list_entry(start, struct fs_node, list);
+ 	struct mlx5_flow_table *ft = NULL;
+ 
+-	if (!root || root->type == FS_TYPE_PRIO_CHAINS)
++	if (!root)
+ 		return NULL;
+ 
+ 	list_for_each_advance_continue(iter, &root->children, reverse) {
+@@ -818,20 +818,42 @@ static struct mlx5_flow_table *find_closest_ft_recursive(struct fs_node  *root,
+ 	return ft;
+ }
+ 
+-/* If reverse is false then return the first flow table in next priority of
+- * prio in the tree, else return the last flow table in the previous priority
+- * of prio in the tree.
++static struct fs_node *find_prio_chains_parent(struct fs_node *parent,
++					       struct fs_node **child)
++{
++	struct fs_node *node = NULL;
++
++	while (parent && parent->type != FS_TYPE_PRIO_CHAINS) {
++		node = parent;
++		parent = parent->parent;
++	}
++
++	if (child)
++		*child = node;
++
++	return parent;
++}
++
++/* If reverse is false then return the first flow table next to the passed node
++ * in the tree, else return the last flow table before the node in the tree.
++ * If skip is true, skip the flow tables in the same prio_chains prio.
+  */
+-static struct mlx5_flow_table *find_closest_ft(struct fs_prio *prio, bool reverse)
++static struct mlx5_flow_table *find_closest_ft(struct fs_node *node, bool reverse,
++					       bool skip)
+ {
++	struct fs_node *prio_chains_parent = NULL;
+ 	struct mlx5_flow_table *ft = NULL;
+ 	struct fs_node *curr_node;
+ 	struct fs_node *parent;
+ 
+-	parent = prio->node.parent;
+-	curr_node = &prio->node;
++	if (skip)
++		prio_chains_parent = find_prio_chains_parent(node, NULL);
++	parent = node->parent;
++	curr_node = node;
+ 	while (!ft && parent) {
+-		ft = find_closest_ft_recursive(parent, &curr_node->list, reverse);
++		if (parent != prio_chains_parent)
++			ft = find_closest_ft_recursive(parent, &curr_node->list,
++						       reverse);
+ 		curr_node = parent;
+ 		parent = curr_node->parent;
+ 	}
+@@ -839,15 +861,15 @@ static struct mlx5_flow_table *find_closest_ft(struct fs_prio *prio, bool revers
+ }
+ 
+ /* Assuming all the tree is locked by mutex chain lock */
+-static struct mlx5_flow_table *find_next_chained_ft(struct fs_prio *prio)
++static struct mlx5_flow_table *find_next_chained_ft(struct fs_node *node)
+ {
+-	return find_closest_ft(prio, false);
++	return find_closest_ft(node, false, true);
+ }
+ 
+ /* Assuming all the tree is locked by mutex chain lock */
+-static struct mlx5_flow_table *find_prev_chained_ft(struct fs_prio *prio)
++static struct mlx5_flow_table *find_prev_chained_ft(struct fs_node *node)
+ {
+-	return find_closest_ft(prio, true);
++	return find_closest_ft(node, true, true);
+ }
+ 
+ static struct mlx5_flow_table *find_next_fwd_ft(struct mlx5_flow_table *ft,
+@@ -859,7 +881,7 @@ static struct mlx5_flow_table *find_next_fwd_ft(struct mlx5_flow_table *ft,
+ 	next_ns = flow_act->action & MLX5_FLOW_CONTEXT_ACTION_FWD_NEXT_NS;
+ 	fs_get_obj(prio, next_ns ? ft->ns->node.parent : ft->node.parent);
+ 
+-	return find_next_chained_ft(prio);
++	return find_next_chained_ft(&prio->node);
+ }
+ 
+ static int connect_fts_in_prio(struct mlx5_core_dev *dev,
+@@ -883,21 +905,55 @@ static int connect_fts_in_prio(struct mlx5_core_dev *dev,
+ 	return 0;
+ }
+ 
++static struct mlx5_flow_table *find_closet_ft_prio_chains(struct fs_node *node,
++							  struct fs_node *parent,
++							  struct fs_node **child,
++							  bool reverse)
++{
++	struct mlx5_flow_table *ft;
++
++	ft = find_closest_ft(node, reverse, false);
++
++	if (ft && parent == find_prio_chains_parent(&ft->node, child))
++		return ft;
++
++	return NULL;
++}
++
+ /* Connect flow tables from previous priority of prio to ft */
+ static int connect_prev_fts(struct mlx5_core_dev *dev,
+ 			    struct mlx5_flow_table *ft,
+ 			    struct fs_prio *prio)
+ {
++	struct fs_node *prio_parent, *parent = NULL, *child, *node;
+ 	struct mlx5_flow_table *prev_ft;
++	int err = 0;
++
++	prio_parent = find_prio_chains_parent(&prio->node, &child);
++
++	/* return directly if not under the first sub ns of prio_chains prio */
++	if (prio_parent && !list_is_first(&child->list, &prio_parent->children))
++		return 0;
+ 
+-	prev_ft = find_prev_chained_ft(prio);
+-	if (prev_ft) {
++	prev_ft = find_prev_chained_ft(&prio->node);
++	while (prev_ft) {
+ 		struct fs_prio *prev_prio;
+ 
+ 		fs_get_obj(prev_prio, prev_ft->node.parent);
+-		return connect_fts_in_prio(dev, prev_prio, ft);
++		err = connect_fts_in_prio(dev, prev_prio, ft);
++		if (err)
++			break;
++
++		if (!parent) {
++			parent = find_prio_chains_parent(&prev_prio->node, &child);
++			if (!parent)
++				break;
++		}
++
++		node = child;
++		prev_ft = find_closet_ft_prio_chains(node, parent, &child, true);
+ 	}
+-	return 0;
++	return err;
+ }
+ 
+ static int update_root_ft_create(struct mlx5_flow_table *ft, struct fs_prio
+@@ -1036,7 +1092,7 @@ static int connect_flow_table(struct mlx5_core_dev *dev, struct mlx5_flow_table
+ 		if (err)
+ 			return err;
+ 
+-		next_ft = first_ft ? first_ft : find_next_chained_ft(prio);
++		next_ft = first_ft ? first_ft : find_next_chained_ft(&prio->node);
+ 		err = connect_fwd_rules(dev, ft, next_ft);
+ 		if (err)
+ 			return err;
+@@ -1114,7 +1170,7 @@ static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespa
+ 	tree_init_node(&ft->node, del_hw_flow_table, del_sw_flow_table);
+ 	log_table_sz = ft->max_fte ? ilog2(ft->max_fte) : 0;
+ 	next_ft = unmanaged ? ft_attr->next_ft :
+-			      find_next_chained_ft(fs_prio);
++			      find_next_chained_ft(&fs_prio->node);
+ 	ft->def_miss_action = ns->def_miss_action;
+ 	ft->ns = ns;
+ 	err = root->cmds->create_flow_table(root, ft, log_table_sz, next_ft);
+@@ -2073,13 +2129,20 @@ EXPORT_SYMBOL(mlx5_del_flow_rules);
+ /* Assuming prio->node.children(flow tables) is sorted by level */
+ static struct mlx5_flow_table *find_next_ft(struct mlx5_flow_table *ft)
+ {
++	struct fs_node *prio_parent, *child;
+ 	struct fs_prio *prio;
+ 
+ 	fs_get_obj(prio, ft->node.parent);
+ 
+ 	if (!list_is_last(&ft->node.list, &prio->node.children))
+ 		return list_next_entry(ft, node.list);
+-	return find_next_chained_ft(prio);
++
++	prio_parent = find_prio_chains_parent(&prio->node, &child);
++
++	if (prio_parent && list_is_first(&child->list, &prio_parent->children))
++		return find_closest_ft(&prio->node, false, false);
++
++	return find_next_chained_ft(&prio->node);
+ }
+ 
+ static int update_root_ft_destroy(struct mlx5_flow_table *ft)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
+index fd56cae0d54fc..4549840fb91ad 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
+@@ -425,11 +425,12 @@ int mlx5dr_cmd_create_reformat_ctx(struct mlx5_core_dev *mdev,
+ 
+ 	err = mlx5_cmd_exec(mdev, in, inlen, out, sizeof(out));
+ 	if (err)
+-		return err;
++		goto err_free_in;
+ 
+ 	*reformat_id = MLX5_GET(alloc_packet_reformat_context_out, out, packet_reformat_id);
+-	kvfree(in);
+ 
++err_free_in:
++	kvfree(in);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index b9acee214bb6a..bb86315581818 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -1845,6 +1845,17 @@ static int netsec_of_probe(struct platform_device *pdev,
+ 		return err;
+ 	}
+ 
++	/*
++	 * SynQuacer is physically configured with TX and RX delays
++	 * but the standard firmware claimed otherwise for a long
++	 * time, ignore it.
++	 */
++	if (of_machine_is_compatible("socionext,developer-box") &&
++	    priv->phy_interface != PHY_INTERFACE_MODE_RGMII_ID) {
++		dev_warn(&pdev->dev, "Outdated firmware reports incorrect PHY mode, overriding\n");
++		priv->phy_interface = PHY_INTERFACE_MODE_RGMII_ID;
++	}
++
+ 	priv->phy_np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+ 	if (!priv->phy_np) {
+ 		dev_err(&pdev->dev, "missing required property 'phy-handle'\n");
+diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
+index 130f4b707bdc4..da136abba1520 100644
+--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
++++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
+@@ -1550,15 +1550,15 @@ static int temac_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* Error handle returned DMA RX and TX interrupts */
+-	if (lp->rx_irq < 0) {
+-		if (lp->rx_irq != -EPROBE_DEFER)
+-			dev_err(&pdev->dev, "could not get DMA RX irq\n");
+-		return lp->rx_irq;
++	if (lp->rx_irq <= 0) {
++		rc = lp->rx_irq ?: -EINVAL;
++		return dev_err_probe(&pdev->dev, rc,
++				     "could not get DMA RX irq\n");
+ 	}
+-	if (lp->tx_irq < 0) {
+-		if (lp->tx_irq != -EPROBE_DEFER)
+-			dev_err(&pdev->dev, "could not get DMA TX irq\n");
+-		return lp->tx_irq;
++	if (lp->tx_irq <= 0) {
++		rc = lp->tx_irq ?: -EINVAL;
++		return dev_err_probe(&pdev->dev, rc,
++				     "could not get DMA TX irq\n");
+ 	}
+ 
+ 	if (temac_np) {
+diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c
+index 2b64318efdba6..42b48d0d0c4ed 100644
+--- a/drivers/net/phy/marvell10g.c
++++ b/drivers/net/phy/marvell10g.c
+@@ -263,6 +263,13 @@ static int mv3310_power_up(struct phy_device *phydev)
+ 	ret = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND2, MV_V2_PORT_CTRL,
+ 				 MV_V2_PORT_CTRL_PWRDOWN);
+ 
++	/* Sometimes, the power down bit doesn't clear immediately, and
++	 * a read of this register causes the bit not to clear. Delay
++	 * 100us to allow the PHY to come out of power down mode before
++	 * the next access.
++	 */
++	udelay(100);
++
+ 	if (phydev->drv->phy_id != MARVELL_PHY_ID_88X3310 ||
+ 	    priv->firmware_ver < 0x00030000)
+ 		return ret;
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index f9b3eb2d8d8b0..41ee56015a45e 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -523,7 +523,7 @@ static int tap_open(struct inode *inode, struct file *file)
+ 	q->sock.state = SS_CONNECTED;
+ 	q->sock.file = file;
+ 	q->sock.ops = &tap_socket_ops;
+-	sock_init_data_uid(&q->sock, &q->sk, inode->i_uid);
++	sock_init_data_uid(&q->sock, &q->sk, current_fsuid());
+ 	q->sk.sk_write_space = tap_sock_write_space;
+ 	q->sk.sk_destruct = tap_sock_destruct;
+ 	q->flags = IFF_VNET_HDR | IFF_NO_PI | IFF_TAP;
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 8a1619695421b..36c7eae776d44 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -2130,6 +2130,15 @@ static void team_setup_by_port(struct net_device *dev,
+ 	dev->mtu = port_dev->mtu;
+ 	memcpy(dev->broadcast, port_dev->broadcast, port_dev->addr_len);
+ 	eth_hw_addr_inherit(dev, port_dev);
++
++	if (port_dev->flags & IFF_POINTOPOINT) {
++		dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
++		dev->flags |= (IFF_POINTOPOINT | IFF_NOARP);
++	} else if ((port_dev->flags & (IFF_BROADCAST | IFF_MULTICAST)) ==
++		    (IFF_BROADCAST | IFF_MULTICAST)) {
++		dev->flags |= (IFF_BROADCAST | IFF_MULTICAST);
++		dev->flags &= ~(IFF_POINTOPOINT | IFF_NOARP);
++	}
+ }
+ 
+ static int team_dev_type_check_change(struct net_device *dev,
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index f1d46aea8a2ba..0e70877c932c7 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -3457,7 +3457,7 @@ static int tun_chr_open(struct inode *inode, struct file * file)
+ 	tfile->socket.file = file;
+ 	tfile->socket.ops = &tun_socket_ops;
+ 
+-	sock_init_data_uid(&tfile->socket, &tfile->sk, inode->i_uid);
++	sock_init_data_uid(&tfile->socket, &tfile->sk, current_fsuid());
+ 
+ 	tfile->sk.sk_write_space = tun_sock_write_space;
+ 	tfile->sk.sk_sndbuf = INT_MAX;
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index 935cd296887f2..9f3446d6dde76 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -604,9 +604,23 @@ static const struct usb_device_id	products[] = {
+ 	.match_flags	=   USB_DEVICE_ID_MATCH_INT_INFO
+ 			  | USB_DEVICE_ID_MATCH_DEVICE,
+ 	.idVendor		= 0x04DD,
++	.idProduct		= 0x8005,   /* A-300 */
++	ZAURUS_FAKE_INTERFACE,
++	.driver_info        = 0,
++}, {
++	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
++			  | USB_DEVICE_ID_MATCH_DEVICE,
++	.idVendor		= 0x04DD,
+ 	.idProduct		= 0x8006,	/* B-500/SL-5600 */
+ 	ZAURUS_MASTER_INTERFACE,
+ 	.driver_info		= 0,
++}, {
++	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
++			  | USB_DEVICE_ID_MATCH_DEVICE,
++	.idVendor		= 0x04DD,
++	.idProduct		= 0x8006,   /* B-500/SL-5600 */
++	ZAURUS_FAKE_INTERFACE,
++	.driver_info        = 0,
+ }, {
+ 	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
+ 			  | USB_DEVICE_ID_MATCH_DEVICE,
+@@ -614,6 +628,13 @@ static const struct usb_device_id	products[] = {
+ 	.idProduct		= 0x8007,	/* C-700 */
+ 	ZAURUS_MASTER_INTERFACE,
+ 	.driver_info		= 0,
++}, {
++	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
++			  | USB_DEVICE_ID_MATCH_DEVICE,
++	.idVendor		= 0x04DD,
++	.idProduct		= 0x8007,   /* C-700 */
++	ZAURUS_FAKE_INTERFACE,
++	.driver_info        = 0,
+ }, {
+ 	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
+ 		 | USB_DEVICE_ID_MATCH_DEVICE,
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 43d70348343b2..481a41d879b53 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1738,6 +1738,10 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ 	} else if (!info->in || !info->out)
+ 		status = usbnet_get_endpoints (dev, udev);
+ 	else {
++		u8 ep_addrs[3] = {
++			info->in + USB_DIR_IN, info->out + USB_DIR_OUT, 0
++		};
++
+ 		dev->in = usb_rcvbulkpipe (xdev, info->in);
+ 		dev->out = usb_sndbulkpipe (xdev, info->out);
+ 		if (!(info->flags & FLAG_NO_SETINT))
+@@ -1747,6 +1751,8 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ 		else
+ 			status = 0;
+ 
++		if (status == 0 && !usb_check_bulk_endpoints(udev, ep_addrs))
++			status = -EINVAL;
+ 	}
+ 	if (status >= 0 && dev->status)
+ 		status = init_status (dev, udev);
+diff --git a/drivers/net/usb/zaurus.c b/drivers/net/usb/zaurus.c
+index 7984f2157d222..df3617c4c44e8 100644
+--- a/drivers/net/usb/zaurus.c
++++ b/drivers/net/usb/zaurus.c
+@@ -289,9 +289,23 @@ static const struct usb_device_id	products [] = {
+ 	.match_flags	=   USB_DEVICE_ID_MATCH_INT_INFO
+ 			  | USB_DEVICE_ID_MATCH_DEVICE,
+ 	.idVendor		= 0x04DD,
++	.idProduct		= 0x8005,	/* A-300 */
++	ZAURUS_FAKE_INTERFACE,
++	.driver_info = (unsigned long)&bogus_mdlm_info,
++}, {
++	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
++			  | USB_DEVICE_ID_MATCH_DEVICE,
++	.idVendor		= 0x04DD,
+ 	.idProduct		= 0x8006,	/* B-500/SL-5600 */
+ 	ZAURUS_MASTER_INTERFACE,
+ 	.driver_info = ZAURUS_PXA_INFO,
++}, {
++	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
++			  | USB_DEVICE_ID_MATCH_DEVICE,
++	.idVendor		= 0x04DD,
++	.idProduct		= 0x8006,	/* B-500/SL-5600 */
++	ZAURUS_FAKE_INTERFACE,
++	.driver_info = (unsigned long)&bogus_mdlm_info,
+ }, {
+ 	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
+ 	          | USB_DEVICE_ID_MATCH_DEVICE,
+@@ -299,6 +313,13 @@ static const struct usb_device_id	products [] = {
+ 	.idProduct		= 0x8007,	/* C-700 */
+ 	ZAURUS_MASTER_INTERFACE,
+ 	.driver_info = ZAURUS_PXA_INFO,
++}, {
++	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
++			  | USB_DEVICE_ID_MATCH_DEVICE,
++	.idVendor		= 0x04DD,
++	.idProduct		= 0x8007,	/* C-700 */
++	ZAURUS_FAKE_INTERFACE,
++	.driver_info = (unsigned long)&bogus_mdlm_info,
+ }, {
+ 	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
+ 		 | USB_DEVICE_ID_MATCH_DEVICE,
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 119a32f34b539..165149bcf0b1a 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -3220,6 +3220,8 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 		}
+ 	}
+ 
++	_virtnet_set_queues(vi, vi->curr_queue_pairs);
++
+ 	/* serialize netdev register + virtio_device_ready() with ndo_open() */
+ 	rtnl_lock();
+ 
+@@ -3240,8 +3242,6 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 		goto free_unregister_netdev;
+ 	}
+ 
+-	virtnet_set_queues(vi, vi->curr_queue_pairs);
+-
+ 	/* Assume link up if device can't report link status,
+ 	   otherwise get link status from config. */
+ 	netif_carrier_off(dev);
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+deleted file mode 100644
+index 72d670667f64f..0000000000000
+--- a/drivers/net/vxlan.c
++++ /dev/null
+@@ -1,4829 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * VXLAN: Virtual eXtensible Local Area Network
+- *
+- * Copyright (c) 2012-2013 Vyatta Inc.
+- */
+-
+-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+-
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-#include <linux/errno.h>
+-#include <linux/slab.h>
+-#include <linux/udp.h>
+-#include <linux/igmp.h>
+-#include <linux/if_ether.h>
+-#include <linux/ethtool.h>
+-#include <net/arp.h>
+-#include <net/ndisc.h>
+-#include <net/ipv6_stubs.h>
+-#include <net/ip.h>
+-#include <net/icmp.h>
+-#include <net/rtnetlink.h>
+-#include <net/inet_ecn.h>
+-#include <net/net_namespace.h>
+-#include <net/netns/generic.h>
+-#include <net/tun_proto.h>
+-#include <net/vxlan.h>
+-#include <net/nexthop.h>
+-
+-#if IS_ENABLED(CONFIG_IPV6)
+-#include <net/ip6_tunnel.h>
+-#include <net/ip6_checksum.h>
+-#endif
+-
+-#define VXLAN_VERSION	"0.1"
+-
+-#define PORT_HASH_BITS	8
+-#define PORT_HASH_SIZE  (1<<PORT_HASH_BITS)
+-#define FDB_AGE_DEFAULT 300 /* 5 min */
+-#define FDB_AGE_INTERVAL (10 * HZ)	/* rescan interval */
+-
+-/* UDP port for VXLAN traffic.
+- * The IANA assigned port is 4789, but the Linux default is 8472
+- * for compatibility with early adopters.
+- */
+-static unsigned short vxlan_port __read_mostly = 8472;
+-module_param_named(udp_port, vxlan_port, ushort, 0444);
+-MODULE_PARM_DESC(udp_port, "Destination UDP port");
+-
+-static bool log_ecn_error = true;
+-module_param(log_ecn_error, bool, 0644);
+-MODULE_PARM_DESC(log_ecn_error, "Log packets received with corrupted ECN");
+-
+-static unsigned int vxlan_net_id;
+-static struct rtnl_link_ops vxlan_link_ops;
+-
+-static const u8 all_zeros_mac[ETH_ALEN + 2];
+-
+-static int vxlan_sock_add(struct vxlan_dev *vxlan);
+-
+-static void vxlan_vs_del_dev(struct vxlan_dev *vxlan);
+-
+-/* per-network namespace private data for this module */
+-struct vxlan_net {
+-	struct list_head  vxlan_list;
+-	struct hlist_head sock_list[PORT_HASH_SIZE];
+-	spinlock_t	  sock_lock;
+-};
+-
+-/* Forwarding table entry */
+-struct vxlan_fdb {
+-	struct hlist_node hlist;	/* linked list of entries */
+-	struct rcu_head	  rcu;
+-	unsigned long	  updated;	/* jiffies */
+-	unsigned long	  used;
+-	struct list_head  remotes;
+-	u8		  eth_addr[ETH_ALEN];
+-	u16		  state;	/* see ndm_state */
+-	__be32		  vni;
+-	u16		  flags;	/* see ndm_flags and below */
+-	struct list_head  nh_list;
+-	struct nexthop __rcu *nh;
+-	struct vxlan_dev  __rcu *vdev;
+-};
+-
+-#define NTF_VXLAN_ADDED_BY_USER 0x100
+-
+-/* salt for hash table */
+-static u32 vxlan_salt __read_mostly;
+-
+-static inline bool vxlan_collect_metadata(struct vxlan_sock *vs)
+-{
+-	return vs->flags & VXLAN_F_COLLECT_METADATA ||
+-	       ip_tunnel_collect_metadata();
+-}
+-
+-#if IS_ENABLED(CONFIG_IPV6)
+-static inline
+-bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
+-{
+-	if (a->sa.sa_family != b->sa.sa_family)
+-		return false;
+-	if (a->sa.sa_family == AF_INET6)
+-		return ipv6_addr_equal(&a->sin6.sin6_addr, &b->sin6.sin6_addr);
+-	else
+-		return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
+-}
+-
+-static int vxlan_nla_get_addr(union vxlan_addr *ip, struct nlattr *nla)
+-{
+-	if (nla_len(nla) >= sizeof(struct in6_addr)) {
+-		ip->sin6.sin6_addr = nla_get_in6_addr(nla);
+-		ip->sa.sa_family = AF_INET6;
+-		return 0;
+-	} else if (nla_len(nla) >= sizeof(__be32)) {
+-		ip->sin.sin_addr.s_addr = nla_get_in_addr(nla);
+-		ip->sa.sa_family = AF_INET;
+-		return 0;
+-	} else {
+-		return -EAFNOSUPPORT;
+-	}
+-}
+-
+-static int vxlan_nla_put_addr(struct sk_buff *skb, int attr,
+-			      const union vxlan_addr *ip)
+-{
+-	if (ip->sa.sa_family == AF_INET6)
+-		return nla_put_in6_addr(skb, attr, &ip->sin6.sin6_addr);
+-	else
+-		return nla_put_in_addr(skb, attr, ip->sin.sin_addr.s_addr);
+-}
+-
+-#else /* !CONFIG_IPV6 */
+-
+-static inline
+-bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
+-{
+-	return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
+-}
+-
+-static int vxlan_nla_get_addr(union vxlan_addr *ip, struct nlattr *nla)
+-{
+-	if (nla_len(nla) >= sizeof(struct in6_addr)) {
+-		return -EAFNOSUPPORT;
+-	} else if (nla_len(nla) >= sizeof(__be32)) {
+-		ip->sin.sin_addr.s_addr = nla_get_in_addr(nla);
+-		ip->sa.sa_family = AF_INET;
+-		return 0;
+-	} else {
+-		return -EAFNOSUPPORT;
+-	}
+-}
+-
+-static int vxlan_nla_put_addr(struct sk_buff *skb, int attr,
+-			      const union vxlan_addr *ip)
+-{
+-	return nla_put_in_addr(skb, attr, ip->sin.sin_addr.s_addr);
+-}
+-#endif
+-
+-/* Virtual Network hash table head */
+-static inline struct hlist_head *vni_head(struct vxlan_sock *vs, __be32 vni)
+-{
+-	return &vs->vni_list[hash_32((__force u32)vni, VNI_HASH_BITS)];
+-}
+-
+-/* Socket hash table head */
+-static inline struct hlist_head *vs_head(struct net *net, __be16 port)
+-{
+-	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
+-
+-	return &vn->sock_list[hash_32(ntohs(port), PORT_HASH_BITS)];
+-}
+-
+-/* First remote destination for a forwarding entry.
+- * Guaranteed to be non-NULL because remotes are never deleted.
+- */
+-static inline struct vxlan_rdst *first_remote_rcu(struct vxlan_fdb *fdb)
+-{
+-	if (rcu_access_pointer(fdb->nh))
+-		return NULL;
+-	return list_entry_rcu(fdb->remotes.next, struct vxlan_rdst, list);
+-}
+-
+-static inline struct vxlan_rdst *first_remote_rtnl(struct vxlan_fdb *fdb)
+-{
+-	if (rcu_access_pointer(fdb->nh))
+-		return NULL;
+-	return list_first_entry(&fdb->remotes, struct vxlan_rdst, list);
+-}
+-
+-/* Find VXLAN socket based on network namespace, address family, UDP port,
+- * enabled unshareable flags and socket device binding (see l3mdev with
+- * non-default VRF).
+- */
+-static struct vxlan_sock *vxlan_find_sock(struct net *net, sa_family_t family,
+-					  __be16 port, u32 flags, int ifindex)
+-{
+-	struct vxlan_sock *vs;
+-
+-	flags &= VXLAN_F_RCV_FLAGS;
+-
+-	hlist_for_each_entry_rcu(vs, vs_head(net, port), hlist) {
+-		if (inet_sk(vs->sock->sk)->inet_sport == port &&
+-		    vxlan_get_sk_family(vs) == family &&
+-		    vs->flags == flags &&
+-		    vs->sock->sk->sk_bound_dev_if == ifindex)
+-			return vs;
+-	}
+-	return NULL;
+-}
+-
+-static struct vxlan_dev *vxlan_vs_find_vni(struct vxlan_sock *vs, int ifindex,
+-					   __be32 vni)
+-{
+-	struct vxlan_dev_node *node;
+-
+-	/* For flow based devices, map all packets to VNI 0 */
+-	if (vs->flags & VXLAN_F_COLLECT_METADATA)
+-		vni = 0;
+-
+-	hlist_for_each_entry_rcu(node, vni_head(vs, vni), hlist) {
+-		if (node->vxlan->default_dst.remote_vni != vni)
+-			continue;
+-
+-		if (IS_ENABLED(CONFIG_IPV6)) {
+-			const struct vxlan_config *cfg = &node->vxlan->cfg;
+-
+-			if ((cfg->flags & VXLAN_F_IPV6_LINKLOCAL) &&
+-			    cfg->remote_ifindex != ifindex)
+-				continue;
+-		}
+-
+-		return node->vxlan;
+-	}
+-
+-	return NULL;
+-}
+-
+-/* Look up VNI in a per net namespace table */
+-static struct vxlan_dev *vxlan_find_vni(struct net *net, int ifindex,
+-					__be32 vni, sa_family_t family,
+-					__be16 port, u32 flags)
+-{
+-	struct vxlan_sock *vs;
+-
+-	vs = vxlan_find_sock(net, family, port, flags, ifindex);
+-	if (!vs)
+-		return NULL;
+-
+-	return vxlan_vs_find_vni(vs, ifindex, vni);
+-}
+-
+-/* Fill in neighbour message in skbuff. */
+-static int vxlan_fdb_info(struct sk_buff *skb, struct vxlan_dev *vxlan,
+-			  const struct vxlan_fdb *fdb,
+-			  u32 portid, u32 seq, int type, unsigned int flags,
+-			  const struct vxlan_rdst *rdst)
+-{
+-	unsigned long now = jiffies;
+-	struct nda_cacheinfo ci;
+-	bool send_ip, send_eth;
+-	struct nlmsghdr *nlh;
+-	struct nexthop *nh;
+-	struct ndmsg *ndm;
+-	int nh_family;
+-	u32 nh_id;
+-
+-	nlh = nlmsg_put(skb, portid, seq, type, sizeof(*ndm), flags);
+-	if (nlh == NULL)
+-		return -EMSGSIZE;
+-
+-	ndm = nlmsg_data(nlh);
+-	memset(ndm, 0, sizeof(*ndm));
+-
+-	send_eth = send_ip = true;
+-
+-	rcu_read_lock();
+-	nh = rcu_dereference(fdb->nh);
+-	if (nh) {
+-		nh_family = nexthop_get_family(nh);
+-		nh_id = nh->id;
+-	}
+-	rcu_read_unlock();
+-
+-	if (type == RTM_GETNEIGH) {
+-		if (rdst) {
+-			send_ip = !vxlan_addr_any(&rdst->remote_ip);
+-			ndm->ndm_family = send_ip ? rdst->remote_ip.sa.sa_family : AF_INET;
+-		} else if (nh) {
+-			ndm->ndm_family = nh_family;
+-		}
+-		send_eth = !is_zero_ether_addr(fdb->eth_addr);
+-	} else
+-		ndm->ndm_family	= AF_BRIDGE;
+-	ndm->ndm_state = fdb->state;
+-	ndm->ndm_ifindex = vxlan->dev->ifindex;
+-	ndm->ndm_flags = fdb->flags;
+-	if (rdst && rdst->offloaded)
+-		ndm->ndm_flags |= NTF_OFFLOADED;
+-	ndm->ndm_type = RTN_UNICAST;
+-
+-	if (!net_eq(dev_net(vxlan->dev), vxlan->net) &&
+-	    nla_put_s32(skb, NDA_LINK_NETNSID,
+-			peernet2id(dev_net(vxlan->dev), vxlan->net)))
+-		goto nla_put_failure;
+-
+-	if (send_eth && nla_put(skb, NDA_LLADDR, ETH_ALEN, &fdb->eth_addr))
+-		goto nla_put_failure;
+-	if (nh) {
+-		if (nla_put_u32(skb, NDA_NH_ID, nh_id))
+-			goto nla_put_failure;
+-	} else if (rdst) {
+-		if (send_ip && vxlan_nla_put_addr(skb, NDA_DST,
+-						  &rdst->remote_ip))
+-			goto nla_put_failure;
+-
+-		if (rdst->remote_port &&
+-		    rdst->remote_port != vxlan->cfg.dst_port &&
+-		    nla_put_be16(skb, NDA_PORT, rdst->remote_port))
+-			goto nla_put_failure;
+-		if (rdst->remote_vni != vxlan->default_dst.remote_vni &&
+-		    nla_put_u32(skb, NDA_VNI, be32_to_cpu(rdst->remote_vni)))
+-			goto nla_put_failure;
+-		if (rdst->remote_ifindex &&
+-		    nla_put_u32(skb, NDA_IFINDEX, rdst->remote_ifindex))
+-			goto nla_put_failure;
+-	}
+-
+-	if ((vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA) && fdb->vni &&
+-	    nla_put_u32(skb, NDA_SRC_VNI,
+-			be32_to_cpu(fdb->vni)))
+-		goto nla_put_failure;
+-
+-	ci.ndm_used	 = jiffies_to_clock_t(now - fdb->used);
+-	ci.ndm_confirmed = 0;
+-	ci.ndm_updated	 = jiffies_to_clock_t(now - fdb->updated);
+-	ci.ndm_refcnt	 = 0;
+-
+-	if (nla_put(skb, NDA_CACHEINFO, sizeof(ci), &ci))
+-		goto nla_put_failure;
+-
+-	nlmsg_end(skb, nlh);
+-	return 0;
+-
+-nla_put_failure:
+-	nlmsg_cancel(skb, nlh);
+-	return -EMSGSIZE;
+-}
+-
+-static inline size_t vxlan_nlmsg_size(void)
+-{
+-	return NLMSG_ALIGN(sizeof(struct ndmsg))
+-		+ nla_total_size(ETH_ALEN) /* NDA_LLADDR */
+-		+ nla_total_size(sizeof(struct in6_addr)) /* NDA_DST */
+-		+ nla_total_size(sizeof(__be16)) /* NDA_PORT */
+-		+ nla_total_size(sizeof(__be32)) /* NDA_VNI */
+-		+ nla_total_size(sizeof(__u32)) /* NDA_IFINDEX */
+-		+ nla_total_size(sizeof(__s32)) /* NDA_LINK_NETNSID */
+-		+ nla_total_size(sizeof(struct nda_cacheinfo));
+-}
+-
+-static void __vxlan_fdb_notify(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb,
+-			       struct vxlan_rdst *rd, int type)
+-{
+-	struct net *net = dev_net(vxlan->dev);
+-	struct sk_buff *skb;
+-	int err = -ENOBUFS;
+-
+-	skb = nlmsg_new(vxlan_nlmsg_size(), GFP_ATOMIC);
+-	if (skb == NULL)
+-		goto errout;
+-
+-	err = vxlan_fdb_info(skb, vxlan, fdb, 0, 0, type, 0, rd);
+-	if (err < 0) {
+-		/* -EMSGSIZE implies BUG in vxlan_nlmsg_size() */
+-		WARN_ON(err == -EMSGSIZE);
+-		kfree_skb(skb);
+-		goto errout;
+-	}
+-
+-	rtnl_notify(skb, net, 0, RTNLGRP_NEIGH, NULL, GFP_ATOMIC);
+-	return;
+-errout:
+-	if (err < 0)
+-		rtnl_set_sk_err(net, RTNLGRP_NEIGH, err);
+-}
+-
+-static void vxlan_fdb_switchdev_notifier_info(const struct vxlan_dev *vxlan,
+-			    const struct vxlan_fdb *fdb,
+-			    const struct vxlan_rdst *rd,
+-			    struct netlink_ext_ack *extack,
+-			    struct switchdev_notifier_vxlan_fdb_info *fdb_info)
+-{
+-	fdb_info->info.dev = vxlan->dev;
+-	fdb_info->info.extack = extack;
+-	fdb_info->remote_ip = rd->remote_ip;
+-	fdb_info->remote_port = rd->remote_port;
+-	fdb_info->remote_vni = rd->remote_vni;
+-	fdb_info->remote_ifindex = rd->remote_ifindex;
+-	memcpy(fdb_info->eth_addr, fdb->eth_addr, ETH_ALEN);
+-	fdb_info->vni = fdb->vni;
+-	fdb_info->offloaded = rd->offloaded;
+-	fdb_info->added_by_user = fdb->flags & NTF_VXLAN_ADDED_BY_USER;
+-}
+-
+-static int vxlan_fdb_switchdev_call_notifiers(struct vxlan_dev *vxlan,
+-					      struct vxlan_fdb *fdb,
+-					      struct vxlan_rdst *rd,
+-					      bool adding,
+-					      struct netlink_ext_ack *extack)
+-{
+-	struct switchdev_notifier_vxlan_fdb_info info;
+-	enum switchdev_notifier_type notifier_type;
+-	int ret;
+-
+-	if (WARN_ON(!rd))
+-		return 0;
+-
+-	notifier_type = adding ? SWITCHDEV_VXLAN_FDB_ADD_TO_DEVICE
+-			       : SWITCHDEV_VXLAN_FDB_DEL_TO_DEVICE;
+-	vxlan_fdb_switchdev_notifier_info(vxlan, fdb, rd, NULL, &info);
+-	ret = call_switchdev_notifiers(notifier_type, vxlan->dev,
+-				       &info.info, extack);
+-	return notifier_to_errno(ret);
+-}
+-
+-static int vxlan_fdb_notify(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb,
+-			    struct vxlan_rdst *rd, int type, bool swdev_notify,
+-			    struct netlink_ext_ack *extack)
+-{
+-	int err;
+-
+-	if (swdev_notify && rd) {
+-		switch (type) {
+-		case RTM_NEWNEIGH:
+-			err = vxlan_fdb_switchdev_call_notifiers(vxlan, fdb, rd,
+-								 true, extack);
+-			if (err)
+-				return err;
+-			break;
+-		case RTM_DELNEIGH:
+-			vxlan_fdb_switchdev_call_notifiers(vxlan, fdb, rd,
+-							   false, extack);
+-			break;
+-		}
+-	}
+-
+-	__vxlan_fdb_notify(vxlan, fdb, rd, type);
+-	return 0;
+-}
+-
+-static void vxlan_ip_miss(struct net_device *dev, union vxlan_addr *ipa)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct vxlan_fdb f = {
+-		.state = NUD_STALE,
+-	};
+-	struct vxlan_rdst remote = {
+-		.remote_ip = *ipa, /* goes to NDA_DST */
+-		.remote_vni = cpu_to_be32(VXLAN_N_VID),
+-	};
+-
+-	vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH, true, NULL);
+-}
+-
+-static void vxlan_fdb_miss(struct vxlan_dev *vxlan, const u8 eth_addr[ETH_ALEN])
+-{
+-	struct vxlan_fdb f = {
+-		.state = NUD_STALE,
+-	};
+-	struct vxlan_rdst remote = { };
+-
+-	memcpy(f.eth_addr, eth_addr, ETH_ALEN);
+-
+-	vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH, true, NULL);
+-}
+-
+-/* Hash Ethernet address */
+-static u32 eth_hash(const unsigned char *addr)
+-{
+-	u64 value = get_unaligned((u64 *)addr);
+-
+-	/* only want 6 bytes */
+-#ifdef __BIG_ENDIAN
+-	value >>= 16;
+-#else
+-	value <<= 16;
+-#endif
+-	return hash_64(value, FDB_HASH_BITS);
+-}
+-
+-static u32 eth_vni_hash(const unsigned char *addr, __be32 vni)
+-{
+-	/* use 1 byte of OUI and 3 bytes of NIC */
+-	u32 key = get_unaligned((u32 *)(addr + 2));
+-
+-	return jhash_2words(key, vni, vxlan_salt) & (FDB_HASH_SIZE - 1);
+-}
+-
+-static u32 fdb_head_index(struct vxlan_dev *vxlan, const u8 *mac, __be32 vni)
+-{
+-	if (vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA)
+-		return eth_vni_hash(mac, vni);
+-	else
+-		return eth_hash(mac);
+-}
+-
+-/* Hash chain to use given mac address */
+-static inline struct hlist_head *vxlan_fdb_head(struct vxlan_dev *vxlan,
+-						const u8 *mac, __be32 vni)
+-{
+-	return &vxlan->fdb_head[fdb_head_index(vxlan, mac, vni)];
+-}
+-
+-/* Look up Ethernet address in forwarding table */
+-static struct vxlan_fdb *__vxlan_find_mac(struct vxlan_dev *vxlan,
+-					  const u8 *mac, __be32 vni)
+-{
+-	struct hlist_head *head = vxlan_fdb_head(vxlan, mac, vni);
+-	struct vxlan_fdb *f;
+-
+-	hlist_for_each_entry_rcu(f, head, hlist) {
+-		if (ether_addr_equal(mac, f->eth_addr)) {
+-			if (vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA) {
+-				if (vni == f->vni)
+-					return f;
+-			} else {
+-				return f;
+-			}
+-		}
+-	}
+-
+-	return NULL;
+-}
+-
+-static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
+-					const u8 *mac, __be32 vni)
+-{
+-	struct vxlan_fdb *f;
+-
+-	f = __vxlan_find_mac(vxlan, mac, vni);
+-	if (f && f->used != jiffies)
+-		f->used = jiffies;
+-
+-	return f;
+-}
+-
+-/* caller should hold vxlan->hash_lock */
+-static struct vxlan_rdst *vxlan_fdb_find_rdst(struct vxlan_fdb *f,
+-					      union vxlan_addr *ip, __be16 port,
+-					      __be32 vni, __u32 ifindex)
+-{
+-	struct vxlan_rdst *rd;
+-
+-	list_for_each_entry(rd, &f->remotes, list) {
+-		if (vxlan_addr_equal(&rd->remote_ip, ip) &&
+-		    rd->remote_port == port &&
+-		    rd->remote_vni == vni &&
+-		    rd->remote_ifindex == ifindex)
+-			return rd;
+-	}
+-
+-	return NULL;
+-}
+-
+-int vxlan_fdb_find_uc(struct net_device *dev, const u8 *mac, __be32 vni,
+-		      struct switchdev_notifier_vxlan_fdb_info *fdb_info)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	u8 eth_addr[ETH_ALEN + 2] = { 0 };
+-	struct vxlan_rdst *rdst;
+-	struct vxlan_fdb *f;
+-	int rc = 0;
+-
+-	if (is_multicast_ether_addr(mac) ||
+-	    is_zero_ether_addr(mac))
+-		return -EINVAL;
+-
+-	ether_addr_copy(eth_addr, mac);
+-
+-	rcu_read_lock();
+-
+-	f = __vxlan_find_mac(vxlan, eth_addr, vni);
+-	if (!f) {
+-		rc = -ENOENT;
+-		goto out;
+-	}
+-
+-	rdst = first_remote_rcu(f);
+-	vxlan_fdb_switchdev_notifier_info(vxlan, f, rdst, NULL, fdb_info);
+-
+-out:
+-	rcu_read_unlock();
+-	return rc;
+-}
+-EXPORT_SYMBOL_GPL(vxlan_fdb_find_uc);
+-
+-static int vxlan_fdb_notify_one(struct notifier_block *nb,
+-				const struct vxlan_dev *vxlan,
+-				const struct vxlan_fdb *f,
+-				const struct vxlan_rdst *rdst,
+-				struct netlink_ext_ack *extack)
+-{
+-	struct switchdev_notifier_vxlan_fdb_info fdb_info;
+-	int rc;
+-
+-	vxlan_fdb_switchdev_notifier_info(vxlan, f, rdst, extack, &fdb_info);
+-	rc = nb->notifier_call(nb, SWITCHDEV_VXLAN_FDB_ADD_TO_DEVICE,
+-			       &fdb_info);
+-	return notifier_to_errno(rc);
+-}
+-
+-int vxlan_fdb_replay(const struct net_device *dev, __be32 vni,
+-		     struct notifier_block *nb,
+-		     struct netlink_ext_ack *extack)
+-{
+-	struct vxlan_dev *vxlan;
+-	struct vxlan_rdst *rdst;
+-	struct vxlan_fdb *f;
+-	unsigned int h;
+-	int rc = 0;
+-
+-	if (!netif_is_vxlan(dev))
+-		return -EINVAL;
+-	vxlan = netdev_priv(dev);
+-
+-	for (h = 0; h < FDB_HASH_SIZE; ++h) {
+-		spin_lock_bh(&vxlan->hash_lock[h]);
+-		hlist_for_each_entry(f, &vxlan->fdb_head[h], hlist) {
+-			if (f->vni == vni) {
+-				list_for_each_entry(rdst, &f->remotes, list) {
+-					rc = vxlan_fdb_notify_one(nb, vxlan,
+-								  f, rdst,
+-								  extack);
+-					if (rc)
+-						goto unlock;
+-				}
+-			}
+-		}
+-		spin_unlock_bh(&vxlan->hash_lock[h]);
+-	}
+-	return 0;
+-
+-unlock:
+-	spin_unlock_bh(&vxlan->hash_lock[h]);
+-	return rc;
+-}
+-EXPORT_SYMBOL_GPL(vxlan_fdb_replay);
+-
+-void vxlan_fdb_clear_offload(const struct net_device *dev, __be32 vni)
+-{
+-	struct vxlan_dev *vxlan;
+-	struct vxlan_rdst *rdst;
+-	struct vxlan_fdb *f;
+-	unsigned int h;
+-
+-	if (!netif_is_vxlan(dev))
+-		return;
+-	vxlan = netdev_priv(dev);
+-
+-	for (h = 0; h < FDB_HASH_SIZE; ++h) {
+-		spin_lock_bh(&vxlan->hash_lock[h]);
+-		hlist_for_each_entry(f, &vxlan->fdb_head[h], hlist)
+-			if (f->vni == vni)
+-				list_for_each_entry(rdst, &f->remotes, list)
+-					rdst->offloaded = false;
+-		spin_unlock_bh(&vxlan->hash_lock[h]);
+-	}
+-
+-}
+-EXPORT_SYMBOL_GPL(vxlan_fdb_clear_offload);
+-
+-/* Replace destination of unicast mac */
+-static int vxlan_fdb_replace(struct vxlan_fdb *f,
+-			     union vxlan_addr *ip, __be16 port, __be32 vni,
+-			     __u32 ifindex, struct vxlan_rdst *oldrd)
+-{
+-	struct vxlan_rdst *rd;
+-
+-	rd = vxlan_fdb_find_rdst(f, ip, port, vni, ifindex);
+-	if (rd)
+-		return 0;
+-
+-	rd = list_first_entry_or_null(&f->remotes, struct vxlan_rdst, list);
+-	if (!rd)
+-		return 0;
+-
+-	*oldrd = *rd;
+-	dst_cache_reset(&rd->dst_cache);
+-	rd->remote_ip = *ip;
+-	rd->remote_port = port;
+-	rd->remote_vni = vni;
+-	rd->remote_ifindex = ifindex;
+-	rd->offloaded = false;
+-	return 1;
+-}
+-
+-/* Add/update destinations for multicast */
+-static int vxlan_fdb_append(struct vxlan_fdb *f,
+-			    union vxlan_addr *ip, __be16 port, __be32 vni,
+-			    __u32 ifindex, struct vxlan_rdst **rdp)
+-{
+-	struct vxlan_rdst *rd;
+-
+-	rd = vxlan_fdb_find_rdst(f, ip, port, vni, ifindex);
+-	if (rd)
+-		return 0;
+-
+-	rd = kmalloc(sizeof(*rd), GFP_ATOMIC);
+-	if (rd == NULL)
+-		return -ENOMEM;
+-
+-	if (dst_cache_init(&rd->dst_cache, GFP_ATOMIC)) {
+-		kfree(rd);
+-		return -ENOMEM;
+-	}
+-
+-	rd->remote_ip = *ip;
+-	rd->remote_port = port;
+-	rd->offloaded = false;
+-	rd->remote_vni = vni;
+-	rd->remote_ifindex = ifindex;
+-
+-	list_add_tail_rcu(&rd->list, &f->remotes);
+-
+-	*rdp = rd;
+-	return 1;
+-}
+-
+-static struct vxlanhdr *vxlan_gro_remcsum(struct sk_buff *skb,
+-					  unsigned int off,
+-					  struct vxlanhdr *vh, size_t hdrlen,
+-					  __be32 vni_field,
+-					  struct gro_remcsum *grc,
+-					  bool nopartial)
+-{
+-	size_t start, offset;
+-
+-	if (skb->remcsum_offload)
+-		return vh;
+-
+-	if (!NAPI_GRO_CB(skb)->csum_valid)
+-		return NULL;
+-
+-	start = vxlan_rco_start(vni_field);
+-	offset = start + vxlan_rco_offset(vni_field);
+-
+-	vh = skb_gro_remcsum_process(skb, (void *)vh, off, hdrlen,
+-				     start, offset, grc, nopartial);
+-
+-	skb->remcsum_offload = 1;
+-
+-	return vh;
+-}
+-
+-static struct sk_buff *vxlan_gro_receive(struct sock *sk,
+-					 struct list_head *head,
+-					 struct sk_buff *skb)
+-{
+-	struct sk_buff *pp = NULL;
+-	struct sk_buff *p;
+-	struct vxlanhdr *vh, *vh2;
+-	unsigned int hlen, off_vx;
+-	int flush = 1;
+-	struct vxlan_sock *vs = rcu_dereference_sk_user_data(sk);
+-	__be32 flags;
+-	struct gro_remcsum grc;
+-
+-	skb_gro_remcsum_init(&grc);
+-
+-	off_vx = skb_gro_offset(skb);
+-	hlen = off_vx + sizeof(*vh);
+-	vh   = skb_gro_header_fast(skb, off_vx);
+-	if (skb_gro_header_hard(skb, hlen)) {
+-		vh = skb_gro_header_slow(skb, hlen, off_vx);
+-		if (unlikely(!vh))
+-			goto out;
+-	}
+-
+-	skb_gro_postpull_rcsum(skb, vh, sizeof(struct vxlanhdr));
+-
+-	flags = vh->vx_flags;
+-
+-	if ((flags & VXLAN_HF_RCO) && (vs->flags & VXLAN_F_REMCSUM_RX)) {
+-		vh = vxlan_gro_remcsum(skb, off_vx, vh, sizeof(struct vxlanhdr),
+-				       vh->vx_vni, &grc,
+-				       !!(vs->flags &
+-					  VXLAN_F_REMCSUM_NOPARTIAL));
+-
+-		if (!vh)
+-			goto out;
+-	}
+-
+-	skb_gro_pull(skb, sizeof(struct vxlanhdr)); /* pull vxlan header */
+-
+-	list_for_each_entry(p, head, list) {
+-		if (!NAPI_GRO_CB(p)->same_flow)
+-			continue;
+-
+-		vh2 = (struct vxlanhdr *)(p->data + off_vx);
+-		if (vh->vx_flags != vh2->vx_flags ||
+-		    vh->vx_vni != vh2->vx_vni) {
+-			NAPI_GRO_CB(p)->same_flow = 0;
+-			continue;
+-		}
+-	}
+-
+-	pp = call_gro_receive(eth_gro_receive, head, skb);
+-	flush = 0;
+-
+-out:
+-	skb_gro_flush_final_remcsum(skb, pp, flush, &grc);
+-
+-	return pp;
+-}
+-
+-static int vxlan_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff)
+-{
+-	/* Sets 'skb->inner_mac_header' since we are always called with
+-	 * 'skb->encapsulation' set.
+-	 */
+-	return eth_gro_complete(skb, nhoff + sizeof(struct vxlanhdr));
+-}
+-
+-static struct vxlan_fdb *vxlan_fdb_alloc(struct vxlan_dev *vxlan, const u8 *mac,
+-					 __u16 state, __be32 src_vni,
+-					 __u16 ndm_flags)
+-{
+-	struct vxlan_fdb *f;
+-
+-	f = kmalloc(sizeof(*f), GFP_ATOMIC);
+-	if (!f)
+-		return NULL;
+-	f->state = state;
+-	f->flags = ndm_flags;
+-	f->updated = f->used = jiffies;
+-	f->vni = src_vni;
+-	f->nh = NULL;
+-	RCU_INIT_POINTER(f->vdev, vxlan);
+-	INIT_LIST_HEAD(&f->nh_list);
+-	INIT_LIST_HEAD(&f->remotes);
+-	memcpy(f->eth_addr, mac, ETH_ALEN);
+-
+-	return f;
+-}
+-
+-static void vxlan_fdb_insert(struct vxlan_dev *vxlan, const u8 *mac,
+-			     __be32 src_vni, struct vxlan_fdb *f)
+-{
+-	++vxlan->addrcnt;
+-	hlist_add_head_rcu(&f->hlist,
+-			   vxlan_fdb_head(vxlan, mac, src_vni));
+-}
+-
+-static int vxlan_fdb_nh_update(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb,
+-			       u32 nhid, struct netlink_ext_ack *extack)
+-{
+-	struct nexthop *old_nh = rtnl_dereference(fdb->nh);
+-	struct nexthop *nh;
+-	int err = -EINVAL;
+-
+-	if (old_nh && old_nh->id == nhid)
+-		return 0;
+-
+-	nh = nexthop_find_by_id(vxlan->net, nhid);
+-	if (!nh) {
+-		NL_SET_ERR_MSG(extack, "Nexthop id does not exist");
+-		goto err_inval;
+-	}
+-
+-	if (nh) {
+-		if (!nexthop_get(nh)) {
+-			NL_SET_ERR_MSG(extack, "Nexthop has been deleted");
+-			nh = NULL;
+-			goto err_inval;
+-		}
+-		if (!nexthop_is_fdb(nh)) {
+-			NL_SET_ERR_MSG(extack, "Nexthop is not a fdb nexthop");
+-			goto err_inval;
+-		}
+-
+-		if (!nexthop_is_multipath(nh)) {
+-			NL_SET_ERR_MSG(extack, "Nexthop is not a multipath group");
+-			goto err_inval;
+-		}
+-
+-		/* check nexthop group family */
+-		switch (vxlan->default_dst.remote_ip.sa.sa_family) {
+-		case AF_INET:
+-			if (!nexthop_has_v4(nh)) {
+-				err = -EAFNOSUPPORT;
+-				NL_SET_ERR_MSG(extack, "Nexthop group family not supported");
+-				goto err_inval;
+-			}
+-			break;
+-		case AF_INET6:
+-			if (nexthop_has_v4(nh)) {
+-				err = -EAFNOSUPPORT;
+-				NL_SET_ERR_MSG(extack, "Nexthop group family not supported");
+-				goto err_inval;
+-			}
+-		}
+-	}
+-
+-	if (old_nh) {
+-		list_del_rcu(&fdb->nh_list);
+-		nexthop_put(old_nh);
+-	}
+-	rcu_assign_pointer(fdb->nh, nh);
+-	list_add_tail_rcu(&fdb->nh_list, &nh->fdb_list);
+-	return 1;
+-
+-err_inval:
+-	if (nh)
+-		nexthop_put(nh);
+-	return err;
+-}
+-
+-static int vxlan_fdb_create(struct vxlan_dev *vxlan,
+-			    const u8 *mac, union vxlan_addr *ip,
+-			    __u16 state, __be16 port, __be32 src_vni,
+-			    __be32 vni, __u32 ifindex, __u16 ndm_flags,
+-			    u32 nhid, struct vxlan_fdb **fdb,
+-			    struct netlink_ext_ack *extack)
+-{
+-	struct vxlan_rdst *rd = NULL;
+-	struct vxlan_fdb *f;
+-	int rc;
+-
+-	if (vxlan->cfg.addrmax &&
+-	    vxlan->addrcnt >= vxlan->cfg.addrmax)
+-		return -ENOSPC;
+-
+-	netdev_dbg(vxlan->dev, "add %pM -> %pIS\n", mac, ip);
+-	f = vxlan_fdb_alloc(vxlan, mac, state, src_vni, ndm_flags);
+-	if (!f)
+-		return -ENOMEM;
+-
+-	if (nhid)
+-		rc = vxlan_fdb_nh_update(vxlan, f, nhid, extack);
+-	else
+-		rc = vxlan_fdb_append(f, ip, port, vni, ifindex, &rd);
+-	if (rc < 0)
+-		goto errout;
+-
+-	*fdb = f;
+-
+-	return 0;
+-
+-errout:
+-	kfree(f);
+-	return rc;
+-}
+-
+-static void __vxlan_fdb_free(struct vxlan_fdb *f)
+-{
+-	struct vxlan_rdst *rd, *nd;
+-	struct nexthop *nh;
+-
+-	nh = rcu_dereference_raw(f->nh);
+-	if (nh) {
+-		rcu_assign_pointer(f->nh, NULL);
+-		rcu_assign_pointer(f->vdev, NULL);
+-		nexthop_put(nh);
+-	}
+-
+-	list_for_each_entry_safe(rd, nd, &f->remotes, list) {
+-		dst_cache_destroy(&rd->dst_cache);
+-		kfree(rd);
+-	}
+-	kfree(f);
+-}
+-
+-static void vxlan_fdb_free(struct rcu_head *head)
+-{
+-	struct vxlan_fdb *f = container_of(head, struct vxlan_fdb, rcu);
+-
+-	__vxlan_fdb_free(f);
+-}
+-
+-static void vxlan_fdb_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f,
+-			      bool do_notify, bool swdev_notify)
+-{
+-	struct vxlan_rdst *rd;
+-
+-	netdev_dbg(vxlan->dev, "delete %pM\n", f->eth_addr);
+-
+-	--vxlan->addrcnt;
+-	if (do_notify) {
+-		if (rcu_access_pointer(f->nh))
+-			vxlan_fdb_notify(vxlan, f, NULL, RTM_DELNEIGH,
+-					 swdev_notify, NULL);
+-		else
+-			list_for_each_entry(rd, &f->remotes, list)
+-				vxlan_fdb_notify(vxlan, f, rd, RTM_DELNEIGH,
+-						 swdev_notify, NULL);
+-	}
+-
+-	hlist_del_rcu(&f->hlist);
+-	list_del_rcu(&f->nh_list);
+-	call_rcu(&f->rcu, vxlan_fdb_free);
+-}
+-
+-static void vxlan_dst_free(struct rcu_head *head)
+-{
+-	struct vxlan_rdst *rd = container_of(head, struct vxlan_rdst, rcu);
+-
+-	dst_cache_destroy(&rd->dst_cache);
+-	kfree(rd);
+-}
+-
+-static int vxlan_fdb_update_existing(struct vxlan_dev *vxlan,
+-				     union vxlan_addr *ip,
+-				     __u16 state, __u16 flags,
+-				     __be16 port, __be32 vni,
+-				     __u32 ifindex, __u16 ndm_flags,
+-				     struct vxlan_fdb *f, u32 nhid,
+-				     bool swdev_notify,
+-				     struct netlink_ext_ack *extack)
+-{
+-	__u16 fdb_flags = (ndm_flags & ~NTF_USE);
+-	struct vxlan_rdst *rd = NULL;
+-	struct vxlan_rdst oldrd;
+-	int notify = 0;
+-	int rc = 0;
+-	int err;
+-
+-	if (nhid && !rcu_access_pointer(f->nh)) {
+-		NL_SET_ERR_MSG(extack,
+-			       "Cannot replace an existing non nexthop fdb with a nexthop");
+-		return -EOPNOTSUPP;
+-	}
+-
+-	if (nhid && (flags & NLM_F_APPEND)) {
+-		NL_SET_ERR_MSG(extack,
+-			       "Cannot append to a nexthop fdb");
+-		return -EOPNOTSUPP;
+-	}
+-
+-	/* Do not allow an externally learned entry to take over an entry added
+-	 * by the user.
+-	 */
+-	if (!(fdb_flags & NTF_EXT_LEARNED) ||
+-	    !(f->flags & NTF_VXLAN_ADDED_BY_USER)) {
+-		if (f->state != state) {
+-			f->state = state;
+-			f->updated = jiffies;
+-			notify = 1;
+-		}
+-		if (f->flags != fdb_flags) {
+-			f->flags = fdb_flags;
+-			f->updated = jiffies;
+-			notify = 1;
+-		}
+-	}
+-
+-	if ((flags & NLM_F_REPLACE)) {
+-		/* Only change unicasts */
+-		if (!(is_multicast_ether_addr(f->eth_addr) ||
+-		      is_zero_ether_addr(f->eth_addr))) {
+-			if (nhid) {
+-				rc = vxlan_fdb_nh_update(vxlan, f, nhid, extack);
+-				if (rc < 0)
+-					return rc;
+-			} else {
+-				rc = vxlan_fdb_replace(f, ip, port, vni,
+-						       ifindex, &oldrd);
+-			}
+-			notify |= rc;
+-		} else {
+-			NL_SET_ERR_MSG(extack, "Cannot replace non-unicast fdb entries");
+-			return -EOPNOTSUPP;
+-		}
+-	}
+-	if ((flags & NLM_F_APPEND) &&
+-	    (is_multicast_ether_addr(f->eth_addr) ||
+-	     is_zero_ether_addr(f->eth_addr))) {
+-		rc = vxlan_fdb_append(f, ip, port, vni, ifindex, &rd);
+-
+-		if (rc < 0)
+-			return rc;
+-		notify |= rc;
+-	}
+-
+-	if (ndm_flags & NTF_USE)
+-		f->used = jiffies;
+-
+-	if (notify) {
+-		if (rd == NULL)
+-			rd = first_remote_rtnl(f);
+-
+-		err = vxlan_fdb_notify(vxlan, f, rd, RTM_NEWNEIGH,
+-				       swdev_notify, extack);
+-		if (err)
+-			goto err_notify;
+-	}
+-
+-	return 0;
+-
+-err_notify:
+-	if (nhid)
+-		return err;
+-	if ((flags & NLM_F_REPLACE) && rc)
+-		*rd = oldrd;
+-	else if ((flags & NLM_F_APPEND) && rc) {
+-		list_del_rcu(&rd->list);
+-		call_rcu(&rd->rcu, vxlan_dst_free);
+-	}
+-	return err;
+-}
+-
+-static int vxlan_fdb_update_create(struct vxlan_dev *vxlan,
+-				   const u8 *mac, union vxlan_addr *ip,
+-				   __u16 state, __u16 flags,
+-				   __be16 port, __be32 src_vni, __be32 vni,
+-				   __u32 ifindex, __u16 ndm_flags, u32 nhid,
+-				   bool swdev_notify,
+-				   struct netlink_ext_ack *extack)
+-{
+-	__u16 fdb_flags = (ndm_flags & ~NTF_USE);
+-	struct vxlan_fdb *f;
+-	int rc;
+-
+-	/* Disallow replace to add a multicast entry */
+-	if ((flags & NLM_F_REPLACE) &&
+-	    (is_multicast_ether_addr(mac) || is_zero_ether_addr(mac)))
+-		return -EOPNOTSUPP;
+-
+-	netdev_dbg(vxlan->dev, "add %pM -> %pIS\n", mac, ip);
+-	rc = vxlan_fdb_create(vxlan, mac, ip, state, port, src_vni,
+-			      vni, ifindex, fdb_flags, nhid, &f, extack);
+-	if (rc < 0)
+-		return rc;
+-
+-	vxlan_fdb_insert(vxlan, mac, src_vni, f);
+-	rc = vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_NEWNEIGH,
+-			      swdev_notify, extack);
+-	if (rc)
+-		goto err_notify;
+-
+-	return 0;
+-
+-err_notify:
+-	vxlan_fdb_destroy(vxlan, f, false, false);
+-	return rc;
+-}
+-
+-/* Add new entry to forwarding table -- assumes lock held */
+-static int vxlan_fdb_update(struct vxlan_dev *vxlan,
+-			    const u8 *mac, union vxlan_addr *ip,
+-			    __u16 state, __u16 flags,
+-			    __be16 port, __be32 src_vni, __be32 vni,
+-			    __u32 ifindex, __u16 ndm_flags, u32 nhid,
+-			    bool swdev_notify,
+-			    struct netlink_ext_ack *extack)
+-{
+-	struct vxlan_fdb *f;
+-
+-	f = __vxlan_find_mac(vxlan, mac, src_vni);
+-	if (f) {
+-		if (flags & NLM_F_EXCL) {
+-			netdev_dbg(vxlan->dev,
+-				   "lost race to create %pM\n", mac);
+-			return -EEXIST;
+-		}
+-
+-		return vxlan_fdb_update_existing(vxlan, ip, state, flags, port,
+-						 vni, ifindex, ndm_flags, f,
+-						 nhid, swdev_notify, extack);
+-	} else {
+-		if (!(flags & NLM_F_CREATE))
+-			return -ENOENT;
+-
+-		return vxlan_fdb_update_create(vxlan, mac, ip, state, flags,
+-					       port, src_vni, vni, ifindex,
+-					       ndm_flags, nhid, swdev_notify,
+-					       extack);
+-	}
+-}
+-
+-static void vxlan_fdb_dst_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f,
+-				  struct vxlan_rdst *rd, bool swdev_notify)
+-{
+-	list_del_rcu(&rd->list);
+-	vxlan_fdb_notify(vxlan, f, rd, RTM_DELNEIGH, swdev_notify, NULL);
+-	call_rcu(&rd->rcu, vxlan_dst_free);
+-}
+-
+-static int vxlan_fdb_parse(struct nlattr *tb[], struct vxlan_dev *vxlan,
+-			   union vxlan_addr *ip, __be16 *port, __be32 *src_vni,
+-			   __be32 *vni, u32 *ifindex, u32 *nhid)
+-{
+-	struct net *net = dev_net(vxlan->dev);
+-	int err;
+-
+-	if (tb[NDA_NH_ID] && (tb[NDA_DST] || tb[NDA_VNI] || tb[NDA_IFINDEX] ||
+-	    tb[NDA_PORT]))
+-		return -EINVAL;
+-
+-	if (tb[NDA_DST]) {
+-		err = vxlan_nla_get_addr(ip, tb[NDA_DST]);
+-		if (err)
+-			return err;
+-	} else {
+-		union vxlan_addr *remote = &vxlan->default_dst.remote_ip;
+-
+-		if (remote->sa.sa_family == AF_INET) {
+-			ip->sin.sin_addr.s_addr = htonl(INADDR_ANY);
+-			ip->sa.sa_family = AF_INET;
+-#if IS_ENABLED(CONFIG_IPV6)
+-		} else {
+-			ip->sin6.sin6_addr = in6addr_any;
+-			ip->sa.sa_family = AF_INET6;
+-#endif
+-		}
+-	}
+-
+-	if (tb[NDA_PORT]) {
+-		if (nla_len(tb[NDA_PORT]) != sizeof(__be16))
+-			return -EINVAL;
+-		*port = nla_get_be16(tb[NDA_PORT]);
+-	} else {
+-		*port = vxlan->cfg.dst_port;
+-	}
+-
+-	if (tb[NDA_VNI]) {
+-		if (nla_len(tb[NDA_VNI]) != sizeof(u32))
+-			return -EINVAL;
+-		*vni = cpu_to_be32(nla_get_u32(tb[NDA_VNI]));
+-	} else {
+-		*vni = vxlan->default_dst.remote_vni;
+-	}
+-
+-	if (tb[NDA_SRC_VNI]) {
+-		if (nla_len(tb[NDA_SRC_VNI]) != sizeof(u32))
+-			return -EINVAL;
+-		*src_vni = cpu_to_be32(nla_get_u32(tb[NDA_SRC_VNI]));
+-	} else {
+-		*src_vni = vxlan->default_dst.remote_vni;
+-	}
+-
+-	if (tb[NDA_IFINDEX]) {
+-		struct net_device *tdev;
+-
+-		if (nla_len(tb[NDA_IFINDEX]) != sizeof(u32))
+-			return -EINVAL;
+-		*ifindex = nla_get_u32(tb[NDA_IFINDEX]);
+-		tdev = __dev_get_by_index(net, *ifindex);
+-		if (!tdev)
+-			return -EADDRNOTAVAIL;
+-	} else {
+-		*ifindex = 0;
+-	}
+-
+-	if (tb[NDA_NH_ID])
+-		*nhid = nla_get_u32(tb[NDA_NH_ID]);
+-	else
+-		*nhid = 0;
+-
+-	return 0;
+-}
+-
+-/* Add static entry (via netlink) */
+-static int vxlan_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
+-			 struct net_device *dev,
+-			 const unsigned char *addr, u16 vid, u16 flags,
+-			 struct netlink_ext_ack *extack)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	/* struct net *net = dev_net(vxlan->dev); */
+-	union vxlan_addr ip;
+-	__be16 port;
+-	__be32 src_vni, vni;
+-	u32 ifindex, nhid;
+-	u32 hash_index;
+-	int err;
+-
+-	if (!(ndm->ndm_state & (NUD_PERMANENT|NUD_REACHABLE))) {
+-		pr_info("RTM_NEWNEIGH with invalid state %#x\n",
+-			ndm->ndm_state);
+-		return -EINVAL;
+-	}
+-
+-	if (!tb || (!tb[NDA_DST] && !tb[NDA_NH_ID]))
+-		return -EINVAL;
+-
+-	err = vxlan_fdb_parse(tb, vxlan, &ip, &port, &src_vni, &vni, &ifindex,
+-			      &nhid);
+-	if (err)
+-		return err;
+-
+-	if (vxlan->default_dst.remote_ip.sa.sa_family != ip.sa.sa_family)
+-		return -EAFNOSUPPORT;
+-
+-	hash_index = fdb_head_index(vxlan, addr, src_vni);
+-	spin_lock_bh(&vxlan->hash_lock[hash_index]);
+-	err = vxlan_fdb_update(vxlan, addr, &ip, ndm->ndm_state, flags,
+-			       port, src_vni, vni, ifindex,
+-			       ndm->ndm_flags | NTF_VXLAN_ADDED_BY_USER,
+-			       nhid, true, extack);
+-	spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+-
+-	return err;
+-}
+-
+-static int __vxlan_fdb_delete(struct vxlan_dev *vxlan,
+-			      const unsigned char *addr, union vxlan_addr ip,
+-			      __be16 port, __be32 src_vni, __be32 vni,
+-			      u32 ifindex, bool swdev_notify)
+-{
+-	struct vxlan_rdst *rd = NULL;
+-	struct vxlan_fdb *f;
+-	int err = -ENOENT;
+-
+-	f = vxlan_find_mac(vxlan, addr, src_vni);
+-	if (!f)
+-		return err;
+-
+-	if (!vxlan_addr_any(&ip)) {
+-		rd = vxlan_fdb_find_rdst(f, &ip, port, vni, ifindex);
+-		if (!rd)
+-			goto out;
+-	}
+-
+-	/* remove a destination if it's not the only one on the list,
+-	 * otherwise destroy the fdb entry
+-	 */
+-	if (rd && !list_is_singular(&f->remotes)) {
+-		vxlan_fdb_dst_destroy(vxlan, f, rd, swdev_notify);
+-		goto out;
+-	}
+-
+-	vxlan_fdb_destroy(vxlan, f, true, swdev_notify);
+-
+-out:
+-	return 0;
+-}
+-
+-/* Delete entry (via netlink) */
+-static int vxlan_fdb_delete(struct ndmsg *ndm, struct nlattr *tb[],
+-			    struct net_device *dev,
+-			    const unsigned char *addr, u16 vid)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	union vxlan_addr ip;
+-	__be32 src_vni, vni;
+-	u32 ifindex, nhid;
+-	u32 hash_index;
+-	__be16 port;
+-	int err;
+-
+-	err = vxlan_fdb_parse(tb, vxlan, &ip, &port, &src_vni, &vni, &ifindex,
+-			      &nhid);
+-	if (err)
+-		return err;
+-
+-	hash_index = fdb_head_index(vxlan, addr, src_vni);
+-	spin_lock_bh(&vxlan->hash_lock[hash_index]);
+-	err = __vxlan_fdb_delete(vxlan, addr, ip, port, src_vni, vni, ifindex,
+-				 true);
+-	spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+-
+-	return err;
+-}
+-
+-/* Dump forwarding table */
+-static int vxlan_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb,
+-			  struct net_device *dev,
+-			  struct net_device *filter_dev, int *idx)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	unsigned int h;
+-	int err = 0;
+-
+-	for (h = 0; h < FDB_HASH_SIZE; ++h) {
+-		struct vxlan_fdb *f;
+-
+-		rcu_read_lock();
+-		hlist_for_each_entry_rcu(f, &vxlan->fdb_head[h], hlist) {
+-			struct vxlan_rdst *rd;
+-
+-			if (rcu_access_pointer(f->nh)) {
+-				if (*idx < cb->args[2])
+-					goto skip_nh;
+-				err = vxlan_fdb_info(skb, vxlan, f,
+-						     NETLINK_CB(cb->skb).portid,
+-						     cb->nlh->nlmsg_seq,
+-						     RTM_NEWNEIGH,
+-						     NLM_F_MULTI, NULL);
+-				if (err < 0) {
+-					rcu_read_unlock();
+-					goto out;
+-				}
+-skip_nh:
+-				*idx += 1;
+-				continue;
+-			}
+-
+-			list_for_each_entry_rcu(rd, &f->remotes, list) {
+-				if (*idx < cb->args[2])
+-					goto skip;
+-
+-				err = vxlan_fdb_info(skb, vxlan, f,
+-						     NETLINK_CB(cb->skb).portid,
+-						     cb->nlh->nlmsg_seq,
+-						     RTM_NEWNEIGH,
+-						     NLM_F_MULTI, rd);
+-				if (err < 0) {
+-					rcu_read_unlock();
+-					goto out;
+-				}
+-skip:
+-				*idx += 1;
+-			}
+-		}
+-		rcu_read_unlock();
+-	}
+-out:
+-	return err;
+-}
+-
+-static int vxlan_fdb_get(struct sk_buff *skb,
+-			 struct nlattr *tb[],
+-			 struct net_device *dev,
+-			 const unsigned char *addr,
+-			 u16 vid, u32 portid, u32 seq,
+-			 struct netlink_ext_ack *extack)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct vxlan_fdb *f;
+-	__be32 vni;
+-	int err;
+-
+-	if (tb[NDA_VNI])
+-		vni = cpu_to_be32(nla_get_u32(tb[NDA_VNI]));
+-	else
+-		vni = vxlan->default_dst.remote_vni;
+-
+-	rcu_read_lock();
+-
+-	f = __vxlan_find_mac(vxlan, addr, vni);
+-	if (!f) {
+-		NL_SET_ERR_MSG(extack, "Fdb entry not found");
+-		err = -ENOENT;
+-		goto errout;
+-	}
+-
+-	err = vxlan_fdb_info(skb, vxlan, f, portid, seq,
+-			     RTM_NEWNEIGH, 0, first_remote_rcu(f));
+-errout:
+-	rcu_read_unlock();
+-	return err;
+-}
+-
+-/* Watch incoming packets to learn mapping between Ethernet address
+- * and Tunnel endpoint.
+- * Return true if packet is bogus and should be dropped.
+- */
+-static bool vxlan_snoop(struct net_device *dev,
+-			union vxlan_addr *src_ip, const u8 *src_mac,
+-			u32 src_ifindex, __be32 vni)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct vxlan_fdb *f;
+-	u32 ifindex = 0;
+-
+-#if IS_ENABLED(CONFIG_IPV6)
+-	if (src_ip->sa.sa_family == AF_INET6 &&
+-	    (ipv6_addr_type(&src_ip->sin6.sin6_addr) & IPV6_ADDR_LINKLOCAL))
+-		ifindex = src_ifindex;
+-#endif
+-
+-	f = vxlan_find_mac(vxlan, src_mac, vni);
+-	if (likely(f)) {
+-		struct vxlan_rdst *rdst = first_remote_rcu(f);
+-
+-		if (likely(vxlan_addr_equal(&rdst->remote_ip, src_ip) &&
+-			   rdst->remote_ifindex == ifindex))
+-			return false;
+-
+-		/* Don't migrate static entries, drop packets */
+-		if (f->state & (NUD_PERMANENT | NUD_NOARP))
+-			return true;
+-
+-		/* Don't override an fdb with nexthop with a learnt entry */
+-		if (rcu_access_pointer(f->nh))
+-			return true;
+-
+-		if (net_ratelimit())
+-			netdev_info(dev,
+-				    "%pM migrated from %pIS to %pIS\n",
+-				    src_mac, &rdst->remote_ip.sa, &src_ip->sa);
+-
+-		rdst->remote_ip = *src_ip;
+-		f->updated = jiffies;
+-		vxlan_fdb_notify(vxlan, f, rdst, RTM_NEWNEIGH, true, NULL);
+-	} else {
+-		u32 hash_index = fdb_head_index(vxlan, src_mac, vni);
+-
+-		/* learned new entry */
+-		spin_lock(&vxlan->hash_lock[hash_index]);
+-
+-		/* close off race between vxlan_flush and incoming packets */
+-		if (netif_running(dev))
+-			vxlan_fdb_update(vxlan, src_mac, src_ip,
+-					 NUD_REACHABLE,
+-					 NLM_F_EXCL|NLM_F_CREATE,
+-					 vxlan->cfg.dst_port,
+-					 vni,
+-					 vxlan->default_dst.remote_vni,
+-					 ifindex, NTF_SELF, 0, true, NULL);
+-		spin_unlock(&vxlan->hash_lock[hash_index]);
+-	}
+-
+-	return false;
+-}
+-
+-/* See if multicast group is already in use by other ID */
+-static bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev)
+-{
+-	struct vxlan_dev *vxlan;
+-	struct vxlan_sock *sock4;
+-#if IS_ENABLED(CONFIG_IPV6)
+-	struct vxlan_sock *sock6;
+-#endif
+-	unsigned short family = dev->default_dst.remote_ip.sa.sa_family;
+-
+-	sock4 = rtnl_dereference(dev->vn4_sock);
+-
+-	/* The vxlan_sock is only used by dev, leaving group has
+-	 * no effect on other vxlan devices.
+-	 */
+-	if (family == AF_INET && sock4 && refcount_read(&sock4->refcnt) == 1)
+-		return false;
+-#if IS_ENABLED(CONFIG_IPV6)
+-	sock6 = rtnl_dereference(dev->vn6_sock);
+-	if (family == AF_INET6 && sock6 && refcount_read(&sock6->refcnt) == 1)
+-		return false;
+-#endif
+-
+-	list_for_each_entry(vxlan, &vn->vxlan_list, next) {
+-		if (!netif_running(vxlan->dev) || vxlan == dev)
+-			continue;
+-
+-		if (family == AF_INET &&
+-		    rtnl_dereference(vxlan->vn4_sock) != sock4)
+-			continue;
+-#if IS_ENABLED(CONFIG_IPV6)
+-		if (family == AF_INET6 &&
+-		    rtnl_dereference(vxlan->vn6_sock) != sock6)
+-			continue;
+-#endif
+-
+-		if (!vxlan_addr_equal(&vxlan->default_dst.remote_ip,
+-				      &dev->default_dst.remote_ip))
+-			continue;
+-
+-		if (vxlan->default_dst.remote_ifindex !=
+-		    dev->default_dst.remote_ifindex)
+-			continue;
+-
+-		return true;
+-	}
+-
+-	return false;
+-}
+-
+-static bool __vxlan_sock_release_prep(struct vxlan_sock *vs)
+-{
+-	struct vxlan_net *vn;
+-
+-	if (!vs)
+-		return false;
+-	if (!refcount_dec_and_test(&vs->refcnt))
+-		return false;
+-
+-	vn = net_generic(sock_net(vs->sock->sk), vxlan_net_id);
+-	spin_lock(&vn->sock_lock);
+-	hlist_del_rcu(&vs->hlist);
+-	udp_tunnel_notify_del_rx_port(vs->sock,
+-				      (vs->flags & VXLAN_F_GPE) ?
+-				      UDP_TUNNEL_TYPE_VXLAN_GPE :
+-				      UDP_TUNNEL_TYPE_VXLAN);
+-	spin_unlock(&vn->sock_lock);
+-
+-	return true;
+-}
+-
+-static void vxlan_sock_release(struct vxlan_dev *vxlan)
+-{
+-	struct vxlan_sock *sock4 = rtnl_dereference(vxlan->vn4_sock);
+-#if IS_ENABLED(CONFIG_IPV6)
+-	struct vxlan_sock *sock6 = rtnl_dereference(vxlan->vn6_sock);
+-
+-	RCU_INIT_POINTER(vxlan->vn6_sock, NULL);
+-#endif
+-
+-	RCU_INIT_POINTER(vxlan->vn4_sock, NULL);
+-	synchronize_net();
+-
+-	vxlan_vs_del_dev(vxlan);
+-
+-	if (__vxlan_sock_release_prep(sock4)) {
+-		udp_tunnel_sock_release(sock4->sock);
+-		kfree(sock4);
+-	}
+-
+-#if IS_ENABLED(CONFIG_IPV6)
+-	if (__vxlan_sock_release_prep(sock6)) {
+-		udp_tunnel_sock_release(sock6->sock);
+-		kfree(sock6);
+-	}
+-#endif
+-}
+-
+-/* Update multicast group membership when first VNI on
+- * multicast address is brought up
+- */
+-static int vxlan_igmp_join(struct vxlan_dev *vxlan)
+-{
+-	struct sock *sk;
+-	union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
+-	int ifindex = vxlan->default_dst.remote_ifindex;
+-	int ret = -EINVAL;
+-
+-	if (ip->sa.sa_family == AF_INET) {
+-		struct vxlan_sock *sock4 = rtnl_dereference(vxlan->vn4_sock);
+-		struct ip_mreqn mreq = {
+-			.imr_multiaddr.s_addr	= ip->sin.sin_addr.s_addr,
+-			.imr_ifindex		= ifindex,
+-		};
+-
+-		sk = sock4->sock->sk;
+-		lock_sock(sk);
+-		ret = ip_mc_join_group(sk, &mreq);
+-		release_sock(sk);
+-#if IS_ENABLED(CONFIG_IPV6)
+-	} else {
+-		struct vxlan_sock *sock6 = rtnl_dereference(vxlan->vn6_sock);
+-
+-		sk = sock6->sock->sk;
+-		lock_sock(sk);
+-		ret = ipv6_stub->ipv6_sock_mc_join(sk, ifindex,
+-						   &ip->sin6.sin6_addr);
+-		release_sock(sk);
+-#endif
+-	}
+-
+-	return ret;
+-}
+-
+-/* Inverse of vxlan_igmp_join when last VNI is brought down */
+-static int vxlan_igmp_leave(struct vxlan_dev *vxlan)
+-{
+-	struct sock *sk;
+-	union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
+-	int ifindex = vxlan->default_dst.remote_ifindex;
+-	int ret = -EINVAL;
+-
+-	if (ip->sa.sa_family == AF_INET) {
+-		struct vxlan_sock *sock4 = rtnl_dereference(vxlan->vn4_sock);
+-		struct ip_mreqn mreq = {
+-			.imr_multiaddr.s_addr	= ip->sin.sin_addr.s_addr,
+-			.imr_ifindex		= ifindex,
+-		};
+-
+-		sk = sock4->sock->sk;
+-		lock_sock(sk);
+-		ret = ip_mc_leave_group(sk, &mreq);
+-		release_sock(sk);
+-#if IS_ENABLED(CONFIG_IPV6)
+-	} else {
+-		struct vxlan_sock *sock6 = rtnl_dereference(vxlan->vn6_sock);
+-
+-		sk = sock6->sock->sk;
+-		lock_sock(sk);
+-		ret = ipv6_stub->ipv6_sock_mc_drop(sk, ifindex,
+-						   &ip->sin6.sin6_addr);
+-		release_sock(sk);
+-#endif
+-	}
+-
+-	return ret;
+-}
+-
+-static bool vxlan_remcsum(struct vxlanhdr *unparsed,
+-			  struct sk_buff *skb, u32 vxflags)
+-{
+-	size_t start, offset;
+-
+-	if (!(unparsed->vx_flags & VXLAN_HF_RCO) || skb->remcsum_offload)
+-		goto out;
+-
+-	start = vxlan_rco_start(unparsed->vx_vni);
+-	offset = start + vxlan_rco_offset(unparsed->vx_vni);
+-
+-	if (!pskb_may_pull(skb, offset + sizeof(u16)))
+-		return false;
+-
+-	skb_remcsum_process(skb, (void *)(vxlan_hdr(skb) + 1), start, offset,
+-			    !!(vxflags & VXLAN_F_REMCSUM_NOPARTIAL));
+-out:
+-	unparsed->vx_flags &= ~VXLAN_HF_RCO;
+-	unparsed->vx_vni &= VXLAN_VNI_MASK;
+-	return true;
+-}
+-
+-static void vxlan_parse_gbp_hdr(struct vxlanhdr *unparsed,
+-				struct sk_buff *skb, u32 vxflags,
+-				struct vxlan_metadata *md)
+-{
+-	struct vxlanhdr_gbp *gbp = (struct vxlanhdr_gbp *)unparsed;
+-	struct metadata_dst *tun_dst;
+-
+-	if (!(unparsed->vx_flags & VXLAN_HF_GBP))
+-		goto out;
+-
+-	md->gbp = ntohs(gbp->policy_id);
+-
+-	tun_dst = (struct metadata_dst *)skb_dst(skb);
+-	if (tun_dst) {
+-		tun_dst->u.tun_info.key.tun_flags |= TUNNEL_VXLAN_OPT;
+-		tun_dst->u.tun_info.options_len = sizeof(*md);
+-	}
+-	if (gbp->dont_learn)
+-		md->gbp |= VXLAN_GBP_DONT_LEARN;
+-
+-	if (gbp->policy_applied)
+-		md->gbp |= VXLAN_GBP_POLICY_APPLIED;
+-
+-	/* In flow-based mode, GBP is carried in dst_metadata */
+-	if (!(vxflags & VXLAN_F_COLLECT_METADATA))
+-		skb->mark = md->gbp;
+-out:
+-	unparsed->vx_flags &= ~VXLAN_GBP_USED_BITS;
+-}
+-
+-static bool vxlan_parse_gpe_hdr(struct vxlanhdr *unparsed,
+-				__be16 *protocol,
+-				struct sk_buff *skb, u32 vxflags)
+-{
+-	struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)unparsed;
+-
+-	/* Need to have Next Protocol set for interfaces in GPE mode. */
+-	if (!gpe->np_applied)
+-		return false;
+-	/* "The initial version is 0. If a receiver does not support the
+-	 * version indicated it MUST drop the packet.
+-	 */
+-	if (gpe->version != 0)
+-		return false;
+-	/* "When the O bit is set to 1, the packet is an OAM packet and OAM
+-	 * processing MUST occur." However, we don't implement OAM
+-	 * processing, thus drop the packet.
+-	 */
+-	if (gpe->oam_flag)
+-		return false;
+-
+-	*protocol = tun_p_to_eth_p(gpe->next_protocol);
+-	if (!*protocol)
+-		return false;
+-
+-	unparsed->vx_flags &= ~VXLAN_GPE_USED_BITS;
+-	return true;
+-}
+-
+-static bool vxlan_set_mac(struct vxlan_dev *vxlan,
+-			  struct vxlan_sock *vs,
+-			  struct sk_buff *skb, __be32 vni)
+-{
+-	union vxlan_addr saddr;
+-	u32 ifindex = skb->dev->ifindex;
+-
+-	skb_reset_mac_header(skb);
+-	skb->protocol = eth_type_trans(skb, vxlan->dev);
+-	skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
+-
+-	/* Ignore packet loops (and multicast echo) */
+-	if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr))
+-		return false;
+-
+-	/* Get address from the outer IP header */
+-	if (vxlan_get_sk_family(vs) == AF_INET) {
+-		saddr.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
+-		saddr.sa.sa_family = AF_INET;
+-#if IS_ENABLED(CONFIG_IPV6)
+-	} else {
+-		saddr.sin6.sin6_addr = ipv6_hdr(skb)->saddr;
+-		saddr.sa.sa_family = AF_INET6;
+-#endif
+-	}
+-
+-	if ((vxlan->cfg.flags & VXLAN_F_LEARN) &&
+-	    vxlan_snoop(skb->dev, &saddr, eth_hdr(skb)->h_source, ifindex, vni))
+-		return false;
+-
+-	return true;
+-}
+-
+-static bool vxlan_ecn_decapsulate(struct vxlan_sock *vs, void *oiph,
+-				  struct sk_buff *skb)
+-{
+-	int err = 0;
+-
+-	if (vxlan_get_sk_family(vs) == AF_INET)
+-		err = IP_ECN_decapsulate(oiph, skb);
+-#if IS_ENABLED(CONFIG_IPV6)
+-	else
+-		err = IP6_ECN_decapsulate(oiph, skb);
+-#endif
+-
+-	if (unlikely(err) && log_ecn_error) {
+-		if (vxlan_get_sk_family(vs) == AF_INET)
+-			net_info_ratelimited("non-ECT from %pI4 with TOS=%#x\n",
+-					     &((struct iphdr *)oiph)->saddr,
+-					     ((struct iphdr *)oiph)->tos);
+-		else
+-			net_info_ratelimited("non-ECT from %pI6\n",
+-					     &((struct ipv6hdr *)oiph)->saddr);
+-	}
+-	return err <= 1;
+-}
+-
+-/* Callback from net/ipv4/udp.c to receive packets */
+-static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
+-{
+-	struct vxlan_dev *vxlan;
+-	struct vxlan_sock *vs;
+-	struct vxlanhdr unparsed;
+-	struct vxlan_metadata _md;
+-	struct vxlan_metadata *md = &_md;
+-	__be16 protocol = htons(ETH_P_TEB);
+-	bool raw_proto = false;
+-	void *oiph;
+-	__be32 vni = 0;
+-
+-	/* Need UDP and VXLAN header to be present */
+-	if (!pskb_may_pull(skb, VXLAN_HLEN))
+-		goto drop;
+-
+-	unparsed = *vxlan_hdr(skb);
+-	/* VNI flag always required to be set */
+-	if (!(unparsed.vx_flags & VXLAN_HF_VNI)) {
+-		netdev_dbg(skb->dev, "invalid vxlan flags=%#x vni=%#x\n",
+-			   ntohl(vxlan_hdr(skb)->vx_flags),
+-			   ntohl(vxlan_hdr(skb)->vx_vni));
+-		/* Return non vxlan pkt */
+-		goto drop;
+-	}
+-	unparsed.vx_flags &= ~VXLAN_HF_VNI;
+-	unparsed.vx_vni &= ~VXLAN_VNI_MASK;
+-
+-	vs = rcu_dereference_sk_user_data(sk);
+-	if (!vs)
+-		goto drop;
+-
+-	vni = vxlan_vni(vxlan_hdr(skb)->vx_vni);
+-
+-	vxlan = vxlan_vs_find_vni(vs, skb->dev->ifindex, vni);
+-	if (!vxlan)
+-		goto drop;
+-
+-	/* For backwards compatibility, only allow reserved fields to be
+-	 * used by VXLAN extensions if explicitly requested.
+-	 */
+-	if (vs->flags & VXLAN_F_GPE) {
+-		if (!vxlan_parse_gpe_hdr(&unparsed, &protocol, skb, vs->flags))
+-			goto drop;
+-		raw_proto = true;
+-	}
+-
+-	if (__iptunnel_pull_header(skb, VXLAN_HLEN, protocol, raw_proto,
+-				   !net_eq(vxlan->net, dev_net(vxlan->dev))))
+-		goto drop;
+-
+-	if (vs->flags & VXLAN_F_REMCSUM_RX)
+-		if (unlikely(!vxlan_remcsum(&unparsed, skb, vs->flags)))
+-			goto drop;
+-
+-	if (vxlan_collect_metadata(vs)) {
+-		struct metadata_dst *tun_dst;
+-
+-		tun_dst = udp_tun_rx_dst(skb, vxlan_get_sk_family(vs), TUNNEL_KEY,
+-					 key32_to_tunnel_id(vni), sizeof(*md));
+-
+-		if (!tun_dst)
+-			goto drop;
+-
+-		md = ip_tunnel_info_opts(&tun_dst->u.tun_info);
+-
+-		skb_dst_set(skb, (struct dst_entry *)tun_dst);
+-	} else {
+-		memset(md, 0, sizeof(*md));
+-	}
+-
+-	if (vs->flags & VXLAN_F_GBP)
+-		vxlan_parse_gbp_hdr(&unparsed, skb, vs->flags, md);
+-	/* Note that GBP and GPE can never be active together. This is
+-	 * ensured in vxlan_dev_configure.
+-	 */
+-
+-	if (unparsed.vx_flags || unparsed.vx_vni) {
+-		/* If there are any unprocessed flags remaining treat
+-		 * this as a malformed packet. This behavior diverges from
+-		 * VXLAN RFC (RFC7348) which stipulates that bits in reserved
+-		 * in reserved fields are to be ignored. The approach here
+-		 * maintains compatibility with previous stack code, and also
+-		 * is more robust and provides a little more security in
+-		 * adding extensions to VXLAN.
+-		 */
+-		goto drop;
+-	}
+-
+-	if (!raw_proto) {
+-		if (!vxlan_set_mac(vxlan, vs, skb, vni))
+-			goto drop;
+-	} else {
+-		skb_reset_mac_header(skb);
+-		skb->dev = vxlan->dev;
+-		skb->pkt_type = PACKET_HOST;
+-	}
+-
+-	oiph = skb_network_header(skb);
+-	skb_reset_network_header(skb);
+-
+-	if (!vxlan_ecn_decapsulate(vs, oiph, skb)) {
+-		++vxlan->dev->stats.rx_frame_errors;
+-		++vxlan->dev->stats.rx_errors;
+-		goto drop;
+-	}
+-
+-	rcu_read_lock();
+-
+-	if (unlikely(!(vxlan->dev->flags & IFF_UP))) {
+-		rcu_read_unlock();
+-		atomic_long_inc(&vxlan->dev->rx_dropped);
+-		goto drop;
+-	}
+-
+-	dev_sw_netstats_rx_add(vxlan->dev, skb->len);
+-	gro_cells_receive(&vxlan->gro_cells, skb);
+-
+-	rcu_read_unlock();
+-
+-	return 0;
+-
+-drop:
+-	/* Consume bad packet */
+-	kfree_skb(skb);
+-	return 0;
+-}
+-
+-/* Callback from net/ipv{4,6}/udp.c to check that we have a VNI for errors */
+-static int vxlan_err_lookup(struct sock *sk, struct sk_buff *skb)
+-{
+-	struct vxlan_dev *vxlan;
+-	struct vxlan_sock *vs;
+-	struct vxlanhdr *hdr;
+-	__be32 vni;
+-
+-	if (!pskb_may_pull(skb, skb_transport_offset(skb) + VXLAN_HLEN))
+-		return -EINVAL;
+-
+-	hdr = vxlan_hdr(skb);
+-
+-	if (!(hdr->vx_flags & VXLAN_HF_VNI))
+-		return -EINVAL;
+-
+-	vs = rcu_dereference_sk_user_data(sk);
+-	if (!vs)
+-		return -ENOENT;
+-
+-	vni = vxlan_vni(hdr->vx_vni);
+-	vxlan = vxlan_vs_find_vni(vs, skb->dev->ifindex, vni);
+-	if (!vxlan)
+-		return -ENOENT;
+-
+-	return 0;
+-}
+-
+-static int arp_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct arphdr *parp;
+-	u8 *arpptr, *sha;
+-	__be32 sip, tip;
+-	struct neighbour *n;
+-
+-	if (dev->flags & IFF_NOARP)
+-		goto out;
+-
+-	if (!pskb_may_pull(skb, arp_hdr_len(dev))) {
+-		dev->stats.tx_dropped++;
+-		goto out;
+-	}
+-	parp = arp_hdr(skb);
+-
+-	if ((parp->ar_hrd != htons(ARPHRD_ETHER) &&
+-	     parp->ar_hrd != htons(ARPHRD_IEEE802)) ||
+-	    parp->ar_pro != htons(ETH_P_IP) ||
+-	    parp->ar_op != htons(ARPOP_REQUEST) ||
+-	    parp->ar_hln != dev->addr_len ||
+-	    parp->ar_pln != 4)
+-		goto out;
+-	arpptr = (u8 *)parp + sizeof(struct arphdr);
+-	sha = arpptr;
+-	arpptr += dev->addr_len;	/* sha */
+-	memcpy(&sip, arpptr, sizeof(sip));
+-	arpptr += sizeof(sip);
+-	arpptr += dev->addr_len;	/* tha */
+-	memcpy(&tip, arpptr, sizeof(tip));
+-
+-	if (ipv4_is_loopback(tip) ||
+-	    ipv4_is_multicast(tip))
+-		goto out;
+-
+-	n = neigh_lookup(&arp_tbl, &tip, dev);
+-
+-	if (n) {
+-		struct vxlan_fdb *f;
+-		struct sk_buff	*reply;
+-
+-		if (!(n->nud_state & NUD_CONNECTED)) {
+-			neigh_release(n);
+-			goto out;
+-		}
+-
+-		f = vxlan_find_mac(vxlan, n->ha, vni);
+-		if (f && vxlan_addr_any(&(first_remote_rcu(f)->remote_ip))) {
+-			/* bridge-local neighbor */
+-			neigh_release(n);
+-			goto out;
+-		}
+-
+-		reply = arp_create(ARPOP_REPLY, ETH_P_ARP, sip, dev, tip, sha,
+-				n->ha, sha);
+-
+-		neigh_release(n);
+-
+-		if (reply == NULL)
+-			goto out;
+-
+-		skb_reset_mac_header(reply);
+-		__skb_pull(reply, skb_network_offset(reply));
+-		reply->ip_summed = CHECKSUM_UNNECESSARY;
+-		reply->pkt_type = PACKET_HOST;
+-
+-		if (netif_rx_ni(reply) == NET_RX_DROP)
+-			dev->stats.rx_dropped++;
+-	} else if (vxlan->cfg.flags & VXLAN_F_L3MISS) {
+-		union vxlan_addr ipa = {
+-			.sin.sin_addr.s_addr = tip,
+-			.sin.sin_family = AF_INET,
+-		};
+-
+-		vxlan_ip_miss(dev, &ipa);
+-	}
+-out:
+-	consume_skb(skb);
+-	return NETDEV_TX_OK;
+-}
+-
+-#if IS_ENABLED(CONFIG_IPV6)
+-static struct sk_buff *vxlan_na_create(struct sk_buff *request,
+-	struct neighbour *n, bool isrouter)
+-{
+-	struct net_device *dev = request->dev;
+-	struct sk_buff *reply;
+-	struct nd_msg *ns, *na;
+-	struct ipv6hdr *pip6;
+-	u8 *daddr;
+-	int na_olen = 8; /* opt hdr + ETH_ALEN for target */
+-	int ns_olen;
+-	int i, len;
+-
+-	if (dev == NULL || !pskb_may_pull(request, request->len))
+-		return NULL;
+-
+-	len = LL_RESERVED_SPACE(dev) + sizeof(struct ipv6hdr) +
+-		sizeof(*na) + na_olen + dev->needed_tailroom;
+-	reply = alloc_skb(len, GFP_ATOMIC);
+-	if (reply == NULL)
+-		return NULL;
+-
+-	reply->protocol = htons(ETH_P_IPV6);
+-	reply->dev = dev;
+-	skb_reserve(reply, LL_RESERVED_SPACE(request->dev));
+-	skb_push(reply, sizeof(struct ethhdr));
+-	skb_reset_mac_header(reply);
+-
+-	ns = (struct nd_msg *)(ipv6_hdr(request) + 1);
+-
+-	daddr = eth_hdr(request)->h_source;
+-	ns_olen = request->len - skb_network_offset(request) -
+-		sizeof(struct ipv6hdr) - sizeof(*ns);
+-	for (i = 0; i < ns_olen-1; i += (ns->opt[i+1]<<3)) {
+-		if (!ns->opt[i + 1]) {
+-			kfree_skb(reply);
+-			return NULL;
+-		}
+-		if (ns->opt[i] == ND_OPT_SOURCE_LL_ADDR) {
+-			daddr = ns->opt + i + sizeof(struct nd_opt_hdr);
+-			break;
+-		}
+-	}
+-
+-	/* Ethernet header */
+-	ether_addr_copy(eth_hdr(reply)->h_dest, daddr);
+-	ether_addr_copy(eth_hdr(reply)->h_source, n->ha);
+-	eth_hdr(reply)->h_proto = htons(ETH_P_IPV6);
+-	reply->protocol = htons(ETH_P_IPV6);
+-
+-	skb_pull(reply, sizeof(struct ethhdr));
+-	skb_reset_network_header(reply);
+-	skb_put(reply, sizeof(struct ipv6hdr));
+-
+-	/* IPv6 header */
+-
+-	pip6 = ipv6_hdr(reply);
+-	memset(pip6, 0, sizeof(struct ipv6hdr));
+-	pip6->version = 6;
+-	pip6->priority = ipv6_hdr(request)->priority;
+-	pip6->nexthdr = IPPROTO_ICMPV6;
+-	pip6->hop_limit = 255;
+-	pip6->daddr = ipv6_hdr(request)->saddr;
+-	pip6->saddr = *(struct in6_addr *)n->primary_key;
+-
+-	skb_pull(reply, sizeof(struct ipv6hdr));
+-	skb_reset_transport_header(reply);
+-
+-	/* Neighbor Advertisement */
+-	na = skb_put_zero(reply, sizeof(*na) + na_olen);
+-	na->icmph.icmp6_type = NDISC_NEIGHBOUR_ADVERTISEMENT;
+-	na->icmph.icmp6_router = isrouter;
+-	na->icmph.icmp6_override = 1;
+-	na->icmph.icmp6_solicited = 1;
+-	na->target = ns->target;
+-	ether_addr_copy(&na->opt[2], n->ha);
+-	na->opt[0] = ND_OPT_TARGET_LL_ADDR;
+-	na->opt[1] = na_olen >> 3;
+-
+-	na->icmph.icmp6_cksum = csum_ipv6_magic(&pip6->saddr,
+-		&pip6->daddr, sizeof(*na)+na_olen, IPPROTO_ICMPV6,
+-		csum_partial(na, sizeof(*na)+na_olen, 0));
+-
+-	pip6->payload_len = htons(sizeof(*na)+na_olen);
+-
+-	skb_push(reply, sizeof(struct ipv6hdr));
+-
+-	reply->ip_summed = CHECKSUM_UNNECESSARY;
+-
+-	return reply;
+-}
+-
+-static int neigh_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	const struct in6_addr *daddr;
+-	const struct ipv6hdr *iphdr;
+-	struct inet6_dev *in6_dev;
+-	struct neighbour *n;
+-	struct nd_msg *msg;
+-
+-	rcu_read_lock();
+-	in6_dev = __in6_dev_get(dev);
+-	if (!in6_dev)
+-		goto out;
+-
+-	iphdr = ipv6_hdr(skb);
+-	daddr = &iphdr->daddr;
+-	msg = (struct nd_msg *)(iphdr + 1);
+-
+-	if (ipv6_addr_loopback(daddr) ||
+-	    ipv6_addr_is_multicast(&msg->target))
+-		goto out;
+-
+-	n = neigh_lookup(ipv6_stub->nd_tbl, &msg->target, dev);
+-
+-	if (n) {
+-		struct vxlan_fdb *f;
+-		struct sk_buff *reply;
+-
+-		if (!(n->nud_state & NUD_CONNECTED)) {
+-			neigh_release(n);
+-			goto out;
+-		}
+-
+-		f = vxlan_find_mac(vxlan, n->ha, vni);
+-		if (f && vxlan_addr_any(&(first_remote_rcu(f)->remote_ip))) {
+-			/* bridge-local neighbor */
+-			neigh_release(n);
+-			goto out;
+-		}
+-
+-		reply = vxlan_na_create(skb, n,
+-					!!(f ? f->flags & NTF_ROUTER : 0));
+-
+-		neigh_release(n);
+-
+-		if (reply == NULL)
+-			goto out;
+-
+-		if (netif_rx_ni(reply) == NET_RX_DROP)
+-			dev->stats.rx_dropped++;
+-
+-	} else if (vxlan->cfg.flags & VXLAN_F_L3MISS) {
+-		union vxlan_addr ipa = {
+-			.sin6.sin6_addr = msg->target,
+-			.sin6.sin6_family = AF_INET6,
+-		};
+-
+-		vxlan_ip_miss(dev, &ipa);
+-	}
+-
+-out:
+-	rcu_read_unlock();
+-	consume_skb(skb);
+-	return NETDEV_TX_OK;
+-}
+-#endif
+-
+-static bool route_shortcircuit(struct net_device *dev, struct sk_buff *skb)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct neighbour *n;
+-
+-	if (is_multicast_ether_addr(eth_hdr(skb)->h_dest))
+-		return false;
+-
+-	n = NULL;
+-	switch (ntohs(eth_hdr(skb)->h_proto)) {
+-	case ETH_P_IP:
+-	{
+-		struct iphdr *pip;
+-
+-		if (!pskb_may_pull(skb, sizeof(struct iphdr)))
+-			return false;
+-		pip = ip_hdr(skb);
+-		n = neigh_lookup(&arp_tbl, &pip->daddr, dev);
+-		if (!n && (vxlan->cfg.flags & VXLAN_F_L3MISS)) {
+-			union vxlan_addr ipa = {
+-				.sin.sin_addr.s_addr = pip->daddr,
+-				.sin.sin_family = AF_INET,
+-			};
+-
+-			vxlan_ip_miss(dev, &ipa);
+-			return false;
+-		}
+-
+-		break;
+-	}
+-#if IS_ENABLED(CONFIG_IPV6)
+-	case ETH_P_IPV6:
+-	{
+-		struct ipv6hdr *pip6;
+-
+-		if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))
+-			return false;
+-		pip6 = ipv6_hdr(skb);
+-		n = neigh_lookup(ipv6_stub->nd_tbl, &pip6->daddr, dev);
+-		if (!n && (vxlan->cfg.flags & VXLAN_F_L3MISS)) {
+-			union vxlan_addr ipa = {
+-				.sin6.sin6_addr = pip6->daddr,
+-				.sin6.sin6_family = AF_INET6,
+-			};
+-
+-			vxlan_ip_miss(dev, &ipa);
+-			return false;
+-		}
+-
+-		break;
+-	}
+-#endif
+-	default:
+-		return false;
+-	}
+-
+-	if (n) {
+-		bool diff;
+-
+-		diff = !ether_addr_equal(eth_hdr(skb)->h_dest, n->ha);
+-		if (diff) {
+-			memcpy(eth_hdr(skb)->h_source, eth_hdr(skb)->h_dest,
+-				dev->addr_len);
+-			memcpy(eth_hdr(skb)->h_dest, n->ha, dev->addr_len);
+-		}
+-		neigh_release(n);
+-		return diff;
+-	}
+-
+-	return false;
+-}
+-
+-static void vxlan_build_gbp_hdr(struct vxlanhdr *vxh, u32 vxflags,
+-				struct vxlan_metadata *md)
+-{
+-	struct vxlanhdr_gbp *gbp;
+-
+-	if (!md->gbp)
+-		return;
+-
+-	gbp = (struct vxlanhdr_gbp *)vxh;
+-	vxh->vx_flags |= VXLAN_HF_GBP;
+-
+-	if (md->gbp & VXLAN_GBP_DONT_LEARN)
+-		gbp->dont_learn = 1;
+-
+-	if (md->gbp & VXLAN_GBP_POLICY_APPLIED)
+-		gbp->policy_applied = 1;
+-
+-	gbp->policy_id = htons(md->gbp & VXLAN_GBP_ID_MASK);
+-}
+-
+-static int vxlan_build_gpe_hdr(struct vxlanhdr *vxh, u32 vxflags,
+-			       __be16 protocol)
+-{
+-	struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)vxh;
+-
+-	gpe->np_applied = 1;
+-	gpe->next_protocol = tun_p_from_eth_p(protocol);
+-	if (!gpe->next_protocol)
+-		return -EPFNOSUPPORT;
+-	return 0;
+-}
+-
+-static int vxlan_build_skb(struct sk_buff *skb, struct dst_entry *dst,
+-			   int iphdr_len, __be32 vni,
+-			   struct vxlan_metadata *md, u32 vxflags,
+-			   bool udp_sum)
+-{
+-	struct vxlanhdr *vxh;
+-	int min_headroom;
+-	int err;
+-	int type = udp_sum ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
+-	__be16 inner_protocol = htons(ETH_P_TEB);
+-
+-	if ((vxflags & VXLAN_F_REMCSUM_TX) &&
+-	    skb->ip_summed == CHECKSUM_PARTIAL) {
+-		int csum_start = skb_checksum_start_offset(skb);
+-
+-		if (csum_start <= VXLAN_MAX_REMCSUM_START &&
+-		    !(csum_start & VXLAN_RCO_SHIFT_MASK) &&
+-		    (skb->csum_offset == offsetof(struct udphdr, check) ||
+-		     skb->csum_offset == offsetof(struct tcphdr, check)))
+-			type |= SKB_GSO_TUNNEL_REMCSUM;
+-	}
+-
+-	min_headroom = LL_RESERVED_SPACE(dst->dev) + dst->header_len
+-			+ VXLAN_HLEN + iphdr_len;
+-
+-	/* Need space for new headers (invalidates iph ptr) */
+-	err = skb_cow_head(skb, min_headroom);
+-	if (unlikely(err))
+-		return err;
+-
+-	err = iptunnel_handle_offloads(skb, type);
+-	if (err)
+-		return err;
+-
+-	vxh = __skb_push(skb, sizeof(*vxh));
+-	vxh->vx_flags = VXLAN_HF_VNI;
+-	vxh->vx_vni = vxlan_vni_field(vni);
+-
+-	if (type & SKB_GSO_TUNNEL_REMCSUM) {
+-		unsigned int start;
+-
+-		start = skb_checksum_start_offset(skb) - sizeof(struct vxlanhdr);
+-		vxh->vx_vni |= vxlan_compute_rco(start, skb->csum_offset);
+-		vxh->vx_flags |= VXLAN_HF_RCO;
+-
+-		if (!skb_is_gso(skb)) {
+-			skb->ip_summed = CHECKSUM_NONE;
+-			skb->encapsulation = 0;
+-		}
+-	}
+-
+-	if (vxflags & VXLAN_F_GBP)
+-		vxlan_build_gbp_hdr(vxh, vxflags, md);
+-	if (vxflags & VXLAN_F_GPE) {
+-		err = vxlan_build_gpe_hdr(vxh, vxflags, skb->protocol);
+-		if (err < 0)
+-			return err;
+-		inner_protocol = skb->protocol;
+-	}
+-
+-	skb_set_inner_protocol(skb, inner_protocol);
+-	return 0;
+-}
+-
+-static struct rtable *vxlan_get_route(struct vxlan_dev *vxlan, struct net_device *dev,
+-				      struct vxlan_sock *sock4,
+-				      struct sk_buff *skb, int oif, u8 tos,
+-				      __be32 daddr, __be32 *saddr, __be16 dport, __be16 sport,
+-				      struct dst_cache *dst_cache,
+-				      const struct ip_tunnel_info *info)
+-{
+-	bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
+-	struct rtable *rt = NULL;
+-	struct flowi4 fl4;
+-
+-	if (!sock4)
+-		return ERR_PTR(-EIO);
+-
+-	if (tos && !info)
+-		use_cache = false;
+-	if (use_cache) {
+-		rt = dst_cache_get_ip4(dst_cache, saddr);
+-		if (rt)
+-			return rt;
+-	}
+-
+-	memset(&fl4, 0, sizeof(fl4));
+-	fl4.flowi4_oif = oif;
+-	fl4.flowi4_tos = RT_TOS(tos);
+-	fl4.flowi4_mark = skb->mark;
+-	fl4.flowi4_proto = IPPROTO_UDP;
+-	fl4.daddr = daddr;
+-	fl4.saddr = *saddr;
+-	fl4.fl4_dport = dport;
+-	fl4.fl4_sport = sport;
+-
+-	rt = ip_route_output_key(vxlan->net, &fl4);
+-	if (!IS_ERR(rt)) {
+-		if (rt->dst.dev == dev) {
+-			netdev_dbg(dev, "circular route to %pI4\n", &daddr);
+-			ip_rt_put(rt);
+-			return ERR_PTR(-ELOOP);
+-		}
+-
+-		*saddr = fl4.saddr;
+-		if (use_cache)
+-			dst_cache_set_ip4(dst_cache, &rt->dst, fl4.saddr);
+-	} else {
+-		netdev_dbg(dev, "no route to %pI4\n", &daddr);
+-		return ERR_PTR(-ENETUNREACH);
+-	}
+-	return rt;
+-}
+-
+-#if IS_ENABLED(CONFIG_IPV6)
+-static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan,
+-					  struct net_device *dev,
+-					  struct vxlan_sock *sock6,
+-					  struct sk_buff *skb, int oif, u8 tos,
+-					  __be32 label,
+-					  const struct in6_addr *daddr,
+-					  struct in6_addr *saddr,
+-					  __be16 dport, __be16 sport,
+-					  struct dst_cache *dst_cache,
+-					  const struct ip_tunnel_info *info)
+-{
+-	bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
+-	struct dst_entry *ndst;
+-	struct flowi6 fl6;
+-
+-	if (!sock6)
+-		return ERR_PTR(-EIO);
+-
+-	if (tos && !info)
+-		use_cache = false;
+-	if (use_cache) {
+-		ndst = dst_cache_get_ip6(dst_cache, saddr);
+-		if (ndst)
+-			return ndst;
+-	}
+-
+-	memset(&fl6, 0, sizeof(fl6));
+-	fl6.flowi6_oif = oif;
+-	fl6.daddr = *daddr;
+-	fl6.saddr = *saddr;
+-	fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tos), label);
+-	fl6.flowi6_mark = skb->mark;
+-	fl6.flowi6_proto = IPPROTO_UDP;
+-	fl6.fl6_dport = dport;
+-	fl6.fl6_sport = sport;
+-
+-	ndst = ipv6_stub->ipv6_dst_lookup_flow(vxlan->net, sock6->sock->sk,
+-					       &fl6, NULL);
+-	if (unlikely(IS_ERR(ndst))) {
+-		netdev_dbg(dev, "no route to %pI6\n", daddr);
+-		return ERR_PTR(-ENETUNREACH);
+-	}
+-
+-	if (unlikely(ndst->dev == dev)) {
+-		netdev_dbg(dev, "circular route to %pI6\n", daddr);
+-		dst_release(ndst);
+-		return ERR_PTR(-ELOOP);
+-	}
+-
+-	*saddr = fl6.saddr;
+-	if (use_cache)
+-		dst_cache_set_ip6(dst_cache, ndst, saddr);
+-	return ndst;
+-}
+-#endif
+-
+-/* Bypass encapsulation if the destination is local */
+-static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan,
+-			       struct vxlan_dev *dst_vxlan, __be32 vni,
+-			       bool snoop)
+-{
+-	struct pcpu_sw_netstats *tx_stats, *rx_stats;
+-	union vxlan_addr loopback;
+-	union vxlan_addr *remote_ip = &dst_vxlan->default_dst.remote_ip;
+-	struct net_device *dev;
+-	int len = skb->len;
+-
+-	tx_stats = this_cpu_ptr(src_vxlan->dev->tstats);
+-	rx_stats = this_cpu_ptr(dst_vxlan->dev->tstats);
+-	skb->pkt_type = PACKET_HOST;
+-	skb->encapsulation = 0;
+-	skb->dev = dst_vxlan->dev;
+-	__skb_pull(skb, skb_network_offset(skb));
+-
+-	if (remote_ip->sa.sa_family == AF_INET) {
+-		loopback.sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+-		loopback.sa.sa_family =  AF_INET;
+-#if IS_ENABLED(CONFIG_IPV6)
+-	} else {
+-		loopback.sin6.sin6_addr = in6addr_loopback;
+-		loopback.sa.sa_family =  AF_INET6;
+-#endif
+-	}
+-
+-	rcu_read_lock();
+-	dev = skb->dev;
+-	if (unlikely(!(dev->flags & IFF_UP))) {
+-		kfree_skb(skb);
+-		goto drop;
+-	}
+-
+-	if ((dst_vxlan->cfg.flags & VXLAN_F_LEARN) && snoop)
+-		vxlan_snoop(dev, &loopback, eth_hdr(skb)->h_source, 0, vni);
+-
+-	u64_stats_update_begin(&tx_stats->syncp);
+-	tx_stats->tx_packets++;
+-	tx_stats->tx_bytes += len;
+-	u64_stats_update_end(&tx_stats->syncp);
+-
+-	if (netif_rx(skb) == NET_RX_SUCCESS) {
+-		u64_stats_update_begin(&rx_stats->syncp);
+-		rx_stats->rx_packets++;
+-		rx_stats->rx_bytes += len;
+-		u64_stats_update_end(&rx_stats->syncp);
+-	} else {
+-drop:
+-		dev->stats.rx_dropped++;
+-	}
+-	rcu_read_unlock();
+-}
+-
+-static int encap_bypass_if_local(struct sk_buff *skb, struct net_device *dev,
+-				 struct vxlan_dev *vxlan,
+-				 union vxlan_addr *daddr,
+-				 __be16 dst_port, int dst_ifindex, __be32 vni,
+-				 struct dst_entry *dst,
+-				 u32 rt_flags)
+-{
+-#if IS_ENABLED(CONFIG_IPV6)
+-	/* IPv6 rt-flags are checked against RTF_LOCAL, but the value of
+-	 * RTF_LOCAL is equal to RTCF_LOCAL. So to keep code simple
+-	 * we can use RTCF_LOCAL which works for ipv4 and ipv6 route entry.
+-	 */
+-	BUILD_BUG_ON(RTCF_LOCAL != RTF_LOCAL);
+-#endif
+-	/* Bypass encapsulation if the destination is local */
+-	if (rt_flags & RTCF_LOCAL &&
+-	    !(rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))) {
+-		struct vxlan_dev *dst_vxlan;
+-
+-		dst_release(dst);
+-		dst_vxlan = vxlan_find_vni(vxlan->net, dst_ifindex, vni,
+-					   daddr->sa.sa_family, dst_port,
+-					   vxlan->cfg.flags);
+-		if (!dst_vxlan) {
+-			dev->stats.tx_errors++;
+-			kfree_skb(skb);
+-
+-			return -ENOENT;
+-		}
+-		vxlan_encap_bypass(skb, vxlan, dst_vxlan, vni, true);
+-		return 1;
+-	}
+-
+-	return 0;
+-}
+-
+-static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+-			   __be32 default_vni, struct vxlan_rdst *rdst,
+-			   bool did_rsc)
+-{
+-	struct dst_cache *dst_cache;
+-	struct ip_tunnel_info *info;
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	const struct iphdr *old_iph = ip_hdr(skb);
+-	union vxlan_addr *dst;
+-	union vxlan_addr remote_ip, local_ip;
+-	struct vxlan_metadata _md;
+-	struct vxlan_metadata *md = &_md;
+-	__be16 src_port = 0, dst_port;
+-	struct dst_entry *ndst = NULL;
+-	__be32 vni, label;
+-	__u8 tos, ttl;
+-	int ifindex;
+-	int err;
+-	u32 flags = vxlan->cfg.flags;
+-	bool udp_sum = false;
+-	bool xnet = !net_eq(vxlan->net, dev_net(vxlan->dev));
+-
+-	info = skb_tunnel_info(skb);
+-
+-	if (rdst) {
+-		dst = &rdst->remote_ip;
+-		if (vxlan_addr_any(dst)) {
+-			if (did_rsc) {
+-				/* short-circuited back to local bridge */
+-				vxlan_encap_bypass(skb, vxlan, vxlan,
+-						   default_vni, true);
+-				return;
+-			}
+-			goto drop;
+-		}
+-
+-		dst_port = rdst->remote_port ? rdst->remote_port : vxlan->cfg.dst_port;
+-		vni = (rdst->remote_vni) ? : default_vni;
+-		ifindex = rdst->remote_ifindex;
+-		local_ip = vxlan->cfg.saddr;
+-		dst_cache = &rdst->dst_cache;
+-		md->gbp = skb->mark;
+-		if (flags & VXLAN_F_TTL_INHERIT) {
+-			ttl = ip_tunnel_get_ttl(old_iph, skb);
+-		} else {
+-			ttl = vxlan->cfg.ttl;
+-			if (!ttl && vxlan_addr_multicast(dst))
+-				ttl = 1;
+-		}
+-
+-		tos = vxlan->cfg.tos;
+-		if (tos == 1)
+-			tos = ip_tunnel_get_dsfield(old_iph, skb);
+-
+-		if (dst->sa.sa_family == AF_INET)
+-			udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM_TX);
+-		else
+-			udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM6_TX);
+-		label = vxlan->cfg.label;
+-	} else {
+-		if (!info) {
+-			WARN_ONCE(1, "%s: Missing encapsulation instructions\n",
+-				  dev->name);
+-			goto drop;
+-		}
+-		remote_ip.sa.sa_family = ip_tunnel_info_af(info);
+-		if (remote_ip.sa.sa_family == AF_INET) {
+-			remote_ip.sin.sin_addr.s_addr = info->key.u.ipv4.dst;
+-			local_ip.sin.sin_addr.s_addr = info->key.u.ipv4.src;
+-		} else {
+-			remote_ip.sin6.sin6_addr = info->key.u.ipv6.dst;
+-			local_ip.sin6.sin6_addr = info->key.u.ipv6.src;
+-		}
+-		dst = &remote_ip;
+-		dst_port = info->key.tp_dst ? : vxlan->cfg.dst_port;
+-		vni = tunnel_id_to_key32(info->key.tun_id);
+-		ifindex = 0;
+-		dst_cache = &info->dst_cache;
+-		if (info->key.tun_flags & TUNNEL_VXLAN_OPT) {
+-			if (info->options_len < sizeof(*md))
+-				goto drop;
+-			md = ip_tunnel_info_opts(info);
+-		}
+-		ttl = info->key.ttl;
+-		tos = info->key.tos;
+-		label = info->key.label;
+-		udp_sum = !!(info->key.tun_flags & TUNNEL_CSUM);
+-	}
+-	src_port = udp_flow_src_port(dev_net(dev), skb, vxlan->cfg.port_min,
+-				     vxlan->cfg.port_max, true);
+-
+-	rcu_read_lock();
+-	if (dst->sa.sa_family == AF_INET) {
+-		struct vxlan_sock *sock4 = rcu_dereference(vxlan->vn4_sock);
+-		struct rtable *rt;
+-		__be16 df = 0;
+-
+-		if (!ifindex)
+-			ifindex = sock4->sock->sk->sk_bound_dev_if;
+-
+-		rt = vxlan_get_route(vxlan, dev, sock4, skb, ifindex, tos,
+-				     dst->sin.sin_addr.s_addr,
+-				     &local_ip.sin.sin_addr.s_addr,
+-				     dst_port, src_port,
+-				     dst_cache, info);
+-		if (IS_ERR(rt)) {
+-			err = PTR_ERR(rt);
+-			goto tx_error;
+-		}
+-
+-		if (!info) {
+-			/* Bypass encapsulation if the destination is local */
+-			err = encap_bypass_if_local(skb, dev, vxlan, dst,
+-						    dst_port, ifindex, vni,
+-						    &rt->dst, rt->rt_flags);
+-			if (err)
+-				goto out_unlock;
+-
+-			if (vxlan->cfg.df == VXLAN_DF_SET) {
+-				df = htons(IP_DF);
+-			} else if (vxlan->cfg.df == VXLAN_DF_INHERIT) {
+-				struct ethhdr *eth = eth_hdr(skb);
+-
+-				if (ntohs(eth->h_proto) == ETH_P_IPV6 ||
+-				    (ntohs(eth->h_proto) == ETH_P_IP &&
+-				     old_iph->frag_off & htons(IP_DF)))
+-					df = htons(IP_DF);
+-			}
+-		} else if (info->key.tun_flags & TUNNEL_DONT_FRAGMENT) {
+-			df = htons(IP_DF);
+-		}
+-
+-		ndst = &rt->dst;
+-		err = skb_tunnel_check_pmtu(skb, ndst, VXLAN_HEADROOM,
+-					    netif_is_any_bridge_port(dev));
+-		if (err < 0) {
+-			goto tx_error;
+-		} else if (err) {
+-			if (info) {
+-				struct ip_tunnel_info *unclone;
+-				struct in_addr src, dst;
+-
+-				unclone = skb_tunnel_info_unclone(skb);
+-				if (unlikely(!unclone))
+-					goto tx_error;
+-
+-				src = remote_ip.sin.sin_addr;
+-				dst = local_ip.sin.sin_addr;
+-				unclone->key.u.ipv4.src = src.s_addr;
+-				unclone->key.u.ipv4.dst = dst.s_addr;
+-			}
+-			vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
+-			dst_release(ndst);
+-			goto out_unlock;
+-		}
+-
+-		tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
+-		ttl = ttl ? : ip4_dst_hoplimit(&rt->dst);
+-		err = vxlan_build_skb(skb, ndst, sizeof(struct iphdr),
+-				      vni, md, flags, udp_sum);
+-		if (err < 0)
+-			goto tx_error;
+-
+-		udp_tunnel_xmit_skb(rt, sock4->sock->sk, skb, local_ip.sin.sin_addr.s_addr,
+-				    dst->sin.sin_addr.s_addr, tos, ttl, df,
+-				    src_port, dst_port, xnet, !udp_sum);
+-#if IS_ENABLED(CONFIG_IPV6)
+-	} else {
+-		struct vxlan_sock *sock6 = rcu_dereference(vxlan->vn6_sock);
+-
+-		if (!ifindex)
+-			ifindex = sock6->sock->sk->sk_bound_dev_if;
+-
+-		ndst = vxlan6_get_route(vxlan, dev, sock6, skb, ifindex, tos,
+-					label, &dst->sin6.sin6_addr,
+-					&local_ip.sin6.sin6_addr,
+-					dst_port, src_port,
+-					dst_cache, info);
+-		if (IS_ERR(ndst)) {
+-			err = PTR_ERR(ndst);
+-			ndst = NULL;
+-			goto tx_error;
+-		}
+-
+-		if (!info) {
+-			u32 rt6i_flags = ((struct rt6_info *)ndst)->rt6i_flags;
+-
+-			err = encap_bypass_if_local(skb, dev, vxlan, dst,
+-						    dst_port, ifindex, vni,
+-						    ndst, rt6i_flags);
+-			if (err)
+-				goto out_unlock;
+-		}
+-
+-		err = skb_tunnel_check_pmtu(skb, ndst, VXLAN6_HEADROOM,
+-					    netif_is_any_bridge_port(dev));
+-		if (err < 0) {
+-			goto tx_error;
+-		} else if (err) {
+-			if (info) {
+-				struct ip_tunnel_info *unclone;
+-				struct in6_addr src, dst;
+-
+-				unclone = skb_tunnel_info_unclone(skb);
+-				if (unlikely(!unclone))
+-					goto tx_error;
+-
+-				src = remote_ip.sin6.sin6_addr;
+-				dst = local_ip.sin6.sin6_addr;
+-				unclone->key.u.ipv6.src = src;
+-				unclone->key.u.ipv6.dst = dst;
+-			}
+-
+-			vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
+-			dst_release(ndst);
+-			goto out_unlock;
+-		}
+-
+-		tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
+-		ttl = ttl ? : ip6_dst_hoplimit(ndst);
+-		skb_scrub_packet(skb, xnet);
+-		err = vxlan_build_skb(skb, ndst, sizeof(struct ipv6hdr),
+-				      vni, md, flags, udp_sum);
+-		if (err < 0)
+-			goto tx_error;
+-
+-		udp_tunnel6_xmit_skb(ndst, sock6->sock->sk, skb, dev,
+-				     &local_ip.sin6.sin6_addr,
+-				     &dst->sin6.sin6_addr, tos, ttl,
+-				     label, src_port, dst_port, !udp_sum);
+-#endif
+-	}
+-out_unlock:
+-	rcu_read_unlock();
+-	return;
+-
+-drop:
+-	dev->stats.tx_dropped++;
+-	dev_kfree_skb(skb);
+-	return;
+-
+-tx_error:
+-	rcu_read_unlock();
+-	if (err == -ELOOP)
+-		dev->stats.collisions++;
+-	else if (err == -ENETUNREACH)
+-		dev->stats.tx_carrier_errors++;
+-	dst_release(ndst);
+-	dev->stats.tx_errors++;
+-	kfree_skb(skb);
+-}
+-
+-static void vxlan_xmit_nh(struct sk_buff *skb, struct net_device *dev,
+-			  struct vxlan_fdb *f, __be32 vni, bool did_rsc)
+-{
+-	struct vxlan_rdst nh_rdst;
+-	struct nexthop *nh;
+-	bool do_xmit;
+-	u32 hash;
+-
+-	memset(&nh_rdst, 0, sizeof(struct vxlan_rdst));
+-	hash = skb_get_hash(skb);
+-
+-	rcu_read_lock();
+-	nh = rcu_dereference(f->nh);
+-	if (!nh) {
+-		rcu_read_unlock();
+-		goto drop;
+-	}
+-	do_xmit = vxlan_fdb_nh_path_select(nh, hash, &nh_rdst);
+-	rcu_read_unlock();
+-
+-	if (likely(do_xmit))
+-		vxlan_xmit_one(skb, dev, vni, &nh_rdst, did_rsc);
+-	else
+-		goto drop;
+-
+-	return;
+-
+-drop:
+-	dev->stats.tx_dropped++;
+-	dev_kfree_skb(skb);
+-}
+-
+-/* Transmit local packets over Vxlan
+- *
+- * Outer IP header inherits ECN and DF from inner header.
+- * Outer UDP destination is the VXLAN assigned port.
+- *           source port is based on hash of flow
+- */
+-static netdev_tx_t vxlan_xmit(struct sk_buff *skb, struct net_device *dev)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct vxlan_rdst *rdst, *fdst = NULL;
+-	const struct ip_tunnel_info *info;
+-	bool did_rsc = false;
+-	struct vxlan_fdb *f;
+-	struct ethhdr *eth;
+-	__be32 vni = 0;
+-
+-	info = skb_tunnel_info(skb);
+-
+-	skb_reset_mac_header(skb);
+-
+-	if (vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA) {
+-		if (info && info->mode & IP_TUNNEL_INFO_BRIDGE &&
+-		    info->mode & IP_TUNNEL_INFO_TX) {
+-			vni = tunnel_id_to_key32(info->key.tun_id);
+-		} else {
+-			if (info && info->mode & IP_TUNNEL_INFO_TX)
+-				vxlan_xmit_one(skb, dev, vni, NULL, false);
+-			else
+-				kfree_skb(skb);
+-			return NETDEV_TX_OK;
+-		}
+-	}
+-
+-	if (vxlan->cfg.flags & VXLAN_F_PROXY) {
+-		eth = eth_hdr(skb);
+-		if (ntohs(eth->h_proto) == ETH_P_ARP)
+-			return arp_reduce(dev, skb, vni);
+-#if IS_ENABLED(CONFIG_IPV6)
+-		else if (ntohs(eth->h_proto) == ETH_P_IPV6 &&
+-			 pskb_may_pull(skb, sizeof(struct ipv6hdr) +
+-					    sizeof(struct nd_msg)) &&
+-			 ipv6_hdr(skb)->nexthdr == IPPROTO_ICMPV6) {
+-			struct nd_msg *m = (struct nd_msg *)(ipv6_hdr(skb) + 1);
+-
+-			if (m->icmph.icmp6_code == 0 &&
+-			    m->icmph.icmp6_type == NDISC_NEIGHBOUR_SOLICITATION)
+-				return neigh_reduce(dev, skb, vni);
+-		}
+-#endif
+-	}
+-
+-	eth = eth_hdr(skb);
+-	f = vxlan_find_mac(vxlan, eth->h_dest, vni);
+-	did_rsc = false;
+-
+-	if (f && (f->flags & NTF_ROUTER) && (vxlan->cfg.flags & VXLAN_F_RSC) &&
+-	    (ntohs(eth->h_proto) == ETH_P_IP ||
+-	     ntohs(eth->h_proto) == ETH_P_IPV6)) {
+-		did_rsc = route_shortcircuit(dev, skb);
+-		if (did_rsc)
+-			f = vxlan_find_mac(vxlan, eth->h_dest, vni);
+-	}
+-
+-	if (f == NULL) {
+-		f = vxlan_find_mac(vxlan, all_zeros_mac, vni);
+-		if (f == NULL) {
+-			if ((vxlan->cfg.flags & VXLAN_F_L2MISS) &&
+-			    !is_multicast_ether_addr(eth->h_dest))
+-				vxlan_fdb_miss(vxlan, eth->h_dest);
+-
+-			dev->stats.tx_dropped++;
+-			kfree_skb(skb);
+-			return NETDEV_TX_OK;
+-		}
+-	}
+-
+-	if (rcu_access_pointer(f->nh)) {
+-		vxlan_xmit_nh(skb, dev, f,
+-			      (vni ? : vxlan->default_dst.remote_vni), did_rsc);
+-	} else {
+-		list_for_each_entry_rcu(rdst, &f->remotes, list) {
+-			struct sk_buff *skb1;
+-
+-			if (!fdst) {
+-				fdst = rdst;
+-				continue;
+-			}
+-			skb1 = skb_clone(skb, GFP_ATOMIC);
+-			if (skb1)
+-				vxlan_xmit_one(skb1, dev, vni, rdst, did_rsc);
+-		}
+-		if (fdst)
+-			vxlan_xmit_one(skb, dev, vni, fdst, did_rsc);
+-		else
+-			kfree_skb(skb);
+-	}
+-
+-	return NETDEV_TX_OK;
+-}
+-
+-/* Walk the forwarding table and purge stale entries */
+-static void vxlan_cleanup(struct timer_list *t)
+-{
+-	struct vxlan_dev *vxlan = from_timer(vxlan, t, age_timer);
+-	unsigned long next_timer = jiffies + FDB_AGE_INTERVAL;
+-	unsigned int h;
+-
+-	if (!netif_running(vxlan->dev))
+-		return;
+-
+-	for (h = 0; h < FDB_HASH_SIZE; ++h) {
+-		struct hlist_node *p, *n;
+-
+-		spin_lock(&vxlan->hash_lock[h]);
+-		hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
+-			struct vxlan_fdb *f
+-				= container_of(p, struct vxlan_fdb, hlist);
+-			unsigned long timeout;
+-
+-			if (f->state & (NUD_PERMANENT | NUD_NOARP))
+-				continue;
+-
+-			if (f->flags & NTF_EXT_LEARNED)
+-				continue;
+-
+-			timeout = f->used + vxlan->cfg.age_interval * HZ;
+-			if (time_before_eq(timeout, jiffies)) {
+-				netdev_dbg(vxlan->dev,
+-					   "garbage collect %pM\n",
+-					   f->eth_addr);
+-				f->state = NUD_STALE;
+-				vxlan_fdb_destroy(vxlan, f, true, true);
+-			} else if (time_before(timeout, next_timer))
+-				next_timer = timeout;
+-		}
+-		spin_unlock(&vxlan->hash_lock[h]);
+-	}
+-
+-	mod_timer(&vxlan->age_timer, next_timer);
+-}
+-
+-static void vxlan_vs_del_dev(struct vxlan_dev *vxlan)
+-{
+-	struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
+-
+-	spin_lock(&vn->sock_lock);
+-	hlist_del_init_rcu(&vxlan->hlist4.hlist);
+-#if IS_ENABLED(CONFIG_IPV6)
+-	hlist_del_init_rcu(&vxlan->hlist6.hlist);
+-#endif
+-	spin_unlock(&vn->sock_lock);
+-}
+-
+-static void vxlan_vs_add_dev(struct vxlan_sock *vs, struct vxlan_dev *vxlan,
+-			     struct vxlan_dev_node *node)
+-{
+-	struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
+-	__be32 vni = vxlan->default_dst.remote_vni;
+-
+-	node->vxlan = vxlan;
+-	spin_lock(&vn->sock_lock);
+-	hlist_add_head_rcu(&node->hlist, vni_head(vs, vni));
+-	spin_unlock(&vn->sock_lock);
+-}
+-
+-/* Setup stats when device is created */
+-static int vxlan_init(struct net_device *dev)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	int err;
+-
+-	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
+-	if (!dev->tstats)
+-		return -ENOMEM;
+-
+-	err = gro_cells_init(&vxlan->gro_cells, dev);
+-	if (err) {
+-		free_percpu(dev->tstats);
+-		return err;
+-	}
+-
+-	return 0;
+-}
+-
+-static void vxlan_fdb_delete_default(struct vxlan_dev *vxlan, __be32 vni)
+-{
+-	struct vxlan_fdb *f;
+-	u32 hash_index = fdb_head_index(vxlan, all_zeros_mac, vni);
+-
+-	spin_lock_bh(&vxlan->hash_lock[hash_index]);
+-	f = __vxlan_find_mac(vxlan, all_zeros_mac, vni);
+-	if (f)
+-		vxlan_fdb_destroy(vxlan, f, true, true);
+-	spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+-}
+-
+-static void vxlan_uninit(struct net_device *dev)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-
+-	gro_cells_destroy(&vxlan->gro_cells);
+-
+-	vxlan_fdb_delete_default(vxlan, vxlan->cfg.vni);
+-
+-	free_percpu(dev->tstats);
+-}
+-
+-/* Start ageing timer and join group when device is brought up */
+-static int vxlan_open(struct net_device *dev)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	int ret;
+-
+-	ret = vxlan_sock_add(vxlan);
+-	if (ret < 0)
+-		return ret;
+-
+-	if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip)) {
+-		ret = vxlan_igmp_join(vxlan);
+-		if (ret == -EADDRINUSE)
+-			ret = 0;
+-		if (ret) {
+-			vxlan_sock_release(vxlan);
+-			return ret;
+-		}
+-	}
+-
+-	if (vxlan->cfg.age_interval)
+-		mod_timer(&vxlan->age_timer, jiffies + FDB_AGE_INTERVAL);
+-
+-	return ret;
+-}
+-
+-/* Purge the forwarding table */
+-static void vxlan_flush(struct vxlan_dev *vxlan, bool do_all)
+-{
+-	unsigned int h;
+-
+-	for (h = 0; h < FDB_HASH_SIZE; ++h) {
+-		struct hlist_node *p, *n;
+-
+-		spin_lock_bh(&vxlan->hash_lock[h]);
+-		hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
+-			struct vxlan_fdb *f
+-				= container_of(p, struct vxlan_fdb, hlist);
+-			if (!do_all && (f->state & (NUD_PERMANENT | NUD_NOARP)))
+-				continue;
+-			/* the all_zeros_mac entry is deleted at vxlan_uninit */
+-			if (is_zero_ether_addr(f->eth_addr) &&
+-			    f->vni == vxlan->cfg.vni)
+-				continue;
+-			vxlan_fdb_destroy(vxlan, f, true, true);
+-		}
+-		spin_unlock_bh(&vxlan->hash_lock[h]);
+-	}
+-}
+-
+-/* Cleanup timer and forwarding table on shutdown */
+-static int vxlan_stop(struct net_device *dev)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
+-	int ret = 0;
+-
+-	if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip) &&
+-	    !vxlan_group_used(vn, vxlan))
+-		ret = vxlan_igmp_leave(vxlan);
+-
+-	del_timer_sync(&vxlan->age_timer);
+-
+-	vxlan_flush(vxlan, false);
+-	vxlan_sock_release(vxlan);
+-
+-	return ret;
+-}
+-
+-/* Stub, nothing needs to be done. */
+-static void vxlan_set_multicast_list(struct net_device *dev)
+-{
+-}
+-
+-static int vxlan_change_mtu(struct net_device *dev, int new_mtu)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct vxlan_rdst *dst = &vxlan->default_dst;
+-	struct net_device *lowerdev = __dev_get_by_index(vxlan->net,
+-							 dst->remote_ifindex);
+-	bool use_ipv6 = !!(vxlan->cfg.flags & VXLAN_F_IPV6);
+-
+-	/* This check is different than dev->max_mtu, because it looks at
+-	 * the lowerdev->mtu, rather than the static dev->max_mtu
+-	 */
+-	if (lowerdev) {
+-		int max_mtu = lowerdev->mtu -
+-			      (use_ipv6 ? VXLAN6_HEADROOM : VXLAN_HEADROOM);
+-		if (new_mtu > max_mtu)
+-			return -EINVAL;
+-	}
+-
+-	dev->mtu = new_mtu;
+-	return 0;
+-}
+-
+-static int vxlan_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct ip_tunnel_info *info = skb_tunnel_info(skb);
+-	__be16 sport, dport;
+-
+-	sport = udp_flow_src_port(dev_net(dev), skb, vxlan->cfg.port_min,
+-				  vxlan->cfg.port_max, true);
+-	dport = info->key.tp_dst ? : vxlan->cfg.dst_port;
+-
+-	if (ip_tunnel_info_af(info) == AF_INET) {
+-		struct vxlan_sock *sock4 = rcu_dereference(vxlan->vn4_sock);
+-		struct rtable *rt;
+-
+-		rt = vxlan_get_route(vxlan, dev, sock4, skb, 0, info->key.tos,
+-				     info->key.u.ipv4.dst,
+-				     &info->key.u.ipv4.src, dport, sport,
+-				     &info->dst_cache, info);
+-		if (IS_ERR(rt))
+-			return PTR_ERR(rt);
+-		ip_rt_put(rt);
+-	} else {
+-#if IS_ENABLED(CONFIG_IPV6)
+-		struct vxlan_sock *sock6 = rcu_dereference(vxlan->vn6_sock);
+-		struct dst_entry *ndst;
+-
+-		ndst = vxlan6_get_route(vxlan, dev, sock6, skb, 0, info->key.tos,
+-					info->key.label, &info->key.u.ipv6.dst,
+-					&info->key.u.ipv6.src, dport, sport,
+-					&info->dst_cache, info);
+-		if (IS_ERR(ndst))
+-			return PTR_ERR(ndst);
+-		dst_release(ndst);
+-#else /* !CONFIG_IPV6 */
+-		return -EPFNOSUPPORT;
+-#endif
+-	}
+-	info->key.tp_src = sport;
+-	info->key.tp_dst = dport;
+-	return 0;
+-}
+-
+-static const struct net_device_ops vxlan_netdev_ether_ops = {
+-	.ndo_init		= vxlan_init,
+-	.ndo_uninit		= vxlan_uninit,
+-	.ndo_open		= vxlan_open,
+-	.ndo_stop		= vxlan_stop,
+-	.ndo_start_xmit		= vxlan_xmit,
+-	.ndo_get_stats64	= ip_tunnel_get_stats64,
+-	.ndo_set_rx_mode	= vxlan_set_multicast_list,
+-	.ndo_change_mtu		= vxlan_change_mtu,
+-	.ndo_validate_addr	= eth_validate_addr,
+-	.ndo_set_mac_address	= eth_mac_addr,
+-	.ndo_fdb_add		= vxlan_fdb_add,
+-	.ndo_fdb_del		= vxlan_fdb_delete,
+-	.ndo_fdb_dump		= vxlan_fdb_dump,
+-	.ndo_fdb_get		= vxlan_fdb_get,
+-	.ndo_fill_metadata_dst	= vxlan_fill_metadata_dst,
+-	.ndo_change_proto_down  = dev_change_proto_down_generic,
+-};
+-
+-static const struct net_device_ops vxlan_netdev_raw_ops = {
+-	.ndo_init		= vxlan_init,
+-	.ndo_uninit		= vxlan_uninit,
+-	.ndo_open		= vxlan_open,
+-	.ndo_stop		= vxlan_stop,
+-	.ndo_start_xmit		= vxlan_xmit,
+-	.ndo_get_stats64	= ip_tunnel_get_stats64,
+-	.ndo_change_mtu		= vxlan_change_mtu,
+-	.ndo_fill_metadata_dst	= vxlan_fill_metadata_dst,
+-};
+-
+-/* Info for udev, that this is a virtual tunnel endpoint */
+-static struct device_type vxlan_type = {
+-	.name = "vxlan",
+-};
+-
+-/* Calls the ndo_udp_tunnel_add of the caller in order to
+- * supply the listening VXLAN udp ports. Callers are expected
+- * to implement the ndo_udp_tunnel_add.
+- */
+-static void vxlan_offload_rx_ports(struct net_device *dev, bool push)
+-{
+-	struct vxlan_sock *vs;
+-	struct net *net = dev_net(dev);
+-	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
+-	unsigned int i;
+-
+-	spin_lock(&vn->sock_lock);
+-	for (i = 0; i < PORT_HASH_SIZE; ++i) {
+-		hlist_for_each_entry_rcu(vs, &vn->sock_list[i], hlist) {
+-			unsigned short type;
+-
+-			if (vs->flags & VXLAN_F_GPE)
+-				type = UDP_TUNNEL_TYPE_VXLAN_GPE;
+-			else
+-				type = UDP_TUNNEL_TYPE_VXLAN;
+-
+-			if (push)
+-				udp_tunnel_push_rx_port(dev, vs->sock, type);
+-			else
+-				udp_tunnel_drop_rx_port(dev, vs->sock, type);
+-		}
+-	}
+-	spin_unlock(&vn->sock_lock);
+-}
+-
+-/* Initialize the device structure. */
+-static void vxlan_setup(struct net_device *dev)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	unsigned int h;
+-
+-	eth_hw_addr_random(dev);
+-	ether_setup(dev);
+-
+-	dev->needs_free_netdev = true;
+-	SET_NETDEV_DEVTYPE(dev, &vxlan_type);
+-
+-	dev->features	|= NETIF_F_LLTX;
+-	dev->features	|= NETIF_F_SG | NETIF_F_HW_CSUM;
+-	dev->features   |= NETIF_F_RXCSUM;
+-	dev->features   |= NETIF_F_GSO_SOFTWARE;
+-
+-	dev->vlan_features = dev->features;
+-	dev->hw_features |= NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
+-	dev->hw_features |= NETIF_F_GSO_SOFTWARE;
+-	netif_keep_dst(dev);
+-	dev->priv_flags |= IFF_NO_QUEUE;
+-
+-	/* MTU range: 68 - 65535 */
+-	dev->min_mtu = ETH_MIN_MTU;
+-	dev->max_mtu = ETH_MAX_MTU;
+-
+-	INIT_LIST_HEAD(&vxlan->next);
+-
+-	timer_setup(&vxlan->age_timer, vxlan_cleanup, TIMER_DEFERRABLE);
+-
+-	vxlan->dev = dev;
+-
+-	for (h = 0; h < FDB_HASH_SIZE; ++h) {
+-		spin_lock_init(&vxlan->hash_lock[h]);
+-		INIT_HLIST_HEAD(&vxlan->fdb_head[h]);
+-	}
+-}
+-
+-static void vxlan_ether_setup(struct net_device *dev)
+-{
+-	dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+-	dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+-	dev->netdev_ops = &vxlan_netdev_ether_ops;
+-}
+-
+-static void vxlan_raw_setup(struct net_device *dev)
+-{
+-	dev->header_ops = NULL;
+-	dev->type = ARPHRD_NONE;
+-	dev->hard_header_len = 0;
+-	dev->addr_len = 0;
+-	dev->flags = IFF_POINTOPOINT | IFF_NOARP | IFF_MULTICAST;
+-	dev->netdev_ops = &vxlan_netdev_raw_ops;
+-}
+-
+-static const struct nla_policy vxlan_policy[IFLA_VXLAN_MAX + 1] = {
+-	[IFLA_VXLAN_ID]		= { .type = NLA_U32 },
+-	[IFLA_VXLAN_GROUP]	= { .len = sizeof_field(struct iphdr, daddr) },
+-	[IFLA_VXLAN_GROUP6]	= { .len = sizeof(struct in6_addr) },
+-	[IFLA_VXLAN_LINK]	= { .type = NLA_U32 },
+-	[IFLA_VXLAN_LOCAL]	= { .len = sizeof_field(struct iphdr, saddr) },
+-	[IFLA_VXLAN_LOCAL6]	= { .len = sizeof(struct in6_addr) },
+-	[IFLA_VXLAN_TOS]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_TTL]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_LABEL]	= { .type = NLA_U32 },
+-	[IFLA_VXLAN_LEARNING]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_AGEING]	= { .type = NLA_U32 },
+-	[IFLA_VXLAN_LIMIT]	= { .type = NLA_U32 },
+-	[IFLA_VXLAN_PORT_RANGE] = { .len  = sizeof(struct ifla_vxlan_port_range) },
+-	[IFLA_VXLAN_PROXY]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_RSC]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_L2MISS]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_L3MISS]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_COLLECT_METADATA]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_PORT]	= { .type = NLA_U16 },
+-	[IFLA_VXLAN_UDP_CSUM]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_UDP_ZERO_CSUM6_TX]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_UDP_ZERO_CSUM6_RX]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_REMCSUM_TX]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_REMCSUM_RX]	= { .type = NLA_U8 },
+-	[IFLA_VXLAN_GBP]	= { .type = NLA_FLAG, },
+-	[IFLA_VXLAN_GPE]	= { .type = NLA_FLAG, },
+-	[IFLA_VXLAN_REMCSUM_NOPARTIAL]	= { .type = NLA_FLAG },
+-	[IFLA_VXLAN_TTL_INHERIT]	= { .type = NLA_FLAG },
+-	[IFLA_VXLAN_DF]		= { .type = NLA_U8 },
+-};
+-
+-static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[],
+-			  struct netlink_ext_ack *extack)
+-{
+-	if (tb[IFLA_ADDRESS]) {
+-		if (nla_len(tb[IFLA_ADDRESS]) != ETH_ALEN) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_ADDRESS],
+-					    "Provided link layer address is not Ethernet");
+-			return -EINVAL;
+-		}
+-
+-		if (!is_valid_ether_addr(nla_data(tb[IFLA_ADDRESS]))) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_ADDRESS],
+-					    "Provided Ethernet address is not unicast");
+-			return -EADDRNOTAVAIL;
+-		}
+-	}
+-
+-	if (tb[IFLA_MTU]) {
+-		u32 mtu = nla_get_u32(tb[IFLA_MTU]);
+-
+-		if (mtu < ETH_MIN_MTU || mtu > ETH_MAX_MTU) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_MTU],
+-					    "MTU must be between 68 and 65535");
+-			return -EINVAL;
+-		}
+-	}
+-
+-	if (!data) {
+-		NL_SET_ERR_MSG(extack,
+-			       "Required attributes not provided to perform the operation");
+-		return -EINVAL;
+-	}
+-
+-	if (data[IFLA_VXLAN_ID]) {
+-		u32 id = nla_get_u32(data[IFLA_VXLAN_ID]);
+-
+-		if (id >= VXLAN_N_VID) {
+-			NL_SET_ERR_MSG_ATTR(extack, data[IFLA_VXLAN_ID],
+-					    "VXLAN ID must be lower than 16777216");
+-			return -ERANGE;
+-		}
+-	}
+-
+-	if (data[IFLA_VXLAN_PORT_RANGE]) {
+-		const struct ifla_vxlan_port_range *p
+-			= nla_data(data[IFLA_VXLAN_PORT_RANGE]);
+-
+-		if (ntohs(p->high) < ntohs(p->low)) {
+-			NL_SET_ERR_MSG_ATTR(extack, data[IFLA_VXLAN_PORT_RANGE],
+-					    "Invalid source port range");
+-			return -EINVAL;
+-		}
+-	}
+-
+-	if (data[IFLA_VXLAN_DF]) {
+-		enum ifla_vxlan_df df = nla_get_u8(data[IFLA_VXLAN_DF]);
+-
+-		if (df < 0 || df > VXLAN_DF_MAX) {
+-			NL_SET_ERR_MSG_ATTR(extack, data[IFLA_VXLAN_DF],
+-					    "Invalid DF attribute");
+-			return -EINVAL;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+-static void vxlan_get_drvinfo(struct net_device *netdev,
+-			      struct ethtool_drvinfo *drvinfo)
+-{
+-	strlcpy(drvinfo->version, VXLAN_VERSION, sizeof(drvinfo->version));
+-	strlcpy(drvinfo->driver, "vxlan", sizeof(drvinfo->driver));
+-}
+-
+-static int vxlan_get_link_ksettings(struct net_device *dev,
+-				    struct ethtool_link_ksettings *cmd)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct vxlan_rdst *dst = &vxlan->default_dst;
+-	struct net_device *lowerdev = __dev_get_by_index(vxlan->net,
+-							 dst->remote_ifindex);
+-
+-	if (!lowerdev) {
+-		cmd->base.duplex = DUPLEX_UNKNOWN;
+-		cmd->base.port = PORT_OTHER;
+-		cmd->base.speed = SPEED_UNKNOWN;
+-
+-		return 0;
+-	}
+-
+-	return __ethtool_get_link_ksettings(lowerdev, cmd);
+-}
+-
+-static const struct ethtool_ops vxlan_ethtool_ops = {
+-	.get_drvinfo		= vxlan_get_drvinfo,
+-	.get_link		= ethtool_op_get_link,
+-	.get_link_ksettings	= vxlan_get_link_ksettings,
+-};
+-
+-static struct socket *vxlan_create_sock(struct net *net, bool ipv6,
+-					__be16 port, u32 flags, int ifindex)
+-{
+-	struct socket *sock;
+-	struct udp_port_cfg udp_conf;
+-	int err;
+-
+-	memset(&udp_conf, 0, sizeof(udp_conf));
+-
+-	if (ipv6) {
+-		udp_conf.family = AF_INET6;
+-		udp_conf.use_udp6_rx_checksums =
+-		    !(flags & VXLAN_F_UDP_ZERO_CSUM6_RX);
+-		udp_conf.ipv6_v6only = 1;
+-	} else {
+-		udp_conf.family = AF_INET;
+-	}
+-
+-	udp_conf.local_udp_port = port;
+-	udp_conf.bind_ifindex = ifindex;
+-
+-	/* Open UDP socket */
+-	err = udp_sock_create(net, &udp_conf, &sock);
+-	if (err < 0)
+-		return ERR_PTR(err);
+-
+-	return sock;
+-}
+-
+-/* Create new listen socket if needed */
+-static struct vxlan_sock *vxlan_socket_create(struct net *net, bool ipv6,
+-					      __be16 port, u32 flags,
+-					      int ifindex)
+-{
+-	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
+-	struct vxlan_sock *vs;
+-	struct socket *sock;
+-	unsigned int h;
+-	struct udp_tunnel_sock_cfg tunnel_cfg;
+-
+-	vs = kzalloc(sizeof(*vs), GFP_KERNEL);
+-	if (!vs)
+-		return ERR_PTR(-ENOMEM);
+-
+-	for (h = 0; h < VNI_HASH_SIZE; ++h)
+-		INIT_HLIST_HEAD(&vs->vni_list[h]);
+-
+-	sock = vxlan_create_sock(net, ipv6, port, flags, ifindex);
+-	if (IS_ERR(sock)) {
+-		kfree(vs);
+-		return ERR_CAST(sock);
+-	}
+-
+-	vs->sock = sock;
+-	refcount_set(&vs->refcnt, 1);
+-	vs->flags = (flags & VXLAN_F_RCV_FLAGS);
+-
+-	spin_lock(&vn->sock_lock);
+-	hlist_add_head_rcu(&vs->hlist, vs_head(net, port));
+-	udp_tunnel_notify_add_rx_port(sock,
+-				      (vs->flags & VXLAN_F_GPE) ?
+-				      UDP_TUNNEL_TYPE_VXLAN_GPE :
+-				      UDP_TUNNEL_TYPE_VXLAN);
+-	spin_unlock(&vn->sock_lock);
+-
+-	/* Mark socket as an encapsulation socket. */
+-	memset(&tunnel_cfg, 0, sizeof(tunnel_cfg));
+-	tunnel_cfg.sk_user_data = vs;
+-	tunnel_cfg.encap_type = 1;
+-	tunnel_cfg.encap_rcv = vxlan_rcv;
+-	tunnel_cfg.encap_err_lookup = vxlan_err_lookup;
+-	tunnel_cfg.encap_destroy = NULL;
+-	tunnel_cfg.gro_receive = vxlan_gro_receive;
+-	tunnel_cfg.gro_complete = vxlan_gro_complete;
+-
+-	setup_udp_tunnel_sock(net, sock, &tunnel_cfg);
+-
+-	return vs;
+-}
+-
+-static int __vxlan_sock_add(struct vxlan_dev *vxlan, bool ipv6)
+-{
+-	struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
+-	struct vxlan_sock *vs = NULL;
+-	struct vxlan_dev_node *node;
+-	int l3mdev_index = 0;
+-
+-	if (vxlan->cfg.remote_ifindex)
+-		l3mdev_index = l3mdev_master_upper_ifindex_by_index(
+-			vxlan->net, vxlan->cfg.remote_ifindex);
+-
+-	if (!vxlan->cfg.no_share) {
+-		spin_lock(&vn->sock_lock);
+-		vs = vxlan_find_sock(vxlan->net, ipv6 ? AF_INET6 : AF_INET,
+-				     vxlan->cfg.dst_port, vxlan->cfg.flags,
+-				     l3mdev_index);
+-		if (vs && !refcount_inc_not_zero(&vs->refcnt)) {
+-			spin_unlock(&vn->sock_lock);
+-			return -EBUSY;
+-		}
+-		spin_unlock(&vn->sock_lock);
+-	}
+-	if (!vs)
+-		vs = vxlan_socket_create(vxlan->net, ipv6,
+-					 vxlan->cfg.dst_port, vxlan->cfg.flags,
+-					 l3mdev_index);
+-	if (IS_ERR(vs))
+-		return PTR_ERR(vs);
+-#if IS_ENABLED(CONFIG_IPV6)
+-	if (ipv6) {
+-		rcu_assign_pointer(vxlan->vn6_sock, vs);
+-		node = &vxlan->hlist6;
+-	} else
+-#endif
+-	{
+-		rcu_assign_pointer(vxlan->vn4_sock, vs);
+-		node = &vxlan->hlist4;
+-	}
+-	vxlan_vs_add_dev(vs, vxlan, node);
+-	return 0;
+-}
+-
+-static int vxlan_sock_add(struct vxlan_dev *vxlan)
+-{
+-	bool metadata = vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA;
+-	bool ipv6 = vxlan->cfg.flags & VXLAN_F_IPV6 || metadata;
+-	bool ipv4 = !ipv6 || metadata;
+-	int ret = 0;
+-
+-	RCU_INIT_POINTER(vxlan->vn4_sock, NULL);
+-#if IS_ENABLED(CONFIG_IPV6)
+-	RCU_INIT_POINTER(vxlan->vn6_sock, NULL);
+-	if (ipv6) {
+-		ret = __vxlan_sock_add(vxlan, true);
+-		if (ret < 0 && ret != -EAFNOSUPPORT)
+-			ipv4 = false;
+-	}
+-#endif
+-	if (ipv4)
+-		ret = __vxlan_sock_add(vxlan, false);
+-	if (ret < 0)
+-		vxlan_sock_release(vxlan);
+-	return ret;
+-}
+-
+-static int vxlan_config_validate(struct net *src_net, struct vxlan_config *conf,
+-				 struct net_device **lower,
+-				 struct vxlan_dev *old,
+-				 struct netlink_ext_ack *extack)
+-{
+-	struct vxlan_net *vn = net_generic(src_net, vxlan_net_id);
+-	struct vxlan_dev *tmp;
+-	bool use_ipv6 = false;
+-
+-	if (conf->flags & VXLAN_F_GPE) {
+-		/* For now, allow GPE only together with
+-		 * COLLECT_METADATA. This can be relaxed later; in such
+-		 * case, the other side of the PtP link will have to be
+-		 * provided.
+-		 */
+-		if ((conf->flags & ~VXLAN_F_ALLOWED_GPE) ||
+-		    !(conf->flags & VXLAN_F_COLLECT_METADATA)) {
+-			NL_SET_ERR_MSG(extack,
+-				       "VXLAN GPE does not support this combination of attributes");
+-			return -EINVAL;
+-		}
+-	}
+-
+-	if (!conf->remote_ip.sa.sa_family && !conf->saddr.sa.sa_family) {
+-		/* Unless IPv6 is explicitly requested, assume IPv4 */
+-		conf->remote_ip.sa.sa_family = AF_INET;
+-		conf->saddr.sa.sa_family = AF_INET;
+-	} else if (!conf->remote_ip.sa.sa_family) {
+-		conf->remote_ip.sa.sa_family = conf->saddr.sa.sa_family;
+-	} else if (!conf->saddr.sa.sa_family) {
+-		conf->saddr.sa.sa_family = conf->remote_ip.sa.sa_family;
+-	}
+-
+-	if (conf->saddr.sa.sa_family != conf->remote_ip.sa.sa_family) {
+-		NL_SET_ERR_MSG(extack,
+-			       "Local and remote address must be from the same family");
+-		return -EINVAL;
+-	}
+-
+-	if (vxlan_addr_multicast(&conf->saddr)) {
+-		NL_SET_ERR_MSG(extack, "Local address cannot be multicast");
+-		return -EINVAL;
+-	}
+-
+-	if (conf->saddr.sa.sa_family == AF_INET6) {
+-		if (!IS_ENABLED(CONFIG_IPV6)) {
+-			NL_SET_ERR_MSG(extack,
+-				       "IPv6 support not enabled in the kernel");
+-			return -EPFNOSUPPORT;
+-		}
+-		use_ipv6 = true;
+-		conf->flags |= VXLAN_F_IPV6;
+-
+-		if (!(conf->flags & VXLAN_F_COLLECT_METADATA)) {
+-			int local_type =
+-				ipv6_addr_type(&conf->saddr.sin6.sin6_addr);
+-			int remote_type =
+-				ipv6_addr_type(&conf->remote_ip.sin6.sin6_addr);
+-
+-			if (local_type & IPV6_ADDR_LINKLOCAL) {
+-				if (!(remote_type & IPV6_ADDR_LINKLOCAL) &&
+-				    (remote_type != IPV6_ADDR_ANY)) {
+-					NL_SET_ERR_MSG(extack,
+-						       "Invalid combination of local and remote address scopes");
+-					return -EINVAL;
+-				}
+-
+-				conf->flags |= VXLAN_F_IPV6_LINKLOCAL;
+-			} else {
+-				if (remote_type ==
+-				    (IPV6_ADDR_UNICAST | IPV6_ADDR_LINKLOCAL)) {
+-					NL_SET_ERR_MSG(extack,
+-						       "Invalid combination of local and remote address scopes");
+-					return -EINVAL;
+-				}
+-
+-				conf->flags &= ~VXLAN_F_IPV6_LINKLOCAL;
+-			}
+-		}
+-	}
+-
+-	if (conf->label && !use_ipv6) {
+-		NL_SET_ERR_MSG(extack,
+-			       "Label attribute only applies to IPv6 VXLAN devices");
+-		return -EINVAL;
+-	}
+-
+-	if (conf->remote_ifindex) {
+-		struct net_device *lowerdev;
+-
+-		lowerdev = __dev_get_by_index(src_net, conf->remote_ifindex);
+-		if (!lowerdev) {
+-			NL_SET_ERR_MSG(extack,
+-				       "Invalid local interface, device not found");
+-			return -ENODEV;
+-		}
+-
+-#if IS_ENABLED(CONFIG_IPV6)
+-		if (use_ipv6) {
+-			struct inet6_dev *idev = __in6_dev_get(lowerdev);
+-			if (idev && idev->cnf.disable_ipv6) {
+-				NL_SET_ERR_MSG(extack,
+-					       "IPv6 support disabled by administrator");
+-				return -EPERM;
+-			}
+-		}
+-#endif
+-
+-		*lower = lowerdev;
+-	} else {
+-		if (vxlan_addr_multicast(&conf->remote_ip)) {
+-			NL_SET_ERR_MSG(extack,
+-				       "Local interface required for multicast remote destination");
+-
+-			return -EINVAL;
+-		}
+-
+-#if IS_ENABLED(CONFIG_IPV6)
+-		if (conf->flags & VXLAN_F_IPV6_LINKLOCAL) {
+-			NL_SET_ERR_MSG(extack,
+-				       "Local interface required for link-local local/remote addresses");
+-			return -EINVAL;
+-		}
+-#endif
+-
+-		*lower = NULL;
+-	}
+-
+-	if (!conf->dst_port) {
+-		if (conf->flags & VXLAN_F_GPE)
+-			conf->dst_port = htons(4790); /* IANA VXLAN-GPE port */
+-		else
+-			conf->dst_port = htons(vxlan_port);
+-	}
+-
+-	if (!conf->age_interval)
+-		conf->age_interval = FDB_AGE_DEFAULT;
+-
+-	list_for_each_entry(tmp, &vn->vxlan_list, next) {
+-		if (tmp == old)
+-			continue;
+-
+-		if (tmp->cfg.vni != conf->vni)
+-			continue;
+-		if (tmp->cfg.dst_port != conf->dst_port)
+-			continue;
+-		if ((tmp->cfg.flags & (VXLAN_F_RCV_FLAGS | VXLAN_F_IPV6)) !=
+-		    (conf->flags & (VXLAN_F_RCV_FLAGS | VXLAN_F_IPV6)))
+-			continue;
+-
+-		if ((conf->flags & VXLAN_F_IPV6_LINKLOCAL) &&
+-		    tmp->cfg.remote_ifindex != conf->remote_ifindex)
+-			continue;
+-
+-		NL_SET_ERR_MSG(extack,
+-			       "A VXLAN device with the specified VNI already exists");
+-		return -EEXIST;
+-	}
+-
+-	return 0;
+-}
+-
+-static void vxlan_config_apply(struct net_device *dev,
+-			       struct vxlan_config *conf,
+-			       struct net_device *lowerdev,
+-			       struct net *src_net,
+-			       bool changelink)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct vxlan_rdst *dst = &vxlan->default_dst;
+-	unsigned short needed_headroom = ETH_HLEN;
+-	bool use_ipv6 = !!(conf->flags & VXLAN_F_IPV6);
+-	int max_mtu = ETH_MAX_MTU;
+-
+-	if (!changelink) {
+-		if (conf->flags & VXLAN_F_GPE)
+-			vxlan_raw_setup(dev);
+-		else
+-			vxlan_ether_setup(dev);
+-
+-		if (conf->mtu)
+-			dev->mtu = conf->mtu;
+-
+-		vxlan->net = src_net;
+-	}
+-
+-	dst->remote_vni = conf->vni;
+-
+-	memcpy(&dst->remote_ip, &conf->remote_ip, sizeof(conf->remote_ip));
+-
+-	if (lowerdev) {
+-		dst->remote_ifindex = conf->remote_ifindex;
+-
+-		dev->gso_max_size = lowerdev->gso_max_size;
+-		dev->gso_max_segs = lowerdev->gso_max_segs;
+-
+-		needed_headroom = lowerdev->hard_header_len;
+-		needed_headroom += lowerdev->needed_headroom;
+-
+-		dev->needed_tailroom = lowerdev->needed_tailroom;
+-
+-		max_mtu = lowerdev->mtu - (use_ipv6 ? VXLAN6_HEADROOM :
+-					   VXLAN_HEADROOM);
+-		if (max_mtu < ETH_MIN_MTU)
+-			max_mtu = ETH_MIN_MTU;
+-
+-		if (!changelink && !conf->mtu)
+-			dev->mtu = max_mtu;
+-	}
+-
+-	if (dev->mtu > max_mtu)
+-		dev->mtu = max_mtu;
+-
+-	if (use_ipv6 || conf->flags & VXLAN_F_COLLECT_METADATA)
+-		needed_headroom += VXLAN6_HEADROOM;
+-	else
+-		needed_headroom += VXLAN_HEADROOM;
+-	dev->needed_headroom = needed_headroom;
+-
+-	memcpy(&vxlan->cfg, conf, sizeof(*conf));
+-}
+-
+-static int vxlan_dev_configure(struct net *src_net, struct net_device *dev,
+-			       struct vxlan_config *conf, bool changelink,
+-			       struct netlink_ext_ack *extack)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct net_device *lowerdev;
+-	int ret;
+-
+-	ret = vxlan_config_validate(src_net, conf, &lowerdev, vxlan, extack);
+-	if (ret)
+-		return ret;
+-
+-	vxlan_config_apply(dev, conf, lowerdev, src_net, changelink);
+-
+-	return 0;
+-}
+-
+-static int __vxlan_dev_create(struct net *net, struct net_device *dev,
+-			      struct vxlan_config *conf,
+-			      struct netlink_ext_ack *extack)
+-{
+-	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct net_device *remote_dev = NULL;
+-	struct vxlan_fdb *f = NULL;
+-	bool unregister = false;
+-	struct vxlan_rdst *dst;
+-	int err;
+-
+-	dst = &vxlan->default_dst;
+-	err = vxlan_dev_configure(net, dev, conf, false, extack);
+-	if (err)
+-		return err;
+-
+-	dev->ethtool_ops = &vxlan_ethtool_ops;
+-
+-	/* create an fdb entry for a valid default destination */
+-	if (!vxlan_addr_any(&dst->remote_ip)) {
+-		err = vxlan_fdb_create(vxlan, all_zeros_mac,
+-				       &dst->remote_ip,
+-				       NUD_REACHABLE | NUD_PERMANENT,
+-				       vxlan->cfg.dst_port,
+-				       dst->remote_vni,
+-				       dst->remote_vni,
+-				       dst->remote_ifindex,
+-				       NTF_SELF, 0, &f, extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	err = register_netdevice(dev);
+-	if (err)
+-		goto errout;
+-	unregister = true;
+-
+-	if (dst->remote_ifindex) {
+-		remote_dev = __dev_get_by_index(net, dst->remote_ifindex);
+-		if (!remote_dev) {
+-			err = -ENODEV;
+-			goto errout;
+-		}
+-
+-		err = netdev_upper_dev_link(remote_dev, dev, extack);
+-		if (err)
+-			goto errout;
+-	}
+-
+-	err = rtnl_configure_link(dev, NULL);
+-	if (err < 0)
+-		goto unlink;
+-
+-	if (f) {
+-		vxlan_fdb_insert(vxlan, all_zeros_mac, dst->remote_vni, f);
+-
+-		/* notify default fdb entry */
+-		err = vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f),
+-				       RTM_NEWNEIGH, true, extack);
+-		if (err) {
+-			vxlan_fdb_destroy(vxlan, f, false, false);
+-			if (remote_dev)
+-				netdev_upper_dev_unlink(remote_dev, dev);
+-			goto unregister;
+-		}
+-	}
+-
+-	list_add(&vxlan->next, &vn->vxlan_list);
+-	if (remote_dev)
+-		dst->remote_dev = remote_dev;
+-	return 0;
+-unlink:
+-	if (remote_dev)
+-		netdev_upper_dev_unlink(remote_dev, dev);
+-errout:
+-	/* unregister_netdevice() destroys the default FDB entry with deletion
+-	 * notification. But the addition notification was not sent yet, so
+-	 * destroy the entry by hand here.
+-	 */
+-	if (f)
+-		__vxlan_fdb_free(f);
+-unregister:
+-	if (unregister)
+-		unregister_netdevice(dev);
+-	return err;
+-}
+-
+-/* Set/clear flags based on attribute */
+-static int vxlan_nl2flag(struct vxlan_config *conf, struct nlattr *tb[],
+-			  int attrtype, unsigned long mask, bool changelink,
+-			  bool changelink_supported,
+-			  struct netlink_ext_ack *extack)
+-{
+-	unsigned long flags;
+-
+-	if (!tb[attrtype])
+-		return 0;
+-
+-	if (changelink && !changelink_supported) {
+-		vxlan_flag_attr_error(attrtype, extack);
+-		return -EOPNOTSUPP;
+-	}
+-
+-	if (vxlan_policy[attrtype].type == NLA_FLAG)
+-		flags = conf->flags | mask;
+-	else if (nla_get_u8(tb[attrtype]))
+-		flags = conf->flags | mask;
+-	else
+-		flags = conf->flags & ~mask;
+-
+-	conf->flags = flags;
+-
+-	return 0;
+-}
+-
+-static int vxlan_nl2conf(struct nlattr *tb[], struct nlattr *data[],
+-			 struct net_device *dev, struct vxlan_config *conf,
+-			 bool changelink, struct netlink_ext_ack *extack)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	int err = 0;
+-
+-	memset(conf, 0, sizeof(*conf));
+-
+-	/* if changelink operation, start with old existing cfg */
+-	if (changelink)
+-		memcpy(conf, &vxlan->cfg, sizeof(*conf));
+-
+-	if (data[IFLA_VXLAN_ID]) {
+-		__be32 vni = cpu_to_be32(nla_get_u32(data[IFLA_VXLAN_ID]));
+-
+-		if (changelink && (vni != conf->vni)) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_ID], "Cannot change VNI");
+-			return -EOPNOTSUPP;
+-		}
+-		conf->vni = cpu_to_be32(nla_get_u32(data[IFLA_VXLAN_ID]));
+-	}
+-
+-	if (data[IFLA_VXLAN_GROUP]) {
+-		if (changelink && (conf->remote_ip.sa.sa_family != AF_INET)) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_GROUP], "New group address family does not match old group");
+-			return -EOPNOTSUPP;
+-		}
+-
+-		conf->remote_ip.sin.sin_addr.s_addr = nla_get_in_addr(data[IFLA_VXLAN_GROUP]);
+-		conf->remote_ip.sa.sa_family = AF_INET;
+-	} else if (data[IFLA_VXLAN_GROUP6]) {
+-		if (!IS_ENABLED(CONFIG_IPV6)) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_GROUP6], "IPv6 support not enabled in the kernel");
+-			return -EPFNOSUPPORT;
+-		}
+-
+-		if (changelink && (conf->remote_ip.sa.sa_family != AF_INET6)) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_GROUP6], "New group address family does not match old group");
+-			return -EOPNOTSUPP;
+-		}
+-
+-		conf->remote_ip.sin6.sin6_addr = nla_get_in6_addr(data[IFLA_VXLAN_GROUP6]);
+-		conf->remote_ip.sa.sa_family = AF_INET6;
+-	}
+-
+-	if (data[IFLA_VXLAN_LOCAL]) {
+-		if (changelink && (conf->saddr.sa.sa_family != AF_INET)) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_LOCAL], "New local address family does not match old");
+-			return -EOPNOTSUPP;
+-		}
+-
+-		conf->saddr.sin.sin_addr.s_addr = nla_get_in_addr(data[IFLA_VXLAN_LOCAL]);
+-		conf->saddr.sa.sa_family = AF_INET;
+-	} else if (data[IFLA_VXLAN_LOCAL6]) {
+-		if (!IS_ENABLED(CONFIG_IPV6)) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_LOCAL6], "IPv6 support not enabled in the kernel");
+-			return -EPFNOSUPPORT;
+-		}
+-
+-		if (changelink && (conf->saddr.sa.sa_family != AF_INET6)) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_LOCAL6], "New local address family does not match old");
+-			return -EOPNOTSUPP;
+-		}
+-
+-		/* TODO: respect scope id */
+-		conf->saddr.sin6.sin6_addr = nla_get_in6_addr(data[IFLA_VXLAN_LOCAL6]);
+-		conf->saddr.sa.sa_family = AF_INET6;
+-	}
+-
+-	if (data[IFLA_VXLAN_LINK])
+-		conf->remote_ifindex = nla_get_u32(data[IFLA_VXLAN_LINK]);
+-
+-	if (data[IFLA_VXLAN_TOS])
+-		conf->tos  = nla_get_u8(data[IFLA_VXLAN_TOS]);
+-
+-	if (data[IFLA_VXLAN_TTL])
+-		conf->ttl = nla_get_u8(data[IFLA_VXLAN_TTL]);
+-
+-	if (data[IFLA_VXLAN_TTL_INHERIT]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_TTL_INHERIT,
+-				    VXLAN_F_TTL_INHERIT, changelink, false,
+-				    extack);
+-		if (err)
+-			return err;
+-
+-	}
+-
+-	if (data[IFLA_VXLAN_LABEL])
+-		conf->label = nla_get_be32(data[IFLA_VXLAN_LABEL]) &
+-			     IPV6_FLOWLABEL_MASK;
+-
+-	if (data[IFLA_VXLAN_LEARNING]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_LEARNING,
+-				    VXLAN_F_LEARN, changelink, true,
+-				    extack);
+-		if (err)
+-			return err;
+-	} else if (!changelink) {
+-		/* default to learn on a new device */
+-		conf->flags |= VXLAN_F_LEARN;
+-	}
+-
+-	if (data[IFLA_VXLAN_AGEING])
+-		conf->age_interval = nla_get_u32(data[IFLA_VXLAN_AGEING]);
+-
+-	if (data[IFLA_VXLAN_PROXY]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_PROXY,
+-				    VXLAN_F_PROXY, changelink, false,
+-				    extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (data[IFLA_VXLAN_RSC]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_RSC,
+-				    VXLAN_F_RSC, changelink, false,
+-				    extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (data[IFLA_VXLAN_L2MISS]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_L2MISS,
+-				    VXLAN_F_L2MISS, changelink, false,
+-				    extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (data[IFLA_VXLAN_L3MISS]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_L3MISS,
+-				    VXLAN_F_L3MISS, changelink, false,
+-				    extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (data[IFLA_VXLAN_LIMIT]) {
+-		if (changelink) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_LIMIT],
+-					    "Cannot change limit");
+-			return -EOPNOTSUPP;
+-		}
+-		conf->addrmax = nla_get_u32(data[IFLA_VXLAN_LIMIT]);
+-	}
+-
+-	if (data[IFLA_VXLAN_COLLECT_METADATA]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_COLLECT_METADATA,
+-				    VXLAN_F_COLLECT_METADATA, changelink, false,
+-				    extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (data[IFLA_VXLAN_PORT_RANGE]) {
+-		if (!changelink) {
+-			const struct ifla_vxlan_port_range *p
+-				= nla_data(data[IFLA_VXLAN_PORT_RANGE]);
+-			conf->port_min = ntohs(p->low);
+-			conf->port_max = ntohs(p->high);
+-		} else {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_PORT_RANGE],
+-					    "Cannot change port range");
+-			return -EOPNOTSUPP;
+-		}
+-	}
+-
+-	if (data[IFLA_VXLAN_PORT]) {
+-		if (changelink) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_PORT],
+-					    "Cannot change port");
+-			return -EOPNOTSUPP;
+-		}
+-		conf->dst_port = nla_get_be16(data[IFLA_VXLAN_PORT]);
+-	}
+-
+-	if (data[IFLA_VXLAN_UDP_CSUM]) {
+-		if (changelink) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_UDP_CSUM],
+-					    "Cannot change UDP_CSUM flag");
+-			return -EOPNOTSUPP;
+-		}
+-		if (!nla_get_u8(data[IFLA_VXLAN_UDP_CSUM]))
+-			conf->flags |= VXLAN_F_UDP_ZERO_CSUM_TX;
+-	}
+-
+-	if (data[IFLA_VXLAN_UDP_ZERO_CSUM6_TX]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_UDP_ZERO_CSUM6_TX,
+-				    VXLAN_F_UDP_ZERO_CSUM6_TX, changelink,
+-				    false, extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (data[IFLA_VXLAN_UDP_ZERO_CSUM6_RX]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_UDP_ZERO_CSUM6_RX,
+-				    VXLAN_F_UDP_ZERO_CSUM6_RX, changelink,
+-				    false, extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (data[IFLA_VXLAN_REMCSUM_TX]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_REMCSUM_TX,
+-				    VXLAN_F_REMCSUM_TX, changelink, false,
+-				    extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (data[IFLA_VXLAN_REMCSUM_RX]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_REMCSUM_RX,
+-				    VXLAN_F_REMCSUM_RX, changelink, false,
+-				    extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (data[IFLA_VXLAN_GBP]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_GBP,
+-				    VXLAN_F_GBP, changelink, false, extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (data[IFLA_VXLAN_GPE]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_GPE,
+-				    VXLAN_F_GPE, changelink, false,
+-				    extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (data[IFLA_VXLAN_REMCSUM_NOPARTIAL]) {
+-		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_REMCSUM_NOPARTIAL,
+-				    VXLAN_F_REMCSUM_NOPARTIAL, changelink,
+-				    false, extack);
+-		if (err)
+-			return err;
+-	}
+-
+-	if (tb[IFLA_MTU]) {
+-		if (changelink) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_MTU],
+-					    "Cannot change mtu");
+-			return -EOPNOTSUPP;
+-		}
+-		conf->mtu = nla_get_u32(tb[IFLA_MTU]);
+-	}
+-
+-	if (data[IFLA_VXLAN_DF])
+-		conf->df = nla_get_u8(data[IFLA_VXLAN_DF]);
+-
+-	return 0;
+-}
+-
+-static int vxlan_newlink(struct net *src_net, struct net_device *dev,
+-			 struct nlattr *tb[], struct nlattr *data[],
+-			 struct netlink_ext_ack *extack)
+-{
+-	struct vxlan_config conf;
+-	int err;
+-
+-	err = vxlan_nl2conf(tb, data, dev, &conf, false, extack);
+-	if (err)
+-		return err;
+-
+-	return __vxlan_dev_create(src_net, dev, &conf, extack);
+-}
+-
+-static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
+-			    struct nlattr *data[],
+-			    struct netlink_ext_ack *extack)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct net_device *lowerdev;
+-	struct vxlan_config conf;
+-	struct vxlan_rdst *dst;
+-	int err;
+-
+-	dst = &vxlan->default_dst;
+-	err = vxlan_nl2conf(tb, data, dev, &conf, true, extack);
+-	if (err)
+-		return err;
+-
+-	err = vxlan_config_validate(vxlan->net, &conf, &lowerdev,
+-				    vxlan, extack);
+-	if (err)
+-		return err;
+-
+-	if (dst->remote_dev == lowerdev)
+-		lowerdev = NULL;
+-
+-	err = netdev_adjacent_change_prepare(dst->remote_dev, lowerdev, dev,
+-					     extack);
+-	if (err)
+-		return err;
+-
+-	/* handle default dst entry */
+-	if (!vxlan_addr_equal(&conf.remote_ip, &dst->remote_ip)) {
+-		u32 hash_index = fdb_head_index(vxlan, all_zeros_mac, conf.vni);
+-
+-		spin_lock_bh(&vxlan->hash_lock[hash_index]);
+-		if (!vxlan_addr_any(&conf.remote_ip)) {
+-			err = vxlan_fdb_update(vxlan, all_zeros_mac,
+-					       &conf.remote_ip,
+-					       NUD_REACHABLE | NUD_PERMANENT,
+-					       NLM_F_APPEND | NLM_F_CREATE,
+-					       vxlan->cfg.dst_port,
+-					       conf.vni, conf.vni,
+-					       conf.remote_ifindex,
+-					       NTF_SELF, 0, true, extack);
+-			if (err) {
+-				spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+-				netdev_adjacent_change_abort(dst->remote_dev,
+-							     lowerdev, dev);
+-				return err;
+-			}
+-		}
+-		if (!vxlan_addr_any(&dst->remote_ip))
+-			__vxlan_fdb_delete(vxlan, all_zeros_mac,
+-					   dst->remote_ip,
+-					   vxlan->cfg.dst_port,
+-					   dst->remote_vni,
+-					   dst->remote_vni,
+-					   dst->remote_ifindex,
+-					   true);
+-		spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+-	}
+-
+-	if (conf.age_interval != vxlan->cfg.age_interval)
+-		mod_timer(&vxlan->age_timer, jiffies);
+-
+-	netdev_adjacent_change_commit(dst->remote_dev, lowerdev, dev);
+-	if (lowerdev && lowerdev != dst->remote_dev)
+-		dst->remote_dev = lowerdev;
+-	vxlan_config_apply(dev, &conf, lowerdev, vxlan->net, true);
+-	return 0;
+-}
+-
+-static void vxlan_dellink(struct net_device *dev, struct list_head *head)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-
+-	vxlan_flush(vxlan, true);
+-
+-	list_del(&vxlan->next);
+-	unregister_netdevice_queue(dev, head);
+-	if (vxlan->default_dst.remote_dev)
+-		netdev_upper_dev_unlink(vxlan->default_dst.remote_dev, dev);
+-}
+-
+-static size_t vxlan_get_size(const struct net_device *dev)
+-{
+-
+-	return nla_total_size(sizeof(__u32)) +	/* IFLA_VXLAN_ID */
+-		nla_total_size(sizeof(struct in6_addr)) + /* IFLA_VXLAN_GROUP{6} */
+-		nla_total_size(sizeof(__u32)) +	/* IFLA_VXLAN_LINK */
+-		nla_total_size(sizeof(struct in6_addr)) + /* IFLA_VXLAN_LOCAL{6} */
+-		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TTL */
+-		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TTL_INHERIT */
+-		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TOS */
+-		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_DF */
+-		nla_total_size(sizeof(__be32)) + /* IFLA_VXLAN_LABEL */
+-		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_LEARNING */
+-		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_PROXY */
+-		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_RSC */
+-		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_L2MISS */
+-		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_L3MISS */
+-		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_COLLECT_METADATA */
+-		nla_total_size(sizeof(__u32)) +	/* IFLA_VXLAN_AGEING */
+-		nla_total_size(sizeof(__u32)) +	/* IFLA_VXLAN_LIMIT */
+-		nla_total_size(sizeof(struct ifla_vxlan_port_range)) +
+-		nla_total_size(sizeof(__be16)) + /* IFLA_VXLAN_PORT */
+-		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_UDP_CSUM */
+-		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_UDP_ZERO_CSUM6_TX */
+-		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_UDP_ZERO_CSUM6_RX */
+-		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_TX */
+-		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_RX */
+-		0;
+-}
+-
+-static int vxlan_fill_info(struct sk_buff *skb, const struct net_device *dev)
+-{
+-	const struct vxlan_dev *vxlan = netdev_priv(dev);
+-	const struct vxlan_rdst *dst = &vxlan->default_dst;
+-	struct ifla_vxlan_port_range ports = {
+-		.low =  htons(vxlan->cfg.port_min),
+-		.high = htons(vxlan->cfg.port_max),
+-	};
+-
+-	if (nla_put_u32(skb, IFLA_VXLAN_ID, be32_to_cpu(dst->remote_vni)))
+-		goto nla_put_failure;
+-
+-	if (!vxlan_addr_any(&dst->remote_ip)) {
+-		if (dst->remote_ip.sa.sa_family == AF_INET) {
+-			if (nla_put_in_addr(skb, IFLA_VXLAN_GROUP,
+-					    dst->remote_ip.sin.sin_addr.s_addr))
+-				goto nla_put_failure;
+-#if IS_ENABLED(CONFIG_IPV6)
+-		} else {
+-			if (nla_put_in6_addr(skb, IFLA_VXLAN_GROUP6,
+-					     &dst->remote_ip.sin6.sin6_addr))
+-				goto nla_put_failure;
+-#endif
+-		}
+-	}
+-
+-	if (dst->remote_ifindex && nla_put_u32(skb, IFLA_VXLAN_LINK, dst->remote_ifindex))
+-		goto nla_put_failure;
+-
+-	if (!vxlan_addr_any(&vxlan->cfg.saddr)) {
+-		if (vxlan->cfg.saddr.sa.sa_family == AF_INET) {
+-			if (nla_put_in_addr(skb, IFLA_VXLAN_LOCAL,
+-					    vxlan->cfg.saddr.sin.sin_addr.s_addr))
+-				goto nla_put_failure;
+-#if IS_ENABLED(CONFIG_IPV6)
+-		} else {
+-			if (nla_put_in6_addr(skb, IFLA_VXLAN_LOCAL6,
+-					     &vxlan->cfg.saddr.sin6.sin6_addr))
+-				goto nla_put_failure;
+-#endif
+-		}
+-	}
+-
+-	if (nla_put_u8(skb, IFLA_VXLAN_TTL, vxlan->cfg.ttl) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_TTL_INHERIT,
+-		       !!(vxlan->cfg.flags & VXLAN_F_TTL_INHERIT)) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_TOS, vxlan->cfg.tos) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_DF, vxlan->cfg.df) ||
+-	    nla_put_be32(skb, IFLA_VXLAN_LABEL, vxlan->cfg.label) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_LEARNING,
+-		       !!(vxlan->cfg.flags & VXLAN_F_LEARN)) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_PROXY,
+-		       !!(vxlan->cfg.flags & VXLAN_F_PROXY)) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_RSC,
+-		       !!(vxlan->cfg.flags & VXLAN_F_RSC)) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_L2MISS,
+-		       !!(vxlan->cfg.flags & VXLAN_F_L2MISS)) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_L3MISS,
+-		       !!(vxlan->cfg.flags & VXLAN_F_L3MISS)) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_COLLECT_METADATA,
+-		       !!(vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA)) ||
+-	    nla_put_u32(skb, IFLA_VXLAN_AGEING, vxlan->cfg.age_interval) ||
+-	    nla_put_u32(skb, IFLA_VXLAN_LIMIT, vxlan->cfg.addrmax) ||
+-	    nla_put_be16(skb, IFLA_VXLAN_PORT, vxlan->cfg.dst_port) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_UDP_CSUM,
+-		       !(vxlan->cfg.flags & VXLAN_F_UDP_ZERO_CSUM_TX)) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_UDP_ZERO_CSUM6_TX,
+-		       !!(vxlan->cfg.flags & VXLAN_F_UDP_ZERO_CSUM6_TX)) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_UDP_ZERO_CSUM6_RX,
+-		       !!(vxlan->cfg.flags & VXLAN_F_UDP_ZERO_CSUM6_RX)) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_REMCSUM_TX,
+-		       !!(vxlan->cfg.flags & VXLAN_F_REMCSUM_TX)) ||
+-	    nla_put_u8(skb, IFLA_VXLAN_REMCSUM_RX,
+-		       !!(vxlan->cfg.flags & VXLAN_F_REMCSUM_RX)))
+-		goto nla_put_failure;
+-
+-	if (nla_put(skb, IFLA_VXLAN_PORT_RANGE, sizeof(ports), &ports))
+-		goto nla_put_failure;
+-
+-	if (vxlan->cfg.flags & VXLAN_F_GBP &&
+-	    nla_put_flag(skb, IFLA_VXLAN_GBP))
+-		goto nla_put_failure;
+-
+-	if (vxlan->cfg.flags & VXLAN_F_GPE &&
+-	    nla_put_flag(skb, IFLA_VXLAN_GPE))
+-		goto nla_put_failure;
+-
+-	if (vxlan->cfg.flags & VXLAN_F_REMCSUM_NOPARTIAL &&
+-	    nla_put_flag(skb, IFLA_VXLAN_REMCSUM_NOPARTIAL))
+-		goto nla_put_failure;
+-
+-	return 0;
+-
+-nla_put_failure:
+-	return -EMSGSIZE;
+-}
+-
+-static struct net *vxlan_get_link_net(const struct net_device *dev)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-
+-	return vxlan->net;
+-}
+-
+-static struct rtnl_link_ops vxlan_link_ops __read_mostly = {
+-	.kind		= "vxlan",
+-	.maxtype	= IFLA_VXLAN_MAX,
+-	.policy		= vxlan_policy,
+-	.priv_size	= sizeof(struct vxlan_dev),
+-	.setup		= vxlan_setup,
+-	.validate	= vxlan_validate,
+-	.newlink	= vxlan_newlink,
+-	.changelink	= vxlan_changelink,
+-	.dellink	= vxlan_dellink,
+-	.get_size	= vxlan_get_size,
+-	.fill_info	= vxlan_fill_info,
+-	.get_link_net	= vxlan_get_link_net,
+-};
+-
+-struct net_device *vxlan_dev_create(struct net *net, const char *name,
+-				    u8 name_assign_type,
+-				    struct vxlan_config *conf)
+-{
+-	struct nlattr *tb[IFLA_MAX + 1];
+-	struct net_device *dev;
+-	int err;
+-
+-	memset(&tb, 0, sizeof(tb));
+-
+-	dev = rtnl_create_link(net, name, name_assign_type,
+-			       &vxlan_link_ops, tb, NULL);
+-	if (IS_ERR(dev))
+-		return dev;
+-
+-	err = __vxlan_dev_create(net, dev, conf, NULL);
+-	if (err < 0) {
+-		free_netdev(dev);
+-		return ERR_PTR(err);
+-	}
+-
+-	err = rtnl_configure_link(dev, NULL);
+-	if (err < 0) {
+-		LIST_HEAD(list_kill);
+-
+-		vxlan_dellink(dev, &list_kill);
+-		unregister_netdevice_many(&list_kill);
+-		return ERR_PTR(err);
+-	}
+-
+-	return dev;
+-}
+-EXPORT_SYMBOL_GPL(vxlan_dev_create);
+-
+-static void vxlan_handle_lowerdev_unregister(struct vxlan_net *vn,
+-					     struct net_device *dev)
+-{
+-	struct vxlan_dev *vxlan, *next;
+-	LIST_HEAD(list_kill);
+-
+-	list_for_each_entry_safe(vxlan, next, &vn->vxlan_list, next) {
+-		struct vxlan_rdst *dst = &vxlan->default_dst;
+-
+-		/* In case we created vxlan device with carrier
+-		 * and we loose the carrier due to module unload
+-		 * we also need to remove vxlan device. In other
+-		 * cases, it's not necessary and remote_ifindex
+-		 * is 0 here, so no matches.
+-		 */
+-		if (dst->remote_ifindex == dev->ifindex)
+-			vxlan_dellink(vxlan->dev, &list_kill);
+-	}
+-
+-	unregister_netdevice_many(&list_kill);
+-}
+-
+-static int vxlan_netdevice_event(struct notifier_block *unused,
+-				 unsigned long event, void *ptr)
+-{
+-	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+-	struct vxlan_net *vn = net_generic(dev_net(dev), vxlan_net_id);
+-
+-	if (event == NETDEV_UNREGISTER) {
+-		if (!dev->udp_tunnel_nic_info)
+-			vxlan_offload_rx_ports(dev, false);
+-		vxlan_handle_lowerdev_unregister(vn, dev);
+-	} else if (event == NETDEV_REGISTER) {
+-		if (!dev->udp_tunnel_nic_info)
+-			vxlan_offload_rx_ports(dev, true);
+-	} else if (event == NETDEV_UDP_TUNNEL_PUSH_INFO ||
+-		   event == NETDEV_UDP_TUNNEL_DROP_INFO) {
+-		vxlan_offload_rx_ports(dev, event == NETDEV_UDP_TUNNEL_PUSH_INFO);
+-	}
+-
+-	return NOTIFY_DONE;
+-}
+-
+-static struct notifier_block vxlan_notifier_block __read_mostly = {
+-	.notifier_call = vxlan_netdevice_event,
+-};
+-
+-static void
+-vxlan_fdb_offloaded_set(struct net_device *dev,
+-			struct switchdev_notifier_vxlan_fdb_info *fdb_info)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct vxlan_rdst *rdst;
+-	struct vxlan_fdb *f;
+-	u32 hash_index;
+-
+-	hash_index = fdb_head_index(vxlan, fdb_info->eth_addr, fdb_info->vni);
+-
+-	spin_lock_bh(&vxlan->hash_lock[hash_index]);
+-
+-	f = vxlan_find_mac(vxlan, fdb_info->eth_addr, fdb_info->vni);
+-	if (!f)
+-		goto out;
+-
+-	rdst = vxlan_fdb_find_rdst(f, &fdb_info->remote_ip,
+-				   fdb_info->remote_port,
+-				   fdb_info->remote_vni,
+-				   fdb_info->remote_ifindex);
+-	if (!rdst)
+-		goto out;
+-
+-	rdst->offloaded = fdb_info->offloaded;
+-
+-out:
+-	spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+-}
+-
+-static int
+-vxlan_fdb_external_learn_add(struct net_device *dev,
+-			     struct switchdev_notifier_vxlan_fdb_info *fdb_info)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct netlink_ext_ack *extack;
+-	u32 hash_index;
+-	int err;
+-
+-	hash_index = fdb_head_index(vxlan, fdb_info->eth_addr, fdb_info->vni);
+-	extack = switchdev_notifier_info_to_extack(&fdb_info->info);
+-
+-	spin_lock_bh(&vxlan->hash_lock[hash_index]);
+-	err = vxlan_fdb_update(vxlan, fdb_info->eth_addr, &fdb_info->remote_ip,
+-			       NUD_REACHABLE,
+-			       NLM_F_CREATE | NLM_F_REPLACE,
+-			       fdb_info->remote_port,
+-			       fdb_info->vni,
+-			       fdb_info->remote_vni,
+-			       fdb_info->remote_ifindex,
+-			       NTF_USE | NTF_SELF | NTF_EXT_LEARNED,
+-			       0, false, extack);
+-	spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+-
+-	return err;
+-}
+-
+-static int
+-vxlan_fdb_external_learn_del(struct net_device *dev,
+-			     struct switchdev_notifier_vxlan_fdb_info *fdb_info)
+-{
+-	struct vxlan_dev *vxlan = netdev_priv(dev);
+-	struct vxlan_fdb *f;
+-	u32 hash_index;
+-	int err = 0;
+-
+-	hash_index = fdb_head_index(vxlan, fdb_info->eth_addr, fdb_info->vni);
+-	spin_lock_bh(&vxlan->hash_lock[hash_index]);
+-
+-	f = vxlan_find_mac(vxlan, fdb_info->eth_addr, fdb_info->vni);
+-	if (!f)
+-		err = -ENOENT;
+-	else if (f->flags & NTF_EXT_LEARNED)
+-		err = __vxlan_fdb_delete(vxlan, fdb_info->eth_addr,
+-					 fdb_info->remote_ip,
+-					 fdb_info->remote_port,
+-					 fdb_info->vni,
+-					 fdb_info->remote_vni,
+-					 fdb_info->remote_ifindex,
+-					 false);
+-
+-	spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+-
+-	return err;
+-}
+-
+-static int vxlan_switchdev_event(struct notifier_block *unused,
+-				 unsigned long event, void *ptr)
+-{
+-	struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
+-	struct switchdev_notifier_vxlan_fdb_info *fdb_info;
+-	int err = 0;
+-
+-	switch (event) {
+-	case SWITCHDEV_VXLAN_FDB_OFFLOADED:
+-		vxlan_fdb_offloaded_set(dev, ptr);
+-		break;
+-	case SWITCHDEV_VXLAN_FDB_ADD_TO_BRIDGE:
+-		fdb_info = ptr;
+-		err = vxlan_fdb_external_learn_add(dev, fdb_info);
+-		if (err) {
+-			err = notifier_from_errno(err);
+-			break;
+-		}
+-		fdb_info->offloaded = true;
+-		vxlan_fdb_offloaded_set(dev, fdb_info);
+-		break;
+-	case SWITCHDEV_VXLAN_FDB_DEL_TO_BRIDGE:
+-		fdb_info = ptr;
+-		err = vxlan_fdb_external_learn_del(dev, fdb_info);
+-		if (err) {
+-			err = notifier_from_errno(err);
+-			break;
+-		}
+-		fdb_info->offloaded = false;
+-		vxlan_fdb_offloaded_set(dev, fdb_info);
+-		break;
+-	}
+-
+-	return err;
+-}
+-
+-static struct notifier_block vxlan_switchdev_notifier_block __read_mostly = {
+-	.notifier_call = vxlan_switchdev_event,
+-};
+-
+-static void vxlan_fdb_nh_flush(struct nexthop *nh)
+-{
+-	struct vxlan_fdb *fdb;
+-	struct vxlan_dev *vxlan;
+-	u32 hash_index;
+-
+-	rcu_read_lock();
+-	list_for_each_entry_rcu(fdb, &nh->fdb_list, nh_list) {
+-		vxlan = rcu_dereference(fdb->vdev);
+-		WARN_ON(!vxlan);
+-		hash_index = fdb_head_index(vxlan, fdb->eth_addr,
+-					    vxlan->default_dst.remote_vni);
+-		spin_lock_bh(&vxlan->hash_lock[hash_index]);
+-		if (!hlist_unhashed(&fdb->hlist))
+-			vxlan_fdb_destroy(vxlan, fdb, false, false);
+-		spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+-	}
+-	rcu_read_unlock();
+-}
+-
+-static int vxlan_nexthop_event(struct notifier_block *nb,
+-			       unsigned long event, void *ptr)
+-{
+-	struct nexthop *nh = ptr;
+-
+-	if (!nh || event != NEXTHOP_EVENT_DEL)
+-		return NOTIFY_DONE;
+-
+-	vxlan_fdb_nh_flush(nh);
+-
+-	return NOTIFY_DONE;
+-}
+-
+-static struct notifier_block vxlan_nexthop_notifier_block __read_mostly = {
+-	.notifier_call = vxlan_nexthop_event,
+-};
+-
+-static __net_init int vxlan_init_net(struct net *net)
+-{
+-	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
+-	unsigned int h;
+-
+-	INIT_LIST_HEAD(&vn->vxlan_list);
+-	spin_lock_init(&vn->sock_lock);
+-
+-	for (h = 0; h < PORT_HASH_SIZE; ++h)
+-		INIT_HLIST_HEAD(&vn->sock_list[h]);
+-
+-	return register_nexthop_notifier(net, &vxlan_nexthop_notifier_block);
+-}
+-
+-static void vxlan_destroy_tunnels(struct net *net, struct list_head *head)
+-{
+-	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
+-	struct vxlan_dev *vxlan, *next;
+-	struct net_device *dev, *aux;
+-
+-	for_each_netdev_safe(net, dev, aux)
+-		if (dev->rtnl_link_ops == &vxlan_link_ops)
+-			unregister_netdevice_queue(dev, head);
+-
+-	list_for_each_entry_safe(vxlan, next, &vn->vxlan_list, next) {
+-		/* If vxlan->dev is in the same netns, it has already been added
+-		 * to the list by the previous loop.
+-		 */
+-		if (!net_eq(dev_net(vxlan->dev), net))
+-			unregister_netdevice_queue(vxlan->dev, head);
+-	}
+-
+-}
+-
+-static void __net_exit vxlan_exit_batch_net(struct list_head *net_list)
+-{
+-	struct net *net;
+-	LIST_HEAD(list);
+-	unsigned int h;
+-
+-	rtnl_lock();
+-	list_for_each_entry(net, net_list, exit_list)
+-		unregister_nexthop_notifier(net, &vxlan_nexthop_notifier_block);
+-	list_for_each_entry(net, net_list, exit_list)
+-		vxlan_destroy_tunnels(net, &list);
+-
+-	unregister_netdevice_many(&list);
+-	rtnl_unlock();
+-
+-	list_for_each_entry(net, net_list, exit_list) {
+-		struct vxlan_net *vn = net_generic(net, vxlan_net_id);
+-
+-		for (h = 0; h < PORT_HASH_SIZE; ++h)
+-			WARN_ON_ONCE(!hlist_empty(&vn->sock_list[h]));
+-	}
+-}
+-
+-static struct pernet_operations vxlan_net_ops = {
+-	.init = vxlan_init_net,
+-	.exit_batch = vxlan_exit_batch_net,
+-	.id   = &vxlan_net_id,
+-	.size = sizeof(struct vxlan_net),
+-};
+-
+-static int __init vxlan_init_module(void)
+-{
+-	int rc;
+-
+-	get_random_bytes(&vxlan_salt, sizeof(vxlan_salt));
+-
+-	rc = register_pernet_subsys(&vxlan_net_ops);
+-	if (rc)
+-		goto out1;
+-
+-	rc = register_netdevice_notifier(&vxlan_notifier_block);
+-	if (rc)
+-		goto out2;
+-
+-	rc = register_switchdev_notifier(&vxlan_switchdev_notifier_block);
+-	if (rc)
+-		goto out3;
+-
+-	rc = rtnl_link_register(&vxlan_link_ops);
+-	if (rc)
+-		goto out4;
+-
+-	return 0;
+-out4:
+-	unregister_switchdev_notifier(&vxlan_switchdev_notifier_block);
+-out3:
+-	unregister_netdevice_notifier(&vxlan_notifier_block);
+-out2:
+-	unregister_pernet_subsys(&vxlan_net_ops);
+-out1:
+-	return rc;
+-}
+-late_initcall(vxlan_init_module);
+-
+-static void __exit vxlan_cleanup_module(void)
+-{
+-	rtnl_link_unregister(&vxlan_link_ops);
+-	unregister_switchdev_notifier(&vxlan_switchdev_notifier_block);
+-	unregister_netdevice_notifier(&vxlan_notifier_block);
+-	unregister_pernet_subsys(&vxlan_net_ops);
+-	/* rcu_barrier() is called by netns */
+-}
+-module_exit(vxlan_cleanup_module);
+-
+-MODULE_LICENSE("GPL");
+-MODULE_VERSION(VXLAN_VERSION);
+-MODULE_AUTHOR("Stephen Hemminger <stephen@networkplumber.org>");
+-MODULE_DESCRIPTION("Driver for VXLAN encapsulated traffic");
+-MODULE_ALIAS_RTNL_LINK("vxlan");
+diff --git a/drivers/net/vxlan/Makefile b/drivers/net/vxlan/Makefile
+new file mode 100644
+index 0000000000000..5672661335933
+--- /dev/null
++++ b/drivers/net/vxlan/Makefile
+@@ -0,0 +1,7 @@
++#
++# Makefile for the vxlan driver
++#
++
++obj-$(CONFIG_VXLAN) += vxlan.o
++
++vxlan-objs := vxlan_core.o
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+new file mode 100644
+index 0000000000000..1ac9de69bde65
+--- /dev/null
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -0,0 +1,4826 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * VXLAN: Virtual eXtensible Local Area Network
++ *
++ * Copyright (c) 2012-2013 Vyatta Inc.
++ */
++
++#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/errno.h>
++#include <linux/slab.h>
++#include <linux/udp.h>
++#include <linux/igmp.h>
++#include <linux/if_ether.h>
++#include <linux/ethtool.h>
++#include <net/arp.h>
++#include <net/ndisc.h>
++#include <net/ipv6_stubs.h>
++#include <net/ip.h>
++#include <net/icmp.h>
++#include <net/rtnetlink.h>
++#include <net/inet_ecn.h>
++#include <net/net_namespace.h>
++#include <net/netns/generic.h>
++#include <net/tun_proto.h>
++#include <net/vxlan.h>
++#include <net/nexthop.h>
++
++#if IS_ENABLED(CONFIG_IPV6)
++#include <net/ip6_tunnel.h>
++#include <net/ip6_checksum.h>
++#endif
++
++#define VXLAN_VERSION	"0.1"
++
++#define PORT_HASH_BITS	8
++#define PORT_HASH_SIZE  (1<<PORT_HASH_BITS)
++#define FDB_AGE_DEFAULT 300 /* 5 min */
++#define FDB_AGE_INTERVAL (10 * HZ)	/* rescan interval */
++
++/* UDP port for VXLAN traffic.
++ * The IANA assigned port is 4789, but the Linux default is 8472
++ * for compatibility with early adopters.
++ */
++static unsigned short vxlan_port __read_mostly = 8472;
++module_param_named(udp_port, vxlan_port, ushort, 0444);
++MODULE_PARM_DESC(udp_port, "Destination UDP port");
++
++static bool log_ecn_error = true;
++module_param(log_ecn_error, bool, 0644);
++MODULE_PARM_DESC(log_ecn_error, "Log packets received with corrupted ECN");
++
++static unsigned int vxlan_net_id;
++static struct rtnl_link_ops vxlan_link_ops;
++
++static const u8 all_zeros_mac[ETH_ALEN + 2];
++
++static int vxlan_sock_add(struct vxlan_dev *vxlan);
++
++static void vxlan_vs_del_dev(struct vxlan_dev *vxlan);
++
++/* per-network namespace private data for this module */
++struct vxlan_net {
++	struct list_head  vxlan_list;
++	struct hlist_head sock_list[PORT_HASH_SIZE];
++	spinlock_t	  sock_lock;
++};
++
++/* Forwarding table entry */
++struct vxlan_fdb {
++	struct hlist_node hlist;	/* linked list of entries */
++	struct rcu_head	  rcu;
++	unsigned long	  updated;	/* jiffies */
++	unsigned long	  used;
++	struct list_head  remotes;
++	u8		  eth_addr[ETH_ALEN];
++	u16		  state;	/* see ndm_state */
++	__be32		  vni;
++	u16		  flags;	/* see ndm_flags and below */
++	struct list_head  nh_list;
++	struct nexthop __rcu *nh;
++	struct vxlan_dev  __rcu *vdev;
++};
++
++#define NTF_VXLAN_ADDED_BY_USER 0x100
++
++/* salt for hash table */
++static u32 vxlan_salt __read_mostly;
++
++static inline bool vxlan_collect_metadata(struct vxlan_sock *vs)
++{
++	return vs->flags & VXLAN_F_COLLECT_METADATA ||
++	       ip_tunnel_collect_metadata();
++}
++
++#if IS_ENABLED(CONFIG_IPV6)
++static inline
++bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
++{
++	if (a->sa.sa_family != b->sa.sa_family)
++		return false;
++	if (a->sa.sa_family == AF_INET6)
++		return ipv6_addr_equal(&a->sin6.sin6_addr, &b->sin6.sin6_addr);
++	else
++		return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
++}
++
++static int vxlan_nla_get_addr(union vxlan_addr *ip, struct nlattr *nla)
++{
++	if (nla_len(nla) >= sizeof(struct in6_addr)) {
++		ip->sin6.sin6_addr = nla_get_in6_addr(nla);
++		ip->sa.sa_family = AF_INET6;
++		return 0;
++	} else if (nla_len(nla) >= sizeof(__be32)) {
++		ip->sin.sin_addr.s_addr = nla_get_in_addr(nla);
++		ip->sa.sa_family = AF_INET;
++		return 0;
++	} else {
++		return -EAFNOSUPPORT;
++	}
++}
++
++static int vxlan_nla_put_addr(struct sk_buff *skb, int attr,
++			      const union vxlan_addr *ip)
++{
++	if (ip->sa.sa_family == AF_INET6)
++		return nla_put_in6_addr(skb, attr, &ip->sin6.sin6_addr);
++	else
++		return nla_put_in_addr(skb, attr, ip->sin.sin_addr.s_addr);
++}
++
++#else /* !CONFIG_IPV6 */
++
++static inline
++bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
++{
++	return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
++}
++
++static int vxlan_nla_get_addr(union vxlan_addr *ip, struct nlattr *nla)
++{
++	if (nla_len(nla) >= sizeof(struct in6_addr)) {
++		return -EAFNOSUPPORT;
++	} else if (nla_len(nla) >= sizeof(__be32)) {
++		ip->sin.sin_addr.s_addr = nla_get_in_addr(nla);
++		ip->sa.sa_family = AF_INET;
++		return 0;
++	} else {
++		return -EAFNOSUPPORT;
++	}
++}
++
++static int vxlan_nla_put_addr(struct sk_buff *skb, int attr,
++			      const union vxlan_addr *ip)
++{
++	return nla_put_in_addr(skb, attr, ip->sin.sin_addr.s_addr);
++}
++#endif
++
++/* Virtual Network hash table head */
++static inline struct hlist_head *vni_head(struct vxlan_sock *vs, __be32 vni)
++{
++	return &vs->vni_list[hash_32((__force u32)vni, VNI_HASH_BITS)];
++}
++
++/* Socket hash table head */
++static inline struct hlist_head *vs_head(struct net *net, __be16 port)
++{
++	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
++
++	return &vn->sock_list[hash_32(ntohs(port), PORT_HASH_BITS)];
++}
++
++/* First remote destination for a forwarding entry.
++ * Guaranteed to be non-NULL because remotes are never deleted.
++ */
++static inline struct vxlan_rdst *first_remote_rcu(struct vxlan_fdb *fdb)
++{
++	if (rcu_access_pointer(fdb->nh))
++		return NULL;
++	return list_entry_rcu(fdb->remotes.next, struct vxlan_rdst, list);
++}
++
++static inline struct vxlan_rdst *first_remote_rtnl(struct vxlan_fdb *fdb)
++{
++	if (rcu_access_pointer(fdb->nh))
++		return NULL;
++	return list_first_entry(&fdb->remotes, struct vxlan_rdst, list);
++}
++
++/* Find VXLAN socket based on network namespace, address family, UDP port,
++ * enabled unshareable flags and socket device binding (see l3mdev with
++ * non-default VRF).
++ */
++static struct vxlan_sock *vxlan_find_sock(struct net *net, sa_family_t family,
++					  __be16 port, u32 flags, int ifindex)
++{
++	struct vxlan_sock *vs;
++
++	flags &= VXLAN_F_RCV_FLAGS;
++
++	hlist_for_each_entry_rcu(vs, vs_head(net, port), hlist) {
++		if (inet_sk(vs->sock->sk)->inet_sport == port &&
++		    vxlan_get_sk_family(vs) == family &&
++		    vs->flags == flags &&
++		    vs->sock->sk->sk_bound_dev_if == ifindex)
++			return vs;
++	}
++	return NULL;
++}
++
++static struct vxlan_dev *vxlan_vs_find_vni(struct vxlan_sock *vs, int ifindex,
++					   __be32 vni)
++{
++	struct vxlan_dev_node *node;
++
++	/* For flow based devices, map all packets to VNI 0 */
++	if (vs->flags & VXLAN_F_COLLECT_METADATA)
++		vni = 0;
++
++	hlist_for_each_entry_rcu(node, vni_head(vs, vni), hlist) {
++		if (node->vxlan->default_dst.remote_vni != vni)
++			continue;
++
++		if (IS_ENABLED(CONFIG_IPV6)) {
++			const struct vxlan_config *cfg = &node->vxlan->cfg;
++
++			if ((cfg->flags & VXLAN_F_IPV6_LINKLOCAL) &&
++			    cfg->remote_ifindex != ifindex)
++				continue;
++		}
++
++		return node->vxlan;
++	}
++
++	return NULL;
++}
++
++/* Look up VNI in a per net namespace table */
++static struct vxlan_dev *vxlan_find_vni(struct net *net, int ifindex,
++					__be32 vni, sa_family_t family,
++					__be16 port, u32 flags)
++{
++	struct vxlan_sock *vs;
++
++	vs = vxlan_find_sock(net, family, port, flags, ifindex);
++	if (!vs)
++		return NULL;
++
++	return vxlan_vs_find_vni(vs, ifindex, vni);
++}
++
++/* Fill in neighbour message in skbuff. */
++static int vxlan_fdb_info(struct sk_buff *skb, struct vxlan_dev *vxlan,
++			  const struct vxlan_fdb *fdb,
++			  u32 portid, u32 seq, int type, unsigned int flags,
++			  const struct vxlan_rdst *rdst)
++{
++	unsigned long now = jiffies;
++	struct nda_cacheinfo ci;
++	bool send_ip, send_eth;
++	struct nlmsghdr *nlh;
++	struct nexthop *nh;
++	struct ndmsg *ndm;
++	int nh_family;
++	u32 nh_id;
++
++	nlh = nlmsg_put(skb, portid, seq, type, sizeof(*ndm), flags);
++	if (nlh == NULL)
++		return -EMSGSIZE;
++
++	ndm = nlmsg_data(nlh);
++	memset(ndm, 0, sizeof(*ndm));
++
++	send_eth = send_ip = true;
++
++	rcu_read_lock();
++	nh = rcu_dereference(fdb->nh);
++	if (nh) {
++		nh_family = nexthop_get_family(nh);
++		nh_id = nh->id;
++	}
++	rcu_read_unlock();
++
++	if (type == RTM_GETNEIGH) {
++		if (rdst) {
++			send_ip = !vxlan_addr_any(&rdst->remote_ip);
++			ndm->ndm_family = send_ip ? rdst->remote_ip.sa.sa_family : AF_INET;
++		} else if (nh) {
++			ndm->ndm_family = nh_family;
++		}
++		send_eth = !is_zero_ether_addr(fdb->eth_addr);
++	} else
++		ndm->ndm_family	= AF_BRIDGE;
++	ndm->ndm_state = fdb->state;
++	ndm->ndm_ifindex = vxlan->dev->ifindex;
++	ndm->ndm_flags = fdb->flags;
++	if (rdst && rdst->offloaded)
++		ndm->ndm_flags |= NTF_OFFLOADED;
++	ndm->ndm_type = RTN_UNICAST;
++
++	if (!net_eq(dev_net(vxlan->dev), vxlan->net) &&
++	    nla_put_s32(skb, NDA_LINK_NETNSID,
++			peernet2id(dev_net(vxlan->dev), vxlan->net)))
++		goto nla_put_failure;
++
++	if (send_eth && nla_put(skb, NDA_LLADDR, ETH_ALEN, &fdb->eth_addr))
++		goto nla_put_failure;
++	if (nh) {
++		if (nla_put_u32(skb, NDA_NH_ID, nh_id))
++			goto nla_put_failure;
++	} else if (rdst) {
++		if (send_ip && vxlan_nla_put_addr(skb, NDA_DST,
++						  &rdst->remote_ip))
++			goto nla_put_failure;
++
++		if (rdst->remote_port &&
++		    rdst->remote_port != vxlan->cfg.dst_port &&
++		    nla_put_be16(skb, NDA_PORT, rdst->remote_port))
++			goto nla_put_failure;
++		if (rdst->remote_vni != vxlan->default_dst.remote_vni &&
++		    nla_put_u32(skb, NDA_VNI, be32_to_cpu(rdst->remote_vni)))
++			goto nla_put_failure;
++		if (rdst->remote_ifindex &&
++		    nla_put_u32(skb, NDA_IFINDEX, rdst->remote_ifindex))
++			goto nla_put_failure;
++	}
++
++	if ((vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA) && fdb->vni &&
++	    nla_put_u32(skb, NDA_SRC_VNI,
++			be32_to_cpu(fdb->vni)))
++		goto nla_put_failure;
++
++	ci.ndm_used	 = jiffies_to_clock_t(now - fdb->used);
++	ci.ndm_confirmed = 0;
++	ci.ndm_updated	 = jiffies_to_clock_t(now - fdb->updated);
++	ci.ndm_refcnt	 = 0;
++
++	if (nla_put(skb, NDA_CACHEINFO, sizeof(ci), &ci))
++		goto nla_put_failure;
++
++	nlmsg_end(skb, nlh);
++	return 0;
++
++nla_put_failure:
++	nlmsg_cancel(skb, nlh);
++	return -EMSGSIZE;
++}
++
++static inline size_t vxlan_nlmsg_size(void)
++{
++	return NLMSG_ALIGN(sizeof(struct ndmsg))
++		+ nla_total_size(ETH_ALEN) /* NDA_LLADDR */
++		+ nla_total_size(sizeof(struct in6_addr)) /* NDA_DST */
++		+ nla_total_size(sizeof(__be16)) /* NDA_PORT */
++		+ nla_total_size(sizeof(__be32)) /* NDA_VNI */
++		+ nla_total_size(sizeof(__u32)) /* NDA_IFINDEX */
++		+ nla_total_size(sizeof(__s32)) /* NDA_LINK_NETNSID */
++		+ nla_total_size(sizeof(struct nda_cacheinfo));
++}
++
++static void __vxlan_fdb_notify(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb,
++			       struct vxlan_rdst *rd, int type)
++{
++	struct net *net = dev_net(vxlan->dev);
++	struct sk_buff *skb;
++	int err = -ENOBUFS;
++
++	skb = nlmsg_new(vxlan_nlmsg_size(), GFP_ATOMIC);
++	if (skb == NULL)
++		goto errout;
++
++	err = vxlan_fdb_info(skb, vxlan, fdb, 0, 0, type, 0, rd);
++	if (err < 0) {
++		/* -EMSGSIZE implies BUG in vxlan_nlmsg_size() */
++		WARN_ON(err == -EMSGSIZE);
++		kfree_skb(skb);
++		goto errout;
++	}
++
++	rtnl_notify(skb, net, 0, RTNLGRP_NEIGH, NULL, GFP_ATOMIC);
++	return;
++errout:
++	if (err < 0)
++		rtnl_set_sk_err(net, RTNLGRP_NEIGH, err);
++}
++
++static void vxlan_fdb_switchdev_notifier_info(const struct vxlan_dev *vxlan,
++			    const struct vxlan_fdb *fdb,
++			    const struct vxlan_rdst *rd,
++			    struct netlink_ext_ack *extack,
++			    struct switchdev_notifier_vxlan_fdb_info *fdb_info)
++{
++	fdb_info->info.dev = vxlan->dev;
++	fdb_info->info.extack = extack;
++	fdb_info->remote_ip = rd->remote_ip;
++	fdb_info->remote_port = rd->remote_port;
++	fdb_info->remote_vni = rd->remote_vni;
++	fdb_info->remote_ifindex = rd->remote_ifindex;
++	memcpy(fdb_info->eth_addr, fdb->eth_addr, ETH_ALEN);
++	fdb_info->vni = fdb->vni;
++	fdb_info->offloaded = rd->offloaded;
++	fdb_info->added_by_user = fdb->flags & NTF_VXLAN_ADDED_BY_USER;
++}
++
++static int vxlan_fdb_switchdev_call_notifiers(struct vxlan_dev *vxlan,
++					      struct vxlan_fdb *fdb,
++					      struct vxlan_rdst *rd,
++					      bool adding,
++					      struct netlink_ext_ack *extack)
++{
++	struct switchdev_notifier_vxlan_fdb_info info;
++	enum switchdev_notifier_type notifier_type;
++	int ret;
++
++	if (WARN_ON(!rd))
++		return 0;
++
++	notifier_type = adding ? SWITCHDEV_VXLAN_FDB_ADD_TO_DEVICE
++			       : SWITCHDEV_VXLAN_FDB_DEL_TO_DEVICE;
++	vxlan_fdb_switchdev_notifier_info(vxlan, fdb, rd, NULL, &info);
++	ret = call_switchdev_notifiers(notifier_type, vxlan->dev,
++				       &info.info, extack);
++	return notifier_to_errno(ret);
++}
++
++static int vxlan_fdb_notify(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb,
++			    struct vxlan_rdst *rd, int type, bool swdev_notify,
++			    struct netlink_ext_ack *extack)
++{
++	int err;
++
++	if (swdev_notify && rd) {
++		switch (type) {
++		case RTM_NEWNEIGH:
++			err = vxlan_fdb_switchdev_call_notifiers(vxlan, fdb, rd,
++								 true, extack);
++			if (err)
++				return err;
++			break;
++		case RTM_DELNEIGH:
++			vxlan_fdb_switchdev_call_notifiers(vxlan, fdb, rd,
++							   false, extack);
++			break;
++		}
++	}
++
++	__vxlan_fdb_notify(vxlan, fdb, rd, type);
++	return 0;
++}
++
++static void vxlan_ip_miss(struct net_device *dev, union vxlan_addr *ipa)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct vxlan_fdb f = {
++		.state = NUD_STALE,
++	};
++	struct vxlan_rdst remote = {
++		.remote_ip = *ipa, /* goes to NDA_DST */
++		.remote_vni = cpu_to_be32(VXLAN_N_VID),
++	};
++
++	vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH, true, NULL);
++}
++
++static void vxlan_fdb_miss(struct vxlan_dev *vxlan, const u8 eth_addr[ETH_ALEN])
++{
++	struct vxlan_fdb f = {
++		.state = NUD_STALE,
++	};
++	struct vxlan_rdst remote = { };
++
++	memcpy(f.eth_addr, eth_addr, ETH_ALEN);
++
++	vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH, true, NULL);
++}
++
++/* Hash Ethernet address */
++static u32 eth_hash(const unsigned char *addr)
++{
++	u64 value = get_unaligned((u64 *)addr);
++
++	/* only want 6 bytes */
++#ifdef __BIG_ENDIAN
++	value >>= 16;
++#else
++	value <<= 16;
++#endif
++	return hash_64(value, FDB_HASH_BITS);
++}
++
++static u32 eth_vni_hash(const unsigned char *addr, __be32 vni)
++{
++	/* use 1 byte of OUI and 3 bytes of NIC */
++	u32 key = get_unaligned((u32 *)(addr + 2));
++
++	return jhash_2words(key, vni, vxlan_salt) & (FDB_HASH_SIZE - 1);
++}
++
++static u32 fdb_head_index(struct vxlan_dev *vxlan, const u8 *mac, __be32 vni)
++{
++	if (vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA)
++		return eth_vni_hash(mac, vni);
++	else
++		return eth_hash(mac);
++}
++
++/* Hash chain to use given mac address */
++static inline struct hlist_head *vxlan_fdb_head(struct vxlan_dev *vxlan,
++						const u8 *mac, __be32 vni)
++{
++	return &vxlan->fdb_head[fdb_head_index(vxlan, mac, vni)];
++}
++
++/* Look up Ethernet address in forwarding table */
++static struct vxlan_fdb *__vxlan_find_mac(struct vxlan_dev *vxlan,
++					  const u8 *mac, __be32 vni)
++{
++	struct hlist_head *head = vxlan_fdb_head(vxlan, mac, vni);
++	struct vxlan_fdb *f;
++
++	hlist_for_each_entry_rcu(f, head, hlist) {
++		if (ether_addr_equal(mac, f->eth_addr)) {
++			if (vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA) {
++				if (vni == f->vni)
++					return f;
++			} else {
++				return f;
++			}
++		}
++	}
++
++	return NULL;
++}
++
++static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
++					const u8 *mac, __be32 vni)
++{
++	struct vxlan_fdb *f;
++
++	f = __vxlan_find_mac(vxlan, mac, vni);
++	if (f && f->used != jiffies)
++		f->used = jiffies;
++
++	return f;
++}
++
++/* caller should hold vxlan->hash_lock */
++static struct vxlan_rdst *vxlan_fdb_find_rdst(struct vxlan_fdb *f,
++					      union vxlan_addr *ip, __be16 port,
++					      __be32 vni, __u32 ifindex)
++{
++	struct vxlan_rdst *rd;
++
++	list_for_each_entry(rd, &f->remotes, list) {
++		if (vxlan_addr_equal(&rd->remote_ip, ip) &&
++		    rd->remote_port == port &&
++		    rd->remote_vni == vni &&
++		    rd->remote_ifindex == ifindex)
++			return rd;
++	}
++
++	return NULL;
++}
++
++int vxlan_fdb_find_uc(struct net_device *dev, const u8 *mac, __be32 vni,
++		      struct switchdev_notifier_vxlan_fdb_info *fdb_info)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	u8 eth_addr[ETH_ALEN + 2] = { 0 };
++	struct vxlan_rdst *rdst;
++	struct vxlan_fdb *f;
++	int rc = 0;
++
++	if (is_multicast_ether_addr(mac) ||
++	    is_zero_ether_addr(mac))
++		return -EINVAL;
++
++	ether_addr_copy(eth_addr, mac);
++
++	rcu_read_lock();
++
++	f = __vxlan_find_mac(vxlan, eth_addr, vni);
++	if (!f) {
++		rc = -ENOENT;
++		goto out;
++	}
++
++	rdst = first_remote_rcu(f);
++	vxlan_fdb_switchdev_notifier_info(vxlan, f, rdst, NULL, fdb_info);
++
++out:
++	rcu_read_unlock();
++	return rc;
++}
++EXPORT_SYMBOL_GPL(vxlan_fdb_find_uc);
++
++static int vxlan_fdb_notify_one(struct notifier_block *nb,
++				const struct vxlan_dev *vxlan,
++				const struct vxlan_fdb *f,
++				const struct vxlan_rdst *rdst,
++				struct netlink_ext_ack *extack)
++{
++	struct switchdev_notifier_vxlan_fdb_info fdb_info;
++	int rc;
++
++	vxlan_fdb_switchdev_notifier_info(vxlan, f, rdst, extack, &fdb_info);
++	rc = nb->notifier_call(nb, SWITCHDEV_VXLAN_FDB_ADD_TO_DEVICE,
++			       &fdb_info);
++	return notifier_to_errno(rc);
++}
++
++int vxlan_fdb_replay(const struct net_device *dev, __be32 vni,
++		     struct notifier_block *nb,
++		     struct netlink_ext_ack *extack)
++{
++	struct vxlan_dev *vxlan;
++	struct vxlan_rdst *rdst;
++	struct vxlan_fdb *f;
++	unsigned int h;
++	int rc = 0;
++
++	if (!netif_is_vxlan(dev))
++		return -EINVAL;
++	vxlan = netdev_priv(dev);
++
++	for (h = 0; h < FDB_HASH_SIZE; ++h) {
++		spin_lock_bh(&vxlan->hash_lock[h]);
++		hlist_for_each_entry(f, &vxlan->fdb_head[h], hlist) {
++			if (f->vni == vni) {
++				list_for_each_entry(rdst, &f->remotes, list) {
++					rc = vxlan_fdb_notify_one(nb, vxlan,
++								  f, rdst,
++								  extack);
++					if (rc)
++						goto unlock;
++				}
++			}
++		}
++		spin_unlock_bh(&vxlan->hash_lock[h]);
++	}
++	return 0;
++
++unlock:
++	spin_unlock_bh(&vxlan->hash_lock[h]);
++	return rc;
++}
++EXPORT_SYMBOL_GPL(vxlan_fdb_replay);
++
++void vxlan_fdb_clear_offload(const struct net_device *dev, __be32 vni)
++{
++	struct vxlan_dev *vxlan;
++	struct vxlan_rdst *rdst;
++	struct vxlan_fdb *f;
++	unsigned int h;
++
++	if (!netif_is_vxlan(dev))
++		return;
++	vxlan = netdev_priv(dev);
++
++	for (h = 0; h < FDB_HASH_SIZE; ++h) {
++		spin_lock_bh(&vxlan->hash_lock[h]);
++		hlist_for_each_entry(f, &vxlan->fdb_head[h], hlist)
++			if (f->vni == vni)
++				list_for_each_entry(rdst, &f->remotes, list)
++					rdst->offloaded = false;
++		spin_unlock_bh(&vxlan->hash_lock[h]);
++	}
++
++}
++EXPORT_SYMBOL_GPL(vxlan_fdb_clear_offload);
++
++/* Replace destination of unicast mac */
++static int vxlan_fdb_replace(struct vxlan_fdb *f,
++			     union vxlan_addr *ip, __be16 port, __be32 vni,
++			     __u32 ifindex, struct vxlan_rdst *oldrd)
++{
++	struct vxlan_rdst *rd;
++
++	rd = vxlan_fdb_find_rdst(f, ip, port, vni, ifindex);
++	if (rd)
++		return 0;
++
++	rd = list_first_entry_or_null(&f->remotes, struct vxlan_rdst, list);
++	if (!rd)
++		return 0;
++
++	*oldrd = *rd;
++	dst_cache_reset(&rd->dst_cache);
++	rd->remote_ip = *ip;
++	rd->remote_port = port;
++	rd->remote_vni = vni;
++	rd->remote_ifindex = ifindex;
++	rd->offloaded = false;
++	return 1;
++}
++
++/* Add/update destinations for multicast */
++static int vxlan_fdb_append(struct vxlan_fdb *f,
++			    union vxlan_addr *ip, __be16 port, __be32 vni,
++			    __u32 ifindex, struct vxlan_rdst **rdp)
++{
++	struct vxlan_rdst *rd;
++
++	rd = vxlan_fdb_find_rdst(f, ip, port, vni, ifindex);
++	if (rd)
++		return 0;
++
++	rd = kmalloc(sizeof(*rd), GFP_ATOMIC);
++	if (rd == NULL)
++		return -ENOMEM;
++
++	if (dst_cache_init(&rd->dst_cache, GFP_ATOMIC)) {
++		kfree(rd);
++		return -ENOMEM;
++	}
++
++	rd->remote_ip = *ip;
++	rd->remote_port = port;
++	rd->offloaded = false;
++	rd->remote_vni = vni;
++	rd->remote_ifindex = ifindex;
++
++	list_add_tail_rcu(&rd->list, &f->remotes);
++
++	*rdp = rd;
++	return 1;
++}
++
++static struct vxlanhdr *vxlan_gro_remcsum(struct sk_buff *skb,
++					  unsigned int off,
++					  struct vxlanhdr *vh, size_t hdrlen,
++					  __be32 vni_field,
++					  struct gro_remcsum *grc,
++					  bool nopartial)
++{
++	size_t start, offset;
++
++	if (skb->remcsum_offload)
++		return vh;
++
++	if (!NAPI_GRO_CB(skb)->csum_valid)
++		return NULL;
++
++	start = vxlan_rco_start(vni_field);
++	offset = start + vxlan_rco_offset(vni_field);
++
++	vh = skb_gro_remcsum_process(skb, (void *)vh, off, hdrlen,
++				     start, offset, grc, nopartial);
++
++	skb->remcsum_offload = 1;
++
++	return vh;
++}
++
++static struct sk_buff *vxlan_gro_receive(struct sock *sk,
++					 struct list_head *head,
++					 struct sk_buff *skb)
++{
++	struct sk_buff *pp = NULL;
++	struct sk_buff *p;
++	struct vxlanhdr *vh, *vh2;
++	unsigned int hlen, off_vx;
++	int flush = 1;
++	struct vxlan_sock *vs = rcu_dereference_sk_user_data(sk);
++	__be32 flags;
++	struct gro_remcsum grc;
++
++	skb_gro_remcsum_init(&grc);
++
++	off_vx = skb_gro_offset(skb);
++	hlen = off_vx + sizeof(*vh);
++	vh   = skb_gro_header_fast(skb, off_vx);
++	if (skb_gro_header_hard(skb, hlen)) {
++		vh = skb_gro_header_slow(skb, hlen, off_vx);
++		if (unlikely(!vh))
++			goto out;
++	}
++
++	skb_gro_postpull_rcsum(skb, vh, sizeof(struct vxlanhdr));
++
++	flags = vh->vx_flags;
++
++	if ((flags & VXLAN_HF_RCO) && (vs->flags & VXLAN_F_REMCSUM_RX)) {
++		vh = vxlan_gro_remcsum(skb, off_vx, vh, sizeof(struct vxlanhdr),
++				       vh->vx_vni, &grc,
++				       !!(vs->flags &
++					  VXLAN_F_REMCSUM_NOPARTIAL));
++
++		if (!vh)
++			goto out;
++	}
++
++	skb_gro_pull(skb, sizeof(struct vxlanhdr)); /* pull vxlan header */
++
++	list_for_each_entry(p, head, list) {
++		if (!NAPI_GRO_CB(p)->same_flow)
++			continue;
++
++		vh2 = (struct vxlanhdr *)(p->data + off_vx);
++		if (vh->vx_flags != vh2->vx_flags ||
++		    vh->vx_vni != vh2->vx_vni) {
++			NAPI_GRO_CB(p)->same_flow = 0;
++			continue;
++		}
++	}
++
++	pp = call_gro_receive(eth_gro_receive, head, skb);
++	flush = 0;
++
++out:
++	skb_gro_flush_final_remcsum(skb, pp, flush, &grc);
++
++	return pp;
++}
++
++static int vxlan_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff)
++{
++	/* Sets 'skb->inner_mac_header' since we are always called with
++	 * 'skb->encapsulation' set.
++	 */
++	return eth_gro_complete(skb, nhoff + sizeof(struct vxlanhdr));
++}
++
++static struct vxlan_fdb *vxlan_fdb_alloc(struct vxlan_dev *vxlan, const u8 *mac,
++					 __u16 state, __be32 src_vni,
++					 __u16 ndm_flags)
++{
++	struct vxlan_fdb *f;
++
++	f = kmalloc(sizeof(*f), GFP_ATOMIC);
++	if (!f)
++		return NULL;
++	f->state = state;
++	f->flags = ndm_flags;
++	f->updated = f->used = jiffies;
++	f->vni = src_vni;
++	f->nh = NULL;
++	RCU_INIT_POINTER(f->vdev, vxlan);
++	INIT_LIST_HEAD(&f->nh_list);
++	INIT_LIST_HEAD(&f->remotes);
++	memcpy(f->eth_addr, mac, ETH_ALEN);
++
++	return f;
++}
++
++static void vxlan_fdb_insert(struct vxlan_dev *vxlan, const u8 *mac,
++			     __be32 src_vni, struct vxlan_fdb *f)
++{
++	++vxlan->addrcnt;
++	hlist_add_head_rcu(&f->hlist,
++			   vxlan_fdb_head(vxlan, mac, src_vni));
++}
++
++static int vxlan_fdb_nh_update(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb,
++			       u32 nhid, struct netlink_ext_ack *extack)
++{
++	struct nexthop *old_nh = rtnl_dereference(fdb->nh);
++	struct nexthop *nh;
++	int err = -EINVAL;
++
++	if (old_nh && old_nh->id == nhid)
++		return 0;
++
++	nh = nexthop_find_by_id(vxlan->net, nhid);
++	if (!nh) {
++		NL_SET_ERR_MSG(extack, "Nexthop id does not exist");
++		goto err_inval;
++	}
++
++	if (nh) {
++		if (!nexthop_get(nh)) {
++			NL_SET_ERR_MSG(extack, "Nexthop has been deleted");
++			nh = NULL;
++			goto err_inval;
++		}
++		if (!nexthop_is_fdb(nh)) {
++			NL_SET_ERR_MSG(extack, "Nexthop is not a fdb nexthop");
++			goto err_inval;
++		}
++
++		if (!nexthop_is_multipath(nh)) {
++			NL_SET_ERR_MSG(extack, "Nexthop is not a multipath group");
++			goto err_inval;
++		}
++
++		/* check nexthop group family */
++		switch (vxlan->default_dst.remote_ip.sa.sa_family) {
++		case AF_INET:
++			if (!nexthop_has_v4(nh)) {
++				err = -EAFNOSUPPORT;
++				NL_SET_ERR_MSG(extack, "Nexthop group family not supported");
++				goto err_inval;
++			}
++			break;
++		case AF_INET6:
++			if (nexthop_has_v4(nh)) {
++				err = -EAFNOSUPPORT;
++				NL_SET_ERR_MSG(extack, "Nexthop group family not supported");
++				goto err_inval;
++			}
++		}
++	}
++
++	if (old_nh) {
++		list_del_rcu(&fdb->nh_list);
++		nexthop_put(old_nh);
++	}
++	rcu_assign_pointer(fdb->nh, nh);
++	list_add_tail_rcu(&fdb->nh_list, &nh->fdb_list);
++	return 1;
++
++err_inval:
++	if (nh)
++		nexthop_put(nh);
++	return err;
++}
++
++static int vxlan_fdb_create(struct vxlan_dev *vxlan,
++			    const u8 *mac, union vxlan_addr *ip,
++			    __u16 state, __be16 port, __be32 src_vni,
++			    __be32 vni, __u32 ifindex, __u16 ndm_flags,
++			    u32 nhid, struct vxlan_fdb **fdb,
++			    struct netlink_ext_ack *extack)
++{
++	struct vxlan_rdst *rd = NULL;
++	struct vxlan_fdb *f;
++	int rc;
++
++	if (vxlan->cfg.addrmax &&
++	    vxlan->addrcnt >= vxlan->cfg.addrmax)
++		return -ENOSPC;
++
++	netdev_dbg(vxlan->dev, "add %pM -> %pIS\n", mac, ip);
++	f = vxlan_fdb_alloc(vxlan, mac, state, src_vni, ndm_flags);
++	if (!f)
++		return -ENOMEM;
++
++	if (nhid)
++		rc = vxlan_fdb_nh_update(vxlan, f, nhid, extack);
++	else
++		rc = vxlan_fdb_append(f, ip, port, vni, ifindex, &rd);
++	if (rc < 0)
++		goto errout;
++
++	*fdb = f;
++
++	return 0;
++
++errout:
++	kfree(f);
++	return rc;
++}
++
++static void __vxlan_fdb_free(struct vxlan_fdb *f)
++{
++	struct vxlan_rdst *rd, *nd;
++	struct nexthop *nh;
++
++	nh = rcu_dereference_raw(f->nh);
++	if (nh) {
++		rcu_assign_pointer(f->nh, NULL);
++		rcu_assign_pointer(f->vdev, NULL);
++		nexthop_put(nh);
++	}
++
++	list_for_each_entry_safe(rd, nd, &f->remotes, list) {
++		dst_cache_destroy(&rd->dst_cache);
++		kfree(rd);
++	}
++	kfree(f);
++}
++
++static void vxlan_fdb_free(struct rcu_head *head)
++{
++	struct vxlan_fdb *f = container_of(head, struct vxlan_fdb, rcu);
++
++	__vxlan_fdb_free(f);
++}
++
++static void vxlan_fdb_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f,
++			      bool do_notify, bool swdev_notify)
++{
++	struct vxlan_rdst *rd;
++
++	netdev_dbg(vxlan->dev, "delete %pM\n", f->eth_addr);
++
++	--vxlan->addrcnt;
++	if (do_notify) {
++		if (rcu_access_pointer(f->nh))
++			vxlan_fdb_notify(vxlan, f, NULL, RTM_DELNEIGH,
++					 swdev_notify, NULL);
++		else
++			list_for_each_entry(rd, &f->remotes, list)
++				vxlan_fdb_notify(vxlan, f, rd, RTM_DELNEIGH,
++						 swdev_notify, NULL);
++	}
++
++	hlist_del_rcu(&f->hlist);
++	list_del_rcu(&f->nh_list);
++	call_rcu(&f->rcu, vxlan_fdb_free);
++}
++
++static void vxlan_dst_free(struct rcu_head *head)
++{
++	struct vxlan_rdst *rd = container_of(head, struct vxlan_rdst, rcu);
++
++	dst_cache_destroy(&rd->dst_cache);
++	kfree(rd);
++}
++
++static int vxlan_fdb_update_existing(struct vxlan_dev *vxlan,
++				     union vxlan_addr *ip,
++				     __u16 state, __u16 flags,
++				     __be16 port, __be32 vni,
++				     __u32 ifindex, __u16 ndm_flags,
++				     struct vxlan_fdb *f, u32 nhid,
++				     bool swdev_notify,
++				     struct netlink_ext_ack *extack)
++{
++	__u16 fdb_flags = (ndm_flags & ~NTF_USE);
++	struct vxlan_rdst *rd = NULL;
++	struct vxlan_rdst oldrd;
++	int notify = 0;
++	int rc = 0;
++	int err;
++
++	if (nhid && !rcu_access_pointer(f->nh)) {
++		NL_SET_ERR_MSG(extack,
++			       "Cannot replace an existing non nexthop fdb with a nexthop");
++		return -EOPNOTSUPP;
++	}
++
++	if (nhid && (flags & NLM_F_APPEND)) {
++		NL_SET_ERR_MSG(extack,
++			       "Cannot append to a nexthop fdb");
++		return -EOPNOTSUPP;
++	}
++
++	/* Do not allow an externally learned entry to take over an entry added
++	 * by the user.
++	 */
++	if (!(fdb_flags & NTF_EXT_LEARNED) ||
++	    !(f->flags & NTF_VXLAN_ADDED_BY_USER)) {
++		if (f->state != state) {
++			f->state = state;
++			f->updated = jiffies;
++			notify = 1;
++		}
++		if (f->flags != fdb_flags) {
++			f->flags = fdb_flags;
++			f->updated = jiffies;
++			notify = 1;
++		}
++	}
++
++	if ((flags & NLM_F_REPLACE)) {
++		/* Only change unicasts */
++		if (!(is_multicast_ether_addr(f->eth_addr) ||
++		      is_zero_ether_addr(f->eth_addr))) {
++			if (nhid) {
++				rc = vxlan_fdb_nh_update(vxlan, f, nhid, extack);
++				if (rc < 0)
++					return rc;
++			} else {
++				rc = vxlan_fdb_replace(f, ip, port, vni,
++						       ifindex, &oldrd);
++			}
++			notify |= rc;
++		} else {
++			NL_SET_ERR_MSG(extack, "Cannot replace non-unicast fdb entries");
++			return -EOPNOTSUPP;
++		}
++	}
++	if ((flags & NLM_F_APPEND) &&
++	    (is_multicast_ether_addr(f->eth_addr) ||
++	     is_zero_ether_addr(f->eth_addr))) {
++		rc = vxlan_fdb_append(f, ip, port, vni, ifindex, &rd);
++
++		if (rc < 0)
++			return rc;
++		notify |= rc;
++	}
++
++	if (ndm_flags & NTF_USE)
++		f->used = jiffies;
++
++	if (notify) {
++		if (rd == NULL)
++			rd = first_remote_rtnl(f);
++
++		err = vxlan_fdb_notify(vxlan, f, rd, RTM_NEWNEIGH,
++				       swdev_notify, extack);
++		if (err)
++			goto err_notify;
++	}
++
++	return 0;
++
++err_notify:
++	if (nhid)
++		return err;
++	if ((flags & NLM_F_REPLACE) && rc)
++		*rd = oldrd;
++	else if ((flags & NLM_F_APPEND) && rc) {
++		list_del_rcu(&rd->list);
++		call_rcu(&rd->rcu, vxlan_dst_free);
++	}
++	return err;
++}
++
++static int vxlan_fdb_update_create(struct vxlan_dev *vxlan,
++				   const u8 *mac, union vxlan_addr *ip,
++				   __u16 state, __u16 flags,
++				   __be16 port, __be32 src_vni, __be32 vni,
++				   __u32 ifindex, __u16 ndm_flags, u32 nhid,
++				   bool swdev_notify,
++				   struct netlink_ext_ack *extack)
++{
++	__u16 fdb_flags = (ndm_flags & ~NTF_USE);
++	struct vxlan_fdb *f;
++	int rc;
++
++	/* Disallow replace to add a multicast entry */
++	if ((flags & NLM_F_REPLACE) &&
++	    (is_multicast_ether_addr(mac) || is_zero_ether_addr(mac)))
++		return -EOPNOTSUPP;
++
++	netdev_dbg(vxlan->dev, "add %pM -> %pIS\n", mac, ip);
++	rc = vxlan_fdb_create(vxlan, mac, ip, state, port, src_vni,
++			      vni, ifindex, fdb_flags, nhid, &f, extack);
++	if (rc < 0)
++		return rc;
++
++	vxlan_fdb_insert(vxlan, mac, src_vni, f);
++	rc = vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_NEWNEIGH,
++			      swdev_notify, extack);
++	if (rc)
++		goto err_notify;
++
++	return 0;
++
++err_notify:
++	vxlan_fdb_destroy(vxlan, f, false, false);
++	return rc;
++}
++
++/* Add new entry to forwarding table -- assumes lock held */
++static int vxlan_fdb_update(struct vxlan_dev *vxlan,
++			    const u8 *mac, union vxlan_addr *ip,
++			    __u16 state, __u16 flags,
++			    __be16 port, __be32 src_vni, __be32 vni,
++			    __u32 ifindex, __u16 ndm_flags, u32 nhid,
++			    bool swdev_notify,
++			    struct netlink_ext_ack *extack)
++{
++	struct vxlan_fdb *f;
++
++	f = __vxlan_find_mac(vxlan, mac, src_vni);
++	if (f) {
++		if (flags & NLM_F_EXCL) {
++			netdev_dbg(vxlan->dev,
++				   "lost race to create %pM\n", mac);
++			return -EEXIST;
++		}
++
++		return vxlan_fdb_update_existing(vxlan, ip, state, flags, port,
++						 vni, ifindex, ndm_flags, f,
++						 nhid, swdev_notify, extack);
++	} else {
++		if (!(flags & NLM_F_CREATE))
++			return -ENOENT;
++
++		return vxlan_fdb_update_create(vxlan, mac, ip, state, flags,
++					       port, src_vni, vni, ifindex,
++					       ndm_flags, nhid, swdev_notify,
++					       extack);
++	}
++}
++
++static void vxlan_fdb_dst_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f,
++				  struct vxlan_rdst *rd, bool swdev_notify)
++{
++	list_del_rcu(&rd->list);
++	vxlan_fdb_notify(vxlan, f, rd, RTM_DELNEIGH, swdev_notify, NULL);
++	call_rcu(&rd->rcu, vxlan_dst_free);
++}
++
++static int vxlan_fdb_parse(struct nlattr *tb[], struct vxlan_dev *vxlan,
++			   union vxlan_addr *ip, __be16 *port, __be32 *src_vni,
++			   __be32 *vni, u32 *ifindex, u32 *nhid)
++{
++	struct net *net = dev_net(vxlan->dev);
++	int err;
++
++	if (tb[NDA_NH_ID] && (tb[NDA_DST] || tb[NDA_VNI] || tb[NDA_IFINDEX] ||
++	    tb[NDA_PORT]))
++		return -EINVAL;
++
++	if (tb[NDA_DST]) {
++		err = vxlan_nla_get_addr(ip, tb[NDA_DST]);
++		if (err)
++			return err;
++	} else {
++		union vxlan_addr *remote = &vxlan->default_dst.remote_ip;
++
++		if (remote->sa.sa_family == AF_INET) {
++			ip->sin.sin_addr.s_addr = htonl(INADDR_ANY);
++			ip->sa.sa_family = AF_INET;
++#if IS_ENABLED(CONFIG_IPV6)
++		} else {
++			ip->sin6.sin6_addr = in6addr_any;
++			ip->sa.sa_family = AF_INET6;
++#endif
++		}
++	}
++
++	if (tb[NDA_PORT]) {
++		if (nla_len(tb[NDA_PORT]) != sizeof(__be16))
++			return -EINVAL;
++		*port = nla_get_be16(tb[NDA_PORT]);
++	} else {
++		*port = vxlan->cfg.dst_port;
++	}
++
++	if (tb[NDA_VNI]) {
++		if (nla_len(tb[NDA_VNI]) != sizeof(u32))
++			return -EINVAL;
++		*vni = cpu_to_be32(nla_get_u32(tb[NDA_VNI]));
++	} else {
++		*vni = vxlan->default_dst.remote_vni;
++	}
++
++	if (tb[NDA_SRC_VNI]) {
++		if (nla_len(tb[NDA_SRC_VNI]) != sizeof(u32))
++			return -EINVAL;
++		*src_vni = cpu_to_be32(nla_get_u32(tb[NDA_SRC_VNI]));
++	} else {
++		*src_vni = vxlan->default_dst.remote_vni;
++	}
++
++	if (tb[NDA_IFINDEX]) {
++		struct net_device *tdev;
++
++		if (nla_len(tb[NDA_IFINDEX]) != sizeof(u32))
++			return -EINVAL;
++		*ifindex = nla_get_u32(tb[NDA_IFINDEX]);
++		tdev = __dev_get_by_index(net, *ifindex);
++		if (!tdev)
++			return -EADDRNOTAVAIL;
++	} else {
++		*ifindex = 0;
++	}
++
++	if (tb[NDA_NH_ID])
++		*nhid = nla_get_u32(tb[NDA_NH_ID]);
++	else
++		*nhid = 0;
++
++	return 0;
++}
++
++/* Add static entry (via netlink) */
++static int vxlan_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
++			 struct net_device *dev,
++			 const unsigned char *addr, u16 vid, u16 flags,
++			 struct netlink_ext_ack *extack)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	/* struct net *net = dev_net(vxlan->dev); */
++	union vxlan_addr ip;
++	__be16 port;
++	__be32 src_vni, vni;
++	u32 ifindex, nhid;
++	u32 hash_index;
++	int err;
++
++	if (!(ndm->ndm_state & (NUD_PERMANENT|NUD_REACHABLE))) {
++		pr_info("RTM_NEWNEIGH with invalid state %#x\n",
++			ndm->ndm_state);
++		return -EINVAL;
++	}
++
++	if (!tb || (!tb[NDA_DST] && !tb[NDA_NH_ID]))
++		return -EINVAL;
++
++	err = vxlan_fdb_parse(tb, vxlan, &ip, &port, &src_vni, &vni, &ifindex,
++			      &nhid);
++	if (err)
++		return err;
++
++	if (vxlan->default_dst.remote_ip.sa.sa_family != ip.sa.sa_family)
++		return -EAFNOSUPPORT;
++
++	hash_index = fdb_head_index(vxlan, addr, src_vni);
++	spin_lock_bh(&vxlan->hash_lock[hash_index]);
++	err = vxlan_fdb_update(vxlan, addr, &ip, ndm->ndm_state, flags,
++			       port, src_vni, vni, ifindex,
++			       ndm->ndm_flags | NTF_VXLAN_ADDED_BY_USER,
++			       nhid, true, extack);
++	spin_unlock_bh(&vxlan->hash_lock[hash_index]);
++
++	return err;
++}
++
++static int __vxlan_fdb_delete(struct vxlan_dev *vxlan,
++			      const unsigned char *addr, union vxlan_addr ip,
++			      __be16 port, __be32 src_vni, __be32 vni,
++			      u32 ifindex, bool swdev_notify)
++{
++	struct vxlan_rdst *rd = NULL;
++	struct vxlan_fdb *f;
++	int err = -ENOENT;
++
++	f = vxlan_find_mac(vxlan, addr, src_vni);
++	if (!f)
++		return err;
++
++	if (!vxlan_addr_any(&ip)) {
++		rd = vxlan_fdb_find_rdst(f, &ip, port, vni, ifindex);
++		if (!rd)
++			goto out;
++	}
++
++	/* remove a destination if it's not the only one on the list,
++	 * otherwise destroy the fdb entry
++	 */
++	if (rd && !list_is_singular(&f->remotes)) {
++		vxlan_fdb_dst_destroy(vxlan, f, rd, swdev_notify);
++		goto out;
++	}
++
++	vxlan_fdb_destroy(vxlan, f, true, swdev_notify);
++
++out:
++	return 0;
++}
++
++/* Delete entry (via netlink) */
++static int vxlan_fdb_delete(struct ndmsg *ndm, struct nlattr *tb[],
++			    struct net_device *dev,
++			    const unsigned char *addr, u16 vid)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	union vxlan_addr ip;
++	__be32 src_vni, vni;
++	u32 ifindex, nhid;
++	u32 hash_index;
++	__be16 port;
++	int err;
++
++	err = vxlan_fdb_parse(tb, vxlan, &ip, &port, &src_vni, &vni, &ifindex,
++			      &nhid);
++	if (err)
++		return err;
++
++	hash_index = fdb_head_index(vxlan, addr, src_vni);
++	spin_lock_bh(&vxlan->hash_lock[hash_index]);
++	err = __vxlan_fdb_delete(vxlan, addr, ip, port, src_vni, vni, ifindex,
++				 true);
++	spin_unlock_bh(&vxlan->hash_lock[hash_index]);
++
++	return err;
++}
++
++/* Dump forwarding table */
++static int vxlan_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb,
++			  struct net_device *dev,
++			  struct net_device *filter_dev, int *idx)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	unsigned int h;
++	int err = 0;
++
++	for (h = 0; h < FDB_HASH_SIZE; ++h) {
++		struct vxlan_fdb *f;
++
++		rcu_read_lock();
++		hlist_for_each_entry_rcu(f, &vxlan->fdb_head[h], hlist) {
++			struct vxlan_rdst *rd;
++
++			if (rcu_access_pointer(f->nh)) {
++				if (*idx < cb->args[2])
++					goto skip_nh;
++				err = vxlan_fdb_info(skb, vxlan, f,
++						     NETLINK_CB(cb->skb).portid,
++						     cb->nlh->nlmsg_seq,
++						     RTM_NEWNEIGH,
++						     NLM_F_MULTI, NULL);
++				if (err < 0) {
++					rcu_read_unlock();
++					goto out;
++				}
++skip_nh:
++				*idx += 1;
++				continue;
++			}
++
++			list_for_each_entry_rcu(rd, &f->remotes, list) {
++				if (*idx < cb->args[2])
++					goto skip;
++
++				err = vxlan_fdb_info(skb, vxlan, f,
++						     NETLINK_CB(cb->skb).portid,
++						     cb->nlh->nlmsg_seq,
++						     RTM_NEWNEIGH,
++						     NLM_F_MULTI, rd);
++				if (err < 0) {
++					rcu_read_unlock();
++					goto out;
++				}
++skip:
++				*idx += 1;
++			}
++		}
++		rcu_read_unlock();
++	}
++out:
++	return err;
++}
++
++static int vxlan_fdb_get(struct sk_buff *skb,
++			 struct nlattr *tb[],
++			 struct net_device *dev,
++			 const unsigned char *addr,
++			 u16 vid, u32 portid, u32 seq,
++			 struct netlink_ext_ack *extack)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct vxlan_fdb *f;
++	__be32 vni;
++	int err;
++
++	if (tb[NDA_VNI])
++		vni = cpu_to_be32(nla_get_u32(tb[NDA_VNI]));
++	else
++		vni = vxlan->default_dst.remote_vni;
++
++	rcu_read_lock();
++
++	f = __vxlan_find_mac(vxlan, addr, vni);
++	if (!f) {
++		NL_SET_ERR_MSG(extack, "Fdb entry not found");
++		err = -ENOENT;
++		goto errout;
++	}
++
++	err = vxlan_fdb_info(skb, vxlan, f, portid, seq,
++			     RTM_NEWNEIGH, 0, first_remote_rcu(f));
++errout:
++	rcu_read_unlock();
++	return err;
++}
++
++/* Watch incoming packets to learn mapping between Ethernet address
++ * and Tunnel endpoint.
++ * Return true if packet is bogus and should be dropped.
++ */
++static bool vxlan_snoop(struct net_device *dev,
++			union vxlan_addr *src_ip, const u8 *src_mac,
++			u32 src_ifindex, __be32 vni)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct vxlan_fdb *f;
++	u32 ifindex = 0;
++
++#if IS_ENABLED(CONFIG_IPV6)
++	if (src_ip->sa.sa_family == AF_INET6 &&
++	    (ipv6_addr_type(&src_ip->sin6.sin6_addr) & IPV6_ADDR_LINKLOCAL))
++		ifindex = src_ifindex;
++#endif
++
++	f = vxlan_find_mac(vxlan, src_mac, vni);
++	if (likely(f)) {
++		struct vxlan_rdst *rdst = first_remote_rcu(f);
++
++		if (likely(vxlan_addr_equal(&rdst->remote_ip, src_ip) &&
++			   rdst->remote_ifindex == ifindex))
++			return false;
++
++		/* Don't migrate static entries, drop packets */
++		if (f->state & (NUD_PERMANENT | NUD_NOARP))
++			return true;
++
++		/* Don't override an fdb with nexthop with a learnt entry */
++		if (rcu_access_pointer(f->nh))
++			return true;
++
++		if (net_ratelimit())
++			netdev_info(dev,
++				    "%pM migrated from %pIS to %pIS\n",
++				    src_mac, &rdst->remote_ip.sa, &src_ip->sa);
++
++		rdst->remote_ip = *src_ip;
++		f->updated = jiffies;
++		vxlan_fdb_notify(vxlan, f, rdst, RTM_NEWNEIGH, true, NULL);
++	} else {
++		u32 hash_index = fdb_head_index(vxlan, src_mac, vni);
++
++		/* learned new entry */
++		spin_lock(&vxlan->hash_lock[hash_index]);
++
++		/* close off race between vxlan_flush and incoming packets */
++		if (netif_running(dev))
++			vxlan_fdb_update(vxlan, src_mac, src_ip,
++					 NUD_REACHABLE,
++					 NLM_F_EXCL|NLM_F_CREATE,
++					 vxlan->cfg.dst_port,
++					 vni,
++					 vxlan->default_dst.remote_vni,
++					 ifindex, NTF_SELF, 0, true, NULL);
++		spin_unlock(&vxlan->hash_lock[hash_index]);
++	}
++
++	return false;
++}
++
++/* See if multicast group is already in use by other ID */
++static bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev)
++{
++	struct vxlan_dev *vxlan;
++	struct vxlan_sock *sock4;
++#if IS_ENABLED(CONFIG_IPV6)
++	struct vxlan_sock *sock6;
++#endif
++	unsigned short family = dev->default_dst.remote_ip.sa.sa_family;
++
++	sock4 = rtnl_dereference(dev->vn4_sock);
++
++	/* The vxlan_sock is only used by dev, leaving group has
++	 * no effect on other vxlan devices.
++	 */
++	if (family == AF_INET && sock4 && refcount_read(&sock4->refcnt) == 1)
++		return false;
++#if IS_ENABLED(CONFIG_IPV6)
++	sock6 = rtnl_dereference(dev->vn6_sock);
++	if (family == AF_INET6 && sock6 && refcount_read(&sock6->refcnt) == 1)
++		return false;
++#endif
++
++	list_for_each_entry(vxlan, &vn->vxlan_list, next) {
++		if (!netif_running(vxlan->dev) || vxlan == dev)
++			continue;
++
++		if (family == AF_INET &&
++		    rtnl_dereference(vxlan->vn4_sock) != sock4)
++			continue;
++#if IS_ENABLED(CONFIG_IPV6)
++		if (family == AF_INET6 &&
++		    rtnl_dereference(vxlan->vn6_sock) != sock6)
++			continue;
++#endif
++
++		if (!vxlan_addr_equal(&vxlan->default_dst.remote_ip,
++				      &dev->default_dst.remote_ip))
++			continue;
++
++		if (vxlan->default_dst.remote_ifindex !=
++		    dev->default_dst.remote_ifindex)
++			continue;
++
++		return true;
++	}
++
++	return false;
++}
++
++static bool __vxlan_sock_release_prep(struct vxlan_sock *vs)
++{
++	struct vxlan_net *vn;
++
++	if (!vs)
++		return false;
++	if (!refcount_dec_and_test(&vs->refcnt))
++		return false;
++
++	vn = net_generic(sock_net(vs->sock->sk), vxlan_net_id);
++	spin_lock(&vn->sock_lock);
++	hlist_del_rcu(&vs->hlist);
++	udp_tunnel_notify_del_rx_port(vs->sock,
++				      (vs->flags & VXLAN_F_GPE) ?
++				      UDP_TUNNEL_TYPE_VXLAN_GPE :
++				      UDP_TUNNEL_TYPE_VXLAN);
++	spin_unlock(&vn->sock_lock);
++
++	return true;
++}
++
++static void vxlan_sock_release(struct vxlan_dev *vxlan)
++{
++	struct vxlan_sock *sock4 = rtnl_dereference(vxlan->vn4_sock);
++#if IS_ENABLED(CONFIG_IPV6)
++	struct vxlan_sock *sock6 = rtnl_dereference(vxlan->vn6_sock);
++
++	RCU_INIT_POINTER(vxlan->vn6_sock, NULL);
++#endif
++
++	RCU_INIT_POINTER(vxlan->vn4_sock, NULL);
++	synchronize_net();
++
++	vxlan_vs_del_dev(vxlan);
++
++	if (__vxlan_sock_release_prep(sock4)) {
++		udp_tunnel_sock_release(sock4->sock);
++		kfree(sock4);
++	}
++
++#if IS_ENABLED(CONFIG_IPV6)
++	if (__vxlan_sock_release_prep(sock6)) {
++		udp_tunnel_sock_release(sock6->sock);
++		kfree(sock6);
++	}
++#endif
++}
++
++/* Update multicast group membership when first VNI on
++ * multicast address is brought up
++ */
++static int vxlan_igmp_join(struct vxlan_dev *vxlan)
++{
++	struct sock *sk;
++	union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
++	int ifindex = vxlan->default_dst.remote_ifindex;
++	int ret = -EINVAL;
++
++	if (ip->sa.sa_family == AF_INET) {
++		struct vxlan_sock *sock4 = rtnl_dereference(vxlan->vn4_sock);
++		struct ip_mreqn mreq = {
++			.imr_multiaddr.s_addr	= ip->sin.sin_addr.s_addr,
++			.imr_ifindex		= ifindex,
++		};
++
++		sk = sock4->sock->sk;
++		lock_sock(sk);
++		ret = ip_mc_join_group(sk, &mreq);
++		release_sock(sk);
++#if IS_ENABLED(CONFIG_IPV6)
++	} else {
++		struct vxlan_sock *sock6 = rtnl_dereference(vxlan->vn6_sock);
++
++		sk = sock6->sock->sk;
++		lock_sock(sk);
++		ret = ipv6_stub->ipv6_sock_mc_join(sk, ifindex,
++						   &ip->sin6.sin6_addr);
++		release_sock(sk);
++#endif
++	}
++
++	return ret;
++}
++
++/* Inverse of vxlan_igmp_join when last VNI is brought down */
++static int vxlan_igmp_leave(struct vxlan_dev *vxlan)
++{
++	struct sock *sk;
++	union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
++	int ifindex = vxlan->default_dst.remote_ifindex;
++	int ret = -EINVAL;
++
++	if (ip->sa.sa_family == AF_INET) {
++		struct vxlan_sock *sock4 = rtnl_dereference(vxlan->vn4_sock);
++		struct ip_mreqn mreq = {
++			.imr_multiaddr.s_addr	= ip->sin.sin_addr.s_addr,
++			.imr_ifindex		= ifindex,
++		};
++
++		sk = sock4->sock->sk;
++		lock_sock(sk);
++		ret = ip_mc_leave_group(sk, &mreq);
++		release_sock(sk);
++#if IS_ENABLED(CONFIG_IPV6)
++	} else {
++		struct vxlan_sock *sock6 = rtnl_dereference(vxlan->vn6_sock);
++
++		sk = sock6->sock->sk;
++		lock_sock(sk);
++		ret = ipv6_stub->ipv6_sock_mc_drop(sk, ifindex,
++						   &ip->sin6.sin6_addr);
++		release_sock(sk);
++#endif
++	}
++
++	return ret;
++}
++
++static bool vxlan_remcsum(struct vxlanhdr *unparsed,
++			  struct sk_buff *skb, u32 vxflags)
++{
++	size_t start, offset;
++
++	if (!(unparsed->vx_flags & VXLAN_HF_RCO) || skb->remcsum_offload)
++		goto out;
++
++	start = vxlan_rco_start(unparsed->vx_vni);
++	offset = start + vxlan_rco_offset(unparsed->vx_vni);
++
++	if (!pskb_may_pull(skb, offset + sizeof(u16)))
++		return false;
++
++	skb_remcsum_process(skb, (void *)(vxlan_hdr(skb) + 1), start, offset,
++			    !!(vxflags & VXLAN_F_REMCSUM_NOPARTIAL));
++out:
++	unparsed->vx_flags &= ~VXLAN_HF_RCO;
++	unparsed->vx_vni &= VXLAN_VNI_MASK;
++	return true;
++}
++
++static void vxlan_parse_gbp_hdr(struct vxlanhdr *unparsed,
++				struct sk_buff *skb, u32 vxflags,
++				struct vxlan_metadata *md)
++{
++	struct vxlanhdr_gbp *gbp = (struct vxlanhdr_gbp *)unparsed;
++	struct metadata_dst *tun_dst;
++
++	if (!(unparsed->vx_flags & VXLAN_HF_GBP))
++		goto out;
++
++	md->gbp = ntohs(gbp->policy_id);
++
++	tun_dst = (struct metadata_dst *)skb_dst(skb);
++	if (tun_dst) {
++		tun_dst->u.tun_info.key.tun_flags |= TUNNEL_VXLAN_OPT;
++		tun_dst->u.tun_info.options_len = sizeof(*md);
++	}
++	if (gbp->dont_learn)
++		md->gbp |= VXLAN_GBP_DONT_LEARN;
++
++	if (gbp->policy_applied)
++		md->gbp |= VXLAN_GBP_POLICY_APPLIED;
++
++	/* In flow-based mode, GBP is carried in dst_metadata */
++	if (!(vxflags & VXLAN_F_COLLECT_METADATA))
++		skb->mark = md->gbp;
++out:
++	unparsed->vx_flags &= ~VXLAN_GBP_USED_BITS;
++}
++
++static bool vxlan_parse_gpe_hdr(struct vxlanhdr *unparsed,
++				__be16 *protocol,
++				struct sk_buff *skb, u32 vxflags)
++{
++	struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)unparsed;
++
++	/* Need to have Next Protocol set for interfaces in GPE mode. */
++	if (!gpe->np_applied)
++		return false;
++	/* "The initial version is 0. If a receiver does not support the
++	 * version indicated it MUST drop the packet.
++	 */
++	if (gpe->version != 0)
++		return false;
++	/* "When the O bit is set to 1, the packet is an OAM packet and OAM
++	 * processing MUST occur." However, we don't implement OAM
++	 * processing, thus drop the packet.
++	 */
++	if (gpe->oam_flag)
++		return false;
++
++	*protocol = tun_p_to_eth_p(gpe->next_protocol);
++	if (!*protocol)
++		return false;
++
++	unparsed->vx_flags &= ~VXLAN_GPE_USED_BITS;
++	return true;
++}
++
++static bool vxlan_set_mac(struct vxlan_dev *vxlan,
++			  struct vxlan_sock *vs,
++			  struct sk_buff *skb, __be32 vni)
++{
++	union vxlan_addr saddr;
++	u32 ifindex = skb->dev->ifindex;
++
++	skb_reset_mac_header(skb);
++	skb->protocol = eth_type_trans(skb, vxlan->dev);
++	skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
++
++	/* Ignore packet loops (and multicast echo) */
++	if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr))
++		return false;
++
++	/* Get address from the outer IP header */
++	if (vxlan_get_sk_family(vs) == AF_INET) {
++		saddr.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
++		saddr.sa.sa_family = AF_INET;
++#if IS_ENABLED(CONFIG_IPV6)
++	} else {
++		saddr.sin6.sin6_addr = ipv6_hdr(skb)->saddr;
++		saddr.sa.sa_family = AF_INET6;
++#endif
++	}
++
++	if ((vxlan->cfg.flags & VXLAN_F_LEARN) &&
++	    vxlan_snoop(skb->dev, &saddr, eth_hdr(skb)->h_source, ifindex, vni))
++		return false;
++
++	return true;
++}
++
++static bool vxlan_ecn_decapsulate(struct vxlan_sock *vs, void *oiph,
++				  struct sk_buff *skb)
++{
++	int err = 0;
++
++	if (vxlan_get_sk_family(vs) == AF_INET)
++		err = IP_ECN_decapsulate(oiph, skb);
++#if IS_ENABLED(CONFIG_IPV6)
++	else
++		err = IP6_ECN_decapsulate(oiph, skb);
++#endif
++
++	if (unlikely(err) && log_ecn_error) {
++		if (vxlan_get_sk_family(vs) == AF_INET)
++			net_info_ratelimited("non-ECT from %pI4 with TOS=%#x\n",
++					     &((struct iphdr *)oiph)->saddr,
++					     ((struct iphdr *)oiph)->tos);
++		else
++			net_info_ratelimited("non-ECT from %pI6\n",
++					     &((struct ipv6hdr *)oiph)->saddr);
++	}
++	return err <= 1;
++}
++
++/* Callback from net/ipv4/udp.c to receive packets */
++static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
++{
++	struct vxlan_dev *vxlan;
++	struct vxlan_sock *vs;
++	struct vxlanhdr unparsed;
++	struct vxlan_metadata _md;
++	struct vxlan_metadata *md = &_md;
++	__be16 protocol = htons(ETH_P_TEB);
++	bool raw_proto = false;
++	void *oiph;
++	__be32 vni = 0;
++
++	/* Need UDP and VXLAN header to be present */
++	if (!pskb_may_pull(skb, VXLAN_HLEN))
++		goto drop;
++
++	unparsed = *vxlan_hdr(skb);
++	/* VNI flag always required to be set */
++	if (!(unparsed.vx_flags & VXLAN_HF_VNI)) {
++		netdev_dbg(skb->dev, "invalid vxlan flags=%#x vni=%#x\n",
++			   ntohl(vxlan_hdr(skb)->vx_flags),
++			   ntohl(vxlan_hdr(skb)->vx_vni));
++		/* Return non vxlan pkt */
++		goto drop;
++	}
++	unparsed.vx_flags &= ~VXLAN_HF_VNI;
++	unparsed.vx_vni &= ~VXLAN_VNI_MASK;
++
++	vs = rcu_dereference_sk_user_data(sk);
++	if (!vs)
++		goto drop;
++
++	vni = vxlan_vni(vxlan_hdr(skb)->vx_vni);
++
++	vxlan = vxlan_vs_find_vni(vs, skb->dev->ifindex, vni);
++	if (!vxlan)
++		goto drop;
++
++	/* For backwards compatibility, only allow reserved fields to be
++	 * used by VXLAN extensions if explicitly requested.
++	 */
++	if (vs->flags & VXLAN_F_GPE) {
++		if (!vxlan_parse_gpe_hdr(&unparsed, &protocol, skb, vs->flags))
++			goto drop;
++		raw_proto = true;
++	}
++
++	if (__iptunnel_pull_header(skb, VXLAN_HLEN, protocol, raw_proto,
++				   !net_eq(vxlan->net, dev_net(vxlan->dev))))
++		goto drop;
++
++	if (vs->flags & VXLAN_F_REMCSUM_RX)
++		if (unlikely(!vxlan_remcsum(&unparsed, skb, vs->flags)))
++			goto drop;
++
++	if (vxlan_collect_metadata(vs)) {
++		struct metadata_dst *tun_dst;
++
++		tun_dst = udp_tun_rx_dst(skb, vxlan_get_sk_family(vs), TUNNEL_KEY,
++					 key32_to_tunnel_id(vni), sizeof(*md));
++
++		if (!tun_dst)
++			goto drop;
++
++		md = ip_tunnel_info_opts(&tun_dst->u.tun_info);
++
++		skb_dst_set(skb, (struct dst_entry *)tun_dst);
++	} else {
++		memset(md, 0, sizeof(*md));
++	}
++
++	if (vs->flags & VXLAN_F_GBP)
++		vxlan_parse_gbp_hdr(&unparsed, skb, vs->flags, md);
++	/* Note that GBP and GPE can never be active together. This is
++	 * ensured in vxlan_dev_configure.
++	 */
++
++	if (unparsed.vx_flags || unparsed.vx_vni) {
++		/* If there are any unprocessed flags remaining treat
++		 * this as a malformed packet. This behavior diverges from
++		 * VXLAN RFC (RFC7348) which stipulates that bits in reserved
++		 * in reserved fields are to be ignored. The approach here
++		 * maintains compatibility with previous stack code, and also
++		 * is more robust and provides a little more security in
++		 * adding extensions to VXLAN.
++		 */
++		goto drop;
++	}
++
++	if (!raw_proto) {
++		if (!vxlan_set_mac(vxlan, vs, skb, vni))
++			goto drop;
++	} else {
++		skb_reset_mac_header(skb);
++		skb->dev = vxlan->dev;
++		skb->pkt_type = PACKET_HOST;
++	}
++
++	oiph = skb_network_header(skb);
++	skb_reset_network_header(skb);
++
++	if (!vxlan_ecn_decapsulate(vs, oiph, skb)) {
++		++vxlan->dev->stats.rx_frame_errors;
++		++vxlan->dev->stats.rx_errors;
++		goto drop;
++	}
++
++	rcu_read_lock();
++
++	if (unlikely(!(vxlan->dev->flags & IFF_UP))) {
++		rcu_read_unlock();
++		atomic_long_inc(&vxlan->dev->rx_dropped);
++		goto drop;
++	}
++
++	dev_sw_netstats_rx_add(vxlan->dev, skb->len);
++	gro_cells_receive(&vxlan->gro_cells, skb);
++
++	rcu_read_unlock();
++
++	return 0;
++
++drop:
++	/* Consume bad packet */
++	kfree_skb(skb);
++	return 0;
++}
++
++/* Callback from net/ipv{4,6}/udp.c to check that we have a VNI for errors */
++static int vxlan_err_lookup(struct sock *sk, struct sk_buff *skb)
++{
++	struct vxlan_dev *vxlan;
++	struct vxlan_sock *vs;
++	struct vxlanhdr *hdr;
++	__be32 vni;
++
++	if (!pskb_may_pull(skb, skb_transport_offset(skb) + VXLAN_HLEN))
++		return -EINVAL;
++
++	hdr = vxlan_hdr(skb);
++
++	if (!(hdr->vx_flags & VXLAN_HF_VNI))
++		return -EINVAL;
++
++	vs = rcu_dereference_sk_user_data(sk);
++	if (!vs)
++		return -ENOENT;
++
++	vni = vxlan_vni(hdr->vx_vni);
++	vxlan = vxlan_vs_find_vni(vs, skb->dev->ifindex, vni);
++	if (!vxlan)
++		return -ENOENT;
++
++	return 0;
++}
++
++static int arp_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct arphdr *parp;
++	u8 *arpptr, *sha;
++	__be32 sip, tip;
++	struct neighbour *n;
++
++	if (dev->flags & IFF_NOARP)
++		goto out;
++
++	if (!pskb_may_pull(skb, arp_hdr_len(dev))) {
++		dev->stats.tx_dropped++;
++		goto out;
++	}
++	parp = arp_hdr(skb);
++
++	if ((parp->ar_hrd != htons(ARPHRD_ETHER) &&
++	     parp->ar_hrd != htons(ARPHRD_IEEE802)) ||
++	    parp->ar_pro != htons(ETH_P_IP) ||
++	    parp->ar_op != htons(ARPOP_REQUEST) ||
++	    parp->ar_hln != dev->addr_len ||
++	    parp->ar_pln != 4)
++		goto out;
++	arpptr = (u8 *)parp + sizeof(struct arphdr);
++	sha = arpptr;
++	arpptr += dev->addr_len;	/* sha */
++	memcpy(&sip, arpptr, sizeof(sip));
++	arpptr += sizeof(sip);
++	arpptr += dev->addr_len;	/* tha */
++	memcpy(&tip, arpptr, sizeof(tip));
++
++	if (ipv4_is_loopback(tip) ||
++	    ipv4_is_multicast(tip))
++		goto out;
++
++	n = neigh_lookup(&arp_tbl, &tip, dev);
++
++	if (n) {
++		struct vxlan_fdb *f;
++		struct sk_buff	*reply;
++
++		if (!(n->nud_state & NUD_CONNECTED)) {
++			neigh_release(n);
++			goto out;
++		}
++
++		f = vxlan_find_mac(vxlan, n->ha, vni);
++		if (f && vxlan_addr_any(&(first_remote_rcu(f)->remote_ip))) {
++			/* bridge-local neighbor */
++			neigh_release(n);
++			goto out;
++		}
++
++		reply = arp_create(ARPOP_REPLY, ETH_P_ARP, sip, dev, tip, sha,
++				n->ha, sha);
++
++		neigh_release(n);
++
++		if (reply == NULL)
++			goto out;
++
++		skb_reset_mac_header(reply);
++		__skb_pull(reply, skb_network_offset(reply));
++		reply->ip_summed = CHECKSUM_UNNECESSARY;
++		reply->pkt_type = PACKET_HOST;
++
++		if (netif_rx_ni(reply) == NET_RX_DROP)
++			dev->stats.rx_dropped++;
++	} else if (vxlan->cfg.flags & VXLAN_F_L3MISS) {
++		union vxlan_addr ipa = {
++			.sin.sin_addr.s_addr = tip,
++			.sin.sin_family = AF_INET,
++		};
++
++		vxlan_ip_miss(dev, &ipa);
++	}
++out:
++	consume_skb(skb);
++	return NETDEV_TX_OK;
++}
++
++#if IS_ENABLED(CONFIG_IPV6)
++static struct sk_buff *vxlan_na_create(struct sk_buff *request,
++	struct neighbour *n, bool isrouter)
++{
++	struct net_device *dev = request->dev;
++	struct sk_buff *reply;
++	struct nd_msg *ns, *na;
++	struct ipv6hdr *pip6;
++	u8 *daddr;
++	int na_olen = 8; /* opt hdr + ETH_ALEN for target */
++	int ns_olen;
++	int i, len;
++
++	if (dev == NULL || !pskb_may_pull(request, request->len))
++		return NULL;
++
++	len = LL_RESERVED_SPACE(dev) + sizeof(struct ipv6hdr) +
++		sizeof(*na) + na_olen + dev->needed_tailroom;
++	reply = alloc_skb(len, GFP_ATOMIC);
++	if (reply == NULL)
++		return NULL;
++
++	reply->protocol = htons(ETH_P_IPV6);
++	reply->dev = dev;
++	skb_reserve(reply, LL_RESERVED_SPACE(request->dev));
++	skb_push(reply, sizeof(struct ethhdr));
++	skb_reset_mac_header(reply);
++
++	ns = (struct nd_msg *)(ipv6_hdr(request) + 1);
++
++	daddr = eth_hdr(request)->h_source;
++	ns_olen = request->len - skb_network_offset(request) -
++		sizeof(struct ipv6hdr) - sizeof(*ns);
++	for (i = 0; i < ns_olen-1; i += (ns->opt[i+1]<<3)) {
++		if (!ns->opt[i + 1]) {
++			kfree_skb(reply);
++			return NULL;
++		}
++		if (ns->opt[i] == ND_OPT_SOURCE_LL_ADDR) {
++			daddr = ns->opt + i + sizeof(struct nd_opt_hdr);
++			break;
++		}
++	}
++
++	/* Ethernet header */
++	ether_addr_copy(eth_hdr(reply)->h_dest, daddr);
++	ether_addr_copy(eth_hdr(reply)->h_source, n->ha);
++	eth_hdr(reply)->h_proto = htons(ETH_P_IPV6);
++	reply->protocol = htons(ETH_P_IPV6);
++
++	skb_pull(reply, sizeof(struct ethhdr));
++	skb_reset_network_header(reply);
++	skb_put(reply, sizeof(struct ipv6hdr));
++
++	/* IPv6 header */
++
++	pip6 = ipv6_hdr(reply);
++	memset(pip6, 0, sizeof(struct ipv6hdr));
++	pip6->version = 6;
++	pip6->priority = ipv6_hdr(request)->priority;
++	pip6->nexthdr = IPPROTO_ICMPV6;
++	pip6->hop_limit = 255;
++	pip6->daddr = ipv6_hdr(request)->saddr;
++	pip6->saddr = *(struct in6_addr *)n->primary_key;
++
++	skb_pull(reply, sizeof(struct ipv6hdr));
++	skb_reset_transport_header(reply);
++
++	/* Neighbor Advertisement */
++	na = skb_put_zero(reply, sizeof(*na) + na_olen);
++	na->icmph.icmp6_type = NDISC_NEIGHBOUR_ADVERTISEMENT;
++	na->icmph.icmp6_router = isrouter;
++	na->icmph.icmp6_override = 1;
++	na->icmph.icmp6_solicited = 1;
++	na->target = ns->target;
++	ether_addr_copy(&na->opt[2], n->ha);
++	na->opt[0] = ND_OPT_TARGET_LL_ADDR;
++	na->opt[1] = na_olen >> 3;
++
++	na->icmph.icmp6_cksum = csum_ipv6_magic(&pip6->saddr,
++		&pip6->daddr, sizeof(*na)+na_olen, IPPROTO_ICMPV6,
++		csum_partial(na, sizeof(*na)+na_olen, 0));
++
++	pip6->payload_len = htons(sizeof(*na)+na_olen);
++
++	skb_push(reply, sizeof(struct ipv6hdr));
++
++	reply->ip_summed = CHECKSUM_UNNECESSARY;
++
++	return reply;
++}
++
++static int neigh_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	const struct in6_addr *daddr;
++	const struct ipv6hdr *iphdr;
++	struct inet6_dev *in6_dev;
++	struct neighbour *n;
++	struct nd_msg *msg;
++
++	rcu_read_lock();
++	in6_dev = __in6_dev_get(dev);
++	if (!in6_dev)
++		goto out;
++
++	iphdr = ipv6_hdr(skb);
++	daddr = &iphdr->daddr;
++	msg = (struct nd_msg *)(iphdr + 1);
++
++	if (ipv6_addr_loopback(daddr) ||
++	    ipv6_addr_is_multicast(&msg->target))
++		goto out;
++
++	n = neigh_lookup(ipv6_stub->nd_tbl, &msg->target, dev);
++
++	if (n) {
++		struct vxlan_fdb *f;
++		struct sk_buff *reply;
++
++		if (!(n->nud_state & NUD_CONNECTED)) {
++			neigh_release(n);
++			goto out;
++		}
++
++		f = vxlan_find_mac(vxlan, n->ha, vni);
++		if (f && vxlan_addr_any(&(first_remote_rcu(f)->remote_ip))) {
++			/* bridge-local neighbor */
++			neigh_release(n);
++			goto out;
++		}
++
++		reply = vxlan_na_create(skb, n,
++					!!(f ? f->flags & NTF_ROUTER : 0));
++
++		neigh_release(n);
++
++		if (reply == NULL)
++			goto out;
++
++		if (netif_rx_ni(reply) == NET_RX_DROP)
++			dev->stats.rx_dropped++;
++
++	} else if (vxlan->cfg.flags & VXLAN_F_L3MISS) {
++		union vxlan_addr ipa = {
++			.sin6.sin6_addr = msg->target,
++			.sin6.sin6_family = AF_INET6,
++		};
++
++		vxlan_ip_miss(dev, &ipa);
++	}
++
++out:
++	rcu_read_unlock();
++	consume_skb(skb);
++	return NETDEV_TX_OK;
++}
++#endif
++
++static bool route_shortcircuit(struct net_device *dev, struct sk_buff *skb)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct neighbour *n;
++
++	if (is_multicast_ether_addr(eth_hdr(skb)->h_dest))
++		return false;
++
++	n = NULL;
++	switch (ntohs(eth_hdr(skb)->h_proto)) {
++	case ETH_P_IP:
++	{
++		struct iphdr *pip;
++
++		if (!pskb_may_pull(skb, sizeof(struct iphdr)))
++			return false;
++		pip = ip_hdr(skb);
++		n = neigh_lookup(&arp_tbl, &pip->daddr, dev);
++		if (!n && (vxlan->cfg.flags & VXLAN_F_L3MISS)) {
++			union vxlan_addr ipa = {
++				.sin.sin_addr.s_addr = pip->daddr,
++				.sin.sin_family = AF_INET,
++			};
++
++			vxlan_ip_miss(dev, &ipa);
++			return false;
++		}
++
++		break;
++	}
++#if IS_ENABLED(CONFIG_IPV6)
++	case ETH_P_IPV6:
++	{
++		struct ipv6hdr *pip6;
++
++		if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))
++			return false;
++		pip6 = ipv6_hdr(skb);
++		n = neigh_lookup(ipv6_stub->nd_tbl, &pip6->daddr, dev);
++		if (!n && (vxlan->cfg.flags & VXLAN_F_L3MISS)) {
++			union vxlan_addr ipa = {
++				.sin6.sin6_addr = pip6->daddr,
++				.sin6.sin6_family = AF_INET6,
++			};
++
++			vxlan_ip_miss(dev, &ipa);
++			return false;
++		}
++
++		break;
++	}
++#endif
++	default:
++		return false;
++	}
++
++	if (n) {
++		bool diff;
++
++		diff = !ether_addr_equal(eth_hdr(skb)->h_dest, n->ha);
++		if (diff) {
++			memcpy(eth_hdr(skb)->h_source, eth_hdr(skb)->h_dest,
++				dev->addr_len);
++			memcpy(eth_hdr(skb)->h_dest, n->ha, dev->addr_len);
++		}
++		neigh_release(n);
++		return diff;
++	}
++
++	return false;
++}
++
++static void vxlan_build_gbp_hdr(struct vxlanhdr *vxh, u32 vxflags,
++				struct vxlan_metadata *md)
++{
++	struct vxlanhdr_gbp *gbp;
++
++	if (!md->gbp)
++		return;
++
++	gbp = (struct vxlanhdr_gbp *)vxh;
++	vxh->vx_flags |= VXLAN_HF_GBP;
++
++	if (md->gbp & VXLAN_GBP_DONT_LEARN)
++		gbp->dont_learn = 1;
++
++	if (md->gbp & VXLAN_GBP_POLICY_APPLIED)
++		gbp->policy_applied = 1;
++
++	gbp->policy_id = htons(md->gbp & VXLAN_GBP_ID_MASK);
++}
++
++static int vxlan_build_gpe_hdr(struct vxlanhdr *vxh, u32 vxflags,
++			       __be16 protocol)
++{
++	struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)vxh;
++
++	gpe->np_applied = 1;
++	gpe->next_protocol = tun_p_from_eth_p(protocol);
++	if (!gpe->next_protocol)
++		return -EPFNOSUPPORT;
++	return 0;
++}
++
++static int vxlan_build_skb(struct sk_buff *skb, struct dst_entry *dst,
++			   int iphdr_len, __be32 vni,
++			   struct vxlan_metadata *md, u32 vxflags,
++			   bool udp_sum)
++{
++	struct vxlanhdr *vxh;
++	int min_headroom;
++	int err;
++	int type = udp_sum ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
++	__be16 inner_protocol = htons(ETH_P_TEB);
++
++	if ((vxflags & VXLAN_F_REMCSUM_TX) &&
++	    skb->ip_summed == CHECKSUM_PARTIAL) {
++		int csum_start = skb_checksum_start_offset(skb);
++
++		if (csum_start <= VXLAN_MAX_REMCSUM_START &&
++		    !(csum_start & VXLAN_RCO_SHIFT_MASK) &&
++		    (skb->csum_offset == offsetof(struct udphdr, check) ||
++		     skb->csum_offset == offsetof(struct tcphdr, check)))
++			type |= SKB_GSO_TUNNEL_REMCSUM;
++	}
++
++	min_headroom = LL_RESERVED_SPACE(dst->dev) + dst->header_len
++			+ VXLAN_HLEN + iphdr_len;
++
++	/* Need space for new headers (invalidates iph ptr) */
++	err = skb_cow_head(skb, min_headroom);
++	if (unlikely(err))
++		return err;
++
++	err = iptunnel_handle_offloads(skb, type);
++	if (err)
++		return err;
++
++	vxh = __skb_push(skb, sizeof(*vxh));
++	vxh->vx_flags = VXLAN_HF_VNI;
++	vxh->vx_vni = vxlan_vni_field(vni);
++
++	if (type & SKB_GSO_TUNNEL_REMCSUM) {
++		unsigned int start;
++
++		start = skb_checksum_start_offset(skb) - sizeof(struct vxlanhdr);
++		vxh->vx_vni |= vxlan_compute_rco(start, skb->csum_offset);
++		vxh->vx_flags |= VXLAN_HF_RCO;
++
++		if (!skb_is_gso(skb)) {
++			skb->ip_summed = CHECKSUM_NONE;
++			skb->encapsulation = 0;
++		}
++	}
++
++	if (vxflags & VXLAN_F_GBP)
++		vxlan_build_gbp_hdr(vxh, vxflags, md);
++	if (vxflags & VXLAN_F_GPE) {
++		err = vxlan_build_gpe_hdr(vxh, vxflags, skb->protocol);
++		if (err < 0)
++			return err;
++		inner_protocol = skb->protocol;
++	}
++
++	skb_set_inner_protocol(skb, inner_protocol);
++	return 0;
++}
++
++static struct rtable *vxlan_get_route(struct vxlan_dev *vxlan, struct net_device *dev,
++				      struct vxlan_sock *sock4,
++				      struct sk_buff *skb, int oif, u8 tos,
++				      __be32 daddr, __be32 *saddr, __be16 dport, __be16 sport,
++				      struct dst_cache *dst_cache,
++				      const struct ip_tunnel_info *info)
++{
++	bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
++	struct rtable *rt = NULL;
++	struct flowi4 fl4;
++
++	if (!sock4)
++		return ERR_PTR(-EIO);
++
++	if (tos && !info)
++		use_cache = false;
++	if (use_cache) {
++		rt = dst_cache_get_ip4(dst_cache, saddr);
++		if (rt)
++			return rt;
++	}
++
++	memset(&fl4, 0, sizeof(fl4));
++	fl4.flowi4_oif = oif;
++	fl4.flowi4_tos = RT_TOS(tos);
++	fl4.flowi4_mark = skb->mark;
++	fl4.flowi4_proto = IPPROTO_UDP;
++	fl4.daddr = daddr;
++	fl4.saddr = *saddr;
++	fl4.fl4_dport = dport;
++	fl4.fl4_sport = sport;
++
++	rt = ip_route_output_key(vxlan->net, &fl4);
++	if (!IS_ERR(rt)) {
++		if (rt->dst.dev == dev) {
++			netdev_dbg(dev, "circular route to %pI4\n", &daddr);
++			ip_rt_put(rt);
++			return ERR_PTR(-ELOOP);
++		}
++
++		*saddr = fl4.saddr;
++		if (use_cache)
++			dst_cache_set_ip4(dst_cache, &rt->dst, fl4.saddr);
++	} else {
++		netdev_dbg(dev, "no route to %pI4\n", &daddr);
++		return ERR_PTR(-ENETUNREACH);
++	}
++	return rt;
++}
++
++#if IS_ENABLED(CONFIG_IPV6)
++static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan,
++					  struct net_device *dev,
++					  struct vxlan_sock *sock6,
++					  struct sk_buff *skb, int oif, u8 tos,
++					  __be32 label,
++					  const struct in6_addr *daddr,
++					  struct in6_addr *saddr,
++					  __be16 dport, __be16 sport,
++					  struct dst_cache *dst_cache,
++					  const struct ip_tunnel_info *info)
++{
++	bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
++	struct dst_entry *ndst;
++	struct flowi6 fl6;
++
++	if (!sock6)
++		return ERR_PTR(-EIO);
++
++	if (tos && !info)
++		use_cache = false;
++	if (use_cache) {
++		ndst = dst_cache_get_ip6(dst_cache, saddr);
++		if (ndst)
++			return ndst;
++	}
++
++	memset(&fl6, 0, sizeof(fl6));
++	fl6.flowi6_oif = oif;
++	fl6.daddr = *daddr;
++	fl6.saddr = *saddr;
++	fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tos), label);
++	fl6.flowi6_mark = skb->mark;
++	fl6.flowi6_proto = IPPROTO_UDP;
++	fl6.fl6_dport = dport;
++	fl6.fl6_sport = sport;
++
++	ndst = ipv6_stub->ipv6_dst_lookup_flow(vxlan->net, sock6->sock->sk,
++					       &fl6, NULL);
++	if (unlikely(IS_ERR(ndst))) {
++		netdev_dbg(dev, "no route to %pI6\n", daddr);
++		return ERR_PTR(-ENETUNREACH);
++	}
++
++	if (unlikely(ndst->dev == dev)) {
++		netdev_dbg(dev, "circular route to %pI6\n", daddr);
++		dst_release(ndst);
++		return ERR_PTR(-ELOOP);
++	}
++
++	*saddr = fl6.saddr;
++	if (use_cache)
++		dst_cache_set_ip6(dst_cache, ndst, saddr);
++	return ndst;
++}
++#endif
++
++/* Bypass encapsulation if the destination is local */
++static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan,
++			       struct vxlan_dev *dst_vxlan, __be32 vni,
++			       bool snoop)
++{
++	struct pcpu_sw_netstats *tx_stats, *rx_stats;
++	union vxlan_addr loopback;
++	union vxlan_addr *remote_ip = &dst_vxlan->default_dst.remote_ip;
++	struct net_device *dev;
++	int len = skb->len;
++
++	tx_stats = this_cpu_ptr(src_vxlan->dev->tstats);
++	rx_stats = this_cpu_ptr(dst_vxlan->dev->tstats);
++	skb->pkt_type = PACKET_HOST;
++	skb->encapsulation = 0;
++	skb->dev = dst_vxlan->dev;
++	__skb_pull(skb, skb_network_offset(skb));
++
++	if (remote_ip->sa.sa_family == AF_INET) {
++		loopback.sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
++		loopback.sa.sa_family =  AF_INET;
++#if IS_ENABLED(CONFIG_IPV6)
++	} else {
++		loopback.sin6.sin6_addr = in6addr_loopback;
++		loopback.sa.sa_family =  AF_INET6;
++#endif
++	}
++
++	rcu_read_lock();
++	dev = skb->dev;
++	if (unlikely(!(dev->flags & IFF_UP))) {
++		kfree_skb(skb);
++		goto drop;
++	}
++
++	if ((dst_vxlan->cfg.flags & VXLAN_F_LEARN) && snoop)
++		vxlan_snoop(dev, &loopback, eth_hdr(skb)->h_source, 0, vni);
++
++	u64_stats_update_begin(&tx_stats->syncp);
++	tx_stats->tx_packets++;
++	tx_stats->tx_bytes += len;
++	u64_stats_update_end(&tx_stats->syncp);
++
++	if (netif_rx(skb) == NET_RX_SUCCESS) {
++		u64_stats_update_begin(&rx_stats->syncp);
++		rx_stats->rx_packets++;
++		rx_stats->rx_bytes += len;
++		u64_stats_update_end(&rx_stats->syncp);
++	} else {
++drop:
++		dev->stats.rx_dropped++;
++	}
++	rcu_read_unlock();
++}
++
++static int encap_bypass_if_local(struct sk_buff *skb, struct net_device *dev,
++				 struct vxlan_dev *vxlan,
++				 union vxlan_addr *daddr,
++				 __be16 dst_port, int dst_ifindex, __be32 vni,
++				 struct dst_entry *dst,
++				 u32 rt_flags)
++{
++#if IS_ENABLED(CONFIG_IPV6)
++	/* IPv6 rt-flags are checked against RTF_LOCAL, but the value of
++	 * RTF_LOCAL is equal to RTCF_LOCAL. So to keep code simple
++	 * we can use RTCF_LOCAL which works for ipv4 and ipv6 route entry.
++	 */
++	BUILD_BUG_ON(RTCF_LOCAL != RTF_LOCAL);
++#endif
++	/* Bypass encapsulation if the destination is local */
++	if (rt_flags & RTCF_LOCAL &&
++	    !(rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))) {
++		struct vxlan_dev *dst_vxlan;
++
++		dst_release(dst);
++		dst_vxlan = vxlan_find_vni(vxlan->net, dst_ifindex, vni,
++					   daddr->sa.sa_family, dst_port,
++					   vxlan->cfg.flags);
++		if (!dst_vxlan) {
++			dev->stats.tx_errors++;
++			kfree_skb(skb);
++
++			return -ENOENT;
++		}
++		vxlan_encap_bypass(skb, vxlan, dst_vxlan, vni, true);
++		return 1;
++	}
++
++	return 0;
++}
++
++static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
++			   __be32 default_vni, struct vxlan_rdst *rdst,
++			   bool did_rsc)
++{
++	struct dst_cache *dst_cache;
++	struct ip_tunnel_info *info;
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	const struct iphdr *old_iph = ip_hdr(skb);
++	union vxlan_addr *dst;
++	union vxlan_addr remote_ip, local_ip;
++	struct vxlan_metadata _md;
++	struct vxlan_metadata *md = &_md;
++	__be16 src_port = 0, dst_port;
++	struct dst_entry *ndst = NULL;
++	__be32 vni, label;
++	__u8 tos, ttl;
++	int ifindex;
++	int err;
++	u32 flags = vxlan->cfg.flags;
++	bool udp_sum = false;
++	bool xnet = !net_eq(vxlan->net, dev_net(vxlan->dev));
++
++	info = skb_tunnel_info(skb);
++
++	if (rdst) {
++		dst = &rdst->remote_ip;
++		if (vxlan_addr_any(dst)) {
++			if (did_rsc) {
++				/* short-circuited back to local bridge */
++				vxlan_encap_bypass(skb, vxlan, vxlan,
++						   default_vni, true);
++				return;
++			}
++			goto drop;
++		}
++
++		dst_port = rdst->remote_port ? rdst->remote_port : vxlan->cfg.dst_port;
++		vni = (rdst->remote_vni) ? : default_vni;
++		ifindex = rdst->remote_ifindex;
++		local_ip = vxlan->cfg.saddr;
++		dst_cache = &rdst->dst_cache;
++		md->gbp = skb->mark;
++		if (flags & VXLAN_F_TTL_INHERIT) {
++			ttl = ip_tunnel_get_ttl(old_iph, skb);
++		} else {
++			ttl = vxlan->cfg.ttl;
++			if (!ttl && vxlan_addr_multicast(dst))
++				ttl = 1;
++		}
++
++		tos = vxlan->cfg.tos;
++		if (tos == 1)
++			tos = ip_tunnel_get_dsfield(old_iph, skb);
++
++		if (dst->sa.sa_family == AF_INET)
++			udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM_TX);
++		else
++			udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM6_TX);
++		label = vxlan->cfg.label;
++	} else {
++		if (!info) {
++			WARN_ONCE(1, "%s: Missing encapsulation instructions\n",
++				  dev->name);
++			goto drop;
++		}
++		remote_ip.sa.sa_family = ip_tunnel_info_af(info);
++		if (remote_ip.sa.sa_family == AF_INET) {
++			remote_ip.sin.sin_addr.s_addr = info->key.u.ipv4.dst;
++			local_ip.sin.sin_addr.s_addr = info->key.u.ipv4.src;
++		} else {
++			remote_ip.sin6.sin6_addr = info->key.u.ipv6.dst;
++			local_ip.sin6.sin6_addr = info->key.u.ipv6.src;
++		}
++		dst = &remote_ip;
++		dst_port = info->key.tp_dst ? : vxlan->cfg.dst_port;
++		vni = tunnel_id_to_key32(info->key.tun_id);
++		ifindex = 0;
++		dst_cache = &info->dst_cache;
++		if (info->key.tun_flags & TUNNEL_VXLAN_OPT) {
++			if (info->options_len < sizeof(*md))
++				goto drop;
++			md = ip_tunnel_info_opts(info);
++		}
++		ttl = info->key.ttl;
++		tos = info->key.tos;
++		label = info->key.label;
++		udp_sum = !!(info->key.tun_flags & TUNNEL_CSUM);
++	}
++	src_port = udp_flow_src_port(dev_net(dev), skb, vxlan->cfg.port_min,
++				     vxlan->cfg.port_max, true);
++
++	rcu_read_lock();
++	if (dst->sa.sa_family == AF_INET) {
++		struct vxlan_sock *sock4 = rcu_dereference(vxlan->vn4_sock);
++		struct rtable *rt;
++		__be16 df = 0;
++
++		if (!ifindex)
++			ifindex = sock4->sock->sk->sk_bound_dev_if;
++
++		rt = vxlan_get_route(vxlan, dev, sock4, skb, ifindex, tos,
++				     dst->sin.sin_addr.s_addr,
++				     &local_ip.sin.sin_addr.s_addr,
++				     dst_port, src_port,
++				     dst_cache, info);
++		if (IS_ERR(rt)) {
++			err = PTR_ERR(rt);
++			goto tx_error;
++		}
++
++		if (!info) {
++			/* Bypass encapsulation if the destination is local */
++			err = encap_bypass_if_local(skb, dev, vxlan, dst,
++						    dst_port, ifindex, vni,
++						    &rt->dst, rt->rt_flags);
++			if (err)
++				goto out_unlock;
++
++			if (vxlan->cfg.df == VXLAN_DF_SET) {
++				df = htons(IP_DF);
++			} else if (vxlan->cfg.df == VXLAN_DF_INHERIT) {
++				struct ethhdr *eth = eth_hdr(skb);
++
++				if (ntohs(eth->h_proto) == ETH_P_IPV6 ||
++				    (ntohs(eth->h_proto) == ETH_P_IP &&
++				     old_iph->frag_off & htons(IP_DF)))
++					df = htons(IP_DF);
++			}
++		} else if (info->key.tun_flags & TUNNEL_DONT_FRAGMENT) {
++			df = htons(IP_DF);
++		}
++
++		ndst = &rt->dst;
++		err = skb_tunnel_check_pmtu(skb, ndst, vxlan_headroom(flags & VXLAN_F_GPE),
++					    netif_is_any_bridge_port(dev));
++		if (err < 0) {
++			goto tx_error;
++		} else if (err) {
++			if (info) {
++				struct ip_tunnel_info *unclone;
++				struct in_addr src, dst;
++
++				unclone = skb_tunnel_info_unclone(skb);
++				if (unlikely(!unclone))
++					goto tx_error;
++
++				src = remote_ip.sin.sin_addr;
++				dst = local_ip.sin.sin_addr;
++				unclone->key.u.ipv4.src = src.s_addr;
++				unclone->key.u.ipv4.dst = dst.s_addr;
++			}
++			vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
++			dst_release(ndst);
++			goto out_unlock;
++		}
++
++		tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
++		ttl = ttl ? : ip4_dst_hoplimit(&rt->dst);
++		err = vxlan_build_skb(skb, ndst, sizeof(struct iphdr),
++				      vni, md, flags, udp_sum);
++		if (err < 0)
++			goto tx_error;
++
++		udp_tunnel_xmit_skb(rt, sock4->sock->sk, skb, local_ip.sin.sin_addr.s_addr,
++				    dst->sin.sin_addr.s_addr, tos, ttl, df,
++				    src_port, dst_port, xnet, !udp_sum);
++#if IS_ENABLED(CONFIG_IPV6)
++	} else {
++		struct vxlan_sock *sock6 = rcu_dereference(vxlan->vn6_sock);
++
++		if (!ifindex)
++			ifindex = sock6->sock->sk->sk_bound_dev_if;
++
++		ndst = vxlan6_get_route(vxlan, dev, sock6, skb, ifindex, tos,
++					label, &dst->sin6.sin6_addr,
++					&local_ip.sin6.sin6_addr,
++					dst_port, src_port,
++					dst_cache, info);
++		if (IS_ERR(ndst)) {
++			err = PTR_ERR(ndst);
++			ndst = NULL;
++			goto tx_error;
++		}
++
++		if (!info) {
++			u32 rt6i_flags = ((struct rt6_info *)ndst)->rt6i_flags;
++
++			err = encap_bypass_if_local(skb, dev, vxlan, dst,
++						    dst_port, ifindex, vni,
++						    ndst, rt6i_flags);
++			if (err)
++				goto out_unlock;
++		}
++
++		err = skb_tunnel_check_pmtu(skb, ndst,
++					    vxlan_headroom((flags & VXLAN_F_GPE) | VXLAN_F_IPV6),
++					    netif_is_any_bridge_port(dev));
++		if (err < 0) {
++			goto tx_error;
++		} else if (err) {
++			if (info) {
++				struct ip_tunnel_info *unclone;
++				struct in6_addr src, dst;
++
++				unclone = skb_tunnel_info_unclone(skb);
++				if (unlikely(!unclone))
++					goto tx_error;
++
++				src = remote_ip.sin6.sin6_addr;
++				dst = local_ip.sin6.sin6_addr;
++				unclone->key.u.ipv6.src = src;
++				unclone->key.u.ipv6.dst = dst;
++			}
++
++			vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
++			dst_release(ndst);
++			goto out_unlock;
++		}
++
++		tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
++		ttl = ttl ? : ip6_dst_hoplimit(ndst);
++		skb_scrub_packet(skb, xnet);
++		err = vxlan_build_skb(skb, ndst, sizeof(struct ipv6hdr),
++				      vni, md, flags, udp_sum);
++		if (err < 0)
++			goto tx_error;
++
++		udp_tunnel6_xmit_skb(ndst, sock6->sock->sk, skb, dev,
++				     &local_ip.sin6.sin6_addr,
++				     &dst->sin6.sin6_addr, tos, ttl,
++				     label, src_port, dst_port, !udp_sum);
++#endif
++	}
++out_unlock:
++	rcu_read_unlock();
++	return;
++
++drop:
++	dev->stats.tx_dropped++;
++	dev_kfree_skb(skb);
++	return;
++
++tx_error:
++	rcu_read_unlock();
++	if (err == -ELOOP)
++		dev->stats.collisions++;
++	else if (err == -ENETUNREACH)
++		dev->stats.tx_carrier_errors++;
++	dst_release(ndst);
++	dev->stats.tx_errors++;
++	kfree_skb(skb);
++}
++
++static void vxlan_xmit_nh(struct sk_buff *skb, struct net_device *dev,
++			  struct vxlan_fdb *f, __be32 vni, bool did_rsc)
++{
++	struct vxlan_rdst nh_rdst;
++	struct nexthop *nh;
++	bool do_xmit;
++	u32 hash;
++
++	memset(&nh_rdst, 0, sizeof(struct vxlan_rdst));
++	hash = skb_get_hash(skb);
++
++	rcu_read_lock();
++	nh = rcu_dereference(f->nh);
++	if (!nh) {
++		rcu_read_unlock();
++		goto drop;
++	}
++	do_xmit = vxlan_fdb_nh_path_select(nh, hash, &nh_rdst);
++	rcu_read_unlock();
++
++	if (likely(do_xmit))
++		vxlan_xmit_one(skb, dev, vni, &nh_rdst, did_rsc);
++	else
++		goto drop;
++
++	return;
++
++drop:
++	dev->stats.tx_dropped++;
++	dev_kfree_skb(skb);
++}
++
++/* Transmit local packets over Vxlan
++ *
++ * Outer IP header inherits ECN and DF from inner header.
++ * Outer UDP destination is the VXLAN assigned port.
++ *           source port is based on hash of flow
++ */
++static netdev_tx_t vxlan_xmit(struct sk_buff *skb, struct net_device *dev)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct vxlan_rdst *rdst, *fdst = NULL;
++	const struct ip_tunnel_info *info;
++	bool did_rsc = false;
++	struct vxlan_fdb *f;
++	struct ethhdr *eth;
++	__be32 vni = 0;
++
++	info = skb_tunnel_info(skb);
++
++	skb_reset_mac_header(skb);
++
++	if (vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA) {
++		if (info && info->mode & IP_TUNNEL_INFO_BRIDGE &&
++		    info->mode & IP_TUNNEL_INFO_TX) {
++			vni = tunnel_id_to_key32(info->key.tun_id);
++		} else {
++			if (info && info->mode & IP_TUNNEL_INFO_TX)
++				vxlan_xmit_one(skb, dev, vni, NULL, false);
++			else
++				kfree_skb(skb);
++			return NETDEV_TX_OK;
++		}
++	}
++
++	if (vxlan->cfg.flags & VXLAN_F_PROXY) {
++		eth = eth_hdr(skb);
++		if (ntohs(eth->h_proto) == ETH_P_ARP)
++			return arp_reduce(dev, skb, vni);
++#if IS_ENABLED(CONFIG_IPV6)
++		else if (ntohs(eth->h_proto) == ETH_P_IPV6 &&
++			 pskb_may_pull(skb, sizeof(struct ipv6hdr) +
++					    sizeof(struct nd_msg)) &&
++			 ipv6_hdr(skb)->nexthdr == IPPROTO_ICMPV6) {
++			struct nd_msg *m = (struct nd_msg *)(ipv6_hdr(skb) + 1);
++
++			if (m->icmph.icmp6_code == 0 &&
++			    m->icmph.icmp6_type == NDISC_NEIGHBOUR_SOLICITATION)
++				return neigh_reduce(dev, skb, vni);
++		}
++#endif
++	}
++
++	eth = eth_hdr(skb);
++	f = vxlan_find_mac(vxlan, eth->h_dest, vni);
++	did_rsc = false;
++
++	if (f && (f->flags & NTF_ROUTER) && (vxlan->cfg.flags & VXLAN_F_RSC) &&
++	    (ntohs(eth->h_proto) == ETH_P_IP ||
++	     ntohs(eth->h_proto) == ETH_P_IPV6)) {
++		did_rsc = route_shortcircuit(dev, skb);
++		if (did_rsc)
++			f = vxlan_find_mac(vxlan, eth->h_dest, vni);
++	}
++
++	if (f == NULL) {
++		f = vxlan_find_mac(vxlan, all_zeros_mac, vni);
++		if (f == NULL) {
++			if ((vxlan->cfg.flags & VXLAN_F_L2MISS) &&
++			    !is_multicast_ether_addr(eth->h_dest))
++				vxlan_fdb_miss(vxlan, eth->h_dest);
++
++			dev->stats.tx_dropped++;
++			kfree_skb(skb);
++			return NETDEV_TX_OK;
++		}
++	}
++
++	if (rcu_access_pointer(f->nh)) {
++		vxlan_xmit_nh(skb, dev, f,
++			      (vni ? : vxlan->default_dst.remote_vni), did_rsc);
++	} else {
++		list_for_each_entry_rcu(rdst, &f->remotes, list) {
++			struct sk_buff *skb1;
++
++			if (!fdst) {
++				fdst = rdst;
++				continue;
++			}
++			skb1 = skb_clone(skb, GFP_ATOMIC);
++			if (skb1)
++				vxlan_xmit_one(skb1, dev, vni, rdst, did_rsc);
++		}
++		if (fdst)
++			vxlan_xmit_one(skb, dev, vni, fdst, did_rsc);
++		else
++			kfree_skb(skb);
++	}
++
++	return NETDEV_TX_OK;
++}
++
++/* Walk the forwarding table and purge stale entries */
++static void vxlan_cleanup(struct timer_list *t)
++{
++	struct vxlan_dev *vxlan = from_timer(vxlan, t, age_timer);
++	unsigned long next_timer = jiffies + FDB_AGE_INTERVAL;
++	unsigned int h;
++
++	if (!netif_running(vxlan->dev))
++		return;
++
++	for (h = 0; h < FDB_HASH_SIZE; ++h) {
++		struct hlist_node *p, *n;
++
++		spin_lock(&vxlan->hash_lock[h]);
++		hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
++			struct vxlan_fdb *f
++				= container_of(p, struct vxlan_fdb, hlist);
++			unsigned long timeout;
++
++			if (f->state & (NUD_PERMANENT | NUD_NOARP))
++				continue;
++
++			if (f->flags & NTF_EXT_LEARNED)
++				continue;
++
++			timeout = f->used + vxlan->cfg.age_interval * HZ;
++			if (time_before_eq(timeout, jiffies)) {
++				netdev_dbg(vxlan->dev,
++					   "garbage collect %pM\n",
++					   f->eth_addr);
++				f->state = NUD_STALE;
++				vxlan_fdb_destroy(vxlan, f, true, true);
++			} else if (time_before(timeout, next_timer))
++				next_timer = timeout;
++		}
++		spin_unlock(&vxlan->hash_lock[h]);
++	}
++
++	mod_timer(&vxlan->age_timer, next_timer);
++}
++
++static void vxlan_vs_del_dev(struct vxlan_dev *vxlan)
++{
++	struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
++
++	spin_lock(&vn->sock_lock);
++	hlist_del_init_rcu(&vxlan->hlist4.hlist);
++#if IS_ENABLED(CONFIG_IPV6)
++	hlist_del_init_rcu(&vxlan->hlist6.hlist);
++#endif
++	spin_unlock(&vn->sock_lock);
++}
++
++static void vxlan_vs_add_dev(struct vxlan_sock *vs, struct vxlan_dev *vxlan,
++			     struct vxlan_dev_node *node)
++{
++	struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
++	__be32 vni = vxlan->default_dst.remote_vni;
++
++	node->vxlan = vxlan;
++	spin_lock(&vn->sock_lock);
++	hlist_add_head_rcu(&node->hlist, vni_head(vs, vni));
++	spin_unlock(&vn->sock_lock);
++}
++
++/* Setup stats when device is created */
++static int vxlan_init(struct net_device *dev)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	int err;
++
++	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
++	if (!dev->tstats)
++		return -ENOMEM;
++
++	err = gro_cells_init(&vxlan->gro_cells, dev);
++	if (err) {
++		free_percpu(dev->tstats);
++		return err;
++	}
++
++	return 0;
++}
++
++static void vxlan_fdb_delete_default(struct vxlan_dev *vxlan, __be32 vni)
++{
++	struct vxlan_fdb *f;
++	u32 hash_index = fdb_head_index(vxlan, all_zeros_mac, vni);
++
++	spin_lock_bh(&vxlan->hash_lock[hash_index]);
++	f = __vxlan_find_mac(vxlan, all_zeros_mac, vni);
++	if (f)
++		vxlan_fdb_destroy(vxlan, f, true, true);
++	spin_unlock_bh(&vxlan->hash_lock[hash_index]);
++}
++
++static void vxlan_uninit(struct net_device *dev)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++
++	gro_cells_destroy(&vxlan->gro_cells);
++
++	vxlan_fdb_delete_default(vxlan, vxlan->cfg.vni);
++
++	free_percpu(dev->tstats);
++}
++
++/* Start ageing timer and join group when device is brought up */
++static int vxlan_open(struct net_device *dev)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	int ret;
++
++	ret = vxlan_sock_add(vxlan);
++	if (ret < 0)
++		return ret;
++
++	if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip)) {
++		ret = vxlan_igmp_join(vxlan);
++		if (ret == -EADDRINUSE)
++			ret = 0;
++		if (ret) {
++			vxlan_sock_release(vxlan);
++			return ret;
++		}
++	}
++
++	if (vxlan->cfg.age_interval)
++		mod_timer(&vxlan->age_timer, jiffies + FDB_AGE_INTERVAL);
++
++	return ret;
++}
++
++/* Purge the forwarding table */
++static void vxlan_flush(struct vxlan_dev *vxlan, bool do_all)
++{
++	unsigned int h;
++
++	for (h = 0; h < FDB_HASH_SIZE; ++h) {
++		struct hlist_node *p, *n;
++
++		spin_lock_bh(&vxlan->hash_lock[h]);
++		hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
++			struct vxlan_fdb *f
++				= container_of(p, struct vxlan_fdb, hlist);
++			if (!do_all && (f->state & (NUD_PERMANENT | NUD_NOARP)))
++				continue;
++			/* the all_zeros_mac entry is deleted at vxlan_uninit */
++			if (is_zero_ether_addr(f->eth_addr) &&
++			    f->vni == vxlan->cfg.vni)
++				continue;
++			vxlan_fdb_destroy(vxlan, f, true, true);
++		}
++		spin_unlock_bh(&vxlan->hash_lock[h]);
++	}
++}
++
++/* Cleanup timer and forwarding table on shutdown */
++static int vxlan_stop(struct net_device *dev)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
++	int ret = 0;
++
++	if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip) &&
++	    !vxlan_group_used(vn, vxlan))
++		ret = vxlan_igmp_leave(vxlan);
++
++	del_timer_sync(&vxlan->age_timer);
++
++	vxlan_flush(vxlan, false);
++	vxlan_sock_release(vxlan);
++
++	return ret;
++}
++
++/* Stub, nothing needs to be done. */
++static void vxlan_set_multicast_list(struct net_device *dev)
++{
++}
++
++static int vxlan_change_mtu(struct net_device *dev, int new_mtu)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct vxlan_rdst *dst = &vxlan->default_dst;
++	struct net_device *lowerdev = __dev_get_by_index(vxlan->net,
++							 dst->remote_ifindex);
++
++	/* This check is different than dev->max_mtu, because it looks at
++	 * the lowerdev->mtu, rather than the static dev->max_mtu
++	 */
++	if (lowerdev) {
++		int max_mtu = lowerdev->mtu - vxlan_headroom(vxlan->cfg.flags);
++		if (new_mtu > max_mtu)
++			return -EINVAL;
++	}
++
++	dev->mtu = new_mtu;
++	return 0;
++}
++
++static int vxlan_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct ip_tunnel_info *info = skb_tunnel_info(skb);
++	__be16 sport, dport;
++
++	sport = udp_flow_src_port(dev_net(dev), skb, vxlan->cfg.port_min,
++				  vxlan->cfg.port_max, true);
++	dport = info->key.tp_dst ? : vxlan->cfg.dst_port;
++
++	if (ip_tunnel_info_af(info) == AF_INET) {
++		struct vxlan_sock *sock4 = rcu_dereference(vxlan->vn4_sock);
++		struct rtable *rt;
++
++		rt = vxlan_get_route(vxlan, dev, sock4, skb, 0, info->key.tos,
++				     info->key.u.ipv4.dst,
++				     &info->key.u.ipv4.src, dport, sport,
++				     &info->dst_cache, info);
++		if (IS_ERR(rt))
++			return PTR_ERR(rt);
++		ip_rt_put(rt);
++	} else {
++#if IS_ENABLED(CONFIG_IPV6)
++		struct vxlan_sock *sock6 = rcu_dereference(vxlan->vn6_sock);
++		struct dst_entry *ndst;
++
++		ndst = vxlan6_get_route(vxlan, dev, sock6, skb, 0, info->key.tos,
++					info->key.label, &info->key.u.ipv6.dst,
++					&info->key.u.ipv6.src, dport, sport,
++					&info->dst_cache, info);
++		if (IS_ERR(ndst))
++			return PTR_ERR(ndst);
++		dst_release(ndst);
++#else /* !CONFIG_IPV6 */
++		return -EPFNOSUPPORT;
++#endif
++	}
++	info->key.tp_src = sport;
++	info->key.tp_dst = dport;
++	return 0;
++}
++
++static const struct net_device_ops vxlan_netdev_ether_ops = {
++	.ndo_init		= vxlan_init,
++	.ndo_uninit		= vxlan_uninit,
++	.ndo_open		= vxlan_open,
++	.ndo_stop		= vxlan_stop,
++	.ndo_start_xmit		= vxlan_xmit,
++	.ndo_get_stats64	= ip_tunnel_get_stats64,
++	.ndo_set_rx_mode	= vxlan_set_multicast_list,
++	.ndo_change_mtu		= vxlan_change_mtu,
++	.ndo_validate_addr	= eth_validate_addr,
++	.ndo_set_mac_address	= eth_mac_addr,
++	.ndo_fdb_add		= vxlan_fdb_add,
++	.ndo_fdb_del		= vxlan_fdb_delete,
++	.ndo_fdb_dump		= vxlan_fdb_dump,
++	.ndo_fdb_get		= vxlan_fdb_get,
++	.ndo_fill_metadata_dst	= vxlan_fill_metadata_dst,
++	.ndo_change_proto_down  = dev_change_proto_down_generic,
++};
++
++static const struct net_device_ops vxlan_netdev_raw_ops = {
++	.ndo_init		= vxlan_init,
++	.ndo_uninit		= vxlan_uninit,
++	.ndo_open		= vxlan_open,
++	.ndo_stop		= vxlan_stop,
++	.ndo_start_xmit		= vxlan_xmit,
++	.ndo_get_stats64	= ip_tunnel_get_stats64,
++	.ndo_change_mtu		= vxlan_change_mtu,
++	.ndo_fill_metadata_dst	= vxlan_fill_metadata_dst,
++};
++
++/* Info for udev, that this is a virtual tunnel endpoint */
++static struct device_type vxlan_type = {
++	.name = "vxlan",
++};
++
++/* Calls the ndo_udp_tunnel_add of the caller in order to
++ * supply the listening VXLAN udp ports. Callers are expected
++ * to implement the ndo_udp_tunnel_add.
++ */
++static void vxlan_offload_rx_ports(struct net_device *dev, bool push)
++{
++	struct vxlan_sock *vs;
++	struct net *net = dev_net(dev);
++	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
++	unsigned int i;
++
++	spin_lock(&vn->sock_lock);
++	for (i = 0; i < PORT_HASH_SIZE; ++i) {
++		hlist_for_each_entry_rcu(vs, &vn->sock_list[i], hlist) {
++			unsigned short type;
++
++			if (vs->flags & VXLAN_F_GPE)
++				type = UDP_TUNNEL_TYPE_VXLAN_GPE;
++			else
++				type = UDP_TUNNEL_TYPE_VXLAN;
++
++			if (push)
++				udp_tunnel_push_rx_port(dev, vs->sock, type);
++			else
++				udp_tunnel_drop_rx_port(dev, vs->sock, type);
++		}
++	}
++	spin_unlock(&vn->sock_lock);
++}
++
++/* Initialize the device structure. */
++static void vxlan_setup(struct net_device *dev)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	unsigned int h;
++
++	eth_hw_addr_random(dev);
++	ether_setup(dev);
++
++	dev->needs_free_netdev = true;
++	SET_NETDEV_DEVTYPE(dev, &vxlan_type);
++
++	dev->features	|= NETIF_F_LLTX;
++	dev->features	|= NETIF_F_SG | NETIF_F_HW_CSUM;
++	dev->features   |= NETIF_F_RXCSUM;
++	dev->features   |= NETIF_F_GSO_SOFTWARE;
++
++	dev->vlan_features = dev->features;
++	dev->hw_features |= NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
++	dev->hw_features |= NETIF_F_GSO_SOFTWARE;
++	netif_keep_dst(dev);
++	dev->priv_flags |= IFF_NO_QUEUE;
++
++	/* MTU range: 68 - 65535 */
++	dev->min_mtu = ETH_MIN_MTU;
++	dev->max_mtu = ETH_MAX_MTU;
++
++	INIT_LIST_HEAD(&vxlan->next);
++
++	timer_setup(&vxlan->age_timer, vxlan_cleanup, TIMER_DEFERRABLE);
++
++	vxlan->dev = dev;
++
++	for (h = 0; h < FDB_HASH_SIZE; ++h) {
++		spin_lock_init(&vxlan->hash_lock[h]);
++		INIT_HLIST_HEAD(&vxlan->fdb_head[h]);
++	}
++}
++
++static void vxlan_ether_setup(struct net_device *dev)
++{
++	dev->priv_flags &= ~IFF_TX_SKB_SHARING;
++	dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
++	dev->netdev_ops = &vxlan_netdev_ether_ops;
++}
++
++static void vxlan_raw_setup(struct net_device *dev)
++{
++	dev->header_ops = NULL;
++	dev->type = ARPHRD_NONE;
++	dev->hard_header_len = 0;
++	dev->addr_len = 0;
++	dev->flags = IFF_POINTOPOINT | IFF_NOARP | IFF_MULTICAST;
++	dev->netdev_ops = &vxlan_netdev_raw_ops;
++}
++
++static const struct nla_policy vxlan_policy[IFLA_VXLAN_MAX + 1] = {
++	[IFLA_VXLAN_ID]		= { .type = NLA_U32 },
++	[IFLA_VXLAN_GROUP]	= { .len = sizeof_field(struct iphdr, daddr) },
++	[IFLA_VXLAN_GROUP6]	= { .len = sizeof(struct in6_addr) },
++	[IFLA_VXLAN_LINK]	= { .type = NLA_U32 },
++	[IFLA_VXLAN_LOCAL]	= { .len = sizeof_field(struct iphdr, saddr) },
++	[IFLA_VXLAN_LOCAL6]	= { .len = sizeof(struct in6_addr) },
++	[IFLA_VXLAN_TOS]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_TTL]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_LABEL]	= { .type = NLA_U32 },
++	[IFLA_VXLAN_LEARNING]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_AGEING]	= { .type = NLA_U32 },
++	[IFLA_VXLAN_LIMIT]	= { .type = NLA_U32 },
++	[IFLA_VXLAN_PORT_RANGE] = { .len  = sizeof(struct ifla_vxlan_port_range) },
++	[IFLA_VXLAN_PROXY]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_RSC]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_L2MISS]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_L3MISS]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_COLLECT_METADATA]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_PORT]	= { .type = NLA_U16 },
++	[IFLA_VXLAN_UDP_CSUM]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_UDP_ZERO_CSUM6_TX]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_UDP_ZERO_CSUM6_RX]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_REMCSUM_TX]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_REMCSUM_RX]	= { .type = NLA_U8 },
++	[IFLA_VXLAN_GBP]	= { .type = NLA_FLAG, },
++	[IFLA_VXLAN_GPE]	= { .type = NLA_FLAG, },
++	[IFLA_VXLAN_REMCSUM_NOPARTIAL]	= { .type = NLA_FLAG },
++	[IFLA_VXLAN_TTL_INHERIT]	= { .type = NLA_FLAG },
++	[IFLA_VXLAN_DF]		= { .type = NLA_U8 },
++};
++
++static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[],
++			  struct netlink_ext_ack *extack)
++{
++	if (tb[IFLA_ADDRESS]) {
++		if (nla_len(tb[IFLA_ADDRESS]) != ETH_ALEN) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_ADDRESS],
++					    "Provided link layer address is not Ethernet");
++			return -EINVAL;
++		}
++
++		if (!is_valid_ether_addr(nla_data(tb[IFLA_ADDRESS]))) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_ADDRESS],
++					    "Provided Ethernet address is not unicast");
++			return -EADDRNOTAVAIL;
++		}
++	}
++
++	if (tb[IFLA_MTU]) {
++		u32 mtu = nla_get_u32(tb[IFLA_MTU]);
++
++		if (mtu < ETH_MIN_MTU || mtu > ETH_MAX_MTU) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_MTU],
++					    "MTU must be between 68 and 65535");
++			return -EINVAL;
++		}
++	}
++
++	if (!data) {
++		NL_SET_ERR_MSG(extack,
++			       "Required attributes not provided to perform the operation");
++		return -EINVAL;
++	}
++
++	if (data[IFLA_VXLAN_ID]) {
++		u32 id = nla_get_u32(data[IFLA_VXLAN_ID]);
++
++		if (id >= VXLAN_N_VID) {
++			NL_SET_ERR_MSG_ATTR(extack, data[IFLA_VXLAN_ID],
++					    "VXLAN ID must be lower than 16777216");
++			return -ERANGE;
++		}
++	}
++
++	if (data[IFLA_VXLAN_PORT_RANGE]) {
++		const struct ifla_vxlan_port_range *p
++			= nla_data(data[IFLA_VXLAN_PORT_RANGE]);
++
++		if (ntohs(p->high) < ntohs(p->low)) {
++			NL_SET_ERR_MSG_ATTR(extack, data[IFLA_VXLAN_PORT_RANGE],
++					    "Invalid source port range");
++			return -EINVAL;
++		}
++	}
++
++	if (data[IFLA_VXLAN_DF]) {
++		enum ifla_vxlan_df df = nla_get_u8(data[IFLA_VXLAN_DF]);
++
++		if (df < 0 || df > VXLAN_DF_MAX) {
++			NL_SET_ERR_MSG_ATTR(extack, data[IFLA_VXLAN_DF],
++					    "Invalid DF attribute");
++			return -EINVAL;
++		}
++	}
++
++	return 0;
++}
++
++static void vxlan_get_drvinfo(struct net_device *netdev,
++			      struct ethtool_drvinfo *drvinfo)
++{
++	strlcpy(drvinfo->version, VXLAN_VERSION, sizeof(drvinfo->version));
++	strlcpy(drvinfo->driver, "vxlan", sizeof(drvinfo->driver));
++}
++
++static int vxlan_get_link_ksettings(struct net_device *dev,
++				    struct ethtool_link_ksettings *cmd)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct vxlan_rdst *dst = &vxlan->default_dst;
++	struct net_device *lowerdev = __dev_get_by_index(vxlan->net,
++							 dst->remote_ifindex);
++
++	if (!lowerdev) {
++		cmd->base.duplex = DUPLEX_UNKNOWN;
++		cmd->base.port = PORT_OTHER;
++		cmd->base.speed = SPEED_UNKNOWN;
++
++		return 0;
++	}
++
++	return __ethtool_get_link_ksettings(lowerdev, cmd);
++}
++
++static const struct ethtool_ops vxlan_ethtool_ops = {
++	.get_drvinfo		= vxlan_get_drvinfo,
++	.get_link		= ethtool_op_get_link,
++	.get_link_ksettings	= vxlan_get_link_ksettings,
++};
++
++static struct socket *vxlan_create_sock(struct net *net, bool ipv6,
++					__be16 port, u32 flags, int ifindex)
++{
++	struct socket *sock;
++	struct udp_port_cfg udp_conf;
++	int err;
++
++	memset(&udp_conf, 0, sizeof(udp_conf));
++
++	if (ipv6) {
++		udp_conf.family = AF_INET6;
++		udp_conf.use_udp6_rx_checksums =
++		    !(flags & VXLAN_F_UDP_ZERO_CSUM6_RX);
++		udp_conf.ipv6_v6only = 1;
++	} else {
++		udp_conf.family = AF_INET;
++	}
++
++	udp_conf.local_udp_port = port;
++	udp_conf.bind_ifindex = ifindex;
++
++	/* Open UDP socket */
++	err = udp_sock_create(net, &udp_conf, &sock);
++	if (err < 0)
++		return ERR_PTR(err);
++
++	return sock;
++}
++
++/* Create new listen socket if needed */
++static struct vxlan_sock *vxlan_socket_create(struct net *net, bool ipv6,
++					      __be16 port, u32 flags,
++					      int ifindex)
++{
++	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
++	struct vxlan_sock *vs;
++	struct socket *sock;
++	unsigned int h;
++	struct udp_tunnel_sock_cfg tunnel_cfg;
++
++	vs = kzalloc(sizeof(*vs), GFP_KERNEL);
++	if (!vs)
++		return ERR_PTR(-ENOMEM);
++
++	for (h = 0; h < VNI_HASH_SIZE; ++h)
++		INIT_HLIST_HEAD(&vs->vni_list[h]);
++
++	sock = vxlan_create_sock(net, ipv6, port, flags, ifindex);
++	if (IS_ERR(sock)) {
++		kfree(vs);
++		return ERR_CAST(sock);
++	}
++
++	vs->sock = sock;
++	refcount_set(&vs->refcnt, 1);
++	vs->flags = (flags & VXLAN_F_RCV_FLAGS);
++
++	spin_lock(&vn->sock_lock);
++	hlist_add_head_rcu(&vs->hlist, vs_head(net, port));
++	udp_tunnel_notify_add_rx_port(sock,
++				      (vs->flags & VXLAN_F_GPE) ?
++				      UDP_TUNNEL_TYPE_VXLAN_GPE :
++				      UDP_TUNNEL_TYPE_VXLAN);
++	spin_unlock(&vn->sock_lock);
++
++	/* Mark socket as an encapsulation socket. */
++	memset(&tunnel_cfg, 0, sizeof(tunnel_cfg));
++	tunnel_cfg.sk_user_data = vs;
++	tunnel_cfg.encap_type = 1;
++	tunnel_cfg.encap_rcv = vxlan_rcv;
++	tunnel_cfg.encap_err_lookup = vxlan_err_lookup;
++	tunnel_cfg.encap_destroy = NULL;
++	tunnel_cfg.gro_receive = vxlan_gro_receive;
++	tunnel_cfg.gro_complete = vxlan_gro_complete;
++
++	setup_udp_tunnel_sock(net, sock, &tunnel_cfg);
++
++	return vs;
++}
++
++static int __vxlan_sock_add(struct vxlan_dev *vxlan, bool ipv6)
++{
++	struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
++	struct vxlan_sock *vs = NULL;
++	struct vxlan_dev_node *node;
++	int l3mdev_index = 0;
++
++	if (vxlan->cfg.remote_ifindex)
++		l3mdev_index = l3mdev_master_upper_ifindex_by_index(
++			vxlan->net, vxlan->cfg.remote_ifindex);
++
++	if (!vxlan->cfg.no_share) {
++		spin_lock(&vn->sock_lock);
++		vs = vxlan_find_sock(vxlan->net, ipv6 ? AF_INET6 : AF_INET,
++				     vxlan->cfg.dst_port, vxlan->cfg.flags,
++				     l3mdev_index);
++		if (vs && !refcount_inc_not_zero(&vs->refcnt)) {
++			spin_unlock(&vn->sock_lock);
++			return -EBUSY;
++		}
++		spin_unlock(&vn->sock_lock);
++	}
++	if (!vs)
++		vs = vxlan_socket_create(vxlan->net, ipv6,
++					 vxlan->cfg.dst_port, vxlan->cfg.flags,
++					 l3mdev_index);
++	if (IS_ERR(vs))
++		return PTR_ERR(vs);
++#if IS_ENABLED(CONFIG_IPV6)
++	if (ipv6) {
++		rcu_assign_pointer(vxlan->vn6_sock, vs);
++		node = &vxlan->hlist6;
++	} else
++#endif
++	{
++		rcu_assign_pointer(vxlan->vn4_sock, vs);
++		node = &vxlan->hlist4;
++	}
++	vxlan_vs_add_dev(vs, vxlan, node);
++	return 0;
++}
++
++static int vxlan_sock_add(struct vxlan_dev *vxlan)
++{
++	bool metadata = vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA;
++	bool ipv6 = vxlan->cfg.flags & VXLAN_F_IPV6 || metadata;
++	bool ipv4 = !ipv6 || metadata;
++	int ret = 0;
++
++	RCU_INIT_POINTER(vxlan->vn4_sock, NULL);
++#if IS_ENABLED(CONFIG_IPV6)
++	RCU_INIT_POINTER(vxlan->vn6_sock, NULL);
++	if (ipv6) {
++		ret = __vxlan_sock_add(vxlan, true);
++		if (ret < 0 && ret != -EAFNOSUPPORT)
++			ipv4 = false;
++	}
++#endif
++	if (ipv4)
++		ret = __vxlan_sock_add(vxlan, false);
++	if (ret < 0)
++		vxlan_sock_release(vxlan);
++	return ret;
++}
++
++static int vxlan_config_validate(struct net *src_net, struct vxlan_config *conf,
++				 struct net_device **lower,
++				 struct vxlan_dev *old,
++				 struct netlink_ext_ack *extack)
++{
++	struct vxlan_net *vn = net_generic(src_net, vxlan_net_id);
++	struct vxlan_dev *tmp;
++	bool use_ipv6 = false;
++
++	if (conf->flags & VXLAN_F_GPE) {
++		/* For now, allow GPE only together with
++		 * COLLECT_METADATA. This can be relaxed later; in such
++		 * case, the other side of the PtP link will have to be
++		 * provided.
++		 */
++		if ((conf->flags & ~VXLAN_F_ALLOWED_GPE) ||
++		    !(conf->flags & VXLAN_F_COLLECT_METADATA)) {
++			NL_SET_ERR_MSG(extack,
++				       "VXLAN GPE does not support this combination of attributes");
++			return -EINVAL;
++		}
++	}
++
++	if (!conf->remote_ip.sa.sa_family && !conf->saddr.sa.sa_family) {
++		/* Unless IPv6 is explicitly requested, assume IPv4 */
++		conf->remote_ip.sa.sa_family = AF_INET;
++		conf->saddr.sa.sa_family = AF_INET;
++	} else if (!conf->remote_ip.sa.sa_family) {
++		conf->remote_ip.sa.sa_family = conf->saddr.sa.sa_family;
++	} else if (!conf->saddr.sa.sa_family) {
++		conf->saddr.sa.sa_family = conf->remote_ip.sa.sa_family;
++	}
++
++	if (conf->saddr.sa.sa_family != conf->remote_ip.sa.sa_family) {
++		NL_SET_ERR_MSG(extack,
++			       "Local and remote address must be from the same family");
++		return -EINVAL;
++	}
++
++	if (vxlan_addr_multicast(&conf->saddr)) {
++		NL_SET_ERR_MSG(extack, "Local address cannot be multicast");
++		return -EINVAL;
++	}
++
++	if (conf->saddr.sa.sa_family == AF_INET6) {
++		if (!IS_ENABLED(CONFIG_IPV6)) {
++			NL_SET_ERR_MSG(extack,
++				       "IPv6 support not enabled in the kernel");
++			return -EPFNOSUPPORT;
++		}
++		use_ipv6 = true;
++		conf->flags |= VXLAN_F_IPV6;
++
++		if (!(conf->flags & VXLAN_F_COLLECT_METADATA)) {
++			int local_type =
++				ipv6_addr_type(&conf->saddr.sin6.sin6_addr);
++			int remote_type =
++				ipv6_addr_type(&conf->remote_ip.sin6.sin6_addr);
++
++			if (local_type & IPV6_ADDR_LINKLOCAL) {
++				if (!(remote_type & IPV6_ADDR_LINKLOCAL) &&
++				    (remote_type != IPV6_ADDR_ANY)) {
++					NL_SET_ERR_MSG(extack,
++						       "Invalid combination of local and remote address scopes");
++					return -EINVAL;
++				}
++
++				conf->flags |= VXLAN_F_IPV6_LINKLOCAL;
++			} else {
++				if (remote_type ==
++				    (IPV6_ADDR_UNICAST | IPV6_ADDR_LINKLOCAL)) {
++					NL_SET_ERR_MSG(extack,
++						       "Invalid combination of local and remote address scopes");
++					return -EINVAL;
++				}
++
++				conf->flags &= ~VXLAN_F_IPV6_LINKLOCAL;
++			}
++		}
++	}
++
++	if (conf->label && !use_ipv6) {
++		NL_SET_ERR_MSG(extack,
++			       "Label attribute only applies to IPv6 VXLAN devices");
++		return -EINVAL;
++	}
++
++	if (conf->remote_ifindex) {
++		struct net_device *lowerdev;
++
++		lowerdev = __dev_get_by_index(src_net, conf->remote_ifindex);
++		if (!lowerdev) {
++			NL_SET_ERR_MSG(extack,
++				       "Invalid local interface, device not found");
++			return -ENODEV;
++		}
++
++#if IS_ENABLED(CONFIG_IPV6)
++		if (use_ipv6) {
++			struct inet6_dev *idev = __in6_dev_get(lowerdev);
++			if (idev && idev->cnf.disable_ipv6) {
++				NL_SET_ERR_MSG(extack,
++					       "IPv6 support disabled by administrator");
++				return -EPERM;
++			}
++		}
++#endif
++
++		*lower = lowerdev;
++	} else {
++		if (vxlan_addr_multicast(&conf->remote_ip)) {
++			NL_SET_ERR_MSG(extack,
++				       "Local interface required for multicast remote destination");
++
++			return -EINVAL;
++		}
++
++#if IS_ENABLED(CONFIG_IPV6)
++		if (conf->flags & VXLAN_F_IPV6_LINKLOCAL) {
++			NL_SET_ERR_MSG(extack,
++				       "Local interface required for link-local local/remote addresses");
++			return -EINVAL;
++		}
++#endif
++
++		*lower = NULL;
++	}
++
++	if (!conf->dst_port) {
++		if (conf->flags & VXLAN_F_GPE)
++			conf->dst_port = htons(4790); /* IANA VXLAN-GPE port */
++		else
++			conf->dst_port = htons(vxlan_port);
++	}
++
++	if (!conf->age_interval)
++		conf->age_interval = FDB_AGE_DEFAULT;
++
++	list_for_each_entry(tmp, &vn->vxlan_list, next) {
++		if (tmp == old)
++			continue;
++
++		if (tmp->cfg.vni != conf->vni)
++			continue;
++		if (tmp->cfg.dst_port != conf->dst_port)
++			continue;
++		if ((tmp->cfg.flags & (VXLAN_F_RCV_FLAGS | VXLAN_F_IPV6)) !=
++		    (conf->flags & (VXLAN_F_RCV_FLAGS | VXLAN_F_IPV6)))
++			continue;
++
++		if ((conf->flags & VXLAN_F_IPV6_LINKLOCAL) &&
++		    tmp->cfg.remote_ifindex != conf->remote_ifindex)
++			continue;
++
++		NL_SET_ERR_MSG(extack,
++			       "A VXLAN device with the specified VNI already exists");
++		return -EEXIST;
++	}
++
++	return 0;
++}
++
++static void vxlan_config_apply(struct net_device *dev,
++			       struct vxlan_config *conf,
++			       struct net_device *lowerdev,
++			       struct net *src_net,
++			       bool changelink)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct vxlan_rdst *dst = &vxlan->default_dst;
++	unsigned short needed_headroom = ETH_HLEN;
++	int max_mtu = ETH_MAX_MTU;
++	u32 flags = conf->flags;
++
++	if (!changelink) {
++		if (flags & VXLAN_F_GPE)
++			vxlan_raw_setup(dev);
++		else
++			vxlan_ether_setup(dev);
++
++		if (conf->mtu)
++			dev->mtu = conf->mtu;
++
++		vxlan->net = src_net;
++	}
++
++	dst->remote_vni = conf->vni;
++
++	memcpy(&dst->remote_ip, &conf->remote_ip, sizeof(conf->remote_ip));
++
++	if (lowerdev) {
++		dst->remote_ifindex = conf->remote_ifindex;
++
++		dev->gso_max_size = lowerdev->gso_max_size;
++		dev->gso_max_segs = lowerdev->gso_max_segs;
++
++		needed_headroom = lowerdev->hard_header_len;
++		needed_headroom += lowerdev->needed_headroom;
++
++		dev->needed_tailroom = lowerdev->needed_tailroom;
++
++		max_mtu = lowerdev->mtu - vxlan_headroom(flags);
++		if (max_mtu < ETH_MIN_MTU)
++			max_mtu = ETH_MIN_MTU;
++
++		if (!changelink && !conf->mtu)
++			dev->mtu = max_mtu;
++	}
++
++	if (dev->mtu > max_mtu)
++		dev->mtu = max_mtu;
++
++	if (flags & VXLAN_F_COLLECT_METADATA)
++		flags |= VXLAN_F_IPV6;
++	needed_headroom += vxlan_headroom(flags);
++	dev->needed_headroom = needed_headroom;
++
++	memcpy(&vxlan->cfg, conf, sizeof(*conf));
++}
++
++static int vxlan_dev_configure(struct net *src_net, struct net_device *dev,
++			       struct vxlan_config *conf, bool changelink,
++			       struct netlink_ext_ack *extack)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct net_device *lowerdev;
++	int ret;
++
++	ret = vxlan_config_validate(src_net, conf, &lowerdev, vxlan, extack);
++	if (ret)
++		return ret;
++
++	vxlan_config_apply(dev, conf, lowerdev, src_net, changelink);
++
++	return 0;
++}
++
++static int __vxlan_dev_create(struct net *net, struct net_device *dev,
++			      struct vxlan_config *conf,
++			      struct netlink_ext_ack *extack)
++{
++	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct net_device *remote_dev = NULL;
++	struct vxlan_fdb *f = NULL;
++	bool unregister = false;
++	struct vxlan_rdst *dst;
++	int err;
++
++	dst = &vxlan->default_dst;
++	err = vxlan_dev_configure(net, dev, conf, false, extack);
++	if (err)
++		return err;
++
++	dev->ethtool_ops = &vxlan_ethtool_ops;
++
++	/* create an fdb entry for a valid default destination */
++	if (!vxlan_addr_any(&dst->remote_ip)) {
++		err = vxlan_fdb_create(vxlan, all_zeros_mac,
++				       &dst->remote_ip,
++				       NUD_REACHABLE | NUD_PERMANENT,
++				       vxlan->cfg.dst_port,
++				       dst->remote_vni,
++				       dst->remote_vni,
++				       dst->remote_ifindex,
++				       NTF_SELF, 0, &f, extack);
++		if (err)
++			return err;
++	}
++
++	err = register_netdevice(dev);
++	if (err)
++		goto errout;
++	unregister = true;
++
++	if (dst->remote_ifindex) {
++		remote_dev = __dev_get_by_index(net, dst->remote_ifindex);
++		if (!remote_dev) {
++			err = -ENODEV;
++			goto errout;
++		}
++
++		err = netdev_upper_dev_link(remote_dev, dev, extack);
++		if (err)
++			goto errout;
++	}
++
++	err = rtnl_configure_link(dev, NULL);
++	if (err < 0)
++		goto unlink;
++
++	if (f) {
++		vxlan_fdb_insert(vxlan, all_zeros_mac, dst->remote_vni, f);
++
++		/* notify default fdb entry */
++		err = vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f),
++				       RTM_NEWNEIGH, true, extack);
++		if (err) {
++			vxlan_fdb_destroy(vxlan, f, false, false);
++			if (remote_dev)
++				netdev_upper_dev_unlink(remote_dev, dev);
++			goto unregister;
++		}
++	}
++
++	list_add(&vxlan->next, &vn->vxlan_list);
++	if (remote_dev)
++		dst->remote_dev = remote_dev;
++	return 0;
++unlink:
++	if (remote_dev)
++		netdev_upper_dev_unlink(remote_dev, dev);
++errout:
++	/* unregister_netdevice() destroys the default FDB entry with deletion
++	 * notification. But the addition notification was not sent yet, so
++	 * destroy the entry by hand here.
++	 */
++	if (f)
++		__vxlan_fdb_free(f);
++unregister:
++	if (unregister)
++		unregister_netdevice(dev);
++	return err;
++}
++
++/* Set/clear flags based on attribute */
++static int vxlan_nl2flag(struct vxlan_config *conf, struct nlattr *tb[],
++			  int attrtype, unsigned long mask, bool changelink,
++			  bool changelink_supported,
++			  struct netlink_ext_ack *extack)
++{
++	unsigned long flags;
++
++	if (!tb[attrtype])
++		return 0;
++
++	if (changelink && !changelink_supported) {
++		vxlan_flag_attr_error(attrtype, extack);
++		return -EOPNOTSUPP;
++	}
++
++	if (vxlan_policy[attrtype].type == NLA_FLAG)
++		flags = conf->flags | mask;
++	else if (nla_get_u8(tb[attrtype]))
++		flags = conf->flags | mask;
++	else
++		flags = conf->flags & ~mask;
++
++	conf->flags = flags;
++
++	return 0;
++}
++
++static int vxlan_nl2conf(struct nlattr *tb[], struct nlattr *data[],
++			 struct net_device *dev, struct vxlan_config *conf,
++			 bool changelink, struct netlink_ext_ack *extack)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	int err = 0;
++
++	memset(conf, 0, sizeof(*conf));
++
++	/* if changelink operation, start with old existing cfg */
++	if (changelink)
++		memcpy(conf, &vxlan->cfg, sizeof(*conf));
++
++	if (data[IFLA_VXLAN_ID]) {
++		__be32 vni = cpu_to_be32(nla_get_u32(data[IFLA_VXLAN_ID]));
++
++		if (changelink && (vni != conf->vni)) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_ID], "Cannot change VNI");
++			return -EOPNOTSUPP;
++		}
++		conf->vni = cpu_to_be32(nla_get_u32(data[IFLA_VXLAN_ID]));
++	}
++
++	if (data[IFLA_VXLAN_GROUP]) {
++		if (changelink && (conf->remote_ip.sa.sa_family != AF_INET)) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_GROUP], "New group address family does not match old group");
++			return -EOPNOTSUPP;
++		}
++
++		conf->remote_ip.sin.sin_addr.s_addr = nla_get_in_addr(data[IFLA_VXLAN_GROUP]);
++		conf->remote_ip.sa.sa_family = AF_INET;
++	} else if (data[IFLA_VXLAN_GROUP6]) {
++		if (!IS_ENABLED(CONFIG_IPV6)) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_GROUP6], "IPv6 support not enabled in the kernel");
++			return -EPFNOSUPPORT;
++		}
++
++		if (changelink && (conf->remote_ip.sa.sa_family != AF_INET6)) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_GROUP6], "New group address family does not match old group");
++			return -EOPNOTSUPP;
++		}
++
++		conf->remote_ip.sin6.sin6_addr = nla_get_in6_addr(data[IFLA_VXLAN_GROUP6]);
++		conf->remote_ip.sa.sa_family = AF_INET6;
++	}
++
++	if (data[IFLA_VXLAN_LOCAL]) {
++		if (changelink && (conf->saddr.sa.sa_family != AF_INET)) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_LOCAL], "New local address family does not match old");
++			return -EOPNOTSUPP;
++		}
++
++		conf->saddr.sin.sin_addr.s_addr = nla_get_in_addr(data[IFLA_VXLAN_LOCAL]);
++		conf->saddr.sa.sa_family = AF_INET;
++	} else if (data[IFLA_VXLAN_LOCAL6]) {
++		if (!IS_ENABLED(CONFIG_IPV6)) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_LOCAL6], "IPv6 support not enabled in the kernel");
++			return -EPFNOSUPPORT;
++		}
++
++		if (changelink && (conf->saddr.sa.sa_family != AF_INET6)) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_LOCAL6], "New local address family does not match old");
++			return -EOPNOTSUPP;
++		}
++
++		/* TODO: respect scope id */
++		conf->saddr.sin6.sin6_addr = nla_get_in6_addr(data[IFLA_VXLAN_LOCAL6]);
++		conf->saddr.sa.sa_family = AF_INET6;
++	}
++
++	if (data[IFLA_VXLAN_LINK])
++		conf->remote_ifindex = nla_get_u32(data[IFLA_VXLAN_LINK]);
++
++	if (data[IFLA_VXLAN_TOS])
++		conf->tos  = nla_get_u8(data[IFLA_VXLAN_TOS]);
++
++	if (data[IFLA_VXLAN_TTL])
++		conf->ttl = nla_get_u8(data[IFLA_VXLAN_TTL]);
++
++	if (data[IFLA_VXLAN_TTL_INHERIT]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_TTL_INHERIT,
++				    VXLAN_F_TTL_INHERIT, changelink, false,
++				    extack);
++		if (err)
++			return err;
++
++	}
++
++	if (data[IFLA_VXLAN_LABEL])
++		conf->label = nla_get_be32(data[IFLA_VXLAN_LABEL]) &
++			     IPV6_FLOWLABEL_MASK;
++
++	if (data[IFLA_VXLAN_LEARNING]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_LEARNING,
++				    VXLAN_F_LEARN, changelink, true,
++				    extack);
++		if (err)
++			return err;
++	} else if (!changelink) {
++		/* default to learn on a new device */
++		conf->flags |= VXLAN_F_LEARN;
++	}
++
++	if (data[IFLA_VXLAN_AGEING])
++		conf->age_interval = nla_get_u32(data[IFLA_VXLAN_AGEING]);
++
++	if (data[IFLA_VXLAN_PROXY]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_PROXY,
++				    VXLAN_F_PROXY, changelink, false,
++				    extack);
++		if (err)
++			return err;
++	}
++
++	if (data[IFLA_VXLAN_RSC]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_RSC,
++				    VXLAN_F_RSC, changelink, false,
++				    extack);
++		if (err)
++			return err;
++	}
++
++	if (data[IFLA_VXLAN_L2MISS]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_L2MISS,
++				    VXLAN_F_L2MISS, changelink, false,
++				    extack);
++		if (err)
++			return err;
++	}
++
++	if (data[IFLA_VXLAN_L3MISS]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_L3MISS,
++				    VXLAN_F_L3MISS, changelink, false,
++				    extack);
++		if (err)
++			return err;
++	}
++
++	if (data[IFLA_VXLAN_LIMIT]) {
++		if (changelink) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_LIMIT],
++					    "Cannot change limit");
++			return -EOPNOTSUPP;
++		}
++		conf->addrmax = nla_get_u32(data[IFLA_VXLAN_LIMIT]);
++	}
++
++	if (data[IFLA_VXLAN_COLLECT_METADATA]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_COLLECT_METADATA,
++				    VXLAN_F_COLLECT_METADATA, changelink, false,
++				    extack);
++		if (err)
++			return err;
++	}
++
++	if (data[IFLA_VXLAN_PORT_RANGE]) {
++		if (!changelink) {
++			const struct ifla_vxlan_port_range *p
++				= nla_data(data[IFLA_VXLAN_PORT_RANGE]);
++			conf->port_min = ntohs(p->low);
++			conf->port_max = ntohs(p->high);
++		} else {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_PORT_RANGE],
++					    "Cannot change port range");
++			return -EOPNOTSUPP;
++		}
++	}
++
++	if (data[IFLA_VXLAN_PORT]) {
++		if (changelink) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_PORT],
++					    "Cannot change port");
++			return -EOPNOTSUPP;
++		}
++		conf->dst_port = nla_get_be16(data[IFLA_VXLAN_PORT]);
++	}
++
++	if (data[IFLA_VXLAN_UDP_CSUM]) {
++		if (changelink) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_UDP_CSUM],
++					    "Cannot change UDP_CSUM flag");
++			return -EOPNOTSUPP;
++		}
++		if (!nla_get_u8(data[IFLA_VXLAN_UDP_CSUM]))
++			conf->flags |= VXLAN_F_UDP_ZERO_CSUM_TX;
++	}
++
++	if (data[IFLA_VXLAN_UDP_ZERO_CSUM6_TX]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_UDP_ZERO_CSUM6_TX,
++				    VXLAN_F_UDP_ZERO_CSUM6_TX, changelink,
++				    false, extack);
++		if (err)
++			return err;
++	}
++
++	if (data[IFLA_VXLAN_UDP_ZERO_CSUM6_RX]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_UDP_ZERO_CSUM6_RX,
++				    VXLAN_F_UDP_ZERO_CSUM6_RX, changelink,
++				    false, extack);
++		if (err)
++			return err;
++	}
++
++	if (data[IFLA_VXLAN_REMCSUM_TX]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_REMCSUM_TX,
++				    VXLAN_F_REMCSUM_TX, changelink, false,
++				    extack);
++		if (err)
++			return err;
++	}
++
++	if (data[IFLA_VXLAN_REMCSUM_RX]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_REMCSUM_RX,
++				    VXLAN_F_REMCSUM_RX, changelink, false,
++				    extack);
++		if (err)
++			return err;
++	}
++
++	if (data[IFLA_VXLAN_GBP]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_GBP,
++				    VXLAN_F_GBP, changelink, false, extack);
++		if (err)
++			return err;
++	}
++
++	if (data[IFLA_VXLAN_GPE]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_GPE,
++				    VXLAN_F_GPE, changelink, false,
++				    extack);
++		if (err)
++			return err;
++	}
++
++	if (data[IFLA_VXLAN_REMCSUM_NOPARTIAL]) {
++		err = vxlan_nl2flag(conf, data, IFLA_VXLAN_REMCSUM_NOPARTIAL,
++				    VXLAN_F_REMCSUM_NOPARTIAL, changelink,
++				    false, extack);
++		if (err)
++			return err;
++	}
++
++	if (tb[IFLA_MTU]) {
++		if (changelink) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_MTU],
++					    "Cannot change mtu");
++			return -EOPNOTSUPP;
++		}
++		conf->mtu = nla_get_u32(tb[IFLA_MTU]);
++	}
++
++	if (data[IFLA_VXLAN_DF])
++		conf->df = nla_get_u8(data[IFLA_VXLAN_DF]);
++
++	return 0;
++}
++
++static int vxlan_newlink(struct net *src_net, struct net_device *dev,
++			 struct nlattr *tb[], struct nlattr *data[],
++			 struct netlink_ext_ack *extack)
++{
++	struct vxlan_config conf;
++	int err;
++
++	err = vxlan_nl2conf(tb, data, dev, &conf, false, extack);
++	if (err)
++		return err;
++
++	return __vxlan_dev_create(src_net, dev, &conf, extack);
++}
++
++static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
++			    struct nlattr *data[],
++			    struct netlink_ext_ack *extack)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct net_device *lowerdev;
++	struct vxlan_config conf;
++	struct vxlan_rdst *dst;
++	int err;
++
++	dst = &vxlan->default_dst;
++	err = vxlan_nl2conf(tb, data, dev, &conf, true, extack);
++	if (err)
++		return err;
++
++	err = vxlan_config_validate(vxlan->net, &conf, &lowerdev,
++				    vxlan, extack);
++	if (err)
++		return err;
++
++	if (dst->remote_dev == lowerdev)
++		lowerdev = NULL;
++
++	err = netdev_adjacent_change_prepare(dst->remote_dev, lowerdev, dev,
++					     extack);
++	if (err)
++		return err;
++
++	/* handle default dst entry */
++	if (!vxlan_addr_equal(&conf.remote_ip, &dst->remote_ip)) {
++		u32 hash_index = fdb_head_index(vxlan, all_zeros_mac, conf.vni);
++
++		spin_lock_bh(&vxlan->hash_lock[hash_index]);
++		if (!vxlan_addr_any(&conf.remote_ip)) {
++			err = vxlan_fdb_update(vxlan, all_zeros_mac,
++					       &conf.remote_ip,
++					       NUD_REACHABLE | NUD_PERMANENT,
++					       NLM_F_APPEND | NLM_F_CREATE,
++					       vxlan->cfg.dst_port,
++					       conf.vni, conf.vni,
++					       conf.remote_ifindex,
++					       NTF_SELF, 0, true, extack);
++			if (err) {
++				spin_unlock_bh(&vxlan->hash_lock[hash_index]);
++				netdev_adjacent_change_abort(dst->remote_dev,
++							     lowerdev, dev);
++				return err;
++			}
++		}
++		if (!vxlan_addr_any(&dst->remote_ip))
++			__vxlan_fdb_delete(vxlan, all_zeros_mac,
++					   dst->remote_ip,
++					   vxlan->cfg.dst_port,
++					   dst->remote_vni,
++					   dst->remote_vni,
++					   dst->remote_ifindex,
++					   true);
++		spin_unlock_bh(&vxlan->hash_lock[hash_index]);
++	}
++
++	if (conf.age_interval != vxlan->cfg.age_interval)
++		mod_timer(&vxlan->age_timer, jiffies);
++
++	netdev_adjacent_change_commit(dst->remote_dev, lowerdev, dev);
++	if (lowerdev && lowerdev != dst->remote_dev)
++		dst->remote_dev = lowerdev;
++	vxlan_config_apply(dev, &conf, lowerdev, vxlan->net, true);
++	return 0;
++}
++
++static void vxlan_dellink(struct net_device *dev, struct list_head *head)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++
++	vxlan_flush(vxlan, true);
++
++	list_del(&vxlan->next);
++	unregister_netdevice_queue(dev, head);
++	if (vxlan->default_dst.remote_dev)
++		netdev_upper_dev_unlink(vxlan->default_dst.remote_dev, dev);
++}
++
++static size_t vxlan_get_size(const struct net_device *dev)
++{
++
++	return nla_total_size(sizeof(__u32)) +	/* IFLA_VXLAN_ID */
++		nla_total_size(sizeof(struct in6_addr)) + /* IFLA_VXLAN_GROUP{6} */
++		nla_total_size(sizeof(__u32)) +	/* IFLA_VXLAN_LINK */
++		nla_total_size(sizeof(struct in6_addr)) + /* IFLA_VXLAN_LOCAL{6} */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TTL */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TTL_INHERIT */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TOS */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_DF */
++		nla_total_size(sizeof(__be32)) + /* IFLA_VXLAN_LABEL */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_LEARNING */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_PROXY */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_RSC */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_L2MISS */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_L3MISS */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_COLLECT_METADATA */
++		nla_total_size(sizeof(__u32)) +	/* IFLA_VXLAN_AGEING */
++		nla_total_size(sizeof(__u32)) +	/* IFLA_VXLAN_LIMIT */
++		nla_total_size(sizeof(struct ifla_vxlan_port_range)) +
++		nla_total_size(sizeof(__be16)) + /* IFLA_VXLAN_PORT */
++		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_UDP_CSUM */
++		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_UDP_ZERO_CSUM6_TX */
++		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_UDP_ZERO_CSUM6_RX */
++		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_TX */
++		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_RX */
++		0;
++}
++
++static int vxlan_fill_info(struct sk_buff *skb, const struct net_device *dev)
++{
++	const struct vxlan_dev *vxlan = netdev_priv(dev);
++	const struct vxlan_rdst *dst = &vxlan->default_dst;
++	struct ifla_vxlan_port_range ports = {
++		.low =  htons(vxlan->cfg.port_min),
++		.high = htons(vxlan->cfg.port_max),
++	};
++
++	if (nla_put_u32(skb, IFLA_VXLAN_ID, be32_to_cpu(dst->remote_vni)))
++		goto nla_put_failure;
++
++	if (!vxlan_addr_any(&dst->remote_ip)) {
++		if (dst->remote_ip.sa.sa_family == AF_INET) {
++			if (nla_put_in_addr(skb, IFLA_VXLAN_GROUP,
++					    dst->remote_ip.sin.sin_addr.s_addr))
++				goto nla_put_failure;
++#if IS_ENABLED(CONFIG_IPV6)
++		} else {
++			if (nla_put_in6_addr(skb, IFLA_VXLAN_GROUP6,
++					     &dst->remote_ip.sin6.sin6_addr))
++				goto nla_put_failure;
++#endif
++		}
++	}
++
++	if (dst->remote_ifindex && nla_put_u32(skb, IFLA_VXLAN_LINK, dst->remote_ifindex))
++		goto nla_put_failure;
++
++	if (!vxlan_addr_any(&vxlan->cfg.saddr)) {
++		if (vxlan->cfg.saddr.sa.sa_family == AF_INET) {
++			if (nla_put_in_addr(skb, IFLA_VXLAN_LOCAL,
++					    vxlan->cfg.saddr.sin.sin_addr.s_addr))
++				goto nla_put_failure;
++#if IS_ENABLED(CONFIG_IPV6)
++		} else {
++			if (nla_put_in6_addr(skb, IFLA_VXLAN_LOCAL6,
++					     &vxlan->cfg.saddr.sin6.sin6_addr))
++				goto nla_put_failure;
++#endif
++		}
++	}
++
++	if (nla_put_u8(skb, IFLA_VXLAN_TTL, vxlan->cfg.ttl) ||
++	    nla_put_u8(skb, IFLA_VXLAN_TTL_INHERIT,
++		       !!(vxlan->cfg.flags & VXLAN_F_TTL_INHERIT)) ||
++	    nla_put_u8(skb, IFLA_VXLAN_TOS, vxlan->cfg.tos) ||
++	    nla_put_u8(skb, IFLA_VXLAN_DF, vxlan->cfg.df) ||
++	    nla_put_be32(skb, IFLA_VXLAN_LABEL, vxlan->cfg.label) ||
++	    nla_put_u8(skb, IFLA_VXLAN_LEARNING,
++		       !!(vxlan->cfg.flags & VXLAN_F_LEARN)) ||
++	    nla_put_u8(skb, IFLA_VXLAN_PROXY,
++		       !!(vxlan->cfg.flags & VXLAN_F_PROXY)) ||
++	    nla_put_u8(skb, IFLA_VXLAN_RSC,
++		       !!(vxlan->cfg.flags & VXLAN_F_RSC)) ||
++	    nla_put_u8(skb, IFLA_VXLAN_L2MISS,
++		       !!(vxlan->cfg.flags & VXLAN_F_L2MISS)) ||
++	    nla_put_u8(skb, IFLA_VXLAN_L3MISS,
++		       !!(vxlan->cfg.flags & VXLAN_F_L3MISS)) ||
++	    nla_put_u8(skb, IFLA_VXLAN_COLLECT_METADATA,
++		       !!(vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA)) ||
++	    nla_put_u32(skb, IFLA_VXLAN_AGEING, vxlan->cfg.age_interval) ||
++	    nla_put_u32(skb, IFLA_VXLAN_LIMIT, vxlan->cfg.addrmax) ||
++	    nla_put_be16(skb, IFLA_VXLAN_PORT, vxlan->cfg.dst_port) ||
++	    nla_put_u8(skb, IFLA_VXLAN_UDP_CSUM,
++		       !(vxlan->cfg.flags & VXLAN_F_UDP_ZERO_CSUM_TX)) ||
++	    nla_put_u8(skb, IFLA_VXLAN_UDP_ZERO_CSUM6_TX,
++		       !!(vxlan->cfg.flags & VXLAN_F_UDP_ZERO_CSUM6_TX)) ||
++	    nla_put_u8(skb, IFLA_VXLAN_UDP_ZERO_CSUM6_RX,
++		       !!(vxlan->cfg.flags & VXLAN_F_UDP_ZERO_CSUM6_RX)) ||
++	    nla_put_u8(skb, IFLA_VXLAN_REMCSUM_TX,
++		       !!(vxlan->cfg.flags & VXLAN_F_REMCSUM_TX)) ||
++	    nla_put_u8(skb, IFLA_VXLAN_REMCSUM_RX,
++		       !!(vxlan->cfg.flags & VXLAN_F_REMCSUM_RX)))
++		goto nla_put_failure;
++
++	if (nla_put(skb, IFLA_VXLAN_PORT_RANGE, sizeof(ports), &ports))
++		goto nla_put_failure;
++
++	if (vxlan->cfg.flags & VXLAN_F_GBP &&
++	    nla_put_flag(skb, IFLA_VXLAN_GBP))
++		goto nla_put_failure;
++
++	if (vxlan->cfg.flags & VXLAN_F_GPE &&
++	    nla_put_flag(skb, IFLA_VXLAN_GPE))
++		goto nla_put_failure;
++
++	if (vxlan->cfg.flags & VXLAN_F_REMCSUM_NOPARTIAL &&
++	    nla_put_flag(skb, IFLA_VXLAN_REMCSUM_NOPARTIAL))
++		goto nla_put_failure;
++
++	return 0;
++
++nla_put_failure:
++	return -EMSGSIZE;
++}
++
++static struct net *vxlan_get_link_net(const struct net_device *dev)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++
++	return vxlan->net;
++}
++
++static struct rtnl_link_ops vxlan_link_ops __read_mostly = {
++	.kind		= "vxlan",
++	.maxtype	= IFLA_VXLAN_MAX,
++	.policy		= vxlan_policy,
++	.priv_size	= sizeof(struct vxlan_dev),
++	.setup		= vxlan_setup,
++	.validate	= vxlan_validate,
++	.newlink	= vxlan_newlink,
++	.changelink	= vxlan_changelink,
++	.dellink	= vxlan_dellink,
++	.get_size	= vxlan_get_size,
++	.fill_info	= vxlan_fill_info,
++	.get_link_net	= vxlan_get_link_net,
++};
++
++struct net_device *vxlan_dev_create(struct net *net, const char *name,
++				    u8 name_assign_type,
++				    struct vxlan_config *conf)
++{
++	struct nlattr *tb[IFLA_MAX + 1];
++	struct net_device *dev;
++	int err;
++
++	memset(&tb, 0, sizeof(tb));
++
++	dev = rtnl_create_link(net, name, name_assign_type,
++			       &vxlan_link_ops, tb, NULL);
++	if (IS_ERR(dev))
++		return dev;
++
++	err = __vxlan_dev_create(net, dev, conf, NULL);
++	if (err < 0) {
++		free_netdev(dev);
++		return ERR_PTR(err);
++	}
++
++	err = rtnl_configure_link(dev, NULL);
++	if (err < 0) {
++		LIST_HEAD(list_kill);
++
++		vxlan_dellink(dev, &list_kill);
++		unregister_netdevice_many(&list_kill);
++		return ERR_PTR(err);
++	}
++
++	return dev;
++}
++EXPORT_SYMBOL_GPL(vxlan_dev_create);
++
++static void vxlan_handle_lowerdev_unregister(struct vxlan_net *vn,
++					     struct net_device *dev)
++{
++	struct vxlan_dev *vxlan, *next;
++	LIST_HEAD(list_kill);
++
++	list_for_each_entry_safe(vxlan, next, &vn->vxlan_list, next) {
++		struct vxlan_rdst *dst = &vxlan->default_dst;
++
++		/* In case we created vxlan device with carrier
++		 * and we loose the carrier due to module unload
++		 * we also need to remove vxlan device. In other
++		 * cases, it's not necessary and remote_ifindex
++		 * is 0 here, so no matches.
++		 */
++		if (dst->remote_ifindex == dev->ifindex)
++			vxlan_dellink(vxlan->dev, &list_kill);
++	}
++
++	unregister_netdevice_many(&list_kill);
++}
++
++static int vxlan_netdevice_event(struct notifier_block *unused,
++				 unsigned long event, void *ptr)
++{
++	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
++	struct vxlan_net *vn = net_generic(dev_net(dev), vxlan_net_id);
++
++	if (event == NETDEV_UNREGISTER) {
++		if (!dev->udp_tunnel_nic_info)
++			vxlan_offload_rx_ports(dev, false);
++		vxlan_handle_lowerdev_unregister(vn, dev);
++	} else if (event == NETDEV_REGISTER) {
++		if (!dev->udp_tunnel_nic_info)
++			vxlan_offload_rx_ports(dev, true);
++	} else if (event == NETDEV_UDP_TUNNEL_PUSH_INFO ||
++		   event == NETDEV_UDP_TUNNEL_DROP_INFO) {
++		vxlan_offload_rx_ports(dev, event == NETDEV_UDP_TUNNEL_PUSH_INFO);
++	}
++
++	return NOTIFY_DONE;
++}
++
++static struct notifier_block vxlan_notifier_block __read_mostly = {
++	.notifier_call = vxlan_netdevice_event,
++};
++
++static void
++vxlan_fdb_offloaded_set(struct net_device *dev,
++			struct switchdev_notifier_vxlan_fdb_info *fdb_info)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct vxlan_rdst *rdst;
++	struct vxlan_fdb *f;
++	u32 hash_index;
++
++	hash_index = fdb_head_index(vxlan, fdb_info->eth_addr, fdb_info->vni);
++
++	spin_lock_bh(&vxlan->hash_lock[hash_index]);
++
++	f = vxlan_find_mac(vxlan, fdb_info->eth_addr, fdb_info->vni);
++	if (!f)
++		goto out;
++
++	rdst = vxlan_fdb_find_rdst(f, &fdb_info->remote_ip,
++				   fdb_info->remote_port,
++				   fdb_info->remote_vni,
++				   fdb_info->remote_ifindex);
++	if (!rdst)
++		goto out;
++
++	rdst->offloaded = fdb_info->offloaded;
++
++out:
++	spin_unlock_bh(&vxlan->hash_lock[hash_index]);
++}
++
++static int
++vxlan_fdb_external_learn_add(struct net_device *dev,
++			     struct switchdev_notifier_vxlan_fdb_info *fdb_info)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct netlink_ext_ack *extack;
++	u32 hash_index;
++	int err;
++
++	hash_index = fdb_head_index(vxlan, fdb_info->eth_addr, fdb_info->vni);
++	extack = switchdev_notifier_info_to_extack(&fdb_info->info);
++
++	spin_lock_bh(&vxlan->hash_lock[hash_index]);
++	err = vxlan_fdb_update(vxlan, fdb_info->eth_addr, &fdb_info->remote_ip,
++			       NUD_REACHABLE,
++			       NLM_F_CREATE | NLM_F_REPLACE,
++			       fdb_info->remote_port,
++			       fdb_info->vni,
++			       fdb_info->remote_vni,
++			       fdb_info->remote_ifindex,
++			       NTF_USE | NTF_SELF | NTF_EXT_LEARNED,
++			       0, false, extack);
++	spin_unlock_bh(&vxlan->hash_lock[hash_index]);
++
++	return err;
++}
++
++static int
++vxlan_fdb_external_learn_del(struct net_device *dev,
++			     struct switchdev_notifier_vxlan_fdb_info *fdb_info)
++{
++	struct vxlan_dev *vxlan = netdev_priv(dev);
++	struct vxlan_fdb *f;
++	u32 hash_index;
++	int err = 0;
++
++	hash_index = fdb_head_index(vxlan, fdb_info->eth_addr, fdb_info->vni);
++	spin_lock_bh(&vxlan->hash_lock[hash_index]);
++
++	f = vxlan_find_mac(vxlan, fdb_info->eth_addr, fdb_info->vni);
++	if (!f)
++		err = -ENOENT;
++	else if (f->flags & NTF_EXT_LEARNED)
++		err = __vxlan_fdb_delete(vxlan, fdb_info->eth_addr,
++					 fdb_info->remote_ip,
++					 fdb_info->remote_port,
++					 fdb_info->vni,
++					 fdb_info->remote_vni,
++					 fdb_info->remote_ifindex,
++					 false);
++
++	spin_unlock_bh(&vxlan->hash_lock[hash_index]);
++
++	return err;
++}
++
++static int vxlan_switchdev_event(struct notifier_block *unused,
++				 unsigned long event, void *ptr)
++{
++	struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
++	struct switchdev_notifier_vxlan_fdb_info *fdb_info;
++	int err = 0;
++
++	switch (event) {
++	case SWITCHDEV_VXLAN_FDB_OFFLOADED:
++		vxlan_fdb_offloaded_set(dev, ptr);
++		break;
++	case SWITCHDEV_VXLAN_FDB_ADD_TO_BRIDGE:
++		fdb_info = ptr;
++		err = vxlan_fdb_external_learn_add(dev, fdb_info);
++		if (err) {
++			err = notifier_from_errno(err);
++			break;
++		}
++		fdb_info->offloaded = true;
++		vxlan_fdb_offloaded_set(dev, fdb_info);
++		break;
++	case SWITCHDEV_VXLAN_FDB_DEL_TO_BRIDGE:
++		fdb_info = ptr;
++		err = vxlan_fdb_external_learn_del(dev, fdb_info);
++		if (err) {
++			err = notifier_from_errno(err);
++			break;
++		}
++		fdb_info->offloaded = false;
++		vxlan_fdb_offloaded_set(dev, fdb_info);
++		break;
++	}
++
++	return err;
++}
++
++static struct notifier_block vxlan_switchdev_notifier_block __read_mostly = {
++	.notifier_call = vxlan_switchdev_event,
++};
++
++static void vxlan_fdb_nh_flush(struct nexthop *nh)
++{
++	struct vxlan_fdb *fdb;
++	struct vxlan_dev *vxlan;
++	u32 hash_index;
++
++	rcu_read_lock();
++	list_for_each_entry_rcu(fdb, &nh->fdb_list, nh_list) {
++		vxlan = rcu_dereference(fdb->vdev);
++		WARN_ON(!vxlan);
++		hash_index = fdb_head_index(vxlan, fdb->eth_addr,
++					    vxlan->default_dst.remote_vni);
++		spin_lock_bh(&vxlan->hash_lock[hash_index]);
++		if (!hlist_unhashed(&fdb->hlist))
++			vxlan_fdb_destroy(vxlan, fdb, false, false);
++		spin_unlock_bh(&vxlan->hash_lock[hash_index]);
++	}
++	rcu_read_unlock();
++}
++
++static int vxlan_nexthop_event(struct notifier_block *nb,
++			       unsigned long event, void *ptr)
++{
++	struct nexthop *nh = ptr;
++
++	if (!nh || event != NEXTHOP_EVENT_DEL)
++		return NOTIFY_DONE;
++
++	vxlan_fdb_nh_flush(nh);
++
++	return NOTIFY_DONE;
++}
++
++static struct notifier_block vxlan_nexthop_notifier_block __read_mostly = {
++	.notifier_call = vxlan_nexthop_event,
++};
++
++static __net_init int vxlan_init_net(struct net *net)
++{
++	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
++	unsigned int h;
++
++	INIT_LIST_HEAD(&vn->vxlan_list);
++	spin_lock_init(&vn->sock_lock);
++
++	for (h = 0; h < PORT_HASH_SIZE; ++h)
++		INIT_HLIST_HEAD(&vn->sock_list[h]);
++
++	return register_nexthop_notifier(net, &vxlan_nexthop_notifier_block);
++}
++
++static void vxlan_destroy_tunnels(struct net *net, struct list_head *head)
++{
++	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
++	struct vxlan_dev *vxlan, *next;
++	struct net_device *dev, *aux;
++
++	for_each_netdev_safe(net, dev, aux)
++		if (dev->rtnl_link_ops == &vxlan_link_ops)
++			unregister_netdevice_queue(dev, head);
++
++	list_for_each_entry_safe(vxlan, next, &vn->vxlan_list, next) {
++		/* If vxlan->dev is in the same netns, it has already been added
++		 * to the list by the previous loop.
++		 */
++		if (!net_eq(dev_net(vxlan->dev), net))
++			unregister_netdevice_queue(vxlan->dev, head);
++	}
++
++}
++
++static void __net_exit vxlan_exit_batch_net(struct list_head *net_list)
++{
++	struct net *net;
++	LIST_HEAD(list);
++	unsigned int h;
++
++	rtnl_lock();
++	list_for_each_entry(net, net_list, exit_list)
++		unregister_nexthop_notifier(net, &vxlan_nexthop_notifier_block);
++	list_for_each_entry(net, net_list, exit_list)
++		vxlan_destroy_tunnels(net, &list);
++
++	unregister_netdevice_many(&list);
++	rtnl_unlock();
++
++	list_for_each_entry(net, net_list, exit_list) {
++		struct vxlan_net *vn = net_generic(net, vxlan_net_id);
++
++		for (h = 0; h < PORT_HASH_SIZE; ++h)
++			WARN_ON_ONCE(!hlist_empty(&vn->sock_list[h]));
++	}
++}
++
++static struct pernet_operations vxlan_net_ops = {
++	.init = vxlan_init_net,
++	.exit_batch = vxlan_exit_batch_net,
++	.id   = &vxlan_net_id,
++	.size = sizeof(struct vxlan_net),
++};
++
++static int __init vxlan_init_module(void)
++{
++	int rc;
++
++	get_random_bytes(&vxlan_salt, sizeof(vxlan_salt));
++
++	rc = register_pernet_subsys(&vxlan_net_ops);
++	if (rc)
++		goto out1;
++
++	rc = register_netdevice_notifier(&vxlan_notifier_block);
++	if (rc)
++		goto out2;
++
++	rc = register_switchdev_notifier(&vxlan_switchdev_notifier_block);
++	if (rc)
++		goto out3;
++
++	rc = rtnl_link_register(&vxlan_link_ops);
++	if (rc)
++		goto out4;
++
++	return 0;
++out4:
++	unregister_switchdev_notifier(&vxlan_switchdev_notifier_block);
++out3:
++	unregister_netdevice_notifier(&vxlan_notifier_block);
++out2:
++	unregister_pernet_subsys(&vxlan_net_ops);
++out1:
++	return rc;
++}
++late_initcall(vxlan_init_module);
++
++static void __exit vxlan_cleanup_module(void)
++{
++	rtnl_link_unregister(&vxlan_link_ops);
++	unregister_switchdev_notifier(&vxlan_switchdev_notifier_block);
++	unregister_netdevice_notifier(&vxlan_notifier_block);
++	unregister_pernet_subsys(&vxlan_net_ops);
++	/* rcu_barrier() is called by netns */
++}
++module_exit(vxlan_cleanup_module);
++
++MODULE_LICENSE("GPL");
++MODULE_VERSION(VXLAN_VERSION);
++MODULE_AUTHOR("Stephen Hemminger <stephen@networkplumber.org>");
++MODULE_DESCRIPTION("Driver for VXLAN encapsulated traffic");
++MODULE_ALIAS_RTNL_LINK("vxlan");
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index 81ff3b4c6c1b3..dc1191aa0443e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -160,9 +160,9 @@ static void mt76_init_stream_cap(struct mt76_phy *phy,
+ 
+ void mt76_set_stream_caps(struct mt76_phy *phy, bool vht)
+ {
+-	if (phy->dev->cap.has_2ghz)
++	if (phy->cap.has_2ghz)
+ 		mt76_init_stream_cap(phy, &phy->sband_2g.sband, false);
+-	if (phy->dev->cap.has_5ghz)
++	if (phy->cap.has_5ghz)
+ 		mt76_init_stream_cap(phy, &phy->sband_5g.sband, vht);
+ }
+ EXPORT_SYMBOL_GPL(mt76_set_stream_caps);
+@@ -463,13 +463,13 @@ int mt76_register_device(struct mt76_dev *dev, bool vht,
+ 	dev_set_drvdata(dev->dev, dev);
+ 	mt76_phy_init(dev, hw);
+ 
+-	if (dev->cap.has_2ghz) {
++	if (phy->cap.has_2ghz) {
+ 		ret = mt76_init_sband_2g(dev, rates, n_rates);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
+-	if (dev->cap.has_5ghz) {
++	if (phy->cap.has_5ghz) {
+ 		ret = mt76_init_sband_5g(dev, rates + 4, n_rates - 4, vht);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 5a8060790a61f..16e65020a242d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -561,6 +561,7 @@ struct mt76_phy {
+ 	struct mt76_channel_state *chan_state;
+ 	ktime_t survey_time;
+ 
++	struct mt76_hw_cap cap;
+ 	struct mt76_sband sband_2g;
+ 	struct mt76_sband sband_5g;
+ 
+@@ -630,7 +631,6 @@ struct mt76_dev {
+ 
+ 	struct debugfs_blob_wrapper eeprom;
+ 	struct debugfs_blob_wrapper otp;
+-	struct mt76_hw_cap cap;
+ 
+ 	struct mt76_rate_power rate_power;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7603/eeprom.c
+index 01f1e0da5ee1e..a6df733aca492 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/eeprom.c
+@@ -170,7 +170,7 @@ int mt7603_eeprom_init(struct mt7603_dev *dev)
+ 	}
+ 
+ 	eeprom = (u8 *)dev->mt76.eeprom.data;
+-	dev->mt76.cap.has_2ghz = true;
++	dev->mphy.cap.has_2ghz = true;
+ 	memcpy(dev->mt76.macaddr, eeprom + MT_EE_MAC_ADDR, ETH_ALEN);
+ 
+ 	/* Check for 1SS devices */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/dma.c b/drivers/net/wireless/mediatek/mt76/mt7615/dma.c
+index bf8ae14121dba..637ef0882436c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/dma.c
+@@ -202,7 +202,7 @@ int mt7615_dma_init(struct mt7615_dev *dev)
+ 	int ret;
+ 
+ 	/* Increase buffer size to receive large VHT MPDUs */
+-	if (dev->mt76.cap.has_5ghz)
++	if (dev->mphy.cap.has_5ghz)
+ 		rx_buf_size *= 2;
+ 
+ 	mt76_dma_attach(&dev->mt76);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
+index e9cdcdc54d5c3..85f56487feff2 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
+@@ -100,20 +100,20 @@ mt7615_eeprom_parse_hw_band_cap(struct mt7615_dev *dev)
+ 
+ 	if (is_mt7663(&dev->mt76)) {
+ 		/* dual band */
+-		dev->mt76.cap.has_2ghz = true;
+-		dev->mt76.cap.has_5ghz = true;
++		dev->mphy.cap.has_2ghz = true;
++		dev->mphy.cap.has_5ghz = true;
+ 		return;
+ 	}
+ 
+ 	if (is_mt7622(&dev->mt76)) {
+ 		/* 2GHz only */
+-		dev->mt76.cap.has_2ghz = true;
++		dev->mphy.cap.has_2ghz = true;
+ 		return;
+ 	}
+ 
+ 	if (is_mt7611(&dev->mt76)) {
+ 		/* 5GHz only */
+-		dev->mt76.cap.has_5ghz = true;
++		dev->mphy.cap.has_5ghz = true;
+ 		return;
+ 	}
+ 
+@@ -121,17 +121,17 @@ mt7615_eeprom_parse_hw_band_cap(struct mt7615_dev *dev)
+ 			eeprom[MT_EE_WIFI_CONF]);
+ 	switch (val) {
+ 	case MT_EE_5GHZ:
+-		dev->mt76.cap.has_5ghz = true;
+-		break;
+-	case MT_EE_2GHZ:
+-		dev->mt76.cap.has_2ghz = true;
++		dev->mphy.cap.has_5ghz = true;
+ 		break;
+ 	case MT_EE_DBDC:
+ 		dev->dbdc_support = true;
+-		/* fall through */
++		fallthrough;
++	case MT_EE_2GHZ:
++		dev->mphy.cap.has_2ghz = true;
++		break;
+ 	default:
+-		dev->mt76.cap.has_2ghz = true;
+-		dev->mt76.cap.has_5ghz = true;
++		dev->mphy.cap.has_2ghz = true;
++		dev->mphy.cap.has_5ghz = true;
+ 		break;
+ 	}
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt76x0/eeprom.c
+index 9087607b621e8..ebf4c96532d31 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x0/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x0/eeprom.c
+@@ -52,15 +52,15 @@ static void mt76x0_set_chip_cap(struct mt76x02_dev *dev)
+ 
+ 	mt76x02_eeprom_parse_hw_cap(dev);
+ 	dev_dbg(dev->mt76.dev, "2GHz %d 5GHz %d\n",
+-		dev->mt76.cap.has_2ghz, dev->mt76.cap.has_5ghz);
++		dev->mphy.cap.has_2ghz, dev->mphy.cap.has_5ghz);
+ 
+ 	if (dev->no_2ghz) {
+-		dev->mt76.cap.has_2ghz = false;
++		dev->mphy.cap.has_2ghz = false;
+ 		dev_dbg(dev->mt76.dev, "mask out 2GHz support\n");
+ 	}
+ 
+ 	if (is_mt7630(dev)) {
+-		dev->mt76.cap.has_5ghz = false;
++		dev->mphy.cap.has_5ghz = false;
+ 		dev_dbg(dev->mt76.dev, "mask out 5GHz support\n");
+ 	}
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/init.c b/drivers/net/wireless/mediatek/mt76/mt76x0/init.c
+index d78866bf41ba3..0bac39bf3b66d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x0/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x0/init.c
+@@ -245,7 +245,7 @@ int mt76x0_register_device(struct mt76x02_dev *dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (dev->mt76.cap.has_5ghz) {
++	if (dev->mphy.cap.has_5ghz) {
+ 		struct ieee80211_supported_band *sband;
+ 
+ 		sband = &dev->mphy.sband_5g.sband;
+@@ -253,7 +253,7 @@ int mt76x0_register_device(struct mt76x02_dev *dev)
+ 		mt76x0_init_txpower(dev, sband);
+ 	}
+ 
+-	if (dev->mt76.cap.has_2ghz)
++	if (dev->mphy.cap.has_2ghz)
+ 		mt76x0_init_txpower(dev, &dev->mphy.sband_2g.sband);
+ 
+ 	mt76x02_init_debugfs(dev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c b/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c
+index 3de33aadf7941..e91c314cdfac5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c
+@@ -447,11 +447,11 @@ static void mt76x0_phy_ant_select(struct mt76x02_dev *dev)
+ 		else
+ 			coex3 |= BIT(4);
+ 		coex3 |= BIT(3);
+-		if (dev->mt76.cap.has_2ghz)
++		if (dev->mphy.cap.has_2ghz)
+ 			wlan |= BIT(6);
+ 	} else {
+ 		/* sigle antenna mode */
+-		if (dev->mt76.cap.has_5ghz) {
++		if (dev->mphy.cap.has_5ghz) {
+ 			coex3 |= BIT(3) | BIT(4);
+ 		} else {
+ 			wlan |= BIT(6);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c b/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c
+index c54c50fd639a9..0acabba2d1a50 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c
+@@ -75,14 +75,14 @@ void mt76x02_eeprom_parse_hw_cap(struct mt76x02_dev *dev)
+ 
+ 	switch (FIELD_GET(MT_EE_NIC_CONF_0_BOARD_TYPE, val)) {
+ 	case BOARD_TYPE_5GHZ:
+-		dev->mt76.cap.has_5ghz = true;
++		dev->mphy.cap.has_5ghz = true;
+ 		break;
+ 	case BOARD_TYPE_2GHZ:
+-		dev->mt76.cap.has_2ghz = true;
++		dev->mphy.cap.has_2ghz = true;
+ 		break;
+ 	default:
+-		dev->mt76.cap.has_2ghz = true;
+-		dev->mt76.cap.has_5ghz = true;
++		dev->mphy.cap.has_2ghz = true;
++		dev->mphy.cap.has_5ghz = true;
+ 		break;
+ 	}
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+index e4c5f968f706d..5f6c527611f20 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+@@ -57,14 +57,14 @@ static void mt7915_eeprom_parse_hw_cap(struct mt7915_dev *dev)
+ 	val = FIELD_GET(MT_EE_WIFI_CONF_BAND_SEL, val);
+ 	switch (val) {
+ 	case MT_EE_5GHZ:
+-		dev->mt76.cap.has_5ghz = true;
++		dev->mphy.cap.has_5ghz = true;
+ 		break;
+ 	case MT_EE_2GHZ:
+-		dev->mt76.cap.has_2ghz = true;
++		dev->mphy.cap.has_2ghz = true;
+ 		break;
+ 	default:
+-		dev->mt76.cap.has_2ghz = true;
+-		dev->mt76.cap.has_5ghz = true;
++		dev->mphy.cap.has_2ghz = true;
++		dev->mphy.cap.has_5ghz = true;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index 8f01ca1694bca..99683688a8363 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -528,10 +528,9 @@ void mt7915_set_stream_he_caps(struct mt7915_phy *phy)
+ {
+ 	struct ieee80211_sband_iftype_data *data;
+ 	struct ieee80211_supported_band *band;
+-	struct mt76_dev *mdev = &phy->dev->mt76;
+ 	int n;
+ 
+-	if (mdev->cap.has_2ghz) {
++	if (phy->mt76->cap.has_2ghz) {
+ 		data = phy->iftype[NL80211_BAND_2GHZ];
+ 		n = mt7915_init_he_caps(phy, NL80211_BAND_2GHZ, data);
+ 
+@@ -540,7 +539,7 @@ void mt7915_set_stream_he_caps(struct mt7915_phy *phy)
+ 		band->n_iftype_data = n;
+ 	}
+ 
+-	if (mdev->cap.has_5ghz) {
++	if (phy->mt76->cap.has_5ghz) {
+ 		data = phy->iftype[NL80211_BAND_5GHZ];
+ 		n = mt7915_init_he_caps(phy, NL80211_BAND_5GHZ, data);
+ 
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 51da8ba67d216..7a3cf8aaec256 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -192,12 +192,39 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)
+ 	link->clkpm_disable = blacklist ? 1 : 0;
+ }
+ 
+-static bool pcie_retrain_link(struct pcie_link_state *link)
++static int pcie_wait_for_retrain(struct pci_dev *pdev)
+ {
+-	struct pci_dev *parent = link->pdev;
+ 	unsigned long end_jiffies;
+ 	u16 reg16;
+ 
++	/* Wait for Link Training to be cleared by hardware */
++	end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
++	do {
++		pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &reg16);
++		if (!(reg16 & PCI_EXP_LNKSTA_LT))
++			return 0;
++		msleep(1);
++	} while (time_before(jiffies, end_jiffies));
++
++	return -ETIMEDOUT;
++}
++
++static int pcie_retrain_link(struct pcie_link_state *link)
++{
++	struct pci_dev *parent = link->pdev;
++	int rc;
++	u16 reg16;
++
++	/*
++	 * Ensure the updated LNKCTL parameters are used during link
++	 * training by checking that there is no ongoing link training to
++	 * avoid LTSSM race as recommended in Implementation Note at the
++	 * end of PCIe r6.0.1 sec 7.5.3.7.
++	 */
++	rc = pcie_wait_for_retrain(parent);
++	if (rc)
++		return rc;
++
+ 	pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
+ 	reg16 |= PCI_EXP_LNKCTL_RL;
+ 	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
+@@ -211,15 +238,7 @@ static bool pcie_retrain_link(struct pcie_link_state *link)
+ 		pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
+ 	}
+ 
+-	/* Wait for link training end. Break out after waiting for timeout */
+-	end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
+-	do {
+-		pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16);
+-		if (!(reg16 & PCI_EXP_LNKSTA_LT))
+-			break;
+-		msleep(1);
+-	} while (time_before(jiffies, end_jiffies));
+-	return !(reg16 & PCI_EXP_LNKSTA_LT);
++	return pcie_wait_for_retrain(parent);
+ }
+ 
+ /*
+@@ -288,15 +307,15 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
+ 		reg16 &= ~PCI_EXP_LNKCTL_CCC;
+ 	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
+ 
+-	if (pcie_retrain_link(link))
+-		return;
++	if (pcie_retrain_link(link)) {
+ 
+-	/* Training failed. Restore common clock configurations */
+-	pci_err(parent, "ASPM: Could not configure common clock\n");
+-	list_for_each_entry(child, &linkbus->devices, bus_list)
+-		pcie_capability_write_word(child, PCI_EXP_LNKCTL,
++		/* Training failed. Restore common clock configurations */
++		pci_err(parent, "ASPM: Could not configure common clock\n");
++		list_for_each_entry(child, &linkbus->devices, bus_list)
++			pcie_capability_write_word(child, PCI_EXP_LNKCTL,
+ 					   child_reg[PCI_FUNC(child->devfn)]);
+-	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg);
++		pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg);
++	}
+ }
+ 
+ /* Convert L0s latency encoding to ns */
+diff --git a/drivers/phy/hisilicon/phy-hisi-inno-usb2.c b/drivers/phy/hisilicon/phy-hisi-inno-usb2.c
+index 34a6a9a1ceb25..897c6bb4cbb8c 100644
+--- a/drivers/phy/hisilicon/phy-hisi-inno-usb2.c
++++ b/drivers/phy/hisilicon/phy-hisi-inno-usb2.c
+@@ -153,7 +153,7 @@ static int hisi_inno_phy_probe(struct platform_device *pdev)
+ 		phy_set_drvdata(phy, &priv->ports[i]);
+ 		i++;
+ 
+-		if (i > INNO_PHY_PORT_NUM) {
++		if (i >= INNO_PHY_PORT_NUM) {
+ 			dev_warn(dev, "Support %d ports in maximum\n", i);
+ 			break;
+ 		}
+diff --git a/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c b/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
+index 7e61202aa234e..abb9264569336 100644
+--- a/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
++++ b/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
+@@ -68,23 +68,27 @@ static const char * const qcom_snps_hsphy_vreg_names[] = {
+ /**
+  * struct qcom_snps_hsphy - snps hs phy attributes
+  *
++ * @dev: device structure
++ *
+  * @phy: generic phy
+  * @base: iomapped memory space for snps hs phy
+  *
+- * @cfg_ahb_clk: AHB2PHY interface clock
+- * @ref_clk: phy reference clock
+- * @iface_clk: phy interface clock
++ * @num_clks: number of clocks
++ * @clks: array of clocks
+  * @phy_reset: phy reset control
+  * @vregs: regulator supplies bulk data
+  * @phy_initialized: if PHY has been initialized correctly
+  * @mode: contains the current mode the PHY is in
++ * @update_seq_cfg: tuning parameters for phy init
+  */
+ struct qcom_snps_hsphy {
++	struct device *dev;
++
+ 	struct phy *phy;
+ 	void __iomem *base;
+ 
+-	struct clk *cfg_ahb_clk;
+-	struct clk *ref_clk;
++	int num_clks;
++	struct clk_bulk_data *clks;
+ 	struct reset_control *phy_reset;
+ 	struct regulator_bulk_data vregs[SNPS_HS_NUM_VREGS];
+ 
+@@ -92,6 +96,34 @@ struct qcom_snps_hsphy {
+ 	enum phy_mode mode;
+ };
+ 
++static int qcom_snps_hsphy_clk_init(struct qcom_snps_hsphy *hsphy)
++{
++	struct device *dev = hsphy->dev;
++
++	hsphy->num_clks = 2;
++	hsphy->clks = devm_kcalloc(dev, hsphy->num_clks, sizeof(*hsphy->clks), GFP_KERNEL);
++	if (!hsphy->clks)
++		return -ENOMEM;
++
++	/*
++	 * TODO: Currently no device tree instantiation of the PHY is using the clock.
++	 * This needs to be fixed in order for this code to be able to use devm_clk_bulk_get().
++	 */
++	hsphy->clks[0].id = "cfg_ahb";
++	hsphy->clks[0].clk = devm_clk_get_optional(dev, "cfg_ahb");
++	if (IS_ERR(hsphy->clks[0].clk))
++		return dev_err_probe(dev, PTR_ERR(hsphy->clks[0].clk),
++				     "failed to get cfg_ahb clk\n");
++
++	hsphy->clks[1].id = "ref";
++	hsphy->clks[1].clk = devm_clk_get(dev, "ref");
++	if (IS_ERR(hsphy->clks[1].clk))
++		return dev_err_probe(dev, PTR_ERR(hsphy->clks[1].clk),
++				     "failed to get ref clk\n");
++
++	return 0;
++}
++
+ static inline void qcom_snps_hsphy_write_mask(void __iomem *base, u32 offset,
+ 						u32 mask, u32 val)
+ {
+@@ -122,22 +154,13 @@ static int qcom_snps_hsphy_suspend(struct qcom_snps_hsphy *hsphy)
+ 					   0, USB2_AUTO_RESUME);
+ 	}
+ 
+-	clk_disable_unprepare(hsphy->cfg_ahb_clk);
+ 	return 0;
+ }
+ 
+ static int qcom_snps_hsphy_resume(struct qcom_snps_hsphy *hsphy)
+ {
+-	int ret;
+-
+ 	dev_dbg(&hsphy->phy->dev, "Resume QCOM SNPS PHY, mode\n");
+ 
+-	ret = clk_prepare_enable(hsphy->cfg_ahb_clk);
+-	if (ret) {
+-		dev_err(&hsphy->phy->dev, "failed to enable cfg ahb clock\n");
+-		return ret;
+-	}
+-
+ 	return 0;
+ }
+ 
+@@ -183,16 +206,16 @@ static int qcom_snps_hsphy_init(struct phy *phy)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = clk_prepare_enable(hsphy->cfg_ahb_clk);
++	ret = clk_bulk_prepare_enable(hsphy->num_clks, hsphy->clks);
+ 	if (ret) {
+-		dev_err(&phy->dev, "failed to enable cfg ahb clock, %d\n", ret);
++		dev_err(&phy->dev, "failed to enable clocks, %d\n", ret);
+ 		goto poweroff_phy;
+ 	}
+ 
+ 	ret = reset_control_assert(hsphy->phy_reset);
+ 	if (ret) {
+ 		dev_err(&phy->dev, "failed to assert phy_reset, %d\n", ret);
+-		goto disable_ahb_clk;
++		goto disable_clks;
+ 	}
+ 
+ 	usleep_range(100, 150);
+@@ -200,7 +223,7 @@ static int qcom_snps_hsphy_init(struct phy *phy)
+ 	ret = reset_control_deassert(hsphy->phy_reset);
+ 	if (ret) {
+ 		dev_err(&phy->dev, "failed to de-assert phy_reset, %d\n", ret);
+-		goto disable_ahb_clk;
++		goto disable_clks;
+ 	}
+ 
+ 	qcom_snps_hsphy_write_mask(hsphy->base, USB2_PHY_USB_PHY_CFG0,
+@@ -246,8 +269,8 @@ static int qcom_snps_hsphy_init(struct phy *phy)
+ 
+ 	return 0;
+ 
+-disable_ahb_clk:
+-	clk_disable_unprepare(hsphy->cfg_ahb_clk);
++disable_clks:
++	clk_bulk_disable_unprepare(hsphy->num_clks, hsphy->clks);
+ poweroff_phy:
+ 	regulator_bulk_disable(ARRAY_SIZE(hsphy->vregs), hsphy->vregs);
+ 
+@@ -259,7 +282,7 @@ static int qcom_snps_hsphy_exit(struct phy *phy)
+ 	struct qcom_snps_hsphy *hsphy = phy_get_drvdata(phy);
+ 
+ 	reset_control_assert(hsphy->phy_reset);
+-	clk_disable_unprepare(hsphy->cfg_ahb_clk);
++	clk_bulk_disable_unprepare(hsphy->num_clks, hsphy->clks);
+ 	regulator_bulk_disable(ARRAY_SIZE(hsphy->vregs), hsphy->vregs);
+ 	hsphy->phy_initialized = false;
+ 
+@@ -299,17 +322,15 @@ static int qcom_snps_hsphy_probe(struct platform_device *pdev)
+ 	if (!hsphy)
+ 		return -ENOMEM;
+ 
++	hsphy->dev = dev;
++
+ 	hsphy->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(hsphy->base))
+ 		return PTR_ERR(hsphy->base);
+ 
+-	hsphy->ref_clk = devm_clk_get(dev, "ref");
+-	if (IS_ERR(hsphy->ref_clk)) {
+-		ret = PTR_ERR(hsphy->ref_clk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get ref clk, %d\n", ret);
+-		return ret;
+-	}
++	ret = qcom_snps_hsphy_clk_init(hsphy);
++	if (ret)
++		return dev_err_probe(dev, ret, "failed to initialize clocks\n");
+ 
+ 	hsphy->phy_reset = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+ 	if (IS_ERR(hsphy->phy_reset)) {
+@@ -322,12 +343,9 @@ static int qcom_snps_hsphy_probe(struct platform_device *pdev)
+ 		hsphy->vregs[i].supply = qcom_snps_hsphy_vreg_names[i];
+ 
+ 	ret = devm_regulator_bulk_get(dev, num, hsphy->vregs);
+-	if (ret) {
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get regulator supplies: %d\n",
+-				ret);
+-		return ret;
+-	}
++	if (ret)
++		return dev_err_probe(dev, ret,
++				     "failed to get regulator supplies\n");
+ 
+ 	pm_runtime_set_active(dev);
+ 	pm_runtime_enable(dev);
+diff --git a/drivers/platform/x86/msi-laptop.c b/drivers/platform/x86/msi-laptop.c
+index 0e804b6c2d242..dfb4af759aa75 100644
+--- a/drivers/platform/x86/msi-laptop.c
++++ b/drivers/platform/x86/msi-laptop.c
+@@ -210,7 +210,7 @@ static ssize_t set_device_state(const char *buf, size_t count, u8 mask)
+ 		return -EINVAL;
+ 
+ 	if (quirks->ec_read_only)
+-		return -EOPNOTSUPP;
++		return 0;
+ 
+ 	/* read current device state */
+ 	result = ec_read(MSI_STANDARD_EC_COMMAND_ADDRESS, &rdata);
+@@ -841,15 +841,15 @@ static bool msi_laptop_i8042_filter(unsigned char data, unsigned char str,
+ static void msi_init_rfkill(struct work_struct *ignored)
+ {
+ 	if (rfk_wlan) {
+-		rfkill_set_sw_state(rfk_wlan, !wlan_s);
++		msi_rfkill_set_state(rfk_wlan, !wlan_s);
+ 		rfkill_wlan_set(NULL, !wlan_s);
+ 	}
+ 	if (rfk_bluetooth) {
+-		rfkill_set_sw_state(rfk_bluetooth, !bluetooth_s);
++		msi_rfkill_set_state(rfk_bluetooth, !bluetooth_s);
+ 		rfkill_bluetooth_set(NULL, !bluetooth_s);
+ 	}
+ 	if (rfk_threeg) {
+-		rfkill_set_sw_state(rfk_threeg, !threeg_s);
++		msi_rfkill_set_state(rfk_threeg, !threeg_s);
+ 		rfkill_threeg_set(NULL, !threeg_s);
+ 	}
+ }
+diff --git a/drivers/pwm/pwm-meson.c b/drivers/pwm/pwm-meson.c
+index 0283163ddbe8e..5b5fd16713501 100644
+--- a/drivers/pwm/pwm-meson.c
++++ b/drivers/pwm/pwm-meson.c
+@@ -147,12 +147,13 @@ static int meson_pwm_request(struct pwm_chip *chip, struct pwm_device *pwm)
+ 		return err;
+ 	}
+ 
+-	return pwm_set_chip_data(pwm, channel);
++	return 0;
+ }
+ 
+ static void meson_pwm_free(struct pwm_chip *chip, struct pwm_device *pwm)
+ {
+-	struct meson_pwm_channel *channel = pwm_get_chip_data(pwm);
++	struct meson_pwm *meson = to_meson_pwm(chip);
++	struct meson_pwm_channel *channel = &meson->channels[pwm->hwpwm];
+ 
+ 	if (channel)
+ 		clk_disable_unprepare(channel->clk);
+@@ -161,9 +162,10 @@ static void meson_pwm_free(struct pwm_chip *chip, struct pwm_device *pwm)
+ static int meson_pwm_calc(struct meson_pwm *meson, struct pwm_device *pwm,
+ 			  const struct pwm_state *state)
+ {
+-	struct meson_pwm_channel *channel = pwm_get_chip_data(pwm);
+-	unsigned int duty, period, pre_div, cnt, duty_cnt;
++	struct meson_pwm_channel *channel = &meson->channels[pwm->hwpwm];
++	unsigned int pre_div, cnt, duty_cnt;
+ 	unsigned long fin_freq;
++	u64 duty, period;
+ 
+ 	duty = state->duty_cycle;
+ 	period = state->period;
+@@ -185,19 +187,19 @@ static int meson_pwm_calc(struct meson_pwm *meson, struct pwm_device *pwm,
+ 
+ 	dev_dbg(meson->chip.dev, "fin_freq: %lu Hz\n", fin_freq);
+ 
+-	pre_div = div64_u64(fin_freq * (u64)period, NSEC_PER_SEC * 0xffffLL);
++	pre_div = div64_u64(fin_freq * period, NSEC_PER_SEC * 0xffffLL);
+ 	if (pre_div > MISC_CLK_DIV_MASK) {
+ 		dev_err(meson->chip.dev, "unable to get period pre_div\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	cnt = div64_u64(fin_freq * (u64)period, NSEC_PER_SEC * (pre_div + 1));
++	cnt = div64_u64(fin_freq * period, NSEC_PER_SEC * (pre_div + 1));
+ 	if (cnt > 0xffff) {
+ 		dev_err(meson->chip.dev, "unable to get period cnt\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	dev_dbg(meson->chip.dev, "period=%u pre_div=%u cnt=%u\n", period,
++	dev_dbg(meson->chip.dev, "period=%llu pre_div=%u cnt=%u\n", period,
+ 		pre_div, cnt);
+ 
+ 	if (duty == period) {
+@@ -210,14 +212,13 @@ static int meson_pwm_calc(struct meson_pwm *meson, struct pwm_device *pwm,
+ 		channel->lo = cnt;
+ 	} else {
+ 		/* Then check is we can have the duty with the same pre_div */
+-		duty_cnt = div64_u64(fin_freq * (u64)duty,
+-				     NSEC_PER_SEC * (pre_div + 1));
++		duty_cnt = div64_u64(fin_freq * duty, NSEC_PER_SEC * (pre_div + 1));
+ 		if (duty_cnt > 0xffff) {
+ 			dev_err(meson->chip.dev, "unable to get duty cycle\n");
+ 			return -EINVAL;
+ 		}
+ 
+-		dev_dbg(meson->chip.dev, "duty=%u pre_div=%u duty_cnt=%u\n",
++		dev_dbg(meson->chip.dev, "duty=%llu pre_div=%u duty_cnt=%u\n",
+ 			duty, pre_div, duty_cnt);
+ 
+ 		channel->pre_div = pre_div;
+@@ -230,7 +231,7 @@ static int meson_pwm_calc(struct meson_pwm *meson, struct pwm_device *pwm,
+ 
+ static void meson_pwm_enable(struct meson_pwm *meson, struct pwm_device *pwm)
+ {
+-	struct meson_pwm_channel *channel = pwm_get_chip_data(pwm);
++	struct meson_pwm_channel *channel = &meson->channels[pwm->hwpwm];
+ 	struct meson_pwm_channel_data *channel_data;
+ 	unsigned long flags;
+ 	u32 value;
+@@ -273,8 +274,8 @@ static void meson_pwm_disable(struct meson_pwm *meson, struct pwm_device *pwm)
+ static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 			   const struct pwm_state *state)
+ {
+-	struct meson_pwm_channel *channel = pwm_get_chip_data(pwm);
+ 	struct meson_pwm *meson = to_meson_pwm(chip);
++	struct meson_pwm_channel *channel = &meson->channels[pwm->hwpwm];
+ 	int err = 0;
+ 
+ 	if (!state)
+diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
+index 6d5c9cb83592f..99b1b01e23e95 100644
+--- a/drivers/s390/block/dasd_ioctl.c
++++ b/drivers/s390/block/dasd_ioctl.c
+@@ -133,6 +133,7 @@ static int dasd_ioctl_resume(struct dasd_block *block)
+ 	spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), flags);
+ 
+ 	dasd_schedule_block_bh(block);
++	dasd_schedule_device_bh(base);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
+index bf8404b0e74ff..2544edd4d2b56 100644
+--- a/drivers/s390/net/qeth_core.h
++++ b/drivers/s390/net/qeth_core.h
+@@ -719,7 +719,6 @@ struct qeth_card_info {
+ 	u16 chid;
+ 	u8 ids_valid:1; /* cssid,iid,chid */
+ 	u8 dev_addr_is_registered:1;
+-	u8 open_when_online:1;
+ 	u8 promisc_mode:1;
+ 	u8 use_v1_blkt:1;
+ 	u8 is_vm_nic:1;
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index 7b0155b0e99ee..73d564906d043 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -5351,8 +5351,6 @@ int qeth_set_offline(struct qeth_card *card, const struct qeth_discipline *disc,
+ 	qeth_clear_ipacmd_list(card);
+ 
+ 	rtnl_lock();
+-	card->info.open_when_online = card->dev->flags & IFF_UP;
+-	dev_close(card->dev);
+ 	netif_device_detach(card->dev);
+ 	netif_carrier_off(card->dev);
+ 	rtnl_unlock();
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index cfc931f2b7e2c..1797addf69b63 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -2270,9 +2270,12 @@ static int qeth_l2_set_online(struct qeth_card *card, bool carrier_ok)
+ 		qeth_enable_hw_features(dev);
+ 		qeth_l2_enable_brport_features(card);
+ 
+-		if (card->info.open_when_online) {
+-			card->info.open_when_online = 0;
+-			dev_open(dev, NULL);
++		if (netif_running(dev)) {
++			local_bh_disable();
++			napi_schedule(&card->napi);
++			/* kick-start the NAPI softirq: */
++			local_bh_enable();
++			qeth_l2_set_rx_mode(dev);
+ 		}
+ 		rtnl_unlock();
+ 	}
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index 291861c9b9569..d8cdf90241268 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -2037,9 +2037,11 @@ static int qeth_l3_set_online(struct qeth_card *card, bool carrier_ok)
+ 		netif_device_attach(dev);
+ 		qeth_enable_hw_features(dev);
+ 
+-		if (card->info.open_when_online) {
+-			card->info.open_when_online = 0;
+-			dev_open(dev, NULL);
++		if (netif_running(dev)) {
++			local_bh_disable();
++			napi_schedule(&card->napi);
++			/* kick-start the NAPI softirq: */
++			local_bh_enable();
+ 		}
+ 		rtnl_unlock();
+ 	}
+diff --git a/drivers/s390/scsi/zfcp_fc.c b/drivers/s390/scsi/zfcp_fc.c
+index b61acbb09be3b..d323f9985c482 100644
+--- a/drivers/s390/scsi/zfcp_fc.c
++++ b/drivers/s390/scsi/zfcp_fc.c
+@@ -534,8 +534,7 @@ static void zfcp_fc_adisc_handler(void *data)
+ 
+ 	/* re-init to undo drop from zfcp_fc_adisc() */
+ 	port->d_id = ntoh24(adisc_resp->adisc_port_id);
+-	/* port is good, unblock rport without going through erp */
+-	zfcp_scsi_schedule_rport_register(port);
++	/* port is still good, nothing to do */
+  out:
+ 	atomic_andnot(ZFCP_STATUS_PORT_LINK_TEST, &port->status);
+ 	put_device(&port->dev);
+@@ -595,9 +594,6 @@ void zfcp_fc_link_test_work(struct work_struct *work)
+ 	int retval;
+ 
+ 	set_worker_desc("zadisc%16llx", port->wwpn); /* < WORKER_DESC_LEN=24 */
+-	get_device(&port->dev);
+-	port->rport_task = RPORT_DEL;
+-	zfcp_scsi_rport_work(&port->rport_work);
+ 
+ 	/* only issue one test command at one time per port */
+ 	if (atomic_read(&port->status) & ZFCP_STATUS_PORT_LINK_TEST)
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index 3317a02bcc170..0e0a19030c35b 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -797,19 +797,19 @@ static void sdw_modify_slave_status(struct sdw_slave *slave,
+ 
+ 	if (status == SDW_SLAVE_UNATTACHED) {
+ 		dev_dbg(&slave->dev,
+-			"%s: initializing completion for Slave %d\n",
++			"%s: initializing enumeration and init completion for Slave %d\n",
+ 			__func__, slave->dev_num);
+ 
+-		init_completion(&slave->enumeration_complete);
+-		init_completion(&slave->initialization_complete);
++		reinit_completion(&slave->enumeration_complete);
++		reinit_completion(&slave->initialization_complete);
+ 
+ 	} else if ((status == SDW_SLAVE_ATTACHED) &&
+ 		   (slave->status == SDW_SLAVE_UNATTACHED)) {
+ 		dev_dbg(&slave->dev,
+-			"%s: signaling completion for Slave %d\n",
++			"%s: signaling enumeration completion for Slave %d\n",
+ 			__func__, slave->dev_num);
+ 
+-		complete(&slave->enumeration_complete);
++		complete_all(&slave->enumeration_complete);
+ 	}
+ 	slave->status = status;
+ 	mutex_unlock(&slave->bus->bus_lock);
+@@ -1734,8 +1734,25 @@ int sdw_handle_slave_status(struct sdw_bus *bus,
+ 		if (ret)
+ 			dev_err(slave->bus->dev,
+ 				"Update Slave status failed:%d\n", ret);
+-		if (attached_initializing)
+-			complete(&slave->initialization_complete);
++		if (attached_initializing) {
++			dev_dbg(&slave->dev,
++				"%s: signaling initialization completion for Slave %d\n",
++				__func__, slave->dev_num);
++
++			complete_all(&slave->initialization_complete);
++
++			/*
++			 * If the manager became pm_runtime active, the peripherals will be
++			 * restarted and attach, but their pm_runtime status may remain
++			 * suspended. If the 'update_slave_status' callback initiates
++			 * any sort of deferred processing, this processing would not be
++			 * cancelled on pm_runtime suspend.
++			 * To avoid such zombie states, we queue a request to resume.
++			 * This would be a no-op in case the peripheral was being resumed
++			 * by e.g. the ALSA/ASoC framework.
++			 */
++			pm_request_resume(&slave->dev);
++		}
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/staging/ks7010/ks_wlan_net.c b/drivers/staging/ks7010/ks_wlan_net.c
+index 09e7b4cd0138c..604882279adc9 100644
+--- a/drivers/staging/ks7010/ks_wlan_net.c
++++ b/drivers/staging/ks7010/ks_wlan_net.c
+@@ -1584,8 +1584,10 @@ static int ks_wlan_set_encode_ext(struct net_device *dev,
+ 			commit |= SME_WEP_FLAG;
+ 		}
+ 		if (enc->key_len) {
+-			memcpy(&key->key_val[0], &enc->key[0], enc->key_len);
+-			key->key_len = enc->key_len;
++			int key_len = clamp_val(enc->key_len, 0, IW_ENCODING_TOKEN_MAX);
++
++			memcpy(&key->key_val[0], &enc->key[0], key_len);
++			key->key_len = key_len;
+ 			commit |= (SME_WEP_VAL1 << index);
+ 		}
+ 		break;
+diff --git a/drivers/staging/media/atomisp/Kconfig b/drivers/staging/media/atomisp/Kconfig
+index 37577bb729980..1a0b958f1aa06 100644
+--- a/drivers/staging/media/atomisp/Kconfig
++++ b/drivers/staging/media/atomisp/Kconfig
+@@ -13,6 +13,7 @@ config VIDEO_ATOMISP
+ 	tristate "Intel Atom Image Signal Processor Driver"
+ 	depends on VIDEO_V4L2 && INTEL_ATOMISP
+ 	depends on PMIC_OPREGION
++	select V4L2_FWNODE
+ 	select IOSF_MBI
+ 	select VIDEOBUF_VMALLOC
+ 	help
+diff --git a/drivers/staging/rtl8712/ieee80211.c b/drivers/staging/rtl8712/ieee80211.c
+index b4a099169c7c8..8075ed2ba61ea 100644
+--- a/drivers/staging/rtl8712/ieee80211.c
++++ b/drivers/staging/rtl8712/ieee80211.c
+@@ -181,25 +181,25 @@ int r8712_generate_ie(struct registry_priv *registrypriv)
+ 	sz += 2;
+ 	ie += 2;
+ 	/*SSID*/
+-	ie = r8712_set_ie(ie, _SSID_IE_, dev_network->Ssid.SsidLength,
++	ie = r8712_set_ie(ie, WLAN_EID_SSID, dev_network->Ssid.SsidLength,
+ 			  dev_network->Ssid.Ssid, &sz);
+ 	/*supported rates*/
+ 	set_supported_rate(dev_network->rates, registrypriv->wireless_mode);
+ 	rate_len = r8712_get_rateset_len(dev_network->rates);
+ 	if (rate_len > 8) {
+-		ie = r8712_set_ie(ie, _SUPPORTEDRATES_IE_, 8,
++		ie = r8712_set_ie(ie, WLAN_EID_SUPP_RATES, 8,
+ 				  dev_network->rates, &sz);
+-		ie = r8712_set_ie(ie, _EXT_SUPPORTEDRATES_IE_, (rate_len - 8),
++		ie = r8712_set_ie(ie, WLAN_EID_EXT_SUPP_RATES, (rate_len - 8),
+ 				  (dev_network->rates + 8), &sz);
+ 	} else {
+-		ie = r8712_set_ie(ie, _SUPPORTEDRATES_IE_,
++		ie = r8712_set_ie(ie, WLAN_EID_SUPP_RATES,
+ 				  rate_len, dev_network->rates, &sz);
+ 	}
+ 	/*DS parameter set*/
+-	ie = r8712_set_ie(ie, _DSSET_IE_, 1,
++	ie = r8712_set_ie(ie, WLAN_EID_DS_PARAMS, 1,
+ 			  (u8 *)&dev_network->Configuration.DSConfig, &sz);
+ 	/*IBSS Parameter Set*/
+-	ie = r8712_set_ie(ie, _IBSS_PARA_IE_, 2,
++	ie = r8712_set_ie(ie, WLAN_EID_IBSS_PARAMS, 2,
+ 			  (u8 *)&dev_network->Configuration.ATIMWindow, &sz);
+ 	return sz;
+ }
+diff --git a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
+index 2a661b04cd255..15c6ac518c167 100644
+--- a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
++++ b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
+@@ -236,7 +236,7 @@ static char *translate_scan(struct _adapter *padapter,
+ 	start = iwe_stream_add_point(info, start, stop, &iwe,
+ 				     pnetwork->network.Ssid.Ssid);
+ 	/* parsing HT_CAP_IE */
+-	p = r8712_get_ie(&pnetwork->network.IEs[12], _HT_CAPABILITY_IE_,
++	p = r8712_get_ie(&pnetwork->network.IEs[12], WLAN_EID_HT_CAPABILITY,
+ 			 &ht_ielen, pnetwork->network.IELength - 12);
+ 	if (p && ht_ielen > 0)
+ 		ht_cap = true;
+@@ -567,7 +567,7 @@ static int r871x_set_wpa_ie(struct _adapter *padapter, char *pie,
+ 			while (cnt < ielen) {
+ 				eid = buf[cnt];
+ 
+-				if ((eid == _VENDOR_SPECIFIC_IE_) &&
++				if ((eid == WLAN_EID_VENDOR_SPECIFIC) &&
+ 				    (!memcmp(&buf[cnt + 2], wps_oui, 4))) {
+ 					netdev_info(padapter->pnetdev, "r8712u: SET WPS_IE\n");
+ 					padapter->securitypriv.wps_ie_len =
+@@ -609,7 +609,7 @@ static int r8711_wx_get_name(struct net_device *dev,
+ 	if (check_fwstate(pmlmepriv, _FW_LINKED | WIFI_ADHOC_MASTER_STATE) ==
+ 	    true) {
+ 		/* parsing HT_CAP_IE */
+-		p = r8712_get_ie(&pcur_bss->IEs[12], _HT_CAPABILITY_IE_,
++		p = r8712_get_ie(&pcur_bss->IEs[12], WLAN_EID_HT_CAPABILITY,
+ 				 &ht_ielen, pcur_bss->IELength - 12);
+ 		if (p && ht_ielen > 0)
+ 			ht_cap = true;
+@@ -1403,7 +1403,7 @@ static int r8711_wx_get_rate(struct net_device *dev,
+ 	i = 0;
+ 	if (!check_fwstate(pmlmepriv, _FW_LINKED | WIFI_ADHOC_MASTER_STATE))
+ 		return -ENOLINK;
+-	p = r8712_get_ie(&pcur_bss->IEs[12], _HT_CAPABILITY_IE_, &ht_ielen,
++	p = r8712_get_ie(&pcur_bss->IEs[12], WLAN_EID_HT_CAPABILITY, &ht_ielen,
+ 			 pcur_bss->IELength - 12);
+ 	if (p && ht_ielen > 0) {
+ 		ht_cap = true;
+diff --git a/drivers/staging/rtl8712/rtl871x_mlme.c b/drivers/staging/rtl8712/rtl871x_mlme.c
+index 6074383ec0b50..250cb0c4ed083 100644
+--- a/drivers/staging/rtl8712/rtl871x_mlme.c
++++ b/drivers/staging/rtl8712/rtl871x_mlme.c
+@@ -1649,11 +1649,11 @@ unsigned int r8712_restructure_ht_ie(struct _adapter *padapter, u8 *in_ie,
+ 	struct ht_priv *phtpriv = &pmlmepriv->htpriv;
+ 
+ 	phtpriv->ht_option = 0;
+-	p = r8712_get_ie(in_ie + 12, _HT_CAPABILITY_IE_, &ielen, in_len - 12);
++	p = r8712_get_ie(in_ie + 12, WLAN_EID_HT_CAPABILITY, &ielen, in_len - 12);
+ 	if (p && (ielen > 0)) {
+ 		if (pqospriv->qos_option == 0) {
+ 			out_len = *pout_len;
+-			r8712_set_ie(out_ie + out_len, _VENDOR_SPECIFIC_IE_,
++			r8712_set_ie(out_ie + out_len, WLAN_EID_VENDOR_SPECIFIC,
+ 				     _WMM_IE_Length_, WMM_IE, pout_len);
+ 			pqospriv->qos_option = 1;
+ 		}
+@@ -1667,7 +1667,7 @@ unsigned int r8712_restructure_ht_ie(struct _adapter *padapter, u8 *in_ie,
+ 				    IEEE80211_HT_CAP_DSSSCCK40);
+ 		ht_capie.ampdu_params_info = (IEEE80211_HT_AMPDU_PARM_FACTOR &
+ 				0x03) | (IEEE80211_HT_AMPDU_PARM_DENSITY & 0x00);
+-		r8712_set_ie(out_ie + out_len, _HT_CAPABILITY_IE_,
++		r8712_set_ie(out_ie + out_len, WLAN_EID_HT_CAPABILITY,
+ 			     sizeof(struct rtl_ieee80211_ht_cap),
+ 			     (unsigned char *)&ht_capie, pout_len);
+ 		phtpriv->ht_option = 1;
+@@ -1698,7 +1698,7 @@ static void update_ht_cap(struct _adapter *padapter, u8 *pie, uint ie_len)
+ 	/*check Max Rx A-MPDU Size*/
+ 	len = 0;
+ 	p = r8712_get_ie(pie + sizeof(struct NDIS_802_11_FIXED_IEs),
+-				_HT_CAPABILITY_IE_,
++				WLAN_EID_HT_CAPABILITY,
+ 				&len, ie_len -
+ 				sizeof(struct NDIS_802_11_FIXED_IEs));
+ 	if (p && len > 0) {
+@@ -1733,7 +1733,7 @@ static void update_ht_cap(struct _adapter *padapter, u8 *pie, uint ie_len)
+ 	}
+ 	len = 0;
+ 	p = r8712_get_ie(pie + sizeof(struct NDIS_802_11_FIXED_IEs),
+-		   _HT_ADD_INFO_IE_, &len,
++		   WLAN_EID_HT_OPERATION, &len,
+ 		   ie_len - sizeof(struct NDIS_802_11_FIXED_IEs));
+ }
+ 
+diff --git a/drivers/staging/rtl8712/rtl871x_xmit.c b/drivers/staging/rtl8712/rtl871x_xmit.c
+index fd99782a400a0..eb6493047aaf6 100644
+--- a/drivers/staging/rtl8712/rtl871x_xmit.c
++++ b/drivers/staging/rtl8712/rtl871x_xmit.c
+@@ -22,6 +22,8 @@
+ #include "osdep_intf.h"
+ #include "usb_ops.h"
+ 
++#include <linux/usb.h>
++#include <linux/ieee80211.h>
+ 
+ static const u8 P802_1H_OUI[P80211_OUI_LEN] = {0x00, 0x00, 0xf8};
+ static const u8 RFC1042_OUI[P80211_OUI_LEN] = {0x00, 0x00, 0x00};
+@@ -55,6 +57,7 @@ int _r8712_init_xmit_priv(struct xmit_priv *pxmitpriv,
+ 	sint i;
+ 	struct xmit_buf *pxmitbuf;
+ 	struct xmit_frame *pxframe;
++	int j;
+ 
+ 	memset((unsigned char *)pxmitpriv, 0, sizeof(struct xmit_priv));
+ 	spin_lock_init(&pxmitpriv->lock);
+@@ -117,11 +120,8 @@ int _r8712_init_xmit_priv(struct xmit_priv *pxmitpriv,
+ 	_init_queue(&pxmitpriv->pending_xmitbuf_queue);
+ 	pxmitpriv->pallocated_xmitbuf =
+ 		kmalloc(NR_XMITBUFF * sizeof(struct xmit_buf) + 4, GFP_ATOMIC);
+-	if (!pxmitpriv->pallocated_xmitbuf) {
+-		kfree(pxmitpriv->pallocated_frame_buf);
+-		pxmitpriv->pallocated_frame_buf = NULL;
+-		return -ENOMEM;
+-	}
++	if (!pxmitpriv->pallocated_xmitbuf)
++		goto clean_up_frame_buf;
+ 	pxmitpriv->pxmitbuf = pxmitpriv->pallocated_xmitbuf + 4 -
+ 			      ((addr_t)(pxmitpriv->pallocated_xmitbuf) & 3);
+ 	pxmitbuf = (struct xmit_buf *)pxmitpriv->pxmitbuf;
+@@ -129,13 +129,17 @@ int _r8712_init_xmit_priv(struct xmit_priv *pxmitpriv,
+ 		INIT_LIST_HEAD(&pxmitbuf->list);
+ 		pxmitbuf->pallocated_buf =
+ 			kmalloc(MAX_XMITBUF_SZ + XMITBUF_ALIGN_SZ, GFP_ATOMIC);
+-		if (!pxmitbuf->pallocated_buf)
+-			return -ENOMEM;
++		if (!pxmitbuf->pallocated_buf) {
++			j = 0;
++			goto clean_up_alloc_buf;
++		}
+ 		pxmitbuf->pbuf = pxmitbuf->pallocated_buf + XMITBUF_ALIGN_SZ -
+ 				 ((addr_t) (pxmitbuf->pallocated_buf) &
+ 				 (XMITBUF_ALIGN_SZ - 1));
+-		if (r8712_xmit_resource_alloc(padapter, pxmitbuf))
+-			return -ENOMEM;
++		if (r8712_xmit_resource_alloc(padapter, pxmitbuf)) {
++			j = 1;
++			goto clean_up_alloc_buf;
++		}
+ 		list_add_tail(&pxmitbuf->list,
+ 				 &(pxmitpriv->free_xmitbuf_queue.queue));
+ 		pxmitbuf++;
+@@ -146,6 +150,28 @@ int _r8712_init_xmit_priv(struct xmit_priv *pxmitpriv,
+ 	init_hwxmits(pxmitpriv->hwxmits, pxmitpriv->hwxmit_entry);
+ 	tasklet_setup(&pxmitpriv->xmit_tasklet, r8712_xmit_bh);
+ 	return 0;
++
++clean_up_alloc_buf:
++	if (j) {
++		/* failure happened in r8712_xmit_resource_alloc()
++		 * delete extra pxmitbuf->pallocated_buf
++		 */
++		kfree(pxmitbuf->pallocated_buf);
++	}
++	for (j = 0; j < i; j++) {
++		int k;
++
++		pxmitbuf--;			/* reset pointer */
++		kfree(pxmitbuf->pallocated_buf);
++		for (k = 0; k < 8; k++)		/* delete xmit urb's */
++			usb_free_urb(pxmitbuf->pxmit_urb[k]);
++	}
++	kfree(pxmitpriv->pallocated_xmitbuf);
++	pxmitpriv->pallocated_xmitbuf = NULL;
++clean_up_frame_buf:
++	kfree(pxmitpriv->pallocated_frame_buf);
++	pxmitpriv->pallocated_frame_buf = NULL;
++	return -ENOMEM;
+ }
+ 
+ void _free_xmit_priv(struct xmit_priv *pxmitpriv)
+@@ -709,7 +735,7 @@ void r8712_update_protection(struct _adapter *padapter, u8 *ie, uint ie_len)
+ 		break;
+ 	case AUTO_VCS:
+ 	default:
+-		perp = r8712_get_ie(ie, _ERPINFO_IE_, &erp_len, ie_len);
++		perp = r8712_get_ie(ie, WLAN_EID_ERP_INFO, &erp_len, ie_len);
+ 		if (!perp) {
+ 			pxmitpriv->vcs = NONE_VCS;
+ 		} else {
+diff --git a/drivers/staging/rtl8712/wifi.h b/drivers/staging/rtl8712/wifi.h
+index 601d4ff607bc8..9bb310b245899 100644
+--- a/drivers/staging/rtl8712/wifi.h
++++ b/drivers/staging/rtl8712/wifi.h
+@@ -374,21 +374,6 @@ static inline unsigned char *get_hdr_bssid(unsigned char *pframe)
+ 
+ #define _FIXED_IE_LENGTH_	_BEACON_IE_OFFSET_
+ 
+-#define _SSID_IE_		0
+-#define _SUPPORTEDRATES_IE_	1
+-#define _DSSET_IE_		3
+-#define _IBSS_PARA_IE_		6
+-#define _ERPINFO_IE_		42
+-#define _EXT_SUPPORTEDRATES_IE_	50
+-
+-#define _HT_CAPABILITY_IE_	45
+-#define _HT_EXTRA_INFO_IE_	61
+-#define _HT_ADD_INFO_IE_	61 /* _HT_EXTRA_INFO_IE_ */
+-
+-#define _VENDOR_SPECIFIC_IE_	221
+-
+-#define	_RESERVED47_		47
+-
+ /* ---------------------------------------------------------------------------
+  *			Below is the fixed elements...
+  * ---------------------------------------------------------------------------
+diff --git a/drivers/staging/rtl8712/xmit_linux.c b/drivers/staging/rtl8712/xmit_linux.c
+index 1f67d86c606f6..9050e51aa4079 100644
+--- a/drivers/staging/rtl8712/xmit_linux.c
++++ b/drivers/staging/rtl8712/xmit_linux.c
+@@ -119,6 +119,12 @@ int r8712_xmit_resource_alloc(struct _adapter *padapter,
+ 	for (i = 0; i < 8; i++) {
+ 		pxmitbuf->pxmit_urb[i] = usb_alloc_urb(0, GFP_KERNEL);
+ 		if (!pxmitbuf->pxmit_urb[i]) {
++			int k;
++
++			for (k = i - 1; k >= 0; k--) {
++				/* handle allocation errors part way through loop */
++				usb_free_urb(pxmitbuf->pxmit_urb[k]);
++			}
+ 			netdev_err(padapter->pnetdev, "pxmitbuf->pxmit_urb[i] == NULL\n");
+ 			return -ENOMEM;
+ 		}
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index 23b014b8c9199..d439afef92128 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -2178,8 +2178,10 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
+ 
+ 	/* Free up any link layer users and finally the control channel */
+ 	for (i = NUM_DLCI - 1; i >= 0; i--)
+-		if (gsm->dlci[i])
++		if (gsm->dlci[i]) {
+ 			gsm_dlci_release(gsm->dlci[i]);
++			gsm->dlci[i] = NULL;
++		}
+ 	mutex_unlock(&gsm->mutex);
+ 	/* Now wipe the queues */
+ 	tty_ldisc_flush(gsm->tty);
+diff --git a/drivers/tty/serial/8250/8250_dwlib.c b/drivers/tty/serial/8250/8250_dwlib.c
+index 6d6a78eead3ef..1cf229cca5928 100644
+--- a/drivers/tty/serial/8250/8250_dwlib.c
++++ b/drivers/tty/serial/8250/8250_dwlib.c
+@@ -80,7 +80,7 @@ static void dw8250_set_divisor(struct uart_port *p, unsigned int baud,
+ void dw8250_setup_port(struct uart_port *p)
+ {
+ 	struct uart_8250_port *up = up_to_u8250p(p);
+-	u32 reg;
++	u32 reg, old_dlf;
+ 
+ 	/*
+ 	 * If the Component Version Register returns zero, we know that
+@@ -93,9 +93,11 @@ void dw8250_setup_port(struct uart_port *p)
+ 	dev_dbg(p->dev, "Designware UART version %c.%c%c\n",
+ 		(reg >> 24) & 0xff, (reg >> 16) & 0xff, (reg >> 8) & 0xff);
+ 
++	/* Preserve value written by firmware or bootloader  */
++	old_dlf = dw8250_readl_ext(p, DW_UART_DLF);
+ 	dw8250_writel_ext(p, DW_UART_DLF, ~0U);
+ 	reg = dw8250_readl_ext(p, DW_UART_DLF);
+-	dw8250_writel_ext(p, DW_UART_DLF, 0);
++	dw8250_writel_ext(p, DW_UART_DLF, old_dlf);
+ 
+ 	if (reg) {
+ 		struct dw8250_port_data *d = p->private_data;
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index f50ffc8076d8b..65a0c4e2bb29c 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -1468,13 +1468,6 @@ static int qcom_geni_serial_probe(struct platform_device *pdev)
+ 		goto err;
+ 	}
+ 
+-	/*
+-	 * Set pm_runtime status as ACTIVE so that wakeup_irq gets
+-	 * enabled/disabled from dev_pm_arm_wake_irq during system
+-	 * suspend/resume respectively.
+-	 */
+-	pm_runtime_set_active(&pdev->dev);
+-
+ 	if (port->wakeup_irq > 0) {
+ 		device_init_wakeup(&pdev->dev, true);
+ 		ret = dev_pm_set_dedicated_wake_irq(&pdev->dev,
+diff --git a/drivers/tty/serial/sifive.c b/drivers/tty/serial/sifive.c
+index 91952be010740..c234114b50527 100644
+--- a/drivers/tty/serial/sifive.c
++++ b/drivers/tty/serial/sifive.c
+@@ -844,7 +844,7 @@ static void sifive_serial_console_write(struct console *co, const char *s,
+ 	local_irq_restore(flags);
+ }
+ 
+-static int __init sifive_serial_console_setup(struct console *co, char *options)
++static int sifive_serial_console_setup(struct console *co, char *options)
+ {
+ 	struct sifive_serial_port *ssp;
+ 	int baud = SIFIVE_DEFAULT_BAUD_RATE;
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 4ac1c22f13be0..856947620f140 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -437,6 +437,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* novation SoundControl XL */
+ 	{ USB_DEVICE(0x1235, 0x0061), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
++	/* Focusrite Scarlett Solo USB */
++	{ USB_DEVICE(0x1235, 0x8211), .driver_info =
++			USB_QUIRK_DISCONNECT_SUSPEND },
++
+ 	/* Huawei 4G LTE module */
+ 	{ USB_DEVICE(0x12d1, 0x15bb), .driver_info =
+ 			USB_QUIRK_DISCONNECT_SUSPEND },
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 5709b959b1d93..b02b10acb8842 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -275,9 +275,9 @@ int dwc3_core_soft_reset(struct dwc3 *dwc)
+ 	/*
+ 	 * We're resetting only the device side because, if we're in host mode,
+ 	 * XHCI driver will reset the host block. If dwc3 was configured for
+-	 * host-only mode, then we can return early.
++	 * host-only mode or current role is host, then we can return early.
+ 	 */
+-	if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST)
++	if (dwc->dr_mode == USB_DR_MODE_HOST || dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST)
+ 		return 0;
+ 
+ 	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+@@ -1066,22 +1066,6 @@ static int dwc3_core_init(struct dwc3 *dwc)
+ 		dwc3_writel(dwc->regs, DWC3_GUCTL1, reg);
+ 	}
+ 
+-	if (dwc->dr_mode == USB_DR_MODE_HOST ||
+-	    dwc->dr_mode == USB_DR_MODE_OTG) {
+-		reg = dwc3_readl(dwc->regs, DWC3_GUCTL);
+-
+-		/*
+-		 * Enable Auto retry Feature to make the controller operating in
+-		 * Host mode on seeing transaction errors(CRC errors or internal
+-		 * overrun scenerios) on IN transfers to reply to the device
+-		 * with a non-terminating retry ACK (i.e, an ACK transcation
+-		 * packet with Retry=1 & Nump != 0)
+-		 */
+-		reg |= DWC3_GUCTL_HSTINAUTORETRY;
+-
+-		dwc3_writel(dwc->regs, DWC3_GUCTL, reg);
+-	}
+-
+ 	/*
+ 	 * Must config both number of packets and max burst settings to enable
+ 	 * RX and/or TX threshold.
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index cbebe541f7e8f..291893d274297 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -249,9 +249,6 @@
+ #define DWC3_GCTL_GBLHIBERNATIONEN	BIT(1)
+ #define DWC3_GCTL_DSBLCLKGTNG		BIT(0)
+ 
+-/* Global User Control Register */
+-#define DWC3_GUCTL_HSTINAUTORETRY	BIT(14)
+-
+ /* Global User Control 1 Register */
+ #define DWC3_GUCTL1_PARKMODE_DISABLE_SS	BIT(17)
+ #define DWC3_GUCTL1_TX_IPGAP_LINECHECK_DIS	BIT(28)
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index a5a8c5712bce4..9f420cc8d7c79 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -173,10 +173,12 @@ static int dwc3_pci_quirks(struct dwc3_pci *dwc)
+ 
+ 			/*
+ 			 * A lot of BYT devices lack ACPI resource entries for
+-			 * the GPIOs, add a fallback mapping to the reference
++			 * the GPIOs. If the ACPI entry for the GPIO controller
++			 * is present add a fallback mapping to the reference
+ 			 * design GPIOs which all boards seem to use.
+ 			 */
+-			gpiod_add_lookup_table(&platform_bytcr_gpios);
++			if (acpi_dev_present("INT33FC", NULL, -1))
++				gpiod_add_lookup_table(&platform_bytcr_gpios);
+ 
+ 			/*
+ 			 * These GPIOs will turn on the USB2 PHY. Note that we have to
+diff --git a/drivers/usb/gadget/legacy/raw_gadget.c b/drivers/usb/gadget/legacy/raw_gadget.c
+index b496ca937deed..ddb39e6728017 100644
+--- a/drivers/usb/gadget/legacy/raw_gadget.c
++++ b/drivers/usb/gadget/legacy/raw_gadget.c
+@@ -309,13 +309,15 @@ static int gadget_bind(struct usb_gadget *gadget,
+ 	dev->eps_num = i;
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+ 
+-	/* Matches kref_put() in gadget_unbind(). */
+-	kref_get(&dev->count);
+-
+ 	ret = raw_queue_event(dev, USB_RAW_EVENT_CONNECT, 0, NULL);
+-	if (ret < 0)
++	if (ret < 0) {
+ 		dev_err(&gadget->dev, "failed to queue event\n");
++		set_gadget_data(gadget, NULL);
++		return ret;
++	}
+ 
++	/* Matches kref_put() in gadget_unbind(). */
++	kref_get(&dev->count);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index 66d5f6a85c848..c5f0fbb8ffe47 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -3693,15 +3693,15 @@ static int tegra_xudc_powerdomain_init(struct tegra_xudc *xudc)
+ 	int err;
+ 
+ 	xudc->genpd_dev_device = dev_pm_domain_attach_by_name(dev, "dev");
+-	if (IS_ERR_OR_NULL(xudc->genpd_dev_device)) {
+-		err = PTR_ERR(xudc->genpd_dev_device) ? : -ENODATA;
++	if (IS_ERR(xudc->genpd_dev_device)) {
++		err = PTR_ERR(xudc->genpd_dev_device);
+ 		dev_err(dev, "failed to get device power domain: %d\n", err);
+ 		return err;
+ 	}
+ 
+ 	xudc->genpd_dev_ss = dev_pm_domain_attach_by_name(dev, "ss");
+-	if (IS_ERR_OR_NULL(xudc->genpd_dev_ss)) {
+-		err = PTR_ERR(xudc->genpd_dev_ss) ? : -ENODATA;
++	if (IS_ERR(xudc->genpd_dev_ss)) {
++		err = PTR_ERR(xudc->genpd_dev_ss);
+ 		dev_err(dev, "failed to get SuperSpeed power domain: %d\n", err);
+ 		return err;
+ 	}
+diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c
+index 99e994fd3d1df..2ba3c1b6ad6dc 100644
+--- a/drivers/usb/host/ohci-at91.c
++++ b/drivers/usb/host/ohci-at91.c
+@@ -647,7 +647,13 @@ ohci_hcd_at91_drv_resume(struct device *dev)
+ 	else
+ 		at91_start_clock(ohci_at91);
+ 
+-	ohci_resume(hcd, false);
++	/*
++	 * According to the comment in ohci_hcd_at91_drv_suspend()
++	 * we need to do a reset if the 48Mhz clock was stopped,
++	 * that is, if ohci_at91->wakeup is clear. Tell ohci_resume()
++	 * to reset in this case by setting its "hibernated" flag.
++	 */
++	ohci_resume(hcd, !ohci_at91->wakeup);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
+index 1c331577fca92..122777b21b24b 100644
+--- a/drivers/usb/host/xhci-mtk.c
++++ b/drivers/usb/host/xhci-mtk.c
+@@ -535,6 +535,7 @@ static int xhci_mtk_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	device_init_wakeup(dev, true);
++	dma_set_max_seg_size(dev, UINT_MAX);
+ 
+ 	xhci = hcd_to_xhci(hcd);
+ 	xhci->main_hcd = hcd;
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 9fa4f8f39830a..ffb09737b5d0f 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -1042,15 +1042,15 @@ static int tegra_xusb_powerdomain_init(struct device *dev,
+ 	int err;
+ 
+ 	tegra->genpd_dev_host = dev_pm_domain_attach_by_name(dev, "xusb_host");
+-	if (IS_ERR_OR_NULL(tegra->genpd_dev_host)) {
+-		err = PTR_ERR(tegra->genpd_dev_host) ? : -ENODATA;
++	if (IS_ERR(tegra->genpd_dev_host)) {
++		err = PTR_ERR(tegra->genpd_dev_host);
+ 		dev_err(dev, "failed to get host pm-domain: %d\n", err);
+ 		return err;
+ 	}
+ 
+ 	tegra->genpd_dev_ss = dev_pm_domain_attach_by_name(dev, "xusb_ss");
+-	if (IS_ERR_OR_NULL(tegra->genpd_dev_ss)) {
+-		err = PTR_ERR(tegra->genpd_dev_ss) ? : -ENODATA;
++	if (IS_ERR(tegra->genpd_dev_ss)) {
++		err = PTR_ERR(tegra->genpd_dev_ss);
+ 		dev_err(dev, "failed to get superspeed pm-domain: %d\n", err);
+ 		return err;
+ 	}
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 625d9dc776bed..0b3422c06ab94 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -251,6 +251,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EM061K_LTA		0x0123
+ #define QUECTEL_PRODUCT_EM061K_LMS		0x0124
+ #define QUECTEL_PRODUCT_EC25			0x0125
++#define QUECTEL_PRODUCT_EM060K_128		0x0128
+ #define QUECTEL_PRODUCT_EG91			0x0191
+ #define QUECTEL_PRODUCT_EG95			0x0195
+ #define QUECTEL_PRODUCT_BG96			0x0296
+@@ -268,6 +269,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_RM520N			0x0801
+ #define QUECTEL_PRODUCT_EC200U			0x0901
+ #define QUECTEL_PRODUCT_EC200S_CN		0x6002
++#define QUECTEL_PRODUCT_EC200A			0x6005
+ #define QUECTEL_PRODUCT_EM061K_LWW		0x6008
+ #define QUECTEL_PRODUCT_EM061K_LCN		0x6009
+ #define QUECTEL_PRODUCT_EC200T			0x6026
+@@ -1197,6 +1199,9 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0xff, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0x00, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x40) },
+@@ -1225,6 +1230,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0900, 0xff, 0, 0), /* RM500U-CN */
+ 	  .driver_info = ZLP },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200A, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200U, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
+diff --git a/drivers/usb/serial/usb-serial-simple.c b/drivers/usb/serial/usb-serial-simple.c
+index 4c6747889a194..24b8772a345e2 100644
+--- a/drivers/usb/serial/usb-serial-simple.c
++++ b/drivers/usb/serial/usb-serial-simple.c
+@@ -38,16 +38,6 @@ static struct usb_serial_driver vendor##_device = {		\
+ 	{ USB_DEVICE(0x0a21, 0x8001) }	/* MMT-7305WW */
+ DEVICE(carelink, CARELINK_IDS);
+ 
+-/* ZIO Motherboard USB driver */
+-#define ZIO_IDS()			\
+-	{ USB_DEVICE(0x1CBE, 0x0103) }
+-DEVICE(zio, ZIO_IDS);
+-
+-/* Funsoft Serial USB driver */
+-#define FUNSOFT_IDS()			\
+-	{ USB_DEVICE(0x1404, 0xcddc) }
+-DEVICE(funsoft, FUNSOFT_IDS);
+-
+ /* Infineon Flashloader driver */
+ #define FLASHLOADER_IDS()		\
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x058b, 0x0041, USB_CLASS_CDC_DATA) }, \
+@@ -55,6 +45,11 @@ DEVICE(funsoft, FUNSOFT_IDS);
+ 	{ USB_DEVICE(0x8087, 0x0801) }
+ DEVICE(flashloader, FLASHLOADER_IDS);
+ 
++/* Funsoft Serial USB driver */
++#define FUNSOFT_IDS()			\
++	{ USB_DEVICE(0x1404, 0xcddc) }
++DEVICE(funsoft, FUNSOFT_IDS);
++
+ /* Google Serial USB SubClass */
+ #define GOOGLE_IDS()						\
+ 	{ USB_VENDOR_AND_INTERFACE_INFO(0x18d1,			\
+@@ -63,16 +58,21 @@ DEVICE(flashloader, FLASHLOADER_IDS);
+ 					0x01) }
+ DEVICE(google, GOOGLE_IDS);
+ 
++/* HP4x (48/49) Generic Serial driver */
++#define HP4X_IDS()			\
++	{ USB_DEVICE(0x03f0, 0x0121) }
++DEVICE(hp4x, HP4X_IDS);
++
++/* KAUFMANN RKS+CAN VCP */
++#define KAUFMANN_IDS()			\
++	{ USB_DEVICE(0x16d0, 0x0870) }
++DEVICE(kaufmann, KAUFMANN_IDS);
++
+ /* Libtransistor USB console */
+ #define LIBTRANSISTOR_IDS()			\
+ 	{ USB_DEVICE(0x1209, 0x8b00) }
+ DEVICE(libtransistor, LIBTRANSISTOR_IDS);
+ 
+-/* ViVOpay USB Serial Driver */
+-#define VIVOPAY_IDS()			\
+-	{ USB_DEVICE(0x1d5f, 0x1004) }	/* ViVOpay 8800 */
+-DEVICE(vivopay, VIVOPAY_IDS);
+-
+ /* Motorola USB Phone driver */
+ #define MOTO_IDS()			\
+ 	{ USB_DEVICE(0x05c6, 0x3197) },	/* unknown Motorola phone */	\
+@@ -101,10 +101,10 @@ DEVICE(nokia, NOKIA_IDS);
+ 	{ USB_DEVICE(0x09d7, 0x0100) }	/* NovAtel FlexPack GPS */
+ DEVICE_N(novatel_gps, NOVATEL_IDS, 3);
+ 
+-/* HP4x (48/49) Generic Serial driver */
+-#define HP4X_IDS()			\
+-	{ USB_DEVICE(0x03f0, 0x0121) }
+-DEVICE(hp4x, HP4X_IDS);
++/* Siemens USB/MPI adapter */
++#define SIEMENS_IDS()			\
++	{ USB_DEVICE(0x908, 0x0004) }
++DEVICE(siemens_mpi, SIEMENS_IDS);
+ 
+ /* Suunto ANT+ USB Driver */
+ #define SUUNTO_IDS()			\
+@@ -112,45 +112,52 @@ DEVICE(hp4x, HP4X_IDS);
+ 	{ USB_DEVICE(0x0fcf, 0x1009) } /* Dynastream ANT USB-m Stick */
+ DEVICE(suunto, SUUNTO_IDS);
+ 
+-/* Siemens USB/MPI adapter */
+-#define SIEMENS_IDS()			\
+-	{ USB_DEVICE(0x908, 0x0004) }
+-DEVICE(siemens_mpi, SIEMENS_IDS);
++/* ViVOpay USB Serial Driver */
++#define VIVOPAY_IDS()			\
++	{ USB_DEVICE(0x1d5f, 0x1004) }	/* ViVOpay 8800 */
++DEVICE(vivopay, VIVOPAY_IDS);
++
++/* ZIO Motherboard USB driver */
++#define ZIO_IDS()			\
++	{ USB_DEVICE(0x1CBE, 0x0103) }
++DEVICE(zio, ZIO_IDS);
+ 
+ /* All of the above structures mushed into two lists */
+ static struct usb_serial_driver * const serial_drivers[] = {
+ 	&carelink_device,
+-	&zio_device,
+-	&funsoft_device,
+ 	&flashloader_device,
++	&funsoft_device,
+ 	&google_device,
++	&hp4x_device,
++	&kaufmann_device,
+ 	&libtransistor_device,
+-	&vivopay_device,
+ 	&moto_modem_device,
+ 	&motorola_tetra_device,
+ 	&nokia_device,
+ 	&novatel_gps_device,
+-	&hp4x_device,
+-	&suunto_device,
+ 	&siemens_mpi_device,
++	&suunto_device,
++	&vivopay_device,
++	&zio_device,
+ 	NULL
+ };
+ 
+ static const struct usb_device_id id_table[] = {
+ 	CARELINK_IDS(),
+-	ZIO_IDS(),
+-	FUNSOFT_IDS(),
+ 	FLASHLOADER_IDS(),
++	FUNSOFT_IDS(),
+ 	GOOGLE_IDS(),
++	HP4X_IDS(),
++	KAUFMANN_IDS(),
+ 	LIBTRANSISTOR_IDS(),
+-	VIVOPAY_IDS(),
+ 	MOTO_IDS(),
+ 	MOTOROLA_TETRA_IDS(),
+ 	NOKIA_IDS(),
+ 	NOVATEL_IDS(),
+-	HP4X_IDS(),
+-	SUUNTO_IDS(),
+ 	SIEMENS_IDS(),
++	SUUNTO_IDS(),
++	VIVOPAY_IDS(),
++	ZIO_IDS(),
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(usb, id_table);
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 41a7ace9998e4..814f2f07e74c4 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -3589,6 +3589,8 @@ static noinline int split_node(struct btrfs_trans_handle *trans,
+ 
+ 	ret = tree_mod_log_eb_copy(split, c, 0, mid, c_nritems - mid);
+ 	if (ret) {
++		btrfs_tree_unlock(split);
++		free_extent_buffer(split);
+ 		btrfs_abort_transaction(trans, ret);
+ 		return ret;
+ 	}
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 828a7ff4aebe7..a67323c2d41f7 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1202,12 +1202,23 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 	int ret = 0;
+ 
+ 	/*
+-	 * We need to have subvol_sem write locked, to prevent races between
+-	 * concurrent tasks trying to disable quotas, because we will unlock
+-	 * and relock qgroup_ioctl_lock across BTRFS_FS_QUOTA_ENABLED changes.
++	 * We need to have subvol_sem write locked to prevent races with
++	 * snapshot creation.
+ 	 */
+ 	lockdep_assert_held_write(&fs_info->subvol_sem);
+ 
++	/*
++	 * Lock the cleaner mutex to prevent races with concurrent relocation,
++	 * because relocation may be building backrefs for blocks of the quota
++	 * root while we are deleting the root. This is like dropping fs roots
++	 * of deleted snapshots/subvolumes, we need the same protection.
++	 *
++	 * This also prevents races between concurrent tasks trying to disable
++	 * quotas, because we will unlock and relock qgroup_ioctl_lock across
++	 * BTRFS_FS_QUOTA_ENABLED changes.
++	 */
++	mutex_lock(&fs_info->cleaner_mutex);
++
+ 	mutex_lock(&fs_info->qgroup_ioctl_lock);
+ 	if (!fs_info->quota_root)
+ 		goto out;
+@@ -1287,6 +1298,7 @@ out:
+ 		btrfs_end_transaction(trans);
+ 	else if (trans)
+ 		ret = btrfs_end_transaction(trans);
++	mutex_unlock(&fs_info->cleaner_mutex);
+ 
+ 	return ret;
+ }
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 8daa9e4eb1d2e..abd67f984fbcf 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -821,8 +821,13 @@ btrfs_attach_transaction_barrier(struct btrfs_root *root)
+ 
+ 	trans = start_transaction(root, 0, TRANS_ATTACH,
+ 				  BTRFS_RESERVE_NO_FLUSH, true);
+-	if (trans == ERR_PTR(-ENOENT))
+-		btrfs_wait_for_commit(root->fs_info, 0);
++	if (trans == ERR_PTR(-ENOENT)) {
++		int ret;
++
++		ret = btrfs_wait_for_commit(root->fs_info, 0);
++		if (ret)
++			return ERR_PTR(ret);
++	}
+ 
+ 	return trans;
+ }
+@@ -886,6 +891,7 @@ int btrfs_wait_for_commit(struct btrfs_fs_info *fs_info, u64 transid)
+ 	}
+ 
+ 	wait_for_commit(cur_trans);
++	ret = cur_trans->aborted;
+ 	btrfs_put_transaction(cur_trans);
+ out:
+ 	return ret;
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 87a9e9096421a..df1ecb8bfebf7 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -4511,7 +4511,7 @@ static void delayed_work(struct work_struct *work)
+ 
+ 	dout("mdsc delayed_work\n");
+ 
+-	if (mdsc->stopping)
++	if (mdsc->stopping >= CEPH_MDSC_STOPPING_FLUSHED)
+ 		return;
+ 
+ 	mutex_lock(&mdsc->mutex);
+@@ -4701,7 +4701,7 @@ void send_flush_mdlog(struct ceph_mds_session *s)
+ void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc)
+ {
+ 	dout("pre_umount\n");
+-	mdsc->stopping = 1;
++	mdsc->stopping = CEPH_MDSC_STOPPING_BEGIN;
+ 
+ 	ceph_mdsc_iterate_sessions(mdsc, send_flush_mdlog, true);
+ 	ceph_mdsc_iterate_sessions(mdsc, lock_unlock_session, false);
+diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
+index a92e42e8a9f82..1c958510f00f5 100644
+--- a/fs/ceph/mds_client.h
++++ b/fs/ceph/mds_client.h
+@@ -372,6 +372,11 @@ struct cap_wait {
+ 	int			want;
+ };
+ 
++enum {
++       CEPH_MDSC_STOPPING_BEGIN = 1,
++       CEPH_MDSC_STOPPING_FLUSHED = 2,
++};
++
+ /*
+  * mds client state
+  */
+diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
+index 9e0a0e26294ee..906e446abb46a 100644
+--- a/fs/ceph/metric.c
++++ b/fs/ceph/metric.c
+@@ -130,7 +130,7 @@ static void metric_delayed_work(struct work_struct *work)
+ 	struct ceph_mds_client *mdsc =
+ 		container_of(m, struct ceph_mds_client, metric);
+ 
+-	if (mdsc->stopping)
++	if (mdsc->stopping || disable_send_metrics)
+ 		return;
+ 
+ 	if (!m->session || !check_session_state(m->session)) {
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index 08c8d34c98091..f2aff97348bc9 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -1222,6 +1222,16 @@ static void ceph_kill_sb(struct super_block *s)
+ 	ceph_mdsc_pre_umount(fsc->mdsc);
+ 	flush_fs_workqueues(fsc);
+ 
++	/*
++	 * Though the kill_anon_super() will finally trigger the
++	 * sync_filesystem() anyway, we still need to do it here
++	 * and then bump the stage of shutdown to stop the work
++	 * queue as earlier as possible.
++	 */
++	sync_filesystem(s);
++
++	fsc->mdsc->stopping = CEPH_MDSC_STOPPING_FLUSHED;
++
+ 	kill_anon_super(s);
+ 
+ 	fsc->client->extra_mon_dispatch = NULL;
+diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
+index edce0b25cd90e..f3482e936cc25 100644
+--- a/fs/dlm/plock.c
++++ b/fs/dlm/plock.c
+@@ -19,20 +19,20 @@ static struct list_head recv_list;
+ static wait_queue_head_t send_wq;
+ static wait_queue_head_t recv_wq;
+ 
+-struct plock_op {
+-	struct list_head list;
+-	int done;
+-	struct dlm_plock_info info;
+-	int (*callback)(struct file_lock *fl, int result);
+-};
+-
+-struct plock_xop {
+-	struct plock_op xop;
++struct plock_async_data {
+ 	void *fl;
+ 	void *file;
+ 	struct file_lock flc;
++	int (*callback)(struct file_lock *fl, int result);
+ };
+ 
++struct plock_op {
++	struct list_head list;
++	int done;
++	struct dlm_plock_info info;
++	/* if set indicates async handling */
++	struct plock_async_data *data;
++};
+ 
+ static inline void set_version(struct dlm_plock_info *info)
+ {
+@@ -58,6 +58,12 @@ static int check_version(struct dlm_plock_info *info)
+ 	return 0;
+ }
+ 
++static void dlm_release_plock_op(struct plock_op *op)
++{
++	kfree(op->data);
++	kfree(op);
++}
++
+ static void send_op(struct plock_op *op)
+ {
+ 	set_version(&op->info);
+@@ -101,22 +107,21 @@ static void do_unlock_close(struct dlm_ls *ls, u64 number,
+ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
+ 		   int cmd, struct file_lock *fl)
+ {
++	struct plock_async_data *op_data;
+ 	struct dlm_ls *ls;
+ 	struct plock_op *op;
+-	struct plock_xop *xop;
+ 	int rv;
+ 
+ 	ls = dlm_find_lockspace_local(lockspace);
+ 	if (!ls)
+ 		return -EINVAL;
+ 
+-	xop = kzalloc(sizeof(*xop), GFP_NOFS);
+-	if (!xop) {
++	op = kzalloc(sizeof(*op), GFP_NOFS);
++	if (!op) {
+ 		rv = -ENOMEM;
+ 		goto out;
+ 	}
+ 
+-	op = &xop->xop;
+ 	op->info.optype		= DLM_PLOCK_OP_LOCK;
+ 	op->info.pid		= fl->fl_pid;
+ 	op->info.ex		= (fl->fl_type == F_WRLCK);
+@@ -125,35 +130,44 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
+ 	op->info.number		= number;
+ 	op->info.start		= fl->fl_start;
+ 	op->info.end		= fl->fl_end;
++	/* async handling */
+ 	if (fl->fl_lmops && fl->fl_lmops->lm_grant) {
++		op_data = kzalloc(sizeof(*op_data), GFP_NOFS);
++		if (!op_data) {
++			dlm_release_plock_op(op);
++			rv = -ENOMEM;
++			goto out;
++		}
++
+ 		/* fl_owner is lockd which doesn't distinguish
+ 		   processes on the nfs client */
+ 		op->info.owner	= (__u64) fl->fl_pid;
+-		op->callback	= fl->fl_lmops->lm_grant;
+-		locks_init_lock(&xop->flc);
+-		locks_copy_lock(&xop->flc, fl);
+-		xop->fl		= fl;
+-		xop->file	= file;
++		op_data->callback = fl->fl_lmops->lm_grant;
++		locks_init_lock(&op_data->flc);
++		locks_copy_lock(&op_data->flc, fl);
++		op_data->fl		= fl;
++		op_data->file	= file;
++
++		op->data = op_data;
++
++		send_op(op);
++		rv = FILE_LOCK_DEFERRED;
++		goto out;
+ 	} else {
+ 		op->info.owner	= (__u64)(long) fl->fl_owner;
+ 	}
+ 
+ 	send_op(op);
+ 
+-	if (!op->callback) {
+-		rv = wait_event_interruptible(recv_wq, (op->done != 0));
+-		if (rv == -ERESTARTSYS) {
+-			log_debug(ls, "dlm_posix_lock: wait killed %llx",
+-				  (unsigned long long)number);
+-			spin_lock(&ops_lock);
+-			list_del(&op->list);
+-			spin_unlock(&ops_lock);
+-			kfree(xop);
+-			do_unlock_close(ls, number, file, fl);
+-			goto out;
+-		}
+-	} else {
+-		rv = FILE_LOCK_DEFERRED;
++	rv = wait_event_killable(recv_wq, (op->done != 0));
++	if (rv == -ERESTARTSYS) {
++		log_debug(ls, "%s: wait killed %llx", __func__,
++			  (unsigned long long)number);
++		spin_lock(&ops_lock);
++		list_del(&op->list);
++		spin_unlock(&ops_lock);
++		dlm_release_plock_op(op);
++		do_unlock_close(ls, number, file, fl);
+ 		goto out;
+ 	}
+ 
+@@ -173,7 +187,7 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
+ 				  (unsigned long long)number);
+ 	}
+ 
+-	kfree(xop);
++	dlm_release_plock_op(op);
+ out:
+ 	dlm_put_lockspace(ls);
+ 	return rv;
+@@ -183,11 +197,11 @@ EXPORT_SYMBOL_GPL(dlm_posix_lock);
+ /* Returns failure iff a successful lock operation should be canceled */
+ static int dlm_plock_callback(struct plock_op *op)
+ {
++	struct plock_async_data *op_data = op->data;
+ 	struct file *file;
+ 	struct file_lock *fl;
+ 	struct file_lock *flc;
+ 	int (*notify)(struct file_lock *fl, int result) = NULL;
+-	struct plock_xop *xop = (struct plock_xop *)op;
+ 	int rv = 0;
+ 
+ 	spin_lock(&ops_lock);
+@@ -199,10 +213,10 @@ static int dlm_plock_callback(struct plock_op *op)
+ 	spin_unlock(&ops_lock);
+ 
+ 	/* check if the following 2 are still valid or make a copy */
+-	file = xop->file;
+-	flc = &xop->flc;
+-	fl = xop->fl;
+-	notify = op->callback;
++	file = op_data->file;
++	flc = &op_data->flc;
++	fl = op_data->fl;
++	notify = op_data->callback;
+ 
+ 	if (op->info.rv) {
+ 		notify(fl, op->info.rv);
+@@ -233,7 +247,7 @@ static int dlm_plock_callback(struct plock_op *op)
+ 	}
+ 
+ out:
+-	kfree(xop);
++	dlm_release_plock_op(op);
+ 	return rv;
+ }
+ 
+@@ -303,7 +317,7 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
+ 		rv = 0;
+ 
+ out_free:
+-	kfree(op);
++	dlm_release_plock_op(op);
+ out:
+ 	dlm_put_lockspace(ls);
+ 	fl->fl_flags = fl_flags;
+@@ -371,7 +385,7 @@ int dlm_posix_get(dlm_lockspace_t *lockspace, u64 number, struct file *file,
+ 		rv = 0;
+ 	}
+ 
+-	kfree(op);
++	dlm_release_plock_op(op);
+ out:
+ 	dlm_put_lockspace(ls);
+ 	return rv;
+@@ -407,7 +421,7 @@ static ssize_t dev_read(struct file *file, char __user *u, size_t count,
+ 	   (the process did not make an unlock call). */
+ 
+ 	if (op->info.flags & DLM_PLOCK_FL_CLOSE)
+-		kfree(op);
++		dlm_release_plock_op(op);
+ 
+ 	if (copy_to_user(u, &info, sizeof(info)))
+ 		return -EFAULT;
+@@ -439,7 +453,7 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
+ 		    op->info.owner == info.owner) {
+ 			list_del_init(&op->list);
+ 			memcpy(&op->info, &info, sizeof(info));
+-			if (op->callback)
++			if (op->data)
+ 				do_callback = 1;
+ 			else
+ 				op->done = 1;
+diff --git a/fs/exfat/balloc.c b/fs/exfat/balloc.c
+index 258b6bb5762a4..ab091440e8b93 100644
+--- a/fs/exfat/balloc.c
++++ b/fs/exfat/balloc.c
+@@ -69,7 +69,7 @@ static int exfat_allocate_bitmap(struct super_block *sb,
+ 	}
+ 	sbi->map_sectors = ((need_map_size - 1) >>
+ 			(sb->s_blocksize_bits)) + 1;
+-	sbi->vol_amap = kmalloc_array(sbi->map_sectors,
++	sbi->vol_amap = kvmalloc_array(sbi->map_sectors,
+ 				sizeof(struct buffer_head *), GFP_KERNEL);
+ 	if (!sbi->vol_amap)
+ 		return -ENOMEM;
+@@ -84,7 +84,7 @@ static int exfat_allocate_bitmap(struct super_block *sb,
+ 			while (j < i)
+ 				brelse(sbi->vol_amap[j++]);
+ 
+-			kfree(sbi->vol_amap);
++			kvfree(sbi->vol_amap);
+ 			sbi->vol_amap = NULL;
+ 			return -EIO;
+ 		}
+@@ -138,7 +138,7 @@ void exfat_free_bitmap(struct exfat_sb_info *sbi)
+ 	for (i = 0; i < sbi->map_sectors; i++)
+ 		__brelse(sbi->vol_amap[i]);
+ 
+-	kfree(sbi->vol_amap);
++	kvfree(sbi->vol_amap);
+ }
+ 
+ int exfat_set_bitmap(struct inode *inode, unsigned int clu)
+diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
+index 6caded58cda52..db735a0d32fc6 100644
+--- a/fs/exfat/dir.c
++++ b/fs/exfat/dir.c
+@@ -33,6 +33,7 @@ static void exfat_get_uniname_from_ext_entry(struct super_block *sb,
+ {
+ 	int i;
+ 	struct exfat_entry_set_cache *es;
++	unsigned int uni_len = 0, len;
+ 
+ 	es = exfat_get_dentry_set(sb, p_dir, entry, ES_ALL_ENTRIES);
+ 	if (!es)
+@@ -51,7 +52,10 @@ static void exfat_get_uniname_from_ext_entry(struct super_block *sb,
+ 		if (exfat_get_entry_type(ep) != TYPE_EXTEND)
+ 			break;
+ 
+-		exfat_extract_uni_name(ep, uniname);
++		len = exfat_extract_uni_name(ep, uniname);
++		uni_len += len;
++		if (len != EXFAT_FILE_NAME_LEN || uni_len >= MAX_NAME_LENGTH)
++			break;
+ 		uniname += EXFAT_FILE_NAME_LEN;
+ 	}
+ 
+@@ -148,7 +152,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
+ 					0);
+ 
+ 			*uni_name.name = 0x0;
+-			exfat_get_uniname_from_ext_entry(sb, &dir, dentry,
++			exfat_get_uniname_from_ext_entry(sb, &clu, i,
+ 				uni_name.name);
+ 			exfat_utf16_to_nls(sb, &uni_name,
+ 				dir_entry->namebuf.lfn,
+@@ -210,7 +214,10 @@ static void exfat_free_namebuf(struct exfat_dentry_namebuf *nb)
+ 	exfat_init_namebuf(nb);
+ }
+ 
+-/* skip iterating emit_dots when dir is empty */
++/*
++ * Before calling dir_emit*(), sbi->s_lock should be released
++ * because page fault can occur in dir_emit*().
++ */
+ #define ITER_POS_FILLED_DOTS    (2)
+ static int exfat_iterate(struct file *filp, struct dir_context *ctx)
+ {
+@@ -225,11 +232,10 @@ static int exfat_iterate(struct file *filp, struct dir_context *ctx)
+ 	int err = 0, fake_offset = 0;
+ 
+ 	exfat_init_namebuf(nb);
+-	mutex_lock(&EXFAT_SB(sb)->s_lock);
+ 
+ 	cpos = ctx->pos;
+ 	if (!dir_emit_dots(filp, ctx))
+-		goto unlock;
++		goto out;
+ 
+ 	if (ctx->pos == ITER_POS_FILLED_DOTS) {
+ 		cpos = 0;
+@@ -241,16 +247,18 @@ static int exfat_iterate(struct file *filp, struct dir_context *ctx)
+ 	/* name buffer should be allocated before use */
+ 	err = exfat_alloc_namebuf(nb);
+ 	if (err)
+-		goto unlock;
++		goto out;
+ get_new:
++	mutex_lock(&EXFAT_SB(sb)->s_lock);
++
+ 	if (ei->flags == ALLOC_NO_FAT_CHAIN && cpos >= i_size_read(inode))
+ 		goto end_of_dir;
+ 
+ 	err = exfat_readdir(inode, &cpos, &de);
+ 	if (err) {
+ 		/*
+-		 * At least we tried to read a sector.  Move cpos to next sector
+-		 * position (should be aligned).
++		 * At least we tried to read a sector.
++		 * Move cpos to next sector position (should be aligned).
+ 		 */
+ 		if (err == -EIO) {
+ 			cpos += 1 << (sb->s_blocksize_bits);
+@@ -273,16 +281,10 @@ get_new:
+ 		inum = iunique(sb, EXFAT_ROOT_INO);
+ 	}
+ 
+-	/*
+-	 * Before calling dir_emit(), sb_lock should be released.
+-	 * Because page fault can occur in dir_emit() when the size
+-	 * of buffer given from user is larger than one page size.
+-	 */
+ 	mutex_unlock(&EXFAT_SB(sb)->s_lock);
+ 	if (!dir_emit(ctx, nb->lfn, strlen(nb->lfn), inum,
+ 			(de.attr & ATTR_SUBDIR) ? DT_DIR : DT_REG))
+-		goto out_unlocked;
+-	mutex_lock(&EXFAT_SB(sb)->s_lock);
++		goto out;
+ 	ctx->pos = cpos;
+ 	goto get_new;
+ 
+@@ -290,9 +292,8 @@ end_of_dir:
+ 	if (!cpos && fake_offset)
+ 		cpos = ITER_POS_FILLED_DOTS;
+ 	ctx->pos = cpos;
+-unlock:
+ 	mutex_unlock(&EXFAT_SB(sb)->s_lock);
+-out_unlocked:
++out:
+ 	/*
+ 	 * To improve performance, free namebuf after unlock sb_lock.
+ 	 * If namebuf is not allocated, this function do nothing
+@@ -612,6 +613,10 @@ int exfat_free_dentry_set(struct exfat_entry_set_cache *es, int sync)
+ 			bforget(es->bh[i]);
+ 		else
+ 			brelse(es->bh[i]);
++
++	if (IS_DYNAMIC_ES(es))
++		kfree(es->bh);
++
+ 	kfree(es);
+ 	return err;
+ }
+@@ -847,6 +852,7 @@ struct exfat_entry_set_cache *exfat_get_dentry_set(struct super_block *sb,
+ 	/* byte offset in sector */
+ 	off = EXFAT_BLK_OFFSET(byte_offset, sb);
+ 	es->start_off = off;
++	es->bh = es->__bh;
+ 
+ 	/* sector offset in cluster */
+ 	sec = EXFAT_B_TO_BLK(byte_offset, sb);
+@@ -866,6 +872,16 @@ struct exfat_entry_set_cache *exfat_get_dentry_set(struct super_block *sb,
+ 	es->num_entries = num_entries;
+ 
+ 	num_bh = EXFAT_B_TO_BLK_ROUND_UP(off + num_entries * DENTRY_SIZE, sb);
++	if (num_bh > ARRAY_SIZE(es->__bh)) {
++		es->bh = kmalloc_array(num_bh, sizeof(*es->bh), GFP_KERNEL);
++		if (!es->bh) {
++			brelse(bh);
++			kfree(es);
++			return NULL;
++		}
++		es->bh[0] = bh;
++	}
++
+ 	for (i = 1; i < num_bh; i++) {
+ 		/* get the next sector */
+ 		if (exfat_is_last_sector_in_cluster(sbi, sec)) {
+@@ -905,14 +921,19 @@ enum {
+ };
+ 
+ /*
+- * return values:
+- *   >= 0	: return dir entiry position with the name in dir
+- *   -ENOENT	: entry with the name does not exist
+- *   -EIO	: I/O error
++ * @ei:         inode info of parent directory
++ * @p_dir:      directory structure of parent directory
++ * @num_entries:entry size of p_uniname
++ * @hint_opt:   If p_uniname is found, filled with optimized dir/entry
++ *              for traversing cluster chain.
++ * @return:
++ *   >= 0:      file directory entry position where the name exists
++ *   -ENOENT:   entry with the name does not exist
++ *   -EIO:      I/O error
+  */
+ int exfat_find_dir_entry(struct super_block *sb, struct exfat_inode_info *ei,
+ 		struct exfat_chain *p_dir, struct exfat_uni_name *p_uniname,
+-		int num_entries, unsigned int type)
++		int num_entries, unsigned int type, struct exfat_hint *hint_opt)
+ {
+ 	int i, rewind = 0, dentry = 0, end_eidx = 0, num_ext = 0, len;
+ 	int order, step, name_len = 0;
+@@ -989,6 +1010,8 @@ rewind:
+ 
+ 			if (entry_type == TYPE_FILE || entry_type == TYPE_DIR) {
+ 				step = DIRENT_STEP_FILE;
++				hint_opt->clu = clu.dir;
++				hint_opt->eidx = i;
+ 				if (type == TYPE_ALL || type == entry_type) {
+ 					num_ext = ep->dentry.file.num_ext;
+ 					step = DIRENT_STEP_STRM;
+@@ -1023,7 +1046,8 @@ rewind:
+ 			if (entry_type == TYPE_EXTEND) {
+ 				unsigned short entry_uniname[16], unichar;
+ 
+-				if (step != DIRENT_STEP_NAME) {
++				if (step != DIRENT_STEP_NAME ||
++				    name_len >= MAX_NAME_LENGTH) {
+ 					step = DIRENT_STEP_FILE;
+ 					continue;
+ 				}
+diff --git a/fs/exfat/exfat_fs.h b/fs/exfat/exfat_fs.h
+index 07b09af57436f..11e579a2598d8 100644
+--- a/fs/exfat/exfat_fs.h
++++ b/fs/exfat/exfat_fs.h
+@@ -170,10 +170,13 @@ struct exfat_entry_set_cache {
+ 	bool modified;
+ 	unsigned int start_off;
+ 	int num_bh;
+-	struct buffer_head *bh[DIR_CACHE_SIZE];
++	struct buffer_head *__bh[DIR_CACHE_SIZE];
++	struct buffer_head **bh;
+ 	unsigned int num_entries;
+ };
+ 
++#define IS_DYNAMIC_ES(es)	((es)->__bh != (es)->bh)
++
+ struct exfat_dir_entry {
+ 	struct exfat_chain dir;
+ 	int entry;
+@@ -458,7 +461,7 @@ void exfat_update_dir_chksum_with_entry_set(struct exfat_entry_set_cache *es);
+ int exfat_calc_num_entries(struct exfat_uni_name *p_uniname);
+ int exfat_find_dir_entry(struct super_block *sb, struct exfat_inode_info *ei,
+ 		struct exfat_chain *p_dir, struct exfat_uni_name *p_uniname,
+-		int num_entries, unsigned int type);
++		int num_entries, unsigned int type, struct exfat_hint *hint_opt);
+ int exfat_alloc_new_dir(struct inode *inode, struct exfat_chain *clu);
+ int exfat_find_location(struct super_block *sb, struct exfat_chain *p_dir,
+ 		int entry, sector_t *sector, int *offset);
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index 1382d816912c8..bd00afc5e4c16 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -596,6 +596,8 @@ static int exfat_find(struct inode *dir, struct qstr *qname,
+ 	struct exfat_inode_info *ei = EXFAT_I(dir);
+ 	struct exfat_dentry *ep, *ep2;
+ 	struct exfat_entry_set_cache *es;
++	/* for optimized dir & entry to prevent long traverse of cluster chain */
++	struct exfat_hint hint_opt;
+ 
+ 	if (qname->len == 0)
+ 		return -ENOENT;
+@@ -619,7 +621,7 @@ static int exfat_find(struct inode *dir, struct qstr *qname,
+ 
+ 	/* search the file name for directories */
+ 	dentry = exfat_find_dir_entry(sb, ei, &cdir, &uni_name,
+-			num_entries, TYPE_ALL);
++			num_entries, TYPE_ALL, &hint_opt);
+ 
+ 	if (dentry < 0)
+ 		return dentry; /* -error value */
+@@ -628,6 +630,11 @@ static int exfat_find(struct inode *dir, struct qstr *qname,
+ 	info->entry = dentry;
+ 	info->num_subdirs = 0;
+ 
++	/* adjust cdir to the optimized value */
++	cdir.dir = hint_opt.clu;
++	if (cdir.flags & ALLOC_NO_FAT_CHAIN)
++		cdir.size -= dentry / sbi->dentries_per_clu;
++	dentry = hint_opt.eidx;
+ 	es = exfat_get_dentry_set(sb, &cdir, dentry, ES_2_ENTRIES);
+ 	if (!es)
+ 		return -EIO;
+diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h
+index f06367cfd7641..fb2086bcbbf83 100644
+--- a/fs/ext2/ext2.h
++++ b/fs/ext2/ext2.h
+@@ -68,10 +68,7 @@ struct mb_cache;
+  * second extended-fs super-block data in memory
+  */
+ struct ext2_sb_info {
+-	unsigned long s_frag_size;	/* Size of a fragment in bytes */
+-	unsigned long s_frags_per_block;/* Number of fragments per block */
+ 	unsigned long s_inodes_per_block;/* Number of inodes per block */
+-	unsigned long s_frags_per_group;/* Number of fragments in a group */
+ 	unsigned long s_blocks_per_group;/* Number of blocks in a group */
+ 	unsigned long s_inodes_per_group;/* Number of inodes in a group */
+ 	unsigned long s_itb_per_group;	/* Number of inode table blocks per group */
+@@ -185,15 +182,6 @@ static inline struct ext2_sb_info *EXT2_SB(struct super_block *sb)
+ #define EXT2_INODE_SIZE(s)		(EXT2_SB(s)->s_inode_size)
+ #define EXT2_FIRST_INO(s)		(EXT2_SB(s)->s_first_ino)
+ 
+-/*
+- * Macro-instructions used to manage fragments
+- */
+-#define EXT2_MIN_FRAG_SIZE		1024
+-#define	EXT2_MAX_FRAG_SIZE		4096
+-#define EXT2_MIN_FRAG_LOG_SIZE		  10
+-#define EXT2_FRAG_SIZE(s)		(EXT2_SB(s)->s_frag_size)
+-#define EXT2_FRAGS_PER_BLOCK(s)		(EXT2_SB(s)->s_frags_per_block)
+-
+ /*
+  * Structure of a blocks group descriptor
+  */
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index ab01ec7ac48c5..a810b9c9e8eb5 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -673,10 +673,9 @@ static int ext2_setup_super (struct super_block * sb,
+ 		es->s_max_mnt_count = cpu_to_le16(EXT2_DFL_MAX_MNT_COUNT);
+ 	le16_add_cpu(&es->s_mnt_count, 1);
+ 	if (test_opt (sb, DEBUG))
+-		ext2_msg(sb, KERN_INFO, "%s, %s, bs=%lu, fs=%lu, gc=%lu, "
++		ext2_msg(sb, KERN_INFO, "%s, %s, bs=%lu, gc=%lu, "
+ 			"bpg=%lu, ipg=%lu, mo=%04lx]",
+ 			EXT2FS_VERSION, EXT2FS_DATE, sb->s_blocksize,
+-			sbi->s_frag_size,
+ 			sbi->s_groups_count,
+ 			EXT2_BLOCKS_PER_GROUP(sb),
+ 			EXT2_INODES_PER_GROUP(sb),
+@@ -1014,14 +1013,7 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ 		}
+ 	}
+ 
+-	sbi->s_frag_size = EXT2_MIN_FRAG_SIZE <<
+-				   le32_to_cpu(es->s_log_frag_size);
+-	if (sbi->s_frag_size == 0)
+-		goto cantfind_ext2;
+-	sbi->s_frags_per_block = sb->s_blocksize / sbi->s_frag_size;
+-
+ 	sbi->s_blocks_per_group = le32_to_cpu(es->s_blocks_per_group);
+-	sbi->s_frags_per_group = le32_to_cpu(es->s_frags_per_group);
+ 	sbi->s_inodes_per_group = le32_to_cpu(es->s_inodes_per_group);
+ 
+ 	sbi->s_inodes_per_block = sb->s_blocksize / EXT2_INODE_SIZE(sb);
+@@ -1047,11 +1039,10 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ 		goto failed_mount;
+ 	}
+ 
+-	if (sb->s_blocksize != sbi->s_frag_size) {
++	if (es->s_log_frag_size != es->s_log_block_size) {
+ 		ext2_msg(sb, KERN_ERR,
+-			"error: fragsize %lu != blocksize %lu"
+-			"(not supported yet)",
+-			sbi->s_frag_size, sb->s_blocksize);
++			"error: fragsize log %u != blocksize log %u",
++			le32_to_cpu(es->s_log_frag_size), sb->s_blocksize_bits);
+ 		goto failed_mount;
+ 	}
+ 
+@@ -1061,12 +1052,6 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ 			sbi->s_blocks_per_group);
+ 		goto failed_mount;
+ 	}
+-	if (sbi->s_frags_per_group > sb->s_blocksize * 8) {
+-		ext2_msg(sb, KERN_ERR,
+-			"error: #fragments per group too big: %lu",
+-			sbi->s_frags_per_group);
+-		goto failed_mount;
+-	}
+ 	if (sbi->s_inodes_per_group < sbi->s_inodes_per_block ||
+ 	    sbi->s_inodes_per_group > sb->s_blocksize * 8) {
+ 		ext2_msg(sb, KERN_ERR,
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 1171618f6549a..56829507e68c8 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -612,6 +612,7 @@ static int ext4_shutdown(struct super_block *sb, unsigned long arg)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	__u32 flags;
++	struct super_block *ret;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+@@ -630,7 +631,9 @@ static int ext4_shutdown(struct super_block *sb, unsigned long arg)
+ 
+ 	switch (flags) {
+ 	case EXT4_GOING_FLAGS_DEFAULT:
+-		freeze_bdev(sb->s_bdev);
++		ret = freeze_bdev(sb->s_bdev);
++		if (IS_ERR(ret))
++			return PTR_ERR(ret);
+ 		set_bit(EXT4_FLAGS_SHUTDOWN, &sbi->s_ext4_flags);
+ 		thaw_bdev(sb->s_bdev, sb);
+ 		break;
+diff --git a/fs/file.c b/fs/file.c
+index 173d318208b85..d6bc73960e4ac 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -1007,16 +1007,30 @@ unsigned long __fdget_raw(unsigned int fd)
+ 	return __fget_light(fd, 0);
+ }
+ 
++/*
++ * Try to avoid f_pos locking. We only need it if the
++ * file is marked for FMODE_ATOMIC_POS, and it can be
++ * accessed multiple ways.
++ *
++ * Always do it for directories, because pidfd_getfd()
++ * can make a file accessible even if it otherwise would
++ * not be, and for directories this is a correctness
++ * issue, not a "POSIX requirement".
++ */
++static inline bool file_needs_f_pos_lock(struct file *file)
++{
++	return (file->f_mode & FMODE_ATOMIC_POS) &&
++		(file_count(file) > 1 || S_ISDIR(file_inode(file)->i_mode));
++}
++
+ unsigned long __fdget_pos(unsigned int fd)
+ {
+ 	unsigned long v = __fdget(fd);
+ 	struct file *file = (struct file *)(v & ~3);
+ 
+-	if (file && (file->f_mode & FMODE_ATOMIC_POS)) {
+-		if (file_count(file) > 1) {
+-			v |= FDPUT_POS_UNLOCK;
+-			mutex_lock(&file->f_pos_lock);
+-		}
++	if (file && file_needs_f_pos_lock(file)) {
++		v |= FDPUT_POS_UNLOCK;
++		mutex_lock(&file->f_pos_lock);
+ 	}
+ 	return v;
+ }
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index cb13a16496320..1c1b231b2ab33 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -5656,8 +5656,6 @@ static __be32 nfsd4_validate_stateid(struct nfs4_client *cl, stateid_t *stateid)
+ 	if (ZERO_STATEID(stateid) || ONE_STATEID(stateid) ||
+ 		CLOSE_STATEID(stateid))
+ 		return status;
+-	if (!same_clid(&stateid->si_opaque.so_clid, &cl->cl_clientid))
+-		return status;
+ 	spin_lock(&cl->cl_lock);
+ 	s = find_stateid_locked(cl, stateid);
+ 	if (!s)
+diff --git a/fs/open.c b/fs/open.c
+index 1ca4b236fdbe0..83f62cf1432c8 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -1101,7 +1101,7 @@ inline int build_open_flags(const struct open_how *how, struct open_flags *op)
+ 		lookup_flags |= LOOKUP_IN_ROOT;
+ 	if (how->resolve & RESOLVE_CACHED) {
+ 		/* Don't bother even trying for create/truncate/tmpfile open */
+-		if (flags & (O_TRUNC | O_CREAT | O_TMPFILE))
++		if (flags & (O_TRUNC | O_CREAT | __O_TMPFILE))
+ 			return -EAGAIN;
+ 		lookup_flags |= LOOKUP_CACHED;
+ 	}
+diff --git a/fs/super.c b/fs/super.c
+index 7629f9dd031cc..f9795e72e3bf8 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -906,6 +906,7 @@ int reconfigure_super(struct fs_context *fc)
+ 	struct super_block *sb = fc->root->d_sb;
+ 	int retval;
+ 	bool remount_ro = false;
++	bool remount_rw = false;
+ 	bool force = fc->sb_flags & SB_FORCE;
+ 
+ 	if (fc->sb_flags_mask & ~MS_RMT_MASK)
+@@ -922,7 +923,7 @@ int reconfigure_super(struct fs_context *fc)
+ 		if (!(fc->sb_flags & SB_RDONLY) && bdev_read_only(sb->s_bdev))
+ 			return -EACCES;
+ #endif
+-
++		remount_rw = !(fc->sb_flags & SB_RDONLY) && sb_rdonly(sb);
+ 		remount_ro = (fc->sb_flags & SB_RDONLY) && !sb_rdonly(sb);
+ 	}
+ 
+@@ -952,6 +953,14 @@ int reconfigure_super(struct fs_context *fc)
+ 			if (retval)
+ 				return retval;
+ 		}
++	} else if (remount_rw) {
++		/*
++		 * We set s_readonly_remount here to protect filesystem's
++		 * reconfigure code from writes from userspace until
++		 * reconfigure finishes.
++		 */
++		sb->s_readonly_remount = 1;
++		smp_wmb();
+ 	}
+ 
+ 	if (fc->ops->reconfigure) {
+diff --git a/fs/sysv/itree.c b/fs/sysv/itree.c
+index 31f66053e2393..e3d1673b8ec97 100644
+--- a/fs/sysv/itree.c
++++ b/fs/sysv/itree.c
+@@ -145,6 +145,10 @@ static int alloc_branch(struct inode *inode,
+ 		 */
+ 		parent = block_to_cpu(SYSV_SB(inode->i_sb), branch[n-1].key);
+ 		bh = sb_getblk(inode->i_sb, parent);
++		if (!bh) {
++			sysv_free_block(inode->i_sb, branch[n].key);
++			break;
++		}
+ 		lock_buffer(bh);
+ 		memset(bh->b_data, 0, blocksize);
+ 		branch[n].bh = bh;
+diff --git a/include/asm-generic/word-at-a-time.h b/include/asm-generic/word-at-a-time.h
+index 20c93f08c9933..95a1d214108a5 100644
+--- a/include/asm-generic/word-at-a-time.h
++++ b/include/asm-generic/word-at-a-time.h
+@@ -38,7 +38,7 @@ static inline long find_zero(unsigned long mask)
+ 	return (mask >> 8) ? byte : byte + 1;
+ }
+ 
+-static inline bool has_zero(unsigned long val, unsigned long *data, const struct word_at_a_time *c)
++static inline unsigned long has_zero(unsigned long val, unsigned long *data, const struct word_at_a_time *c)
+ {
+ 	unsigned long rhs = val | c->low_bits;
+ 	*data = rhs;
+diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
+index 0f7cd21d6d748..09ccfee48fb45 100644
+--- a/include/drm/ttm/ttm_bo_api.h
++++ b/include/drm/ttm/ttm_bo_api.h
+@@ -157,6 +157,7 @@ struct ttm_buffer_object {
+ 
+ 	struct dma_fence *moving;
+ 	unsigned priority;
++	unsigned pin_count;
+ 
+ 	/**
+ 	 * Special members that are protected by the reserve lock
+@@ -606,6 +607,33 @@ static inline bool ttm_bo_uses_embedded_gem_object(struct ttm_buffer_object *bo)
+ 	return bo->base.dev != NULL;
+ }
+ 
++/**
++ * ttm_bo_pin - Pin the buffer object.
++ * @bo: The buffer object to pin
++ *
++ * Make sure the buffer is not evicted any more during memory pressure.
++ */
++static inline void ttm_bo_pin(struct ttm_buffer_object *bo)
++{
++	dma_resv_assert_held(bo->base.resv);
++	++bo->pin_count;
++}
++
++/**
++ * ttm_bo_unpin - Unpin the buffer object.
++ * @bo: The buffer object to unpin
++ *
++ * Allows the buffer object to be evicted again during memory pressure.
++ */
++static inline void ttm_bo_unpin(struct ttm_buffer_object *bo)
++{
++	dma_resv_assert_held(bo->base.resv);
++	if (bo->pin_count)
++		--bo->pin_count;
++	else
++		WARN_ON_ONCE(true);
++}
++
+ int ttm_mem_evict_first(struct ttm_bo_device *bdev,
+ 			struct ttm_resource_manager *man,
+ 			const struct ttm_place *place,
+diff --git a/include/linux/pm_wakeirq.h b/include/linux/pm_wakeirq.h
+index cd5b62db90845..e63a63aa47a37 100644
+--- a/include/linux/pm_wakeirq.h
++++ b/include/linux/pm_wakeirq.h
+@@ -17,8 +17,8 @@
+ #ifdef CONFIG_PM
+ 
+ extern int dev_pm_set_wake_irq(struct device *dev, int irq);
+-extern int dev_pm_set_dedicated_wake_irq(struct device *dev,
+-					 int irq);
++extern int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq);
++extern int dev_pm_set_dedicated_wake_irq_reverse(struct device *dev, int irq);
+ extern void dev_pm_clear_wake_irq(struct device *dev);
+ extern void dev_pm_enable_wake_irq(struct device *dev);
+ extern void dev_pm_disable_wake_irq(struct device *dev);
+@@ -35,6 +35,11 @@ static inline int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
+ 	return 0;
+ }
+ 
++static inline int dev_pm_set_dedicated_wake_irq_reverse(struct device *dev, int irq)
++{
++	return 0;
++}
++
+ static inline void dev_pm_clear_wake_irq(struct device *dev)
+ {
+ }
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index c57b79301a75e..e418065c2c909 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -55,6 +55,8 @@ struct trace_event;
+ 
+ int trace_raw_output_prep(struct trace_iterator *iter,
+ 			  struct trace_event *event);
++extern __printf(2, 3)
++void trace_event_printf(struct trace_iterator *iter, const char *fmt, ...);
+ 
+ /*
+  * The trace entry - the most basic unit of tracing. This is what
+@@ -87,6 +89,8 @@ struct trace_iterator {
+ 	unsigned long		iter_flags;
+ 	void			*temp;	/* temp holder */
+ 	unsigned int		temp_size;
++	char			*fmt;	/* modified format holder */
++	unsigned int		fmt_size;
+ 
+ 	/* trace_seq for __print_flags() and __print_symbolic() etc. */
+ 	struct trace_seq	tmp_seq;
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index 8879c0ab0b89d..4c8f97a6da5a7 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -663,12 +663,8 @@ static inline u32 ipv6_addr_hash(const struct in6_addr *a)
+ /* more secured version of ipv6_addr_hash() */
+ static inline u32 __ipv6_addr_jhash(const struct in6_addr *a, const u32 initval)
+ {
+-	u32 v = (__force u32)a->s6_addr32[0] ^ (__force u32)a->s6_addr32[1];
+-
+-	return jhash_3words(v,
+-			    (__force u32)a->s6_addr32[2],
+-			    (__force u32)a->s6_addr32[3],
+-			    initval);
++	return jhash2((__force const u32 *)a->s6_addr32,
++		      ARRAY_SIZE(a->s6_addr32), initval);
+ }
+ 
+ static inline bool ipv6_addr_loopback(const struct in6_addr *a)
+diff --git a/include/net/vxlan.h b/include/net/vxlan.h
+index 08537aa14f7c3..e149a0b6f9a3c 100644
+--- a/include/net/vxlan.h
++++ b/include/net/vxlan.h
+@@ -327,10 +327,15 @@ static inline netdev_features_t vxlan_features_check(struct sk_buff *skb,
+ 	return features;
+ }
+ 
+-/* IP header + UDP + VXLAN + Ethernet header */
+-#define VXLAN_HEADROOM (20 + 8 + 8 + 14)
+-/* IPv6 header + UDP + VXLAN + Ethernet header */
+-#define VXLAN6_HEADROOM (40 + 8 + 8 + 14)
++static inline int vxlan_headroom(u32 flags)
++{
++	/* VXLAN:     IP4/6 header + UDP + VXLAN + Ethernet header */
++	/* VXLAN-GPE: IP4/6 header + UDP + VXLAN */
++	return (flags & VXLAN_F_IPV6 ? sizeof(struct ipv6hdr) :
++				       sizeof(struct iphdr)) +
++	       sizeof(struct udphdr) + sizeof(struct vxlanhdr) +
++	       (flags & VXLAN_F_GPE ? 0 : ETH_HLEN);
++}
+ 
+ static inline struct vxlanhdr *vxlan_hdr(struct sk_buff *skb)
+ {
+@@ -492,12 +497,12 @@ static inline void vxlan_flag_attr_error(int attrtype,
+ }
+ 
+ static inline bool vxlan_fdb_nh_path_select(struct nexthop *nh,
+-					    int hash,
++					    u32 hash,
+ 					    struct vxlan_rdst *rdst)
+ {
+ 	struct fib_nh_common *nhc;
+ 
+-	nhc = nexthop_path_fdb_result(nh, hash);
++	nhc = nexthop_path_fdb_result(nh, hash >> 1);
+ 	if (unlikely(!nhc))
+ 		return false;
+ 
+diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
+index 717d388ecbd6a..29917bce6dbc5 100644
+--- a/include/trace/trace_events.h
++++ b/include/trace/trace_events.h
+@@ -364,7 +364,7 @@ trace_raw_output_##call(struct trace_iterator *iter, int flags,		\
+ 	if (ret != TRACE_TYPE_HANDLED)					\
+ 		return ret;						\
+ 									\
+-	trace_seq_printf(s, print);					\
++	trace_event_printf(iter, print);				\
+ 									\
+ 	return trace_handle_return(s);					\
+ }									\
+diff --git a/include/uapi/linux/blkzoned.h b/include/uapi/linux/blkzoned.h
+index 656a326821a2b..321965feee354 100644
+--- a/include/uapi/linux/blkzoned.h
++++ b/include/uapi/linux/blkzoned.h
+@@ -51,13 +51,13 @@ enum blk_zone_type {
+  *
+  * The Zone Condition state machine in the ZBC/ZAC standards maps the above
+  * deinitions as:
+- *   - ZC1: Empty         | BLK_ZONE_EMPTY
++ *   - ZC1: Empty         | BLK_ZONE_COND_EMPTY
+  *   - ZC2: Implicit Open | BLK_ZONE_COND_IMP_OPEN
+  *   - ZC3: Explicit Open | BLK_ZONE_COND_EXP_OPEN
+- *   - ZC4: Closed        | BLK_ZONE_CLOSED
+- *   - ZC5: Full          | BLK_ZONE_FULL
+- *   - ZC6: Read Only     | BLK_ZONE_READONLY
+- *   - ZC7: Offline       | BLK_ZONE_OFFLINE
++ *   - ZC4: Closed        | BLK_ZONE_COND_CLOSED
++ *   - ZC5: Full          | BLK_ZONE_COND_FULL
++ *   - ZC6: Read Only     | BLK_ZONE_COND_READONLY
++ *   - ZC7: Offline       | BLK_ZONE_COND_OFFLINE
+  *
+  * Conditions 0x5 to 0xC are reserved by the current ZBC/ZAC spec and should
+  * be considered invalid.
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 51e6ebe72caf9..92eb4769b0a35 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -6895,6 +6895,14 @@ static void io_wq_submit_work(struct io_wq_work *work)
+ 			 */
+ 			if (ret != -EAGAIN || !(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 				break;
++
++			/*
++			 * If REQ_F_NOWAIT is set, then don't wait or retry with
++			 * poll. -EAGAIN is final for that case.
++			 */
++			if (req->flags & REQ_F_NOWAIT)
++				break;
++
+ 			cond_resched();
+ 		} while (1);
+ 	}
+@@ -7623,12 +7631,21 @@ static int io_run_task_work_sig(void)
+ 	return -EINTR;
+ }
+ 
++static bool current_pending_io(void)
++{
++	struct io_uring_task *tctx = current->io_uring;
++
++	if (!tctx)
++		return false;
++	return percpu_counter_read_positive(&tctx->inflight);
++}
++
+ /* when returns >0, the caller should retry */
+ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 					  struct io_wait_queue *iowq,
+ 					  ktime_t *timeout)
+ {
+-	int token, ret;
++	int io_wait, ret;
+ 
+ 	/* make sure we run task_work before checking for signals */
+ 	ret = io_run_task_work_sig();
+@@ -7639,15 +7656,17 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 		return 1;
+ 
+ 	/*
+-	 * Use io_schedule_prepare/finish, so cpufreq can take into account
+-	 * that the task is waiting for IO - turns out to be important for low
+-	 * QD IO.
++	 * Mark us as being in io_wait if we have pending requests, so cpufreq
++	 * can take into account that the task is waiting for IO - turns out
++	 * to be important for low QD IO.
+ 	 */
+-	token = io_schedule_prepare();
++	io_wait = current->in_iowait;
++	if (current_pending_io())
++		current->in_iowait = 1;
+ 	ret = 1;
+ 	if (!schedule_hrtimeout(timeout, HRTIMER_MODE_ABS))
+ 		ret = -ETIME;
+-	io_schedule_finish(token);
++	current->in_iowait = io_wait;
+ 	return ret;
+ }
+ 
+@@ -10431,7 +10450,7 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p,
+ 	if (!ctx)
+ 		return -ENOMEM;
+ 	ctx->compat = in_compat_syscall();
+-	if (!capable(CAP_IPC_LOCK))
++	if (!ns_capable_noaudit(&init_user_ns, CAP_IPC_LOCK))
+ 		ctx->user = get_uid(current_user());
+ 
+ 	/*
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 53f36bbaf0c66..8c5400fd227b8 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -1222,6 +1222,11 @@ static int perf_mux_hrtimer_restart(struct perf_cpu_context *cpuctx)
+ 	return 0;
+ }
+ 
++static int perf_mux_hrtimer_restart_ipi(void *arg)
++{
++	return perf_mux_hrtimer_restart(arg);
++}
++
+ void perf_pmu_disable(struct pmu *pmu)
+ {
+ 	int *count = this_cpu_ptr(pmu->pmu_disable_count);
+@@ -10772,8 +10777,7 @@ perf_event_mux_interval_ms_store(struct device *dev,
+ 		cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
+ 		cpuctx->hrtimer_interval = ns_to_ktime(NSEC_PER_MSEC * timer);
+ 
+-		cpu_function_call(cpu,
+-			(remote_function_f)perf_mux_hrtimer_restart, cpuctx);
++		cpu_function_call(cpu, perf_mux_hrtimer_restart_ipi, cpuctx);
+ 	}
+ 	cpus_read_unlock();
+ 	mutex_unlock(&mux_interval_mutex);
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 9e90d1e7af2c8..1de9a6bf84711 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -970,7 +970,6 @@ static DEFINE_PER_CPU(struct bpf_trace_sample_data, bpf_misc_sds);
+ u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
+ 		     void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy)
+ {
+-	int nest_level = this_cpu_inc_return(bpf_event_output_nest_level);
+ 	struct perf_raw_frag frag = {
+ 		.copy		= ctx_copy,
+ 		.size		= ctx_size,
+@@ -987,8 +986,12 @@ u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
+ 	};
+ 	struct perf_sample_data *sd;
+ 	struct pt_regs *regs;
++	int nest_level;
+ 	u64 ret;
+ 
++	preempt_disable();
++	nest_level = this_cpu_inc_return(bpf_event_output_nest_level);
++
+ 	if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(bpf_misc_sds.sds))) {
+ 		ret = -EBUSY;
+ 		goto out;
+@@ -1003,6 +1006,7 @@ u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
+ 	ret = __bpf_perf_event_output(regs, map, flags, sd);
+ out:
+ 	this_cpu_dec(bpf_event_output_nest_level);
++	preempt_enable();
+ 	return ret;
+ }
+ 
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 593e446f6c487..3b8c53264441e 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -526,6 +526,8 @@ struct ring_buffer_per_cpu {
+ 	rb_time_t			write_stamp;
+ 	rb_time_t			before_stamp;
+ 	u64				read_stamp;
++	/* pages removed since last reset */
++	unsigned long			pages_removed;
+ 	/* ring buffer pages to update, > 0 to add, < 0 to remove */
+ 	long				nr_pages_to_update;
+ 	struct list_head		new_pages; /* new pages to add */
+@@ -561,6 +563,7 @@ struct ring_buffer_iter {
+ 	struct buffer_page		*head_page;
+ 	struct buffer_page		*cache_reader_page;
+ 	unsigned long			cache_read;
++	unsigned long			cache_pages_removed;
+ 	u64				read_stamp;
+ 	u64				page_stamp;
+ 	struct ring_buffer_event	*event;
+@@ -1833,6 +1836,8 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
+ 		to_remove = rb_list_head(to_remove)->next;
+ 		head_bit |= (unsigned long)to_remove & RB_PAGE_HEAD;
+ 	}
++	/* Read iterators need to reset themselves when some pages removed */
++	cpu_buffer->pages_removed += nr_removed;
+ 
+ 	next_page = rb_list_head(to_remove)->next;
+ 
+@@ -1854,12 +1859,6 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
+ 		cpu_buffer->head_page = list_entry(next_page,
+ 						struct buffer_page, list);
+ 
+-	/*
+-	 * change read pointer to make sure any read iterators reset
+-	 * themselves
+-	 */
+-	cpu_buffer->read = 0;
+-
+ 	/* pages are removed, resume tracing and then free the pages */
+ 	atomic_dec(&cpu_buffer->record_disabled);
+ 	raw_spin_unlock_irq(&cpu_buffer->reader_lock);
+@@ -4105,6 +4104,7 @@ static void rb_iter_reset(struct ring_buffer_iter *iter)
+ 
+ 	iter->cache_reader_page = iter->head_page;
+ 	iter->cache_read = cpu_buffer->read;
++	iter->cache_pages_removed = cpu_buffer->pages_removed;
+ 
+ 	if (iter->head) {
+ 		iter->read_stamp = cpu_buffer->read_stamp;
+@@ -4558,12 +4558,13 @@ rb_iter_peek(struct ring_buffer_iter *iter, u64 *ts)
+ 	buffer = cpu_buffer->buffer;
+ 
+ 	/*
+-	 * Check if someone performed a consuming read to
+-	 * the buffer. A consuming read invalidates the iterator
+-	 * and we need to reset the iterator in this case.
++	 * Check if someone performed a consuming read to the buffer
++	 * or removed some pages from the buffer. In these cases,
++	 * iterator was invalidated and we need to reset it.
+ 	 */
+ 	if (unlikely(iter->cache_read != cpu_buffer->read ||
+-		     iter->cache_reader_page != cpu_buffer->reader_page))
++		     iter->cache_reader_page != cpu_buffer->reader_page ||
++		     iter->cache_pages_removed != cpu_buffer->pages_removed))
+ 		rb_iter_reset(iter);
+ 
+  again:
+@@ -5005,6 +5006,7 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
+ 	cpu_buffer->last_overrun = 0;
+ 
+ 	rb_head_page_activate(cpu_buffer);
++	cpu_buffer->pages_removed = 0;
+ }
+ 
+ /* Must have disabled the cpu buffer then done a synchronize_rcu */
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 7867fc39c4fc5..7e99319bd5365 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3582,6 +3582,62 @@ __find_next_entry(struct trace_iterator *iter, int *ent_cpu,
+ 	return next;
+ }
+ 
++#define STATIC_FMT_BUF_SIZE	128
++static char static_fmt_buf[STATIC_FMT_BUF_SIZE];
++
++static char *trace_iter_expand_format(struct trace_iterator *iter)
++{
++	char *tmp;
++
++	if (iter->fmt == static_fmt_buf)
++		return NULL;
++
++	tmp = krealloc(iter->fmt, iter->fmt_size + STATIC_FMT_BUF_SIZE,
++		       GFP_KERNEL);
++	if (tmp) {
++		iter->fmt_size += STATIC_FMT_BUF_SIZE;
++		iter->fmt = tmp;
++	}
++
++	return tmp;
++}
++
++const char *trace_event_format(struct trace_iterator *iter, const char *fmt)
++{
++	const char *p, *new_fmt;
++	char *q;
++
++	if (WARN_ON_ONCE(!fmt))
++		return fmt;
++
++	p = fmt;
++	new_fmt = q = iter->fmt;
++	while (*p) {
++		if (unlikely(q - new_fmt + 3 > iter->fmt_size)) {
++			if (!trace_iter_expand_format(iter))
++				return fmt;
++
++			q += iter->fmt - new_fmt;
++			new_fmt = iter->fmt;
++		}
++
++		*q++ = *p++;
++
++		/* Replace %p with %px */
++		if (p[-1] == '%') {
++			if (p[0] == '%') {
++				*q++ = *p++;
++			} else if (p[0] == 'p' && !isalnum(p[1])) {
++				*q++ = *p++;
++				*q++ = 'x';
++			}
++		}
++	}
++	*q = '\0';
++
++	return new_fmt;
++}
++
+ #define STATIC_TEMP_BUF_SIZE	128
+ static char static_temp_buf[STATIC_TEMP_BUF_SIZE] __aligned(4);
+ 
+@@ -4368,6 +4424,16 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
+ 	if (iter->temp)
+ 		iter->temp_size = 128;
+ 
++	/*
++	 * trace_event_printf() may need to modify given format
++	 * string to replace %p with %px so that it shows real address
++	 * instead of hash value. However, that is only for the event
++	 * tracing, other tracer may not need. Defer the allocation
++	 * until it is needed.
++	 */
++	iter->fmt = NULL;
++	iter->fmt_size = 0;
++
+ 	/*
+ 	 * We make a copy of the current tracer to avoid concurrent
+ 	 * changes on it while we are reading.
+@@ -4519,6 +4585,7 @@ static int tracing_release(struct inode *inode, struct file *file)
+ 
+ 	mutex_destroy(&iter->mutex);
+ 	free_cpumask_var(iter->started);
++	kfree(iter->fmt);
+ 	kfree(iter->temp);
+ 	kfree(iter->trace);
+ 	kfree(iter->buffer_iter);
+@@ -9349,6 +9416,12 @@ void trace_init_global_iter(struct trace_iterator *iter)
+ 	/* Output in nanoseconds only if we are using a clock in nanoseconds. */
+ 	if (trace_clocks[iter->tr->clock_id].in_ns)
+ 		iter->iter_flags |= TRACE_FILE_TIME_IN_NS;
++
++	/* Can not use kmalloc for iter.temp and iter.fmt */
++	iter->temp = static_temp_buf;
++	iter->temp_size = STATIC_TEMP_BUF_SIZE;
++	iter->fmt = static_fmt_buf;
++	iter->fmt_size = STATIC_FMT_BUF_SIZE;
+ }
+ 
+ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
+@@ -9382,9 +9455,6 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
+ 
+ 	/* Simulate the iterator */
+ 	trace_init_global_iter(&iter);
+-	/* Can not use kmalloc for iter.temp */
+-	iter.temp = static_temp_buf;
+-	iter.temp_size = STATIC_TEMP_BUF_SIZE;
+ 
+ 	for_each_tracing_cpu(cpu) {
+ 		atomic_inc(&per_cpu_ptr(iter.array_buffer->data, cpu)->disabled);
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index e5b505b5b7d09..892b3d2f33b79 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -758,6 +758,8 @@ struct trace_entry *trace_find_next_entry(struct trace_iterator *iter,
+ void trace_buffer_unlock_commit_nostack(struct trace_buffer *buffer,
+ 					struct ring_buffer_event *event);
+ 
++const char *trace_event_format(struct trace_iterator *iter, const char *fmt);
++
+ int trace_empty(struct trace_iterator *iter);
+ 
+ void *trace_find_next_entry_inc(struct trace_iterator *iter);
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index f8ed66f38175b..a46d34d840f69 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -371,7 +371,6 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
+ {
+ 	struct trace_event_call *call = file->event_call;
+ 	struct trace_array *tr = file->tr;
+-	unsigned long file_flags = file->flags;
+ 	int ret = 0;
+ 	int disable;
+ 
+@@ -395,6 +394,8 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
+ 				break;
+ 			disable = file->flags & EVENT_FILE_FL_SOFT_DISABLED;
+ 			clear_bit(EVENT_FILE_FL_SOFT_MODE_BIT, &file->flags);
++			/* Disable use of trace_buffered_event */
++			trace_buffered_event_disable();
+ 		} else
+ 			disable = !(file->flags & EVENT_FILE_FL_SOFT_MODE);
+ 
+@@ -433,6 +434,8 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
+ 			if (atomic_inc_return(&file->sm_ref) > 1)
+ 				break;
+ 			set_bit(EVENT_FILE_FL_SOFT_MODE_BIT, &file->flags);
++			/* Enable use of trace_buffered_event */
++			trace_buffered_event_enable();
+ 		}
+ 
+ 		if (!(file->flags & EVENT_FILE_FL_ENABLED)) {
+@@ -472,15 +475,6 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
+ 		break;
+ 	}
+ 
+-	/* Enable or disable use of trace_buffered_event */
+-	if ((file_flags & EVENT_FILE_FL_SOFT_DISABLED) !=
+-	    (file->flags & EVENT_FILE_FL_SOFT_DISABLED)) {
+-		if (file->flags & EVENT_FILE_FL_SOFT_DISABLED)
+-			trace_buffered_event_enable();
+-		else
+-			trace_buffered_event_disable();
+-	}
+-
+ 	return ret;
+ }
+ 
+diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
+index b3ee8d9b6b62a..94b0991717b6d 100644
+--- a/kernel/trace/trace_output.c
++++ b/kernel/trace/trace_output.c
+@@ -312,13 +312,23 @@ int trace_raw_output_prep(struct trace_iterator *iter,
+ }
+ EXPORT_SYMBOL(trace_raw_output_prep);
+ 
++void trace_event_printf(struct trace_iterator *iter, const char *fmt, ...)
++{
++	va_list ap;
++
++	va_start(ap, fmt);
++	trace_seq_vprintf(&iter->seq, trace_event_format(iter, fmt), ap);
++	va_end(ap);
++}
++EXPORT_SYMBOL(trace_event_printf);
++
+ static int trace_output_raw(struct trace_iterator *iter, char *name,
+ 			    char *fmt, va_list ap)
+ {
+ 	struct trace_seq *s = &iter->seq;
+ 
+ 	trace_seq_printf(s, "%s: ", name);
+-	trace_seq_vprintf(s, fmt, ap);
++	trace_seq_vprintf(s, trace_event_format(iter, fmt), ap);
+ 
+ 	return trace_handle_return(s);
+ }
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index a267c9b6bcef4..756523e5402a8 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -45,6 +45,7 @@ static const struct proto_ops l2cap_sock_ops;
+ static void l2cap_sock_init(struct sock *sk, struct sock *parent);
+ static struct sock *l2cap_sock_alloc(struct net *net, struct socket *sock,
+ 				     int proto, gfp_t prio, int kern);
++static void l2cap_sock_cleanup_listen(struct sock *parent);
+ 
+ bool l2cap_is_socket(struct socket *sock)
+ {
+@@ -1414,6 +1415,7 @@ static int l2cap_sock_release(struct socket *sock)
+ 	if (!sk)
+ 		return 0;
+ 
++	l2cap_sock_cleanup_listen(sk);
+ 	bt_sock_unlink(&l2cap_sk_list, sk);
+ 
+ 	err = l2cap_sock_shutdown(sock, SHUT_RDWR);
+diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
+index 1e9fab79e2456..d594cd501861a 100644
+--- a/net/ceph/osd_client.c
++++ b/net/ceph/osd_client.c
+@@ -3330,17 +3330,24 @@ static int linger_reg_commit_wait(struct ceph_osd_linger_request *lreq)
+ 	int ret;
+ 
+ 	dout("%s lreq %p linger_id %llu\n", __func__, lreq, lreq->linger_id);
+-	ret = wait_for_completion_interruptible(&lreq->reg_commit_wait);
++	ret = wait_for_completion_killable(&lreq->reg_commit_wait);
+ 	return ret ?: lreq->reg_commit_error;
+ }
+ 
+-static int linger_notify_finish_wait(struct ceph_osd_linger_request *lreq)
++static int linger_notify_finish_wait(struct ceph_osd_linger_request *lreq,
++				     unsigned long timeout)
+ {
+-	int ret;
++	long left;
+ 
+ 	dout("%s lreq %p linger_id %llu\n", __func__, lreq, lreq->linger_id);
+-	ret = wait_for_completion_interruptible(&lreq->notify_finish_wait);
+-	return ret ?: lreq->notify_finish_error;
++	left = wait_for_completion_killable_timeout(&lreq->notify_finish_wait,
++						ceph_timeout_jiffies(timeout));
++	if (left <= 0)
++		left = left ?: -ETIMEDOUT;
++	else
++		left = lreq->notify_finish_error; /* completed */
++
++	return left;
+ }
+ 
+ /*
+@@ -4888,7 +4895,8 @@ int ceph_osdc_notify(struct ceph_osd_client *osdc,
+ 	linger_submit(lreq);
+ 	ret = linger_reg_commit_wait(lreq);
+ 	if (!ret)
+-		ret = linger_notify_finish_wait(lreq);
++		ret = linger_notify_finish_wait(lreq,
++				 msecs_to_jiffies(2 * timeout * MSEC_PER_SEC));
+ 	else
+ 		dout("lreq %p failed to initiate notify %d\n", lreq, ret);
+ 
+diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
+index d67d06d6b817c..a811fe0f0f6fd 100644
+--- a/net/core/bpf_sk_storage.c
++++ b/net/core/bpf_sk_storage.c
+@@ -446,8 +446,11 @@ bpf_sk_storage_diag_alloc(const struct nlattr *nla_stgs)
+ 		return ERR_PTR(-EPERM);
+ 
+ 	nla_for_each_nested(nla, nla_stgs, rem) {
+-		if (nla_type(nla) == SK_DIAG_BPF_STORAGE_REQ_MAP_FD)
++		if (nla_type(nla) == SK_DIAG_BPF_STORAGE_REQ_MAP_FD) {
++			if (nla_len(nla) != sizeof(u32))
++				return ERR_PTR(-EINVAL);
+ 			nr_maps++;
++		}
+ 	}
+ 
+ 	diag = kzalloc(sizeof(*diag) + sizeof(diag->maps[0]) * nr_maps,
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index d3c03ebf06a5b..ce37a052b9c32 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -4897,13 +4897,17 @@ static int rtnl_bridge_setlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
+ 	if (br_spec) {
+ 		nla_for_each_nested(attr, br_spec, rem) {
+-			if (nla_type(attr) == IFLA_BRIDGE_FLAGS) {
++			if (nla_type(attr) == IFLA_BRIDGE_FLAGS && !have_flags) {
+ 				if (nla_len(attr) < sizeof(flags))
+ 					return -EINVAL;
+ 
+ 				have_flags = true;
+ 				flags = nla_get_u16(attr);
+-				break;
++			}
++
++			if (nla_type(attr) == IFLA_BRIDGE_MODE) {
++				if (nla_len(attr) < sizeof(u16))
++					return -EINVAL;
+ 			}
+ 		}
+ 	}
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 4e00c6e2cb431..98f4b4a80de42 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1183,7 +1183,8 @@ set_sndbuf:
+ 			cmpxchg(&sk->sk_pacing_status,
+ 				SK_PACING_NONE,
+ 				SK_PACING_NEEDED);
+-		sk->sk_max_pacing_rate = ulval;
++		/* Pairs with READ_ONCE() from sk_getsockopt() */
++		WRITE_ONCE(sk->sk_max_pacing_rate, ulval);
+ 		sk->sk_pacing_rate = min(sk->sk_pacing_rate, ulval);
+ 		break;
+ 		}
+@@ -1331,11 +1332,11 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case SO_SNDBUF:
+-		v.val = sk->sk_sndbuf;
++		v.val = READ_ONCE(sk->sk_sndbuf);
+ 		break;
+ 
+ 	case SO_RCVBUF:
+-		v.val = sk->sk_rcvbuf;
++		v.val = READ_ONCE(sk->sk_rcvbuf);
+ 		break;
+ 
+ 	case SO_REUSEADDR:
+@@ -1422,7 +1423,7 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case SO_RCVLOWAT:
+-		v.val = sk->sk_rcvlowat;
++		v.val = READ_ONCE(sk->sk_rcvlowat);
+ 		break;
+ 
+ 	case SO_SNDLOWAT:
+@@ -1516,7 +1517,7 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 		if (!sock->ops->set_peek_off)
+ 			return -EOPNOTSUPP;
+ 
+-		v.val = sk->sk_peek_off;
++		v.val = READ_ONCE(sk->sk_peek_off);
+ 		break;
+ 	case SO_NOFCS:
+ 		v.val = sock_flag(sk, SOCK_NOFCS);
+@@ -1546,17 +1547,19 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+ 	case SO_BUSY_POLL:
+-		v.val = sk->sk_ll_usec;
++		v.val = READ_ONCE(sk->sk_ll_usec);
+ 		break;
+ #endif
+ 
+ 	case SO_MAX_PACING_RATE:
++		/* The READ_ONCE() pair with the WRITE_ONCE() in sk_setsockopt() */
+ 		if (sizeof(v.ulval) != sizeof(v.val) && len >= sizeof(v.ulval)) {
+ 			lv = sizeof(v.ulval);
+-			v.ulval = sk->sk_max_pacing_rate;
++			v.ulval = READ_ONCE(sk->sk_max_pacing_rate);
+ 		} else {
+ 			/* 32bit version */
+-			v.val = min_t(unsigned long, sk->sk_max_pacing_rate, ~0U);
++			v.val = min_t(unsigned long, ~0U,
++				      READ_ONCE(sk->sk_max_pacing_rate));
+ 		}
+ 		break;
+ 
+@@ -2742,7 +2745,7 @@ EXPORT_SYMBOL(__sk_mem_reclaim);
+ 
+ int sk_set_peek_off(struct sock *sk, int val)
+ {
+-	sk->sk_peek_off = val;
++	WRITE_ONCE(sk->sk_peek_off, val);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(sk_set_peek_off);
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index ee5d3f49b0b5b..f375ef1501490 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -122,7 +122,6 @@ static void sock_map_sk_acquire(struct sock *sk)
+ 	__acquires(&sk->sk_lock.slock)
+ {
+ 	lock_sock(sk);
+-	preempt_disable();
+ 	rcu_read_lock();
+ }
+ 
+@@ -130,7 +129,6 @@ static void sock_map_sk_release(struct sock *sk)
+ 	__releases(&sk->sk_lock.slock)
+ {
+ 	rcu_read_unlock();
+-	preempt_enable();
+ 	release_sock(sk);
+ }
+ 
+diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c
+index 2535d3dfb92c8..c0fb70936ca17 100644
+--- a/net/dcb/dcbnl.c
++++ b/net/dcb/dcbnl.c
+@@ -946,7 +946,7 @@ static int dcbnl_bcn_setcfg(struct net_device *netdev, struct nlmsghdr *nlh,
+ 		return -EOPNOTSUPP;
+ 
+ 	ret = nla_parse_nested_deprecated(data, DCB_BCN_ATTR_MAX,
+-					  tb[DCB_ATTR_BCN], dcbnl_pfc_up_nest,
++					  tb[DCB_ATTR_BCN], dcbnl_bcn_nest,
+ 					  NULL);
+ 	if (ret)
+ 		return ret;
+diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
+index f3ca6eea2ca39..a707fa1dbcafd 100644
+--- a/net/ipv4/tcp_metrics.c
++++ b/net/ipv4/tcp_metrics.c
+@@ -40,7 +40,7 @@ struct tcp_fastopen_metrics {
+ 
+ struct tcp_metrics_block {
+ 	struct tcp_metrics_block __rcu	*tcpm_next;
+-	possible_net_t			tcpm_net;
++	struct net			*tcpm_net;
+ 	struct inetpeer_addr		tcpm_saddr;
+ 	struct inetpeer_addr		tcpm_daddr;
+ 	unsigned long			tcpm_stamp;
+@@ -51,34 +51,38 @@ struct tcp_metrics_block {
+ 	struct rcu_head			rcu_head;
+ };
+ 
+-static inline struct net *tm_net(struct tcp_metrics_block *tm)
++static inline struct net *tm_net(const struct tcp_metrics_block *tm)
+ {
+-	return read_pnet(&tm->tcpm_net);
++	/* Paired with the WRITE_ONCE() in tcpm_new() */
++	return READ_ONCE(tm->tcpm_net);
+ }
+ 
+ static bool tcp_metric_locked(struct tcp_metrics_block *tm,
+ 			      enum tcp_metric_index idx)
+ {
+-	return tm->tcpm_lock & (1 << idx);
++	/* Paired with WRITE_ONCE() in tcpm_suck_dst() */
++	return READ_ONCE(tm->tcpm_lock) & (1 << idx);
+ }
+ 
+-static u32 tcp_metric_get(struct tcp_metrics_block *tm,
++static u32 tcp_metric_get(const struct tcp_metrics_block *tm,
+ 			  enum tcp_metric_index idx)
+ {
+-	return tm->tcpm_vals[idx];
++	/* Paired with WRITE_ONCE() in tcp_metric_set() */
++	return READ_ONCE(tm->tcpm_vals[idx]);
+ }
+ 
+ static void tcp_metric_set(struct tcp_metrics_block *tm,
+ 			   enum tcp_metric_index idx,
+ 			   u32 val)
+ {
+-	tm->tcpm_vals[idx] = val;
++	/* Paired with READ_ONCE() in tcp_metric_get() */
++	WRITE_ONCE(tm->tcpm_vals[idx], val);
+ }
+ 
+ static bool addr_same(const struct inetpeer_addr *a,
+ 		      const struct inetpeer_addr *b)
+ {
+-	return inetpeer_addr_cmp(a, b) == 0;
++	return (a->family == b->family) && !inetpeer_addr_cmp(a, b);
+ }
+ 
+ struct tcpm_hash_bucket {
+@@ -89,6 +93,7 @@ static struct tcpm_hash_bucket	*tcp_metrics_hash __read_mostly;
+ static unsigned int		tcp_metrics_hash_log __read_mostly;
+ 
+ static DEFINE_SPINLOCK(tcp_metrics_lock);
++static DEFINE_SEQLOCK(fastopen_seqlock);
+ 
+ static void tcpm_suck_dst(struct tcp_metrics_block *tm,
+ 			  const struct dst_entry *dst,
+@@ -97,7 +102,7 @@ static void tcpm_suck_dst(struct tcp_metrics_block *tm,
+ 	u32 msval;
+ 	u32 val;
+ 
+-	tm->tcpm_stamp = jiffies;
++	WRITE_ONCE(tm->tcpm_stamp, jiffies);
+ 
+ 	val = 0;
+ 	if (dst_metric_locked(dst, RTAX_RTT))
+@@ -110,30 +115,42 @@ static void tcpm_suck_dst(struct tcp_metrics_block *tm,
+ 		val |= 1 << TCP_METRIC_CWND;
+ 	if (dst_metric_locked(dst, RTAX_REORDERING))
+ 		val |= 1 << TCP_METRIC_REORDERING;
+-	tm->tcpm_lock = val;
++	/* Paired with READ_ONCE() in tcp_metric_locked() */
++	WRITE_ONCE(tm->tcpm_lock, val);
+ 
+ 	msval = dst_metric_raw(dst, RTAX_RTT);
+-	tm->tcpm_vals[TCP_METRIC_RTT] = msval * USEC_PER_MSEC;
++	tcp_metric_set(tm, TCP_METRIC_RTT, msval * USEC_PER_MSEC);
+ 
+ 	msval = dst_metric_raw(dst, RTAX_RTTVAR);
+-	tm->tcpm_vals[TCP_METRIC_RTTVAR] = msval * USEC_PER_MSEC;
+-	tm->tcpm_vals[TCP_METRIC_SSTHRESH] = dst_metric_raw(dst, RTAX_SSTHRESH);
+-	tm->tcpm_vals[TCP_METRIC_CWND] = dst_metric_raw(dst, RTAX_CWND);
+-	tm->tcpm_vals[TCP_METRIC_REORDERING] = dst_metric_raw(dst, RTAX_REORDERING);
++	tcp_metric_set(tm, TCP_METRIC_RTTVAR, msval * USEC_PER_MSEC);
++	tcp_metric_set(tm, TCP_METRIC_SSTHRESH,
++		       dst_metric_raw(dst, RTAX_SSTHRESH));
++	tcp_metric_set(tm, TCP_METRIC_CWND,
++		       dst_metric_raw(dst, RTAX_CWND));
++	tcp_metric_set(tm, TCP_METRIC_REORDERING,
++		       dst_metric_raw(dst, RTAX_REORDERING));
+ 	if (fastopen_clear) {
++		write_seqlock(&fastopen_seqlock);
+ 		tm->tcpm_fastopen.mss = 0;
+ 		tm->tcpm_fastopen.syn_loss = 0;
+ 		tm->tcpm_fastopen.try_exp = 0;
+ 		tm->tcpm_fastopen.cookie.exp = false;
+ 		tm->tcpm_fastopen.cookie.len = 0;
++		write_sequnlock(&fastopen_seqlock);
+ 	}
+ }
+ 
+ #define TCP_METRICS_TIMEOUT		(60 * 60 * HZ)
+ 
+-static void tcpm_check_stamp(struct tcp_metrics_block *tm, struct dst_entry *dst)
++static void tcpm_check_stamp(struct tcp_metrics_block *tm,
++			     const struct dst_entry *dst)
+ {
+-	if (tm && unlikely(time_after(jiffies, tm->tcpm_stamp + TCP_METRICS_TIMEOUT)))
++	unsigned long limit;
++
++	if (!tm)
++		return;
++	limit = READ_ONCE(tm->tcpm_stamp) + TCP_METRICS_TIMEOUT;
++	if (unlikely(time_after(jiffies, limit)))
+ 		tcpm_suck_dst(tm, dst, false);
+ }
+ 
+@@ -174,20 +191,23 @@ static struct tcp_metrics_block *tcpm_new(struct dst_entry *dst,
+ 		oldest = deref_locked(tcp_metrics_hash[hash].chain);
+ 		for (tm = deref_locked(oldest->tcpm_next); tm;
+ 		     tm = deref_locked(tm->tcpm_next)) {
+-			if (time_before(tm->tcpm_stamp, oldest->tcpm_stamp))
++			if (time_before(READ_ONCE(tm->tcpm_stamp),
++					READ_ONCE(oldest->tcpm_stamp)))
+ 				oldest = tm;
+ 		}
+ 		tm = oldest;
+ 	} else {
+-		tm = kmalloc(sizeof(*tm), GFP_ATOMIC);
++		tm = kzalloc(sizeof(*tm), GFP_ATOMIC);
+ 		if (!tm)
+ 			goto out_unlock;
+ 	}
+-	write_pnet(&tm->tcpm_net, net);
++	/* Paired with the READ_ONCE() in tm_net() */
++	WRITE_ONCE(tm->tcpm_net, net);
++
+ 	tm->tcpm_saddr = *saddr;
+ 	tm->tcpm_daddr = *daddr;
+ 
+-	tcpm_suck_dst(tm, dst, true);
++	tcpm_suck_dst(tm, dst, reclaim);
+ 
+ 	if (likely(!reclaim)) {
+ 		tm->tcpm_next = tcp_metrics_hash[hash].chain;
+@@ -434,7 +454,7 @@ void tcp_update_metrics(struct sock *sk)
+ 					       tp->reordering);
+ 		}
+ 	}
+-	tm->tcpm_stamp = jiffies;
++	WRITE_ONCE(tm->tcpm_stamp, jiffies);
+ out_unlock:
+ 	rcu_read_unlock();
+ }
+@@ -539,8 +559,6 @@ bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst)
+ 	return ret;
+ }
+ 
+-static DEFINE_SEQLOCK(fastopen_seqlock);
+-
+ void tcp_fastopen_cache_get(struct sock *sk, u16 *mss,
+ 			    struct tcp_fastopen_cookie *cookie)
+ {
+@@ -647,7 +665,7 @@ static int tcp_metrics_fill_info(struct sk_buff *msg,
+ 	}
+ 
+ 	if (nla_put_msecs(msg, TCP_METRICS_ATTR_AGE,
+-			  jiffies - tm->tcpm_stamp,
++			  jiffies - READ_ONCE(tm->tcpm_stamp),
+ 			  TCP_METRICS_ATTR_PAD) < 0)
+ 		goto nla_put_failure;
+ 
+@@ -658,7 +676,7 @@ static int tcp_metrics_fill_info(struct sk_buff *msg,
+ 		if (!nest)
+ 			goto nla_put_failure;
+ 		for (i = 0; i < TCP_METRIC_MAX_KERNEL + 1; i++) {
+-			u32 val = tm->tcpm_vals[i];
++			u32 val = tcp_metric_get(tm, i);
+ 
+ 			if (!val)
+ 				continue;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index d5d10496b4aef..9b414681500a5 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -2555,12 +2555,18 @@ static void manage_tempaddrs(struct inet6_dev *idev,
+ 			ipv6_ifa_notify(0, ift);
+ 	}
+ 
+-	if ((create || list_empty(&idev->tempaddr_list)) &&
+-	    idev->cnf.use_tempaddr > 0) {
++	/* Also create a temporary address if it's enabled but no temporary
++	 * address currently exists.
++	 * However, we get called with valid_lft == 0, prefered_lft == 0, create == false
++	 * as part of cleanup (ie. deleting the mngtmpaddr).
++	 * We don't want that to result in creating a new temporary ip address.
++	 */
++	if (list_empty(&idev->tempaddr_list) && (valid_lft || prefered_lft))
++		create = true;
++
++	if (create && idev->cnf.use_tempaddr > 0) {
+ 		/* When a new public address is created as described
+ 		 * in [ADDRCONF], also create a new temporary address.
+-		 * Also create a temporary address if it's enabled but
+-		 * no temporary address currently exists.
+ 		 */
+ 		read_unlock_bh(&idev->lock);
+ 		ipv6_create_tempaddr(ifp, false);
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 5f0ac47acc74b..c758d0cc6146d 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -1069,7 +1069,7 @@ static int ip6mr_cache_report(struct mr_table *mrt, struct sk_buff *pkt,
+ 		   And all this only to mangle msg->im6_msgtype and
+ 		   to set msg->im6_mbz to "mbz" :-)
+ 		 */
+-		skb_push(skb, -skb_network_offset(pkt));
++		__skb_pull(skb, skb_network_offset(pkt));
+ 
+ 		skb_push(skb, sizeof(*msg));
+ 		skb_reset_transport_header(skb);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 356416564d9f4..19653b8784bbc 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3350,8 +3350,6 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 			NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN]);
+ 			return PTR_ERR(chain);
+ 		}
+-		if (nft_chain_is_bound(chain))
+-			return -EOPNOTSUPP;
+ 
+ 	} else if (nla[NFTA_RULE_CHAIN_ID]) {
+ 		chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID],
+@@ -3364,6 +3362,9 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 		return -EINVAL;
+ 	}
+ 
++	if (nft_chain_is_bound(chain))
++		return -EOPNOTSUPP;
++
+ 	if (nla[NFTA_RULE_HANDLE]) {
+ 		handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_HANDLE]));
+ 		rule = __nft_rule_lookup(chain, handle);
+@@ -4576,10 +4577,9 @@ static int nft_validate_register_store(const struct nft_ctx *ctx,
+ 				       enum nft_data_types type,
+ 				       unsigned int len);
+ 
+-static int nf_tables_bind_check_setelem(const struct nft_ctx *ctx,
+-					struct nft_set *set,
+-					const struct nft_set_iter *iter,
+-					struct nft_set_elem *elem)
++static int nft_setelem_data_validate(const struct nft_ctx *ctx,
++				     struct nft_set *set,
++				     struct nft_set_elem *elem)
+ {
+ 	const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ 	enum nft_registers dreg;
+@@ -4591,6 +4591,14 @@ static int nf_tables_bind_check_setelem(const struct nft_ctx *ctx,
+ 					   set->dlen);
+ }
+ 
++static int nf_tables_bind_check_setelem(const struct nft_ctx *ctx,
++					struct nft_set *set,
++					const struct nft_set_iter *iter,
++					struct nft_set_elem *elem)
++{
++	return nft_setelem_data_validate(ctx, set, elem);
++}
++
+ int nf_tables_bind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 		       struct nft_set_binding *binding)
+ {
+diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c
+index 6b0efab4fad09..6bf1c852e8eaa 100644
+--- a/net/netfilter/nft_immediate.c
++++ b/net/netfilter/nft_immediate.c
+@@ -125,15 +125,27 @@ static void nft_immediate_activate(const struct nft_ctx *ctx,
+ 	return nft_data_hold(&priv->data, nft_dreg_to_type(priv->dreg));
+ }
+ 
++static void nft_immediate_chain_deactivate(const struct nft_ctx *ctx,
++					   struct nft_chain *chain,
++					   enum nft_trans_phase phase)
++{
++	struct nft_ctx chain_ctx;
++	struct nft_rule *rule;
++
++	chain_ctx = *ctx;
++	chain_ctx.chain = chain;
++
++	list_for_each_entry(rule, &chain->rules, list)
++		nft_rule_expr_deactivate(&chain_ctx, rule, phase);
++}
++
+ static void nft_immediate_deactivate(const struct nft_ctx *ctx,
+ 				     const struct nft_expr *expr,
+ 				     enum nft_trans_phase phase)
+ {
+ 	const struct nft_immediate_expr *priv = nft_expr_priv(expr);
+ 	const struct nft_data *data = &priv->data;
+-	struct nft_ctx chain_ctx;
+ 	struct nft_chain *chain;
+-	struct nft_rule *rule;
+ 
+ 	if (priv->dreg == NFT_REG_VERDICT) {
+ 		switch (data->verdict.code) {
+@@ -143,20 +155,17 @@ static void nft_immediate_deactivate(const struct nft_ctx *ctx,
+ 			if (!nft_chain_binding(chain))
+ 				break;
+ 
+-			chain_ctx = *ctx;
+-			chain_ctx.chain = chain;
+-
+-			list_for_each_entry(rule, &chain->rules, list)
+-				nft_rule_expr_deactivate(&chain_ctx, rule, phase);
+-
+ 			switch (phase) {
+ 			case NFT_TRANS_PREPARE_ERROR:
+ 				nf_tables_unbind_chain(ctx, chain);
+-				fallthrough;
++				nft_deactivate_next(ctx->net, chain);
++				break;
+ 			case NFT_TRANS_PREPARE:
++				nft_immediate_chain_deactivate(ctx, chain, phase);
+ 				nft_deactivate_next(ctx->net, chain);
+ 				break;
+ 			default:
++				nft_immediate_chain_deactivate(ctx, chain, phase);
+ 				nft_chain_del(chain);
+ 				chain->bound = false;
+ 				chain->table->use--;
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 172b994790a06..eae760adae4d5 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -216,29 +216,37 @@ static void *nft_rbtree_get(const struct net *net, const struct nft_set *set,
+ 
+ static int nft_rbtree_gc_elem(const struct nft_set *__set,
+ 			      struct nft_rbtree *priv,
+-			      struct nft_rbtree_elem *rbe)
++			      struct nft_rbtree_elem *rbe,
++			      u8 genmask)
+ {
+ 	struct nft_set *set = (struct nft_set *)__set;
+ 	struct rb_node *prev = rb_prev(&rbe->node);
+-	struct nft_rbtree_elem *rbe_prev = NULL;
++	struct nft_rbtree_elem *rbe_prev;
+ 	struct nft_set_gc_batch *gcb;
+ 
+ 	gcb = nft_set_gc_batch_check(set, NULL, GFP_ATOMIC);
+ 	if (!gcb)
+ 		return -ENOMEM;
+ 
+-	/* search for expired end interval coming before this element. */
++	/* search for end interval coming before this element.
++	 * end intervals don't carry a timeout extension, they
++	 * are coupled with the interval start element.
++	 */
+ 	while (prev) {
+ 		rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
+-		if (nft_rbtree_interval_end(rbe_prev))
++		if (nft_rbtree_interval_end(rbe_prev) &&
++		    nft_set_elem_active(&rbe_prev->ext, genmask))
+ 			break;
+ 
+ 		prev = rb_prev(prev);
+ 	}
+ 
+-	if (rbe_prev) {
++	if (prev) {
++		rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
++
+ 		rb_erase(&rbe_prev->node, &priv->root);
+ 		atomic_dec(&set->nelems);
++		nft_set_gc_batch_add(gcb, rbe_prev);
+ 	}
+ 
+ 	rb_erase(&rbe->node, &priv->root);
+@@ -320,7 +328,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 
+ 		/* perform garbage collection to avoid bogus overlap reports. */
+ 		if (nft_set_elem_expired(&rbe->ext)) {
+-			err = nft_rbtree_gc_elem(set, priv, rbe);
++			err = nft_rbtree_gc_elem(set, priv, rbe, genmask);
+ 			if (err < 0)
+ 				return err;
+ 
+diff --git a/net/sched/cls_fw.c b/net/sched/cls_fw.c
+index 41f0898a5a565..08c41f1976c47 100644
+--- a/net/sched/cls_fw.c
++++ b/net/sched/cls_fw.c
+@@ -266,7 +266,6 @@ static int fw_change(struct net *net, struct sk_buff *in_skb,
+ 			return -ENOBUFS;
+ 
+ 		fnew->id = f->id;
+-		fnew->res = f->res;
+ 		fnew->ifindex = f->ifindex;
+ 		fnew->tp = f->tp;
+ 
+diff --git a/net/sched/cls_route.c b/net/sched/cls_route.c
+index b775e681cb56e..1ad4b3e60eb3b 100644
+--- a/net/sched/cls_route.c
++++ b/net/sched/cls_route.c
+@@ -511,7 +511,6 @@ static int route4_change(struct net *net, struct sk_buff *in_skb,
+ 	if (fold) {
+ 		f->id = fold->id;
+ 		f->iif = fold->iif;
+-		f->res = fold->res;
+ 		f->handle = fold->handle;
+ 
+ 		f->tp = fold->tp;
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index 1ac8ff445a6d3..b2d2ba561eba1 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -812,7 +812,6 @@ static struct tc_u_knode *u32_init_knode(struct net *net, struct tcf_proto *tp,
+ 
+ 	new->ifindex = n->ifindex;
+ 	new->fshift = n->fshift;
+-	new->res = n->res;
+ 	new->flags = n->flags;
+ 	RCU_INIT_POINTER(new->ht_down, ht);
+ 
+@@ -999,18 +998,62 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 		return -EINVAL;
+ 	}
+ 
++	/* At this point, we need to derive the new handle that will be used to
++	 * uniquely map the identity of this table match entry. The
++	 * identity of the entry that we need to construct is 32 bits made of:
++	 *     htid(12b):bucketid(8b):node/entryid(12b)
++	 *
++	 * At this point _we have the table(ht)_ in which we will insert this
++	 * entry. We carry the table's id in variable "htid".
++	 * Note that earlier code picked the ht selection either by a) the user
++	 * providing the htid specified via TCA_U32_HASH attribute or b) when
++	 * no such attribute is passed then the root ht, is default to at ID
++	 * 0x[800][00][000]. Rule: the root table has a single bucket with ID 0.
++	 * If OTOH the user passed us the htid, they may also pass a bucketid of
++	 * choice. 0 is fine. For example a user htid is 0x[600][01][000] it is
++	 * indicating hash bucketid of 1. Rule: the entry/node ID _cannot_ be
++	 * passed via the htid, so even if it was non-zero it will be ignored.
++	 *
++	 * We may also have a handle, if the user passed one. The handle also
++	 * carries the same addressing of htid(12b):bucketid(8b):node/entryid(12b).
++	 * Rule: the bucketid on the handle is ignored even if one was passed;
++	 * rather the value on "htid" is always assumed to be the bucketid.
++	 */
+ 	if (handle) {
++		/* Rule: The htid from handle and tableid from htid must match */
+ 		if (TC_U32_HTID(handle) && TC_U32_HTID(handle ^ htid)) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Handle specified hash table address mismatch");
+ 			return -EINVAL;
+ 		}
+-		handle = htid | TC_U32_NODE(handle);
+-		err = idr_alloc_u32(&ht->handle_idr, NULL, &handle, handle,
+-				    GFP_KERNEL);
+-		if (err)
+-			return err;
+-	} else
++		/* Ok, so far we have a valid htid(12b):bucketid(8b) but we
++		 * need to finalize the table entry identification with the last
++		 * part - the node/entryid(12b)). Rule: Nodeid _cannot be 0_ for
++		 * entries. Rule: nodeid of 0 is reserved only for tables(see
++		 * earlier code which processes TC_U32_DIVISOR attribute).
++		 * Rule: The nodeid can only be derived from the handle (and not
++		 * htid).
++		 * Rule: if the handle specified zero for the node id example
++		 * 0x60000000, then pick a new nodeid from the pool of IDs
++		 * this hash table has been allocating from.
++		 * If OTOH it is specified (i.e for example the user passed a
++		 * handle such as 0x60000123), then we use it generate our final
++		 * handle which is used to uniquely identify the match entry.
++		 */
++		if (!TC_U32_NODE(handle)) {
++			handle = gen_new_kid(ht, htid);
++		} else {
++			handle = htid | TC_U32_NODE(handle);
++			err = idr_alloc_u32(&ht->handle_idr, NULL, &handle,
++					    handle, GFP_KERNEL);
++			if (err)
++				return err;
++		}
++	} else {
++		/* The user did not give us a handle; lets just generate one
++		 * from the table's pool of nodeids.
++		 */
+ 		handle = gen_new_kid(ht, htid);
++	}
+ 
+ 	if (tb[TCA_U32_SEL] == NULL) {
+ 		NL_SET_ERR_MSG_MOD(extack, "Selector not specified");
+diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
+index 50e15add6068f..56d3dc5e95c7c 100644
+--- a/net/sched/sch_mqprio.c
++++ b/net/sched/sch_mqprio.c
+@@ -130,6 +130,97 @@ static int parse_attr(struct nlattr *tb[], int maxtype, struct nlattr *nla,
+ 	return 0;
+ }
+ 
++static int mqprio_parse_nlattr(struct Qdisc *sch, struct tc_mqprio_qopt *qopt,
++			       struct nlattr *opt,
++			       struct netlink_ext_ack *extack)
++{
++	struct mqprio_sched *priv = qdisc_priv(sch);
++	struct nlattr *tb[TCA_MQPRIO_MAX + 1];
++	struct nlattr *attr;
++	int i, rem, err;
++
++	err = parse_attr(tb, TCA_MQPRIO_MAX, opt, mqprio_policy,
++			 sizeof(*qopt));
++	if (err < 0)
++		return err;
++
++	if (!qopt->hw) {
++		NL_SET_ERR_MSG(extack,
++			       "mqprio TCA_OPTIONS can only contain netlink attributes in hardware mode");
++		return -EINVAL;
++	}
++
++	if (tb[TCA_MQPRIO_MODE]) {
++		priv->flags |= TC_MQPRIO_F_MODE;
++		priv->mode = *(u16 *)nla_data(tb[TCA_MQPRIO_MODE]);
++	}
++
++	if (tb[TCA_MQPRIO_SHAPER]) {
++		priv->flags |= TC_MQPRIO_F_SHAPER;
++		priv->shaper = *(u16 *)nla_data(tb[TCA_MQPRIO_SHAPER]);
++	}
++
++	if (tb[TCA_MQPRIO_MIN_RATE64]) {
++		if (priv->shaper != TC_MQPRIO_SHAPER_BW_RATE) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[TCA_MQPRIO_MIN_RATE64],
++					    "min_rate accepted only when shaper is in bw_rlimit mode");
++			return -EINVAL;
++		}
++		i = 0;
++		nla_for_each_nested(attr, tb[TCA_MQPRIO_MIN_RATE64],
++				    rem) {
++			if (nla_type(attr) != TCA_MQPRIO_MIN_RATE64) {
++				NL_SET_ERR_MSG_ATTR(extack, attr,
++						    "Attribute type expected to be TCA_MQPRIO_MIN_RATE64");
++				return -EINVAL;
++			}
++
++			if (nla_len(attr) != sizeof(u64)) {
++				NL_SET_ERR_MSG_ATTR(extack, attr,
++						    "Attribute TCA_MQPRIO_MIN_RATE64 expected to have 8 bytes length");
++				return -EINVAL;
++			}
++
++			if (i >= qopt->num_tc)
++				break;
++			priv->min_rate[i] = *(u64 *)nla_data(attr);
++			i++;
++		}
++		priv->flags |= TC_MQPRIO_F_MIN_RATE;
++	}
++
++	if (tb[TCA_MQPRIO_MAX_RATE64]) {
++		if (priv->shaper != TC_MQPRIO_SHAPER_BW_RATE) {
++			NL_SET_ERR_MSG_ATTR(extack, tb[TCA_MQPRIO_MAX_RATE64],
++					    "max_rate accepted only when shaper is in bw_rlimit mode");
++			return -EINVAL;
++		}
++		i = 0;
++		nla_for_each_nested(attr, tb[TCA_MQPRIO_MAX_RATE64],
++				    rem) {
++			if (nla_type(attr) != TCA_MQPRIO_MAX_RATE64) {
++				NL_SET_ERR_MSG_ATTR(extack, attr,
++						    "Attribute type expected to be TCA_MQPRIO_MAX_RATE64");
++				return -EINVAL;
++			}
++
++			if (nla_len(attr) != sizeof(u64)) {
++				NL_SET_ERR_MSG_ATTR(extack, attr,
++						    "Attribute TCA_MQPRIO_MAX_RATE64 expected to have 8 bytes length");
++				return -EINVAL;
++			}
++
++			if (i >= qopt->num_tc)
++				break;
++			priv->max_rate[i] = *(u64 *)nla_data(attr);
++			i++;
++		}
++		priv->flags |= TC_MQPRIO_F_MAX_RATE;
++	}
++
++	return 0;
++}
++
+ static int mqprio_init(struct Qdisc *sch, struct nlattr *opt,
+ 		       struct netlink_ext_ack *extack)
+ {
+@@ -139,9 +230,6 @@ static int mqprio_init(struct Qdisc *sch, struct nlattr *opt,
+ 	struct Qdisc *qdisc;
+ 	int i, err = -EOPNOTSUPP;
+ 	struct tc_mqprio_qopt *qopt = NULL;
+-	struct nlattr *tb[TCA_MQPRIO_MAX + 1];
+-	struct nlattr *attr;
+-	int rem;
+ 	int len;
+ 
+ 	BUILD_BUG_ON(TC_MAX_QUEUE != TC_QOPT_MAX_QUEUE);
+@@ -166,55 +254,9 @@ static int mqprio_init(struct Qdisc *sch, struct nlattr *opt,
+ 
+ 	len = nla_len(opt) - NLA_ALIGN(sizeof(*qopt));
+ 	if (len > 0) {
+-		err = parse_attr(tb, TCA_MQPRIO_MAX, opt, mqprio_policy,
+-				 sizeof(*qopt));
+-		if (err < 0)
++		err = mqprio_parse_nlattr(sch, qopt, opt, extack);
++		if (err)
+ 			return err;
+-
+-		if (!qopt->hw)
+-			return -EINVAL;
+-
+-		if (tb[TCA_MQPRIO_MODE]) {
+-			priv->flags |= TC_MQPRIO_F_MODE;
+-			priv->mode = *(u16 *)nla_data(tb[TCA_MQPRIO_MODE]);
+-		}
+-
+-		if (tb[TCA_MQPRIO_SHAPER]) {
+-			priv->flags |= TC_MQPRIO_F_SHAPER;
+-			priv->shaper = *(u16 *)nla_data(tb[TCA_MQPRIO_SHAPER]);
+-		}
+-
+-		if (tb[TCA_MQPRIO_MIN_RATE64]) {
+-			if (priv->shaper != TC_MQPRIO_SHAPER_BW_RATE)
+-				return -EINVAL;
+-			i = 0;
+-			nla_for_each_nested(attr, tb[TCA_MQPRIO_MIN_RATE64],
+-					    rem) {
+-				if (nla_type(attr) != TCA_MQPRIO_MIN_RATE64)
+-					return -EINVAL;
+-				if (i >= qopt->num_tc)
+-					break;
+-				priv->min_rate[i] = *(u64 *)nla_data(attr);
+-				i++;
+-			}
+-			priv->flags |= TC_MQPRIO_F_MIN_RATE;
+-		}
+-
+-		if (tb[TCA_MQPRIO_MAX_RATE64]) {
+-			if (priv->shaper != TC_MQPRIO_SHAPER_BW_RATE)
+-				return -EINVAL;
+-			i = 0;
+-			nla_for_each_nested(attr, tb[TCA_MQPRIO_MAX_RATE64],
+-					    rem) {
+-				if (nla_type(attr) != TCA_MQPRIO_MAX_RATE64)
+-					return -EINVAL;
+-				if (i >= qopt->num_tc)
+-					break;
+-				priv->max_rate[i] = *(u64 *)nla_data(attr);
+-				i++;
+-			}
+-			priv->flags |= TC_MQPRIO_F_MAX_RATE;
+-		}
+ 	}
+ 
+ 	/* pre-allocate qdisc, attachment can't fail */
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index de63d6d41645c..2784d69892117 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -1964,7 +1964,8 @@ rcv:
+ 
+ 	skb_reset_network_header(*skb);
+ 	skb_pull(*skb, tipc_ehdr_size(ehdr));
+-	pskb_trim(*skb, (*skb)->len - aead->authsize);
++	if (pskb_trim(*skb, (*skb)->len - aead->authsize))
++		goto free_skb;
+ 
+ 	/* Validate TIPCv2 message */
+ 	if (unlikely(!tipc_msg_validate(skb))) {
+diff --git a/net/tipc/node.c b/net/tipc/node.c
+index 38f61dccb8552..9e3cfeb82a23d 100644
+--- a/net/tipc/node.c
++++ b/net/tipc/node.c
+@@ -567,7 +567,7 @@ update:
+ 				 n->capabilities, &n->bc_entry.inputq1,
+ 				 &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) {
+ 		pr_warn("Broadcast rcv link creation failed, no memory\n");
+-		kfree(n);
++		tipc_node_put(n);
+ 		n = NULL;
+ 		goto exit;
+ 	}
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 2fe0efcbfed16..3aa783a23c5f6 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -697,7 +697,7 @@ static int unix_set_peek_off(struct sock *sk, int val)
+ 	if (mutex_lock_interruptible(&u->iolock))
+ 		return -EINTR;
+ 
+-	sk->sk_peek_off = val;
++	WRITE_ONCE(sk->sk_peek_off, val);
+ 	mutex_unlock(&u->iolock);
+ 
+ 	return 0;
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 671c7f83d5fc3..f59691936e5b8 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -641,7 +641,7 @@ static int cfg80211_parse_colocated_ap(const struct cfg80211_bss_ies *ies,
+ 
+ 	ret = cfg80211_calc_short_ssid(ies, &ssid_elem, &s_ssid_tmp);
+ 	if (ret)
+-		return ret;
++		return 0;
+ 
+ 	/* RNR IE may contain more than one NEIGHBOR_AP_INFO */
+ 	while (pos + sizeof(*ap_info) <= end) {
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 6bfc7e28515a6..db8593d794315 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9015,6 +9015,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8811, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ 	SND_PCI_QUIRK(0x103c, 0x8812, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
++	SND_PCI_QUIRK(0x103c, 0x881d, "HP 250 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+diff --git a/sound/soc/codecs/cs42l51-i2c.c b/sound/soc/codecs/cs42l51-i2c.c
+index 70260e0a8f095..3ff73367897d8 100644
+--- a/sound/soc/codecs/cs42l51-i2c.c
++++ b/sound/soc/codecs/cs42l51-i2c.c
+@@ -19,6 +19,12 @@ static struct i2c_device_id cs42l51_i2c_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, cs42l51_i2c_id);
+ 
++const struct of_device_id cs42l51_of_match[] = {
++	{ .compatible = "cirrus,cs42l51", },
++	{ }
++};
++MODULE_DEVICE_TABLE(of, cs42l51_of_match);
++
+ static int cs42l51_i2c_probe(struct i2c_client *i2c,
+ 			     const struct i2c_device_id *id)
+ {
+diff --git a/sound/soc/codecs/cs42l51.c b/sound/soc/codecs/cs42l51.c
+index c61b17dc2af87..4b026e1c3fe3e 100644
+--- a/sound/soc/codecs/cs42l51.c
++++ b/sound/soc/codecs/cs42l51.c
+@@ -825,13 +825,6 @@ int __maybe_unused cs42l51_resume(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(cs42l51_resume);
+ 
+-const struct of_device_id cs42l51_of_match[] = {
+-	{ .compatible = "cirrus,cs42l51", },
+-	{ }
+-};
+-MODULE_DEVICE_TABLE(of, cs42l51_of_match);
+-EXPORT_SYMBOL_GPL(cs42l51_of_match);
+-
+ MODULE_AUTHOR("Arnaud Patard <arnaud.patard@rtp-net.org>");
+ MODULE_DESCRIPTION("Cirrus Logic CS42L51 ALSA SoC Codec Driver");
+ MODULE_LICENSE("GPL");
+diff --git a/sound/soc/codecs/cs42l51.h b/sound/soc/codecs/cs42l51.h
+index 9d06cf7f88768..4f13c38484b7f 100644
+--- a/sound/soc/codecs/cs42l51.h
++++ b/sound/soc/codecs/cs42l51.h
+@@ -16,7 +16,6 @@ int cs42l51_probe(struct device *dev, struct regmap *regmap);
+ int cs42l51_remove(struct device *dev);
+ int __maybe_unused cs42l51_suspend(struct device *dev);
+ int __maybe_unused cs42l51_resume(struct device *dev);
+-extern const struct of_device_id cs42l51_of_match[];
+ 
+ #define CS42L51_CHIP_ID			0x1B
+ #define CS42L51_CHIP_REV_A		0x00
+diff --git a/sound/soc/codecs/wm8904.c b/sound/soc/codecs/wm8904.c
+index cc96c9bdff41f..c90e776f7a547 100644
+--- a/sound/soc/codecs/wm8904.c
++++ b/sound/soc/codecs/wm8904.c
+@@ -2306,6 +2306,9 @@ static int wm8904_i2c_probe(struct i2c_client *i2c,
+ 	regmap_update_bits(wm8904->regmap, WM8904_BIAS_CONTROL_0,
+ 			    WM8904_POBCTRL, 0);
+ 
++	/* Fill the cache for the ADC test register */
++	regmap_read(wm8904->regmap, WM8904_ADC_TEST_0, &val);
++
+ 	/* Can leave the device powered off until we need it */
+ 	regcache_cache_only(wm8904->regmap, true);
+ 	regulator_bulk_disable(ARRAY_SIZE(wm8904->supplies), wm8904->supplies);
+diff --git a/sound/soc/fsl/fsl_spdif.c b/sound/soc/fsl/fsl_spdif.c
+index d01e8d516df1f..64b85b786bf64 100644
+--- a/sound/soc/fsl/fsl_spdif.c
++++ b/sound/soc/fsl/fsl_spdif.c
+@@ -612,6 +612,8 @@ static int fsl_spdif_trigger(struct snd_pcm_substream *substream,
+ 	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ 		regmap_update_bits(regmap, REG_SPDIF_SCR, dmaen, 0);
+ 		regmap_update_bits(regmap, REG_SPDIF_SIE, intr, 0);
++		regmap_write(regmap, REG_SPDIF_STL, 0x0);
++		regmap_write(regmap, REG_SPDIF_STR, 0x0);
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/tools/perf/tests/shell/test_uprobe_from_different_cu.sh b/tools/perf/tests/shell/test_uprobe_from_different_cu.sh
+index 00d2e0e2e0c28..319f36ebb9a40 100644
+--- a/tools/perf/tests/shell/test_uprobe_from_different_cu.sh
++++ b/tools/perf/tests/shell/test_uprobe_from_different_cu.sh
+@@ -4,6 +4,12 @@
+ 
+ set -e
+ 
++# skip if there's no gcc
++if ! [ -x "$(command -v gcc)" ]; then
++        echo "failed: no gcc compiler"
++        exit 2
++fi
++
+ temp_dir=$(mktemp -d /tmp/perf-uprobe-different-cu-sh.XXXXXXXXXX)
+ 
+ cleanup()
+@@ -11,7 +17,7 @@ cleanup()
+ 	trap - EXIT TERM INT
+ 	if [[ "${temp_dir}" =~ ^/tmp/perf-uprobe-different-cu-sh.*$ ]]; then
+ 		echo "--- Cleaning up ---"
+-		perf probe -x ${temp_dir}/testfile -d foo
++		perf probe -x ${temp_dir}/testfile -d foo || true
+ 		rm -f "${temp_dir}/"*
+ 		rmdir "${temp_dir}"
+ 	fi
+diff --git a/tools/testing/selftests/net/mptcp/config b/tools/testing/selftests/net/mptcp/config
+index 1a4c11a444d95..8867c40258b5a 100644
+--- a/tools/testing/selftests/net/mptcp/config
++++ b/tools/testing/selftests/net/mptcp/config
+@@ -6,3 +6,4 @@ CONFIG_INET_DIAG=m
+ CONFIG_INET_MPTCP_DIAG=m
+ CONFIG_VETH=y
+ CONFIG_NET_SCH_NETEM=m
++CONFIG_SYN_COOKIES=y
+diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c
+index 986b9458efb26..b736a5169aad0 100644
+--- a/tools/testing/selftests/rseq/rseq.c
++++ b/tools/testing/selftests/rseq/rseq.c
+@@ -32,9 +32,17 @@
+ #include "../kselftest.h"
+ #include "rseq.h"
+ 
+-static const ptrdiff_t *libc_rseq_offset_p;
+-static const unsigned int *libc_rseq_size_p;
+-static const unsigned int *libc_rseq_flags_p;
++/*
++ * Define weak versions to play nice with binaries that are statically linked
++ * against a libc that doesn't support registering its own rseq.
++ */
++__weak ptrdiff_t __rseq_offset;
++__weak unsigned int __rseq_size;
++__weak unsigned int __rseq_flags;
++
++static const ptrdiff_t *libc_rseq_offset_p = &__rseq_offset;
++static const unsigned int *libc_rseq_size_p = &__rseq_size;
++static const unsigned int *libc_rseq_flags_p = &__rseq_flags;
+ 
+ /* Offset from the thread pointer to the rseq area.  */
+ ptrdiff_t rseq_offset;
+@@ -108,10 +116,19 @@ int rseq_unregister_current_thread(void)
+ static __attribute__((constructor))
+ void rseq_init(void)
+ {
+-	libc_rseq_offset_p = dlsym(RTLD_NEXT, "__rseq_offset");
+-	libc_rseq_size_p = dlsym(RTLD_NEXT, "__rseq_size");
+-	libc_rseq_flags_p = dlsym(RTLD_NEXT, "__rseq_flags");
+-	if (libc_rseq_size_p && libc_rseq_offset_p && libc_rseq_flags_p) {
++	/*
++	 * If the libc's registered rseq size isn't already valid, it may be
++	 * because the binary is dynamically linked and not necessarily due to
++	 * libc not having registered a restartable sequence.  Try to find the
++	 * symbols if that's the case.
++	 */
++	if (!*libc_rseq_size_p) {
++		libc_rseq_offset_p = dlsym(RTLD_NEXT, "__rseq_offset");
++		libc_rseq_size_p = dlsym(RTLD_NEXT, "__rseq_size");
++		libc_rseq_flags_p = dlsym(RTLD_NEXT, "__rseq_flags");
++	}
++	if (libc_rseq_size_p && libc_rseq_offset_p && libc_rseq_flags_p &&
++			*libc_rseq_size_p != 0) {
+ 		/* rseq registration owned by glibc */
+ 		rseq_offset = *libc_rseq_offset_p;
+ 		rseq_size = *libc_rseq_size_p;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-08-16 17:01 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-08-16 17:01 UTC (permalink / raw
  To: gentoo-commits

commit:     ceebc7d6e191b6efa272e7d98f7fe0b007999b8b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 16 17:01:07 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 16 17:01:07 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ceebc7d6

Linux patch 5.10.191

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1190_linux-5.10.191.patch | 2939 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2943 insertions(+)

diff --git a/0000_README b/0000_README
index 573656a0..b923fd86 100644
--- a/0000_README
+++ b/0000_README
@@ -803,6 +803,10 @@ Patch:  1189_linux-5.10.190.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.190
 
+Patch:  1190_linux-5.10.191.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.191
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1190_linux-5.10.191.patch b/1190_linux-5.10.191.patch
new file mode 100644
index 00000000..b64c9e4a
--- /dev/null
+++ b/1190_linux-5.10.191.patch
@@ -0,0 +1,2939 @@
+diff --git a/Makefile b/Makefile
+index bd2f457703634..ecf9ab05e13a2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 190
++SUBLEVEL = 191
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/alpha/kernel/setup.c b/arch/alpha/kernel/setup.c
+index 7eea28d6f6c88..490a161112968 100644
+--- a/arch/alpha/kernel/setup.c
++++ b/arch/alpha/kernel/setup.c
+@@ -394,8 +394,7 @@ setup_memory(void *kernel_end)
+ extern void setup_memory(void *);
+ #endif /* !CONFIG_DISCONTIGMEM */
+ 
+-int __init
+-page_is_ram(unsigned long pfn)
++int page_is_ram(unsigned long pfn)
+ {
+ 	struct memclust_struct * cluster;
+ 	struct memdesc_struct * memdesc;
+diff --git a/arch/riscv/include/asm/mmio.h b/arch/riscv/include/asm/mmio.h
+index aff6c33ab0c08..4c58ee7f95ecf 100644
+--- a/arch/riscv/include/asm/mmio.h
++++ b/arch/riscv/include/asm/mmio.h
+@@ -101,9 +101,9 @@ static inline u64 __raw_readq(const volatile void __iomem *addr)
+  * Relaxed I/O memory access primitives. These follow the Device memory
+  * ordering rules but do not guarantee any ordering relative to Normal memory
+  * accesses.  These are defined to order the indicated access (either a read or
+- * write) with all other I/O memory accesses. Since the platform specification
+- * defines that all I/O regions are strongly ordered on channel 2, no explicit
+- * fences are required to enforce this ordering.
++ * write) with all other I/O memory accesses to the same peripheral. Since the
++ * platform specification defines that all I/O regions are strongly ordered on
++ * channel 0, no explicit fences are required to enforce this ordering.
+  */
+ /* FIXME: These are now the same as asm-generic */
+ #define __io_rbr()		do {} while (0)
+@@ -125,14 +125,14 @@ static inline u64 __raw_readq(const volatile void __iomem *addr)
+ #endif
+ 
+ /*
+- * I/O memory access primitives. Reads are ordered relative to any
+- * following Normal memory access. Writes are ordered relative to any prior
+- * Normal memory access.  The memory barriers here are necessary as RISC-V
++ * I/O memory access primitives.  Reads are ordered relative to any following
++ * Normal memory read and delay() loop.  Writes are ordered relative to any
++ * prior Normal memory write.  The memory barriers here are necessary as RISC-V
+  * doesn't define any ordering between the memory space and the I/O space.
+  */
+ #define __io_br()	do {} while (0)
+-#define __io_ar(v)	__asm__ __volatile__ ("fence i,r" : : : "memory")
+-#define __io_bw()	__asm__ __volatile__ ("fence w,o" : : : "memory")
++#define __io_ar(v)	({ __asm__ __volatile__ ("fence i,ir" : : : "memory"); })
++#define __io_bw()	({ __asm__ __volatile__ ("fence w,o" : : : "memory"); })
+ #define __io_aw()	mmiowb_set_pending()
+ 
+ #define readb(c)	({ u8  __v; __io_br(); __v = readb_cpu(c); __io_ar(__v); __v; })
+diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
+index 5876289e48d89..2b5e04cc45e41 100644
+--- a/arch/x86/entry/vdso/vma.c
++++ b/arch/x86/entry/vdso/vma.c
+@@ -339,8 +339,8 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
+ 
+ 	/* Round the lowest possible end address up to a PMD boundary. */
+ 	end = (start + len + PMD_SIZE - 1) & PMD_MASK;
+-	if (end >= TASK_SIZE_MAX)
+-		end = TASK_SIZE_MAX;
++	if (end >= DEFAULT_MAP_WINDOW)
++		end = DEFAULT_MAP_WINDOW;
+ 	end -= len;
+ 
+ 	if (end > start) {
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 2dd9b661a5fd5..d7e017b0b4c3b 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -864,4 +864,6 @@ enum mds_mitigations {
+ 	MDS_MITIGATION_VMWERV,
+ };
+ 
++extern bool gds_ucode_mitigated(void);
++
+ #endif /* _ASM_X86_PROCESSOR_H */
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index da38765aa74c4..7f351093cd947 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -74,6 +74,7 @@ static const int amd_erratum_1054[] =
+ static const int amd_zenbleed[] =
+ 	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0x30, 0x0, 0x4f, 0xf),
+ 			   AMD_MODEL_RANGE(0x17, 0x60, 0x0, 0x7f, 0xf),
++			   AMD_MODEL_RANGE(0x17, 0x90, 0x0, 0x91, 0xf),
+ 			   AMD_MODEL_RANGE(0x17, 0xa0, 0x0, 0xaf, 0xf));
+ 
+ static const int amd_div0[] =
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index c1f2360309120..c1ff75ad11358 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -472,8 +472,6 @@ static bool pku_disabled;
+ 
+ static __always_inline void setup_pku(struct cpuinfo_x86 *c)
+ {
+-	struct pkru_state *pk;
+-
+ 	/* check the boot processor, plus compile options for PKU: */
+ 	if (!cpu_feature_enabled(X86_FEATURE_PKU))
+ 		return;
+@@ -484,9 +482,6 @@ static __always_inline void setup_pku(struct cpuinfo_x86 *c)
+ 		return;
+ 
+ 	cr4_set_bits(X86_CR4_PKE);
+-	pk = get_xsave_addr(&init_fpstate.xsave, XFEATURE_PKRU);
+-	if (pk)
+-		pk->pkru = init_pkru_value;
+ 	/*
+ 	 * Seting X86_CR4_PKE will cause the X86_FEATURE_OSPKE
+ 	 * cpuid bit to be set.  We need to ensure that we
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 4955cb5cc0016..72ba175cb9d4c 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -524,11 +524,17 @@ INIT_PER_CPU(irq_stack_backing_store);
+ 
+ #ifdef CONFIG_CPU_SRSO
+ /*
+- * GNU ld cannot do XOR so do: (A | B) - (A & B) in order to compute the XOR
++ * GNU ld cannot do XOR until 2.41.
++ * https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=f6f78318fca803c4907fb8d7f6ded8295f1947b1
++ *
++ * LLVM lld cannot do XOR until lld-17.
++ * https://github.com/llvm/llvm-project/commit/fae96104d4378166cbe5c875ef8ed808a356f3fb
++ *
++ * Instead do: (A | B) - (A & B) in order to compute the XOR
+  * of the two function addresses:
+  */
+-. = ASSERT(((srso_untrain_ret_alias | srso_safe_ret_alias) -
+-		(srso_untrain_ret_alias & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
++. = ASSERT(((ABSOLUTE(srso_untrain_ret_alias) | srso_safe_ret_alias) -
++		(ABSOLUTE(srso_untrain_ret_alias) & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
+ 		"SRSO function pair won't alias");
+ #endif
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index cf47392005663..9d3015863e581 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -255,8 +255,6 @@ static struct kmem_cache *x86_fpu_cache;
+ 
+ static struct kmem_cache *x86_emulator_cache;
+ 
+-extern bool gds_ucode_mitigated(void);
+-
+ /*
+  * When called, it means the previous get/set msr reached an invalid msr.
+  * Return true if we want to ignore/silent this failed msr access.
+diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
+index 8873ed1438a97..379c396127935 100644
+--- a/arch/x86/mm/pkeys.c
++++ b/arch/x86/mm/pkeys.c
+@@ -10,7 +10,6 @@
+ 
+ #include <asm/cpufeature.h>             /* boot_cpu_has, ...            */
+ #include <asm/mmu_context.h>            /* vma_pkey()                   */
+-#include <asm/fpu/internal.h>		/* init_fpstate			*/
+ 
+ int __execute_only_pkey(struct mm_struct *mm)
+ {
+@@ -154,7 +153,6 @@ static ssize_t init_pkru_read_file(struct file *file, char __user *user_buf,
+ static ssize_t init_pkru_write_file(struct file *file,
+ 		 const char __user *user_buf, size_t count, loff_t *ppos)
+ {
+-	struct pkru_state *pk;
+ 	char buf[32];
+ 	ssize_t len;
+ 	u32 new_init_pkru;
+@@ -177,10 +175,6 @@ static ssize_t init_pkru_write_file(struct file *file,
+ 		return -EINVAL;
+ 
+ 	WRITE_ONCE(init_pkru_value, new_init_pkru);
+-	pk = get_xsave_addr(&init_fpstate.xsave, XFEATURE_PKRU);
+-	if (!pk)
+-		return -EINVAL;
+-	pk->pkru = new_init_pkru;
+ 	return count;
+ }
+ 
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index dbae98f096580..2aaccb78235b5 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -6541,6 +6541,7 @@ err_init_binder_device_failed:
+ 
+ err_alloc_device_names_failed:
+ 	debugfs_remove_recursive(binder_debugfs_dir_entry_root);
++	binder_alloc_shrinker_exit();
+ 
+ 	return ret;
+ }
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index a77ed66425f27..b6bf9caaf1d1e 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -1086,6 +1086,12 @@ int binder_alloc_shrinker_init(void)
+ 	return ret;
+ }
+ 
++void binder_alloc_shrinker_exit(void)
++{
++	unregister_shrinker(&binder_shrinker);
++	list_lru_destroy(&binder_alloc_lru);
++}
++
+ /**
+  * check_buffer() - verify that buffer/offset is safe to access
+  * @alloc: binder_alloc for this proc
+diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
+index 6e8e001381af4..f6052c97bce52 100644
+--- a/drivers/android/binder_alloc.h
++++ b/drivers/android/binder_alloc.h
+@@ -125,6 +125,7 @@ extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
+ 						  int pid);
+ extern void binder_alloc_init(struct binder_alloc *alloc);
+ extern int binder_alloc_shrinker_init(void);
++extern void binder_alloc_shrinker_exit(void);
+ extern void binder_alloc_vma_close(struct binder_alloc *alloc);
+ extern struct binder_buffer *
+ binder_alloc_prepare_to_free(struct binder_alloc *alloc,
+diff --git a/drivers/dma/mcf-edma.c b/drivers/dma/mcf-edma.c
+index e12b754e6398d..60d3c5f09ad67 100644
+--- a/drivers/dma/mcf-edma.c
++++ b/drivers/dma/mcf-edma.c
+@@ -191,7 +191,13 @@ static int mcf_edma_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	chans = pdata->dma_channels;
++	if (!pdata->dma_channels) {
++		dev_info(&pdev->dev, "setting default channel number to 64");
++		chans = 64;
++	} else {
++		chans = pdata->dma_channels;
++	}
++
+ 	len = sizeof(*mcf_edma) + sizeof(*mcf_chan) * chans;
+ 	mcf_edma = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
+ 	if (!mcf_edma)
+@@ -203,11 +209,6 @@ static int mcf_edma_probe(struct platform_device *pdev)
+ 	mcf_edma->drvdata = &mcf_data;
+ 	mcf_edma->big_endian = 1;
+ 
+-	if (!mcf_edma->n_chans) {
+-		dev_info(&pdev->dev, "setting default channel number to 64");
+-		mcf_edma->n_chans = 64;
+-	}
+-
+ 	mutex_init(&mcf_edma->fsl_edma_mutex);
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index 6f697b3f2c184..627aa9e9bc12a 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -403,6 +403,12 @@ enum desc_status {
+ 	 * of a channel can be BUSY at any time.
+ 	 */
+ 	BUSY,
++	/*
++	 * Pause was called while descriptor was BUSY. Due to hardware
++	 * limitations, only termination is possible for descriptors
++	 * that have been paused.
++	 */
++	PAUSED,
+ 	/*
+ 	 * Sitting on the channel work_list but xfer done
+ 	 * by PL330 core
+@@ -2043,7 +2049,7 @@ static inline void fill_queue(struct dma_pl330_chan *pch)
+ 	list_for_each_entry(desc, &pch->work_list, node) {
+ 
+ 		/* If already submitted */
+-		if (desc->status == BUSY)
++		if (desc->status == BUSY || desc->status == PAUSED)
+ 			continue;
+ 
+ 		ret = pl330_submit_req(pch->thread, desc);
+@@ -2328,6 +2334,7 @@ static int pl330_pause(struct dma_chan *chan)
+ {
+ 	struct dma_pl330_chan *pch = to_pchan(chan);
+ 	struct pl330_dmac *pl330 = pch->dmac;
++	struct dma_pl330_desc *desc;
+ 	unsigned long flags;
+ 
+ 	pm_runtime_get_sync(pl330->ddma.dev);
+@@ -2337,6 +2344,10 @@ static int pl330_pause(struct dma_chan *chan)
+ 	_stop(pch->thread);
+ 	spin_unlock(&pl330->lock);
+ 
++	list_for_each_entry(desc, &pch->work_list, node) {
++		if (desc->status == BUSY)
++			desc->status = PAUSED;
++	}
+ 	spin_unlock_irqrestore(&pch->lock, flags);
+ 	pm_runtime_mark_last_busy(pl330->ddma.dev);
+ 	pm_runtime_put_autosuspend(pl330->ddma.dev);
+@@ -2427,7 +2438,7 @@ pl330_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ 		else if (running && desc == running)
+ 			transferred =
+ 				pl330_get_current_xferred_count(pch, desc);
+-		else if (desc->status == BUSY)
++		else if (desc->status == BUSY || desc->status == PAUSED)
+ 			/*
+ 			 * Busy but not running means either just enqueued,
+ 			 * or finished and not yet marked done
+@@ -2444,6 +2455,9 @@ pl330_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ 			case DONE:
+ 				ret = DMA_COMPLETE;
+ 				break;
++			case PAUSED:
++				ret = DMA_PAUSED;
++				break;
+ 			case PREP:
+ 			case BUSY:
+ 				ret = DMA_IN_PROGRESS;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c
+index 0eb881f2e0d61..8e729ef85b13e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c
+@@ -354,8 +354,11 @@ void dpp3_set_cursor_attributes(
+ 	int cur_rom_en = 0;
+ 
+ 	if (color_format == CURSOR_MODE_COLOR_PRE_MULTIPLIED_ALPHA ||
+-		color_format == CURSOR_MODE_COLOR_UN_PRE_MULTIPLIED_ALPHA)
+-		cur_rom_en = 1;
++		color_format == CURSOR_MODE_COLOR_UN_PRE_MULTIPLIED_ALPHA) {
++		if (cursor_attributes->attribute_flags.bits.ENABLE_CURSOR_DEGAMMA) {
++			cur_rom_en = 1;
++		}
++	}
+ 
+ 	REG_UPDATE_3(CURSOR0_CONTROL,
+ 			CUR0_MODE, color_format,
+diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
+index b7bb5610dfe21..e8f07305e279a 100644
+--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
++++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
+@@ -614,7 +614,13 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+ 	int ret;
+ 
+ 	if (obj->import_attach) {
++		/* Reset both vm_ops and vm_private_data, so we don't end up with
++		 * vm_ops pointing to our implementation if the dma-buf backend
++		 * doesn't set those fields.
++		 */
+ 		vma->vm_private_data = NULL;
++		vma->vm_ops = NULL;
++
+ 		ret = dma_buf_mmap(obj->dma_buf, vma, 0);
+ 
+ 		/* Drop the reference drm_gem_mmap_obj() acquired.*/
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index b8884272d65e0..d2cefca120c4b 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -947,7 +947,7 @@ nouveau_connector_get_modes(struct drm_connector *connector)
+ 	/* Determine display colour depth for everything except LVDS now,
+ 	 * DP requires this before mode_valid() is called.
+ 	 */
+-	if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS && nv_connector->native_mode)
++	if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS)
+ 		nouveau_connector_detect_depth(connector);
+ 
+ 	/* Find the native mode if this is a digital panel, if we didn't
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgf100.h b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgf100.h
+index 32bbddc0993e8..679aff79f4d6b 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgf100.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgf100.h
+@@ -123,6 +123,7 @@ void gk104_grctx_generate_r418800(struct gf100_gr *);
+ 
+ extern const struct gf100_grctx_func gk110_grctx;
+ void gk110_grctx_generate_r419eb0(struct gf100_gr *);
++void gk110_grctx_generate_r419f78(struct gf100_gr *);
+ 
+ extern const struct gf100_grctx_func gk110b_grctx;
+ extern const struct gf100_grctx_func gk208_grctx;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk104.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk104.c
+index 304e9d268bad4..f894f82548242 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk104.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk104.c
+@@ -916,7 +916,9 @@ static void
+ gk104_grctx_generate_r419f78(struct gf100_gr *gr)
+ {
+ 	struct nvkm_device *device = gr->base.engine.subdev.device;
+-	nvkm_mask(device, 0x419f78, 0x00000001, 0x00000000);
++
++	/* bit 3 set disables loads in fp helper invocations, we need it enabled */
++	nvkm_mask(device, 0x419f78, 0x00000009, 0x00000000);
+ }
+ 
+ void
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk110.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk110.c
+index 86547cfc38dce..e88740d4e54d4 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk110.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk110.c
+@@ -820,6 +820,15 @@ gk110_grctx_generate_r419eb0(struct gf100_gr *gr)
+ 	nvkm_mask(device, 0x419eb0, 0x00001000, 0x00001000);
+ }
+ 
++void
++gk110_grctx_generate_r419f78(struct gf100_gr *gr)
++{
++	struct nvkm_device *device = gr->base.engine.subdev.device;
++
++	/* bit 3 set disables loads in fp helper invocations, we need it enabled */
++	nvkm_mask(device, 0x419f78, 0x00000008, 0x00000000);
++}
++
+ const struct gf100_grctx_func
+ gk110_grctx = {
+ 	.main  = gf100_grctx_generate_main,
+@@ -852,4 +861,5 @@ gk110_grctx = {
+ 	.gpc_tpc_nr = gk104_grctx_generate_gpc_tpc_nr,
+ 	.r418800 = gk104_grctx_generate_r418800,
+ 	.r419eb0 = gk110_grctx_generate_r419eb0,
++	.r419f78 = gk110_grctx_generate_r419f78,
+ };
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk110b.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk110b.c
+index ebb947bd1446b..086e4d49e1121 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk110b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk110b.c
+@@ -101,4 +101,5 @@ gk110b_grctx = {
+ 	.gpc_tpc_nr = gk104_grctx_generate_gpc_tpc_nr,
+ 	.r418800 = gk104_grctx_generate_r418800,
+ 	.r419eb0 = gk110_grctx_generate_r419eb0,
++	.r419f78 = gk110_grctx_generate_r419f78,
+ };
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk208.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk208.c
+index 4d40512b5c998..0bf438c3f7cbc 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk208.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk208.c
+@@ -566,4 +566,5 @@ gk208_grctx = {
+ 	.dist_skip_table = gf117_grctx_generate_dist_skip_table,
+ 	.gpc_tpc_nr = gk104_grctx_generate_gpc_tpc_nr,
+ 	.r418800 = gk104_grctx_generate_r418800,
++	.r419f78 = gk110_grctx_generate_r419f78,
+ };
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm107.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm107.c
+index 0b3964e6b36e2..acdf0932a99e1 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm107.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm107.c
+@@ -991,4 +991,5 @@ gm107_grctx = {
+ 	.r406500 = gm107_grctx_generate_r406500,
+ 	.gpc_tpc_nr = gk104_grctx_generate_gpc_tpc_nr,
+ 	.r419e00 = gm107_grctx_generate_r419e00,
++	.r419f78 = gk110_grctx_generate_r419f78,
+ };
+diff --git a/drivers/hwmon/pmbus/bel-pfe.c b/drivers/hwmon/pmbus/bel-pfe.c
+index 2c5b853d6c7fc..fc3d9a9159f2e 100644
+--- a/drivers/hwmon/pmbus/bel-pfe.c
++++ b/drivers/hwmon/pmbus/bel-pfe.c
+@@ -17,12 +17,13 @@
+ enum chips {pfe1100, pfe3000};
+ 
+ /*
+- * Disable status check for pfe3000 devices, because some devices report
+- * communication error (invalid command) for VOUT_MODE command (0x20)
+- * although correct VOUT_MODE (0x16) is returned: it leads to incorrect
+- * exponent in linear mode.
++ * Disable status check because some devices report communication error
++ * (invalid command) for VOUT_MODE command (0x20) although the correct
++ * VOUT_MODE (0x16) is returned: it leads to incorrect exponent in linear
++ * mode.
++ * This affects both pfe3000 and pfe1100.
+  */
+-static struct pmbus_platform_data pfe3000_plat_data = {
++static struct pmbus_platform_data pfe_plat_data = {
+ 	.flags = PMBUS_SKIP_STATUS_CHECK,
+ };
+ 
+@@ -94,16 +95,15 @@ static int pfe_pmbus_probe(struct i2c_client *client)
+ 	int model;
+ 
+ 	model = (int)i2c_match_id(pfe_device_id, client)->driver_data;
++	client->dev.platform_data = &pfe_plat_data;
+ 
+ 	/*
+ 	 * PFE3000-12-069RA devices may not stay in page 0 during device
+ 	 * probe which leads to probe failure (read status word failed).
+ 	 * So let's set the device to page 0 at the beginning.
+ 	 */
+-	if (model == pfe3000) {
+-		client->dev.platform_data = &pfe3000_plat_data;
++	if (model == pfe3000)
+ 		i2c_smbus_write_byte_data(client, PMBUS_PAGE, 0);
+-	}
+ 
+ 	return pmbus_do_probe(client, &pfe_driver_info[model]);
+ }
+diff --git a/drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c b/drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c
+index e3f507771f17e..829077bcd09d6 100644
+--- a/drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c
++++ b/drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c
+@@ -263,7 +263,7 @@ int cros_ec_sensors_core_init(struct platform_device *pdev,
+ 	platform_set_drvdata(pdev, indio_dev);
+ 
+ 	state->ec = ec->ec_dev;
+-	state->msg = devm_kzalloc(&pdev->dev,
++	state->msg = devm_kzalloc(&pdev->dev, sizeof(*state->msg) +
+ 				max((u16)sizeof(struct ec_params_motion_sense),
+ 				state->ec->max_response), GFP_KERNEL);
+ 	if (!state->msg)
+diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
+index 4b41f35668b20..c74868f016379 100644
+--- a/drivers/infiniband/hw/hfi1/chip.c
++++ b/drivers/infiniband/hw/hfi1/chip.c
+@@ -12348,6 +12348,7 @@ static void free_cntrs(struct hfi1_devdata *dd)
+ 
+ 	if (dd->synth_stats_timer.function)
+ 		del_timer_sync(&dd->synth_stats_timer);
++	cancel_work_sync(&dd->update_cntr_work);
+ 	ppd = (struct hfi1_pportdata *)(dd + 1);
+ 	for (i = 0; i < dd->num_pports; i++, ppd++) {
+ 		kfree(ppd->cntrs);
+diff --git a/drivers/isdn/mISDN/dsp.h b/drivers/isdn/mISDN/dsp.h
+index fa09d511a8eda..baf31258f5c90 100644
+--- a/drivers/isdn/mISDN/dsp.h
++++ b/drivers/isdn/mISDN/dsp.h
+@@ -247,7 +247,7 @@ extern void dsp_cmx_hardware(struct dsp_conf *conf, struct dsp *dsp);
+ extern int dsp_cmx_conf(struct dsp *dsp, u32 conf_id);
+ extern void dsp_cmx_receive(struct dsp *dsp, struct sk_buff *skb);
+ extern void dsp_cmx_hdlc(struct dsp *dsp, struct sk_buff *skb);
+-extern void dsp_cmx_send(void *arg);
++extern void dsp_cmx_send(struct timer_list *arg);
+ extern void dsp_cmx_transmit(struct dsp *dsp, struct sk_buff *skb);
+ extern int dsp_cmx_del_conf_member(struct dsp *dsp);
+ extern int dsp_cmx_del_conf(struct dsp_conf *conf);
+diff --git a/drivers/isdn/mISDN/dsp_cmx.c b/drivers/isdn/mISDN/dsp_cmx.c
+index 6d2088fbaf69c..1b73af5013976 100644
+--- a/drivers/isdn/mISDN/dsp_cmx.c
++++ b/drivers/isdn/mISDN/dsp_cmx.c
+@@ -1625,7 +1625,7 @@ static u16	dsp_count; /* last sample count */
+ static int	dsp_count_valid; /* if we have last sample count */
+ 
+ void
+-dsp_cmx_send(void *arg)
++dsp_cmx_send(struct timer_list *arg)
+ {
+ 	struct dsp_conf *conf;
+ 	struct dsp_conf_member *member;
+diff --git a/drivers/isdn/mISDN/dsp_core.c b/drivers/isdn/mISDN/dsp_core.c
+index 038e72a84b334..5b954012e3948 100644
+--- a/drivers/isdn/mISDN/dsp_core.c
++++ b/drivers/isdn/mISDN/dsp_core.c
+@@ -1200,7 +1200,7 @@ static int __init dsp_init(void)
+ 	}
+ 
+ 	/* set sample timer */
+-	timer_setup(&dsp_spl_tl, (void *)dsp_cmx_send, 0);
++	timer_setup(&dsp_spl_tl, dsp_cmx_send, 0);
+ 	dsp_spl_tl.expires = jiffies + dsp_tics;
+ 	dsp_spl_jiffies = dsp_spl_tl.expires;
+ 	add_timer(&dsp_spl_tl);
+diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
+index fb96bb76eefbe..c4baf8afbf8dc 100644
+--- a/drivers/mmc/host/moxart-mmc.c
++++ b/drivers/mmc/host/moxart-mmc.c
+@@ -339,13 +339,7 @@ static void moxart_transfer_pio(struct moxart_host *host)
+ 				return;
+ 			}
+ 			for (len = 0; len < remain && len < host->fifo_width;) {
+-				/* SCR data must be read in big endian. */
+-				if (data->mrq->cmd->opcode == SD_APP_SEND_SCR)
+-					*sgp = ioread32be(host->base +
+-							  REG_DATA_WINDOW);
+-				else
+-					*sgp = ioread32(host->base +
+-							REG_DATA_WINDOW);
++				*sgp = ioread32(host->base + REG_DATA_WINDOW);
+ 				sgp++;
+ 				len += 4;
+ 			}
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index a260740269e9f..bcb019121d835 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -4920,7 +4920,9 @@ void bond_setup(struct net_device *bond_dev)
+ 
+ 	bond_dev->hw_features = BOND_VLAN_FEATURES |
+ 				NETIF_F_HW_VLAN_CTAG_RX |
+-				NETIF_F_HW_VLAN_CTAG_FILTER;
++				NETIF_F_HW_VLAN_CTAG_FILTER |
++				NETIF_F_HW_VLAN_STAG_RX |
++				NETIF_F_HW_VLAN_STAG_FILTER;
+ 
+ 	bond_dev->hw_features |= NETIF_F_GSO_ENCAP_ALL | NETIF_F_GSO_UDP_L4;
+ #ifdef CONFIG_XFRM_OFFLOAD
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 1ec1709446bab..47f8f66cf7ecd 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -71,6 +71,8 @@ static int hclge_set_default_loopback(struct hclge_dev *hdev);
+ static void hclge_sync_mac_table(struct hclge_dev *hdev);
+ static void hclge_restore_hw_table(struct hclge_dev *hdev);
+ static void hclge_sync_promisc_mode(struct hclge_dev *hdev);
++static int hclge_mac_link_status_wait(struct hclge_dev *hdev, int link_ret,
++				      int wait_cnt);
+ 
+ static struct hnae3_ae_algo ae_algo;
+ 
+@@ -6558,6 +6560,8 @@ static void hclge_enable_fd(struct hnae3_handle *handle, bool enable)
+ 
+ static void hclge_cfg_mac_mode(struct hclge_dev *hdev, bool enable)
+ {
++#define HCLGE_LINK_STATUS_WAIT_CNT  3
++
+ 	struct hclge_desc desc;
+ 	struct hclge_config_mac_mode_cmd *req =
+ 		(struct hclge_config_mac_mode_cmd *)desc.data;
+@@ -6582,9 +6586,15 @@ static void hclge_cfg_mac_mode(struct hclge_dev *hdev, bool enable)
+ 	req->txrx_pad_fcs_loop_en = cpu_to_le32(loop_en);
+ 
+ 	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+-	if (ret)
++	if (ret) {
+ 		dev_err(&hdev->pdev->dev,
+ 			"mac enable fail, ret =%d.\n", ret);
++		return;
++	}
++
++	if (!enable)
++		hclge_mac_link_status_wait(hdev, HCLGE_LINK_STATUS_DOWN,
++					   HCLGE_LINK_STATUS_WAIT_CNT);
+ }
+ 
+ static int hclge_config_switch_param(struct hclge_dev *hdev, int vfid,
+@@ -6647,10 +6657,9 @@ static void hclge_phy_link_status_wait(struct hclge_dev *hdev,
+ 	} while (++i < HCLGE_PHY_LINK_STATUS_NUM);
+ }
+ 
+-static int hclge_mac_link_status_wait(struct hclge_dev *hdev, int link_ret)
++static int hclge_mac_link_status_wait(struct hclge_dev *hdev, int link_ret,
++				      int wait_cnt)
+ {
+-#define HCLGE_MAC_LINK_STATUS_NUM  100
+-
+ 	int link_status;
+ 	int i = 0;
+ 	int ret;
+@@ -6663,13 +6672,15 @@ static int hclge_mac_link_status_wait(struct hclge_dev *hdev, int link_ret)
+ 			return 0;
+ 
+ 		msleep(HCLGE_LINK_STATUS_MS);
+-	} while (++i < HCLGE_MAC_LINK_STATUS_NUM);
++	} while (++i < wait_cnt);
+ 	return -EBUSY;
+ }
+ 
+ static int hclge_mac_phy_link_status_wait(struct hclge_dev *hdev, bool en,
+ 					  bool is_phy)
+ {
++#define HCLGE_MAC_LINK_STATUS_NUM  100
++
+ 	int link_ret;
+ 
+ 	link_ret = en ? HCLGE_LINK_STATUS_UP : HCLGE_LINK_STATUS_DOWN;
+@@ -6677,7 +6688,8 @@ static int hclge_mac_phy_link_status_wait(struct hclge_dev *hdev, bool en,
+ 	if (is_phy)
+ 		hclge_phy_link_status_wait(hdev, link_ret);
+ 
+-	return hclge_mac_link_status_wait(hdev, link_ret);
++	return hclge_mac_link_status_wait(hdev, link_ret,
++					  HCLGE_MAC_LINK_STATUS_NUM);
+ }
+ 
+ static int hclge_set_app_loopback(struct hclge_dev *hdev, bool en)
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 7fe2e47dc83dc..84da6ccaf3395 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -929,12 +929,22 @@ static int ibmvnic_login(struct net_device *netdev)
+ 
+ static void release_login_buffer(struct ibmvnic_adapter *adapter)
+ {
++	if (!adapter->login_buf)
++		return;
++
++	dma_unmap_single(&adapter->vdev->dev, adapter->login_buf_token,
++			 adapter->login_buf_sz, DMA_TO_DEVICE);
+ 	kfree(adapter->login_buf);
+ 	adapter->login_buf = NULL;
+ }
+ 
+ static void release_login_rsp_buffer(struct ibmvnic_adapter *adapter)
+ {
++	if (!adapter->login_rsp_buf)
++		return;
++
++	dma_unmap_single(&adapter->vdev->dev, adapter->login_rsp_buf_token,
++			 adapter->login_rsp_buf_sz, DMA_FROM_DEVICE);
+ 	kfree(adapter->login_rsp_buf);
+ 	adapter->login_rsp_buf = NULL;
+ }
+@@ -3861,11 +3871,14 @@ static int send_login(struct ibmvnic_adapter *adapter)
+ 	if (rc) {
+ 		adapter->login_pending = false;
+ 		netdev_err(adapter->netdev, "Failed to send login, rc=%d\n", rc);
+-		goto buf_rsp_map_failed;
++		goto buf_send_failed;
+ 	}
+ 
+ 	return 0;
+ 
++buf_send_failed:
++	dma_unmap_single(dev, rsp_buffer_token, rsp_buffer_size,
++			 DMA_FROM_DEVICE);
+ buf_rsp_map_failed:
+ 	kfree(login_rsp_buffer);
+ 	adapter->login_rsp_buf = NULL;
+@@ -4430,6 +4443,7 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq,
+ 	int num_tx_pools;
+ 	int num_rx_pools;
+ 	u64 *size_array;
++	u32 rsp_len;
+ 	int i;
+ 
+ 	/* CHECK: Test/set of login_pending does not need to be atomic
+@@ -4441,11 +4455,6 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq,
+ 	}
+ 	adapter->login_pending = false;
+ 
+-	dma_unmap_single(dev, adapter->login_buf_token, adapter->login_buf_sz,
+-			 DMA_TO_DEVICE);
+-	dma_unmap_single(dev, adapter->login_rsp_buf_token,
+-			 adapter->login_rsp_buf_sz, DMA_FROM_DEVICE);
+-
+ 	/* If the number of queues requested can't be allocated by the
+ 	 * server, the login response will return with code 1. We will need
+ 	 * to resend the login buffer with fewer queues requested.
+@@ -4481,6 +4490,23 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq,
+ 		ibmvnic_reset(adapter, VNIC_RESET_FATAL);
+ 		return -EIO;
+ 	}
++
++	rsp_len = be32_to_cpu(login_rsp->len);
++	if (be32_to_cpu(login->login_rsp_len) < rsp_len ||
++	    rsp_len <= be32_to_cpu(login_rsp->off_txsubm_subcrqs) ||
++	    rsp_len <= be32_to_cpu(login_rsp->off_rxadd_subcrqs) ||
++	    rsp_len <= be32_to_cpu(login_rsp->off_rxadd_buff_size) ||
++	    rsp_len <= be32_to_cpu(login_rsp->off_supp_tx_desc)) {
++		/* This can happen if a login request times out and there are
++		 * 2 outstanding login requests sent, the LOGIN_RSP crq
++		 * could have been for the older login request. So we are
++		 * parsing the newer response buffer which may be incomplete
++		 */
++		dev_err(dev, "FATAL: Login rsp offsets/lengths invalid\n");
++		ibmvnic_reset(adapter, VNIC_RESET_FATAL);
++		return -EIO;
++	}
++
+ 	size_array = (u64 *)((u8 *)(adapter->login_rsp_buf) +
+ 		be32_to_cpu(adapter->login_rsp_buf->off_rxadd_buff_size));
+ 	/* variable buffer sizes are not supported, so just read the
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
+index 3094d20297a97..ee9287d7d1d30 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
+@@ -211,8 +211,7 @@ static u16 mlx5_get_max_vfs(struct mlx5_core_dev *dev)
+ 		host_total_vfs = MLX5_GET(query_esw_functions_out, out,
+ 					  host_params_context.host_total_vfs);
+ 		kvfree(out);
+-		if (host_total_vfs)
+-			return host_total_vfs;
++		return host_total_vfs;
+ 	}
+ 
+ done:
+diff --git a/drivers/net/phy/at803x.c b/drivers/net/phy/at803x.c
+index ed601a7e46a0c..ac373510b486e 100644
+--- a/drivers/net/phy/at803x.c
++++ b/drivers/net/phy/at803x.c
+@@ -1115,8 +1115,6 @@ static struct phy_driver at803x_driver[] = {
+ 	.flags			= PHY_POLL_CABLE_TEST,
+ 	.config_init		= at803x_config_init,
+ 	.link_change_notify	= at803x_link_change_notify,
+-	.set_wol		= at803x_set_wol,
+-	.get_wol		= at803x_get_wol,
+ 	.suspend		= at803x_suspend,
+ 	.resume			= at803x_resume,
+ 	/* PHY_BASIC_FEATURES */
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 0e70877c932c7..191bf0df14661 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1604,7 +1604,7 @@ static bool tun_can_build_skb(struct tun_struct *tun, struct tun_file *tfile,
+ 	if (zerocopy)
+ 		return false;
+ 
+-	if (SKB_DATA_ALIGN(len + TUN_RX_PAD) +
++	if (SKB_DATA_ALIGN(len + TUN_RX_PAD + XDP_PACKET_HEADROOM) +
+ 	    SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) > PAGE_SIZE)
+ 		return false;
+ 
+diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
+index 5bf7822c53f18..0ba714ca5185c 100644
+--- a/drivers/net/wireguard/allowedips.c
++++ b/drivers/net/wireguard/allowedips.c
+@@ -6,7 +6,7 @@
+ #include "allowedips.h"
+ #include "peer.h"
+ 
+-enum { MAX_ALLOWEDIPS_BITS = 128 };
++enum { MAX_ALLOWEDIPS_DEPTH = 129 };
+ 
+ static struct kmem_cache *node_cache;
+ 
+@@ -42,7 +42,7 @@ static void push_rcu(struct allowedips_node **stack,
+ 		     struct allowedips_node __rcu *p, unsigned int *len)
+ {
+ 	if (rcu_access_pointer(p)) {
+-		if (WARN_ON(IS_ENABLED(DEBUG) && *len >= MAX_ALLOWEDIPS_BITS))
++		if (WARN_ON(IS_ENABLED(DEBUG) && *len >= MAX_ALLOWEDIPS_DEPTH))
+ 			return;
+ 		stack[(*len)++] = rcu_dereference_raw(p);
+ 	}
+@@ -55,7 +55,7 @@ static void node_free_rcu(struct rcu_head *rcu)
+ 
+ static void root_free_rcu(struct rcu_head *rcu)
+ {
+-	struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_BITS] = {
++	struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_DEPTH] = {
+ 		container_of(rcu, struct allowedips_node, rcu) };
+ 	unsigned int len = 1;
+ 
+@@ -68,7 +68,7 @@ static void root_free_rcu(struct rcu_head *rcu)
+ 
+ static void root_remove_peer_lists(struct allowedips_node *root)
+ {
+-	struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_BITS] = { root };
++	struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_DEPTH] = { root };
+ 	unsigned int len = 1;
+ 
+ 	while (len > 0 && (node = stack[--len])) {
+diff --git a/drivers/net/wireguard/selftest/allowedips.c b/drivers/net/wireguard/selftest/allowedips.c
+index 41db10f9be498..2c9eec24eec45 100644
+--- a/drivers/net/wireguard/selftest/allowedips.c
++++ b/drivers/net/wireguard/selftest/allowedips.c
+@@ -593,16 +593,20 @@ bool __init wg_allowedips_selftest(void)
+ 	wg_allowedips_remove_by_peer(&t, a, &mutex);
+ 	test_negative(4, a, 192, 168, 0, 1);
+ 
+-	/* These will hit the WARN_ON(len >= MAX_ALLOWEDIPS_BITS) in free_node
++	/* These will hit the WARN_ON(len >= MAX_ALLOWEDIPS_DEPTH) in free_node
+ 	 * if something goes wrong.
+ 	 */
+-	for (i = 0; i < MAX_ALLOWEDIPS_BITS; ++i) {
+-		part = cpu_to_be64(~(1LLU << (i % 64)));
+-		memset(&ip, 0xff, 16);
+-		memcpy((u8 *)&ip + (i < 64) * 8, &part, 8);
++	for (i = 0; i < 64; ++i) {
++		part = cpu_to_be64(~0LLU << i);
++		memset(&ip, 0xff, 8);
++		memcpy((u8 *)&ip + 8, &part, 8);
++		wg_allowedips_insert_v6(&t, &ip, 128, a, &mutex);
++		memcpy(&ip, &part, 8);
++		memset((u8 *)&ip + 8, 0, 8);
+ 		wg_allowedips_insert_v6(&t, &ip, 128, a, &mutex);
+ 	}
+-
++	memset(&ip, 0, 16);
++	wg_allowedips_insert_v6(&t, &ip, 128, a, &mutex);
+ 	wg_allowedips_free(&t, &mutex);
+ 
+ 	wg_allowedips_init(&t);
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index b61924394032a..825c961c6fd50 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -989,6 +989,7 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
+ 		goto out_cleanup_connect_q;
+ 
+ 	if (!new) {
++		nvme_start_freeze(&ctrl->ctrl);
+ 		nvme_start_queues(&ctrl->ctrl);
+ 		if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
+ 			/*
+@@ -997,6 +998,7 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
+ 			 * to be safe.
+ 			 */
+ 			ret = -ENODEV;
++			nvme_unfreeze(&ctrl->ctrl);
+ 			goto out_wait_freeze_timed_out;
+ 		}
+ 		blk_mq_update_nr_hw_queues(ctrl->ctrl.tagset,
+@@ -1042,7 +1044,6 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
+ 		bool remove)
+ {
+ 	if (ctrl->ctrl.queue_count > 1) {
+-		nvme_start_freeze(&ctrl->ctrl);
+ 		nvme_stop_queues(&ctrl->ctrl);
+ 		nvme_sync_io_queues(&ctrl->ctrl);
+ 		nvme_rdma_stop_io_queues(ctrl);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index e6147a9220f9a..ea4d3170acae5 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1859,6 +1859,7 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
+ 		goto out_cleanup_connect_q;
+ 
+ 	if (!new) {
++		nvme_start_freeze(ctrl);
+ 		nvme_start_queues(ctrl);
+ 		if (!nvme_wait_freeze_timeout(ctrl, NVME_IO_TIMEOUT)) {
+ 			/*
+@@ -1867,6 +1868,7 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
+ 			 * to be safe.
+ 			 */
+ 			ret = -ENODEV;
++			nvme_unfreeze(ctrl);
+ 			goto out_wait_freeze_timed_out;
+ 		}
+ 		blk_mq_update_nr_hw_queues(ctrl->tagset,
+@@ -1989,7 +1991,6 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
+ 	if (ctrl->queue_count <= 1)
+ 		return;
+ 	blk_mq_quiesce_queue(ctrl->admin_q);
+-	nvme_start_freeze(ctrl);
+ 	nvme_stop_queues(ctrl);
+ 	nvme_sync_io_queues(ctrl);
+ 	nvme_tcp_stop_io_queues(ctrl);
+diff --git a/drivers/scsi/53c700.c b/drivers/scsi/53c700.c
+index 3242ff63986fd..37e1994c1f814 100644
+--- a/drivers/scsi/53c700.c
++++ b/drivers/scsi/53c700.c
+@@ -1600,7 +1600,7 @@ NCR_700_intr(int irq, void *dev_id)
+ 				printk("scsi%d (%d:%d) PHASE MISMATCH IN SEND MESSAGE %d remain, return %p[%04x], phase %s\n", host->host_no, pun, lun, count, (void *)temp, temp - hostdata->pScript, sbcl_to_string(NCR_700_readb(host, SBCL_REG)));
+ #endif
+ 				resume_offset = hostdata->pScript + Ent_SendMessagePhaseMismatch;
+-			} else if(dsp >= to32bit(&slot->pSG[0].ins) &&
++			} else if (slot && dsp >= to32bit(&slot->pSG[0].ins) &&
+ 				  dsp <= to32bit(&slot->pSG[NCR_700_SG_SEGMENTS].ins)) {
+ 				int data_transfer = NCR_700_readl(host, DBC_REG) & 0xffffff;
+ 				int SGcount = (dsp - to32bit(&slot->pSG[0].ins))/sizeof(struct NCR_700_SG_List);
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index b33cb1172f31d..63c9368bafcf1 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -31,6 +31,7 @@ static void qedf_remove(struct pci_dev *pdev);
+ static void qedf_shutdown(struct pci_dev *pdev);
+ static void qedf_schedule_recovery_handler(void *dev);
+ static void qedf_recovery_handler(struct work_struct *work);
++static int qedf_suspend(struct pci_dev *pdev, pm_message_t state);
+ 
+ /*
+  * Driver module parameters.
+@@ -3272,6 +3273,7 @@ static struct pci_driver qedf_pci_driver = {
+ 	.probe = qedf_probe,
+ 	.remove = qedf_remove,
+ 	.shutdown = qedf_shutdown,
++	.suspend = qedf_suspend,
+ };
+ 
+ static int __qedf_probe(struct pci_dev *pdev, int mode)
+@@ -3986,6 +3988,22 @@ static void qedf_shutdown(struct pci_dev *pdev)
+ 	__qedf_remove(pdev, QEDF_MODE_NORMAL);
+ }
+ 
++static int qedf_suspend(struct pci_dev *pdev, pm_message_t state)
++{
++	struct qedf_ctx *qedf;
++
++	if (!pdev) {
++		QEDF_ERR(NULL, "pdev is NULL.\n");
++		return -ENODEV;
++	}
++
++	qedf = pci_get_drvdata(pdev);
++
++	QEDF_ERR(&qedf->dbg_ctx, "%s: Device does not support suspend operation\n", __func__);
++
++	return -EPERM;
++}
++
+ /*
+  * Recovery handler code
+  */
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 7df0106f132ee..cc2152c56d355 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -69,6 +69,7 @@ static struct nvm_iscsi_block *qedi_get_nvram_block(struct qedi_ctx *qedi);
+ static void qedi_recovery_handler(struct work_struct *work);
+ static void qedi_schedule_hw_err_handler(void *dev,
+ 					 enum qed_hw_err_type err_type);
++static int qedi_suspend(struct pci_dev *pdev, pm_message_t state);
+ 
+ static int qedi_iscsi_event_cb(void *context, u8 fw_event_code, void *fw_handle)
+ {
+@@ -2517,6 +2518,22 @@ static void qedi_shutdown(struct pci_dev *pdev)
+ 	__qedi_remove(pdev, QEDI_MODE_SHUTDOWN);
+ }
+ 
++static int qedi_suspend(struct pci_dev *pdev, pm_message_t state)
++{
++	struct qedi_ctx *qedi;
++
++	if (!pdev) {
++		QEDI_ERR(NULL, "pdev is NULL.\n");
++		return -ENODEV;
++	}
++
++	qedi = pci_get_drvdata(pdev);
++
++	QEDI_ERR(&qedi->dbg_ctx, "%s: Device does not support suspend operation\n", __func__);
++
++	return -EPERM;
++}
++
+ static int __qedi_probe(struct pci_dev *pdev, int mode)
+ {
+ 	struct qedi_ctx *qedi;
+@@ -2875,6 +2892,7 @@ static struct pci_driver qedi_pci_driver = {
+ 	.remove = qedi_remove,
+ 	.shutdown = qedi_shutdown,
+ 	.err_handler = &qedi_err_handler,
++	.suspend = qedi_suspend,
+ };
+ 
+ static int __init qedi_init(void)
+diff --git a/drivers/scsi/raid_class.c b/drivers/scsi/raid_class.c
+index 898a0bdf8df67..711252e52d8e1 100644
+--- a/drivers/scsi/raid_class.c
++++ b/drivers/scsi/raid_class.c
+@@ -248,6 +248,7 @@ int raid_component_add(struct raid_template *r,struct device *raid_dev,
+ 	return 0;
+ 
+ err_out:
++	put_device(&rc->dev);
+ 	list_del(&rc->node);
+ 	rd->component_count--;
+ 	put_device(component_dev);
+diff --git a/drivers/scsi/scsi_proc.c b/drivers/scsi/scsi_proc.c
+index d6982d3557396..94603e64cc6bf 100644
+--- a/drivers/scsi/scsi_proc.c
++++ b/drivers/scsi/scsi_proc.c
+@@ -311,7 +311,7 @@ static ssize_t proc_scsi_write(struct file *file, const char __user *buf,
+ 			       size_t length, loff_t *ppos)
+ {
+ 	int host, channel, id, lun;
+-	char *buffer, *p;
++	char *buffer, *end, *p;
+ 	int err;
+ 
+ 	if (!buf || length > PAGE_SIZE)
+@@ -326,10 +326,14 @@ static ssize_t proc_scsi_write(struct file *file, const char __user *buf,
+ 		goto out;
+ 
+ 	err = -EINVAL;
+-	if (length < PAGE_SIZE)
+-		buffer[length] = '\0';
+-	else if (buffer[PAGE_SIZE-1])
+-		goto out;
++	if (length < PAGE_SIZE) {
++		end = buffer + length;
++		*end = '\0';
++	} else {
++		end = buffer + PAGE_SIZE - 1;
++		if (*end)
++			goto out;
++	}
+ 
+ 	/*
+ 	 * Usage: echo "scsi add-single-device 0 1 2 3" >/proc/scsi/scsi
+@@ -338,10 +342,10 @@ static ssize_t proc_scsi_write(struct file *file, const char __user *buf,
+ 	if (!strncmp("scsi add-single-device", buffer, 22)) {
+ 		p = buffer + 23;
+ 
+-		host = simple_strtoul(p, &p, 0);
+-		channel = simple_strtoul(p + 1, &p, 0);
+-		id = simple_strtoul(p + 1, &p, 0);
+-		lun = simple_strtoul(p + 1, &p, 0);
++		host    = (p     < end) ? simple_strtoul(p, &p, 0) : 0;
++		channel = (p + 1 < end) ? simple_strtoul(p + 1, &p, 0) : 0;
++		id      = (p + 1 < end) ? simple_strtoul(p + 1, &p, 0) : 0;
++		lun     = (p + 1 < end) ? simple_strtoul(p + 1, &p, 0) : 0;
+ 
+ 		err = scsi_add_single_device(host, channel, id, lun);
+ 
+@@ -352,10 +356,10 @@ static ssize_t proc_scsi_write(struct file *file, const char __user *buf,
+ 	} else if (!strncmp("scsi remove-single-device", buffer, 25)) {
+ 		p = buffer + 26;
+ 
+-		host = simple_strtoul(p, &p, 0);
+-		channel = simple_strtoul(p + 1, &p, 0);
+-		id = simple_strtoul(p + 1, &p, 0);
+-		lun = simple_strtoul(p + 1, &p, 0);
++		host    = (p     < end) ? simple_strtoul(p, &p, 0) : 0;
++		channel = (p + 1 < end) ? simple_strtoul(p + 1, &p, 0) : 0;
++		id      = (p + 1 < end) ? simple_strtoul(p + 1, &p, 0) : 0;
++		lun     = (p + 1 < end) ? simple_strtoul(p + 1, &p, 0) : 0;
+ 
+ 		err = scsi_remove_single_device(host, channel, id, lun);
+ 	}
+diff --git a/drivers/scsi/snic/snic_disc.c b/drivers/scsi/snic/snic_disc.c
+index 7cf871323b2c4..c445853c623e2 100644
+--- a/drivers/scsi/snic/snic_disc.c
++++ b/drivers/scsi/snic/snic_disc.c
+@@ -317,6 +317,7 @@ snic_tgt_create(struct snic *snic, struct snic_tgt_id *tgtid)
+ 			      "Snic Tgt: device_add, with err = %d\n",
+ 			      ret);
+ 
++		put_device(&tgt->dev);
+ 		put_device(&snic->shost->shost_gendev);
+ 		spin_lock_irqsave(snic->shost->host_lock, flags);
+ 		list_del(&tgt->list);
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 70b4868fe2f7d..45d8549623442 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1641,10 +1641,6 @@ static int storvsc_host_reset_handler(struct scsi_cmnd *scmnd)
+  */
+ static enum blk_eh_timer_return storvsc_eh_timed_out(struct scsi_cmnd *scmnd)
+ {
+-#if IS_ENABLED(CONFIG_SCSI_FC_ATTRS)
+-	if (scmnd->device->host->transportt == fc_transport_template)
+-		return fc_eh_timed_out(scmnd);
+-#endif
+ 	return BLK_EH_RESET_TIMER;
+ }
+ 
+diff --git a/drivers/usb/common/usb-conn-gpio.c b/drivers/usb/common/usb-conn-gpio.c
+index c9545a4eff664..02446092520c8 100644
+--- a/drivers/usb/common/usb-conn-gpio.c
++++ b/drivers/usb/common/usb-conn-gpio.c
+@@ -42,6 +42,7 @@ struct usb_conn_info {
+ 
+ 	struct power_supply_desc desc;
+ 	struct power_supply *charger;
++	bool initial_detection;
+ };
+ 
+ /*
+@@ -86,11 +87,13 @@ static void usb_conn_detect_cable(struct work_struct *work)
+ 	dev_dbg(info->dev, "role %d/%d, gpios: id %d, vbus %d\n",
+ 		info->last_role, role, id, vbus);
+ 
+-	if (info->last_role == role) {
++	if (!info->initial_detection && info->last_role == role) {
+ 		dev_warn(info->dev, "repeated role: %d\n", role);
+ 		return;
+ 	}
+ 
++	info->initial_detection = false;
++
+ 	if (info->last_role == USB_ROLE_HOST && info->vbus)
+ 		regulator_disable(info->vbus);
+ 
+@@ -277,6 +280,7 @@ static int usb_conn_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, info);
+ 
+ 	/* Perform initial detection */
++	info->initial_detection = true;
+ 	usb_conn_queue_dwork(info, 0);
+ 
+ 	return 0;
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 221738b644269..565397c41910d 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3830,9 +3830,14 @@ static irqreturn_t dwc3_check_event_buf(struct dwc3_event_buffer *evt)
+ 	u32 reg;
+ 
+ 	if (pm_runtime_suspended(dwc->dev)) {
++		dwc->pending_events = true;
++		/*
++		 * Trigger runtime resume. The get() function will be balanced
++		 * after processing the pending events in dwc3_process_pending
++		 * events().
++		 */
+ 		pm_runtime_get(dwc->dev);
+ 		disable_irq_nosync(dwc->irq_gadget);
+-		dwc->pending_events = true;
+ 		return IRQ_HANDLED;
+ 	}
+ 
+@@ -4091,6 +4096,8 @@ void dwc3_gadget_process_pending_events(struct dwc3 *dwc)
+ {
+ 	if (dwc->pending_events) {
+ 		dwc3_interrupt(dwc->irq_gadget, dwc->ev_buf);
++		dwc3_thread_interrupt(dwc->irq_gadget, dwc->ev_buf);
++		pm_runtime_put(dwc->dev);
+ 		dwc->pending_events = false;
+ 		enable_irq(dwc->irq_gadget);
+ 	}
+diff --git a/drivers/usb/storage/alauda.c b/drivers/usb/storage/alauda.c
+index 7e4ce0e7e05a7..dcc4778d1ae99 100644
+--- a/drivers/usb/storage/alauda.c
++++ b/drivers/usb/storage/alauda.c
+@@ -318,7 +318,8 @@ static int alauda_get_media_status(struct us_data *us, unsigned char *data)
+ 	rc = usb_stor_ctrl_transfer(us, us->recv_ctrl_pipe,
+ 		command, 0xc0, 0, 1, data, 2);
+ 
+-	usb_stor_dbg(us, "Media status %02X %02X\n", data[0], data[1]);
++	if (rc == USB_STOR_XFER_GOOD)
++		usb_stor_dbg(us, "Media status %02X %02X\n", data[0], data[1]);
+ 
+ 	return rc;
+ }
+@@ -454,9 +455,14 @@ static int alauda_init_media(struct us_data *us)
+ static int alauda_check_media(struct us_data *us)
+ {
+ 	struct alauda_info *info = (struct alauda_info *) us->extra;
+-	unsigned char status[2];
++	unsigned char *status = us->iobuf;
++	int rc;
+ 
+-	alauda_get_media_status(us, status);
++	rc = alauda_get_media_status(us, status);
++	if (rc != USB_STOR_XFER_GOOD) {
++		status[0] = 0xF0;	/* Pretend there's no media */
++		status[1] = 0;
++	}
+ 
+ 	/* Check for no media or door open */
+ 	if ((status[0] & 0x80) || ((status[0] & 0x1F) == 0x10)
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 7d9b8050b09cd..9cdf50e2484e1 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4138,8 +4138,11 @@ have_block_group:
+ 			ret = 0;
+ 		}
+ 
+-		if (unlikely(block_group->cached == BTRFS_CACHE_ERROR))
++		if (unlikely(block_group->cached == BTRFS_CACHE_ERROR)) {
++			if (!cache_block_group_error)
++				cache_block_group_error = -EIO;
+ 			goto loop;
++		}
+ 
+ 		bg_ret = NULL;
+ 		ret = do_allocation(block_group, &ffe_ctl, &bg_ret);
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 5fc65a780f83b..0e266772beaef 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4034,11 +4034,12 @@ retry:
+ 			free_extent_buffer(eb);
+ 
+ 			/*
+-			 * the filesystem may choose to bump up nr_to_write.
++			 * The filesystem may choose to bump up nr_to_write.
+ 			 * We have to make sure to honor the new nr_to_write
+-			 * at any time
++			 * at any time.
+ 			 */
+-			nr_to_write_done = wbc->nr_to_write <= 0;
++			nr_to_write_done = (wbc->sync_mode == WB_SYNC_NONE &&
++					    wbc->nr_to_write <= 0);
+ 		}
+ 		pagevec_release(&pvec);
+ 		cond_resched();
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index 042f4512e47d9..3a3cdc3706471 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -1103,9 +1103,17 @@ int nilfs_set_file_dirty(struct inode *inode, unsigned int nr_dirty)
+ 
+ int __nilfs_mark_inode_dirty(struct inode *inode, int flags)
+ {
++	struct the_nilfs *nilfs = inode->i_sb->s_fs_info;
+ 	struct buffer_head *ibh;
+ 	int err;
+ 
++	/*
++	 * Do not dirty inodes after the log writer has been detached
++	 * and its nilfs_root struct has been freed.
++	 */
++	if (unlikely(nilfs_purging(nilfs)))
++		return 0;
++
+ 	err = nilfs_load_inode_block(inode, &ibh);
+ 	if (unlikely(err)) {
+ 		nilfs_warn(inode->i_sb,
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 4a910c8a56913..e4686e30b1a7d 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -2850,6 +2850,7 @@ void nilfs_detach_log_writer(struct super_block *sb)
+ 		nilfs_segctor_destroy(nilfs->ns_writer);
+ 		nilfs->ns_writer = NULL;
+ 	}
++	set_nilfs_purging(nilfs);
+ 
+ 	/* Force to free the list of dirty files */
+ 	spin_lock(&nilfs->ns_inode_lock);
+@@ -2862,4 +2863,5 @@ void nilfs_detach_log_writer(struct super_block *sb)
+ 	up_write(&nilfs->ns_segctor_sem);
+ 
+ 	nilfs_dispose_list(nilfs, &garbage_list, 1);
++	clear_nilfs_purging(nilfs);
+ }
+diff --git a/fs/nilfs2/the_nilfs.h b/fs/nilfs2/the_nilfs.h
+index b55cdeb4d1699..01914089e76df 100644
+--- a/fs/nilfs2/the_nilfs.h
++++ b/fs/nilfs2/the_nilfs.h
+@@ -29,6 +29,7 @@ enum {
+ 	THE_NILFS_DISCONTINUED,	/* 'next' pointer chain has broken */
+ 	THE_NILFS_GC_RUNNING,	/* gc process is running */
+ 	THE_NILFS_SB_DIRTY,	/* super block is dirty */
++	THE_NILFS_PURGING,	/* disposing dirty files for cleanup */
+ };
+ 
+ /**
+@@ -208,6 +209,7 @@ THE_NILFS_FNS(INIT, init)
+ THE_NILFS_FNS(DISCONTINUED, discontinued)
+ THE_NILFS_FNS(GC_RUNNING, gc_running)
+ THE_NILFS_FNS(SB_DIRTY, sb_dirty)
++THE_NILFS_FNS(PURGING, purging)
+ 
+ /*
+  * Mount option operations
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 358905a859938..6d7ab016127c9 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -72,6 +72,8 @@ extern ssize_t cpu_show_retbleed(struct device *dev,
+ 				 struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_spec_rstack_overflow(struct device *dev,
+ 					     struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_gds(struct device *dev,
++			    struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 2a819be384a78..4536a122c4bc5 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -513,6 +513,9 @@ ieee80211_get_sband_iftype_data(const struct ieee80211_supported_band *sband,
+ 	if (WARN_ON(iftype >= NL80211_IFTYPE_MAX))
+ 		return NULL;
+ 
++	if (iftype == NL80211_IFTYPE_AP_VLAN)
++		iftype = NL80211_IFTYPE_AP;
++
+ 	for (i = 0; i < sband->n_iftype_data; i++)  {
+ 		const struct ieee80211_sband_iftype_data *data =
+ 			&sband->iftype_data[i];
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index fb3c5f6907506..eec29dd6681ca 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1073,6 +1073,29 @@ int __nft_release_basechain(struct nft_ctx *ctx);
+ 
+ unsigned int nft_do_chain(struct nft_pktinfo *pkt, void *priv);
+ 
++static inline bool nft_use_inc(u32 *use)
++{
++	if (*use == UINT_MAX)
++		return false;
++
++	(*use)++;
++
++	return true;
++}
++
++static inline void nft_use_dec(u32 *use)
++{
++	WARN_ON_ONCE((*use)-- == 0);
++}
++
++/* For error and abort path: restore use counter to previous state. */
++static inline void nft_use_inc_restore(u32 *use)
++{
++	WARN_ON_ONCE(!nft_use_inc(use));
++}
++
++#define nft_use_dec_restore	nft_use_dec
++
+ /**
+  *	struct nft_table - nf_tables table
+  *
+@@ -1150,8 +1173,8 @@ struct nft_object {
+ 	struct list_head		list;
+ 	struct rhlist_head		rhlhead;
+ 	struct nft_object_hash_key	key;
+-	u32				genmask:2,
+-					use:30;
++	u32				genmask:2;
++	u32				use;
+ 	u64				handle;
+ 	u16				udlen;
+ 	u8				*udata;
+@@ -1253,8 +1276,8 @@ struct nft_flowtable {
+ 	char				*name;
+ 	int				hooknum;
+ 	int				ops_len;
+-	u32				genmask:2,
+-					use:30;
++	u32				genmask:2;
++	u32				use;
+ 	u64				handle;
+ 	/* runtime data below here */
+ 	struct list_head		hook_list ____cacheline_aligned;
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 92eb4769b0a35..dec8a6355f2ae 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -4229,9 +4229,11 @@ static int io_openat2(struct io_kiocb *req, unsigned int issue_flags)
+ 	if (issue_flags & IO_URING_F_NONBLOCK) {
+ 		/*
+ 		 * Don't bother trying for O_TRUNC, O_CREAT, or O_TMPFILE open,
+-		 * it'll always -EAGAIN
++		 * it'll always -EAGAIN. Note that we test for __O_TMPFILE
++		 * because O_TMPFILE includes O_DIRECTORY, which isn't a flag
++		 * we need to force async for.
+ 		 */
+-		if (req->open.how.flags & (O_TRUNC | O_CREAT | O_TMPFILE))
++		if (req->open.how.flags & (O_TRUNC | O_CREAT | __O_TMPFILE))
+ 			return -EAGAIN;
+ 		op.lookup_flags |= LOOKUP_CACHED;
+ 		op.open_flag |= O_NONBLOCK;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index edb19ada0405d..8f1e43df8c5fa 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1359,7 +1359,7 @@ static void __mark_reg_unknown(const struct bpf_verifier_env *env,
+ 	reg->type = SCALAR_VALUE;
+ 	reg->var_off = tnum_unknown;
+ 	reg->frameno = 0;
+-	reg->precise = env->subprog_cnt > 1 || !env->bpf_capable;
++	reg->precise = !env->bpf_capable;
+ 	__mark_reg_unbounded(reg);
+ }
+ 
+@@ -2023,8 +2023,11 @@ static void mark_all_scalars_precise(struct bpf_verifier_env *env,
+ 
+ 	/* big hammer: mark all scalars precise in this path.
+ 	 * pop_stack may still get !precise scalars.
++	 * We also skip current state and go straight to first parent state,
++	 * because precision markings in current non-checkpointed state are
++	 * not needed. See why in the comment in __mark_chain_precision below.
+ 	 */
+-	for (; st; st = st->parent)
++	for (st = st->parent; st; st = st->parent) {
+ 		for (i = 0; i <= st->curframe; i++) {
+ 			func = st->frame[i];
+ 			for (j = 0; j < BPF_REG_FP; j++) {
+@@ -2042,8 +2045,121 @@ static void mark_all_scalars_precise(struct bpf_verifier_env *env,
+ 				reg->precise = true;
+ 			}
+ 		}
++	}
++}
++
++static void mark_all_scalars_imprecise(struct bpf_verifier_env *env, struct bpf_verifier_state *st)
++{
++	struct bpf_func_state *func;
++	struct bpf_reg_state *reg;
++	int i, j;
++
++	for (i = 0; i <= st->curframe; i++) {
++		func = st->frame[i];
++		for (j = 0; j < BPF_REG_FP; j++) {
++			reg = &func->regs[j];
++			if (reg->type != SCALAR_VALUE)
++				continue;
++			reg->precise = false;
++		}
++		for (j = 0; j < func->allocated_stack / BPF_REG_SIZE; j++) {
++			if (!is_spilled_reg(&func->stack[j]))
++				continue;
++			reg = &func->stack[j].spilled_ptr;
++			if (reg->type != SCALAR_VALUE)
++				continue;
++			reg->precise = false;
++		}
++	}
+ }
+ 
++/*
++ * __mark_chain_precision() backtracks BPF program instruction sequence and
++ * chain of verifier states making sure that register *regno* (if regno >= 0)
++ * and/or stack slot *spi* (if spi >= 0) are marked as precisely tracked
++ * SCALARS, as well as any other registers and slots that contribute to
++ * a tracked state of given registers/stack slots, depending on specific BPF
++ * assembly instructions (see backtrack_insns() for exact instruction handling
++ * logic). This backtracking relies on recorded jmp_history and is able to
++ * traverse entire chain of parent states. This process ends only when all the
++ * necessary registers/slots and their transitive dependencies are marked as
++ * precise.
++ *
++ * One important and subtle aspect is that precise marks *do not matter* in
++ * the currently verified state (current state). It is important to understand
++ * why this is the case.
++ *
++ * First, note that current state is the state that is not yet "checkpointed",
++ * i.e., it is not yet put into env->explored_states, and it has no children
++ * states as well. It's ephemeral, and can end up either a) being discarded if
++ * compatible explored state is found at some point or BPF_EXIT instruction is
++ * reached or b) checkpointed and put into env->explored_states, branching out
++ * into one or more children states.
++ *
++ * In the former case, precise markings in current state are completely
++ * ignored by state comparison code (see regsafe() for details). Only
++ * checkpointed ("old") state precise markings are important, and if old
++ * state's register/slot is precise, regsafe() assumes current state's
++ * register/slot as precise and checks value ranges exactly and precisely. If
++ * states turn out to be compatible, current state's necessary precise
++ * markings and any required parent states' precise markings are enforced
++ * after the fact with propagate_precision() logic, after the fact. But it's
++ * important to realize that in this case, even after marking current state
++ * registers/slots as precise, we immediately discard current state. So what
++ * actually matters is any of the precise markings propagated into current
++ * state's parent states, which are always checkpointed (due to b) case above).
++ * As such, for scenario a) it doesn't matter if current state has precise
++ * markings set or not.
++ *
++ * Now, for the scenario b), checkpointing and forking into child(ren)
++ * state(s). Note that before current state gets to checkpointing step, any
++ * processed instruction always assumes precise SCALAR register/slot
++ * knowledge: if precise value or range is useful to prune jump branch, BPF
++ * verifier takes this opportunity enthusiastically. Similarly, when
++ * register's value is used to calculate offset or memory address, exact
++ * knowledge of SCALAR range is assumed, checked, and enforced. So, similar to
++ * what we mentioned above about state comparison ignoring precise markings
++ * during state comparison, BPF verifier ignores and also assumes precise
++ * markings *at will* during instruction verification process. But as verifier
++ * assumes precision, it also propagates any precision dependencies across
++ * parent states, which are not yet finalized, so can be further restricted
++ * based on new knowledge gained from restrictions enforced by their children
++ * states. This is so that once those parent states are finalized, i.e., when
++ * they have no more active children state, state comparison logic in
++ * is_state_visited() would enforce strict and precise SCALAR ranges, if
++ * required for correctness.
++ *
++ * To build a bit more intuition, note also that once a state is checkpointed,
++ * the path we took to get to that state is not important. This is crucial
++ * property for state pruning. When state is checkpointed and finalized at
++ * some instruction index, it can be correctly and safely used to "short
++ * circuit" any *compatible* state that reaches exactly the same instruction
++ * index. I.e., if we jumped to that instruction from a completely different
++ * code path than original finalized state was derived from, it doesn't
++ * matter, current state can be discarded because from that instruction
++ * forward having a compatible state will ensure we will safely reach the
++ * exit. States describe preconditions for further exploration, but completely
++ * forget the history of how we got here.
++ *
++ * This also means that even if we needed precise SCALAR range to get to
++ * finalized state, but from that point forward *that same* SCALAR register is
++ * never used in a precise context (i.e., it's precise value is not needed for
++ * correctness), it's correct and safe to mark such register as "imprecise"
++ * (i.e., precise marking set to false). This is what we rely on when we do
++ * not set precise marking in current state. If no child state requires
++ * precision for any given SCALAR register, it's safe to dictate that it can
++ * be imprecise. If any child state does require this register to be precise,
++ * we'll mark it precise later retroactively during precise markings
++ * propagation from child state to parent states.
++ *
++ * Skipping precise marking setting in current state is a mild version of
++ * relying on the above observation. But we can utilize this property even
++ * more aggressively by proactively forgetting any precise marking in the
++ * current state (which we inherited from the parent state), right before we
++ * checkpoint it and branch off into new child state. This is done by
++ * mark_all_scalars_imprecise() to hopefully get more permissive and generic
++ * finalized states which help in short circuiting more future states.
++ */
+ static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int regno,
+ 				  int spi)
+ {
+@@ -2061,6 +2177,10 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int r
+ 	if (!env->bpf_capable)
+ 		return 0;
+ 
++	/* Do sanity checks against current state of register and/or stack
++	 * slot, but don't set precise flag in current state, as precision
++	 * tracking in the current state is unnecessary.
++	 */
+ 	func = st->frame[frame];
+ 	if (regno >= 0) {
+ 		reg = &func->regs[regno];
+@@ -2068,11 +2188,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int r
+ 			WARN_ONCE(1, "backtracing misuse");
+ 			return -EFAULT;
+ 		}
+-		if (!reg->precise)
+-			new_marks = true;
+-		else
+-			reg_mask = 0;
+-		reg->precise = true;
++		new_marks = true;
+ 	}
+ 
+ 	while (spi >= 0) {
+@@ -2085,11 +2201,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int r
+ 			stack_mask = 0;
+ 			break;
+ 		}
+-		if (!reg->precise)
+-			new_marks = true;
+-		else
+-			stack_mask = 0;
+-		reg->precise = true;
++		new_marks = true;
+ 		break;
+ 	}
+ 
+@@ -2097,12 +2209,42 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int r
+ 		return 0;
+ 	if (!reg_mask && !stack_mask)
+ 		return 0;
++
+ 	for (;;) {
+ 		DECLARE_BITMAP(mask, 64);
+ 		u32 history = st->jmp_history_cnt;
+ 
+ 		if (env->log.level & BPF_LOG_LEVEL)
+ 			verbose(env, "last_idx %d first_idx %d\n", last_idx, first_idx);
++
++		if (last_idx < 0) {
++			/* we are at the entry into subprog, which
++			 * is expected for global funcs, but only if
++			 * requested precise registers are R1-R5
++			 * (which are global func's input arguments)
++			 */
++			if (st->curframe == 0 &&
++			    st->frame[0]->subprogno > 0 &&
++			    st->frame[0]->callsite == BPF_MAIN_FUNC &&
++			    stack_mask == 0 && (reg_mask & ~0x3e) == 0) {
++				bitmap_from_u64(mask, reg_mask);
++				for_each_set_bit(i, mask, 32) {
++					reg = &st->frame[0]->regs[i];
++					if (reg->type != SCALAR_VALUE) {
++						reg_mask &= ~(1u << i);
++						continue;
++					}
++					reg->precise = true;
++				}
++				return 0;
++			}
++
++			verbose(env, "BUG backtracing func entry subprog %d reg_mask %x stack_mask %llx\n",
++				st->frame[0]->subprogno, reg_mask, stack_mask);
++			WARN_ONCE(1, "verifier backtracking bug");
++			return -EFAULT;
++		}
++
+ 		for (i = last_idx;;) {
+ 			if (skip_first) {
+ 				err = 0;
+@@ -9233,7 +9375,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
+ 		if (env->explore_alu_limits)
+ 			return false;
+ 		if (rcur->type == SCALAR_VALUE) {
+-			if (!rold->precise && !rcur->precise)
++			if (!rold->precise)
+ 				return true;
+ 			/* new val must satisfy old val knowledge */
+ 			return range_within(rold, rcur) &&
+@@ -9766,6 +9908,10 @@ next:
+ 	env->prev_jmps_processed = env->jmps_processed;
+ 	env->prev_insn_processed = env->insn_processed;
+ 
++	/* forget precise markings we inherited, see __mark_chain_precision */
++	if (env->bpf_capable)
++		mark_all_scalars_imprecise(env, cur);
++
+ 	/* add new state to the head of linked list */
+ 	new = &new_sl->state;
+ 	err = copy_verifier_state(new, cur);
+@@ -11846,6 +11992,9 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
+ 			0 /* frameno */,
+ 			subprog);
+ 
++	state->first_insn_idx = env->subprog_info[subprog].start;
++	state->last_insn_idx = -1;
++
+ 	regs = state->frame[state->curframe]->regs;
+ 	if (subprog || env->prog->type == BPF_PROG_TYPE_EXT) {
+ 		ret = btf_prepare_func_args(env, subprog, regs);
+diff --git a/net/dccp/output.c b/net/dccp/output.c
+index 50e6d5699bb29..d679032f00340 100644
+--- a/net/dccp/output.c
++++ b/net/dccp/output.c
+@@ -185,7 +185,7 @@ unsigned int dccp_sync_mss(struct sock *sk, u32 pmtu)
+ 
+ 	/* And store cached results */
+ 	icsk->icsk_pmtu_cookie = pmtu;
+-	dp->dccps_mss_cache = cur_mps;
++	WRITE_ONCE(dp->dccps_mss_cache, cur_mps);
+ 
+ 	return cur_mps;
+ }
+diff --git a/net/dccp/proto.c b/net/dccp/proto.c
+index e946211758c05..3293c1e3aa5d5 100644
+--- a/net/dccp/proto.c
++++ b/net/dccp/proto.c
+@@ -639,7 +639,7 @@ static int do_dccp_getsockopt(struct sock *sk, int level, int optname,
+ 		return dccp_getsockopt_service(sk, len,
+ 					       (__be32 __user *)optval, optlen);
+ 	case DCCP_SOCKOPT_GET_CUR_MPS:
+-		val = dp->dccps_mss_cache;
++		val = READ_ONCE(dp->dccps_mss_cache);
+ 		break;
+ 	case DCCP_SOCKOPT_AVAILABLE_CCIDS:
+ 		return ccid_getsockopt_builtin_ccids(sk, len, optval, optlen);
+@@ -748,7 +748,7 @@ int dccp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 
+ 	trace_dccp_probe(sk, len);
+ 
+-	if (len > dp->dccps_mss_cache)
++	if (len > READ_ONCE(dp->dccps_mss_cache))
+ 		return -EMSGSIZE;
+ 
+ 	lock_sock(sk);
+@@ -781,6 +781,12 @@ int dccp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 		goto out_discard;
+ 	}
+ 
++	/* We need to check dccps_mss_cache after socket is locked. */
++	if (len > dp->dccps_mss_cache) {
++		rc = -EMSGSIZE;
++		goto out_discard;
++	}
++
+ 	skb_reserve(skb, sk->sk_prot->max_header);
+ 	rc = memcpy_from_msg(skb_put(skb, len), msg, len);
+ 	if (rc != 0)
+diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c
+index 4b74c67f13c9d..da9a55c68e11e 100644
+--- a/net/ipv4/ip_tunnel_core.c
++++ b/net/ipv4/ip_tunnel_core.c
+@@ -224,7 +224,7 @@ static int iptunnel_pmtud_build_icmp(struct sk_buff *skb, int mtu)
+ 		.un.frag.__unused	= 0,
+ 		.un.frag.mtu		= ntohs(mtu),
+ 	};
+-	icmph->checksum = ip_compute_csum(icmph, len);
++	icmph->checksum = csum_fold(skb_checksum(skb, 0, len, 0));
+ 	skb_reset_transport_header(skb);
+ 
+ 	niph = skb_push(skb, sizeof(*niph));
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index 76717478f1733..ac1e51087b1d8 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -196,7 +196,8 @@ static struct nd_opt_hdr *ndisc_next_option(struct nd_opt_hdr *cur,
+ static inline int ndisc_is_useropt(const struct net_device *dev,
+ 				   struct nd_opt_hdr *opt)
+ {
+-	return opt->nd_opt_type == ND_OPT_RDNSS ||
++	return opt->nd_opt_type == ND_OPT_PREFIX_INFO ||
++		opt->nd_opt_type == ND_OPT_RDNSS ||
+ 		opt->nd_opt_type == ND_OPT_DNSSL ||
+ 		opt->nd_opt_type == ND_OPT_CAPTIVE_PORTAL ||
+ 		opt->nd_opt_type == ND_OPT_PREF64 ||
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 19653b8784bbc..2669999d1bc9c 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -257,8 +257,10 @@ int nf_tables_bind_chain(const struct nft_ctx *ctx, struct nft_chain *chain)
+ 	if (chain->bound)
+ 		return -EBUSY;
+ 
++	if (!nft_use_inc(&chain->use))
++		return -EMFILE;
++
+ 	chain->bound = true;
+-	chain->use++;
+ 	nft_chain_trans_bind(ctx, chain);
+ 
+ 	return 0;
+@@ -427,7 +429,7 @@ static int nft_delchain(struct nft_ctx *ctx)
+ 	if (IS_ERR(trans))
+ 		return PTR_ERR(trans);
+ 
+-	ctx->table->use--;
++	nft_use_dec(&ctx->table->use);
+ 	nft_deactivate_next(ctx->net, ctx->chain);
+ 
+ 	return 0;
+@@ -466,7 +468,7 @@ nf_tables_delrule_deactivate(struct nft_ctx *ctx, struct nft_rule *rule)
+ 	/* You cannot delete the same rule twice */
+ 	if (nft_is_active_next(ctx->net, rule)) {
+ 		nft_deactivate_next(ctx->net, rule);
+-		ctx->chain->use--;
++		nft_use_dec(&ctx->chain->use);
+ 		return 0;
+ 	}
+ 	return -ENOENT;
+@@ -594,7 +596,7 @@ static int nft_delset(const struct nft_ctx *ctx, struct nft_set *set)
+ 		nft_map_deactivate(ctx, set);
+ 
+ 	nft_deactivate_next(ctx->net, set);
+-	ctx->table->use--;
++	nft_use_dec(&ctx->table->use);
+ 
+ 	return err;
+ }
+@@ -626,7 +628,7 @@ static int nft_delobj(struct nft_ctx *ctx, struct nft_object *obj)
+ 		return err;
+ 
+ 	nft_deactivate_next(ctx->net, obj);
+-	ctx->table->use--;
++	nft_use_dec(&ctx->table->use);
+ 
+ 	return err;
+ }
+@@ -661,7 +663,7 @@ static int nft_delflowtable(struct nft_ctx *ctx,
+ 		return err;
+ 
+ 	nft_deactivate_next(ctx->net, flowtable);
+-	ctx->table->use--;
++	nft_use_dec(&ctx->table->use);
+ 
+ 	return err;
+ }
+@@ -2158,9 +2160,6 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 	struct nft_rule **rules;
+ 	int err;
+ 
+-	if (table->use == UINT_MAX)
+-		return -EOVERFLOW;
+-
+ 	if (nla[NFTA_CHAIN_HOOK]) {
+ 		struct nft_stats __percpu *stats = NULL;
+ 		struct nft_chain_hook hook;
+@@ -2256,6 +2255,11 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 	if (err < 0)
+ 		goto err_destroy_chain;
+ 
++	if (!nft_use_inc(&table->use)) {
++		err = -EMFILE;
++		goto err_use;
++	}
++
+ 	trans = nft_trans_chain_add(ctx, NFT_MSG_NEWCHAIN);
+ 	if (IS_ERR(trans)) {
+ 		err = PTR_ERR(trans);
+@@ -2272,10 +2276,11 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 		goto err_unregister_hook;
+ 	}
+ 
+-	table->use++;
+-
+ 	return 0;
++
+ err_unregister_hook:
++	nft_use_dec_restore(&table->use);
++err_use:
+ 	nf_tables_unregister_hook(net, table, chain);
+ err_destroy_chain:
+ 	nf_tables_chain_destroy(ctx);
+@@ -3387,9 +3392,6 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 			return -EINVAL;
+ 		handle = nf_tables_alloc_handle(table);
+ 
+-		if (chain->use == UINT_MAX)
+-			return -EOVERFLOW;
+-
+ 		if (nla[NFTA_RULE_POSITION]) {
+ 			pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION]));
+ 			old_rule = __nft_rule_lookup(chain, pos_handle);
+@@ -3475,16 +3477,21 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 		expr = nft_expr_next(expr);
+ 	}
+ 
++	if (!nft_use_inc(&chain->use)) {
++		err = -EMFILE;
++		goto err2;
++	}
++
+ 	if (nlh->nlmsg_flags & NLM_F_REPLACE) {
+ 		trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule);
+ 		if (trans == NULL) {
+ 			err = -ENOMEM;
+-			goto err2;
++			goto err_destroy_flow_rule;
+ 		}
+ 		err = nft_delrule(&ctx, old_rule);
+ 		if (err < 0) {
+ 			nft_trans_destroy(trans);
+-			goto err2;
++			goto err_destroy_flow_rule;
+ 		}
+ 
+ 		list_add_tail_rcu(&rule->list, &old_rule->list);
+@@ -3492,7 +3499,7 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 		trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule);
+ 		if (!trans) {
+ 			err = -ENOMEM;
+-			goto err2;
++			goto err_destroy_flow_rule;
+ 		}
+ 
+ 		if (nlh->nlmsg_flags & NLM_F_APPEND) {
+@@ -3508,7 +3515,6 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 		}
+ 	}
+ 	kvfree(info);
+-	chain->use++;
+ 
+ 	if (nft_net->validate_state == NFT_VALIDATE_DO)
+ 		return nft_table_validate(net, table);
+@@ -3522,6 +3528,9 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 	}
+ 
+ 	return 0;
++
++err_destroy_flow_rule:
++	nft_use_dec_restore(&chain->use);
+ err2:
+ 	nft_rule_expr_deactivate(&ctx, rule, NFT_TRANS_PREPARE_ERROR);
+ 	nf_tables_rule_destroy(&ctx, rule);
+@@ -4437,9 +4446,15 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 	alloc_size = sizeof(*set) + size + udlen;
+ 	if (alloc_size < size || alloc_size > INT_MAX)
+ 		return -ENOMEM;
++
++	if (!nft_use_inc(&table->use))
++		return -EMFILE;
++
+ 	set = kvzalloc(alloc_size, GFP_KERNEL);
+-	if (!set)
+-		return -ENOMEM;
++	if (!set) {
++		err = -ENOMEM;
++		goto err_alloc;
++	}
+ 
+ 	name = nla_strdup(nla[NFTA_SET_NAME], GFP_KERNEL);
+ 	if (!name) {
+@@ -4500,7 +4515,7 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 		goto err_set_expr_alloc;
+ 
+ 	list_add_tail_rcu(&set->list, &table->sets);
+-	table->use++;
++
+ 	return 0;
+ 
+ err_set_expr_alloc:
+@@ -4512,6 +4527,9 @@ err_set_init:
+ 	kfree(set->name);
+ err_set_name:
+ 	kvfree(set);
++err_alloc:
++	nft_use_dec_restore(&table->use);
++
+ 	return err;
+ }
+ 
+@@ -4605,9 +4623,6 @@ int nf_tables_bind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 	struct nft_set_binding *i;
+ 	struct nft_set_iter iter;
+ 
+-	if (set->use == UINT_MAX)
+-		return -EOVERFLOW;
+-
+ 	if (!list_empty(&set->bindings) && nft_set_is_anonymous(set))
+ 		return -EBUSY;
+ 
+@@ -4632,10 +4647,12 @@ int nf_tables_bind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			return iter.err;
+ 	}
+ bind:
++	if (!nft_use_inc(&set->use))
++		return -EMFILE;
++
+ 	binding->chain = ctx->chain;
+ 	list_add_tail_rcu(&binding->list, &set->bindings);
+ 	nft_set_trans_bind(ctx, set);
+-	set->use++;
+ 
+ 	return 0;
+ }
+@@ -4688,7 +4705,7 @@ void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
+ 		nft_clear(ctx->net, set);
+ 	}
+ 
+-	set->use++;
++	nft_use_inc_restore(&set->use);
+ }
+ EXPORT_SYMBOL_GPL(nf_tables_activate_set);
+ 
+@@ -4704,7 +4721,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 		else
+ 			list_del_rcu(&binding->list);
+ 
+-		set->use--;
++		nft_use_dec(&set->use);
+ 		break;
+ 	case NFT_TRANS_PREPARE:
+ 		if (nft_set_is_anonymous(set)) {
+@@ -4713,7 +4730,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 
+ 			nft_deactivate_next(ctx->net, set);
+ 		}
+-		set->use--;
++		nft_use_dec(&set->use);
+ 		return;
+ 	case NFT_TRANS_ABORT:
+ 	case NFT_TRANS_RELEASE:
+@@ -4721,7 +4738,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 		    set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
+ 			nft_map_deactivate(ctx, set);
+ 
+-		set->use--;
++		nft_use_dec(&set->use);
+ 		fallthrough;
+ 	default:
+ 		nf_tables_unbind_set(ctx, set, binding,
+@@ -5344,7 +5361,7 @@ void nft_set_elem_destroy(const struct nft_set *set, void *elem,
+ 		nft_set_elem_expr_destroy(&ctx, nft_set_ext_expr(ext));
+ 
+ 	if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF))
+-		(*nft_set_ext_obj(ext))->use--;
++		nft_use_dec(&(*nft_set_ext_obj(ext))->use);
+ 	kfree(elem);
+ }
+ EXPORT_SYMBOL_GPL(nft_set_elem_destroy);
+@@ -5518,8 +5535,16 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ 				     set->objtype, genmask);
+ 		if (IS_ERR(obj)) {
+ 			err = PTR_ERR(obj);
++			obj = NULL;
++			goto err_parse_key_end;
++		}
++
++		if (!nft_use_inc(&obj->use)) {
++			err = -EMFILE;
++			obj = NULL;
+ 			goto err_parse_key_end;
+ 		}
++
+ 		nft_set_ext_add(&tmpl, NFT_SET_EXT_OBJREF);
+ 	}
+ 
+@@ -5584,10 +5609,8 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ 		udata->len = ulen - 1;
+ 		nla_memcpy(&udata->data, nla[NFTA_SET_ELEM_USERDATA], ulen);
+ 	}
+-	if (obj) {
++	if (obj)
+ 		*nft_set_ext_obj(ext) = obj;
+-		obj->use++;
+-	}
+ 
+ 	err = nft_set_elem_expr_setup(ctx, ext, expr);
+ 	if (err < 0)
+@@ -5643,14 +5666,14 @@ err_set_full:
+ err_element_clash:
+ 	kfree(trans);
+ err_elem_expr:
+-	if (obj)
+-		obj->use--;
+-
+ 	nf_tables_set_elem_destroy(ctx, set, elem.priv);
+ err_parse_data:
+ 	if (nla[NFTA_SET_ELEM_DATA] != NULL)
+ 		nft_data_release(&elem.data.val, desc.type);
+ err_parse_key_end:
++	if (obj)
++		nft_use_dec_restore(&obj->use);
++
+ 	nft_data_release(&elem.key_end.val, NFT_DATA_VALUE);
+ err_parse_key:
+ 	nft_data_release(&elem.key.val, NFT_DATA_VALUE);
+@@ -5722,7 +5745,7 @@ void nft_data_hold(const struct nft_data *data, enum nft_data_types type)
+ 		case NFT_JUMP:
+ 		case NFT_GOTO:
+ 			chain = data->verdict.chain;
+-			chain->use++;
++			nft_use_inc_restore(&chain->use);
+ 			break;
+ 		}
+ 	}
+@@ -5737,7 +5760,7 @@ static void nft_setelem_data_activate(const struct net *net,
+ 	if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA))
+ 		nft_data_hold(nft_set_ext_data(ext), set->dtype);
+ 	if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF))
+-		(*nft_set_ext_obj(ext))->use++;
++		nft_use_inc_restore(&(*nft_set_ext_obj(ext))->use);
+ }
+ 
+ static void nft_setelem_data_deactivate(const struct net *net,
+@@ -5749,7 +5772,7 @@ static void nft_setelem_data_deactivate(const struct net *net,
+ 	if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA))
+ 		nft_data_release(nft_set_ext_data(ext), set->dtype);
+ 	if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF))
+-		(*nft_set_ext_obj(ext))->use--;
++		nft_use_dec(&(*nft_set_ext_obj(ext))->use);
+ }
+ 
+ static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set,
+@@ -6216,9 +6239,14 @@ static int nf_tables_newobj(struct net *net, struct sock *nlsk,
+ 
+ 	nft_ctx_init(&ctx, net, skb, nlh, family, table, NULL, nla);
+ 
++	if (!nft_use_inc(&table->use))
++		return -EMFILE;
++
+ 	type = nft_obj_type_get(net, objtype);
+-	if (IS_ERR(type))
+-		return PTR_ERR(type);
++	if (IS_ERR(type)) {
++		err = PTR_ERR(type);
++		goto err_type;
++	}
+ 
+ 	obj = nft_obj_init(&ctx, type, nla[NFTA_OBJ_DATA]);
+ 	if (IS_ERR(obj)) {
+@@ -6252,7 +6280,7 @@ static int nf_tables_newobj(struct net *net, struct sock *nlsk,
+ 		goto err_obj_ht;
+ 
+ 	list_add_tail_rcu(&obj->list, &table->objects);
+-	table->use++;
++
+ 	return 0;
+ err_obj_ht:
+ 	/* queued in transaction log */
+@@ -6268,6 +6296,9 @@ err_strdup:
+ 	kfree(obj);
+ err_init:
+ 	module_put(type->owner);
++err_type:
++	nft_use_dec_restore(&table->use);
++
+ 	return err;
+ }
+ 
+@@ -6658,7 +6689,7 @@ void nf_tables_deactivate_flowtable(const struct nft_ctx *ctx,
+ 	case NFT_TRANS_PREPARE:
+ 	case NFT_TRANS_ABORT:
+ 	case NFT_TRANS_RELEASE:
+-		flowtable->use--;
++		nft_use_dec(&flowtable->use);
+ 		fallthrough;
+ 	default:
+ 		return;
+@@ -6995,9 +7026,14 @@ static int nf_tables_newflowtable(struct net *net, struct sock *nlsk,
+ 
+ 	nft_ctx_init(&ctx, net, skb, nlh, family, table, NULL, nla);
+ 
++	if (!nft_use_inc(&table->use))
++		return -EMFILE;
++
+ 	flowtable = kzalloc(sizeof(*flowtable), GFP_KERNEL);
+-	if (!flowtable)
+-		return -ENOMEM;
++	if (!flowtable) {
++		err = -ENOMEM;
++		goto flowtable_alloc;
++	}
+ 
+ 	flowtable->table = table;
+ 	flowtable->handle = nf_tables_alloc_handle(table);
+@@ -7052,7 +7088,6 @@ static int nf_tables_newflowtable(struct net *net, struct sock *nlsk,
+ 		goto err5;
+ 
+ 	list_add_tail_rcu(&flowtable->list, &table->flowtables);
+-	table->use++;
+ 
+ 	return 0;
+ err5:
+@@ -7069,6 +7104,9 @@ err2:
+ 	kfree(flowtable->name);
+ err1:
+ 	kfree(flowtable);
++flowtable_alloc:
++	nft_use_dec_restore(&table->use);
++
+ 	return err;
+ }
+ 
+@@ -8254,7 +8292,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 			 */
+ 			if (nft_set_is_anonymous(nft_trans_set(trans)) &&
+ 			    !list_empty(&nft_trans_set(trans)->bindings))
+-				trans->ctx.table->use--;
++				nft_use_dec(&trans->ctx.table->use);
+ 
+ 			nf_tables_set_notify(&trans->ctx, nft_trans_set(trans),
+ 					     NFT_MSG_NEWSET, GFP_KERNEL);
+@@ -8438,7 +8476,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 					nft_trans_destroy(trans);
+ 					break;
+ 				}
+-				trans->ctx.table->use--;
++				nft_use_dec_restore(&trans->ctx.table->use);
+ 				nft_chain_del(trans->ctx.chain);
+ 				nf_tables_unregister_hook(trans->ctx.net,
+ 							  trans->ctx.table,
+@@ -8446,7 +8484,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			}
+ 			break;
+ 		case NFT_MSG_DELCHAIN:
+-			trans->ctx.table->use++;
++			nft_use_inc_restore(&trans->ctx.table->use);
+ 			nft_clear(trans->ctx.net, trans->ctx.chain);
+ 			nft_trans_destroy(trans);
+ 			break;
+@@ -8455,20 +8493,20 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 				nft_trans_destroy(trans);
+ 				break;
+ 			}
+-			trans->ctx.chain->use--;
++			nft_use_dec_restore(&trans->ctx.chain->use);
+ 			list_del_rcu(&nft_trans_rule(trans)->list);
+ 			nft_rule_expr_deactivate(&trans->ctx,
+ 						 nft_trans_rule(trans),
+ 						 NFT_TRANS_ABORT);
+ 			break;
+ 		case NFT_MSG_DELRULE:
+-			trans->ctx.chain->use++;
++			nft_use_inc_restore(&trans->ctx.chain->use);
+ 			nft_clear(trans->ctx.net, nft_trans_rule(trans));
+ 			nft_rule_expr_activate(&trans->ctx, nft_trans_rule(trans));
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_NEWSET:
+-			trans->ctx.table->use--;
++			nft_use_dec_restore(&trans->ctx.table->use);
+ 			if (nft_trans_set_bound(trans)) {
+ 				nft_trans_destroy(trans);
+ 				break;
+@@ -8476,7 +8514,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			list_del_rcu(&nft_trans_set(trans)->list);
+ 			break;
+ 		case NFT_MSG_DELSET:
+-			trans->ctx.table->use++;
++			nft_use_inc_restore(&trans->ctx.table->use);
+ 			nft_clear(trans->ctx.net, nft_trans_set(trans));
+ 			if (nft_trans_set(trans)->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
+ 				nft_map_activate(&trans->ctx, nft_trans_set(trans));
+@@ -8506,12 +8544,12 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 				nft_obj_destroy(&trans->ctx, nft_trans_obj_newobj(trans));
+ 				nft_trans_destroy(trans);
+ 			} else {
+-				trans->ctx.table->use--;
++				nft_use_dec_restore(&trans->ctx.table->use);
+ 				nft_obj_del(nft_trans_obj(trans));
+ 			}
+ 			break;
+ 		case NFT_MSG_DELOBJ:
+-			trans->ctx.table->use++;
++			nft_use_inc_restore(&trans->ctx.table->use);
+ 			nft_clear(trans->ctx.net, nft_trans_obj(trans));
+ 			nft_trans_destroy(trans);
+ 			break;
+@@ -8520,7 +8558,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 				nft_unregister_flowtable_net_hooks(net,
+ 						&nft_trans_flowtable_hooks(trans));
+ 			} else {
+-				trans->ctx.table->use--;
++				nft_use_dec_restore(&trans->ctx.table->use);
+ 				list_del_rcu(&nft_trans_flowtable(trans)->list);
+ 				nft_unregister_flowtable_net_hooks(net,
+ 						&nft_trans_flowtable(trans)->hook_list);
+@@ -8531,7 +8569,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 				list_splice(&nft_trans_flowtable_hooks(trans),
+ 					    &nft_trans_flowtable(trans)->hook_list);
+ 			} else {
+-				trans->ctx.table->use++;
++				nft_use_inc_restore(&trans->ctx.table->use);
+ 				nft_clear(trans->ctx.net, nft_trans_flowtable(trans));
+ 			}
+ 			nft_trans_destroy(trans);
+@@ -8969,8 +9007,9 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 		if (desc->flags & NFT_DATA_DESC_SETELEM &&
+ 		    chain->flags & NFT_CHAIN_BINDING)
+ 			return -EINVAL;
++		if (!nft_use_inc(&chain->use))
++			return -EMFILE;
+ 
+-		chain->use++;
+ 		data->verdict.chain = chain;
+ 		break;
+ 	}
+@@ -8988,7 +9027,7 @@ static void nft_verdict_uninit(const struct nft_data *data)
+ 	case NFT_JUMP:
+ 	case NFT_GOTO:
+ 		chain = data->verdict.chain;
+-		chain->use--;
++		nft_use_dec(&chain->use);
+ 		break;
+ 	}
+ }
+@@ -9157,11 +9196,11 @@ int __nft_release_basechain(struct nft_ctx *ctx)
+ 	nf_tables_unregister_hook(ctx->net, ctx->chain->table, ctx->chain);
+ 	list_for_each_entry_safe(rule, nr, &ctx->chain->rules, list) {
+ 		list_del(&rule->list);
+-		ctx->chain->use--;
++		nft_use_dec(&ctx->chain->use);
+ 		nf_tables_rule_release(ctx, rule);
+ 	}
+ 	nft_chain_del(ctx->chain);
+-	ctx->table->use--;
++	nft_use_dec(&ctx->table->use);
+ 	nf_tables_chain_destroy(ctx);
+ 
+ 	return 0;
+@@ -9201,18 +9240,18 @@ static void __nft_release_table(struct net *net, struct nft_table *table)
+ 		ctx.chain = chain;
+ 		list_for_each_entry_safe(rule, nr, &chain->rules, list) {
+ 			list_del(&rule->list);
+-			chain->use--;
++			nft_use_dec(&chain->use);
+ 			nf_tables_rule_release(&ctx, rule);
+ 		}
+ 	}
+ 	list_for_each_entry_safe(flowtable, nf, &table->flowtables, list) {
+ 		list_del(&flowtable->list);
+-		table->use--;
++		nft_use_dec(&table->use);
+ 		nf_tables_flowtable_destroy(flowtable);
+ 	}
+ 	list_for_each_entry_safe(set, ns, &table->sets, list) {
+ 		list_del(&set->list);
+-		table->use--;
++		nft_use_dec(&table->use);
+ 		if (set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
+ 			nft_map_deactivate(&ctx, set);
+ 
+@@ -9220,13 +9259,13 @@ static void __nft_release_table(struct net *net, struct nft_table *table)
+ 	}
+ 	list_for_each_entry_safe(obj, ne, &table->objects, list) {
+ 		nft_obj_del(obj);
+-		table->use--;
++		nft_use_dec(&table->use);
+ 		nft_obj_destroy(&ctx, obj);
+ 	}
+ 	list_for_each_entry_safe(chain, nc, &table->chains, list) {
+ 		ctx.chain = chain;
+ 		nft_chain_del(chain);
+-		table->use--;
++		nft_use_dec(&table->use);
+ 		nf_tables_chain_destroy(&ctx);
+ 	}
+ 	list_del(&table->list);
+diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
+index 3a6c84fb2c90d..d868eade60176 100644
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -174,8 +174,10 @@ static int nft_flow_offload_init(const struct nft_ctx *ctx,
+ 	if (IS_ERR(flowtable))
+ 		return PTR_ERR(flowtable);
+ 
++	if (!nft_use_inc(&flowtable->use))
++		return -EMFILE;
++
+ 	priv->flowtable = flowtable;
+-	flowtable->use++;
+ 
+ 	return nf_ct_netns_get(ctx->net, ctx->family);
+ }
+@@ -194,7 +196,7 @@ static void nft_flow_offload_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_flow_offload *priv = nft_expr_priv(expr);
+ 
+-	priv->flowtable->use++;
++	nft_use_inc_restore(&priv->flowtable->use);
+ }
+ 
+ static void nft_flow_offload_destroy(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c
+index 6bf1c852e8eaa..7d5b63c5a30af 100644
+--- a/net/netfilter/nft_immediate.c
++++ b/net/netfilter/nft_immediate.c
+@@ -168,7 +168,7 @@ static void nft_immediate_deactivate(const struct nft_ctx *ctx,
+ 				nft_immediate_chain_deactivate(ctx, chain, phase);
+ 				nft_chain_del(chain);
+ 				chain->bound = false;
+-				chain->table->use--;
++				nft_use_dec(&chain->table->use);
+ 				break;
+ 			}
+ 			break;
+@@ -207,7 +207,7 @@ static void nft_immediate_destroy(const struct nft_ctx *ctx,
+ 		 * let the transaction records release this chain and its rules.
+ 		 */
+ 		if (chain->bound) {
+-			chain->use--;
++			nft_use_dec(&chain->use);
+ 			break;
+ 		}
+ 
+@@ -215,9 +215,9 @@ static void nft_immediate_destroy(const struct nft_ctx *ctx,
+ 		chain_ctx = *ctx;
+ 		chain_ctx.chain = chain;
+ 
+-		chain->use--;
++		nft_use_dec(&chain->use);
+ 		list_for_each_entry_safe(rule, n, &chain->rules, list) {
+-			chain->use--;
++			nft_use_dec(&chain->use);
+ 			list_del(&rule->list);
+ 			nf_tables_rule_destroy(&chain_ctx, rule);
+ 		}
+diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c
+index 25157d8cc2504..30d0b0a346192 100644
+--- a/net/netfilter/nft_objref.c
++++ b/net/netfilter/nft_objref.c
+@@ -41,8 +41,10 @@ static int nft_objref_init(const struct nft_ctx *ctx,
+ 	if (IS_ERR(obj))
+ 		return -ENOENT;
+ 
++	if (!nft_use_inc(&obj->use))
++		return -EMFILE;
++
+ 	nft_objref_priv(expr) = obj;
+-	obj->use++;
+ 
+ 	return 0;
+ }
+@@ -71,7 +73,7 @@ static void nft_objref_deactivate(const struct nft_ctx *ctx,
+ 	if (phase == NFT_TRANS_COMMIT)
+ 		return;
+ 
+-	obj->use--;
++	nft_use_dec(&obj->use);
+ }
+ 
+ static void nft_objref_activate(const struct nft_ctx *ctx,
+@@ -79,7 +81,7 @@ static void nft_objref_activate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_object *obj = nft_objref_priv(expr);
+ 
+-	obj->use++;
++	nft_use_inc_restore(&obj->use);
+ }
+ 
+ static struct nft_expr_type nft_objref_type;
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index c7129616dd530..bbdb32acac324 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -366,18 +366,20 @@ static void __packet_set_status(struct packet_sock *po, void *frame, int status)
+ {
+ 	union tpacket_uhdr h;
+ 
++	/* WRITE_ONCE() are paired with READ_ONCE() in __packet_get_status */
++
+ 	h.raw = frame;
+ 	switch (po->tp_version) {
+ 	case TPACKET_V1:
+-		h.h1->tp_status = status;
++		WRITE_ONCE(h.h1->tp_status, status);
+ 		flush_dcache_page(pgv_to_page(&h.h1->tp_status));
+ 		break;
+ 	case TPACKET_V2:
+-		h.h2->tp_status = status;
++		WRITE_ONCE(h.h2->tp_status, status);
+ 		flush_dcache_page(pgv_to_page(&h.h2->tp_status));
+ 		break;
+ 	case TPACKET_V3:
+-		h.h3->tp_status = status;
++		WRITE_ONCE(h.h3->tp_status, status);
+ 		flush_dcache_page(pgv_to_page(&h.h3->tp_status));
+ 		break;
+ 	default:
+@@ -394,17 +396,19 @@ static int __packet_get_status(const struct packet_sock *po, void *frame)
+ 
+ 	smp_rmb();
+ 
++	/* READ_ONCE() are paired with WRITE_ONCE() in __packet_set_status */
++
+ 	h.raw = frame;
+ 	switch (po->tp_version) {
+ 	case TPACKET_V1:
+ 		flush_dcache_page(pgv_to_page(&h.h1->tp_status));
+-		return h.h1->tp_status;
++		return READ_ONCE(h.h1->tp_status);
+ 	case TPACKET_V2:
+ 		flush_dcache_page(pgv_to_page(&h.h2->tp_status));
+-		return h.h2->tp_status;
++		return READ_ONCE(h.h2->tp_status);
+ 	case TPACKET_V3:
+ 		flush_dcache_page(pgv_to_page(&h.h3->tp_status));
+-		return h.h3->tp_status;
++		return READ_ONCE(h.h3->tp_status);
+ 	default:
+ 		WARN(1, "TPACKET version not supported.\n");
+ 		BUG();
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index be42b1196786b..08aaa6efc62c8 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -773,12 +773,10 @@ static void dist_free(struct disttable *d)
+  * signed 16 bit values.
+  */
+ 
+-static int get_dist_table(struct Qdisc *sch, struct disttable **tbl,
+-			  const struct nlattr *attr)
++static int get_dist_table(struct disttable **tbl, const struct nlattr *attr)
+ {
+ 	size_t n = nla_len(attr)/sizeof(__s16);
+ 	const __s16 *data = nla_data(attr);
+-	spinlock_t *root_lock;
+ 	struct disttable *d;
+ 	int i;
+ 
+@@ -793,13 +791,7 @@ static int get_dist_table(struct Qdisc *sch, struct disttable **tbl,
+ 	for (i = 0; i < n; i++)
+ 		d->table[i] = data[i];
+ 
+-	root_lock = qdisc_root_sleeping_lock(sch);
+-
+-	spin_lock_bh(root_lock);
+-	swap(*tbl, d);
+-	spin_unlock_bh(root_lock);
+-
+-	dist_free(d);
++	*tbl = d;
+ 	return 0;
+ }
+ 
+@@ -956,6 +948,8 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ {
+ 	struct netem_sched_data *q = qdisc_priv(sch);
+ 	struct nlattr *tb[TCA_NETEM_MAX + 1];
++	struct disttable *delay_dist = NULL;
++	struct disttable *slot_dist = NULL;
+ 	struct tc_netem_qopt *qopt;
+ 	struct clgstate old_clg;
+ 	int old_loss_model = CLG_RANDOM;
+@@ -969,6 +963,18 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	if (tb[TCA_NETEM_DELAY_DIST]) {
++		ret = get_dist_table(&delay_dist, tb[TCA_NETEM_DELAY_DIST]);
++		if (ret)
++			goto table_free;
++	}
++
++	if (tb[TCA_NETEM_SLOT_DIST]) {
++		ret = get_dist_table(&slot_dist, tb[TCA_NETEM_SLOT_DIST]);
++		if (ret)
++			goto table_free;
++	}
++
+ 	sch_tree_lock(sch);
+ 	/* backup q->clg and q->loss_model */
+ 	old_clg = q->clg;
+@@ -978,26 +984,17 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 		ret = get_loss_clg(q, tb[TCA_NETEM_LOSS]);
+ 		if (ret) {
+ 			q->loss_model = old_loss_model;
++			q->clg = old_clg;
+ 			goto unlock;
+ 		}
+ 	} else {
+ 		q->loss_model = CLG_RANDOM;
+ 	}
+ 
+-	if (tb[TCA_NETEM_DELAY_DIST]) {
+-		ret = get_dist_table(sch, &q->delay_dist,
+-				     tb[TCA_NETEM_DELAY_DIST]);
+-		if (ret)
+-			goto get_table_failure;
+-	}
+-
+-	if (tb[TCA_NETEM_SLOT_DIST]) {
+-		ret = get_dist_table(sch, &q->slot_dist,
+-				     tb[TCA_NETEM_SLOT_DIST]);
+-		if (ret)
+-			goto get_table_failure;
+-	}
+-
++	if (delay_dist)
++		swap(q->delay_dist, delay_dist);
++	if (slot_dist)
++		swap(q->slot_dist, slot_dist);
+ 	sch->limit = qopt->limit;
+ 
+ 	q->latency = PSCHED_TICKS2NS(qopt->latency);
+@@ -1047,17 +1044,11 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 
+ unlock:
+ 	sch_tree_unlock(sch);
+-	return ret;
+ 
+-get_table_failure:
+-	/* recover clg and loss_model, in case of
+-	 * q->clg and q->loss_model were modified
+-	 * in get_loss_clg()
+-	 */
+-	q->clg = old_clg;
+-	q->loss_model = old_loss_model;
+-
+-	goto unlock;
++table_free:
++	dist_free(delay_dist);
++	dist_free(slot_dist);
++	return ret;
+ }
+ 
+ static int netem_init(struct Qdisc *sch, struct nlattr *opt,
+diff --git a/tools/testing/radix-tree/regression1.c b/tools/testing/radix-tree/regression1.c
+index a61c7bcbc72da..63f468bf8245c 100644
+--- a/tools/testing/radix-tree/regression1.c
++++ b/tools/testing/radix-tree/regression1.c
+@@ -177,7 +177,7 @@ void regression1_test(void)
+ 	nr_threads = 2;
+ 	pthread_barrier_init(&worker_barrier, NULL, nr_threads);
+ 
+-	threads = malloc(nr_threads * sizeof(pthread_t *));
++	threads = malloc(nr_threads * sizeof(*threads));
+ 
+ 	for (i = 0; i < nr_threads; i++) {
+ 		arg = i;
+diff --git a/tools/testing/selftests/bpf/prog_tests/align.c b/tools/testing/selftests/bpf/prog_tests/align.c
+index 5861446d07770..7996ec07e0bdb 100644
+--- a/tools/testing/selftests/bpf/prog_tests/align.c
++++ b/tools/testing/selftests/bpf/prog_tests/align.c
+@@ -2,7 +2,7 @@
+ #include <test_progs.h>
+ 
+ #define MAX_INSNS	512
+-#define MAX_MATCHES	16
++#define MAX_MATCHES	24
+ 
+ struct bpf_reg_match {
+ 	unsigned int line;
+@@ -267,6 +267,7 @@ static struct bpf_align_test tests[] = {
+ 			 */
+ 			BPF_MOV64_REG(BPF_REG_5, BPF_REG_2),
+ 			BPF_ALU64_REG(BPF_ADD, BPF_REG_5, BPF_REG_6),
++			BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
+ 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 14),
+ 			BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
+ 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
+@@ -280,6 +281,7 @@ static struct bpf_align_test tests[] = {
+ 			BPF_MOV64_REG(BPF_REG_5, BPF_REG_2),
+ 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 14),
+ 			BPF_ALU64_REG(BPF_ADD, BPF_REG_5, BPF_REG_6),
++			BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
+ 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 4),
+ 			BPF_ALU64_REG(BPF_ADD, BPF_REG_5, BPF_REG_6),
+ 			BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
+@@ -311,44 +313,52 @@ static struct bpf_align_test tests[] = {
+ 			{15, "R4=pkt(id=1,off=18,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ 			{15, "R5=pkt(id=1,off=14,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ 			/* Variable offset is added to R5 packet pointer,
+-			 * resulting in auxiliary alignment of 4.
++			 * resulting in auxiliary alignment of 4. To avoid BPF
++			 * verifier's precision backtracking logging
++			 * interfering we also have a no-op R4 = R5
++			 * instruction to validate R5 state. We also check
++			 * that R4 is what it should be in such case.
+ 			 */
+-			{18, "R5_w=pkt(id=2,off=0,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
++			{19, "R4_w=pkt(id=2,off=0,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
++			{19, "R5_w=pkt(id=2,off=0,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ 			/* Constant offset is added to R5, resulting in
+ 			 * reg->off of 14.
+ 			 */
+-			{19, "R5_w=pkt(id=2,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
++			{20, "R5_w=pkt(id=2,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ 			/* At the time the word size load is performed from R5,
+ 			 * its total fixed offset is NET_IP_ALIGN + reg->off
+ 			 * (14) which is 16.  Then the variable offset is 4-byte
+ 			 * aligned, so the total offset is 4-byte aligned and
+ 			 * meets the load's requirements.
+ 			 */
+-			{23, "R4=pkt(id=2,off=18,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
+-			{23, "R5=pkt(id=2,off=14,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
++			{24, "R4=pkt(id=2,off=18,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
++			{24, "R5=pkt(id=2,off=14,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ 			/* Constant offset is added to R5 packet pointer,
+ 			 * resulting in reg->off value of 14.
+ 			 */
+-			{26, "R5_w=pkt(id=0,off=14,r=8"},
++			{27, "R5_w=pkt(id=0,off=14,r=8"},
+ 			/* Variable offset is added to R5, resulting in a
+-			 * variable offset of (4n).
++			 * variable offset of (4n). See comment for insn #19
++			 * for R4 = R5 trick.
+ 			 */
+-			{27, "R5_w=pkt(id=3,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
++			{29, "R4_w=pkt(id=3,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
++			{29, "R5_w=pkt(id=3,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ 			/* Constant is added to R5 again, setting reg->off to 18. */
+-			{28, "R5_w=pkt(id=3,off=18,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
++			{30, "R5_w=pkt(id=3,off=18,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ 			/* And once more we add a variable; resulting var_off
+ 			 * is still (4n), fixed offset is not changed.
+ 			 * Also, we create a new reg->id.
+ 			 */
+-			{29, "R5_w=pkt(id=4,off=18,r=0,umax_value=2040,var_off=(0x0; 0x7fc)"},
++			{32, "R4_w=pkt(id=4,off=18,r=0,umax_value=2040,var_off=(0x0; 0x7fc)"},
++			{32, "R5_w=pkt(id=4,off=18,r=0,umax_value=2040,var_off=(0x0; 0x7fc)"},
+ 			/* At the time the word size load is performed from R5,
+ 			 * its total fixed offset is NET_IP_ALIGN + reg->off (18)
+ 			 * which is 20.  Then the variable offset is (4n), so
+ 			 * the total offset is 4-byte aligned and meets the
+ 			 * load's requirements.
+ 			 */
+-			{33, "R4=pkt(id=4,off=22,r=22,umax_value=2040,var_off=(0x0; 0x7fc)"},
+-			{33, "R5=pkt(id=4,off=18,r=22,umax_value=2040,var_off=(0x0; 0x7fc)"},
++			{35, "R4=pkt(id=4,off=22,r=22,umax_value=2040,var_off=(0x0; 0x7fc)"},
++			{35, "R5=pkt(id=4,off=18,r=22,umax_value=2040,var_off=(0x0; 0x7fc)"},
+ 		},
+ 	},
+ 	{
+diff --git a/tools/testing/selftests/bpf/prog_tests/sk_assign.c b/tools/testing/selftests/bpf/prog_tests/sk_assign.c
+index 3a469099f30d8..e09c5239a5951 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sk_assign.c
++++ b/tools/testing/selftests/bpf/prog_tests/sk_assign.c
+@@ -29,7 +29,23 @@ static int stop, duration;
+ static bool
+ configure_stack(void)
+ {
++	char tc_version[128];
+ 	char tc_cmd[BUFSIZ];
++	char *prog;
++	FILE *tc;
++
++	/* Check whether tc is built with libbpf. */
++	tc = popen("tc -V", "r");
++	if (CHECK_FAIL(!tc))
++		return false;
++	if (CHECK_FAIL(!fgets(tc_version, sizeof(tc_version), tc)))
++		return false;
++	if (strstr(tc_version, ", libbpf "))
++		prog = "test_sk_assign_libbpf.o";
++	else
++		prog = "test_sk_assign.o";
++	if (CHECK_FAIL(pclose(tc)))
++		return false;
+ 
+ 	/* Move to a new networking namespace */
+ 	if (CHECK_FAIL(unshare(CLONE_NEWNET)))
+@@ -46,8 +62,8 @@ configure_stack(void)
+ 	/* Load qdisc, BPF program */
+ 	if (CHECK_FAIL(system("tc qdisc add dev lo clsact")))
+ 		return false;
+-	sprintf(tc_cmd, "%s %s %s %s", "tc filter add dev lo ingress bpf",
+-		       "direct-action object-file ./test_sk_assign.o",
++	sprintf(tc_cmd, "%s %s %s %s %s", "tc filter add dev lo ingress bpf",
++		       "direct-action object-file", prog,
+ 		       "section classifier/sk_assign_test",
+ 		       (env.verbosity < VERBOSE_VERY) ? " 2>/dev/null" : "verbose");
+ 	if (CHECK(system(tc_cmd), "BPF load failed;",
+@@ -129,15 +145,12 @@ get_port(int fd)
+ static ssize_t
+ rcv_msg(int srv_client, int type)
+ {
+-	struct sockaddr_storage ss;
+ 	char buf[BUFSIZ];
+-	socklen_t slen;
+ 
+ 	if (type == SOCK_STREAM)
+ 		return read(srv_client, &buf, sizeof(buf));
+ 	else
+-		return recvfrom(srv_client, &buf, sizeof(buf), 0,
+-				(struct sockaddr *)&ss, &slen);
++		return recvfrom(srv_client, &buf, sizeof(buf), 0, NULL, NULL);
+ }
+ 
+ static int
+diff --git a/tools/testing/selftests/bpf/progs/connect4_prog.c b/tools/testing/selftests/bpf/progs/connect4_prog.c
+index a943d394fd3a0..38ab1ce32e57c 100644
+--- a/tools/testing/selftests/bpf/progs/connect4_prog.c
++++ b/tools/testing/selftests/bpf/progs/connect4_prog.c
+@@ -33,7 +33,7 @@
+ 
+ int _version SEC("version") = 1;
+ 
+-__attribute__ ((noinline))
++__attribute__ ((noinline)) __weak
+ int do_bind(struct bpf_sock_addr *ctx)
+ {
+ 	struct sockaddr_in sa = {};
+diff --git a/tools/testing/selftests/bpf/progs/test_sk_assign.c b/tools/testing/selftests/bpf/progs/test_sk_assign.c
+index 1ecd987005d2c..77fd42f835fcf 100644
+--- a/tools/testing/selftests/bpf/progs/test_sk_assign.c
++++ b/tools/testing/selftests/bpf/progs/test_sk_assign.c
+@@ -16,6 +16,16 @@
+ #include <bpf/bpf_helpers.h>
+ #include <bpf/bpf_endian.h>
+ 
++#if defined(IPROUTE2_HAVE_LIBBPF)
++/* Use a new-style map definition. */
++struct {
++	__uint(type, BPF_MAP_TYPE_SOCKMAP);
++	__type(key, int);
++	__type(value, __u64);
++	__uint(pinning, LIBBPF_PIN_BY_NAME);
++	__uint(max_entries, 1);
++} server_map SEC(".maps");
++#else
+ /* Pin map under /sys/fs/bpf/tc/globals/<map name> */
+ #define PIN_GLOBAL_NS 2
+ 
+@@ -35,6 +45,7 @@ struct {
+ 	.max_elem = 1,
+ 	.pinning = PIN_GLOBAL_NS,
+ };
++#endif
+ 
+ int _version SEC("version") = 1;
+ char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/bpf/progs/test_sk_assign_libbpf.c b/tools/testing/selftests/bpf/progs/test_sk_assign_libbpf.c
+new file mode 100644
+index 0000000000000..dcf46adfda041
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/test_sk_assign_libbpf.c
+@@ -0,0 +1,3 @@
++// SPDX-License-Identifier: GPL-2.0
++#define IPROUTE2_HAVE_LIBBPF
++#include "test_sk_assign.c"
+diff --git a/tools/testing/selftests/net/forwarding/ethtool.sh b/tools/testing/selftests/net/forwarding/ethtool.sh
+index dbb9fcf759e0f..aa2eafb7b2437 100755
+--- a/tools/testing/selftests/net/forwarding/ethtool.sh
++++ b/tools/testing/selftests/net/forwarding/ethtool.sh
+@@ -286,6 +286,8 @@ different_speeds_autoneg_on()
+ 	ethtool -s $h1 autoneg on
+ }
+ 
++skip_on_veth
++
+ trap cleanup EXIT
+ 
+ setup_prepare
+diff --git a/tools/testing/selftests/net/forwarding/ethtool_extended_state.sh b/tools/testing/selftests/net/forwarding/ethtool_extended_state.sh
+index 4b42dfd4efd1a..baf831da5366c 100755
+--- a/tools/testing/selftests/net/forwarding/ethtool_extended_state.sh
++++ b/tools/testing/selftests/net/forwarding/ethtool_extended_state.sh
+@@ -95,6 +95,8 @@ no_cable()
+ 	ip link set dev $swp3 down
+ }
+ 
++skip_on_veth
++
+ setup_prepare
+ 
+ tests_run
+diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh
+index 9605e158a0bfc..dfb41db7fbe48 100644
+--- a/tools/testing/selftests/net/forwarding/lib.sh
++++ b/tools/testing/selftests/net/forwarding/lib.sh
+@@ -69,6 +69,17 @@ check_tc_action_hw_stats_support()
+ 	fi
+ }
+ 
++skip_on_veth()
++{
++	local kind=$(ip -j -d link show dev ${NETIFS[p1]} |
++		jq -r '.[].linkinfo.info_kind')
++
++	if [[ $kind == veth ]]; then
++		echo "SKIP: Test cannot be run with veth pairs"
++		exit $ksft_skip
++	fi
++}
++
+ if [[ "$(id -u)" -ne 0 ]]; then
+ 	echo "SKIP: need root privileges"
+ 	exit 0
+@@ -121,6 +132,11 @@ create_netif_veth()
+ 	for ((i = 1; i <= NUM_NETIFS; ++i)); do
+ 		local j=$((i+1))
+ 
++		if [ -z ${NETIFS[p$i]} ]; then
++			echo "SKIP: Cannot create interface. Name not specified"
++			exit $ksft_skip
++		fi
++
+ 		ip link show dev ${NETIFS[p$i]} &> /dev/null
+ 		if [[ $? -ne 0 ]]; then
+ 			ip link add ${NETIFS[p$i]} type veth \
+diff --git a/tools/testing/selftests/net/forwarding/settings b/tools/testing/selftests/net/forwarding/settings
+new file mode 100644
+index 0000000000000..e7b9417537fbc
+--- /dev/null
++++ b/tools/testing/selftests/net/forwarding/settings
+@@ -0,0 +1 @@
++timeout=0
+diff --git a/tools/testing/selftests/net/forwarding/tc_flower.sh b/tools/testing/selftests/net/forwarding/tc_flower.sh
+index b11d8e6b5bc14..b7cdf75efb5f9 100755
+--- a/tools/testing/selftests/net/forwarding/tc_flower.sh
++++ b/tools/testing/selftests/net/forwarding/tc_flower.sh
+@@ -49,8 +49,8 @@ match_dst_mac_test()
+ 	tc_check_packets "dev $h2 ingress" 101 1
+ 	check_fail $? "Matched on a wrong filter"
+ 
+-	tc_check_packets "dev $h2 ingress" 102 1
+-	check_err $? "Did not match on correct filter"
++	tc_check_packets "dev $h2 ingress" 102 0
++	check_fail $? "Did not match on correct filter"
+ 
+ 	tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower
+ 	tc filter del dev $h2 ingress protocol ip pref 2 handle 102 flower
+@@ -75,8 +75,8 @@ match_src_mac_test()
+ 	tc_check_packets "dev $h2 ingress" 101 1
+ 	check_fail $? "Matched on a wrong filter"
+ 
+-	tc_check_packets "dev $h2 ingress" 102 1
+-	check_err $? "Did not match on correct filter"
++	tc_check_packets "dev $h2 ingress" 102 0
++	check_fail $? "Did not match on correct filter"
+ 
+ 	tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower
+ 	tc filter del dev $h2 ingress protocol ip pref 2 handle 102 flower
+diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile
+index 215e1067f0376..82ceca6aab965 100644
+--- a/tools/testing/selftests/rseq/Makefile
++++ b/tools/testing/selftests/rseq/Makefile
+@@ -4,8 +4,10 @@ ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
+ CLANG_FLAGS += -no-integrated-as
+ endif
+ 
++top_srcdir = ../../../..
++
+ CFLAGS += -O2 -Wall -g -I./ -I../../../../usr/include/ -L$(OUTPUT) -Wl,-rpath=./ \
+-	  $(CLANG_FLAGS)
++	  $(CLANG_FLAGS) -I$(top_srcdir)/tools/include
+ LDLIBS += -lpthread -ldl
+ 
+ # Own dependencies because we only want to build against 1st prerequisite, but
+diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c
+index b736a5169aad0..e20191fb40d49 100644
+--- a/tools/testing/selftests/rseq/rseq.c
++++ b/tools/testing/selftests/rseq/rseq.c
+@@ -29,6 +29,8 @@
+ #include <dlfcn.h>
+ #include <stddef.h>
+ 
++#include <linux/compiler.h>
++
+ #include "../kselftest.h"
+ #include "rseq.h"
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-08-26 15:21 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-08-26 15:21 UTC (permalink / raw
  To: gentoo-commits

commit:     3d8e1ad2eff11756dfc3dc7d50c4c6858d91780a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 26 15:20:49 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Aug 26 15:20:49 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3d8e1ad2

Linux patch 5.10.192

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1191_linux-5.10.192.patch | 16960 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 16964 insertions(+)

diff --git a/0000_README b/0000_README
index b923fd86..d0216fc5 100644
--- a/0000_README
+++ b/0000_README
@@ -807,6 +807,10 @@ Patch:  1190_linux-5.10.191.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.191
 
+Patch:  1191_linux-5.10.192.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.192
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1191_linux-5.10.192.patch b/1191_linux-5.10.192.patch
new file mode 100644
index 00000000..ad54f374
--- /dev/null
+++ b/1191_linux-5.10.192.patch
@@ -0,0 +1,16960 @@
+diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst
+index 2f923c805802f..f79cb11b080f6 100644
+--- a/Documentation/admin-guide/hw-vuln/srso.rst
++++ b/Documentation/admin-guide/hw-vuln/srso.rst
+@@ -124,8 +124,8 @@ sequence.
+ To ensure the safety of this mitigation, the kernel must ensure that the
+ safe return sequence is itself free from attacker interference.  In Zen3
+ and Zen4, this is accomplished by creating a BTB alias between the
+-untraining function srso_untrain_ret_alias() and the safe return
+-function srso_safe_ret_alias() which results in evicting a potentially
++untraining function srso_alias_untrain_ret() and the safe return
++function srso_alias_safe_ret() which results in evicting a potentially
+ poisoned BTB entry and using that safe one for all function returns.
+ 
+ In older Zen1 and Zen2, this is accomplished using a reinterpretation
+diff --git a/Documentation/devicetree/bindings/iio/addac/adi,ad74413r.yaml b/Documentation/devicetree/bindings/iio/addac/adi,ad74413r.yaml
+new file mode 100644
+index 0000000000000..baa65a521bad5
+--- /dev/null
++++ b/Documentation/devicetree/bindings/iio/addac/adi,ad74413r.yaml
+@@ -0,0 +1,158 @@
++# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
++%YAML 1.2
++---
++$id: http://devicetree.org/schemas/iio/addac/adi,ad74413r.yaml#
++$schema: http://devicetree.org/meta-schemas/core.yaml#
++
++title: Analog Devices AD74412R/AD74413R device
++
++maintainers:
++  - Cosmin Tanislav <cosmin.tanislav@analog.com>
++
++description: |
++  The AD74412R and AD74413R are quad-channel software configurable input/output
++  solutions for building and process control applications. They contain
++  functionality for analog output, analog input, digital input, resistance
++  temperature detector, and thermocouple measurements integrated
++  into a single chip solution with an SPI interface.
++  The devices feature a 16-bit ADC and four configurable 13-bit DACs to provide
++  four configurable input/output channels and a suite of diagnostic functions.
++  The AD74413R differentiates itself from the AD74412R by being HART-compatible.
++    https://www.analog.com/en/products/ad74412r.html
++    https://www.analog.com/en/products/ad74413r.html
++
++properties:
++  compatible:
++    enum:
++      - adi,ad74412r
++      - adi,ad74413r
++
++  reg:
++    maxItems: 1
++
++  '#address-cells':
++    const: 1
++
++  '#size-cells':
++    const: 0
++
++  spi-max-frequency:
++    maximum: 1000000
++
++  spi-cpol: true
++
++  interrupts:
++    maxItems: 1
++
++  refin-supply: true
++
++  shunt-resistor-micro-ohms:
++    description:
++      Shunt (sense) resistor value in micro-Ohms.
++    default: 100000000
++
++required:
++  - compatible
++  - reg
++  - spi-max-frequency
++  - spi-cpol
++  - refin-supply
++
++additionalProperties: false
++
++patternProperties:
++  "^channel@[0-3]$":
++    type: object
++    description: Represents the external channels which are connected to the device.
++
++    properties:
++      reg:
++        description: |
++          The channel number. It can have up to 4 channels numbered from 0 to 3.
++        minimum: 0
++        maximum: 3
++
++      adi,ch-func:
++        $ref: /schemas/types.yaml#/definitions/uint32
++        description: |
++          Channel function.
++          HART functions are not supported on AD74412R.
++          0 - CH_FUNC_HIGH_IMPEDANCE
++          1 - CH_FUNC_VOLTAGE_OUTPUT
++          2 - CH_FUNC_CURRENT_OUTPUT
++          3 - CH_FUNC_VOLTAGE_INPUT
++          4 - CH_FUNC_CURRENT_INPUT_EXT_POWER
++          5 - CH_FUNC_CURRENT_INPUT_LOOP_POWER
++          6 - CH_FUNC_RESISTANCE_INPUT
++          7 - CH_FUNC_DIGITAL_INPUT_LOGIC
++          8 - CH_FUNC_DIGITAL_INPUT_LOOP_POWER
++          9 - CH_FUNC_CURRENT_INPUT_EXT_POWER_HART
++          10 - CH_FUNC_CURRENT_INPUT_LOOP_POWER_HART
++        minimum: 0
++        maximum: 10
++        default: 0
++
++      adi,gpo-comparator:
++        type: boolean
++        description: |
++          Whether to configure GPO as a comparator or not.
++          When not configured as a comparator, the GPO will be treated as an
++          output-only GPIO.
++
++    required:
++      - reg
++
++examples:
++  - |
++    #include <dt-bindings/gpio/gpio.h>
++    #include <dt-bindings/interrupt-controller/irq.h>
++    #include <dt-bindings/iio/addac/adi,ad74413r.h>
++
++    spi {
++      #address-cells = <1>;
++      #size-cells = <0>;
++
++      cs-gpios = <&gpio 17 GPIO_ACTIVE_LOW>;
++      status = "okay";
++
++      ad74413r@0 {
++        compatible = "adi,ad74413r";
++        reg = <0>;
++        spi-max-frequency = <1000000>;
++        spi-cpol;
++
++        #address-cells = <1>;
++        #size-cells = <0>;
++
++        interrupt-parent = <&gpio>;
++        interrupts = <26 IRQ_TYPE_EDGE_FALLING>;
++
++        refin-supply = <&ad74413r_refin>;
++
++        channel@0 {
++          reg = <0>;
++
++          adi,ch-func = <CH_FUNC_VOLTAGE_OUTPUT>;
++        };
++
++        channel@1 {
++          reg = <1>;
++
++          adi,ch-func = <CH_FUNC_CURRENT_OUTPUT>;
++        };
++
++        channel@2 {
++          reg = <2>;
++
++          adi,ch-func = <CH_FUNC_DIGITAL_INPUT_LOGIC>;
++          adi,gpo-comparator;
++        };
++
++        channel@3 {
++          reg = <3>;
++
++          adi,ch-func = <CH_FUNC_CURRENT_INPUT_EXT_POWER>;
++        };
++      };
++    };
++...
+diff --git a/Makefile b/Makefile
+index ecf9ab05e13a2..316598ce1b126 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 191
++SUBLEVEL = 192
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx6dl-prtrvt.dts b/arch/arm/boot/dts/imx6dl-prtrvt.dts
+index 5ac84445e9cc1..90e01de8c2c15 100644
+--- a/arch/arm/boot/dts/imx6dl-prtrvt.dts
++++ b/arch/arm/boot/dts/imx6dl-prtrvt.dts
+@@ -126,6 +126,10 @@
+ 	status = "disabled";
+ };
+ 
++&usbotg {
++	disable-over-current;
++};
++
+ &vpu {
+ 	status = "disabled";
+ };
+diff --git a/arch/arm/boot/dts/imx6qdl-prti6q.dtsi b/arch/arm/boot/dts/imx6qdl-prti6q.dtsi
+index 19578f660b092..70dfa07a16981 100644
+--- a/arch/arm/boot/dts/imx6qdl-prti6q.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-prti6q.dtsi
+@@ -69,6 +69,7 @@
+ 	vbus-supply = <&reg_usb_h1_vbus>;
+ 	phy_type = "utmi";
+ 	dr_mode = "host";
++	disable-over-current;
+ 	status = "okay";
+ };
+ 
+@@ -78,10 +79,18 @@
+ 	pinctrl-0 = <&pinctrl_usbotg>;
+ 	phy_type = "utmi";
+ 	dr_mode = "host";
+-	disable-over-current;
++	over-current-active-low;
+ 	status = "okay";
+ };
+ 
++&usbphynop1 {
++	status = "disabled";
++};
++
++&usbphynop2 {
++	status = "disabled";
++};
++
+ &usdhc1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_usdhc1>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
+index 64df643391194..2f52b91b72152 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
+@@ -31,32 +31,56 @@
+ 		reset-gpios = <&gpio0 RK_PB2 GPIO_ACTIVE_LOW>;
+ 	};
+ 
+-	vcc12v_dcin: dc-12v {
++	sound {
++		compatible = "audio-graph-card";
++		label = "Analog";
++		dais = <&i2s0_p0>;
++	};
++
++	sound-dit {
++		compatible = "audio-graph-card";
++		label = "SPDIF";
++		dais = <&spdif_p0>;
++	};
++
++	spdif-dit {
++		compatible = "linux,spdif-dit";
++		#sound-dai-cells = <0>;
++
++		port {
++			dit_p0_0: endpoint {
++				remote-endpoint = <&spdif_p0_0>;
++			};
++		};
++	};
++
++	vbus_typec: vbus-typec-regulator {
+ 		compatible = "regulator-fixed";
+-		regulator-name = "vcc12v_dcin";
++		enable-active-high;
++		gpio = <&gpio1 RK_PA3 GPIO_ACTIVE_HIGH>;
++		pinctrl-names = "default";
++		pinctrl-0 = <&vcc5v0_typec_en>;
++		regulator-name = "vbus_typec";
+ 		regulator-always-on;
+-		regulator-boot-on;
+-		regulator-min-microvolt = <12000000>;
+-		regulator-max-microvolt = <12000000>;
++		vin-supply = <&vcc5v0_sys>;
+ 	};
+ 
+-	vcc5v0_sys: vcc-sys {
++	vcc12v_dcin: dc-12v {
+ 		compatible = "regulator-fixed";
+-		regulator-name = "vcc5v0_sys";
++		regulator-name = "vcc12v_dcin";
+ 		regulator-always-on;
+ 		regulator-boot-on;
+-		regulator-min-microvolt = <5000000>;
+-		regulator-max-microvolt = <5000000>;
+-		vin-supply = <&vcc12v_dcin>;
++		regulator-min-microvolt = <12000000>;
++		regulator-max-microvolt = <12000000>;
+ 	};
+ 
+-	vcc_0v9: vcc-0v9 {
++	vcc3v3_lan: vcc3v3-lan-regulator {
+ 		compatible = "regulator-fixed";
+-		regulator-name = "vcc_0v9";
++		regulator-name = "vcc3v3_lan";
+ 		regulator-always-on;
+ 		regulator-boot-on;
+-		regulator-min-microvolt = <900000>;
+-		regulator-max-microvolt = <900000>;
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
+ 		vin-supply = <&vcc3v3_sys>;
+ 	};
+ 
+@@ -93,28 +117,24 @@
+ 		vin-supply = <&vcc5v0_sys>;
+ 	};
+ 
+-	vcc5v0_typec: vcc5v0-typec-regulator {
++	vcc5v0_sys: vcc-sys {
+ 		compatible = "regulator-fixed";
+-		enable-active-high;
+-		gpio = <&gpio1 RK_PA3 GPIO_ACTIVE_HIGH>;
+-		pinctrl-names = "default";
+-		pinctrl-0 = <&vcc5v0_typec_en>;
+-		regulator-name = "vcc5v0_typec";
++		regulator-name = "vcc5v0_sys";
+ 		regulator-always-on;
+-		vin-supply = <&vcc5v0_sys>;
++		regulator-boot-on;
++		regulator-min-microvolt = <5000000>;
++		regulator-max-microvolt = <5000000>;
++		vin-supply = <&vcc12v_dcin>;
+ 	};
+ 
+-	vcc_lan: vcc3v3-phy-regulator {
++	vcc_0v9: vcc-0v9 {
+ 		compatible = "regulator-fixed";
+-		regulator-name = "vcc_lan";
++		regulator-name = "vcc_0v9";
+ 		regulator-always-on;
+ 		regulator-boot-on;
+-		regulator-min-microvolt = <3300000>;
+-		regulator-max-microvolt = <3300000>;
+-
+-		regulator-state-mem {
+-			regulator-off-in-suspend;
+-		};
++		regulator-min-microvolt = <900000>;
++		regulator-max-microvolt = <900000>;
++		vin-supply = <&vcc3v3_sys>;
+ 	};
+ 
+ 	vdd_log: vdd-log {
+@@ -161,7 +181,7 @@
+ 	assigned-clocks = <&cru SCLK_RMII_SRC>;
+ 	assigned-clock-parents = <&clkin_gmac>;
+ 	clock_in_out = "input";
+-	phy-supply = <&vcc_lan>;
++	phy-supply = <&vcc3v3_lan>;
+ 	phy-mode = "rgmii";
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&rgmii_pins>;
+@@ -266,8 +286,8 @@
+ 				};
+ 			};
+ 
+-			vcc1v8_codec: LDO_REG1 {
+-				regulator-name = "vcc1v8_codec";
++			vcca1v8_codec: LDO_REG1 {
++				regulator-name = "vcca1v8_codec";
+ 				regulator-always-on;
+ 				regulator-boot-on;
+ 				regulator-min-microvolt = <1800000>;
+@@ -277,8 +297,8 @@
+ 				};
+ 			};
+ 
+-			vcc1v8_hdmi: LDO_REG2 {
+-				regulator-name = "vcc1v8_hdmi";
++			vcca1v8_hdmi: LDO_REG2 {
++				regulator-name = "vcca1v8_hdmi";
+ 				regulator-always-on;
+ 				regulator-boot-on;
+ 				regulator-min-microvolt = <1800000>;
+@@ -335,8 +355,8 @@
+ 				};
+ 			};
+ 
+-			vcc0v9_hdmi: LDO_REG7 {
+-				regulator-name = "vcc0v9_hdmi";
++			vcca0v9_hdmi: LDO_REG7 {
++				regulator-name = "vcca0v9_hdmi";
+ 				regulator-always-on;
+ 				regulator-boot-on;
+ 				regulator-min-microvolt = <900000>;
+@@ -362,8 +382,6 @@
+ 				regulator-name = "vcc_cam";
+ 				regulator-always-on;
+ 				regulator-boot-on;
+-				regulator-min-microvolt = <3300000>;
+-				regulator-max-microvolt = <3300000>;
+ 				regulator-state-mem {
+ 					regulator-off-in-suspend;
+ 				};
+@@ -373,8 +391,6 @@
+ 				regulator-name = "vcc_mipi";
+ 				regulator-always-on;
+ 				regulator-boot-on;
+-				regulator-min-microvolt = <3300000>;
+-				regulator-max-microvolt = <3300000>;
+ 				regulator-state-mem {
+ 					regulator-off-in-suspend;
+ 				};
+@@ -425,6 +441,20 @@
+ 	i2c-scl-rising-time-ns = <300>;
+ 	i2c-scl-falling-time-ns = <15>;
+ 	status = "okay";
++
++	es8316: codec@11 {
++		compatible = "everest,es8316";
++		reg = <0x11>;
++		clocks = <&cru SCLK_I2S_8CH_OUT>;
++		clock-names = "mclk";
++		#sound-dai-cells = <0>;
++
++		port {
++			es8316_p0_0: endpoint {
++				remote-endpoint = <&i2s0_p0_0>;
++			};
++		};
++	};
+ };
+ 
+ &i2c3 {
+@@ -443,6 +473,14 @@
+ 	rockchip,playback-channels = <8>;
+ 	rockchip,capture-channels = <8>;
+ 	status = "okay";
++
++	i2s0_p0: port {
++		i2s0_p0_0: endpoint {
++			dai-format = "i2s";
++			mclk-fs = <256>;
++			remote-endpoint = <&es8316_p0_0>;
++		};
++	};
+ };
+ 
+ &i2s1 {
+@@ -455,21 +493,10 @@
+ };
+ 
+ &io_domains {
+-	status = "okay";
+-
++	audio-supply = <&vcca1v8_codec>;
+ 	bt656-supply = <&vcc_3v0>;
+-	audio-supply = <&vcc1v8_codec>;
+-	sdmmc-supply = <&vcc_sdio>;
+ 	gpio1830-supply = <&vcc_3v0>;
+-};
+-
+-&pmu_io_domains {
+-	status = "okay";
+-
+-	pmu1830-supply = <&vcc_3v0>;
+-};
+-
+-&pcie_phy {
++	sdmmc-supply = <&vcc_sdio>;
+ 	status = "okay";
+ };
+ 
+@@ -485,6 +512,10 @@
+ 	status = "okay";
+ };
+ 
++&pcie_phy {
++	status = "okay";
++};
++
+ &pinctrl {
+ 	bt {
+ 		bt_enable_h: bt-enable-h {
+@@ -506,6 +537,20 @@
+ 		};
+ 	};
+ 
++	pmic {
++		pmic_int_l: pmic-int-l {
++			rockchip,pins = <1 RK_PC5 RK_FUNC_GPIO &pcfg_pull_up>;
++		};
++
++		vsel1_pin: vsel1-pin {
++			rockchip,pins = <1 RK_PC1 RK_FUNC_GPIO &pcfg_pull_down>;
++		};
++
++		vsel2_pin: vsel2-pin {
++			rockchip,pins = <1 RK_PB6 RK_FUNC_GPIO &pcfg_pull_down>;
++		};
++	};
++
+ 	sdio0 {
+ 		sdio0_bus4: sdio0-bus4 {
+ 			rockchip,pins = <2 RK_PC4 1 &pcfg_pull_up_20ma>,
+@@ -523,20 +568,6 @@
+ 		};
+ 	};
+ 
+-	pmic {
+-		pmic_int_l: pmic-int-l {
+-			rockchip,pins = <1 RK_PC5 RK_FUNC_GPIO &pcfg_pull_up>;
+-		};
+-
+-		vsel1_pin: vsel1-pin {
+-			rockchip,pins = <1 RK_PC1 RK_FUNC_GPIO &pcfg_pull_down>;
+-		};
+-
+-		vsel2_pin: vsel2-pin {
+-			rockchip,pins = <1 RK_PB6 RK_FUNC_GPIO &pcfg_pull_down>;
+-		};
+-	};
+-
+ 	usb-typec {
+ 		vcc5v0_typec_en: vcc5v0-typec-en {
+ 			rockchip,pins = <1 RK_PA3 RK_FUNC_GPIO &pcfg_pull_up>;
+@@ -560,6 +591,11 @@
+ 	};
+ };
+ 
++&pmu_io_domains {
++	pmu1830-supply = <&vcc_3v0>;
++	status = "okay";
++};
++
+ &pwm2 {
+ 	status = "okay";
+ };
+@@ -570,6 +606,14 @@
+ 	vref-supply = <&vcc_1v8>;
+ };
+ 
++&sdhci {
++	max-frequency = <150000000>;
++	bus-width = <8>;
++	mmc-hs200-1_8v;
++	non-removable;
++	status = "okay";
++};
++
+ &sdio0 {
+ 	#address-cells = <1>;
+ 	#size-cells = <0>;
+@@ -597,12 +641,13 @@
+ 	status = "okay";
+ };
+ 
+-&sdhci {
+-	bus-width = <8>;
+-	mmc-hs400-1_8v;
+-	mmc-hs400-enhanced-strobe;
+-	non-removable;
+-	status = "okay";
++&spdif {
++
++	spdif_p0: port {
++		spdif_p0_0: endpoint {
++			remote-endpoint = <&dit_p0_0>;
++		};
++	};
+ };
+ 
+ &tcphy0 {
+@@ -677,13 +722,13 @@
+ 	status = "okay";
+ };
+ 
+-&usbdrd_dwc3_0 {
++&usbdrd3_1 {
+ 	status = "okay";
+-	dr_mode = "otg";
+ };
+ 
+-&usbdrd3_1 {
++&usbdrd_dwc3_0 {
+ 	status = "okay";
++	dr_mode = "host";
+ };
+ 
+ &usbdrd_dwc3_1 {
+diff --git a/arch/mips/include/asm/dec/prom.h b/arch/mips/include/asm/dec/prom.h
+index 1e1247add1cf8..908e96e3a3117 100644
+--- a/arch/mips/include/asm/dec/prom.h
++++ b/arch/mips/include/asm/dec/prom.h
+@@ -70,7 +70,7 @@ static inline bool prom_is_rex(u32 magic)
+  */
+ typedef struct {
+ 	int pagesize;
+-	unsigned char bitmap[0];
++	unsigned char bitmap[];
+ } memmap;
+ 
+ 
+diff --git a/arch/powerpc/kernel/rtas_flash.c b/arch/powerpc/kernel/rtas_flash.c
+index a99179d835382..56bd0aa30f930 100644
+--- a/arch/powerpc/kernel/rtas_flash.c
++++ b/arch/powerpc/kernel/rtas_flash.c
+@@ -710,9 +710,9 @@ static int __init rtas_flash_init(void)
+ 	if (!rtas_validate_flash_data.buf)
+ 		return -ENOMEM;
+ 
+-	flash_block_cache = kmem_cache_create("rtas_flash_cache",
+-					      RTAS_BLK_SIZE, RTAS_BLK_SIZE, 0,
+-					      NULL);
++	flash_block_cache = kmem_cache_create_usercopy("rtas_flash_cache",
++						       RTAS_BLK_SIZE, RTAS_BLK_SIZE,
++						       0, 0, RTAS_BLK_SIZE, NULL);
+ 	if (!flash_block_cache) {
+ 		printk(KERN_ERR "%s: failed to create block cache\n",
+ 				__func__);
+diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
+index bb1a5408b86b2..8636b17c6a20f 100644
+--- a/arch/powerpc/mm/kasan/Makefile
++++ b/arch/powerpc/mm/kasan/Makefile
+@@ -1,6 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ KASAN_SANITIZE := n
++KCOV_INSTRUMENT := n
+ 
+ obj-$(CONFIG_PPC32)           += kasan_init_32.o
+ obj-$(CONFIG_PPC_8xx)		+= 8xx.o
+diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
+index 4a382fb6a9ef8..5443851d3aa60 100644
+--- a/arch/x86/include/asm/entry-common.h
++++ b/arch/x86/include/asm/entry-common.h
+@@ -78,6 +78,7 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
+ static __always_inline void arch_exit_to_user_mode(void)
+ {
+ 	mds_user_clear_cpu_buffers();
++	amd_clear_divider();
+ }
+ #define arch_exit_to_user_mode arch_exit_to_user_mode
+ 
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 99fbce2c1c7c1..7b4782249a925 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -156,9 +156,9 @@
+ .endm
+ 
+ #ifdef CONFIG_CPU_UNRET_ENTRY
+-#define CALL_ZEN_UNTRAIN_RET	"call zen_untrain_ret"
++#define CALL_UNTRAIN_RET	"call entry_untrain_ret"
+ #else
+-#define CALL_ZEN_UNTRAIN_RET	""
++#define CALL_UNTRAIN_RET	""
+ #endif
+ 
+ /*
+@@ -166,7 +166,7 @@
+  * return thunk isn't mapped into the userspace tables (then again, AMD
+  * typically has NO_MELTDOWN).
+  *
+- * While zen_untrain_ret() doesn't clobber anything but requires stack,
++ * While retbleed_untrain_ret() doesn't clobber anything but requires stack,
+  * entry_ibpb() will clobber AX, CX, DX.
+  *
+  * As such, this must be placed after every *SWITCH_TO_KERNEL_CR3 at a point
+@@ -177,14 +177,9 @@
+ 	defined(CONFIG_CPU_SRSO)
+ 	ANNOTATE_UNRET_END
+ 	ALTERNATIVE_2 "",						\
+-	              CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET,		\
++		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
+ 		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB
+ #endif
+-
+-#ifdef CONFIG_CPU_SRSO
+-	ALTERNATIVE_2 "", "call srso_untrain_ret", X86_FEATURE_SRSO, \
+-			  "call srso_untrain_ret_alias", X86_FEATURE_SRSO_ALIAS
+-#endif
+ .endm
+ 
+ #else /* __ASSEMBLY__ */
+@@ -195,10 +190,21 @@
+ 	_ASM_PTR " 999b\n\t"					\
+ 	".popsection\n\t"
+ 
++#ifdef CONFIG_RETHUNK
+ extern void __x86_return_thunk(void);
+-extern void zen_untrain_ret(void);
++#else
++static inline void __x86_return_thunk(void) {}
++#endif
++
++extern void retbleed_return_thunk(void);
++extern void srso_return_thunk(void);
++extern void srso_alias_return_thunk(void);
++
++extern void retbleed_untrain_ret(void);
+ extern void srso_untrain_ret(void);
+-extern void srso_untrain_ret_alias(void);
++extern void srso_alias_untrain_ret(void);
++
++extern void entry_untrain_ret(void);
+ extern void entry_ibpb(void);
+ 
+ #ifdef CONFIG_RETPOLINE
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 7f351093cd947..2bc1c78a7cc57 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1332,3 +1332,4 @@ void noinstr amd_clear_divider(void)
+ 	asm volatile(ALTERNATIVE("", "div %2\n\t", X86_BUG_DIV0)
+ 		     :: "a" (0), "d" (0), "r" (1));
+ }
++EXPORT_SYMBOL_GPL(amd_clear_divider);
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index d31639e3ce282..4d11a50089b27 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -61,6 +61,8 @@ EXPORT_SYMBOL_GPL(x86_pred_cmd);
+ 
+ static DEFINE_MUTEX(spec_ctrl_mutex);
+ 
++void (*x86_return_thunk)(void) __ro_after_init = &__x86_return_thunk;
++
+ /* Update SPEC_CTRL MSR and its cached copy unconditionally */
+ static void update_spec_ctrl(u64 val)
+ {
+@@ -155,8 +157,13 @@ void __init cpu_select_mitigations(void)
+ 	l1tf_select_mitigation();
+ 	md_clear_select_mitigation();
+ 	srbds_select_mitigation();
+-	gds_select_mitigation();
++
++	/*
++	 * srso_select_mitigation() depends and must run after
++	 * retbleed_select_mitigation().
++	 */
+ 	srso_select_mitigation();
++	gds_select_mitigation();
+ }
+ 
+ /*
+@@ -976,6 +983,9 @@ do_cmd_auto:
+ 		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
+ 		setup_force_cpu_cap(X86_FEATURE_UNRET);
+ 
++		if (IS_ENABLED(CONFIG_RETHUNK))
++			x86_return_thunk = retbleed_return_thunk;
++
+ 		if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
+ 		    boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
+ 			pr_err(RETBLEED_UNTRAIN_MSG);
+@@ -2318,9 +2328,10 @@ static void __init srso_select_mitigation(void)
+ 		 * Zen1/2 with SMT off aren't vulnerable after the right
+ 		 * IBPB microcode has been applied.
+ 		 */
+-		if ((boot_cpu_data.x86 < 0x19) &&
+-		    (!cpu_smt_possible() || (cpu_smt_control == CPU_SMT_DISABLED)))
++		if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
+ 			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
++			return;
++		}
+ 	}
+ 
+ 	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
+@@ -2349,11 +2360,15 @@ static void __init srso_select_mitigation(void)
+ 			 * like ftrace, static_call, etc.
+ 			 */
+ 			setup_force_cpu_cap(X86_FEATURE_RETHUNK);
++			setup_force_cpu_cap(X86_FEATURE_UNRET);
+ 
+-			if (boot_cpu_data.x86 == 0x19)
++			if (boot_cpu_data.x86 == 0x19) {
+ 				setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
+-			else
++				x86_return_thunk = srso_alias_return_thunk;
++			} else {
+ 				setup_force_cpu_cap(X86_FEATURE_SRSO);
++				x86_return_thunk = srso_return_thunk;
++			}
+ 			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+ 		} else {
+ 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
+@@ -2602,6 +2617,9 @@ static ssize_t gds_show_state(char *buf)
+ 
+ static ssize_t srso_show_state(char *buf)
+ {
++	if (boot_cpu_has(X86_FEATURE_SRSO_NO))
++		return sysfs_emit(buf, "Mitigation: SMT disabled\n");
++
+ 	return sysfs_emit(buf, "%s%s\n",
+ 			  srso_strings[srso_mitigation],
+ 			  (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
+diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
+index 2973b3fb0ec1a..759b986b7f033 100644
+--- a/arch/x86/kernel/static_call.c
++++ b/arch/x86/kernel/static_call.c
+@@ -123,6 +123,19 @@ EXPORT_SYMBOL_GPL(arch_static_call_transform);
+  */
+ bool __static_call_fixup(void *tramp, u8 op, void *dest)
+ {
++	unsigned long addr = (unsigned long)tramp;
++	/*
++	 * Not all .return_sites are a static_call trampoline (most are not).
++	 * Check if the 3 bytes after the return are still kernel text, if not,
++	 * then this definitely is not a trampoline and we need not worry
++	 * further.
++	 *
++	 * This avoids the memcmp() below tripping over pagefaults etc..
++	 */
++	if (((addr >> PAGE_SHIFT) != ((addr + 7) >> PAGE_SHIFT)) &&
++	    !kernel_text_address(addr + 7))
++		return false;
++
+ 	if (memcmp(tramp+5, tramp_ud, 3)) {
+ 		/* Not a trampoline site, not our problem. */
+ 		return false;
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 28f5cc0a9decb..98838b784524e 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -198,8 +198,6 @@ DEFINE_IDTENTRY(exc_divide_error)
+ {
+ 	do_error_trap(regs, 0, "divide error", X86_TRAP_DE, SIGFPE,
+ 		      FPE_INTDIV, error_get_trap_addr(regs));
+-
+-	amd_clear_divider();
+ }
+ 
+ DEFINE_IDTENTRY(exc_overflow)
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 72ba175cb9d4c..f0d4500ae77ae 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -134,18 +134,18 @@ SECTIONS
+ 		KPROBES_TEXT
+ 		ALIGN_ENTRY_TEXT_BEGIN
+ #ifdef CONFIG_CPU_SRSO
+-		*(.text.__x86.rethunk_untrain)
++		*(.text..__x86.rethunk_untrain)
+ #endif
+ 
+ 		ENTRY_TEXT
+ 
+ #ifdef CONFIG_CPU_SRSO
+ 		/*
+-		 * See the comment above srso_untrain_ret_alias()'s
++		 * See the comment above srso_alias_untrain_ret()'s
+ 		 * definition.
+ 		 */
+-		. = srso_untrain_ret_alias | (1 << 2) | (1 << 8) | (1 << 14) | (1 << 20);
+-		*(.text.__x86.rethunk_safe)
++		. = srso_alias_untrain_ret | (1 << 2) | (1 << 8) | (1 << 14) | (1 << 20);
++		*(.text..__x86.rethunk_safe)
+ #endif
+ 		ALIGN_ENTRY_TEXT_END
+ 		SOFTIRQENTRY_TEXT
+@@ -155,8 +155,8 @@ SECTIONS
+ 
+ #ifdef CONFIG_RETPOLINE
+ 		__indirect_thunk_start = .;
+-		*(.text.__x86.indirect_thunk)
+-		*(.text.__x86.return_thunk)
++		*(.text..__x86.indirect_thunk)
++		*(.text..__x86.return_thunk)
+ 		__indirect_thunk_end = .;
+ #endif
+ 	} :text =0xcccc
+@@ -518,7 +518,7 @@ INIT_PER_CPU(irq_stack_backing_store);
+ #endif
+ 
+ #ifdef CONFIG_RETHUNK
+-. = ASSERT((__ret & 0x3f) == 0, "__ret not cacheline-aligned");
++. = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned");
+ . = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
+ #endif
+ 
+@@ -533,8 +533,8 @@ INIT_PER_CPU(irq_stack_backing_store);
+  * Instead do: (A | B) - (A & B) in order to compute the XOR
+  * of the two function addresses:
+  */
+-. = ASSERT(((ABSOLUTE(srso_untrain_ret_alias) | srso_safe_ret_alias) -
+-		(ABSOLUTE(srso_untrain_ret_alias) & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
++. = ASSERT(((ABSOLUTE(srso_alias_untrain_ret) | srso_alias_safe_ret) -
++		(ABSOLUTE(srso_alias_untrain_ret) & srso_alias_safe_ret)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
+ 		"SRSO function pair won't alias");
+ #endif
+ 
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 8544bca6b3356..1616e39ddc3f1 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -3376,6 +3376,7 @@ static void svm_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t gva)
+ 
+ static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
+ {
++	amd_clear_divider();
+ }
+ 
+ static inline void sync_cr8_to_lapic(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index 5f7eed97487ec..6f5321b36dbb1 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -11,7 +11,7 @@
+ #include <asm/frame.h>
+ #include <asm/nops.h>
+ 
+-	.section .text.__x86.indirect_thunk
++	.section .text..__x86.indirect_thunk
+ 
+ .macro RETPOLINE reg
+ 	ANNOTATE_INTRA_FUNCTION_CALL
+@@ -75,74 +75,105 @@ SYM_CODE_END(__x86_indirect_thunk_array)
+ #ifdef CONFIG_RETHUNK
+ 
+ /*
+- * srso_untrain_ret_alias() and srso_safe_ret_alias() are placed at
++ * srso_alias_untrain_ret() and srso_alias_safe_ret() are placed at
+  * special addresses:
+  *
+- * - srso_untrain_ret_alias() is 2M aligned
+- * - srso_safe_ret_alias() is also in the same 2M page but bits 2, 8, 14
++ * - srso_alias_untrain_ret() is 2M aligned
++ * - srso_alias_safe_ret() is also in the same 2M page but bits 2, 8, 14
+  * and 20 in its virtual address are set (while those bits in the
+- * srso_untrain_ret_alias() function are cleared).
++ * srso_alias_untrain_ret() function are cleared).
+  *
+  * This guarantees that those two addresses will alias in the branch
+  * target buffer of Zen3/4 generations, leading to any potential
+  * poisoned entries at that BTB slot to get evicted.
+  *
+- * As a result, srso_safe_ret_alias() becomes a safe return.
++ * As a result, srso_alias_safe_ret() becomes a safe return.
+  */
+ #ifdef CONFIG_CPU_SRSO
+-	.section .text.__x86.rethunk_untrain
++	.section .text..__x86.rethunk_untrain
+ 
+-SYM_START(srso_untrain_ret_alias, SYM_L_GLOBAL, SYM_A_NONE)
++SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
++	UNWIND_HINT_FUNC
+ 	ASM_NOP2
+ 	lfence
+-	jmp __x86_return_thunk
+-SYM_FUNC_END(srso_untrain_ret_alias)
+-__EXPORT_THUNK(srso_untrain_ret_alias)
++	jmp srso_alias_return_thunk
++SYM_FUNC_END(srso_alias_untrain_ret)
++__EXPORT_THUNK(srso_alias_untrain_ret)
+ 
+-	.section .text.__x86.rethunk_safe
++	.section .text..__x86.rethunk_safe
++#else
++/* dummy definition for alternatives */
++SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
++	ANNOTATE_UNRET_SAFE
++	ret
++	int3
++SYM_FUNC_END(srso_alias_untrain_ret)
+ #endif
+ 
+-/* Needs a definition for the __x86_return_thunk alternative below. */
+-SYM_START(srso_safe_ret_alias, SYM_L_GLOBAL, SYM_A_NONE)
+-#ifdef CONFIG_CPU_SRSO
+-	add $8, %_ASM_SP
++SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
++	lea 8(%_ASM_SP), %_ASM_SP
+ 	UNWIND_HINT_FUNC
+-#endif
+ 	ANNOTATE_UNRET_SAFE
+ 	ret
+ 	int3
+-SYM_FUNC_END(srso_safe_ret_alias)
++SYM_FUNC_END(srso_alias_safe_ret)
+ 
+-	.section .text.__x86.return_thunk
++	.section .text..__x86.return_thunk
++
++SYM_CODE_START(srso_alias_return_thunk)
++	UNWIND_HINT_FUNC
++	ANNOTATE_NOENDBR
++	call srso_alias_safe_ret
++	ud2
++SYM_CODE_END(srso_alias_return_thunk)
++
++/*
++ * Some generic notes on the untraining sequences:
++ *
++ * They are interchangeable when it comes to flushing potentially wrong
++ * RET predictions from the BTB.
++ *
++ * The SRSO Zen1/2 (MOVABS) untraining sequence is longer than the
++ * Retbleed sequence because the return sequence done there
++ * (srso_safe_ret()) is longer and the return sequence must fully nest
++ * (end before) the untraining sequence. Therefore, the untraining
++ * sequence must fully overlap the return sequence.
++ *
++ * Regarding alignment - the instructions which need to be untrained,
++ * must all start at a cacheline boundary for Zen1/2 generations. That
++ * is, instruction sequences starting at srso_safe_ret() and
++ * the respective instruction sequences at retbleed_return_thunk()
++ * must start at a cacheline boundary.
++ */
+ 
+ /*
+  * Safety details here pertain to the AMD Zen{1,2} microarchitecture:
+- * 1) The RET at __x86_return_thunk must be on a 64 byte boundary, for
++ * 1) The RET at retbleed_return_thunk must be on a 64 byte boundary, for
+  *    alignment within the BTB.
+- * 2) The instruction at zen_untrain_ret must contain, and not
++ * 2) The instruction at retbleed_untrain_ret must contain, and not
+  *    end with, the 0xc3 byte of the RET.
+  * 3) STIBP must be enabled, or SMT disabled, to prevent the sibling thread
+  *    from re-poisioning the BTB prediction.
+  */
+ 	.align 64
+-	.skip 64 - (__ret - zen_untrain_ret), 0xcc
+-SYM_FUNC_START_NOALIGN(zen_untrain_ret);
++	.skip 64 - (retbleed_return_thunk - retbleed_untrain_ret), 0xcc
++SYM_FUNC_START_NOALIGN(retbleed_untrain_ret);
+ 
+ 	/*
+-	 * As executed from zen_untrain_ret, this is:
++	 * As executed from retbleed_untrain_ret, this is:
+ 	 *
+ 	 *   TEST $0xcc, %bl
+ 	 *   LFENCE
+-	 *   JMP __x86_return_thunk
++	 *   JMP retbleed_return_thunk
+ 	 *
+ 	 * Executing the TEST instruction has a side effect of evicting any BTB
+ 	 * prediction (potentially attacker controlled) attached to the RET, as
+-	 * __x86_return_thunk + 1 isn't an instruction boundary at the moment.
++	 * retbleed_return_thunk + 1 isn't an instruction boundary at the moment.
+ 	 */
+ 	.byte	0xf6
+ 
+ 	/*
+-	 * As executed from __x86_return_thunk, this is a plain RET.
++	 * As executed from retbleed_return_thunk, this is a plain RET.
+ 	 *
+ 	 * As part of the TEST above, RET is the ModRM byte, and INT3 the imm8.
+ 	 *
+@@ -154,13 +185,13 @@ SYM_FUNC_START_NOALIGN(zen_untrain_ret);
+ 	 * With SMT enabled and STIBP active, a sibling thread cannot poison
+ 	 * RET's prediction to a type of its choice, but can evict the
+ 	 * prediction due to competitive sharing. If the prediction is
+-	 * evicted, __x86_return_thunk will suffer Straight Line Speculation
++	 * evicted, retbleed_return_thunk will suffer Straight Line Speculation
+ 	 * which will be contained safely by the INT3.
+ 	 */
+-SYM_INNER_LABEL(__ret, SYM_L_GLOBAL)
++SYM_INNER_LABEL(retbleed_return_thunk, SYM_L_GLOBAL)
+ 	ret
+ 	int3
+-SYM_CODE_END(__ret)
++SYM_CODE_END(retbleed_return_thunk)
+ 
+ 	/*
+ 	 * Ensure the TEST decoding / BTB invalidation is complete.
+@@ -171,16 +202,16 @@ SYM_CODE_END(__ret)
+ 	 * Jump back and execute the RET in the middle of the TEST instruction.
+ 	 * INT3 is for SLS protection.
+ 	 */
+-	jmp __ret
++	jmp retbleed_return_thunk
+ 	int3
+-SYM_FUNC_END(zen_untrain_ret)
+-__EXPORT_THUNK(zen_untrain_ret)
++SYM_FUNC_END(retbleed_untrain_ret)
++__EXPORT_THUNK(retbleed_untrain_ret)
+ 
+ /*
+- * SRSO untraining sequence for Zen1/2, similar to zen_untrain_ret()
++ * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
+  * above. On kernel entry, srso_untrain_ret() is executed which is a
+  *
+- * movabs $0xccccccc308c48348,%rax
++ * movabs $0xccccc30824648d48,%rax
+  *
+  * and when the return thunk executes the inner label srso_safe_ret()
+  * later, it is a stack manipulation and a RET which is mispredicted and
+@@ -191,22 +222,44 @@ __EXPORT_THUNK(zen_untrain_ret)
+ SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+ 	.byte 0x48, 0xb8
+ 
++/*
++ * This forces the function return instruction to speculate into a trap
++ * (UD2 in srso_return_thunk() below).  This RET will then mispredict
++ * and execution will continue at the return site read from the top of
++ * the stack.
++ */
+ SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
+-	add $8, %_ASM_SP
++	lea 8(%_ASM_SP), %_ASM_SP
+ 	ret
+ 	int3
+ 	int3
+-	int3
++	/* end of movabs */
+ 	lfence
+ 	call srso_safe_ret
+-	int3
++	ud2
+ SYM_CODE_END(srso_safe_ret)
+ SYM_FUNC_END(srso_untrain_ret)
+ __EXPORT_THUNK(srso_untrain_ret)
+ 
+-SYM_FUNC_START(__x86_return_thunk)
+-	ALTERNATIVE_2 "jmp __ret", "call srso_safe_ret", X86_FEATURE_SRSO, \
+-			"call srso_safe_ret_alias", X86_FEATURE_SRSO_ALIAS
++SYM_CODE_START(srso_return_thunk)
++	UNWIND_HINT_FUNC
++	ANNOTATE_NOENDBR
++	call srso_safe_ret
++	ud2
++SYM_CODE_END(srso_return_thunk)
++
++SYM_FUNC_START(entry_untrain_ret)
++	ALTERNATIVE_2 "jmp retbleed_untrain_ret", \
++		      "jmp srso_untrain_ret", X86_FEATURE_SRSO, \
++		      "jmp srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
++SYM_FUNC_END(entry_untrain_ret)
++__EXPORT_THUNK(entry_untrain_ret)
++
++SYM_CODE_START(__x86_return_thunk)
++	UNWIND_HINT_FUNC
++	ANNOTATE_NOENDBR
++	ANNOTATE_UNRET_SAFE
++	ret
+ 	int3
+ SYM_CODE_END(__x86_return_thunk)
+ EXPORT_SYMBOL(__x86_return_thunk)
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 2695ece47eb0e..49d5375b04f40 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -432,6 +432,9 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x0489, 0xe0d9), .driver_info = BTUSB_MEDIATEK |
+ 						     BTUSB_WIDEBAND_SPEECH |
+ 						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x0489, 0xe0f5), .driver_info = BTUSB_MEDIATEK |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
+ 	{ USB_DEVICE(0x13d3, 0x3568), .driver_info = BTUSB_MEDIATEK |
+ 						     BTUSB_WIDEBAND_SPEECH |
+ 						     BTUSB_VALID_LE_STATES },
+diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
+index 397e35392bff8..16c47a0616ae4 100644
+--- a/drivers/bus/Makefile
++++ b/drivers/bus/Makefile
+@@ -38,4 +38,4 @@ obj-$(CONFIG_VEXPRESS_CONFIG)	+= vexpress-config.o
+ obj-$(CONFIG_DA8XX_MSTPRI)	+= da8xx-mstpri.o
+ 
+ # MHI
+-obj-$(CONFIG_MHI_BUS)		+= mhi/
++obj-y				+= mhi/
+diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
+index e841c1097fb4d..4748df7f9cd58 100644
+--- a/drivers/bus/mhi/Kconfig
++++ b/drivers/bus/mhi/Kconfig
+@@ -2,21 +2,7 @@
+ #
+ # MHI bus
+ #
+-# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
++# Copyright (c) 2021, Linaro Ltd.
+ #
+ 
+-config MHI_BUS
+-	tristate "Modem Host Interface (MHI) bus"
+-	help
+-	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
+-	  communication protocol used by the host processors to control
+-	  and communicate with modem devices over a high speed peripheral
+-	  bus or shared memory.
+-
+-config MHI_BUS_DEBUG
+-	bool "Debugfs support for the MHI bus"
+-	depends on MHI_BUS && DEBUG_FS
+-	help
+-	  Enable debugfs support for use with the MHI transport. Allows
+-	  reading and/or modifying some values within the MHI controller
+-	  for debug and test purposes.
++source "drivers/bus/mhi/host/Kconfig"
+diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
+index 19e6443b72df4..5f5708a249f54 100644
+--- a/drivers/bus/mhi/Makefile
++++ b/drivers/bus/mhi/Makefile
+@@ -1,2 +1,2 @@
+-# core layer
+-obj-y += core/
++# Host MHI stack
++obj-y += host/
+diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
+deleted file mode 100644
+index c3feb4130aa37..0000000000000
+--- a/drivers/bus/mhi/core/Makefile
++++ /dev/null
+@@ -1,4 +0,0 @@
+-obj-$(CONFIG_MHI_BUS) += mhi.o
+-
+-mhi-y := init.o main.o pm.o boot.o
+-mhi-$(CONFIG_MHI_BUS_DEBUG) += debugfs.o
+diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/core/boot.c
+deleted file mode 100644
+index 24422f5c3d808..0000000000000
+--- a/drivers/bus/mhi/core/boot.c
++++ /dev/null
+@@ -1,525 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+- *
+- */
+-
+-#include <linux/delay.h>
+-#include <linux/device.h>
+-#include <linux/dma-direction.h>
+-#include <linux/dma-mapping.h>
+-#include <linux/firmware.h>
+-#include <linux/interrupt.h>
+-#include <linux/list.h>
+-#include <linux/mhi.h>
+-#include <linux/module.h>
+-#include <linux/random.h>
+-#include <linux/slab.h>
+-#include <linux/wait.h>
+-#include "internal.h"
+-
+-/* Setup RDDM vector table for RDDM transfer and program RXVEC */
+-void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
+-		      struct image_info *img_info)
+-{
+-	struct mhi_buf *mhi_buf = img_info->mhi_buf;
+-	struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
+-	void __iomem *base = mhi_cntrl->bhie;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	u32 sequence_id;
+-	unsigned int i;
+-
+-	for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
+-		bhi_vec->dma_addr = mhi_buf->dma_addr;
+-		bhi_vec->size = mhi_buf->len;
+-	}
+-
+-	dev_dbg(dev, "BHIe programming for RDDM\n");
+-
+-	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS,
+-		      upper_32_bits(mhi_buf->dma_addr));
+-
+-	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_LOW_OFFS,
+-		      lower_32_bits(mhi_buf->dma_addr));
+-
+-	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
+-	sequence_id = MHI_RANDOM_U32_NONZERO(BHIE_RXVECSTATUS_SEQNUM_BMSK);
+-
+-	mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
+-			    BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
+-			    sequence_id);
+-
+-	dev_dbg(dev, "Address: %p and len: 0x%zx sequence: %u\n",
+-		&mhi_buf->dma_addr, mhi_buf->len, sequence_id);
+-}
+-
+-/* Collect RDDM buffer during kernel panic */
+-static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
+-{
+-	int ret;
+-	u32 rx_status;
+-	enum mhi_ee_type ee;
+-	const u32 delayus = 2000;
+-	u32 retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
+-	const u32 rddm_timeout_us = 200000;
+-	int rddm_retry = rddm_timeout_us / delayus;
+-	void __iomem *base = mhi_cntrl->bhie;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-
+-	dev_dbg(dev, "Entered with pm_state:%s dev_state:%s ee:%s\n",
+-		to_mhi_pm_state_str(mhi_cntrl->pm_state),
+-		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+-		TO_MHI_EXEC_STR(mhi_cntrl->ee));
+-
+-	/*
+-	 * This should only be executing during a kernel panic, we expect all
+-	 * other cores to shutdown while we're collecting RDDM buffer. After
+-	 * returning from this function, we expect the device to reset.
+-	 *
+-	 * Normaly, we read/write pm_state only after grabbing the
+-	 * pm_lock, since we're in a panic, skipping it. Also there is no
+-	 * gurantee that this state change would take effect since
+-	 * we're setting it w/o grabbing pm_lock
+-	 */
+-	mhi_cntrl->pm_state = MHI_PM_LD_ERR_FATAL_DETECT;
+-	/* update should take the effect immediately */
+-	smp_wmb();
+-
+-	/*
+-	 * Make sure device is not already in RDDM. In case the device asserts
+-	 * and a kernel panic follows, device will already be in RDDM.
+-	 * Do not trigger SYS ERR again and proceed with waiting for
+-	 * image download completion.
+-	 */
+-	ee = mhi_get_exec_env(mhi_cntrl);
+-	if (ee != MHI_EE_RDDM) {
+-		dev_dbg(dev, "Trigger device into RDDM mode using SYS ERR\n");
+-		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
+-
+-		dev_dbg(dev, "Waiting for device to enter RDDM\n");
+-		while (rddm_retry--) {
+-			ee = mhi_get_exec_env(mhi_cntrl);
+-			if (ee == MHI_EE_RDDM)
+-				break;
+-
+-			udelay(delayus);
+-		}
+-
+-		if (rddm_retry <= 0) {
+-			/* Hardware reset so force device to enter RDDM */
+-			dev_dbg(dev,
+-				"Did not enter RDDM, do a host req reset\n");
+-			mhi_write_reg(mhi_cntrl, mhi_cntrl->regs,
+-				      MHI_SOC_RESET_REQ_OFFSET,
+-				      MHI_SOC_RESET_REQ);
+-			udelay(delayus);
+-		}
+-
+-		ee = mhi_get_exec_env(mhi_cntrl);
+-	}
+-
+-	dev_dbg(dev,
+-		"Waiting for RDDM image download via BHIe, current EE:%s\n",
+-		TO_MHI_EXEC_STR(ee));
+-
+-	while (retry--) {
+-		ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
+-					 BHIE_RXVECSTATUS_STATUS_BMSK,
+-					 BHIE_RXVECSTATUS_STATUS_SHFT,
+-					 &rx_status);
+-		if (ret)
+-			return -EIO;
+-
+-		if (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL)
+-			return 0;
+-
+-		udelay(delayus);
+-	}
+-
+-	ee = mhi_get_exec_env(mhi_cntrl);
+-	ret = mhi_read_reg(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS, &rx_status);
+-
+-	dev_err(dev, "Did not complete RDDM transfer\n");
+-	dev_err(dev, "Current EE: %s\n", TO_MHI_EXEC_STR(ee));
+-	dev_err(dev, "RXVEC_STATUS: 0x%x\n", rx_status);
+-
+-	return -EIO;
+-}
+-
+-/* Download RDDM image from device */
+-int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic)
+-{
+-	void __iomem *base = mhi_cntrl->bhie;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	u32 rx_status;
+-
+-	if (in_panic)
+-		return __mhi_download_rddm_in_panic(mhi_cntrl);
+-
+-	dev_dbg(dev, "Waiting for RDDM image download via BHIe\n");
+-
+-	/* Wait for the image download to complete */
+-	wait_event_timeout(mhi_cntrl->state_event,
+-			   mhi_read_reg_field(mhi_cntrl, base,
+-					      BHIE_RXVECSTATUS_OFFS,
+-					      BHIE_RXVECSTATUS_STATUS_BMSK,
+-					      BHIE_RXVECSTATUS_STATUS_SHFT,
+-					      &rx_status) || rx_status,
+-			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-
+-	return (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
+-}
+-EXPORT_SYMBOL_GPL(mhi_download_rddm_img);
+-
+-static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
+-			    const struct mhi_buf *mhi_buf)
+-{
+-	void __iomem *base = mhi_cntrl->bhie;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
+-	u32 tx_status, sequence_id;
+-	int ret;
+-
+-	read_lock_bh(pm_lock);
+-	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+-		read_unlock_bh(pm_lock);
+-		return -EIO;
+-	}
+-
+-	sequence_id = MHI_RANDOM_U32_NONZERO(BHIE_TXVECSTATUS_SEQNUM_BMSK);
+-	dev_dbg(dev, "Starting AMSS download via BHIe. Sequence ID:%u\n",
+-		sequence_id);
+-	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_HIGH_OFFS,
+-		      upper_32_bits(mhi_buf->dma_addr));
+-
+-	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_LOW_OFFS,
+-		      lower_32_bits(mhi_buf->dma_addr));
+-
+-	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
+-
+-	mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
+-			    BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
+-			    sequence_id);
+-	read_unlock_bh(pm_lock);
+-
+-	/* Wait for the image download to complete */
+-	ret = wait_event_timeout(mhi_cntrl->state_event,
+-				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
+-				 mhi_read_reg_field(mhi_cntrl, base,
+-						   BHIE_TXVECSTATUS_OFFS,
+-						   BHIE_TXVECSTATUS_STATUS_BMSK,
+-						   BHIE_TXVECSTATUS_STATUS_SHFT,
+-						   &tx_status) || tx_status,
+-				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
+-	    tx_status != BHIE_TXVECSTATUS_STATUS_XFER_COMPL)
+-		return -EIO;
+-
+-	return (!ret) ? -ETIMEDOUT : 0;
+-}
+-
+-static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
+-			   dma_addr_t dma_addr,
+-			   size_t size)
+-{
+-	u32 tx_status, val, session_id;
+-	int i, ret;
+-	void __iomem *base = mhi_cntrl->bhi;
+-	rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	struct {
+-		char *name;
+-		u32 offset;
+-	} error_reg[] = {
+-		{ "ERROR_CODE", BHI_ERRCODE },
+-		{ "ERROR_DBG1", BHI_ERRDBG1 },
+-		{ "ERROR_DBG2", BHI_ERRDBG2 },
+-		{ "ERROR_DBG3", BHI_ERRDBG3 },
+-		{ NULL },
+-	};
+-
+-	read_lock_bh(pm_lock);
+-	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+-		read_unlock_bh(pm_lock);
+-		goto invalid_pm_state;
+-	}
+-
+-	session_id = MHI_RANDOM_U32_NONZERO(BHI_TXDB_SEQNUM_BMSK);
+-	dev_dbg(dev, "Starting SBL download via BHI. Session ID:%u\n",
+-		session_id);
+-	mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0);
+-	mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH,
+-		      upper_32_bits(dma_addr));
+-	mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW,
+-		      lower_32_bits(dma_addr));
+-	mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size);
+-	mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, session_id);
+-	read_unlock_bh(pm_lock);
+-
+-	/* Wait for the image download to complete */
+-	ret = wait_event_timeout(mhi_cntrl->state_event,
+-			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
+-			   mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS,
+-					      BHI_STATUS_MASK, BHI_STATUS_SHIFT,
+-					      &tx_status) || tx_status,
+-			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+-		goto invalid_pm_state;
+-
+-	if (tx_status == BHI_STATUS_ERROR) {
+-		dev_err(dev, "Image transfer failed\n");
+-		read_lock_bh(pm_lock);
+-		if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+-			for (i = 0; error_reg[i].name; i++) {
+-				ret = mhi_read_reg(mhi_cntrl, base,
+-						   error_reg[i].offset, &val);
+-				if (ret)
+-					break;
+-				dev_err(dev, "Reg: %s value: 0x%x\n",
+-					error_reg[i].name, val);
+-			}
+-		}
+-		read_unlock_bh(pm_lock);
+-		goto invalid_pm_state;
+-	}
+-
+-	return (!ret) ? -ETIMEDOUT : 0;
+-
+-invalid_pm_state:
+-
+-	return -EIO;
+-}
+-
+-void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
+-			 struct image_info *image_info)
+-{
+-	int i;
+-	struct mhi_buf *mhi_buf = image_info->mhi_buf;
+-
+-	for (i = 0; i < image_info->entries; i++, mhi_buf++)
+-		mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
+-				  mhi_buf->dma_addr);
+-
+-	kfree(image_info->mhi_buf);
+-	kfree(image_info);
+-}
+-
+-int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
+-			 struct image_info **image_info,
+-			 size_t alloc_size)
+-{
+-	size_t seg_size = mhi_cntrl->seg_len;
+-	int segments = DIV_ROUND_UP(alloc_size, seg_size) + 1;
+-	int i;
+-	struct image_info *img_info;
+-	struct mhi_buf *mhi_buf;
+-
+-	img_info = kzalloc(sizeof(*img_info), GFP_KERNEL);
+-	if (!img_info)
+-		return -ENOMEM;
+-
+-	/* Allocate memory for entries */
+-	img_info->mhi_buf = kcalloc(segments, sizeof(*img_info->mhi_buf),
+-				    GFP_KERNEL);
+-	if (!img_info->mhi_buf)
+-		goto error_alloc_mhi_buf;
+-
+-	/* Allocate and populate vector table */
+-	mhi_buf = img_info->mhi_buf;
+-	for (i = 0; i < segments; i++, mhi_buf++) {
+-		size_t vec_size = seg_size;
+-
+-		/* Vector table is the last entry */
+-		if (i == segments - 1)
+-			vec_size = sizeof(struct bhi_vec_entry) * i;
+-
+-		mhi_buf->len = vec_size;
+-		mhi_buf->buf = mhi_alloc_coherent(mhi_cntrl, vec_size,
+-						  &mhi_buf->dma_addr,
+-						  GFP_KERNEL);
+-		if (!mhi_buf->buf)
+-			goto error_alloc_segment;
+-	}
+-
+-	img_info->bhi_vec = img_info->mhi_buf[segments - 1].buf;
+-	img_info->entries = segments;
+-	*image_info = img_info;
+-
+-	return 0;
+-
+-error_alloc_segment:
+-	for (--i, --mhi_buf; i >= 0; i--, mhi_buf--)
+-		mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
+-				  mhi_buf->dma_addr);
+-
+-error_alloc_mhi_buf:
+-	kfree(img_info);
+-
+-	return -ENOMEM;
+-}
+-
+-static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
+-			      const struct firmware *firmware,
+-			      struct image_info *img_info)
+-{
+-	size_t remainder = firmware->size;
+-	size_t to_cpy;
+-	const u8 *buf = firmware->data;
+-	int i = 0;
+-	struct mhi_buf *mhi_buf = img_info->mhi_buf;
+-	struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
+-
+-	while (remainder) {
+-		to_cpy = min(remainder, mhi_buf->len);
+-		memcpy(mhi_buf->buf, buf, to_cpy);
+-		bhi_vec->dma_addr = mhi_buf->dma_addr;
+-		bhi_vec->size = to_cpy;
+-
+-		buf += to_cpy;
+-		remainder -= to_cpy;
+-		i++;
+-		bhi_vec++;
+-		mhi_buf++;
+-	}
+-}
+-
+-void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
+-{
+-	const struct firmware *firmware = NULL;
+-	struct image_info *image_info;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	const char *fw_name;
+-	void *buf;
+-	dma_addr_t dma_addr;
+-	size_t size;
+-	int i, ret;
+-
+-	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+-		dev_err(dev, "Device MHI is not in valid state\n");
+-		return;
+-	}
+-
+-	/* save hardware info from BHI */
+-	ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_SERIALNU,
+-			   &mhi_cntrl->serial_number);
+-	if (ret)
+-		dev_err(dev, "Could not capture serial number via BHI\n");
+-
+-	for (i = 0; i < ARRAY_SIZE(mhi_cntrl->oem_pk_hash); i++) {
+-		ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_OEMPKHASH(i),
+-				   &mhi_cntrl->oem_pk_hash[i]);
+-		if (ret) {
+-			dev_err(dev, "Could not capture OEM PK HASH via BHI\n");
+-			break;
+-		}
+-	}
+-
+-	/* If device is in pass through, do reset to ready state transition */
+-	if (mhi_cntrl->ee == MHI_EE_PTHRU)
+-		goto fw_load_ee_pthru;
+-
+-	fw_name = (mhi_cntrl->ee == MHI_EE_EDL) ?
+-		mhi_cntrl->edl_image : mhi_cntrl->fw_image;
+-
+-	if (!fw_name || (mhi_cntrl->fbc_download && (!mhi_cntrl->sbl_size ||
+-						     !mhi_cntrl->seg_len))) {
+-		dev_err(dev,
+-			"No firmware image defined or !sbl_size || !seg_len\n");
+-		return;
+-	}
+-
+-	ret = request_firmware(&firmware, fw_name, dev);
+-	if (ret) {
+-		dev_err(dev, "Error loading firmware: %d\n", ret);
+-		return;
+-	}
+-
+-	size = (mhi_cntrl->fbc_download) ? mhi_cntrl->sbl_size : firmware->size;
+-
+-	/* SBL size provided is maximum size, not necessarily the image size */
+-	if (size > firmware->size)
+-		size = firmware->size;
+-
+-	buf = mhi_alloc_coherent(mhi_cntrl, size, &dma_addr, GFP_KERNEL);
+-	if (!buf) {
+-		release_firmware(firmware);
+-		return;
+-	}
+-
+-	/* Download SBL image */
+-	memcpy(buf, firmware->data, size);
+-	ret = mhi_fw_load_sbl(mhi_cntrl, dma_addr, size);
+-	mhi_free_coherent(mhi_cntrl, size, buf, dma_addr);
+-
+-	if (!mhi_cntrl->fbc_download || ret || mhi_cntrl->ee == MHI_EE_EDL)
+-		release_firmware(firmware);
+-
+-	/* Error or in EDL mode, we're done */
+-	if (ret) {
+-		dev_err(dev, "MHI did not load SBL, ret:%d\n", ret);
+-		return;
+-	}
+-
+-	if (mhi_cntrl->ee == MHI_EE_EDL)
+-		return;
+-
+-	write_lock_irq(&mhi_cntrl->pm_lock);
+-	mhi_cntrl->dev_state = MHI_STATE_RESET;
+-	write_unlock_irq(&mhi_cntrl->pm_lock);
+-
+-	/*
+-	 * If we're doing fbc, populate vector tables while
+-	 * device transitioning into MHI READY state
+-	 */
+-	if (mhi_cntrl->fbc_download) {
+-		ret = mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->fbc_image,
+-					   firmware->size);
+-		if (ret)
+-			goto error_alloc_fw_table;
+-
+-		/* Load the firmware into BHIE vec table */
+-		mhi_firmware_copy(mhi_cntrl, firmware, mhi_cntrl->fbc_image);
+-	}
+-
+-fw_load_ee_pthru:
+-	/* Transitioning into MHI RESET->READY state */
+-	ret = mhi_ready_state_transition(mhi_cntrl);
+-
+-	if (!mhi_cntrl->fbc_download)
+-		return;
+-
+-	if (ret) {
+-		dev_err(dev, "MHI did not enter READY state\n");
+-		goto error_read;
+-	}
+-
+-	/* Wait for the SBL event */
+-	ret = wait_event_timeout(mhi_cntrl->state_event,
+-				 mhi_cntrl->ee == MHI_EE_SBL ||
+-				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+-				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-
+-	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+-		dev_err(dev, "MHI did not enter SBL\n");
+-		goto error_read;
+-	}
+-
+-	/* Start full firmware image download */
+-	image_info = mhi_cntrl->fbc_image;
+-	ret = mhi_fw_load_amss(mhi_cntrl,
+-			       /* Vector table is the last entry */
+-			       &image_info->mhi_buf[image_info->entries - 1]);
+-	if (ret)
+-		dev_err(dev, "MHI did not load AMSS, ret:%d\n", ret);
+-
+-	release_firmware(firmware);
+-
+-	return;
+-
+-error_read:
+-	mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
+-	mhi_cntrl->fbc_image = NULL;
+-
+-error_alloc_fw_table:
+-	release_firmware(firmware);
+-}
+diff --git a/drivers/bus/mhi/core/debugfs.c b/drivers/bus/mhi/core/debugfs.c
+deleted file mode 100644
+index 3a48801e01f4a..0000000000000
+--- a/drivers/bus/mhi/core/debugfs.c
++++ /dev/null
+@@ -1,411 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+- *
+- */
+-
+-#include <linux/debugfs.h>
+-#include <linux/device.h>
+-#include <linux/interrupt.h>
+-#include <linux/list.h>
+-#include <linux/mhi.h>
+-#include <linux/module.h>
+-#include "internal.h"
+-
+-static int mhi_debugfs_states_show(struct seq_file *m, void *d)
+-{
+-	struct mhi_controller *mhi_cntrl = m->private;
+-
+-	/* states */
+-	seq_printf(m, "PM state: %s Device: %s MHI state: %s EE: %s wake: %s\n",
+-		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
+-		   mhi_is_active(mhi_cntrl) ? "Active" : "Inactive",
+-		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+-		   TO_MHI_EXEC_STR(mhi_cntrl->ee),
+-		   mhi_cntrl->wake_set ? "true" : "false");
+-
+-	/* counters */
+-	seq_printf(m, "M0: %u M2: %u M3: %u", mhi_cntrl->M0, mhi_cntrl->M2,
+-		   mhi_cntrl->M3);
+-
+-	seq_printf(m, " device wake: %u pending packets: %u\n",
+-		   atomic_read(&mhi_cntrl->dev_wake),
+-		   atomic_read(&mhi_cntrl->pending_pkts));
+-
+-	return 0;
+-}
+-
+-static int mhi_debugfs_events_show(struct seq_file *m, void *d)
+-{
+-	struct mhi_controller *mhi_cntrl = m->private;
+-	struct mhi_event *mhi_event;
+-	struct mhi_event_ctxt *er_ctxt;
+-	int i;
+-
+-	if (!mhi_is_active(mhi_cntrl)) {
+-		seq_puts(m, "Device not ready\n");
+-		return -ENODEV;
+-	}
+-
+-	er_ctxt = mhi_cntrl->mhi_ctxt->er_ctxt;
+-	mhi_event = mhi_cntrl->mhi_event;
+-	for (i = 0; i < mhi_cntrl->total_ev_rings;
+-						i++, er_ctxt++, mhi_event++) {
+-		struct mhi_ring *ring = &mhi_event->ring;
+-
+-		if (mhi_event->offload_ev) {
+-			seq_printf(m, "Index: %d is an offload event ring\n",
+-				   i);
+-			continue;
+-		}
+-
+-		seq_printf(m, "Index: %d intmod count: %lu time: %lu",
+-			   i, (er_ctxt->intmod & EV_CTX_INTMODC_MASK) >>
+-			   EV_CTX_INTMODC_SHIFT,
+-			   (er_ctxt->intmod & EV_CTX_INTMODT_MASK) >>
+-			   EV_CTX_INTMODT_SHIFT);
+-
+-		seq_printf(m, " base: 0x%0llx len: 0x%llx", er_ctxt->rbase,
+-			   er_ctxt->rlen);
+-
+-		seq_printf(m, " rp: 0x%llx wp: 0x%llx", er_ctxt->rp,
+-			   er_ctxt->wp);
+-
+-		seq_printf(m, " local rp: 0x%pK db: 0x%pad\n", ring->rp,
+-			   &mhi_event->db_cfg.db_val);
+-	}
+-
+-	return 0;
+-}
+-
+-static int mhi_debugfs_channels_show(struct seq_file *m, void *d)
+-{
+-	struct mhi_controller *mhi_cntrl = m->private;
+-	struct mhi_chan *mhi_chan;
+-	struct mhi_chan_ctxt *chan_ctxt;
+-	int i;
+-
+-	if (!mhi_is_active(mhi_cntrl)) {
+-		seq_puts(m, "Device not ready\n");
+-		return -ENODEV;
+-	}
+-
+-	mhi_chan = mhi_cntrl->mhi_chan;
+-	chan_ctxt = mhi_cntrl->mhi_ctxt->chan_ctxt;
+-	for (i = 0; i < mhi_cntrl->max_chan; i++, chan_ctxt++, mhi_chan++) {
+-		struct mhi_ring *ring = &mhi_chan->tre_ring;
+-
+-		if (mhi_chan->offload_ch) {
+-			seq_printf(m, "%s(%u) is an offload channel\n",
+-				   mhi_chan->name, mhi_chan->chan);
+-			continue;
+-		}
+-
+-		if (!mhi_chan->mhi_dev)
+-			continue;
+-
+-		seq_printf(m,
+-			   "%s(%u) state: 0x%lx brstmode: 0x%lx pollcfg: 0x%lx",
+-			   mhi_chan->name, mhi_chan->chan, (chan_ctxt->chcfg &
+-			   CHAN_CTX_CHSTATE_MASK) >> CHAN_CTX_CHSTATE_SHIFT,
+-			   (chan_ctxt->chcfg & CHAN_CTX_BRSTMODE_MASK) >>
+-			   CHAN_CTX_BRSTMODE_SHIFT, (chan_ctxt->chcfg &
+-			   CHAN_CTX_POLLCFG_MASK) >> CHAN_CTX_POLLCFG_SHIFT);
+-
+-		seq_printf(m, " type: 0x%x event ring: %u", chan_ctxt->chtype,
+-			   chan_ctxt->erindex);
+-
+-		seq_printf(m, " base: 0x%llx len: 0x%llx rp: 0x%llx wp: 0x%llx",
+-			   chan_ctxt->rbase, chan_ctxt->rlen, chan_ctxt->rp,
+-			   chan_ctxt->wp);
+-
+-		seq_printf(m, " local rp: 0x%pK local wp: 0x%pK db: 0x%pad\n",
+-			   ring->rp, ring->wp,
+-			   &mhi_chan->db_cfg.db_val);
+-	}
+-
+-	return 0;
+-}
+-
+-static int mhi_device_info_show(struct device *dev, void *data)
+-{
+-	struct mhi_device *mhi_dev;
+-
+-	if (dev->bus != &mhi_bus_type)
+-		return 0;
+-
+-	mhi_dev = to_mhi_device(dev);
+-
+-	seq_printf((struct seq_file *)data, "%s: type: %s dev_wake: %u",
+-		   mhi_dev->name, mhi_dev->dev_type ? "Controller" : "Transfer",
+-		   mhi_dev->dev_wake);
+-
+-	/* for transfer device types only */
+-	if (mhi_dev->dev_type == MHI_DEVICE_XFER)
+-		seq_printf((struct seq_file *)data, " channels: %u(UL)/%u(DL)",
+-			   mhi_dev->ul_chan_id, mhi_dev->dl_chan_id);
+-
+-	seq_puts((struct seq_file *)data, "\n");
+-
+-	return 0;
+-}
+-
+-static int mhi_debugfs_devices_show(struct seq_file *m, void *d)
+-{
+-	struct mhi_controller *mhi_cntrl = m->private;
+-
+-	if (!mhi_is_active(mhi_cntrl)) {
+-		seq_puts(m, "Device not ready\n");
+-		return -ENODEV;
+-	}
+-
+-	device_for_each_child(mhi_cntrl->cntrl_dev, m, mhi_device_info_show);
+-
+-	return 0;
+-}
+-
+-static int mhi_debugfs_regdump_show(struct seq_file *m, void *d)
+-{
+-	struct mhi_controller *mhi_cntrl = m->private;
+-	enum mhi_state state;
+-	enum mhi_ee_type ee;
+-	int i, ret = -EIO;
+-	u32 val;
+-	void __iomem *mhi_base = mhi_cntrl->regs;
+-	void __iomem *bhi_base = mhi_cntrl->bhi;
+-	void __iomem *bhie_base = mhi_cntrl->bhie;
+-	void __iomem *wake_db = mhi_cntrl->wake_db;
+-	struct {
+-		const char *name;
+-		int offset;
+-		void __iomem *base;
+-	} regs[] = {
+-		{ "MHI_REGLEN", MHIREGLEN, mhi_base},
+-		{ "MHI_VER", MHIVER, mhi_base},
+-		{ "MHI_CFG", MHICFG, mhi_base},
+-		{ "MHI_CTRL", MHICTRL, mhi_base},
+-		{ "MHI_STATUS", MHISTATUS, mhi_base},
+-		{ "MHI_WAKE_DB", 0, wake_db},
+-		{ "BHI_EXECENV", BHI_EXECENV, bhi_base},
+-		{ "BHI_STATUS", BHI_STATUS, bhi_base},
+-		{ "BHI_ERRCODE", BHI_ERRCODE, bhi_base},
+-		{ "BHI_ERRDBG1", BHI_ERRDBG1, bhi_base},
+-		{ "BHI_ERRDBG2", BHI_ERRDBG2, bhi_base},
+-		{ "BHI_ERRDBG3", BHI_ERRDBG3, bhi_base},
+-		{ "BHIE_TXVEC_DB", BHIE_TXVECDB_OFFS, bhie_base},
+-		{ "BHIE_TXVEC_STATUS", BHIE_TXVECSTATUS_OFFS, bhie_base},
+-		{ "BHIE_RXVEC_DB", BHIE_RXVECDB_OFFS, bhie_base},
+-		{ "BHIE_RXVEC_STATUS", BHIE_RXVECSTATUS_OFFS, bhie_base},
+-		{ NULL },
+-	};
+-
+-	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+-		return ret;
+-
+-	seq_printf(m, "Host PM state: %s Device state: %s EE: %s\n",
+-		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
+-		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+-		   TO_MHI_EXEC_STR(mhi_cntrl->ee));
+-
+-	state = mhi_get_mhi_state(mhi_cntrl);
+-	ee = mhi_get_exec_env(mhi_cntrl);
+-	seq_printf(m, "Device EE: %s state: %s\n", TO_MHI_EXEC_STR(ee),
+-		   TO_MHI_STATE_STR(state));
+-
+-	for (i = 0; regs[i].name; i++) {
+-		if (!regs[i].base)
+-			continue;
+-		ret = mhi_read_reg(mhi_cntrl, regs[i].base, regs[i].offset,
+-				   &val);
+-		if (ret)
+-			continue;
+-
+-		seq_printf(m, "%s: 0x%x\n", regs[i].name, val);
+-	}
+-
+-	return 0;
+-}
+-
+-static int mhi_debugfs_device_wake_show(struct seq_file *m, void *d)
+-{
+-	struct mhi_controller *mhi_cntrl = m->private;
+-	struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
+-
+-	if (!mhi_is_active(mhi_cntrl)) {
+-		seq_puts(m, "Device not ready\n");
+-		return -ENODEV;
+-	}
+-
+-	seq_printf(m,
+-		   "Wake count: %d\n%s\n", mhi_dev->dev_wake,
+-		   "Usage: echo get/put > device_wake to vote/unvote for M0");
+-
+-	return 0;
+-}
+-
+-static ssize_t mhi_debugfs_device_wake_write(struct file *file,
+-					     const char __user *ubuf,
+-					     size_t count, loff_t *ppos)
+-{
+-	struct seq_file	*m = file->private_data;
+-	struct mhi_controller *mhi_cntrl = m->private;
+-	struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
+-	char buf[16];
+-	int ret = -EINVAL;
+-
+-	if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
+-		return -EFAULT;
+-
+-	if (!strncmp(buf, "get", 3)) {
+-		ret = mhi_device_get_sync(mhi_dev);
+-	} else if (!strncmp(buf, "put", 3)) {
+-		mhi_device_put(mhi_dev);
+-		ret = 0;
+-	}
+-
+-	return ret ? ret : count;
+-}
+-
+-static int mhi_debugfs_timeout_ms_show(struct seq_file *m, void *d)
+-{
+-	struct mhi_controller *mhi_cntrl = m->private;
+-
+-	seq_printf(m, "%u ms\n", mhi_cntrl->timeout_ms);
+-
+-	return 0;
+-}
+-
+-static ssize_t mhi_debugfs_timeout_ms_write(struct file *file,
+-					    const char __user *ubuf,
+-					    size_t count, loff_t *ppos)
+-{
+-	struct seq_file	*m = file->private_data;
+-	struct mhi_controller *mhi_cntrl = m->private;
+-	u32 timeout_ms;
+-
+-	if (kstrtou32_from_user(ubuf, count, 0, &timeout_ms))
+-		return -EINVAL;
+-
+-	mhi_cntrl->timeout_ms = timeout_ms;
+-
+-	return count;
+-}
+-
+-static int mhi_debugfs_states_open(struct inode *inode, struct file *fp)
+-{
+-	return single_open(fp, mhi_debugfs_states_show, inode->i_private);
+-}
+-
+-static int mhi_debugfs_events_open(struct inode *inode, struct file *fp)
+-{
+-	return single_open(fp, mhi_debugfs_events_show, inode->i_private);
+-}
+-
+-static int mhi_debugfs_channels_open(struct inode *inode, struct file *fp)
+-{
+-	return single_open(fp, mhi_debugfs_channels_show, inode->i_private);
+-}
+-
+-static int mhi_debugfs_devices_open(struct inode *inode, struct file *fp)
+-{
+-	return single_open(fp, mhi_debugfs_devices_show, inode->i_private);
+-}
+-
+-static int mhi_debugfs_regdump_open(struct inode *inode, struct file *fp)
+-{
+-	return single_open(fp, mhi_debugfs_regdump_show, inode->i_private);
+-}
+-
+-static int mhi_debugfs_device_wake_open(struct inode *inode, struct file *fp)
+-{
+-	return single_open(fp, mhi_debugfs_device_wake_show, inode->i_private);
+-}
+-
+-static int mhi_debugfs_timeout_ms_open(struct inode *inode, struct file *fp)
+-{
+-	return single_open(fp, mhi_debugfs_timeout_ms_show, inode->i_private);
+-}
+-
+-static const struct file_operations debugfs_states_fops = {
+-	.open = mhi_debugfs_states_open,
+-	.release = single_release,
+-	.read = seq_read,
+-};
+-
+-static const struct file_operations debugfs_events_fops = {
+-	.open = mhi_debugfs_events_open,
+-	.release = single_release,
+-	.read = seq_read,
+-};
+-
+-static const struct file_operations debugfs_channels_fops = {
+-	.open = mhi_debugfs_channels_open,
+-	.release = single_release,
+-	.read = seq_read,
+-};
+-
+-static const struct file_operations debugfs_devices_fops = {
+-	.open = mhi_debugfs_devices_open,
+-	.release = single_release,
+-	.read = seq_read,
+-};
+-
+-static const struct file_operations debugfs_regdump_fops = {
+-	.open = mhi_debugfs_regdump_open,
+-	.release = single_release,
+-	.read = seq_read,
+-};
+-
+-static const struct file_operations debugfs_device_wake_fops = {
+-	.open = mhi_debugfs_device_wake_open,
+-	.write = mhi_debugfs_device_wake_write,
+-	.release = single_release,
+-	.read = seq_read,
+-};
+-
+-static const struct file_operations debugfs_timeout_ms_fops = {
+-	.open = mhi_debugfs_timeout_ms_open,
+-	.write = mhi_debugfs_timeout_ms_write,
+-	.release = single_release,
+-	.read = seq_read,
+-};
+-
+-static struct dentry *mhi_debugfs_root;
+-
+-void mhi_create_debugfs(struct mhi_controller *mhi_cntrl)
+-{
+-	mhi_cntrl->debugfs_dentry =
+-			debugfs_create_dir(dev_name(mhi_cntrl->cntrl_dev),
+-					   mhi_debugfs_root);
+-
+-	debugfs_create_file("states", 0444, mhi_cntrl->debugfs_dentry,
+-			    mhi_cntrl, &debugfs_states_fops);
+-	debugfs_create_file("events", 0444, mhi_cntrl->debugfs_dentry,
+-			    mhi_cntrl, &debugfs_events_fops);
+-	debugfs_create_file("channels", 0444, mhi_cntrl->debugfs_dentry,
+-			    mhi_cntrl, &debugfs_channels_fops);
+-	debugfs_create_file("devices", 0444, mhi_cntrl->debugfs_dentry,
+-			    mhi_cntrl, &debugfs_devices_fops);
+-	debugfs_create_file("regdump", 0444, mhi_cntrl->debugfs_dentry,
+-			    mhi_cntrl, &debugfs_regdump_fops);
+-	debugfs_create_file("device_wake", 0644, mhi_cntrl->debugfs_dentry,
+-			    mhi_cntrl, &debugfs_device_wake_fops);
+-	debugfs_create_file("timeout_ms", 0644, mhi_cntrl->debugfs_dentry,
+-			    mhi_cntrl, &debugfs_timeout_ms_fops);
+-}
+-
+-void mhi_destroy_debugfs(struct mhi_controller *mhi_cntrl)
+-{
+-	debugfs_remove_recursive(mhi_cntrl->debugfs_dentry);
+-	mhi_cntrl->debugfs_dentry = NULL;
+-}
+-
+-void mhi_debugfs_init(void)
+-{
+-	mhi_debugfs_root = debugfs_create_dir(mhi_bus_type.name, NULL);
+-}
+-
+-void mhi_debugfs_exit(void)
+-{
+-	debugfs_remove_recursive(mhi_debugfs_root);
+-}
+diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
+deleted file mode 100644
+index 0d0386f67ffe2..0000000000000
+--- a/drivers/bus/mhi/core/init.c
++++ /dev/null
+@@ -1,1375 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+- *
+- */
+-
+-#include <linux/debugfs.h>
+-#include <linux/device.h>
+-#include <linux/dma-direction.h>
+-#include <linux/dma-mapping.h>
+-#include <linux/interrupt.h>
+-#include <linux/list.h>
+-#include <linux/mhi.h>
+-#include <linux/mod_devicetable.h>
+-#include <linux/module.h>
+-#include <linux/slab.h>
+-#include <linux/vmalloc.h>
+-#include <linux/wait.h>
+-#include "internal.h"
+-
+-const char * const mhi_ee_str[MHI_EE_MAX] = {
+-	[MHI_EE_PBL] = "PBL",
+-	[MHI_EE_SBL] = "SBL",
+-	[MHI_EE_AMSS] = "AMSS",
+-	[MHI_EE_RDDM] = "RDDM",
+-	[MHI_EE_WFW] = "WFW",
+-	[MHI_EE_PTHRU] = "PASS THRU",
+-	[MHI_EE_EDL] = "EDL",
+-	[MHI_EE_DISABLE_TRANSITION] = "DISABLE",
+-	[MHI_EE_NOT_SUPPORTED] = "NOT SUPPORTED",
+-};
+-
+-const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
+-	[DEV_ST_TRANSITION_PBL] = "PBL",
+-	[DEV_ST_TRANSITION_READY] = "READY",
+-	[DEV_ST_TRANSITION_SBL] = "SBL",
+-	[DEV_ST_TRANSITION_MISSION_MODE] = "MISSION_MODE",
+-	[DEV_ST_TRANSITION_SYS_ERR] = "SYS_ERR",
+-	[DEV_ST_TRANSITION_DISABLE] = "DISABLE",
+-};
+-
+-const char * const mhi_state_str[MHI_STATE_MAX] = {
+-	[MHI_STATE_RESET] = "RESET",
+-	[MHI_STATE_READY] = "READY",
+-	[MHI_STATE_M0] = "M0",
+-	[MHI_STATE_M1] = "M1",
+-	[MHI_STATE_M2] = "M2",
+-	[MHI_STATE_M3] = "M3",
+-	[MHI_STATE_M3_FAST] = "M3_FAST",
+-	[MHI_STATE_BHI] = "BHI",
+-	[MHI_STATE_SYS_ERR] = "SYS_ERR",
+-};
+-
+-static const char * const mhi_pm_state_str[] = {
+-	[MHI_PM_STATE_DISABLE] = "DISABLE",
+-	[MHI_PM_STATE_POR] = "POR",
+-	[MHI_PM_STATE_M0] = "M0",
+-	[MHI_PM_STATE_M2] = "M2",
+-	[MHI_PM_STATE_M3_ENTER] = "M?->M3",
+-	[MHI_PM_STATE_M3] = "M3",
+-	[MHI_PM_STATE_M3_EXIT] = "M3->M0",
+-	[MHI_PM_STATE_FW_DL_ERR] = "FW DL Error",
+-	[MHI_PM_STATE_SYS_ERR_DETECT] = "SYS_ERR Detect",
+-	[MHI_PM_STATE_SYS_ERR_PROCESS] = "SYS_ERR Process",
+-	[MHI_PM_STATE_SHUTDOWN_PROCESS] = "SHUTDOWN Process",
+-	[MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "LD or Error Fatal Detect",
+-};
+-
+-const char *to_mhi_pm_state_str(enum mhi_pm_state state)
+-{
+-	int index = find_last_bit((unsigned long *)&state, 32);
+-
+-	if (index >= ARRAY_SIZE(mhi_pm_state_str))
+-		return "Invalid State";
+-
+-	return mhi_pm_state_str[index];
+-}
+-
+-static ssize_t serial_number_show(struct device *dev,
+-				  struct device_attribute *attr,
+-				  char *buf)
+-{
+-	struct mhi_device *mhi_dev = to_mhi_device(dev);
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-
+-	return snprintf(buf, PAGE_SIZE, "Serial Number: %u\n",
+-			mhi_cntrl->serial_number);
+-}
+-static DEVICE_ATTR_RO(serial_number);
+-
+-static ssize_t oem_pk_hash_show(struct device *dev,
+-				struct device_attribute *attr,
+-				char *buf)
+-{
+-	struct mhi_device *mhi_dev = to_mhi_device(dev);
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-	int i, cnt = 0;
+-
+-	for (i = 0; i < ARRAY_SIZE(mhi_cntrl->oem_pk_hash); i++)
+-		cnt += snprintf(buf + cnt, PAGE_SIZE - cnt,
+-				"OEMPKHASH[%d]: 0x%x\n", i,
+-				mhi_cntrl->oem_pk_hash[i]);
+-
+-	return cnt;
+-}
+-static DEVICE_ATTR_RO(oem_pk_hash);
+-
+-static struct attribute *mhi_dev_attrs[] = {
+-	&dev_attr_serial_number.attr,
+-	&dev_attr_oem_pk_hash.attr,
+-	NULL,
+-};
+-ATTRIBUTE_GROUPS(mhi_dev);
+-
+-/* MHI protocol requires the transfer ring to be aligned with ring length */
+-static int mhi_alloc_aligned_ring(struct mhi_controller *mhi_cntrl,
+-				  struct mhi_ring *ring,
+-				  u64 len)
+-{
+-	ring->alloc_size = len + (len - 1);
+-	ring->pre_aligned = mhi_alloc_coherent(mhi_cntrl, ring->alloc_size,
+-					       &ring->dma_handle, GFP_KERNEL);
+-	if (!ring->pre_aligned)
+-		return -ENOMEM;
+-
+-	ring->iommu_base = (ring->dma_handle + (len - 1)) & ~(len - 1);
+-	ring->base = ring->pre_aligned + (ring->iommu_base - ring->dma_handle);
+-
+-	return 0;
+-}
+-
+-void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl)
+-{
+-	int i;
+-	struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
+-
+-	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+-		if (mhi_event->offload_ev)
+-			continue;
+-
+-		free_irq(mhi_cntrl->irq[mhi_event->irq], mhi_event);
+-	}
+-
+-	free_irq(mhi_cntrl->irq[0], mhi_cntrl);
+-}
+-
+-int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl)
+-{
+-	struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	int i, ret;
+-
+-	/* Setup BHI_INTVEC IRQ */
+-	ret = request_threaded_irq(mhi_cntrl->irq[0], mhi_intvec_handler,
+-				   mhi_intvec_threaded_handler,
+-				   IRQF_SHARED | IRQF_NO_SUSPEND,
+-				   "bhi", mhi_cntrl);
+-	if (ret)
+-		return ret;
+-
+-	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+-		if (mhi_event->offload_ev)
+-			continue;
+-
+-		if (mhi_event->irq >= mhi_cntrl->nr_irqs) {
+-			dev_err(dev, "irq %d not available for event ring\n",
+-				mhi_event->irq);
+-			ret = -EINVAL;
+-			goto error_request;
+-		}
+-
+-		ret = request_irq(mhi_cntrl->irq[mhi_event->irq],
+-				  mhi_irq_handler,
+-				  IRQF_SHARED | IRQF_NO_SUSPEND,
+-				  "mhi", mhi_event);
+-		if (ret) {
+-			dev_err(dev, "Error requesting irq:%d for ev:%d\n",
+-				mhi_cntrl->irq[mhi_event->irq], i);
+-			goto error_request;
+-		}
+-	}
+-
+-	return 0;
+-
+-error_request:
+-	for (--i, --mhi_event; i >= 0; i--, mhi_event--) {
+-		if (mhi_event->offload_ev)
+-			continue;
+-
+-		free_irq(mhi_cntrl->irq[mhi_event->irq], mhi_event);
+-	}
+-	free_irq(mhi_cntrl->irq[0], mhi_cntrl);
+-
+-	return ret;
+-}
+-
+-void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl)
+-{
+-	int i;
+-	struct mhi_ctxt *mhi_ctxt = mhi_cntrl->mhi_ctxt;
+-	struct mhi_cmd *mhi_cmd;
+-	struct mhi_event *mhi_event;
+-	struct mhi_ring *ring;
+-
+-	mhi_cmd = mhi_cntrl->mhi_cmd;
+-	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++) {
+-		ring = &mhi_cmd->ring;
+-		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+-				  ring->pre_aligned, ring->dma_handle);
+-		ring->base = NULL;
+-		ring->iommu_base = 0;
+-	}
+-
+-	mhi_free_coherent(mhi_cntrl,
+-			  sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS,
+-			  mhi_ctxt->cmd_ctxt, mhi_ctxt->cmd_ctxt_addr);
+-
+-	mhi_event = mhi_cntrl->mhi_event;
+-	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+-		if (mhi_event->offload_ev)
+-			continue;
+-
+-		ring = &mhi_event->ring;
+-		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+-				  ring->pre_aligned, ring->dma_handle);
+-		ring->base = NULL;
+-		ring->iommu_base = 0;
+-	}
+-
+-	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->er_ctxt) *
+-			  mhi_cntrl->total_ev_rings, mhi_ctxt->er_ctxt,
+-			  mhi_ctxt->er_ctxt_addr);
+-
+-	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->chan_ctxt) *
+-			  mhi_cntrl->max_chan, mhi_ctxt->chan_ctxt,
+-			  mhi_ctxt->chan_ctxt_addr);
+-
+-	kfree(mhi_ctxt);
+-	mhi_cntrl->mhi_ctxt = NULL;
+-}
+-
+-int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
+-{
+-	struct mhi_ctxt *mhi_ctxt;
+-	struct mhi_chan_ctxt *chan_ctxt;
+-	struct mhi_event_ctxt *er_ctxt;
+-	struct mhi_cmd_ctxt *cmd_ctxt;
+-	struct mhi_chan *mhi_chan;
+-	struct mhi_event *mhi_event;
+-	struct mhi_cmd *mhi_cmd;
+-	u32 tmp;
+-	int ret = -ENOMEM, i;
+-
+-	atomic_set(&mhi_cntrl->dev_wake, 0);
+-	atomic_set(&mhi_cntrl->pending_pkts, 0);
+-
+-	mhi_ctxt = kzalloc(sizeof(*mhi_ctxt), GFP_KERNEL);
+-	if (!mhi_ctxt)
+-		return -ENOMEM;
+-
+-	/* Setup channel ctxt */
+-	mhi_ctxt->chan_ctxt = mhi_alloc_coherent(mhi_cntrl,
+-						 sizeof(*mhi_ctxt->chan_ctxt) *
+-						 mhi_cntrl->max_chan,
+-						 &mhi_ctxt->chan_ctxt_addr,
+-						 GFP_KERNEL);
+-	if (!mhi_ctxt->chan_ctxt)
+-		goto error_alloc_chan_ctxt;
+-
+-	mhi_chan = mhi_cntrl->mhi_chan;
+-	chan_ctxt = mhi_ctxt->chan_ctxt;
+-	for (i = 0; i < mhi_cntrl->max_chan; i++, chan_ctxt++, mhi_chan++) {
+-		/* Skip if it is an offload channel */
+-		if (mhi_chan->offload_ch)
+-			continue;
+-
+-		tmp = chan_ctxt->chcfg;
+-		tmp &= ~CHAN_CTX_CHSTATE_MASK;
+-		tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
+-		tmp &= ~CHAN_CTX_BRSTMODE_MASK;
+-		tmp |= (mhi_chan->db_cfg.brstmode << CHAN_CTX_BRSTMODE_SHIFT);
+-		tmp &= ~CHAN_CTX_POLLCFG_MASK;
+-		tmp |= (mhi_chan->db_cfg.pollcfg << CHAN_CTX_POLLCFG_SHIFT);
+-		chan_ctxt->chcfg = tmp;
+-
+-		chan_ctxt->chtype = mhi_chan->type;
+-		chan_ctxt->erindex = mhi_chan->er_index;
+-
+-		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+-		mhi_chan->tre_ring.db_addr = (void __iomem *)&chan_ctxt->wp;
+-	}
+-
+-	/* Setup event context */
+-	mhi_ctxt->er_ctxt = mhi_alloc_coherent(mhi_cntrl,
+-					       sizeof(*mhi_ctxt->er_ctxt) *
+-					       mhi_cntrl->total_ev_rings,
+-					       &mhi_ctxt->er_ctxt_addr,
+-					       GFP_KERNEL);
+-	if (!mhi_ctxt->er_ctxt)
+-		goto error_alloc_er_ctxt;
+-
+-	er_ctxt = mhi_ctxt->er_ctxt;
+-	mhi_event = mhi_cntrl->mhi_event;
+-	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++,
+-		     mhi_event++) {
+-		struct mhi_ring *ring = &mhi_event->ring;
+-
+-		/* Skip if it is an offload event */
+-		if (mhi_event->offload_ev)
+-			continue;
+-
+-		tmp = er_ctxt->intmod;
+-		tmp &= ~EV_CTX_INTMODC_MASK;
+-		tmp &= ~EV_CTX_INTMODT_MASK;
+-		tmp |= (mhi_event->intmod << EV_CTX_INTMODT_SHIFT);
+-		er_ctxt->intmod = tmp;
+-
+-		er_ctxt->ertype = MHI_ER_TYPE_VALID;
+-		er_ctxt->msivec = mhi_event->irq;
+-		mhi_event->db_cfg.db_mode = true;
+-
+-		ring->el_size = sizeof(struct mhi_tre);
+-		ring->len = ring->el_size * ring->elements;
+-		ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
+-		if (ret)
+-			goto error_alloc_er;
+-
+-		/*
+-		 * If the read pointer equals to the write pointer, then the
+-		 * ring is empty
+-		 */
+-		ring->rp = ring->wp = ring->base;
+-		er_ctxt->rbase = ring->iommu_base;
+-		er_ctxt->rp = er_ctxt->wp = er_ctxt->rbase;
+-		er_ctxt->rlen = ring->len;
+-		ring->ctxt_wp = &er_ctxt->wp;
+-	}
+-
+-	/* Setup cmd context */
+-	ret = -ENOMEM;
+-	mhi_ctxt->cmd_ctxt = mhi_alloc_coherent(mhi_cntrl,
+-						sizeof(*mhi_ctxt->cmd_ctxt) *
+-						NR_OF_CMD_RINGS,
+-						&mhi_ctxt->cmd_ctxt_addr,
+-						GFP_KERNEL);
+-	if (!mhi_ctxt->cmd_ctxt)
+-		goto error_alloc_er;
+-
+-	mhi_cmd = mhi_cntrl->mhi_cmd;
+-	cmd_ctxt = mhi_ctxt->cmd_ctxt;
+-	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
+-		struct mhi_ring *ring = &mhi_cmd->ring;
+-
+-		ring->el_size = sizeof(struct mhi_tre);
+-		ring->elements = CMD_EL_PER_RING;
+-		ring->len = ring->el_size * ring->elements;
+-		ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
+-		if (ret)
+-			goto error_alloc_cmd;
+-
+-		ring->rp = ring->wp = ring->base;
+-		cmd_ctxt->rbase = ring->iommu_base;
+-		cmd_ctxt->rp = cmd_ctxt->wp = cmd_ctxt->rbase;
+-		cmd_ctxt->rlen = ring->len;
+-		ring->ctxt_wp = &cmd_ctxt->wp;
+-	}
+-
+-	mhi_cntrl->mhi_ctxt = mhi_ctxt;
+-
+-	return 0;
+-
+-error_alloc_cmd:
+-	for (--i, --mhi_cmd; i >= 0; i--, mhi_cmd--) {
+-		struct mhi_ring *ring = &mhi_cmd->ring;
+-
+-		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+-				  ring->pre_aligned, ring->dma_handle);
+-	}
+-	mhi_free_coherent(mhi_cntrl,
+-			  sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS,
+-			  mhi_ctxt->cmd_ctxt, mhi_ctxt->cmd_ctxt_addr);
+-	i = mhi_cntrl->total_ev_rings;
+-	mhi_event = mhi_cntrl->mhi_event + i;
+-
+-error_alloc_er:
+-	for (--i, --mhi_event; i >= 0; i--, mhi_event--) {
+-		struct mhi_ring *ring = &mhi_event->ring;
+-
+-		if (mhi_event->offload_ev)
+-			continue;
+-
+-		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+-				  ring->pre_aligned, ring->dma_handle);
+-	}
+-	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->er_ctxt) *
+-			  mhi_cntrl->total_ev_rings, mhi_ctxt->er_ctxt,
+-			  mhi_ctxt->er_ctxt_addr);
+-
+-error_alloc_er_ctxt:
+-	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->chan_ctxt) *
+-			  mhi_cntrl->max_chan, mhi_ctxt->chan_ctxt,
+-			  mhi_ctxt->chan_ctxt_addr);
+-
+-error_alloc_chan_ctxt:
+-	kfree(mhi_ctxt);
+-
+-	return ret;
+-}
+-
+-int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
+-{
+-	u32 val;
+-	int i, ret;
+-	struct mhi_chan *mhi_chan;
+-	struct mhi_event *mhi_event;
+-	void __iomem *base = mhi_cntrl->regs;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	struct {
+-		u32 offset;
+-		u32 mask;
+-		u32 shift;
+-		u32 val;
+-	} reg_info[] = {
+-		{
+-			CCABAP_HIGHER, U32_MAX, 0,
+-			upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
+-		},
+-		{
+-			CCABAP_LOWER, U32_MAX, 0,
+-			lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
+-		},
+-		{
+-			ECABAP_HIGHER, U32_MAX, 0,
+-			upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
+-		},
+-		{
+-			ECABAP_LOWER, U32_MAX, 0,
+-			lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
+-		},
+-		{
+-			CRCBAP_HIGHER, U32_MAX, 0,
+-			upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
+-		},
+-		{
+-			CRCBAP_LOWER, U32_MAX, 0,
+-			lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
+-		},
+-		{
+-			MHICFG, MHICFG_NER_MASK, MHICFG_NER_SHIFT,
+-			mhi_cntrl->total_ev_rings,
+-		},
+-		{
+-			MHICFG, MHICFG_NHWER_MASK, MHICFG_NHWER_SHIFT,
+-			mhi_cntrl->hw_ev_rings,
+-		},
+-		{
+-			MHICTRLBASE_HIGHER, U32_MAX, 0,
+-			upper_32_bits(mhi_cntrl->iova_start),
+-		},
+-		{
+-			MHICTRLBASE_LOWER, U32_MAX, 0,
+-			lower_32_bits(mhi_cntrl->iova_start),
+-		},
+-		{
+-			MHIDATABASE_HIGHER, U32_MAX, 0,
+-			upper_32_bits(mhi_cntrl->iova_start),
+-		},
+-		{
+-			MHIDATABASE_LOWER, U32_MAX, 0,
+-			lower_32_bits(mhi_cntrl->iova_start),
+-		},
+-		{
+-			MHICTRLLIMIT_HIGHER, U32_MAX, 0,
+-			upper_32_bits(mhi_cntrl->iova_stop),
+-		},
+-		{
+-			MHICTRLLIMIT_LOWER, U32_MAX, 0,
+-			lower_32_bits(mhi_cntrl->iova_stop),
+-		},
+-		{
+-			MHIDATALIMIT_HIGHER, U32_MAX, 0,
+-			upper_32_bits(mhi_cntrl->iova_stop),
+-		},
+-		{
+-			MHIDATALIMIT_LOWER, U32_MAX, 0,
+-			lower_32_bits(mhi_cntrl->iova_stop),
+-		},
+-		{ 0, 0, 0 }
+-	};
+-
+-	dev_dbg(dev, "Initializing MHI registers\n");
+-
+-	/* Read channel db offset */
+-	ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK,
+-				 CHDBOFF_CHDBOFF_SHIFT, &val);
+-	if (ret) {
+-		dev_err(dev, "Unable to read CHDBOFF register\n");
+-		return -EIO;
+-	}
+-
+-	/* Setup wake db */
+-	mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB);
+-	mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 4, 0);
+-	mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 0, 0);
+-	mhi_cntrl->wake_set = false;
+-
+-	/* Setup channel db address for each channel in tre_ring */
+-	mhi_chan = mhi_cntrl->mhi_chan;
+-	for (i = 0; i < mhi_cntrl->max_chan; i++, val += 8, mhi_chan++)
+-		mhi_chan->tre_ring.db_addr = base + val;
+-
+-	/* Read event ring db offset */
+-	ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK,
+-				 ERDBOFF_ERDBOFF_SHIFT, &val);
+-	if (ret) {
+-		dev_err(dev, "Unable to read ERDBOFF register\n");
+-		return -EIO;
+-	}
+-
+-	/* Setup event db address for each ev_ring */
+-	mhi_event = mhi_cntrl->mhi_event;
+-	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, val += 8, mhi_event++) {
+-		if (mhi_event->offload_ev)
+-			continue;
+-
+-		mhi_event->ring.db_addr = base + val;
+-	}
+-
+-	/* Setup DB register for primary CMD rings */
+-	mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING].ring.db_addr = base + CRDB_LOWER;
+-
+-	/* Write to MMIO registers */
+-	for (i = 0; reg_info[i].offset; i++)
+-		mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
+-				    reg_info[i].mask, reg_info[i].shift,
+-				    reg_info[i].val);
+-
+-	return 0;
+-}
+-
+-void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+-			  struct mhi_chan *mhi_chan)
+-{
+-	struct mhi_ring *buf_ring;
+-	struct mhi_ring *tre_ring;
+-	struct mhi_chan_ctxt *chan_ctxt;
+-	u32 tmp;
+-
+-	buf_ring = &mhi_chan->buf_ring;
+-	tre_ring = &mhi_chan->tre_ring;
+-	chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
+-
+-	mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size,
+-			  tre_ring->pre_aligned, tre_ring->dma_handle);
+-	vfree(buf_ring->base);
+-
+-	buf_ring->base = tre_ring->base = NULL;
+-	tre_ring->ctxt_wp = NULL;
+-	chan_ctxt->rbase = 0;
+-	chan_ctxt->rlen = 0;
+-	chan_ctxt->rp = 0;
+-	chan_ctxt->wp = 0;
+-
+-	tmp = chan_ctxt->chcfg;
+-	tmp &= ~CHAN_CTX_CHSTATE_MASK;
+-	tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
+-	chan_ctxt->chcfg = tmp;
+-
+-	/* Update to all cores */
+-	smp_wmb();
+-}
+-
+-int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+-		       struct mhi_chan *mhi_chan)
+-{
+-	struct mhi_ring *buf_ring;
+-	struct mhi_ring *tre_ring;
+-	struct mhi_chan_ctxt *chan_ctxt;
+-	u32 tmp;
+-	int ret;
+-
+-	buf_ring = &mhi_chan->buf_ring;
+-	tre_ring = &mhi_chan->tre_ring;
+-	tre_ring->el_size = sizeof(struct mhi_tre);
+-	tre_ring->len = tre_ring->el_size * tre_ring->elements;
+-	chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
+-	ret = mhi_alloc_aligned_ring(mhi_cntrl, tre_ring, tre_ring->len);
+-	if (ret)
+-		return -ENOMEM;
+-
+-	buf_ring->el_size = sizeof(struct mhi_buf_info);
+-	buf_ring->len = buf_ring->el_size * buf_ring->elements;
+-	buf_ring->base = vzalloc(buf_ring->len);
+-
+-	if (!buf_ring->base) {
+-		mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size,
+-				  tre_ring->pre_aligned, tre_ring->dma_handle);
+-		return -ENOMEM;
+-	}
+-
+-	tmp = chan_ctxt->chcfg;
+-	tmp &= ~CHAN_CTX_CHSTATE_MASK;
+-	tmp |= (MHI_CH_STATE_ENABLED << CHAN_CTX_CHSTATE_SHIFT);
+-	chan_ctxt->chcfg = tmp;
+-
+-	chan_ctxt->rbase = tre_ring->iommu_base;
+-	chan_ctxt->rp = chan_ctxt->wp = chan_ctxt->rbase;
+-	chan_ctxt->rlen = tre_ring->len;
+-	tre_ring->ctxt_wp = &chan_ctxt->wp;
+-
+-	tre_ring->rp = tre_ring->wp = tre_ring->base;
+-	buf_ring->rp = buf_ring->wp = buf_ring->base;
+-	mhi_chan->db_cfg.db_mode = 1;
+-
+-	/* Update to all cores */
+-	smp_wmb();
+-
+-	return 0;
+-}
+-
+-static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
+-			const struct mhi_controller_config *config)
+-{
+-	struct mhi_event *mhi_event;
+-	const struct mhi_event_config *event_cfg;
+-	struct device *dev = mhi_cntrl->cntrl_dev;
+-	int i, num;
+-
+-	num = config->num_events;
+-	mhi_cntrl->total_ev_rings = num;
+-	mhi_cntrl->mhi_event = kcalloc(num, sizeof(*mhi_cntrl->mhi_event),
+-				       GFP_KERNEL);
+-	if (!mhi_cntrl->mhi_event)
+-		return -ENOMEM;
+-
+-	/* Populate event ring */
+-	mhi_event = mhi_cntrl->mhi_event;
+-	for (i = 0; i < num; i++) {
+-		event_cfg = &config->event_cfg[i];
+-
+-		mhi_event->er_index = i;
+-		mhi_event->ring.elements = event_cfg->num_elements;
+-		mhi_event->intmod = event_cfg->irq_moderation_ms;
+-		mhi_event->irq = event_cfg->irq;
+-
+-		if (event_cfg->channel != U32_MAX) {
+-			/* This event ring has a dedicated channel */
+-			mhi_event->chan = event_cfg->channel;
+-			if (mhi_event->chan >= mhi_cntrl->max_chan) {
+-				dev_err(dev,
+-					"Event Ring channel not available\n");
+-				goto error_ev_cfg;
+-			}
+-
+-			mhi_event->mhi_chan =
+-				&mhi_cntrl->mhi_chan[mhi_event->chan];
+-		}
+-
+-		/* Priority is fixed to 1 for now */
+-		mhi_event->priority = 1;
+-
+-		mhi_event->db_cfg.brstmode = event_cfg->mode;
+-		if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
+-			goto error_ev_cfg;
+-
+-		if (mhi_event->db_cfg.brstmode == MHI_DB_BRST_ENABLE)
+-			mhi_event->db_cfg.process_db = mhi_db_brstmode;
+-		else
+-			mhi_event->db_cfg.process_db = mhi_db_brstmode_disable;
+-
+-		mhi_event->data_type = event_cfg->data_type;
+-
+-		switch (mhi_event->data_type) {
+-		case MHI_ER_DATA:
+-			mhi_event->process_event = mhi_process_data_event_ring;
+-			break;
+-		case MHI_ER_CTRL:
+-			mhi_event->process_event = mhi_process_ctrl_ev_ring;
+-			break;
+-		default:
+-			dev_err(dev, "Event Ring type not supported\n");
+-			goto error_ev_cfg;
+-		}
+-
+-		mhi_event->hw_ring = event_cfg->hardware_event;
+-		if (mhi_event->hw_ring)
+-			mhi_cntrl->hw_ev_rings++;
+-		else
+-			mhi_cntrl->sw_ev_rings++;
+-
+-		mhi_event->cl_manage = event_cfg->client_managed;
+-		mhi_event->offload_ev = event_cfg->offload_channel;
+-		mhi_event++;
+-	}
+-
+-	return 0;
+-
+-error_ev_cfg:
+-
+-	kfree(mhi_cntrl->mhi_event);
+-	return -EINVAL;
+-}
+-
+-static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
+-			const struct mhi_controller_config *config)
+-{
+-	const struct mhi_channel_config *ch_cfg;
+-	struct device *dev = mhi_cntrl->cntrl_dev;
+-	int i;
+-	u32 chan;
+-
+-	mhi_cntrl->max_chan = config->max_channels;
+-
+-	/*
+-	 * The allocation of MHI channels can exceed 32KB in some scenarios,
+-	 * so to avoid any memory possible allocation failures, vzalloc is
+-	 * used here
+-	 */
+-	mhi_cntrl->mhi_chan = vzalloc(mhi_cntrl->max_chan *
+-				      sizeof(*mhi_cntrl->mhi_chan));
+-	if (!mhi_cntrl->mhi_chan)
+-		return -ENOMEM;
+-
+-	INIT_LIST_HEAD(&mhi_cntrl->lpm_chans);
+-
+-	/* Populate channel configurations */
+-	for (i = 0; i < config->num_channels; i++) {
+-		struct mhi_chan *mhi_chan;
+-
+-		ch_cfg = &config->ch_cfg[i];
+-
+-		chan = ch_cfg->num;
+-		if (chan >= mhi_cntrl->max_chan) {
+-			dev_err(dev, "Channel %d not available\n", chan);
+-			goto error_chan_cfg;
+-		}
+-
+-		mhi_chan = &mhi_cntrl->mhi_chan[chan];
+-		mhi_chan->name = ch_cfg->name;
+-		mhi_chan->chan = chan;
+-
+-		mhi_chan->tre_ring.elements = ch_cfg->num_elements;
+-		if (!mhi_chan->tre_ring.elements)
+-			goto error_chan_cfg;
+-
+-		/*
+-		 * For some channels, local ring length should be bigger than
+-		 * the transfer ring length due to internal logical channels
+-		 * in device. So host can queue much more buffers than transfer
+-		 * ring length. Example, RSC channels should have a larger local
+-		 * channel length than transfer ring length.
+-		 */
+-		mhi_chan->buf_ring.elements = ch_cfg->local_elements;
+-		if (!mhi_chan->buf_ring.elements)
+-			mhi_chan->buf_ring.elements = mhi_chan->tre_ring.elements;
+-		mhi_chan->er_index = ch_cfg->event_ring;
+-		mhi_chan->dir = ch_cfg->dir;
+-
+-		/*
+-		 * For most channels, chtype is identical to channel directions.
+-		 * So, if it is not defined then assign channel direction to
+-		 * chtype
+-		 */
+-		mhi_chan->type = ch_cfg->type;
+-		if (!mhi_chan->type)
+-			mhi_chan->type = (enum mhi_ch_type)mhi_chan->dir;
+-
+-		mhi_chan->ee_mask = ch_cfg->ee_mask;
+-		mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
+-		mhi_chan->lpm_notify = ch_cfg->lpm_notify;
+-		mhi_chan->offload_ch = ch_cfg->offload_channel;
+-		mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
+-		mhi_chan->pre_alloc = ch_cfg->auto_queue;
+-		mhi_chan->auto_start = ch_cfg->auto_start;
+-
+-		/*
+-		 * If MHI host allocates buffers, then the channel direction
+-		 * should be DMA_FROM_DEVICE
+-		 */
+-		if (mhi_chan->pre_alloc && mhi_chan->dir != DMA_FROM_DEVICE) {
+-			dev_err(dev, "Invalid channel configuration\n");
+-			goto error_chan_cfg;
+-		}
+-
+-		/*
+-		 * Bi-directional and direction less channel must be an
+-		 * offload channel
+-		 */
+-		if ((mhi_chan->dir == DMA_BIDIRECTIONAL ||
+-		     mhi_chan->dir == DMA_NONE) && !mhi_chan->offload_ch) {
+-			dev_err(dev, "Invalid channel configuration\n");
+-			goto error_chan_cfg;
+-		}
+-
+-		if (!mhi_chan->offload_ch) {
+-			mhi_chan->db_cfg.brstmode = ch_cfg->doorbell;
+-			if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) {
+-				dev_err(dev, "Invalid Door bell mode\n");
+-				goto error_chan_cfg;
+-			}
+-		}
+-
+-		if (mhi_chan->db_cfg.brstmode == MHI_DB_BRST_ENABLE)
+-			mhi_chan->db_cfg.process_db = mhi_db_brstmode;
+-		else
+-			mhi_chan->db_cfg.process_db = mhi_db_brstmode_disable;
+-
+-		mhi_chan->configured = true;
+-
+-		if (mhi_chan->lpm_notify)
+-			list_add_tail(&mhi_chan->node, &mhi_cntrl->lpm_chans);
+-	}
+-
+-	return 0;
+-
+-error_chan_cfg:
+-	vfree(mhi_cntrl->mhi_chan);
+-
+-	return -EINVAL;
+-}
+-
+-static int parse_config(struct mhi_controller *mhi_cntrl,
+-			const struct mhi_controller_config *config)
+-{
+-	int ret;
+-
+-	/* Parse MHI channel configuration */
+-	ret = parse_ch_cfg(mhi_cntrl, config);
+-	if (ret)
+-		return ret;
+-
+-	/* Parse MHI event configuration */
+-	ret = parse_ev_cfg(mhi_cntrl, config);
+-	if (ret)
+-		goto error_ev_cfg;
+-
+-	mhi_cntrl->timeout_ms = config->timeout_ms;
+-	if (!mhi_cntrl->timeout_ms)
+-		mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
+-
+-	mhi_cntrl->bounce_buf = config->use_bounce_buf;
+-	mhi_cntrl->buffer_len = config->buf_len;
+-	if (!mhi_cntrl->buffer_len)
+-		mhi_cntrl->buffer_len = MHI_MAX_MTU;
+-
+-	/* By default, host is allowed to ring DB in both M0 and M2 states */
+-	mhi_cntrl->db_access = MHI_PM_M0 | MHI_PM_M2;
+-	if (config->m2_no_db)
+-		mhi_cntrl->db_access &= ~MHI_PM_M2;
+-
+-	return 0;
+-
+-error_ev_cfg:
+-	vfree(mhi_cntrl->mhi_chan);
+-
+-	return ret;
+-}
+-
+-int mhi_register_controller(struct mhi_controller *mhi_cntrl,
+-			    const struct mhi_controller_config *config)
+-{
+-	struct mhi_event *mhi_event;
+-	struct mhi_chan *mhi_chan;
+-	struct mhi_cmd *mhi_cmd;
+-	struct mhi_device *mhi_dev;
+-	u32 soc_info;
+-	int ret, i;
+-
+-	if (!mhi_cntrl)
+-		return -EINVAL;
+-
+-	if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put ||
+-	    !mhi_cntrl->status_cb || !mhi_cntrl->read_reg ||
+-	    !mhi_cntrl->write_reg)
+-		return -EINVAL;
+-
+-	ret = parse_config(mhi_cntrl, config);
+-	if (ret)
+-		return -EINVAL;
+-
+-	mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS,
+-				     sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
+-	if (!mhi_cntrl->mhi_cmd) {
+-		ret = -ENOMEM;
+-		goto error_alloc_cmd;
+-	}
+-
+-	INIT_LIST_HEAD(&mhi_cntrl->transition_list);
+-	mutex_init(&mhi_cntrl->pm_mutex);
+-	rwlock_init(&mhi_cntrl->pm_lock);
+-	spin_lock_init(&mhi_cntrl->transition_lock);
+-	spin_lock_init(&mhi_cntrl->wlock);
+-	INIT_WORK(&mhi_cntrl->st_worker, mhi_pm_st_worker);
+-	init_waitqueue_head(&mhi_cntrl->state_event);
+-
+-	mhi_cmd = mhi_cntrl->mhi_cmd;
+-	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++)
+-		spin_lock_init(&mhi_cmd->lock);
+-
+-	mhi_event = mhi_cntrl->mhi_event;
+-	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+-		/* Skip for offload events */
+-		if (mhi_event->offload_ev)
+-			continue;
+-
+-		mhi_event->mhi_cntrl = mhi_cntrl;
+-		spin_lock_init(&mhi_event->lock);
+-		if (mhi_event->data_type == MHI_ER_CTRL)
+-			tasklet_init(&mhi_event->task, mhi_ctrl_ev_task,
+-				     (ulong)mhi_event);
+-		else
+-			tasklet_init(&mhi_event->task, mhi_ev_task,
+-				     (ulong)mhi_event);
+-	}
+-
+-	mhi_chan = mhi_cntrl->mhi_chan;
+-	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+-		mutex_init(&mhi_chan->mutex);
+-		init_completion(&mhi_chan->completion);
+-		rwlock_init(&mhi_chan->lock);
+-
+-		/* used in setting bei field of TRE */
+-		mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
+-		mhi_chan->intmod = mhi_event->intmod;
+-	}
+-
+-	if (mhi_cntrl->bounce_buf) {
+-		mhi_cntrl->map_single = mhi_map_single_use_bb;
+-		mhi_cntrl->unmap_single = mhi_unmap_single_use_bb;
+-	} else {
+-		mhi_cntrl->map_single = mhi_map_single_no_bb;
+-		mhi_cntrl->unmap_single = mhi_unmap_single_no_bb;
+-	}
+-
+-	/* Read the MHI device info */
+-	ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs,
+-			   SOC_HW_VERSION_OFFS, &soc_info);
+-	if (ret)
+-		goto error_alloc_dev;
+-
+-	mhi_cntrl->family_number = (soc_info & SOC_HW_VERSION_FAM_NUM_BMSK) >>
+-					SOC_HW_VERSION_FAM_NUM_SHFT;
+-	mhi_cntrl->device_number = (soc_info & SOC_HW_VERSION_DEV_NUM_BMSK) >>
+-					SOC_HW_VERSION_DEV_NUM_SHFT;
+-	mhi_cntrl->major_version = (soc_info & SOC_HW_VERSION_MAJOR_VER_BMSK) >>
+-					SOC_HW_VERSION_MAJOR_VER_SHFT;
+-	mhi_cntrl->minor_version = (soc_info & SOC_HW_VERSION_MINOR_VER_BMSK) >>
+-					SOC_HW_VERSION_MINOR_VER_SHFT;
+-
+-	/* Register controller with MHI bus */
+-	mhi_dev = mhi_alloc_device(mhi_cntrl);
+-	if (IS_ERR(mhi_dev)) {
+-		dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate MHI device\n");
+-		ret = PTR_ERR(mhi_dev);
+-		goto error_alloc_dev;
+-	}
+-
+-	mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
+-	mhi_dev->mhi_cntrl = mhi_cntrl;
+-	dev_set_name(&mhi_dev->dev, "%s", dev_name(mhi_cntrl->cntrl_dev));
+-	mhi_dev->name = dev_name(mhi_cntrl->cntrl_dev);
+-
+-	/* Init wakeup source */
+-	device_init_wakeup(&mhi_dev->dev, true);
+-
+-	ret = device_add(&mhi_dev->dev);
+-	if (ret)
+-		goto error_add_dev;
+-
+-	mhi_cntrl->mhi_dev = mhi_dev;
+-
+-	mhi_create_debugfs(mhi_cntrl);
+-
+-	return 0;
+-
+-error_add_dev:
+-	put_device(&mhi_dev->dev);
+-
+-error_alloc_dev:
+-	kfree(mhi_cntrl->mhi_cmd);
+-
+-error_alloc_cmd:
+-	vfree(mhi_cntrl->mhi_chan);
+-	kfree(mhi_cntrl->mhi_event);
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(mhi_register_controller);
+-
+-void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
+-{
+-	struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
+-	struct mhi_chan *mhi_chan = mhi_cntrl->mhi_chan;
+-	unsigned int i;
+-
+-	mhi_destroy_debugfs(mhi_cntrl);
+-
+-	kfree(mhi_cntrl->mhi_cmd);
+-	kfree(mhi_cntrl->mhi_event);
+-
+-	/* Drop the references to MHI devices created for channels */
+-	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+-		if (!mhi_chan->mhi_dev)
+-			continue;
+-
+-		put_device(&mhi_chan->mhi_dev->dev);
+-	}
+-	vfree(mhi_cntrl->mhi_chan);
+-
+-	device_del(&mhi_dev->dev);
+-	put_device(&mhi_dev->dev);
+-}
+-EXPORT_SYMBOL_GPL(mhi_unregister_controller);
+-
+-struct mhi_controller *mhi_alloc_controller(void)
+-{
+-	struct mhi_controller *mhi_cntrl;
+-
+-	mhi_cntrl = kzalloc(sizeof(*mhi_cntrl), GFP_KERNEL);
+-
+-	return mhi_cntrl;
+-}
+-EXPORT_SYMBOL_GPL(mhi_alloc_controller);
+-
+-void mhi_free_controller(struct mhi_controller *mhi_cntrl)
+-{
+-	kfree(mhi_cntrl);
+-}
+-EXPORT_SYMBOL_GPL(mhi_free_controller);
+-
+-int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
+-{
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	u32 bhie_off;
+-	int ret;
+-
+-	mutex_lock(&mhi_cntrl->pm_mutex);
+-
+-	ret = mhi_init_dev_ctxt(mhi_cntrl);
+-	if (ret)
+-		goto error_dev_ctxt;
+-
+-	/*
+-	 * Allocate RDDM table if specified, this table is for debugging purpose
+-	 */
+-	if (mhi_cntrl->rddm_size) {
+-		mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->rddm_image,
+-				     mhi_cntrl->rddm_size);
+-
+-		/*
+-		 * This controller supports RDDM, so we need to manually clear
+-		 * BHIE RX registers since POR values are undefined.
+-		 */
+-		ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF,
+-				   &bhie_off);
+-		if (ret) {
+-			dev_err(dev, "Error getting BHIE offset\n");
+-			goto bhie_error;
+-		}
+-
+-		mhi_cntrl->bhie = mhi_cntrl->regs + bhie_off;
+-		memset_io(mhi_cntrl->bhie + BHIE_RXVECADDR_LOW_OFFS,
+-			  0, BHIE_RXVECSTATUS_OFFS - BHIE_RXVECADDR_LOW_OFFS +
+-			  4);
+-
+-		if (mhi_cntrl->rddm_image)
+-			mhi_rddm_prepare(mhi_cntrl, mhi_cntrl->rddm_image);
+-	}
+-
+-	mhi_cntrl->pre_init = true;
+-
+-	mutex_unlock(&mhi_cntrl->pm_mutex);
+-
+-	return 0;
+-
+-bhie_error:
+-	if (mhi_cntrl->rddm_image) {
+-		mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->rddm_image);
+-		mhi_cntrl->rddm_image = NULL;
+-	}
+-
+-error_dev_ctxt:
+-	mutex_unlock(&mhi_cntrl->pm_mutex);
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(mhi_prepare_for_power_up);
+-
+-void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl)
+-{
+-	if (mhi_cntrl->fbc_image) {
+-		mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
+-		mhi_cntrl->fbc_image = NULL;
+-	}
+-
+-	if (mhi_cntrl->rddm_image) {
+-		mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->rddm_image);
+-		mhi_cntrl->rddm_image = NULL;
+-	}
+-
+-	mhi_deinit_dev_ctxt(mhi_cntrl);
+-	mhi_cntrl->pre_init = false;
+-}
+-EXPORT_SYMBOL_GPL(mhi_unprepare_after_power_down);
+-
+-static void mhi_release_device(struct device *dev)
+-{
+-	struct mhi_device *mhi_dev = to_mhi_device(dev);
+-
+-	/*
+-	 * We need to set the mhi_chan->mhi_dev to NULL here since the MHI
+-	 * devices for the channels will only get created if the mhi_dev
+-	 * associated with it is NULL. This scenario will happen during the
+-	 * controller suspend and resume.
+-	 */
+-	if (mhi_dev->ul_chan)
+-		mhi_dev->ul_chan->mhi_dev = NULL;
+-
+-	if (mhi_dev->dl_chan)
+-		mhi_dev->dl_chan->mhi_dev = NULL;
+-
+-	kfree(mhi_dev);
+-}
+-
+-struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
+-{
+-	struct mhi_device *mhi_dev;
+-	struct device *dev;
+-
+-	mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
+-	if (!mhi_dev)
+-		return ERR_PTR(-ENOMEM);
+-
+-	dev = &mhi_dev->dev;
+-	device_initialize(dev);
+-	dev->bus = &mhi_bus_type;
+-	dev->release = mhi_release_device;
+-	dev->parent = mhi_cntrl->cntrl_dev;
+-	mhi_dev->mhi_cntrl = mhi_cntrl;
+-	mhi_dev->dev_wake = 0;
+-
+-	return mhi_dev;
+-}
+-
+-static int mhi_driver_probe(struct device *dev)
+-{
+-	struct mhi_device *mhi_dev = to_mhi_device(dev);
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-	struct device_driver *drv = dev->driver;
+-	struct mhi_driver *mhi_drv = to_mhi_driver(drv);
+-	struct mhi_event *mhi_event;
+-	struct mhi_chan *ul_chan = mhi_dev->ul_chan;
+-	struct mhi_chan *dl_chan = mhi_dev->dl_chan;
+-	int ret;
+-
+-	/* Bring device out of LPM */
+-	ret = mhi_device_get_sync(mhi_dev);
+-	if (ret)
+-		return ret;
+-
+-	ret = -EINVAL;
+-
+-	if (ul_chan) {
+-		/*
+-		 * If channel supports LPM notifications then status_cb should
+-		 * be provided
+-		 */
+-		if (ul_chan->lpm_notify && !mhi_drv->status_cb)
+-			goto exit_probe;
+-
+-		/* For non-offload channels then xfer_cb should be provided */
+-		if (!ul_chan->offload_ch && !mhi_drv->ul_xfer_cb)
+-			goto exit_probe;
+-
+-		ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
+-		if (ul_chan->auto_start) {
+-			ret = mhi_prepare_channel(mhi_cntrl, ul_chan);
+-			if (ret)
+-				goto exit_probe;
+-		}
+-	}
+-
+-	ret = -EINVAL;
+-	if (dl_chan) {
+-		/*
+-		 * If channel supports LPM notifications then status_cb should
+-		 * be provided
+-		 */
+-		if (dl_chan->lpm_notify && !mhi_drv->status_cb)
+-			goto exit_probe;
+-
+-		/* For non-offload channels then xfer_cb should be provided */
+-		if (!dl_chan->offload_ch && !mhi_drv->dl_xfer_cb)
+-			goto exit_probe;
+-
+-		mhi_event = &mhi_cntrl->mhi_event[dl_chan->er_index];
+-
+-		/*
+-		 * If the channel event ring is managed by client, then
+-		 * status_cb must be provided so that the framework can
+-		 * notify pending data
+-		 */
+-		if (mhi_event->cl_manage && !mhi_drv->status_cb)
+-			goto exit_probe;
+-
+-		dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
+-	}
+-
+-	/* Call the user provided probe function */
+-	ret = mhi_drv->probe(mhi_dev, mhi_dev->id);
+-	if (ret)
+-		goto exit_probe;
+-
+-	if (dl_chan && dl_chan->auto_start)
+-		mhi_prepare_channel(mhi_cntrl, dl_chan);
+-
+-	mhi_device_put(mhi_dev);
+-
+-	return ret;
+-
+-exit_probe:
+-	mhi_unprepare_from_transfer(mhi_dev);
+-
+-	mhi_device_put(mhi_dev);
+-
+-	return ret;
+-}
+-
+-static int mhi_driver_remove(struct device *dev)
+-{
+-	struct mhi_device *mhi_dev = to_mhi_device(dev);
+-	struct mhi_driver *mhi_drv = to_mhi_driver(dev->driver);
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-	struct mhi_chan *mhi_chan;
+-	enum mhi_ch_state ch_state[] = {
+-		MHI_CH_STATE_DISABLED,
+-		MHI_CH_STATE_DISABLED
+-	};
+-	int dir;
+-
+-	/* Skip if it is a controller device */
+-	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+-		return 0;
+-
+-	/* Reset both channels */
+-	for (dir = 0; dir < 2; dir++) {
+-		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+-
+-		if (!mhi_chan)
+-			continue;
+-
+-		/* Wake all threads waiting for completion */
+-		write_lock_irq(&mhi_chan->lock);
+-		mhi_chan->ccs = MHI_EV_CC_INVALID;
+-		complete_all(&mhi_chan->completion);
+-		write_unlock_irq(&mhi_chan->lock);
+-
+-		/* Set the channel state to disabled */
+-		mutex_lock(&mhi_chan->mutex);
+-		write_lock_irq(&mhi_chan->lock);
+-		ch_state[dir] = mhi_chan->ch_state;
+-		mhi_chan->ch_state = MHI_CH_STATE_SUSPENDED;
+-		write_unlock_irq(&mhi_chan->lock);
+-
+-		/* Reset the non-offload channel */
+-		if (!mhi_chan->offload_ch)
+-			mhi_reset_chan(mhi_cntrl, mhi_chan);
+-
+-		mutex_unlock(&mhi_chan->mutex);
+-	}
+-
+-	mhi_drv->remove(mhi_dev);
+-
+-	/* De-init channel if it was enabled */
+-	for (dir = 0; dir < 2; dir++) {
+-		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+-
+-		if (!mhi_chan)
+-			continue;
+-
+-		mutex_lock(&mhi_chan->mutex);
+-
+-		if ((ch_state[dir] == MHI_CH_STATE_ENABLED ||
+-		     ch_state[dir] == MHI_CH_STATE_STOP) &&
+-		    !mhi_chan->offload_ch)
+-			mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+-
+-		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+-
+-		mutex_unlock(&mhi_chan->mutex);
+-	}
+-
+-	while (mhi_dev->dev_wake)
+-		mhi_device_put(mhi_dev);
+-
+-	return 0;
+-}
+-
+-int __mhi_driver_register(struct mhi_driver *mhi_drv, struct module *owner)
+-{
+-	struct device_driver *driver = &mhi_drv->driver;
+-
+-	if (!mhi_drv->probe || !mhi_drv->remove)
+-		return -EINVAL;
+-
+-	driver->bus = &mhi_bus_type;
+-	driver->owner = owner;
+-	driver->probe = mhi_driver_probe;
+-	driver->remove = mhi_driver_remove;
+-
+-	return driver_register(driver);
+-}
+-EXPORT_SYMBOL_GPL(__mhi_driver_register);
+-
+-void mhi_driver_unregister(struct mhi_driver *mhi_drv)
+-{
+-	driver_unregister(&mhi_drv->driver);
+-}
+-EXPORT_SYMBOL_GPL(mhi_driver_unregister);
+-
+-static int mhi_uevent(struct device *dev, struct kobj_uevent_env *env)
+-{
+-	struct mhi_device *mhi_dev = to_mhi_device(dev);
+-
+-	return add_uevent_var(env, "MODALIAS=" MHI_DEVICE_MODALIAS_FMT,
+-					mhi_dev->name);
+-}
+-
+-static int mhi_match(struct device *dev, struct device_driver *drv)
+-{
+-	struct mhi_device *mhi_dev = to_mhi_device(dev);
+-	struct mhi_driver *mhi_drv = to_mhi_driver(drv);
+-	const struct mhi_device_id *id;
+-
+-	/*
+-	 * If the device is a controller type then there is no client driver
+-	 * associated with it
+-	 */
+-	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+-		return 0;
+-
+-	for (id = mhi_drv->id_table; id->chan[0]; id++)
+-		if (!strcmp(mhi_dev->name, id->chan)) {
+-			mhi_dev->id = id;
+-			return 1;
+-		}
+-
+-	return 0;
+-};
+-
+-struct bus_type mhi_bus_type = {
+-	.name = "mhi",
+-	.dev_name = "mhi",
+-	.match = mhi_match,
+-	.uevent = mhi_uevent,
+-	.dev_groups = mhi_dev_groups,
+-};
+-
+-static int __init mhi_init(void)
+-{
+-	mhi_debugfs_init();
+-	return bus_register(&mhi_bus_type);
+-}
+-
+-static void __exit mhi_exit(void)
+-{
+-	mhi_debugfs_exit();
+-	bus_unregister(&mhi_bus_type);
+-}
+-
+-postcore_initcall(mhi_init);
+-module_exit(mhi_exit);
+-
+-MODULE_LICENSE("GPL v2");
+-MODULE_DESCRIPTION("MHI Host Interface");
+diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
+deleted file mode 100644
+index 7989269ddd963..0000000000000
+--- a/drivers/bus/mhi/core/internal.h
++++ /dev/null
+@@ -1,722 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+- *
+- */
+-
+-#ifndef _MHI_INT_H
+-#define _MHI_INT_H
+-
+-#include <linux/mhi.h>
+-
+-extern struct bus_type mhi_bus_type;
+-
+-#define MHIREGLEN (0x0)
+-#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
+-#define MHIREGLEN_MHIREGLEN_SHIFT (0)
+-
+-#define MHIVER (0x8)
+-#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
+-#define MHIVER_MHIVER_SHIFT (0)
+-
+-#define MHICFG (0x10)
+-#define MHICFG_NHWER_MASK (0xFF000000)
+-#define MHICFG_NHWER_SHIFT (24)
+-#define MHICFG_NER_MASK (0xFF0000)
+-#define MHICFG_NER_SHIFT (16)
+-#define MHICFG_NHWCH_MASK (0xFF00)
+-#define MHICFG_NHWCH_SHIFT (8)
+-#define MHICFG_NCH_MASK (0xFF)
+-#define MHICFG_NCH_SHIFT (0)
+-
+-#define CHDBOFF (0x18)
+-#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
+-#define CHDBOFF_CHDBOFF_SHIFT (0)
+-
+-#define ERDBOFF (0x20)
+-#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
+-#define ERDBOFF_ERDBOFF_SHIFT (0)
+-
+-#define BHIOFF (0x28)
+-#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
+-#define BHIOFF_BHIOFF_SHIFT (0)
+-
+-#define BHIEOFF (0x2C)
+-#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
+-#define BHIEOFF_BHIEOFF_SHIFT (0)
+-
+-#define DEBUGOFF (0x30)
+-#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
+-#define DEBUGOFF_DEBUGOFF_SHIFT (0)
+-
+-#define MHICTRL (0x38)
+-#define MHICTRL_MHISTATE_MASK (0x0000FF00)
+-#define MHICTRL_MHISTATE_SHIFT (8)
+-#define MHICTRL_RESET_MASK (0x2)
+-#define MHICTRL_RESET_SHIFT (1)
+-
+-#define MHISTATUS (0x48)
+-#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
+-#define MHISTATUS_MHISTATE_SHIFT (8)
+-#define MHISTATUS_SYSERR_MASK (0x4)
+-#define MHISTATUS_SYSERR_SHIFT (2)
+-#define MHISTATUS_READY_MASK (0x1)
+-#define MHISTATUS_READY_SHIFT (0)
+-
+-#define CCABAP_LOWER (0x58)
+-#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
+-#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
+-
+-#define CCABAP_HIGHER (0x5C)
+-#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
+-#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
+-
+-#define ECABAP_LOWER (0x60)
+-#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
+-#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
+-
+-#define ECABAP_HIGHER (0x64)
+-#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
+-#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
+-
+-#define CRCBAP_LOWER (0x68)
+-#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
+-#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
+-
+-#define CRCBAP_HIGHER (0x6C)
+-#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
+-#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
+-
+-#define CRDB_LOWER (0x70)
+-#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
+-#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
+-
+-#define CRDB_HIGHER (0x74)
+-#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
+-#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
+-
+-#define MHICTRLBASE_LOWER (0x80)
+-#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
+-#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
+-
+-#define MHICTRLBASE_HIGHER (0x84)
+-#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
+-#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
+-
+-#define MHICTRLLIMIT_LOWER (0x88)
+-#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
+-#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
+-
+-#define MHICTRLLIMIT_HIGHER (0x8C)
+-#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
+-#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
+-
+-#define MHIDATABASE_LOWER (0x98)
+-#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
+-#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
+-
+-#define MHIDATABASE_HIGHER (0x9C)
+-#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
+-#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
+-
+-#define MHIDATALIMIT_LOWER (0xA0)
+-#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
+-#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
+-
+-#define MHIDATALIMIT_HIGHER (0xA4)
+-#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
+-#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
+-
+-/* Host request register */
+-#define MHI_SOC_RESET_REQ_OFFSET (0xB0)
+-#define MHI_SOC_RESET_REQ BIT(0)
+-
+-/* MHI BHI offfsets */
+-#define BHI_BHIVERSION_MINOR (0x00)
+-#define BHI_BHIVERSION_MAJOR (0x04)
+-#define BHI_IMGADDR_LOW (0x08)
+-#define BHI_IMGADDR_HIGH (0x0C)
+-#define BHI_IMGSIZE (0x10)
+-#define BHI_RSVD1 (0x14)
+-#define BHI_IMGTXDB (0x18)
+-#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
+-#define BHI_TXDB_SEQNUM_SHFT (0)
+-#define BHI_RSVD2 (0x1C)
+-#define BHI_INTVEC (0x20)
+-#define BHI_RSVD3 (0x24)
+-#define BHI_EXECENV (0x28)
+-#define BHI_STATUS (0x2C)
+-#define BHI_ERRCODE (0x30)
+-#define BHI_ERRDBG1 (0x34)
+-#define BHI_ERRDBG2 (0x38)
+-#define BHI_ERRDBG3 (0x3C)
+-#define BHI_SERIALNU (0x40)
+-#define BHI_SBLANTIROLLVER (0x44)
+-#define BHI_NUMSEG (0x48)
+-#define BHI_MSMHWID(n) (0x4C + (0x4 * n))
+-#define BHI_OEMPKHASH(n) (0x64 + (0x4 * n))
+-#define BHI_RSVD5 (0xC4)
+-#define BHI_STATUS_MASK (0xC0000000)
+-#define BHI_STATUS_SHIFT (30)
+-#define BHI_STATUS_ERROR (3)
+-#define BHI_STATUS_SUCCESS (2)
+-#define BHI_STATUS_RESET (0)
+-
+-/* MHI BHIE offsets */
+-#define BHIE_MSMSOCID_OFFS (0x0000)
+-#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
+-#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
+-#define BHIE_TXVECSIZE_OFFS (0x0034)
+-#define BHIE_TXVECDB_OFFS (0x003C)
+-#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
+-#define BHIE_TXVECDB_SEQNUM_SHFT (0)
+-#define BHIE_TXVECSTATUS_OFFS (0x0044)
+-#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
+-#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
+-#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
+-#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
+-#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
+-#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
+-#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
+-#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
+-#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
+-#define BHIE_RXVECSIZE_OFFS (0x0068)
+-#define BHIE_RXVECDB_OFFS (0x0070)
+-#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
+-#define BHIE_RXVECDB_SEQNUM_SHFT (0)
+-#define BHIE_RXVECSTATUS_OFFS (0x0078)
+-#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
+-#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
+-#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
+-#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
+-#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
+-#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
+-#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
+-
+-#define SOC_HW_VERSION_OFFS (0x224)
+-#define SOC_HW_VERSION_FAM_NUM_BMSK (0xF0000000)
+-#define SOC_HW_VERSION_FAM_NUM_SHFT (28)
+-#define SOC_HW_VERSION_DEV_NUM_BMSK (0x0FFF0000)
+-#define SOC_HW_VERSION_DEV_NUM_SHFT (16)
+-#define SOC_HW_VERSION_MAJOR_VER_BMSK (0x0000FF00)
+-#define SOC_HW_VERSION_MAJOR_VER_SHFT (8)
+-#define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
+-#define SOC_HW_VERSION_MINOR_VER_SHFT (0)
+-
+-#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
+-#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
+-#define EV_CTX_INTMODC_SHIFT 8
+-#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
+-#define EV_CTX_INTMODT_SHIFT 16
+-struct mhi_event_ctxt {
+-	__u32 intmod;
+-	__u32 ertype;
+-	__u32 msivec;
+-
+-	__u64 rbase __packed __aligned(4);
+-	__u64 rlen __packed __aligned(4);
+-	__u64 rp __packed __aligned(4);
+-	__u64 wp __packed __aligned(4);
+-};
+-
+-#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
+-#define CHAN_CTX_CHSTATE_SHIFT 0
+-#define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
+-#define CHAN_CTX_BRSTMODE_SHIFT 8
+-#define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
+-#define CHAN_CTX_POLLCFG_SHIFT 10
+-#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
+-struct mhi_chan_ctxt {
+-	__u32 chcfg;
+-	__u32 chtype;
+-	__u32 erindex;
+-
+-	__u64 rbase __packed __aligned(4);
+-	__u64 rlen __packed __aligned(4);
+-	__u64 rp __packed __aligned(4);
+-	__u64 wp __packed __aligned(4);
+-};
+-
+-struct mhi_cmd_ctxt {
+-	__u32 reserved0;
+-	__u32 reserved1;
+-	__u32 reserved2;
+-
+-	__u64 rbase __packed __aligned(4);
+-	__u64 rlen __packed __aligned(4);
+-	__u64 rp __packed __aligned(4);
+-	__u64 wp __packed __aligned(4);
+-};
+-
+-struct mhi_ctxt {
+-	struct mhi_event_ctxt *er_ctxt;
+-	struct mhi_chan_ctxt *chan_ctxt;
+-	struct mhi_cmd_ctxt *cmd_ctxt;
+-	dma_addr_t er_ctxt_addr;
+-	dma_addr_t chan_ctxt_addr;
+-	dma_addr_t cmd_ctxt_addr;
+-};
+-
+-struct mhi_tre {
+-	u64 ptr;
+-	u32 dword[2];
+-};
+-
+-struct bhi_vec_entry {
+-	u64 dma_addr;
+-	u64 size;
+-};
+-
+-enum mhi_cmd_type {
+-	MHI_CMD_NOP = 1,
+-	MHI_CMD_RESET_CHAN = 16,
+-	MHI_CMD_STOP_CHAN = 17,
+-	MHI_CMD_START_CHAN = 18,
+-};
+-
+-/* No operation command */
+-#define MHI_TRE_CMD_NOOP_PTR (0)
+-#define MHI_TRE_CMD_NOOP_DWORD0 (0)
+-#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
+-
+-/* Channel reset command */
+-#define MHI_TRE_CMD_RESET_PTR (0)
+-#define MHI_TRE_CMD_RESET_DWORD0 (0)
+-#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
+-					(MHI_CMD_RESET_CHAN << 16))
+-
+-/* Channel stop command */
+-#define MHI_TRE_CMD_STOP_PTR (0)
+-#define MHI_TRE_CMD_STOP_DWORD0 (0)
+-#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
+-				       (MHI_CMD_STOP_CHAN << 16))
+-
+-/* Channel start command */
+-#define MHI_TRE_CMD_START_PTR (0)
+-#define MHI_TRE_CMD_START_DWORD0 (0)
+-#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
+-					(MHI_CMD_START_CHAN << 16))
+-
+-#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+-#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+-
+-/* Event descriptor macros */
+-#define MHI_TRE_EV_PTR(ptr) (ptr)
+-#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
+-#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
+-#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
+-#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
+-#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+-#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
+-#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
+-#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
+-#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
+-#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
+-
+-/* Transfer descriptor macros */
+-#define MHI_TRE_DATA_PTR(ptr) (ptr)
+-#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
+-#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
+-	| (ieot << 9) | (ieob << 8) | chain)
+-
+-/* RSC transfer descriptor macros */
+-#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
+-#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
+-#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
+-
+-enum mhi_pkt_type {
+-	MHI_PKT_TYPE_INVALID = 0x0,
+-	MHI_PKT_TYPE_NOOP_CMD = 0x1,
+-	MHI_PKT_TYPE_TRANSFER = 0x2,
+-	MHI_PKT_TYPE_COALESCING = 0x8,
+-	MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
+-	MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
+-	MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
+-	MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
+-	MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
+-	MHI_PKT_TYPE_TX_EVENT = 0x22,
+-	MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
+-	MHI_PKT_TYPE_EE_EVENT = 0x40,
+-	MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
+-	MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
+-	MHI_PKT_TYPE_STALE_EVENT, /* internal event */
+-};
+-
+-/* MHI transfer completion events */
+-enum mhi_ev_ccs {
+-	MHI_EV_CC_INVALID = 0x0,
+-	MHI_EV_CC_SUCCESS = 0x1,
+-	MHI_EV_CC_EOT = 0x2, /* End of transfer event */
+-	MHI_EV_CC_OVERFLOW = 0x3,
+-	MHI_EV_CC_EOB = 0x4, /* End of block event */
+-	MHI_EV_CC_OOB = 0x5, /* Out of block event */
+-	MHI_EV_CC_DB_MODE = 0x6,
+-	MHI_EV_CC_UNDEFINED_ERR = 0x10,
+-	MHI_EV_CC_BAD_TRE = 0x11,
+-};
+-
+-enum mhi_ch_state {
+-	MHI_CH_STATE_DISABLED = 0x0,
+-	MHI_CH_STATE_ENABLED = 0x1,
+-	MHI_CH_STATE_RUNNING = 0x2,
+-	MHI_CH_STATE_SUSPENDED = 0x3,
+-	MHI_CH_STATE_STOP = 0x4,
+-	MHI_CH_STATE_ERROR = 0x5,
+-};
+-
+-#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
+-				    mode != MHI_DB_BRST_ENABLE)
+-
+-extern const char * const mhi_ee_str[MHI_EE_MAX];
+-#define TO_MHI_EXEC_STR(ee) (((ee) >= MHI_EE_MAX) ? \
+-			     "INVALID_EE" : mhi_ee_str[ee])
+-
+-#define MHI_IN_PBL(ee) (ee == MHI_EE_PBL || ee == MHI_EE_PTHRU || \
+-			ee == MHI_EE_EDL)
+-
+-#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW)
+-
+-enum dev_st_transition {
+-	DEV_ST_TRANSITION_PBL,
+-	DEV_ST_TRANSITION_READY,
+-	DEV_ST_TRANSITION_SBL,
+-	DEV_ST_TRANSITION_MISSION_MODE,
+-	DEV_ST_TRANSITION_SYS_ERR,
+-	DEV_ST_TRANSITION_DISABLE,
+-	DEV_ST_TRANSITION_MAX,
+-};
+-
+-extern const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX];
+-#define TO_DEV_STATE_TRANS_STR(state) (((state) >= DEV_ST_TRANSITION_MAX) ? \
+-				"INVALID_STATE" : dev_state_tran_str[state])
+-
+-extern const char * const mhi_state_str[MHI_STATE_MAX];
+-#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
+-				  !mhi_state_str[state]) ? \
+-				"INVALID_STATE" : mhi_state_str[state])
+-
+-/* internal power states */
+-enum mhi_pm_state {
+-	MHI_PM_STATE_DISABLE,
+-	MHI_PM_STATE_POR,
+-	MHI_PM_STATE_M0,
+-	MHI_PM_STATE_M2,
+-	MHI_PM_STATE_M3_ENTER,
+-	MHI_PM_STATE_M3,
+-	MHI_PM_STATE_M3_EXIT,
+-	MHI_PM_STATE_FW_DL_ERR,
+-	MHI_PM_STATE_SYS_ERR_DETECT,
+-	MHI_PM_STATE_SYS_ERR_PROCESS,
+-	MHI_PM_STATE_SHUTDOWN_PROCESS,
+-	MHI_PM_STATE_LD_ERR_FATAL_DETECT,
+-	MHI_PM_STATE_MAX
+-};
+-
+-#define MHI_PM_DISABLE			BIT(0)
+-#define MHI_PM_POR			BIT(1)
+-#define MHI_PM_M0			BIT(2)
+-#define MHI_PM_M2			BIT(3)
+-#define MHI_PM_M3_ENTER			BIT(4)
+-#define MHI_PM_M3			BIT(5)
+-#define MHI_PM_M3_EXIT			BIT(6)
+-/* firmware download failure state */
+-#define MHI_PM_FW_DL_ERR		BIT(7)
+-#define MHI_PM_SYS_ERR_DETECT		BIT(8)
+-#define MHI_PM_SYS_ERR_PROCESS		BIT(9)
+-#define MHI_PM_SHUTDOWN_PROCESS		BIT(10)
+-/* link not accessible */
+-#define MHI_PM_LD_ERR_FATAL_DETECT	BIT(11)
+-
+-#define MHI_REG_ACCESS_VALID(pm_state) ((pm_state & (MHI_PM_POR | MHI_PM_M0 | \
+-		MHI_PM_M2 | MHI_PM_M3_ENTER | MHI_PM_M3_EXIT | \
+-		MHI_PM_SYS_ERR_DETECT | MHI_PM_SYS_ERR_PROCESS | \
+-		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_FW_DL_ERR)))
+-#define MHI_PM_IN_ERROR_STATE(pm_state) (pm_state >= MHI_PM_FW_DL_ERR)
+-#define MHI_PM_IN_FATAL_STATE(pm_state) (pm_state == MHI_PM_LD_ERR_FATAL_DETECT)
+-#define MHI_DB_ACCESS_VALID(mhi_cntrl) (mhi_cntrl->pm_state & \
+-					mhi_cntrl->db_access)
+-#define MHI_WAKE_DB_CLEAR_VALID(pm_state) (pm_state & (MHI_PM_M0 | \
+-						MHI_PM_M2 | MHI_PM_M3_EXIT))
+-#define MHI_WAKE_DB_SET_VALID(pm_state) (pm_state & MHI_PM_M2)
+-#define MHI_WAKE_DB_FORCE_SET_VALID(pm_state) MHI_WAKE_DB_CLEAR_VALID(pm_state)
+-#define MHI_EVENT_ACCESS_INVALID(pm_state) (pm_state == MHI_PM_DISABLE || \
+-					    MHI_PM_IN_ERROR_STATE(pm_state))
+-#define MHI_PM_IN_SUSPEND_STATE(pm_state) (pm_state & \
+-					   (MHI_PM_M3_ENTER | MHI_PM_M3))
+-
+-#define NR_OF_CMD_RINGS			1
+-#define CMD_EL_PER_RING			128
+-#define PRIMARY_CMD_RING		0
+-#define MHI_DEV_WAKE_DB			127
+-#define MHI_MAX_MTU			0xffff
+-#define MHI_RANDOM_U32_NONZERO(bmsk)	(prandom_u32_max(bmsk) + 1)
+-
+-enum mhi_er_type {
+-	MHI_ER_TYPE_INVALID = 0x0,
+-	MHI_ER_TYPE_VALID = 0x1,
+-};
+-
+-struct db_cfg {
+-	bool reset_req;
+-	bool db_mode;
+-	u32 pollcfg;
+-	enum mhi_db_brst_mode brstmode;
+-	dma_addr_t db_val;
+-	void (*process_db)(struct mhi_controller *mhi_cntrl,
+-			   struct db_cfg *db_cfg, void __iomem *io_addr,
+-			   dma_addr_t db_val);
+-};
+-
+-struct mhi_pm_transitions {
+-	enum mhi_pm_state from_state;
+-	u32 to_states;
+-};
+-
+-struct state_transition {
+-	struct list_head node;
+-	enum dev_st_transition state;
+-};
+-
+-struct mhi_ring {
+-	dma_addr_t dma_handle;
+-	dma_addr_t iommu_base;
+-	u64 *ctxt_wp; /* point to ctxt wp */
+-	void *pre_aligned;
+-	void *base;
+-	void *rp;
+-	void *wp;
+-	size_t el_size;
+-	size_t len;
+-	size_t elements;
+-	size_t alloc_size;
+-	void __iomem *db_addr;
+-};
+-
+-struct mhi_cmd {
+-	struct mhi_ring ring;
+-	spinlock_t lock;
+-};
+-
+-struct mhi_buf_info {
+-	void *v_addr;
+-	void *bb_addr;
+-	void *wp;
+-	void *cb_buf;
+-	dma_addr_t p_addr;
+-	size_t len;
+-	enum dma_data_direction dir;
+-	bool used; /* Indicates whether the buffer is used or not */
+-	bool pre_mapped; /* Already pre-mapped by client */
+-};
+-
+-struct mhi_event {
+-	struct mhi_controller *mhi_cntrl;
+-	struct mhi_chan *mhi_chan; /* dedicated to channel */
+-	u32 er_index;
+-	u32 intmod;
+-	u32 irq;
+-	int chan; /* this event ring is dedicated to a channel (optional) */
+-	u32 priority;
+-	enum mhi_er_data_type data_type;
+-	struct mhi_ring ring;
+-	struct db_cfg db_cfg;
+-	struct tasklet_struct task;
+-	spinlock_t lock;
+-	int (*process_event)(struct mhi_controller *mhi_cntrl,
+-			     struct mhi_event *mhi_event,
+-			     u32 event_quota);
+-	bool hw_ring;
+-	bool cl_manage;
+-	bool offload_ev; /* managed by a device driver */
+-};
+-
+-struct mhi_chan {
+-	const char *name;
+-	/*
+-	 * Important: When consuming, increment tre_ring first and when
+-	 * releasing, decrement buf_ring first. If tre_ring has space, buf_ring
+-	 * is guranteed to have space so we do not need to check both rings.
+-	 */
+-	struct mhi_ring buf_ring;
+-	struct mhi_ring tre_ring;
+-	u32 chan;
+-	u32 er_index;
+-	u32 intmod;
+-	enum mhi_ch_type type;
+-	enum dma_data_direction dir;
+-	struct db_cfg db_cfg;
+-	enum mhi_ch_ee_mask ee_mask;
+-	enum mhi_ch_state ch_state;
+-	enum mhi_ev_ccs ccs;
+-	struct mhi_device *mhi_dev;
+-	void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result);
+-	struct mutex mutex;
+-	struct completion completion;
+-	rwlock_t lock;
+-	struct list_head node;
+-	bool lpm_notify;
+-	bool configured;
+-	bool offload_ch;
+-	bool pre_alloc;
+-	bool auto_start;
+-	bool wake_capable;
+-};
+-
+-/* Default MHI timeout */
+-#define MHI_TIMEOUT_MS (1000)
+-
+-/* debugfs related functions */
+-#ifdef CONFIG_MHI_BUS_DEBUG
+-void mhi_create_debugfs(struct mhi_controller *mhi_cntrl);
+-void mhi_destroy_debugfs(struct mhi_controller *mhi_cntrl);
+-void mhi_debugfs_init(void);
+-void mhi_debugfs_exit(void);
+-#else
+-static inline void mhi_create_debugfs(struct mhi_controller *mhi_cntrl)
+-{
+-}
+-
+-static inline void mhi_destroy_debugfs(struct mhi_controller *mhi_cntrl)
+-{
+-}
+-
+-static inline void mhi_debugfs_init(void)
+-{
+-}
+-
+-static inline void mhi_debugfs_exit(void)
+-{
+-}
+-#endif
+-
+-struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
+-
+-int mhi_destroy_device(struct device *dev, void *data);
+-void mhi_create_devices(struct mhi_controller *mhi_cntrl);
+-
+-int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
+-			 struct image_info **image_info, size_t alloc_size);
+-void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
+-			 struct image_info *image_info);
+-
+-/* Power management APIs */
+-enum mhi_pm_state __must_check mhi_tryset_pm_state(
+-					struct mhi_controller *mhi_cntrl,
+-					enum mhi_pm_state state);
+-const char *to_mhi_pm_state_str(enum mhi_pm_state state);
+-enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl);
+-int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
+-			       enum dev_st_transition state);
+-void mhi_pm_st_worker(struct work_struct *work);
+-void mhi_pm_sys_err_handler(struct mhi_controller *mhi_cntrl);
+-void mhi_fw_load_worker(struct work_struct *work);
+-int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl);
+-int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
+-void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl);
+-int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
+-int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
+-int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+-		 enum mhi_cmd_type cmd);
+-static inline bool mhi_is_active(struct mhi_controller *mhi_cntrl)
+-{
+-	return (mhi_cntrl->dev_state >= MHI_STATE_M0 &&
+-		mhi_cntrl->dev_state <= MHI_STATE_M3_FAST);
+-}
+-
+-static inline void mhi_trigger_resume(struct mhi_controller *mhi_cntrl)
+-{
+-	pm_wakeup_event(&mhi_cntrl->mhi_dev->dev, 0);
+-	mhi_cntrl->runtime_get(mhi_cntrl);
+-	mhi_cntrl->runtime_put(mhi_cntrl);
+-}
+-
+-/* Register access methods */
+-void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
+-		     void __iomem *db_addr, dma_addr_t db_val);
+-void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
+-			     struct db_cfg *db_mode, void __iomem *db_addr,
+-			     dma_addr_t db_val);
+-int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
+-			      void __iomem *base, u32 offset, u32 *out);
+-int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
+-				    void __iomem *base, u32 offset, u32 mask,
+-				    u32 shift, u32 *out);
+-void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
+-		   u32 offset, u32 val);
+-void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
+-			 u32 offset, u32 mask, u32 shift, u32 val);
+-void mhi_ring_er_db(struct mhi_event *mhi_event);
+-void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
+-		  dma_addr_t db_val);
+-void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd);
+-void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
+-		      struct mhi_chan *mhi_chan);
+-
+-/* Initialization methods */
+-int mhi_init_mmio(struct mhi_controller *mhi_cntrl);
+-int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl);
+-void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl);
+-int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
+-void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
+-void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
+-		      struct image_info *img_info);
+-void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl);
+-int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
+-			struct mhi_chan *mhi_chan);
+-int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+-		       struct mhi_chan *mhi_chan);
+-void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+-			  struct mhi_chan *mhi_chan);
+-void mhi_reset_chan(struct mhi_controller *mhi_cntrl,
+-		    struct mhi_chan *mhi_chan);
+-
+-/* Memory allocation methods */
+-static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl,
+-				       size_t size,
+-				       dma_addr_t *dma_handle,
+-				       gfp_t gfp)
+-{
+-	void *buf = dma_alloc_coherent(mhi_cntrl->cntrl_dev, size, dma_handle,
+-				       gfp);
+-
+-	return buf;
+-}
+-
+-static inline void mhi_free_coherent(struct mhi_controller *mhi_cntrl,
+-				     size_t size,
+-				     void *vaddr,
+-				     dma_addr_t dma_handle)
+-{
+-	dma_free_coherent(mhi_cntrl->cntrl_dev, size, vaddr, dma_handle);
+-}
+-
+-/* Event processing methods */
+-void mhi_ctrl_ev_task(unsigned long data);
+-void mhi_ev_task(unsigned long data);
+-int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+-				struct mhi_event *mhi_event, u32 event_quota);
+-int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+-			     struct mhi_event *mhi_event, u32 event_quota);
+-
+-/* ISR handlers */
+-irqreturn_t mhi_irq_handler(int irq_number, void *dev);
+-irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev);
+-irqreturn_t mhi_intvec_handler(int irq_number, void *dev);
+-
+-int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+-		struct mhi_buf_info *info, enum mhi_flags flags);
+-int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
+-			 struct mhi_buf_info *buf_info);
+-int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
+-			  struct mhi_buf_info *buf_info);
+-void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
+-			    struct mhi_buf_info *buf_info);
+-void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
+-			     struct mhi_buf_info *buf_info);
+-
+-#endif /* _MHI_INT_H */
+diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
+deleted file mode 100644
+index 614dd287cb4ff..0000000000000
+--- a/drivers/bus/mhi/core/main.c
++++ /dev/null
+@@ -1,1630 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+- *
+- */
+-
+-#include <linux/device.h>
+-#include <linux/dma-direction.h>
+-#include <linux/dma-mapping.h>
+-#include <linux/interrupt.h>
+-#include <linux/list.h>
+-#include <linux/mhi.h>
+-#include <linux/module.h>
+-#include <linux/skbuff.h>
+-#include <linux/slab.h>
+-#include "internal.h"
+-
+-int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
+-			      void __iomem *base, u32 offset, u32 *out)
+-{
+-	return mhi_cntrl->read_reg(mhi_cntrl, base + offset, out);
+-}
+-
+-int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
+-				    void __iomem *base, u32 offset,
+-				    u32 mask, u32 shift, u32 *out)
+-{
+-	u32 tmp;
+-	int ret;
+-
+-	ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
+-	if (ret)
+-		return ret;
+-
+-	*out = (tmp & mask) >> shift;
+-
+-	return 0;
+-}
+-
+-void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
+-		   u32 offset, u32 val)
+-{
+-	mhi_cntrl->write_reg(mhi_cntrl, base + offset, val);
+-}
+-
+-void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
+-			 u32 offset, u32 mask, u32 shift, u32 val)
+-{
+-	int ret;
+-	u32 tmp;
+-
+-	ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
+-	if (ret)
+-		return;
+-
+-	tmp &= ~mask;
+-	tmp |= (val << shift);
+-	mhi_write_reg(mhi_cntrl, base, offset, tmp);
+-}
+-
+-void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
+-		  dma_addr_t db_val)
+-{
+-	mhi_write_reg(mhi_cntrl, db_addr, 4, upper_32_bits(db_val));
+-	mhi_write_reg(mhi_cntrl, db_addr, 0, lower_32_bits(db_val));
+-}
+-
+-void mhi_db_brstmode(struct mhi_controller *mhi_cntrl,
+-		     struct db_cfg *db_cfg,
+-		     void __iomem *db_addr,
+-		     dma_addr_t db_val)
+-{
+-	if (db_cfg->db_mode) {
+-		db_cfg->db_val = db_val;
+-		mhi_write_db(mhi_cntrl, db_addr, db_val);
+-		db_cfg->db_mode = 0;
+-	}
+-}
+-
+-void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
+-			     struct db_cfg *db_cfg,
+-			     void __iomem *db_addr,
+-			     dma_addr_t db_val)
+-{
+-	db_cfg->db_val = db_val;
+-	mhi_write_db(mhi_cntrl, db_addr, db_val);
+-}
+-
+-void mhi_ring_er_db(struct mhi_event *mhi_event)
+-{
+-	struct mhi_ring *ring = &mhi_event->ring;
+-
+-	mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
+-				     ring->db_addr, *ring->ctxt_wp);
+-}
+-
+-void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
+-{
+-	dma_addr_t db;
+-	struct mhi_ring *ring = &mhi_cmd->ring;
+-
+-	db = ring->iommu_base + (ring->wp - ring->base);
+-	*ring->ctxt_wp = db;
+-	mhi_write_db(mhi_cntrl, ring->db_addr, db);
+-}
+-
+-void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
+-		      struct mhi_chan *mhi_chan)
+-{
+-	struct mhi_ring *ring = &mhi_chan->tre_ring;
+-	dma_addr_t db;
+-
+-	db = ring->iommu_base + (ring->wp - ring->base);
+-	*ring->ctxt_wp = db;
+-	mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
+-				    ring->db_addr, db);
+-}
+-
+-enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl)
+-{
+-	u32 exec;
+-	int ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_EXECENV, &exec);
+-
+-	return (ret) ? MHI_EE_MAX : exec;
+-}
+-
+-enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl)
+-{
+-	u32 state;
+-	int ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
+-				     MHISTATUS_MHISTATE_MASK,
+-				     MHISTATUS_MHISTATE_SHIFT, &state);
+-	return ret ? MHI_STATE_MAX : state;
+-}
+-
+-int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
+-			 struct mhi_buf_info *buf_info)
+-{
+-	buf_info->p_addr = dma_map_single(mhi_cntrl->cntrl_dev,
+-					  buf_info->v_addr, buf_info->len,
+-					  buf_info->dir);
+-	if (dma_mapping_error(mhi_cntrl->cntrl_dev, buf_info->p_addr))
+-		return -ENOMEM;
+-
+-	return 0;
+-}
+-
+-int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
+-			  struct mhi_buf_info *buf_info)
+-{
+-	void *buf = mhi_alloc_coherent(mhi_cntrl, buf_info->len,
+-				       &buf_info->p_addr, GFP_ATOMIC);
+-
+-	if (!buf)
+-		return -ENOMEM;
+-
+-	if (buf_info->dir == DMA_TO_DEVICE)
+-		memcpy(buf, buf_info->v_addr, buf_info->len);
+-
+-	buf_info->bb_addr = buf;
+-
+-	return 0;
+-}
+-
+-void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
+-			    struct mhi_buf_info *buf_info)
+-{
+-	dma_unmap_single(mhi_cntrl->cntrl_dev, buf_info->p_addr, buf_info->len,
+-			 buf_info->dir);
+-}
+-
+-void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
+-			     struct mhi_buf_info *buf_info)
+-{
+-	if (buf_info->dir == DMA_FROM_DEVICE)
+-		memcpy(buf_info->v_addr, buf_info->bb_addr, buf_info->len);
+-
+-	mhi_free_coherent(mhi_cntrl, buf_info->len, buf_info->bb_addr,
+-			  buf_info->p_addr);
+-}
+-
+-static int get_nr_avail_ring_elements(struct mhi_controller *mhi_cntrl,
+-				      struct mhi_ring *ring)
+-{
+-	int nr_el;
+-
+-	if (ring->wp < ring->rp) {
+-		nr_el = ((ring->rp - ring->wp) / ring->el_size) - 1;
+-	} else {
+-		nr_el = (ring->rp - ring->base) / ring->el_size;
+-		nr_el += ((ring->base + ring->len - ring->wp) /
+-			  ring->el_size) - 1;
+-	}
+-
+-	return nr_el;
+-}
+-
+-static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr)
+-{
+-	return (addr - ring->iommu_base) + ring->base;
+-}
+-
+-static void mhi_add_ring_element(struct mhi_controller *mhi_cntrl,
+-				 struct mhi_ring *ring)
+-{
+-	ring->wp += ring->el_size;
+-	if (ring->wp >= (ring->base + ring->len))
+-		ring->wp = ring->base;
+-	/* smp update */
+-	smp_wmb();
+-}
+-
+-static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
+-				 struct mhi_ring *ring)
+-{
+-	ring->rp += ring->el_size;
+-	if (ring->rp >= (ring->base + ring->len))
+-		ring->rp = ring->base;
+-	/* smp update */
+-	smp_wmb();
+-}
+-
+-static bool is_valid_ring_ptr(struct mhi_ring *ring, dma_addr_t addr)
+-{
+-	return addr >= ring->iommu_base && addr < ring->iommu_base + ring->len;
+-}
+-
+-int mhi_destroy_device(struct device *dev, void *data)
+-{
+-	struct mhi_chan *ul_chan, *dl_chan;
+-	struct mhi_device *mhi_dev;
+-	struct mhi_controller *mhi_cntrl;
+-	enum mhi_ee_type ee = MHI_EE_MAX;
+-
+-	if (dev->bus != &mhi_bus_type)
+-		return 0;
+-
+-	mhi_dev = to_mhi_device(dev);
+-	mhi_cntrl = mhi_dev->mhi_cntrl;
+-
+-	/* Only destroy virtual devices thats attached to bus */
+-	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+-		return 0;
+-
+-	ul_chan = mhi_dev->ul_chan;
+-	dl_chan = mhi_dev->dl_chan;
+-
+-	/*
+-	 * If execution environment is specified, remove only those devices that
+-	 * started in them based on ee_mask for the channels as we move on to a
+-	 * different execution environment
+-	 */
+-	if (data)
+-		ee = *(enum mhi_ee_type *)data;
+-
+-	/*
+-	 * For the suspend and resume case, this function will get called
+-	 * without mhi_unregister_controller(). Hence, we need to drop the
+-	 * references to mhi_dev created for ul and dl channels. We can
+-	 * be sure that there will be no instances of mhi_dev left after
+-	 * this.
+-	 */
+-	if (ul_chan) {
+-		if (ee != MHI_EE_MAX && !(ul_chan->ee_mask & BIT(ee)))
+-			return 0;
+-
+-		put_device(&ul_chan->mhi_dev->dev);
+-	}
+-
+-	if (dl_chan) {
+-		if (ee != MHI_EE_MAX && !(dl_chan->ee_mask & BIT(ee)))
+-			return 0;
+-
+-		put_device(&dl_chan->mhi_dev->dev);
+-	}
+-
+-	dev_dbg(&mhi_cntrl->mhi_dev->dev, "destroy device for chan:%s\n",
+-		 mhi_dev->name);
+-
+-	/* Notify the client and remove the device from MHI bus */
+-	device_del(dev);
+-	put_device(dev);
+-
+-	return 0;
+-}
+-
+-void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason)
+-{
+-	struct mhi_driver *mhi_drv;
+-
+-	if (!mhi_dev->dev.driver)
+-		return;
+-
+-	mhi_drv = to_mhi_driver(mhi_dev->dev.driver);
+-
+-	if (mhi_drv->status_cb)
+-		mhi_drv->status_cb(mhi_dev, cb_reason);
+-}
+-EXPORT_SYMBOL_GPL(mhi_notify);
+-
+-/* Bind MHI channels to MHI devices */
+-void mhi_create_devices(struct mhi_controller *mhi_cntrl)
+-{
+-	struct mhi_chan *mhi_chan;
+-	struct mhi_device *mhi_dev;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	int i, ret;
+-
+-	mhi_chan = mhi_cntrl->mhi_chan;
+-	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+-		if (!mhi_chan->configured || mhi_chan->mhi_dev ||
+-		    !(mhi_chan->ee_mask & BIT(mhi_cntrl->ee)))
+-			continue;
+-		mhi_dev = mhi_alloc_device(mhi_cntrl);
+-		if (IS_ERR(mhi_dev))
+-			return;
+-
+-		mhi_dev->dev_type = MHI_DEVICE_XFER;
+-		switch (mhi_chan->dir) {
+-		case DMA_TO_DEVICE:
+-			mhi_dev->ul_chan = mhi_chan;
+-			mhi_dev->ul_chan_id = mhi_chan->chan;
+-			break;
+-		case DMA_FROM_DEVICE:
+-			/* We use dl_chan as offload channels */
+-			mhi_dev->dl_chan = mhi_chan;
+-			mhi_dev->dl_chan_id = mhi_chan->chan;
+-			break;
+-		default:
+-			dev_err(dev, "Direction not supported\n");
+-			put_device(&mhi_dev->dev);
+-			return;
+-		}
+-
+-		get_device(&mhi_dev->dev);
+-		mhi_chan->mhi_dev = mhi_dev;
+-
+-		/* Check next channel if it matches */
+-		if ((i + 1) < mhi_cntrl->max_chan && mhi_chan[1].configured) {
+-			if (!strcmp(mhi_chan[1].name, mhi_chan->name)) {
+-				i++;
+-				mhi_chan++;
+-				if (mhi_chan->dir == DMA_TO_DEVICE) {
+-					mhi_dev->ul_chan = mhi_chan;
+-					mhi_dev->ul_chan_id = mhi_chan->chan;
+-				} else {
+-					mhi_dev->dl_chan = mhi_chan;
+-					mhi_dev->dl_chan_id = mhi_chan->chan;
+-				}
+-				get_device(&mhi_dev->dev);
+-				mhi_chan->mhi_dev = mhi_dev;
+-			}
+-		}
+-
+-		/* Channel name is same for both UL and DL */
+-		mhi_dev->name = mhi_chan->name;
+-		dev_set_name(&mhi_dev->dev, "%s_%s",
+-			     dev_name(mhi_cntrl->cntrl_dev),
+-			     mhi_dev->name);
+-
+-		/* Init wakeup source if available */
+-		if (mhi_dev->dl_chan && mhi_dev->dl_chan->wake_capable)
+-			device_init_wakeup(&mhi_dev->dev, true);
+-
+-		ret = device_add(&mhi_dev->dev);
+-		if (ret)
+-			put_device(&mhi_dev->dev);
+-	}
+-}
+-
+-irqreturn_t mhi_irq_handler(int irq_number, void *dev)
+-{
+-	struct mhi_event *mhi_event = dev;
+-	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+-	struct mhi_event_ctxt *er_ctxt =
+-		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+-	struct mhi_ring *ev_ring = &mhi_event->ring;
+-	dma_addr_t ptr = er_ctxt->rp;
+-	void *dev_rp;
+-
+-	if (!is_valid_ring_ptr(ev_ring, ptr)) {
+-		dev_err(&mhi_cntrl->mhi_dev->dev,
+-			"Event ring rp points outside of the event ring\n");
+-		return IRQ_HANDLED;
+-	}
+-
+-	dev_rp = mhi_to_virtual(ev_ring, ptr);
+-
+-	/* Only proceed if event ring has pending events */
+-	if (ev_ring->rp == dev_rp)
+-		return IRQ_HANDLED;
+-
+-	/* For client managed event ring, notify pending data */
+-	if (mhi_event->cl_manage) {
+-		struct mhi_chan *mhi_chan = mhi_event->mhi_chan;
+-		struct mhi_device *mhi_dev = mhi_chan->mhi_dev;
+-
+-		if (mhi_dev)
+-			mhi_notify(mhi_dev, MHI_CB_PENDING_DATA);
+-	} else {
+-		tasklet_schedule(&mhi_event->task);
+-	}
+-
+-	return IRQ_HANDLED;
+-}
+-
+-irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
+-{
+-	struct mhi_controller *mhi_cntrl = priv;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	enum mhi_state state = MHI_STATE_MAX;
+-	enum mhi_pm_state pm_state = 0;
+-	enum mhi_ee_type ee = 0;
+-
+-	write_lock_irq(&mhi_cntrl->pm_lock);
+-	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+-		write_unlock_irq(&mhi_cntrl->pm_lock);
+-		goto exit_intvec;
+-	}
+-
+-	state = mhi_get_mhi_state(mhi_cntrl);
+-	ee = mhi_cntrl->ee;
+-	mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
+-	dev_dbg(dev, "local ee:%s device ee:%s dev_state:%s\n",
+-		TO_MHI_EXEC_STR(mhi_cntrl->ee), TO_MHI_EXEC_STR(ee),
+-		TO_MHI_STATE_STR(state));
+-
+-	if (state == MHI_STATE_SYS_ERR) {
+-		dev_dbg(dev, "System error detected\n");
+-		pm_state = mhi_tryset_pm_state(mhi_cntrl,
+-					       MHI_PM_SYS_ERR_DETECT);
+-	}
+-	write_unlock_irq(&mhi_cntrl->pm_lock);
+-
+-	 /* If device supports RDDM don't bother processing SYS error */
+-	if (mhi_cntrl->rddm_image) {
+-		if (mhi_cntrl->ee == MHI_EE_RDDM && mhi_cntrl->ee != ee) {
+-			mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_RDDM);
+-			wake_up_all(&mhi_cntrl->state_event);
+-		}
+-		goto exit_intvec;
+-	}
+-
+-	if (pm_state == MHI_PM_SYS_ERR_DETECT) {
+-		wake_up_all(&mhi_cntrl->state_event);
+-
+-		/* For fatal errors, we let controller decide next step */
+-		if (MHI_IN_PBL(ee))
+-			mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_FATAL_ERROR);
+-		else
+-			mhi_pm_sys_err_handler(mhi_cntrl);
+-	}
+-
+-exit_intvec:
+-
+-	return IRQ_HANDLED;
+-}
+-
+-irqreturn_t mhi_intvec_handler(int irq_number, void *dev)
+-{
+-	struct mhi_controller *mhi_cntrl = dev;
+-
+-	/* Wake up events waiting for state change */
+-	wake_up_all(&mhi_cntrl->state_event);
+-
+-	return IRQ_WAKE_THREAD;
+-}
+-
+-static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
+-					struct mhi_ring *ring)
+-{
+-	dma_addr_t ctxt_wp;
+-
+-	/* Update the WP */
+-	ring->wp += ring->el_size;
+-	ctxt_wp = *ring->ctxt_wp + ring->el_size;
+-
+-	if (ring->wp >= (ring->base + ring->len)) {
+-		ring->wp = ring->base;
+-		ctxt_wp = ring->iommu_base;
+-	}
+-
+-	*ring->ctxt_wp = ctxt_wp;
+-
+-	/* Update the RP */
+-	ring->rp += ring->el_size;
+-	if (ring->rp >= (ring->base + ring->len))
+-		ring->rp = ring->base;
+-
+-	/* Update to all cores */
+-	smp_wmb();
+-}
+-
+-static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+-			    struct mhi_tre *event,
+-			    struct mhi_chan *mhi_chan)
+-{
+-	struct mhi_ring *buf_ring, *tre_ring;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	struct mhi_result result;
+-	unsigned long flags = 0;
+-	u32 ev_code;
+-
+-	ev_code = MHI_TRE_GET_EV_CODE(event);
+-	buf_ring = &mhi_chan->buf_ring;
+-	tre_ring = &mhi_chan->tre_ring;
+-
+-	result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
+-		-EOVERFLOW : 0;
+-
+-	/*
+-	 * If it's a DB Event then we need to grab the lock
+-	 * with preemption disabled and as a write because we
+-	 * have to update db register and there are chances that
+-	 * another thread could be doing the same.
+-	 */
+-	if (ev_code >= MHI_EV_CC_OOB)
+-		write_lock_irqsave(&mhi_chan->lock, flags);
+-	else
+-		read_lock_bh(&mhi_chan->lock);
+-
+-	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
+-		goto end_process_tx_event;
+-
+-	switch (ev_code) {
+-	case MHI_EV_CC_OVERFLOW:
+-	case MHI_EV_CC_EOB:
+-	case MHI_EV_CC_EOT:
+-	{
+-		dma_addr_t ptr = MHI_TRE_GET_EV_PTR(event);
+-		struct mhi_tre *local_rp, *ev_tre;
+-		void *dev_rp;
+-		struct mhi_buf_info *buf_info;
+-		u16 xfer_len;
+-
+-		if (!is_valid_ring_ptr(tre_ring, ptr)) {
+-			dev_err(&mhi_cntrl->mhi_dev->dev,
+-				"Event element points outside of the tre ring\n");
+-			break;
+-		}
+-		/* Get the TRB this event points to */
+-		ev_tre = mhi_to_virtual(tre_ring, ptr);
+-
+-		dev_rp = ev_tre + 1;
+-		if (dev_rp >= (tre_ring->base + tre_ring->len))
+-			dev_rp = tre_ring->base;
+-
+-		result.dir = mhi_chan->dir;
+-
+-		local_rp = tre_ring->rp;
+-		while (local_rp != dev_rp) {
+-			buf_info = buf_ring->rp;
+-			/* If it's the last TRE, get length from the event */
+-			if (local_rp == ev_tre)
+-				xfer_len = MHI_TRE_GET_EV_LEN(event);
+-			else
+-				xfer_len = buf_info->len;
+-
+-			/* Unmap if it's not pre-mapped by client */
+-			if (likely(!buf_info->pre_mapped))
+-				mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
+-
+-			result.buf_addr = buf_info->cb_buf;
+-
+-			/* truncate to buf len if xfer_len is larger */
+-			result.bytes_xferd =
+-				min_t(u16, xfer_len, buf_info->len);
+-			mhi_del_ring_element(mhi_cntrl, buf_ring);
+-			mhi_del_ring_element(mhi_cntrl, tre_ring);
+-			local_rp = tre_ring->rp;
+-
+-			/* notify client */
+-			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+-
+-			if (mhi_chan->dir == DMA_TO_DEVICE)
+-				atomic_dec(&mhi_cntrl->pending_pkts);
+-
+-			/*
+-			 * Recycle the buffer if buffer is pre-allocated,
+-			 * if there is an error, not much we can do apart
+-			 * from dropping the packet
+-			 */
+-			if (mhi_chan->pre_alloc) {
+-				if (mhi_queue_buf(mhi_chan->mhi_dev,
+-						  mhi_chan->dir,
+-						  buf_info->cb_buf,
+-						  buf_info->len, MHI_EOT)) {
+-					dev_err(dev,
+-						"Error recycling buffer for chan:%d\n",
+-						mhi_chan->chan);
+-					kfree(buf_info->cb_buf);
+-				}
+-			}
+-		}
+-		break;
+-	} /* CC_EOT */
+-	case MHI_EV_CC_OOB:
+-	case MHI_EV_CC_DB_MODE:
+-	{
+-		unsigned long flags;
+-
+-		mhi_chan->db_cfg.db_mode = 1;
+-		read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+-		if (tre_ring->wp != tre_ring->rp &&
+-		    MHI_DB_ACCESS_VALID(mhi_cntrl)) {
+-			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+-		}
+-		read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+-		break;
+-	}
+-	case MHI_EV_CC_BAD_TRE:
+-	default:
+-		dev_err(dev, "Unknown event 0x%x\n", ev_code);
+-		break;
+-	} /* switch(MHI_EV_READ_CODE(EV_TRB_CODE,event)) */
+-
+-end_process_tx_event:
+-	if (ev_code >= MHI_EV_CC_OOB)
+-		write_unlock_irqrestore(&mhi_chan->lock, flags);
+-	else
+-		read_unlock_bh(&mhi_chan->lock);
+-
+-	return 0;
+-}
+-
+-static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
+-			   struct mhi_tre *event,
+-			   struct mhi_chan *mhi_chan)
+-{
+-	struct mhi_ring *buf_ring, *tre_ring;
+-	struct mhi_buf_info *buf_info;
+-	struct mhi_result result;
+-	int ev_code;
+-	u32 cookie; /* offset to local descriptor */
+-	u16 xfer_len;
+-
+-	buf_ring = &mhi_chan->buf_ring;
+-	tre_ring = &mhi_chan->tre_ring;
+-
+-	ev_code = MHI_TRE_GET_EV_CODE(event);
+-	cookie = MHI_TRE_GET_EV_COOKIE(event);
+-	xfer_len = MHI_TRE_GET_EV_LEN(event);
+-
+-	/* Received out of bound cookie */
+-	WARN_ON(cookie >= buf_ring->len);
+-
+-	buf_info = buf_ring->base + cookie;
+-
+-	result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
+-		-EOVERFLOW : 0;
+-
+-	/* truncate to buf len if xfer_len is larger */
+-	result.bytes_xferd = min_t(u16, xfer_len, buf_info->len);
+-	result.buf_addr = buf_info->cb_buf;
+-	result.dir = mhi_chan->dir;
+-
+-	read_lock_bh(&mhi_chan->lock);
+-
+-	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
+-		goto end_process_rsc_event;
+-
+-	WARN_ON(!buf_info->used);
+-
+-	/* notify the client */
+-	mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+-
+-	/*
+-	 * Note: We're arbitrarily incrementing RP even though, completion
+-	 * packet we processed might not be the same one, reason we can do this
+-	 * is because device guaranteed to cache descriptors in order it
+-	 * receive, so even though completion event is different we can re-use
+-	 * all descriptors in between.
+-	 * Example:
+-	 * Transfer Ring has descriptors: A, B, C, D
+-	 * Last descriptor host queue is D (WP) and first descriptor
+-	 * host queue is A (RP).
+-	 * The completion event we just serviced is descriptor C.
+-	 * Then we can safely queue descriptors to replace A, B, and C
+-	 * even though host did not receive any completions.
+-	 */
+-	mhi_del_ring_element(mhi_cntrl, tre_ring);
+-	buf_info->used = false;
+-
+-end_process_rsc_event:
+-	read_unlock_bh(&mhi_chan->lock);
+-
+-	return 0;
+-}
+-
+-static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
+-				       struct mhi_tre *tre)
+-{
+-	dma_addr_t ptr = MHI_TRE_GET_EV_PTR(tre);
+-	struct mhi_cmd *cmd_ring = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+-	struct mhi_ring *mhi_ring = &cmd_ring->ring;
+-	struct mhi_tre *cmd_pkt;
+-	struct mhi_chan *mhi_chan;
+-	u32 chan;
+-
+-	if (!is_valid_ring_ptr(mhi_ring, ptr)) {
+-		dev_err(&mhi_cntrl->mhi_dev->dev,
+-			"Event element points outside of the cmd ring\n");
+-		return;
+-	}
+-
+-	cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
+-
+-	chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
+-
+-	if (chan < mhi_cntrl->max_chan &&
+-	    mhi_cntrl->mhi_chan[chan].configured) {
+-		mhi_chan = &mhi_cntrl->mhi_chan[chan];
+-		write_lock_bh(&mhi_chan->lock);
+-		mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
+-		complete(&mhi_chan->completion);
+-		write_unlock_bh(&mhi_chan->lock);
+-	} else {
+-		dev_err(&mhi_cntrl->mhi_dev->dev,
+-			"Completion packet for invalid channel ID: %d\n", chan);
+-	}
+-
+-	mhi_del_ring_element(mhi_cntrl, mhi_ring);
+-}
+-
+-int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+-			     struct mhi_event *mhi_event,
+-			     u32 event_quota)
+-{
+-	struct mhi_tre *dev_rp, *local_rp;
+-	struct mhi_ring *ev_ring = &mhi_event->ring;
+-	struct mhi_event_ctxt *er_ctxt =
+-		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+-	struct mhi_chan *mhi_chan;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	u32 chan;
+-	int count = 0;
+-	dma_addr_t ptr = er_ctxt->rp;
+-
+-	/*
+-	 * This is a quick check to avoid unnecessary event processing
+-	 * in case MHI is already in error state, but it's still possible
+-	 * to transition to error state while processing events
+-	 */
+-	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+-		return -EIO;
+-
+-	if (!is_valid_ring_ptr(ev_ring, ptr)) {
+-		dev_err(&mhi_cntrl->mhi_dev->dev,
+-			"Event ring rp points outside of the event ring\n");
+-		return -EIO;
+-	}
+-
+-	dev_rp = mhi_to_virtual(ev_ring, ptr);
+-	local_rp = ev_ring->rp;
+-
+-	while (dev_rp != local_rp) {
+-		enum mhi_pkt_type type = MHI_TRE_GET_EV_TYPE(local_rp);
+-
+-		switch (type) {
+-		case MHI_PKT_TYPE_BW_REQ_EVENT:
+-		{
+-			struct mhi_link_info *link_info;
+-
+-			link_info = &mhi_cntrl->mhi_link_info;
+-			write_lock_irq(&mhi_cntrl->pm_lock);
+-			link_info->target_link_speed =
+-				MHI_TRE_GET_EV_LINKSPEED(local_rp);
+-			link_info->target_link_width =
+-				MHI_TRE_GET_EV_LINKWIDTH(local_rp);
+-			write_unlock_irq(&mhi_cntrl->pm_lock);
+-			dev_dbg(dev, "Received BW_REQ event\n");
+-			mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_BW_REQ);
+-			break;
+-		}
+-		case MHI_PKT_TYPE_STATE_CHANGE_EVENT:
+-		{
+-			enum mhi_state new_state;
+-
+-			new_state = MHI_TRE_GET_EV_STATE(local_rp);
+-
+-			dev_dbg(dev, "State change event to state: %s\n",
+-				TO_MHI_STATE_STR(new_state));
+-
+-			switch (new_state) {
+-			case MHI_STATE_M0:
+-				mhi_pm_m0_transition(mhi_cntrl);
+-				break;
+-			case MHI_STATE_M1:
+-				mhi_pm_m1_transition(mhi_cntrl);
+-				break;
+-			case MHI_STATE_M3:
+-				mhi_pm_m3_transition(mhi_cntrl);
+-				break;
+-			case MHI_STATE_SYS_ERR:
+-			{
+-				enum mhi_pm_state new_state;
+-
+-				/* skip SYS_ERROR handling if RDDM supported */
+-				if (mhi_cntrl->ee == MHI_EE_RDDM ||
+-				    mhi_cntrl->rddm_image)
+-					break;
+-
+-				dev_dbg(dev, "System error detected\n");
+-				write_lock_irq(&mhi_cntrl->pm_lock);
+-				new_state = mhi_tryset_pm_state(mhi_cntrl,
+-							MHI_PM_SYS_ERR_DETECT);
+-				write_unlock_irq(&mhi_cntrl->pm_lock);
+-				if (new_state == MHI_PM_SYS_ERR_DETECT)
+-					mhi_pm_sys_err_handler(mhi_cntrl);
+-				break;
+-			}
+-			default:
+-				dev_err(dev, "Invalid state: %s\n",
+-					TO_MHI_STATE_STR(new_state));
+-			}
+-
+-			break;
+-		}
+-		case MHI_PKT_TYPE_CMD_COMPLETION_EVENT:
+-			mhi_process_cmd_completion(mhi_cntrl, local_rp);
+-			break;
+-		case MHI_PKT_TYPE_EE_EVENT:
+-		{
+-			enum dev_st_transition st = DEV_ST_TRANSITION_MAX;
+-			enum mhi_ee_type event = MHI_TRE_GET_EV_EXECENV(local_rp);
+-
+-			dev_dbg(dev, "Received EE event: %s\n",
+-				TO_MHI_EXEC_STR(event));
+-			switch (event) {
+-			case MHI_EE_SBL:
+-				st = DEV_ST_TRANSITION_SBL;
+-				break;
+-			case MHI_EE_WFW:
+-			case MHI_EE_AMSS:
+-				st = DEV_ST_TRANSITION_MISSION_MODE;
+-				break;
+-			case MHI_EE_RDDM:
+-				mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_RDDM);
+-				write_lock_irq(&mhi_cntrl->pm_lock);
+-				mhi_cntrl->ee = event;
+-				write_unlock_irq(&mhi_cntrl->pm_lock);
+-				wake_up_all(&mhi_cntrl->state_event);
+-				break;
+-			default:
+-				dev_err(dev,
+-					"Unhandled EE event: 0x%x\n", type);
+-			}
+-			if (st != DEV_ST_TRANSITION_MAX)
+-				mhi_queue_state_transition(mhi_cntrl, st);
+-
+-			break;
+-		}
+-		case MHI_PKT_TYPE_TX_EVENT:
+-			chan = MHI_TRE_GET_EV_CHID(local_rp);
+-
+-			WARN_ON(chan >= mhi_cntrl->max_chan);
+-
+-			/*
+-			 * Only process the event ring elements whose channel
+-			 * ID is within the maximum supported range.
+-			 */
+-			if (chan < mhi_cntrl->max_chan) {
+-				mhi_chan = &mhi_cntrl->mhi_chan[chan];
+-				if (!mhi_chan->configured)
+-					break;
+-				parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
+-				event_quota--;
+-			}
+-			break;
+-		default:
+-			dev_err(dev, "Unhandled event type: %d\n", type);
+-			break;
+-		}
+-
+-		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+-		local_rp = ev_ring->rp;
+-
+-		ptr = er_ctxt->rp;
+-		if (!is_valid_ring_ptr(ev_ring, ptr)) {
+-			dev_err(&mhi_cntrl->mhi_dev->dev,
+-				"Event ring rp points outside of the event ring\n");
+-			return -EIO;
+-		}
+-
+-		dev_rp = mhi_to_virtual(ev_ring, ptr);
+-		count++;
+-	}
+-
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
+-		mhi_ring_er_db(mhi_event);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-	return count;
+-}
+-
+-int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+-				struct mhi_event *mhi_event,
+-				u32 event_quota)
+-{
+-	struct mhi_tre *dev_rp, *local_rp;
+-	struct mhi_ring *ev_ring = &mhi_event->ring;
+-	struct mhi_event_ctxt *er_ctxt =
+-		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+-	int count = 0;
+-	u32 chan;
+-	struct mhi_chan *mhi_chan;
+-	dma_addr_t ptr = er_ctxt->rp;
+-
+-	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+-		return -EIO;
+-
+-	if (!is_valid_ring_ptr(ev_ring, ptr)) {
+-		dev_err(&mhi_cntrl->mhi_dev->dev,
+-			"Event ring rp points outside of the event ring\n");
+-		return -EIO;
+-	}
+-
+-	dev_rp = mhi_to_virtual(ev_ring, ptr);
+-	local_rp = ev_ring->rp;
+-
+-	while (dev_rp != local_rp && event_quota > 0) {
+-		enum mhi_pkt_type type = MHI_TRE_GET_EV_TYPE(local_rp);
+-
+-		chan = MHI_TRE_GET_EV_CHID(local_rp);
+-
+-		WARN_ON(chan >= mhi_cntrl->max_chan);
+-
+-		/*
+-		 * Only process the event ring elements whose channel
+-		 * ID is within the maximum supported range.
+-		 */
+-		if (chan < mhi_cntrl->max_chan &&
+-		    mhi_cntrl->mhi_chan[chan].configured) {
+-			mhi_chan = &mhi_cntrl->mhi_chan[chan];
+-
+-			if (likely(type == MHI_PKT_TYPE_TX_EVENT)) {
+-				parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
+-				event_quota--;
+-			} else if (type == MHI_PKT_TYPE_RSC_TX_EVENT) {
+-				parse_rsc_event(mhi_cntrl, local_rp, mhi_chan);
+-				event_quota--;
+-			}
+-		}
+-
+-		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+-		local_rp = ev_ring->rp;
+-
+-		ptr = er_ctxt->rp;
+-		if (!is_valid_ring_ptr(ev_ring, ptr)) {
+-			dev_err(&mhi_cntrl->mhi_dev->dev,
+-				"Event ring rp points outside of the event ring\n");
+-			return -EIO;
+-		}
+-
+-		dev_rp = mhi_to_virtual(ev_ring, ptr);
+-		count++;
+-	}
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
+-		mhi_ring_er_db(mhi_event);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-	return count;
+-}
+-
+-void mhi_ev_task(unsigned long data)
+-{
+-	struct mhi_event *mhi_event = (struct mhi_event *)data;
+-	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+-
+-	/* process all pending events */
+-	spin_lock_bh(&mhi_event->lock);
+-	mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX);
+-	spin_unlock_bh(&mhi_event->lock);
+-}
+-
+-void mhi_ctrl_ev_task(unsigned long data)
+-{
+-	struct mhi_event *mhi_event = (struct mhi_event *)data;
+-	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	enum mhi_state state;
+-	enum mhi_pm_state pm_state = 0;
+-	int ret;
+-
+-	/*
+-	 * We can check PM state w/o a lock here because there is no way
+-	 * PM state can change from reg access valid to no access while this
+-	 * thread being executed.
+-	 */
+-	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+-		/*
+-		 * We may have a pending event but not allowed to
+-		 * process it since we are probably in a suspended state,
+-		 * so trigger a resume.
+-		 */
+-		mhi_trigger_resume(mhi_cntrl);
+-
+-		return;
+-	}
+-
+-	/* Process ctrl events events */
+-	ret = mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX);
+-
+-	/*
+-	 * We received an IRQ but no events to process, maybe device went to
+-	 * SYS_ERR state? Check the state to confirm.
+-	 */
+-	if (!ret) {
+-		write_lock_irq(&mhi_cntrl->pm_lock);
+-		state = mhi_get_mhi_state(mhi_cntrl);
+-		if (state == MHI_STATE_SYS_ERR) {
+-			dev_dbg(dev, "System error detected\n");
+-			pm_state = mhi_tryset_pm_state(mhi_cntrl,
+-						       MHI_PM_SYS_ERR_DETECT);
+-		}
+-		write_unlock_irq(&mhi_cntrl->pm_lock);
+-		if (pm_state == MHI_PM_SYS_ERR_DETECT)
+-			mhi_pm_sys_err_handler(mhi_cntrl);
+-	}
+-}
+-
+-static bool mhi_is_ring_full(struct mhi_controller *mhi_cntrl,
+-			     struct mhi_ring *ring)
+-{
+-	void *tmp = ring->wp + ring->el_size;
+-
+-	if (tmp >= (ring->base + ring->len))
+-		tmp = ring->base;
+-
+-	return (tmp == ring->rp);
+-}
+-
+-int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
+-		  struct sk_buff *skb, size_t len, enum mhi_flags mflags)
+-{
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
+-							     mhi_dev->dl_chan;
+-	struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+-	struct mhi_buf_info buf_info = { };
+-	int ret;
+-
+-	/* If MHI host pre-allocates buffers then client drivers cannot queue */
+-	if (mhi_chan->pre_alloc)
+-		return -EINVAL;
+-
+-	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+-		return -ENOMEM;
+-
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
+-		read_unlock_bh(&mhi_cntrl->pm_lock);
+-		return -EIO;
+-	}
+-
+-	/* we're in M3 or transitioning to M3 */
+-	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
+-		mhi_trigger_resume(mhi_cntrl);
+-
+-	/* Toggle wake to exit out of M2 */
+-	mhi_cntrl->wake_toggle(mhi_cntrl);
+-
+-	buf_info.v_addr = skb->data;
+-	buf_info.cb_buf = skb;
+-	buf_info.len = len;
+-
+-	ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
+-	if (unlikely(ret)) {
+-		read_unlock_bh(&mhi_cntrl->pm_lock);
+-		return ret;
+-	}
+-
+-	if (mhi_chan->dir == DMA_TO_DEVICE)
+-		atomic_inc(&mhi_cntrl->pending_pkts);
+-
+-	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
+-		read_lock_bh(&mhi_chan->lock);
+-		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+-		read_unlock_bh(&mhi_chan->lock);
+-	}
+-
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(mhi_queue_skb);
+-
+-int mhi_queue_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
+-		  struct mhi_buf *mhi_buf, size_t len, enum mhi_flags mflags)
+-{
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
+-							     mhi_dev->dl_chan;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+-	struct mhi_buf_info buf_info = { };
+-	int ret;
+-
+-	/* If MHI host pre-allocates buffers then client drivers cannot queue */
+-	if (mhi_chan->pre_alloc)
+-		return -EINVAL;
+-
+-	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+-		return -ENOMEM;
+-
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
+-		dev_err(dev, "MHI is not in activate state, PM state: %s\n",
+-			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+-		read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-		return -EIO;
+-	}
+-
+-	/* we're in M3 or transitioning to M3 */
+-	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
+-		mhi_trigger_resume(mhi_cntrl);
+-
+-	/* Toggle wake to exit out of M2 */
+-	mhi_cntrl->wake_toggle(mhi_cntrl);
+-
+-	buf_info.p_addr = mhi_buf->dma_addr;
+-	buf_info.cb_buf = mhi_buf;
+-	buf_info.pre_mapped = true;
+-	buf_info.len = len;
+-
+-	ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
+-	if (unlikely(ret)) {
+-		read_unlock_bh(&mhi_cntrl->pm_lock);
+-		return ret;
+-	}
+-
+-	if (mhi_chan->dir == DMA_TO_DEVICE)
+-		atomic_inc(&mhi_cntrl->pending_pkts);
+-
+-	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
+-		read_lock_bh(&mhi_chan->lock);
+-		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+-		read_unlock_bh(&mhi_chan->lock);
+-	}
+-
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(mhi_queue_dma);
+-
+-int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+-			struct mhi_buf_info *info, enum mhi_flags flags)
+-{
+-	struct mhi_ring *buf_ring, *tre_ring;
+-	struct mhi_tre *mhi_tre;
+-	struct mhi_buf_info *buf_info;
+-	int eot, eob, chain, bei;
+-	int ret;
+-
+-	buf_ring = &mhi_chan->buf_ring;
+-	tre_ring = &mhi_chan->tre_ring;
+-
+-	buf_info = buf_ring->wp;
+-	WARN_ON(buf_info->used);
+-	buf_info->pre_mapped = info->pre_mapped;
+-	if (info->pre_mapped)
+-		buf_info->p_addr = info->p_addr;
+-	else
+-		buf_info->v_addr = info->v_addr;
+-	buf_info->cb_buf = info->cb_buf;
+-	buf_info->wp = tre_ring->wp;
+-	buf_info->dir = mhi_chan->dir;
+-	buf_info->len = info->len;
+-
+-	if (!info->pre_mapped) {
+-		ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
+-		if (ret)
+-			return ret;
+-	}
+-
+-	eob = !!(flags & MHI_EOB);
+-	eot = !!(flags & MHI_EOT);
+-	chain = !!(flags & MHI_CHAIN);
+-	bei = !!(mhi_chan->intmod);
+-
+-	mhi_tre = tre_ring->wp;
+-	mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
+-	mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(info->len);
+-	mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(bei, eot, eob, chain);
+-
+-	/* increment WP */
+-	mhi_add_ring_element(mhi_cntrl, tre_ring);
+-	mhi_add_ring_element(mhi_cntrl, buf_ring);
+-
+-	return 0;
+-}
+-
+-int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
+-		  void *buf, size_t len, enum mhi_flags mflags)
+-{
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
+-							     mhi_dev->dl_chan;
+-	struct mhi_ring *tre_ring;
+-	struct mhi_buf_info buf_info = { };
+-	unsigned long flags;
+-	int ret;
+-
+-	/*
+-	 * this check here only as a guard, it's always
+-	 * possible mhi can enter error while executing rest of function,
+-	 * which is not fatal so we do not need to hold pm_lock
+-	 */
+-	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
+-		return -EIO;
+-
+-	tre_ring = &mhi_chan->tre_ring;
+-	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+-		return -ENOMEM;
+-
+-	buf_info.v_addr = buf;
+-	buf_info.cb_buf = buf;
+-	buf_info.len = len;
+-
+-	ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
+-	if (unlikely(ret))
+-		return ret;
+-
+-	read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+-
+-	/* we're in M3 or transitioning to M3 */
+-	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
+-		mhi_trigger_resume(mhi_cntrl);
+-
+-	/* Toggle wake to exit out of M2 */
+-	mhi_cntrl->wake_toggle(mhi_cntrl);
+-
+-	if (mhi_chan->dir == DMA_TO_DEVICE)
+-		atomic_inc(&mhi_cntrl->pending_pkts);
+-
+-	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
+-		unsigned long flags;
+-
+-		read_lock_irqsave(&mhi_chan->lock, flags);
+-		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+-		read_unlock_irqrestore(&mhi_chan->lock, flags);
+-	}
+-
+-	read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(mhi_queue_buf);
+-
+-int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
+-		 struct mhi_chan *mhi_chan,
+-		 enum mhi_cmd_type cmd)
+-{
+-	struct mhi_tre *cmd_tre = NULL;
+-	struct mhi_cmd *mhi_cmd = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+-	struct mhi_ring *ring = &mhi_cmd->ring;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	int chan = 0;
+-
+-	if (mhi_chan)
+-		chan = mhi_chan->chan;
+-
+-	spin_lock_bh(&mhi_cmd->lock);
+-	if (!get_nr_avail_ring_elements(mhi_cntrl, ring)) {
+-		spin_unlock_bh(&mhi_cmd->lock);
+-		return -ENOMEM;
+-	}
+-
+-	/* prepare the cmd tre */
+-	cmd_tre = ring->wp;
+-	switch (cmd) {
+-	case MHI_CMD_RESET_CHAN:
+-		cmd_tre->ptr = MHI_TRE_CMD_RESET_PTR;
+-		cmd_tre->dword[0] = MHI_TRE_CMD_RESET_DWORD0;
+-		cmd_tre->dword[1] = MHI_TRE_CMD_RESET_DWORD1(chan);
+-		break;
+-	case MHI_CMD_START_CHAN:
+-		cmd_tre->ptr = MHI_TRE_CMD_START_PTR;
+-		cmd_tre->dword[0] = MHI_TRE_CMD_START_DWORD0;
+-		cmd_tre->dword[1] = MHI_TRE_CMD_START_DWORD1(chan);
+-		break;
+-	default:
+-		dev_err(dev, "Command not supported\n");
+-		break;
+-	}
+-
+-	/* queue to hardware */
+-	mhi_add_ring_element(mhi_cntrl, ring);
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
+-		mhi_ring_cmd_db(mhi_cntrl, mhi_cmd);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-	spin_unlock_bh(&mhi_cmd->lock);
+-
+-	return 0;
+-}
+-
+-static void __mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
+-				    struct mhi_chan *mhi_chan)
+-{
+-	int ret;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-
+-	dev_dbg(dev, "Entered: unprepare channel:%d\n", mhi_chan->chan);
+-
+-	/* no more processing events for this channel */
+-	mutex_lock(&mhi_chan->mutex);
+-	write_lock_irq(&mhi_chan->lock);
+-	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
+-		write_unlock_irq(&mhi_chan->lock);
+-		mutex_unlock(&mhi_chan->mutex);
+-		return;
+-	}
+-
+-	mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+-	write_unlock_irq(&mhi_chan->lock);
+-
+-	reinit_completion(&mhi_chan->completion);
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+-		read_unlock_bh(&mhi_cntrl->pm_lock);
+-		goto error_invalid_state;
+-	}
+-
+-	mhi_cntrl->wake_toggle(mhi_cntrl);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-	mhi_cntrl->runtime_get(mhi_cntrl);
+-	mhi_cntrl->runtime_put(mhi_cntrl);
+-	ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_RESET_CHAN);
+-	if (ret)
+-		goto error_invalid_state;
+-
+-	/* even if it fails we will still reset */
+-	ret = wait_for_completion_timeout(&mhi_chan->completion,
+-				msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-	if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS)
+-		dev_err(dev,
+-			"Failed to receive cmd completion, still resetting\n");
+-
+-error_invalid_state:
+-	if (!mhi_chan->offload_ch) {
+-		mhi_reset_chan(mhi_cntrl, mhi_chan);
+-		mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+-	}
+-	dev_dbg(dev, "chan:%d successfully resetted\n", mhi_chan->chan);
+-	mutex_unlock(&mhi_chan->mutex);
+-}
+-
+-int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
+-			struct mhi_chan *mhi_chan)
+-{
+-	int ret = 0;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-
+-	dev_dbg(dev, "Preparing channel: %d\n", mhi_chan->chan);
+-
+-	if (!(BIT(mhi_cntrl->ee) & mhi_chan->ee_mask)) {
+-		dev_err(dev,
+-			"Current EE: %s Required EE Mask: 0x%x for chan: %s\n",
+-			TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask,
+-			mhi_chan->name);
+-		return -ENOTCONN;
+-	}
+-
+-	mutex_lock(&mhi_chan->mutex);
+-
+-	/* If channel is not in disable state, do not allow it to start */
+-	if (mhi_chan->ch_state != MHI_CH_STATE_DISABLED) {
+-		ret = -EIO;
+-		dev_dbg(dev, "channel: %d is not in disabled state\n",
+-			mhi_chan->chan);
+-		goto error_init_chan;
+-	}
+-
+-	/* Check of client manages channel context for offload channels */
+-	if (!mhi_chan->offload_ch) {
+-		ret = mhi_init_chan_ctxt(mhi_cntrl, mhi_chan);
+-		if (ret)
+-			goto error_init_chan;
+-	}
+-
+-	reinit_completion(&mhi_chan->completion);
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+-		read_unlock_bh(&mhi_cntrl->pm_lock);
+-		ret = -EIO;
+-		goto error_pm_state;
+-	}
+-
+-	mhi_cntrl->wake_toggle(mhi_cntrl);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-	mhi_cntrl->runtime_get(mhi_cntrl);
+-	mhi_cntrl->runtime_put(mhi_cntrl);
+-
+-	ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_START_CHAN);
+-	if (ret)
+-		goto error_pm_state;
+-
+-	ret = wait_for_completion_timeout(&mhi_chan->completion,
+-				msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-	if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) {
+-		ret = -EIO;
+-		goto error_pm_state;
+-	}
+-
+-	write_lock_irq(&mhi_chan->lock);
+-	mhi_chan->ch_state = MHI_CH_STATE_ENABLED;
+-	write_unlock_irq(&mhi_chan->lock);
+-
+-	/* Pre-allocate buffer for xfer ring */
+-	if (mhi_chan->pre_alloc) {
+-		int nr_el = get_nr_avail_ring_elements(mhi_cntrl,
+-						       &mhi_chan->tre_ring);
+-		size_t len = mhi_cntrl->buffer_len;
+-
+-		while (nr_el--) {
+-			void *buf;
+-			struct mhi_buf_info info = { };
+-			buf = kmalloc(len, GFP_KERNEL);
+-			if (!buf) {
+-				ret = -ENOMEM;
+-				goto error_pre_alloc;
+-			}
+-
+-			/* Prepare transfer descriptors */
+-			info.v_addr = buf;
+-			info.cb_buf = buf;
+-			info.len = len;
+-			ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &info, MHI_EOT);
+-			if (ret) {
+-				kfree(buf);
+-				goto error_pre_alloc;
+-			}
+-		}
+-
+-		read_lock_bh(&mhi_cntrl->pm_lock);
+-		if (MHI_DB_ACCESS_VALID(mhi_cntrl)) {
+-			read_lock_irq(&mhi_chan->lock);
+-			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+-			read_unlock_irq(&mhi_chan->lock);
+-		}
+-		read_unlock_bh(&mhi_cntrl->pm_lock);
+-	}
+-
+-	mutex_unlock(&mhi_chan->mutex);
+-
+-	dev_dbg(dev, "Chan: %d successfully moved to start state\n",
+-		mhi_chan->chan);
+-
+-	return 0;
+-
+-error_pm_state:
+-	if (!mhi_chan->offload_ch)
+-		mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+-
+-error_init_chan:
+-	mutex_unlock(&mhi_chan->mutex);
+-
+-	return ret;
+-
+-error_pre_alloc:
+-	mutex_unlock(&mhi_chan->mutex);
+-	__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+-
+-	return ret;
+-}
+-
+-static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
+-				  struct mhi_event *mhi_event,
+-				  struct mhi_event_ctxt *er_ctxt,
+-				  int chan)
+-
+-{
+-	struct mhi_tre *dev_rp, *local_rp;
+-	struct mhi_ring *ev_ring;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	unsigned long flags;
+-	dma_addr_t ptr;
+-
+-	dev_dbg(dev, "Marking all events for chan: %d as stale\n", chan);
+-
+-	ev_ring = &mhi_event->ring;
+-
+-	/* mark all stale events related to channel as STALE event */
+-	spin_lock_irqsave(&mhi_event->lock, flags);
+-
+-	ptr = er_ctxt->rp;
+-	if (!is_valid_ring_ptr(ev_ring, ptr)) {
+-		dev_err(&mhi_cntrl->mhi_dev->dev,
+-			"Event ring rp points outside of the event ring\n");
+-		dev_rp = ev_ring->rp;
+-	} else {
+-		dev_rp = mhi_to_virtual(ev_ring, ptr);
+-	}
+-
+-	local_rp = ev_ring->rp;
+-	while (dev_rp != local_rp) {
+-		if (MHI_TRE_GET_EV_TYPE(local_rp) == MHI_PKT_TYPE_TX_EVENT &&
+-		    chan == MHI_TRE_GET_EV_CHID(local_rp))
+-			local_rp->dword[1] = MHI_TRE_EV_DWORD1(chan,
+-					MHI_PKT_TYPE_STALE_EVENT);
+-		local_rp++;
+-		if (local_rp == (ev_ring->base + ev_ring->len))
+-			local_rp = ev_ring->base;
+-	}
+-
+-	dev_dbg(dev, "Finished marking events as stale events\n");
+-	spin_unlock_irqrestore(&mhi_event->lock, flags);
+-}
+-
+-static void mhi_reset_data_chan(struct mhi_controller *mhi_cntrl,
+-				struct mhi_chan *mhi_chan)
+-{
+-	struct mhi_ring *buf_ring, *tre_ring;
+-	struct mhi_result result;
+-
+-	/* Reset any pending buffers */
+-	buf_ring = &mhi_chan->buf_ring;
+-	tre_ring = &mhi_chan->tre_ring;
+-	result.transaction_status = -ENOTCONN;
+-	result.bytes_xferd = 0;
+-	while (tre_ring->rp != tre_ring->wp) {
+-		struct mhi_buf_info *buf_info = buf_ring->rp;
+-
+-		if (mhi_chan->dir == DMA_TO_DEVICE)
+-			atomic_dec(&mhi_cntrl->pending_pkts);
+-
+-		if (!buf_info->pre_mapped)
+-			mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
+-
+-		mhi_del_ring_element(mhi_cntrl, buf_ring);
+-		mhi_del_ring_element(mhi_cntrl, tre_ring);
+-
+-		if (mhi_chan->pre_alloc) {
+-			kfree(buf_info->cb_buf);
+-		} else {
+-			result.buf_addr = buf_info->cb_buf;
+-			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+-		}
+-	}
+-}
+-
+-void mhi_reset_chan(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan)
+-{
+-	struct mhi_event *mhi_event;
+-	struct mhi_event_ctxt *er_ctxt;
+-	int chan = mhi_chan->chan;
+-
+-	/* Nothing to reset, client doesn't queue buffers */
+-	if (mhi_chan->offload_ch)
+-		return;
+-
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
+-	er_ctxt = &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_chan->er_index];
+-
+-	mhi_mark_stale_events(mhi_cntrl, mhi_event, er_ctxt, chan);
+-
+-	mhi_reset_data_chan(mhi_cntrl, mhi_chan);
+-
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-}
+-
+-/* Move channel to start state */
+-int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
+-{
+-	int ret, dir;
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-	struct mhi_chan *mhi_chan;
+-
+-	for (dir = 0; dir < 2; dir++) {
+-		mhi_chan = dir ? mhi_dev->dl_chan : mhi_dev->ul_chan;
+-		if (!mhi_chan)
+-			continue;
+-
+-		ret = mhi_prepare_channel(mhi_cntrl, mhi_chan);
+-		if (ret)
+-			goto error_open_chan;
+-	}
+-
+-	return 0;
+-
+-error_open_chan:
+-	for (--dir; dir >= 0; dir--) {
+-		mhi_chan = dir ? mhi_dev->dl_chan : mhi_dev->ul_chan;
+-		if (!mhi_chan)
+-			continue;
+-
+-		__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+-	}
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(mhi_prepare_for_transfer);
+-
+-void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
+-{
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-	struct mhi_chan *mhi_chan;
+-	int dir;
+-
+-	for (dir = 0; dir < 2; dir++) {
+-		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+-		if (!mhi_chan)
+-			continue;
+-
+-		__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+-	}
+-}
+-EXPORT_SYMBOL_GPL(mhi_unprepare_from_transfer);
+-
+-int mhi_poll(struct mhi_device *mhi_dev, u32 budget)
+-{
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-	struct mhi_chan *mhi_chan = mhi_dev->dl_chan;
+-	struct mhi_event *mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
+-	int ret;
+-
+-	spin_lock_bh(&mhi_event->lock);
+-	ret = mhi_event->process_event(mhi_cntrl, mhi_event, budget);
+-	spin_unlock_bh(&mhi_event->lock);
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(mhi_poll);
+diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
+deleted file mode 100644
+index 7d69b740b9f93..0000000000000
+--- a/drivers/bus/mhi/core/pm.c
++++ /dev/null
+@@ -1,1157 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+- *
+- */
+-
+-#include <linux/delay.h>
+-#include <linux/device.h>
+-#include <linux/dma-direction.h>
+-#include <linux/dma-mapping.h>
+-#include <linux/interrupt.h>
+-#include <linux/list.h>
+-#include <linux/mhi.h>
+-#include <linux/module.h>
+-#include <linux/slab.h>
+-#include <linux/wait.h>
+-#include "internal.h"
+-
+-/*
+- * Not all MHI state transitions are synchronous. Transitions like Linkdown,
+- * SYS_ERR, and shutdown can happen anytime asynchronously. This function will
+- * transition to a new state only if we're allowed to.
+- *
+- * Priority increases as we go down. For instance, from any state in L0, the
+- * transition can be made to states in L1, L2 and L3. A notable exception to
+- * this rule is state DISABLE.  From DISABLE state we can only transition to
+- * POR state. Also, while in L2 state, user cannot jump back to previous
+- * L1 or L0 states.
+- *
+- * Valid transitions:
+- * L0: DISABLE <--> POR
+- *     POR <--> POR
+- *     POR -> M0 -> M2 --> M0
+- *     POR -> FW_DL_ERR
+- *     FW_DL_ERR <--> FW_DL_ERR
+- *     M0 <--> M0
+- *     M0 -> FW_DL_ERR
+- *     M0 -> M3_ENTER -> M3 -> M3_EXIT --> M0
+- * L1: SYS_ERR_DETECT -> SYS_ERR_PROCESS --> POR
+- * L2: SHUTDOWN_PROCESS -> DISABLE
+- * L3: LD_ERR_FATAL_DETECT <--> LD_ERR_FATAL_DETECT
+- *     LD_ERR_FATAL_DETECT -> SHUTDOWN_PROCESS
+- */
+-static struct mhi_pm_transitions const dev_state_transitions[] = {
+-	/* L0 States */
+-	{
+-		MHI_PM_DISABLE,
+-		MHI_PM_POR
+-	},
+-	{
+-		MHI_PM_POR,
+-		MHI_PM_POR | MHI_PM_DISABLE | MHI_PM_M0 |
+-		MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+-		MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_FW_DL_ERR
+-	},
+-	{
+-		MHI_PM_M0,
+-		MHI_PM_M0 | MHI_PM_M2 | MHI_PM_M3_ENTER |
+-		MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+-		MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_FW_DL_ERR
+-	},
+-	{
+-		MHI_PM_M2,
+-		MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+-		MHI_PM_LD_ERR_FATAL_DETECT
+-	},
+-	{
+-		MHI_PM_M3_ENTER,
+-		MHI_PM_M3 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+-		MHI_PM_LD_ERR_FATAL_DETECT
+-	},
+-	{
+-		MHI_PM_M3,
+-		MHI_PM_M3_EXIT | MHI_PM_SYS_ERR_DETECT |
+-		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
+-	},
+-	{
+-		MHI_PM_M3_EXIT,
+-		MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+-		MHI_PM_LD_ERR_FATAL_DETECT
+-	},
+-	{
+-		MHI_PM_FW_DL_ERR,
+-		MHI_PM_FW_DL_ERR | MHI_PM_SYS_ERR_DETECT |
+-		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
+-	},
+-	/* L1 States */
+-	{
+-		MHI_PM_SYS_ERR_DETECT,
+-		MHI_PM_SYS_ERR_PROCESS | MHI_PM_SHUTDOWN_PROCESS |
+-		MHI_PM_LD_ERR_FATAL_DETECT
+-	},
+-	{
+-		MHI_PM_SYS_ERR_PROCESS,
+-		MHI_PM_POR | MHI_PM_SHUTDOWN_PROCESS |
+-		MHI_PM_LD_ERR_FATAL_DETECT
+-	},
+-	/* L2 States */
+-	{
+-		MHI_PM_SHUTDOWN_PROCESS,
+-		MHI_PM_DISABLE | MHI_PM_LD_ERR_FATAL_DETECT
+-	},
+-	/* L3 States */
+-	{
+-		MHI_PM_LD_ERR_FATAL_DETECT,
+-		MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_SHUTDOWN_PROCESS
+-	},
+-};
+-
+-enum mhi_pm_state __must_check mhi_tryset_pm_state(struct mhi_controller *mhi_cntrl,
+-						   enum mhi_pm_state state)
+-{
+-	unsigned long cur_state = mhi_cntrl->pm_state;
+-	int index = find_last_bit(&cur_state, 32);
+-
+-	if (unlikely(index >= ARRAY_SIZE(dev_state_transitions)))
+-		return cur_state;
+-
+-	if (unlikely(dev_state_transitions[index].from_state != cur_state))
+-		return cur_state;
+-
+-	if (unlikely(!(dev_state_transitions[index].to_states & state)))
+-		return cur_state;
+-
+-	mhi_cntrl->pm_state = state;
+-	return mhi_cntrl->pm_state;
+-}
+-
+-void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state)
+-{
+-	if (state == MHI_STATE_RESET) {
+-		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
+-				    MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 1);
+-	} else {
+-		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
+-				    MHICTRL_MHISTATE_MASK,
+-				    MHICTRL_MHISTATE_SHIFT, state);
+-	}
+-}
+-
+-/* NOP for backward compatibility, host allowed to ring DB in M2 state */
+-static void mhi_toggle_dev_wake_nop(struct mhi_controller *mhi_cntrl)
+-{
+-}
+-
+-static void mhi_toggle_dev_wake(struct mhi_controller *mhi_cntrl)
+-{
+-	mhi_cntrl->wake_get(mhi_cntrl, false);
+-	mhi_cntrl->wake_put(mhi_cntrl, true);
+-}
+-
+-/* Handle device ready state transition */
+-int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
+-{
+-	void __iomem *base = mhi_cntrl->regs;
+-	struct mhi_event *mhi_event;
+-	enum mhi_pm_state cur_state;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	u32 reset = 1, ready = 0;
+-	int ret, i;
+-
+-	/* Wait for RESET to be cleared and READY bit to be set by the device */
+-	wait_event_timeout(mhi_cntrl->state_event,
+-			   MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
+-			   mhi_read_reg_field(mhi_cntrl, base, MHICTRL,
+-					      MHICTRL_RESET_MASK,
+-					      MHICTRL_RESET_SHIFT, &reset) ||
+-			   mhi_read_reg_field(mhi_cntrl, base, MHISTATUS,
+-					      MHISTATUS_READY_MASK,
+-					      MHISTATUS_READY_SHIFT, &ready) ||
+-			   (!reset && ready),
+-			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-
+-	/* Check if device entered error state */
+-	if (MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state)) {
+-		dev_err(dev, "Device link is not accessible\n");
+-		return -EIO;
+-	}
+-
+-	/* Timeout if device did not transition to ready state */
+-	if (reset || !ready) {
+-		dev_err(dev, "Device Ready timeout\n");
+-		return -ETIMEDOUT;
+-	}
+-
+-	dev_dbg(dev, "Device in READY State\n");
+-	write_lock_irq(&mhi_cntrl->pm_lock);
+-	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_POR);
+-	mhi_cntrl->dev_state = MHI_STATE_READY;
+-	write_unlock_irq(&mhi_cntrl->pm_lock);
+-
+-	if (cur_state != MHI_PM_POR) {
+-		dev_err(dev, "Error moving to state %s from %s\n",
+-			to_mhi_pm_state_str(MHI_PM_POR),
+-			to_mhi_pm_state_str(cur_state));
+-		return -EIO;
+-	}
+-
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+-		dev_err(dev, "Device registers not accessible\n");
+-		goto error_mmio;
+-	}
+-
+-	/* Configure MMIO registers */
+-	ret = mhi_init_mmio(mhi_cntrl);
+-	if (ret) {
+-		dev_err(dev, "Error configuring MMIO registers\n");
+-		goto error_mmio;
+-	}
+-
+-	/* Add elements to all SW event rings */
+-	mhi_event = mhi_cntrl->mhi_event;
+-	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+-		struct mhi_ring *ring = &mhi_event->ring;
+-
+-		/* Skip if this is an offload or HW event */
+-		if (mhi_event->offload_ev || mhi_event->hw_ring)
+-			continue;
+-
+-		ring->wp = ring->base + ring->len - ring->el_size;
+-		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
+-		/* Update all cores */
+-		smp_wmb();
+-
+-		/* Ring the event ring db */
+-		spin_lock_irq(&mhi_event->lock);
+-		mhi_ring_er_db(mhi_event);
+-		spin_unlock_irq(&mhi_event->lock);
+-	}
+-
+-	/* Set MHI to M0 state */
+-	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-	return 0;
+-
+-error_mmio:
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-	return -EIO;
+-}
+-
+-int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl)
+-{
+-	enum mhi_pm_state cur_state;
+-	struct mhi_chan *mhi_chan;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	int i;
+-
+-	write_lock_irq(&mhi_cntrl->pm_lock);
+-	mhi_cntrl->dev_state = MHI_STATE_M0;
+-	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M0);
+-	write_unlock_irq(&mhi_cntrl->pm_lock);
+-	if (unlikely(cur_state != MHI_PM_M0)) {
+-		dev_err(dev, "Unable to transition to M0 state\n");
+-		return -EIO;
+-	}
+-	mhi_cntrl->M0++;
+-
+-	/* Wake up the device */
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	mhi_cntrl->wake_get(mhi_cntrl, true);
+-
+-	/* Ring all event rings and CMD ring only if we're in mission mode */
+-	if (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) {
+-		struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
+-		struct mhi_cmd *mhi_cmd =
+-			&mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+-
+-		for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+-			if (mhi_event->offload_ev)
+-				continue;
+-
+-			spin_lock_irq(&mhi_event->lock);
+-			mhi_ring_er_db(mhi_event);
+-			spin_unlock_irq(&mhi_event->lock);
+-		}
+-
+-		/* Only ring primary cmd ring if ring is not empty */
+-		spin_lock_irq(&mhi_cmd->lock);
+-		if (mhi_cmd->ring.rp != mhi_cmd->ring.wp)
+-			mhi_ring_cmd_db(mhi_cntrl, mhi_cmd);
+-		spin_unlock_irq(&mhi_cmd->lock);
+-	}
+-
+-	/* Ring channel DB registers */
+-	mhi_chan = mhi_cntrl->mhi_chan;
+-	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+-		struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+-
+-		if (mhi_chan->db_cfg.reset_req) {
+-			write_lock_irq(&mhi_chan->lock);
+-			mhi_chan->db_cfg.db_mode = true;
+-			write_unlock_irq(&mhi_chan->lock);
+-		}
+-
+-		read_lock_irq(&mhi_chan->lock);
+-
+-		/* Only ring DB if ring is not empty */
+-		if (tre_ring->base && tre_ring->wp  != tre_ring->rp &&
+-		    mhi_chan->ch_state == MHI_CH_STATE_ENABLED)
+-			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+-		read_unlock_irq(&mhi_chan->lock);
+-	}
+-
+-	mhi_cntrl->wake_put(mhi_cntrl, false);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-	wake_up_all(&mhi_cntrl->state_event);
+-
+-	return 0;
+-}
+-
+-/*
+- * After receiving the MHI state change event from the device indicating the
+- * transition to M1 state, the host can transition the device to M2 state
+- * for keeping it in low power state.
+- */
+-void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl)
+-{
+-	enum mhi_pm_state state;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-
+-	write_lock_irq(&mhi_cntrl->pm_lock);
+-	state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M2);
+-	if (state == MHI_PM_M2) {
+-		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M2);
+-		mhi_cntrl->dev_state = MHI_STATE_M2;
+-
+-		write_unlock_irq(&mhi_cntrl->pm_lock);
+-
+-		mhi_cntrl->M2++;
+-		wake_up_all(&mhi_cntrl->state_event);
+-
+-		/* If there are any pending resources, exit M2 immediately */
+-		if (unlikely(atomic_read(&mhi_cntrl->pending_pkts) ||
+-			     atomic_read(&mhi_cntrl->dev_wake))) {
+-			dev_dbg(dev,
+-				"Exiting M2, pending_pkts: %d dev_wake: %d\n",
+-				atomic_read(&mhi_cntrl->pending_pkts),
+-				atomic_read(&mhi_cntrl->dev_wake));
+-			read_lock_bh(&mhi_cntrl->pm_lock);
+-			mhi_cntrl->wake_get(mhi_cntrl, true);
+-			mhi_cntrl->wake_put(mhi_cntrl, true);
+-			read_unlock_bh(&mhi_cntrl->pm_lock);
+-		} else {
+-			mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_IDLE);
+-		}
+-	} else {
+-		write_unlock_irq(&mhi_cntrl->pm_lock);
+-	}
+-}
+-
+-/* MHI M3 completion handler */
+-int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl)
+-{
+-	enum mhi_pm_state state;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-
+-	write_lock_irq(&mhi_cntrl->pm_lock);
+-	mhi_cntrl->dev_state = MHI_STATE_M3;
+-	state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3);
+-	write_unlock_irq(&mhi_cntrl->pm_lock);
+-	if (state != MHI_PM_M3) {
+-		dev_err(dev, "Unable to transition to M3 state\n");
+-		return -EIO;
+-	}
+-
+-	mhi_cntrl->M3++;
+-	wake_up_all(&mhi_cntrl->state_event);
+-
+-	return 0;
+-}
+-
+-/* Handle device Mission Mode transition */
+-static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
+-{
+-	struct mhi_event *mhi_event;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	enum mhi_ee_type current_ee = mhi_cntrl->ee;
+-	int i, ret;
+-
+-	dev_dbg(dev, "Processing Mission Mode transition\n");
+-
+-	write_lock_irq(&mhi_cntrl->pm_lock);
+-	if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+-		mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
+-	write_unlock_irq(&mhi_cntrl->pm_lock);
+-
+-	if (!MHI_IN_MISSION_MODE(mhi_cntrl->ee))
+-		return -EIO;
+-
+-	wake_up_all(&mhi_cntrl->state_event);
+-
+-	device_for_each_child(&mhi_cntrl->mhi_dev->dev, &current_ee,
+-			      mhi_destroy_device);
+-	mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_MISSION_MODE);
+-
+-	/* Force MHI to be in M0 state before continuing */
+-	ret = __mhi_device_get_sync(mhi_cntrl);
+-	if (ret)
+-		return ret;
+-
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-
+-	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+-		ret = -EIO;
+-		goto error_mission_mode;
+-	}
+-
+-	/* Add elements to all HW event rings */
+-	mhi_event = mhi_cntrl->mhi_event;
+-	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+-		struct mhi_ring *ring = &mhi_event->ring;
+-
+-		if (mhi_event->offload_ev || !mhi_event->hw_ring)
+-			continue;
+-
+-		ring->wp = ring->base + ring->len - ring->el_size;
+-		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
+-		/* Update to all cores */
+-		smp_wmb();
+-
+-		spin_lock_irq(&mhi_event->lock);
+-		if (MHI_DB_ACCESS_VALID(mhi_cntrl))
+-			mhi_ring_er_db(mhi_event);
+-		spin_unlock_irq(&mhi_event->lock);
+-	}
+-
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-	/*
+-	 * The MHI devices are only created when the client device switches its
+-	 * Execution Environment (EE) to either SBL or AMSS states
+-	 */
+-	mhi_create_devices(mhi_cntrl);
+-
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-
+-error_mission_mode:
+-	mhi_cntrl->wake_put(mhi_cntrl, false);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-	return ret;
+-}
+-
+-/* Handle SYS_ERR and Shutdown transitions */
+-static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
+-				      enum mhi_pm_state transition_state)
+-{
+-	enum mhi_pm_state cur_state, prev_state;
+-	struct mhi_event *mhi_event;
+-	struct mhi_cmd_ctxt *cmd_ctxt;
+-	struct mhi_cmd *mhi_cmd;
+-	struct mhi_event_ctxt *er_ctxt;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	int ret, i;
+-
+-	dev_dbg(dev, "Transitioning from PM state: %s to: %s\n",
+-		to_mhi_pm_state_str(mhi_cntrl->pm_state),
+-		to_mhi_pm_state_str(transition_state));
+-
+-	/* We must notify MHI control driver so it can clean up first */
+-	if (transition_state == MHI_PM_SYS_ERR_PROCESS)
+-		mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_SYS_ERROR);
+-
+-	mutex_lock(&mhi_cntrl->pm_mutex);
+-	write_lock_irq(&mhi_cntrl->pm_lock);
+-	prev_state = mhi_cntrl->pm_state;
+-	cur_state = mhi_tryset_pm_state(mhi_cntrl, transition_state);
+-	if (cur_state == transition_state) {
+-		mhi_cntrl->ee = MHI_EE_DISABLE_TRANSITION;
+-		mhi_cntrl->dev_state = MHI_STATE_RESET;
+-	}
+-	write_unlock_irq(&mhi_cntrl->pm_lock);
+-
+-	/* Wake up threads waiting for state transition */
+-	wake_up_all(&mhi_cntrl->state_event);
+-
+-	if (cur_state != transition_state) {
+-		dev_err(dev, "Failed to transition to state: %s from: %s\n",
+-			to_mhi_pm_state_str(transition_state),
+-			to_mhi_pm_state_str(cur_state));
+-		mutex_unlock(&mhi_cntrl->pm_mutex);
+-		return;
+-	}
+-
+-	/* Trigger MHI RESET so that the device will not access host memory */
+-	if (MHI_REG_ACCESS_VALID(prev_state)) {
+-		u32 in_reset = -1;
+-		unsigned long timeout = msecs_to_jiffies(mhi_cntrl->timeout_ms);
+-
+-		dev_dbg(dev, "Triggering MHI Reset in device\n");
+-		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
+-
+-		/* Wait for the reset bit to be cleared by the device */
+-		ret = wait_event_timeout(mhi_cntrl->state_event,
+-					 mhi_read_reg_field(mhi_cntrl,
+-							    mhi_cntrl->regs,
+-							    MHICTRL,
+-							    MHICTRL_RESET_MASK,
+-							    MHICTRL_RESET_SHIFT,
+-							    &in_reset) ||
+-					!in_reset, timeout);
+-		if ((!ret || in_reset) && cur_state == MHI_PM_SYS_ERR_PROCESS) {
+-			dev_err(dev, "Device failed to exit MHI Reset state\n");
+-			mutex_unlock(&mhi_cntrl->pm_mutex);
+-			return;
+-		}
+-
+-		/*
+-		 * Device will clear BHI_INTVEC as a part of RESET processing,
+-		 * hence re-program it
+-		 */
+-		mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
+-	}
+-
+-	dev_dbg(dev,
+-		 "Waiting for all pending event ring processing to complete\n");
+-	mhi_event = mhi_cntrl->mhi_event;
+-	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+-		if (mhi_event->offload_ev)
+-			continue;
+-		tasklet_kill(&mhi_event->task);
+-	}
+-
+-	/* Release lock and wait for all pending threads to complete */
+-	mutex_unlock(&mhi_cntrl->pm_mutex);
+-	dev_dbg(dev, "Waiting for all pending threads to complete\n");
+-	wake_up_all(&mhi_cntrl->state_event);
+-
+-	dev_dbg(dev, "Reset all active channels and remove MHI devices\n");
+-	device_for_each_child(mhi_cntrl->cntrl_dev, NULL, mhi_destroy_device);
+-
+-	mutex_lock(&mhi_cntrl->pm_mutex);
+-
+-	WARN_ON(atomic_read(&mhi_cntrl->dev_wake));
+-	WARN_ON(atomic_read(&mhi_cntrl->pending_pkts));
+-
+-	/* Reset the ev rings and cmd rings */
+-	dev_dbg(dev, "Resetting EV CTXT and CMD CTXT\n");
+-	mhi_cmd = mhi_cntrl->mhi_cmd;
+-	cmd_ctxt = mhi_cntrl->mhi_ctxt->cmd_ctxt;
+-	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
+-		struct mhi_ring *ring = &mhi_cmd->ring;
+-
+-		ring->rp = ring->base;
+-		ring->wp = ring->base;
+-		cmd_ctxt->rp = cmd_ctxt->rbase;
+-		cmd_ctxt->wp = cmd_ctxt->rbase;
+-	}
+-
+-	mhi_event = mhi_cntrl->mhi_event;
+-	er_ctxt = mhi_cntrl->mhi_ctxt->er_ctxt;
+-	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++,
+-		     mhi_event++) {
+-		struct mhi_ring *ring = &mhi_event->ring;
+-
+-		/* Skip offload events */
+-		if (mhi_event->offload_ev)
+-			continue;
+-
+-		ring->rp = ring->base;
+-		ring->wp = ring->base;
+-		er_ctxt->rp = er_ctxt->rbase;
+-		er_ctxt->wp = er_ctxt->rbase;
+-	}
+-
+-	if (cur_state == MHI_PM_SYS_ERR_PROCESS) {
+-		mhi_ready_state_transition(mhi_cntrl);
+-	} else {
+-		/* Move to disable state */
+-		write_lock_irq(&mhi_cntrl->pm_lock);
+-		cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_DISABLE);
+-		write_unlock_irq(&mhi_cntrl->pm_lock);
+-		if (unlikely(cur_state != MHI_PM_DISABLE))
+-			dev_err(dev, "Error moving from PM state: %s to: %s\n",
+-				to_mhi_pm_state_str(cur_state),
+-				to_mhi_pm_state_str(MHI_PM_DISABLE));
+-	}
+-
+-	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
+-		to_mhi_pm_state_str(mhi_cntrl->pm_state),
+-		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+-
+-	mutex_unlock(&mhi_cntrl->pm_mutex);
+-}
+-
+-/* Queue a new work item and schedule work */
+-int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
+-			       enum dev_st_transition state)
+-{
+-	struct state_transition *item = kmalloc(sizeof(*item), GFP_ATOMIC);
+-	unsigned long flags;
+-
+-	if (!item)
+-		return -ENOMEM;
+-
+-	item->state = state;
+-	spin_lock_irqsave(&mhi_cntrl->transition_lock, flags);
+-	list_add_tail(&item->node, &mhi_cntrl->transition_list);
+-	spin_unlock_irqrestore(&mhi_cntrl->transition_lock, flags);
+-
+-	schedule_work(&mhi_cntrl->st_worker);
+-
+-	return 0;
+-}
+-
+-/* SYS_ERR worker */
+-void mhi_pm_sys_err_handler(struct mhi_controller *mhi_cntrl)
+-{
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-
+-	/* skip if controller supports RDDM */
+-	if (mhi_cntrl->rddm_image) {
+-		dev_dbg(dev, "Controller supports RDDM, skip SYS_ERROR\n");
+-		return;
+-	}
+-
+-	mhi_queue_state_transition(mhi_cntrl, DEV_ST_TRANSITION_SYS_ERR);
+-}
+-
+-/* Device State Transition worker */
+-void mhi_pm_st_worker(struct work_struct *work)
+-{
+-	struct state_transition *itr, *tmp;
+-	LIST_HEAD(head);
+-	struct mhi_controller *mhi_cntrl = container_of(work,
+-							struct mhi_controller,
+-							st_worker);
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-
+-	spin_lock_irq(&mhi_cntrl->transition_lock);
+-	list_splice_tail_init(&mhi_cntrl->transition_list, &head);
+-	spin_unlock_irq(&mhi_cntrl->transition_lock);
+-
+-	list_for_each_entry_safe(itr, tmp, &head, node) {
+-		list_del(&itr->node);
+-		dev_dbg(dev, "Handling state transition: %s\n",
+-			TO_DEV_STATE_TRANS_STR(itr->state));
+-
+-		switch (itr->state) {
+-		case DEV_ST_TRANSITION_PBL:
+-			write_lock_irq(&mhi_cntrl->pm_lock);
+-			if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+-				mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
+-			write_unlock_irq(&mhi_cntrl->pm_lock);
+-			if (MHI_IN_PBL(mhi_cntrl->ee))
+-				mhi_fw_load_handler(mhi_cntrl);
+-			break;
+-		case DEV_ST_TRANSITION_SBL:
+-			write_lock_irq(&mhi_cntrl->pm_lock);
+-			mhi_cntrl->ee = MHI_EE_SBL;
+-			write_unlock_irq(&mhi_cntrl->pm_lock);
+-			/*
+-			 * The MHI devices are only created when the client
+-			 * device switches its Execution Environment (EE) to
+-			 * either SBL or AMSS states
+-			 */
+-			mhi_create_devices(mhi_cntrl);
+-			break;
+-		case DEV_ST_TRANSITION_MISSION_MODE:
+-			mhi_pm_mission_mode_transition(mhi_cntrl);
+-			break;
+-		case DEV_ST_TRANSITION_READY:
+-			mhi_ready_state_transition(mhi_cntrl);
+-			break;
+-		case DEV_ST_TRANSITION_SYS_ERR:
+-			mhi_pm_disable_transition
+-				(mhi_cntrl, MHI_PM_SYS_ERR_PROCESS);
+-			break;
+-		case DEV_ST_TRANSITION_DISABLE:
+-			mhi_pm_disable_transition
+-				(mhi_cntrl, MHI_PM_SHUTDOWN_PROCESS);
+-			break;
+-		default:
+-			break;
+-		}
+-		kfree(itr);
+-	}
+-}
+-
+-int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
+-{
+-	struct mhi_chan *itr, *tmp;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	enum mhi_pm_state new_state;
+-	int ret;
+-
+-	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
+-		return -EINVAL;
+-
+-	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+-		return -EIO;
+-
+-	/* Return busy if there are any pending resources */
+-	if (atomic_read(&mhi_cntrl->dev_wake) ||
+-	    atomic_read(&mhi_cntrl->pending_pkts))
+-		return -EBUSY;
+-
+-	/* Take MHI out of M2 state */
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	mhi_cntrl->wake_get(mhi_cntrl, false);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-	ret = wait_event_timeout(mhi_cntrl->state_event,
+-				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
+-				 mhi_cntrl->dev_state == MHI_STATE_M1 ||
+-				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+-				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	mhi_cntrl->wake_put(mhi_cntrl, false);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+-		dev_err(dev,
+-			"Could not enter M0/M1 state");
+-		return -EIO;
+-	}
+-
+-	write_lock_irq(&mhi_cntrl->pm_lock);
+-
+-	if (atomic_read(&mhi_cntrl->dev_wake) ||
+-	    atomic_read(&mhi_cntrl->pending_pkts)) {
+-		write_unlock_irq(&mhi_cntrl->pm_lock);
+-		return -EBUSY;
+-	}
+-
+-	dev_info(dev, "Allowing M3 transition\n");
+-	new_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_ENTER);
+-	if (new_state != MHI_PM_M3_ENTER) {
+-		write_unlock_irq(&mhi_cntrl->pm_lock);
+-		dev_err(dev,
+-			"Error setting to PM state: %s from: %s\n",
+-			to_mhi_pm_state_str(MHI_PM_M3_ENTER),
+-			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+-		return -EIO;
+-	}
+-
+-	/* Set MHI to M3 and wait for completion */
+-	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
+-	write_unlock_irq(&mhi_cntrl->pm_lock);
+-	dev_info(dev, "Wait for M3 completion\n");
+-
+-	ret = wait_event_timeout(mhi_cntrl->state_event,
+-				 mhi_cntrl->dev_state == MHI_STATE_M3 ||
+-				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+-				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-
+-	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+-		dev_err(dev,
+-			"Did not enter M3 state, MHI state: %s, PM state: %s\n",
+-			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+-			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+-		return -EIO;
+-	}
+-
+-	/* Notify clients about entering LPM */
+-	list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
+-		mutex_lock(&itr->mutex);
+-		if (itr->mhi_dev)
+-			mhi_notify(itr->mhi_dev, MHI_CB_LPM_ENTER);
+-		mutex_unlock(&itr->mutex);
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(mhi_pm_suspend);
+-
+-int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
+-{
+-	struct mhi_chan *itr, *tmp;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	enum mhi_pm_state cur_state;
+-	int ret;
+-
+-	dev_info(dev, "Entered with PM state: %s, MHI state: %s\n",
+-		 to_mhi_pm_state_str(mhi_cntrl->pm_state),
+-		 TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+-
+-	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
+-		return 0;
+-
+-	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+-		return -EIO;
+-
+-	/* Notify clients about exiting LPM */
+-	list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
+-		mutex_lock(&itr->mutex);
+-		if (itr->mhi_dev)
+-			mhi_notify(itr->mhi_dev, MHI_CB_LPM_EXIT);
+-		mutex_unlock(&itr->mutex);
+-	}
+-
+-	write_lock_irq(&mhi_cntrl->pm_lock);
+-	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_EXIT);
+-	if (cur_state != MHI_PM_M3_EXIT) {
+-		write_unlock_irq(&mhi_cntrl->pm_lock);
+-		dev_info(dev,
+-			 "Error setting to PM state: %s from: %s\n",
+-			 to_mhi_pm_state_str(MHI_PM_M3_EXIT),
+-			 to_mhi_pm_state_str(mhi_cntrl->pm_state));
+-		return -EIO;
+-	}
+-
+-	/* Set MHI to M0 and wait for completion */
+-	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
+-	write_unlock_irq(&mhi_cntrl->pm_lock);
+-
+-	ret = wait_event_timeout(mhi_cntrl->state_event,
+-				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
+-				 mhi_cntrl->dev_state == MHI_STATE_M2 ||
+-				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+-				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-
+-	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+-		dev_err(dev,
+-			"Did not enter M0 state, MHI state: %s, PM state: %s\n",
+-			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+-			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+-		return -EIO;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(mhi_pm_resume);
+-
+-int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
+-{
+-	int ret;
+-
+-	/* Wake up the device */
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	mhi_cntrl->wake_get(mhi_cntrl, true);
+-	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
+-		mhi_trigger_resume(mhi_cntrl);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-
+-	ret = wait_event_timeout(mhi_cntrl->state_event,
+-				 mhi_cntrl->pm_state == MHI_PM_M0 ||
+-				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+-				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-
+-	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+-		read_lock_bh(&mhi_cntrl->pm_lock);
+-		mhi_cntrl->wake_put(mhi_cntrl, false);
+-		read_unlock_bh(&mhi_cntrl->pm_lock);
+-		return -EIO;
+-	}
+-
+-	return 0;
+-}
+-
+-/* Assert device wake db */
+-static void mhi_assert_dev_wake(struct mhi_controller *mhi_cntrl, bool force)
+-{
+-	unsigned long flags;
+-
+-	/*
+-	 * If force flag is set, then increment the wake count value and
+-	 * ring wake db
+-	 */
+-	if (unlikely(force)) {
+-		spin_lock_irqsave(&mhi_cntrl->wlock, flags);
+-		atomic_inc(&mhi_cntrl->dev_wake);
+-		if (MHI_WAKE_DB_FORCE_SET_VALID(mhi_cntrl->pm_state) &&
+-		    !mhi_cntrl->wake_set) {
+-			mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1);
+-			mhi_cntrl->wake_set = true;
+-		}
+-		spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
+-	} else {
+-		/*
+-		 * If resources are already requested, then just increment
+-		 * the wake count value and return
+-		 */
+-		if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, 1, 0)))
+-			return;
+-
+-		spin_lock_irqsave(&mhi_cntrl->wlock, flags);
+-		if ((atomic_inc_return(&mhi_cntrl->dev_wake) == 1) &&
+-		    MHI_WAKE_DB_SET_VALID(mhi_cntrl->pm_state) &&
+-		    !mhi_cntrl->wake_set) {
+-			mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1);
+-			mhi_cntrl->wake_set = true;
+-		}
+-		spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
+-	}
+-}
+-
+-/* De-assert device wake db */
+-static void mhi_deassert_dev_wake(struct mhi_controller *mhi_cntrl,
+-				  bool override)
+-{
+-	unsigned long flags;
+-
+-	/*
+-	 * Only continue if there is a single resource, else just decrement
+-	 * and return
+-	 */
+-	if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, -1, 1)))
+-		return;
+-
+-	spin_lock_irqsave(&mhi_cntrl->wlock, flags);
+-	if ((atomic_dec_return(&mhi_cntrl->dev_wake) == 0) &&
+-	    MHI_WAKE_DB_CLEAR_VALID(mhi_cntrl->pm_state) && !override &&
+-	    mhi_cntrl->wake_set) {
+-		mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 0);
+-		mhi_cntrl->wake_set = false;
+-	}
+-	spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
+-}
+-
+-int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
+-{
+-	enum mhi_state state;
+-	enum mhi_ee_type current_ee;
+-	enum dev_st_transition next_state;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	u32 val;
+-	int ret;
+-
+-	dev_info(dev, "Requested to power ON\n");
+-
+-	if (mhi_cntrl->nr_irqs < 1)
+-		return -EINVAL;
+-
+-	/* Supply default wake routines if not provided by controller driver */
+-	if (!mhi_cntrl->wake_get || !mhi_cntrl->wake_put ||
+-	    !mhi_cntrl->wake_toggle) {
+-		mhi_cntrl->wake_get = mhi_assert_dev_wake;
+-		mhi_cntrl->wake_put = mhi_deassert_dev_wake;
+-		mhi_cntrl->wake_toggle = (mhi_cntrl->db_access & MHI_PM_M2) ?
+-			mhi_toggle_dev_wake_nop : mhi_toggle_dev_wake;
+-	}
+-
+-	mutex_lock(&mhi_cntrl->pm_mutex);
+-	mhi_cntrl->pm_state = MHI_PM_DISABLE;
+-
+-	if (!mhi_cntrl->pre_init) {
+-		/* Setup device context */
+-		ret = mhi_init_dev_ctxt(mhi_cntrl);
+-		if (ret)
+-			goto error_dev_ctxt;
+-	}
+-
+-	ret = mhi_init_irq_setup(mhi_cntrl);
+-	if (ret)
+-		goto error_setup_irq;
+-
+-	/* Setup BHI offset & INTVEC */
+-	write_lock_irq(&mhi_cntrl->pm_lock);
+-	ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIOFF, &val);
+-	if (ret) {
+-		write_unlock_irq(&mhi_cntrl->pm_lock);
+-		goto error_bhi_offset;
+-	}
+-
+-	mhi_cntrl->bhi = mhi_cntrl->regs + val;
+-
+-	/* Setup BHIE offset */
+-	if (mhi_cntrl->fbc_download) {
+-		ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF, &val);
+-		if (ret) {
+-			write_unlock_irq(&mhi_cntrl->pm_lock);
+-			dev_err(dev, "Error reading BHIE offset\n");
+-			goto error_bhi_offset;
+-		}
+-
+-		mhi_cntrl->bhie = mhi_cntrl->regs + val;
+-	}
+-
+-	mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
+-	mhi_cntrl->pm_state = MHI_PM_POR;
+-	mhi_cntrl->ee = MHI_EE_MAX;
+-	current_ee = mhi_get_exec_env(mhi_cntrl);
+-	write_unlock_irq(&mhi_cntrl->pm_lock);
+-
+-	/* Confirm that the device is in valid exec env */
+-	if (!MHI_IN_PBL(current_ee) && current_ee != MHI_EE_AMSS) {
+-		dev_err(dev, "Not a valid EE for power on\n");
+-		ret = -EIO;
+-		goto error_bhi_offset;
+-	}
+-
+-	state = mhi_get_mhi_state(mhi_cntrl);
+-	if (state == MHI_STATE_SYS_ERR) {
+-		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
+-		ret = wait_event_timeout(mhi_cntrl->state_event,
+-				MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
+-					mhi_read_reg_field(mhi_cntrl,
+-							   mhi_cntrl->regs,
+-							   MHICTRL,
+-							   MHICTRL_RESET_MASK,
+-							   MHICTRL_RESET_SHIFT,
+-							   &val) ||
+-					!val,
+-				msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-		if (!ret) {
+-			ret = -EIO;
+-			dev_info(dev, "Failed to reset MHI due to syserr state\n");
+-			goto error_bhi_offset;
+-		}
+-
+-		/*
+-		 * device cleares INTVEC as part of RESET processing,
+-		 * re-program it
+-		 */
+-		mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
+-	}
+-
+-	/* Transition to next state */
+-	next_state = MHI_IN_PBL(current_ee) ?
+-		DEV_ST_TRANSITION_PBL : DEV_ST_TRANSITION_READY;
+-
+-	mhi_queue_state_transition(mhi_cntrl, next_state);
+-
+-	mutex_unlock(&mhi_cntrl->pm_mutex);
+-
+-	dev_info(dev, "Power on setup success\n");
+-
+-	return 0;
+-
+-error_bhi_offset:
+-	mhi_deinit_free_irq(mhi_cntrl);
+-
+-error_setup_irq:
+-	if (!mhi_cntrl->pre_init)
+-		mhi_deinit_dev_ctxt(mhi_cntrl);
+-
+-error_dev_ctxt:
+-	mutex_unlock(&mhi_cntrl->pm_mutex);
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(mhi_async_power_up);
+-
+-void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
+-{
+-	enum mhi_pm_state cur_state;
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-
+-	/* If it's not a graceful shutdown, force MHI to linkdown state */
+-	if (!graceful) {
+-		mutex_lock(&mhi_cntrl->pm_mutex);
+-		write_lock_irq(&mhi_cntrl->pm_lock);
+-		cur_state = mhi_tryset_pm_state(mhi_cntrl,
+-						MHI_PM_LD_ERR_FATAL_DETECT);
+-		write_unlock_irq(&mhi_cntrl->pm_lock);
+-		mutex_unlock(&mhi_cntrl->pm_mutex);
+-		if (cur_state != MHI_PM_LD_ERR_FATAL_DETECT)
+-			dev_dbg(dev, "Failed to move to state: %s from: %s\n",
+-				to_mhi_pm_state_str(MHI_PM_LD_ERR_FATAL_DETECT),
+-				to_mhi_pm_state_str(mhi_cntrl->pm_state));
+-	}
+-
+-	mhi_queue_state_transition(mhi_cntrl, DEV_ST_TRANSITION_DISABLE);
+-
+-	/* Wait for shutdown to complete */
+-	flush_work(&mhi_cntrl->st_worker);
+-
+-	mhi_deinit_free_irq(mhi_cntrl);
+-
+-	if (!mhi_cntrl->pre_init) {
+-		/* Free all allocated resources */
+-		if (mhi_cntrl->fbc_image) {
+-			mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
+-			mhi_cntrl->fbc_image = NULL;
+-		}
+-		mhi_deinit_dev_ctxt(mhi_cntrl);
+-	}
+-}
+-EXPORT_SYMBOL_GPL(mhi_power_down);
+-
+-int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
+-{
+-	int ret = mhi_async_power_up(mhi_cntrl);
+-
+-	if (ret)
+-		return ret;
+-
+-	wait_event_timeout(mhi_cntrl->state_event,
+-			   MHI_IN_MISSION_MODE(mhi_cntrl->ee) ||
+-			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+-			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-
+-	ret = (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -ETIMEDOUT;
+-	if (ret)
+-		mhi_power_down(mhi_cntrl, false);
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL(mhi_sync_power_up);
+-
+-int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
+-{
+-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	int ret;
+-
+-	/* Check if device is already in RDDM */
+-	if (mhi_cntrl->ee == MHI_EE_RDDM)
+-		return 0;
+-
+-	dev_dbg(dev, "Triggering SYS_ERR to force RDDM state\n");
+-	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
+-
+-	/* Wait for RDDM event */
+-	ret = wait_event_timeout(mhi_cntrl->state_event,
+-				 mhi_cntrl->ee == MHI_EE_RDDM,
+-				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-	ret = ret ? 0 : -EIO;
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(mhi_force_rddm_mode);
+-
+-void mhi_device_get(struct mhi_device *mhi_dev)
+-{
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-
+-	mhi_dev->dev_wake++;
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
+-		mhi_trigger_resume(mhi_cntrl);
+-
+-	mhi_cntrl->wake_get(mhi_cntrl, true);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-}
+-EXPORT_SYMBOL_GPL(mhi_device_get);
+-
+-int mhi_device_get_sync(struct mhi_device *mhi_dev)
+-{
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-	int ret;
+-
+-	ret = __mhi_device_get_sync(mhi_cntrl);
+-	if (!ret)
+-		mhi_dev->dev_wake++;
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(mhi_device_get_sync);
+-
+-void mhi_device_put(struct mhi_device *mhi_dev)
+-{
+-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+-
+-	mhi_dev->dev_wake--;
+-	read_lock_bh(&mhi_cntrl->pm_lock);
+-	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
+-		mhi_trigger_resume(mhi_cntrl);
+-
+-	mhi_cntrl->wake_put(mhi_cntrl, false);
+-	read_unlock_bh(&mhi_cntrl->pm_lock);
+-}
+-EXPORT_SYMBOL_GPL(mhi_device_put);
+diff --git a/drivers/bus/mhi/host/Kconfig b/drivers/bus/mhi/host/Kconfig
+new file mode 100644
+index 0000000000000..da5cd0c9fc620
+--- /dev/null
++++ b/drivers/bus/mhi/host/Kconfig
+@@ -0,0 +1,31 @@
++# SPDX-License-Identifier: GPL-2.0
++#
++# MHI bus
++#
++# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
++#
++
++config MHI_BUS
++	tristate "Modem Host Interface (MHI) bus"
++	help
++	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
++	  communication protocol used by the host processors to control
++	  and communicate with modem devices over a high speed peripheral
++	  bus or shared memory.
++
++config MHI_BUS_DEBUG
++	bool "Debugfs support for the MHI bus"
++	depends on MHI_BUS && DEBUG_FS
++	help
++	  Enable debugfs support for use with the MHI transport. Allows
++	  reading and/or modifying some values within the MHI controller
++	  for debug and test purposes.
++
++config MHI_BUS_PCI_GENERIC
++	tristate "MHI PCI controller driver"
++	depends on MHI_BUS
++	depends on PCI
++	help
++	  This driver provides MHI PCI controller driver for devices such as
++	  Qualcomm SDX55 based PCIe modems.
++
+diff --git a/drivers/bus/mhi/host/Makefile b/drivers/bus/mhi/host/Makefile
+new file mode 100644
+index 0000000000000..859c2f38451c6
+--- /dev/null
++++ b/drivers/bus/mhi/host/Makefile
+@@ -0,0 +1,6 @@
++obj-$(CONFIG_MHI_BUS) += mhi.o
++mhi-y := init.o main.o pm.o boot.o
++mhi-$(CONFIG_MHI_BUS_DEBUG) += debugfs.o
++
++obj-$(CONFIG_MHI_BUS_PCI_GENERIC) += mhi_pci_generic.o
++mhi_pci_generic-y += pci_generic.o
+diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
+new file mode 100644
+index 0000000000000..24422f5c3d808
+--- /dev/null
++++ b/drivers/bus/mhi/host/boot.c
+@@ -0,0 +1,525 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
++ *
++ */
++
++#include <linux/delay.h>
++#include <linux/device.h>
++#include <linux/dma-direction.h>
++#include <linux/dma-mapping.h>
++#include <linux/firmware.h>
++#include <linux/interrupt.h>
++#include <linux/list.h>
++#include <linux/mhi.h>
++#include <linux/module.h>
++#include <linux/random.h>
++#include <linux/slab.h>
++#include <linux/wait.h>
++#include "internal.h"
++
++/* Setup RDDM vector table for RDDM transfer and program RXVEC */
++void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
++		      struct image_info *img_info)
++{
++	struct mhi_buf *mhi_buf = img_info->mhi_buf;
++	struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
++	void __iomem *base = mhi_cntrl->bhie;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	u32 sequence_id;
++	unsigned int i;
++
++	for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
++		bhi_vec->dma_addr = mhi_buf->dma_addr;
++		bhi_vec->size = mhi_buf->len;
++	}
++
++	dev_dbg(dev, "BHIe programming for RDDM\n");
++
++	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS,
++		      upper_32_bits(mhi_buf->dma_addr));
++
++	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_LOW_OFFS,
++		      lower_32_bits(mhi_buf->dma_addr));
++
++	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
++	sequence_id = MHI_RANDOM_U32_NONZERO(BHIE_RXVECSTATUS_SEQNUM_BMSK);
++
++	mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
++			    BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
++			    sequence_id);
++
++	dev_dbg(dev, "Address: %p and len: 0x%zx sequence: %u\n",
++		&mhi_buf->dma_addr, mhi_buf->len, sequence_id);
++}
++
++/* Collect RDDM buffer during kernel panic */
++static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
++{
++	int ret;
++	u32 rx_status;
++	enum mhi_ee_type ee;
++	const u32 delayus = 2000;
++	u32 retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
++	const u32 rddm_timeout_us = 200000;
++	int rddm_retry = rddm_timeout_us / delayus;
++	void __iomem *base = mhi_cntrl->bhie;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++
++	dev_dbg(dev, "Entered with pm_state:%s dev_state:%s ee:%s\n",
++		to_mhi_pm_state_str(mhi_cntrl->pm_state),
++		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
++		TO_MHI_EXEC_STR(mhi_cntrl->ee));
++
++	/*
++	 * This should only be executing during a kernel panic, we expect all
++	 * other cores to shutdown while we're collecting RDDM buffer. After
++	 * returning from this function, we expect the device to reset.
++	 *
++	 * Normaly, we read/write pm_state only after grabbing the
++	 * pm_lock, since we're in a panic, skipping it. Also there is no
++	 * gurantee that this state change would take effect since
++	 * we're setting it w/o grabbing pm_lock
++	 */
++	mhi_cntrl->pm_state = MHI_PM_LD_ERR_FATAL_DETECT;
++	/* update should take the effect immediately */
++	smp_wmb();
++
++	/*
++	 * Make sure device is not already in RDDM. In case the device asserts
++	 * and a kernel panic follows, device will already be in RDDM.
++	 * Do not trigger SYS ERR again and proceed with waiting for
++	 * image download completion.
++	 */
++	ee = mhi_get_exec_env(mhi_cntrl);
++	if (ee != MHI_EE_RDDM) {
++		dev_dbg(dev, "Trigger device into RDDM mode using SYS ERR\n");
++		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
++
++		dev_dbg(dev, "Waiting for device to enter RDDM\n");
++		while (rddm_retry--) {
++			ee = mhi_get_exec_env(mhi_cntrl);
++			if (ee == MHI_EE_RDDM)
++				break;
++
++			udelay(delayus);
++		}
++
++		if (rddm_retry <= 0) {
++			/* Hardware reset so force device to enter RDDM */
++			dev_dbg(dev,
++				"Did not enter RDDM, do a host req reset\n");
++			mhi_write_reg(mhi_cntrl, mhi_cntrl->regs,
++				      MHI_SOC_RESET_REQ_OFFSET,
++				      MHI_SOC_RESET_REQ);
++			udelay(delayus);
++		}
++
++		ee = mhi_get_exec_env(mhi_cntrl);
++	}
++
++	dev_dbg(dev,
++		"Waiting for RDDM image download via BHIe, current EE:%s\n",
++		TO_MHI_EXEC_STR(ee));
++
++	while (retry--) {
++		ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
++					 BHIE_RXVECSTATUS_STATUS_BMSK,
++					 BHIE_RXVECSTATUS_STATUS_SHFT,
++					 &rx_status);
++		if (ret)
++			return -EIO;
++
++		if (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL)
++			return 0;
++
++		udelay(delayus);
++	}
++
++	ee = mhi_get_exec_env(mhi_cntrl);
++	ret = mhi_read_reg(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS, &rx_status);
++
++	dev_err(dev, "Did not complete RDDM transfer\n");
++	dev_err(dev, "Current EE: %s\n", TO_MHI_EXEC_STR(ee));
++	dev_err(dev, "RXVEC_STATUS: 0x%x\n", rx_status);
++
++	return -EIO;
++}
++
++/* Download RDDM image from device */
++int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic)
++{
++	void __iomem *base = mhi_cntrl->bhie;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	u32 rx_status;
++
++	if (in_panic)
++		return __mhi_download_rddm_in_panic(mhi_cntrl);
++
++	dev_dbg(dev, "Waiting for RDDM image download via BHIe\n");
++
++	/* Wait for the image download to complete */
++	wait_event_timeout(mhi_cntrl->state_event,
++			   mhi_read_reg_field(mhi_cntrl, base,
++					      BHIE_RXVECSTATUS_OFFS,
++					      BHIE_RXVECSTATUS_STATUS_BMSK,
++					      BHIE_RXVECSTATUS_STATUS_SHFT,
++					      &rx_status) || rx_status,
++			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
++
++	return (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
++}
++EXPORT_SYMBOL_GPL(mhi_download_rddm_img);
++
++static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
++			    const struct mhi_buf *mhi_buf)
++{
++	void __iomem *base = mhi_cntrl->bhie;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
++	u32 tx_status, sequence_id;
++	int ret;
++
++	read_lock_bh(pm_lock);
++	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
++		read_unlock_bh(pm_lock);
++		return -EIO;
++	}
++
++	sequence_id = MHI_RANDOM_U32_NONZERO(BHIE_TXVECSTATUS_SEQNUM_BMSK);
++	dev_dbg(dev, "Starting AMSS download via BHIe. Sequence ID:%u\n",
++		sequence_id);
++	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_HIGH_OFFS,
++		      upper_32_bits(mhi_buf->dma_addr));
++
++	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_LOW_OFFS,
++		      lower_32_bits(mhi_buf->dma_addr));
++
++	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
++
++	mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
++			    BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
++			    sequence_id);
++	read_unlock_bh(pm_lock);
++
++	/* Wait for the image download to complete */
++	ret = wait_event_timeout(mhi_cntrl->state_event,
++				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
++				 mhi_read_reg_field(mhi_cntrl, base,
++						   BHIE_TXVECSTATUS_OFFS,
++						   BHIE_TXVECSTATUS_STATUS_BMSK,
++						   BHIE_TXVECSTATUS_STATUS_SHFT,
++						   &tx_status) || tx_status,
++				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
++	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
++	    tx_status != BHIE_TXVECSTATUS_STATUS_XFER_COMPL)
++		return -EIO;
++
++	return (!ret) ? -ETIMEDOUT : 0;
++}
++
++static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
++			   dma_addr_t dma_addr,
++			   size_t size)
++{
++	u32 tx_status, val, session_id;
++	int i, ret;
++	void __iomem *base = mhi_cntrl->bhi;
++	rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	struct {
++		char *name;
++		u32 offset;
++	} error_reg[] = {
++		{ "ERROR_CODE", BHI_ERRCODE },
++		{ "ERROR_DBG1", BHI_ERRDBG1 },
++		{ "ERROR_DBG2", BHI_ERRDBG2 },
++		{ "ERROR_DBG3", BHI_ERRDBG3 },
++		{ NULL },
++	};
++
++	read_lock_bh(pm_lock);
++	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
++		read_unlock_bh(pm_lock);
++		goto invalid_pm_state;
++	}
++
++	session_id = MHI_RANDOM_U32_NONZERO(BHI_TXDB_SEQNUM_BMSK);
++	dev_dbg(dev, "Starting SBL download via BHI. Session ID:%u\n",
++		session_id);
++	mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0);
++	mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH,
++		      upper_32_bits(dma_addr));
++	mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW,
++		      lower_32_bits(dma_addr));
++	mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size);
++	mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, session_id);
++	read_unlock_bh(pm_lock);
++
++	/* Wait for the image download to complete */
++	ret = wait_event_timeout(mhi_cntrl->state_event,
++			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
++			   mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS,
++					      BHI_STATUS_MASK, BHI_STATUS_SHIFT,
++					      &tx_status) || tx_status,
++			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
++	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
++		goto invalid_pm_state;
++
++	if (tx_status == BHI_STATUS_ERROR) {
++		dev_err(dev, "Image transfer failed\n");
++		read_lock_bh(pm_lock);
++		if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
++			for (i = 0; error_reg[i].name; i++) {
++				ret = mhi_read_reg(mhi_cntrl, base,
++						   error_reg[i].offset, &val);
++				if (ret)
++					break;
++				dev_err(dev, "Reg: %s value: 0x%x\n",
++					error_reg[i].name, val);
++			}
++		}
++		read_unlock_bh(pm_lock);
++		goto invalid_pm_state;
++	}
++
++	return (!ret) ? -ETIMEDOUT : 0;
++
++invalid_pm_state:
++
++	return -EIO;
++}
++
++void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
++			 struct image_info *image_info)
++{
++	int i;
++	struct mhi_buf *mhi_buf = image_info->mhi_buf;
++
++	for (i = 0; i < image_info->entries; i++, mhi_buf++)
++		mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
++				  mhi_buf->dma_addr);
++
++	kfree(image_info->mhi_buf);
++	kfree(image_info);
++}
++
++int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
++			 struct image_info **image_info,
++			 size_t alloc_size)
++{
++	size_t seg_size = mhi_cntrl->seg_len;
++	int segments = DIV_ROUND_UP(alloc_size, seg_size) + 1;
++	int i;
++	struct image_info *img_info;
++	struct mhi_buf *mhi_buf;
++
++	img_info = kzalloc(sizeof(*img_info), GFP_KERNEL);
++	if (!img_info)
++		return -ENOMEM;
++
++	/* Allocate memory for entries */
++	img_info->mhi_buf = kcalloc(segments, sizeof(*img_info->mhi_buf),
++				    GFP_KERNEL);
++	if (!img_info->mhi_buf)
++		goto error_alloc_mhi_buf;
++
++	/* Allocate and populate vector table */
++	mhi_buf = img_info->mhi_buf;
++	for (i = 0; i < segments; i++, mhi_buf++) {
++		size_t vec_size = seg_size;
++
++		/* Vector table is the last entry */
++		if (i == segments - 1)
++			vec_size = sizeof(struct bhi_vec_entry) * i;
++
++		mhi_buf->len = vec_size;
++		mhi_buf->buf = mhi_alloc_coherent(mhi_cntrl, vec_size,
++						  &mhi_buf->dma_addr,
++						  GFP_KERNEL);
++		if (!mhi_buf->buf)
++			goto error_alloc_segment;
++	}
++
++	img_info->bhi_vec = img_info->mhi_buf[segments - 1].buf;
++	img_info->entries = segments;
++	*image_info = img_info;
++
++	return 0;
++
++error_alloc_segment:
++	for (--i, --mhi_buf; i >= 0; i--, mhi_buf--)
++		mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
++				  mhi_buf->dma_addr);
++
++error_alloc_mhi_buf:
++	kfree(img_info);
++
++	return -ENOMEM;
++}
++
++static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
++			      const struct firmware *firmware,
++			      struct image_info *img_info)
++{
++	size_t remainder = firmware->size;
++	size_t to_cpy;
++	const u8 *buf = firmware->data;
++	int i = 0;
++	struct mhi_buf *mhi_buf = img_info->mhi_buf;
++	struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
++
++	while (remainder) {
++		to_cpy = min(remainder, mhi_buf->len);
++		memcpy(mhi_buf->buf, buf, to_cpy);
++		bhi_vec->dma_addr = mhi_buf->dma_addr;
++		bhi_vec->size = to_cpy;
++
++		buf += to_cpy;
++		remainder -= to_cpy;
++		i++;
++		bhi_vec++;
++		mhi_buf++;
++	}
++}
++
++void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
++{
++	const struct firmware *firmware = NULL;
++	struct image_info *image_info;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	const char *fw_name;
++	void *buf;
++	dma_addr_t dma_addr;
++	size_t size;
++	int i, ret;
++
++	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
++		dev_err(dev, "Device MHI is not in valid state\n");
++		return;
++	}
++
++	/* save hardware info from BHI */
++	ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_SERIALNU,
++			   &mhi_cntrl->serial_number);
++	if (ret)
++		dev_err(dev, "Could not capture serial number via BHI\n");
++
++	for (i = 0; i < ARRAY_SIZE(mhi_cntrl->oem_pk_hash); i++) {
++		ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_OEMPKHASH(i),
++				   &mhi_cntrl->oem_pk_hash[i]);
++		if (ret) {
++			dev_err(dev, "Could not capture OEM PK HASH via BHI\n");
++			break;
++		}
++	}
++
++	/* If device is in pass through, do reset to ready state transition */
++	if (mhi_cntrl->ee == MHI_EE_PTHRU)
++		goto fw_load_ee_pthru;
++
++	fw_name = (mhi_cntrl->ee == MHI_EE_EDL) ?
++		mhi_cntrl->edl_image : mhi_cntrl->fw_image;
++
++	if (!fw_name || (mhi_cntrl->fbc_download && (!mhi_cntrl->sbl_size ||
++						     !mhi_cntrl->seg_len))) {
++		dev_err(dev,
++			"No firmware image defined or !sbl_size || !seg_len\n");
++		return;
++	}
++
++	ret = request_firmware(&firmware, fw_name, dev);
++	if (ret) {
++		dev_err(dev, "Error loading firmware: %d\n", ret);
++		return;
++	}
++
++	size = (mhi_cntrl->fbc_download) ? mhi_cntrl->sbl_size : firmware->size;
++
++	/* SBL size provided is maximum size, not necessarily the image size */
++	if (size > firmware->size)
++		size = firmware->size;
++
++	buf = mhi_alloc_coherent(mhi_cntrl, size, &dma_addr, GFP_KERNEL);
++	if (!buf) {
++		release_firmware(firmware);
++		return;
++	}
++
++	/* Download SBL image */
++	memcpy(buf, firmware->data, size);
++	ret = mhi_fw_load_sbl(mhi_cntrl, dma_addr, size);
++	mhi_free_coherent(mhi_cntrl, size, buf, dma_addr);
++
++	if (!mhi_cntrl->fbc_download || ret || mhi_cntrl->ee == MHI_EE_EDL)
++		release_firmware(firmware);
++
++	/* Error or in EDL mode, we're done */
++	if (ret) {
++		dev_err(dev, "MHI did not load SBL, ret:%d\n", ret);
++		return;
++	}
++
++	if (mhi_cntrl->ee == MHI_EE_EDL)
++		return;
++
++	write_lock_irq(&mhi_cntrl->pm_lock);
++	mhi_cntrl->dev_state = MHI_STATE_RESET;
++	write_unlock_irq(&mhi_cntrl->pm_lock);
++
++	/*
++	 * If we're doing fbc, populate vector tables while
++	 * device transitioning into MHI READY state
++	 */
++	if (mhi_cntrl->fbc_download) {
++		ret = mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->fbc_image,
++					   firmware->size);
++		if (ret)
++			goto error_alloc_fw_table;
++
++		/* Load the firmware into BHIE vec table */
++		mhi_firmware_copy(mhi_cntrl, firmware, mhi_cntrl->fbc_image);
++	}
++
++fw_load_ee_pthru:
++	/* Transitioning into MHI RESET->READY state */
++	ret = mhi_ready_state_transition(mhi_cntrl);
++
++	if (!mhi_cntrl->fbc_download)
++		return;
++
++	if (ret) {
++		dev_err(dev, "MHI did not enter READY state\n");
++		goto error_read;
++	}
++
++	/* Wait for the SBL event */
++	ret = wait_event_timeout(mhi_cntrl->state_event,
++				 mhi_cntrl->ee == MHI_EE_SBL ||
++				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
++				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
++
++	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
++		dev_err(dev, "MHI did not enter SBL\n");
++		goto error_read;
++	}
++
++	/* Start full firmware image download */
++	image_info = mhi_cntrl->fbc_image;
++	ret = mhi_fw_load_amss(mhi_cntrl,
++			       /* Vector table is the last entry */
++			       &image_info->mhi_buf[image_info->entries - 1]);
++	if (ret)
++		dev_err(dev, "MHI did not load AMSS, ret:%d\n", ret);
++
++	release_firmware(firmware);
++
++	return;
++
++error_read:
++	mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
++	mhi_cntrl->fbc_image = NULL;
++
++error_alloc_fw_table:
++	release_firmware(firmware);
++}
+diff --git a/drivers/bus/mhi/host/debugfs.c b/drivers/bus/mhi/host/debugfs.c
+new file mode 100644
+index 0000000000000..3a48801e01f4a
+--- /dev/null
++++ b/drivers/bus/mhi/host/debugfs.c
+@@ -0,0 +1,411 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
++ *
++ */
++
++#include <linux/debugfs.h>
++#include <linux/device.h>
++#include <linux/interrupt.h>
++#include <linux/list.h>
++#include <linux/mhi.h>
++#include <linux/module.h>
++#include "internal.h"
++
++static int mhi_debugfs_states_show(struct seq_file *m, void *d)
++{
++	struct mhi_controller *mhi_cntrl = m->private;
++
++	/* states */
++	seq_printf(m, "PM state: %s Device: %s MHI state: %s EE: %s wake: %s\n",
++		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
++		   mhi_is_active(mhi_cntrl) ? "Active" : "Inactive",
++		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
++		   TO_MHI_EXEC_STR(mhi_cntrl->ee),
++		   mhi_cntrl->wake_set ? "true" : "false");
++
++	/* counters */
++	seq_printf(m, "M0: %u M2: %u M3: %u", mhi_cntrl->M0, mhi_cntrl->M2,
++		   mhi_cntrl->M3);
++
++	seq_printf(m, " device wake: %u pending packets: %u\n",
++		   atomic_read(&mhi_cntrl->dev_wake),
++		   atomic_read(&mhi_cntrl->pending_pkts));
++
++	return 0;
++}
++
++static int mhi_debugfs_events_show(struct seq_file *m, void *d)
++{
++	struct mhi_controller *mhi_cntrl = m->private;
++	struct mhi_event *mhi_event;
++	struct mhi_event_ctxt *er_ctxt;
++	int i;
++
++	if (!mhi_is_active(mhi_cntrl)) {
++		seq_puts(m, "Device not ready\n");
++		return -ENODEV;
++	}
++
++	er_ctxt = mhi_cntrl->mhi_ctxt->er_ctxt;
++	mhi_event = mhi_cntrl->mhi_event;
++	for (i = 0; i < mhi_cntrl->total_ev_rings;
++						i++, er_ctxt++, mhi_event++) {
++		struct mhi_ring *ring = &mhi_event->ring;
++
++		if (mhi_event->offload_ev) {
++			seq_printf(m, "Index: %d is an offload event ring\n",
++				   i);
++			continue;
++		}
++
++		seq_printf(m, "Index: %d intmod count: %lu time: %lu",
++			   i, (er_ctxt->intmod & EV_CTX_INTMODC_MASK) >>
++			   EV_CTX_INTMODC_SHIFT,
++			   (er_ctxt->intmod & EV_CTX_INTMODT_MASK) >>
++			   EV_CTX_INTMODT_SHIFT);
++
++		seq_printf(m, " base: 0x%0llx len: 0x%llx", er_ctxt->rbase,
++			   er_ctxt->rlen);
++
++		seq_printf(m, " rp: 0x%llx wp: 0x%llx", er_ctxt->rp,
++			   er_ctxt->wp);
++
++		seq_printf(m, " local rp: 0x%pK db: 0x%pad\n", ring->rp,
++			   &mhi_event->db_cfg.db_val);
++	}
++
++	return 0;
++}
++
++static int mhi_debugfs_channels_show(struct seq_file *m, void *d)
++{
++	struct mhi_controller *mhi_cntrl = m->private;
++	struct mhi_chan *mhi_chan;
++	struct mhi_chan_ctxt *chan_ctxt;
++	int i;
++
++	if (!mhi_is_active(mhi_cntrl)) {
++		seq_puts(m, "Device not ready\n");
++		return -ENODEV;
++	}
++
++	mhi_chan = mhi_cntrl->mhi_chan;
++	chan_ctxt = mhi_cntrl->mhi_ctxt->chan_ctxt;
++	for (i = 0; i < mhi_cntrl->max_chan; i++, chan_ctxt++, mhi_chan++) {
++		struct mhi_ring *ring = &mhi_chan->tre_ring;
++
++		if (mhi_chan->offload_ch) {
++			seq_printf(m, "%s(%u) is an offload channel\n",
++				   mhi_chan->name, mhi_chan->chan);
++			continue;
++		}
++
++		if (!mhi_chan->mhi_dev)
++			continue;
++
++		seq_printf(m,
++			   "%s(%u) state: 0x%lx brstmode: 0x%lx pollcfg: 0x%lx",
++			   mhi_chan->name, mhi_chan->chan, (chan_ctxt->chcfg &
++			   CHAN_CTX_CHSTATE_MASK) >> CHAN_CTX_CHSTATE_SHIFT,
++			   (chan_ctxt->chcfg & CHAN_CTX_BRSTMODE_MASK) >>
++			   CHAN_CTX_BRSTMODE_SHIFT, (chan_ctxt->chcfg &
++			   CHAN_CTX_POLLCFG_MASK) >> CHAN_CTX_POLLCFG_SHIFT);
++
++		seq_printf(m, " type: 0x%x event ring: %u", chan_ctxt->chtype,
++			   chan_ctxt->erindex);
++
++		seq_printf(m, " base: 0x%llx len: 0x%llx rp: 0x%llx wp: 0x%llx",
++			   chan_ctxt->rbase, chan_ctxt->rlen, chan_ctxt->rp,
++			   chan_ctxt->wp);
++
++		seq_printf(m, " local rp: 0x%pK local wp: 0x%pK db: 0x%pad\n",
++			   ring->rp, ring->wp,
++			   &mhi_chan->db_cfg.db_val);
++	}
++
++	return 0;
++}
++
++static int mhi_device_info_show(struct device *dev, void *data)
++{
++	struct mhi_device *mhi_dev;
++
++	if (dev->bus != &mhi_bus_type)
++		return 0;
++
++	mhi_dev = to_mhi_device(dev);
++
++	seq_printf((struct seq_file *)data, "%s: type: %s dev_wake: %u",
++		   mhi_dev->name, mhi_dev->dev_type ? "Controller" : "Transfer",
++		   mhi_dev->dev_wake);
++
++	/* for transfer device types only */
++	if (mhi_dev->dev_type == MHI_DEVICE_XFER)
++		seq_printf((struct seq_file *)data, " channels: %u(UL)/%u(DL)",
++			   mhi_dev->ul_chan_id, mhi_dev->dl_chan_id);
++
++	seq_puts((struct seq_file *)data, "\n");
++
++	return 0;
++}
++
++static int mhi_debugfs_devices_show(struct seq_file *m, void *d)
++{
++	struct mhi_controller *mhi_cntrl = m->private;
++
++	if (!mhi_is_active(mhi_cntrl)) {
++		seq_puts(m, "Device not ready\n");
++		return -ENODEV;
++	}
++
++	device_for_each_child(mhi_cntrl->cntrl_dev, m, mhi_device_info_show);
++
++	return 0;
++}
++
++static int mhi_debugfs_regdump_show(struct seq_file *m, void *d)
++{
++	struct mhi_controller *mhi_cntrl = m->private;
++	enum mhi_state state;
++	enum mhi_ee_type ee;
++	int i, ret = -EIO;
++	u32 val;
++	void __iomem *mhi_base = mhi_cntrl->regs;
++	void __iomem *bhi_base = mhi_cntrl->bhi;
++	void __iomem *bhie_base = mhi_cntrl->bhie;
++	void __iomem *wake_db = mhi_cntrl->wake_db;
++	struct {
++		const char *name;
++		int offset;
++		void __iomem *base;
++	} regs[] = {
++		{ "MHI_REGLEN", MHIREGLEN, mhi_base},
++		{ "MHI_VER", MHIVER, mhi_base},
++		{ "MHI_CFG", MHICFG, mhi_base},
++		{ "MHI_CTRL", MHICTRL, mhi_base},
++		{ "MHI_STATUS", MHISTATUS, mhi_base},
++		{ "MHI_WAKE_DB", 0, wake_db},
++		{ "BHI_EXECENV", BHI_EXECENV, bhi_base},
++		{ "BHI_STATUS", BHI_STATUS, bhi_base},
++		{ "BHI_ERRCODE", BHI_ERRCODE, bhi_base},
++		{ "BHI_ERRDBG1", BHI_ERRDBG1, bhi_base},
++		{ "BHI_ERRDBG2", BHI_ERRDBG2, bhi_base},
++		{ "BHI_ERRDBG3", BHI_ERRDBG3, bhi_base},
++		{ "BHIE_TXVEC_DB", BHIE_TXVECDB_OFFS, bhie_base},
++		{ "BHIE_TXVEC_STATUS", BHIE_TXVECSTATUS_OFFS, bhie_base},
++		{ "BHIE_RXVEC_DB", BHIE_RXVECDB_OFFS, bhie_base},
++		{ "BHIE_RXVEC_STATUS", BHIE_RXVECSTATUS_OFFS, bhie_base},
++		{ NULL },
++	};
++
++	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
++		return ret;
++
++	seq_printf(m, "Host PM state: %s Device state: %s EE: %s\n",
++		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
++		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
++		   TO_MHI_EXEC_STR(mhi_cntrl->ee));
++
++	state = mhi_get_mhi_state(mhi_cntrl);
++	ee = mhi_get_exec_env(mhi_cntrl);
++	seq_printf(m, "Device EE: %s state: %s\n", TO_MHI_EXEC_STR(ee),
++		   TO_MHI_STATE_STR(state));
++
++	for (i = 0; regs[i].name; i++) {
++		if (!regs[i].base)
++			continue;
++		ret = mhi_read_reg(mhi_cntrl, regs[i].base, regs[i].offset,
++				   &val);
++		if (ret)
++			continue;
++
++		seq_printf(m, "%s: 0x%x\n", regs[i].name, val);
++	}
++
++	return 0;
++}
++
++static int mhi_debugfs_device_wake_show(struct seq_file *m, void *d)
++{
++	struct mhi_controller *mhi_cntrl = m->private;
++	struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
++
++	if (!mhi_is_active(mhi_cntrl)) {
++		seq_puts(m, "Device not ready\n");
++		return -ENODEV;
++	}
++
++	seq_printf(m,
++		   "Wake count: %d\n%s\n", mhi_dev->dev_wake,
++		   "Usage: echo get/put > device_wake to vote/unvote for M0");
++
++	return 0;
++}
++
++static ssize_t mhi_debugfs_device_wake_write(struct file *file,
++					     const char __user *ubuf,
++					     size_t count, loff_t *ppos)
++{
++	struct seq_file	*m = file->private_data;
++	struct mhi_controller *mhi_cntrl = m->private;
++	struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
++	char buf[16];
++	int ret = -EINVAL;
++
++	if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
++		return -EFAULT;
++
++	if (!strncmp(buf, "get", 3)) {
++		ret = mhi_device_get_sync(mhi_dev);
++	} else if (!strncmp(buf, "put", 3)) {
++		mhi_device_put(mhi_dev);
++		ret = 0;
++	}
++
++	return ret ? ret : count;
++}
++
++static int mhi_debugfs_timeout_ms_show(struct seq_file *m, void *d)
++{
++	struct mhi_controller *mhi_cntrl = m->private;
++
++	seq_printf(m, "%u ms\n", mhi_cntrl->timeout_ms);
++
++	return 0;
++}
++
++static ssize_t mhi_debugfs_timeout_ms_write(struct file *file,
++					    const char __user *ubuf,
++					    size_t count, loff_t *ppos)
++{
++	struct seq_file	*m = file->private_data;
++	struct mhi_controller *mhi_cntrl = m->private;
++	u32 timeout_ms;
++
++	if (kstrtou32_from_user(ubuf, count, 0, &timeout_ms))
++		return -EINVAL;
++
++	mhi_cntrl->timeout_ms = timeout_ms;
++
++	return count;
++}
++
++static int mhi_debugfs_states_open(struct inode *inode, struct file *fp)
++{
++	return single_open(fp, mhi_debugfs_states_show, inode->i_private);
++}
++
++static int mhi_debugfs_events_open(struct inode *inode, struct file *fp)
++{
++	return single_open(fp, mhi_debugfs_events_show, inode->i_private);
++}
++
++static int mhi_debugfs_channels_open(struct inode *inode, struct file *fp)
++{
++	return single_open(fp, mhi_debugfs_channels_show, inode->i_private);
++}
++
++static int mhi_debugfs_devices_open(struct inode *inode, struct file *fp)
++{
++	return single_open(fp, mhi_debugfs_devices_show, inode->i_private);
++}
++
++static int mhi_debugfs_regdump_open(struct inode *inode, struct file *fp)
++{
++	return single_open(fp, mhi_debugfs_regdump_show, inode->i_private);
++}
++
++static int mhi_debugfs_device_wake_open(struct inode *inode, struct file *fp)
++{
++	return single_open(fp, mhi_debugfs_device_wake_show, inode->i_private);
++}
++
++static int mhi_debugfs_timeout_ms_open(struct inode *inode, struct file *fp)
++{
++	return single_open(fp, mhi_debugfs_timeout_ms_show, inode->i_private);
++}
++
++static const struct file_operations debugfs_states_fops = {
++	.open = mhi_debugfs_states_open,
++	.release = single_release,
++	.read = seq_read,
++};
++
++static const struct file_operations debugfs_events_fops = {
++	.open = mhi_debugfs_events_open,
++	.release = single_release,
++	.read = seq_read,
++};
++
++static const struct file_operations debugfs_channels_fops = {
++	.open = mhi_debugfs_channels_open,
++	.release = single_release,
++	.read = seq_read,
++};
++
++static const struct file_operations debugfs_devices_fops = {
++	.open = mhi_debugfs_devices_open,
++	.release = single_release,
++	.read = seq_read,
++};
++
++static const struct file_operations debugfs_regdump_fops = {
++	.open = mhi_debugfs_regdump_open,
++	.release = single_release,
++	.read = seq_read,
++};
++
++static const struct file_operations debugfs_device_wake_fops = {
++	.open = mhi_debugfs_device_wake_open,
++	.write = mhi_debugfs_device_wake_write,
++	.release = single_release,
++	.read = seq_read,
++};
++
++static const struct file_operations debugfs_timeout_ms_fops = {
++	.open = mhi_debugfs_timeout_ms_open,
++	.write = mhi_debugfs_timeout_ms_write,
++	.release = single_release,
++	.read = seq_read,
++};
++
++static struct dentry *mhi_debugfs_root;
++
++void mhi_create_debugfs(struct mhi_controller *mhi_cntrl)
++{
++	mhi_cntrl->debugfs_dentry =
++			debugfs_create_dir(dev_name(mhi_cntrl->cntrl_dev),
++					   mhi_debugfs_root);
++
++	debugfs_create_file("states", 0444, mhi_cntrl->debugfs_dentry,
++			    mhi_cntrl, &debugfs_states_fops);
++	debugfs_create_file("events", 0444, mhi_cntrl->debugfs_dentry,
++			    mhi_cntrl, &debugfs_events_fops);
++	debugfs_create_file("channels", 0444, mhi_cntrl->debugfs_dentry,
++			    mhi_cntrl, &debugfs_channels_fops);
++	debugfs_create_file("devices", 0444, mhi_cntrl->debugfs_dentry,
++			    mhi_cntrl, &debugfs_devices_fops);
++	debugfs_create_file("regdump", 0444, mhi_cntrl->debugfs_dentry,
++			    mhi_cntrl, &debugfs_regdump_fops);
++	debugfs_create_file("device_wake", 0644, mhi_cntrl->debugfs_dentry,
++			    mhi_cntrl, &debugfs_device_wake_fops);
++	debugfs_create_file("timeout_ms", 0644, mhi_cntrl->debugfs_dentry,
++			    mhi_cntrl, &debugfs_timeout_ms_fops);
++}
++
++void mhi_destroy_debugfs(struct mhi_controller *mhi_cntrl)
++{
++	debugfs_remove_recursive(mhi_cntrl->debugfs_dentry);
++	mhi_cntrl->debugfs_dentry = NULL;
++}
++
++void mhi_debugfs_init(void)
++{
++	mhi_debugfs_root = debugfs_create_dir(mhi_bus_type.name, NULL);
++}
++
++void mhi_debugfs_exit(void)
++{
++	debugfs_remove_recursive(mhi_debugfs_root);
++}
+diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
+new file mode 100644
+index 0000000000000..2cc48f96afdbc
+--- /dev/null
++++ b/drivers/bus/mhi/host/init.c
+@@ -0,0 +1,1387 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
++ *
++ */
++
++#include <linux/debugfs.h>
++#include <linux/device.h>
++#include <linux/dma-direction.h>
++#include <linux/dma-mapping.h>
++#include <linux/interrupt.h>
++#include <linux/list.h>
++#include <linux/mhi.h>
++#include <linux/mod_devicetable.h>
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <linux/vmalloc.h>
++#include <linux/wait.h>
++#include "internal.h"
++
++const char * const mhi_ee_str[MHI_EE_MAX] = {
++	[MHI_EE_PBL] = "PBL",
++	[MHI_EE_SBL] = "SBL",
++	[MHI_EE_AMSS] = "AMSS",
++	[MHI_EE_RDDM] = "RDDM",
++	[MHI_EE_WFW] = "WFW",
++	[MHI_EE_PTHRU] = "PASS THRU",
++	[MHI_EE_EDL] = "EDL",
++	[MHI_EE_DISABLE_TRANSITION] = "DISABLE",
++	[MHI_EE_NOT_SUPPORTED] = "NOT SUPPORTED",
++};
++
++const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
++	[DEV_ST_TRANSITION_PBL] = "PBL",
++	[DEV_ST_TRANSITION_READY] = "READY",
++	[DEV_ST_TRANSITION_SBL] = "SBL",
++	[DEV_ST_TRANSITION_MISSION_MODE] = "MISSION_MODE",
++	[DEV_ST_TRANSITION_SYS_ERR] = "SYS_ERR",
++	[DEV_ST_TRANSITION_DISABLE] = "DISABLE",
++};
++
++const char * const mhi_state_str[MHI_STATE_MAX] = {
++	[MHI_STATE_RESET] = "RESET",
++	[MHI_STATE_READY] = "READY",
++	[MHI_STATE_M0] = "M0",
++	[MHI_STATE_M1] = "M1",
++	[MHI_STATE_M2] = "M2",
++	[MHI_STATE_M3] = "M3",
++	[MHI_STATE_M3_FAST] = "M3_FAST",
++	[MHI_STATE_BHI] = "BHI",
++	[MHI_STATE_SYS_ERR] = "SYS_ERR",
++};
++
++static const char * const mhi_pm_state_str[] = {
++	[MHI_PM_STATE_DISABLE] = "DISABLE",
++	[MHI_PM_STATE_POR] = "POR",
++	[MHI_PM_STATE_M0] = "M0",
++	[MHI_PM_STATE_M2] = "M2",
++	[MHI_PM_STATE_M3_ENTER] = "M?->M3",
++	[MHI_PM_STATE_M3] = "M3",
++	[MHI_PM_STATE_M3_EXIT] = "M3->M0",
++	[MHI_PM_STATE_FW_DL_ERR] = "FW DL Error",
++	[MHI_PM_STATE_SYS_ERR_DETECT] = "SYS_ERR Detect",
++	[MHI_PM_STATE_SYS_ERR_PROCESS] = "SYS_ERR Process",
++	[MHI_PM_STATE_SHUTDOWN_PROCESS] = "SHUTDOWN Process",
++	[MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "LD or Error Fatal Detect",
++};
++
++const char *to_mhi_pm_state_str(enum mhi_pm_state state)
++{
++	int index = find_last_bit((unsigned long *)&state, 32);
++
++	if (index >= ARRAY_SIZE(mhi_pm_state_str))
++		return "Invalid State";
++
++	return mhi_pm_state_str[index];
++}
++
++static ssize_t serial_number_show(struct device *dev,
++				  struct device_attribute *attr,
++				  char *buf)
++{
++	struct mhi_device *mhi_dev = to_mhi_device(dev);
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++
++	return snprintf(buf, PAGE_SIZE, "Serial Number: %u\n",
++			mhi_cntrl->serial_number);
++}
++static DEVICE_ATTR_RO(serial_number);
++
++static ssize_t oem_pk_hash_show(struct device *dev,
++				struct device_attribute *attr,
++				char *buf)
++{
++	struct mhi_device *mhi_dev = to_mhi_device(dev);
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++	int i, cnt = 0;
++
++	for (i = 0; i < ARRAY_SIZE(mhi_cntrl->oem_pk_hash); i++)
++		cnt += snprintf(buf + cnt, PAGE_SIZE - cnt,
++				"OEMPKHASH[%d]: 0x%x\n", i,
++				mhi_cntrl->oem_pk_hash[i]);
++
++	return cnt;
++}
++static DEVICE_ATTR_RO(oem_pk_hash);
++
++static struct attribute *mhi_dev_attrs[] = {
++	&dev_attr_serial_number.attr,
++	&dev_attr_oem_pk_hash.attr,
++	NULL,
++};
++ATTRIBUTE_GROUPS(mhi_dev);
++
++/* MHI protocol requires the transfer ring to be aligned with ring length */
++static int mhi_alloc_aligned_ring(struct mhi_controller *mhi_cntrl,
++				  struct mhi_ring *ring,
++				  u64 len)
++{
++	ring->alloc_size = len + (len - 1);
++	ring->pre_aligned = mhi_alloc_coherent(mhi_cntrl, ring->alloc_size,
++					       &ring->dma_handle, GFP_KERNEL);
++	if (!ring->pre_aligned)
++		return -ENOMEM;
++
++	ring->iommu_base = (ring->dma_handle + (len - 1)) & ~(len - 1);
++	ring->base = ring->pre_aligned + (ring->iommu_base - ring->dma_handle);
++
++	return 0;
++}
++
++void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl)
++{
++	int i;
++	struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
++
++	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
++		if (mhi_event->offload_ev)
++			continue;
++
++		free_irq(mhi_cntrl->irq[mhi_event->irq], mhi_event);
++	}
++
++	free_irq(mhi_cntrl->irq[0], mhi_cntrl);
++}
++
++int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl)
++{
++	struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	int i, ret;
++
++	/* Setup BHI_INTVEC IRQ */
++	ret = request_threaded_irq(mhi_cntrl->irq[0], mhi_intvec_handler,
++				   mhi_intvec_threaded_handler,
++				   IRQF_SHARED | IRQF_NO_SUSPEND,
++				   "bhi", mhi_cntrl);
++	if (ret)
++		return ret;
++
++	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
++		if (mhi_event->offload_ev)
++			continue;
++
++		if (mhi_event->irq >= mhi_cntrl->nr_irqs) {
++			dev_err(dev, "irq %d not available for event ring\n",
++				mhi_event->irq);
++			ret = -EINVAL;
++			goto error_request;
++		}
++
++		ret = request_irq(mhi_cntrl->irq[mhi_event->irq],
++				  mhi_irq_handler,
++				  IRQF_SHARED | IRQF_NO_SUSPEND,
++				  "mhi", mhi_event);
++		if (ret) {
++			dev_err(dev, "Error requesting irq:%d for ev:%d\n",
++				mhi_cntrl->irq[mhi_event->irq], i);
++			goto error_request;
++		}
++	}
++
++	return 0;
++
++error_request:
++	for (--i, --mhi_event; i >= 0; i--, mhi_event--) {
++		if (mhi_event->offload_ev)
++			continue;
++
++		free_irq(mhi_cntrl->irq[mhi_event->irq], mhi_event);
++	}
++	free_irq(mhi_cntrl->irq[0], mhi_cntrl);
++
++	return ret;
++}
++
++void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl)
++{
++	int i;
++	struct mhi_ctxt *mhi_ctxt = mhi_cntrl->mhi_ctxt;
++	struct mhi_cmd *mhi_cmd;
++	struct mhi_event *mhi_event;
++	struct mhi_ring *ring;
++
++	mhi_cmd = mhi_cntrl->mhi_cmd;
++	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++) {
++		ring = &mhi_cmd->ring;
++		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
++				  ring->pre_aligned, ring->dma_handle);
++		ring->base = NULL;
++		ring->iommu_base = 0;
++	}
++
++	mhi_free_coherent(mhi_cntrl,
++			  sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS,
++			  mhi_ctxt->cmd_ctxt, mhi_ctxt->cmd_ctxt_addr);
++
++	mhi_event = mhi_cntrl->mhi_event;
++	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
++		if (mhi_event->offload_ev)
++			continue;
++
++		ring = &mhi_event->ring;
++		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
++				  ring->pre_aligned, ring->dma_handle);
++		ring->base = NULL;
++		ring->iommu_base = 0;
++	}
++
++	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->er_ctxt) *
++			  mhi_cntrl->total_ev_rings, mhi_ctxt->er_ctxt,
++			  mhi_ctxt->er_ctxt_addr);
++
++	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->chan_ctxt) *
++			  mhi_cntrl->max_chan, mhi_ctxt->chan_ctxt,
++			  mhi_ctxt->chan_ctxt_addr);
++
++	kfree(mhi_ctxt);
++	mhi_cntrl->mhi_ctxt = NULL;
++}
++
++int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
++{
++	struct mhi_ctxt *mhi_ctxt;
++	struct mhi_chan_ctxt *chan_ctxt;
++	struct mhi_event_ctxt *er_ctxt;
++	struct mhi_cmd_ctxt *cmd_ctxt;
++	struct mhi_chan *mhi_chan;
++	struct mhi_event *mhi_event;
++	struct mhi_cmd *mhi_cmd;
++	u32 tmp;
++	int ret = -ENOMEM, i;
++
++	atomic_set(&mhi_cntrl->dev_wake, 0);
++	atomic_set(&mhi_cntrl->pending_pkts, 0);
++
++	mhi_ctxt = kzalloc(sizeof(*mhi_ctxt), GFP_KERNEL);
++	if (!mhi_ctxt)
++		return -ENOMEM;
++
++	/* Setup channel ctxt */
++	mhi_ctxt->chan_ctxt = mhi_alloc_coherent(mhi_cntrl,
++						 sizeof(*mhi_ctxt->chan_ctxt) *
++						 mhi_cntrl->max_chan,
++						 &mhi_ctxt->chan_ctxt_addr,
++						 GFP_KERNEL);
++	if (!mhi_ctxt->chan_ctxt)
++		goto error_alloc_chan_ctxt;
++
++	mhi_chan = mhi_cntrl->mhi_chan;
++	chan_ctxt = mhi_ctxt->chan_ctxt;
++	for (i = 0; i < mhi_cntrl->max_chan; i++, chan_ctxt++, mhi_chan++) {
++		/* Skip if it is an offload channel */
++		if (mhi_chan->offload_ch)
++			continue;
++
++		tmp = chan_ctxt->chcfg;
++		tmp &= ~CHAN_CTX_CHSTATE_MASK;
++		tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
++		tmp &= ~CHAN_CTX_BRSTMODE_MASK;
++		tmp |= (mhi_chan->db_cfg.brstmode << CHAN_CTX_BRSTMODE_SHIFT);
++		tmp &= ~CHAN_CTX_POLLCFG_MASK;
++		tmp |= (mhi_chan->db_cfg.pollcfg << CHAN_CTX_POLLCFG_SHIFT);
++		chan_ctxt->chcfg = tmp;
++
++		chan_ctxt->chtype = mhi_chan->type;
++		chan_ctxt->erindex = mhi_chan->er_index;
++
++		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
++		mhi_chan->tre_ring.db_addr = (void __iomem *)&chan_ctxt->wp;
++	}
++
++	/* Setup event context */
++	mhi_ctxt->er_ctxt = mhi_alloc_coherent(mhi_cntrl,
++					       sizeof(*mhi_ctxt->er_ctxt) *
++					       mhi_cntrl->total_ev_rings,
++					       &mhi_ctxt->er_ctxt_addr,
++					       GFP_KERNEL);
++	if (!mhi_ctxt->er_ctxt)
++		goto error_alloc_er_ctxt;
++
++	er_ctxt = mhi_ctxt->er_ctxt;
++	mhi_event = mhi_cntrl->mhi_event;
++	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++,
++		     mhi_event++) {
++		struct mhi_ring *ring = &mhi_event->ring;
++
++		/* Skip if it is an offload event */
++		if (mhi_event->offload_ev)
++			continue;
++
++		tmp = er_ctxt->intmod;
++		tmp &= ~EV_CTX_INTMODC_MASK;
++		tmp &= ~EV_CTX_INTMODT_MASK;
++		tmp |= (mhi_event->intmod << EV_CTX_INTMODT_SHIFT);
++		er_ctxt->intmod = tmp;
++
++		er_ctxt->ertype = MHI_ER_TYPE_VALID;
++		er_ctxt->msivec = mhi_event->irq;
++		mhi_event->db_cfg.db_mode = true;
++
++		ring->el_size = sizeof(struct mhi_tre);
++		ring->len = ring->el_size * ring->elements;
++		ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
++		if (ret)
++			goto error_alloc_er;
++
++		/*
++		 * If the read pointer equals to the write pointer, then the
++		 * ring is empty
++		 */
++		ring->rp = ring->wp = ring->base;
++		er_ctxt->rbase = ring->iommu_base;
++		er_ctxt->rp = er_ctxt->wp = er_ctxt->rbase;
++		er_ctxt->rlen = ring->len;
++		ring->ctxt_wp = &er_ctxt->wp;
++	}
++
++	/* Setup cmd context */
++	ret = -ENOMEM;
++	mhi_ctxt->cmd_ctxt = mhi_alloc_coherent(mhi_cntrl,
++						sizeof(*mhi_ctxt->cmd_ctxt) *
++						NR_OF_CMD_RINGS,
++						&mhi_ctxt->cmd_ctxt_addr,
++						GFP_KERNEL);
++	if (!mhi_ctxt->cmd_ctxt)
++		goto error_alloc_er;
++
++	mhi_cmd = mhi_cntrl->mhi_cmd;
++	cmd_ctxt = mhi_ctxt->cmd_ctxt;
++	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
++		struct mhi_ring *ring = &mhi_cmd->ring;
++
++		ring->el_size = sizeof(struct mhi_tre);
++		ring->elements = CMD_EL_PER_RING;
++		ring->len = ring->el_size * ring->elements;
++		ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
++		if (ret)
++			goto error_alloc_cmd;
++
++		ring->rp = ring->wp = ring->base;
++		cmd_ctxt->rbase = ring->iommu_base;
++		cmd_ctxt->rp = cmd_ctxt->wp = cmd_ctxt->rbase;
++		cmd_ctxt->rlen = ring->len;
++		ring->ctxt_wp = &cmd_ctxt->wp;
++	}
++
++	mhi_cntrl->mhi_ctxt = mhi_ctxt;
++
++	return 0;
++
++error_alloc_cmd:
++	for (--i, --mhi_cmd; i >= 0; i--, mhi_cmd--) {
++		struct mhi_ring *ring = &mhi_cmd->ring;
++
++		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
++				  ring->pre_aligned, ring->dma_handle);
++	}
++	mhi_free_coherent(mhi_cntrl,
++			  sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS,
++			  mhi_ctxt->cmd_ctxt, mhi_ctxt->cmd_ctxt_addr);
++	i = mhi_cntrl->total_ev_rings;
++	mhi_event = mhi_cntrl->mhi_event + i;
++
++error_alloc_er:
++	for (--i, --mhi_event; i >= 0; i--, mhi_event--) {
++		struct mhi_ring *ring = &mhi_event->ring;
++
++		if (mhi_event->offload_ev)
++			continue;
++
++		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
++				  ring->pre_aligned, ring->dma_handle);
++	}
++	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->er_ctxt) *
++			  mhi_cntrl->total_ev_rings, mhi_ctxt->er_ctxt,
++			  mhi_ctxt->er_ctxt_addr);
++
++error_alloc_er_ctxt:
++	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->chan_ctxt) *
++			  mhi_cntrl->max_chan, mhi_ctxt->chan_ctxt,
++			  mhi_ctxt->chan_ctxt_addr);
++
++error_alloc_chan_ctxt:
++	kfree(mhi_ctxt);
++
++	return ret;
++}
++
++int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
++{
++	u32 val;
++	int i, ret;
++	struct mhi_chan *mhi_chan;
++	struct mhi_event *mhi_event;
++	void __iomem *base = mhi_cntrl->regs;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	struct {
++		u32 offset;
++		u32 mask;
++		u32 shift;
++		u32 val;
++	} reg_info[] = {
++		{
++			CCABAP_HIGHER, U32_MAX, 0,
++			upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
++		},
++		{
++			CCABAP_LOWER, U32_MAX, 0,
++			lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
++		},
++		{
++			ECABAP_HIGHER, U32_MAX, 0,
++			upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
++		},
++		{
++			ECABAP_LOWER, U32_MAX, 0,
++			lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
++		},
++		{
++			CRCBAP_HIGHER, U32_MAX, 0,
++			upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
++		},
++		{
++			CRCBAP_LOWER, U32_MAX, 0,
++			lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
++		},
++		{
++			MHICFG, MHICFG_NER_MASK, MHICFG_NER_SHIFT,
++			mhi_cntrl->total_ev_rings,
++		},
++		{
++			MHICFG, MHICFG_NHWER_MASK, MHICFG_NHWER_SHIFT,
++			mhi_cntrl->hw_ev_rings,
++		},
++		{
++			MHICTRLBASE_HIGHER, U32_MAX, 0,
++			upper_32_bits(mhi_cntrl->iova_start),
++		},
++		{
++			MHICTRLBASE_LOWER, U32_MAX, 0,
++			lower_32_bits(mhi_cntrl->iova_start),
++		},
++		{
++			MHIDATABASE_HIGHER, U32_MAX, 0,
++			upper_32_bits(mhi_cntrl->iova_start),
++		},
++		{
++			MHIDATABASE_LOWER, U32_MAX, 0,
++			lower_32_bits(mhi_cntrl->iova_start),
++		},
++		{
++			MHICTRLLIMIT_HIGHER, U32_MAX, 0,
++			upper_32_bits(mhi_cntrl->iova_stop),
++		},
++		{
++			MHICTRLLIMIT_LOWER, U32_MAX, 0,
++			lower_32_bits(mhi_cntrl->iova_stop),
++		},
++		{
++			MHIDATALIMIT_HIGHER, U32_MAX, 0,
++			upper_32_bits(mhi_cntrl->iova_stop),
++		},
++		{
++			MHIDATALIMIT_LOWER, U32_MAX, 0,
++			lower_32_bits(mhi_cntrl->iova_stop),
++		},
++		{ 0, 0, 0 }
++	};
++
++	dev_dbg(dev, "Initializing MHI registers\n");
++
++	/* Read channel db offset */
++	ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK,
++				 CHDBOFF_CHDBOFF_SHIFT, &val);
++	if (ret) {
++		dev_err(dev, "Unable to read CHDBOFF register\n");
++		return -EIO;
++	}
++
++	if (val >= mhi_cntrl->reg_len - (8 * MHI_DEV_WAKE_DB)) {
++		dev_err(dev, "CHDB offset: 0x%x is out of range: 0x%zx\n",
++			val, mhi_cntrl->reg_len - (8 * MHI_DEV_WAKE_DB));
++		return -ERANGE;
++	}
++
++	/* Setup wake db */
++	mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB);
++	mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 4, 0);
++	mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 0, 0);
++	mhi_cntrl->wake_set = false;
++
++	/* Setup channel db address for each channel in tre_ring */
++	mhi_chan = mhi_cntrl->mhi_chan;
++	for (i = 0; i < mhi_cntrl->max_chan; i++, val += 8, mhi_chan++)
++		mhi_chan->tre_ring.db_addr = base + val;
++
++	/* Read event ring db offset */
++	ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK,
++				 ERDBOFF_ERDBOFF_SHIFT, &val);
++	if (ret) {
++		dev_err(dev, "Unable to read ERDBOFF register\n");
++		return -EIO;
++	}
++
++	if (val >= mhi_cntrl->reg_len - (8 * mhi_cntrl->total_ev_rings)) {
++		dev_err(dev, "ERDB offset: 0x%x is out of range: 0x%zx\n",
++			val, mhi_cntrl->reg_len - (8 * mhi_cntrl->total_ev_rings));
++		return -ERANGE;
++	}
++
++	/* Setup event db address for each ev_ring */
++	mhi_event = mhi_cntrl->mhi_event;
++	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, val += 8, mhi_event++) {
++		if (mhi_event->offload_ev)
++			continue;
++
++		mhi_event->ring.db_addr = base + val;
++	}
++
++	/* Setup DB register for primary CMD rings */
++	mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING].ring.db_addr = base + CRDB_LOWER;
++
++	/* Write to MMIO registers */
++	for (i = 0; reg_info[i].offset; i++)
++		mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
++				    reg_info[i].mask, reg_info[i].shift,
++				    reg_info[i].val);
++
++	return 0;
++}
++
++void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
++			  struct mhi_chan *mhi_chan)
++{
++	struct mhi_ring *buf_ring;
++	struct mhi_ring *tre_ring;
++	struct mhi_chan_ctxt *chan_ctxt;
++	u32 tmp;
++
++	buf_ring = &mhi_chan->buf_ring;
++	tre_ring = &mhi_chan->tre_ring;
++	chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
++
++	mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size,
++			  tre_ring->pre_aligned, tre_ring->dma_handle);
++	vfree(buf_ring->base);
++
++	buf_ring->base = tre_ring->base = NULL;
++	tre_ring->ctxt_wp = NULL;
++	chan_ctxt->rbase = 0;
++	chan_ctxt->rlen = 0;
++	chan_ctxt->rp = 0;
++	chan_ctxt->wp = 0;
++
++	tmp = chan_ctxt->chcfg;
++	tmp &= ~CHAN_CTX_CHSTATE_MASK;
++	tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
++	chan_ctxt->chcfg = tmp;
++
++	/* Update to all cores */
++	smp_wmb();
++}
++
++int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
++		       struct mhi_chan *mhi_chan)
++{
++	struct mhi_ring *buf_ring;
++	struct mhi_ring *tre_ring;
++	struct mhi_chan_ctxt *chan_ctxt;
++	u32 tmp;
++	int ret;
++
++	buf_ring = &mhi_chan->buf_ring;
++	tre_ring = &mhi_chan->tre_ring;
++	tre_ring->el_size = sizeof(struct mhi_tre);
++	tre_ring->len = tre_ring->el_size * tre_ring->elements;
++	chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
++	ret = mhi_alloc_aligned_ring(mhi_cntrl, tre_ring, tre_ring->len);
++	if (ret)
++		return -ENOMEM;
++
++	buf_ring->el_size = sizeof(struct mhi_buf_info);
++	buf_ring->len = buf_ring->el_size * buf_ring->elements;
++	buf_ring->base = vzalloc(buf_ring->len);
++
++	if (!buf_ring->base) {
++		mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size,
++				  tre_ring->pre_aligned, tre_ring->dma_handle);
++		return -ENOMEM;
++	}
++
++	tmp = chan_ctxt->chcfg;
++	tmp &= ~CHAN_CTX_CHSTATE_MASK;
++	tmp |= (MHI_CH_STATE_ENABLED << CHAN_CTX_CHSTATE_SHIFT);
++	chan_ctxt->chcfg = tmp;
++
++	chan_ctxt->rbase = tre_ring->iommu_base;
++	chan_ctxt->rp = chan_ctxt->wp = chan_ctxt->rbase;
++	chan_ctxt->rlen = tre_ring->len;
++	tre_ring->ctxt_wp = &chan_ctxt->wp;
++
++	tre_ring->rp = tre_ring->wp = tre_ring->base;
++	buf_ring->rp = buf_ring->wp = buf_ring->base;
++	mhi_chan->db_cfg.db_mode = 1;
++
++	/* Update to all cores */
++	smp_wmb();
++
++	return 0;
++}
++
++static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
++			const struct mhi_controller_config *config)
++{
++	struct mhi_event *mhi_event;
++	const struct mhi_event_config *event_cfg;
++	struct device *dev = mhi_cntrl->cntrl_dev;
++	int i, num;
++
++	num = config->num_events;
++	mhi_cntrl->total_ev_rings = num;
++	mhi_cntrl->mhi_event = kcalloc(num, sizeof(*mhi_cntrl->mhi_event),
++				       GFP_KERNEL);
++	if (!mhi_cntrl->mhi_event)
++		return -ENOMEM;
++
++	/* Populate event ring */
++	mhi_event = mhi_cntrl->mhi_event;
++	for (i = 0; i < num; i++) {
++		event_cfg = &config->event_cfg[i];
++
++		mhi_event->er_index = i;
++		mhi_event->ring.elements = event_cfg->num_elements;
++		mhi_event->intmod = event_cfg->irq_moderation_ms;
++		mhi_event->irq = event_cfg->irq;
++
++		if (event_cfg->channel != U32_MAX) {
++			/* This event ring has a dedicated channel */
++			mhi_event->chan = event_cfg->channel;
++			if (mhi_event->chan >= mhi_cntrl->max_chan) {
++				dev_err(dev,
++					"Event Ring channel not available\n");
++				goto error_ev_cfg;
++			}
++
++			mhi_event->mhi_chan =
++				&mhi_cntrl->mhi_chan[mhi_event->chan];
++		}
++
++		/* Priority is fixed to 1 for now */
++		mhi_event->priority = 1;
++
++		mhi_event->db_cfg.brstmode = event_cfg->mode;
++		if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
++			goto error_ev_cfg;
++
++		if (mhi_event->db_cfg.brstmode == MHI_DB_BRST_ENABLE)
++			mhi_event->db_cfg.process_db = mhi_db_brstmode;
++		else
++			mhi_event->db_cfg.process_db = mhi_db_brstmode_disable;
++
++		mhi_event->data_type = event_cfg->data_type;
++
++		switch (mhi_event->data_type) {
++		case MHI_ER_DATA:
++			mhi_event->process_event = mhi_process_data_event_ring;
++			break;
++		case MHI_ER_CTRL:
++			mhi_event->process_event = mhi_process_ctrl_ev_ring;
++			break;
++		default:
++			dev_err(dev, "Event Ring type not supported\n");
++			goto error_ev_cfg;
++		}
++
++		mhi_event->hw_ring = event_cfg->hardware_event;
++		if (mhi_event->hw_ring)
++			mhi_cntrl->hw_ev_rings++;
++		else
++			mhi_cntrl->sw_ev_rings++;
++
++		mhi_event->cl_manage = event_cfg->client_managed;
++		mhi_event->offload_ev = event_cfg->offload_channel;
++		mhi_event++;
++	}
++
++	return 0;
++
++error_ev_cfg:
++
++	kfree(mhi_cntrl->mhi_event);
++	return -EINVAL;
++}
++
++static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
++			const struct mhi_controller_config *config)
++{
++	const struct mhi_channel_config *ch_cfg;
++	struct device *dev = mhi_cntrl->cntrl_dev;
++	int i;
++	u32 chan;
++
++	mhi_cntrl->max_chan = config->max_channels;
++
++	/*
++	 * The allocation of MHI channels can exceed 32KB in some scenarios,
++	 * so to avoid any memory possible allocation failures, vzalloc is
++	 * used here
++	 */
++	mhi_cntrl->mhi_chan = vzalloc(mhi_cntrl->max_chan *
++				      sizeof(*mhi_cntrl->mhi_chan));
++	if (!mhi_cntrl->mhi_chan)
++		return -ENOMEM;
++
++	INIT_LIST_HEAD(&mhi_cntrl->lpm_chans);
++
++	/* Populate channel configurations */
++	for (i = 0; i < config->num_channels; i++) {
++		struct mhi_chan *mhi_chan;
++
++		ch_cfg = &config->ch_cfg[i];
++
++		chan = ch_cfg->num;
++		if (chan >= mhi_cntrl->max_chan) {
++			dev_err(dev, "Channel %d not available\n", chan);
++			goto error_chan_cfg;
++		}
++
++		mhi_chan = &mhi_cntrl->mhi_chan[chan];
++		mhi_chan->name = ch_cfg->name;
++		mhi_chan->chan = chan;
++
++		mhi_chan->tre_ring.elements = ch_cfg->num_elements;
++		if (!mhi_chan->tre_ring.elements)
++			goto error_chan_cfg;
++
++		/*
++		 * For some channels, local ring length should be bigger than
++		 * the transfer ring length due to internal logical channels
++		 * in device. So host can queue much more buffers than transfer
++		 * ring length. Example, RSC channels should have a larger local
++		 * channel length than transfer ring length.
++		 */
++		mhi_chan->buf_ring.elements = ch_cfg->local_elements;
++		if (!mhi_chan->buf_ring.elements)
++			mhi_chan->buf_ring.elements = mhi_chan->tre_ring.elements;
++		mhi_chan->er_index = ch_cfg->event_ring;
++		mhi_chan->dir = ch_cfg->dir;
++
++		/*
++		 * For most channels, chtype is identical to channel directions.
++		 * So, if it is not defined then assign channel direction to
++		 * chtype
++		 */
++		mhi_chan->type = ch_cfg->type;
++		if (!mhi_chan->type)
++			mhi_chan->type = (enum mhi_ch_type)mhi_chan->dir;
++
++		mhi_chan->ee_mask = ch_cfg->ee_mask;
++		mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
++		mhi_chan->lpm_notify = ch_cfg->lpm_notify;
++		mhi_chan->offload_ch = ch_cfg->offload_channel;
++		mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
++		mhi_chan->pre_alloc = ch_cfg->auto_queue;
++		mhi_chan->auto_start = ch_cfg->auto_start;
++
++		/*
++		 * If MHI host allocates buffers, then the channel direction
++		 * should be DMA_FROM_DEVICE
++		 */
++		if (mhi_chan->pre_alloc && mhi_chan->dir != DMA_FROM_DEVICE) {
++			dev_err(dev, "Invalid channel configuration\n");
++			goto error_chan_cfg;
++		}
++
++		/*
++		 * Bi-directional and direction less channel must be an
++		 * offload channel
++		 */
++		if ((mhi_chan->dir == DMA_BIDIRECTIONAL ||
++		     mhi_chan->dir == DMA_NONE) && !mhi_chan->offload_ch) {
++			dev_err(dev, "Invalid channel configuration\n");
++			goto error_chan_cfg;
++		}
++
++		if (!mhi_chan->offload_ch) {
++			mhi_chan->db_cfg.brstmode = ch_cfg->doorbell;
++			if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) {
++				dev_err(dev, "Invalid Door bell mode\n");
++				goto error_chan_cfg;
++			}
++		}
++
++		if (mhi_chan->db_cfg.brstmode == MHI_DB_BRST_ENABLE)
++			mhi_chan->db_cfg.process_db = mhi_db_brstmode;
++		else
++			mhi_chan->db_cfg.process_db = mhi_db_brstmode_disable;
++
++		mhi_chan->configured = true;
++
++		if (mhi_chan->lpm_notify)
++			list_add_tail(&mhi_chan->node, &mhi_cntrl->lpm_chans);
++	}
++
++	return 0;
++
++error_chan_cfg:
++	vfree(mhi_cntrl->mhi_chan);
++
++	return -EINVAL;
++}
++
++static int parse_config(struct mhi_controller *mhi_cntrl,
++			const struct mhi_controller_config *config)
++{
++	int ret;
++
++	/* Parse MHI channel configuration */
++	ret = parse_ch_cfg(mhi_cntrl, config);
++	if (ret)
++		return ret;
++
++	/* Parse MHI event configuration */
++	ret = parse_ev_cfg(mhi_cntrl, config);
++	if (ret)
++		goto error_ev_cfg;
++
++	mhi_cntrl->timeout_ms = config->timeout_ms;
++	if (!mhi_cntrl->timeout_ms)
++		mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
++
++	mhi_cntrl->bounce_buf = config->use_bounce_buf;
++	mhi_cntrl->buffer_len = config->buf_len;
++	if (!mhi_cntrl->buffer_len)
++		mhi_cntrl->buffer_len = MHI_MAX_MTU;
++
++	/* By default, host is allowed to ring DB in both M0 and M2 states */
++	mhi_cntrl->db_access = MHI_PM_M0 | MHI_PM_M2;
++	if (config->m2_no_db)
++		mhi_cntrl->db_access &= ~MHI_PM_M2;
++
++	return 0;
++
++error_ev_cfg:
++	vfree(mhi_cntrl->mhi_chan);
++
++	return ret;
++}
++
++int mhi_register_controller(struct mhi_controller *mhi_cntrl,
++			    const struct mhi_controller_config *config)
++{
++	struct mhi_event *mhi_event;
++	struct mhi_chan *mhi_chan;
++	struct mhi_cmd *mhi_cmd;
++	struct mhi_device *mhi_dev;
++	u32 soc_info;
++	int ret, i;
++
++	if (!mhi_cntrl)
++		return -EINVAL;
++
++	if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put ||
++	    !mhi_cntrl->status_cb || !mhi_cntrl->read_reg ||
++	    !mhi_cntrl->write_reg)
++		return -EINVAL;
++
++	ret = parse_config(mhi_cntrl, config);
++	if (ret)
++		return -EINVAL;
++
++	mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS,
++				     sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
++	if (!mhi_cntrl->mhi_cmd) {
++		ret = -ENOMEM;
++		goto error_alloc_cmd;
++	}
++
++	INIT_LIST_HEAD(&mhi_cntrl->transition_list);
++	mutex_init(&mhi_cntrl->pm_mutex);
++	rwlock_init(&mhi_cntrl->pm_lock);
++	spin_lock_init(&mhi_cntrl->transition_lock);
++	spin_lock_init(&mhi_cntrl->wlock);
++	INIT_WORK(&mhi_cntrl->st_worker, mhi_pm_st_worker);
++	init_waitqueue_head(&mhi_cntrl->state_event);
++
++	mhi_cmd = mhi_cntrl->mhi_cmd;
++	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++)
++		spin_lock_init(&mhi_cmd->lock);
++
++	mhi_event = mhi_cntrl->mhi_event;
++	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
++		/* Skip for offload events */
++		if (mhi_event->offload_ev)
++			continue;
++
++		mhi_event->mhi_cntrl = mhi_cntrl;
++		spin_lock_init(&mhi_event->lock);
++		if (mhi_event->data_type == MHI_ER_CTRL)
++			tasklet_init(&mhi_event->task, mhi_ctrl_ev_task,
++				     (ulong)mhi_event);
++		else
++			tasklet_init(&mhi_event->task, mhi_ev_task,
++				     (ulong)mhi_event);
++	}
++
++	mhi_chan = mhi_cntrl->mhi_chan;
++	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
++		mutex_init(&mhi_chan->mutex);
++		init_completion(&mhi_chan->completion);
++		rwlock_init(&mhi_chan->lock);
++
++		/* used in setting bei field of TRE */
++		mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
++		mhi_chan->intmod = mhi_event->intmod;
++	}
++
++	if (mhi_cntrl->bounce_buf) {
++		mhi_cntrl->map_single = mhi_map_single_use_bb;
++		mhi_cntrl->unmap_single = mhi_unmap_single_use_bb;
++	} else {
++		mhi_cntrl->map_single = mhi_map_single_no_bb;
++		mhi_cntrl->unmap_single = mhi_unmap_single_no_bb;
++	}
++
++	/* Read the MHI device info */
++	ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs,
++			   SOC_HW_VERSION_OFFS, &soc_info);
++	if (ret)
++		goto error_alloc_dev;
++
++	mhi_cntrl->family_number = (soc_info & SOC_HW_VERSION_FAM_NUM_BMSK) >>
++					SOC_HW_VERSION_FAM_NUM_SHFT;
++	mhi_cntrl->device_number = (soc_info & SOC_HW_VERSION_DEV_NUM_BMSK) >>
++					SOC_HW_VERSION_DEV_NUM_SHFT;
++	mhi_cntrl->major_version = (soc_info & SOC_HW_VERSION_MAJOR_VER_BMSK) >>
++					SOC_HW_VERSION_MAJOR_VER_SHFT;
++	mhi_cntrl->minor_version = (soc_info & SOC_HW_VERSION_MINOR_VER_BMSK) >>
++					SOC_HW_VERSION_MINOR_VER_SHFT;
++
++	/* Register controller with MHI bus */
++	mhi_dev = mhi_alloc_device(mhi_cntrl);
++	if (IS_ERR(mhi_dev)) {
++		dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate MHI device\n");
++		ret = PTR_ERR(mhi_dev);
++		goto error_alloc_dev;
++	}
++
++	mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
++	mhi_dev->mhi_cntrl = mhi_cntrl;
++	dev_set_name(&mhi_dev->dev, "%s", dev_name(mhi_cntrl->cntrl_dev));
++	mhi_dev->name = dev_name(mhi_cntrl->cntrl_dev);
++
++	/* Init wakeup source */
++	device_init_wakeup(&mhi_dev->dev, true);
++
++	ret = device_add(&mhi_dev->dev);
++	if (ret)
++		goto error_add_dev;
++
++	mhi_cntrl->mhi_dev = mhi_dev;
++
++	mhi_create_debugfs(mhi_cntrl);
++
++	return 0;
++
++error_add_dev:
++	put_device(&mhi_dev->dev);
++
++error_alloc_dev:
++	kfree(mhi_cntrl->mhi_cmd);
++
++error_alloc_cmd:
++	vfree(mhi_cntrl->mhi_chan);
++	kfree(mhi_cntrl->mhi_event);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(mhi_register_controller);
++
++void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
++{
++	struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
++	struct mhi_chan *mhi_chan = mhi_cntrl->mhi_chan;
++	unsigned int i;
++
++	mhi_destroy_debugfs(mhi_cntrl);
++
++	kfree(mhi_cntrl->mhi_cmd);
++	kfree(mhi_cntrl->mhi_event);
++
++	/* Drop the references to MHI devices created for channels */
++	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
++		if (!mhi_chan->mhi_dev)
++			continue;
++
++		put_device(&mhi_chan->mhi_dev->dev);
++	}
++	vfree(mhi_cntrl->mhi_chan);
++
++	device_del(&mhi_dev->dev);
++	put_device(&mhi_dev->dev);
++}
++EXPORT_SYMBOL_GPL(mhi_unregister_controller);
++
++struct mhi_controller *mhi_alloc_controller(void)
++{
++	struct mhi_controller *mhi_cntrl;
++
++	mhi_cntrl = kzalloc(sizeof(*mhi_cntrl), GFP_KERNEL);
++
++	return mhi_cntrl;
++}
++EXPORT_SYMBOL_GPL(mhi_alloc_controller);
++
++void mhi_free_controller(struct mhi_controller *mhi_cntrl)
++{
++	kfree(mhi_cntrl);
++}
++EXPORT_SYMBOL_GPL(mhi_free_controller);
++
++int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
++{
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	u32 bhie_off;
++	int ret;
++
++	mutex_lock(&mhi_cntrl->pm_mutex);
++
++	ret = mhi_init_dev_ctxt(mhi_cntrl);
++	if (ret)
++		goto error_dev_ctxt;
++
++	/*
++	 * Allocate RDDM table if specified, this table is for debugging purpose
++	 */
++	if (mhi_cntrl->rddm_size) {
++		mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->rddm_image,
++				     mhi_cntrl->rddm_size);
++
++		/*
++		 * This controller supports RDDM, so we need to manually clear
++		 * BHIE RX registers since POR values are undefined.
++		 */
++		ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF,
++				   &bhie_off);
++		if (ret) {
++			dev_err(dev, "Error getting BHIE offset\n");
++			goto bhie_error;
++		}
++
++		mhi_cntrl->bhie = mhi_cntrl->regs + bhie_off;
++		memset_io(mhi_cntrl->bhie + BHIE_RXVECADDR_LOW_OFFS,
++			  0, BHIE_RXVECSTATUS_OFFS - BHIE_RXVECADDR_LOW_OFFS +
++			  4);
++
++		if (mhi_cntrl->rddm_image)
++			mhi_rddm_prepare(mhi_cntrl, mhi_cntrl->rddm_image);
++	}
++
++	mhi_cntrl->pre_init = true;
++
++	mutex_unlock(&mhi_cntrl->pm_mutex);
++
++	return 0;
++
++bhie_error:
++	if (mhi_cntrl->rddm_image) {
++		mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->rddm_image);
++		mhi_cntrl->rddm_image = NULL;
++	}
++
++error_dev_ctxt:
++	mutex_unlock(&mhi_cntrl->pm_mutex);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(mhi_prepare_for_power_up);
++
++void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl)
++{
++	if (mhi_cntrl->fbc_image) {
++		mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
++		mhi_cntrl->fbc_image = NULL;
++	}
++
++	if (mhi_cntrl->rddm_image) {
++		mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->rddm_image);
++		mhi_cntrl->rddm_image = NULL;
++	}
++
++	mhi_deinit_dev_ctxt(mhi_cntrl);
++	mhi_cntrl->pre_init = false;
++}
++EXPORT_SYMBOL_GPL(mhi_unprepare_after_power_down);
++
++static void mhi_release_device(struct device *dev)
++{
++	struct mhi_device *mhi_dev = to_mhi_device(dev);
++
++	/*
++	 * We need to set the mhi_chan->mhi_dev to NULL here since the MHI
++	 * devices for the channels will only get created if the mhi_dev
++	 * associated with it is NULL. This scenario will happen during the
++	 * controller suspend and resume.
++	 */
++	if (mhi_dev->ul_chan)
++		mhi_dev->ul_chan->mhi_dev = NULL;
++
++	if (mhi_dev->dl_chan)
++		mhi_dev->dl_chan->mhi_dev = NULL;
++
++	kfree(mhi_dev);
++}
++
++struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
++{
++	struct mhi_device *mhi_dev;
++	struct device *dev;
++
++	mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
++	if (!mhi_dev)
++		return ERR_PTR(-ENOMEM);
++
++	dev = &mhi_dev->dev;
++	device_initialize(dev);
++	dev->bus = &mhi_bus_type;
++	dev->release = mhi_release_device;
++	dev->parent = mhi_cntrl->cntrl_dev;
++	mhi_dev->mhi_cntrl = mhi_cntrl;
++	mhi_dev->dev_wake = 0;
++
++	return mhi_dev;
++}
++
++static int mhi_driver_probe(struct device *dev)
++{
++	struct mhi_device *mhi_dev = to_mhi_device(dev);
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++	struct device_driver *drv = dev->driver;
++	struct mhi_driver *mhi_drv = to_mhi_driver(drv);
++	struct mhi_event *mhi_event;
++	struct mhi_chan *ul_chan = mhi_dev->ul_chan;
++	struct mhi_chan *dl_chan = mhi_dev->dl_chan;
++	int ret;
++
++	/* Bring device out of LPM */
++	ret = mhi_device_get_sync(mhi_dev);
++	if (ret)
++		return ret;
++
++	ret = -EINVAL;
++
++	if (ul_chan) {
++		/*
++		 * If channel supports LPM notifications then status_cb should
++		 * be provided
++		 */
++		if (ul_chan->lpm_notify && !mhi_drv->status_cb)
++			goto exit_probe;
++
++		/* For non-offload channels then xfer_cb should be provided */
++		if (!ul_chan->offload_ch && !mhi_drv->ul_xfer_cb)
++			goto exit_probe;
++
++		ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
++		if (ul_chan->auto_start) {
++			ret = mhi_prepare_channel(mhi_cntrl, ul_chan);
++			if (ret)
++				goto exit_probe;
++		}
++	}
++
++	ret = -EINVAL;
++	if (dl_chan) {
++		/*
++		 * If channel supports LPM notifications then status_cb should
++		 * be provided
++		 */
++		if (dl_chan->lpm_notify && !mhi_drv->status_cb)
++			goto exit_probe;
++
++		/* For non-offload channels then xfer_cb should be provided */
++		if (!dl_chan->offload_ch && !mhi_drv->dl_xfer_cb)
++			goto exit_probe;
++
++		mhi_event = &mhi_cntrl->mhi_event[dl_chan->er_index];
++
++		/*
++		 * If the channel event ring is managed by client, then
++		 * status_cb must be provided so that the framework can
++		 * notify pending data
++		 */
++		if (mhi_event->cl_manage && !mhi_drv->status_cb)
++			goto exit_probe;
++
++		dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
++	}
++
++	/* Call the user provided probe function */
++	ret = mhi_drv->probe(mhi_dev, mhi_dev->id);
++	if (ret)
++		goto exit_probe;
++
++	if (dl_chan && dl_chan->auto_start)
++		mhi_prepare_channel(mhi_cntrl, dl_chan);
++
++	mhi_device_put(mhi_dev);
++
++	return ret;
++
++exit_probe:
++	mhi_unprepare_from_transfer(mhi_dev);
++
++	mhi_device_put(mhi_dev);
++
++	return ret;
++}
++
++static int mhi_driver_remove(struct device *dev)
++{
++	struct mhi_device *mhi_dev = to_mhi_device(dev);
++	struct mhi_driver *mhi_drv = to_mhi_driver(dev->driver);
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++	struct mhi_chan *mhi_chan;
++	enum mhi_ch_state ch_state[] = {
++		MHI_CH_STATE_DISABLED,
++		MHI_CH_STATE_DISABLED
++	};
++	int dir;
++
++	/* Skip if it is a controller device */
++	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
++		return 0;
++
++	/* Reset both channels */
++	for (dir = 0; dir < 2; dir++) {
++		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
++
++		if (!mhi_chan)
++			continue;
++
++		/* Wake all threads waiting for completion */
++		write_lock_irq(&mhi_chan->lock);
++		mhi_chan->ccs = MHI_EV_CC_INVALID;
++		complete_all(&mhi_chan->completion);
++		write_unlock_irq(&mhi_chan->lock);
++
++		/* Set the channel state to disabled */
++		mutex_lock(&mhi_chan->mutex);
++		write_lock_irq(&mhi_chan->lock);
++		ch_state[dir] = mhi_chan->ch_state;
++		mhi_chan->ch_state = MHI_CH_STATE_SUSPENDED;
++		write_unlock_irq(&mhi_chan->lock);
++
++		/* Reset the non-offload channel */
++		if (!mhi_chan->offload_ch)
++			mhi_reset_chan(mhi_cntrl, mhi_chan);
++
++		mutex_unlock(&mhi_chan->mutex);
++	}
++
++	mhi_drv->remove(mhi_dev);
++
++	/* De-init channel if it was enabled */
++	for (dir = 0; dir < 2; dir++) {
++		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
++
++		if (!mhi_chan)
++			continue;
++
++		mutex_lock(&mhi_chan->mutex);
++
++		if ((ch_state[dir] == MHI_CH_STATE_ENABLED ||
++		     ch_state[dir] == MHI_CH_STATE_STOP) &&
++		    !mhi_chan->offload_ch)
++			mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
++
++		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
++
++		mutex_unlock(&mhi_chan->mutex);
++	}
++
++	while (mhi_dev->dev_wake)
++		mhi_device_put(mhi_dev);
++
++	return 0;
++}
++
++int __mhi_driver_register(struct mhi_driver *mhi_drv, struct module *owner)
++{
++	struct device_driver *driver = &mhi_drv->driver;
++
++	if (!mhi_drv->probe || !mhi_drv->remove)
++		return -EINVAL;
++
++	driver->bus = &mhi_bus_type;
++	driver->owner = owner;
++	driver->probe = mhi_driver_probe;
++	driver->remove = mhi_driver_remove;
++
++	return driver_register(driver);
++}
++EXPORT_SYMBOL_GPL(__mhi_driver_register);
++
++void mhi_driver_unregister(struct mhi_driver *mhi_drv)
++{
++	driver_unregister(&mhi_drv->driver);
++}
++EXPORT_SYMBOL_GPL(mhi_driver_unregister);
++
++static int mhi_uevent(struct device *dev, struct kobj_uevent_env *env)
++{
++	struct mhi_device *mhi_dev = to_mhi_device(dev);
++
++	return add_uevent_var(env, "MODALIAS=" MHI_DEVICE_MODALIAS_FMT,
++					mhi_dev->name);
++}
++
++static int mhi_match(struct device *dev, struct device_driver *drv)
++{
++	struct mhi_device *mhi_dev = to_mhi_device(dev);
++	struct mhi_driver *mhi_drv = to_mhi_driver(drv);
++	const struct mhi_device_id *id;
++
++	/*
++	 * If the device is a controller type then there is no client driver
++	 * associated with it
++	 */
++	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
++		return 0;
++
++	for (id = mhi_drv->id_table; id->chan[0]; id++)
++		if (!strcmp(mhi_dev->name, id->chan)) {
++			mhi_dev->id = id;
++			return 1;
++		}
++
++	return 0;
++};
++
++struct bus_type mhi_bus_type = {
++	.name = "mhi",
++	.dev_name = "mhi",
++	.match = mhi_match,
++	.uevent = mhi_uevent,
++	.dev_groups = mhi_dev_groups,
++};
++
++static int __init mhi_init(void)
++{
++	mhi_debugfs_init();
++	return bus_register(&mhi_bus_type);
++}
++
++static void __exit mhi_exit(void)
++{
++	mhi_debugfs_exit();
++	bus_unregister(&mhi_bus_type);
++}
++
++postcore_initcall(mhi_init);
++module_exit(mhi_exit);
++
++MODULE_LICENSE("GPL v2");
++MODULE_DESCRIPTION("MHI Host Interface");
+diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
+new file mode 100644
+index 0000000000000..7989269ddd963
+--- /dev/null
++++ b/drivers/bus/mhi/host/internal.h
+@@ -0,0 +1,722 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
++ *
++ */
++
++#ifndef _MHI_INT_H
++#define _MHI_INT_H
++
++#include <linux/mhi.h>
++
++extern struct bus_type mhi_bus_type;
++
++#define MHIREGLEN (0x0)
++#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
++#define MHIREGLEN_MHIREGLEN_SHIFT (0)
++
++#define MHIVER (0x8)
++#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
++#define MHIVER_MHIVER_SHIFT (0)
++
++#define MHICFG (0x10)
++#define MHICFG_NHWER_MASK (0xFF000000)
++#define MHICFG_NHWER_SHIFT (24)
++#define MHICFG_NER_MASK (0xFF0000)
++#define MHICFG_NER_SHIFT (16)
++#define MHICFG_NHWCH_MASK (0xFF00)
++#define MHICFG_NHWCH_SHIFT (8)
++#define MHICFG_NCH_MASK (0xFF)
++#define MHICFG_NCH_SHIFT (0)
++
++#define CHDBOFF (0x18)
++#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
++#define CHDBOFF_CHDBOFF_SHIFT (0)
++
++#define ERDBOFF (0x20)
++#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
++#define ERDBOFF_ERDBOFF_SHIFT (0)
++
++#define BHIOFF (0x28)
++#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
++#define BHIOFF_BHIOFF_SHIFT (0)
++
++#define BHIEOFF (0x2C)
++#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
++#define BHIEOFF_BHIEOFF_SHIFT (0)
++
++#define DEBUGOFF (0x30)
++#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
++#define DEBUGOFF_DEBUGOFF_SHIFT (0)
++
++#define MHICTRL (0x38)
++#define MHICTRL_MHISTATE_MASK (0x0000FF00)
++#define MHICTRL_MHISTATE_SHIFT (8)
++#define MHICTRL_RESET_MASK (0x2)
++#define MHICTRL_RESET_SHIFT (1)
++
++#define MHISTATUS (0x48)
++#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
++#define MHISTATUS_MHISTATE_SHIFT (8)
++#define MHISTATUS_SYSERR_MASK (0x4)
++#define MHISTATUS_SYSERR_SHIFT (2)
++#define MHISTATUS_READY_MASK (0x1)
++#define MHISTATUS_READY_SHIFT (0)
++
++#define CCABAP_LOWER (0x58)
++#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
++#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
++
++#define CCABAP_HIGHER (0x5C)
++#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
++#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
++
++#define ECABAP_LOWER (0x60)
++#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
++#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
++
++#define ECABAP_HIGHER (0x64)
++#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
++#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
++
++#define CRCBAP_LOWER (0x68)
++#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
++#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
++
++#define CRCBAP_HIGHER (0x6C)
++#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
++#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
++
++#define CRDB_LOWER (0x70)
++#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
++#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
++
++#define CRDB_HIGHER (0x74)
++#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
++#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
++
++#define MHICTRLBASE_LOWER (0x80)
++#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
++#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
++
++#define MHICTRLBASE_HIGHER (0x84)
++#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
++#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
++
++#define MHICTRLLIMIT_LOWER (0x88)
++#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
++#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
++
++#define MHICTRLLIMIT_HIGHER (0x8C)
++#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
++#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
++
++#define MHIDATABASE_LOWER (0x98)
++#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
++#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
++
++#define MHIDATABASE_HIGHER (0x9C)
++#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
++#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
++
++#define MHIDATALIMIT_LOWER (0xA0)
++#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
++#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
++
++#define MHIDATALIMIT_HIGHER (0xA4)
++#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
++#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
++
++/* Host request register */
++#define MHI_SOC_RESET_REQ_OFFSET (0xB0)
++#define MHI_SOC_RESET_REQ BIT(0)
++
++/* MHI BHI offfsets */
++#define BHI_BHIVERSION_MINOR (0x00)
++#define BHI_BHIVERSION_MAJOR (0x04)
++#define BHI_IMGADDR_LOW (0x08)
++#define BHI_IMGADDR_HIGH (0x0C)
++#define BHI_IMGSIZE (0x10)
++#define BHI_RSVD1 (0x14)
++#define BHI_IMGTXDB (0x18)
++#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
++#define BHI_TXDB_SEQNUM_SHFT (0)
++#define BHI_RSVD2 (0x1C)
++#define BHI_INTVEC (0x20)
++#define BHI_RSVD3 (0x24)
++#define BHI_EXECENV (0x28)
++#define BHI_STATUS (0x2C)
++#define BHI_ERRCODE (0x30)
++#define BHI_ERRDBG1 (0x34)
++#define BHI_ERRDBG2 (0x38)
++#define BHI_ERRDBG3 (0x3C)
++#define BHI_SERIALNU (0x40)
++#define BHI_SBLANTIROLLVER (0x44)
++#define BHI_NUMSEG (0x48)
++#define BHI_MSMHWID(n) (0x4C + (0x4 * n))
++#define BHI_OEMPKHASH(n) (0x64 + (0x4 * n))
++#define BHI_RSVD5 (0xC4)
++#define BHI_STATUS_MASK (0xC0000000)
++#define BHI_STATUS_SHIFT (30)
++#define BHI_STATUS_ERROR (3)
++#define BHI_STATUS_SUCCESS (2)
++#define BHI_STATUS_RESET (0)
++
++/* MHI BHIE offsets */
++#define BHIE_MSMSOCID_OFFS (0x0000)
++#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
++#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
++#define BHIE_TXVECSIZE_OFFS (0x0034)
++#define BHIE_TXVECDB_OFFS (0x003C)
++#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
++#define BHIE_TXVECDB_SEQNUM_SHFT (0)
++#define BHIE_TXVECSTATUS_OFFS (0x0044)
++#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
++#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
++#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
++#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
++#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
++#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
++#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
++#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
++#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
++#define BHIE_RXVECSIZE_OFFS (0x0068)
++#define BHIE_RXVECDB_OFFS (0x0070)
++#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
++#define BHIE_RXVECDB_SEQNUM_SHFT (0)
++#define BHIE_RXVECSTATUS_OFFS (0x0078)
++#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
++#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
++#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
++#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
++#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
++#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
++#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
++
++#define SOC_HW_VERSION_OFFS (0x224)
++#define SOC_HW_VERSION_FAM_NUM_BMSK (0xF0000000)
++#define SOC_HW_VERSION_FAM_NUM_SHFT (28)
++#define SOC_HW_VERSION_DEV_NUM_BMSK (0x0FFF0000)
++#define SOC_HW_VERSION_DEV_NUM_SHFT (16)
++#define SOC_HW_VERSION_MAJOR_VER_BMSK (0x0000FF00)
++#define SOC_HW_VERSION_MAJOR_VER_SHFT (8)
++#define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
++#define SOC_HW_VERSION_MINOR_VER_SHFT (0)
++
++#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
++#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
++#define EV_CTX_INTMODC_SHIFT 8
++#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
++#define EV_CTX_INTMODT_SHIFT 16
++struct mhi_event_ctxt {
++	__u32 intmod;
++	__u32 ertype;
++	__u32 msivec;
++
++	__u64 rbase __packed __aligned(4);
++	__u64 rlen __packed __aligned(4);
++	__u64 rp __packed __aligned(4);
++	__u64 wp __packed __aligned(4);
++};
++
++#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
++#define CHAN_CTX_CHSTATE_SHIFT 0
++#define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
++#define CHAN_CTX_BRSTMODE_SHIFT 8
++#define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
++#define CHAN_CTX_POLLCFG_SHIFT 10
++#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
++struct mhi_chan_ctxt {
++	__u32 chcfg;
++	__u32 chtype;
++	__u32 erindex;
++
++	__u64 rbase __packed __aligned(4);
++	__u64 rlen __packed __aligned(4);
++	__u64 rp __packed __aligned(4);
++	__u64 wp __packed __aligned(4);
++};
++
++struct mhi_cmd_ctxt {
++	__u32 reserved0;
++	__u32 reserved1;
++	__u32 reserved2;
++
++	__u64 rbase __packed __aligned(4);
++	__u64 rlen __packed __aligned(4);
++	__u64 rp __packed __aligned(4);
++	__u64 wp __packed __aligned(4);
++};
++
++struct mhi_ctxt {
++	struct mhi_event_ctxt *er_ctxt;
++	struct mhi_chan_ctxt *chan_ctxt;
++	struct mhi_cmd_ctxt *cmd_ctxt;
++	dma_addr_t er_ctxt_addr;
++	dma_addr_t chan_ctxt_addr;
++	dma_addr_t cmd_ctxt_addr;
++};
++
++struct mhi_tre {
++	u64 ptr;
++	u32 dword[2];
++};
++
++struct bhi_vec_entry {
++	u64 dma_addr;
++	u64 size;
++};
++
++enum mhi_cmd_type {
++	MHI_CMD_NOP = 1,
++	MHI_CMD_RESET_CHAN = 16,
++	MHI_CMD_STOP_CHAN = 17,
++	MHI_CMD_START_CHAN = 18,
++};
++
++/* No operation command */
++#define MHI_TRE_CMD_NOOP_PTR (0)
++#define MHI_TRE_CMD_NOOP_DWORD0 (0)
++#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
++
++/* Channel reset command */
++#define MHI_TRE_CMD_RESET_PTR (0)
++#define MHI_TRE_CMD_RESET_DWORD0 (0)
++#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
++					(MHI_CMD_RESET_CHAN << 16))
++
++/* Channel stop command */
++#define MHI_TRE_CMD_STOP_PTR (0)
++#define MHI_TRE_CMD_STOP_DWORD0 (0)
++#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
++				       (MHI_CMD_STOP_CHAN << 16))
++
++/* Channel start command */
++#define MHI_TRE_CMD_START_PTR (0)
++#define MHI_TRE_CMD_START_DWORD0 (0)
++#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
++					(MHI_CMD_START_CHAN << 16))
++
++#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
++#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
++
++/* Event descriptor macros */
++#define MHI_TRE_EV_PTR(ptr) (ptr)
++#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
++#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
++#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
++#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
++#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
++#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
++#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
++#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
++#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
++#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
++
++/* Transfer descriptor macros */
++#define MHI_TRE_DATA_PTR(ptr) (ptr)
++#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
++#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
++	| (ieot << 9) | (ieob << 8) | chain)
++
++/* RSC transfer descriptor macros */
++#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
++#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
++#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
++
++enum mhi_pkt_type {
++	MHI_PKT_TYPE_INVALID = 0x0,
++	MHI_PKT_TYPE_NOOP_CMD = 0x1,
++	MHI_PKT_TYPE_TRANSFER = 0x2,
++	MHI_PKT_TYPE_COALESCING = 0x8,
++	MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
++	MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
++	MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
++	MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
++	MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
++	MHI_PKT_TYPE_TX_EVENT = 0x22,
++	MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
++	MHI_PKT_TYPE_EE_EVENT = 0x40,
++	MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
++	MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
++	MHI_PKT_TYPE_STALE_EVENT, /* internal event */
++};
++
++/* MHI transfer completion events */
++enum mhi_ev_ccs {
++	MHI_EV_CC_INVALID = 0x0,
++	MHI_EV_CC_SUCCESS = 0x1,
++	MHI_EV_CC_EOT = 0x2, /* End of transfer event */
++	MHI_EV_CC_OVERFLOW = 0x3,
++	MHI_EV_CC_EOB = 0x4, /* End of block event */
++	MHI_EV_CC_OOB = 0x5, /* Out of block event */
++	MHI_EV_CC_DB_MODE = 0x6,
++	MHI_EV_CC_UNDEFINED_ERR = 0x10,
++	MHI_EV_CC_BAD_TRE = 0x11,
++};
++
++enum mhi_ch_state {
++	MHI_CH_STATE_DISABLED = 0x0,
++	MHI_CH_STATE_ENABLED = 0x1,
++	MHI_CH_STATE_RUNNING = 0x2,
++	MHI_CH_STATE_SUSPENDED = 0x3,
++	MHI_CH_STATE_STOP = 0x4,
++	MHI_CH_STATE_ERROR = 0x5,
++};
++
++#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
++				    mode != MHI_DB_BRST_ENABLE)
++
++extern const char * const mhi_ee_str[MHI_EE_MAX];
++#define TO_MHI_EXEC_STR(ee) (((ee) >= MHI_EE_MAX) ? \
++			     "INVALID_EE" : mhi_ee_str[ee])
++
++#define MHI_IN_PBL(ee) (ee == MHI_EE_PBL || ee == MHI_EE_PTHRU || \
++			ee == MHI_EE_EDL)
++
++#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW)
++
++enum dev_st_transition {
++	DEV_ST_TRANSITION_PBL,
++	DEV_ST_TRANSITION_READY,
++	DEV_ST_TRANSITION_SBL,
++	DEV_ST_TRANSITION_MISSION_MODE,
++	DEV_ST_TRANSITION_SYS_ERR,
++	DEV_ST_TRANSITION_DISABLE,
++	DEV_ST_TRANSITION_MAX,
++};
++
++extern const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX];
++#define TO_DEV_STATE_TRANS_STR(state) (((state) >= DEV_ST_TRANSITION_MAX) ? \
++				"INVALID_STATE" : dev_state_tran_str[state])
++
++extern const char * const mhi_state_str[MHI_STATE_MAX];
++#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
++				  !mhi_state_str[state]) ? \
++				"INVALID_STATE" : mhi_state_str[state])
++
++/* internal power states */
++enum mhi_pm_state {
++	MHI_PM_STATE_DISABLE,
++	MHI_PM_STATE_POR,
++	MHI_PM_STATE_M0,
++	MHI_PM_STATE_M2,
++	MHI_PM_STATE_M3_ENTER,
++	MHI_PM_STATE_M3,
++	MHI_PM_STATE_M3_EXIT,
++	MHI_PM_STATE_FW_DL_ERR,
++	MHI_PM_STATE_SYS_ERR_DETECT,
++	MHI_PM_STATE_SYS_ERR_PROCESS,
++	MHI_PM_STATE_SHUTDOWN_PROCESS,
++	MHI_PM_STATE_LD_ERR_FATAL_DETECT,
++	MHI_PM_STATE_MAX
++};
++
++#define MHI_PM_DISABLE			BIT(0)
++#define MHI_PM_POR			BIT(1)
++#define MHI_PM_M0			BIT(2)
++#define MHI_PM_M2			BIT(3)
++#define MHI_PM_M3_ENTER			BIT(4)
++#define MHI_PM_M3			BIT(5)
++#define MHI_PM_M3_EXIT			BIT(6)
++/* firmware download failure state */
++#define MHI_PM_FW_DL_ERR		BIT(7)
++#define MHI_PM_SYS_ERR_DETECT		BIT(8)
++#define MHI_PM_SYS_ERR_PROCESS		BIT(9)
++#define MHI_PM_SHUTDOWN_PROCESS		BIT(10)
++/* link not accessible */
++#define MHI_PM_LD_ERR_FATAL_DETECT	BIT(11)
++
++#define MHI_REG_ACCESS_VALID(pm_state) ((pm_state & (MHI_PM_POR | MHI_PM_M0 | \
++		MHI_PM_M2 | MHI_PM_M3_ENTER | MHI_PM_M3_EXIT | \
++		MHI_PM_SYS_ERR_DETECT | MHI_PM_SYS_ERR_PROCESS | \
++		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_FW_DL_ERR)))
++#define MHI_PM_IN_ERROR_STATE(pm_state) (pm_state >= MHI_PM_FW_DL_ERR)
++#define MHI_PM_IN_FATAL_STATE(pm_state) (pm_state == MHI_PM_LD_ERR_FATAL_DETECT)
++#define MHI_DB_ACCESS_VALID(mhi_cntrl) (mhi_cntrl->pm_state & \
++					mhi_cntrl->db_access)
++#define MHI_WAKE_DB_CLEAR_VALID(pm_state) (pm_state & (MHI_PM_M0 | \
++						MHI_PM_M2 | MHI_PM_M3_EXIT))
++#define MHI_WAKE_DB_SET_VALID(pm_state) (pm_state & MHI_PM_M2)
++#define MHI_WAKE_DB_FORCE_SET_VALID(pm_state) MHI_WAKE_DB_CLEAR_VALID(pm_state)
++#define MHI_EVENT_ACCESS_INVALID(pm_state) (pm_state == MHI_PM_DISABLE || \
++					    MHI_PM_IN_ERROR_STATE(pm_state))
++#define MHI_PM_IN_SUSPEND_STATE(pm_state) (pm_state & \
++					   (MHI_PM_M3_ENTER | MHI_PM_M3))
++
++#define NR_OF_CMD_RINGS			1
++#define CMD_EL_PER_RING			128
++#define PRIMARY_CMD_RING		0
++#define MHI_DEV_WAKE_DB			127
++#define MHI_MAX_MTU			0xffff
++#define MHI_RANDOM_U32_NONZERO(bmsk)	(prandom_u32_max(bmsk) + 1)
++
++enum mhi_er_type {
++	MHI_ER_TYPE_INVALID = 0x0,
++	MHI_ER_TYPE_VALID = 0x1,
++};
++
++struct db_cfg {
++	bool reset_req;
++	bool db_mode;
++	u32 pollcfg;
++	enum mhi_db_brst_mode brstmode;
++	dma_addr_t db_val;
++	void (*process_db)(struct mhi_controller *mhi_cntrl,
++			   struct db_cfg *db_cfg, void __iomem *io_addr,
++			   dma_addr_t db_val);
++};
++
++struct mhi_pm_transitions {
++	enum mhi_pm_state from_state;
++	u32 to_states;
++};
++
++struct state_transition {
++	struct list_head node;
++	enum dev_st_transition state;
++};
++
++struct mhi_ring {
++	dma_addr_t dma_handle;
++	dma_addr_t iommu_base;
++	u64 *ctxt_wp; /* point to ctxt wp */
++	void *pre_aligned;
++	void *base;
++	void *rp;
++	void *wp;
++	size_t el_size;
++	size_t len;
++	size_t elements;
++	size_t alloc_size;
++	void __iomem *db_addr;
++};
++
++struct mhi_cmd {
++	struct mhi_ring ring;
++	spinlock_t lock;
++};
++
++struct mhi_buf_info {
++	void *v_addr;
++	void *bb_addr;
++	void *wp;
++	void *cb_buf;
++	dma_addr_t p_addr;
++	size_t len;
++	enum dma_data_direction dir;
++	bool used; /* Indicates whether the buffer is used or not */
++	bool pre_mapped; /* Already pre-mapped by client */
++};
++
++struct mhi_event {
++	struct mhi_controller *mhi_cntrl;
++	struct mhi_chan *mhi_chan; /* dedicated to channel */
++	u32 er_index;
++	u32 intmod;
++	u32 irq;
++	int chan; /* this event ring is dedicated to a channel (optional) */
++	u32 priority;
++	enum mhi_er_data_type data_type;
++	struct mhi_ring ring;
++	struct db_cfg db_cfg;
++	struct tasklet_struct task;
++	spinlock_t lock;
++	int (*process_event)(struct mhi_controller *mhi_cntrl,
++			     struct mhi_event *mhi_event,
++			     u32 event_quota);
++	bool hw_ring;
++	bool cl_manage;
++	bool offload_ev; /* managed by a device driver */
++};
++
++struct mhi_chan {
++	const char *name;
++	/*
++	 * Important: When consuming, increment tre_ring first and when
++	 * releasing, decrement buf_ring first. If tre_ring has space, buf_ring
++	 * is guranteed to have space so we do not need to check both rings.
++	 */
++	struct mhi_ring buf_ring;
++	struct mhi_ring tre_ring;
++	u32 chan;
++	u32 er_index;
++	u32 intmod;
++	enum mhi_ch_type type;
++	enum dma_data_direction dir;
++	struct db_cfg db_cfg;
++	enum mhi_ch_ee_mask ee_mask;
++	enum mhi_ch_state ch_state;
++	enum mhi_ev_ccs ccs;
++	struct mhi_device *mhi_dev;
++	void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result);
++	struct mutex mutex;
++	struct completion completion;
++	rwlock_t lock;
++	struct list_head node;
++	bool lpm_notify;
++	bool configured;
++	bool offload_ch;
++	bool pre_alloc;
++	bool auto_start;
++	bool wake_capable;
++};
++
++/* Default MHI timeout */
++#define MHI_TIMEOUT_MS (1000)
++
++/* debugfs related functions */
++#ifdef CONFIG_MHI_BUS_DEBUG
++void mhi_create_debugfs(struct mhi_controller *mhi_cntrl);
++void mhi_destroy_debugfs(struct mhi_controller *mhi_cntrl);
++void mhi_debugfs_init(void);
++void mhi_debugfs_exit(void);
++#else
++static inline void mhi_create_debugfs(struct mhi_controller *mhi_cntrl)
++{
++}
++
++static inline void mhi_destroy_debugfs(struct mhi_controller *mhi_cntrl)
++{
++}
++
++static inline void mhi_debugfs_init(void)
++{
++}
++
++static inline void mhi_debugfs_exit(void)
++{
++}
++#endif
++
++struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
++
++int mhi_destroy_device(struct device *dev, void *data);
++void mhi_create_devices(struct mhi_controller *mhi_cntrl);
++
++int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
++			 struct image_info **image_info, size_t alloc_size);
++void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
++			 struct image_info *image_info);
++
++/* Power management APIs */
++enum mhi_pm_state __must_check mhi_tryset_pm_state(
++					struct mhi_controller *mhi_cntrl,
++					enum mhi_pm_state state);
++const char *to_mhi_pm_state_str(enum mhi_pm_state state);
++enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl);
++int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
++			       enum dev_st_transition state);
++void mhi_pm_st_worker(struct work_struct *work);
++void mhi_pm_sys_err_handler(struct mhi_controller *mhi_cntrl);
++void mhi_fw_load_worker(struct work_struct *work);
++int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl);
++int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
++void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl);
++int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
++int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
++int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
++		 enum mhi_cmd_type cmd);
++static inline bool mhi_is_active(struct mhi_controller *mhi_cntrl)
++{
++	return (mhi_cntrl->dev_state >= MHI_STATE_M0 &&
++		mhi_cntrl->dev_state <= MHI_STATE_M3_FAST);
++}
++
++static inline void mhi_trigger_resume(struct mhi_controller *mhi_cntrl)
++{
++	pm_wakeup_event(&mhi_cntrl->mhi_dev->dev, 0);
++	mhi_cntrl->runtime_get(mhi_cntrl);
++	mhi_cntrl->runtime_put(mhi_cntrl);
++}
++
++/* Register access methods */
++void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
++		     void __iomem *db_addr, dma_addr_t db_val);
++void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
++			     struct db_cfg *db_mode, void __iomem *db_addr,
++			     dma_addr_t db_val);
++int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
++			      void __iomem *base, u32 offset, u32 *out);
++int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
++				    void __iomem *base, u32 offset, u32 mask,
++				    u32 shift, u32 *out);
++void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
++		   u32 offset, u32 val);
++void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
++			 u32 offset, u32 mask, u32 shift, u32 val);
++void mhi_ring_er_db(struct mhi_event *mhi_event);
++void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
++		  dma_addr_t db_val);
++void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd);
++void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
++		      struct mhi_chan *mhi_chan);
++
++/* Initialization methods */
++int mhi_init_mmio(struct mhi_controller *mhi_cntrl);
++int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl);
++void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl);
++int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
++void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
++void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
++		      struct image_info *img_info);
++void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl);
++int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
++			struct mhi_chan *mhi_chan);
++int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
++		       struct mhi_chan *mhi_chan);
++void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
++			  struct mhi_chan *mhi_chan);
++void mhi_reset_chan(struct mhi_controller *mhi_cntrl,
++		    struct mhi_chan *mhi_chan);
++
++/* Memory allocation methods */
++static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl,
++				       size_t size,
++				       dma_addr_t *dma_handle,
++				       gfp_t gfp)
++{
++	void *buf = dma_alloc_coherent(mhi_cntrl->cntrl_dev, size, dma_handle,
++				       gfp);
++
++	return buf;
++}
++
++static inline void mhi_free_coherent(struct mhi_controller *mhi_cntrl,
++				     size_t size,
++				     void *vaddr,
++				     dma_addr_t dma_handle)
++{
++	dma_free_coherent(mhi_cntrl->cntrl_dev, size, vaddr, dma_handle);
++}
++
++/* Event processing methods */
++void mhi_ctrl_ev_task(unsigned long data);
++void mhi_ev_task(unsigned long data);
++int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
++				struct mhi_event *mhi_event, u32 event_quota);
++int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
++			     struct mhi_event *mhi_event, u32 event_quota);
++
++/* ISR handlers */
++irqreturn_t mhi_irq_handler(int irq_number, void *dev);
++irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev);
++irqreturn_t mhi_intvec_handler(int irq_number, void *dev);
++
++int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
++		struct mhi_buf_info *info, enum mhi_flags flags);
++int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
++			 struct mhi_buf_info *buf_info);
++int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
++			  struct mhi_buf_info *buf_info);
++void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
++			    struct mhi_buf_info *buf_info);
++void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
++			     struct mhi_buf_info *buf_info);
++
++#endif /* _MHI_INT_H */
+diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
+new file mode 100644
+index 0000000000000..614dd287cb4ff
+--- /dev/null
++++ b/drivers/bus/mhi/host/main.c
+@@ -0,0 +1,1630 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
++ *
++ */
++
++#include <linux/device.h>
++#include <linux/dma-direction.h>
++#include <linux/dma-mapping.h>
++#include <linux/interrupt.h>
++#include <linux/list.h>
++#include <linux/mhi.h>
++#include <linux/module.h>
++#include <linux/skbuff.h>
++#include <linux/slab.h>
++#include "internal.h"
++
++int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
++			      void __iomem *base, u32 offset, u32 *out)
++{
++	return mhi_cntrl->read_reg(mhi_cntrl, base + offset, out);
++}
++
++int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
++				    void __iomem *base, u32 offset,
++				    u32 mask, u32 shift, u32 *out)
++{
++	u32 tmp;
++	int ret;
++
++	ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
++	if (ret)
++		return ret;
++
++	*out = (tmp & mask) >> shift;
++
++	return 0;
++}
++
++void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
++		   u32 offset, u32 val)
++{
++	mhi_cntrl->write_reg(mhi_cntrl, base + offset, val);
++}
++
++void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
++			 u32 offset, u32 mask, u32 shift, u32 val)
++{
++	int ret;
++	u32 tmp;
++
++	ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
++	if (ret)
++		return;
++
++	tmp &= ~mask;
++	tmp |= (val << shift);
++	mhi_write_reg(mhi_cntrl, base, offset, tmp);
++}
++
++void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
++		  dma_addr_t db_val)
++{
++	mhi_write_reg(mhi_cntrl, db_addr, 4, upper_32_bits(db_val));
++	mhi_write_reg(mhi_cntrl, db_addr, 0, lower_32_bits(db_val));
++}
++
++void mhi_db_brstmode(struct mhi_controller *mhi_cntrl,
++		     struct db_cfg *db_cfg,
++		     void __iomem *db_addr,
++		     dma_addr_t db_val)
++{
++	if (db_cfg->db_mode) {
++		db_cfg->db_val = db_val;
++		mhi_write_db(mhi_cntrl, db_addr, db_val);
++		db_cfg->db_mode = 0;
++	}
++}
++
++void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
++			     struct db_cfg *db_cfg,
++			     void __iomem *db_addr,
++			     dma_addr_t db_val)
++{
++	db_cfg->db_val = db_val;
++	mhi_write_db(mhi_cntrl, db_addr, db_val);
++}
++
++void mhi_ring_er_db(struct mhi_event *mhi_event)
++{
++	struct mhi_ring *ring = &mhi_event->ring;
++
++	mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
++				     ring->db_addr, *ring->ctxt_wp);
++}
++
++void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
++{
++	dma_addr_t db;
++	struct mhi_ring *ring = &mhi_cmd->ring;
++
++	db = ring->iommu_base + (ring->wp - ring->base);
++	*ring->ctxt_wp = db;
++	mhi_write_db(mhi_cntrl, ring->db_addr, db);
++}
++
++void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
++		      struct mhi_chan *mhi_chan)
++{
++	struct mhi_ring *ring = &mhi_chan->tre_ring;
++	dma_addr_t db;
++
++	db = ring->iommu_base + (ring->wp - ring->base);
++	*ring->ctxt_wp = db;
++	mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
++				    ring->db_addr, db);
++}
++
++enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl)
++{
++	u32 exec;
++	int ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_EXECENV, &exec);
++
++	return (ret) ? MHI_EE_MAX : exec;
++}
++
++enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl)
++{
++	u32 state;
++	int ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
++				     MHISTATUS_MHISTATE_MASK,
++				     MHISTATUS_MHISTATE_SHIFT, &state);
++	return ret ? MHI_STATE_MAX : state;
++}
++
++int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
++			 struct mhi_buf_info *buf_info)
++{
++	buf_info->p_addr = dma_map_single(mhi_cntrl->cntrl_dev,
++					  buf_info->v_addr, buf_info->len,
++					  buf_info->dir);
++	if (dma_mapping_error(mhi_cntrl->cntrl_dev, buf_info->p_addr))
++		return -ENOMEM;
++
++	return 0;
++}
++
++int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
++			  struct mhi_buf_info *buf_info)
++{
++	void *buf = mhi_alloc_coherent(mhi_cntrl, buf_info->len,
++				       &buf_info->p_addr, GFP_ATOMIC);
++
++	if (!buf)
++		return -ENOMEM;
++
++	if (buf_info->dir == DMA_TO_DEVICE)
++		memcpy(buf, buf_info->v_addr, buf_info->len);
++
++	buf_info->bb_addr = buf;
++
++	return 0;
++}
++
++void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
++			    struct mhi_buf_info *buf_info)
++{
++	dma_unmap_single(mhi_cntrl->cntrl_dev, buf_info->p_addr, buf_info->len,
++			 buf_info->dir);
++}
++
++void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
++			     struct mhi_buf_info *buf_info)
++{
++	if (buf_info->dir == DMA_FROM_DEVICE)
++		memcpy(buf_info->v_addr, buf_info->bb_addr, buf_info->len);
++
++	mhi_free_coherent(mhi_cntrl, buf_info->len, buf_info->bb_addr,
++			  buf_info->p_addr);
++}
++
++static int get_nr_avail_ring_elements(struct mhi_controller *mhi_cntrl,
++				      struct mhi_ring *ring)
++{
++	int nr_el;
++
++	if (ring->wp < ring->rp) {
++		nr_el = ((ring->rp - ring->wp) / ring->el_size) - 1;
++	} else {
++		nr_el = (ring->rp - ring->base) / ring->el_size;
++		nr_el += ((ring->base + ring->len - ring->wp) /
++			  ring->el_size) - 1;
++	}
++
++	return nr_el;
++}
++
++static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr)
++{
++	return (addr - ring->iommu_base) + ring->base;
++}
++
++static void mhi_add_ring_element(struct mhi_controller *mhi_cntrl,
++				 struct mhi_ring *ring)
++{
++	ring->wp += ring->el_size;
++	if (ring->wp >= (ring->base + ring->len))
++		ring->wp = ring->base;
++	/* smp update */
++	smp_wmb();
++}
++
++static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
++				 struct mhi_ring *ring)
++{
++	ring->rp += ring->el_size;
++	if (ring->rp >= (ring->base + ring->len))
++		ring->rp = ring->base;
++	/* smp update */
++	smp_wmb();
++}
++
++static bool is_valid_ring_ptr(struct mhi_ring *ring, dma_addr_t addr)
++{
++	return addr >= ring->iommu_base && addr < ring->iommu_base + ring->len;
++}
++
++int mhi_destroy_device(struct device *dev, void *data)
++{
++	struct mhi_chan *ul_chan, *dl_chan;
++	struct mhi_device *mhi_dev;
++	struct mhi_controller *mhi_cntrl;
++	enum mhi_ee_type ee = MHI_EE_MAX;
++
++	if (dev->bus != &mhi_bus_type)
++		return 0;
++
++	mhi_dev = to_mhi_device(dev);
++	mhi_cntrl = mhi_dev->mhi_cntrl;
++
++	/* Only destroy virtual devices thats attached to bus */
++	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
++		return 0;
++
++	ul_chan = mhi_dev->ul_chan;
++	dl_chan = mhi_dev->dl_chan;
++
++	/*
++	 * If execution environment is specified, remove only those devices that
++	 * started in them based on ee_mask for the channels as we move on to a
++	 * different execution environment
++	 */
++	if (data)
++		ee = *(enum mhi_ee_type *)data;
++
++	/*
++	 * For the suspend and resume case, this function will get called
++	 * without mhi_unregister_controller(). Hence, we need to drop the
++	 * references to mhi_dev created for ul and dl channels. We can
++	 * be sure that there will be no instances of mhi_dev left after
++	 * this.
++	 */
++	if (ul_chan) {
++		if (ee != MHI_EE_MAX && !(ul_chan->ee_mask & BIT(ee)))
++			return 0;
++
++		put_device(&ul_chan->mhi_dev->dev);
++	}
++
++	if (dl_chan) {
++		if (ee != MHI_EE_MAX && !(dl_chan->ee_mask & BIT(ee)))
++			return 0;
++
++		put_device(&dl_chan->mhi_dev->dev);
++	}
++
++	dev_dbg(&mhi_cntrl->mhi_dev->dev, "destroy device for chan:%s\n",
++		 mhi_dev->name);
++
++	/* Notify the client and remove the device from MHI bus */
++	device_del(dev);
++	put_device(dev);
++
++	return 0;
++}
++
++void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason)
++{
++	struct mhi_driver *mhi_drv;
++
++	if (!mhi_dev->dev.driver)
++		return;
++
++	mhi_drv = to_mhi_driver(mhi_dev->dev.driver);
++
++	if (mhi_drv->status_cb)
++		mhi_drv->status_cb(mhi_dev, cb_reason);
++}
++EXPORT_SYMBOL_GPL(mhi_notify);
++
++/* Bind MHI channels to MHI devices */
++void mhi_create_devices(struct mhi_controller *mhi_cntrl)
++{
++	struct mhi_chan *mhi_chan;
++	struct mhi_device *mhi_dev;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	int i, ret;
++
++	mhi_chan = mhi_cntrl->mhi_chan;
++	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
++		if (!mhi_chan->configured || mhi_chan->mhi_dev ||
++		    !(mhi_chan->ee_mask & BIT(mhi_cntrl->ee)))
++			continue;
++		mhi_dev = mhi_alloc_device(mhi_cntrl);
++		if (IS_ERR(mhi_dev))
++			return;
++
++		mhi_dev->dev_type = MHI_DEVICE_XFER;
++		switch (mhi_chan->dir) {
++		case DMA_TO_DEVICE:
++			mhi_dev->ul_chan = mhi_chan;
++			mhi_dev->ul_chan_id = mhi_chan->chan;
++			break;
++		case DMA_FROM_DEVICE:
++			/* We use dl_chan as offload channels */
++			mhi_dev->dl_chan = mhi_chan;
++			mhi_dev->dl_chan_id = mhi_chan->chan;
++			break;
++		default:
++			dev_err(dev, "Direction not supported\n");
++			put_device(&mhi_dev->dev);
++			return;
++		}
++
++		get_device(&mhi_dev->dev);
++		mhi_chan->mhi_dev = mhi_dev;
++
++		/* Check next channel if it matches */
++		if ((i + 1) < mhi_cntrl->max_chan && mhi_chan[1].configured) {
++			if (!strcmp(mhi_chan[1].name, mhi_chan->name)) {
++				i++;
++				mhi_chan++;
++				if (mhi_chan->dir == DMA_TO_DEVICE) {
++					mhi_dev->ul_chan = mhi_chan;
++					mhi_dev->ul_chan_id = mhi_chan->chan;
++				} else {
++					mhi_dev->dl_chan = mhi_chan;
++					mhi_dev->dl_chan_id = mhi_chan->chan;
++				}
++				get_device(&mhi_dev->dev);
++				mhi_chan->mhi_dev = mhi_dev;
++			}
++		}
++
++		/* Channel name is same for both UL and DL */
++		mhi_dev->name = mhi_chan->name;
++		dev_set_name(&mhi_dev->dev, "%s_%s",
++			     dev_name(mhi_cntrl->cntrl_dev),
++			     mhi_dev->name);
++
++		/* Init wakeup source if available */
++		if (mhi_dev->dl_chan && mhi_dev->dl_chan->wake_capable)
++			device_init_wakeup(&mhi_dev->dev, true);
++
++		ret = device_add(&mhi_dev->dev);
++		if (ret)
++			put_device(&mhi_dev->dev);
++	}
++}
++
++irqreturn_t mhi_irq_handler(int irq_number, void *dev)
++{
++	struct mhi_event *mhi_event = dev;
++	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
++	struct mhi_event_ctxt *er_ctxt =
++		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
++	struct mhi_ring *ev_ring = &mhi_event->ring;
++	dma_addr_t ptr = er_ctxt->rp;
++	void *dev_rp;
++
++	if (!is_valid_ring_ptr(ev_ring, ptr)) {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Event ring rp points outside of the event ring\n");
++		return IRQ_HANDLED;
++	}
++
++	dev_rp = mhi_to_virtual(ev_ring, ptr);
++
++	/* Only proceed if event ring has pending events */
++	if (ev_ring->rp == dev_rp)
++		return IRQ_HANDLED;
++
++	/* For client managed event ring, notify pending data */
++	if (mhi_event->cl_manage) {
++		struct mhi_chan *mhi_chan = mhi_event->mhi_chan;
++		struct mhi_device *mhi_dev = mhi_chan->mhi_dev;
++
++		if (mhi_dev)
++			mhi_notify(mhi_dev, MHI_CB_PENDING_DATA);
++	} else {
++		tasklet_schedule(&mhi_event->task);
++	}
++
++	return IRQ_HANDLED;
++}
++
++irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
++{
++	struct mhi_controller *mhi_cntrl = priv;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	enum mhi_state state = MHI_STATE_MAX;
++	enum mhi_pm_state pm_state = 0;
++	enum mhi_ee_type ee = 0;
++
++	write_lock_irq(&mhi_cntrl->pm_lock);
++	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
++		write_unlock_irq(&mhi_cntrl->pm_lock);
++		goto exit_intvec;
++	}
++
++	state = mhi_get_mhi_state(mhi_cntrl);
++	ee = mhi_cntrl->ee;
++	mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
++	dev_dbg(dev, "local ee:%s device ee:%s dev_state:%s\n",
++		TO_MHI_EXEC_STR(mhi_cntrl->ee), TO_MHI_EXEC_STR(ee),
++		TO_MHI_STATE_STR(state));
++
++	if (state == MHI_STATE_SYS_ERR) {
++		dev_dbg(dev, "System error detected\n");
++		pm_state = mhi_tryset_pm_state(mhi_cntrl,
++					       MHI_PM_SYS_ERR_DETECT);
++	}
++	write_unlock_irq(&mhi_cntrl->pm_lock);
++
++	 /* If device supports RDDM don't bother processing SYS error */
++	if (mhi_cntrl->rddm_image) {
++		if (mhi_cntrl->ee == MHI_EE_RDDM && mhi_cntrl->ee != ee) {
++			mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_RDDM);
++			wake_up_all(&mhi_cntrl->state_event);
++		}
++		goto exit_intvec;
++	}
++
++	if (pm_state == MHI_PM_SYS_ERR_DETECT) {
++		wake_up_all(&mhi_cntrl->state_event);
++
++		/* For fatal errors, we let controller decide next step */
++		if (MHI_IN_PBL(ee))
++			mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_FATAL_ERROR);
++		else
++			mhi_pm_sys_err_handler(mhi_cntrl);
++	}
++
++exit_intvec:
++
++	return IRQ_HANDLED;
++}
++
++irqreturn_t mhi_intvec_handler(int irq_number, void *dev)
++{
++	struct mhi_controller *mhi_cntrl = dev;
++
++	/* Wake up events waiting for state change */
++	wake_up_all(&mhi_cntrl->state_event);
++
++	return IRQ_WAKE_THREAD;
++}
++
++static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
++					struct mhi_ring *ring)
++{
++	dma_addr_t ctxt_wp;
++
++	/* Update the WP */
++	ring->wp += ring->el_size;
++	ctxt_wp = *ring->ctxt_wp + ring->el_size;
++
++	if (ring->wp >= (ring->base + ring->len)) {
++		ring->wp = ring->base;
++		ctxt_wp = ring->iommu_base;
++	}
++
++	*ring->ctxt_wp = ctxt_wp;
++
++	/* Update the RP */
++	ring->rp += ring->el_size;
++	if (ring->rp >= (ring->base + ring->len))
++		ring->rp = ring->base;
++
++	/* Update to all cores */
++	smp_wmb();
++}
++
++static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
++			    struct mhi_tre *event,
++			    struct mhi_chan *mhi_chan)
++{
++	struct mhi_ring *buf_ring, *tre_ring;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	struct mhi_result result;
++	unsigned long flags = 0;
++	u32 ev_code;
++
++	ev_code = MHI_TRE_GET_EV_CODE(event);
++	buf_ring = &mhi_chan->buf_ring;
++	tre_ring = &mhi_chan->tre_ring;
++
++	result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
++		-EOVERFLOW : 0;
++
++	/*
++	 * If it's a DB Event then we need to grab the lock
++	 * with preemption disabled and as a write because we
++	 * have to update db register and there are chances that
++	 * another thread could be doing the same.
++	 */
++	if (ev_code >= MHI_EV_CC_OOB)
++		write_lock_irqsave(&mhi_chan->lock, flags);
++	else
++		read_lock_bh(&mhi_chan->lock);
++
++	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
++		goto end_process_tx_event;
++
++	switch (ev_code) {
++	case MHI_EV_CC_OVERFLOW:
++	case MHI_EV_CC_EOB:
++	case MHI_EV_CC_EOT:
++	{
++		dma_addr_t ptr = MHI_TRE_GET_EV_PTR(event);
++		struct mhi_tre *local_rp, *ev_tre;
++		void *dev_rp;
++		struct mhi_buf_info *buf_info;
++		u16 xfer_len;
++
++		if (!is_valid_ring_ptr(tre_ring, ptr)) {
++			dev_err(&mhi_cntrl->mhi_dev->dev,
++				"Event element points outside of the tre ring\n");
++			break;
++		}
++		/* Get the TRB this event points to */
++		ev_tre = mhi_to_virtual(tre_ring, ptr);
++
++		dev_rp = ev_tre + 1;
++		if (dev_rp >= (tre_ring->base + tre_ring->len))
++			dev_rp = tre_ring->base;
++
++		result.dir = mhi_chan->dir;
++
++		local_rp = tre_ring->rp;
++		while (local_rp != dev_rp) {
++			buf_info = buf_ring->rp;
++			/* If it's the last TRE, get length from the event */
++			if (local_rp == ev_tre)
++				xfer_len = MHI_TRE_GET_EV_LEN(event);
++			else
++				xfer_len = buf_info->len;
++
++			/* Unmap if it's not pre-mapped by client */
++			if (likely(!buf_info->pre_mapped))
++				mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
++
++			result.buf_addr = buf_info->cb_buf;
++
++			/* truncate to buf len if xfer_len is larger */
++			result.bytes_xferd =
++				min_t(u16, xfer_len, buf_info->len);
++			mhi_del_ring_element(mhi_cntrl, buf_ring);
++			mhi_del_ring_element(mhi_cntrl, tre_ring);
++			local_rp = tre_ring->rp;
++
++			/* notify client */
++			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
++
++			if (mhi_chan->dir == DMA_TO_DEVICE)
++				atomic_dec(&mhi_cntrl->pending_pkts);
++
++			/*
++			 * Recycle the buffer if buffer is pre-allocated,
++			 * if there is an error, not much we can do apart
++			 * from dropping the packet
++			 */
++			if (mhi_chan->pre_alloc) {
++				if (mhi_queue_buf(mhi_chan->mhi_dev,
++						  mhi_chan->dir,
++						  buf_info->cb_buf,
++						  buf_info->len, MHI_EOT)) {
++					dev_err(dev,
++						"Error recycling buffer for chan:%d\n",
++						mhi_chan->chan);
++					kfree(buf_info->cb_buf);
++				}
++			}
++		}
++		break;
++	} /* CC_EOT */
++	case MHI_EV_CC_OOB:
++	case MHI_EV_CC_DB_MODE:
++	{
++		unsigned long flags;
++
++		mhi_chan->db_cfg.db_mode = 1;
++		read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
++		if (tre_ring->wp != tre_ring->rp &&
++		    MHI_DB_ACCESS_VALID(mhi_cntrl)) {
++			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
++		}
++		read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
++		break;
++	}
++	case MHI_EV_CC_BAD_TRE:
++	default:
++		dev_err(dev, "Unknown event 0x%x\n", ev_code);
++		break;
++	} /* switch(MHI_EV_READ_CODE(EV_TRB_CODE,event)) */
++
++end_process_tx_event:
++	if (ev_code >= MHI_EV_CC_OOB)
++		write_unlock_irqrestore(&mhi_chan->lock, flags);
++	else
++		read_unlock_bh(&mhi_chan->lock);
++
++	return 0;
++}
++
++static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
++			   struct mhi_tre *event,
++			   struct mhi_chan *mhi_chan)
++{
++	struct mhi_ring *buf_ring, *tre_ring;
++	struct mhi_buf_info *buf_info;
++	struct mhi_result result;
++	int ev_code;
++	u32 cookie; /* offset to local descriptor */
++	u16 xfer_len;
++
++	buf_ring = &mhi_chan->buf_ring;
++	tre_ring = &mhi_chan->tre_ring;
++
++	ev_code = MHI_TRE_GET_EV_CODE(event);
++	cookie = MHI_TRE_GET_EV_COOKIE(event);
++	xfer_len = MHI_TRE_GET_EV_LEN(event);
++
++	/* Received out of bound cookie */
++	WARN_ON(cookie >= buf_ring->len);
++
++	buf_info = buf_ring->base + cookie;
++
++	result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
++		-EOVERFLOW : 0;
++
++	/* truncate to buf len if xfer_len is larger */
++	result.bytes_xferd = min_t(u16, xfer_len, buf_info->len);
++	result.buf_addr = buf_info->cb_buf;
++	result.dir = mhi_chan->dir;
++
++	read_lock_bh(&mhi_chan->lock);
++
++	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
++		goto end_process_rsc_event;
++
++	WARN_ON(!buf_info->used);
++
++	/* notify the client */
++	mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
++
++	/*
++	 * Note: We're arbitrarily incrementing RP even though, completion
++	 * packet we processed might not be the same one, reason we can do this
++	 * is because device guaranteed to cache descriptors in order it
++	 * receive, so even though completion event is different we can re-use
++	 * all descriptors in between.
++	 * Example:
++	 * Transfer Ring has descriptors: A, B, C, D
++	 * Last descriptor host queue is D (WP) and first descriptor
++	 * host queue is A (RP).
++	 * The completion event we just serviced is descriptor C.
++	 * Then we can safely queue descriptors to replace A, B, and C
++	 * even though host did not receive any completions.
++	 */
++	mhi_del_ring_element(mhi_cntrl, tre_ring);
++	buf_info->used = false;
++
++end_process_rsc_event:
++	read_unlock_bh(&mhi_chan->lock);
++
++	return 0;
++}
++
++static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
++				       struct mhi_tre *tre)
++{
++	dma_addr_t ptr = MHI_TRE_GET_EV_PTR(tre);
++	struct mhi_cmd *cmd_ring = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
++	struct mhi_ring *mhi_ring = &cmd_ring->ring;
++	struct mhi_tre *cmd_pkt;
++	struct mhi_chan *mhi_chan;
++	u32 chan;
++
++	if (!is_valid_ring_ptr(mhi_ring, ptr)) {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Event element points outside of the cmd ring\n");
++		return;
++	}
++
++	cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
++
++	chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
++
++	if (chan < mhi_cntrl->max_chan &&
++	    mhi_cntrl->mhi_chan[chan].configured) {
++		mhi_chan = &mhi_cntrl->mhi_chan[chan];
++		write_lock_bh(&mhi_chan->lock);
++		mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
++		complete(&mhi_chan->completion);
++		write_unlock_bh(&mhi_chan->lock);
++	} else {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Completion packet for invalid channel ID: %d\n", chan);
++	}
++
++	mhi_del_ring_element(mhi_cntrl, mhi_ring);
++}
++
++int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
++			     struct mhi_event *mhi_event,
++			     u32 event_quota)
++{
++	struct mhi_tre *dev_rp, *local_rp;
++	struct mhi_ring *ev_ring = &mhi_event->ring;
++	struct mhi_event_ctxt *er_ctxt =
++		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
++	struct mhi_chan *mhi_chan;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	u32 chan;
++	int count = 0;
++	dma_addr_t ptr = er_ctxt->rp;
++
++	/*
++	 * This is a quick check to avoid unnecessary event processing
++	 * in case MHI is already in error state, but it's still possible
++	 * to transition to error state while processing events
++	 */
++	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
++		return -EIO;
++
++	if (!is_valid_ring_ptr(ev_ring, ptr)) {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Event ring rp points outside of the event ring\n");
++		return -EIO;
++	}
++
++	dev_rp = mhi_to_virtual(ev_ring, ptr);
++	local_rp = ev_ring->rp;
++
++	while (dev_rp != local_rp) {
++		enum mhi_pkt_type type = MHI_TRE_GET_EV_TYPE(local_rp);
++
++		switch (type) {
++		case MHI_PKT_TYPE_BW_REQ_EVENT:
++		{
++			struct mhi_link_info *link_info;
++
++			link_info = &mhi_cntrl->mhi_link_info;
++			write_lock_irq(&mhi_cntrl->pm_lock);
++			link_info->target_link_speed =
++				MHI_TRE_GET_EV_LINKSPEED(local_rp);
++			link_info->target_link_width =
++				MHI_TRE_GET_EV_LINKWIDTH(local_rp);
++			write_unlock_irq(&mhi_cntrl->pm_lock);
++			dev_dbg(dev, "Received BW_REQ event\n");
++			mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_BW_REQ);
++			break;
++		}
++		case MHI_PKT_TYPE_STATE_CHANGE_EVENT:
++		{
++			enum mhi_state new_state;
++
++			new_state = MHI_TRE_GET_EV_STATE(local_rp);
++
++			dev_dbg(dev, "State change event to state: %s\n",
++				TO_MHI_STATE_STR(new_state));
++
++			switch (new_state) {
++			case MHI_STATE_M0:
++				mhi_pm_m0_transition(mhi_cntrl);
++				break;
++			case MHI_STATE_M1:
++				mhi_pm_m1_transition(mhi_cntrl);
++				break;
++			case MHI_STATE_M3:
++				mhi_pm_m3_transition(mhi_cntrl);
++				break;
++			case MHI_STATE_SYS_ERR:
++			{
++				enum mhi_pm_state new_state;
++
++				/* skip SYS_ERROR handling if RDDM supported */
++				if (mhi_cntrl->ee == MHI_EE_RDDM ||
++				    mhi_cntrl->rddm_image)
++					break;
++
++				dev_dbg(dev, "System error detected\n");
++				write_lock_irq(&mhi_cntrl->pm_lock);
++				new_state = mhi_tryset_pm_state(mhi_cntrl,
++							MHI_PM_SYS_ERR_DETECT);
++				write_unlock_irq(&mhi_cntrl->pm_lock);
++				if (new_state == MHI_PM_SYS_ERR_DETECT)
++					mhi_pm_sys_err_handler(mhi_cntrl);
++				break;
++			}
++			default:
++				dev_err(dev, "Invalid state: %s\n",
++					TO_MHI_STATE_STR(new_state));
++			}
++
++			break;
++		}
++		case MHI_PKT_TYPE_CMD_COMPLETION_EVENT:
++			mhi_process_cmd_completion(mhi_cntrl, local_rp);
++			break;
++		case MHI_PKT_TYPE_EE_EVENT:
++		{
++			enum dev_st_transition st = DEV_ST_TRANSITION_MAX;
++			enum mhi_ee_type event = MHI_TRE_GET_EV_EXECENV(local_rp);
++
++			dev_dbg(dev, "Received EE event: %s\n",
++				TO_MHI_EXEC_STR(event));
++			switch (event) {
++			case MHI_EE_SBL:
++				st = DEV_ST_TRANSITION_SBL;
++				break;
++			case MHI_EE_WFW:
++			case MHI_EE_AMSS:
++				st = DEV_ST_TRANSITION_MISSION_MODE;
++				break;
++			case MHI_EE_RDDM:
++				mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_RDDM);
++				write_lock_irq(&mhi_cntrl->pm_lock);
++				mhi_cntrl->ee = event;
++				write_unlock_irq(&mhi_cntrl->pm_lock);
++				wake_up_all(&mhi_cntrl->state_event);
++				break;
++			default:
++				dev_err(dev,
++					"Unhandled EE event: 0x%x\n", type);
++			}
++			if (st != DEV_ST_TRANSITION_MAX)
++				mhi_queue_state_transition(mhi_cntrl, st);
++
++			break;
++		}
++		case MHI_PKT_TYPE_TX_EVENT:
++			chan = MHI_TRE_GET_EV_CHID(local_rp);
++
++			WARN_ON(chan >= mhi_cntrl->max_chan);
++
++			/*
++			 * Only process the event ring elements whose channel
++			 * ID is within the maximum supported range.
++			 */
++			if (chan < mhi_cntrl->max_chan) {
++				mhi_chan = &mhi_cntrl->mhi_chan[chan];
++				if (!mhi_chan->configured)
++					break;
++				parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
++				event_quota--;
++			}
++			break;
++		default:
++			dev_err(dev, "Unhandled event type: %d\n", type);
++			break;
++		}
++
++		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
++		local_rp = ev_ring->rp;
++
++		ptr = er_ctxt->rp;
++		if (!is_valid_ring_ptr(ev_ring, ptr)) {
++			dev_err(&mhi_cntrl->mhi_dev->dev,
++				"Event ring rp points outside of the event ring\n");
++			return -EIO;
++		}
++
++		dev_rp = mhi_to_virtual(ev_ring, ptr);
++		count++;
++	}
++
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
++		mhi_ring_er_db(mhi_event);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++
++	return count;
++}
++
++int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
++				struct mhi_event *mhi_event,
++				u32 event_quota)
++{
++	struct mhi_tre *dev_rp, *local_rp;
++	struct mhi_ring *ev_ring = &mhi_event->ring;
++	struct mhi_event_ctxt *er_ctxt =
++		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
++	int count = 0;
++	u32 chan;
++	struct mhi_chan *mhi_chan;
++	dma_addr_t ptr = er_ctxt->rp;
++
++	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
++		return -EIO;
++
++	if (!is_valid_ring_ptr(ev_ring, ptr)) {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Event ring rp points outside of the event ring\n");
++		return -EIO;
++	}
++
++	dev_rp = mhi_to_virtual(ev_ring, ptr);
++	local_rp = ev_ring->rp;
++
++	while (dev_rp != local_rp && event_quota > 0) {
++		enum mhi_pkt_type type = MHI_TRE_GET_EV_TYPE(local_rp);
++
++		chan = MHI_TRE_GET_EV_CHID(local_rp);
++
++		WARN_ON(chan >= mhi_cntrl->max_chan);
++
++		/*
++		 * Only process the event ring elements whose channel
++		 * ID is within the maximum supported range.
++		 */
++		if (chan < mhi_cntrl->max_chan &&
++		    mhi_cntrl->mhi_chan[chan].configured) {
++			mhi_chan = &mhi_cntrl->mhi_chan[chan];
++
++			if (likely(type == MHI_PKT_TYPE_TX_EVENT)) {
++				parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
++				event_quota--;
++			} else if (type == MHI_PKT_TYPE_RSC_TX_EVENT) {
++				parse_rsc_event(mhi_cntrl, local_rp, mhi_chan);
++				event_quota--;
++			}
++		}
++
++		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
++		local_rp = ev_ring->rp;
++
++		ptr = er_ctxt->rp;
++		if (!is_valid_ring_ptr(ev_ring, ptr)) {
++			dev_err(&mhi_cntrl->mhi_dev->dev,
++				"Event ring rp points outside of the event ring\n");
++			return -EIO;
++		}
++
++		dev_rp = mhi_to_virtual(ev_ring, ptr);
++		count++;
++	}
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
++		mhi_ring_er_db(mhi_event);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++
++	return count;
++}
++
++void mhi_ev_task(unsigned long data)
++{
++	struct mhi_event *mhi_event = (struct mhi_event *)data;
++	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
++
++	/* process all pending events */
++	spin_lock_bh(&mhi_event->lock);
++	mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX);
++	spin_unlock_bh(&mhi_event->lock);
++}
++
++void mhi_ctrl_ev_task(unsigned long data)
++{
++	struct mhi_event *mhi_event = (struct mhi_event *)data;
++	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	enum mhi_state state;
++	enum mhi_pm_state pm_state = 0;
++	int ret;
++
++	/*
++	 * We can check PM state w/o a lock here because there is no way
++	 * PM state can change from reg access valid to no access while this
++	 * thread being executed.
++	 */
++	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
++		/*
++		 * We may have a pending event but not allowed to
++		 * process it since we are probably in a suspended state,
++		 * so trigger a resume.
++		 */
++		mhi_trigger_resume(mhi_cntrl);
++
++		return;
++	}
++
++	/* Process ctrl events events */
++	ret = mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX);
++
++	/*
++	 * We received an IRQ but no events to process, maybe device went to
++	 * SYS_ERR state? Check the state to confirm.
++	 */
++	if (!ret) {
++		write_lock_irq(&mhi_cntrl->pm_lock);
++		state = mhi_get_mhi_state(mhi_cntrl);
++		if (state == MHI_STATE_SYS_ERR) {
++			dev_dbg(dev, "System error detected\n");
++			pm_state = mhi_tryset_pm_state(mhi_cntrl,
++						       MHI_PM_SYS_ERR_DETECT);
++		}
++		write_unlock_irq(&mhi_cntrl->pm_lock);
++		if (pm_state == MHI_PM_SYS_ERR_DETECT)
++			mhi_pm_sys_err_handler(mhi_cntrl);
++	}
++}
++
++static bool mhi_is_ring_full(struct mhi_controller *mhi_cntrl,
++			     struct mhi_ring *ring)
++{
++	void *tmp = ring->wp + ring->el_size;
++
++	if (tmp >= (ring->base + ring->len))
++		tmp = ring->base;
++
++	return (tmp == ring->rp);
++}
++
++int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
++		  struct sk_buff *skb, size_t len, enum mhi_flags mflags)
++{
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
++							     mhi_dev->dl_chan;
++	struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
++	struct mhi_buf_info buf_info = { };
++	int ret;
++
++	/* If MHI host pre-allocates buffers then client drivers cannot queue */
++	if (mhi_chan->pre_alloc)
++		return -EINVAL;
++
++	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
++		return -ENOMEM;
++
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
++		read_unlock_bh(&mhi_cntrl->pm_lock);
++		return -EIO;
++	}
++
++	/* we're in M3 or transitioning to M3 */
++	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
++		mhi_trigger_resume(mhi_cntrl);
++
++	/* Toggle wake to exit out of M2 */
++	mhi_cntrl->wake_toggle(mhi_cntrl);
++
++	buf_info.v_addr = skb->data;
++	buf_info.cb_buf = skb;
++	buf_info.len = len;
++
++	ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
++	if (unlikely(ret)) {
++		read_unlock_bh(&mhi_cntrl->pm_lock);
++		return ret;
++	}
++
++	if (mhi_chan->dir == DMA_TO_DEVICE)
++		atomic_inc(&mhi_cntrl->pending_pkts);
++
++	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
++		read_lock_bh(&mhi_chan->lock);
++		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
++		read_unlock_bh(&mhi_chan->lock);
++	}
++
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(mhi_queue_skb);
++
++int mhi_queue_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
++		  struct mhi_buf *mhi_buf, size_t len, enum mhi_flags mflags)
++{
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
++							     mhi_dev->dl_chan;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
++	struct mhi_buf_info buf_info = { };
++	int ret;
++
++	/* If MHI host pre-allocates buffers then client drivers cannot queue */
++	if (mhi_chan->pre_alloc)
++		return -EINVAL;
++
++	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
++		return -ENOMEM;
++
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
++		dev_err(dev, "MHI is not in activate state, PM state: %s\n",
++			to_mhi_pm_state_str(mhi_cntrl->pm_state));
++		read_unlock_bh(&mhi_cntrl->pm_lock);
++
++		return -EIO;
++	}
++
++	/* we're in M3 or transitioning to M3 */
++	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
++		mhi_trigger_resume(mhi_cntrl);
++
++	/* Toggle wake to exit out of M2 */
++	mhi_cntrl->wake_toggle(mhi_cntrl);
++
++	buf_info.p_addr = mhi_buf->dma_addr;
++	buf_info.cb_buf = mhi_buf;
++	buf_info.pre_mapped = true;
++	buf_info.len = len;
++
++	ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
++	if (unlikely(ret)) {
++		read_unlock_bh(&mhi_cntrl->pm_lock);
++		return ret;
++	}
++
++	if (mhi_chan->dir == DMA_TO_DEVICE)
++		atomic_inc(&mhi_cntrl->pending_pkts);
++
++	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
++		read_lock_bh(&mhi_chan->lock);
++		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
++		read_unlock_bh(&mhi_chan->lock);
++	}
++
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(mhi_queue_dma);
++
++int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
++			struct mhi_buf_info *info, enum mhi_flags flags)
++{
++	struct mhi_ring *buf_ring, *tre_ring;
++	struct mhi_tre *mhi_tre;
++	struct mhi_buf_info *buf_info;
++	int eot, eob, chain, bei;
++	int ret;
++
++	buf_ring = &mhi_chan->buf_ring;
++	tre_ring = &mhi_chan->tre_ring;
++
++	buf_info = buf_ring->wp;
++	WARN_ON(buf_info->used);
++	buf_info->pre_mapped = info->pre_mapped;
++	if (info->pre_mapped)
++		buf_info->p_addr = info->p_addr;
++	else
++		buf_info->v_addr = info->v_addr;
++	buf_info->cb_buf = info->cb_buf;
++	buf_info->wp = tre_ring->wp;
++	buf_info->dir = mhi_chan->dir;
++	buf_info->len = info->len;
++
++	if (!info->pre_mapped) {
++		ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
++		if (ret)
++			return ret;
++	}
++
++	eob = !!(flags & MHI_EOB);
++	eot = !!(flags & MHI_EOT);
++	chain = !!(flags & MHI_CHAIN);
++	bei = !!(mhi_chan->intmod);
++
++	mhi_tre = tre_ring->wp;
++	mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
++	mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(info->len);
++	mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(bei, eot, eob, chain);
++
++	/* increment WP */
++	mhi_add_ring_element(mhi_cntrl, tre_ring);
++	mhi_add_ring_element(mhi_cntrl, buf_ring);
++
++	return 0;
++}
++
++int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
++		  void *buf, size_t len, enum mhi_flags mflags)
++{
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
++							     mhi_dev->dl_chan;
++	struct mhi_ring *tre_ring;
++	struct mhi_buf_info buf_info = { };
++	unsigned long flags;
++	int ret;
++
++	/*
++	 * this check here only as a guard, it's always
++	 * possible mhi can enter error while executing rest of function,
++	 * which is not fatal so we do not need to hold pm_lock
++	 */
++	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
++		return -EIO;
++
++	tre_ring = &mhi_chan->tre_ring;
++	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
++		return -ENOMEM;
++
++	buf_info.v_addr = buf;
++	buf_info.cb_buf = buf;
++	buf_info.len = len;
++
++	ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
++	if (unlikely(ret))
++		return ret;
++
++	read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
++
++	/* we're in M3 or transitioning to M3 */
++	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
++		mhi_trigger_resume(mhi_cntrl);
++
++	/* Toggle wake to exit out of M2 */
++	mhi_cntrl->wake_toggle(mhi_cntrl);
++
++	if (mhi_chan->dir == DMA_TO_DEVICE)
++		atomic_inc(&mhi_cntrl->pending_pkts);
++
++	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
++		unsigned long flags;
++
++		read_lock_irqsave(&mhi_chan->lock, flags);
++		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
++		read_unlock_irqrestore(&mhi_chan->lock, flags);
++	}
++
++	read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(mhi_queue_buf);
++
++int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
++		 struct mhi_chan *mhi_chan,
++		 enum mhi_cmd_type cmd)
++{
++	struct mhi_tre *cmd_tre = NULL;
++	struct mhi_cmd *mhi_cmd = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
++	struct mhi_ring *ring = &mhi_cmd->ring;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	int chan = 0;
++
++	if (mhi_chan)
++		chan = mhi_chan->chan;
++
++	spin_lock_bh(&mhi_cmd->lock);
++	if (!get_nr_avail_ring_elements(mhi_cntrl, ring)) {
++		spin_unlock_bh(&mhi_cmd->lock);
++		return -ENOMEM;
++	}
++
++	/* prepare the cmd tre */
++	cmd_tre = ring->wp;
++	switch (cmd) {
++	case MHI_CMD_RESET_CHAN:
++		cmd_tre->ptr = MHI_TRE_CMD_RESET_PTR;
++		cmd_tre->dword[0] = MHI_TRE_CMD_RESET_DWORD0;
++		cmd_tre->dword[1] = MHI_TRE_CMD_RESET_DWORD1(chan);
++		break;
++	case MHI_CMD_START_CHAN:
++		cmd_tre->ptr = MHI_TRE_CMD_START_PTR;
++		cmd_tre->dword[0] = MHI_TRE_CMD_START_DWORD0;
++		cmd_tre->dword[1] = MHI_TRE_CMD_START_DWORD1(chan);
++		break;
++	default:
++		dev_err(dev, "Command not supported\n");
++		break;
++	}
++
++	/* queue to hardware */
++	mhi_add_ring_element(mhi_cntrl, ring);
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
++		mhi_ring_cmd_db(mhi_cntrl, mhi_cmd);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++	spin_unlock_bh(&mhi_cmd->lock);
++
++	return 0;
++}
++
++static void __mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
++				    struct mhi_chan *mhi_chan)
++{
++	int ret;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++
++	dev_dbg(dev, "Entered: unprepare channel:%d\n", mhi_chan->chan);
++
++	/* no more processing events for this channel */
++	mutex_lock(&mhi_chan->mutex);
++	write_lock_irq(&mhi_chan->lock);
++	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
++		write_unlock_irq(&mhi_chan->lock);
++		mutex_unlock(&mhi_chan->mutex);
++		return;
++	}
++
++	mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
++	write_unlock_irq(&mhi_chan->lock);
++
++	reinit_completion(&mhi_chan->completion);
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
++		read_unlock_bh(&mhi_cntrl->pm_lock);
++		goto error_invalid_state;
++	}
++
++	mhi_cntrl->wake_toggle(mhi_cntrl);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++
++	mhi_cntrl->runtime_get(mhi_cntrl);
++	mhi_cntrl->runtime_put(mhi_cntrl);
++	ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_RESET_CHAN);
++	if (ret)
++		goto error_invalid_state;
++
++	/* even if it fails we will still reset */
++	ret = wait_for_completion_timeout(&mhi_chan->completion,
++				msecs_to_jiffies(mhi_cntrl->timeout_ms));
++	if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS)
++		dev_err(dev,
++			"Failed to receive cmd completion, still resetting\n");
++
++error_invalid_state:
++	if (!mhi_chan->offload_ch) {
++		mhi_reset_chan(mhi_cntrl, mhi_chan);
++		mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
++	}
++	dev_dbg(dev, "chan:%d successfully resetted\n", mhi_chan->chan);
++	mutex_unlock(&mhi_chan->mutex);
++}
++
++int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
++			struct mhi_chan *mhi_chan)
++{
++	int ret = 0;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++
++	dev_dbg(dev, "Preparing channel: %d\n", mhi_chan->chan);
++
++	if (!(BIT(mhi_cntrl->ee) & mhi_chan->ee_mask)) {
++		dev_err(dev,
++			"Current EE: %s Required EE Mask: 0x%x for chan: %s\n",
++			TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask,
++			mhi_chan->name);
++		return -ENOTCONN;
++	}
++
++	mutex_lock(&mhi_chan->mutex);
++
++	/* If channel is not in disable state, do not allow it to start */
++	if (mhi_chan->ch_state != MHI_CH_STATE_DISABLED) {
++		ret = -EIO;
++		dev_dbg(dev, "channel: %d is not in disabled state\n",
++			mhi_chan->chan);
++		goto error_init_chan;
++	}
++
++	/* Check of client manages channel context for offload channels */
++	if (!mhi_chan->offload_ch) {
++		ret = mhi_init_chan_ctxt(mhi_cntrl, mhi_chan);
++		if (ret)
++			goto error_init_chan;
++	}
++
++	reinit_completion(&mhi_chan->completion);
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
++		read_unlock_bh(&mhi_cntrl->pm_lock);
++		ret = -EIO;
++		goto error_pm_state;
++	}
++
++	mhi_cntrl->wake_toggle(mhi_cntrl);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++	mhi_cntrl->runtime_get(mhi_cntrl);
++	mhi_cntrl->runtime_put(mhi_cntrl);
++
++	ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_START_CHAN);
++	if (ret)
++		goto error_pm_state;
++
++	ret = wait_for_completion_timeout(&mhi_chan->completion,
++				msecs_to_jiffies(mhi_cntrl->timeout_ms));
++	if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) {
++		ret = -EIO;
++		goto error_pm_state;
++	}
++
++	write_lock_irq(&mhi_chan->lock);
++	mhi_chan->ch_state = MHI_CH_STATE_ENABLED;
++	write_unlock_irq(&mhi_chan->lock);
++
++	/* Pre-allocate buffer for xfer ring */
++	if (mhi_chan->pre_alloc) {
++		int nr_el = get_nr_avail_ring_elements(mhi_cntrl,
++						       &mhi_chan->tre_ring);
++		size_t len = mhi_cntrl->buffer_len;
++
++		while (nr_el--) {
++			void *buf;
++			struct mhi_buf_info info = { };
++			buf = kmalloc(len, GFP_KERNEL);
++			if (!buf) {
++				ret = -ENOMEM;
++				goto error_pre_alloc;
++			}
++
++			/* Prepare transfer descriptors */
++			info.v_addr = buf;
++			info.cb_buf = buf;
++			info.len = len;
++			ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &info, MHI_EOT);
++			if (ret) {
++				kfree(buf);
++				goto error_pre_alloc;
++			}
++		}
++
++		read_lock_bh(&mhi_cntrl->pm_lock);
++		if (MHI_DB_ACCESS_VALID(mhi_cntrl)) {
++			read_lock_irq(&mhi_chan->lock);
++			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
++			read_unlock_irq(&mhi_chan->lock);
++		}
++		read_unlock_bh(&mhi_cntrl->pm_lock);
++	}
++
++	mutex_unlock(&mhi_chan->mutex);
++
++	dev_dbg(dev, "Chan: %d successfully moved to start state\n",
++		mhi_chan->chan);
++
++	return 0;
++
++error_pm_state:
++	if (!mhi_chan->offload_ch)
++		mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
++
++error_init_chan:
++	mutex_unlock(&mhi_chan->mutex);
++
++	return ret;
++
++error_pre_alloc:
++	mutex_unlock(&mhi_chan->mutex);
++	__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
++
++	return ret;
++}
++
++static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
++				  struct mhi_event *mhi_event,
++				  struct mhi_event_ctxt *er_ctxt,
++				  int chan)
++
++{
++	struct mhi_tre *dev_rp, *local_rp;
++	struct mhi_ring *ev_ring;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	unsigned long flags;
++	dma_addr_t ptr;
++
++	dev_dbg(dev, "Marking all events for chan: %d as stale\n", chan);
++
++	ev_ring = &mhi_event->ring;
++
++	/* mark all stale events related to channel as STALE event */
++	spin_lock_irqsave(&mhi_event->lock, flags);
++
++	ptr = er_ctxt->rp;
++	if (!is_valid_ring_ptr(ev_ring, ptr)) {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Event ring rp points outside of the event ring\n");
++		dev_rp = ev_ring->rp;
++	} else {
++		dev_rp = mhi_to_virtual(ev_ring, ptr);
++	}
++
++	local_rp = ev_ring->rp;
++	while (dev_rp != local_rp) {
++		if (MHI_TRE_GET_EV_TYPE(local_rp) == MHI_PKT_TYPE_TX_EVENT &&
++		    chan == MHI_TRE_GET_EV_CHID(local_rp))
++			local_rp->dword[1] = MHI_TRE_EV_DWORD1(chan,
++					MHI_PKT_TYPE_STALE_EVENT);
++		local_rp++;
++		if (local_rp == (ev_ring->base + ev_ring->len))
++			local_rp = ev_ring->base;
++	}
++
++	dev_dbg(dev, "Finished marking events as stale events\n");
++	spin_unlock_irqrestore(&mhi_event->lock, flags);
++}
++
++static void mhi_reset_data_chan(struct mhi_controller *mhi_cntrl,
++				struct mhi_chan *mhi_chan)
++{
++	struct mhi_ring *buf_ring, *tre_ring;
++	struct mhi_result result;
++
++	/* Reset any pending buffers */
++	buf_ring = &mhi_chan->buf_ring;
++	tre_ring = &mhi_chan->tre_ring;
++	result.transaction_status = -ENOTCONN;
++	result.bytes_xferd = 0;
++	while (tre_ring->rp != tre_ring->wp) {
++		struct mhi_buf_info *buf_info = buf_ring->rp;
++
++		if (mhi_chan->dir == DMA_TO_DEVICE)
++			atomic_dec(&mhi_cntrl->pending_pkts);
++
++		if (!buf_info->pre_mapped)
++			mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
++
++		mhi_del_ring_element(mhi_cntrl, buf_ring);
++		mhi_del_ring_element(mhi_cntrl, tre_ring);
++
++		if (mhi_chan->pre_alloc) {
++			kfree(buf_info->cb_buf);
++		} else {
++			result.buf_addr = buf_info->cb_buf;
++			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
++		}
++	}
++}
++
++void mhi_reset_chan(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan)
++{
++	struct mhi_event *mhi_event;
++	struct mhi_event_ctxt *er_ctxt;
++	int chan = mhi_chan->chan;
++
++	/* Nothing to reset, client doesn't queue buffers */
++	if (mhi_chan->offload_ch)
++		return;
++
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
++	er_ctxt = &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_chan->er_index];
++
++	mhi_mark_stale_events(mhi_cntrl, mhi_event, er_ctxt, chan);
++
++	mhi_reset_data_chan(mhi_cntrl, mhi_chan);
++
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++}
++
++/* Move channel to start state */
++int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
++{
++	int ret, dir;
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++	struct mhi_chan *mhi_chan;
++
++	for (dir = 0; dir < 2; dir++) {
++		mhi_chan = dir ? mhi_dev->dl_chan : mhi_dev->ul_chan;
++		if (!mhi_chan)
++			continue;
++
++		ret = mhi_prepare_channel(mhi_cntrl, mhi_chan);
++		if (ret)
++			goto error_open_chan;
++	}
++
++	return 0;
++
++error_open_chan:
++	for (--dir; dir >= 0; dir--) {
++		mhi_chan = dir ? mhi_dev->dl_chan : mhi_dev->ul_chan;
++		if (!mhi_chan)
++			continue;
++
++		__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
++	}
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(mhi_prepare_for_transfer);
++
++void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
++{
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++	struct mhi_chan *mhi_chan;
++	int dir;
++
++	for (dir = 0; dir < 2; dir++) {
++		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
++		if (!mhi_chan)
++			continue;
++
++		__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
++	}
++}
++EXPORT_SYMBOL_GPL(mhi_unprepare_from_transfer);
++
++int mhi_poll(struct mhi_device *mhi_dev, u32 budget)
++{
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++	struct mhi_chan *mhi_chan = mhi_dev->dl_chan;
++	struct mhi_event *mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
++	int ret;
++
++	spin_lock_bh(&mhi_event->lock);
++	ret = mhi_event->process_event(mhi_cntrl, mhi_event, budget);
++	spin_unlock_bh(&mhi_event->lock);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(mhi_poll);
+diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
+new file mode 100644
+index 0000000000000..e3df838c3c80e
+--- /dev/null
++++ b/drivers/bus/mhi/host/pci_generic.c
+@@ -0,0 +1,345 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ * MHI PCI driver - MHI over PCI controller driver
++ *
++ * This module is a generic driver for registering MHI-over-PCI devices,
++ * such as PCIe QCOM modems.
++ *
++ * Copyright (C) 2020 Linaro Ltd <loic.poulain@linaro.org>
++ */
++
++#include <linux/device.h>
++#include <linux/mhi.h>
++#include <linux/module.h>
++#include <linux/pci.h>
++
++#define MHI_PCI_DEFAULT_BAR_NUM 0
++
++/**
++ * struct mhi_pci_dev_info - MHI PCI device specific information
++ * @config: MHI controller configuration
++ * @name: name of the PCI module
++ * @fw: firmware path (if any)
++ * @edl: emergency download mode firmware path (if any)
++ * @bar_num: PCI base address register to use for MHI MMIO register space
++ * @dma_data_width: DMA transfer word size (32 or 64 bits)
++ */
++struct mhi_pci_dev_info {
++	const struct mhi_controller_config *config;
++	const char *name;
++	const char *fw;
++	const char *edl;
++	unsigned int bar_num;
++	unsigned int dma_data_width;
++};
++
++#define MHI_CHANNEL_CONFIG_UL(ch_num, ch_name, el_count, ev_ring) \
++	{						\
++		.num = ch_num,				\
++		.name = ch_name,			\
++		.num_elements = el_count,		\
++		.event_ring = ev_ring,			\
++		.dir = DMA_TO_DEVICE,			\
++		.ee_mask = BIT(MHI_EE_AMSS),		\
++		.pollcfg = 0,				\
++		.doorbell = MHI_DB_BRST_DISABLE,	\
++		.lpm_notify = false,			\
++		.offload_channel = false,		\
++		.doorbell_mode_switch = false,		\
++	}						\
++
++#define MHI_CHANNEL_CONFIG_DL(ch_num, ch_name, el_count, ev_ring) \
++	{						\
++		.num = ch_num,				\
++		.name = ch_name,			\
++		.num_elements = el_count,		\
++		.event_ring = ev_ring,			\
++		.dir = DMA_FROM_DEVICE,			\
++		.ee_mask = BIT(MHI_EE_AMSS),		\
++		.pollcfg = 0,				\
++		.doorbell = MHI_DB_BRST_DISABLE,	\
++		.lpm_notify = false,			\
++		.offload_channel = false,		\
++		.doorbell_mode_switch = false,		\
++	}
++
++#define MHI_EVENT_CONFIG_CTRL(ev_ring)		\
++	{					\
++		.num_elements = 64,		\
++		.irq_moderation_ms = 0,		\
++		.irq = (ev_ring) + 1,		\
++		.priority = 1,			\
++		.mode = MHI_DB_BRST_DISABLE,	\
++		.data_type = MHI_ER_CTRL,	\
++		.hardware_event = false,	\
++		.client_managed = false,	\
++		.offload_channel = false,	\
++	}
++
++#define MHI_EVENT_CONFIG_DATA(ev_ring)		\
++	{					\
++		.num_elements = 128,		\
++		.irq_moderation_ms = 5,		\
++		.irq = (ev_ring) + 1,		\
++		.priority = 1,			\
++		.mode = MHI_DB_BRST_DISABLE,	\
++		.data_type = MHI_ER_DATA,	\
++		.hardware_event = false,	\
++		.client_managed = false,	\
++		.offload_channel = false,	\
++	}
++
++#define MHI_EVENT_CONFIG_HW_DATA(ev_ring, ch_num) \
++	{					\
++		.num_elements = 128,		\
++		.irq_moderation_ms = 5,		\
++		.irq = (ev_ring) + 1,		\
++		.priority = 1,			\
++		.mode = MHI_DB_BRST_DISABLE,	\
++		.data_type = MHI_ER_DATA,	\
++		.hardware_event = true,		\
++		.client_managed = false,	\
++		.offload_channel = false,	\
++		.channel = ch_num,		\
++	}
++
++static const struct mhi_channel_config modem_qcom_v1_mhi_channels[] = {
++	MHI_CHANNEL_CONFIG_UL(12, "MBIM", 4, 0),
++	MHI_CHANNEL_CONFIG_DL(13, "MBIM", 4, 0),
++	MHI_CHANNEL_CONFIG_UL(14, "QMI", 4, 0),
++	MHI_CHANNEL_CONFIG_DL(15, "QMI", 4, 0),
++	MHI_CHANNEL_CONFIG_UL(20, "IPCR", 8, 0),
++	MHI_CHANNEL_CONFIG_DL(21, "IPCR", 8, 0),
++	MHI_CHANNEL_CONFIG_UL(100, "IP_HW0", 128, 1),
++	MHI_CHANNEL_CONFIG_DL(101, "IP_HW0", 128, 2),
++};
++
++static const struct mhi_event_config modem_qcom_v1_mhi_events[] = {
++	/* first ring is control+data ring */
++	MHI_EVENT_CONFIG_CTRL(0),
++	/* Hardware channels request dedicated hardware event rings */
++	MHI_EVENT_CONFIG_HW_DATA(1, 100),
++	MHI_EVENT_CONFIG_HW_DATA(2, 101)
++};
++
++static const struct mhi_controller_config modem_qcom_v1_mhiv_config = {
++	.max_channels = 128,
++	.timeout_ms = 5000,
++	.num_channels = ARRAY_SIZE(modem_qcom_v1_mhi_channels),
++	.ch_cfg = modem_qcom_v1_mhi_channels,
++	.num_events = ARRAY_SIZE(modem_qcom_v1_mhi_events),
++	.event_cfg = modem_qcom_v1_mhi_events,
++};
++
++static const struct mhi_pci_dev_info mhi_qcom_sdx55_info = {
++	.name = "qcom-sdx55m",
++	.fw = "qcom/sdx55m/sbl1.mbn",
++	.edl = "qcom/sdx55m/edl.mbn",
++	.config = &modem_qcom_v1_mhiv_config,
++	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
++	.dma_data_width = 32
++};
++
++static const struct pci_device_id mhi_pci_id_table[] = {
++	{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0306),
++		.driver_data = (kernel_ulong_t) &mhi_qcom_sdx55_info },
++	{  }
++};
++MODULE_DEVICE_TABLE(pci, mhi_pci_id_table);
++
++static int mhi_pci_read_reg(struct mhi_controller *mhi_cntrl,
++			    void __iomem *addr, u32 *out)
++{
++	*out = readl(addr);
++	return 0;
++}
++
++static void mhi_pci_write_reg(struct mhi_controller *mhi_cntrl,
++			      void __iomem *addr, u32 val)
++{
++	writel(val, addr);
++}
++
++static void mhi_pci_status_cb(struct mhi_controller *mhi_cntrl,
++			      enum mhi_callback cb)
++{
++	/* Nothing to do for now */
++}
++
++static int mhi_pci_claim(struct mhi_controller *mhi_cntrl,
++			 unsigned int bar_num, u64 dma_mask)
++{
++	struct pci_dev *pdev = to_pci_dev(mhi_cntrl->cntrl_dev);
++	int err;
++
++	err = pci_assign_resource(pdev, bar_num);
++	if (err)
++		return err;
++
++	err = pcim_enable_device(pdev);
++	if (err) {
++		dev_err(&pdev->dev, "failed to enable pci device: %d\n", err);
++		return err;
++	}
++
++	err = pcim_iomap_regions(pdev, 1 << bar_num, pci_name(pdev));
++	if (err) {
++		dev_err(&pdev->dev, "failed to map pci region: %d\n", err);
++		return err;
++	}
++	mhi_cntrl->regs = pcim_iomap_table(pdev)[bar_num];
++
++	err = pci_set_dma_mask(pdev, dma_mask);
++	if (err) {
++		dev_err(&pdev->dev, "Cannot set proper DMA mask\n");
++		return err;
++	}
++
++	err = pci_set_consistent_dma_mask(pdev, dma_mask);
++	if (err) {
++		dev_err(&pdev->dev, "set consistent dma mask failed\n");
++		return err;
++	}
++
++	pci_set_master(pdev);
++
++	return 0;
++}
++
++static int mhi_pci_get_irqs(struct mhi_controller *mhi_cntrl,
++			    const struct mhi_controller_config *mhi_cntrl_config)
++{
++	struct pci_dev *pdev = to_pci_dev(mhi_cntrl->cntrl_dev);
++	int nr_vectors, i;
++	int *irq;
++
++	/*
++	 * Alloc one MSI vector for BHI + one vector per event ring, ideally...
++	 * No explicit pci_free_irq_vectors required, done by pcim_release.
++	 */
++	mhi_cntrl->nr_irqs = 1 + mhi_cntrl_config->num_events;
++
++	nr_vectors = pci_alloc_irq_vectors(pdev, 1, mhi_cntrl->nr_irqs, PCI_IRQ_MSI);
++	if (nr_vectors < 0) {
++		dev_err(&pdev->dev, "Error allocating MSI vectors %d\n",
++			nr_vectors);
++		return nr_vectors;
++	}
++
++	if (nr_vectors < mhi_cntrl->nr_irqs) {
++		dev_warn(&pdev->dev, "Not enough MSI vectors (%d/%d), use shared MSI\n",
++			 nr_vectors, mhi_cntrl_config->num_events);
++	}
++
++	irq = devm_kcalloc(&pdev->dev, mhi_cntrl->nr_irqs, sizeof(int), GFP_KERNEL);
++	if (!irq)
++		return -ENOMEM;
++
++	for (i = 0; i < mhi_cntrl->nr_irqs; i++) {
++		int vector = i >= nr_vectors ? (nr_vectors - 1) : i;
++
++		irq[i] = pci_irq_vector(pdev, vector);
++	}
++
++	mhi_cntrl->irq = irq;
++
++	return 0;
++}
++
++static int mhi_pci_runtime_get(struct mhi_controller *mhi_cntrl)
++{
++	/* no PM for now */
++	return 0;
++}
++
++static void mhi_pci_runtime_put(struct mhi_controller *mhi_cntrl)
++{
++	/* no PM for now */
++}
++
++static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
++{
++	const struct mhi_pci_dev_info *info = (struct mhi_pci_dev_info *) id->driver_data;
++	const struct mhi_controller_config *mhi_cntrl_config;
++	struct mhi_controller *mhi_cntrl;
++	int err;
++
++	dev_dbg(&pdev->dev, "MHI PCI device found: %s\n", info->name);
++
++	mhi_cntrl = mhi_alloc_controller();
++	if (!mhi_cntrl)
++		return -ENOMEM;
++
++	mhi_cntrl_config = info->config;
++	mhi_cntrl->cntrl_dev = &pdev->dev;
++	mhi_cntrl->iova_start = 0;
++	mhi_cntrl->iova_stop = DMA_BIT_MASK(info->dma_data_width);
++	mhi_cntrl->fw_image = info->fw;
++	mhi_cntrl->edl_image = info->edl;
++
++	mhi_cntrl->read_reg = mhi_pci_read_reg;
++	mhi_cntrl->write_reg = mhi_pci_write_reg;
++	mhi_cntrl->status_cb = mhi_pci_status_cb;
++	mhi_cntrl->runtime_get = mhi_pci_runtime_get;
++	mhi_cntrl->runtime_put = mhi_pci_runtime_put;
++
++	err = mhi_pci_claim(mhi_cntrl, info->bar_num, DMA_BIT_MASK(info->dma_data_width));
++	if (err)
++		goto err_release;
++
++	err = mhi_pci_get_irqs(mhi_cntrl, mhi_cntrl_config);
++	if (err)
++		goto err_release;
++
++	pci_set_drvdata(pdev, mhi_cntrl);
++
++	err = mhi_register_controller(mhi_cntrl, mhi_cntrl_config);
++	if (err)
++		goto err_release;
++
++	/* MHI bus does not power up the controller by default */
++	err = mhi_prepare_for_power_up(mhi_cntrl);
++	if (err) {
++		dev_err(&pdev->dev, "failed to prepare MHI controller\n");
++		goto err_unregister;
++	}
++
++	err = mhi_sync_power_up(mhi_cntrl);
++	if (err) {
++		dev_err(&pdev->dev, "failed to power up MHI controller\n");
++		goto err_unprepare;
++	}
++
++	return 0;
++
++err_unprepare:
++	mhi_unprepare_after_power_down(mhi_cntrl);
++err_unregister:
++	mhi_unregister_controller(mhi_cntrl);
++err_release:
++	mhi_free_controller(mhi_cntrl);
++
++	return err;
++}
++
++static void mhi_pci_remove(struct pci_dev *pdev)
++{
++	struct mhi_controller *mhi_cntrl = pci_get_drvdata(pdev);
++
++	mhi_power_down(mhi_cntrl, true);
++	mhi_unprepare_after_power_down(mhi_cntrl);
++	mhi_unregister_controller(mhi_cntrl);
++	mhi_free_controller(mhi_cntrl);
++}
++
++static struct pci_driver mhi_pci_driver = {
++	.name		= "mhi-pci-generic",
++	.id_table	= mhi_pci_id_table,
++	.probe		= mhi_pci_probe,
++	.remove		= mhi_pci_remove
++};
++module_pci_driver(mhi_pci_driver);
++
++MODULE_AUTHOR("Loic Poulain <loic.poulain@linaro.org>");
++MODULE_DESCRIPTION("Modem Host Interface (MHI) PCI controller driver");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
+new file mode 100644
+index 0000000000000..7d69b740b9f93
+--- /dev/null
++++ b/drivers/bus/mhi/host/pm.c
+@@ -0,0 +1,1157 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
++ *
++ */
++
++#include <linux/delay.h>
++#include <linux/device.h>
++#include <linux/dma-direction.h>
++#include <linux/dma-mapping.h>
++#include <linux/interrupt.h>
++#include <linux/list.h>
++#include <linux/mhi.h>
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <linux/wait.h>
++#include "internal.h"
++
++/*
++ * Not all MHI state transitions are synchronous. Transitions like Linkdown,
++ * SYS_ERR, and shutdown can happen anytime asynchronously. This function will
++ * transition to a new state only if we're allowed to.
++ *
++ * Priority increases as we go down. For instance, from any state in L0, the
++ * transition can be made to states in L1, L2 and L3. A notable exception to
++ * this rule is state DISABLE.  From DISABLE state we can only transition to
++ * POR state. Also, while in L2 state, user cannot jump back to previous
++ * L1 or L0 states.
++ *
++ * Valid transitions:
++ * L0: DISABLE <--> POR
++ *     POR <--> POR
++ *     POR -> M0 -> M2 --> M0
++ *     POR -> FW_DL_ERR
++ *     FW_DL_ERR <--> FW_DL_ERR
++ *     M0 <--> M0
++ *     M0 -> FW_DL_ERR
++ *     M0 -> M3_ENTER -> M3 -> M3_EXIT --> M0
++ * L1: SYS_ERR_DETECT -> SYS_ERR_PROCESS --> POR
++ * L2: SHUTDOWN_PROCESS -> DISABLE
++ * L3: LD_ERR_FATAL_DETECT <--> LD_ERR_FATAL_DETECT
++ *     LD_ERR_FATAL_DETECT -> SHUTDOWN_PROCESS
++ */
++static struct mhi_pm_transitions const dev_state_transitions[] = {
++	/* L0 States */
++	{
++		MHI_PM_DISABLE,
++		MHI_PM_POR
++	},
++	{
++		MHI_PM_POR,
++		MHI_PM_POR | MHI_PM_DISABLE | MHI_PM_M0 |
++		MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
++		MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_FW_DL_ERR
++	},
++	{
++		MHI_PM_M0,
++		MHI_PM_M0 | MHI_PM_M2 | MHI_PM_M3_ENTER |
++		MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
++		MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_FW_DL_ERR
++	},
++	{
++		MHI_PM_M2,
++		MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
++		MHI_PM_LD_ERR_FATAL_DETECT
++	},
++	{
++		MHI_PM_M3_ENTER,
++		MHI_PM_M3 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
++		MHI_PM_LD_ERR_FATAL_DETECT
++	},
++	{
++		MHI_PM_M3,
++		MHI_PM_M3_EXIT | MHI_PM_SYS_ERR_DETECT |
++		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
++	},
++	{
++		MHI_PM_M3_EXIT,
++		MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
++		MHI_PM_LD_ERR_FATAL_DETECT
++	},
++	{
++		MHI_PM_FW_DL_ERR,
++		MHI_PM_FW_DL_ERR | MHI_PM_SYS_ERR_DETECT |
++		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
++	},
++	/* L1 States */
++	{
++		MHI_PM_SYS_ERR_DETECT,
++		MHI_PM_SYS_ERR_PROCESS | MHI_PM_SHUTDOWN_PROCESS |
++		MHI_PM_LD_ERR_FATAL_DETECT
++	},
++	{
++		MHI_PM_SYS_ERR_PROCESS,
++		MHI_PM_POR | MHI_PM_SHUTDOWN_PROCESS |
++		MHI_PM_LD_ERR_FATAL_DETECT
++	},
++	/* L2 States */
++	{
++		MHI_PM_SHUTDOWN_PROCESS,
++		MHI_PM_DISABLE | MHI_PM_LD_ERR_FATAL_DETECT
++	},
++	/* L3 States */
++	{
++		MHI_PM_LD_ERR_FATAL_DETECT,
++		MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_SHUTDOWN_PROCESS
++	},
++};
++
++enum mhi_pm_state __must_check mhi_tryset_pm_state(struct mhi_controller *mhi_cntrl,
++						   enum mhi_pm_state state)
++{
++	unsigned long cur_state = mhi_cntrl->pm_state;
++	int index = find_last_bit(&cur_state, 32);
++
++	if (unlikely(index >= ARRAY_SIZE(dev_state_transitions)))
++		return cur_state;
++
++	if (unlikely(dev_state_transitions[index].from_state != cur_state))
++		return cur_state;
++
++	if (unlikely(!(dev_state_transitions[index].to_states & state)))
++		return cur_state;
++
++	mhi_cntrl->pm_state = state;
++	return mhi_cntrl->pm_state;
++}
++
++void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state)
++{
++	if (state == MHI_STATE_RESET) {
++		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
++				    MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 1);
++	} else {
++		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
++				    MHICTRL_MHISTATE_MASK,
++				    MHICTRL_MHISTATE_SHIFT, state);
++	}
++}
++
++/* NOP for backward compatibility, host allowed to ring DB in M2 state */
++static void mhi_toggle_dev_wake_nop(struct mhi_controller *mhi_cntrl)
++{
++}
++
++static void mhi_toggle_dev_wake(struct mhi_controller *mhi_cntrl)
++{
++	mhi_cntrl->wake_get(mhi_cntrl, false);
++	mhi_cntrl->wake_put(mhi_cntrl, true);
++}
++
++/* Handle device ready state transition */
++int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
++{
++	void __iomem *base = mhi_cntrl->regs;
++	struct mhi_event *mhi_event;
++	enum mhi_pm_state cur_state;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	u32 reset = 1, ready = 0;
++	int ret, i;
++
++	/* Wait for RESET to be cleared and READY bit to be set by the device */
++	wait_event_timeout(mhi_cntrl->state_event,
++			   MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
++			   mhi_read_reg_field(mhi_cntrl, base, MHICTRL,
++					      MHICTRL_RESET_MASK,
++					      MHICTRL_RESET_SHIFT, &reset) ||
++			   mhi_read_reg_field(mhi_cntrl, base, MHISTATUS,
++					      MHISTATUS_READY_MASK,
++					      MHISTATUS_READY_SHIFT, &ready) ||
++			   (!reset && ready),
++			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
++
++	/* Check if device entered error state */
++	if (MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state)) {
++		dev_err(dev, "Device link is not accessible\n");
++		return -EIO;
++	}
++
++	/* Timeout if device did not transition to ready state */
++	if (reset || !ready) {
++		dev_err(dev, "Device Ready timeout\n");
++		return -ETIMEDOUT;
++	}
++
++	dev_dbg(dev, "Device in READY State\n");
++	write_lock_irq(&mhi_cntrl->pm_lock);
++	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_POR);
++	mhi_cntrl->dev_state = MHI_STATE_READY;
++	write_unlock_irq(&mhi_cntrl->pm_lock);
++
++	if (cur_state != MHI_PM_POR) {
++		dev_err(dev, "Error moving to state %s from %s\n",
++			to_mhi_pm_state_str(MHI_PM_POR),
++			to_mhi_pm_state_str(cur_state));
++		return -EIO;
++	}
++
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
++		dev_err(dev, "Device registers not accessible\n");
++		goto error_mmio;
++	}
++
++	/* Configure MMIO registers */
++	ret = mhi_init_mmio(mhi_cntrl);
++	if (ret) {
++		dev_err(dev, "Error configuring MMIO registers\n");
++		goto error_mmio;
++	}
++
++	/* Add elements to all SW event rings */
++	mhi_event = mhi_cntrl->mhi_event;
++	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
++		struct mhi_ring *ring = &mhi_event->ring;
++
++		/* Skip if this is an offload or HW event */
++		if (mhi_event->offload_ev || mhi_event->hw_ring)
++			continue;
++
++		ring->wp = ring->base + ring->len - ring->el_size;
++		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
++		/* Update all cores */
++		smp_wmb();
++
++		/* Ring the event ring db */
++		spin_lock_irq(&mhi_event->lock);
++		mhi_ring_er_db(mhi_event);
++		spin_unlock_irq(&mhi_event->lock);
++	}
++
++	/* Set MHI to M0 state */
++	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++
++	return 0;
++
++error_mmio:
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++
++	return -EIO;
++}
++
++int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl)
++{
++	enum mhi_pm_state cur_state;
++	struct mhi_chan *mhi_chan;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	int i;
++
++	write_lock_irq(&mhi_cntrl->pm_lock);
++	mhi_cntrl->dev_state = MHI_STATE_M0;
++	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M0);
++	write_unlock_irq(&mhi_cntrl->pm_lock);
++	if (unlikely(cur_state != MHI_PM_M0)) {
++		dev_err(dev, "Unable to transition to M0 state\n");
++		return -EIO;
++	}
++	mhi_cntrl->M0++;
++
++	/* Wake up the device */
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	mhi_cntrl->wake_get(mhi_cntrl, true);
++
++	/* Ring all event rings and CMD ring only if we're in mission mode */
++	if (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) {
++		struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
++		struct mhi_cmd *mhi_cmd =
++			&mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
++
++		for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
++			if (mhi_event->offload_ev)
++				continue;
++
++			spin_lock_irq(&mhi_event->lock);
++			mhi_ring_er_db(mhi_event);
++			spin_unlock_irq(&mhi_event->lock);
++		}
++
++		/* Only ring primary cmd ring if ring is not empty */
++		spin_lock_irq(&mhi_cmd->lock);
++		if (mhi_cmd->ring.rp != mhi_cmd->ring.wp)
++			mhi_ring_cmd_db(mhi_cntrl, mhi_cmd);
++		spin_unlock_irq(&mhi_cmd->lock);
++	}
++
++	/* Ring channel DB registers */
++	mhi_chan = mhi_cntrl->mhi_chan;
++	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
++		struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
++
++		if (mhi_chan->db_cfg.reset_req) {
++			write_lock_irq(&mhi_chan->lock);
++			mhi_chan->db_cfg.db_mode = true;
++			write_unlock_irq(&mhi_chan->lock);
++		}
++
++		read_lock_irq(&mhi_chan->lock);
++
++		/* Only ring DB if ring is not empty */
++		if (tre_ring->base && tre_ring->wp  != tre_ring->rp &&
++		    mhi_chan->ch_state == MHI_CH_STATE_ENABLED)
++			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
++		read_unlock_irq(&mhi_chan->lock);
++	}
++
++	mhi_cntrl->wake_put(mhi_cntrl, false);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++	wake_up_all(&mhi_cntrl->state_event);
++
++	return 0;
++}
++
++/*
++ * After receiving the MHI state change event from the device indicating the
++ * transition to M1 state, the host can transition the device to M2 state
++ * for keeping it in low power state.
++ */
++void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl)
++{
++	enum mhi_pm_state state;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++
++	write_lock_irq(&mhi_cntrl->pm_lock);
++	state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M2);
++	if (state == MHI_PM_M2) {
++		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M2);
++		mhi_cntrl->dev_state = MHI_STATE_M2;
++
++		write_unlock_irq(&mhi_cntrl->pm_lock);
++
++		mhi_cntrl->M2++;
++		wake_up_all(&mhi_cntrl->state_event);
++
++		/* If there are any pending resources, exit M2 immediately */
++		if (unlikely(atomic_read(&mhi_cntrl->pending_pkts) ||
++			     atomic_read(&mhi_cntrl->dev_wake))) {
++			dev_dbg(dev,
++				"Exiting M2, pending_pkts: %d dev_wake: %d\n",
++				atomic_read(&mhi_cntrl->pending_pkts),
++				atomic_read(&mhi_cntrl->dev_wake));
++			read_lock_bh(&mhi_cntrl->pm_lock);
++			mhi_cntrl->wake_get(mhi_cntrl, true);
++			mhi_cntrl->wake_put(mhi_cntrl, true);
++			read_unlock_bh(&mhi_cntrl->pm_lock);
++		} else {
++			mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_IDLE);
++		}
++	} else {
++		write_unlock_irq(&mhi_cntrl->pm_lock);
++	}
++}
++
++/* MHI M3 completion handler */
++int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl)
++{
++	enum mhi_pm_state state;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++
++	write_lock_irq(&mhi_cntrl->pm_lock);
++	mhi_cntrl->dev_state = MHI_STATE_M3;
++	state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3);
++	write_unlock_irq(&mhi_cntrl->pm_lock);
++	if (state != MHI_PM_M3) {
++		dev_err(dev, "Unable to transition to M3 state\n");
++		return -EIO;
++	}
++
++	mhi_cntrl->M3++;
++	wake_up_all(&mhi_cntrl->state_event);
++
++	return 0;
++}
++
++/* Handle device Mission Mode transition */
++static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
++{
++	struct mhi_event *mhi_event;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	enum mhi_ee_type current_ee = mhi_cntrl->ee;
++	int i, ret;
++
++	dev_dbg(dev, "Processing Mission Mode transition\n");
++
++	write_lock_irq(&mhi_cntrl->pm_lock);
++	if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
++		mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
++	write_unlock_irq(&mhi_cntrl->pm_lock);
++
++	if (!MHI_IN_MISSION_MODE(mhi_cntrl->ee))
++		return -EIO;
++
++	wake_up_all(&mhi_cntrl->state_event);
++
++	device_for_each_child(&mhi_cntrl->mhi_dev->dev, &current_ee,
++			      mhi_destroy_device);
++	mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_MISSION_MODE);
++
++	/* Force MHI to be in M0 state before continuing */
++	ret = __mhi_device_get_sync(mhi_cntrl);
++	if (ret)
++		return ret;
++
++	read_lock_bh(&mhi_cntrl->pm_lock);
++
++	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
++		ret = -EIO;
++		goto error_mission_mode;
++	}
++
++	/* Add elements to all HW event rings */
++	mhi_event = mhi_cntrl->mhi_event;
++	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
++		struct mhi_ring *ring = &mhi_event->ring;
++
++		if (mhi_event->offload_ev || !mhi_event->hw_ring)
++			continue;
++
++		ring->wp = ring->base + ring->len - ring->el_size;
++		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
++		/* Update to all cores */
++		smp_wmb();
++
++		spin_lock_irq(&mhi_event->lock);
++		if (MHI_DB_ACCESS_VALID(mhi_cntrl))
++			mhi_ring_er_db(mhi_event);
++		spin_unlock_irq(&mhi_event->lock);
++	}
++
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++
++	/*
++	 * The MHI devices are only created when the client device switches its
++	 * Execution Environment (EE) to either SBL or AMSS states
++	 */
++	mhi_create_devices(mhi_cntrl);
++
++	read_lock_bh(&mhi_cntrl->pm_lock);
++
++error_mission_mode:
++	mhi_cntrl->wake_put(mhi_cntrl, false);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++
++	return ret;
++}
++
++/* Handle SYS_ERR and Shutdown transitions */
++static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
++				      enum mhi_pm_state transition_state)
++{
++	enum mhi_pm_state cur_state, prev_state;
++	struct mhi_event *mhi_event;
++	struct mhi_cmd_ctxt *cmd_ctxt;
++	struct mhi_cmd *mhi_cmd;
++	struct mhi_event_ctxt *er_ctxt;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	int ret, i;
++
++	dev_dbg(dev, "Transitioning from PM state: %s to: %s\n",
++		to_mhi_pm_state_str(mhi_cntrl->pm_state),
++		to_mhi_pm_state_str(transition_state));
++
++	/* We must notify MHI control driver so it can clean up first */
++	if (transition_state == MHI_PM_SYS_ERR_PROCESS)
++		mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_SYS_ERROR);
++
++	mutex_lock(&mhi_cntrl->pm_mutex);
++	write_lock_irq(&mhi_cntrl->pm_lock);
++	prev_state = mhi_cntrl->pm_state;
++	cur_state = mhi_tryset_pm_state(mhi_cntrl, transition_state);
++	if (cur_state == transition_state) {
++		mhi_cntrl->ee = MHI_EE_DISABLE_TRANSITION;
++		mhi_cntrl->dev_state = MHI_STATE_RESET;
++	}
++	write_unlock_irq(&mhi_cntrl->pm_lock);
++
++	/* Wake up threads waiting for state transition */
++	wake_up_all(&mhi_cntrl->state_event);
++
++	if (cur_state != transition_state) {
++		dev_err(dev, "Failed to transition to state: %s from: %s\n",
++			to_mhi_pm_state_str(transition_state),
++			to_mhi_pm_state_str(cur_state));
++		mutex_unlock(&mhi_cntrl->pm_mutex);
++		return;
++	}
++
++	/* Trigger MHI RESET so that the device will not access host memory */
++	if (MHI_REG_ACCESS_VALID(prev_state)) {
++		u32 in_reset = -1;
++		unsigned long timeout = msecs_to_jiffies(mhi_cntrl->timeout_ms);
++
++		dev_dbg(dev, "Triggering MHI Reset in device\n");
++		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
++
++		/* Wait for the reset bit to be cleared by the device */
++		ret = wait_event_timeout(mhi_cntrl->state_event,
++					 mhi_read_reg_field(mhi_cntrl,
++							    mhi_cntrl->regs,
++							    MHICTRL,
++							    MHICTRL_RESET_MASK,
++							    MHICTRL_RESET_SHIFT,
++							    &in_reset) ||
++					!in_reset, timeout);
++		if ((!ret || in_reset) && cur_state == MHI_PM_SYS_ERR_PROCESS) {
++			dev_err(dev, "Device failed to exit MHI Reset state\n");
++			mutex_unlock(&mhi_cntrl->pm_mutex);
++			return;
++		}
++
++		/*
++		 * Device will clear BHI_INTVEC as a part of RESET processing,
++		 * hence re-program it
++		 */
++		mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
++	}
++
++	dev_dbg(dev,
++		 "Waiting for all pending event ring processing to complete\n");
++	mhi_event = mhi_cntrl->mhi_event;
++	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
++		if (mhi_event->offload_ev)
++			continue;
++		tasklet_kill(&mhi_event->task);
++	}
++
++	/* Release lock and wait for all pending threads to complete */
++	mutex_unlock(&mhi_cntrl->pm_mutex);
++	dev_dbg(dev, "Waiting for all pending threads to complete\n");
++	wake_up_all(&mhi_cntrl->state_event);
++
++	dev_dbg(dev, "Reset all active channels and remove MHI devices\n");
++	device_for_each_child(mhi_cntrl->cntrl_dev, NULL, mhi_destroy_device);
++
++	mutex_lock(&mhi_cntrl->pm_mutex);
++
++	WARN_ON(atomic_read(&mhi_cntrl->dev_wake));
++	WARN_ON(atomic_read(&mhi_cntrl->pending_pkts));
++
++	/* Reset the ev rings and cmd rings */
++	dev_dbg(dev, "Resetting EV CTXT and CMD CTXT\n");
++	mhi_cmd = mhi_cntrl->mhi_cmd;
++	cmd_ctxt = mhi_cntrl->mhi_ctxt->cmd_ctxt;
++	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
++		struct mhi_ring *ring = &mhi_cmd->ring;
++
++		ring->rp = ring->base;
++		ring->wp = ring->base;
++		cmd_ctxt->rp = cmd_ctxt->rbase;
++		cmd_ctxt->wp = cmd_ctxt->rbase;
++	}
++
++	mhi_event = mhi_cntrl->mhi_event;
++	er_ctxt = mhi_cntrl->mhi_ctxt->er_ctxt;
++	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++,
++		     mhi_event++) {
++		struct mhi_ring *ring = &mhi_event->ring;
++
++		/* Skip offload events */
++		if (mhi_event->offload_ev)
++			continue;
++
++		ring->rp = ring->base;
++		ring->wp = ring->base;
++		er_ctxt->rp = er_ctxt->rbase;
++		er_ctxt->wp = er_ctxt->rbase;
++	}
++
++	if (cur_state == MHI_PM_SYS_ERR_PROCESS) {
++		mhi_ready_state_transition(mhi_cntrl);
++	} else {
++		/* Move to disable state */
++		write_lock_irq(&mhi_cntrl->pm_lock);
++		cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_DISABLE);
++		write_unlock_irq(&mhi_cntrl->pm_lock);
++		if (unlikely(cur_state != MHI_PM_DISABLE))
++			dev_err(dev, "Error moving from PM state: %s to: %s\n",
++				to_mhi_pm_state_str(cur_state),
++				to_mhi_pm_state_str(MHI_PM_DISABLE));
++	}
++
++	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
++		to_mhi_pm_state_str(mhi_cntrl->pm_state),
++		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
++
++	mutex_unlock(&mhi_cntrl->pm_mutex);
++}
++
++/* Queue a new work item and schedule work */
++int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
++			       enum dev_st_transition state)
++{
++	struct state_transition *item = kmalloc(sizeof(*item), GFP_ATOMIC);
++	unsigned long flags;
++
++	if (!item)
++		return -ENOMEM;
++
++	item->state = state;
++	spin_lock_irqsave(&mhi_cntrl->transition_lock, flags);
++	list_add_tail(&item->node, &mhi_cntrl->transition_list);
++	spin_unlock_irqrestore(&mhi_cntrl->transition_lock, flags);
++
++	schedule_work(&mhi_cntrl->st_worker);
++
++	return 0;
++}
++
++/* SYS_ERR worker */
++void mhi_pm_sys_err_handler(struct mhi_controller *mhi_cntrl)
++{
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++
++	/* skip if controller supports RDDM */
++	if (mhi_cntrl->rddm_image) {
++		dev_dbg(dev, "Controller supports RDDM, skip SYS_ERROR\n");
++		return;
++	}
++
++	mhi_queue_state_transition(mhi_cntrl, DEV_ST_TRANSITION_SYS_ERR);
++}
++
++/* Device State Transition worker */
++void mhi_pm_st_worker(struct work_struct *work)
++{
++	struct state_transition *itr, *tmp;
++	LIST_HEAD(head);
++	struct mhi_controller *mhi_cntrl = container_of(work,
++							struct mhi_controller,
++							st_worker);
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++
++	spin_lock_irq(&mhi_cntrl->transition_lock);
++	list_splice_tail_init(&mhi_cntrl->transition_list, &head);
++	spin_unlock_irq(&mhi_cntrl->transition_lock);
++
++	list_for_each_entry_safe(itr, tmp, &head, node) {
++		list_del(&itr->node);
++		dev_dbg(dev, "Handling state transition: %s\n",
++			TO_DEV_STATE_TRANS_STR(itr->state));
++
++		switch (itr->state) {
++		case DEV_ST_TRANSITION_PBL:
++			write_lock_irq(&mhi_cntrl->pm_lock);
++			if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
++				mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
++			write_unlock_irq(&mhi_cntrl->pm_lock);
++			if (MHI_IN_PBL(mhi_cntrl->ee))
++				mhi_fw_load_handler(mhi_cntrl);
++			break;
++		case DEV_ST_TRANSITION_SBL:
++			write_lock_irq(&mhi_cntrl->pm_lock);
++			mhi_cntrl->ee = MHI_EE_SBL;
++			write_unlock_irq(&mhi_cntrl->pm_lock);
++			/*
++			 * The MHI devices are only created when the client
++			 * device switches its Execution Environment (EE) to
++			 * either SBL or AMSS states
++			 */
++			mhi_create_devices(mhi_cntrl);
++			break;
++		case DEV_ST_TRANSITION_MISSION_MODE:
++			mhi_pm_mission_mode_transition(mhi_cntrl);
++			break;
++		case DEV_ST_TRANSITION_READY:
++			mhi_ready_state_transition(mhi_cntrl);
++			break;
++		case DEV_ST_TRANSITION_SYS_ERR:
++			mhi_pm_disable_transition
++				(mhi_cntrl, MHI_PM_SYS_ERR_PROCESS);
++			break;
++		case DEV_ST_TRANSITION_DISABLE:
++			mhi_pm_disable_transition
++				(mhi_cntrl, MHI_PM_SHUTDOWN_PROCESS);
++			break;
++		default:
++			break;
++		}
++		kfree(itr);
++	}
++}
++
++int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
++{
++	struct mhi_chan *itr, *tmp;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	enum mhi_pm_state new_state;
++	int ret;
++
++	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
++		return -EINVAL;
++
++	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
++		return -EIO;
++
++	/* Return busy if there are any pending resources */
++	if (atomic_read(&mhi_cntrl->dev_wake) ||
++	    atomic_read(&mhi_cntrl->pending_pkts))
++		return -EBUSY;
++
++	/* Take MHI out of M2 state */
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	mhi_cntrl->wake_get(mhi_cntrl, false);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++
++	ret = wait_event_timeout(mhi_cntrl->state_event,
++				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
++				 mhi_cntrl->dev_state == MHI_STATE_M1 ||
++				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
++				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
++
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	mhi_cntrl->wake_put(mhi_cntrl, false);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++
++	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
++		dev_err(dev,
++			"Could not enter M0/M1 state");
++		return -EIO;
++	}
++
++	write_lock_irq(&mhi_cntrl->pm_lock);
++
++	if (atomic_read(&mhi_cntrl->dev_wake) ||
++	    atomic_read(&mhi_cntrl->pending_pkts)) {
++		write_unlock_irq(&mhi_cntrl->pm_lock);
++		return -EBUSY;
++	}
++
++	dev_info(dev, "Allowing M3 transition\n");
++	new_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_ENTER);
++	if (new_state != MHI_PM_M3_ENTER) {
++		write_unlock_irq(&mhi_cntrl->pm_lock);
++		dev_err(dev,
++			"Error setting to PM state: %s from: %s\n",
++			to_mhi_pm_state_str(MHI_PM_M3_ENTER),
++			to_mhi_pm_state_str(mhi_cntrl->pm_state));
++		return -EIO;
++	}
++
++	/* Set MHI to M3 and wait for completion */
++	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
++	write_unlock_irq(&mhi_cntrl->pm_lock);
++	dev_info(dev, "Wait for M3 completion\n");
++
++	ret = wait_event_timeout(mhi_cntrl->state_event,
++				 mhi_cntrl->dev_state == MHI_STATE_M3 ||
++				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
++				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
++
++	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
++		dev_err(dev,
++			"Did not enter M3 state, MHI state: %s, PM state: %s\n",
++			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
++			to_mhi_pm_state_str(mhi_cntrl->pm_state));
++		return -EIO;
++	}
++
++	/* Notify clients about entering LPM */
++	list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
++		mutex_lock(&itr->mutex);
++		if (itr->mhi_dev)
++			mhi_notify(itr->mhi_dev, MHI_CB_LPM_ENTER);
++		mutex_unlock(&itr->mutex);
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(mhi_pm_suspend);
++
++int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
++{
++	struct mhi_chan *itr, *tmp;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	enum mhi_pm_state cur_state;
++	int ret;
++
++	dev_info(dev, "Entered with PM state: %s, MHI state: %s\n",
++		 to_mhi_pm_state_str(mhi_cntrl->pm_state),
++		 TO_MHI_STATE_STR(mhi_cntrl->dev_state));
++
++	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
++		return 0;
++
++	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
++		return -EIO;
++
++	/* Notify clients about exiting LPM */
++	list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
++		mutex_lock(&itr->mutex);
++		if (itr->mhi_dev)
++			mhi_notify(itr->mhi_dev, MHI_CB_LPM_EXIT);
++		mutex_unlock(&itr->mutex);
++	}
++
++	write_lock_irq(&mhi_cntrl->pm_lock);
++	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_EXIT);
++	if (cur_state != MHI_PM_M3_EXIT) {
++		write_unlock_irq(&mhi_cntrl->pm_lock);
++		dev_info(dev,
++			 "Error setting to PM state: %s from: %s\n",
++			 to_mhi_pm_state_str(MHI_PM_M3_EXIT),
++			 to_mhi_pm_state_str(mhi_cntrl->pm_state));
++		return -EIO;
++	}
++
++	/* Set MHI to M0 and wait for completion */
++	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
++	write_unlock_irq(&mhi_cntrl->pm_lock);
++
++	ret = wait_event_timeout(mhi_cntrl->state_event,
++				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
++				 mhi_cntrl->dev_state == MHI_STATE_M2 ||
++				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
++				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
++
++	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
++		dev_err(dev,
++			"Did not enter M0 state, MHI state: %s, PM state: %s\n",
++			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
++			to_mhi_pm_state_str(mhi_cntrl->pm_state));
++		return -EIO;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(mhi_pm_resume);
++
++int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
++{
++	int ret;
++
++	/* Wake up the device */
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	mhi_cntrl->wake_get(mhi_cntrl, true);
++	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
++		mhi_trigger_resume(mhi_cntrl);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++
++	ret = wait_event_timeout(mhi_cntrl->state_event,
++				 mhi_cntrl->pm_state == MHI_PM_M0 ||
++				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
++				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
++
++	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
++		read_lock_bh(&mhi_cntrl->pm_lock);
++		mhi_cntrl->wake_put(mhi_cntrl, false);
++		read_unlock_bh(&mhi_cntrl->pm_lock);
++		return -EIO;
++	}
++
++	return 0;
++}
++
++/* Assert device wake db */
++static void mhi_assert_dev_wake(struct mhi_controller *mhi_cntrl, bool force)
++{
++	unsigned long flags;
++
++	/*
++	 * If force flag is set, then increment the wake count value and
++	 * ring wake db
++	 */
++	if (unlikely(force)) {
++		spin_lock_irqsave(&mhi_cntrl->wlock, flags);
++		atomic_inc(&mhi_cntrl->dev_wake);
++		if (MHI_WAKE_DB_FORCE_SET_VALID(mhi_cntrl->pm_state) &&
++		    !mhi_cntrl->wake_set) {
++			mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1);
++			mhi_cntrl->wake_set = true;
++		}
++		spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
++	} else {
++		/*
++		 * If resources are already requested, then just increment
++		 * the wake count value and return
++		 */
++		if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, 1, 0)))
++			return;
++
++		spin_lock_irqsave(&mhi_cntrl->wlock, flags);
++		if ((atomic_inc_return(&mhi_cntrl->dev_wake) == 1) &&
++		    MHI_WAKE_DB_SET_VALID(mhi_cntrl->pm_state) &&
++		    !mhi_cntrl->wake_set) {
++			mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1);
++			mhi_cntrl->wake_set = true;
++		}
++		spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
++	}
++}
++
++/* De-assert device wake db */
++static void mhi_deassert_dev_wake(struct mhi_controller *mhi_cntrl,
++				  bool override)
++{
++	unsigned long flags;
++
++	/*
++	 * Only continue if there is a single resource, else just decrement
++	 * and return
++	 */
++	if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, -1, 1)))
++		return;
++
++	spin_lock_irqsave(&mhi_cntrl->wlock, flags);
++	if ((atomic_dec_return(&mhi_cntrl->dev_wake) == 0) &&
++	    MHI_WAKE_DB_CLEAR_VALID(mhi_cntrl->pm_state) && !override &&
++	    mhi_cntrl->wake_set) {
++		mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 0);
++		mhi_cntrl->wake_set = false;
++	}
++	spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
++}
++
++int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
++{
++	enum mhi_state state;
++	enum mhi_ee_type current_ee;
++	enum dev_st_transition next_state;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	u32 val;
++	int ret;
++
++	dev_info(dev, "Requested to power ON\n");
++
++	if (mhi_cntrl->nr_irqs < 1)
++		return -EINVAL;
++
++	/* Supply default wake routines if not provided by controller driver */
++	if (!mhi_cntrl->wake_get || !mhi_cntrl->wake_put ||
++	    !mhi_cntrl->wake_toggle) {
++		mhi_cntrl->wake_get = mhi_assert_dev_wake;
++		mhi_cntrl->wake_put = mhi_deassert_dev_wake;
++		mhi_cntrl->wake_toggle = (mhi_cntrl->db_access & MHI_PM_M2) ?
++			mhi_toggle_dev_wake_nop : mhi_toggle_dev_wake;
++	}
++
++	mutex_lock(&mhi_cntrl->pm_mutex);
++	mhi_cntrl->pm_state = MHI_PM_DISABLE;
++
++	if (!mhi_cntrl->pre_init) {
++		/* Setup device context */
++		ret = mhi_init_dev_ctxt(mhi_cntrl);
++		if (ret)
++			goto error_dev_ctxt;
++	}
++
++	ret = mhi_init_irq_setup(mhi_cntrl);
++	if (ret)
++		goto error_setup_irq;
++
++	/* Setup BHI offset & INTVEC */
++	write_lock_irq(&mhi_cntrl->pm_lock);
++	ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIOFF, &val);
++	if (ret) {
++		write_unlock_irq(&mhi_cntrl->pm_lock);
++		goto error_bhi_offset;
++	}
++
++	mhi_cntrl->bhi = mhi_cntrl->regs + val;
++
++	/* Setup BHIE offset */
++	if (mhi_cntrl->fbc_download) {
++		ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF, &val);
++		if (ret) {
++			write_unlock_irq(&mhi_cntrl->pm_lock);
++			dev_err(dev, "Error reading BHIE offset\n");
++			goto error_bhi_offset;
++		}
++
++		mhi_cntrl->bhie = mhi_cntrl->regs + val;
++	}
++
++	mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
++	mhi_cntrl->pm_state = MHI_PM_POR;
++	mhi_cntrl->ee = MHI_EE_MAX;
++	current_ee = mhi_get_exec_env(mhi_cntrl);
++	write_unlock_irq(&mhi_cntrl->pm_lock);
++
++	/* Confirm that the device is in valid exec env */
++	if (!MHI_IN_PBL(current_ee) && current_ee != MHI_EE_AMSS) {
++		dev_err(dev, "Not a valid EE for power on\n");
++		ret = -EIO;
++		goto error_bhi_offset;
++	}
++
++	state = mhi_get_mhi_state(mhi_cntrl);
++	if (state == MHI_STATE_SYS_ERR) {
++		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
++		ret = wait_event_timeout(mhi_cntrl->state_event,
++				MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
++					mhi_read_reg_field(mhi_cntrl,
++							   mhi_cntrl->regs,
++							   MHICTRL,
++							   MHICTRL_RESET_MASK,
++							   MHICTRL_RESET_SHIFT,
++							   &val) ||
++					!val,
++				msecs_to_jiffies(mhi_cntrl->timeout_ms));
++		if (!ret) {
++			ret = -EIO;
++			dev_info(dev, "Failed to reset MHI due to syserr state\n");
++			goto error_bhi_offset;
++		}
++
++		/*
++		 * device cleares INTVEC as part of RESET processing,
++		 * re-program it
++		 */
++		mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
++	}
++
++	/* Transition to next state */
++	next_state = MHI_IN_PBL(current_ee) ?
++		DEV_ST_TRANSITION_PBL : DEV_ST_TRANSITION_READY;
++
++	mhi_queue_state_transition(mhi_cntrl, next_state);
++
++	mutex_unlock(&mhi_cntrl->pm_mutex);
++
++	dev_info(dev, "Power on setup success\n");
++
++	return 0;
++
++error_bhi_offset:
++	mhi_deinit_free_irq(mhi_cntrl);
++
++error_setup_irq:
++	if (!mhi_cntrl->pre_init)
++		mhi_deinit_dev_ctxt(mhi_cntrl);
++
++error_dev_ctxt:
++	mutex_unlock(&mhi_cntrl->pm_mutex);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(mhi_async_power_up);
++
++void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
++{
++	enum mhi_pm_state cur_state;
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++
++	/* If it's not a graceful shutdown, force MHI to linkdown state */
++	if (!graceful) {
++		mutex_lock(&mhi_cntrl->pm_mutex);
++		write_lock_irq(&mhi_cntrl->pm_lock);
++		cur_state = mhi_tryset_pm_state(mhi_cntrl,
++						MHI_PM_LD_ERR_FATAL_DETECT);
++		write_unlock_irq(&mhi_cntrl->pm_lock);
++		mutex_unlock(&mhi_cntrl->pm_mutex);
++		if (cur_state != MHI_PM_LD_ERR_FATAL_DETECT)
++			dev_dbg(dev, "Failed to move to state: %s from: %s\n",
++				to_mhi_pm_state_str(MHI_PM_LD_ERR_FATAL_DETECT),
++				to_mhi_pm_state_str(mhi_cntrl->pm_state));
++	}
++
++	mhi_queue_state_transition(mhi_cntrl, DEV_ST_TRANSITION_DISABLE);
++
++	/* Wait for shutdown to complete */
++	flush_work(&mhi_cntrl->st_worker);
++
++	mhi_deinit_free_irq(mhi_cntrl);
++
++	if (!mhi_cntrl->pre_init) {
++		/* Free all allocated resources */
++		if (mhi_cntrl->fbc_image) {
++			mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
++			mhi_cntrl->fbc_image = NULL;
++		}
++		mhi_deinit_dev_ctxt(mhi_cntrl);
++	}
++}
++EXPORT_SYMBOL_GPL(mhi_power_down);
++
++int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
++{
++	int ret = mhi_async_power_up(mhi_cntrl);
++
++	if (ret)
++		return ret;
++
++	wait_event_timeout(mhi_cntrl->state_event,
++			   MHI_IN_MISSION_MODE(mhi_cntrl->ee) ||
++			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
++			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
++
++	ret = (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -ETIMEDOUT;
++	if (ret)
++		mhi_power_down(mhi_cntrl, false);
++
++	return ret;
++}
++EXPORT_SYMBOL(mhi_sync_power_up);
++
++int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
++{
++	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	int ret;
++
++	/* Check if device is already in RDDM */
++	if (mhi_cntrl->ee == MHI_EE_RDDM)
++		return 0;
++
++	dev_dbg(dev, "Triggering SYS_ERR to force RDDM state\n");
++	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
++
++	/* Wait for RDDM event */
++	ret = wait_event_timeout(mhi_cntrl->state_event,
++				 mhi_cntrl->ee == MHI_EE_RDDM,
++				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
++	ret = ret ? 0 : -EIO;
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(mhi_force_rddm_mode);
++
++void mhi_device_get(struct mhi_device *mhi_dev)
++{
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++
++	mhi_dev->dev_wake++;
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
++		mhi_trigger_resume(mhi_cntrl);
++
++	mhi_cntrl->wake_get(mhi_cntrl, true);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++}
++EXPORT_SYMBOL_GPL(mhi_device_get);
++
++int mhi_device_get_sync(struct mhi_device *mhi_dev)
++{
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++	int ret;
++
++	ret = __mhi_device_get_sync(mhi_cntrl);
++	if (!ret)
++		mhi_dev->dev_wake++;
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(mhi_device_get_sync);
++
++void mhi_device_put(struct mhi_device *mhi_dev)
++{
++	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
++
++	mhi_dev->dev_wake--;
++	read_lock_bh(&mhi_cntrl->pm_lock);
++	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
++		mhi_trigger_resume(mhi_cntrl);
++
++	mhi_cntrl->wake_put(mhi_cntrl, false);
++	read_unlock_bh(&mhi_cntrl->pm_lock);
++}
++EXPORT_SYMBOL_GPL(mhi_device_put);
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 4b1641fe30dba..fcfe4d16cc149 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -2078,6 +2078,8 @@ static int sysc_reset(struct sysc *ddata)
+ 		sysc_val = sysc_read_sysconfig(ddata);
+ 		sysc_val |= sysc_mask;
+ 		sysc_write(ddata, sysc_offset, sysc_val);
++		/* Flush posted write */
++		sysc_val = sysc_read_sysconfig(ddata);
+ 	}
+ 
+ 	if (ddata->cfg.srst_udelay)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index ffd8f5601e28a..e25c3387bcf87 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -1517,15 +1517,15 @@ static int amdgpu_cs_wait_all_fences(struct amdgpu_device *adev,
+ 			continue;
+ 
+ 		r = dma_fence_wait_timeout(fence, true, timeout);
++		if (r > 0 && fence->error)
++			r = fence->error;
++
+ 		dma_fence_put(fence);
+ 		if (r < 0)
+ 			return r;
+ 
+ 		if (r == 0)
+ 			break;
+-
+-		if (fence->error)
+-			return fence->error;
+ 	}
+ 
+ 	memset(wait, 0, sizeof(*wait));
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 8445bb7ae06ab..3b4724d60868f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -2155,6 +2155,7 @@ struct amdgpu_bo_va *amdgpu_vm_bo_add(struct amdgpu_device *adev,
+ 	amdgpu_vm_bo_base_init(&bo_va->base, vm, bo);
+ 
+ 	bo_va->ref_count = 1;
++	bo_va->last_pt_update = dma_fence_get_stub();
+ 	INIT_LIST_HEAD(&bo_va->valids);
+ 	INIT_LIST_HEAD(&bo_va->invalids);
+ 
+@@ -2867,7 +2868,8 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 		vm->update_funcs = &amdgpu_vm_cpu_funcs;
+ 	else
+ 		vm->update_funcs = &amdgpu_vm_sdma_funcs;
+-	vm->last_update = NULL;
++
++	vm->last_update = dma_fence_get_stub();
+ 	vm->last_unlocked = dma_fence_get_stub();
+ 
+ 	mutex_init(&vm->eviction_lock);
+@@ -3042,7 +3044,7 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 		vm->update_funcs = &amdgpu_vm_sdma_funcs;
+ 	}
+ 	dma_fence_put(vm->last_update);
+-	vm->last_update = NULL;
++	vm->last_update = dma_fence_get_stub();
+ 	vm->is_compute_context = true;
+ 
+ 	if (vm->pasid) {
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 7b69f81444ebd..e40321d798981 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -1010,21 +1010,21 @@ static const struct panel_desc auo_g104sn02 = {
+ 	},
+ };
+ 
+-static const struct drm_display_mode auo_g121ean01_mode = {
+-	.clock = 66700,
+-	.hdisplay = 1280,
+-	.hsync_start = 1280 + 58,
+-	.hsync_end = 1280 + 58 + 8,
+-	.htotal = 1280 + 58 + 8 + 70,
+-	.vdisplay = 800,
+-	.vsync_start = 800 + 6,
+-	.vsync_end = 800 + 6 + 4,
+-	.vtotal = 800 + 6 + 4 + 10,
++static const struct display_timing auo_g121ean01_timing = {
++	.pixelclock = { 60000000, 74400000, 90000000 },
++	.hactive = { 1280, 1280, 1280 },
++	.hfront_porch = { 20, 50, 100 },
++	.hback_porch = { 20, 50, 100 },
++	.hsync_len = { 30, 100, 200 },
++	.vactive = { 800, 800, 800 },
++	.vfront_porch = { 2, 10, 25 },
++	.vback_porch = { 2, 10, 25 },
++	.vsync_len = { 4, 18, 50 },
+ };
+ 
+ static const struct panel_desc auo_g121ean01 = {
+-	.modes = &auo_g121ean01_mode,
+-	.num_modes = 1,
++	.timings = &auo_g121ean01_timing,
++	.num_timings = 1,
+ 	.bpc = 8,
+ 	.size = {
+ 		.width = 261,
+diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
+index a78b60b62caf2..87a57e5588a28 100644
+--- a/drivers/gpu/drm/radeon/radeon_cs.c
++++ b/drivers/gpu/drm/radeon/radeon_cs.c
+@@ -271,7 +271,8 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
+ {
+ 	struct drm_radeon_cs *cs = data;
+ 	uint64_t *chunk_array_ptr;
+-	unsigned size, i;
++	u64 size;
++	unsigned i;
+ 	u32 ring = RADEON_CS_RING_GFX;
+ 	s32 priority = 0;
+ 
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 2b658d820b800..6712d99ad80da 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -582,6 +582,7 @@
+ #define USB_DEVICE_ID_UGCI_FIGHTING	0x0030
+ 
+ #define USB_VENDOR_ID_HP		0x03f0
++#define USB_PRODUCT_ID_HP_ELITE_PRESENTER_MOUSE_464A		0x464a
+ #define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0A4A	0x0a4a
+ #define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A	0x0b4a
+ #define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE		0x134a
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 9f1fcbea19eb7..4229e5de06745 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -96,6 +96,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD_A096), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD_A293), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0A4A), HID_QUIRK_ALWAYS_POLL },
++	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_ELITE_PRESENTER_MOUSE_464A), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_094A), HID_QUIRK_ALWAYS_POLL },
+diff --git a/drivers/i2c/busses/i2c-bcm-iproc.c b/drivers/i2c/busses/i2c-bcm-iproc.c
+index 35baca2f62c4e..a524d2cd11421 100644
+--- a/drivers/i2c/busses/i2c-bcm-iproc.c
++++ b/drivers/i2c/busses/i2c-bcm-iproc.c
+@@ -242,13 +242,14 @@ static inline u32 iproc_i2c_rd_reg(struct bcm_iproc_i2c_dev *iproc_i2c,
+ 				   u32 offset)
+ {
+ 	u32 val;
++	unsigned long flags;
+ 
+ 	if (iproc_i2c->idm_base) {
+-		spin_lock(&iproc_i2c->idm_lock);
++		spin_lock_irqsave(&iproc_i2c->idm_lock, flags);
+ 		writel(iproc_i2c->ape_addr_mask,
+ 		       iproc_i2c->idm_base + IDM_CTRL_DIRECT_OFFSET);
+ 		val = readl(iproc_i2c->base + offset);
+-		spin_unlock(&iproc_i2c->idm_lock);
++		spin_unlock_irqrestore(&iproc_i2c->idm_lock, flags);
+ 	} else {
+ 		val = readl(iproc_i2c->base + offset);
+ 	}
+@@ -259,12 +260,14 @@ static inline u32 iproc_i2c_rd_reg(struct bcm_iproc_i2c_dev *iproc_i2c,
+ static inline void iproc_i2c_wr_reg(struct bcm_iproc_i2c_dev *iproc_i2c,
+ 				    u32 offset, u32 val)
+ {
++	unsigned long flags;
++
+ 	if (iproc_i2c->idm_base) {
+-		spin_lock(&iproc_i2c->idm_lock);
++		spin_lock_irqsave(&iproc_i2c->idm_lock, flags);
+ 		writel(iproc_i2c->ape_addr_mask,
+ 		       iproc_i2c->idm_base + IDM_CTRL_DIRECT_OFFSET);
+ 		writel(val, iproc_i2c->base + offset);
+-		spin_unlock(&iproc_i2c->idm_lock);
++		spin_unlock_irqrestore(&iproc_i2c->idm_lock, flags);
+ 	} else {
+ 		writel(val, iproc_i2c->base + offset);
+ 	}
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index 2871cf2ee8b44..106080b25e81c 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -432,8 +432,19 @@ i2c_dw_read(struct dw_i2c_dev *dev)
+ 
+ 			regmap_read(dev->map, DW_IC_DATA_CMD, &tmp);
+ 			/* Ensure length byte is a valid value */
+-			if (flags & I2C_M_RECV_LEN &&
+-			    tmp <= I2C_SMBUS_BLOCK_MAX && tmp > 0) {
++			if (flags & I2C_M_RECV_LEN) {
++				/*
++				 * if IC_EMPTYFIFO_HOLD_MASTER_EN is set, which cannot be
++				 * detected from the registers, the controller can be
++				 * disabled if the STOP bit is set. But it is only set
++				 * after receiving block data response length in
++				 * I2C_FUNC_SMBUS_BLOCK_DATA case. That needs to read
++				 * another byte with STOP bit set when the block data
++				 * response length is invalid to complete the transaction.
++				 */
++				if (!tmp || tmp > I2C_SMBUS_BLOCK_MAX)
++					tmp = 1;
++
+ 				len = i2c_dw_recv_len(dev, tmp);
+ 			}
+ 			*buf++ = tmp;
+diff --git a/drivers/iio/Kconfig b/drivers/iio/Kconfig
+index 267553386c710..2ed303aa7de3c 100644
+--- a/drivers/iio/Kconfig
++++ b/drivers/iio/Kconfig
+@@ -70,6 +70,7 @@ config IIO_TRIGGERED_EVENT
+ 
+ source "drivers/iio/accel/Kconfig"
+ source "drivers/iio/adc/Kconfig"
++source "drivers/iio/addac/Kconfig"
+ source "drivers/iio/afe/Kconfig"
+ source "drivers/iio/amplifiers/Kconfig"
+ source "drivers/iio/chemical/Kconfig"
+diff --git a/drivers/iio/Makefile b/drivers/iio/Makefile
+index 1712011c0f4a1..d6690e449ccec 100644
+--- a/drivers/iio/Makefile
++++ b/drivers/iio/Makefile
+@@ -15,6 +15,7 @@ obj-$(CONFIG_IIO_TRIGGERED_EVENT) += industrialio-triggered-event.o
+ 
+ obj-y += accel/
+ obj-y += adc/
++obj-y += addac/
+ obj-y += afe/
+ obj-y += amplifiers/
+ obj-y += buffer/
+diff --git a/drivers/iio/adc/stx104.c b/drivers/iio/adc/stx104.c
+index 55bd2dc514e93..b658a75d4e3a8 100644
+--- a/drivers/iio/adc/stx104.c
++++ b/drivers/iio/adc/stx104.c
+@@ -15,7 +15,9 @@
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/moduleparam.h>
++#include <linux/mutex.h>
+ #include <linux/spinlock.h>
++#include <linux/types.h>
+ 
+ #define STX104_OUT_CHAN(chan) {				\
+ 	.type = IIO_VOLTAGE,				\
+@@ -44,14 +46,38 @@ static unsigned int num_stx104;
+ module_param_hw_array(base, uint, ioport, &num_stx104, 0);
+ MODULE_PARM_DESC(base, "Apex Embedded Systems STX104 base addresses");
+ 
++/**
++ * struct stx104_reg - device register structure
++ * @ssr_ad:	Software Strobe Register and ADC Data
++ * @achan:	ADC Channel
++ * @dio:	Digital I/O
++ * @dac:	DAC Channels
++ * @cir_asr:	Clear Interrupts and ADC Status
++ * @acr:	ADC Control
++ * @pccr_fsh:	Pacer Clock Control and FIFO Status MSB
++ * @acfg:	ADC Configuration
++ */
++struct stx104_reg {
++	u16 ssr_ad;
++	u8 achan;
++	u8 dio;
++	u16 dac[2];
++	u8 cir_asr;
++	u8 acr;
++	u8 pccr_fsh;
++	u8 acfg;
++};
++
+ /**
+  * struct stx104_iio - IIO device private data structure
++ * @lock: synchronization lock to prevent I/O race conditions
+  * @chan_out_states:	channels' output states
+- * @base:		base port address of the IIO device
++ * @reg:		I/O address offset for the device registers
+  */
+ struct stx104_iio {
++	struct mutex lock;
+ 	unsigned int chan_out_states[STX104_NUM_OUT_CHAN];
+-	unsigned int base;
++	struct stx104_reg __iomem *reg;
+ };
+ 
+ /**
+@@ -64,7 +90,7 @@ struct stx104_iio {
+ struct stx104_gpio {
+ 	struct gpio_chip chip;
+ 	spinlock_t lock;
+-	unsigned int base;
++	u8 __iomem *base;
+ 	unsigned int out_state;
+ };
+ 
+@@ -72,6 +98,7 @@ static int stx104_read_raw(struct iio_dev *indio_dev,
+ 	struct iio_chan_spec const *chan, int *val, int *val2, long mask)
+ {
+ 	struct stx104_iio *const priv = iio_priv(indio_dev);
++	struct stx104_reg __iomem *const reg = priv->reg;
+ 	unsigned int adc_config;
+ 	int adbu;
+ 	int gain;
+@@ -79,7 +106,7 @@ static int stx104_read_raw(struct iio_dev *indio_dev,
+ 	switch (mask) {
+ 	case IIO_CHAN_INFO_HARDWAREGAIN:
+ 		/* get gain configuration */
+-		adc_config = inb(priv->base + 11);
++		adc_config = ioread8(&reg->acfg);
+ 		gain = adc_config & 0x3;
+ 
+ 		*val = 1 << gain;
+@@ -90,25 +117,31 @@ static int stx104_read_raw(struct iio_dev *indio_dev,
+ 			return IIO_VAL_INT;
+ 		}
+ 
++		mutex_lock(&priv->lock);
++
+ 		/* select ADC channel */
+-		outb(chan->channel | (chan->channel << 4), priv->base + 2);
++		iowrite8(chan->channel | (chan->channel << 4), &reg->achan);
++
++		/* trigger ADC sample capture by writing to the 8-bit
++		 * Software Strobe Register and wait for completion
++		 */
++		iowrite8(0, &reg->ssr_ad);
++		while (ioread8(&reg->cir_asr) & BIT(7));
+ 
+-		/* trigger ADC sample capture and wait for completion */
+-		outb(0, priv->base);
+-		while (inb(priv->base + 8) & BIT(7));
++		*val = ioread16(&reg->ssr_ad);
+ 
+-		*val = inw(priv->base);
++		mutex_unlock(&priv->lock);
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_OFFSET:
+ 		/* get ADC bipolar/unipolar configuration */
+-		adc_config = inb(priv->base + 11);
++		adc_config = ioread8(&reg->acfg);
+ 		adbu = !(adc_config & BIT(2));
+ 
+ 		*val = -32768 * adbu;
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_SCALE:
+ 		/* get ADC bipolar/unipolar and gain configuration */
+-		adc_config = inb(priv->base + 11);
++		adc_config = ioread8(&reg->acfg);
+ 		adbu = !(adc_config & BIT(2));
+ 		gain = adc_config & 0x3;
+ 
+@@ -130,16 +163,16 @@ static int stx104_write_raw(struct iio_dev *indio_dev,
+ 		/* Only four gain states (x1, x2, x4, x8) */
+ 		switch (val) {
+ 		case 1:
+-			outb(0, priv->base + 11);
++			iowrite8(0, &priv->reg->acfg);
+ 			break;
+ 		case 2:
+-			outb(1, priv->base + 11);
++			iowrite8(1, &priv->reg->acfg);
+ 			break;
+ 		case 4:
+-			outb(2, priv->base + 11);
++			iowrite8(2, &priv->reg->acfg);
+ 			break;
+ 		case 8:
+-			outb(3, priv->base + 11);
++			iowrite8(3, &priv->reg->acfg);
+ 			break;
+ 		default:
+ 			return -EINVAL;
+@@ -152,9 +185,12 @@ static int stx104_write_raw(struct iio_dev *indio_dev,
+ 			if ((unsigned int)val > 65535)
+ 				return -EINVAL;
+ 
++			mutex_lock(&priv->lock);
++
+ 			priv->chan_out_states[chan->channel] = val;
+-			outw(val, priv->base + 4 + 2 * chan->channel);
++			iowrite16(val, &priv->reg->dac[chan->channel]);
+ 
++			mutex_unlock(&priv->lock);
+ 			return 0;
+ 		}
+ 		return -EINVAL;
+@@ -222,7 +258,7 @@ static int stx104_gpio_get(struct gpio_chip *chip, unsigned int offset)
+ 	if (offset >= 4)
+ 		return -EINVAL;
+ 
+-	return !!(inb(stx104gpio->base) & BIT(offset));
++	return !!(ioread8(stx104gpio->base) & BIT(offset));
+ }
+ 
+ static int stx104_gpio_get_multiple(struct gpio_chip *chip, unsigned long *mask,
+@@ -230,7 +266,7 @@ static int stx104_gpio_get_multiple(struct gpio_chip *chip, unsigned long *mask,
+ {
+ 	struct stx104_gpio *const stx104gpio = gpiochip_get_data(chip);
+ 
+-	*bits = inb(stx104gpio->base);
++	*bits = ioread8(stx104gpio->base);
+ 
+ 	return 0;
+ }
+@@ -252,7 +288,7 @@ static void stx104_gpio_set(struct gpio_chip *chip, unsigned int offset,
+ 	else
+ 		stx104gpio->out_state &= ~mask;
+ 
+-	outb(stx104gpio->out_state, stx104gpio->base);
++	iowrite8(stx104gpio->out_state, stx104gpio->base);
+ 
+ 	spin_unlock_irqrestore(&stx104gpio->lock, flags);
+ }
+@@ -279,7 +315,7 @@ static void stx104_gpio_set_multiple(struct gpio_chip *chip,
+ 
+ 	stx104gpio->out_state &= ~*mask;
+ 	stx104gpio->out_state |= *mask & *bits;
+-	outb(stx104gpio->out_state, stx104gpio->base);
++	iowrite8(stx104gpio->out_state, stx104gpio->base);
+ 
+ 	spin_unlock_irqrestore(&stx104gpio->lock, flags);
+ }
+@@ -306,11 +342,16 @@ static int stx104_probe(struct device *dev, unsigned int id)
+ 		return -EBUSY;
+ 	}
+ 
++	priv = iio_priv(indio_dev);
++	priv->reg = devm_ioport_map(dev, base[id], STX104_EXTENT);
++	if (!priv->reg)
++		return -ENOMEM;
++
+ 	indio_dev->info = &stx104_info;
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 
+ 	/* determine if differential inputs */
+-	if (inb(base[id] + 8) & BIT(5)) {
++	if (ioread8(&priv->reg->cir_asr) & BIT(5)) {
+ 		indio_dev->num_channels = ARRAY_SIZE(stx104_channels_diff);
+ 		indio_dev->channels = stx104_channels_diff;
+ 	} else {
+@@ -320,18 +361,17 @@ static int stx104_probe(struct device *dev, unsigned int id)
+ 
+ 	indio_dev->name = dev_name(dev);
+ 
+-	priv = iio_priv(indio_dev);
+-	priv->base = base[id];
++	mutex_init(&priv->lock);
+ 
+ 	/* configure device for software trigger operation */
+-	outb(0, base[id] + 9);
++	iowrite8(0, &priv->reg->acr);
+ 
+ 	/* initialize gain setting to x1 */
+-	outb(0, base[id] + 11);
++	iowrite8(0, &priv->reg->acfg);
+ 
+ 	/* initialize DAC output to 0V */
+-	outw(0, base[id] + 4);
+-	outw(0, base[id] + 6);
++	iowrite16(0, &priv->reg->dac[0]);
++	iowrite16(0, &priv->reg->dac[1]);
+ 
+ 	stx104gpio->chip.label = dev_name(dev);
+ 	stx104gpio->chip.parent = dev;
+@@ -346,7 +386,7 @@ static int stx104_probe(struct device *dev, unsigned int id)
+ 	stx104gpio->chip.get_multiple = stx104_gpio_get_multiple;
+ 	stx104gpio->chip.set = stx104_gpio_set;
+ 	stx104gpio->chip.set_multiple = stx104_gpio_set_multiple;
+-	stx104gpio->base = base[id] + 3;
++	stx104gpio->base = &priv->reg->dio;
+ 	stx104gpio->out_state = 0x0;
+ 
+ 	spin_lock_init(&stx104gpio->lock);
+diff --git a/drivers/iio/addac/Kconfig b/drivers/iio/addac/Kconfig
+new file mode 100644
+index 0000000000000..2e64d7755d5ea
+--- /dev/null
++++ b/drivers/iio/addac/Kconfig
+@@ -0,0 +1,8 @@
++#
++# ADC DAC drivers
++#
++# When adding new entries keep the list in alphabetical order
++
++menu "Analog to digital and digital to analog converters"
++
++endmenu
+diff --git a/drivers/iio/addac/Makefile b/drivers/iio/addac/Makefile
+new file mode 100644
+index 0000000000000..b888b9ee12da0
+--- /dev/null
++++ b/drivers/iio/addac/Makefile
+@@ -0,0 +1,6 @@
++# SPDX-License-Identifier: GPL-2.0
++#
++# Makefile for industrial I/O ADDAC drivers
++#
++
++# When adding new entries keep the list in alphabetical order
+diff --git a/drivers/infiniband/hw/mlx5/qpc.c b/drivers/infiniband/hw/mlx5/qpc.c
+index c683d7000168d..9a306da7f9496 100644
+--- a/drivers/infiniband/hw/mlx5/qpc.c
++++ b/drivers/infiniband/hw/mlx5/qpc.c
+@@ -297,8 +297,7 @@ int mlx5_core_destroy_qp(struct mlx5_ib_dev *dev, struct mlx5_core_qp *qp)
+ 	MLX5_SET(destroy_qp_in, in, opcode, MLX5_CMD_OP_DESTROY_QP);
+ 	MLX5_SET(destroy_qp_in, in, qpn, qp->qpn);
+ 	MLX5_SET(destroy_qp_in, in, uid, qp->uid);
+-	mlx5_cmd_exec_in(dev->mdev, destroy_qp, in);
+-	return 0;
++	return mlx5_cmd_exec_in(dev->mdev, destroy_qp, in);
+ }
+ 
+ int mlx5_core_set_delay_drop(struct mlx5_ib_dev *dev,
+@@ -542,14 +541,14 @@ int mlx5_core_xrcd_dealloc(struct mlx5_ib_dev *dev, u32 xrcdn)
+ 	return mlx5_cmd_exec_in(dev->mdev, dealloc_xrcd, in);
+ }
+ 
+-static void destroy_rq_tracked(struct mlx5_ib_dev *dev, u32 rqn, u16 uid)
++static int destroy_rq_tracked(struct mlx5_ib_dev *dev, u32 rqn, u16 uid)
+ {
+ 	u32 in[MLX5_ST_SZ_DW(destroy_rq_in)] = {};
+ 
+ 	MLX5_SET(destroy_rq_in, in, opcode, MLX5_CMD_OP_DESTROY_RQ);
+ 	MLX5_SET(destroy_rq_in, in, rqn, rqn);
+ 	MLX5_SET(destroy_rq_in, in, uid, uid);
+-	mlx5_cmd_exec_in(dev->mdev, destroy_rq, in);
++	return mlx5_cmd_exec_in(dev->mdev, destroy_rq, in);
+ }
+ 
+ int mlx5_core_create_rq_tracked(struct mlx5_ib_dev *dev, u32 *in, int inlen,
+@@ -580,8 +579,7 @@ int mlx5_core_destroy_rq_tracked(struct mlx5_ib_dev *dev,
+ 				 struct mlx5_core_qp *rq)
+ {
+ 	destroy_resource_common(dev, rq);
+-	destroy_rq_tracked(dev, rq->qpn, rq->uid);
+-	return 0;
++	return destroy_rq_tracked(dev, rq->qpn, rq->uid);
+ }
+ 
+ static void destroy_sq_tracked(struct mlx5_ib_dev *dev, u32 sqn, u16 uid)
+diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
+index 8ada91bdbe4d0..fc25b900cef71 100644
+--- a/drivers/irqchip/irq-mips-gic.c
++++ b/drivers/irqchip/irq-mips-gic.c
+@@ -48,7 +48,7 @@ void __iomem *mips_gic_base;
+ 
+ static DEFINE_PER_CPU_READ_MOSTLY(unsigned long[GIC_MAX_LONGS], pcpu_masks);
+ 
+-static DEFINE_SPINLOCK(gic_lock);
++static DEFINE_RAW_SPINLOCK(gic_lock);
+ static struct irq_domain *gic_irq_domain;
+ static int gic_shared_intrs;
+ static unsigned int gic_cpu_pin;
+@@ -209,7 +209,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
+ 
+ 	irq = GIC_HWIRQ_TO_SHARED(d->hwirq);
+ 
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 	switch (type & IRQ_TYPE_SENSE_MASK) {
+ 	case IRQ_TYPE_EDGE_FALLING:
+ 		pol = GIC_POL_FALLING_EDGE;
+@@ -249,7 +249,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
+ 	else
+ 		irq_set_chip_handler_name_locked(d, &gic_level_irq_controller,
+ 						 handle_level_irq, NULL);
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ 
+ 	return 0;
+ }
+@@ -267,7 +267,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
+ 		return -EINVAL;
+ 
+ 	/* Assumption : cpumask refers to a single CPU */
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 
+ 	/* Re-route this IRQ */
+ 	write_gic_map_vp(irq, BIT(mips_cm_vp_id(cpu)));
+@@ -278,7 +278,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
+ 		set_bit(irq, per_cpu_ptr(pcpu_masks, cpu));
+ 
+ 	irq_data_update_effective_affinity(d, cpumask_of(cpu));
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ 
+ 	return IRQ_SET_MASK_OK;
+ }
+@@ -356,12 +356,12 @@ static void gic_mask_local_irq_all_vpes(struct irq_data *d)
+ 	cd = irq_data_get_irq_chip_data(d);
+ 	cd->mask = false;
+ 
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 	for_each_online_cpu(cpu) {
+ 		write_gic_vl_other(mips_cm_vp_id(cpu));
+ 		write_gic_vo_rmask(BIT(intr));
+ 	}
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ }
+ 
+ static void gic_unmask_local_irq_all_vpes(struct irq_data *d)
+@@ -374,32 +374,43 @@ static void gic_unmask_local_irq_all_vpes(struct irq_data *d)
+ 	cd = irq_data_get_irq_chip_data(d);
+ 	cd->mask = true;
+ 
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 	for_each_online_cpu(cpu) {
+ 		write_gic_vl_other(mips_cm_vp_id(cpu));
+ 		write_gic_vo_smask(BIT(intr));
+ 	}
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ }
+ 
+-static void gic_all_vpes_irq_cpu_online(struct irq_data *d)
++static void gic_all_vpes_irq_cpu_online(void)
+ {
+-	struct gic_all_vpes_chip_data *cd;
+-	unsigned int intr;
++	static const unsigned int local_intrs[] = {
++		GIC_LOCAL_INT_TIMER,
++		GIC_LOCAL_INT_PERFCTR,
++		GIC_LOCAL_INT_FDC,
++	};
++	unsigned long flags;
++	int i;
+ 
+-	intr = GIC_HWIRQ_TO_LOCAL(d->hwirq);
+-	cd = irq_data_get_irq_chip_data(d);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 
+-	write_gic_vl_map(mips_gic_vx_map_reg(intr), cd->map);
+-	if (cd->mask)
+-		write_gic_vl_smask(BIT(intr));
++	for (i = 0; i < ARRAY_SIZE(local_intrs); i++) {
++		unsigned int intr = local_intrs[i];
++		struct gic_all_vpes_chip_data *cd;
++
++		cd = &gic_all_vpes_chip_data[intr];
++		write_gic_vl_map(mips_gic_vx_map_reg(intr), cd->map);
++		if (cd->mask)
++			write_gic_vl_smask(BIT(intr));
++	}
++
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ }
+ 
+ static struct irq_chip gic_all_vpes_local_irq_controller = {
+ 	.name			= "MIPS GIC Local",
+ 	.irq_mask		= gic_mask_local_irq_all_vpes,
+ 	.irq_unmask		= gic_unmask_local_irq_all_vpes,
+-	.irq_cpu_online		= gic_all_vpes_irq_cpu_online,
+ };
+ 
+ static void __gic_irq_dispatch(void)
+@@ -423,11 +434,11 @@ static int gic_shared_irq_domain_map(struct irq_domain *d, unsigned int virq,
+ 
+ 	data = irq_get_irq_data(virq);
+ 
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 	write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
+ 	write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu)));
+ 	irq_data_update_effective_affinity(data, cpumask_of(cpu));
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ 
+ 	return 0;
+ }
+@@ -480,6 +491,10 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
+ 	intr = GIC_HWIRQ_TO_LOCAL(hwirq);
+ 	map = GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin;
+ 
++	/*
++	 * If adding support for more per-cpu interrupts, keep the the
++	 * array in gic_all_vpes_irq_cpu_online() in sync.
++	 */
+ 	switch (intr) {
+ 	case GIC_LOCAL_INT_TIMER:
+ 		/* CONFIG_MIPS_CMP workaround (see __gic_init) */
+@@ -518,12 +533,12 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
+ 	if (!gic_local_irq_is_routable(intr))
+ 		return -EPERM;
+ 
+-	spin_lock_irqsave(&gic_lock, flags);
++	raw_spin_lock_irqsave(&gic_lock, flags);
+ 	for_each_online_cpu(cpu) {
+ 		write_gic_vl_other(mips_cm_vp_id(cpu));
+ 		write_gic_vo_map(mips_gic_vx_map_reg(intr), map);
+ 	}
+-	spin_unlock_irqrestore(&gic_lock, flags);
++	raw_spin_unlock_irqrestore(&gic_lock, flags);
+ 
+ 	return 0;
+ }
+@@ -710,8 +725,8 @@ static int gic_cpu_startup(unsigned int cpu)
+ 	/* Clear all local IRQ masks (ie. disable all local interrupts) */
+ 	write_gic_vl_rmask(~0);
+ 
+-	/* Invoke irq_cpu_online callbacks to enable desired interrupts */
+-	irq_cpu_online();
++	/* Enable desired interrupts */
++	gic_all_vpes_irq_cpu_online();
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/mtk-vpu/mtk_vpu.c b/drivers/media/platform/mtk-vpu/mtk_vpu.c
+index c62eb212cca92..e7c4b0dd588a9 100644
+--- a/drivers/media/platform/mtk-vpu/mtk_vpu.c
++++ b/drivers/media/platform/mtk-vpu/mtk_vpu.c
+@@ -539,15 +539,17 @@ static int load_requested_vpu(struct mtk_vpu *vpu,
+ int vpu_load_firmware(struct platform_device *pdev)
+ {
+ 	struct mtk_vpu *vpu;
+-	struct device *dev = &pdev->dev;
++	struct device *dev;
+ 	struct vpu_run *run;
+ 	int ret;
+ 
+ 	if (!pdev) {
+-		dev_err(dev, "VPU platform device is invalid\n");
++		pr_err("VPU platform device is invalid\n");
+ 		return -EINVAL;
+ 	}
+ 
++	dev = &pdev->dev;
++
+ 	vpu = platform_get_drvdata(pdev);
+ 	run = &vpu->run;
+ 
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 599b7317b59a5..d81baf750aebb 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1980,14 +1980,14 @@ static void mmc_blk_mq_poll_completion(struct mmc_queue *mq,
+ 	mmc_blk_urgent_bkops(mq, mqrq);
+ }
+ 
+-static void mmc_blk_mq_dec_in_flight(struct mmc_queue *mq, struct request *req)
++static void mmc_blk_mq_dec_in_flight(struct mmc_queue *mq, enum mmc_issue_type issue_type)
+ {
+ 	unsigned long flags;
+ 	bool put_card;
+ 
+ 	spin_lock_irqsave(&mq->lock, flags);
+ 
+-	mq->in_flight[mmc_issue_type(mq, req)] -= 1;
++	mq->in_flight[issue_type] -= 1;
+ 
+ 	put_card = (mmc_tot_in_flight(mq) == 0);
+ 
+@@ -1999,6 +1999,7 @@ static void mmc_blk_mq_dec_in_flight(struct mmc_queue *mq, struct request *req)
+ 
+ static void mmc_blk_mq_post_req(struct mmc_queue *mq, struct request *req)
+ {
++	enum mmc_issue_type issue_type = mmc_issue_type(mq, req);
+ 	struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
+ 	struct mmc_request *mrq = &mqrq->brq.mrq;
+ 	struct mmc_host *host = mq->card->host;
+@@ -2014,7 +2015,7 @@ static void mmc_blk_mq_post_req(struct mmc_queue *mq, struct request *req)
+ 	else if (likely(!blk_should_fake_timeout(req->q)))
+ 		blk_mq_complete_request(req);
+ 
+-	mmc_blk_mq_dec_in_flight(mq, req);
++	mmc_blk_mq_dec_in_flight(mq, issue_type);
+ }
+ 
+ void mmc_blk_mq_recovery(struct mmc_queue *mq)
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index 03e2f965a96a8..1f46694b2e531 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -513,6 +513,32 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
+ 
+ EXPORT_SYMBOL(mmc_alloc_host);
+ 
++static void devm_mmc_host_release(struct device *dev, void *res)
++{
++	mmc_free_host(*(struct mmc_host **)res);
++}
++
++struct mmc_host *devm_mmc_alloc_host(struct device *dev, int extra)
++{
++	struct mmc_host **dr, *host;
++
++	dr = devres_alloc(devm_mmc_host_release, sizeof(*dr), GFP_KERNEL);
++	if (!dr)
++		return ERR_PTR(-ENOMEM);
++
++	host = mmc_alloc_host(extra, dev);
++	if (IS_ERR(host)) {
++		devres_free(dr);
++		return host;
++	}
++
++	*dr = host;
++	devres_add(dev, dr);
++
++	return host;
++}
++EXPORT_SYMBOL(devm_mmc_alloc_host);
++
+ static int mmc_validate_host_caps(struct mmc_host *host)
+ {
+ 	if (host->caps & MMC_CAP_SDIO_IRQ && !host->ops->enable_sdio_irq) {
+diff --git a/drivers/mmc/host/bcm2835.c b/drivers/mmc/host/bcm2835.c
+index 8c2361e662774..985079943be76 100644
+--- a/drivers/mmc/host/bcm2835.c
++++ b/drivers/mmc/host/bcm2835.c
+@@ -1413,8 +1413,8 @@ static int bcm2835_probe(struct platform_device *pdev)
+ 	host->max_clk = clk_get_rate(clk);
+ 
+ 	host->irq = platform_get_irq(pdev, 0);
+-	if (host->irq <= 0) {
+-		ret = -EINVAL;
++	if (host->irq < 0) {
++		ret = host->irq;
+ 		goto err;
+ 	}
+ 
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index e89bd6f4b317c..1992eea8b777e 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -1122,7 +1122,7 @@ static int meson_mmc_probe(struct platform_device *pdev)
+ 	struct mmc_host *mmc;
+ 	int ret;
+ 
+-	mmc = mmc_alloc_host(sizeof(struct meson_host), &pdev->dev);
++	mmc = devm_mmc_alloc_host(&pdev->dev, sizeof(struct meson_host));
+ 	if (!mmc)
+ 		return -ENOMEM;
+ 	host = mmc_priv(mmc);
+@@ -1138,46 +1138,33 @@ static int meson_mmc_probe(struct platform_device *pdev)
+ 	host->vqmmc_enabled = false;
+ 	ret = mmc_regulator_get_supply(mmc);
+ 	if (ret)
+-		goto free_host;
++		return ret;
+ 
+ 	ret = mmc_of_parse(mmc);
+-	if (ret) {
+-		if (ret != -EPROBE_DEFER)
+-			dev_warn(&pdev->dev, "error parsing DT: %d\n", ret);
+-		goto free_host;
+-	}
++	if (ret)
++		return dev_err_probe(&pdev->dev, ret, "error parsing DT\n");
+ 
+ 	host->data = (struct meson_mmc_data *)
+ 		of_device_get_match_data(&pdev->dev);
+-	if (!host->data) {
+-		ret = -EINVAL;
+-		goto free_host;
+-	}
++	if (!host->data)
++		return -EINVAL;
+ 
+ 	ret = device_reset_optional(&pdev->dev);
+-	if (ret) {
+-		dev_err_probe(&pdev->dev, ret, "device reset failed\n");
+-		goto free_host;
+-	}
++	if (ret)
++		return dev_err_probe(&pdev->dev, ret, "device reset failed\n");
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	host->regs = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(host->regs)) {
+-		ret = PTR_ERR(host->regs);
+-		goto free_host;
+-	}
++	if (IS_ERR(host->regs))
++		return PTR_ERR(host->regs);
+ 
+ 	host->irq = platform_get_irq(pdev, 0);
+-	if (host->irq <= 0) {
+-		ret = -EINVAL;
+-		goto free_host;
+-	}
++	if (host->irq < 0)
++		return host->irq;
+ 
+ 	host->pinctrl = devm_pinctrl_get(&pdev->dev);
+-	if (IS_ERR(host->pinctrl)) {
+-		ret = PTR_ERR(host->pinctrl);
+-		goto free_host;
+-	}
++	if (IS_ERR(host->pinctrl))
++		return PTR_ERR(host->pinctrl);
+ 
+ 	host->pins_clk_gate = pinctrl_lookup_state(host->pinctrl,
+ 						   "clk-gate");
+@@ -1188,14 +1175,12 @@ static int meson_mmc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	host->core_clk = devm_clk_get(&pdev->dev, "core");
+-	if (IS_ERR(host->core_clk)) {
+-		ret = PTR_ERR(host->core_clk);
+-		goto free_host;
+-	}
++	if (IS_ERR(host->core_clk))
++		return PTR_ERR(host->core_clk);
+ 
+ 	ret = clk_prepare_enable(host->core_clk);
+ 	if (ret)
+-		goto free_host;
++		return ret;
+ 
+ 	ret = meson_mmc_clk_init(host);
+ 	if (ret)
+@@ -1290,8 +1275,6 @@ err_init_clk:
+ 	clk_disable_unprepare(host->mmc_clk);
+ err_core_clk:
+ 	clk_disable_unprepare(host->core_clk);
+-free_host:
+-	mmc_free_host(mmc);
+ 	return ret;
+ }
+ 
+@@ -1315,7 +1298,6 @@ static int meson_mmc_remove(struct platform_device *pdev)
+ 	clk_disable_unprepare(host->mmc_clk);
+ 	clk_disable_unprepare(host->core_clk);
+ 
+-	mmc_free_host(host->mmc);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/mmc/host/sdhci_f_sdh30.c b/drivers/mmc/host/sdhci_f_sdh30.c
+index 6c4f43e112826..7ede74bf37230 100644
+--- a/drivers/mmc/host/sdhci_f_sdh30.c
++++ b/drivers/mmc/host/sdhci_f_sdh30.c
+@@ -26,9 +26,16 @@ struct f_sdhost_priv {
+ 	bool enable_cmd_dat_delay;
+ };
+ 
++static void *sdhci_f_sdhost_priv(struct sdhci_host *host)
++{
++	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
++
++	return sdhci_pltfm_priv(pltfm_host);
++}
++
+ static void sdhci_f_sdh30_soft_voltage_switch(struct sdhci_host *host)
+ {
+-	struct f_sdhost_priv *priv = sdhci_priv(host);
++	struct f_sdhost_priv *priv = sdhci_f_sdhost_priv(host);
+ 	u32 ctrl = 0;
+ 
+ 	usleep_range(2500, 3000);
+@@ -61,7 +68,7 @@ static unsigned int sdhci_f_sdh30_get_min_clock(struct sdhci_host *host)
+ 
+ static void sdhci_f_sdh30_reset(struct sdhci_host *host, u8 mask)
+ {
+-	struct f_sdhost_priv *priv = sdhci_priv(host);
++	struct f_sdhost_priv *priv = sdhci_f_sdhost_priv(host);
+ 	u32 ctl;
+ 
+ 	if (sdhci_readw(host, SDHCI_CLOCK_CONTROL) == 0)
+@@ -85,30 +92,32 @@ static const struct sdhci_ops sdhci_f_sdh30_ops = {
+ 	.set_uhs_signaling = sdhci_set_uhs_signaling,
+ };
+ 
++static const struct sdhci_pltfm_data sdhci_f_sdh30_pltfm_data = {
++	.ops = &sdhci_f_sdh30_ops,
++	.quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC
++		| SDHCI_QUIRK_INVERTED_WRITE_PROTECT,
++	.quirks2 = SDHCI_QUIRK2_SUPPORT_SINGLE
++		|  SDHCI_QUIRK2_TUNING_WORK_AROUND,
++};
++
+ static int sdhci_f_sdh30_probe(struct platform_device *pdev)
+ {
+ 	struct sdhci_host *host;
+ 	struct device *dev = &pdev->dev;
+-	int irq, ctrl = 0, ret = 0;
++	int ctrl = 0, ret = 0;
+ 	struct f_sdhost_priv *priv;
++	struct sdhci_pltfm_host *pltfm_host;
+ 	u32 reg = 0;
+ 
+-	irq = platform_get_irq(pdev, 0);
+-	if (irq < 0)
+-		return irq;
+-
+-	host = sdhci_alloc_host(dev, sizeof(struct f_sdhost_priv));
++	host = sdhci_pltfm_init(pdev, &sdhci_f_sdh30_pltfm_data,
++				sizeof(struct f_sdhost_priv));
+ 	if (IS_ERR(host))
+ 		return PTR_ERR(host);
+ 
+-	priv = sdhci_priv(host);
++	pltfm_host = sdhci_priv(host);
++	priv = sdhci_pltfm_priv(pltfm_host);
+ 	priv->dev = dev;
+ 
+-	host->quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC |
+-		       SDHCI_QUIRK_INVERTED_WRITE_PROTECT;
+-	host->quirks2 = SDHCI_QUIRK2_SUPPORT_SINGLE |
+-			SDHCI_QUIRK2_TUNING_WORK_AROUND;
+-
+ 	priv->enable_cmd_dat_delay = device_property_read_bool(dev,
+ 						"fujitsu,cmd-dat-delay-select");
+ 
+@@ -116,18 +125,6 @@ static int sdhci_f_sdh30_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err;
+ 
+-	platform_set_drvdata(pdev, host);
+-
+-	host->hw_name = "f_sdh30";
+-	host->ops = &sdhci_f_sdh30_ops;
+-	host->irq = irq;
+-
+-	host->ioaddr = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(host->ioaddr)) {
+-		ret = PTR_ERR(host->ioaddr);
+-		goto err;
+-	}
+-
+ 	if (dev_of_node(dev)) {
+ 		sdhci_get_of_property(pdev);
+ 
+@@ -182,23 +179,22 @@ err_add_host:
+ err_clk:
+ 	clk_disable_unprepare(priv->clk_iface);
+ err:
+-	sdhci_free_host(host);
++	sdhci_pltfm_free(pdev);
++
+ 	return ret;
+ }
+ 
+ static int sdhci_f_sdh30_remove(struct platform_device *pdev)
+ {
+ 	struct sdhci_host *host = platform_get_drvdata(pdev);
+-	struct f_sdhost_priv *priv = sdhci_priv(host);
+-
+-	sdhci_remove_host(host, readl(host->ioaddr + SDHCI_INT_STATUS) ==
+-			  0xffffffff);
++	struct f_sdhost_priv *priv = sdhci_f_sdhost_priv(host);
++	struct clk *clk_iface = priv->clk_iface;
++	struct clk *clk = priv->clk;
+ 
+-	clk_disable_unprepare(priv->clk_iface);
+-	clk_disable_unprepare(priv->clk);
++	sdhci_pltfm_unregister(pdev);
+ 
+-	sdhci_free_host(host);
+-	platform_set_drvdata(pdev, NULL);
++	clk_disable_unprepare(clk_iface);
++	clk_disable_unprepare(clk);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c
+index 9215069c61560..b834fde3f9eda 100644
+--- a/drivers/mmc/host/sunxi-mmc.c
++++ b/drivers/mmc/host/sunxi-mmc.c
+@@ -1317,8 +1317,8 @@ static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host,
+ 		return ret;
+ 
+ 	host->irq = platform_get_irq(pdev, 0);
+-	if (host->irq <= 0) {
+-		ret = -EINVAL;
++	if (host->irq < 0) {
++		ret = host->irq;
+ 		goto error_disable_mmc;
+ 	}
+ 
+diff --git a/drivers/mmc/host/wbsd.c b/drivers/mmc/host/wbsd.c
+index f3090216e0dcc..6db08070b628d 100644
+--- a/drivers/mmc/host/wbsd.c
++++ b/drivers/mmc/host/wbsd.c
+@@ -1710,8 +1710,6 @@ static int wbsd_init(struct device *dev, int base, int irq, int dma,
+ 
+ 		wbsd_release_resources(host);
+ 		wbsd_free_mmc(dev);
+-
+-		mmc_free_host(mmc);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 8b2c8546f4c99..177151298d72a 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -2310,6 +2310,14 @@ static void mv88e6xxx_hardware_reset(struct mv88e6xxx_chip *chip)
+ 
+ 	/* If there is a GPIO connected to the reset pin, toggle it */
+ 	if (gpiod) {
++		/* If the switch has just been reset and not yet completed
++		 * loading EEPROM, the reset may interrupt the I2C transaction
++		 * mid-byte, causing the first EEPROM read after the reset
++		 * from the wrong location resulting in the switch booting
++		 * to wrong mode and inoperable.
++		 */
++		mv88e6xxx_g1_wait_eeprom_done(chip);
++
+ 		gpiod_set_value_cansleep(gpiod, 1);
+ 		usleep_range(10000, 20000);
+ 		gpiod_set_value_cansleep(gpiod, 0);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_nvm.c b/drivers/net/ethernet/intel/i40e/i40e_nvm.c
+index 7164f4ad81202..6b1996451a4bd 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_nvm.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_nvm.c
+@@ -210,11 +210,11 @@ read_nvm_exit:
+  * @hw: pointer to the HW structure.
+  * @module_pointer: module pointer location in words from the NVM beginning
+  * @offset: offset in words from module start
+- * @words: number of words to write
+- * @data: buffer with words to write to the Shadow RAM
++ * @words: number of words to read
++ * @data: buffer with words to read to the Shadow RAM
+  * @last_command: tells the AdminQ that this is the last command
+  *
+- * Writes a 16 bit words buffer to the Shadow RAM using the admin command.
++ * Reads a 16 bit words buffer to the Shadow RAM using the admin command.
+  **/
+ static i40e_status i40e_read_nvm_aq(struct i40e_hw *hw,
+ 				    u8 module_pointer, u32 offset,
+@@ -234,18 +234,18 @@ static i40e_status i40e_read_nvm_aq(struct i40e_hw *hw,
+ 	 */
+ 	if ((offset + words) > hw->nvm.sr_size)
+ 		i40e_debug(hw, I40E_DEBUG_NVM,
+-			   "NVM write error: offset %d beyond Shadow RAM limit %d\n",
++			   "NVM read error: offset %d beyond Shadow RAM limit %d\n",
+ 			   (offset + words), hw->nvm.sr_size);
+ 	else if (words > I40E_SR_SECTOR_SIZE_IN_WORDS)
+-		/* We can write only up to 4KB (one sector), in one AQ write */
++		/* We can read only up to 4KB (one sector), in one AQ write */
+ 		i40e_debug(hw, I40E_DEBUG_NVM,
+-			   "NVM write fail error: tried to write %d words, limit is %d.\n",
++			   "NVM read fail error: tried to read %d words, limit is %d.\n",
+ 			   words, I40E_SR_SECTOR_SIZE_IN_WORDS);
+ 	else if (((offset + (words - 1)) / I40E_SR_SECTOR_SIZE_IN_WORDS)
+ 		 != (offset / I40E_SR_SECTOR_SIZE_IN_WORDS))
+-		/* A single write cannot spread over two sectors */
++		/* A single read cannot spread over two sectors */
+ 		i40e_debug(hw, I40E_DEBUG_NVM,
+-			   "NVM write error: cannot spread over two sectors in a single write offset=%d words=%d\n",
++			   "NVM read error: cannot spread over two sectors in a single read offset=%d words=%d\n",
+ 			   offset, words);
+ 	else
+ 		ret_code = i40e_aq_read_nvm(hw, module_pointer,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+index 44a434b1178b5..80dee8c692495 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+@@ -89,7 +89,8 @@ static u64 mlx5_read_internal_timer(struct mlx5_core_dev *dev,
+ 
+ static u64 read_internal_timer(const struct cyclecounter *cc)
+ {
+-	struct mlx5_clock *clock = container_of(cc, struct mlx5_clock, cycles);
++	struct mlx5_timer *timer = container_of(cc, struct mlx5_timer, cycles);
++	struct mlx5_clock *clock = container_of(timer, struct mlx5_clock, timer);
+ 	struct mlx5_core_dev *mdev = container_of(clock, struct mlx5_core_dev,
+ 						  clock);
+ 
+@@ -100,6 +101,7 @@ static void mlx5_update_clock_info_page(struct mlx5_core_dev *mdev)
+ {
+ 	struct mlx5_ib_clock_info *clock_info = mdev->clock_info;
+ 	struct mlx5_clock *clock = &mdev->clock;
++	struct mlx5_timer *timer;
+ 	u32 sign;
+ 
+ 	if (!clock_info)
+@@ -109,10 +111,11 @@ static void mlx5_update_clock_info_page(struct mlx5_core_dev *mdev)
+ 	smp_store_mb(clock_info->sign,
+ 		     sign | MLX5_IB_CLOCK_INFO_KERNEL_UPDATING);
+ 
+-	clock_info->cycles = clock->tc.cycle_last;
+-	clock_info->mult   = clock->cycles.mult;
+-	clock_info->nsec   = clock->tc.nsec;
+-	clock_info->frac   = clock->tc.frac;
++	timer = &clock->timer;
++	clock_info->cycles = timer->tc.cycle_last;
++	clock_info->mult   = timer->cycles.mult;
++	clock_info->nsec   = timer->tc.nsec;
++	clock_info->frac   = timer->tc.frac;
+ 
+ 	smp_store_release(&clock_info->sign,
+ 			  sign + MLX5_IB_CLOCK_INFO_KERNEL_UPDATING * 2);
+@@ -151,28 +154,37 @@ static void mlx5_timestamp_overflow(struct work_struct *work)
+ {
+ 	struct delayed_work *dwork = to_delayed_work(work);
+ 	struct mlx5_core_dev *mdev;
++	struct mlx5_timer *timer;
+ 	struct mlx5_clock *clock;
+ 	unsigned long flags;
+ 
+-	clock = container_of(dwork, struct mlx5_clock, overflow_work);
++	timer = container_of(dwork, struct mlx5_timer, overflow_work);
++	clock = container_of(timer, struct mlx5_clock, timer);
+ 	mdev = container_of(clock, struct mlx5_core_dev, clock);
++
++	if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
++		goto out;
++
+ 	write_seqlock_irqsave(&clock->lock, flags);
+-	timecounter_read(&clock->tc);
++	timecounter_read(&timer->tc);
+ 	mlx5_update_clock_info_page(mdev);
+ 	write_sequnlock_irqrestore(&clock->lock, flags);
+-	schedule_delayed_work(&clock->overflow_work, clock->overflow_period);
++
++out:
++	schedule_delayed_work(&timer->overflow_work, timer->overflow_period);
+ }
+ 
+ static int mlx5_ptp_settime(struct ptp_clock_info *ptp, const struct timespec64 *ts)
+ {
+ 	struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
++	struct mlx5_timer *timer = &clock->timer;
+ 	u64 ns = timespec64_to_ns(ts);
+ 	struct mlx5_core_dev *mdev;
+ 	unsigned long flags;
+ 
+ 	mdev = container_of(clock, struct mlx5_core_dev, clock);
+ 	write_seqlock_irqsave(&clock->lock, flags);
+-	timecounter_init(&clock->tc, &clock->cycles, ns);
++	timecounter_init(&timer->tc, &timer->cycles, ns);
+ 	mlx5_update_clock_info_page(mdev);
+ 	write_sequnlock_irqrestore(&clock->lock, flags);
+ 
+@@ -183,6 +195,7 @@ static int mlx5_ptp_gettimex(struct ptp_clock_info *ptp, struct timespec64 *ts,
+ 			     struct ptp_system_timestamp *sts)
+ {
+ 	struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
++	struct mlx5_timer *timer = &clock->timer;
+ 	struct mlx5_core_dev *mdev;
+ 	unsigned long flags;
+ 	u64 cycles, ns;
+@@ -190,7 +203,7 @@ static int mlx5_ptp_gettimex(struct ptp_clock_info *ptp, struct timespec64 *ts,
+ 	mdev = container_of(clock, struct mlx5_core_dev, clock);
+ 	write_seqlock_irqsave(&clock->lock, flags);
+ 	cycles = mlx5_read_internal_timer(mdev, sts);
+-	ns = timecounter_cyc2time(&clock->tc, cycles);
++	ns = timecounter_cyc2time(&timer->tc, cycles);
+ 	write_sequnlock_irqrestore(&clock->lock, flags);
+ 
+ 	*ts = ns_to_timespec64(ns);
+@@ -201,12 +214,13 @@ static int mlx5_ptp_gettimex(struct ptp_clock_info *ptp, struct timespec64 *ts,
+ static int mlx5_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+ {
+ 	struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
++	struct mlx5_timer *timer = &clock->timer;
+ 	struct mlx5_core_dev *mdev;
+ 	unsigned long flags;
+ 
+ 	mdev = container_of(clock, struct mlx5_core_dev, clock);
+ 	write_seqlock_irqsave(&clock->lock, flags);
+-	timecounter_adjtime(&clock->tc, delta);
++	timecounter_adjtime(&timer->tc, delta);
+ 	mlx5_update_clock_info_page(mdev);
+ 	write_sequnlock_irqrestore(&clock->lock, flags);
+ 
+@@ -216,27 +230,27 @@ static int mlx5_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+ static int mlx5_ptp_adjfreq(struct ptp_clock_info *ptp, s32 delta)
+ {
+ 	struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
++	struct mlx5_timer *timer = &clock->timer;
+ 	struct mlx5_core_dev *mdev;
+ 	unsigned long flags;
+ 	int neg_adj = 0;
+ 	u32 diff;
+ 	u64 adj;
+ 
+-
+ 	if (delta < 0) {
+ 		neg_adj = 1;
+ 		delta = -delta;
+ 	}
+ 
+-	adj = clock->nominal_c_mult;
++	adj = timer->nominal_c_mult;
+ 	adj *= delta;
+ 	diff = div_u64(adj, 1000000000ULL);
+ 
+ 	mdev = container_of(clock, struct mlx5_core_dev, clock);
+ 	write_seqlock_irqsave(&clock->lock, flags);
+-	timecounter_read(&clock->tc);
+-	clock->cycles.mult = neg_adj ? clock->nominal_c_mult - diff :
+-				       clock->nominal_c_mult + diff;
++	timecounter_read(&timer->tc);
++	timer->cycles.mult = neg_adj ? timer->nominal_c_mult - diff :
++				       timer->nominal_c_mult + diff;
+ 	mlx5_update_clock_info_page(mdev);
+ 	write_sequnlock_irqrestore(&clock->lock, flags);
+ 
+@@ -313,6 +327,7 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
+ 			container_of(ptp, struct mlx5_clock, ptp_info);
+ 	struct mlx5_core_dev *mdev =
+ 			container_of(clock, struct mlx5_core_dev, clock);
++	struct mlx5_timer *timer = &clock->timer;
+ 	u32 in[MLX5_ST_SZ_DW(mtpps_reg)] = {0};
+ 	u64 nsec_now, nsec_delta, time_stamp = 0;
+ 	u64 cycles_now, cycles_delta;
+@@ -355,10 +370,10 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
+ 		ns = timespec64_to_ns(&ts);
+ 		cycles_now = mlx5_read_internal_timer(mdev, NULL);
+ 		write_seqlock_irqsave(&clock->lock, flags);
+-		nsec_now = timecounter_cyc2time(&clock->tc, cycles_now);
++		nsec_now = timecounter_cyc2time(&timer->tc, cycles_now);
+ 		nsec_delta = ns - nsec_now;
+-		cycles_delta = div64_u64(nsec_delta << clock->cycles.shift,
+-					 clock->cycles.mult);
++		cycles_delta = div64_u64(nsec_delta << timer->cycles.shift,
++					 timer->cycles.mult);
+ 		write_sequnlock_irqrestore(&clock->lock, flags);
+ 		time_stamp = cycles_now + cycles_delta;
+ 		field_select = MLX5_MTPPS_FS_PIN_MODE |
+@@ -541,6 +556,7 @@ static int mlx5_pps_event(struct notifier_block *nb,
+ 			  unsigned long type, void *data)
+ {
+ 	struct mlx5_clock *clock = mlx5_nb_cof(nb, struct mlx5_clock, pps_nb);
++	struct mlx5_timer *timer = &clock->timer;
+ 	struct ptp_clock_event ptp_event;
+ 	u64 cycles_now, cycles_delta;
+ 	u64 nsec_now, nsec_delta, ns;
+@@ -575,10 +591,10 @@ static int mlx5_pps_event(struct notifier_block *nb,
+ 		ts.tv_nsec = 0;
+ 		ns = timespec64_to_ns(&ts);
+ 		write_seqlock_irqsave(&clock->lock, flags);
+-		nsec_now = timecounter_cyc2time(&clock->tc, cycles_now);
++		nsec_now = timecounter_cyc2time(&timer->tc, cycles_now);
+ 		nsec_delta = ns - nsec_now;
+-		cycles_delta = div64_u64(nsec_delta << clock->cycles.shift,
+-					 clock->cycles.mult);
++		cycles_delta = div64_u64(nsec_delta << timer->cycles.shift,
++					 timer->cycles.mult);
+ 		clock->pps_info.start[pin] = cycles_now + cycles_delta;
+ 		write_sequnlock_irqrestore(&clock->lock, flags);
+ 		schedule_work(&clock->pps_info.out_work);
+@@ -591,29 +607,32 @@ static int mlx5_pps_event(struct notifier_block *nb,
+ 	return NOTIFY_OK;
+ }
+ 
+-void mlx5_init_clock(struct mlx5_core_dev *mdev)
++static void mlx5_timecounter_init(struct mlx5_core_dev *mdev)
+ {
+ 	struct mlx5_clock *clock = &mdev->clock;
+-	u64 overflow_cycles;
+-	u64 ns;
+-	u64 frac = 0;
++	struct mlx5_timer *timer = &clock->timer;
+ 	u32 dev_freq;
+ 
+ 	dev_freq = MLX5_CAP_GEN(mdev, device_frequency_khz);
+-	if (!dev_freq) {
+-		mlx5_core_warn(mdev, "invalid device_frequency_khz, aborting HW clock init\n");
+-		return;
+-	}
+-	seqlock_init(&clock->lock);
+-	clock->cycles.read = read_internal_timer;
+-	clock->cycles.shift = MLX5_CYCLES_SHIFT;
+-	clock->cycles.mult = clocksource_khz2mult(dev_freq,
+-						  clock->cycles.shift);
+-	clock->nominal_c_mult = clock->cycles.mult;
+-	clock->cycles.mask = CLOCKSOURCE_MASK(41);
+-
+-	timecounter_init(&clock->tc, &clock->cycles,
++	timer->cycles.read = read_internal_timer;
++	timer->cycles.shift = MLX5_CYCLES_SHIFT;
++	timer->cycles.mult = clocksource_khz2mult(dev_freq,
++						  timer->cycles.shift);
++	timer->nominal_c_mult = timer->cycles.mult;
++	timer->cycles.mask = CLOCKSOURCE_MASK(41);
++
++	timecounter_init(&timer->tc, &timer->cycles,
+ 			 ktime_to_ns(ktime_get_real()));
++}
++
++static void mlx5_init_overflow_period(struct mlx5_clock *clock)
++{
++	struct mlx5_core_dev *mdev = container_of(clock, struct mlx5_core_dev, clock);
++	struct mlx5_ib_clock_info *clock_info = mdev->clock_info;
++	struct mlx5_timer *timer = &clock->timer;
++	u64 overflow_cycles;
++	u64 frac = 0;
++	u64 ns;
+ 
+ 	/* Calculate period in seconds to call the overflow watchdog - to make
+ 	 * sure counter is checked at least twice every wrap around.
+@@ -622,32 +641,63 @@ void mlx5_init_clock(struct mlx5_core_dev *mdev)
+ 	 * multiplied by clock multiplier where the result doesn't exceed
+ 	 * 64bits.
+ 	 */
+-	overflow_cycles = div64_u64(~0ULL >> 1, clock->cycles.mult);
+-	overflow_cycles = min(overflow_cycles, div_u64(clock->cycles.mask, 3));
++	overflow_cycles = div64_u64(~0ULL >> 1, timer->cycles.mult);
++	overflow_cycles = min(overflow_cycles, div_u64(timer->cycles.mask, 3));
+ 
+-	ns = cyclecounter_cyc2ns(&clock->cycles, overflow_cycles,
++	ns = cyclecounter_cyc2ns(&timer->cycles, overflow_cycles,
+ 				 frac, &frac);
+ 	do_div(ns, NSEC_PER_SEC / HZ);
+-	clock->overflow_period = ns;
++	timer->overflow_period = ns;
+ 
+-	mdev->clock_info =
+-		(struct mlx5_ib_clock_info *)get_zeroed_page(GFP_KERNEL);
+-	if (mdev->clock_info) {
+-		mdev->clock_info->nsec = clock->tc.nsec;
+-		mdev->clock_info->cycles = clock->tc.cycle_last;
+-		mdev->clock_info->mask = clock->cycles.mask;
+-		mdev->clock_info->mult = clock->nominal_c_mult;
+-		mdev->clock_info->shift = clock->cycles.shift;
+-		mdev->clock_info->frac = clock->tc.frac;
+-		mdev->clock_info->overflow_period = clock->overflow_period;
++	INIT_DELAYED_WORK(&timer->overflow_work, mlx5_timestamp_overflow);
++	if (timer->overflow_period)
++		schedule_delayed_work(&timer->overflow_work, 0);
++	else
++		mlx5_core_warn(mdev,
++			       "invalid overflow period, overflow_work is not scheduled\n");
++
++	if (clock_info)
++		clock_info->overflow_period = timer->overflow_period;
++}
++
++static void mlx5_init_clock_info(struct mlx5_core_dev *mdev)
++{
++	struct mlx5_clock *clock = &mdev->clock;
++	struct mlx5_ib_clock_info *info;
++	struct mlx5_timer *timer;
++
++	mdev->clock_info = (struct mlx5_ib_clock_info *)get_zeroed_page(GFP_KERNEL);
++	if (!mdev->clock_info) {
++		mlx5_core_warn(mdev, "Failed to allocate IB clock info page\n");
++		return;
+ 	}
+ 
++	info = mdev->clock_info;
++	timer = &clock->timer;
++
++	info->nsec = timer->tc.nsec;
++	info->cycles = timer->tc.cycle_last;
++	info->mask = timer->cycles.mask;
++	info->mult = timer->nominal_c_mult;
++	info->shift = timer->cycles.shift;
++	info->frac = timer->tc.frac;
++}
++
++void mlx5_init_clock(struct mlx5_core_dev *mdev)
++{
++	struct mlx5_clock *clock = &mdev->clock;
++
++	if (!MLX5_CAP_GEN(mdev, device_frequency_khz)) {
++		mlx5_core_warn(mdev, "invalid device_frequency_khz, aborting HW clock init\n");
++		return;
++	}
++
++	seqlock_init(&clock->lock);
++
++	mlx5_timecounter_init(mdev);
++	mlx5_init_clock_info(mdev);
++	mlx5_init_overflow_period(clock);
+ 	INIT_WORK(&clock->pps_info.out_work, mlx5_pps_out);
+-	INIT_DELAYED_WORK(&clock->overflow_work, mlx5_timestamp_overflow);
+-	if (clock->overflow_period)
+-		schedule_delayed_work(&clock->overflow_work, 0);
+-	else
+-		mlx5_core_warn(mdev, "invalid overflow period, overflow_work is not scheduled\n");
+ 
+ 	/* Configure the PHC */
+ 	clock->ptp_info = mlx5_ptp_clock_info;
+@@ -684,7 +734,7 @@ void mlx5_cleanup_clock(struct mlx5_core_dev *mdev)
+ 	}
+ 
+ 	cancel_work_sync(&clock->pps_info.out_work);
+-	cancel_delayed_work_sync(&clock->overflow_work);
++	cancel_delayed_work_sync(&clock->timer.overflow_work);
+ 
+ 	if (mdev->clock_info) {
+ 		free_page((unsigned long)mdev->clock_info);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.h
+index 31600924bdc36..6e8804ebc773b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.h
+@@ -45,12 +45,13 @@ static inline int mlx5_clock_get_ptp_index(struct mlx5_core_dev *mdev)
+ static inline ktime_t mlx5_timecounter_cyc2time(struct mlx5_clock *clock,
+ 						u64 timestamp)
+ {
++	struct mlx5_timer *timer = &clock->timer;
+ 	unsigned int seq;
+ 	u64 nsec;
+ 
+ 	do {
+ 		seq = read_seqbegin(&clock->lock);
+-		nsec = timecounter_cyc2time(&clock->tc, timestamp);
++		nsec = timecounter_cyc2time(&timer->tc, timestamp);
+ 	} while (read_seqretry(&clock->lock, seq));
+ 
+ 	return ns_to_ktime(nsec);
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 4fdb970e34823..2ad15b1d7ffd7 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -159,6 +159,19 @@ static struct macsec_rx_sa *macsec_rxsa_get(struct macsec_rx_sa __rcu *ptr)
+ 	return sa;
+ }
+ 
++static struct macsec_rx_sa *macsec_active_rxsa_get(struct macsec_rx_sc *rx_sc)
++{
++	struct macsec_rx_sa *sa = NULL;
++	int an;
++
++	for (an = 0; an < MACSEC_NUM_AN; an++)	{
++		sa = macsec_rxsa_get(rx_sc->sa[an]);
++		if (sa)
++			break;
++	}
++	return sa;
++}
++
+ static void free_rx_sc_rcu(struct rcu_head *head)
+ {
+ 	struct macsec_rx_sc *rx_sc = container_of(head, struct macsec_rx_sc, rcu_head);
+@@ -497,18 +510,28 @@ static void macsec_encrypt_finish(struct sk_buff *skb, struct net_device *dev)
+ 	skb->protocol = eth_hdr(skb)->h_proto;
+ }
+ 
++static unsigned int macsec_msdu_len(struct sk_buff *skb)
++{
++	struct macsec_dev *macsec = macsec_priv(skb->dev);
++	struct macsec_secy *secy = &macsec->secy;
++	bool sci_present = macsec_skb_cb(skb)->has_sci;
++
++	return skb->len - macsec_hdr_len(sci_present) - secy->icv_len;
++}
++
+ static void macsec_count_tx(struct sk_buff *skb, struct macsec_tx_sc *tx_sc,
+ 			    struct macsec_tx_sa *tx_sa)
+ {
++	unsigned int msdu_len = macsec_msdu_len(skb);
+ 	struct pcpu_tx_sc_stats *txsc_stats = this_cpu_ptr(tx_sc->stats);
+ 
+ 	u64_stats_update_begin(&txsc_stats->syncp);
+ 	if (tx_sc->encrypt) {
+-		txsc_stats->stats.OutOctetsEncrypted += skb->len;
++		txsc_stats->stats.OutOctetsEncrypted += msdu_len;
+ 		txsc_stats->stats.OutPktsEncrypted++;
+ 		this_cpu_inc(tx_sa->stats->OutPktsEncrypted);
+ 	} else {
+-		txsc_stats->stats.OutOctetsProtected += skb->len;
++		txsc_stats->stats.OutOctetsProtected += msdu_len;
+ 		txsc_stats->stats.OutPktsProtected++;
+ 		this_cpu_inc(tx_sa->stats->OutPktsProtected);
+ 	}
+@@ -538,9 +561,10 @@ static void macsec_encrypt_done(struct crypto_async_request *base, int err)
+ 	aead_request_free(macsec_skb_cb(skb)->req);
+ 
+ 	rcu_read_lock_bh();
+-	macsec_encrypt_finish(skb, dev);
+ 	macsec_count_tx(skb, &macsec->secy.tx_sc, macsec_skb_cb(skb)->tx_sa);
+-	len = skb->len;
++	/* packet is encrypted/protected so tx_bytes must be calculated */
++	len = macsec_msdu_len(skb) + 2 * ETH_ALEN;
++	macsec_encrypt_finish(skb, dev);
+ 	ret = dev_queue_xmit(skb);
+ 	count_tx(dev, ret, len);
+ 	rcu_read_unlock_bh();
+@@ -699,6 +723,7 @@ static struct sk_buff *macsec_encrypt(struct sk_buff *skb,
+ 
+ 	macsec_skb_cb(skb)->req = req;
+ 	macsec_skb_cb(skb)->tx_sa = tx_sa;
++	macsec_skb_cb(skb)->has_sci = sci_present;
+ 	aead_request_set_callback(req, 0, macsec_encrypt_done, skb);
+ 
+ 	dev_hold(skb->dev);
+@@ -740,15 +765,17 @@ static bool macsec_post_decrypt(struct sk_buff *skb, struct macsec_secy *secy, u
+ 		u64_stats_update_begin(&rxsc_stats->syncp);
+ 		rxsc_stats->stats.InPktsLate++;
+ 		u64_stats_update_end(&rxsc_stats->syncp);
++		DEV_STATS_INC(secy->netdev, rx_dropped);
+ 		return false;
+ 	}
+ 
+ 	if (secy->validate_frames != MACSEC_VALIDATE_DISABLED) {
++		unsigned int msdu_len = macsec_msdu_len(skb);
+ 		u64_stats_update_begin(&rxsc_stats->syncp);
+ 		if (hdr->tci_an & MACSEC_TCI_E)
+-			rxsc_stats->stats.InOctetsDecrypted += skb->len;
++			rxsc_stats->stats.InOctetsDecrypted += msdu_len;
+ 		else
+-			rxsc_stats->stats.InOctetsValidated += skb->len;
++			rxsc_stats->stats.InOctetsValidated += msdu_len;
+ 		u64_stats_update_end(&rxsc_stats->syncp);
+ 	}
+ 
+@@ -761,6 +788,8 @@ static bool macsec_post_decrypt(struct sk_buff *skb, struct macsec_secy *secy, u
+ 			u64_stats_update_begin(&rxsc_stats->syncp);
+ 			rxsc_stats->stats.InPktsNotValid++;
+ 			u64_stats_update_end(&rxsc_stats->syncp);
++			this_cpu_inc(rx_sa->stats->InPktsNotValid);
++			DEV_STATS_INC(secy->netdev, rx_errors);
+ 			return false;
+ 		}
+ 
+@@ -853,9 +882,9 @@ static void macsec_decrypt_done(struct crypto_async_request *base, int err)
+ 
+ 	macsec_finalize_skb(skb, macsec->secy.icv_len,
+ 			    macsec_extra_len(macsec_skb_cb(skb)->has_sci));
++	len = skb->len;
+ 	macsec_reset_skb(skb, macsec->secy.netdev);
+ 
+-	len = skb->len;
+ 	if (gro_cells_receive(&macsec->gro_cells, skb) == NET_RX_SUCCESS)
+ 		count_rx(dev, len);
+ 
+@@ -1046,6 +1075,7 @@ static enum rx_handler_result handle_not_macsec(struct sk_buff *skb)
+ 			u64_stats_update_begin(&secy_stats->syncp);
+ 			secy_stats->stats.InPktsNoTag++;
+ 			u64_stats_update_end(&secy_stats->syncp);
++			DEV_STATS_INC(macsec->secy.netdev, rx_dropped);
+ 			continue;
+ 		}
+ 
+@@ -1155,6 +1185,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ 		u64_stats_update_begin(&secy_stats->syncp);
+ 		secy_stats->stats.InPktsBadTag++;
+ 		u64_stats_update_end(&secy_stats->syncp);
++		DEV_STATS_INC(secy->netdev, rx_errors);
+ 		goto drop_nosa;
+ 	}
+ 
+@@ -1165,11 +1196,15 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ 		/* If validateFrames is Strict or the C bit in the
+ 		 * SecTAG is set, discard
+ 		 */
++		struct macsec_rx_sa *active_rx_sa = macsec_active_rxsa_get(rx_sc);
+ 		if (hdr->tci_an & MACSEC_TCI_C ||
+ 		    secy->validate_frames == MACSEC_VALIDATE_STRICT) {
+ 			u64_stats_update_begin(&rxsc_stats->syncp);
+ 			rxsc_stats->stats.InPktsNotUsingSA++;
+ 			u64_stats_update_end(&rxsc_stats->syncp);
++			DEV_STATS_INC(secy->netdev, rx_errors);
++			if (active_rx_sa)
++				this_cpu_inc(active_rx_sa->stats->InPktsNotUsingSA);
+ 			goto drop_nosa;
+ 		}
+ 
+@@ -1179,6 +1214,8 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ 		u64_stats_update_begin(&rxsc_stats->syncp);
+ 		rxsc_stats->stats.InPktsUnusedSA++;
+ 		u64_stats_update_end(&rxsc_stats->syncp);
++		if (active_rx_sa)
++			this_cpu_inc(active_rx_sa->stats->InPktsUnusedSA);
+ 		goto deliver;
+ 	}
+ 
+@@ -1199,6 +1236,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ 			u64_stats_update_begin(&rxsc_stats->syncp);
+ 			rxsc_stats->stats.InPktsLate++;
+ 			u64_stats_update_end(&rxsc_stats->syncp);
++			DEV_STATS_INC(macsec->secy.netdev, rx_dropped);
+ 			goto drop;
+ 		}
+ 	}
+@@ -1227,6 +1265,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ deliver:
+ 	macsec_finalize_skb(skb, secy->icv_len,
+ 			    macsec_extra_len(macsec_skb_cb(skb)->has_sci));
++	len = skb->len;
+ 	macsec_reset_skb(skb, secy->netdev);
+ 
+ 	if (rx_sa)
+@@ -1234,12 +1273,11 @@ deliver:
+ 	macsec_rxsc_put(rx_sc);
+ 
+ 	skb_orphan(skb);
+-	len = skb->len;
+ 	ret = gro_cells_receive(&macsec->gro_cells, skb);
+ 	if (ret == NET_RX_SUCCESS)
+ 		count_rx(dev, len);
+ 	else
+-		macsec->secy.netdev->stats.rx_dropped++;
++		DEV_STATS_INC(macsec->secy.netdev, rx_dropped);
+ 
+ 	rcu_read_unlock();
+ 
+@@ -1276,6 +1314,7 @@ nosci:
+ 			u64_stats_update_begin(&secy_stats->syncp);
+ 			secy_stats->stats.InPktsNoSCI++;
+ 			u64_stats_update_end(&secy_stats->syncp);
++			DEV_STATS_INC(macsec->secy.netdev, rx_errors);
+ 			continue;
+ 		}
+ 
+@@ -1294,7 +1333,7 @@ nosci:
+ 			secy_stats->stats.InPktsUnknownSCI++;
+ 			u64_stats_update_end(&secy_stats->syncp);
+ 		} else {
+-			macsec->secy.netdev->stats.rx_dropped++;
++			DEV_STATS_INC(macsec->secy.netdev, rx_dropped);
+ 		}
+ 	}
+ 
+@@ -3403,21 +3442,21 @@ static netdev_tx_t macsec_start_xmit(struct sk_buff *skb,
+ 
+ 	if (!secy->operational) {
+ 		kfree_skb(skb);
+-		dev->stats.tx_dropped++;
++		DEV_STATS_INC(dev, tx_dropped);
+ 		return NETDEV_TX_OK;
+ 	}
+ 
++	len = skb->len;
+ 	skb = macsec_encrypt(skb, dev);
+ 	if (IS_ERR(skb)) {
+ 		if (PTR_ERR(skb) != -EINPROGRESS)
+-			dev->stats.tx_dropped++;
++			DEV_STATS_INC(dev, tx_dropped);
+ 		return NETDEV_TX_OK;
+ 	}
+ 
+ 	macsec_count_tx(skb, &macsec->secy.tx_sc, macsec_skb_cb(skb)->tx_sa);
+ 
+ 	macsec_encrypt_finish(skb, dev);
+-	len = skb->len;
+ 	ret = dev_queue_xmit(skb);
+ 	count_tx(dev, ret, len);
+ 	return ret;
+@@ -3646,8 +3685,9 @@ static void macsec_get_stats64(struct net_device *dev,
+ 
+ 	dev_fetch_sw_netstats(s, dev->tstats);
+ 
+-	s->rx_dropped = dev->stats.rx_dropped;
+-	s->tx_dropped = dev->stats.tx_dropped;
++	s->rx_dropped = atomic_long_read(&dev->stats.__rx_dropped);
++	s->tx_dropped = atomic_long_read(&dev->stats.__tx_dropped);
++	s->rx_errors = atomic_long_read(&dev->stats.__rx_errors);
+ }
+ 
+ static int macsec_get_iflink(const struct net_device *dev)
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index 0cde17bd743f3..a664faa8f01f6 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -404,6 +404,17 @@ static int bcm54xx_resume(struct phy_device *phydev)
+ 	return bcm54xx_config_init(phydev);
+ }
+ 
++static int bcm54810_read_mmd(struct phy_device *phydev, int devnum, u16 regnum)
++{
++	return -EOPNOTSUPP;
++}
++
++static int bcm54810_write_mmd(struct phy_device *phydev, int devnum, u16 regnum,
++			      u16 val)
++{
++	return -EOPNOTSUPP;
++}
++
+ static int bcm54811_config_init(struct phy_device *phydev)
+ {
+ 	int err, reg;
+@@ -841,6 +852,8 @@ static struct phy_driver broadcom_drivers[] = {
+ 	.phy_id_mask    = 0xfffffff0,
+ 	.name           = "Broadcom BCM54810",
+ 	/* PHY_GBIT_FEATURES */
++	.read_mmd	= bcm54810_read_mmd,
++	.write_mmd	= bcm54810_write_mmd,
+ 	.config_init    = bcm54xx_config_init,
+ 	.config_aneg    = bcm5481_config_aneg,
+ 	.ack_interrupt  = bcm_phy_ack_intr,
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 36c7eae776d44..721b536ce8861 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -2195,7 +2195,9 @@ static void team_setup(struct net_device *dev)
+ 
+ 	dev->hw_features = TEAM_VLAN_FEATURES |
+ 			   NETIF_F_HW_VLAN_CTAG_RX |
+-			   NETIF_F_HW_VLAN_CTAG_FILTER;
++			   NETIF_F_HW_VLAN_CTAG_FILTER |
++			   NETIF_F_HW_VLAN_STAG_RX |
++			   NETIF_F_HW_VLAN_STAG_FILTER;
+ 
+ 	dev->hw_features |= NETIF_F_GSO_ENCAP_ALL | NETIF_F_GSO_UDP_L4;
+ 	dev->features |= dev->hw_features;
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 165149bcf0b1a..2fd5d2b7a2092 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -3220,8 +3220,6 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 		}
+ 	}
+ 
+-	_virtnet_set_queues(vi, vi->curr_queue_pairs);
+-
+ 	/* serialize netdev register + virtio_device_ready() with ndo_open() */
+ 	rtnl_lock();
+ 
+@@ -3234,6 +3232,8 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 
+ 	virtio_device_ready(vdev);
+ 
++	_virtnet_set_queues(vi, vi->curr_queue_pairs);
++
+ 	rtnl_unlock();
+ 
+ 	err = virtnet_cpu_notif_add(vi);
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index 1222f5749bc67..a215777df96c7 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -239,6 +239,7 @@
+ #define EP_STATE_ENABLED	1
+ 
+ static const unsigned int pcie_gen_freq[] = {
++	GEN1_CORE_CLK_FREQ,	/* PCI_EXP_LNKSTA_CLS == 0; undefined */
+ 	GEN1_CORE_CLK_FREQ,
+ 	GEN2_CORE_CLK_FREQ,
+ 	GEN3_CORE_CLK_FREQ,
+@@ -470,7 +471,11 @@ static irqreturn_t tegra_pcie_ep_irq_thread(int irq, void *arg)
+ 
+ 	speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
+ 		PCI_EXP_LNKSTA_CLS;
+-	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);
++
++	if (speed >= ARRAY_SIZE(pcie_gen_freq))
++		speed = 0;
++
++	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed]);
+ 
+ 	/* If EP doesn't advertise L1SS, just return */
+ 	val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
+@@ -973,7 +978,11 @@ static int tegra_pcie_dw_host_init(struct pcie_port *pp)
+ 
+ 	speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
+ 		PCI_EXP_LNKSTA_CLS;
+-	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);
++
++	if (speed >= ARRAY_SIZE(pcie_gen_freq))
++		speed = 0;
++
++	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed]);
+ 
+ 	tegra_pcie_enable_interrupts(pp);
+ 
+diff --git a/drivers/pcmcia/rsrc_nonstatic.c b/drivers/pcmcia/rsrc_nonstatic.c
+index 69a6e9a5d6d26..6e90927e65769 100644
+--- a/drivers/pcmcia/rsrc_nonstatic.c
++++ b/drivers/pcmcia/rsrc_nonstatic.c
+@@ -1053,6 +1053,8 @@ static void nonstatic_release_resource_db(struct pcmcia_socket *s)
+ 		q = p->next;
+ 		kfree(p);
+ 	}
++
++	kfree(data);
+ }
+ 
+ 
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index d439afef92128..94c963462d74b 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -2159,12 +2159,13 @@ static void gsm_error(struct gsm_mux *gsm,
+ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
+ {
+ 	int i;
+-	struct gsm_dlci *dlci = gsm->dlci[0];
++	struct gsm_dlci *dlci;
+ 	struct gsm_msg *txq, *ntxq;
+ 
+ 	gsm->dead = true;
+ 	mutex_lock(&gsm->mutex);
+ 
++	dlci = gsm->dlci[0];
+ 	if (dlci) {
+ 		if (disc && dlci->state != DLCI_CLOSED) {
+ 			gsm_dlci_begin_close(dlci);
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 432a438929e64..7499954c9aa76 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -3231,6 +3231,7 @@ void serial8250_init_port(struct uart_8250_port *up)
+ 	struct uart_port *port = &up->port;
+ 
+ 	spin_lock_init(&port->lock);
++	port->pm = NULL;
+ 	port->ops = &serial8250_pops;
+ 	port->has_sysrq = IS_ENABLED(CONFIG_SERIAL_8250_CONSOLE);
+ 
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index a4cf00756681a..227fb2d320465 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1062,8 +1062,8 @@ static void lpuart_copy_rx_to_tty(struct lpuart_port *sport)
+ 		unsigned long sr = lpuart32_read(&sport->port, UARTSTAT);
+ 
+ 		if (sr & (UARTSTAT_PE | UARTSTAT_FE)) {
+-			/* Read DR to clear the error flags */
+-			lpuart32_read(&sport->port, UARTDATA);
++			/* Clear the error flags */
++			lpuart32_write(&sport->port, sr, UARTSTAT);
+ 
+ 			if (sr & UARTSTAT_PE)
+ 				sport->port.icount.parity++;
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index e3a8b6c71aa1d..210c1d6150825 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -2041,7 +2041,7 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 	u8 mult = 0;
+ 	int ret;
+ 
+-	buffering = CDNS3_EP_BUF_SIZE - 1;
++	buffering = priv_dev->ep_buf_size - 1;
+ 
+ 	cdns3_configure_dmult(priv_dev, priv_ep);
+ 
+@@ -2060,7 +2060,7 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 		break;
+ 	default:
+ 		ep_cfg = EP_CFG_EPTYPE(USB_ENDPOINT_XFER_ISOC);
+-		mult = CDNS3_EP_ISO_HS_MULT - 1;
++		mult = priv_dev->ep_iso_burst - 1;
+ 		buffering = mult + 1;
+ 	}
+ 
+@@ -2076,14 +2076,14 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 		mult = 0;
+ 		max_packet_size = 1024;
+ 		if (priv_ep->type == USB_ENDPOINT_XFER_ISOC) {
+-			maxburst = CDNS3_EP_ISO_SS_BURST - 1;
++			maxburst = priv_dev->ep_iso_burst - 1;
+ 			buffering = (mult + 1) *
+ 				    (maxburst + 1);
+ 
+ 			if (priv_ep->interval > 1)
+ 				buffering++;
+ 		} else {
+-			maxburst = CDNS3_EP_BUF_SIZE - 1;
++			maxburst = priv_dev->ep_buf_size - 1;
+ 		}
+ 		break;
+ 	default:
+@@ -2098,6 +2098,23 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 	else
+ 		priv_ep->trb_burst_size = 16;
+ 
++	/*
++	 * In versions preceding DEV_VER_V2, for example, iMX8QM, there exit the bugs
++	 * in the DMA. These bugs occur when the trb_burst_size exceeds 16 and the
++	 * address is not aligned to 128 Bytes (which is a product of the 64-bit AXI
++	 * and AXI maximum burst length of 16 or 0xF+1, dma_axi_ctrl0[3:0]). This
++	 * results in data corruption when it crosses the 4K border. The corruption
++	 * specifically occurs from the position (4K - (address & 0x7F)) to 4K.
++	 *
++	 * So force trb_burst_size to 16 at such platform.
++	 */
++	if (priv_dev->dev_ver < DEV_VER_V2)
++		priv_ep->trb_burst_size = 16;
++
++	mult = min_t(u8, mult, EP_CFG_MULT_MAX);
++	buffering = min_t(u8, buffering, EP_CFG_BUFFERING_MAX);
++	maxburst = min_t(u8, maxburst, EP_CFG_MAXBURST_MAX);
++
+ 	/* onchip buffer is only allocated before configuration */
+ 	if (!priv_dev->hw_configured_flag) {
+ 		ret = cdns3_ep_onchip_buffer_reserve(priv_dev, buffering + 1,
+@@ -2971,6 +2988,40 @@ static int cdns3_gadget_udc_stop(struct usb_gadget *gadget)
+ 	return 0;
+ }
+ 
++/**
++ * cdns3_gadget_check_config - ensure cdns3 can support the USB configuration
++ * @gadget: pointer to the USB gadget
++ *
++ * Used to record the maximum number of endpoints being used in a USB composite
++ * device. (across all configurations)  This is to be used in the calculation
++ * of the TXFIFO sizes when resizing internal memory for individual endpoints.
++ * It will help ensured that the resizing logic reserves enough space for at
++ * least one max packet.
++ */
++static int cdns3_gadget_check_config(struct usb_gadget *gadget)
++{
++	struct cdns3_device *priv_dev = gadget_to_cdns3_device(gadget);
++	struct usb_ep *ep;
++	int n_in = 0;
++	int total;
++
++	list_for_each_entry(ep, &gadget->ep_list, ep_list) {
++		if (ep->claimed && (ep->address & USB_DIR_IN))
++			n_in++;
++	}
++
++	/* 2KB are reserved for EP0, 1KB for out*/
++	total = 2 + n_in + 1;
++
++	if (total > priv_dev->onchip_buffers)
++		return -ENOMEM;
++
++	priv_dev->ep_buf_size = priv_dev->ep_iso_burst =
++			(priv_dev->onchip_buffers - 2) / (n_in + 1);
++
++	return 0;
++}
++
+ static const struct usb_gadget_ops cdns3_gadget_ops = {
+ 	.get_frame = cdns3_gadget_get_frame,
+ 	.wakeup = cdns3_gadget_wakeup,
+@@ -2979,6 +3030,7 @@ static const struct usb_gadget_ops cdns3_gadget_ops = {
+ 	.udc_start = cdns3_gadget_udc_start,
+ 	.udc_stop = cdns3_gadget_udc_stop,
+ 	.match_ep = cdns3_gadget_match_ep,
++	.check_config = cdns3_gadget_check_config,
+ };
+ 
+ static void cdns3_free_all_eps(struct cdns3_device *priv_dev)
+diff --git a/drivers/usb/cdns3/gadget.h b/drivers/usb/cdns3/gadget.h
+index 21fa461c518ec..32825477edd3e 100644
+--- a/drivers/usb/cdns3/gadget.h
++++ b/drivers/usb/cdns3/gadget.h
+@@ -561,15 +561,18 @@ struct cdns3_usb_regs {
+ /* Max burst size (used only in SS mode). */
+ #define EP_CFG_MAXBURST_MASK	GENMASK(11, 8)
+ #define EP_CFG_MAXBURST(p)	(((p) << 8) & EP_CFG_MAXBURST_MASK)
++#define EP_CFG_MAXBURST_MAX	15
+ /* ISO max burst. */
+ #define EP_CFG_MULT_MASK	GENMASK(15, 14)
+ #define EP_CFG_MULT(p)		(((p) << 14) & EP_CFG_MULT_MASK)
++#define EP_CFG_MULT_MAX		2
+ /* ISO max burst. */
+ #define EP_CFG_MAXPKTSIZE_MASK	GENMASK(26, 16)
+ #define EP_CFG_MAXPKTSIZE(p)	(((p) << 16) & EP_CFG_MAXPKTSIZE_MASK)
+ /* Max number of buffered packets. */
+ #define EP_CFG_BUFFERING_MASK	GENMASK(31, 27)
+ #define EP_CFG_BUFFERING(p)	(((p) << 27) & EP_CFG_BUFFERING_MASK)
++#define EP_CFG_BUFFERING_MAX	15
+ 
+ /* EP_CMD - bitmasks */
+ /* Endpoint reset. */
+@@ -1093,9 +1096,6 @@ struct cdns3_trb {
+ #define CDNS3_ENDPOINTS_MAX_COUNT	32
+ #define CDNS3_EP_ZLP_BUF_SIZE		1024
+ 
+-#define CDNS3_EP_BUF_SIZE		4	/* KB */
+-#define CDNS3_EP_ISO_HS_MULT		3
+-#define CDNS3_EP_ISO_SS_BURST		3
+ #define CDNS3_MAX_NUM_DESCMISS_BUF	32
+ #define CDNS3_DESCMIS_BUF_SIZE		2048	/* Bytes */
+ #define CDNS3_WA2_NUM_BUFFERS		128
+@@ -1330,6 +1330,9 @@ struct cdns3_device {
+ 	/*in KB */
+ 	u16				onchip_buffers;
+ 	u16				onchip_used_size;
++
++	u16				ep_buf_size;
++	u16				ep_iso_burst;
+ };
+ 
+ void cdns3_set_register_bit(void __iomem *ptr, u32 mask);
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index f798455942844..4d47fe89864d9 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -70,6 +70,10 @@ static const struct ci_hdrc_imx_platform_flag imx7ulp_usb_data = {
+ 		CI_HDRC_PMQOS,
+ };
+ 
++static const struct ci_hdrc_imx_platform_flag imx8ulp_usb_data = {
++	.flags = CI_HDRC_SUPPORTS_RUNTIME_PM,
++};
++
+ static const struct of_device_id ci_hdrc_imx_dt_ids[] = {
+ 	{ .compatible = "fsl,imx23-usb", .data = &imx23_usb_data},
+ 	{ .compatible = "fsl,imx28-usb", .data = &imx28_usb_data},
+@@ -80,6 +84,7 @@ static const struct of_device_id ci_hdrc_imx_dt_ids[] = {
+ 	{ .compatible = "fsl,imx6ul-usb", .data = &imx6ul_usb_data},
+ 	{ .compatible = "fsl,imx7d-usb", .data = &imx7d_usb_data},
+ 	{ .compatible = "fsl,imx7ulp-usb", .data = &imx7ulp_usb_data},
++	{ .compatible = "fsl,imx8ulp-usb", .data = &imx8ulp_usb_data},
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, ci_hdrc_imx_dt_ids);
+diff --git a/drivers/usb/chipidea/usbmisc_imx.c b/drivers/usb/chipidea/usbmisc_imx.c
+index 425b29168b4d0..9b1d5c11dc340 100644
+--- a/drivers/usb/chipidea/usbmisc_imx.c
++++ b/drivers/usb/chipidea/usbmisc_imx.c
+@@ -135,7 +135,7 @@
+ #define TXVREFTUNE0_MASK		(0xf << 20)
+ 
+ #define MX6_USB_OTG_WAKEUP_BITS (MX6_BM_WAKEUP_ENABLE | MX6_BM_VBUS_WAKEUP | \
+-				 MX6_BM_ID_WAKEUP)
++				 MX6_BM_ID_WAKEUP | MX6SX_BM_DPDM_WAKEUP_EN)
+ 
+ struct usbmisc_ops {
+ 	/* It's called once when probe a usb device */
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index ec8c43231746e..3973f6c18857e 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -306,7 +306,16 @@ static void dwc3_qcom_interconnect_exit(struct dwc3_qcom *qcom)
+ /* Only usable in contexts where the role can not change. */
+ static bool dwc3_qcom_is_host(struct dwc3_qcom *qcom)
+ {
+-	struct dwc3 *dwc = platform_get_drvdata(qcom->dwc3);
++	struct dwc3 *dwc;
++
++	/*
++	 * FIXME: Fix this layering violation.
++	 */
++	dwc = platform_get_drvdata(qcom->dwc3);
++
++	/* Core driver may not have probed yet. */
++	if (!dwc)
++		return false;
+ 
+ 	return dwc->xhci;
+ }
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index 3b5a6430e2418..a717b53847a8e 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -917,8 +917,11 @@ static void __gs_console_push(struct gs_console *cons)
+ 	}
+ 
+ 	req->length = size;
++
++	spin_unlock_irq(&cons->lock);
+ 	if (usb_ep_queue(ep, req, GFP_ATOMIC))
+ 		req->length = 0;
++	spin_lock_irq(&cons->lock);
+ }
+ 
+ static void gs_console_work(struct work_struct *work)
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 3a3b5a03dda75..14d9d1ee16fc4 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1004,6 +1004,25 @@ int usb_gadget_ep_match_desc(struct usb_gadget *gadget,
+ }
+ EXPORT_SYMBOL_GPL(usb_gadget_ep_match_desc);
+ 
++/**
++ * usb_gadget_check_config - checks if the UDC can support the binded
++ *	configuration
++ * @gadget: controller to check the USB configuration
++ *
++ * Ensure that a UDC is able to support the requested resources by a
++ * configuration, and that there are no resource limitations, such as
++ * internal memory allocated to all requested endpoints.
++ *
++ * Returns zero on success, else a negative errno.
++ */
++int usb_gadget_check_config(struct usb_gadget *gadget)
++{
++	if (gadget->ops->check_config)
++		return gadget->ops->check_config(gadget);
++	return 0;
++}
++EXPORT_SYMBOL_GPL(usb_gadget_check_config);
++
+ /* ------------------------------------------------------------------------- */
+ 
+ static void usb_gadget_state_work(struct work_struct *work)
+diff --git a/drivers/video/fbdev/mmp/hw/mmp_ctrl.c b/drivers/video/fbdev/mmp/hw/mmp_ctrl.c
+index 061a105afb865..27c3ee5df8def 100644
+--- a/drivers/video/fbdev/mmp/hw/mmp_ctrl.c
++++ b/drivers/video/fbdev/mmp/hw/mmp_ctrl.c
+@@ -518,7 +518,9 @@ static int mmphw_probe(struct platform_device *pdev)
+ 		ret = -ENOENT;
+ 		goto failed;
+ 	}
+-	clk_prepare_enable(ctrl->clk);
++	ret = clk_prepare_enable(ctrl->clk);
++	if (ret)
++		goto failed;
+ 
+ 	/* init global regs */
+ 	ctrl_set_default(ctrl);
+diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
+index e8ef0c66e558f..136f90dbad831 100644
+--- a/drivers/virtio/virtio_mmio.c
++++ b/drivers/virtio/virtio_mmio.c
+@@ -571,11 +571,9 @@ static void virtio_mmio_release_dev(struct device *_d)
+ {
+ 	struct virtio_device *vdev =
+ 			container_of(_d, struct virtio_device, dev);
+-	struct virtio_mmio_device *vm_dev =
+-			container_of(vdev, struct virtio_mmio_device, vdev);
+-	struct platform_device *pdev = vm_dev->pdev;
++	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
+ 
+-	devm_kfree(&pdev->dev, vm_dev);
++	kfree(vm_dev);
+ }
+ 
+ /* Platform device */
+@@ -586,7 +584,7 @@ static int virtio_mmio_probe(struct platform_device *pdev)
+ 	unsigned long magic;
+ 	int rc;
+ 
+-	vm_dev = devm_kzalloc(&pdev->dev, sizeof(*vm_dev), GFP_KERNEL);
++	vm_dev = kzalloc(sizeof(*vm_dev), GFP_KERNEL);
+ 	if (!vm_dev)
+ 		return -ENOMEM;
+ 
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 83dca79ff042c..b798586263ebb 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4460,8 +4460,7 @@ int btrfs_cancel_balance(struct btrfs_fs_info *fs_info)
+ 		}
+ 	}
+ 
+-	BUG_ON(fs_info->balance_ctl ||
+-		test_bit(BTRFS_FS_BALANCE_RUNNING, &fs_info->flags));
++	ASSERT(!test_bit(BTRFS_FS_BALANCE_RUNNING, &fs_info->flags));
+ 	atomic_dec(&fs_info->balance_cancel_req);
+ 	mutex_unlock(&fs_info->balance_mutex);
+ 	return 0;
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 5fe85dc0e2651..a56738244f3a9 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -4580,9 +4580,9 @@ static int cifs_readpage_worker(struct file *file, struct page *page,
+ 
+ io_error:
+ 	kunmap(page);
+-	unlock_page(page);
+ 
+ read_complete:
++	unlock_page(page);
+ 	return rc;
+ }
+ 
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index e01b6a2d12d30..b61de8dab51a0 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1017,7 +1017,14 @@ static int gfs2_show_options(struct seq_file *s, struct dentry *root)
+ {
+ 	struct gfs2_sbd *sdp = root->d_sb->s_fs_info;
+ 	struct gfs2_args *args = &sdp->sd_args;
+-	int val;
++	unsigned int logd_secs, statfs_slow, statfs_quantum, quota_quantum;
++
++	spin_lock(&sdp->sd_tune.gt_spin);
++	logd_secs = sdp->sd_tune.gt_logd_secs;
++	quota_quantum = sdp->sd_tune.gt_quota_quantum;
++	statfs_quantum = sdp->sd_tune.gt_statfs_quantum;
++	statfs_slow = sdp->sd_tune.gt_statfs_slow;
++	spin_unlock(&sdp->sd_tune.gt_spin);
+ 
+ 	if (is_ancestor(root, sdp->sd_master_dir))
+ 		seq_puts(s, ",meta");
+@@ -1072,17 +1079,14 @@ static int gfs2_show_options(struct seq_file *s, struct dentry *root)
+ 	}
+ 	if (args->ar_discard)
+ 		seq_puts(s, ",discard");
+-	val = sdp->sd_tune.gt_logd_secs;
+-	if (val != 30)
+-		seq_printf(s, ",commit=%d", val);
+-	val = sdp->sd_tune.gt_statfs_quantum;
+-	if (val != 30)
+-		seq_printf(s, ",statfs_quantum=%d", val);
+-	else if (sdp->sd_tune.gt_statfs_slow)
++	if (logd_secs != 30)
++		seq_printf(s, ",commit=%d", logd_secs);
++	if (statfs_quantum != 30)
++		seq_printf(s, ",statfs_quantum=%d", statfs_quantum);
++	else if (statfs_slow)
+ 		seq_puts(s, ",statfs_quantum=0");
+-	val = sdp->sd_tune.gt_quota_quantum;
+-	if (val != 60)
+-		seq_printf(s, ",quota_quantum=%d", val);
++	if (quota_quantum != 60)
++		seq_printf(s, ",quota_quantum=%d", quota_quantum);
+ 	if (args->ar_statfs_percent)
+ 		seq_printf(s, ",statfs_percent=%d", args->ar_statfs_percent);
+ 	if (args->ar_errors != GFS2_ERRORS_DEFAULT) {
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index bd9af2be352fc..cef3303d94995 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -2027,6 +2027,9 @@ dbAllocDmapLev(struct bmap * bmp,
+ 	if (dbFindLeaf((dmtree_t *) & dp->tree, l2nb, &leafidx))
+ 		return -ENOSPC;
+ 
++	if (leafidx < 0)
++		return -EIO;
++
+ 	/* determine the block number within the file system corresponding
+ 	 * to the leaf at which free space was found.
+ 	 */
+diff --git a/fs/jfs/jfs_txnmgr.c b/fs/jfs/jfs_txnmgr.c
+index c8ce7f1bc5942..6f6a5b9203d3f 100644
+--- a/fs/jfs/jfs_txnmgr.c
++++ b/fs/jfs/jfs_txnmgr.c
+@@ -354,6 +354,11 @@ tid_t txBegin(struct super_block *sb, int flag)
+ 	jfs_info("txBegin: flag = 0x%x", flag);
+ 	log = JFS_SBI(sb)->log;
+ 
++	if (!log) {
++		jfs_error(sb, "read-only filesystem\n");
++		return 0;
++	}
++
+ 	TXN_LOCK();
+ 
+ 	INCREMENT(TxStat.txBegin);
+diff --git a/fs/jfs/namei.c b/fs/jfs/namei.c
+index 7a55d14cc1af0..f155ad6650bd4 100644
+--- a/fs/jfs/namei.c
++++ b/fs/jfs/namei.c
+@@ -798,6 +798,11 @@ static int jfs_link(struct dentry *old_dentry,
+ 	if (rc)
+ 		goto out;
+ 
++	if (isReadOnly(ip)) {
++		jfs_error(ip->i_sb, "read-only filesystem\n");
++		return -EROFS;
++	}
++
+ 	tid = txBegin(ip->i_sb, 0);
+ 
+ 	mutex_lock_nested(&JFS_IP(dir)->commit_mutex, COMMIT_MUTEX_PARENT);
+diff --git a/fs/overlayfs/ovl_entry.h b/fs/overlayfs/ovl_entry.h
+index b208eba5d0b64..b58a0140d78d5 100644
+--- a/fs/overlayfs/ovl_entry.h
++++ b/fs/overlayfs/ovl_entry.h
+@@ -30,6 +30,7 @@ struct ovl_sb {
+ };
+ 
+ struct ovl_layer {
++	/* ovl_free_fs() relies on @mnt being the first member! */
+ 	struct vfsmount *mnt;
+ 	/* Trap in ovl inode cache */
+ 	struct inode *trap;
+@@ -40,6 +41,14 @@ struct ovl_layer {
+ 	int fsid;
+ };
+ 
++/*
++ * ovl_free_fs() relies on @mnt being the first member when unmounting
++ * the private mounts created for each layer. Let's check both the
++ * offset and type.
++ */
++static_assert(offsetof(struct ovl_layer, mnt) == 0);
++static_assert(__same_type(typeof_member(struct ovl_layer, mnt), struct vfsmount *));
++
+ struct ovl_path {
+ 	const struct ovl_layer *layer;
+ 	struct dentry *dentry;
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index ad255f8ab5c55..8d0cd68fc90a4 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -557,7 +557,7 @@ restart:
+ 			continue;
+ 		/* Wait for dquot users */
+ 		if (atomic_read(&dquot->dq_count)) {
+-			dqgrab(dquot);
++			atomic_inc(&dquot->dq_count);
+ 			spin_unlock(&dq_list_lock);
+ 			/*
+ 			 * Once dqput() wakes us up, we know it's time to free
+@@ -2415,7 +2415,8 @@ int dquot_load_quota_sb(struct super_block *sb, int type, int format_id,
+ 
+ 	error = add_dquot_ref(sb, type);
+ 	if (error)
+-		dquot_disable(sb, type, flags);
++		dquot_disable(sb, type,
++			      DQUOT_USAGE_ENABLED | DQUOT_LIMITS_ENABLED);
+ 
+ 	return error;
+ out_fmt:
+diff --git a/fs/udf/unicode.c b/fs/udf/unicode.c
+index 622569007b530..2142cbd1dde24 100644
+--- a/fs/udf/unicode.c
++++ b/fs/udf/unicode.c
+@@ -247,7 +247,7 @@ static int udf_name_from_CS0(struct super_block *sb,
+ 	}
+ 
+ 	if (translate) {
+-		if (str_o_len <= 2 && str_o[0] == '.' &&
++		if (str_o_len > 0 && str_o_len <= 2 && str_o[0] == '.' &&
+ 		    (str_o_len == 1 || str_o[1] == '.'))
+ 			needsCRC = 1;
+ 		if (needsCRC) {
+diff --git a/include/dt-bindings/iio/addac/adi,ad74413r.h b/include/dt-bindings/iio/addac/adi,ad74413r.h
+new file mode 100644
+index 0000000000000..204f92bbd79f2
+--- /dev/null
++++ b/include/dt-bindings/iio/addac/adi,ad74413r.h
+@@ -0,0 +1,21 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++
++#ifndef _DT_BINDINGS_ADI_AD74413R_H
++#define _DT_BINDINGS_ADI_AD74413R_H
++
++#define CH_FUNC_HIGH_IMPEDANCE			0x0
++#define CH_FUNC_VOLTAGE_OUTPUT			0x1
++#define CH_FUNC_CURRENT_OUTPUT			0x2
++#define CH_FUNC_VOLTAGE_INPUT			0x3
++#define CH_FUNC_CURRENT_INPUT_EXT_POWER		0x4
++#define CH_FUNC_CURRENT_INPUT_LOOP_POWER	0x5
++#define CH_FUNC_RESISTANCE_INPUT		0x6
++#define CH_FUNC_DIGITAL_INPUT_LOGIC		0x7
++#define CH_FUNC_DIGITAL_INPUT_LOOP_POWER	0x8
++#define CH_FUNC_CURRENT_INPUT_EXT_POWER_HART	0x9
++#define CH_FUNC_CURRENT_INPUT_LOOP_POWER_HART	0xA
++
++#define CH_FUNC_MIN	CH_FUNC_HIGH_IMPEDANCE
++#define CH_FUNC_MAX	CH_FUNC_CURRENT_INPUT_LOOP_POWER_HART
++
++#endif /* _DT_BINDINGS_ADI_AD74413R_H */
+diff --git a/include/linux/iopoll.h b/include/linux/iopoll.h
+index 2c8860e406bd8..0417360a6db9b 100644
+--- a/include/linux/iopoll.h
++++ b/include/linux/iopoll.h
+@@ -53,6 +53,7 @@
+ 		} \
+ 		if (__sleep_us) \
+ 			usleep_range((__sleep_us >> 2) + 1, __sleep_us); \
++		cpu_relax(); \
+ 	} \
+ 	(cond) ? 0 : -ETIMEDOUT; \
+ })
+@@ -95,6 +96,7 @@
+ 		} \
+ 		if (__delay_us) \
+ 			udelay(__delay_us); \
++		cpu_relax(); \
+ 	} \
+ 	(cond) ? 0 : -ETIMEDOUT; \
+ })
+diff --git a/include/linux/mhi.h b/include/linux/mhi.h
+index d4841e5a5f458..5d9f8c6f3d40f 100644
+--- a/include/linux/mhi.h
++++ b/include/linux/mhi.h
+@@ -303,6 +303,7 @@ struct mhi_controller_config {
+  * @rddm_size: RAM dump size that host should allocate for debugging purpose
+  * @sbl_size: SBL image size downloaded through BHIe (optional)
+  * @seg_len: BHIe vector size (optional)
++ * @reg_len: Length of the MHI MMIO region (required)
+  * @fbc_image: Points to firmware image buffer
+  * @rddm_image: Points to RAM dump buffer
+  * @mhi_chan: Points to the channel configuration table
+@@ -383,6 +384,7 @@ struct mhi_controller {
+ 	size_t rddm_size;
+ 	size_t sbl_size;
+ 	size_t seg_len;
++	size_t reg_len;
+ 	struct image_info *fbc_image;
+ 	struct image_info *rddm_image;
+ 	struct mhi_chan *mhi_chan;
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index ae88362216a4e..4f95b98215d81 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -644,18 +644,22 @@ struct mlx5_pps {
+ 	u8                         enabled;
+ };
+ 
+-struct mlx5_clock {
+-	struct mlx5_nb             pps_nb;
+-	seqlock_t                  lock;
++struct mlx5_timer {
+ 	struct cyclecounter        cycles;
+ 	struct timecounter         tc;
+-	struct hwtstamp_config     hwtstamp_config;
+ 	u32                        nominal_c_mult;
+ 	unsigned long              overflow_period;
+ 	struct delayed_work        overflow_work;
++};
++
++struct mlx5_clock {
++	struct mlx5_nb             pps_nb;
++	seqlock_t                  lock;
++	struct hwtstamp_config     hwtstamp_config;
+ 	struct ptp_clock          *ptp;
+ 	struct ptp_clock_info      ptp_info;
+ 	struct mlx5_pps            pps_info;
++	struct mlx5_timer          timer;
+ };
+ 
+ struct mlx5_dm;
+diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
+index 40d7e98fc9902..fb294cbb9081d 100644
+--- a/include/linux/mmc/host.h
++++ b/include/linux/mmc/host.h
+@@ -477,6 +477,7 @@ struct mmc_host {
+ struct device_node;
+ 
+ struct mmc_host *mmc_alloc_host(int extra, struct device *);
++struct mmc_host *devm_mmc_alloc_host(struct device *dev, int extra);
+ int mmc_add_host(struct mmc_host *);
+ void mmc_remove_host(struct mmc_host *);
+ void mmc_free_host(struct mmc_host *);
+diff --git a/include/linux/objtool.h b/include/linux/objtool.h
+index 662f19374bd98..d17fa7f4001d7 100644
+--- a/include/linux/objtool.h
++++ b/include/linux/objtool.h
+@@ -71,6 +71,23 @@ struct unwind_hint {
+ 	static void __used __section(".discard.func_stack_frame_non_standard") \
+ 		*__func_stack_frame_non_standard_##func = func
+ 
++/*
++ * STACK_FRAME_NON_STANDARD_FP() is a frame-pointer-specific function ignore
++ * for the case where a function is intentionally missing frame pointer setup,
++ * but otherwise needs objtool/ORC coverage when frame pointers are disabled.
++ */
++#ifdef CONFIG_FRAME_POINTER
++#define STACK_FRAME_NON_STANDARD_FP(func) STACK_FRAME_NON_STANDARD(func)
++#else
++#define STACK_FRAME_NON_STANDARD_FP(func)
++#endif
++
++#define ANNOTATE_NOENDBR					\
++	"986: \n\t"						\
++	".pushsection .discard.noendbr\n\t"			\
++	_ASM_PTR " 986b\n\t"					\
++	".popsection\n\t"
++
+ #else /* __ASSEMBLY__ */
+ 
+ /*
+@@ -117,6 +134,13 @@ struct unwind_hint {
+ 	.popsection
+ .endm
+ 
++.macro ANNOTATE_NOENDBR
++.Lhere_\@:
++	.pushsection .discard.noendbr
++	.quad	.Lhere_\@
++	.popsection
++.endm
++
+ #endif /* __ASSEMBLY__ */
+ 
+ #else /* !CONFIG_STACK_VALIDATION */
+@@ -126,10 +150,14 @@ struct unwind_hint {
+ #define UNWIND_HINT(sp_reg, sp_offset, type, end)	\
+ 	"\n\t"
+ #define STACK_FRAME_NON_STANDARD(func)
++#define STACK_FRAME_NON_STANDARD_FP(func)
++#define ANNOTATE_NOENDBR
+ #else
+ #define ANNOTATE_INTRA_FUNCTION_CALL
+ .macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0
+ .endm
++.macro ANNOTATE_NOENDBR
++.endm
+ #endif
+ 
+ #endif /* CONFIG_STACK_VALIDATION */
+diff --git a/include/linux/usb/gadget.h b/include/linux/usb/gadget.h
+index e7351d64f11fa..11df3d5b40c6b 100644
+--- a/include/linux/usb/gadget.h
++++ b/include/linux/usb/gadget.h
+@@ -326,6 +326,7 @@ struct usb_gadget_ops {
+ 	struct usb_ep *(*match_ep)(struct usb_gadget *,
+ 			struct usb_endpoint_descriptor *,
+ 			struct usb_ss_ep_comp_descriptor *);
++	int	(*check_config)(struct usb_gadget *gadget);
+ };
+ 
+ /**
+@@ -596,6 +597,7 @@ int usb_gadget_connect(struct usb_gadget *gadget);
+ int usb_gadget_disconnect(struct usb_gadget *gadget);
+ int usb_gadget_deactivate(struct usb_gadget *gadget);
+ int usb_gadget_activate(struct usb_gadget *gadget);
++int usb_gadget_check_config(struct usb_gadget *gadget);
+ #else
+ static inline int usb_gadget_frame_number(struct usb_gadget *gadget)
+ { return 0; }
+@@ -619,6 +621,8 @@ static inline int usb_gadget_deactivate(struct usb_gadget *gadget)
+ { return 0; }
+ static inline int usb_gadget_activate(struct usb_gadget *gadget)
+ { return 0; }
++static inline int usb_gadget_check_config(struct usb_gadget *gadget)
++{ return 0; }
+ #endif /* CONFIG_USB_GADGET */
+ 
+ /*-------------------------------------------------------------------------*/
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index a960de68ac69e..6047058d67037 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -148,6 +148,10 @@ retry:
+ 		if (gso_type & SKB_GSO_UDP)
+ 			nh_off -= thlen;
+ 
++		/* Kernel has a special handling for GSO_BY_FRAGS. */
++		if (gso_size == GSO_BY_FRAGS)
++			return -EINVAL;
++
+ 		/* Too small packets are not really GSO ones. */
+ 		if (skb->len - nh_off > gso_size) {
+ 			shinfo->gso_size = gso_size;
+diff --git a/include/media/v4l2-mem2mem.h b/include/media/v4l2-mem2mem.h
+index 5a91b548ecc0c..8d52c4506762d 100644
+--- a/include/media/v4l2-mem2mem.h
++++ b/include/media/v4l2-mem2mem.h
+@@ -588,7 +588,14 @@ void v4l2_m2m_buf_queue(struct v4l2_m2m_ctx *m2m_ctx,
+ static inline
+ unsigned int v4l2_m2m_num_src_bufs_ready(struct v4l2_m2m_ctx *m2m_ctx)
+ {
+-	return m2m_ctx->out_q_ctx.num_rdy;
++	unsigned int num_buf_rdy;
++	unsigned long flags;
++
++	spin_lock_irqsave(&m2m_ctx->out_q_ctx.rdy_spinlock, flags);
++	num_buf_rdy = m2m_ctx->out_q_ctx.num_rdy;
++	spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags);
++
++	return num_buf_rdy;
+ }
+ 
+ /**
+@@ -600,7 +607,14 @@ unsigned int v4l2_m2m_num_src_bufs_ready(struct v4l2_m2m_ctx *m2m_ctx)
+ static inline
+ unsigned int v4l2_m2m_num_dst_bufs_ready(struct v4l2_m2m_ctx *m2m_ctx)
+ {
+-	return m2m_ctx->cap_q_ctx.num_rdy;
++	unsigned int num_buf_rdy;
++	unsigned long flags;
++
++	spin_lock_irqsave(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags);
++	num_buf_rdy = m2m_ctx->cap_q_ctx.num_rdy;
++	spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags);
++
++	return num_buf_rdy;
+ }
+ 
+ /**
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 1fb5c535537c1..665e388593752 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1346,6 +1346,12 @@ static inline bool sk_has_memory_pressure(const struct sock *sk)
+ 	return sk->sk_prot->memory_pressure != NULL;
+ }
+ 
++static inline bool sk_under_global_memory_pressure(const struct sock *sk)
++{
++	return sk->sk_prot->memory_pressure &&
++		!!*sk->sk_prot->memory_pressure;
++}
++
+ static inline bool sk_under_memory_pressure(const struct sock *sk)
+ {
+ 	if (!sk->sk_prot->memory_pressure)
+diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c
+index 905c3fa005f10..5bff061993102 100644
+--- a/kernel/dma/remap.c
++++ b/kernel/dma/remap.c
+@@ -43,13 +43,13 @@ void *dma_common_contiguous_remap(struct page *page, size_t size,
+ 	void *vaddr;
+ 	int i;
+ 
+-	pages = kmalloc_array(count, sizeof(struct page *), GFP_KERNEL);
++	pages = kvmalloc_array(count, sizeof(struct page *), GFP_KERNEL);
+ 	if (!pages)
+ 		return NULL;
+ 	for (i = 0; i < count; i++)
+ 		pages[i] = nth_page(page, i);
+ 	vaddr = vmap(pages, count, VM_DMA_COHERENT, prot);
+-	kfree(pages);
++	kvfree(pages);
+ 
+ 	return vaddr;
+ }
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 3b8c53264441e..f8126fa0630e2 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -541,6 +541,7 @@ struct trace_buffer {
+ 	unsigned			flags;
+ 	int				cpus;
+ 	atomic_t			record_disabled;
++	atomic_t			resizing;
+ 	cpumask_var_t			cpumask;
+ 
+ 	struct lock_class_key		*reader_lock_key;
+@@ -2041,7 +2042,7 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 
+ 	/* prevent another thread from changing buffer sizes */
+ 	mutex_lock(&buffer->mutex);
+-
++	atomic_inc(&buffer->resizing);
+ 
+ 	if (cpu_id == RING_BUFFER_ALL_CPUS) {
+ 		/*
+@@ -2184,6 +2185,7 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 		atomic_dec(&buffer->record_disabled);
+ 	}
+ 
++	atomic_dec(&buffer->resizing);
+ 	mutex_unlock(&buffer->mutex);
+ 	return 0;
+ 
+@@ -2204,6 +2206,7 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 		}
+ 	}
+  out_err_unlock:
++	atomic_dec(&buffer->resizing);
+ 	mutex_unlock(&buffer->mutex);
+ 	return err;
+ }
+@@ -5253,6 +5256,15 @@ int ring_buffer_swap_cpu(struct trace_buffer *buffer_a,
+ 	if (local_read(&cpu_buffer_b->committing))
+ 		goto out_dec;
+ 
++	/*
++	 * When resize is in progress, we cannot swap it because
++	 * it will mess the state of the cpu buffer.
++	 */
++	if (atomic_read(&buffer_a->resizing))
++		goto out_dec;
++	if (atomic_read(&buffer_b->resizing))
++		goto out_dec;
++
+ 	buffer_a->buffers[cpu] = cpu_buffer_b;
+ 	buffer_b->buffers[cpu] = cpu_buffer_a;
+ 
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 7e99319bd5365..167f2a19fd8a2 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1882,9 +1882,10 @@ update_max_tr_single(struct trace_array *tr, struct task_struct *tsk, int cpu)
+ 		 * place on this CPU. We fail to record, but we reset
+ 		 * the max trace buffer (no one writes directly to it)
+ 		 * and flag that it failed.
++		 * Another reason is resize is in progress.
+ 		 */
+ 		trace_array_printk_buf(tr->max_buffer.buffer, _THIS_IP_,
+-			"Failed to swap buffers due to commit in progress\n");
++			"Failed to swap buffers due to commit or resize in progress\n");
+ 	}
+ 
+ 	WARN_ON_ONCE(ret && ret != -EAGAIN && ret != -EBUSY);
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 41dd17390c732..b882c6519b035 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -1332,9 +1332,10 @@ probe_mem_read(void *dest, void *src, size_t size)
+ 
+ /* Note that we don't verify it, since the code does not come from user space */
+ static int
+-process_fetch_insn(struct fetch_insn *code, struct pt_regs *regs, void *dest,
++process_fetch_insn(struct fetch_insn *code, void *rec, void *dest,
+ 		   void *base)
+ {
++	struct pt_regs *regs = rec;
+ 	unsigned long val;
+ 
+ retry:
+diff --git a/kernel/trace/trace_probe_tmpl.h b/kernel/trace/trace_probe_tmpl.h
+index 29348874ebde7..cf14a37dff8c8 100644
+--- a/kernel/trace/trace_probe_tmpl.h
++++ b/kernel/trace/trace_probe_tmpl.h
+@@ -54,7 +54,7 @@ fetch_apply_bitfield(struct fetch_insn *code, void *buf)
+  * If dest is NULL, don't store result and return required dynamic data size.
+  */
+ static int
+-process_fetch_insn(struct fetch_insn *code, struct pt_regs *regs,
++process_fetch_insn(struct fetch_insn *code, void *rec,
+ 		   void *dest, void *base);
+ static nokprobe_inline int fetch_store_strlen(unsigned long addr);
+ static nokprobe_inline int
+@@ -190,7 +190,7 @@ __get_data_size(struct trace_probe *tp, struct pt_regs *regs)
+ 
+ /* Store the value of each argument */
+ static nokprobe_inline void
+-store_trace_args(void *data, struct trace_probe *tp, struct pt_regs *regs,
++store_trace_args(void *data, struct trace_probe *tp, void *rec,
+ 		 int header_size, int maxlen)
+ {
+ 	struct probe_arg *arg;
+@@ -205,12 +205,14 @@ store_trace_args(void *data, struct trace_probe *tp, struct pt_regs *regs,
+ 		/* Point the dynamic data area if needed */
+ 		if (unlikely(arg->dynamic))
+ 			*dl = make_data_loc(maxlen, dyndata - base);
+-		ret = process_fetch_insn(arg->code, regs, dl, base);
+-		if (unlikely(ret < 0 && arg->dynamic)) {
+-			*dl = make_data_loc(0, dyndata - base);
+-		} else {
+-			dyndata += ret;
+-			maxlen -= ret;
++		ret = process_fetch_insn(arg->code, rec, dl, base);
++		if (arg->dynamic) {
++			if (unlikely(ret < 0)) {
++				*dl = make_data_loc(0, dyndata - base);
++			} else {
++				dyndata += ret;
++				maxlen -= ret;
++			}
+ 		}
+ 	}
+ }
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index 9900d4e3808cc..f6c47361c154e 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -217,9 +217,10 @@ static unsigned long translate_user_vaddr(unsigned long file_offset)
+ 
+ /* Note that we don't verify it, since the code does not come from user space */
+ static int
+-process_fetch_insn(struct fetch_insn *code, struct pt_regs *regs, void *dest,
++process_fetch_insn(struct fetch_insn *code, void *rec, void *dest,
+ 		   void *base)
+ {
++	struct pt_regs *regs = rec;
+ 	unsigned long val;
+ 
+ 	/* 1st stage: get value from context */
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 568f0f072b3df..7b40e4737a2bb 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -6370,9 +6370,14 @@ static inline int l2cap_le_command_rej(struct l2cap_conn *conn,
+ 	if (!chan)
+ 		goto done;
+ 
++	chan = l2cap_chan_hold_unless_zero(chan);
++	if (!chan)
++		goto done;
++
+ 	l2cap_chan_lock(chan);
+ 	l2cap_chan_del(chan, ECONNREFUSED);
+ 	l2cap_chan_unlock(chan);
++	l2cap_chan_put(chan);
+ 
+ done:
+ 	mutex_unlock(&conn->chan_lock);
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 98f4b4a80de42..742356cfd07c4 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2724,7 +2724,7 @@ void __sk_mem_reduce_allocated(struct sock *sk, int amount)
+ 	if (mem_cgroup_sockets_enabled && sk->sk_memcg)
+ 		mem_cgroup_uncharge_skmem(sk->sk_memcg, amount);
+ 
+-	if (sk_under_memory_pressure(sk) &&
++	if (sk_under_global_memory_pressure(sk) &&
+ 	    (sk_memory_allocated(sk) < sk_prot_mem_limits(sk, 0)))
+ 		sk_leave_memory_pressure(sk);
+ }
+diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
+index 84a818b09beeb..90f349be4848d 100644
+--- a/net/ipv4/ip_vti.c
++++ b/net/ipv4/ip_vti.c
+@@ -285,12 +285,12 @@ static netdev_tx_t vti_tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	switch (skb->protocol) {
+ 	case htons(ETH_P_IP):
+-		xfrm_decode_session(skb, &fl, AF_INET);
+ 		memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
++		xfrm_decode_session(skb, &fl, AF_INET);
+ 		break;
+ 	case htons(ETH_P_IPV6):
+-		xfrm_decode_session(skb, &fl, AF_INET6);
+ 		memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
++		xfrm_decode_session(skb, &fl, AF_INET6);
+ 		break;
+ 	default:
+ 		goto tx_err;
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index 715fdfa3e2ae9..d2e07bb30164c 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -582,7 +582,9 @@ out_reset_timer:
+ 	    tcp_stream_is_thin(tp) &&
+ 	    icsk->icsk_retransmits <= TCP_THIN_LINEAR_RETRIES) {
+ 		icsk->icsk_backoff = 0;
+-		icsk->icsk_rto = min(__tcp_set_rto(tp), TCP_RTO_MAX);
++		icsk->icsk_rto = clamp(__tcp_set_rto(tp),
++				       tcp_rto_min(sk),
++				       TCP_RTO_MAX);
+ 	} else {
+ 		/* Use normal (exponential) backoff */
+ 		icsk->icsk_rto = min(icsk->icsk_rto << 1, TCP_RTO_MAX);
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index 99f2dc802e366..162ba065d4764 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -567,12 +567,12 @@ vti6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		    vti6_addr_conflict(t, ipv6_hdr(skb)))
+ 			goto tx_err;
+ 
+-		xfrm_decode_session(skb, &fl, AF_INET6);
+ 		memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
++		xfrm_decode_session(skb, &fl, AF_INET6);
+ 		break;
+ 	case htons(ETH_P_IP):
+-		xfrm_decode_session(skb, &fl, AF_INET);
+ 		memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
++		xfrm_decode_session(skb, &fl, AF_INET);
+ 		break;
+ 	default:
+ 		goto tx_err;
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index fff2bd5f03e37..f42854973ba8d 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -1852,9 +1852,9 @@ static int pfkey_dump(struct sock *sk, struct sk_buff *skb, const struct sadb_ms
+ 	if (ext_hdrs[SADB_X_EXT_FILTER - 1]) {
+ 		struct sadb_x_filter *xfilter = ext_hdrs[SADB_X_EXT_FILTER - 1];
+ 
+-		if ((xfilter->sadb_x_filter_splen >=
++		if ((xfilter->sadb_x_filter_splen >
+ 			(sizeof(xfrm_address_t) << 3)) ||
+-		    (xfilter->sadb_x_filter_dplen >=
++		    (xfilter->sadb_x_filter_dplen >
+ 			(sizeof(xfrm_address_t) << 3))) {
+ 			mutex_unlock(&pfk->dump_lock);
+ 			return -EINVAL;
+diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
+index 29ec3ef63edc7..d0b64c36471d5 100644
+--- a/net/netfilter/ipvs/ip_vs_ctl.c
++++ b/net/netfilter/ipvs/ip_vs_ctl.c
+@@ -1802,6 +1802,7 @@ static int
+ proc_do_sync_threshold(struct ctl_table *table, int write,
+ 		       void *buffer, size_t *lenp, loff_t *ppos)
+ {
++	struct netns_ipvs *ipvs = table->extra2;
+ 	int *valp = table->data;
+ 	int val[2];
+ 	int rc;
+@@ -1811,6 +1812,7 @@ proc_do_sync_threshold(struct ctl_table *table, int write,
+ 		.mode = table->mode,
+ 	};
+ 
++	mutex_lock(&ipvs->sync_mutex);
+ 	memcpy(val, valp, sizeof(val));
+ 	rc = proc_dointvec(&tmp, write, buffer, lenp, ppos);
+ 	if (write) {
+@@ -1820,6 +1822,7 @@ proc_do_sync_threshold(struct ctl_table *table, int write,
+ 		else
+ 			memcpy(valp, val, sizeof(val));
+ 	}
++	mutex_unlock(&ipvs->sync_mutex);
+ 	return rc;
+ }
+ 
+@@ -4077,6 +4080,7 @@ static int __net_init ip_vs_control_net_init_sysctl(struct netns_ipvs *ipvs)
+ 	ipvs->sysctl_sync_threshold[0] = DEFAULT_SYNC_THRESHOLD;
+ 	ipvs->sysctl_sync_threshold[1] = DEFAULT_SYNC_PERIOD;
+ 	tbl[idx].data = &ipvs->sysctl_sync_threshold;
++	tbl[idx].extra2 = ipvs;
+ 	tbl[idx++].maxlen = sizeof(ipvs->sysctl_sync_threshold);
+ 	ipvs->sysctl_sync_refresh_period = DEFAULT_SYNC_REFRESH_PERIOD;
+ 	tbl[idx++].data = &ipvs->sysctl_sync_refresh_period;
+diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c
+index cec4b16170a0b..21cbaf6dac331 100644
+--- a/net/netfilter/nf_conntrack_proto_sctp.c
++++ b/net/netfilter/nf_conntrack_proto_sctp.c
+@@ -49,8 +49,8 @@ static const unsigned int sctp_timeouts[SCTP_CONNTRACK_MAX] = {
+ 	[SCTP_CONNTRACK_COOKIE_WAIT]		= 3 SECS,
+ 	[SCTP_CONNTRACK_COOKIE_ECHOED]		= 3 SECS,
+ 	[SCTP_CONNTRACK_ESTABLISHED]		= 210 SECS,
+-	[SCTP_CONNTRACK_SHUTDOWN_SENT]		= 300 SECS / 1000,
+-	[SCTP_CONNTRACK_SHUTDOWN_RECD]		= 300 SECS / 1000,
++	[SCTP_CONNTRACK_SHUTDOWN_SENT]		= 3 SECS,
++	[SCTP_CONNTRACK_SHUTDOWN_RECD]		= 3 SECS,
+ 	[SCTP_CONNTRACK_SHUTDOWN_ACK_SENT]	= 3 SECS,
+ 	[SCTP_CONNTRACK_HEARTBEAT_SENT]		= 30 SECS,
+ };
+@@ -105,7 +105,7 @@ static const u8 sctp_conntracks[2][11][SCTP_CONNTRACK_MAX] = {
+ 	{
+ /*	ORIGINAL	*/
+ /*                  sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS */
+-/* init         */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCW},
++/* init         */ {sCL, sCL, sCW, sCE, sES, sCL, sCL, sSA, sCW},
+ /* init_ack     */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},
+ /* abort        */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL},
+ /* shutdown     */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA, sCL},
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 8d47782b778f1..408b7f5faa5e5 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -138,6 +138,9 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ 	if (IS_ERR(set))
+ 		return PTR_ERR(set);
+ 
++	if (set->flags & NFT_SET_OBJECT)
++		return -EOPNOTSUPP;
++
+ 	if (set->ops->update == NULL)
+ 		return -EOPNOTSUPP;
+ 
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 3aa783a23c5f6..8d941cbba5cb7 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -2008,6 +2008,7 @@ static ssize_t unix_stream_sendpage(struct socket *socket, struct page *page,
+ 
+ 	if (false) {
+ alloc_skb:
++		spin_unlock(&other->sk_receive_queue.lock);
+ 		unix_state_unlock(other);
+ 		mutex_unlock(&unix_sk(other)->iolock);
+ 		newskb = sock_alloc_send_pskb(sk, 0, 0, flags & MSG_DONTWAIT,
+@@ -2047,6 +2048,7 @@ alloc_skb:
+ 		init_scm = false;
+ 	}
+ 
++	spin_lock(&other->sk_receive_queue.lock);
+ 	skb = skb_peek_tail(&other->sk_receive_queue);
+ 	if (tail && tail == skb) {
+ 		skb = newskb;
+@@ -2077,14 +2079,11 @@ alloc_skb:
+ 	refcount_add(size, &sk->sk_wmem_alloc);
+ 
+ 	if (newskb) {
+-		err = unix_scm_to_skb(&scm, skb, false);
+-		if (err)
+-			goto err_state_unlock;
+-		spin_lock(&other->sk_receive_queue.lock);
++		unix_scm_to_skb(&scm, skb, false);
+ 		__skb_queue_tail(&other->sk_receive_queue, newskb);
+-		spin_unlock(&other->sk_receive_queue.lock);
+ 	}
+ 
++	spin_unlock(&other->sk_receive_queue.lock);
+ 	unix_state_unlock(other);
+ 	mutex_unlock(&unix_sk(other)->iolock);
+ 
+diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c
+index 8cbf45a8bcdc2..655fe4ff86212 100644
+--- a/net/xfrm/xfrm_compat.c
++++ b/net/xfrm/xfrm_compat.c
+@@ -108,7 +108,7 @@ static const struct nla_policy compat_policy[XFRMA_MAX+1] = {
+ 	[XFRMA_ALG_COMP]	= { .len = sizeof(struct xfrm_algo) },
+ 	[XFRMA_ENCAP]		= { .len = sizeof(struct xfrm_encap_tmpl) },
+ 	[XFRMA_TMPL]		= { .len = sizeof(struct xfrm_user_tmpl) },
+-	[XFRMA_SEC_CTX]		= { .len = sizeof(struct xfrm_sec_ctx) },
++	[XFRMA_SEC_CTX]		= { .len = sizeof(struct xfrm_user_sec_ctx) },
+ 	[XFRMA_LTIME_VAL]	= { .len = sizeof(struct xfrm_lifetime_cur) },
+ 	[XFRMA_REPLAY_VAL]	= { .len = sizeof(struct xfrm_replay_state) },
+ 	[XFRMA_REPLAY_THRESH]	= { .type = NLA_U32 },
+diff --git a/net/xfrm/xfrm_interface_core.c b/net/xfrm/xfrm_interface_core.c
+index e4f21a6924153..4eeec33675754 100644
+--- a/net/xfrm/xfrm_interface_core.c
++++ b/net/xfrm/xfrm_interface_core.c
+@@ -403,8 +403,8 @@ static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	switch (skb->protocol) {
+ 	case htons(ETH_P_IPV6):
+-		xfrm_decode_session(skb, &fl, AF_INET6);
+ 		memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
++		xfrm_decode_session(skb, &fl, AF_INET6);
+ 		if (!dst) {
+ 			fl.u.ip6.flowi6_oif = dev->ifindex;
+ 			fl.u.ip6.flowi6_flags |= FLOWI_FLAG_ANYSRC;
+@@ -418,8 +418,8 @@ static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		}
+ 		break;
+ 	case htons(ETH_P_IP):
+-		xfrm_decode_session(skb, &fl, AF_INET);
+ 		memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
++		xfrm_decode_session(skb, &fl, AF_INET);
+ 		if (!dst) {
+ 			struct rtable *rt;
+ 
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index c6bf3898d1bf0..8fce2e93bb3b3 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -527,7 +527,7 @@ static void xfrm_update_ae_params(struct xfrm_state *x, struct nlattr **attrs,
+ 	struct nlattr *rt = attrs[XFRMA_REPLAY_THRESH];
+ 	struct nlattr *mt = attrs[XFRMA_MTIMER_THRESH];
+ 
+-	if (re) {
++	if (re && x->replay_esn && x->preplay_esn) {
+ 		struct xfrm_replay_state_esn *replay_esn;
+ 		replay_esn = nla_data(re);
+ 		memcpy(x->replay_esn, replay_esn,
+@@ -1062,6 +1062,15 @@ static int xfrm_dump_sa(struct sk_buff *skb, struct netlink_callback *cb)
+ 					 sizeof(*filter), GFP_KERNEL);
+ 			if (filter == NULL)
+ 				return -ENOMEM;
++
++			/* see addr_match(), (prefix length >> 5) << 2
++			 * will be used to compare xfrm_address_t
++			 */
++			if (filter->splen > (sizeof(xfrm_address_t) << 3) ||
++			    filter->dplen > (sizeof(xfrm_address_t) << 3)) {
++				kfree(filter);
++				return -EINVAL;
++			}
+ 		}
+ 
+ 		if (attrs[XFRMA_PROTO])
+@@ -2728,7 +2737,7 @@ const struct nla_policy xfrma_policy[XFRMA_MAX+1] = {
+ 	[XFRMA_ALG_COMP]	= { .len = sizeof(struct xfrm_algo) },
+ 	[XFRMA_ENCAP]		= { .len = sizeof(struct xfrm_encap_tmpl) },
+ 	[XFRMA_TMPL]		= { .len = sizeof(struct xfrm_user_tmpl) },
+-	[XFRMA_SEC_CTX]		= { .len = sizeof(struct xfrm_sec_ctx) },
++	[XFRMA_SEC_CTX]		= { .len = sizeof(struct xfrm_user_sec_ctx) },
+ 	[XFRMA_LTIME_VAL]	= { .len = sizeof(struct xfrm_lifetime_cur) },
+ 	[XFRMA_REPLAY_VAL]	= { .len = sizeof(struct xfrm_replay_state) },
+ 	[XFRMA_REPLAY_THRESH]	= { .type = NLA_U32 },
+@@ -2748,6 +2757,7 @@ const struct nla_policy xfrma_policy[XFRMA_MAX+1] = {
+ 	[XFRMA_SET_MARK]	= { .type = NLA_U32 },
+ 	[XFRMA_SET_MARK_MASK]	= { .type = NLA_U32 },
+ 	[XFRMA_IF_ID]		= { .type = NLA_U32 },
++	[XFRMA_MTIMER_THRESH]   = { .type = NLA_U32 },
+ };
+ EXPORT_SYMBOL_GPL(xfrma_policy);
+ 
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index 755af0b29e755..0a5ae1e8da47a 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -8,7 +8,7 @@ config IMA
+ 	select CRYPTO_HMAC
+ 	select CRYPTO_SHA1
+ 	select CRYPTO_HASH_INFO
+-	select TCG_TPM if HAS_IOMEM && !UML
++	select TCG_TPM if HAS_IOMEM
+ 	select TCG_TIS if TCG_TPM && X86
+ 	select TCG_CRB if TCG_TPM && ACPI
+ 	select TCG_IBMVTPM if TCG_TPM && PPC_PSERIES
+diff --git a/sound/hda/hdac_regmap.c b/sound/hda/hdac_regmap.c
+index d75f31eb9d78f..bf35acca5ea0e 100644
+--- a/sound/hda/hdac_regmap.c
++++ b/sound/hda/hdac_regmap.c
+@@ -597,10 +597,9 @@ EXPORT_SYMBOL_GPL(snd_hdac_regmap_update_raw_once);
+  */
+ void snd_hdac_regmap_sync(struct hdac_device *codec)
+ {
+-	if (codec->regmap) {
+-		mutex_lock(&codec->regmap_lock);
++	mutex_lock(&codec->regmap_lock);
++	if (codec->regmap)
+ 		regcache_sync(codec->regmap);
+-		mutex_unlock(&codec->regmap_lock);
+-	}
++	mutex_unlock(&codec->regmap_lock);
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_regmap_sync);
+diff --git a/sound/pci/emu10k1/emufx.c b/sound/pci/emu10k1/emufx.c
+index 4e76ed0e91d5d..e17b93b25d2ff 100644
+--- a/sound/pci/emu10k1/emufx.c
++++ b/sound/pci/emu10k1/emufx.c
+@@ -1560,14 +1560,8 @@ A_OP(icode, &ptr, iMAC0, A_GPR(var), A_GPR(var), A_GPR(vol), A_EXTIN(input))
+ 	gpr += 2;
+ 
+ 	/* Master volume (will be renamed later) */
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+0+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+0+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+1+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+1+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+2+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+2+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+3+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+3+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+4+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+4+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+5+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+5+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+6+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+6+SND_EMU10K1_PLAYBACK_CHANNELS));
+-	A_OP(icode, &ptr, iMAC0, A_GPR(playback+7+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+7+SND_EMU10K1_PLAYBACK_CHANNELS));
++	for (z = 0; z < 8; z++)
++		A_OP(icode, &ptr, iMAC0, A_GPR(playback+z+SND_EMU10K1_PLAYBACK_CHANNELS), A_C_00000000, A_GPR(gpr), A_GPR(playback+z+SND_EMU10K1_PLAYBACK_CHANNELS));
+ 	snd_emu10k1_init_mono_control(&controls[nctl++], "Wave Master Playback Volume", gpr, 0);
+ 	gpr += 2;
+ 
+@@ -1651,102 +1645,14 @@ A_OP(icode, &ptr, iMAC0, A_GPR(var), A_GPR(var), A_GPR(vol), A_EXTIN(input))
+ 			dev_dbg(emu->card->dev, "emufx.c: gpr=0x%x, tmp=0x%x\n",
+ 			       gpr, tmp);
+ 			*/
+-			/* For the EMU1010: How to get 32bit values from the DSP. High 16bits into L, low 16bits into R. */
+-			/* A_P16VIN(0) is delayed by one sample,
+-			 * so all other A_P16VIN channels will need to also be delayed
+-			 */
+-			/* Left ADC in. 1 of 2 */
+ 			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_P16VIN(0x0), A_FXBUS2(0) );
+-			/* Right ADC in 1 of 2 */
+-			gpr_map[gpr++] = 0x00000000;
+-			/* Delaying by one sample: instead of copying the input
+-			 * value A_P16VIN to output A_FXBUS2 as in the first channel,
+-			 * we use an auxiliary register, delaying the value by one
+-			 * sample
+-			 */
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(2) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x1), A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(4) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x2), A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(6) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x3), A_C_00000000, A_C_00000000);
+-			/* For 96kHz mode */
+-			/* Left ADC in. 2 of 2 */
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(0x8) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x4), A_C_00000000, A_C_00000000);
+-			/* Right ADC in 2 of 2 */
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(0xa) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x5), A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(0xc) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x6), A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr - 1), A_FXBUS2(0xe) );
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x7), A_C_00000000, A_C_00000000);
+-			/* Pavel Hofman - we still have voices, A_FXBUS2s, and
+-			 * A_P16VINs available -
+-			 * let's add 8 more capture channels - total of 16
+-			 */
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x10));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x8),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x12));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0x9),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x14));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0xa),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x16));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0xb),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x18));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0xc),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x1a));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0xd),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x1c));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0xe),
+-			     A_C_00000000, A_C_00000000);
+-			gpr_map[gpr++] = 0x00000000;
+-			snd_emu10k1_audigy_dsp_convert_32_to_2x16(icode, &ptr, tmp,
+-								  bit_shifter16,
+-								  A_GPR(gpr - 1),
+-								  A_FXBUS2(0x1e));
+-			A_OP(icode, &ptr, iACC3, A_GPR(gpr - 1), A_P16VIN(0xf),
+-			     A_C_00000000, A_C_00000000);
++			/* A_P16VIN(0) is delayed by one sample, so all other A_P16VIN channels
++			 * will need to also be delayed; we use an auxiliary register for that. */
++			for (z = 1; z < 0x10; z++) {
++				snd_emu10k1_audigy_dsp_convert_32_to_2x16( icode, &ptr, tmp, bit_shifter16, A_GPR(gpr), A_FXBUS2(z * 2) );
++				A_OP(icode, &ptr, iACC3, A_GPR(gpr), A_P16VIN(z), A_C_00000000, A_C_00000000);
++				gpr_map[gpr++] = 0x00000000;
++			}
+ 		}
+ 
+ #if 0
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index db8593d794315..adfab80b8189d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10006,6 +10006,7 @@ static int patch_alc269(struct hda_codec *codec)
+ 	spec = codec->spec;
+ 	spec->gen.shared_mic_vref_pin = 0x18;
+ 	codec->power_save_node = 0;
++	spec->en_3kpull_low = true;
+ 
+ #ifdef CONFIG_PM
+ 	codec->patch_ops.suspend = alc269_suspend;
+@@ -10088,14 +10089,16 @@ static int patch_alc269(struct hda_codec *codec)
+ 		spec->shutup = alc256_shutup;
+ 		spec->init_hook = alc256_init;
+ 		spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
+-		if (codec->bus->pci->vendor == PCI_VENDOR_ID_AMD)
+-			spec->en_3kpull_low = true;
++		if (codec->core.vendor_id == 0x10ec0236 &&
++		    codec->bus->pci->vendor != PCI_VENDOR_ID_AMD)
++			spec->en_3kpull_low = false;
+ 		break;
+ 	case 0x10ec0257:
+ 		spec->codec_variant = ALC269_TYPE_ALC257;
+ 		spec->shutup = alc256_shutup;
+ 		spec->init_hook = alc256_init;
+ 		spec->gen.mixer_nid = 0;
++		spec->en_3kpull_low = false;
+ 		break;
+ 	case 0x10ec0215:
+ 	case 0x10ec0245:
+@@ -10719,6 +10722,7 @@ enum {
+ 	ALC897_FIXUP_HP_HSMIC_VERB,
+ 	ALC897_FIXUP_LENOVO_HEADSET_MODE,
+ 	ALC897_FIXUP_HEADSET_MIC_PIN2,
++	ALC897_FIXUP_UNIS_H3C_X500S,
+ };
+ 
+ static const struct hda_fixup alc662_fixups[] = {
+@@ -11158,6 +11162,13 @@ static const struct hda_fixup alc662_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC897_FIXUP_LENOVO_HEADSET_MODE
+ 	},
++	[ALC897_FIXUP_UNIS_H3C_X500S] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x14, AC_VERB_SET_EAPD_BTLENABLE, 0 },
++			{}
++		},
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+@@ -11319,6 +11330,7 @@ static const struct hda_model_fixup alc662_fixup_models[] = {
+ 	{.id = ALC662_FIXUP_USI_HEADSET_MODE, .name = "usi-headset"},
+ 	{.id = ALC662_FIXUP_LENOVO_MULTI_CODECS, .name = "dual-codecs"},
+ 	{.id = ALC669_FIXUP_ACER_ASPIRE_ETHOS, .name = "aspire-ethos"},
++	{.id = ALC897_FIXUP_UNIS_H3C_X500S, .name = "unis-h3c-x500s"},
+ 	{}
+ };
+ 
+diff --git a/sound/soc/codecs/rt5665.c b/sound/soc/codecs/rt5665.c
+index 8a915cdce0fe9..8b73c2d7f1f10 100644
+--- a/sound/soc/codecs/rt5665.c
++++ b/sound/soc/codecs/rt5665.c
+@@ -4472,6 +4472,8 @@ static void rt5665_remove(struct snd_soc_component *component)
+ 	struct rt5665_priv *rt5665 = snd_soc_component_get_drvdata(component);
+ 
+ 	regmap_write(rt5665->regmap, RT5665_RESET, 0);
++
++	regulator_bulk_disable(ARRAY_SIZE(rt5665->supplies), rt5665->supplies);
+ }
+ 
+ #ifdef CONFIG_PM
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index f5d8f7951cfc3..cbbb50ddc7954 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -199,6 +199,31 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 					SOF_RT715_DAI_ID_FIX |
+ 					SOF_SDW_PCH_DMIC),
+ 	},
++	{
++		.callback = sof_sdw_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Meteor Lake Client Platform"),
++		},
++		.driver_data = (void *)(RT711_JD2_100K),
++	},
++	{
++		.callback = sof_sdw_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Google"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Rex"),
++		},
++		.driver_data = (void *)(SOF_SDW_PCH_DMIC),
++	},
++	/* LunarLake devices */
++	{
++		.callback = sof_sdw_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Lunar Lake Client Platform"),
++		},
++		.driver_data = (void *)(RT711_JD2_100K),
++	},
+ 	{}
+ };
+ 
+diff --git a/sound/soc/meson/axg-tdm-formatter.c b/sound/soc/meson/axg-tdm-formatter.c
+index cab7fa2851aa8..4834cfd163c03 100644
+--- a/sound/soc/meson/axg-tdm-formatter.c
++++ b/sound/soc/meson/axg-tdm-formatter.c
+@@ -30,27 +30,32 @@ int axg_tdm_formatter_set_channel_masks(struct regmap *map,
+ 					struct axg_tdm_stream *ts,
+ 					unsigned int offset)
+ {
+-	unsigned int val, ch = ts->channels;
+-	unsigned long mask;
+-	int i, j;
++	unsigned int ch = ts->channels;
++	u32 val[AXG_TDM_NUM_LANES];
++	int i, j, k;
++
++	/*
++	 * We need to mimick the slot distribution used by the HW to keep the
++	 * channel placement consistent regardless of the number of channel
++	 * in the stream. This is why the odd algorithm below is used.
++	 */
++	memset(val, 0, sizeof(*val) * AXG_TDM_NUM_LANES);
+ 
+ 	/*
+ 	 * Distribute the channels of the stream over the available slots
+-	 * of each TDM lane
++	 * of each TDM lane. We need to go over the 32 slots ...
+ 	 */
+-	for (i = 0; i < AXG_TDM_NUM_LANES; i++) {
+-		val = 0;
+-		mask = ts->mask[i];
+-
+-		for (j = find_first_bit(&mask, 32);
+-		     (j < 32) && ch;
+-		     j = find_next_bit(&mask, 32, j + 1)) {
+-			val |= 1 << j;
+-			ch -= 1;
++	for (i = 0; (i < 32) && ch; i += 2) {
++		/* ... of all the lanes ... */
++		for (j = 0; j < AXG_TDM_NUM_LANES; j++) {
++			/* ... then distribute the channels in pairs */
++			for (k = 0; k < 2; k++) {
++				if ((BIT(i + k) & ts->mask[j]) && ch) {
++					val[j] |= BIT(i + k);
++					ch -= 1;
++				}
++			}
+ 		}
+-
+-		regmap_write(map, offset, val);
+-		offset += regmap_get_reg_stride(map);
+ 	}
+ 
+ 	/*
+@@ -63,6 +68,11 @@ int axg_tdm_formatter_set_channel_masks(struct regmap *map,
+ 		return -EINVAL;
+ 	}
+ 
++	for (i = 0; i < AXG_TDM_NUM_LANES; i++) {
++		regmap_write(map, offset, val[i]);
++		offset += regmap_get_reg_stride(map);
++	}
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(axg_tdm_formatter_set_channel_masks);
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index ec74aead0844c..97fe2fadcafb3 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3797,6 +3797,35 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ 		}
+ 	}
+ },
++{
++	/* Advanced modes of the Mythware XA001AU.
++	 * For the standard mode, Mythware XA001AU has ID ffad:a001
++	 */
++	USB_DEVICE_VENDOR_SPEC(0xffad, 0xa001),
++	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++		.vendor_name = "Mythware",
++		.product_name = "XA001AU",
++		.ifnum = QUIRK_ANY_INTERFACE,
++		.type = QUIRK_COMPOSITE,
++		.data = (const struct snd_usb_audio_quirk[]) {
++			{
++				.ifnum = 0,
++				.type = QUIRK_IGNORE_INTERFACE,
++			},
++			{
++				.ifnum = 1,
++				.type = QUIRK_AUDIO_STANDARD_INTERFACE,
++			},
++			{
++				.ifnum = 2,
++				.type = QUIRK_AUDIO_STANDARD_INTERFACE,
++			},
++			{
++				.ifnum = -1
++			}
++		}
++	}
++},
+ 
+ #undef USB_DEVICE_VENDOR_SPEC
+ #undef USB_AUDIO_DEVICE
+diff --git a/tools/include/linux/objtool.h b/tools/include/linux/objtool.h
+index 662f19374bd98..d17fa7f4001d7 100644
+--- a/tools/include/linux/objtool.h
++++ b/tools/include/linux/objtool.h
+@@ -71,6 +71,23 @@ struct unwind_hint {
+ 	static void __used __section(".discard.func_stack_frame_non_standard") \
+ 		*__func_stack_frame_non_standard_##func = func
+ 
++/*
++ * STACK_FRAME_NON_STANDARD_FP() is a frame-pointer-specific function ignore
++ * for the case where a function is intentionally missing frame pointer setup,
++ * but otherwise needs objtool/ORC coverage when frame pointers are disabled.
++ */
++#ifdef CONFIG_FRAME_POINTER
++#define STACK_FRAME_NON_STANDARD_FP(func) STACK_FRAME_NON_STANDARD(func)
++#else
++#define STACK_FRAME_NON_STANDARD_FP(func)
++#endif
++
++#define ANNOTATE_NOENDBR					\
++	"986: \n\t"						\
++	".pushsection .discard.noendbr\n\t"			\
++	_ASM_PTR " 986b\n\t"					\
++	".popsection\n\t"
++
+ #else /* __ASSEMBLY__ */
+ 
+ /*
+@@ -117,6 +134,13 @@ struct unwind_hint {
+ 	.popsection
+ .endm
+ 
++.macro ANNOTATE_NOENDBR
++.Lhere_\@:
++	.pushsection .discard.noendbr
++	.quad	.Lhere_\@
++	.popsection
++.endm
++
+ #endif /* __ASSEMBLY__ */
+ 
+ #else /* !CONFIG_STACK_VALIDATION */
+@@ -126,10 +150,14 @@ struct unwind_hint {
+ #define UNWIND_HINT(sp_reg, sp_offset, type, end)	\
+ 	"\n\t"
+ #define STACK_FRAME_NON_STANDARD(func)
++#define STACK_FRAME_NON_STANDARD_FP(func)
++#define ANNOTATE_NOENDBR
+ #else
+ #define ANNOTATE_INTRA_FUNCTION_CALL
+ .macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0
+ .endm
++.macro ANNOTATE_NOENDBR
++.endm
+ #endif
+ 
+ #endif /* CONFIG_STACK_VALIDATION */
+diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c
+index 5b915ebb61163..b6791e8d9ab3b 100644
+--- a/tools/objtool/arch/x86/decode.c
++++ b/tools/objtool/arch/x86/decode.c
+@@ -655,5 +655,5 @@ bool arch_is_rethunk(struct symbol *sym)
+ 	return !strcmp(sym->name, "__x86_return_thunk") ||
+ 	       !strcmp(sym->name, "srso_untrain_ret") ||
+ 	       !strcmp(sym->name, "srso_safe_ret") ||
+-	       !strcmp(sym->name, "__ret");
++	       !strcmp(sym->name, "retbleed_return_thunk");
+ }
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 9a0a54194636c..965c055aa8088 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -369,7 +369,7 @@ static int decode_instructions(struct objtool_file *file)
+ 
+ 		if (!strcmp(sec->name, ".noinstr.text") ||
+ 		    !strcmp(sec->name, ".entry.text") ||
+-		    !strncmp(sec->name, ".text.__x86.", 12))
++		    !strncmp(sec->name, ".text..__x86.", 13))
+ 			sec->noinstr = true;
+ 
+ 		for (offset = 0; offset < sec->len; offset += insn->len) {
+@@ -1165,7 +1165,7 @@ static int add_jump_destinations(struct objtool_file *file)
+ 				continue;
+ 
+ 			/*
+-			 * This is a special case for zen_untrain_ret().
++			 * This is a special case for retbleed_untrain_ret().
+ 			 * It jumps to __x86_return_thunk(), but objtool
+ 			 * can't find the thunk's starting RET
+ 			 * instruction, because the RET is also in the
+@@ -2079,12 +2079,17 @@ static int decode_sections(struct objtool_file *file)
+ 	return 0;
+ }
+ 
+-static bool is_fentry_call(struct instruction *insn)
++static bool is_special_call(struct instruction *insn)
+ {
+-	if (insn->type == INSN_CALL &&
+-	    insn->call_dest &&
+-	    insn->call_dest->fentry)
+-		return true;
++	if (insn->type == INSN_CALL) {
++		struct symbol *dest = insn->call_dest;
++
++		if (!dest)
++			return false;
++
++		if (dest->fentry)
++			return true;
++	}
+ 
+ 	return false;
+ }
+@@ -2958,7 +2963,7 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ 			if (ret)
+ 				return ret;
+ 
+-			if (!no_fp && func && !is_fentry_call(insn) &&
++			if (!no_fp && func && !is_special_call(insn) &&
+ 			    !has_valid_stack_frame(&state)) {
+ 				WARN_FUNC("call without frame pointer save/setup",
+ 					  sec, insn->offset);
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_changes.sh b/tools/testing/selftests/net/forwarding/mirror_gre_changes.sh
+index 472bd023e2a5f..b501b366367f7 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_changes.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_changes.sh
+@@ -72,7 +72,8 @@ test_span_gre_ttl()
+ 
+ 	RET=0
+ 
+-	mirror_install $swp1 ingress $tundev "matchall $tcflags"
++	mirror_install $swp1 ingress $tundev \
++		"prot ip flower $tcflags ip_prot icmp"
+ 	tc filter add dev $h3 ingress pref 77 prot $prot \
+ 		flower ip_ttl 50 action pass
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-08-30 14:45 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-08-30 14:45 UTC (permalink / raw
  To: gentoo-commits

commit:     17aa2602b0563a3ebc29486f01fe79d5df5791ff
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 30 14:45:49 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 30 14:45:49 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=17aa2602

Linux patch 5.10.193

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1192_linux-5.10.193.patch | 4055 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4059 insertions(+)

diff --git a/0000_README b/0000_README
index d0216fc5..9cc075fd 100644
--- a/0000_README
+++ b/0000_README
@@ -811,6 +811,10 @@ Patch:  1191_linux-5.10.192.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.192
 
+Patch:  1192_linux-5.10.193.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.193
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1192_linux-5.10.193.patch b/1192_linux-5.10.193.patch
new file mode 100644
index 00000000..aaf38c34
--- /dev/null
+++ b/1192_linux-5.10.193.patch
@@ -0,0 +1,4055 @@
+diff --git a/Makefile b/Makefile
+index 316598ce1b126..0423b4b2b000f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 192
++SUBLEVEL = 193
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
+index 8294eaa6f902d..dd03bc905841f 100644
+--- a/arch/mips/include/asm/cpu-features.h
++++ b/arch/mips/include/asm/cpu-features.h
+@@ -126,7 +126,24 @@
+ #define cpu_has_tx39_cache	__opt(MIPS_CPU_TX39_CACHE)
+ #endif
+ #ifndef cpu_has_octeon_cache
+-#define cpu_has_octeon_cache	0
++#define cpu_has_octeon_cache						\
++({									\
++	int __res;							\
++									\
++	switch (boot_cpu_type()) {					\
++	case CPU_CAVIUM_OCTEON:						\
++	case CPU_CAVIUM_OCTEON_PLUS:					\
++	case CPU_CAVIUM_OCTEON2:					\
++	case CPU_CAVIUM_OCTEON3:					\
++		__res = 1;						\
++		break;							\
++									\
++	default:							\
++		__res = 0;						\
++	}								\
++									\
++	__res;								\
++})
+ #endif
+ /* Don't override `cpu_has_fpu' to 1 or the "nofpu" option won't work.  */
+ #ifndef cpu_has_fpu
+@@ -353,7 +370,7 @@
+ ({									\
+ 	int __res;							\
+ 									\
+-	switch (current_cpu_type()) {					\
++	switch (boot_cpu_type()) {					\
+ 	case CPU_M14KC:							\
+ 	case CPU_74K:							\
+ 	case CPU_1074K:							\
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index 80836b94189e7..b897feb519adf 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -892,6 +892,14 @@ void __init fpu__init_system_xstate(void)
+ 	setup_init_fpu_buf();
+ 	setup_xstate_comp_offsets();
+ 	setup_supervisor_only_offsets();
++
++	/*
++	 * CPU capabilities initialization runs before FPU init. So
++	 * X86_FEATURE_OSXSAVE is not set. Now that XSAVE is completely
++	 * functional, set the feature bit so depending code works.
++	 */
++	setup_force_cpu_cap(X86_FEATURE_OSXSAVE);
++
+ 	print_xstate_offset_size();
+ 
+ 	pr_info("x86/fpu: Enabled xstate features 0x%llx, context size is %d bytes, using '%s' format.\n",
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 63491748dc8d7..95cbd5790ed60 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -3740,7 +3740,7 @@ static int rbd_lock(struct rbd_device *rbd_dev)
+ 	ret = ceph_cls_lock(osdc, &rbd_dev->header_oid, &rbd_dev->header_oloc,
+ 			    RBD_LOCK_NAME, CEPH_CLS_LOCK_EXCLUSIVE, cookie,
+ 			    RBD_LOCK_TAG, "", 0);
+-	if (ret)
++	if (ret && ret != -EEXIST)
+ 		return ret;
+ 
+ 	__rbd_lock(rbd_dev, cookie);
+@@ -3914,10 +3914,26 @@ static void wake_lock_waiters(struct rbd_device *rbd_dev, int result)
+ 	list_splice_tail_init(&rbd_dev->acquiring_list, &rbd_dev->running_list);
+ }
+ 
+-static int get_lock_owner_info(struct rbd_device *rbd_dev,
+-			       struct ceph_locker **lockers, u32 *num_lockers)
++static bool locker_equal(const struct ceph_locker *lhs,
++			 const struct ceph_locker *rhs)
++{
++	return lhs->id.name.type == rhs->id.name.type &&
++	       lhs->id.name.num == rhs->id.name.num &&
++	       !strcmp(lhs->id.cookie, rhs->id.cookie) &&
++	       ceph_addr_equal_no_type(&lhs->info.addr, &rhs->info.addr);
++}
++
++static void free_locker(struct ceph_locker *locker)
++{
++	if (locker)
++		ceph_free_lockers(locker, 1);
++}
++
++static struct ceph_locker *get_lock_owner_info(struct rbd_device *rbd_dev)
+ {
+ 	struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
++	struct ceph_locker *lockers;
++	u32 num_lockers;
+ 	u8 lock_type;
+ 	char *lock_tag;
+ 	int ret;
+@@ -3926,39 +3942,45 @@ static int get_lock_owner_info(struct rbd_device *rbd_dev,
+ 
+ 	ret = ceph_cls_lock_info(osdc, &rbd_dev->header_oid,
+ 				 &rbd_dev->header_oloc, RBD_LOCK_NAME,
+-				 &lock_type, &lock_tag, lockers, num_lockers);
+-	if (ret)
+-		return ret;
++				 &lock_type, &lock_tag, &lockers, &num_lockers);
++	if (ret) {
++		rbd_warn(rbd_dev, "failed to get header lockers: %d", ret);
++		return ERR_PTR(ret);
++	}
+ 
+-	if (*num_lockers == 0) {
++	if (num_lockers == 0) {
+ 		dout("%s rbd_dev %p no lockers detected\n", __func__, rbd_dev);
++		lockers = NULL;
+ 		goto out;
+ 	}
+ 
+ 	if (strcmp(lock_tag, RBD_LOCK_TAG)) {
+ 		rbd_warn(rbd_dev, "locked by external mechanism, tag %s",
+ 			 lock_tag);
+-		ret = -EBUSY;
+-		goto out;
++		goto err_busy;
+ 	}
+ 
+ 	if (lock_type == CEPH_CLS_LOCK_SHARED) {
+ 		rbd_warn(rbd_dev, "shared lock type detected");
+-		ret = -EBUSY;
+-		goto out;
++		goto err_busy;
+ 	}
+ 
+-	if (strncmp((*lockers)[0].id.cookie, RBD_LOCK_COOKIE_PREFIX,
++	WARN_ON(num_lockers != 1);
++	if (strncmp(lockers[0].id.cookie, RBD_LOCK_COOKIE_PREFIX,
+ 		    strlen(RBD_LOCK_COOKIE_PREFIX))) {
+ 		rbd_warn(rbd_dev, "locked by external mechanism, cookie %s",
+-			 (*lockers)[0].id.cookie);
+-		ret = -EBUSY;
+-		goto out;
++			 lockers[0].id.cookie);
++		goto err_busy;
+ 	}
+ 
+ out:
+ 	kfree(lock_tag);
+-	return ret;
++	return lockers;
++
++err_busy:
++	kfree(lock_tag);
++	ceph_free_lockers(lockers, num_lockers);
++	return ERR_PTR(-EBUSY);
+ }
+ 
+ static int find_watcher(struct rbd_device *rbd_dev,
+@@ -3974,13 +3996,19 @@ static int find_watcher(struct rbd_device *rbd_dev,
+ 	ret = ceph_osdc_list_watchers(osdc, &rbd_dev->header_oid,
+ 				      &rbd_dev->header_oloc, &watchers,
+ 				      &num_watchers);
+-	if (ret)
++	if (ret) {
++		rbd_warn(rbd_dev, "failed to get watchers: %d", ret);
+ 		return ret;
++	}
+ 
+ 	sscanf(locker->id.cookie, RBD_LOCK_COOKIE_PREFIX " %llu", &cookie);
+ 	for (i = 0; i < num_watchers; i++) {
+-		if (!memcmp(&watchers[i].addr, &locker->info.addr,
+-			    sizeof(locker->info.addr)) &&
++		/*
++		 * Ignore addr->type while comparing.  This mimics
++		 * entity_addr_t::get_legacy_str() + strcmp().
++		 */
++		if (ceph_addr_equal_no_type(&watchers[i].addr,
++					    &locker->info.addr) &&
+ 		    watchers[i].cookie == cookie) {
+ 			struct rbd_client_id cid = {
+ 				.gid = le64_to_cpu(watchers[i].name.num),
+@@ -4008,51 +4036,72 @@ out:
+ static int rbd_try_lock(struct rbd_device *rbd_dev)
+ {
+ 	struct ceph_client *client = rbd_dev->rbd_client->client;
+-	struct ceph_locker *lockers;
+-	u32 num_lockers;
++	struct ceph_locker *locker, *refreshed_locker;
+ 	int ret;
+ 
+ 	for (;;) {
++		locker = refreshed_locker = NULL;
++
+ 		ret = rbd_lock(rbd_dev);
+-		if (ret != -EBUSY)
+-			return ret;
++		if (!ret)
++			goto out;
++		if (ret != -EBUSY) {
++			rbd_warn(rbd_dev, "failed to lock header: %d", ret);
++			goto out;
++		}
+ 
+ 		/* determine if the current lock holder is still alive */
+-		ret = get_lock_owner_info(rbd_dev, &lockers, &num_lockers);
+-		if (ret)
+-			return ret;
+-
+-		if (num_lockers == 0)
++		locker = get_lock_owner_info(rbd_dev);
++		if (IS_ERR(locker)) {
++			ret = PTR_ERR(locker);
++			locker = NULL;
++			goto out;
++		}
++		if (!locker)
+ 			goto again;
+ 
+-		ret = find_watcher(rbd_dev, lockers);
++		ret = find_watcher(rbd_dev, locker);
+ 		if (ret)
+ 			goto out; /* request lock or error */
+ 
++		refreshed_locker = get_lock_owner_info(rbd_dev);
++		if (IS_ERR(refreshed_locker)) {
++			ret = PTR_ERR(refreshed_locker);
++			refreshed_locker = NULL;
++			goto out;
++		}
++		if (!refreshed_locker ||
++		    !locker_equal(locker, refreshed_locker))
++			goto again;
++
+ 		rbd_warn(rbd_dev, "breaking header lock owned by %s%llu",
+-			 ENTITY_NAME(lockers[0].id.name));
++			 ENTITY_NAME(locker->id.name));
+ 
+ 		ret = ceph_monc_blocklist_add(&client->monc,
+-					      &lockers[0].info.addr);
++					      &locker->info.addr);
+ 		if (ret) {
+-			rbd_warn(rbd_dev, "blocklist of %s%llu failed: %d",
+-				 ENTITY_NAME(lockers[0].id.name), ret);
++			rbd_warn(rbd_dev, "failed to blocklist %s%llu: %d",
++				 ENTITY_NAME(locker->id.name), ret);
+ 			goto out;
+ 		}
+ 
+ 		ret = ceph_cls_break_lock(&client->osdc, &rbd_dev->header_oid,
+ 					  &rbd_dev->header_oloc, RBD_LOCK_NAME,
+-					  lockers[0].id.cookie,
+-					  &lockers[0].id.name);
+-		if (ret && ret != -ENOENT)
++					  locker->id.cookie, &locker->id.name);
++		if (ret && ret != -ENOENT) {
++			rbd_warn(rbd_dev, "failed to break header lock: %d",
++				 ret);
+ 			goto out;
++		}
+ 
+ again:
+-		ceph_free_lockers(lockers, num_lockers);
++		free_locker(refreshed_locker);
++		free_locker(locker);
+ 	}
+ 
+ out:
+-	ceph_free_lockers(lockers, num_lockers);
++	free_locker(refreshed_locker);
++	free_locker(locker);
+ 	return ret;
+ }
+ 
+@@ -4102,11 +4151,8 @@ static int rbd_try_acquire_lock(struct rbd_device *rbd_dev)
+ 
+ 	ret = rbd_try_lock(rbd_dev);
+ 	if (ret < 0) {
+-		rbd_warn(rbd_dev, "failed to lock header: %d", ret);
+-		if (ret == -EBLOCKLISTED)
+-			goto out;
+-
+-		ret = 1; /* request lock anyway */
++		rbd_warn(rbd_dev, "failed to acquire lock: %d", ret);
++		goto out;
+ 	}
+ 	if (ret > 0) {
+ 		up_write(&rbd_dev->lock_rwsem);
+@@ -6656,12 +6702,11 @@ static int rbd_add_acquire_lock(struct rbd_device *rbd_dev)
+ 		cancel_delayed_work_sync(&rbd_dev->lock_dwork);
+ 		if (!ret)
+ 			ret = -ETIMEDOUT;
+-	}
+ 
+-	if (ret) {
+-		rbd_warn(rbd_dev, "failed to acquire exclusive lock: %ld", ret);
+-		return ret;
++		rbd_warn(rbd_dev, "failed to acquire lock: %ld", ret);
+ 	}
++	if (ret)
++		return ret;
+ 
+ 	/*
+ 	 * The lock may have been released by now, unless automatic lock
+diff --git a/drivers/clk/clk-devres.c b/drivers/clk/clk-devres.c
+index 4fb4fd4b06bda..737aa70e2cb3d 100644
+--- a/drivers/clk/clk-devres.c
++++ b/drivers/clk/clk-devres.c
+@@ -205,18 +205,19 @@ EXPORT_SYMBOL(devm_clk_put);
+ struct clk *devm_get_clk_from_child(struct device *dev,
+ 				    struct device_node *np, const char *con_id)
+ {
+-	struct clk **ptr, *clk;
++	struct devm_clk_state *state;
++	struct clk *clk;
+ 
+-	ptr = devres_alloc(devm_clk_release, sizeof(*ptr), GFP_KERNEL);
+-	if (!ptr)
++	state = devres_alloc(devm_clk_release, sizeof(*state), GFP_KERNEL);
++	if (!state)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	clk = of_clk_get_by_name(np, con_id);
+ 	if (!IS_ERR(clk)) {
+-		*ptr = clk;
+-		devres_add(dev, ptr);
++		state->clk = clk;
++		devres_add(dev, state);
+ 	} else {
+-		devres_free(ptr);
++		devres_free(state);
+ 	}
+ 
+ 	return clk;
+diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c
+index 348b3a9170fa4..7f5ed1aa7a9f8 100644
+--- a/drivers/dma-buf/sw_sync.c
++++ b/drivers/dma-buf/sw_sync.c
+@@ -191,6 +191,7 @@ static const struct dma_fence_ops timeline_fence_ops = {
+  */
+ static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
+ {
++	LIST_HEAD(signalled);
+ 	struct sync_pt *pt, *next;
+ 
+ 	trace_sync_timeline(obj);
+@@ -203,21 +204,20 @@ static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
+ 		if (!timeline_fence_signaled(&pt->base))
+ 			break;
+ 
+-		list_del_init(&pt->link);
++		dma_fence_get(&pt->base);
++
++		list_move_tail(&pt->link, &signalled);
+ 		rb_erase(&pt->node, &obj->pt_tree);
+ 
+-		/*
+-		 * A signal callback may release the last reference to this
+-		 * fence, causing it to be freed. That operation has to be
+-		 * last to avoid a use after free inside this loop, and must
+-		 * be after we remove the fence from the timeline in order to
+-		 * prevent deadlocking on timeline->lock inside
+-		 * timeline_fence_release().
+-		 */
+ 		dma_fence_signal_locked(&pt->base);
+ 	}
+ 
+ 	spin_unlock_irq(&obj->lock);
++
++	list_for_each_entry_safe(pt, next, &signalled, link) {
++		list_del_init(&pt->link);
++		dma_fence_put(&pt->base);
++	}
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 71a85c5306ed0..1c669f115dd80 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -3282,7 +3282,9 @@ void dcn10_wait_for_mpcc_disconnect(
+ 		if (pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst]) {
+ 			struct hubp *hubp = get_hubp_by_inst(res_pool, mpcc_inst);
+ 
+-			res_pool->mpc->funcs->wait_for_idle(res_pool->mpc, mpcc_inst);
++			if (pipe_ctx->stream_res.tg &&
++				pipe_ctx->stream_res.tg->funcs->is_tg_enabled(pipe_ctx->stream_res.tg))
++				res_pool->mpc->funcs->wait_for_idle(res_pool->mpc, mpcc_inst);
+ 			pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst] = false;
+ 			hubp->funcs->set_blank(hubp, true);
+ 		}
+diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
+index cae9ac6379a5d..aba811c6aa0d6 100644
+--- a/drivers/gpu/drm/i915/i915_active.c
++++ b/drivers/gpu/drm/i915/i915_active.c
+@@ -457,8 +457,11 @@ int i915_active_ref(struct i915_active *ref, u64 idx, struct dma_fence *fence)
+ 		}
+ 	} while (unlikely(is_barrier(active)));
+ 
+-	if (!__i915_active_fence_set(active, fence))
++	fence = __i915_active_fence_set(active, fence);
++	if (!fence)
+ 		__i915_active_acquire(ref);
++	else
++		dma_fence_put(fence);
+ 
+ out:
+ 	i915_active_release(ref);
+@@ -477,13 +480,9 @@ __i915_active_set_fence(struct i915_active *ref,
+ 		return NULL;
+ 	}
+ 
+-	rcu_read_lock();
+ 	prev = __i915_active_fence_set(active, fence);
+-	if (prev)
+-		prev = dma_fence_get_rcu(prev);
+-	else
++	if (!prev)
+ 		__i915_active_acquire(ref);
+-	rcu_read_unlock();
+ 
+ 	return prev;
+ }
+@@ -1050,10 +1049,11 @@ void i915_request_add_active_barriers(struct i915_request *rq)
+  *
+  * Records the new @fence as the last active fence along its timeline in
+  * this active tracker, moving the tracking callbacks from the previous
+- * fence onto this one. Returns the previous fence (if not already completed),
+- * which the caller must ensure is executed before the new fence. To ensure
+- * that the order of fences within the timeline of the i915_active_fence is
+- * understood, it should be locked by the caller.
++ * fence onto this one. Gets and returns a reference to the previous fence
++ * (if not already completed), which the caller must put after making sure
++ * that it is executed before the new fence. To ensure that the order of
++ * fences within the timeline of the i915_active_fence is understood, it
++ * should be locked by the caller.
+  */
+ struct dma_fence *
+ __i915_active_fence_set(struct i915_active_fence *active,
+@@ -1062,7 +1062,23 @@ __i915_active_fence_set(struct i915_active_fence *active,
+ 	struct dma_fence *prev;
+ 	unsigned long flags;
+ 
+-	if (fence == rcu_access_pointer(active->fence))
++	/*
++	 * In case of fences embedded in i915_requests, their memory is
++	 * SLAB_FAILSAFE_BY_RCU, then it can be reused right after release
++	 * by new requests.  Then, there is a risk of passing back a pointer
++	 * to a new, completely unrelated fence that reuses the same memory
++	 * while tracked under a different active tracker.  Combined with i915
++	 * perf open/close operations that build await dependencies between
++	 * engine kernel context requests and user requests from different
++	 * timelines, this can lead to dependency loops and infinite waits.
++	 *
++	 * As a countermeasure, we try to get a reference to the active->fence
++	 * first, so if we succeed and pass it back to our user then it is not
++	 * released and potentially reused by an unrelated request before the
++	 * user has a chance to set up an await dependency on it.
++	 */
++	prev = i915_active_fence_get(active);
++	if (fence == prev)
+ 		return fence;
+ 
+ 	GEM_BUG_ON(test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags));
+@@ -1071,27 +1087,56 @@ __i915_active_fence_set(struct i915_active_fence *active,
+ 	 * Consider that we have two threads arriving (A and B), with
+ 	 * C already resident as the active->fence.
+ 	 *
+-	 * A does the xchg first, and so it sees C or NULL depending
+-	 * on the timing of the interrupt handler. If it is NULL, the
+-	 * previous fence must have been signaled and we know that
+-	 * we are first on the timeline. If it is still present,
+-	 * we acquire the lock on that fence and serialise with the interrupt
+-	 * handler, in the process removing it from any future interrupt
+-	 * callback. A will then wait on C before executing (if present).
+-	 *
+-	 * As B is second, it sees A as the previous fence and so waits for
+-	 * it to complete its transition and takes over the occupancy for
+-	 * itself -- remembering that it needs to wait on A before executing.
++	 * Both A and B have got a reference to C or NULL, depending on the
++	 * timing of the interrupt handler.  Let's assume that if A has got C
++	 * then it has locked C first (before B).
+ 	 *
+ 	 * Note the strong ordering of the timeline also provides consistent
+ 	 * nesting rules for the fence->lock; the inner lock is always the
+ 	 * older lock.
+ 	 */
+ 	spin_lock_irqsave(fence->lock, flags);
+-	prev = xchg(__active_fence_slot(active), fence);
+-	if (prev) {
+-		GEM_BUG_ON(prev == fence);
++	if (prev)
+ 		spin_lock_nested(prev->lock, SINGLE_DEPTH_NESTING);
++
++	/*
++	 * A does the cmpxchg first, and so it sees C or NULL, as before, or
++	 * something else, depending on the timing of other threads and/or
++	 * interrupt handler.  If not the same as before then A unlocks C if
++	 * applicable and retries, starting from an attempt to get a new
++	 * active->fence.  Meanwhile, B follows the same path as A.
++	 * Once A succeeds with cmpxch, B fails again, retires, gets A from
++	 * active->fence, locks it as soon as A completes, and possibly
++	 * succeeds with cmpxchg.
++	 */
++	while (cmpxchg(__active_fence_slot(active), prev, fence) != prev) {
++		if (prev) {
++			spin_unlock(prev->lock);
++			dma_fence_put(prev);
++		}
++		spin_unlock_irqrestore(fence->lock, flags);
++
++		prev = i915_active_fence_get(active);
++		GEM_BUG_ON(prev == fence);
++
++		spin_lock_irqsave(fence->lock, flags);
++		if (prev)
++			spin_lock_nested(prev->lock, SINGLE_DEPTH_NESTING);
++	}
++
++	/*
++	 * If prev is NULL then the previous fence must have been signaled
++	 * and we know that we are first on the timeline.  If it is still
++	 * present then, having the lock on that fence already acquired, we
++	 * serialise with the interrupt handler, in the process of removing it
++	 * from any future interrupt callback.  A will then wait on C before
++	 * executing (if present).
++	 *
++	 * As B is second, it sees A as the previous fence and so waits for
++	 * it to complete its transition and takes over the occupancy for
++	 * itself -- remembering that it needs to wait on A before executing.
++	 */
++	if (prev) {
+ 		__list_del_entry(&active->cb.node);
+ 		spin_unlock(prev->lock); /* serialise with prev->cb_list */
+ 	}
+@@ -1108,11 +1153,7 @@ int i915_active_fence_set(struct i915_active_fence *active,
+ 	int err = 0;
+ 
+ 	/* Must maintain timeline ordering wrt previous active requests */
+-	rcu_read_lock();
+ 	fence = __i915_active_fence_set(active, &rq->fence);
+-	if (fence) /* but the previous fence may not belong to that timeline! */
+-		fence = dma_fence_get_rcu(fence);
+-	rcu_read_unlock();
+ 	if (fence) {
+ 		err = i915_request_await_dma_fence(rq, fence);
+ 		dma_fence_put(fence);
+diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
+index 896389f930294..eda56b2fdc68d 100644
+--- a/drivers/gpu/drm/i915/i915_request.c
++++ b/drivers/gpu/drm/i915/i915_request.c
+@@ -1525,6 +1525,8 @@ __i915_request_add_to_timeline(struct i915_request *rq)
+ 							 &rq->dep,
+ 							 0);
+ 	}
++	if (prev)
++		i915_request_put(prev);
+ 
+ 	/*
+ 	 * Make sure that no request gazumped us - if it was allocated after
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+index ad208a5f4ebe5..0a79c57c7db64 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+@@ -1606,4 +1606,17 @@ static inline void vmw_mmio_write(u32 value, u32 *addr)
+ {
+ 	WRITE_ONCE(*addr, value);
+ }
++
++static inline bool vmw_shadertype_is_valid(enum vmw_sm_type shader_model,
++					   u32 shader_type)
++{
++	SVGA3dShaderType max_allowed = SVGA3D_SHADERTYPE_PREDX_MAX;
++
++	if (shader_model >= VMW_SM_5)
++		max_allowed = SVGA3D_SHADERTYPE_MAX;
++	else if (shader_model >= VMW_SM_4)
++		max_allowed = SVGA3D_SHADERTYPE_DX10_MAX;
++	return shader_type >= SVGA3D_SHADERTYPE_MIN && shader_type < max_allowed;
++}
++
+ #endif
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index 739cbc77d8867..4c6c2e5abf95e 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -1998,7 +1998,7 @@ static int vmw_cmd_set_shader(struct vmw_private *dev_priv,
+ 
+ 	cmd = container_of(header, typeof(*cmd), header);
+ 
+-	if (cmd->body.type >= SVGA3D_SHADERTYPE_PREDX_MAX) {
++	if (!vmw_shadertype_is_valid(VMW_SM_LEGACY, cmd->body.type)) {
+ 		VMW_DEBUG_USER("Illegal shader type %u.\n",
+ 			       (unsigned int) cmd->body.type);
+ 		return -EINVAL;
+@@ -2120,8 +2120,6 @@ vmw_cmd_dx_set_single_constant_buffer(struct vmw_private *dev_priv,
+ 				      SVGA3dCmdHeader *header)
+ {
+ 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdDXSetSingleConstantBuffer);
+-	SVGA3dShaderType max_shader_num = has_sm5_context(dev_priv) ?
+-		SVGA3D_NUM_SHADERTYPE : SVGA3D_NUM_SHADERTYPE_DX10;
+ 
+ 	struct vmw_resource *res = NULL;
+ 	struct vmw_ctx_validation_info *ctx_node = VMW_GET_CTX_NODE(sw_context);
+@@ -2138,6 +2136,14 @@ vmw_cmd_dx_set_single_constant_buffer(struct vmw_private *dev_priv,
+ 	if (unlikely(ret != 0))
+ 		return ret;
+ 
++	if (!vmw_shadertype_is_valid(dev_priv->sm_type, cmd->body.type) ||
++	    cmd->body.slot >= SVGA3D_DX_MAX_CONSTBUFFERS) {
++		VMW_DEBUG_USER("Illegal const buffer shader %u slot %u.\n",
++			       (unsigned int) cmd->body.type,
++			       (unsigned int) cmd->body.slot);
++		return -EINVAL;
++	}
++
+ 	binding.bi.ctx = ctx_node->ctx;
+ 	binding.bi.res = res;
+ 	binding.bi.bt = vmw_ctx_binding_cb;
+@@ -2146,14 +2152,6 @@ vmw_cmd_dx_set_single_constant_buffer(struct vmw_private *dev_priv,
+ 	binding.size = cmd->body.sizeInBytes;
+ 	binding.slot = cmd->body.slot;
+ 
+-	if (binding.shader_slot >= max_shader_num ||
+-	    binding.slot >= SVGA3D_DX_MAX_CONSTBUFFERS) {
+-		VMW_DEBUG_USER("Illegal const buffer shader %u slot %u.\n",
+-			       (unsigned int) cmd->body.type,
+-			       (unsigned int) binding.slot);
+-		return -EINVAL;
+-	}
+-
+ 	vmw_binding_add(ctx_node->staged, &binding.bi, binding.shader_slot,
+ 			binding.slot);
+ 
+@@ -2174,15 +2172,13 @@ static int vmw_cmd_dx_set_shader_res(struct vmw_private *dev_priv,
+ {
+ 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdDXSetShaderResources) =
+ 		container_of(header, typeof(*cmd), header);
+-	SVGA3dShaderType max_allowed = has_sm5_context(dev_priv) ?
+-		SVGA3D_SHADERTYPE_MAX : SVGA3D_SHADERTYPE_DX10_MAX;
+ 
+ 	u32 num_sr_view = (cmd->header.size - sizeof(cmd->body)) /
+ 		sizeof(SVGA3dShaderResourceViewId);
+ 
+ 	if ((u64) cmd->body.startView + (u64) num_sr_view >
+ 	    (u64) SVGA3D_DX_MAX_SRVIEWS ||
+-	    cmd->body.type >= max_allowed) {
++	    !vmw_shadertype_is_valid(dev_priv->sm_type, cmd->body.type)) {
+ 		VMW_DEBUG_USER("Invalid shader binding.\n");
+ 		return -EINVAL;
+ 	}
+@@ -2206,8 +2202,6 @@ static int vmw_cmd_dx_set_shader(struct vmw_private *dev_priv,
+ 				 SVGA3dCmdHeader *header)
+ {
+ 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdDXSetShader);
+-	SVGA3dShaderType max_allowed = has_sm5_context(dev_priv) ?
+-		SVGA3D_SHADERTYPE_MAX : SVGA3D_SHADERTYPE_DX10_MAX;
+ 	struct vmw_resource *res = NULL;
+ 	struct vmw_ctx_validation_info *ctx_node = VMW_GET_CTX_NODE(sw_context);
+ 	struct vmw_ctx_bindinfo_shader binding;
+@@ -2218,8 +2212,7 @@ static int vmw_cmd_dx_set_shader(struct vmw_private *dev_priv,
+ 
+ 	cmd = container_of(header, typeof(*cmd), header);
+ 
+-	if (cmd->body.type >= max_allowed ||
+-	    cmd->body.type < SVGA3D_SHADERTYPE_MIN) {
++	if (!vmw_shadertype_is_valid(dev_priv->sm_type, cmd->body.type)) {
+ 		VMW_DEBUG_USER("Illegal shader type %u.\n",
+ 			       (unsigned int) cmd->body.type);
+ 		return -EINVAL;
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 7599a122c9563..1667ac1406098 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -31,11 +31,11 @@
+ #define DEFAULT_BUFFER_SECTORS		128
+ #define DEFAULT_JOURNAL_WATERMARK	50
+ #define DEFAULT_SYNC_MSEC		10000
+-#define DEFAULT_MAX_JOURNAL_SECTORS	131072
++#define DEFAULT_MAX_JOURNAL_SECTORS	(IS_ENABLED(CONFIG_64BIT) ? 131072 : 8192)
+ #define MIN_LOG2_INTERLEAVE_SECTORS	3
+ #define MAX_LOG2_INTERLEAVE_SECTORS	31
+ #define METADATA_WORKQUEUE_MAX_ACTIVE	16
+-#define RECALC_SECTORS			8192
++#define RECALC_SECTORS			(IS_ENABLED(CONFIG_64BIT) ? 32768 : 2048)
+ #define RECALC_WRITE_SUPER		16
+ #define BITMAP_BLOCK_SIZE		4096	/* don't change it */
+ #define BITMAP_FLUSH_INTERVAL		(10 * HZ)
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c
+index 21de1431cfcb5..2c2be43a3e9ea 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c
+@@ -729,6 +729,8 @@ static int vb2ops_venc_queue_setup(struct vb2_queue *vq,
+ 		return -EINVAL;
+ 
+ 	if (*nplanes) {
++		if (*nplanes != q_data->fmt->num_planes)
++			return -EINVAL;
+ 		for (i = 0; i < *nplanes; i++)
+ 			if (sizes[i] < q_data->sizeimage[i])
+ 				return -EINVAL;
+diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
+index 152f76f869278..64ba465741a78 100644
+--- a/drivers/net/bonding/bond_alb.c
++++ b/drivers/net/bonding/bond_alb.c
+@@ -656,10 +656,10 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
+ 		return NULL;
+ 	arp = (struct arp_pkt *)skb_network_header(skb);
+ 
+-	/* Don't modify or load balance ARPs that do not originate locally
+-	 * (e.g.,arrive via a bridge).
++	/* Don't modify or load balance ARPs that do not originate
++	 * from the bond itself or a VLAN directly above the bond.
+ 	 */
+-	if (!bond_slave_has_mac_rx(bond, arp->mac_src))
++	if (!bond_slave_has_mac_rcu(bond, arp->mac_src))
+ 		return NULL;
+ 
+ 	if (arp->op_code == htons(ARPOP_REPLY)) {
+diff --git a/drivers/net/can/vxcan.c b/drivers/net/can/vxcan.c
+index 282c53ef76d23..1bfede407270d 100644
+--- a/drivers/net/can/vxcan.c
++++ b/drivers/net/can/vxcan.c
+@@ -179,12 +179,7 @@ static int vxcan_newlink(struct net *net, struct net_device *dev,
+ 
+ 		nla_peer = data[VXCAN_INFO_PEER];
+ 		ifmp = nla_data(nla_peer);
+-		err = rtnl_nla_parse_ifla(peer_tb,
+-					  nla_data(nla_peer) +
+-					  sizeof(struct ifinfomsg),
+-					  nla_len(nla_peer) -
+-					  sizeof(struct ifinfomsg),
+-					  NULL);
++		err = rtnl_nla_parse_ifinfomsg(peer_tb, nla_peer, extack);
+ 		if (err < 0)
+ 			return err;
+ 
+diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
+index ab8ee93316354..a4f6143e66fe9 100644
+--- a/drivers/net/ethernet/broadcom/bgmac.c
++++ b/drivers/net/ethernet/broadcom/bgmac.c
+@@ -1448,7 +1448,7 @@ int bgmac_phy_connect_direct(struct bgmac *bgmac)
+ 	int err;
+ 
+ 	phy_dev = fixed_phy_register(PHY_POLL, &fphy_status, NULL);
+-	if (!phy_dev || IS_ERR(phy_dev)) {
++	if (IS_ERR(phy_dev)) {
+ 		dev_err(bgmac->dev, "Failed to register fixed PHY device\n");
+ 		return -ENODEV;
+ 	}
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 99aba64f03c2f..2b0538f2af639 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -568,7 +568,7 @@ static int bcmgenet_mii_pd_init(struct bcmgenet_priv *priv)
+ 		};
+ 
+ 		phydev = fixed_phy_register(PHY_POLL, &fphy_status, NULL);
+-		if (!phydev || IS_ERR(phydev)) {
++		if (IS_ERR(phydev)) {
+ 			dev_err(kdev, "failed to register fixed PHY device\n");
+ 			return -ENODEV;
+ 		}
+diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
+index c3ec9ceed833e..d80f155574c67 100644
+--- a/drivers/net/ethernet/ibm/ibmveth.c
++++ b/drivers/net/ethernet/ibm/ibmveth.c
+@@ -196,7 +196,7 @@ static inline void ibmveth_flush_buffer(void *addr, unsigned long length)
+ 	unsigned long offset;
+ 
+ 	for (offset = 0; offset < length; offset += SMP_CACHE_BYTES)
+-		asm("dcbfl %0,%1" :: "b" (addr), "r" (offset));
++		asm("dcbf %0,%1,1" :: "b" (addr), "r" (offset));
+ }
+ 
+ /* replenish the buffers for a pool.  note that we don't need to
+diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
+index 1929847b8c404..59df4c9bd8f90 100644
+--- a/drivers/net/ethernet/intel/ice/ice_base.c
++++ b/drivers/net/ethernet/intel/ice/ice_base.c
+@@ -353,7 +353,8 @@ int ice_setup_rx_ctx(struct ice_ring *ring)
+ 	/* Receive Packet Data Buffer Size.
+ 	 * The Packet Data Buffer Size is defined in 128 byte units.
+ 	 */
+-	rlan_ctx.dbuf = ring->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;
++	rlan_ctx.dbuf = DIV_ROUND_UP(ring->rx_buf_len,
++				     BIT_ULL(ICE_RLAN_CTX_DBUF_S));
+ 
+ 	/* use 32 byte descriptors */
+ 	rlan_ctx.dsize = 1;
+diff --git a/drivers/net/ethernet/intel/igb/igb_ptp.c b/drivers/net/ethernet/intel/igb/igb_ptp.c
+index 86a576201f5ff..0dbbb32905fa4 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ptp.c
++++ b/drivers/net/ethernet/intel/igb/igb_ptp.c
+@@ -1262,18 +1262,6 @@ void igb_ptp_init(struct igb_adapter *adapter)
+ 		return;
+ 	}
+ 
+-	spin_lock_init(&adapter->tmreg_lock);
+-	INIT_WORK(&adapter->ptp_tx_work, igb_ptp_tx_work);
+-
+-	if (adapter->ptp_flags & IGB_PTP_OVERFLOW_CHECK)
+-		INIT_DELAYED_WORK(&adapter->ptp_overflow_work,
+-				  igb_ptp_overflow_check);
+-
+-	adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
+-	adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
+-
+-	igb_ptp_reset(adapter);
+-
+ 	adapter->ptp_clock = ptp_clock_register(&adapter->ptp_caps,
+ 						&adapter->pdev->dev);
+ 	if (IS_ERR(adapter->ptp_clock)) {
+@@ -1283,6 +1271,18 @@ void igb_ptp_init(struct igb_adapter *adapter)
+ 		dev_info(&adapter->pdev->dev, "added PHC on %s\n",
+ 			 adapter->netdev->name);
+ 		adapter->ptp_flags |= IGB_PTP_ENABLED;
++
++		spin_lock_init(&adapter->tmreg_lock);
++		INIT_WORK(&adapter->ptp_tx_work, igb_ptp_tx_work);
++
++		if (adapter->ptp_flags & IGB_PTP_OVERFLOW_CHECK)
++			INIT_DELAYED_WORK(&adapter->ptp_overflow_work,
++					  igb_ptp_overflow_check);
++
++		adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
++		adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
++
++		igb_ptp_reset(adapter);
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index 449f5224d1aeb..e549b09c347a7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -2876,9 +2876,10 @@ rx_frscfg:
+ 	if (link < 0)
+ 		return NIX_AF_ERR_RX_LINK_INVALID;
+ 
+-	nix_find_link_frs(rvu, req, pcifunc);
+ 
+ linkcfg:
++	nix_find_link_frs(rvu, req, pcifunc);
++
+ 	cfg = rvu_read64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link));
+ 	cfg = (cfg & ~(0xFFFFULL << 16)) | ((u64)req->maxlen << 16);
+ 	if (req->update_minlen)
+diff --git a/drivers/net/ipvlan/ipvlan_main.c b/drivers/net/ipvlan/ipvlan_main.c
+index 60b7d93bb834e..93be7dd571fc5 100644
+--- a/drivers/net/ipvlan/ipvlan_main.c
++++ b/drivers/net/ipvlan/ipvlan_main.c
+@@ -745,7 +745,8 @@ static int ipvlan_device_event(struct notifier_block *unused,
+ 
+ 		write_pnet(&port->pnet, newnet);
+ 
+-		ipvlan_migrate_l3s_hook(oldnet, newnet);
++		if (port->mode == IPVLAN_MODE_L3S)
++			ipvlan_migrate_l3s_hook(oldnet, newnet);
+ 		break;
+ 	}
+ 	case NETDEV_UNREGISTER:
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index 5aa23a036ed36..4ba86fa4d6497 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -1313,10 +1313,7 @@ static int veth_newlink(struct net *src_net, struct net_device *dev,
+ 
+ 		nla_peer = data[VETH_INFO_PEER];
+ 		ifmp = nla_data(nla_peer);
+-		err = rtnl_nla_parse_ifla(peer_tb,
+-					  nla_data(nla_peer) + sizeof(struct ifinfomsg),
+-					  nla_len(nla_peer) - sizeof(struct ifinfomsg),
+-					  NULL);
++		err = rtnl_nla_parse_ifinfomsg(peer_tb, nla_peer, extack);
+ 		if (err < 0)
+ 			return err;
+ 
+diff --git a/drivers/of/dynamic.c b/drivers/of/dynamic.c
+index fe64430b438a1..be26346085faf 100644
+--- a/drivers/of/dynamic.c
++++ b/drivers/of/dynamic.c
+@@ -63,15 +63,14 @@ int of_reconfig_notifier_unregister(struct notifier_block *nb)
+ }
+ EXPORT_SYMBOL_GPL(of_reconfig_notifier_unregister);
+ 
+-#ifdef DEBUG
+-const char *action_names[] = {
++static const char *action_names[] = {
++	[0] = "INVALID",
+ 	[OF_RECONFIG_ATTACH_NODE] = "ATTACH_NODE",
+ 	[OF_RECONFIG_DETACH_NODE] = "DETACH_NODE",
+ 	[OF_RECONFIG_ADD_PROPERTY] = "ADD_PROPERTY",
+ 	[OF_RECONFIG_REMOVE_PROPERTY] = "REMOVE_PROPERTY",
+ 	[OF_RECONFIG_UPDATE_PROPERTY] = "UPDATE_PROPERTY",
+ };
+-#endif
+ 
+ int of_reconfig_notify(unsigned long action, struct of_reconfig_data *p)
+ {
+@@ -589,21 +588,9 @@ static int __of_changeset_entry_apply(struct of_changeset_entry *ce)
+ 		}
+ 
+ 		ret = __of_add_property(ce->np, ce->prop);
+-		if (ret) {
+-			pr_err("changeset: add_property failed @%pOF/%s\n",
+-				ce->np,
+-				ce->prop->name);
+-			break;
+-		}
+ 		break;
+ 	case OF_RECONFIG_REMOVE_PROPERTY:
+ 		ret = __of_remove_property(ce->np, ce->prop);
+-		if (ret) {
+-			pr_err("changeset: remove_property failed @%pOF/%s\n",
+-				ce->np,
+-				ce->prop->name);
+-			break;
+-		}
+ 		break;
+ 
+ 	case OF_RECONFIG_UPDATE_PROPERTY:
+@@ -617,20 +604,17 @@ static int __of_changeset_entry_apply(struct of_changeset_entry *ce)
+ 		}
+ 
+ 		ret = __of_update_property(ce->np, ce->prop, &old_prop);
+-		if (ret) {
+-			pr_err("changeset: update_property failed @%pOF/%s\n",
+-				ce->np,
+-				ce->prop->name);
+-			break;
+-		}
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+ 	}
+ 	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 
+-	if (ret)
++	if (ret) {
++		pr_err("changeset: apply failed: %-15s %pOF:%s\n",
++		       action_names[ce->action], ce->np, ce->prop->name);
+ 		return ret;
++	}
+ 
+ 	switch (ce->action) {
+ 	case OF_RECONFIG_ATTACH_NODE:
+@@ -913,6 +897,9 @@ int of_changeset_action(struct of_changeset *ocs, unsigned long action,
+ 	if (!ce)
+ 		return -ENOMEM;
+ 
++	if (WARN_ON(action >= ARRAY_SIZE(action_names)))
++		return -EINVAL;
++
+ 	/* get a reference to the node */
+ 	ce->action = action;
+ 	ce->np = of_node_get(np);
+diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
+index f031302ad4019..0a37967b0a939 100644
+--- a/drivers/pci/hotplug/acpiphp_glue.c
++++ b/drivers/pci/hotplug/acpiphp_glue.c
+@@ -503,12 +503,15 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge)
+ 				if (pass && dev->subordinate) {
+ 					check_hotplug_bridge(slot, dev);
+ 					pcibios_resource_survey_bus(dev->subordinate);
+-					__pci_bus_size_bridges(dev->subordinate,
+-							       &add_list);
++					if (pci_is_root_bus(bus))
++						__pci_bus_size_bridges(dev->subordinate, &add_list);
+ 				}
+ 			}
+ 		}
+-		__pci_bus_assign_resources(bus, &add_list, NULL);
++		if (pci_is_root_bus(bus))
++			__pci_bus_assign_resources(bus, &add_list, NULL);
++		else
++			pci_assign_unassigned_bridge_resources(bus->self);
+ 	}
+ 
+ 	acpiphp_sanitize_bus(bus);
+diff --git a/drivers/pinctrl/renesas/pinctrl-rza2.c b/drivers/pinctrl/renesas/pinctrl-rza2.c
+index 32829eb9656c9..ddd8ee6b604ef 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rza2.c
++++ b/drivers/pinctrl/renesas/pinctrl-rza2.c
+@@ -14,6 +14,7 @@
+ #include <linux/gpio/driver.h>
+ #include <linux/io.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/of_device.h>
+ #include <linux/pinctrl/pinmux.h>
+ 
+@@ -46,6 +47,7 @@ struct rza2_pinctrl_priv {
+ 	struct pinctrl_dev *pctl;
+ 	struct pinctrl_gpio_range gpio_range;
+ 	int npins;
++	struct mutex mutex; /* serialize adding groups and functions */
+ };
+ 
+ #define RZA2_PDR(port)		(0x0000 + (port) * 2)	/* Direction 16-bit */
+@@ -359,10 +361,14 @@ static int rza2_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 		psel_val[i] = MUX_FUNC(value);
+ 	}
+ 
++	mutex_lock(&priv->mutex);
++
+ 	/* Register a single pin group listing all the pins we read from DT */
+ 	gsel = pinctrl_generic_add_group(pctldev, np->name, pins, npins, NULL);
+-	if (gsel < 0)
+-		return gsel;
++	if (gsel < 0) {
++		ret = gsel;
++		goto unlock;
++	}
+ 
+ 	/*
+ 	 * Register a single group function where the 'data' is an array PSEL
+@@ -391,6 +397,8 @@ static int rza2_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	(*map)->data.mux.function = np->name;
+ 	*num_maps = 1;
+ 
++	mutex_unlock(&priv->mutex);
++
+ 	return 0;
+ 
+ remove_function:
+@@ -399,6 +407,9 @@ remove_function:
+ remove_group:
+ 	pinctrl_generic_remove_group(pctldev, gsel);
+ 
++unlock:
++	mutex_unlock(&priv->mutex);
++
+ 	dev_err(priv->dev, "Unable to parse DT node %s\n", np->name);
+ 
+ 	return ret;
+@@ -474,6 +485,8 @@ static int rza2_pinctrl_probe(struct platform_device *pdev)
+ 	if (IS_ERR(priv->base))
+ 		return PTR_ERR(priv->base);
+ 
++	mutex_init(&priv->mutex);
++
+ 	platform_set_drvdata(pdev, priv);
+ 
+ 	priv->npins = (int)(uintptr_t)of_device_get_match_data(&pdev->dev) *
+diff --git a/drivers/scsi/raid_class.c b/drivers/scsi/raid_class.c
+index 711252e52d8e1..95a86e0dfd77a 100644
+--- a/drivers/scsi/raid_class.c
++++ b/drivers/scsi/raid_class.c
+@@ -209,54 +209,6 @@ raid_attr_ro_state(level);
+ raid_attr_ro_fn(resync);
+ raid_attr_ro_state_fn(state);
+ 
+-static void raid_component_release(struct device *dev)
+-{
+-	struct raid_component *rc =
+-		container_of(dev, struct raid_component, dev);
+-	dev_printk(KERN_ERR, rc->dev.parent, "COMPONENT RELEASE\n");
+-	put_device(rc->dev.parent);
+-	kfree(rc);
+-}
+-
+-int raid_component_add(struct raid_template *r,struct device *raid_dev,
+-		       struct device *component_dev)
+-{
+-	struct device *cdev =
+-		attribute_container_find_class_device(&r->raid_attrs.ac,
+-						      raid_dev);
+-	struct raid_component *rc;
+-	struct raid_data *rd = dev_get_drvdata(cdev);
+-	int err;
+-
+-	rc = kzalloc(sizeof(*rc), GFP_KERNEL);
+-	if (!rc)
+-		return -ENOMEM;
+-
+-	INIT_LIST_HEAD(&rc->node);
+-	device_initialize(&rc->dev);
+-	rc->dev.release = raid_component_release;
+-	rc->dev.parent = get_device(component_dev);
+-	rc->num = rd->component_count++;
+-
+-	dev_set_name(&rc->dev, "component-%d", rc->num);
+-	list_add_tail(&rc->node, &rd->component_list);
+-	rc->dev.class = &raid_class.class;
+-	err = device_add(&rc->dev);
+-	if (err)
+-		goto err_out;
+-
+-	return 0;
+-
+-err_out:
+-	put_device(&rc->dev);
+-	list_del(&rc->node);
+-	rd->component_count--;
+-	put_device(component_dev);
+-	kfree(rc);
+-	return err;
+-}
+-EXPORT_SYMBOL(raid_component_add);
+-
+ struct raid_template *
+ raid_class_attach(struct raid_function_template *ft)
+ {
+diff --git a/drivers/scsi/snic/snic_disc.c b/drivers/scsi/snic/snic_disc.c
+index c445853c623e2..e362453e8d262 100644
+--- a/drivers/scsi/snic/snic_disc.c
++++ b/drivers/scsi/snic/snic_disc.c
+@@ -317,12 +317,11 @@ snic_tgt_create(struct snic *snic, struct snic_tgt_id *tgtid)
+ 			      "Snic Tgt: device_add, with err = %d\n",
+ 			      ret);
+ 
+-		put_device(&tgt->dev);
+ 		put_device(&snic->shost->shost_gendev);
+ 		spin_lock_irqsave(snic->shost->host_lock, flags);
+ 		list_del(&tgt->list);
+ 		spin_unlock_irqrestore(snic->shost->host_lock, flags);
+-		kfree(tgt);
++		put_device(&tgt->dev);
+ 		tgt = NULL;
+ 
+ 		return tgt;
+diff --git a/drivers/video/fbdev/core/sysimgblt.c b/drivers/video/fbdev/core/sysimgblt.c
+index a4d05b1b17d7d..665ef7a0a2495 100644
+--- a/drivers/video/fbdev/core/sysimgblt.c
++++ b/drivers/video/fbdev/core/sysimgblt.c
+@@ -188,23 +188,29 @@ static void fast_imageblit(const struct fb_image *image, struct fb_info *p,
+ {
+ 	u32 fgx = fgcolor, bgx = bgcolor, bpp = p->var.bits_per_pixel;
+ 	u32 ppw = 32/bpp, spitch = (image->width + 7)/8;
+-	u32 bit_mask, end_mask, eorx, shift;
+-	const char *s = image->data, *src;
++	u32 bit_mask, eorx, shift;
++	const u8 *s = image->data, *src;
+ 	u32 *dst;
+-	const u32 *tab = NULL;
++	const u32 *tab;
++	size_t tablen;
++	u32 colortab[16];
+ 	int i, j, k;
+ 
+ 	switch (bpp) {
+ 	case 8:
+ 		tab = fb_be_math(p) ? cfb_tab8_be : cfb_tab8_le;
++		tablen = 16;
+ 		break;
+ 	case 16:
+ 		tab = fb_be_math(p) ? cfb_tab16_be : cfb_tab16_le;
++		tablen = 4;
+ 		break;
+ 	case 32:
+-	default:
+ 		tab = cfb_tab32;
++		tablen = 2;
+ 		break;
++	default:
++		return;
+ 	}
+ 
+ 	for (i = ppw-1; i--; ) {
+@@ -218,20 +224,62 @@ static void fast_imageblit(const struct fb_image *image, struct fb_info *p,
+ 	eorx = fgx ^ bgx;
+ 	k = image->width/ppw;
+ 
++	for (i = 0; i < tablen; ++i)
++		colortab[i] = (tab[i] & eorx) ^ bgx;
++
+ 	for (i = image->height; i--; ) {
+ 		dst = dst1;
+ 		shift = 8;
+ 		src = s;
+ 
+-		for (j = k; j--; ) {
++		/*
++		 * Manually unroll the per-line copying loop for better
++		 * performance. This works until we processed the last
++		 * completely filled source byte (inclusive).
++		 */
++		switch (ppw) {
++		case 4: /* 8 bpp */
++			for (j = k; j >= 2; j -= 2, ++src) {
++				*dst++ = colortab[(*src >> 4) & bit_mask];
++				*dst++ = colortab[(*src >> 0) & bit_mask];
++			}
++			break;
++		case 2: /* 16 bpp */
++			for (j = k; j >= 4; j -= 4, ++src) {
++				*dst++ = colortab[(*src >> 6) & bit_mask];
++				*dst++ = colortab[(*src >> 4) & bit_mask];
++				*dst++ = colortab[(*src >> 2) & bit_mask];
++				*dst++ = colortab[(*src >> 0) & bit_mask];
++			}
++			break;
++		case 1: /* 32 bpp */
++			for (j = k; j >= 8; j -= 8, ++src) {
++				*dst++ = colortab[(*src >> 7) & bit_mask];
++				*dst++ = colortab[(*src >> 6) & bit_mask];
++				*dst++ = colortab[(*src >> 5) & bit_mask];
++				*dst++ = colortab[(*src >> 4) & bit_mask];
++				*dst++ = colortab[(*src >> 3) & bit_mask];
++				*dst++ = colortab[(*src >> 2) & bit_mask];
++				*dst++ = colortab[(*src >> 1) & bit_mask];
++				*dst++ = colortab[(*src >> 0) & bit_mask];
++			}
++			break;
++		}
++
++		/*
++		 * For image widths that are not a multiple of 8, there
++		 * are trailing pixels left on the current line. Print
++		 * them as well.
++		 */
++		for (; j--; ) {
+ 			shift -= ppw;
+-			end_mask = tab[(*src >> shift) & bit_mask];
+-			*dst++ = (end_mask & eorx) ^ bgx;
++			*dst++ = colortab[(*src >> shift) & bit_mask];
+ 			if (!shift) {
+ 				shift = 8;
+-				src++;
++				++src;
+ 			}
+ 		}
++
+ 		dst1 += p->fix.line_length;
+ 		s += spitch;
+ 	}
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index dde9afb6747ba..51ab06308bc73 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -1856,7 +1856,7 @@ static void del_timeout(struct dlm_lkb *lkb)
+ void dlm_scan_timeout(struct dlm_ls *ls)
+ {
+ 	struct dlm_rsb *r;
+-	struct dlm_lkb *lkb;
++	struct dlm_lkb *lkb = NULL, *iter;
+ 	int do_cancel, do_warn;
+ 	s64 wait_us;
+ 
+@@ -1867,27 +1867,28 @@ void dlm_scan_timeout(struct dlm_ls *ls)
+ 		do_cancel = 0;
+ 		do_warn = 0;
+ 		mutex_lock(&ls->ls_timeout_mutex);
+-		list_for_each_entry(lkb, &ls->ls_timeout, lkb_time_list) {
++		list_for_each_entry(iter, &ls->ls_timeout, lkb_time_list) {
+ 
+ 			wait_us = ktime_to_us(ktime_sub(ktime_get(),
+-					      		lkb->lkb_timestamp));
++							iter->lkb_timestamp));
+ 
+-			if ((lkb->lkb_exflags & DLM_LKF_TIMEOUT) &&
+-			    wait_us >= (lkb->lkb_timeout_cs * 10000))
++			if ((iter->lkb_exflags & DLM_LKF_TIMEOUT) &&
++			    wait_us >= (iter->lkb_timeout_cs * 10000))
+ 				do_cancel = 1;
+ 
+-			if ((lkb->lkb_flags & DLM_IFL_WATCH_TIMEWARN) &&
++			if ((iter->lkb_flags & DLM_IFL_WATCH_TIMEWARN) &&
+ 			    wait_us >= dlm_config.ci_timewarn_cs * 10000)
+ 				do_warn = 1;
+ 
+ 			if (!do_cancel && !do_warn)
+ 				continue;
+-			hold_lkb(lkb);
++			hold_lkb(iter);
++			lkb = iter;
+ 			break;
+ 		}
+ 		mutex_unlock(&ls->ls_timeout_mutex);
+ 
+-		if (!do_cancel && !do_warn)
++		if (!lkb)
+ 			break;
+ 
+ 		r = lkb->lkb_resource;
+@@ -5241,21 +5242,18 @@ void dlm_recover_waiters_pre(struct dlm_ls *ls)
+ 
+ static struct dlm_lkb *find_resend_waiter(struct dlm_ls *ls)
+ {
+-	struct dlm_lkb *lkb;
+-	int found = 0;
++	struct dlm_lkb *lkb = NULL, *iter;
+ 
+ 	mutex_lock(&ls->ls_waiters_mutex);
+-	list_for_each_entry(lkb, &ls->ls_waiters, lkb_wait_reply) {
+-		if (lkb->lkb_flags & DLM_IFL_RESEND) {
+-			hold_lkb(lkb);
+-			found = 1;
++	list_for_each_entry(iter, &ls->ls_waiters, lkb_wait_reply) {
++		if (iter->lkb_flags & DLM_IFL_RESEND) {
++			hold_lkb(iter);
++			lkb = iter;
+ 			break;
+ 		}
+ 	}
+ 	mutex_unlock(&ls->ls_waiters_mutex);
+ 
+-	if (!found)
+-		lkb = NULL;
+ 	return lkb;
+ }
+ 
+@@ -5914,37 +5912,36 @@ int dlm_user_adopt_orphan(struct dlm_ls *ls, struct dlm_user_args *ua_tmp,
+ 		     int mode, uint32_t flags, void *name, unsigned int namelen,
+ 		     unsigned long timeout_cs, uint32_t *lkid)
+ {
+-	struct dlm_lkb *lkb;
++	struct dlm_lkb *lkb = NULL, *iter;
+ 	struct dlm_user_args *ua;
+ 	int found_other_mode = 0;
+-	int found = 0;
+ 	int rv = 0;
+ 
+ 	mutex_lock(&ls->ls_orphans_mutex);
+-	list_for_each_entry(lkb, &ls->ls_orphans, lkb_ownqueue) {
+-		if (lkb->lkb_resource->res_length != namelen)
++	list_for_each_entry(iter, &ls->ls_orphans, lkb_ownqueue) {
++		if (iter->lkb_resource->res_length != namelen)
+ 			continue;
+-		if (memcmp(lkb->lkb_resource->res_name, name, namelen))
++		if (memcmp(iter->lkb_resource->res_name, name, namelen))
+ 			continue;
+-		if (lkb->lkb_grmode != mode) {
++		if (iter->lkb_grmode != mode) {
+ 			found_other_mode = 1;
+ 			continue;
+ 		}
+ 
+-		found = 1;
+-		list_del_init(&lkb->lkb_ownqueue);
+-		lkb->lkb_flags &= ~DLM_IFL_ORPHAN;
+-		*lkid = lkb->lkb_id;
++		lkb = iter;
++		list_del_init(&iter->lkb_ownqueue);
++		iter->lkb_flags &= ~DLM_IFL_ORPHAN;
++		*lkid = iter->lkb_id;
+ 		break;
+ 	}
+ 	mutex_unlock(&ls->ls_orphans_mutex);
+ 
+-	if (!found && found_other_mode) {
++	if (!lkb && found_other_mode) {
+ 		rv = -EAGAIN;
+ 		goto out;
+ 	}
+ 
+-	if (!found) {
++	if (!lkb) {
+ 		rv = -ENOENT;
+ 		goto out;
+ 	}
+diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
+index f3482e936cc25..28735e8c5e206 100644
+--- a/fs/dlm/plock.c
++++ b/fs/dlm/plock.c
+@@ -80,8 +80,7 @@ static void send_op(struct plock_op *op)
+    abandoned waiter.  So, we have to insert the unlock-close when the
+    lock call is interrupted. */
+ 
+-static void do_unlock_close(struct dlm_ls *ls, u64 number,
+-			    struct file *file, struct file_lock *fl)
++static void do_unlock_close(const struct dlm_plock_info *info)
+ {
+ 	struct plock_op *op;
+ 
+@@ -90,15 +89,12 @@ static void do_unlock_close(struct dlm_ls *ls, u64 number,
+ 		return;
+ 
+ 	op->info.optype		= DLM_PLOCK_OP_UNLOCK;
+-	op->info.pid		= fl->fl_pid;
+-	op->info.fsid		= ls->ls_global_id;
+-	op->info.number		= number;
++	op->info.pid		= info->pid;
++	op->info.fsid		= info->fsid;
++	op->info.number		= info->number;
+ 	op->info.start		= 0;
+ 	op->info.end		= OFFSET_MAX;
+-	if (fl->fl_lmops && fl->fl_lmops->lm_grant)
+-		op->info.owner	= (__u64) fl->fl_pid;
+-	else
+-		op->info.owner	= (__u64)(long) fl->fl_owner;
++	op->info.owner		= info->owner;
+ 
+ 	op->info.flags |= DLM_PLOCK_FL_CLOSE;
+ 	send_op(op);
+@@ -161,13 +157,14 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
+ 
+ 	rv = wait_event_killable(recv_wq, (op->done != 0));
+ 	if (rv == -ERESTARTSYS) {
+-		log_debug(ls, "%s: wait killed %llx", __func__,
+-			  (unsigned long long)number);
+ 		spin_lock(&ops_lock);
+ 		list_del(&op->list);
+ 		spin_unlock(&ops_lock);
++		log_debug(ls, "%s: wait interrupted %x %llx pid %d",
++			  __func__, ls->ls_global_id,
++			  (unsigned long long)number, op->info.pid);
+ 		dlm_release_plock_op(op);
+-		do_unlock_close(ls, number, file, fl);
++		do_unlock_close(&op->info);
+ 		goto out;
+ 	}
+ 
+@@ -408,7 +405,7 @@ static ssize_t dev_read(struct file *file, char __user *u, size_t count,
+ 		if (op->info.flags & DLM_PLOCK_FL_CLOSE)
+ 			list_del(&op->list);
+ 		else
+-			list_move(&op->list, &recv_list);
++			list_move_tail(&op->list, &recv_list);
+ 		memcpy(&info, &op->info, sizeof(info));
+ 	}
+ 	spin_unlock(&ops_lock);
+@@ -433,9 +430,9 @@ static ssize_t dev_read(struct file *file, char __user *u, size_t count,
+ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
+ 			 loff_t *ppos)
+ {
++	struct plock_op *op = NULL, *iter;
+ 	struct dlm_plock_info info;
+-	struct plock_op *op;
+-	int found = 0, do_callback = 0;
++	int do_callback = 0;
+ 
+ 	if (count != sizeof(info))
+ 		return -EINVAL;
+@@ -446,31 +443,63 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
+ 	if (check_version(&info))
+ 		return -EINVAL;
+ 
++	/*
++	 * The results for waiting ops (SETLKW) can be returned in any
++	 * order, so match all fields to find the op.  The results for
++	 * non-waiting ops are returned in the order that they were sent
++	 * to userspace, so match the result with the first non-waiting op.
++	 */
+ 	spin_lock(&ops_lock);
+-	list_for_each_entry(op, &recv_list, list) {
+-		if (op->info.fsid == info.fsid &&
+-		    op->info.number == info.number &&
+-		    op->info.owner == info.owner) {
+-			list_del_init(&op->list);
+-			memcpy(&op->info, &info, sizeof(info));
+-			if (op->data)
+-				do_callback = 1;
+-			else
+-				op->done = 1;
+-			found = 1;
+-			break;
++	if (info.wait) {
++		list_for_each_entry(iter, &recv_list, list) {
++			if (iter->info.fsid == info.fsid &&
++			    iter->info.number == info.number &&
++			    iter->info.owner == info.owner &&
++			    iter->info.pid == info.pid &&
++			    iter->info.start == info.start &&
++			    iter->info.end == info.end &&
++			    iter->info.ex == info.ex &&
++			    iter->info.wait) {
++				op = iter;
++				break;
++			}
++		}
++	} else {
++		list_for_each_entry(iter, &recv_list, list) {
++			if (!iter->info.wait) {
++				op = iter;
++				break;
++			}
+ 		}
+ 	}
++
++	if (op) {
++		/* Sanity check that op and info match. */
++		if (info.wait)
++			WARN_ON(op->info.optype != DLM_PLOCK_OP_LOCK);
++		else
++			WARN_ON(op->info.fsid != info.fsid ||
++				op->info.number != info.number ||
++				op->info.owner != info.owner ||
++				op->info.optype != info.optype);
++
++		list_del_init(&op->list);
++		memcpy(&op->info, &info, sizeof(info));
++		if (op->data)
++			do_callback = 1;
++		else
++			op->done = 1;
++	}
+ 	spin_unlock(&ops_lock);
+ 
+-	if (found) {
++	if (op) {
+ 		if (do_callback)
+ 			dlm_plock_callback(op);
+ 		else
+ 			wake_up(&recv_wq);
+ 	} else
+-		log_print("dev_write no op %x %llx", info.fsid,
+-			  (unsigned long long)info.number);
++		log_print("%s: no op %x %llx", __func__,
++			  info.fsid, (unsigned long long)info.number);
+ 	return count;
+ }
+ 
+diff --git a/fs/dlm/recover.c b/fs/dlm/recover.c
+index 8928e99dfd47d..df18f38a02734 100644
+--- a/fs/dlm/recover.c
++++ b/fs/dlm/recover.c
+@@ -732,10 +732,9 @@ void dlm_recovered_lock(struct dlm_rsb *r)
+ 
+ static void recover_lvb(struct dlm_rsb *r)
+ {
+-	struct dlm_lkb *lkb, *high_lkb = NULL;
++	struct dlm_lkb *big_lkb = NULL, *iter, *high_lkb = NULL;
+ 	uint32_t high_seq = 0;
+ 	int lock_lvb_exists = 0;
+-	int big_lock_exists = 0;
+ 	int lvblen = r->res_ls->ls_lvblen;
+ 
+ 	if (!rsb_flag(r, RSB_NEW_MASTER2) &&
+@@ -751,37 +750,37 @@ static void recover_lvb(struct dlm_rsb *r)
+ 	/* we are the new master, so figure out if VALNOTVALID should
+ 	   be set, and set the rsb lvb from the best lkb available. */
+ 
+-	list_for_each_entry(lkb, &r->res_grantqueue, lkb_statequeue) {
+-		if (!(lkb->lkb_exflags & DLM_LKF_VALBLK))
++	list_for_each_entry(iter, &r->res_grantqueue, lkb_statequeue) {
++		if (!(iter->lkb_exflags & DLM_LKF_VALBLK))
+ 			continue;
+ 
+ 		lock_lvb_exists = 1;
+ 
+-		if (lkb->lkb_grmode > DLM_LOCK_CR) {
+-			big_lock_exists = 1;
++		if (iter->lkb_grmode > DLM_LOCK_CR) {
++			big_lkb = iter;
+ 			goto setflag;
+ 		}
+ 
+-		if (((int)lkb->lkb_lvbseq - (int)high_seq) >= 0) {
+-			high_lkb = lkb;
+-			high_seq = lkb->lkb_lvbseq;
++		if (((int)iter->lkb_lvbseq - (int)high_seq) >= 0) {
++			high_lkb = iter;
++			high_seq = iter->lkb_lvbseq;
+ 		}
+ 	}
+ 
+-	list_for_each_entry(lkb, &r->res_convertqueue, lkb_statequeue) {
+-		if (!(lkb->lkb_exflags & DLM_LKF_VALBLK))
++	list_for_each_entry(iter, &r->res_convertqueue, lkb_statequeue) {
++		if (!(iter->lkb_exflags & DLM_LKF_VALBLK))
+ 			continue;
+ 
+ 		lock_lvb_exists = 1;
+ 
+-		if (lkb->lkb_grmode > DLM_LOCK_CR) {
+-			big_lock_exists = 1;
++		if (iter->lkb_grmode > DLM_LOCK_CR) {
++			big_lkb = iter;
+ 			goto setflag;
+ 		}
+ 
+-		if (((int)lkb->lkb_lvbseq - (int)high_seq) >= 0) {
+-			high_lkb = lkb;
+-			high_seq = lkb->lkb_lvbseq;
++		if (((int)iter->lkb_lvbseq - (int)high_seq) >= 0) {
++			high_lkb = iter;
++			high_seq = iter->lkb_lvbseq;
+ 		}
+ 	}
+ 
+@@ -790,7 +789,7 @@ static void recover_lvb(struct dlm_rsb *r)
+ 		goto out;
+ 
+ 	/* lvb is invalidated if only NL/CR locks remain */
+-	if (!big_lock_exists)
++	if (!big_lkb)
+ 		rsb_set_flag(r, RSB_VALNOTVALID);
+ 
+ 	if (!r->res_lvbptr) {
+@@ -799,9 +798,9 @@ static void recover_lvb(struct dlm_rsb *r)
+ 			goto out;
+ 	}
+ 
+-	if (big_lock_exists) {
+-		r->res_lvbseq = lkb->lkb_lvbseq;
+-		memcpy(r->res_lvbptr, lkb->lkb_lvbptr, lvblen);
++	if (big_lkb) {
++		r->res_lvbseq = big_lkb->lkb_lvbseq;
++		memcpy(r->res_lvbptr, big_lkb->lkb_lvbptr, lvblen);
+ 	} else if (high_lkb) {
+ 		r->res_lvbseq = high_lkb->lkb_lvbseq;
+ 		memcpy(r->res_lvbptr, high_lkb->lkb_lvbptr, lvblen);
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index c220810c61d14..fbc7304bed56b 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -509,20 +509,26 @@ out:
+ 	return result;
+ }
+ 
+-static void
+-nfs_direct_join_group(struct list_head *list, struct inode *inode)
++static void nfs_direct_join_group(struct list_head *list, struct inode *inode)
+ {
+-	struct nfs_page *req, *next;
++	struct nfs_page *req, *subreq;
+ 
+ 	list_for_each_entry(req, list, wb_list) {
+-		if (req->wb_head != req || req->wb_this_page == req)
++		if (req->wb_head != req)
+ 			continue;
+-		for (next = req->wb_this_page;
+-				next != req->wb_head;
+-				next = next->wb_this_page) {
+-			nfs_list_remove_request(next);
+-			nfs_release_request(next);
+-		}
++		subreq = req->wb_this_page;
++		if (subreq == req)
++			continue;
++		do {
++			/*
++			 * Remove subrequests from this list before freeing
++			 * them in the call to nfs_join_page_group().
++			 */
++			if (!list_empty(&subreq->wb_list)) {
++				nfs_list_remove_request(subreq);
++				nfs_release_request(subreq);
++			}
++		} while ((subreq = subreq->wb_this_page) != req);
+ 		nfs_join_page_group(req, inode);
+ 	}
+ }
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index b9567cc8698ed..c34df51a8f2b7 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -5864,9 +5864,8 @@ static ssize_t __nfs4_get_acl_uncached(struct inode *inode, void *buf, size_t bu
+ out_ok:
+ 	ret = res.acl_len;
+ out_free:
+-	for (i = 0; i < npages; i++)
+-		if (pages[i])
+-			__free_page(pages[i]);
++	while (--i >= 0)
++		__free_page(pages[i]);
+ 	if (res.acl_scratch)
+ 		__free_page(res.acl_scratch);
+ 	kfree(pages);
+@@ -7047,8 +7046,15 @@ static void nfs4_lock_done(struct rpc_task *task, void *calldata)
+ 		} else if (!nfs4_update_lock_stateid(lsp, &data->res.stateid))
+ 			goto out_restart;
+ 		break;
+-	case -NFS4ERR_BAD_STATEID:
+ 	case -NFS4ERR_OLD_STATEID:
++		if (data->arg.new_lock_owner != 0 &&
++			nfs4_refresh_open_old_stateid(&data->arg.open_stateid,
++					lsp->ls_state))
++			goto out_restart;
++		if (nfs4_refresh_lock_old_stateid(&data->arg.lock_stateid, lsp))
++			goto out_restart;
++		fallthrough;
++	case -NFS4ERR_BAD_STATEID:
+ 	case -NFS4ERR_STALE_STATEID:
+ 	case -NFS4ERR_EXPIRED:
+ 		if (data->arg.new_lock_owner != 0) {
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 1c1b231b2ab33..b045be7394a08 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1145,9 +1145,9 @@ static void revoke_delegation(struct nfs4_delegation *dp)
+ 	WARN_ON(!list_empty(&dp->dl_recall_lru));
+ 
+ 	if (clp->cl_minorversion) {
++		spin_lock(&clp->cl_lock);
+ 		dp->dl_stid.sc_type = NFS4_REVOKED_DELEG_STID;
+ 		refcount_inc(&dp->dl_stid.sc_count);
+-		spin_lock(&clp->cl_lock);
+ 		list_add(&dp->dl_recall_lru, &clp->cl_revoked);
+ 		spin_unlock(&clp->cl_lock);
+ 	}
+diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
+index a53243abd9450..881d329cee2f0 100644
+--- a/include/drm/drm_dp_helper.h
++++ b/include/drm/drm_dp_helper.h
+@@ -1182,7 +1182,7 @@ u8 drm_dp_get_adjust_request_post_cursor(const u8 link_status[DP_LINK_STATUS_SIZ
+ 
+ #define DP_BRANCH_OUI_HEADER_SIZE	0xc
+ #define DP_RECEIVER_CAP_SIZE		0xf
+-#define DP_DSC_RECEIVER_CAP_SIZE        0xf
++#define DP_DSC_RECEIVER_CAP_SIZE        0x10 /* DSC Capabilities 0x60 through 0x6F */
+ #define EDP_PSR_RECEIVER_CAP_SIZE	2
+ #define EDP_DISPLAY_CTL_CAP_SIZE	3
+ 
+diff --git a/include/linux/ceph/msgr.h b/include/linux/ceph/msgr.h
+index 9e50aede46c83..7bde0af29a814 100644
+--- a/include/linux/ceph/msgr.h
++++ b/include/linux/ceph/msgr.h
+@@ -61,11 +61,18 @@ extern const char *ceph_entity_type_name(int type);
+  * entity_addr -- network address
+  */
+ struct ceph_entity_addr {
+-	__le32 type;
++	__le32 type;  /* CEPH_ENTITY_ADDR_TYPE_* */
+ 	__le32 nonce;  /* unique id for process (e.g. pid) */
+ 	struct sockaddr_storage in_addr;
+ } __attribute__ ((packed));
+ 
++static inline bool ceph_addr_equal_no_type(const struct ceph_entity_addr *lhs,
++					   const struct ceph_entity_addr *rhs)
++{
++	return !memcmp(&lhs->in_addr, &rhs->in_addr, sizeof(lhs->in_addr)) &&
++	       lhs->nonce == rhs->nonce;
++}
++
+ struct ceph_entity_inst {
+ 	struct ceph_entity_name name;
+ 	struct ceph_entity_addr addr;
+diff --git a/include/linux/clk.h b/include/linux/clk.h
+index 1814eabb7c204..12c85ba606ec5 100644
+--- a/include/linux/clk.h
++++ b/include/linux/clk.h
+@@ -172,6 +172,39 @@ int clk_get_scaled_duty_cycle(struct clk *clk, unsigned int scale);
+  */
+ bool clk_is_match(const struct clk *p, const struct clk *q);
+ 
++/**
++ * clk_rate_exclusive_get - get exclusivity over the rate control of a
++ *                          producer
++ * @clk: clock source
++ *
++ * This function allows drivers to get exclusive control over the rate of a
++ * provider. It prevents any other consumer to execute, even indirectly,
++ * opereation which could alter the rate of the provider or cause glitches
++ *
++ * If exlusivity is claimed more than once on clock, even by the same driver,
++ * the rate effectively gets locked as exclusivity can't be preempted.
++ *
++ * Must not be called from within atomic context.
++ *
++ * Returns success (0) or negative errno.
++ */
++int clk_rate_exclusive_get(struct clk *clk);
++
++/**
++ * clk_rate_exclusive_put - release exclusivity over the rate control of a
++ *                          producer
++ * @clk: clock source
++ *
++ * This function allows drivers to release the exclusivity it previously got
++ * from clk_rate_exclusive_get()
++ *
++ * The caller must balance the number of clk_rate_exclusive_get() and
++ * clk_rate_exclusive_put() calls.
++ *
++ * Must not be called from within atomic context.
++ */
++void clk_rate_exclusive_put(struct clk *clk);
++
+ #else
+ 
+ static inline int clk_notifier_register(struct clk *clk,
+@@ -218,6 +251,13 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
+ 	return p == q;
+ }
+ 
++static inline int clk_rate_exclusive_get(struct clk *clk)
++{
++	return 0;
++}
++
++static inline void clk_rate_exclusive_put(struct clk *clk) {}
++
+ #endif
+ 
+ /**
+@@ -530,38 +570,6 @@ struct clk *devm_clk_get_optional_enabled(struct device *dev, const char *id);
+  */
+ struct clk *devm_get_clk_from_child(struct device *dev,
+ 				    struct device_node *np, const char *con_id);
+-/**
+- * clk_rate_exclusive_get - get exclusivity over the rate control of a
+- *                          producer
+- * @clk: clock source
+- *
+- * This function allows drivers to get exclusive control over the rate of a
+- * provider. It prevents any other consumer to execute, even indirectly,
+- * opereation which could alter the rate of the provider or cause glitches
+- *
+- * If exlusivity is claimed more than once on clock, even by the same driver,
+- * the rate effectively gets locked as exclusivity can't be preempted.
+- *
+- * Must not be called from within atomic context.
+- *
+- * Returns success (0) or negative errno.
+- */
+-int clk_rate_exclusive_get(struct clk *clk);
+-
+-/**
+- * clk_rate_exclusive_put - release exclusivity over the rate control of a
+- *                          producer
+- * @clk: clock source
+- *
+- * This function allows drivers to release the exclusivity it previously got
+- * from clk_rate_exclusive_get()
+- *
+- * The caller must balance the number of clk_rate_exclusive_get() and
+- * clk_rate_exclusive_put() calls.
+- *
+- * Must not be called from within atomic context.
+- */
+-void clk_rate_exclusive_put(struct clk *clk);
+ 
+ /**
+  * clk_enable - inform the system when the clock source should be running.
+@@ -921,14 +929,6 @@ static inline void clk_bulk_put_all(int num_clks, struct clk_bulk_data *clks) {}
+ 
+ static inline void devm_clk_put(struct device *dev, struct clk *clk) {}
+ 
+-
+-static inline int clk_rate_exclusive_get(struct clk *clk)
+-{
+-	return 0;
+-}
+-
+-static inline void clk_rate_exclusive_put(struct clk *clk) {}
+-
+ static inline int clk_enable(struct clk *clk)
+ {
+ 	return 0;
+diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
+index 04c20de66afc2..b70224370832f 100644
+--- a/include/linux/cpuset.h
++++ b/include/linux/cpuset.h
+@@ -55,8 +55,10 @@ extern void cpuset_init_smp(void);
+ extern void cpuset_force_rebuild(void);
+ extern void cpuset_update_active_cpus(void);
+ extern void cpuset_wait_for_hotplug(void);
+-extern void cpuset_read_lock(void);
+-extern void cpuset_read_unlock(void);
++extern void inc_dl_tasks_cs(struct task_struct *task);
++extern void dec_dl_tasks_cs(struct task_struct *task);
++extern void cpuset_lock(void);
++extern void cpuset_unlock(void);
+ extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
+ extern void cpuset_cpus_allowed_fallback(struct task_struct *p);
+ extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
+@@ -178,8 +180,10 @@ static inline void cpuset_update_active_cpus(void)
+ 
+ static inline void cpuset_wait_for_hotplug(void) { }
+ 
+-static inline void cpuset_read_lock(void) { }
+-static inline void cpuset_read_unlock(void) { }
++static inline void inc_dl_tasks_cs(struct task_struct *task) { }
++static inline void dec_dl_tasks_cs(struct task_struct *task) { }
++static inline void cpuset_lock(void) { }
++static inline void cpuset_unlock(void) { }
+ 
+ static inline void cpuset_cpus_allowed(struct task_struct *p,
+ 				       struct cpumask *mask)
+diff --git a/include/linux/raid_class.h b/include/linux/raid_class.h
+index 5cdfcb873a8f0..772d45b2a60a0 100644
+--- a/include/linux/raid_class.h
++++ b/include/linux/raid_class.h
+@@ -77,7 +77,3 @@ DEFINE_RAID_ATTRIBUTE(enum raid_state, state)
+ 	
+ struct raid_template *raid_class_attach(struct raid_function_template *);
+ void raid_class_release(struct raid_template *);
+-
+-int __must_check raid_component_add(struct raid_template *, struct device *,
+-				    struct device *);
+-
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 5da4b3c89f636..aa015416c5693 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1657,7 +1657,9 @@ current_restore_flags(unsigned long orig_flags, unsigned long flags)
+ }
+ 
+ extern int cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial);
+-extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_effective_cpus);
++extern int task_can_attach(struct task_struct *p);
++extern int dl_bw_alloc(int cpu, u64 dl_bw);
++extern void dl_bw_free(int cpu, u64 dl_bw);
+ #ifdef CONFIG_SMP
+ extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask);
+ extern int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask);
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index a248caff969f5..82d128c0fe6df 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -698,37 +698,14 @@ static inline struct slave *bond_slave_has_mac(struct bonding *bond,
+ }
+ 
+ /* Caller must hold rcu_read_lock() for read */
+-static inline struct slave *bond_slave_has_mac_rcu(struct bonding *bond,
+-					       const u8 *mac)
++static inline bool bond_slave_has_mac_rcu(struct bonding *bond, const u8 *mac)
+ {
+ 	struct list_head *iter;
+ 	struct slave *tmp;
+ 
+-	bond_for_each_slave_rcu(bond, tmp, iter)
+-		if (ether_addr_equal_64bits(mac, tmp->dev->dev_addr))
+-			return tmp;
+-
+-	return NULL;
+-}
+-
+-/* Caller must hold rcu_read_lock() for read */
+-static inline bool bond_slave_has_mac_rx(struct bonding *bond, const u8 *mac)
+-{
+-	struct list_head *iter;
+-	struct slave *tmp;
+-	struct netdev_hw_addr *ha;
+-
+ 	bond_for_each_slave_rcu(bond, tmp, iter)
+ 		if (ether_addr_equal_64bits(mac, tmp->dev->dev_addr))
+ 			return true;
+-
+-	if (netdev_uc_empty(bond->dev))
+-		return false;
+-
+-	netdev_for_each_uc_addr(ha, bond->dev)
+-		if (ether_addr_equal_64bits(mac, ha->addr))
+-			return true;
+-
+ 	return false;
+ }
+ 
+diff --git a/include/net/rtnetlink.h b/include/net/rtnetlink.h
+index 4da61c950e931..5c2a73bbfabee 100644
+--- a/include/net/rtnetlink.h
++++ b/include/net/rtnetlink.h
+@@ -166,8 +166,8 @@ struct net_device *rtnl_create_link(struct net *net, const char *ifname,
+ int rtnl_delete_link(struct net_device *dev);
+ int rtnl_configure_link(struct net_device *dev, const struct ifinfomsg *ifm);
+ 
+-int rtnl_nla_parse_ifla(struct nlattr **tb, const struct nlattr *head, int len,
+-			struct netlink_ext_ack *exterr);
++int rtnl_nla_parse_ifinfomsg(struct nlattr **tb, const struct nlattr *nla_peer,
++			     struct netlink_ext_ack *exterr);
+ struct net *rtnl_get_net_ns_capable(struct sock *sk, int netnsid);
+ 
+ #define MODULE_ALIAS_RTNL_LINK(kind) MODULE_ALIAS("rtnl-link-" kind)
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 665e388593752..234196d904238 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1233,6 +1233,7 @@ struct proto {
+ 	/*
+ 	 * Pressure flag: try to collapse.
+ 	 * Technical note: it is used by multiple contexts non atomically.
++	 * Make sure to use READ_ONCE()/WRITE_ONCE() for all reads/writes.
+ 	 * All the __sk_mem_schedule() is of this nature: accounting
+ 	 * is strict, actions are advisory and have some latency.
+ 	 */
+@@ -1349,7 +1350,7 @@ static inline bool sk_has_memory_pressure(const struct sock *sk)
+ static inline bool sk_under_global_memory_pressure(const struct sock *sk)
+ {
+ 	return sk->sk_prot->memory_pressure &&
+-		!!*sk->sk_prot->memory_pressure;
++		!!READ_ONCE(*sk->sk_prot->memory_pressure);
+ }
+ 
+ static inline bool sk_under_memory_pressure(const struct sock *sk)
+@@ -1361,7 +1362,7 @@ static inline bool sk_under_memory_pressure(const struct sock *sk)
+ 	    mem_cgroup_under_socket_pressure(sk->sk_memcg))
+ 		return true;
+ 
+-	return !!*sk->sk_prot->memory_pressure;
++	return !!READ_ONCE(*sk->sk_prot->memory_pressure);
+ }
+ 
+ static inline long
+@@ -1415,7 +1416,7 @@ proto_memory_pressure(struct proto *prot)
+ {
+ 	if (!prot->memory_pressure)
+ 		return false;
+-	return !!*prot->memory_pressure;
++	return !!READ_ONCE(*prot->memory_pressure);
+ }
+ 
+ 
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 70ed21607e472..11400eba61242 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -56,6 +56,7 @@
+ #include <linux/file.h>
+ #include <linux/fs_parser.h>
+ #include <linux/sched/cputime.h>
++#include <linux/sched/deadline.h>
+ #include <linux/psi.h>
+ #include <net/sock.h>
+ 
+@@ -6326,6 +6327,9 @@ void cgroup_exit(struct task_struct *tsk)
+ 	list_add_tail(&tsk->cg_list, &cset->dying_tasks);
+ 	cset->nr_tasks--;
+ 
++	if (dl_task(tsk))
++		dec_dl_tasks_cs(tsk);
++
+ 	WARN_ON_ONCE(cgroup_task_frozen(tsk));
+ 	if (unlikely(cgroup_task_freeze(tsk)))
+ 		cgroup_update_frozen(task_dfl_cgroup(tsk));
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index b476591168dcf..195f9cccab20b 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -161,6 +161,14 @@ struct cpuset {
+ 	 */
+ 	int use_parent_ecpus;
+ 	int child_ecpus_count;
++
++	/*
++	 * number of SCHED_DEADLINE tasks attached to this cpuset, so that we
++	 * know when to rebuild associated root domain bandwidth information.
++	 */
++	int nr_deadline_tasks;
++	int nr_migrate_dl_tasks;
++	u64 sum_migrate_dl_bw;
+ };
+ 
+ /*
+@@ -206,6 +214,20 @@ static inline struct cpuset *parent_cs(struct cpuset *cs)
+ 	return css_cs(cs->css.parent);
+ }
+ 
++void inc_dl_tasks_cs(struct task_struct *p)
++{
++	struct cpuset *cs = task_cs(p);
++
++	cs->nr_deadline_tasks++;
++}
++
++void dec_dl_tasks_cs(struct task_struct *p)
++{
++	struct cpuset *cs = task_cs(p);
++
++	cs->nr_deadline_tasks--;
++}
++
+ /* bits in struct cpuset flags field */
+ typedef enum {
+ 	CS_ONLINE,
+@@ -334,16 +356,16 @@ static struct cpuset top_cpuset = {
+  * guidelines for accessing subsystem state in kernel/cgroup.c
+  */
+ 
+-DEFINE_STATIC_PERCPU_RWSEM(cpuset_rwsem);
++static DEFINE_MUTEX(cpuset_mutex);
+ 
+-void cpuset_read_lock(void)
++void cpuset_lock(void)
+ {
+-	percpu_down_read(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ }
+ 
+-void cpuset_read_unlock(void)
++void cpuset_unlock(void)
+ {
+-	percpu_up_read(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ }
+ 
+ static DEFINE_SPINLOCK(callback_lock);
+@@ -912,11 +934,14 @@ done:
+ 	return ndoms;
+ }
+ 
+-static void update_tasks_root_domain(struct cpuset *cs)
++static void dl_update_tasks_root_domain(struct cpuset *cs)
+ {
+ 	struct css_task_iter it;
+ 	struct task_struct *task;
+ 
++	if (cs->nr_deadline_tasks == 0)
++		return;
++
+ 	css_task_iter_start(&cs->css, 0, &it);
+ 
+ 	while ((task = css_task_iter_next(&it)))
+@@ -925,12 +950,12 @@ static void update_tasks_root_domain(struct cpuset *cs)
+ 	css_task_iter_end(&it);
+ }
+ 
+-static void rebuild_root_domains(void)
++static void dl_rebuild_rd_accounting(void)
+ {
+ 	struct cpuset *cs = NULL;
+ 	struct cgroup_subsys_state *pos_css;
+ 
+-	percpu_rwsem_assert_held(&cpuset_rwsem);
++	lockdep_assert_held(&cpuset_mutex);
+ 	lockdep_assert_cpus_held();
+ 	lockdep_assert_held(&sched_domains_mutex);
+ 
+@@ -953,7 +978,7 @@ static void rebuild_root_domains(void)
+ 
+ 		rcu_read_unlock();
+ 
+-		update_tasks_root_domain(cs);
++		dl_update_tasks_root_domain(cs);
+ 
+ 		rcu_read_lock();
+ 		css_put(&cs->css);
+@@ -967,7 +992,7 @@ partition_and_rebuild_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ {
+ 	mutex_lock(&sched_domains_mutex);
+ 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+-	rebuild_root_domains();
++	dl_rebuild_rd_accounting();
+ 	mutex_unlock(&sched_domains_mutex);
+ }
+ 
+@@ -991,7 +1016,7 @@ static void rebuild_sched_domains_locked(void)
+ 	int ndoms;
+ 
+ 	lockdep_assert_cpus_held();
+-	percpu_rwsem_assert_held(&cpuset_rwsem);
++	lockdep_assert_held(&cpuset_mutex);
+ 
+ 	/*
+ 	 * If we have raced with CPU hotplug, return early to avoid
+@@ -1042,9 +1067,9 @@ static void rebuild_sched_domains_locked(void)
+ void rebuild_sched_domains(void)
+ {
+ 	get_online_cpus();
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 	rebuild_sched_domains_locked();
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ 	put_online_cpus();
+ }
+ 
+@@ -1160,7 +1185,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 	int new_prs;
+ 	bool part_error = false;	/* Partition error? */
+ 
+-	percpu_rwsem_assert_held(&cpuset_rwsem);
++	lockdep_assert_held(&cpuset_mutex);
+ 
+ 	/*
+ 	 * The parent must be a partition root.
+@@ -1490,7 +1515,7 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
+ 	struct cpuset *sibling;
+ 	struct cgroup_subsys_state *pos_css;
+ 
+-	percpu_rwsem_assert_held(&cpuset_rwsem);
++	lockdep_assert_held(&cpuset_mutex);
+ 
+ 	/*
+ 	 * Check all its siblings and call update_cpumasks_hier()
+@@ -2145,19 +2170,26 @@ static int fmeter_getrate(struct fmeter *fmp)
+ 
+ static struct cpuset *cpuset_attach_old_cs;
+ 
++static void reset_migrate_dl_data(struct cpuset *cs)
++{
++	cs->nr_migrate_dl_tasks = 0;
++	cs->sum_migrate_dl_bw = 0;
++}
++
+ /* Called by cgroups to determine if a cpuset is usable; cpuset_mutex held */
+ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ {
+ 	struct cgroup_subsys_state *css;
+-	struct cpuset *cs;
++	struct cpuset *cs, *oldcs;
+ 	struct task_struct *task;
+ 	int ret;
+ 
+ 	/* used later by cpuset_attach() */
+ 	cpuset_attach_old_cs = task_cs(cgroup_taskset_first(tset, &css));
++	oldcs = cpuset_attach_old_cs;
+ 	cs = css_cs(css);
+ 
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 
+ 	/* allow moving tasks into an empty cpuset if on default hierarchy */
+ 	ret = -ENOSPC;
+@@ -2166,14 +2198,39 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 		goto out_unlock;
+ 
+ 	cgroup_taskset_for_each(task, css, tset) {
+-		ret = task_can_attach(task, cs->effective_cpus);
++		ret = task_can_attach(task);
+ 		if (ret)
+ 			goto out_unlock;
+ 		ret = security_task_setscheduler(task);
+ 		if (ret)
+ 			goto out_unlock;
++
++		if (dl_task(task)) {
++			cs->nr_migrate_dl_tasks++;
++			cs->sum_migrate_dl_bw += task->dl.dl_bw;
++		}
+ 	}
+ 
++	if (!cs->nr_migrate_dl_tasks)
++		goto out_success;
++
++	if (!cpumask_intersects(oldcs->effective_cpus, cs->effective_cpus)) {
++		int cpu = cpumask_any_and(cpu_active_mask, cs->effective_cpus);
++
++		if (unlikely(cpu >= nr_cpu_ids)) {
++			reset_migrate_dl_data(cs);
++			ret = -EINVAL;
++			goto out_unlock;
++		}
++
++		ret = dl_bw_alloc(cpu, cs->sum_migrate_dl_bw);
++		if (ret) {
++			reset_migrate_dl_data(cs);
++			goto out_unlock;
++		}
++	}
++
++out_success:
+ 	/*
+ 	 * Mark attach is in progress.  This makes validate_change() fail
+ 	 * changes which zero cpus/mems_allowed.
+@@ -2181,7 +2238,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 	cs->attach_in_progress++;
+ 	ret = 0;
+ out_unlock:
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ 	return ret;
+ }
+ 
+@@ -2193,11 +2250,19 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
+ 	cgroup_taskset_first(tset, &css);
+ 	cs = css_cs(css);
+ 
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 	cs->attach_in_progress--;
+ 	if (!cs->attach_in_progress)
+ 		wake_up(&cpuset_attach_wq);
+-	percpu_up_write(&cpuset_rwsem);
++
++	if (cs->nr_migrate_dl_tasks) {
++		int cpu = cpumask_any(cs->effective_cpus);
++
++		dl_bw_free(cpu, cs->sum_migrate_dl_bw);
++		reset_migrate_dl_data(cs);
++	}
++
++	mutex_unlock(&cpuset_mutex);
+ }
+ 
+ /*
+@@ -2221,7 +2286,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
+ 	cs = css_cs(css);
+ 
+ 	lockdep_assert_cpus_held();	/* see cgroup_attach_lock() */
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 
+ 	/* prepare for attach */
+ 	if (cs == &top_cpuset)
+@@ -2271,11 +2336,17 @@ static void cpuset_attach(struct cgroup_taskset *tset)
+ 
+ 	cs->old_mems_allowed = cpuset_attach_nodemask_to;
+ 
++	if (cs->nr_migrate_dl_tasks) {
++		cs->nr_deadline_tasks += cs->nr_migrate_dl_tasks;
++		oldcs->nr_deadline_tasks -= cs->nr_migrate_dl_tasks;
++		reset_migrate_dl_data(cs);
++	}
++
+ 	cs->attach_in_progress--;
+ 	if (!cs->attach_in_progress)
+ 		wake_up(&cpuset_attach_wq);
+ 
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ }
+ 
+ /* The various types of files and directories in a cpuset file system */
+@@ -2307,7 +2378,7 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
+ 	int retval = 0;
+ 
+ 	get_online_cpus();
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 	if (!is_cpuset_online(cs)) {
+ 		retval = -ENODEV;
+ 		goto out_unlock;
+@@ -2343,7 +2414,7 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
+ 		break;
+ 	}
+ out_unlock:
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ 	put_online_cpus();
+ 	return retval;
+ }
+@@ -2356,7 +2427,7 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
+ 	int retval = -ENODEV;
+ 
+ 	get_online_cpus();
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 	if (!is_cpuset_online(cs))
+ 		goto out_unlock;
+ 
+@@ -2369,7 +2440,7 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
+ 		break;
+ 	}
+ out_unlock:
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ 	put_online_cpus();
+ 	return retval;
+ }
+@@ -2410,7 +2481,7 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
+ 	flush_work(&cpuset_hotplug_work);
+ 
+ 	get_online_cpus();
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 	if (!is_cpuset_online(cs))
+ 		goto out_unlock;
+ 
+@@ -2434,7 +2505,7 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
+ 
+ 	free_cpuset(trialcs);
+ out_unlock:
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ 	put_online_cpus();
+ 	kernfs_unbreak_active_protection(of->kn);
+ 	css_put(&cs->css);
+@@ -2567,13 +2638,13 @@ static ssize_t sched_partition_write(struct kernfs_open_file *of, char *buf,
+ 
+ 	css_get(&cs->css);
+ 	get_online_cpus();
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 	if (!is_cpuset_online(cs))
+ 		goto out_unlock;
+ 
+ 	retval = update_prstate(cs, val);
+ out_unlock:
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ 	put_online_cpus();
+ 	css_put(&cs->css);
+ 	return retval ?: nbytes;
+@@ -2781,7 +2852,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
+ 		return 0;
+ 
+ 	get_online_cpus();
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 
+ 	set_bit(CS_ONLINE, &cs->flags);
+ 	if (is_spread_page(parent))
+@@ -2832,7 +2903,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
+ 	cpumask_copy(cs->effective_cpus, parent->cpus_allowed);
+ 	spin_unlock_irq(&callback_lock);
+ out_unlock:
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ 	put_online_cpus();
+ 	return 0;
+ }
+@@ -2853,7 +2924,7 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
+ 	struct cpuset *cs = css_cs(css);
+ 
+ 	get_online_cpus();
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 
+ 	if (is_partition_root(cs))
+ 		update_prstate(cs, 0);
+@@ -2872,7 +2943,7 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
+ 	cpuset_dec();
+ 	clear_bit(CS_ONLINE, &cs->flags);
+ 
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ 	put_online_cpus();
+ }
+ 
+@@ -2885,7 +2956,7 @@ static void cpuset_css_free(struct cgroup_subsys_state *css)
+ 
+ static void cpuset_bind(struct cgroup_subsys_state *root_css)
+ {
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 	spin_lock_irq(&callback_lock);
+ 
+ 	if (is_in_v2_mode()) {
+@@ -2898,7 +2969,7 @@ static void cpuset_bind(struct cgroup_subsys_state *root_css)
+ 	}
+ 
+ 	spin_unlock_irq(&callback_lock);
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ }
+ 
+ /*
+@@ -2940,8 +3011,6 @@ struct cgroup_subsys cpuset_cgrp_subsys = {
+ 
+ int __init cpuset_init(void)
+ {
+-	BUG_ON(percpu_init_rwsem(&cpuset_rwsem));
+-
+ 	BUG_ON(!alloc_cpumask_var(&top_cpuset.cpus_allowed, GFP_KERNEL));
+ 	BUG_ON(!alloc_cpumask_var(&top_cpuset.effective_cpus, GFP_KERNEL));
+ 	BUG_ON(!zalloc_cpumask_var(&top_cpuset.subparts_cpus, GFP_KERNEL));
+@@ -3013,7 +3082,7 @@ hotplug_update_tasks_legacy(struct cpuset *cs,
+ 	is_empty = cpumask_empty(cs->cpus_allowed) ||
+ 		   nodes_empty(cs->mems_allowed);
+ 
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ 
+ 	/*
+ 	 * Move tasks to the nearest ancestor with execution resources,
+@@ -3023,7 +3092,7 @@ hotplug_update_tasks_legacy(struct cpuset *cs,
+ 	if (is_empty)
+ 		remove_tasks_in_empty_cpuset(cs);
+ 
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ }
+ 
+ static void
+@@ -3073,14 +3142,14 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
+ retry:
+ 	wait_event(cpuset_attach_wq, cs->attach_in_progress == 0);
+ 
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 
+ 	/*
+ 	 * We have raced with task attaching. We wait until attaching
+ 	 * is finished, so we won't attach a task to an empty cpuset.
+ 	 */
+ 	if (cs->attach_in_progress) {
+-		percpu_up_write(&cpuset_rwsem);
++		mutex_unlock(&cpuset_mutex);
+ 		goto retry;
+ 	}
+ 
+@@ -3152,7 +3221,7 @@ update_tasks:
+ 		hotplug_update_tasks_legacy(cs, &new_cpus, &new_mems,
+ 					    cpus_updated, mems_updated);
+ 
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ }
+ 
+ /**
+@@ -3182,7 +3251,7 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
+ 	if (on_dfl && !alloc_cpumasks(NULL, &tmp))
+ 		ptmp = &tmp;
+ 
+-	percpu_down_write(&cpuset_rwsem);
++	mutex_lock(&cpuset_mutex);
+ 
+ 	/* fetch the available cpus/mems and find out which changed how */
+ 	cpumask_copy(&new_cpus, cpu_active_mask);
+@@ -3239,7 +3308,7 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
+ 		update_tasks_nodemask(&top_cpuset);
+ 	}
+ 
+-	percpu_up_write(&cpuset_rwsem);
++	mutex_unlock(&cpuset_mutex);
+ 
+ 	/* if cpus or mems changed, we need to propagate to descendants */
+ 	if (cpus_updated || mems_updated) {
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 9d6dd14cfd261..40f40f359c5d5 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5221,6 +5221,7 @@ static int __sched_setscheduler(struct task_struct *p,
+ 	int reset_on_fork;
+ 	int queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
+ 	struct rq *rq;
++	bool cpuset_locked = false;
+ 
+ 	/* The pi code expects interrupts enabled */
+ 	BUG_ON(pi && in_interrupt());
+@@ -5318,8 +5319,14 @@ recheck:
+ 			return retval;
+ 	}
+ 
+-	if (pi)
+-		cpuset_read_lock();
++	/*
++	 * SCHED_DEADLINE bandwidth accounting relies on stable cpusets
++	 * information.
++	 */
++	if (dl_policy(policy) || dl_policy(p->policy)) {
++		cpuset_locked = true;
++		cpuset_lock();
++	}
+ 
+ 	/*
+ 	 * Make sure no PI-waiters arrive (or leave) while we are
+@@ -5395,8 +5402,8 @@ change:
+ 	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
+ 		policy = oldpolicy = -1;
+ 		task_rq_unlock(rq, p, &rf);
+-		if (pi)
+-			cpuset_read_unlock();
++		if (cpuset_locked)
++			cpuset_unlock();
+ 		goto recheck;
+ 	}
+ 
+@@ -5462,7 +5469,8 @@ change:
+ 	task_rq_unlock(rq, p, &rf);
+ 
+ 	if (pi) {
+-		cpuset_read_unlock();
++		if (cpuset_locked)
++			cpuset_unlock();
+ 		rt_mutex_adjust_pi(p);
+ 	}
+ 
+@@ -5474,8 +5482,8 @@ change:
+ 
+ unlock:
+ 	task_rq_unlock(rq, p, &rf);
+-	if (pi)
+-		cpuset_read_unlock();
++	if (cpuset_locked)
++		cpuset_unlock();
+ 	return retval;
+ }
+ 
+@@ -6592,8 +6600,7 @@ int cpuset_cpumask_can_shrink(const struct cpumask *cur,
+ 	return ret;
+ }
+ 
+-int task_can_attach(struct task_struct *p,
+-		    const struct cpumask *cs_effective_cpus)
++int task_can_attach(struct task_struct *p)
+ {
+ 	int ret = 0;
+ 
+@@ -6606,21 +6613,9 @@ int task_can_attach(struct task_struct *p,
+ 	 * success of set_cpus_allowed_ptr() on all attached tasks
+ 	 * before cpus_mask may be changed.
+ 	 */
+-	if (p->flags & PF_NO_SETAFFINITY) {
++	if (p->flags & PF_NO_SETAFFINITY)
+ 		ret = -EINVAL;
+-		goto out;
+-	}
+ 
+-	if (dl_task(p) && !cpumask_intersects(task_rq(p)->rd->span,
+-					      cs_effective_cpus)) {
+-		int cpu = cpumask_any_and(cpu_active_mask, cs_effective_cpus);
+-
+-		if (unlikely(cpu >= nr_cpu_ids))
+-			return -EINVAL;
+-		ret = dl_cpu_busy(cpu, p);
+-	}
+-
+-out:
+ 	return ret;
+ }
+ 
+@@ -6877,7 +6872,7 @@ static void cpuset_cpu_active(void)
+ static int cpuset_cpu_inactive(unsigned int cpu)
+ {
+ 	if (!cpuhp_tasks_frozen) {
+-		int ret = dl_cpu_busy(cpu, NULL);
++		int ret = dl_bw_check_overflow(cpu);
+ 
+ 		if (ret)
+ 			return ret;
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index f59cb3e8a6130..d91295d3059f7 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -17,6 +17,7 @@
+  */
+ #include "sched.h"
+ #include "pelt.h"
++#include <linux/cpuset.h>
+ 
+ struct dl_bandwidth def_dl_bandwidth;
+ 
+@@ -2417,6 +2418,12 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
+ 	if (task_on_rq_queued(p) && p->dl.dl_runtime)
+ 		task_non_contending(p);
+ 
++	/*
++	 * In case a task is setscheduled out from SCHED_DEADLINE we need to
++	 * keep track of that on its cpuset (for correct bandwidth tracking).
++	 */
++	dec_dl_tasks_cs(p);
++
+ 	if (!task_on_rq_queued(p)) {
+ 		/*
+ 		 * Inactive timer is armed. However, p is leaving DEADLINE and
+@@ -2457,6 +2464,12 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
+ 	if (hrtimer_try_to_cancel(&p->dl.inactive_timer) == 1)
+ 		put_task_struct(p);
+ 
++	/*
++	 * In case a task is setscheduled to SCHED_DEADLINE we need to keep
++	 * track of that on its cpuset (for correct bandwidth tracking).
++	 */
++	inc_dl_tasks_cs(p);
++
+ 	/* If p is not queued we will update its parameters at next wakeup. */
+ 	if (!task_on_rq_queued(p)) {
+ 		add_rq_bw(&p->dl, &rq->dl);
+@@ -2845,26 +2858,38 @@ int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur,
+ 	return ret;
+ }
+ 
+-int dl_cpu_busy(int cpu, struct task_struct *p)
++enum dl_bw_request {
++	dl_bw_req_check_overflow = 0,
++	dl_bw_req_alloc,
++	dl_bw_req_free
++};
++
++static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
+ {
+-	unsigned long flags, cap;
++	unsigned long flags;
+ 	struct dl_bw *dl_b;
+-	bool overflow;
++	bool overflow = 0;
+ 
+ 	rcu_read_lock_sched();
+ 	dl_b = dl_bw_of(cpu);
+ 	raw_spin_lock_irqsave(&dl_b->lock, flags);
+-	cap = dl_bw_capacity(cpu);
+-	overflow = __dl_overflow(dl_b, cap, 0, p ? p->dl.dl_bw : 0);
+ 
+-	if (!overflow && p) {
+-		/*
+-		 * We reserve space for this task in the destination
+-		 * root_domain, as we can't fail after this point.
+-		 * We will free resources in the source root_domain
+-		 * later on (see set_cpus_allowed_dl()).
+-		 */
+-		__dl_add(dl_b, p->dl.dl_bw, dl_bw_cpus(cpu));
++	if (req == dl_bw_req_free) {
++		__dl_sub(dl_b, dl_bw, dl_bw_cpus(cpu));
++	} else {
++		unsigned long cap = dl_bw_capacity(cpu);
++
++		overflow = __dl_overflow(dl_b, cap, 0, dl_bw);
++
++		if (req == dl_bw_req_alloc && !overflow) {
++			/*
++			 * We reserve space in the destination
++			 * root_domain, as we can't fail after this point.
++			 * We will free resources in the source root_domain
++			 * later on (see set_cpus_allowed_dl()).
++			 */
++			__dl_add(dl_b, dl_bw, dl_bw_cpus(cpu));
++		}
+ 	}
+ 
+ 	raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+@@ -2872,6 +2897,21 @@ int dl_cpu_busy(int cpu, struct task_struct *p)
+ 
+ 	return overflow ? -EBUSY : 0;
+ }
++
++int dl_bw_check_overflow(int cpu)
++{
++	return dl_bw_manage(dl_bw_req_check_overflow, cpu, 0);
++}
++
++int dl_bw_alloc(int cpu, u64 dl_bw)
++{
++	return dl_bw_manage(dl_bw_req_alloc, cpu, dl_bw);
++}
++
++void dl_bw_free(int cpu, u64 dl_bw)
++{
++	dl_bw_manage(dl_bw_req_free, cpu, dl_bw);
++}
+ #endif
+ 
+ #ifdef CONFIG_SCHED_DEBUG
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 852e856eed488..8de07aba8bdd4 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -348,7 +348,7 @@ extern void __getparam_dl(struct task_struct *p, struct sched_attr *attr);
+ extern bool __checkparam_dl(const struct sched_attr *attr);
+ extern bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr);
+ extern int  dl_cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial);
+-extern int  dl_cpu_busy(int cpu, struct task_struct *p);
++extern int  dl_bw_check_overflow(int cpu);
+ 
+ #ifdef CONFIG_CGROUP_SCHED
+ 
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index d07de3ff42acc..fc79b04b59470 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -148,6 +148,8 @@ static ktime_t tick_init_jiffy_update(void)
+ 	return period;
+ }
+ 
++#define MAX_STALLED_JIFFIES 5
++
+ static void tick_sched_do_timer(struct tick_sched *ts, ktime_t now)
+ {
+ 	int cpu = smp_processor_id();
+@@ -175,6 +177,21 @@ static void tick_sched_do_timer(struct tick_sched *ts, ktime_t now)
+ 	if (tick_do_timer_cpu == cpu)
+ 		tick_do_update_jiffies64(now);
+ 
++	/*
++	 * If jiffies update stalled for too long (timekeeper in stop_machine()
++	 * or VMEXIT'ed for several msecs), force an update.
++	 */
++	if (ts->last_tick_jiffies != jiffies) {
++		ts->stalled_jiffies = 0;
++		ts->last_tick_jiffies = READ_ONCE(jiffies);
++	} else {
++		if (++ts->stalled_jiffies == MAX_STALLED_JIFFIES) {
++			tick_do_update_jiffies64(now);
++			ts->stalled_jiffies = 0;
++			ts->last_tick_jiffies = READ_ONCE(jiffies);
++		}
++	}
++
+ 	if (ts->inidle)
+ 		ts->got_idle_tick = 1;
+ }
+@@ -867,6 +884,8 @@ static void tick_nohz_stop_tick(struct tick_sched *ts, int cpu)
+ 	if (unlikely(expires == KTIME_MAX)) {
+ 		if (ts->nohz_mode == NOHZ_MODE_HIGHRES)
+ 			hrtimer_cancel(&ts->sched_timer);
++		else
++			tick_program_event(KTIME_MAX, 1);
+ 		return;
+ 	}
+ 
+@@ -1257,9 +1276,15 @@ static void tick_nohz_handler(struct clock_event_device *dev)
+ 	tick_sched_do_timer(ts, now);
+ 	tick_sched_handle(ts, regs);
+ 
+-	/* No need to reprogram if we are running tickless  */
+-	if (unlikely(ts->tick_stopped))
++	if (unlikely(ts->tick_stopped)) {
++		/*
++		 * The clockevent device is not reprogrammed, so change the
++		 * clock event device to ONESHOT_STOPPED to avoid spurious
++		 * interrupts on devices which might not be truly one shot.
++		 */
++		tick_program_event(KTIME_MAX, 1);
+ 		return;
++	}
+ 
+ 	hrtimer_forward(&ts->sched_timer, now, TICK_NSEC);
+ 	tick_program_event(hrtimer_get_expires(&ts->sched_timer), 1);
+diff --git a/kernel/time/tick-sched.h b/kernel/time/tick-sched.h
+index 4fb06527cf64f..1e7ec5c968a5d 100644
+--- a/kernel/time/tick-sched.h
++++ b/kernel/time/tick-sched.h
+@@ -49,6 +49,8 @@ enum tick_nohz_mode {
+  * @timer_expires_base:	Base time clock monotonic for @timer_expires
+  * @next_timer:		Expiry time of next expiring timer for debugging purpose only
+  * @tick_dep_mask:	Tick dependency mask - is set, if someone needs the tick
++ * @last_tick_jiffies:	Value of jiffies seen on last tick
++ * @stalled_jiffies:	Number of stalled jiffies detected across ticks
+  */
+ struct tick_sched {
+ 	struct hrtimer			sched_timer;
+@@ -77,6 +79,8 @@ struct tick_sched {
+ 	u64				next_timer;
+ 	ktime_t				idle_expires;
+ 	atomic_t			tick_dep_mask;
++	unsigned long			last_tick_jiffies;
++	unsigned int			stalled_jiffies;
+ };
+ 
+ extern struct tick_sched *tick_get_tick_sched(int cpu);
+diff --git a/kernel/torture.c b/kernel/torture.c
+index 1061492f14bd9..458a5eed9454d 100644
+--- a/kernel/torture.c
++++ b/kernel/torture.c
+@@ -788,7 +788,7 @@ void torture_kthread_stopping(char *title)
+ 	VERBOSE_TOROUT_STRING(buf);
+ 	while (!kthread_should_stop()) {
+ 		torture_shutdown_absorb(title);
+-		schedule_timeout_uninterruptible(1);
++		schedule_timeout_uninterruptible(HZ / 20);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(torture_kthread_stopping);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 167f2a19fd8a2..597487a7f1bfb 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3782,8 +3782,15 @@ static void *s_start(struct seq_file *m, loff_t *pos)
+ 	 * will point to the same string as current_trace->name.
+ 	 */
+ 	mutex_lock(&trace_types_lock);
+-	if (unlikely(tr->current_trace && iter->trace->name != tr->current_trace->name))
++	if (unlikely(tr->current_trace && iter->trace->name != tr->current_trace->name)) {
++		/* Close iter->trace before switching to the new current tracer */
++		if (iter->trace->close)
++			iter->trace->close(iter);
+ 		*iter->trace = *tr->current_trace;
++		/* Reopen the new current tracer */
++		if (iter->trace->open)
++			iter->trace->open(iter);
++	}
+ 	mutex_unlock(&trace_types_lock);
+ 
+ #ifdef CONFIG_TRACER_MAX_TRACE
+@@ -4843,11 +4850,17 @@ int tracing_set_cpumask(struct trace_array *tr,
+ 				!cpumask_test_cpu(cpu, tracing_cpumask_new)) {
+ 			atomic_inc(&per_cpu_ptr(tr->array_buffer.data, cpu)->disabled);
+ 			ring_buffer_record_disable_cpu(tr->array_buffer.buffer, cpu);
++#ifdef CONFIG_TRACER_MAX_TRACE
++			ring_buffer_record_disable_cpu(tr->max_buffer.buffer, cpu);
++#endif
+ 		}
+ 		if (!cpumask_test_cpu(cpu, tr->tracing_cpumask) &&
+ 				cpumask_test_cpu(cpu, tracing_cpumask_new)) {
+ 			atomic_dec(&per_cpu_ptr(tr->array_buffer.data, cpu)->disabled);
+ 			ring_buffer_record_enable_cpu(tr->array_buffer.buffer, cpu);
++#ifdef CONFIG_TRACER_MAX_TRACE
++			ring_buffer_record_enable_cpu(tr->max_buffer.buffer, cpu);
++#endif
+ 		}
+ 	}
+ 	arch_spin_unlock(&tr->max_lock);
+diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
+index ee4571b624bcb..619a60944bb6d 100644
+--- a/kernel/trace/trace_irqsoff.c
++++ b/kernel/trace/trace_irqsoff.c
+@@ -228,7 +228,8 @@ static void irqsoff_trace_open(struct trace_iterator *iter)
+ {
+ 	if (is_graph(iter->tr))
+ 		graph_trace_open(iter);
+-
++	else
++		iter->private = NULL;
+ }
+ 
+ static void irqsoff_trace_close(struct trace_iterator *iter)
+diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
+index 97b10bb31a1f0..037e1e863b17f 100644
+--- a/kernel/trace/trace_sched_wakeup.c
++++ b/kernel/trace/trace_sched_wakeup.c
+@@ -171,6 +171,8 @@ static void wakeup_trace_open(struct trace_iterator *iter)
+ {
+ 	if (is_graph(iter->tr))
+ 		graph_trace_open(iter);
++	else
++		iter->private = NULL;
+ }
+ 
+ static void wakeup_trace_close(struct trace_iterator *iter)
+diff --git a/lib/clz_ctz.c b/lib/clz_ctz.c
+index 0d3a686b5ba29..fb8c0c5c2bd27 100644
+--- a/lib/clz_ctz.c
++++ b/lib/clz_ctz.c
+@@ -28,36 +28,16 @@ int __weak __clzsi2(int val)
+ }
+ EXPORT_SYMBOL(__clzsi2);
+ 
+-int __weak __clzdi2(long val);
+-int __weak __ctzdi2(long val);
+-#if BITS_PER_LONG == 32
+-
+-int __weak __clzdi2(long val)
++int __weak __clzdi2(u64 val);
++int __weak __clzdi2(u64 val)
+ {
+-	return 32 - fls((int)val);
++	return 64 - fls64(val);
+ }
+ EXPORT_SYMBOL(__clzdi2);
+ 
+-int __weak __ctzdi2(long val)
++int __weak __ctzdi2(u64 val);
++int __weak __ctzdi2(u64 val)
+ {
+-	return __ffs((u32)val);
++	return __ffs64(val);
+ }
+ EXPORT_SYMBOL(__ctzdi2);
+-
+-#elif BITS_PER_LONG == 64
+-
+-int __weak __clzdi2(long val)
+-{
+-	return 64 - fls64((u64)val);
+-}
+-EXPORT_SYMBOL(__clzdi2);
+-
+-int __weak __ctzdi2(long val)
+-{
+-	return __ffs64((u64)val);
+-}
+-EXPORT_SYMBOL(__ctzdi2);
+-
+-#else
+-#error BITS_PER_LONG not 32 or 64
+-#endif
+diff --git a/lib/radix-tree.c b/lib/radix-tree.c
+index 3a4da11b804d9..cbc6915252363 100644
+--- a/lib/radix-tree.c
++++ b/lib/radix-tree.c
+@@ -1133,7 +1133,6 @@ static void set_iter_tags(struct radix_tree_iter *iter,
+ void __rcu **radix_tree_iter_resume(void __rcu **slot,
+ 					struct radix_tree_iter *iter)
+ {
+-	slot++;
+ 	iter->index = __radix_tree_iter_add(iter, 1);
+ 	iter->next_index = iter->index;
+ 	iter->tags = 0;
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index b21dd4a793926..652283a1353d7 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1690,70 +1690,51 @@ EXPORT_SYMBOL(unpoison_memory);
+ 
+ /*
+  * Safely get reference count of an arbitrary page.
+- * Returns 0 for a free page, -EIO for a zero refcount page
+- * that is not free, and 1 for any other page type.
+- * For 1 the page is returned with increased page count, otherwise not.
++ * Returns 0 for a free page, 1 for an in-use page, -EIO for a page-type we
++ * cannot handle and -EBUSY if we raced with an allocation.
++ * We only incremented refcount in case the page was already in-use and it is
++ * a known type we can handle.
+  */
+-static int __get_any_page(struct page *p, unsigned long pfn, int flags)
++static int get_any_page(struct page *p, int flags)
+ {
+-	int ret;
++	int ret = 0, pass = 0;
++	bool count_increased = false;
+ 
+ 	if (flags & MF_COUNT_INCREASED)
+-		return 1;
+-
+-	/*
+-	 * When the target page is a free hugepage, just remove it
+-	 * from free hugepage list.
+-	 */
+-	if (!get_hwpoison_page(p)) {
+-		if (PageHuge(p)) {
+-			pr_info("%s: %#lx free huge page\n", __func__, pfn);
+-			ret = 0;
+-		} else if (is_free_buddy_page(p)) {
+-			pr_info("%s: %#lx free buddy page\n", __func__, pfn);
+-			ret = 0;
+-		} else if (page_count(p)) {
+-			/* raced with allocation */
++		count_increased = true;
++
++try_again:
++	if (!count_increased && !get_hwpoison_page(p)) {
++		if (page_count(p)) {
++			/* We raced with an allocation, retry. */
++			if (pass++ < 3)
++				goto try_again;
+ 			ret = -EBUSY;
+-		} else {
+-			pr_info("%s: %#lx: unknown zero refcount page type %lx\n",
+-				__func__, pfn, p->flags);
++		} else if (!PageHuge(p) && !is_free_buddy_page(p)) {
++			/* We raced with put_page, retry. */
++			if (pass++ < 3)
++				goto try_again;
+ 			ret = -EIO;
+ 		}
+ 	} else {
+-		/* Not a free page */
+-		ret = 1;
+-	}
+-	return ret;
+-}
+-
+-static int get_any_page(struct page *page, unsigned long pfn, int flags)
+-{
+-	int ret = __get_any_page(page, pfn, flags);
+-
+-	if (ret == -EBUSY)
+-		ret = __get_any_page(page, pfn, flags);
+-
+-	if (ret == 1 && !PageHuge(page) &&
+-	    !PageLRU(page) && !__PageMovable(page)) {
+-		/*
+-		 * Try to free it.
+-		 */
+-		put_page(page);
+-		shake_page(page, 1);
+-
+-		/*
+-		 * Did it turn free?
+-		 */
+-		ret = __get_any_page(page, pfn, 0);
+-		if (ret == 1 && !PageLRU(page)) {
+-			/* Drop page reference which is from __get_any_page() */
+-			put_page(page);
+-			pr_info("soft_offline: %#lx: unknown non LRU page type %lx (%pGp)\n",
+-				pfn, page->flags, &page->flags);
+-			return -EIO;
++		if (PageHuge(p) || PageLRU(p) || __PageMovable(p)) {
++			ret = 1;
++		} else {
++			/*
++			 * A page we cannot handle. Check whether we can turn
++			 * it into something we can handle.
++			 */
++			if (pass++ < 3) {
++				put_page(p);
++				shake_page(p, 1);
++				count_increased = false;
++				goto try_again;
++			}
++			put_page(p);
++			ret = -EIO;
+ 		}
+ 	}
++
+ 	return ret;
+ }
+ 
+@@ -1876,14 +1857,10 @@ static int soft_offline_in_use_page(struct page *page)
+ 	return __soft_offline_page(page);
+ }
+ 
+-static int soft_offline_free_page(struct page *page)
++static void put_ref_page(struct page *page)
+ {
+-	int rc = 0;
+-
+-	if (!page_handle_poison(page, true, false))
+-		rc = -EBUSY;
+-
+-	return rc;
++	if (page)
++		put_page(page);
+ }
+ 
+ /**
+@@ -1911,36 +1888,49 @@ static int soft_offline_free_page(struct page *page)
+ int soft_offline_page(unsigned long pfn, int flags)
+ {
+ 	int ret;
+-	struct page *page;
+ 	bool try_again = true;
++	struct page *page, *ref_page = NULL;
++
++	WARN_ON_ONCE(!pfn_valid(pfn) && (flags & MF_COUNT_INCREASED));
+ 
+ 	if (!pfn_valid(pfn))
+ 		return -ENXIO;
++	if (flags & MF_COUNT_INCREASED)
++		ref_page = pfn_to_page(pfn);
++
+ 	/* Only online pages can be soft-offlined (esp., not ZONE_DEVICE). */
+ 	page = pfn_to_online_page(pfn);
+-	if (!page)
++	if (!page) {
++		put_ref_page(ref_page);
+ 		return -EIO;
++	}
+ 
+ 	if (PageHWPoison(page)) {
+-		pr_info("soft offline: %#lx page already poisoned\n", pfn);
+-		if (flags & MF_COUNT_INCREASED)
+-			put_page(page);
++		pr_info("%s: %#lx page already poisoned\n", __func__, pfn);
++		put_ref_page(ref_page);
+ 		return 0;
+ 	}
+ 
+ retry:
+ 	get_online_mems();
+-	ret = get_any_page(page, pfn, flags);
++	ret = get_any_page(page, flags);
+ 	put_online_mems();
+ 
+-	if (ret > 0)
++	if (ret > 0) {
+ 		ret = soft_offline_in_use_page(page);
+-	else if (ret == 0)
+-		if (soft_offline_free_page(page) && try_again) {
+-			try_again = false;
+-			flags &= ~MF_COUNT_INCREASED;
+-			goto retry;
++	} else if (ret == 0) {
++		if (!page_handle_poison(page, true, false)) {
++			if (try_again) {
++				try_again = false;
++				flags &= ~MF_COUNT_INCREASED;
++				goto retry;
++			}
++			ret = -EBUSY;
+ 		}
++	} else if (ret == -EIO) {
++		pr_info("%s: %#lx: unknown page type: %lx (%pGp)\n",
++			 __func__, pfn, page->flags, &page->flags);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index fff03a331314f..d6a4794fa8ca8 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -2453,6 +2453,10 @@ void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot)
+ 		free_vm_area(area);
+ 		return NULL;
+ 	}
++
++	flush_cache_vmap((unsigned long)area->addr,
++			 (unsigned long)area->addr + count * PAGE_SIZE);
++
+ 	return area->addr;
+ }
+ EXPORT_SYMBOL_GPL(vmap_pfn);
+diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c
+index 79a7dfc32e76c..83586f1dd8d76 100644
+--- a/net/batman-adv/bat_v_elp.c
++++ b/net/batman-adv/bat_v_elp.c
+@@ -509,7 +509,7 @@ int batadv_v_elp_packet_recv(struct sk_buff *skb,
+ 	struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface);
+ 	struct batadv_elp_packet *elp_packet;
+ 	struct batadv_hard_iface *primary_if;
+-	struct ethhdr *ethhdr = (struct ethhdr *)skb_mac_header(skb);
++	struct ethhdr *ethhdr;
+ 	bool res;
+ 	int ret = NET_RX_DROP;
+ 
+@@ -517,6 +517,7 @@ int batadv_v_elp_packet_recv(struct sk_buff *skb,
+ 	if (!res)
+ 		goto free_skb;
+ 
++	ethhdr = eth_hdr(skb);
+ 	if (batadv_is_my_mac(bat_priv, ethhdr->h_source))
+ 		goto free_skb;
+ 
+diff --git a/net/batman-adv/bat_v_ogm.c b/net/batman-adv/bat_v_ogm.c
+index 8c1148fc73d77..c451694fdb42f 100644
+--- a/net/batman-adv/bat_v_ogm.c
++++ b/net/batman-adv/bat_v_ogm.c
+@@ -123,8 +123,10 @@ static void batadv_v_ogm_send_to_if(struct sk_buff *skb,
+ {
+ 	struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
+ 
+-	if (hard_iface->if_status != BATADV_IF_ACTIVE)
++	if (hard_iface->if_status != BATADV_IF_ACTIVE) {
++		kfree_skb(skb);
+ 		return;
++	}
+ 
+ 	batadv_inc_counter(bat_priv, BATADV_CNT_MGMT_TX);
+ 	batadv_add_counter(bat_priv, BATADV_CNT_MGMT_TX_BYTES,
+@@ -998,7 +1000,7 @@ int batadv_v_ogm_packet_recv(struct sk_buff *skb,
+ {
+ 	struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface);
+ 	struct batadv_ogm2_packet *ogm_packet;
+-	struct ethhdr *ethhdr = eth_hdr(skb);
++	struct ethhdr *ethhdr;
+ 	int ogm_offset;
+ 	u8 *packet_pos;
+ 	int ret = NET_RX_DROP;
+@@ -1012,6 +1014,7 @@ int batadv_v_ogm_packet_recv(struct sk_buff *skb,
+ 	if (!batadv_check_management_packet(skb, if_incoming, BATADV_OGM2_HLEN))
+ 		goto free_skb;
+ 
++	ethhdr = eth_hdr(skb);
+ 	if (batadv_is_my_mac(bat_priv, ethhdr->h_source))
+ 		goto free_skb;
+ 
+diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c
+index fe0898a9b4e82..fe79bfc6d2dd1 100644
+--- a/net/batman-adv/hard-interface.c
++++ b/net/batman-adv/hard-interface.c
+@@ -632,7 +632,19 @@ out:
+  */
+ void batadv_update_min_mtu(struct net_device *soft_iface)
+ {
+-	soft_iface->mtu = batadv_hardif_min_mtu(soft_iface);
++	struct batadv_priv *bat_priv = netdev_priv(soft_iface);
++	int limit_mtu;
++	int mtu;
++
++	mtu = batadv_hardif_min_mtu(soft_iface);
++
++	if (bat_priv->mtu_set_by_user)
++		limit_mtu = bat_priv->mtu_set_by_user;
++	else
++		limit_mtu = ETH_DATA_LEN;
++
++	mtu = min(mtu, limit_mtu);
++	dev_set_mtu(soft_iface, mtu);
+ 
+ 	/* Check if the local translate table should be cleaned up to match a
+ 	 * new (and smaller) MTU.
+diff --git a/net/batman-adv/netlink.c b/net/batman-adv/netlink.c
+index 121459704b069..931bc3b5c6df0 100644
+--- a/net/batman-adv/netlink.c
++++ b/net/batman-adv/netlink.c
+@@ -496,7 +496,10 @@ static int batadv_netlink_set_mesh(struct sk_buff *skb, struct genl_info *info)
+ 		attr = info->attrs[BATADV_ATTR_FRAGMENTATION_ENABLED];
+ 
+ 		atomic_set(&bat_priv->fragmentation, !!nla_get_u8(attr));
++
++		rtnl_lock();
+ 		batadv_update_min_mtu(bat_priv->soft_iface);
++		rtnl_unlock();
+ 	}
+ 
+ 	if (info->attrs[BATADV_ATTR_GW_BANDWIDTH_DOWN]) {
+diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
+index 8f7c778255fba..7ac16d7b94a2f 100644
+--- a/net/batman-adv/soft-interface.c
++++ b/net/batman-adv/soft-interface.c
+@@ -156,11 +156,14 @@ static int batadv_interface_set_mac_addr(struct net_device *dev, void *p)
+ 
+ static int batadv_interface_change_mtu(struct net_device *dev, int new_mtu)
+ {
++	struct batadv_priv *bat_priv = netdev_priv(dev);
++
+ 	/* check ranges */
+ 	if (new_mtu < 68 || new_mtu > batadv_hardif_min_mtu(dev))
+ 		return -EINVAL;
+ 
+ 	dev->mtu = new_mtu;
++	bat_priv->mtu_set_by_user = new_mtu;
+ 
+ 	return 0;
+ }
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 5f990a2061072..9e8ebac9b7e7e 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -775,7 +775,6 @@ check_roaming:
+ 		if (roamed_back) {
+ 			batadv_tt_global_free(bat_priv, tt_global,
+ 					      "Roaming canceled");
+-			tt_global = NULL;
+ 		} else {
+ 			/* The global entry has to be marked as ROAMING and
+ 			 * has to be kept for consistency purpose
+diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
+index 965336a3b89d5..7d47fe7534c18 100644
+--- a/net/batman-adv/types.h
++++ b/net/batman-adv/types.h
+@@ -1566,6 +1566,12 @@ struct batadv_priv {
+ 	/** @soft_iface: net device which holds this struct as private data */
+ 	struct net_device *soft_iface;
+ 
++	/**
++	 * @mtu_set_by_user: MTU was set once by user
++	 * protected by rtnl_lock
++	 */
++	int mtu_set_by_user;
++
+ 	/**
+ 	 * @bat_counters: mesh internal traffic statistic counters (see
+ 	 *  batadv_counters)
+diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
+index c4cf2529d08ba..ef5c174102d5e 100644
+--- a/net/ceph/mon_client.c
++++ b/net/ceph/mon_client.c
+@@ -96,9 +96,11 @@ int ceph_monmap_contains(struct ceph_monmap *m, struct ceph_entity_addr *addr)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < m->num_mon; i++)
+-		if (memcmp(addr, &m->mon_inst[i].addr, sizeof(*addr)) == 0)
++	for (i = 0; i < m->num_mon; i++) {
++		if (ceph_addr_equal_no_type(addr, &m->mon_inst[i].addr))
+ 			return 1;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index ce37a052b9c32..021dcfdae2835 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2161,13 +2161,27 @@ out_err:
+ 	return err;
+ }
+ 
+-int rtnl_nla_parse_ifla(struct nlattr **tb, const struct nlattr *head, int len,
+-			struct netlink_ext_ack *exterr)
++int rtnl_nla_parse_ifinfomsg(struct nlattr **tb, const struct nlattr *nla_peer,
++			     struct netlink_ext_ack *exterr)
+ {
+-	return nla_parse_deprecated(tb, IFLA_MAX, head, len, ifla_policy,
++	const struct ifinfomsg *ifmp;
++	const struct nlattr *attrs;
++	size_t len;
++
++	ifmp = nla_data(nla_peer);
++	attrs = nla_data(nla_peer) + sizeof(struct ifinfomsg);
++	len = nla_len(nla_peer) - sizeof(struct ifinfomsg);
++
++	if (ifmp->ifi_index < 0) {
++		NL_SET_ERR_MSG_ATTR(exterr, nla_peer,
++				    "ifindex can't be negative");
++		return -EINVAL;
++	}
++
++	return nla_parse_deprecated(tb, IFLA_MAX, attrs, len, ifla_policy,
+ 				    exterr);
+ }
+-EXPORT_SYMBOL(rtnl_nla_parse_ifla);
++EXPORT_SYMBOL(rtnl_nla_parse_ifinfomsg);
+ 
+ struct net *rtnl_link_get_net(struct net *src_net, struct nlattr *tb[])
+ {
+@@ -3258,6 +3272,7 @@ static int __rtnl_newlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	struct ifinfomsg *ifm;
+ 	char ifname[IFNAMSIZ];
+ 	struct nlattr **data;
++	bool link_specified;
+ 	int err;
+ 
+ #ifdef CONFIG_MODULES
+@@ -3278,12 +3293,19 @@ replay:
+ 		ifname[0] = '\0';
+ 
+ 	ifm = nlmsg_data(nlh);
+-	if (ifm->ifi_index > 0)
++	if (ifm->ifi_index > 0) {
++		link_specified = true;
+ 		dev = __dev_get_by_index(net, ifm->ifi_index);
+-	else if (tb[IFLA_IFNAME] || tb[IFLA_ALT_IFNAME])
++	} else if (ifm->ifi_index < 0) {
++		NL_SET_ERR_MSG(extack, "ifindex can't be negative");
++		return -EINVAL;
++	} else if (tb[IFLA_IFNAME] || tb[IFLA_ALT_IFNAME]) {
++		link_specified = true;
+ 		dev = rtnl_dev_get(net, NULL, tb[IFLA_ALT_IFNAME], ifname);
+-	else
++	} else {
++		link_specified = false;
+ 		dev = NULL;
++	}
+ 
+ 	master_dev = NULL;
+ 	m_ops = NULL;
+@@ -3386,7 +3408,12 @@ replay:
+ 	}
+ 
+ 	if (!(nlh->nlmsg_flags & NLM_F_CREATE)) {
+-		if (ifm->ifi_index == 0 && tb[IFLA_GROUP])
++		/* No dev found and NLM_F_CREATE not set. Requested dev does not exist,
++		 * or it's for a group
++		*/
++		if (link_specified)
++			return -ENODEV;
++		if (tb[IFLA_GROUP])
+ 			return rtnl_group_changelink(skb, net,
+ 						nla_get_u32(tb[IFLA_GROUP]),
+ 						ifm, extack, tb);
+diff --git a/net/dccp/proto.c b/net/dccp/proto.c
+index 3293c1e3aa5d5..c647035a36d18 100644
+--- a/net/dccp/proto.c
++++ b/net/dccp/proto.c
+@@ -324,11 +324,15 @@ EXPORT_SYMBOL_GPL(dccp_disconnect);
+ __poll_t dccp_poll(struct file *file, struct socket *sock,
+ 		       poll_table *wait)
+ {
+-	__poll_t mask;
+ 	struct sock *sk = sock->sk;
++	__poll_t mask;
++	u8 shutdown;
++	int state;
+ 
+ 	sock_poll_wait(file, sock, wait);
+-	if (sk->sk_state == DCCP_LISTEN)
++
++	state = inet_sk_state_load(sk);
++	if (state == DCCP_LISTEN)
+ 		return inet_csk_listen_poll(sk);
+ 
+ 	/* Socket is not locked. We are protected from async events
+@@ -337,20 +341,21 @@ __poll_t dccp_poll(struct file *file, struct socket *sock,
+ 	 */
+ 
+ 	mask = 0;
+-	if (sk->sk_err)
++	if (READ_ONCE(sk->sk_err))
+ 		mask = EPOLLERR;
++	shutdown = READ_ONCE(sk->sk_shutdown);
+ 
+-	if (sk->sk_shutdown == SHUTDOWN_MASK || sk->sk_state == DCCP_CLOSED)
++	if (shutdown == SHUTDOWN_MASK || state == DCCP_CLOSED)
+ 		mask |= EPOLLHUP;
+-	if (sk->sk_shutdown & RCV_SHUTDOWN)
++	if (shutdown & RCV_SHUTDOWN)
+ 		mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;
+ 
+ 	/* Connected? */
+-	if ((1 << sk->sk_state) & ~(DCCPF_REQUESTING | DCCPF_RESPOND)) {
++	if ((1 << state) & ~(DCCPF_REQUESTING | DCCPF_RESPOND)) {
+ 		if (atomic_read(&sk->sk_rmem_alloc) > 0)
+ 			mask |= EPOLLIN | EPOLLRDNORM;
+ 
+-		if (!(sk->sk_shutdown & SEND_SHUTDOWN)) {
++		if (!(shutdown & SEND_SHUTDOWN)) {
+ 			if (sk_stream_is_writeable(sk)) {
+ 				mask |= EPOLLOUT | EPOLLWRNORM;
+ 			} else {  /* send SIGIO later */
+@@ -368,7 +373,6 @@ __poll_t dccp_poll(struct file *file, struct socket *sock,
+ 	}
+ 	return mask;
+ }
+-
+ EXPORT_SYMBOL_GPL(dccp_poll);
+ 
+ int dccp_ioctl(struct sock *sk, int cmd, unsigned long arg)
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 3be93175b3ffd..50f840e312b03 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -901,12 +901,14 @@ static void pipapo_lt_bits_adjust(struct nft_pipapo_field *f)
+ static int pipapo_insert(struct nft_pipapo_field *f, const uint8_t *k,
+ 			 int mask_bits)
+ {
+-	int rule = f->rules++, group, ret, bit_offset = 0;
++	int rule = f->rules, group, ret, bit_offset = 0;
+ 
+-	ret = pipapo_resize(f, f->rules - 1, f->rules);
++	ret = pipapo_resize(f, f->rules, f->rules + 1);
+ 	if (ret)
+ 		return ret;
+ 
++	f->rules++;
++
+ 	for (group = 0; group < f->groups; group++) {
+ 		int i, v;
+ 		u8 mask;
+@@ -1051,7 +1053,9 @@ static int pipapo_expand(struct nft_pipapo_field *f,
+ 			step++;
+ 			if (step >= len) {
+ 				if (!masks) {
+-					pipapo_insert(f, base, 0);
++					err = pipapo_insert(f, base, 0);
++					if (err < 0)
++						return err;
+ 					masks = 1;
+ 				}
+ 				goto out;
+@@ -1234,6 +1238,9 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 		else
+ 			ret = pipapo_expand(f, start, end, f->groups * f->bb);
+ 
++		if (ret < 0)
++			return ret;
++
+ 		if (f->bsize > bsize_max)
+ 			bsize_max = f->bsize;
+ 
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index fb50e3f3283f9..5c2d230790db9 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1513,10 +1513,28 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 	return 0;
+ }
+ 
++static bool req_create_or_replace(struct nlmsghdr *n)
++{
++	return (n->nlmsg_flags & NLM_F_CREATE &&
++		n->nlmsg_flags & NLM_F_REPLACE);
++}
++
++static bool req_create_exclusive(struct nlmsghdr *n)
++{
++	return (n->nlmsg_flags & NLM_F_CREATE &&
++		n->nlmsg_flags & NLM_F_EXCL);
++}
++
++static bool req_change(struct nlmsghdr *n)
++{
++	return (!(n->nlmsg_flags & NLM_F_CREATE) &&
++		!(n->nlmsg_flags & NLM_F_REPLACE) &&
++		!(n->nlmsg_flags & NLM_F_EXCL));
++}
++
+ /*
+  * Create/change qdisc.
+  */
+-
+ static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 			   struct netlink_ext_ack *extack)
+ {
+@@ -1613,27 +1631,35 @@ replay:
+ 				 *
+ 				 *   We know, that some child q is already
+ 				 *   attached to this parent and have choice:
+-				 *   either to change it or to create/graft new one.
++				 *   1) change it or 2) create/graft new one.
++				 *   If the requested qdisc kind is different
++				 *   than the existing one, then we choose graft.
++				 *   If they are the same then this is "change"
++				 *   operation - just let it fallthrough..
+ 				 *
+ 				 *   1. We are allowed to create/graft only
+-				 *   if CREATE and REPLACE flags are set.
++				 *   if the request is explicitly stating
++				 *   "please create if it doesn't exist".
+ 				 *
+-				 *   2. If EXCL is set, requestor wanted to say,
+-				 *   that qdisc tcm_handle is not expected
++				 *   2. If the request is to exclusive create
++				 *   then the qdisc tcm_handle is not expected
+ 				 *   to exist, so that we choose create/graft too.
+ 				 *
+ 				 *   3. The last case is when no flags are set.
++				 *   This will happen when for example tc
++				 *   utility issues a "change" command.
+ 				 *   Alas, it is sort of hole in API, we
+ 				 *   cannot decide what to do unambiguously.
+-				 *   For now we select create/graft, if
+-				 *   user gave KIND, which does not match existing.
++				 *   For now we select create/graft.
+ 				 */
+-				if ((n->nlmsg_flags & NLM_F_CREATE) &&
+-				    (n->nlmsg_flags & NLM_F_REPLACE) &&
+-				    ((n->nlmsg_flags & NLM_F_EXCL) ||
+-				     (tca[TCA_KIND] &&
+-				      nla_strcmp(tca[TCA_KIND], q->ops->id))))
+-					goto create_n_graft;
++				if (tca[TCA_KIND] &&
++				    nla_strcmp(tca[TCA_KIND], q->ops->id)) {
++					if (req_create_or_replace(n) ||
++					    req_create_exclusive(n))
++						goto create_n_graft;
++					else if (req_change(n))
++						goto create_n_graft2;
++				}
+ 			}
+ 		}
+ 	} else {
+@@ -1667,6 +1693,7 @@ create_n_graft:
+ 		NL_SET_ERR_MSG(extack, "Qdisc not found. To create specify NLM_F_CREATE flag");
+ 		return -ENOENT;
+ 	}
++create_n_graft2:
+ 	if (clid == TC_H_INGRESS) {
+ 		if (dev_ingress_queue(dev)) {
+ 			q = qdisc_create(dev, dev_ingress_queue(dev), p,
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 534364bb871a3..fa4d31b507f29 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -97,7 +97,7 @@ struct percpu_counter sctp_sockets_allocated;
+ 
+ static void sctp_enter_memory_pressure(struct sock *sk)
+ {
+-	sctp_memory_pressure = 1;
++	WRITE_ONCE(sctp_memory_pressure, 1);
+ }
+ 
+ 
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 338b06de86d16..d015576f3081a 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -972,9 +972,6 @@ struct rpcrdma_rep *rpcrdma_rep_create(struct rpcrdma_xprt *r_xprt,
+ 	if (!rep->rr_rdmabuf)
+ 		goto out_free;
+ 
+-	if (!rpcrdma_regbuf_dma_map(r_xprt, rep->rr_rdmabuf))
+-		goto out_free_regbuf;
+-
+ 	xdr_buf_init(&rep->rr_hdrbuf, rdmab_data(rep->rr_rdmabuf),
+ 		     rdmab_length(rep->rr_rdmabuf));
+ 	rep->rr_cqe.done = rpcrdma_wc_receive;
+@@ -987,8 +984,6 @@ struct rpcrdma_rep *rpcrdma_rep_create(struct rpcrdma_xprt *r_xprt,
+ 	list_add(&rep->rr_all, &r_xprt->rx_buf.rb_all_reps);
+ 	return rep;
+ 
+-out_free_regbuf:
+-	rpcrdma_regbuf_free(rep->rr_rdmabuf);
+ out_free:
+ 	kfree(rep);
+ out:
+@@ -1425,6 +1420,10 @@ void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, int needed, bool temp)
+ 			rep = rpcrdma_rep_create(r_xprt, temp);
+ 		if (!rep)
+ 			break;
++		if (!rpcrdma_regbuf_dma_map(r_xprt, rep->rr_rdmabuf)) {
++			rpcrdma_rep_put(buf, rep);
++			break;
++		}
+ 
+ 		trace_xprtrdma_post_recv(rep);
+ 		rep->rr_recv_wr.next = wr;
+diff --git a/security/selinux/ss/policydb.c b/security/selinux/ss/policydb.c
+index 6a04de21343f8..82cfeab162174 100644
+--- a/security/selinux/ss/policydb.c
++++ b/security/selinux/ss/policydb.c
+@@ -2011,6 +2011,7 @@ static int filename_trans_read_helper(struct policydb *p, void *fp)
+ 		if (!datum)
+ 			goto out;
+ 
++		datum->next = NULL;
+ 		*dst = datum;
+ 
+ 		/* ebitmap_read() will at least init the bitmap */
+@@ -2023,7 +2024,6 @@ static int filename_trans_read_helper(struct policydb *p, void *fp)
+ 			goto out;
+ 
+ 		datum->otype = le32_to_cpu(buf[0]);
+-		datum->next = NULL;
+ 
+ 		dst = &datum->next;
+ 	}
+diff --git a/sound/core/pcm_memory.c b/sound/core/pcm_memory.c
+index 191883842a35d..3e60a337bbef1 100644
+--- a/sound/core/pcm_memory.c
++++ b/sound/core/pcm_memory.c
+@@ -31,20 +31,51 @@ static unsigned long max_alloc_per_card = 32UL * 1024UL * 1024UL;
+ module_param(max_alloc_per_card, ulong, 0644);
+ MODULE_PARM_DESC(max_alloc_per_card, "Max total allocation bytes per card.");
+ 
++static void __update_allocated_size(struct snd_card *card, ssize_t bytes)
++{
++	card->total_pcm_alloc_bytes += bytes;
++}
++
++static void update_allocated_size(struct snd_card *card, ssize_t bytes)
++{
++	mutex_lock(&card->memory_mutex);
++	__update_allocated_size(card, bytes);
++	mutex_unlock(&card->memory_mutex);
++}
++
++static void decrease_allocated_size(struct snd_card *card, size_t bytes)
++{
++	mutex_lock(&card->memory_mutex);
++	WARN_ON(card->total_pcm_alloc_bytes < bytes);
++	__update_allocated_size(card, -(ssize_t)bytes);
++	mutex_unlock(&card->memory_mutex);
++}
++
+ static int do_alloc_pages(struct snd_card *card, int type, struct device *dev,
+ 			  size_t size, struct snd_dma_buffer *dmab)
+ {
+ 	int err;
+ 
++	/* check and reserve the requested size */
++	mutex_lock(&card->memory_mutex);
+ 	if (max_alloc_per_card &&
+-	    card->total_pcm_alloc_bytes + size > max_alloc_per_card)
++	    card->total_pcm_alloc_bytes + size > max_alloc_per_card) {
++		mutex_unlock(&card->memory_mutex);
+ 		return -ENOMEM;
++	}
++	__update_allocated_size(card, size);
++	mutex_unlock(&card->memory_mutex);
+ 
+ 	err = snd_dma_alloc_pages(type, dev, size, dmab);
+ 	if (!err) {
+-		mutex_lock(&card->memory_mutex);
+-		card->total_pcm_alloc_bytes += dmab->bytes;
+-		mutex_unlock(&card->memory_mutex);
++		/* the actual allocation size might be bigger than requested,
++		 * and we need to correct the account
++		 */
++		if (dmab->bytes != size)
++			update_allocated_size(card, dmab->bytes - size);
++	} else {
++		/* take back on allocation failure */
++		decrease_allocated_size(card, size);
+ 	}
+ 	return err;
+ }
+@@ -53,10 +84,7 @@ static void do_free_pages(struct snd_card *card, struct snd_dma_buffer *dmab)
+ {
+ 	if (!dmab->area)
+ 		return;
+-	mutex_lock(&card->memory_mutex);
+-	WARN_ON(card->total_pcm_alloc_bytes < dmab->bytes);
+-	card->total_pcm_alloc_bytes -= dmab->bytes;
+-	mutex_unlock(&card->memory_mutex);
++	decrease_allocated_size(card, dmab->bytes);
+ 	snd_dma_free_pages(dmab);
+ 	dmab->area = NULL;
+ }
+diff --git a/sound/soc/codecs/rt711-sdw.h b/sound/soc/codecs/rt711-sdw.h
+index 43b2b984b29cb..6acf9858330d7 100644
+--- a/sound/soc/codecs/rt711-sdw.h
++++ b/sound/soc/codecs/rt711-sdw.h
+@@ -267,7 +267,9 @@ static const struct reg_default rt711_reg_defaults[] = {
+ 	{ 0x8393, 0x00 },
+ 	{ 0x7319, 0x00 },
+ 	{ 0x8399, 0x00 },
++	{ 0x752008, 0xa807 },
+ 	{ 0x752009, 0x1029 },
++	{ 0x75200b, 0x7770 },
+ 	{ 0x752011, 0x007a },
+ 	{ 0x75201a, 0x8003 },
+ 	{ 0x752045, 0x5289 },
+diff --git a/sound/soc/codecs/rt711.c b/sound/soc/codecs/rt711.c
+index 9bdcc7872053b..0e343ad205fc1 100644
+--- a/sound/soc/codecs/rt711.c
++++ b/sound/soc/codecs/rt711.c
+@@ -389,6 +389,36 @@ static void rt711_jack_init(struct rt711_priv *rt711)
+ 				RT711_HP_JD_FINAL_RESULT_CTL_JD12,
+ 				RT711_HP_JD_FINAL_RESULT_CTL_JD12);
+ 			break;
++		case RT711_JD2_100K:
++			rt711_index_update_bits(rt711->regmap, RT711_VENDOR_REG,
++				RT711_JD_CTL2, RT711_JD2_2PORT_100K_DECODE | RT711_JD2_1PORT_TYPE_DECODE |
++				RT711_HP_JD_SEL_JD2 | RT711_JD1_2PORT_TYPE_100K_DECODE,
++				RT711_JD2_2PORT_100K_DECODE_HP | RT711_JD2_1PORT_JD_HP |
++				RT711_HP_JD_SEL_JD2 | RT711_JD1_2PORT_JD_RESERVED);
++			rt711_index_update_bits(rt711->regmap, RT711_VENDOR_REG,
++				RT711_CC_DET1,
++				RT711_HP_JD_FINAL_RESULT_CTL_JD12,
++				RT711_HP_JD_FINAL_RESULT_CTL_JD12);
++			break;
++		case RT711_JD2_1P8V_1PORT:
++			rt711_index_update_bits(rt711->regmap, RT711_VENDOR_REG,
++				RT711_JD_CTL1, RT711_JD2_DIGITAL_JD_MODE_SEL,
++				RT711_JD2_1_JD_MODE);
++			rt711_index_update_bits(rt711->regmap, RT711_VENDOR_REG,
++				RT711_JD_CTL2, RT711_JD2_1PORT_TYPE_DECODE |
++				RT711_HP_JD_SEL_JD2,
++				RT711_JD2_1PORT_JD_HP |
++				RT711_HP_JD_SEL_JD2);
++			rt711_index_update_bits(rt711->regmap, RT711_VENDOR_REG,
++				RT711_JD_CTL4, RT711_JD2_PAD_PULL_UP_MASK |
++				RT711_JD2_MODE_SEL_MASK,
++				RT711_JD2_PAD_PULL_UP |
++				RT711_JD2_MODE2_1P8V_1PORT);
++			rt711_index_update_bits(rt711->regmap, RT711_VENDOR_REG,
++				RT711_CC_DET1,
++				RT711_HP_JD_FINAL_RESULT_CTL_JD12,
++				RT711_HP_JD_FINAL_RESULT_CTL_JD12);
++			break;
+ 		default:
+ 			dev_warn(rt711->component->dev, "Wrong JD source\n");
+ 			break;
+diff --git a/sound/soc/codecs/rt711.h b/sound/soc/codecs/rt711.h
+index ca0f581feec78..5f2ba1341085f 100644
+--- a/sound/soc/codecs/rt711.h
++++ b/sound/soc/codecs/rt711.h
+@@ -52,7 +52,9 @@ struct sdw_stream_data {
+ 
+ /* Index (NID:20h) */
+ #define RT711_DAC_DC_CALI_CTL1				0x00
++#define RT711_JD_CTL1				0x08
+ #define RT711_JD_CTL2				0x09
++#define RT711_JD_CTL4				0x0b
+ #define RT711_CC_DET1				0x11
+ #define RT711_PARA_VERB_CTL				0x1a
+ #define RT711_COMBO_JACK_AUTO_CTL1				0x45
+@@ -171,10 +173,33 @@ struct sdw_stream_data {
+ /* DAC DC offset calibration control-1 (0x00)(NID:20h) */
+ #define RT711_DAC_DC_CALI_TRIGGER (0x1 << 15)
+ 
++/* jack detect control 1 (0x08)(NID:20h) */
++#define RT711_JD2_DIGITAL_JD_MODE_SEL (0x1 << 1)
++#define RT711_JD2_1_JD_MODE (0x0 << 1)
++#define RT711_JD2_2_JD_MODE (0x1 << 1)
++
+ /* jack detect control 2 (0x09)(NID:20h) */
+ #define RT711_JD2_2PORT_200K_DECODE_HP (0x1 << 13)
++#define RT711_JD2_2PORT_100K_DECODE (0x1 << 12)
++#define RT711_JD2_2PORT_100K_DECODE_HP (0x0 << 12)
+ #define RT711_HP_JD_SEL_JD1 (0x0 << 1)
+ #define RT711_HP_JD_SEL_JD2 (0x1 << 1)
++#define RT711_JD2_1PORT_TYPE_DECODE (0x3 << 10)
++#define RT711_JD2_1PORT_JD_LINE2 (0x0 << 10)
++#define RT711_JD2_1PORT_JD_HP (0x1 << 10)
++#define RT711_JD2_1PORT_JD_LINE1 (0x2 << 10)
++#define RT711_JD1_2PORT_TYPE_100K_DECODE (0x1 << 0)
++#define RT711_JD1_2PORT_JD_RESERVED (0x0 << 0)
++#define RT711_JD1_2PORT_JD_LINE1 (0x1 << 0)
++
++/* jack detect control 4 (0x0b)(NID:20h) */
++#define RT711_JD2_PAD_PULL_UP_MASK (0x1 << 3)
++#define RT711_JD2_PAD_NOT_PULL_UP (0x0 << 3)
++#define RT711_JD2_PAD_PULL_UP (0x1 << 3)
++#define RT711_JD2_MODE_SEL_MASK (0x3 << 0)
++#define RT711_JD2_MODE0_2PORT (0x0 << 0)
++#define RT711_JD2_MODE1_3P3V_1PORT (0x1 << 0)
++#define RT711_JD2_MODE2_1P8V_1PORT (0x2 << 0)
+ 
+ /* CC DET1 (0x11)(NID:20h) */
+ #define RT711_HP_JD_FINAL_RESULT_CTL_JD12 (0x1 << 10)
+@@ -215,7 +240,9 @@ enum {
+ enum rt711_jd_src {
+ 	RT711_JD_NULL,
+ 	RT711_JD1,
+-	RT711_JD2
++	RT711_JD2,
++	RT711_JD2_100K,
++	RT711_JD2_1P8V_1PORT
+ };
+ 
+ int rt711_io_init(struct device *dev, struct sdw_slave *slave);
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index cbbb50ddc7954..f36a0fda1b6ae 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -13,8 +13,9 @@
+ #include <sound/soc.h>
+ #include <sound/soc-acpi.h>
+ #include "sof_sdw_common.h"
++#include "../../codecs/rt711.h"
+ 
+-unsigned long sof_sdw_quirk = SOF_RT711_JD_SRC_JD1;
++unsigned long sof_sdw_quirk = RT711_JD1;
+ static int quirk_override = -1;
+ module_param_named(quirk, quirk_override, int, 0444);
+ MODULE_PARM_DESC(quirk, "Board-specific quirk override");
+@@ -63,7 +64,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "09C6")
+ 		},
+-		.driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
++		.driver_data = (void *)(RT711_JD2 |
+ 					SOF_RT715_DAI_ID_FIX),
+ 	},
+ 	{
+@@ -73,7 +74,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0983")
+ 		},
+-		.driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
++		.driver_data = (void *)(RT711_JD2 |
+ 					SOF_RT715_DAI_ID_FIX),
+ 	},
+ 	{
+@@ -82,7 +83,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "098F"),
+ 		},
+-		.driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
++		.driver_data = (void *)(RT711_JD2 |
+ 					SOF_RT715_DAI_ID_FIX |
+ 					SOF_SDW_FOUR_SPK),
+ 	},
+@@ -92,7 +93,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0990"),
+ 		},
+-		.driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
++		.driver_data = (void *)(RT711_JD2 |
+ 					SOF_RT715_DAI_ID_FIX |
+ 					SOF_SDW_FOUR_SPK),
+ 	},
+@@ -114,7 +115,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 				  "Tiger Lake Client Platform"),
+ 		},
+ 		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
+-					SOF_RT711_JD_SRC_JD1 |
++					RT711_JD1 |
+ 					SOF_SDW_PCH_DMIC |
+ 					SOF_SSP_PORT(SOF_I2S_SSP2)),
+ 	},
+@@ -125,7 +126,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A3E")
+ 		},
+ 		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
+-					SOF_RT711_JD_SRC_JD2 |
++					RT711_JD2 |
+ 					SOF_RT715_DAI_ID_FIX),
+ 	},
+ 	{
+@@ -135,7 +136,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A5E")
+ 		},
+ 		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
+-					SOF_RT711_JD_SRC_JD2 |
++					RT711_JD2 |
+ 					SOF_RT715_DAI_ID_FIX |
+ 					SOF_SDW_FOUR_SPK),
+ 	},
+@@ -173,7 +174,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
+ 					SOF_SDW_PCH_DMIC |
+-					SOF_RT711_JD_SRC_JD2),
++					RT711_JD2),
+ 	},
+ 	/* TigerLake-SDCA devices */
+ 	{
+@@ -183,7 +184,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A32")
+ 		},
+ 		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
+-					SOF_RT711_JD_SRC_JD2 |
++					RT711_JD2 |
+ 					SOF_RT715_DAI_ID_FIX |
+ 					SOF_SDW_FOUR_SPK),
+ 	},
+@@ -194,7 +195,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Alder Lake Client Platform"),
+ 		},
+-		.driver_data = (void *)(SOF_RT711_JD_SRC_JD1 |
++		.driver_data = (void *)(RT711_JD1 |
+ 					SOF_SDW_TGL_HDMI |
+ 					SOF_RT715_DAI_ID_FIX |
+ 					SOF_SDW_PCH_DMIC),
+diff --git a/sound/soc/intel/boards/sof_sdw_common.h b/sound/soc/intel/boards/sof_sdw_common.h
+index ea60e8ed215c5..801600522c472 100644
+--- a/sound/soc/intel/boards/sof_sdw_common.h
++++ b/sound/soc/intel/boards/sof_sdw_common.h
+@@ -22,11 +22,6 @@
+ /* 8 combinations with 4 links + unused group 0 */
+ #define SDW_MAX_GROUPS 9
+ 
+-enum {
+-	SOF_RT711_JD_SRC_JD1 = 1,
+-	SOF_RT711_JD_SRC_JD2 = 2,
+-};
+-
+ enum {
+ 	SOF_PRE_TGL_HDMI_COUNT = 3,
+ 	SOF_TGL_HDMI_COUNT = 4,
+diff --git a/tools/objtool/arch.h b/tools/objtool/arch.h
+index 580ce18575857..75840291b3934 100644
+--- a/tools/objtool/arch.h
++++ b/tools/objtool/arch.h
+@@ -90,6 +90,7 @@ int arch_decode_hint_reg(u8 sp_reg, int *base);
+ 
+ bool arch_is_retpoline(struct symbol *sym);
+ bool arch_is_rethunk(struct symbol *sym);
++bool arch_is_embedded_insn(struct symbol *sym);
+ 
+ int arch_rewrite_retpolines(struct objtool_file *file);
+ 
+diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c
+index b6791e8d9ab3b..9d2af67672e7c 100644
+--- a/tools/objtool/arch/x86/decode.c
++++ b/tools/objtool/arch/x86/decode.c
+@@ -652,8 +652,11 @@ bool arch_is_retpoline(struct symbol *sym)
+ 
+ bool arch_is_rethunk(struct symbol *sym)
+ {
+-	return !strcmp(sym->name, "__x86_return_thunk") ||
+-	       !strcmp(sym->name, "srso_untrain_ret") ||
+-	       !strcmp(sym->name, "srso_safe_ret") ||
+-	       !strcmp(sym->name, "retbleed_return_thunk");
++	return !strcmp(sym->name, "__x86_return_thunk");
++}
++
++bool arch_is_embedded_insn(struct symbol *sym)
++{
++	return !strcmp(sym->name, "retbleed_return_thunk") ||
++	       !strcmp(sym->name, "srso_safe_ret");
+ }
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 965c055aa8088..bd24951faa094 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -946,16 +946,33 @@ static int add_ignore_alternatives(struct objtool_file *file)
+ 	return 0;
+ }
+ 
++/*
++ * Symbols that replace INSN_CALL_DYNAMIC, every (tail) call to such a symbol
++ * will be added to the .retpoline_sites section.
++ */
+ __weak bool arch_is_retpoline(struct symbol *sym)
+ {
+ 	return false;
+ }
+ 
++/*
++ * Symbols that replace INSN_RETURN, every (tail) call to such a symbol
++ * will be added to the .return_sites section.
++ */
+ __weak bool arch_is_rethunk(struct symbol *sym)
+ {
+ 	return false;
+ }
+ 
++/*
++ * Symbols that are embedded inside other instructions, because sometimes crazy
++ * code exists. These are mostly ignored for validation purposes.
++ */
++__weak bool arch_is_embedded_insn(struct symbol *sym)
++{
++	return false;
++}
++
+ #define NEGATIVE_RELOC	((void *)-1L)
+ 
+ static struct reloc *insn_reloc(struct objtool_file *file, struct instruction *insn)
+@@ -1172,7 +1189,7 @@ static int add_jump_destinations(struct objtool_file *file)
+ 			 * middle of another instruction.  Objtool only
+ 			 * knows about the outer instruction.
+ 			 */
+-			if (sym && sym->return_thunk) {
++			if (sym && sym->embedded_insn) {
+ 				add_return_call(file, insn, false);
+ 				continue;
+ 			}
+@@ -1971,6 +1988,9 @@ static int classify_symbols(struct objtool_file *file)
+ 			if (arch_is_rethunk(func))
+ 				func->return_thunk = true;
+ 
++			if (arch_is_embedded_insn(func))
++				func->embedded_insn = true;
++
+ 			if (!strcmp(func->name, "__fentry__"))
+ 				func->fentry = true;
+ 
+diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h
+index a1863eb35fbbc..19446d9112443 100644
+--- a/tools/objtool/elf.h
++++ b/tools/objtool/elf.h
+@@ -61,6 +61,7 @@ struct symbol {
+ 	u8 return_thunk      : 1;
+ 	u8 fentry            : 1;
+ 	u8 kcov              : 1;
++	u8 embedded_insn     : 1;
+ };
+ 
+ struct reloc {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-09-02  9:59 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-09-02  9:59 UTC (permalink / raw
  To: gentoo-commits

commit:     8cc75f539068bd1a5718dec6606f5c6826a7cf7f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep  2 09:58:44 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep  2 09:58:44 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8cc75f53

Linux patch 5.10.194

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 +
 1193_linux-5.10.194.patch | 280 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 284 insertions(+)

diff --git a/0000_README b/0000_README
index 9cc075fd..9db6312d 100644
--- a/0000_README
+++ b/0000_README
@@ -815,6 +815,10 @@ Patch:  1192_linux-5.10.193.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.193
 
+Patch:  1193_linux-5.10.194.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.194
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1193_linux-5.10.194.patch b/1193_linux-5.10.194.patch
new file mode 100644
index 00000000..e21befbf
--- /dev/null
+++ b/1193_linux-5.10.194.patch
@@ -0,0 +1,280 @@
+diff --git a/Makefile b/Makefile
+index 0423b4b2b000f..9ec2fb0d08f01 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 193
++SUBLEVEL = 194
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/kernel/module-plts.c b/arch/arm/kernel/module-plts.c
+index 1fc309b41f944..8d809724cde52 100644
+--- a/arch/arm/kernel/module-plts.c
++++ b/arch/arm/kernel/module-plts.c
+@@ -256,7 +256,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
+ 		/* sort by type and symbol index */
+ 		sort(rels, numrels, sizeof(Elf32_Rel), cmp_rel, NULL);
+ 
+-		if (strncmp(secstrings + dstsec->sh_name, ".init", 5) != 0)
++		if (!module_init_layout_section(secstrings + dstsec->sh_name))
+ 			core_plts += count_plts(syms, dstsec->sh_addr, rels,
+ 						numrels, s->sh_info);
+ 		else
+diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
+index 2e224435c0249..29569284f4016 100644
+--- a/arch/arm64/kernel/module-plts.c
++++ b/arch/arm64/kernel/module-plts.c
+@@ -7,6 +7,7 @@
+ #include <linux/ftrace.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
++#include <linux/moduleloader.h>
+ #include <linux/sort.h>
+ 
+ static struct plt_entry __get_adrp_add_pair(u64 dst, u64 pc,
+@@ -342,7 +343,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
+ 		if (nents)
+ 			sort(rels, nents, sizeof(Elf64_Rela), cmp_rela, NULL);
+ 
+-		if (!str_has_prefix(secstrings + dstsec->sh_name, ".init"))
++		if (!module_init_layout_section(secstrings + dstsec->sh_name))
+ 			core_plts += count_plts(syms, rels, numrels,
+ 						sechdrs[i].sh_info, dstsec);
+ 		else
+diff --git a/arch/mips/alchemy/common/dbdma.c b/arch/mips/alchemy/common/dbdma.c
+index e9ee9ab90a0c6..4ca2c28878e0f 100644
+--- a/arch/mips/alchemy/common/dbdma.c
++++ b/arch/mips/alchemy/common/dbdma.c
+@@ -30,7 +30,6 @@
+  *
+  */
+ 
+-#include <linux/dma-map-ops.h> /* for dma_default_coherent */
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/slab.h>
+@@ -624,18 +623,17 @@ u32 au1xxx_dbdma_put_source(u32 chanid, dma_addr_t buf, int nbytes, u32 flags)
+ 		dp->dscr_cmd0 &= ~DSCR_CMD0_IE;
+ 
+ 	/*
+-	 * There is an erratum on certain Au1200/Au1550 revisions that could
+-	 * result in "stale" data being DMA'ed. It has to do with the snoop
+-	 * logic on the cache eviction buffer.  dma_default_coherent is set
+-	 * to false on these parts.
++	 * There is an errata on the Au1200/Au1550 parts that could result
++	 * in "stale" data being DMA'ed. It has to do with the snoop logic on
++	 * the cache eviction buffer.  DMA_NONCOHERENT is on by default for
++	 * these parts. If it is fixed in the future, these dma_cache_inv will
++	 * just be nothing more than empty macros. See io.h.
+ 	 */
+-	if (!dma_default_coherent)
+-		dma_cache_wback_inv(KSEG0ADDR(buf), nbytes);
++	dma_cache_wback_inv((unsigned long)buf, nbytes);
+ 	dp->dscr_cmd0 |= DSCR_CMD0_V;	/* Let it rip */
+ 	wmb(); /* drain writebuffer */
+ 	dma_cache_wback_inv((unsigned long)dp, sizeof(*dp));
+ 	ctp->chan_ptr->ddma_dbell = 0;
+-	wmb(); /* force doorbell write out to dma engine */
+ 
+ 	/* Get next descriptor pointer. */
+ 	ctp->put_ptr = phys_to_virt(DSCR_GET_NXTPTR(dp->dscr_nxtptr));
+@@ -687,18 +685,17 @@ u32 au1xxx_dbdma_put_dest(u32 chanid, dma_addr_t buf, int nbytes, u32 flags)
+ 			  dp->dscr_source1, dp->dscr_dest0, dp->dscr_dest1);
+ #endif
+ 	/*
+-	 * There is an erratum on certain Au1200/Au1550 revisions that could
+-	 * result in "stale" data being DMA'ed. It has to do with the snoop
+-	 * logic on the cache eviction buffer.  dma_default_coherent is set
+-	 * to false on these parts.
++	 * There is an errata on the Au1200/Au1550 parts that could result in
++	 * "stale" data being DMA'ed. It has to do with the snoop logic on the
++	 * cache eviction buffer.  DMA_NONCOHERENT is on by default for these
++	 * parts. If it is fixed in the future, these dma_cache_inv will just
++	 * be nothing more than empty macros. See io.h.
+ 	 */
+-	if (!dma_default_coherent)
+-		dma_cache_inv(KSEG0ADDR(buf), nbytes);
++	dma_cache_inv((unsigned long)buf, nbytes);
+ 	dp->dscr_cmd0 |= DSCR_CMD0_V;	/* Let it rip */
+ 	wmb(); /* drain writebuffer */
+ 	dma_cache_wback_inv((unsigned long)dp, sizeof(*dp));
+ 	ctp->chan_ptr->ddma_dbell = 0;
+-	wmb(); /* force doorbell write out to dma engine */
+ 
+ 	/* Get next descriptor pointer. */
+ 	ctp->put_ptr = phys_to_virt(DSCR_GET_NXTPTR(dp->dscr_nxtptr));
+diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
+index e3df838c3c80e..f5bee76ea0618 100644
+--- a/drivers/bus/mhi/host/pci_generic.c
++++ b/drivers/bus/mhi/host/pci_generic.c
+@@ -273,7 +273,7 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	mhi_cntrl_config = info->config;
+ 	mhi_cntrl->cntrl_dev = &pdev->dev;
+ 	mhi_cntrl->iova_start = 0;
+-	mhi_cntrl->iova_stop = DMA_BIT_MASK(info->dma_data_width);
++	mhi_cntrl->iova_stop = (dma_addr_t)DMA_BIT_MASK(info->dma_data_width);
+ 	mhi_cntrl->fw_image = info->fw;
+ 	mhi_cntrl->edl_image = info->edl;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 3b4724d60868f..8445bb7ae06ab 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -2155,7 +2155,6 @@ struct amdgpu_bo_va *amdgpu_vm_bo_add(struct amdgpu_device *adev,
+ 	amdgpu_vm_bo_base_init(&bo_va->base, vm, bo);
+ 
+ 	bo_va->ref_count = 1;
+-	bo_va->last_pt_update = dma_fence_get_stub();
+ 	INIT_LIST_HEAD(&bo_va->valids);
+ 	INIT_LIST_HEAD(&bo_va->invalids);
+ 
+@@ -2868,8 +2867,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 		vm->update_funcs = &amdgpu_vm_cpu_funcs;
+ 	else
+ 		vm->update_funcs = &amdgpu_vm_sdma_funcs;
+-
+-	vm->last_update = dma_fence_get_stub();
++	vm->last_update = NULL;
+ 	vm->last_unlocked = dma_fence_get_stub();
+ 
+ 	mutex_init(&vm->eviction_lock);
+@@ -3044,7 +3042,7 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 		vm->update_funcs = &amdgpu_vm_sdma_funcs;
+ 	}
+ 	dma_fence_put(vm->last_update);
+-	vm->last_update = dma_fence_get_stub();
++	vm->last_update = NULL;
+ 	vm->is_compute_context = true;
+ 
+ 	if (vm->pasid) {
+diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h
+index 4fa67a8b22652..03476e205b7e3 100644
+--- a/include/linux/moduleloader.h
++++ b/include/linux/moduleloader.h
+@@ -39,6 +39,11 @@ bool module_init_section(const char *name);
+  */
+ bool module_exit_section(const char *name);
+ 
++/* Describes whether within_module_init() will consider this an init section
++ * or not. This behaviour changes with CONFIG_MODULE_UNLOAD.
++ */
++bool module_init_layout_section(const char *sname);
++
+ /*
+  * Apply the given relocation to the (simplified) ELF.  Return -error
+  * or 0.
+diff --git a/kernel/module.c b/kernel/module.c
+index 33d1dc6d4cd6a..82af3ef79135a 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -2280,7 +2280,7 @@ void *__symbol_get(const char *symbol)
+ }
+ EXPORT_SYMBOL_GPL(__symbol_get);
+ 
+-static bool module_init_layout_section(const char *sname)
++bool module_init_layout_section(const char *sname)
+ {
+ #ifndef CONFIG_MODULE_UNLOAD
+ 	if (module_exit_section(sname))
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 23101ebbbe1ef..c5624ab0580c5 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -874,7 +874,7 @@ reset_ipi:
+ static bool trc_inspect_reader(struct task_struct *t, void *arg)
+ {
+ 	int cpu = task_cpu(t);
+-	bool in_qs = false;
++	int nesting;
+ 	bool ofl = cpu_is_offline(cpu);
+ 
+ 	if (task_curr(t)) {
+@@ -894,18 +894,18 @@ static bool trc_inspect_reader(struct task_struct *t, void *arg)
+ 		n_heavy_reader_updates++;
+ 		if (ofl)
+ 			n_heavy_reader_ofl_updates++;
+-		in_qs = true;
++		nesting = 0;
+ 	} else {
+ 		// The task is not running, so C-language access is safe.
+-		in_qs = likely(!t->trc_reader_nesting);
++		nesting = t->trc_reader_nesting;
+ 	}
+ 
+-	// Mark as checked so that the grace-period kthread will
+-	// remove it from the holdout list.
+-	t->trc_reader_checked = true;
+-
+-	if (in_qs)
+-		return true;  // Already in quiescent state, done!!!
++	// If not exiting a read-side critical section, mark as checked
++	// so that the grace-period kthread will remove it from the
++	// holdout list.
++	t->trc_reader_checked = nesting >= 0;
++	if (nesting <= 0)
++		return !nesting;  // If in QS, done, otherwise try again later.
+ 
+ 	// The task is in a read-side critical section, so set up its
+ 	// state so that it will awaken the grace-period kthread upon exit
+@@ -958,9 +958,11 @@ static void trc_wait_for_one_reader(struct task_struct *t,
+ 		if (smp_call_function_single(cpu, trc_read_check_handler, t, 0)) {
+ 			// Just in case there is some other reason for
+ 			// failure than the target CPU being offline.
++			WARN_ONCE(1, "%s():  smp_call_function_single() failed for CPU: %d\n",
++				  __func__, cpu);
+ 			rcu_tasks_trace.n_ipis_fails++;
+ 			per_cpu(trc_ipi_to_cpu, cpu) = false;
+-			t->trc_ipi_to_cpu = cpu;
++			t->trc_ipi_to_cpu = -1;
+ 		}
+ 	}
+ }
+@@ -1081,14 +1083,28 @@ static void check_all_holdout_tasks_trace(struct list_head *hop,
+ 	}
+ }
+ 
++static void rcu_tasks_trace_empty_fn(void *unused)
++{
++}
++
+ /* Wait for grace period to complete and provide ordering. */
+ static void rcu_tasks_trace_postgp(struct rcu_tasks *rtp)
+ {
++	int cpu;
+ 	bool firstreport;
+ 	struct task_struct *g, *t;
+ 	LIST_HEAD(holdouts);
+ 	long ret;
+ 
++	// Wait for any lingering IPI handlers to complete.  Note that
++	// if a CPU has gone offline or transitioned to userspace in the
++	// meantime, all IPI handlers should have been drained beforehand.
++	// Yes, this assumes that CPUs process IPIs in order.  If that ever
++	// changes, there will need to be a recheck and/or timed wait.
++	for_each_online_cpu(cpu)
++		if (smp_load_acquire(per_cpu_ptr(&trc_ipi_to_cpu, cpu)))
++			smp_call_function_single(cpu, rcu_tasks_trace_empty_fn, NULL, 1);
++
+ 	// Remove the safety count.
+ 	smp_mb__before_atomic();  // Order vs. earlier atomics
+ 	atomic_dec(&trc_n_readers_need_end);
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index 401c1f331cafa..07a284a18645a 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -507,7 +507,10 @@ static void synchronize_rcu_expedited_wait(void)
+ 				if (rdp->rcu_forced_tick_exp)
+ 					continue;
+ 				rdp->rcu_forced_tick_exp = true;
+-				tick_dep_set_cpu(cpu, TICK_DEP_BIT_RCU_EXP);
++				preempt_disable();
++				if (cpu_online(cpu))
++					tick_dep_set_cpu(cpu, TICK_DEP_BIT_RCU_EXP);
++				preempt_enable();
+ 			}
+ 		}
+ 		j = READ_ONCE(jiffies_till_first_fqs);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-09-19 13:22 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-09-19 13:22 UTC (permalink / raw
  To: gentoo-commits

commit:     3169c2140726510ff42efb118d1553cd43325716
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 19 13:22:13 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Sep 19 13:22:13 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3169c214

Linux patch 5.10.195

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1194_linux-5.10.195.patch | 17174 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 17178 insertions(+)

diff --git a/0000_README b/0000_README
index 9db6312d..08e7549f 100644
--- a/0000_README
+++ b/0000_README
@@ -819,6 +819,10 @@ Patch:  1193_linux-5.10.194.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.194
 
+Patch:  1194_linux-5.10.195.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.195
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1194_linux-5.10.195.patch b/1194_linux-5.10.195.patch
new file mode 100644
index 00000000..1011a3b4
--- /dev/null
+++ b/1194_linux-5.10.195.patch
@@ -0,0 +1,17174 @@
+diff --git a/Documentation/devicetree/bindings/clock/xlnx,versal-clk.yaml b/Documentation/devicetree/bindings/clock/xlnx,versal-clk.yaml
+index 229af98b1d305..7cd88bc3a67d7 100644
+--- a/Documentation/devicetree/bindings/clock/xlnx,versal-clk.yaml
++++ b/Documentation/devicetree/bindings/clock/xlnx,versal-clk.yaml
+@@ -16,8 +16,6 @@ description: |
+   reads required input clock frequencies from the devicetree and acts as clock
+   provider for all clock consumers of PS clocks.
+ 
+-select: false
+-
+ properties:
+   compatible:
+     const: xlnx,versal-clk
+diff --git a/Documentation/scsi/scsi_mid_low_api.rst b/Documentation/scsi/scsi_mid_low_api.rst
+index 5bc17d012b256..d65c902b69ea3 100644
+--- a/Documentation/scsi/scsi_mid_low_api.rst
++++ b/Documentation/scsi/scsi_mid_low_api.rst
+@@ -1195,11 +1195,11 @@ Members of interest:
+ 		 - pointer to scsi_device object that this command is
+                    associated with.
+     resid
+-		 - an LLD should set this signed integer to the requested
++		 - an LLD should set this unsigned integer to the requested
+                    transfer length (i.e. 'request_bufflen') less the number
+                    of bytes that are actually transferred. 'resid' is
+                    preset to 0 so an LLD can ignore it if it cannot detect
+-                   underruns (overruns should be rare). If possible an LLD
++                   underruns (overruns should not be reported). An LLD
+                    should set 'resid' prior to invoking 'done'. The most
+                    interesting case is data transfers from a SCSI target
+                    device (e.g. READs) that underrun.
+diff --git a/Makefile b/Makefile
+index 9ec2fb0d08f01..006700fbb6525 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 194
++SUBLEVEL = 195
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/bcm4708-linksys-ea6500-v2.dts b/arch/arm/boot/dts/bcm4708-linksys-ea6500-v2.dts
+index cd797b4202ad8..01c48faabfade 100644
+--- a/arch/arm/boot/dts/bcm4708-linksys-ea6500-v2.dts
++++ b/arch/arm/boot/dts/bcm4708-linksys-ea6500-v2.dts
+@@ -19,7 +19,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x00000000 0x08000000>;
++		reg = <0x00000000 0x08000000>,
++		      <0x88000000 0x08000000>;
+ 	};
+ 
+ 	gpio-keys {
+diff --git a/arch/arm/boot/dts/bcm47189-luxul-xap-1440.dts b/arch/arm/boot/dts/bcm47189-luxul-xap-1440.dts
+index 57ca1cfaecd8e..00e688b45d981 100644
+--- a/arch/arm/boot/dts/bcm47189-luxul-xap-1440.dts
++++ b/arch/arm/boot/dts/bcm47189-luxul-xap-1440.dts
+@@ -46,3 +46,16 @@
+ 		};
+ 	};
+ };
++
++&gmac0 {
++	phy-mode = "rgmii";
++	phy-handle = <&bcm54210e>;
++
++	mdio {
++		/delete-node/ switch@1e;
++
++		bcm54210e: ethernet-phy@0 {
++			reg = <0>;
++		};
++	};
++};
+diff --git a/arch/arm/boot/dts/bcm47189-luxul-xap-810.dts b/arch/arm/boot/dts/bcm47189-luxul-xap-810.dts
+index 2e1a7e382cb7a..78c80a5d3f4fa 100644
+--- a/arch/arm/boot/dts/bcm47189-luxul-xap-810.dts
++++ b/arch/arm/boot/dts/bcm47189-luxul-xap-810.dts
+@@ -83,3 +83,16 @@
+ 		};
+ 	};
+ };
++
++&gmac0 {
++	phy-mode = "rgmii";
++	phy-handle = <&bcm54210e>;
++
++	mdio {
++		/delete-node/ switch@1e;
++
++		bcm54210e: ethernet-phy@0 {
++			reg = <0>;
++		};
++	};
++};
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index 4e9bb10f37d0f..9189a9489464b 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -267,7 +267,7 @@
+ 
+ 			interrupt-parent = <&gic>;
+ 
+-			ehci: ehci@21000 {
++			ehci: usb@21000 {
+ 				#usb-cells = <0>;
+ 
+ 				compatible = "generic-ehci";
+@@ -289,7 +289,7 @@
+ 				};
+ 			};
+ 
+-			ohci: ohci@22000 {
++			ohci: usb@22000 {
+ 				#usb-cells = <0>;
+ 
+ 				compatible = "generic-ohci";
+diff --git a/arch/arm/boot/dts/bcm53573.dtsi b/arch/arm/boot/dts/bcm53573.dtsi
+index 4af8e3293cff4..eed1a6147f0bf 100644
+--- a/arch/arm/boot/dts/bcm53573.dtsi
++++ b/arch/arm/boot/dts/bcm53573.dtsi
+@@ -127,6 +127,9 @@
+ 
+ 		pcie0: pcie@2000 {
+ 			reg = <0x00002000 0x1000>;
++
++			#address-cells = <3>;
++			#size-cells = <2>;
+ 		};
+ 
+ 		usb2: usb2@4000 {
+@@ -135,7 +138,7 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+ 
+-			ehci: ehci@4000 {
++			ehci: usb@4000 {
+ 				compatible = "generic-ehci";
+ 				reg = <0x4000 0x1000>;
+ 				interrupt-parent = <&gic>;
+@@ -155,9 +158,7 @@
+ 				};
+ 			};
+ 
+-			ohci: ohci@d000 {
+-				#usb-cells = <0>;
+-
++			ohci: usb@d000 {
+ 				compatible = "generic-ohci";
+ 				reg = <0xd000 0x1000>;
+ 				interrupt-parent = <&gic>;
+@@ -180,6 +181,24 @@
+ 
+ 		gmac0: ethernet@5000 {
+ 			reg = <0x5000 0x1000>;
++
++			mdio {
++				#address-cells = <1>;
++				#size-cells = <0>;
++
++				switch: switch@1e {
++					compatible = "brcm,bcm53125";
++					reg = <0x1e>;
++
++					status = "disabled";
++
++					/* ports are defined in board DTS */
++					ports {
++						#address-cells = <1>;
++						#size-cells = <0>;
++					};
++				};
++			};
+ 		};
+ 
+ 		gmac1: ethernet@b000 {
+diff --git a/arch/arm/boot/dts/bcm947189acdbmr.dts b/arch/arm/boot/dts/bcm947189acdbmr.dts
+index b0b8c774a37f9..1f0be30e54435 100644
+--- a/arch/arm/boot/dts/bcm947189acdbmr.dts
++++ b/arch/arm/boot/dts/bcm947189acdbmr.dts
+@@ -60,9 +60,9 @@
+ 	spi {
+ 		compatible = "spi-gpio";
+ 		num-chipselects = <1>;
+-		gpio-sck = <&chipcommon 21 0>;
+-		gpio-miso = <&chipcommon 22 0>;
+-		gpio-mosi = <&chipcommon 23 0>;
++		sck-gpios = <&chipcommon 21 0>;
++		miso-gpios = <&chipcommon 22 0>;
++		mosi-gpios = <&chipcommon 23 0>;
+ 		cs-gpios = <&chipcommon 24 0>;
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm/boot/dts/exynos4210-i9100.dts b/arch/arm/boot/dts/exynos4210-i9100.dts
+index ecc9d4dc707e4..d186b93144e38 100644
+--- a/arch/arm/boot/dts/exynos4210-i9100.dts
++++ b/arch/arm/boot/dts/exynos4210-i9100.dts
+@@ -170,8 +170,8 @@
+ 			power-on-delay = <10>;
+ 			reset-delay = <10>;
+ 
+-			panel-width-mm = <90>;
+-			panel-height-mm = <154>;
++			panel-width-mm = <56>;
++			panel-height-mm = <93>;
+ 
+ 			display-timings {
+ 				timing {
+diff --git a/arch/arm/boot/dts/imx23.dtsi b/arch/arm/boot/dts/imx23.dtsi
+index 7f4c602454a5f..ce3d6360a7efb 100644
+--- a/arch/arm/boot/dts/imx23.dtsi
++++ b/arch/arm/boot/dts/imx23.dtsi
+@@ -59,7 +59,7 @@
+ 				reg = <0x80000000 0x2000>;
+ 			};
+ 
+-			dma_apbh: dma-apbh@80004000 {
++			dma_apbh: dma-controller@80004000 {
+ 				compatible = "fsl,imx23-dma-apbh";
+ 				reg = <0x80004000 0x2000>;
+ 				interrupts = <0 14 20 0
+diff --git a/arch/arm/boot/dts/imx25.dtsi b/arch/arm/boot/dts/imx25.dtsi
+index 1ab19f1268f81..d24b1da18766b 100644
+--- a/arch/arm/boot/dts/imx25.dtsi
++++ b/arch/arm/boot/dts/imx25.dtsi
+@@ -515,7 +515,7 @@
+ 				#interrupt-cells = <2>;
+ 			};
+ 
+-			sdma: sdma@53fd4000 {
++			sdma: dma-controller@53fd4000 {
+ 				compatible = "fsl,imx25-sdma";
+ 				reg = <0x53fd4000 0x4000>;
+ 				clocks = <&clks 112>, <&clks 68>;
+diff --git a/arch/arm/boot/dts/imx28.dtsi b/arch/arm/boot/dts/imx28.dtsi
+index 94dfbf5b3f34a..6cab8b66db805 100644
+--- a/arch/arm/boot/dts/imx28.dtsi
++++ b/arch/arm/boot/dts/imx28.dtsi
+@@ -78,7 +78,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			dma_apbh: dma-apbh@80004000 {
++			dma_apbh: dma-controller@80004000 {
+ 				compatible = "fsl,imx28-dma-apbh";
+ 				reg = <0x80004000 0x2000>;
+ 				interrupts = <82 83 84 85
+diff --git a/arch/arm/boot/dts/imx31.dtsi b/arch/arm/boot/dts/imx31.dtsi
+index 45333f7e10eaa..0ee7e5547beaf 100644
+--- a/arch/arm/boot/dts/imx31.dtsi
++++ b/arch/arm/boot/dts/imx31.dtsi
+@@ -297,7 +297,7 @@
+ 				#interrupt-cells = <2>;
+ 			};
+ 
+-			sdma: sdma@53fd4000 {
++			sdma: dma-controller@53fd4000 {
+ 				compatible = "fsl,imx31-sdma";
+ 				reg = <0x53fd4000 0x4000>;
+ 				interrupts = <34>;
+diff --git a/arch/arm/boot/dts/imx35.dtsi b/arch/arm/boot/dts/imx35.dtsi
+index aba16252faab4..e7c4803ef866e 100644
+--- a/arch/arm/boot/dts/imx35.dtsi
++++ b/arch/arm/boot/dts/imx35.dtsi
+@@ -284,7 +284,7 @@
+ 				#interrupt-cells = <2>;
+ 			};
+ 
+-			sdma: sdma@53fd4000 {
++			sdma: dma-controller@53fd4000 {
+ 				compatible = "fsl,imx35-sdma";
+ 				reg = <0x53fd4000 0x4000>;
+ 				clocks = <&clks 9>, <&clks 65>;
+diff --git a/arch/arm/boot/dts/imx50.dtsi b/arch/arm/boot/dts/imx50.dtsi
+index b6b2e6af9b966..ae51333c30b32 100644
+--- a/arch/arm/boot/dts/imx50.dtsi
++++ b/arch/arm/boot/dts/imx50.dtsi
+@@ -421,7 +421,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			sdma: sdma@63fb0000 {
++			sdma: dma-controller@63fb0000 {
+ 				compatible = "fsl,imx50-sdma", "fsl,imx35-sdma";
+ 				reg = <0x63fb0000 0x4000>;
+ 				interrupts = <6>;
+diff --git a/arch/arm/boot/dts/imx51.dtsi b/arch/arm/boot/dts/imx51.dtsi
+index 985e1be03ad64..d80063560a657 100644
+--- a/arch/arm/boot/dts/imx51.dtsi
++++ b/arch/arm/boot/dts/imx51.dtsi
+@@ -498,7 +498,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			sdma: sdma@83fb0000 {
++			sdma: dma-controller@83fb0000 {
+ 				compatible = "fsl,imx51-sdma", "fsl,imx35-sdma";
+ 				reg = <0x83fb0000 0x4000>;
+ 				interrupts = <6>;
+diff --git a/arch/arm/boot/dts/imx53.dtsi b/arch/arm/boot/dts/imx53.dtsi
+index 500eeaa3a27c5..f4e7f437ea506 100644
+--- a/arch/arm/boot/dts/imx53.dtsi
++++ b/arch/arm/boot/dts/imx53.dtsi
+@@ -710,7 +710,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			sdma: sdma@63fb0000 {
++			sdma: dma-controller@63fb0000 {
+ 				compatible = "fsl,imx53-sdma", "fsl,imx35-sdma";
+ 				reg = <0x63fb0000 0x4000>;
+ 				interrupts = <6>;
+diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
+index 7d81992dfafc6..e67d8dffba35c 100644
+--- a/arch/arm/boot/dts/imx6qdl.dtsi
++++ b/arch/arm/boot/dts/imx6qdl.dtsi
+@@ -150,7 +150,7 @@
+ 		interrupt-parent = <&gpc>;
+ 		ranges;
+ 
+-		dma_apbh: dma-apbh@110000 {
++		dma_apbh: dma-controller@110000 {
+ 			compatible = "fsl,imx6q-dma-apbh", "fsl,imx28-dma-apbh";
+ 			reg = <0x00110000 0x2000>;
+ 			interrupts = <0 13 IRQ_TYPE_LEVEL_HIGH>,
+@@ -927,7 +927,7 @@
+ 				interrupts = <0 125 IRQ_TYPE_LEVEL_HIGH>;
+ 			};
+ 
+-			sdma: sdma@20ec000 {
++			sdma: dma-controller@20ec000 {
+ 				compatible = "fsl,imx6q-sdma", "fsl,imx35-sdma";
+ 				reg = <0x020ec000 0x4000>;
+ 				interrupts = <0 2 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/imx6sl.dtsi b/arch/arm/boot/dts/imx6sl.dtsi
+index 5b4dfc62030e8..0e0139246ad21 100644
+--- a/arch/arm/boot/dts/imx6sl.dtsi
++++ b/arch/arm/boot/dts/imx6sl.dtsi
+@@ -752,7 +752,7 @@
+ 				interrupts = <0 6 IRQ_TYPE_LEVEL_HIGH>;
+ 			};
+ 
+-			sdma: sdma@20ec000 {
++			sdma: dma-controller@20ec000 {
+ 				compatible = "fsl,imx6sl-sdma", "fsl,imx6q-sdma";
+ 				reg = <0x020ec000 0x4000>;
+ 				interrupts = <0 2 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/imx6sx.dtsi b/arch/arm/boot/dts/imx6sx.dtsi
+index 629c6a7432d9b..08332f70a8dc2 100644
+--- a/arch/arm/boot/dts/imx6sx.dtsi
++++ b/arch/arm/boot/dts/imx6sx.dtsi
+@@ -209,7 +209,7 @@
+ 			power-domains = <&pd_pu>;
+ 		};
+ 
+-		dma_apbh: dma-apbh@1804000 {
++		dma_apbh: dma-controller@1804000 {
+ 			compatible = "fsl,imx6sx-dma-apbh", "fsl,imx28-dma-apbh";
+ 			reg = <0x01804000 0x2000>;
+ 			interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
+@@ -848,7 +848,7 @@
+ 				reg = <0x020e4000 0x4000>;
+ 			};
+ 
+-			sdma: sdma@20ec000 {
++			sdma: dma-controller@20ec000 {
+ 				compatible = "fsl,imx6sx-sdma", "fsl,imx6q-sdma";
+ 				reg = <0x020ec000 0x4000>;
+ 				interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/imx6ul.dtsi b/arch/arm/boot/dts/imx6ul.dtsi
+index b40dd0c198028..c374fe2a6f19e 100644
+--- a/arch/arm/boot/dts/imx6ul.dtsi
++++ b/arch/arm/boot/dts/imx6ul.dtsi
+@@ -164,7 +164,7 @@
+ 			      <0x00a06000 0x2000>;
+ 		};
+ 
+-		dma_apbh: dma-apbh@1804000 {
++		dma_apbh: dma-controller@1804000 {
+ 			compatible = "fsl,imx6q-dma-apbh", "fsl,imx28-dma-apbh";
+ 			reg = <0x01804000 0x2000>;
+ 			interrupts = <0 13 IRQ_TYPE_LEVEL_HIGH>,
+@@ -743,7 +743,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			sdma: sdma@20ec000 {
++			sdma: dma-controller@20ec000 {
+ 				compatible = "fsl,imx6ul-sdma", "fsl,imx6q-sdma",
+ 					     "fsl,imx35-sdma";
+ 				reg = <0x020ec000 0x4000>;
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index 334e781663cc2..854c380cccea4 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -1137,6 +1137,8 @@
+ 					<&clks IMX7D_USDHC1_ROOT_CLK>;
+ 				clock-names = "ipg", "ahb", "per";
+ 				bus-width = <4>;
++				fsl,tuning-step = <2>;
++				fsl,tuning-start-tap = <20>;
+ 				status = "disabled";
+ 			};
+ 
+@@ -1149,6 +1151,8 @@
+ 					<&clks IMX7D_USDHC2_ROOT_CLK>;
+ 				clock-names = "ipg", "ahb", "per";
+ 				bus-width = <4>;
++				fsl,tuning-step = <2>;
++				fsl,tuning-start-tap = <20>;
+ 				status = "disabled";
+ 			};
+ 
+@@ -1161,6 +1165,8 @@
+ 					<&clks IMX7D_USDHC3_ROOT_CLK>;
+ 				clock-names = "ipg", "ahb", "per";
+ 				bus-width = <4>;
++				fsl,tuning-step = <2>;
++				fsl,tuning-start-tap = <20>;
+ 				status = "disabled";
+ 			};
+ 
+@@ -1177,7 +1183,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			sdma: sdma@30bd0000 {
++			sdma: dma-controller@30bd0000 {
+ 				compatible = "fsl,imx7d-sdma", "fsl,imx35-sdma";
+ 				reg = <0x30bd0000 0x10000>;
+ 				interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
+@@ -1210,14 +1216,13 @@
+ 			};
+ 		};
+ 
+-		dma_apbh: dma-apbh@33000000 {
++		dma_apbh: dma-controller@33000000 {
+ 			compatible = "fsl,imx7d-dma-apbh", "fsl,imx28-dma-apbh";
+ 			reg = <0x33000000 0x2000>;
+ 			interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+-			interrupt-names = "gpmi0", "gpmi1", "gpmi2", "gpmi3";
+ 			#dma-cells = <1>;
+ 			dma-channels = <4>;
+ 			clocks = <&clks IMX7D_NAND_USDHC_BUS_RAWNAND_CLK>;
+diff --git a/arch/arm/boot/dts/s3c6410-mini6410.dts b/arch/arm/boot/dts/s3c6410-mini6410.dts
+index 285555b9ed943..0b07b3c319604 100644
+--- a/arch/arm/boot/dts/s3c6410-mini6410.dts
++++ b/arch/arm/boot/dts/s3c6410-mini6410.dts
+@@ -51,7 +51,7 @@
+ 
+ 		ethernet@18000000 {
+ 			compatible = "davicom,dm9000";
+-			reg = <0x18000000 0x2 0x18000004 0x2>;
++			reg = <0x18000000 0x2>, <0x18000004 0x2>;
+ 			interrupt-parent = <&gpn>;
+ 			interrupts = <7 IRQ_TYPE_LEVEL_HIGH>;
+ 			davicom,no-eeprom;
+@@ -193,12 +193,12 @@
+ };
+ 
+ &pinctrl0 {
+-	gpio_leds: gpio-leds {
++	gpio_leds: gpio-leds-pins {
+ 		samsung,pins = "gpk-4", "gpk-5", "gpk-6", "gpk-7";
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	gpio_keys: gpio-keys {
++	gpio_keys: gpio-keys-pins {
+ 		samsung,pins = "gpn-0", "gpn-1", "gpn-2", "gpn-3",
+ 				"gpn-4", "gpn-5", "gpl-11", "gpl-12";
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+diff --git a/arch/arm/boot/dts/s3c64xx-pinctrl.dtsi b/arch/arm/boot/dts/s3c64xx-pinctrl.dtsi
+index 8e9594d64b579..0a3186d57cb56 100644
+--- a/arch/arm/boot/dts/s3c64xx-pinctrl.dtsi
++++ b/arch/arm/boot/dts/s3c64xx-pinctrl.dtsi
+@@ -16,111 +16,111 @@
+ 	 * Pin banks
+ 	 */
+ 
+-	gpa: gpa {
++	gpa: gpa-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
+ 
+-	gpb: gpb {
++	gpb: gpb-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
+ 
+-	gpc: gpc {
++	gpc: gpc-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
+ 
+-	gpd: gpd {
++	gpd: gpd-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
+ 
+-	gpe: gpe {
++	gpe: gpe-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 	};
+ 
+-	gpf: gpf {
++	gpf: gpf-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
+ 
+-	gpg: gpg {
++	gpg: gpg-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
+ 
+-	gph: gph {
++	gph: gph-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
+ 
+-	gpi: gpi {
++	gpi: gpi-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 	};
+ 
+-	gpj: gpj {
++	gpj: gpj-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 	};
+ 
+-	gpk: gpk {
++	gpk: gpk-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 	};
+ 
+-	gpl: gpl {
++	gpl: gpl-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
+ 
+-	gpm: gpm {
++	gpm: gpm-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
+ 
+-	gpn: gpn {
++	gpn: gpn-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
+ 
+-	gpo: gpo {
++	gpo: gpo-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
+ 
+-	gpp: gpp {
++	gpp: gpp-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 	};
+ 
+-	gpq: gpq {
++	gpq: gpq-gpio-bank {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
+@@ -131,225 +131,225 @@
+ 	 * Pin groups
+ 	 */
+ 
+-	uart0_data: uart0-data {
++	uart0_data: uart0-data-pins {
+ 		samsung,pins = "gpa-0", "gpa-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	uart0_fctl: uart0-fctl {
++	uart0_fctl: uart0-fctl-pins {
+ 		samsung,pins = "gpa-2", "gpa-3";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	uart1_data: uart1-data {
++	uart1_data: uart1-data-pins {
+ 		samsung,pins = "gpa-4", "gpa-5";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	uart1_fctl: uart1-fctl {
++	uart1_fctl: uart1-fctl-pins {
+ 		samsung,pins = "gpa-6", "gpa-7";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	uart2_data: uart2-data {
++	uart2_data: uart2-data-pins {
+ 		samsung,pins = "gpb-0", "gpb-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	uart3_data: uart3-data {
++	uart3_data: uart3-data-pins {
+ 		samsung,pins = "gpb-2", "gpb-3";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	ext_dma_0: ext-dma-0 {
++	ext_dma_0: ext-dma-0-pins {
+ 		samsung,pins = "gpb-0", "gpb-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	ext_dma_1: ext-dma-1 {
++	ext_dma_1: ext-dma-1-pins {
+ 		samsung,pins = "gpb-2", "gpb-3";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_4>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	irda_data_0: irda-data-0 {
++	irda_data_0: irda-data-0-pins {
+ 		samsung,pins = "gpb-0", "gpb-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_4>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	irda_data_1: irda-data-1 {
++	irda_data_1: irda-data-1-pins {
+ 		samsung,pins = "gpb-2", "gpb-3";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	irda_sdbw: irda-sdbw {
++	irda_sdbw: irda-sdbw-pins {
+ 		samsung,pins = "gpb-4";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	i2c0_bus: i2c0-bus {
++	i2c0_bus: i2c0-bus-pins {
+ 		samsung,pins = "gpb-5", "gpb-6";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_UP>;
+ 	};
+ 
+-	i2c1_bus: i2c1-bus {
++	i2c1_bus: i2c1-bus-pins {
+ 		/* S3C6410-only */
+ 		samsung,pins = "gpb-2", "gpb-3";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_6>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_UP>;
+ 	};
+ 
+-	spi0_bus: spi0-bus {
++	spi0_bus: spi0-bus-pins {
+ 		samsung,pins = "gpc-0", "gpc-1", "gpc-2";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_UP>;
+ 	};
+ 
+-	spi0_cs: spi0-cs {
++	spi0_cs: spi0-cs-pins {
+ 		samsung,pins = "gpc-3";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	spi1_bus: spi1-bus {
++	spi1_bus: spi1-bus-pins {
+ 		samsung,pins = "gpc-4", "gpc-5", "gpc-6";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_UP>;
+ 	};
+ 
+-	spi1_cs: spi1-cs {
++	spi1_cs: spi1-cs-pins {
+ 		samsung,pins = "gpc-7";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd0_cmd: sd0-cmd {
++	sd0_cmd: sd0-cmd-pins {
+ 		samsung,pins = "gpg-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd0_clk: sd0-clk {
++	sd0_clk: sd0-clk-pins {
+ 		samsung,pins = "gpg-0";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd0_bus1: sd0-bus1 {
++	sd0_bus1: sd0-bus1-pins {
+ 		samsung,pins = "gpg-2";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd0_bus4: sd0-bus4 {
++	sd0_bus4: sd0-bus4-pins {
+ 		samsung,pins = "gpg-2", "gpg-3", "gpg-4", "gpg-5";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd0_cd: sd0-cd {
++	sd0_cd: sd0-cd-pins {
+ 		samsung,pins = "gpg-6";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_UP>;
+ 	};
+ 
+-	sd1_cmd: sd1-cmd {
++	sd1_cmd: sd1-cmd-pins {
+ 		samsung,pins = "gph-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd1_clk: sd1-clk {
++	sd1_clk: sd1-clk-pins {
+ 		samsung,pins = "gph-0";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd1_bus1: sd1-bus1 {
++	sd1_bus1: sd1-bus1-pins {
+ 		samsung,pins = "gph-2";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd1_bus4: sd1-bus4 {
++	sd1_bus4: sd1-bus4-pins {
+ 		samsung,pins = "gph-2", "gph-3", "gph-4", "gph-5";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd1_bus8: sd1-bus8 {
++	sd1_bus8: sd1-bus8-pins {
+ 		samsung,pins = "gph-2", "gph-3", "gph-4", "gph-5",
+ 				"gph-6", "gph-7", "gph-8", "gph-9";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd1_cd: sd1-cd {
++	sd1_cd: sd1-cd-pins {
+ 		samsung,pins = "gpg-6";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_UP>;
+ 	};
+ 
+-	sd2_cmd: sd2-cmd {
++	sd2_cmd: sd2-cmd-pins {
+ 		samsung,pins = "gpc-4";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd2_clk: sd2-clk {
++	sd2_clk: sd2-clk-pins {
+ 		samsung,pins = "gpc-5";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd2_bus1: sd2-bus1 {
++	sd2_bus1: sd2-bus1-pins {
+ 		samsung,pins = "gph-6";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	sd2_bus4: sd2-bus4 {
++	sd2_bus4: sd2-bus4-pins {
+ 		samsung,pins = "gph-6", "gph-7", "gph-8", "gph-9";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	i2s0_bus: i2s0-bus {
++	i2s0_bus: i2s0-bus-pins {
+ 		samsung,pins = "gpd-0", "gpd-2", "gpd-3", "gpd-4";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	i2s0_cdclk: i2s0-cdclk {
++	i2s0_cdclk: i2s0-cdclk-pins {
+ 		samsung,pins = "gpd-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	i2s1_bus: i2s1-bus {
++	i2s1_bus: i2s1-bus-pins {
+ 		samsung,pins = "gpe-0", "gpe-2", "gpe-3", "gpe-4";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	i2s1_cdclk: i2s1-cdclk {
++	i2s1_cdclk: i2s1-cdclk-pins {
+ 		samsung,pins = "gpe-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	i2s2_bus: i2s2-bus {
++	i2s2_bus: i2s2-bus-pins {
+ 		/* S3C6410-only */
+ 		samsung,pins = "gpc-4", "gpc-5", "gpc-6", "gph-6",
+ 				"gph-8", "gph-9";
+@@ -357,50 +357,50 @@
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	i2s2_cdclk: i2s2-cdclk {
++	i2s2_cdclk: i2s2-cdclk-pins {
+ 		/* S3C6410-only */
+ 		samsung,pins = "gph-7";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_5>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	pcm0_bus: pcm0-bus {
++	pcm0_bus: pcm0-bus-pins {
+ 		samsung,pins = "gpd-0", "gpd-2", "gpd-3", "gpd-4";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	pcm0_extclk: pcm0-extclk {
++	pcm0_extclk: pcm0-extclk-pins {
+ 		samsung,pins = "gpd-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	pcm1_bus: pcm1-bus {
++	pcm1_bus: pcm1-bus-pins {
+ 		samsung,pins = "gpe-0", "gpe-2", "gpe-3", "gpe-4";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	pcm1_extclk: pcm1-extclk {
++	pcm1_extclk: pcm1-extclk-pins {
+ 		samsung,pins = "gpe-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	ac97_bus_0: ac97-bus-0 {
++	ac97_bus_0: ac97-bus-0-pins {
+ 		samsung,pins = "gpd-0", "gpd-1", "gpd-2", "gpd-3", "gpd-4";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_4>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	ac97_bus_1: ac97-bus-1 {
++	ac97_bus_1: ac97-bus-1-pins {
+ 		samsung,pins = "gpe-0", "gpe-1", "gpe-2", "gpe-3", "gpe-4";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_4>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	cam_port: cam-port {
++	cam_port: cam-port-pins {
+ 		samsung,pins = "gpf-0", "gpf-1", "gpf-2", "gpf-4",
+ 				"gpf-5", "gpf-6", "gpf-7", "gpf-8",
+ 				"gpf-9", "gpf-10", "gpf-11", "gpf-12";
+@@ -408,242 +408,242 @@
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	cam_rst: cam-rst {
++	cam_rst: cam-rst-pins {
+ 		samsung,pins = "gpf-3";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	cam_field: cam-field {
++	cam_field: cam-field-pins {
+ 		/* S3C6410-only */
+ 		samsung,pins = "gpb-4";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	pwm_extclk: pwm-extclk {
++	pwm_extclk: pwm-extclk-pins {
+ 		samsung,pins = "gpf-13";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	pwm0_out: pwm0-out {
++	pwm0_out: pwm0-out-pins {
+ 		samsung,pins = "gpf-14";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	pwm1_out: pwm1-out {
++	pwm1_out: pwm1-out-pins {
+ 		samsung,pins = "gpf-15";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	clkout0: clkout-0 {
++	clkout0: clkout-0-pins {
+ 		samsung,pins = "gpf-14";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col0_0: keypad-col0-0 {
++	keypad_col0_0: keypad-col0-0-pins {
+ 		samsung,pins = "gph-0";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_4>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col1_0: keypad-col1-0 {
++	keypad_col1_0: keypad-col1-0-pins {
+ 		samsung,pins = "gph-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_4>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col2_0: keypad-col2-0 {
++	keypad_col2_0: keypad-col2-0-pins {
+ 		samsung,pins = "gph-2";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_4>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col3_0: keypad-col3-0 {
++	keypad_col3_0: keypad-col3-0-pins {
+ 		samsung,pins = "gph-3";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_4>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col4_0: keypad-col4-0 {
++	keypad_col4_0: keypad-col4-0-pins {
+ 		samsung,pins = "gph-4";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_4>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col5_0: keypad-col5-0 {
++	keypad_col5_0: keypad-col5-0-pins {
+ 		samsung,pins = "gph-5";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_4>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col6_0: keypad-col6-0 {
++	keypad_col6_0: keypad-col6-0-pins {
+ 		samsung,pins = "gph-6";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_4>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col7_0: keypad-col7-0 {
++	keypad_col7_0: keypad-col7-0-pins {
+ 		samsung,pins = "gph-7";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_4>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col0_1: keypad-col0-1 {
++	keypad_col0_1: keypad-col0-1-pins {
+ 		samsung,pins = "gpl-0";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col1_1: keypad-col1-1 {
++	keypad_col1_1: keypad-col1-1-pins {
+ 		samsung,pins = "gpl-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col2_1: keypad-col2-1 {
++	keypad_col2_1: keypad-col2-1-pins {
+ 		samsung,pins = "gpl-2";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col3_1: keypad-col3-1 {
++	keypad_col3_1: keypad-col3-1-pins {
+ 		samsung,pins = "gpl-3";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col4_1: keypad-col4-1 {
++	keypad_col4_1: keypad-col4-1-pins {
+ 		samsung,pins = "gpl-4";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col5_1: keypad-col5-1 {
++	keypad_col5_1: keypad-col5-1-pins {
+ 		samsung,pins = "gpl-5";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col6_1: keypad-col6-1 {
++	keypad_col6_1: keypad-col6-1-pins {
+ 		samsung,pins = "gpl-6";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_col7_1: keypad-col7-1 {
++	keypad_col7_1: keypad-col7-1-pins {
+ 		samsung,pins = "gpl-7";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row0_0: keypad-row0-0 {
++	keypad_row0_0: keypad-row0-0-pins {
+ 		samsung,pins = "gpk-8";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row1_0: keypad-row1-0 {
++	keypad_row1_0: keypad-row1-0-pins {
+ 		samsung,pins = "gpk-9";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row2_0: keypad-row2-0 {
++	keypad_row2_0: keypad-row2-0-pins {
+ 		samsung,pins = "gpk-10";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row3_0: keypad-row3-0 {
++	keypad_row3_0: keypad-row3-0-pins {
+ 		samsung,pins = "gpk-11";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row4_0: keypad-row4-0 {
++	keypad_row4_0: keypad-row4-0-pins {
+ 		samsung,pins = "gpk-12";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row5_0: keypad-row5-0 {
++	keypad_row5_0: keypad-row5-0-pins {
+ 		samsung,pins = "gpk-13";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row6_0: keypad-row6-0 {
++	keypad_row6_0: keypad-row6-0-pins {
+ 		samsung,pins = "gpk-14";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row7_0: keypad-row7-0 {
++	keypad_row7_0: keypad-row7-0-pins {
+ 		samsung,pins = "gpk-15";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row0_1: keypad-row0-1 {
++	keypad_row0_1: keypad-row0-1-pins {
+ 		samsung,pins = "gpn-0";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row1_1: keypad-row1-1 {
++	keypad_row1_1: keypad-row1-1-pins {
+ 		samsung,pins = "gpn-1";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row2_1: keypad-row2-1 {
++	keypad_row2_1: keypad-row2-1-pins {
+ 		samsung,pins = "gpn-2";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row3_1: keypad-row3-1 {
++	keypad_row3_1: keypad-row3-1-pins {
+ 		samsung,pins = "gpn-3";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row4_1: keypad-row4-1 {
++	keypad_row4_1: keypad-row4-1-pins {
+ 		samsung,pins = "gpn-4";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row5_1: keypad-row5-1 {
++	keypad_row5_1: keypad-row5-1-pins {
+ 		samsung,pins = "gpn-5";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row6_1: keypad-row6-1 {
++	keypad_row6_1: keypad-row6-1-pins {
+ 		samsung,pins = "gpn-6";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	keypad_row7_1: keypad-row7-1 {
++	keypad_row7_1: keypad-row7-1-pins {
+ 		samsung,pins = "gpn-7";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	lcd_ctrl: lcd-ctrl {
++	lcd_ctrl: lcd-ctrl-pins {
+ 		samsung,pins = "gpj-8", "gpj-9", "gpj-10", "gpj-11";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	lcd_data16: lcd-data-width16 {
++	lcd_data16: lcd-data-width16-pins {
+ 		samsung,pins = "gpi-3", "gpi-4", "gpi-5", "gpi-6",
+ 				"gpi-7", "gpi-10", "gpi-11", "gpi-12",
+ 				"gpi-13", "gpi-14", "gpi-15", "gpj-3",
+@@ -652,7 +652,7 @@
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	lcd_data18: lcd-data-width18 {
++	lcd_data18: lcd-data-width18-pins {
+ 		samsung,pins = "gpi-2", "gpi-3", "gpi-4", "gpi-5",
+ 				"gpi-6", "gpi-7", "gpi-10", "gpi-11",
+ 				"gpi-12", "gpi-13", "gpi-14", "gpi-15",
+@@ -662,7 +662,7 @@
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	lcd_data24: lcd-data-width24 {
++	lcd_data24: lcd-data-width24-pins {
+ 		samsung,pins = "gpi-0", "gpi-1", "gpi-2", "gpi-3",
+ 				"gpi-4", "gpi-5", "gpi-6", "gpi-7",
+ 				"gpi-8", "gpi-9", "gpi-10", "gpi-11",
+@@ -673,7 +673,7 @@
+ 		samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
+ 	};
+ 
+-	hsi_bus: hsi-bus {
++	hsi_bus: hsi-bus-pins {
+ 		samsung,pins = "gpk-0", "gpk-1", "gpk-2", "gpk-3",
+ 				"gpk-4", "gpk-5", "gpk-6", "gpk-7";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_3>;
+diff --git a/arch/arm/boot/dts/s5pv210-aquila.dts b/arch/arm/boot/dts/s5pv210-aquila.dts
+index 8e57e5a1f0c51..6423348034b68 100644
+--- a/arch/arm/boot/dts/s5pv210-aquila.dts
++++ b/arch/arm/boot/dts/s5pv210-aquila.dts
+@@ -277,37 +277,37 @@
+ 			<&keypad_col0>, <&keypad_col1>, <&keypad_col2>;
+ 	status = "okay";
+ 
+-	key_1 {
++	key-1 {
+ 		keypad,row = <0>;
+ 		keypad,column = <1>;
+ 		linux,code = <KEY_CONNECT>;
+ 	};
+ 
+-	key_2 {
++	key-2 {
+ 		keypad,row = <0>;
+ 		keypad,column = <2>;
+ 		linux,code = <KEY_BACK>;
+ 	};
+ 
+-	key_3 {
++	key-3 {
+ 		keypad,row = <1>;
+ 		keypad,column = <1>;
+ 		linux,code = <KEY_CAMERA_FOCUS>;
+ 	};
+ 
+-	key_4 {
++	key-4 {
+ 		keypad,row = <1>;
+ 		keypad,column = <2>;
+ 		linux,code = <KEY_VOLUMEUP>;
+ 	};
+ 
+-	key_5 {
++	key-5 {
+ 		keypad,row = <2>;
+ 		keypad,column = <1>;
+ 		linux,code = <KEY_CAMERA>;
+ 	};
+ 
+-	key_6 {
++	key-6 {
+ 		keypad,row = <2>;
+ 		keypad,column = <2>;
+ 		linux,code = <KEY_VOLUMEDOWN>;
+diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi
+index 984bc8dc5e4bd..8f7dcd7af0bc9 100644
+--- a/arch/arm/boot/dts/s5pv210-aries.dtsi
++++ b/arch/arm/boot/dts/s5pv210-aries.dtsi
+@@ -54,7 +54,7 @@
+ 		clock-frequency = <32768>;
+ 	};
+ 
+-	bt_codec: bt_sco {
++	bt_codec: bt-sco {
+ 		compatible = "linux,bt-sco";
+ 		#sound-dai-cells = <0>;
+ 	};
+@@ -113,7 +113,7 @@
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&sound_i2c_pins>;
+ 
+-		wm8994: wm8994@1a {
++		wm8994: audio-codec@1a {
+ 			compatible = "wlf,wm8994";
+ 			reg = <0x1a>;
+ 
+diff --git a/arch/arm/boot/dts/s5pv210-goni.dts b/arch/arm/boot/dts/s5pv210-goni.dts
+index ad8d5d2fa32d7..5c1e12d39747b 100644
+--- a/arch/arm/boot/dts/s5pv210-goni.dts
++++ b/arch/arm/boot/dts/s5pv210-goni.dts
+@@ -259,37 +259,37 @@
+ 			<&keypad_col0>, <&keypad_col1>, <&keypad_col2>;
+ 	status = "okay";
+ 
+-	key_1 {
++	key-1 {
+ 		keypad,row = <0>;
+ 		keypad,column = <1>;
+ 		linux,code = <KEY_CONNECT>;
+ 	};
+ 
+-	key_2 {
++	key-2 {
+ 		keypad,row = <0>;
+ 		keypad,column = <2>;
+ 		linux,code = <KEY_BACK>;
+ 	};
+ 
+-	key_3 {
++	key-3 {
+ 		keypad,row = <1>;
+ 		keypad,column = <1>;
+ 		linux,code = <KEY_CAMERA_FOCUS>;
+ 	};
+ 
+-	key_4 {
++	key-4 {
+ 		keypad,row = <1>;
+ 		keypad,column = <2>;
+ 		linux,code = <KEY_VOLUMEUP>;
+ 	};
+ 
+-	key_5 {
++	key-5 {
+ 		keypad,row = <2>;
+ 		keypad,column = <1>;
+ 		linux,code = <KEY_CAMERA>;
+ 	};
+ 
+-	key_6 {
++	key-6 {
+ 		keypad,row = <2>;
+ 		keypad,column = <2>;
+ 		linux,code = <KEY_VOLUMEDOWN>;
+@@ -353,7 +353,7 @@
+ 	samsung,i2c-slave-addr = <0x10>;
+ 	status = "okay";
+ 
+-	tsp@4a {
++	touchscreen@4a {
+ 		compatible = "atmel,maxtouch";
+ 		reg = <0x4a>;
+ 		interrupt-parent = <&gpj0>;
+diff --git a/arch/arm/boot/dts/s5pv210-smdkv210.dts b/arch/arm/boot/dts/s5pv210-smdkv210.dts
+index 7459e41e8ef13..901e7197b1368 100644
+--- a/arch/arm/boot/dts/s5pv210-smdkv210.dts
++++ b/arch/arm/boot/dts/s5pv210-smdkv210.dts
+@@ -41,7 +41,7 @@
+ 
+ 	ethernet@a8000000 {
+ 		compatible = "davicom,dm9000";
+-		reg = <0xA8000000 0x2 0xA8000002 0x2>;
++		reg = <0xa8000000 0x2>, <0xa8000002 0x2>;
+ 		interrupt-parent = <&gph1>;
+ 		interrupts = <1 IRQ_TYPE_LEVEL_HIGH>;
+ 		local-mac-address = [00 00 de ad be ef];
+@@ -55,6 +55,14 @@
+ 		default-brightness-level = <6>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pwm3_out>;
++		power-supply = <&dc5v_reg>;
++	};
++
++	dc5v_reg: regulator-0 {
++		compatible = "regulator-fixed";
++		regulator-name = "DC5V";
++		regulator-min-microvolt = <5000000>;
++		regulator-max-microvolt = <5000000>;
+ 	};
+ };
+ 
+@@ -76,61 +84,61 @@
+ 			<&keypad_col6>, <&keypad_col7>;
+ 	status = "okay";
+ 
+-	key_1 {
++	key-1 {
+ 		keypad,row = <0>;
+ 		keypad,column = <3>;
+ 		linux,code = <KEY_1>;
+ 	};
+ 
+-	key_2 {
++	key-2 {
+ 		keypad,row = <0>;
+ 		keypad,column = <4>;
+ 		linux,code = <KEY_2>;
+ 	};
+ 
+-	key_3 {
++	key-3 {
+ 		keypad,row = <0>;
+ 		keypad,column = <5>;
+ 		linux,code = <KEY_3>;
+ 	};
+ 
+-	key_4 {
++	key-4 {
+ 		keypad,row = <0>;
+ 		keypad,column = <6>;
+ 		linux,code = <KEY_4>;
+ 	};
+ 
+-	key_5 {
++	key-5 {
+ 		keypad,row = <0
+ 		>;
+ 		keypad,column = <7>;
+ 		linux,code = <KEY_5>;
+ 	};
+ 
+-	key_6 {
++	key-6 {
+ 		keypad,row = <1>;
+ 		keypad,column = <3>;
+ 		linux,code = <KEY_A>;
+ 	};
+-	key_7 {
++	key-7 {
+ 		keypad,row = <1>;
+ 		keypad,column = <4>;
+ 		linux,code = <KEY_B>;
+ 	};
+ 
+-	key_8 {
++	key-8 {
+ 		keypad,row = <1>;
+ 		keypad,column = <5>;
+ 		linux,code = <KEY_C>;
+ 	};
+ 
+-	key_9 {
++	key-9 {
+ 		keypad,row = <1>;
+ 		keypad,column = <6>;
+ 		linux,code = <KEY_D>;
+ 	};
+ 
+-	key_10 {
++	key-10 {
+ 		keypad,row = <1>;
+ 		keypad,column = <7>;
+ 		linux,code = <KEY_E>;
+diff --git a/arch/arm/mach-omap2/powerdomain.c b/arch/arm/mach-omap2/powerdomain.c
+index 1cbac76136d46..6a10d23d787e8 100644
+--- a/arch/arm/mach-omap2/powerdomain.c
++++ b/arch/arm/mach-omap2/powerdomain.c
+@@ -174,7 +174,7 @@ static int _pwrdm_state_switch(struct powerdomain *pwrdm, int flag)
+ 		break;
+ 	case PWRDM_STATE_PREV:
+ 		prev = pwrdm_read_prev_pwrst(pwrdm);
+-		if (pwrdm->state != prev)
++		if (prev >= 0 && pwrdm->state != prev)
+ 			pwrdm->state_counter[prev]++;
+ 		if (prev == PWRDM_POWER_RET)
+ 			_update_logic_membank_counters(pwrdm);
+diff --git a/arch/arm/mach-pxa/sharpsl_pm.c b/arch/arm/mach-pxa/sharpsl_pm.c
+index 83cfbb882a2d4..7f6bd7f069e49 100644
+--- a/arch/arm/mach-pxa/sharpsl_pm.c
++++ b/arch/arm/mach-pxa/sharpsl_pm.c
+@@ -220,8 +220,6 @@ void sharpsl_battery_kick(void)
+ {
+ 	schedule_delayed_work(&sharpsl_bat, msecs_to_jiffies(125));
+ }
+-EXPORT_SYMBOL(sharpsl_battery_kick);
+-
+ 
+ static void sharpsl_battery_thread(struct work_struct *private_)
+ {
+diff --git a/arch/arm/mach-pxa/spitz.c b/arch/arm/mach-pxa/spitz.c
+index 371008e9bb029..264de0bc97d68 100644
+--- a/arch/arm/mach-pxa/spitz.c
++++ b/arch/arm/mach-pxa/spitz.c
+@@ -9,7 +9,6 @@
+  */
+ 
+ #include <linux/kernel.h>
+-#include <linux/module.h>	/* symbol_get ; symbol_put */
+ #include <linux/platform_device.h>
+ #include <linux/delay.h>
+ #include <linux/gpio_keys.h>
+@@ -514,17 +513,6 @@ static struct pxa2xx_spi_chip spitz_ads7846_chip = {
+ 	.gpio_cs		= SPITZ_GPIO_ADS7846_CS,
+ };
+ 
+-static void spitz_bl_kick_battery(void)
+-{
+-	void (*kick_batt)(void);
+-
+-	kick_batt = symbol_get(sharpsl_battery_kick);
+-	if (kick_batt) {
+-		kick_batt();
+-		symbol_put(sharpsl_battery_kick);
+-	}
+-}
+-
+ static struct gpiod_lookup_table spitz_lcdcon_gpio_table = {
+ 	.dev_id = "spi2.1",
+ 	.table = {
+@@ -552,7 +540,7 @@ static struct corgi_lcd_platform_data spitz_lcdcon_info = {
+ 	.max_intensity		= 0x2f,
+ 	.default_intensity	= 0x1f,
+ 	.limit_mask		= 0x0b,
+-	.kick_battery		= spitz_bl_kick_battery,
++	.kick_battery		= sharpsl_battery_kick,
+ };
+ 
+ static struct pxa2xx_spi_chip spitz_lcdcon_chip = {
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 73f7490911c92..0bc5fefb7a49b 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -1966,6 +1966,9 @@
+ 			#size-cells = <1>;
+ 			ranges;
+ 
++			interrupts = <GIC_SPI 352 IRQ_TYPE_LEVEL_HIGH>;
++			interrupt-names = "hs_phy_irq";
++
+ 			clocks = <&gcc GCC_PERIPH_NOC_USB20_AHB_CLK>,
+ 				<&gcc GCC_USB20_MASTER_CLK>,
+ 				<&gcc GCC_USB20_MOCK_UTMI_CLK>,
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 71e5b9fdc9e16..5c696ebf5c20c 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -1064,6 +1064,7 @@
+ 			#clock-cells = <1>;
+ 			#reset-cells = <1>;
+ 			#power-domain-cells = <1>;
++			power-domains = <&rpmhpd SDM845_CX>;
+ 		};
+ 
+ 		qfprom@784000 {
+@@ -2108,7 +2109,7 @@
+ 				<0 0>,
+ 				<0 0>,
+ 				<0 0>,
+-				<0 300000000>;
++				<75000000 300000000>;
+ 
+ 			status = "disabled";
+ 		};
+diff --git a/arch/arm64/include/asm/sdei.h b/arch/arm64/include/asm/sdei.h
+index 63e0b92a5fbb0..5882c0e29331e 100644
+--- a/arch/arm64/include/asm/sdei.h
++++ b/arch/arm64/include/asm/sdei.h
+@@ -17,6 +17,9 @@
+ 
+ #include <asm/virt.h>
+ 
++DECLARE_PER_CPU(struct sdei_registered_event *, sdei_active_normal_event);
++DECLARE_PER_CPU(struct sdei_registered_event *, sdei_active_critical_event);
++
+ extern unsigned long sdei_exit_mode;
+ 
+ /* Software Delegated Exception entry point from firmware*/
+@@ -29,6 +32,9 @@ asmlinkage void __sdei_asm_entry_trampoline(unsigned long event_num,
+ 						   unsigned long pc,
+ 						   unsigned long pstate);
+ 
++/* Abort a running handler. Context is discarded. */
++void __sdei_handler_abort(void);
++
+ /*
+  * The above entry point does the minimum to call C code. This function does
+  * anything else, before calling the driver.
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 55e477f73158d..a94acea770c7c 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -1137,9 +1137,13 @@ SYM_CODE_START(__sdei_asm_handler)
+ 
+ 	mov	x19, x1
+ 
+-#if defined(CONFIG_VMAP_STACK) || defined(CONFIG_SHADOW_CALL_STACK)
++	/* Store the registered-event for crash_smp_send_stop() */
+ 	ldrb	w4, [x19, #SDEI_EVENT_PRIORITY]
+-#endif
++	cbnz	w4, 1f
++	adr_this_cpu dst=x5, sym=sdei_active_normal_event, tmp=x6
++	b	2f
++1:	adr_this_cpu dst=x5, sym=sdei_active_critical_event, tmp=x6
++2:	str	x19, [x5]
+ 
+ #ifdef CONFIG_VMAP_STACK
+ 	/*
+@@ -1204,6 +1208,14 @@ SYM_CODE_START(__sdei_asm_handler)
+ 
+ 	ldr_l	x2, sdei_exit_mode
+ 
++	/* Clear the registered-event seen by crash_smp_send_stop() */
++	ldrb	w3, [x4, #SDEI_EVENT_PRIORITY]
++	cbnz	w3, 1f
++	adr_this_cpu dst=x5, sym=sdei_active_normal_event, tmp=x6
++	b	2f
++1:	adr_this_cpu dst=x5, sym=sdei_active_critical_event, tmp=x6
++2:	str	xzr, [x5]
++
+ alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
+ 	sdei_handler_exit exit_mode=x2
+ alternative_else_nop_endif
+@@ -1214,4 +1226,15 @@ alternative_else_nop_endif
+ #endif
+ SYM_CODE_END(__sdei_asm_handler)
+ NOKPROBE(__sdei_asm_handler)
++
++SYM_CODE_START(__sdei_handler_abort)
++	mov_q	x0, SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME
++	adr	x1, 1f
++	ldr_l	x2, sdei_exit_mode
++	sdei_handler_exit exit_mode=x2
++	// exit the handler and jump to the next instruction.
++	// Exit will stomp x0-x17, PSTATE, ELR_ELx, and SPSR_ELx.
++1:	ret
++SYM_CODE_END(__sdei_handler_abort)
++NOKPROBE(__sdei_handler_abort)
+ #endif /* CONFIG_ARM_SDE_INTERFACE */
+diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
+index 793c46d6a4479..0083f5afa51db 100644
+--- a/arch/arm64/kernel/sdei.c
++++ b/arch/arm64/kernel/sdei.c
+@@ -38,6 +38,9 @@ DEFINE_PER_CPU(unsigned long *, sdei_stack_normal_ptr);
+ DEFINE_PER_CPU(unsigned long *, sdei_stack_critical_ptr);
+ #endif
+ 
++DEFINE_PER_CPU(struct sdei_registered_event *, sdei_active_normal_event);
++DEFINE_PER_CPU(struct sdei_registered_event *, sdei_active_critical_event);
++
+ static void _free_sdei_stack(unsigned long * __percpu *ptr, int cpu)
+ {
+ 	unsigned long *p;
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index feee5a3cd1288..ae0977b632a18 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -1072,10 +1072,8 @@ void crash_smp_send_stop(void)
+ 	 * If this cpu is the only one alive at this point in time, online or
+ 	 * not, there are no stop messages to be sent around, so just back out.
+ 	 */
+-	if (num_other_online_cpus() == 0) {
+-		sdei_mask_local_cpu();
+-		return;
+-	}
++	if (num_other_online_cpus() == 0)
++		goto skip_ipi;
+ 
+ 	cpumask_copy(&mask, cpu_online_mask);
+ 	cpumask_clear_cpu(smp_processor_id(), &mask);
+@@ -1094,7 +1092,9 @@ void crash_smp_send_stop(void)
+ 		pr_warn("SMP: failed to stop secondary CPUs %*pbl\n",
+ 			cpumask_pr_args(&mask));
+ 
++skip_ipi:
+ 	sdei_mask_local_cpu();
++	sdei_handler_abort();
+ }
+ 
+ bool smp_crash_stop_failed(void)
+diff --git a/arch/arm64/lib/csum.c b/arch/arm64/lib/csum.c
+index 78b87a64ca0a3..2432683e48a61 100644
+--- a/arch/arm64/lib/csum.c
++++ b/arch/arm64/lib/csum.c
+@@ -24,7 +24,7 @@ unsigned int __no_sanitize_address do_csum(const unsigned char *buff, int len)
+ 	const u64 *ptr;
+ 	u64 data, sum64 = 0;
+ 
+-	if (unlikely(len == 0))
++	if (unlikely(len <= 0))
+ 		return 0;
+ 
+ 	offset = (unsigned long)buff & 7;
+diff --git a/arch/m68k/fpsp040/skeleton.S b/arch/m68k/fpsp040/skeleton.S
+index a8f41615d94a7..31a9c634c81ed 100644
+--- a/arch/m68k/fpsp040/skeleton.S
++++ b/arch/m68k/fpsp040/skeleton.S
+@@ -499,12 +499,12 @@ in_ea:
+ 	dbf	%d0,morein
+ 	rts
+ 
+-	.section .fixup,#alloc,#execinstr
++	.section .fixup,"ax"
+ 	.even
+ 1:
+ 	jbra	fpsp040_die
+ 
+-	.section __ex_table,#alloc
++	.section __ex_table,"a"
+ 	.align	4
+ 
+ 	.long	in_ea,1b
+diff --git a/arch/m68k/ifpsp060/os.S b/arch/m68k/ifpsp060/os.S
+index 7a0d6e4280665..89e2ec224ab6c 100644
+--- a/arch/m68k/ifpsp060/os.S
++++ b/arch/m68k/ifpsp060/os.S
+@@ -379,11 +379,11 @@ _060_real_access:
+ 
+ 
+ | Execption handling for movs access to illegal memory
+-	.section .fixup,#alloc,#execinstr
++	.section .fixup,"ax"
+ 	.even
+ 1:	moveq		#-1,%d1
+ 	rts
+-.section __ex_table,#alloc
++.section __ex_table,"a"
+ 	.align 4
+ 	.long	dmrbuae,1b
+ 	.long	dmrwuae,1b
+diff --git a/arch/m68k/kernel/relocate_kernel.S b/arch/m68k/kernel/relocate_kernel.S
+index ab0f1e7d46535..f7667079e08e9 100644
+--- a/arch/m68k/kernel/relocate_kernel.S
++++ b/arch/m68k/kernel/relocate_kernel.S
+@@ -26,7 +26,7 @@ ENTRY(relocate_new_kernel)
+ 	lea %pc@(.Lcopy),%a4
+ 2:	addl #0x00000000,%a4		/* virt_to_phys() */
+ 
+-	.section ".m68k_fixup","aw"
++	.section .m68k_fixup,"aw"
+ 	.long M68K_FIXUP_MEMOFFSET, 2b+2
+ 	.previous
+ 
+@@ -49,7 +49,7 @@ ENTRY(relocate_new_kernel)
+ 	lea %pc@(.Lcont040),%a4
+ 5:	addl #0x00000000,%a4		/* virt_to_phys() */
+ 
+-	.section ".m68k_fixup","aw"
++	.section .m68k_fixup,"aw"
+ 	.long M68K_FIXUP_MEMOFFSET, 5b+2
+ 	.previous
+ 
+diff --git a/arch/mips/alchemy/devboards/db1000.c b/arch/mips/alchemy/devboards/db1000.c
+index 2c52ee27b4f25..50de86eb8784c 100644
+--- a/arch/mips/alchemy/devboards/db1000.c
++++ b/arch/mips/alchemy/devboards/db1000.c
+@@ -14,7 +14,6 @@
+ #include <linux/interrupt.h>
+ #include <linux/leds.h>
+ #include <linux/mmc/host.h>
+-#include <linux/module.h>
+ #include <linux/platform_device.h>
+ #include <linux/pm.h>
+ #include <linux/spi/spi.h>
+@@ -167,12 +166,7 @@ static struct platform_device db1x00_audio_dev = {
+ 
+ static irqreturn_t db1100_mmc_cd(int irq, void *ptr)
+ {
+-	void (*mmc_cd)(struct mmc_host *, unsigned long);
+-	/* link against CONFIG_MMC=m */
+-	mmc_cd = symbol_get(mmc_detect_change);
+-	mmc_cd(ptr, msecs_to_jiffies(500));
+-	symbol_put(mmc_detect_change);
+-
++	mmc_detect_change(ptr, msecs_to_jiffies(500));
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/arch/mips/alchemy/devboards/db1200.c b/arch/mips/alchemy/devboards/db1200.c
+index 421d651433b67..b70e2cf8a27bc 100644
+--- a/arch/mips/alchemy/devboards/db1200.c
++++ b/arch/mips/alchemy/devboards/db1200.c
+@@ -10,7 +10,6 @@
+ #include <linux/gpio.h>
+ #include <linux/i2c.h>
+ #include <linux/init.h>
+-#include <linux/module.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
+ #include <linux/leds.h>
+@@ -340,14 +339,7 @@ static irqreturn_t db1200_mmc_cd(int irq, void *ptr)
+ 
+ static irqreturn_t db1200_mmc_cdfn(int irq, void *ptr)
+ {
+-	void (*mmc_cd)(struct mmc_host *, unsigned long);
+-
+-	/* link against CONFIG_MMC=m */
+-	mmc_cd = symbol_get(mmc_detect_change);
+-	if (mmc_cd) {
+-		mmc_cd(ptr, msecs_to_jiffies(200));
+-		symbol_put(mmc_detect_change);
+-	}
++	mmc_detect_change(ptr, msecs_to_jiffies(200));
+ 
+ 	msleep(100);	/* debounce */
+ 	if (irq == DB1200_SD0_INSERT_INT)
+@@ -431,14 +423,7 @@ static irqreturn_t pb1200_mmc1_cd(int irq, void *ptr)
+ 
+ static irqreturn_t pb1200_mmc1_cdfn(int irq, void *ptr)
+ {
+-	void (*mmc_cd)(struct mmc_host *, unsigned long);
+-
+-	/* link against CONFIG_MMC=m */
+-	mmc_cd = symbol_get(mmc_detect_change);
+-	if (mmc_cd) {
+-		mmc_cd(ptr, msecs_to_jiffies(200));
+-		symbol_put(mmc_detect_change);
+-	}
++	mmc_detect_change(ptr, msecs_to_jiffies(200));
+ 
+ 	msleep(100);	/* debounce */
+ 	if (irq == PB1200_SD1_INSERT_INT)
+diff --git a/arch/mips/alchemy/devboards/db1300.c b/arch/mips/alchemy/devboards/db1300.c
+index cd72eaa1168f7..ca71e5ed51abd 100644
+--- a/arch/mips/alchemy/devboards/db1300.c
++++ b/arch/mips/alchemy/devboards/db1300.c
+@@ -17,7 +17,6 @@
+ #include <linux/interrupt.h>
+ #include <linux/ata_platform.h>
+ #include <linux/mmc/host.h>
+-#include <linux/module.h>
+ #include <linux/mtd/mtd.h>
+ #include <linux/mtd/platnand.h>
+ #include <linux/platform_device.h>
+@@ -459,14 +458,7 @@ static irqreturn_t db1300_mmc_cd(int irq, void *ptr)
+ 
+ static irqreturn_t db1300_mmc_cdfn(int irq, void *ptr)
+ {
+-	void (*mmc_cd)(struct mmc_host *, unsigned long);
+-
+-	/* link against CONFIG_MMC=m.  We can only be called once MMC core has
+-	 * initialized the controller, so symbol_get() should always succeed.
+-	 */
+-	mmc_cd = symbol_get(mmc_detect_change);
+-	mmc_cd(ptr, msecs_to_jiffies(200));
+-	symbol_put(mmc_detect_change);
++	mmc_detect_change(ptr, msecs_to_jiffies(200));
+ 
+ 	msleep(100);	/* debounce */
+ 	if (irq == DB1300_SD1_INSERT_INT)
+diff --git a/arch/parisc/include/asm/led.h b/arch/parisc/include/asm/led.h
+index 6de13d08a3886..b70b9094fb7cd 100644
+--- a/arch/parisc/include/asm/led.h
++++ b/arch/parisc/include/asm/led.h
+@@ -11,8 +11,8 @@
+ #define	LED1		0x02
+ #define	LED0		0x01		/* bottom (or furthest left) LED */
+ 
+-#define	LED_LAN_TX	LED0		/* for LAN transmit activity */
+-#define	LED_LAN_RCV	LED1		/* for LAN receive activity */
++#define	LED_LAN_RCV	LED0		/* for LAN receive activity */
++#define	LED_LAN_TX	LED1		/* for LAN transmit activity */
+ #define	LED_DISK_IO	LED2		/* for disk activity */
+ #define	LED_HEARTBEAT	LED3		/* heartbeat */
+ 
+diff --git a/arch/parisc/include/asm/processor.h b/arch/parisc/include/asm/processor.h
+index 6e2a8176b0dd0..40135be979652 100644
+--- a/arch/parisc/include/asm/processor.h
++++ b/arch/parisc/include/asm/processor.h
+@@ -97,7 +97,6 @@ struct cpuinfo_parisc {
+ 	unsigned long cpu_loc;      /* CPU location from PAT firmware */
+ 	unsigned int state;
+ 	struct parisc_device *dev;
+-	unsigned long loops_per_jiffy;
+ };
+ 
+ extern struct system_cpuinfo_parisc boot_cpu_data;
+diff --git a/arch/parisc/kernel/processor.c b/arch/parisc/kernel/processor.c
+index 176ef00bdd15e..ccdbcfdfe4e21 100644
+--- a/arch/parisc/kernel/processor.c
++++ b/arch/parisc/kernel/processor.c
+@@ -163,7 +163,6 @@ static int __init processor_probe(struct parisc_device *dev)
+ 	if (cpuid)
+ 		memset(p, 0, sizeof(struct cpuinfo_parisc));
+ 
+-	p->loops_per_jiffy = loops_per_jiffy;
+ 	p->dev = dev;		/* Save IODC data in case we need it */
+ 	p->hpa = dev->hpa.start;	/* save CPU hpa */
+ 	p->cpuid = cpuid;	/* save CPU id */
+@@ -373,10 +372,18 @@ int
+ show_cpuinfo (struct seq_file *m, void *v)
+ {
+ 	unsigned long cpu;
++	char cpu_name[60], *p;
++
++	/* strip PA path from CPU name to not confuse lscpu */
++	strlcpy(cpu_name, per_cpu(cpu_data, 0).dev->name, sizeof(cpu_name));
++	p = strrchr(cpu_name, '[');
++	if (p)
++		*(--p) = 0;
+ 
+ 	for_each_online_cpu(cpu) {
+-		const struct cpuinfo_parisc *cpuinfo = &per_cpu(cpu_data, cpu);
+ #ifdef CONFIG_SMP
++		const struct cpuinfo_parisc *cpuinfo = &per_cpu(cpu_data, cpu);
++
+ 		if (0 == cpuinfo->hpa)
+ 			continue;
+ #endif
+@@ -421,8 +428,7 @@ show_cpuinfo (struct seq_file *m, void *v)
+ 
+ 		seq_printf(m, "model\t\t: %s - %s\n",
+ 				 boot_cpu_data.pdc.sys_model_name,
+-				 cpuinfo->dev ?
+-				 cpuinfo->dev->name : "Unknown");
++				 cpu_name);
+ 
+ 		seq_printf(m, "hversion\t: 0x%08x\n"
+ 			        "sversion\t: 0x%08x\n",
+@@ -433,8 +439,8 @@ show_cpuinfo (struct seq_file *m, void *v)
+ 		show_cache_info(m);
+ 
+ 		seq_printf(m, "bogomips\t: %lu.%02lu\n",
+-			     cpuinfo->loops_per_jiffy / (500000 / HZ),
+-			     (cpuinfo->loops_per_jiffy / (5000 / HZ)) % 100);
++			     loops_per_jiffy / (500000 / HZ),
++			     loops_per_jiffy / (5000 / HZ) % 100);
+ 
+ 		seq_printf(m, "software id\t: %ld\n\n",
+ 				boot_cpu_data.pdc.model.sw_id);
+diff --git a/arch/powerpc/include/asm/lppaca.h b/arch/powerpc/include/asm/lppaca.h
+index c390ec377baed..1412e643122e4 100644
+--- a/arch/powerpc/include/asm/lppaca.h
++++ b/arch/powerpc/include/asm/lppaca.h
+@@ -45,6 +45,7 @@
+ #include <asm/types.h>
+ #include <asm/mmu.h>
+ #include <asm/firmware.h>
++#include <asm/paca.h>
+ 
+ /*
+  * The lppaca is the "virtual processor area" registered with the hypervisor,
+@@ -123,13 +124,23 @@ struct lppaca {
+  */
+ #define LPPACA_OLD_SHARED_PROC		2
+ 
+-static inline bool lppaca_shared_proc(struct lppaca *l)
++#ifdef CONFIG_PPC_PSERIES
++/*
++ * All CPUs should have the same shared proc value, so directly access the PACA
++ * to avoid false positives from DEBUG_PREEMPT.
++ */
++static inline bool lppaca_shared_proc(void)
+ {
++	struct lppaca *l = local_paca->lppaca_ptr;
++
+ 	if (!firmware_has_feature(FW_FEATURE_SPLPAR))
+ 		return false;
+ 	return !!(l->__old_status & LPPACA_OLD_SHARED_PROC);
+ }
+ 
++#define get_lppaca()	(get_paca()->lppaca_ptr)
++#endif
++
+ /*
+  * SLB shadow buffer structure as defined in the PAPR.  The save_area
+  * contains adjacent ESID and VSID pairs for each shadowed SLB.  The
+diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
+index 9454d29ff4b47..555aa3580e160 100644
+--- a/arch/powerpc/include/asm/paca.h
++++ b/arch/powerpc/include/asm/paca.h
+@@ -14,7 +14,6 @@
+ 
+ #include <linux/string.h>
+ #include <asm/types.h>
+-#include <asm/lppaca.h>
+ #include <asm/mmu.h>
+ #include <asm/page.h>
+ #ifdef CONFIG_PPC_BOOK3E
+@@ -45,14 +44,11 @@ extern unsigned int debug_smp_processor_id(void); /* from linux/smp.h */
+ #define get_paca()	local_paca
+ #endif
+ 
+-#ifdef CONFIG_PPC_PSERIES
+-#define get_lppaca()	(get_paca()->lppaca_ptr)
+-#endif
+-
+ #define get_slb_shadow()	(get_paca()->slb_shadow_ptr)
+ 
+ struct task_struct;
+ struct rtas_args;
++struct lppaca;
+ 
+ /*
+  * Defines the layout of the paca.
+diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
+index 588bfb9a0579c..546c725013789 100644
+--- a/arch/powerpc/include/asm/paravirt.h
++++ b/arch/powerpc/include/asm/paravirt.h
+@@ -6,6 +6,7 @@
+ #include <asm/smp.h>
+ #ifdef CONFIG_PPC64
+ #include <asm/paca.h>
++#include <asm/lppaca.h>
+ #include <asm/hvcall.h>
+ #endif
+ 
+diff --git a/arch/powerpc/include/asm/plpar_wrappers.h b/arch/powerpc/include/asm/plpar_wrappers.h
+index ece84a430701f..7796cc05e2c8d 100644
+--- a/arch/powerpc/include/asm/plpar_wrappers.h
++++ b/arch/powerpc/include/asm/plpar_wrappers.h
+@@ -9,6 +9,7 @@
+ 
+ #include <asm/hvcall.h>
+ #include <asm/paca.h>
++#include <asm/lppaca.h>
+ #include <asm/page.h>
+ 
+ static inline long poll_pending(void)
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index 1a5ba26aab156..935ce1bec43fa 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -642,6 +642,7 @@ int __init fadump_reserve_mem(void)
+ 	return ret;
+ error_out:
+ 	fw_dump.fadump_enabled = 0;
++	fw_dump.reserve_dump_area_size = 0;
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
+index 6806eefa52ceb..370635107f1c6 100644
+--- a/arch/powerpc/kernel/iommu.c
++++ b/arch/powerpc/kernel/iommu.c
+@@ -133,17 +133,28 @@ static int fail_iommu_bus_notify(struct notifier_block *nb,
+ 	return 0;
+ }
+ 
+-static struct notifier_block fail_iommu_bus_notifier = {
++/*
++ * PCI and VIO buses need separate notifier_block structs, since they're linked
++ * list nodes.  Sharing a notifier_block would mean that any notifiers later
++ * registered for PCI buses would also get called by VIO buses and vice versa.
++ */
++static struct notifier_block fail_iommu_pci_bus_notifier = {
+ 	.notifier_call = fail_iommu_bus_notify
+ };
+ 
++#ifdef CONFIG_IBMVIO
++static struct notifier_block fail_iommu_vio_bus_notifier = {
++	.notifier_call = fail_iommu_bus_notify
++};
++#endif
++
+ static int __init fail_iommu_setup(void)
+ {
+ #ifdef CONFIG_PCI
+-	bus_register_notifier(&pci_bus_type, &fail_iommu_bus_notifier);
++	bus_register_notifier(&pci_bus_type, &fail_iommu_pci_bus_notifier);
+ #endif
+ #ifdef CONFIG_IBMVIO
+-	bus_register_notifier(&vio_bus_type, &fail_iommu_bus_notifier);
++	bus_register_notifier(&vio_bus_type, &fail_iommu_vio_bus_notifier);
+ #endif
+ 
+ 	return 0;
+diff --git a/arch/powerpc/kvm/book3s_hv_ras.c b/arch/powerpc/kvm/book3s_hv_ras.c
+index 6028628ea3acf..7c693b3ee34b9 100644
+--- a/arch/powerpc/kvm/book3s_hv_ras.c
++++ b/arch/powerpc/kvm/book3s_hv_ras.c
+@@ -9,6 +9,7 @@
+ #include <linux/kvm.h>
+ #include <linux/kvm_host.h>
+ #include <linux/kernel.h>
++#include <asm/lppaca.h>
+ #include <asm/opal.h>
+ #include <asm/mce.h>
+ #include <asm/machdep.h>
+diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
+index c30fcbfa0e326..ccf0a876a7bd0 100644
+--- a/arch/powerpc/mm/book3s64/slb.c
++++ b/arch/powerpc/mm/book3s64/slb.c
+@@ -13,6 +13,7 @@
+ #include <asm/mmu.h>
+ #include <asm/mmu_context.h>
+ #include <asm/paca.h>
++#include <asm/lppaca.h>
+ #include <asm/ppc-opcode.h>
+ #include <asm/cputable.h>
+ #include <asm/cacheflush.h>
+diff --git a/arch/powerpc/perf/core-fsl-emb.c b/arch/powerpc/perf/core-fsl-emb.c
+index ee721f420a7ba..1a53ab08447cb 100644
+--- a/arch/powerpc/perf/core-fsl-emb.c
++++ b/arch/powerpc/perf/core-fsl-emb.c
+@@ -645,7 +645,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
+ 	struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
+ 	struct perf_event *event;
+ 	unsigned long val;
+-	int found = 0;
+ 
+ 	for (i = 0; i < ppmu->n_counter; ++i) {
+ 		event = cpuhw->event[i];
+@@ -654,7 +653,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
+ 		if ((int)val < 0) {
+ 			if (event) {
+ 				/* event has overflowed */
+-				found = 1;
+ 				record_and_restart(event, val, regs);
+ 			} else {
+ 				/*
+@@ -672,11 +670,13 @@ static void perf_event_interrupt(struct pt_regs *regs)
+ 	isync();
+ }
+ 
+-void hw_perf_event_setup(int cpu)
++static int fsl_emb_pmu_prepare_cpu(unsigned int cpu)
+ {
+ 	struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu);
+ 
+ 	memset(cpuhw, 0, sizeof(*cpuhw));
++
++	return 0;
+ }
+ 
+ int register_fsl_emb_pmu(struct fsl_emb_pmu *pmu)
+@@ -689,6 +689,8 @@ int register_fsl_emb_pmu(struct fsl_emb_pmu *pmu)
+ 		pmu->name);
+ 
+ 	perf_pmu_register(&fsl_emb_pmu, "cpu", PERF_TYPE_RAW);
++	cpuhp_setup_state(CPUHP_PERF_POWER, "perf/powerpc:prepare",
++			  fsl_emb_pmu_prepare_cpu, NULL);
+ 
+ 	return 0;
+ }
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index 28396a7e77d6f..68f3b082245e0 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -637,16 +637,8 @@ static const struct proc_ops vcpudispatch_stats_freq_proc_ops = {
+ 
+ static int __init vcpudispatch_stats_procfs_init(void)
+ {
+-	/*
+-	 * Avoid smp_processor_id while preemptible. All CPUs should have
+-	 * the same value for lppaca_shared_proc.
+-	 */
+-	preempt_disable();
+-	if (!lppaca_shared_proc(get_lppaca())) {
+-		preempt_enable();
++	if (!lppaca_shared_proc())
+ 		return 0;
+-	}
+-	preempt_enable();
+ 
+ 	if (!proc_create("powerpc/vcpudispatch_stats", 0600, NULL,
+ 					&vcpudispatch_stats_proc_ops))
+diff --git a/arch/powerpc/platforms/pseries/lparcfg.c b/arch/powerpc/platforms/pseries/lparcfg.c
+index d3517e498512f..a7d4e25ae82a1 100644
+--- a/arch/powerpc/platforms/pseries/lparcfg.c
++++ b/arch/powerpc/platforms/pseries/lparcfg.c
+@@ -205,7 +205,7 @@ static void parse_ppp_data(struct seq_file *m)
+ 	           ppp_data.active_system_procs);
+ 
+ 	/* pool related entries are appropriate for shared configs */
+-	if (lppaca_shared_proc(get_lppaca())) {
++	if (lppaca_shared_proc()) {
+ 		unsigned long pool_idle_time, pool_procs;
+ 
+ 		seq_printf(m, "pool=%d\n", ppp_data.pool_num);
+@@ -529,7 +529,7 @@ static int pseries_lparcfg_data(struct seq_file *m, void *v)
+ 		   partition_potential_processors);
+ 
+ 	seq_printf(m, "shared_processor_mode=%d\n",
+-		   lppaca_shared_proc(get_lppaca()));
++		   lppaca_shared_proc());
+ 
+ #ifdef CONFIG_PPC_BOOK3S_64
+ 	seq_printf(m, "slb_size=%d\n", mmu_slb_size);
+diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
+index 0eac9ca782c21..822be2680b792 100644
+--- a/arch/powerpc/platforms/pseries/setup.c
++++ b/arch/powerpc/platforms/pseries/setup.c
+@@ -800,7 +800,7 @@ static void __init pSeries_setup_arch(void)
+ 	if (firmware_has_feature(FW_FEATURE_LPAR)) {
+ 		vpa_init(boot_cpuid);
+ 
+-		if (lppaca_shared_proc(get_lppaca())) {
++		if (lppaca_shared_proc()) {
+ 			static_branch_enable(&shared_processor);
+ 			pv_spinlocks_init();
+ 		}
+diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
+index 2872b66d9fec7..3de2adc0a8074 100644
+--- a/arch/powerpc/xmon/xmon.c
++++ b/arch/powerpc/xmon/xmon.c
+@@ -58,6 +58,7 @@
+ #ifdef CONFIG_PPC64
+ #include <asm/hvcall.h>
+ #include <asm/paca.h>
++#include <asm/lppaca.h>
+ #endif
+ 
+ #include "nonstdio.h"
+diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
+index f3caeb17c85b9..a6727ad58d65a 100644
+--- a/arch/s390/crypto/paes_s390.c
++++ b/arch/s390/crypto/paes_s390.c
+@@ -34,7 +34,7 @@
+  * and padding is also possible, the limits need to be generous.
+  */
+ #define PAES_MIN_KEYSIZE 16
+-#define PAES_MAX_KEYSIZE 320
++#define PAES_MAX_KEYSIZE MAXEP11AESKEYBLOBSIZE
+ 
+ static u8 *ctrblk;
+ static DEFINE_MUTEX(ctrblk_lock);
+diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
+index 6da06905ddce5..c469e8848d659 100644
+--- a/arch/s390/kernel/ipl.c
++++ b/arch/s390/kernel/ipl.c
+@@ -501,6 +501,8 @@ static struct attribute_group ipl_ccw_attr_group_lpar = {
+ 
+ static struct attribute *ipl_unknown_attrs[] = {
+ 	&sys_ipl_type_attr.attr,
++	&sys_ipl_secure_attr.attr,
++	&sys_ipl_has_secure_attr.attr,
+ 	NULL,
+ };
+ 
+diff --git a/arch/sh/boards/mach-ap325rxa/setup.c b/arch/sh/boards/mach-ap325rxa/setup.c
+index bac8a058ebd7c..05bd42dde107b 100644
+--- a/arch/sh/boards/mach-ap325rxa/setup.c
++++ b/arch/sh/boards/mach-ap325rxa/setup.c
+@@ -530,7 +530,7 @@ static int __init ap325rxa_devices_setup(void)
+ 	device_initialize(&ap325rxa_ceu_device.dev);
+ 	dma_declare_coherent_memory(&ap325rxa_ceu_device.dev,
+ 			ceu_dma_membase, ceu_dma_membase,
+-			ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1);
++			CEU_BUFFER_MEMORY_SIZE);
+ 
+ 	platform_device_add(&ap325rxa_ceu_device);
+ 
+diff --git a/arch/sh/boards/mach-ecovec24/setup.c b/arch/sh/boards/mach-ecovec24/setup.c
+index bab91a99124e1..9730a992dab33 100644
+--- a/arch/sh/boards/mach-ecovec24/setup.c
++++ b/arch/sh/boards/mach-ecovec24/setup.c
+@@ -1454,15 +1454,13 @@ static int __init arch_setup(void)
+ 	device_initialize(&ecovec_ceu_devices[0]->dev);
+ 	dma_declare_coherent_memory(&ecovec_ceu_devices[0]->dev,
+ 				    ceu0_dma_membase, ceu0_dma_membase,
+-				    ceu0_dma_membase +
+-				    CEU_BUFFER_MEMORY_SIZE - 1);
++				    CEU_BUFFER_MEMORY_SIZE);
+ 	platform_device_add(ecovec_ceu_devices[0]);
+ 
+ 	device_initialize(&ecovec_ceu_devices[1]->dev);
+ 	dma_declare_coherent_memory(&ecovec_ceu_devices[1]->dev,
+ 				    ceu1_dma_membase, ceu1_dma_membase,
+-				    ceu1_dma_membase +
+-				    CEU_BUFFER_MEMORY_SIZE - 1);
++				    CEU_BUFFER_MEMORY_SIZE);
+ 	platform_device_add(ecovec_ceu_devices[1]);
+ 
+ 	gpiod_add_lookup_table(&cn12_power_gpiod_table);
+diff --git a/arch/sh/boards/mach-kfr2r09/setup.c b/arch/sh/boards/mach-kfr2r09/setup.c
+index eeb5ce341efdd..4a1caa3e7cf5a 100644
+--- a/arch/sh/boards/mach-kfr2r09/setup.c
++++ b/arch/sh/boards/mach-kfr2r09/setup.c
+@@ -603,7 +603,7 @@ static int __init kfr2r09_devices_setup(void)
+ 	device_initialize(&kfr2r09_ceu_device.dev);
+ 	dma_declare_coherent_memory(&kfr2r09_ceu_device.dev,
+ 			ceu_dma_membase, ceu_dma_membase,
+-			ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1);
++			CEU_BUFFER_MEMORY_SIZE);
+ 
+ 	platform_device_add(&kfr2r09_ceu_device);
+ 
+diff --git a/arch/sh/boards/mach-migor/setup.c b/arch/sh/boards/mach-migor/setup.c
+index 6703a2122c0d6..bd4ccd9f8dd06 100644
+--- a/arch/sh/boards/mach-migor/setup.c
++++ b/arch/sh/boards/mach-migor/setup.c
+@@ -604,7 +604,7 @@ static int __init migor_devices_setup(void)
+ 	device_initialize(&migor_ceu_device.dev);
+ 	dma_declare_coherent_memory(&migor_ceu_device.dev,
+ 			ceu_dma_membase, ceu_dma_membase,
+-			ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1);
++			CEU_BUFFER_MEMORY_SIZE);
+ 
+ 	platform_device_add(&migor_ceu_device);
+ 
+diff --git a/arch/sh/boards/mach-se/7724/setup.c b/arch/sh/boards/mach-se/7724/setup.c
+index 8d6541ba01865..edc7712e4a804 100644
+--- a/arch/sh/boards/mach-se/7724/setup.c
++++ b/arch/sh/boards/mach-se/7724/setup.c
+@@ -940,15 +940,13 @@ static int __init devices_setup(void)
+ 	device_initialize(&ms7724se_ceu_devices[0]->dev);
+ 	dma_declare_coherent_memory(&ms7724se_ceu_devices[0]->dev,
+ 				    ceu0_dma_membase, ceu0_dma_membase,
+-				    ceu0_dma_membase +
+-				    CEU_BUFFER_MEMORY_SIZE - 1);
++				    CEU_BUFFER_MEMORY_SIZE);
+ 	platform_device_add(ms7724se_ceu_devices[0]);
+ 
+ 	device_initialize(&ms7724se_ceu_devices[1]->dev);
+ 	dma_declare_coherent_memory(&ms7724se_ceu_devices[1]->dev,
+ 				    ceu1_dma_membase, ceu1_dma_membase,
+-				    ceu1_dma_membase +
+-				    CEU_BUFFER_MEMORY_SIZE - 1);
++				    CEU_BUFFER_MEMORY_SIZE);
+ 	platform_device_add(ms7724se_ceu_devices[1]);
+ 
+ 	return platform_add_devices(ms7724se_devices,
+diff --git a/arch/um/configs/i386_defconfig b/arch/um/configs/i386_defconfig
+index fb51bd206dbed..4d7f99a02c1eb 100644
+--- a/arch/um/configs/i386_defconfig
++++ b/arch/um/configs/i386_defconfig
+@@ -35,6 +35,7 @@ CONFIG_TTY_CHAN=y
+ CONFIG_XTERM_CHAN=y
+ CONFIG_CON_CHAN="pts"
+ CONFIG_SSL_CHAN="pts"
++CONFIG_SOUND=m
+ CONFIG_UML_SOUND=m
+ CONFIG_DEVTMPFS=y
+ CONFIG_DEVTMPFS_MOUNT=y
+diff --git a/arch/um/configs/x86_64_defconfig b/arch/um/configs/x86_64_defconfig
+index 477b873174243..4bdd83008f623 100644
+--- a/arch/um/configs/x86_64_defconfig
++++ b/arch/um/configs/x86_64_defconfig
+@@ -33,6 +33,7 @@ CONFIG_TTY_CHAN=y
+ CONFIG_XTERM_CHAN=y
+ CONFIG_CON_CHAN="pts"
+ CONFIG_SSL_CHAN="pts"
++CONFIG_SOUND=m
+ CONFIG_UML_SOUND=m
+ CONFIG_DEVTMPFS=y
+ CONFIG_DEVTMPFS_MOUNT=y
+diff --git a/arch/um/drivers/Kconfig b/arch/um/drivers/Kconfig
+index 2e7b8e0e7194b..01dfbd57e29d7 100644
+--- a/arch/um/drivers/Kconfig
++++ b/arch/um/drivers/Kconfig
+@@ -104,24 +104,14 @@ config SSL_CHAN
+ 
+ config UML_SOUND
+ 	tristate "Sound support"
++	depends on SOUND
++	select SOUND_OSS_CORE
+ 	help
+ 	  This option enables UML sound support.  If enabled, it will pull in
+-	  soundcore and the UML hostaudio relay, which acts as a intermediary
++	  the UML hostaudio relay, which acts as a intermediary
+ 	  between the host's dsp and mixer devices and the UML sound system.
+ 	  It is safe to say 'Y' here.
+ 
+-config SOUND
+-	tristate
+-	default UML_SOUND
+-
+-config SOUND_OSS_CORE
+-	bool
+-	default UML_SOUND
+-
+-config HOSTAUDIO
+-	tristate
+-	default UML_SOUND
+-
+ endmenu
+ 
+ menu "UML Network Devices"
+diff --git a/arch/um/drivers/Makefile b/arch/um/drivers/Makefile
+index 2a249f6194671..207d62ab519df 100644
+--- a/arch/um/drivers/Makefile
++++ b/arch/um/drivers/Makefile
+@@ -52,7 +52,7 @@ obj-$(CONFIG_UML_NET) += net.o
+ obj-$(CONFIG_MCONSOLE) += mconsole.o
+ obj-$(CONFIG_MMAPPER) += mmapper_kern.o 
+ obj-$(CONFIG_BLK_DEV_UBD) += ubd.o 
+-obj-$(CONFIG_HOSTAUDIO) += hostaudio.o
++obj-$(CONFIG_UML_SOUND) += hostaudio.o
+ obj-$(CONFIG_NULL_CHAN) += null.o 
+ obj-$(CONFIG_PORT_CHAN) += port.o
+ obj-$(CONFIG_PTY_CHAN) += pty.o
+diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
+index b55e2007b30c6..473d84eb5dda9 100644
+--- a/arch/x86/boot/compressed/head_64.S
++++ b/arch/x86/boot/compressed/head_64.S
+@@ -454,11 +454,25 @@ SYM_CODE_START(startup_64)
+ 	/* Save the trampoline address in RCX */
+ 	movq	%rax, %rcx
+ 
++	/* Set up 32-bit addressable stack */
++	leaq	TRAMPOLINE_32BIT_STACK_END(%rcx), %rsp
++
++	/*
++	 * Preserve live 64-bit registers on the stack: this is necessary
++	 * because the architecture does not guarantee that GPRs will retain
++	 * their full 64-bit values across a 32-bit mode switch.
++	 */
++	pushq	%rbp
++	pushq	%rbx
++	pushq	%rsi
++
+ 	/*
+-	 * Load the address of trampoline_return() into RDI.
+-	 * It will be used by the trampoline to return to the main code.
++	 * Push the 64-bit address of trampoline_return() onto the new stack.
++	 * It will be used by the trampoline to return to the main code. Due to
++	 * the 32-bit mode switch, it cannot be kept it in a register either.
+ 	 */
+ 	leaq	trampoline_return(%rip), %rdi
++	pushq	%rdi
+ 
+ 	/* Switch to compatibility mode (CS.L = 0 CS.D = 1) via far return */
+ 	pushq	$__KERNEL32_CS
+@@ -466,6 +480,11 @@ SYM_CODE_START(startup_64)
+ 	pushq	%rax
+ 	lretq
+ trampoline_return:
++	/* Restore live 64-bit registers */
++	popq	%rsi
++	popq	%rbx
++	popq	%rbp
++
+ 	/* Restore the stack, the 32-bit trampoline uses its own stack */
+ 	leaq	rva(boot_stack_end)(%rbx), %rsp
+ 
+@@ -586,7 +605,7 @@ SYM_FUNC_END(.Lrelocated)
+ /*
+  * This is the 32-bit trampoline that will be copied over to low memory.
+  *
+- * RDI contains the return address (might be above 4G).
++ * Return address is at the top of the stack (might be above 4G).
+  * ECX contains the base address of the trampoline memory.
+  * Non zero RDX means trampoline needs to enable 5-level paging.
+  */
+@@ -596,9 +615,6 @@ SYM_CODE_START(trampoline_32bit_src)
+ 	movl	%eax, %ds
+ 	movl	%eax, %ss
+ 
+-	/* Set up new stack */
+-	leal	TRAMPOLINE_32BIT_STACK_END(%ecx), %esp
+-
+ 	/* Disable paging */
+ 	movl	%cr0, %eax
+ 	btrl	$X86_CR0_PG_BIT, %eax
+@@ -658,7 +674,7 @@ SYM_CODE_END(trampoline_32bit_src)
+ 	.code64
+ SYM_FUNC_START_LOCAL_NOALIGN(.Lpaging_enabled)
+ 	/* Return from the trampoline */
+-	jmp	*%rdi
++	retq
+ SYM_FUNC_END(.Lpaging_enabled)
+ 
+ 	/*
+diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
+index 394757ee030a6..85baa72cb8947 100644
+--- a/arch/x86/include/asm/pgtable_types.h
++++ b/arch/x86/include/asm/pgtable_types.h
+@@ -125,11 +125,12 @@
+  * instance, and is *not* included in this mask since
+  * pte_modify() does modify it.
+  */
+-#define _PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |		\
+-			 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |	\
+-			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC |  \
+-			 _PAGE_UFFD_WP)
+-#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
++#define _COMMON_PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |	       \
++				 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |\
++				 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC | \
++				 _PAGE_UFFD_WP)
++#define _PAGE_CHG_MASK	(_COMMON_PAGE_CHG_MASK | _PAGE_PAT)
++#define _HPAGE_CHG_MASK (_COMMON_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_PAT_LARGE)
+ 
+ /*
+  * The cache modes defined here are used to translate between pure SW usage
+diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
+index 8eefa3386d8ce..331474296e6f1 100644
+--- a/arch/x86/include/asm/virtext.h
++++ b/arch/x86/include/asm/virtext.h
+@@ -95,12 +95,6 @@ static inline int cpu_has_svm(const char **msg)
+ 		return 0;
+ 	}
+ 
+-	if (boot_cpu_data.extended_cpuid_level < SVM_CPUID_FUNC) {
+-		if (msg)
+-			*msg = "can't execute cpuid_8000000a";
+-		return 0;
+-	}
+-
+ 	if (!boot_cpu_has(X86_FEATURE_SVM)) {
+ 		if (msg)
+ 			*msg = "svm not available";
+diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c
+index 660270359d393..166d9991e7111 100644
+--- a/arch/x86/kernel/apm_32.c
++++ b/arch/x86/kernel/apm_32.c
+@@ -237,12 +237,6 @@
+ extern int (*console_blank_hook)(int);
+ #endif
+ 
+-/*
+- * The apm_bios device is one of the misc char devices.
+- * This is its minor number.
+- */
+-#define	APM_MINOR_DEV	134
+-
+ /*
+  * Various options can be changed at boot time as follows:
+  * (We allow underscores for compatibility with the modules code)
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index c1ff75ad11358..4ecc6072e9a48 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1145,11 +1145,11 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL_X,	X86_STEPPING_ANY,		MMIO),
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
+ 	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED | GDS),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED | GDS),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(CANNONLAKE_L,	X86_STEPPING_ANY,		RETBLEED),
+ 	VULNBL_INTEL_STEPPINGS(ICELAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS),
+ 	VULNBL_INTEL_STEPPINGS(ICELAKE_D,	X86_STEPPING_ANY,		MMIO | GDS),
+diff --git a/arch/xtensa/include/asm/core.h b/arch/xtensa/include/asm/core.h
+index 5590b0f688376..a4e40166ff4bb 100644
+--- a/arch/xtensa/include/asm/core.h
++++ b/arch/xtensa/include/asm/core.h
+@@ -26,4 +26,13 @@
+ #define XCHAL_SPANNING_WAY 0
+ #endif
+ 
++#ifndef XCHAL_HW_MIN_VERSION
++#if defined(XCHAL_HW_MIN_VERSION_MAJOR) && defined(XCHAL_HW_MIN_VERSION_MINOR)
++#define XCHAL_HW_MIN_VERSION (XCHAL_HW_MIN_VERSION_MAJOR * 100 + \
++			      XCHAL_HW_MIN_VERSION_MINOR)
++#else
++#define XCHAL_HW_MIN_VERSION 0
++#endif
++#endif
++
+ #endif
+diff --git a/arch/xtensa/kernel/perf_event.c b/arch/xtensa/kernel/perf_event.c
+index a0d05c8598d0f..183618090d05b 100644
+--- a/arch/xtensa/kernel/perf_event.c
++++ b/arch/xtensa/kernel/perf_event.c
+@@ -13,17 +13,26 @@
+ #include <linux/perf_event.h>
+ #include <linux/platform_device.h>
+ 
++#include <asm/core.h>
+ #include <asm/processor.h>
+ #include <asm/stacktrace.h>
+ 
++#define XTENSA_HWVERSION_RG_2015_0	260000
++
++#if XCHAL_HW_MIN_VERSION >= XTENSA_HWVERSION_RG_2015_0
++#define XTENSA_PMU_ERI_BASE		0x00101000
++#else
++#define XTENSA_PMU_ERI_BASE		0x00001000
++#endif
++
+ /* Global control/status for all perf counters */
+-#define XTENSA_PMU_PMG			0x1000
++#define XTENSA_PMU_PMG			XTENSA_PMU_ERI_BASE
+ /* Perf counter values */
+-#define XTENSA_PMU_PM(i)		(0x1080 + (i) * 4)
++#define XTENSA_PMU_PM(i)		(XTENSA_PMU_ERI_BASE + 0x80 + (i) * 4)
+ /* Perf counter control registers */
+-#define XTENSA_PMU_PMCTRL(i)		(0x1100 + (i) * 4)
++#define XTENSA_PMU_PMCTRL(i)		(XTENSA_PMU_ERI_BASE + 0x100 + (i) * 4)
+ /* Perf counter status registers */
+-#define XTENSA_PMU_PMSTAT(i)		(0x1180 + (i) * 4)
++#define XTENSA_PMU_PMSTAT(i)		(XTENSA_PMU_ERI_BASE + 0x180 + (i) * 4)
+ 
+ #define XTENSA_PMU_PMG_PMEN		0x1
+ 
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 42dca17dc2d97..5d422e725b267 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -16,6 +16,7 @@
+ #include <linux/rtnetlink.h>
+ #include <linux/slab.h>
+ #include <linux/string.h>
++#include <linux/workqueue.h>
+ 
+ #include "internal.h"
+ 
+@@ -68,15 +69,26 @@ static void crypto_free_instance(struct crypto_instance *inst)
+ 	inst->alg.cra_type->free(inst);
+ }
+ 
+-static void crypto_destroy_instance(struct crypto_alg *alg)
++static void crypto_destroy_instance_workfn(struct work_struct *w)
+ {
+-	struct crypto_instance *inst = (void *)alg;
++	struct crypto_instance *inst = container_of(w, struct crypto_instance,
++						    free_work);
+ 	struct crypto_template *tmpl = inst->tmpl;
+ 
+ 	crypto_free_instance(inst);
+ 	crypto_tmpl_put(tmpl);
+ }
+ 
++static void crypto_destroy_instance(struct crypto_alg *alg)
++{
++	struct crypto_instance *inst = container_of(alg,
++						    struct crypto_instance,
++						    alg);
++
++	INIT_WORK(&inst->free_work, crypto_destroy_instance_workfn);
++	schedule_work(&inst->free_work);
++}
++
+ /*
+  * This function adds a spawn to the list secondary_spawns which
+  * will be used at the end of crypto_remove_spawns to unregister
+diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c
+index ae450eb8be144..b8135c38f584f 100644
+--- a/crypto/asymmetric_keys/x509_public_key.c
++++ b/crypto/asymmetric_keys/x509_public_key.c
+@@ -132,6 +132,11 @@ int x509_check_for_self_signed(struct x509_certificate *cert)
+ 	if (strcmp(cert->pub->pkey_algo, cert->sig->pkey_algo) != 0)
+ 		goto out;
+ 
++	if (cert->unsupported_sig) {
++		ret = 0;
++		goto out;
++	}
++
+ 	ret = public_key_verify_signature(cert->pub, cert->sig);
+ 	if (ret < 0) {
+ 		if (ret == -ENOPKG) {
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 9bdb5bd5fda63..8678e162181f4 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -1457,33 +1457,35 @@ static struct platform_driver ghes_platform_driver = {
+ 	.remove		= ghes_remove,
+ };
+ 
+-static int __init ghes_init(void)
++void __init ghes_init(void)
+ {
+ 	int rc;
+ 
++	sdei_init();
++
+ 	if (acpi_disabled)
+-		return -ENODEV;
++		return;
+ 
+ 	switch (hest_disable) {
+ 	case HEST_NOT_FOUND:
+-		return -ENODEV;
++		return;
+ 	case HEST_DISABLED:
+ 		pr_info(GHES_PFX "HEST is not enabled!\n");
+-		return -EINVAL;
++		return;
+ 	default:
+ 		break;
+ 	}
+ 
+ 	if (ghes_disable) {
+ 		pr_info(GHES_PFX "GHES is not enabled!\n");
+-		return -EINVAL;
++		return;
+ 	}
+ 
+ 	ghes_nmi_init_cxt();
+ 
+ 	rc = platform_driver_register(&ghes_platform_driver);
+ 	if (rc)
+-		goto err;
++		return;
+ 
+ 	rc = apei_osc_setup();
+ 	if (rc == 0 && osc_sb_apei_support_acked)
+@@ -1494,9 +1496,4 @@ static int __init ghes_init(void)
+ 		pr_info(GHES_PFX "APEI firmware first mode is enabled by APEI bit.\n");
+ 	else
+ 		pr_info(GHES_PFX "Failed to enable APEI firmware first mode.\n");
+-
+-	return 0;
+-err:
+-	return rc;
+ }
+-device_initcall(ghes_init);
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index 5e14288fcabe9..60dfe63301d00 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -1252,6 +1252,8 @@ static int __init acpi_init(void)
+ 
+ 	pci_mmcfg_late_init();
+ 	acpi_iort_init();
++	acpi_hest_init();
++	ghes_init();
+ 	acpi_scan_init();
+ 	acpi_ec_init();
+ 	acpi_debugfs_init();
+diff --git a/drivers/acpi/pci_root.c b/drivers/acpi/pci_root.c
+index c12b5fb3e8fba..d972ea057a035 100644
+--- a/drivers/acpi/pci_root.c
++++ b/drivers/acpi/pci_root.c
+@@ -20,8 +20,6 @@
+ #include <linux/slab.h>
+ #include <linux/dmi.h>
+ #include <linux/platform_data/x86/apple.h>
+-#include <acpi/apei.h>	/* for acpi_hest_init() */
+-
+ #include "internal.h"
+ 
+ #define ACPI_PCI_ROOT_CLASS		"pci_bridge"
+@@ -950,7 +948,6 @@ out_release_info:
+ 
+ void __init acpi_pci_root_init(void)
+ {
+-	acpi_hest_init();
+ 	if (acpi_pci_disabled)
+ 		return;
+ 
+diff --git a/drivers/amba/bus.c b/drivers/amba/bus.c
+index 47c72447ccd59..52ab582930caa 100644
+--- a/drivers/amba/bus.c
++++ b/drivers/amba/bus.c
+@@ -363,6 +363,7 @@ static void amba_device_release(struct device *dev)
+ {
+ 	struct amba_device *d = to_amba_device(dev);
+ 
++	of_node_put(d->dev.of_node);
+ 	if (d->res.parent)
+ 		release_resource(&d->res);
+ 	kfree(d);
+diff --git a/drivers/ata/pata_arasan_cf.c b/drivers/ata/pata_arasan_cf.c
+index 63f39440a9b42..4ba02f082f962 100644
+--- a/drivers/ata/pata_arasan_cf.c
++++ b/drivers/ata/pata_arasan_cf.c
+@@ -528,7 +528,8 @@ static void data_xfer(struct work_struct *work)
+ 	/* dma_request_channel may sleep, so calling from process context */
+ 	acdev->dma_chan = dma_request_chan(acdev->host->dev, "data");
+ 	if (IS_ERR(acdev->dma_chan)) {
+-		dev_err(acdev->host->dev, "Unable to get dma_chan\n");
++		dev_err_probe(acdev->host->dev, PTR_ERR(acdev->dma_chan),
++			      "Unable to get dma_chan\n");
+ 		acdev->dma_chan = NULL;
+ 		goto chan_request_fail;
+ 	}
+diff --git a/drivers/ata/pata_ftide010.c b/drivers/ata/pata_ftide010.c
+index 34cb104f6b43e..bc30e2f305beb 100644
+--- a/drivers/ata/pata_ftide010.c
++++ b/drivers/ata/pata_ftide010.c
+@@ -570,6 +570,7 @@ static struct platform_driver pata_ftide010_driver = {
+ };
+ module_platform_driver(pata_ftide010_driver);
+ 
++MODULE_DESCRIPTION("low level driver for Faraday Technology FTIDE010");
+ MODULE_AUTHOR("Linus Walleij <linus.walleij@linaro.org>");
+ MODULE_LICENSE("GPL");
+ MODULE_ALIAS("platform:" DRV_NAME);
+diff --git a/drivers/ata/sata_gemini.c b/drivers/ata/sata_gemini.c
+index f793564f3d787..6fd54e968d10a 100644
+--- a/drivers/ata/sata_gemini.c
++++ b/drivers/ata/sata_gemini.c
+@@ -435,6 +435,7 @@ static struct platform_driver gemini_sata_driver = {
+ };
+ module_platform_driver(gemini_sata_driver);
+ 
++MODULE_DESCRIPTION("low level driver for Cortina Systems Gemini SATA bridge");
+ MODULE_AUTHOR("Linus Walleij <linus.walleij@linaro.org>");
+ MODULE_LICENSE("GPL");
+ MODULE_ALIAS("platform:" DRV_NAME);
+diff --git a/drivers/base/regmap/regcache-rbtree.c b/drivers/base/regmap/regcache-rbtree.c
+index fabf87058d80b..ae6b8788d5f3f 100644
+--- a/drivers/base/regmap/regcache-rbtree.c
++++ b/drivers/base/regmap/regcache-rbtree.c
+@@ -277,7 +277,7 @@ static int regcache_rbtree_insert_to_block(struct regmap *map,
+ 
+ 	blk = krealloc(rbnode->block,
+ 		       blklen * map->cache_word_size,
+-		       GFP_KERNEL);
++		       map->alloc_flags);
+ 	if (!blk)
+ 		return -ENOMEM;
+ 
+@@ -286,7 +286,7 @@ static int regcache_rbtree_insert_to_block(struct regmap *map,
+ 	if (BITS_TO_LONGS(blklen) > BITS_TO_LONGS(rbnode->blklen)) {
+ 		present = krealloc(rbnode->cache_present,
+ 				   BITS_TO_LONGS(blklen) * sizeof(*present),
+-				   GFP_KERNEL);
++				   map->alloc_flags);
+ 		if (!present)
+ 			return -ENOMEM;
+ 
+@@ -320,7 +320,7 @@ regcache_rbtree_node_alloc(struct regmap *map, unsigned int reg)
+ 	const struct regmap_range *range;
+ 	int i;
+ 
+-	rbnode = kzalloc(sizeof(*rbnode), GFP_KERNEL);
++	rbnode = kzalloc(sizeof(*rbnode), map->alloc_flags);
+ 	if (!rbnode)
+ 		return NULL;
+ 
+@@ -346,13 +346,13 @@ regcache_rbtree_node_alloc(struct regmap *map, unsigned int reg)
+ 	}
+ 
+ 	rbnode->block = kmalloc_array(rbnode->blklen, map->cache_word_size,
+-				      GFP_KERNEL);
++				      map->alloc_flags);
+ 	if (!rbnode->block)
+ 		goto err_free;
+ 
+ 	rbnode->cache_present = kcalloc(BITS_TO_LONGS(rbnode->blklen),
+ 					sizeof(*rbnode->cache_present),
+-					GFP_KERNEL);
++					map->alloc_flags);
+ 	if (!rbnode->cache_present)
+ 		goto err_free_block;
+ 
+diff --git a/drivers/base/test/test_async_driver_probe.c b/drivers/base/test/test_async_driver_probe.c
+index c157a912d6739..88336f093decd 100644
+--- a/drivers/base/test/test_async_driver_probe.c
++++ b/drivers/base/test/test_async_driver_probe.c
+@@ -84,7 +84,7 @@ test_platform_device_register_node(char *name, int id, int nid)
+ 
+ 	pdev = platform_device_alloc(name, id);
+ 	if (!pdev)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	if (nid != NUMA_NO_NODE)
+ 		set_dev_node(&pdev->dev, nid);
+diff --git a/drivers/bluetooth/btsdio.c b/drivers/bluetooth/btsdio.c
+index 199e8f7d426d9..2e4ac39dd9751 100644
+--- a/drivers/bluetooth/btsdio.c
++++ b/drivers/bluetooth/btsdio.c
+@@ -355,6 +355,7 @@ static void btsdio_remove(struct sdio_func *func)
+ 	if (!data)
+ 		return;
+ 
++	cancel_work_sync(&data->work);
+ 	hdev = data->hdev;
+ 
+ 	sdio_set_drvdata(func, NULL);
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 49d5375b04f40..f99d190770204 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -1689,7 +1689,7 @@ static int btusb_switch_alt_setting(struct hci_dev *hdev, int new_alts)
+ 		 * alternate setting.
+ 		 */
+ 		spin_lock_irqsave(&data->rxlock, flags);
+-		kfree_skb(data->sco_skb);
++		dev_kfree_skb_irq(data->sco_skb);
+ 		data->sco_skb = NULL;
+ 		spin_unlock_irqrestore(&data->rxlock, flags);
+ 
+diff --git a/drivers/bluetooth/hci_nokia.c b/drivers/bluetooth/hci_nokia.c
+index 05f7f6de6863d..97da0b2bfd17e 100644
+--- a/drivers/bluetooth/hci_nokia.c
++++ b/drivers/bluetooth/hci_nokia.c
+@@ -734,7 +734,11 @@ static int nokia_bluetooth_serdev_probe(struct serdev_device *serdev)
+ 		return err;
+ 	}
+ 
+-	clk_prepare_enable(sysclk);
++	err = clk_prepare_enable(sysclk);
++	if (err) {
++		dev_err(dev, "could not enable sysclk: %d", err);
++		return err;
++	}
+ 	btdev->sysclk_speed = clk_get_rate(sysclk);
+ 	clk_disable_unprepare(sysclk);
+ 
+diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
+index 7d69b740b9f93..fe8ecd6eaa4d1 100644
+--- a/drivers/bus/mhi/host/pm.c
++++ b/drivers/bus/mhi/host/pm.c
+@@ -490,6 +490,10 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
+ 		u32 in_reset = -1;
+ 		unsigned long timeout = msecs_to_jiffies(mhi_cntrl->timeout_ms);
+ 
++		/* Skip MHI RESET if in RDDM state */
++		if (mhi_cntrl->rddm_image && mhi_get_exec_env(mhi_cntrl) == MHI_EE_RDDM)
++			goto skip_mhi_reset;
++
+ 		dev_dbg(dev, "Triggering MHI Reset in device\n");
+ 		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
+ 
+@@ -515,6 +519,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
+ 		mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
+ 	}
+ 
++skip_mhi_reset:
+ 	dev_dbg(dev,
+ 		 "Waiting for all pending event ring processing to complete\n");
+ 	mhi_event = mhi_cntrl->mhi_event;
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index fcfe4d16cc149..c8e0f8cb9aa32 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -3041,7 +3041,7 @@ static int sysc_init_static_data(struct sysc *ddata)
+ 
+ 	match = soc_device_match(sysc_soc_match);
+ 	if (match && match->data)
+-		sysc_soc->soc = (int)match->data;
++		sysc_soc->soc = (enum sysc_soc)(uintptr_t)match->data;
+ 
+ 	/* Ignore devices that are not available on HS and EMU SoCs */
+ 	if (!sysc_soc->general_purpose) {
+diff --git a/drivers/char/hw_random/iproc-rng200.c b/drivers/char/hw_random/iproc-rng200.c
+index 01583faf9893e..52c4aa66d8379 100644
+--- a/drivers/char/hw_random/iproc-rng200.c
++++ b/drivers/char/hw_random/iproc-rng200.c
+@@ -195,6 +195,8 @@ static int iproc_rng200_probe(struct platform_device *pdev)
+ 		return PTR_ERR(priv->base);
+ 	}
+ 
++	dev_set_drvdata(dev, priv);
++
+ 	priv->rng.name = "iproc-rng200";
+ 	priv->rng.read = iproc_rng200_read;
+ 	priv->rng.init = iproc_rng200_init;
+@@ -212,6 +214,28 @@ static int iproc_rng200_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static int __maybe_unused iproc_rng200_suspend(struct device *dev)
++{
++	struct iproc_rng200_dev *priv = dev_get_drvdata(dev);
++
++	iproc_rng200_cleanup(&priv->rng);
++
++	return 0;
++}
++
++static int __maybe_unused iproc_rng200_resume(struct device *dev)
++{
++	struct iproc_rng200_dev *priv =  dev_get_drvdata(dev);
++
++	iproc_rng200_init(&priv->rng);
++
++	return 0;
++}
++
++static const struct dev_pm_ops iproc_rng200_pm_ops = {
++	SET_SYSTEM_SLEEP_PM_OPS(iproc_rng200_suspend, iproc_rng200_resume)
++};
++
+ static const struct of_device_id iproc_rng200_of_match[] = {
+ 	{ .compatible = "brcm,bcm2711-rng200", },
+ 	{ .compatible = "brcm,bcm7211-rng200", },
+@@ -225,6 +249,7 @@ static struct platform_driver iproc_rng200_driver = {
+ 	.driver = {
+ 		.name		= "iproc-rng200",
+ 		.of_match_table = iproc_rng200_of_match,
++		.pm		= &iproc_rng200_pm_ops,
+ 	},
+ 	.probe		= iproc_rng200_probe,
+ };
+diff --git a/drivers/char/hw_random/nomadik-rng.c b/drivers/char/hw_random/nomadik-rng.c
+index e8f9621e79541..3774adf903a83 100644
+--- a/drivers/char/hw_random/nomadik-rng.c
++++ b/drivers/char/hw_random/nomadik-rng.c
+@@ -13,8 +13,6 @@
+ #include <linux/clk.h>
+ #include <linux/err.h>
+ 
+-static struct clk *rng_clk;
+-
+ static int nmk_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
+ {
+ 	void __iomem *base = (void __iomem *)rng->priv;
+@@ -36,21 +34,20 @@ static struct hwrng nmk_rng = {
+ 
+ static int nmk_rng_probe(struct amba_device *dev, const struct amba_id *id)
+ {
++	struct clk *rng_clk;
+ 	void __iomem *base;
+ 	int ret;
+ 
+-	rng_clk = devm_clk_get(&dev->dev, NULL);
++	rng_clk = devm_clk_get_enabled(&dev->dev, NULL);
+ 	if (IS_ERR(rng_clk)) {
+ 		dev_err(&dev->dev, "could not get rng clock\n");
+ 		ret = PTR_ERR(rng_clk);
+ 		return ret;
+ 	}
+ 
+-	clk_prepare_enable(rng_clk);
+-
+ 	ret = amba_request_regions(dev, dev->dev.init_name);
+ 	if (ret)
+-		goto out_clk;
++		return ret;
+ 	ret = -ENOMEM;
+ 	base = devm_ioremap(&dev->dev, dev->res.start,
+ 			    resource_size(&dev->res));
+@@ -64,15 +61,12 @@ static int nmk_rng_probe(struct amba_device *dev, const struct amba_id *id)
+ 
+ out_release:
+ 	amba_release_regions(dev);
+-out_clk:
+-	clk_disable_unprepare(rng_clk);
+ 	return ret;
+ }
+ 
+ static void nmk_rng_remove(struct amba_device *dev)
+ {
+ 	amba_release_regions(dev);
+-	clk_disable_unprepare(rng_clk);
+ }
+ 
+ static const struct amba_id nmk_rng_ids[] = {
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index 7fe68c680d3ee..a5418692a8182 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -2089,6 +2089,11 @@ static int try_smi_init(struct smi_info *new_smi)
+ 		new_smi->io.io_cleanup = NULL;
+ 	}
+ 
++	if (rv && new_smi->si_sm) {
++		kfree(new_smi->si_sm);
++		new_smi->si_sm = NULL;
++	}
++
+ 	return rv;
+ }
+ 
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index a3745fa643f3b..30f757249c5c0 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -1414,7 +1414,7 @@ static struct ssif_addr_info *ssif_info_find(unsigned short addr,
+ restart:
+ 	list_for_each_entry(info, &ssif_infos, link) {
+ 		if (info->binfo.addr == addr) {
+-			if (info->addr_src == SI_SMBIOS)
++			if (info->addr_src == SI_SMBIOS && !info->adapter_name)
+ 				info->adapter_name = kstrdup(adapter_name,
+ 							     GFP_KERNEL);
+ 
+@@ -1614,6 +1614,11 @@ static int ssif_add_infos(struct i2c_client *client)
+ 	info->addr_src = SI_ACPI;
+ 	info->client = client;
+ 	info->adapter_name = kstrdup(client->adapter->name, GFP_KERNEL);
++	if (!info->adapter_name) {
++		kfree(info);
++		return -ENOMEM;
++	}
++
+ 	info->binfo.addr = client->addr;
+ 	list_add_tail(&info->link, &ssif_infos);
+ 	return 0;
+diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig
+index 4ae49eae45869..df739665f2063 100644
+--- a/drivers/clk/Kconfig
++++ b/drivers/clk/Kconfig
+@@ -356,6 +356,7 @@ config COMMON_CLK_BD718XX
+ config COMMON_CLK_FIXED_MMIO
+ 	bool "Clock driver for Memory Mapped Fixed values"
+ 	depends on COMMON_CLK && OF
++	depends on HAS_IOMEM
+ 	help
+ 	  Support for Memory Mapped IO Fixed clocks
+ 
+diff --git a/drivers/clk/imx/clk-composite-8m.c b/drivers/clk/imx/clk-composite-8m.c
+index 04e728538cefe..75e05582cb24f 100644
+--- a/drivers/clk/imx/clk-composite-8m.c
++++ b/drivers/clk/imx/clk-composite-8m.c
+@@ -97,7 +97,7 @@ static int imx8m_clk_composite_divider_set_rate(struct clk_hw *hw,
+ 	int prediv_value;
+ 	int div_value;
+ 	int ret;
+-	u32 val;
++	u32 orig, val;
+ 
+ 	ret = imx8m_clk_composite_compute_dividers(rate, parent_rate,
+ 						&prediv_value, &div_value);
+@@ -106,13 +106,15 @@ static int imx8m_clk_composite_divider_set_rate(struct clk_hw *hw,
+ 
+ 	spin_lock_irqsave(divider->lock, flags);
+ 
+-	val = readl(divider->reg);
+-	val &= ~((clk_div_mask(divider->width) << divider->shift) |
+-			(clk_div_mask(PCG_DIV_WIDTH) << PCG_DIV_SHIFT));
++	orig = readl(divider->reg);
++	val = orig & ~((clk_div_mask(divider->width) << divider->shift) |
++		       (clk_div_mask(PCG_DIV_WIDTH) << PCG_DIV_SHIFT));
+ 
+ 	val |= (u32)(prediv_value  - 1) << divider->shift;
+ 	val |= (u32)(div_value - 1) << PCG_DIV_SHIFT;
+-	writel(val, divider->reg);
++
++	if (val != orig)
++		writel(val, divider->reg);
+ 
+ 	spin_unlock_irqrestore(divider->lock, flags);
+ 
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index 98a4711ef38d0..148572852e70f 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -181,10 +181,6 @@ static const char * const imx8mp_sai3_sels[] = {"osc_24m", "audio_pll1_out", "au
+ 						"video_pll1_out", "sys_pll1_133m", "osc_hdmi",
+ 						"clk_ext3", "clk_ext4", };
+ 
+-static const char * const imx8mp_sai4_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
+-						"video_pll1_out", "sys_pll1_133m", "osc_hdmi",
+-						"clk_ext1", "clk_ext2", };
+-
+ static const char * const imx8mp_sai5_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
+ 						"video_pll1_out", "sys_pll1_133m", "osc_hdmi",
+ 						"clk_ext2", "clk_ext3", };
+@@ -596,7 +592,6 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MP_CLK_SAI1] = imx8m_clk_hw_composite("sai1", imx8mp_sai1_sels, ccm_base + 0xa580);
+ 	hws[IMX8MP_CLK_SAI2] = imx8m_clk_hw_composite("sai2", imx8mp_sai2_sels, ccm_base + 0xa600);
+ 	hws[IMX8MP_CLK_SAI3] = imx8m_clk_hw_composite("sai3", imx8mp_sai3_sels, ccm_base + 0xa680);
+-	hws[IMX8MP_CLK_SAI4] = imx8m_clk_hw_composite("sai4", imx8mp_sai4_sels, ccm_base + 0xa700);
+ 	hws[IMX8MP_CLK_SAI5] = imx8m_clk_hw_composite("sai5", imx8mp_sai5_sels, ccm_base + 0xa780);
+ 	hws[IMX8MP_CLK_SAI6] = imx8m_clk_hw_composite("sai6", imx8mp_sai6_sels, ccm_base + 0xa800);
+ 	hws[IMX8MP_CLK_ENET_QOS] = imx8m_clk_hw_composite("enet_qos", imx8mp_enet_qos_sels, ccm_base + 0xa880);
+diff --git a/drivers/clk/imx/clk-pll14xx.c b/drivers/clk/imx/clk-pll14xx.c
+index aba36e4217d2d..e46311c2e63ea 100644
+--- a/drivers/clk/imx/clk-pll14xx.c
++++ b/drivers/clk/imx/clk-pll14xx.c
+@@ -60,8 +60,6 @@ static const struct imx_pll14xx_rate_table imx_pll1443x_tbl[] = {
+ 	PLL_1443X_RATE(650000000U, 325, 3, 2, 0),
+ 	PLL_1443X_RATE(594000000U, 198, 2, 2, 0),
+ 	PLL_1443X_RATE(519750000U, 173, 2, 2, 16384),
+-	PLL_1443X_RATE(393216000U, 262, 2, 3, 9437),
+-	PLL_1443X_RATE(361267200U, 361, 3, 3, 17511),
+ };
+ 
+ struct imx_pll14xx_clk imx_1443x_pll = {
+diff --git a/drivers/clk/keystone/pll.c b/drivers/clk/keystone/pll.c
+index d59a7621bb204..ee5c72369334f 100644
+--- a/drivers/clk/keystone/pll.c
++++ b/drivers/clk/keystone/pll.c
+@@ -209,7 +209,7 @@ static void __init _of_pll_clk_init(struct device_node *node, bool pllctrl)
+ 	}
+ 
+ 	clk = clk_register_pll(NULL, node->name, parent_name, pll_data);
+-	if (clk) {
++	if (!IS_ERR_OR_NULL(clk)) {
+ 		of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ 		return;
+ 	}
+diff --git a/drivers/clk/qcom/gcc-mdm9615.c b/drivers/clk/qcom/gcc-mdm9615.c
+index 8bed02a748aba..470a277603a92 100644
+--- a/drivers/clk/qcom/gcc-mdm9615.c
++++ b/drivers/clk/qcom/gcc-mdm9615.c
+@@ -58,7 +58,7 @@ static struct clk_regmap pll0_vote = {
+ 	.enable_mask = BIT(0),
+ 	.hw.init = &(struct clk_init_data){
+ 		.name = "pll0_vote",
+-		.parent_names = (const char *[]){ "pll8" },
++		.parent_names = (const char *[]){ "pll0" },
+ 		.num_parents = 1,
+ 		.ops = &clk_pll_vote_ops,
+ 	},
+diff --git a/drivers/clk/qcom/gcc-sc7180.c b/drivers/clk/qcom/gcc-sc7180.c
+index 7e80dbd4a3f9f..bebe317935238 100644
+--- a/drivers/clk/qcom/gcc-sc7180.c
++++ b/drivers/clk/qcom/gcc-sc7180.c
+@@ -285,7 +285,7 @@ static struct clk_rcg2 gcc_cpuss_ahb_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_cpuss_ahb_clk_src",
+ 		.parent_data = gcc_parent_data_0_ao,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0_ao),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 		},
+@@ -309,7 +309,7 @@ static struct clk_rcg2 gcc_gp1_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_gp1_clk_src",
+ 		.parent_data = gcc_parent_data_4,
+-		.num_parents = 5,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_4),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -323,7 +323,7 @@ static struct clk_rcg2 gcc_gp2_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_gp2_clk_src",
+ 		.parent_data = gcc_parent_data_4,
+-		.num_parents = 5,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_4),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -337,7 +337,7 @@ static struct clk_rcg2 gcc_gp3_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_gp3_clk_src",
+ 		.parent_data = gcc_parent_data_4,
+-		.num_parents = 5,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_4),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -357,7 +357,7 @@ static struct clk_rcg2 gcc_pdm2_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_pdm2_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -378,7 +378,7 @@ static struct clk_rcg2 gcc_qspi_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qspi_core_clk_src",
+ 		.parent_data = gcc_parent_data_2,
+-		.num_parents = 6,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_2),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -619,7 +619,7 @@ static struct clk_rcg2 gcc_sdcc1_apps_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_sdcc1_apps_clk_src",
+ 		.parent_data = gcc_parent_data_1,
+-		.num_parents = 5,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ 		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+@@ -641,7 +641,7 @@ static struct clk_rcg2 gcc_sdcc1_ice_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_sdcc1_ice_core_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -665,7 +665,8 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_sdcc2_apps_clk_src",
+ 		.parent_data = gcc_parent_data_5,
+-		.num_parents = 5,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_5),
++		.flags = CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+@@ -688,7 +689,7 @@ static struct clk_rcg2 gcc_ufs_phy_axi_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_phy_axi_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -710,7 +711,7 @@ static struct clk_rcg2 gcc_ufs_phy_ice_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_phy_ice_core_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -730,7 +731,7 @@ static struct clk_rcg2 gcc_ufs_phy_phy_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_phy_phy_aux_clk_src",
+ 		.parent_data = gcc_parent_data_3,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_3),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -751,7 +752,7 @@ static struct clk_rcg2 gcc_ufs_phy_unipro_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_phy_unipro_core_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -773,7 +774,7 @@ static struct clk_rcg2 gcc_usb30_prim_master_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb30_prim_master_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -793,7 +794,7 @@ static struct clk_rcg2 gcc_usb30_prim_mock_utmi_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb30_prim_mock_utmi_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -812,7 +813,7 @@ static struct clk_rcg2 gcc_usb3_prim_phy_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb3_prim_phy_aux_clk_src",
+ 		.parent_data = gcc_parent_data_6,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_6),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+diff --git a/drivers/clk/qcom/gcc-sm8250.c b/drivers/clk/qcom/gcc-sm8250.c
+index 7ec11acc82984..70723e4dab008 100644
+--- a/drivers/clk/qcom/gcc-sm8250.c
++++ b/drivers/clk/qcom/gcc-sm8250.c
+@@ -200,7 +200,7 @@ static struct clk_rcg2 gcc_cpuss_ahb_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_cpuss_ahb_clk_src",
+ 		.parent_data = gcc_parent_data_0_ao,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0_ao),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -224,7 +224,7 @@ static struct clk_rcg2 gcc_gp1_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_gp1_clk_src",
+ 		.parent_data = gcc_parent_data_1,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -238,7 +238,7 @@ static struct clk_rcg2 gcc_gp2_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_gp2_clk_src",
+ 		.parent_data = gcc_parent_data_1,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -252,7 +252,7 @@ static struct clk_rcg2 gcc_gp3_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_gp3_clk_src",
+ 		.parent_data = gcc_parent_data_1,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -272,7 +272,7 @@ static struct clk_rcg2 gcc_pcie_0_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_pcie_0_aux_clk_src",
+ 		.parent_data = gcc_parent_data_2,
+-		.num_parents = 2,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_2),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -286,7 +286,7 @@ static struct clk_rcg2 gcc_pcie_1_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_pcie_1_aux_clk_src",
+ 		.parent_data = gcc_parent_data_2,
+-		.num_parents = 2,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_2),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -300,7 +300,7 @@ static struct clk_rcg2 gcc_pcie_2_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_pcie_2_aux_clk_src",
+ 		.parent_data = gcc_parent_data_2,
+-		.num_parents = 2,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_2),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -320,7 +320,7 @@ static struct clk_rcg2 gcc_pcie_phy_refgen_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_pcie_phy_refgen_clk_src",
+ 		.parent_data = gcc_parent_data_0_ao,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0_ao),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -341,7 +341,7 @@ static struct clk_rcg2 gcc_pdm2_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_pdm2_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -369,7 +369,7 @@ static const struct freq_tbl ftbl_gcc_qupv3_wrap0_s0_clk_src[] = {
+ static struct clk_init_data gcc_qupv3_wrap0_s0_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s0_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -385,7 +385,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap0_s1_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s1_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -417,7 +417,7 @@ static const struct freq_tbl ftbl_gcc_qupv3_wrap0_s2_clk_src[] = {
+ static struct clk_init_data gcc_qupv3_wrap0_s2_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s2_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -433,7 +433,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap0_s3_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s3_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -449,7 +449,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap0_s4_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s4_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -465,7 +465,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap0_s5_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s5_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -481,7 +481,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap0_s6_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s6_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -497,7 +497,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s6_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap0_s7_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s7_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -513,7 +513,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s7_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap1_s0_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap1_s0_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -529,7 +529,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap1_s1_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap1_s1_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -545,7 +545,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap1_s2_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap1_s2_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -561,7 +561,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap1_s3_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap1_s3_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -577,7 +577,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap1_s4_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap1_s4_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -593,7 +593,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap1_s5_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap1_s5_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -609,7 +609,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap2_s0_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap2_s0_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -625,7 +625,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap2_s1_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap2_s1_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -641,7 +641,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap2_s2_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap2_s2_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -657,7 +657,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap2_s3_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap2_s3_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -673,7 +673,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap2_s4_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap2_s4_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -689,7 +689,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = {
+ static struct clk_init_data gcc_qupv3_wrap2_s5_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap2_s5_clk_src",
+ 	.parent_data = gcc_parent_data_0,
+-	.num_parents = 3,
++	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.ops = &clk_rcg2_ops,
+ };
+ 
+@@ -721,7 +721,8 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_sdcc2_apps_clk_src",
+ 		.parent_data = gcc_parent_data_4,
+-		.num_parents = 5,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_4),
++		.flags = CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+@@ -744,7 +745,7 @@ static struct clk_rcg2 gcc_sdcc4_apps_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_sdcc4_apps_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+@@ -763,7 +764,7 @@ static struct clk_rcg2 gcc_tsif_ref_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_tsif_ref_clk_src",
+ 		.parent_data = gcc_parent_data_5,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_5),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -785,7 +786,7 @@ static struct clk_rcg2 gcc_ufs_card_axi_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_card_axi_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -807,7 +808,7 @@ static struct clk_rcg2 gcc_ufs_card_ice_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_card_ice_core_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -826,7 +827,7 @@ static struct clk_rcg2 gcc_ufs_card_phy_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_card_phy_aux_clk_src",
+ 		.parent_data = gcc_parent_data_3,
+-		.num_parents = 1,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_3),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -847,7 +848,7 @@ static struct clk_rcg2 gcc_ufs_card_unipro_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_card_unipro_core_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -870,7 +871,7 @@ static struct clk_rcg2 gcc_ufs_phy_axi_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_phy_axi_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -884,7 +885,7 @@ static struct clk_rcg2 gcc_ufs_phy_ice_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_phy_ice_core_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -898,7 +899,7 @@ static struct clk_rcg2 gcc_ufs_phy_phy_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_phy_phy_aux_clk_src",
+ 		.parent_data = gcc_parent_data_3,
+-		.num_parents = 1,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_3),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -912,7 +913,7 @@ static struct clk_rcg2 gcc_ufs_phy_unipro_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_phy_unipro_core_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -935,7 +936,7 @@ static struct clk_rcg2 gcc_usb30_prim_master_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb30_prim_master_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -949,7 +950,7 @@ static struct clk_rcg2 gcc_usb30_prim_mock_utmi_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb30_prim_mock_utmi_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -963,7 +964,7 @@ static struct clk_rcg2 gcc_usb30_sec_master_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb30_sec_master_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -977,7 +978,7 @@ static struct clk_rcg2 gcc_usb30_sec_mock_utmi_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb30_sec_mock_utmi_clk_src",
+ 		.parent_data = gcc_parent_data_0,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -991,7 +992,7 @@ static struct clk_rcg2 gcc_usb3_prim_phy_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb3_prim_phy_aux_clk_src",
+ 		.parent_data = gcc_parent_data_2,
+-		.num_parents = 2,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_2),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+@@ -1005,7 +1006,7 @@ static struct clk_rcg2 gcc_usb3_sec_phy_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb3_sec_phy_aux_clk_src",
+ 		.parent_data = gcc_parent_data_2,
+-		.num_parents = 2,
++		.num_parents = ARRAY_SIZE(gcc_parent_data_2),
+ 		.ops = &clk_rcg2_ops,
+ 	},
+ };
+diff --git a/drivers/clk/qcom/reset.c b/drivers/clk/qcom/reset.c
+index 0e914ec7aeae1..e45e32804d2c7 100644
+--- a/drivers/clk/qcom/reset.c
++++ b/drivers/clk/qcom/reset.c
+@@ -16,7 +16,8 @@ static int qcom_reset(struct reset_controller_dev *rcdev, unsigned long id)
+ 	struct qcom_reset_controller *rst = to_qcom_reset_controller(rcdev);
+ 
+ 	rcdev->ops->assert(rcdev, id);
+-	udelay(rst->reset_map[id].udelay ?: 1); /* use 1 us as default */
++	fsleep(rst->reset_map[id].udelay ?: 1); /* use 1 us as default */
++
+ 	rcdev->ops->deassert(rcdev, id);
+ 	return 0;
+ }
+diff --git a/drivers/clk/sunxi-ng/ccu_mmc_timing.c b/drivers/clk/sunxi-ng/ccu_mmc_timing.c
+index de33414fc5c28..c6a6ce98ca03a 100644
+--- a/drivers/clk/sunxi-ng/ccu_mmc_timing.c
++++ b/drivers/clk/sunxi-ng/ccu_mmc_timing.c
+@@ -43,7 +43,7 @@ int sunxi_ccu_set_mmc_timing_mode(struct clk *clk, bool new_mode)
+ EXPORT_SYMBOL_GPL(sunxi_ccu_set_mmc_timing_mode);
+ 
+ /**
+- * sunxi_ccu_set_mmc_timing_mode: Get the current MMC clock timing mode
++ * sunxi_ccu_get_mmc_timing_mode: Get the current MMC clock timing mode
+  * @clk: clock to query
+  *
+  * Returns 0 if the clock is in old timing mode, > 0 if it is in
+diff --git a/drivers/cpufreq/brcmstb-avs-cpufreq.c b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+index 4153150e20db5..f644c5e325fb2 100644
+--- a/drivers/cpufreq/brcmstb-avs-cpufreq.c
++++ b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+@@ -434,7 +434,11 @@ brcm_avs_get_freq_table(struct device *dev, struct private_data *priv)
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
+-	table = devm_kcalloc(dev, AVS_PSTATE_MAX + 1, sizeof(*table),
++	/*
++	 * We allocate space for the 5 different P-STATES AVS,
++	 * plus extra space for a terminating element.
++	 */
++	table = devm_kcalloc(dev, AVS_PSTATE_MAX + 1 + 1, sizeof(*table),
+ 			     GFP_KERNEL);
+ 	if (!table)
+ 		return ERR_PTR(-ENOMEM);
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 58342390966b7..5b4bca71f201d 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -450,8 +450,10 @@ void cpufreq_freq_transition_end(struct cpufreq_policy *policy,
+ 			    policy->cur,
+ 			    policy->cpuinfo.max_freq);
+ 
++	spin_lock(&policy->transition_lock);
+ 	policy->transition_ongoing = false;
+ 	policy->transition_task = NULL;
++	spin_unlock(&policy->transition_lock);
+ 
+ 	wake_up(&policy->transition_wait);
+ }
+diff --git a/drivers/cpufreq/powernow-k8.c b/drivers/cpufreq/powernow-k8.c
+index b9ccb6a3dad98..22d4c639d71db 100644
+--- a/drivers/cpufreq/powernow-k8.c
++++ b/drivers/cpufreq/powernow-k8.c
+@@ -1101,7 +1101,8 @@ static int powernowk8_cpu_exit(struct cpufreq_policy *pol)
+ 
+ 	kfree(data->powernow_table);
+ 	kfree(data);
+-	for_each_cpu(cpu, pol->cpus)
++	/* pol->cpus will be empty here, use related_cpus instead. */
++	for_each_cpu(cpu, pol->related_cpus)
+ 		per_cpu(powernow_data, cpu) = NULL;
+ 
+ 	return 0;
+diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
+index ff164dec8422e..f4cf3ade03db8 100644
+--- a/drivers/cpuidle/cpuidle-pseries.c
++++ b/drivers/cpuidle/cpuidle-pseries.c
+@@ -409,13 +409,7 @@ static int __init pseries_idle_probe(void)
+ 		return -ENODEV;
+ 
+ 	if (firmware_has_feature(FW_FEATURE_SPLPAR)) {
+-		/*
+-		 * Use local_paca instead of get_lppaca() since
+-		 * preemption is not disabled, and it is not required in
+-		 * fact, since lppaca_ptr does not need to be the value
+-		 * associated to the current CPU, it can be from any CPU.
+-		 */
+-		if (lppaca_shared_proc(local_paca->lppaca_ptr)) {
++		if (lppaca_shared_proc()) {
+ 			cpuidle_state_table = shared_states;
+ 			max_idle_state = ARRAY_SIZE(shared_states);
+ 		} else {
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index 3acc825da4cca..5bd70a59f4ce2 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -222,7 +222,9 @@ static int caam_rsa_count_leading_zeros(struct scatterlist *sgl,
+ 		if (len && *buff)
+ 			break;
+ 
+-		sg_miter_next(&miter);
++		if (!sg_miter_next(&miter))
++			break;
++
+ 		buff = miter.addr;
+ 		len = miter.length;
+ 
+diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
+index 16bb52836b28d..37fde13b80dde 100644
+--- a/drivers/crypto/stm32/stm32-hash.c
++++ b/drivers/crypto/stm32/stm32-hash.c
+@@ -564,9 +564,9 @@ static int stm32_hash_dma_send(struct stm32_hash_dev *hdev)
+ 	}
+ 
+ 	for_each_sg(rctx->sg, tsg, rctx->nents, i) {
++		sg[0] = *tsg;
+ 		len = sg->length;
+ 
+-		sg[0] = *tsg;
+ 		if (sg_is_last(sg)) {
+ 			if (hdev->dma_mode == 1) {
+ 				len = (ALIGN(sg->length, 16) - 16);
+@@ -1565,9 +1565,7 @@ static int stm32_hash_remove(struct platform_device *pdev)
+ 	if (!hdev)
+ 		return -ENODEV;
+ 
+-	ret = pm_runtime_resume_and_get(hdev->dev);
+-	if (ret < 0)
+-		return ret;
++	ret = pm_runtime_get_sync(hdev->dev);
+ 
+ 	stm32_hash_unregister_algs(hdev);
+ 
+@@ -1583,7 +1581,8 @@ static int stm32_hash_remove(struct platform_device *pdev)
+ 	pm_runtime_disable(hdev->dev);
+ 	pm_runtime_put_noidle(hdev->dev);
+ 
+-	clk_disable_unprepare(hdev->clk);
++	if (ret >= 0)
++		clk_disable_unprepare(hdev->clk);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index c6f460550f5e9..42c1eed445296 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -732,6 +732,7 @@ static void devfreq_dev_release(struct device *dev)
+ 		devfreq->profile->exit(devfreq->dev.parent);
+ 
+ 	mutex_destroy(&devfreq->lock);
++	srcu_cleanup_notifier_head(&devfreq->transition_notifier_list);
+ 	kfree(devfreq);
+ }
+ 
+diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
+index 08013345d1f24..7e1bd79fbee8f 100644
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -208,6 +208,7 @@ config FSL_DMA
+ config FSL_EDMA
+ 	tristate "Freescale eDMA engine support"
+ 	depends on OF
++	depends on HAS_IOMEM
+ 	select DMA_ENGINE
+ 	select DMA_VIRTUAL_CHANNELS
+ 	help
+@@ -277,6 +278,7 @@ config IMX_SDMA
+ 
+ config INTEL_IDMA64
+ 	tristate "Intel integrated DMA 64-bit support"
++	depends on HAS_IOMEM
+ 	select DMA_ENGINE
+ 	select DMA_VIRTUAL_CHANNELS
+ 	help
+diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c
+index b35b97cb8fd25..d99fec8215083 100644
+--- a/drivers/dma/ste_dma40.c
++++ b/drivers/dma/ste_dma40.c
+@@ -3598,6 +3598,10 @@ static int __init d40_probe(struct platform_device *pdev)
+ 	spin_lock_init(&base->lcla_pool.lock);
+ 
+ 	base->irq = platform_get_irq(pdev, 0);
++	if (base->irq < 0) {
++		ret = base->irq;
++		goto destroy_cache;
++	}
+ 
+ 	ret = request_irq(base->irq, d40_handle_interrupt, 0, D40_NAME, base);
+ 	if (ret) {
+diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
+index c08968c5ddf8c..807c5320dc0ff 100644
+--- a/drivers/firmware/Kconfig
++++ b/drivers/firmware/Kconfig
+@@ -72,6 +72,7 @@ config ARM_SCPI_POWER_DOMAIN
+ config ARM_SDE_INTERFACE
+ 	bool "ARM Software Delegated Exception Interface (SDEI)"
+ 	depends on ARM64
++	depends on ACPI_APEI_GHES
+ 	help
+ 	  The Software Delegated Exception Interface (SDEI) is an ARM
+ 	  standard for registering callbacks from the platform firmware
+diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
+index 5a877d76078f7..68e55ca7491e5 100644
+--- a/drivers/firmware/arm_sdei.c
++++ b/drivers/firmware/arm_sdei.c
+@@ -1063,14 +1063,14 @@ static bool __init sdei_present_acpi(void)
+ 	return true;
+ }
+ 
+-static int __init sdei_init(void)
++void __init sdei_init(void)
+ {
+ 	struct platform_device *pdev;
+ 	int ret;
+ 
+ 	ret = platform_driver_register(&sdei_driver);
+ 	if (ret || !sdei_present_acpi())
+-		return ret;
++		return;
+ 
+ 	pdev = platform_device_register_simple(sdei_driver.driver.name,
+ 					       0, NULL, 0);
+@@ -1080,17 +1080,8 @@ static int __init sdei_init(void)
+ 		pr_info("Failed to register ACPI:SDEI platform device %d\n",
+ 			ret);
+ 	}
+-
+-	return ret;
+ }
+ 
+-/*
+- * On an ACPI system SDEI needs to be ready before HEST:GHES tries to register
+- * its events. ACPI is initialised from a subsys_initcall(), GHES is initialised
+- * by device_initcall(). We want to be called in the middle.
+- */
+-subsys_initcall_sync(sdei_init);
+-
+ int sdei_event_handler(struct pt_regs *regs,
+ 		       struct sdei_registered_event *arg)
+ {
+@@ -1118,3 +1109,22 @@ int sdei_event_handler(struct pt_regs *regs,
+ 	return err;
+ }
+ NOKPROBE_SYMBOL(sdei_event_handler);
++
++void sdei_handler_abort(void)
++{
++	/*
++	 * If the crash happened in an SDEI event handler then we need to
++	 * finish the handler with the firmware so that we can have working
++	 * interrupts in the crash kernel.
++	 */
++	if (__this_cpu_read(sdei_active_critical_event)) {
++	        pr_warn("still in SDEI critical event context, attempting to finish handler.\n");
++	        __sdei_handler_abort();
++	        __this_cpu_write(sdei_active_critical_event, NULL);
++	}
++	if (__this_cpu_read(sdei_active_normal_event)) {
++	        pr_warn("still in SDEI normal event context, attempting to finish handler.\n");
++	        __sdei_handler_abort();
++	        __this_cpu_write(sdei_active_normal_event, NULL);
++	}
++}
+diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
+index 5d0f1b1966fc6..9f998e6bff957 100644
+--- a/drivers/firmware/efi/libstub/x86-stub.c
++++ b/drivers/firmware/efi/libstub/x86-stub.c
+@@ -60,7 +60,7 @@ preserve_pci_rom_image(efi_pci_io_protocol_t *pci, struct pci_setup_rom **__rom)
+ 	rom->data.type	= SETUP_PCI;
+ 	rom->data.len	= size - sizeof(struct setup_data);
+ 	rom->data.next	= 0;
+-	rom->pcilen	= pci->romsize;
++	rom->pcilen	= romsize;
+ 	*__rom = rom;
+ 
+ 	status = efi_call_proto(pci, pci.read, EfiPciIoWidthUint16,
+diff --git a/drivers/firmware/meson/meson_sm.c b/drivers/firmware/meson/meson_sm.c
+index 2854b56f6e0bd..ed27ff2e503ef 100644
+--- a/drivers/firmware/meson/meson_sm.c
++++ b/drivers/firmware/meson/meson_sm.c
+@@ -292,6 +292,8 @@ static int __init meson_sm_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	chip = of_match_device(meson_sm_ids, dev)->data;
++	if (!chip)
++		return -EINVAL;
+ 
+ 	if (chip->cmd_shmem_in_base) {
+ 		fw->sm_shmem_in_base = meson_sm_map_shmem(chip->cmd_shmem_in_base,
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index f432fe7cb60d7..cd0f12b2b3861 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -623,7 +623,7 @@ svc_create_memory_pool(struct platform_device *pdev,
+ 	paddr = begin;
+ 	size = end - begin;
+ 	va = devm_memremap(dev, paddr, size, MEMREMAP_WC);
+-	if (!va) {
++	if (IS_ERR(va)) {
+ 		dev_err(dev, "fail to remap shared memory\n");
+ 		return ERR_PTR(-EINVAL);
+ 	}
+diff --git a/drivers/fsi/fsi-master-aspeed.c b/drivers/fsi/fsi-master-aspeed.c
+index 87edc77260d20..db0519da0f892 100644
+--- a/drivers/fsi/fsi-master-aspeed.c
++++ b/drivers/fsi/fsi-master-aspeed.c
+@@ -445,6 +445,8 @@ static ssize_t cfam_reset_store(struct device *dev, struct device_attribute *att
+ 	gpiod_set_value(aspeed->cfam_reset_gpio, 1);
+ 	usleep_range(900, 1000);
+ 	gpiod_set_value(aspeed->cfam_reset_gpio, 0);
++	usleep_range(900, 1000);
++	opb_writel(aspeed, ctrl_base + FSI_MRESP0, cpu_to_be32(FSI_MRESP_RST_ALL_MASTER));
+ 	mutex_unlock(&aspeed->lock);
+ 
+ 	return count;
+diff --git a/drivers/fsi/fsi-master-ast-cf.c b/drivers/fsi/fsi-master-ast-cf.c
+index 70c03e304d6c8..42f908025985f 100644
+--- a/drivers/fsi/fsi-master-ast-cf.c
++++ b/drivers/fsi/fsi-master-ast-cf.c
+@@ -1440,3 +1440,4 @@ static struct platform_driver fsi_master_acf = {
+ 
+ module_platform_driver(fsi_master_acf);
+ MODULE_LICENSE("GPL");
++MODULE_FIRMWARE(FW_FILE_NAME);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 8bd887fb6e631..f0db9724ca85e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -1091,6 +1091,9 @@ int amdgpu_device_resize_fb_bar(struct amdgpu_device *adev)
+ 	u16 cmd;
+ 	int r;
+ 
++	if (!IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT))
++		return 0;
++
+ 	/* Bypass for VF */
+ 	if (amdgpu_sriov_vf(adev))
+ 		return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 917b94002f4b7..93a4b52f4a73b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -505,6 +505,7 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 			crtc = (struct drm_crtc *)minfo->crtcs[i];
+ 			if (crtc && crtc->base.id == info->mode_crtc.id) {
+ 				struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
++
+ 				ui32 = amdgpu_crtc->crtc_id;
+ 				found = 1;
+ 				break;
+@@ -523,7 +524,7 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 		if (ret)
+ 			return ret;
+ 
+-		ret = copy_to_user(out, &ip, min((size_t)size, sizeof(ip)));
++		ret = copy_to_user(out, &ip, min_t(size_t, size, sizeof(ip)));
+ 		return ret ? -EFAULT : 0;
+ 	}
+ 	case AMDGPU_INFO_HW_IP_COUNT: {
+@@ -671,17 +672,18 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 				    ? -EFAULT : 0;
+ 	}
+ 	case AMDGPU_INFO_READ_MMR_REG: {
+-		unsigned n, alloc_size;
++		unsigned int n, alloc_size;
+ 		uint32_t *regs;
+-		unsigned se_num = (info->read_mmr_reg.instance >>
++		unsigned int se_num = (info->read_mmr_reg.instance >>
+ 				   AMDGPU_INFO_MMR_SE_INDEX_SHIFT) &
+ 				  AMDGPU_INFO_MMR_SE_INDEX_MASK;
+-		unsigned sh_num = (info->read_mmr_reg.instance >>
++		unsigned int sh_num = (info->read_mmr_reg.instance >>
+ 				   AMDGPU_INFO_MMR_SH_INDEX_SHIFT) &
+ 				  AMDGPU_INFO_MMR_SH_INDEX_MASK;
+ 
+ 		/* set full masks if the userspace set all bits
+-		 * in the bitfields */
++		 * in the bitfields
++		 */
+ 		if (se_num == AMDGPU_INFO_MMR_SE_INDEX_MASK)
+ 			se_num = 0xffffffff;
+ 		else if (se_num >= AMDGPU_GFX_MAX_SE)
+@@ -799,7 +801,7 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 				    min((size_t)size, sizeof(dev_info))) ? -EFAULT : 0;
+ 	}
+ 	case AMDGPU_INFO_VCE_CLOCK_TABLE: {
+-		unsigned i;
++		unsigned int i;
+ 		struct drm_amdgpu_info_vce_clock_table vce_clk_table = {};
+ 		struct amd_vce_state *vce_state;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/cik.c b/drivers/gpu/drm/amd/amdgpu/cik.c
+index 5442df0941024..453464914f353 100644
+--- a/drivers/gpu/drm/amd/amdgpu/cik.c
++++ b/drivers/gpu/drm/amd/amdgpu/cik.c
+@@ -1509,17 +1509,8 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
+ 			u16 bridge_cfg2, gpu_cfg2;
+ 			u32 max_lw, current_lw, tmp;
+ 
+-			pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-						  &bridge_cfg);
+-			pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL,
+-						  &gpu_cfg);
+-
+-			tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
+-
+-			tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL,
+-						   tmp16);
++			pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
++			pcie_capability_set_word(adev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
+ 
+ 			tmp = RREG32_PCIE(ixPCIE_LC_STATUS1);
+ 			max_lw = (tmp & PCIE_LC_STATUS1__LC_DETECTED_LINK_WIDTH_MASK) >>
+@@ -1572,21 +1563,14 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
+ 				msleep(100);
+ 
+ 				/* linkctl */
+-				pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(root, PCI_EXP_LNKCTL,
+-							   tmp16);
+-
+-				pcie_capability_read_word(adev->pdev,
+-							  PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(adev->pdev,
+-							   PCI_EXP_LNKCTL,
+-							   tmp16);
++				pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   bridge_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
++				pcie_capability_clear_and_set_word(adev->pdev, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   gpu_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
+ 
+ 				/* linkctl2 */
+ 				pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
+diff --git a/drivers/gpu/drm/amd/amdgpu/si.c b/drivers/gpu/drm/amd/amdgpu/si.c
+index e5e336fd9e941..b7e1201d46f97 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si.c
++++ b/drivers/gpu/drm/amd/amdgpu/si.c
+@@ -2159,17 +2159,8 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev)
+ 			u16 bridge_cfg2, gpu_cfg2;
+ 			u32 max_lw, current_lw, tmp;
+ 
+-			pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-						  &bridge_cfg);
+-			pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL,
+-						  &gpu_cfg);
+-
+-			tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
+-
+-			tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL,
+-						   tmp16);
++			pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
++			pcie_capability_set_word(adev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
+ 
+ 			tmp = RREG32_PCIE(PCIE_LC_STATUS1);
+ 			max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT;
+@@ -2214,21 +2205,14 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev)
+ 
+ 				mdelay(100);
+ 
+-				pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(root, PCI_EXP_LNKCTL,
+-							   tmp16);
+-
+-				pcie_capability_read_word(adev->pdev,
+-							  PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(adev->pdev,
+-							   PCI_EXP_LNKCTL,
+-							   tmp16);
++				pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   bridge_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
++				pcie_capability_clear_and_set_word(adev->pdev, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   gpu_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
+ 
+ 				pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
+ 							  &tmp16);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index e33fe0207b9e5..53e8defd34751 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -1682,10 +1682,13 @@ void dce110_enable_accelerated_mode(struct dc *dc, struct dc_state *context)
+ 			hws->funcs.edp_backlight_control(edp_link_with_sink, false);
+ 		}
+ 		/*resume from S3, no vbios posting, no need to power down again*/
++		clk_mgr_exit_optimized_pwr_state(dc, dc->clk_mgr);
++
+ 		power_down_all_hw_blocks(dc);
+ 		disable_vga_and_power_gate_all_controllers(dc);
+ 		if (edp_link_with_sink && !keep_edp_vdd_on)
+ 			dc->hwss.edp_power_control(edp_link_with_sink, false);
++		clk_mgr_optimize_pwr_state(dc, dc->clk_mgr);
+ 	}
+ 	bios_set_scratch_acc_mode_change(dc->ctx->dc_bios);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
+index 855682590c1bb..fd08177de595c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
+@@ -206,8 +206,9 @@ struct mpcc *mpc1_insert_plane(
+ 		/* check insert_above_mpcc exist in tree->opp_list */
+ 		struct mpcc *temp_mpcc = tree->opp_list;
+ 
+-		while (temp_mpcc && temp_mpcc->mpcc_bot != insert_above_mpcc)
+-			temp_mpcc = temp_mpcc->mpcc_bot;
++		if (temp_mpcc != insert_above_mpcc)
++			while (temp_mpcc && temp_mpcc->mpcc_bot != insert_above_mpcc)
++				temp_mpcc = temp_mpcc->mpcc_bot;
+ 		if (temp_mpcc == NULL)
+ 			return NULL;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+index d988533d4af5f..627d578175cf9 100644
+--- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
++++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+@@ -327,7 +327,9 @@ static void apply_below_the_range(struct core_freesync *core_freesync,
+ 		 *  - Delta for CEIL: delta_from_mid_point_in_us_1
+ 		 *  - Delta for FLOOR: delta_from_mid_point_in_us_2
+ 		 */
+-		if ((last_render_time_in_us / mid_point_frames_ceil) < in_out_vrr->min_duration_in_us) {
++		if (mid_point_frames_ceil &&
++		    (last_render_time_in_us / mid_point_frames_ceil) <
++		    in_out_vrr->min_duration_in_us) {
+ 			/* Check for out of range.
+ 			 * If using CEIL produces a value that is out of range,
+ 			 * then we are forced to use FLOOR.
+@@ -374,8 +376,9 @@ static void apply_below_the_range(struct core_freesync *core_freesync,
+ 		/* Either we've calculated the number of frames to insert,
+ 		 * or we need to insert min duration frames
+ 		 */
+-		if (last_render_time_in_us / frames_to_insert <
+-				in_out_vrr->min_duration_in_us){
++		if (frames_to_insert &&
++		    (last_render_time_in_us / frames_to_insert) <
++		    in_out_vrr->min_duration_in_us){
+ 			frames_to_insert -= (frames_to_insert > 1) ?
+ 					1 : 0;
+ 		}
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index 5abb68017f6ed..d58a59cf4f853 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -2115,15 +2115,19 @@ static int amdgpu_device_attr_create(struct amdgpu_device *adev,
+ 				     uint32_t mask, struct list_head *attr_list)
+ {
+ 	int ret = 0;
+-	struct device_attribute *dev_attr = &attr->dev_attr;
+-	const char *name = dev_attr->attr.name;
+ 	enum amdgpu_device_attr_states attr_states = ATTR_STATE_SUPPORTED;
+ 	struct amdgpu_device_attr_entry *attr_entry;
++	struct device_attribute *dev_attr;
++	const char *name;
+ 
+ 	int (*attr_update)(struct amdgpu_device *adev, struct amdgpu_device_attr *attr,
+ 			   uint32_t mask, enum amdgpu_device_attr_states *states) = default_attr_update;
+ 
+-	BUG_ON(!attr);
++	if (!attr)
++		return -EINVAL;
++
++	dev_attr = &attr->dev_attr;
++	name = dev_attr->attr.name;
+ 
+ 	attr_update = attr->attr_update ? attr_update : default_attr_update;
+ 
+diff --git a/drivers/gpu/drm/armada/armada_overlay.c b/drivers/gpu/drm/armada/armada_overlay.c
+index 30e01101f59ed..7ee4c90d4a2df 100644
+--- a/drivers/gpu/drm/armada/armada_overlay.c
++++ b/drivers/gpu/drm/armada/armada_overlay.c
+@@ -4,6 +4,8 @@
+  *  Rewritten from the dovefb driver, and Armada510 manuals.
+  */
+ 
++#include <linux/bitfield.h>
++
+ #include <drm/armada_drm.h>
+ #include <drm/drm_atomic.h>
+ #include <drm/drm_atomic_helper.h>
+@@ -446,8 +448,8 @@ static int armada_overlay_get_property(struct drm_plane *plane,
+ 			     drm_to_overlay_state(state)->colorkey_ug,
+ 			     drm_to_overlay_state(state)->colorkey_vb, 0);
+ 	} else if (property == priv->colorkey_mode_prop) {
+-		*val = (drm_to_overlay_state(state)->colorkey_mode &
+-			CFG_CKMODE_MASK) >> ffs(CFG_CKMODE_MASK);
++		*val = FIELD_GET(CFG_CKMODE_MASK,
++				 drm_to_overlay_state(state)->colorkey_mode);
+ 	} else if (property == priv->brightness_prop) {
+ 		*val = drm_to_overlay_state(state)->brightness + 256;
+ 	} else if (property == priv->contrast_prop) {
+diff --git a/drivers/gpu/drm/ast/ast_post.c b/drivers/gpu/drm/ast/ast_post.c
+index 8902c2f84bf99..1bc05bf0f2320 100644
+--- a/drivers/gpu/drm/ast/ast_post.c
++++ b/drivers/gpu/drm/ast/ast_post.c
+@@ -290,7 +290,7 @@ static void ast_init_dram_reg(struct drm_device *dev)
+ 				;
+ 			} while (ast_read32(ast, 0x10100) != 0xa8);
+ 		} else {/* AST2100/1100 */
+-			if (ast->chip == AST2100 || ast->chip == 2200)
++			if (ast->chip == AST2100 || ast->chip == AST2200)
+ 				dram_reg_info = ast2100_dram_table_data;
+ 			else
+ 				dram_reg_info = ast1100_dram_table_data;
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index 6ba860a16e96c..e50c741cbfe72 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -786,8 +786,13 @@ static void adv7511_mode_set(struct adv7511 *adv7511,
+ 	else
+ 		low_refresh_rate = ADV7511_LOW_REFRESH_RATE_NONE;
+ 
+-	regmap_update_bits(adv7511->regmap, 0xfb,
+-		0x6, low_refresh_rate << 1);
++	if (adv7511->type == ADV7511)
++		regmap_update_bits(adv7511->regmap, 0xfb,
++				   0x6, low_refresh_rate << 1);
++	else
++		regmap_update_bits(adv7511->regmap, 0x4a,
++				   0xc, low_refresh_rate << 2);
++
+ 	regmap_update_bits(adv7511->regmap, 0x17,
+ 		0x60, (vsync_polarity << 6) | (hsync_polarity << 5));
+ 
+diff --git a/drivers/gpu/drm/bridge/tc358764.c b/drivers/gpu/drm/bridge/tc358764.c
+index d89394bc5aa4d..ea1445e09e6f1 100644
+--- a/drivers/gpu/drm/bridge/tc358764.c
++++ b/drivers/gpu/drm/bridge/tc358764.c
+@@ -180,7 +180,7 @@ static void tc358764_read(struct tc358764 *ctx, u16 addr, u32 *val)
+ 	if (ret >= 0)
+ 		le32_to_cpus(val);
+ 
+-	dev_dbg(ctx->dev, "read: %d, addr: %d\n", addr, *val);
++	dev_dbg(ctx->dev, "read: addr=0x%04x data=0x%08x\n", addr, *val);
+ }
+ 
+ static void tc358764_write(struct tc358764 *ctx, u16 addr, u32 val)
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_dump.c b/drivers/gpu/drm/etnaviv/etnaviv_dump.c
+index 706af0304ca4c..7b57d01ba865b 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_dump.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_dump.c
+@@ -125,9 +125,9 @@ void etnaviv_core_dump(struct etnaviv_gem_submit *submit)
+ 		return;
+ 	etnaviv_dump_core = false;
+ 
+-	mutex_lock(&gpu->mmu_context->lock);
++	mutex_lock(&submit->mmu_context->lock);
+ 
+-	mmu_size = etnaviv_iommu_dump_size(gpu->mmu_context);
++	mmu_size = etnaviv_iommu_dump_size(submit->mmu_context);
+ 
+ 	/* We always dump registers, mmu, ring, hanging cmdbuf and end marker */
+ 	n_obj = 5;
+@@ -157,7 +157,7 @@ void etnaviv_core_dump(struct etnaviv_gem_submit *submit)
+ 	iter.start = __vmalloc(file_size, GFP_KERNEL | __GFP_NOWARN |
+ 			__GFP_NORETRY);
+ 	if (!iter.start) {
+-		mutex_unlock(&gpu->mmu_context->lock);
++		mutex_unlock(&submit->mmu_context->lock);
+ 		dev_warn(gpu->dev, "failed to allocate devcoredump file\n");
+ 		return;
+ 	}
+@@ -169,18 +169,18 @@ void etnaviv_core_dump(struct etnaviv_gem_submit *submit)
+ 	memset(iter.hdr, 0, iter.data - iter.start);
+ 
+ 	etnaviv_core_dump_registers(&iter, gpu);
+-	etnaviv_core_dump_mmu(&iter, gpu->mmu_context, mmu_size);
++	etnaviv_core_dump_mmu(&iter, submit->mmu_context, mmu_size);
+ 	etnaviv_core_dump_mem(&iter, ETDUMP_BUF_RING, gpu->buffer.vaddr,
+ 			      gpu->buffer.size,
+ 			      etnaviv_cmdbuf_get_va(&gpu->buffer,
+-					&gpu->mmu_context->cmdbuf_mapping));
++					&submit->mmu_context->cmdbuf_mapping));
+ 
+ 	etnaviv_core_dump_mem(&iter, ETDUMP_BUF_CMD,
+ 			      submit->cmdbuf.vaddr, submit->cmdbuf.size,
+ 			      etnaviv_cmdbuf_get_va(&submit->cmdbuf,
+-					&gpu->mmu_context->cmdbuf_mapping));
++					&submit->mmu_context->cmdbuf_mapping));
+ 
+-	mutex_unlock(&gpu->mmu_context->lock);
++	mutex_unlock(&submit->mmu_context->lock);
+ 
+ 	/* Reserve space for the bomap */
+ 	if (n_bomap_pages) {
+diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
+index 0201f9b5f87e7..0d31a0db305d5 100644
+--- a/drivers/gpu/drm/i915/gvt/gtt.c
++++ b/drivers/gpu/drm/i915/gvt/gtt.c
+@@ -636,9 +636,18 @@ static void ggtt_set_host_entry(struct intel_vgpu_mm *mm,
+ 		struct intel_gvt_gtt_entry *entry, unsigned long index)
+ {
+ 	struct intel_gvt_gtt_pte_ops *pte_ops = mm->vgpu->gvt->gtt.pte_ops;
++	unsigned long offset = index;
+ 
+ 	GEM_BUG_ON(mm->type != INTEL_GVT_MM_GGTT);
+ 
++	if (vgpu_gmadr_is_aperture(mm->vgpu, index << I915_GTT_PAGE_SHIFT)) {
++		offset -= (vgpu_aperture_gmadr_base(mm->vgpu) >> PAGE_SHIFT);
++		mm->ggtt_mm.host_ggtt_aperture[offset] = entry->val64;
++	} else if (vgpu_gmadr_is_hidden(mm->vgpu, index << I915_GTT_PAGE_SHIFT)) {
++		offset -= (vgpu_hidden_gmadr_base(mm->vgpu) >> PAGE_SHIFT);
++		mm->ggtt_mm.host_ggtt_hidden[offset] = entry->val64;
++	}
++
+ 	pte_ops->set_entry(NULL, entry, index, false, 0, mm->vgpu);
+ }
+ 
+@@ -1953,6 +1962,21 @@ static struct intel_vgpu_mm *intel_vgpu_create_ggtt_mm(struct intel_vgpu *vgpu)
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
++	mm->ggtt_mm.host_ggtt_aperture = vzalloc((vgpu_aperture_sz(vgpu) >> PAGE_SHIFT) * sizeof(u64));
++	if (!mm->ggtt_mm.host_ggtt_aperture) {
++		vfree(mm->ggtt_mm.virtual_ggtt);
++		vgpu_free_mm(mm);
++		return ERR_PTR(-ENOMEM);
++	}
++
++	mm->ggtt_mm.host_ggtt_hidden = vzalloc((vgpu_hidden_sz(vgpu) >> PAGE_SHIFT) * sizeof(u64));
++	if (!mm->ggtt_mm.host_ggtt_hidden) {
++		vfree(mm->ggtt_mm.host_ggtt_aperture);
++		vfree(mm->ggtt_mm.virtual_ggtt);
++		vgpu_free_mm(mm);
++		return ERR_PTR(-ENOMEM);
++	}
++
+ 	return mm;
+ }
+ 
+@@ -1980,6 +2004,8 @@ void _intel_vgpu_mm_release(struct kref *mm_ref)
+ 		invalidate_ppgtt_mm(mm);
+ 	} else {
+ 		vfree(mm->ggtt_mm.virtual_ggtt);
++		vfree(mm->ggtt_mm.host_ggtt_aperture);
++		vfree(mm->ggtt_mm.host_ggtt_hidden);
+ 	}
+ 
+ 	vgpu_free_mm(mm);
+@@ -2845,19 +2871,39 @@ void intel_vgpu_reset_ggtt(struct intel_vgpu *vgpu, bool invalidate_old)
+ }
+ 
+ /**
+- * intel_vgpu_reset_gtt - reset the all GTT related status
+- * @vgpu: a vGPU
++ * intel_gvt_restore_ggtt - restore all vGPU's ggtt entries
++ * @gvt: intel gvt device
+  *
+- * This function is called from vfio core to reset reset all
+- * GTT related status, including GGTT, PPGTT, scratch page.
++ * This function is called at driver resume stage to restore
++ * GGTT entries of every vGPU.
+  *
+  */
+-void intel_vgpu_reset_gtt(struct intel_vgpu *vgpu)
++void intel_gvt_restore_ggtt(struct intel_gvt *gvt)
+ {
+-	/* Shadow pages are only created when there is no page
+-	 * table tracking data, so remove page tracking data after
+-	 * removing the shadow pages.
+-	 */
+-	intel_vgpu_destroy_all_ppgtt_mm(vgpu);
+-	intel_vgpu_reset_ggtt(vgpu, true);
++	struct intel_vgpu *vgpu;
++	struct intel_vgpu_mm *mm;
++	int id;
++	gen8_pte_t pte;
++	u32 idx, num_low, num_hi, offset;
++
++	/* Restore dirty host ggtt for all vGPUs */
++	idr_for_each_entry(&(gvt)->vgpu_idr, vgpu, id) {
++		mm = vgpu->gtt.ggtt_mm;
++
++		num_low = vgpu_aperture_sz(vgpu) >> PAGE_SHIFT;
++		offset = vgpu_aperture_gmadr_base(vgpu) >> PAGE_SHIFT;
++		for (idx = 0; idx < num_low; idx++) {
++			pte = mm->ggtt_mm.host_ggtt_aperture[idx];
++			if (pte & _PAGE_PRESENT)
++				write_pte64(vgpu->gvt->gt->ggtt, offset + idx, pte);
++		}
++
++		num_hi = vgpu_hidden_sz(vgpu) >> PAGE_SHIFT;
++		offset = vgpu_hidden_gmadr_base(vgpu) >> PAGE_SHIFT;
++		for (idx = 0; idx < num_hi; idx++) {
++			pte = mm->ggtt_mm.host_ggtt_hidden[idx];
++			if (pte & _PAGE_PRESENT)
++				write_pte64(vgpu->gvt->gt->ggtt, offset + idx, pte);
++		}
++	}
+ }
+diff --git a/drivers/gpu/drm/i915/gvt/gtt.h b/drivers/gpu/drm/i915/gvt/gtt.h
+index 52d0d88abd86a..89ffb52cafa04 100644
+--- a/drivers/gpu/drm/i915/gvt/gtt.h
++++ b/drivers/gpu/drm/i915/gvt/gtt.h
+@@ -164,6 +164,9 @@ struct intel_vgpu_mm {
+ 		} ppgtt_mm;
+ 		struct {
+ 			void *virtual_ggtt;
++			/* Save/restore for PM */
++			u64 *host_ggtt_aperture;
++			u64 *host_ggtt_hidden;
+ 			struct list_head partial_pte_list;
+ 		} ggtt_mm;
+ 	};
+@@ -212,7 +215,6 @@ void intel_vgpu_reset_ggtt(struct intel_vgpu *vgpu, bool invalidate_old);
+ void intel_vgpu_invalidate_ppgtt(struct intel_vgpu *vgpu);
+ 
+ int intel_gvt_init_gtt(struct intel_gvt *gvt);
+-void intel_vgpu_reset_gtt(struct intel_vgpu *vgpu);
+ void intel_gvt_clean_gtt(struct intel_gvt *gvt);
+ 
+ struct intel_vgpu_mm *intel_gvt_find_ppgtt_mm(struct intel_vgpu *vgpu,
+@@ -280,5 +282,6 @@ int intel_vgpu_emulate_ggtt_mmio_write(struct intel_vgpu *vgpu,
+ 	unsigned int off, void *p_data, unsigned int bytes);
+ 
+ void intel_vgpu_destroy_all_ppgtt_mm(struct intel_vgpu *vgpu);
++void intel_gvt_restore_ggtt(struct intel_gvt *gvt);
+ 
+ #endif /* _GVT_GTT_H_ */
+diff --git a/drivers/gpu/drm/i915/gvt/gvt.c b/drivers/gpu/drm/i915/gvt/gvt.c
+index 5c9ef8e58a087..87f22a88925ce 100644
+--- a/drivers/gpu/drm/i915/gvt/gvt.c
++++ b/drivers/gpu/drm/i915/gvt/gvt.c
+@@ -405,6 +405,15 @@ out_clean_idr:
+ 	return ret;
+ }
+ 
++int
++intel_gvt_pm_resume(struct intel_gvt *gvt)
++{
++	intel_gvt_restore_fence(gvt);
++	intel_gvt_restore_mmio(gvt);
++	intel_gvt_restore_ggtt(gvt);
++	return 0;
++}
++
+ int
+ intel_gvt_register_hypervisor(struct intel_gvt_mpt *m)
+ {
+diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
+index a81cf0f01e78e..b3d6355dd797d 100644
+--- a/drivers/gpu/drm/i915/gvt/gvt.h
++++ b/drivers/gpu/drm/i915/gvt/gvt.h
+@@ -255,6 +255,8 @@ struct intel_gvt_mmio {
+ #define F_CMD_ACCESS	(1 << 3)
+ /* This reg has been accessed by a VM */
+ #define F_ACCESSED	(1 << 4)
++/* This reg requires save & restore during host PM suspend/resume */
++#define F_PM_SAVE	(1 << 5)
+ /* This reg could be accessed by unaligned address */
+ #define F_UNALIGN	(1 << 6)
+ /* This reg is in GVT's mmio save-restor list and in hardware
+@@ -685,6 +687,7 @@ void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu);
+ void intel_gvt_debugfs_init(struct intel_gvt *gvt);
+ void intel_gvt_debugfs_clean(struct intel_gvt *gvt);
+ 
++int intel_gvt_pm_resume(struct intel_gvt *gvt);
+ 
+ #include "trace.h"
+ #include "mpt.h"
+diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
+index 606e6c315fe24..55ce7aaabf893 100644
+--- a/drivers/gpu/drm/i915/gvt/handlers.c
++++ b/drivers/gpu/drm/i915/gvt/handlers.c
+@@ -3135,9 +3135,10 @@ static int init_skl_mmio_info(struct intel_gvt *gvt)
+ 	MMIO_DFH(TRVATTL3PTRDW(2), D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
+ 	MMIO_DFH(TRVATTL3PTRDW(3), D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
+ 	MMIO_DFH(TRVADR, D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
+-	MMIO_DFH(TRTTE, D_SKL_PLUS, F_CMD_ACCESS,
+-		NULL, gen9_trtte_write);
+-	MMIO_DH(_MMIO(0x4dfc), D_SKL_PLUS, NULL, gen9_trtt_chicken_write);
++	MMIO_DFH(TRTTE, D_SKL_PLUS, F_CMD_ACCESS | F_PM_SAVE,
++		 NULL, gen9_trtte_write);
++	MMIO_DFH(_MMIO(0x4dfc), D_SKL_PLUS, F_PM_SAVE,
++		 NULL, gen9_trtt_chicken_write);
+ 
+ 	MMIO_D(_MMIO(0x46430), D_SKL_PLUS);
+ 
+@@ -3686,3 +3687,40 @@ default_rw:
+ 		intel_vgpu_default_mmio_read(vgpu, offset, pdata, bytes) :
+ 		intel_vgpu_default_mmio_write(vgpu, offset, pdata, bytes);
+ }
++
++void intel_gvt_restore_fence(struct intel_gvt *gvt)
++{
++	struct intel_vgpu *vgpu;
++	int i, id;
++
++	idr_for_each_entry(&(gvt)->vgpu_idr, vgpu, id) {
++		mmio_hw_access_pre(gvt->gt);
++		for (i = 0; i < vgpu_fence_sz(vgpu); i++)
++			intel_vgpu_write_fence(vgpu, i, vgpu_vreg64(vgpu, fence_num_to_offset(i)));
++		mmio_hw_access_post(gvt->gt);
++	}
++}
++
++static inline int mmio_pm_restore_handler(struct intel_gvt *gvt,
++					  u32 offset, void *data)
++{
++	struct intel_vgpu *vgpu = data;
++	struct drm_i915_private *dev_priv = gvt->gt->i915;
++
++	if (gvt->mmio.mmio_attribute[offset >> 2] & F_PM_SAVE)
++		I915_WRITE(_MMIO(offset), vgpu_vreg(vgpu, offset));
++
++	return 0;
++}
++
++void intel_gvt_restore_mmio(struct intel_gvt *gvt)
++{
++	struct intel_vgpu *vgpu;
++	int id;
++
++	idr_for_each_entry(&(gvt)->vgpu_idr, vgpu, id) {
++		mmio_hw_access_pre(gvt->gt);
++		intel_gvt_for_each_tracked_mmio(gvt, mmio_pm_restore_handler, vgpu);
++		mmio_hw_access_post(gvt->gt);
++	}
++}
+diff --git a/drivers/gpu/drm/i915/gvt/mmio.h b/drivers/gpu/drm/i915/gvt/mmio.h
+index cc4812648bf4a..9e862dc73579b 100644
+--- a/drivers/gpu/drm/i915/gvt/mmio.h
++++ b/drivers/gpu/drm/i915/gvt/mmio.h
+@@ -104,4 +104,8 @@ int intel_vgpu_mmio_reg_rw(struct intel_vgpu *vgpu, unsigned int offset,
+ 
+ int intel_vgpu_mask_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
+ 				  void *p_data, unsigned int bytes);
++
++void intel_gvt_restore_fence(struct intel_gvt *gvt);
++void intel_gvt_restore_mmio(struct intel_gvt *gvt);
++
+ #endif
+diff --git a/drivers/gpu/drm/i915/intel_gvt.c b/drivers/gpu/drm/i915/intel_gvt.c
+index 99fe8aef1c67f..4e70c1a9ef2ed 100644
+--- a/drivers/gpu/drm/i915/intel_gvt.c
++++ b/drivers/gpu/drm/i915/intel_gvt.c
+@@ -24,6 +24,7 @@
+ #include "i915_drv.h"
+ #include "i915_vgpu.h"
+ #include "intel_gvt.h"
++#include "gvt/gvt.h"
+ 
+ /**
+  * DOC: Intel GVT-g host support
+@@ -147,3 +148,17 @@ void intel_gvt_driver_remove(struct drm_i915_private *dev_priv)
+ 
+ 	intel_gvt_clean_device(dev_priv);
+ }
++
++/**
++ * intel_gvt_resume - GVT resume routine wapper
++ *
++ * @dev_priv: drm i915 private *
++ *
++ * This function is called at the i915 driver resume stage to restore required
++ * HW status for GVT so that vGPU can continue running after resumed.
++ */
++void intel_gvt_resume(struct drm_i915_private *dev_priv)
++{
++	if (intel_gvt_active(dev_priv))
++		intel_gvt_pm_resume(dev_priv->gvt);
++}
+diff --git a/drivers/gpu/drm/i915/intel_gvt.h b/drivers/gpu/drm/i915/intel_gvt.h
+index 502fad8a8652c..d7d3fb6186fdd 100644
+--- a/drivers/gpu/drm/i915/intel_gvt.h
++++ b/drivers/gpu/drm/i915/intel_gvt.h
+@@ -33,6 +33,7 @@ int intel_gvt_init_device(struct drm_i915_private *dev_priv);
+ void intel_gvt_clean_device(struct drm_i915_private *dev_priv);
+ int intel_gvt_init_host(void);
+ void intel_gvt_sanitize_options(struct drm_i915_private *dev_priv);
++void intel_gvt_resume(struct drm_i915_private *dev_priv);
+ #else
+ static inline int intel_gvt_init(struct drm_i915_private *dev_priv)
+ {
+@@ -46,6 +47,10 @@ static inline void intel_gvt_driver_remove(struct drm_i915_private *dev_priv)
+ static inline void intel_gvt_sanitize_options(struct drm_i915_private *dev_priv)
+ {
+ }
++
++static inline void intel_gvt_resume(struct drm_i915_private *dev_priv)
++{
++}
+ #endif
+ 
+ #endif /* _INTEL_GVT_H_ */
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+index 29702dd8631d4..fe64bf2176f30 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+@@ -249,7 +249,11 @@ void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj)
+ 
+ 	mtk_gem->kvaddr = vmap(mtk_gem->pages, npages, VM_MAP,
+ 			       pgprot_writecombine(PAGE_KERNEL));
+-
++	if (!mtk_gem->kvaddr) {
++		kfree(sgt);
++		kfree(mtk_gem->pages);
++		return -ENOMEM;
++	}
+ out:
+ 	kfree(sgt);
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+index 7e82c41a85f1a..64ee63dcdb7c9 100644
+--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+@@ -521,6 +521,10 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
+ 	gpu->perfcntrs = perfcntrs;
+ 	gpu->num_perfcntrs = ARRAY_SIZE(perfcntrs);
+ 
++	ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 1);
++	if (ret)
++		goto fail;
++
+ 	if (adreno_is_a20x(adreno_gpu))
+ 		adreno_gpu->registers = a200_registers;
+ 	else if (adreno_is_a225(adreno_gpu))
+@@ -528,10 +532,6 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
+ 	else
+ 		adreno_gpu->registers = a220_registers;
+ 
+-	ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 1);
+-	if (ret)
+-		goto fail;
+-
+ 	if (!gpu->aspace) {
+ 		dev_err(dev->dev, "No memory protection without MMU\n");
+ 		ret = -ENXIO;
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+index 0dc23c86747e8..e1c1b4ad5ed04 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+@@ -221,8 +221,7 @@ static void mdp5_plane_destroy_state(struct drm_plane *plane,
+ {
+ 	struct mdp5_plane_state *pstate = to_mdp5_plane_state(state);
+ 
+-	if (state->fb)
+-		drm_framebuffer_put(state->fb);
++	__drm_atomic_helper_plane_destroy_state(state);
+ 
+ 	kfree(pstate);
+ }
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index e40321d798981..e90b518118881 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -1200,7 +1200,9 @@ static const struct panel_desc auo_t215hvn01 = {
+ 	.delay = {
+ 		.disable = 5,
+ 		.unprepare = 1000,
+-	}
++	},
++	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
++	.connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+ 
+ static const struct drm_display_mode avic_tm070ddh03_mode = {
+diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
+index 5c42877fd6fbf..13f25ec1fe25c 100644
+--- a/drivers/gpu/drm/radeon/cik.c
++++ b/drivers/gpu/drm/radeon/cik.c
+@@ -9551,17 +9551,8 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev)
+ 			u16 bridge_cfg2, gpu_cfg2;
+ 			u32 max_lw, current_lw, tmp;
+ 
+-			pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-						  &bridge_cfg);
+-			pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL,
+-						  &gpu_cfg);
+-
+-			tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
+-
+-			tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL,
+-						   tmp16);
++			pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
++			pcie_capability_set_word(rdev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
+ 
+ 			tmp = RREG32_PCIE_PORT(PCIE_LC_STATUS1);
+ 			max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT;
+@@ -9608,21 +9599,14 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev)
+ 				msleep(100);
+ 
+ 				/* linkctl */
+-				pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(root, PCI_EXP_LNKCTL,
+-							   tmp16);
+-
+-				pcie_capability_read_word(rdev->pdev,
+-							  PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(rdev->pdev,
+-							   PCI_EXP_LNKCTL,
+-							   tmp16);
++				pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   bridge_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
++				pcie_capability_clear_and_set_word(rdev->pdev, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   gpu_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
+ 
+ 				/* linkctl2 */
+ 				pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
+diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
+index 93dcab548a835..31e2c1083b089 100644
+--- a/drivers/gpu/drm/radeon/si.c
++++ b/drivers/gpu/drm/radeon/si.c
+@@ -7138,17 +7138,8 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev)
+ 			u16 bridge_cfg2, gpu_cfg2;
+ 			u32 max_lw, current_lw, tmp;
+ 
+-			pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-						  &bridge_cfg);
+-			pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL,
+-						  &gpu_cfg);
+-
+-			tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
+-
+-			tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL,
+-						   tmp16);
++			pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
++			pcie_capability_set_word(rdev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
+ 
+ 			tmp = RREG32_PCIE(PCIE_LC_STATUS1);
+ 			max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT;
+@@ -7195,22 +7186,14 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev)
+ 				msleep(100);
+ 
+ 				/* linkctl */
+-				pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(root,
+-							   PCI_EXP_LNKCTL,
+-							   tmp16);
+-
+-				pcie_capability_read_word(rdev->pdev,
+-							  PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(rdev->pdev,
+-							   PCI_EXP_LNKCTL,
+-							   tmp16);
++				pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   bridge_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
++				pcie_capability_clear_and_set_word(rdev->pdev, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   gpu_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
+ 
+ 				/* linkctl2 */
+ 				pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
+diff --git a/drivers/gpu/drm/tegra/dpaux.c b/drivers/gpu/drm/tegra/dpaux.c
+index 105fb9cdbb3bd..f8b8107368a02 100644
+--- a/drivers/gpu/drm/tegra/dpaux.c
++++ b/drivers/gpu/drm/tegra/dpaux.c
+@@ -467,10 +467,8 @@ static int tegra_dpaux_probe(struct platform_device *pdev)
+ 		return PTR_ERR(dpaux->regs);
+ 
+ 	dpaux->irq = platform_get_irq(pdev, 0);
+-	if (dpaux->irq < 0) {
+-		dev_err(&pdev->dev, "failed to get IRQ\n");
+-		return -ENXIO;
+-	}
++	if (dpaux->irq < 0)
++		return dpaux->irq;
+ 
+ 	if (!pdev->dev.pm_domain) {
+ 		dpaux->rst = devm_reset_control_get(&pdev->dev, "dpaux");
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
+index 8e69303aad3f7..5f6eea81f3cc8 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
+@@ -214,7 +214,9 @@ static int zynqmp_dpsub_probe(struct platform_device *pdev)
+ 	dpsub->dev = &pdev->dev;
+ 	platform_set_drvdata(pdev, dpsub);
+ 
+-	dma_set_mask(dpsub->dev, DMA_BIT_MASK(ZYNQMP_DISP_MAX_DMA_BIT));
++	ret = dma_set_mask(dpsub->dev, DMA_BIT_MASK(ZYNQMP_DISP_MAX_DMA_BIT));
++	if (ret)
++		return ret;
+ 
+ 	/* Try the reserved memory. Proceed if there's none. */
+ 	of_reserved_mem_device_init(&pdev->dev);
+diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
+index 587259b3db97c..f4d79ec826797 100644
+--- a/drivers/hid/hid-logitech-dj.c
++++ b/drivers/hid/hid-logitech-dj.c
+@@ -1217,6 +1217,9 @@ static int logi_dj_recv_switch_to_dj_mode(struct dj_receiver_dev *djrcv_dev,
+ 		 * 50 msec should gives enough time to the receiver to be ready.
+ 		 */
+ 		msleep(50);
++
++		if (retval)
++			return retval;
+ 	}
+ 
+ 	/*
+@@ -1238,7 +1241,7 @@ static int logi_dj_recv_switch_to_dj_mode(struct dj_receiver_dev *djrcv_dev,
+ 	buf[5] = 0x09;
+ 	buf[6] = 0x00;
+ 
+-	hid_hw_raw_request(hdev, REPORT_ID_HIDPP_SHORT, buf,
++	retval = hid_hw_raw_request(hdev, REPORT_ID_HIDPP_SHORT, buf,
+ 			HIDPP_REPORT_SHORT_LENGTH, HID_OUTPUT_REPORT,
+ 			HID_REQ_SET_REPORT);
+ 
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index ea8c52f0aa783..dc7c33f6b2c4e 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1541,7 +1541,6 @@ static void mt_post_parse(struct mt_device *td, struct mt_application *app)
+ static int mt_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ {
+ 	struct mt_device *td = hid_get_drvdata(hdev);
+-	char *name;
+ 	const char *suffix = NULL;
+ 	struct mt_report_data *rdata;
+ 	struct mt_application *mt_application = NULL;
+@@ -1595,15 +1594,9 @@ static int mt_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ 		break;
+ 	}
+ 
+-	if (suffix) {
+-		name = devm_kzalloc(&hi->input->dev,
+-				    strlen(hdev->name) + strlen(suffix) + 2,
+-				    GFP_KERNEL);
+-		if (name) {
+-			sprintf(name, "%s %s", hdev->name, suffix);
+-			hi->input->name = name;
+-		}
+-	}
++	if (suffix)
++		hi->input->name = devm_kasprintf(&hdev->dev, GFP_KERNEL,
++						 "%s %s", hdev->name, suffix);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/hid/wacom.h b/drivers/hid/wacom.h
+index 3f8b24a57014b..c034a1e850e45 100644
+--- a/drivers/hid/wacom.h
++++ b/drivers/hid/wacom.h
+@@ -153,6 +153,7 @@ struct wacom_remote {
+ 		struct input_dev *input;
+ 		bool registered;
+ 		struct wacom_battery battery;
++		ktime_t active_time;
+ 	} remotes[WACOM_MAX_REMOTES];
+ };
+ 
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 36cb456709ed7..1a7e1d3e7a379 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2529,6 +2529,18 @@ fail:
+ 	return;
+ }
+ 
++static void wacom_remote_destroy_battery(struct wacom *wacom, int index)
++{
++	struct wacom_remote *remote = wacom->remote;
++
++	if (remote->remotes[index].battery.battery) {
++		devres_release_group(&wacom->hdev->dev,
++				     &remote->remotes[index].battery.bat_desc);
++		remote->remotes[index].battery.battery = NULL;
++		remote->remotes[index].active_time = 0;
++	}
++}
++
+ static void wacom_remote_destroy_one(struct wacom *wacom, unsigned int index)
+ {
+ 	struct wacom_remote *remote = wacom->remote;
+@@ -2543,9 +2555,7 @@ static void wacom_remote_destroy_one(struct wacom *wacom, unsigned int index)
+ 			remote->remotes[i].registered = false;
+ 			spin_unlock_irqrestore(&remote->remote_lock, flags);
+ 
+-			if (remote->remotes[i].battery.battery)
+-				devres_release_group(&wacom->hdev->dev,
+-						     &remote->remotes[i].battery.bat_desc);
++			wacom_remote_destroy_battery(wacom, i);
+ 
+ 			if (remote->remotes[i].group.name)
+ 				devres_release_group(&wacom->hdev->dev,
+@@ -2553,7 +2563,6 @@ static void wacom_remote_destroy_one(struct wacom *wacom, unsigned int index)
+ 
+ 			remote->remotes[i].serial = 0;
+ 			remote->remotes[i].group.name = NULL;
+-			remote->remotes[i].battery.battery = NULL;
+ 			wacom->led.groups[i].select = WACOM_STATUS_UNKNOWN;
+ 		}
+ 	}
+@@ -2638,6 +2647,9 @@ static int wacom_remote_attach_battery(struct wacom *wacom, int index)
+ 	if (remote->remotes[index].battery.battery)
+ 		return 0;
+ 
++	if (!remote->remotes[index].active_time)
++		return 0;
++
+ 	if (wacom->led.groups[index].select == WACOM_STATUS_UNKNOWN)
+ 		return 0;
+ 
+@@ -2653,6 +2665,7 @@ static void wacom_remote_work(struct work_struct *work)
+ {
+ 	struct wacom *wacom = container_of(work, struct wacom, remote_work);
+ 	struct wacom_remote *remote = wacom->remote;
++	ktime_t kt = ktime_get();
+ 	struct wacom_remote_data data;
+ 	unsigned long flags;
+ 	unsigned int count;
+@@ -2679,6 +2692,10 @@ static void wacom_remote_work(struct work_struct *work)
+ 		serial = data.remote[i].serial;
+ 		if (data.remote[i].connected) {
+ 
++			if (kt - remote->remotes[i].active_time > WACOM_REMOTE_BATTERY_TIMEOUT
++			    && remote->remotes[i].active_time != 0)
++				wacom_remote_destroy_battery(wacom, i);
++
+ 			if (remote->remotes[i].serial == serial) {
+ 				wacom_remote_attach_battery(wacom, i);
+ 				continue;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index de6fe98668700..1b5e5e9f577db 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1127,6 +1127,7 @@ static int wacom_remote_irq(struct wacom_wac *wacom_wac, size_t len)
+ 	if (index < 0 || !remote->remotes[index].registered)
+ 		goto out;
+ 
++	remote->remotes[i].active_time = ktime_get();
+ 	input = remote->remotes[index].input;
+ 
+ 	input_report_key(input, BTN_0, (data[9] & 0x01));
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index 166731292c359..d393f96626fb1 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -15,6 +15,7 @@
+ #define WACOM_NAME_MAX		64
+ #define WACOM_MAX_REMOTES	5
+ #define WACOM_STATUS_UNKNOWN	255
++#define WACOM_REMOTE_BATTERY_TIMEOUT	21000000000ll
+ 
+ /* packet length for individual models */
+ #define WACOM_PKGLEN_BBFUN	 9
+diff --git a/drivers/hwmon/tmp513.c b/drivers/hwmon/tmp513.c
+index 7d5f7441aceb1..b9a93ee9c2364 100644
+--- a/drivers/hwmon/tmp513.c
++++ b/drivers/hwmon/tmp513.c
+@@ -434,7 +434,7 @@ static umode_t tmp51x_is_visible(const void *_data,
+ 
+ 	switch (type) {
+ 	case hwmon_temp:
+-		if (data->id == tmp512 && channel == 4)
++		if (data->id == tmp512 && channel == 3)
+ 			return 0;
+ 		switch (attr) {
+ 		case hwmon_temp_input:
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+index 8978f3410bee5..eee069e95b9f6 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+@@ -426,7 +426,7 @@ static int tmc_set_etf_buffer(struct coresight_device *csdev,
+ 		return -EINVAL;
+ 
+ 	/* wrap head around to the amount of space we have */
+-	head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1);
++	head = handle->head & (((unsigned long)buf->nr_pages << PAGE_SHIFT) - 1);
+ 
+ 	/* find the page to write to */
+ 	buf->cur = head / PAGE_SIZE;
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+index 3e74f5aed20d7..ae2dd0c88f4eb 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etr.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+@@ -47,7 +47,8 @@ struct etr_perf_buffer {
+ };
+ 
+ /* Convert the perf index to an offset within the ETR buffer */
+-#define PERF_IDX2OFF(idx, buf)	((idx) % ((buf)->nr_pages << PAGE_SHIFT))
++#define PERF_IDX2OFF(idx, buf)		\
++		((idx) % ((unsigned long)(buf)->nr_pages << PAGE_SHIFT))
+ 
+ /* Lower limit for ETR hardware buffer */
+ #define TMC_ETR_PERF_MIN_BUF_SIZE	SZ_1M
+@@ -1232,7 +1233,7 @@ alloc_etr_buf(struct tmc_drvdata *drvdata, struct perf_event *event,
+ 	 * than the size requested via sysfs.
+ 	 */
+ 	if ((nr_pages << PAGE_SHIFT) > drvdata->size) {
+-		etr_buf = tmc_alloc_etr_buf(drvdata, (nr_pages << PAGE_SHIFT),
++		etr_buf = tmc_alloc_etr_buf(drvdata, ((ssize_t)nr_pages << PAGE_SHIFT),
+ 					    0, node, NULL);
+ 		if (!IS_ERR(etr_buf))
+ 			goto done;
+diff --git a/drivers/hwtracing/coresight/coresight-tmc.h b/drivers/hwtracing/coresight/coresight-tmc.h
+index b91ec7dde7bc9..3655b3bfb2e32 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc.h
++++ b/drivers/hwtracing/coresight/coresight-tmc.h
+@@ -321,7 +321,7 @@ ssize_t tmc_sg_table_get_data(struct tmc_sg_table *sg_table,
+ static inline unsigned long
+ tmc_sg_table_buf_size(struct tmc_sg_table *sg_table)
+ {
+-	return sg_table->data_pages.nr_pages << PAGE_SHIFT;
++	return (unsigned long)sg_table->data_pages.nr_pages << PAGE_SHIFT;
+ }
+ 
+ struct coresight_device *tmc_etr_get_catu_device(struct tmc_drvdata *drvdata);
+diff --git a/drivers/infiniband/core/uverbs_std_types_counters.c b/drivers/infiniband/core/uverbs_std_types_counters.c
+index b3c6c066b6010..c61b10fbf90a9 100644
+--- a/drivers/infiniband/core/uverbs_std_types_counters.c
++++ b/drivers/infiniband/core/uverbs_std_types_counters.c
+@@ -108,6 +108,8 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)(
+ 		return ret;
+ 
+ 	uattr = uverbs_attr_get(attrs, UVERBS_ATTR_READ_COUNTERS_BUFF);
++	if (IS_ERR(uattr))
++		return PTR_ERR(uattr);
+ 	read_attr.ncounters = uattr->ptr_attr.len / sizeof(u64);
+ 	read_attr.counters_buff = uverbs_zalloc(
+ 		attrs, array_size(read_attr.ncounters, sizeof(u64)));
+diff --git a/drivers/infiniband/sw/siw/siw_cm.c b/drivers/infiniband/sw/siw/siw_cm.c
+index b87ba4c9fccf1..de5ab282ac748 100644
+--- a/drivers/infiniband/sw/siw/siw_cm.c
++++ b/drivers/infiniband/sw/siw/siw_cm.c
+@@ -1490,7 +1490,6 @@ error:
+ 
+ 		cep->cm_id = NULL;
+ 		id->rem_ref(id);
+-		siw_cep_put(cep);
+ 
+ 		qp->cep = NULL;
+ 		siw_cep_put(cep);
+diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
+index d043793ff0f53..1d4e0dc550e42 100644
+--- a/drivers/infiniband/sw/siw/siw_verbs.c
++++ b/drivers/infiniband/sw/siw/siw_verbs.c
+@@ -1487,7 +1487,7 @@ int siw_map_mr_sg(struct ib_mr *base_mr, struct scatterlist *sl, int num_sle,
+ 
+ 	if (pbl->max_buf < num_sle) {
+ 		siw_dbg_mem(mem, "too many SGE's: %d > %d\n",
+-			    mem->pbl->max_buf, num_sle);
++			    num_sle, pbl->max_buf);
+ 		return -ENOMEM;
+ 	}
+ 	for_each_sg(sl, slp, num_sle, i) {
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 7cd90604502ec..ed375f517e8ac 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -2560,6 +2560,8 @@ static void isert_wait_conn(struct iscsi_conn *conn)
+ 	isert_put_unsol_pending_cmds(conn);
+ 	isert_wait4cmds(conn);
+ 	isert_wait4logout(isert_conn);
++
++	queue_work(isert_release_wq, &isert_conn->release_work);
+ }
+ 
+ static void isert_free_conn(struct iscsi_conn *conn)
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index adbd56af379ff..9b9b9557ae746 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -1990,12 +1990,8 @@ static void srp_process_rsp(struct srp_rdma_ch *ch, struct srp_rsp *rsp)
+ 
+ 		if (unlikely(rsp->flags & SRP_RSP_FLAG_DIUNDER))
+ 			scsi_set_resid(scmnd, be32_to_cpu(rsp->data_in_res_cnt));
+-		else if (unlikely(rsp->flags & SRP_RSP_FLAG_DIOVER))
+-			scsi_set_resid(scmnd, -be32_to_cpu(rsp->data_in_res_cnt));
+ 		else if (unlikely(rsp->flags & SRP_RSP_FLAG_DOUNDER))
+ 			scsi_set_resid(scmnd, be32_to_cpu(rsp->data_out_res_cnt));
+-		else if (unlikely(rsp->flags & SRP_RSP_FLAG_DOOVER))
+-			scsi_set_resid(scmnd, -be32_to_cpu(rsp->data_out_res_cnt));
+ 
+ 		srp_free_req(ch, req, scmnd,
+ 			     be32_to_cpu(rsp->req_lim_delta));
+diff --git a/drivers/iommu/arm/arm-smmu/qcom_iommu.c b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+index a24390c548a91..37c8f75a35801 100644
+--- a/drivers/iommu/arm/arm-smmu/qcom_iommu.c
++++ b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+@@ -283,6 +283,13 @@ static int qcom_iommu_init_domain(struct iommu_domain *domain,
+ 			ctx->secure_init = true;
+ 		}
+ 
++		/* Disable context bank before programming */
++		iommu_writel(ctx, ARM_SMMU_CB_SCTLR, 0);
++
++		/* Clear context bank fault address fault status registers */
++		iommu_writel(ctx, ARM_SMMU_CB_FAR, 0);
++		iommu_writel(ctx, ARM_SMMU_CB_FSR, ARM_SMMU_FSR_FAULT);
++
+ 		/* TTBRs */
+ 		iommu_writeq(ctx, ARM_SMMU_CB_TTBR0,
+ 				pgtbl_cfg.arm_lpae_s1_cfg.ttbr |
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index 80d6412e2c546..9b24e8224379e 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -187,7 +187,7 @@ attach_out:
+ 	device_attach_pasid_table(info, pasid_table);
+ 
+ 	if (!ecap_coherent(info->iommu->ecap))
+-		clflush_cache_range(pasid_table->table, size);
++		clflush_cache_range(pasid_table->table, (1 << order) * PAGE_SIZE);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index f843ade442dec..b28302836b2e9 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -2476,11 +2476,35 @@ backlog_store(struct mddev *mddev, const char *buf, size_t len)
+ {
+ 	unsigned long backlog;
+ 	unsigned long old_mwb = mddev->bitmap_info.max_write_behind;
++	struct md_rdev *rdev;
++	bool has_write_mostly = false;
+ 	int rv = kstrtoul(buf, 10, &backlog);
+ 	if (rv)
+ 		return rv;
+ 	if (backlog > COUNTER_MAX)
+ 		return -EINVAL;
++
++	rv = mddev_lock(mddev);
++	if (rv)
++		return rv;
++
++	/*
++	 * Without write mostly device, it doesn't make sense to set
++	 * backlog for max_write_behind.
++	 */
++	rdev_for_each(rdev, mddev) {
++		if (test_bit(WriteMostly, &rdev->flags)) {
++			has_write_mostly = true;
++			break;
++		}
++	}
++	if (!has_write_mostly) {
++		pr_warn_ratelimited("%s: can't set backlog, no write mostly device available\n",
++				    mdname(mddev));
++		mddev_unlock(mddev);
++		return -EINVAL;
++	}
++
+ 	mddev->bitmap_info.max_write_behind = backlog;
+ 	if (!backlog && mddev->serial_info_pool) {
+ 		/* serial_info_pool is not needed if backlog is zero */
+@@ -2488,13 +2512,13 @@ backlog_store(struct mddev *mddev, const char *buf, size_t len)
+ 			mddev_destroy_serial_pool(mddev, NULL, false);
+ 	} else if (backlog && !mddev->serial_info_pool) {
+ 		/* serial_info_pool is needed since backlog is not zero */
+-		struct md_rdev *rdev;
+-
+ 		rdev_for_each(rdev, mddev)
+ 			mddev_create_serial_pool(mddev, rdev, false);
+ 	}
+ 	if (old_mwb != backlog)
+ 		md_bitmap_update_sb(mddev->bitmap);
++
++	mddev_unlock(mddev);
+ 	return len;
+ }
+ 
+diff --git a/drivers/media/cec/usb/pulse8/pulse8-cec.c b/drivers/media/cec/usb/pulse8/pulse8-cec.c
+index 04b13cdc38d2c..ba67587bd43ec 100644
+--- a/drivers/media/cec/usb/pulse8/pulse8-cec.c
++++ b/drivers/media/cec/usb/pulse8/pulse8-cec.c
+@@ -809,8 +809,11 @@ static void pulse8_ping_eeprom_work_handler(struct work_struct *work)
+ 
+ 	mutex_lock(&pulse8->lock);
+ 	cmd = MSGCODE_PING;
+-	pulse8_send_and_wait(pulse8, &cmd, 1,
+-			     MSGCODE_COMMAND_ACCEPTED, 0);
++	if (pulse8_send_and_wait(pulse8, &cmd, 1,
++				 MSGCODE_COMMAND_ACCEPTED, 0)) {
++		dev_warn(pulse8->dev, "failed to ping EEPROM\n");
++		goto unlock;
++	}
+ 
+ 	if (pulse8->vers < 2)
+ 		goto unlock;
+diff --git a/drivers/media/dvb-frontends/ascot2e.c b/drivers/media/dvb-frontends/ascot2e.c
+index 9b00b56230b61..cf8e5f1bd1018 100644
+--- a/drivers/media/dvb-frontends/ascot2e.c
++++ b/drivers/media/dvb-frontends/ascot2e.c
+@@ -533,7 +533,7 @@ struct dvb_frontend *ascot2e_attach(struct dvb_frontend *fe,
+ 		priv->i2c_address, priv->i2c);
+ 	return fe;
+ }
+-EXPORT_SYMBOL(ascot2e_attach);
++EXPORT_SYMBOL_GPL(ascot2e_attach);
+ 
+ MODULE_DESCRIPTION("Sony ASCOT2E terr/cab tuner driver");
+ MODULE_AUTHOR("info@netup.ru");
+diff --git a/drivers/media/dvb-frontends/atbm8830.c b/drivers/media/dvb-frontends/atbm8830.c
+index bdd16b9c58244..778c865085bf9 100644
+--- a/drivers/media/dvb-frontends/atbm8830.c
++++ b/drivers/media/dvb-frontends/atbm8830.c
+@@ -489,7 +489,7 @@ error_out:
+ 	return NULL;
+ 
+ }
+-EXPORT_SYMBOL(atbm8830_attach);
++EXPORT_SYMBOL_GPL(atbm8830_attach);
+ 
+ MODULE_DESCRIPTION("AltoBeam ATBM8830/8831 GB20600 demodulator driver");
+ MODULE_AUTHOR("David T. L. Wong <davidtlwong@gmail.com>");
+diff --git a/drivers/media/dvb-frontends/au8522_dig.c b/drivers/media/dvb-frontends/au8522_dig.c
+index 78cafdf279618..230436bf6cbd9 100644
+--- a/drivers/media/dvb-frontends/au8522_dig.c
++++ b/drivers/media/dvb-frontends/au8522_dig.c
+@@ -879,7 +879,7 @@ error:
+ 	au8522_release_state(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(au8522_attach);
++EXPORT_SYMBOL_GPL(au8522_attach);
+ 
+ static const struct dvb_frontend_ops au8522_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/bcm3510.c b/drivers/media/dvb-frontends/bcm3510.c
+index 68b92b4419cff..b3f5c49accafd 100644
+--- a/drivers/media/dvb-frontends/bcm3510.c
++++ b/drivers/media/dvb-frontends/bcm3510.c
+@@ -835,7 +835,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(bcm3510_attach);
++EXPORT_SYMBOL_GPL(bcm3510_attach);
+ 
+ static const struct dvb_frontend_ops bcm3510_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/cx22700.c b/drivers/media/dvb-frontends/cx22700.c
+index b39ff516271b2..1d04c0a652b26 100644
+--- a/drivers/media/dvb-frontends/cx22700.c
++++ b/drivers/media/dvb-frontends/cx22700.c
+@@ -432,4 +432,4 @@ MODULE_DESCRIPTION("Conexant CX22700 DVB-T Demodulator driver");
+ MODULE_AUTHOR("Holger Waechtler");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(cx22700_attach);
++EXPORT_SYMBOL_GPL(cx22700_attach);
+diff --git a/drivers/media/dvb-frontends/cx22702.c b/drivers/media/dvb-frontends/cx22702.c
+index cc6acbf6393d4..61ad34b7004b5 100644
+--- a/drivers/media/dvb-frontends/cx22702.c
++++ b/drivers/media/dvb-frontends/cx22702.c
+@@ -604,7 +604,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(cx22702_attach);
++EXPORT_SYMBOL_GPL(cx22702_attach);
+ 
+ static const struct dvb_frontend_ops cx22702_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/cx24110.c b/drivers/media/dvb-frontends/cx24110.c
+index 6f99d6a27be2d..9aeea089756fe 100644
+--- a/drivers/media/dvb-frontends/cx24110.c
++++ b/drivers/media/dvb-frontends/cx24110.c
+@@ -653,4 +653,4 @@ MODULE_DESCRIPTION("Conexant CX24110 DVB-S Demodulator driver");
+ MODULE_AUTHOR("Peter Hettkamp");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(cx24110_attach);
++EXPORT_SYMBOL_GPL(cx24110_attach);
+diff --git a/drivers/media/dvb-frontends/cx24113.c b/drivers/media/dvb-frontends/cx24113.c
+index 60a9f70275f75..619df8329fbbc 100644
+--- a/drivers/media/dvb-frontends/cx24113.c
++++ b/drivers/media/dvb-frontends/cx24113.c
+@@ -590,7 +590,7 @@ error:
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(cx24113_attach);
++EXPORT_SYMBOL_GPL(cx24113_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Activates frontend debugging (default:0)");
+diff --git a/drivers/media/dvb-frontends/cx24116.c b/drivers/media/dvb-frontends/cx24116.c
+index ea8264ccbb4e8..8b978a9f74a4e 100644
+--- a/drivers/media/dvb-frontends/cx24116.c
++++ b/drivers/media/dvb-frontends/cx24116.c
+@@ -1133,7 +1133,7 @@ struct dvb_frontend *cx24116_attach(const struct cx24116_config *config,
+ 	state->frontend.demodulator_priv = state;
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(cx24116_attach);
++EXPORT_SYMBOL_GPL(cx24116_attach);
+ 
+ /*
+  * Initialise or wake up device
+diff --git a/drivers/media/dvb-frontends/cx24120.c b/drivers/media/dvb-frontends/cx24120.c
+index 2464b63fe0cf4..0fa033633f4cd 100644
+--- a/drivers/media/dvb-frontends/cx24120.c
++++ b/drivers/media/dvb-frontends/cx24120.c
+@@ -305,7 +305,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(cx24120_attach);
++EXPORT_SYMBOL_GPL(cx24120_attach);
+ 
+ static int cx24120_test_rom(struct cx24120_state *state)
+ {
+@@ -972,7 +972,9 @@ static void cx24120_set_clock_ratios(struct dvb_frontend *fe)
+ 	cmd.arg[8] = (clock_ratios_table[idx].rate >> 8) & 0xff;
+ 	cmd.arg[9] = (clock_ratios_table[idx].rate >> 0) & 0xff;
+ 
+-	cx24120_message_send(state, &cmd);
++	ret = cx24120_message_send(state, &cmd);
++	if (ret != 0)
++		return;
+ 
+ 	/* Calculate ber window rates for stat work */
+ 	cx24120_calculate_ber_window(state, clock_ratios_table[idx].rate);
+diff --git a/drivers/media/dvb-frontends/cx24123.c b/drivers/media/dvb-frontends/cx24123.c
+index 3d84ee17e54c6..539889e638ccc 100644
+--- a/drivers/media/dvb-frontends/cx24123.c
++++ b/drivers/media/dvb-frontends/cx24123.c
+@@ -1096,7 +1096,7 @@ error:
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(cx24123_attach);
++EXPORT_SYMBOL_GPL(cx24123_attach);
+ 
+ static const struct dvb_frontend_ops cx24123_ops = {
+ 	.delsys = { SYS_DVBS },
+diff --git a/drivers/media/dvb-frontends/cxd2820r_core.c b/drivers/media/dvb-frontends/cxd2820r_core.c
+index b1618339eec0e..b0e6343ea5911 100644
+--- a/drivers/media/dvb-frontends/cxd2820r_core.c
++++ b/drivers/media/dvb-frontends/cxd2820r_core.c
+@@ -536,7 +536,7 @@ struct dvb_frontend *cxd2820r_attach(const struct cxd2820r_config *config,
+ 
+ 	return pdata.get_dvb_frontend(client);
+ }
+-EXPORT_SYMBOL(cxd2820r_attach);
++EXPORT_SYMBOL_GPL(cxd2820r_attach);
+ 
+ static struct dvb_frontend *cxd2820r_get_dvb_frontend(struct i2c_client *client)
+ {
+diff --git a/drivers/media/dvb-frontends/cxd2841er.c b/drivers/media/dvb-frontends/cxd2841er.c
+index 758c95bc3b113..493ba8b6b8f62 100644
+--- a/drivers/media/dvb-frontends/cxd2841er.c
++++ b/drivers/media/dvb-frontends/cxd2841er.c
+@@ -3930,14 +3930,14 @@ struct dvb_frontend *cxd2841er_attach_s(struct cxd2841er_config *cfg,
+ {
+ 	return cxd2841er_attach(cfg, i2c, SYS_DVBS);
+ }
+-EXPORT_SYMBOL(cxd2841er_attach_s);
++EXPORT_SYMBOL_GPL(cxd2841er_attach_s);
+ 
+ struct dvb_frontend *cxd2841er_attach_t_c(struct cxd2841er_config *cfg,
+ 					struct i2c_adapter *i2c)
+ {
+ 	return cxd2841er_attach(cfg, i2c, 0);
+ }
+-EXPORT_SYMBOL(cxd2841er_attach_t_c);
++EXPORT_SYMBOL_GPL(cxd2841er_attach_t_c);
+ 
+ static const struct dvb_frontend_ops cxd2841er_dvbs_s2_ops = {
+ 	.delsys = { SYS_DVBS, SYS_DVBS2 },
+diff --git a/drivers/media/dvb-frontends/cxd2880/cxd2880_top.c b/drivers/media/dvb-frontends/cxd2880/cxd2880_top.c
+index d5b1b3788e392..09d31c368741d 100644
+--- a/drivers/media/dvb-frontends/cxd2880/cxd2880_top.c
++++ b/drivers/media/dvb-frontends/cxd2880/cxd2880_top.c
+@@ -1950,7 +1950,7 @@ struct dvb_frontend *cxd2880_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(cxd2880_attach);
++EXPORT_SYMBOL_GPL(cxd2880_attach);
+ 
+ MODULE_DESCRIPTION("Sony CXD2880 DVB-T2/T tuner + demod driver");
+ MODULE_AUTHOR("Sony Semiconductor Solutions Corporation");
+diff --git a/drivers/media/dvb-frontends/dib0070.c b/drivers/media/dvb-frontends/dib0070.c
+index cafb41dba861c..9a8e7cdd2a247 100644
+--- a/drivers/media/dvb-frontends/dib0070.c
++++ b/drivers/media/dvb-frontends/dib0070.c
+@@ -762,7 +762,7 @@ free_mem:
+ 	fe->tuner_priv = NULL;
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(dib0070_attach);
++EXPORT_SYMBOL_GPL(dib0070_attach);
+ 
+ MODULE_AUTHOR("Patrick Boettcher <patrick.boettcher@posteo.de>");
+ MODULE_DESCRIPTION("Driver for the DiBcom 0070 base-band RF Tuner");
+diff --git a/drivers/media/dvb-frontends/dib0090.c b/drivers/media/dvb-frontends/dib0090.c
+index 08a85831e917f..bb2fc12c114cc 100644
+--- a/drivers/media/dvb-frontends/dib0090.c
++++ b/drivers/media/dvb-frontends/dib0090.c
+@@ -2632,7 +2632,7 @@ struct dvb_frontend *dib0090_register(struct dvb_frontend *fe, struct i2c_adapte
+ 	return NULL;
+ }
+ 
+-EXPORT_SYMBOL(dib0090_register);
++EXPORT_SYMBOL_GPL(dib0090_register);
+ 
+ struct dvb_frontend *dib0090_fw_register(struct dvb_frontend *fe, struct i2c_adapter *i2c, const struct dib0090_config *config)
+ {
+@@ -2658,7 +2658,7 @@ free_mem:
+ 	fe->tuner_priv = NULL;
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(dib0090_fw_register);
++EXPORT_SYMBOL_GPL(dib0090_fw_register);
+ 
+ MODULE_AUTHOR("Patrick Boettcher <patrick.boettcher@posteo.de>");
+ MODULE_AUTHOR("Olivier Grenie <olivier.grenie@parrot.com>");
+diff --git a/drivers/media/dvb-frontends/dib3000mb.c b/drivers/media/dvb-frontends/dib3000mb.c
+index a6c2fc4586eb3..c598b2a633256 100644
+--- a/drivers/media/dvb-frontends/dib3000mb.c
++++ b/drivers/media/dvb-frontends/dib3000mb.c
+@@ -815,4 +815,4 @@ MODULE_AUTHOR(DRIVER_AUTHOR);
+ MODULE_DESCRIPTION(DRIVER_DESC);
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(dib3000mb_attach);
++EXPORT_SYMBOL_GPL(dib3000mb_attach);
+diff --git a/drivers/media/dvb-frontends/dib3000mc.c b/drivers/media/dvb-frontends/dib3000mc.c
+index 692600ce5f230..c69665024330c 100644
+--- a/drivers/media/dvb-frontends/dib3000mc.c
++++ b/drivers/media/dvb-frontends/dib3000mc.c
+@@ -935,7 +935,7 @@ error:
+ 	kfree(st);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(dib3000mc_attach);
++EXPORT_SYMBOL_GPL(dib3000mc_attach);
+ 
+ static const struct dvb_frontend_ops dib3000mc_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/dib7000m.c b/drivers/media/dvb-frontends/dib7000m.c
+index 97ce97789c9e3..fdb22f32e3a11 100644
+--- a/drivers/media/dvb-frontends/dib7000m.c
++++ b/drivers/media/dvb-frontends/dib7000m.c
+@@ -1434,7 +1434,7 @@ error:
+ 	kfree(st);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(dib7000m_attach);
++EXPORT_SYMBOL_GPL(dib7000m_attach);
+ 
+ static const struct dvb_frontend_ops dib7000m_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/dib7000p.c b/drivers/media/dvb-frontends/dib7000p.c
+index 55bee50aa8716..8c426baf76ee3 100644
+--- a/drivers/media/dvb-frontends/dib7000p.c
++++ b/drivers/media/dvb-frontends/dib7000p.c
+@@ -497,7 +497,7 @@ static int dib7000p_update_pll(struct dvb_frontend *fe, struct dibx000_bandwidth
+ 	prediv = reg_1856 & 0x3f;
+ 	loopdiv = (reg_1856 >> 6) & 0x3f;
+ 
+-	if ((bw != NULL) && (bw->pll_prediv != prediv || bw->pll_ratio != loopdiv)) {
++	if (loopdiv && bw && (bw->pll_prediv != prediv || bw->pll_ratio != loopdiv)) {
+ 		dprintk("Updating pll (prediv: old =  %d new = %d ; loopdiv : old = %d new = %d)\n", prediv, bw->pll_prediv, loopdiv, bw->pll_ratio);
+ 		reg_1856 &= 0xf000;
+ 		reg_1857 = dib7000p_read_word(state, 1857);
+@@ -2822,7 +2822,7 @@ void *dib7000p_attach(struct dib7000p_ops *ops)
+ 
+ 	return ops;
+ }
+-EXPORT_SYMBOL(dib7000p_attach);
++EXPORT_SYMBOL_GPL(dib7000p_attach);
+ 
+ static const struct dvb_frontend_ops dib7000p_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c
+index d67f2dd997d06..02cb48223dc67 100644
+--- a/drivers/media/dvb-frontends/dib8000.c
++++ b/drivers/media/dvb-frontends/dib8000.c
+@@ -4527,7 +4527,7 @@ void *dib8000_attach(struct dib8000_ops *ops)
+ 
+ 	return ops;
+ }
+-EXPORT_SYMBOL(dib8000_attach);
++EXPORT_SYMBOL_GPL(dib8000_attach);
+ 
+ MODULE_AUTHOR("Olivier Grenie <Olivier.Grenie@parrot.com, Patrick Boettcher <patrick.boettcher@posteo.de>");
+ MODULE_DESCRIPTION("Driver for the DiBcom 8000 ISDB-T demodulator");
+diff --git a/drivers/media/dvb-frontends/dib9000.c b/drivers/media/dvb-frontends/dib9000.c
+index 04d92d6142797..24f7f7a7598d4 100644
+--- a/drivers/media/dvb-frontends/dib9000.c
++++ b/drivers/media/dvb-frontends/dib9000.c
+@@ -2546,7 +2546,7 @@ error:
+ 	kfree(st);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(dib9000_attach);
++EXPORT_SYMBOL_GPL(dib9000_attach);
+ 
+ static const struct dvb_frontend_ops dib9000_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/drx39xyj/drxj.c b/drivers/media/dvb-frontends/drx39xyj/drxj.c
+index 237b9d04c0766..499b37b78dd5f 100644
+--- a/drivers/media/dvb-frontends/drx39xyj/drxj.c
++++ b/drivers/media/dvb-frontends/drx39xyj/drxj.c
+@@ -12375,7 +12375,7 @@ error:
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(drx39xxj_attach);
++EXPORT_SYMBOL_GPL(drx39xxj_attach);
+ 
+ static const struct dvb_frontend_ops drx39xxj_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/drxd_hard.c b/drivers/media/dvb-frontends/drxd_hard.c
+index 45f9828639042..e3236adf84021 100644
+--- a/drivers/media/dvb-frontends/drxd_hard.c
++++ b/drivers/media/dvb-frontends/drxd_hard.c
+@@ -2948,7 +2948,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(drxd_attach);
++EXPORT_SYMBOL_GPL(drxd_attach);
+ 
+ MODULE_DESCRIPTION("DRXD driver");
+ MODULE_AUTHOR("Micronas");
+diff --git a/drivers/media/dvb-frontends/drxk_hard.c b/drivers/media/dvb-frontends/drxk_hard.c
+index 2134e25096aac..a6b86c181d1c9 100644
+--- a/drivers/media/dvb-frontends/drxk_hard.c
++++ b/drivers/media/dvb-frontends/drxk_hard.c
+@@ -6845,7 +6845,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(drxk_attach);
++EXPORT_SYMBOL_GPL(drxk_attach);
+ 
+ MODULE_DESCRIPTION("DRX-K driver");
+ MODULE_AUTHOR("Ralph Metzler");
+diff --git a/drivers/media/dvb-frontends/ds3000.c b/drivers/media/dvb-frontends/ds3000.c
+index 20fcf31af1658..515aa7c7baf2a 100644
+--- a/drivers/media/dvb-frontends/ds3000.c
++++ b/drivers/media/dvb-frontends/ds3000.c
+@@ -859,7 +859,7 @@ struct dvb_frontend *ds3000_attach(const struct ds3000_config *config,
+ 	ds3000_set_voltage(&state->frontend, SEC_VOLTAGE_OFF);
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(ds3000_attach);
++EXPORT_SYMBOL_GPL(ds3000_attach);
+ 
+ static int ds3000_set_carrier_offset(struct dvb_frontend *fe,
+ 					s32 carrier_offset_khz)
+diff --git a/drivers/media/dvb-frontends/dvb-pll.c b/drivers/media/dvb-frontends/dvb-pll.c
+index d45b4ddc8f912..846bfe7ef30eb 100644
+--- a/drivers/media/dvb-frontends/dvb-pll.c
++++ b/drivers/media/dvb-frontends/dvb-pll.c
+@@ -866,7 +866,7 @@ out:
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(dvb_pll_attach);
++EXPORT_SYMBOL_GPL(dvb_pll_attach);
+ 
+ 
+ static int
+diff --git a/drivers/media/dvb-frontends/ec100.c b/drivers/media/dvb-frontends/ec100.c
+index 03bd80666cf83..2ad0a3c2f7567 100644
+--- a/drivers/media/dvb-frontends/ec100.c
++++ b/drivers/media/dvb-frontends/ec100.c
+@@ -299,7 +299,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(ec100_attach);
++EXPORT_SYMBOL_GPL(ec100_attach);
+ 
+ static const struct dvb_frontend_ops ec100_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/helene.c b/drivers/media/dvb-frontends/helene.c
+index 8c1310c6b0bc2..c299d31dc7d27 100644
+--- a/drivers/media/dvb-frontends/helene.c
++++ b/drivers/media/dvb-frontends/helene.c
+@@ -1025,7 +1025,7 @@ struct dvb_frontend *helene_attach_s(struct dvb_frontend *fe,
+ 			priv->i2c_address, priv->i2c);
+ 	return fe;
+ }
+-EXPORT_SYMBOL(helene_attach_s);
++EXPORT_SYMBOL_GPL(helene_attach_s);
+ 
+ struct dvb_frontend *helene_attach(struct dvb_frontend *fe,
+ 		const struct helene_config *config,
+@@ -1061,7 +1061,7 @@ struct dvb_frontend *helene_attach(struct dvb_frontend *fe,
+ 			priv->i2c_address, priv->i2c);
+ 	return fe;
+ }
+-EXPORT_SYMBOL(helene_attach);
++EXPORT_SYMBOL_GPL(helene_attach);
+ 
+ static int helene_probe(struct i2c_client *client,
+ 			const struct i2c_device_id *id)
+diff --git a/drivers/media/dvb-frontends/horus3a.c b/drivers/media/dvb-frontends/horus3a.c
+index 24bf5cbcc1846..0330b78a5b3f2 100644
+--- a/drivers/media/dvb-frontends/horus3a.c
++++ b/drivers/media/dvb-frontends/horus3a.c
+@@ -395,7 +395,7 @@ struct dvb_frontend *horus3a_attach(struct dvb_frontend *fe,
+ 		priv->i2c_address, priv->i2c);
+ 	return fe;
+ }
+-EXPORT_SYMBOL(horus3a_attach);
++EXPORT_SYMBOL_GPL(horus3a_attach);
+ 
+ MODULE_DESCRIPTION("Sony HORUS3A satellite tuner driver");
+ MODULE_AUTHOR("Sergey Kozlov <serjk@netup.ru>");
+diff --git a/drivers/media/dvb-frontends/isl6405.c b/drivers/media/dvb-frontends/isl6405.c
+index 2cd69b4ff82cb..7d28a743f97eb 100644
+--- a/drivers/media/dvb-frontends/isl6405.c
++++ b/drivers/media/dvb-frontends/isl6405.c
+@@ -141,7 +141,7 @@ struct dvb_frontend *isl6405_attach(struct dvb_frontend *fe, struct i2c_adapter
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(isl6405_attach);
++EXPORT_SYMBOL_GPL(isl6405_attach);
+ 
+ MODULE_DESCRIPTION("Driver for lnb supply and control ic isl6405");
+ MODULE_AUTHOR("Hartmut Hackmann & Oliver Endriss");
+diff --git a/drivers/media/dvb-frontends/isl6421.c b/drivers/media/dvb-frontends/isl6421.c
+index 43b0dfc6f453e..2e9f6f12f849e 100644
+--- a/drivers/media/dvb-frontends/isl6421.c
++++ b/drivers/media/dvb-frontends/isl6421.c
+@@ -213,7 +213,7 @@ struct dvb_frontend *isl6421_attach(struct dvb_frontend *fe, struct i2c_adapter
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(isl6421_attach);
++EXPORT_SYMBOL_GPL(isl6421_attach);
+ 
+ MODULE_DESCRIPTION("Driver for lnb supply and control ic isl6421");
+ MODULE_AUTHOR("Andrew de Quincey & Oliver Endriss");
+diff --git a/drivers/media/dvb-frontends/isl6423.c b/drivers/media/dvb-frontends/isl6423.c
+index 8cd1bb88ce6e7..a0d0a38340574 100644
+--- a/drivers/media/dvb-frontends/isl6423.c
++++ b/drivers/media/dvb-frontends/isl6423.c
+@@ -289,7 +289,7 @@ exit:
+ 	fe->sec_priv = NULL;
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(isl6423_attach);
++EXPORT_SYMBOL_GPL(isl6423_attach);
+ 
+ MODULE_DESCRIPTION("ISL6423 SEC");
+ MODULE_AUTHOR("Manu Abraham");
+diff --git a/drivers/media/dvb-frontends/itd1000.c b/drivers/media/dvb-frontends/itd1000.c
+index 1b33478653d16..f8f362f50e78d 100644
+--- a/drivers/media/dvb-frontends/itd1000.c
++++ b/drivers/media/dvb-frontends/itd1000.c
+@@ -389,7 +389,7 @@ struct dvb_frontend *itd1000_attach(struct dvb_frontend *fe, struct i2c_adapter
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(itd1000_attach);
++EXPORT_SYMBOL_GPL(itd1000_attach);
+ 
+ MODULE_AUTHOR("Patrick Boettcher <pb@linuxtv.org>");
+ MODULE_DESCRIPTION("Integrant ITD1000 driver");
+diff --git a/drivers/media/dvb-frontends/ix2505v.c b/drivers/media/dvb-frontends/ix2505v.c
+index 73f27105c139d..3212e333d472b 100644
+--- a/drivers/media/dvb-frontends/ix2505v.c
++++ b/drivers/media/dvb-frontends/ix2505v.c
+@@ -302,7 +302,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(ix2505v_attach);
++EXPORT_SYMBOL_GPL(ix2505v_attach);
+ 
+ module_param_named(debug, ix2505v_debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/l64781.c b/drivers/media/dvb-frontends/l64781.c
+index c5106a1ea1cd0..fe5af2453d559 100644
+--- a/drivers/media/dvb-frontends/l64781.c
++++ b/drivers/media/dvb-frontends/l64781.c
+@@ -593,4 +593,4 @@ MODULE_DESCRIPTION("LSI L64781 DVB-T Demodulator driver");
+ MODULE_AUTHOR("Holger Waechtler, Marko Kohtala");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(l64781_attach);
++EXPORT_SYMBOL_GPL(l64781_attach);
+diff --git a/drivers/media/dvb-frontends/lg2160.c b/drivers/media/dvb-frontends/lg2160.c
+index f343066c297e2..fe700aa56bff3 100644
+--- a/drivers/media/dvb-frontends/lg2160.c
++++ b/drivers/media/dvb-frontends/lg2160.c
+@@ -1426,7 +1426,7 @@ struct dvb_frontend *lg2160_attach(const struct lg2160_config *config,
+ 
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(lg2160_attach);
++EXPORT_SYMBOL_GPL(lg2160_attach);
+ 
+ MODULE_DESCRIPTION("LG Electronics LG216x ATSC/MH Demodulator Driver");
+ MODULE_AUTHOR("Michael Krufky <mkrufky@linuxtv.org>");
+diff --git a/drivers/media/dvb-frontends/lgdt3305.c b/drivers/media/dvb-frontends/lgdt3305.c
+index 62d7439889196..60a97f1cc74e5 100644
+--- a/drivers/media/dvb-frontends/lgdt3305.c
++++ b/drivers/media/dvb-frontends/lgdt3305.c
+@@ -1148,7 +1148,7 @@ fail:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(lgdt3305_attach);
++EXPORT_SYMBOL_GPL(lgdt3305_attach);
+ 
+ static const struct dvb_frontend_ops lgdt3304_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/lgdt3306a.c b/drivers/media/dvb-frontends/lgdt3306a.c
+index 722576f1732aa..47fb22180d5b4 100644
+--- a/drivers/media/dvb-frontends/lgdt3306a.c
++++ b/drivers/media/dvb-frontends/lgdt3306a.c
+@@ -1895,7 +1895,7 @@ fail:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(lgdt3306a_attach);
++EXPORT_SYMBOL_GPL(lgdt3306a_attach);
+ 
+ #ifdef DBG_DUMP
+ 
+diff --git a/drivers/media/dvb-frontends/lgdt330x.c b/drivers/media/dvb-frontends/lgdt330x.c
+index da3a8c5e18d8e..53b1443ba0220 100644
+--- a/drivers/media/dvb-frontends/lgdt330x.c
++++ b/drivers/media/dvb-frontends/lgdt330x.c
+@@ -928,7 +928,7 @@ struct dvb_frontend *lgdt330x_attach(const struct lgdt330x_config *_config,
+ 
+ 	return lgdt330x_get_dvb_frontend(client);
+ }
+-EXPORT_SYMBOL(lgdt330x_attach);
++EXPORT_SYMBOL_GPL(lgdt330x_attach);
+ 
+ static const struct dvb_frontend_ops lgdt3302_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/lgs8gxx.c b/drivers/media/dvb-frontends/lgs8gxx.c
+index 30014979b985b..ffaf60e16ecd4 100644
+--- a/drivers/media/dvb-frontends/lgs8gxx.c
++++ b/drivers/media/dvb-frontends/lgs8gxx.c
+@@ -1043,7 +1043,7 @@ error_out:
+ 	return NULL;
+ 
+ }
+-EXPORT_SYMBOL(lgs8gxx_attach);
++EXPORT_SYMBOL_GPL(lgs8gxx_attach);
+ 
+ MODULE_DESCRIPTION("Legend Silicon LGS8913/LGS8GXX DMB-TH demodulator driver");
+ MODULE_AUTHOR("David T. L. Wong <davidtlwong@gmail.com>");
+diff --git a/drivers/media/dvb-frontends/lnbh25.c b/drivers/media/dvb-frontends/lnbh25.c
+index 9ffe06cd787dd..41bec050642b5 100644
+--- a/drivers/media/dvb-frontends/lnbh25.c
++++ b/drivers/media/dvb-frontends/lnbh25.c
+@@ -173,7 +173,7 @@ struct dvb_frontend *lnbh25_attach(struct dvb_frontend *fe,
+ 		__func__, priv->i2c_address);
+ 	return fe;
+ }
+-EXPORT_SYMBOL(lnbh25_attach);
++EXPORT_SYMBOL_GPL(lnbh25_attach);
+ 
+ MODULE_DESCRIPTION("ST LNBH25 driver");
+ MODULE_AUTHOR("info@netup.ru");
+diff --git a/drivers/media/dvb-frontends/lnbp21.c b/drivers/media/dvb-frontends/lnbp21.c
+index e564974162d65..32593b1f75a38 100644
+--- a/drivers/media/dvb-frontends/lnbp21.c
++++ b/drivers/media/dvb-frontends/lnbp21.c
+@@ -155,7 +155,7 @@ struct dvb_frontend *lnbh24_attach(struct dvb_frontend *fe,
+ 	return lnbx2x_attach(fe, i2c, override_set, override_clear,
+ 							i2c_addr, LNBH24_TTX);
+ }
+-EXPORT_SYMBOL(lnbh24_attach);
++EXPORT_SYMBOL_GPL(lnbh24_attach);
+ 
+ struct dvb_frontend *lnbp21_attach(struct dvb_frontend *fe,
+ 				struct i2c_adapter *i2c, u8 override_set,
+@@ -164,7 +164,7 @@ struct dvb_frontend *lnbp21_attach(struct dvb_frontend *fe,
+ 	return lnbx2x_attach(fe, i2c, override_set, override_clear,
+ 							0x08, LNBP21_ISEL);
+ }
+-EXPORT_SYMBOL(lnbp21_attach);
++EXPORT_SYMBOL_GPL(lnbp21_attach);
+ 
+ MODULE_DESCRIPTION("Driver for lnb supply and control ic lnbp21, lnbh24");
+ MODULE_AUTHOR("Oliver Endriss, Igor M. Liplianin");
+diff --git a/drivers/media/dvb-frontends/lnbp22.c b/drivers/media/dvb-frontends/lnbp22.c
+index b8c7145d4cefe..cb4ea5d3fad4a 100644
+--- a/drivers/media/dvb-frontends/lnbp22.c
++++ b/drivers/media/dvb-frontends/lnbp22.c
+@@ -125,7 +125,7 @@ struct dvb_frontend *lnbp22_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(lnbp22_attach);
++EXPORT_SYMBOL_GPL(lnbp22_attach);
+ 
+ MODULE_DESCRIPTION("Driver for lnb supply and control ic lnbp22");
+ MODULE_AUTHOR("Dominik Kuhlen");
+diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c
+index c120cffb52ad4..ff106d6ece68a 100644
+--- a/drivers/media/dvb-frontends/m88ds3103.c
++++ b/drivers/media/dvb-frontends/m88ds3103.c
+@@ -1699,7 +1699,7 @@ struct dvb_frontend *m88ds3103_attach(const struct m88ds3103_config *cfg,
+ 	*tuner_i2c_adapter = pdata.get_i2c_adapter(client);
+ 	return pdata.get_dvb_frontend(client);
+ }
+-EXPORT_SYMBOL(m88ds3103_attach);
++EXPORT_SYMBOL_GPL(m88ds3103_attach);
+ 
+ static const struct dvb_frontend_ops m88ds3103_ops = {
+ 	.delsys = {SYS_DVBS, SYS_DVBS2},
+diff --git a/drivers/media/dvb-frontends/m88rs2000.c b/drivers/media/dvb-frontends/m88rs2000.c
+index 39cbb3ea1c9dc..d1f235b472ceb 100644
+--- a/drivers/media/dvb-frontends/m88rs2000.c
++++ b/drivers/media/dvb-frontends/m88rs2000.c
+@@ -807,7 +807,7 @@ error:
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(m88rs2000_attach);
++EXPORT_SYMBOL_GPL(m88rs2000_attach);
+ 
+ MODULE_DESCRIPTION("M88RS2000 DVB-S Demodulator driver");
+ MODULE_AUTHOR("Malcolm Priestley tvboxspy@gmail.com");
+diff --git a/drivers/media/dvb-frontends/mb86a16.c b/drivers/media/dvb-frontends/mb86a16.c
+index 2505f1e5794e7..ed08e0c2cf512 100644
+--- a/drivers/media/dvb-frontends/mb86a16.c
++++ b/drivers/media/dvb-frontends/mb86a16.c
+@@ -1848,6 +1848,6 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(mb86a16_attach);
++EXPORT_SYMBOL_GPL(mb86a16_attach);
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Manu Abraham");
+diff --git a/drivers/media/dvb-frontends/mb86a20s.c b/drivers/media/dvb-frontends/mb86a20s.c
+index a7faf0cf8788b..8a333af9e176f 100644
+--- a/drivers/media/dvb-frontends/mb86a20s.c
++++ b/drivers/media/dvb-frontends/mb86a20s.c
+@@ -2081,7 +2081,7 @@ struct dvb_frontend *mb86a20s_attach(const struct mb86a20s_config *config,
+ 	dev_info(&i2c->dev, "Detected a Fujitsu mb86a20s frontend\n");
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(mb86a20s_attach);
++EXPORT_SYMBOL_GPL(mb86a20s_attach);
+ 
+ static const struct dvb_frontend_ops mb86a20s_ops = {
+ 	.delsys = { SYS_ISDBT },
+diff --git a/drivers/media/dvb-frontends/mt312.c b/drivers/media/dvb-frontends/mt312.c
+index d43a67045dbe7..fb867dd8a26be 100644
+--- a/drivers/media/dvb-frontends/mt312.c
++++ b/drivers/media/dvb-frontends/mt312.c
+@@ -827,7 +827,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(mt312_attach);
++EXPORT_SYMBOL_GPL(mt312_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/mt352.c b/drivers/media/dvb-frontends/mt352.c
+index 399d5c519027e..1b2889f5cf67d 100644
+--- a/drivers/media/dvb-frontends/mt352.c
++++ b/drivers/media/dvb-frontends/mt352.c
+@@ -593,4 +593,4 @@ MODULE_DESCRIPTION("Zarlink MT352 DVB-T Demodulator driver");
+ MODULE_AUTHOR("Holger Waechtler, Daniel Mack, Antonio Mancuso");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(mt352_attach);
++EXPORT_SYMBOL_GPL(mt352_attach);
+diff --git a/drivers/media/dvb-frontends/nxt200x.c b/drivers/media/dvb-frontends/nxt200x.c
+index 35b83b1dd82ce..1124baf5baf4c 100644
+--- a/drivers/media/dvb-frontends/nxt200x.c
++++ b/drivers/media/dvb-frontends/nxt200x.c
+@@ -1232,5 +1232,5 @@ MODULE_DESCRIPTION("NXT200X (ATSC 8VSB & ITU-T J.83 AnnexB 64/256 QAM) Demodulat
+ MODULE_AUTHOR("Kirk Lapray, Michael Krufky, Jean-Francois Thibert, and Taylor Jacob");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(nxt200x_attach);
++EXPORT_SYMBOL_GPL(nxt200x_attach);
+ 
+diff --git a/drivers/media/dvb-frontends/nxt6000.c b/drivers/media/dvb-frontends/nxt6000.c
+index 136918f82dda0..e8d4940370ddf 100644
+--- a/drivers/media/dvb-frontends/nxt6000.c
++++ b/drivers/media/dvb-frontends/nxt6000.c
+@@ -621,4 +621,4 @@ MODULE_DESCRIPTION("NxtWave NXT6000 DVB-T demodulator driver");
+ MODULE_AUTHOR("Florian Schirmer");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(nxt6000_attach);
++EXPORT_SYMBOL_GPL(nxt6000_attach);
+diff --git a/drivers/media/dvb-frontends/or51132.c b/drivers/media/dvb-frontends/or51132.c
+index 24de1b1151583..144a1f25dec0a 100644
+--- a/drivers/media/dvb-frontends/or51132.c
++++ b/drivers/media/dvb-frontends/or51132.c
+@@ -605,4 +605,4 @@ MODULE_AUTHOR("Kirk Lapray");
+ MODULE_AUTHOR("Trent Piepho");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(or51132_attach);
++EXPORT_SYMBOL_GPL(or51132_attach);
+diff --git a/drivers/media/dvb-frontends/or51211.c b/drivers/media/dvb-frontends/or51211.c
+index ddcaea5c9941f..dc60482162c54 100644
+--- a/drivers/media/dvb-frontends/or51211.c
++++ b/drivers/media/dvb-frontends/or51211.c
+@@ -551,5 +551,5 @@ MODULE_DESCRIPTION("Oren OR51211 VSB [pcHDTV HD-2000] Demodulator Driver");
+ MODULE_AUTHOR("Kirk Lapray");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(or51211_attach);
++EXPORT_SYMBOL_GPL(or51211_attach);
+ 
+diff --git a/drivers/media/dvb-frontends/s5h1409.c b/drivers/media/dvb-frontends/s5h1409.c
+index 3089cc174a6f5..28b1dca077ead 100644
+--- a/drivers/media/dvb-frontends/s5h1409.c
++++ b/drivers/media/dvb-frontends/s5h1409.c
+@@ -981,7 +981,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(s5h1409_attach);
++EXPORT_SYMBOL_GPL(s5h1409_attach);
+ 
+ static const struct dvb_frontend_ops s5h1409_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/s5h1411.c b/drivers/media/dvb-frontends/s5h1411.c
+index c1334d7eb4420..ae2b391af9039 100644
+--- a/drivers/media/dvb-frontends/s5h1411.c
++++ b/drivers/media/dvb-frontends/s5h1411.c
+@@ -900,7 +900,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(s5h1411_attach);
++EXPORT_SYMBOL_GPL(s5h1411_attach);
+ 
+ static const struct dvb_frontend_ops s5h1411_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/s5h1420.c b/drivers/media/dvb-frontends/s5h1420.c
+index 6bdec2898bc81..d700de1ea6c24 100644
+--- a/drivers/media/dvb-frontends/s5h1420.c
++++ b/drivers/media/dvb-frontends/s5h1420.c
+@@ -918,7 +918,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(s5h1420_attach);
++EXPORT_SYMBOL_GPL(s5h1420_attach);
+ 
+ static const struct dvb_frontend_ops s5h1420_ops = {
+ 	.delsys = { SYS_DVBS },
+diff --git a/drivers/media/dvb-frontends/s5h1432.c b/drivers/media/dvb-frontends/s5h1432.c
+index 956e8ee4b388e..ff5d3bdf3bc67 100644
+--- a/drivers/media/dvb-frontends/s5h1432.c
++++ b/drivers/media/dvb-frontends/s5h1432.c
+@@ -355,7 +355,7 @@ struct dvb_frontend *s5h1432_attach(const struct s5h1432_config *config,
+ 
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(s5h1432_attach);
++EXPORT_SYMBOL_GPL(s5h1432_attach);
+ 
+ static const struct dvb_frontend_ops s5h1432_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/s921.c b/drivers/media/dvb-frontends/s921.c
+index f118d8e641030..7e461ac159fc1 100644
+--- a/drivers/media/dvb-frontends/s921.c
++++ b/drivers/media/dvb-frontends/s921.c
+@@ -495,7 +495,7 @@ struct dvb_frontend *s921_attach(const struct s921_config *config,
+ 
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(s921_attach);
++EXPORT_SYMBOL_GPL(s921_attach);
+ 
+ static const struct dvb_frontend_ops s921_ops = {
+ 	.delsys = { SYS_ISDBT },
+diff --git a/drivers/media/dvb-frontends/si21xx.c b/drivers/media/dvb-frontends/si21xx.c
+index a116eff417f26..6d84a5534aba6 100644
+--- a/drivers/media/dvb-frontends/si21xx.c
++++ b/drivers/media/dvb-frontends/si21xx.c
+@@ -938,7 +938,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(si21xx_attach);
++EXPORT_SYMBOL_GPL(si21xx_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/sp887x.c b/drivers/media/dvb-frontends/sp887x.c
+index c89a91a3daf40..72f58626475c4 100644
+--- a/drivers/media/dvb-frontends/sp887x.c
++++ b/drivers/media/dvb-frontends/sp887x.c
+@@ -626,4 +626,4 @@ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+ MODULE_DESCRIPTION("Spase sp887x DVB-T demodulator driver");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(sp887x_attach);
++EXPORT_SYMBOL_GPL(sp887x_attach);
+diff --git a/drivers/media/dvb-frontends/stb0899_drv.c b/drivers/media/dvb-frontends/stb0899_drv.c
+index 4ee6c1e1e9f7d..2f4d8fb400cd6 100644
+--- a/drivers/media/dvb-frontends/stb0899_drv.c
++++ b/drivers/media/dvb-frontends/stb0899_drv.c
+@@ -1638,7 +1638,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stb0899_attach);
++EXPORT_SYMBOL_GPL(stb0899_attach);
+ MODULE_PARM_DESC(verbose, "Set Verbosity level");
+ MODULE_AUTHOR("Manu Abraham");
+ MODULE_DESCRIPTION("STB0899 Multi-Std frontend");
+diff --git a/drivers/media/dvb-frontends/stb6000.c b/drivers/media/dvb-frontends/stb6000.c
+index 8c9800d577e03..d74e34677b925 100644
+--- a/drivers/media/dvb-frontends/stb6000.c
++++ b/drivers/media/dvb-frontends/stb6000.c
+@@ -232,7 +232,7 @@ struct dvb_frontend *stb6000_attach(struct dvb_frontend *fe, int addr,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(stb6000_attach);
++EXPORT_SYMBOL_GPL(stb6000_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/stb6100.c b/drivers/media/dvb-frontends/stb6100.c
+index d541d66136107..9f92760256cf5 100644
+--- a/drivers/media/dvb-frontends/stb6100.c
++++ b/drivers/media/dvb-frontends/stb6100.c
+@@ -557,7 +557,7 @@ static void stb6100_release(struct dvb_frontend *fe)
+ 	kfree(state);
+ }
+ 
+-EXPORT_SYMBOL(stb6100_attach);
++EXPORT_SYMBOL_GPL(stb6100_attach);
+ MODULE_PARM_DESC(verbose, "Set Verbosity level");
+ 
+ MODULE_AUTHOR("Manu Abraham");
+diff --git a/drivers/media/dvb-frontends/stv0288.c b/drivers/media/dvb-frontends/stv0288.c
+index 3ae1f3a2f1420..a5581bd60f9e8 100644
+--- a/drivers/media/dvb-frontends/stv0288.c
++++ b/drivers/media/dvb-frontends/stv0288.c
+@@ -590,7 +590,7 @@ error:
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stv0288_attach);
++EXPORT_SYMBOL_GPL(stv0288_attach);
+ 
+ module_param(debug_legacy_dish_switch, int, 0444);
+ MODULE_PARM_DESC(debug_legacy_dish_switch,
+diff --git a/drivers/media/dvb-frontends/stv0297.c b/drivers/media/dvb-frontends/stv0297.c
+index 6d5962d5697ac..9d4dbd99a5a79 100644
+--- a/drivers/media/dvb-frontends/stv0297.c
++++ b/drivers/media/dvb-frontends/stv0297.c
+@@ -710,4 +710,4 @@ MODULE_DESCRIPTION("ST STV0297 DVB-C Demodulator driver");
+ MODULE_AUTHOR("Dennis Noermann and Andrew de Quincey");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(stv0297_attach);
++EXPORT_SYMBOL_GPL(stv0297_attach);
+diff --git a/drivers/media/dvb-frontends/stv0299.c b/drivers/media/dvb-frontends/stv0299.c
+index 421395ea33343..0a1b57e9e2281 100644
+--- a/drivers/media/dvb-frontends/stv0299.c
++++ b/drivers/media/dvb-frontends/stv0299.c
+@@ -751,4 +751,4 @@ MODULE_DESCRIPTION("ST STV0299 DVB Demodulator driver");
+ MODULE_AUTHOR("Ralph Metzler, Holger Waechtler, Peter Schildmann, Felix Domke, Andreas Oberritter, Andrew de Quincey, Kenneth Aafly");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(stv0299_attach);
++EXPORT_SYMBOL_GPL(stv0299_attach);
+diff --git a/drivers/media/dvb-frontends/stv0367.c b/drivers/media/dvb-frontends/stv0367.c
+index 6c2b05fae1c55..0bfca1174e9e7 100644
+--- a/drivers/media/dvb-frontends/stv0367.c
++++ b/drivers/media/dvb-frontends/stv0367.c
+@@ -1750,7 +1750,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stv0367ter_attach);
++EXPORT_SYMBOL_GPL(stv0367ter_attach);
+ 
+ static int stv0367cab_gate_ctrl(struct dvb_frontend *fe, int enable)
+ {
+@@ -2923,7 +2923,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stv0367cab_attach);
++EXPORT_SYMBOL_GPL(stv0367cab_attach);
+ 
+ /*
+  * Functions for operation on Digital Devices hardware
+@@ -3344,7 +3344,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stv0367ddb_attach);
++EXPORT_SYMBOL_GPL(stv0367ddb_attach);
+ 
+ MODULE_PARM_DESC(debug, "Set debug");
+ MODULE_PARM_DESC(i2c_debug, "Set i2c debug");
+diff --git a/drivers/media/dvb-frontends/stv0900_core.c b/drivers/media/dvb-frontends/stv0900_core.c
+index 212312d20ff62..e7b9b9b11d7df 100644
+--- a/drivers/media/dvb-frontends/stv0900_core.c
++++ b/drivers/media/dvb-frontends/stv0900_core.c
+@@ -1957,7 +1957,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stv0900_attach);
++EXPORT_SYMBOL_GPL(stv0900_attach);
+ 
+ MODULE_PARM_DESC(debug, "Set debug");
+ 
+diff --git a/drivers/media/dvb-frontends/stv090x.c b/drivers/media/dvb-frontends/stv090x.c
+index 90d24131d335f..799dbefb9eef7 100644
+--- a/drivers/media/dvb-frontends/stv090x.c
++++ b/drivers/media/dvb-frontends/stv090x.c
+@@ -5073,7 +5073,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stv090x_attach);
++EXPORT_SYMBOL_GPL(stv090x_attach);
+ 
+ static const struct i2c_device_id stv090x_id_table[] = {
+ 	{"stv090x", 0},
+diff --git a/drivers/media/dvb-frontends/stv6110.c b/drivers/media/dvb-frontends/stv6110.c
+index 963f6a896102a..1cf9c095dbff0 100644
+--- a/drivers/media/dvb-frontends/stv6110.c
++++ b/drivers/media/dvb-frontends/stv6110.c
+@@ -427,7 +427,7 @@ struct dvb_frontend *stv6110_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(stv6110_attach);
++EXPORT_SYMBOL_GPL(stv6110_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/stv6110x.c b/drivers/media/dvb-frontends/stv6110x.c
+index 5012d02316522..b08c7536a69fb 100644
+--- a/drivers/media/dvb-frontends/stv6110x.c
++++ b/drivers/media/dvb-frontends/stv6110x.c
+@@ -469,7 +469,7 @@ const struct stv6110x_devctl *stv6110x_attach(struct dvb_frontend *fe,
+ 	dev_info(&stv6110x->i2c->dev, "Attaching STV6110x\n");
+ 	return stv6110x->devctl;
+ }
+-EXPORT_SYMBOL(stv6110x_attach);
++EXPORT_SYMBOL_GPL(stv6110x_attach);
+ 
+ static const struct i2c_device_id stv6110x_id_table[] = {
+ 	{"stv6110x", 0},
+diff --git a/drivers/media/dvb-frontends/tda10021.c b/drivers/media/dvb-frontends/tda10021.c
+index faa6e54b33729..462e12ab6bd14 100644
+--- a/drivers/media/dvb-frontends/tda10021.c
++++ b/drivers/media/dvb-frontends/tda10021.c
+@@ -523,4 +523,4 @@ MODULE_DESCRIPTION("Philips TDA10021 DVB-C demodulator driver");
+ MODULE_AUTHOR("Ralph Metzler, Holger Waechtler, Markus Schulz");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(tda10021_attach);
++EXPORT_SYMBOL_GPL(tda10021_attach);
+diff --git a/drivers/media/dvb-frontends/tda10023.c b/drivers/media/dvb-frontends/tda10023.c
+index 8f32edf6b700e..4c2541ecd7433 100644
+--- a/drivers/media/dvb-frontends/tda10023.c
++++ b/drivers/media/dvb-frontends/tda10023.c
+@@ -594,4 +594,4 @@ MODULE_DESCRIPTION("Philips TDA10023 DVB-C demodulator driver");
+ MODULE_AUTHOR("Georg Acher, Hartmut Birr");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(tda10023_attach);
++EXPORT_SYMBOL_GPL(tda10023_attach);
+diff --git a/drivers/media/dvb-frontends/tda10048.c b/drivers/media/dvb-frontends/tda10048.c
+index d1d206ebdedd7..f1d5e77d5dcce 100644
+--- a/drivers/media/dvb-frontends/tda10048.c
++++ b/drivers/media/dvb-frontends/tda10048.c
+@@ -1138,7 +1138,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(tda10048_attach);
++EXPORT_SYMBOL_GPL(tda10048_attach);
+ 
+ static const struct dvb_frontend_ops tda10048_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/tda1004x.c b/drivers/media/dvb-frontends/tda1004x.c
+index 83a798ca9b002..6f306db6c615f 100644
+--- a/drivers/media/dvb-frontends/tda1004x.c
++++ b/drivers/media/dvb-frontends/tda1004x.c
+@@ -1378,5 +1378,5 @@ MODULE_DESCRIPTION("Philips TDA10045H & TDA10046H DVB-T Demodulator");
+ MODULE_AUTHOR("Andrew de Quincey & Robert Schlabbach");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(tda10045_attach);
+-EXPORT_SYMBOL(tda10046_attach);
++EXPORT_SYMBOL_GPL(tda10045_attach);
++EXPORT_SYMBOL_GPL(tda10046_attach);
+diff --git a/drivers/media/dvb-frontends/tda10086.c b/drivers/media/dvb-frontends/tda10086.c
+index cdcf97664bba8..b449514ae5854 100644
+--- a/drivers/media/dvb-frontends/tda10086.c
++++ b/drivers/media/dvb-frontends/tda10086.c
+@@ -764,4 +764,4 @@ MODULE_DESCRIPTION("Philips TDA10086 DVB-S Demodulator");
+ MODULE_AUTHOR("Andrew de Quincey");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(tda10086_attach);
++EXPORT_SYMBOL_GPL(tda10086_attach);
+diff --git a/drivers/media/dvb-frontends/tda665x.c b/drivers/media/dvb-frontends/tda665x.c
+index 13e8969da7f89..346be5011fb73 100644
+--- a/drivers/media/dvb-frontends/tda665x.c
++++ b/drivers/media/dvb-frontends/tda665x.c
+@@ -227,7 +227,7 @@ struct dvb_frontend *tda665x_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(tda665x_attach);
++EXPORT_SYMBOL_GPL(tda665x_attach);
+ 
+ MODULE_DESCRIPTION("TDA665x driver");
+ MODULE_AUTHOR("Manu Abraham");
+diff --git a/drivers/media/dvb-frontends/tda8083.c b/drivers/media/dvb-frontends/tda8083.c
+index 5be11fd65e3b1..9fc16e917f342 100644
+--- a/drivers/media/dvb-frontends/tda8083.c
++++ b/drivers/media/dvb-frontends/tda8083.c
+@@ -481,4 +481,4 @@ MODULE_DESCRIPTION("Philips TDA8083 DVB-S Demodulator");
+ MODULE_AUTHOR("Ralph Metzler, Holger Waechtler");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(tda8083_attach);
++EXPORT_SYMBOL_GPL(tda8083_attach);
+diff --git a/drivers/media/dvb-frontends/tda8261.c b/drivers/media/dvb-frontends/tda8261.c
+index 0d576d41c67d8..8b06f92745dca 100644
+--- a/drivers/media/dvb-frontends/tda8261.c
++++ b/drivers/media/dvb-frontends/tda8261.c
+@@ -188,7 +188,7 @@ exit:
+ 	return NULL;
+ }
+ 
+-EXPORT_SYMBOL(tda8261_attach);
++EXPORT_SYMBOL_GPL(tda8261_attach);
+ 
+ MODULE_AUTHOR("Manu Abraham");
+ MODULE_DESCRIPTION("TDA8261 8PSK/QPSK Tuner");
+diff --git a/drivers/media/dvb-frontends/tda826x.c b/drivers/media/dvb-frontends/tda826x.c
+index f9703a1dd758c..eafcf5f7da3dc 100644
+--- a/drivers/media/dvb-frontends/tda826x.c
++++ b/drivers/media/dvb-frontends/tda826x.c
+@@ -164,7 +164,7 @@ struct dvb_frontend *tda826x_attach(struct dvb_frontend *fe, int addr, struct i2
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(tda826x_attach);
++EXPORT_SYMBOL_GPL(tda826x_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/ts2020.c b/drivers/media/dvb-frontends/ts2020.c
+index 234607b02edb7..1f1004ccce1e4 100644
+--- a/drivers/media/dvb-frontends/ts2020.c
++++ b/drivers/media/dvb-frontends/ts2020.c
+@@ -525,7 +525,7 @@ struct dvb_frontend *ts2020_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(ts2020_attach);
++EXPORT_SYMBOL_GPL(ts2020_attach);
+ 
+ /*
+  * We implement own regmap locking due to legacy DVB attach which uses frontend
+diff --git a/drivers/media/dvb-frontends/tua6100.c b/drivers/media/dvb-frontends/tua6100.c
+index 2483f614d0e7d..41dd9b6d31908 100644
+--- a/drivers/media/dvb-frontends/tua6100.c
++++ b/drivers/media/dvb-frontends/tua6100.c
+@@ -186,7 +186,7 @@ struct dvb_frontend *tua6100_attach(struct dvb_frontend *fe, int addr, struct i2
+ 	fe->tuner_priv = priv;
+ 	return fe;
+ }
+-EXPORT_SYMBOL(tua6100_attach);
++EXPORT_SYMBOL_GPL(tua6100_attach);
+ 
+ MODULE_DESCRIPTION("DVB tua6100 driver");
+ MODULE_AUTHOR("Andrew de Quincey");
+diff --git a/drivers/media/dvb-frontends/ves1820.c b/drivers/media/dvb-frontends/ves1820.c
+index 9df14d0be1c1a..ee5620e731e9b 100644
+--- a/drivers/media/dvb-frontends/ves1820.c
++++ b/drivers/media/dvb-frontends/ves1820.c
+@@ -434,4 +434,4 @@ MODULE_DESCRIPTION("VLSI VES1820 DVB-C Demodulator driver");
+ MODULE_AUTHOR("Ralph Metzler, Holger Waechtler");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(ves1820_attach);
++EXPORT_SYMBOL_GPL(ves1820_attach);
+diff --git a/drivers/media/dvb-frontends/ves1x93.c b/drivers/media/dvb-frontends/ves1x93.c
+index b747272863025..c60e21d26b881 100644
+--- a/drivers/media/dvb-frontends/ves1x93.c
++++ b/drivers/media/dvb-frontends/ves1x93.c
+@@ -540,4 +540,4 @@ MODULE_DESCRIPTION("VLSI VES1x93 DVB-S Demodulator driver");
+ MODULE_AUTHOR("Ralph Metzler");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(ves1x93_attach);
++EXPORT_SYMBOL_GPL(ves1x93_attach);
+diff --git a/drivers/media/dvb-frontends/zl10036.c b/drivers/media/dvb-frontends/zl10036.c
+index d392c7cce2ce0..7ba575e9c55f4 100644
+--- a/drivers/media/dvb-frontends/zl10036.c
++++ b/drivers/media/dvb-frontends/zl10036.c
+@@ -496,7 +496,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(zl10036_attach);
++EXPORT_SYMBOL_GPL(zl10036_attach);
+ 
+ module_param_named(debug, zl10036_debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/zl10039.c b/drivers/media/dvb-frontends/zl10039.c
+index 1335bf78d5b7f..a3e4d219400ce 100644
+--- a/drivers/media/dvb-frontends/zl10039.c
++++ b/drivers/media/dvb-frontends/zl10039.c
+@@ -295,7 +295,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(zl10039_attach);
++EXPORT_SYMBOL_GPL(zl10039_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/zl10353.c b/drivers/media/dvb-frontends/zl10353.c
+index 2a2cf20a73d61..8849d05475c27 100644
+--- a/drivers/media/dvb-frontends/zl10353.c
++++ b/drivers/media/dvb-frontends/zl10353.c
+@@ -665,4 +665,4 @@ MODULE_DESCRIPTION("Zarlink ZL10353 DVB-T demodulator driver");
+ MODULE_AUTHOR("Chris Pascoe");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(zl10353_attach);
++EXPORT_SYMBOL_GPL(zl10353_attach);
+diff --git a/drivers/media/i2c/ad5820.c b/drivers/media/i2c/ad5820.c
+index f55322eebf6d0..d2c69ee27f008 100644
+--- a/drivers/media/i2c/ad5820.c
++++ b/drivers/media/i2c/ad5820.c
+@@ -359,7 +359,6 @@ static int ad5820_remove(struct i2c_client *client)
+ static const struct i2c_device_id ad5820_id_table[] = {
+ 	{ "ad5820", 0 },
+ 	{ "ad5821", 0 },
+-	{ "ad5823", 0 },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(i2c, ad5820_id_table);
+@@ -367,7 +366,6 @@ MODULE_DEVICE_TABLE(i2c, ad5820_id_table);
+ static const struct of_device_id ad5820_of_table[] = {
+ 	{ .compatible = "adi,ad5820" },
+ 	{ .compatible = "adi,ad5821" },
+-	{ .compatible = "adi,ad5823" },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, ad5820_of_table);
+diff --git a/drivers/media/i2c/ov2680.c b/drivers/media/i2c/ov2680.c
+index 59cdbc33658ce..731a60f6a59af 100644
+--- a/drivers/media/i2c/ov2680.c
++++ b/drivers/media/i2c/ov2680.c
+@@ -85,15 +85,8 @@ struct ov2680_mode_info {
+ 
+ struct ov2680_ctrls {
+ 	struct v4l2_ctrl_handler handler;
+-	struct {
+-		struct v4l2_ctrl *auto_exp;
+-		struct v4l2_ctrl *exposure;
+-	};
+-	struct {
+-		struct v4l2_ctrl *auto_gain;
+-		struct v4l2_ctrl *gain;
+-	};
+-
++	struct v4l2_ctrl *exposure;
++	struct v4l2_ctrl *gain;
+ 	struct v4l2_ctrl *hflip;
+ 	struct v4l2_ctrl *vflip;
+ 	struct v4l2_ctrl *test_pattern;
+@@ -143,6 +136,7 @@ static const struct reg_value ov2680_setting_30fps_QUXGA_800_600[] = {
+ 	{0x380e, 0x02}, {0x380f, 0x84}, {0x3811, 0x04}, {0x3813, 0x04},
+ 	{0x3814, 0x31}, {0x3815, 0x31}, {0x3820, 0xc0}, {0x4008, 0x00},
+ 	{0x4009, 0x03}, {0x4837, 0x1e}, {0x3501, 0x4e}, {0x3502, 0xe0},
++	{0x3503, 0x03},
+ };
+ 
+ static const struct reg_value ov2680_setting_30fps_720P_1280_720[] = {
+@@ -321,70 +315,49 @@ static void ov2680_power_down(struct ov2680_dev *sensor)
+ 	usleep_range(5000, 10000);
+ }
+ 
+-static int ov2680_bayer_order(struct ov2680_dev *sensor)
++static void ov2680_set_bayer_order(struct ov2680_dev *sensor)
+ {
+-	u32 format1;
+-	u32 format2;
+-	u32 hv_flip;
+-	int ret;
+-
+-	ret = ov2680_read_reg(sensor, OV2680_REG_FORMAT1, &format1);
+-	if (ret < 0)
+-		return ret;
++	int hv_flip = 0;
+ 
+-	ret = ov2680_read_reg(sensor, OV2680_REG_FORMAT2, &format2);
+-	if (ret < 0)
+-		return ret;
++	if (sensor->ctrls.vflip && sensor->ctrls.vflip->val)
++		hv_flip += 1;
+ 
+-	hv_flip = (format2 & BIT(2)  << 1) | (format1 & BIT(2));
++	if (sensor->ctrls.hflip && sensor->ctrls.hflip->val)
++		hv_flip += 2;
+ 
+ 	sensor->fmt.code = ov2680_hv_flip_bayer_order[hv_flip];
+-
+-	return 0;
+ }
+ 
+-static int ov2680_vflip_enable(struct ov2680_dev *sensor)
++static int ov2680_set_vflip(struct ov2680_dev *sensor, s32 val)
+ {
+ 	int ret;
+ 
+-	ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT1, BIT(2), BIT(2));
+-	if (ret < 0)
+-		return ret;
+-
+-	return ov2680_bayer_order(sensor);
+-}
+-
+-static int ov2680_vflip_disable(struct ov2680_dev *sensor)
+-{
+-	int ret;
++	if (sensor->is_streaming)
++		return -EBUSY;
+ 
+-	ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT1, BIT(2), BIT(0));
++	ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT1,
++			     BIT(2), val ? BIT(2) : 0);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return ov2680_bayer_order(sensor);
++	ov2680_set_bayer_order(sensor);
++	return 0;
+ }
+ 
+-static int ov2680_hflip_enable(struct ov2680_dev *sensor)
++static int ov2680_set_hflip(struct ov2680_dev *sensor, s32 val)
+ {
+ 	int ret;
+ 
+-	ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT2, BIT(2), BIT(2));
+-	if (ret < 0)
+-		return ret;
+-
+-	return ov2680_bayer_order(sensor);
+-}
+-
+-static int ov2680_hflip_disable(struct ov2680_dev *sensor)
+-{
+-	int ret;
++	if (sensor->is_streaming)
++		return -EBUSY;
+ 
+-	ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT2, BIT(2), BIT(0));
++	ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT2,
++			     BIT(2), val ? BIT(2) : 0);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return ov2680_bayer_order(sensor);
++	ov2680_set_bayer_order(sensor);
++	return 0;
+ }
+ 
+ static int ov2680_test_pattern_set(struct ov2680_dev *sensor, int value)
+@@ -405,69 +378,15 @@ static int ov2680_test_pattern_set(struct ov2680_dev *sensor, int value)
+ 	return 0;
+ }
+ 
+-static int ov2680_gain_set(struct ov2680_dev *sensor, bool auto_gain)
++static int ov2680_gain_set(struct ov2680_dev *sensor, u32 gain)
+ {
+-	struct ov2680_ctrls *ctrls = &sensor->ctrls;
+-	u32 gain;
+-	int ret;
+-
+-	ret = ov2680_mod_reg(sensor, OV2680_REG_R_MANUAL, BIT(1),
+-			     auto_gain ? 0 : BIT(1));
+-	if (ret < 0)
+-		return ret;
+-
+-	if (auto_gain || !ctrls->gain->is_new)
+-		return 0;
+-
+-	gain = ctrls->gain->val;
+-
+-	ret = ov2680_write_reg16(sensor, OV2680_REG_GAIN_PK, gain);
+-
+-	return 0;
++	return ov2680_write_reg16(sensor, OV2680_REG_GAIN_PK, gain);
+ }
+ 
+-static int ov2680_gain_get(struct ov2680_dev *sensor)
++static int ov2680_exposure_set(struct ov2680_dev *sensor, u32 exp)
+ {
+-	u32 gain;
+-	int ret;
+-
+-	ret = ov2680_read_reg16(sensor, OV2680_REG_GAIN_PK, &gain);
+-	if (ret)
+-		return ret;
+-
+-	return gain;
+-}
+-
+-static int ov2680_exposure_set(struct ov2680_dev *sensor, bool auto_exp)
+-{
+-	struct ov2680_ctrls *ctrls = &sensor->ctrls;
+-	u32 exp;
+-	int ret;
+-
+-	ret = ov2680_mod_reg(sensor, OV2680_REG_R_MANUAL, BIT(0),
+-			     auto_exp ? 0 : BIT(0));
+-	if (ret < 0)
+-		return ret;
+-
+-	if (auto_exp || !ctrls->exposure->is_new)
+-		return 0;
+-
+-	exp = (u32)ctrls->exposure->val;
+-	exp <<= 4;
+-
+-	return ov2680_write_reg24(sensor, OV2680_REG_EXPOSURE_PK_HIGH, exp);
+-}
+-
+-static int ov2680_exposure_get(struct ov2680_dev *sensor)
+-{
+-	int ret;
+-	u32 exp;
+-
+-	ret = ov2680_read_reg24(sensor, OV2680_REG_EXPOSURE_PK_HIGH, &exp);
+-	if (ret)
+-		return ret;
+-
+-	return exp >> 4;
++	return ov2680_write_reg24(sensor, OV2680_REG_EXPOSURE_PK_HIGH,
++				  exp << 4);
+ }
+ 
+ static int ov2680_stream_enable(struct ov2680_dev *sensor)
+@@ -482,33 +401,17 @@ static int ov2680_stream_disable(struct ov2680_dev *sensor)
+ 
+ static int ov2680_mode_set(struct ov2680_dev *sensor)
+ {
+-	struct ov2680_ctrls *ctrls = &sensor->ctrls;
+ 	int ret;
+ 
+-	ret = ov2680_gain_set(sensor, false);
++	ret = ov2680_load_regs(sensor, sensor->current_mode);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = ov2680_exposure_set(sensor, false);
++	/* Restore value of all ctrls */
++	ret = __v4l2_ctrl_handler_setup(&sensor->ctrls.handler);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = ov2680_load_regs(sensor, sensor->current_mode);
+-	if (ret < 0)
+-		return ret;
+-
+-	if (ctrls->auto_gain->val) {
+-		ret = ov2680_gain_set(sensor, true);
+-		if (ret < 0)
+-			return ret;
+-	}
+-
+-	if (ctrls->auto_exp->val == V4L2_EXPOSURE_AUTO) {
+-		ret = ov2680_exposure_set(sensor, true);
+-		if (ret < 0)
+-			return ret;
+-	}
+-
+ 	sensor->mode_pending_changes = false;
+ 
+ 	return 0;
+@@ -556,7 +459,7 @@ static int ov2680_power_on(struct ov2680_dev *sensor)
+ 		ret = ov2680_write_reg(sensor, OV2680_REG_SOFT_RESET, 0x01);
+ 		if (ret != 0) {
+ 			dev_err(dev, "sensor soft reset failed\n");
+-			return ret;
++			goto err_disable_regulators;
+ 		}
+ 		usleep_range(1000, 2000);
+ 	} else {
+@@ -566,7 +469,7 @@ static int ov2680_power_on(struct ov2680_dev *sensor)
+ 
+ 	ret = clk_prepare_enable(sensor->xvclk);
+ 	if (ret < 0)
+-		return ret;
++		goto err_disable_regulators;
+ 
+ 	sensor->is_enabled = true;
+ 
+@@ -576,6 +479,10 @@ static int ov2680_power_on(struct ov2680_dev *sensor)
+ 	ov2680_stream_disable(sensor);
+ 
+ 	return 0;
++
++err_disable_regulators:
++	regulator_bulk_disable(OV2680_NUM_SUPPLIES, sensor->supplies);
++	return ret;
+ }
+ 
+ static int ov2680_s_power(struct v4l2_subdev *sd, int on)
+@@ -590,15 +497,10 @@ static int ov2680_s_power(struct v4l2_subdev *sd, int on)
+ 	else
+ 		ret = ov2680_power_off(sensor);
+ 
+-	mutex_unlock(&sensor->lock);
+-
+-	if (on && ret == 0) {
+-		ret = v4l2_ctrl_handler_setup(&sensor->ctrls.handler);
+-		if (ret < 0)
+-			return ret;
+-
++	if (on && ret == 0)
+ 		ret = ov2680_mode_restore(sensor);
+-	}
++
++	mutex_unlock(&sensor->lock);
+ 
+ 	return ret;
+ }
+@@ -793,66 +695,23 @@ static int ov2680_enum_frame_interval(struct v4l2_subdev *sd,
+ 	return 0;
+ }
+ 
+-static int ov2680_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
+-{
+-	struct v4l2_subdev *sd = ctrl_to_sd(ctrl);
+-	struct ov2680_dev *sensor = to_ov2680_dev(sd);
+-	struct ov2680_ctrls *ctrls = &sensor->ctrls;
+-	int val;
+-
+-	if (!sensor->is_enabled)
+-		return 0;
+-
+-	switch (ctrl->id) {
+-	case V4L2_CID_GAIN:
+-		val = ov2680_gain_get(sensor);
+-		if (val < 0)
+-			return val;
+-		ctrls->gain->val = val;
+-		break;
+-	case V4L2_CID_EXPOSURE:
+-		val = ov2680_exposure_get(sensor);
+-		if (val < 0)
+-			return val;
+-		ctrls->exposure->val = val;
+-		break;
+-	}
+-
+-	return 0;
+-}
+-
+ static int ov2680_s_ctrl(struct v4l2_ctrl *ctrl)
+ {
+ 	struct v4l2_subdev *sd = ctrl_to_sd(ctrl);
+ 	struct ov2680_dev *sensor = to_ov2680_dev(sd);
+-	struct ov2680_ctrls *ctrls = &sensor->ctrls;
+ 
+ 	if (!sensor->is_enabled)
+ 		return 0;
+ 
+ 	switch (ctrl->id) {
+-	case V4L2_CID_AUTOGAIN:
+-		return ov2680_gain_set(sensor, !!ctrl->val);
+ 	case V4L2_CID_GAIN:
+-		return ov2680_gain_set(sensor, !!ctrls->auto_gain->val);
+-	case V4L2_CID_EXPOSURE_AUTO:
+-		return ov2680_exposure_set(sensor, !!ctrl->val);
++		return ov2680_gain_set(sensor, ctrl->val);
+ 	case V4L2_CID_EXPOSURE:
+-		return ov2680_exposure_set(sensor, !!ctrls->auto_exp->val);
++		return ov2680_exposure_set(sensor, ctrl->val);
+ 	case V4L2_CID_VFLIP:
+-		if (sensor->is_streaming)
+-			return -EBUSY;
+-		if (ctrl->val)
+-			return ov2680_vflip_enable(sensor);
+-		else
+-			return ov2680_vflip_disable(sensor);
++		return ov2680_set_vflip(sensor, ctrl->val);
+ 	case V4L2_CID_HFLIP:
+-		if (sensor->is_streaming)
+-			return -EBUSY;
+-		if (ctrl->val)
+-			return ov2680_hflip_enable(sensor);
+-		else
+-			return ov2680_hflip_disable(sensor);
++		return ov2680_set_hflip(sensor, ctrl->val);
+ 	case V4L2_CID_TEST_PATTERN:
+ 		return ov2680_test_pattern_set(sensor, ctrl->val);
+ 	default:
+@@ -863,7 +722,6 @@ static int ov2680_s_ctrl(struct v4l2_ctrl *ctrl)
+ }
+ 
+ static const struct v4l2_ctrl_ops ov2680_ctrl_ops = {
+-	.g_volatile_ctrl = ov2680_g_volatile_ctrl,
+ 	.s_ctrl = ov2680_s_ctrl,
+ };
+ 
+@@ -935,7 +793,7 @@ static int ov2680_v4l2_register(struct ov2680_dev *sensor)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	v4l2_ctrl_handler_init(hdl, 7);
++	v4l2_ctrl_handler_init(hdl, 5);
+ 
+ 	hdl->lock = &sensor->lock;
+ 
+@@ -947,16 +805,9 @@ static int ov2680_v4l2_register(struct ov2680_dev *sensor)
+ 					ARRAY_SIZE(test_pattern_menu) - 1,
+ 					0, 0, test_pattern_menu);
+ 
+-	ctrls->auto_exp = v4l2_ctrl_new_std_menu(hdl, ops,
+-						 V4L2_CID_EXPOSURE_AUTO,
+-						 V4L2_EXPOSURE_MANUAL, 0,
+-						 V4L2_EXPOSURE_AUTO);
+-
+ 	ctrls->exposure = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_EXPOSURE,
+ 					    0, 32767, 1, 0);
+ 
+-	ctrls->auto_gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_AUTOGAIN,
+-					     0, 1, 1, 1);
+ 	ctrls->gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_GAIN, 0, 2047, 1, 0);
+ 
+ 	if (hdl->error) {
+@@ -964,11 +815,8 @@ static int ov2680_v4l2_register(struct ov2680_dev *sensor)
+ 		goto cleanup_entity;
+ 	}
+ 
+-	ctrls->gain->flags |= V4L2_CTRL_FLAG_VOLATILE;
+-	ctrls->exposure->flags |= V4L2_CTRL_FLAG_VOLATILE;
+-
+-	v4l2_ctrl_auto_cluster(2, &ctrls->auto_gain, 0, true);
+-	v4l2_ctrl_auto_cluster(2, &ctrls->auto_exp, 1, true);
++	ctrls->vflip->flags |= V4L2_CTRL_FLAG_MODIFY_LAYOUT;
++	ctrls->hflip->flags |= V4L2_CTRL_FLAG_MODIFY_LAYOUT;
+ 
+ 	sensor->sd.ctrl_handler = hdl;
+ 
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index 92a5f9aff9b53..db4b6095f4f4c 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -1942,9 +1942,9 @@ static int ov5640_set_power_mipi(struct ov5640_dev *sensor, bool on)
+ 	 *		  "ov5640_set_stream_mipi()")
+ 	 * [4] = 0	: Power up MIPI HS Tx
+ 	 * [3] = 0	: Power up MIPI LS Rx
+-	 * [2] = 0	: MIPI interface disabled
++	 * [2] = 1	: MIPI interface enabled
+ 	 */
+-	ret = ov5640_write_reg(sensor, OV5640_REG_IO_MIPI_CTRL00, 0x40);
++	ret = ov5640_write_reg(sensor, OV5640_REG_IO_MIPI_CTRL00, 0x44);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index 3b3221fd3fe8f..cf0570a6760ca 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -2078,6 +2078,10 @@ static int tvp5150_parse_dt(struct tvp5150 *decoder, struct device_node *np)
+ 		tvpc->ent.name = devm_kasprintf(dev, GFP_KERNEL, "%s %s",
+ 						v4l2c->name, v4l2c->label ?
+ 						v4l2c->label : "");
++		if (!tvpc->ent.name) {
++			ret = -ENOMEM;
++			goto err_free;
++		}
+ 	}
+ 
+ 	ep_np = of_graph_get_endpoint_by_regs(np, TVP5150_PAD_VID_OUT, 0);
+diff --git a/drivers/media/pci/bt8xx/dst.c b/drivers/media/pci/bt8xx/dst.c
+index 3e52a51982d76..110651e478314 100644
+--- a/drivers/media/pci/bt8xx/dst.c
++++ b/drivers/media/pci/bt8xx/dst.c
+@@ -1722,7 +1722,7 @@ struct dst_state *dst_attach(struct dst_state *state, struct dvb_adapter *dvb_ad
+ 	return state;				/*	Manu (DST is a card not a frontend)	*/
+ }
+ 
+-EXPORT_SYMBOL(dst_attach);
++EXPORT_SYMBOL_GPL(dst_attach);
+ 
+ static const struct dvb_frontend_ops dst_dvbt_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/pci/bt8xx/dst_ca.c b/drivers/media/pci/bt8xx/dst_ca.c
+index 85fcdc59f0d18..571392d80ccc6 100644
+--- a/drivers/media/pci/bt8xx/dst_ca.c
++++ b/drivers/media/pci/bt8xx/dst_ca.c
+@@ -668,7 +668,7 @@ struct dvb_device *dst_ca_attach(struct dst_state *dst, struct dvb_adapter *dvb_
+ 	return NULL;
+ }
+ 
+-EXPORT_SYMBOL(dst_ca_attach);
++EXPORT_SYMBOL_GPL(dst_ca_attach);
+ 
+ MODULE_DESCRIPTION("DST DVB-S/T/C Combo CA driver");
+ MODULE_AUTHOR("Manu Abraham");
+diff --git a/drivers/media/pci/cx23885/cx23885-dvb.c b/drivers/media/pci/cx23885/cx23885-dvb.c
+index 45c2f4afceb82..9b437faf2c3f6 100644
+--- a/drivers/media/pci/cx23885/cx23885-dvb.c
++++ b/drivers/media/pci/cx23885/cx23885-dvb.c
+@@ -2459,16 +2459,10 @@ static int dvb_register(struct cx23885_tsport *port)
+ 			request_module("%s", info.type);
+ 			client_tuner = i2c_new_client_device(&dev->i2c_bus[1].i2c_adap, &info);
+ 			if (!i2c_client_has_driver(client_tuner)) {
+-				module_put(client_demod->dev.driver->owner);
+-				i2c_unregister_device(client_demod);
+-				port->i2c_client_demod = NULL;
+ 				goto frontend_detach;
+ 			}
+ 			if (!try_module_get(client_tuner->dev.driver->owner)) {
+ 				i2c_unregister_device(client_tuner);
+-				module_put(client_demod->dev.driver->owner);
+-				i2c_unregister_device(client_demod);
+-				port->i2c_client_demod = NULL;
+ 				goto frontend_detach;
+ 			}
+ 			port->i2c_client_tuner = client_tuner;
+@@ -2505,16 +2499,10 @@ static int dvb_register(struct cx23885_tsport *port)
+ 			request_module("%s", info.type);
+ 			client_tuner = i2c_new_client_device(&dev->i2c_bus[1].i2c_adap, &info);
+ 			if (!i2c_client_has_driver(client_tuner)) {
+-				module_put(client_demod->dev.driver->owner);
+-				i2c_unregister_device(client_demod);
+-				port->i2c_client_demod = NULL;
+ 				goto frontend_detach;
+ 			}
+ 			if (!try_module_get(client_tuner->dev.driver->owner)) {
+ 				i2c_unregister_device(client_tuner);
+-				module_put(client_demod->dev.driver->owner);
+-				i2c_unregister_device(client_demod);
+-				port->i2c_client_demod = NULL;
+ 				goto frontend_detach;
+ 			}
+ 			port->i2c_client_tuner = client_tuner;
+diff --git a/drivers/media/pci/ddbridge/ddbridge-dummy-fe.c b/drivers/media/pci/ddbridge/ddbridge-dummy-fe.c
+index 6868a0c4fc82a..520ebd16b0c44 100644
+--- a/drivers/media/pci/ddbridge/ddbridge-dummy-fe.c
++++ b/drivers/media/pci/ddbridge/ddbridge-dummy-fe.c
+@@ -112,7 +112,7 @@ struct dvb_frontend *ddbridge_dummy_fe_qam_attach(void)
+ 	state->frontend.demodulator_priv = state;
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(ddbridge_dummy_fe_qam_attach);
++EXPORT_SYMBOL_GPL(ddbridge_dummy_fe_qam_attach);
+ 
+ static const struct dvb_frontend_ops ddbridge_dummy_fe_qam_ops = {
+ 	.delsys = { SYS_DVBC_ANNEX_A },
+diff --git a/drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c b/drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c
+index d9880210b2ab6..43c108b68d0a0 100644
+--- a/drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c
++++ b/drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c
+@@ -226,10 +226,11 @@ static struct vdec_fb *vp9_rm_from_fb_use_list(struct vdec_vp9_inst
+ 		if (fb->base_y.va == addr) {
+ 			list_move_tail(&node->list,
+ 				       &inst->available_fb_node_list);
+-			break;
++			return fb;
+ 		}
+ 	}
+-	return fb;
++
++	return NULL;
+ }
+ 
+ static void vp9_add_to_fb_free_list(struct vdec_vp9_inst *inst,
+diff --git a/drivers/media/tuners/fc0011.c b/drivers/media/tuners/fc0011.c
+index eaa3bbc903d7e..3d3b54be29557 100644
+--- a/drivers/media/tuners/fc0011.c
++++ b/drivers/media/tuners/fc0011.c
+@@ -499,7 +499,7 @@ struct dvb_frontend *fc0011_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(fc0011_attach);
++EXPORT_SYMBOL_GPL(fc0011_attach);
+ 
+ MODULE_DESCRIPTION("Fitipower FC0011 silicon tuner driver");
+ MODULE_AUTHOR("Michael Buesch <m@bues.ch>");
+diff --git a/drivers/media/tuners/fc0012.c b/drivers/media/tuners/fc0012.c
+index 4429d5e8c5796..81e65acbdb170 100644
+--- a/drivers/media/tuners/fc0012.c
++++ b/drivers/media/tuners/fc0012.c
+@@ -495,7 +495,7 @@ err:
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(fc0012_attach);
++EXPORT_SYMBOL_GPL(fc0012_attach);
+ 
+ MODULE_DESCRIPTION("Fitipower FC0012 silicon tuner driver");
+ MODULE_AUTHOR("Hans-Frieder Vogt <hfvogt@gmx.net>");
+diff --git a/drivers/media/tuners/fc0013.c b/drivers/media/tuners/fc0013.c
+index 29dd9b55ff333..1006a2798eefc 100644
+--- a/drivers/media/tuners/fc0013.c
++++ b/drivers/media/tuners/fc0013.c
+@@ -608,7 +608,7 @@ struct dvb_frontend *fc0013_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(fc0013_attach);
++EXPORT_SYMBOL_GPL(fc0013_attach);
+ 
+ MODULE_DESCRIPTION("Fitipower FC0013 silicon tuner driver");
+ MODULE_AUTHOR("Hans-Frieder Vogt <hfvogt@gmx.net>");
+diff --git a/drivers/media/tuners/max2165.c b/drivers/media/tuners/max2165.c
+index 1c746bed51fee..1575ab94e1c8b 100644
+--- a/drivers/media/tuners/max2165.c
++++ b/drivers/media/tuners/max2165.c
+@@ -410,7 +410,7 @@ struct dvb_frontend *max2165_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(max2165_attach);
++EXPORT_SYMBOL_GPL(max2165_attach);
+ 
+ MODULE_AUTHOR("David T. L. Wong <davidtlwong@gmail.com>");
+ MODULE_DESCRIPTION("Maxim MAX2165 silicon tuner driver");
+diff --git a/drivers/media/tuners/mc44s803.c b/drivers/media/tuners/mc44s803.c
+index 0c9161516abdf..ed8bdf7ebd99d 100644
+--- a/drivers/media/tuners/mc44s803.c
++++ b/drivers/media/tuners/mc44s803.c
+@@ -356,7 +356,7 @@ error:
+ 	kfree(priv);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(mc44s803_attach);
++EXPORT_SYMBOL_GPL(mc44s803_attach);
+ 
+ MODULE_AUTHOR("Jochen Friedrich");
+ MODULE_DESCRIPTION("Freescale MC44S803 silicon tuner driver");
+diff --git a/drivers/media/tuners/mt2060.c b/drivers/media/tuners/mt2060.c
+index 0e7ac2b49990f..b59c5ba2ee58e 100644
+--- a/drivers/media/tuners/mt2060.c
++++ b/drivers/media/tuners/mt2060.c
+@@ -440,7 +440,7 @@ struct dvb_frontend * mt2060_attach(struct dvb_frontend *fe, struct i2c_adapter
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(mt2060_attach);
++EXPORT_SYMBOL_GPL(mt2060_attach);
+ 
+ static int mt2060_probe(struct i2c_client *client,
+ 			const struct i2c_device_id *id)
+diff --git a/drivers/media/tuners/mt2131.c b/drivers/media/tuners/mt2131.c
+index 37f50ff6c0bd2..eebc060883414 100644
+--- a/drivers/media/tuners/mt2131.c
++++ b/drivers/media/tuners/mt2131.c
+@@ -274,7 +274,7 @@ struct dvb_frontend * mt2131_attach(struct dvb_frontend *fe,
+ 	fe->tuner_priv = priv;
+ 	return fe;
+ }
+-EXPORT_SYMBOL(mt2131_attach);
++EXPORT_SYMBOL_GPL(mt2131_attach);
+ 
+ MODULE_AUTHOR("Steven Toth");
+ MODULE_DESCRIPTION("Microtune MT2131 silicon tuner driver");
+diff --git a/drivers/media/tuners/mt2266.c b/drivers/media/tuners/mt2266.c
+index 6136f20fa9b7f..2e92885a6bcb9 100644
+--- a/drivers/media/tuners/mt2266.c
++++ b/drivers/media/tuners/mt2266.c
+@@ -336,7 +336,7 @@ struct dvb_frontend * mt2266_attach(struct dvb_frontend *fe, struct i2c_adapter
+ 	mt2266_calibrate(priv);
+ 	return fe;
+ }
+-EXPORT_SYMBOL(mt2266_attach);
++EXPORT_SYMBOL_GPL(mt2266_attach);
+ 
+ MODULE_AUTHOR("Olivier DANET");
+ MODULE_DESCRIPTION("Microtune MT2266 silicon tuner driver");
+diff --git a/drivers/media/tuners/mxl5005s.c b/drivers/media/tuners/mxl5005s.c
+index 1c07e2225fb39..cae6ded10b122 100644
+--- a/drivers/media/tuners/mxl5005s.c
++++ b/drivers/media/tuners/mxl5005s.c
+@@ -4114,7 +4114,7 @@ struct dvb_frontend *mxl5005s_attach(struct dvb_frontend *fe,
+ 	fe->tuner_priv = state;
+ 	return fe;
+ }
+-EXPORT_SYMBOL(mxl5005s_attach);
++EXPORT_SYMBOL_GPL(mxl5005s_attach);
+ 
+ MODULE_DESCRIPTION("MaxLinear MXL5005S silicon tuner driver");
+ MODULE_AUTHOR("Steven Toth");
+diff --git a/drivers/media/tuners/qt1010.c b/drivers/media/tuners/qt1010.c
+index 3853a3d43d4f2..60931367b82ca 100644
+--- a/drivers/media/tuners/qt1010.c
++++ b/drivers/media/tuners/qt1010.c
+@@ -440,7 +440,7 @@ struct dvb_frontend * qt1010_attach(struct dvb_frontend *fe,
+ 	fe->tuner_priv = priv;
+ 	return fe;
+ }
+-EXPORT_SYMBOL(qt1010_attach);
++EXPORT_SYMBOL_GPL(qt1010_attach);
+ 
+ MODULE_DESCRIPTION("Quantek QT1010 silicon tuner driver");
+ MODULE_AUTHOR("Antti Palosaari <crope@iki.fi>");
+diff --git a/drivers/media/tuners/tda18218.c b/drivers/media/tuners/tda18218.c
+index 4ed94646116fa..7d8d84dcb2459 100644
+--- a/drivers/media/tuners/tda18218.c
++++ b/drivers/media/tuners/tda18218.c
+@@ -336,7 +336,7 @@ struct dvb_frontend *tda18218_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(tda18218_attach);
++EXPORT_SYMBOL_GPL(tda18218_attach);
+ 
+ MODULE_DESCRIPTION("NXP TDA18218HN silicon tuner driver");
+ MODULE_AUTHOR("Antti Palosaari <crope@iki.fi>");
+diff --git a/drivers/media/tuners/xc4000.c b/drivers/media/tuners/xc4000.c
+index d9606738ce432..ef9af052007cb 100644
+--- a/drivers/media/tuners/xc4000.c
++++ b/drivers/media/tuners/xc4000.c
+@@ -1744,7 +1744,7 @@ fail2:
+ 	xc4000_release(fe);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(xc4000_attach);
++EXPORT_SYMBOL_GPL(xc4000_attach);
+ 
+ MODULE_AUTHOR("Steven Toth, Davide Ferri");
+ MODULE_DESCRIPTION("Xceive xc4000 silicon tuner driver");
+diff --git a/drivers/media/tuners/xc5000.c b/drivers/media/tuners/xc5000.c
+index 7b7d9fe4f9453..2182e5b7b6064 100644
+--- a/drivers/media/tuners/xc5000.c
++++ b/drivers/media/tuners/xc5000.c
+@@ -1460,7 +1460,7 @@ fail:
+ 	xc5000_release(fe);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(xc5000_attach);
++EXPORT_SYMBOL_GPL(xc5000_attach);
+ 
+ MODULE_AUTHOR("Steven Toth");
+ MODULE_DESCRIPTION("Xceive xc5000 silicon tuner driver");
+diff --git a/drivers/media/usb/dvb-usb/m920x.c b/drivers/media/usb/dvb-usb/m920x.c
+index 691e05833db19..da81fa189b5d5 100644
+--- a/drivers/media/usb/dvb-usb/m920x.c
++++ b/drivers/media/usb/dvb-usb/m920x.c
+@@ -277,7 +277,6 @@ static int m920x_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int nu
+ 			char *read = kmalloc(1, GFP_KERNEL);
+ 			if (!read) {
+ 				ret = -ENOMEM;
+-				kfree(read);
+ 				goto unlock;
+ 			}
+ 
+@@ -288,8 +287,10 @@ static int m920x_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int nu
+ 
+ 				if ((ret = m920x_read(d->udev, M9206_I2C, 0x0,
+ 						      0x20 | stop,
+-						      read, 1)) != 0)
++						      read, 1)) != 0) {
++					kfree(read);
+ 					goto unlock;
++				}
+ 				msg[i].buf[j] = read[0];
+ 			}
+ 
+diff --git a/drivers/media/usb/go7007/go7007-i2c.c b/drivers/media/usb/go7007/go7007-i2c.c
+index 38339dd2f83f7..2880370e45c8b 100644
+--- a/drivers/media/usb/go7007/go7007-i2c.c
++++ b/drivers/media/usb/go7007/go7007-i2c.c
+@@ -165,8 +165,6 @@ static int go7007_i2c_master_xfer(struct i2c_adapter *adapter,
+ 		} else if (msgs[i].len == 3) {
+ 			if (msgs[i].flags & I2C_M_RD)
+ 				return -EIO;
+-			if (msgs[i].len != 3)
+-				return -EIO;
+ 			if (go7007_i2c_xfer(go, msgs[i].addr, 0,
+ 					(msgs[i].buf[0] << 8) | msgs[i].buf[1],
+ 					0x01, &msgs[i].buf[2]) < 0)
+diff --git a/drivers/media/usb/siano/smsusb.c b/drivers/media/usb/siano/smsusb.c
+index 5c223b5498b4b..6036ad3b15681 100644
+--- a/drivers/media/usb/siano/smsusb.c
++++ b/drivers/media/usb/siano/smsusb.c
+@@ -455,12 +455,7 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id)
+ 	rc = smscore_register_device(&params, &dev->coredev, 0, mdev);
+ 	if (rc < 0) {
+ 		pr_err("smscore_register_device(...) failed, rc %d\n", rc);
+-		smsusb_term_device(intf);
+-#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+-		media_device_unregister(mdev);
+-#endif
+-		kfree(mdev);
+-		return rc;
++		goto err_unregister_device;
+ 	}
+ 
+ 	smscore_set_board_id(dev->coredev, board_id);
+@@ -477,8 +472,7 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id)
+ 	rc = smsusb_start_streaming(dev);
+ 	if (rc < 0) {
+ 		pr_err("smsusb_start_streaming(...) failed\n");
+-		smsusb_term_device(intf);
+-		return rc;
++		goto err_unregister_device;
+ 	}
+ 
+ 	dev->state = SMSUSB_ACTIVE;
+@@ -486,13 +480,20 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id)
+ 	rc = smscore_start_device(dev->coredev);
+ 	if (rc < 0) {
+ 		pr_err("smscore_start_device(...) failed\n");
+-		smsusb_term_device(intf);
+-		return rc;
++		goto err_unregister_device;
+ 	}
+ 
+ 	pr_debug("device 0x%p created\n", dev);
+ 
+ 	return rc;
++
++err_unregister_device:
++	smsusb_term_device(intf);
++#ifdef CONFIG_MEDIA_CONTROLLER_DVB
++	media_device_unregister(mdev);
++#endif
++	kfree(mdev);
++	return rc;
+ }
+ 
+ static int smsusb_probe(struct usb_interface *intf,
+diff --git a/drivers/media/v4l2-core/v4l2-fwnode.c b/drivers/media/v4l2-core/v4l2-fwnode.c
+index dfc53d11053fc..1977ce0195fee 100644
+--- a/drivers/media/v4l2-core/v4l2-fwnode.c
++++ b/drivers/media/v4l2-core/v4l2-fwnode.c
+@@ -572,19 +572,29 @@ int v4l2_fwnode_parse_link(struct fwnode_handle *fwnode,
+ 	link->local_id = fwep.id;
+ 	link->local_port = fwep.port;
+ 	link->local_node = fwnode_graph_get_port_parent(fwnode);
++	if (!link->local_node)
++		return -ENOLINK;
+ 
+ 	fwnode = fwnode_graph_get_remote_endpoint(fwnode);
+-	if (!fwnode) {
+-		fwnode_handle_put(fwnode);
+-		return -ENOLINK;
+-	}
++	if (!fwnode)
++		goto err_put_local_node;
+ 
+ 	fwnode_graph_parse_endpoint(fwnode, &fwep);
+ 	link->remote_id = fwep.id;
+ 	link->remote_port = fwep.port;
+ 	link->remote_node = fwnode_graph_get_port_parent(fwnode);
++	if (!link->remote_node)
++		goto err_put_remote_endpoint;
+ 
+ 	return 0;
++
++err_put_remote_endpoint:
++	fwnode_handle_put(fwnode);
++
++err_put_local_node:
++	fwnode_handle_put(link->local_node);
++
++	return -ENOLINK;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_fwnode_parse_link);
+ 
+diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
+index 82e1fbd6b2ff0..a5b2bf0e40cc1 100644
+--- a/drivers/mmc/host/Kconfig
++++ b/drivers/mmc/host/Kconfig
+@@ -520,11 +520,12 @@ config MMC_ALCOR
+ 	  of Alcor Micro PCI-E card reader
+ 
+ config MMC_AU1X
+-	tristate "Alchemy AU1XX0 MMC Card Interface support"
++	bool "Alchemy AU1XX0 MMC Card Interface support"
+ 	depends on MIPS_ALCHEMY
++	depends on MMC=y
+ 	help
+ 	  This selects the AMD Alchemy(R) Multimedia card interface.
+-	  If you have a Alchemy platform with a MMC slot, say Y or M here.
++	  If you have a Alchemy platform with a MMC slot, say Y here.
+ 
+ 	  If unsure, say N.
+ 
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index 580b91cbd18de..e170c545fec50 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -1040,6 +1040,14 @@ static int bcmnand_ctrl_poll_status(struct brcmnand_controller *ctrl,
+ 		cpu_relax();
+ 	} while (time_after(limit, jiffies));
+ 
++	/*
++	 * do a final check after time out in case the CPU was busy and the driver
++	 * did not get enough time to perform the polling to avoid false alarms
++	 */
++	val = brcmnand_read_reg(ctrl, BRCMNAND_INTFC_STATUS);
++	if ((val & mask) == expected_val)
++		return 0;
++
+ 	dev_warn(ctrl->dev, "timeout on status poll (expected %x got %x)\n",
+ 		 expected_val, val & mask);
+ 
+@@ -1429,19 +1437,33 @@ static int write_oob_to_regs(struct brcmnand_controller *ctrl, int i,
+ 			     const u8 *oob, int sas, int sector_1k)
+ {
+ 	int tbytes = sas << sector_1k;
+-	int j;
++	int j, k = 0;
++	u32 last = 0xffffffff;
++	u8 *plast = (u8 *)&last;
+ 
+ 	/* Adjust OOB values for 1K sector size */
+ 	if (sector_1k && (i & 0x01))
+ 		tbytes = max(0, tbytes - (int)ctrl->max_oob);
+ 	tbytes = min_t(int, tbytes, ctrl->max_oob);
+ 
+-	for (j = 0; j < tbytes; j += 4)
++	/*
++	 * tbytes may not be multiple of words. Make sure we don't read out of
++	 * the boundary and stop at last word.
++	 */
++	for (j = 0; (j + 3) < tbytes; j += 4)
+ 		oob_reg_write(ctrl, j,
+ 				(oob[j + 0] << 24) |
+ 				(oob[j + 1] << 16) |
+ 				(oob[j + 2] <<  8) |
+ 				(oob[j + 3] <<  0));
++
++	/* handle the remaing bytes */
++	while (j < tbytes)
++		plast[k++] = oob[j++];
++
++	if (tbytes & 0x3)
++		oob_reg_write(ctrl, (tbytes & ~0x3), (__force u32)cpu_to_be32(last));
++
+ 	return tbytes;
+ }
+ 
+@@ -1543,7 +1565,17 @@ static void brcmnand_send_cmd(struct brcmnand_host *host, int cmd)
+ 
+ 	dev_dbg(ctrl->dev, "send native cmd %d addr 0x%llx\n", cmd, cmd_addr);
+ 
+-	BUG_ON(ctrl->cmd_pending != 0);
++	/*
++	 * If we came here through _panic_write and there is a pending
++	 * command, try to wait for it. If it times out, rather than
++	 * hitting BUG_ON, just return so we don't crash while crashing.
++	 */
++	if (oops_in_progress) {
++		if (ctrl->cmd_pending &&
++			bcmnand_ctrl_poll_status(ctrl, NAND_CTRL_RDY, NAND_CTRL_RDY, 0))
++			return;
++	} else
++		BUG_ON(ctrl->cmd_pending != 0);
+ 	ctrl->cmd_pending = cmd;
+ 
+ 	ret = bcmnand_ctrl_poll_status(ctrl, NAND_CTRL_RDY, NAND_CTRL_RDY, 0);
+@@ -2534,6 +2566,8 @@ static int brcmnand_setup_dev(struct brcmnand_host *host)
+ 	struct nand_chip *chip = &host->chip;
+ 	const struct nand_ecc_props *requirements =
+ 		nanddev_get_ecc_requirements(&chip->base);
++	struct nand_memory_organization *memorg =
++		nanddev_get_memorg(&chip->base);
+ 	struct brcmnand_controller *ctrl = host->ctrl;
+ 	struct brcmnand_cfg *cfg = &host->hwcfg;
+ 	char msg[128];
+@@ -2555,10 +2589,11 @@ static int brcmnand_setup_dev(struct brcmnand_host *host)
+ 	if (cfg->spare_area_size > ctrl->max_oob)
+ 		cfg->spare_area_size = ctrl->max_oob;
+ 	/*
+-	 * Set oobsize to be consistent with controller's spare_area_size, as
+-	 * the rest is inaccessible.
++	 * Set mtd and memorg oobsize to be consistent with controller's
++	 * spare_area_size, as the rest is inaccessible.
+ 	 */
+ 	mtd->oobsize = cfg->spare_area_size * (mtd->writesize >> FC_SHIFT);
++	memorg->oobsize = mtd->oobsize;
+ 
+ 	cfg->device_size = mtd->size;
+ 	cfg->block_size = mtd->erasesize;
+diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c
+index 663ff5300ad99..3da66e95e5b7e 100644
+--- a/drivers/mtd/nand/raw/fsmc_nand.c
++++ b/drivers/mtd/nand/raw/fsmc_nand.c
+@@ -1190,9 +1190,14 @@ static int fsmc_nand_suspend(struct device *dev)
+ static int fsmc_nand_resume(struct device *dev)
+ {
+ 	struct fsmc_nand_data *host = dev_get_drvdata(dev);
++	int ret;
+ 
+ 	if (host) {
+-		clk_prepare_enable(host->clk);
++		ret = clk_prepare_enable(host->clk);
++		if (ret) {
++			dev_err(dev, "failed to enable clk\n");
++			return ret;
++		}
+ 		if (host->dev_timings)
+ 			fsmc_nand_setup(host, host->dev_timings);
+ 		nand_reset(&host->nand, 0);
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 3422152319321..09e112f376918 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -800,21 +800,22 @@ static int spi_nor_write_16bit_sr_and_check(struct spi_nor *nor, u8 sr1)
+ 		ret = spi_nor_read_cr(nor, &sr_cr[1]);
+ 		if (ret)
+ 			return ret;
+-	} else if (nor->params->quad_enable) {
++	} else if (spi_nor_get_protocol_width(nor->read_proto) == 4 &&
++		   spi_nor_get_protocol_width(nor->write_proto) == 4 &&
++		   nor->params->quad_enable) {
+ 		/*
+ 		 * If the Status Register 2 Read command (35h) is not
+ 		 * supported, we should at least be sure we don't
+ 		 * change the value of the SR2 Quad Enable bit.
+ 		 *
+-		 * We can safely assume that when the Quad Enable method is
+-		 * set, the value of the QE bit is one, as a consequence of the
+-		 * nor->params->quad_enable() call.
++		 * When the Quad Enable method is set and the buswidth is 4, we
++		 * can safely assume that the value of the QE bit is one, as a
++		 * consequence of the nor->params->quad_enable() call.
+ 		 *
+-		 * We can safely assume that the Quad Enable bit is present in
+-		 * the Status Register 2 at BIT(1). According to the JESD216
+-		 * revB standard, BFPT DWORDS[15], bits 22:20, the 16-bit
+-		 * Write Status (01h) command is available just for the cases
+-		 * in which the QE bit is described in SR2 at BIT(1).
++		 * According to the JESD216 revB standard, BFPT DWORDS[15],
++		 * bits 22:20, the 16-bit Write Status (01h) command is
++		 * available just for the cases in which the QE bit is
++		 * described in SR2 at BIT(1).
+ 		 */
+ 		sr_cr[1] = SR2_QUAD_EN_BIT1;
+ 	} else {
+diff --git a/drivers/net/arcnet/arcnet.c b/drivers/net/arcnet/arcnet.c
+index d76dd7d14299e..a7899405a51a5 100644
+--- a/drivers/net/arcnet/arcnet.c
++++ b/drivers/net/arcnet/arcnet.c
+@@ -468,7 +468,7 @@ static void arcnet_reply_tasklet(unsigned long data)
+ 
+ 	ret = sock_queue_err_skb(sk, ackskb);
+ 	if (ret)
+-		kfree_skb(ackskb);
++		dev_kfree_skb_irq(ackskb);
+ 
+ 	local_irq_enable();
+ };
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index 1f81293f137c9..864db200f45e5 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -381,6 +381,9 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
+ 	}
+ 
+ 	if (hf->flags & GS_CAN_FLAG_OVERFLOW) {
++		stats->rx_over_errors++;
++		stats->rx_errors++;
++
+ 		skb = alloc_can_err_skb(netdev, &cf);
+ 		if (!skb)
+ 			goto resubmit_urb;
+@@ -388,8 +391,6 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
+ 		cf->can_id |= CAN_ERR_CRTL;
+ 		cf->can_dlc = CAN_ERR_DLC;
+ 		cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
+-		stats->rx_over_errors++;
+-		stats->rx_errors++;
+ 		netif_rx(skb);
+ 	}
+ 
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index c03d76c108686..4362fe0f346d2 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -1691,6 +1691,18 @@ static void sja1105_bridge_leave(struct dsa_switch *ds, int port,
+ 
+ #define BYTES_PER_KBIT (1000LL / 8)
+ 
++static int sja1105_find_cbs_shaper(struct sja1105_private *priv,
++				   int port, int prio)
++{
++	int i;
++
++	for (i = 0; i < priv->info->num_cbs_shapers; i++)
++		if (priv->cbs[i].port == port && priv->cbs[i].prio == prio)
++			return i;
++
++	return -1;
++}
++
+ static int sja1105_find_unused_cbs_shaper(struct sja1105_private *priv)
+ {
+ 	int i;
+@@ -1725,14 +1737,20 @@ static int sja1105_setup_tc_cbs(struct dsa_switch *ds, int port,
+ {
+ 	struct sja1105_private *priv = ds->priv;
+ 	struct sja1105_cbs_entry *cbs;
++	s64 port_transmit_rate_kbps;
+ 	int index;
+ 
+ 	if (!offload->enable)
+ 		return sja1105_delete_cbs_shaper(priv, port, offload->queue);
+ 
+-	index = sja1105_find_unused_cbs_shaper(priv);
+-	if (index < 0)
+-		return -ENOSPC;
++	/* The user may be replacing an existing shaper */
++	index = sja1105_find_cbs_shaper(priv, port, offload->queue);
++	if (index < 0) {
++		/* That isn't the case - see if we can allocate a new one */
++		index = sja1105_find_unused_cbs_shaper(priv);
++		if (index < 0)
++			return -ENOSPC;
++	}
+ 
+ 	cbs = &priv->cbs[index];
+ 	cbs->port = port;
+@@ -1742,9 +1760,17 @@ static int sja1105_setup_tc_cbs(struct dsa_switch *ds, int port,
+ 	 */
+ 	cbs->credit_hi = offload->hicredit;
+ 	cbs->credit_lo = abs(offload->locredit);
+-	/* User space is in kbits/sec, hardware in bytes/sec */
+-	cbs->idle_slope = offload->idleslope * BYTES_PER_KBIT;
+-	cbs->send_slope = abs(offload->sendslope * BYTES_PER_KBIT);
++	/* User space is in kbits/sec, while the hardware in bytes/sec times
++	 * link speed. Since the given offload->sendslope is good only for the
++	 * current link speed anyway, and user space is likely to reprogram it
++	 * when that changes, don't even bother to track the port's link speed,
++	 * but deduce the port transmit rate from idleslope - sendslope.
++	 */
++	port_transmit_rate_kbps = offload->idleslope - offload->sendslope;
++	cbs->idle_slope = div_s64(offload->idleslope * BYTES_PER_KBIT,
++				  port_transmit_rate_kbps);
++	cbs->send_slope = div_s64(abs(offload->sendslope * BYTES_PER_KBIT),
++				  port_transmit_rate_kbps);
+ 	/* Convert the negative values from 64-bit 2's complement
+ 	 * to 32-bit 2's complement (for the case of 0x80000000 whose
+ 	 * negative is still negative).
+diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+index 3f65f2b370c57..2c5af0d7666aa 100644
+--- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
++++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+@@ -1987,8 +1987,11 @@ static int atl1c_tso_csum(struct atl1c_adapter *adapter,
+ 			real_len = (((unsigned char *)ip_hdr(skb) - skb->data)
+ 					+ ntohs(ip_hdr(skb)->tot_len));
+ 
+-			if (real_len < skb->len)
+-				pskb_trim(skb, real_len);
++			if (real_len < skb->len) {
++				err = pskb_trim(skb, real_len);
++				if (err)
++					return err;
++			}
+ 
+ 			hdr_len = (skb_transport_offset(skb) + tcp_hdrlen(skb));
+ 			if (unlikely(skb->len == hdr_len)) {
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+index afb6d3ee1f564..c8cbf3ed128de 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+@@ -14372,11 +14372,16 @@ static void bnx2x_io_resume(struct pci_dev *pdev)
+ 	bp->fw_seq = SHMEM_RD(bp, func_mb[BP_FW_MB_IDX(bp)].drv_mb_header) &
+ 							DRV_MSG_SEQ_NUMBER_MASK;
+ 
+-	if (netif_running(dev))
+-		bnx2x_nic_load(bp, LOAD_NORMAL);
++	if (netif_running(dev)) {
++		if (bnx2x_nic_load(bp, LOAD_NORMAL)) {
++			netdev_err(bp->dev, "Error during driver initialization, try unloading/reloading the driver\n");
++			goto done;
++		}
++	}
+ 
+ 	netif_device_attach(dev);
+ 
++done:
+ 	rtnl_unlock();
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ptp.c b/drivers/net/ethernet/freescale/enetc/enetc_ptp.c
+index bc594892507ac..8c36615256944 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_ptp.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_ptp.c
+@@ -8,7 +8,7 @@
+ #include "enetc.h"
+ 
+ int enetc_phc_index = -1;
+-EXPORT_SYMBOL(enetc_phc_index);
++EXPORT_SYMBOL_GPL(enetc_phc_index);
+ 
+ static struct ptp_clock_info enetc_ptp_caps = {
+ 	.owner		= THIS_MODULE,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index cd0d7a546957a..d35f4b2b480e6 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -704,7 +704,9 @@ static int hns3_get_link_ksettings(struct net_device *netdev,
+ 		hns3_get_ksettings(h, cmd);
+ 		break;
+ 	case HNAE3_MEDIA_TYPE_FIBER:
+-		if (module_type == HNAE3_MODULE_TYPE_CR)
++		if (module_type == HNAE3_MODULE_TYPE_UNKNOWN)
++			cmd->base.port = PORT_OTHER;
++		else if (module_type == HNAE3_MODULE_TYPE_CR)
+ 			cmd->base.port = PORT_DA;
+ 		else
+ 			cmd->base.port = PORT_FIBRE;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 4f0d63fa5709b..d2ee760f92942 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -1137,6 +1137,7 @@ int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
+ static void ice_aq_check_events(struct ice_pf *pf, u16 opcode,
+ 				struct ice_rq_event_info *event)
+ {
++	struct ice_rq_event_info *task_ev;
+ 	struct ice_aq_task *task;
+ 	bool found = false;
+ 
+@@ -1145,15 +1146,15 @@ static void ice_aq_check_events(struct ice_pf *pf, u16 opcode,
+ 		if (task->state || task->opcode != opcode)
+ 			continue;
+ 
+-		memcpy(&task->event->desc, &event->desc, sizeof(event->desc));
+-		task->event->msg_len = event->msg_len;
++		task_ev = task->event;
++		memcpy(&task_ev->desc, &event->desc, sizeof(event->desc));
++		task_ev->msg_len = event->msg_len;
+ 
+ 		/* Only copy the data buffer if a destination was set */
+-		if (task->event->msg_buf &&
+-		    task->event->buf_len > event->buf_len) {
+-			memcpy(task->event->msg_buf, event->msg_buf,
++		if (task_ev->msg_buf && task_ev->buf_len >= event->buf_len) {
++			memcpy(task_ev->msg_buf, event->msg_buf,
+ 			       event->buf_len);
+-			task->event->buf_len = event->buf_len;
++			task_ev->buf_len = event->buf_len;
+ 		}
+ 
+ 		task->state = ICE_AQ_TASK_COMPLETE;
+diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
+index e6d2800a8abc5..da0e3897e6831 100644
+--- a/drivers/net/ethernet/intel/igb/igb.h
++++ b/drivers/net/ethernet/intel/igb/igb.h
+@@ -34,11 +34,11 @@ struct igb_adapter;
+ /* TX/RX descriptor defines */
+ #define IGB_DEFAULT_TXD		256
+ #define IGB_DEFAULT_TX_WORK	128
+-#define IGB_MIN_TXD		80
++#define IGB_MIN_TXD		64
+ #define IGB_MAX_TXD		4096
+ 
+ #define IGB_DEFAULT_RXD		256
+-#define IGB_MIN_RXD		80
++#define IGB_MIN_RXD		64
+ #define IGB_MAX_RXD		4096
+ 
+ #define IGB_DEFAULT_ITR		3 /* dynamic */
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 4465982100127..01176c86be125 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -3857,8 +3857,9 @@ static void igb_probe_vfs(struct igb_adapter *adapter)
+ 	struct pci_dev *pdev = adapter->pdev;
+ 	struct e1000_hw *hw = &adapter->hw;
+ 
+-	/* Virtualization features not supported on i210 family. */
+-	if ((hw->mac.type == e1000_i210) || (hw->mac.type == e1000_i211))
++	/* Virtualization features not supported on i210 and 82580 family. */
++	if ((hw->mac.type == e1000_i210) || (hw->mac.type == e1000_i211) ||
++	    (hw->mac.type == e1000_82580))
+ 		return;
+ 
+ 	/* Of the below we really only want the effect of getting
+@@ -4731,6 +4732,10 @@ void igb_configure_rx_ring(struct igb_adapter *adapter,
+ static void igb_set_rx_buffer_len(struct igb_adapter *adapter,
+ 				  struct igb_ring *rx_ring)
+ {
++#if (PAGE_SIZE < 8192)
++	struct e1000_hw *hw = &adapter->hw;
++#endif
++
+ 	/* set build_skb and buffer size flags */
+ 	clear_ring_build_skb_enabled(rx_ring);
+ 	clear_ring_uses_large_buffer(rx_ring);
+@@ -4741,10 +4746,9 @@ static void igb_set_rx_buffer_len(struct igb_adapter *adapter,
+ 	set_ring_build_skb_enabled(rx_ring);
+ 
+ #if (PAGE_SIZE < 8192)
+-	if (adapter->max_frame_size <= IGB_MAX_FRAME_BUILD_SKB)
+-		return;
+-
+-	set_ring_uses_large_buffer(rx_ring);
++	if (adapter->max_frame_size > IGB_MAX_FRAME_BUILD_SKB ||
++	    rd32(E1000_RCTL) & E1000_RCTL_SBP)
++		set_ring_uses_large_buffer(rx_ring);
+ #endif
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/igbvf/igbvf.h b/drivers/net/ethernet/intel/igbvf/igbvf.h
+index 975eb47ee04df..b39fca9827dc2 100644
+--- a/drivers/net/ethernet/intel/igbvf/igbvf.h
++++ b/drivers/net/ethernet/intel/igbvf/igbvf.h
+@@ -39,11 +39,11 @@ enum latency_range {
+ /* Tx/Rx descriptor defines */
+ #define IGBVF_DEFAULT_TXD	256
+ #define IGBVF_MAX_TXD		4096
+-#define IGBVF_MIN_TXD		80
++#define IGBVF_MIN_TXD		64
+ 
+ #define IGBVF_DEFAULT_RXD	256
+ #define IGBVF_MAX_RXD		4096
+-#define IGBVF_MIN_RXD		80
++#define IGBVF_MIN_RXD		64
+ 
+ #define IGBVF_MIN_ITR_USECS	10 /* 100000 irq/sec */
+ #define IGBVF_MAX_ITR_USECS	10000 /* 100    irq/sec */
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index 33f64c80335d3..31af08ceb36b9 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -319,11 +319,11 @@ static inline u32 igc_rss_type(const union igc_adv_rx_desc *rx_desc)
+ /* TX/RX descriptor defines */
+ #define IGC_DEFAULT_TXD		256
+ #define IGC_DEFAULT_TX_WORK	128
+-#define IGC_MIN_TXD		80
++#define IGC_MIN_TXD		64
+ #define IGC_MAX_TXD		4096
+ 
+ #define IGC_DEFAULT_RXD		256
+-#define IGC_MIN_RXD		80
++#define IGC_MIN_RXD		64
+ #define IGC_MAX_RXD		4096
+ 
+ /* Supported Rx Buffer Sizes */
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
+index 8b7f300355710..3eb2c05361e80 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
+@@ -989,6 +989,7 @@ static int ixgbe_ptp_set_timestamp_mode(struct ixgbe_adapter *adapter,
+ 	u32 tsync_tx_ctl = IXGBE_TSYNCTXCTL_ENABLED;
+ 	u32 tsync_rx_ctl = IXGBE_TSYNCRXCTL_ENABLED;
+ 	u32 tsync_rx_mtrl = PTP_EV_PORT << 16;
++	u32 aflags = adapter->flags;
+ 	bool is_l2 = false;
+ 	u32 regval;
+ 
+@@ -1009,20 +1010,20 @@ static int ixgbe_ptp_set_timestamp_mode(struct ixgbe_adapter *adapter,
+ 	case HWTSTAMP_FILTER_NONE:
+ 		tsync_rx_ctl = 0;
+ 		tsync_rx_mtrl = 0;
+-		adapter->flags &= ~(IXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+-				    IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
++		aflags &= ~(IXGBE_FLAG_RX_HWTSTAMP_ENABLED |
++			    IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+ 		break;
+ 	case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+ 		tsync_rx_ctl |= IXGBE_TSYNCRXCTL_TYPE_L4_V1;
+ 		tsync_rx_mtrl |= IXGBE_RXMTRL_V1_SYNC_MSG;
+-		adapter->flags |= (IXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+-				   IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
++		aflags |= (IXGBE_FLAG_RX_HWTSTAMP_ENABLED |
++			   IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+ 		break;
+ 	case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+ 		tsync_rx_ctl |= IXGBE_TSYNCRXCTL_TYPE_L4_V1;
+ 		tsync_rx_mtrl |= IXGBE_RXMTRL_V1_DELAY_REQ_MSG;
+-		adapter->flags |= (IXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+-				   IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
++		aflags |= (IXGBE_FLAG_RX_HWTSTAMP_ENABLED |
++			   IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+ 		break;
+ 	case HWTSTAMP_FILTER_PTP_V2_EVENT:
+ 	case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+@@ -1036,8 +1037,8 @@ static int ixgbe_ptp_set_timestamp_mode(struct ixgbe_adapter *adapter,
+ 		tsync_rx_ctl |= IXGBE_TSYNCRXCTL_TYPE_EVENT_V2;
+ 		is_l2 = true;
+ 		config->rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+-		adapter->flags |= (IXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+-				   IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
++		aflags |= (IXGBE_FLAG_RX_HWTSTAMP_ENABLED |
++			   IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+ 		break;
+ 	case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
+ 	case HWTSTAMP_FILTER_NTP_ALL:
+@@ -1048,7 +1049,7 @@ static int ixgbe_ptp_set_timestamp_mode(struct ixgbe_adapter *adapter,
+ 		if (hw->mac.type >= ixgbe_mac_X550) {
+ 			tsync_rx_ctl |= IXGBE_TSYNCRXCTL_TYPE_ALL;
+ 			config->rx_filter = HWTSTAMP_FILTER_ALL;
+-			adapter->flags |= IXGBE_FLAG_RX_HWTSTAMP_ENABLED;
++			aflags |= IXGBE_FLAG_RX_HWTSTAMP_ENABLED;
+ 			break;
+ 		}
+ 		fallthrough;
+@@ -1059,8 +1060,6 @@ static int ixgbe_ptp_set_timestamp_mode(struct ixgbe_adapter *adapter,
+ 		 * Delay_Req messages and hardware does not support
+ 		 * timestamping all packets => return error
+ 		 */
+-		adapter->flags &= ~(IXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+-				    IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+ 		config->rx_filter = HWTSTAMP_FILTER_NONE;
+ 		return -ERANGE;
+ 	}
+@@ -1092,8 +1091,8 @@ static int ixgbe_ptp_set_timestamp_mode(struct ixgbe_adapter *adapter,
+ 			       IXGBE_TSYNCRXCTL_TYPE_ALL |
+ 			       IXGBE_TSYNCRXCTL_TSIP_UT_EN;
+ 		config->rx_filter = HWTSTAMP_FILTER_ALL;
+-		adapter->flags |= IXGBE_FLAG_RX_HWTSTAMP_ENABLED;
+-		adapter->flags &= ~IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER;
++		aflags |= IXGBE_FLAG_RX_HWTSTAMP_ENABLED;
++		aflags &= ~IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER;
+ 		is_l2 = true;
+ 		break;
+ 	default:
+@@ -1126,6 +1125,9 @@ static int ixgbe_ptp_set_timestamp_mode(struct ixgbe_adapter *adapter,
+ 
+ 	IXGBE_WRITE_FLUSH(hw);
+ 
++	/* configure adapter flags only when HW is actually configured */
++	adapter->flags = aflags;
++
+ 	/* clear TX/RX time stamp registers, just to be sure */
+ 	ixgbe_ptp_clear_tx_timestamp(adapter);
+ 	IXGBE_READ_REG(hw, IXGBE_RXSTMPH);
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 68c5ed8716c84..e0e6275b3e20c 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -5201,6 +5201,11 @@ static int mvpp2_ethtool_get_rxnfc(struct net_device *dev,
+ 		break;
+ 	case ETHTOOL_GRXCLSRLALL:
+ 		for (i = 0; i < MVPP2_N_RFS_ENTRIES_PER_FLOW; i++) {
++			if (loc == info->rule_cnt) {
++				ret = -EMSGSIZE;
++				break;
++			}
++
+ 			if (port->rfs_rules[i])
+ 				rules[loc++] = i;
+ 		}
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index a8319295f1ab2..aa9e616cc1d59 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -2013,6 +2013,9 @@ static int mtk_hwlro_get_fdir_all(struct net_device *dev,
+ 	int i;
+ 
+ 	for (i = 0; i < MTK_MAX_LRO_IP_CNT; i++) {
++		if (cnt == cmd->rule_cnt)
++			return -EMSGSIZE;
++
+ 		if (mac->hwlro_ip[i]) {
+ 			rule_locs[cnt] = i;
+ 			cnt++;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index e29db4c39b37f..a2d9904e10492 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -279,16 +279,11 @@ static int mlx5_pci_link_toggle(struct mlx5_core_dev *dev)
+ 		pci_cfg_access_lock(sdev);
+ 	}
+ 	/* PCI link toggle */
+-	err = pci_read_config_word(bridge, cap + PCI_EXP_LNKCTL, &reg16);
+-	if (err)
+-		return err;
+-	reg16 |= PCI_EXP_LNKCTL_LD;
+-	err = pci_write_config_word(bridge, cap + PCI_EXP_LNKCTL, reg16);
++	err = pcie_capability_set_word(bridge, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_LD);
+ 	if (err)
+ 		return err;
+ 	msleep(500);
+-	reg16 &= ~PCI_EXP_LNKCTL_LD;
+-	err = pci_write_config_word(bridge, cap + PCI_EXP_LNKCTL, reg16);
++	err = pcie_capability_clear_word(bridge, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_LD);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/i2c.c b/drivers/net/ethernet/mellanox/mlxsw/i2c.c
+index ce843ea914646..61d2f621d65fc 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/i2c.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/i2c.c
+@@ -47,6 +47,7 @@
+ #define MLXSW_I2C_MBOX_SIZE_BITS	12
+ #define MLXSW_I2C_ADDR_BUF_SIZE		4
+ #define MLXSW_I2C_BLK_DEF		32
++#define MLXSW_I2C_BLK_MAX		100
+ #define MLXSW_I2C_RETRY			5
+ #define MLXSW_I2C_TIMEOUT_MSECS		5000
+ #define MLXSW_I2C_MAX_DATA_SIZE		256
+@@ -428,7 +429,7 @@ mlxsw_i2c_cmd(struct device *dev, u16 opcode, u32 in_mod, size_t in_mbox_size,
+ 	} else {
+ 		/* No input mailbox is case of initialization query command. */
+ 		reg_size = MLXSW_I2C_MAX_DATA_SIZE;
+-		num = reg_size / mlxsw_i2c->block_size;
++		num = DIV_ROUND_UP(reg_size, mlxsw_i2c->block_size);
+ 
+ 		if (mutex_lock_interruptible(&mlxsw_i2c->cmd.lock) < 0) {
+ 			dev_err(&client->dev, "Could not acquire lock");
+@@ -576,7 +577,7 @@ static int mlxsw_i2c_probe(struct i2c_client *client,
+ 			return -EOPNOTSUPP;
+ 		}
+ 
+-		mlxsw_i2c->block_size = max_t(u16, MLXSW_I2C_BLK_DEF,
++		mlxsw_i2c->block_size = min_t(u16, MLXSW_I2C_BLK_MAX,
+ 					      min_t(u16, quirks->max_read_len,
+ 						    quirks->max_write_len));
+ 	} else {
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 2ad15b1d7ffd7..4fb58fc5ec95a 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -1347,8 +1347,7 @@ static struct crypto_aead *macsec_alloc_tfm(char *key, int key_len, int icv_len)
+ 	struct crypto_aead *tfm;
+ 	int ret;
+ 
+-	/* Pick a sync gcm(aes) cipher to ensure order is preserved. */
+-	tfm = crypto_alloc_aead("gcm(aes)", 0, CRYPTO_ALG_ASYNC);
++	tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
+ 
+ 	if (IS_ERR(tfm))
+ 		return tfm;
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index de6c561535346..5cf7f389bf4ef 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1351,6 +1351,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)},	/* Quectel EG91 */
+ 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0195, 4)},	/* Quectel EG95 */
+ 	{QMI_FIXED_INTF(0x2c7c, 0x0296, 4)},	/* Quectel BG96 */
++	{QMI_QUIRK_SET_DTR(0x2c7c, 0x030e, 4)},	/* Quectel EM05GV2 */
+ 	{QMI_QUIRK_SET_DTR(0x2cb7, 0x0104, 4)},	/* Fibocom NL678 series */
+ 	{QMI_FIXED_INTF(0x0489, 0xe0b4, 0)},	/* Foxconn T77W968 LTE */
+ 	{QMI_FIXED_INTF(0x0489, 0xe0b5, 0)},	/* Foxconn T77W968 LTE with eSIM support*/
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index f9a79d67d6d4f..cc7c86debfa27 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -2439,6 +2439,9 @@ static int r8152_poll(struct napi_struct *napi, int budget)
+ 	struct r8152 *tp = container_of(napi, struct r8152, napi);
+ 	int work_done;
+ 
++	if (!budget)
++		return 0;
++
+ 	work_done = rx_bottom(tp, budget);
+ 
+ 	if (work_done < budget) {
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index 4ba86fa4d6497..743716ebebdb9 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -285,6 +285,7 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct veth_priv *rcv_priv, *priv = netdev_priv(dev);
+ 	struct veth_rq *rq = NULL;
++	int ret = NETDEV_TX_OK;
+ 	struct net_device *rcv;
+ 	int length = skb->len;
+ 	bool rcv_xdp = false;
+@@ -311,6 +312,7 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	} else {
+ drop:
+ 		atomic64_inc(&priv->dropped);
++		ret = NET_XMIT_DROP;
+ 	}
+ 
+ 	if (rcv_xdp)
+@@ -318,7 +320,7 @@ drop:
+ 
+ 	rcu_read_unlock();
+ 
+-	return NETDEV_TX_OK;
++	return ret;
+ }
+ 
+ static u64 veth_stats_tx(struct net_device *dev, u64 *packets, u64 *bytes)
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index 1ac9de69bde65..3096769e718ed 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -729,6 +729,32 @@ static int vxlan_fdb_append(struct vxlan_fdb *f,
+ 	return 1;
+ }
+ 
++static bool vxlan_parse_gpe_proto(struct vxlanhdr *hdr, __be16 *protocol)
++{
++	struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)hdr;
++
++	/* Need to have Next Protocol set for interfaces in GPE mode. */
++	if (!gpe->np_applied)
++		return false;
++	/* "The initial version is 0. If a receiver does not support the
++	 * version indicated it MUST drop the packet.
++	 */
++	if (gpe->version != 0)
++		return false;
++	/* "When the O bit is set to 1, the packet is an OAM packet and OAM
++	 * processing MUST occur." However, we don't implement OAM
++	 * processing, thus drop the packet.
++	 */
++	if (gpe->oam_flag)
++		return false;
++
++	*protocol = tun_p_to_eth_p(gpe->next_protocol);
++	if (!*protocol)
++		return false;
++
++	return true;
++}
++
+ static struct vxlanhdr *vxlan_gro_remcsum(struct sk_buff *skb,
+ 					  unsigned int off,
+ 					  struct vxlanhdr *vh, size_t hdrlen,
+@@ -1737,35 +1763,6 @@ out:
+ 	unparsed->vx_flags &= ~VXLAN_GBP_USED_BITS;
+ }
+ 
+-static bool vxlan_parse_gpe_hdr(struct vxlanhdr *unparsed,
+-				__be16 *protocol,
+-				struct sk_buff *skb, u32 vxflags)
+-{
+-	struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)unparsed;
+-
+-	/* Need to have Next Protocol set for interfaces in GPE mode. */
+-	if (!gpe->np_applied)
+-		return false;
+-	/* "The initial version is 0. If a receiver does not support the
+-	 * version indicated it MUST drop the packet.
+-	 */
+-	if (gpe->version != 0)
+-		return false;
+-	/* "When the O bit is set to 1, the packet is an OAM packet and OAM
+-	 * processing MUST occur." However, we don't implement OAM
+-	 * processing, thus drop the packet.
+-	 */
+-	if (gpe->oam_flag)
+-		return false;
+-
+-	*protocol = tun_p_to_eth_p(gpe->next_protocol);
+-	if (!*protocol)
+-		return false;
+-
+-	unparsed->vx_flags &= ~VXLAN_GPE_USED_BITS;
+-	return true;
+-}
+-
+ static bool vxlan_set_mac(struct vxlan_dev *vxlan,
+ 			  struct vxlan_sock *vs,
+ 			  struct sk_buff *skb, __be32 vni)
+@@ -1866,8 +1863,9 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
+ 	 * used by VXLAN extensions if explicitly requested.
+ 	 */
+ 	if (vs->flags & VXLAN_F_GPE) {
+-		if (!vxlan_parse_gpe_hdr(&unparsed, &protocol, skb, vs->flags))
++		if (!vxlan_parse_gpe_proto(&unparsed, &protocol))
+ 			goto drop;
++		unparsed.vx_flags &= ~VXLAN_GPE_USED_BITS;
+ 		raw_proto = true;
+ 	}
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
+index 67e240327fb31..2c8f04b415c71 100644
+--- a/drivers/net/wireless/ath/ath10k/pci.c
++++ b/drivers/net/wireless/ath/ath10k/pci.c
+@@ -1963,8 +1963,9 @@ static int ath10k_pci_hif_start(struct ath10k *ar)
+ 	ath10k_pci_irq_enable(ar);
+ 	ath10k_pci_rx_post(ar);
+ 
+-	pcie_capability_write_word(ar_pci->pdev, PCI_EXP_LNKCTL,
+-				   ar_pci->link_ctl);
++	pcie_capability_clear_and_set_word(ar_pci->pdev, PCI_EXP_LNKCTL,
++					   PCI_EXP_LNKCTL_ASPMC,
++					   ar_pci->link_ctl & PCI_EXP_LNKCTL_ASPMC);
+ 
+ 	return 0;
+ }
+@@ -2820,8 +2821,8 @@ static int ath10k_pci_hif_power_up(struct ath10k *ar,
+ 
+ 	pcie_capability_read_word(ar_pci->pdev, PCI_EXP_LNKCTL,
+ 				  &ar_pci->link_ctl);
+-	pcie_capability_write_word(ar_pci->pdev, PCI_EXP_LNKCTL,
+-				   ar_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC);
++	pcie_capability_clear_word(ar_pci->pdev, PCI_EXP_LNKCTL,
++				   PCI_EXP_LNKCTL_ASPMC);
+ 
+ 	/*
+ 	 * Bring the target up cleanly.
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
+index b3ed65e5c4da8..c55aab01fff5d 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
+@@ -491,7 +491,7 @@ int ath9k_htc_init_debug(struct ath_hw *ah)
+ 
+ 	priv->debug.debugfs_phy = debugfs_create_dir(KBUILD_MODNAME,
+ 					     priv->hw->wiphy->debugfsdir);
+-	if (!priv->debug.debugfs_phy)
++	if (IS_ERR(priv->debug.debugfs_phy))
+ 		return -ENOMEM;
+ 
+ 	ath9k_cmn_spectral_init_debug(&priv->spec_priv, priv->debug.debugfs_phy);
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index d652c647d56b5..1476b42b52a91 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -242,10 +242,10 @@ static void ath9k_wmi_ctrl_rx(void *priv, struct sk_buff *skb,
+ 		spin_unlock_irqrestore(&wmi->wmi_lock, flags);
+ 		goto free_skb;
+ 	}
+-	spin_unlock_irqrestore(&wmi->wmi_lock, flags);
+ 
+ 	/* WMI command response */
+ 	ath9k_wmi_rsp_callback(wmi, skb);
++	spin_unlock_irqrestore(&wmi->wmi_lock, flags);
+ 
+ free_skb:
+ 	kfree_skb(skb);
+@@ -283,7 +283,8 @@ int ath9k_wmi_connect(struct htc_target *htc, struct wmi *wmi,
+ 
+ static int ath9k_wmi_cmd_issue(struct wmi *wmi,
+ 			       struct sk_buff *skb,
+-			       enum wmi_cmd_id cmd, u16 len)
++			       enum wmi_cmd_id cmd, u16 len,
++			       u8 *rsp_buf, u32 rsp_len)
+ {
+ 	struct wmi_cmd_hdr *hdr;
+ 	unsigned long flags;
+@@ -293,6 +294,11 @@ static int ath9k_wmi_cmd_issue(struct wmi *wmi,
+ 	hdr->seq_no = cpu_to_be16(++wmi->tx_seq_id);
+ 
+ 	spin_lock_irqsave(&wmi->wmi_lock, flags);
++
++	/* record the rsp buffer and length */
++	wmi->cmd_rsp_buf = rsp_buf;
++	wmi->cmd_rsp_len = rsp_len;
++
+ 	wmi->last_seq_id = wmi->tx_seq_id;
+ 	spin_unlock_irqrestore(&wmi->wmi_lock, flags);
+ 
+@@ -308,8 +314,8 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+ 	struct ath_common *common = ath9k_hw_common(ah);
+ 	u16 headroom = sizeof(struct htc_frame_hdr) +
+ 		       sizeof(struct wmi_cmd_hdr);
++	unsigned long time_left, flags;
+ 	struct sk_buff *skb;
+-	unsigned long time_left;
+ 	int ret = 0;
+ 
+ 	if (ah->ah_flags & AH_UNPLUGGED)
+@@ -333,11 +339,7 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+ 		goto out;
+ 	}
+ 
+-	/* record the rsp buffer and length */
+-	wmi->cmd_rsp_buf = rsp_buf;
+-	wmi->cmd_rsp_len = rsp_len;
+-
+-	ret = ath9k_wmi_cmd_issue(wmi, skb, cmd_id, cmd_len);
++	ret = ath9k_wmi_cmd_issue(wmi, skb, cmd_id, cmd_len, rsp_buf, rsp_len);
+ 	if (ret)
+ 		goto out;
+ 
+@@ -345,7 +347,9 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+ 	if (!time_left) {
+ 		ath_dbg(common, WMI, "Timeout waiting for WMI command: %s\n",
+ 			wmi_cmd_to_name(cmd_id));
++		spin_lock_irqsave(&wmi->wmi_lock, flags);
+ 		wmi->last_seq_id = 0;
++		spin_unlock_irqrestore(&wmi->wmi_lock, flags);
+ 		mutex_unlock(&wmi->op_mutex);
+ 		return -ETIMEDOUT;
+ 	}
+diff --git a/drivers/net/wireless/marvell/mwifiex/debugfs.c b/drivers/net/wireless/marvell/mwifiex/debugfs.c
+index dded92db1f373..1e7dc724c6a94 100644
+--- a/drivers/net/wireless/marvell/mwifiex/debugfs.c
++++ b/drivers/net/wireless/marvell/mwifiex/debugfs.c
+@@ -265,8 +265,11 @@ mwifiex_histogram_read(struct file *file, char __user *ubuf,
+ 	if (!p)
+ 		return -ENOMEM;
+ 
+-	if (!priv || !priv->hist_data)
+-		return -EFAULT;
++	if (!priv || !priv->hist_data) {
++		ret = -EFAULT;
++		goto free_and_exit;
++	}
++
+ 	phist_data = priv->hist_data;
+ 
+ 	p += sprintf(p, "\n"
+@@ -321,6 +324,8 @@ mwifiex_histogram_read(struct file *file, char __user *ubuf,
+ 	ret = simple_read_from_buffer(ubuf, count, ppos, (char *)page,
+ 				      (unsigned long)p - page);
+ 
++free_and_exit:
++	free_page(page);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
+index 50c34630ca302..7cec6398da71c 100644
+--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
+@@ -200,6 +200,8 @@ static int mwifiex_pcie_probe_of(struct device *dev)
+ }
+ 
+ static void mwifiex_pcie_work(struct work_struct *work);
++static int mwifiex_pcie_delete_rxbd_ring(struct mwifiex_adapter *adapter);
++static int mwifiex_pcie_delete_evtbd_ring(struct mwifiex_adapter *adapter);
+ 
+ static int
+ mwifiex_map_pci_memory(struct mwifiex_adapter *adapter, struct sk_buff *skb,
+@@ -794,14 +796,15 @@ static int mwifiex_init_rxq_ring(struct mwifiex_adapter *adapter)
+ 		if (!skb) {
+ 			mwifiex_dbg(adapter, ERROR,
+ 				    "Unable to allocate skb for RX ring.\n");
+-			kfree(card->rxbd_ring_vbase);
+ 			return -ENOMEM;
+ 		}
+ 
+ 		if (mwifiex_map_pci_memory(adapter, skb,
+ 					   MWIFIEX_RX_DATA_BUF_SIZE,
+-					   DMA_FROM_DEVICE))
+-			return -1;
++					   DMA_FROM_DEVICE)) {
++			kfree_skb(skb);
++			return -ENOMEM;
++		}
+ 
+ 		buf_pa = MWIFIEX_SKB_DMA_ADDR(skb);
+ 
+@@ -851,7 +854,6 @@ static int mwifiex_pcie_init_evt_ring(struct mwifiex_adapter *adapter)
+ 		if (!skb) {
+ 			mwifiex_dbg(adapter, ERROR,
+ 				    "Unable to allocate skb for EVENT buf.\n");
+-			kfree(card->evtbd_ring_vbase);
+ 			return -ENOMEM;
+ 		}
+ 		skb_put(skb, MAX_EVENT_SIZE);
+@@ -859,8 +861,7 @@ static int mwifiex_pcie_init_evt_ring(struct mwifiex_adapter *adapter)
+ 		if (mwifiex_map_pci_memory(adapter, skb, MAX_EVENT_SIZE,
+ 					   DMA_FROM_DEVICE)) {
+ 			kfree_skb(skb);
+-			kfree(card->evtbd_ring_vbase);
+-			return -1;
++			return -ENOMEM;
+ 		}
+ 
+ 		buf_pa = MWIFIEX_SKB_DMA_ADDR(skb);
+@@ -1060,6 +1061,7 @@ static int mwifiex_pcie_delete_txbd_ring(struct mwifiex_adapter *adapter)
+  */
+ static int mwifiex_pcie_create_rxbd_ring(struct mwifiex_adapter *adapter)
+ {
++	int ret;
+ 	struct pcie_service_card *card = adapter->card;
+ 	const struct mwifiex_pcie_card_reg *reg = card->pcie.reg;
+ 
+@@ -1098,7 +1100,10 @@ static int mwifiex_pcie_create_rxbd_ring(struct mwifiex_adapter *adapter)
+ 		    (u32)((u64)card->rxbd_ring_pbase >> 32),
+ 		    card->rxbd_ring_size);
+ 
+-	return mwifiex_init_rxq_ring(adapter);
++	ret = mwifiex_init_rxq_ring(adapter);
++	if (ret)
++		mwifiex_pcie_delete_rxbd_ring(adapter);
++	return ret;
+ }
+ 
+ /*
+@@ -1129,6 +1134,7 @@ static int mwifiex_pcie_delete_rxbd_ring(struct mwifiex_adapter *adapter)
+  */
+ static int mwifiex_pcie_create_evtbd_ring(struct mwifiex_adapter *adapter)
+ {
++	int ret;
+ 	struct pcie_service_card *card = adapter->card;
+ 	const struct mwifiex_pcie_card_reg *reg = card->pcie.reg;
+ 
+@@ -1163,7 +1169,10 @@ static int mwifiex_pcie_create_evtbd_ring(struct mwifiex_adapter *adapter)
+ 		    (u32)((u64)card->evtbd_ring_pbase >> 32),
+ 		    card->evtbd_ring_size);
+ 
+-	return mwifiex_pcie_init_evt_ring(adapter);
++	ret = mwifiex_pcie_init_evt_ring(adapter);
++	if (ret)
++		mwifiex_pcie_delete_evtbd_ring(adapter);
++	return ret;
+ }
+ 
+ /*
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_rx.c b/drivers/net/wireless/marvell/mwifiex/sta_rx.c
+index 0d2adf8879005..3c555946cb2cc 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_rx.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_rx.c
+@@ -98,6 +98,15 @@ int mwifiex_process_rx_packet(struct mwifiex_private *priv,
+ 	rx_pkt_len = le16_to_cpu(local_rx_pd->rx_pkt_length);
+ 	rx_pkt_hdr = (void *)local_rx_pd + rx_pkt_off;
+ 
++	if (sizeof(*rx_pkt_hdr) + rx_pkt_off > skb->len) {
++		mwifiex_dbg(priv->adapter, ERROR,
++			    "wrong rx packet offset: len=%d, rx_pkt_off=%d\n",
++			    skb->len, rx_pkt_off);
++		priv->stats.rx_dropped++;
++		dev_kfree_skb_any(skb);
++		return -1;
++	}
++
+ 	if ((!memcmp(&rx_pkt_hdr->rfc1042_hdr, bridge_tunnel_header,
+ 		     sizeof(bridge_tunnel_header))) ||
+ 	    (!memcmp(&rx_pkt_hdr->rfc1042_hdr, rfc1042_header,
+@@ -206,7 +215,8 @@ int mwifiex_process_sta_rx_packet(struct mwifiex_private *priv,
+ 
+ 	rx_pkt_hdr = (void *)local_rx_pd + rx_pkt_offset;
+ 
+-	if ((rx_pkt_offset + rx_pkt_length) > (u16) skb->len) {
++	if ((rx_pkt_offset + rx_pkt_length) > skb->len ||
++	    sizeof(rx_pkt_hdr->eth803_hdr) + rx_pkt_offset > skb->len) {
+ 		mwifiex_dbg(adapter, ERROR,
+ 			    "wrong rx packet: len=%d, rx_pkt_offset=%d, rx_pkt_length=%d\n",
+ 			    skb->len, rx_pkt_offset, rx_pkt_length);
+diff --git a/drivers/net/wireless/marvell/mwifiex/uap_txrx.c b/drivers/net/wireless/marvell/mwifiex/uap_txrx.c
+index 9bbdb8dfce62a..780ea467471f6 100644
+--- a/drivers/net/wireless/marvell/mwifiex/uap_txrx.c
++++ b/drivers/net/wireless/marvell/mwifiex/uap_txrx.c
+@@ -115,6 +115,16 @@ static void mwifiex_uap_queue_bridged_pkt(struct mwifiex_private *priv,
+ 		return;
+ 	}
+ 
++	if (sizeof(*rx_pkt_hdr) +
++	    le16_to_cpu(uap_rx_pd->rx_pkt_offset) > skb->len) {
++		mwifiex_dbg(adapter, ERROR,
++			    "wrong rx packet offset: len=%d,rx_pkt_offset=%d\n",
++			    skb->len, le16_to_cpu(uap_rx_pd->rx_pkt_offset));
++		priv->stats.rx_dropped++;
++		dev_kfree_skb_any(skb);
++		return;
++	}
++
+ 	if ((!memcmp(&rx_pkt_hdr->rfc1042_hdr, bridge_tunnel_header,
+ 		     sizeof(bridge_tunnel_header))) ||
+ 	    (!memcmp(&rx_pkt_hdr->rfc1042_hdr, rfc1042_header,
+@@ -255,7 +265,15 @@ int mwifiex_handle_uap_rx_forward(struct mwifiex_private *priv,
+ 
+ 	if (is_multicast_ether_addr(ra)) {
+ 		skb_uap = skb_copy(skb, GFP_ATOMIC);
+-		mwifiex_uap_queue_bridged_pkt(priv, skb_uap);
++		if (likely(skb_uap)) {
++			mwifiex_uap_queue_bridged_pkt(priv, skb_uap);
++		} else {
++			mwifiex_dbg(adapter, ERROR,
++				    "failed to copy skb for uAP\n");
++			priv->stats.rx_dropped++;
++			dev_kfree_skb_any(skb);
++			return -1;
++		}
+ 	} else {
+ 		if (mwifiex_get_sta_entry(priv, ra)) {
+ 			/* Requeue Intra-BSS packet */
+@@ -379,6 +397,16 @@ int mwifiex_process_uap_rx_packet(struct mwifiex_private *priv,
+ 	rx_pkt_type = le16_to_cpu(uap_rx_pd->rx_pkt_type);
+ 	rx_pkt_hdr = (void *)uap_rx_pd + le16_to_cpu(uap_rx_pd->rx_pkt_offset);
+ 
++	if (le16_to_cpu(uap_rx_pd->rx_pkt_offset) +
++	    sizeof(rx_pkt_hdr->eth803_hdr) > skb->len) {
++		mwifiex_dbg(adapter, ERROR,
++			    "wrong rx packet for struct ethhdr: len=%d, offset=%d\n",
++			    skb->len, le16_to_cpu(uap_rx_pd->rx_pkt_offset));
++		priv->stats.rx_dropped++;
++		dev_kfree_skb_any(skb);
++		return 0;
++	}
++
+ 	ether_addr_copy(ta, rx_pkt_hdr->eth803_hdr.h_source);
+ 
+ 	if ((le16_to_cpu(uap_rx_pd->rx_pkt_offset) +
+diff --git a/drivers/net/wireless/marvell/mwifiex/util.c b/drivers/net/wireless/marvell/mwifiex/util.c
+index d583fa600a296..1f5a6dab9ce55 100644
+--- a/drivers/net/wireless/marvell/mwifiex/util.c
++++ b/drivers/net/wireless/marvell/mwifiex/util.c
+@@ -405,11 +405,15 @@ mwifiex_process_mgmt_packet(struct mwifiex_private *priv,
+ 	}
+ 
+ 	rx_pd = (struct rxpd *)skb->data;
++	pkt_len = le16_to_cpu(rx_pd->rx_pkt_length);
++	if (pkt_len < sizeof(struct ieee80211_hdr) + sizeof(pkt_len)) {
++		mwifiex_dbg(priv->adapter, ERROR, "invalid rx_pkt_length");
++		return -1;
++	}
+ 
+ 	skb_pull(skb, le16_to_cpu(rx_pd->rx_pkt_offset));
+ 	skb_pull(skb, sizeof(pkt_len));
+-
+-	pkt_len = le16_to_cpu(rx_pd->rx_pkt_length);
++	pkt_len -= sizeof(pkt_len);
+ 
+ 	ieee_hdr = (void *)skb->data;
+ 	if (ieee80211_is_mgmt(ieee_hdr->frame_control)) {
+@@ -422,7 +426,7 @@ mwifiex_process_mgmt_packet(struct mwifiex_private *priv,
+ 		skb->data + sizeof(struct ieee80211_hdr),
+ 		pkt_len - sizeof(struct ieee80211_hdr));
+ 
+-	pkt_len -= ETH_ALEN + sizeof(pkt_len);
++	pkt_len -= ETH_ALEN;
+ 	rx_pd->rx_pkt_length = cpu_to_le16(pkt_len);
+ 
+ 	cfg80211_rx_mgmt(&priv->wdev, priv->roc_cfg.chan.center_freq,
+diff --git a/drivers/net/wireless/mediatek/mt76/testmode.c b/drivers/net/wireless/mediatek/mt76/testmode.c
+index 883f59c7a7e4a..7ab99efb7f9b0 100644
+--- a/drivers/net/wireless/mediatek/mt76/testmode.c
++++ b/drivers/net/wireless/mediatek/mt76/testmode.c
+@@ -6,6 +6,7 @@ static const struct nla_policy mt76_tm_policy[NUM_MT76_TM_ATTRS] = {
+ 	[MT76_TM_ATTR_RESET] = { .type = NLA_FLAG },
+ 	[MT76_TM_ATTR_STATE] = { .type = NLA_U8 },
+ 	[MT76_TM_ATTR_TX_COUNT] = { .type = NLA_U32 },
++	[MT76_TM_ATTR_TX_LENGTH] = { .type = NLA_U32 },
+ 	[MT76_TM_ATTR_TX_RATE_MODE] = { .type = NLA_U8 },
+ 	[MT76_TM_ATTR_TX_RATE_NSS] = { .type = NLA_U8 },
+ 	[MT76_TM_ATTR_TX_RATE_IDX] = { .type = NLA_U8 },
+diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
+index d18cb44765603..859570017ef3a 100644
+--- a/drivers/ntb/ntb_transport.c
++++ b/drivers/ntb/ntb_transport.c
+@@ -911,7 +911,7 @@ static int ntb_set_mw(struct ntb_transport_ctx *nt, int num_mw,
+ 	return 0;
+ }
+ 
+-static void ntb_qp_link_down_reset(struct ntb_transport_qp *qp)
++static void ntb_qp_link_context_reset(struct ntb_transport_qp *qp)
+ {
+ 	qp->link_is_up = false;
+ 	qp->active = false;
+@@ -934,6 +934,13 @@ static void ntb_qp_link_down_reset(struct ntb_transport_qp *qp)
+ 	qp->tx_async = 0;
+ }
+ 
++static void ntb_qp_link_down_reset(struct ntb_transport_qp *qp)
++{
++	ntb_qp_link_context_reset(qp);
++	if (qp->remote_rx_info)
++		qp->remote_rx_info->entry = qp->rx_max_entry - 1;
++}
++
+ static void ntb_qp_link_cleanup(struct ntb_transport_qp *qp)
+ {
+ 	struct ntb_transport_ctx *nt = qp->transport;
+@@ -1176,7 +1183,7 @@ static int ntb_transport_init_queue(struct ntb_transport_ctx *nt,
+ 	qp->ndev = nt->ndev;
+ 	qp->client_ready = false;
+ 	qp->event_handler = NULL;
+-	ntb_qp_link_down_reset(qp);
++	ntb_qp_link_context_reset(qp);
+ 
+ 	if (mw_num < qp_count % mw_count)
+ 		num_qps_mw = qp_count / mw_count + 1;
+@@ -2278,9 +2285,13 @@ int ntb_transport_tx_enqueue(struct ntb_transport_qp *qp, void *cb, void *data,
+ 	struct ntb_queue_entry *entry;
+ 	int rc;
+ 
+-	if (!qp || !qp->link_is_up || !len)
++	if (!qp || !len)
+ 		return -EINVAL;
+ 
++	/* If the qp link is down already, just ignore. */
++	if (!qp->link_is_up)
++		return 0;
++
+ 	entry = ntb_list_rm(&qp->ntb_tx_free_q_lock, &qp->tx_free_q);
+ 	if (!entry) {
+ 		qp->tx_err_no_buf++;
+@@ -2420,7 +2431,7 @@ unsigned int ntb_transport_tx_free_entry(struct ntb_transport_qp *qp)
+ 	unsigned int head = qp->tx_index;
+ 	unsigned int tail = qp->remote_rx_info->entry;
+ 
+-	return tail > head ? tail - head : qp->tx_max_entry + tail - head;
++	return tail >= head ? tail - head : qp->tx_max_entry + tail - head;
+ }
+ EXPORT_SYMBOL_GPL(ntb_transport_tx_free_entry);
+ 
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 5407bbdb64395..412d7ddb3b8b2 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -69,7 +69,7 @@ static void __init of_unittest_find_node_by_name(void)
+ 
+ 	np = of_find_node_by_path("/testcase-data");
+ 	name = kasprintf(GFP_KERNEL, "%pOF", np);
+-	unittest(np && !strcmp("/testcase-data", name),
++	unittest(np && name && !strcmp("/testcase-data", name),
+ 		"find /testcase-data failed\n");
+ 	of_node_put(np);
+ 	kfree(name);
+@@ -80,14 +80,14 @@ static void __init of_unittest_find_node_by_name(void)
+ 
+ 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+ 	name = kasprintf(GFP_KERNEL, "%pOF", np);
+-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
++	unittest(np && name && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+ 		"find /testcase-data/phandle-tests/consumer-a failed\n");
+ 	of_node_put(np);
+ 	kfree(name);
+ 
+ 	np = of_find_node_by_path("testcase-alias");
+ 	name = kasprintf(GFP_KERNEL, "%pOF", np);
+-	unittest(np && !strcmp("/testcase-data", name),
++	unittest(np && name && !strcmp("/testcase-data", name),
+ 		"find testcase-alias failed\n");
+ 	of_node_put(np);
+ 	kfree(name);
+@@ -98,7 +98,7 @@ static void __init of_unittest_find_node_by_name(void)
+ 
+ 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+ 	name = kasprintf(GFP_KERNEL, "%pOF", np);
+-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
++	unittest(np && name && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+ 		"find testcase-alias/phandle-tests/consumer-a failed\n");
+ 	of_node_put(np);
+ 	kfree(name);
+@@ -1376,6 +1376,8 @@ static void attach_node_and_children(struct device_node *np)
+ 	const char *full_name;
+ 
+ 	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
++	if (!full_name)
++		return;
+ 
+ 	if (!strcmp(full_name, "/__local_fixups__") ||
+ 	    !strcmp(full_name, "/__fixups__")) {
+@@ -2065,7 +2067,7 @@ static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
+ 	of_unittest_untrack_overlay(save_id);
+ 
+ 	/* unittest device must be again in before state */
+-	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
++	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
+ 		unittest(0, "%s with device @\"%s\" %s\n",
+ 				overlay_name_from_nr(overlay_nr),
+ 				unittest_path(unittest_nr, ovtype),
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 7ed605ffb7171..7999baa075b0e 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -2053,7 +2053,7 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev,
+ 
+ 		virt_dev = dev_pm_domain_attach_by_name(dev, *name);
+ 		if (IS_ERR_OR_NULL(virt_dev)) {
+-			ret = PTR_ERR(virt_dev) ? : -ENODEV;
++			ret = virt_dev ? PTR_ERR(virt_dev) : -ENODEV;
+ 			dev_err(dev, "Couldn't attach to pm_domain: %d\n", ret);
+ 			goto err;
+ 		}
+diff --git a/drivers/parisc/led.c b/drivers/parisc/led.c
+index 4854120fc0954..81e5e7a20b94e 100644
+--- a/drivers/parisc/led.c
++++ b/drivers/parisc/led.c
+@@ -56,8 +56,8 @@
+ static int led_type __read_mostly = -1;
+ static unsigned char lastleds;	/* LED state from most recent update */
+ static unsigned int led_heartbeat __read_mostly = 1;
+-static unsigned int led_diskio    __read_mostly = 1;
+-static unsigned int led_lanrxtx   __read_mostly = 1;
++static unsigned int led_diskio    __read_mostly;
++static unsigned int led_lanrxtx   __read_mostly;
+ static char lcd_text[32]          __read_mostly;
+ static char lcd_text_default[32]  __read_mostly;
+ static int  lcd_no_led_support    __read_mostly = 0; /* KittyHawk doesn't support LED on its LCD */
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index dda9523577472..75c6c72ec32ac 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -332,17 +332,11 @@ int pciehp_check_link_status(struct controller *ctrl)
+ static int __pciehp_link_set(struct controller *ctrl, bool enable)
+ {
+ 	struct pci_dev *pdev = ctrl_dev(ctrl);
+-	u16 lnk_ctrl;
+ 
+-	pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnk_ctrl);
++	pcie_capability_clear_and_set_word(pdev, PCI_EXP_LNKCTL,
++					   PCI_EXP_LNKCTL_LD,
++					   enable ? 0 : PCI_EXP_LNKCTL_LD);
+ 
+-	if (enable)
+-		lnk_ctrl &= ~PCI_EXP_LNKCTL_LD;
+-	else
+-		lnk_ctrl |= PCI_EXP_LNKCTL_LD;
+-
+-	pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnk_ctrl);
+-	ctrl_dbg(ctrl, "%s: lnk_ctrl = %x\n", __func__, lnk_ctrl);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 7a3cf8aaec256..ef6f0ceb92f9f 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -249,7 +249,7 @@ static int pcie_retrain_link(struct pcie_link_state *link)
+ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
+ {
+ 	int same_clock = 1;
+-	u16 reg16, parent_reg, child_reg[8];
++	u16 reg16, ccc, parent_old_ccc, child_old_ccc[8];
+ 	struct pci_dev *child, *parent = link->pdev;
+ 	struct pci_bus *linkbus = parent->subordinate;
+ 	/*
+@@ -271,6 +271,7 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
+ 
+ 	/* Port might be already in common clock mode */
+ 	pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
++	parent_old_ccc = reg16 & PCI_EXP_LNKCTL_CCC;
+ 	if (same_clock && (reg16 & PCI_EXP_LNKCTL_CCC)) {
+ 		bool consistent = true;
+ 
+@@ -287,34 +288,29 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
+ 		pci_info(parent, "ASPM: current common clock configuration is inconsistent, reconfiguring\n");
+ 	}
+ 
++	ccc = same_clock ? PCI_EXP_LNKCTL_CCC : 0;
+ 	/* Configure downstream component, all functions */
+ 	list_for_each_entry(child, &linkbus->devices, bus_list) {
+ 		pcie_capability_read_word(child, PCI_EXP_LNKCTL, &reg16);
+-		child_reg[PCI_FUNC(child->devfn)] = reg16;
+-		if (same_clock)
+-			reg16 |= PCI_EXP_LNKCTL_CCC;
+-		else
+-			reg16 &= ~PCI_EXP_LNKCTL_CCC;
+-		pcie_capability_write_word(child, PCI_EXP_LNKCTL, reg16);
++		child_old_ccc[PCI_FUNC(child->devfn)] = reg16 & PCI_EXP_LNKCTL_CCC;
++		pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL,
++						   PCI_EXP_LNKCTL_CCC, ccc);
+ 	}
+ 
+ 	/* Configure upstream component */
+-	pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
+-	parent_reg = reg16;
+-	if (same_clock)
+-		reg16 |= PCI_EXP_LNKCTL_CCC;
+-	else
+-		reg16 &= ~PCI_EXP_LNKCTL_CCC;
+-	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
++	pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL,
++					   PCI_EXP_LNKCTL_CCC, ccc);
+ 
+ 	if (pcie_retrain_link(link)) {
+ 
+ 		/* Training failed. Restore common clock configurations */
+ 		pci_err(parent, "ASPM: Could not configure common clock\n");
+ 		list_for_each_entry(child, &linkbus->devices, bus_list)
+-			pcie_capability_write_word(child, PCI_EXP_LNKCTL,
+-					   child_reg[PCI_FUNC(child->devfn)]);
+-		pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg);
++			pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL,
++							   PCI_EXP_LNKCTL_CCC,
++							   child_old_ccc[PCI_FUNC(child->devfn)]);
++		pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL,
++						   PCI_EXP_LNKCTL_CCC, parent_old_ccc);
+ 	}
+ }
+ 
+diff --git a/drivers/perf/fsl_imx8_ddr_perf.c b/drivers/perf/fsl_imx8_ddr_perf.c
+index e09bbf3890c49..8dfb67530d6bc 100644
+--- a/drivers/perf/fsl_imx8_ddr_perf.c
++++ b/drivers/perf/fsl_imx8_ddr_perf.c
+@@ -82,6 +82,7 @@ struct ddr_pmu {
+ 	const struct fsl_ddr_devtype_data *devtype_data;
+ 	int irq;
+ 	int id;
++	int active_counter;
+ };
+ 
+ enum ddr_perf_filter_capabilities {
+@@ -414,6 +415,10 @@ static void ddr_perf_event_start(struct perf_event *event, int flags)
+ 
+ 	ddr_perf_counter_enable(pmu, event->attr.config, counter, true);
+ 
++	if (!pmu->active_counter++)
++		ddr_perf_counter_enable(pmu, EVENT_CYCLES_ID,
++			EVENT_CYCLES_COUNTER, true);
++
+ 	hwc->state = 0;
+ }
+ 
+@@ -468,6 +473,10 @@ static void ddr_perf_event_stop(struct perf_event *event, int flags)
+ 	ddr_perf_counter_enable(pmu, event->attr.config, counter, false);
+ 	ddr_perf_event_update(event);
+ 
++	if (!--pmu->active_counter)
++		ddr_perf_counter_enable(pmu, EVENT_CYCLES_ID,
++			EVENT_CYCLES_COUNTER, false);
++
+ 	hwc->state |= PERF_HES_STOPPED;
+ }
+ 
+@@ -486,25 +495,10 @@ static void ddr_perf_event_del(struct perf_event *event, int flags)
+ 
+ static void ddr_perf_pmu_enable(struct pmu *pmu)
+ {
+-	struct ddr_pmu *ddr_pmu = to_ddr_pmu(pmu);
+-
+-	/* enable cycle counter if cycle is not active event list */
+-	if (ddr_pmu->events[EVENT_CYCLES_COUNTER] == NULL)
+-		ddr_perf_counter_enable(ddr_pmu,
+-				      EVENT_CYCLES_ID,
+-				      EVENT_CYCLES_COUNTER,
+-				      true);
+ }
+ 
+ static void ddr_perf_pmu_disable(struct pmu *pmu)
+ {
+-	struct ddr_pmu *ddr_pmu = to_ddr_pmu(pmu);
+-
+-	if (ddr_pmu->events[EVENT_CYCLES_COUNTER] == NULL)
+-		ddr_perf_counter_enable(ddr_pmu,
+-				      EVENT_CYCLES_ID,
+-				      EVENT_CYCLES_COUNTER,
+-				      false);
+ }
+ 
+ static int ddr_perf_init(struct ddr_pmu *pmu, void __iomem *base,
+diff --git a/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c b/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
+index abb9264569336..173d166ed8295 100644
+--- a/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
++++ b/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
+@@ -171,8 +171,7 @@ static int __maybe_unused qcom_snps_hsphy_runtime_suspend(struct device *dev)
+ 	if (!hsphy->phy_initialized)
+ 		return 0;
+ 
+-	qcom_snps_hsphy_suspend(hsphy);
+-	return 0;
++	return qcom_snps_hsphy_suspend(hsphy);
+ }
+ 
+ static int __maybe_unused qcom_snps_hsphy_runtime_resume(struct device *dev)
+@@ -182,8 +181,7 @@ static int __maybe_unused qcom_snps_hsphy_runtime_resume(struct device *dev)
+ 	if (!hsphy->phy_initialized)
+ 		return 0;
+ 
+-	qcom_snps_hsphy_resume(hsphy);
+-	return 0;
++	return qcom_snps_hsphy_resume(hsphy);
+ }
+ 
+ static int qcom_snps_hsphy_set_mode(struct phy *phy, enum phy_mode mode,
+diff --git a/drivers/phy/rockchip/phy-rockchip-inno-hdmi.c b/drivers/phy/rockchip/phy-rockchip-inno-hdmi.c
+index 9ca20c947283d..2b0f5f2b4f339 100644
+--- a/drivers/phy/rockchip/phy-rockchip-inno-hdmi.c
++++ b/drivers/phy/rockchip/phy-rockchip-inno-hdmi.c
+@@ -745,10 +745,12 @@ unsigned long inno_hdmi_phy_rk3328_clk_recalc_rate(struct clk_hw *hw,
+ 		do_div(vco, (nd * (no_a == 1 ? no_b : no_a) * no_d * 2));
+ 	}
+ 
+-	inno->pixclock = vco;
+-	dev_dbg(inno->dev, "%s rate %lu\n", __func__, inno->pixclock);
++	inno->pixclock = DIV_ROUND_CLOSEST((unsigned long)vco, 1000) * 1000;
+ 
+-	return vco;
++	dev_dbg(inno->dev, "%s rate %lu vco %llu\n",
++		__func__, inno->pixclock, vco);
++
++	return inno->pixclock;
+ }
+ 
+ static long inno_hdmi_phy_rk3328_clk_round_rate(struct clk_hw *hw,
+@@ -790,8 +792,8 @@ static int inno_hdmi_phy_rk3328_clk_set_rate(struct clk_hw *hw,
+ 			 RK3328_PRE_PLL_POWER_DOWN);
+ 
+ 	/* Configure pre-pll */
+-	inno_update_bits(inno, 0xa0, RK3228_PCLK_VCO_DIV_5_MASK,
+-			 RK3228_PCLK_VCO_DIV_5(cfg->vco_div_5_en));
++	inno_update_bits(inno, 0xa0, RK3328_PCLK_VCO_DIV_5_MASK,
++			 RK3328_PCLK_VCO_DIV_5(cfg->vco_div_5_en));
+ 	inno_write(inno, 0xa1, RK3328_PRE_PLL_PRE_DIV(cfg->prediv));
+ 
+ 	val = RK3328_SPREAD_SPECTRUM_MOD_DISABLE;
+@@ -1021,9 +1023,10 @@ inno_hdmi_phy_rk3328_power_on(struct inno_hdmi_phy *inno,
+ 
+ 	inno_write(inno, 0xac, RK3328_POST_PLL_FB_DIV_7_0(cfg->fbdiv));
+ 	if (cfg->postdiv == 1) {
+-		inno_write(inno, 0xaa, RK3328_POST_PLL_REFCLK_SEL_TMDS);
+ 		inno_write(inno, 0xab, RK3328_POST_PLL_FB_DIV_8(cfg->fbdiv) |
+ 			   RK3328_POST_PLL_PRE_DIV(cfg->prediv));
++		inno_write(inno, 0xaa, RK3328_POST_PLL_REFCLK_SEL_TMDS |
++			   RK3328_POST_PLL_POWER_DOWN);
+ 	} else {
+ 		v = (cfg->postdiv / 2) - 1;
+ 		v &= RK3328_POST_PLL_POST_DIV_MASK;
+@@ -1031,7 +1034,8 @@ inno_hdmi_phy_rk3328_power_on(struct inno_hdmi_phy *inno,
+ 		inno_write(inno, 0xab, RK3328_POST_PLL_FB_DIV_8(cfg->fbdiv) |
+ 			   RK3328_POST_PLL_PRE_DIV(cfg->prediv));
+ 		inno_write(inno, 0xaa, RK3328_POST_PLL_POST_DIV_ENABLE |
+-			   RK3328_POST_PLL_REFCLK_SEL_TMDS);
++			   RK3328_POST_PLL_REFCLK_SEL_TMDS |
++			   RK3328_POST_PLL_POWER_DOWN);
+ 	}
+ 
+ 	for (v = 0; v < 14; v++)
+diff --git a/drivers/pinctrl/intel/pinctrl-cherryview.c b/drivers/pinctrl/intel/pinctrl-cherryview.c
+index 44caada37b71f..18b85ae84ed79 100644
+--- a/drivers/pinctrl/intel/pinctrl-cherryview.c
++++ b/drivers/pinctrl/intel/pinctrl-cherryview.c
+@@ -1625,7 +1625,6 @@ static int chv_pinctrl_probe(struct platform_device *pdev)
+ 	const struct intel_pinctrl_soc_data *soc_data;
+ 	struct intel_community *community;
+ 	struct device *dev = &pdev->dev;
+-	struct acpi_device *adev = ACPI_COMPANION(dev);
+ 	struct intel_pinctrl *pctrl;
+ 	acpi_status status;
+ 	int ret, irq;
+@@ -1688,7 +1687,7 @@ static int chv_pinctrl_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	status = acpi_install_address_space_handler(adev->handle,
++	status = acpi_install_address_space_handler(ACPI_HANDLE(dev),
+ 					community->acpi_space_id,
+ 					chv_pinctrl_mmio_access_handler,
+ 					NULL, pctrl);
+@@ -1705,7 +1704,7 @@ static int chv_pinctrl_remove(struct platform_device *pdev)
+ 	struct intel_pinctrl *pctrl = platform_get_drvdata(pdev);
+ 	const struct intel_community *community = &pctrl->communities[0];
+ 
+-	acpi_remove_address_space_handler(ACPI_COMPANION(&pdev->dev),
++	acpi_remove_address_space_handler(ACPI_HANDLE(&pdev->dev),
+ 					  community->acpi_space_id,
+ 					  chv_pinctrl_mmio_access_handler);
+ 
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 3a05ebb9aa253..71576dceed3a2 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -653,7 +653,7 @@ static int amd_pinconf_get(struct pinctrl_dev *pctldev,
+ 		break;
+ 
+ 	default:
+-		dev_err(&gpio_dev->pdev->dev, "Invalid config param %04x\n",
++		dev_dbg(&gpio_dev->pdev->dev, "Invalid config param %04x\n",
+ 			param);
+ 		return -ENOTSUPP;
+ 	}
+@@ -706,7 +706,7 @@ static int amd_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 			break;
+ 
+ 		default:
+-			dev_err(&gpio_dev->pdev->dev,
++			dev_dbg(&gpio_dev->pdev->dev,
+ 				"Invalid config param %04x\n", param);
+ 			ret = -ENOTSUPP;
+ 		}
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08_spi.c b/drivers/pinctrl/pinctrl-mcp23s08_spi.c
+index 9ae10318f6f35..ea059b9c5542e 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08_spi.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08_spi.c
+@@ -91,18 +91,28 @@ static int mcp23s08_spi_regmap_init(struct mcp23s08 *mcp, struct device *dev,
+ 		mcp->reg_shift = 0;
+ 		mcp->chip.ngpio = 8;
+ 		mcp->chip.label = devm_kasprintf(dev, GFP_KERNEL, "mcp23s08.%d", addr);
++		if (!mcp->chip.label)
++			return -ENOMEM;
+ 
+ 		config = &mcp23x08_regmap;
+ 		name = devm_kasprintf(dev, GFP_KERNEL, "%d", addr);
++		if (!name)
++			return -ENOMEM;
++
+ 		break;
+ 
+ 	case MCP_TYPE_S17:
+ 		mcp->reg_shift = 1;
+ 		mcp->chip.ngpio = 16;
+ 		mcp->chip.label = devm_kasprintf(dev, GFP_KERNEL, "mcp23s17.%d", addr);
++		if (!mcp->chip.label)
++			return -ENOMEM;
+ 
+ 		config = &mcp23x17_regmap;
+ 		name = devm_kasprintf(dev, GFP_KERNEL, "%d", addr);
++		if (!name)
++			return -ENOMEM;
++
+ 		break;
+ 
+ 	case MCP_TYPE_S18:
+diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
+index 38800e86ed8ad..194f3205e5597 100644
+--- a/drivers/platform/mellanox/mlxbf-tmfifo.c
++++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
+@@ -56,6 +56,7 @@ struct mlxbf_tmfifo;
+  * @vq: pointer to the virtio virtqueue
+  * @desc: current descriptor of the pending packet
+  * @desc_head: head descriptor of the pending packet
++ * @drop_desc: dummy desc for packet dropping
+  * @cur_len: processed length of the current descriptor
+  * @rem_len: remaining length of the pending packet
+  * @pkt_len: total length of the pending packet
+@@ -72,6 +73,7 @@ struct mlxbf_tmfifo_vring {
+ 	struct virtqueue *vq;
+ 	struct vring_desc *desc;
+ 	struct vring_desc *desc_head;
++	struct vring_desc drop_desc;
+ 	int cur_len;
+ 	int rem_len;
+ 	u32 pkt_len;
+@@ -83,6 +85,14 @@ struct mlxbf_tmfifo_vring {
+ 	struct mlxbf_tmfifo *fifo;
+ };
+ 
++/* Check whether vring is in drop mode. */
++#define IS_VRING_DROP(_r) ({ \
++	typeof(_r) (r) = (_r); \
++	(r->desc_head == &r->drop_desc ? true : false); })
++
++/* A stub length to drop maximum length packet. */
++#define VRING_DROP_DESC_MAX_LEN		GENMASK(15, 0)
++
+ /* Interrupt types. */
+ enum {
+ 	MLXBF_TM_RX_LWM_IRQ,
+@@ -195,7 +205,7 @@ static u8 mlxbf_tmfifo_net_default_mac[ETH_ALEN] = {
+ static efi_char16_t mlxbf_tmfifo_efi_name[] = L"RshimMacAddr";
+ 
+ /* Maximum L2 header length. */
+-#define MLXBF_TMFIFO_NET_L2_OVERHEAD	36
++#define MLXBF_TMFIFO_NET_L2_OVERHEAD	(ETH_HLEN + VLAN_HLEN)
+ 
+ /* Supported virtio-net features. */
+ #define MLXBF_TMFIFO_NET_FEATURES \
+@@ -243,6 +253,7 @@ static int mlxbf_tmfifo_alloc_vrings(struct mlxbf_tmfifo *fifo,
+ 		vring->align = SMP_CACHE_BYTES;
+ 		vring->index = i;
+ 		vring->vdev_id = tm_vdev->vdev.id.device;
++		vring->drop_desc.len = VRING_DROP_DESC_MAX_LEN;
+ 		dev = &tm_vdev->vdev.dev;
+ 
+ 		size = vring_size(vring->num, vring->align);
+@@ -348,7 +359,7 @@ static u32 mlxbf_tmfifo_get_pkt_len(struct mlxbf_tmfifo_vring *vring,
+ 	return len;
+ }
+ 
+-static void mlxbf_tmfifo_release_pending_pkt(struct mlxbf_tmfifo_vring *vring)
++static void mlxbf_tmfifo_release_pkt(struct mlxbf_tmfifo_vring *vring)
+ {
+ 	struct vring_desc *desc_head;
+ 	u32 len = 0;
+@@ -577,19 +588,25 @@ static void mlxbf_tmfifo_rxtx_word(struct mlxbf_tmfifo_vring *vring,
+ 
+ 	if (vring->cur_len + sizeof(u64) <= len) {
+ 		/* The whole word. */
+-		if (is_rx)
+-			memcpy(addr + vring->cur_len, &data, sizeof(u64));
+-		else
+-			memcpy(&data, addr + vring->cur_len, sizeof(u64));
++		if (!IS_VRING_DROP(vring)) {
++			if (is_rx)
++				memcpy(addr + vring->cur_len, &data,
++				       sizeof(u64));
++			else
++				memcpy(&data, addr + vring->cur_len,
++				       sizeof(u64));
++		}
+ 		vring->cur_len += sizeof(u64);
+ 	} else {
+ 		/* Leftover bytes. */
+-		if (is_rx)
+-			memcpy(addr + vring->cur_len, &data,
+-			       len - vring->cur_len);
+-		else
+-			memcpy(&data, addr + vring->cur_len,
+-			       len - vring->cur_len);
++		if (!IS_VRING_DROP(vring)) {
++			if (is_rx)
++				memcpy(addr + vring->cur_len, &data,
++				       len - vring->cur_len);
++			else
++				memcpy(&data, addr + vring->cur_len,
++				       len - vring->cur_len);
++		}
+ 		vring->cur_len = len;
+ 	}
+ 
+@@ -606,13 +623,14 @@ static void mlxbf_tmfifo_rxtx_word(struct mlxbf_tmfifo_vring *vring,
+  * flag is set.
+  */
+ static void mlxbf_tmfifo_rxtx_header(struct mlxbf_tmfifo_vring *vring,
+-				     struct vring_desc *desc,
++				     struct vring_desc **desc,
+ 				     bool is_rx, bool *vring_change)
+ {
+ 	struct mlxbf_tmfifo *fifo = vring->fifo;
+ 	struct virtio_net_config *config;
+ 	struct mlxbf_tmfifo_msg_hdr hdr;
+ 	int vdev_id, hdr_len;
++	bool drop_rx = false;
+ 
+ 	/* Read/Write packet header. */
+ 	if (is_rx) {
+@@ -632,8 +650,8 @@ static void mlxbf_tmfifo_rxtx_header(struct mlxbf_tmfifo_vring *vring,
+ 			if (ntohs(hdr.len) >
+ 			    __virtio16_to_cpu(virtio_legacy_is_little_endian(),
+ 					      config->mtu) +
+-			    MLXBF_TMFIFO_NET_L2_OVERHEAD)
+-				return;
++					      MLXBF_TMFIFO_NET_L2_OVERHEAD)
++				drop_rx = true;
+ 		} else {
+ 			vdev_id = VIRTIO_ID_CONSOLE;
+ 			hdr_len = 0;
+@@ -648,16 +666,25 @@ static void mlxbf_tmfifo_rxtx_header(struct mlxbf_tmfifo_vring *vring,
+ 
+ 			if (!tm_dev2)
+ 				return;
+-			vring->desc = desc;
++			vring->desc = *desc;
+ 			vring = &tm_dev2->vrings[MLXBF_TMFIFO_VRING_RX];
+ 			*vring_change = true;
+ 		}
++
++		if (drop_rx && !IS_VRING_DROP(vring)) {
++			if (vring->desc_head)
++				mlxbf_tmfifo_release_pkt(vring);
++			*desc = &vring->drop_desc;
++			vring->desc_head = *desc;
++			vring->desc = *desc;
++		}
++
+ 		vring->pkt_len = ntohs(hdr.len) + hdr_len;
+ 	} else {
+ 		/* Network virtio has an extra header. */
+ 		hdr_len = (vring->vdev_id == VIRTIO_ID_NET) ?
+ 			   sizeof(struct virtio_net_hdr) : 0;
+-		vring->pkt_len = mlxbf_tmfifo_get_pkt_len(vring, desc);
++		vring->pkt_len = mlxbf_tmfifo_get_pkt_len(vring, *desc);
+ 		hdr.type = (vring->vdev_id == VIRTIO_ID_NET) ?
+ 			    VIRTIO_ID_NET : VIRTIO_ID_CONSOLE;
+ 		hdr.len = htons(vring->pkt_len - hdr_len);
+@@ -690,15 +717,23 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
+ 	/* Get the descriptor of the next packet. */
+ 	if (!vring->desc) {
+ 		desc = mlxbf_tmfifo_get_next_pkt(vring, is_rx);
+-		if (!desc)
+-			return false;
++		if (!desc) {
++			/* Drop next Rx packet to avoid stuck. */
++			if (is_rx) {
++				desc = &vring->drop_desc;
++				vring->desc_head = desc;
++				vring->desc = desc;
++			} else {
++				return false;
++			}
++		}
+ 	} else {
+ 		desc = vring->desc;
+ 	}
+ 
+ 	/* Beginning of a packet. Start to Rx/Tx packet header. */
+ 	if (vring->pkt_len == 0) {
+-		mlxbf_tmfifo_rxtx_header(vring, desc, is_rx, &vring_change);
++		mlxbf_tmfifo_rxtx_header(vring, &desc, is_rx, &vring_change);
+ 		(*avail)--;
+ 
+ 		/* Return if new packet is for another ring. */
+@@ -724,17 +759,24 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
+ 		vring->rem_len -= len;
+ 
+ 		/* Get the next desc on the chain. */
+-		if (vring->rem_len > 0 &&
++		if (!IS_VRING_DROP(vring) && vring->rem_len > 0 &&
+ 		    (virtio16_to_cpu(vdev, desc->flags) & VRING_DESC_F_NEXT)) {
+ 			idx = virtio16_to_cpu(vdev, desc->next);
+ 			desc = &vr->desc[idx];
+ 			goto mlxbf_tmfifo_desc_done;
+ 		}
+ 
+-		/* Done and release the pending packet. */
+-		mlxbf_tmfifo_release_pending_pkt(vring);
++		/* Done and release the packet. */
+ 		desc = NULL;
+ 		fifo->vring[is_rx] = NULL;
++		if (!IS_VRING_DROP(vring)) {
++			mlxbf_tmfifo_release_pkt(vring);
++		} else {
++			vring->pkt_len = 0;
++			vring->desc_head = NULL;
++			vring->desc = NULL;
++			return false;
++		}
+ 
+ 		/*
+ 		 * Make sure the load/store are in order before
+@@ -868,6 +910,7 @@ static bool mlxbf_tmfifo_virtio_notify(struct virtqueue *vq)
+ 			tm_vdev = fifo->vdev[VIRTIO_ID_CONSOLE];
+ 			mlxbf_tmfifo_console_output(tm_vdev, vring);
+ 			spin_unlock_irqrestore(&fifo->spin_lock[0], flags);
++			set_bit(MLXBF_TM_TX_LWM_IRQ, &fifo->pend_events);
+ 		} else if (test_and_set_bit(MLXBF_TM_TX_LWM_IRQ,
+ 					    &fifo->pend_events)) {
+ 			return true;
+@@ -913,7 +956,7 @@ static void mlxbf_tmfifo_virtio_del_vqs(struct virtio_device *vdev)
+ 
+ 		/* Release the pending packet. */
+ 		if (vring->desc)
+-			mlxbf_tmfifo_release_pending_pkt(vring);
++			mlxbf_tmfifo_release_pkt(vring);
+ 		vq = vring->vq;
+ 		if (vq) {
+ 			vring->vq = NULL;
+diff --git a/drivers/platform/x86/huawei-wmi.c b/drivers/platform/x86/huawei-wmi.c
+index 935562c870c3d..23ebd0c046e16 100644
+--- a/drivers/platform/x86/huawei-wmi.c
++++ b/drivers/platform/x86/huawei-wmi.c
+@@ -86,6 +86,8 @@ static const struct key_entry huawei_wmi_keymap[] = {
+ 	{ KE_IGNORE, 0x293, { KEY_KBDILLUMTOGGLE } },
+ 	{ KE_IGNORE, 0x294, { KEY_KBDILLUMUP } },
+ 	{ KE_IGNORE, 0x295, { KEY_KBDILLUMUP } },
++	// Ignore Ambient Light Sensoring
++	{ KE_KEY,    0x2c1, { KEY_RESERVED } },
+ 	{ KE_END,	 0 }
+ };
+ 
+diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
+index cebddefba2f42..0b0602fc43601 100644
+--- a/drivers/platform/x86/intel-hid.c
++++ b/drivers/platform/x86/intel-hid.c
+@@ -458,7 +458,7 @@ static bool button_array_present(struct platform_device *device)
+ static int intel_hid_probe(struct platform_device *device)
+ {
+ 	acpi_handle handle = ACPI_HANDLE(&device->dev);
+-	unsigned long long mode;
++	unsigned long long mode, dummy;
+ 	struct intel_hid_priv *priv;
+ 	acpi_status status;
+ 	int err;
+@@ -510,18 +510,15 @@ static int intel_hid_probe(struct platform_device *device)
+ 	if (err)
+ 		goto err_remove_notify;
+ 
+-	if (priv->array) {
+-		unsigned long long dummy;
++	intel_button_array_enable(&device->dev, true);
+ 
+-		intel_button_array_enable(&device->dev, true);
+-
+-		/* Call button load method to enable HID power button */
+-		if (!intel_hid_evaluate_method(handle, INTEL_HID_DSM_BTNL_FN,
+-					       &dummy)) {
+-			dev_warn(&device->dev,
+-				 "failed to enable HID power button\n");
+-		}
+-	}
++	/*
++	 * Call button load method to enable HID power button
++	 * Always do this since it activates events on some devices without
++	 * a button array too.
++	 */
++	if (!intel_hid_evaluate_method(handle, INTEL_HID_DSM_BTNL_FN, &dummy))
++		dev_warn(&device->dev, "failed to enable HID power button\n");
+ 
+ 	device_init_wakeup(&device->dev, true);
+ 	/*
+diff --git a/drivers/pwm/pwm-lpc32xx.c b/drivers/pwm/pwm-lpc32xx.c
+index 522f862eca526..504a8f506195a 100644
+--- a/drivers/pwm/pwm-lpc32xx.c
++++ b/drivers/pwm/pwm-lpc32xx.c
+@@ -51,10 +51,10 @@ static int lpc32xx_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	if (duty_cycles > 255)
+ 		duty_cycles = 255;
+ 
+-	val = readl(lpc32xx->base + (pwm->hwpwm << 2));
++	val = readl(lpc32xx->base);
+ 	val &= ~0xFFFF;
+ 	val |= (period_cycles << 8) | duty_cycles;
+-	writel(val, lpc32xx->base + (pwm->hwpwm << 2));
++	writel(val, lpc32xx->base);
+ 
+ 	return 0;
+ }
+@@ -69,9 +69,9 @@ static int lpc32xx_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	if (ret)
+ 		return ret;
+ 
+-	val = readl(lpc32xx->base + (pwm->hwpwm << 2));
++	val = readl(lpc32xx->base);
+ 	val |= PWM_ENABLE;
+-	writel(val, lpc32xx->base + (pwm->hwpwm << 2));
++	writel(val, lpc32xx->base);
+ 
+ 	return 0;
+ }
+@@ -81,9 +81,9 @@ static void lpc32xx_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	struct lpc32xx_pwm_chip *lpc32xx = to_lpc32xx_pwm_chip(chip);
+ 	u32 val;
+ 
+-	val = readl(lpc32xx->base + (pwm->hwpwm << 2));
++	val = readl(lpc32xx->base);
+ 	val &= ~PWM_ENABLE;
+-	writel(val, lpc32xx->base + (pwm->hwpwm << 2));
++	writel(val, lpc32xx->base);
+ 
+ 	clk_disable_unprepare(lpc32xx->clk);
+ }
+@@ -121,9 +121,9 @@ static int lpc32xx_pwm_probe(struct platform_device *pdev)
+ 	lpc32xx->chip.base = -1;
+ 
+ 	/* If PWM is disabled, configure the output to the default value */
+-	val = readl(lpc32xx->base + (lpc32xx->chip.pwms[0].hwpwm << 2));
++	val = readl(lpc32xx->base);
+ 	val &= ~PWM_PIN_LEVEL;
+-	writel(val, lpc32xx->base + (lpc32xx->chip.pwms[0].hwpwm << 2));
++	writel(val, lpc32xx->base);
+ 
+ 	ret = pwmchip_add(&lpc32xx->chip);
+ 	if (ret < 0) {
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index 98b6d4c09c82c..e776d1bfc9767 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -222,6 +222,10 @@ static struct glink_channel *qcom_glink_alloc_channel(struct qcom_glink *glink,
+ 
+ 	channel->glink = glink;
+ 	channel->name = kstrdup(name, GFP_KERNEL);
++	if (!channel->name) {
++		kfree(channel);
++		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	init_completion(&channel->open_req);
+ 	init_completion(&channel->open_ack);
+diff --git a/drivers/rtc/rtc-ds1685.c b/drivers/rtc/rtc-ds1685.c
+index dfbd7b88b2b91..98932ab0fcaba 100644
+--- a/drivers/rtc/rtc-ds1685.c
++++ b/drivers/rtc/rtc-ds1685.c
+@@ -1441,7 +1441,7 @@ ds1685_rtc_poweroff(struct platform_device *pdev)
+ 		unreachable();
+ 	}
+ }
+-EXPORT_SYMBOL(ds1685_rtc_poweroff);
++EXPORT_SYMBOL_GPL(ds1685_rtc_poweroff);
+ /* ----------------------------------------------------------------------- */
+ 
+ 
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index 792f8f5688085..09e932a7b17f4 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -2985,41 +2985,32 @@ static void _dasd_wake_block_flush_cb(struct dasd_ccw_req *cqr, void *data)
+  * Requeue a request back to the block request queue
+  * only works for block requests
+  */
+-static int _dasd_requeue_request(struct dasd_ccw_req *cqr)
++static void _dasd_requeue_request(struct dasd_ccw_req *cqr)
+ {
+-	struct dasd_block *block = cqr->block;
+ 	struct request *req;
+ 
+-	if (!block)
+-		return -EINVAL;
+ 	/*
+ 	 * If the request is an ERP request there is nothing to requeue.
+ 	 * This will be done with the remaining original request.
+ 	 */
+ 	if (cqr->refers)
+-		return 0;
++		return;
+ 	spin_lock_irq(&cqr->dq->lock);
+ 	req = (struct request *) cqr->callback_data;
+ 	blk_mq_requeue_request(req, true);
+ 	spin_unlock_irq(&cqr->dq->lock);
+ 
+-	return 0;
++	return;
+ }
+ 
+-/*
+- * Go through all request on the dasd_block request queue, cancel them
+- * on the respective dasd_device, and return them to the generic
+- * block layer.
+- */
+-static int dasd_flush_block_queue(struct dasd_block *block)
++static int _dasd_requests_to_flushqueue(struct dasd_block *block,
++					struct list_head *flush_queue)
+ {
+ 	struct dasd_ccw_req *cqr, *n;
+-	int rc, i;
+-	struct list_head flush_queue;
+ 	unsigned long flags;
++	int rc, i;
+ 
+-	INIT_LIST_HEAD(&flush_queue);
+-	spin_lock_bh(&block->queue_lock);
++	spin_lock_irqsave(&block->queue_lock, flags);
+ 	rc = 0;
+ restart:
+ 	list_for_each_entry_safe(cqr, n, &block->ccw_queue, blocklist) {
+@@ -3034,13 +3025,32 @@ restart:
+ 		 * is returned from the dasd_device layer.
+ 		 */
+ 		cqr->callback = _dasd_wake_block_flush_cb;
+-		for (i = 0; cqr != NULL; cqr = cqr->refers, i++)
+-			list_move_tail(&cqr->blocklist, &flush_queue);
++		for (i = 0; cqr; cqr = cqr->refers, i++)
++			list_move_tail(&cqr->blocklist, flush_queue);
+ 		if (i > 1)
+ 			/* moved more than one request - need to restart */
+ 			goto restart;
+ 	}
+-	spin_unlock_bh(&block->queue_lock);
++	spin_unlock_irqrestore(&block->queue_lock, flags);
++
++	return rc;
++}
++
++/*
++ * Go through all request on the dasd_block request queue, cancel them
++ * on the respective dasd_device, and return them to the generic
++ * block layer.
++ */
++static int dasd_flush_block_queue(struct dasd_block *block)
++{
++	struct dasd_ccw_req *cqr, *n;
++	struct list_head flush_queue;
++	unsigned long flags;
++	int rc;
++
++	INIT_LIST_HEAD(&flush_queue);
++	rc = _dasd_requests_to_flushqueue(block, &flush_queue);
++
+ 	/* Now call the callback function of flushed requests */
+ restart_cb:
+ 	list_for_each_entry_safe(cqr, n, &flush_queue, blocklist) {
+@@ -3977,75 +3987,36 @@ EXPORT_SYMBOL_GPL(dasd_generic_space_avail);
+  */
+ static int dasd_generic_requeue_all_requests(struct dasd_device *device)
+ {
++	struct dasd_block *block = device->block;
+ 	struct list_head requeue_queue;
+ 	struct dasd_ccw_req *cqr, *n;
+-	struct dasd_ccw_req *refers;
+ 	int rc;
+ 
+-	INIT_LIST_HEAD(&requeue_queue);
+-	spin_lock_irq(get_ccwdev_lock(device->cdev));
+-	rc = 0;
+-	list_for_each_entry_safe(cqr, n, &device->ccw_queue, devlist) {
+-		/* Check status and move request to flush_queue */
+-		if (cqr->status == DASD_CQR_IN_IO) {
+-			rc = device->discipline->term_IO(cqr);
+-			if (rc) {
+-				/* unable to terminate requeust */
+-				dev_err(&device->cdev->dev,
+-					"Unable to terminate request %p "
+-					"on suspend\n", cqr);
+-				spin_unlock_irq(get_ccwdev_lock(device->cdev));
+-				dasd_put_device(device);
+-				return rc;
+-			}
+-		}
+-		list_move_tail(&cqr->devlist, &requeue_queue);
+-	}
+-	spin_unlock_irq(get_ccwdev_lock(device->cdev));
+-
+-	list_for_each_entry_safe(cqr, n, &requeue_queue, devlist) {
+-		wait_event(dasd_flush_wq,
+-			   (cqr->status != DASD_CQR_CLEAR_PENDING));
++	if (!block)
++		return 0;
+ 
+-		/*
+-		 * requeue requests to blocklayer will only work
+-		 * for block device requests
+-		 */
+-		if (_dasd_requeue_request(cqr))
+-			continue;
++	INIT_LIST_HEAD(&requeue_queue);
++	rc = _dasd_requests_to_flushqueue(block, &requeue_queue);
+ 
+-		/* remove requests from device and block queue */
+-		list_del_init(&cqr->devlist);
+-		while (cqr->refers != NULL) {
+-			refers = cqr->refers;
+-			/* remove the request from the block queue */
+-			list_del(&cqr->blocklist);
+-			/* free the finished erp request */
+-			dasd_free_erp_request(cqr, cqr->memdev);
+-			cqr = refers;
++	/* Now call the callback function of flushed requests */
++restart_cb:
++	list_for_each_entry_safe(cqr, n, &requeue_queue, blocklist) {
++		wait_event(dasd_flush_wq, (cqr->status < DASD_CQR_QUEUED));
++		/* Process finished ERP request. */
++		if (cqr->refers) {
++			spin_lock_bh(&block->queue_lock);
++			__dasd_process_erp(block->base, cqr);
++			spin_unlock_bh(&block->queue_lock);
++			/* restart list_for_xx loop since dasd_process_erp
++			 * might remove multiple elements
++			 */
++			goto restart_cb;
+ 		}
+-
+-		/*
+-		 * _dasd_requeue_request already checked for a valid
+-		 * blockdevice, no need to check again
+-		 * all erp requests (cqr->refers) have a cqr->block
+-		 * pointer copy from the original cqr
+-		 */
++		_dasd_requeue_request(cqr);
+ 		list_del_init(&cqr->blocklist);
+ 		cqr->block->base->discipline->free_cp(
+ 			cqr, (struct request *) cqr->callback_data);
+ 	}
+-
+-	/*
+-	 * if requests remain then they are internal request
+-	 * and go back to the device queue
+-	 */
+-	if (!list_empty(&requeue_queue)) {
+-		/* move freeze_queue to start of the ccw_queue */
+-		spin_lock_irq(get_ccwdev_lock(device->cdev));
+-		list_splice_tail(&requeue_queue, &device->ccw_queue);
+-		spin_unlock_irq(get_ccwdev_lock(device->cdev));
+-	}
+ 	dasd_schedule_device_bh(device);
+ 	return rc;
+ }
+diff --git a/drivers/s390/block/dasd_3990_erp.c b/drivers/s390/block/dasd_3990_erp.c
+index 4691a3c35d725..c2d4ea74e0d00 100644
+--- a/drivers/s390/block/dasd_3990_erp.c
++++ b/drivers/s390/block/dasd_3990_erp.c
+@@ -2436,7 +2436,7 @@ static struct dasd_ccw_req *dasd_3990_erp_add_erp(struct dasd_ccw_req *cqr)
+ 	erp->block    = cqr->block;
+ 	erp->magic    = cqr->magic;
+ 	erp->expires  = cqr->expires;
+-	erp->retries  = 256;
++	erp->retries  = device->default_retries;
+ 	erp->buildclk = get_tod_clock();
+ 	erp->status = DASD_CQR_FILLED;
+ 
+diff --git a/drivers/s390/crypto/pkey_api.c b/drivers/s390/crypto/pkey_api.c
+index 870e00effe439..69882ff4db107 100644
+--- a/drivers/s390/crypto/pkey_api.c
++++ b/drivers/s390/crypto/pkey_api.c
+@@ -735,7 +735,7 @@ static int pkey_verifykey2(const u8 *key, size_t keylen,
+ 		if (ktype)
+ 			*ktype = PKEY_TYPE_EP11;
+ 		if (ksize)
+-			*ksize = kb->head.keybitlen;
++			*ksize = kb->head.bitlen;
+ 
+ 		rc = ep11_findcard2(&_apqns, &_nr_apqns, *cardnr, *domain,
+ 				    ZCRYPT_CEX7, EP11_API_V, kb->wkvp);
+diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c
+index 3b9eda311c273..b518009715eeb 100644
+--- a/drivers/s390/crypto/zcrypt_api.c
++++ b/drivers/s390/crypto/zcrypt_api.c
+@@ -399,6 +399,7 @@ static int zcdn_create(const char *name)
+ 			 ZCRYPT_NAME "_%d", (int) MINOR(devt));
+ 	nodename[sizeof(nodename)-1] = '\0';
+ 	if (dev_set_name(&zcdndev->device, nodename)) {
++		kfree(zcdndev);
+ 		rc = -EINVAL;
+ 		goto unlockout;
+ 	}
+diff --git a/drivers/s390/crypto/zcrypt_ep11misc.c b/drivers/s390/crypto/zcrypt_ep11misc.c
+index 9ce5a71da69b8..3daf259ba10e7 100644
+--- a/drivers/s390/crypto/zcrypt_ep11misc.c
++++ b/drivers/s390/crypto/zcrypt_ep11misc.c
+@@ -788,7 +788,7 @@ int ep11_genaeskey(u16 card, u16 domain, u32 keybitsize, u32 keygenflags,
+ 	kb->head.type = TOKTYPE_NON_CCA;
+ 	kb->head.len = rep_pl->data_len;
+ 	kb->head.version = TOKVER_EP11_AES;
+-	kb->head.keybitlen = keybitsize;
++	kb->head.bitlen = keybitsize;
+ 
+ out:
+ 	kfree(req);
+@@ -1056,7 +1056,7 @@ static int ep11_unwrapkey(u16 card, u16 domain,
+ 	kb->head.type = TOKTYPE_NON_CCA;
+ 	kb->head.len = rep_pl->data_len;
+ 	kb->head.version = TOKVER_EP11_AES;
+-	kb->head.keybitlen = keybitsize;
++	kb->head.bitlen = keybitsize;
+ 
+ out:
+ 	kfree(req);
+diff --git a/drivers/s390/crypto/zcrypt_ep11misc.h b/drivers/s390/crypto/zcrypt_ep11misc.h
+index 1e02b197c0035..d424fa901f1b0 100644
+--- a/drivers/s390/crypto/zcrypt_ep11misc.h
++++ b/drivers/s390/crypto/zcrypt_ep11misc.h
+@@ -29,14 +29,7 @@ struct ep11keyblob {
+ 	union {
+ 		u8 session[32];
+ 		/* only used for PKEY_TYPE_EP11: */
+-		struct {
+-			u8  type;      /* 0x00 (TOKTYPE_NON_CCA) */
+-			u8  res0;      /* unused */
+-			u16 len;       /* total length in bytes of this blob */
+-			u8  version;   /* 0x03 (TOKVER_EP11_AES) */
+-			u8  res1;      /* unused */
+-			u16 keybitlen; /* clear key bit len, 0 for unknown */
+-		} head;
++		struct ep11kblob_header head;
+ 	};
+ 	u8  wkvp[16];  /* wrapping key verification pattern */
+ 	u64 attr;      /* boolean key attributes */
+diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c
+index 593b167ceefee..2eb2885ee6e2f 100644
+--- a/drivers/scsi/aic94xx/aic94xx_task.c
++++ b/drivers/scsi/aic94xx/aic94xx_task.c
+@@ -208,7 +208,7 @@ Again:
+ 	switch (opcode) {
+ 	case TC_NO_ERROR:
+ 		ts->resp = SAS_TASK_COMPLETE;
+-		ts->stat = SAM_STAT_GOOD;
++		ts->stat = SAS_SAM_STAT_GOOD;
+ 		break;
+ 	case TC_UNDERRUN:
+ 		ts->resp = SAS_TASK_COMPLETE;
+diff --git a/drivers/scsi/be2iscsi/be_iscsi.c b/drivers/scsi/be2iscsi/be_iscsi.c
+index c4881657a807b..e07052fb0ec32 100644
+--- a/drivers/scsi/be2iscsi/be_iscsi.c
++++ b/drivers/scsi/be2iscsi/be_iscsi.c
+@@ -450,6 +450,10 @@ int beiscsi_iface_set_param(struct Scsi_Host *shost,
+ 	}
+ 
+ 	nla_for_each_attr(attrib, data, dt_len, rm_len) {
++		/* ignore nla_type as it is never used */
++		if (nla_len(attrib) < sizeof(*iface_param))
++			return -EINVAL;
++
+ 		iface_param = nla_data(attrib);
+ 
+ 		if (iface_param->param_type != ISCSI_NET_PARAM)
+diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
+index bbc5d6b9be737..a2d60ad2a6835 100644
+--- a/drivers/scsi/fcoe/fcoe_ctlr.c
++++ b/drivers/scsi/fcoe/fcoe_ctlr.c
+@@ -319,16 +319,17 @@ static void fcoe_ctlr_announce(struct fcoe_ctlr *fip)
+ {
+ 	struct fcoe_fcf *sel;
+ 	struct fcoe_fcf *fcf;
++	unsigned long flags;
+ 
+ 	mutex_lock(&fip->ctlr_mutex);
+-	spin_lock_bh(&fip->ctlr_lock);
++	spin_lock_irqsave(&fip->ctlr_lock, flags);
+ 
+ 	kfree_skb(fip->flogi_req);
+ 	fip->flogi_req = NULL;
+ 	list_for_each_entry(fcf, &fip->fcfs, list)
+ 		fcf->flogi_sent = 0;
+ 
+-	spin_unlock_bh(&fip->ctlr_lock);
++	spin_unlock_irqrestore(&fip->ctlr_lock, flags);
+ 	sel = fip->sel_fcf;
+ 
+ 	if (sel && ether_addr_equal(sel->fcf_mac, fip->dest_addr))
+@@ -699,6 +700,7 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport,
+ {
+ 	struct fc_frame *fp;
+ 	struct fc_frame_header *fh;
++	unsigned long flags;
+ 	u16 old_xid;
+ 	u8 op;
+ 	u8 mac[ETH_ALEN];
+@@ -732,11 +734,11 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport,
+ 		op = FIP_DT_FLOGI;
+ 		if (fip->mode == FIP_MODE_VN2VN)
+ 			break;
+-		spin_lock_bh(&fip->ctlr_lock);
++		spin_lock_irqsave(&fip->ctlr_lock, flags);
+ 		kfree_skb(fip->flogi_req);
+ 		fip->flogi_req = skb;
+ 		fip->flogi_req_send = 1;
+-		spin_unlock_bh(&fip->ctlr_lock);
++		spin_unlock_irqrestore(&fip->ctlr_lock, flags);
+ 		schedule_work(&fip->timer_work);
+ 		return -EINPROGRESS;
+ 	case ELS_FDISC:
+@@ -1713,10 +1715,11 @@ static int fcoe_ctlr_flogi_send_locked(struct fcoe_ctlr *fip)
+ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
+ {
+ 	struct fcoe_fcf *fcf;
++	unsigned long flags;
+ 	int error;
+ 
+ 	mutex_lock(&fip->ctlr_mutex);
+-	spin_lock_bh(&fip->ctlr_lock);
++	spin_lock_irqsave(&fip->ctlr_lock, flags);
+ 	LIBFCOE_FIP_DBG(fip, "re-sending FLOGI - reselect\n");
+ 	fcf = fcoe_ctlr_select(fip);
+ 	if (!fcf || fcf->flogi_sent) {
+@@ -1727,7 +1730,7 @@ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
+ 		fcoe_ctlr_solicit(fip, NULL);
+ 		error = fcoe_ctlr_flogi_send_locked(fip);
+ 	}
+-	spin_unlock_bh(&fip->ctlr_lock);
++	spin_unlock_irqrestore(&fip->ctlr_lock, flags);
+ 	mutex_unlock(&fip->ctlr_mutex);
+ 	return error;
+ }
+@@ -1744,8 +1747,9 @@ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
+ static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip)
+ {
+ 	struct fcoe_fcf *fcf;
++	unsigned long flags;
+ 
+-	spin_lock_bh(&fip->ctlr_lock);
++	spin_lock_irqsave(&fip->ctlr_lock, flags);
+ 	fcf = fip->sel_fcf;
+ 	if (!fcf || !fip->flogi_req_send)
+ 		goto unlock;
+@@ -1772,7 +1776,7 @@ static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip)
+ 	} else /* XXX */
+ 		LIBFCOE_FIP_DBG(fip, "No FCF selected - defer send\n");
+ unlock:
+-	spin_unlock_bh(&fip->ctlr_lock);
++	spin_unlock_irqrestore(&fip->ctlr_lock, flags);
+ }
+ 
+ /**
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+index 2c1028183b242..5b54cdd6b9767 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+@@ -1152,14 +1152,14 @@ static void slot_err_v1_hw(struct hisi_hba *hisi_hba,
+ 		}
+ 		default:
+ 		{
+-			ts->stat = SAM_STAT_CHECK_CONDITION;
++			ts->stat = SAS_SAM_STAT_CHECK_CONDITION;
+ 			break;
+ 		}
+ 		}
+ 	}
+ 		break;
+ 	case SAS_PROTOCOL_SMP:
+-		ts->stat = SAM_STAT_CHECK_CONDITION;
++		ts->stat = SAS_SAM_STAT_CHECK_CONDITION;
+ 		break;
+ 
+ 	case SAS_PROTOCOL_SATA:
+@@ -1281,7 +1281,7 @@ static void slot_complete_v1_hw(struct hisi_hba *hisi_hba,
+ 		struct scatterlist *sg_resp = &task->smp_task.smp_resp;
+ 		void *to = page_address(sg_page(sg_resp));
+ 
+-		ts->stat = SAM_STAT_GOOD;
++		ts->stat = SAS_SAM_STAT_GOOD;
+ 
+ 		dma_unmap_sg(dev, &task->smp_task.smp_req, 1,
+ 			     DMA_TO_DEVICE);
+@@ -1298,7 +1298,7 @@ static void slot_complete_v1_hw(struct hisi_hba *hisi_hba,
+ 		break;
+ 
+ 	default:
+-		ts->stat = SAM_STAT_CHECK_CONDITION;
++		ts->stat = SAS_SAM_STAT_CHECK_CONDITION;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+index b75d54339e40c..f6e9114debd4d 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+@@ -2026,6 +2026,11 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba,
+ 	u16 dma_tx_err_type = le16_to_cpu(err_record->dma_tx_err_type);
+ 	u16 sipc_rx_err_type = le16_to_cpu(err_record->sipc_rx_err_type);
+ 	u32 dma_rx_err_type = le32_to_cpu(err_record->dma_rx_err_type);
++	struct hisi_sas_complete_v2_hdr *complete_queue =
++			hisi_hba->complete_hdr[slot->cmplt_queue];
++	struct hisi_sas_complete_v2_hdr *complete_hdr =
++			&complete_queue[slot->cmplt_queue_slot];
++	u32 dw0 = le32_to_cpu(complete_hdr->dw0);
+ 	int error = -1;
+ 
+ 	if (err_phase == 1) {
+@@ -2168,7 +2173,7 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba,
+ 	}
+ 		break;
+ 	case SAS_PROTOCOL_SMP:
+-		ts->stat = SAM_STAT_CHECK_CONDITION;
++		ts->stat = SAS_SAM_STAT_CHECK_CONDITION;
+ 		break;
+ 
+ 	case SAS_PROTOCOL_SATA:
+@@ -2310,7 +2315,8 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba,
+ 			break;
+ 		}
+ 		}
+-		hisi_sas_sata_done(task, slot);
++		if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)
++			hisi_sas_sata_done(task, slot);
+ 	}
+ 		break;
+ 	default:
+@@ -2427,7 +2433,7 @@ static void slot_complete_v2_hw(struct hisi_hba *hisi_hba,
+ 		struct scatterlist *sg_resp = &task->smp_task.smp_resp;
+ 		void *to = page_address(sg_page(sg_resp));
+ 
+-		ts->stat = SAM_STAT_GOOD;
++		ts->stat = SAS_SAM_STAT_GOOD;
+ 
+ 		dma_unmap_sg(dev, &task->smp_task.smp_req, 1,
+ 			     DMA_TO_DEVICE);
+@@ -2441,12 +2447,13 @@ static void slot_complete_v2_hw(struct hisi_hba *hisi_hba,
+ 	case SAS_PROTOCOL_STP:
+ 	case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP:
+ 	{
+-		ts->stat = SAM_STAT_GOOD;
+-		hisi_sas_sata_done(task, slot);
++		ts->stat = SAS_SAM_STAT_GOOD;
++		if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)
++			hisi_sas_sata_done(task, slot);
+ 		break;
+ 	}
+ 	default:
+-		ts->stat = SAM_STAT_CHECK_CONDITION;
++		ts->stat = SAS_SAM_STAT_CHECK_CONDITION;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 65971bd80186b..0d21c64efa817 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -392,6 +392,8 @@
+ #define CMPLT_HDR_ERROR_PHASE_MSK   (0xff << CMPLT_HDR_ERROR_PHASE_OFF)
+ #define CMPLT_HDR_RSPNS_XFRD_OFF	10
+ #define CMPLT_HDR_RSPNS_XFRD_MSK	(0x1 << CMPLT_HDR_RSPNS_XFRD_OFF)
++#define CMPLT_HDR_RSPNS_GOOD_OFF	11
++#define CMPLT_HDR_RSPNS_GOOD_MSK	(0x1 << CMPLT_HDR_RSPNS_GOOD_OFF)
+ #define CMPLT_HDR_ERX_OFF		12
+ #define CMPLT_HDR_ERX_MSK		(0x1 << CMPLT_HDR_ERX_OFF)
+ #define CMPLT_HDR_ABORT_STAT_OFF	13
+@@ -465,6 +467,9 @@ struct hisi_sas_err_record_v3 {
+ #define RX_DATA_LEN_UNDERFLOW_OFF	6
+ #define RX_DATA_LEN_UNDERFLOW_MSK	(1 << RX_DATA_LEN_UNDERFLOW_OFF)
+ 
++#define RX_FIS_STATUS_ERR_OFF		0
++#define RX_FIS_STATUS_ERR_MSK		(1 << RX_FIS_STATUS_ERR_OFF)
++
+ #define HISI_SAS_COMMAND_ENTRIES_V3_HW 4096
+ #define HISI_SAS_MSI_COUNT_V3_HW 32
+ 
+@@ -2115,7 +2120,7 @@ static irqreturn_t fatal_axi_int_v3_hw(int irq_no, void *p)
+ 	return IRQ_HANDLED;
+ }
+ 
+-static void
++static bool
+ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task,
+ 	       struct hisi_sas_slot *slot)
+ {
+@@ -2128,11 +2133,22 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task,
+ 			hisi_sas_status_buf_addr_mem(slot);
+ 	u32 dma_rx_err_type = le32_to_cpu(record->dma_rx_err_type);
+ 	u32 trans_tx_fail_type = le32_to_cpu(record->trans_tx_fail_type);
++	u16 sipc_rx_err_type = le16_to_cpu(record->sipc_rx_err_type);
+ 	u32 dw3 = le32_to_cpu(complete_hdr->dw3);
++	u32 dw0 = le32_to_cpu(complete_hdr->dw0);
+ 
+ 	switch (task->task_proto) {
+ 	case SAS_PROTOCOL_SSP:
+ 		if (dma_rx_err_type & RX_DATA_LEN_UNDERFLOW_MSK) {
++			/*
++			 * If returned response frame is incorrect because of data underflow,
++			 * but I/O information has been written to the host memory, we examine
++			 * response IU.
++			 */
++			if (!(dw0 & CMPLT_HDR_RSPNS_GOOD_MSK) &&
++			    (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK))
++				return false;
++
+ 			ts->residual = trans_tx_fail_type;
+ 			ts->stat = SAS_DATA_UNDERRUN;
+ 		} else if (dw3 & CMPLT_HDR_IO_IN_TARGET_MSK) {
+@@ -2146,7 +2162,10 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task,
+ 	case SAS_PROTOCOL_SATA:
+ 	case SAS_PROTOCOL_STP:
+ 	case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP:
+-		if (dma_rx_err_type & RX_DATA_LEN_UNDERFLOW_MSK) {
++		if ((dw0 & CMPLT_HDR_RSPNS_XFRD_MSK) &&
++		    (sipc_rx_err_type & RX_FIS_STATUS_ERR_MSK)) {
++			ts->stat = SAS_PROTO_RESPONSE;
++		} else if (dma_rx_err_type & RX_DATA_LEN_UNDERFLOW_MSK) {
+ 			ts->residual = trans_tx_fail_type;
+ 			ts->stat = SAS_DATA_UNDERRUN;
+ 		} else if (dw3 & CMPLT_HDR_IO_IN_TARGET_MSK) {
+@@ -2156,14 +2175,16 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task,
+ 			ts->stat = SAS_OPEN_REJECT;
+ 			ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		}
+-		hisi_sas_sata_done(task, slot);
++		if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)
++			hisi_sas_sata_done(task, slot);
+ 		break;
+ 	case SAS_PROTOCOL_SMP:
+-		ts->stat = SAM_STAT_CHECK_CONDITION;
++		ts->stat = SAS_SAM_STAT_CHECK_CONDITION;
+ 		break;
+ 	default:
+ 		break;
+ 	}
++	return true;
+ }
+ 
+ static void slot_complete_v3_hw(struct hisi_hba *hisi_hba,
+@@ -2238,18 +2259,20 @@ static void slot_complete_v3_hw(struct hisi_hba *hisi_hba,
+ 	if ((dw0 & CMPLT_HDR_CMPLT_MSK) == 0x3) {
+ 		u32 *error_info = hisi_sas_status_buf_addr_mem(slot);
+ 
+-		slot_err_v3_hw(hisi_hba, task, slot);
+-		if (ts->stat != SAS_DATA_UNDERRUN)
+-			dev_info(dev, "erroneous completion iptt=%d task=%pK dev id=%d CQ hdr: 0x%x 0x%x 0x%x 0x%x Error info: 0x%x 0x%x 0x%x 0x%x\n",
+-				 slot->idx, task, sas_dev->device_id,
+-				 dw0, dw1, complete_hdr->act, dw3,
+-				 error_info[0], error_info[1],
+-				 error_info[2], error_info[3]);
+-		if (unlikely(slot->abort)) {
+-			sas_task_abort(task);
+-			return;
++		if (slot_err_v3_hw(hisi_hba, task, slot)) {
++			if (ts->stat != SAS_DATA_UNDERRUN)
++				dev_info(dev, "erroneous completion iptt=%d task=%pK dev id=%d addr=%016llx CQ hdr: 0x%x 0x%x 0x%x 0x%x Error info: 0x%x 0x%x 0x%x 0x%x\n",
++					slot->idx, task, sas_dev->device_id,
++					SAS_ADDR(device->sas_addr),
++					dw0, dw1, complete_hdr->act, dw3,
++					error_info[0], error_info[1],
++					error_info[2], error_info[3]);
++			if (unlikely(slot->abort)) {
++				sas_task_abort(task);
++				return;
++			}
++			goto out;
+ 		}
+-		goto out;
+ 	}
+ 
+ 	switch (task->task_proto) {
+@@ -2265,7 +2288,7 @@ static void slot_complete_v3_hw(struct hisi_hba *hisi_hba,
+ 		struct scatterlist *sg_resp = &task->smp_task.smp_resp;
+ 		void *to = page_address(sg_page(sg_resp));
+ 
+-		ts->stat = SAM_STAT_GOOD;
++		ts->stat = SAS_SAM_STAT_GOOD;
+ 
+ 		dma_unmap_sg(dev, &task->smp_task.smp_req, 1,
+ 			     DMA_TO_DEVICE);
+@@ -2278,11 +2301,12 @@ static void slot_complete_v3_hw(struct hisi_hba *hisi_hba,
+ 	case SAS_PROTOCOL_SATA:
+ 	case SAS_PROTOCOL_STP:
+ 	case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP:
+-		ts->stat = SAM_STAT_GOOD;
+-		hisi_sas_sata_done(task, slot);
++		ts->stat = SAS_SAM_STAT_GOOD;
++		if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)
++			hisi_sas_sata_done(task, slot);
+ 		break;
+ 	default:
+-		ts->stat = SAM_STAT_CHECK_CONDITION;
++		ts->stat = SAS_SAM_STAT_CHECK_CONDITION;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index 18321cf9db5d6..59eb6c2969860 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -522,7 +522,7 @@ EXPORT_SYMBOL(scsi_host_alloc);
+ static int __scsi_host_match(struct device *dev, const void *data)
+ {
+ 	struct Scsi_Host *p;
+-	const unsigned short *hostnum = data;
++	const unsigned int *hostnum = data;
+ 
+ 	p = class_to_shost(dev);
+ 	return p->host_no == *hostnum;
+@@ -539,7 +539,7 @@ static int __scsi_host_match(struct device *dev, const void *data)
+  *	that scsi_host_get() took. The put_device() below dropped
+  *	the reference from class_find_device().
+  **/
+-struct Scsi_Host *scsi_host_lookup(unsigned short hostnum)
++struct Scsi_Host *scsi_host_lookup(unsigned int hostnum)
+ {
+ 	struct device *cdev;
+ 	struct Scsi_Host *shost = NULL;
+diff --git a/drivers/scsi/isci/request.c b/drivers/scsi/isci/request.c
+index 6e0817941fa74..b6d68d871b6cb 100644
+--- a/drivers/scsi/isci/request.c
++++ b/drivers/scsi/isci/request.c
+@@ -2574,7 +2574,7 @@ static void isci_request_handle_controller_specific_errors(
+ 			if (!idev)
+ 				*status_ptr = SAS_DEVICE_UNKNOWN;
+ 			else
+-				*status_ptr = SAM_STAT_TASK_ABORTED;
++				*status_ptr = SAS_SAM_STAT_TASK_ABORTED;
+ 
+ 			clear_bit(IREQ_COMPLETE_IN_TARGET, &request->flags);
+ 		}
+@@ -2704,7 +2704,7 @@ static void isci_request_handle_controller_specific_errors(
+ 	default:
+ 		/* Task in the target is not done. */
+ 		*response_ptr = SAS_TASK_UNDELIVERED;
+-		*status_ptr = SAM_STAT_TASK_ABORTED;
++		*status_ptr = SAS_SAM_STAT_TASK_ABORTED;
+ 
+ 		if (task->task_proto == SAS_PROTOCOL_SMP)
+ 			set_bit(IREQ_COMPLETE_IN_TARGET, &request->flags);
+@@ -2727,7 +2727,7 @@ static void isci_process_stp_response(struct sas_task *task, struct dev_to_host_
+ 	if (ac_err_mask(fis->status))
+ 		ts->stat = SAS_PROTO_RESPONSE;
+ 	else
+-		ts->stat = SAM_STAT_GOOD;
++		ts->stat = SAS_SAM_STAT_GOOD;
+ 
+ 	ts->resp = SAS_TASK_COMPLETE;
+ }
+@@ -2790,7 +2790,7 @@ static void isci_request_io_request_complete(struct isci_host *ihost,
+ 	case SCI_IO_SUCCESS_IO_DONE_EARLY:
+ 
+ 		response = SAS_TASK_COMPLETE;
+-		status   = SAM_STAT_GOOD;
++		status   = SAS_SAM_STAT_GOOD;
+ 		set_bit(IREQ_COMPLETE_IN_TARGET, &request->flags);
+ 
+ 		if (completion_status == SCI_IO_SUCCESS_IO_DONE_EARLY) {
+@@ -2860,7 +2860,7 @@ static void isci_request_io_request_complete(struct isci_host *ihost,
+ 
+ 		/* Fail the I/O. */
+ 		response = SAS_TASK_UNDELIVERED;
+-		status = SAM_STAT_TASK_ABORTED;
++		status = SAS_SAM_STAT_TASK_ABORTED;
+ 
+ 		clear_bit(IREQ_COMPLETE_IN_TARGET, &request->flags);
+ 		break;
+diff --git a/drivers/scsi/isci/task.c b/drivers/scsi/isci/task.c
+index 26fa1a4d1e6bf..1d1db40a1572c 100644
+--- a/drivers/scsi/isci/task.c
++++ b/drivers/scsi/isci/task.c
+@@ -160,7 +160,7 @@ int isci_task_execute_task(struct sas_task *task, gfp_t gfp_flags)
+ 
+ 			isci_task_refuse(ihost, task,
+ 					 SAS_TASK_UNDELIVERED,
+-					 SAM_STAT_TASK_ABORTED);
++					 SAS_SAM_STAT_TASK_ABORTED);
+ 		} else {
+ 			task->task_state_flags |= SAS_TASK_AT_INITIATOR;
+ 			spin_unlock_irqrestore(&task->task_state_lock, flags);
+diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
+index a1a06a832d866..f92b889369c39 100644
+--- a/drivers/scsi/libsas/sas_ata.c
++++ b/drivers/scsi/libsas/sas_ata.c
+@@ -122,9 +122,10 @@ static void sas_ata_task_done(struct sas_task *task)
+ 		}
+ 	}
+ 
+-	if (stat->stat == SAS_PROTO_RESPONSE || stat->stat == SAM_STAT_GOOD ||
+-	    ((stat->stat == SAM_STAT_CHECK_CONDITION &&
+-	      dev->sata_dev.class == ATA_DEV_ATAPI))) {
++	if (stat->stat == SAS_PROTO_RESPONSE ||
++	    stat->stat == SAS_SAM_STAT_GOOD ||
++	    (stat->stat == SAS_SAM_STAT_CHECK_CONDITION &&
++	      dev->sata_dev.class == ATA_DEV_ATAPI)) {
+ 		memcpy(dev->sata_dev.fis, resp->ending_fis, ATA_RESP_FIS_SIZE);
+ 
+ 		if (!link->sactive) {
+diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
+index 51485d0251f2d..8444a4287ac1c 100644
+--- a/drivers/scsi/libsas/sas_expander.c
++++ b/drivers/scsi/libsas/sas_expander.c
+@@ -101,7 +101,7 @@ static int smp_execute_task_sg(struct domain_device *dev,
+ 			}
+ 		}
+ 		if (task->task_status.resp == SAS_TASK_COMPLETE &&
+-		    task->task_status.stat == SAM_STAT_GOOD) {
++		    task->task_status.stat == SAS_SAM_STAT_GOOD) {
+ 			res = 0;
+ 			break;
+ 		}
+diff --git a/drivers/scsi/libsas/sas_task.c b/drivers/scsi/libsas/sas_task.c
+index e2d42593ce529..2966ead1d4217 100644
+--- a/drivers/scsi/libsas/sas_task.c
++++ b/drivers/scsi/libsas/sas_task.c
+@@ -20,7 +20,7 @@ void sas_ssp_task_response(struct device *dev, struct sas_task *task,
+ 	else if (iu->datapres == 1)
+ 		tstat->stat = iu->resp_data[3];
+ 	else if (iu->datapres == 2) {
+-		tstat->stat = SAM_STAT_CHECK_CONDITION;
++		tstat->stat = SAS_SAM_STAT_CHECK_CONDITION;
+ 		tstat->buf_valid_size =
+ 			min_t(int, SAS_STATUS_BUF_SIZE,
+ 			      be32_to_cpu(iu->sense_data_len));
+@@ -32,7 +32,7 @@ void sas_ssp_task_response(struct device *dev, struct sas_task *task,
+ 	}
+ 	else
+ 		/* when datapres contains corrupt/unknown value... */
+-		tstat->stat = SAM_STAT_CHECK_CONDITION;
++		tstat->stat = SAS_SAM_STAT_CHECK_CONDITION;
+ }
+ EXPORT_SYMBOL_GPL(sas_ssp_task_response);
+ 
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 26b15a24300ef..3728e4cf6e057 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -131,6 +131,9 @@ _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc);
+ static void
+ _base_clear_outstanding_commands(struct MPT3SAS_ADAPTER *ioc);
+ 
++static u32
++_base_readl_ext_retry(const volatile void __iomem *addr);
++
+ /**
+  * mpt3sas_base_check_cmd_timeout - Function
+  *		to check timeout and command termination due
+@@ -206,6 +209,20 @@ _base_readl_aero(const volatile void __iomem *addr)
+ 	return ret_val;
+ }
+ 
++static u32
++_base_readl_ext_retry(const volatile void __iomem *addr)
++{
++	u32 i, ret_val;
++
++	for (i = 0 ; i < 30 ; i++) {
++		ret_val = readl(addr);
++		if (ret_val == 0)
++			continue;
++	}
++
++	return ret_val;
++}
++
+ static inline u32
+ _base_readl(const volatile void __iomem *addr)
+ {
+@@ -861,7 +878,7 @@ mpt3sas_halt_firmware(struct MPT3SAS_ADAPTER *ioc)
+ 
+ 	dump_stack();
+ 
+-	doorbell = ioc->base_readl(&ioc->chip->Doorbell);
++	doorbell = ioc->base_readl_ext_retry(&ioc->chip->Doorbell);
+ 	if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
+ 		mpt3sas_print_fault_code(ioc, doorbell &
+ 		    MPI2_DOORBELL_DATA_MASK);
+@@ -5667,7 +5684,7 @@ mpt3sas_base_get_iocstate(struct MPT3SAS_ADAPTER *ioc, int cooked)
+ {
+ 	u32 s, sc;
+ 
+-	s = ioc->base_readl(&ioc->chip->Doorbell);
++	s = ioc->base_readl_ext_retry(&ioc->chip->Doorbell);
+ 	sc = s & MPI2_IOC_STATE_MASK;
+ 	return cooked ? sc : s;
+ }
+@@ -5812,7 +5829,7 @@ _base_wait_for_doorbell_ack(struct MPT3SAS_ADAPTER *ioc, int timeout)
+ 					   __func__, count, timeout));
+ 			return 0;
+ 		} else if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) {
+-			doorbell = ioc->base_readl(&ioc->chip->Doorbell);
++			doorbell = ioc->base_readl_ext_retry(&ioc->chip->Doorbell);
+ 			if ((doorbell & MPI2_IOC_STATE_MASK) ==
+ 			    MPI2_IOC_STATE_FAULT) {
+ 				mpt3sas_print_fault_code(ioc, doorbell);
+@@ -5852,7 +5869,7 @@ _base_wait_for_doorbell_not_used(struct MPT3SAS_ADAPTER *ioc, int timeout)
+ 	count = 0;
+ 	cntdn = 1000 * timeout;
+ 	do {
+-		doorbell_reg = ioc->base_readl(&ioc->chip->Doorbell);
++		doorbell_reg = ioc->base_readl_ext_retry(&ioc->chip->Doorbell);
+ 		if (!(doorbell_reg & MPI2_DOORBELL_USED)) {
+ 			dhsprintk(ioc,
+ 				  ioc_info(ioc, "%s: successful count(%d), timeout(%d)\n",
+@@ -5989,7 +6006,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
+ 	__le32 *mfp;
+ 
+ 	/* make sure doorbell is not in use */
+-	if ((ioc->base_readl(&ioc->chip->Doorbell) & MPI2_DOORBELL_USED)) {
++	if ((ioc->base_readl_ext_retry(&ioc->chip->Doorbell) & MPI2_DOORBELL_USED)) {
+ 		ioc_err(ioc, "doorbell is in use (line=%d)\n", __LINE__);
+ 		return -EFAULT;
+ 	}
+@@ -6038,7 +6055,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
+ 	}
+ 
+ 	/* read the first two 16-bits, it gives the total length of the reply */
+-	reply[0] = le16_to_cpu(ioc->base_readl(&ioc->chip->Doorbell)
++	reply[0] = le16_to_cpu(ioc->base_readl_ext_retry(&ioc->chip->Doorbell)
+ 	    & MPI2_DOORBELL_DATA_MASK);
+ 	writel(0, &ioc->chip->HostInterruptStatus);
+ 	if ((_base_wait_for_doorbell_int(ioc, 5))) {
+@@ -6046,7 +6063,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
+ 			__LINE__);
+ 		return -EFAULT;
+ 	}
+-	reply[1] = le16_to_cpu(ioc->base_readl(&ioc->chip->Doorbell)
++	reply[1] = le16_to_cpu(ioc->base_readl_ext_retry(&ioc->chip->Doorbell)
+ 	    & MPI2_DOORBELL_DATA_MASK);
+ 	writel(0, &ioc->chip->HostInterruptStatus);
+ 
+@@ -6057,10 +6074,10 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
+ 			return -EFAULT;
+ 		}
+ 		if (i >=  reply_bytes/2) /* overflow case */
+-			ioc->base_readl(&ioc->chip->Doorbell);
++			ioc->base_readl_ext_retry(&ioc->chip->Doorbell);
+ 		else
+ 			reply[i] = le16_to_cpu(
+-			    ioc->base_readl(&ioc->chip->Doorbell)
++			    ioc->base_readl_ext_retry(&ioc->chip->Doorbell)
+ 			    & MPI2_DOORBELL_DATA_MASK);
+ 		writel(0, &ioc->chip->HostInterruptStatus);
+ 	}
+@@ -6906,7 +6923,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
+ 			goto out;
+ 		}
+ 
+-		host_diagnostic = ioc->base_readl(&ioc->chip->HostDiagnostic);
++		host_diagnostic = ioc->base_readl_ext_retry(&ioc->chip->HostDiagnostic);
+ 		drsprintk(ioc,
+ 			  ioc_info(ioc, "wrote magic sequence: count(%d), host_diagnostic(0x%08x)\n",
+ 				   count, host_diagnostic));
+@@ -6926,7 +6943,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
+ 	for (count = 0; count < (300000000 /
+ 		MPI2_HARD_RESET_PCIE_SECOND_READ_DELAY_MICRO_SEC); count++) {
+ 
+-		host_diagnostic = ioc->base_readl(&ioc->chip->HostDiagnostic);
++		host_diagnostic = ioc->base_readl_ext_retry(&ioc->chip->HostDiagnostic);
+ 
+ 		if (host_diagnostic == 0xFFFFFFFF) {
+ 			ioc_info(ioc,
+@@ -7313,10 +7330,13 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
+ 	ioc->rdpq_array_enable_assigned = 0;
+ 	ioc->use_32bit_dma = false;
+ 	ioc->dma_mask = 64;
+-	if (ioc->is_aero_ioc)
++	if (ioc->is_aero_ioc) {
+ 		ioc->base_readl = &_base_readl_aero;
+-	else
++		ioc->base_readl_ext_retry = &_base_readl_ext_retry;
++	} else {
+ 		ioc->base_readl = &_base_readl;
++		ioc->base_readl_ext_retry = &_base_readl;
++	}
+ 	r = mpt3sas_base_map_resources(ioc);
+ 	if (r)
+ 		goto out_free_resources;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.h b/drivers/scsi/mpt3sas/mpt3sas_base.h
+index 823bbe64a477f..dc0e130ba5ea4 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.h
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.h
+@@ -1469,6 +1469,7 @@ struct MPT3SAS_ADAPTER {
+ 	u8		diag_trigger_active;
+ 	u8		atomic_desc_capable;
+ 	BASE_READ_REG	base_readl;
++	BASE_READ_REG	base_readl_ext_retry;
+ 	struct SL_WH_MASTER_TRIGGER_T diag_trigger_master;
+ 	struct SL_WH_EVENT_TRIGGERS_T diag_trigger_event;
+ 	struct SL_WH_SCSI_TRIGGERS_T diag_trigger_scsi;
+diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
+index 484e01428da28..a2a13969c686e 100644
+--- a/drivers/scsi/mvsas/mv_sas.c
++++ b/drivers/scsi/mvsas/mv_sas.c
+@@ -1314,7 +1314,7 @@ static int mvs_exec_internal_tmf_task(struct domain_device *dev,
+ 		}
+ 
+ 		if (task->task_status.resp == SAS_TASK_COMPLETE &&
+-		    task->task_status.stat == SAM_STAT_GOOD) {
++		    task->task_status.stat == SAS_SAM_STAT_GOOD) {
+ 			res = TMF_RESP_FUNC_COMPLETE;
+ 			break;
+ 		}
+@@ -1764,7 +1764,7 @@ int mvs_slot_complete(struct mvs_info *mvi, u32 rx_desc, u32 flags)
+ 	case SAS_PROTOCOL_SSP:
+ 		/* hw says status == 0, datapres == 0 */
+ 		if (rx_desc & RXQ_GOOD) {
+-			tstat->stat = SAM_STAT_GOOD;
++			tstat->stat = SAS_SAM_STAT_GOOD;
+ 			tstat->resp = SAS_TASK_COMPLETE;
+ 		}
+ 		/* response frame present */
+@@ -1773,12 +1773,12 @@ int mvs_slot_complete(struct mvs_info *mvi, u32 rx_desc, u32 flags)
+ 						sizeof(struct mvs_err_info);
+ 			sas_ssp_task_response(mvi->dev, task, iu);
+ 		} else
+-			tstat->stat = SAM_STAT_CHECK_CONDITION;
++			tstat->stat = SAS_SAM_STAT_CHECK_CONDITION;
+ 		break;
+ 
+ 	case SAS_PROTOCOL_SMP: {
+ 			struct scatterlist *sg_resp = &task->smp_task.smp_resp;
+-			tstat->stat = SAM_STAT_GOOD;
++			tstat->stat = SAS_SAM_STAT_GOOD;
+ 			to = kmap_atomic(sg_page(sg_resp));
+ 			memcpy(to + sg_resp->offset,
+ 				slot->response + sizeof(struct mvs_err_info),
+@@ -1795,7 +1795,7 @@ int mvs_slot_complete(struct mvs_info *mvi, u32 rx_desc, u32 flags)
+ 		}
+ 
+ 	default:
+-		tstat->stat = SAM_STAT_CHECK_CONDITION;
++		tstat->stat = SAS_SAM_STAT_CHECK_CONDITION;
+ 		break;
+ 	}
+ 	if (!slot->port->port_attached) {
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index da9fbe62a34d1..e9b3485baee01 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -1881,7 +1881,7 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 			   param);
+ 		if (param == 0) {
+ 			ts->resp = SAS_TASK_COMPLETE;
+-			ts->stat = SAM_STAT_GOOD;
++			ts->stat = SAS_SAM_STAT_GOOD;
+ 		} else {
+ 			ts->resp = SAS_TASK_COMPLETE;
+ 			ts->stat = SAS_PROTO_RESPONSE;
+@@ -2341,7 +2341,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		pm8001_dbg(pm8001_ha, IO, "IO_SUCCESS\n");
+ 		if (param == 0) {
+ 			ts->resp = SAS_TASK_COMPLETE;
+-			ts->stat = SAM_STAT_GOOD;
++			ts->stat = SAS_SAM_STAT_GOOD;
+ 			/* check if response is for SEND READ LOG */
+ 			if (pm8001_dev &&
+ 				(pm8001_dev->id & NCQ_READ_LOG_FLAG)) {
+@@ -2864,7 +2864,7 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	case IO_SUCCESS:
+ 		pm8001_dbg(pm8001_ha, IO, "IO_SUCCESS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+-		ts->stat = SAM_STAT_GOOD;
++		ts->stat = SAS_SAM_STAT_GOOD;
+ 		if (pm8001_dev)
+ 			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+@@ -2891,17 +2891,17 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	case IO_ERROR_HW_TIMEOUT:
+ 		pm8001_dbg(pm8001_ha, IO, "IO_ERROR_HW_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+-		ts->stat = SAM_STAT_BUSY;
++		ts->stat = SAS_SAM_STAT_BUSY;
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+ 		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+-		ts->stat = SAM_STAT_BUSY;
++		ts->stat = SAS_SAM_STAT_BUSY;
+ 		break;
+ 	case IO_XFER_ERROR_PHY_NOT_READY:
+ 		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PHY_NOT_READY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+-		ts->stat = SAM_STAT_BUSY;
++		ts->stat = SAS_SAM_STAT_BUSY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED:
+ 		pm8001_dbg(pm8001_ha, IO,
+@@ -3656,7 +3656,7 @@ int pm8001_mpi_task_abort_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	case IO_SUCCESS:
+ 		pm8001_dbg(pm8001_ha, EH, "IO_SUCCESS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+-		ts->stat = SAM_STAT_GOOD;
++		ts->stat = SAS_SAM_STAT_GOOD;
+ 		break;
+ 	case IO_NOT_VALID:
+ 		pm8001_dbg(pm8001_ha, EH, "IO_NOT_VALID\n");
+@@ -4288,7 +4288,7 @@ static int pm8001_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 
+ 			spin_lock_irqsave(&task->task_state_lock, flags);
+ 			ts->resp = SAS_TASK_COMPLETE;
+-			ts->stat = SAM_STAT_GOOD;
++			ts->stat = SAS_SAM_STAT_GOOD;
+ 			task->task_state_flags &= ~SAS_TASK_STATE_PENDING;
+ 			task->task_state_flags &= ~SAS_TASK_AT_INITIATOR;
+ 			task->task_state_flags |= SAS_TASK_STATE_DONE;
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index ba5852548bee3..a16ed0695f1ae 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -764,7 +764,7 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
+ 		}
+ 
+ 		if (task->task_status.resp == SAS_TASK_COMPLETE &&
+-			task->task_status.stat == SAM_STAT_GOOD) {
++			task->task_status.stat == SAS_SAM_STAT_GOOD) {
+ 			res = TMF_RESP_FUNC_COMPLETE;
+ 			break;
+ 		}
+@@ -846,7 +846,7 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
+ 		}
+ 
+ 		if (task->task_status.resp == SAS_TASK_COMPLETE &&
+-			task->task_status.stat == SAM_STAT_GOOD) {
++			task->task_status.stat == SAS_SAM_STAT_GOOD) {
+ 			res = TMF_RESP_FUNC_COMPLETE;
+ 			break;
+ 
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 0305c8999ba5d..c98c0a53a018c 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -1916,7 +1916,7 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha , void *piomb)
+ 			   param);
+ 		if (param == 0) {
+ 			ts->resp = SAS_TASK_COMPLETE;
+-			ts->stat = SAM_STAT_GOOD;
++			ts->stat = SAS_SAM_STAT_GOOD;
+ 		} else {
+ 			ts->resp = SAS_TASK_COMPLETE;
+ 			ts->stat = SAS_PROTO_RESPONSE;
+@@ -2450,7 +2450,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		pm8001_dbg(pm8001_ha, IO, "IO_SUCCESS\n");
+ 		if (param == 0) {
+ 			ts->resp = SAS_TASK_COMPLETE;
+-			ts->stat = SAM_STAT_GOOD;
++			ts->stat = SAS_SAM_STAT_GOOD;
+ 			/* check if response is for SEND READ LOG */
+ 			if (pm8001_dev &&
+ 				(pm8001_dev->id & NCQ_READ_LOG_FLAG)) {
+@@ -3004,7 +3004,7 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	case IO_SUCCESS:
+ 		pm8001_dbg(pm8001_ha, IO, "IO_SUCCESS\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+-		ts->stat = SAM_STAT_GOOD;
++		ts->stat = SAS_SAM_STAT_GOOD;
+ 		if (pm8001_dev)
+ 			atomic_dec(&pm8001_dev->running_req);
+ 		if (pm8001_ha->smp_exp_mode == SMP_DIRECT) {
+@@ -3046,17 +3046,17 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	case IO_ERROR_HW_TIMEOUT:
+ 		pm8001_dbg(pm8001_ha, IO, "IO_ERROR_HW_TIMEOUT\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+-		ts->stat = SAM_STAT_BUSY;
++		ts->stat = SAS_SAM_STAT_BUSY;
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+ 		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+-		ts->stat = SAM_STAT_BUSY;
++		ts->stat = SAS_SAM_STAT_BUSY;
+ 		break;
+ 	case IO_XFER_ERROR_PHY_NOT_READY:
+ 		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_PHY_NOT_READY\n");
+ 		ts->resp = SAS_TASK_COMPLETE;
+-		ts->stat = SAM_STAT_BUSY;
++		ts->stat = SAS_SAM_STAT_BUSY;
+ 		break;
+ 	case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED:
+ 		pm8001_dbg(pm8001_ha, IO,
+@@ -4679,7 +4679,7 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 
+ 			spin_lock_irqsave(&task->task_state_lock, flags);
+ 			ts->resp = SAS_TASK_COMPLETE;
+-			ts->stat = SAM_STAT_GOOD;
++			ts->stat = SAS_SAM_STAT_GOOD;
+ 			task->task_state_flags &= ~SAS_TASK_STATE_PENDING;
+ 			task->task_state_flags &= ~SAS_TASK_AT_INITIATOR;
+ 			task->task_state_flags |= SAS_TASK_STATE_DONE;
+diff --git a/drivers/scsi/qedf/qedf_dbg.h b/drivers/scsi/qedf/qedf_dbg.h
+index 2386bfb73c461..4a536c8377081 100644
+--- a/drivers/scsi/qedf/qedf_dbg.h
++++ b/drivers/scsi/qedf/qedf_dbg.h
+@@ -60,6 +60,8 @@ extern uint qedf_debug;
+ #define QEDF_LOG_NOTICE	0x40000000	/* Notice logs */
+ #define QEDF_LOG_WARN		0x80000000	/* Warning logs */
+ 
++#define QEDF_DEBUGFS_LOG_LEN (2 * PAGE_SIZE)
++
+ /* Debug context structure */
+ struct qedf_dbg_ctx {
+ 	unsigned int host_no;
+diff --git a/drivers/scsi/qedf/qedf_debugfs.c b/drivers/scsi/qedf/qedf_debugfs.c
+index a3ed681c8ce3f..451fd236bfd05 100644
+--- a/drivers/scsi/qedf/qedf_debugfs.c
++++ b/drivers/scsi/qedf/qedf_debugfs.c
+@@ -8,6 +8,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/debugfs.h>
+ #include <linux/module.h>
++#include <linux/vmalloc.h>
+ 
+ #include "qedf.h"
+ #include "qedf_dbg.h"
+@@ -98,7 +99,9 @@ static ssize_t
+ qedf_dbg_fp_int_cmd_read(struct file *filp, char __user *buffer, size_t count,
+ 			 loff_t *ppos)
+ {
++	ssize_t ret;
+ 	size_t cnt = 0;
++	char *cbuf;
+ 	int id;
+ 	struct qedf_fastpath *fp = NULL;
+ 	struct qedf_dbg_ctx *qedf_dbg =
+@@ -108,19 +111,25 @@ qedf_dbg_fp_int_cmd_read(struct file *filp, char __user *buffer, size_t count,
+ 
+ 	QEDF_INFO(qedf_dbg, QEDF_LOG_DEBUGFS, "entered\n");
+ 
+-	cnt = sprintf(buffer, "\nFastpath I/O completions\n\n");
++	cbuf = vmalloc(QEDF_DEBUGFS_LOG_LEN);
++	if (!cbuf)
++		return 0;
++
++	cnt += scnprintf(cbuf + cnt, QEDF_DEBUGFS_LOG_LEN - cnt, "\nFastpath I/O completions\n\n");
+ 
+ 	for (id = 0; id < qedf->num_queues; id++) {
+ 		fp = &(qedf->fp_array[id]);
+ 		if (fp->sb_id == QEDF_SB_ID_NULL)
+ 			continue;
+-		cnt += sprintf((buffer + cnt), "#%d: %lu\n", id,
+-			       fp->completions);
++		cnt += scnprintf(cbuf + cnt, QEDF_DEBUGFS_LOG_LEN - cnt,
++				 "#%d: %lu\n", id, fp->completions);
+ 	}
+ 
+-	cnt = min_t(int, count, cnt - *ppos);
+-	*ppos += cnt;
+-	return cnt;
++	ret = simple_read_from_buffer(buffer, count, ppos, cbuf, cnt);
++
++	vfree(cbuf);
++
++	return ret;
+ }
+ 
+ static ssize_t
+@@ -138,15 +147,14 @@ qedf_dbg_debug_cmd_read(struct file *filp, char __user *buffer, size_t count,
+ 			loff_t *ppos)
+ {
+ 	int cnt;
++	char cbuf[32];
+ 	struct qedf_dbg_ctx *qedf_dbg =
+ 				(struct qedf_dbg_ctx *)filp->private_data;
+ 
+ 	QEDF_INFO(qedf_dbg, QEDF_LOG_DEBUGFS, "debug mask=0x%x\n", qedf_debug);
+-	cnt = sprintf(buffer, "debug mask = 0x%x\n", qedf_debug);
++	cnt = scnprintf(cbuf, sizeof(cbuf), "debug mask = 0x%x\n", qedf_debug);
+ 
+-	cnt = min_t(int, count, cnt - *ppos);
+-	*ppos += cnt;
+-	return cnt;
++	return simple_read_from_buffer(buffer, count, ppos, cbuf, cnt);
+ }
+ 
+ static ssize_t
+@@ -185,18 +193,17 @@ qedf_dbg_stop_io_on_error_cmd_read(struct file *filp, char __user *buffer,
+ 				   size_t count, loff_t *ppos)
+ {
+ 	int cnt;
++	char cbuf[7];
+ 	struct qedf_dbg_ctx *qedf_dbg =
+ 				(struct qedf_dbg_ctx *)filp->private_data;
+ 	struct qedf_ctx *qedf = container_of(qedf_dbg,
+ 	    struct qedf_ctx, dbg_ctx);
+ 
+ 	QEDF_INFO(qedf_dbg, QEDF_LOG_DEBUGFS, "entered\n");
+-	cnt = sprintf(buffer, "%s\n",
++	cnt = scnprintf(cbuf, sizeof(cbuf), "%s\n",
+ 	    qedf->stop_io_on_error ? "true" : "false");
+ 
+-	cnt = min_t(int, count, cnt - *ppos);
+-	*ppos += cnt;
+-	return cnt;
++	return simple_read_from_buffer(buffer, count, ppos, cbuf, cnt);
+ }
+ 
+ static ssize_t
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index cc2152c56d355..96e470746767a 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -1981,8 +1981,9 @@ static int qedi_cpu_offline(unsigned int cpu)
+ 	struct qedi_percpu_s *p = this_cpu_ptr(&qedi_percpu);
+ 	struct qedi_work *work, *tmp;
+ 	struct task_struct *thread;
++	unsigned long flags;
+ 
+-	spin_lock_bh(&p->p_work_lock);
++	spin_lock_irqsave(&p->p_work_lock, flags);
+ 	thread = p->iothread;
+ 	p->iothread = NULL;
+ 
+@@ -1993,7 +1994,7 @@ static int qedi_cpu_offline(unsigned int cpu)
+ 			kfree(work);
+ 	}
+ 
+-	spin_unlock_bh(&p->p_work_lock);
++	spin_unlock_irqrestore(&p->p_work_lock, flags);
+ 	if (thread)
+ 		kthread_stop(thread);
+ 	return 0;
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 12e27ee8c5c73..f919da0bbf75e 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -3028,8 +3028,6 @@ qla24xx_vport_create(struct fc_vport *fc_vport, bool disable)
+ 			vha->flags.difdix_supported = 1;
+ 			ql_dbg(ql_dbg_user, vha, 0x7082,
+ 			    "Registered for DIF/DIX type 1 and 3 protection.\n");
+-			if (ql2xenabledif == 1)
+-				prot = SHOST_DIX_TYPE0_PROTECTION;
+ 			scsi_host_set_prot(vha->host,
+ 			    prot | SHOST_DIF_TYPE1_PROTECTION
+ 			    | SHOST_DIF_TYPE2_PROTECTION
+diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c
+index 00b4d033b07a9..8e9ffbec6643f 100644
+--- a/drivers/scsi/qla2xxx/qla_dbg.c
++++ b/drivers/scsi/qla2xxx/qla_dbg.c
+@@ -18,7 +18,7 @@
+  * | Queue Command and IO tracing |       0x3074       | 0x300b         |
+  * |                              |                    | 0x3027-0x3028  |
+  * |                              |                    | 0x303d-0x3041  |
+- * |                              |                    | 0x302d,0x3033  |
++ * |                              |                    | 0x302e,0x3033  |
+  * |                              |                    | 0x3036,0x3038  |
+  * |                              |                    | 0x303a		|
+  * | DPC Thread                   |       0x4023       | 0x4002,0x4013  |
+@@ -112,8 +112,13 @@ qla27xx_dump_mpi_ram(struct qla_hw_data *ha, uint32_t addr, uint32_t *ram,
+ 	uint32_t stat;
+ 	ulong i, j, timer = 6000000;
+ 	int rval = QLA_FUNCTION_FAILED;
++	scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
+ 
+ 	clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
++
++	if (qla_pci_disconnected(vha, reg))
++		return rval;
++
+ 	for (i = 0; i < ram_dwords; i += dwords, addr += dwords) {
+ 		if (i + dwords > ram_dwords)
+ 			dwords = ram_dwords - i;
+@@ -137,6 +142,9 @@ qla27xx_dump_mpi_ram(struct qla_hw_data *ha, uint32_t addr, uint32_t *ram,
+ 		while (timer--) {
+ 			udelay(5);
+ 
++			if (qla_pci_disconnected(vha, reg))
++				return rval;
++
+ 			stat = rd_reg_dword(&reg->host_status);
+ 			/* Check for pending interrupts. */
+ 			if (!(stat & HSRX_RISC_INT))
+@@ -191,9 +199,13 @@ qla24xx_dump_ram(struct qla_hw_data *ha, uint32_t addr, __be32 *ram,
+ 	uint32_t dwords = qla2x00_gid_list_size(ha) / 4;
+ 	uint32_t stat;
+ 	ulong i, j, timer = 6000000;
++	scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
+ 
+ 	clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
+ 
++	if (qla_pci_disconnected(vha, reg))
++		return rval;
++
+ 	for (i = 0; i < ram_dwords; i += dwords, addr += dwords) {
+ 		if (i + dwords > ram_dwords)
+ 			dwords = ram_dwords - i;
+@@ -215,8 +227,10 @@ qla24xx_dump_ram(struct qla_hw_data *ha, uint32_t addr, __be32 *ram,
+ 		ha->flags.mbox_int = 0;
+ 		while (timer--) {
+ 			udelay(5);
+-			stat = rd_reg_dword(&reg->host_status);
++			if (qla_pci_disconnected(vha, reg))
++				return rval;
+ 
++			stat = rd_reg_dword(&reg->host_status);
+ 			/* Check for pending interrupts. */
+ 			if (!(stat & HSRX_RISC_INT))
+ 				continue;
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 06b0ad2b51bb4..6645b69fc2a0f 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -396,6 +396,7 @@ typedef union {
+ 	} b;
+ } port_id_t;
+ #define INVALID_PORT_ID	0xFFFFFF
++#define ISP_REG16_DISCONNECT 0xFFFF
+ 
+ static inline le_id_t be_id_to_le(be_id_t id)
+ {
+@@ -3848,6 +3849,13 @@ struct qla_hw_data_stat {
+ 	u32 num_mpi_reset;
+ };
+ 
++/* refer to pcie_do_recovery reference */
++typedef enum {
++	QLA_PCI_RESUME,
++	QLA_PCI_ERR_DETECTED,
++	QLA_PCI_MMIO_ENABLED,
++	QLA_PCI_SLOT_RESET,
++} pci_error_state_t;
+ /*
+  * Qlogic host adapter specific data structure.
+ */
+@@ -4192,7 +4200,6 @@ struct qla_hw_data {
+ 	uint8_t		aen_mbx_count;
+ 	atomic_t	num_pend_mbx_stage1;
+ 	atomic_t	num_pend_mbx_stage2;
+-	atomic_t	num_pend_mbx_stage3;
+ 	uint16_t	frame_payload_size;
+ 
+ 	uint32_t	login_retry_count;
+@@ -4586,6 +4593,7 @@ struct qla_hw_data {
+ #define DEFAULT_ZIO_THRESHOLD 5
+ 
+ 	struct qla_hw_data_stat stat;
++	pci_error_state_t pci_error_state;
+ };
+ 
+ struct active_regions {
+@@ -4706,7 +4714,7 @@ typedef struct scsi_qla_host {
+ #define FX00_CRITEMP_RECOVERY	25
+ #define FX00_HOST_INFO_RESEND	26
+ #define QPAIR_ONLINE_CHECK_NEEDED	27
+-#define SET_NVME_ZIO_THRESHOLD_NEEDED	28
++#define DO_EEH_RECOVERY		28
+ #define DETECT_SFP_CHANGE	29
+ #define N2N_LOGIN_NEEDED	30
+ #define IOCB_WORK_ACTIVE	31
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index 7e5ee31581d61..8ef2de6822de9 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -222,6 +222,7 @@ extern int qla2x00_post_uevent_work(struct scsi_qla_host *, u32);
+ 
+ extern int qla2x00_post_uevent_work(struct scsi_qla_host *, u32);
+ extern void qla2x00_disable_board_on_pci_error(struct work_struct *);
++extern void qla_eeh_work(struct work_struct *);
+ extern void qla2x00_sp_compl(srb_t *sp, int);
+ extern void qla2xxx_qpair_sp_free_dma(srb_t *sp);
+ extern void qla2xxx_qpair_sp_compl(srb_t *sp, int);
+@@ -233,6 +234,8 @@ int qla24xx_post_relogin_work(struct scsi_qla_host *vha);
+ void qla2x00_wait_for_sess_deletion(scsi_qla_host_t *);
+ void qla24xx_process_purex_rdp(struct scsi_qla_host *vha,
+ 			       struct purex_item *pkt);
++void qla_pci_set_eeh_busy(struct scsi_qla_host *);
++void qla_schedule_eeh_work(struct scsi_qla_host *);
+ 
+ /*
+  * Global Functions in qla_mid.c source file.
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 3d1a53ba86ac8..a8d2c06285c24 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -483,6 +483,7 @@ static
+ void qla24xx_handle_adisc_event(scsi_qla_host_t *vha, struct event_arg *ea)
+ {
+ 	struct fc_port *fcport = ea->fcport;
++	unsigned long flags;
+ 
+ 	ql_dbg(ql_dbg_disc, vha, 0x20d2,
+ 	    "%s %8phC DS %d LS %d rc %d login %d|%d rscn %d|%d lid %d\n",
+@@ -497,9 +498,15 @@ void qla24xx_handle_adisc_event(scsi_qla_host_t *vha, struct event_arg *ea)
+ 		ql_dbg(ql_dbg_disc, vha, 0x2066,
+ 		    "%s %8phC: adisc fail: post delete\n",
+ 		    __func__, ea->fcport->port_name);
++
++		spin_lock_irqsave(&vha->work_lock, flags);
+ 		/* deleted = 0 & logout_on_delete = force fw cleanup */
+-		fcport->deleted = 0;
++		if (fcport->deleted == QLA_SESS_DELETED)
++			fcport->deleted = 0;
++
+ 		fcport->logout_on_delete = 1;
++		spin_unlock_irqrestore(&vha->work_lock, flags);
++
+ 		qlt_schedule_sess_for_deletion(ea->fcport);
+ 		return;
+ 	}
+@@ -1405,7 +1412,6 @@ void __qla24xx_handle_gpdb_event(scsi_qla_host_t *vha, struct event_arg *ea)
+ 
+ 	spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
+ 	ea->fcport->login_gen++;
+-	ea->fcport->deleted = 0;
+ 	ea->fcport->logout_on_delete = 1;
+ 
+ 	if (!ea->fcport->login_succ && !IS_SW_RESV_ADDR(ea->fcport->d_id)) {
+@@ -4336,15 +4342,16 @@ qla2x00_init_rings(scsi_qla_host_t *vha)
+ 		memcpy(ha->port_name, ha->init_cb->port_name, WWN_SIZE);
+ 	}
+ 
++	QLA_FW_STARTED(ha);
+ 	rval = qla2x00_init_firmware(vha, ha->init_cb_size);
+ next_check:
+ 	if (rval) {
++		QLA_FW_STOPPED(ha);
+ 		ql_log(ql_log_fatal, vha, 0x00d2,
+ 		    "Init Firmware **** FAILED ****.\n");
+ 	} else {
+ 		ql_dbg(ql_dbg_init, vha, 0x00d3,
+ 		    "Init Firmware -- success.\n");
+-		QLA_FW_STARTED(ha);
+ 		vha->u_ql2xexchoffld = vha->u_ql2xiniexchg = 0;
+ 	}
+ 
+@@ -5107,7 +5114,7 @@ static void qla_get_login_template(scsi_qla_host_t *vha)
+ 	__be32 *q;
+ 
+ 	memset(ha->init_cb, 0, ha->init_cb_size);
+-	sz = min_t(int, sizeof(struct fc_els_csp), ha->init_cb_size);
++	sz = min_t(int, sizeof(struct fc_els_flogi), ha->init_cb_size);
+ 	rval = qla24xx_get_port_login_templ(vha, ha->init_cb_dma,
+ 					    ha->init_cb, sz);
+ 	if (rval != QLA_SUCCESS) {
+@@ -5639,6 +5646,8 @@ qla2x00_reg_remote_port(scsi_qla_host_t *vha, fc_port_t *fcport)
+ void
+ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ {
++	unsigned long flags;
++
+ 	if (IS_SW_RESV_ADDR(fcport->d_id))
+ 		return;
+ 
+@@ -5648,7 +5657,11 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	qla2x00_set_fcport_disc_state(fcport, DSC_UPD_FCPORT);
+ 	fcport->login_retry = vha->hw->login_retry_count;
+ 	fcport->flags &= ~(FCF_LOGIN_NEEDED | FCF_ASYNC_SENT);
++
++	spin_lock_irqsave(&vha->work_lock, flags);
+ 	fcport->deleted = 0;
++	spin_unlock_irqrestore(&vha->work_lock, flags);
++
+ 	if (vha->hw->current_topology == ISP_CFG_NL)
+ 		fcport->logout_on_delete = 0;
+ 	else
+@@ -6913,14 +6926,15 @@ qla2x00_abort_isp_cleanup(scsi_qla_host_t *vha)
+ 	}
+ 
+ 	/* purge MBox commands */
+-	if (atomic_read(&ha->num_pend_mbx_stage3)) {
++	spin_lock_irqsave(&ha->hardware_lock, flags);
++	if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags)) {
+ 		clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
+ 		complete(&ha->mbx_intr_comp);
+ 	}
++	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+ 
+ 	i = 0;
+-	while (atomic_read(&ha->num_pend_mbx_stage3) ||
+-	    atomic_read(&ha->num_pend_mbx_stage2) ||
++	while (atomic_read(&ha->num_pend_mbx_stage2) ||
+ 	    atomic_read(&ha->num_pend_mbx_stage1)) {
+ 		msleep(20);
+ 		i++;
+@@ -6969,22 +6983,18 @@ qla2x00_abort_isp_cleanup(scsi_qla_host_t *vha)
+ 	}
+ 	spin_unlock_irqrestore(&ha->vport_slock, flags);
+ 
+-	if (!ha->flags.eeh_busy) {
+-		/* Make sure for ISP 82XX IO DMA is complete */
+-		if (IS_P3P_TYPE(ha)) {
+-			qla82xx_chip_reset_cleanup(vha);
+-			ql_log(ql_log_info, vha, 0x00b4,
+-			    "Done chip reset cleanup.\n");
+-
+-			/* Done waiting for pending commands.
+-			 * Reset the online flag.
+-			 */
+-			vha->flags.online = 0;
+-		}
++	/* Make sure for ISP 82XX IO DMA is complete */
++	if (IS_P3P_TYPE(ha)) {
++		qla82xx_chip_reset_cleanup(vha);
++		ql_log(ql_log_info, vha, 0x00b4,
++		       "Done chip reset cleanup.\n");
+ 
+-		/* Requeue all commands in outstanding command list. */
+-		qla2x00_abort_all_cmds(vha, DID_RESET << 16);
++		/* Done waiting for pending commands. Reset online flag */
++		vha->flags.online = 0;
+ 	}
++
++	/* Requeue all commands in outstanding command list. */
++	qla2x00_abort_all_cmds(vha, DID_RESET << 16);
+ 	/* memory barrier */
+ 	wmb();
+ }
+@@ -7012,6 +7022,12 @@ qla2x00_abort_isp(scsi_qla_host_t *vha)
+ 	if (vha->flags.online) {
+ 		qla2x00_abort_isp_cleanup(vha);
+ 
++		if (qla2x00_isp_reg_stat(ha)) {
++			ql_log(ql_log_info, vha, 0x803f,
++			       "ISP Abort - ISP reg disconnect, exiting.\n");
++			return status;
++		}
++
+ 		if (test_and_clear_bit(ISP_ABORT_TO_ROM, &vha->dpc_flags)) {
+ 			ha->flags.chip_reset_done = 1;
+ 			vha->flags.online = 1;
+@@ -7052,8 +7068,18 @@ qla2x00_abort_isp(scsi_qla_host_t *vha)
+ 
+ 		ha->isp_ops->get_flash_version(vha, req->ring);
+ 
++		if (qla2x00_isp_reg_stat(ha)) {
++			ql_log(ql_log_info, vha, 0x803f,
++			       "ISP Abort - ISP reg disconnect pre nvram config, exiting.\n");
++			return status;
++		}
+ 		ha->isp_ops->nvram_config(vha);
+ 
++		if (qla2x00_isp_reg_stat(ha)) {
++			ql_log(ql_log_info, vha, 0x803f,
++			       "ISP Abort - ISP reg disconnect post nvmram config, exiting.\n");
++			return status;
++		}
+ 		if (!qla2x00_restart_isp(vha)) {
+ 			clear_bit(RESET_MARKER_NEEDED, &vha->dpc_flags);
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
+index 7e8b59a0954bb..47ee5b9f2a55c 100644
+--- a/drivers/scsi/qla2xxx/qla_inline.h
++++ b/drivers/scsi/qla2xxx/qla_inline.h
+@@ -435,3 +435,49 @@ qla_put_iocbs(struct qla_qpair *qp, struct iocb_resource *iores)
+ 	}
+ 	iores->res_type = RESOURCE_NONE;
+ }
++
++#define ISP_REG_DISCONNECT 0xffffffffU
++/**************************************************************************
++ * qla2x00_isp_reg_stat
++ *
++ * Description:
++ *        Read the host status register of ISP before aborting the command.
++ *
++ * Input:
++ *       ha = pointer to host adapter structure.
++ *
++ *
++ * Returns:
++ *       Either true or false.
++ *
++ * Note: Return true if there is register disconnect.
++ **************************************************************************/
++static inline
++uint32_t qla2x00_isp_reg_stat(struct qla_hw_data *ha)
++{
++	struct device_reg_24xx __iomem *reg = &ha->iobase->isp24;
++	struct device_reg_82xx __iomem *reg82 = &ha->iobase->isp82;
++
++	if (IS_P3P_TYPE(ha))
++		return ((rd_reg_dword(&reg82->host_int)) == ISP_REG_DISCONNECT);
++	else
++		return ((rd_reg_dword(&reg->host_status)) ==
++			ISP_REG_DISCONNECT);
++}
++
++static inline
++bool qla_pci_disconnected(struct scsi_qla_host *vha,
++			  struct device_reg_24xx __iomem *reg)
++{
++	uint32_t stat;
++	bool ret = false;
++
++	stat = rd_reg_dword(&reg->host_status);
++	if (stat == 0xffffffff) {
++		ql_log(ql_log_info, vha, 0x8041,
++		       "detected PCI disconnect.\n");
++		qla_schedule_eeh_work(vha);
++		ret = true;
++	}
++	return ret;
++}
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index 54fc0afbc02ac..1752a62031710 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -1644,8 +1644,14 @@ qla24xx_start_scsi(srb_t *sp)
+ 		goto queuing_error;
+ 
+ 	if (req->cnt < (req_cnt + 2)) {
+-		cnt = IS_SHADOW_REG_CAPABLE(ha) ? *req->out_ptr :
+-		    rd_reg_dword_relaxed(req->req_q_out);
++		if (IS_SHADOW_REG_CAPABLE(ha)) {
++			cnt = *req->out_ptr;
++		} else {
++			cnt = rd_reg_dword_relaxed(req->req_q_out);
++			if (qla2x00_check_reg16_for_disconnect(vha, cnt))
++				goto queuing_error;
++		}
++
+ 		if (req->ring_index < cnt)
+ 			req->cnt = cnt - req->ring_index;
+ 		else
+@@ -1836,8 +1842,13 @@ qla24xx_dif_start_scsi(srb_t *sp)
+ 		goto queuing_error;
+ 
+ 	if (req->cnt < (req_cnt + 2)) {
+-		cnt = IS_SHADOW_REG_CAPABLE(ha) ? *req->out_ptr :
+-		    rd_reg_dword_relaxed(req->req_q_out);
++		if (IS_SHADOW_REG_CAPABLE(ha)) {
++			cnt = *req->out_ptr;
++		} else {
++			cnt = rd_reg_dword_relaxed(req->req_q_out);
++			if (qla2x00_check_reg16_for_disconnect(vha, cnt))
++				goto queuing_error;
++		}
+ 		if (req->ring_index < cnt)
+ 			req->cnt = cnt - req->ring_index;
+ 		else
+@@ -1911,6 +1922,7 @@ queuing_error:
+ 
+ 	qla_put_iocbs(sp->qpair, &sp->iores);
+ 	spin_unlock_irqrestore(&ha->hardware_lock, flags);
++
+ 	return QLA_FUNCTION_FAILED;
+ }
+ 
+@@ -1978,8 +1990,14 @@ qla2xxx_start_scsi_mq(srb_t *sp)
+ 		goto queuing_error;
+ 
+ 	if (req->cnt < (req_cnt + 2)) {
+-		cnt = IS_SHADOW_REG_CAPABLE(ha) ? *req->out_ptr :
+-		    rd_reg_dword_relaxed(req->req_q_out);
++		if (IS_SHADOW_REG_CAPABLE(ha)) {
++			cnt = *req->out_ptr;
++		} else {
++			cnt = rd_reg_dword_relaxed(req->req_q_out);
++			if (qla2x00_check_reg16_for_disconnect(vha, cnt))
++				goto queuing_error;
++		}
++
+ 		if (req->ring_index < cnt)
+ 			req->cnt = cnt - req->ring_index;
+ 		else
+@@ -2185,8 +2203,14 @@ qla2xxx_dif_start_scsi_mq(srb_t *sp)
+ 		goto queuing_error;
+ 
+ 	if (req->cnt < (req_cnt + 2)) {
+-		cnt = IS_SHADOW_REG_CAPABLE(ha) ? *req->out_ptr :
+-		    rd_reg_dword_relaxed(req->req_q_out);
++		if (IS_SHADOW_REG_CAPABLE(ha)) {
++			cnt = *req->out_ptr;
++		} else {
++			cnt = rd_reg_dword_relaxed(req->req_q_out);
++			if (qla2x00_check_reg16_for_disconnect(vha, cnt))
++				goto queuing_error;
++		}
++
+ 		if (req->ring_index < cnt)
+ 			req->cnt = cnt - req->ring_index;
+ 		else
+@@ -2263,6 +2287,7 @@ queuing_error:
+ 
+ 	qla_put_iocbs(sp->qpair, &sp->iores);
+ 	spin_unlock_irqrestore(&qpair->qp_lock, flags);
++
+ 	return QLA_FUNCTION_FAILED;
+ }
+ 
+@@ -2307,6 +2332,11 @@ __qla2x00_alloc_iocbs(struct qla_qpair *qpair, srb_t *sp)
+ 			cnt = qla2x00_debounce_register(
+ 			    ISP_REQ_Q_OUT(ha, &reg->isp));
+ 
++		if (!qpair->use_shadow_reg && cnt == ISP_REG16_DISCONNECT) {
++			qla_schedule_eeh_work(vha);
++			return NULL;
++		}
++
+ 		if  (req->ring_index < cnt)
+ 			req->cnt = cnt - req->ring_index;
+ 		else
+@@ -3711,6 +3741,9 @@ qla2x00_start_sp(srb_t *sp)
+ 	void *pkt;
+ 	unsigned long flags;
+ 
++	if (vha->hw->flags.eeh_busy)
++		return -EIO;
++
+ 	spin_lock_irqsave(qp->qp_lock_ptr, flags);
+ 	pkt = __qla2x00_alloc_iocbs(sp->qpair, sp);
+ 	if (!pkt) {
+@@ -3928,8 +3961,14 @@ qla2x00_start_bidir(srb_t *sp, struct scsi_qla_host *vha, uint32_t tot_dsds)
+ 
+ 	/* Check for room on request queue. */
+ 	if (req->cnt < req_cnt + 2) {
+-		cnt = IS_SHADOW_REG_CAPABLE(ha) ? *req->out_ptr :
+-		    rd_reg_dword_relaxed(req->req_q_out);
++		if (IS_SHADOW_REG_CAPABLE(ha)) {
++			cnt = *req->out_ptr;
++		} else {
++			cnt = rd_reg_dword_relaxed(req->req_q_out);
++			if (qla2x00_check_reg16_for_disconnect(vha, cnt))
++				goto queuing_error;
++		}
++
+ 		if  (req->ring_index < cnt)
+ 			req->cnt = cnt - req->ring_index;
+ 		else
+@@ -3968,5 +4007,6 @@ qla2x00_start_bidir(srb_t *sp, struct scsi_qla_host *vha, uint32_t tot_dsds)
+ 	qla2x00_start_iocbs(vha, req);
+ queuing_error:
+ 	spin_unlock_irqrestore(&ha->hardware_lock, flags);
++
+ 	return rval;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 7ea73ad845de6..fd0beb194e351 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -269,12 +269,7 @@ qla2x00_check_reg32_for_disconnect(scsi_qla_host_t *vha, uint32_t reg)
+ 		if (!test_and_set_bit(PFLG_DISCONNECTED, &vha->pci_flags) &&
+ 		    !test_bit(PFLG_DRIVER_REMOVING, &vha->pci_flags) &&
+ 		    !test_bit(PFLG_DRIVER_PROBING, &vha->pci_flags)) {
+-			/*
+-			 * Schedule this (only once) on the default system
+-			 * workqueue so that all the adapter workqueues and the
+-			 * DPC thread can be shutdown cleanly.
+-			 */
+-			schedule_work(&vha->hw->board_disable);
++			qla_schedule_eeh_work(vha);
+ 		}
+ 		return true;
+ 	} else
+@@ -982,8 +977,12 @@ qla2x00_async_event(scsi_qla_host_t *vha, struct rsp_que *rsp, uint16_t *mb)
+ 	unsigned long	flags;
+ 	fc_port_t	*fcport = NULL;
+ 
+-	if (!vha->hw->flags.fw_started)
++	if (!vha->hw->flags.fw_started) {
++		ql_log(ql_log_warn, vha, 0x50ff,
++		    "Dropping AEN - %04x %04x %04x %04x.\n",
++		    mb[0], mb[1], mb[2], mb[3]);
+ 		return;
++	}
+ 
+ 	/* Setup to process RIO completion. */
+ 	handle_cnt = 0;
+@@ -1639,8 +1638,6 @@ global_port_update:
+ 	case MBA_TEMPERATURE_ALERT:
+ 		ql_dbg(ql_dbg_async, vha, 0x505e,
+ 		    "TEMPERATURE ALERT: %04x %04x %04x\n", mb[1], mb[2], mb[3]);
+-		if (mb[1] == 0x12)
+-			schedule_work(&ha->board_disable);
+ 		break;
+ 
+ 	case MBA_TRANS_INSERT:
+@@ -3139,7 +3136,6 @@ check_scsi_status:
+ 	case CS_PORT_BUSY:
+ 	case CS_INCOMPLETE:
+ 	case CS_PORT_UNAVAILABLE:
+-	case CS_TIMEOUT:
+ 	case CS_RESET:
+ 
+ 		/*
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 6ff720d8961d0..21ba7100ff676 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -167,7 +167,8 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ 	/* check if ISP abort is active and return cmd with timeout */
+ 	if ((test_bit(ABORT_ISP_ACTIVE, &base_vha->dpc_flags) ||
+ 	    test_bit(ISP_ABORT_RETRY, &base_vha->dpc_flags) ||
+-	    test_bit(ISP_ABORT_NEEDED, &base_vha->dpc_flags)) &&
++	    test_bit(ISP_ABORT_NEEDED, &base_vha->dpc_flags) ||
++	    ha->flags.eeh_busy) &&
+ 	    !is_rom_cmd(mcp->mb[0])) {
+ 		ql_log(ql_log_info, vha, 0x1005,
+ 		    "Cmd 0x%x aborted with timeout since ISP Abort is pending\n",
+@@ -268,7 +269,6 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ 		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+ 
+ 		wait_time = jiffies;
+-		atomic_inc(&ha->num_pend_mbx_stage3);
+ 		if (!wait_for_completion_timeout(&ha->mbx_intr_comp,
+ 		    mcp->tov * HZ)) {
+ 			ql_dbg(ql_dbg_mbx, vha, 0x117a,
+@@ -283,7 +283,6 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ 				spin_unlock_irqrestore(&ha->hardware_lock,
+ 				    flags);
+ 				atomic_dec(&ha->num_pend_mbx_stage2);
+-				atomic_dec(&ha->num_pend_mbx_stage3);
+ 				rval = QLA_ABORTED;
+ 				goto premature_exit;
+ 			}
+@@ -293,11 +292,9 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ 			ha->flags.mbox_busy = 0;
+ 			spin_unlock_irqrestore(&ha->hardware_lock, flags);
+ 			atomic_dec(&ha->num_pend_mbx_stage2);
+-			atomic_dec(&ha->num_pend_mbx_stage3);
+ 			rval = QLA_ABORTED;
+ 			goto premature_exit;
+ 		}
+-		atomic_dec(&ha->num_pend_mbx_stage3);
+ 
+ 		if (time_after(jiffies, wait_time + 5 * HZ))
+ 			ql_log(ql_log_warn, vha, 0x1015, "cmd=0x%x, waited %d msecs\n",
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 8b0c8f9bdef08..6dad7787f20de 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -397,8 +397,13 @@ static inline int qla2x00_start_nvme_mq(srb_t *sp)
+ 	}
+ 	req_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
+ 	if (req->cnt < (req_cnt + 2)) {
+-		cnt = IS_SHADOW_REG_CAPABLE(ha) ? *req->out_ptr :
+-		    rd_reg_dword_relaxed(req->req_q_out);
++		if (IS_SHADOW_REG_CAPABLE(ha)) {
++			cnt = *req->out_ptr;
++		} else {
++			cnt = rd_reg_dword_relaxed(req->req_q_out);
++			if (qla2x00_check_reg16_for_disconnect(vha, cnt))
++				goto queuing_error;
++		}
+ 
+ 		if (req->ring_index < cnt)
+ 			req->cnt = cnt - req->ring_index;
+@@ -535,6 +540,7 @@ static inline int qla2x00_start_nvme_mq(srb_t *sp)
+ 
+ queuing_error:
+ 	spin_unlock_irqrestore(&qpair->qp_lock, flags);
++
+ 	return rval;
+ }
+ 
+@@ -604,7 +610,7 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
+ 
+ 	rval = qla2x00_start_nvme_mq(sp);
+ 	if (rval != QLA_SUCCESS) {
+-		ql_log(ql_log_warn, vha, 0x212d,
++		ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x212d,
+ 		    "qla2x00_start_nvme_mq failed = %d\n", rval);
+ 		sp->priv = NULL;
+ 		priv->sp = NULL;
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index cbc5af26303a3..8d199deaf3b12 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -879,8 +879,8 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ 			goto qc24_fail_command;
+ 	}
+ 
+-	if (!fcport) {
+-		cmd->result = DID_NO_CONNECT << 16;
++	if (!fcport || fcport->deleted) {
++		cmd->result = DID_IMM_RETRY << 16;
+ 		goto qc24_fail_command;
+ 	}
+ 
+@@ -961,11 +961,18 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
+ 		goto qc24_fail_command;
+ 	}
+ 
+-	if (!fcport) {
++	if (!qpair->online) {
++		ql_dbg(ql_dbg_io, vha, 0x3077,
++		       "qpair not online. eeh_busy=%d.\n", ha->flags.eeh_busy);
+ 		cmd->result = DID_NO_CONNECT << 16;
+ 		goto qc24_fail_command;
+ 	}
+ 
++	if (!fcport || fcport->deleted) {
++		cmd->result = DID_IMM_RETRY << 16;
++		goto qc24_fail_command;
++	}
++
+ 	if (atomic_read(&fcport->state) != FCS_ONLINE || fcport->deleted) {
+ 		if (atomic_read(&fcport->state) == FCS_DEVICE_DEAD ||
+ 			atomic_read(&base_vha->loop_state) == LOOP_DEAD) {
+@@ -1190,35 +1197,6 @@ qla2x00_wait_for_chip_reset(scsi_qla_host_t *vha)
+ 	return return_status;
+ }
+ 
+-#define ISP_REG_DISCONNECT 0xffffffffU
+-/**************************************************************************
+-* qla2x00_isp_reg_stat
+-*
+-* Description:
+-*	Read the host status register of ISP before aborting the command.
+-*
+-* Input:
+-*	ha = pointer to host adapter structure.
+-*
+-*
+-* Returns:
+-*	Either true or false.
+-*
+-* Note:	Return true if there is register disconnect.
+-**************************************************************************/
+-static inline
+-uint32_t qla2x00_isp_reg_stat(struct qla_hw_data *ha)
+-{
+-	struct device_reg_24xx __iomem *reg = &ha->iobase->isp24;
+-	struct device_reg_82xx __iomem *reg82 = &ha->iobase->isp82;
+-
+-	if (IS_P3P_TYPE(ha))
+-		return ((rd_reg_dword(&reg82->host_int)) == ISP_REG_DISCONNECT);
+-	else
+-		return ((rd_reg_dword(&reg->host_status)) ==
+-			ISP_REG_DISCONNECT);
+-}
+-
+ /**************************************************************************
+ * qla2xxx_eh_abort
+ *
+@@ -1253,6 +1231,7 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd)
+ 	if (qla2x00_isp_reg_stat(ha)) {
+ 		ql_log(ql_log_info, vha, 0x8042,
+ 		    "PCI/Register disconnect, exiting.\n");
++		qla_pci_set_eeh_busy(vha);
+ 		return FAILED;
+ 	}
+ 
+@@ -1444,6 +1423,7 @@ qla2xxx_eh_device_reset(struct scsi_cmnd *cmd)
+ 	if (qla2x00_isp_reg_stat(ha)) {
+ 		ql_log(ql_log_info, vha, 0x803e,
+ 		    "PCI/Register disconnect, exiting.\n");
++		qla_pci_set_eeh_busy(vha);
+ 		return FAILED;
+ 	}
+ 
+@@ -1460,6 +1440,7 @@ qla2xxx_eh_target_reset(struct scsi_cmnd *cmd)
+ 	if (qla2x00_isp_reg_stat(ha)) {
+ 		ql_log(ql_log_info, vha, 0x803f,
+ 		    "PCI/Register disconnect, exiting.\n");
++		qla_pci_set_eeh_busy(vha);
+ 		return FAILED;
+ 	}
+ 
+@@ -1495,6 +1476,7 @@ qla2xxx_eh_bus_reset(struct scsi_cmnd *cmd)
+ 	if (qla2x00_isp_reg_stat(ha)) {
+ 		ql_log(ql_log_info, vha, 0x8040,
+ 		    "PCI/Register disconnect, exiting.\n");
++		qla_pci_set_eeh_busy(vha);
+ 		return FAILED;
+ 	}
+ 
+@@ -1572,7 +1554,7 @@ qla2xxx_eh_host_reset(struct scsi_cmnd *cmd)
+ 	if (qla2x00_isp_reg_stat(ha)) {
+ 		ql_log(ql_log_info, vha, 0x8041,
+ 		    "PCI/Register disconnect, exiting.\n");
+-		schedule_work(&ha->board_disable);
++		qla_pci_set_eeh_busy(vha);
+ 		return SUCCESS;
+ 	}
+ 
+@@ -2866,7 +2848,6 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	ha->max_exchg = FW_MAX_EXCHANGES_CNT;
+ 	atomic_set(&ha->num_pend_mbx_stage1, 0);
+ 	atomic_set(&ha->num_pend_mbx_stage2, 0);
+-	atomic_set(&ha->num_pend_mbx_stage3, 0);
+ 	atomic_set(&ha->zio_threshold, DEFAULT_ZIO_THRESHOLD);
+ 	ha->last_zio_threshold = DEFAULT_ZIO_THRESHOLD;
+ 
+@@ -3141,6 +3122,13 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	host->max_id = ha->max_fibre_devices;
+ 	host->cmd_per_lun = 3;
+ 	host->unique_id = host->host_no;
++
++	if (ql2xenabledif && ql2xenabledif != 2) {
++		ql_log(ql_log_warn, base_vha, 0x302d,
++		       "Invalid value for ql2xenabledif, resetting it to default (2)\n");
++		ql2xenabledif = 2;
++	}
++
+ 	if (IS_T10_PI_CAPABLE(ha) && ql2xenabledif)
+ 		host->max_cmd_len = 32;
+ 	else
+@@ -3373,8 +3361,6 @@ skip_dpc:
+ 			base_vha->flags.difdix_supported = 1;
+ 			ql_dbg(ql_dbg_init, base_vha, 0x00f1,
+ 			    "Registering for DIF/DIX type 1 and 3 protection.\n");
+-			if (ql2xenabledif == 1)
+-				prot = SHOST_DIX_TYPE0_PROTECTION;
+ 			if (ql2xprotmask)
+ 				scsi_host_set_prot(host, ql2xprotmask);
+ 			else
+@@ -6672,6 +6658,9 @@ qla2x00_do_dpc(void *data)
+ 
+ 		schedule();
+ 
++		if (test_and_clear_bit(DO_EEH_RECOVERY, &base_vha->dpc_flags))
++			qla_pci_set_eeh_busy(base_vha);
++
+ 		if (!base_vha->flags.init_done || ha->flags.mbox_busy)
+ 			goto end_loop;
+ 
+@@ -6968,28 +6957,23 @@ intr_on_check:
+ 			mutex_unlock(&ha->mq_lock);
+ 		}
+ 
+-		if (test_and_clear_bit(SET_NVME_ZIO_THRESHOLD_NEEDED,
+-		    &base_vha->dpc_flags)) {
++		if (test_and_clear_bit(SET_ZIO_THRESHOLD_NEEDED,
++				       &base_vha->dpc_flags)) {
++			u16 threshold = ha->nvme_last_rptd_aen + ha->last_zio_threshold;
++
++			if (threshold > ha->orig_fw_xcb_count)
++				threshold = ha->orig_fw_xcb_count;
++
+ 			ql_log(ql_log_info, base_vha, 0xffffff,
+-				"nvme: SET ZIO Activity exchange threshold to %d.\n",
+-						ha->nvme_last_rptd_aen);
+-			if (qla27xx_set_zio_threshold(base_vha,
+-			    ha->nvme_last_rptd_aen)) {
++			       "SET ZIO Activity exchange threshold to %d.\n",
++			       threshold);
++			if (qla27xx_set_zio_threshold(base_vha, threshold)) {
+ 				ql_log(ql_log_info, base_vha, 0xffffff,
+-				    "nvme: Unable to SET ZIO Activity exchange threshold to %d.\n",
+-				    ha->nvme_last_rptd_aen);
++				       "Unable to SET ZIO Activity exchange threshold to %d.\n",
++				       threshold);
+ 			}
+ 		}
+ 
+-		if (test_and_clear_bit(SET_ZIO_THRESHOLD_NEEDED,
+-		    &base_vha->dpc_flags)) {
+-			ql_log(ql_log_info, base_vha, 0xffffff,
+-			    "SET ZIO Activity exchange threshold to %d.\n",
+-			    ha->last_zio_threshold);
+-			qla27xx_set_zio_threshold(base_vha,
+-			    ha->last_zio_threshold);
+-		}
+-
+ 		if (!IS_QLAFX00(ha))
+ 			qla2x00_do_dpc_all_vps(base_vha);
+ 
+@@ -7205,14 +7189,13 @@ qla2x00_timer(struct timer_list *t)
+ 	index = atomic_read(&ha->nvme_active_aen_cnt);
+ 	if (!vha->vp_idx &&
+ 	    (index != ha->nvme_last_rptd_aen) &&
+-	    (index >= DEFAULT_ZIO_THRESHOLD) &&
+ 	    ha->zio_mode == QLA_ZIO_MODE_6 &&
+ 	    !ha->flags.host_shutting_down) {
++		ha->nvme_last_rptd_aen = atomic_read(&ha->nvme_active_aen_cnt);
+ 		ql_log(ql_log_info, vha, 0x3002,
+ 		    "nvme: Sched: Set ZIO exchange threshold to %d.\n",
+ 		    ha->nvme_last_rptd_aen);
+-		ha->nvme_last_rptd_aen = atomic_read(&ha->nvme_active_aen_cnt);
+-		set_bit(SET_NVME_ZIO_THRESHOLD_NEEDED, &vha->dpc_flags);
++		set_bit(SET_ZIO_THRESHOLD_NEEDED, &vha->dpc_flags);
+ 		start_dpc++;
+ 	}
+ 
+@@ -7385,6 +7368,8 @@ static void qla_pci_error_cleanup(scsi_qla_host_t *vha)
+ 	int i;
+ 	unsigned long flags;
+ 
++	ql_dbg(ql_dbg_aer, vha, 0x9000,
++	       "%s\n", __func__);
+ 	ha->chip_reset++;
+ 
+ 	ha->base_qpair->chip_reset = ha->chip_reset;
+@@ -7394,28 +7379,16 @@ static void qla_pci_error_cleanup(scsi_qla_host_t *vha)
+ 			    ha->base_qpair->chip_reset;
+ 	}
+ 
+-	/* purge MBox commands */
+-	if (atomic_read(&ha->num_pend_mbx_stage3)) {
+-		clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
+-		complete(&ha->mbx_intr_comp);
+-	}
+-
+-	i = 0;
+-
+-	while (atomic_read(&ha->num_pend_mbx_stage3) ||
+-	    atomic_read(&ha->num_pend_mbx_stage2) ||
+-	    atomic_read(&ha->num_pend_mbx_stage1)) {
+-		msleep(20);
+-		i++;
+-		if (i > 50)
+-			break;
+-	}
+-
+-	ha->flags.purge_mbox = 0;
++	/*
++	 * purge mailbox might take a while. Slot Reset/chip reset
++	 * will take care of the purge
++	 */
+ 
+ 	mutex_lock(&ha->mq_lock);
++	ha->base_qpair->online = 0;
+ 	list_for_each_entry(qpair, &base_vha->qp_list, qp_list_elem)
+ 		qpair->online = 0;
++	wmb();
+ 	mutex_unlock(&ha->mq_lock);
+ 
+ 	qla2x00_mark_all_devices_lost(vha);
+@@ -7452,14 +7425,17 @@ qla2xxx_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
+ {
+ 	scsi_qla_host_t *vha = pci_get_drvdata(pdev);
+ 	struct qla_hw_data *ha = vha->hw;
++	pci_ers_result_t ret = PCI_ERS_RESULT_NEED_RESET;
+ 
+-	ql_dbg(ql_dbg_aer, vha, 0x9000,
+-	    "PCI error detected, state %x.\n", state);
++	ql_log(ql_log_warn, vha, 0x9000,
++	       "PCI error detected, state %x.\n", state);
++	ha->pci_error_state = QLA_PCI_ERR_DETECTED;
+ 
+ 	if (!atomic_read(&pdev->enable_cnt)) {
+ 		ql_log(ql_log_info, vha, 0xffff,
+ 			"PCI device is disabled,state %x\n", state);
+-		return PCI_ERS_RESULT_NEED_RESET;
++		ret = PCI_ERS_RESULT_NEED_RESET;
++		goto out;
+ 	}
+ 
+ 	switch (state) {
+@@ -7469,11 +7445,12 @@ qla2xxx_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
+ 			set_bit(QPAIR_ONLINE_CHECK_NEEDED, &vha->dpc_flags);
+ 			qla2xxx_wake_dpc(vha);
+ 		}
+-		return PCI_ERS_RESULT_CAN_RECOVER;
++		ret = PCI_ERS_RESULT_CAN_RECOVER;
++		break;
+ 	case pci_channel_io_frozen:
+-		ha->flags.eeh_busy = 1;
+-		qla_pci_error_cleanup(vha);
+-		return PCI_ERS_RESULT_NEED_RESET;
++		qla_pci_set_eeh_busy(vha);
++		ret = PCI_ERS_RESULT_NEED_RESET;
++		break;
+ 	case pci_channel_io_perm_failure:
+ 		ha->flags.pci_channel_io_perm_failure = 1;
+ 		qla2x00_abort_all_cmds(vha, DID_NO_CONNECT << 16);
+@@ -7481,9 +7458,12 @@ qla2xxx_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
+ 			set_bit(QPAIR_ONLINE_CHECK_NEEDED, &vha->dpc_flags);
+ 			qla2xxx_wake_dpc(vha);
+ 		}
+-		return PCI_ERS_RESULT_DISCONNECT;
++		ret = PCI_ERS_RESULT_DISCONNECT;
+ 	}
+-	return PCI_ERS_RESULT_NEED_RESET;
++out:
++	ql_dbg(ql_dbg_aer, vha, 0x600d,
++	       "PCI error detected returning [%x].\n", ret);
++	return ret;
+ }
+ 
+ static pci_ers_result_t
+@@ -7497,6 +7477,10 @@ qla2xxx_pci_mmio_enabled(struct pci_dev *pdev)
+ 	struct device_reg_2xxx __iomem *reg = &ha->iobase->isp;
+ 	struct device_reg_24xx __iomem *reg24 = &ha->iobase->isp24;
+ 
++	ql_log(ql_log_warn, base_vha, 0x9000,
++	       "mmio enabled\n");
++
++	ha->pci_error_state = QLA_PCI_MMIO_ENABLED;
+ 	if (IS_QLA82XX(ha))
+ 		return PCI_ERS_RESULT_RECOVERED;
+ 
+@@ -7520,10 +7504,11 @@ qla2xxx_pci_mmio_enabled(struct pci_dev *pdev)
+ 		ql_log(ql_log_info, base_vha, 0x9003,
+ 		    "RISC paused -- mmio_enabled, Dumping firmware.\n");
+ 		qla2xxx_dump_fw(base_vha);
+-
+-		return PCI_ERS_RESULT_NEED_RESET;
+-	} else
+-		return PCI_ERS_RESULT_RECOVERED;
++	}
++	/* set PCI_ERS_RESULT_NEED_RESET to trigger call to qla2xxx_pci_slot_reset */
++	ql_dbg(ql_dbg_aer, base_vha, 0x600d,
++	       "mmio enabled returning.\n");
++	return PCI_ERS_RESULT_NEED_RESET;
+ }
+ 
+ static pci_ers_result_t
+@@ -7535,9 +7520,10 @@ qla2xxx_pci_slot_reset(struct pci_dev *pdev)
+ 	int rc;
+ 	struct qla_qpair *qpair = NULL;
+ 
+-	ql_dbg(ql_dbg_aer, base_vha, 0x9004,
+-	    "Slot Reset.\n");
++	ql_log(ql_log_warn, base_vha, 0x9004,
++	       "Slot Reset.\n");
+ 
++	ha->pci_error_state = QLA_PCI_SLOT_RESET;
+ 	/* Workaround: qla2xxx driver which access hardware earlier
+ 	 * needs error state to be pci_channel_io_online.
+ 	 * Otherwise mailbox command timesout.
+@@ -7571,16 +7557,24 @@ qla2xxx_pci_slot_reset(struct pci_dev *pdev)
+ 		qpair->online = 1;
+ 	mutex_unlock(&ha->mq_lock);
+ 
++	ha->flags.eeh_busy = 0;
+ 	base_vha->flags.online = 1;
+ 	set_bit(ABORT_ISP_ACTIVE, &base_vha->dpc_flags);
+-	if (ha->isp_ops->abort_isp(base_vha) == QLA_SUCCESS)
+-		ret =  PCI_ERS_RESULT_RECOVERED;
++	ha->isp_ops->abort_isp(base_vha);
+ 	clear_bit(ABORT_ISP_ACTIVE, &base_vha->dpc_flags);
+ 
++	if (qla2x00_isp_reg_stat(ha)) {
++		ha->flags.eeh_busy = 1;
++		qla_pci_error_cleanup(base_vha);
++		ql_log(ql_log_warn, base_vha, 0x9005,
++		       "Device unable to recover from PCI error.\n");
++	} else {
++		ret =  PCI_ERS_RESULT_RECOVERED;
++	}
+ 
+ exit_slot_reset:
+ 	ql_dbg(ql_dbg_aer, base_vha, 0x900e,
+-	    "slot_reset return %x.\n", ret);
++	    "Slot Reset returning %x.\n", ret);
+ 
+ 	return ret;
+ }
+@@ -7592,16 +7586,55 @@ qla2xxx_pci_resume(struct pci_dev *pdev)
+ 	struct qla_hw_data *ha = base_vha->hw;
+ 	int ret;
+ 
+-	ql_dbg(ql_dbg_aer, base_vha, 0x900f,
+-	    "pci_resume.\n");
++	ql_log(ql_log_warn, base_vha, 0x900f,
++	       "Pci Resume.\n");
+ 
+-	ha->flags.eeh_busy = 0;
+ 
+ 	ret = qla2x00_wait_for_hba_online(base_vha);
+ 	if (ret != QLA_SUCCESS) {
+ 		ql_log(ql_log_fatal, base_vha, 0x9002,
+ 		    "The device failed to resume I/O from slot/link_reset.\n");
+ 	}
++	ha->pci_error_state = QLA_PCI_RESUME;
++	ql_dbg(ql_dbg_aer, base_vha, 0x600d,
++	       "Pci Resume returning.\n");
++}
++
++void qla_pci_set_eeh_busy(struct scsi_qla_host *vha)
++{
++	struct qla_hw_data *ha = vha->hw;
++	struct scsi_qla_host *base_vha = pci_get_drvdata(ha->pdev);
++	bool do_cleanup = false;
++	unsigned long flags;
++
++	if (ha->flags.eeh_busy)
++		return;
++
++	spin_lock_irqsave(&base_vha->work_lock, flags);
++	if (!ha->flags.eeh_busy) {
++		ha->flags.eeh_busy = 1;
++		do_cleanup = true;
++	}
++	spin_unlock_irqrestore(&base_vha->work_lock, flags);
++
++	if (do_cleanup)
++		qla_pci_error_cleanup(base_vha);
++}
++
++/*
++ * this routine will schedule a task to pause IO from interrupt context
++ * if caller sees a PCIE error event (register read = 0xf's)
++ */
++void qla_schedule_eeh_work(struct scsi_qla_host *vha)
++{
++	struct qla_hw_data *ha = vha->hw;
++	struct scsi_qla_host *base_vha = pci_get_drvdata(ha->pdev);
++
++	if (ha->flags.eeh_busy)
++		return;
++
++	set_bit(DO_EEH_RECOVERY, &base_vha->dpc_flags);
++	qla2xxx_wake_dpc(base_vha);
+ }
+ 
+ static void
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index ecb30c2738b8b..fdb424501da5b 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -1044,10 +1044,6 @@ void qlt_free_session_done(struct work_struct *work)
+ 			(struct imm_ntfy_from_isp *)sess->iocb, SRB_NACK_LOGO);
+ 	}
+ 
+-	spin_lock_irqsave(&vha->work_lock, flags);
+-	sess->flags &= ~FCF_ASYNC_SENT;
+-	spin_unlock_irqrestore(&vha->work_lock, flags);
+-
+ 	spin_lock_irqsave(&ha->tgt.sess_lock, flags);
+ 	if (sess->se_sess) {
+ 		sess->se_sess = NULL;
+@@ -1057,7 +1053,6 @@ void qlt_free_session_done(struct work_struct *work)
+ 
+ 	qla2x00_set_fcport_disc_state(sess, DSC_DELETED);
+ 	sess->fw_login_state = DSC_LS_PORT_UNAVAIL;
+-	sess->deleted = QLA_SESS_DELETED;
+ 
+ 	if (sess->login_succ && !IS_SW_RESV_ADDR(sess->d_id)) {
+ 		vha->fcport_count--;
+@@ -1109,10 +1104,15 @@ void qlt_free_session_done(struct work_struct *work)
+ 
+ 	sess->explicit_logout = 0;
+ 	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
+-	sess->free_pending = 0;
+ 
+ 	qla2x00_dfs_remove_rport(vha, sess);
+ 
++	spin_lock_irqsave(&vha->work_lock, flags);
++	sess->flags &= ~FCF_ASYNC_SENT;
++	sess->deleted = QLA_SESS_DELETED;
++	sess->free_pending = 0;
++	spin_unlock_irqrestore(&vha->work_lock, flags);
++
+ 	ql_dbg(ql_dbg_disc, vha, 0xf001,
+ 	    "Unregistration of sess %p %8phC finished fcp_cnt %d\n",
+ 		sess, sess->port_name, vha->fcport_count);
+@@ -1161,12 +1161,12 @@ void qlt_unreg_sess(struct fc_port *sess)
+ 	 * management from being sent.
+ 	 */
+ 	sess->flags |= FCF_ASYNC_SENT;
++	sess->deleted = QLA_SESS_DELETION_IN_PROGRESS;
+ 	spin_unlock_irqrestore(&sess->vha->work_lock, flags);
+ 
+ 	if (sess->se_sess)
+ 		vha->hw->tgt.tgt_ops->clear_nacl_from_fcport_map(sess);
+ 
+-	sess->deleted = QLA_SESS_DELETION_IN_PROGRESS;
+ 	qla2x00_set_fcport_disc_state(sess, DSC_DELETE_PEND);
+ 	sess->last_rscn_gen = sess->rscn_gen;
+ 	sess->last_login_gen = sess->login_gen;
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index 8d82d2a83059d..05ae9b1157096 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -973,6 +973,11 @@ static int qla4xxx_set_chap_entry(struct Scsi_Host *shost, void *data, int len)
+ 	memset(&chap_rec, 0, sizeof(chap_rec));
+ 
+ 	nla_for_each_attr(attr, data, len, rem) {
++		if (nla_len(attr) < sizeof(*param_info)) {
++			rc = -EINVAL;
++			goto exit_set_chap;
++		}
++
+ 		param_info = nla_data(attr);
+ 
+ 		switch (param_info->param) {
+@@ -2755,6 +2760,11 @@ qla4xxx_iface_set_param(struct Scsi_Host *shost, void *data, uint32_t len)
+ 	}
+ 
+ 	nla_for_each_attr(attr, data, len, rem) {
++		if (nla_len(attr) < sizeof(*iface_param)) {
++			rval = -EINVAL;
++			goto exit_init_fw_cb;
++		}
++
+ 		iface_param = nla_data(attr);
+ 
+ 		if (iface_param->param_type == ISCSI_NET_PARAM) {
+@@ -8119,6 +8129,11 @@ qla4xxx_sysfs_ddb_set_param(struct iscsi_bus_flash_session *fnode_sess,
+ 
+ 	memset((void *)&chap_tbl, 0, sizeof(chap_tbl));
+ 	nla_for_each_attr(attr, data, len, rem) {
++		if (nla_len(attr) < sizeof(*fnode_param)) {
++			rc = -EINVAL;
++			goto exit_set_param;
++		}
++
+ 		fnode_param = nla_data(attr);
+ 
+ 		switch (fnode_param->param) {
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 092bd6a3d64a1..074cbd64aa253 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2991,14 +2991,15 @@ iscsi_if_destroy_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev
+ }
+ 
+ static int
+-iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
++iscsi_if_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev, u32 rlen)
+ {
+ 	char *data = (char*)ev + sizeof(*ev);
+ 	struct iscsi_cls_conn *conn;
+ 	struct iscsi_cls_session *session;
+ 	int err = 0, value = 0, state;
+ 
+-	if (ev->u.set_param.len > PAGE_SIZE)
++	if (ev->u.set_param.len > rlen ||
++	    ev->u.set_param.len > PAGE_SIZE)
+ 		return -EINVAL;
+ 
+ 	session = iscsi_session_lookup(ev->u.set_param.sid);
+@@ -3006,6 +3007,10 @@ iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ 	if (!conn || !session)
+ 		return -EINVAL;
+ 
++	/* data will be regarded as NULL-ended string, do length check */
++	if (strlen(data) > ev->u.set_param.len)
++		return -EINVAL;
++
+ 	switch (ev->u.set_param.param) {
+ 	case ISCSI_PARAM_SESS_RECOVERY_TMO:
+ 		sscanf(data, "%d", &value);
+@@ -3095,7 +3100,7 @@ put_ep:
+ 
+ static int
+ iscsi_if_transport_ep(struct iscsi_transport *transport,
+-		      struct iscsi_uevent *ev, int msg_type)
++		      struct iscsi_uevent *ev, int msg_type, u32 rlen)
+ {
+ 	struct iscsi_endpoint *ep;
+ 	int rc = 0;
+@@ -3103,7 +3108,10 @@ iscsi_if_transport_ep(struct iscsi_transport *transport,
+ 	switch (msg_type) {
+ 	case ISCSI_UEVENT_TRANSPORT_EP_CONNECT_THROUGH_HOST:
+ 	case ISCSI_UEVENT_TRANSPORT_EP_CONNECT:
+-		rc = iscsi_if_ep_connect(transport, ev, msg_type);
++		if (rlen < sizeof(struct sockaddr))
++			rc = -EINVAL;
++		else
++			rc = iscsi_if_ep_connect(transport, ev, msg_type);
+ 		break;
+ 	case ISCSI_UEVENT_TRANSPORT_EP_POLL:
+ 		if (!transport->ep_poll)
+@@ -3127,12 +3135,15 @@ iscsi_if_transport_ep(struct iscsi_transport *transport,
+ 
+ static int
+ iscsi_tgt_dscvr(struct iscsi_transport *transport,
+-		struct iscsi_uevent *ev)
++		struct iscsi_uevent *ev, u32 rlen)
+ {
+ 	struct Scsi_Host *shost;
+ 	struct sockaddr *dst_addr;
+ 	int err;
+ 
++	if (rlen < sizeof(*dst_addr))
++		return -EINVAL;
++
+ 	if (!transport->tgt_dscvr)
+ 		return -EINVAL;
+ 
+@@ -3153,7 +3164,7 @@ iscsi_tgt_dscvr(struct iscsi_transport *transport,
+ 
+ static int
+ iscsi_set_host_param(struct iscsi_transport *transport,
+-		     struct iscsi_uevent *ev)
++		     struct iscsi_uevent *ev, u32 rlen)
+ {
+ 	char *data = (char*)ev + sizeof(*ev);
+ 	struct Scsi_Host *shost;
+@@ -3162,7 +3173,8 @@ iscsi_set_host_param(struct iscsi_transport *transport,
+ 	if (!transport->set_host_param)
+ 		return -ENOSYS;
+ 
+-	if (ev->u.set_host_param.len > PAGE_SIZE)
++	if (ev->u.set_host_param.len > rlen ||
++	    ev->u.set_host_param.len > PAGE_SIZE)
+ 		return -EINVAL;
+ 
+ 	shost = scsi_host_lookup(ev->u.set_host_param.host_no);
+@@ -3172,6 +3184,10 @@ iscsi_set_host_param(struct iscsi_transport *transport,
+ 		return -ENODEV;
+ 	}
+ 
++	/* see similar check in iscsi_if_set_param() */
++	if (strlen(data) > ev->u.set_host_param.len)
++		return -EINVAL;
++
+ 	err = transport->set_host_param(shost, ev->u.set_host_param.param,
+ 					data, ev->u.set_host_param.len);
+ 	scsi_host_put(shost);
+@@ -3179,12 +3195,15 @@ iscsi_set_host_param(struct iscsi_transport *transport,
+ }
+ 
+ static int
+-iscsi_set_path(struct iscsi_transport *transport, struct iscsi_uevent *ev)
++iscsi_set_path(struct iscsi_transport *transport, struct iscsi_uevent *ev, u32 rlen)
+ {
+ 	struct Scsi_Host *shost;
+ 	struct iscsi_path *params;
+ 	int err;
+ 
++	if (rlen < sizeof(*params))
++		return -EINVAL;
++
+ 	if (!transport->set_path)
+ 		return -ENOSYS;
+ 
+@@ -3244,12 +3263,15 @@ iscsi_set_iface_params(struct iscsi_transport *transport,
+ }
+ 
+ static int
+-iscsi_send_ping(struct iscsi_transport *transport, struct iscsi_uevent *ev)
++iscsi_send_ping(struct iscsi_transport *transport, struct iscsi_uevent *ev, u32 rlen)
+ {
+ 	struct Scsi_Host *shost;
+ 	struct sockaddr *dst_addr;
+ 	int err;
+ 
++	if (rlen < sizeof(*dst_addr))
++		return -EINVAL;
++
+ 	if (!transport->send_ping)
+ 		return -ENOSYS;
+ 
+@@ -3747,13 +3769,12 @@ exit_host_stats:
+ }
+ 
+ static int iscsi_if_transport_conn(struct iscsi_transport *transport,
+-				   struct nlmsghdr *nlh)
++				   struct nlmsghdr *nlh, u32 pdu_len)
+ {
+ 	struct iscsi_uevent *ev = nlmsg_data(nlh);
+ 	struct iscsi_cls_session *session;
+ 	struct iscsi_cls_conn *conn = NULL;
+ 	struct iscsi_endpoint *ep;
+-	uint32_t pdu_len;
+ 	int err = 0;
+ 
+ 	switch (nlh->nlmsg_type) {
+@@ -3833,8 +3854,6 @@ static int iscsi_if_transport_conn(struct iscsi_transport *transport,
+ 
+ 		break;
+ 	case ISCSI_UEVENT_SEND_PDU:
+-		pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev);
+-
+ 		if ((ev->u.send_pdu.hdr_size > pdu_len) ||
+ 		    (ev->u.send_pdu.data_size > (pdu_len - ev->u.send_pdu.hdr_size))) {
+ 			err = -EINVAL;
+@@ -3864,6 +3883,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 	struct iscsi_internal *priv;
+ 	struct iscsi_cls_session *session;
+ 	struct iscsi_endpoint *ep = NULL;
++	u32 rlen;
+ 
+ 	if (!netlink_capable(skb, CAP_SYS_ADMIN))
+ 		return -EPERM;
+@@ -3883,6 +3903,13 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 
+ 	portid = NETLINK_CB(skb).portid;
+ 
++	/*
++	 * Even though the remaining payload may not be regarded as nlattr,
++	 * (like address or something else), calculate the remaining length
++	 * here to ease following length checks.
++	 */
++	rlen = nlmsg_attrlen(nlh, sizeof(*ev));
++
+ 	switch (nlh->nlmsg_type) {
+ 	case ISCSI_UEVENT_CREATE_SESSION:
+ 		err = iscsi_if_create_session(priv, ep, ev,
+@@ -3940,7 +3967,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 			err = -EINVAL;
+ 		break;
+ 	case ISCSI_UEVENT_SET_PARAM:
+-		err = iscsi_set_param(transport, ev);
++		err = iscsi_if_set_param(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_CREATE_CONN:
+ 	case ISCSI_UEVENT_DESTROY_CONN:
+@@ -3948,7 +3975,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 	case ISCSI_UEVENT_START_CONN:
+ 	case ISCSI_UEVENT_BIND_CONN:
+ 	case ISCSI_UEVENT_SEND_PDU:
+-		err = iscsi_if_transport_conn(transport, nlh);
++		err = iscsi_if_transport_conn(transport, nlh, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_GET_STATS:
+ 		err = iscsi_if_get_stats(transport, nlh);
+@@ -3957,23 +3984,22 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 	case ISCSI_UEVENT_TRANSPORT_EP_POLL:
+ 	case ISCSI_UEVENT_TRANSPORT_EP_DISCONNECT:
+ 	case ISCSI_UEVENT_TRANSPORT_EP_CONNECT_THROUGH_HOST:
+-		err = iscsi_if_transport_ep(transport, ev, nlh->nlmsg_type);
++		err = iscsi_if_transport_ep(transport, ev, nlh->nlmsg_type, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_TGT_DSCVR:
+-		err = iscsi_tgt_dscvr(transport, ev);
++		err = iscsi_tgt_dscvr(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_SET_HOST_PARAM:
+-		err = iscsi_set_host_param(transport, ev);
++		err = iscsi_set_host_param(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_PATH_UPDATE:
+-		err = iscsi_set_path(transport, ev);
++		err = iscsi_set_path(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_SET_IFACE_PARAMS:
+-		err = iscsi_set_iface_params(transport, ev,
+-					     nlmsg_attrlen(nlh, sizeof(*ev)));
++		err = iscsi_set_iface_params(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_PING:
+-		err = iscsi_send_ping(transport, ev);
++		err = iscsi_send_ping(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_GET_CHAP:
+ 		err = iscsi_get_chap(transport, nlh);
+@@ -3982,13 +4008,10 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 		err = iscsi_delete_chap(transport, ev);
+ 		break;
+ 	case ISCSI_UEVENT_SET_FLASHNODE_PARAMS:
+-		err = iscsi_set_flashnode_param(transport, ev,
+-						nlmsg_attrlen(nlh,
+-							      sizeof(*ev)));
++		err = iscsi_set_flashnode_param(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_NEW_FLASHNODE:
+-		err = iscsi_new_flashnode(transport, ev,
+-					  nlmsg_attrlen(nlh, sizeof(*ev)));
++		err = iscsi_new_flashnode(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_DEL_FLASHNODE:
+ 		err = iscsi_del_flashnode(transport, ev);
+@@ -4003,8 +4026,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 		err = iscsi_logout_flashnode_sid(transport, ev);
+ 		break;
+ 	case ISCSI_UEVENT_SET_CHAP:
+-		err = iscsi_set_chap(transport, ev,
+-				     nlmsg_attrlen(nlh, sizeof(*ev)));
++		err = iscsi_set_chap(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_GET_HOST_STATS:
+ 		err = iscsi_get_host_stats(transport, nlh);
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 45d8549623442..37ad5f5256474 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1537,6 +1537,8 @@ static int storvsc_device_configure(struct scsi_device *sdevice)
+ {
+ 	blk_queue_rq_timeout(sdevice->request_queue, (storvsc_timeout * HZ));
+ 
++	/* storvsc devices don't support MAINTENANCE_IN SCSI cmd */
++	sdevice->no_report_opcodes = 1;
+ 	sdevice->no_write_same = 1;
+ 
+ 	/*
+diff --git a/drivers/soc/qcom/ocmem.c b/drivers/soc/qcom/ocmem.c
+index 1dfdd0b9ba24d..8b80c8e94c77a 100644
+--- a/drivers/soc/qcom/ocmem.c
++++ b/drivers/soc/qcom/ocmem.c
+@@ -76,8 +76,12 @@ struct ocmem {
+ #define OCMEM_REG_GFX_MPU_START			0x00001004
+ #define OCMEM_REG_GFX_MPU_END			0x00001008
+ 
+-#define OCMEM_HW_PROFILE_NUM_PORTS(val)		FIELD_PREP(0x0000000f, (val))
+-#define OCMEM_HW_PROFILE_NUM_MACROS(val)	FIELD_PREP(0x00003f00, (val))
++#define OCMEM_HW_VERSION_MAJOR(val)		FIELD_GET(GENMASK(31, 28), val)
++#define OCMEM_HW_VERSION_MINOR(val)		FIELD_GET(GENMASK(27, 16), val)
++#define OCMEM_HW_VERSION_STEP(val)		FIELD_GET(GENMASK(15, 0), val)
++
++#define OCMEM_HW_PROFILE_NUM_PORTS(val)		FIELD_GET(0x0000000f, (val))
++#define OCMEM_HW_PROFILE_NUM_MACROS(val)	FIELD_GET(0x00003f00, (val))
+ 
+ #define OCMEM_HW_PROFILE_LAST_REGN_HALFSIZE	0x00010000
+ #define OCMEM_HW_PROFILE_INTERLEAVING		0x00020000
+@@ -357,6 +361,12 @@ static int ocmem_dev_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	reg = ocmem_read(ocmem, OCMEM_REG_HW_VERSION);
++	dev_dbg(dev, "OCMEM hardware version: %lu.%lu.%lu\n",
++		OCMEM_HW_VERSION_MAJOR(reg),
++		OCMEM_HW_VERSION_MINOR(reg),
++		OCMEM_HW_VERSION_STEP(reg));
++
+ 	reg = ocmem_read(ocmem, OCMEM_REG_HW_PROFILE);
+ 	ocmem->num_ports = OCMEM_HW_PROFILE_NUM_PORTS(reg);
+ 	ocmem->num_macros = OCMEM_HW_PROFILE_NUM_MACROS(reg);
+diff --git a/drivers/soc/qcom/qmi_encdec.c b/drivers/soc/qcom/qmi_encdec.c
+index 3aaab71d1b2c1..dbc8b4c931903 100644
+--- a/drivers/soc/qcom/qmi_encdec.c
++++ b/drivers/soc/qcom/qmi_encdec.c
+@@ -534,8 +534,8 @@ static int qmi_decode_string_elem(struct qmi_elem_info *ei_array,
+ 		decoded_bytes += rc;
+ 	}
+ 
+-	if (string_len > temp_ei->elem_len) {
+-		pr_err("%s: String len %d > Max Len %d\n",
++	if (string_len >= temp_ei->elem_len) {
++		pr_err("%s: String len %d >= Max Len %d\n",
+ 		       __func__, string_len, temp_ei->elem_len);
+ 		return -ETOOSMALL;
+ 	} else if (string_len > tlv_len) {
+diff --git a/drivers/spi/spi-tegra20-sflash.c b/drivers/spi/spi-tegra20-sflash.c
+index cfb7de7379376..62e50830b7f95 100644
+--- a/drivers/spi/spi-tegra20-sflash.c
++++ b/drivers/spi/spi-tegra20-sflash.c
+@@ -456,7 +456,11 @@ static int tegra_sflash_probe(struct platform_device *pdev)
+ 		goto exit_free_master;
+ 	}
+ 
+-	tsd->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		goto exit_free_master;
++	tsd->irq = ret;
++
+ 	ret = request_irq(tsd->irq, tegra_sflash_isr, 0,
+ 			dev_name(&pdev->dev), tsd);
+ 	if (ret < 0) {
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index f6a29a7078625..86483f1c070b9 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -101,7 +101,7 @@ static const struct rkvdec_coded_fmt_desc rkvdec_coded_fmts[] = {
+ 			.max_width = 4096,
+ 			.step_width = 16,
+ 			.min_height = 48,
+-			.max_height = 2304,
++			.max_height = 2560,
+ 			.step_height = 16,
+ 		},
+ 		.ctrls = &rkvdec_h264_ctrls,
+diff --git a/drivers/staging/rtl8712/os_intfs.c b/drivers/staging/rtl8712/os_intfs.c
+index daa3180dfde30..5dfc9e3aa83dd 100644
+--- a/drivers/staging/rtl8712/os_intfs.c
++++ b/drivers/staging/rtl8712/os_intfs.c
+@@ -323,6 +323,7 @@ int r8712_init_drv_sw(struct _adapter *padapter)
+ 	mp871xinit(padapter);
+ 	init_default_value(padapter);
+ 	r8712_InitSwLeds(padapter);
++	mutex_init(&padapter->mutex_start);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index 68d66c3ce2c8f..67f89ea1c5887 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -570,7 +570,6 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf,
+ 	if (rtl871x_load_fw(padapter))
+ 		goto deinit_drv_sw;
+ 	spin_lock_init(&padapter->lock_rx_ff0_filter);
+-	mutex_init(&padapter->mutex_start);
+ 	return 0;
+ 
+ deinit_drv_sw:
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 65a0c4e2bb29c..7caba23365c1c 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -125,6 +125,7 @@ struct qcom_geni_serial_port {
+ 	u32 tx_fifo_width;
+ 	u32 rx_fifo_depth;
+ 	bool setup;
++	unsigned long clk_rate;
+ 	int (*handle_rx)(struct uart_port *uport, u32 bytes, bool drop);
+ 	unsigned int baud;
+ 	void *rx_fifo;
+@@ -1022,6 +1023,7 @@ static void qcom_geni_serial_set_termios(struct uart_port *uport,
+ 		goto out_restart_rx;
+ 
+ 	uport->uartclk = clk_rate;
++	port->clk_rate = clk_rate;
+ 	dev_pm_opp_set_rate(uport->dev, clk_rate);
+ 	ser_clk_cfg = SER_CLK_EN;
+ 	ser_clk_cfg |= clk_div << CLK_DIV_SHFT;
+@@ -1305,10 +1307,13 @@ static void qcom_geni_serial_pm(struct uart_port *uport,
+ 
+ 	if (new_state == UART_PM_STATE_ON && old_state == UART_PM_STATE_OFF) {
+ 		geni_icc_enable(&port->se);
++		if (port->clk_rate)
++			dev_pm_opp_set_rate(uport->dev, port->clk_rate);
+ 		geni_se_resources_on(&port->se);
+ 	} else if (new_state == UART_PM_STATE_OFF &&
+ 			old_state == UART_PM_STATE_ON) {
+ 		geni_se_resources_off(&port->se);
++		dev_pm_opp_set_rate(uport->dev, 0);
+ 		geni_icc_disable(&port->se);
+ 	}
+ }
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index 7ece8d1a23cb3..c681ed589303c 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -1170,9 +1170,18 @@ static int sc16is7xx_gpio_direction_output(struct gpio_chip *chip,
+ 		state |= BIT(offset);
+ 	else
+ 		state &= ~BIT(offset);
+-	sc16is7xx_port_write(port, SC16IS7XX_IOSTATE_REG, state);
++
++	/*
++	 * If we write IOSTATE first, and then IODIR, the output value is not
++	 * transferred to the corresponding I/O pin.
++	 * The datasheet states that each register bit will be transferred to
++	 * the corresponding I/O pin programmed as output when writing to
++	 * IOSTATE. Therefore, configure direction first with IODIR, and then
++	 * set value after with IOSTATE.
++	 */
+ 	sc16is7xx_port_update(port, SC16IS7XX_IODIR_REG, BIT(offset),
+ 			      BIT(offset));
++	sc16is7xx_port_write(port, SC16IS7XX_IOSTATE_REG, state);
+ 
+ 	return 0;
+ }
+@@ -1256,6 +1265,12 @@ static int sc16is7xx_probe(struct device *dev,
+ 		s->p[i].port.fifosize	= SC16IS7XX_FIFO_SIZE;
+ 		s->p[i].port.flags	= UPF_FIXED_TYPE | UPF_LOW_LATENCY;
+ 		s->p[i].port.iobase	= i;
++		/*
++		 * Use all ones as membase to make sure uart_configure_port() in
++		 * serial_core.c does not abort for SPI/I2C devices where the
++		 * membase address is not applicable.
++		 */
++		s->p[i].port.membase	= (void __iomem *)~0;
+ 		s->p[i].port.iotype	= UPIO_PORT;
+ 		s->p[i].port.uartclk	= freq;
+ 		s->p[i].port.rs485_config = sc16is7xx_config_rs485;
+diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
+index 62377c831894d..fac4d90047b03 100644
+--- a/drivers/tty/serial/serial-tegra.c
++++ b/drivers/tty/serial/serial-tegra.c
+@@ -994,7 +994,11 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup)
+ 	tup->ier_shadow = 0;
+ 	tup->current_baud = 0;
+ 
+-	clk_prepare_enable(tup->uart_clk);
++	ret = clk_prepare_enable(tup->uart_clk);
++	if (ret) {
++		dev_err(tup->uport.dev, "could not enable clk\n");
++		return ret;
++	}
+ 
+ 	/* Reset the UART controller to clear all previous status.*/
+ 	reset_control_assert(tup->rst);
+diff --git a/drivers/tty/serial/sprd_serial.c b/drivers/tty/serial/sprd_serial.c
+index 9a7ae6384edfa..a1952e4f1fcbb 100644
+--- a/drivers/tty/serial/sprd_serial.c
++++ b/drivers/tty/serial/sprd_serial.c
+@@ -367,7 +367,7 @@ static void sprd_rx_free_buf(struct sprd_uart_port *sp)
+ 	if (sp->rx_dma.virt)
+ 		dma_free_coherent(sp->port.dev, SPRD_UART_RX_SIZE,
+ 				  sp->rx_dma.virt, sp->rx_dma.phys_addr);
+-
++	sp->rx_dma.virt = NULL;
+ }
+ 
+ static int sprd_rx_dma_config(struct uart_port *port, u32 burst)
+@@ -1133,7 +1133,7 @@ static bool sprd_uart_is_console(struct uart_port *uport)
+ static int sprd_clk_init(struct uart_port *uport)
+ {
+ 	struct clk *clk_uart, *clk_parent;
+-	struct sprd_uart_port *u = sprd_port[uport->line];
++	struct sprd_uart_port *u = container_of(uport, struct sprd_uart_port, port);
+ 
+ 	clk_uart = devm_clk_get(uport->dev, "uart");
+ 	if (IS_ERR(clk_uart)) {
+@@ -1176,22 +1176,22 @@ static int sprd_probe(struct platform_device *pdev)
+ {
+ 	struct resource *res;
+ 	struct uart_port *up;
++	struct sprd_uart_port *sport;
+ 	int irq;
+ 	int index;
+ 	int ret;
+ 
+ 	index = of_alias_get_id(pdev->dev.of_node, "serial");
+-	if (index < 0 || index >= ARRAY_SIZE(sprd_port)) {
++	if (index < 0 || index >= UART_NR_MAX) {
+ 		dev_err(&pdev->dev, "got a wrong serial alias id %d\n", index);
+ 		return -EINVAL;
+ 	}
+ 
+-	sprd_port[index] = devm_kzalloc(&pdev->dev, sizeof(*sprd_port[index]),
+-					GFP_KERNEL);
+-	if (!sprd_port[index])
++	sport = devm_kzalloc(&pdev->dev, sizeof(*sport), GFP_KERNEL);
++	if (!sport)
+ 		return -ENOMEM;
+ 
+-	up = &sprd_port[index]->port;
++	up = &sport->port;
+ 	up->dev = &pdev->dev;
+ 	up->line = index;
+ 	up->type = PORT_SPRD;
+@@ -1222,7 +1222,7 @@ static int sprd_probe(struct platform_device *pdev)
+ 	 * Allocate one dma buffer to prepare for receive transfer, in case
+ 	 * memory allocation failure at runtime.
+ 	 */
+-	ret = sprd_rx_alloc_buf(sprd_port[index]);
++	ret = sprd_rx_alloc_buf(sport);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1230,17 +1230,27 @@ static int sprd_probe(struct platform_device *pdev)
+ 		ret = uart_register_driver(&sprd_uart_driver);
+ 		if (ret < 0) {
+ 			pr_err("Failed to register SPRD-UART driver\n");
+-			return ret;
++			goto free_rx_buf;
+ 		}
+ 	}
++
+ 	sprd_ports_num++;
++	sprd_port[index] = sport;
+ 
+ 	ret = uart_add_one_port(&sprd_uart_driver, up);
+ 	if (ret)
+-		sprd_remove(pdev);
++		goto clean_port;
+ 
+ 	platform_set_drvdata(pdev, up);
+ 
++	return 0;
++
++clean_port:
++	sprd_port[index] = NULL;
++	if (--sprd_ports_num == 0)
++		uart_unregister_driver(&sprd_uart_driver);
++free_rx_buf:
++	sprd_rx_free_buf(sport);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index 4d47fe89864d9..a54c3cff6c28e 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -175,10 +175,12 @@ static struct imx_usbmisc_data *usbmisc_get_init_data(struct device *dev)
+ 	if (of_usb_get_phy_mode(np) == USBPHY_INTERFACE_MODE_ULPI)
+ 		data->ulpi = 1;
+ 
+-	of_property_read_u32(np, "samsung,picophy-pre-emp-curr-control",
+-			&data->emp_curr_control);
+-	of_property_read_u32(np, "samsung,picophy-dc-vol-level-adjust",
+-			&data->dc_vol_level_adjust);
++	if (of_property_read_u32(np, "samsung,picophy-pre-emp-curr-control",
++			&data->emp_curr_control))
++		data->emp_curr_control = -1;
++	if (of_property_read_u32(np, "samsung,picophy-dc-vol-level-adjust",
++			&data->dc_vol_level_adjust))
++		data->dc_vol_level_adjust = -1;
+ 
+ 	return data;
+ }
+diff --git a/drivers/usb/chipidea/usbmisc_imx.c b/drivers/usb/chipidea/usbmisc_imx.c
+index 9b1d5c11dc340..12fc8c83081c5 100644
+--- a/drivers/usb/chipidea/usbmisc_imx.c
++++ b/drivers/usb/chipidea/usbmisc_imx.c
+@@ -657,13 +657,15 @@ static int usbmisc_imx7d_init(struct imx_usbmisc_data *data)
+ 			usbmisc->base + MX7D_USBNC_USB_CTRL2);
+ 		/* PHY tuning for signal quality */
+ 		reg = readl(usbmisc->base + MX7D_USB_OTG_PHY_CFG1);
+-		if (data->emp_curr_control && data->emp_curr_control <=
++		if (data->emp_curr_control >= 0 &&
++			data->emp_curr_control <=
+ 			(TXPREEMPAMPTUNE0_MASK >> TXPREEMPAMPTUNE0_BIT)) {
+ 			reg &= ~TXPREEMPAMPTUNE0_MASK;
+ 			reg |= (data->emp_curr_control << TXPREEMPAMPTUNE0_BIT);
+ 		}
+ 
+-		if (data->dc_vol_level_adjust && data->dc_vol_level_adjust <=
++		if (data->dc_vol_level_adjust >= 0 &&
++			data->dc_vol_level_adjust <=
+ 			(TXVREFTUNE0_MASK >> TXVREFTUNE0_BIT)) {
+ 			reg &= ~TXVREFTUNE0_MASK;
+ 			reg |= (data->dc_vol_level_adjust << TXVREFTUNE0_BIT);
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index ac347f9d5ef0b..63bb04d262d84 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -982,6 +982,7 @@ static int register_root_hub(struct usb_hcd *hcd)
+ {
+ 	struct device *parent_dev = hcd->self.controller;
+ 	struct usb_device *usb_dev = hcd->self.root_hub;
++	struct usb_device_descriptor *descr;
+ 	const int devnum = 1;
+ 	int retval;
+ 
+@@ -993,13 +994,16 @@ static int register_root_hub(struct usb_hcd *hcd)
+ 	mutex_lock(&usb_bus_idr_lock);
+ 
+ 	usb_dev->ep0.desc.wMaxPacketSize = cpu_to_le16(64);
+-	retval = usb_get_device_descriptor(usb_dev, USB_DT_DEVICE_SIZE);
+-	if (retval != sizeof usb_dev->descriptor) {
++	descr = usb_get_device_descriptor(usb_dev);
++	if (IS_ERR(descr)) {
++		retval = PTR_ERR(descr);
+ 		mutex_unlock(&usb_bus_idr_lock);
+ 		dev_dbg (parent_dev, "can't read %s device descriptor %d\n",
+ 				dev_name(&usb_dev->dev), retval);
+-		return (retval < 0) ? retval : -EMSGSIZE;
++		return retval;
+ 	}
++	usb_dev->descriptor = *descr;
++	kfree(descr);
+ 
+ 	if (le16_to_cpu(usb_dev->descriptor.bcdUSB) >= 0x0201) {
+ 		retval = usb_get_bos_descriptor(usb_dev);
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 7af2def631a24..580604596499a 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -2646,12 +2646,17 @@ int usb_authorize_device(struct usb_device *usb_dev)
+ 	}
+ 
+ 	if (usb_dev->wusb) {
+-		result = usb_get_device_descriptor(usb_dev, sizeof(usb_dev->descriptor));
+-		if (result < 0) {
++		struct usb_device_descriptor *descr;
++
++		descr = usb_get_device_descriptor(usb_dev);
++		if (IS_ERR(descr)) {
++			result = PTR_ERR(descr);
+ 			dev_err(&usb_dev->dev, "can't re-read device descriptor for "
+ 				"authorization: %d\n", result);
+ 			goto error_device_descriptor;
+ 		}
++		usb_dev->descriptor = *descr;
++		kfree(descr);
+ 	}
+ 
+ 	usb_dev->authorized = 1;
+@@ -4594,6 +4599,67 @@ static int hub_enable_device(struct usb_device *udev)
+ 	return hcd->driver->enable_device(hcd, udev);
+ }
+ 
++/*
++ * Get the bMaxPacketSize0 value during initialization by reading the
++ * device's device descriptor.  Since we don't already know this value,
++ * the transfer is unsafe and it ignores I/O errors, only testing for
++ * reasonable received values.
++ *
++ * For "old scheme" initialization, size will be 8 so we read just the
++ * start of the device descriptor, which should work okay regardless of
++ * the actual bMaxPacketSize0 value.  For "new scheme" initialization,
++ * size will be 64 (and buf will point to a sufficiently large buffer),
++ * which might not be kosher according to the USB spec but it's what
++ * Windows does and what many devices expect.
++ *
++ * Returns: bMaxPacketSize0 or a negative error code.
++ */
++static int get_bMaxPacketSize0(struct usb_device *udev,
++		struct usb_device_descriptor *buf, int size, bool first_time)
++{
++	int i, rc;
++
++	/*
++	 * Retry on all errors; some devices are flakey.
++	 * 255 is for WUSB devices, we actually need to use
++	 * 512 (WUSB1.0[4.8.1]).
++	 */
++	for (i = 0; i < GET_MAXPACKET0_TRIES; ++i) {
++		/* Start with invalid values in case the transfer fails */
++		buf->bDescriptorType = buf->bMaxPacketSize0 = 0;
++		rc = usb_control_msg(udev, usb_rcvaddr0pipe(),
++				USB_REQ_GET_DESCRIPTOR, USB_DIR_IN,
++				USB_DT_DEVICE << 8, 0,
++				buf, size,
++				initial_descriptor_timeout);
++		switch (buf->bMaxPacketSize0) {
++		case 8: case 16: case 32: case 64: case 9:
++			if (buf->bDescriptorType == USB_DT_DEVICE) {
++				rc = buf->bMaxPacketSize0;
++				break;
++			}
++			fallthrough;
++		default:
++			if (rc >= 0)
++				rc = -EPROTO;
++			break;
++		}
++
++		/*
++		 * Some devices time out if they are powered on
++		 * when already connected. They need a second
++		 * reset, so return early. But only on the first
++		 * attempt, lest we get into a time-out/reset loop.
++		 */
++		if (rc > 0 || (rc == -ETIMEDOUT && first_time &&
++				udev->speed > USB_SPEED_FULL))
++			break;
++	}
++	return rc;
++}
++
++#define GET_DESCRIPTOR_BUFSIZE	64
++
+ /* Reset device, (re)assign address, get device descriptor.
+  * Device connection must be stable, no more debouncing needed.
+  * Returns device in USB_STATE_ADDRESS, except on error.
+@@ -4603,10 +4669,17 @@ static int hub_enable_device(struct usb_device *udev)
+  * the port lock.  For a newly detected device that is not accessible
+  * through any global pointers, it's not necessary to lock the device,
+  * but it is still necessary to lock the port.
++ *
++ * For a newly detected device, @dev_descr must be NULL.  The device
++ * descriptor retrieved from the device will then be stored in
++ * @udev->descriptor.  For an already existing device, @dev_descr
++ * must be non-NULL.  The device descriptor will be stored there,
++ * not in @udev->descriptor, because descriptors for registered
++ * devices are meant to be immutable.
+  */
+ static int
+ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+-		int retry_counter)
++		int retry_counter, struct usb_device_descriptor *dev_descr)
+ {
+ 	struct usb_device	*hdev = hub->hdev;
+ 	struct usb_hcd		*hcd = bus_to_hcd(hdev->bus);
+@@ -4618,6 +4691,13 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 	int			devnum = udev->devnum;
+ 	const char		*driver_name;
+ 	bool			do_new_scheme;
++	const bool		initial = !dev_descr;
++	int			maxp0;
++	struct usb_device_descriptor	*buf, *descr;
++
++	buf = kmalloc(GET_DESCRIPTOR_BUFSIZE, GFP_NOIO);
++	if (!buf)
++		return -ENOMEM;
+ 
+ 	/* root hub ports have a slightly longer reset period
+ 	 * (from USB 2.0 spec, section 7.1.7.5)
+@@ -4650,32 +4730,34 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 	}
+ 	oldspeed = udev->speed;
+ 
+-	/* USB 2.0 section 5.5.3 talks about ep0 maxpacket ...
+-	 * it's fixed size except for full speed devices.
+-	 * For Wireless USB devices, ep0 max packet is always 512 (tho
+-	 * reported as 0xff in the device descriptor). WUSB1.0[4.8.1].
+-	 */
+-	switch (udev->speed) {
+-	case USB_SPEED_SUPER_PLUS:
+-	case USB_SPEED_SUPER:
+-	case USB_SPEED_WIRELESS:	/* fixed at 512 */
+-		udev->ep0.desc.wMaxPacketSize = cpu_to_le16(512);
+-		break;
+-	case USB_SPEED_HIGH:		/* fixed at 64 */
+-		udev->ep0.desc.wMaxPacketSize = cpu_to_le16(64);
+-		break;
+-	case USB_SPEED_FULL:		/* 8, 16, 32, or 64 */
+-		/* to determine the ep0 maxpacket size, try to read
+-		 * the device descriptor to get bMaxPacketSize0 and
+-		 * then correct our initial guess.
++	if (initial) {
++		/* USB 2.0 section 5.5.3 talks about ep0 maxpacket ...
++		 * it's fixed size except for full speed devices.
++		 * For Wireless USB devices, ep0 max packet is always 512 (tho
++		 * reported as 0xff in the device descriptor). WUSB1.0[4.8.1].
+ 		 */
+-		udev->ep0.desc.wMaxPacketSize = cpu_to_le16(64);
+-		break;
+-	case USB_SPEED_LOW:		/* fixed at 8 */
+-		udev->ep0.desc.wMaxPacketSize = cpu_to_le16(8);
+-		break;
+-	default:
+-		goto fail;
++		switch (udev->speed) {
++		case USB_SPEED_SUPER_PLUS:
++		case USB_SPEED_SUPER:
++		case USB_SPEED_WIRELESS:	/* fixed at 512 */
++			udev->ep0.desc.wMaxPacketSize = cpu_to_le16(512);
++			break;
++		case USB_SPEED_HIGH:		/* fixed at 64 */
++			udev->ep0.desc.wMaxPacketSize = cpu_to_le16(64);
++			break;
++		case USB_SPEED_FULL:		/* 8, 16, 32, or 64 */
++			/* to determine the ep0 maxpacket size, try to read
++			 * the device descriptor to get bMaxPacketSize0 and
++			 * then correct our initial guess.
++			 */
++			udev->ep0.desc.wMaxPacketSize = cpu_to_le16(64);
++			break;
++		case USB_SPEED_LOW:		/* fixed at 8 */
++			udev->ep0.desc.wMaxPacketSize = cpu_to_le16(8);
++			break;
++		default:
++			goto fail;
++		}
+ 	}
+ 
+ 	if (udev->speed == USB_SPEED_WIRELESS)
+@@ -4698,22 +4780,24 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 	if (udev->speed < USB_SPEED_SUPER)
+ 		dev_info(&udev->dev,
+ 				"%s %s USB device number %d using %s\n",
+-				(udev->config) ? "reset" : "new", speed,
++				(initial ? "new" : "reset"), speed,
+ 				devnum, driver_name);
+ 
+-	/* Set up TT records, if needed  */
+-	if (hdev->tt) {
+-		udev->tt = hdev->tt;
+-		udev->ttport = hdev->ttport;
+-	} else if (udev->speed != USB_SPEED_HIGH
+-			&& hdev->speed == USB_SPEED_HIGH) {
+-		if (!hub->tt.hub) {
+-			dev_err(&udev->dev, "parent hub has no TT\n");
+-			retval = -EINVAL;
+-			goto fail;
++	if (initial) {
++		/* Set up TT records, if needed  */
++		if (hdev->tt) {
++			udev->tt = hdev->tt;
++			udev->ttport = hdev->ttport;
++		} else if (udev->speed != USB_SPEED_HIGH
++				&& hdev->speed == USB_SPEED_HIGH) {
++			if (!hub->tt.hub) {
++				dev_err(&udev->dev, "parent hub has no TT\n");
++				retval = -EINVAL;
++				goto fail;
++			}
++			udev->tt = &hub->tt;
++			udev->ttport = port1;
+ 		}
+-		udev->tt = &hub->tt;
+-		udev->ttport = port1;
+ 	}
+ 
+ 	/* Why interleave GET_DESCRIPTOR and SET_ADDRESS this way?
+@@ -4732,9 +4816,6 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 
+ 	for (retries = 0; retries < GET_DESCRIPTOR_TRIES; (++retries, msleep(100))) {
+ 		if (do_new_scheme) {
+-			struct usb_device_descriptor *buf;
+-			int r = 0;
+-
+ 			retval = hub_enable_device(udev);
+ 			if (retval < 0) {
+ 				dev_err(&udev->dev,
+@@ -4743,52 +4824,14 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 				goto fail;
+ 			}
+ 
+-#define GET_DESCRIPTOR_BUFSIZE	64
+-			buf = kmalloc(GET_DESCRIPTOR_BUFSIZE, GFP_NOIO);
+-			if (!buf) {
+-				retval = -ENOMEM;
+-				continue;
+-			}
+-
+-			/* Retry on all errors; some devices are flakey.
+-			 * 255 is for WUSB devices, we actually need to use
+-			 * 512 (WUSB1.0[4.8.1]).
+-			 */
+-			for (operations = 0; operations < GET_MAXPACKET0_TRIES;
+-					++operations) {
+-				buf->bMaxPacketSize0 = 0;
+-				r = usb_control_msg(udev, usb_rcvaddr0pipe(),
+-					USB_REQ_GET_DESCRIPTOR, USB_DIR_IN,
+-					USB_DT_DEVICE << 8, 0,
+-					buf, GET_DESCRIPTOR_BUFSIZE,
+-					initial_descriptor_timeout);
+-				switch (buf->bMaxPacketSize0) {
+-				case 8: case 16: case 32: case 64: case 255:
+-					if (buf->bDescriptorType ==
+-							USB_DT_DEVICE) {
+-						r = 0;
+-						break;
+-					}
+-					fallthrough;
+-				default:
+-					if (r == 0)
+-						r = -EPROTO;
+-					break;
+-				}
+-				/*
+-				 * Some devices time out if they are powered on
+-				 * when already connected. They need a second
+-				 * reset. But only on the first attempt,
+-				 * lest we get into a time out/reset loop
+-				 */
+-				if (r == 0 || (r == -ETIMEDOUT &&
+-						retries == 0 &&
+-						udev->speed > USB_SPEED_FULL))
+-					break;
++			maxp0 = get_bMaxPacketSize0(udev, buf,
++					GET_DESCRIPTOR_BUFSIZE, retries == 0);
++			if (maxp0 > 0 && !initial &&
++					maxp0 != udev->descriptor.bMaxPacketSize0) {
++				dev_err(&udev->dev, "device reset changed ep0 maxpacket size!\n");
++				retval = -ENODEV;
++				goto fail;
+ 			}
+-			udev->descriptor.bMaxPacketSize0 =
+-					buf->bMaxPacketSize0;
+-			kfree(buf);
+ 
+ 			retval = hub_port_reset(hub, port1, udev, delay, false);
+ 			if (retval < 0)		/* error or disconnect */
+@@ -4799,14 +4842,13 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 				retval = -ENODEV;
+ 				goto fail;
+ 			}
+-			if (r) {
+-				if (r != -ENODEV)
++			if (maxp0 < 0) {
++				if (maxp0 != -ENODEV)
+ 					dev_err(&udev->dev, "device descriptor read/64, error %d\n",
+-							r);
+-				retval = -EMSGSIZE;
++							maxp0);
++				retval = maxp0;
+ 				continue;
+ 			}
+-#undef GET_DESCRIPTOR_BUFSIZE
+ 		}
+ 
+ 		/*
+@@ -4848,18 +4890,22 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 				break;
+ 		}
+ 
+-		retval = usb_get_device_descriptor(udev, 8);
+-		if (retval < 8) {
++		/* !do_new_scheme || wusb */
++		maxp0 = get_bMaxPacketSize0(udev, buf, 8, retries == 0);
++		if (maxp0 < 0) {
++			retval = maxp0;
+ 			if (retval != -ENODEV)
+ 				dev_err(&udev->dev,
+ 					"device descriptor read/8, error %d\n",
+ 					retval);
+-			if (retval >= 0)
+-				retval = -EMSGSIZE;
+ 		} else {
+ 			u32 delay;
+ 
+-			retval = 0;
++			if (!initial && maxp0 != udev->descriptor.bMaxPacketSize0) {
++				dev_err(&udev->dev, "device reset changed ep0 maxpacket size!\n");
++				retval = -ENODEV;
++				goto fail;
++			}
+ 
+ 			delay = udev->parent->hub_delay;
+ 			udev->hub_delay = min_t(u32, delay,
+@@ -4878,48 +4924,61 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 		goto fail;
+ 
+ 	/*
+-	 * Some superspeed devices have finished the link training process
+-	 * and attached to a superspeed hub port, but the device descriptor
+-	 * got from those devices show they aren't superspeed devices. Warm
+-	 * reset the port attached by the devices can fix them.
++	 * Check the ep0 maxpacket guess and correct it if necessary.
++	 * maxp0 is the value stored in the device descriptor;
++	 * i is the value it encodes (logarithmic for SuperSpeed or greater).
+ 	 */
+-	if ((udev->speed >= USB_SPEED_SUPER) &&
+-			(le16_to_cpu(udev->descriptor.bcdUSB) < 0x0300)) {
+-		dev_err(&udev->dev, "got a wrong device descriptor, "
+-				"warm reset device\n");
+-		hub_port_reset(hub, port1, udev,
+-				HUB_BH_RESET_TIME, true);
+-		retval = -EINVAL;
+-		goto fail;
+-	}
+-
+-	if (udev->descriptor.bMaxPacketSize0 == 0xff ||
+-			udev->speed >= USB_SPEED_SUPER)
+-		i = 512;
+-	else
+-		i = udev->descriptor.bMaxPacketSize0;
+-	if (usb_endpoint_maxp(&udev->ep0.desc) != i) {
+-		if (udev->speed == USB_SPEED_LOW ||
+-				!(i == 8 || i == 16 || i == 32 || i == 64)) {
+-			dev_err(&udev->dev, "Invalid ep0 maxpacket: %d\n", i);
+-			retval = -EMSGSIZE;
+-			goto fail;
+-		}
++	i = maxp0;
++	if (udev->speed >= USB_SPEED_SUPER) {
++		if (maxp0 <= 16)
++			i = 1 << maxp0;
++		else
++			i = 0;		/* Invalid */
++	}
++	if (usb_endpoint_maxp(&udev->ep0.desc) == i) {
++		;	/* Initial ep0 maxpacket guess is right */
++	} else if ((udev->speed == USB_SPEED_FULL ||
++				udev->speed == USB_SPEED_HIGH) &&
++			(i == 8 || i == 16 || i == 32 || i == 64)) {
++		/* Initial guess is wrong; use the descriptor's value */
+ 		if (udev->speed == USB_SPEED_FULL)
+ 			dev_dbg(&udev->dev, "ep0 maxpacket = %d\n", i);
+ 		else
+ 			dev_warn(&udev->dev, "Using ep0 maxpacket: %d\n", i);
+ 		udev->ep0.desc.wMaxPacketSize = cpu_to_le16(i);
+ 		usb_ep0_reinit(udev);
++	} else {
++		/* Initial guess is wrong and descriptor's value is invalid */
++		dev_err(&udev->dev, "Invalid ep0 maxpacket: %d\n", maxp0);
++		retval = -EMSGSIZE;
++		goto fail;
+ 	}
+ 
+-	retval = usb_get_device_descriptor(udev, USB_DT_DEVICE_SIZE);
+-	if (retval < (signed)sizeof(udev->descriptor)) {
++	descr = usb_get_device_descriptor(udev);
++	if (IS_ERR(descr)) {
++		retval = PTR_ERR(descr);
+ 		if (retval != -ENODEV)
+ 			dev_err(&udev->dev, "device descriptor read/all, error %d\n",
+ 					retval);
+-		if (retval >= 0)
+-			retval = -ENOMSG;
++		goto fail;
++	}
++	if (initial)
++		udev->descriptor = *descr;
++	else
++		*dev_descr = *descr;
++	kfree(descr);
++
++	/*
++	 * Some superspeed devices have finished the link training process
++	 * and attached to a superspeed hub port, but the device descriptor
++	 * got from those devices show they aren't superspeed devices. Warm
++	 * reset the port attached by the devices can fix them.
++	 */
++	if ((udev->speed >= USB_SPEED_SUPER) &&
++			(le16_to_cpu(udev->descriptor.bcdUSB) < 0x0300)) {
++		dev_err(&udev->dev, "got a wrong device descriptor, warm reset device\n");
++		hub_port_reset(hub, port1, udev, HUB_BH_RESET_TIME, true);
++		retval = -EINVAL;
+ 		goto fail;
+ 	}
+ 
+@@ -4943,6 +5002,7 @@ fail:
+ 		hub_port_disable(hub, port1, 0);
+ 		update_devnum(udev, devnum);	/* for disconnect processing */
+ 	}
++	kfree(buf);
+ 	return retval;
+ }
+ 
+@@ -5023,7 +5083,7 @@ hub_power_remaining(struct usb_hub *hub)
+ 
+ 
+ static int descriptors_changed(struct usb_device *udev,
+-		struct usb_device_descriptor *old_device_descriptor,
++		struct usb_device_descriptor *new_device_descriptor,
+ 		struct usb_host_bos *old_bos)
+ {
+ 	int		changed = 0;
+@@ -5034,8 +5094,8 @@ static int descriptors_changed(struct usb_device *udev,
+ 	int		length;
+ 	char		*buf;
+ 
+-	if (memcmp(&udev->descriptor, old_device_descriptor,
+-			sizeof(*old_device_descriptor)) != 0)
++	if (memcmp(&udev->descriptor, new_device_descriptor,
++			sizeof(*new_device_descriptor)) != 0)
+ 		return 1;
+ 
+ 	if ((old_bos && !udev->bos) || (!old_bos && udev->bos))
+@@ -5208,7 +5268,7 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
+ 		}
+ 
+ 		/* reset (non-USB 3.0 devices) and get descriptor */
+-		status = hub_port_init(hub, udev, port1, i);
++		status = hub_port_init(hub, udev, port1, i, NULL);
+ 		if (status < 0)
+ 			goto loop;
+ 
+@@ -5355,9 +5415,8 @@ static void hub_port_connect_change(struct usb_hub *hub, int port1,
+ {
+ 	struct usb_port *port_dev = hub->ports[port1 - 1];
+ 	struct usb_device *udev = port_dev->child;
+-	struct usb_device_descriptor descriptor;
++	struct usb_device_descriptor *descr;
+ 	int status = -ENODEV;
+-	int retval;
+ 
+ 	dev_dbg(&port_dev->dev, "status %04x, change %04x, %s\n", portstatus,
+ 			portchange, portspeed(hub, portstatus));
+@@ -5384,23 +5443,20 @@ static void hub_port_connect_change(struct usb_hub *hub, int port1,
+ 			 * changed device descriptors before resuscitating the
+ 			 * device.
+ 			 */
+-			descriptor = udev->descriptor;
+-			retval = usb_get_device_descriptor(udev,
+-					sizeof(udev->descriptor));
+-			if (retval < 0) {
++			descr = usb_get_device_descriptor(udev);
++			if (IS_ERR(descr)) {
+ 				dev_dbg(&udev->dev,
+-						"can't read device descriptor %d\n",
+-						retval);
++						"can't read device descriptor %ld\n",
++						PTR_ERR(descr));
+ 			} else {
+-				if (descriptors_changed(udev, &descriptor,
++				if (descriptors_changed(udev, descr,
+ 						udev->bos)) {
+ 					dev_dbg(&udev->dev,
+ 							"device descriptor has changed\n");
+-					/* for disconnect() calls */
+-					udev->descriptor = descriptor;
+ 				} else {
+ 					status = 0; /* Nothing to do */
+ 				}
++				kfree(descr);
+ 			}
+ #ifdef CONFIG_PM
+ 		} else if (udev->state == USB_STATE_SUSPENDED &&
+@@ -5828,7 +5884,7 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 	struct usb_device		*parent_hdev = udev->parent;
+ 	struct usb_hub			*parent_hub;
+ 	struct usb_hcd			*hcd = bus_to_hcd(udev->bus);
+-	struct usb_device_descriptor	descriptor = udev->descriptor;
++	struct usb_device_descriptor	descriptor;
+ 	struct usb_host_bos		*bos;
+ 	int				i, j, ret = 0;
+ 	int				port1 = udev->portnum;
+@@ -5870,7 +5926,7 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 		/* ep0 maxpacket size may change; let the HCD know about it.
+ 		 * Other endpoints will be handled by re-enumeration. */
+ 		usb_ep0_reinit(udev);
+-		ret = hub_port_init(parent_hub, udev, port1, i);
++		ret = hub_port_init(parent_hub, udev, port1, i, &descriptor);
+ 		if (ret >= 0 || ret == -ENOTCONN || ret == -ENODEV)
+ 			break;
+ 	}
+@@ -5882,7 +5938,6 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 	/* Device might have changed firmware (DFU or similar) */
+ 	if (descriptors_changed(udev, &descriptor, bos)) {
+ 		dev_info(&udev->dev, "device firmware changed\n");
+-		udev->descriptor = descriptor;	/* for disconnect() calls */
+ 		goto re_enumerate;
+ 	}
+ 
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index dba2baca486e7..d64aaff223e79 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -1039,39 +1039,34 @@ char *usb_cache_string(struct usb_device *udev, int index)
+ }
+ 
+ /*
+- * usb_get_device_descriptor - (re)reads the device descriptor (usbcore)
+- * @dev: the device whose device descriptor is being updated
+- * @size: how much of the descriptor to read
++ * usb_get_device_descriptor - read the device descriptor
++ * @udev: the device whose device descriptor should be read
+  * Context: !in_interrupt ()
+  *
+- * Updates the copy of the device descriptor stored in the device structure,
+- * which dedicates space for this purpose.
+- *
+  * Not exported, only for use by the core.  If drivers really want to read
+  * the device descriptor directly, they can call usb_get_descriptor() with
+  * type = USB_DT_DEVICE and index = 0.
+  *
+- * This call is synchronous, and may not be used in an interrupt context.
+- *
+- * Return: The number of bytes received on success, or else the status code
+- * returned by the underlying usb_control_msg() call.
++ * Returns: a pointer to a dynamically allocated usb_device_descriptor
++ * structure (which the caller must deallocate), or an ERR_PTR value.
+  */
+-int usb_get_device_descriptor(struct usb_device *dev, unsigned int size)
++struct usb_device_descriptor *usb_get_device_descriptor(struct usb_device *udev)
+ {
+ 	struct usb_device_descriptor *desc;
+ 	int ret;
+ 
+-	if (size > sizeof(*desc))
+-		return -EINVAL;
+ 	desc = kmalloc(sizeof(*desc), GFP_NOIO);
+ 	if (!desc)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
++
++	ret = usb_get_descriptor(udev, USB_DT_DEVICE, 0, desc, sizeof(*desc));
++	if (ret == sizeof(*desc))
++		return desc;
+ 
+-	ret = usb_get_descriptor(dev, USB_DT_DEVICE, 0, desc, size);
+ 	if (ret >= 0)
+-		memcpy(&dev->descriptor, desc, size);
++		ret = -EMSGSIZE;
+ 	kfree(desc);
+-	return ret;
++	return ERR_PTR(ret);
+ }
+ 
+ /*
+diff --git a/drivers/usb/core/usb.h b/drivers/usb/core/usb.h
+index 82538daac8b89..3bb2e1db42b5d 100644
+--- a/drivers/usb/core/usb.h
++++ b/drivers/usb/core/usb.h
+@@ -42,8 +42,8 @@ extern bool usb_endpoint_is_ignored(struct usb_device *udev,
+ 		struct usb_endpoint_descriptor *epd);
+ extern int usb_remove_device(struct usb_device *udev);
+ 
+-extern int usb_get_device_descriptor(struct usb_device *dev,
+-		unsigned int size);
++extern struct usb_device_descriptor *usb_get_device_descriptor(
++		struct usb_device *udev);
+ extern int usb_set_isoch_delay(struct usb_device *dev);
+ extern int usb_get_bos_descriptor(struct usb_device *dev);
+ extern void usb_release_bos_descriptor(struct usb_device *dev);
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index 69ec06efd7f25..2d3ca6e8eb654 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -931,6 +931,12 @@ static int __maybe_unused dwc3_meson_g12a_resume(struct device *dev)
+ 			return ret;
+ 	}
+ 
++	if (priv->drvdata->usb_post_init) {
++		ret = priv->drvdata->usb_post_init(priv);
++		if (ret)
++			return ret;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c
+index 950c9435beec3..553547f12fd20 100644
+--- a/drivers/usb/gadget/function/f_mass_storage.c
++++ b/drivers/usb/gadget/function/f_mass_storage.c
+@@ -951,7 +951,7 @@ static void invalidate_sub(struct fsg_lun *curlun)
+ {
+ 	struct file	*filp = curlun->filp;
+ 	struct inode	*inode = file_inode(filp);
+-	unsigned long	rc;
++	unsigned long __maybe_unused	rc;
+ 
+ 	rc = invalidate_mapping_pages(inode->i_mapping, 0, -1);
+ 	VLDBG(curlun, "invalidate_mapping_pages -> %ld\n", rc);
+diff --git a/drivers/usb/phy/phy-mxs-usb.c b/drivers/usb/phy/phy-mxs-usb.c
+index 67b39dc62b373..70e23334b27f9 100644
+--- a/drivers/usb/phy/phy-mxs-usb.c
++++ b/drivers/usb/phy/phy-mxs-usb.c
+@@ -388,14 +388,8 @@ static void __mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool disconnect)
+ 
+ static bool mxs_phy_is_otg_host(struct mxs_phy *mxs_phy)
+ {
+-	void __iomem *base = mxs_phy->phy.io_priv;
+-	u32 phyctrl = readl(base + HW_USBPHY_CTRL);
+-
+-	if (IS_ENABLED(CONFIG_USB_OTG) &&
+-			!(phyctrl & BM_USBPHY_CTRL_OTG_ID_VALUE))
+-		return true;
+-
+-	return false;
++	return IS_ENABLED(CONFIG_USB_OTG) &&
++		mxs_phy->phy.last_event == USB_EVENT_ID;
+ }
+ 
+ static void mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool on)
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 0b3422c06ab94..83567e6e32e08 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -259,6 +259,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EM05G			0x030a
+ #define QUECTEL_PRODUCT_EM060K			0x030b
+ #define QUECTEL_PRODUCT_EM05G_CS		0x030c
++#define QUECTEL_PRODUCT_EM05GV2			0x030e
+ #define QUECTEL_PRODUCT_EM05CN_SG		0x0310
+ #define QUECTEL_PRODUCT_EM05G_SG		0x0311
+ #define QUECTEL_PRODUCT_EM05CN			0x0312
+@@ -1190,6 +1191,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(6) | ZLP },
+ 	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_GR, 0xff),
+ 	  .driver_info = RSVD(6) | ZLP },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05GV2, 0xff),
++	  .driver_info = RSVD(4) | ZLP },
+ 	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_CS, 0xff),
+ 	  .driver_info = RSVD(6) | ZLP },
+ 	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_RS, 0xff),
+@@ -2232,6 +2235,10 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe0db, 0xff),			/* Foxconn T99W265 MBIM */
+ 	  .driver_info = RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe0ee, 0xff),			/* Foxconn T99W368 MBIM */
++	  .driver_info = RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe0f0, 0xff),			/* Foxconn T99W373 MBIM */
++	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE(0x1508, 0x1001),						/* Fibocom NL668 (IOT version) */
+ 	  .driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
+ 	{ USB_DEVICE(0x1782, 0x4d10) },						/* Fibocom L610 (AT mode) */
+diff --git a/drivers/usb/typec/bus.c b/drivers/usb/typec/bus.c
+index f4e7f4d78b565..7994b46592b99 100644
+--- a/drivers/usb/typec/bus.c
++++ b/drivers/usb/typec/bus.c
+@@ -152,12 +152,20 @@ EXPORT_SYMBOL_GPL(typec_altmode_exit);
+  *
+  * Notifies the partner of @adev about Attention command.
+  */
+-void typec_altmode_attention(struct typec_altmode *adev, u32 vdo)
++int typec_altmode_attention(struct typec_altmode *adev, u32 vdo)
+ {
+-	struct typec_altmode *pdev = &to_altmode(adev)->partner->adev;
++	struct altmode *partner = to_altmode(adev)->partner;
++	struct typec_altmode *pdev;
++
++	if (!partner)
++		return -ENODEV;
++
++	pdev = &partner->adev;
+ 
+ 	if (pdev->ops && pdev->ops->attention)
+ 		pdev->ops->attention(pdev, vdo);
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(typec_altmode_attention);
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index 069affa5cb1ee..e34e46df80243 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -475,6 +475,10 @@ static int tcpci_init(struct tcpc_dev *tcpc)
+ 	if (time_after(jiffies, timeout))
+ 		return -ETIMEDOUT;
+ 
++	ret = tcpci_write16(tcpci, TCPC_FAULT_STATUS, TCPC_FAULT_STATUS_ALL_REG_RST_TO_DEFAULT);
++	if (ret < 0)
++		return ret;
++
+ 	/* Handle vendor init */
+ 	if (tcpci->data->init) {
+ 		ret = tcpci->data->init(tcpci, tcpci->data);
+diff --git a/drivers/usb/typec/tcpm/tcpci.h b/drivers/usb/typec/tcpm/tcpci.h
+index 5ef07a56d67aa..95ce89139c6ef 100644
+--- a/drivers/usb/typec/tcpm/tcpci.h
++++ b/drivers/usb/typec/tcpm/tcpci.h
+@@ -84,6 +84,7 @@
+ #define TCPC_POWER_STATUS_VBUS_PRES	BIT(2)
+ 
+ #define TCPC_FAULT_STATUS		0x1f
++#define TCPC_FAULT_STATUS_ALL_REG_RST_TO_DEFAULT BIT(7)
+ 
+ #define TCPC_ALERT_EXTENDED		0x21
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index cf0e6a80815ae..ac3953a0fa291 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -1395,7 +1395,8 @@ static void tcpm_handle_vdm_request(struct tcpm_port *port,
+ 			}
+ 			break;
+ 		case ADEV_ATTENTION:
+-			typec_altmode_attention(adev, p[1]);
++			if (typec_altmode_attention(adev, p[1]))
++				tcpm_log(port, "typec_altmode_attention no port partner altmode");
+ 			break;
+ 		}
+ 	}
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index ec1428dbdf9d9..9b01f88ae4762 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -2659,7 +2659,7 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
+ static int vfio_iommu_migration_build_caps(struct vfio_iommu *iommu,
+ 					   struct vfio_info_cap *caps)
+ {
+-	struct vfio_iommu_type1_info_cap_migration cap_mig;
++	struct vfio_iommu_type1_info_cap_migration cap_mig = {};
+ 
+ 	cap_mig.header.id = VFIO_IOMMU_TYPE1_INFO_CAP_MIGRATION;
+ 	cap_mig.header.version = 1;
+diff --git a/drivers/video/backlight/bd6107.c b/drivers/video/backlight/bd6107.c
+index 515184fbe33a9..5c67ef8bd60ca 100644
+--- a/drivers/video/backlight/bd6107.c
++++ b/drivers/video/backlight/bd6107.c
+@@ -104,7 +104,7 @@ static int bd6107_backlight_check_fb(struct backlight_device *backlight,
+ {
+ 	struct bd6107 *bd = bl_get_data(backlight);
+ 
+-	return bd->pdata->fbdev == NULL || bd->pdata->fbdev == info->dev;
++	return bd->pdata->fbdev == NULL || bd->pdata->fbdev == info->device;
+ }
+ 
+ static const struct backlight_ops bd6107_backlight_ops = {
+diff --git a/drivers/video/backlight/gpio_backlight.c b/drivers/video/backlight/gpio_backlight.c
+index 6f78d928f054a..30ec5b6845335 100644
+--- a/drivers/video/backlight/gpio_backlight.c
++++ b/drivers/video/backlight/gpio_backlight.c
+@@ -35,7 +35,7 @@ static int gpio_backlight_check_fb(struct backlight_device *bl,
+ {
+ 	struct gpio_backlight *gbl = bl_get_data(bl);
+ 
+-	return gbl->fbdev == NULL || gbl->fbdev == info->dev;
++	return gbl->fbdev == NULL || gbl->fbdev == info->device;
+ }
+ 
+ static const struct backlight_ops gpio_backlight_ops = {
+@@ -87,8 +87,7 @@ static int gpio_backlight_probe(struct platform_device *pdev)
+ 		/* Not booted with device tree or no phandle link to the node */
+ 		bl->props.power = def_value ? FB_BLANK_UNBLANK
+ 					    : FB_BLANK_POWERDOWN;
+-	else if (gpiod_get_direction(gbl->gpiod) == 0 &&
+-		 gpiod_get_value_cansleep(gbl->gpiod) == 0)
++	else if (gpiod_get_value_cansleep(gbl->gpiod) == 0)
+ 		bl->props.power = FB_BLANK_POWERDOWN;
+ 	else
+ 		bl->props.power = FB_BLANK_UNBLANK;
+diff --git a/drivers/video/backlight/lv5207lp.c b/drivers/video/backlight/lv5207lp.c
+index 1842ae9a55f8b..720ada475ce53 100644
+--- a/drivers/video/backlight/lv5207lp.c
++++ b/drivers/video/backlight/lv5207lp.c
+@@ -67,7 +67,7 @@ static int lv5207lp_backlight_check_fb(struct backlight_device *backlight,
+ {
+ 	struct lv5207lp *lv = bl_get_data(backlight);
+ 
+-	return lv->pdata->fbdev == NULL || lv->pdata->fbdev == info->dev;
++	return lv->pdata->fbdev == NULL || lv->pdata->fbdev == info->device;
+ }
+ 
+ static const struct backlight_ops lv5207lp_backlight_ops = {
+diff --git a/drivers/video/fbdev/ep93xx-fb.c b/drivers/video/fbdev/ep93xx-fb.c
+index ba33b4dce0dfa..04aac2ad53822 100644
+--- a/drivers/video/fbdev/ep93xx-fb.c
++++ b/drivers/video/fbdev/ep93xx-fb.c
+@@ -474,7 +474,6 @@ static int ep93xxfb_probe(struct platform_device *pdev)
+ 	if (!info)
+ 		return -ENOMEM;
+ 
+-	info->dev = &pdev->dev;
+ 	platform_set_drvdata(pdev, info);
+ 	fbi = info->par;
+ 	fbi->mach_info = mach_info;
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 3cc2a4ee7152c..cf0e8e1893ee6 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1190,7 +1190,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
+ 		}
+ 	}
+ 
+-	if (i < head)
++	if (i <= head)
+ 		vq->packed.avail_wrap_counter ^= 1;
+ 
+ 	/* We're using some buffers from the free list. */
+diff --git a/drivers/watchdog/intel-mid_wdt.c b/drivers/watchdog/intel-mid_wdt.c
+index 9b2173f765c8c..fb7fae750181b 100644
+--- a/drivers/watchdog/intel-mid_wdt.c
++++ b/drivers/watchdog/intel-mid_wdt.c
+@@ -203,3 +203,4 @@ module_platform_driver(mid_wdt_driver);
+ MODULE_AUTHOR("David Cohen <david.a.cohen@linux.intel.com>");
+ MODULE_DESCRIPTION("Watchdog Driver for Intel MID platform");
+ MODULE_LICENSE("GPL");
++MODULE_ALIAS("platform:intel_mid_wdt");
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 608b939a4d287..1bc6909d4de94 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2496,11 +2496,10 @@ static int validate_super(struct btrfs_fs_info *fs_info,
+ 		ret = -EINVAL;
+ 	}
+ 
+-	if (memcmp(fs_info->fs_devices->fsid, fs_info->super_copy->fsid,
+-		   BTRFS_FSID_SIZE)) {
++	if (memcmp(fs_info->fs_devices->fsid, sb->fsid, BTRFS_FSID_SIZE) != 0) {
+ 		btrfs_err(fs_info,
+ 		"superblock fsid doesn't match fsid of fs_devices: %pU != %pU",
+-			fs_info->super_copy->fsid, fs_info->fs_devices->fsid);
++			  sb->fsid, fs_info->fs_devices->fsid);
+ 		ret = -EINVAL;
+ 	}
+ 
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index abd67f984fbcf..d23047b23005c 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -301,10 +301,11 @@ loop:
+ 	spin_unlock(&fs_info->trans_lock);
+ 
+ 	/*
+-	 * If we are ATTACH, we just want to catch the current transaction,
+-	 * and commit it. If there is no transaction, just return ENOENT.
++	 * If we are ATTACH or TRANS_JOIN_NOSTART, we just want to catch the
++	 * current transaction, and commit it. If there is no transaction, just
++	 * return ENOENT.
+ 	 */
+-	if (type == TRANS_ATTACH)
++	if (type == TRANS_ATTACH || type == TRANS_JOIN_NOSTART)
+ 		return -ENOENT;
+ 
+ 	/*
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 12388ed4faa59..0b7e9ab517d58 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -479,6 +479,7 @@ static struct dentry * configfs_lookup(struct inode *dir,
+ 	if (!configfs_dirent_is_ready(parent_sd))
+ 		goto out;
+ 
++	spin_lock(&configfs_dirent_lock);
+ 	list_for_each_entry(sd, &parent_sd->s_children, s_sibling) {
+ 		if (sd->s_type & CONFIGFS_NOT_PINNED) {
+ 			const unsigned char * name = configfs_get_name(sd);
+@@ -491,6 +492,7 @@ static struct dentry * configfs_lookup(struct inode *dir,
+ 			break;
+ 		}
+ 	}
++	spin_unlock(&configfs_dirent_lock);
+ 
+ 	if (!found) {
+ 		/*
+diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
+index 28735e8c5e206..5f2e2fa2ba090 100644
+--- a/fs/dlm/plock.c
++++ b/fs/dlm/plock.c
+@@ -466,7 +466,8 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
+ 		}
+ 	} else {
+ 		list_for_each_entry(iter, &recv_list, list) {
+-			if (!iter->info.wait) {
++			if (!iter->info.wait &&
++			    iter->info.fsid == info.fsid) {
+ 				op = iter;
+ 				break;
+ 			}
+@@ -478,8 +479,7 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
+ 		if (info.wait)
+ 			WARN_ON(op->info.optype != DLM_PLOCK_OP_LOCK);
+ 		else
+-			WARN_ON(op->info.fsid != info.fsid ||
+-				op->info.number != info.number ||
++			WARN_ON(op->info.number != info.number ||
+ 				op->info.owner != info.owner ||
+ 				op->info.optype != info.optype);
+ 
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 9cff927382599..da133950d5144 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -632,6 +632,8 @@ hitted:
+ 	cur = end - min_t(erofs_off_t, offset + end - map->m_la, end);
+ 	if (!(map->m_flags & EROFS_MAP_MAPPED)) {
+ 		zero_user_segment(page, cur, end);
++		++spiltted;
++		tight = false;
+ 		goto next_part;
+ 	}
+ 
+diff --git a/fs/eventfd.c b/fs/eventfd.c
+index 4a14295cffe0d..3673eb8de0356 100644
+--- a/fs/eventfd.c
++++ b/fs/eventfd.c
+@@ -187,11 +187,14 @@ static __poll_t eventfd_poll(struct file *file, poll_table *wait)
+ 	return events;
+ }
+ 
+-static void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt)
++void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt)
+ {
+-	*cnt = (ctx->flags & EFD_SEMAPHORE) ? 1 : ctx->count;
++	lockdep_assert_held(&ctx->wqh.lock);
++
++	*cnt = ((ctx->flags & EFD_SEMAPHORE) && ctx->count) ? 1 : ctx->count;
+ 	ctx->count -= *cnt;
+ }
++EXPORT_SYMBOL_GPL(eventfd_ctx_do_read);
+ 
+ /**
+  * eventfd_ctx_remove_wait_queue - Read the current counter and removes wait queue.
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index 4efe71efe1277..bdbf130416c73 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -903,11 +903,11 @@ unsigned long ext4_bg_num_gdb(struct super_block *sb, ext4_group_t group)
+ }
+ 
+ /*
+- * This function returns the number of file system metadata clusters at
++ * This function returns the number of file system metadata blocks at
+  * the beginning of a block group, including the reserved gdt blocks.
+  */
+-static unsigned ext4_num_base_meta_clusters(struct super_block *sb,
+-				     ext4_group_t block_group)
++unsigned int ext4_num_base_meta_blocks(struct super_block *sb,
++				       ext4_group_t block_group)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	unsigned num;
+@@ -925,8 +925,15 @@ static unsigned ext4_num_base_meta_clusters(struct super_block *sb,
+ 	} else { /* For META_BG_BLOCK_GROUPS */
+ 		num += ext4_bg_num_gdb(sb, block_group);
+ 	}
+-	return EXT4_NUM_B2C(sbi, num);
++	return num;
+ }
++
++static unsigned int ext4_num_base_meta_clusters(struct super_block *sb,
++						ext4_group_t block_group)
++{
++	return EXT4_NUM_B2C(EXT4_SB(sb), ext4_num_base_meta_blocks(sb, block_group));
++}
++
+ /**
+  *	ext4_inode_to_goal_block - return a hint for block allocation
+  *	@inode: inode for block allocation
+diff --git a/fs/ext4/block_validity.c b/fs/ext4/block_validity.c
+index eed5b855dd949..295e89d93295e 100644
+--- a/fs/ext4/block_validity.c
++++ b/fs/ext4/block_validity.c
+@@ -217,7 +217,6 @@ int ext4_setup_system_zone(struct super_block *sb)
+ 	struct ext4_system_blocks *system_blks;
+ 	struct ext4_group_desc *gdp;
+ 	ext4_group_t i;
+-	int flex_size = ext4_flex_bg_size(sbi);
+ 	int ret;
+ 
+ 	system_blks = kzalloc(sizeof(*system_blks), GFP_KERNEL);
+@@ -225,12 +224,13 @@ int ext4_setup_system_zone(struct super_block *sb)
+ 		return -ENOMEM;
+ 
+ 	for (i=0; i < ngroups; i++) {
++		unsigned int meta_blks = ext4_num_base_meta_blocks(sb, i);
++
+ 		cond_resched();
+-		if (ext4_bg_has_super(sb, i) &&
+-		    ((i < 5) || ((i % flex_size) == 0))) {
++		if (meta_blks != 0) {
+ 			ret = add_system_zone(system_blks,
+ 					ext4_group_first_block_no(sb, i),
+-					ext4_bg_num_gdb(sb, i) + 1, 0);
++					meta_blks, 0);
+ 			if (ret)
+ 				goto err;
+ 		}
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index c8fdf127d843b..c3e9cb5037631 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -2967,6 +2967,8 @@ extern const char *ext4_decode_error(struct super_block *sb, int errno,
+ extern void ext4_mark_group_bitmap_corrupted(struct super_block *sb,
+ 					     ext4_group_t block_group,
+ 					     unsigned int flags);
++extern unsigned int ext4_num_base_meta_blocks(struct super_block *sb,
++					      ext4_group_t block_group);
+ 
+ extern __printf(6, 7)
+ void __ext4_error(struct super_block *, const char *, unsigned int, int, __u64,
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 3a71928846712..2f6ed59d81f02 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -2103,7 +2103,7 @@ static bool ext4_mb_good_group(struct ext4_allocation_context *ac,
+ 
+ 	BUG_ON(cr < 0 || cr >= 4);
+ 
+-	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp) || !grp))
++	if (unlikely(!grp || EXT4_MB_GRP_BBITMAP_CORRUPT(grp)))
+ 		return false;
+ 
+ 	free = grp->bb_free;
+diff --git a/fs/fuse/readdir.c b/fs/fuse/readdir.c
+index d5294e663df50..14e99ffa57af9 100644
+--- a/fs/fuse/readdir.c
++++ b/fs/fuse/readdir.c
+@@ -243,8 +243,16 @@ retry:
+ 			dput(dentry);
+ 			dentry = alias;
+ 		}
+-		if (IS_ERR(dentry))
++		if (IS_ERR(dentry)) {
++			if (!IS_ERR(inode)) {
++				struct fuse_inode *fi = get_fuse_inode(inode);
++
++				spin_lock(&fi->lock);
++				fi->nlookup--;
++				spin_unlock(&fi->lock);
++			}
+ 			return PTR_ERR(dentry);
++		}
+ 	}
+ 	if (fc->readdirplus_auto)
+ 		set_bit(FUSE_I_INIT_RDPLUS, &get_fuse_inode(inode)->state);
+diff --git a/fs/jfs/jfs_extent.c b/fs/jfs/jfs_extent.c
+index f65bd6b35412b..d4e063dbb9a0b 100644
+--- a/fs/jfs/jfs_extent.c
++++ b/fs/jfs/jfs_extent.c
+@@ -508,6 +508,11 @@ extBalloc(struct inode *ip, s64 hint, s64 * nblocks, s64 * blkno)
+ 	 * blocks in the map. in that case, we'll start off with the
+ 	 * maximum free.
+ 	 */
++
++	/* give up if no space left */
++	if (bmp->db_maxfreebud == -1)
++		return -ENOSPC;
++
+ 	max = (s64) 1 << bmp->db_maxfreebud;
+ 	if (*nblocks >= max && *nblocks > nbperpage)
+ 		nb = nblks = (max > nbperpage) ? max : nbperpage;
+diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
+index 1d9488cf05348..87a0f207df0b9 100644
+--- a/fs/lockd/mon.c
++++ b/fs/lockd/mon.c
+@@ -276,6 +276,9 @@ static struct nsm_handle *nsm_create_handle(const struct sockaddr *sap,
+ {
+ 	struct nsm_handle *new;
+ 
++	if (!hostname)
++		return NULL;
++
+ 	new = kzalloc(sizeof(*new) + hostname_len + 1, GFP_KERNEL);
+ 	if (unlikely(new == NULL))
+ 		return NULL;
+diff --git a/fs/namei.c b/fs/namei.c
+index bc5e633a5954e..3ff954a2bbd1d 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -2640,7 +2640,7 @@ int path_pts(struct path *path)
+ 	dput(path->dentry);
+ 	path->dentry = parent;
+ 	child = d_hash_and_lookup(parent, &this);
+-	if (!child)
++	if (IS_ERR_OR_NULL(child))
+ 		return -ENOENT;
+ 
+ 	path->dentry = child;
+diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c
+index dec5880ac6de2..6e3a14fdff9c8 100644
+--- a/fs/nfs/blocklayout/dev.c
++++ b/fs/nfs/blocklayout/dev.c
+@@ -422,7 +422,7 @@ bl_parse_concat(struct nfs_server *server, struct pnfs_block_dev *d,
+ 	int ret, i;
+ 
+ 	d->children = kcalloc(v->concat.volumes_count,
+-			sizeof(struct pnfs_block_dev), GFP_KERNEL);
++			sizeof(struct pnfs_block_dev), gfp_mask);
+ 	if (!d->children)
+ 		return -ENOMEM;
+ 
+@@ -451,7 +451,7 @@ bl_parse_stripe(struct nfs_server *server, struct pnfs_block_dev *d,
+ 	int ret, i;
+ 
+ 	d->children = kcalloc(v->stripe.volumes_count,
+-			sizeof(struct pnfs_block_dev), GFP_KERNEL);
++			sizeof(struct pnfs_block_dev), gfp_mask);
+ 	if (!d->children)
+ 		return -ENOMEM;
+ 
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index fbc7304bed56b..018af6ec97b40 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -509,13 +509,31 @@ out:
+ 	return result;
+ }
+ 
++static void nfs_direct_add_page_head(struct list_head *list,
++				     struct nfs_page *req)
++{
++	struct nfs_page *head = req->wb_head;
++
++	if (!list_empty(&head->wb_list) || !nfs_lock_request(head))
++		return;
++	if (!list_empty(&head->wb_list)) {
++		nfs_unlock_request(head);
++		return;
++	}
++	list_add(&head->wb_list, list);
++	kref_get(&head->wb_kref);
++	kref_get(&head->wb_kref);
++}
++
+ static void nfs_direct_join_group(struct list_head *list, struct inode *inode)
+ {
+ 	struct nfs_page *req, *subreq;
+ 
+ 	list_for_each_entry(req, list, wb_list) {
+-		if (req->wb_head != req)
++		if (req->wb_head != req) {
++			nfs_direct_add_page_head(&req->wb_list, req);
+ 			continue;
++		}
+ 		subreq = req->wb_this_page;
+ 		if (subreq == req)
+ 			continue;
+diff --git a/fs/nfs/nfs2xdr.c b/fs/nfs/nfs2xdr.c
+index 5e6453e9b3079..b34196da1f945 100644
+--- a/fs/nfs/nfs2xdr.c
++++ b/fs/nfs/nfs2xdr.c
+@@ -948,7 +948,7 @@ int nfs2_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 
+ 	error = decode_filename_inline(xdr, &entry->name, &entry->len);
+ 	if (unlikely(error))
+-		return -EAGAIN;
++		return error == -ENAMETOOLONG ? -ENAMETOOLONG : -EAGAIN;
+ 
+ 	/*
+ 	 * The type (size and byte order) of nfscookie isn't defined in
+diff --git a/fs/nfs/nfs3xdr.c b/fs/nfs/nfs3xdr.c
+index b5a9379b14504..509f32845d7b2 100644
+--- a/fs/nfs/nfs3xdr.c
++++ b/fs/nfs/nfs3xdr.c
+@@ -1987,7 +1987,7 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 
+ 	error = decode_inline_filename3(xdr, &entry->name, &entry->len);
+ 	if (unlikely(error))
+-		return -EAGAIN;
++		return error == -ENAMETOOLONG ? -ENAMETOOLONG : -EAGAIN;
+ 
+ 	error = decode_cookie3(xdr, &new_cookie);
+ 	if (unlikely(error))
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index dad32b171e677..dfeea712014b7 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -443,8 +443,9 @@ ssize_t nfs42_proc_copy(struct file *src, loff_t pos_src,
+ 				continue;
+ 			}
+ 			break;
+-		} else if (err == -NFS4ERR_OFFLOAD_NO_REQS && !args.sync) {
+-			args.sync = true;
++		} else if (err == -NFS4ERR_OFFLOAD_NO_REQS &&
++				args.sync != res.synchronous) {
++			args.sync = res.synchronous;
+ 			dst_exception.retry = 1;
+ 			continue;
+ 		} else if ((err == -ESTALE ||
+diff --git a/fs/nfs/pnfs_dev.c b/fs/nfs/pnfs_dev.c
+index 537b80d693f1e..d4829f3f22935 100644
+--- a/fs/nfs/pnfs_dev.c
++++ b/fs/nfs/pnfs_dev.c
+@@ -152,7 +152,7 @@ nfs4_get_device_info(struct nfs_server *server,
+ 		set_bit(NFS_DEVICEID_NOCACHE, &d->flags);
+ 
+ out_free_pages:
+-	for (i = 0; i < max_pages; i++)
++	while (--i >= 0)
+ 		__free_page(pages[i]);
+ 	kfree(pages);
+ out_free_pdev:
+diff --git a/fs/nfsd/blocklayoutxdr.c b/fs/nfsd/blocklayoutxdr.c
+index 442543304930b..2455dc8be18a8 100644
+--- a/fs/nfsd/blocklayoutxdr.c
++++ b/fs/nfsd/blocklayoutxdr.c
+@@ -82,6 +82,15 @@ nfsd4_block_encode_getdeviceinfo(struct xdr_stream *xdr,
+ 	int len = sizeof(__be32), ret, i;
+ 	__be32 *p;
+ 
++	/*
++	 * See paragraph 5 of RFC 8881 S18.40.3.
++	 */
++	if (!gdp->gd_maxcount) {
++		if (xdr_stream_encode_u32(xdr, 0) != XDR_UNIT)
++			return nfserr_resource;
++		return nfs_ok;
++	}
++
+ 	p = xdr_reserve_space(xdr, len + sizeof(__be32));
+ 	if (!p)
+ 		return nfserr_resource;
+diff --git a/fs/nfsd/flexfilelayoutxdr.c b/fs/nfsd/flexfilelayoutxdr.c
+index e81d2a5cf381e..bb205328e043d 100644
+--- a/fs/nfsd/flexfilelayoutxdr.c
++++ b/fs/nfsd/flexfilelayoutxdr.c
+@@ -85,6 +85,15 @@ nfsd4_ff_encode_getdeviceinfo(struct xdr_stream *xdr,
+ 	int addr_len;
+ 	__be32 *p;
+ 
++	/*
++	 * See paragraph 5 of RFC 8881 S18.40.3.
++	 */
++	if (!gdp->gd_maxcount) {
++		if (xdr_stream_encode_u32(xdr, 0) != XDR_UNIT)
++			return nfserr_resource;
++		return nfs_ok;
++	}
++
+ 	/* len + padding for two strings */
+ 	addr_len = 16 + da->netaddr.netid_len + da->netaddr.addr_len;
+ 	ver_len = 20;
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index c7e8e641d3e5f..dbfa24cf33906 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -4407,20 +4407,17 @@ nfsd4_encode_getdeviceinfo(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ 	*p++ = cpu_to_be32(gdev->gd_layout_type);
+ 
+-	/* If maxcount is 0 then just update notifications */
+-	if (gdev->gd_maxcount != 0) {
+-		ops = nfsd4_layout_ops[gdev->gd_layout_type];
+-		nfserr = ops->encode_getdeviceinfo(xdr, gdev);
+-		if (nfserr) {
+-			/*
+-			 * We don't bother to burden the layout drivers with
+-			 * enforcing gd_maxcount, just tell the client to
+-			 * come back with a bigger buffer if it's not enough.
+-			 */
+-			if (xdr->buf->len + 4 > gdev->gd_maxcount)
+-				goto toosmall;
+-			return nfserr;
+-		}
++	ops = nfsd4_layout_ops[gdev->gd_layout_type];
++	nfserr = ops->encode_getdeviceinfo(xdr, gdev);
++	if (nfserr) {
++		/*
++		 * We don't bother to burden the layout drivers with
++		 * enforcing gd_maxcount, just tell the client to
++		 * come back with a bigger buffer if it's not enough.
++		 */
++		if (xdr->buf->len + 4 > gdev->gd_maxcount)
++			goto toosmall;
++		return nfserr;
+ 	}
+ 
+ 	if (gdev->gd_notify_types) {
+diff --git a/fs/nilfs2/alloc.c b/fs/nilfs2/alloc.c
+index adf3bb0a80482..279d945d4ebee 100644
+--- a/fs/nilfs2/alloc.c
++++ b/fs/nilfs2/alloc.c
+@@ -205,7 +205,8 @@ static int nilfs_palloc_get_block(struct inode *inode, unsigned long blkoff,
+ 	int ret;
+ 
+ 	spin_lock(lock);
+-	if (prev->bh && blkoff == prev->blkoff) {
++	if (prev->bh && blkoff == prev->blkoff &&
++	    likely(buffer_uptodate(prev->bh))) {
+ 		get_bh(prev->bh);
+ 		*bhp = prev->bh;
+ 		spin_unlock(lock);
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index 3a3cdc3706471..d144e08a9003a 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -1027,7 +1027,7 @@ int nilfs_load_inode_block(struct inode *inode, struct buffer_head **pbh)
+ 	int err;
+ 
+ 	spin_lock(&nilfs->ns_inode_lock);
+-	if (ii->i_bh == NULL) {
++	if (ii->i_bh == NULL || unlikely(!buffer_uptodate(ii->i_bh))) {
+ 		spin_unlock(&nilfs->ns_inode_lock);
+ 		err = nilfs_ifile_get_inode_block(ii->i_root->ifile,
+ 						  inode->i_ino, pbh);
+@@ -1036,7 +1036,10 @@ int nilfs_load_inode_block(struct inode *inode, struct buffer_head **pbh)
+ 		spin_lock(&nilfs->ns_inode_lock);
+ 		if (ii->i_bh == NULL)
+ 			ii->i_bh = *pbh;
+-		else {
++		else if (unlikely(!buffer_uptodate(ii->i_bh))) {
++			__brelse(ii->i_bh);
++			ii->i_bh = *pbh;
++		} else {
+ 			brelse(*pbh);
+ 			*pbh = ii->i_bh;
+ 		}
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index e4686e30b1a7d..418055ac910b6 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -730,6 +730,11 @@ static size_t nilfs_lookup_dirty_data_buffers(struct inode *inode,
+ 		struct page *page = pvec.pages[i];
+ 
+ 		lock_page(page);
++		if (unlikely(page->mapping != mapping)) {
++			/* Exclude pages removed from the address space */
++			unlock_page(page);
++			continue;
++		}
+ 		if (!page_has_buffers(page))
+ 			create_empty_buffers(page, i_blocksize(inode), 0);
+ 		unlock_page(page);
+diff --git a/fs/nls/nls_base.c b/fs/nls/nls_base.c
+index 52ccd34b1e792..a026dbd3593f6 100644
+--- a/fs/nls/nls_base.c
++++ b/fs/nls/nls_base.c
+@@ -272,7 +272,7 @@ int unregister_nls(struct nls_table * nls)
+ 	return -EINVAL;
+ }
+ 
+-static struct nls_table *find_nls(char *charset)
++static struct nls_table *find_nls(const char *charset)
+ {
+ 	struct nls_table *nls;
+ 	spin_lock(&nls_lock);
+@@ -288,7 +288,7 @@ static struct nls_table *find_nls(char *charset)
+ 	return nls;
+ }
+ 
+-struct nls_table *load_nls(char *charset)
++struct nls_table *load_nls(const char *charset)
+ {
+ 	return try_then_request_module(find_nls(charset), "nls_%s", charset);
+ }
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index d6a0e719b1ad9..5c98813b3dcaf 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -1532,6 +1532,10 @@ static int ocfs2_rename(struct inode *old_dir,
+ 		status = ocfs2_add_entry(handle, new_dentry, old_inode,
+ 					 OCFS2_I(old_inode)->ip_blkno,
+ 					 new_dir_bh, &target_insert);
++		if (status < 0) {
++			mlog_errno(status);
++			goto bail;
++		}
+ 	}
+ 
+ 	old_inode->i_ctime = current_time(old_inode);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 5d7df839902df..e0384095ca960 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -2028,7 +2028,7 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ 	sb->s_xattr = ovl_xattr_handlers;
+ 	sb->s_fs_info = ofs;
+ 	sb->s_flags |= SB_POSIXACL;
+-	sb->s_iflags |= SB_I_SKIP_SYNC;
++	sb->s_iflags |= SB_I_SKIP_SYNC | SB_I_IMA_UNVERIFIABLE_SIGNATURE;
+ 
+ 	err = -ENOMEM;
+ 	root_dentry = ovl_get_root(sb, upperpath.dentry, oe);
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 5d52aea8d7e7d..a484c30bd5cf6 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -3506,7 +3506,8 @@ static int proc_tid_comm_permission(struct inode *inode, int mask)
+ }
+ 
+ static const struct inode_operations proc_tid_comm_inode_operations = {
+-		.permission = proc_tid_comm_permission,
++		.setattr	= proc_setattr,
++		.permission	= proc_tid_comm_permission,
+ };
+ 
+ /*
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index b6183f1f4ebcf..a0fa3820ef2ab 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -504,7 +504,7 @@ static int persistent_ram_post_init(struct persistent_ram_zone *prz, u32 sig,
+ 	sig ^= PERSISTENT_RAM_SIG;
+ 
+ 	if (prz->buffer->sig == sig) {
+-		if (buffer_size(prz) == 0) {
++		if (buffer_size(prz) == 0 && buffer_start(prz) == 0) {
+ 			pr_debug("found existing empty buffer\n");
+ 			return 0;
+ 		}
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index 8d0cd68fc90a4..13a9a17d6a13b 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -225,13 +225,22 @@ static void put_quota_format(struct quota_format_type *fmt)
+ 
+ /*
+  * Dquot List Management:
+- * The quota code uses four lists for dquot management: the inuse_list,
+- * free_dquots, dqi_dirty_list, and dquot_hash[] array. A single dquot
+- * structure may be on some of those lists, depending on its current state.
++ * The quota code uses five lists for dquot management: the inuse_list,
++ * releasing_dquots, free_dquots, dqi_dirty_list, and dquot_hash[] array.
++ * A single dquot structure may be on some of those lists, depending on
++ * its current state.
+  *
+  * All dquots are placed to the end of inuse_list when first created, and this
+  * list is used for invalidate operation, which must look at every dquot.
+  *
++ * When the last reference of a dquot will be dropped, the dquot will be
++ * added to releasing_dquots. We'd then queue work item which would call
++ * synchronize_srcu() and after that perform the final cleanup of all the
++ * dquots on the list. Both releasing_dquots and free_dquots use the
++ * dq_free list_head in the dquot struct. When a dquot is removed from
++ * releasing_dquots, a reference count is always subtracted, and if
++ * dq_count == 0 at that point, the dquot will be added to the free_dquots.
++ *
+  * Unused dquots (dq_count == 0) are added to the free_dquots list when freed,
+  * and this list is searched whenever we need an available dquot.  Dquots are
+  * removed from the list as soon as they are used again, and
+@@ -250,6 +259,7 @@ static void put_quota_format(struct quota_format_type *fmt)
+ 
+ static LIST_HEAD(inuse_list);
+ static LIST_HEAD(free_dquots);
++static LIST_HEAD(releasing_dquots);
+ static unsigned int dq_hash_bits, dq_hash_mask;
+ static struct hlist_head *dquot_hash;
+ 
+@@ -260,6 +270,9 @@ static qsize_t inode_get_rsv_space(struct inode *inode);
+ static qsize_t __inode_get_rsv_space(struct inode *inode);
+ static int __dquot_initialize(struct inode *inode, int type);
+ 
++static void quota_release_workfn(struct work_struct *work);
++static DECLARE_DELAYED_WORK(quota_release_work, quota_release_workfn);
++
+ static inline unsigned int
+ hashfn(const struct super_block *sb, struct kqid qid)
+ {
+@@ -307,12 +320,18 @@ static inline void put_dquot_last(struct dquot *dquot)
+ 	dqstats_inc(DQST_FREE_DQUOTS);
+ }
+ 
++static inline void put_releasing_dquots(struct dquot *dquot)
++{
++	list_add_tail(&dquot->dq_free, &releasing_dquots);
++}
++
+ static inline void remove_free_dquot(struct dquot *dquot)
+ {
+ 	if (list_empty(&dquot->dq_free))
+ 		return;
+ 	list_del_init(&dquot->dq_free);
+-	dqstats_dec(DQST_FREE_DQUOTS);
++	if (!atomic_read(&dquot->dq_count))
++		dqstats_dec(DQST_FREE_DQUOTS);
+ }
+ 
+ static inline void put_inuse(struct dquot *dquot)
+@@ -338,6 +357,11 @@ static void wait_on_dquot(struct dquot *dquot)
+ 	mutex_unlock(&dquot->dq_lock);
+ }
+ 
++static inline int dquot_active(struct dquot *dquot)
++{
++	return test_bit(DQ_ACTIVE_B, &dquot->dq_flags);
++}
++
+ static inline int dquot_dirty(struct dquot *dquot)
+ {
+ 	return test_bit(DQ_MOD_B, &dquot->dq_flags);
+@@ -353,14 +377,14 @@ int dquot_mark_dquot_dirty(struct dquot *dquot)
+ {
+ 	int ret = 1;
+ 
+-	if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
++	if (!dquot_active(dquot))
+ 		return 0;
+ 
+ 	if (sb_dqopt(dquot->dq_sb)->flags & DQUOT_NOLIST_DIRTY)
+ 		return test_and_set_bit(DQ_MOD_B, &dquot->dq_flags);
+ 
+ 	/* If quota is dirty already, we don't have to acquire dq_list_lock */
+-	if (test_bit(DQ_MOD_B, &dquot->dq_flags))
++	if (dquot_dirty(dquot))
+ 		return 1;
+ 
+ 	spin_lock(&dq_list_lock);
+@@ -442,7 +466,7 @@ int dquot_acquire(struct dquot *dquot)
+ 	smp_mb__before_atomic();
+ 	set_bit(DQ_READ_B, &dquot->dq_flags);
+ 	/* Instantiate dquot if needed */
+-	if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags) && !dquot->dq_off) {
++	if (!dquot_active(dquot) && !dquot->dq_off) {
+ 		ret = dqopt->ops[dquot->dq_id.type]->commit_dqblk(dquot);
+ 		/* Write the info if needed */
+ 		if (info_dirty(&dqopt->info[dquot->dq_id.type])) {
+@@ -484,7 +508,7 @@ int dquot_commit(struct dquot *dquot)
+ 		goto out_lock;
+ 	/* Inactive dquot can be only if there was error during read/init
+ 	 * => we have better not writing it */
+-	if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
++	if (dquot_active(dquot))
+ 		ret = dqopt->ops[dquot->dq_id.type]->commit_dqblk(dquot);
+ 	else
+ 		ret = -EIO;
+@@ -549,6 +573,8 @@ static void invalidate_dquots(struct super_block *sb, int type)
+ 	struct dquot *dquot, *tmp;
+ 
+ restart:
++	flush_delayed_work(&quota_release_work);
++
+ 	spin_lock(&dq_list_lock);
+ 	list_for_each_entry_safe(dquot, tmp, &inuse_list, dq_inuse) {
+ 		if (dquot->dq_sb != sb)
+@@ -557,6 +583,12 @@ restart:
+ 			continue;
+ 		/* Wait for dquot users */
+ 		if (atomic_read(&dquot->dq_count)) {
++			/* dquot in releasing_dquots, flush and retry */
++			if (!list_empty(&dquot->dq_free)) {
++				spin_unlock(&dq_list_lock);
++				goto restart;
++			}
++
+ 			atomic_inc(&dquot->dq_count);
+ 			spin_unlock(&dq_list_lock);
+ 			/*
+@@ -599,7 +631,7 @@ int dquot_scan_active(struct super_block *sb,
+ 
+ 	spin_lock(&dq_list_lock);
+ 	list_for_each_entry(dquot, &inuse_list, dq_inuse) {
+-		if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
++		if (!dquot_active(dquot))
+ 			continue;
+ 		if (dquot->dq_sb != sb)
+ 			continue;
+@@ -614,7 +646,7 @@ int dquot_scan_active(struct super_block *sb,
+ 		 * outstanding call and recheck the DQ_ACTIVE_B after that.
+ 		 */
+ 		wait_on_dquot(dquot);
+-		if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
++		if (dquot_active(dquot)) {
+ 			ret = fn(dquot, priv);
+ 			if (ret < 0)
+ 				goto out;
+@@ -630,6 +662,18 @@ out:
+ }
+ EXPORT_SYMBOL(dquot_scan_active);
+ 
++static inline int dquot_write_dquot(struct dquot *dquot)
++{
++	int ret = dquot->dq_sb->dq_op->write_dquot(dquot);
++	if (ret < 0) {
++		quota_error(dquot->dq_sb, "Can't write quota structure "
++			    "(error %d). Quota may get out of sync!", ret);
++		/* Clear dirty bit anyway to avoid infinite loop. */
++		clear_dquot_dirty(dquot);
++	}
++	return ret;
++}
++
+ /* Write all dquot structures to quota files */
+ int dquot_writeback_dquots(struct super_block *sb, int type)
+ {
+@@ -653,23 +697,16 @@ int dquot_writeback_dquots(struct super_block *sb, int type)
+ 			dquot = list_first_entry(&dirty, struct dquot,
+ 						 dq_dirty);
+ 
+-			WARN_ON(!test_bit(DQ_ACTIVE_B, &dquot->dq_flags));
++			WARN_ON(!dquot_active(dquot));
+ 
+ 			/* Now we have active dquot from which someone is
+  			 * holding reference so we can safely just increase
+ 			 * use count */
+ 			dqgrab(dquot);
+ 			spin_unlock(&dq_list_lock);
+-			err = sb->dq_op->write_dquot(dquot);
+-			if (err) {
+-				/*
+-				 * Clear dirty bit anyway to avoid infinite
+-				 * loop here.
+-				 */
+-				clear_dquot_dirty(dquot);
+-				if (!ret)
+-					ret = err;
+-			}
++			err = dquot_write_dquot(dquot);
++			if (err && !ret)
++				ret = err;
+ 			dqput(dquot);
+ 			spin_lock(&dq_list_lock);
+ 		}
+@@ -762,13 +799,54 @@ static struct shrinker dqcache_shrinker = {
+ 	.seeks = DEFAULT_SEEKS,
+ };
+ 
++/*
++ * Safely release dquot and put reference to dquot.
++ */
++static void quota_release_workfn(struct work_struct *work)
++{
++	struct dquot *dquot;
++	struct list_head rls_head;
++
++	spin_lock(&dq_list_lock);
++	/* Exchange the list head to avoid livelock. */
++	list_replace_init(&releasing_dquots, &rls_head);
++	spin_unlock(&dq_list_lock);
++
++restart:
++	synchronize_srcu(&dquot_srcu);
++	spin_lock(&dq_list_lock);
++	while (!list_empty(&rls_head)) {
++		dquot = list_first_entry(&rls_head, struct dquot, dq_free);
++		/* Dquot got used again? */
++		if (atomic_read(&dquot->dq_count) > 1) {
++			remove_free_dquot(dquot);
++			atomic_dec(&dquot->dq_count);
++			continue;
++		}
++		if (dquot_dirty(dquot)) {
++			spin_unlock(&dq_list_lock);
++			/* Commit dquot before releasing */
++			dquot_write_dquot(dquot);
++			goto restart;
++		}
++		if (dquot_active(dquot)) {
++			spin_unlock(&dq_list_lock);
++			dquot->dq_sb->dq_op->release_dquot(dquot);
++			goto restart;
++		}
++		/* Dquot is inactive and clean, now move it to free list */
++		remove_free_dquot(dquot);
++		atomic_dec(&dquot->dq_count);
++		put_dquot_last(dquot);
++	}
++	spin_unlock(&dq_list_lock);
++}
++
+ /*
+  * Put reference to dquot
+  */
+ void dqput(struct dquot *dquot)
+ {
+-	int ret;
+-
+ 	if (!dquot)
+ 		return;
+ #ifdef CONFIG_QUOTA_DEBUG
+@@ -780,7 +858,7 @@ void dqput(struct dquot *dquot)
+ 	}
+ #endif
+ 	dqstats_inc(DQST_DROPS);
+-we_slept:
++
+ 	spin_lock(&dq_list_lock);
+ 	if (atomic_read(&dquot->dq_count) > 1) {
+ 		/* We have more than one user... nothing to do */
+@@ -792,35 +870,15 @@ we_slept:
+ 		spin_unlock(&dq_list_lock);
+ 		return;
+ 	}
++
+ 	/* Need to release dquot? */
+-	if (dquot_dirty(dquot)) {
+-		spin_unlock(&dq_list_lock);
+-		/* Commit dquot before releasing */
+-		ret = dquot->dq_sb->dq_op->write_dquot(dquot);
+-		if (ret < 0) {
+-			quota_error(dquot->dq_sb, "Can't write quota structure"
+-				    " (error %d). Quota may get out of sync!",
+-				    ret);
+-			/*
+-			 * We clear dirty bit anyway, so that we avoid
+-			 * infinite loop here
+-			 */
+-			clear_dquot_dirty(dquot);
+-		}
+-		goto we_slept;
+-	}
+-	if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
+-		spin_unlock(&dq_list_lock);
+-		dquot->dq_sb->dq_op->release_dquot(dquot);
+-		goto we_slept;
+-	}
+-	atomic_dec(&dquot->dq_count);
+ #ifdef CONFIG_QUOTA_DEBUG
+ 	/* sanity check */
+ 	BUG_ON(!list_empty(&dquot->dq_free));
+ #endif
+-	put_dquot_last(dquot);
++	put_releasing_dquots(dquot);
+ 	spin_unlock(&dq_list_lock);
++	queue_delayed_work(system_unbound_wq, &quota_release_work, 1);
+ }
+ EXPORT_SYMBOL(dqput);
+ 
+@@ -910,7 +968,7 @@ we_slept:
+ 	 * already finished or it will be canceled due to dq_count > 1 test */
+ 	wait_on_dquot(dquot);
+ 	/* Read the dquot / allocate space in quota file */
+-	if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
++	if (!dquot_active(dquot)) {
+ 		int err;
+ 
+ 		err = sb->dq_op->acquire_dquot(dquot);
+@@ -1427,7 +1485,7 @@ static int info_bdq_free(struct dquot *dquot, qsize_t space)
+ 	return QUOTA_NL_NOWARN;
+ }
+ 
+-static int dquot_active(const struct inode *inode)
++static int inode_quota_active(const struct inode *inode)
+ {
+ 	struct super_block *sb = inode->i_sb;
+ 
+@@ -1450,7 +1508,7 @@ static int __dquot_initialize(struct inode *inode, int type)
+ 	qsize_t rsv;
+ 	int ret = 0;
+ 
+-	if (!dquot_active(inode))
++	if (!inode_quota_active(inode))
+ 		return 0;
+ 
+ 	dquots = i_dquot(inode);
+@@ -1558,7 +1616,7 @@ bool dquot_initialize_needed(struct inode *inode)
+ 	struct dquot **dquots;
+ 	int i;
+ 
+-	if (!dquot_active(inode))
++	if (!inode_quota_active(inode))
+ 		return false;
+ 
+ 	dquots = i_dquot(inode);
+@@ -1669,7 +1727,7 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
+ 	int reserve = flags & DQUOT_SPACE_RESERVE;
+ 	struct dquot **dquots;
+ 
+-	if (!dquot_active(inode)) {
++	if (!inode_quota_active(inode)) {
+ 		if (reserve) {
+ 			spin_lock(&inode->i_lock);
+ 			*inode_reserved_space(inode) += number;
+@@ -1739,7 +1797,7 @@ int dquot_alloc_inode(struct inode *inode)
+ 	struct dquot_warn warn[MAXQUOTAS];
+ 	struct dquot * const *dquots;
+ 
+-	if (!dquot_active(inode))
++	if (!inode_quota_active(inode))
+ 		return 0;
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
+ 		warn[cnt].w_type = QUOTA_NL_NOWARN;
+@@ -1782,7 +1840,7 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
+ 	struct dquot **dquots;
+ 	int cnt, index;
+ 
+-	if (!dquot_active(inode)) {
++	if (!inode_quota_active(inode)) {
+ 		spin_lock(&inode->i_lock);
+ 		*inode_reserved_space(inode) -= number;
+ 		__inode_add_bytes(inode, number);
+@@ -1824,7 +1882,7 @@ void dquot_reclaim_space_nodirty(struct inode *inode, qsize_t number)
+ 	struct dquot **dquots;
+ 	int cnt, index;
+ 
+-	if (!dquot_active(inode)) {
++	if (!inode_quota_active(inode)) {
+ 		spin_lock(&inode->i_lock);
+ 		*inode_reserved_space(inode) += number;
+ 		__inode_sub_bytes(inode, number);
+@@ -1868,7 +1926,7 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
+ 	struct dquot **dquots;
+ 	int reserve = flags & DQUOT_SPACE_RESERVE, index;
+ 
+-	if (!dquot_active(inode)) {
++	if (!inode_quota_active(inode)) {
+ 		if (reserve) {
+ 			spin_lock(&inode->i_lock);
+ 			*inode_reserved_space(inode) -= number;
+@@ -1923,7 +1981,7 @@ void dquot_free_inode(struct inode *inode)
+ 	struct dquot * const *dquots;
+ 	int index;
+ 
+-	if (!dquot_active(inode))
++	if (!inode_quota_active(inode))
+ 		return;
+ 
+ 	dquots = i_dquot(inode);
+@@ -2094,7 +2152,7 @@ int dquot_transfer(struct inode *inode, struct iattr *iattr)
+ 	struct super_block *sb = inode->i_sb;
+ 	int ret;
+ 
+-	if (!dquot_active(inode))
++	if (!inode_quota_active(inode))
+ 		return 0;
+ 
+ 	if (iattr->ia_valid & ATTR_UID && !uid_eq(iattr->ia_uid, inode->i_uid)){
+diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
+index df5fc12a6ceed..cfa4defbd0401 100644
+--- a/fs/reiserfs/journal.c
++++ b/fs/reiserfs/journal.c
+@@ -2325,7 +2325,7 @@ static struct buffer_head *reiserfs_breada(struct block_device *dev,
+ 	int i, j;
+ 
+ 	bh = __getblk(dev, block, bufsize);
+-	if (buffer_uptodate(bh))
++	if (!bh || buffer_uptodate(bh))
+ 		return (bh);
+ 
+ 	if (block + BUFNR > max_block) {
+@@ -2335,6 +2335,8 @@ static struct buffer_head *reiserfs_breada(struct block_device *dev,
+ 	j = 1;
+ 	for (i = 1; i < blocks; i++) {
+ 		bh = __getblk(dev, block + i, bufsize);
++		if (!bh)
++			break;
+ 		if (buffer_uptodate(bh)) {
+ 			brelse(bh);
+ 			break;
+diff --git a/fs/udf/balloc.c b/fs/udf/balloc.c
+index 8e597db4d9710..f416b7fe092fc 100644
+--- a/fs/udf/balloc.c
++++ b/fs/udf/balloc.c
+@@ -36,18 +36,41 @@ static int read_block_bitmap(struct super_block *sb,
+ 			     unsigned long bitmap_nr)
+ {
+ 	struct buffer_head *bh = NULL;
+-	int retval = 0;
++	int i;
++	int max_bits, off, count;
+ 	struct kernel_lb_addr loc;
+ 
+ 	loc.logicalBlockNum = bitmap->s_extPosition;
+ 	loc.partitionReferenceNum = UDF_SB(sb)->s_partition;
+ 
+ 	bh = udf_tread(sb, udf_get_lb_pblock(sb, &loc, block));
++	bitmap->s_block_bitmap[bitmap_nr] = bh;
+ 	if (!bh)
+-		retval = -EIO;
++		return -EIO;
+ 
+-	bitmap->s_block_bitmap[bitmap_nr] = bh;
+-	return retval;
++	/* Check consistency of Space Bitmap buffer. */
++	max_bits = sb->s_blocksize * 8;
++	if (!bitmap_nr) {
++		off = sizeof(struct spaceBitmapDesc) << 3;
++		count = min(max_bits - off, bitmap->s_nr_groups);
++	} else {
++		/*
++		 * Rough check if bitmap number is too big to have any bitmap
++		 * blocks reserved.
++		 */
++		if (bitmap_nr >
++		    (bitmap->s_nr_groups >> (sb->s_blocksize_bits + 3)) + 2)
++			return 0;
++		off = 0;
++		count = bitmap->s_nr_groups - bitmap_nr * max_bits +
++				(sizeof(struct spaceBitmapDesc) << 3);
++		count = min(count, max_bits);
++	}
++
++	for (i = 0; i < count; i++)
++		if (udf_test_bit(i + off, bh->b_data))
++			return -EFSCORRUPTED;
++	return 0;
+ }
+ 
+ static int __load_block_bitmap(struct super_block *sb,
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index d114774ecdea8..499a27372a40b 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -57,15 +57,15 @@ static int udf_update_inode(struct inode *, int);
+ static int udf_sync_inode(struct inode *inode);
+ static int udf_alloc_i_data(struct inode *inode, size_t size);
+ static sector_t inode_getblk(struct inode *, sector_t, int *, int *);
+-static int8_t udf_insert_aext(struct inode *, struct extent_position,
+-			      struct kernel_lb_addr, uint32_t);
++static int udf_insert_aext(struct inode *, struct extent_position,
++			   struct kernel_lb_addr, uint32_t);
+ static void udf_split_extents(struct inode *, int *, int, udf_pblk_t,
+ 			      struct kernel_long_ad *, int *);
+ static void udf_prealloc_extents(struct inode *, int, int,
+ 				 struct kernel_long_ad *, int *);
+ static void udf_merge_extents(struct inode *, struct kernel_long_ad *, int *);
+-static void udf_update_extents(struct inode *, struct kernel_long_ad *, int,
+-			       int, struct extent_position *);
++static int udf_update_extents(struct inode *, struct kernel_long_ad *, int,
++			      int, struct extent_position *);
+ static int udf_get_block(struct inode *, sector_t, struct buffer_head *, int);
+ 
+ static void __udf_clear_extent_cache(struct inode *inode)
+@@ -695,7 +695,7 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
+ 	struct kernel_lb_addr eloc, tmpeloc;
+ 	int c = 1;
+ 	loff_t lbcount = 0, b_off = 0;
+-	udf_pblk_t newblocknum, newblock;
++	udf_pblk_t newblocknum, newblock = 0;
+ 	sector_t offset = 0;
+ 	int8_t etype;
+ 	struct udf_inode_info *iinfo = UDF_I(inode);
+@@ -798,7 +798,6 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
+ 		ret = udf_do_extend_file(inode, &prev_epos, laarr, hole_len);
+ 		if (ret < 0) {
+ 			*err = ret;
+-			newblock = 0;
+ 			goto out_free;
+ 		}
+ 		c = 0;
+@@ -861,7 +860,6 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
+ 				goal, err);
+ 		if (!newblocknum) {
+ 			*err = -ENOSPC;
+-			newblock = 0;
+ 			goto out_free;
+ 		}
+ 		if (isBeyondEOF)
+@@ -887,7 +885,9 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
+ 	/* write back the new extents, inserting new extents if the new number
+ 	 * of extents is greater than the old number, and deleting extents if
+ 	 * the new number of extents is less than the old number */
+-	udf_update_extents(inode, laarr, startnum, endnum, &prev_epos);
++	*err = udf_update_extents(inode, laarr, startnum, endnum, &prev_epos);
++	if (*err < 0)
++		goto out_free;
+ 
+ 	newblock = udf_get_pblock(inode->i_sb, newblocknum,
+ 				iinfo->i_location.partitionReferenceNum, 0);
+@@ -1155,21 +1155,30 @@ static void udf_merge_extents(struct inode *inode, struct kernel_long_ad *laarr,
+ 	}
+ }
+ 
+-static void udf_update_extents(struct inode *inode, struct kernel_long_ad *laarr,
+-			       int startnum, int endnum,
+-			       struct extent_position *epos)
++static int udf_update_extents(struct inode *inode, struct kernel_long_ad *laarr,
++			      int startnum, int endnum,
++			      struct extent_position *epos)
+ {
+ 	int start = 0, i;
+ 	struct kernel_lb_addr tmploc;
+ 	uint32_t tmplen;
++	int err;
+ 
+ 	if (startnum > endnum) {
+ 		for (i = 0; i < (startnum - endnum); i++)
+ 			udf_delete_aext(inode, *epos);
+ 	} else if (startnum < endnum) {
+ 		for (i = 0; i < (endnum - startnum); i++) {
+-			udf_insert_aext(inode, *epos, laarr[i].extLocation,
+-					laarr[i].extLength);
++			err = udf_insert_aext(inode, *epos,
++					      laarr[i].extLocation,
++					      laarr[i].extLength);
++			/*
++			 * If we fail here, we are likely corrupting the extent
++			 * list and leaking blocks. At least stop early to
++			 * limit the damage.
++			 */
++			if (err < 0)
++				return err;
+ 			udf_next_aext(inode, epos, &laarr[i].extLocation,
+ 				      &laarr[i].extLength, 1);
+ 			start++;
+@@ -1181,6 +1190,7 @@ static void udf_update_extents(struct inode *inode, struct kernel_long_ad *laarr
+ 		udf_write_aext(inode, epos, &laarr[i].extLocation,
+ 			       laarr[i].extLength, 1);
+ 	}
++	return 0;
+ }
+ 
+ struct buffer_head *udf_bread(struct inode *inode, udf_pblk_t block,
+@@ -2215,12 +2225,13 @@ int8_t udf_current_aext(struct inode *inode, struct extent_position *epos,
+ 	return etype;
+ }
+ 
+-static int8_t udf_insert_aext(struct inode *inode, struct extent_position epos,
+-			      struct kernel_lb_addr neloc, uint32_t nelen)
++static int udf_insert_aext(struct inode *inode, struct extent_position epos,
++			   struct kernel_lb_addr neloc, uint32_t nelen)
+ {
+ 	struct kernel_lb_addr oeloc;
+ 	uint32_t oelen;
+ 	int8_t etype;
++	int err;
+ 
+ 	if (epos.bh)
+ 		get_bh(epos.bh);
+@@ -2230,10 +2241,10 @@ static int8_t udf_insert_aext(struct inode *inode, struct extent_position epos,
+ 		neloc = oeloc;
+ 		nelen = (etype << 30) | oelen;
+ 	}
+-	udf_add_aext(inode, &epos, &neloc, nelen, 1);
++	err = udf_add_aext(inode, &epos, &neloc, nelen, 1);
+ 	brelse(epos.bh);
+ 
+-	return (nelen >> 30);
++	return err;
+ }
+ 
+ int8_t udf_delete_aext(struct inode *inode, struct extent_position epos)
+diff --git a/fs/verity/signature.c b/fs/verity/signature.c
+index b14ed96387ece..ed32afc238d22 100644
+--- a/fs/verity/signature.c
++++ b/fs/verity/signature.c
+@@ -61,6 +61,22 @@ int fsverity_verify_signature(const struct fsverity_info *vi,
+ 		return -EBADMSG;
+ 	}
+ 
++	if (fsverity_keyring->keys.nr_leaves_on_tree == 0) {
++		/*
++		 * The ".fs-verity" keyring is empty, due to builtin signatures
++		 * being supported by the kernel but not actually being used.
++		 * In this case, verify_pkcs7_signature() would always return an
++		 * error, usually ENOKEY.  It could also be EBADMSG if the
++		 * PKCS#7 is malformed, but that isn't very important to
++		 * distinguish.  So, just skip to ENOKEY to avoid the attack
++		 * surface of the PKCS#7 parser, which would otherwise be
++		 * reachable by any task able to execute FS_IOC_ENABLE_VERITY.
++		 */
++		fsverity_err(inode,
++			     "fs-verity keyring is empty, rejecting signed file!");
++		return -ENOKEY;
++	}
++
+ 	d = kzalloc(sizeof(*d) + hash_alg->digest_size, GFP_KERNEL);
+ 	if (!d)
+ 		return -ENOMEM;
+diff --git a/include/acpi/apei.h b/include/acpi/apei.h
+index 680f80960c3dc..a6ac2e8b72da8 100644
+--- a/include/acpi/apei.h
++++ b/include/acpi/apei.h
+@@ -27,14 +27,16 @@ extern int hest_disable;
+ extern int erst_disable;
+ #ifdef CONFIG_ACPI_APEI_GHES
+ extern bool ghes_disable;
++void __init ghes_init(void);
+ #else
+ #define ghes_disable 1
++static inline void ghes_init(void) { }
+ #endif
+ 
+ #ifdef CONFIG_ACPI_APEI
+ void __init acpi_hest_init(void);
+ #else
+-static inline void acpi_hest_init(void) { return; }
++static inline void acpi_hest_init(void) { }
+ #endif
+ 
+ typedef int (*apei_hest_func_t)(struct acpi_hest_header *hest_hdr, void *data);
+diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
+index 18dd7a4aaf7da..96dbd438cc700 100644
+--- a/include/crypto/algapi.h
++++ b/include/crypto/algapi.h
+@@ -10,6 +10,7 @@
+ #include <linux/crypto.h>
+ #include <linux/list.h>
+ #include <linux/kernel.h>
++#include <linux/workqueue.h>
+ 
+ /*
+  * Maximum values for blocksize and alignmask, used to allocate
+@@ -55,6 +56,8 @@ struct crypto_instance {
+ 		struct crypto_spawn *spawns;
+ 	};
+ 
++	struct work_struct free_work;
++
+ 	void *__ctx[] CRYPTO_MINALIGN_ATTR;
+ };
+ 
+diff --git a/include/linux/arm_sdei.h b/include/linux/arm_sdei.h
+index 0a241c5c911d8..255701e1251b4 100644
+--- a/include/linux/arm_sdei.h
++++ b/include/linux/arm_sdei.h
+@@ -46,9 +46,13 @@ int sdei_unregister_ghes(struct ghes *ghes);
+ /* For use by arch code when CPU hotplug notifiers are not appropriate. */
+ int sdei_mask_local_cpu(void);
+ int sdei_unmask_local_cpu(void);
++void __init sdei_init(void);
++void sdei_handler_abort(void);
+ #else
+ static inline int sdei_mask_local_cpu(void) { return 0; }
+ static inline int sdei_unmask_local_cpu(void) { return 0; }
++static inline void sdei_init(void) { }
++static inline void sdei_handler_abort(void) { }
+ #endif /* CONFIG_ARM_SDE_INTERFACE */
+ 
+ 
+diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
+index 6cd2a92daf205..c1bd4883e2faf 100644
+--- a/include/linux/eventfd.h
++++ b/include/linux/eventfd.h
+@@ -42,6 +42,7 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n);
+ __u64 eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n, unsigned mask);
+ int eventfd_ctx_remove_wait_queue(struct eventfd_ctx *ctx, wait_queue_entry_t *wait,
+ 				  __u64 *cnt);
++void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt);
+ 
+ DECLARE_PER_CPU(int, eventfd_wake_count);
+ 
+@@ -89,6 +90,11 @@ static inline bool eventfd_signal_count(void)
+ 	return false;
+ }
+ 
++static inline void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt)
++{
++
++}
++
+ #endif
+ 
+ #endif /* _LINUX_EVENTFD_H */
+diff --git a/include/linux/if_arp.h b/include/linux/if_arp.h
+index e147ea6794670..91db78e67edcc 100644
+--- a/include/linux/if_arp.h
++++ b/include/linux/if_arp.h
+@@ -52,6 +52,10 @@ static inline bool dev_is_mac_header_xmit(const struct net_device *dev)
+ 	case ARPHRD_NONE:
+ 	case ARPHRD_RAWIP:
+ 	case ARPHRD_PIMREG:
++	/* PPP adds its l2 header automatically in ppp_start_xmit().
++	 * This makes it look like an l3 device to __bpf_redirect() and tcf_mirred_init().
++	 */
++	case ARPHRD_PPP:
+ 		return false;
+ 	default:
+ 		return true;
+diff --git a/include/linux/nls.h b/include/linux/nls.h
+index 499e486b3722d..e0bf8367b274a 100644
+--- a/include/linux/nls.h
++++ b/include/linux/nls.h
+@@ -47,7 +47,7 @@ enum utf16_endian {
+ /* nls_base.c */
+ extern int __register_nls(struct nls_table *, struct module *);
+ extern int unregister_nls(struct nls_table *);
+-extern struct nls_table *load_nls(char *);
++extern struct nls_table *load_nls(const char *charset);
+ extern void unload_nls(struct nls_table *);
+ extern struct nls_table *load_nls_default(void);
+ #define register_nls(nls) __register_nls((nls), THIS_MODULE)
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index e418065c2c909..6fb20722f49b7 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -745,7 +745,8 @@ extern int  perf_uprobe_init(struct perf_event *event,
+ extern void perf_uprobe_destroy(struct perf_event *event);
+ extern int bpf_get_uprobe_info(const struct perf_event *event,
+ 			       u32 *fd_type, const char **filename,
+-			       u64 *probe_offset, bool perf_type_tracepoint);
++			       u64 *probe_offset, u64 *probe_addr,
++			       bool perf_type_tracepoint);
+ #endif
+ extern int  ftrace_profile_set_filter(struct perf_event *event, int event_id,
+ 				     char *filter_str);
+diff --git a/include/linux/usb/typec_altmode.h b/include/linux/usb/typec_altmode.h
+index 5e0a7b7647c3b..22b8ee8f03114 100644
+--- a/include/linux/usb/typec_altmode.h
++++ b/include/linux/usb/typec_altmode.h
+@@ -67,7 +67,7 @@ struct typec_altmode_ops {
+ 
+ int typec_altmode_enter(struct typec_altmode *altmode, u32 *vdo);
+ int typec_altmode_exit(struct typec_altmode *altmode);
+-void typec_altmode_attention(struct typec_altmode *altmode, u32 vdo);
++int typec_altmode_attention(struct typec_altmode *altmode, u32 vdo);
+ int typec_altmode_vdm(struct typec_altmode *altmode,
+ 		      const u32 header, const u32 *vdo, int count);
+ int typec_altmode_notify(struct typec_altmode *altmode, unsigned long conf,
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 8d1173577fb5c..9be2efe00f2c0 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -56,6 +56,7 @@ struct inet_skb_parm {
+ #define IPSKB_FRAG_PMTU		BIT(6)
+ #define IPSKB_L3SLAVE		BIT(7)
+ #define IPSKB_NOPOLICY		BIT(8)
++#define IPSKB_MULTIPATH		BIT(9)
+ 
+ 	u16			frag_max_size;
+ };
+diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h
+index 1ddd401a8981f..58d8e6260aa13 100644
+--- a/include/net/ip_tunnels.h
++++ b/include/net/ip_tunnels.h
+@@ -455,15 +455,14 @@ static inline void iptunnel_xmit_stats(struct net_device *dev, int pkt_len)
+ 		tstats->tx_packets++;
+ 		u64_stats_update_end(&tstats->syncp);
+ 		put_cpu_ptr(tstats);
++		return;
++	}
++
++	if (pkt_len < 0) {
++		DEV_STATS_INC(dev, tx_errors);
++		DEV_STATS_INC(dev, tx_aborted_errors);
+ 	} else {
+-		struct net_device_stats *err_stats = &dev->stats;
+-
+-		if (pkt_len < 0) {
+-			err_stats->tx_errors++;
+-			err_stats->tx_aborted_errors++;
+-		} else {
+-			err_stats->tx_dropped++;
+-		}
++		DEV_STATS_INC(dev, tx_dropped);
+ 	}
+ }
+ 
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index 4c8f97a6da5a7..47d644de0e47c 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -1249,7 +1249,7 @@ static inline int __ip6_sock_set_addr_preferences(struct sock *sk, int val)
+ 	return 0;
+ }
+ 
+-static inline int ip6_sock_set_addr_preferences(struct sock *sk, bool val)
++static inline int ip6_sock_set_addr_preferences(struct sock *sk, int val)
+ {
+ 	int ret;
+ 
+diff --git a/include/net/lwtunnel.h b/include/net/lwtunnel.h
+index 05cfd6ff65287..d4a90eca1921a 100644
+--- a/include/net/lwtunnel.h
++++ b/include/net/lwtunnel.h
+@@ -16,9 +16,12 @@
+ #define LWTUNNEL_STATE_INPUT_REDIRECT	BIT(1)
+ #define LWTUNNEL_STATE_XMIT_REDIRECT	BIT(2)
+ 
++/* LWTUNNEL_XMIT_CONTINUE should be distinguishable from dst_output return
++ * values (NET_XMIT_xxx and NETDEV_TX_xxx in linux/netdevice.h) for safety.
++ */
+ enum {
+ 	LWTUNNEL_XMIT_DONE,
+-	LWTUNNEL_XMIT_CONTINUE,
++	LWTUNNEL_XMIT_CONTINUE = 0x100,
+ };
+ 
+ 
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index dcca41f3a2240..b56f346020351 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -337,7 +337,6 @@ ssize_t tcp_splice_read(struct socket *sk, loff_t *ppos,
+ 			struct pipe_inode_info *pipe, size_t len,
+ 			unsigned int flags);
+ 
+-void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks);
+ static inline void tcp_dec_quickack_mode(struct sock *sk,
+ 					 const unsigned int pkts)
+ {
+diff --git a/include/scsi/libsas.h b/include/scsi/libsas.h
+index e6a43163ab5b7..daf9b07956abf 100644
+--- a/include/scsi/libsas.h
++++ b/include/scsi/libsas.h
+@@ -474,10 +474,16 @@ enum service_response {
+ };
+ 
+ enum exec_status {
+-	/* The SAM_STAT_.. codes fit in the lower 6 bits, alias some of
+-	 * them here to silence 'case value not in enumerated type' warnings
++	/*
++	 * Values 0..0x7f are used to return the SAM_STAT_* codes.  To avoid
++	 * 'case value not in enumerated type' compiler warnings every value
++	 * returned through the exec_status enum needs an alias with the SAS_
++	 * prefix here.
+ 	 */
+-	__SAM_STAT_CHECK_CONDITION = SAM_STAT_CHECK_CONDITION,
++	SAS_SAM_STAT_GOOD = SAM_STAT_GOOD,
++	SAS_SAM_STAT_BUSY = SAM_STAT_BUSY,
++	SAS_SAM_STAT_TASK_ABORTED = SAM_STAT_TASK_ABORTED,
++	SAS_SAM_STAT_CHECK_CONDITION = SAM_STAT_CHECK_CONDITION,
+ 
+ 	SAS_DEV_NO_RESPONSE = 0x80,
+ 	SAS_DATA_UNDERRUN,
+diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
+index 701f178b20aee..4a9f1e6e3aaca 100644
+--- a/include/scsi/scsi_host.h
++++ b/include/scsi/scsi_host.h
+@@ -745,7 +745,7 @@ extern void scsi_remove_host(struct Scsi_Host *);
+ extern struct Scsi_Host *scsi_host_get(struct Scsi_Host *);
+ extern int scsi_host_busy(struct Scsi_Host *shost);
+ extern void scsi_host_put(struct Scsi_Host *t);
+-extern struct Scsi_Host *scsi_host_lookup(unsigned short);
++extern struct Scsi_Host *scsi_host_lookup(unsigned int hostnum);
+ extern const char *scsi_host_state_name(enum scsi_host_state);
+ extern void scsi_host_complete_all_commands(struct Scsi_Host *shost,
+ 					    int status);
+diff --git a/include/uapi/linux/sync_file.h b/include/uapi/linux/sync_file.h
+index ee2dcfb3d6602..d7f7c04a6e0c1 100644
+--- a/include/uapi/linux/sync_file.h
++++ b/include/uapi/linux/sync_file.h
+@@ -52,7 +52,7 @@ struct sync_fence_info {
+  * @name:	name of fence
+  * @status:	status of fence. 1: signaled 0:active <0:error
+  * @flags:	sync_file_info flags
+- * @num_fences	number of fences in the sync_file
++ * @num_fences:	number of fences in the sync_file
+  * @pad:	padding for 64-bit alignment, should always be zero
+  * @sync_fence_info: pointer to array of structs sync_fence_info with all
+  *		 fences in the sync_file
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index 81485c1a9879e..fe8594a0396ca 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -176,6 +176,16 @@ static void io_worker_ref_put(struct io_wq *wq)
+ 		complete(&wq->worker_done);
+ }
+ 
++bool io_wq_worker_stopped(void)
++{
++	struct io_worker *worker = current->pf_io_worker;
++
++	if (WARN_ON_ONCE(!io_wq_current_is_worker()))
++		return true;
++
++	return test_bit(IO_WQ_BIT_EXIT, &worker->wqe->wq->state);
++}
++
+ static void io_worker_cancel_cb(struct io_worker *worker)
+ {
+ 	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
+diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
+index bf5c4c5337605..48721cbd5f40b 100644
+--- a/io_uring/io-wq.h
++++ b/io_uring/io-wq.h
+@@ -129,6 +129,7 @@ void io_wq_hash_work(struct io_wq_work *work, void *val);
+ 
+ int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask);
+ int io_wq_max_workers(struct io_wq *wq, int *new_count);
++bool io_wq_worker_stopped(void);
+ 
+ static inline bool io_wq_is_hashed(struct io_wq_work *work)
+ {
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index dec8a6355f2ae..800b5cc385af3 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -2665,6 +2665,11 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
+ 				break;
+ 		}
+ 		ret = io_do_iopoll(ctx, &nr_events, min);
++
++		if (task_sigpending(current)) {
++			ret = -EINTR;
++			goto out;
++		}
+ 	} while (!ret && nr_events < min && !need_resched());
+ out:
+ 	mutex_unlock(&ctx->uring_lock);
+@@ -5571,6 +5576,7 @@ static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
+ 	if (ret > 0)
+ 		return;
+ 
++	io_tw_lock(req->ctx, locked);
+ 	io_poll_remove_entries(req);
+ 	spin_lock(&ctx->completion_lock);
+ 	hash_del(&req->hash_node);
+@@ -6897,7 +6903,8 @@ static void io_wq_submit_work(struct io_wq_work *work)
+ 			 */
+ 			if (ret != -EAGAIN || !(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 				break;
+-
++			if (io_wq_worker_stopped())
++				break;
+ 			/*
+ 			 * If REQ_F_NOWAIT is set, then don't wait or retry with
+ 			 * poll. -EAGAIN is final for that case.
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 07e2788bbbf12..57b982b44732e 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -2203,6 +2203,8 @@ void __audit_inode_child(struct inode *parent,
+ 		}
+ 	}
+ 
++	cond_resched();
++
+ 	/* is there a matching child entry? */
+ 	list_for_each_entry(n, &context->names_list, list) {
+ 		/* can only match entries that have a name */
+diff --git a/kernel/cgroup/namespace.c b/kernel/cgroup/namespace.c
+index 812a61afd538a..d2b4dd753234e 100644
+--- a/kernel/cgroup/namespace.c
++++ b/kernel/cgroup/namespace.c
+@@ -149,9 +149,3 @@ const struct proc_ns_operations cgroupns_operations = {
+ 	.install	= cgroupns_install,
+ 	.owner		= cgroupns_owner,
+ };
+-
+-static __init int cgroup_namespaces_init(void)
+-{
+-	return 0;
+-}
+-subsys_initcall(cgroup_namespaces_init);
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 86d71c49b4957..f1ea3123f3832 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1628,6 +1628,17 @@ int __weak arch_check_ftrace_location(struct kprobe *p)
+ 	return 0;
+ }
+ 
++static bool is_cfi_preamble_symbol(unsigned long addr)
++{
++	char symbuf[KSYM_NAME_LEN];
++
++	if (lookup_symbol_name(addr, symbuf))
++		return false;
++
++	return str_has_prefix("__cfi_", symbuf) ||
++		str_has_prefix("__pfx_", symbuf);
++}
++
+ static int check_kprobe_address_safe(struct kprobe *p,
+ 				     struct module **probed_mod)
+ {
+@@ -1646,7 +1657,8 @@ static int check_kprobe_address_safe(struct kprobe *p,
+ 	    within_kprobe_blacklist((unsigned long) p->addr) ||
+ 	    jump_label_text_reserved(p->addr, p->addr) ||
+ 	    static_call_text_reserved(p->addr, p->addr) ||
+-	    find_bug((unsigned long)p->addr)) {
++	    find_bug((unsigned long)p->addr) ||
++	    is_cfi_preamble_symbol((unsigned long)p->addr)) {
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+diff --git a/kernel/module.c b/kernel/module.c
+index 82af3ef79135a..72a5dcdccf7b1 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -2268,15 +2268,26 @@ static void free_module(struct module *mod)
+ void *__symbol_get(const char *symbol)
+ {
+ 	struct module *owner;
++	enum mod_license license;
+ 	const struct kernel_symbol *sym;
+ 
+ 	preempt_disable();
+-	sym = find_symbol(symbol, &owner, NULL, NULL, true, true);
+-	if (sym && strong_try_module_get(owner))
++	sym = find_symbol(symbol, &owner, NULL, &license, true, true);
++	if (!sym)
++		goto fail;
++	if (license != GPL_ONLY) {
++		pr_warn("failing symbol_get of non-GPLONLY symbol %s.\n",
++			symbol);
++		goto fail;
++	}
++	if (strong_try_module_get(owner))
+ 		sym = NULL;
+ 	preempt_enable();
+ 
+ 	return sym ? (void *)kernel_symbol_value(sym) : NULL;
++fail:
++	preempt_enable();
++	return NULL;
+ }
+ EXPORT_SYMBOL_GPL(__symbol_get);
+ 
+diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c
+index 617dd63589650..3e8214e9bf4ab 100644
+--- a/kernel/printk/printk_ringbuffer.c
++++ b/kernel/printk/printk_ringbuffer.c
+@@ -1726,7 +1726,7 @@ static bool copy_data(struct prb_data_ring *data_ring,
+ 	if (!buf || !buf_size)
+ 		return true;
+ 
+-	data_size = min_t(u16, buf_size, len);
++	data_size = min_t(unsigned int, buf_size, len);
+ 
+ 	memcpy(&buf[0], data, data_size); /* LMM(copy_data:A) */
+ 	return true;
+diff --git a/kernel/rcu/refscale.c b/kernel/rcu/refscale.c
+index 4e419ca6d6114..dbd670376c42e 100644
+--- a/kernel/rcu/refscale.c
++++ b/kernel/rcu/refscale.c
+@@ -692,12 +692,11 @@ ref_scale_init(void)
+ 	VERBOSE_SCALEOUT("Starting %d reader threads\n", nreaders);
+ 
+ 	for (i = 0; i < nreaders; i++) {
++		init_waitqueue_head(&reader_tasks[i].wq);
+ 		firsterr = torture_create_kthread(ref_scale_reader, (void *)i,
+ 						  reader_tasks[i].task);
+ 		if (firsterr)
+ 			goto unwind;
+-
+-		init_waitqueue_head(&(reader_tasks[i].wq));
+ 	}
+ 
+ 	// Main Task
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 1de9a6bf84711..71e0c1bc9759e 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -2174,7 +2174,7 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
+ #ifdef CONFIG_UPROBE_EVENTS
+ 		if (flags & TRACE_EVENT_FL_UPROBE)
+ 			err = bpf_get_uprobe_info(event, fd_type, buf,
+-						  probe_offset,
++						  probe_offset, probe_addr,
+ 						  event->attr.type == PERF_TYPE_TRACEPOINT);
+ #endif
+ 	}
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 597487a7f1bfb..d3e0ae1168069 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6255,10 +6255,36 @@ tracing_max_lat_write(struct file *filp, const char __user *ubuf,
+ 
+ #endif
+ 
++static int open_pipe_on_cpu(struct trace_array *tr, int cpu)
++{
++	if (cpu == RING_BUFFER_ALL_CPUS) {
++		if (cpumask_empty(tr->pipe_cpumask)) {
++			cpumask_setall(tr->pipe_cpumask);
++			return 0;
++		}
++	} else if (!cpumask_test_cpu(cpu, tr->pipe_cpumask)) {
++		cpumask_set_cpu(cpu, tr->pipe_cpumask);
++		return 0;
++	}
++	return -EBUSY;
++}
++
++static void close_pipe_on_cpu(struct trace_array *tr, int cpu)
++{
++	if (cpu == RING_BUFFER_ALL_CPUS) {
++		WARN_ON(!cpumask_full(tr->pipe_cpumask));
++		cpumask_clear(tr->pipe_cpumask);
++	} else {
++		WARN_ON(!cpumask_test_cpu(cpu, tr->pipe_cpumask));
++		cpumask_clear_cpu(cpu, tr->pipe_cpumask);
++	}
++}
++
+ static int tracing_open_pipe(struct inode *inode, struct file *filp)
+ {
+ 	struct trace_array *tr = inode->i_private;
+ 	struct trace_iterator *iter;
++	int cpu;
+ 	int ret;
+ 
+ 	ret = tracing_check_open_get_tr(tr);
+@@ -6266,13 +6292,16 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp)
+ 		return ret;
+ 
+ 	mutex_lock(&trace_types_lock);
++	cpu = tracing_get_cpu(inode);
++	ret = open_pipe_on_cpu(tr, cpu);
++	if (ret)
++		goto fail_pipe_on_cpu;
+ 
+ 	/* create a buffer to store the information to pass to userspace */
+ 	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
+ 	if (!iter) {
+ 		ret = -ENOMEM;
+-		__trace_array_put(tr);
+-		goto out;
++		goto fail_alloc_iter;
+ 	}
+ 
+ 	trace_seq_init(&iter->seq);
+@@ -6295,7 +6324,7 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp)
+ 
+ 	iter->tr = tr;
+ 	iter->array_buffer = &tr->array_buffer;
+-	iter->cpu_file = tracing_get_cpu(inode);
++	iter->cpu_file = cpu;
+ 	mutex_init(&iter->mutex);
+ 	filp->private_data = iter;
+ 
+@@ -6305,12 +6334,15 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp)
+ 	nonseekable_open(inode, filp);
+ 
+ 	tr->trace_ref++;
+-out:
++
+ 	mutex_unlock(&trace_types_lock);
+ 	return ret;
+ 
+ fail:
+ 	kfree(iter);
++fail_alloc_iter:
++	close_pipe_on_cpu(tr, cpu);
++fail_pipe_on_cpu:
+ 	__trace_array_put(tr);
+ 	mutex_unlock(&trace_types_lock);
+ 	return ret;
+@@ -6327,7 +6359,7 @@ static int tracing_release_pipe(struct inode *inode, struct file *file)
+ 
+ 	if (iter->trace->pipe_close)
+ 		iter->trace->pipe_close(iter);
+-
++	close_pipe_on_cpu(tr, iter->cpu_file);
+ 	mutex_unlock(&trace_types_lock);
+ 
+ 	free_cpumask_var(iter->started);
+@@ -7132,6 +7164,11 @@ out:
+ 	return ret;
+ }
+ 
++static void tracing_swap_cpu_buffer(void *tr)
++{
++	update_max_tr_single((struct trace_array *)tr, current, smp_processor_id());
++}
++
+ static ssize_t
+ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ 		       loff_t *ppos)
+@@ -7190,13 +7227,15 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ 			ret = tracing_alloc_snapshot_instance(tr);
+ 		if (ret < 0)
+ 			break;
+-		local_irq_disable();
+ 		/* Now, we're going to swap */
+-		if (iter->cpu_file == RING_BUFFER_ALL_CPUS)
++		if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
++			local_irq_disable();
+ 			update_max_tr(tr, current, smp_processor_id(), NULL);
+-		else
+-			update_max_tr_single(tr, current, iter->cpu_file);
+-		local_irq_enable();
++			local_irq_enable();
++		} else {
++			smp_call_function_single(iter->cpu_file, tracing_swap_cpu_buffer,
++						 (void *)tr, 1);
++		}
+ 		break;
+ 	default:
+ 		if (tr->allocated_snapshot) {
+@@ -8851,6 +8890,9 @@ static struct trace_array *trace_array_create(const char *name)
+ 	if (!alloc_cpumask_var(&tr->tracing_cpumask, GFP_KERNEL))
+ 		goto out_free_tr;
+ 
++	if (!zalloc_cpumask_var(&tr->pipe_cpumask, GFP_KERNEL))
++		goto out_free_tr;
++
+ 	tr->trace_flags = global_trace.trace_flags & ~ZEROED_TRACE_FLAGS;
+ 
+ 	cpumask_copy(tr->tracing_cpumask, cpu_all_mask);
+@@ -8892,6 +8934,7 @@ static struct trace_array *trace_array_create(const char *name)
+  out_free_tr:
+ 	ftrace_free_ftrace_ops(tr);
+ 	free_trace_buffers(tr);
++	free_cpumask_var(tr->pipe_cpumask);
+ 	free_cpumask_var(tr->tracing_cpumask);
+ 	kfree(tr->name);
+ 	kfree(tr);
+@@ -8993,6 +9036,7 @@ static int __remove_instance(struct trace_array *tr)
+ 	}
+ 	kfree(tr->topts);
+ 
++	free_cpumask_var(tr->pipe_cpumask);
+ 	free_cpumask_var(tr->tracing_cpumask);
+ 	kfree(tr->name);
+ 	kfree(tr);
+@@ -9692,12 +9736,14 @@ __init static int tracer_alloc_buffers(void)
+ 	if (trace_create_savedcmd() < 0)
+ 		goto out_free_temp_buffer;
+ 
++	if (!zalloc_cpumask_var(&global_trace.pipe_cpumask, GFP_KERNEL))
++		goto out_free_savedcmd;
++
+ 	/* TODO: make the number of buffers hot pluggable with CPUS */
+ 	if (allocate_trace_buffers(&global_trace, ring_buf_size) < 0) {
+ 		MEM_FAIL(1, "tracer: failed to allocate ring buffer!\n");
+-		goto out_free_savedcmd;
++		goto out_free_pipe_cpumask;
+ 	}
+-
+ 	if (global_trace.buffer_disabled)
+ 		tracing_off();
+ 
+@@ -9748,6 +9794,8 @@ __init static int tracer_alloc_buffers(void)
+ 
+ 	return 0;
+ 
++out_free_pipe_cpumask:
++	free_cpumask_var(global_trace.pipe_cpumask);
+ out_free_savedcmd:
+ 	free_saved_cmdlines_buffer(savedcmd);
+ out_free_temp_buffer:
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 892b3d2f33b79..dfde855dafda7 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -356,6 +356,8 @@ struct trace_array {
+ 	struct list_head	events;
+ 	struct trace_event_file *trace_marker_file;
+ 	cpumask_var_t		tracing_cpumask; /* only trace on set CPUs */
++	/* one per_cpu trace_pipe can be opened by only one user */
++	cpumask_var_t		pipe_cpumask;
+ 	int			ref;
+ 	int			trace_ref;
+ #ifdef CONFIG_FUNCTION_TRACER
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index f6c47361c154e..60ff36f5d7f9e 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -1422,7 +1422,7 @@ static void uretprobe_perf_func(struct trace_uprobe *tu, unsigned long func,
+ 
+ int bpf_get_uprobe_info(const struct perf_event *event, u32 *fd_type,
+ 			const char **filename, u64 *probe_offset,
+-			bool perf_type_tracepoint)
++			u64 *probe_addr, bool perf_type_tracepoint)
+ {
+ 	const char *pevent = trace_event_name(event->tp_event);
+ 	const char *group = event->tp_event->class->system;
+@@ -1439,6 +1439,7 @@ int bpf_get_uprobe_info(const struct perf_event *event, u32 *fd_type,
+ 				    : BPF_FD_TYPE_UPROBE;
+ 	*filename = tu->filename;
+ 	*probe_offset = tu->offset;
++	*probe_addr = 0;
+ 	return 0;
+ }
+ #endif	/* CONFIG_PERF_EVENTS */
+diff --git a/lib/idr.c b/lib/idr.c
+index 7ecdfdb5309e7..13f2758c23773 100644
+--- a/lib/idr.c
++++ b/lib/idr.c
+@@ -100,7 +100,7 @@ EXPORT_SYMBOL_GPL(idr_alloc);
+  * @end: The maximum ID (exclusive).
+  * @gfp: Memory allocation flags.
+  *
+- * Allocates an unused ID in the range specified by @nextid and @end.  If
++ * Allocates an unused ID in the range specified by @start and @end.  If
+  * @end is <= 0, it is treated as one larger than %INT_MAX.  This allows
+  * callers to use @start + N as @end as long as N is within integer range.
+  * The search for an unused ID will start at the last ID allocated and will
+diff --git a/lib/test_meminit.c b/lib/test_meminit.c
+index 3ca717f113977..75638404ed573 100644
+--- a/lib/test_meminit.c
++++ b/lib/test_meminit.c
+@@ -86,7 +86,7 @@ static int __init test_pages(int *total_failures)
+ 	int failures = 0, num_tests = 0;
+ 	int i;
+ 
+-	for (i = 0; i < 10; i++)
++	for (i = 0; i <= MAX_ORDER; i++)
+ 		num_tests += do_alloc_pages_order(i, &failures);
+ 
+ 	REPORT_FAILURES_IN_FN();
+diff --git a/mm/shmem.c b/mm/shmem.c
+index cfa8f43cb3a62..e173d83b44481 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -3455,6 +3455,8 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
+ 	unsigned long long size;
+ 	char *rest;
+ 	int opt;
++	kuid_t kuid;
++	kgid_t kgid;
+ 
+ 	opt = fs_parse(fc, shmem_fs_parameters, param, &result);
+ 	if (opt < 0)
+@@ -3490,14 +3492,32 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
+ 		ctx->mode = result.uint_32 & 07777;
+ 		break;
+ 	case Opt_uid:
+-		ctx->uid = make_kuid(current_user_ns(), result.uint_32);
+-		if (!uid_valid(ctx->uid))
++		kuid = make_kuid(current_user_ns(), result.uint_32);
++		if (!uid_valid(kuid))
+ 			goto bad_value;
++
++		/*
++		 * The requested uid must be representable in the
++		 * filesystem's idmapping.
++		 */
++		if (!kuid_has_mapping(fc->user_ns, kuid))
++			goto bad_value;
++
++		ctx->uid = kuid;
+ 		break;
+ 	case Opt_gid:
+-		ctx->gid = make_kgid(current_user_ns(), result.uint_32);
+-		if (!gid_valid(ctx->gid))
++		kgid = make_kgid(current_user_ns(), result.uint_32);
++		if (!gid_valid(kgid))
+ 			goto bad_value;
++
++		/*
++		 * The requested gid must be representable in the
++		 * filesystem's idmapping.
++		 */
++		if (!kgid_has_mapping(fc->user_ns, kgid))
++			goto bad_value;
++
++		ctx->gid = kgid;
+ 		break;
+ 	case Opt_huge:
+ 		ctx->huge = result.uint_32;
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index f582351d84ecb..36b5f72e2165c 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -394,7 +394,7 @@ p9_virtio_zc_request(struct p9_client *client, struct p9_req_t *req,
+ 	struct page **in_pages = NULL, **out_pages = NULL;
+ 	struct virtio_chan *chan = client->trans;
+ 	struct scatterlist *sgs[4];
+-	size_t offs;
++	size_t offs = 0;
+ 	int need_drop = 0;
+ 	int kicked = 0;
+ 
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index bd6f20ef13f35..46e1e51ff28e3 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -2343,9 +2343,9 @@ void hci_uuids_clear(struct hci_dev *hdev)
+ 
+ void hci_link_keys_clear(struct hci_dev *hdev)
+ {
+-	struct link_key *key;
++	struct link_key *key, *tmp;
+ 
+-	list_for_each_entry(key, &hdev->link_keys, list) {
++	list_for_each_entry_safe(key, tmp, &hdev->link_keys, list) {
+ 		list_del_rcu(&key->list);
+ 		kfree_rcu(key, rcu);
+ 	}
+@@ -2353,9 +2353,9 @@ void hci_link_keys_clear(struct hci_dev *hdev)
+ 
+ void hci_smp_ltks_clear(struct hci_dev *hdev)
+ {
+-	struct smp_ltk *k;
++	struct smp_ltk *k, *tmp;
+ 
+-	list_for_each_entry(k, &hdev->long_term_keys, list) {
++	list_for_each_entry_safe(k, tmp, &hdev->long_term_keys, list) {
+ 		list_del_rcu(&k->list);
+ 		kfree_rcu(k, rcu);
+ 	}
+@@ -2363,9 +2363,9 @@ void hci_smp_ltks_clear(struct hci_dev *hdev)
+ 
+ void hci_smp_irks_clear(struct hci_dev *hdev)
+ {
+-	struct smp_irk *k;
++	struct smp_irk *k, *tmp;
+ 
+-	list_for_each_entry(k, &hdev->identity_resolving_keys, list) {
++	list_for_each_entry_safe(k, tmp, &hdev->identity_resolving_keys, list) {
+ 		list_del_rcu(&k->list);
+ 		kfree_rcu(k, rcu);
+ 	}
+@@ -2373,9 +2373,9 @@ void hci_smp_irks_clear(struct hci_dev *hdev)
+ 
+ void hci_blocked_keys_clear(struct hci_dev *hdev)
+ {
+-	struct blocked_key *b;
++	struct blocked_key *b, *tmp;
+ 
+-	list_for_each_entry(b, &hdev->blocked_keys, list) {
++	list_for_each_entry_safe(b, tmp, &hdev->blocked_keys, list) {
+ 		list_del_rcu(&b->list);
+ 		kfree_rcu(b, rcu);
+ 	}
+diff --git a/net/core/filter.c b/net/core/filter.c
+index b9c954182b375..ea8ab9c704832 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -6661,6 +6661,8 @@ BPF_CALL_3(bpf_sk_assign, struct sk_buff *, skb, struct sock *, sk, u64, flags)
+ 		return -ENETUNREACH;
+ 	if (unlikely(sk_fullsock(sk) && sk->sk_reuseport))
+ 		return -ESOCKTNOSUPPORT;
++	if (sk_unhashed(sk))
++		return -EOPNOTSUPP;
+ 	if (sk_is_refcounted(sk) &&
+ 	    unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))
+ 		return -ENOENT;
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index b8d082f557183..3d5192177560d 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1589,8 +1589,7 @@ u32 __skb_get_hash_symmetric(const struct sk_buff *skb)
+ 
+ 	memset(&keys, 0, sizeof(keys));
+ 	__skb_flow_dissect(NULL, skb, &flow_keys_dissector_symmetric,
+-			   &keys, NULL, 0, 0, 0,
+-			   FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL);
++			   &keys, NULL, 0, 0, 0, 0);
+ 
+ 	return __flow_hash_from_keys(&keys, &hashrnd);
+ }
+diff --git a/net/core/lwt_bpf.c b/net/core/lwt_bpf.c
+index 3fd207fe1284a..f6c327c7badb4 100644
+--- a/net/core/lwt_bpf.c
++++ b/net/core/lwt_bpf.c
+@@ -59,9 +59,8 @@ static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt,
+ 			ret = BPF_OK;
+ 		} else {
+ 			skb_reset_mac_header(skb);
+-			ret = skb_do_redirect(skb);
+-			if (ret == 0)
+-				ret = BPF_REDIRECT;
++			skb_do_redirect(skb);
++			ret = BPF_REDIRECT;
+ 		}
+ 		break;
+ 
+@@ -254,7 +253,7 @@ static int bpf_lwt_xmit_reroute(struct sk_buff *skb)
+ 
+ 	err = dst_output(dev_net(skb_dst(skb)->dev), skb->sk, skb);
+ 	if (unlikely(err))
+-		return err;
++		return net_xmit_errno(err);
+ 
+ 	/* ip[6]_finish_output2 understand LWTUNNEL_XMIT_DONE */
+ 	return LWTUNNEL_XMIT_DONE;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index b10285d06a2ca..196278a137c01 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3799,21 +3799,20 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
+ 	struct sk_buff *segs = NULL;
+ 	struct sk_buff *tail = NULL;
+ 	struct sk_buff *list_skb = skb_shinfo(head_skb)->frag_list;
+-	skb_frag_t *frag = skb_shinfo(head_skb)->frags;
+ 	unsigned int mss = skb_shinfo(head_skb)->gso_size;
+ 	unsigned int doffset = head_skb->data - skb_mac_header(head_skb);
+-	struct sk_buff *frag_skb = head_skb;
+ 	unsigned int offset = doffset;
+ 	unsigned int tnl_hlen = skb_tnl_header_len(head_skb);
+ 	unsigned int partial_segs = 0;
+ 	unsigned int headroom;
+ 	unsigned int len = head_skb->len;
++	struct sk_buff *frag_skb;
++	skb_frag_t *frag;
+ 	__be16 proto;
+ 	bool csum, sg;
+-	int nfrags = skb_shinfo(head_skb)->nr_frags;
+ 	int err = -ENOMEM;
+ 	int i = 0;
+-	int pos;
++	int nfrags, pos;
+ 
+ 	if ((skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY) &&
+ 	    mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb)) {
+@@ -3890,6 +3889,13 @@ normal:
+ 	headroom = skb_headroom(head_skb);
+ 	pos = skb_headlen(head_skb);
+ 
++	if (skb_orphan_frags(head_skb, GFP_ATOMIC))
++		return ERR_PTR(-ENOMEM);
++
++	nfrags = skb_shinfo(head_skb)->nr_frags;
++	frag = skb_shinfo(head_skb)->frags;
++	frag_skb = head_skb;
++
+ 	do {
+ 		struct sk_buff *nskb;
+ 		skb_frag_t *nskb_frag;
+@@ -3914,6 +3920,10 @@ normal:
+ 		    (skb_headlen(list_skb) == len || sg)) {
+ 			BUG_ON(skb_headlen(list_skb) > len);
+ 
++			nskb = skb_clone(list_skb, GFP_ATOMIC);
++			if (unlikely(!nskb))
++				goto err;
++
+ 			i = 0;
+ 			nfrags = skb_shinfo(list_skb)->nr_frags;
+ 			frag = skb_shinfo(list_skb)->frags;
+@@ -3932,12 +3942,8 @@ normal:
+ 				frag++;
+ 			}
+ 
+-			nskb = skb_clone(list_skb, GFP_ATOMIC);
+ 			list_skb = list_skb->next;
+ 
+-			if (unlikely(!nskb))
+-				goto err;
+-
+ 			if (unlikely(pskb_trim(nskb, len))) {
+ 				kfree_skb(nskb);
+ 				goto err;
+@@ -4008,12 +4014,16 @@ normal:
+ 		skb_shinfo(nskb)->tx_flags |= skb_shinfo(head_skb)->tx_flags &
+ 					      SKBTX_SHARED_FRAG;
+ 
+-		if (skb_orphan_frags(frag_skb, GFP_ATOMIC) ||
+-		    skb_zerocopy_clone(nskb, frag_skb, GFP_ATOMIC))
++		if (skb_zerocopy_clone(nskb, frag_skb, GFP_ATOMIC))
+ 			goto err;
+ 
+ 		while (pos < offset + len) {
+ 			if (i >= nfrags) {
++				if (skb_orphan_frags(list_skb, GFP_ATOMIC) ||
++				    skb_zerocopy_clone(nskb, list_skb,
++						       GFP_ATOMIC))
++					goto err;
++
+ 				i = 0;
+ 				nfrags = skb_shinfo(list_skb)->nr_frags;
+ 				frag = skb_shinfo(list_skb)->frags;
+@@ -4027,10 +4037,6 @@ normal:
+ 					i--;
+ 					frag--;
+ 				}
+-				if (skb_orphan_frags(frag_skb, GFP_ATOMIC) ||
+-				    skb_zerocopy_clone(nskb, frag_skb,
+-						       GFP_ATOMIC))
+-					goto err;
+ 
+ 				list_skb = list_skb->next;
+ 			}
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 742356cfd07c4..fcb998dc2dc68 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -690,7 +690,8 @@ bool sk_mc_loop(struct sock *sk)
+ 		return false;
+ 	if (!sk)
+ 		return true;
+-	switch (sk->sk_family) {
++	/* IPV6_ADDRFORM can change sk->sk_family under us. */
++	switch (READ_ONCE(sk->sk_family)) {
+ 	case AF_INET:
+ 		return inet_sk(sk)->mc_loop;
+ #if IS_ENABLED(CONFIG_IPV6)
+@@ -2314,9 +2315,9 @@ static long sock_wait_for_wmem(struct sock *sk, long timeo)
+ 		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
+ 		if (refcount_read(&sk->sk_wmem_alloc) < READ_ONCE(sk->sk_sndbuf))
+ 			break;
+-		if (sk->sk_shutdown & SEND_SHUTDOWN)
++		if (READ_ONCE(sk->sk_shutdown) & SEND_SHUTDOWN)
+ 			break;
+-		if (sk->sk_err)
++		if (READ_ONCE(sk->sk_err))
+ 			break;
+ 		timeo = schedule_timeout(timeo);
+ 	}
+@@ -2344,7 +2345,7 @@ struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len,
+ 			goto failure;
+ 
+ 		err = -EPIPE;
+-		if (sk->sk_shutdown & SEND_SHUTDOWN)
++		if (READ_ONCE(sk->sk_shutdown) & SEND_SHUTDOWN)
+ 			goto failure;
+ 
+ 		if (sk_wmem_alloc_get(sk) < READ_ONCE(sk->sk_sndbuf))
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index a2a8b952b3c55..398dc3e47d0c8 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -243,12 +243,17 @@ static int dccp_v4_err(struct sk_buff *skb, u32 info)
+ 	int err;
+ 	struct net *net = dev_net(skb->dev);
+ 
+-	/* Only need dccph_dport & dccph_sport which are the first
+-	 * 4 bytes in dccp header.
++	/* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x,
++	 * which is in byte 7 of the dccp header.
+ 	 * Our caller (icmp_socket_deliver()) already pulled 8 bytes for us.
++	 *
++	 * Later on, we want to access the sequence number fields, which are
++	 * beyond 8 bytes, so we have to pskb_may_pull() ourselves.
+ 	 */
+-	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_sport) > 8);
+-	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_dport) > 8);
++	dh = (struct dccp_hdr *)(skb->data + offset);
++	if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh)))
++		return -EINVAL;
++	iph = (struct iphdr *)skb->data;
+ 	dh = (struct dccp_hdr *)(skb->data + offset);
+ 
+ 	sk = __inet_lookup_established(net, &dccp_hashinfo,
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 64e91783860df..bfe11e96af7c9 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -67,7 +67,7 @@ static inline __u64 dccp_v6_init_sequence(struct sk_buff *skb)
+ static int dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 			u8 type, u8 code, int offset, __be32 info)
+ {
+-	const struct ipv6hdr *hdr = (const struct ipv6hdr *)skb->data;
++	const struct ipv6hdr *hdr;
+ 	const struct dccp_hdr *dh;
+ 	struct dccp_sock *dp;
+ 	struct ipv6_pinfo *np;
+@@ -76,12 +76,17 @@ static int dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 	__u64 seq;
+ 	struct net *net = dev_net(skb->dev);
+ 
+-	/* Only need dccph_dport & dccph_sport which are the first
+-	 * 4 bytes in dccp header.
++	/* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x,
++	 * which is in byte 7 of the dccp header.
+ 	 * Our caller (icmpv6_notify()) already pulled 8 bytes for us.
++	 *
++	 * Later on, we want to access the sequence number fields, which are
++	 * beyond 8 bytes, so we have to pskb_may_pull() ourselves.
+ 	 */
+-	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_sport) > 8);
+-	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_dport) > 8);
++	dh = (struct dccp_hdr *)(skb->data + offset);
++	if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh)))
++		return -EINVAL;
++	hdr = (const struct ipv6hdr *)skb->data;
+ 	dh = (struct dccp_hdr *)(skb->data + offset);
+ 
+ 	sk = __inet6_lookup_established(net, &dccp_hashinfo,
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index aec48e670fb69..2a02cb2edec2f 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -531,6 +531,7 @@ static int fill_frame_info(struct hsr_frame_info *frame,
+ 		proto = vlan_hdr->vlanhdr.h_vlan_encapsulated_proto;
+ 		/* FIXME: */
+ 		netdev_warn_once(skb->dev, "VLAN not yet supported");
++		return -EINVAL;
+ 	}
+ 
+ 	frame->is_from_san = false;
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index 88b6120878cd9..da1ca8081c035 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -351,14 +351,14 @@ static void __inet_del_ifa(struct in_device *in_dev,
+ {
+ 	struct in_ifaddr *promote = NULL;
+ 	struct in_ifaddr *ifa, *ifa1;
+-	struct in_ifaddr *last_prim;
++	struct in_ifaddr __rcu **last_prim;
+ 	struct in_ifaddr *prev_prom = NULL;
+ 	int do_promote = IN_DEV_PROMOTE_SECONDARIES(in_dev);
+ 
+ 	ASSERT_RTNL();
+ 
+ 	ifa1 = rtnl_dereference(*ifap);
+-	last_prim = rtnl_dereference(in_dev->ifa_list);
++	last_prim = ifap;
+ 	if (in_dev->dead)
+ 		goto no_promotions;
+ 
+@@ -372,7 +372,7 @@ static void __inet_del_ifa(struct in_device *in_dev,
+ 		while ((ifa = rtnl_dereference(*ifap1)) != NULL) {
+ 			if (!(ifa->ifa_flags & IFA_F_SECONDARY) &&
+ 			    ifa1->ifa_scope <= ifa->ifa_scope)
+-				last_prim = ifa;
++				last_prim = &ifa->ifa_next;
+ 
+ 			if (!(ifa->ifa_flags & IFA_F_SECONDARY) ||
+ 			    ifa1->ifa_mask != ifa->ifa_mask ||
+@@ -436,9 +436,9 @@ no_promotions:
+ 
+ 			rcu_assign_pointer(prev_prom->ifa_next, next_sec);
+ 
+-			last_sec = rtnl_dereference(last_prim->ifa_next);
++			last_sec = rtnl_dereference(*last_prim);
+ 			rcu_assign_pointer(promote->ifa_next, last_sec);
+-			rcu_assign_pointer(last_prim->ifa_next, promote);
++			rcu_assign_pointer(*last_prim, promote);
+ 		}
+ 
+ 		promote->ifa_flags &= ~IFA_F_SECONDARY;
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 4e94796ccdbd1..ed20d6ac10dc2 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -278,7 +278,8 @@ void fib_release_info(struct fib_info *fi)
+ 				hlist_del(&nexthop_nh->nh_hash);
+ 			} endfor_nexthops(fi)
+ 		}
+-		fi->fib_dead = 1;
++		/* Paired with READ_ONCE() from fib_table_lookup() */
++		WRITE_ONCE(fi->fib_dead, 1);
+ 		fib_info_put(fi);
+ 	}
+ 	spin_unlock_bh(&fib_info_lock);
+@@ -1599,6 +1600,7 @@ struct fib_info *fib_create_info(struct fib_config *cfg,
+ link_it:
+ 	ofi = fib_find_info(fi);
+ 	if (ofi) {
++		/* fib_table_lookup() should not see @fi yet. */
+ 		fi->fib_dead = 1;
+ 		free_fib_info(fi);
+ 		ofi->fib_treeref++;
+@@ -1637,6 +1639,7 @@ err_inval:
+ 
+ failure:
+ 	if (fi) {
++		/* fib_table_lookup() should not see @fi yet. */
+ 		fi->fib_dead = 1;
+ 		free_fib_info(fi);
+ 	}
+diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
+index d11fb16234a6a..456240d2adc11 100644
+--- a/net/ipv4/fib_trie.c
++++ b/net/ipv4/fib_trie.c
+@@ -1534,7 +1534,8 @@ found:
+ 		}
+ 		if (fa->fa_tos && fa->fa_tos != flp->flowi4_tos)
+ 			continue;
+-		if (fi->fib_dead)
++		/* Paired with WRITE_ONCE() in fib_release_info() */
++		if (READ_ONCE(fi->fib_dead))
+ 			continue;
+ 		if (fa->fa_info->fib_scope < flp->flowi4_scope)
+ 			continue;
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index c71b863093ace..4cffdb9795ded 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -353,8 +353,9 @@ static struct sk_buff *igmpv3_newpack(struct net_device *dev, unsigned int mtu)
+ 	struct flowi4 fl4;
+ 	int hlen = LL_RESERVED_SPACE(dev);
+ 	int tlen = dev->needed_tailroom;
+-	unsigned int size = mtu;
++	unsigned int size;
+ 
++	size = min(mtu, IP_MAX_MTU);
+ 	while (1) {
+ 		skb = alloc_skb(size + hlen + tlen,
+ 				GFP_ATOMIC | __GFP_NOWARN);
+diff --git a/net/ipv4/ip_input.c b/net/ipv4/ip_input.c
+index eccd7897e7aa6..372579686162b 100644
+--- a/net/ipv4/ip_input.c
++++ b/net/ipv4/ip_input.c
+@@ -566,7 +566,8 @@ static void ip_sublist_rcv_finish(struct list_head *head)
+ static struct sk_buff *ip_extract_route_hint(const struct net *net,
+ 					     struct sk_buff *skb, int rt_type)
+ {
+-	if (fib4_has_custom_rules(net) || rt_type == RTN_BROADCAST)
++	if (fib4_has_custom_rules(net) || rt_type == RTN_BROADCAST ||
++	    IPCB(skb)->flags & IPSKB_MULTIPATH)
+ 		return NULL;
+ 
+ 	return skb;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 6fd04f2f8b40c..a99c374101fc5 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -223,7 +223,7 @@ static int ip_finish_output2(struct net *net, struct sock *sk, struct sk_buff *s
+ 	if (lwtunnel_xmit_redirect(dst->lwtstate)) {
+ 		int res = lwtunnel_xmit(skb);
+ 
+-		if (res < 0 || res == LWTUNNEL_XMIT_DONE)
++		if (res != LWTUNNEL_XMIT_CONTINUE)
+ 			return res;
+ 	}
+ 
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 374647693d7ac..3ddeb4fc0d08a 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -2066,6 +2066,7 @@ static int ip_mkroute_input(struct sk_buff *skb,
+ 		int h = fib_multipath_hash(res->fi->fib_net, NULL, skb, hkeys);
+ 
+ 		fib_select_multipath(res, h);
++		IPCB(skb)->flags |= IPSKB_MULTIPATH;
+ 	}
+ #endif
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index d6dfbb88dcf5b..b8d2c45edbe02 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -286,7 +286,7 @@ static void tcp_incr_quickack(struct sock *sk, unsigned int max_quickacks)
+ 		icsk->icsk_ack.quick = quickacks;
+ }
+ 
+-void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks)
++static void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks)
+ {
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+ 
+@@ -294,7 +294,6 @@ void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks)
+ 	inet_csk_exit_pingpong_mode(sk);
+ 	icsk->icsk_ack.ato = TCP_ATO_MIN;
+ }
+-EXPORT_SYMBOL(tcp_enter_quickack_mode);
+ 
+ /* Send ACKs quickly, if "quick" count is not exhausted
+  * and the session is not interactive.
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index d2e07bb30164c..5c7e10939dd90 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -437,6 +437,22 @@ static void tcp_fastopen_synack_timer(struct sock *sk, struct request_sock *req)
+ 			  TCP_TIMEOUT_INIT << req->num_timeout, TCP_RTO_MAX);
+ }
+ 
++static bool tcp_rtx_probe0_timed_out(const struct sock *sk,
++				     const struct sk_buff *skb)
++{
++	const struct tcp_sock *tp = tcp_sk(sk);
++	const int timeout = TCP_RTO_MAX * 2;
++	u32 rcv_delta, rtx_delta;
++
++	rcv_delta = inet_csk(sk)->icsk_timeout - tp->rcv_tstamp;
++	if (rcv_delta <= timeout)
++		return false;
++
++	rtx_delta = (u32)msecs_to_jiffies(tcp_time_stamp(tp) -
++			(tp->retrans_stamp ?: tcp_skb_timestamp(skb)));
++
++	return rtx_delta > timeout;
++}
+ 
+ /**
+  *  tcp_retransmit_timer() - The TCP retransmit timeout handler
+@@ -502,7 +518,7 @@ void tcp_retransmit_timer(struct sock *sk)
+ 					    tp->snd_una, tp->snd_nxt);
+ 		}
+ #endif
+-		if (tcp_jiffies32 - tp->rcv_tstamp > TCP_RTO_MAX) {
++		if (tcp_rtx_probe0_timed_out(sk, skb)) {
+ 			tcp_write_err(sk);
+ 			goto out;
+ 		}
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index f0db66e415bd6..913966e7703fc 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -443,14 +443,24 @@ static struct sock *udp4_lib_lookup2(struct net *net,
+ 		score = compute_score(sk, net, saddr, sport,
+ 				      daddr, hnum, dif, sdif);
+ 		if (score > badness) {
+-			result = lookup_reuseport(net, sk, skb,
+-						  saddr, sport, daddr, hnum);
++			badness = score;
++			result = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum);
++			if (!result) {
++				result = sk;
++				continue;
++			}
++
+ 			/* Fall back to scoring if group has connections */
+-			if (result && !reuseport_has_conns(sk))
++			if (!reuseport_has_conns(sk))
+ 				return result;
+ 
+-			result = result ? : sk;
+-			badness = score;
++			/* Reuseport logic returned an error, keep original score. */
++			if (IS_ERR(result))
++				continue;
++
++			badness = compute_score(result, net, saddr, sport,
++						daddr, hnum, dif, sdif);
++
+ 		}
+ 	}
+ 	return result;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 9b414681500a5..0eafe26c05f77 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -1359,7 +1359,7 @@ retry:
+ 	 * idev->desync_factor if it's larger
+ 	 */
+ 	cnf_temp_preferred_lft = READ_ONCE(idev->cnf.temp_prefered_lft);
+-	max_desync_factor = min_t(__u32,
++	max_desync_factor = min_t(long,
+ 				  idev->cnf.max_desync_factor,
+ 				  cnf_temp_preferred_lft - regen_advance);
+ 
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index c62e44224bf84..58b5ab5fcdbf1 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -131,7 +131,7 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
+ 	if (lwtunnel_xmit_redirect(dst->lwtstate)) {
+ 		int res = lwtunnel_xmit(skb);
+ 
+-		if (res < 0 || res == LWTUNNEL_XMIT_DONE)
++		if (res != LWTUNNEL_XMIT_CONTINUE)
+ 			return res;
+ 	}
+ 
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 788bb19f32e99..5385037209a6b 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -189,14 +189,23 @@ static struct sock *udp6_lib_lookup2(struct net *net,
+ 		score = compute_score(sk, net, saddr, sport,
+ 				      daddr, hnum, dif, sdif);
+ 		if (score > badness) {
+-			result = lookup_reuseport(net, sk, skb,
+-						  saddr, sport, daddr, hnum);
++			badness = score;
++			result = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum);
++			if (!result) {
++				result = sk;
++				continue;
++			}
++
+ 			/* Fall back to scoring if group has connections */
+-			if (result && !reuseport_has_conns(sk))
++			if (!reuseport_has_conns(sk))
+ 				return result;
+ 
+-			result = result ? : sk;
+-			badness = score;
++			/* Reuseport logic returned an error, keep original score. */
++			if (IS_ERR(result))
++				continue;
++
++			badness = compute_score(sk, net, saddr, sport,
++						daddr, hnum, dif, sdif);
+ 		}
+ 	}
+ 	return result;
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 32b516ab9c475..39b3c7fbf9f66 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -1064,15 +1064,18 @@ partial_message:
+ out_error:
+ 	kcm_push(kcm);
+ 
+-	if (copied && sock->type == SOCK_SEQPACKET) {
++	if (sock->type == SOCK_SEQPACKET) {
+ 		/* Wrote some bytes before encountering an
+ 		 * error, return partial success.
+ 		 */
+-		goto partial_message;
+-	}
+-
+-	if (head != kcm->seq_skb)
++		if (copied)
++			goto partial_message;
++		if (head != kcm->seq_skb)
++			kfree_skb(head);
++	} else {
+ 		kfree_skb(head);
++		kcm->seq_skb = NULL;
++	}
+ 
+ 	err = sk_stream_error(sk, msg->msg_flags, err);
+ 
+@@ -1982,6 +1985,8 @@ static __net_exit void kcm_exit_net(struct net *net)
+ 	 * that all multiplexors and psocks have been destroyed.
+ 	 */
+ 	WARN_ON(!list_empty(&knet->mux_list));
++
++	mutex_destroy(&knet->mutex);
+ }
+ 
+ static struct pernet_operations kcm_net_ops = {
+diff --git a/net/netfilter/ipset/ip_set_hash_netportnet.c b/net/netfilter/ipset/ip_set_hash_netportnet.c
+index 144346faffc13..b8ec2c414a5fb 100644
+--- a/net/netfilter/ipset/ip_set_hash_netportnet.c
++++ b/net/netfilter/ipset/ip_set_hash_netportnet.c
+@@ -35,6 +35,7 @@ MODULE_ALIAS("ip_set_hash:net,port,net");
+ #define IP_SET_HASH_WITH_PROTO
+ #define IP_SET_HASH_WITH_NETS
+ #define IPSET_NET_COUNT 2
++#define IP_SET_HASH_WITH_NET0
+ 
+ /* IPv4 variant */
+ 
+diff --git a/net/netfilter/nfnetlink_osf.c b/net/netfilter/nfnetlink_osf.c
+index 9dbaa5ce24e51..573a372e760f4 100644
+--- a/net/netfilter/nfnetlink_osf.c
++++ b/net/netfilter/nfnetlink_osf.c
+@@ -316,6 +316,14 @@ static int nfnl_osf_add_callback(struct net *net, struct sock *ctnl,
+ 
+ 	f = nla_data(osf_attrs[OSF_ATTR_FINGER]);
+ 
++	if (f->opt_num > ARRAY_SIZE(f->opt))
++		return -EINVAL;
++
++	if (!memchr(f->genre, 0, MAXGENRELEN) ||
++	    !memchr(f->subtype, 0, MAXGENRELEN) ||
++	    !memchr(f->version, 0, MAXGENRELEN))
++		return -EINVAL;
++
+ 	kf = kmalloc(sizeof(struct nf_osf_finger), GFP_KERNEL);
+ 	if (!kf)
+ 		return -ENOMEM;
+diff --git a/net/netfilter/xt_sctp.c b/net/netfilter/xt_sctp.c
+index 680015ba7cb6e..d4bf089c9e3f9 100644
+--- a/net/netfilter/xt_sctp.c
++++ b/net/netfilter/xt_sctp.c
+@@ -150,6 +150,8 @@ static int sctp_mt_check(const struct xt_mtchk_param *par)
+ {
+ 	const struct xt_sctp_info *info = par->matchinfo;
+ 
++	if (info->flag_count > ARRAY_SIZE(info->flag_info))
++		return -EINVAL;
+ 	if (info->flags & ~XT_SCTP_VALID_FLAGS)
+ 		return -EINVAL;
+ 	if (info->invflags & ~XT_SCTP_VALID_FLAGS)
+diff --git a/net/netfilter/xt_u32.c b/net/netfilter/xt_u32.c
+index 177b40d08098b..117d4615d6684 100644
+--- a/net/netfilter/xt_u32.c
++++ b/net/netfilter/xt_u32.c
+@@ -96,11 +96,32 @@ static bool u32_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ 	return ret ^ data->invert;
+ }
+ 
++static int u32_mt_checkentry(const struct xt_mtchk_param *par)
++{
++	const struct xt_u32 *data = par->matchinfo;
++	const struct xt_u32_test *ct;
++	unsigned int i;
++
++	if (data->ntests > ARRAY_SIZE(data->tests))
++		return -EINVAL;
++
++	for (i = 0; i < data->ntests; ++i) {
++		ct = &data->tests[i];
++
++		if (ct->nnums > ARRAY_SIZE(ct->location) ||
++		    ct->nvalues > ARRAY_SIZE(ct->value))
++			return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static struct xt_match xt_u32_mt_reg __read_mostly = {
+ 	.name       = "u32",
+ 	.revision   = 0,
+ 	.family     = NFPROTO_UNSPEC,
+ 	.match      = u32_mt,
++	.checkentry = u32_mt_checkentry,
+ 	.matchsize  = sizeof(struct xt_u32),
+ 	.me         = THIS_MODULE,
+ };
+diff --git a/net/netlabel/netlabel_kapi.c b/net/netlabel/netlabel_kapi.c
+index 91b35b7c80d82..96059c99b915e 100644
+--- a/net/netlabel/netlabel_kapi.c
++++ b/net/netlabel/netlabel_kapi.c
+@@ -857,7 +857,8 @@ int netlbl_catmap_setlong(struct netlbl_lsm_catmap **catmap,
+ 
+ 	offset -= iter->startbit;
+ 	idx = offset / NETLBL_CATMAP_MAPSIZE;
+-	iter->bitmap[idx] |= bitmap << (offset % NETLBL_CATMAP_MAPSIZE);
++	iter->bitmap[idx] |= (NETLBL_CATMAP_MAPTYPE)bitmap
++			     << (offset % NETLBL_CATMAP_MAPSIZE);
+ 
+ 	return 0;
+ }
+diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
+index 5c04da4cfbad0..24747163122bb 100644
+--- a/net/netrom/af_netrom.c
++++ b/net/netrom/af_netrom.c
+@@ -660,6 +660,11 @@ static int nr_connect(struct socket *sock, struct sockaddr *uaddr,
+ 		goto out_release;
+ 	}
+ 
++	if (sock->state == SS_CONNECTING) {
++		err = -EALREADY;
++		goto out_release;
++	}
++
+ 	sk->sk_state   = TCP_CLOSE;
+ 	sock->state = SS_UNCONNECTED;
+ 
+diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
+index 4f6b5b6fba3ed..a5b63158f081c 100644
+--- a/net/sched/sch_fq_pie.c
++++ b/net/sched/sch_fq_pie.c
+@@ -61,6 +61,7 @@ struct fq_pie_sched_data {
+ 	struct pie_params p_params;
+ 	u32 ecn_prob;
+ 	u32 flows_cnt;
++	u32 flows_cursor;
+ 	u32 quantum;
+ 	u32 memory_limit;
+ 	u32 new_flow_count;
+@@ -378,21 +379,31 @@ flow_error:
+ static void fq_pie_timer(struct timer_list *t)
+ {
+ 	struct fq_pie_sched_data *q = from_timer(q, t, adapt_timer);
++	unsigned long next, tupdate;
+ 	struct Qdisc *sch = q->sch;
+ 	spinlock_t *root_lock; /* to lock qdisc for probability calculations */
+-	u32 idx;
++	int max_cnt, i;
+ 
+ 	root_lock = qdisc_lock(qdisc_root_sleeping(sch));
+ 	spin_lock(root_lock);
+ 
+-	for (idx = 0; idx < q->flows_cnt; idx++)
+-		pie_calculate_probability(&q->p_params, &q->flows[idx].vars,
+-					  q->flows[idx].backlog);
+-
+-	/* reset the timer to fire after 'tupdate' jiffies. */
+-	if (q->p_params.tupdate)
+-		mod_timer(&q->adapt_timer, jiffies + q->p_params.tupdate);
++	/* Limit this expensive loop to 2048 flows per round. */
++	max_cnt = min_t(int, q->flows_cnt - q->flows_cursor, 2048);
++	for (i = 0; i < max_cnt; i++) {
++		pie_calculate_probability(&q->p_params,
++					  &q->flows[q->flows_cursor].vars,
++					  q->flows[q->flows_cursor].backlog);
++		q->flows_cursor++;
++	}
+ 
++	tupdate = q->p_params.tupdate;
++	next = 0;
++	if (q->flows_cursor >= q->flows_cnt) {
++		q->flows_cursor = 0;
++		next = tupdate;
++	}
++	if (tupdate)
++		mod_timer(&q->adapt_timer, jiffies + next);
+ 	spin_unlock(root_lock);
+ }
+ 
+diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
+index cdc43a06aa9bc..6076294a632c5 100644
+--- a/net/sched/sch_hfsc.c
++++ b/net/sched/sch_hfsc.c
+@@ -1012,6 +1012,10 @@ hfsc_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ 		if (parent == NULL)
+ 			return -ENOENT;
+ 	}
++	if (!(parent->cl_flags & HFSC_FSC) && parent != &q->root) {
++		NL_SET_ERR_MSG(extack, "Invalid parent - parent class must have FSC");
++		return -EINVAL;
++	}
+ 
+ 	if (classid == 0 || TC_H_MAJ(classid ^ sch->handle) != 0)
+ 		return -EINVAL;
+diff --git a/net/sched/sch_plug.c b/net/sched/sch_plug.c
+index cbc2ebca4548c..339990bb59817 100644
+--- a/net/sched/sch_plug.c
++++ b/net/sched/sch_plug.c
+@@ -210,7 +210,7 @@ static struct Qdisc_ops plug_qdisc_ops __read_mostly = {
+ 	.priv_size   =       sizeof(struct plug_sched_data),
+ 	.enqueue     =       plug_enqueue,
+ 	.dequeue     =       plug_dequeue,
+-	.peek        =       qdisc_peek_head,
++	.peek        =       qdisc_peek_dequeued,
+ 	.init        =       plug_init,
+ 	.change      =       plug_change,
+ 	.reset       =	     qdisc_reset_queue,
+diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
+index d5a1e4b237b18..ebf9f473c9392 100644
+--- a/net/sched/sch_qfq.c
++++ b/net/sched/sch_qfq.c
+@@ -979,10 +979,13 @@ static void qfq_update_eligible(struct qfq_sched *q)
+ }
+ 
+ /* Dequeue head packet of the head class in the DRR queue of the aggregate. */
+-static void agg_dequeue(struct qfq_aggregate *agg,
+-			struct qfq_class *cl, unsigned int len)
++static struct sk_buff *agg_dequeue(struct qfq_aggregate *agg,
++				   struct qfq_class *cl, unsigned int len)
+ {
+-	qdisc_dequeue_peeked(cl->qdisc);
++	struct sk_buff *skb = qdisc_dequeue_peeked(cl->qdisc);
++
++	if (!skb)
++		return NULL;
+ 
+ 	cl->deficit -= (int) len;
+ 
+@@ -992,6 +995,8 @@ static void agg_dequeue(struct qfq_aggregate *agg,
+ 		cl->deficit += agg->lmax;
+ 		list_move_tail(&cl->alist, &agg->active);
+ 	}
++
++	return skb;
+ }
+ 
+ static inline struct sk_buff *qfq_peek_skb(struct qfq_aggregate *agg,
+@@ -1137,11 +1142,18 @@ static struct sk_buff *qfq_dequeue(struct Qdisc *sch)
+ 	if (!skb)
+ 		return NULL;
+ 
+-	qdisc_qstats_backlog_dec(sch, skb);
+ 	sch->q.qlen--;
++
++	skb = agg_dequeue(in_serv_agg, cl, len);
++
++	if (!skb) {
++		sch->q.qlen++;
++		return NULL;
++	}
++
++	qdisc_qstats_backlog_dec(sch, skb);
+ 	qdisc_bstats_update(sch, skb);
+ 
+-	agg_dequeue(in_serv_agg, cl, len);
+ 	/* If lmax is lowered, through qfq_change_class, for a class
+ 	 * owning pending packets with larger size than the new value
+ 	 * of lmax, then the following condition may hold.
+diff --git a/net/sctp/proc.c b/net/sctp/proc.c
+index 982a87b3e11f8..963b94517ec20 100644
+--- a/net/sctp/proc.c
++++ b/net/sctp/proc.c
+@@ -284,7 +284,7 @@ static int sctp_assocs_seq_show(struct seq_file *seq, void *v)
+ 		assoc->init_retries, assoc->shutdown_retries,
+ 		assoc->rtx_data_chunks,
+ 		refcount_read(&sk->sk_wmem_alloc),
+-		sk->sk_wmem_queued,
++		READ_ONCE(sk->sk_wmem_queued),
+ 		sk->sk_sndbuf,
+ 		sk->sk_rcvbuf);
+ 	seq_printf(seq, "\n");
+diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c
+index d4e5969771f0f..30e9914526337 100644
+--- a/net/sctp/sm_sideeffect.c
++++ b/net/sctp/sm_sideeffect.c
+@@ -1241,7 +1241,10 @@ static int sctp_side_effects(enum sctp_event_type event_type,
+ 	default:
+ 		pr_err("impossible disposition %d in state %d, event_type %d, event_id %d\n",
+ 		       status, state, event_type, subtype.chunk);
+-		BUG();
++		error = status;
++		if (error >= 0)
++			error = -EINVAL;
++		WARN_ON_ONCE(1);
+ 		break;
+ 	}
+ 
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index fa4d31b507f29..68d53e3f0d07a 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -68,7 +68,7 @@
+ #include <net/sctp/stream_sched.h>
+ 
+ /* Forward declarations for internal helper functions. */
+-static bool sctp_writeable(struct sock *sk);
++static bool sctp_writeable(const struct sock *sk);
+ static void sctp_wfree(struct sk_buff *skb);
+ static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
+ 				size_t msg_len);
+@@ -138,7 +138,7 @@ static inline void sctp_set_owner_w(struct sctp_chunk *chunk)
+ 
+ 	refcount_add(sizeof(struct sctp_chunk), &sk->sk_wmem_alloc);
+ 	asoc->sndbuf_used += chunk->skb->truesize + sizeof(struct sctp_chunk);
+-	sk->sk_wmem_queued += chunk->skb->truesize + sizeof(struct sctp_chunk);
++	sk_wmem_queued_add(sk, chunk->skb->truesize + sizeof(struct sctp_chunk));
+ 	sk_mem_charge(sk, chunk->skb->truesize);
+ }
+ 
+@@ -8900,7 +8900,7 @@ static void sctp_wfree(struct sk_buff *skb)
+ 	struct sock *sk = asoc->base.sk;
+ 
+ 	sk_mem_uncharge(sk, skb->truesize);
+-	sk->sk_wmem_queued -= skb->truesize + sizeof(struct sctp_chunk);
++	sk_wmem_queued_add(sk, -(skb->truesize + sizeof(struct sctp_chunk)));
+ 	asoc->sndbuf_used -= skb->truesize + sizeof(struct sctp_chunk);
+ 	WARN_ON(refcount_sub_and_test(sizeof(struct sctp_chunk),
+ 				      &sk->sk_wmem_alloc));
+@@ -9055,9 +9055,9 @@ void sctp_write_space(struct sock *sk)
+  * UDP-style sockets or TCP-style sockets, this code should work.
+  *  - Daisy
+  */
+-static bool sctp_writeable(struct sock *sk)
++static bool sctp_writeable(const struct sock *sk)
+ {
+-	return sk->sk_sndbuf > sk->sk_wmem_queued;
++	return READ_ONCE(sk->sk_sndbuf) > READ_ONCE(sk->sk_wmem_queued);
+ }
+ 
+ /* Wait for an association to go into ESTABLISHED state. If timeout is 0,
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index e84241ff4ac4f..ab9ecdd1af0ac 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1101,6 +1101,7 @@ void smcr_port_add(struct smc_ib_device *smcibdev, u8 ibport)
+ {
+ 	struct smc_link_group *lgr, *n;
+ 
++	spin_lock_bh(&smc_lgr_list.lock);
+ 	list_for_each_entry_safe(lgr, n, &smc_lgr_list.list, list) {
+ 		struct smc_link *link;
+ 
+@@ -1115,6 +1116,7 @@ void smcr_port_add(struct smc_ib_device *smcibdev, u8 ibport)
+ 		if (link)
+ 			smc_llc_add_link_local(link);
+ 	}
++	spin_unlock_bh(&smc_lgr_list.lock);
+ }
+ 
+ /* link is down - switch connections to alternate link,
+diff --git a/net/socket.c b/net/socket.c
+index f2172b756c0f7..1a40335117035 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -3467,7 +3467,11 @@ EXPORT_SYMBOL(kernel_accept);
+ int kernel_connect(struct socket *sock, struct sockaddr *addr, int addrlen,
+ 		   int flags)
+ {
+-	return sock->ops->connect(sock, addr, addrlen, flags);
++	struct sockaddr_storage address;
++
++	memcpy(&address, addr, addrlen);
++
++	return sock->ops->connect(sock, (struct sockaddr *)&address, addrlen, flags);
+ }
+ EXPORT_SYMBOL(kernel_connect);
+ 
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index ac7feadb43904..50eae668578a7 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -810,7 +810,7 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
+ 	psock = sk_psock_get(sk);
+ 	if (!psock || !policy) {
+ 		err = tls_push_record(sk, flags, record_type);
+-		if (err && sk->sk_err == EBADMSG) {
++		if (err && err != -EINPROGRESS && sk->sk_err == EBADMSG) {
+ 			*copied -= sk_msg_free(sk, msg);
+ 			tls_free_open_rec(sk);
+ 			err = -sk->sk_err;
+@@ -839,7 +839,7 @@ more_data:
+ 	switch (psock->eval) {
+ 	case __SK_PASS:
+ 		err = tls_push_record(sk, flags, record_type);
+-		if (err && sk->sk_err == EBADMSG) {
++		if (err && err != -EINPROGRESS && sk->sk_err == EBADMSG) {
+ 			*copied -= sk_msg_free(sk, msg);
+ 			tls_free_open_rec(sk);
+ 			err = -sk->sk_err;
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 8d941cbba5cb7..237488b1b58b6 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -587,7 +587,7 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ 	 *	  What the above comment does talk about? --ANK(980817)
+ 	 */
+ 
+-	if (unix_tot_inflight)
++	if (READ_ONCE(unix_tot_inflight))
+ 		unix_gc();		/* Garbage collect fds */
+ }
+ 
+diff --git a/net/unix/scm.c b/net/unix/scm.c
+index aa27a02478dc1..e8e2a00bb0f58 100644
+--- a/net/unix/scm.c
++++ b/net/unix/scm.c
+@@ -63,7 +63,7 @@ void unix_inflight(struct user_struct *user, struct file *fp)
+ 		/* Paired with READ_ONCE() in wait_for_unix_gc() */
+ 		WRITE_ONCE(unix_tot_inflight, unix_tot_inflight + 1);
+ 	}
+-	user->unix_inflight++;
++	WRITE_ONCE(user->unix_inflight, user->unix_inflight + 1);
+ 	spin_unlock(&unix_gc_lock);
+ }
+ 
+@@ -84,7 +84,7 @@ void unix_notinflight(struct user_struct *user, struct file *fp)
+ 		/* Paired with READ_ONCE() in wait_for_unix_gc() */
+ 		WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - 1);
+ 	}
+-	user->unix_inflight--;
++	WRITE_ONCE(user->unix_inflight, user->unix_inflight - 1);
+ 	spin_unlock(&unix_gc_lock);
+ }
+ 
+@@ -98,7 +98,7 @@ static inline bool too_many_unix_fds(struct task_struct *p)
+ {
+ 	struct user_struct *user = current_user();
+ 
+-	if (unlikely(user->unix_inflight > task_rlimit(p, RLIMIT_NOFILE)))
++	if (unlikely(READ_ONCE(user->unix_inflight) > task_rlimit(p, RLIMIT_NOFILE)))
+ 		return !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN);
+ 	return false;
+ }
+diff --git a/samples/bpf/tracex6_kern.c b/samples/bpf/tracex6_kern.c
+index acad5712d8b4f..fd602c2774b8b 100644
+--- a/samples/bpf/tracex6_kern.c
++++ b/samples/bpf/tracex6_kern.c
+@@ -2,6 +2,8 @@
+ #include <linux/version.h>
+ #include <uapi/linux/bpf.h>
+ #include <bpf/bpf_helpers.h>
++#include <bpf/bpf_tracing.h>
++#include <bpf/bpf_core_read.h>
+ 
+ struct {
+ 	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
+@@ -45,13 +47,24 @@ int bpf_prog1(struct pt_regs *ctx)
+ 	return 0;
+ }
+ 
+-SEC("kprobe/htab_map_lookup_elem")
+-int bpf_prog2(struct pt_regs *ctx)
++/*
++ * Since *_map_lookup_elem can't be expected to trigger bpf programs
++ * due to potential deadlocks (bpf_disable_instrumentation), this bpf
++ * program will be attached to bpf_map_copy_value (which is called
++ * from map_lookup_elem) and will only filter the hashtable type.
++ */
++SEC("kprobe/bpf_map_copy_value")
++int BPF_KPROBE(bpf_prog2, struct bpf_map *map)
+ {
+ 	u32 key = bpf_get_smp_processor_id();
+ 	struct bpf_perf_event_value *val, buf;
++	enum bpf_map_type type;
+ 	int error;
+ 
++	type = BPF_CORE_READ(map, map_type);
++	if (type != BPF_MAP_TYPE_HASH)
++		return 0;
++
+ 	error = bpf_perf_event_read_value(&counters, key, &buf, sizeof(buf));
+ 	if (error)
+ 		return 0;
+diff --git a/scripts/kconfig/preprocess.c b/scripts/kconfig/preprocess.c
+index 748da578b418c..d1f5bcff4b62d 100644
+--- a/scripts/kconfig/preprocess.c
++++ b/scripts/kconfig/preprocess.c
+@@ -396,6 +396,9 @@ static char *eval_clause(const char *str, size_t len, int argc, char *argv[])
+ 
+ 		p++;
+ 	}
++
++	if (new_argc >= FUNCTION_MAX_ARGS)
++		pperror("too many function arguments");
+ 	new_argv[new_argc++] = prev;
+ 
+ 	/*
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index 0a5ae1e8da47a..05b8f5bcc37ac 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -248,18 +248,6 @@ config IMA_APPRAISE_MODSIG
+ 	   The modsig keyword can be used in the IMA policy to allow a hook
+ 	   to accept such signatures.
+ 
+-config IMA_TRUSTED_KEYRING
+-	bool "Require all keys on the .ima keyring be signed (deprecated)"
+-	depends on IMA_APPRAISE && SYSTEM_TRUSTED_KEYRING
+-	depends on INTEGRITY_ASYMMETRIC_KEYS
+-	select INTEGRITY_TRUSTED_KEYRING
+-	default y
+-	help
+-	   This option requires that all keys added to the .ima
+-	   keyring be signed by a key on the system trusted keyring.
+-
+-	   This option is deprecated in favor of INTEGRITY_TRUSTED_KEYRING
+-
+ config IMA_KEYRINGS_PERMIT_SIGNED_BY_BUILTIN_OR_SECONDARY
+ 	bool "Permit keys validly signed by a built-in or secondary CA cert (EXPERIMENTAL)"
+ 	depends on SYSTEM_TRUSTED_KEYRING
+diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
+index 61a614c21b9b6..e3ffaf5ad6394 100644
+--- a/security/keys/keyctl.c
++++ b/security/keys/keyctl.c
+@@ -980,14 +980,19 @@ long keyctl_chown_key(key_serial_t id, uid_t user, gid_t group)
+ 	ret = -EACCES;
+ 	down_write(&key->sem);
+ 
+-	if (!capable(CAP_SYS_ADMIN)) {
++	{
++		bool is_privileged_op = false;
++
+ 		/* only the sysadmin can chown a key to some other UID */
+ 		if (user != (uid_t) -1 && !uid_eq(key->uid, uid))
+-			goto error_put;
++			is_privileged_op = true;
+ 
+ 		/* only the sysadmin can set the key's GID to a group other
+ 		 * than one of those that the current process subscribes to */
+ 		if (group != (gid_t) -1 && !gid_eq(gid, key->gid) && !in_group_p(gid))
++			is_privileged_op = true;
++
++		if (is_privileged_op && !capable(CAP_SYS_ADMIN))
+ 			goto error_put;
+ 	}
+ 
+@@ -1088,7 +1093,7 @@ long keyctl_setperm_key(key_serial_t id, key_perm_t perm)
+ 	down_write(&key->sem);
+ 
+ 	/* if we're not the sysadmin, we can only change a key that we own */
+-	if (capable(CAP_SYS_ADMIN) || uid_eq(key->uid, current_fsuid())) {
++	if (uid_eq(key->uid, current_fsuid()) || capable(CAP_SYS_ADMIN)) {
+ 		key->perm = perm;
+ 		notify_key(key, NOTIFY_KEY_SETATTR, 0);
+ 		ret = 0;
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index 3eabcc469669e..8403c91a6b297 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -895,7 +895,7 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ 	}
+ 
+ 	ret = sscanf(rule, "%d", &catlen);
+-	if (ret != 1 || catlen > SMACK_CIPSO_MAXCATNUM)
++	if (ret != 1 || catlen < 0 || catlen > SMACK_CIPSO_MAXCATNUM)
+ 		goto out;
+ 
+ 	if (format == SMK_FIXED24_FMT &&
+diff --git a/sound/Kconfig b/sound/Kconfig
+index 36785410fbe15..aaf2022ffc57d 100644
+--- a/sound/Kconfig
++++ b/sound/Kconfig
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ menuconfig SOUND
+ 	tristate "Sound card support"
+-	depends on HAS_IOMEM
++	depends on HAS_IOMEM || UML
+ 	help
+ 	  If you have a sound card in your computer, i.e. if it can say more
+ 	  than an occasional beep, say Y.
+diff --git a/sound/core/pcm_compat.c b/sound/core/pcm_compat.c
+index a226d8f240287..fd644a31c824c 100644
+--- a/sound/core/pcm_compat.c
++++ b/sound/core/pcm_compat.c
+@@ -252,10 +252,14 @@ static int snd_pcm_ioctl_hw_params_compat(struct snd_pcm_substream *substream,
+ 		goto error;
+ 	}
+ 
+-	if (refine)
++	if (refine) {
+ 		err = snd_pcm_hw_refine(substream, data);
+-	else
++		if (err < 0)
++			goto error;
++		err = fixup_unreferenced_params(substream, data);
++	} else {
+ 		err = snd_pcm_hw_params(substream, data);
++	}
+ 	if (err < 0)
+ 		goto error;
+ 	if (copy_to_user(data32, data, sizeof(*data32)) ||
+diff --git a/sound/core/seq/oss/seq_oss_midi.c b/sound/core/seq/oss/seq_oss_midi.c
+index f73ee0798aeab..be80ce72e0c72 100644
+--- a/sound/core/seq/oss/seq_oss_midi.c
++++ b/sound/core/seq/oss/seq_oss_midi.c
+@@ -37,6 +37,7 @@ struct seq_oss_midi {
+ 	struct snd_midi_event *coder;	/* MIDI event coder */
+ 	struct seq_oss_devinfo *devinfo;	/* assigned OSSseq device */
+ 	snd_use_lock_t use_lock;
++	struct mutex open_mutex;
+ };
+ 
+ 
+@@ -171,6 +172,7 @@ snd_seq_oss_midi_check_new_port(struct snd_seq_port_info *pinfo)
+ 	mdev->flags = pinfo->capability;
+ 	mdev->opened = 0;
+ 	snd_use_lock_init(&mdev->use_lock);
++	mutex_init(&mdev->open_mutex);
+ 
+ 	/* copy and truncate the name of synth device */
+ 	strlcpy(mdev->name, pinfo->name, sizeof(mdev->name));
+@@ -319,14 +321,16 @@ snd_seq_oss_midi_open(struct seq_oss_devinfo *dp, int dev, int fmode)
+ 	int perm;
+ 	struct seq_oss_midi *mdev;
+ 	struct snd_seq_port_subscribe subs;
++	int err;
+ 
+ 	if ((mdev = get_mididev(dp, dev)) == NULL)
+ 		return -ENODEV;
+ 
++	mutex_lock(&mdev->open_mutex);
+ 	/* already used? */
+ 	if (mdev->opened && mdev->devinfo != dp) {
+-		snd_use_lock_free(&mdev->use_lock);
+-		return -EBUSY;
++		err = -EBUSY;
++		goto unlock;
+ 	}
+ 
+ 	perm = 0;
+@@ -336,14 +340,14 @@ snd_seq_oss_midi_open(struct seq_oss_devinfo *dp, int dev, int fmode)
+ 		perm |= PERM_READ;
+ 	perm &= mdev->flags;
+ 	if (perm == 0) {
+-		snd_use_lock_free(&mdev->use_lock);
+-		return -ENXIO;
++		err = -ENXIO;
++		goto unlock;
+ 	}
+ 
+ 	/* already opened? */
+ 	if ((mdev->opened & perm) == perm) {
+-		snd_use_lock_free(&mdev->use_lock);
+-		return 0;
++		err = 0;
++		goto unlock;
+ 	}
+ 
+ 	perm &= ~mdev->opened;
+@@ -368,13 +372,17 @@ snd_seq_oss_midi_open(struct seq_oss_devinfo *dp, int dev, int fmode)
+ 	}
+ 
+ 	if (! mdev->opened) {
+-		snd_use_lock_free(&mdev->use_lock);
+-		return -ENXIO;
++		err = -ENXIO;
++		goto unlock;
+ 	}
+ 
+ 	mdev->devinfo = dp;
++	err = 0;
++
++ unlock:
++	mutex_unlock(&mdev->open_mutex);
+ 	snd_use_lock_free(&mdev->use_lock);
+-	return 0;
++	return err;
+ }
+ 
+ /*
+@@ -388,10 +396,9 @@ snd_seq_oss_midi_close(struct seq_oss_devinfo *dp, int dev)
+ 
+ 	if ((mdev = get_mididev(dp, dev)) == NULL)
+ 		return -ENODEV;
+-	if (! mdev->opened || mdev->devinfo != dp) {
+-		snd_use_lock_free(&mdev->use_lock);
+-		return 0;
+-	}
++	mutex_lock(&mdev->open_mutex);
++	if (!mdev->opened || mdev->devinfo != dp)
++		goto unlock;
+ 
+ 	memset(&subs, 0, sizeof(subs));
+ 	if (mdev->opened & PERM_WRITE) {
+@@ -410,6 +417,8 @@ snd_seq_oss_midi_close(struct seq_oss_devinfo *dp, int dev)
+ 	mdev->opened = 0;
+ 	mdev->devinfo = NULL;
+ 
++ unlock:
++	mutex_unlock(&mdev->open_mutex);
+ 	snd_use_lock_free(&mdev->use_lock);
+ 	return 0;
+ }
+diff --git a/sound/pci/ac97/ac97_codec.c b/sound/pci/ac97/ac97_codec.c
+index e18572eae5e01..d894dcdf38f4c 100644
+--- a/sound/pci/ac97/ac97_codec.c
++++ b/sound/pci/ac97/ac97_codec.c
+@@ -2007,10 +2007,9 @@ int snd_ac97_mixer(struct snd_ac97_bus *bus, struct snd_ac97_template *template,
+ 		.dev_disconnect =	snd_ac97_dev_disconnect,
+ 	};
+ 
+-	if (!rac97)
+-		return -EINVAL;
+-	if (snd_BUG_ON(!bus || !template))
++	if (snd_BUG_ON(!bus || !template || !rac97))
+ 		return -EINVAL;
++	*rac97 = NULL;
+ 	if (snd_BUG_ON(template->num >= 4))
+ 		return -EINVAL;
+ 	if (bus->codec[template->num])
+diff --git a/sound/soc/atmel/atmel-i2s.c b/sound/soc/atmel/atmel-i2s.c
+index d870f56c44cfc..0341b31197670 100644
+--- a/sound/soc/atmel/atmel-i2s.c
++++ b/sound/soc/atmel/atmel-i2s.c
+@@ -163,11 +163,14 @@ struct atmel_i2s_gck_param {
+ 
+ #define I2S_MCK_12M288		12288000UL
+ #define I2S_MCK_11M2896		11289600UL
++#define I2S_MCK_6M144		6144000UL
+ 
+ /* mck = (32 * (imckfs+1) / (imckdiv+1)) * fs */
+ static const struct atmel_i2s_gck_param gck_params[] = {
++	/* mck = 6.144Mhz */
++	{  8000, I2S_MCK_6M144,  1, 47},	/* mck =  768 fs */
++
+ 	/* mck = 12.288MHz */
+-	{  8000, I2S_MCK_12M288, 0, 47},	/* mck = 1536 fs */
+ 	{ 16000, I2S_MCK_12M288, 1, 47},	/* mck =  768 fs */
+ 	{ 24000, I2S_MCK_12M288, 3, 63},	/* mck =  512 fs */
+ 	{ 32000, I2S_MCK_12M288, 3, 47},	/* mck =  384 fs */
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index f1c9e563994b2..04a7070c78e28 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -1295,6 +1295,7 @@ config SND_SOC_STA529
+ config SND_SOC_STAC9766
+ 	tristate
+ 	depends on SND_SOC_AC97_BUS
++	select REGMAP_AC97
+ 
+ config SND_SOC_STI_SAS
+ 	tristate "codec Audio support for STI SAS codec"
+diff --git a/sound/soc/codecs/da7219-aad.c b/sound/soc/codecs/da7219-aad.c
+index 48081d71c22c9..b316d613a7090 100644
+--- a/sound/soc/codecs/da7219-aad.c
++++ b/sound/soc/codecs/da7219-aad.c
+@@ -347,11 +347,15 @@ static irqreturn_t da7219_aad_irq_thread(int irq, void *data)
+ 	struct da7219_priv *da7219 = snd_soc_component_get_drvdata(component);
+ 	u8 events[DA7219_AAD_IRQ_REG_MAX];
+ 	u8 statusa;
+-	int i, report = 0, mask = 0;
++	int i, ret, report = 0, mask = 0;
+ 
+ 	/* Read current IRQ events */
+-	regmap_bulk_read(da7219->regmap, DA7219_ACCDET_IRQ_EVENT_A,
+-			 events, DA7219_AAD_IRQ_REG_MAX);
++	ret = regmap_bulk_read(da7219->regmap, DA7219_ACCDET_IRQ_EVENT_A,
++			       events, DA7219_AAD_IRQ_REG_MAX);
++	if (ret) {
++		dev_warn_ratelimited(component->dev, "Failed to read IRQ events: %d\n", ret);
++		return IRQ_NONE;
++	}
+ 
+ 	if (!events[DA7219_AAD_IRQ_REG_A] && !events[DA7219_AAD_IRQ_REG_B])
+ 		return IRQ_NONE;
+@@ -854,6 +858,8 @@ void da7219_aad_suspend(struct snd_soc_component *component)
+ 			}
+ 		}
+ 	}
++
++	synchronize_irq(da7219_aad->irq);
+ }
+ 
+ void da7219_aad_resume(struct snd_soc_component *component)
+diff --git a/sound/soc/codecs/es8316.c b/sound/soc/codecs/es8316.c
+index 03ad34a275da2..dafbf73d4eede 100644
+--- a/sound/soc/codecs/es8316.c
++++ b/sound/soc/codecs/es8316.c
+@@ -153,7 +153,7 @@ static const char * const es8316_dmic_txt[] = {
+ 		"dmic data at high level",
+ 		"dmic data at low level",
+ };
+-static const unsigned int es8316_dmic_values[] = { 0, 1, 2 };
++static const unsigned int es8316_dmic_values[] = { 0, 2, 3 };
+ static const struct soc_enum es8316_dmic_src_enum =
+ 	SOC_VALUE_ENUM_SINGLE(ES8316_ADC_DMIC, 0, 3,
+ 			      ARRAY_SIZE(es8316_dmic_txt),
+diff --git a/sound/soc/codecs/rt5682-sdw.c b/sound/soc/codecs/rt5682-sdw.c
+index 2fb6b1edf9331..1e9109a95e6e0 100644
+--- a/sound/soc/codecs/rt5682-sdw.c
++++ b/sound/soc/codecs/rt5682-sdw.c
+@@ -413,9 +413,11 @@ static int rt5682_io_init(struct device *dev, struct sdw_slave *slave)
+ 		usleep_range(30000, 30005);
+ 		loop--;
+ 	}
++
+ 	if (val != DEVICE_ID) {
+ 		dev_err(dev, "Device with ID register %x is not rt5682\n", val);
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err_nodev;
+ 	}
+ 
+ 	rt5682_calibrate(rt5682);
+@@ -486,10 +488,11 @@ reinit:
+ 	rt5682->hw_init = true;
+ 	rt5682->first_hw_init = true;
+ 
++err_nodev:
+ 	pm_runtime_mark_last_busy(&slave->dev);
+ 	pm_runtime_put_autosuspend(&slave->dev);
+ 
+-	dev_dbg(&slave->dev, "%s hw_init complete\n", __func__);
++	dev_dbg(&slave->dev, "%s hw_init complete: %d\n", __func__, ret);
+ 
+ 	return ret;
+ }
+diff --git a/tools/bpf/bpftool/skeleton/profiler.bpf.c b/tools/bpf/bpftool/skeleton/profiler.bpf.c
+index ce5b65e07ab10..2f80edc682f11 100644
+--- a/tools/bpf/bpftool/skeleton/profiler.bpf.c
++++ b/tools/bpf/bpftool/skeleton/profiler.bpf.c
+@@ -4,6 +4,12 @@
+ #include <bpf/bpf_helpers.h>
+ #include <bpf/bpf_tracing.h>
+ 
++struct bpf_perf_event_value___local {
++	__u64 counter;
++	__u64 enabled;
++	__u64 running;
++} __attribute__((preserve_access_index));
++
+ /* map of perf event fds, num_cpu * num_metric entries */
+ struct {
+ 	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
+@@ -15,14 +21,14 @@ struct {
+ struct {
+ 	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+ 	__uint(key_size, sizeof(u32));
+-	__uint(value_size, sizeof(struct bpf_perf_event_value));
++	__uint(value_size, sizeof(struct bpf_perf_event_value___local));
+ } fentry_readings SEC(".maps");
+ 
+ /* accumulated readings */
+ struct {
+ 	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+ 	__uint(key_size, sizeof(u32));
+-	__uint(value_size, sizeof(struct bpf_perf_event_value));
++	__uint(value_size, sizeof(struct bpf_perf_event_value___local));
+ } accum_readings SEC(".maps");
+ 
+ /* sample counts, one per cpu */
+@@ -39,7 +45,7 @@ const volatile __u32 num_metric = 1;
+ SEC("fentry/XXX")
+ int BPF_PROG(fentry_XXX)
+ {
+-	struct bpf_perf_event_value *ptrs[MAX_NUM_MATRICS];
++	struct bpf_perf_event_value___local *ptrs[MAX_NUM_MATRICS];
+ 	u32 key = bpf_get_smp_processor_id();
+ 	u32 i;
+ 
+@@ -53,10 +59,10 @@ int BPF_PROG(fentry_XXX)
+ 	}
+ 
+ 	for (i = 0; i < num_metric && i < MAX_NUM_MATRICS; i++) {
+-		struct bpf_perf_event_value reading;
++		struct bpf_perf_event_value___local reading;
+ 		int err;
+ 
+-		err = bpf_perf_event_read_value(&events, key, &reading,
++		err = bpf_perf_event_read_value(&events, key, (void *)&reading,
+ 						sizeof(reading));
+ 		if (err)
+ 			return 0;
+@@ -68,14 +74,14 @@ int BPF_PROG(fentry_XXX)
+ }
+ 
+ static inline void
+-fexit_update_maps(u32 id, struct bpf_perf_event_value *after)
++fexit_update_maps(u32 id, struct bpf_perf_event_value___local *after)
+ {
+-	struct bpf_perf_event_value *before, diff;
++	struct bpf_perf_event_value___local *before, diff;
+ 
+ 	before = bpf_map_lookup_elem(&fentry_readings, &id);
+ 	/* only account samples with a valid fentry_reading */
+ 	if (before && before->counter) {
+-		struct bpf_perf_event_value *accum;
++		struct bpf_perf_event_value___local *accum;
+ 
+ 		diff.counter = after->counter - before->counter;
+ 		diff.enabled = after->enabled - before->enabled;
+@@ -93,7 +99,7 @@ fexit_update_maps(u32 id, struct bpf_perf_event_value *after)
+ SEC("fexit/XXX")
+ int BPF_PROG(fexit_XXX)
+ {
+-	struct bpf_perf_event_value readings[MAX_NUM_MATRICS];
++	struct bpf_perf_event_value___local readings[MAX_NUM_MATRICS];
+ 	u32 cpu = bpf_get_smp_processor_id();
+ 	u32 i, zero = 0;
+ 	int err;
+@@ -102,7 +108,8 @@ int BPF_PROG(fexit_XXX)
+ 	/* read all events before updating the maps, to reduce error */
+ 	for (i = 0; i < num_metric && i < MAX_NUM_MATRICS; i++) {
+ 		err = bpf_perf_event_read_value(&events, cpu + i * num_cpu,
+-						readings + i, sizeof(*readings));
++						(void *)(readings + i),
++						sizeof(*readings));
+ 		if (err)
+ 			return 0;
+ 	}
+diff --git a/tools/hv/vmbus_testing b/tools/hv/vmbus_testing
+index e7212903dd1d9..4467979d8f699 100755
+--- a/tools/hv/vmbus_testing
++++ b/tools/hv/vmbus_testing
+@@ -164,7 +164,7 @@ def recursive_file_lookup(path, file_map):
+ def get_all_devices_test_status(file_map):
+ 
+         for device in file_map:
+-                if (get_test_state(locate_state(device, file_map)) is 1):
++                if (get_test_state(locate_state(device, file_map)) == 1):
+                         print("Testing = ON for: {}"
+                               .format(device.split("/")[5]))
+                 else:
+@@ -203,7 +203,7 @@ def write_test_files(path, value):
+ def set_test_state(state_path, state_value, quiet):
+ 
+         write_test_files(state_path, state_value)
+-        if (get_test_state(state_path) is 1):
++        if (get_test_state(state_path) == 1):
+                 if (not quiet):
+                         print("Testing = ON for device: {}"
+                               .format(state_path.split("/")[5]))
+diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
+index 7c64134472c77..ee30372f77133 100644
+--- a/tools/perf/builtin-top.c
++++ b/tools/perf/builtin-top.c
+@@ -1743,6 +1743,7 @@ int cmd_top(int argc, const char **argv)
+ 	top.session = perf_session__new(NULL, false, NULL);
+ 	if (IS_ERR(top.session)) {
+ 		status = PTR_ERR(top.session);
++		top.session = NULL;
+ 		goto out_delete_evlist;
+ 	}
+ 
+diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
+index b0e1880cf992b..f2586e46d53e8 100644
+--- a/tools/perf/ui/browsers/hists.c
++++ b/tools/perf/ui/browsers/hists.c
+@@ -407,11 +407,6 @@ static bool hist_browser__selection_has_children(struct hist_browser *browser)
+ 	return container_of(ms, struct callchain_list, ms)->has_children;
+ }
+ 
+-static bool hist_browser__he_selection_unfolded(struct hist_browser *browser)
+-{
+-	return browser->he_selection ? browser->he_selection->unfolded : false;
+-}
+-
+ static bool hist_browser__selection_unfolded(struct hist_browser *browser)
+ {
+ 	struct hist_entry *he = browser->he_selection;
+@@ -584,8 +579,8 @@ static int hierarchy_set_folding(struct hist_browser *hb, struct hist_entry *he,
+ 	return n;
+ }
+ 
+-static void __hist_entry__set_folding(struct hist_entry *he,
+-				      struct hist_browser *hb, bool unfold)
++static void hist_entry__set_folding(struct hist_entry *he,
++				    struct hist_browser *hb, bool unfold)
+ {
+ 	hist_entry__init_have_children(he);
+ 	he->unfolded = unfold ? he->has_children : false;
+@@ -603,34 +598,12 @@ static void __hist_entry__set_folding(struct hist_entry *he,
+ 		he->nr_rows = 0;
+ }
+ 
+-static void hist_entry__set_folding(struct hist_entry *he,
+-				    struct hist_browser *browser, bool unfold)
+-{
+-	double percent;
+-
+-	percent = hist_entry__get_percent_limit(he);
+-	if (he->filtered || percent < browser->min_pcnt)
+-		return;
+-
+-	__hist_entry__set_folding(he, browser, unfold);
+-
+-	if (!he->depth || unfold)
+-		browser->nr_hierarchy_entries++;
+-	if (he->leaf)
+-		browser->nr_callchain_rows += he->nr_rows;
+-	else if (unfold && !hist_entry__has_hierarchy_children(he, browser->min_pcnt)) {
+-		browser->nr_hierarchy_entries++;
+-		he->has_no_entry = true;
+-		he->nr_rows = 1;
+-	} else
+-		he->has_no_entry = false;
+-}
+-
+ static void
+ __hist_browser__set_folding(struct hist_browser *browser, bool unfold)
+ {
+ 	struct rb_node *nd;
+ 	struct hist_entry *he;
++	double percent;
+ 
+ 	nd = rb_first_cached(&browser->hists->entries);
+ 	while (nd) {
+@@ -640,6 +613,21 @@ __hist_browser__set_folding(struct hist_browser *browser, bool unfold)
+ 		nd = __rb_hierarchy_next(nd, HMD_FORCE_CHILD);
+ 
+ 		hist_entry__set_folding(he, browser, unfold);
++
++		percent = hist_entry__get_percent_limit(he);
++		if (he->filtered || percent < browser->min_pcnt)
++			continue;
++
++		if (!he->depth || unfold)
++			browser->nr_hierarchy_entries++;
++		if (he->leaf)
++			browser->nr_callchain_rows += he->nr_rows;
++		else if (unfold && !hist_entry__has_hierarchy_children(he, browser->min_pcnt)) {
++			browser->nr_hierarchy_entries++;
++			he->has_no_entry = true;
++			he->nr_rows = 1;
++		} else
++			he->has_no_entry = false;
+ 	}
+ }
+ 
+@@ -659,8 +647,10 @@ static void hist_browser__set_folding_selected(struct hist_browser *browser, boo
+ 	if (!browser->he_selection)
+ 		return;
+ 
+-	hist_entry__set_folding(browser->he_selection, browser, unfold);
+-	browser->b.nr_entries = hist_browser__nr_entries(browser);
++	if (unfold == browser->he_selection->unfolded)
++		return;
++
++	hist_browser__toggle_fold(browser);
+ }
+ 
+ static void ui_browser__warn_lost_events(struct ui_browser *browser)
+@@ -731,8 +721,8 @@ static int hist_browser__handle_hotkey(struct hist_browser *browser, bool warn_l
+ 		hist_browser__set_folding(browser, true);
+ 		break;
+ 	case 'e':
+-		/* Expand the selected entry. */
+-		hist_browser__set_folding_selected(browser, !hist_browser__he_selection_unfolded(browser));
++		/* Toggle expand/collapse the selected entry. */
++		hist_browser__toggle_fold(browser);
+ 		break;
+ 	case 'H':
+ 		browser->show_headers = !browser->show_headers;
+@@ -1778,7 +1768,7 @@ static void hists_browser__hierarchy_headers(struct hist_browser *browser)
+ 	hists_browser__scnprintf_hierarchy_headers(browser, headers,
+ 						   sizeof(headers));
+ 
+-	ui_browser__gotorc(&browser->b, 0, 0);
++	ui_browser__gotorc_title(&browser->b, 0, 0);
+ 	ui_browser__set_color(&browser->b, HE_COLORSET_ROOT);
+ 	ui_browser__write_nstring(&browser->b, headers, browser->b.width + 1);
+ }
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index 3081894547883..c9078cee6be01 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -1718,8 +1718,11 @@ static int symbol__disassemble_bpf(struct symbol *sym,
+ 	perf_exe(tpath, sizeof(tpath));
+ 
+ 	bfdf = bfd_openr(tpath, NULL);
+-	assert(bfdf);
+-	assert(bfd_check_format(bfdf, bfd_object));
++	if (bfdf == NULL)
++		abort();
++
++	if (!bfd_check_format(bfdf, bfd_object))
++		abort();
+ 
+ 	s = open_memstream(&buf, &buf_size);
+ 	if (!s) {
+@@ -1767,7 +1770,8 @@ static int symbol__disassemble_bpf(struct symbol *sym,
+ #else
+ 	disassemble = disassembler(bfdf);
+ #endif
+-	assert(disassemble);
++	if (disassemble == NULL)
++		abort();
+ 
+ 	fflush(s);
+ 	do {
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index be850e9f88520..dd06770b43f1c 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -3987,7 +3987,8 @@ int perf_event__process_attr(struct perf_tool *tool __maybe_unused,
+ 			     union perf_event *event,
+ 			     struct evlist **pevlist)
+ {
+-	u32 i, ids, n_ids;
++	u32 i, n_ids;
++	u64 *ids;
+ 	struct evsel *evsel;
+ 	struct evlist *evlist = *pevlist;
+ 
+@@ -4003,9 +4004,8 @@ int perf_event__process_attr(struct perf_tool *tool __maybe_unused,
+ 
+ 	evlist__add(evlist, evsel);
+ 
+-	ids = event->header.size;
+-	ids -= (void *)&event->attr.id - (void *)event;
+-	n_ids = ids / sizeof(u64);
++	n_ids = event->header.size - sizeof(event->header) - event->attr.attr.size;
++	n_ids = n_ids / sizeof(u64);
+ 	/*
+ 	 * We don't have the cpu and thread maps on the header, so
+ 	 * for allocating the perf_sample_id table we fake 1 cpu and
+@@ -4014,8 +4014,9 @@ int perf_event__process_attr(struct perf_tool *tool __maybe_unused,
+ 	if (perf_evsel__alloc_id(&evsel->core, 1, n_ids))
+ 		return -ENOMEM;
+ 
++	ids = (void *)&event->attr.attr + event->attr.attr.size;
+ 	for (i = 0; i < n_ids; i++) {
+-		perf_evlist__id_add(&evlist->core, &evsel->core, 0, i, event->attr.id[i]);
++		perf_evlist__id_add(&evlist->core, &evsel->core, 0, i, ids[i]);
+ 	}
+ 
+ 	return 0;
+diff --git a/tools/testing/selftests/bpf/benchs/run_bench_rename.sh b/tools/testing/selftests/bpf/benchs/run_bench_rename.sh
+index 16f774b1cdbed..7b281dbe41656 100755
+--- a/tools/testing/selftests/bpf/benchs/run_bench_rename.sh
++++ b/tools/testing/selftests/bpf/benchs/run_bench_rename.sh
+@@ -2,7 +2,7 @@
+ 
+ set -eufo pipefail
+ 
+-for i in base kprobe kretprobe rawtp fentry fexit fmodret
++for i in base kprobe kretprobe rawtp fentry fexit
+ do
+ 	summary=$(sudo ./bench -w2 -d5 -a rename-$i | tail -n1 | cut -d'(' -f1 | cut -d' ' -f3-)
+ 	printf "%-10s: %s\n" $i "$summary"
+diff --git a/tools/testing/selftests/bpf/progs/test_cls_redirect.h b/tools/testing/selftests/bpf/progs/test_cls_redirect.h
+index 76eab0aacba0c..233b089d1fbac 100644
+--- a/tools/testing/selftests/bpf/progs/test_cls_redirect.h
++++ b/tools/testing/selftests/bpf/progs/test_cls_redirect.h
+@@ -12,6 +12,15 @@
+ #include <linux/ipv6.h>
+ #include <linux/udp.h>
+ 
++/* offsetof() is used in static asserts, and the libbpf-redefined CO-RE
++ * friendly version breaks compilation for older clang versions <= 15
++ * when invoked in a static assert.  Restore original here.
++ */
++#ifdef offsetof
++#undef offsetof
++#define offsetof(type, member) __builtin_offsetof(type, member)
++#endif
++
+ struct gre_base_hdr {
+ 	uint16_t flags;
+ 	uint16_t protocol;
+diff --git a/tools/testing/selftests/kselftest/runner.sh b/tools/testing/selftests/kselftest/runner.sh
+index cc9c846585f05..83616f0779a7e 100644
+--- a/tools/testing/selftests/kselftest/runner.sh
++++ b/tools/testing/selftests/kselftest/runner.sh
+@@ -33,9 +33,10 @@ tap_timeout()
+ {
+ 	# Make sure tests will time out if utility is available.
+ 	if [ -x /usr/bin/timeout ] ; then
+-		/usr/bin/timeout --foreground "$kselftest_timeout" "$1"
++		/usr/bin/timeout --foreground "$kselftest_timeout" \
++			/usr/bin/timeout "$kselftest_timeout" $1
+ 	else
+-		"$1"
++		$1
+ 	fi
+ }
+ 
+@@ -65,17 +66,25 @@ run_one()
+ 
+ 	TEST_HDR_MSG="selftests: $DIR: $BASENAME_TEST"
+ 	echo "# $TEST_HDR_MSG"
+-	if [ ! -x "$TEST" ]; then
+-		echo -n "# Warning: file $TEST is "
+-		if [ ! -e "$TEST" ]; then
+-			echo "missing!"
+-		else
+-			echo "not executable, correct this."
+-		fi
++	if [ ! -e "$TEST" ]; then
++		echo "# Warning: file $TEST is missing!"
+ 		echo "not ok $test_num $TEST_HDR_MSG"
+ 	else
++		cmd="./$BASENAME_TEST"
++		if [ ! -x "$TEST" ]; then
++			echo "# Warning: file $TEST is not executable"
++
++			if [ $(head -n 1 "$TEST" | cut -c -2) = "#!" ]
++			then
++				interpreter=$(head -n 1 "$TEST" | cut -c 3-)
++				cmd="$interpreter ./$BASENAME_TEST"
++			else
++				echo "not ok $test_num $TEST_HDR_MSG"
++				return
++			fi
++		fi
+ 		cd `dirname $TEST` > /dev/null
+-		((((( tap_timeout ./$BASENAME_TEST 2>&1; echo $? >&3) |
++		((((( tap_timeout "$cmd" 2>&1; echo $? >&3) |
+ 			tap_prefix >&4) 3>&1) |
+ 			(read xs; exit $xs)) 4>>"$logfile" &&
+ 		echo "ok $test_num $TEST_HDR_MSG") ||
+diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h
+index 3e7b2e521cde4..2fadc99d93619 100644
+--- a/tools/testing/selftests/kselftest_harness.h
++++ b/tools/testing/selftests/kselftest_harness.h
+@@ -910,7 +910,11 @@ void __wait_for_test(struct __test_metadata *t)
+ 		fprintf(TH_LOG_STREAM,
+ 			"# %s: Test terminated by timeout\n", t->name);
+ 	} else if (WIFEXITED(status)) {
+-		if (t->termsig != -1) {
++		if (WEXITSTATUS(status) == 255) {
++			/* SKIP */
++			t->passed = 1;
++			t->skip = 1;
++		} else if (t->termsig != -1) {
+ 			t->passed = 0;
+ 			fprintf(TH_LOG_STREAM,
+ 				"# %s: Test exited normally instead of by signal (code: %d)\n",
+@@ -922,11 +926,6 @@ void __wait_for_test(struct __test_metadata *t)
+ 			case 0:
+ 				t->passed = 1;
+ 				break;
+-			/* SKIP */
+-			case 255:
+-				t->passed = 1;
+-				t->skip = 1;
+-				break;
+ 			/* Other failure, assume step report. */
+ 			default:
+ 				t->passed = 0;
+diff --git a/tools/testing/selftests/resctrl/cache.c b/tools/testing/selftests/resctrl/cache.c
+index 5922cc1b03867..b3c0e858c4e07 100644
+--- a/tools/testing/selftests/resctrl/cache.c
++++ b/tools/testing/selftests/resctrl/cache.c
+@@ -89,21 +89,19 @@ static int reset_enable_llc_perf(pid_t pid, int cpu_no)
+ static int get_llc_perf(unsigned long *llc_perf_miss)
+ {
+ 	__u64 total_misses;
++	int ret;
+ 
+ 	/* Stop counters after one span to get miss rate */
+ 
+ 	ioctl(fd_lm, PERF_EVENT_IOC_DISABLE, 0);
+ 
+-	if (read(fd_lm, &rf_cqm, sizeof(struct read_format)) == -1) {
++	ret = read(fd_lm, &rf_cqm, sizeof(struct read_format));
++	if (ret == -1) {
+ 		perror("Could not get llc misses through perf");
+-
+ 		return -1;
+ 	}
+ 
+ 	total_misses = rf_cqm.values[0].value;
+-
+-	close(fd_lm);
+-
+ 	*llc_perf_miss = total_misses;
+ 
+ 	return 0;
+@@ -256,17 +254,23 @@ int cat_val(struct resctrl_val_param *param)
+ 					 memflush, operation, resctrl_val)) {
+ 				fprintf(stderr, "Error-running fill buffer\n");
+ 				ret = -1;
+-				break;
++				goto pe_close;
+ 			}
+ 
+ 			sleep(1);
+ 			ret = measure_cache_vals(param, bm_pid);
+ 			if (ret)
+-				break;
++				goto pe_close;
++
++			close(fd_lm);
+ 		} else {
+ 			break;
+ 		}
+ 	}
+ 
+ 	return ret;
++
++pe_close:
++	close(fd_lm);
++	return ret;
+ }
+diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
+index c20d0a7ecbe63..ab1d91328d67b 100644
+--- a/tools/testing/selftests/resctrl/fill_buf.c
++++ b/tools/testing/selftests/resctrl/fill_buf.c
+@@ -184,12 +184,13 @@ fill_cache(unsigned long long buf_size, int malloc_and_init, int memflush,
+ 	else
+ 		ret = fill_cache_write(start_ptr, end_ptr, resctrl_val);
+ 
++	free(startptr);
++
+ 	if (ret) {
+ 		printf("\n Error in fill cache read/write...\n");
+ 		return -1;
+ 	}
+ 
+-	free(startptr);
+ 
+ 	return 0;
+ }
+diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
+index 36da6136af968..c38f2d58df927 100644
+--- a/tools/testing/selftests/resctrl/resctrl.h
++++ b/tools/testing/selftests/resctrl/resctrl.h
+@@ -33,6 +33,7 @@
+ 	do {					\
+ 		perror(err_msg);		\
+ 		kill(ppid, SIGKILL);		\
++		umount_resctrlfs();		\
+ 		exit(EXIT_FAILURE);		\
+ 	} while (0)
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-09-21 11:29 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-09-21 11:29 UTC (permalink / raw
  To: gentoo-commits

commit:     6788828261a8548c2a47c35ba7ccb7bc2fb1c2a6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 21 11:29:30 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 21 11:29:30 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=67888282

Linux patch 5.10.196

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |  4 ++++
 1195_linux-5.10.196.patch | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/0000_README b/0000_README
index 08e7549f..f7687328 100644
--- a/0000_README
+++ b/0000_README
@@ -823,6 +823,10 @@ Patch:  1194_linux-5.10.195.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.195
 
+Patch:  1195_linux-5.10.196.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.196
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1195_linux-5.10.196.patch b/1195_linux-5.10.196.patch
new file mode 100644
index 00000000..1c2b574a
--- /dev/null
+++ b/1195_linux-5.10.196.patch
@@ -0,0 +1,33 @@
+diff --git a/Makefile b/Makefile
+index 006700fbb6525..7021aa85afd1e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 195
++SUBLEVEL = 196
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 0b7e9ab517d58..12388ed4faa59 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -479,7 +479,6 @@ static struct dentry * configfs_lookup(struct inode *dir,
+ 	if (!configfs_dirent_is_ready(parent_sd))
+ 		goto out;
+ 
+-	spin_lock(&configfs_dirent_lock);
+ 	list_for_each_entry(sd, &parent_sd->s_children, s_sibling) {
+ 		if (sd->s_type & CONFIGFS_NOT_PINNED) {
+ 			const unsigned char * name = configfs_get_name(sd);
+@@ -492,7 +491,6 @@ static struct dentry * configfs_lookup(struct inode *dir,
+ 			break;
+ 		}
+ 	}
+-	spin_unlock(&configfs_dirent_lock);
+ 
+ 	if (!found) {
+ 		/*


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-09-23 10:19 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-09-23 10:19 UTC (permalink / raw
  To: gentoo-commits

commit:     224b8431e3d8ed56d5c1c50930d4abe1d96e86f3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 23 10:19:18 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep 23 10:19:18 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=224b8431

Linux patch 5.10.197

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1196_linux-5.10.197.patch | 3660 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3664 insertions(+)

diff --git a/0000_README b/0000_README
index f7687328..7b7d5684 100644
--- a/0000_README
+++ b/0000_README
@@ -827,6 +827,10 @@ Patch:  1195_linux-5.10.196.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.196
 
+Patch:  1196_linux-5.10.197.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.197
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1196_linux-5.10.197.patch b/1196_linux-5.10.197.patch
new file mode 100644
index 00000000..c72f2c9a
--- /dev/null
+++ b/1196_linux-5.10.197.patch
@@ -0,0 +1,3660 @@
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index 4f3206495217c..10a26d44ef4a9 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -149,6 +149,9 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Hisilicon      | Hip08 SMMU PMCG | #162001800      | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Hisilicon      | Hip08 SMMU PMCG | #162001900      | N/A                         |
++|                | Hip09 SMMU PMCG |                 |                             |
+++----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Qualcomm Tech. | Kryo/Falkor v1  | E1003           | QCOM_FALKOR_ERRATUM_1003    |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Makefile b/Makefile
+index 7021aa85afd1e..12986f3532a98 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 196
++SUBLEVEL = 197
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c
+index b1423fb130ea4..8f1fa7aac31fb 100644
+--- a/arch/arm/kernel/hw_breakpoint.c
++++ b/arch/arm/kernel/hw_breakpoint.c
+@@ -626,7 +626,7 @@ int hw_breakpoint_arch_parse(struct perf_event *bp,
+ 	hw->address &= ~alignment_mask;
+ 	hw->ctrl.len <<= offset;
+ 
+-	if (is_default_overflow_handler(bp)) {
++	if (uses_default_overflow_handler(bp)) {
+ 		/*
+ 		 * Mismatch breakpoints are required for single-stepping
+ 		 * breakpoints.
+@@ -798,7 +798,7 @@ static void watchpoint_handler(unsigned long addr, unsigned int fsr,
+ 		 * Otherwise, insert a temporary mismatch breakpoint so that
+ 		 * we can single-step over the watchpoint trigger.
+ 		 */
+-		if (!is_default_overflow_handler(wp))
++		if (!uses_default_overflow_handler(wp))
+ 			continue;
+ step:
+ 		enable_single_step(wp, instruction_pointer(regs));
+@@ -811,7 +811,7 @@ step:
+ 		info->trigger = addr;
+ 		pr_debug("watchpoint fired: address = 0x%x\n", info->trigger);
+ 		perf_bp_event(wp, regs);
+-		if (is_default_overflow_handler(wp))
++		if (uses_default_overflow_handler(wp))
+ 			enable_single_step(wp, instruction_pointer(regs));
+ 	}
+ 
+@@ -886,7 +886,7 @@ static void breakpoint_handler(unsigned long unknown, struct pt_regs *regs)
+ 			info->trigger = addr;
+ 			pr_debug("breakpoint fired: address = 0x%x\n", addr);
+ 			perf_bp_event(bp, regs);
+-			if (is_default_overflow_handler(bp))
++			if (uses_default_overflow_handler(bp))
+ 				enable_single_step(bp, addr);
+ 			goto unlock;
+ 		}
+diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c
+index 712e97c03e54c..e5a0c38f1b5ee 100644
+--- a/arch/arm64/kernel/hw_breakpoint.c
++++ b/arch/arm64/kernel/hw_breakpoint.c
+@@ -654,7 +654,7 @@ static int breakpoint_handler(unsigned long unused, unsigned int esr,
+ 		perf_bp_event(bp, regs);
+ 
+ 		/* Do we need to handle the stepping? */
+-		if (is_default_overflow_handler(bp))
++		if (uses_default_overflow_handler(bp))
+ 			step = 1;
+ unlock:
+ 		rcu_read_unlock();
+@@ -733,7 +733,7 @@ static u64 get_distance_from_watchpoint(unsigned long addr, u64 val,
+ static int watchpoint_report(struct perf_event *wp, unsigned long addr,
+ 			     struct pt_regs *regs)
+ {
+-	int step = is_default_overflow_handler(wp);
++	int step = uses_default_overflow_handler(wp);
+ 	struct arch_hw_breakpoint *info = counter_arch_bp(wp);
+ 
+ 	info->trigger = addr;
+diff --git a/arch/powerpc/platforms/pseries/ibmebus.c b/arch/powerpc/platforms/pseries/ibmebus.c
+index 8c6e509f69675..c3cc010e9cc45 100644
+--- a/arch/powerpc/platforms/pseries/ibmebus.c
++++ b/arch/powerpc/platforms/pseries/ibmebus.c
+@@ -451,6 +451,7 @@ static int __init ibmebus_bus_init(void)
+ 	if (err) {
+ 		printk(KERN_WARNING "%s: device_register returned %i\n",
+ 		       __func__, err);
++		put_device(&ibmebus_bus_device);
+ 		bus_unregister(&ibmebus_bus_type);
+ 
+ 		return err;
+diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
+index 39b2eded7bc2b..f4a2e6d373b29 100644
+--- a/arch/x86/boot/compressed/ident_map_64.c
++++ b/arch/x86/boot/compressed/ident_map_64.c
+@@ -67,6 +67,14 @@ static void *alloc_pgt_page(void *context)
+ 		return NULL;
+ 	}
+ 
++	/* Consumed more tables than expected? */
++	if (pages->pgt_buf_offset == BOOT_PGT_SIZE_WARN) {
++		debug_putstr("pgt_buf running low in " __FILE__ "\n");
++		debug_putstr("Need to raise BOOT_PGT_SIZE?\n");
++		debug_putaddr(pages->pgt_buf_offset);
++		debug_putaddr(pages->pgt_buf_size);
++	}
++
+ 	entry = pages->pgt_buf + pages->pgt_buf_offset;
+ 	pages->pgt_buf_offset += PAGE_SIZE;
+ 
+diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
+index 9191280d9ea31..215d37f7dde8a 100644
+--- a/arch/x86/include/asm/boot.h
++++ b/arch/x86/include/asm/boot.h
+@@ -40,23 +40,40 @@
+ #ifdef CONFIG_X86_64
+ # define BOOT_STACK_SIZE	0x4000
+ 
++/*
++ * Used by decompressor's startup_32() to allocate page tables for identity
++ * mapping of the 4G of RAM in 4-level paging mode:
++ * - 1 level4 table;
++ * - 1 level3 table;
++ * - 4 level2 table that maps everything with 2M pages;
++ *
++ * The additional level5 table needed for 5-level paging is allocated from
++ * trampoline_32bit memory.
++ */
+ # define BOOT_INIT_PGT_SIZE	(6*4096)
+-# ifdef CONFIG_RANDOMIZE_BASE
++
+ /*
+- * Assuming all cross the 512GB boundary:
+- * 1 page for level4
+- * (2+2)*4 pages for kernel, param, cmd_line, and randomized kernel
+- * 2 pages for first 2M (video RAM: CONFIG_X86_VERBOSE_BOOTUP).
+- * Total is 19 pages.
++ * Total number of page tables kernel_add_identity_map() can allocate,
++ * including page tables consumed by startup_32().
++ *
++ * Worst-case scenario:
++ *  - 5-level paging needs 1 level5 table;
++ *  - KASLR needs to map kernel, boot_params, cmdline and randomized kernel,
++ *    assuming all of them cross 256T boundary:
++ *    + 4*2 level4 table;
++ *    + 4*2 level3 table;
++ *    + 4*2 level2 table;
++ *  - X86_VERBOSE_BOOTUP needs to map the first 2M (video RAM):
++ *    + 1 level4 table;
++ *    + 1 level3 table;
++ *    + 1 level2 table;
++ * Total: 28 tables
++ *
++ * Add 4 spare table in case decompressor touches anything beyond what is
++ * accounted above. Warn if it happens.
+  */
+-#  ifdef CONFIG_X86_VERBOSE_BOOTUP
+-#   define BOOT_PGT_SIZE	(19*4096)
+-#  else /* !CONFIG_X86_VERBOSE_BOOTUP */
+-#   define BOOT_PGT_SIZE	(17*4096)
+-#  endif
+-# else /* !CONFIG_RANDOMIZE_BASE */
+-#  define BOOT_PGT_SIZE		BOOT_INIT_PGT_SIZE
+-# endif
++# define BOOT_PGT_SIZE_WARN	(28*4096)
++# define BOOT_PGT_SIZE		(32*4096)
+ 
+ #else /* !CONFIG_X86_64 */
+ # define BOOT_STACK_SIZE	0x1000
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index bcf09fbc750af..80d9076e42e0b 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -357,10 +357,10 @@ static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	 * cipher name.
+ 	 */
+ 	if (!strncmp(cipher_name, "ecb(", 4)) {
+-		unsigned len;
++		int len;
+ 
+-		len = strlcpy(ecb_name, cipher_name + 4, sizeof(ecb_name));
+-		if (len < 2 || len >= sizeof(ecb_name))
++		len = strscpy(ecb_name, cipher_name + 4, sizeof(ecb_name));
++		if (len < 2)
+ 			goto err_free_inst;
+ 
+ 		if (ecb_name[len - 1] != ')')
+diff --git a/crypto/xts.c b/crypto/xts.c
+index c6a105dba38b9..74dc199d54867 100644
+--- a/crypto/xts.c
++++ b/crypto/xts.c
+@@ -395,10 +395,10 @@ static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	 * cipher name.
+ 	 */
+ 	if (!strncmp(cipher_name, "ecb(", 4)) {
+-		unsigned len;
++		int len;
+ 
+-		len = strlcpy(ctx->name, cipher_name + 4, sizeof(ctx->name));
+-		if (len < 2 || len >= sizeof(ctx->name))
++		len = strscpy(ctx->name, cipher_name + 4, sizeof(ctx->name));
++		if (len < 2)
+ 			goto err_free_inst;
+ 
+ 		if (ctx->name[len - 1] != ')')
+diff --git a/drivers/acpi/acpica/psopcode.c b/drivers/acpi/acpica/psopcode.c
+index 28af49263ebfa..62957cba30f61 100644
+--- a/drivers/acpi/acpica/psopcode.c
++++ b/drivers/acpi/acpica/psopcode.c
+@@ -603,7 +603,7 @@ const struct acpi_opcode_info acpi_gbl_aml_op_info[AML_NUM_OPCODES] = {
+ 
+ /* 7E */ ACPI_OP("Timer", ARGP_TIMER_OP, ARGI_TIMER_OP, ACPI_TYPE_ANY,
+ 			 AML_CLASS_EXECUTE, AML_TYPE_EXEC_0A_0T_1R,
+-			 AML_FLAGS_EXEC_0A_0T_1R),
++			 AML_FLAGS_EXEC_0A_0T_1R | AML_NO_OPERAND_RESOLVE),
+ 
+ /* ACPI 5.0 opcodes */
+ 
+diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
+index 50ed949dc1449..554943be26984 100644
+--- a/drivers/acpi/arm64/iort.c
++++ b/drivers/acpi/arm64/iort.c
+@@ -1474,7 +1474,10 @@ static void __init arm_smmu_v3_pmcg_init_resources(struct resource *res,
+ static struct acpi_platform_list pmcg_plat_info[] __initdata = {
+ 	/* HiSilicon Hip08 Platform */
+ 	{"HISI  ", "HIP08   ", 0, ACPI_SIG_IORT, greater_than_or_equal,
+-	 "Erratum #162001800", IORT_SMMU_V3_PMCG_HISI_HIP08},
++	 "Erratum #162001800, Erratum #162001900", IORT_SMMU_V3_PMCG_HISI_HIP08},
++	/* HiSilicon Hip09 Platform */
++	{"HISI  ", "HIP09   ", 0, ACPI_SIG_IORT, greater_than_or_equal,
++	 "Erratum #162001900", IORT_SMMU_V3_PMCG_HISI_HIP09},
+ 	{ }
+ };
+ 
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index b02d381e78483..a5cb9e1d48bcc 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -307,6 +307,15 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_BOARD_NAME, "Lenovo IdeaPad S405"),
+ 		},
+ 	},
++	{
++	 /* https://bugzilla.suse.com/show_bug.cgi?id=1208724 */
++	 .callback = video_detect_force_native,
++	 /* Lenovo Ideapad Z470 */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		DMI_MATCH(DMI_PRODUCT_VERSION, "IdeaPad Z470"),
++		},
++	},
+ 	{
+ 	 /* https://bugzilla.redhat.com/show_bug.cgi?id=1187004 */
+ 	 .callback = video_detect_force_native,
+@@ -348,6 +357,24 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "iMac11,3"),
+ 		},
+ 	},
++	{
++	 /* https://gitlab.freedesktop.org/drm/amd/-/issues/1838 */
++	 .callback = video_detect_force_native,
++	 /* Apple iMac12,1 */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "iMac12,1"),
++		},
++	},
++	{
++	 /* https://gitlab.freedesktop.org/drm/amd/-/issues/2753 */
++	 .callback = video_detect_force_native,
++	 /* Apple iMac12,2 */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "iMac12,2"),
++		},
++	},
+ 	{
+ 	 /* https://bugzilla.redhat.com/show_bug.cgi?id=1217249 */
+ 	 .callback = video_detect_force_native,
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 7ca9fa9a75e24..bf949f7da483f 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -1882,6 +1882,15 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	else
+ 		dev_info(&pdev->dev, "SSS flag set, parallel bus scan disabled\n");
+ 
++	if (!(hpriv->cap & HOST_CAP_PART))
++		host->flags |= ATA_HOST_NO_PART;
++
++	if (!(hpriv->cap & HOST_CAP_SSC))
++		host->flags |= ATA_HOST_NO_SSC;
++
++	if (!(hpriv->cap2 & HOST_CAP2_SDS))
++		host->flags |= ATA_HOST_NO_DEVSLP;
++
+ 	if (pi.flags & ATA_FLAG_EM)
+ 		ahci_reset_em(host);
+ 
+diff --git a/drivers/ata/libata-sata.c b/drivers/ata/libata-sata.c
+index c16423e445255..4fd9a107fe7f8 100644
+--- a/drivers/ata/libata-sata.c
++++ b/drivers/ata/libata-sata.c
+@@ -394,10 +394,23 @@ int sata_link_scr_lpm(struct ata_link *link, enum ata_lpm_policy policy,
+ 	case ATA_LPM_MED_POWER_WITH_DIPM:
+ 	case ATA_LPM_MIN_POWER_WITH_PARTIAL:
+ 	case ATA_LPM_MIN_POWER:
+-		if (ata_link_nr_enabled(link) > 0)
+-			/* no restrictions on LPM transitions */
++		if (ata_link_nr_enabled(link) > 0) {
++			/* assume no restrictions on LPM transitions */
+ 			scontrol &= ~(0x7 << 8);
+-		else {
++
++			/*
++			 * If the controller does not support partial, slumber,
++			 * or devsleep, then disallow these transitions.
++			 */
++			if (link->ap->host->flags & ATA_HOST_NO_PART)
++				scontrol |= (0x1 << 8);
++
++			if (link->ap->host->flags & ATA_HOST_NO_SSC)
++				scontrol |= (0x2 << 8);
++
++			if (link->ap->host->flags & ATA_HOST_NO_DEVSLP)
++				scontrol |= (0x4 << 8);
++		} else {
+ 			/* empty port, power off */
+ 			scontrol &= ~0xf;
+ 			scontrol |= (0x1 << 2);
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index c8e0f8cb9aa32..5e8c078efd22a 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -1504,6 +1504,8 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ 		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47422e03, 0xffffffff,
+ 		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
++	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47424e03, 0xffffffff,
++		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
+ 
+ 	/* Quirks that need to be set based on the module address */
+ 	SYSC_QUIRK("mcpdm", 0x40132000, 0, 0x10, -ENODEV, 0x50000800, 0xffffffff,
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index d7c440ac465f3..b3452259d6e0b 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -469,10 +469,17 @@ static int tpm_tis_send_main(struct tpm_chip *chip, const u8 *buf, size_t len)
+ 	int rc;
+ 	u32 ordinal;
+ 	unsigned long dur;
+-
+-	rc = tpm_tis_send_data(chip, buf, len);
+-	if (rc < 0)
+-		return rc;
++	unsigned int try;
++
++	for (try = 0; try < TPM_RETRY; try++) {
++		rc = tpm_tis_send_data(chip, buf, len);
++		if (rc >= 0)
++			/* Data transfer done successfully */
++			break;
++		else if (rc != -EIO)
++			/* Data transfer failed, not recoverable */
++			return rc;
++	}
+ 
+ 	/* go and do it */
+ 	rc = tpm_tis_write8(priv, TPM_STS(priv->locality), TPM_STS_GO);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index e25c3387bcf87..7f2adac82e3a6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -45,7 +45,6 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
+ 	struct drm_gem_object *gobj;
+ 	struct amdgpu_bo *bo;
+ 	unsigned long size;
+-	int r;
+ 
+ 	gobj = drm_gem_object_lookup(p->filp, data->handle);
+ 	if (gobj == NULL)
+@@ -60,23 +59,14 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
+ 	drm_gem_object_put(gobj);
+ 
+ 	size = amdgpu_bo_size(bo);
+-	if (size != PAGE_SIZE || (data->offset + 8) > size) {
+-		r = -EINVAL;
+-		goto error_unref;
+-	}
++	if (size != PAGE_SIZE || data->offset > (size - 8))
++		return -EINVAL;
+ 
+-	if (amdgpu_ttm_tt_get_usermm(bo->tbo.ttm)) {
+-		r = -EINVAL;
+-		goto error_unref;
+-	}
++	if (amdgpu_ttm_tt_get_usermm(bo->tbo.ttm))
++		return -EINVAL;
+ 
+ 	*offset = data->offset;
+-
+ 	return 0;
+-
+-error_unref:
+-	amdgpu_bo_unref(&bo);
+-	return r;
+ }
+ 
+ static int amdgpu_cs_bo_handles_chunk(struct amdgpu_cs_parser *p,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0bdc83d899463..652ddec188385 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -6971,6 +6971,13 @@ static void handle_cursor_update(struct drm_plane *plane,
+ 	attributes.rotation_angle    = 0;
+ 	attributes.attribute_flags.value = 0;
+ 
++	/* Enable cursor degamma ROM on DCN3+ for implicit sRGB degamma in DRM
++	 * legacy gamma setup.
++	 */
++	if (crtc_state->cm_is_degamma_srgb &&
++	    adev->dm.dc->caps.color.dpp.gamma_corr)
++		attributes.attribute_flags.bits.ENABLE_CURSOR_DEGAMMA = 1;
++
+ 	attributes.pitch = attributes.width;
+ 
+ 	if (crtc_state->stream) {
+diff --git a/drivers/gpu/drm/bridge/tc358762.c b/drivers/gpu/drm/bridge/tc358762.c
+index 1bfdfc6affafe..21c57d3435687 100644
+--- a/drivers/gpu/drm/bridge/tc358762.c
++++ b/drivers/gpu/drm/bridge/tc358762.c
+@@ -224,7 +224,7 @@ static int tc358762_probe(struct mipi_dsi_device *dsi)
+ 	dsi->lanes = 1;
+ 	dsi->format = MIPI_DSI_FMT_RGB888;
+ 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
+-			  MIPI_DSI_MODE_LPM;
++			  MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_VIDEO_HSE;
+ 
+ 	ret = tc358762_parse_dt(ctx);
+ 	if (ret < 0)
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_crtc.c b/drivers/gpu/drm/exynos/exynos_drm_crtc.c
+index 1c03485676efa..de9fadccf22e5 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_crtc.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_crtc.c
+@@ -39,13 +39,12 @@ static void exynos_drm_crtc_atomic_disable(struct drm_crtc *crtc,
+ 	if (exynos_crtc->ops->atomic_disable)
+ 		exynos_crtc->ops->atomic_disable(exynos_crtc);
+ 
++	spin_lock_irq(&crtc->dev->event_lock);
+ 	if (crtc->state->event && !crtc->state->active) {
+-		spin_lock_irq(&crtc->dev->event_lock);
+ 		drm_crtc_send_vblank_event(crtc, crtc->state->event);
+-		spin_unlock_irq(&crtc->dev->event_lock);
+-
+ 		crtc->state->event = NULL;
+ 	}
++	spin_unlock_irq(&crtc->dev->event_lock);
+ }
+ 
+ static int exynos_crtc_atomic_check(struct drm_crtc *crtc,
+diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
+index 0f5d1e598d75f..1656f3ee0b193 100644
+--- a/drivers/gpu/drm/tiny/gm12u320.c
++++ b/drivers/gpu/drm/tiny/gm12u320.c
+@@ -67,10 +67,10 @@ MODULE_PARM_DESC(eco_mode, "Turn on Eco mode (less bright, more silent)");
+ #define READ_STATUS_SIZE		13
+ #define MISC_VALUE_SIZE			4
+ 
+-#define CMD_TIMEOUT			msecs_to_jiffies(200)
+-#define DATA_TIMEOUT			msecs_to_jiffies(1000)
+-#define IDLE_TIMEOUT			msecs_to_jiffies(2000)
+-#define FIRST_FRAME_TIMEOUT		msecs_to_jiffies(2000)
++#define CMD_TIMEOUT			200
++#define DATA_TIMEOUT			1000
++#define IDLE_TIMEOUT			2000
++#define FIRST_FRAME_TIMEOUT		2000
+ 
+ #define MISC_REQ_GET_SET_ECO_A		0xff
+ #define MISC_REQ_GET_SET_ECO_B		0x35
+@@ -399,7 +399,7 @@ static void gm12u320_fb_update_work(struct work_struct *work)
+ 	 * switches back to showing its logo.
+ 	 */
+ 	queue_delayed_work(system_long_wq, &gm12u320->fb_update.work,
+-			   IDLE_TIMEOUT);
++			   msecs_to_jiffies(IDLE_TIMEOUT));
+ 
+ 	return;
+ err:
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index 724bf30600d60..dac46bc2fafc8 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -693,13 +693,16 @@ static int aspeed_i2c_master_xfer(struct i2c_adapter *adap,
+ 
+ 	if (time_left == 0) {
+ 		/*
+-		 * If timed out and bus is still busy in a multi master
+-		 * environment, attempt recovery at here.
++		 * In a multi-master setup, if a timeout occurs, attempt
++		 * recovery. But if the bus is idle, we still need to reset the
++		 * i2c controller to clear the remaining interrupts.
+ 		 */
+ 		if (bus->multi_master &&
+ 		    (readl(bus->base + ASPEED_I2C_CMD_REG) &
+ 		     ASPEED_I2CD_BUS_BUSY_STS))
+ 			aspeed_i2c_recover_bus(bus);
++		else
++			aspeed_i2c_reset(bus);
+ 
+ 		/*
+ 		 * If timed out and the state is still pending, drop the pending
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 6b5cc3f59fb39..3619db7e382a0 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1793,6 +1793,9 @@ static int raid1_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 	int number = rdev->raid_disk;
+ 	struct raid1_info *p = conf->mirrors + number;
+ 
++	if (unlikely(number >= conf->raid_disks))
++		goto abort;
++
+ 	if (rdev != p->rdev)
+ 		p = conf->mirrors + conf->raid_disks + number;
+ 
+diff --git a/drivers/media/pci/cx23885/cx23885-video.c b/drivers/media/pci/cx23885/cx23885-video.c
+index a380e0920a21f..86e3bb5903712 100644
+--- a/drivers/media/pci/cx23885/cx23885-video.c
++++ b/drivers/media/pci/cx23885/cx23885-video.c
+@@ -412,7 +412,7 @@ static int buffer_prepare(struct vb2_buffer *vb)
+ 				dev->height >> 1);
+ 		break;
+ 	default:
+-		BUG();
++		return -EINVAL; /* should not happen */
+ 	}
+ 	dprintk(2, "[%p/%d] buffer_init - %dx%d %dbpp 0x%08x - dma=0x%08lx\n",
+ 		buf, buf->vb.vb2_buf.index,
+diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2.c b/drivers/media/pci/intel/ipu3/ipu3-cio2.c
+index d6838c8ebd7e8..f8dca47904766 100644
+--- a/drivers/media/pci/intel/ipu3/ipu3-cio2.c
++++ b/drivers/media/pci/intel/ipu3/ipu3-cio2.c
+@@ -355,7 +355,7 @@ static int cio2_hw_init(struct cio2_device *cio2, struct cio2_queue *q)
+ 	void __iomem *const base = cio2->base;
+ 	u8 lanes, csi2bus = q->csi2.port;
+ 	u8 sensor_vc = SENSOR_VIR_CH_DFLT;
+-	struct cio2_csi2_timing timing;
++	struct cio2_csi2_timing timing = { 0 };
+ 	int i, r;
+ 
+ 	fmt = cio2_find_format(NULL, &q->subdev_fmt.code);
+diff --git a/drivers/media/tuners/qt1010.c b/drivers/media/tuners/qt1010.c
+index 60931367b82ca..48fc79cd40273 100644
+--- a/drivers/media/tuners/qt1010.c
++++ b/drivers/media/tuners/qt1010.c
+@@ -345,11 +345,12 @@ static int qt1010_init(struct dvb_frontend *fe)
+ 			else
+ 				valptr = &tmpval;
+ 
+-			BUG_ON(i >= ARRAY_SIZE(i2c_data) - 1);
+-
+-			err = qt1010_init_meas1(priv, i2c_data[i+1].reg,
+-						i2c_data[i].reg,
+-						i2c_data[i].val, valptr);
++			if (i >= ARRAY_SIZE(i2c_data) - 1)
++				err = -EIO;
++			else
++				err = qt1010_init_meas1(priv, i2c_data[i + 1].reg,
++							i2c_data[i].reg,
++							i2c_data[i].val, valptr);
+ 			i++;
+ 			break;
+ 		}
+diff --git a/drivers/media/usb/dvb-usb-v2/af9035.c b/drivers/media/usb/dvb-usb-v2/af9035.c
+index b1f69c11c8395..8cbaab9a60844 100644
+--- a/drivers/media/usb/dvb-usb-v2/af9035.c
++++ b/drivers/media/usb/dvb-usb-v2/af9035.c
+@@ -269,6 +269,7 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ 	struct dvb_usb_device *d = i2c_get_adapdata(adap);
+ 	struct state *state = d_to_priv(d);
+ 	int ret;
++	u32 reg;
+ 
+ 	if (mutex_lock_interruptible(&d->i2c_mutex) < 0)
+ 		return -EAGAIN;
+@@ -321,8 +322,10 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ 			ret = -EOPNOTSUPP;
+ 		} else if ((msg[0].addr == state->af9033_i2c_addr[0]) ||
+ 			   (msg[0].addr == state->af9033_i2c_addr[1])) {
++			if (msg[0].len < 3 || msg[1].len < 1)
++				return -EOPNOTSUPP;
+ 			/* demod access via firmware interface */
+-			u32 reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
++			reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
+ 					msg[0].buf[2];
+ 
+ 			if (msg[0].addr == state->af9033_i2c_addr[1])
+@@ -380,17 +383,16 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ 			ret = -EOPNOTSUPP;
+ 		} else if ((msg[0].addr == state->af9033_i2c_addr[0]) ||
+ 			   (msg[0].addr == state->af9033_i2c_addr[1])) {
++			if (msg[0].len < 3)
++				return -EOPNOTSUPP;
+ 			/* demod access via firmware interface */
+-			u32 reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
++			reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
+ 					msg[0].buf[2];
+ 
+ 			if (msg[0].addr == state->af9033_i2c_addr[1])
+ 				reg |= 0x100000;
+ 
+-			ret = (msg[0].len >= 3) ? af9035_wr_regs(d, reg,
+-							         &msg[0].buf[3],
+-							         msg[0].len - 3)
+-					        : -EOPNOTSUPP;
++			ret = af9035_wr_regs(d, reg, &msg[0].buf[3], msg[0].len - 3);
+ 		} else {
+ 			/* I2C write */
+ 			u8 buf[MAX_XFER_SIZE];
+diff --git a/drivers/media/usb/dvb-usb-v2/anysee.c b/drivers/media/usb/dvb-usb-v2/anysee.c
+index 89a1b204b90c3..3dacf3914d75b 100644
+--- a/drivers/media/usb/dvb-usb-v2/anysee.c
++++ b/drivers/media/usb/dvb-usb-v2/anysee.c
+@@ -202,7 +202,7 @@ static int anysee_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msg,
+ 
+ 	while (i < num) {
+ 		if (num > i + 1 && (msg[i+1].flags & I2C_M_RD)) {
+-			if (msg[i].len > 2 || msg[i+1].len > 60) {
++			if (msg[i].len != 2 || msg[i + 1].len > 60) {
+ 				ret = -EOPNOTSUPP;
+ 				break;
+ 			}
+diff --git a/drivers/media/usb/dvb-usb-v2/az6007.c b/drivers/media/usb/dvb-usb-v2/az6007.c
+index 7524c90f5da61..6cbfe75791c21 100644
+--- a/drivers/media/usb/dvb-usb-v2/az6007.c
++++ b/drivers/media/usb/dvb-usb-v2/az6007.c
+@@ -788,6 +788,10 @@ static int az6007_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
+ 			if (az6007_xfer_debug)
+ 				printk(KERN_DEBUG "az6007: I2C W addr=0x%x len=%d\n",
+ 				       addr, msgs[i].len);
++			if (msgs[i].len < 1) {
++				ret = -EIO;
++				goto err;
++			}
+ 			req = AZ6007_I2C_WR;
+ 			index = msgs[i].buf[0];
+ 			value = addr | (1 << 8);
+@@ -802,6 +806,10 @@ static int az6007_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
+ 			if (az6007_xfer_debug)
+ 				printk(KERN_DEBUG "az6007: I2C R addr=0x%x len=%d\n",
+ 				       addr, msgs[i].len);
++			if (msgs[i].len < 1) {
++				ret = -EIO;
++				goto err;
++			}
+ 			req = AZ6007_I2C_RD;
+ 			index = msgs[i].buf[0];
+ 			value = addr;
+diff --git a/drivers/media/usb/dvb-usb-v2/gl861.c b/drivers/media/usb/dvb-usb-v2/gl861.c
+index 0c434259c36f1..c71e7b93476de 100644
+--- a/drivers/media/usb/dvb-usb-v2/gl861.c
++++ b/drivers/media/usb/dvb-usb-v2/gl861.c
+@@ -120,7 +120,7 @@ static int gl861_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 	} else if (num == 2 && !(msg[0].flags & I2C_M_RD) &&
+ 		   (msg[1].flags & I2C_M_RD)) {
+ 		/* I2C write + read */
+-		if (msg[0].len > 1 || msg[1].len > sizeof(ctx->buf)) {
++		if (msg[0].len != 1 || msg[1].len > sizeof(ctx->buf)) {
+ 			ret = -EOPNOTSUPP;
+ 			goto err;
+ 		}
+diff --git a/drivers/media/usb/dvb-usb/af9005.c b/drivers/media/usb/dvb-usb/af9005.c
+index b6a2436d16e97..9af54fcbed1de 100644
+--- a/drivers/media/usb/dvb-usb/af9005.c
++++ b/drivers/media/usb/dvb-usb/af9005.c
+@@ -422,6 +422,10 @@ static int af9005_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 		if (ret == 0)
+ 			ret = 2;
+ 	} else {
++		if (msg[0].len < 2) {
++			ret = -EOPNOTSUPP;
++			goto unlock;
++		}
+ 		/* write one or more registers */
+ 		reg = msg[0].buf[0];
+ 		addr = msg[0].addr;
+@@ -431,6 +435,7 @@ static int af9005_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 			ret = 1;
+ 	}
+ 
++unlock:
+ 	mutex_unlock(&d->i2c_mutex);
+ 	return ret;
+ }
+diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
+index 3c4ac998d040f..2290f132a82c8 100644
+--- a/drivers/media/usb/dvb-usb/dw2102.c
++++ b/drivers/media/usb/dvb-usb/dw2102.c
+@@ -128,6 +128,10 @@ static int dw2102_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 
+ 	switch (num) {
+ 	case 2:
++		if (msg[0].len < 1) {
++			num = -EOPNOTSUPP;
++			break;
++		}
+ 		/* read stv0299 register */
+ 		value = msg[0].buf[0];/* register */
+ 		for (i = 0; i < msg[1].len; i++) {
+@@ -139,6 +143,10 @@ static int dw2102_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 	case 1:
+ 		switch (msg[0].addr) {
+ 		case 0x68:
++			if (msg[0].len < 2) {
++				num = -EOPNOTSUPP;
++				break;
++			}
+ 			/* write to stv0299 register */
+ 			buf6[0] = 0x2a;
+ 			buf6[1] = msg[0].buf[0];
+@@ -148,6 +156,10 @@ static int dw2102_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 			break;
+ 		case 0x60:
+ 			if (msg[0].flags == 0) {
++				if (msg[0].len < 4) {
++					num = -EOPNOTSUPP;
++					break;
++				}
+ 			/* write to tuner pll */
+ 				buf6[0] = 0x2c;
+ 				buf6[1] = 5;
+@@ -159,6 +171,10 @@ static int dw2102_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 				dw210x_op_rw(d->udev, 0xb2, 0, 0,
+ 						buf6, 7, DW210X_WRITE_MSG);
+ 			} else {
++				if (msg[0].len < 1) {
++					num = -EOPNOTSUPP;
++					break;
++				}
+ 			/* read from tuner */
+ 				dw210x_op_rw(d->udev, 0xb5, 0, 0,
+ 						buf6, 1, DW210X_READ_MSG);
+@@ -166,12 +182,20 @@ static int dw2102_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 			}
+ 			break;
+ 		case (DW2102_RC_QUERY):
++			if (msg[0].len < 2) {
++				num = -EOPNOTSUPP;
++				break;
++			}
+ 			dw210x_op_rw(d->udev, 0xb8, 0, 0,
+ 					buf6, 2, DW210X_READ_MSG);
+ 			msg[0].buf[0] = buf6[0];
+ 			msg[0].buf[1] = buf6[1];
+ 			break;
+ 		case (DW2102_VOLTAGE_CTRL):
++			if (msg[0].len < 1) {
++				num = -EOPNOTSUPP;
++				break;
++			}
+ 			buf6[0] = 0x30;
+ 			buf6[1] = msg[0].buf[0];
+ 			dw210x_op_rw(d->udev, 0xb2, 0, 0,
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 70f388f83485c..b030f657e2534 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -166,8 +166,8 @@
+ #define ESDHC_FLAG_HS400		BIT(9)
+ /*
+  * The IP has errata ERR010450
+- * uSDHC: Due to the I/O timing limit, for SDR mode, SD card clock can't
+- * exceed 150MHz, for DDR mode, SD card clock can't exceed 45MHz.
++ * uSDHC: At 1.8V due to the I/O timing limit, for SDR mode, SD card
++ * clock can't exceed 150MHz, for DDR mode, SD card clock can't exceed 45MHz.
+  */
+ #define ESDHC_FLAG_ERR010450		BIT(10)
+ /* The IP supports HS400ES mode */
+@@ -873,7 +873,8 @@ static inline void esdhc_pltfm_set_clock(struct sdhci_host *host,
+ 		| ESDHC_CLOCK_MASK);
+ 	sdhci_writel(host, temp, ESDHC_SYSTEM_CONTROL);
+ 
+-	if (imx_data->socdata->flags & ESDHC_FLAG_ERR010450) {
++	if ((imx_data->socdata->flags & ESDHC_FLAG_ERR010450) &&
++	    (!(host->quirks2 & SDHCI_QUIRK2_NO_1_8_V))) {
+ 		unsigned int max_clock;
+ 
+ 		max_clock = imx_data->is_ddr ? 45000000 : 150000000;
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index e170c545fec50..11d706ff30dd0 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -25,6 +25,7 @@
+ #include <linux/of.h>
+ #include <linux/of_platform.h>
+ #include <linux/slab.h>
++#include <linux/static_key.h>
+ #include <linux/list.h>
+ #include <linux/log2.h>
+ 
+@@ -207,6 +208,8 @@ enum {
+ 
+ struct brcmnand_host;
+ 
++static DEFINE_STATIC_KEY_FALSE(brcmnand_soc_has_ops_key);
++
+ struct brcmnand_controller {
+ 	struct device		*dev;
+ 	struct nand_controller	controller;
+@@ -265,6 +268,7 @@ struct brcmnand_controller {
+ 	const unsigned int	*page_sizes;
+ 	unsigned int		page_size_shift;
+ 	unsigned int		max_oob;
++	u32			ecc_level_shift;
+ 	u32			features;
+ 
+ 	/* for low-power standby/resume only */
+@@ -589,15 +593,53 @@ enum {
+ 	INTFC_CTLR_READY		= BIT(31),
+ };
+ 
++/***********************************************************************
++ * NAND ACC CONTROL bitfield
++ *
++ * Some bits have remained constant throughout hardware revision, while
++ * others have shifted around.
++ ***********************************************************************/
++
++/* Constant for all versions (where supported) */
++enum {
++	/* See BRCMNAND_HAS_CACHE_MODE */
++	ACC_CONTROL_CACHE_MODE				= BIT(22),
++
++	/* See BRCMNAND_HAS_PREFETCH */
++	ACC_CONTROL_PREFETCH				= BIT(23),
++
++	ACC_CONTROL_PAGE_HIT				= BIT(24),
++	ACC_CONTROL_WR_PREEMPT				= BIT(25),
++	ACC_CONTROL_PARTIAL_PAGE			= BIT(26),
++	ACC_CONTROL_RD_ERASED				= BIT(27),
++	ACC_CONTROL_FAST_PGM_RDIN			= BIT(28),
++	ACC_CONTROL_WR_ECC				= BIT(30),
++	ACC_CONTROL_RD_ECC				= BIT(31),
++};
++
++#define	ACC_CONTROL_ECC_SHIFT			16
++/* Only for v7.2 */
++#define	ACC_CONTROL_ECC_EXT_SHIFT		13
++
++static inline bool brcmnand_non_mmio_ops(struct brcmnand_controller *ctrl)
++{
++	return static_branch_unlikely(&brcmnand_soc_has_ops_key);
++}
++
+ static inline u32 nand_readreg(struct brcmnand_controller *ctrl, u32 offs)
+ {
++	if (brcmnand_non_mmio_ops(ctrl))
++		return brcmnand_soc_read(ctrl->soc, offs);
+ 	return brcmnand_readl(ctrl->nand_base + offs);
+ }
+ 
+ static inline void nand_writereg(struct brcmnand_controller *ctrl, u32 offs,
+ 				 u32 val)
+ {
+-	brcmnand_writel(val, ctrl->nand_base + offs);
++	if (brcmnand_non_mmio_ops(ctrl))
++		brcmnand_soc_write(ctrl->soc, val, offs);
++	else
++		brcmnand_writel(val, ctrl->nand_base + offs);
+ }
+ 
+ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
+@@ -716,6 +758,12 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
+ 	else if (of_property_read_bool(ctrl->dev->of_node, "brcm,nand-has-wp"))
+ 		ctrl->features |= BRCMNAND_HAS_WP;
+ 
++	/* v7.2 has different ecc level shift in the acc register */
++	if (ctrl->nand_version == 0x0702)
++		ctrl->ecc_level_shift = ACC_CONTROL_ECC_EXT_SHIFT;
++	else
++		ctrl->ecc_level_shift = ACC_CONTROL_ECC_SHIFT;
++
+ 	return 0;
+ }
+ 
+@@ -763,13 +811,18 @@ static inline void brcmnand_rmw_reg(struct brcmnand_controller *ctrl,
+ 
+ static inline u32 brcmnand_read_fc(struct brcmnand_controller *ctrl, int word)
+ {
++	if (brcmnand_non_mmio_ops(ctrl))
++		return brcmnand_soc_read(ctrl->soc, BRCMNAND_NON_MMIO_FC_ADDR);
+ 	return __raw_readl(ctrl->nand_fc + word * 4);
+ }
+ 
+ static inline void brcmnand_write_fc(struct brcmnand_controller *ctrl,
+ 				     int word, u32 val)
+ {
+-	__raw_writel(val, ctrl->nand_fc + word * 4);
++	if (brcmnand_non_mmio_ops(ctrl))
++		brcmnand_soc_write(ctrl->soc, val, BRCMNAND_NON_MMIO_FC_ADDR);
++	else
++		__raw_writel(val, ctrl->nand_fc + word * 4);
+ }
+ 
+ static inline void edu_writel(struct brcmnand_controller *ctrl,
+@@ -899,30 +952,6 @@ static inline int brcmnand_cmd_shift(struct brcmnand_controller *ctrl)
+ 	return 0;
+ }
+ 
+-/***********************************************************************
+- * NAND ACC CONTROL bitfield
+- *
+- * Some bits have remained constant throughout hardware revision, while
+- * others have shifted around.
+- ***********************************************************************/
+-
+-/* Constant for all versions (where supported) */
+-enum {
+-	/* See BRCMNAND_HAS_CACHE_MODE */
+-	ACC_CONTROL_CACHE_MODE				= BIT(22),
+-
+-	/* See BRCMNAND_HAS_PREFETCH */
+-	ACC_CONTROL_PREFETCH				= BIT(23),
+-
+-	ACC_CONTROL_PAGE_HIT				= BIT(24),
+-	ACC_CONTROL_WR_PREEMPT				= BIT(25),
+-	ACC_CONTROL_PARTIAL_PAGE			= BIT(26),
+-	ACC_CONTROL_RD_ERASED				= BIT(27),
+-	ACC_CONTROL_FAST_PGM_RDIN			= BIT(28),
+-	ACC_CONTROL_WR_ECC				= BIT(30),
+-	ACC_CONTROL_RD_ECC				= BIT(31),
+-};
+-
+ static inline u32 brcmnand_spare_area_mask(struct brcmnand_controller *ctrl)
+ {
+ 	if (ctrl->nand_version == 0x0702)
+@@ -935,18 +964,15 @@ static inline u32 brcmnand_spare_area_mask(struct brcmnand_controller *ctrl)
+ 		return GENMASK(4, 0);
+ }
+ 
+-#define NAND_ACC_CONTROL_ECC_SHIFT	16
+-#define NAND_ACC_CONTROL_ECC_EXT_SHIFT	13
+-
+ static inline u32 brcmnand_ecc_level_mask(struct brcmnand_controller *ctrl)
+ {
+ 	u32 mask = (ctrl->nand_version >= 0x0600) ? 0x1f : 0x0f;
+ 
+-	mask <<= NAND_ACC_CONTROL_ECC_SHIFT;
++	mask <<= ACC_CONTROL_ECC_SHIFT;
+ 
+ 	/* v7.2 includes additional ECC levels */
+-	if (ctrl->nand_version >= 0x0702)
+-		mask |= 0x7 << NAND_ACC_CONTROL_ECC_EXT_SHIFT;
++	if (ctrl->nand_version == 0x0702)
++		mask |= 0x7 << ACC_CONTROL_ECC_EXT_SHIFT;
+ 
+ 	return mask;
+ }
+@@ -960,8 +986,8 @@ static void brcmnand_set_ecc_enabled(struct brcmnand_host *host, int en)
+ 
+ 	if (en) {
+ 		acc_control |= ecc_flags; /* enable RD/WR ECC */
+-		acc_control |= host->hwcfg.ecc_level
+-			       << NAND_ACC_CONTROL_ECC_SHIFT;
++		acc_control &= ~brcmnand_ecc_level_mask(ctrl);
++		acc_control |= host->hwcfg.ecc_level << ctrl->ecc_level_shift;
+ 	} else {
+ 		acc_control &= ~ecc_flags; /* disable RD/WR ECC */
+ 		acc_control &= ~brcmnand_ecc_level_mask(ctrl);
+@@ -2515,7 +2541,7 @@ static int brcmnand_set_cfg(struct brcmnand_host *host,
+ 	tmp &= ~brcmnand_ecc_level_mask(ctrl);
+ 	tmp &= ~brcmnand_spare_area_mask(ctrl);
+ 	if (ctrl->nand_version >= 0x0302) {
+-		tmp |= cfg->ecc_level << NAND_ACC_CONTROL_ECC_SHIFT;
++		tmp |= cfg->ecc_level << ctrl->ecc_level_shift;
+ 		tmp |= cfg->spare_area_size;
+ 	}
+ 	nand_writereg(ctrl, acc_control_offs, tmp);
+@@ -2985,6 +3011,12 @@ int brcmnand_probe(struct platform_device *pdev, struct brcmnand_soc *soc)
+ 	dev_set_drvdata(dev, ctrl);
+ 	ctrl->dev = dev;
+ 
++	/* Enable the static key if the soc provides I/O operations indicating
++	 * that a non-memory mapped IO access path must be used
++	 */
++	if (brcmnand_soc_has_ops(ctrl->soc))
++		static_branch_enable(&brcmnand_soc_has_ops_key);
++
+ 	init_completion(&ctrl->done);
+ 	init_completion(&ctrl->dma_done);
+ 	init_completion(&ctrl->edu_done);
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.h b/drivers/mtd/nand/raw/brcmnand/brcmnand.h
+index eb498fbe505ec..f1f93d85f50d2 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.h
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.h
+@@ -11,12 +11,25 @@
+ 
+ struct platform_device;
+ struct dev_pm_ops;
++struct brcmnand_io_ops;
++
++/* Special register offset constant to intercept a non-MMIO access
++ * to the flash cache register space. This is intentionally large
++ * not to overlap with an existing offset.
++ */
++#define BRCMNAND_NON_MMIO_FC_ADDR	0xffffffff
+ 
+ struct brcmnand_soc {
+ 	bool (*ctlrdy_ack)(struct brcmnand_soc *soc);
+ 	void (*ctlrdy_set_enabled)(struct brcmnand_soc *soc, bool en);
+ 	void (*prepare_data_bus)(struct brcmnand_soc *soc, bool prepare,
+ 				 bool is_param);
++	const struct brcmnand_io_ops *ops;
++};
++
++struct brcmnand_io_ops {
++	u32 (*read_reg)(struct brcmnand_soc *soc, u32 offset);
++	void (*write_reg)(struct brcmnand_soc *soc, u32 val, u32 offset);
+ };
+ 
+ static inline void brcmnand_soc_data_bus_prepare(struct brcmnand_soc *soc,
+@@ -58,6 +71,22 @@ static inline void brcmnand_writel(u32 val, void __iomem *addr)
+ 		writel_relaxed(val, addr);
+ }
+ 
++static inline bool brcmnand_soc_has_ops(struct brcmnand_soc *soc)
++{
++	return soc && soc->ops && soc->ops->read_reg && soc->ops->write_reg;
++}
++
++static inline u32 brcmnand_soc_read(struct brcmnand_soc *soc, u32 offset)
++{
++	return soc->ops->read_reg(soc, offset);
++}
++
++static inline void brcmnand_soc_write(struct brcmnand_soc *soc, u32 val,
++				      u32 offset)
++{
++	soc->ops->write_reg(soc, val, offset);
++}
++
+ int brcmnand_probe(struct platform_device *pdev, struct brcmnand_soc *soc);
+ int brcmnand_remove(struct platform_device *pdev);
+ 
+diff --git a/drivers/net/ethernet/atheros/alx/ethtool.c b/drivers/net/ethernet/atheros/alx/ethtool.c
+index 2f4eabf652e80..51e5aa2c74b34 100644
+--- a/drivers/net/ethernet/atheros/alx/ethtool.c
++++ b/drivers/net/ethernet/atheros/alx/ethtool.c
+@@ -281,9 +281,8 @@ static void alx_get_ethtool_stats(struct net_device *netdev,
+ 	spin_lock(&alx->stats_lock);
+ 
+ 	alx_update_hw_stats(hw);
+-	BUILD_BUG_ON(sizeof(hw->stats) - offsetof(struct alx_hw_stats, rx_ok) <
+-		     ALX_NUM_STATS * sizeof(u64));
+-	memcpy(data, &hw->stats.rx_ok, ALX_NUM_STATS * sizeof(u64));
++	BUILD_BUG_ON(sizeof(hw->stats) != ALX_NUM_STATS * sizeof(u64));
++	memcpy(data, &hw->stats, sizeof(hw->stats));
+ 
+ 	spin_unlock(&alx->stats_lock);
+ }
+diff --git a/drivers/net/wireless/ath/ath9k/ahb.c b/drivers/net/wireless/ath/ath9k/ahb.c
+index cdefb8e2daf14..05fb76a4e144e 100644
+--- a/drivers/net/wireless/ath/ath9k/ahb.c
++++ b/drivers/net/wireless/ath/ath9k/ahb.c
+@@ -136,8 +136,8 @@ static int ath_ahb_probe(struct platform_device *pdev)
+ 
+ 	ah = sc->sc_ah;
+ 	ath9k_hw_name(ah, hw_name, sizeof(hw_name));
+-	wiphy_info(hw->wiphy, "%s mem=0x%lx, irq=%d\n",
+-		   hw_name, (unsigned long)mem, irq);
++	wiphy_info(hw->wiphy, "%s mem=0x%p, irq=%d\n",
++		   hw_name, mem, irq);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/mac.h b/drivers/net/wireless/ath/ath9k/mac.h
+index fd6aa49adadfe..9b00e77a6fc3c 100644
+--- a/drivers/net/wireless/ath/ath9k/mac.h
++++ b/drivers/net/wireless/ath/ath9k/mac.h
+@@ -113,8 +113,10 @@ struct ath_tx_status {
+ 	u8 qid;
+ 	u16 desc_id;
+ 	u8 tid;
+-	u32 ba_low;
+-	u32 ba_high;
++	struct_group(ba,
++		u32 ba_low;
++		u32 ba_high;
++	);
+ 	u32 evm0;
+ 	u32 evm1;
+ 	u32 evm2;
+diff --git a/drivers/net/wireless/ath/ath9k/pci.c b/drivers/net/wireless/ath/ath9k/pci.c
+index cff9af3af38d5..4f90c304d1214 100644
+--- a/drivers/net/wireless/ath/ath9k/pci.c
++++ b/drivers/net/wireless/ath/ath9k/pci.c
+@@ -994,8 +994,8 @@ static int ath_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	sc->sc_ah->msi_reg = 0;
+ 
+ 	ath9k_hw_name(sc->sc_ah, hw_name, sizeof(hw_name));
+-	wiphy_info(hw->wiphy, "%s mem=0x%lx, irq=%d\n",
+-		   hw_name, (unsigned long)sc->mem, pdev->irq);
++	wiphy_info(hw->wiphy, "%s mem=0x%p, irq=%d\n",
++		   hw_name, sc->mem, pdev->irq);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
+index 6555abf02f18b..84c68aefc171a 100644
+--- a/drivers/net/wireless/ath/ath9k/xmit.c
++++ b/drivers/net/wireless/ath/ath9k/xmit.c
+@@ -421,7 +421,7 @@ static void ath_tx_count_frames(struct ath_softc *sc, struct ath_buf *bf,
+ 	isaggr = bf_isaggr(bf);
+ 	if (isaggr) {
+ 		seq_st = ts->ts_seqnum;
+-		memcpy(ba, &ts->ba_low, WME_BA_BMP_SIZE >> 3);
++		memcpy(ba, &ts->ba, WME_BA_BMP_SIZE >> 3);
+ 	}
+ 
+ 	while (bf) {
+@@ -504,7 +504,7 @@ static void ath_tx_complete_aggr(struct ath_softc *sc, struct ath_txq *txq,
+ 	if (isaggr && txok) {
+ 		if (ts->ts_flags & ATH9K_TX_BA) {
+ 			seq_st = ts->ts_seqnum;
+-			memcpy(ba, &ts->ba_low, WME_BA_BMP_SIZE >> 3);
++			memcpy(ba, &ts->ba, WME_BA_BMP_SIZE >> 3);
+ 		} else {
+ 			/*
+ 			 * AR5416 can become deaf/mute when BA
+diff --git a/drivers/net/wireless/ath/wil6210/txrx.c b/drivers/net/wireless/ath/wil6210/txrx.c
+index cc830c795b33c..5b2de4f3fa0bd 100644
+--- a/drivers/net/wireless/ath/wil6210/txrx.c
++++ b/drivers/net/wireless/ath/wil6210/txrx.c
+@@ -666,7 +666,7 @@ static int wil_rx_crypto_check(struct wil6210_priv *wil, struct sk_buff *skb)
+ 	struct wil_tid_crypto_rx *c = mc ? &s->group_crypto_rx :
+ 				      &s->tid_crypto_rx[tid];
+ 	struct wil_tid_crypto_rx_single *cc = &c->key_id[key_id];
+-	const u8 *pn = (u8 *)&d->mac.pn_15_0;
++	const u8 *pn = (u8 *)&d->mac.pn;
+ 
+ 	if (!cc->key_set) {
+ 		wil_err_ratelimited(wil,
+diff --git a/drivers/net/wireless/ath/wil6210/txrx.h b/drivers/net/wireless/ath/wil6210/txrx.h
+index 1f4c8ec75be87..0f6f6b62bfc9a 100644
+--- a/drivers/net/wireless/ath/wil6210/txrx.h
++++ b/drivers/net/wireless/ath/wil6210/txrx.h
+@@ -343,8 +343,10 @@ struct vring_rx_mac {
+ 	u32 d0;
+ 	u32 d1;
+ 	u16 w4;
+-	u16 pn_15_0;
+-	u32 pn_47_16;
++	struct_group_attr(pn, __packed,
++		u16 pn_15_0;
++		u32 pn_47_16;
++	);
+ } __packed;
+ 
+ /* Rx descriptor - DMA part
+diff --git a/drivers/net/wireless/ath/wil6210/txrx_edma.c b/drivers/net/wireless/ath/wil6210/txrx_edma.c
+index 8ca2ce51c83ef..b23c05f16320b 100644
+--- a/drivers/net/wireless/ath/wil6210/txrx_edma.c
++++ b/drivers/net/wireless/ath/wil6210/txrx_edma.c
+@@ -548,7 +548,7 @@ static int wil_rx_crypto_check_edma(struct wil6210_priv *wil,
+ 	s = &wil->sta[cid];
+ 	c = mc ? &s->group_crypto_rx : &s->tid_crypto_rx[tid];
+ 	cc = &c->key_id[key_id];
+-	pn = (u8 *)&st->ext.pn_15_0;
++	pn = (u8 *)&st->ext.pn;
+ 
+ 	if (!cc->key_set) {
+ 		wil_err_ratelimited(wil,
+diff --git a/drivers/net/wireless/ath/wil6210/txrx_edma.h b/drivers/net/wireless/ath/wil6210/txrx_edma.h
+index c736f7413a35f..ee90e225bb050 100644
+--- a/drivers/net/wireless/ath/wil6210/txrx_edma.h
++++ b/drivers/net/wireless/ath/wil6210/txrx_edma.h
+@@ -330,8 +330,10 @@ struct wil_rx_status_extension {
+ 	u32 d0;
+ 	u32 d1;
+ 	__le16 seq_num; /* only lower 12 bits */
+-	u16 pn_15_0;
+-	u32 pn_47_16;
++	struct_group_attr(pn, __packed,
++		u16 pn_15_0;
++		u32 pn_47_16;
++	);
+ } __packed;
+ 
+ struct wil_rx_status_extended {
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 255286b2324e2..0d41f172a1dc2 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -3619,14 +3619,15 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
+ 	frame_data_len = nla_len(info->attrs[HWSIM_ATTR_FRAME]);
+ 	frame_data = (void *)nla_data(info->attrs[HWSIM_ATTR_FRAME]);
+ 
++	if (frame_data_len < sizeof(struct ieee80211_hdr_3addr) ||
++	    frame_data_len > IEEE80211_MAX_DATA_LEN)
++		goto err;
++
+ 	/* Allocate new skb here */
+ 	skb = alloc_skb(frame_data_len, GFP_KERNEL);
+ 	if (skb == NULL)
+ 		goto err;
+ 
+-	if (frame_data_len > IEEE80211_MAX_DATA_LEN)
+-		goto err;
+-
+ 	/* Copy the data */
+ 	skb_put_data(skb, frame_data, frame_data_len);
+ 
+diff --git a/drivers/net/wireless/marvell/mwifiex/tdls.c b/drivers/net/wireless/marvell/mwifiex/tdls.c
+index 97bb87c3676bb..6c60621b6cccb 100644
+--- a/drivers/net/wireless/marvell/mwifiex/tdls.c
++++ b/drivers/net/wireless/marvell/mwifiex/tdls.c
+@@ -735,6 +735,7 @@ mwifiex_construct_tdls_action_frame(struct mwifiex_private *priv,
+ 	int ret;
+ 	u16 capab;
+ 	struct ieee80211_ht_cap *ht_cap;
++	unsigned int extra;
+ 	u8 radio, *pos;
+ 
+ 	capab = priv->curr_bss_params.bss_descriptor.cap_info_bitmap;
+@@ -753,7 +754,10 @@ mwifiex_construct_tdls_action_frame(struct mwifiex_private *priv,
+ 
+ 	switch (action_code) {
+ 	case WLAN_PUB_ACTION_TDLS_DISCOVER_RES:
+-		skb_put(skb, sizeof(mgmt->u.action.u.tdls_discover_resp) + 1);
++		/* See the layout of 'struct ieee80211_mgmt'. */
++		extra = sizeof(mgmt->u.action.u.tdls_discover_resp) +
++			sizeof(mgmt->u.action.category);
++		skb_put(skb, extra);
+ 		mgmt->u.action.category = WLAN_CATEGORY_PUBLIC;
+ 		mgmt->u.action.u.tdls_discover_resp.action_code =
+ 					      WLAN_PUB_ACTION_TDLS_DISCOVER_RES;
+@@ -762,8 +766,7 @@ mwifiex_construct_tdls_action_frame(struct mwifiex_private *priv,
+ 		mgmt->u.action.u.tdls_discover_resp.capability =
+ 							     cpu_to_le16(capab);
+ 		/* move back for addr4 */
+-		memmove(pos + ETH_ALEN, &mgmt->u.action.category,
+-			sizeof(mgmt->u.action.u.tdls_discover_resp));
++		memmove(pos + ETH_ALEN, &mgmt->u.action, extra);
+ 		/* init address 4 */
+ 		eth_broadcast_addr(pos);
+ 
+diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
+index f5a33dbe7acb9..6ebe72b862661 100644
+--- a/drivers/perf/arm_smmuv3_pmu.c
++++ b/drivers/perf/arm_smmuv3_pmu.c
+@@ -95,6 +95,7 @@
+ #define SMMU_PMCG_PA_SHIFT              12
+ 
+ #define SMMU_PMCG_EVCNTR_RDONLY         BIT(0)
++#define SMMU_PMCG_HARDEN_DISABLE        BIT(1)
+ 
+ static int cpuhp_state_num;
+ 
+@@ -138,6 +139,20 @@ static inline void smmu_pmu_enable(struct pmu *pmu)
+ 	writel(SMMU_PMCG_CR_ENABLE, smmu_pmu->reg_base + SMMU_PMCG_CR);
+ }
+ 
++static int smmu_pmu_apply_event_filter(struct smmu_pmu *smmu_pmu,
++				       struct perf_event *event, int idx);
++
++static inline void smmu_pmu_enable_quirk_hip08_09(struct pmu *pmu)
++{
++	struct smmu_pmu *smmu_pmu = to_smmu_pmu(pmu);
++	unsigned int idx;
++
++	for_each_set_bit(idx, smmu_pmu->used_counters, smmu_pmu->num_counters)
++		smmu_pmu_apply_event_filter(smmu_pmu, smmu_pmu->events[idx], idx);
++
++	smmu_pmu_enable(pmu);
++}
++
+ static inline void smmu_pmu_disable(struct pmu *pmu)
+ {
+ 	struct smmu_pmu *smmu_pmu = to_smmu_pmu(pmu);
+@@ -146,6 +161,22 @@ static inline void smmu_pmu_disable(struct pmu *pmu)
+ 	writel(0, smmu_pmu->reg_base + SMMU_PMCG_IRQ_CTRL);
+ }
+ 
++static inline void smmu_pmu_disable_quirk_hip08_09(struct pmu *pmu)
++{
++	struct smmu_pmu *smmu_pmu = to_smmu_pmu(pmu);
++	unsigned int idx;
++
++	/*
++	 * The global disable of PMU sometimes fail to stop the counting.
++	 * Harden this by writing an invalid event type to each used counter
++	 * to forcibly stop counting.
++	 */
++	for_each_set_bit(idx, smmu_pmu->used_counters, smmu_pmu->num_counters)
++		writel(0xffff, smmu_pmu->reg_base + SMMU_PMCG_EVTYPER(idx));
++
++	smmu_pmu_disable(pmu);
++}
++
+ static inline void smmu_pmu_counter_set_value(struct smmu_pmu *smmu_pmu,
+ 					      u32 idx, u64 value)
+ {
+@@ -719,7 +750,10 @@ static void smmu_pmu_get_acpi_options(struct smmu_pmu *smmu_pmu)
+ 	switch (model) {
+ 	case IORT_SMMU_V3_PMCG_HISI_HIP08:
+ 		/* HiSilicon Erratum 162001800 */
+-		smmu_pmu->options |= SMMU_PMCG_EVCNTR_RDONLY;
++		smmu_pmu->options |= SMMU_PMCG_EVCNTR_RDONLY | SMMU_PMCG_HARDEN_DISABLE;
++		break;
++	case IORT_SMMU_V3_PMCG_HISI_HIP09:
++		smmu_pmu->options |= SMMU_PMCG_HARDEN_DISABLE;
+ 		break;
+ 	}
+ 
+@@ -806,6 +840,16 @@ static int smmu_pmu_probe(struct platform_device *pdev)
+ 
+ 	smmu_pmu_get_acpi_options(smmu_pmu);
+ 
++	/*
++	 * For platforms suffer this quirk, the PMU disable sometimes fails to
++	 * stop the counters. This will leads to inaccurate or error counting.
++	 * Forcibly disable the counters with these quirk handler.
++	 */
++	if (smmu_pmu->options & SMMU_PMCG_HARDEN_DISABLE) {
++		smmu_pmu->pmu.pmu_enable = smmu_pmu_enable_quirk_hip08_09;
++		smmu_pmu->pmu.pmu_disable = smmu_pmu_disable_quirk_hip08_09;
++	}
++
+ 	/* Pick one CPU to be the preferred one to use */
+ 	smmu_pmu->on_cpu = raw_smp_processor_id();
+ 	WARN_ON(irq_set_affinity_hint(smmu_pmu->irq,
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index 2b77cbbcdccb6..f91eee01ce95e 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -5909,7 +5909,7 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
+ 					    phba->hba_debugfs_root,
+ 					    phba,
+ 					    &lpfc_debugfs_op_multixripools);
+-		if (!phba->debug_multixri_pools) {
++		if (IS_ERR(phba->debug_multixri_pools)) {
+ 			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+ 					 "0527 Cannot create debugfs multixripools\n");
+ 			goto debug_failed;
+@@ -5921,7 +5921,7 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
+ 			debugfs_create_file(name, 0644,
+ 					    phba->hba_debugfs_root,
+ 					    phba, &lpfc_debugfs_ras_log);
+-		if (!phba->debug_ras_log) {
++		if (IS_ERR(phba->debug_ras_log)) {
+ 			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+ 					 "6148 Cannot create debugfs"
+ 					 " ras_log\n");
+@@ -5942,7 +5942,7 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
+ 			debugfs_create_file(name, S_IFREG | 0644,
+ 					    phba->hba_debugfs_root,
+ 					    phba, &lpfc_debugfs_op_lockstat);
+-		if (!phba->debug_lockstat) {
++		if (IS_ERR(phba->debug_lockstat)) {
+ 			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+ 					 "4610 Can't create debugfs lockstat\n");
+ 			goto debug_failed;
+@@ -6171,7 +6171,7 @@ nvmeio_off:
+ 		debugfs_create_file(name, 0644,
+ 				    vport->vport_debugfs_root,
+ 				    vport, &lpfc_debugfs_op_scsistat);
+-	if (!vport->debug_scsistat) {
++	if (IS_ERR(vport->debug_scsistat)) {
+ 		lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+ 				 "4611 Cannot create debugfs scsistat\n");
+ 		goto debug_failed;
+@@ -6182,7 +6182,7 @@ nvmeio_off:
+ 		debugfs_create_file(name, 0644,
+ 				    vport->vport_debugfs_root,
+ 				    vport, &lpfc_debugfs_op_ioktime);
+-	if (!vport->debug_ioktime) {
++	if (IS_ERR(vport->debug_ioktime)) {
+ 		lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+ 				 "0815 Cannot create debugfs ioktime\n");
+ 		goto debug_failed;
+diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
+index 2d5b1d5978664..f78cb87e46fa4 100644
+--- a/drivers/scsi/megaraid/megaraid_sas.h
++++ b/drivers/scsi/megaraid/megaraid_sas.h
+@@ -2327,7 +2327,7 @@ struct megasas_instance {
+ 	u32 support_morethan256jbod; /* FW support for more than 256 PD/JBOD */
+ 	bool use_seqnum_jbod_fp;   /* Added for PD sequence */
+ 	bool smp_affinity_enable;
+-	spinlock_t crashdump_lock;
++	struct mutex crashdump_lock;
+ 
+ 	struct megasas_register_set __iomem *reg_set;
+ 	u32 __iomem *reply_post_host_index_addr[MR_MAX_MSIX_REG_ARRAY];
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index b5a74b237fd21..ec9a19d94855c 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -3221,14 +3221,13 @@ fw_crash_buffer_store(struct device *cdev,
+ 	struct megasas_instance *instance =
+ 		(struct megasas_instance *) shost->hostdata;
+ 	int val = 0;
+-	unsigned long flags;
+ 
+ 	if (kstrtoint(buf, 0, &val) != 0)
+ 		return -EINVAL;
+ 
+-	spin_lock_irqsave(&instance->crashdump_lock, flags);
++	mutex_lock(&instance->crashdump_lock);
+ 	instance->fw_crash_buffer_offset = val;
+-	spin_unlock_irqrestore(&instance->crashdump_lock, flags);
++	mutex_unlock(&instance->crashdump_lock);
+ 	return strlen(buf);
+ }
+ 
+@@ -3243,24 +3242,23 @@ fw_crash_buffer_show(struct device *cdev,
+ 	unsigned long dmachunk = CRASH_DMA_BUF_SIZE;
+ 	unsigned long chunk_left_bytes;
+ 	unsigned long src_addr;
+-	unsigned long flags;
+ 	u32 buff_offset;
+ 
+-	spin_lock_irqsave(&instance->crashdump_lock, flags);
++	mutex_lock(&instance->crashdump_lock);
+ 	buff_offset = instance->fw_crash_buffer_offset;
+ 	if (!instance->crash_dump_buf ||
+ 		!((instance->fw_crash_state == AVAILABLE) ||
+ 		(instance->fw_crash_state == COPYING))) {
+ 		dev_err(&instance->pdev->dev,
+ 			"Firmware crash dump is not available\n");
+-		spin_unlock_irqrestore(&instance->crashdump_lock, flags);
++		mutex_unlock(&instance->crashdump_lock);
+ 		return -EINVAL;
+ 	}
+ 
+ 	if (buff_offset > (instance->fw_crash_buffer_size * dmachunk)) {
+ 		dev_err(&instance->pdev->dev,
+ 			"Firmware crash dump offset is out of range\n");
+-		spin_unlock_irqrestore(&instance->crashdump_lock, flags);
++		mutex_unlock(&instance->crashdump_lock);
+ 		return 0;
+ 	}
+ 
+@@ -3272,7 +3270,7 @@ fw_crash_buffer_show(struct device *cdev,
+ 	src_addr = (unsigned long)instance->crash_buf[buff_offset / dmachunk] +
+ 		(buff_offset % dmachunk);
+ 	memcpy(buf, (void *)src_addr, size);
+-	spin_unlock_irqrestore(&instance->crashdump_lock, flags);
++	mutex_unlock(&instance->crashdump_lock);
+ 
+ 	return size;
+ }
+@@ -3297,7 +3295,6 @@ fw_crash_state_store(struct device *cdev,
+ 	struct megasas_instance *instance =
+ 		(struct megasas_instance *) shost->hostdata;
+ 	int val = 0;
+-	unsigned long flags;
+ 
+ 	if (kstrtoint(buf, 0, &val) != 0)
+ 		return -EINVAL;
+@@ -3311,9 +3308,9 @@ fw_crash_state_store(struct device *cdev,
+ 	instance->fw_crash_state = val;
+ 
+ 	if ((val == COPIED) || (val == COPY_ERROR)) {
+-		spin_lock_irqsave(&instance->crashdump_lock, flags);
++		mutex_lock(&instance->crashdump_lock);
+ 		megasas_free_host_crash_buffer(instance);
+-		spin_unlock_irqrestore(&instance->crashdump_lock, flags);
++		mutex_unlock(&instance->crashdump_lock);
+ 		if (val == COPY_ERROR)
+ 			dev_info(&instance->pdev->dev, "application failed to "
+ 				"copy Firmware crash dump\n");
+@@ -7325,7 +7322,7 @@ static inline void megasas_init_ctrl_params(struct megasas_instance *instance)
+ 	init_waitqueue_head(&instance->int_cmd_wait_q);
+ 	init_waitqueue_head(&instance->abort_cmd_wait_q);
+ 
+-	spin_lock_init(&instance->crashdump_lock);
++	mutex_init(&instance->crashdump_lock);
+ 	spin_lock_init(&instance->mfi_pool_lock);
+ 	spin_lock_init(&instance->hba_lock);
+ 	spin_lock_init(&instance->stream_lock);
+diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
+index 01eb2ade20709..f40db6f40b1dc 100644
+--- a/drivers/scsi/pm8001/pm8001_init.c
++++ b/drivers/scsi/pm8001/pm8001_init.c
+@@ -255,7 +255,6 @@ static irqreturn_t pm8001_interrupt_handler_intx(int irq, void *dev_id)
+ 	return ret;
+ }
+ 
+-static u32 pm8001_setup_irq(struct pm8001_hba_info *pm8001_ha);
+ static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha);
+ 
+ /**
+@@ -275,13 +274,6 @@ static int pm8001_alloc(struct pm8001_hba_info *pm8001_ha,
+ 	pm8001_dbg(pm8001_ha, INIT, "pm8001_alloc: PHY:%x\n",
+ 		   pm8001_ha->chip->n_phy);
+ 
+-	/* Setup Interrupt */
+-	rc = pm8001_setup_irq(pm8001_ha);
+-	if (rc) {
+-		pm8001_dbg(pm8001_ha, FAIL,
+-			   "pm8001_setup_irq failed [ret: %d]\n", rc);
+-		goto err_out;
+-	}
+ 	/* Request Interrupt */
+ 	rc = pm8001_request_irq(pm8001_ha);
+ 	if (rc)
+@@ -990,47 +982,38 @@ static u32 pm8001_request_msix(struct pm8001_hba_info *pm8001_ha)
+ }
+ #endif
+ 
+-static u32 pm8001_setup_irq(struct pm8001_hba_info *pm8001_ha)
+-{
+-	struct pci_dev *pdev;
+-
+-	pdev = pm8001_ha->pdev;
+-
+-#ifdef PM8001_USE_MSIX
+-	if (pci_find_capability(pdev, PCI_CAP_ID_MSIX))
+-		return pm8001_setup_msix(pm8001_ha);
+-	pm8001_dbg(pm8001_ha, INIT, "MSIX not supported!!!\n");
+-#endif
+-	return 0;
+-}
+-
+ /**
+  * pm8001_request_irq - register interrupt
+  * @pm8001_ha: our ha struct.
+  */
+ static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha)
+ {
+-	struct pci_dev *pdev;
++	struct pci_dev *pdev = pm8001_ha->pdev;
++#ifdef PM8001_USE_MSIX
+ 	int rc;
+ 
+-	pdev = pm8001_ha->pdev;
++	if (pci_find_capability(pdev, PCI_CAP_ID_MSIX)) {
++		rc = pm8001_setup_msix(pm8001_ha);
++		if (rc) {
++			pm8001_dbg(pm8001_ha, FAIL,
++				   "pm8001_setup_irq failed [ret: %d]\n", rc);
++			return rc;
++		}
+ 
+-#ifdef PM8001_USE_MSIX
+-	if (pdev->msix_cap && pci_msi_enabled())
+-		return pm8001_request_msix(pm8001_ha);
+-	else {
+-		pm8001_dbg(pm8001_ha, INIT, "MSIX not supported!!!\n");
+-		goto intx;
++		if (pdev->msix_cap && pci_msi_enabled())
++			return pm8001_request_msix(pm8001_ha);
+ 	}
++
++	pm8001_dbg(pm8001_ha, INIT, "MSIX not supported!!!\n");
+ #endif
+ 
+-intx:
+ 	/* initialize the INT-X interrupt */
+ 	pm8001_ha->irq_vector[0].irq_id = 0;
+ 	pm8001_ha->irq_vector[0].drv_inst = pm8001_ha;
+-	rc = request_irq(pdev->irq, pm8001_interrupt_handler_intx, IRQF_SHARED,
+-		pm8001_ha->name, SHOST_TO_SAS_HA(pm8001_ha->shost));
+-	return rc;
++
++	return request_irq(pdev->irq, pm8001_interrupt_handler_intx,
++			   IRQF_SHARED, pm8001_ha->name,
++			   SHOST_TO_SAS_HA(pm8001_ha->shost));
+ }
+ 
+ /**
+diff --git a/drivers/scsi/qla2xxx/qla_dfs.c b/drivers/scsi/qla2xxx/qla_dfs.c
+index d5ebcf7d70ff0..7d778bf3fd722 100644
+--- a/drivers/scsi/qla2xxx/qla_dfs.c
++++ b/drivers/scsi/qla2xxx/qla_dfs.c
+@@ -116,7 +116,7 @@ qla2x00_dfs_create_rport(scsi_qla_host_t *vha, struct fc_port *fp)
+ 
+ 	sprintf(wwn, "pn-%016llx", wwn_to_u64(fp->port_name));
+ 	fp->dfs_rport_dir = debugfs_create_dir(wwn, vha->dfs_rport_root);
+-	if (!fp->dfs_rport_dir)
++	if (IS_ERR(fp->dfs_rport_dir))
+ 		return;
+ 	if (NVME_TARGET(vha->hw, fp))
+ 		debugfs_create_file("dev_loss_tmo", 0600, fp->dfs_rport_dir,
+@@ -571,14 +571,14 @@ create_nodes:
+ 	if (IS_QLA27XX(ha) || IS_QLA83XX(ha) || IS_QLA28XX(ha)) {
+ 		ha->tgt.dfs_naqp = debugfs_create_file("naqp",
+ 		    0400, ha->dfs_dir, vha, &dfs_naqp_ops);
+-		if (!ha->tgt.dfs_naqp) {
++		if (IS_ERR(ha->tgt.dfs_naqp)) {
+ 			ql_log(ql_log_warn, vha, 0xd011,
+ 			       "Unable to create debugFS naqp node.\n");
+ 			goto out;
+ 		}
+ 	}
+ 	vha->dfs_rport_root = debugfs_create_dir("rports", ha->dfs_dir);
+-	if (!vha->dfs_rport_root) {
++	if (IS_ERR(vha->dfs_rport_root)) {
+ 		ql_log(ql_log_warn, vha, 0xd012,
+ 		       "Unable to create debugFS rports node.\n");
+ 		goto out;
+diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c
+index 0fa1d57b26fa8..3cd671bbb9a41 100644
+--- a/drivers/target/iscsi/iscsi_target_configfs.c
++++ b/drivers/target/iscsi/iscsi_target_configfs.c
+@@ -508,102 +508,102 @@ static ssize_t lio_target_nacl_info_show(struct config_item *item, char *page)
+ 	spin_lock_bh(&se_nacl->nacl_sess_lock);
+ 	se_sess = se_nacl->nacl_sess;
+ 	if (!se_sess) {
+-		rb += sprintf(page+rb, "No active iSCSI Session for Initiator"
++		rb += sysfs_emit_at(page, rb, "No active iSCSI Session for Initiator"
+ 			" Endpoint: %s\n", se_nacl->initiatorname);
+ 	} else {
+ 		sess = se_sess->fabric_sess_ptr;
+ 
+-		rb += sprintf(page+rb, "InitiatorName: %s\n",
++		rb += sysfs_emit_at(page, rb, "InitiatorName: %s\n",
+ 			sess->sess_ops->InitiatorName);
+-		rb += sprintf(page+rb, "InitiatorAlias: %s\n",
++		rb += sysfs_emit_at(page, rb, "InitiatorAlias: %s\n",
+ 			sess->sess_ops->InitiatorAlias);
+ 
+-		rb += sprintf(page+rb,
++		rb += sysfs_emit_at(page, rb,
+ 			      "LIO Session ID: %u   ISID: 0x%6ph  TSIH: %hu  ",
+ 			      sess->sid, sess->isid, sess->tsih);
+-		rb += sprintf(page+rb, "SessionType: %s\n",
++		rb += sysfs_emit_at(page, rb, "SessionType: %s\n",
+ 				(sess->sess_ops->SessionType) ?
+ 				"Discovery" : "Normal");
+-		rb += sprintf(page+rb, "Session State: ");
++		rb += sysfs_emit_at(page, rb, "Session State: ");
+ 		switch (sess->session_state) {
+ 		case TARG_SESS_STATE_FREE:
+-			rb += sprintf(page+rb, "TARG_SESS_FREE\n");
++			rb += sysfs_emit_at(page, rb, "TARG_SESS_FREE\n");
+ 			break;
+ 		case TARG_SESS_STATE_ACTIVE:
+-			rb += sprintf(page+rb, "TARG_SESS_STATE_ACTIVE\n");
++			rb += sysfs_emit_at(page, rb, "TARG_SESS_STATE_ACTIVE\n");
+ 			break;
+ 		case TARG_SESS_STATE_LOGGED_IN:
+-			rb += sprintf(page+rb, "TARG_SESS_STATE_LOGGED_IN\n");
++			rb += sysfs_emit_at(page, rb, "TARG_SESS_STATE_LOGGED_IN\n");
+ 			break;
+ 		case TARG_SESS_STATE_FAILED:
+-			rb += sprintf(page+rb, "TARG_SESS_STATE_FAILED\n");
++			rb += sysfs_emit_at(page, rb, "TARG_SESS_STATE_FAILED\n");
+ 			break;
+ 		case TARG_SESS_STATE_IN_CONTINUE:
+-			rb += sprintf(page+rb, "TARG_SESS_STATE_IN_CONTINUE\n");
++			rb += sysfs_emit_at(page, rb, "TARG_SESS_STATE_IN_CONTINUE\n");
+ 			break;
+ 		default:
+-			rb += sprintf(page+rb, "ERROR: Unknown Session"
++			rb += sysfs_emit_at(page, rb, "ERROR: Unknown Session"
+ 					" State!\n");
+ 			break;
+ 		}
+ 
+-		rb += sprintf(page+rb, "---------------------[iSCSI Session"
++		rb += sysfs_emit_at(page, rb, "---------------------[iSCSI Session"
+ 				" Values]-----------------------\n");
+-		rb += sprintf(page+rb, "  CmdSN/WR  :  CmdSN/WC  :  ExpCmdSN"
++		rb += sysfs_emit_at(page, rb, "  CmdSN/WR  :  CmdSN/WC  :  ExpCmdSN"
+ 				"  :  MaxCmdSN  :     ITT    :     TTT\n");
+ 		max_cmd_sn = (u32) atomic_read(&sess->max_cmd_sn);
+-		rb += sprintf(page+rb, " 0x%08x   0x%08x   0x%08x   0x%08x"
++		rb += sysfs_emit_at(page, rb, " 0x%08x   0x%08x   0x%08x   0x%08x"
+ 				"   0x%08x   0x%08x\n",
+ 			sess->cmdsn_window,
+ 			(max_cmd_sn - sess->exp_cmd_sn) + 1,
+ 			sess->exp_cmd_sn, max_cmd_sn,
+ 			sess->init_task_tag, sess->targ_xfer_tag);
+-		rb += sprintf(page+rb, "----------------------[iSCSI"
++		rb += sysfs_emit_at(page, rb, "----------------------[iSCSI"
+ 				" Connections]-------------------------\n");
+ 
+ 		spin_lock(&sess->conn_lock);
+ 		list_for_each_entry(conn, &sess->sess_conn_list, conn_list) {
+-			rb += sprintf(page+rb, "CID: %hu  Connection"
++			rb += sysfs_emit_at(page, rb, "CID: %hu  Connection"
+ 					" State: ", conn->cid);
+ 			switch (conn->conn_state) {
+ 			case TARG_CONN_STATE_FREE:
+-				rb += sprintf(page+rb,
++				rb += sysfs_emit_at(page, rb,
+ 					"TARG_CONN_STATE_FREE\n");
+ 				break;
+ 			case TARG_CONN_STATE_XPT_UP:
+-				rb += sprintf(page+rb,
++				rb += sysfs_emit_at(page, rb,
+ 					"TARG_CONN_STATE_XPT_UP\n");
+ 				break;
+ 			case TARG_CONN_STATE_IN_LOGIN:
+-				rb += sprintf(page+rb,
++				rb += sysfs_emit_at(page, rb,
+ 					"TARG_CONN_STATE_IN_LOGIN\n");
+ 				break;
+ 			case TARG_CONN_STATE_LOGGED_IN:
+-				rb += sprintf(page+rb,
++				rb += sysfs_emit_at(page, rb,
+ 					"TARG_CONN_STATE_LOGGED_IN\n");
+ 				break;
+ 			case TARG_CONN_STATE_IN_LOGOUT:
+-				rb += sprintf(page+rb,
++				rb += sysfs_emit_at(page, rb,
+ 					"TARG_CONN_STATE_IN_LOGOUT\n");
+ 				break;
+ 			case TARG_CONN_STATE_LOGOUT_REQUESTED:
+-				rb += sprintf(page+rb,
++				rb += sysfs_emit_at(page, rb,
+ 					"TARG_CONN_STATE_LOGOUT_REQUESTED\n");
+ 				break;
+ 			case TARG_CONN_STATE_CLEANUP_WAIT:
+-				rb += sprintf(page+rb,
++				rb += sysfs_emit_at(page, rb,
+ 					"TARG_CONN_STATE_CLEANUP_WAIT\n");
+ 				break;
+ 			default:
+-				rb += sprintf(page+rb,
++				rb += sysfs_emit_at(page, rb,
+ 					"ERROR: Unknown Connection State!\n");
+ 				break;
+ 			}
+ 
+-			rb += sprintf(page+rb, "   Address %pISc %s", &conn->login_sockaddr,
++			rb += sysfs_emit_at(page, rb, "   Address %pISc %s", &conn->login_sockaddr,
+ 				(conn->network_transport == ISCSI_TCP) ?
+ 				"TCP" : "SCTP");
+-			rb += sprintf(page+rb, "  StatSN: 0x%08x\n",
++			rb += sysfs_emit_at(page, rb, "  StatSN: 0x%08x\n",
+ 				conn->stat_sn);
+ 		}
+ 		spin_unlock(&sess->conn_lock);
+diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+index 4df47d02b34b4..f727222469b60 100644
+--- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c
++++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+@@ -1263,19 +1263,14 @@ static void cpm_uart_console_write(struct console *co, const char *s,
+ {
+ 	struct uart_cpm_port *pinfo = &cpm_uart_ports[co->index];
+ 	unsigned long flags;
+-	int nolock = oops_in_progress;
+ 
+-	if (unlikely(nolock)) {
++	if (unlikely(oops_in_progress)) {
+ 		local_irq_save(flags);
+-	} else {
+-		spin_lock_irqsave(&pinfo->port.lock, flags);
+-	}
+-
+-	cpm_uart_early_write(pinfo, s, count, true);
+-
+-	if (unlikely(nolock)) {
++		cpm_uart_early_write(pinfo, s, count, true);
+ 		local_irq_restore(flags);
+ 	} else {
++		spin_lock_irqsave(&pinfo->port.lock, flags);
++		cpm_uart_early_write(pinfo, s, count, true);
+ 		spin_unlock_irqrestore(&pinfo->port.lock, flags);
+ 	}
+ }
+diff --git a/drivers/usb/gadget/udc/fsl_qe_udc.c b/drivers/usb/gadget/udc/fsl_qe_udc.c
+index fa66449b39075..618b0a38c8855 100644
+--- a/drivers/usb/gadget/udc/fsl_qe_udc.c
++++ b/drivers/usb/gadget/udc/fsl_qe_udc.c
+@@ -1950,9 +1950,13 @@ static void ch9getstatus(struct qe_udc *udc, u8 request_type, u16 value,
+ 	} else if ((request_type & USB_RECIP_MASK) == USB_RECIP_ENDPOINT) {
+ 		/* Get endpoint status */
+ 		int pipe = index & USB_ENDPOINT_NUMBER_MASK;
+-		struct qe_ep *target_ep = &udc->eps[pipe];
++		struct qe_ep *target_ep;
+ 		u16 usep;
+ 
++		if (pipe >= USB_MAX_ENDPOINTS)
++			goto stall;
++		target_ep = &udc->eps[pipe];
++
+ 		/* stall if endpoint doesn't exist */
+ 		if (!target_ep->ep.desc)
+ 			goto stall;
+diff --git a/fs/attr.c b/fs/attr.c
+index 326a0db3296d7..fefcdc7780286 100644
+--- a/fs/attr.c
++++ b/fs/attr.c
+@@ -309,9 +309,25 @@ int notify_change(struct dentry * dentry, struct iattr * attr, struct inode **de
+ 	}
+ 
+ 	if ((ia_valid & ATTR_MODE)) {
+-		umode_t amode = attr->ia_mode;
++		/*
++		 * Don't allow changing the mode of symlinks:
++		 *
++		 * (1) The vfs doesn't take the mode of symlinks into account
++		 *     during permission checking.
++		 * (2) This has never worked correctly. Most major filesystems
++		 *     did return EOPNOTSUPP due to interactions with POSIX ACLs
++		 *     but did still updated the mode of the symlink.
++		 *     This inconsistency led system call wrapper providers such
++		 *     as libc to block changing the mode of symlinks with
++		 *     EOPNOTSUPP already.
++		 * (3) To even do this in the first place one would have to use
++		 *     specific file descriptors and quite some effort.
++		 */
++		if (S_ISLNK(inode->i_mode))
++			return -EOPNOTSUPP;
++
+ 		/* Flag setting protected by i_mutex */
+-		if (is_sxid(amode))
++		if (is_sxid(attr->ia_mode))
+ 			inode->i_flags &= ~S_NOSEC;
+ 	}
+ 
+diff --git a/fs/autofs/waitq.c b/fs/autofs/waitq.c
+index 5ced859dac539..dd479198310e8 100644
+--- a/fs/autofs/waitq.c
++++ b/fs/autofs/waitq.c
+@@ -32,8 +32,9 @@ void autofs_catatonic_mode(struct autofs_sb_info *sbi)
+ 		wq->status = -ENOENT; /* Magic is gone - report failure */
+ 		kfree(wq->name.name);
+ 		wq->name.name = NULL;
+-		wq->wait_ctr--;
+ 		wake_up_interruptible(&wq->queue);
++		if (!--wq->wait_ctr)
++			kfree(wq);
+ 		wq = nwq;
+ 	}
+ 	fput(sbi->pipe);	/* Close the pipe */
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index bcc6848bb6d6a..67831868ef0de 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -529,8 +529,6 @@ struct btrfs_swapfile_pin {
+ 	int bg_extent_count;
+ };
+ 
+-bool btrfs_pinned_by_swapfile(struct btrfs_fs_info *fs_info, void *ptr);
+-
+ enum {
+ 	BTRFS_FS_BARRIER,
+ 	BTRFS_FS_CLOSING_START,
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index 04422d929c232..bcffe7886530a 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -1173,20 +1173,33 @@ static int __btrfs_run_delayed_items(struct btrfs_trans_handle *trans, int nr)
+ 		ret = __btrfs_commit_inode_delayed_items(trans, path,
+ 							 curr_node);
+ 		if (ret) {
+-			btrfs_release_delayed_node(curr_node);
+-			curr_node = NULL;
+ 			btrfs_abort_transaction(trans, ret);
+ 			break;
+ 		}
+ 
+ 		prev_node = curr_node;
+ 		curr_node = btrfs_next_delayed_node(curr_node);
++		/*
++		 * See the comment below about releasing path before releasing
++		 * node. If the commit of delayed items was successful the path
++		 * should always be released, but in case of an error, it may
++		 * point to locked extent buffers (a leaf at the very least).
++		 */
++		ASSERT(path->nodes[0] == NULL);
+ 		btrfs_release_delayed_node(prev_node);
+ 	}
+ 
++	/*
++	 * Release the path to avoid a potential deadlock and lockdep splat when
++	 * releasing the delayed node, as that requires taking the delayed node's
++	 * mutex. If another task starts running delayed items before we take
++	 * the mutex, it will first lock the mutex and then it may try to lock
++	 * the same btree path (leaf).
++	 */
++	btrfs_free_path(path);
++
+ 	if (curr_node)
+ 		btrfs_release_delayed_node(curr_node);
+-	btrfs_free_path(path);
+ 	trans->block_rsv = block_rsv;
+ 
+ 	return ret;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 1bc6909d4de94..0e25a3f64b2e0 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2503,13 +2503,11 @@ static int validate_super(struct btrfs_fs_info *fs_info,
+ 		ret = -EINVAL;
+ 	}
+ 
+-	if (btrfs_fs_incompat(fs_info, METADATA_UUID) &&
+-	    memcmp(fs_info->fs_devices->metadata_uuid,
+-		   fs_info->super_copy->metadata_uuid, BTRFS_FSID_SIZE)) {
++	if (memcmp(fs_info->fs_devices->metadata_uuid, btrfs_sb_fsid_ptr(sb),
++		   BTRFS_FSID_SIZE) != 0) {
+ 		btrfs_err(fs_info,
+ "superblock metadata_uuid doesn't match metadata uuid of fs_devices: %pU != %pU",
+-			fs_info->super_copy->metadata_uuid,
+-			fs_info->fs_devices->metadata_uuid);
++			  btrfs_sb_fsid_ptr(sb), fs_info->fs_devices->metadata_uuid);
+ 		ret = -EINVAL;
+ 	}
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 9cdf50e2484e1..4d2f25ebe3048 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -857,6 +857,11 @@ again:
+ 		err = -ENOENT;
+ 		goto out;
+ 	} else if (WARN_ON(ret)) {
++		btrfs_print_leaf(path->nodes[0]);
++		btrfs_err(fs_info,
++"extent item not found for insert, bytenr %llu num_bytes %llu parent %llu root_objectid %llu owner %llu offset %llu",
++			  bytenr, num_bytes, parent, root_objectid, owner,
++			  offset);
+ 		err = -EIO;
+ 		goto out;
+ 	}
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 63bf68e0b0061..930126b094add 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -2550,6 +2550,13 @@ static int btrfs_search_path_in_tree_user(struct inode *inode,
+ 				goto out_put;
+ 			}
+ 
++			/*
++			 * We don't need the path anymore, so release it and
++			 * avoid deadlocks and lockdep warnings in case
++			 * btrfs_iget() needs to lookup the inode from its root
++			 * btree and lock the same leaf.
++			 */
++			btrfs_release_path(path);
+ 			temp_inode = btrfs_iget(sb, key2.objectid, root);
+ 			if (IS_ERR(temp_inode)) {
+ 				ret = PTR_ERR(temp_inode);
+@@ -2569,7 +2576,6 @@ static int btrfs_search_path_in_tree_user(struct inode *inode,
+ 				goto out_put;
+ 			}
+ 
+-			btrfs_release_path(path);
+ 			key.objectid = key.offset;
+ 			key.offset = (u64)-1;
+ 			dirid = key.objectid;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index b798586263ebb..86c50e0570a5e 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -718,6 +718,14 @@ error_free_page:
+ 	return -EINVAL;
+ }
+ 
++u8 *btrfs_sb_fsid_ptr(struct btrfs_super_block *sb)
++{
++	bool has_metadata_uuid = (btrfs_super_incompat_flags(sb) &
++				  BTRFS_FEATURE_INCOMPAT_METADATA_UUID);
++
++	return has_metadata_uuid ? sb->metadata_uuid : sb->fsid;
++}
++
+ /*
+  * Handle scanned device having its CHANGING_FSID_V2 flag set and the fs_devices
+  * being created with a disk that has already completed its fsid change. Such
+diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
+index f2177263748e8..b2046e92b9143 100644
+--- a/fs/btrfs/volumes.h
++++ b/fs/btrfs/volumes.h
+@@ -580,4 +580,7 @@ int btrfs_bg_type_to_factor(u64 flags);
+ const char *btrfs_bg_type_to_raid_name(u64 flags);
+ int btrfs_verify_dev_extents(struct btrfs_fs_info *fs_info);
+ 
++bool btrfs_pinned_by_swapfile(struct btrfs_fs_info *fs_info, void *ptr);
++u8 *btrfs_sb_fsid_ptr(struct btrfs_super_block *sb);
++
+ #endif
+diff --git a/fs/ext2/xattr.c b/fs/ext2/xattr.c
+index 841fa6d9d744b..f1dc11dab0d88 100644
+--- a/fs/ext2/xattr.c
++++ b/fs/ext2/xattr.c
+@@ -694,10 +694,10 @@ ext2_xattr_set2(struct inode *inode, struct buffer_head *old_bh,
+ 			/* We need to allocate a new block */
+ 			ext2_fsblk_t goal = ext2_group_first_block_no(sb,
+ 						EXT2_I(inode)->i_block_group);
+-			int block = ext2_new_block(inode, goal, &error);
++			ext2_fsblk_t block = ext2_new_block(inode, goal, &error);
+ 			if (error)
+ 				goto cleanup;
+-			ea_idebug(inode, "creating block %d", block);
++			ea_idebug(inode, "creating block %lu", block);
+ 
+ 			new_bh = sb_getblk(sb, block);
+ 			if (unlikely(!new_bh)) {
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index ca3d2a33a08c8..a50eb4c61ecc6 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -339,17 +339,17 @@ static struct ext4_dir_entry_tail *get_dirent_tail(struct inode *inode,
+ 						   struct buffer_head *bh)
+ {
+ 	struct ext4_dir_entry_tail *t;
++	int blocksize = EXT4_BLOCK_SIZE(inode->i_sb);
+ 
+ #ifdef PARANOID
+ 	struct ext4_dir_entry *d, *top;
+ 
+ 	d = (struct ext4_dir_entry *)bh->b_data;
+ 	top = (struct ext4_dir_entry *)(bh->b_data +
+-		(EXT4_BLOCK_SIZE(inode->i_sb) -
+-		 sizeof(struct ext4_dir_entry_tail)));
+-	while (d < top && d->rec_len)
++		(blocksize - sizeof(struct ext4_dir_entry_tail)));
++	while (d < top && ext4_rec_len_from_disk(d->rec_len, blocksize))
+ 		d = (struct ext4_dir_entry *)(((void *)d) +
+-		    le16_to_cpu(d->rec_len));
++		    ext4_rec_len_from_disk(d->rec_len, blocksize));
+ 
+ 	if (d != top)
+ 		return NULL;
+@@ -360,7 +360,8 @@ static struct ext4_dir_entry_tail *get_dirent_tail(struct inode *inode,
+ #endif
+ 
+ 	if (t->det_reserved_zero1 ||
+-	    le16_to_cpu(t->det_rec_len) != sizeof(struct ext4_dir_entry_tail) ||
++	    (ext4_rec_len_from_disk(t->det_rec_len, blocksize) !=
++	     sizeof(struct ext4_dir_entry_tail)) ||
+ 	    t->det_reserved_zero2 ||
+ 	    t->det_reserved_ft != EXT4_FT_DIR_CSUM)
+ 		return NULL;
+@@ -441,13 +442,14 @@ static struct dx_countlimit *get_dx_countlimit(struct inode *inode,
+ 	struct ext4_dir_entry *dp;
+ 	struct dx_root_info *root;
+ 	int count_offset;
++	int blocksize = EXT4_BLOCK_SIZE(inode->i_sb);
++	unsigned int rlen = ext4_rec_len_from_disk(dirent->rec_len, blocksize);
+ 
+-	if (le16_to_cpu(dirent->rec_len) == EXT4_BLOCK_SIZE(inode->i_sb))
++	if (rlen == blocksize)
+ 		count_offset = 8;
+-	else if (le16_to_cpu(dirent->rec_len) == 12) {
++	else if (rlen == 12) {
+ 		dp = (struct ext4_dir_entry *)(((void *)dirent) + 12);
+-		if (le16_to_cpu(dp->rec_len) !=
+-		    EXT4_BLOCK_SIZE(inode->i_sb) - 12)
++		if (ext4_rec_len_from_disk(dp->rec_len, blocksize) != blocksize - 12)
+ 			return NULL;
+ 		root = (struct dx_root_info *)(((void *)dp + 12));
+ 		if (root->reserved_zero ||
+@@ -1256,6 +1258,7 @@ static int dx_make_map(struct inode *dir, struct buffer_head *bh,
+ 	unsigned int buflen = bh->b_size;
+ 	char *base = bh->b_data;
+ 	struct dx_hash_info h = *hinfo;
++	int blocksize = EXT4_BLOCK_SIZE(dir->i_sb);
+ 
+ 	if (ext4_has_metadata_csum(dir->i_sb))
+ 		buflen -= sizeof(struct ext4_dir_entry_tail);
+@@ -1269,11 +1272,12 @@ static int dx_make_map(struct inode *dir, struct buffer_head *bh,
+ 			map_tail--;
+ 			map_tail->hash = h.hash;
+ 			map_tail->offs = ((char *) de - base)>>2;
+-			map_tail->size = le16_to_cpu(de->rec_len);
++			map_tail->size = ext4_rec_len_from_disk(de->rec_len,
++								blocksize);
+ 			count++;
+ 			cond_resched();
+ 		}
+-		de = ext4_next_entry(de, dir->i_sb->s_blocksize);
++		de = ext4_next_entry(de, blocksize);
+ 	}
+ 	return count;
+ }
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index cef3303d94995..a9c078fc2302a 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -269,6 +269,7 @@ int dbUnmount(struct inode *ipbmap, int mounterror)
+ 
+ 	/* free the memory for the in-memory bmap. */
+ 	kfree(bmp);
++	JFS_SBI(ipbmap->i_sb)->bmap = NULL;
+ 
+ 	return (0);
+ }
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index 937ca07b58b1d..67c67604b8c85 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -195,6 +195,7 @@ int diUnmount(struct inode *ipimap, int mounterror)
+ 	 * free in-memory control structure
+ 	 */
+ 	kfree(imap);
++	JFS_IP(ipimap)->i_imap = NULL;
+ 
+ 	return (0);
+ }
+diff --git a/fs/locks.c b/fs/locks.c
+index 12d72c3d8756c..cbb5701ce9f37 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -1339,6 +1339,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
+  out:
+ 	spin_unlock(&ctx->flc_lock);
+ 	percpu_up_read(&file_rwsem);
++	trace_posix_lock_inode(inode, request, error);
+ 	/*
+ 	 * Free any unused locks.
+ 	 */
+@@ -1347,7 +1348,6 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
+ 	if (new_fl2)
+ 		locks_free_lock(new_fl2);
+ 	locks_dispose_list(&dispose);
+-	trace_posix_lock_inode(inode, request, error);
+ 
+ 	return error;
+ }
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 3c651cbcf8971..e84996c3867c7 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -881,8 +881,8 @@ nfsd4_rename(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 			     rename->rn_tname, rename->rn_tnamelen);
+ 	if (status)
+ 		return status;
+-	set_change_info(&rename->rn_sinfo, &cstate->current_fh);
+-	set_change_info(&rename->rn_tinfo, &cstate->save_fh);
++	set_change_info(&rename->rn_sinfo, &cstate->save_fh);
++	set_change_info(&rename->rn_tinfo, &cstate->current_fh);
+ 	return nfs_ok;
+ }
+ 
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index 0e734c8b4dfa2..244e4258ce16f 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -19,7 +19,6 @@ struct ovl_aio_req {
+ 	struct kiocb iocb;
+ 	refcount_t ref;
+ 	struct kiocb *orig_iocb;
+-	struct fd fd;
+ };
+ 
+ static struct kmem_cache *ovl_aio_request_cachep;
+@@ -261,7 +260,7 @@ static rwf_t ovl_iocb_to_rwf(int ifl)
+ static inline void ovl_aio_put(struct ovl_aio_req *aio_req)
+ {
+ 	if (refcount_dec_and_test(&aio_req->ref)) {
+-		fdput(aio_req->fd);
++		fput(aio_req->iocb.ki_filp);
+ 		kmem_cache_free(ovl_aio_request_cachep, aio_req);
+ 	}
+ }
+@@ -327,10 +326,9 @@ static ssize_t ovl_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 		if (!aio_req)
+ 			goto out;
+ 
+-		aio_req->fd = real;
+ 		real.flags = 0;
+ 		aio_req->orig_iocb = iocb;
+-		kiocb_clone(&aio_req->iocb, iocb, real.file);
++		kiocb_clone(&aio_req->iocb, iocb, get_file(real.file));
+ 		aio_req->iocb.ki_complete = ovl_aio_rw_complete;
+ 		refcount_set(&aio_req->ref, 2);
+ 		ret = vfs_iocb_iter_read(real.file, &aio_req->iocb, iter);
+@@ -399,10 +397,9 @@ static ssize_t ovl_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 		/* Pacify lockdep, same trick as done in aio_write() */
+ 		__sb_writers_release(file_inode(real.file)->i_sb,
+ 				     SB_FREEZE_WRITE);
+-		aio_req->fd = real;
+ 		real.flags = 0;
+ 		aio_req->orig_iocb = iocb;
+-		kiocb_clone(&aio_req->iocb, iocb, real.file);
++		kiocb_clone(&aio_req->iocb, iocb, get_file(real.file));
+ 		aio_req->iocb.ki_flags = ifl;
+ 		aio_req->iocb.ki_complete = ovl_aio_rw_complete;
+ 		refcount_set(&aio_req->ref, 2);
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index a484c30bd5cf6..712948e979911 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -1881,7 +1881,7 @@ void proc_pid_evict_inode(struct proc_inode *ei)
+ 	put_pid(pid);
+ }
+ 
+-struct inode *proc_pid_make_inode(struct super_block * sb,
++struct inode *proc_pid_make_inode(struct super_block *sb,
+ 				  struct task_struct *task, umode_t mode)
+ {
+ 	struct inode * inode;
+@@ -1910,11 +1910,6 @@ struct inode *proc_pid_make_inode(struct super_block * sb,
+ 
+ 	/* Let the pid remember us for quick removal */
+ 	ei->pid = pid;
+-	if (S_ISDIR(mode)) {
+-		spin_lock(&pid->lock);
+-		hlist_add_head_rcu(&ei->sibling_inodes, &pid->inodes);
+-		spin_unlock(&pid->lock);
+-	}
+ 
+ 	task_dump_owner(task, 0, &inode->i_uid, &inode->i_gid);
+ 	security_task_to_inode(task, inode);
+@@ -1927,6 +1922,39 @@ out_unlock:
+ 	return NULL;
+ }
+ 
++/*
++ * Generating an inode and adding it into @pid->inodes, so that task will
++ * invalidate inode's dentry before being released.
++ *
++ * This helper is used for creating dir-type entries under '/proc' and
++ * '/proc/<tgid>/task'. Other entries(eg. fd, stat) under '/proc/<tgid>'
++ * can be released by invalidating '/proc/<tgid>' dentry.
++ * In theory, dentries under '/proc/<tgid>/task' can also be released by
++ * invalidating '/proc/<tgid>' dentry, we reserve it to handle single
++ * thread exiting situation: Any one of threads should invalidate its
++ * '/proc/<tgid>/task/<pid>' dentry before released.
++ */
++static struct inode *proc_pid_make_base_inode(struct super_block *sb,
++				struct task_struct *task, umode_t mode)
++{
++	struct inode *inode;
++	struct proc_inode *ei;
++	struct pid *pid;
++
++	inode = proc_pid_make_inode(sb, task, mode);
++	if (!inode)
++		return NULL;
++
++	/* Let proc_flush_pid find this directory inode */
++	ei = PROC_I(inode);
++	pid = ei->pid;
++	spin_lock(&pid->lock);
++	hlist_add_head_rcu(&ei->sibling_inodes, &pid->inodes);
++	spin_unlock(&pid->lock);
++
++	return inode;
++}
++
+ int pid_getattr(const struct path *path, struct kstat *stat,
+ 		u32 request_mask, unsigned int query_flags)
+ {
+@@ -3341,7 +3369,8 @@ static struct dentry *proc_pid_instantiate(struct dentry * dentry,
+ {
+ 	struct inode *inode;
+ 
+-	inode = proc_pid_make_inode(dentry->d_sb, task, S_IFDIR | S_IRUGO | S_IXUGO);
++	inode = proc_pid_make_base_inode(dentry->d_sb, task,
++					 S_IFDIR | S_IRUGO | S_IXUGO);
+ 	if (!inode)
+ 		return ERR_PTR(-ENOENT);
+ 
+@@ -3637,7 +3666,8 @@ static struct dentry *proc_task_instantiate(struct dentry *dentry,
+ 	struct task_struct *task, const void *ptr)
+ {
+ 	struct inode *inode;
+-	inode = proc_pid_make_inode(dentry->d_sb, task, S_IFDIR | S_IRUGO | S_IXUGO);
++	inode = proc_pid_make_base_inode(dentry->d_sb, task,
++					 S_IFDIR | S_IRUGO | S_IXUGO);
+ 	if (!inode)
+ 		return ERR_PTR(-ENOENT);
+ 
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index 4b70571368526..7d78984af4349 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -554,6 +554,9 @@ static struct dentry *__create_dir(const char *name, struct dentry *parent,
+  */
+ struct dentry *tracefs_create_dir(const char *name, struct dentry *parent)
+ {
++	if (security_locked_down(LOCKDOWN_TRACEFS))
++		return NULL;
++
+ 	return __create_dir(name, parent, &simple_dir_inode_operations);
+ }
+ 
+diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h
+index 1a12baa58e409..136dba94c646f 100644
+--- a/include/linux/acpi_iort.h
++++ b/include/linux/acpi_iort.h
+@@ -21,6 +21,7 @@
+  */
+ #define IORT_SMMU_V3_PMCG_GENERIC        0x00000000 /* Generic SMMUv3 PMCG */
+ #define IORT_SMMU_V3_PMCG_HISI_HIP08     0x00000001 /* HiSilicon HIP08 PMCG */
++#define IORT_SMMU_V3_PMCG_HISI_HIP09     0x00000002 /* HiSilicon HIP09 PMCG */
+ 
+ int iort_register_domain_token(int trans_id, phys_addr_t base,
+ 			       struct fwnode_handle *fw_node);
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index d3600fc7f7c4d..5ca9347bd8ef9 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -260,6 +260,10 @@ enum {
+ 	ATA_HOST_PARALLEL_SCAN	= (1 << 2),	/* Ports on this host can be scanned in parallel */
+ 	ATA_HOST_IGNORE_ATA	= (1 << 3),	/* Ignore ATA devices on this host. */
+ 
++	ATA_HOST_NO_PART	= (1 << 4), /* Host does not support partial */
++	ATA_HOST_NO_SSC		= (1 << 5), /* Host does not support slumber */
++	ATA_HOST_NO_DEVSLP	= (1 << 6), /* Host does not support devslp */
++
+ 	/* bits 24:31 of host->flags are reserved for LLD specific flags */
+ 
+ 	/* various lengths of time */
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 67a50c78232fe..93dffe2f3fff2 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -1069,15 +1069,31 @@ extern int perf_event_output(struct perf_event *event,
+ 			     struct pt_regs *regs);
+ 
+ static inline bool
+-is_default_overflow_handler(struct perf_event *event)
++__is_default_overflow_handler(perf_overflow_handler_t overflow_handler)
+ {
+-	if (likely(event->overflow_handler == perf_event_output_forward))
++	if (likely(overflow_handler == perf_event_output_forward))
+ 		return true;
+-	if (unlikely(event->overflow_handler == perf_event_output_backward))
++	if (unlikely(overflow_handler == perf_event_output_backward))
+ 		return true;
+ 	return false;
+ }
+ 
++#define is_default_overflow_handler(event) \
++	__is_default_overflow_handler((event)->overflow_handler)
++
++#ifdef CONFIG_BPF_SYSCALL
++static inline bool uses_default_overflow_handler(struct perf_event *event)
++{
++	if (likely(is_default_overflow_handler(event)))
++		return true;
++
++	return __is_default_overflow_handler(event->orig_overflow_handler);
++}
++#else
++#define uses_default_overflow_handler(event) \
++	is_default_overflow_handler(event)
++#endif
++
+ extern void
+ perf_event_header__init_id(struct perf_event_header *header,
+ 			   struct perf_sample_data *data,
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index e8304e929e283..de21a45a4ee7d 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -110,10 +110,36 @@ static inline struct task_struct *get_task_struct(struct task_struct *t)
+ }
+ 
+ extern void __put_task_struct(struct task_struct *t);
++extern void __put_task_struct_rcu_cb(struct rcu_head *rhp);
+ 
+ static inline void put_task_struct(struct task_struct *t)
+ {
+-	if (refcount_dec_and_test(&t->usage))
++	if (!refcount_dec_and_test(&t->usage))
++		return;
++
++	/*
++	 * under PREEMPT_RT, we can't call put_task_struct
++	 * in atomic context because it will indirectly
++	 * acquire sleeping locks.
++	 *
++	 * call_rcu() will schedule delayed_put_task_struct_rcu()
++	 * to be called in process context.
++	 *
++	 * __put_task_struct() is called when
++	 * refcount_dec_and_test(&t->usage) succeeds.
++	 *
++	 * This means that it can't "conflict" with
++	 * put_task_struct_rcu_user() which abuses ->rcu the same
++	 * way; rcu_users has a reference so task->usage can't be
++	 * zero after rcu_users 1 -> 0 transition.
++	 *
++	 * delayed_free_task() also uses ->rcu, but it is only called
++	 * when it fails to fork a process. Therefore, there is no
++	 * way it can conflict with put_task_struct().
++	 */
++	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible())
++		call_rcu(&t->rcu, __put_task_struct_rcu_cb);
++	else
+ 		__put_task_struct(t);
+ }
+ 
+diff --git a/include/uapi/linux/netfilter_bridge/ebtables.h b/include/uapi/linux/netfilter_bridge/ebtables.h
+index a494cf43a7552..b0caad82b6937 100644
+--- a/include/uapi/linux/netfilter_bridge/ebtables.h
++++ b/include/uapi/linux/netfilter_bridge/ebtables.h
+@@ -182,12 +182,14 @@ struct ebt_entry {
+ 	unsigned char sourcemsk[ETH_ALEN];
+ 	unsigned char destmac[ETH_ALEN];
+ 	unsigned char destmsk[ETH_ALEN];
+-	/* sizeof ebt_entry + matches */
+-	unsigned int watchers_offset;
+-	/* sizeof ebt_entry + matches + watchers */
+-	unsigned int target_offset;
+-	/* sizeof ebt_entry + matches + watchers + target */
+-	unsigned int next_offset;
++	__struct_group(/* no tag */, offsets, /* no attrs */,
++		/* sizeof ebt_entry + matches */
++		unsigned int watchers_offset;
++		/* sizeof ebt_entry + matches + watchers */
++		unsigned int target_offset;
++		/* sizeof ebt_entry + matches + watchers + target */
++		unsigned int next_offset;
++	);
+ 	unsigned char elems[0] __attribute__ ((aligned (__alignof__(struct ebt_replace))));
+ };
+ 
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 31455f5ab015a..633b0af1d1a73 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -745,6 +745,14 @@ void __put_task_struct(struct task_struct *tsk)
+ }
+ EXPORT_SYMBOL_GPL(__put_task_struct);
+ 
++void __put_task_struct_rcu_cb(struct rcu_head *rhp)
++{
++	struct task_struct *task = container_of(rhp, struct task_struct, rcu);
++
++	__put_task_struct(task);
++}
++EXPORT_SYMBOL_GPL(__put_task_struct_rcu_cb);
++
+ void __init __weak arch_task_cache_init(void) { }
+ 
+ /*
+diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
+index 6c05365ed80fc..3b9783eda6796 100644
+--- a/kernel/rcu/rcuscale.c
++++ b/kernel/rcu/rcuscale.c
+@@ -372,7 +372,7 @@ rcu_scale_writer(void *arg)
+ 	sched_set_fifo_low(current);
+ 
+ 	if (holdoff)
+-		schedule_timeout_uninterruptible(holdoff * HZ);
++		schedule_timeout_idle(holdoff * HZ);
+ 
+ 	/*
+ 	 * Wait until rcu_end_inkernel_boot() is called for normal GP tests
+diff --git a/kernel/scftorture.c b/kernel/scftorture.c
+index 060ee0b1569a0..be86207a2ab68 100644
+--- a/kernel/scftorture.c
++++ b/kernel/scftorture.c
+@@ -158,7 +158,8 @@ static void scf_torture_stats_print(void)
+ 		scfs.n_all_wait += scf_stats_p[i].n_all_wait;
+ 	}
+ 	if (atomic_read(&n_errs) || atomic_read(&n_mb_in_errs) ||
+-	    atomic_read(&n_mb_out_errs) || atomic_read(&n_alloc_errs))
++	    atomic_read(&n_mb_out_errs) ||
++	    (!IS_ENABLED(CONFIG_KASAN) && atomic_read(&n_alloc_errs)))
+ 		bangstr = "!!! ";
+ 	pr_alert("%s %sscf_invoked_count %s: %lld single: %lld/%lld single_ofl: %lld/%lld many: %lld/%lld all: %lld/%lld ",
+ 		 SCFTORT_FLAG, bangstr, isdone ? "VER" : "ver", invoked_count,
+@@ -306,7 +307,8 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
+ 		preempt_disable();
+ 	if (scfsp->scfs_prim == SCF_PRIM_SINGLE || scfsp->scfs_wait) {
+ 		scfcp = kmalloc(sizeof(*scfcp), GFP_ATOMIC);
+-		if (WARN_ON_ONCE(!scfcp)) {
++		if (!scfcp) {
++			WARN_ON_ONCE(!IS_ENABLED(CONFIG_KASAN));
+ 			atomic_inc(&n_alloc_errs);
+ 		} else {
+ 			scfcp->scfc_cpu = -1;
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index d3e0ae1168069..7a64c0cd8819d 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -7324,10 +7324,11 @@ static const struct file_operations tracing_max_lat_fops = {
+ #endif
+ 
+ static const struct file_operations set_tracer_fops = {
+-	.open		= tracing_open_generic,
++	.open		= tracing_open_generic_tr,
+ 	.read		= tracing_set_trace_read,
+ 	.write		= tracing_set_trace_write,
+ 	.llseek		= generic_file_llseek,
++	.release	= tracing_release_generic_tr,
+ };
+ 
+ static const struct file_operations tracing_pipe_fops = {
+@@ -8366,12 +8367,33 @@ trace_options_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ 	return cnt;
+ }
+ 
++static int tracing_open_options(struct inode *inode, struct file *filp)
++{
++	struct trace_option_dentry *topt = inode->i_private;
++	int ret;
++
++	ret = tracing_check_open_get_tr(topt->tr);
++	if (ret)
++		return ret;
++
++	filp->private_data = inode->i_private;
++	return 0;
++}
++
++static int tracing_release_options(struct inode *inode, struct file *file)
++{
++	struct trace_option_dentry *topt = file->private_data;
++
++	trace_array_put(topt->tr);
++	return 0;
++}
+ 
+ static const struct file_operations trace_options_fops = {
+-	.open = tracing_open_generic,
++	.open = tracing_open_options,
+ 	.read = trace_options_read,
+ 	.write = trace_options_write,
+ 	.llseek	= generic_file_llseek,
++	.release = tracing_release_options,
+ };
+ 
+ /*
+diff --git a/lib/kobject.c b/lib/kobject.c
+index ea53b30cf4837..cd3e1a98eff9e 100644
+--- a/lib/kobject.c
++++ b/lib/kobject.c
+@@ -874,6 +874,11 @@ int kset_register(struct kset *k)
+ 	if (!k)
+ 		return -EINVAL;
+ 
++	if (!k->kobj.ktype) {
++		pr_err("must have a ktype to be initialized properly!\n");
++		return -EINVAL;
++	}
++
+ 	kset_init(k);
+ 	err = kobject_add_internal(&k->kobj);
+ 	if (err)
+diff --git a/lib/mpi/mpi-cmp.c b/lib/mpi/mpi-cmp.c
+index c4cfa3ff05818..0835b6213235e 100644
+--- a/lib/mpi/mpi-cmp.c
++++ b/lib/mpi/mpi-cmp.c
+@@ -25,8 +25,12 @@ int mpi_cmp_ui(MPI u, unsigned long v)
+ 	mpi_limb_t limb = v;
+ 
+ 	mpi_normalize(u);
+-	if (!u->nlimbs && !limb)
+-		return 0;
++	if (u->nlimbs == 0) {
++		if (v == 0)
++			return 0;
++		else
++			return -1;
++	}
+ 	if (u->sign)
+ 		return -1;
+ 	if (u->nlimbs > 1)
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 3a983bc1a71c9..3b0d8c6dd5870 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -2203,6 +2203,9 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
+ 
+ 	if (unlikely(*ppos >= inode->i_sb->s_maxbytes))
+ 		return 0;
++	if (unlikely(!iov_iter_count(iter)))
++		return 0;
++
+ 	iov_iter_truncate(iter, inode->i_sb->s_maxbytes);
+ 
+ 	index = *ppos >> PAGE_SHIFT;
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index 8335b7e4bcf6f..bab14186f9ad5 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -2001,8 +2001,7 @@ static int size_entry_mwt(const struct ebt_entry *entry, const unsigned char *ba
+ 		return ret;
+ 
+ 	offsets[0] = sizeof(struct ebt_entry); /* matches come first */
+-	memcpy(&offsets[1], &entry->watchers_offset,
+-			sizeof(offsets) - sizeof(offsets[0]));
++	memcpy(&offsets[1], &entry->offsets, sizeof(entry->offsets));
+ 
+ 	if (state->buf_kern_start) {
+ 		buf_start = state->buf_kern_start + state->buf_kern_offset;
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 00c6944ed6342..38666dde89340 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -3620,7 +3620,7 @@ static int devlink_param_get(struct devlink *devlink,
+ 			     const struct devlink_param *param,
+ 			     struct devlink_param_gset_ctx *ctx)
+ {
+-	if (!param->get || devlink->reload_failed)
++	if (!param->get)
+ 		return -EOPNOTSUPP;
+ 	return param->get(devlink, param->id, ctx);
+ }
+@@ -3629,7 +3629,7 @@ static int devlink_param_set(struct devlink *devlink,
+ 			     const struct devlink_param *param,
+ 			     struct devlink_param_gset_ctx *ctx)
+ {
+-	if (!param->set || devlink->reload_failed)
++	if (!param->set)
+ 		return -EOPNOTSUPP;
+ 	return param->set(devlink, param->id, ctx);
+ }
+diff --git a/net/sched/Kconfig b/net/sched/Kconfig
+index 697522371914c..2046c16b29f09 100644
+--- a/net/sched/Kconfig
++++ b/net/sched/Kconfig
+@@ -548,34 +548,6 @@ config CLS_U32_MARK
+ 	help
+ 	  Say Y here to be able to use netfilter marks as u32 key.
+ 
+-config NET_CLS_RSVP
+-	tristate "IPv4 Resource Reservation Protocol (RSVP)"
+-	select NET_CLS
+-	help
+-	  The Resource Reservation Protocol (RSVP) permits end systems to
+-	  request a minimum and maximum data flow rate for a connection; this
+-	  is important for real time data such as streaming sound or video.
+-
+-	  Say Y here if you want to be able to classify outgoing packets based
+-	  on their RSVP requests.
+-
+-	  To compile this code as a module, choose M here: the
+-	  module will be called cls_rsvp.
+-
+-config NET_CLS_RSVP6
+-	tristate "IPv6 Resource Reservation Protocol (RSVP6)"
+-	select NET_CLS
+-	help
+-	  The Resource Reservation Protocol (RSVP) permits end systems to
+-	  request a minimum and maximum data flow rate for a connection; this
+-	  is important for real time data such as streaming sound or video.
+-
+-	  Say Y here if you want to be able to classify outgoing packets based
+-	  on their RSVP requests and you are using the IPv6 protocol.
+-
+-	  To compile this code as a module, choose M here: the
+-	  module will be called cls_rsvp6.
+-
+ config NET_CLS_FLOW
+ 	tristate "Flow classifier"
+ 	select NET_CLS
+diff --git a/net/sched/Makefile b/net/sched/Makefile
+index 4311fdb211197..df2bcd785f7d1 100644
+--- a/net/sched/Makefile
++++ b/net/sched/Makefile
+@@ -68,8 +68,6 @@ obj-$(CONFIG_NET_SCH_TAPRIO)	+= sch_taprio.o
+ obj-$(CONFIG_NET_CLS_U32)	+= cls_u32.o
+ obj-$(CONFIG_NET_CLS_ROUTE4)	+= cls_route.o
+ obj-$(CONFIG_NET_CLS_FW)	+= cls_fw.o
+-obj-$(CONFIG_NET_CLS_RSVP)	+= cls_rsvp.o
+-obj-$(CONFIG_NET_CLS_RSVP6)	+= cls_rsvp6.o
+ obj-$(CONFIG_NET_CLS_BASIC)	+= cls_basic.o
+ obj-$(CONFIG_NET_CLS_FLOW)	+= cls_flow.o
+ obj-$(CONFIG_NET_CLS_CGROUP)	+= cls_cgroup.o
+diff --git a/net/sched/cls_rsvp.c b/net/sched/cls_rsvp.c
+deleted file mode 100644
+index de1c1d4da5977..0000000000000
+--- a/net/sched/cls_rsvp.c
++++ /dev/null
+@@ -1,24 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-or-later
+-/*
+- * net/sched/cls_rsvp.c	Special RSVP packet classifier for IPv4.
+- *
+- * Authors:	Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru>
+- */
+-
+-#include <linux/module.h>
+-#include <linux/types.h>
+-#include <linux/kernel.h>
+-#include <linux/string.h>
+-#include <linux/errno.h>
+-#include <linux/skbuff.h>
+-#include <net/ip.h>
+-#include <net/netlink.h>
+-#include <net/act_api.h>
+-#include <net/pkt_cls.h>
+-
+-#define RSVP_DST_LEN	1
+-#define RSVP_ID		"rsvp"
+-#define RSVP_OPS	cls_rsvp_ops
+-
+-#include "cls_rsvp.h"
+-MODULE_LICENSE("GPL");
+diff --git a/net/sched/cls_rsvp.h b/net/sched/cls_rsvp.h
+deleted file mode 100644
+index d36949d9382c4..0000000000000
+--- a/net/sched/cls_rsvp.h
++++ /dev/null
+@@ -1,777 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-or-later */
+-/*
+- * net/sched/cls_rsvp.h	Template file for RSVPv[46] classifiers.
+- *
+- * Authors:	Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru>
+- */
+-
+-/*
+-   Comparing to general packet classification problem,
+-   RSVP needs only sevaral relatively simple rules:
+-
+-   * (dst, protocol) are always specified,
+-     so that we are able to hash them.
+-   * src may be exact, or may be wildcard, so that
+-     we can keep a hash table plus one wildcard entry.
+-   * source port (or flow label) is important only if src is given.
+-
+-   IMPLEMENTATION.
+-
+-   We use a two level hash table: The top level is keyed by
+-   destination address and protocol ID, every bucket contains a list
+-   of "rsvp sessions", identified by destination address, protocol and
+-   DPI(="Destination Port ID"): triple (key, mask, offset).
+-
+-   Every bucket has a smaller hash table keyed by source address
+-   (cf. RSVP flowspec) and one wildcard entry for wildcard reservations.
+-   Every bucket is again a list of "RSVP flows", selected by
+-   source address and SPI(="Source Port ID" here rather than
+-   "security parameter index"): triple (key, mask, offset).
+-
+-
+-   NOTE 1. All the packets with IPv6 extension headers (but AH and ESP)
+-   and all fragmented packets go to the best-effort traffic class.
+-
+-
+-   NOTE 2. Two "port id"'s seems to be redundant, rfc2207 requires
+-   only one "Generalized Port Identifier". So that for classic
+-   ah, esp (and udp,tcp) both *pi should coincide or one of them
+-   should be wildcard.
+-
+-   At first sight, this redundancy is just a waste of CPU
+-   resources. But DPI and SPI add the possibility to assign different
+-   priorities to GPIs. Look also at note 4 about tunnels below.
+-
+-
+-   NOTE 3. One complication is the case of tunneled packets.
+-   We implement it as following: if the first lookup
+-   matches a special session with "tunnelhdr" value not zero,
+-   flowid doesn't contain the true flow ID, but the tunnel ID (1...255).
+-   In this case, we pull tunnelhdr bytes and restart lookup
+-   with tunnel ID added to the list of keys. Simple and stupid 8)8)
+-   It's enough for PIMREG and IPIP.
+-
+-
+-   NOTE 4. Two GPIs make it possible to parse even GRE packets.
+-   F.e. DPI can select ETH_P_IP (and necessary flags to make
+-   tunnelhdr correct) in GRE protocol field and SPI matches
+-   GRE key. Is it not nice? 8)8)
+-
+-
+-   Well, as result, despite its simplicity, we get a pretty
+-   powerful classification engine.  */
+-
+-
+-struct rsvp_head {
+-	u32			tmap[256/32];
+-	u32			hgenerator;
+-	u8			tgenerator;
+-	struct rsvp_session __rcu *ht[256];
+-	struct rcu_head		rcu;
+-};
+-
+-struct rsvp_session {
+-	struct rsvp_session __rcu	*next;
+-	__be32				dst[RSVP_DST_LEN];
+-	struct tc_rsvp_gpi		dpi;
+-	u8				protocol;
+-	u8				tunnelid;
+-	/* 16 (src,sport) hash slots, and one wildcard source slot */
+-	struct rsvp_filter __rcu	*ht[16 + 1];
+-	struct rcu_head			rcu;
+-};
+-
+-
+-struct rsvp_filter {
+-	struct rsvp_filter __rcu	*next;
+-	__be32				src[RSVP_DST_LEN];
+-	struct tc_rsvp_gpi		spi;
+-	u8				tunnelhdr;
+-
+-	struct tcf_result		res;
+-	struct tcf_exts			exts;
+-
+-	u32				handle;
+-	struct rsvp_session		*sess;
+-	struct rcu_work			rwork;
+-};
+-
+-static inline unsigned int hash_dst(__be32 *dst, u8 protocol, u8 tunnelid)
+-{
+-	unsigned int h = (__force __u32)dst[RSVP_DST_LEN - 1];
+-
+-	h ^= h>>16;
+-	h ^= h>>8;
+-	return (h ^ protocol ^ tunnelid) & 0xFF;
+-}
+-
+-static inline unsigned int hash_src(__be32 *src)
+-{
+-	unsigned int h = (__force __u32)src[RSVP_DST_LEN-1];
+-
+-	h ^= h>>16;
+-	h ^= h>>8;
+-	h ^= h>>4;
+-	return h & 0xF;
+-}
+-
+-#define RSVP_APPLY_RESULT()				\
+-{							\
+-	int r = tcf_exts_exec(skb, &f->exts, res);	\
+-	if (r < 0)					\
+-		continue;				\
+-	else if (r > 0)					\
+-		return r;				\
+-}
+-
+-static int rsvp_classify(struct sk_buff *skb, const struct tcf_proto *tp,
+-			 struct tcf_result *res)
+-{
+-	struct rsvp_head *head = rcu_dereference_bh(tp->root);
+-	struct rsvp_session *s;
+-	struct rsvp_filter *f;
+-	unsigned int h1, h2;
+-	__be32 *dst, *src;
+-	u8 protocol;
+-	u8 tunnelid = 0;
+-	u8 *xprt;
+-#if RSVP_DST_LEN == 4
+-	struct ipv6hdr *nhptr;
+-
+-	if (!pskb_network_may_pull(skb, sizeof(*nhptr)))
+-		return -1;
+-	nhptr = ipv6_hdr(skb);
+-#else
+-	struct iphdr *nhptr;
+-
+-	if (!pskb_network_may_pull(skb, sizeof(*nhptr)))
+-		return -1;
+-	nhptr = ip_hdr(skb);
+-#endif
+-restart:
+-
+-#if RSVP_DST_LEN == 4
+-	src = &nhptr->saddr.s6_addr32[0];
+-	dst = &nhptr->daddr.s6_addr32[0];
+-	protocol = nhptr->nexthdr;
+-	xprt = ((u8 *)nhptr) + sizeof(struct ipv6hdr);
+-#else
+-	src = &nhptr->saddr;
+-	dst = &nhptr->daddr;
+-	protocol = nhptr->protocol;
+-	xprt = ((u8 *)nhptr) + (nhptr->ihl<<2);
+-	if (ip_is_fragment(nhptr))
+-		return -1;
+-#endif
+-
+-	h1 = hash_dst(dst, protocol, tunnelid);
+-	h2 = hash_src(src);
+-
+-	for (s = rcu_dereference_bh(head->ht[h1]); s;
+-	     s = rcu_dereference_bh(s->next)) {
+-		if (dst[RSVP_DST_LEN-1] == s->dst[RSVP_DST_LEN - 1] &&
+-		    protocol == s->protocol &&
+-		    !(s->dpi.mask &
+-		      (*(u32 *)(xprt + s->dpi.offset) ^ s->dpi.key)) &&
+-#if RSVP_DST_LEN == 4
+-		    dst[0] == s->dst[0] &&
+-		    dst[1] == s->dst[1] &&
+-		    dst[2] == s->dst[2] &&
+-#endif
+-		    tunnelid == s->tunnelid) {
+-
+-			for (f = rcu_dereference_bh(s->ht[h2]); f;
+-			     f = rcu_dereference_bh(f->next)) {
+-				if (src[RSVP_DST_LEN-1] == f->src[RSVP_DST_LEN - 1] &&
+-				    !(f->spi.mask & (*(u32 *)(xprt + f->spi.offset) ^ f->spi.key))
+-#if RSVP_DST_LEN == 4
+-				    &&
+-				    src[0] == f->src[0] &&
+-				    src[1] == f->src[1] &&
+-				    src[2] == f->src[2]
+-#endif
+-				    ) {
+-					*res = f->res;
+-					RSVP_APPLY_RESULT();
+-
+-matched:
+-					if (f->tunnelhdr == 0)
+-						return 0;
+-
+-					tunnelid = f->res.classid;
+-					nhptr = (void *)(xprt + f->tunnelhdr - sizeof(*nhptr));
+-					goto restart;
+-				}
+-			}
+-
+-			/* And wildcard bucket... */
+-			for (f = rcu_dereference_bh(s->ht[16]); f;
+-			     f = rcu_dereference_bh(f->next)) {
+-				*res = f->res;
+-				RSVP_APPLY_RESULT();
+-				goto matched;
+-			}
+-			return -1;
+-		}
+-	}
+-	return -1;
+-}
+-
+-static void rsvp_replace(struct tcf_proto *tp, struct rsvp_filter *n, u32 h)
+-{
+-	struct rsvp_head *head = rtnl_dereference(tp->root);
+-	struct rsvp_session *s;
+-	struct rsvp_filter __rcu **ins;
+-	struct rsvp_filter *pins;
+-	unsigned int h1 = h & 0xFF;
+-	unsigned int h2 = (h >> 8) & 0xFF;
+-
+-	for (s = rtnl_dereference(head->ht[h1]); s;
+-	     s = rtnl_dereference(s->next)) {
+-		for (ins = &s->ht[h2], pins = rtnl_dereference(*ins); ;
+-		     ins = &pins->next, pins = rtnl_dereference(*ins)) {
+-			if (pins->handle == h) {
+-				RCU_INIT_POINTER(n->next, pins->next);
+-				rcu_assign_pointer(*ins, n);
+-				return;
+-			}
+-		}
+-	}
+-
+-	/* Something went wrong if we are trying to replace a non-existant
+-	 * node. Mind as well halt instead of silently failing.
+-	 */
+-	BUG_ON(1);
+-}
+-
+-static void *rsvp_get(struct tcf_proto *tp, u32 handle)
+-{
+-	struct rsvp_head *head = rtnl_dereference(tp->root);
+-	struct rsvp_session *s;
+-	struct rsvp_filter *f;
+-	unsigned int h1 = handle & 0xFF;
+-	unsigned int h2 = (handle >> 8) & 0xFF;
+-
+-	if (h2 > 16)
+-		return NULL;
+-
+-	for (s = rtnl_dereference(head->ht[h1]); s;
+-	     s = rtnl_dereference(s->next)) {
+-		for (f = rtnl_dereference(s->ht[h2]); f;
+-		     f = rtnl_dereference(f->next)) {
+-			if (f->handle == handle)
+-				return f;
+-		}
+-	}
+-	return NULL;
+-}
+-
+-static int rsvp_init(struct tcf_proto *tp)
+-{
+-	struct rsvp_head *data;
+-
+-	data = kzalloc(sizeof(struct rsvp_head), GFP_KERNEL);
+-	if (data) {
+-		rcu_assign_pointer(tp->root, data);
+-		return 0;
+-	}
+-	return -ENOBUFS;
+-}
+-
+-static void __rsvp_delete_filter(struct rsvp_filter *f)
+-{
+-	tcf_exts_destroy(&f->exts);
+-	tcf_exts_put_net(&f->exts);
+-	kfree(f);
+-}
+-
+-static void rsvp_delete_filter_work(struct work_struct *work)
+-{
+-	struct rsvp_filter *f = container_of(to_rcu_work(work),
+-					     struct rsvp_filter,
+-					     rwork);
+-	rtnl_lock();
+-	__rsvp_delete_filter(f);
+-	rtnl_unlock();
+-}
+-
+-static void rsvp_delete_filter(struct tcf_proto *tp, struct rsvp_filter *f)
+-{
+-	tcf_unbind_filter(tp, &f->res);
+-	/* all classifiers are required to call tcf_exts_destroy() after rcu
+-	 * grace period, since converted-to-rcu actions are relying on that
+-	 * in cleanup() callback
+-	 */
+-	if (tcf_exts_get_net(&f->exts))
+-		tcf_queue_work(&f->rwork, rsvp_delete_filter_work);
+-	else
+-		__rsvp_delete_filter(f);
+-}
+-
+-static void rsvp_destroy(struct tcf_proto *tp, bool rtnl_held,
+-			 struct netlink_ext_ack *extack)
+-{
+-	struct rsvp_head *data = rtnl_dereference(tp->root);
+-	int h1, h2;
+-
+-	if (data == NULL)
+-		return;
+-
+-	for (h1 = 0; h1 < 256; h1++) {
+-		struct rsvp_session *s;
+-
+-		while ((s = rtnl_dereference(data->ht[h1])) != NULL) {
+-			RCU_INIT_POINTER(data->ht[h1], s->next);
+-
+-			for (h2 = 0; h2 <= 16; h2++) {
+-				struct rsvp_filter *f;
+-
+-				while ((f = rtnl_dereference(s->ht[h2])) != NULL) {
+-					rcu_assign_pointer(s->ht[h2], f->next);
+-					rsvp_delete_filter(tp, f);
+-				}
+-			}
+-			kfree_rcu(s, rcu);
+-		}
+-	}
+-	kfree_rcu(data, rcu);
+-}
+-
+-static int rsvp_delete(struct tcf_proto *tp, void *arg, bool *last,
+-		       bool rtnl_held, struct netlink_ext_ack *extack)
+-{
+-	struct rsvp_head *head = rtnl_dereference(tp->root);
+-	struct rsvp_filter *nfp, *f = arg;
+-	struct rsvp_filter __rcu **fp;
+-	unsigned int h = f->handle;
+-	struct rsvp_session __rcu **sp;
+-	struct rsvp_session *nsp, *s = f->sess;
+-	int i, h1;
+-
+-	fp = &s->ht[(h >> 8) & 0xFF];
+-	for (nfp = rtnl_dereference(*fp); nfp;
+-	     fp = &nfp->next, nfp = rtnl_dereference(*fp)) {
+-		if (nfp == f) {
+-			RCU_INIT_POINTER(*fp, f->next);
+-			rsvp_delete_filter(tp, f);
+-
+-			/* Strip tree */
+-
+-			for (i = 0; i <= 16; i++)
+-				if (s->ht[i])
+-					goto out;
+-
+-			/* OK, session has no flows */
+-			sp = &head->ht[h & 0xFF];
+-			for (nsp = rtnl_dereference(*sp); nsp;
+-			     sp = &nsp->next, nsp = rtnl_dereference(*sp)) {
+-				if (nsp == s) {
+-					RCU_INIT_POINTER(*sp, s->next);
+-					kfree_rcu(s, rcu);
+-					goto out;
+-				}
+-			}
+-
+-			break;
+-		}
+-	}
+-
+-out:
+-	*last = true;
+-	for (h1 = 0; h1 < 256; h1++) {
+-		if (rcu_access_pointer(head->ht[h1])) {
+-			*last = false;
+-			break;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+-static unsigned int gen_handle(struct tcf_proto *tp, unsigned salt)
+-{
+-	struct rsvp_head *data = rtnl_dereference(tp->root);
+-	int i = 0xFFFF;
+-
+-	while (i-- > 0) {
+-		u32 h;
+-
+-		if ((data->hgenerator += 0x10000) == 0)
+-			data->hgenerator = 0x10000;
+-		h = data->hgenerator|salt;
+-		if (!rsvp_get(tp, h))
+-			return h;
+-	}
+-	return 0;
+-}
+-
+-static int tunnel_bts(struct rsvp_head *data)
+-{
+-	int n = data->tgenerator >> 5;
+-	u32 b = 1 << (data->tgenerator & 0x1F);
+-
+-	if (data->tmap[n] & b)
+-		return 0;
+-	data->tmap[n] |= b;
+-	return 1;
+-}
+-
+-static void tunnel_recycle(struct rsvp_head *data)
+-{
+-	struct rsvp_session __rcu **sht = data->ht;
+-	u32 tmap[256/32];
+-	int h1, h2;
+-
+-	memset(tmap, 0, sizeof(tmap));
+-
+-	for (h1 = 0; h1 < 256; h1++) {
+-		struct rsvp_session *s;
+-		for (s = rtnl_dereference(sht[h1]); s;
+-		     s = rtnl_dereference(s->next)) {
+-			for (h2 = 0; h2 <= 16; h2++) {
+-				struct rsvp_filter *f;
+-
+-				for (f = rtnl_dereference(s->ht[h2]); f;
+-				     f = rtnl_dereference(f->next)) {
+-					if (f->tunnelhdr == 0)
+-						continue;
+-					data->tgenerator = f->res.classid;
+-					tunnel_bts(data);
+-				}
+-			}
+-		}
+-	}
+-
+-	memcpy(data->tmap, tmap, sizeof(tmap));
+-}
+-
+-static u32 gen_tunnel(struct rsvp_head *data)
+-{
+-	int i, k;
+-
+-	for (k = 0; k < 2; k++) {
+-		for (i = 255; i > 0; i--) {
+-			if (++data->tgenerator == 0)
+-				data->tgenerator = 1;
+-			if (tunnel_bts(data))
+-				return data->tgenerator;
+-		}
+-		tunnel_recycle(data);
+-	}
+-	return 0;
+-}
+-
+-static const struct nla_policy rsvp_policy[TCA_RSVP_MAX + 1] = {
+-	[TCA_RSVP_CLASSID]	= { .type = NLA_U32 },
+-	[TCA_RSVP_DST]		= { .len = RSVP_DST_LEN * sizeof(u32) },
+-	[TCA_RSVP_SRC]		= { .len = RSVP_DST_LEN * sizeof(u32) },
+-	[TCA_RSVP_PINFO]	= { .len = sizeof(struct tc_rsvp_pinfo) },
+-};
+-
+-static int rsvp_change(struct net *net, struct sk_buff *in_skb,
+-		       struct tcf_proto *tp, unsigned long base,
+-		       u32 handle,
+-		       struct nlattr **tca,
+-		       void **arg, bool ovr, bool rtnl_held,
+-		       struct netlink_ext_ack *extack)
+-{
+-	struct rsvp_head *data = rtnl_dereference(tp->root);
+-	struct rsvp_filter *f, *nfp;
+-	struct rsvp_filter __rcu **fp;
+-	struct rsvp_session *nsp, *s;
+-	struct rsvp_session __rcu **sp;
+-	struct tc_rsvp_pinfo *pinfo = NULL;
+-	struct nlattr *opt = tca[TCA_OPTIONS];
+-	struct nlattr *tb[TCA_RSVP_MAX + 1];
+-	struct tcf_exts e;
+-	unsigned int h1, h2;
+-	__be32 *dst;
+-	int err;
+-
+-	if (opt == NULL)
+-		return handle ? -EINVAL : 0;
+-
+-	err = nla_parse_nested_deprecated(tb, TCA_RSVP_MAX, opt, rsvp_policy,
+-					  NULL);
+-	if (err < 0)
+-		return err;
+-
+-	err = tcf_exts_init(&e, net, TCA_RSVP_ACT, TCA_RSVP_POLICE);
+-	if (err < 0)
+-		return err;
+-	err = tcf_exts_validate(net, tp, tb, tca[TCA_RATE], &e, ovr, true,
+-				extack);
+-	if (err < 0)
+-		goto errout2;
+-
+-	f = *arg;
+-	if (f) {
+-		/* Node exists: adjust only classid */
+-		struct rsvp_filter *n;
+-
+-		if (f->handle != handle && handle)
+-			goto errout2;
+-
+-		n = kmemdup(f, sizeof(*f), GFP_KERNEL);
+-		if (!n) {
+-			err = -ENOMEM;
+-			goto errout2;
+-		}
+-
+-		err = tcf_exts_init(&n->exts, net, TCA_RSVP_ACT,
+-				    TCA_RSVP_POLICE);
+-		if (err < 0) {
+-			kfree(n);
+-			goto errout2;
+-		}
+-
+-		if (tb[TCA_RSVP_CLASSID]) {
+-			n->res.classid = nla_get_u32(tb[TCA_RSVP_CLASSID]);
+-			tcf_bind_filter(tp, &n->res, base);
+-		}
+-
+-		tcf_exts_change(&n->exts, &e);
+-		rsvp_replace(tp, n, handle);
+-		return 0;
+-	}
+-
+-	/* Now more serious part... */
+-	err = -EINVAL;
+-	if (handle)
+-		goto errout2;
+-	if (tb[TCA_RSVP_DST] == NULL)
+-		goto errout2;
+-
+-	err = -ENOBUFS;
+-	f = kzalloc(sizeof(struct rsvp_filter), GFP_KERNEL);
+-	if (f == NULL)
+-		goto errout2;
+-
+-	err = tcf_exts_init(&f->exts, net, TCA_RSVP_ACT, TCA_RSVP_POLICE);
+-	if (err < 0)
+-		goto errout;
+-	h2 = 16;
+-	if (tb[TCA_RSVP_SRC]) {
+-		memcpy(f->src, nla_data(tb[TCA_RSVP_SRC]), sizeof(f->src));
+-		h2 = hash_src(f->src);
+-	}
+-	if (tb[TCA_RSVP_PINFO]) {
+-		pinfo = nla_data(tb[TCA_RSVP_PINFO]);
+-		f->spi = pinfo->spi;
+-		f->tunnelhdr = pinfo->tunnelhdr;
+-	}
+-	if (tb[TCA_RSVP_CLASSID])
+-		f->res.classid = nla_get_u32(tb[TCA_RSVP_CLASSID]);
+-
+-	dst = nla_data(tb[TCA_RSVP_DST]);
+-	h1 = hash_dst(dst, pinfo ? pinfo->protocol : 0, pinfo ? pinfo->tunnelid : 0);
+-
+-	err = -ENOMEM;
+-	if ((f->handle = gen_handle(tp, h1 | (h2<<8))) == 0)
+-		goto errout;
+-
+-	if (f->tunnelhdr) {
+-		err = -EINVAL;
+-		if (f->res.classid > 255)
+-			goto errout;
+-
+-		err = -ENOMEM;
+-		if (f->res.classid == 0 &&
+-		    (f->res.classid = gen_tunnel(data)) == 0)
+-			goto errout;
+-	}
+-
+-	for (sp = &data->ht[h1];
+-	     (s = rtnl_dereference(*sp)) != NULL;
+-	     sp = &s->next) {
+-		if (dst[RSVP_DST_LEN-1] == s->dst[RSVP_DST_LEN-1] &&
+-		    pinfo && pinfo->protocol == s->protocol &&
+-		    memcmp(&pinfo->dpi, &s->dpi, sizeof(s->dpi)) == 0 &&
+-#if RSVP_DST_LEN == 4
+-		    dst[0] == s->dst[0] &&
+-		    dst[1] == s->dst[1] &&
+-		    dst[2] == s->dst[2] &&
+-#endif
+-		    pinfo->tunnelid == s->tunnelid) {
+-
+-insert:
+-			/* OK, we found appropriate session */
+-
+-			fp = &s->ht[h2];
+-
+-			f->sess = s;
+-			if (f->tunnelhdr == 0)
+-				tcf_bind_filter(tp, &f->res, base);
+-
+-			tcf_exts_change(&f->exts, &e);
+-
+-			fp = &s->ht[h2];
+-			for (nfp = rtnl_dereference(*fp); nfp;
+-			     fp = &nfp->next, nfp = rtnl_dereference(*fp)) {
+-				__u32 mask = nfp->spi.mask & f->spi.mask;
+-
+-				if (mask != f->spi.mask)
+-					break;
+-			}
+-			RCU_INIT_POINTER(f->next, nfp);
+-			rcu_assign_pointer(*fp, f);
+-
+-			*arg = f;
+-			return 0;
+-		}
+-	}
+-
+-	/* No session found. Create new one. */
+-
+-	err = -ENOBUFS;
+-	s = kzalloc(sizeof(struct rsvp_session), GFP_KERNEL);
+-	if (s == NULL)
+-		goto errout;
+-	memcpy(s->dst, dst, sizeof(s->dst));
+-
+-	if (pinfo) {
+-		s->dpi = pinfo->dpi;
+-		s->protocol = pinfo->protocol;
+-		s->tunnelid = pinfo->tunnelid;
+-	}
+-	sp = &data->ht[h1];
+-	for (nsp = rtnl_dereference(*sp); nsp;
+-	     sp = &nsp->next, nsp = rtnl_dereference(*sp)) {
+-		if ((nsp->dpi.mask & s->dpi.mask) != s->dpi.mask)
+-			break;
+-	}
+-	RCU_INIT_POINTER(s->next, nsp);
+-	rcu_assign_pointer(*sp, s);
+-
+-	goto insert;
+-
+-errout:
+-	tcf_exts_destroy(&f->exts);
+-	kfree(f);
+-errout2:
+-	tcf_exts_destroy(&e);
+-	return err;
+-}
+-
+-static void rsvp_walk(struct tcf_proto *tp, struct tcf_walker *arg,
+-		      bool rtnl_held)
+-{
+-	struct rsvp_head *head = rtnl_dereference(tp->root);
+-	unsigned int h, h1;
+-
+-	if (arg->stop)
+-		return;
+-
+-	for (h = 0; h < 256; h++) {
+-		struct rsvp_session *s;
+-
+-		for (s = rtnl_dereference(head->ht[h]); s;
+-		     s = rtnl_dereference(s->next)) {
+-			for (h1 = 0; h1 <= 16; h1++) {
+-				struct rsvp_filter *f;
+-
+-				for (f = rtnl_dereference(s->ht[h1]); f;
+-				     f = rtnl_dereference(f->next)) {
+-					if (arg->count < arg->skip) {
+-						arg->count++;
+-						continue;
+-					}
+-					if (arg->fn(tp, f, arg) < 0) {
+-						arg->stop = 1;
+-						return;
+-					}
+-					arg->count++;
+-				}
+-			}
+-		}
+-	}
+-}
+-
+-static int rsvp_dump(struct net *net, struct tcf_proto *tp, void *fh,
+-		     struct sk_buff *skb, struct tcmsg *t, bool rtnl_held)
+-{
+-	struct rsvp_filter *f = fh;
+-	struct rsvp_session *s;
+-	struct nlattr *nest;
+-	struct tc_rsvp_pinfo pinfo;
+-
+-	if (f == NULL)
+-		return skb->len;
+-	s = f->sess;
+-
+-	t->tcm_handle = f->handle;
+-
+-	nest = nla_nest_start_noflag(skb, TCA_OPTIONS);
+-	if (nest == NULL)
+-		goto nla_put_failure;
+-
+-	if (nla_put(skb, TCA_RSVP_DST, sizeof(s->dst), &s->dst))
+-		goto nla_put_failure;
+-	pinfo.dpi = s->dpi;
+-	pinfo.spi = f->spi;
+-	pinfo.protocol = s->protocol;
+-	pinfo.tunnelid = s->tunnelid;
+-	pinfo.tunnelhdr = f->tunnelhdr;
+-	pinfo.pad = 0;
+-	if (nla_put(skb, TCA_RSVP_PINFO, sizeof(pinfo), &pinfo))
+-		goto nla_put_failure;
+-	if (f->res.classid &&
+-	    nla_put_u32(skb, TCA_RSVP_CLASSID, f->res.classid))
+-		goto nla_put_failure;
+-	if (((f->handle >> 8) & 0xFF) != 16 &&
+-	    nla_put(skb, TCA_RSVP_SRC, sizeof(f->src), f->src))
+-		goto nla_put_failure;
+-
+-	if (tcf_exts_dump(skb, &f->exts) < 0)
+-		goto nla_put_failure;
+-
+-	nla_nest_end(skb, nest);
+-
+-	if (tcf_exts_dump_stats(skb, &f->exts) < 0)
+-		goto nla_put_failure;
+-	return skb->len;
+-
+-nla_put_failure:
+-	nla_nest_cancel(skb, nest);
+-	return -1;
+-}
+-
+-static void rsvp_bind_class(void *fh, u32 classid, unsigned long cl, void *q,
+-			    unsigned long base)
+-{
+-	struct rsvp_filter *f = fh;
+-
+-	if (f && f->res.classid == classid) {
+-		if (cl)
+-			__tcf_bind_filter(q, &f->res, base);
+-		else
+-			__tcf_unbind_filter(q, &f->res);
+-	}
+-}
+-
+-static struct tcf_proto_ops RSVP_OPS __read_mostly = {
+-	.kind		=	RSVP_ID,
+-	.classify	=	rsvp_classify,
+-	.init		=	rsvp_init,
+-	.destroy	=	rsvp_destroy,
+-	.get		=	rsvp_get,
+-	.change		=	rsvp_change,
+-	.delete		=	rsvp_delete,
+-	.walk		=	rsvp_walk,
+-	.dump		=	rsvp_dump,
+-	.bind_class	=	rsvp_bind_class,
+-	.owner		=	THIS_MODULE,
+-};
+-
+-static int __init init_rsvp(void)
+-{
+-	return register_tcf_proto_ops(&RSVP_OPS);
+-}
+-
+-static void __exit exit_rsvp(void)
+-{
+-	unregister_tcf_proto_ops(&RSVP_OPS);
+-}
+-
+-module_init(init_rsvp)
+-module_exit(exit_rsvp)
+diff --git a/net/sched/cls_rsvp6.c b/net/sched/cls_rsvp6.c
+deleted file mode 100644
+index 64078846000ef..0000000000000
+--- a/net/sched/cls_rsvp6.c
++++ /dev/null
+@@ -1,24 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-or-later
+-/*
+- * net/sched/cls_rsvp6.c	Special RSVP packet classifier for IPv6.
+- *
+- * Authors:	Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru>
+- */
+-
+-#include <linux/module.h>
+-#include <linux/types.h>
+-#include <linux/kernel.h>
+-#include <linux/string.h>
+-#include <linux/errno.h>
+-#include <linux/ipv6.h>
+-#include <linux/skbuff.h>
+-#include <net/act_api.h>
+-#include <net/pkt_cls.h>
+-#include <net/netlink.h>
+-
+-#define RSVP_DST_LEN	4
+-#define RSVP_ID		"rsvp6"
+-#define RSVP_OPS	cls_rsvp6_ops
+-
+-#include "cls_rsvp.h"
+-MODULE_LICENSE("GPL");
+diff --git a/samples/hw_breakpoint/data_breakpoint.c b/samples/hw_breakpoint/data_breakpoint.c
+index 418c46fe5ffc3..b99322f188e59 100644
+--- a/samples/hw_breakpoint/data_breakpoint.c
++++ b/samples/hw_breakpoint/data_breakpoint.c
+@@ -70,7 +70,9 @@ fail:
+ static void __exit hw_break_module_exit(void)
+ {
+ 	unregister_wide_hw_breakpoint(sample_hbp);
+-	symbol_put(ksym_name);
++#ifdef CONFIG_MODULE_UNLOAD
++	__symbol_put(ksym_name);
++#endif
+ 	printk(KERN_INFO "HW Breakpoint for %s write uninstalled\n", ksym_name);
+ }
+ 
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index f96e70c85f84a..801c89a3a1b6f 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -368,6 +368,14 @@ static const struct config_entry config_table[] = {
+ 	},
+ #endif
+ 
++/* Lunar Lake */
++#if IS_ENABLED(CONFIG_SND_SOC_SOF_LUNARLAKE)
++	/* Lunarlake-P */
++	{
++		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++		.device = PCI_DEVICE_ID_INTEL_HDA_LNL_P,
++	},
++#endif
+ };
+ 
+ static const struct config_entry *snd_intel_dsp_find_config
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 3e7706c251e9e..89905b4e93091 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -824,33 +824,36 @@ else
+   endif
+ endif
+ 
+-ifeq ($(feature-libbfd), 1)
+-  EXTLIBS += -lbfd -lopcodes
+-else
+-  # we are on a system that requires -liberty and (maybe) -lz
+-  # to link against -lbfd; test each case individually here
+-
+-  # call all detections now so we get correct
+-  # status in VF output
+-  $(call feature_check,libbfd-liberty)
+-  $(call feature_check,libbfd-liberty-z)
+ 
+-  ifeq ($(feature-libbfd-liberty), 1)
+-    EXTLIBS += -lbfd -lopcodes -liberty
+-    FEATURE_CHECK_LDFLAGS-disassembler-four-args += -liberty -ldl
++ifndef NO_LIBBFD
++  ifeq ($(feature-libbfd), 1)
++    EXTLIBS += -lbfd -lopcodes
+   else
+-    ifeq ($(feature-libbfd-liberty-z), 1)
+-      EXTLIBS += -lbfd -lopcodes -liberty -lz
+-      FEATURE_CHECK_LDFLAGS-disassembler-four-args += -liberty -lz -ldl
++    # we are on a system that requires -liberty and (maybe) -lz
++    # to link against -lbfd; test each case individually here
++
++    # call all detections now so we get correct
++    # status in VF output
++    $(call feature_check,libbfd-liberty)
++    $(call feature_check,libbfd-liberty-z)
++
++    ifeq ($(feature-libbfd-liberty), 1)
++      EXTLIBS += -lbfd -lopcodes -liberty
++      FEATURE_CHECK_LDFLAGS-disassembler-four-args += -liberty -ldl
++    else
++      ifeq ($(feature-libbfd-liberty-z), 1)
++        EXTLIBS += -lbfd -lopcodes -liberty -lz
++        FEATURE_CHECK_LDFLAGS-disassembler-four-args += -liberty -lz -ldl
++      endif
+     endif
++    $(call feature_check,disassembler-four-args)
+   endif
+-  $(call feature_check,disassembler-four-args)
+-endif
+ 
+-ifeq ($(feature-libbfd-buildid), 1)
+-  CFLAGS += -DHAVE_LIBBFD_BUILDID_SUPPORT
+-else
+-  msg := $(warning Old version of libbfd/binutils things like PE executable profiling will not be available);
++  ifeq ($(feature-libbfd-buildid), 1)
++    CFLAGS += -DHAVE_LIBBFD_BUILDID_SUPPORT
++  else
++    msg := $(warning Old version of libbfd/binutils things like PE executable profiling will not be available);
++  endif
+ endif
+ 
+ ifdef NO_DEMANGLE
+diff --git a/tools/perf/pmu-events/Build b/tools/perf/pmu-events/Build
+index 215ba30b85343..a055dee6a46af 100644
+--- a/tools/perf/pmu-events/Build
++++ b/tools/perf/pmu-events/Build
+@@ -6,10 +6,13 @@ pmu-events-y	+= pmu-events.o
+ JDIR		=  pmu-events/arch/$(SRCARCH)
+ JSON		=  $(shell [ -d $(JDIR) ] &&				\
+ 			find $(JDIR) -name '*.json' -o -name 'mapfile.csv')
++JDIR_TEST	=  pmu-events/arch/test
++JSON_TEST	=  $(shell [ -d $(JDIR_TEST) ] &&			\
++			find $(JDIR_TEST) -name '*.json')
+ 
+ #
+ # Locate/process JSON files in pmu-events/arch/
+ # directory and create tables in pmu-events.c.
+ #
+-$(OUTPUT)pmu-events/pmu-events.c: $(JSON) $(JEVENTS)
++$(OUTPUT)pmu-events/pmu-events.c: $(JSON) $(JSON_TEST) $(JEVENTS)
+ 	$(Q)$(call echo-cmd,gen)$(JEVENTS) $(SRCARCH) pmu-events/arch $(OUTPUT)pmu-events/pmu-events.c $(V)
+diff --git a/tools/testing/selftests/ftrace/ftracetest b/tools/testing/selftests/ftrace/ftracetest
+index 8ec1922e974eb..55314cd197ab9 100755
+--- a/tools/testing/selftests/ftrace/ftracetest
++++ b/tools/testing/selftests/ftrace/ftracetest
+@@ -30,6 +30,9 @@ err_ret=1
+ # kselftest skip code is 4
+ err_skip=4
+ 
++# umount required
++UMOUNT_DIR=""
++
+ # cgroup RT scheduling prevents chrt commands from succeeding, which
+ # induces failures in test wakeup tests.  Disable for the duration of
+ # the tests.
+@@ -44,6 +47,9 @@ setup() {
+ 
+ cleanup() {
+   echo $sched_rt_runtime_orig > $sched_rt_runtime
++  if [ -n "${UMOUNT_DIR}" ]; then
++    umount ${UMOUNT_DIR} ||:
++  fi
+ }
+ 
+ errexit() { # message
+@@ -155,11 +161,13 @@ if [ -z "$TRACING_DIR" ]; then
+ 	    mount -t tracefs nodev /sys/kernel/tracing ||
+ 	      errexit "Failed to mount /sys/kernel/tracing"
+ 	    TRACING_DIR="/sys/kernel/tracing"
++	    UMOUNT_DIR=${TRACING_DIR}
+ 	# If debugfs exists, then so does /sys/kernel/debug
+ 	elif [ -d "/sys/kernel/debug" ]; then
+ 	    mount -t debugfs nodev /sys/kernel/debug ||
+ 	      errexit "Failed to mount /sys/kernel/debug"
+ 	    TRACING_DIR="/sys/kernel/debug/tracing"
++	    UMOUNT_DIR=${TRACING_DIR}
+ 	else
+ 	    err_ret=$err_skip
+ 	    errexit "debugfs and tracefs are not configured in this kernel"


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-10-05 14:24 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-10-05 14:24 UTC (permalink / raw
  To: gentoo-commits

commit:     2bc17254a112bff562e05ee0d9504683723c8d08
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  5 14:04:01 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct  5 14:23:59 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2bc17254

select BLK_DEV_BSG if SCSI as it depends on it.

Thanks, Ancient.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index ab910775..435a76ea 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -122,7 +122,7 @@
 +	depends on GENTOO_LINUX && GENTOO_LINUX_UDEV
 +
 +	select AUTOFS_FS
-+	select BLK_DEV_BSG
++	select BLK_DEV_BSG if SCSI
 +	select BPF_SYSCALL
 +	select CGROUP_BPF
 +	select CGROUPS


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-10-10 20:34 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-10-10 20:34 UTC (permalink / raw
  To: gentoo-commits

commit:     554cd6a818af69e465c547ac1ea9f6d8a3b76759
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Oct 10 20:34:02 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Oct 10 20:34:02 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=554cd6a8

Linux patch 5.10.198

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1197_linux-5.10.198.patch | 13404 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 13408 insertions(+)

diff --git a/0000_README b/0000_README
index 7b7d5684..8a76a944 100644
--- a/0000_README
+++ b/0000_README
@@ -831,6 +831,10 @@ Patch:  1196_linux-5.10.197.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.197
 
+Patch:  1197_linux-5.10.198.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.198
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1197_linux-5.10.198.patch b/1197_linux-5.10.198.patch
new file mode 100644
index 00000000..4472cdaf
--- /dev/null
+++ b/1197_linux-5.10.198.patch
@@ -0,0 +1,13404 @@
+diff --git a/Makefile b/Makefile
+index 12986f3532a98..470e11dcf2a3e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 197
++SUBLEVEL = 198
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/am335x-guardian.dts b/arch/arm/boot/dts/am335x-guardian.dts
+index 1918766c1f809..9594276acf9d2 100644
+--- a/arch/arm/boot/dts/am335x-guardian.dts
++++ b/arch/arm/boot/dts/am335x-guardian.dts
+@@ -100,11 +100,12 @@
+ 
+ 	};
+ 
+-	pwm7: dmtimer-pwm {
++	guardian_beeper: pwm-7 {
+ 		compatible = "ti,omap-dmtimer-pwm";
++		#pwm-cells = <3>;
+ 		ti,timers = <&timer7>;
+ 		pinctrl-names = "default";
+-		pinctrl-0 = <&dmtimer7_pins>;
++		pinctrl-0 = <&guardian_beeper_pins>;
+ 		ti,clock-source = <0x01>;
+ 	};
+ 
+@@ -343,9 +344,9 @@
+ 		>;
+ 	};
+ 
+-	dmtimer7_pins: pinmux_dmtimer7_pins {
++	guardian_beeper_pins: pinmux_dmtimer7_pins {
+ 		pinctrl-single,pins = <
+-			AM33XX_IOPAD(0x968, PIN_OUTPUT | MUX_MODE5)
++			AM33XX_IOPAD(0x968, PIN_OUTPUT | MUX_MODE5) /* (E18) timer7 */
+ 		>;
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/am3517-evm.dts b/arch/arm/boot/dts/am3517-evm.dts
+index c8b80f156ec98..9cc1ae36c4204 100644
+--- a/arch/arm/boot/dts/am3517-evm.dts
++++ b/arch/arm/boot/dts/am3517-evm.dts
+@@ -150,7 +150,7 @@
+ 		enable-gpios = <&gpio6 22 GPIO_ACTIVE_HIGH>; /* gpio_182 */
+ 	};
+ 
+-	pwm11: dmtimer-pwm@11 {
++	pwm11: pwm-11 {
+ 		compatible = "ti,omap-dmtimer-pwm";
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pwm_pins>;
+diff --git a/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi b/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi
+index 533a47bc4a531..1386a5e63eff7 100644
+--- a/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi
++++ b/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi
+@@ -59,7 +59,7 @@
+ 		};
+ 	};
+ 
+-	pwm10: dmtimer-pwm {
++	pwm10: pwm-10 {
+ 		compatible = "ti,omap-dmtimer-pwm";
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pwm_pins>;
+diff --git a/arch/arm/boot/dts/motorola-mapphone-common.dtsi b/arch/arm/boot/dts/motorola-mapphone-common.dtsi
+index 5f8f77cfbe59f..8cb26b924d3ca 100644
+--- a/arch/arm/boot/dts/motorola-mapphone-common.dtsi
++++ b/arch/arm/boot/dts/motorola-mapphone-common.dtsi
+@@ -156,7 +156,7 @@
+ 		dais = <&mcbsp2_port>, <&mcbsp3_port>;
+ 	};
+ 
+-	pwm8: dmtimer-pwm-8 {
++	pwm8: pwm-8 {
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&vibrator_direction_pin>;
+ 
+@@ -166,7 +166,7 @@
+ 		ti,clock-source = <0x01>;
+ 	};
+ 
+-	pwm9: dmtimer-pwm-9 {
++	pwm9: pwm-9 {
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&vibrator_enable_pin>;
+ 
+@@ -192,6 +192,29 @@
+ 	};
+ };
+ 
++&cpu_thermal {
++	polling-delay = <10000>; /* milliseconds */
++};
++
++&cpu_alert0 {
++        temperature = <80000>; /* millicelsius */
++};
++
++&cpu0 {
++        /*
++	 * Note that the 1.2GiHz mode is enabled for all SoC variants for
++	 * the Motorola Android Linux v3.0.8 based kernel.
++	 */
++        operating-points = <
++	        /* kHz    uV */
++	        300000  1025000
++	        600000  1200000
++	        800000  1313000
++	        1008000 1375000
++		1200000 1375000
++        >;
++};
++
+ &dss {
+ 	status = "okay";
+ };
+@@ -384,7 +407,7 @@
+ 	#address-cells = <1>;
+ 	#size-cells = <0>;
+ 	wlcore: wlcore@2 {
+-		compatible = "ti,wl1285", "ti,wl1283";
++		compatible = "ti,wl1285";
+ 		reg = <2>;
+ 		/* gpio_100 with gpmc_wait2 pad as wakeirq */
+ 		interrupts-extended = <&gpio4 4 IRQ_TYPE_LEVEL_HIGH>,
+@@ -716,12 +739,12 @@
+ /* Configure pwm clock source for timers 8 & 9 */
+ &timer8 {
+ 	assigned-clocks = <&abe_clkctrl OMAP4_TIMER8_CLKCTRL 24>;
+-	assigned-clock-parents = <&sys_clkin_ck>;
++	assigned-clock-parents = <&sys_32k_ck>;
+ };
+ 
+ &timer9 {
+ 	assigned-clocks = <&l4_per_clkctrl OMAP4_TIMER9_CLKCTRL 24>;
+-	assigned-clock-parents = <&sys_clkin_ck>;
++	assigned-clock-parents = <&sys_32k_ck>;
+ };
+ 
+ /*
+diff --git a/arch/arm/boot/dts/omap-gpmc-smsc911x.dtsi b/arch/arm/boot/dts/omap-gpmc-smsc911x.dtsi
+index ded7e8fec9eba..9cf52650f0731 100644
+--- a/arch/arm/boot/dts/omap-gpmc-smsc911x.dtsi
++++ b/arch/arm/boot/dts/omap-gpmc-smsc911x.dtsi
+@@ -8,9 +8,9 @@
+ 
+ / {
+ 	vddvario: regulator-vddvario {
+-		  compatible = "regulator-fixed";
+-		  regulator-name = "vddvario";
+-		  regulator-always-on;
++		compatible = "regulator-fixed";
++		regulator-name = "vddvario";
++		regulator-always-on;
+ 	};
+ 
+ 	vdd33a: regulator-vdd33a {
+diff --git a/arch/arm/boot/dts/omap-gpmc-smsc9221.dtsi b/arch/arm/boot/dts/omap-gpmc-smsc9221.dtsi
+index e7534fe9c53cf..bc8961f3690f0 100644
+--- a/arch/arm/boot/dts/omap-gpmc-smsc9221.dtsi
++++ b/arch/arm/boot/dts/omap-gpmc-smsc9221.dtsi
+@@ -12,9 +12,9 @@
+ 
+ / {
+ 	vddvario: regulator-vddvario {
+-		  compatible = "regulator-fixed";
+-		  regulator-name = "vddvario";
+-		  regulator-always-on;
++		compatible = "regulator-fixed";
++		regulator-name = "vddvario";
++		regulator-always-on;
+ 	};
+ 
+ 	vdd33a: regulator-vdd33a {
+diff --git a/arch/arm/boot/dts/omap3-cm-t3517.dts b/arch/arm/boot/dts/omap3-cm-t3517.dts
+index 3b8349094baa6..f25c0a84a190c 100644
+--- a/arch/arm/boot/dts/omap3-cm-t3517.dts
++++ b/arch/arm/boot/dts/omap3-cm-t3517.dts
+@@ -11,12 +11,12 @@
+ 	model = "CompuLab CM-T3517";
+ 	compatible = "compulab,omap3-cm-t3517", "ti,am3517", "ti,omap3";
+ 
+-	vmmc:  regulator-vmmc {
+-                compatible = "regulator-fixed";
+-                regulator-name = "vmmc";
+-                regulator-min-microvolt = <3300000>;
+-                regulator-max-microvolt = <3300000>;
+-        };
++	vmmc: regulator-vmmc {
++		compatible = "regulator-fixed";
++		regulator-name = "vmmc";
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++	};
+ 
+ 	wl12xx_vmmc2: wl12xx_vmmc2 {
+ 		compatible = "regulator-fixed";
+diff --git a/arch/arm/boot/dts/omap3-cpu-thermal.dtsi b/arch/arm/boot/dts/omap3-cpu-thermal.dtsi
+index 1ed8378593745..51e6c2d42be28 100644
+--- a/arch/arm/boot/dts/omap3-cpu-thermal.dtsi
++++ b/arch/arm/boot/dts/omap3-cpu-thermal.dtsi
+@@ -15,8 +15,7 @@ cpu_thermal: cpu_thermal {
+ 	polling-delay = <1000>; /* milliseconds */
+ 	coefficients = <0 20000>;
+ 
+-			/* sensor       ID */
+-	thermal-sensors = <&bandgap     0>;
++	thermal-sensors = <&bandgap>;
+ 
+ 	cpu_trips: trips {
+ 		cpu_alert0: cpu_alert {
+diff --git a/arch/arm/boot/dts/omap3-gta04.dtsi b/arch/arm/boot/dts/omap3-gta04.dtsi
+index e61e5ddbf2027..68e56b50652a9 100644
+--- a/arch/arm/boot/dts/omap3-gta04.dtsi
++++ b/arch/arm/boot/dts/omap3-gta04.dtsi
+@@ -147,7 +147,7 @@
+ 		pinctrl-0 = <&backlight_pins>;
+ 	};
+ 
+-	pwm11: dmtimer-pwm {
++	pwm11: pwm-11 {
+ 		compatible = "ti,omap-dmtimer-pwm";
+ 		ti,timers = <&timer11>;
+ 		#pwm-cells = <3>;
+@@ -332,7 +332,7 @@
+ 			OMAP3_CORE1_IOPAD(0x2108, PIN_OUTPUT | MUX_MODE0)   /* dss_data22.dss_data22 */
+ 			OMAP3_CORE1_IOPAD(0x210a, PIN_OUTPUT | MUX_MODE0)   /* dss_data23.dss_data23 */
+ 		>;
+-       };
++	};
+ 
+ 	gps_pins: pinmux_gps_pins {
+ 		pinctrl-single,pins = <
+@@ -866,8 +866,8 @@
+ };
+ 
+ &hdqw1w {
+-        pinctrl-names = "default";
+-        pinctrl-0 = <&hdq_pins>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&hdq_pins>;
+ };
+ 
+ /* image signal processor within OMAP3 SoC */
+diff --git a/arch/arm/boot/dts/omap3-ldp.dts b/arch/arm/boot/dts/omap3-ldp.dts
+index 9c6a927245904..b898e2f6f41dc 100644
+--- a/arch/arm/boot/dts/omap3-ldp.dts
++++ b/arch/arm/boot/dts/omap3-ldp.dts
+@@ -301,5 +301,5 @@
+ 
+ &vaux1 {
+ 	/* Needed for ads7846 */
+-        regulator-name = "vcc";
++	regulator-name = "vcc";
+ };
+diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts
+index d40c3d2c4914e..7dafd69b7d359 100644
+--- a/arch/arm/boot/dts/omap3-n900.dts
++++ b/arch/arm/boot/dts/omap3-n900.dts
+@@ -156,7 +156,7 @@
+ 		io-channel-names = "temp", "bsi", "vbat";
+ 	};
+ 
+-	pwm9: dmtimer-pwm {
++	pwm9: pwm-9 {
+ 		compatible = "ti,omap-dmtimer-pwm";
+ 		#pwm-cells = <3>;
+ 		ti,timers = <&timer9>;
+@@ -236,27 +236,27 @@
+ 		pinctrl-single,pins = <
+ 
+ 			/* address lines */
+-                        OMAP3_CORE1_IOPAD(0x207a, PIN_OUTPUT | MUX_MODE0)       /* gpmc_a1.gpmc_a1 */
+-                        OMAP3_CORE1_IOPAD(0x207c, PIN_OUTPUT | MUX_MODE0)       /* gpmc_a2.gpmc_a2 */
+-                        OMAP3_CORE1_IOPAD(0x207e, PIN_OUTPUT | MUX_MODE0)       /* gpmc_a3.gpmc_a3 */
++			OMAP3_CORE1_IOPAD(0x207a, PIN_OUTPUT | MUX_MODE0)       /* gpmc_a1.gpmc_a1 */
++			OMAP3_CORE1_IOPAD(0x207c, PIN_OUTPUT | MUX_MODE0)       /* gpmc_a2.gpmc_a2 */
++			OMAP3_CORE1_IOPAD(0x207e, PIN_OUTPUT | MUX_MODE0)       /* gpmc_a3.gpmc_a3 */
+ 
+ 			/* data lines, gpmc_d0..d7 not muxable according to TRM */
+-                        OMAP3_CORE1_IOPAD(0x209e, PIN_INPUT | MUX_MODE0)        /* gpmc_d8.gpmc_d8 */
+-                        OMAP3_CORE1_IOPAD(0x20a0, PIN_INPUT | MUX_MODE0)        /* gpmc_d9.gpmc_d9 */
+-                        OMAP3_CORE1_IOPAD(0x20a2, PIN_INPUT | MUX_MODE0)        /* gpmc_d10.gpmc_d10 */
+-                        OMAP3_CORE1_IOPAD(0x20a4, PIN_INPUT | MUX_MODE0)        /* gpmc_d11.gpmc_d11 */
+-                        OMAP3_CORE1_IOPAD(0x20a6, PIN_INPUT | MUX_MODE0)        /* gpmc_d12.gpmc_d12 */
+-                        OMAP3_CORE1_IOPAD(0x20a8, PIN_INPUT | MUX_MODE0)        /* gpmc_d13.gpmc_d13 */
+-                        OMAP3_CORE1_IOPAD(0x20aa, PIN_INPUT | MUX_MODE0)        /* gpmc_d14.gpmc_d14 */
+-                        OMAP3_CORE1_IOPAD(0x20ac, PIN_INPUT | MUX_MODE0)        /* gpmc_d15.gpmc_d15 */
++			OMAP3_CORE1_IOPAD(0x209e, PIN_INPUT | MUX_MODE0)        /* gpmc_d8.gpmc_d8 */
++			OMAP3_CORE1_IOPAD(0x20a0, PIN_INPUT | MUX_MODE0)        /* gpmc_d9.gpmc_d9 */
++			OMAP3_CORE1_IOPAD(0x20a2, PIN_INPUT | MUX_MODE0)        /* gpmc_d10.gpmc_d10 */
++			OMAP3_CORE1_IOPAD(0x20a4, PIN_INPUT | MUX_MODE0)        /* gpmc_d11.gpmc_d11 */
++			OMAP3_CORE1_IOPAD(0x20a6, PIN_INPUT | MUX_MODE0)        /* gpmc_d12.gpmc_d12 */
++			OMAP3_CORE1_IOPAD(0x20a8, PIN_INPUT | MUX_MODE0)        /* gpmc_d13.gpmc_d13 */
++			OMAP3_CORE1_IOPAD(0x20aa, PIN_INPUT | MUX_MODE0)        /* gpmc_d14.gpmc_d14 */
++			OMAP3_CORE1_IOPAD(0x20ac, PIN_INPUT | MUX_MODE0)        /* gpmc_d15.gpmc_d15 */
+ 
+ 			/*
+ 			 * gpmc_ncs0, gpmc_nadv_ale, gpmc_noe, gpmc_nwe, gpmc_wait0 not muxable
+ 			 * according to TRM. OneNAND seems to require PIN_INPUT on clock.
+ 			 */
+-                        OMAP3_CORE1_IOPAD(0x20b0, PIN_OUTPUT | MUX_MODE0)       /* gpmc_ncs1.gpmc_ncs1 */
+-                        OMAP3_CORE1_IOPAD(0x20be, PIN_INPUT | MUX_MODE0)        /* gpmc_clk.gpmc_clk */
+-		>;
++			OMAP3_CORE1_IOPAD(0x20b0, PIN_OUTPUT | MUX_MODE0)       /* gpmc_ncs1.gpmc_ncs1 */
++			OMAP3_CORE1_IOPAD(0x20be, PIN_INPUT | MUX_MODE0)        /* gpmc_clk.gpmc_clk */
++			>;
+ 	};
+ 
+ 	i2c1_pins: pinmux_i2c1_pins {
+@@ -738,12 +738,12 @@
+ 
+ 	si4713: si4713@63 {
+ 		compatible = "silabs,si4713";
+-                reg = <0x63>;
++		reg = <0x63>;
+ 
+-                interrupts-extended = <&gpio2 21 IRQ_TYPE_EDGE_FALLING>; /* 53 */
+-                reset-gpios = <&gpio6 3 GPIO_ACTIVE_HIGH>; /* 163 */
+-                vio-supply = <&vio>;
+-                vdd-supply = <&vaux1>;
++		interrupts-extended = <&gpio2 21 IRQ_TYPE_EDGE_FALLING>; /* 53 */
++		reset-gpios = <&gpio6 3 GPIO_ACTIVE_HIGH>; /* 163 */
++		vio-supply = <&vio>;
++		vdd-supply = <&vaux1>;
+ 	};
+ 
+ 	bq24150a: bq24150a@6b {
+diff --git a/arch/arm/boot/dts/omap3-zoom3.dts b/arch/arm/boot/dts/omap3-zoom3.dts
+index 0482676d18306..ce58b1f208e81 100644
+--- a/arch/arm/boot/dts/omap3-zoom3.dts
++++ b/arch/arm/boot/dts/omap3-zoom3.dts
+@@ -23,9 +23,9 @@
+ 	};
+ 
+ 	vddvario: regulator-vddvario {
+-		  compatible = "regulator-fixed";
+-		  regulator-name = "vddvario";
+-		  regulator-always-on;
++		compatible = "regulator-fixed";
++		regulator-name = "vddvario";
++		regulator-always-on;
+ 	};
+ 
+ 	vdd33a: regulator-vdd33a {
+@@ -84,28 +84,28 @@
+ 
+ 	uart1_pins: pinmux_uart1_pins {
+ 		pinctrl-single,pins = <
+-                        OMAP3_CORE1_IOPAD(0x2180, PIN_INPUT | MUX_MODE0)		/* uart1_cts.uart1_cts */
+-                        OMAP3_CORE1_IOPAD(0x217e, PIN_OUTPUT | MUX_MODE0)		/* uart1_rts.uart1_rts */
+-                        OMAP3_CORE1_IOPAD(0x2182, WAKEUP_EN | PIN_INPUT | MUX_MODE0) /* uart1_rx.uart1_rx */
+-                        OMAP3_CORE1_IOPAD(0x217c, PIN_OUTPUT | MUX_MODE0)		/* uart1_tx.uart1_tx */
++			OMAP3_CORE1_IOPAD(0x2180, PIN_INPUT | MUX_MODE0)		/* uart1_cts.uart1_cts */
++			OMAP3_CORE1_IOPAD(0x217e, PIN_OUTPUT | MUX_MODE0)		/* uart1_rts.uart1_rts */
++			OMAP3_CORE1_IOPAD(0x2182, WAKEUP_EN | PIN_INPUT | MUX_MODE0) /* uart1_rx.uart1_rx */
++			OMAP3_CORE1_IOPAD(0x217c, PIN_OUTPUT | MUX_MODE0)		/* uart1_tx.uart1_tx */
+ 		>;
+ 	};
+ 
+ 	uart2_pins: pinmux_uart2_pins {
+ 		pinctrl-single,pins = <
+-                        OMAP3_CORE1_IOPAD(0x2174, PIN_INPUT_PULLUP | MUX_MODE0)	/* uart2_cts.uart2_cts */
+-                        OMAP3_CORE1_IOPAD(0x2176, PIN_OUTPUT | MUX_MODE0)		/* uart2_rts.uart2_rts */
+-                        OMAP3_CORE1_IOPAD(0x217a, PIN_INPUT | MUX_MODE0)		/* uart2_rx.uart2_rx */
+-                        OMAP3_CORE1_IOPAD(0x2178, PIN_OUTPUT | MUX_MODE0)		/* uart2_tx.uart2_tx */
++			OMAP3_CORE1_IOPAD(0x2174, PIN_INPUT_PULLUP | MUX_MODE0)	/* uart2_cts.uart2_cts */
++			OMAP3_CORE1_IOPAD(0x2176, PIN_OUTPUT | MUX_MODE0)		/* uart2_rts.uart2_rts */
++			OMAP3_CORE1_IOPAD(0x217a, PIN_INPUT | MUX_MODE0)		/* uart2_rx.uart2_rx */
++			OMAP3_CORE1_IOPAD(0x2178, PIN_OUTPUT | MUX_MODE0)		/* uart2_tx.uart2_tx */
+ 		>;
+ 	};
+ 
+ 	uart3_pins: pinmux_uart3_pins {
+ 		pinctrl-single,pins = <
+-                        OMAP3_CORE1_IOPAD(0x219a, PIN_INPUT_PULLDOWN | MUX_MODE0)	/* uart3_cts_rctx.uart3_cts_rctx */
+-                        OMAP3_CORE1_IOPAD(0x219c, PIN_OUTPUT | MUX_MODE0)		/* uart3_rts_sd.uart3_rts_sd */
+-                        OMAP3_CORE1_IOPAD(0x219e, PIN_INPUT | MUX_MODE0)		/* uart3_rx_irrx.uart3_rx_irrx */
+-                        OMAP3_CORE1_IOPAD(0x21a0, PIN_OUTPUT | MUX_MODE0)		/* uart3_tx_irtx.uart3_tx_irtx */
++			OMAP3_CORE1_IOPAD(0x219a, PIN_INPUT_PULLDOWN | MUX_MODE0)	/* uart3_cts_rctx.uart3_cts_rctx */
++			OMAP3_CORE1_IOPAD(0x219c, PIN_OUTPUT | MUX_MODE0)		/* uart3_rts_sd.uart3_rts_sd */
++			OMAP3_CORE1_IOPAD(0x219e, PIN_INPUT | MUX_MODE0)		/* uart3_rx_irrx.uart3_rx_irrx */
++			OMAP3_CORE1_IOPAD(0x21a0, PIN_OUTPUT | MUX_MODE0)		/* uart3_tx_irtx.uart3_tx_irtx */
+ 		>;
+ 	};
+ 
+@@ -205,22 +205,22 @@
+ };
+ 
+ &uart1 {
+-       pinctrl-names = "default";
+-       pinctrl-0 = <&uart1_pins>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&uart1_pins>;
+ };
+ 
+ &uart2 {
+-       pinctrl-names = "default";
+-       pinctrl-0 = <&uart2_pins>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&uart2_pins>;
+ };
+ 
+ &uart3 {
+-       pinctrl-names = "default";
+-       pinctrl-0 = <&uart3_pins>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&uart3_pins>;
+ };
+ 
+ &uart4 {
+-       status = "disabled";
++	status = "disabled";
+ };
+ 
+ &usb_otg_hs {
+diff --git a/arch/arm/boot/dts/omap4-cpu-thermal.dtsi b/arch/arm/boot/dts/omap4-cpu-thermal.dtsi
+index 03d054b2bf9af..4b3afe2980629 100644
+--- a/arch/arm/boot/dts/omap4-cpu-thermal.dtsi
++++ b/arch/arm/boot/dts/omap4-cpu-thermal.dtsi
+@@ -15,21 +15,24 @@ cpu_thermal: cpu_thermal {
+ 	polling-delay-passive = <250>; /* milliseconds */
+ 	polling-delay = <1000>; /* milliseconds */
+ 
+-			/* sensor       ID */
+-        thermal-sensors = <&bandgap     0>;
++	/*
++	 * See 44xx files for single sensor addressing, omap5 and dra7 need
++	 * also sensor ID for addressing.
++	 */
++	thermal-sensors = <&bandgap     0>;
+ 
+ 	cpu_trips: trips {
+-                cpu_alert0: cpu_alert {
+-                        temperature = <100000>; /* millicelsius */
+-                        hysteresis = <2000>; /* millicelsius */
+-                        type = "passive";
+-                };
+-                cpu_crit: cpu_crit {
+-                        temperature = <125000>; /* millicelsius */
+-                        hysteresis = <2000>; /* millicelsius */
+-                        type = "critical";
+-                };
+-        };
++		cpu_alert0: cpu_alert {
++			temperature = <100000>; /* millicelsius */
++			hysteresis = <2000>; /* millicelsius */
++			type = "passive";
++		};
++		cpu_crit: cpu_crit {
++			temperature = <125000>; /* millicelsius */
++			hysteresis = <2000>; /* millicelsius */
++			type = "critical";
++		};
++	};
+ 
+ 	cpu_cooling_maps: cooling-maps {
+ 		map0 {
+diff --git a/arch/arm/boot/dts/omap443x.dtsi b/arch/arm/boot/dts/omap443x.dtsi
+index dd8ef58cbaed4..cce39dce1428a 100644
+--- a/arch/arm/boot/dts/omap443x.dtsi
++++ b/arch/arm/boot/dts/omap443x.dtsi
+@@ -72,6 +72,7 @@
+ };
+ 
+ &cpu_thermal {
++	thermal-sensors = <&bandgap>;
+ 	coefficients = <0 20000>;
+ };
+ 
+diff --git a/arch/arm/boot/dts/omap4460.dtsi b/arch/arm/boot/dts/omap4460.dtsi
+index 2d3e54901b6ea..d62e2bacca181 100644
+--- a/arch/arm/boot/dts/omap4460.dtsi
++++ b/arch/arm/boot/dts/omap4460.dtsi
+@@ -89,6 +89,7 @@
+ };
+ 
+ &cpu_thermal {
++	thermal-sensors = <&bandgap>;
+ 	coefficients = <348 (-9301)>;
+ };
+ 
+diff --git a/arch/arm/boot/dts/omap5-cm-t54.dts b/arch/arm/boot/dts/omap5-cm-t54.dts
+index e62ea8b6d53fd..af288d63a26a4 100644
+--- a/arch/arm/boot/dts/omap5-cm-t54.dts
++++ b/arch/arm/boot/dts/omap5-cm-t54.dts
+@@ -84,36 +84,36 @@
+ 	};
+ 
+ 	lcd0: display {
+-                compatible = "startek,startek-kd050c", "panel-dpi";
+-                label = "lcd";
+-
+-                pinctrl-names = "default";
+-                pinctrl-0 = <&lcd_pins>;
+-
+-                enable-gpios = <&gpio8 3 GPIO_ACTIVE_HIGH>;
+-
+-                panel-timing {
+-                        clock-frequency = <33000000>;
+-                        hactive = <800>;
+-                        vactive = <480>;
+-                        hfront-porch = <40>;
+-                        hback-porch = <40>;
+-                        hsync-len = <43>;
+-                        vback-porch = <29>;
+-                        vfront-porch = <13>;
+-                        vsync-len = <3>;
+-                        hsync-active = <0>;
+-                        vsync-active = <0>;
+-                        de-active = <1>;
+-                        pixelclk-active = <1>;
+-                };
+-
+-                port {
+-                        lcd_in: endpoint {
+-                                remote-endpoint = <&dpi_lcd_out>;
+-                        };
+-                };
+-        };
++		compatible = "startek,startek-kd050c", "panel-dpi";
++		label = "lcd";
++
++		pinctrl-names = "default";
++		pinctrl-0 = <&lcd_pins>;
++
++		enable-gpios = <&gpio8 3 GPIO_ACTIVE_HIGH>;
++
++		panel-timing {
++			clock-frequency = <33000000>;
++			hactive = <800>;
++			vactive = <480>;
++			hfront-porch = <40>;
++			hback-porch = <40>;
++			hsync-len = <43>;
++			vback-porch = <29>;
++			vfront-porch = <13>;
++			vsync-len = <3>;
++			hsync-active = <0>;
++			vsync-active = <0>;
++			de-active = <1>;
++			pixelclk-active = <1>;
++		};
++
++		port {
++			lcd_in: endpoint {
++				remote-endpoint = <&dpi_lcd_out>;
++			};
++		};
++	};
+ 
+ 	hdmi0: connector0 {
+ 		compatible = "hdmi-connector";
+@@ -644,8 +644,8 @@
+ };
+ 
+ &usb3 {
+-       extcon = <&extcon_usb3>;
+-       vbus-supply = <&smps10_out1_reg>;
++	extcon = <&extcon_usb3>;
++	vbus-supply = <&smps10_out1_reg>;
+ };
+ 
+ &cpu0 {
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 9cf5d9551e991..c2a1ccd5fd468 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -79,6 +79,7 @@
+ #define ARM_CPU_PART_CORTEX_A78AE	0xD42
+ #define ARM_CPU_PART_CORTEX_X1		0xD44
+ #define ARM_CPU_PART_CORTEX_A510	0xD46
++#define ARM_CPU_PART_CORTEX_A520	0xD80
+ #define ARM_CPU_PART_CORTEX_A710	0xD47
+ #define ARM_CPU_PART_CORTEX_X2		0xD48
+ #define ARM_CPU_PART_NEOVERSE_N2	0xD49
+@@ -130,6 +131,7 @@
+ #define MIDR_CORTEX_A78AE	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE)
+ #define MIDR_CORTEX_X1	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1)
+ #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510)
++#define MIDR_CORTEX_A520 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A520)
+ #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
+ #define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
+ #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
+diff --git a/arch/mips/alchemy/devboards/db1000.c b/arch/mips/alchemy/devboards/db1000.c
+index 50de86eb8784c..3183df60ad337 100644
+--- a/arch/mips/alchemy/devboards/db1000.c
++++ b/arch/mips/alchemy/devboards/db1000.c
+@@ -164,6 +164,7 @@ static struct platform_device db1x00_audio_dev = {
+ 
+ /******************************************************************************/
+ 
++#ifdef CONFIG_MMC_AU1X
+ static irqreturn_t db1100_mmc_cd(int irq, void *ptr)
+ {
+ 	mmc_detect_change(ptr, msecs_to_jiffies(500));
+@@ -369,6 +370,7 @@ static struct platform_device db1100_mmc1_dev = {
+ 	.num_resources	= ARRAY_SIZE(au1100_mmc1_res),
+ 	.resource	= au1100_mmc1_res,
+ };
++#endif /* CONFIG_MMC_AU1X */
+ 
+ /******************************************************************************/
+ 
+@@ -432,8 +434,10 @@ static struct platform_device *db1x00_devs[] = {
+ 
+ static struct platform_device *db1100_devs[] = {
+ 	&au1100_lcd_device,
++#ifdef CONFIG_MMC_AU1X
+ 	&db1100_mmc0_dev,
+ 	&db1100_mmc1_dev,
++#endif
+ };
+ 
+ int __init db1000_dev_setup(void)
+diff --git a/arch/mips/alchemy/devboards/db1200.c b/arch/mips/alchemy/devboards/db1200.c
+index b70e2cf8a27bc..414f92eacb5e5 100644
+--- a/arch/mips/alchemy/devboards/db1200.c
++++ b/arch/mips/alchemy/devboards/db1200.c
+@@ -326,6 +326,7 @@ static struct platform_device db1200_ide_dev = {
+ 
+ /**********************************************************************/
+ 
++#ifdef CONFIG_MMC_AU1X
+ /* SD carddetects:  they're supposed to be edge-triggered, but ack
+  * doesn't seem to work (CPLD Rev 2).  Instead, the screaming one
+  * is disabled and its counterpart enabled.  The 200ms timeout is
+@@ -584,6 +585,7 @@ static struct platform_device pb1200_mmc1_dev = {
+ 	.num_resources	= ARRAY_SIZE(au1200_mmc1_res),
+ 	.resource	= au1200_mmc1_res,
+ };
++#endif /* CONFIG_MMC_AU1X */
+ 
+ /**********************************************************************/
+ 
+@@ -751,7 +753,9 @@ static struct platform_device db1200_audiodma_dev = {
+ static struct platform_device *db1200_devs[] __initdata = {
+ 	NULL,		/* PSC0, selected by S6.8 */
+ 	&db1200_ide_dev,
++#ifdef CONFIG_MMC_AU1X
+ 	&db1200_mmc0_dev,
++#endif
+ 	&au1200_lcd_dev,
+ 	&db1200_eth_dev,
+ 	&db1200_nand_dev,
+@@ -762,7 +766,9 @@ static struct platform_device *db1200_devs[] __initdata = {
+ };
+ 
+ static struct platform_device *pb1200_devs[] __initdata = {
++#ifdef CONFIG_MMC_AU1X
+ 	&pb1200_mmc1_dev,
++#endif
+ };
+ 
+ /* Some peripheral base addresses differ on the PB1200 */
+diff --git a/arch/mips/alchemy/devboards/db1300.c b/arch/mips/alchemy/devboards/db1300.c
+index ca71e5ed51abd..c965d00074818 100644
+--- a/arch/mips/alchemy/devboards/db1300.c
++++ b/arch/mips/alchemy/devboards/db1300.c
+@@ -450,6 +450,7 @@ static struct platform_device db1300_ide_dev = {
+ 
+ /**********************************************************************/
+ 
++#ifdef CONFIG_MMC_AU1X
+ static irqreturn_t db1300_mmc_cd(int irq, void *ptr)
+ {
+ 	disable_irq_nosync(irq);
+@@ -632,6 +633,7 @@ static struct platform_device db1300_sd0_dev = {
+ 	.resource	= au1300_sd0_res,
+ 	.num_resources	= ARRAY_SIZE(au1300_sd0_res),
+ };
++#endif /* CONFIG_MMC_AU1X */
+ 
+ /**********************************************************************/
+ 
+@@ -776,8 +778,10 @@ static struct platform_device *db1300_dev[] __initdata = {
+ 	&db1300_5waysw_dev,
+ 	&db1300_nand_dev,
+ 	&db1300_ide_dev,
++#ifdef CONFIG_MMC_AU1X
+ 	&db1300_sd0_dev,
+ 	&db1300_sd1_dev,
++#endif
+ 	&db1300_lcd_dev,
+ 	&db1300_ac97_dev,
+ 	&db1300_i2s_dev,
+diff --git a/arch/parisc/include/asm/ldcw.h b/arch/parisc/include/asm/ldcw.h
+index 6d28b5514699a..10a061d6899cd 100644
+--- a/arch/parisc/include/asm/ldcw.h
++++ b/arch/parisc/include/asm/ldcw.h
+@@ -2,14 +2,28 @@
+ #ifndef __PARISC_LDCW_H
+ #define __PARISC_LDCW_H
+ 
+-#ifndef CONFIG_PA20
+ /* Because kmalloc only guarantees 8-byte alignment for kmalloc'd data,
+    and GCC only guarantees 8-byte alignment for stack locals, we can't
+    be assured of 16-byte alignment for atomic lock data even if we
+    specify "__attribute ((aligned(16)))" in the type declaration.  So,
+    we use a struct containing an array of four ints for the atomic lock
+    type and dynamically select the 16-byte aligned int from the array
+-   for the semaphore.  */
++   for the semaphore. */
++
++/* From: "Jim Hull" <jim.hull of hp.com>
++   I've attached a summary of the change, but basically, for PA 2.0, as
++   long as the ",CO" (coherent operation) completer is implemented, then the
++   16-byte alignment requirement for ldcw and ldcd is relaxed, and instead
++   they only require "natural" alignment (4-byte for ldcw, 8-byte for
++   ldcd).
++
++   Although the cache control hint is accepted by all PA 2.0 processors,
++   it is only implemented on PA8800/PA8900 CPUs. Prior PA8X00 CPUs still
++   require 16-byte alignment. If the address is unaligned, the operation
++   of the instruction is undefined. The ldcw instruction does not generate
++   unaligned data reference traps so misaligned accesses are not detected.
++   This hid the problem for years. So, restore the 16-byte alignment dropped
++   by Kyle McMartin in "Remove __ldcw_align for PA-RISC 2.0 processors". */
+ 
+ #define __PA_LDCW_ALIGNMENT	16
+ #define __PA_LDCW_ALIGN_ORDER	4
+@@ -19,22 +33,12 @@
+ 		& ~(__PA_LDCW_ALIGNMENT - 1);			\
+ 	(volatile unsigned int *) __ret;			\
+ })
+-#define __LDCW	"ldcw"
+ 
+-#else /*CONFIG_PA20*/
+-/* From: "Jim Hull" <jim.hull of hp.com>
+-   I've attached a summary of the change, but basically, for PA 2.0, as
+-   long as the ",CO" (coherent operation) completer is specified, then the
+-   16-byte alignment requirement for ldcw and ldcd is relaxed, and instead
+-   they only require "natural" alignment (4-byte for ldcw, 8-byte for
+-   ldcd). */
+-
+-#define __PA_LDCW_ALIGNMENT	4
+-#define __PA_LDCW_ALIGN_ORDER	2
+-#define __ldcw_align(a) (&(a)->slock)
++#ifdef CONFIG_PA20
+ #define __LDCW	"ldcw,co"
+-
+-#endif /*!CONFIG_PA20*/
++#else
++#define __LDCW	"ldcw"
++#endif
+ 
+ /* LDCW, the only atomic read-write operation PA-RISC has. *sigh*.
+    We don't explicitly expose that "*a" may be written as reload
+diff --git a/arch/parisc/include/asm/ropes.h b/arch/parisc/include/asm/ropes.h
+index 8e51c775c80a6..62399c7ea94a1 100644
+--- a/arch/parisc/include/asm/ropes.h
++++ b/arch/parisc/include/asm/ropes.h
+@@ -86,6 +86,9 @@ struct sba_device {
+ 	struct ioc		ioc[MAX_IOC];
+ };
+ 
++/* list of SBA's in system, see drivers/parisc/sba_iommu.c */
++extern struct sba_device *sba_list;
++
+ #define ASTRO_RUNWAY_PORT	0x582
+ #define IKE_MERCED_PORT		0x803
+ #define REO_MERCED_PORT		0x804
+diff --git a/arch/parisc/include/asm/spinlock_types.h b/arch/parisc/include/asm/spinlock_types.h
+index ca39ee350c3f4..35c5086b74d70 100644
+--- a/arch/parisc/include/asm/spinlock_types.h
++++ b/arch/parisc/include/asm/spinlock_types.h
+@@ -3,13 +3,8 @@
+ #define __ASM_SPINLOCK_TYPES_H
+ 
+ typedef struct {
+-#ifdef CONFIG_PA20
+-	volatile unsigned int slock;
+-# define __ARCH_SPIN_LOCK_UNLOCKED { 1 }
+-#else
+ 	volatile unsigned int lock[4];
+ # define __ARCH_SPIN_LOCK_UNLOCKED	{ { 1, 1, 1, 1 } }
+-#endif
+ } arch_spinlock_t;
+ 
+ 
+diff --git a/arch/parisc/kernel/drivers.c b/arch/parisc/kernel/drivers.c
+index d95157488832a..d11a3123f3dc4 100644
+--- a/arch/parisc/kernel/drivers.c
++++ b/arch/parisc/kernel/drivers.c
+@@ -925,9 +925,9 @@ static __init void qemu_header(void)
+ 	pr_info("#define PARISC_MODEL \"%s\"\n\n",
+ 			boot_cpu_data.pdc.sys_model_name);
+ 
++	#define p ((unsigned long *)&boot_cpu_data.pdc.model)
+ 	pr_info("#define PARISC_PDC_MODEL 0x%lx, 0x%lx, 0x%lx, "
+ 		"0x%lx, 0x%lx, 0x%lx, 0x%lx, 0x%lx, 0x%lx\n\n",
+-	#define p ((unsigned long *)&boot_cpu_data.pdc.model)
+ 		p[0], p[1], p[2], p[3], p[4], p[5], p[6], p[7], p[8]);
+ 	#undef p
+ 
+diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c
+index 60f5829d476f5..2762e8540672e 100644
+--- a/arch/parisc/kernel/irq.c
++++ b/arch/parisc/kernel/irq.c
+@@ -388,7 +388,7 @@ union irq_stack_union {
+ 	volatile unsigned int lock[1];
+ };
+ 
+-DEFINE_PER_CPU(union irq_stack_union, irq_stack_union) = {
++static DEFINE_PER_CPU(union irq_stack_union, irq_stack_union) = {
+ 		.slock = { 1,1,1,1 },
+ 	};
+ #endif
+diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
+index f4e8f21046f54..6e5bed50c3578 100644
+--- a/arch/powerpc/kernel/hw_breakpoint.c
++++ b/arch/powerpc/kernel/hw_breakpoint.c
+@@ -479,11 +479,13 @@ void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs)
+ 	struct arch_hw_breakpoint *info;
+ 	int i;
+ 
++	preempt_disable();
++
+ 	for (i = 0; i < nr_wp_slots(); i++) {
+ 		if (unlikely(tsk->thread.last_hit_ubp[i]))
+ 			goto reset;
+ 	}
+-	return;
++	goto out;
+ 
+ reset:
+ 	regs->msr &= ~MSR_SE;
+@@ -492,6 +494,9 @@ reset:
+ 		__set_breakpoint(i, info);
+ 		tsk->thread.last_hit_ubp[i] = NULL;
+ 	}
++
++out:
++	preempt_enable();
+ }
+ 
+ static bool is_larx_stcx_instr(int type)
+diff --git a/arch/powerpc/perf/hv-24x7.c b/arch/powerpc/perf/hv-24x7.c
+index 1cd2351d241e8..61a08747b1641 100644
+--- a/arch/powerpc/perf/hv-24x7.c
++++ b/arch/powerpc/perf/hv-24x7.c
+@@ -1410,7 +1410,7 @@ static int h_24x7_event_init(struct perf_event *event)
+ 	}
+ 
+ 	domain = event_get_domain(event);
+-	if (domain >= HV_PERF_DOMAIN_MAX) {
++	if (domain  == 0 || domain >= HV_PERF_DOMAIN_MAX) {
+ 		pr_devel("invalid domain %d\n", domain);
+ 		return -EINVAL;
+ 	}
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 4d11a50089b27..ec3ddb9a456ba 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -2344,7 +2344,7 @@ static void __init srso_select_mitigation(void)
+ 
+ 	switch (srso_cmd) {
+ 	case SRSO_CMD_OFF:
+-		return;
++		goto pred_cmd;
+ 
+ 	case SRSO_CMD_MICROCODE:
+ 		if (has_microcode) {
+@@ -2622,7 +2622,7 @@ static ssize_t srso_show_state(char *buf)
+ 
+ 	return sysfs_emit(buf, "%s%s\n",
+ 			  srso_strings[srso_mitigation],
+-			  (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
++			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
+ }
+ 
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+diff --git a/arch/xtensa/boot/Makefile b/arch/xtensa/boot/Makefile
+index f6bb352f94b43..c8fd705d08b2c 100644
+--- a/arch/xtensa/boot/Makefile
++++ b/arch/xtensa/boot/Makefile
+@@ -9,8 +9,7 @@
+ 
+ 
+ # KBUILD_CFLAGS used when building rest of boot (takes effect recursively)
+-KBUILD_CFLAGS	+= -fno-builtin -Iarch/$(ARCH)/boot/include
+-HOSTFLAGS	+= -Iarch/$(ARCH)/boot/include
++KBUILD_CFLAGS	+= -fno-builtin
+ 
+ BIG_ENDIAN	:= $(shell echo __XTENSA_EB__ | $(CC) -E - | grep -v "\#")
+ 
+diff --git a/arch/xtensa/boot/lib/zmem.c b/arch/xtensa/boot/lib/zmem.c
+index e3ecd743c5153..b89189355122a 100644
+--- a/arch/xtensa/boot/lib/zmem.c
++++ b/arch/xtensa/boot/lib/zmem.c
+@@ -4,13 +4,14 @@
+ /* bits taken from ppc */
+ 
+ extern void *avail_ram, *end_avail;
++void gunzip(void *dst, int dstlen, unsigned char *src, int *lenp);
+ 
+-void exit (void)
++static void exit(void)
+ {
+   for (;;);
+ }
+ 
+-void *zalloc(unsigned size)
++static void *zalloc(unsigned int size)
+ {
+         void *p = avail_ram;
+ 
+diff --git a/arch/xtensa/include/asm/core.h b/arch/xtensa/include/asm/core.h
+index a4e40166ff4bb..0fa3649649e98 100644
+--- a/arch/xtensa/include/asm/core.h
++++ b/arch/xtensa/include/asm/core.h
+@@ -6,6 +6,10 @@
+ 
+ #include <variant/core.h>
+ 
++#ifndef XCHAL_HAVE_DIV32
++#define XCHAL_HAVE_DIV32 0
++#endif
++
+ #ifndef XCHAL_HAVE_EXCLUSIVE
+ #define XCHAL_HAVE_EXCLUSIVE 0
+ #endif
+diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
+index 1270de83435eb..e8491ac0d5b93 100644
+--- a/arch/xtensa/platforms/iss/network.c
++++ b/arch/xtensa/platforms/iss/network.c
+@@ -204,7 +204,7 @@ static int tuntap_write(struct iss_net_private *lp, struct sk_buff **skb)
+ 	return simc_write(lp->tp.info.tuntap.fd, (*skb)->data, (*skb)->len);
+ }
+ 
+-unsigned short tuntap_protocol(struct sk_buff *skb)
++static unsigned short tuntap_protocol(struct sk_buff *skb)
+ {
+ 	return eth_type_trans(skb, skb->dev);
+ }
+@@ -477,7 +477,7 @@ static int iss_net_change_mtu(struct net_device *dev, int new_mtu)
+ 	return -EINVAL;
+ }
+ 
+-void iss_net_user_timer_expire(struct timer_list *unused)
++static void iss_net_user_timer_expire(struct timer_list *unused)
+ {
+ }
+ 
+diff --git a/block/blk-core.c b/block/blk-core.c
+index d0d0dd8151f75..e5eeec801f565 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -414,8 +414,6 @@ void blk_cleanup_queue(struct request_queue *q)
+ 		blk_mq_sched_free_requests(q);
+ 	mutex_unlock(&q->sysfs_lock);
+ 
+-	percpu_ref_exit(&q->q_usage_counter);
+-
+ 	/* @q is and will stay empty, shutdown and put */
+ 	blk_put_queue(q);
+ }
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 8c5816364dd17..9174137a913c4 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -726,6 +726,8 @@ static void blk_free_queue_rcu(struct rcu_head *rcu_head)
+ {
+ 	struct request_queue *q = container_of(rcu_head, struct request_queue,
+ 					       rcu_head);
++
++	percpu_ref_exit(&q->q_usage_counter);
+ 	kmem_cache_free(blk_requestq_cachep, q);
+ }
+ 
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index ecd2ddc2215f5..66e53df758655 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -1326,4 +1326,33 @@ int acpi_dev_pm_attach(struct device *dev, bool power_on)
+ 	return 1;
+ }
+ EXPORT_SYMBOL_GPL(acpi_dev_pm_attach);
++
++/**
++ * acpi_storage_d3 - Check if D3 should be used in the suspend path
++ * @dev: Device to check
++ *
++ * Return %true if the platform firmware wants @dev to be programmed
++ * into D3hot or D3cold (if supported) in the suspend path, or %false
++ * when there is no specific preference. On some platforms, if this
++ * hint is ignored, @dev may remain unresponsive after suspending the
++ * platform as a whole.
++ *
++ * Although the property has storage in the name it actually is
++ * applied to the PCIe slot and plugging in a non-storage device the
++ * same platform restrictions will likely apply.
++ */
++bool acpi_storage_d3(struct device *dev)
++{
++	struct acpi_device *adev = ACPI_COMPANION(dev);
++	u8 val;
++
++	if (!adev)
++		return false;
++	if (fwnode_property_read_u8(acpi_fwnode_handle(adev), "StorageD3Enable",
++			&val))
++		return false;
++	return val == 1;
++}
++EXPORT_SYMBOL_GPL(acpi_storage_d3);
++
+ #endif /* CONFIG_PM */
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index bf949f7da483f..4297a8d69dbf7 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -50,7 +50,8 @@ enum board_ids {
+ 	/* board IDs by feature in alphabetical order */
+ 	board_ahci,
+ 	board_ahci_ign_iferr,
+-	board_ahci_mobile,
++	board_ahci_low_power,
++	board_ahci_no_debounce_delay,
+ 	board_ahci_nomsi,
+ 	board_ahci_noncq,
+ 	board_ahci_nosntf,
+@@ -135,13 +136,20 @@ static const struct ata_port_info ahci_port_info[] = {
+ 		.udma_mask	= ATA_UDMA6,
+ 		.port_ops	= &ahci_ops,
+ 	},
+-	[board_ahci_mobile] = {
++	[board_ahci_low_power] = {
+ 		AHCI_HFLAGS	(AHCI_HFLAG_IS_MOBILE),
+ 		.flags		= AHCI_FLAG_COMMON,
+ 		.pio_mask	= ATA_PIO4,
+ 		.udma_mask	= ATA_UDMA6,
+ 		.port_ops	= &ahci_ops,
+ 	},
++	[board_ahci_no_debounce_delay] = {
++		.flags		= AHCI_FLAG_COMMON,
++		.link_flags	= ATA_LFLAG_NO_DEBOUNCE_DELAY,
++		.pio_mask	= ATA_PIO4,
++		.udma_mask	= ATA_UDMA6,
++		.port_ops	= &ahci_ops,
++	},
+ 	[board_ahci_nomsi] = {
+ 		AHCI_HFLAGS	(AHCI_HFLAG_NO_MSI),
+ 		.flags		= AHCI_FLAG_COMMON,
+@@ -268,13 +276,13 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x2924), board_ahci }, /* ICH9 */
+ 	{ PCI_VDEVICE(INTEL, 0x2925), board_ahci }, /* ICH9 */
+ 	{ PCI_VDEVICE(INTEL, 0x2927), board_ahci }, /* ICH9 */
+-	{ PCI_VDEVICE(INTEL, 0x2929), board_ahci_mobile }, /* ICH9M */
+-	{ PCI_VDEVICE(INTEL, 0x292a), board_ahci_mobile }, /* ICH9M */
+-	{ PCI_VDEVICE(INTEL, 0x292b), board_ahci_mobile }, /* ICH9M */
+-	{ PCI_VDEVICE(INTEL, 0x292c), board_ahci_mobile }, /* ICH9M */
+-	{ PCI_VDEVICE(INTEL, 0x292f), board_ahci_mobile }, /* ICH9M */
++	{ PCI_VDEVICE(INTEL, 0x2929), board_ahci_low_power }, /* ICH9M */
++	{ PCI_VDEVICE(INTEL, 0x292a), board_ahci_low_power }, /* ICH9M */
++	{ PCI_VDEVICE(INTEL, 0x292b), board_ahci_low_power }, /* ICH9M */
++	{ PCI_VDEVICE(INTEL, 0x292c), board_ahci_low_power }, /* ICH9M */
++	{ PCI_VDEVICE(INTEL, 0x292f), board_ahci_low_power }, /* ICH9M */
+ 	{ PCI_VDEVICE(INTEL, 0x294d), board_ahci }, /* ICH9 */
+-	{ PCI_VDEVICE(INTEL, 0x294e), board_ahci_mobile }, /* ICH9M */
++	{ PCI_VDEVICE(INTEL, 0x294e), board_ahci_low_power }, /* ICH9M */
+ 	{ PCI_VDEVICE(INTEL, 0x502a), board_ahci }, /* Tolapai */
+ 	{ PCI_VDEVICE(INTEL, 0x502b), board_ahci }, /* Tolapai */
+ 	{ PCI_VDEVICE(INTEL, 0x3a05), board_ahci }, /* ICH10 */
+@@ -284,9 +292,9 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x3b23), board_ahci }, /* PCH AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x3b24), board_ahci }, /* PCH RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x3b25), board_ahci }, /* PCH RAID */
+-	{ PCI_VDEVICE(INTEL, 0x3b29), board_ahci_mobile }, /* PCH M AHCI */
++	{ PCI_VDEVICE(INTEL, 0x3b29), board_ahci_low_power }, /* PCH M AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x3b2b), board_ahci }, /* PCH RAID */
+-	{ PCI_VDEVICE(INTEL, 0x3b2c), board_ahci_mobile }, /* PCH M RAID */
++	{ PCI_VDEVICE(INTEL, 0x3b2c), board_ahci_low_power }, /* PCH M RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x3b2f), board_ahci }, /* PCH AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x19b0), board_ahci_pcs7 }, /* DNV AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x19b1), board_ahci_pcs7 }, /* DNV AHCI */
+@@ -309,9 +317,9 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x19cE), board_ahci_pcs7 }, /* DNV AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x19cF), board_ahci_pcs7 }, /* DNV AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x1c02), board_ahci }, /* CPT AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x1c03), board_ahci_mobile }, /* CPT M AHCI */
++	{ PCI_VDEVICE(INTEL, 0x1c03), board_ahci_low_power }, /* CPT M AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x1c04), board_ahci }, /* CPT RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1c05), board_ahci_mobile }, /* CPT M RAID */
++	{ PCI_VDEVICE(INTEL, 0x1c05), board_ahci_low_power }, /* CPT M RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x1c06), board_ahci }, /* CPT RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x1c07), board_ahci }, /* CPT RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x1d02), board_ahci }, /* PBG AHCI */
+@@ -320,29 +328,29 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x2826), board_ahci }, /* PBG RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x2323), board_ahci }, /* DH89xxCC AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x1e02), board_ahci }, /* Panther Point AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x1e03), board_ahci_mobile }, /* Panther M AHCI */
++	{ PCI_VDEVICE(INTEL, 0x1e03), board_ahci_low_power }, /* Panther M AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x1e04), board_ahci }, /* Panther Point RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x1e05), board_ahci }, /* Panther Point RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x1e06), board_ahci }, /* Panther Point RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1e07), board_ahci_mobile }, /* Panther M RAID */
++	{ PCI_VDEVICE(INTEL, 0x1e07), board_ahci_low_power }, /* Panther M RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x1e0e), board_ahci }, /* Panther Point RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x8c02), board_ahci }, /* Lynx Point AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x8c03), board_ahci_mobile }, /* Lynx M AHCI */
++	{ PCI_VDEVICE(INTEL, 0x8c03), board_ahci_low_power }, /* Lynx M AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x8c04), board_ahci }, /* Lynx Point RAID */
+-	{ PCI_VDEVICE(INTEL, 0x8c05), board_ahci_mobile }, /* Lynx M RAID */
++	{ PCI_VDEVICE(INTEL, 0x8c05), board_ahci_low_power }, /* Lynx M RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x8c06), board_ahci }, /* Lynx Point RAID */
+-	{ PCI_VDEVICE(INTEL, 0x8c07), board_ahci_mobile }, /* Lynx M RAID */
++	{ PCI_VDEVICE(INTEL, 0x8c07), board_ahci_low_power }, /* Lynx M RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x8c0e), board_ahci }, /* Lynx Point RAID */
+-	{ PCI_VDEVICE(INTEL, 0x8c0f), board_ahci_mobile }, /* Lynx M RAID */
+-	{ PCI_VDEVICE(INTEL, 0x9c02), board_ahci_mobile }, /* Lynx LP AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x9c03), board_ahci_mobile }, /* Lynx LP AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x9c04), board_ahci_mobile }, /* Lynx LP RAID */
+-	{ PCI_VDEVICE(INTEL, 0x9c05), board_ahci_mobile }, /* Lynx LP RAID */
+-	{ PCI_VDEVICE(INTEL, 0x9c06), board_ahci_mobile }, /* Lynx LP RAID */
+-	{ PCI_VDEVICE(INTEL, 0x9c07), board_ahci_mobile }, /* Lynx LP RAID */
+-	{ PCI_VDEVICE(INTEL, 0x9c0e), board_ahci_mobile }, /* Lynx LP RAID */
+-	{ PCI_VDEVICE(INTEL, 0x9c0f), board_ahci_mobile }, /* Lynx LP RAID */
+-	{ PCI_VDEVICE(INTEL, 0x9dd3), board_ahci_mobile }, /* Cannon Lake PCH-LP AHCI */
++	{ PCI_VDEVICE(INTEL, 0x8c0f), board_ahci_low_power }, /* Lynx M RAID */
++	{ PCI_VDEVICE(INTEL, 0x9c02), board_ahci_low_power }, /* Lynx LP AHCI */
++	{ PCI_VDEVICE(INTEL, 0x9c03), board_ahci_low_power }, /* Lynx LP AHCI */
++	{ PCI_VDEVICE(INTEL, 0x9c04), board_ahci_low_power }, /* Lynx LP RAID */
++	{ PCI_VDEVICE(INTEL, 0x9c05), board_ahci_low_power }, /* Lynx LP RAID */
++	{ PCI_VDEVICE(INTEL, 0x9c06), board_ahci_low_power }, /* Lynx LP RAID */
++	{ PCI_VDEVICE(INTEL, 0x9c07), board_ahci_low_power }, /* Lynx LP RAID */
++	{ PCI_VDEVICE(INTEL, 0x9c0e), board_ahci_low_power }, /* Lynx LP RAID */
++	{ PCI_VDEVICE(INTEL, 0x9c0f), board_ahci_low_power }, /* Lynx LP RAID */
++	{ PCI_VDEVICE(INTEL, 0x9dd3), board_ahci_low_power }, /* Cannon Lake PCH-LP AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x1f22), board_ahci }, /* Avoton AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x1f23), board_ahci }, /* Avoton AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x1f24), board_ahci }, /* Avoton RAID */
+@@ -374,26 +382,26 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x8d66), board_ahci }, /* Wellsburg RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x8d6e), board_ahci }, /* Wellsburg RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x23a3), board_ahci }, /* Coleto Creek AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x9c83), board_ahci_mobile }, /* Wildcat LP AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x9c85), board_ahci_mobile }, /* Wildcat LP RAID */
+-	{ PCI_VDEVICE(INTEL, 0x9c87), board_ahci_mobile }, /* Wildcat LP RAID */
+-	{ PCI_VDEVICE(INTEL, 0x9c8f), board_ahci_mobile }, /* Wildcat LP RAID */
++	{ PCI_VDEVICE(INTEL, 0x9c83), board_ahci_low_power }, /* Wildcat LP AHCI */
++	{ PCI_VDEVICE(INTEL, 0x9c85), board_ahci_low_power }, /* Wildcat LP RAID */
++	{ PCI_VDEVICE(INTEL, 0x9c87), board_ahci_low_power }, /* Wildcat LP RAID */
++	{ PCI_VDEVICE(INTEL, 0x9c8f), board_ahci_low_power }, /* Wildcat LP RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x8c82), board_ahci }, /* 9 Series AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x8c83), board_ahci_mobile }, /* 9 Series M AHCI */
++	{ PCI_VDEVICE(INTEL, 0x8c83), board_ahci_low_power }, /* 9 Series M AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x8c84), board_ahci }, /* 9 Series RAID */
+-	{ PCI_VDEVICE(INTEL, 0x8c85), board_ahci_mobile }, /* 9 Series M RAID */
++	{ PCI_VDEVICE(INTEL, 0x8c85), board_ahci_low_power }, /* 9 Series M RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x8c86), board_ahci }, /* 9 Series RAID */
+-	{ PCI_VDEVICE(INTEL, 0x8c87), board_ahci_mobile }, /* 9 Series M RAID */
++	{ PCI_VDEVICE(INTEL, 0x8c87), board_ahci_low_power }, /* 9 Series M RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x8c8e), board_ahci }, /* 9 Series RAID */
+-	{ PCI_VDEVICE(INTEL, 0x8c8f), board_ahci_mobile }, /* 9 Series M RAID */
+-	{ PCI_VDEVICE(INTEL, 0x9d03), board_ahci_mobile }, /* Sunrise LP AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x9d05), board_ahci_mobile }, /* Sunrise LP RAID */
+-	{ PCI_VDEVICE(INTEL, 0x9d07), board_ahci_mobile }, /* Sunrise LP RAID */
++	{ PCI_VDEVICE(INTEL, 0x8c8f), board_ahci_low_power }, /* 9 Series M RAID */
++	{ PCI_VDEVICE(INTEL, 0x9d03), board_ahci_low_power }, /* Sunrise LP AHCI */
++	{ PCI_VDEVICE(INTEL, 0x9d05), board_ahci_low_power }, /* Sunrise LP RAID */
++	{ PCI_VDEVICE(INTEL, 0x9d07), board_ahci_low_power }, /* Sunrise LP RAID */
+ 	{ PCI_VDEVICE(INTEL, 0xa102), board_ahci }, /* Sunrise Point-H AHCI */
+-	{ PCI_VDEVICE(INTEL, 0xa103), board_ahci_mobile }, /* Sunrise M AHCI */
++	{ PCI_VDEVICE(INTEL, 0xa103), board_ahci_low_power }, /* Sunrise M AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0xa105), board_ahci }, /* Sunrise Point-H RAID */
+ 	{ PCI_VDEVICE(INTEL, 0xa106), board_ahci }, /* Sunrise Point-H RAID */
+-	{ PCI_VDEVICE(INTEL, 0xa107), board_ahci_mobile }, /* Sunrise M RAID */
++	{ PCI_VDEVICE(INTEL, 0xa107), board_ahci_low_power }, /* Sunrise M RAID */
+ 	{ PCI_VDEVICE(INTEL, 0xa10f), board_ahci }, /* Sunrise Point-H RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x2822), board_ahci }, /* Lewisburg RAID*/
+ 	{ PCI_VDEVICE(INTEL, 0x2823), board_ahci }, /* Lewisburg AHCI*/
+@@ -410,13 +418,15 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, 0xa356), board_ahci }, /* Cannon Lake PCH-H RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x06d7), board_ahci }, /* Comet Lake-H RAID */
+ 	{ PCI_VDEVICE(INTEL, 0xa386), board_ahci }, /* Comet Lake PCH-V RAID */
+-	{ PCI_VDEVICE(INTEL, 0x0f22), board_ahci_mobile }, /* Bay Trail AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x0f23), board_ahci_mobile }, /* Bay Trail AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x22a3), board_ahci_mobile }, /* Cherry Tr. AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x5ae3), board_ahci_mobile }, /* ApolloLake AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x34d3), board_ahci_mobile }, /* Ice Lake LP AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x02d3), board_ahci_mobile }, /* Comet Lake PCH-U AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x02d7), board_ahci_mobile }, /* Comet Lake PCH RAID */
++	{ PCI_VDEVICE(INTEL, 0x0f22), board_ahci_low_power }, /* Bay Trail AHCI */
++	{ PCI_VDEVICE(INTEL, 0x0f23), board_ahci_low_power }, /* Bay Trail AHCI */
++	{ PCI_VDEVICE(INTEL, 0x22a3), board_ahci_low_power }, /* Cherry Tr. AHCI */
++	{ PCI_VDEVICE(INTEL, 0x5ae3), board_ahci_low_power }, /* ApolloLake AHCI */
++	{ PCI_VDEVICE(INTEL, 0x34d3), board_ahci_low_power }, /* Ice Lake LP AHCI */
++	{ PCI_VDEVICE(INTEL, 0x02d3), board_ahci_low_power }, /* Comet Lake PCH-U AHCI */
++	{ PCI_VDEVICE(INTEL, 0x02d7), board_ahci_low_power }, /* Comet Lake PCH RAID */
++	/* Elkhart Lake IDs 0x4b60 & 0x4b62 https://sata-io.org/product/8803 not tested yet */
++	{ PCI_VDEVICE(INTEL, 0x4b63), board_ahci_low_power }, /* Elkhart Lake AHCI */
+ 
+ 	/* JMicron 360/1/3/5/6, match class to avoid IDE function */
+ 	{ PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+@@ -442,8 +452,9 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 		board_ahci_al },
+ 	/* AMD */
+ 	{ PCI_VDEVICE(AMD, 0x7800), board_ahci }, /* AMD Hudson-2 */
++	{ PCI_VDEVICE(AMD, 0x7801), board_ahci_no_debounce_delay }, /* AMD Hudson-2 (AHCI mode) */
+ 	{ PCI_VDEVICE(AMD, 0x7900), board_ahci }, /* AMD CZ */
+-	{ PCI_VDEVICE(AMD, 0x7901), board_ahci_mobile }, /* AMD Green Sardine */
++	{ PCI_VDEVICE(AMD, 0x7901), board_ahci_low_power }, /* AMD Green Sardine */
+ 	/* AMD is using RAID class only for ahci controllers */
+ 	{ PCI_VENDOR_ID_AMD, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+ 	  PCI_CLASS_STORAGE_RAID << 8, 0xffffff, board_ahci },
+@@ -703,7 +714,7 @@ static void ahci_pci_init_controller(struct ata_host *host)
+ 
+ 		/* clear port IRQ */
+ 		tmp = readl(port_mmio + PORT_IRQ_STAT);
+-		VPRINTK("PORT_IRQ_STAT 0x%x\n", tmp);
++		dev_dbg(&pdev->dev, "PORT_IRQ_STAT 0x%x\n", tmp);
+ 		if (tmp)
+ 			writel(tmp, port_mmio + PORT_IRQ_STAT);
+ 	}
+@@ -1495,7 +1506,6 @@ static irqreturn_t ahci_thunderx_irq_handler(int irq, void *dev_instance)
+ 	u32 irq_stat, irq_masked;
+ 	unsigned int handled = 1;
+ 
+-	VPRINTK("ENTER\n");
+ 	hpriv = host->private_data;
+ 	mmio = hpriv->mmio;
+ 	irq_stat = readl(mmio + HOST_IRQ_STAT);
+@@ -1512,7 +1522,6 @@ static irqreturn_t ahci_thunderx_irq_handler(int irq, void *dev_instance)
+ 		irq_stat = readl(mmio + HOST_IRQ_STAT);
+ 		spin_unlock(&host->lock);
+ 	} while (irq_stat);
+-	VPRINTK("EXIT\n");
+ 
+ 	return IRQ_RETVAL(handled);
+ }
+diff --git a/drivers/ata/ahci_brcm.c b/drivers/ata/ahci_brcm.c
+index 5b32df5d33adc..2e4252545fd27 100644
+--- a/drivers/ata/ahci_brcm.c
++++ b/drivers/ata/ahci_brcm.c
+@@ -332,7 +332,7 @@ static struct ata_port_operations ahci_brcm_platform_ops = {
+ 
+ static const struct ata_port_info ahci_brcm_port_info = {
+ 	.flags		= AHCI_FLAG_COMMON | ATA_FLAG_NO_DIPM,
+-	.link_flags	= ATA_LFLAG_NO_DB_DELAY,
++	.link_flags	= ATA_LFLAG_NO_DEBOUNCE_DELAY,
+ 	.pio_mask	= ATA_PIO4,
+ 	.udma_mask	= ATA_UDMA6,
+ 	.port_ops	= &ahci_brcm_platform_ops,
+diff --git a/drivers/ata/ahci_xgene.c b/drivers/ata/ahci_xgene.c
+index 16246c843365e..e0f0577ac191c 100644
+--- a/drivers/ata/ahci_xgene.c
++++ b/drivers/ata/ahci_xgene.c
+@@ -588,8 +588,6 @@ static irqreturn_t xgene_ahci_irq_intr(int irq, void *dev_instance)
+ 	void __iomem *mmio;
+ 	u32 irq_stat, irq_masked;
+ 
+-	VPRINTK("ENTER\n");
+-
+ 	hpriv = host->private_data;
+ 	mmio = hpriv->mmio;
+ 
+@@ -612,8 +610,6 @@ static irqreturn_t xgene_ahci_irq_intr(int irq, void *dev_instance)
+ 
+ 	spin_unlock(&host->lock);
+ 
+-	VPRINTK("EXIT\n");
+-
+ 	return IRQ_RETVAL(rc);
+ }
+ 
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index fec2e9754aed2..e188850f65ff2 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -1199,6 +1199,26 @@ static ssize_t ahci_activity_show(struct ata_device *dev, char *buf)
+ 	return sprintf(buf, "%d\n", emp->blink_policy);
+ }
+ 
++static void ahci_port_clear_pending_irq(struct ata_port *ap)
++{
++	struct ahci_host_priv *hpriv = ap->host->private_data;
++	void __iomem *port_mmio = ahci_port_base(ap);
++	u32 tmp;
++
++	/* clear SError */
++	tmp = readl(port_mmio + PORT_SCR_ERR);
++	dev_dbg(ap->host->dev, "PORT_SCR_ERR 0x%x\n", tmp);
++	writel(tmp, port_mmio + PORT_SCR_ERR);
++
++	/* clear port IRQ */
++	tmp = readl(port_mmio + PORT_IRQ_STAT);
++	dev_dbg(ap->host->dev, "PORT_IRQ_STAT 0x%x\n", tmp);
++	if (tmp)
++		writel(tmp, port_mmio + PORT_IRQ_STAT);
++
++	writel(1 << ap->port_no, hpriv->mmio + HOST_IRQ_STAT);
++}
++
+ static void ahci_port_init(struct device *dev, struct ata_port *ap,
+ 			   int port_no, void __iomem *mmio,
+ 			   void __iomem *port_mmio)
+@@ -1213,18 +1233,7 @@ static void ahci_port_init(struct device *dev, struct ata_port *ap,
+ 	if (rc)
+ 		dev_warn(dev, "%s (%d)\n", emsg, rc);
+ 
+-	/* clear SError */
+-	tmp = readl(port_mmio + PORT_SCR_ERR);
+-	VPRINTK("PORT_SCR_ERR 0x%x\n", tmp);
+-	writel(tmp, port_mmio + PORT_SCR_ERR);
+-
+-	/* clear port IRQ */
+-	tmp = readl(port_mmio + PORT_IRQ_STAT);
+-	VPRINTK("PORT_IRQ_STAT 0x%x\n", tmp);
+-	if (tmp)
+-		writel(tmp, port_mmio + PORT_IRQ_STAT);
+-
+-	writel(1 << port_no, mmio + HOST_IRQ_STAT);
++	ahci_port_clear_pending_irq(ap);
+ 
+ 	/* mark esata ports */
+ 	tmp = readl(port_mmio + PORT_CMD);
+@@ -1251,10 +1260,10 @@ void ahci_init_controller(struct ata_host *host)
+ 	}
+ 
+ 	tmp = readl(mmio + HOST_CTL);
+-	VPRINTK("HOST_CTL 0x%x\n", tmp);
++	dev_dbg(host->dev, "HOST_CTL 0x%x\n", tmp);
+ 	writel(tmp | HOST_IRQ_EN, mmio + HOST_CTL);
+ 	tmp = readl(mmio + HOST_CTL);
+-	VPRINTK("HOST_CTL 0x%x\n", tmp);
++	dev_dbg(host->dev, "HOST_CTL 0x%x\n", tmp);
+ }
+ EXPORT_SYMBOL_GPL(ahci_init_controller);
+ 
+@@ -1554,6 +1563,8 @@ int ahci_do_hardreset(struct ata_link *link, unsigned int *class,
+ 	tf.command = ATA_BUSY;
+ 	ata_tf_to_fis(&tf, 0, 0, d2h_fis);
+ 
++	ahci_port_clear_pending_irq(ap);
++
+ 	rc = sata_link_hardreset(link, timing, deadline, online,
+ 				 ahci_check_ready);
+ 
+@@ -1905,8 +1916,6 @@ static irqreturn_t ahci_multi_irqs_intr_hard(int irq, void *dev_instance)
+ 	void __iomem *port_mmio = ahci_port_base(ap);
+ 	u32 status;
+ 
+-	VPRINTK("ENTER\n");
+-
+ 	status = readl(port_mmio + PORT_IRQ_STAT);
+ 	writel(status, port_mmio + PORT_IRQ_STAT);
+ 
+@@ -1914,8 +1923,6 @@ static irqreturn_t ahci_multi_irqs_intr_hard(int irq, void *dev_instance)
+ 	ahci_handle_port_interrupt(ap, port_mmio, status);
+ 	spin_unlock(ap->lock);
+ 
+-	VPRINTK("EXIT\n");
+-
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -1932,9 +1939,7 @@ u32 ahci_handle_port_intr(struct ata_host *host, u32 irq_masked)
+ 		ap = host->ports[i];
+ 		if (ap) {
+ 			ahci_port_intr(ap);
+-			VPRINTK("port %u\n", i);
+ 		} else {
+-			VPRINTK("port %u (no irq)\n", i);
+ 			if (ata_ratelimit())
+ 				dev_warn(host->dev,
+ 					 "interrupt on disabled port %u\n", i);
+@@ -1955,8 +1960,6 @@ static irqreturn_t ahci_single_level_irq_intr(int irq, void *dev_instance)
+ 	void __iomem *mmio;
+ 	u32 irq_stat, irq_masked;
+ 
+-	VPRINTK("ENTER\n");
+-
+ 	hpriv = host->private_data;
+ 	mmio = hpriv->mmio;
+ 
+@@ -1984,8 +1987,6 @@ static irqreturn_t ahci_single_level_irq_intr(int irq, void *dev_instance)
+ 
+ 	spin_unlock(&host->lock);
+ 
+-	VPRINTK("EXIT\n");
+-
+ 	return IRQ_RETVAL(rc);
+ }
+ 
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 14150767be444..702b8e061b36e 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4974,17 +4974,19 @@ static void ata_port_request_pm(struct ata_port *ap, pm_message_t mesg,
+ 	struct ata_link *link;
+ 	unsigned long flags;
+ 
+-	/* Previous resume operation might still be in
+-	 * progress.  Wait for PM_PENDING to clear.
++	spin_lock_irqsave(ap->lock, flags);
++
++	/*
++	 * A previous PM operation might still be in progress. Wait for
++	 * ATA_PFLAG_PM_PENDING to clear.
+ 	 */
+ 	if (ap->pflags & ATA_PFLAG_PM_PENDING) {
++		spin_unlock_irqrestore(ap->lock, flags);
+ 		ata_port_wait_eh(ap);
+-		WARN_ON(ap->pflags & ATA_PFLAG_PM_PENDING);
++		spin_lock_irqsave(ap->lock, flags);
+ 	}
+ 
+-	/* request PM ops to EH */
+-	spin_lock_irqsave(ap->lock, flags);
+-
++	/* Request PM operation to EH */
+ 	ap->pm_mesg = mesg;
+ 	ap->pflags |= ATA_PFLAG_PM_PENDING;
+ 	ata_for_each_link(link, ap, HOST_FIRST) {
+@@ -4996,10 +4998,8 @@ static void ata_port_request_pm(struct ata_port *ap, pm_message_t mesg,
+ 
+ 	spin_unlock_irqrestore(ap->lock, flags);
+ 
+-	if (!async) {
++	if (!async)
+ 		ata_port_wait_eh(ap);
+-		WARN_ON(ap->pflags & ATA_PFLAG_PM_PENDING);
+-	}
+ }
+ 
+ /*
+@@ -5167,7 +5167,7 @@ EXPORT_SYMBOL_GPL(ata_host_resume);
+ #endif
+ 
+ const struct device_type ata_port_type = {
+-	.name = "ata_port",
++	.name = ATA_PORT_TYPE_NAME,
+ #ifdef CONFIG_PM
+ 	.pm = &ata_port_pm_ops,
+ #endif
+@@ -5915,11 +5915,30 @@ static void ata_port_detach(struct ata_port *ap)
+ 	if (!ap->ops->error_handler)
+ 		goto skip_eh;
+ 
+-	/* tell EH we're leaving & flush EH */
++	/* Wait for any ongoing EH */
++	ata_port_wait_eh(ap);
++
++	mutex_lock(&ap->scsi_scan_mutex);
+ 	spin_lock_irqsave(ap->lock, flags);
++
++	/* Remove scsi devices */
++	ata_for_each_link(link, ap, HOST_FIRST) {
++		ata_for_each_dev(dev, link, ALL) {
++			if (dev->sdev) {
++				spin_unlock_irqrestore(ap->lock, flags);
++				scsi_remove_device(dev->sdev);
++				spin_lock_irqsave(ap->lock, flags);
++				dev->sdev = NULL;
++			}
++		}
++	}
++
++	/* Tell EH to disable all devices */
+ 	ap->pflags |= ATA_PFLAG_UNLOADING;
+ 	ata_port_schedule_eh(ap);
++
+ 	spin_unlock_irqrestore(ap->lock, flags);
++	mutex_unlock(&ap->scsi_scan_mutex);
+ 
+ 	/* wait till EH commits suicide */
+ 	ata_port_wait_eh(ap);
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 973f4d34d7cda..5fb3eda0a280b 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -2703,18 +2703,11 @@ int ata_eh_reset(struct ata_link *link, int classify,
+ 			postreset(slave, classes);
+ 	}
+ 
+-	/*
+-	 * Some controllers can't be frozen very well and may set spurious
+-	 * error conditions during reset.  Clear accumulated error
+-	 * information and re-thaw the port if frozen.  As reset is the
+-	 * final recovery action and we cross check link onlineness against
+-	 * device classification later, no hotplug event is lost by this.
+-	 */
++	/* clear cached SError */
+ 	spin_lock_irqsave(link->ap->lock, flags);
+-	memset(&link->eh_info, 0, sizeof(link->eh_info));
++	link->eh_info.serror = 0;
+ 	if (slave)
+-		memset(&slave->eh_info, 0, sizeof(link->eh_info));
+-	ap->pflags &= ~ATA_PFLAG_EH_PENDING;
++		slave->eh_info.serror = 0;
+ 	spin_unlock_irqrestore(link->ap->lock, flags);
+ 
+ 	if (ap->pflags & ATA_PFLAG_FROZEN)
+diff --git a/drivers/ata/libata-sata.c b/drivers/ata/libata-sata.c
+index 4fd9a107fe7f8..45656067c547a 100644
+--- a/drivers/ata/libata-sata.c
++++ b/drivers/ata/libata-sata.c
+@@ -317,7 +317,7 @@ int sata_link_resume(struct ata_link *link, const unsigned long *params,
+ 		 * immediately after resuming.  Delay 200ms before
+ 		 * debouncing.
+ 		 */
+-		if (!(link->flags & ATA_LFLAG_NO_DB_DELAY))
++		if (!(link->flags & ATA_LFLAG_NO_DEBOUNCE_DELAY))
+ 			ata_msleep(link->ap, 200);
+ 
+ 		/* is SControl restored correctly? */
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index dfa090ccd21c6..36f32fa052df3 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -4259,7 +4259,7 @@ void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd)
+ 		break;
+ 
+ 	case MAINTENANCE_IN:
+-		if (scsicmd[1] == MI_REPORT_SUPPORTED_OPERATION_CODES)
++		if ((scsicmd[1] & 0x1f) == MI_REPORT_SUPPORTED_OPERATION_CODES)
+ 			ata_scsi_rbuf_fill(&args, ata_scsiop_maint_in);
+ 		else
+ 			ata_scsi_set_invalid_field(dev, cmd, 1, 0xff);
+diff --git a/drivers/ata/libata-transport.c b/drivers/ata/libata-transport.c
+index 31a66fc0c31dc..513b379424224 100644
+--- a/drivers/ata/libata-transport.c
++++ b/drivers/ata/libata-transport.c
+@@ -266,6 +266,10 @@ void ata_tport_delete(struct ata_port *ap)
+ 	put_device(dev);
+ }
+ 
++static const struct device_type ata_port_sas_type = {
++	.name = ATA_PORT_TYPE_NAME,
++};
++
+ /** ata_tport_add - initialize a transport ATA port structure
+  *
+  * @parent:	parent device
+@@ -283,7 +287,10 @@ int ata_tport_add(struct device *parent,
+ 	struct device *dev = &ap->tdev;
+ 
+ 	device_initialize(dev);
+-	dev->type = &ata_port_type;
++	if (ap->flags & ATA_FLAG_SAS_HOST)
++		dev->type = &ata_port_sas_type;
++	else
++		dev->type = &ata_port_type;
+ 
+ 	dev->parent = parent;
+ 	ata_host_get(ap->host);
+diff --git a/drivers/ata/libata.h b/drivers/ata/libata.h
+index 68cdd81d747c5..bf71bd9e66cd8 100644
+--- a/drivers/ata/libata.h
++++ b/drivers/ata/libata.h
+@@ -30,6 +30,8 @@ enum {
+ 	ATA_DNXFER_QUIET	= (1 << 31),
+ };
+ 
++#define ATA_PORT_TYPE_NAME	"ata_port"
++
+ extern atomic_t ata_print_id;
+ extern int atapi_passthru16;
+ extern int libata_fua;
+diff --git a/drivers/base/regmap/regcache-rbtree.c b/drivers/base/regmap/regcache-rbtree.c
+index ae6b8788d5f3f..d65715b9e129e 100644
+--- a/drivers/base/regmap/regcache-rbtree.c
++++ b/drivers/base/regmap/regcache-rbtree.c
+@@ -453,7 +453,8 @@ static int regcache_rbtree_write(struct regmap *map, unsigned int reg,
+ 		if (!rbnode)
+ 			return -ENOMEM;
+ 		regcache_rbtree_set_register(map, rbnode,
+-					     reg - rbnode->base_reg, value);
++					     (reg - rbnode->base_reg) / map->reg_stride,
++					     value);
+ 		regcache_rbtree_insert(map, &rbtree_ctx->root, rbnode);
+ 		rbtree_ctx->cached_rbnode = rbnode;
+ 	}
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 95cbd5790ed60..b0f7930524ba0 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -632,9 +632,8 @@ void rbd_warn(struct rbd_device *rbd_dev, const char *fmt, ...)
+ static void rbd_dev_remove_parent(struct rbd_device *rbd_dev);
+ 
+ static int rbd_dev_refresh(struct rbd_device *rbd_dev);
+-static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev);
+-static int rbd_dev_header_info(struct rbd_device *rbd_dev);
+-static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev);
++static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev,
++				     struct rbd_image_header *header);
+ static const char *rbd_dev_v2_snap_name(struct rbd_device *rbd_dev,
+ 					u64 snap_id);
+ static int _rbd_dev_v2_snap_size(struct rbd_device *rbd_dev, u64 snap_id,
+@@ -1047,15 +1046,24 @@ static void rbd_init_layout(struct rbd_device *rbd_dev)
+ 	RCU_INIT_POINTER(rbd_dev->layout.pool_ns, NULL);
+ }
+ 
++static void rbd_image_header_cleanup(struct rbd_image_header *header)
++{
++	kfree(header->object_prefix);
++	ceph_put_snap_context(header->snapc);
++	kfree(header->snap_sizes);
++	kfree(header->snap_names);
++
++	memset(header, 0, sizeof(*header));
++}
++
+ /*
+  * Fill an rbd image header with information from the given format 1
+  * on-disk header.
+  */
+-static int rbd_header_from_disk(struct rbd_device *rbd_dev,
+-				 struct rbd_image_header_ondisk *ondisk)
++static int rbd_header_from_disk(struct rbd_image_header *header,
++				struct rbd_image_header_ondisk *ondisk,
++				bool first_time)
+ {
+-	struct rbd_image_header *header = &rbd_dev->header;
+-	bool first_time = header->object_prefix == NULL;
+ 	struct ceph_snap_context *snapc;
+ 	char *object_prefix = NULL;
+ 	char *snap_names = NULL;
+@@ -1122,11 +1130,6 @@ static int rbd_header_from_disk(struct rbd_device *rbd_dev,
+ 	if (first_time) {
+ 		header->object_prefix = object_prefix;
+ 		header->obj_order = ondisk->options.order;
+-		rbd_init_layout(rbd_dev);
+-	} else {
+-		ceph_put_snap_context(header->snapc);
+-		kfree(header->snap_names);
+-		kfree(header->snap_sizes);
+ 	}
+ 
+ 	/* The remaining fields always get updated (when we refresh) */
+@@ -4916,7 +4919,9 @@ out_req:
+  * return, the rbd_dev->header field will contain up-to-date
+  * information about the image.
+  */
+-static int rbd_dev_v1_header_info(struct rbd_device *rbd_dev)
++static int rbd_dev_v1_header_info(struct rbd_device *rbd_dev,
++				  struct rbd_image_header *header,
++				  bool first_time)
+ {
+ 	struct rbd_image_header_ondisk *ondisk = NULL;
+ 	u32 snap_count = 0;
+@@ -4964,7 +4969,7 @@ static int rbd_dev_v1_header_info(struct rbd_device *rbd_dev)
+ 		snap_count = le32_to_cpu(ondisk->snap_count);
+ 	} while (snap_count != want_count);
+ 
+-	ret = rbd_header_from_disk(rbd_dev, ondisk);
++	ret = rbd_header_from_disk(header, ondisk, first_time);
+ out:
+ 	kfree(ondisk);
+ 
+@@ -4989,39 +4994,6 @@ static void rbd_dev_update_size(struct rbd_device *rbd_dev)
+ 	}
+ }
+ 
+-static int rbd_dev_refresh(struct rbd_device *rbd_dev)
+-{
+-	u64 mapping_size;
+-	int ret;
+-
+-	down_write(&rbd_dev->header_rwsem);
+-	mapping_size = rbd_dev->mapping.size;
+-
+-	ret = rbd_dev_header_info(rbd_dev);
+-	if (ret)
+-		goto out;
+-
+-	/*
+-	 * If there is a parent, see if it has disappeared due to the
+-	 * mapped image getting flattened.
+-	 */
+-	if (rbd_dev->parent) {
+-		ret = rbd_dev_v2_parent_info(rbd_dev);
+-		if (ret)
+-			goto out;
+-	}
+-
+-	rbd_assert(!rbd_is_snap(rbd_dev));
+-	rbd_dev->mapping.size = rbd_dev->header.image_size;
+-
+-out:
+-	up_write(&rbd_dev->header_rwsem);
+-	if (!ret && mapping_size != rbd_dev->mapping.size)
+-		rbd_dev_update_size(rbd_dev);
+-
+-	return ret;
+-}
+-
+ static const struct blk_mq_ops rbd_mq_ops = {
+ 	.queue_rq	= rbd_queue_rq,
+ };
+@@ -5576,17 +5548,12 @@ static int _rbd_dev_v2_snap_size(struct rbd_device *rbd_dev, u64 snap_id,
+ 	return 0;
+ }
+ 
+-static int rbd_dev_v2_image_size(struct rbd_device *rbd_dev)
+-{
+-	return _rbd_dev_v2_snap_size(rbd_dev, CEPH_NOSNAP,
+-					&rbd_dev->header.obj_order,
+-					&rbd_dev->header.image_size);
+-}
+-
+-static int rbd_dev_v2_object_prefix(struct rbd_device *rbd_dev)
++static int rbd_dev_v2_object_prefix(struct rbd_device *rbd_dev,
++				    char **pobject_prefix)
+ {
+ 	size_t size;
+ 	void *reply_buf;
++	char *object_prefix;
+ 	int ret;
+ 	void *p;
+ 
+@@ -5604,16 +5571,16 @@ static int rbd_dev_v2_object_prefix(struct rbd_device *rbd_dev)
+ 		goto out;
+ 
+ 	p = reply_buf;
+-	rbd_dev->header.object_prefix = ceph_extract_encoded_string(&p,
+-						p + ret, NULL, GFP_NOIO);
++	object_prefix = ceph_extract_encoded_string(&p, p + ret, NULL,
++						    GFP_NOIO);
++	if (IS_ERR(object_prefix)) {
++		ret = PTR_ERR(object_prefix);
++		goto out;
++	}
+ 	ret = 0;
+ 
+-	if (IS_ERR(rbd_dev->header.object_prefix)) {
+-		ret = PTR_ERR(rbd_dev->header.object_prefix);
+-		rbd_dev->header.object_prefix = NULL;
+-	} else {
+-		dout("  object_prefix = %s\n", rbd_dev->header.object_prefix);
+-	}
++	*pobject_prefix = object_prefix;
++	dout("  object_prefix = %s\n", object_prefix);
+ out:
+ 	kfree(reply_buf);
+ 
+@@ -5664,13 +5631,6 @@ static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id,
+ 	return 0;
+ }
+ 
+-static int rbd_dev_v2_features(struct rbd_device *rbd_dev)
+-{
+-	return _rbd_dev_v2_snap_features(rbd_dev, CEPH_NOSNAP,
+-					 rbd_is_ro(rbd_dev),
+-					 &rbd_dev->header.features);
+-}
+-
+ /*
+  * These are generic image flags, but since they are used only for
+  * object map, store them in rbd_dev->object_map_flags.
+@@ -5707,6 +5667,14 @@ struct parent_image_info {
+ 	u64		overlap;
+ };
+ 
++static void rbd_parent_info_cleanup(struct parent_image_info *pii)
++{
++	kfree(pii->pool_ns);
++	kfree(pii->image_id);
++
++	memset(pii, 0, sizeof(*pii));
++}
++
+ /*
+  * The caller is responsible for @pii.
+  */
+@@ -5776,6 +5744,9 @@ static int __get_parent_info(struct rbd_device *rbd_dev,
+ 	if (pii->has_overlap)
+ 		ceph_decode_64_safe(&p, end, pii->overlap, e_inval);
+ 
++	dout("%s pool_id %llu pool_ns %s image_id %s snap_id %llu has_overlap %d overlap %llu\n",
++	     __func__, pii->pool_id, pii->pool_ns, pii->image_id, pii->snap_id,
++	     pii->has_overlap, pii->overlap);
+ 	return 0;
+ 
+ e_inval:
+@@ -5814,14 +5785,17 @@ static int __get_parent_info_legacy(struct rbd_device *rbd_dev,
+ 	pii->has_overlap = true;
+ 	ceph_decode_64_safe(&p, end, pii->overlap, e_inval);
+ 
++	dout("%s pool_id %llu pool_ns %s image_id %s snap_id %llu has_overlap %d overlap %llu\n",
++	     __func__, pii->pool_id, pii->pool_ns, pii->image_id, pii->snap_id,
++	     pii->has_overlap, pii->overlap);
+ 	return 0;
+ 
+ e_inval:
+ 	return -EINVAL;
+ }
+ 
+-static int get_parent_info(struct rbd_device *rbd_dev,
+-			   struct parent_image_info *pii)
++static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev,
++				  struct parent_image_info *pii)
+ {
+ 	struct page *req_page, *reply_page;
+ 	void *p;
+@@ -5849,7 +5823,7 @@ static int get_parent_info(struct rbd_device *rbd_dev,
+ 	return ret;
+ }
+ 
+-static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev)
++static int rbd_dev_setup_parent(struct rbd_device *rbd_dev)
+ {
+ 	struct rbd_spec *parent_spec;
+ 	struct parent_image_info pii = { 0 };
+@@ -5859,37 +5833,12 @@ static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev)
+ 	if (!parent_spec)
+ 		return -ENOMEM;
+ 
+-	ret = get_parent_info(rbd_dev, &pii);
++	ret = rbd_dev_v2_parent_info(rbd_dev, &pii);
+ 	if (ret)
+ 		goto out_err;
+ 
+-	dout("%s pool_id %llu pool_ns %s image_id %s snap_id %llu has_overlap %d overlap %llu\n",
+-	     __func__, pii.pool_id, pii.pool_ns, pii.image_id, pii.snap_id,
+-	     pii.has_overlap, pii.overlap);
+-
+-	if (pii.pool_id == CEPH_NOPOOL || !pii.has_overlap) {
+-		/*
+-		 * Either the parent never existed, or we have
+-		 * record of it but the image got flattened so it no
+-		 * longer has a parent.  When the parent of a
+-		 * layered image disappears we immediately set the
+-		 * overlap to 0.  The effect of this is that all new
+-		 * requests will be treated as if the image had no
+-		 * parent.
+-		 *
+-		 * If !pii.has_overlap, the parent image spec is not
+-		 * applicable.  It's there to avoid duplication in each
+-		 * snapshot record.
+-		 */
+-		if (rbd_dev->parent_overlap) {
+-			rbd_dev->parent_overlap = 0;
+-			rbd_dev_parent_put(rbd_dev);
+-			pr_info("%s: clone image has been flattened\n",
+-				rbd_dev->disk->disk_name);
+-		}
+-
++	if (pii.pool_id == CEPH_NOPOOL || !pii.has_overlap)
+ 		goto out;	/* No parent?  No problem. */
+-	}
+ 
+ 	/* The ceph file layout needs to fit pool id in 32 bits */
+ 
+@@ -5901,58 +5850,46 @@ static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev)
+ 	}
+ 
+ 	/*
+-	 * The parent won't change (except when the clone is
+-	 * flattened, already handled that).  So we only need to
+-	 * record the parent spec we have not already done so.
++	 * The parent won't change except when the clone is flattened,
++	 * so we only need to record the parent image spec once.
+ 	 */
+-	if (!rbd_dev->parent_spec) {
+-		parent_spec->pool_id = pii.pool_id;
+-		if (pii.pool_ns && *pii.pool_ns) {
+-			parent_spec->pool_ns = pii.pool_ns;
+-			pii.pool_ns = NULL;
+-		}
+-		parent_spec->image_id = pii.image_id;
+-		pii.image_id = NULL;
+-		parent_spec->snap_id = pii.snap_id;
+-
+-		rbd_dev->parent_spec = parent_spec;
+-		parent_spec = NULL;	/* rbd_dev now owns this */
++	parent_spec->pool_id = pii.pool_id;
++	if (pii.pool_ns && *pii.pool_ns) {
++		parent_spec->pool_ns = pii.pool_ns;
++		pii.pool_ns = NULL;
+ 	}
++	parent_spec->image_id = pii.image_id;
++	pii.image_id = NULL;
++	parent_spec->snap_id = pii.snap_id;
++
++	rbd_assert(!rbd_dev->parent_spec);
++	rbd_dev->parent_spec = parent_spec;
++	parent_spec = NULL;	/* rbd_dev now owns this */
+ 
+ 	/*
+-	 * We always update the parent overlap.  If it's zero we issue
+-	 * a warning, as we will proceed as if there was no parent.
++	 * Record the parent overlap.  If it's zero, issue a warning as
++	 * we will proceed as if there is no parent.
+ 	 */
+-	if (!pii.overlap) {
+-		if (parent_spec) {
+-			/* refresh, careful to warn just once */
+-			if (rbd_dev->parent_overlap)
+-				rbd_warn(rbd_dev,
+-				    "clone now standalone (overlap became 0)");
+-		} else {
+-			/* initial probe */
+-			rbd_warn(rbd_dev, "clone is standalone (overlap 0)");
+-		}
+-	}
++	if (!pii.overlap)
++		rbd_warn(rbd_dev, "clone is standalone (overlap 0)");
+ 	rbd_dev->parent_overlap = pii.overlap;
+ 
+ out:
+ 	ret = 0;
+ out_err:
+-	kfree(pii.pool_ns);
+-	kfree(pii.image_id);
++	rbd_parent_info_cleanup(&pii);
+ 	rbd_spec_put(parent_spec);
+ 	return ret;
+ }
+ 
+-static int rbd_dev_v2_striping_info(struct rbd_device *rbd_dev)
++static int rbd_dev_v2_striping_info(struct rbd_device *rbd_dev,
++				    u64 *stripe_unit, u64 *stripe_count)
+ {
+ 	struct {
+ 		__le64 stripe_unit;
+ 		__le64 stripe_count;
+ 	} __attribute__ ((packed)) striping_info_buf = { 0 };
+ 	size_t size = sizeof (striping_info_buf);
+-	void *p;
+ 	int ret;
+ 
+ 	ret = rbd_obj_method_sync(rbd_dev, &rbd_dev->header_oid,
+@@ -5964,27 +5901,33 @@ static int rbd_dev_v2_striping_info(struct rbd_device *rbd_dev)
+ 	if (ret < size)
+ 		return -ERANGE;
+ 
+-	p = &striping_info_buf;
+-	rbd_dev->header.stripe_unit = ceph_decode_64(&p);
+-	rbd_dev->header.stripe_count = ceph_decode_64(&p);
++	*stripe_unit = le64_to_cpu(striping_info_buf.stripe_unit);
++	*stripe_count = le64_to_cpu(striping_info_buf.stripe_count);
++	dout("  stripe_unit = %llu stripe_count = %llu\n", *stripe_unit,
++	     *stripe_count);
++
+ 	return 0;
+ }
+ 
+-static int rbd_dev_v2_data_pool(struct rbd_device *rbd_dev)
++static int rbd_dev_v2_data_pool(struct rbd_device *rbd_dev, s64 *data_pool_id)
+ {
+-	__le64 data_pool_id;
++	__le64 data_pool_buf;
+ 	int ret;
+ 
+ 	ret = rbd_obj_method_sync(rbd_dev, &rbd_dev->header_oid,
+ 				  &rbd_dev->header_oloc, "get_data_pool",
+-				  NULL, 0, &data_pool_id, sizeof(data_pool_id));
++				  NULL, 0, &data_pool_buf,
++				  sizeof(data_pool_buf));
++	dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret);
+ 	if (ret < 0)
+ 		return ret;
+-	if (ret < sizeof(data_pool_id))
++	if (ret < sizeof(data_pool_buf))
+ 		return -EBADMSG;
+ 
+-	rbd_dev->header.data_pool_id = le64_to_cpu(data_pool_id);
+-	WARN_ON(rbd_dev->header.data_pool_id == CEPH_NOPOOL);
++	*data_pool_id = le64_to_cpu(data_pool_buf);
++	dout("  data_pool_id = %lld\n", *data_pool_id);
++	WARN_ON(*data_pool_id == CEPH_NOPOOL);
++
+ 	return 0;
+ }
+ 
+@@ -6176,7 +6119,8 @@ out_err:
+ 	return ret;
+ }
+ 
+-static int rbd_dev_v2_snap_context(struct rbd_device *rbd_dev)
++static int rbd_dev_v2_snap_context(struct rbd_device *rbd_dev,
++				   struct ceph_snap_context **psnapc)
+ {
+ 	size_t size;
+ 	int ret;
+@@ -6237,9 +6181,7 @@ static int rbd_dev_v2_snap_context(struct rbd_device *rbd_dev)
+ 	for (i = 0; i < snap_count; i++)
+ 		snapc->snaps[i] = ceph_decode_64(&p);
+ 
+-	ceph_put_snap_context(rbd_dev->header.snapc);
+-	rbd_dev->header.snapc = snapc;
+-
++	*psnapc = snapc;
+ 	dout("  snap context seq = %llu, snap_count = %u\n",
+ 		(unsigned long long)seq, (unsigned int)snap_count);
+ out:
+@@ -6288,38 +6230,42 @@ out:
+ 	return snap_name;
+ }
+ 
+-static int rbd_dev_v2_header_info(struct rbd_device *rbd_dev)
++static int rbd_dev_v2_header_info(struct rbd_device *rbd_dev,
++				  struct rbd_image_header *header,
++				  bool first_time)
+ {
+-	bool first_time = rbd_dev->header.object_prefix == NULL;
+ 	int ret;
+ 
+-	ret = rbd_dev_v2_image_size(rbd_dev);
++	ret = _rbd_dev_v2_snap_size(rbd_dev, CEPH_NOSNAP,
++				    first_time ? &header->obj_order : NULL,
++				    &header->image_size);
+ 	if (ret)
+ 		return ret;
+ 
+ 	if (first_time) {
+-		ret = rbd_dev_v2_header_onetime(rbd_dev);
++		ret = rbd_dev_v2_header_onetime(rbd_dev, header);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
+-	ret = rbd_dev_v2_snap_context(rbd_dev);
+-	if (ret && first_time) {
+-		kfree(rbd_dev->header.object_prefix);
+-		rbd_dev->header.object_prefix = NULL;
+-	}
++	ret = rbd_dev_v2_snap_context(rbd_dev, &header->snapc);
++	if (ret)
++		return ret;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+-static int rbd_dev_header_info(struct rbd_device *rbd_dev)
++static int rbd_dev_header_info(struct rbd_device *rbd_dev,
++			       struct rbd_image_header *header,
++			       bool first_time)
+ {
+ 	rbd_assert(rbd_image_format_valid(rbd_dev->image_format));
++	rbd_assert(!header->object_prefix && !header->snapc);
+ 
+ 	if (rbd_dev->image_format == 1)
+-		return rbd_dev_v1_header_info(rbd_dev);
++		return rbd_dev_v1_header_info(rbd_dev, header, first_time);
+ 
+-	return rbd_dev_v2_header_info(rbd_dev);
++	return rbd_dev_v2_header_info(rbd_dev, header, first_time);
+ }
+ 
+ /*
+@@ -6806,60 +6752,49 @@ out:
+  */
+ static void rbd_dev_unprobe(struct rbd_device *rbd_dev)
+ {
+-	struct rbd_image_header	*header;
+-
+ 	rbd_dev_parent_put(rbd_dev);
+ 	rbd_object_map_free(rbd_dev);
+ 	rbd_dev_mapping_clear(rbd_dev);
+ 
+ 	/* Free dynamic fields from the header, then zero it out */
+ 
+-	header = &rbd_dev->header;
+-	ceph_put_snap_context(header->snapc);
+-	kfree(header->snap_sizes);
+-	kfree(header->snap_names);
+-	kfree(header->object_prefix);
+-	memset(header, 0, sizeof (*header));
++	rbd_image_header_cleanup(&rbd_dev->header);
+ }
+ 
+-static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev)
++static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev,
++				     struct rbd_image_header *header)
+ {
+ 	int ret;
+ 
+-	ret = rbd_dev_v2_object_prefix(rbd_dev);
++	ret = rbd_dev_v2_object_prefix(rbd_dev, &header->object_prefix);
+ 	if (ret)
+-		goto out_err;
++		return ret;
+ 
+ 	/*
+ 	 * Get the and check features for the image.  Currently the
+ 	 * features are assumed to never change.
+ 	 */
+-	ret = rbd_dev_v2_features(rbd_dev);
++	ret = _rbd_dev_v2_snap_features(rbd_dev, CEPH_NOSNAP,
++					rbd_is_ro(rbd_dev), &header->features);
+ 	if (ret)
+-		goto out_err;
++		return ret;
+ 
+ 	/* If the image supports fancy striping, get its parameters */
+ 
+-	if (rbd_dev->header.features & RBD_FEATURE_STRIPINGV2) {
+-		ret = rbd_dev_v2_striping_info(rbd_dev);
+-		if (ret < 0)
+-			goto out_err;
++	if (header->features & RBD_FEATURE_STRIPINGV2) {
++		ret = rbd_dev_v2_striping_info(rbd_dev, &header->stripe_unit,
++					       &header->stripe_count);
++		if (ret)
++			return ret;
+ 	}
+ 
+-	if (rbd_dev->header.features & RBD_FEATURE_DATA_POOL) {
+-		ret = rbd_dev_v2_data_pool(rbd_dev);
++	if (header->features & RBD_FEATURE_DATA_POOL) {
++		ret = rbd_dev_v2_data_pool(rbd_dev, &header->data_pool_id);
+ 		if (ret)
+-			goto out_err;
++			return ret;
+ 	}
+ 
+-	rbd_init_layout(rbd_dev);
+ 	return 0;
+-
+-out_err:
+-	rbd_dev->header.features = 0;
+-	kfree(rbd_dev->header.object_prefix);
+-	rbd_dev->header.object_prefix = NULL;
+-	return ret;
+ }
+ 
+ /*
+@@ -7054,13 +6989,15 @@ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
+ 	if (!depth)
+ 		down_write(&rbd_dev->header_rwsem);
+ 
+-	ret = rbd_dev_header_info(rbd_dev);
++	ret = rbd_dev_header_info(rbd_dev, &rbd_dev->header, true);
+ 	if (ret) {
+ 		if (ret == -ENOENT && !need_watch)
+ 			rbd_print_dne(rbd_dev, false);
+ 		goto err_out_probe;
+ 	}
+ 
++	rbd_init_layout(rbd_dev);
++
+ 	/*
+ 	 * If this image is the one being mapped, we have pool name and
+ 	 * id, image name and id, and snap name - need to fill snap id.
+@@ -7089,7 +7026,7 @@ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
+ 	}
+ 
+ 	if (rbd_dev->header.features & RBD_FEATURE_LAYERING) {
+-		ret = rbd_dev_v2_parent_info(rbd_dev);
++		ret = rbd_dev_setup_parent(rbd_dev);
+ 		if (ret)
+ 			goto err_out_probe;
+ 	}
+@@ -7115,6 +7052,107 @@ err_out_format:
+ 	return ret;
+ }
+ 
++static void rbd_dev_update_header(struct rbd_device *rbd_dev,
++				  struct rbd_image_header *header)
++{
++	rbd_assert(rbd_image_format_valid(rbd_dev->image_format));
++	rbd_assert(rbd_dev->header.object_prefix); /* !first_time */
++
++	if (rbd_dev->header.image_size != header->image_size) {
++		rbd_dev->header.image_size = header->image_size;
++
++		if (!rbd_is_snap(rbd_dev)) {
++			rbd_dev->mapping.size = header->image_size;
++			rbd_dev_update_size(rbd_dev);
++		}
++	}
++
++	ceph_put_snap_context(rbd_dev->header.snapc);
++	rbd_dev->header.snapc = header->snapc;
++	header->snapc = NULL;
++
++	if (rbd_dev->image_format == 1) {
++		kfree(rbd_dev->header.snap_names);
++		rbd_dev->header.snap_names = header->snap_names;
++		header->snap_names = NULL;
++
++		kfree(rbd_dev->header.snap_sizes);
++		rbd_dev->header.snap_sizes = header->snap_sizes;
++		header->snap_sizes = NULL;
++	}
++}
++
++static void rbd_dev_update_parent(struct rbd_device *rbd_dev,
++				  struct parent_image_info *pii)
++{
++	if (pii->pool_id == CEPH_NOPOOL || !pii->has_overlap) {
++		/*
++		 * Either the parent never existed, or we have
++		 * record of it but the image got flattened so it no
++		 * longer has a parent.  When the parent of a
++		 * layered image disappears we immediately set the
++		 * overlap to 0.  The effect of this is that all new
++		 * requests will be treated as if the image had no
++		 * parent.
++		 *
++		 * If !pii.has_overlap, the parent image spec is not
++		 * applicable.  It's there to avoid duplication in each
++		 * snapshot record.
++		 */
++		if (rbd_dev->parent_overlap) {
++			rbd_dev->parent_overlap = 0;
++			rbd_dev_parent_put(rbd_dev);
++			pr_info("%s: clone has been flattened\n",
++				rbd_dev->disk->disk_name);
++		}
++	} else {
++		rbd_assert(rbd_dev->parent_spec);
++
++		/*
++		 * Update the parent overlap.  If it became zero, issue
++		 * a warning as we will proceed as if there is no parent.
++		 */
++		if (!pii->overlap && rbd_dev->parent_overlap)
++			rbd_warn(rbd_dev,
++				 "clone has become standalone (overlap 0)");
++		rbd_dev->parent_overlap = pii->overlap;
++	}
++}
++
++static int rbd_dev_refresh(struct rbd_device *rbd_dev)
++{
++	struct rbd_image_header	header = { 0 };
++	struct parent_image_info pii = { 0 };
++	int ret;
++
++	dout("%s rbd_dev %p\n", __func__, rbd_dev);
++
++	ret = rbd_dev_header_info(rbd_dev, &header, false);
++	if (ret)
++		goto out;
++
++	/*
++	 * If there is a parent, see if it has disappeared due to the
++	 * mapped image getting flattened.
++	 */
++	if (rbd_dev->parent) {
++		ret = rbd_dev_v2_parent_info(rbd_dev, &pii);
++		if (ret)
++			goto out;
++	}
++
++	down_write(&rbd_dev->header_rwsem);
++	rbd_dev_update_header(rbd_dev, &header);
++	if (rbd_dev->parent)
++		rbd_dev_update_parent(rbd_dev, &pii);
++	up_write(&rbd_dev->header_rwsem);
++
++out:
++	rbd_parent_info_cleanup(&pii);
++	rbd_image_header_cleanup(&header);
++	return ret;
++}
++
+ static ssize_t do_rbd_add(struct bus_type *bus,
+ 			  const char *buf,
+ 			  size_t count)
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 5e8c078efd22a..ef8c7bfd79a8f 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -38,6 +38,7 @@ enum sysc_soc {
+ 	SOC_2420,
+ 	SOC_2430,
+ 	SOC_3430,
++	SOC_AM35,
+ 	SOC_3630,
+ 	SOC_4430,
+ 	SOC_4460,
+@@ -1113,6 +1114,11 @@ static int sysc_enable_module(struct device *dev)
+ 	if (ddata->cfg.quirks & (SYSC_QUIRK_SWSUP_SIDLE |
+ 				 SYSC_QUIRK_SWSUP_SIDLE_ACT)) {
+ 		best_mode = SYSC_IDLE_NO;
++
++		/* Clear WAKEUP */
++		if (regbits->enwkup_shift >= 0 &&
++		    ddata->cfg.sysc_val & BIT(regbits->enwkup_shift))
++			reg &= ~BIT(regbits->enwkup_shift);
+ 	} else {
+ 		best_mode = fls(ddata->cfg.sidlemodes) - 1;
+ 		if (best_mode > SYSC_IDLE_MASK) {
+@@ -1233,6 +1239,13 @@ set_sidle:
+ 		}
+ 	}
+ 
++	if (ddata->cfg.quirks & SYSC_QUIRK_SWSUP_SIDLE_ACT) {
++		/* Set WAKEUP */
++		if (regbits->enwkup_shift >= 0 &&
++		    ddata->cfg.sysc_val & BIT(regbits->enwkup_shift))
++			reg |= BIT(regbits->enwkup_shift);
++	}
++
+ 	reg &= ~(SYSC_IDLE_MASK << regbits->sidle_shift);
+ 	reg |= best_mode << regbits->sidle_shift;
+ 	if (regbits->autoidle_shift >= 0 &&
+@@ -1496,16 +1509,16 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ 	SYSC_QUIRK("smartreflex", 0, -ENODEV, 0x38, -ENODEV, 0x00000000, 0xffffffff,
+ 		   SYSC_QUIRK_LEGACY_IDLE),
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000046, 0xffffffff,
+-		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
++		   SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000052, 0xffffffff,
+-		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
++		   SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
+ 	/* Uarts on omap4 and later */
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x50411e03, 0xffff00ff,
+-		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
++		   SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47422e03, 0xffffffff,
+-		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
++		   SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47424e03, 0xffffffff,
+-		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
++		   SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
+ 
+ 	/* Quirks that need to be set based on the module address */
+ 	SYSC_QUIRK("mcpdm", 0x40132000, 0, 0x10, -ENODEV, 0x50000800, 0xffffffff,
+@@ -1818,7 +1831,7 @@ static void sysc_pre_reset_quirk_dss(struct sysc *ddata)
+ 		dev_warn(ddata->dev, "%s: timed out %08x !+ %08x\n",
+ 			 __func__, val, irq_mask);
+ 
+-	if (sysc_soc->soc == SOC_3430) {
++	if (sysc_soc->soc == SOC_3430 || sysc_soc->soc == SOC_AM35) {
+ 		/* Clear DSS_SDI_CONTROL */
+ 		sysc_write(ddata, 0x44, 0);
+ 
+@@ -2085,8 +2098,7 @@ static int sysc_reset(struct sysc *ddata)
+ 	}
+ 
+ 	if (ddata->cfg.srst_udelay)
+-		usleep_range(ddata->cfg.srst_udelay,
+-			     ddata->cfg.srst_udelay * 2);
++		fsleep(ddata->cfg.srst_udelay);
+ 
+ 	if (ddata->post_reset_quirk)
+ 		ddata->post_reset_quirk(ddata);
+@@ -2960,6 +2972,7 @@ static void ti_sysc_idle(struct work_struct *work)
+ static const struct soc_device_attribute sysc_soc_match[] = {
+ 	SOC_FLAG("OMAP242*", SOC_2420),
+ 	SOC_FLAG("OMAP243*", SOC_2430),
++	SOC_FLAG("AM35*", SOC_AM35),
+ 	SOC_FLAG("OMAP3[45]*", SOC_3430),
+ 	SOC_FLAG("OMAP3[67]*", SOC_3630),
+ 	SOC_FLAG("OMAP443*", SOC_4430),
+@@ -3147,7 +3160,7 @@ static int sysc_check_active_timer(struct sysc *ddata)
+ 	 * can be dropped if we stop supporting old beagleboard revisions
+ 	 * A to B4 at some point.
+ 	 */
+-	if (sysc_soc->soc == SOC_3430)
++	if (sysc_soc->soc == SOC_3430 || sysc_soc->soc == SOC_AM35)
+ 		error = -ENXIO;
+ 	else
+ 		error = -EBUSY;
+diff --git a/drivers/char/agp/parisc-agp.c b/drivers/char/agp/parisc-agp.c
+index 514f9f287a781..c6f181702b9a7 100644
+--- a/drivers/char/agp/parisc-agp.c
++++ b/drivers/char/agp/parisc-agp.c
+@@ -394,8 +394,6 @@ find_quicksilver(struct device *dev, void *data)
+ static int __init
+ parisc_agp_init(void)
+ {
+-	extern struct sba_device *sba_list;
+-
+ 	int err = -1;
+ 	struct parisc_device *sba = NULL, *lba = NULL;
+ 	struct lba_device *lbadev = NULL;
+diff --git a/drivers/clk/imx/clk-pll14xx.c b/drivers/clk/imx/clk-pll14xx.c
+index e46311c2e63ea..aba36e4217d2d 100644
+--- a/drivers/clk/imx/clk-pll14xx.c
++++ b/drivers/clk/imx/clk-pll14xx.c
+@@ -60,6 +60,8 @@ static const struct imx_pll14xx_rate_table imx_pll1443x_tbl[] = {
+ 	PLL_1443X_RATE(650000000U, 325, 3, 2, 0),
+ 	PLL_1443X_RATE(594000000U, 198, 2, 2, 0),
+ 	PLL_1443X_RATE(519750000U, 173, 2, 2, 16384),
++	PLL_1443X_RATE(393216000U, 262, 2, 3, 9437),
++	PLL_1443X_RATE(361267200U, 361, 3, 3, 17511),
+ };
+ 
+ struct imx_pll14xx_clk imx_1443x_pll = {
+diff --git a/drivers/clk/tegra/clk-bpmp.c b/drivers/clk/tegra/clk-bpmp.c
+index a66263b6490d3..00845044c98ef 100644
+--- a/drivers/clk/tegra/clk-bpmp.c
++++ b/drivers/clk/tegra/clk-bpmp.c
+@@ -159,7 +159,7 @@ static unsigned long tegra_bpmp_clk_recalc_rate(struct clk_hw *hw,
+ 
+ 	err = tegra_bpmp_clk_transfer(clk->bpmp, &msg);
+ 	if (err < 0)
+-		return err;
++		return 0;
+ 
+ 	return response.rate;
+ }
+diff --git a/drivers/gpio/gpio-aspeed.c b/drivers/gpio/gpio-aspeed.c
+index e0d5d80ec8e0f..bbd04a63fb12a 100644
+--- a/drivers/gpio/gpio-aspeed.c
++++ b/drivers/gpio/gpio-aspeed.c
+@@ -966,7 +966,7 @@ static int aspeed_gpio_set_config(struct gpio_chip *chip, unsigned int offset,
+ 	else if (param == PIN_CONFIG_BIAS_DISABLE ||
+ 			param == PIN_CONFIG_BIAS_PULL_DOWN ||
+ 			param == PIN_CONFIG_DRIVE_STRENGTH)
+-		return pinctrl_gpio_set_config(offset, config);
++		return pinctrl_gpio_set_config(chip->base + offset, config);
+ 	else if (param == PIN_CONFIG_DRIVE_OPEN_DRAIN ||
+ 			param == PIN_CONFIG_DRIVE_OPEN_SOURCE)
+ 		/* Return -ENOTSUPP to trigger emulation, as per datasheet */
+diff --git a/drivers/gpio/gpio-pmic-eic-sprd.c b/drivers/gpio/gpio-pmic-eic-sprd.c
+index 9382851905662..e969ce9131ddf 100644
+--- a/drivers/gpio/gpio-pmic-eic-sprd.c
++++ b/drivers/gpio/gpio-pmic-eic-sprd.c
+@@ -338,6 +338,7 @@ static int sprd_pmic_eic_probe(struct platform_device *pdev)
+ 	pmic_eic->chip.set_config = sprd_pmic_eic_set_config;
+ 	pmic_eic->chip.set = sprd_pmic_eic_set;
+ 	pmic_eic->chip.get = sprd_pmic_eic_get;
++	pmic_eic->chip.can_sleep = true;
+ 
+ 	pmic_eic->intc.name = dev_name(&pdev->dev);
+ 	pmic_eic->intc.irq_mask = sprd_pmic_eic_irq_mask;
+diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c
+index 0cb6600b8eeee..1dbeaf6d00459 100644
+--- a/drivers/gpio/gpio-pxa.c
++++ b/drivers/gpio/gpio-pxa.c
+@@ -243,6 +243,7 @@ static bool pxa_gpio_has_pinctrl(void)
+ 	switch (gpio_type) {
+ 	case PXA3XX_GPIO:
+ 	case MMP2_GPIO:
++	case MMP_GPIO:
+ 		return false;
+ 
+ 	default:
+diff --git a/drivers/gpio/gpio-tb10x.c b/drivers/gpio/gpio-tb10x.c
+index 866201cf5f65e..4a9dcaad4a6cb 100644
+--- a/drivers/gpio/gpio-tb10x.c
++++ b/drivers/gpio/gpio-tb10x.c
+@@ -195,7 +195,7 @@ static int tb10x_gpio_probe(struct platform_device *pdev)
+ 				handle_edge_irq, IRQ_NOREQUEST, IRQ_NOPROBE,
+ 				IRQ_GC_INIT_MASK_CACHE);
+ 		if (ret)
+-			return ret;
++			goto err_remove_domain;
+ 
+ 		gc = tb10x_gpio->domain->gc->gc[0];
+ 		gc->reg_base                         = tb10x_gpio->base;
+@@ -209,6 +209,10 @@ static int tb10x_gpio_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	return 0;
++
++err_remove_domain:
++	irq_domain_remove(tb10x_gpio->domain);
++	return ret;
+ }
+ 
+ static int tb10x_gpio_remove(struct platform_device *pdev)
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+index fe64bf2176f30..b20ea58907c2a 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+@@ -252,7 +252,7 @@ void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj)
+ 	if (!mtk_gem->kvaddr) {
+ 		kfree(sgt);
+ 		kfree(mtk_gem->pages);
+-		return -ENOMEM;
++		return NULL;
+ 	}
+ out:
+ 	kfree(sgt);
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 45682d30d7056..4aec451d72510 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1907,6 +1907,7 @@ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 		"SMBus I801 adapter at %04lx", priv->smba);
+ 	err = i2c_add_adapter(&priv->adapter);
+ 	if (err) {
++		platform_device_unregister(priv->tco_pdev);
+ 		i801_acpi_remove(priv);
+ 		return err;
+ 	}
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index c1b6797372409..73c808ef1bfe5 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -675,6 +675,7 @@ static void npcm_i2c_callback(struct npcm_i2c *bus,
+ {
+ 	struct i2c_msg *msgs;
+ 	int msgs_num;
++	bool do_complete = false;
+ 
+ 	msgs = bus->msgs;
+ 	msgs_num = bus->msgs_num;
+@@ -701,23 +702,17 @@ static void npcm_i2c_callback(struct npcm_i2c *bus,
+ 				 msgs[1].flags & I2C_M_RD)
+ 				msgs[1].len = info;
+ 		}
+-		if (completion_done(&bus->cmd_complete) == false)
+-			complete(&bus->cmd_complete);
+-	break;
+-
++		do_complete = true;
++		break;
+ 	case I2C_NACK_IND:
+ 		/* MASTER transmit got a NACK before tx all bytes */
+ 		bus->cmd_err = -ENXIO;
+-		if (bus->master_or_slave == I2C_MASTER)
+-			complete(&bus->cmd_complete);
+-
++		do_complete = true;
+ 		break;
+ 	case I2C_BUS_ERR_IND:
+ 		/* Bus error */
+ 		bus->cmd_err = -EAGAIN;
+-		if (bus->master_or_slave == I2C_MASTER)
+-			complete(&bus->cmd_complete);
+-
++		do_complete = true;
+ 		break;
+ 	case I2C_WAKE_UP_IND:
+ 		/* I2C wake up */
+@@ -731,6 +726,8 @@ static void npcm_i2c_callback(struct npcm_i2c *bus,
+ 	if (bus->slave)
+ 		bus->master_or_slave = I2C_SLAVE;
+ #endif
++	if (do_complete)
++		complete(&bus->cmd_complete);
+ }
+ 
+ static u8 npcm_i2c_fifo_usage(struct npcm_i2c *bus)
+diff --git a/drivers/i2c/muxes/i2c-demux-pinctrl.c b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+index f7a7405d4350a..8e8688e8de0fb 100644
+--- a/drivers/i2c/muxes/i2c-demux-pinctrl.c
++++ b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+@@ -243,6 +243,10 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
+ 
+ 		props[i].name = devm_kstrdup(&pdev->dev, "status", GFP_KERNEL);
+ 		props[i].value = devm_kstrdup(&pdev->dev, "ok", GFP_KERNEL);
++		if (!props[i].name || !props[i].value) {
++			err = -ENOMEM;
++			goto err_rollback;
++		}
+ 		props[i].length = 3;
+ 
+ 		of_changeset_init(&priv->chan[i].chgset);
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 805678f6fe576..0603d069de921 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -4723,7 +4723,7 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
+ 	int err = 0;
+ 	struct sockaddr *addr = (struct sockaddr *)&mc->addr;
+ 	struct net_device *ndev = NULL;
+-	struct ib_sa_multicast ib;
++	struct ib_sa_multicast ib = {};
+ 	enum ib_gid_type gid_type;
+ 	bool send_only;
+ 
+diff --git a/drivers/infiniband/core/cma_configfs.c b/drivers/infiniband/core/cma_configfs.c
+index 35d1ec1095f9c..d7cc8d5dcf35c 100644
+--- a/drivers/infiniband/core/cma_configfs.c
++++ b/drivers/infiniband/core/cma_configfs.c
+@@ -221,7 +221,7 @@ static int make_cma_ports(struct cma_dev_group *cma_dev_group,
+ 	}
+ 
+ 	for (i = 0; i < ports_num; i++) {
+-		char port_str[10];
++		char port_str[11];
+ 
+ 		ports[i].port_num = i + 1;
+ 		snprintf(port_str, sizeof(port_str), "%u", i + 1);
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index f7f80707af4b6..f8dfec7ad7cc4 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -2148,6 +2148,7 @@ static const struct rdma_nl_cbs nldev_cb_table[RDMA_NLDEV_NUM_OPS] = {
+ 	},
+ 	[RDMA_NLDEV_CMD_SYS_SET] = {
+ 		.doit = nldev_set_sys_set_doit,
++		.flags = RDMA_NL_ADMIN_PERM,
+ 	},
+ 	[RDMA_NLDEV_CMD_STAT_SET] = {
+ 		.doit = nldev_stat_set_doit,
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 099f5acc749e5..f2fcdb70d9030 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -535,7 +535,7 @@ static ssize_t verify_hdr(struct ib_uverbs_cmd_hdr *hdr,
+ 	if (hdr->in_words * 4 != count)
+ 		return -EINVAL;
+ 
+-	if (count < method_elm->req_size + sizeof(hdr)) {
++	if (count < method_elm->req_size + sizeof(*hdr)) {
+ 		/*
+ 		 * rdma-core v18 and v19 have a bug where they send DESTROY_CQ
+ 		 * with a 16 byte write instead of 24. Old kernels didn't
+diff --git a/drivers/infiniband/hw/mlx4/sysfs.c b/drivers/infiniband/hw/mlx4/sysfs.c
+index ea1f3a081b05a..6c3a23ee3bc72 100644
+--- a/drivers/infiniband/hw/mlx4/sysfs.c
++++ b/drivers/infiniband/hw/mlx4/sysfs.c
+@@ -221,7 +221,7 @@ void del_sysfs_port_mcg_attr(struct mlx4_ib_dev *device, int port_num,
+ static int add_port_entries(struct mlx4_ib_dev *device, int port_num)
+ {
+ 	int i;
+-	char buff[11];
++	char buff[12];
+ 	struct mlx4_ib_iov_port *port = NULL;
+ 	int ret = 0 ;
+ 	struct ib_port_attr attr;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 215d6618839be..d36436d4277a0 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2069,7 +2069,7 @@ static inline char *mmap_cmd2str(enum mlx5_ib_mmap_cmd cmd)
+ 	case MLX5_IB_MMAP_DEVICE_MEM:
+ 		return "Device Memory";
+ 	default:
+-		return NULL;
++		return "Unknown";
+ 	}
+ }
+ 
+diff --git a/drivers/infiniband/sw/siw/siw_cm.c b/drivers/infiniband/sw/siw/siw_cm.c
+index de5ab282ac748..df5f675993c74 100644
+--- a/drivers/infiniband/sw/siw/siw_cm.c
++++ b/drivers/infiniband/sw/siw/siw_cm.c
+@@ -973,6 +973,7 @@ static void siw_accept_newconn(struct siw_cep *cep)
+ 			siw_cep_put(cep);
+ 			new_cep->listen_cep = NULL;
+ 			if (rv) {
++				siw_cancel_mpatimer(new_cep);
+ 				siw_cep_set_free(new_cep);
+ 				goto error;
+ 			}
+@@ -1097,9 +1098,12 @@ static void siw_cm_work_handler(struct work_struct *w)
+ 				/*
+ 				 * Socket close before MPA request received.
+ 				 */
+-				siw_dbg_cep(cep, "no mpareq: drop listener\n");
+-				siw_cep_put(cep->listen_cep);
+-				cep->listen_cep = NULL;
++				if (cep->listen_cep) {
++					siw_dbg_cep(cep,
++						"no mpareq: drop listener\n");
++					siw_cep_put(cep->listen_cep);
++					cep->listen_cep = NULL;
++				}
+ 			}
+ 		}
+ 		release_cep = 1;
+@@ -1222,7 +1226,11 @@ static void siw_cm_llp_data_ready(struct sock *sk)
+ 	if (!cep)
+ 		goto out;
+ 
+-	siw_dbg_cep(cep, "state: %d\n", cep->state);
++	siw_dbg_cep(cep, "cep state: %d, socket state %d\n",
++		    cep->state, sk->sk_state);
++
++	if (sk->sk_state != TCP_ESTABLISHED)
++		goto out;
+ 
+ 	switch (cep->state) {
+ 	case SIW_EPSTATE_RDMA_MODE:
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+new file mode 100644
+index 0000000000000..1bd5898abb97d
+--- /dev/null
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -0,0 +1,1597 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++#ifndef _I8042_ACPIPNPIO_H
++#define _I8042_ACPIPNPIO_H
++
++
++#ifdef CONFIG_X86
++#include <asm/x86_init.h>
++#endif
++
++/*
++ * Names.
++ */
++
++#define I8042_KBD_PHYS_DESC "isa0060/serio0"
++#define I8042_AUX_PHYS_DESC "isa0060/serio1"
++#define I8042_MUX_PHYS_DESC "isa0060/serio%d"
++
++/*
++ * IRQs.
++ */
++
++#if defined(__ia64__)
++# define I8042_MAP_IRQ(x)	isa_irq_to_vector((x))
++#else
++# define I8042_MAP_IRQ(x)	(x)
++#endif
++
++#define I8042_KBD_IRQ	i8042_kbd_irq
++#define I8042_AUX_IRQ	i8042_aux_irq
++
++static int i8042_kbd_irq;
++static int i8042_aux_irq;
++
++/*
++ * Register numbers.
++ */
++
++#define I8042_COMMAND_REG	i8042_command_reg
++#define I8042_STATUS_REG	i8042_command_reg
++#define I8042_DATA_REG		i8042_data_reg
++
++static int i8042_command_reg = 0x64;
++static int i8042_data_reg = 0x60;
++
++
++static inline int i8042_read_data(void)
++{
++	return inb(I8042_DATA_REG);
++}
++
++static inline int i8042_read_status(void)
++{
++	return inb(I8042_STATUS_REG);
++}
++
++static inline void i8042_write_data(int val)
++{
++	outb(val, I8042_DATA_REG);
++}
++
++static inline void i8042_write_command(int val)
++{
++	outb(val, I8042_COMMAND_REG);
++}
++
++#ifdef CONFIG_X86
++
++#include <linux/dmi.h>
++
++#define SERIO_QUIRK_NOKBD		BIT(0)
++#define SERIO_QUIRK_NOAUX		BIT(1)
++#define SERIO_QUIRK_NOMUX		BIT(2)
++#define SERIO_QUIRK_FORCEMUX		BIT(3)
++#define SERIO_QUIRK_UNLOCK		BIT(4)
++#define SERIO_QUIRK_PROBE_DEFER		BIT(5)
++#define SERIO_QUIRK_RESET_ALWAYS	BIT(6)
++#define SERIO_QUIRK_RESET_NEVER		BIT(7)
++#define SERIO_QUIRK_DIECT		BIT(8)
++#define SERIO_QUIRK_DUMBKBD		BIT(9)
++#define SERIO_QUIRK_NOLOOP		BIT(10)
++#define SERIO_QUIRK_NOTIMEOUT		BIT(11)
++#define SERIO_QUIRK_KBDRESET		BIT(12)
++#define SERIO_QUIRK_DRITEK		BIT(13)
++#define SERIO_QUIRK_NOPNP		BIT(14)
++
++/* Quirk table for different mainboards. Options similar or identical to i8042
++ * module parameters.
++ * ORDERING IS IMPORTANT! The first match will be apllied and the rest ignored.
++ * This allows entries to overwrite vendor wide quirks on a per device basis.
++ * Where this is irrelevant, entries are sorted case sensitive by DMI_SYS_VENDOR
++ * and/or DMI_BOARD_VENDOR to make it easier to avoid dublicate entries.
++ */
++static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ALIENWARE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Sentia"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X750LN"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* Asus X450LCP */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X450LCP"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_NEVER)
++	},
++	{
++		/* ASUS ZenBook UX425UA */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX425UA"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_PROBE_DEFER | SERIO_QUIRK_RESET_NEVER)
++	},
++	{
++		/* ASUS ZenBook UM325UA */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_PROBE_DEFER | SERIO_QUIRK_RESET_NEVER)
++	},
++	/*
++	 * On some Asus laptops, just running self tests cause problems.
++	 */
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_NEVER)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_CHASSIS_TYPE, "31"), /* Convertible Notebook */
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_NEVER)
++	},
++	{
++		/* ASUS P65UP5 - AUX LOOP command does not raise AUX IRQ */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "P/I-P65UP5"),
++			DMI_MATCH(DMI_BOARD_VERSION, "REV 2.X"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* ASUS G1S */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer Inc."),
++			DMI_MATCH(DMI_BOARD_NAME, "G1S"),
++			DMI_MATCH(DMI_BOARD_VERSION, "1.0"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 1360"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Acer Aspire 5710 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5710"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Acer Aspire 7738 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 7738"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Acer Aspire 5536 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5536"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "0100"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/*
++		 * Acer Aspire 5738z
++		 * Touchpad stops working in mux mode when dis- + re-enabled
++		 * with the touchpad enable/disable toggle hotkey
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5738"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Acer Aspire One 150 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "AOA150"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A114-31"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A314-31"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A315-31"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire ES1-132"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire ES1-332"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire ES1-432"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate Spin B118-RN"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	/*
++	 * Some Wistron based laptops need us to explicitly enable the 'Dritek
++	 * keyboard extension' to make their extra keys start generating scancodes.
++	 * Originally, this was just confined to older laptops, but a few Acer laptops
++	 * have turned up in 2007 that also need this again.
++	 */
++	{
++		/* Acer Aspire 5100 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5100"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
++	},
++	{
++		/* Acer Aspire 5610 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5610"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
++	},
++	{
++		/* Acer Aspire 5630 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5630"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
++	},
++	{
++		/* Acer Aspire 5650 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5650"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
++	},
++	{
++		/* Acer Aspire 5680 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5680"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
++	},
++	{
++		/* Acer Aspire 5720 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5720"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
++	},
++	{
++		/* Acer Aspire 9110 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 9110"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
++	},
++	{
++		/* Acer TravelMate 660 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 660"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
++	},
++	{
++		/* Acer TravelMate 2490 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 2490"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
++	},
++	{
++		/* Acer TravelMate 4280 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 4280"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
++	},
++	{
++		/* Amoi M636/A737 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Amoi Electronics CO.,LTD."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "M636/A737 platform"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ByteSpeed LLC"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ByteSpeed Laptop C15B"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* Compal HEL80I */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "COMPAL"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HEL80I"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ProLiant"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "8500"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ProLiant"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "DL760"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* Advent 4211 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "DIXONSXP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Advent 4211"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* Dell Embedded Box PC 3000 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Embedded Box PC 3000"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* Dell XPS M1530 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "XPS M1530"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Dell Vostro 1510 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro1510"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Dell Vostro V13 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V13"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_NOTIMEOUT)
++	},
++	{
++		/* Dell Vostro 1320 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1320"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* Dell Vostro 1520 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1520"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* Dell Vostro 1720 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1720"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* Entroware Proteus */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Entroware"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Proteus"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "EL07R4"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS)
++	},
++	/*
++	 * Some Fujitsu notebooks are having trouble with touchpads if
++	 * active multiplexing mode is activated. Luckily they don't have
++	 * external PS/2 ports so we can safely disable it.
++	 * ... apparently some Toshibas don't like MUX mode either and
++	 * die horrible death on reboot.
++	 */
++	{
++		/* Fujitsu Lifebook P7010/P7010D */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "P7010"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Fujitsu Lifebook P5020D */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook P Series"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Fujitsu Lifebook S2000 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S Series"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Fujitsu Lifebook S6230 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S6230"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Fujitsu Lifebook T725 laptop */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T725"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_NOTIMEOUT)
++	},
++	{
++		/* Fujitsu Lifebook U745 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U745"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Fujitsu T70H */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "FMVLT70H"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Fujitsu A544 laptop */
++		/* https://bugzilla.redhat.com/show_bug.cgi?id=1111138 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK A544"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOTIMEOUT)
++	},
++	{
++		/* Fujitsu AH544 laptop */
++		/* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK AH544"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOTIMEOUT)
++	},
++	{
++		/* Fujitsu U574 laptop */
++		/* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U574"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOTIMEOUT)
++	},
++	{
++		/* Fujitsu UH554 laptop */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK UH544"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOTIMEOUT)
++	},
++	{
++		/* Fujitsu Lifebook P7010 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "0000000000"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Fujitsu-Siemens Lifebook T3010 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T3010"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Fujitsu-Siemens Lifebook E4010 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E4010"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Fujitsu-Siemens Amilo Pro 2010 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "AMILO Pro V2010"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Fujitsu-Siemens Amilo Pro 2030 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "AMILO PRO V2030"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Fujitsu Lifebook A574/H */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "FMVA0501PZ"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Gigabyte M912 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "M912"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "01"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* Gigabyte Spring Peak - defines wrong chassis type */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Spring Peak"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* Gigabyte T1005 - defines wrong chassis type ("Other") */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "T1005"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* Gigabyte T1005M/P - defines wrong chassis type ("Other") */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "T1005M/P"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	/*
++	 * Some laptops need keyboard reset before probing for the trackpad to get
++	 * it detected, initialised & finally work.
++	 */
++	{
++		/* Gigabyte P35 v2 - Elantech touchpad */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "P35V2"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
++	},
++		{
++		/* Aorus branded Gigabyte X3 Plus - Elantech touchpad */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X3"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
++	},
++	{
++		/* Gigabyte P34 - Elantech touchpad */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "P34"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
++	},
++	{
++		/* Gigabyte P57 - Elantech touchpad */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "P57"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
++	},
++	{
++		/* Gericom Bellagio */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Gericom"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "N34AS6"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Gigabyte M1022M netbook */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co.,Ltd."),
++			DMI_MATCH(DMI_BOARD_NAME, "M1022E"),
++			DMI_MATCH(DMI_BOARD_VERSION, "1.02"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv9700"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Rev 1"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/*
++		 * HP Pavilion DV4017EA -
++		 * errors on MUX ports are reported without raising AUXDATA
++		 * causing "spurious NAK" messages.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Pavilion dv4000 (EA032EA#ABF)"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/*
++		 * HP Pavilion ZT1000 -
++		 * like DV4017EA does not raise AUXERR for errors on MUX ports.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion Notebook PC"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "HP Pavilion Notebook ZT1000"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/*
++		 * HP Pavilion DV4270ca -
++		 * like DV4017EA does not raise AUXERR for errors on MUX ports.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Pavilion dv4000 (EH476UA#ABL)"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Newer HP Pavilion dv4 models */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_NOTIMEOUT)
++	},
++	{
++		/* IBM 2656 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "IBM"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "2656"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Avatar AVIU-145A6 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "IC4I"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Intel MBO Desktop D845PESV */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_BOARD_NAME, "D845PESV"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOPNP)
++	},
++	{
++		/*
++		 * Intel NUC D54250WYK - does not have i8042 controller but
++		 * declares PS/2 devices in DSDT.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_BOARD_NAME, "D54250WYK"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOPNP)
++	},
++	{
++		/* Lenovo 3000 n100 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "076804U"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Lenovo XiaoXin Air 12 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "80UN"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Lenovo LaVie Z */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo LaVie Z"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Lenovo Ideapad U455 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "20046"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* Lenovo ThinkPad L460 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L460"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* Lenovo ThinkPad Twist S230u */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "33474HU"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* LG Electronics X110 */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LG Electronics Inc."),
++			DMI_MATCH(DMI_BOARD_NAME, "X110"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* Medion Akoya Mini E1210 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "MEDION"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "E1210"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* Medion Akoya E1222 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "MEDION"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "E122X"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* MSI Wind U-100 */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "MICRO-STAR INTERNATIONAL CO., LTD"),
++			DMI_MATCH(DMI_BOARD_NAME, "U-100"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOPNP)
++	},
++	{
++		/*
++		 * No data is coming from the touchscreen unless KBC
++		 * is in legacy mode.
++		 */
++		/* Panasonic CF-29 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Matsushita"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "CF-29"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Medion Akoya E7225 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Medion"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Akoya E7225"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "1.0"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* Microsoft Virtual Machine */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "VS2005R2"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* Medion MAM 2070 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MAM 2070"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "5a"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* TUXEDO BU1406 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "N24_25BU"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Clevo P650RS, 650RP6, Sager NP8152-S, and others */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "P65xRP"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	{
++		/* OQO Model 01 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "OQO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ZEPTO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "00"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "PEGATRON CORPORATION"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "C15B"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* Acer Aspire 5 A515 */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "PK"),
++			DMI_MATCH(DMI_BOARD_NAME, "Grumpy_PK"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOPNP)
++	},
++	{
++		/* ULI EV4873 - AUX LOOP does not work properly */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ULI"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "EV4873"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "5a"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/*
++		 * Arima-Rioworks HDAMB -
++		 * AUX LOOP command does not raise AUX IRQ
++		 */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "RIOWORKS"),
++			DMI_MATCH(DMI_BOARD_NAME, "HDAMB"),
++			DMI_MATCH(DMI_BOARD_VERSION, "Rev E"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	{
++		/* Sharp Actius MM20 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "SHARP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "PC-MM20 Series"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/*
++		 * Sony Vaio FZ-240E -
++		 * reset and GET ID commands issued via KBD port are
++		 * sometimes being delivered to AUX3.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FZ240E"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/*
++		 * Most (all?) VAIOs do not have external PS/2 ports nor
++		 * they implement active multiplexing properly, and
++		 * MUX discovery usually messes up keyboard/touchpad.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
++			DMI_MATCH(DMI_BOARD_NAME, "VAIO"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/* Sony Vaio FS-115b */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FS115B"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		/*
++		 * Sony Vaio VGN-CS series require MUX or the touch sensor
++		 * buttons will disturb touchpad operation
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "VGN-CS"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_FORCEMUX)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Satellite P10"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "EQUIUM A110"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "SATELLITE C850D"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
++	/*
++	 * A lot of modern Clevo barebones have touchpad and/or keyboard issues
++	 * after suspend fixable with nomux + reset + noloop + nopnp. Luckily,
++	 * none of them have an external PS/2 port so this can safely be set for
++	 * all of them. These two are based on a Clevo design, but have the
++	 * board_name changed.
++	 */
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"),
++			DMI_MATCH(DMI_BOARD_NAME, "AURA1501"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"),
++			DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		/* Mivvy M310 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "VIOOO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "N10"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
++	},
++	/*
++	 * Some laptops need keyboard reset before probing for the trackpad to get
++	 * it detected, initialised & finally work.
++	 */
++	{
++		/* Schenker XMG C504 - Elantech touchpad */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "XMG"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "C504"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
++	},
++	{
++		/* Blue FB5601 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "blue"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "FB5601"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "M606"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
++	},
++	/*
++	 * A lot of modern Clevo barebones have touchpad and/or keyboard issues
++	 * after suspend fixable with nomux + reset + noloop + nopnp. Luckily,
++	 * none of them have an external PS/2 port so this can safely be set for
++	 * all of them.
++	 * Clevo barebones come with board_vendor and/or system_vendor set to
++	 * either the very generic string "Notebook" and/or a different value
++	 * for each individual reseller. The only somewhat universal way to
++	 * identify them is by board_name.
++	 */
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "LAPQC71A"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "LAPQC71B"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "N140CU"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "N141CU"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "NH5xAx"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	/*
++	 * At least one modern Clevo barebone has the touchpad connected both
++	 * via PS/2 and i2c interface. This causes a race condition between the
++	 * psmouse and i2c-hid driver. Since the full capability of the touchpad
++	 * is available via the i2c interface and the device has no external
++	 * PS/2 port, it is safe to just ignore all ps2 mouses here to avoid
++	 * this issue. The known affected device is the
++	 * TUXEDO InfinityBook S17 Gen6 / Clevo NS70MU which comes with one of
++	 * the two different dmi strings below. NS50MU is not a typo!
++	 */
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "NS50MU"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX |
++					SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP |
++					SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "NS50_70MU"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX |
++					SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP |
++					SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "NJ50_70CU"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "PB50_70DFx,DDx"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "PCX0DX"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	/* See comment on TUXEDO InfinityBook S17 Gen6 / Clevo NS70MU above */
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "PD5x_7xPNP_PNR_PNN_PNT"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOAUX)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "X170SM"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "X170KM-G"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{ }
++};
++
++#ifdef CONFIG_PNP
++static const struct dmi_system_id i8042_dmi_laptop_table[] __initconst = {
++	{
++		.matches = {
++			DMI_MATCH(DMI_CHASSIS_TYPE, "8"), /* Portable */
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_CHASSIS_TYPE, "9"), /* Laptop */
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_CHASSIS_TYPE, "14"), /* Sub-Notebook */
++		},
++	},
++	{ }
++};
++#endif
++
++#endif /* CONFIG_X86 */
++
++#ifdef CONFIG_PNP
++#include <linux/pnp.h>
++
++static bool i8042_pnp_kbd_registered;
++static unsigned int i8042_pnp_kbd_devices;
++static bool i8042_pnp_aux_registered;
++static unsigned int i8042_pnp_aux_devices;
++
++static int i8042_pnp_command_reg;
++static int i8042_pnp_data_reg;
++static int i8042_pnp_kbd_irq;
++static int i8042_pnp_aux_irq;
++
++static char i8042_pnp_kbd_name[32];
++static char i8042_pnp_aux_name[32];
++
++static void i8042_pnp_id_to_string(struct pnp_id *id, char *dst, int dst_size)
++{
++	strlcpy(dst, "PNP:", dst_size);
++
++	while (id) {
++		strlcat(dst, " ", dst_size);
++		strlcat(dst, id->id, dst_size);
++		id = id->next;
++	}
++}
++
++static int i8042_pnp_kbd_probe(struct pnp_dev *dev, const struct pnp_device_id *did)
++{
++	if (pnp_port_valid(dev, 0) && pnp_port_len(dev, 0) == 1)
++		i8042_pnp_data_reg = pnp_port_start(dev,0);
++
++	if (pnp_port_valid(dev, 1) && pnp_port_len(dev, 1) == 1)
++		i8042_pnp_command_reg = pnp_port_start(dev, 1);
++
++	if (pnp_irq_valid(dev,0))
++		i8042_pnp_kbd_irq = pnp_irq(dev, 0);
++
++	strlcpy(i8042_pnp_kbd_name, did->id, sizeof(i8042_pnp_kbd_name));
++	if (strlen(pnp_dev_name(dev))) {
++		strlcat(i8042_pnp_kbd_name, ":", sizeof(i8042_pnp_kbd_name));
++		strlcat(i8042_pnp_kbd_name, pnp_dev_name(dev), sizeof(i8042_pnp_kbd_name));
++	}
++	i8042_pnp_id_to_string(dev->id, i8042_kbd_firmware_id,
++			       sizeof(i8042_kbd_firmware_id));
++	i8042_kbd_fwnode = dev_fwnode(&dev->dev);
++
++	/* Keyboard ports are always supposed to be wakeup-enabled */
++	device_set_wakeup_enable(&dev->dev, true);
++
++	i8042_pnp_kbd_devices++;
++	return 0;
++}
++
++static int i8042_pnp_aux_probe(struct pnp_dev *dev, const struct pnp_device_id *did)
++{
++	if (pnp_port_valid(dev, 0) && pnp_port_len(dev, 0) == 1)
++		i8042_pnp_data_reg = pnp_port_start(dev,0);
++
++	if (pnp_port_valid(dev, 1) && pnp_port_len(dev, 1) == 1)
++		i8042_pnp_command_reg = pnp_port_start(dev, 1);
++
++	if (pnp_irq_valid(dev, 0))
++		i8042_pnp_aux_irq = pnp_irq(dev, 0);
++
++	strlcpy(i8042_pnp_aux_name, did->id, sizeof(i8042_pnp_aux_name));
++	if (strlen(pnp_dev_name(dev))) {
++		strlcat(i8042_pnp_aux_name, ":", sizeof(i8042_pnp_aux_name));
++		strlcat(i8042_pnp_aux_name, pnp_dev_name(dev), sizeof(i8042_pnp_aux_name));
++	}
++	i8042_pnp_id_to_string(dev->id, i8042_aux_firmware_id,
++			       sizeof(i8042_aux_firmware_id));
++
++	i8042_pnp_aux_devices++;
++	return 0;
++}
++
++static const struct pnp_device_id pnp_kbd_devids[] = {
++	{ .id = "PNP0300", .driver_data = 0 },
++	{ .id = "PNP0301", .driver_data = 0 },
++	{ .id = "PNP0302", .driver_data = 0 },
++	{ .id = "PNP0303", .driver_data = 0 },
++	{ .id = "PNP0304", .driver_data = 0 },
++	{ .id = "PNP0305", .driver_data = 0 },
++	{ .id = "PNP0306", .driver_data = 0 },
++	{ .id = "PNP0309", .driver_data = 0 },
++	{ .id = "PNP030a", .driver_data = 0 },
++	{ .id = "PNP030b", .driver_data = 0 },
++	{ .id = "PNP0320", .driver_data = 0 },
++	{ .id = "PNP0343", .driver_data = 0 },
++	{ .id = "PNP0344", .driver_data = 0 },
++	{ .id = "PNP0345", .driver_data = 0 },
++	{ .id = "CPQA0D7", .driver_data = 0 },
++	{ .id = "", },
++};
++MODULE_DEVICE_TABLE(pnp, pnp_kbd_devids);
++
++static struct pnp_driver i8042_pnp_kbd_driver = {
++	.name           = "i8042 kbd",
++	.id_table       = pnp_kbd_devids,
++	.probe          = i8042_pnp_kbd_probe,
++	.driver         = {
++		.probe_type = PROBE_FORCE_SYNCHRONOUS,
++		.suppress_bind_attrs = true,
++	},
++};
++
++static const struct pnp_device_id pnp_aux_devids[] = {
++	{ .id = "AUI0200", .driver_data = 0 },
++	{ .id = "FJC6000", .driver_data = 0 },
++	{ .id = "FJC6001", .driver_data = 0 },
++	{ .id = "PNP0f03", .driver_data = 0 },
++	{ .id = "PNP0f0b", .driver_data = 0 },
++	{ .id = "PNP0f0e", .driver_data = 0 },
++	{ .id = "PNP0f12", .driver_data = 0 },
++	{ .id = "PNP0f13", .driver_data = 0 },
++	{ .id = "PNP0f19", .driver_data = 0 },
++	{ .id = "PNP0f1c", .driver_data = 0 },
++	{ .id = "SYN0801", .driver_data = 0 },
++	{ .id = "", },
++};
++MODULE_DEVICE_TABLE(pnp, pnp_aux_devids);
++
++static struct pnp_driver i8042_pnp_aux_driver = {
++	.name           = "i8042 aux",
++	.id_table       = pnp_aux_devids,
++	.probe          = i8042_pnp_aux_probe,
++	.driver         = {
++		.probe_type = PROBE_FORCE_SYNCHRONOUS,
++		.suppress_bind_attrs = true,
++	},
++};
++
++static void i8042_pnp_exit(void)
++{
++	if (i8042_pnp_kbd_registered) {
++		i8042_pnp_kbd_registered = false;
++		pnp_unregister_driver(&i8042_pnp_kbd_driver);
++	}
++
++	if (i8042_pnp_aux_registered) {
++		i8042_pnp_aux_registered = false;
++		pnp_unregister_driver(&i8042_pnp_aux_driver);
++	}
++}
++
++static int __init i8042_pnp_init(void)
++{
++	char kbd_irq_str[4] = { 0 }, aux_irq_str[4] = { 0 };
++	bool pnp_data_busted = false;
++	int err;
++
++	if (i8042_nopnp) {
++		pr_info("PNP detection disabled\n");
++		return 0;
++	}
++
++	err = pnp_register_driver(&i8042_pnp_kbd_driver);
++	if (!err)
++		i8042_pnp_kbd_registered = true;
++
++	err = pnp_register_driver(&i8042_pnp_aux_driver);
++	if (!err)
++		i8042_pnp_aux_registered = true;
++
++	if (!i8042_pnp_kbd_devices && !i8042_pnp_aux_devices) {
++		i8042_pnp_exit();
++#if defined(__ia64__)
++		return -ENODEV;
++#else
++		pr_info("PNP: No PS/2 controller found.\n");
++		if (x86_platform.legacy.i8042 !=
++				X86_LEGACY_I8042_EXPECTED_PRESENT)
++			return -ENODEV;
++		pr_info("Probing ports directly.\n");
++		return 0;
++#endif
++	}
++
++	if (i8042_pnp_kbd_devices)
++		snprintf(kbd_irq_str, sizeof(kbd_irq_str),
++			"%d", i8042_pnp_kbd_irq);
++	if (i8042_pnp_aux_devices)
++		snprintf(aux_irq_str, sizeof(aux_irq_str),
++			"%d", i8042_pnp_aux_irq);
++
++	pr_info("PNP: PS/2 Controller [%s%s%s] at %#x,%#x irq %s%s%s\n",
++		i8042_pnp_kbd_name, (i8042_pnp_kbd_devices && i8042_pnp_aux_devices) ? "," : "",
++		i8042_pnp_aux_name,
++		i8042_pnp_data_reg, i8042_pnp_command_reg,
++		kbd_irq_str, (i8042_pnp_kbd_devices && i8042_pnp_aux_devices) ? "," : "",
++		aux_irq_str);
++
++#if defined(__ia64__)
++	if (!i8042_pnp_kbd_devices)
++		i8042_nokbd = true;
++	if (!i8042_pnp_aux_devices)
++		i8042_noaux = true;
++#endif
++
++	if (((i8042_pnp_data_reg & ~0xf) == (i8042_data_reg & ~0xf) &&
++	      i8042_pnp_data_reg != i8042_data_reg) ||
++	    !i8042_pnp_data_reg) {
++		pr_warn("PNP: PS/2 controller has invalid data port %#x; using default %#x\n",
++			i8042_pnp_data_reg, i8042_data_reg);
++		i8042_pnp_data_reg = i8042_data_reg;
++		pnp_data_busted = true;
++	}
++
++	if (((i8042_pnp_command_reg & ~0xf) == (i8042_command_reg & ~0xf) &&
++	      i8042_pnp_command_reg != i8042_command_reg) ||
++	    !i8042_pnp_command_reg) {
++		pr_warn("PNP: PS/2 controller has invalid command port %#x; using default %#x\n",
++			i8042_pnp_command_reg, i8042_command_reg);
++		i8042_pnp_command_reg = i8042_command_reg;
++		pnp_data_busted = true;
++	}
++
++	if (!i8042_nokbd && !i8042_pnp_kbd_irq) {
++		pr_warn("PNP: PS/2 controller doesn't have KBD irq; using default %d\n",
++			i8042_kbd_irq);
++		i8042_pnp_kbd_irq = i8042_kbd_irq;
++		pnp_data_busted = true;
++	}
++
++	if (!i8042_noaux && !i8042_pnp_aux_irq) {
++		if (!pnp_data_busted && i8042_pnp_kbd_irq) {
++			pr_warn("PNP: PS/2 appears to have AUX port disabled, "
++				"if this is incorrect please boot with i8042.nopnp\n");
++			i8042_noaux = true;
++		} else {
++			pr_warn("PNP: PS/2 controller doesn't have AUX irq; using default %d\n",
++				i8042_aux_irq);
++			i8042_pnp_aux_irq = i8042_aux_irq;
++		}
++	}
++
++	i8042_data_reg = i8042_pnp_data_reg;
++	i8042_command_reg = i8042_pnp_command_reg;
++	i8042_kbd_irq = i8042_pnp_kbd_irq;
++	i8042_aux_irq = i8042_pnp_aux_irq;
++
++#ifdef CONFIG_X86
++	i8042_bypass_aux_irq_test = !pnp_data_busted &&
++				    dmi_check_system(i8042_dmi_laptop_table);
++#endif
++
++	return 0;
++}
++
++#else  /* !CONFIG_PNP */
++static inline int i8042_pnp_init(void) { return 0; }
++static inline void i8042_pnp_exit(void) { }
++#endif /* CONFIG_PNP */
++
++
++#ifdef CONFIG_X86
++static void __init i8042_check_quirks(void)
++{
++	const struct dmi_system_id *device_quirk_info;
++	uintptr_t quirks;
++
++	device_quirk_info = dmi_first_match(i8042_dmi_quirk_table);
++	if (!device_quirk_info)
++		return;
++
++	quirks = (uintptr_t)device_quirk_info->driver_data;
++
++	if (quirks & SERIO_QUIRK_NOKBD)
++		i8042_nokbd = true;
++	if (quirks & SERIO_QUIRK_NOAUX)
++		i8042_noaux = true;
++	if (quirks & SERIO_QUIRK_NOMUX)
++		i8042_nomux = true;
++	if (quirks & SERIO_QUIRK_FORCEMUX)
++		i8042_nomux = false;
++	if (quirks & SERIO_QUIRK_UNLOCK)
++		i8042_unlock = true;
++	if (quirks & SERIO_QUIRK_PROBE_DEFER)
++		i8042_probe_defer = true;
++	/* Honor module parameter when value is not default */
++	if (i8042_reset == I8042_RESET_DEFAULT) {
++		if (quirks & SERIO_QUIRK_RESET_ALWAYS)
++			i8042_reset = I8042_RESET_ALWAYS;
++		if (quirks & SERIO_QUIRK_RESET_NEVER)
++			i8042_reset = I8042_RESET_NEVER;
++	}
++	if (quirks & SERIO_QUIRK_DIECT)
++		i8042_direct = true;
++	if (quirks & SERIO_QUIRK_DUMBKBD)
++		i8042_dumbkbd = true;
++	if (quirks & SERIO_QUIRK_NOLOOP)
++		i8042_noloop = true;
++	if (quirks & SERIO_QUIRK_NOTIMEOUT)
++		i8042_notimeout = true;
++	if (quirks & SERIO_QUIRK_KBDRESET)
++		i8042_kbdreset = true;
++	if (quirks & SERIO_QUIRK_DRITEK)
++		i8042_dritek = true;
++#ifdef CONFIG_PNP
++	if (quirks & SERIO_QUIRK_NOPNP)
++		i8042_nopnp = true;
++#endif
++}
++#else
++static inline void i8042_check_quirks(void) {}
++#endif
++
++static int __init i8042_platform_init(void)
++{
++	int retval;
++
++#ifdef CONFIG_X86
++	u8 a20_on = 0xdf;
++	/* Just return if platform does not have i8042 controller */
++	if (x86_platform.legacy.i8042 == X86_LEGACY_I8042_PLATFORM_ABSENT)
++		return -ENODEV;
++#endif
++
++/*
++ * On ix86 platforms touching the i8042 data register region can do really
++ * bad things. Because of this the region is always reserved on ix86 boxes.
++ *
++ *	if (!request_region(I8042_DATA_REG, 16, "i8042"))
++ *		return -EBUSY;
++ */
++
++	i8042_kbd_irq = I8042_MAP_IRQ(1);
++	i8042_aux_irq = I8042_MAP_IRQ(12);
++
++#if defined(__ia64__)
++	i8042_reset = I8042_RESET_ALWAYS;
++#endif
++
++	i8042_check_quirks();
++
++	retval = i8042_pnp_init();
++	if (retval)
++		return retval;
++
++#ifdef CONFIG_X86
++	/*
++	 * A20 was already enabled during early kernel init. But some buggy
++	 * BIOSes (in MSI Laptops) require A20 to be enabled using 8042 to
++	 * resume from S3. So we do it here and hope that nothing breaks.
++	 */
++	i8042_command(&a20_on, 0x10d1);
++	i8042_command(NULL, 0x00ff);	/* Null command for SMM firmware */
++#endif /* CONFIG_X86 */
++
++	return retval;
++}
++
++static inline void i8042_platform_exit(void)
++{
++	i8042_pnp_exit();
++}
++
++#endif /* _I8042_ACPIPNPIO_H */
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+deleted file mode 100644
+index 9dcdf21c50bdc..0000000000000
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ /dev/null
+@@ -1,1590 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-only */
+-#ifndef _I8042_X86IA64IO_H
+-#define _I8042_X86IA64IO_H
+-
+-
+-#ifdef CONFIG_X86
+-#include <asm/x86_init.h>
+-#endif
+-
+-/*
+- * Names.
+- */
+-
+-#define I8042_KBD_PHYS_DESC "isa0060/serio0"
+-#define I8042_AUX_PHYS_DESC "isa0060/serio1"
+-#define I8042_MUX_PHYS_DESC "isa0060/serio%d"
+-
+-/*
+- * IRQs.
+- */
+-
+-#if defined(__ia64__)
+-# define I8042_MAP_IRQ(x)	isa_irq_to_vector((x))
+-#else
+-# define I8042_MAP_IRQ(x)	(x)
+-#endif
+-
+-#define I8042_KBD_IRQ	i8042_kbd_irq
+-#define I8042_AUX_IRQ	i8042_aux_irq
+-
+-static int i8042_kbd_irq;
+-static int i8042_aux_irq;
+-
+-/*
+- * Register numbers.
+- */
+-
+-#define I8042_COMMAND_REG	i8042_command_reg
+-#define I8042_STATUS_REG	i8042_command_reg
+-#define I8042_DATA_REG		i8042_data_reg
+-
+-static int i8042_command_reg = 0x64;
+-static int i8042_data_reg = 0x60;
+-
+-
+-static inline int i8042_read_data(void)
+-{
+-	return inb(I8042_DATA_REG);
+-}
+-
+-static inline int i8042_read_status(void)
+-{
+-	return inb(I8042_STATUS_REG);
+-}
+-
+-static inline void i8042_write_data(int val)
+-{
+-	outb(val, I8042_DATA_REG);
+-}
+-
+-static inline void i8042_write_command(int val)
+-{
+-	outb(val, I8042_COMMAND_REG);
+-}
+-
+-#ifdef CONFIG_X86
+-
+-#include <linux/dmi.h>
+-
+-#define SERIO_QUIRK_NOKBD		BIT(0)
+-#define SERIO_QUIRK_NOAUX		BIT(1)
+-#define SERIO_QUIRK_NOMUX		BIT(2)
+-#define SERIO_QUIRK_FORCEMUX		BIT(3)
+-#define SERIO_QUIRK_UNLOCK		BIT(4)
+-#define SERIO_QUIRK_PROBE_DEFER		BIT(5)
+-#define SERIO_QUIRK_RESET_ALWAYS	BIT(6)
+-#define SERIO_QUIRK_RESET_NEVER		BIT(7)
+-#define SERIO_QUIRK_DIECT		BIT(8)
+-#define SERIO_QUIRK_DUMBKBD		BIT(9)
+-#define SERIO_QUIRK_NOLOOP		BIT(10)
+-#define SERIO_QUIRK_NOTIMEOUT		BIT(11)
+-#define SERIO_QUIRK_KBDRESET		BIT(12)
+-#define SERIO_QUIRK_DRITEK		BIT(13)
+-#define SERIO_QUIRK_NOPNP		BIT(14)
+-
+-/* Quirk table for different mainboards. Options similar or identical to i8042
+- * module parameters.
+- * ORDERING IS IMPORTANT! The first match will be apllied and the rest ignored.
+- * This allows entries to overwrite vendor wide quirks on a per device basis.
+- * Where this is irrelevant, entries are sorted case sensitive by DMI_SYS_VENDOR
+- * and/or DMI_BOARD_VENDOR to make it easier to avoid dublicate entries.
+- */
+-static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ALIENWARE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Sentia"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "X750LN"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* Asus X450LCP */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "X450LCP"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_NEVER)
+-	},
+-	{
+-		/* ASUS ZenBook UX425UA */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX425UA"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_PROBE_DEFER | SERIO_QUIRK_RESET_NEVER)
+-	},
+-	{
+-		/* ASUS ZenBook UM325UA */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_PROBE_DEFER | SERIO_QUIRK_RESET_NEVER)
+-	},
+-	/*
+-	 * On some Asus laptops, just running self tests cause problems.
+-	 */
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_NEVER)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_CHASSIS_TYPE, "31"), /* Convertible Notebook */
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_NEVER)
+-	},
+-	{
+-		/* ASUS P65UP5 - AUX LOOP command does not raise AUX IRQ */
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer INC."),
+-			DMI_MATCH(DMI_BOARD_NAME, "P/I-P65UP5"),
+-			DMI_MATCH(DMI_BOARD_VERSION, "REV 2.X"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* ASUS G1S */
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer Inc."),
+-			DMI_MATCH(DMI_BOARD_NAME, "G1S"),
+-			DMI_MATCH(DMI_BOARD_VERSION, "1.0"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 1360"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Acer Aspire 5710 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5710"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Acer Aspire 7738 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 7738"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Acer Aspire 5536 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5536"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "0100"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/*
+-		 * Acer Aspire 5738z
+-		 * Touchpad stops working in mux mode when dis- + re-enabled
+-		 * with the touchpad enable/disable toggle hotkey
+-		 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5738"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Acer Aspire One 150 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "AOA150"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A114-31"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A314-31"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A315-31"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire ES1-132"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire ES1-332"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire ES1-432"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate Spin B118-RN"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	/*
+-	 * Some Wistron based laptops need us to explicitly enable the 'Dritek
+-	 * keyboard extension' to make their extra keys start generating scancodes.
+-	 * Originally, this was just confined to older laptops, but a few Acer laptops
+-	 * have turned up in 2007 that also need this again.
+-	 */
+-	{
+-		/* Acer Aspire 5100 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5100"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+-	},
+-	{
+-		/* Acer Aspire 5610 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5610"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+-	},
+-	{
+-		/* Acer Aspire 5630 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5630"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+-	},
+-	{
+-		/* Acer Aspire 5650 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5650"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+-	},
+-	{
+-		/* Acer Aspire 5680 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5680"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+-	},
+-	{
+-		/* Acer Aspire 5720 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5720"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+-	},
+-	{
+-		/* Acer Aspire 9110 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 9110"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+-	},
+-	{
+-		/* Acer TravelMate 660 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 660"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+-	},
+-	{
+-		/* Acer TravelMate 2490 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 2490"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+-	},
+-	{
+-		/* Acer TravelMate 4280 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 4280"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+-	},
+-	{
+-		/* Amoi M636/A737 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Amoi Electronics CO.,LTD."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "M636/A737 platform"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ByteSpeed LLC"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "ByteSpeed Laptop C15B"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* Compal HEL80I */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "COMPAL"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HEL80I"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "ProLiant"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "8500"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "ProLiant"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "DL760"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* Advent 4211 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "DIXONSXP"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Advent 4211"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		/* Dell Embedded Box PC 3000 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Embedded Box PC 3000"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* Dell XPS M1530 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "XPS M1530"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Dell Vostro 1510 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro1510"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Dell Vostro V13 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V13"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_NOTIMEOUT)
+-	},
+-	{
+-		/* Dell Vostro 1320 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1320"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		/* Dell Vostro 1520 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1520"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		/* Dell Vostro 1720 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1720"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		/* Entroware Proteus */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Entroware"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Proteus"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "EL07R4"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	/*
+-	 * Some Fujitsu notebooks are having trouble with touchpads if
+-	 * active multiplexing mode is activated. Luckily they don't have
+-	 * external PS/2 ports so we can safely disable it.
+-	 * ... apparently some Toshibas don't like MUX mode either and
+-	 * die horrible death on reboot.
+-	 */
+-	{
+-		/* Fujitsu Lifebook P7010/P7010D */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "P7010"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Fujitsu Lifebook P5020D */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook P Series"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Fujitsu Lifebook S2000 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S Series"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Fujitsu Lifebook S6230 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S6230"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Fujitsu Lifebook T725 laptop */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T725"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_NOTIMEOUT)
+-	},
+-	{
+-		/* Fujitsu Lifebook U745 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U745"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Fujitsu T70H */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "FMVLT70H"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Fujitsu A544 laptop */
+-		/* https://bugzilla.redhat.com/show_bug.cgi?id=1111138 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK A544"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOTIMEOUT)
+-	},
+-	{
+-		/* Fujitsu AH544 laptop */
+-		/* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK AH544"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOTIMEOUT)
+-	},
+-	{
+-		/* Fujitsu U574 laptop */
+-		/* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U574"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOTIMEOUT)
+-	},
+-	{
+-		/* Fujitsu UH554 laptop */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK UH544"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOTIMEOUT)
+-	},
+-	{
+-		/* Fujitsu Lifebook P7010 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "0000000000"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Fujitsu-Siemens Lifebook T3010 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T3010"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Fujitsu-Siemens Lifebook E4010 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E4010"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Fujitsu-Siemens Amilo Pro 2010 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "AMILO Pro V2010"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Fujitsu-Siemens Amilo Pro 2030 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "AMILO PRO V2030"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Fujitsu Lifebook A574/H */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "FMVA0501PZ"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Gigabyte M912 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "M912"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "01"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* Gigabyte Spring Peak - defines wrong chassis type */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Spring Peak"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* Gigabyte T1005 - defines wrong chassis type ("Other") */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "T1005"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* Gigabyte T1005M/P - defines wrong chassis type ("Other") */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "T1005M/P"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	/*
+-	 * Some laptops need keyboard reset before probing for the trackpad to get
+-	 * it detected, initialised & finally work.
+-	 */
+-	{
+-		/* Gigabyte P35 v2 - Elantech touchpad */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "P35V2"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
+-	},
+-		{
+-		/* Aorus branded Gigabyte X3 Plus - Elantech touchpad */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "X3"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
+-	},
+-	{
+-		/* Gigabyte P34 - Elantech touchpad */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "P34"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
+-	},
+-	{
+-		/* Gigabyte P57 - Elantech touchpad */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "P57"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
+-	},
+-	{
+-		/* Gericom Bellagio */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Gericom"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "N34AS6"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Gigabyte M1022M netbook */
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co.,Ltd."),
+-			DMI_MATCH(DMI_BOARD_NAME, "M1022E"),
+-			DMI_MATCH(DMI_BOARD_VERSION, "1.02"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv9700"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "Rev 1"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/*
+-		 * HP Pavilion DV4017EA -
+-		 * errors on MUX ports are reported without raising AUXDATA
+-		 * causing "spurious NAK" messages.
+-		 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Pavilion dv4000 (EA032EA#ABF)"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/*
+-		 * HP Pavilion ZT1000 -
+-		 * like DV4017EA does not raise AUXERR for errors on MUX ports.
+-		 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion Notebook PC"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "HP Pavilion Notebook ZT1000"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/*
+-		 * HP Pavilion DV4270ca -
+-		 * like DV4017EA does not raise AUXERR for errors on MUX ports.
+-		 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Pavilion dv4000 (EH476UA#ABL)"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Newer HP Pavilion dv4 models */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_NOTIMEOUT)
+-	},
+-	{
+-		/* IBM 2656 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "IBM"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "2656"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Avatar AVIU-145A6 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Intel"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "IC4I"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Intel MBO Desktop D845PESV */
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "Intel Corporation"),
+-			DMI_MATCH(DMI_BOARD_NAME, "D845PESV"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		/*
+-		 * Intel NUC D54250WYK - does not have i8042 controller but
+-		 * declares PS/2 devices in DSDT.
+-		 */
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "Intel Corporation"),
+-			DMI_MATCH(DMI_BOARD_NAME, "D54250WYK"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		/* Lenovo 3000 n100 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "076804U"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Lenovo XiaoXin Air 12 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "80UN"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Lenovo LaVie Z */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo LaVie Z"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Lenovo Ideapad U455 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "20046"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		/* Lenovo ThinkPad L460 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L460"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		/* Lenovo ThinkPad Twist S230u */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "33474HU"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		/* LG Electronics X110 */
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "LG Electronics Inc."),
+-			DMI_MATCH(DMI_BOARD_NAME, "X110"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		/* Medion Akoya Mini E1210 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "MEDION"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "E1210"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		/* Medion Akoya E1222 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "MEDION"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "E122X"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		/* MSI Wind U-100 */
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "MICRO-STAR INTERNATIONAL CO., LTD"),
+-			DMI_MATCH(DMI_BOARD_NAME, "U-100"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		/*
+-		 * No data is coming from the touchscreen unless KBC
+-		 * is in legacy mode.
+-		 */
+-		/* Panasonic CF-29 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Matsushita"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "CF-29"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Medion Akoya E7225 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Medion"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Akoya E7225"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "1.0"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* Microsoft Virtual Machine */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "VS2005R2"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* Medion MAM 2070 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "MAM 2070"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "5a"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* TUXEDO BU1406 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "N24_25BU"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Clevo P650RS, 650RP6, Sager NP8152-S, and others */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "P65xRP"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	{
+-		/* OQO Model 01 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "OQO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "ZEPTO"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "00"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "PEGATRON CORPORATION"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "C15B"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* Acer Aspire 5 A515 */
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "PK"),
+-			DMI_MATCH(DMI_BOARD_NAME, "Grumpy_PK"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		/* ULI EV4873 - AUX LOOP does not work properly */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ULI"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "EV4873"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "5a"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/*
+-		 * Arima-Rioworks HDAMB -
+-		 * AUX LOOP command does not raise AUX IRQ
+-		 */
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "RIOWORKS"),
+-			DMI_MATCH(DMI_BOARD_NAME, "HDAMB"),
+-			DMI_MATCH(DMI_BOARD_VERSION, "Rev E"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	{
+-		/* Sharp Actius MM20 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "SHARP"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "PC-MM20 Series"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/*
+-		 * Sony Vaio FZ-240E -
+-		 * reset and GET ID commands issued via KBD port are
+-		 * sometimes being delivered to AUX3.
+-		 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FZ240E"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/*
+-		 * Most (all?) VAIOs do not have external PS/2 ports nor
+-		 * they implement active multiplexing properly, and
+-		 * MUX discovery usually messes up keyboard/touchpad.
+-		 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+-			DMI_MATCH(DMI_BOARD_NAME, "VAIO"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/* Sony Vaio FS-115b */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FS115B"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		/*
+-		 * Sony Vaio VGN-CS series require MUX or the touch sensor
+-		 * buttons will disturb touchpad operation
+-		 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "VGN-CS"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_FORCEMUX)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Satellite P10"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "EQUIUM A110"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "SATELLITE C850D"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+-	},
+-	/*
+-	 * A lot of modern Clevo barebones have touchpad and/or keyboard issues
+-	 * after suspend fixable with nomux + reset + noloop + nopnp. Luckily,
+-	 * none of them have an external PS/2 port so this can safely be set for
+-	 * all of them. These two are based on a Clevo design, but have the
+-	 * board_name changed.
+-	 */
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"),
+-			DMI_MATCH(DMI_BOARD_NAME, "AURA1501"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"),
+-			DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		/* Mivvy M310 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "VIOOO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "N10"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_RESET_ALWAYS)
+-	},
+-	/*
+-	 * Some laptops need keyboard reset before probing for the trackpad to get
+-	 * it detected, initialised & finally work.
+-	 */
+-	{
+-		/* Schenker XMG C504 - Elantech touchpad */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "XMG"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "C504"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_KBDRESET)
+-	},
+-	{
+-		/* Blue FB5601 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "blue"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "FB5601"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "M606"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+-	},
+-	/*
+-	 * A lot of modern Clevo barebones have touchpad and/or keyboard issues
+-	 * after suspend fixable with nomux + reset + noloop + nopnp. Luckily,
+-	 * none of them have an external PS/2 port so this can safely be set for
+-	 * all of them.
+-	 * Clevo barebones come with board_vendor and/or system_vendor set to
+-	 * either the very generic string "Notebook" and/or a different value
+-	 * for each individual reseller. The only somewhat universal way to
+-	 * identify them is by board_name.
+-	 */
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "LAPQC71A"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "LAPQC71B"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "N140CU"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "N141CU"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "NH5xAx"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	/*
+-	 * At least one modern Clevo barebone has the touchpad connected both
+-	 * via PS/2 and i2c interface. This causes a race condition between the
+-	 * psmouse and i2c-hid driver. Since the full capability of the touchpad
+-	 * is available via the i2c interface and the device has no external
+-	 * PS/2 port, it is safe to just ignore all ps2 mouses here to avoid
+-	 * this issue. The known affected device is the
+-	 * TUXEDO InfinityBook S17 Gen6 / Clevo NS70MU which comes with one of
+-	 * the two different dmi strings below. NS50MU is not a typo!
+-	 */
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "NS50MU"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX |
+-					SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP |
+-					SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "NS50_70MU"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX |
+-					SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP |
+-					SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "NJ50_70CU"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "PB50_70DFx,DDx"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "PCX0DX"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "X170SM"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_NAME, "X170KM-G"),
+-		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+-	},
+-	{ }
+-};
+-
+-#ifdef CONFIG_PNP
+-static const struct dmi_system_id i8042_dmi_laptop_table[] __initconst = {
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_CHASSIS_TYPE, "8"), /* Portable */
+-		},
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_CHASSIS_TYPE, "9"), /* Laptop */
+-		},
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */
+-		},
+-	},
+-	{
+-		.matches = {
+-			DMI_MATCH(DMI_CHASSIS_TYPE, "14"), /* Sub-Notebook */
+-		},
+-	},
+-	{ }
+-};
+-#endif
+-
+-#endif /* CONFIG_X86 */
+-
+-#ifdef CONFIG_PNP
+-#include <linux/pnp.h>
+-
+-static bool i8042_pnp_kbd_registered;
+-static unsigned int i8042_pnp_kbd_devices;
+-static bool i8042_pnp_aux_registered;
+-static unsigned int i8042_pnp_aux_devices;
+-
+-static int i8042_pnp_command_reg;
+-static int i8042_pnp_data_reg;
+-static int i8042_pnp_kbd_irq;
+-static int i8042_pnp_aux_irq;
+-
+-static char i8042_pnp_kbd_name[32];
+-static char i8042_pnp_aux_name[32];
+-
+-static void i8042_pnp_id_to_string(struct pnp_id *id, char *dst, int dst_size)
+-{
+-	strlcpy(dst, "PNP:", dst_size);
+-
+-	while (id) {
+-		strlcat(dst, " ", dst_size);
+-		strlcat(dst, id->id, dst_size);
+-		id = id->next;
+-	}
+-}
+-
+-static int i8042_pnp_kbd_probe(struct pnp_dev *dev, const struct pnp_device_id *did)
+-{
+-	if (pnp_port_valid(dev, 0) && pnp_port_len(dev, 0) == 1)
+-		i8042_pnp_data_reg = pnp_port_start(dev,0);
+-
+-	if (pnp_port_valid(dev, 1) && pnp_port_len(dev, 1) == 1)
+-		i8042_pnp_command_reg = pnp_port_start(dev, 1);
+-
+-	if (pnp_irq_valid(dev,0))
+-		i8042_pnp_kbd_irq = pnp_irq(dev, 0);
+-
+-	strlcpy(i8042_pnp_kbd_name, did->id, sizeof(i8042_pnp_kbd_name));
+-	if (strlen(pnp_dev_name(dev))) {
+-		strlcat(i8042_pnp_kbd_name, ":", sizeof(i8042_pnp_kbd_name));
+-		strlcat(i8042_pnp_kbd_name, pnp_dev_name(dev), sizeof(i8042_pnp_kbd_name));
+-	}
+-	i8042_pnp_id_to_string(dev->id, i8042_kbd_firmware_id,
+-			       sizeof(i8042_kbd_firmware_id));
+-	i8042_kbd_fwnode = dev_fwnode(&dev->dev);
+-
+-	/* Keyboard ports are always supposed to be wakeup-enabled */
+-	device_set_wakeup_enable(&dev->dev, true);
+-
+-	i8042_pnp_kbd_devices++;
+-	return 0;
+-}
+-
+-static int i8042_pnp_aux_probe(struct pnp_dev *dev, const struct pnp_device_id *did)
+-{
+-	if (pnp_port_valid(dev, 0) && pnp_port_len(dev, 0) == 1)
+-		i8042_pnp_data_reg = pnp_port_start(dev,0);
+-
+-	if (pnp_port_valid(dev, 1) && pnp_port_len(dev, 1) == 1)
+-		i8042_pnp_command_reg = pnp_port_start(dev, 1);
+-
+-	if (pnp_irq_valid(dev, 0))
+-		i8042_pnp_aux_irq = pnp_irq(dev, 0);
+-
+-	strlcpy(i8042_pnp_aux_name, did->id, sizeof(i8042_pnp_aux_name));
+-	if (strlen(pnp_dev_name(dev))) {
+-		strlcat(i8042_pnp_aux_name, ":", sizeof(i8042_pnp_aux_name));
+-		strlcat(i8042_pnp_aux_name, pnp_dev_name(dev), sizeof(i8042_pnp_aux_name));
+-	}
+-	i8042_pnp_id_to_string(dev->id, i8042_aux_firmware_id,
+-			       sizeof(i8042_aux_firmware_id));
+-
+-	i8042_pnp_aux_devices++;
+-	return 0;
+-}
+-
+-static const struct pnp_device_id pnp_kbd_devids[] = {
+-	{ .id = "PNP0300", .driver_data = 0 },
+-	{ .id = "PNP0301", .driver_data = 0 },
+-	{ .id = "PNP0302", .driver_data = 0 },
+-	{ .id = "PNP0303", .driver_data = 0 },
+-	{ .id = "PNP0304", .driver_data = 0 },
+-	{ .id = "PNP0305", .driver_data = 0 },
+-	{ .id = "PNP0306", .driver_data = 0 },
+-	{ .id = "PNP0309", .driver_data = 0 },
+-	{ .id = "PNP030a", .driver_data = 0 },
+-	{ .id = "PNP030b", .driver_data = 0 },
+-	{ .id = "PNP0320", .driver_data = 0 },
+-	{ .id = "PNP0343", .driver_data = 0 },
+-	{ .id = "PNP0344", .driver_data = 0 },
+-	{ .id = "PNP0345", .driver_data = 0 },
+-	{ .id = "CPQA0D7", .driver_data = 0 },
+-	{ .id = "", },
+-};
+-MODULE_DEVICE_TABLE(pnp, pnp_kbd_devids);
+-
+-static struct pnp_driver i8042_pnp_kbd_driver = {
+-	.name           = "i8042 kbd",
+-	.id_table       = pnp_kbd_devids,
+-	.probe          = i8042_pnp_kbd_probe,
+-	.driver         = {
+-		.probe_type = PROBE_FORCE_SYNCHRONOUS,
+-		.suppress_bind_attrs = true,
+-	},
+-};
+-
+-static const struct pnp_device_id pnp_aux_devids[] = {
+-	{ .id = "AUI0200", .driver_data = 0 },
+-	{ .id = "FJC6000", .driver_data = 0 },
+-	{ .id = "FJC6001", .driver_data = 0 },
+-	{ .id = "PNP0f03", .driver_data = 0 },
+-	{ .id = "PNP0f0b", .driver_data = 0 },
+-	{ .id = "PNP0f0e", .driver_data = 0 },
+-	{ .id = "PNP0f12", .driver_data = 0 },
+-	{ .id = "PNP0f13", .driver_data = 0 },
+-	{ .id = "PNP0f19", .driver_data = 0 },
+-	{ .id = "PNP0f1c", .driver_data = 0 },
+-	{ .id = "SYN0801", .driver_data = 0 },
+-	{ .id = "", },
+-};
+-MODULE_DEVICE_TABLE(pnp, pnp_aux_devids);
+-
+-static struct pnp_driver i8042_pnp_aux_driver = {
+-	.name           = "i8042 aux",
+-	.id_table       = pnp_aux_devids,
+-	.probe          = i8042_pnp_aux_probe,
+-	.driver         = {
+-		.probe_type = PROBE_FORCE_SYNCHRONOUS,
+-		.suppress_bind_attrs = true,
+-	},
+-};
+-
+-static void i8042_pnp_exit(void)
+-{
+-	if (i8042_pnp_kbd_registered) {
+-		i8042_pnp_kbd_registered = false;
+-		pnp_unregister_driver(&i8042_pnp_kbd_driver);
+-	}
+-
+-	if (i8042_pnp_aux_registered) {
+-		i8042_pnp_aux_registered = false;
+-		pnp_unregister_driver(&i8042_pnp_aux_driver);
+-	}
+-}
+-
+-static int __init i8042_pnp_init(void)
+-{
+-	char kbd_irq_str[4] = { 0 }, aux_irq_str[4] = { 0 };
+-	bool pnp_data_busted = false;
+-	int err;
+-
+-	if (i8042_nopnp) {
+-		pr_info("PNP detection disabled\n");
+-		return 0;
+-	}
+-
+-	err = pnp_register_driver(&i8042_pnp_kbd_driver);
+-	if (!err)
+-		i8042_pnp_kbd_registered = true;
+-
+-	err = pnp_register_driver(&i8042_pnp_aux_driver);
+-	if (!err)
+-		i8042_pnp_aux_registered = true;
+-
+-	if (!i8042_pnp_kbd_devices && !i8042_pnp_aux_devices) {
+-		i8042_pnp_exit();
+-#if defined(__ia64__)
+-		return -ENODEV;
+-#else
+-		pr_info("PNP: No PS/2 controller found.\n");
+-		if (x86_platform.legacy.i8042 !=
+-				X86_LEGACY_I8042_EXPECTED_PRESENT)
+-			return -ENODEV;
+-		pr_info("Probing ports directly.\n");
+-		return 0;
+-#endif
+-	}
+-
+-	if (i8042_pnp_kbd_devices)
+-		snprintf(kbd_irq_str, sizeof(kbd_irq_str),
+-			"%d", i8042_pnp_kbd_irq);
+-	if (i8042_pnp_aux_devices)
+-		snprintf(aux_irq_str, sizeof(aux_irq_str),
+-			"%d", i8042_pnp_aux_irq);
+-
+-	pr_info("PNP: PS/2 Controller [%s%s%s] at %#x,%#x irq %s%s%s\n",
+-		i8042_pnp_kbd_name, (i8042_pnp_kbd_devices && i8042_pnp_aux_devices) ? "," : "",
+-		i8042_pnp_aux_name,
+-		i8042_pnp_data_reg, i8042_pnp_command_reg,
+-		kbd_irq_str, (i8042_pnp_kbd_devices && i8042_pnp_aux_devices) ? "," : "",
+-		aux_irq_str);
+-
+-#if defined(__ia64__)
+-	if (!i8042_pnp_kbd_devices)
+-		i8042_nokbd = true;
+-	if (!i8042_pnp_aux_devices)
+-		i8042_noaux = true;
+-#endif
+-
+-	if (((i8042_pnp_data_reg & ~0xf) == (i8042_data_reg & ~0xf) &&
+-	      i8042_pnp_data_reg != i8042_data_reg) ||
+-	    !i8042_pnp_data_reg) {
+-		pr_warn("PNP: PS/2 controller has invalid data port %#x; using default %#x\n",
+-			i8042_pnp_data_reg, i8042_data_reg);
+-		i8042_pnp_data_reg = i8042_data_reg;
+-		pnp_data_busted = true;
+-	}
+-
+-	if (((i8042_pnp_command_reg & ~0xf) == (i8042_command_reg & ~0xf) &&
+-	      i8042_pnp_command_reg != i8042_command_reg) ||
+-	    !i8042_pnp_command_reg) {
+-		pr_warn("PNP: PS/2 controller has invalid command port %#x; using default %#x\n",
+-			i8042_pnp_command_reg, i8042_command_reg);
+-		i8042_pnp_command_reg = i8042_command_reg;
+-		pnp_data_busted = true;
+-	}
+-
+-	if (!i8042_nokbd && !i8042_pnp_kbd_irq) {
+-		pr_warn("PNP: PS/2 controller doesn't have KBD irq; using default %d\n",
+-			i8042_kbd_irq);
+-		i8042_pnp_kbd_irq = i8042_kbd_irq;
+-		pnp_data_busted = true;
+-	}
+-
+-	if (!i8042_noaux && !i8042_pnp_aux_irq) {
+-		if (!pnp_data_busted && i8042_pnp_kbd_irq) {
+-			pr_warn("PNP: PS/2 appears to have AUX port disabled, "
+-				"if this is incorrect please boot with i8042.nopnp\n");
+-			i8042_noaux = true;
+-		} else {
+-			pr_warn("PNP: PS/2 controller doesn't have AUX irq; using default %d\n",
+-				i8042_aux_irq);
+-			i8042_pnp_aux_irq = i8042_aux_irq;
+-		}
+-	}
+-
+-	i8042_data_reg = i8042_pnp_data_reg;
+-	i8042_command_reg = i8042_pnp_command_reg;
+-	i8042_kbd_irq = i8042_pnp_kbd_irq;
+-	i8042_aux_irq = i8042_pnp_aux_irq;
+-
+-#ifdef CONFIG_X86
+-	i8042_bypass_aux_irq_test = !pnp_data_busted &&
+-				    dmi_check_system(i8042_dmi_laptop_table);
+-#endif
+-
+-	return 0;
+-}
+-
+-#else  /* !CONFIG_PNP */
+-static inline int i8042_pnp_init(void) { return 0; }
+-static inline void i8042_pnp_exit(void) { }
+-#endif /* CONFIG_PNP */
+-
+-
+-#ifdef CONFIG_X86
+-static void __init i8042_check_quirks(void)
+-{
+-	const struct dmi_system_id *device_quirk_info;
+-	uintptr_t quirks;
+-
+-	device_quirk_info = dmi_first_match(i8042_dmi_quirk_table);
+-	if (!device_quirk_info)
+-		return;
+-
+-	quirks = (uintptr_t)device_quirk_info->driver_data;
+-
+-	if (quirks & SERIO_QUIRK_NOKBD)
+-		i8042_nokbd = true;
+-	if (quirks & SERIO_QUIRK_NOAUX)
+-		i8042_noaux = true;
+-	if (quirks & SERIO_QUIRK_NOMUX)
+-		i8042_nomux = true;
+-	if (quirks & SERIO_QUIRK_FORCEMUX)
+-		i8042_nomux = false;
+-	if (quirks & SERIO_QUIRK_UNLOCK)
+-		i8042_unlock = true;
+-	if (quirks & SERIO_QUIRK_PROBE_DEFER)
+-		i8042_probe_defer = true;
+-	/* Honor module parameter when value is not default */
+-	if (i8042_reset == I8042_RESET_DEFAULT) {
+-		if (quirks & SERIO_QUIRK_RESET_ALWAYS)
+-			i8042_reset = I8042_RESET_ALWAYS;
+-		if (quirks & SERIO_QUIRK_RESET_NEVER)
+-			i8042_reset = I8042_RESET_NEVER;
+-	}
+-	if (quirks & SERIO_QUIRK_DIECT)
+-		i8042_direct = true;
+-	if (quirks & SERIO_QUIRK_DUMBKBD)
+-		i8042_dumbkbd = true;
+-	if (quirks & SERIO_QUIRK_NOLOOP)
+-		i8042_noloop = true;
+-	if (quirks & SERIO_QUIRK_NOTIMEOUT)
+-		i8042_notimeout = true;
+-	if (quirks & SERIO_QUIRK_KBDRESET)
+-		i8042_kbdreset = true;
+-	if (quirks & SERIO_QUIRK_DRITEK)
+-		i8042_dritek = true;
+-#ifdef CONFIG_PNP
+-	if (quirks & SERIO_QUIRK_NOPNP)
+-		i8042_nopnp = true;
+-#endif
+-}
+-#else
+-static inline void i8042_check_quirks(void) {}
+-#endif
+-
+-static int __init i8042_platform_init(void)
+-{
+-	int retval;
+-
+-#ifdef CONFIG_X86
+-	u8 a20_on = 0xdf;
+-	/* Just return if platform does not have i8042 controller */
+-	if (x86_platform.legacy.i8042 == X86_LEGACY_I8042_PLATFORM_ABSENT)
+-		return -ENODEV;
+-#endif
+-
+-/*
+- * On ix86 platforms touching the i8042 data register region can do really
+- * bad things. Because of this the region is always reserved on ix86 boxes.
+- *
+- *	if (!request_region(I8042_DATA_REG, 16, "i8042"))
+- *		return -EBUSY;
+- */
+-
+-	i8042_kbd_irq = I8042_MAP_IRQ(1);
+-	i8042_aux_irq = I8042_MAP_IRQ(12);
+-
+-#if defined(__ia64__)
+-	i8042_reset = I8042_RESET_ALWAYS;
+-#endif
+-
+-	i8042_check_quirks();
+-
+-	retval = i8042_pnp_init();
+-	if (retval)
+-		return retval;
+-
+-#ifdef CONFIG_X86
+-	/*
+-	 * A20 was already enabled during early kernel init. But some buggy
+-	 * BIOSes (in MSI Laptops) require A20 to be enabled using 8042 to
+-	 * resume from S3. So we do it here and hope that nothing breaks.
+-	 */
+-	i8042_command(&a20_on, 0x10d1);
+-	i8042_command(NULL, 0x00ff);	/* Null command for SMM firmware */
+-#endif /* CONFIG_X86 */
+-
+-	return retval;
+-}
+-
+-static inline void i8042_platform_exit(void)
+-{
+-	i8042_pnp_exit();
+-}
+-
+-#endif /* _I8042_X86IA64IO_H */
+diff --git a/drivers/input/serio/i8042.h b/drivers/input/serio/i8042.h
+index 55381783dc82d..bf2592fa9a783 100644
+--- a/drivers/input/serio/i8042.h
++++ b/drivers/input/serio/i8042.h
+@@ -20,7 +20,7 @@
+ #elif defined(CONFIG_SPARC)
+ #include "i8042-sparcio.h"
+ #elif defined(CONFIG_X86) || defined(CONFIG_IA64)
+-#include "i8042-x86ia64io.h"
++#include "i8042-acpipnpio.h"
+ #else
+ #include "i8042-io.h"
+ #endif
+diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c
+index 7e88df64d197b..48fc723f1ac81 100644
+--- a/drivers/md/dm-zoned-target.c
++++ b/drivers/md/dm-zoned-target.c
+@@ -750,17 +750,16 @@ err:
+ /*
+  * Cleanup zoned device information.
+  */
+-static void dmz_put_zoned_device(struct dm_target *ti)
++static void dmz_put_zoned_devices(struct dm_target *ti)
+ {
+ 	struct dmz_target *dmz = ti->private;
+ 	int i;
+ 
+-	for (i = 0; i < dmz->nr_ddevs; i++) {
+-		if (dmz->ddev[i]) {
++	for (i = 0; i < dmz->nr_ddevs; i++)
++		if (dmz->ddev[i])
+ 			dm_put_device(ti, dmz->ddev[i]);
+-			dmz->ddev[i] = NULL;
+-		}
+-	}
++
++	kfree(dmz->ddev);
+ }
+ 
+ static int dmz_fixup_devices(struct dm_target *ti)
+@@ -951,7 +950,7 @@ err_bio:
+ err_meta:
+ 	dmz_dtr_metadata(dmz->metadata);
+ err_dev:
+-	dmz_put_zoned_device(ti);
++	dmz_put_zoned_devices(ti);
+ err:
+ 	kfree(dmz->dev);
+ 	kfree(dmz);
+@@ -982,7 +981,7 @@ static void dmz_dtr(struct dm_target *ti)
+ 
+ 	bioset_exit(&dmz->bio_set);
+ 
+-	dmz_put_zoned_device(ti);
++	dmz_put_zoned_devices(ti);
+ 
+ 	mutex_destroy(&dmz->chunk_lock);
+ 
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index 62d11c6e41d60..5f7ac2807e5f4 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -21,6 +21,7 @@
+ #include "core.h"
+ #include "firmware.h"
+ #include "pm_helpers.h"
++#include "hfi_venus_io.h"
+ 
+ static void venus_event_notify(struct venus_core *core, u32 event)
+ {
+@@ -210,6 +211,15 @@ err:
+ 	return ret;
+ }
+ 
++static void venus_assign_register_offsets(struct venus_core *core)
++{
++	core->vbif_base = core->base + VBIF_BASE;
++	core->cpu_base = core->base + CPU_BASE;
++	core->cpu_cs_base = core->base + CPU_CS_BASE;
++	core->cpu_ic_base = core->base + CPU_IC_BASE;
++	core->wrapper_base = core->base + WRAPPER_BASE;
++}
++
+ static int venus_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -276,6 +286,8 @@ static int venus_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_core_put;
+ 
++	venus_assign_register_offsets(core);
++
+ 	ret = v4l2_device_register(dev, &core->v4l2_dev);
+ 	if (ret)
+ 		goto err_core_deinit;
+diff --git a/drivers/media/platform/qcom/venus/core.h b/drivers/media/platform/qcom/venus/core.h
+index aebd4c664bfa1..75d0068033276 100644
+--- a/drivers/media/platform/qcom/venus/core.h
++++ b/drivers/media/platform/qcom/venus/core.h
+@@ -119,6 +119,11 @@ struct venus_caps {
+  * struct venus_core - holds core parameters valid for all instances
+  *
+  * @base:	IO memory base address
++ * @vbif_base	IO memory vbif base address
++ * @cpu_base	IO memory cpu base address
++ * @cpu_cs_base	IO memory cpu_cs base address
++ * @cpu_ic_base	IO memory cpu_ic base address
++ * @wrapper_base	IO memory wrapper base address
+  * @irq:		Venus irq
+  * @clks:	an array of struct clk pointers
+  * @vcodec0_clks: an array of vcodec0 struct clk pointers
+@@ -152,6 +157,11 @@ struct venus_caps {
+  */
+ struct venus_core {
+ 	void __iomem *base;
++	void __iomem *vbif_base;
++	void __iomem *cpu_base;
++	void __iomem *cpu_cs_base;
++	void __iomem *cpu_ic_base;
++	void __iomem *wrapper_base;
+ 	int irq;
+ 	struct clk *clks[VIDC_CLKS_NUM_MAX];
+ 	struct clk *vcodec0_clks[VIDC_VCODEC_CLKS_NUM_MAX];
+@@ -416,6 +426,7 @@ struct venus_inst {
+ #define IS_V1(core)	((core)->res->hfi_version == HFI_VERSION_1XX)
+ #define IS_V3(core)	((core)->res->hfi_version == HFI_VERSION_3XX)
+ #define IS_V4(core)	((core)->res->hfi_version == HFI_VERSION_4XX)
++#define IS_V6(core)	((core)->res->hfi_version == HFI_VERSION_6XX)
+ 
+ #define ctrl_to_inst(ctrl)	\
+ 	container_of((ctrl)->handler, struct venus_inst, ctrl_handler)
+diff --git a/drivers/media/platform/qcom/venus/firmware.c b/drivers/media/platform/qcom/venus/firmware.c
+index 1db64a854b88b..67b9138a7c5fb 100644
+--- a/drivers/media/platform/qcom/venus/firmware.c
++++ b/drivers/media/platform/qcom/venus/firmware.c
+@@ -27,19 +27,19 @@
+ static void venus_reset_cpu(struct venus_core *core)
+ {
+ 	u32 fw_size = core->fw.mapped_mem_size;
+-	void __iomem *base = core->base;
++	void __iomem *wrapper_base = core->wrapper_base;
+ 
+-	writel(0, base + WRAPPER_FW_START_ADDR);
+-	writel(fw_size, base + WRAPPER_FW_END_ADDR);
+-	writel(0, base + WRAPPER_CPA_START_ADDR);
+-	writel(fw_size, base + WRAPPER_CPA_END_ADDR);
+-	writel(fw_size, base + WRAPPER_NONPIX_START_ADDR);
+-	writel(fw_size, base + WRAPPER_NONPIX_END_ADDR);
+-	writel(0x0, base + WRAPPER_CPU_CGC_DIS);
+-	writel(0x0, base + WRAPPER_CPU_CLOCK_CONFIG);
++	writel(0, wrapper_base + WRAPPER_FW_START_ADDR);
++	writel(fw_size, wrapper_base + WRAPPER_FW_END_ADDR);
++	writel(0, wrapper_base + WRAPPER_CPA_START_ADDR);
++	writel(fw_size, wrapper_base + WRAPPER_CPA_END_ADDR);
++	writel(fw_size, wrapper_base + WRAPPER_NONPIX_START_ADDR);
++	writel(fw_size, wrapper_base + WRAPPER_NONPIX_END_ADDR);
++	writel(0x0, wrapper_base + WRAPPER_CPU_CGC_DIS);
++	writel(0x0, wrapper_base + WRAPPER_CPU_CLOCK_CONFIG);
+ 
+ 	/* Bring ARM9 out of reset */
+-	writel(0, base + WRAPPER_A9SS_SW_RESET);
++	writel(0, wrapper_base + WRAPPER_A9SS_SW_RESET);
+ }
+ 
+ int venus_set_hw_state(struct venus_core *core, bool resume)
+@@ -56,7 +56,7 @@ int venus_set_hw_state(struct venus_core *core, bool resume)
+ 	if (resume)
+ 		venus_reset_cpu(core);
+ 	else
+-		writel(1, core->base + WRAPPER_A9SS_SW_RESET);
++		writel(1, core->wrapper_base + WRAPPER_A9SS_SW_RESET);
+ 
+ 	return 0;
+ }
+@@ -159,12 +159,12 @@ static int venus_shutdown_no_tz(struct venus_core *core)
+ 	size_t unmapped;
+ 	u32 reg;
+ 	struct device *dev = core->fw.dev;
+-	void __iomem *base = core->base;
++	void __iomem *wrapper_base = core->wrapper_base;
+ 
+ 	/* Assert the reset to ARM9 */
+-	reg = readl_relaxed(base + WRAPPER_A9SS_SW_RESET);
++	reg = readl_relaxed(wrapper_base + WRAPPER_A9SS_SW_RESET);
+ 	reg |= WRAPPER_A9SS_SW_RESET_BIT;
+-	writel_relaxed(reg, base + WRAPPER_A9SS_SW_RESET);
++	writel_relaxed(reg, wrapper_base + WRAPPER_A9SS_SW_RESET);
+ 
+ 	/* Make sure reset is asserted before the mapping is removed */
+ 	mb();
+diff --git a/drivers/media/platform/qcom/venus/hfi_venus.c b/drivers/media/platform/qcom/venus/hfi_venus.c
+index 4be4a75ddcb6e..9d939f63d16f4 100644
+--- a/drivers/media/platform/qcom/venus/hfi_venus.c
++++ b/drivers/media/platform/qcom/venus/hfi_venus.c
+@@ -345,16 +345,6 @@ static void venus_free(struct venus_hfi_device *hdev, struct mem_desc *mem)
+ 	dma_free_attrs(dev, mem->size, mem->kva, mem->da, mem->attrs);
+ }
+ 
+-static void venus_writel(struct venus_hfi_device *hdev, u32 reg, u32 value)
+-{
+-	writel(value, hdev->core->base + reg);
+-}
+-
+-static u32 venus_readl(struct venus_hfi_device *hdev, u32 reg)
+-{
+-	return readl(hdev->core->base + reg);
+-}
+-
+ static void venus_set_registers(struct venus_hfi_device *hdev)
+ {
+ 	const struct venus_resources *res = hdev->core->res;
+@@ -363,12 +353,14 @@ static void venus_set_registers(struct venus_hfi_device *hdev)
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < count; i++)
+-		venus_writel(hdev, tbl[i].reg, tbl[i].value);
++		writel(tbl[i].value, hdev->core->base + tbl[i].reg);
+ }
+ 
+ static void venus_soft_int(struct venus_hfi_device *hdev)
+ {
+-	venus_writel(hdev, CPU_IC_SOFTINT, BIT(CPU_IC_SOFTINT_H2A_SHIFT));
++	void __iomem *cpu_ic_base = hdev->core->cpu_ic_base;
++
++	writel(BIT(CPU_IC_SOFTINT_H2A_SHIFT), cpu_ic_base + CPU_IC_SOFTINT);
+ }
+ 
+ static int venus_iface_cmdq_write_nolock(struct venus_hfi_device *hdev,
+@@ -439,16 +431,25 @@ static int venus_boot_core(struct venus_hfi_device *hdev)
+ {
+ 	struct device *dev = hdev->core->dev;
+ 	static const unsigned int max_tries = 100;
+-	u32 ctrl_status = 0;
++	u32 ctrl_status = 0, mask_val;
+ 	unsigned int count = 0;
++	void __iomem *cpu_cs_base = hdev->core->cpu_cs_base;
++	void __iomem *wrapper_base = hdev->core->wrapper_base;
+ 	int ret = 0;
+ 
+-	venus_writel(hdev, VIDC_CTRL_INIT, BIT(VIDC_CTRL_INIT_CTRL_SHIFT));
+-	venus_writel(hdev, WRAPPER_INTR_MASK, WRAPPER_INTR_MASK_A2HVCODEC_MASK);
+-	venus_writel(hdev, CPU_CS_SCIACMDARG3, 1);
++	if (IS_V6(hdev->core)) {
++		mask_val = readl(wrapper_base + WRAPPER_INTR_MASK);
++		mask_val &= ~(WRAPPER_INTR_MASK_A2HWD_BASK_V6 |
++			      WRAPPER_INTR_MASK_A2HCPU_MASK);
++	} else {
++		mask_val = WRAPPER_INTR_MASK_A2HVCODEC_MASK;
++	}
++	writel(mask_val, wrapper_base + WRAPPER_INTR_MASK);
++	writel(1, cpu_cs_base + CPU_CS_SCIACMDARG3);
+ 
++	writel(BIT(VIDC_CTRL_INIT_CTRL_SHIFT), cpu_cs_base + VIDC_CTRL_INIT);
+ 	while (!ctrl_status && count < max_tries) {
+-		ctrl_status = venus_readl(hdev, CPU_CS_SCIACMDARG0);
++		ctrl_status = readl(cpu_cs_base + CPU_CS_SCIACMDARG0);
+ 		if ((ctrl_status & CPU_CS_SCIACMDARG0_ERROR_STATUS_MASK) == 4) {
+ 			dev_err(dev, "invalid setting for UC_REGION\n");
+ 			ret = -EINVAL;
+@@ -462,15 +463,20 @@ static int venus_boot_core(struct venus_hfi_device *hdev)
+ 	if (count >= max_tries)
+ 		ret = -ETIMEDOUT;
+ 
++	if (IS_V6(hdev->core))
++		writel(0x0, cpu_cs_base + CPU_CS_X2RPMH_V6);
++
+ 	return ret;
+ }
+ 
+ static u32 venus_hwversion(struct venus_hfi_device *hdev)
+ {
+ 	struct device *dev = hdev->core->dev;
+-	u32 ver = venus_readl(hdev, WRAPPER_HW_VERSION);
++	void __iomem *wrapper_base = hdev->core->wrapper_base;
++	u32 ver;
+ 	u32 major, minor, step;
+ 
++	ver = readl(wrapper_base + WRAPPER_HW_VERSION);
+ 	major = ver & WRAPPER_HW_VERSION_MAJOR_VERSION_MASK;
+ 	major = major >> WRAPPER_HW_VERSION_MAJOR_VERSION_SHIFT;
+ 	minor = ver & WRAPPER_HW_VERSION_MINOR_VERSION_MASK;
+@@ -485,6 +491,7 @@ static u32 venus_hwversion(struct venus_hfi_device *hdev)
+ static int venus_run(struct venus_hfi_device *hdev)
+ {
+ 	struct device *dev = hdev->core->dev;
++	void __iomem *cpu_cs_base = hdev->core->cpu_cs_base;
+ 	int ret;
+ 
+ 	/*
+@@ -493,12 +500,12 @@ static int venus_run(struct venus_hfi_device *hdev)
+ 	 */
+ 	venus_set_registers(hdev);
+ 
+-	venus_writel(hdev, UC_REGION_ADDR, hdev->ifaceq_table.da);
+-	venus_writel(hdev, UC_REGION_SIZE, SHARED_QSIZE);
+-	venus_writel(hdev, CPU_CS_SCIACMDARG2, hdev->ifaceq_table.da);
+-	venus_writel(hdev, CPU_CS_SCIACMDARG1, 0x01);
++	writel(hdev->ifaceq_table.da, cpu_cs_base + UC_REGION_ADDR);
++	writel(SHARED_QSIZE, cpu_cs_base + UC_REGION_SIZE);
++	writel(hdev->ifaceq_table.da, cpu_cs_base + CPU_CS_SCIACMDARG2);
++	writel(0x01, cpu_cs_base + CPU_CS_SCIACMDARG1);
+ 	if (hdev->sfr.da)
+-		venus_writel(hdev, SFR_ADDR, hdev->sfr.da);
++		writel(hdev->sfr.da, cpu_cs_base + SFR_ADDR);
+ 
+ 	ret = venus_boot_core(hdev);
+ 	if (ret) {
+@@ -513,17 +520,18 @@ static int venus_run(struct venus_hfi_device *hdev)
+ 
+ static int venus_halt_axi(struct venus_hfi_device *hdev)
+ {
+-	void __iomem *base = hdev->core->base;
++	void __iomem *wrapper_base = hdev->core->wrapper_base;
++	void __iomem *vbif_base = hdev->core->vbif_base;
+ 	struct device *dev = hdev->core->dev;
+ 	u32 val;
+ 	int ret;
+ 
+ 	if (IS_V4(hdev->core)) {
+-		val = venus_readl(hdev, WRAPPER_CPU_AXI_HALT);
++		val = readl(wrapper_base + WRAPPER_CPU_AXI_HALT);
+ 		val |= WRAPPER_CPU_AXI_HALT_HALT;
+-		venus_writel(hdev, WRAPPER_CPU_AXI_HALT, val);
++		writel(val, wrapper_base + WRAPPER_CPU_AXI_HALT);
+ 
+-		ret = readl_poll_timeout(base + WRAPPER_CPU_AXI_HALT_STATUS,
++		ret = readl_poll_timeout(wrapper_base + WRAPPER_CPU_AXI_HALT_STATUS,
+ 					 val,
+ 					 val & WRAPPER_CPU_AXI_HALT_STATUS_IDLE,
+ 					 POLL_INTERVAL_US,
+@@ -537,12 +545,12 @@ static int venus_halt_axi(struct venus_hfi_device *hdev)
+ 	}
+ 
+ 	/* Halt AXI and AXI IMEM VBIF Access */
+-	val = venus_readl(hdev, VBIF_AXI_HALT_CTRL0);
++	val = readl(vbif_base + VBIF_AXI_HALT_CTRL0);
+ 	val |= VBIF_AXI_HALT_CTRL0_HALT_REQ;
+-	venus_writel(hdev, VBIF_AXI_HALT_CTRL0, val);
++	writel(val, vbif_base + VBIF_AXI_HALT_CTRL0);
+ 
+ 	/* Request for AXI bus port halt */
+-	ret = readl_poll_timeout(base + VBIF_AXI_HALT_CTRL1, val,
++	ret = readl_poll_timeout(vbif_base + VBIF_AXI_HALT_CTRL1, val,
+ 				 val & VBIF_AXI_HALT_CTRL1_HALT_ACK,
+ 				 POLL_INTERVAL_US,
+ 				 VBIF_AXI_HALT_ACK_TIMEOUT_US);
+@@ -1035,19 +1043,21 @@ static irqreturn_t venus_isr(struct venus_core *core)
+ {
+ 	struct venus_hfi_device *hdev = to_hfi_priv(core);
+ 	u32 status;
++	void __iomem *cpu_cs_base = hdev->core->cpu_cs_base;
++	void __iomem *wrapper_base = hdev->core->wrapper_base;
+ 
+ 	if (!hdev)
+ 		return IRQ_NONE;
+ 
+-	status = venus_readl(hdev, WRAPPER_INTR_STATUS);
++	status = readl(wrapper_base + WRAPPER_INTR_STATUS);
+ 
+ 	if (status & WRAPPER_INTR_STATUS_A2H_MASK ||
+ 	    status & WRAPPER_INTR_STATUS_A2HWD_MASK ||
+ 	    status & CPU_CS_SCIACMDARG0_INIT_IDLE_MSG_MASK)
+ 		hdev->irq_status = status;
+ 
+-	venus_writel(hdev, CPU_CS_A2HSOFTINTCLR, 1);
+-	venus_writel(hdev, WRAPPER_INTR_CLEAR, status);
++	writel(1, cpu_cs_base + CPU_CS_A2HSOFTINTCLR);
++	writel(status, wrapper_base + WRAPPER_INTR_CLEAR);
+ 
+ 	return IRQ_WAKE_THREAD;
+ }
+@@ -1380,6 +1390,7 @@ static int venus_suspend_1xx(struct venus_core *core)
+ {
+ 	struct venus_hfi_device *hdev = to_hfi_priv(core);
+ 	struct device *dev = core->dev;
++	void __iomem *cpu_cs_base = hdev->core->cpu_cs_base;
+ 	u32 ctrl_status;
+ 	int ret;
+ 
+@@ -1414,7 +1425,7 @@ static int venus_suspend_1xx(struct venus_core *core)
+ 		return -EINVAL;
+ 	}
+ 
+-	ctrl_status = venus_readl(hdev, CPU_CS_SCIACMDARG0);
++	ctrl_status = readl(cpu_cs_base + CPU_CS_SCIACMDARG0);
+ 	if (!(ctrl_status & CPU_CS_SCIACMDARG0_PC_READY)) {
+ 		mutex_unlock(&hdev->lock);
+ 		return -EINVAL;
+@@ -1435,10 +1446,12 @@ static int venus_suspend_1xx(struct venus_core *core)
+ 
+ static bool venus_cpu_and_video_core_idle(struct venus_hfi_device *hdev)
+ {
++	void __iomem *wrapper_base = hdev->core->wrapper_base;
++	void __iomem *cpu_cs_base = hdev->core->cpu_cs_base;
+ 	u32 ctrl_status, cpu_status;
+ 
+-	cpu_status = venus_readl(hdev, WRAPPER_CPU_STATUS);
+-	ctrl_status = venus_readl(hdev, CPU_CS_SCIACMDARG0);
++	cpu_status = readl(wrapper_base + WRAPPER_CPU_STATUS);
++	ctrl_status = readl(cpu_cs_base + CPU_CS_SCIACMDARG0);
+ 
+ 	if (cpu_status & WRAPPER_CPU_STATUS_WFI &&
+ 	    ctrl_status & CPU_CS_SCIACMDARG0_INIT_IDLE_MSG_MASK)
+@@ -1449,10 +1462,12 @@ static bool venus_cpu_and_video_core_idle(struct venus_hfi_device *hdev)
+ 
+ static bool venus_cpu_idle_and_pc_ready(struct venus_hfi_device *hdev)
+ {
++	void __iomem *wrapper_base = hdev->core->wrapper_base;
++	void __iomem *cpu_cs_base = hdev->core->cpu_cs_base;
+ 	u32 ctrl_status, cpu_status;
+ 
+-	cpu_status = venus_readl(hdev, WRAPPER_CPU_STATUS);
+-	ctrl_status = venus_readl(hdev, CPU_CS_SCIACMDARG0);
++	cpu_status = readl(wrapper_base + WRAPPER_CPU_STATUS);
++	ctrl_status = readl(cpu_cs_base + CPU_CS_SCIACMDARG0);
+ 
+ 	if (cpu_status & WRAPPER_CPU_STATUS_WFI &&
+ 	    ctrl_status & CPU_CS_SCIACMDARG0_PC_READY)
+@@ -1465,6 +1480,7 @@ static int venus_suspend_3xx(struct venus_core *core)
+ {
+ 	struct venus_hfi_device *hdev = to_hfi_priv(core);
+ 	struct device *dev = core->dev;
++	void __iomem *cpu_cs_base = hdev->core->cpu_cs_base;
+ 	u32 ctrl_status;
+ 	bool val;
+ 	int ret;
+@@ -1481,7 +1497,7 @@ static int venus_suspend_3xx(struct venus_core *core)
+ 		return -EINVAL;
+ 	}
+ 
+-	ctrl_status = venus_readl(hdev, CPU_CS_SCIACMDARG0);
++	ctrl_status = readl(cpu_cs_base + CPU_CS_SCIACMDARG0);
+ 	if (ctrl_status & CPU_CS_SCIACMDARG0_PC_READY)
+ 		goto power_off;
+ 
+diff --git a/drivers/media/platform/qcom/venus/hfi_venus_io.h b/drivers/media/platform/qcom/venus/hfi_venus_io.h
+index 3b52f98478db0..9cad15eac9e80 100644
+--- a/drivers/media/platform/qcom/venus/hfi_venus_io.h
++++ b/drivers/media/platform/qcom/venus/hfi_venus_io.h
+@@ -8,27 +8,28 @@
+ 
+ #define VBIF_BASE				0x80000
+ 
+-#define VBIF_AXI_HALT_CTRL0			(VBIF_BASE + 0x208)
+-#define VBIF_AXI_HALT_CTRL1			(VBIF_BASE + 0x20c)
++#define VBIF_AXI_HALT_CTRL0			0x208
++#define VBIF_AXI_HALT_CTRL1			0x20c
+ 
+ #define VBIF_AXI_HALT_CTRL0_HALT_REQ		BIT(0)
+ #define VBIF_AXI_HALT_CTRL1_HALT_ACK		BIT(0)
+ #define VBIF_AXI_HALT_ACK_TIMEOUT_US		500000
+ 
+ #define CPU_BASE				0xc0000
++
+ #define CPU_CS_BASE				(CPU_BASE + 0x12000)
+ #define CPU_IC_BASE				(CPU_BASE + 0x1f000)
+ 
+-#define CPU_CS_A2HSOFTINTCLR			(CPU_CS_BASE + 0x1c)
++#define CPU_CS_A2HSOFTINTCLR			0x1c
+ 
+-#define VIDC_CTRL_INIT				(CPU_CS_BASE + 0x48)
++#define VIDC_CTRL_INIT				0x48
+ #define VIDC_CTRL_INIT_RESERVED_BITS31_1_MASK	0xfffffffe
+ #define VIDC_CTRL_INIT_RESERVED_BITS31_1_SHIFT	1
+ #define VIDC_CTRL_INIT_CTRL_MASK		0x1
+ #define VIDC_CTRL_INIT_CTRL_SHIFT		0
+ 
+ /* HFI control status */
+-#define CPU_CS_SCIACMDARG0			(CPU_CS_BASE + 0x4c)
++#define CPU_CS_SCIACMDARG0			0x4c
+ #define CPU_CS_SCIACMDARG0_MASK			0xff
+ #define CPU_CS_SCIACMDARG0_SHIFT		0x0
+ #define CPU_CS_SCIACMDARG0_ERROR_STATUS_MASK	0xfe
+@@ -39,42 +40,55 @@
+ #define CPU_CS_SCIACMDARG0_INIT_IDLE_MSG_MASK	BIT(30)
+ 
+ /* HFI queue table info */
+-#define CPU_CS_SCIACMDARG1			(CPU_CS_BASE + 0x50)
++#define CPU_CS_SCIACMDARG1			0x50
+ 
+ /* HFI queue table address */
+-#define CPU_CS_SCIACMDARG2			(CPU_CS_BASE + 0x54)
++#define CPU_CS_SCIACMDARG2			0x54
+ 
+ /* Venus cpu */
+-#define CPU_CS_SCIACMDARG3			(CPU_CS_BASE + 0x58)
+-
+-#define SFR_ADDR				(CPU_CS_BASE + 0x5c)
+-#define MMAP_ADDR				(CPU_CS_BASE + 0x60)
+-#define UC_REGION_ADDR				(CPU_CS_BASE + 0x64)
+-#define UC_REGION_SIZE				(CPU_CS_BASE + 0x68)
+-
+-#define CPU_IC_SOFTINT				(CPU_IC_BASE + 0x18)
++#define CPU_CS_SCIACMDARG3			0x58
++
++#define SFR_ADDR				0x5c
++#define MMAP_ADDR				0x60
++#define UC_REGION_ADDR				0x64
++#define UC_REGION_SIZE				0x68
++
++#define CPU_CS_H2XSOFTINTEN_V6			0x148
++
++#define CPU_CS_X2RPMH_V6			0x168
++#define CPU_CS_X2RPMH_MASK0_BMSK_V6		0x1
++#define CPU_CS_X2RPMH_MASK0_SHFT_V6		0x0
++#define CPU_CS_X2RPMH_MASK1_BMSK_V6		0x2
++#define CPU_CS_X2RPMH_MASK1_SHFT_V6		0x1
++#define CPU_CS_X2RPMH_SWOVERRIDE_BMSK_V6	0x4
++#define CPU_CS_X2RPMH_SWOVERRIDE_SHFT_V6	0x3
++
++/* Relative to CPU_IC_BASE */
++#define CPU_IC_SOFTINT				0x18
++#define CPU_IC_SOFTINT_V6			0x150
+ #define CPU_IC_SOFTINT_H2A_MASK			0x8000
+ #define CPU_IC_SOFTINT_H2A_SHIFT		0xf
++#define CPU_IC_SOFTINT_H2A_SHIFT_V6		0x0
+ 
+ /* Venus wrapper */
+ #define WRAPPER_BASE				0x000e0000
+ 
+-#define WRAPPER_HW_VERSION			(WRAPPER_BASE + 0x00)
++#define WRAPPER_HW_VERSION			0x00
+ #define WRAPPER_HW_VERSION_MAJOR_VERSION_MASK	0x78000000
+ #define WRAPPER_HW_VERSION_MAJOR_VERSION_SHIFT	28
+ #define WRAPPER_HW_VERSION_MINOR_VERSION_MASK	0xfff0000
+ #define WRAPPER_HW_VERSION_MINOR_VERSION_SHIFT	16
+ #define WRAPPER_HW_VERSION_STEP_VERSION_MASK	0xffff
+ 
+-#define WRAPPER_CLOCK_CONFIG			(WRAPPER_BASE + 0x04)
++#define WRAPPER_CLOCK_CONFIG			0x04
+ 
+-#define WRAPPER_INTR_STATUS			(WRAPPER_BASE + 0x0c)
++#define WRAPPER_INTR_STATUS			0x0c
+ #define WRAPPER_INTR_STATUS_A2HWD_MASK		0x10
+ #define WRAPPER_INTR_STATUS_A2HWD_SHIFT		0x4
+ #define WRAPPER_INTR_STATUS_A2H_MASK		0x4
+ #define WRAPPER_INTR_STATUS_A2H_SHIFT		0x2
+ 
+-#define WRAPPER_INTR_MASK			(WRAPPER_BASE + 0x10)
++#define WRAPPER_INTR_MASK			0x10
+ #define WRAPPER_INTR_MASK_A2HWD_BASK		0x10
+ #define WRAPPER_INTR_MASK_A2HWD_SHIFT		0x4
+ #define WRAPPER_INTR_MASK_A2HVCODEC_MASK	0x8
+@@ -82,41 +96,59 @@
+ #define WRAPPER_INTR_MASK_A2HCPU_MASK		0x4
+ #define WRAPPER_INTR_MASK_A2HCPU_SHIFT		0x2
+ 
+-#define WRAPPER_INTR_CLEAR			(WRAPPER_BASE + 0x14)
++#define WRAPPER_INTR_STATUS_A2HWD_MASK_V6	0x8
++#define WRAPPER_INTR_MASK_A2HWD_BASK_V6		0x8
++
++#define WRAPPER_INTR_CLEAR			0x14
+ #define WRAPPER_INTR_CLEAR_A2HWD_MASK		0x10
+ #define WRAPPER_INTR_CLEAR_A2HWD_SHIFT		0x4
+ #define WRAPPER_INTR_CLEAR_A2H_MASK		0x4
+ #define WRAPPER_INTR_CLEAR_A2H_SHIFT		0x2
+ 
+-#define WRAPPER_POWER_STATUS			(WRAPPER_BASE + 0x44)
+-#define WRAPPER_VDEC_VCODEC_POWER_CONTROL	(WRAPPER_BASE + 0x48)
+-#define WRAPPER_VENC_VCODEC_POWER_CONTROL	(WRAPPER_BASE + 0x4c)
+-#define WRAPPER_VDEC_VENC_AHB_BRIDGE_SYNC_RESET	(WRAPPER_BASE + 0x64)
++#define WRAPPER_POWER_STATUS			0x44
++#define WRAPPER_VDEC_VCODEC_POWER_CONTROL	0x48
++#define WRAPPER_VENC_VCODEC_POWER_CONTROL	0x4c
++#define WRAPPER_DEBUG_BRIDGE_LPI_CONTROL_V6	0x54
++#define WRAPPER_DEBUG_BRIDGE_LPI_STATUS_V6	0x58
++#define WRAPPER_VDEC_VENC_AHB_BRIDGE_SYNC_RESET	0x64
+ 
+-#define WRAPPER_CPU_CLOCK_CONFIG		(WRAPPER_BASE + 0x2000)
+-#define WRAPPER_CPU_AXI_HALT			(WRAPPER_BASE + 0x2008)
++#define WRAPPER_CPU_CLOCK_CONFIG		0x2000
++#define WRAPPER_CPU_AXI_HALT			0x2008
+ #define WRAPPER_CPU_AXI_HALT_HALT		BIT(16)
+-#define WRAPPER_CPU_AXI_HALT_STATUS		(WRAPPER_BASE + 0x200c)
++#define WRAPPER_CPU_AXI_HALT_STATUS		0x200c
+ #define WRAPPER_CPU_AXI_HALT_STATUS_IDLE	BIT(24)
+ 
+-#define WRAPPER_CPU_CGC_DIS			(WRAPPER_BASE + 0x2010)
+-#define WRAPPER_CPU_STATUS			(WRAPPER_BASE + 0x2014)
++#define WRAPPER_CPU_CGC_DIS			0x2010
++#define WRAPPER_CPU_STATUS			0x2014
+ #define WRAPPER_CPU_STATUS_WFI			BIT(0)
+-#define WRAPPER_SW_RESET			(WRAPPER_BASE + 0x3000)
+-#define WRAPPER_CPA_START_ADDR			(WRAPPER_BASE + 0x1020)
+-#define WRAPPER_CPA_END_ADDR			(WRAPPER_BASE + 0x1024)
+-#define WRAPPER_FW_START_ADDR			(WRAPPER_BASE + 0x1028)
+-#define WRAPPER_FW_END_ADDR			(WRAPPER_BASE + 0x102C)
+-#define WRAPPER_NONPIX_START_ADDR		(WRAPPER_BASE + 0x1030)
+-#define WRAPPER_NONPIX_END_ADDR			(WRAPPER_BASE + 0x1034)
+-#define WRAPPER_A9SS_SW_RESET			(WRAPPER_BASE + 0x3000)
++#define WRAPPER_SW_RESET			0x3000
++#define WRAPPER_CPA_START_ADDR			0x1020
++#define WRAPPER_CPA_END_ADDR			0x1024
++#define WRAPPER_FW_START_ADDR			0x1028
++#define WRAPPER_FW_END_ADDR			0x102C
++#define WRAPPER_NONPIX_START_ADDR		0x1030
++#define WRAPPER_NONPIX_END_ADDR			0x1034
++#define WRAPPER_A9SS_SW_RESET			0x3000
+ #define WRAPPER_A9SS_SW_RESET_BIT		BIT(4)
+ 
+ /* Venus 4xx */
+-#define WRAPPER_VCODEC0_MMCC_POWER_STATUS	(WRAPPER_BASE + 0x90)
+-#define WRAPPER_VCODEC0_MMCC_POWER_CONTROL	(WRAPPER_BASE + 0x94)
++#define WRAPPER_VCODEC0_MMCC_POWER_STATUS	0x90
++#define WRAPPER_VCODEC0_MMCC_POWER_CONTROL	0x94
++
++#define WRAPPER_VCODEC1_MMCC_POWER_STATUS	0x110
++#define WRAPPER_VCODEC1_MMCC_POWER_CONTROL	0x114
++
++/* Venus 6xx */
++#define WRAPPER_CORE_POWER_STATUS_V6		0x80
++#define WRAPPER_CORE_POWER_CONTROL_V6		0x84
++
++/* Wrapper TZ 6xx */
++#define WRAPPER_TZ_BASE_V6			0x000c0000
++#define WRAPPER_TZ_CPU_STATUS_V6		0x10
+ 
+-#define WRAPPER_VCODEC1_MMCC_POWER_STATUS	(WRAPPER_BASE + 0x110)
+-#define WRAPPER_VCODEC1_MMCC_POWER_CONTROL	(WRAPPER_BASE + 0x114)
++/* Venus AON */
++#define AON_BASE_V6				0x000e0000
++#define AON_WRAPPER_MVP_NOC_LPI_CONTROL		0x00
++#define AON_WRAPPER_MVP_NOC_LPI_STATUS		0x04
+ 
+ #endif
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
+index f7de02352f1b0..6bf9c5c319de7 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.c
++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
+@@ -304,9 +304,9 @@ vcodec_control_v3(struct venus_core *core, u32 session_type, bool enable)
+ 	void __iomem *ctrl;
+ 
+ 	if (session_type == VIDC_SESSION_TYPE_DEC)
+-		ctrl = core->base + WRAPPER_VDEC_VCODEC_POWER_CONTROL;
++		ctrl = core->wrapper_base + WRAPPER_VDEC_VCODEC_POWER_CONTROL;
+ 	else
+-		ctrl = core->base + WRAPPER_VENC_VCODEC_POWER_CONTROL;
++		ctrl = core->wrapper_base + WRAPPER_VENC_VCODEC_POWER_CONTROL;
+ 
+ 	if (enable)
+ 		writel(0, ctrl);
+@@ -381,11 +381,11 @@ static int vcodec_control_v4(struct venus_core *core, u32 coreid, bool enable)
+ 	int ret;
+ 
+ 	if (coreid == VIDC_CORE_ID_1) {
+-		ctrl = core->base + WRAPPER_VCODEC0_MMCC_POWER_CONTROL;
+-		stat = core->base + WRAPPER_VCODEC0_MMCC_POWER_STATUS;
++		ctrl = core->wrapper_base + WRAPPER_VCODEC0_MMCC_POWER_CONTROL;
++		stat = core->wrapper_base + WRAPPER_VCODEC0_MMCC_POWER_STATUS;
+ 	} else {
+-		ctrl = core->base + WRAPPER_VCODEC1_MMCC_POWER_CONTROL;
+-		stat = core->base + WRAPPER_VCODEC1_MMCC_POWER_STATUS;
++		ctrl = core->wrapper_base + WRAPPER_VCODEC1_MMCC_POWER_CONTROL;
++		stat = core->wrapper_base + WRAPPER_VCODEC1_MMCC_POWER_STATUS;
+ 	}
+ 
+ 	if (enable) {
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index a49b8fe2a0982..be4c2a848b520 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -556,16 +556,18 @@ static void renesas_sdhi_reset(struct tmio_mmc_host *host)
+ {
+ 	struct renesas_sdhi *priv = host_to_priv(host);
+ 
+-	renesas_sdhi_reset_scc(host, priv);
+-	renesas_sdhi_reset_hs400_mode(host, priv);
+-	priv->needs_adjust_hs400 = false;
++	if (priv->scc_ctl) {
++		renesas_sdhi_reset_scc(host, priv);
++		renesas_sdhi_reset_hs400_mode(host, priv);
++		priv->needs_adjust_hs400 = false;
+ 
+-	sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, CLK_CTL_SCLKEN |
+-			sd_ctrl_read16(host, CTL_SD_CARD_CLK_CTL));
++		sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, CLK_CTL_SCLKEN |
++				sd_ctrl_read16(host, CTL_SD_CARD_CLK_CTL));
+ 
+-	sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL,
+-		       ~SH_MOBILE_SDHI_SCC_RVSCNTL_RVSEN &
+-		       sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL));
++		sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL,
++			       ~SH_MOBILE_SDHI_SCC_RVSCNTL_RVSEN &
++			       sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL));
++	}
+ 
+ 	if (host->pdata->flags & TMIO_MMC_MIN_RCAR2)
+ 		sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK,
+@@ -1010,11 +1012,9 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		host->ops.start_signal_voltage_switch =
+ 			renesas_sdhi_start_signal_voltage_switch;
+ 		host->sdcard_irq_setbit_mask = TMIO_STAT_ALWAYS_SET_27;
+-
+-		if (of_data && of_data->scc_offset) {
+-			priv->scc_ctl = host->ctl + of_data->scc_offset;
+-			host->reset = renesas_sdhi_reset;
+-		}
++		host->reset = renesas_sdhi_reset;
++	} else {
++		host->sdcard_irq_mask_all = TMIO_MASK_ALL;
+ 	}
+ 
+ 	/* Orginally registers were 16 bit apart, could be 32 or 64 nowadays */
+@@ -1070,10 +1070,6 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 			quirks->hs400_calib_table + 1);
+ 	}
+ 
+-	ret = tmio_mmc_host_probe(host);
+-	if (ret < 0)
+-		goto edisclk;
+-
+ 	/* Enable tuning iff we have an SCC and a supported mode */
+ 	if (of_data && of_data->scc_offset &&
+ 	    (host->mmc->caps & MMC_CAP_UHS_SDR104 ||
+@@ -1098,6 +1094,7 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		if (!hit)
+ 			dev_warn(&host->pdev->dev, "Unknown clock rate for tuning\n");
+ 
++		priv->scc_ctl = host->ctl + of_data->scc_offset;
+ 		host->check_retune = renesas_sdhi_check_scc_error;
+ 		host->ops.execute_tuning = renesas_sdhi_execute_tuning;
+ 		host->ops.prepare_hs400_tuning = renesas_sdhi_prepare_hs400_tuning;
+@@ -1105,6 +1102,8 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		host->ops.hs400_complete = renesas_sdhi_hs400_complete;
+ 	}
+ 
++	sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK, host->sdcard_irq_mask_all);
++
+ 	num_irqs = platform_irq_count(pdev);
+ 	if (num_irqs < 0) {
+ 		ret = num_irqs;
+@@ -1130,6 +1129,10 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 			goto eirq;
+ 	}
+ 
++	ret = tmio_mmc_host_probe(host);
++	if (ret < 0)
++		goto edisclk;
++
+ 	dev_info(&pdev->dev, "%s base at %pa, max clock rate %u MHz\n",
+ 		 mmc_hostname(host->mmc), &res->start, host->mmc->f_max / 1000000);
+ 
+diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h
+index 9546e542619cb..d6ed5e1f8386e 100644
+--- a/drivers/mmc/host/tmio_mmc.h
++++ b/drivers/mmc/host/tmio_mmc.h
+@@ -161,6 +161,7 @@ struct tmio_mmc_host {
+ 	u32			sdio_irq_mask;
+ 	unsigned int		clk_cache;
+ 	u32			sdcard_irq_setbit_mask;
++	u32			sdcard_irq_mask_all;
+ 
+ 	spinlock_t		lock;		/* protect host private data */
+ 	unsigned long		last_req_ts;
+diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
+index ac4e7874a3f13..abf36acb2641f 100644
+--- a/drivers/mmc/host/tmio_mmc_core.c
++++ b/drivers/mmc/host/tmio_mmc_core.c
+@@ -1158,7 +1158,9 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host)
+ 	tmio_mmc_reset(_host);
+ 
+ 	_host->sdcard_irq_mask = sd_ctrl_read16_and_16_as_32(_host, CTL_IRQ_MASK);
+-	tmio_mmc_disable_mmc_irqs(_host, TMIO_MASK_ALL);
++	if (!_host->sdcard_irq_mask_all)
++		_host->sdcard_irq_mask_all = TMIO_MASK_ALL;
++	tmio_mmc_disable_mmc_irqs(_host, _host->sdcard_irq_mask_all);
+ 
+ 	if (_host->native_hotplug)
+ 		tmio_mmc_enable_mmc_irqs(_host,
+@@ -1212,7 +1214,7 @@ void tmio_mmc_host_remove(struct tmio_mmc_host *host)
+ 	cancel_work_sync(&host->done);
+ 	cancel_delayed_work_sync(&host->delayed_reset_work);
+ 	tmio_mmc_release_dma(host);
+-	tmio_mmc_disable_mmc_irqs(host, TMIO_MASK_ALL);
++	tmio_mmc_disable_mmc_irqs(host, host->sdcard_irq_mask_all);
+ 
+ 	if (host->native_hotplug)
+ 		pm_runtime_put_noidle(&pdev->dev);
+@@ -1242,7 +1244,7 @@ int tmio_mmc_host_runtime_suspend(struct device *dev)
+ {
+ 	struct tmio_mmc_host *host = dev_get_drvdata(dev);
+ 
+-	tmio_mmc_disable_mmc_irqs(host, TMIO_MASK_ALL);
++	tmio_mmc_disable_mmc_irqs(host, host->sdcard_irq_mask_all);
+ 
+ 	if (host->clk_cache)
+ 		host->set_clock(host, 0);
+diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
+index 929ce489b0629..c689bed646287 100644
+--- a/drivers/mtd/ubi/build.c
++++ b/drivers/mtd/ubi/build.c
+@@ -889,6 +889,13 @@ int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
+ 		return -EINVAL;
+ 	}
+ 
++	/* UBI cannot work on flashes with zero erasesize. */
++	if (!mtd->erasesize) {
++		pr_err("ubi: refuse attaching mtd%d - zero erasesize flash is not supported\n",
++			mtd->index);
++		return -EINVAL;
++	}
++
+ 	if (ubi_num == UBI_DEV_NUM_AUTO) {
+ 		/* Search for an empty slot in the @ubi_devices array */
+ 		for (ubi_num = 0; ubi_num < UBI_MAX_DEVICES; ubi_num++)
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 177151298d72a..53fbef9f4ce54 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -2316,14 +2316,16 @@ static void mv88e6xxx_hardware_reset(struct mv88e6xxx_chip *chip)
+ 		 * from the wrong location resulting in the switch booting
+ 		 * to wrong mode and inoperable.
+ 		 */
+-		mv88e6xxx_g1_wait_eeprom_done(chip);
++		if (chip->info->ops->get_eeprom)
++			mv88e6xxx_g2_eeprom_wait(chip);
+ 
+ 		gpiod_set_value_cansleep(gpiod, 1);
+ 		usleep_range(10000, 20000);
+ 		gpiod_set_value_cansleep(gpiod, 0);
+ 		usleep_range(10000, 20000);
+ 
+-		mv88e6xxx_g1_wait_eeprom_done(chip);
++		if (chip->info->ops->get_eeprom)
++			mv88e6xxx_g2_eeprom_wait(chip);
+ 	}
+ }
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/global1.c b/drivers/net/dsa/mv88e6xxx/global1.c
+index 9936ae69e5ee4..ff43d9c9a7ebf 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1.c
++++ b/drivers/net/dsa/mv88e6xxx/global1.c
+@@ -75,37 +75,6 @@ static int mv88e6xxx_g1_wait_init_ready(struct mv88e6xxx_chip *chip)
+ 	return mv88e6xxx_g1_wait_bit(chip, MV88E6XXX_G1_STS, bit, 1);
+ }
+ 
+-void mv88e6xxx_g1_wait_eeprom_done(struct mv88e6xxx_chip *chip)
+-{
+-	const unsigned long timeout = jiffies + 1 * HZ;
+-	u16 val;
+-	int err;
+-
+-	/* Wait up to 1 second for the switch to finish reading the
+-	 * EEPROM.
+-	 */
+-	while (time_before(jiffies, timeout)) {
+-		err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_STS, &val);
+-		if (err) {
+-			dev_err(chip->dev, "Error reading status");
+-			return;
+-		}
+-
+-		/* If the switch is still resetting, it may not
+-		 * respond on the bus, and so MDIO read returns
+-		 * 0xffff. Differentiate between that, and waiting for
+-		 * the EEPROM to be done by bit 0 being set.
+-		 */
+-		if (val != 0xffff &&
+-		    val & BIT(MV88E6XXX_G1_STS_IRQ_EEPROM_DONE))
+-			return;
+-
+-		usleep_range(1000, 2000);
+-	}
+-
+-	dev_err(chip->dev, "Timeout waiting for EEPROM done");
+-}
+-
+ /* Offset 0x01: Switch MAC Address Register Bytes 0 & 1
+  * Offset 0x02: Switch MAC Address Register Bytes 2 & 3
+  * Offset 0x03: Switch MAC Address Register Bytes 4 & 5
+diff --git a/drivers/net/dsa/mv88e6xxx/global1.h b/drivers/net/dsa/mv88e6xxx/global1.h
+index e05abe61fa114..1e3546f8b0727 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1.h
++++ b/drivers/net/dsa/mv88e6xxx/global1.h
+@@ -278,7 +278,6 @@ int mv88e6xxx_g1_set_switch_mac(struct mv88e6xxx_chip *chip, u8 *addr);
+ int mv88e6185_g1_reset(struct mv88e6xxx_chip *chip);
+ int mv88e6352_g1_reset(struct mv88e6xxx_chip *chip);
+ int mv88e6250_g1_reset(struct mv88e6xxx_chip *chip);
+-void mv88e6xxx_g1_wait_eeprom_done(struct mv88e6xxx_chip *chip);
+ 
+ int mv88e6185_g1_ppu_enable(struct mv88e6xxx_chip *chip);
+ int mv88e6185_g1_ppu_disable(struct mv88e6xxx_chip *chip);
+diff --git a/drivers/net/dsa/mv88e6xxx/global2.c b/drivers/net/dsa/mv88e6xxx/global2.c
+index 75b227d0f73b4..8607b2445e1a2 100644
+--- a/drivers/net/dsa/mv88e6xxx/global2.c
++++ b/drivers/net/dsa/mv88e6xxx/global2.c
+@@ -323,7 +323,7 @@ int mv88e6xxx_g2_pot_clear(struct mv88e6xxx_chip *chip)
+  * Offset 0x15: EEPROM Addr (for 8-bit data access)
+  */
+ 
+-static int mv88e6xxx_g2_eeprom_wait(struct mv88e6xxx_chip *chip)
++int mv88e6xxx_g2_eeprom_wait(struct mv88e6xxx_chip *chip)
+ {
+ 	int bit = __bf_shf(MV88E6XXX_G2_EEPROM_CMD_BUSY);
+ 	int err;
+diff --git a/drivers/net/dsa/mv88e6xxx/global2.h b/drivers/net/dsa/mv88e6xxx/global2.h
+index 1f42ee656816b..de63e3f08e5cd 100644
+--- a/drivers/net/dsa/mv88e6xxx/global2.h
++++ b/drivers/net/dsa/mv88e6xxx/global2.h
+@@ -349,6 +349,7 @@ int mv88e6xxx_g2_trunk_clear(struct mv88e6xxx_chip *chip);
+ 
+ int mv88e6xxx_g2_device_mapping_write(struct mv88e6xxx_chip *chip, int target,
+ 				      int port);
++int mv88e6xxx_g2_eeprom_wait(struct mv88e6xxx_chip *chip);
+ 
+ extern const struct mv88e6xxx_irq_ops mv88e6097_watchdog_ops;
+ extern const struct mv88e6xxx_irq_ops mv88e6250_watchdog_ops;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index d8366351cf14a..c67a108c2c07f 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2404,6 +2404,7 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget)
+ 	struct rx_cmp_ext *rxcmp1;
+ 	u32 cp_cons, tmp_raw_cons;
+ 	u32 raw_cons = cpr->cp_raw_cons;
++	bool flush_xdp = false;
+ 	u32 rx_pkts = 0;
+ 	u8 event = 0;
+ 
+@@ -2438,6 +2439,8 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget)
+ 				rx_pkts++;
+ 			else if (rc == -EBUSY)	/* partial completion */
+ 				break;
++			if (event & BNXT_REDIRECT_EVENT)
++				flush_xdp = true;
+ 		} else if (unlikely(TX_CMP_TYPE(txcmp) ==
+ 				    CMPL_BASE_TYPE_HWRM_DONE)) {
+ 			bnxt_hwrm_handler(bp, txcmp);
+@@ -2457,6 +2460,8 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget)
+ 
+ 	if (event & BNXT_AGG_EVENT)
+ 		bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
++	if (flush_xdp)
++		xdp_do_flush();
+ 
+ 	if (!bnxt_has_work(bp, cpr) && rx_pkts < budget) {
+ 		napi_complete_done(napi, rx_pkts);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 47f8f66cf7ecd..deba485ced1bd 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -3125,8 +3125,13 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
+ static void hclge_clear_event_cause(struct hclge_dev *hdev, u32 event_type,
+ 				    u32 regclr)
+ {
++#define HCLGE_IMP_RESET_DELAY		5
++
+ 	switch (event_type) {
+ 	case HCLGE_VECTOR0_EVENT_RST:
++		if (regclr == BIT(HCLGE_VECTOR0_IMPRESET_INT_B))
++			mdelay(HCLGE_IMP_RESET_DELAY);
++
+ 		hclge_write_dev(&hdev->hw, HCLGE_MISC_RESET_STS_REG, regclr);
+ 		break;
+ 	case HCLGE_VECTOR0_EVENT_MBX:
+@@ -7850,7 +7855,7 @@ static void hclge_update_overflow_flags(struct hclge_vport *vport,
+ 	if (mac_type == HCLGE_MAC_ADDR_UC) {
+ 		if (is_all_added)
+ 			vport->overflow_promisc_flags &= ~HNAE3_OVERFLOW_UPE;
+-		else
++		else if (hclge_is_umv_space_full(vport, true))
+ 			vport->overflow_promisc_flags |= HNAE3_OVERFLOW_UPE;
+ 	} else {
+ 		if (is_all_added)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index bb2a79b70c3ae..dfaa34f2473ab 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -4332,9 +4332,6 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id,
+ 		/* duplicate request, so just return success */
+ 		goto error_pvid;
+ 
+-	i40e_vc_reset_vf(vf, true);
+-	/* During reset the VF got a new VSI, so refresh a pointer. */
+-	vsi = pf->vsi[vf->lan_vsi_idx];
+ 	/* Locked once because multiple functions below iterate list */
+ 	spin_lock_bh(&vsi->mac_filter_hash_lock);
+ 
+@@ -4420,6 +4417,10 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id,
+ 	 */
+ 	vf->port_vlan_id = le16_to_cpu(vsi->info.pvid);
+ 
++	i40e_vc_reset_vf(vf, true);
++	/* During reset the VF got a new VSI, so refresh a pointer. */
++	vsi = pf->vsi[vf->lan_vsi_idx];
++
+ 	ret = i40e_config_vf_promiscuous_mode(vf, vsi->id, allmulti, alluni);
+ 	if (ret) {
+ 		dev_err(&pf->pdev->dev, "Unable to config vf promiscuous mode\n");
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.h b/drivers/net/ethernet/qlogic/qed/qed_ll2.h
+index df88d00053a29..efd025d15655d 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_ll2.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.h
+@@ -111,9 +111,9 @@ struct qed_ll2_info {
+ 	enum core_tx_dest tx_dest;
+ 	u8 tx_stats_en;
+ 	bool main_func_queue;
++	struct qed_ll2_cbs cbs;
+ 	struct qed_ll2_rx_queue rx_queue;
+ 	struct qed_ll2_tx_queue tx_queue;
+-	struct qed_ll2_cbs cbs;
+ };
+ 
+ extern const struct qed_ll2_ops qed_ll2_ops_pass;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c
+index 5d4df4c5254ed..6623f5a079275 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c
+@@ -105,6 +105,7 @@ struct stm32_ops {
+ 	int (*parse_data)(struct stm32_dwmac *dwmac,
+ 			  struct device *dev);
+ 	u32 syscfg_eth_mask;
++	bool clk_rx_enable_in_suspend;
+ };
+ 
+ static int stm32_dwmac_init(struct plat_stmmacenet_data *plat_dat)
+@@ -122,7 +123,8 @@ static int stm32_dwmac_init(struct plat_stmmacenet_data *plat_dat)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (!dwmac->dev->power.is_suspended) {
++	if (!dwmac->ops->clk_rx_enable_in_suspend ||
++	    !dwmac->dev->power.is_suspended) {
+ 		ret = clk_prepare_enable(dwmac->clk_rx);
+ 		if (ret) {
+ 			clk_disable_unprepare(dwmac->clk_tx);
+@@ -515,7 +517,8 @@ static struct stm32_ops stm32mp1_dwmac_data = {
+ 	.suspend = stm32mp1_suspend,
+ 	.resume = stm32mp1_resume,
+ 	.parse_data = stm32mp1_parse_data,
+-	.syscfg_eth_mask = SYSCFG_MP1_ETH_MASK
++	.syscfg_eth_mask = SYSCFG_MP1_ETH_MASK,
++	.clk_rx_enable_in_suspend = true
+ };
+ 
+ static const struct of_device_id stm32_dwmac_match[] = {
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index e4af1f506b833..d103244313542 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -1496,6 +1496,7 @@ static int am65_cpsw_nuss_init_tx_chns(struct am65_cpsw_common *common)
+ 		if (tx_chn->irq <= 0) {
+ 			dev_err(dev, "Failed to get tx dma irq %d\n",
+ 				tx_chn->irq);
++			ret = tx_chn->irq ?: -ENXIO;
+ 			goto err;
+ 		}
+ 
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 721b536ce8861..97a77dabed64c 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -2122,7 +2122,12 @@ static const struct ethtool_ops team_ethtool_ops = {
+ static void team_setup_by_port(struct net_device *dev,
+ 			       struct net_device *port_dev)
+ {
+-	dev->header_ops	= port_dev->header_ops;
++	struct team *team = netdev_priv(dev);
++
++	if (port_dev->type == ARPHRD_ETHER)
++		dev->header_ops	= team->header_ops_cache;
++	else
++		dev->header_ops	= port_dev->header_ops;
+ 	dev->type = port_dev->type;
+ 	dev->hard_header_len = port_dev->hard_header_len;
+ 	dev->needed_headroom = port_dev->needed_headroom;
+@@ -2169,8 +2174,11 @@ static int team_dev_type_check_change(struct net_device *dev,
+ 
+ static void team_setup(struct net_device *dev)
+ {
++	struct team *team = netdev_priv(dev);
++
+ 	ether_setup(dev);
+ 	dev->max_mtu = ETH_MAX_MTU;
++	team->header_ops_cache = dev->header_ops;
+ 
+ 	dev->netdev_ops = &team_netdev_ops;
+ 	dev->ethtool_ops = &team_ethtool_ops;
+diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c
+index 5d96dc1b00b36..e05bcf86306cf 100644
+--- a/drivers/net/thunderbolt.c
++++ b/drivers/net/thunderbolt.c
+@@ -958,12 +958,11 @@ static bool tbnet_xmit_csum_and_map(struct tbnet *net, struct sk_buff *skb,
+ 		*tucso = ~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+ 					    ip_hdr(skb)->daddr, 0,
+ 					    ip_hdr(skb)->protocol, 0);
+-	} else if (skb_is_gso_v6(skb)) {
++	} else if (skb_is_gso(skb) && skb_is_gso_v6(skb)) {
+ 		tucso = dest + ((void *)&(tcp_hdr(skb)->check) - data);
+ 		*tucso = ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+ 					  &ipv6_hdr(skb)->daddr, 0,
+ 					  IPPROTO_TCP, 0);
+-		return false;
+ 	} else if (protocol == htons(ETH_P_IPV6)) {
+ 		tucso = dest + skb_checksum_start_offset(skb) + skb->csum_offset;
+ 		*tucso = ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
+index fb1389bd09392..6310841aeac72 100644
+--- a/drivers/net/usb/smsc75xx.c
++++ b/drivers/net/usb/smsc75xx.c
+@@ -90,7 +90,9 @@ static int __must_check __smsc75xx_read_reg(struct usbnet *dev, u32 index,
+ 	ret = fn(dev, USB_VENDOR_REQUEST_READ_REGISTER, USB_DIR_IN
+ 		 | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 		 0, index, &buf, 4);
+-	if (unlikely(ret < 0)) {
++	if (unlikely(ret < 4)) {
++		ret = ret < 0 ? ret : -ENODATA;
++
+ 		netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
+ 			    index, ret);
+ 		return ret;
+diff --git a/drivers/net/wan/fsl_ucc_hdlc.c b/drivers/net/wan/fsl_ucc_hdlc.c
+index ae1ae65e7f90a..bc3650c70730a 100644
+--- a/drivers/net/wan/fsl_ucc_hdlc.c
++++ b/drivers/net/wan/fsl_ucc_hdlc.c
+@@ -34,6 +34,8 @@
+ #define TDM_PPPOHT_SLIC_MAXIN
+ #define RX_BD_ERRORS (R_CD_S | R_OV_S | R_CR_S | R_AB_S | R_NO_S | R_LG_S)
+ 
++static int uhdlc_close(struct net_device *dev);
++
+ static struct ucc_tdm_info utdm_primary_info = {
+ 	.uf_info = {
+ 		.tsa = 0,
+@@ -708,6 +710,7 @@ static int uhdlc_open(struct net_device *dev)
+ 	hdlc_device *hdlc = dev_to_hdlc(dev);
+ 	struct ucc_hdlc_private *priv = hdlc->priv;
+ 	struct ucc_tdm *utdm = priv->utdm;
++	int rc = 0;
+ 
+ 	if (priv->hdlc_busy != 1) {
+ 		if (request_irq(priv->ut_info->uf_info.irq,
+@@ -731,10 +734,13 @@ static int uhdlc_open(struct net_device *dev)
+ 		napi_enable(&priv->napi);
+ 		netdev_reset_queue(dev);
+ 		netif_start_queue(dev);
+-		hdlc_open(dev);
++
++		rc = hdlc_open(dev);
++		if (rc)
++			uhdlc_close(dev);
+ 	}
+ 
+-	return 0;
++	return rc;
+ }
+ 
+ static void uhdlc_memclean(struct ucc_hdlc_private *priv)
+@@ -824,6 +830,8 @@ static int uhdlc_close(struct net_device *dev)
+ 	netdev_reset_queue(dev);
+ 	priv->hdlc_busy = 0;
+ 
++	hdlc_close(dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h b/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h
+index cb40f509ab612..d08750abac953 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h
+@@ -334,9 +334,9 @@ struct iwl_fw_ini_fifo_hdr {
+ struct iwl_fw_ini_error_dump_range {
+ 	__le32 range_data_size;
+ 	union {
+-		__le32 internal_base_addr;
+-		__le64 dram_base_addr;
+-		__le32 page_num;
++		__le32 internal_base_addr __packed;
++		__le64 dram_base_addr __packed;
++		__le32 page_num __packed;
+ 		struct iwl_fw_ini_fifo_hdr fifo_hdr;
+ 		struct iwl_cmd_header fw_pkt_hdr;
+ 	};
+diff --git a/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c b/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
+index 1046b59647f52..cbe4a200e4eaf 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
++++ b/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
+@@ -977,8 +977,8 @@ void mwifiex_11n_rxba_sync_event(struct mwifiex_private *priv,
+ 			}
+ 		}
+ 
+-		tlv_buf_left -= (sizeof(*tlv_rxba) + tlv_len);
+-		tmp = (u8 *)tlv_rxba + tlv_len + sizeof(*tlv_rxba);
++		tlv_buf_left -= (sizeof(tlv_rxba->header) + tlv_len);
++		tmp = (u8 *)tlv_rxba  + sizeof(tlv_rxba->header) + tlv_len;
+ 		tlv_rxba = (struct mwifiex_ie_types_rxba_sync *)tmp;
+ 	}
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_rx.c b/drivers/net/wireless/marvell/mwifiex/sta_rx.c
+index 3c555946cb2cc..5b16e330014ac 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_rx.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_rx.c
+@@ -98,7 +98,8 @@ int mwifiex_process_rx_packet(struct mwifiex_private *priv,
+ 	rx_pkt_len = le16_to_cpu(local_rx_pd->rx_pkt_length);
+ 	rx_pkt_hdr = (void *)local_rx_pd + rx_pkt_off;
+ 
+-	if (sizeof(*rx_pkt_hdr) + rx_pkt_off > skb->len) {
++	if (sizeof(rx_pkt_hdr->eth803_hdr) + sizeof(rfc1042_header) +
++	    rx_pkt_off > skb->len) {
+ 		mwifiex_dbg(priv->adapter, ERROR,
+ 			    "wrong rx packet offset: len=%d, rx_pkt_off=%d\n",
+ 			    skb->len, rx_pkt_off);
+@@ -107,12 +108,13 @@ int mwifiex_process_rx_packet(struct mwifiex_private *priv,
+ 		return -1;
+ 	}
+ 
+-	if ((!memcmp(&rx_pkt_hdr->rfc1042_hdr, bridge_tunnel_header,
+-		     sizeof(bridge_tunnel_header))) ||
+-	    (!memcmp(&rx_pkt_hdr->rfc1042_hdr, rfc1042_header,
+-		     sizeof(rfc1042_header)) &&
+-	     ntohs(rx_pkt_hdr->rfc1042_hdr.snap_type) != ETH_P_AARP &&
+-	     ntohs(rx_pkt_hdr->rfc1042_hdr.snap_type) != ETH_P_IPX)) {
++	if (sizeof(*rx_pkt_hdr) + rx_pkt_off <= skb->len &&
++	    ((!memcmp(&rx_pkt_hdr->rfc1042_hdr, bridge_tunnel_header,
++		      sizeof(bridge_tunnel_header))) ||
++	     (!memcmp(&rx_pkt_hdr->rfc1042_hdr, rfc1042_header,
++		      sizeof(rfc1042_header)) &&
++	      ntohs(rx_pkt_hdr->rfc1042_hdr.snap_type) != ETH_P_AARP &&
++	      ntohs(rx_pkt_hdr->rfc1042_hdr.snap_type) != ETH_P_IPX))) {
+ 		/*
+ 		 *  Replace the 803 header and rfc1042 header (llc/snap) with an
+ 		 *    EthernetII header, keep the src/dst and snap_type
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c b/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c
+index 0acabba2d1a50..5d402cf2951cb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c
+@@ -131,15 +131,8 @@ u8 mt76x02_get_lna_gain(struct mt76x02_dev *dev,
+ 			s8 *lna_2g, s8 *lna_5g,
+ 			struct ieee80211_channel *chan)
+ {
+-	u16 val;
+ 	u8 lna;
+ 
+-	val = mt76x02_eeprom_get(dev, MT_EE_NIC_CONF_1);
+-	if (val & MT_EE_NIC_CONF_1_LNA_EXT_2G)
+-		*lna_2g = 0;
+-	if (val & MT_EE_NIC_CONF_1_LNA_EXT_5G)
+-		memset(lna_5g, 0, sizeof(s8) * 3);
+-
+ 	if (chan->band == NL80211_BAND_2GHZ)
+ 		lna = *lna_2g;
+ 	else if (chan->hw_value <= 64)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt76x2/eeprom.c
+index 410ffce3bafff..60478116014f8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/eeprom.c
+@@ -256,7 +256,8 @@ void mt76x2_read_rx_gain(struct mt76x02_dev *dev)
+ 	struct ieee80211_channel *chan = dev->mphy.chandef.chan;
+ 	int channel = chan->hw_value;
+ 	s8 lna_5g[3], lna_2g;
+-	u8 lna;
++	bool use_lna;
++	u8 lna = 0;
+ 	u16 val;
+ 
+ 	if (chan->band == NL80211_BAND_2GHZ)
+@@ -275,7 +276,15 @@ void mt76x2_read_rx_gain(struct mt76x02_dev *dev)
+ 	dev->cal.rx.mcu_gain |= (lna_5g[1] & 0xff) << 16;
+ 	dev->cal.rx.mcu_gain |= (lna_5g[2] & 0xff) << 24;
+ 
+-	lna = mt76x02_get_lna_gain(dev, &lna_2g, lna_5g, chan);
++	val = mt76x02_eeprom_get(dev, MT_EE_NIC_CONF_1);
++	if (chan->band == NL80211_BAND_2GHZ)
++		use_lna = !(val & MT_EE_NIC_CONF_1_LNA_EXT_2G);
++	else
++		use_lna = !(val & MT_EE_NIC_CONF_1_LNA_EXT_5G);
++
++	if (use_lna)
++		lna = mt76x02_get_lna_gain(dev, &lna_2g, lna_5g, chan);
++
+ 	dev->cal.rx.lna_gain = mt76x02_sign_extend(lna, 8);
+ }
+ EXPORT_SYMBOL_GPL(mt76x2_read_rx_gain);
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 3aaead9b3a570..9c67ebd4eac38 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -387,14 +387,6 @@ static int nvme_pci_npages_sgl(void)
+ 			NVME_CTRL_PAGE_SIZE);
+ }
+ 
+-static size_t nvme_pci_iod_alloc_size(void)
+-{
+-	size_t npages = max(nvme_pci_npages_prp(), nvme_pci_npages_sgl());
+-
+-	return sizeof(__le64 *) * npages +
+-		sizeof(struct scatterlist) * NVME_MAX_SEGS;
+-}
+-
+ static int nvme_admin_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
+ 				unsigned int hctx_idx)
+ {
+@@ -2557,6 +2549,22 @@ static void nvme_release_prp_pools(struct nvme_dev *dev)
+ 	dma_pool_destroy(dev->prp_small_pool);
+ }
+ 
++static int nvme_pci_alloc_iod_mempool(struct nvme_dev *dev)
++{
++	size_t npages = max(nvme_pci_npages_prp(), nvme_pci_npages_sgl());
++	size_t alloc_size = sizeof(__le64 *) * npages +
++			    sizeof(struct scatterlist) * NVME_MAX_SEGS;
++
++	WARN_ON_ONCE(alloc_size > PAGE_SIZE);
++	dev->iod_mempool = mempool_create_node(1,
++			mempool_kmalloc, mempool_kfree,
++			(void *)alloc_size, GFP_KERNEL,
++			dev_to_node(dev->dev));
++	if (!dev->iod_mempool)
++		return -ENOMEM;
++	return 0;
++}
++
+ static void nvme_free_tagset(struct nvme_dev *dev)
+ {
+ 	if (dev->tagset.tags)
+@@ -2564,6 +2572,7 @@ static void nvme_free_tagset(struct nvme_dev *dev)
+ 	dev->ctrl.tagset = NULL;
+ }
+ 
++/* pairs with nvme_pci_alloc_dev */
+ static void nvme_pci_free_ctrl(struct nvme_ctrl *ctrl)
+ {
+ 	struct nvme_dev *dev = to_nvme_dev(ctrl);
+@@ -2840,32 +2849,6 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev)
+ 	return 0;
+ }
+ 
+-#ifdef CONFIG_ACPI
+-static bool nvme_acpi_storage_d3(struct pci_dev *dev)
+-{
+-	struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
+-	u8 val;
+-
+-	/*
+-	 * Look for _DSD property specifying that the storage device on the port
+-	 * must use D3 to support deep platform power savings during
+-	 * suspend-to-idle.
+-	 */
+-
+-	if (!adev)
+-		return false;
+-	if (fwnode_property_read_u8(acpi_fwnode_handle(adev), "StorageD3Enable",
+-			&val))
+-		return false;
+-	return val == 1;
+-}
+-#else
+-static inline bool nvme_acpi_storage_d3(struct pci_dev *dev)
+-{
+-	return false;
+-}
+-#endif /* CONFIG_ACPI */
+-
+ static void nvme_async_probe(void *data, async_cookie_t cookie)
+ {
+ 	struct nvme_dev *dev = data;
+@@ -2875,20 +2858,20 @@ static void nvme_async_probe(void *data, async_cookie_t cookie)
+ 	nvme_put_ctrl(&dev->ctrl);
+ }
+ 
+-static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
++static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
++		const struct pci_device_id *id)
+ {
+-	int node, result = -ENOMEM;
+-	struct nvme_dev *dev;
+ 	unsigned long quirks = id->driver_data;
+-	size_t alloc_size;
+-
+-	node = dev_to_node(&pdev->dev);
+-	if (node == NUMA_NO_NODE)
+-		set_dev_node(&pdev->dev, first_memory_node);
++	int node = dev_to_node(&pdev->dev);
++	struct nvme_dev *dev;
++	int ret = -ENOMEM;
+ 
+ 	dev = kzalloc_node(sizeof(*dev), GFP_KERNEL, node);
+ 	if (!dev)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
++	INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work);
++	INIT_WORK(&dev->remove_work, nvme_remove_dead_ctrl_work);
++	mutex_init(&dev->shutdown_lock);
+ 
+ 	dev->nr_write_queues = write_queues;
+ 	dev->nr_poll_queues = poll_queues;
+@@ -2896,26 +2879,12 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	dev->queues = kcalloc_node(dev->nr_allocated_queues,
+ 			sizeof(struct nvme_queue), GFP_KERNEL, node);
+ 	if (!dev->queues)
+-		goto free;
++		goto out_free_dev;
+ 
+ 	dev->dev = get_device(&pdev->dev);
+-	pci_set_drvdata(pdev, dev);
+-
+-	result = nvme_dev_map(dev);
+-	if (result)
+-		goto put_pci;
+-
+-	INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work);
+-	INIT_WORK(&dev->remove_work, nvme_remove_dead_ctrl_work);
+-	mutex_init(&dev->shutdown_lock);
+-
+-	result = nvme_setup_prp_pools(dev);
+-	if (result)
+-		goto unmap;
+ 
+ 	quirks |= check_vendor_combination_bug(pdev);
+-
+-	if (!noacpi && nvme_acpi_storage_d3(pdev)) {
++	if (!noacpi && acpi_storage_d3(&pdev->dev)) {
+ 		/*
+ 		 * Some systems use a bios work around to ask for D3 on
+ 		 * platforms that support kernel managed suspend.
+@@ -2924,46 +2893,54 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 			 "platform quirk: setting simple suspend\n");
+ 		quirks |= NVME_QUIRK_SIMPLE_SUSPEND;
+ 	}
++	ret = nvme_init_ctrl(&dev->ctrl, &pdev->dev, &nvme_pci_ctrl_ops,
++			     quirks);
++	if (ret)
++		goto out_put_device;
++	return dev;
+ 
+-	/*
+-	 * Double check that our mempool alloc size will cover the biggest
+-	 * command we support.
+-	 */
+-	alloc_size = nvme_pci_iod_alloc_size();
+-	WARN_ON_ONCE(alloc_size > PAGE_SIZE);
++out_put_device:
++	put_device(dev->dev);
++	kfree(dev->queues);
++out_free_dev:
++	kfree(dev);
++	return ERR_PTR(ret);
++}
+ 
+-	dev->iod_mempool = mempool_create_node(1, mempool_kmalloc,
+-						mempool_kfree,
+-						(void *) alloc_size,
+-						GFP_KERNEL, node);
+-	if (!dev->iod_mempool) {
+-		result = -ENOMEM;
+-		goto release_pools;
+-	}
++static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
++{
++	struct nvme_dev *dev;
++	int result = -ENOMEM;
+ 
+-	result = nvme_init_ctrl(&dev->ctrl, &pdev->dev, &nvme_pci_ctrl_ops,
+-			quirks);
++	dev = nvme_pci_alloc_dev(pdev, id);
++	if (IS_ERR(dev))
++		return PTR_ERR(dev);
++
++	result = nvme_dev_map(dev);
+ 	if (result)
+-		goto release_mempool;
++		goto out_uninit_ctrl;
++
++	result = nvme_setup_prp_pools(dev);
++	if (result)
++		goto out_dev_unmap;
++
++	result = nvme_pci_alloc_iod_mempool(dev);
++	if (result)
++		goto out_release_prp_pools;
+ 
+ 	dev_info(dev->ctrl.device, "pci function %s\n", dev_name(&pdev->dev));
++	pci_set_drvdata(pdev, dev);
+ 
+ 	nvme_reset_ctrl(&dev->ctrl);
+ 	async_schedule(nvme_async_probe, dev);
+-
+ 	return 0;
+ 
+- release_mempool:
+-	mempool_destroy(dev->iod_mempool);
+- release_pools:
++out_release_prp_pools:
+ 	nvme_release_prp_pools(dev);
+- unmap:
++out_dev_unmap:
+ 	nvme_dev_unmap(dev);
+- put_pci:
+-	put_device(dev->dev);
+- free:
+-	kfree(dev->queues);
+-	kfree(dev);
++out_uninit_ctrl:
++	nvme_uninit_ctrl(&dev->ctrl);
+ 	return result;
+ }
+ 
+diff --git a/drivers/of/dynamic.c b/drivers/of/dynamic.c
+index be26346085faf..7d0232af9c23d 100644
+--- a/drivers/of/dynamic.c
++++ b/drivers/of/dynamic.c
+@@ -893,13 +893,13 @@ int of_changeset_action(struct of_changeset *ocs, unsigned long action,
+ {
+ 	struct of_changeset_entry *ce;
+ 
++	if (WARN_ON(action >= ARRAY_SIZE(action_names)))
++		return -EINVAL;
++
+ 	ce = kzalloc(sizeof(*ce), GFP_KERNEL);
+ 	if (!ce)
+ 		return -ENOMEM;
+ 
+-	if (WARN_ON(action >= ARRAY_SIZE(action_names)))
+-		return -EINVAL;
+-
+ 	/* get a reference to the node */
+ 	ce->action = action;
+ 	ce->np = of_node_get(np);
+diff --git a/drivers/parisc/iosapic.c b/drivers/parisc/iosapic.c
+index fd99735dca3e6..6ef663bbcdb01 100644
+--- a/drivers/parisc/iosapic.c
++++ b/drivers/parisc/iosapic.c
+@@ -202,9 +202,9 @@ static inline void iosapic_write(void __iomem *iosapic, unsigned int reg, u32 va
+ 
+ static DEFINE_SPINLOCK(iosapic_lock);
+ 
+-static inline void iosapic_eoi(void __iomem *addr, unsigned int data)
++static inline void iosapic_eoi(__le32 __iomem *addr, __le32 data)
+ {
+-	__raw_writel(data, addr);
++	__raw_writel((__force u32)data, addr);
+ }
+ 
+ /*
+diff --git a/drivers/parisc/iosapic_private.h b/drivers/parisc/iosapic_private.h
+index 73ecc657ad954..bd8ff40162b4b 100644
+--- a/drivers/parisc/iosapic_private.h
++++ b/drivers/parisc/iosapic_private.h
+@@ -118,8 +118,8 @@ struct iosapic_irt {
+ struct vector_info {
+ 	struct iosapic_info *iosapic;	/* I/O SAPIC this vector is on */
+ 	struct irt_entry *irte;		/* IRT entry */
+-	u32 __iomem *eoi_addr;		/* precalculate EOI reg address */
+-	u32	eoi_data;		/* IA64: ?       PA: swapped txn_data */
++	__le32 __iomem *eoi_addr;	/* precalculate EOI reg address */
++	__le32	eoi_data;		/* IA64: ?       PA: swapped txn_data */
+ 	int	txn_irq;		/* virtual IRQ number for processor */
+ 	ulong	txn_addr;		/* IA64: id_eid  PA: partial HPA */
+ 	u32	txn_data;		/* CPU interrupt bit */
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 737cc9d6fa6ab..c68e14271c02c 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -771,8 +771,6 @@ static int qcom_pcie_get_resources_2_4_0(struct qcom_pcie *pcie)
+ 			return PTR_ERR(res->phy_ahb_reset);
+ 	}
+ 
+-	dw_pcie_dbi_ro_wr_dis(pci);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/platform/mellanox/Kconfig b/drivers/platform/mellanox/Kconfig
+index 916b39dc11bc6..1a11d1a441b53 100644
+--- a/drivers/platform/mellanox/Kconfig
++++ b/drivers/platform/mellanox/Kconfig
+@@ -48,6 +48,7 @@ config MLXBF_BOOTCTL
+ 	tristate "Mellanox BlueField Firmware Boot Control driver"
+ 	depends on ARM64
+ 	depends on ACPI
++	depends on NET
+ 	help
+ 	  The Mellanox BlueField firmware implements functionality to
+ 	  request swapping the primary and alternate eMMC boot partition,
+diff --git a/drivers/platform/x86/intel_scu_ipc.c b/drivers/platform/x86/intel_scu_ipc.c
+index bdeb888c0fea4..84ed828694630 100644
+--- a/drivers/platform/x86/intel_scu_ipc.c
++++ b/drivers/platform/x86/intel_scu_ipc.c
+@@ -19,6 +19,7 @@
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
+ 
+@@ -232,19 +233,15 @@ static inline u32 ipc_data_readl(struct intel_scu_ipc_dev *scu, u32 offset)
+ /* Wait till scu status is busy */
+ static inline int busy_loop(struct intel_scu_ipc_dev *scu)
+ {
+-	unsigned long end = jiffies + IPC_TIMEOUT;
+-
+-	do {
+-		u32 status;
+-
+-		status = ipc_read_status(scu);
+-		if (!(status & IPC_STATUS_BUSY))
+-			return (status & IPC_STATUS_ERR) ? -EIO : 0;
++	u8 status;
++	int err;
+ 
+-		usleep_range(50, 100);
+-	} while (time_before(jiffies, end));
++	err = readx_poll_timeout(ipc_read_status, scu, status, !(status & IPC_STATUS_BUSY),
++				 100, jiffies_to_usecs(IPC_TIMEOUT));
++	if (err)
++		return err;
+ 
+-	return -ETIMEDOUT;
++	return (status & IPC_STATUS_ERR) ? -EIO : 0;
+ }
+ 
+ /* Wait till ipc ioc interrupt is received or timeout in 10 HZ */
+@@ -252,10 +249,12 @@ static inline int ipc_wait_for_interrupt(struct intel_scu_ipc_dev *scu)
+ {
+ 	int status;
+ 
+-	if (!wait_for_completion_timeout(&scu->cmd_complete, IPC_TIMEOUT))
+-		return -ETIMEDOUT;
++	wait_for_completion_timeout(&scu->cmd_complete, IPC_TIMEOUT);
+ 
+ 	status = ipc_read_status(scu);
++	if (status & IPC_STATUS_BUSY)
++		return -ETIMEDOUT;
++
+ 	if (status & IPC_STATUS_ERR)
+ 		return -EIO;
+ 
+@@ -267,6 +266,24 @@ static int intel_scu_ipc_check_status(struct intel_scu_ipc_dev *scu)
+ 	return scu->irq > 0 ? ipc_wait_for_interrupt(scu) : busy_loop(scu);
+ }
+ 
++static struct intel_scu_ipc_dev *intel_scu_ipc_get(struct intel_scu_ipc_dev *scu)
++{
++	u8 status;
++
++	if (!scu)
++		scu = ipcdev;
++	if (!scu)
++		return ERR_PTR(-ENODEV);
++
++	status = ipc_read_status(scu);
++	if (status & IPC_STATUS_BUSY) {
++		dev_dbg(&scu->dev, "device is busy\n");
++		return ERR_PTR(-EBUSY);
++	}
++
++	return scu;
++}
++
+ /* Read/Write power control(PMIC in Langwell, MSIC in PenWell) registers */
+ static int pwr_reg_rdwr(struct intel_scu_ipc_dev *scu, u16 *addr, u8 *data,
+ 			u32 count, u32 op, u32 id)
+@@ -280,11 +297,10 @@ static int pwr_reg_rdwr(struct intel_scu_ipc_dev *scu, u16 *addr, u8 *data,
+ 	memset(cbuf, 0, sizeof(cbuf));
+ 
+ 	mutex_lock(&ipclock);
+-	if (!scu)
+-		scu = ipcdev;
+-	if (!scu) {
++	scu = intel_scu_ipc_get(scu);
++	if (IS_ERR(scu)) {
+ 		mutex_unlock(&ipclock);
+-		return -ENODEV;
++		return PTR_ERR(scu);
+ 	}
+ 
+ 	for (nc = 0; nc < count; nc++, offset += 2) {
+@@ -439,13 +455,12 @@ int intel_scu_ipc_dev_simple_command(struct intel_scu_ipc_dev *scu, int cmd,
+ 	int err;
+ 
+ 	mutex_lock(&ipclock);
+-	if (!scu)
+-		scu = ipcdev;
+-	if (!scu) {
++	scu = intel_scu_ipc_get(scu);
++	if (IS_ERR(scu)) {
+ 		mutex_unlock(&ipclock);
+-		return -ENODEV;
++		return PTR_ERR(scu);
+ 	}
+-	scu = ipcdev;
++
+ 	cmdval = sub << 12 | cmd;
+ 	ipc_command(scu, cmdval);
+ 	err = intel_scu_ipc_check_status(scu);
+@@ -485,11 +500,10 @@ int intel_scu_ipc_dev_command_with_size(struct intel_scu_ipc_dev *scu, int cmd,
+ 		return -EINVAL;
+ 
+ 	mutex_lock(&ipclock);
+-	if (!scu)
+-		scu = ipcdev;
+-	if (!scu) {
++	scu = intel_scu_ipc_get(scu);
++	if (IS_ERR(scu)) {
+ 		mutex_unlock(&ipclock);
+-		return -ENODEV;
++		return PTR_ERR(scu);
+ 	}
+ 
+ 	memcpy(inbuf, in, inlen);
+diff --git a/drivers/power/supply/ucs1002_power.c b/drivers/power/supply/ucs1002_power.c
+index ef673ec3db568..332cb50d9fb4f 100644
+--- a/drivers/power/supply/ucs1002_power.c
++++ b/drivers/power/supply/ucs1002_power.c
+@@ -384,7 +384,8 @@ static int ucs1002_get_property(struct power_supply *psy,
+ 	case POWER_SUPPLY_PROP_USB_TYPE:
+ 		return ucs1002_get_usb_type(info, val);
+ 	case POWER_SUPPLY_PROP_HEALTH:
+-		return val->intval = info->health;
++		val->intval = info->health;
++		return 0;
+ 	case POWER_SUPPLY_PROP_PRESENT:
+ 		val->intval = info->present;
+ 		return 0;
+diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c
+index 18b713a616de9..36c2bd2016f22 100644
+--- a/drivers/s390/scsi/zfcp_aux.c
++++ b/drivers/s390/scsi/zfcp_aux.c
+@@ -497,12 +497,12 @@ struct zfcp_port *zfcp_port_enqueue(struct zfcp_adapter *adapter, u64 wwpn,
+ 	if (port) {
+ 		put_device(&port->dev);
+ 		retval = -EEXIST;
+-		goto err_out;
++		goto err_put;
+ 	}
+ 
+ 	port = kzalloc(sizeof(struct zfcp_port), GFP_KERNEL);
+ 	if (!port)
+-		goto err_out;
++		goto err_put;
+ 
+ 	rwlock_init(&port->unit_list_lock);
+ 	INIT_LIST_HEAD(&port->unit_list);
+@@ -525,7 +525,7 @@ struct zfcp_port *zfcp_port_enqueue(struct zfcp_adapter *adapter, u64 wwpn,
+ 
+ 	if (dev_set_name(&port->dev, "0x%016llx", (unsigned long long)wwpn)) {
+ 		kfree(port);
+-		goto err_out;
++		goto err_put;
+ 	}
+ 	retval = -EINVAL;
+ 
+@@ -542,7 +542,8 @@ struct zfcp_port *zfcp_port_enqueue(struct zfcp_adapter *adapter, u64 wwpn,
+ 
+ 	return port;
+ 
+-err_out:
++err_put:
+ 	zfcp_ccw_adapter_put(adapter);
++err_out:
+ 	return ERR_PTR(retval);
+ }
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index e9b3485baee01..2b20c6a0293f5 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -4344,7 +4344,7 @@ pm8001_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id)
+ 	payload.sas_identify.dev_type = SAS_END_DEVICE;
+ 	payload.sas_identify.initiator_bits = SAS_PROTOCOL_ALL;
+ 	memcpy(payload.sas_identify.sas_addr,
+-		pm8001_ha->sas_addr, SAS_ADDR_SIZE);
++		&pm8001_ha->phy[phy_id].dev_sas_addr, SAS_ADDR_SIZE);
+ 	payload.sas_identify.phy_id = phy_id;
+ 	ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opcode, &payload,
+ 			sizeof(payload), 0);
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index c98c0a53a018c..89051722e04d6 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -3722,10 +3722,12 @@ static int mpi_set_controller_config_resp(struct pm8001_hba_info *pm8001_ha,
+ 			(struct set_ctrl_cfg_resp *)(piomb + 4);
+ 	u32 status = le32_to_cpu(pPayload->status);
+ 	u32 err_qlfr_pgcd = le32_to_cpu(pPayload->err_qlfr_pgcd);
++	u32 tag = le32_to_cpu(pPayload->tag);
+ 
+ 	pm8001_dbg(pm8001_ha, MSG,
+ 		   "SET CONTROLLER RESP: status 0x%x qlfr_pgcd 0x%x\n",
+ 		   status, err_qlfr_pgcd);
++	pm8001_tag_free(pm8001_ha, tag);
+ 
+ 	return 0;
+ }
+@@ -4741,7 +4743,7 @@ pm80xx_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id)
+ 	payload.sas_identify.dev_type = SAS_END_DEVICE;
+ 	payload.sas_identify.initiator_bits = SAS_PROTOCOL_ALL;
+ 	memcpy(payload.sas_identify.sas_addr,
+-	  &pm8001_ha->sas_addr, SAS_ADDR_SIZE);
++		&pm8001_ha->phy[phy_id].dev_sas_addr, SAS_ADDR_SIZE);
+ 	payload.sas_identify.phy_id = phy_id;
+ 	ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opcode, &payload,
+ 			sizeof(payload), 0);
+diff --git a/drivers/scsi/qedf/qedf_io.c b/drivers/scsi/qedf/qedf_io.c
+index 472374d83cede..1f8e81296beb7 100644
+--- a/drivers/scsi/qedf/qedf_io.c
++++ b/drivers/scsi/qedf/qedf_io.c
+@@ -1924,6 +1924,7 @@ int qedf_initiate_abts(struct qedf_ioreq *io_req, bool return_scsi_cmd_on_abts)
+ 		goto drop_rdata_kref;
+ 	}
+ 
++	spin_lock_irqsave(&fcport->rport_lock, flags);
+ 	if (!test_bit(QEDF_CMD_OUTSTANDING, &io_req->flags) ||
+ 	    test_bit(QEDF_CMD_IN_CLEANUP, &io_req->flags) ||
+ 	    test_bit(QEDF_CMD_IN_ABORT, &io_req->flags)) {
+@@ -1931,17 +1932,20 @@ int qedf_initiate_abts(struct qedf_ioreq *io_req, bool return_scsi_cmd_on_abts)
+ 			 "io_req xid=0x%x sc_cmd=%p already in cleanup or abort processing or already completed.\n",
+ 			 io_req->xid, io_req->sc_cmd);
+ 		rc = 1;
++		spin_unlock_irqrestore(&fcport->rport_lock, flags);
+ 		goto drop_rdata_kref;
+ 	}
+ 
++	/* Set the command type to abort */
++	io_req->cmd_type = QEDF_ABTS;
++	spin_unlock_irqrestore(&fcport->rport_lock, flags);
++
+ 	kref_get(&io_req->refcount);
+ 
+ 	xid = io_req->xid;
+ 	qedf->control_requests++;
+ 	qedf->packet_aborts++;
+ 
+-	/* Set the command type to abort */
+-	io_req->cmd_type = QEDF_ABTS;
+ 	io_req->return_scsi_cmd_on_abts = return_scsi_cmd_on_abts;
+ 
+ 	set_bit(QEDF_CMD_IN_ABORT, &io_req->flags);
+@@ -2230,7 +2234,9 @@ process_els:
+ 		  refcount, fcport, fcport->rdata->ids.port_id);
+ 
+ 	/* Cleanup cmds re-use the same TID as the original I/O */
++	spin_lock_irqsave(&fcport->rport_lock, flags);
+ 	io_req->cmd_type = QEDF_CLEANUP;
++	spin_unlock_irqrestore(&fcport->rport_lock, flags);
+ 	io_req->return_scsi_cmd_on_abts = return_scsi_cmd_on_abts;
+ 
+ 	init_completion(&io_req->cleanup_done);
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 63c9368bafcf1..6923862be3fbc 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -2803,6 +2803,8 @@ void qedf_process_cqe(struct qedf_ctx *qedf, struct fcoe_cqe *cqe)
+ 	struct qedf_ioreq *io_req;
+ 	struct qedf_rport *fcport;
+ 	u32 comp_type;
++	u8 io_comp_type;
++	unsigned long flags;
+ 
+ 	comp_type = (cqe->cqe_data >> FCOE_CQE_CQE_TYPE_SHIFT) &
+ 	    FCOE_CQE_CQE_TYPE_MASK;
+@@ -2836,11 +2838,14 @@ void qedf_process_cqe(struct qedf_ctx *qedf, struct fcoe_cqe *cqe)
+ 		return;
+ 	}
+ 
++	spin_lock_irqsave(&fcport->rport_lock, flags);
++	io_comp_type = io_req->cmd_type;
++	spin_unlock_irqrestore(&fcport->rport_lock, flags);
+ 
+ 	switch (comp_type) {
+ 	case FCOE_GOOD_COMPLETION_CQE_TYPE:
+ 		atomic_inc(&fcport->free_sqes);
+-		switch (io_req->cmd_type) {
++		switch (io_comp_type) {
+ 		case QEDF_SCSI_CMD:
+ 			qedf_scsi_completion(qedf, cqe, io_req);
+ 			break;
+diff --git a/drivers/spi/spi-nxp-fspi.c b/drivers/spi/spi-nxp-fspi.c
+index bcc0b5a3a459c..90b5fbc914ae2 100644
+--- a/drivers/spi/spi-nxp-fspi.c
++++ b/drivers/spi/spi-nxp-fspi.c
+@@ -950,6 +950,13 @@ static int nxp_fspi_default_setup(struct nxp_fspi *f)
+ 	fspi_writel(f, FSPI_AHBCR_PREF_EN | FSPI_AHBCR_RDADDROPT,
+ 		 base + FSPI_AHBCR);
+ 
++	/* Reset the FLSHxCR1 registers. */
++	reg = FSPI_FLSHXCR1_TCSH(0x3) | FSPI_FLSHXCR1_TCSS(0x3);
++	fspi_writel(f, reg, base + FSPI_FLSHA1CR1);
++	fspi_writel(f, reg, base + FSPI_FLSHA2CR1);
++	fspi_writel(f, reg, base + FSPI_FLSHB1CR1);
++	fspi_writel(f, reg, base + FSPI_FLSHB2CR1);
++
+ 	/* AHB Read - Set lut sequence ID for all CS. */
+ 	fspi_writel(f, SEQID_LUT, base + FSPI_FLSHA1CR2);
+ 	fspi_writel(f, SEQID_LUT, base + FSPI_FLSHA2CR2);
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index 3d3ac48243ebd..12d9c5d6b9e26 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -1147,11 +1147,16 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 	pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT);
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
++
++	ret = pm_runtime_get_sync(&pdev->dev);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Failed to pm_runtime_get_sync: %d\n", ret);
++		goto clk_dis_all;
++	}
++
+ 	/* QSPI controller initializations */
+ 	zynqmp_qspi_init_hw(xqspi);
+ 
+-	pm_runtime_mark_last_busy(&pdev->dev);
+-	pm_runtime_put_autosuspend(&pdev->dev);
+ 	xqspi->irq = platform_get_irq(pdev, 0);
+ 	if (xqspi->irq <= 0) {
+ 		ret = -ENXIO;
+@@ -1178,6 +1183,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 	ctlr->mode_bits = SPI_CPOL | SPI_CPHA | SPI_RX_DUAL | SPI_RX_QUAD |
+ 			    SPI_TX_DUAL | SPI_TX_QUAD;
+ 	ctlr->dev.of_node = np;
++	ctlr->auto_runtime_pm = true;
+ 
+ 	ret = devm_spi_register_controller(&pdev->dev, ctlr);
+ 	if (ret) {
+@@ -1185,11 +1191,15 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 		goto clk_dis_all;
+ 	}
+ 
++	pm_runtime_mark_last_busy(&pdev->dev);
++	pm_runtime_put_autosuspend(&pdev->dev);
++
+ 	return 0;
+ 
+ clk_dis_all:
+-	pm_runtime_set_suspended(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
+ 	clk_disable_unprepare(xqspi->refclk);
+ clk_dis_pclk:
+ 	clk_disable_unprepare(xqspi->pclk);
+@@ -1213,11 +1223,15 @@ static int zynqmp_qspi_remove(struct platform_device *pdev)
+ {
+ 	struct zynqmp_qspi *xqspi = platform_get_drvdata(pdev);
+ 
++	pm_runtime_get_sync(&pdev->dev);
++
+ 	zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
++
++	pm_runtime_disable(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
+ 	clk_disable_unprepare(xqspi->refclk);
+ 	clk_disable_unprepare(xqspi->pclk);
+-	pm_runtime_set_suspended(&pdev->dev);
+-	pm_runtime_disable(&pdev->dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index 4664330fb55dd..9aeedcff7d02e 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -867,7 +867,6 @@ sector_t target_to_linux_sector(struct se_device *dev, sector_t lb)
+ EXPORT_SYMBOL(target_to_linux_sector);
+ 
+ struct devices_idr_iter {
+-	struct config_item *prev_item;
+ 	int (*fn)(struct se_device *dev, void *data);
+ 	void *data;
+ };
+@@ -877,11 +876,9 @@ static int target_devices_idr_iter(int id, void *p, void *data)
+ {
+ 	struct devices_idr_iter *iter = data;
+ 	struct se_device *dev = p;
++	struct config_item *item;
+ 	int ret;
+ 
+-	config_item_put(iter->prev_item);
+-	iter->prev_item = NULL;
+-
+ 	/*
+ 	 * We add the device early to the idr, so it can be used
+ 	 * by backend modules during configuration. We do not want
+@@ -891,12 +888,13 @@ static int target_devices_idr_iter(int id, void *p, void *data)
+ 	if (!target_dev_configured(dev))
+ 		return 0;
+ 
+-	iter->prev_item = config_item_get_unless_zero(&dev->dev_group.cg_item);
+-	if (!iter->prev_item)
++	item = config_item_get_unless_zero(&dev->dev_group.cg_item);
++	if (!item)
+ 		return 0;
+ 	mutex_unlock(&device_mutex);
+ 
+ 	ret = iter->fn(dev, iter->data);
++	config_item_put(item);
+ 
+ 	mutex_lock(&device_mutex);
+ 	return ret;
+@@ -919,7 +917,6 @@ int target_for_each_device(int (*fn)(struct se_device *dev, void *data),
+ 	mutex_lock(&device_mutex);
+ 	ret = idr_for_each(&devices_idr, target_devices_idr_iter, &iter);
+ 	mutex_unlock(&device_mutex);
+-	config_item_put(iter.prev_item);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index 94c963462d74b..3693ad9f45212 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -2179,10 +2179,8 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
+ 
+ 	/* Free up any link layer users and finally the control channel */
+ 	for (i = NUM_DLCI - 1; i >= 0; i--)
+-		if (gsm->dlci[i]) {
++		if (gsm->dlci[i])
+ 			gsm_dlci_release(gsm->dlci[i]);
+-			gsm->dlci[i] = NULL;
+-		}
+ 	mutex_unlock(&gsm->mutex);
+ 	/* Now wipe the queues */
+ 	tty_ldisc_flush(gsm->tty);
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 7499954c9aa76..8b49ac4856d2c 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1914,7 +1914,10 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+ 		skip_rx = true;
+ 
+ 	if (status & (UART_LSR_DR | UART_LSR_BI) && !skip_rx) {
+-		if (irqd_is_wakeup_set(irq_get_irq_data(port->irq)))
++		struct irq_data *d;
++
++		d = irq_get_irq_data(port->irq);
++		if (d && irqd_is_wakeup_set(d))
+ 			pm_wakeup_event(tport->tty->dev, 0);
+ 		if (!up->dma || handle_rx_dma(up, iir))
+ 			status = serial8250_rx_chars(up, status);
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index 3ac78db17e466..dd59584630979 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -2014,7 +2014,7 @@ config FB_COBALT
+ 
+ config FB_SH7760
+ 	bool "SH7760/SH7763/SH7720/SH7721 LCDC support"
+-	depends on FB && (CPU_SUBTYPE_SH7760 || CPU_SUBTYPE_SH7763 \
++	depends on FB=y && (CPU_SUBTYPE_SH7760 || CPU_SUBTYPE_SH7763 \
+ 		|| CPU_SUBTYPE_SH7720 || CPU_SUBTYPE_SH7721)
+ 	select FB_CFB_FILLRECT
+ 	select FB_CFB_COPYAREA
+diff --git a/drivers/watchdog/iTCO_wdt.c b/drivers/watchdog/iTCO_wdt.c
+index a370a185a41c4..50c874d488607 100644
+--- a/drivers/watchdog/iTCO_wdt.c
++++ b/drivers/watchdog/iTCO_wdt.c
+@@ -426,6 +426,20 @@ static unsigned int iTCO_wdt_get_timeleft(struct watchdog_device *wd_dev)
+ 	return time_left;
+ }
+ 
++/* Returns true if the watchdog was running */
++static bool iTCO_wdt_set_running(struct iTCO_wdt_private *p)
++{
++	u16 val;
++
++	/* Bit 11: TCO Timer Halt -> 0 = The TCO timer is enabled */
++	val = inw(TCO1_CNT(p));
++	if (!(val & BIT(11))) {
++		set_bit(WDOG_HW_RUNNING, &p->wddev.status);
++		return true;
++	}
++	return false;
++}
++
+ /*
+  *	Kernel Interfaces
+  */
+@@ -514,9 +528,6 @@ static int iTCO_wdt_probe(struct platform_device *pdev)
+ 		return -ENODEV;	/* Cannot reset NO_REBOOT bit */
+ 	}
+ 
+-	/* Set the NO_REBOOT bit to prevent later reboots, just for sure */
+-	p->update_no_reboot_bit(p->no_reboot_priv, true);
+-
+ 	if (turn_SMI_watchdog_clear_off >= p->iTCO_version) {
+ 		/*
+ 		 * Bit 13: TCO_EN -> 0
+@@ -568,8 +579,13 @@ static int iTCO_wdt_probe(struct platform_device *pdev)
+ 	watchdog_set_drvdata(&p->wddev, p);
+ 	platform_set_drvdata(pdev, p);
+ 
+-	/* Make sure the watchdog is not running */
+-	iTCO_wdt_stop(&p->wddev);
++	if (!iTCO_wdt_set_running(p)) {
++		/*
++		 * If the watchdog was not running set NO_REBOOT now to
++		 * prevent later reboots.
++		 */
++		p->update_no_reboot_bit(p->no_reboot_priv, true);
++	}
+ 
+ 	/* Check that the heartbeat value is within it's range;
+ 	   if not reset to the default */
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index fba78daee449a..52891546e6973 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -33,6 +33,7 @@
+ #include <linux/slab.h>
+ #include <linux/irqnr.h>
+ #include <linux/pci.h>
++#include <linux/rcupdate.h>
+ #include <linux/spinlock.h>
+ #include <linux/cpuhotplug.h>
+ #include <linux/atomic.h>
+@@ -94,6 +95,7 @@ enum xen_irq_type {
+ struct irq_info {
+ 	struct list_head list;
+ 	struct list_head eoi_list;
++	struct rcu_work rwork;
+ 	short refcnt;
+ 	short spurious_cnt;
+ 	short type;             /* type */
+@@ -141,23 +143,13 @@ const struct evtchn_ops *evtchn_ops;
+  */
+ static DEFINE_MUTEX(irq_mapping_update_lock);
+ 
+-/*
+- * Lock protecting event handling loop against removing event channels.
+- * Adding of event channels is no issue as the associated IRQ becomes active
+- * only after everything is setup (before request_[threaded_]irq() the handler
+- * can't be entered for an event, as the event channel will be unmasked only
+- * then).
+- */
+-static DEFINE_RWLOCK(evtchn_rwlock);
+-
+ /*
+  * Lock hierarchy:
+  *
+  * irq_mapping_update_lock
+- *   evtchn_rwlock
+- *     IRQ-desc lock
+- *       percpu eoi_list_lock
+- *         irq_info->lock
++ *   IRQ-desc lock
++ *     percpu eoi_list_lock
++ *       irq_info->lock
+  */
+ 
+ static LIST_HEAD(xen_irq_list_head);
+@@ -272,6 +264,22 @@ static void set_info_for_irq(unsigned int irq, struct irq_info *info)
+ 		irq_set_chip_data(irq, info);
+ }
+ 
++static void delayed_free_irq(struct work_struct *work)
++{
++	struct irq_info *info = container_of(to_rcu_work(work), struct irq_info,
++					     rwork);
++	unsigned int irq = info->irq;
++
++	/* Remove the info pointer only now, with no potential users left. */
++	set_info_for_irq(irq, NULL);
++
++	kfree(info);
++
++	/* Legacy IRQ descriptors are managed by the arch. */
++	if (irq >= nr_legacy_irqs())
++		irq_free_desc(irq);
++}
++
+ /* Constructors for packed IRQ information. */
+ static int xen_irq_info_common_setup(struct irq_info *info,
+ 				     unsigned irq,
+@@ -606,33 +614,36 @@ static void xen_irq_lateeoi_worker(struct work_struct *work)
+ 
+ 	eoi = container_of(to_delayed_work(work), struct lateeoi_work, delayed);
+ 
+-	read_lock_irqsave(&evtchn_rwlock, flags);
++	rcu_read_lock();
+ 
+ 	while (true) {
+-		spin_lock(&eoi->eoi_list_lock);
++		spin_lock_irqsave(&eoi->eoi_list_lock, flags);
+ 
+ 		info = list_first_entry_or_null(&eoi->eoi_list, struct irq_info,
+ 						eoi_list);
+ 
+-		if (info == NULL || now < info->eoi_time) {
+-			spin_unlock(&eoi->eoi_list_lock);
++		if (info == NULL)
++			break;
++
++		if (now < info->eoi_time) {
++			mod_delayed_work_on(info->eoi_cpu, system_wq,
++					    &eoi->delayed,
++					    info->eoi_time - now);
+ 			break;
+ 		}
+ 
+ 		list_del_init(&info->eoi_list);
+ 
+-		spin_unlock(&eoi->eoi_list_lock);
++		spin_unlock_irqrestore(&eoi->eoi_list_lock, flags);
+ 
+ 		info->eoi_time = 0;
+ 
+ 		xen_irq_lateeoi_locked(info, false);
+ 	}
+ 
+-	if (info)
+-		mod_delayed_work_on(info->eoi_cpu, system_wq,
+-				    &eoi->delayed, info->eoi_time - now);
++	spin_unlock_irqrestore(&eoi->eoi_list_lock, flags);
+ 
+-	read_unlock_irqrestore(&evtchn_rwlock, flags);
++	rcu_read_unlock();
+ }
+ 
+ static void xen_cpu_init_eoi(unsigned int cpu)
+@@ -647,16 +658,15 @@ static void xen_cpu_init_eoi(unsigned int cpu)
+ void xen_irq_lateeoi(unsigned int irq, unsigned int eoi_flags)
+ {
+ 	struct irq_info *info;
+-	unsigned long flags;
+ 
+-	read_lock_irqsave(&evtchn_rwlock, flags);
++	rcu_read_lock();
+ 
+ 	info = info_for_irq(irq);
+ 
+ 	if (info)
+ 		xen_irq_lateeoi_locked(info, eoi_flags & XEN_EOI_FLAG_SPURIOUS);
+ 
+-	read_unlock_irqrestore(&evtchn_rwlock, flags);
++	rcu_read_unlock();
+ }
+ EXPORT_SYMBOL_GPL(xen_irq_lateeoi);
+ 
+@@ -675,6 +685,7 @@ static void xen_irq_init(unsigned irq)
+ 
+ 	info->type = IRQT_UNBOUND;
+ 	info->refcnt = -1;
++	INIT_RCU_WORK(&info->rwork, delayed_free_irq);
+ 
+ 	set_info_for_irq(irq, info);
+ 
+@@ -727,31 +738,18 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
+ static void xen_free_irq(unsigned irq)
+ {
+ 	struct irq_info *info = info_for_irq(irq);
+-	unsigned long flags;
+ 
+ 	if (WARN_ON(!info))
+ 		return;
+ 
+-	write_lock_irqsave(&evtchn_rwlock, flags);
+-
+ 	if (!list_empty(&info->eoi_list))
+ 		lateeoi_list_del(info);
+ 
+ 	list_del(&info->list);
+ 
+-	set_info_for_irq(irq, NULL);
+-
+ 	WARN_ON(info->refcnt > 0);
+ 
+-	write_unlock_irqrestore(&evtchn_rwlock, flags);
+-
+-	kfree(info);
+-
+-	/* Legacy IRQ descriptors are managed by the arch. */
+-	if (irq < nr_legacy_irqs())
+-		return;
+-
+-	irq_free_desc(irq);
++	queue_rcu_work(system_wq, &info->rwork);
+ }
+ 
+ static void xen_evtchn_close(evtchn_port_t port)
+@@ -1639,7 +1637,14 @@ static void __xen_evtchn_do_upcall(void)
+ 	int cpu = smp_processor_id();
+ 	struct evtchn_loop_ctrl ctrl = { 0 };
+ 
+-	read_lock(&evtchn_rwlock);
++	/*
++	 * When closing an event channel the associated IRQ must not be freed
++	 * until all cpus have left the event handling loop. This is ensured
++	 * by taking the rcu_read_lock() while handling events, as freeing of
++	 * the IRQ is handled via queue_rcu_work() _after_ closing the event
++	 * channel.
++	 */
++	rcu_read_lock();
+ 
+ 	do {
+ 		vcpu_info->evtchn_upcall_pending = 0;
+@@ -1652,7 +1657,7 @@ static void __xen_evtchn_do_upcall(void)
+ 
+ 	} while (vcpu_info->evtchn_upcall_pending);
+ 
+-	read_unlock(&evtchn_rwlock);
++	rcu_read_unlock();
+ 
+ 	/*
+ 	 * Increment irq_epoch only now to defer EOIs only for
+diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
+index 8c3b7cc8e2a16..48bb1290ed2fd 100644
+--- a/fs/binfmt_elf_fdpic.c
++++ b/fs/binfmt_elf_fdpic.c
+@@ -345,10 +345,9 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
+ 	/* there's now no turning back... the old userspace image is dead,
+ 	 * defunct, deceased, etc.
+ 	 */
++	SET_PERSONALITY(exec_params.hdr);
+ 	if (elf_check_fdpic(&exec_params.hdr))
+-		set_personality(PER_LINUX_FDPIC);
+-	else
+-		set_personality(PER_LINUX);
++		current->personality |= PER_LINUX_FDPIC;
+ 	if (elf_read_implies_exec(&exec_params.hdr, executable_stack))
+ 		current->personality |= READ_IMPLIES_EXEC;
+ 
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 0e266772beaef..685a375bb6af5 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -5634,8 +5634,14 @@ void read_extent_buffer(const struct extent_buffer *eb, void *dstv,
+ 	char *dst = (char *)dstv;
+ 	unsigned long i = start >> PAGE_SHIFT;
+ 
+-	if (check_eb_range(eb, start, len))
++	if (check_eb_range(eb, start, len)) {
++		/*
++		 * Invalid range hit, reset the memory, so callers won't get
++		 * some random garbage for their uninitialzed memory.
++		 */
++		memset(dstv, 0, len);
+ 		return;
++	}
+ 
+ 	offset = offset_in_page(start);
+ 
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index b33505330e335..ea731fa8bd350 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -2267,7 +2267,7 @@ static int btrfs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	 * calculated f_bavail.
+ 	 */
+ 	if (!mixed && block_rsv->space_info->full &&
+-	    total_free_meta - thresh < block_rsv->size)
++	    (total_free_meta < thresh || total_free_meta - thresh < block_rsv->size))
+ 		buf->f_bavail = 0;
+ 
+ 	buf->f_type = BTRFS_SUPER_MAGIC;
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index c3e9cb5037631..fec021e6bb600 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1580,7 +1580,7 @@ struct ext4_sb_info {
+ 	struct task_struct *s_mmp_tsk;
+ 
+ 	/* record the last minlen when FITRIM is called. */
+-	atomic_t s_last_trim_minblks;
++	unsigned long s_last_trim_minblks;
+ 
+ 	/* Reference to checksum algorithm driver via cryptoapi */
+ 	struct crypto_shash *s_chksum_driver;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 2f6ed59d81f02..b35d59d41c896 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -16,6 +16,7 @@
+ #include <linux/slab.h>
+ #include <linux/nospec.h>
+ #include <linux/backing-dev.h>
++#include <linux/freezer.h>
+ #include <trace/events/ext4.h>
+ 
+ /*
+@@ -5859,19 +5860,19 @@ error_return:
+  * @sb:		super block for the file system
+  * @start:	starting block of the free extent in the alloc. group
+  * @count:	number of blocks to TRIM
+- * @group:	alloc. group we are working with
+  * @e4b:	ext4 buddy for the group
+  *
+  * Trim "count" blocks starting at "start" in the "group". To assure that no
+  * one will allocate those blocks, mark it as used in buddy bitmap. This must
+  * be called with under the group lock.
+  */
+-static int ext4_trim_extent(struct super_block *sb, int start, int count,
+-			     ext4_group_t group, struct ext4_buddy *e4b)
++static int ext4_trim_extent(struct super_block *sb,
++		int start, int count, struct ext4_buddy *e4b)
+ __releases(bitlock)
+ __acquires(bitlock)
+ {
+ 	struct ext4_free_extent ex;
++	ext4_group_t group = e4b->bd_group;
+ 	int ret = 0;
+ 
+ 	trace_ext4_trim_extent(sb, group, start, count);
+@@ -5894,6 +5895,71 @@ __acquires(bitlock)
+ 	return ret;
+ }
+ 
++static ext4_grpblk_t ext4_last_grp_cluster(struct super_block *sb,
++					   ext4_group_t grp)
++{
++	if (grp < ext4_get_groups_count(sb))
++		return EXT4_CLUSTERS_PER_GROUP(sb) - 1;
++	return (ext4_blocks_count(EXT4_SB(sb)->s_es) -
++		ext4_group_first_block_no(sb, grp) - 1) >>
++					EXT4_CLUSTER_BITS(sb);
++}
++
++static bool ext4_trim_interrupted(void)
++{
++	return fatal_signal_pending(current) || freezing(current);
++}
++
++static int ext4_try_to_trim_range(struct super_block *sb,
++		struct ext4_buddy *e4b, ext4_grpblk_t start,
++		ext4_grpblk_t max, ext4_grpblk_t minblocks)
++{
++	ext4_grpblk_t next, count, free_count;
++	bool set_trimmed = false;
++	void *bitmap;
++
++	bitmap = e4b->bd_bitmap;
++	if (start == 0 && max >= ext4_last_grp_cluster(sb, e4b->bd_group))
++		set_trimmed = true;
++	start = max(e4b->bd_info->bb_first_free, start);
++	count = 0;
++	free_count = 0;
++
++	while (start <= max) {
++		start = mb_find_next_zero_bit(bitmap, max + 1, start);
++		if (start > max)
++			break;
++		next = mb_find_next_bit(bitmap, max + 1, start);
++
++		if ((next - start) >= minblocks) {
++			int ret = ext4_trim_extent(sb, start, next - start, e4b);
++
++			if (ret && ret != -EOPNOTSUPP)
++				return count;
++			count += next - start;
++		}
++		free_count += next - start;
++		start = next + 1;
++
++		if (ext4_trim_interrupted())
++			return count;
++
++		if (need_resched()) {
++			ext4_unlock_group(sb, e4b->bd_group);
++			cond_resched();
++			ext4_lock_group(sb, e4b->bd_group);
++		}
++
++		if ((e4b->bd_info->bb_free - free_count) < minblocks)
++			break;
++	}
++
++	if (set_trimmed)
++		EXT4_MB_GRP_SET_TRIMMED(e4b->bd_info);
++
++	return count;
++}
++
+ /**
+  * ext4_trim_all_free -- function to trim all free space in alloc. group
+  * @sb:			super block for file system
+@@ -5917,10 +5983,8 @@ ext4_trim_all_free(struct super_block *sb, ext4_group_t group,
+ 		   ext4_grpblk_t start, ext4_grpblk_t max,
+ 		   ext4_grpblk_t minblocks)
+ {
+-	void *bitmap;
+-	ext4_grpblk_t next, count = 0, free_count = 0;
+ 	struct ext4_buddy e4b;
+-	int ret = 0;
++	int ret;
+ 
+ 	trace_ext4_trim_all_free(sb, group, start, max);
+ 
+@@ -5930,58 +5994,20 @@ ext4_trim_all_free(struct super_block *sb, ext4_group_t group,
+ 			     ret, group);
+ 		return ret;
+ 	}
+-	bitmap = e4b.bd_bitmap;
+ 
+ 	ext4_lock_group(sb, group);
+-	if (EXT4_MB_GRP_WAS_TRIMMED(e4b.bd_info) &&
+-	    minblocks >= atomic_read(&EXT4_SB(sb)->s_last_trim_minblks))
+-		goto out;
+ 
+-	start = (e4b.bd_info->bb_first_free > start) ?
+-		e4b.bd_info->bb_first_free : start;
+-
+-	while (start <= max) {
+-		start = mb_find_next_zero_bit(bitmap, max + 1, start);
+-		if (start > max)
+-			break;
+-		next = mb_find_next_bit(bitmap, max + 1, start);
+-
+-		if ((next - start) >= minblocks) {
+-			ret = ext4_trim_extent(sb, start,
+-					       next - start, group, &e4b);
+-			if (ret && ret != -EOPNOTSUPP)
+-				break;
+-			ret = 0;
+-			count += next - start;
+-		}
+-		free_count += next - start;
+-		start = next + 1;
+-
+-		if (fatal_signal_pending(current)) {
+-			count = -ERESTARTSYS;
+-			break;
+-		}
+-
+-		if (need_resched()) {
+-			ext4_unlock_group(sb, group);
+-			cond_resched();
+-			ext4_lock_group(sb, group);
+-		}
+-
+-		if ((e4b.bd_info->bb_free - free_count) < minblocks)
+-			break;
+-	}
++	if (!EXT4_MB_GRP_WAS_TRIMMED(e4b.bd_info) ||
++	    minblocks < EXT4_SB(sb)->s_last_trim_minblks)
++		ret = ext4_try_to_trim_range(sb, &e4b, start, max, minblocks);
++	else
++		ret = 0;
+ 
+-	if (!ret) {
+-		ret = count;
+-		EXT4_MB_GRP_SET_TRIMMED(e4b.bd_info);
+-	}
+-out:
+ 	ext4_unlock_group(sb, group);
+ 	ext4_mb_unload_buddy(&e4b);
+ 
+ 	ext4_debug("trimmed %d blocks in the group %d\n",
+-		count, group);
++		ret, group);
+ 
+ 	return ret;
+ }
+@@ -6026,7 +6052,7 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 		if (minlen > EXT4_CLUSTERS_PER_GROUP(sb))
+ 			goto out;
+ 	}
+-	if (end >= max_blks)
++	if (end >= max_blks - 1)
+ 		end = max_blks - 1;
+ 	if (end <= first_data_blk)
+ 		goto out;
+@@ -6043,6 +6069,8 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 	end = EXT4_CLUSTERS_PER_GROUP(sb) - 1;
+ 
+ 	for (group = first_group; group <= last_group; group++) {
++		if (ext4_trim_interrupted())
++			break;
+ 		grp = ext4_get_group_info(sb, group);
+ 		if (!grp)
+ 			continue;
+@@ -6061,10 +6089,9 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 		 */
+ 		if (group == last_group)
+ 			end = last_cluster;
+-
+ 		if (grp->bb_free >= minlen) {
+ 			cnt = ext4_trim_all_free(sb, group, first_cluster,
+-						end, minlen);
++						 end, minlen);
+ 			if (cnt < 0) {
+ 				ret = cnt;
+ 				break;
+@@ -6080,7 +6107,7 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 	}
+ 
+ 	if (!ret)
+-		atomic_set(&EXT4_SB(sb)->s_last_trim_minblks, minlen);
++		EXT4_SB(sb)->s_last_trim_minblks = minlen;
+ 
+ out:
+ 	range->len = EXT4_C2B(EXT4_SB(sb), trimmed) << sb->s_blocksize_bits;
+@@ -6109,8 +6136,7 @@ ext4_mballoc_query_range(
+ 
+ 	ext4_lock_group(sb, group);
+ 
+-	start = (e4b.bd_info->bb_first_free > start) ?
+-		e4b.bd_info->bb_first_free : start;
++	start = max(e4b.bd_info->bb_first_free, start);
+ 	if (end >= EXT4_CLUSTERS_PER_GROUP(sb))
+ 		end = EXT4_CLUSTERS_PER_GROUP(sb) - 1;
+ 
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 018af6ec97b40..5d86ffa72ceab 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -525,7 +525,9 @@ static void nfs_direct_add_page_head(struct list_head *list,
+ 	kref_get(&head->wb_kref);
+ }
+ 
+-static void nfs_direct_join_group(struct list_head *list, struct inode *inode)
++static void nfs_direct_join_group(struct list_head *list,
++				  struct nfs_commit_info *cinfo,
++				  struct inode *inode)
+ {
+ 	struct nfs_page *req, *subreq;
+ 
+@@ -547,7 +549,7 @@ static void nfs_direct_join_group(struct list_head *list, struct inode *inode)
+ 				nfs_release_request(subreq);
+ 			}
+ 		} while ((subreq = subreq->wb_this_page) != req);
+-		nfs_join_page_group(req, inode);
++		nfs_join_page_group(req, cinfo, inode);
+ 	}
+ }
+ 
+@@ -573,7 +575,7 @@ static void nfs_direct_write_reschedule(struct nfs_direct_req *dreq)
+ 	nfs_init_cinfo_from_dreq(&cinfo, dreq);
+ 	nfs_direct_write_scan_commit_list(dreq->inode, &reqs, &cinfo);
+ 
+-	nfs_direct_join_group(&reqs, dreq->inode);
++	nfs_direct_join_group(&reqs, &cinfo, dreq->inode);
+ 
+ 	dreq->count = 0;
+ 	dreq->max_count = 0;
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index a8a02081942d2..e4f2820ba5a59 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -1240,6 +1240,7 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ 		case -EPFNOSUPPORT:
+ 		case -EPROTONOSUPPORT:
+ 		case -EOPNOTSUPP:
++		case -EINVAL:
+ 		case -ECONNREFUSED:
+ 		case -ECONNRESET:
+ 		case -EHOSTDOWN:
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index c34df51a8f2b7..1c2ed14bccef2 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -10408,7 +10408,9 @@ static void nfs4_disable_swap(struct inode *inode)
+ 	 */
+ 	struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
+ 
+-	nfs4_schedule_state_manager(clp);
++	set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
++	clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
++	wake_up_var(&clp->cl_state);
+ }
+ 
+ static const struct inode_operations nfs4_dir_inode_operations = {
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index ff6ca05a9d441..afb617a4a7e42 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1212,17 +1212,23 @@ void nfs4_schedule_state_manager(struct nfs_client *clp)
+ {
+ 	struct task_struct *task;
+ 	char buf[INET6_ADDRSTRLEN + sizeof("-manager") + 1];
+-	struct rpc_clnt *cl = clp->cl_rpcclient;
+-
+-	while (cl != cl->cl_parent)
+-		cl = cl->cl_parent;
++	struct rpc_clnt *clnt = clp->cl_rpcclient;
++	bool swapon = false;
+ 
+ 	set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
+-	if (test_and_set_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state) != 0) {
+-		wake_up_var(&clp->cl_state);
+-		return;
++
++	if (atomic_read(&clnt->cl_swapper)) {
++		swapon = !test_and_set_bit(NFS4CLNT_MANAGER_AVAILABLE,
++					   &clp->cl_state);
++		if (!swapon) {
++			wake_up_var(&clp->cl_state);
++			return;
++		}
+ 	}
+-	set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state);
++
++	if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
++		return;
++
+ 	__module_get(THIS_MODULE);
+ 	refcount_inc(&clp->cl_count);
+ 
+@@ -1239,8 +1245,9 @@ void nfs4_schedule_state_manager(struct nfs_client *clp)
+ 			__func__, PTR_ERR(task));
+ 		if (!nfs_client_init_is_complete(clp))
+ 			nfs_mark_client_ready(clp, PTR_ERR(task));
++		if (swapon)
++			clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
+ 		nfs4_clear_state_manager_bit(clp);
+-		clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
+ 		nfs_put_client(clp);
+ 		module_put(THIS_MODULE);
+ 	}
+@@ -2683,6 +2690,13 @@ static void nfs4_state_manager(struct nfs_client *clp)
+ 		nfs4_end_drain_session(clp);
+ 		nfs4_clear_state_manager_bit(clp);
+ 
++		if (test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state) &&
++		    !test_and_set_bit(NFS4CLNT_MANAGER_RUNNING,
++				      &clp->cl_state)) {
++			memflags = memalloc_nofs_save();
++			continue;
++		}
++
+ 		if (!test_and_set_bit(NFS4CLNT_RECALL_RUNNING, &clp->cl_state)) {
+ 			if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) {
+ 				nfs_client_return_marked_delegations(clp);
+@@ -2721,22 +2735,25 @@ static int nfs4_run_state_manager(void *ptr)
+ 
+ 	allow_signal(SIGKILL);
+ again:
+-	set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state);
+ 	nfs4_state_manager(clp);
+-	if (atomic_read(&cl->cl_swapper)) {
++
++	if (test_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state) &&
++	    !test_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state)) {
+ 		wait_var_event_interruptible(&clp->cl_state,
+ 					     test_bit(NFS4CLNT_RUN_MANAGER,
+ 						      &clp->cl_state));
+-		if (atomic_read(&cl->cl_swapper) &&
+-		    test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state))
++		if (!atomic_read(&cl->cl_swapper))
++			clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
++		if (refcount_read(&clp->cl_count) > 1 && !signalled() &&
++		    !test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state))
+ 			goto again;
+ 		/* Either no longer a swapper, or were signalled */
++		clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
+ 	}
+-	clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
+ 
+ 	if (refcount_read(&clp->cl_count) > 1 && !signalled() &&
+ 	    test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state) &&
+-	    !test_and_set_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state))
++	    !test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state))
+ 		goto again;
+ 
+ 	nfs_put_client(clp);
+diff --git a/fs/nfs/sysfs.c b/fs/nfs/sysfs.c
+index 8cb70755e3c9e..f7f778e3e5ca7 100644
+--- a/fs/nfs/sysfs.c
++++ b/fs/nfs/sysfs.c
+@@ -18,7 +18,7 @@
+ #include "sysfs.h"
+ 
+ struct kobject *nfs_client_kobj;
+-static struct kset *nfs_client_kset;
++static struct kset *nfs_kset;
+ 
+ static void nfs_netns_object_release(struct kobject *kobj)
+ {
+@@ -55,13 +55,13 @@ static struct kobject *nfs_netns_object_alloc(const char *name,
+ 
+ int nfs_sysfs_init(void)
+ {
+-	nfs_client_kset = kset_create_and_add("nfs", NULL, fs_kobj);
+-	if (!nfs_client_kset)
++	nfs_kset = kset_create_and_add("nfs", NULL, fs_kobj);
++	if (!nfs_kset)
+ 		return -ENOMEM;
+-	nfs_client_kobj = nfs_netns_object_alloc("net", nfs_client_kset, NULL);
++	nfs_client_kobj = nfs_netns_object_alloc("net", nfs_kset, NULL);
+ 	if  (!nfs_client_kobj) {
+-		kset_unregister(nfs_client_kset);
+-		nfs_client_kset = NULL;
++		kset_unregister(nfs_kset);
++		nfs_kset = NULL;
+ 		return -ENOMEM;
+ 	}
+ 	return 0;
+@@ -70,7 +70,7 @@ int nfs_sysfs_init(void)
+ void nfs_sysfs_exit(void)
+ {
+ 	kobject_put(nfs_client_kobj);
+-	kset_unregister(nfs_client_kset);
++	kset_unregister(nfs_kset);
+ }
+ 
+ static ssize_t nfs_netns_identifier_show(struct kobject *kobj,
+@@ -158,7 +158,7 @@ static struct nfs_netns_client *nfs_netns_client_alloc(struct kobject *parent,
+ 	p = kzalloc(sizeof(*p), GFP_KERNEL);
+ 	if (p) {
+ 		p->net = net;
+-		p->kobject.kset = nfs_client_kset;
++		p->kobject.kset = nfs_kset;
+ 		if (kobject_init_and_add(&p->kobject, &nfs_netns_client_type,
+ 					parent, "nfs_client") == 0)
+ 			return p;
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index dc08a0c02f095..d3cd099ffb6e1 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -58,7 +58,8 @@ static const struct nfs_pgio_completion_ops nfs_async_write_completion_ops;
+ static const struct nfs_commit_completion_ops nfs_commit_completion_ops;
+ static const struct nfs_rw_ops nfs_rw_write_ops;
+ static void nfs_inode_remove_request(struct nfs_page *req);
+-static void nfs_clear_request_commit(struct nfs_page *req);
++static void nfs_clear_request_commit(struct nfs_commit_info *cinfo,
++				     struct nfs_page *req);
+ static void nfs_init_cinfo_from_inode(struct nfs_commit_info *cinfo,
+ 				      struct inode *inode);
+ static struct nfs_page *
+@@ -500,8 +501,8 @@ nfs_destroy_unlinked_subrequests(struct nfs_page *destroy_list,
+  * the (former) group.  All subrequests are removed from any write or commit
+  * lists, unlinked from the group and destroyed.
+  */
+-void
+-nfs_join_page_group(struct nfs_page *head, struct inode *inode)
++void nfs_join_page_group(struct nfs_page *head, struct nfs_commit_info *cinfo,
++			 struct inode *inode)
+ {
+ 	struct nfs_page *subreq;
+ 	struct nfs_page *destroy_list = NULL;
+@@ -531,7 +532,7 @@ nfs_join_page_group(struct nfs_page *head, struct inode *inode)
+ 	 * Commit list removal accounting is done after locks are dropped */
+ 	subreq = head;
+ 	do {
+-		nfs_clear_request_commit(subreq);
++		nfs_clear_request_commit(cinfo, subreq);
+ 		subreq = subreq->wb_this_page;
+ 	} while (subreq != head);
+ 
+@@ -565,8 +566,10 @@ nfs_lock_and_join_requests(struct page *page)
+ {
+ 	struct inode *inode = page_file_mapping(page)->host;
+ 	struct nfs_page *head;
++	struct nfs_commit_info cinfo;
+ 	int ret;
+ 
++	nfs_init_cinfo_from_inode(&cinfo, inode);
+ 	/*
+ 	 * A reference is taken only on the head request which acts as a
+ 	 * reference to the whole page group - the group will not be destroyed
+@@ -583,7 +586,7 @@ nfs_lock_and_join_requests(struct page *page)
+ 		return ERR_PTR(ret);
+ 	}
+ 
+-	nfs_join_page_group(head, inode);
++	nfs_join_page_group(head, &cinfo, inode);
+ 
+ 	return head;
+ }
+@@ -944,18 +947,16 @@ nfs_clear_page_commit(struct page *page)
+ }
+ 
+ /* Called holding the request lock on @req */
+-static void
+-nfs_clear_request_commit(struct nfs_page *req)
++static void nfs_clear_request_commit(struct nfs_commit_info *cinfo,
++				     struct nfs_page *req)
+ {
+ 	if (test_bit(PG_CLEAN, &req->wb_flags)) {
+ 		struct nfs_open_context *ctx = nfs_req_openctx(req);
+ 		struct inode *inode = d_inode(ctx->dentry);
+-		struct nfs_commit_info cinfo;
+ 
+-		nfs_init_cinfo_from_inode(&cinfo, inode);
+ 		mutex_lock(&NFS_I(inode)->commit_mutex);
+-		if (!pnfs_clear_request_commit(req, &cinfo)) {
+-			nfs_request_remove_commit_list(req, &cinfo);
++		if (!pnfs_clear_request_commit(req, cinfo)) {
++			nfs_request_remove_commit_list(req, cinfo);
+ 		}
+ 		mutex_unlock(&NFS_I(inode)->commit_mutex);
+ 		nfs_clear_page_commit(req->wb_page);
+diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c
+index aadea660c66c9..b0077f5f71124 100644
+--- a/fs/nilfs2/gcinode.c
++++ b/fs/nilfs2/gcinode.c
+@@ -73,10 +73,8 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff,
+ 		struct the_nilfs *nilfs = inode->i_sb->s_fs_info;
+ 
+ 		err = nilfs_dat_translate(nilfs->ns_dat, vbn, &pbn);
+-		if (unlikely(err)) { /* -EIO, -ENOMEM, -ENOENT */
+-			brelse(bh);
++		if (unlikely(err)) /* -EIO, -ENOMEM, -ENOENT */
+ 			goto failed;
+-		}
+ 	}
+ 
+ 	lock_buffer(bh);
+@@ -102,6 +100,8 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff,
+  failed:
+ 	unlock_page(bh->b_page);
+ 	put_page(bh->b_page);
++	if (unlikely(err))
++		brelse(bh);
+ 	return err;
+ }
+ 
+diff --git a/fs/proc/task_nommu.c b/fs/proc/task_nommu.c
+index a6d21fc0033c6..97f387d30e743 100644
+--- a/fs/proc/task_nommu.c
++++ b/fs/proc/task_nommu.c
+@@ -208,11 +208,16 @@ static void *m_start(struct seq_file *m, loff_t *pos)
+ 		return ERR_PTR(-ESRCH);
+ 
+ 	mm = priv->mm;
+-	if (!mm || !mmget_not_zero(mm))
++	if (!mm || !mmget_not_zero(mm)) {
++		put_task_struct(priv->task);
++		priv->task = NULL;
+ 		return NULL;
++	}
+ 
+ 	if (mmap_read_lock_killable(mm)) {
+ 		mmput(mm);
++		put_task_struct(priv->task);
++		priv->task = NULL;
+ 		return ERR_PTR(-EINTR);
+ 	}
+ 
+@@ -221,23 +226,21 @@ static void *m_start(struct seq_file *m, loff_t *pos)
+ 		if (n-- == 0)
+ 			return p;
+ 
+-	mmap_read_unlock(mm);
+-	mmput(mm);
+ 	return NULL;
+ }
+ 
+-static void m_stop(struct seq_file *m, void *_vml)
++static void m_stop(struct seq_file *m, void *v)
+ {
+ 	struct proc_maps_private *priv = m->private;
++	struct mm_struct *mm = priv->mm;
+ 
+-	if (!IS_ERR_OR_NULL(_vml)) {
+-		mmap_read_unlock(priv->mm);
+-		mmput(priv->mm);
+-	}
+-	if (priv->task) {
+-		put_task_struct(priv->task);
+-		priv->task = NULL;
+-	}
++	if (!priv->task)
++		return;
++
++	mmap_read_unlock(mm);
++	mmput(mm);
++	put_task_struct(priv->task);
++	priv->task = NULL;
+ }
+ 
+ static void *m_next(struct seq_file *m, void *_p, loff_t *pos)
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index 96d69404a54ff..9c184dbceba47 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -1001,6 +1001,7 @@ int acpi_dev_resume(struct device *dev);
+ int acpi_subsys_runtime_suspend(struct device *dev);
+ int acpi_subsys_runtime_resume(struct device *dev);
+ int acpi_dev_pm_attach(struct device *dev, bool power_on);
++bool acpi_storage_d3(struct device *dev);
+ #else
+ static inline int acpi_subsys_runtime_suspend(struct device *dev) { return 0; }
+ static inline int acpi_subsys_runtime_resume(struct device *dev) { return 0; }
+@@ -1008,6 +1009,10 @@ static inline int acpi_dev_pm_attach(struct device *dev, bool power_on)
+ {
+ 	return 0;
+ }
++static inline bool acpi_storage_d3(struct device *dev)
++{
++	return false;
++}
+ #endif
+ 
+ #if defined(CONFIG_ACPI) && defined(CONFIG_PM_SLEEP)
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index b010d45a1ecd5..8f4379e93ad49 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -725,7 +725,7 @@ static inline int bpf_trampoline_unlink_prog(struct bpf_prog *prog,
+ static inline struct bpf_trampoline *bpf_trampoline_get(u64 key,
+ 							struct bpf_attach_target_info *tgt_info)
+ {
+-	return ERR_PTR(-EOPNOTSUPP);
++	return NULL;
+ }
+ static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
+ #define DEFINE_BPF_DISPATCHER(name)
+diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
+index 57890b357f851..eca91e7a4d394 100644
+--- a/include/linux/btf_ids.h
++++ b/include/linux/btf_ids.h
+@@ -38,7 +38,7 @@ asm(							\
+ 	____BTF_ID(symbol)
+ 
+ #define __ID(prefix) \
+-	__PASTE(prefix, __COUNTER__)
++	__PASTE(__PASTE(prefix, __COUNTER__), __LINE__)
+ 
+ /*
+  * The BTF_ID defines unique symbol for each ID pointing
+diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
+index 959b370733f09..c9c430712d471 100644
+--- a/include/linux/cgroup.h
++++ b/include/linux/cgroup.h
+@@ -451,6 +451,7 @@ extern struct mutex cgroup_mutex;
+ extern spinlock_t css_set_lock;
+ #define task_css_set_check(task, __c)					\
+ 	rcu_dereference_check((task)->cgroups,				\
++		rcu_read_lock_sched_held() ||				\
+ 		lockdep_is_held(&cgroup_mutex) ||			\
+ 		lockdep_is_held(&css_set_lock) ||			\
+ 		((task)->flags & PF_EXITING) || (__c))
+@@ -779,11 +780,9 @@ static inline void cgroup_account_cputime(struct task_struct *task,
+ 
+ 	cpuacct_charge(task, delta_exec);
+ 
+-	rcu_read_lock();
+ 	cgrp = task_dfl_cgroup(task);
+ 	if (cgroup_parent(cgrp))
+ 		__cgroup_account_cputime(cgrp, delta_exec);
+-	rcu_read_unlock();
+ }
+ 
+ static inline void cgroup_account_cputime_field(struct task_struct *task,
+diff --git a/include/linux/if_team.h b/include/linux/if_team.h
+index 5dd1657947b75..762c77d13e7dd 100644
+--- a/include/linux/if_team.h
++++ b/include/linux/if_team.h
+@@ -189,6 +189,8 @@ struct team {
+ 	struct net_device *dev; /* associated netdevice */
+ 	struct team_pcpu_stats __percpu *pcpu_stats;
+ 
++	const struct header_ops *header_ops_cache;
++
+ 	struct mutex lock; /* used for overall locking, e.g. port lists write */
+ 
+ 	/*
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 5ca9347bd8ef9..1ceec830d5f74 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -187,7 +187,7 @@ enum {
+ 	ATA_LFLAG_NO_LPM	= (1 << 8), /* disable LPM on this link */
+ 	ATA_LFLAG_RST_ONCE	= (1 << 9), /* limit recovery to one reset */
+ 	ATA_LFLAG_CHANGED	= (1 << 10), /* LPM state changed on this link */
+-	ATA_LFLAG_NO_DB_DELAY	= (1 << 11), /* no debounce delay on link resume */
++	ATA_LFLAG_NO_DEBOUNCE_DELAY = (1 << 11), /* no debounce delay on link resume */
+ 
+ 	/* struct ata_port flags */
+ 	ATA_FLAG_SLAVE_POSS	= (1 << 0), /* host supports slave dev */
+@@ -297,7 +297,7 @@ enum {
+ 	 * advised to wait only for the following duration before
+ 	 * doing SRST.
+ 	 */
+-	ATA_TMOUT_PMP_SRST_WAIT	= 5000,
++	ATA_TMOUT_PMP_SRST_WAIT	= 10000,
+ 
+ 	/* When the LPM policy is set to ATA_LPM_MAX_POWER, there might
+ 	 * be a spurious PHY event, so ignore the first PHY event that
+diff --git a/include/linux/netfilter/nf_conntrack_sctp.h b/include/linux/netfilter/nf_conntrack_sctp.h
+index 625f491b95de8..fb31312825ae5 100644
+--- a/include/linux/netfilter/nf_conntrack_sctp.h
++++ b/include/linux/netfilter/nf_conntrack_sctp.h
+@@ -9,6 +9,7 @@ struct ip_ct_sctp {
+ 	enum sctp_conntrack state;
+ 
+ 	__be32 vtag[IP_CT_DIR_MAX];
++	u8 init[IP_CT_DIR_MAX];
+ 	u8 last_dir;
+ 	u8 flags;
+ };
+diff --git a/include/linux/nfs_page.h b/include/linux/nfs_page.h
+index f0373a6cb5fb6..40aa09a21f75d 100644
+--- a/include/linux/nfs_page.h
++++ b/include/linux/nfs_page.h
+@@ -145,7 +145,9 @@ extern	void nfs_unlock_request(struct nfs_page *req);
+ extern	void nfs_unlock_and_release_request(struct nfs_page *);
+ extern	struct nfs_page *nfs_page_group_lock_head(struct nfs_page *req);
+ extern	int nfs_page_group_lock_subrequests(struct nfs_page *head);
+-extern	void nfs_join_page_group(struct nfs_page *head, struct inode *inode);
++extern void nfs_join_page_group(struct nfs_page *head,
++				struct nfs_commit_info *cinfo,
++				struct inode *inode);
+ extern int nfs_page_group_lock(struct nfs_page *);
+ extern void nfs_page_group_unlock(struct nfs_page *);
+ extern bool nfs_page_group_sync_on_bit(struct nfs_page *, unsigned int);
+diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
+index 1ac20d75b0618..0928a60b8f825 100644
+--- a/include/linux/seqlock.h
++++ b/include/linux/seqlock.h
+@@ -307,10 +307,10 @@ SEQCOUNT_LOCKNAME(ww_mutex,     struct ww_mutex, true,     &s->lock->base, ww_mu
+ 	__seqprop_case((s),	mutex,		prop),			\
+ 	__seqprop_case((s),	ww_mutex,	prop))
+ 
+-#define __seqcount_ptr(s)		__seqprop(s, ptr)
+-#define __seqcount_sequence(s)		__seqprop(s, sequence)
+-#define __seqcount_lock_preemptible(s)	__seqprop(s, preemptible)
+-#define __seqcount_assert_lock_held(s)	__seqprop(s, assert)
++#define seqprop_ptr(s)			__seqprop(s, ptr)
++#define seqprop_sequence(s)		__seqprop(s, sequence)
++#define seqprop_preemptible(s)		__seqprop(s, preemptible)
++#define seqprop_assert(s)		__seqprop(s, assert)
+ 
+ /**
+  * __read_seqcount_begin() - begin a seqcount_t read section w/o barrier
+@@ -328,13 +328,13 @@ SEQCOUNT_LOCKNAME(ww_mutex,     struct ww_mutex, true,     &s->lock->base, ww_mu
+  */
+ #define __read_seqcount_begin(s)					\
+ ({									\
+-	unsigned seq;							\
++	unsigned __seq;							\
+ 									\
+-	while ((seq = __seqcount_sequence(s)) & 1)			\
++	while ((__seq = seqprop_sequence(s)) & 1)			\
+ 		cpu_relax();						\
+ 									\
+ 	kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);			\
+-	seq;								\
++	__seq;								\
+ })
+ 
+ /**
+@@ -345,10 +345,10 @@ SEQCOUNT_LOCKNAME(ww_mutex,     struct ww_mutex, true,     &s->lock->base, ww_mu
+  */
+ #define raw_read_seqcount_begin(s)					\
+ ({									\
+-	unsigned seq = __read_seqcount_begin(s);			\
++	unsigned _seq = __read_seqcount_begin(s);			\
+ 									\
+ 	smp_rmb();							\
+-	seq;								\
++	_seq;								\
+ })
+ 
+ /**
+@@ -359,7 +359,7 @@ SEQCOUNT_LOCKNAME(ww_mutex,     struct ww_mutex, true,     &s->lock->base, ww_mu
+  */
+ #define read_seqcount_begin(s)						\
+ ({									\
+-	seqcount_lockdep_reader_access(__seqcount_ptr(s));		\
++	seqcount_lockdep_reader_access(seqprop_ptr(s));			\
+ 	raw_read_seqcount_begin(s);					\
+ })
+ 
+@@ -376,11 +376,11 @@ SEQCOUNT_LOCKNAME(ww_mutex,     struct ww_mutex, true,     &s->lock->base, ww_mu
+  */
+ #define raw_read_seqcount(s)						\
+ ({									\
+-	unsigned seq = __seqcount_sequence(s);				\
++	unsigned __seq = seqprop_sequence(s);				\
+ 									\
+ 	smp_rmb();							\
+ 	kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);			\
+-	seq;								\
++	__seq;								\
+ })
+ 
+ /**
+@@ -425,9 +425,9 @@ SEQCOUNT_LOCKNAME(ww_mutex,     struct ww_mutex, true,     &s->lock->base, ww_mu
+  * Return: true if a read section retry is required, else false
+  */
+ #define __read_seqcount_retry(s, start)					\
+-	__read_seqcount_t_retry(__seqcount_ptr(s), start)
++	do___read_seqcount_retry(seqprop_ptr(s), start)
+ 
+-static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start)
++static inline int do___read_seqcount_retry(const seqcount_t *s, unsigned start)
+ {
+ 	kcsan_atomic_next(0);
+ 	return unlikely(READ_ONCE(s->sequence) != start);
+@@ -445,12 +445,12 @@ static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start)
+  * Return: true if a read section retry is required, else false
+  */
+ #define read_seqcount_retry(s, start)					\
+-	read_seqcount_t_retry(__seqcount_ptr(s), start)
++	do_read_seqcount_retry(seqprop_ptr(s), start)
+ 
+-static inline int read_seqcount_t_retry(const seqcount_t *s, unsigned start)
++static inline int do_read_seqcount_retry(const seqcount_t *s, unsigned start)
+ {
+ 	smp_rmb();
+-	return __read_seqcount_t_retry(s, start);
++	return do___read_seqcount_retry(s, start);
+ }
+ 
+ /**
+@@ -459,13 +459,13 @@ static inline int read_seqcount_t_retry(const seqcount_t *s, unsigned start)
+  */
+ #define raw_write_seqcount_begin(s)					\
+ do {									\
+-	if (__seqcount_lock_preemptible(s))				\
++	if (seqprop_preemptible(s))					\
+ 		preempt_disable();					\
+ 									\
+-	raw_write_seqcount_t_begin(__seqcount_ptr(s));			\
++	do_raw_write_seqcount_begin(seqprop_ptr(s));			\
+ } while (0)
+ 
+-static inline void raw_write_seqcount_t_begin(seqcount_t *s)
++static inline void do_raw_write_seqcount_begin(seqcount_t *s)
+ {
+ 	kcsan_nestable_atomic_begin();
+ 	s->sequence++;
+@@ -478,13 +478,13 @@ static inline void raw_write_seqcount_t_begin(seqcount_t *s)
+  */
+ #define raw_write_seqcount_end(s)					\
+ do {									\
+-	raw_write_seqcount_t_end(__seqcount_ptr(s));			\
++	do_raw_write_seqcount_end(seqprop_ptr(s));			\
+ 									\
+-	if (__seqcount_lock_preemptible(s))				\
++	if (seqprop_preemptible(s))					\
+ 		preempt_enable();					\
+ } while (0)
+ 
+-static inline void raw_write_seqcount_t_end(seqcount_t *s)
++static inline void do_raw_write_seqcount_end(seqcount_t *s)
+ {
+ 	smp_wmb();
+ 	s->sequence++;
+@@ -501,18 +501,18 @@ static inline void raw_write_seqcount_t_end(seqcount_t *s)
+  */
+ #define write_seqcount_begin_nested(s, subclass)			\
+ do {									\
+-	__seqcount_assert_lock_held(s);					\
++	seqprop_assert(s);						\
+ 									\
+-	if (__seqcount_lock_preemptible(s))				\
++	if (seqprop_preemptible(s))					\
+ 		preempt_disable();					\
+ 									\
+-	write_seqcount_t_begin_nested(__seqcount_ptr(s), subclass);	\
++	do_write_seqcount_begin_nested(seqprop_ptr(s), subclass);	\
+ } while (0)
+ 
+-static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
++static inline void do_write_seqcount_begin_nested(seqcount_t *s, int subclass)
+ {
+-	raw_write_seqcount_t_begin(s);
+ 	seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_);
++	do_raw_write_seqcount_begin(s);
+ }
+ 
+ /**
+@@ -528,17 +528,17 @@ static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
+  */
+ #define write_seqcount_begin(s)						\
+ do {									\
+-	__seqcount_assert_lock_held(s);					\
++	seqprop_assert(s);						\
+ 									\
+-	if (__seqcount_lock_preemptible(s))				\
++	if (seqprop_preemptible(s))					\
+ 		preempt_disable();					\
+ 									\
+-	write_seqcount_t_begin(__seqcount_ptr(s));			\
++	do_write_seqcount_begin(seqprop_ptr(s));			\
+ } while (0)
+ 
+-static inline void write_seqcount_t_begin(seqcount_t *s)
++static inline void do_write_seqcount_begin(seqcount_t *s)
+ {
+-	write_seqcount_t_begin_nested(s, 0);
++	do_write_seqcount_begin_nested(s, 0);
+ }
+ 
+ /**
+@@ -549,16 +549,16 @@ static inline void write_seqcount_t_begin(seqcount_t *s)
+  */
+ #define write_seqcount_end(s)						\
+ do {									\
+-	write_seqcount_t_end(__seqcount_ptr(s));			\
++	do_write_seqcount_end(seqprop_ptr(s));				\
+ 									\
+-	if (__seqcount_lock_preemptible(s))				\
++	if (seqprop_preemptible(s))					\
+ 		preempt_enable();					\
+ } while (0)
+ 
+-static inline void write_seqcount_t_end(seqcount_t *s)
++static inline void do_write_seqcount_end(seqcount_t *s)
+ {
+ 	seqcount_release(&s->dep_map, _RET_IP_);
+-	raw_write_seqcount_t_end(s);
++	do_raw_write_seqcount_end(s);
+ }
+ 
+ /**
+@@ -603,9 +603,9 @@ static inline void write_seqcount_t_end(seqcount_t *s)
+  *      }
+  */
+ #define raw_write_seqcount_barrier(s)					\
+-	raw_write_seqcount_t_barrier(__seqcount_ptr(s))
++	do_raw_write_seqcount_barrier(seqprop_ptr(s))
+ 
+-static inline void raw_write_seqcount_t_barrier(seqcount_t *s)
++static inline void do_raw_write_seqcount_barrier(seqcount_t *s)
+ {
+ 	kcsan_nestable_atomic_begin();
+ 	s->sequence++;
+@@ -623,9 +623,9 @@ static inline void raw_write_seqcount_t_barrier(seqcount_t *s)
+  * will complete successfully and see data older than this.
+  */
+ #define write_seqcount_invalidate(s)					\
+-	write_seqcount_t_invalidate(__seqcount_ptr(s))
++	do_write_seqcount_invalidate(seqprop_ptr(s))
+ 
+-static inline void write_seqcount_t_invalidate(seqcount_t *s)
++static inline void do_write_seqcount_invalidate(seqcount_t *s)
+ {
+ 	smp_wmb();
+ 	kcsan_nestable_atomic_begin();
+@@ -862,9 +862,9 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
+ }
+ 
+ /*
+- * For all seqlock_t write side functions, use write_seqcount_*t*_begin()
+- * instead of the generic write_seqcount_begin(). This way, no redundant
+- * lockdep_assert_held() checks are added.
++ * For all seqlock_t write side functions, use the the internal
++ * do_write_seqcount_begin() instead of generic write_seqcount_begin().
++ * This way, no redundant lockdep_assert_held() checks are added.
+  */
+ 
+ /**
+@@ -883,7 +883,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
+ static inline void write_seqlock(seqlock_t *sl)
+ {
+ 	spin_lock(&sl->lock);
+-	write_seqcount_t_begin(&sl->seqcount.seqcount);
++	do_write_seqcount_begin(&sl->seqcount.seqcount);
+ }
+ 
+ /**
+@@ -895,7 +895,7 @@ static inline void write_seqlock(seqlock_t *sl)
+  */
+ static inline void write_sequnlock(seqlock_t *sl)
+ {
+-	write_seqcount_t_end(&sl->seqcount.seqcount);
++	do_write_seqcount_end(&sl->seqcount.seqcount);
+ 	spin_unlock(&sl->lock);
+ }
+ 
+@@ -909,7 +909,7 @@ static inline void write_sequnlock(seqlock_t *sl)
+ static inline void write_seqlock_bh(seqlock_t *sl)
+ {
+ 	spin_lock_bh(&sl->lock);
+-	write_seqcount_t_begin(&sl->seqcount.seqcount);
++	do_write_seqcount_begin(&sl->seqcount.seqcount);
+ }
+ 
+ /**
+@@ -922,7 +922,7 @@ static inline void write_seqlock_bh(seqlock_t *sl)
+  */
+ static inline void write_sequnlock_bh(seqlock_t *sl)
+ {
+-	write_seqcount_t_end(&sl->seqcount.seqcount);
++	do_write_seqcount_end(&sl->seqcount.seqcount);
+ 	spin_unlock_bh(&sl->lock);
+ }
+ 
+@@ -936,7 +936,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl)
+ static inline void write_seqlock_irq(seqlock_t *sl)
+ {
+ 	spin_lock_irq(&sl->lock);
+-	write_seqcount_t_begin(&sl->seqcount.seqcount);
++	do_write_seqcount_begin(&sl->seqcount.seqcount);
+ }
+ 
+ /**
+@@ -948,7 +948,7 @@ static inline void write_seqlock_irq(seqlock_t *sl)
+  */
+ static inline void write_sequnlock_irq(seqlock_t *sl)
+ {
+-	write_seqcount_t_end(&sl->seqcount.seqcount);
++	do_write_seqcount_end(&sl->seqcount.seqcount);
+ 	spin_unlock_irq(&sl->lock);
+ }
+ 
+@@ -957,7 +957,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&sl->lock, flags);
+-	write_seqcount_t_begin(&sl->seqcount.seqcount);
++	do_write_seqcount_begin(&sl->seqcount.seqcount);
+ 	return flags;
+ }
+ 
+@@ -986,7 +986,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
+ static inline void
+ write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
+ {
+-	write_seqcount_t_end(&sl->seqcount.seqcount);
++	do_write_seqcount_end(&sl->seqcount.seqcount);
+ 	spin_unlock_irqrestore(&sl->lock, flags);
+ }
+ 
+diff --git a/include/net/netfilter/ipv4/nf_reject.h b/include/net/netfilter/ipv4/nf_reject.h
+index 40e0e0623f461..d8207a82d761a 100644
+--- a/include/net/netfilter/ipv4/nf_reject.h
++++ b/include/net/netfilter/ipv4/nf_reject.h
+@@ -8,8 +8,8 @@
+ #include <net/netfilter/nf_reject.h>
+ 
+ void nf_send_unreach(struct sk_buff *skb_in, int code, int hook);
+-void nf_send_reset(struct net *net, struct sk_buff *oldskb, int hook);
+-
++void nf_send_reset(struct net *net, struct sock *, struct sk_buff *oldskb,
++		   int hook);
+ const struct tcphdr *nf_reject_ip_tcphdr_get(struct sk_buff *oldskb,
+ 					     struct tcphdr *_oth, int hook);
+ struct iphdr *nf_reject_iphdr_put(struct sk_buff *nskb,
+diff --git a/include/net/netfilter/ipv6/nf_reject.h b/include/net/netfilter/ipv6/nf_reject.h
+index 4a3ef9ebdf6f6..86e87bc2c5167 100644
+--- a/include/net/netfilter/ipv6/nf_reject.h
++++ b/include/net/netfilter/ipv6/nf_reject.h
+@@ -7,9 +7,8 @@
+ 
+ void nf_send_unreach6(struct net *net, struct sk_buff *skb_in, unsigned char code,
+ 		      unsigned int hooknum);
+-
+-void nf_send_reset6(struct net *net, struct sk_buff *oldskb, int hook);
+-
++void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
++		    int hook);
+ const struct tcphdr *nf_reject_ip6_tcphdr_get(struct sk_buff *oldskb,
+ 					      struct tcphdr *otcph,
+ 					      unsigned int *otcplen, int hook);
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index eec29dd6681ca..1d59a109417d2 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -28,6 +28,16 @@ struct nft_pktinfo {
+ 	struct xt_action_param		xt;
+ };
+ 
++static inline struct sock *nft_sk(const struct nft_pktinfo *pkt)
++{
++	return pkt->xt.state->sk;
++}
++
++static inline unsigned int nft_thoff(const struct nft_pktinfo *pkt)
++{
++	return pkt->xt.thoff;
++}
++
+ static inline struct net *nft_net(const struct nft_pktinfo *pkt)
+ {
+ 	return pkt->xt.state->net;
+@@ -373,7 +383,8 @@ struct nft_set_ops {
+ 					       const struct nft_set *set,
+ 					       const struct nft_set_elem *elem,
+ 					       unsigned int flags);
+-
++	void				(*commit)(const struct nft_set *set);
++	void				(*abort)(const struct nft_set *set);
+ 	u64				(*privsize)(const struct nlattr * const nla[],
+ 						    const struct nft_set_desc *desc);
+ 	bool				(*estimate)(const struct nft_set_desc *desc,
+@@ -406,6 +417,7 @@ struct nft_set_type {
+  *
+  *	@list: table set list node
+  *	@bindings: list of set bindings
++ *	@refs: internal refcounting for async set destruction
+  *	@table: table this set belongs to
+  *	@net: netnamespace this set belongs to
+  * 	@name: name of the set
+@@ -427,6 +439,7 @@ struct nft_set_type {
+  *	@expr: stateful expression
+  * 	@ops: set ops
+  * 	@flags: set flags
++ *	@dead: set will be freed, never cleared
+  *	@genmask: generation mask
+  * 	@klen: key length
+  * 	@dlen: data length
+@@ -435,6 +448,7 @@ struct nft_set_type {
+ struct nft_set {
+ 	struct list_head		list;
+ 	struct list_head		bindings;
++	refcount_t			refs;
+ 	struct nft_table		*table;
+ 	possible_net_t			net;
+ 	char				*name;
+@@ -454,9 +468,11 @@ struct nft_set {
+ 	u16				udlen;
+ 	unsigned char			*udata;
+ 	struct nft_expr			*expr;
++	struct list_head		pending_update;
+ 	/* runtime data below here */
+ 	const struct nft_set_ops	*ops ____cacheline_aligned;
+-	u16				flags:14,
++	u16				flags:13,
++					dead:1,
+ 					genmask:2;
+ 	u8				klen;
+ 	u8				dlen;
+@@ -474,6 +490,11 @@ static inline void *nft_set_priv(const struct nft_set *set)
+ 	return (void *)set->data;
+ }
+ 
++static inline bool nft_set_gc_is_pending(const struct nft_set *s)
++{
++	return refcount_read(&s->refs) != 1;
++}
++
+ static inline struct nft_set *nft_set_container_of(const void *priv)
+ {
+ 	return (void *)priv - offsetof(struct nft_set, data);
+@@ -690,62 +711,6 @@ void nft_set_elem_destroy(const struct nft_set *set, void *elem,
+ void nf_tables_set_elem_destroy(const struct nft_ctx *ctx,
+ 				const struct nft_set *set, void *elem);
+ 
+-/**
+- *	struct nft_set_gc_batch_head - nf_tables set garbage collection batch
+- *
+- *	@rcu: rcu head
+- *	@set: set the elements belong to
+- *	@cnt: count of elements
+- */
+-struct nft_set_gc_batch_head {
+-	struct rcu_head			rcu;
+-	const struct nft_set		*set;
+-	unsigned int			cnt;
+-};
+-
+-#define NFT_SET_GC_BATCH_SIZE	((PAGE_SIZE -				  \
+-				  sizeof(struct nft_set_gc_batch_head)) / \
+-				 sizeof(void *))
+-
+-/**
+- *	struct nft_set_gc_batch - nf_tables set garbage collection batch
+- *
+- * 	@head: GC batch head
+- * 	@elems: garbage collection elements
+- */
+-struct nft_set_gc_batch {
+-	struct nft_set_gc_batch_head	head;
+-	void				*elems[NFT_SET_GC_BATCH_SIZE];
+-};
+-
+-struct nft_set_gc_batch *nft_set_gc_batch_alloc(const struct nft_set *set,
+-						gfp_t gfp);
+-void nft_set_gc_batch_release(struct rcu_head *rcu);
+-
+-static inline void nft_set_gc_batch_complete(struct nft_set_gc_batch *gcb)
+-{
+-	if (gcb != NULL)
+-		call_rcu(&gcb->head.rcu, nft_set_gc_batch_release);
+-}
+-
+-static inline struct nft_set_gc_batch *
+-nft_set_gc_batch_check(const struct nft_set *set, struct nft_set_gc_batch *gcb,
+-		       gfp_t gfp)
+-{
+-	if (gcb != NULL) {
+-		if (gcb->head.cnt + 1 < ARRAY_SIZE(gcb->elems))
+-			return gcb;
+-		nft_set_gc_batch_complete(gcb);
+-	}
+-	return nft_set_gc_batch_alloc(set, gfp);
+-}
+-
+-static inline void nft_set_gc_batch_add(struct nft_set_gc_batch *gcb,
+-					void *elem)
+-{
+-	gcb->elems[gcb->head.cnt++] = elem;
+-}
+-
+ struct nft_expr_ops;
+ /**
+  *	struct nft_expr_type - nf_tables expression type
+@@ -1413,39 +1378,30 @@ static inline void nft_set_elem_change_active(const struct net *net,
+ 
+ #endif /* IS_ENABLED(CONFIG_NF_TABLES) */
+ 
+-/*
+- * We use a free bit in the genmask field to indicate the element
+- * is busy, meaning it is currently being processed either by
+- * the netlink API or GC.
+- *
+- * Even though the genmask is only a single byte wide, this works
+- * because the extension structure if fully constant once initialized,
+- * so there are no non-atomic write accesses unless it is already
+- * marked busy.
+- */
+-#define NFT_SET_ELEM_BUSY_MASK	(1 << 2)
++#define NFT_SET_ELEM_DEAD_MASK	(1 << 2)
+ 
+ #if defined(__LITTLE_ENDIAN_BITFIELD)
+-#define NFT_SET_ELEM_BUSY_BIT	2
++#define NFT_SET_ELEM_DEAD_BIT	2
+ #elif defined(__BIG_ENDIAN_BITFIELD)
+-#define NFT_SET_ELEM_BUSY_BIT	(BITS_PER_LONG - BITS_PER_BYTE + 2)
++#define NFT_SET_ELEM_DEAD_BIT	(BITS_PER_LONG - BITS_PER_BYTE + 2)
+ #else
+ #error
+ #endif
+ 
+-static inline int nft_set_elem_mark_busy(struct nft_set_ext *ext)
++static inline void nft_set_elem_dead(struct nft_set_ext *ext)
+ {
+ 	unsigned long *word = (unsigned long *)ext;
+ 
+ 	BUILD_BUG_ON(offsetof(struct nft_set_ext, genmask) != 0);
+-	return test_and_set_bit(NFT_SET_ELEM_BUSY_BIT, word);
++	set_bit(NFT_SET_ELEM_DEAD_BIT, word);
+ }
+ 
+-static inline void nft_set_elem_clear_busy(struct nft_set_ext *ext)
++static inline int nft_set_elem_is_dead(const struct nft_set_ext *ext)
+ {
+ 	unsigned long *word = (unsigned long *)ext;
+ 
+-	clear_bit(NFT_SET_ELEM_BUSY_BIT, word);
++	BUILD_BUG_ON(offsetof(struct nft_set_ext, genmask) != 0);
++	return test_bit(NFT_SET_ELEM_DEAD_BIT, word);
+ }
+ 
+ /**
+@@ -1573,6 +1529,35 @@ struct nft_trans_flowtable {
+ #define nft_trans_flowtable_flags(trans)	\
+ 	(((struct nft_trans_flowtable *)trans->data)->flags)
+ 
++#define NFT_TRANS_GC_BATCHCOUNT		256
++
++struct nft_trans_gc {
++	struct list_head	list;
++	struct net		*net;
++	struct nft_set		*set;
++	u32			seq;
++	u16			count;
++	void			*priv[NFT_TRANS_GC_BATCHCOUNT];
++	struct rcu_head		rcu;
++};
++
++struct nft_trans_gc *nft_trans_gc_alloc(struct nft_set *set,
++					unsigned int gc_seq, gfp_t gfp);
++void nft_trans_gc_destroy(struct nft_trans_gc *trans);
++
++struct nft_trans_gc *nft_trans_gc_queue_async(struct nft_trans_gc *gc,
++					      unsigned int gc_seq, gfp_t gfp);
++void nft_trans_gc_queue_async_done(struct nft_trans_gc *gc);
++
++struct nft_trans_gc *nft_trans_gc_queue_sync(struct nft_trans_gc *gc, gfp_t gfp);
++void nft_trans_gc_queue_sync_done(struct nft_trans_gc *trans);
++
++void nft_trans_gc_elem_add(struct nft_trans_gc *gc, void *priv);
++
++void nft_setelem_data_deactivate(const struct net *net,
++				 const struct nft_set *set,
++				 struct nft_set_elem *elem);
++
+ int __init nft_chain_filter_init(void);
+ void nft_chain_filter_fini(void);
+ 
+@@ -1593,6 +1578,7 @@ struct nftables_pernet {
+ 	struct mutex		commit_mutex;
+ 	unsigned int		base_seq;
+ 	u8			validate_state;
++	unsigned int		gc_seq;
+ };
+ 
+ #endif /* _NET_NF_TABLES_H */
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index b56f346020351..cb4b2fddd9eb3 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -337,12 +337,14 @@ ssize_t tcp_splice_read(struct socket *sk, loff_t *ppos,
+ 			struct pipe_inode_info *pipe, size_t len,
+ 			unsigned int flags);
+ 
+-static inline void tcp_dec_quickack_mode(struct sock *sk,
+-					 const unsigned int pkts)
++static inline void tcp_dec_quickack_mode(struct sock *sk)
+ {
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+ 
+ 	if (icsk->icsk_ack.quick) {
++		/* How many ACKs S/ACKing new data have we sent? */
++		const unsigned int pkts = inet_csk_ack_scheduled(sk) ? 1 : 0;
++
+ 		if (pkts >= icsk->icsk_ack.quick) {
+ 			icsk->icsk_ack.quick = 0;
+ 			/* Leaving quickack mode we deflate ATO. */
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 2a234023821e3..36ddfb98b70ea 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -976,7 +976,9 @@ union bpf_attr {
+  * 		performed again, if the helper is used in combination with
+  * 		direct packet access.
+  * 	Return
+- * 		0 on success, or a negative error in case of failure.
++ * 		0 on success, or a negative error in case of failure. Positive
++ * 		error indicates a potential drop or congestion in the target
++ * 		device. The particular positive error codes are not defined.
+  *
+  * u64 bpf_get_current_pid_tgid(void)
+  * 	Return
+diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h
+index 98272cb5f6178..1d8dd58f83a50 100644
+--- a/include/uapi/linux/netfilter/nf_tables.h
++++ b/include/uapi/linux/netfilter/nf_tables.h
+@@ -797,11 +797,13 @@ enum nft_exthdr_flags {
+  * @NFT_EXTHDR_OP_IPV6: match against ipv6 extension headers
+  * @NFT_EXTHDR_OP_TCP: match against tcp options
+  * @NFT_EXTHDR_OP_IPV4: match against ipv4 options
++ * @NFT_EXTHDR_OP_SCTP: match against sctp chunks
+  */
+ enum nft_exthdr_op {
+ 	NFT_EXTHDR_OP_IPV6,
+ 	NFT_EXTHDR_OP_TCPOPT,
+ 	NFT_EXTHDR_OP_IPV4,
++	NFT_EXTHDR_OP_SCTP,
+ 	__NFT_EXTHDR_OP_MAX
+ };
+ #define NFT_EXTHDR_OP_MAX	(__NFT_EXTHDR_OP_MAX - 1)
+diff --git a/kernel/bpf/queue_stack_maps.c b/kernel/bpf/queue_stack_maps.c
+index 0ee2347ba510d..a047a2053d41a 100644
+--- a/kernel/bpf/queue_stack_maps.c
++++ b/kernel/bpf/queue_stack_maps.c
+@@ -111,7 +111,12 @@ static int __queue_map_get(struct bpf_map *map, void *value, bool delete)
+ 	int err = 0;
+ 	void *ptr;
+ 
+-	raw_spin_lock_irqsave(&qs->lock, flags);
++	if (in_nmi()) {
++		if (!raw_spin_trylock_irqsave(&qs->lock, flags))
++			return -EBUSY;
++	} else {
++		raw_spin_lock_irqsave(&qs->lock, flags);
++	}
+ 
+ 	if (queue_stack_map_is_empty(qs)) {
+ 		memset(value, 0, qs->map.value_size);
+@@ -141,7 +146,12 @@ static int __stack_map_get(struct bpf_map *map, void *value, bool delete)
+ 	void *ptr;
+ 	u32 index;
+ 
+-	raw_spin_lock_irqsave(&qs->lock, flags);
++	if (in_nmi()) {
++		if (!raw_spin_trylock_irqsave(&qs->lock, flags))
++			return -EBUSY;
++	} else {
++		raw_spin_lock_irqsave(&qs->lock, flags);
++	}
+ 
+ 	if (queue_stack_map_is_empty(qs)) {
+ 		memset(value, 0, qs->map.value_size);
+@@ -206,7 +216,12 @@ static int queue_stack_map_push_elem(struct bpf_map *map, void *value,
+ 	if (flags & BPF_NOEXIST || flags > BPF_EXIST)
+ 		return -EINVAL;
+ 
+-	raw_spin_lock_irqsave(&qs->lock, irq_flags);
++	if (in_nmi()) {
++		if (!raw_spin_trylock_irqsave(&qs->lock, irq_flags))
++			return -EBUSY;
++	} else {
++		raw_spin_lock_irqsave(&qs->lock, irq_flags);
++	}
+ 
+ 	if (queue_stack_map_is_full(qs)) {
+ 		if (!replace) {
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index ae9fc1ee6d206..0263983089097 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -606,15 +606,19 @@ static struct dma_debug_entry *__dma_entry_alloc(void)
+ 	return entry;
+ }
+ 
+-static void __dma_entry_alloc_check_leak(void)
++/*
++ * This should be called outside of free_entries_lock scope to avoid potential
++ * deadlocks with serial consoles that use DMA.
++ */
++static void __dma_entry_alloc_check_leak(u32 nr_entries)
+ {
+-	u32 tmp = nr_total_entries % nr_prealloc_entries;
++	u32 tmp = nr_entries % nr_prealloc_entries;
+ 
+ 	/* Shout each time we tick over some multiple of the initial pool */
+ 	if (tmp < DMA_DEBUG_DYNAMIC_ENTRIES) {
+ 		pr_info("dma_debug_entry pool grown to %u (%u00%%)\n",
+-			nr_total_entries,
+-			(nr_total_entries / nr_prealloc_entries));
++			nr_entries,
++			(nr_entries / nr_prealloc_entries));
+ 	}
+ }
+ 
+@@ -625,8 +629,10 @@ static void __dma_entry_alloc_check_leak(void)
+  */
+ static struct dma_debug_entry *dma_entry_alloc(void)
+ {
++	bool alloc_check_leak = false;
+ 	struct dma_debug_entry *entry;
+ 	unsigned long flags;
++	u32 nr_entries;
+ 
+ 	spin_lock_irqsave(&free_entries_lock, flags);
+ 	if (num_free_entries == 0) {
+@@ -636,13 +642,17 @@ static struct dma_debug_entry *dma_entry_alloc(void)
+ 			pr_err("debugging out of memory - disabling\n");
+ 			return NULL;
+ 		}
+-		__dma_entry_alloc_check_leak();
++		alloc_check_leak = true;
++		nr_entries = nr_total_entries;
+ 	}
+ 
+ 	entry = __dma_entry_alloc();
+ 
+ 	spin_unlock_irqrestore(&free_entries_lock, flags);
+ 
++	if (alloc_check_leak)
++		__dma_entry_alloc_check_leak(nr_entries);
++
+ #ifdef CONFIG_STACKTRACE
+ 	entry->stack_len = stack_trace_save(entry->stack_entries,
+ 					    ARRAY_SIZE(entry->stack_entries),
+diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
+index 941c28cf97384..8ee298321d78b 100644
+--- a/kernel/sched/cpuacct.c
++++ b/kernel/sched/cpuacct.c
+@@ -21,15 +21,11 @@ static const char * const cpuacct_stat_desc[] = {
+ 	[CPUACCT_STAT_SYSTEM] = "system",
+ };
+ 
+-struct cpuacct_usage {
+-	u64	usages[CPUACCT_STAT_NSTATS];
+-};
+-
+ /* track CPU usage of a group of tasks and its child groups */
+ struct cpuacct {
+ 	struct cgroup_subsys_state	css;
+ 	/* cpuusage holds pointer to a u64-type object on every CPU */
+-	struct cpuacct_usage __percpu	*cpuusage;
++	u64 __percpu	*cpuusage;
+ 	struct kernel_cpustat __percpu	*cpustat;
+ };
+ 
+@@ -49,7 +45,7 @@ static inline struct cpuacct *parent_ca(struct cpuacct *ca)
+ 	return css_ca(ca->css.parent);
+ }
+ 
+-static DEFINE_PER_CPU(struct cpuacct_usage, root_cpuacct_cpuusage);
++static DEFINE_PER_CPU(u64, root_cpuacct_cpuusage);
+ static struct cpuacct root_cpuacct = {
+ 	.cpustat	= &kernel_cpustat,
+ 	.cpuusage	= &root_cpuacct_cpuusage,
+@@ -68,7 +64,7 @@ cpuacct_css_alloc(struct cgroup_subsys_state *parent_css)
+ 	if (!ca)
+ 		goto out;
+ 
+-	ca->cpuusage = alloc_percpu(struct cpuacct_usage);
++	ca->cpuusage = alloc_percpu(u64);
+ 	if (!ca->cpuusage)
+ 		goto out_free_ca;
+ 
+@@ -99,7 +95,8 @@ static void cpuacct_css_free(struct cgroup_subsys_state *css)
+ static u64 cpuacct_cpuusage_read(struct cpuacct *ca, int cpu,
+ 				 enum cpuacct_stat_index index)
+ {
+-	struct cpuacct_usage *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
++	u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
++	u64 *cpustat = per_cpu_ptr(ca->cpustat, cpu)->cpustat;
+ 	u64 data;
+ 
+ 	/*
+@@ -115,14 +112,17 @@ static u64 cpuacct_cpuusage_read(struct cpuacct *ca, int cpu,
+ 	raw_spin_lock_irq(&cpu_rq(cpu)->lock);
+ #endif
+ 
+-	if (index == CPUACCT_STAT_NSTATS) {
+-		int i = 0;
+-
+-		data = 0;
+-		for (i = 0; i < CPUACCT_STAT_NSTATS; i++)
+-			data += cpuusage->usages[i];
+-	} else {
+-		data = cpuusage->usages[index];
++	switch (index) {
++	case CPUACCT_STAT_USER:
++		data = cpustat[CPUTIME_USER] + cpustat[CPUTIME_NICE];
++		break;
++	case CPUACCT_STAT_SYSTEM:
++		data = cpustat[CPUTIME_SYSTEM] + cpustat[CPUTIME_IRQ] +
++			cpustat[CPUTIME_SOFTIRQ];
++		break;
++	case CPUACCT_STAT_NSTATS:
++		data = *cpuusage;
++		break;
+ 	}
+ 
+ #ifndef CONFIG_64BIT
+@@ -132,10 +132,14 @@ static u64 cpuacct_cpuusage_read(struct cpuacct *ca, int cpu,
+ 	return data;
+ }
+ 
+-static void cpuacct_cpuusage_write(struct cpuacct *ca, int cpu, u64 val)
++static void cpuacct_cpuusage_write(struct cpuacct *ca, int cpu)
+ {
+-	struct cpuacct_usage *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
+-	int i;
++	u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
++	u64 *cpustat = per_cpu_ptr(ca->cpustat, cpu)->cpustat;
++
++	/* Don't allow to reset global kernel_cpustat */
++	if (ca == &root_cpuacct)
++		return;
+ 
+ #ifndef CONFIG_64BIT
+ 	/*
+@@ -143,9 +147,10 @@ static void cpuacct_cpuusage_write(struct cpuacct *ca, int cpu, u64 val)
+ 	 */
+ 	raw_spin_lock_irq(&cpu_rq(cpu)->lock);
+ #endif
+-
+-	for (i = 0; i < CPUACCT_STAT_NSTATS; i++)
+-		cpuusage->usages[i] = val;
++	*cpuusage = 0;
++	cpustat[CPUTIME_USER] = cpustat[CPUTIME_NICE] = 0;
++	cpustat[CPUTIME_SYSTEM] = cpustat[CPUTIME_IRQ] = 0;
++	cpustat[CPUTIME_SOFTIRQ] = 0;
+ 
+ #ifndef CONFIG_64BIT
+ 	raw_spin_unlock_irq(&cpu_rq(cpu)->lock);
+@@ -196,7 +201,7 @@ static int cpuusage_write(struct cgroup_subsys_state *css, struct cftype *cft,
+ 		return -EINVAL;
+ 
+ 	for_each_possible_cpu(cpu)
+-		cpuacct_cpuusage_write(ca, cpu, 0);
++		cpuacct_cpuusage_write(ca, cpu);
+ 
+ 	return 0;
+ }
+@@ -243,25 +248,10 @@ static int cpuacct_all_seq_show(struct seq_file *m, void *V)
+ 	seq_puts(m, "\n");
+ 
+ 	for_each_possible_cpu(cpu) {
+-		struct cpuacct_usage *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
+-
+ 		seq_printf(m, "%d", cpu);
+-
+-		for (index = 0; index < CPUACCT_STAT_NSTATS; index++) {
+-#ifndef CONFIG_64BIT
+-			/*
+-			 * Take rq->lock to make 64-bit read safe on 32-bit
+-			 * platforms.
+-			 */
+-			raw_spin_lock_irq(&cpu_rq(cpu)->lock);
+-#endif
+-
+-			seq_printf(m, " %llu", cpuusage->usages[index]);
+-
+-#ifndef CONFIG_64BIT
+-			raw_spin_unlock_irq(&cpu_rq(cpu)->lock);
+-#endif
+-		}
++		for (index = 0; index < CPUACCT_STAT_NSTATS; index++)
++			seq_printf(m, " %llu",
++				   cpuacct_cpuusage_read(ca, cpu, index));
+ 		seq_puts(m, "\n");
+ 	}
+ 	return 0;
+@@ -338,19 +328,13 @@ static struct cftype files[] = {
+  */
+ void cpuacct_charge(struct task_struct *tsk, u64 cputime)
+ {
++	unsigned int cpu = task_cpu(tsk);
+ 	struct cpuacct *ca;
+-	int index = CPUACCT_STAT_SYSTEM;
+-	struct pt_regs *regs = get_irq_regs() ? : task_pt_regs(tsk);
+ 
+-	if (regs && user_mode(regs))
+-		index = CPUACCT_STAT_USER;
+-
+-	rcu_read_lock();
++	lockdep_assert_held(&cpu_rq(cpu)->lock);
+ 
+ 	for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
+-		__this_cpu_add(ca->cpuusage->usages[index], cputime);
+-
+-	rcu_read_unlock();
++		*per_cpu_ptr(ca->cpuusage, cpu) += cputime;
+ }
+ 
+ /*
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index f8126fa0630e2..0938222b45988 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -355,10 +355,11 @@ static void rb_init_page(struct buffer_data_page *bpage)
+ 	local_set(&bpage->commit, 0);
+ }
+ 
+-/*
+- * Also stolen from mm/slob.c. Thanks to Mathieu Desnoyers for pointing
+- * this issue out.
+- */
++static __always_inline unsigned int rb_page_commit(struct buffer_page *bpage)
++{
++	return local_read(&bpage->page->commit);
++}
++
+ static void free_buffer_page(struct buffer_page *bpage)
+ {
+ 	free_page((unsigned long)bpage->page);
+@@ -1008,6 +1009,9 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ 	if (full) {
+ 		poll_wait(filp, &work->full_waiters, poll_table);
+ 		work->full_waiters_pending = true;
++		if (!cpu_buffer->shortest_full ||
++		    cpu_buffer->shortest_full > full)
++			cpu_buffer->shortest_full = full;
+ 	} else {
+ 		poll_wait(filp, &work->waiters, poll_table);
+ 		work->waiters_pending = true;
+@@ -1887,7 +1891,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
+ 			 * Increment overrun to account for the lost events.
+ 			 */
+ 			local_add(page_entries, &cpu_buffer->overrun);
+-			local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);
++			local_sub(rb_page_commit(to_remove_page), &cpu_buffer->entries_bytes);
+ 			local_inc(&cpu_buffer->pages_lost);
+ 		}
+ 
+@@ -2080,6 +2084,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 				err = -ENOMEM;
+ 				goto out_err;
+ 			}
++
++			cond_resched();
+ 		}
+ 
+ 		get_online_cpus();
+@@ -2235,11 +2241,6 @@ rb_reader_event(struct ring_buffer_per_cpu *cpu_buffer)
+ 			       cpu_buffer->reader_page->read);
+ }
+ 
+-static __always_inline unsigned rb_page_commit(struct buffer_page *bpage)
+-{
+-	return local_read(&bpage->page->commit);
+-}
+-
+ static struct ring_buffer_event *
+ rb_iter_head_event(struct ring_buffer_iter *iter)
+ {
+@@ -2258,6 +2259,11 @@ rb_iter_head_event(struct ring_buffer_iter *iter)
+ 	 */
+ 	commit = rb_page_commit(iter_head_page);
+ 	smp_rmb();
++
++	/* An event needs to be at least 8 bytes in size */
++	if (iter->head > commit - 8)
++		goto reset;
++
+ 	event = __rb_page_index(iter_head_page, iter->head);
+ 	length = rb_event_length(event);
+ 
+@@ -2380,7 +2386,7 @@ rb_handle_head_page(struct ring_buffer_per_cpu *cpu_buffer,
+ 		 * the counters.
+ 		 */
+ 		local_add(entries, &cpu_buffer->overrun);
+-		local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);
++		local_sub(rb_page_commit(next_page), &cpu_buffer->entries_bytes);
+ 		local_inc(&cpu_buffer->pages_lost);
+ 
+ 		/*
+@@ -2523,9 +2529,6 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer,
+ 
+ 	event = __rb_page_index(tail_page, tail);
+ 
+-	/* account for padding bytes */
+-	local_add(BUF_PAGE_SIZE - tail, &cpu_buffer->entries_bytes);
+-
+ 	/*
+ 	 * Save the original length to the meta data.
+ 	 * This will be used by the reader to add lost event
+@@ -2539,7 +2542,8 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer,
+ 	 * write counter enough to allow another writer to slip
+ 	 * in on this page.
+ 	 * We put in a discarded commit instead, to make sure
+-	 * that this space is not used again.
++	 * that this space is not used again, and this space will
++	 * not be accounted into 'entries_bytes'.
+ 	 *
+ 	 * If we are less than the minimum size, we don't need to
+ 	 * worry about it.
+@@ -2564,6 +2568,9 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer,
+ 	/* time delta must be non zero */
+ 	event->time_delta = 1;
+ 
++	/* account for padding bytes */
++	local_add(BUF_PAGE_SIZE - tail, &cpu_buffer->entries_bytes);
++
+ 	/* Make sure the padding is visible before the tail_page->write update */
+ 	smp_wmb();
+ 
+@@ -3929,7 +3936,7 @@ u64 ring_buffer_oldest_event_ts(struct trace_buffer *buffer, int cpu)
+ EXPORT_SYMBOL_GPL(ring_buffer_oldest_event_ts);
+ 
+ /**
+- * ring_buffer_bytes_cpu - get the number of bytes consumed in a cpu buffer
++ * ring_buffer_bytes_cpu - get the number of bytes unconsumed in a cpu buffer
+  * @buffer: The ring buffer
+  * @cpu: The per CPU buffer to read from.
+  */
+@@ -4437,6 +4444,7 @@ static void rb_advance_reader(struct ring_buffer_per_cpu *cpu_buffer)
+ 
+ 	length = rb_event_length(event);
+ 	cpu_buffer->reader_page->read += length;
++	cpu_buffer->read_bytes += length;
+ }
+ 
+ static void rb_advance_iter(struct ring_buffer_iter *iter)
+@@ -5528,7 +5536,7 @@ int ring_buffer_read_page(struct trace_buffer *buffer,
+ 	} else {
+ 		/* update the entry counter */
+ 		cpu_buffer->read += rb_page_entries(reader);
+-		cpu_buffer->read_bytes += BUF_PAGE_SIZE;
++		cpu_buffer->read_bytes += rb_page_commit(reader);
+ 
+ 		/* swap the pages */
+ 		rb_init_page(bpage);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 7a64c0cd8819d..196eec0423ff2 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -4559,6 +4559,33 @@ int tracing_open_generic_tr(struct inode *inode, struct file *filp)
+ 	return 0;
+ }
+ 
++/*
++ * The private pointer of the inode is the trace_event_file.
++ * Update the tr ref count associated to it.
++ */
++int tracing_open_file_tr(struct inode *inode, struct file *filp)
++{
++	struct trace_event_file *file = inode->i_private;
++	int ret;
++
++	ret = tracing_check_open_get_tr(file->tr);
++	if (ret)
++		return ret;
++
++	filp->private_data = inode->i_private;
++
++	return 0;
++}
++
++int tracing_release_file_tr(struct inode *inode, struct file *filp)
++{
++	struct trace_event_file *file = inode->i_private;
++
++	trace_array_put(file->tr);
++
++	return 0;
++}
++
+ static int tracing_release(struct inode *inode, struct file *file)
+ {
+ 	struct trace_array *tr = inode->i_private;
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index dfde855dafda7..7fa00b83dfa4b 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -730,6 +730,8 @@ void tracing_reset_all_online_cpus(void);
+ void tracing_reset_all_online_cpus_unlocked(void);
+ int tracing_open_generic(struct inode *inode, struct file *filp);
+ int tracing_open_generic_tr(struct inode *inode, struct file *filp);
++int tracing_open_file_tr(struct inode *inode, struct file *filp);
++int tracing_release_file_tr(struct inode *inode, struct file *filp);
+ bool tracing_is_disabled(void);
+ bool tracer_tracing_is_on(struct trace_array *tr);
+ void tracer_tracing_on(struct trace_array *tr);
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index a46d34d840f69..321cfda1b3338 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -1855,9 +1855,10 @@ static const struct file_operations ftrace_set_event_notrace_pid_fops = {
+ };
+ 
+ static const struct file_operations ftrace_enable_fops = {
+-	.open = tracing_open_generic,
++	.open = tracing_open_file_tr,
+ 	.read = event_enable_read,
+ 	.write = event_enable_write,
++	.release = tracing_release_file_tr,
+ 	.llseek = default_llseek,
+ };
+ 
+@@ -1874,9 +1875,10 @@ static const struct file_operations ftrace_event_id_fops = {
+ };
+ 
+ static const struct file_operations ftrace_event_filter_fops = {
+-	.open = tracing_open_generic,
++	.open = tracing_open_file_tr,
+ 	.read = event_filter_read,
+ 	.write = event_filter_write,
++	.release = tracing_release_file_tr,
+ 	.llseek = default_llseek,
+ };
+ 
+diff --git a/kernel/trace/trace_events_inject.c b/kernel/trace/trace_events_inject.c
+index 22bcf7c51d1ee..149c7dc6a4473 100644
+--- a/kernel/trace/trace_events_inject.c
++++ b/kernel/trace/trace_events_inject.c
+@@ -323,7 +323,8 @@ event_inject_read(struct file *file, char __user *buf, size_t size,
+ }
+ 
+ const struct file_operations event_inject_fops = {
+-	.open = tracing_open_generic,
++	.open = tracing_open_file_tr,
+ 	.read = event_inject_read,
+ 	.write = event_inject_write,
++	.release = tracing_release_file_tr,
+ };
+diff --git a/mm/frame_vector.c b/mm/frame_vector.c
+index 0e589a9a88012..1cd81d38ad2d0 100644
+--- a/mm/frame_vector.c
++++ b/mm/frame_vector.c
+@@ -29,6 +29,10 @@
+  * different type underlying the specified range of virtual addresses.
+  * When the function isn't able to map a single page, it returns error.
+  *
++ * Note that get_vaddr_frames() cannot follow VM_IO mappings. It used
++ * to be able to do that, but that could (racily) return non-refcounted
++ * pfns.
++ *
+  * This function takes care of grabbing mmap_lock as necessary.
+  */
+ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
+@@ -77,8 +81,6 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
+ 			goto out;
+ 	}
+ 
+-	/* This used to (racily) return non-refcounted pfns. Let people know */
+-	WARN_ONCE(1, "get_vaddr_frames() cannot follow VM_IO mapping");
+ 	vec->nr_frames = 0;
+ 
+ out:
+diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c
+index 4610f3a13966f..f2ef75c7ccc68 100644
+--- a/net/bridge/br_forward.c
++++ b/net/bridge/br_forward.c
+@@ -118,7 +118,7 @@ static int deliver_clone(const struct net_bridge_port *prev,
+ 
+ 	skb = skb_clone(skb, GFP_ATOMIC);
+ 	if (!skb) {
+-		dev->stats.tx_dropped++;
++		DEV_STATS_INC(dev, tx_dropped);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -255,7 +255,7 @@ static void maybe_deliver_addr(struct net_bridge_port *p, struct sk_buff *skb,
+ 
+ 	skb = skb_copy(skb, GFP_ATOMIC);
+ 	if (!skb) {
+-		dev->stats.tx_dropped++;
++		DEV_STATS_INC(dev, tx_dropped);
+ 		return;
+ 	}
+ 
+diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c
+index bf5bf148091f9..52dd0708fd143 100644
+--- a/net/bridge/br_input.c
++++ b/net/bridge/br_input.c
+@@ -145,12 +145,12 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
+ 			if ((mdst && mdst->host_joined) ||
+ 			    br_multicast_is_router(br)) {
+ 				local_rcv = true;
+-				br->dev->stats.multicast++;
++				DEV_STATS_INC(br->dev, multicast);
+ 			}
+ 			mcast_hit = true;
+ 		} else {
+ 			local_rcv = true;
+-			br->dev->stats.multicast++;
++			DEV_STATS_INC(br->dev, multicast);
+ 		}
+ 		break;
+ 	case BR_PKT_UNICAST:
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 3b642c412cf32..15267428c4f83 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -935,7 +935,9 @@ static void neigh_periodic_work(struct work_struct *work)
+ 			    (state == NUD_FAILED ||
+ 			     !time_in_range_open(jiffies, n->used,
+ 						 n->used + NEIGH_VAR(n->parms, GC_STALETIME)))) {
+-				*np = n->next;
++				rcu_assign_pointer(*np,
++					rcu_dereference_protected(n->next,
++						lockdep_is_held(&tbl->lock)));
+ 				neigh_mark_dead(n);
+ 				write_unlock(&n->lock);
+ 				neigh_cleanup_and_release(n);
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index 398dc3e47d0c8..f2a0a4e6dd748 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -243,13 +243,8 @@ static int dccp_v4_err(struct sk_buff *skb, u32 info)
+ 	int err;
+ 	struct net *net = dev_net(skb->dev);
+ 
+-	/* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x,
+-	 * which is in byte 7 of the dccp header.
+-	 * Our caller (icmp_socket_deliver()) already pulled 8 bytes for us.
+-	 *
+-	 * Later on, we want to access the sequence number fields, which are
+-	 * beyond 8 bytes, so we have to pskb_may_pull() ourselves.
+-	 */
++	if (!pskb_may_pull(skb, offset + sizeof(*dh)))
++		return -EINVAL;
+ 	dh = (struct dccp_hdr *)(skb->data + offset);
+ 	if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh)))
+ 		return -EINVAL;
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index bfe11e96af7c9..6d6bbd43a1419 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -76,13 +76,8 @@ static int dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 	__u64 seq;
+ 	struct net *net = dev_net(skb->dev);
+ 
+-	/* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x,
+-	 * which is in byte 7 of the dccp header.
+-	 * Our caller (icmpv6_notify()) already pulled 8 bytes for us.
+-	 *
+-	 * Later on, we want to access the sequence number fields, which are
+-	 * beyond 8 bytes, so we have to pskb_may_pull() ourselves.
+-	 */
++	if (!pskb_may_pull(skb, offset + sizeof(*dh)))
++		return -EINVAL;
+ 	dh = (struct dccp_hdr *)(skb->data + offset);
+ 	if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh)))
+ 		return -EINVAL;
+diff --git a/net/ipv4/netfilter/ipt_REJECT.c b/net/ipv4/netfilter/ipt_REJECT.c
+index e16b98ee6266e..4b88407347629 100644
+--- a/net/ipv4/netfilter/ipt_REJECT.c
++++ b/net/ipv4/netfilter/ipt_REJECT.c
+@@ -56,7 +56,8 @@ reject_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ 		nf_send_unreach(skb, ICMP_PKT_FILTERED, hook);
+ 		break;
+ 	case IPT_TCP_RESET:
+-		nf_send_reset(xt_net(par), skb, hook);
++		nf_send_reset(xt_net(par), par->state->sk, skb, hook);
++		break;
+ 	case IPT_ICMP_ECHOREPLY:
+ 		/* Doesn't happen. */
+ 		break;
+diff --git a/net/ipv4/netfilter/nf_reject_ipv4.c b/net/ipv4/netfilter/nf_reject_ipv4.c
+index 93b07739807b2..efe14a6a5d9b8 100644
+--- a/net/ipv4/netfilter/nf_reject_ipv4.c
++++ b/net/ipv4/netfilter/nf_reject_ipv4.c
+@@ -112,7 +112,8 @@ static int nf_reject_fill_skb_dst(struct sk_buff *skb_in)
+ }
+ 
+ /* Send RST reply */
+-void nf_send_reset(struct net *net, struct sk_buff *oldskb, int hook)
++void nf_send_reset(struct net *net, struct sock *sk, struct sk_buff *oldskb,
++		   int hook)
+ {
+ 	struct net_device *br_indev __maybe_unused;
+ 	struct sk_buff *nskb;
+@@ -144,8 +145,7 @@ void nf_send_reset(struct net *net, struct sk_buff *oldskb, int hook)
+ 	niph = nf_reject_iphdr_put(nskb, oldskb, IPPROTO_TCP,
+ 				   ip4_dst_hoplimit(skb_dst(nskb)));
+ 	nf_reject_ip_tcphdr_put(nskb, oldskb, oth);
+-
+-	if (ip_route_me_harder(net, nskb->sk, nskb, RTN_UNSPEC))
++	if (ip_route_me_harder(net, sk, nskb, RTN_UNSPEC))
+ 		goto free_nskb;
+ 
+ 	niph = ip_hdr(nskb);
+diff --git a/net/ipv4/netfilter/nft_reject_ipv4.c b/net/ipv4/netfilter/nft_reject_ipv4.c
+index e408f813f5d80..55fc23a8f7a70 100644
+--- a/net/ipv4/netfilter/nft_reject_ipv4.c
++++ b/net/ipv4/netfilter/nft_reject_ipv4.c
+@@ -27,7 +27,8 @@ static void nft_reject_ipv4_eval(const struct nft_expr *expr,
+ 		nf_send_unreach(pkt->skb, priv->icmp_code, nft_hook(pkt));
+ 		break;
+ 	case NFT_REJECT_TCP_RST:
+-		nf_send_reset(nft_net(pkt), pkt->skb, nft_hook(pkt));
++		nf_send_reset(nft_net(pkt), nft_sk(pkt), pkt->skb,
++			      nft_hook(pkt));
+ 		break;
+ 	default:
+ 		break;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 3ddeb4fc0d08a..445b1a2966d79 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1240,6 +1240,7 @@ static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie)
+ 
+ static void ipv4_send_dest_unreach(struct sk_buff *skb)
+ {
++	struct net_device *dev;
+ 	struct ip_options opt;
+ 	int res;
+ 
+@@ -1257,7 +1258,8 @@ static void ipv4_send_dest_unreach(struct sk_buff *skb)
+ 		opt.optlen = ip_hdr(skb)->ihl * 4 - sizeof(struct iphdr);
+ 
+ 		rcu_read_lock();
+-		res = __ip_options_compile(dev_net(skb->dev), &opt, skb, NULL);
++		dev = skb->dev ? skb->dev : skb_rtable(skb)->dst.dev;
++		res = __ip_options_compile(dev_net(dev), &opt, skb, NULL);
+ 		rcu_read_unlock();
+ 
+ 		if (res)
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index b8d2c45edbe02..3f2b6a3adf6a9 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -242,6 +242,19 @@ static void tcp_measure_rcv_mss(struct sock *sk, const struct sk_buff *skb)
+ 		if (unlikely(len > icsk->icsk_ack.rcv_mss +
+ 				   MAX_TCP_OPTION_SPACE))
+ 			tcp_gro_dev_warn(sk, skb, len);
++		/* If the skb has a len of exactly 1*MSS and has the PSH bit
++		 * set then it is likely the end of an application write. So
++		 * more data may not be arriving soon, and yet the data sender
++		 * may be waiting for an ACK if cwnd-bound or using TX zero
++		 * copy. So we set ICSK_ACK_PUSHED here so that
++		 * tcp_cleanup_rbuf() will send an ACK immediately if the app
++		 * reads all of the data and is not ping-pong. If len > MSS
++		 * then this logic does not matter (and does not hurt) because
++		 * tcp_cleanup_rbuf() will always ACK immediately if the app
++		 * reads data and there is more than an MSS of unACKed data.
++		 */
++		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_PSH)
++			icsk->icsk_ack.pending |= ICSK_ACK_PUSHED;
+ 	} else {
+ 		/* Otherwise, we make more careful check taking into account,
+ 		 * that SACKs block is variable.
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 86e896351364e..6c14d67715d15 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -177,8 +177,7 @@ static void tcp_event_data_sent(struct tcp_sock *tp,
+ }
+ 
+ /* Account for an ACK we sent. */
+-static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts,
+-				      u32 rcv_nxt)
++static inline void tcp_event_ack_sent(struct sock *sk, u32 rcv_nxt)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 
+@@ -192,7 +191,7 @@ static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts,
+ 
+ 	if (unlikely(rcv_nxt != tp->rcv_nxt))
+ 		return;  /* Special ACK sent by DCTCP to reflect ECN */
+-	tcp_dec_quickack_mode(sk, pkts);
++	tcp_dec_quickack_mode(sk);
+ 	inet_csk_clear_xmit_timer(sk, ICSK_TIME_DACK);
+ }
+ 
+@@ -1374,7 +1373,7 @@ static int __tcp_transmit_skb(struct sock *sk, struct sk_buff *skb,
+ 			   sk, skb);
+ 
+ 	if (likely(tcb->tcp_flags & TCPHDR_ACK))
+-		tcp_event_ack_sent(sk, tcp_skb_pcount(skb), rcv_nxt);
++		tcp_event_ack_sent(sk, rcv_nxt);
+ 
+ 	if (skb->len != tcp_header_size) {
+ 		tcp_event_data_sent(tp, sk);
+diff --git a/net/ipv6/netfilter/ip6t_REJECT.c b/net/ipv6/netfilter/ip6t_REJECT.c
+index 3ac5485049f09..a35019d2e480c 100644
+--- a/net/ipv6/netfilter/ip6t_REJECT.c
++++ b/net/ipv6/netfilter/ip6t_REJECT.c
+@@ -61,7 +61,7 @@ reject_tg6(struct sk_buff *skb, const struct xt_action_param *par)
+ 		/* Do nothing */
+ 		break;
+ 	case IP6T_TCP_RESET:
+-		nf_send_reset6(net, skb, xt_hooknum(par));
++		nf_send_reset6(net, par->state->sk, skb, xt_hooknum(par));
+ 		break;
+ 	case IP6T_ICMP6_POLICY_FAIL:
+ 		nf_send_unreach6(net, skb, ICMPV6_POLICY_FAIL, xt_hooknum(par));
+diff --git a/net/ipv6/netfilter/nf_reject_ipv6.c b/net/ipv6/netfilter/nf_reject_ipv6.c
+index bf95513736c92..832d9f9cd10ad 100644
+--- a/net/ipv6/netfilter/nf_reject_ipv6.c
++++ b/net/ipv6/netfilter/nf_reject_ipv6.c
+@@ -141,7 +141,8 @@ static int nf_reject6_fill_skb_dst(struct sk_buff *skb_in)
+ 	return 0;
+ }
+ 
+-void nf_send_reset6(struct net *net, struct sk_buff *oldskb, int hook)
++void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
++		    int hook)
+ {
+ 	struct net_device *br_indev __maybe_unused;
+ 	struct sk_buff *nskb;
+@@ -233,7 +234,7 @@ void nf_send_reset6(struct net *net, struct sk_buff *oldskb, int hook)
+ 		dev_queue_xmit(nskb);
+ 	} else
+ #endif
+-		ip6_local_out(net, nskb->sk, nskb);
++		ip6_local_out(net, sk, nskb);
+ }
+ EXPORT_SYMBOL_GPL(nf_send_reset6);
+ 
+diff --git a/net/ipv6/netfilter/nft_reject_ipv6.c b/net/ipv6/netfilter/nft_reject_ipv6.c
+index c1098a1968e1e..ed69c768797ec 100644
+--- a/net/ipv6/netfilter/nft_reject_ipv6.c
++++ b/net/ipv6/netfilter/nft_reject_ipv6.c
+@@ -28,7 +28,8 @@ static void nft_reject_ipv6_eval(const struct nft_expr *expr,
+ 				 nft_hook(pkt));
+ 		break;
+ 	case NFT_REJECT_TCP_RST:
+-		nf_send_reset6(nft_net(pkt), pkt->skb, nft_hook(pkt));
++		nf_send_reset6(nft_net(pkt), nft_sk(pkt), pkt->skb,
++			       nft_hook(pkt));
+ 		break;
+ 	default:
+ 		break;
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index 382124d6f7647..9746c624a5503 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -508,7 +508,6 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	 */
+ 	if (len > INT_MAX - transhdrlen)
+ 		return -EMSGSIZE;
+-	ulen = len + transhdrlen;
+ 
+ 	/* Mirror BSD error message compatibility */
+ 	if (msg->msg_flags & MSG_OOB)
+@@ -629,6 +628,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 
+ back_from_confirm:
+ 	lock_sock(sk);
++	ulen = len + skb_queue_empty(&sk->sk_write_queue) ? transhdrlen : 0;
+ 	err = ip6_append_data(sk, ip_generic_getfrag, msg,
+ 			      ulen, transhdrlen, &ipc6,
+ 			      &fl6, (struct rt6_info *)dst,
+diff --git a/net/ncsi/ncsi-aen.c b/net/ncsi/ncsi-aen.c
+index 62fb1031763d1..f8854bff286cb 100644
+--- a/net/ncsi/ncsi-aen.c
++++ b/net/ncsi/ncsi-aen.c
+@@ -89,6 +89,11 @@ static int ncsi_aen_handler_lsc(struct ncsi_dev_priv *ndp,
+ 	if ((had_link == has_link) || chained)
+ 		return 0;
+ 
++	if (had_link)
++		netif_carrier_off(ndp->ndev.dev);
++	else
++		netif_carrier_on(ndp->ndev.dev);
++
+ 	if (!ndp->multi_package && !nc->package->multi_channel) {
+ 		if (had_link) {
+ 			ndp->flags |= NCSI_DEV_RESHUFFLE;
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index 55ac0cc12657c..26613e3731d02 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -682,6 +682,14 @@ __ip_set_put(struct ip_set *set)
+ /* set->ref can be swapped out by ip_set_swap, netlink events (like dump) need
+  * a separate reference counter
+  */
++static void
++__ip_set_get_netlink(struct ip_set *set)
++{
++	write_lock_bh(&ip_set_ref_lock);
++	set->ref_netlink++;
++	write_unlock_bh(&ip_set_ref_lock);
++}
++
+ static void
+ __ip_set_put_netlink(struct ip_set *set)
+ {
+@@ -1705,11 +1713,11 @@ call_ad(struct sock *ctnl, struct sk_buff *skb, struct ip_set *set,
+ 
+ 	do {
+ 		if (retried) {
+-			__ip_set_get(set);
++			__ip_set_get_netlink(set);
+ 			nfnl_unlock(NFNL_SUBSYS_IPSET);
+ 			cond_resched();
+ 			nfnl_lock(NFNL_SUBSYS_IPSET);
+-			__ip_set_put(set);
++			__ip_set_put_netlink(set);
+ 		}
+ 
+ 		ip_set_lock(set);
+diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
+index fc8db03d3efca..e45ffa762bbed 100644
+--- a/net/netfilter/ipvs/ip_vs_sync.c
++++ b/net/netfilter/ipvs/ip_vs_sync.c
+@@ -1507,8 +1507,8 @@ static int make_send_sock(struct netns_ipvs *ipvs, int id,
+ 	}
+ 
+ 	get_mcast_sockaddr(&mcast_addr, &salen, &ipvs->mcfg, id);
+-	result = sock->ops->connect(sock, (struct sockaddr *) &mcast_addr,
+-				    salen, 0);
++	result = kernel_connect(sock, (struct sockaddr *)&mcast_addr,
++				salen, 0);
+ 	if (result < 0) {
+ 		pr_err("Error connecting to the multicast addr\n");
+ 		goto error;
+diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c
+index 21cbaf6dac331..e7545bcca805e 100644
+--- a/net/netfilter/nf_conntrack_proto_sctp.c
++++ b/net/netfilter/nf_conntrack_proto_sctp.c
+@@ -112,7 +112,7 @@ static const u8 sctp_conntracks[2][11][SCTP_CONNTRACK_MAX] = {
+ /* shutdown_ack */ {sSA, sCL, sCW, sCE, sES, sSA, sSA, sSA, sSA},
+ /* error        */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't have Stale cookie*/
+ /* cookie_echo  */ {sCL, sCL, sCE, sCE, sES, sSS, sSR, sSA, sCL},/* 5.2.4 - Big TODO */
+-/* cookie_ack   */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't come in orig dir */
++/* cookie_ack   */ {sCL, sCL, sCW, sES, sES, sSS, sSR, sSA, sCL},/* Can't come in orig dir */
+ /* shutdown_comp*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sCL, sCL},
+ /* heartbeat    */ {sHS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
+ /* heartbeat_ack*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
+@@ -126,7 +126,7 @@ static const u8 sctp_conntracks[2][11][SCTP_CONNTRACK_MAX] = {
+ /* shutdown     */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV},
+ /* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV},
+ /* error        */ {sIV, sCL, sCW, sCL, sES, sSS, sSR, sSA, sIV},
+-/* cookie_echo  */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV},/* Can't come in reply dir */
++/* cookie_echo  */ {sIV, sCL, sCE, sCE, sES, sSS, sSR, sSA, sIV},/* Can't come in reply dir */
+ /* cookie_ack   */ {sIV, sCL, sCW, sES, sES, sSS, sSR, sSA, sIV},
+ /* shutdown_comp*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sCL, sIV},
+ /* heartbeat    */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
+@@ -426,6 +426,9 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
+ 			/* (D) vtag must be same as init_vtag as found in INIT_ACK */
+ 			if (sh->vtag != ct->proto.sctp.vtag[dir])
+ 				goto out_unlock;
++		} else if (sch->type == SCTP_CID_COOKIE_ACK) {
++			ct->proto.sctp.init[dir] = 0;
++			ct->proto.sctp.init[!dir] = 0;
+ 		} else if (sch->type == SCTP_CID_HEARTBEAT) {
+ 			if (ct->proto.sctp.vtag[dir] == 0) {
+ 				pr_debug("Setting %d vtag %x for dir %d\n", sch->type, sh->vtag, dir);
+@@ -474,16 +477,18 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
+ 		}
+ 
+ 		/* If it is an INIT or an INIT ACK note down the vtag */
+-		if (sch->type == SCTP_CID_INIT ||
+-		    sch->type == SCTP_CID_INIT_ACK) {
+-			struct sctp_inithdr _inithdr, *ih;
++		if (sch->type == SCTP_CID_INIT) {
++			struct sctp_inithdr _ih, *ih;
+ 
+-			ih = skb_header_pointer(skb, offset + sizeof(_sch),
+-						sizeof(_inithdr), &_inithdr);
+-			if (ih == NULL)
++			ih = skb_header_pointer(skb, offset + sizeof(_sch), sizeof(*ih), &_ih);
++			if (!ih)
+ 				goto out_unlock;
+-			pr_debug("Setting vtag %x for dir %d\n",
+-				 ih->init_tag, !dir);
++
++			if (ct->proto.sctp.init[dir] && ct->proto.sctp.init[!dir])
++				ct->proto.sctp.init[!dir] = 0;
++			ct->proto.sctp.init[dir] = 1;
++
++			pr_debug("Setting vtag %x for dir %d\n", ih->init_tag, !dir);
+ 			ct->proto.sctp.vtag[!dir] = ih->init_tag;
+ 
+ 			/* don't renew timeout on init retransmit so
+@@ -494,6 +499,24 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
+ 			    old_state == SCTP_CONNTRACK_CLOSED &&
+ 			    nf_ct_is_confirmed(ct))
+ 				ignore = true;
++		} else if (sch->type == SCTP_CID_INIT_ACK) {
++			struct sctp_inithdr _ih, *ih;
++			__be32 vtag;
++
++			ih = skb_header_pointer(skb, offset + sizeof(_sch), sizeof(*ih), &_ih);
++			if (!ih)
++				goto out_unlock;
++
++			vtag = ct->proto.sctp.vtag[!dir];
++			if (!ct->proto.sctp.init[!dir] && vtag && vtag != ih->init_tag)
++				goto out_unlock;
++			/* collision */
++			if (ct->proto.sctp.init[dir] && ct->proto.sctp.init[!dir] &&
++			    vtag != ih->init_tag)
++				goto out_unlock;
++
++			pr_debug("Setting vtag %x for dir %d\n", ih->init_tag, !dir);
++			ct->proto.sctp.vtag[!dir] = ih->init_tag;
+ 		}
+ 
+ 		ct->proto.sctp.state = new_state;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 2669999d1bc9c..78b268bd7f012 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -32,7 +32,9 @@ static LIST_HEAD(nf_tables_expressions);
+ static LIST_HEAD(nf_tables_objects);
+ static LIST_HEAD(nf_tables_flowtables);
+ static LIST_HEAD(nf_tables_destroy_list);
++static LIST_HEAD(nf_tables_gc_list);
+ static DEFINE_SPINLOCK(nf_tables_destroy_list_lock);
++static DEFINE_SPINLOCK(nf_tables_gc_list_lock);
+ static u64 table_handle;
+ 
+ enum {
+@@ -124,6 +126,9 @@ static void nft_validate_state_update(struct net *net, u8 new_validate_state)
+ static void nf_tables_trans_destroy_work(struct work_struct *w);
+ static DECLARE_WORK(trans_destroy_work, nf_tables_trans_destroy_work);
+ 
++static void nft_trans_gc_work(struct work_struct *work);
++static DECLARE_WORK(trans_gc_work, nft_trans_gc_work);
++
+ static void nft_ctx_init(struct nft_ctx *ctx,
+ 			 struct net *net,
+ 			 const struct sk_buff *skb,
+@@ -298,12 +303,18 @@ err_register:
+ }
+ 
+ static void nft_netdev_unregister_hooks(struct net *net,
+-					struct list_head *hook_list)
++					struct list_head *hook_list,
++					bool release_netdev)
+ {
+-	struct nft_hook *hook;
++	struct nft_hook *hook, *next;
+ 
+-	list_for_each_entry(hook, hook_list, list)
++	list_for_each_entry_safe(hook, next, hook_list, list) {
+ 		nf_unregister_net_hook(net, &hook->ops);
++		if (release_netdev) {
++			list_del(&hook->list);
++			kfree_rcu(hook, rcu);
++		}
++	}
+ }
+ 
+ static int nf_tables_register_hook(struct net *net,
+@@ -329,9 +340,10 @@ static int nf_tables_register_hook(struct net *net,
+ 	return nf_register_net_hook(net, &basechain->ops);
+ }
+ 
+-static void nf_tables_unregister_hook(struct net *net,
+-				      const struct nft_table *table,
+-				      struct nft_chain *chain)
++static void __nf_tables_unregister_hook(struct net *net,
++					const struct nft_table *table,
++					struct nft_chain *chain,
++					bool release_netdev)
+ {
+ 	struct nft_base_chain *basechain;
+ 	const struct nf_hook_ops *ops;
+@@ -346,11 +358,19 @@ static void nf_tables_unregister_hook(struct net *net,
+ 		return basechain->type->ops_unregister(net, ops);
+ 
+ 	if (nft_base_chain_netdev(table->family, basechain->ops.hooknum))
+-		nft_netdev_unregister_hooks(net, &basechain->hook_list);
++		nft_netdev_unregister_hooks(net, &basechain->hook_list,
++					    release_netdev);
+ 	else
+ 		nf_unregister_net_hook(net, &basechain->ops);
+ }
+ 
++static void nf_tables_unregister_hook(struct net *net,
++				      const struct nft_table *table,
++				      struct nft_chain *chain)
++{
++	return __nf_tables_unregister_hook(net, table, chain, false);
++}
++
+ static void nft_trans_commit_list_add_tail(struct net *net, struct nft_trans *trans)
+ {
+ 	struct nftables_pernet *nft_net;
+@@ -559,10 +579,6 @@ static int nft_trans_set_add(const struct nft_ctx *ctx, int msg_type,
+ 	return 0;
+ }
+ 
+-static void nft_setelem_data_deactivate(const struct net *net,
+-					const struct nft_set *set,
+-					struct nft_set_elem *elem);
+-
+ static int nft_mapelem_deactivate(const struct nft_ctx *ctx,
+ 				  struct nft_set *set,
+ 				  const struct nft_set_iter *iter,
+@@ -1252,7 +1268,7 @@ static int nft_flush_table(struct nft_ctx *ctx)
+ 		if (!nft_is_active_next(ctx->net, chain))
+ 			continue;
+ 
+-		if (nft_chain_is_bound(chain))
++		if (nft_chain_binding(chain))
+ 			continue;
+ 
+ 		ctx->chain = chain;
+@@ -1266,8 +1282,7 @@ static int nft_flush_table(struct nft_ctx *ctx)
+ 		if (!nft_is_active_next(ctx->net, set))
+ 			continue;
+ 
+-		if (nft_set_is_anonymous(set) &&
+-		    !list_empty(&set->bindings))
++		if (nft_set_is_anonymous(set))
+ 			continue;
+ 
+ 		err = nft_delset(ctx, set);
+@@ -1297,7 +1312,7 @@ static int nft_flush_table(struct nft_ctx *ctx)
+ 		if (!nft_is_active_next(ctx->net, chain))
+ 			continue;
+ 
+-		if (nft_chain_is_bound(chain))
++		if (nft_chain_binding(chain))
+ 			continue;
+ 
+ 		ctx->chain = chain;
+@@ -2584,6 +2599,9 @@ static int nf_tables_delchain(struct net *net, struct sock *nlsk,
+ 		return PTR_ERR(chain);
+ 	}
+ 
++	if (nft_chain_binding(chain))
++		return -EOPNOTSUPP;
++
+ 	if (nlh->nlmsg_flags & NLM_F_NONREC &&
+ 	    chain->use > 0)
+ 		return -EBUSY;
+@@ -3483,6 +3501,11 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+ 	}
+ 
+ 	if (nlh->nlmsg_flags & NLM_F_REPLACE) {
++		if (nft_chain_binding(chain)) {
++			err = -EOPNOTSUPP;
++			goto err_destroy_flow_rule;
++		}
++
+ 		trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule);
+ 		if (trans == NULL) {
+ 			err = -ENOMEM;
+@@ -3591,7 +3614,7 @@ static int nf_tables_delrule(struct net *net, struct sock *nlsk,
+ 			NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN]);
+ 			return PTR_ERR(chain);
+ 		}
+-		if (nft_chain_is_bound(chain))
++		if (nft_chain_binding(chain))
+ 			return -EOPNOTSUPP;
+ 	}
+ 
+@@ -3621,7 +3644,7 @@ static int nf_tables_delrule(struct net *net, struct sock *nlsk,
+ 		list_for_each_entry(chain, &table->chains, list) {
+ 			if (!nft_is_active_next(net, chain))
+ 				continue;
+-			if (nft_chain_is_bound(chain))
++			if (nft_chain_binding(chain))
+ 				continue;
+ 
+ 			ctx.chain = chain;
+@@ -4474,6 +4497,7 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 	}
+ 
+ 	INIT_LIST_HEAD(&set->bindings);
++	refcount_set(&set->refs, 1);
+ 	set->table = table;
+ 	write_pnet(&set->net, net);
+ 	set->ops = ops;
+@@ -4509,6 +4533,7 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 	}
+ 
+ 	set->handle = nf_tables_alloc_handle(table);
++	INIT_LIST_HEAD(&set->pending_update);
+ 
+ 	err = nft_trans_set_add(&ctx, NFT_MSG_NEWSET, set);
+ 	if (err < 0)
+@@ -4533,6 +4558,14 @@ err_alloc:
+ 	return err;
+ }
+ 
++static void nft_set_put(struct nft_set *set)
++{
++	if (refcount_dec_and_test(&set->refs)) {
++		kfree(set->name);
++		kvfree(set);
++	}
++}
++
+ static void nft_set_destroy(const struct nft_ctx *ctx, struct nft_set *set)
+ {
+ 	if (WARN_ON(set->use > 0))
+@@ -4542,8 +4575,7 @@ static void nft_set_destroy(const struct nft_ctx *ctx, struct nft_set *set)
+ 		nft_expr_destroy(ctx, set->expr);
+ 
+ 	set->ops->destroy(ctx, set);
+-	kfree(set->name);
+-	kvfree(set);
++	nft_set_put(set);
+ }
+ 
+ static int nf_tables_delset(struct net *net, struct sock *nlsk,
+@@ -4928,8 +4960,12 @@ static int nf_tables_dump_setelem(const struct nft_ctx *ctx,
+ 				  const struct nft_set_iter *iter,
+ 				  struct nft_set_elem *elem)
+ {
++	const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ 	struct nft_set_dump_args *args;
+ 
++	if (nft_set_elem_expired(ext))
++		return 0;
++
+ 	args = container_of(iter, struct nft_set_dump_args, iter);
+ 	return nf_tables_fill_setelem(args->skb, set, elem);
+ }
+@@ -5623,7 +5659,8 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ 		goto err_elem_expr;
+ 	}
+ 
+-	ext->genmask = nft_genmask_cur(ctx->net) | NFT_SET_ELEM_BUSY_MASK;
++	ext->genmask = nft_genmask_cur(ctx->net);
++
+ 	err = set->ops->insert(ctx->net, set, &elem, &ext2);
+ 	if (err) {
+ 		if (err == -EEXIST) {
+@@ -5763,9 +5800,9 @@ static void nft_setelem_data_activate(const struct net *net,
+ 		nft_use_inc_restore(&(*nft_set_ext_obj(ext))->use);
+ }
+ 
+-static void nft_setelem_data_deactivate(const struct net *net,
+-					const struct nft_set *set,
+-					struct nft_set_elem *elem)
++void nft_setelem_data_deactivate(const struct net *net,
++				 const struct nft_set *set,
++				 struct nft_set_elem *elem)
+ {
+ 	const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ 
+@@ -5907,8 +5944,10 @@ static int nf_tables_delsetelem(struct net *net, struct sock *nlsk,
+ 	if (IS_ERR(set))
+ 		return PTR_ERR(set);
+ 
+-	if (!list_empty(&set->bindings) &&
+-	    (set->flags & (NFT_SET_CONSTANT | NFT_SET_ANONYMOUS)))
++	if (nft_set_is_anonymous(set))
++		return -EOPNOTSUPP;
++
++	if (!list_empty(&set->bindings) && (set->flags & NFT_SET_CONSTANT))
+ 		return -EBUSY;
+ 
+ 	if (nla[NFTA_SET_ELEM_LIST_ELEMENTS] == NULL) {
+@@ -5931,29 +5970,6 @@ static int nf_tables_delsetelem(struct net *net, struct sock *nlsk,
+ 	return err;
+ }
+ 
+-void nft_set_gc_batch_release(struct rcu_head *rcu)
+-{
+-	struct nft_set_gc_batch *gcb;
+-	unsigned int i;
+-
+-	gcb = container_of(rcu, struct nft_set_gc_batch, head.rcu);
+-	for (i = 0; i < gcb->head.cnt; i++)
+-		nft_set_elem_destroy(gcb->head.set, gcb->elems[i], true);
+-	kfree(gcb);
+-}
+-
+-struct nft_set_gc_batch *nft_set_gc_batch_alloc(const struct nft_set *set,
+-						gfp_t gfp)
+-{
+-	struct nft_set_gc_batch *gcb;
+-
+-	gcb = kzalloc(sizeof(*gcb), gfp);
+-	if (gcb == NULL)
+-		return gcb;
+-	gcb->head.set = set;
+-	return gcb;
+-}
+-
+ /*
+  * Stateful objects
+  */
+@@ -6829,13 +6845,25 @@ static void nft_unregister_flowtable_hook(struct net *net,
+ 				    FLOW_BLOCK_UNBIND);
+ }
+ 
+-static void nft_unregister_flowtable_net_hooks(struct net *net,
+-					       struct list_head *hook_list)
++static void __nft_unregister_flowtable_net_hooks(struct net *net,
++						 struct list_head *hook_list,
++					         bool release_netdev)
+ {
+-	struct nft_hook *hook;
++	struct nft_hook *hook, *next;
+ 
+-	list_for_each_entry(hook, hook_list, list)
++	list_for_each_entry_safe(hook, next, hook_list, list) {
+ 		nf_unregister_net_hook(net, &hook->ops);
++		if (release_netdev) {
++			list_del(&hook->list);
++			kfree_rcu(hook, rcu);
++		}
++	}
++}
++
++static void nft_unregister_flowtable_net_hooks(struct net *net,
++					       struct list_head *hook_list)
++{
++	__nft_unregister_flowtable_net_hooks(net, hook_list, false);
+ }
+ 
+ static int nft_register_flowtable_net_hooks(struct net *net,
+@@ -7997,6 +8025,190 @@ void nft_chain_del(struct nft_chain *chain)
+ 	list_del_rcu(&chain->list);
+ }
+ 
++static void nft_trans_gc_setelem_remove(struct nft_ctx *ctx,
++					struct nft_trans_gc *trans)
++{
++	void **priv = trans->priv;
++	unsigned int i;
++
++	for (i = 0; i < trans->count; i++) {
++		struct nft_set_elem elem = {
++			.priv = priv[i],
++		};
++
++		nft_setelem_data_deactivate(ctx->net, trans->set, &elem);
++		trans->set->ops->remove(trans->net, trans->set, &elem);
++	}
++}
++
++void nft_trans_gc_destroy(struct nft_trans_gc *trans)
++{
++	nft_set_put(trans->set);
++	put_net(trans->net);
++	kfree(trans);
++}
++
++static void nft_trans_gc_trans_free(struct rcu_head *rcu)
++{
++	struct nft_set_elem elem = {};
++	struct nft_trans_gc *trans;
++	struct nft_ctx ctx = {};
++	unsigned int i;
++
++	trans = container_of(rcu, struct nft_trans_gc, rcu);
++	ctx.net = read_pnet(&trans->set->net);
++
++	for (i = 0; i < trans->count; i++) {
++		elem.priv = trans->priv[i];
++		atomic_dec(&trans->set->nelems);
++
++		nf_tables_set_elem_destroy(&ctx, trans->set, elem.priv);
++	}
++
++	nft_trans_gc_destroy(trans);
++}
++
++static bool nft_trans_gc_work_done(struct nft_trans_gc *trans)
++{
++	struct nftables_pernet *nft_net;
++	struct nft_ctx ctx = {};
++
++	nft_net = net_generic(trans->net, nf_tables_net_id);
++
++	mutex_lock(&nft_net->commit_mutex);
++
++	/* Check for race with transaction, otherwise this batch refers to
++	 * stale objects that might not be there anymore. Skip transaction if
++	 * set has been destroyed from control plane transaction in case gc
++	 * worker loses race.
++	 */
++	if (READ_ONCE(nft_net->gc_seq) != trans->seq || trans->set->dead) {
++		mutex_unlock(&nft_net->commit_mutex);
++		return false;
++	}
++
++	ctx.net = trans->net;
++	ctx.table = trans->set->table;
++
++	nft_trans_gc_setelem_remove(&ctx, trans);
++	mutex_unlock(&nft_net->commit_mutex);
++
++	return true;
++}
++
++static void nft_trans_gc_work(struct work_struct *work)
++{
++	struct nft_trans_gc *trans, *next;
++	LIST_HEAD(trans_gc_list);
++
++	spin_lock(&nf_tables_gc_list_lock);
++	list_splice_init(&nf_tables_gc_list, &trans_gc_list);
++	spin_unlock(&nf_tables_gc_list_lock);
++
++	list_for_each_entry_safe(trans, next, &trans_gc_list, list) {
++		list_del(&trans->list);
++		if (!nft_trans_gc_work_done(trans)) {
++			nft_trans_gc_destroy(trans);
++			continue;
++		}
++		call_rcu(&trans->rcu, nft_trans_gc_trans_free);
++	}
++}
++
++struct nft_trans_gc *nft_trans_gc_alloc(struct nft_set *set,
++					unsigned int gc_seq, gfp_t gfp)
++{
++	struct net *net = read_pnet(&set->net);
++	struct nft_trans_gc *trans;
++
++	trans = kzalloc(sizeof(*trans), gfp);
++	if (!trans)
++		return NULL;
++
++	trans->net = maybe_get_net(net);
++	if (!trans->net) {
++		kfree(trans);
++		return NULL;
++	}
++
++	refcount_inc(&set->refs);
++	trans->set = set;
++	trans->seq = gc_seq;
++
++	return trans;
++}
++
++void nft_trans_gc_elem_add(struct nft_trans_gc *trans, void *priv)
++{
++	trans->priv[trans->count++] = priv;
++}
++
++static void nft_trans_gc_queue_work(struct nft_trans_gc *trans)
++{
++	spin_lock(&nf_tables_gc_list_lock);
++	list_add_tail(&trans->list, &nf_tables_gc_list);
++	spin_unlock(&nf_tables_gc_list_lock);
++
++	schedule_work(&trans_gc_work);
++}
++
++static int nft_trans_gc_space(struct nft_trans_gc *trans)
++{
++	return NFT_TRANS_GC_BATCHCOUNT - trans->count;
++}
++
++struct nft_trans_gc *nft_trans_gc_queue_async(struct nft_trans_gc *gc,
++					      unsigned int gc_seq, gfp_t gfp)
++{
++	struct nft_set *set;
++
++	if (nft_trans_gc_space(gc))
++		return gc;
++
++	set = gc->set;
++	nft_trans_gc_queue_work(gc);
++
++	return nft_trans_gc_alloc(set, gc_seq, gfp);
++}
++
++void nft_trans_gc_queue_async_done(struct nft_trans_gc *trans)
++{
++	if (trans->count == 0) {
++		nft_trans_gc_destroy(trans);
++		return;
++	}
++
++	nft_trans_gc_queue_work(trans);
++}
++
++struct nft_trans_gc *nft_trans_gc_queue_sync(struct nft_trans_gc *gc, gfp_t gfp)
++{
++	struct nft_set *set;
++
++	if (WARN_ON_ONCE(!lockdep_commit_lock_is_held(gc->net)))
++		return NULL;
++
++	if (nft_trans_gc_space(gc))
++		return gc;
++
++	set = gc->set;
++	call_rcu(&gc->rcu, nft_trans_gc_trans_free);
++
++	return nft_trans_gc_alloc(set, 0, gfp);
++}
++
++void nft_trans_gc_queue_sync_done(struct nft_trans_gc *trans)
++{
++	WARN_ON_ONCE(!lockdep_commit_lock_is_held(trans->net));
++
++	if (trans->count == 0) {
++		nft_trans_gc_destroy(trans);
++		return;
++	}
++
++	call_rcu(&trans->rcu, nft_trans_gc_trans_free);
++}
++
+ static void nf_tables_module_autoload_cleanup(struct net *net)
+ {
+ 	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+@@ -8141,13 +8353,45 @@ static void nf_tables_commit_audit_log(struct list_head *adl, u32 generation)
+ 	}
+ }
+ 
++static void nft_set_commit_update(struct list_head *set_update_list)
++{
++	struct nft_set *set, *next;
++
++	list_for_each_entry_safe(set, next, set_update_list, pending_update) {
++		list_del_init(&set->pending_update);
++
++		if (!set->ops->commit)
++			continue;
++
++		set->ops->commit(set);
++	}
++}
++
++static unsigned int nft_gc_seq_begin(struct nftables_pernet *nft_net)
++{
++	unsigned int gc_seq;
++
++	/* Bump gc counter, it becomes odd, this is the busy mark. */
++	gc_seq = READ_ONCE(nft_net->gc_seq);
++	WRITE_ONCE(nft_net->gc_seq, ++gc_seq);
++
++	return gc_seq;
++}
++
++static void nft_gc_seq_end(struct nftables_pernet *nft_net, unsigned int gc_seq)
++{
++	WRITE_ONCE(nft_net->gc_seq, ++gc_seq);
++}
++
+ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ {
+ 	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_trans *trans, *next;
++	LIST_HEAD(set_update_list);
+ 	struct nft_trans_elem *te;
+ 	struct nft_chain *chain;
+ 	struct nft_table *table;
++	unsigned int gc_seq;
+ 	LIST_HEAD(adl);
+ 	int err;
+ 
+@@ -8220,6 +8464,8 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 	while (++nft_net->base_seq == 0)
+ 		;
+ 
++	gc_seq = nft_gc_seq_begin(nft_net);
++
+ 	/* step 3. Start new generation, rules_gen_X now in use. */
+ 	net->nft.gencursor = nft_gencursor_next(net);
+ 
+@@ -8299,6 +8545,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_DELSET:
++			nft_trans_set(trans)->dead = 1;
+ 			list_del_rcu(&nft_trans_set(trans)->list);
+ 			nf_tables_set_notify(&trans->ctx, nft_trans_set(trans),
+ 					     NFT_MSG_DELSET, GFP_KERNEL);
+@@ -8310,6 +8557,11 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 			nf_tables_setelem_notify(&trans->ctx, te->set,
+ 						 &te->elem,
+ 						 NFT_MSG_NEWSETELEM, 0);
++			if (te->set->ops->commit &&
++			    list_empty(&te->set->pending_update)) {
++				list_add_tail(&te->set->pending_update,
++					      &set_update_list);
++			}
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_DELSETELEM:
+@@ -8321,6 +8573,11 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 			te->set->ops->remove(net, te->set, &te->elem);
+ 			atomic_dec(&te->set->nelems);
+ 			te->set->ndeact--;
++			if (te->set->ops->commit &&
++			    list_empty(&te->set->pending_update)) {
++				list_add_tail(&te->set->pending_update,
++					      &set_update_list);
++			}
+ 			break;
+ 		case NFT_MSG_NEWOBJ:
+ 			if (nft_trans_obj_update(trans)) {
+@@ -8381,9 +8638,13 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 		}
+ 	}
+ 
++	nft_set_commit_update(&set_update_list);
++
+ 	nft_commit_notify(net, NETLINK_CB(skb).portid);
+ 	nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN);
+ 	nf_tables_commit_audit_log(&adl, nft_net->base_seq);
++
++	nft_gc_seq_end(nft_net, gc_seq);
+ 	nf_tables_commit_release(net);
+ 
+ 	return 0;
+@@ -8437,10 +8698,25 @@ static void nf_tables_abort_release(struct nft_trans *trans)
+ 	kfree(trans);
+ }
+ 
++static void nft_set_abort_update(struct list_head *set_update_list)
++{
++	struct nft_set *set, *next;
++
++	list_for_each_entry_safe(set, next, set_update_list, pending_update) {
++		list_del_init(&set->pending_update);
++
++		if (!set->ops->abort)
++			continue;
++
++		set->ops->abort(set);
++	}
++}
++
+ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ {
+ 	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_trans *trans, *next;
++	LIST_HEAD(set_update_list);
+ 	struct nft_trans_elem *te;
+ 
+ 	if (action == NFNL_ABORT_VALIDATE &&
+@@ -8529,6 +8805,12 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			te = (struct nft_trans_elem *)trans->data;
+ 			te->set->ops->remove(net, te->set, &te->elem);
+ 			atomic_dec(&te->set->nelems);
++
++			if (te->set->ops->abort &&
++			    list_empty(&te->set->pending_update)) {
++				list_add_tail(&te->set->pending_update,
++					      &set_update_list);
++			}
+ 			break;
+ 		case NFT_MSG_DELSETELEM:
+ 			te = (struct nft_trans_elem *)trans->data;
+@@ -8537,6 +8819,11 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			te->set->ops->activate(net, te->set, &te->elem);
+ 			te->set->ndeact--;
+ 
++			if (te->set->ops->abort &&
++			    list_empty(&te->set->pending_update)) {
++				list_add_tail(&te->set->pending_update,
++					      &set_update_list);
++			}
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_NEWOBJ:
+@@ -8577,6 +8864,8 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 		}
+ 	}
+ 
++	nft_set_abort_update(&set_update_list);
++
+ 	synchronize_rcu();
+ 
+ 	list_for_each_entry_safe_reverse(trans, next,
+@@ -8597,7 +8886,12 @@ static int nf_tables_abort(struct net *net, struct sk_buff *skb,
+ 			   enum nfnl_abort_action action)
+ {
+ 	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+-	int ret = __nf_tables_abort(net, action);
++	unsigned int gc_seq;
++	int ret;
++
++	gc_seq = nft_gc_seq_begin(nft_net);
++	ret = __nf_tables_abort(net, action);
++	nft_gc_seq_end(nft_net, gc_seq);
+ 
+ 	mutex_unlock(&nft_net->commit_mutex);
+ 
+@@ -9207,16 +9501,25 @@ int __nft_release_basechain(struct nft_ctx *ctx)
+ }
+ EXPORT_SYMBOL_GPL(__nft_release_basechain);
+ 
++static void __nft_release_hook(struct net *net, struct nft_table *table)
++{
++	struct nft_flowtable *flowtable;
++	struct nft_chain *chain;
++
++	list_for_each_entry(chain, &table->chains, list)
++		__nf_tables_unregister_hook(net, table, chain, true);
++	list_for_each_entry(flowtable, &table->flowtables, list)
++		__nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list,
++						     true);
++}
++
+ static void __nft_release_hooks(struct net *net)
+ {
+ 	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
+ 	struct nft_table *table;
+-	struct nft_chain *chain;
+ 
+-	list_for_each_entry(table, &nft_net->tables, list) {
+-		list_for_each_entry(chain, &table->chains, list)
+-			nf_tables_unregister_hook(net, table, chain);
+-	}
++	list_for_each_entry(table, &nft_net->tables, list)
++		__nft_release_hook(net, table);
+ }
+ 
+ static void __nft_release_table(struct net *net, struct nft_table *table)
+@@ -9234,7 +9537,7 @@ static void __nft_release_table(struct net *net, struct nft_table *table)
+ 	ctx.family = table->family;
+ 	ctx.table = table;
+ 	list_for_each_entry(chain, &table->chains, list) {
+-		if (nft_chain_is_bound(chain))
++		if (nft_chain_binding(chain))
+ 			continue;
+ 
+ 		ctx.chain = chain;
+@@ -9293,6 +9596,7 @@ static int __net_init nf_tables_init_net(struct net *net)
+ 	mutex_init(&nft_net->commit_mutex);
+ 	nft_net->base_seq = 1;
+ 	nft_net->validate_state = NFT_VALIDATE_SKIP;
++	nft_net->gc_seq = 0;
+ 
+ 	return 0;
+ }
+@@ -9309,21 +9613,34 @@ static void __net_exit nf_tables_pre_exit_net(struct net *net)
+ static void __net_exit nf_tables_exit_net(struct net *net)
+ {
+ 	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
++	unsigned int gc_seq;
+ 
+ 	mutex_lock(&nft_net->commit_mutex);
++
++	gc_seq = nft_gc_seq_begin(nft_net);
++
+ 	if (!list_empty(&nft_net->commit_list))
+ 		__nf_tables_abort(net, NFNL_ABORT_NONE);
+ 	__nft_release_tables(net);
++
++	nft_gc_seq_end(nft_net, gc_seq);
++
+ 	mutex_unlock(&nft_net->commit_mutex);
+ 	WARN_ON_ONCE(!list_empty(&nft_net->tables));
+ 	WARN_ON_ONCE(!list_empty(&nft_net->module_list));
+ 	WARN_ON_ONCE(!list_empty(&nft_net->notify_list));
+ }
+ 
++static void nf_tables_exit_batch(struct list_head *net_exit_list)
++{
++	flush_work(&trans_gc_work);
++}
++
+ static struct pernet_operations nf_tables_net_ops = {
+ 	.init		= nf_tables_init_net,
+ 	.pre_exit	= nf_tables_pre_exit_net,
+ 	.exit		= nf_tables_exit_net,
++	.exit_batch	= nf_tables_exit_batch,
+ 	.id		= &nf_tables_net_id,
+ 	.size		= sizeof(struct nftables_pernet),
+ };
+@@ -9388,6 +9705,7 @@ static void __exit nf_tables_module_exit(void)
+ 	nft_chain_filter_fini();
+ 	nft_chain_route_fini();
+ 	unregister_pernet_subsys(&nf_tables_net_ops);
++	cancel_work_sync(&trans_gc_work);
+ 	cancel_work_sync(&trans_destroy_work);
+ 	rcu_barrier();
+ 	rhltable_destroy(&nft_objname_ht);
+diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
+index 9dc18429ed875..b0d711d498c66 100644
+--- a/net/netfilter/nf_tables_core.c
++++ b/net/netfilter/nf_tables_core.c
+@@ -125,7 +125,7 @@ static bool nft_payload_fast_eval(const struct nft_expr *expr,
+ 	else {
+ 		if (!pkt->tprot_set)
+ 			return false;
+-		ptr = skb_network_header(skb) + pkt->xt.thoff;
++		ptr = skb_network_header(skb) + nft_thoff(pkt);
+ 	}
+ 
+ 	ptr += priv->offset;
+diff --git a/net/netfilter/nf_tables_trace.c b/net/netfilter/nf_tables_trace.c
+index 0cf3278007ba5..e4fe2f0780eb6 100644
+--- a/net/netfilter/nf_tables_trace.c
++++ b/net/netfilter/nf_tables_trace.c
+@@ -113,17 +113,17 @@ static int nf_trace_fill_pkt_info(struct sk_buff *nlskb,
+ 	int off = skb_network_offset(skb);
+ 	unsigned int len, nh_end;
+ 
+-	nh_end = pkt->tprot_set ? pkt->xt.thoff : skb->len;
++	nh_end = pkt->tprot_set ? nft_thoff(pkt) : skb->len;
+ 	len = min_t(unsigned int, nh_end - skb_network_offset(skb),
+ 		    NFT_TRACETYPE_NETWORK_HSIZE);
+ 	if (trace_fill_header(nlskb, NFTA_TRACE_NETWORK_HEADER, skb, off, len))
+ 		return -1;
+ 
+ 	if (pkt->tprot_set) {
+-		len = min_t(unsigned int, skb->len - pkt->xt.thoff,
++		len = min_t(unsigned int, skb->len - nft_thoff(pkt),
+ 			    NFT_TRACETYPE_TRANSPORT_HSIZE);
+ 		if (trace_fill_header(nlskb, NFTA_TRACE_TRANSPORT_HEADER, skb,
+-				      pkt->xt.thoff, len))
++				      nft_thoff(pkt), len))
+ 			return -1;
+ 	}
+ 
+diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c
+index 670dd146fb2b1..cb69a299f10c5 100644
+--- a/net/netfilter/nft_exthdr.c
++++ b/net/netfilter/nft_exthdr.c
+@@ -10,8 +10,10 @@
+ #include <linux/netlink.h>
+ #include <linux/netfilter.h>
+ #include <linux/netfilter/nf_tables.h>
++#include <linux/sctp.h>
+ #include <net/netfilter/nf_tables_core.h>
+ #include <net/netfilter/nf_tables.h>
++#include <net/sctp/sctp.h>
+ #include <net/tcp.h>
+ 
+ struct nft_exthdr {
+@@ -33,6 +35,14 @@ static unsigned int optlen(const u8 *opt, unsigned int offset)
+ 		return opt[offset + 1];
+ }
+ 
++static int nft_skb_copy_to_reg(const struct sk_buff *skb, int offset, u32 *dest, unsigned int len)
++{
++	if (len % NFT_REG32_SIZE)
++		dest[len / NFT_REG32_SIZE] = 0;
++
++	return skb_copy_bits(skb, offset, dest, len);
++}
++
+ static void nft_exthdr_ipv6_eval(const struct nft_expr *expr,
+ 				 struct nft_regs *regs,
+ 				 const struct nft_pktinfo *pkt)
+@@ -54,8 +64,7 @@ static void nft_exthdr_ipv6_eval(const struct nft_expr *expr,
+ 	}
+ 	offset += priv->offset;
+ 
+-	dest[priv->len / NFT_REG32_SIZE] = 0;
+-	if (skb_copy_bits(pkt->skb, offset, dest, priv->len) < 0)
++	if (nft_skb_copy_to_reg(pkt->skb, offset, dest, priv->len) < 0)
+ 		goto err;
+ 	return;
+ err:
+@@ -151,8 +160,7 @@ static void nft_exthdr_ipv4_eval(const struct nft_expr *expr,
+ 	}
+ 	offset += priv->offset;
+ 
+-	dest[priv->len / NFT_REG32_SIZE] = 0;
+-	if (skb_copy_bits(pkt->skb, offset, dest, priv->len) < 0)
++	if (nft_skb_copy_to_reg(pkt->skb, offset, dest, priv->len) < 0)
+ 		goto err;
+ 	return;
+ err:
+@@ -168,7 +176,7 @@ nft_tcp_header_pointer(const struct nft_pktinfo *pkt,
+ 	if (!pkt->tprot_set || pkt->tprot != IPPROTO_TCP)
+ 		return NULL;
+ 
+-	tcph = skb_header_pointer(pkt->skb, pkt->xt.thoff, sizeof(*tcph), buffer);
++	tcph = skb_header_pointer(pkt->skb, nft_thoff(pkt), sizeof(*tcph), buffer);
+ 	if (!tcph)
+ 		return NULL;
+ 
+@@ -176,7 +184,7 @@ nft_tcp_header_pointer(const struct nft_pktinfo *pkt,
+ 	if (*tcphdr_len < sizeof(*tcph) || *tcphdr_len > len)
+ 		return NULL;
+ 
+-	return skb_header_pointer(pkt->skb, pkt->xt.thoff, *tcphdr_len, buffer);
++	return skb_header_pointer(pkt->skb, nft_thoff(pkt), *tcphdr_len, buffer);
+ }
+ 
+ static void nft_exthdr_tcp_eval(const struct nft_expr *expr,
+@@ -208,7 +216,8 @@ static void nft_exthdr_tcp_eval(const struct nft_expr *expr,
+ 		if (priv->flags & NFT_EXTHDR_F_PRESENT) {
+ 			*dest = 1;
+ 		} else {
+-			dest[priv->len / NFT_REG32_SIZE] = 0;
++			if (priv->len % NFT_REG32_SIZE)
++				dest[priv->len / NFT_REG32_SIZE] = 0;
+ 			memcpy(dest, opt + offset, priv->len);
+ 		}
+ 
+@@ -234,9 +243,14 @@ static void nft_exthdr_tcp_set_eval(const struct nft_expr *expr,
+ 
+ 	tcph = nft_tcp_header_pointer(pkt, sizeof(buff), buff, &tcphdr_len);
+ 	if (!tcph)
+-		return;
++		goto err;
+ 
++	if (skb_ensure_writable(pkt->skb, nft_thoff(pkt) + tcphdr_len))
++		goto err;
++
++	tcph = (struct tcphdr *)(pkt->skb->data + nft_thoff(pkt));
+ 	opt = (u8 *)tcph;
++
+ 	for (i = sizeof(*tcph); i < tcphdr_len - 1; i += optl) {
+ 		union {
+ 			__be16 v16;
+@@ -249,16 +263,7 @@ static void nft_exthdr_tcp_set_eval(const struct nft_expr *expr,
+ 			continue;
+ 
+ 		if (i + optl > tcphdr_len || priv->len + priv->offset > optl)
+-			return;
+-
+-		if (skb_ensure_writable(pkt->skb,
+-					pkt->xt.thoff + i + priv->len))
+-			return;
+-
+-		tcph = nft_tcp_header_pointer(pkt, sizeof(buff), buff,
+-					      &tcphdr_len);
+-		if (!tcph)
+-			return;
++			goto err;
+ 
+ 		offset = i + priv->offset;
+ 
+@@ -301,6 +306,107 @@ static void nft_exthdr_tcp_set_eval(const struct nft_expr *expr,
+ 
+ 		return;
+ 	}
++	return;
++err:
++	regs->verdict.code = NFT_BREAK;
++}
++
++static void nft_exthdr_tcp_strip_eval(const struct nft_expr *expr,
++				      struct nft_regs *regs,
++				      const struct nft_pktinfo *pkt)
++{
++	u8 buff[sizeof(struct tcphdr) + MAX_TCP_OPTION_SPACE];
++	struct nft_exthdr *priv = nft_expr_priv(expr);
++	unsigned int i, tcphdr_len, optl;
++	struct tcphdr *tcph;
++	u8 *opt;
++
++	tcph = nft_tcp_header_pointer(pkt, sizeof(buff), buff, &tcphdr_len);
++	if (!tcph)
++		goto err;
++
++	if (skb_ensure_writable(pkt->skb, nft_thoff(pkt) + tcphdr_len))
++		goto drop;
++
++	tcph = (struct tcphdr *)(pkt->skb->data + nft_thoff(pkt));
++	opt = (u8 *)tcph;
++
++	for (i = sizeof(*tcph); i < tcphdr_len - 1; i += optl) {
++		unsigned int j;
++
++		optl = optlen(opt, i);
++		if (priv->type != opt[i])
++			continue;
++
++		if (i + optl > tcphdr_len)
++			goto drop;
++
++		for (j = 0; j < optl; ++j) {
++			u16 n = TCPOPT_NOP;
++			u16 o = opt[i+j];
++
++			if ((i + j) % 2 == 0) {
++				o <<= 8;
++				n <<= 8;
++			}
++			inet_proto_csum_replace2(&tcph->check, pkt->skb, htons(o),
++						 htons(n), false);
++		}
++		memset(opt + i, TCPOPT_NOP, optl);
++		return;
++	}
++
++	/* option not found, continue. This allows to do multiple
++	 * option removals per rule.
++	 */
++	return;
++err:
++	regs->verdict.code = NFT_BREAK;
++	return;
++drop:
++	/* can't remove, no choice but to drop */
++	regs->verdict.code = NF_DROP;
++}
++
++static void nft_exthdr_sctp_eval(const struct nft_expr *expr,
++				 struct nft_regs *regs,
++				 const struct nft_pktinfo *pkt)
++{
++	unsigned int offset = nft_thoff(pkt) + sizeof(struct sctphdr);
++	struct nft_exthdr *priv = nft_expr_priv(expr);
++	u32 *dest = &regs->data[priv->dreg];
++	const struct sctp_chunkhdr *sch;
++	struct sctp_chunkhdr _sch;
++
++	if (pkt->tprot != IPPROTO_SCTP)
++		goto err;
++
++	do {
++		sch = skb_header_pointer(pkt->skb, offset, sizeof(_sch), &_sch);
++		if (!sch || !sch->length)
++			break;
++
++		if (sch->type == priv->type) {
++			if (priv->flags & NFT_EXTHDR_F_PRESENT) {
++				nft_reg_store8(dest, true);
++				return;
++			}
++			if (priv->offset + priv->len > ntohs(sch->length) ||
++			    offset + ntohs(sch->length) > pkt->skb->len)
++				break;
++
++			if (nft_skb_copy_to_reg(pkt->skb, offset + priv->offset,
++						dest, priv->len) < 0)
++				break;
++			return;
++		}
++		offset += SCTP_PAD4(ntohs(sch->length));
++	} while (offset < pkt->skb->len);
++err:
++	if (priv->flags & NFT_EXTHDR_F_PRESENT)
++		nft_reg_store8(dest, false);
++	else
++		regs->verdict.code = NFT_BREAK;
+ }
+ 
+ static const struct nla_policy nft_exthdr_policy[NFTA_EXTHDR_MAX + 1] = {
+@@ -410,6 +516,28 @@ static int nft_exthdr_tcp_set_init(const struct nft_ctx *ctx,
+ 				       priv->len);
+ }
+ 
++static int nft_exthdr_tcp_strip_init(const struct nft_ctx *ctx,
++				     const struct nft_expr *expr,
++				     const struct nlattr * const tb[])
++{
++	struct nft_exthdr *priv = nft_expr_priv(expr);
++
++	if (tb[NFTA_EXTHDR_SREG] ||
++	    tb[NFTA_EXTHDR_DREG] ||
++	    tb[NFTA_EXTHDR_FLAGS] ||
++	    tb[NFTA_EXTHDR_OFFSET] ||
++	    tb[NFTA_EXTHDR_LEN])
++		return -EINVAL;
++
++	if (!tb[NFTA_EXTHDR_TYPE])
++		return -EINVAL;
++
++	priv->type = nla_get_u8(tb[NFTA_EXTHDR_TYPE]);
++	priv->op = NFT_EXTHDR_OP_TCPOPT;
++
++	return 0;
++}
++
+ static int nft_exthdr_ipv4_init(const struct nft_ctx *ctx,
+ 				const struct nft_expr *expr,
+ 				const struct nlattr * const tb[])
+@@ -470,6 +598,13 @@ static int nft_exthdr_dump_set(struct sk_buff *skb, const struct nft_expr *expr)
+ 	return nft_exthdr_dump_common(skb, priv);
+ }
+ 
++static int nft_exthdr_dump_strip(struct sk_buff *skb, const struct nft_expr *expr)
++{
++	const struct nft_exthdr *priv = nft_expr_priv(expr);
++
++	return nft_exthdr_dump_common(skb, priv);
++}
++
+ static const struct nft_expr_ops nft_exthdr_ipv6_ops = {
+ 	.type		= &nft_exthdr_type,
+ 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_exthdr)),
+@@ -502,6 +637,22 @@ static const struct nft_expr_ops nft_exthdr_tcp_set_ops = {
+ 	.dump		= nft_exthdr_dump_set,
+ };
+ 
++static const struct nft_expr_ops nft_exthdr_tcp_strip_ops = {
++	.type		= &nft_exthdr_type,
++	.size		= NFT_EXPR_SIZE(sizeof(struct nft_exthdr)),
++	.eval		= nft_exthdr_tcp_strip_eval,
++	.init		= nft_exthdr_tcp_strip_init,
++	.dump		= nft_exthdr_dump_strip,
++};
++
++static const struct nft_expr_ops nft_exthdr_sctp_ops = {
++	.type		= &nft_exthdr_type,
++	.size		= NFT_EXPR_SIZE(sizeof(struct nft_exthdr)),
++	.eval		= nft_exthdr_sctp_eval,
++	.init		= nft_exthdr_init,
++	.dump		= nft_exthdr_dump,
++};
++
+ static const struct nft_expr_ops *
+ nft_exthdr_select_ops(const struct nft_ctx *ctx,
+ 		      const struct nlattr * const tb[])
+@@ -521,7 +672,7 @@ nft_exthdr_select_ops(const struct nft_ctx *ctx,
+ 			return &nft_exthdr_tcp_set_ops;
+ 		if (tb[NFTA_EXTHDR_DREG])
+ 			return &nft_exthdr_tcp_ops;
+-		break;
++		return &nft_exthdr_tcp_strip_ops;
+ 	case NFT_EXTHDR_OP_IPV6:
+ 		if (tb[NFTA_EXTHDR_DREG])
+ 			return &nft_exthdr_ipv6_ops;
+@@ -532,6 +683,10 @@ nft_exthdr_select_ops(const struct nft_ctx *ctx,
+ 				return &nft_exthdr_ipv4_ops;
+ 		}
+ 		break;
++	case NFT_EXTHDR_OP_SCTP:
++		if (tb[NFTA_EXTHDR_DREG])
++			return &nft_exthdr_sctp_ops;
++		break;
+ 	}
+ 
+ 	return ERR_PTR(-EOPNOTSUPP);
+diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
+index d868eade60176..a44340dd3ce64 100644
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -90,7 +90,7 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
+ 
+ 	switch (ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum) {
+ 	case IPPROTO_TCP:
+-		tcph = skb_header_pointer(pkt->skb, pkt->xt.thoff,
++		tcph = skb_header_pointer(pkt->skb, nft_thoff(pkt),
+ 					  sizeof(_tcph), &_tcph);
+ 		if (unlikely(!tcph || tcph->fin || tcph->rst))
+ 			goto out;
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index 74c220eeec1a8..b2b63c3653d49 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -110,7 +110,7 @@ void nft_payload_eval(const struct nft_expr *expr,
+ 	case NFT_PAYLOAD_TRANSPORT_HEADER:
+ 		if (!pkt->tprot_set)
+ 			goto err;
+-		offset = pkt->xt.thoff;
++		offset = nft_thoff(pkt);
+ 		break;
+ 	default:
+ 		BUG();
+@@ -510,7 +510,7 @@ static int nft_payload_l4csum_offset(const struct nft_pktinfo *pkt,
+ 		*l4csum_offset = offsetof(struct tcphdr, check);
+ 		break;
+ 	case IPPROTO_UDP:
+-		if (!nft_payload_udp_checksum(skb, pkt->xt.thoff))
++		if (!nft_payload_udp_checksum(skb, nft_thoff(pkt)))
+ 			return -1;
+ 		fallthrough;
+ 	case IPPROTO_UDPLITE:
+@@ -523,7 +523,7 @@ static int nft_payload_l4csum_offset(const struct nft_pktinfo *pkt,
+ 		return -1;
+ 	}
+ 
+-	*l4csum_offset += pkt->xt.thoff;
++	*l4csum_offset += nft_thoff(pkt);
+ 	return 0;
+ }
+ 
+@@ -615,7 +615,7 @@ static void nft_payload_set_eval(const struct nft_expr *expr,
+ 	case NFT_PAYLOAD_TRANSPORT_HEADER:
+ 		if (!pkt->tprot_set)
+ 			goto err;
+-		offset = pkt->xt.thoff;
++		offset = nft_thoff(pkt);
+ 		break;
+ 	default:
+ 		BUG();
+@@ -646,7 +646,7 @@ static void nft_payload_set_eval(const struct nft_expr *expr,
+ 	if (priv->csum_type == NFT_PAYLOAD_CSUM_SCTP &&
+ 	    pkt->tprot == IPPROTO_SCTP &&
+ 	    skb->ip_summed != CHECKSUM_PARTIAL) {
+-		if (nft_payload_csum_sctp(skb, pkt->xt.thoff))
++		if (nft_payload_csum_sctp(skb, nft_thoff(pkt)))
+ 			goto err;
+ 	}
+ 
+diff --git a/net/netfilter/nft_reject_inet.c b/net/netfilter/nft_reject_inet.c
+index cf8f2646e93c6..c00b94a166824 100644
+--- a/net/netfilter/nft_reject_inet.c
++++ b/net/netfilter/nft_reject_inet.c
+@@ -28,7 +28,8 @@ static void nft_reject_inet_eval(const struct nft_expr *expr,
+ 					nft_hook(pkt));
+ 			break;
+ 		case NFT_REJECT_TCP_RST:
+-			nf_send_reset(nft_net(pkt), pkt->skb, nft_hook(pkt));
++			nf_send_reset(nft_net(pkt), nft_sk(pkt),
++				      pkt->skb, nft_hook(pkt));
+ 			break;
+ 		case NFT_REJECT_ICMPX_UNREACH:
+ 			nf_send_unreach(pkt->skb,
+@@ -44,7 +45,8 @@ static void nft_reject_inet_eval(const struct nft_expr *expr,
+ 					 priv->icmp_code, nft_hook(pkt));
+ 			break;
+ 		case NFT_REJECT_TCP_RST:
+-			nf_send_reset6(nft_net(pkt), pkt->skb, nft_hook(pkt));
++			nf_send_reset6(nft_net(pkt), nft_sk(pkt),
++				       pkt->skb, nft_hook(pkt));
+ 			break;
+ 		case NFT_REJECT_ICMPX_UNREACH:
+ 			nf_send_unreach6(nft_net(pkt), pkt->skb,
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index 51d3e6f0934a9..f0a9ad1c4ea44 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -17,6 +17,9 @@
+ #include <linux/netfilter.h>
+ #include <linux/netfilter/nf_tables.h>
+ #include <net/netfilter/nf_tables_core.h>
++#include <net/netns/generic.h>
++
++extern unsigned int nf_tables_net_id;
+ 
+ /* We target a hash table size of 4, element hint is 75% of final size */
+ #define NFT_RHASH_ELEMENT_HINT 3
+@@ -59,6 +62,8 @@ static inline int nft_rhash_cmp(struct rhashtable_compare_arg *arg,
+ 
+ 	if (memcmp(nft_set_ext_key(&he->ext), x->key, x->set->klen))
+ 		return 1;
++	if (nft_set_elem_is_dead(&he->ext))
++		return 1;
+ 	if (nft_set_elem_expired(&he->ext))
+ 		return 1;
+ 	if (!nft_set_elem_active(&he->ext, x->genmask))
+@@ -187,7 +192,6 @@ static void nft_rhash_activate(const struct net *net, const struct nft_set *set,
+ 	struct nft_rhash_elem *he = elem->priv;
+ 
+ 	nft_set_elem_change_active(net, set, &he->ext);
+-	nft_set_elem_clear_busy(&he->ext);
+ }
+ 
+ static bool nft_rhash_flush(const struct net *net,
+@@ -195,12 +199,9 @@ static bool nft_rhash_flush(const struct net *net,
+ {
+ 	struct nft_rhash_elem *he = priv;
+ 
+-	if (!nft_set_elem_mark_busy(&he->ext) ||
+-	    !nft_is_active(net, &he->ext)) {
+-		nft_set_elem_change_active(net, set, &he->ext);
+-		return true;
+-	}
+-	return false;
++	nft_set_elem_change_active(net, set, &he->ext);
++
++	return true;
+ }
+ 
+ static void *nft_rhash_deactivate(const struct net *net,
+@@ -217,9 +218,8 @@ static void *nft_rhash_deactivate(const struct net *net,
+ 
+ 	rcu_read_lock();
+ 	he = rhashtable_lookup(&priv->ht, &arg, nft_rhash_params);
+-	if (he != NULL &&
+-	    !nft_rhash_flush(net, set, he))
+-		he = NULL;
++	if (he)
++		nft_set_elem_change_active(net, set, &he->ext);
+ 
+ 	rcu_read_unlock();
+ 
+@@ -251,7 +251,9 @@ static bool nft_rhash_delete(const struct nft_set *set,
+ 	if (he == NULL)
+ 		return false;
+ 
+-	return rhashtable_remove_fast(&priv->ht, &he->node, nft_rhash_params) == 0;
++	nft_set_elem_dead(&he->ext);
++
++	return true;
+ }
+ 
+ static void nft_rhash_walk(const struct nft_ctx *ctx, struct nft_set *set,
+@@ -277,8 +279,6 @@ static void nft_rhash_walk(const struct nft_ctx *ctx, struct nft_set *set,
+ 
+ 		if (iter->count < iter->skip)
+ 			goto cont;
+-		if (nft_set_elem_expired(&he->ext))
+-			goto cont;
+ 		if (!nft_set_elem_active(&he->ext, iter->genmask))
+ 			goto cont;
+ 
+@@ -297,49 +297,75 @@ cont:
+ 
+ static void nft_rhash_gc(struct work_struct *work)
+ {
++	struct nftables_pernet *nft_net;
+ 	struct nft_set *set;
+ 	struct nft_rhash_elem *he;
+ 	struct nft_rhash *priv;
+-	struct nft_set_gc_batch *gcb = NULL;
+ 	struct rhashtable_iter hti;
++	struct nft_trans_gc *gc;
++	struct net *net;
++	u32 gc_seq;
+ 
+ 	priv = container_of(work, struct nft_rhash, gc_work.work);
+ 	set  = nft_set_container_of(priv);
++	net  = read_pnet(&set->net);
++	nft_net = net_generic(net, nf_tables_net_id);
++	gc_seq = READ_ONCE(nft_net->gc_seq);
++
++	if (nft_set_gc_is_pending(set))
++		goto done;
++
++	gc = nft_trans_gc_alloc(set, gc_seq, GFP_KERNEL);
++	if (!gc)
++		goto done;
+ 
+ 	rhashtable_walk_enter(&priv->ht, &hti);
+ 	rhashtable_walk_start(&hti);
+ 
+ 	while ((he = rhashtable_walk_next(&hti))) {
+ 		if (IS_ERR(he)) {
+-			if (PTR_ERR(he) != -EAGAIN)
+-				break;
+-			continue;
++			nft_trans_gc_destroy(gc);
++			gc = NULL;
++			goto try_later;
+ 		}
+ 
++		/* Ruleset has been updated, try later. */
++		if (READ_ONCE(nft_net->gc_seq) != gc_seq) {
++			nft_trans_gc_destroy(gc);
++			gc = NULL;
++			goto try_later;
++		}
++
++		if (nft_set_elem_is_dead(&he->ext))
++			goto dead_elem;
++
+ 		if (nft_set_ext_exists(&he->ext, NFT_SET_EXT_EXPR)) {
+ 			struct nft_expr *expr = nft_set_ext_expr(&he->ext);
+ 
+ 			if (expr->ops->gc &&
+ 			    expr->ops->gc(read_pnet(&set->net), expr))
+-				goto gc;
++				goto needs_gc_run;
+ 		}
++
+ 		if (!nft_set_elem_expired(&he->ext))
+ 			continue;
+-gc:
+-		if (nft_set_elem_mark_busy(&he->ext))
+-			continue;
+-
+-		gcb = nft_set_gc_batch_check(set, gcb, GFP_ATOMIC);
+-		if (gcb == NULL)
+-			break;
+-		rhashtable_remove_fast(&priv->ht, &he->node, nft_rhash_params);
+-		atomic_dec(&set->nelems);
+-		nft_set_gc_batch_add(gcb, he);
++needs_gc_run:
++		nft_set_elem_dead(&he->ext);
++dead_elem:
++		gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC);
++		if (!gc)
++			goto try_later;
++
++		nft_trans_gc_elem_add(gc, he);
+ 	}
++
++try_later:
+ 	rhashtable_walk_stop(&hti);
+ 	rhashtable_walk_exit(&hti);
+ 
+-	nft_set_gc_batch_complete(gcb);
++	if (gc)
++		nft_trans_gc_queue_async_done(gc);
++done:
+ 	queue_delayed_work(system_power_efficient_wq, &priv->gc_work,
+ 			   nft_set_gc_interval(set));
+ }
+@@ -374,7 +400,7 @@ static int nft_rhash_init(const struct nft_set *set,
+ 		return err;
+ 
+ 	INIT_DEFERRABLE_WORK(&priv->gc_work, nft_rhash_gc);
+-	if (set->flags & NFT_SET_TIMEOUT)
++	if (set->flags & (NFT_SET_TIMEOUT | NFT_SET_EVAL))
+ 		nft_rhash_gc_init(set);
+ 
+ 	return 0;
+@@ -402,7 +428,6 @@ static void nft_rhash_destroy(const struct nft_ctx *ctx,
+ 	};
+ 
+ 	cancel_delayed_work_sync(&priv->gc_work);
+-	rcu_barrier();
+ 	rhashtable_free_and_destroy(&priv->ht, nft_rhash_elem_destroy,
+ 				    (void *)&rhash_ctx);
+ }
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 50f840e312b03..fbfcc3275cadf 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -566,8 +566,9 @@ next_match:
+ 			goto out;
+ 
+ 		if (last) {
+-			if (nft_set_elem_expired(&f->mt[b].e->ext) ||
+-			    (genmask &&
++			if (nft_set_elem_expired(&f->mt[b].e->ext))
++				goto next_match;
++			if ((genmask &&
+ 			     !nft_set_elem_active(&f->mt[b].e->ext, genmask)))
+ 				goto next_match;
+ 
+@@ -1536,15 +1537,32 @@ static void pipapo_drop(struct nft_pipapo_match *m,
+ 	}
+ }
+ 
++static void nft_pipapo_gc_deactivate(struct net *net, struct nft_set *set,
++				     struct nft_pipapo_elem *e)
++{
++	struct nft_set_elem elem = {
++		.priv   = e,
++	};
++
++	nft_setelem_data_deactivate(net, set, &elem);
++}
++
+ /**
+  * pipapo_gc() - Drop expired entries from set, destroy start and end elements
+- * @set:	nftables API set representation
++ * @_set:	nftables API set representation
+  * @m:		Matching data
+  */
+-static void pipapo_gc(const struct nft_set *set, struct nft_pipapo_match *m)
++static void pipapo_gc(const struct nft_set *_set, struct nft_pipapo_match *m)
+ {
++	struct nft_set *set = (struct nft_set *) _set;
+ 	struct nft_pipapo *priv = nft_set_priv(set);
++	struct net *net = read_pnet(&set->net);
+ 	int rules_f0, first_rule = 0;
++	struct nft_trans_gc *gc;
++
++	gc = nft_trans_gc_alloc(set, 0, GFP_KERNEL);
++	if (!gc)
++		return;
+ 
+ 	while ((rules_f0 = pipapo_rules_same_key(m->f, first_rule))) {
+ 		union nft_pipapo_map_bucket rulemap[NFT_PIPAPO_MAX_FIELDS];
+@@ -1569,13 +1587,19 @@ static void pipapo_gc(const struct nft_set *set, struct nft_pipapo_match *m)
+ 		f--;
+ 		i--;
+ 		e = f->mt[rulemap[i].to].e;
+-		if (nft_set_elem_expired(&e->ext) &&
+-		    !nft_set_elem_mark_busy(&e->ext)) {
++		/* synchronous gc never fails, there is no need to set on
++		 * NFT_SET_ELEM_DEAD_BIT.
++		 */
++		if (nft_set_elem_expired(&e->ext)) {
+ 			priv->dirty = true;
+-			pipapo_drop(m, rulemap);
+ 
+-			rcu_barrier();
+-			nft_set_elem_destroy(set, e, true);
++			gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
++			if (!gc)
++				return;
++
++			nft_pipapo_gc_deactivate(net, set, e);
++			pipapo_drop(m, rulemap);
++			nft_trans_gc_elem_add(gc, e);
+ 
+ 			/* And check again current first rule, which is now the
+ 			 * first we haven't checked.
+@@ -1585,7 +1609,10 @@ static void pipapo_gc(const struct nft_set *set, struct nft_pipapo_match *m)
+ 		}
+ 	}
+ 
+-	priv->last_gc = jiffies;
++	if (gc) {
++		nft_trans_gc_queue_sync_done(gc);
++		priv->last_gc = jiffies;
++	}
+ }
+ 
+ /**
+@@ -1603,17 +1630,10 @@ static void pipapo_free_fields(struct nft_pipapo_match *m)
+ 	}
+ }
+ 
+-/**
+- * pipapo_reclaim_match - RCU callback to free fields from old matching data
+- * @rcu:	RCU head
+- */
+-static void pipapo_reclaim_match(struct rcu_head *rcu)
++static void pipapo_free_match(struct nft_pipapo_match *m)
+ {
+-	struct nft_pipapo_match *m;
+ 	int i;
+ 
+-	m = container_of(rcu, struct nft_pipapo_match, rcu);
+-
+ 	for_each_possible_cpu(i)
+ 		kfree(*per_cpu_ptr(m->scratch, i));
+ 
+@@ -1628,7 +1648,19 @@ static void pipapo_reclaim_match(struct rcu_head *rcu)
+ }
+ 
+ /**
+- * pipapo_commit() - Replace lookup data with current working copy
++ * pipapo_reclaim_match - RCU callback to free fields from old matching data
++ * @rcu:	RCU head
++ */
++static void pipapo_reclaim_match(struct rcu_head *rcu)
++{
++	struct nft_pipapo_match *m;
++
++	m = container_of(rcu, struct nft_pipapo_match, rcu);
++	pipapo_free_match(m);
++}
++
++/**
++ * nft_pipapo_commit() - Replace lookup data with current working copy
+  * @set:	nftables API set representation
+  *
+  * While at it, check if we should perform garbage collection on the working
+@@ -1638,7 +1670,7 @@ static void pipapo_reclaim_match(struct rcu_head *rcu)
+  * We also need to create a new working copy for subsequent insertions and
+  * deletions.
+  */
+-static void pipapo_commit(const struct nft_set *set)
++static void nft_pipapo_commit(const struct nft_set *set)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+ 	struct nft_pipapo_match *new_clone, *old;
+@@ -1663,6 +1695,26 @@ static void pipapo_commit(const struct nft_set *set)
+ 	priv->clone = new_clone;
+ }
+ 
++static void nft_pipapo_abort(const struct nft_set *set)
++{
++	struct nft_pipapo *priv = nft_set_priv(set);
++	struct nft_pipapo_match *new_clone, *m;
++
++	if (!priv->dirty)
++		return;
++
++	m = rcu_dereference(priv->match);
++
++	new_clone = pipapo_clone(m);
++	if (IS_ERR(new_clone))
++		return;
++
++	priv->dirty = false;
++
++	pipapo_free_match(priv->clone);
++	priv->clone = new_clone;
++}
++
+ /**
+  * nft_pipapo_activate() - Mark element reference as active given key, commit
+  * @net:	Network namespace
+@@ -1670,8 +1722,7 @@ static void pipapo_commit(const struct nft_set *set)
+  * @elem:	nftables API element representation containing key data
+  *
+  * On insertion, elements are added to a copy of the matching data currently
+- * in use for lookups, and not directly inserted into current lookup data, so
+- * we'll take care of that by calling pipapo_commit() here. Both
++ * in use for lookups, and not directly inserted into current lookup data. Both
+  * nft_pipapo_insert() and nft_pipapo_activate() are called once for each
+  * element, hence we can't purpose either one as a real commit operation.
+  */
+@@ -1679,16 +1730,9 @@ static void nft_pipapo_activate(const struct net *net,
+ 				const struct nft_set *set,
+ 				const struct nft_set_elem *elem)
+ {
+-	struct nft_pipapo_elem *e;
+-
+-	e = pipapo_get(net, set, (const u8 *)elem->key.val.data, 0);
+-	if (IS_ERR(e))
+-		return;
++	struct nft_pipapo_elem *e = elem->priv;
+ 
+ 	nft_set_elem_change_active(net, set, &e->ext);
+-	nft_set_elem_clear_busy(&e->ext);
+-
+-	pipapo_commit(set);
+ }
+ 
+ /**
+@@ -1900,10 +1944,6 @@ static void nft_pipapo_remove(const struct net *net, const struct nft_set *set,
+ 
+ 	data = (const u8 *)nft_set_ext_key(&e->ext);
+ 
+-	e = pipapo_get(net, set, data, 0);
+-	if (IS_ERR(e))
+-		return;
+-
+ 	while ((rules_f0 = pipapo_rules_same_key(m->f, first_rule))) {
+ 		union nft_pipapo_map_bucket rulemap[NFT_PIPAPO_MAX_FIELDS];
+ 		const u8 *match_start, *match_end;
+@@ -1938,7 +1978,6 @@ static void nft_pipapo_remove(const struct net *net, const struct nft_set *set,
+ 		if (i == m->field_count) {
+ 			priv->dirty = true;
+ 			pipapo_drop(m, rulemap);
+-			pipapo_commit(set);
+ 			return;
+ 		}
+ 
+@@ -1988,8 +2027,6 @@ static void nft_pipapo_walk(const struct nft_ctx *ctx, struct nft_set *set,
+ 			goto cont;
+ 
+ 		e = f->mt[r].e;
+-		if (nft_set_elem_expired(&e->ext))
+-			goto cont;
+ 
+ 		elem.priv = e;
+ 
+@@ -2245,6 +2282,8 @@ const struct nft_set_type nft_set_pipapo_type = {
+ 		.init		= nft_pipapo_init,
+ 		.destroy	= nft_pipapo_destroy,
+ 		.gc_init	= nft_pipapo_gc_init,
++		.commit		= nft_pipapo_commit,
++		.abort		= nft_pipapo_abort,
+ 		.elemsize	= offsetof(struct nft_pipapo_elem, ext),
+ 	},
+ };
+@@ -2267,6 +2306,8 @@ const struct nft_set_type nft_set_pipapo_avx2_type = {
+ 		.init		= nft_pipapo_init,
+ 		.destroy	= nft_pipapo_destroy,
+ 		.gc_init	= nft_pipapo_gc_init,
++		.commit		= nft_pipapo_commit,
++		.abort		= nft_pipapo_abort,
+ 		.elemsize	= offsetof(struct nft_pipapo_elem, ext),
+ 	},
+ };
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index eae760adae4d5..17abf17b673e2 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -14,6 +14,9 @@
+ #include <linux/netfilter.h>
+ #include <linux/netfilter/nf_tables.h>
+ #include <net/netfilter/nf_tables_core.h>
++#include <net/netns/generic.h>
++
++extern unsigned int nf_tables_net_id;
+ 
+ struct nft_rbtree {
+ 	struct rb_root		root;
+@@ -46,6 +49,12 @@ static int nft_rbtree_cmp(const struct nft_set *set,
+ 		      set->klen);
+ }
+ 
++static bool nft_rbtree_elem_expired(const struct nft_rbtree_elem *rbe)
++{
++	return nft_set_elem_expired(&rbe->ext) ||
++	       nft_set_elem_is_dead(&rbe->ext);
++}
++
+ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
+ 				const u32 *key, const struct nft_set_ext **ext,
+ 				unsigned int seq)
+@@ -80,7 +89,7 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ 				continue;
+ 			}
+ 
+-			if (nft_set_elem_expired(&rbe->ext))
++			if (nft_rbtree_elem_expired(rbe))
+ 				return false;
+ 
+ 			if (nft_rbtree_interval_end(rbe)) {
+@@ -98,7 +107,7 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ 
+ 	if (set->flags & NFT_SET_INTERVAL && interval != NULL &&
+ 	    nft_set_elem_active(&interval->ext, genmask) &&
+-	    !nft_set_elem_expired(&interval->ext) &&
++	    !nft_rbtree_elem_expired(interval) &&
+ 	    nft_rbtree_interval_start(interval)) {
+ 		*ext = &interval->ext;
+ 		return true;
+@@ -214,19 +223,31 @@ static void *nft_rbtree_get(const struct net *net, const struct nft_set *set,
+ 	return rbe;
+ }
+ 
+-static int nft_rbtree_gc_elem(const struct nft_set *__set,
+-			      struct nft_rbtree *priv,
+-			      struct nft_rbtree_elem *rbe,
+-			      u8 genmask)
++static void nft_rbtree_gc_remove(struct net *net, struct nft_set *set,
++				 struct nft_rbtree *priv,
++				 struct nft_rbtree_elem *rbe)
++{
++	struct nft_set_elem elem = {
++		.priv   = rbe,
++	};
++
++	nft_setelem_data_deactivate(net, set, &elem);
++	rb_erase(&rbe->node, &priv->root);
++}
++
++static const struct nft_rbtree_elem *
++nft_rbtree_gc_elem(const struct nft_set *__set, struct nft_rbtree *priv,
++		   struct nft_rbtree_elem *rbe, u8 genmask)
+ {
+ 	struct nft_set *set = (struct nft_set *)__set;
+ 	struct rb_node *prev = rb_prev(&rbe->node);
++	struct net *net = read_pnet(&set->net);
+ 	struct nft_rbtree_elem *rbe_prev;
+-	struct nft_set_gc_batch *gcb;
++	struct nft_trans_gc *gc;
+ 
+-	gcb = nft_set_gc_batch_check(set, NULL, GFP_ATOMIC);
+-	if (!gcb)
+-		return -ENOMEM;
++	gc = nft_trans_gc_alloc(set, 0, GFP_ATOMIC);
++	if (!gc)
++		return ERR_PTR(-ENOMEM);
+ 
+ 	/* search for end interval coming before this element.
+ 	 * end intervals don't carry a timeout extension, they
+@@ -241,21 +262,33 @@ static int nft_rbtree_gc_elem(const struct nft_set *__set,
+ 		prev = rb_prev(prev);
+ 	}
+ 
++	rbe_prev = NULL;
+ 	if (prev) {
+ 		rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
++		nft_rbtree_gc_remove(net, set, priv, rbe_prev);
+ 
+-		rb_erase(&rbe_prev->node, &priv->root);
+-		atomic_dec(&set->nelems);
+-		nft_set_gc_batch_add(gcb, rbe_prev);
++		/* There is always room in this trans gc for this element,
++		 * memory allocation never actually happens, hence, the warning
++		 * splat in such case. No need to set NFT_SET_ELEM_DEAD_BIT,
++		 * this is synchronous gc which never fails.
++		 */
++		gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
++		if (WARN_ON_ONCE(!gc))
++			return ERR_PTR(-ENOMEM);
++
++		nft_trans_gc_elem_add(gc, rbe_prev);
+ 	}
+ 
+-	rb_erase(&rbe->node, &priv->root);
+-	atomic_dec(&set->nelems);
++	nft_rbtree_gc_remove(net, set, priv, rbe);
++	gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
++	if (WARN_ON_ONCE(!gc))
++		return ERR_PTR(-ENOMEM);
+ 
+-	nft_set_gc_batch_add(gcb, rbe);
+-	nft_set_gc_batch_complete(gcb);
++	nft_trans_gc_elem_add(gc, rbe);
+ 
+-	return 0;
++	nft_trans_gc_queue_sync_done(gc);
++
++	return rbe_prev;
+ }
+ 
+ static bool nft_rbtree_update_first(const struct nft_set *set,
+@@ -281,8 +314,9 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 	struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL;
+ 	struct rb_node *node, *next, *parent, **p, *first = NULL;
+ 	struct nft_rbtree *priv = nft_set_priv(set);
++	u8 cur_genmask = nft_genmask_cur(net);
+ 	u8 genmask = nft_genmask_next(net);
+-	int d, err;
++	int d;
+ 
+ 	/* Descend the tree to search for an existing element greater than the
+ 	 * key value to insert that is greater than the new element. This is the
+@@ -326,11 +360,19 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 		if (!nft_set_elem_active(&rbe->ext, genmask))
+ 			continue;
+ 
+-		/* perform garbage collection to avoid bogus overlap reports. */
+-		if (nft_set_elem_expired(&rbe->ext)) {
+-			err = nft_rbtree_gc_elem(set, priv, rbe, genmask);
+-			if (err < 0)
+-				return err;
++		/* perform garbage collection to avoid bogus overlap reports
++		 * but skip new elements in this transaction.
++		 */
++		if (nft_set_elem_expired(&rbe->ext) &&
++		    nft_set_elem_active(&rbe->ext, cur_genmask)) {
++			const struct nft_rbtree_elem *removed_end;
++
++			removed_end = nft_rbtree_gc_elem(set, priv, rbe, genmask);
++			if (IS_ERR(removed_end))
++				return PTR_ERR(removed_end);
++
++			if (removed_end == rbe_le || removed_end == rbe_ge)
++				return -EAGAIN;
+ 
+ 			continue;
+ 		}
+@@ -451,11 +493,18 @@ static int nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 	struct nft_rbtree_elem *rbe = elem->priv;
+ 	int err;
+ 
+-	write_lock_bh(&priv->lock);
+-	write_seqcount_begin(&priv->count);
+-	err = __nft_rbtree_insert(net, set, rbe, ext);
+-	write_seqcount_end(&priv->count);
+-	write_unlock_bh(&priv->lock);
++	do {
++		if (fatal_signal_pending(current))
++			return -EINTR;
++
++		cond_resched();
++
++		write_lock_bh(&priv->lock);
++		write_seqcount_begin(&priv->count);
++		err = __nft_rbtree_insert(net, set, rbe, ext);
++		write_seqcount_end(&priv->count);
++		write_unlock_bh(&priv->lock);
++	} while (err == -EAGAIN);
+ 
+ 	return err;
+ }
+@@ -481,7 +530,6 @@ static void nft_rbtree_activate(const struct net *net,
+ 	struct nft_rbtree_elem *rbe = elem->priv;
+ 
+ 	nft_set_elem_change_active(net, set, &rbe->ext);
+-	nft_set_elem_clear_busy(&rbe->ext);
+ }
+ 
+ static bool nft_rbtree_flush(const struct net *net,
+@@ -489,12 +537,9 @@ static bool nft_rbtree_flush(const struct net *net,
+ {
+ 	struct nft_rbtree_elem *rbe = priv;
+ 
+-	if (!nft_set_elem_mark_busy(&rbe->ext) ||
+-	    !nft_is_active(net, &rbe->ext)) {
+-		nft_set_elem_change_active(net, set, &rbe->ext);
+-		return true;
+-	}
+-	return false;
++	nft_set_elem_change_active(net, set, &rbe->ext);
++
++	return true;
+ }
+ 
+ static void *nft_rbtree_deactivate(const struct net *net,
+@@ -551,8 +596,6 @@ static void nft_rbtree_walk(const struct nft_ctx *ctx,
+ 
+ 		if (iter->count < iter->skip)
+ 			goto cont;
+-		if (nft_set_elem_expired(&rbe->ext))
+-			goto cont;
+ 		if (!nft_set_elem_active(&rbe->ext, iter->genmask))
+ 			goto cont;
+ 
+@@ -571,26 +614,42 @@ cont:
+ 
+ static void nft_rbtree_gc(struct work_struct *work)
+ {
+-	struct nft_rbtree_elem *rbe, *rbe_end = NULL, *rbe_prev = NULL;
+-	struct nft_set_gc_batch *gcb = NULL;
++	struct nft_rbtree_elem *rbe, *rbe_end = NULL;
++	struct nftables_pernet *nft_net;
+ 	struct nft_rbtree *priv;
++	struct nft_trans_gc *gc;
+ 	struct rb_node *node;
+ 	struct nft_set *set;
++	unsigned int gc_seq;
+ 	struct net *net;
+-	u8 genmask;
+ 
+ 	priv = container_of(work, struct nft_rbtree, gc_work.work);
+ 	set  = nft_set_container_of(priv);
+ 	net  = read_pnet(&set->net);
+-	genmask = nft_genmask_cur(net);
++	nft_net = net_generic(net, nf_tables_net_id);
++	gc_seq	= READ_ONCE(nft_net->gc_seq);
+ 
+-	write_lock_bh(&priv->lock);
+-	write_seqcount_begin(&priv->count);
++	if (nft_set_gc_is_pending(set))
++		goto done;
++
++	gc = nft_trans_gc_alloc(set, gc_seq, GFP_KERNEL);
++	if (!gc)
++		goto done;
++
++	read_lock_bh(&priv->lock);
+ 	for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) {
++
++		/* Ruleset has been updated, try later. */
++		if (READ_ONCE(nft_net->gc_seq) != gc_seq) {
++			nft_trans_gc_destroy(gc);
++			gc = NULL;
++			goto try_later;
++		}
++
+ 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
+ 
+-		if (!nft_set_elem_active(&rbe->ext, genmask))
+-			continue;
++		if (nft_set_elem_is_dead(&rbe->ext))
++			goto dead_elem;
+ 
+ 		/* elements are reversed in the rbtree for historical reasons,
+ 		 * from highest to lowest value, that is why end element is
+@@ -603,40 +662,32 @@ static void nft_rbtree_gc(struct work_struct *work)
+ 		if (!nft_set_elem_expired(&rbe->ext))
+ 			continue;
+ 
+-		if (nft_set_elem_mark_busy(&rbe->ext)) {
+-			rbe_end = NULL;
++		nft_set_elem_dead(&rbe->ext);
++
++		if (!rbe_end)
+ 			continue;
+-		}
+ 
+-		if (rbe_prev) {
+-			rb_erase(&rbe_prev->node, &priv->root);
+-			rbe_prev = NULL;
+-		}
+-		gcb = nft_set_gc_batch_check(set, gcb, GFP_ATOMIC);
+-		if (!gcb)
+-			break;
++		nft_set_elem_dead(&rbe_end->ext);
+ 
+-		atomic_dec(&set->nelems);
+-		nft_set_gc_batch_add(gcb, rbe);
+-		rbe_prev = rbe;
++		gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC);
++		if (!gc)
++			goto try_later;
+ 
+-		if (rbe_end) {
+-			atomic_dec(&set->nelems);
+-			nft_set_gc_batch_add(gcb, rbe_end);
+-			rb_erase(&rbe_end->node, &priv->root);
+-			rbe_end = NULL;
+-		}
+-		node = rb_next(node);
+-		if (!node)
+-			break;
+-	}
+-	if (rbe_prev)
+-		rb_erase(&rbe_prev->node, &priv->root);
+-	write_seqcount_end(&priv->count);
+-	write_unlock_bh(&priv->lock);
++		nft_trans_gc_elem_add(gc, rbe_end);
++		rbe_end = NULL;
++dead_elem:
++		gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC);
++		if (!gc)
++			goto try_later;
+ 
+-	nft_set_gc_batch_complete(gcb);
++		nft_trans_gc_elem_add(gc, rbe);
++	}
++try_later:
++	read_unlock_bh(&priv->lock);
+ 
++	if (gc)
++		nft_trans_gc_queue_async_done(gc);
++done:
+ 	queue_delayed_work(system_power_efficient_wq, &priv->gc_work,
+ 			   nft_set_gc_interval(set));
+ }
+diff --git a/net/netfilter/nft_synproxy.c b/net/netfilter/nft_synproxy.c
+index 59c4dfaf2ea1f..1133e06f3c40e 100644
+--- a/net/netfilter/nft_synproxy.c
++++ b/net/netfilter/nft_synproxy.c
+@@ -109,7 +109,7 @@ static void nft_synproxy_do_eval(const struct nft_synproxy *priv,
+ {
+ 	struct synproxy_options opts = {};
+ 	struct sk_buff *skb = pkt->skb;
+-	int thoff = pkt->xt.thoff;
++	int thoff = nft_thoff(pkt);
+ 	const struct tcphdr *tcp;
+ 	struct tcphdr _tcph;
+ 
+@@ -123,7 +123,7 @@ static void nft_synproxy_do_eval(const struct nft_synproxy *priv,
+ 		return;
+ 	}
+ 
+-	tcp = skb_header_pointer(skb, pkt->xt.thoff,
++	tcp = skb_header_pointer(skb, thoff,
+ 				 sizeof(struct tcphdr),
+ 				 &_tcph);
+ 	if (!tcp) {
+diff --git a/net/netfilter/nft_tproxy.c b/net/netfilter/nft_tproxy.c
+index c49d318f8e6ed..f8d277e05ef4f 100644
+--- a/net/netfilter/nft_tproxy.c
++++ b/net/netfilter/nft_tproxy.c
+@@ -88,9 +88,9 @@ static void nft_tproxy_eval_v6(const struct nft_expr *expr,
+ 	const struct nft_tproxy *priv = nft_expr_priv(expr);
+ 	struct sk_buff *skb = pkt->skb;
+ 	const struct ipv6hdr *iph = ipv6_hdr(skb);
+-	struct in6_addr taddr;
+-	int thoff = pkt->xt.thoff;
++	int thoff = nft_thoff(pkt);
+ 	struct udphdr _hdr, *hp;
++	struct in6_addr taddr;
+ 	__be16 tport = 0;
+ 	struct sock *sk;
+ 	int l4proto;
+diff --git a/net/nfc/llcp_core.c b/net/nfc/llcp_core.c
+index ddfd159f64e13..b1107570eaee8 100644
+--- a/net/nfc/llcp_core.c
++++ b/net/nfc/llcp_core.c
+@@ -1646,7 +1646,9 @@ int nfc_llcp_register_device(struct nfc_dev *ndev)
+ 	timer_setup(&local->sdreq_timer, nfc_llcp_sdreq_timer, 0);
+ 	INIT_WORK(&local->sdreq_timeout_work, nfc_llcp_sdreq_timeout_work);
+ 
++	spin_lock(&llcp_devices_lock);
+ 	list_add(&local->list, &llcp_devices);
++	spin_unlock(&llcp_devices_lock);
+ 
+ 	return 0;
+ }
+diff --git a/net/rds/rdma_transport.c b/net/rds/rdma_transport.c
+index 5f741e51b4baa..bb38124a5d3db 100644
+--- a/net/rds/rdma_transport.c
++++ b/net/rds/rdma_transport.c
+@@ -86,10 +86,12 @@ static int rds_rdma_cm_event_handler_cmn(struct rdma_cm_id *cm_id,
+ 		break;
+ 
+ 	case RDMA_CM_EVENT_ADDR_RESOLVED:
+-		rdma_set_service_type(cm_id, conn->c_tos);
+-		/* XXX do we need to clean up if this fails? */
+-		ret = rdma_resolve_route(cm_id,
++		if (conn) {
++			rdma_set_service_type(cm_id, conn->c_tos);
++			/* XXX do we need to clean up if this fails? */
++			ret = rdma_resolve_route(cm_id,
+ 					 RDS_RDMA_RESOLVE_TIMEOUT_MS);
++		}
+ 		break;
+ 
+ 	case RDMA_CM_EVENT_ROUTE_RESOLVED:
+diff --git a/net/rds/tcp_connect.c b/net/rds/tcp_connect.c
+index 4e64598176b05..2f38dac0160e8 100644
+--- a/net/rds/tcp_connect.c
++++ b/net/rds/tcp_connect.c
+@@ -169,7 +169,7 @@ int rds_tcp_conn_path_connect(struct rds_conn_path *cp)
+ 	 * own the socket
+ 	 */
+ 	rds_tcp_set_callbacks(sock, cp);
+-	ret = sock->ops->connect(sock, addr, addrlen, O_NONBLOCK);
++	ret = kernel_connect(sock, addr, addrlen, O_NONBLOCK);
+ 
+ 	rdsdebug("connect to address %pI6c returned %d\n", &conn->c_faddr, ret);
+ 	if (ret == -EINPROGRESS)
+diff --git a/net/sctp/associola.c b/net/sctp/associola.c
+index 2d4ec61877553..765eb617776b3 100644
+--- a/net/sctp/associola.c
++++ b/net/sctp/associola.c
+@@ -1151,8 +1151,7 @@ int sctp_assoc_update(struct sctp_association *asoc,
+ 		/* Add any peer addresses from the new association. */
+ 		list_for_each_entry(trans, &new->peer.transport_addr_list,
+ 				    transports)
+-			if (!sctp_assoc_lookup_paddr(asoc, &trans->ipaddr) &&
+-			    !sctp_assoc_add_peer(asoc, &trans->ipaddr,
++			if (!sctp_assoc_add_peer(asoc, &trans->ipaddr,
+ 						 GFP_ATOMIC, trans->state))
+ 				return -ENOMEM;
+ 
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 68d53e3f0d07a..bc4fe944ef858 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -2452,6 +2452,7 @@ static int sctp_apply_peer_addr_params(struct sctp_paddrparams *params,
+ 			if (trans) {
+ 				trans->hbinterval =
+ 				    msecs_to_jiffies(params->spp_hbinterval);
++				sctp_transport_reset_hb_timer(trans);
+ 			} else if (asoc) {
+ 				asoc->hbinterval =
+ 				    msecs_to_jiffies(params->spp_hbinterval);
+diff --git a/net/socket.c b/net/socket.c
+index 1a40335117035..de89ab55d4759 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -655,6 +655,14 @@ static inline int sock_sendmsg_nosec(struct socket *sock, struct msghdr *msg)
+ 	return ret;
+ }
+ 
++static int __sock_sendmsg(struct socket *sock, struct msghdr *msg)
++{
++	int err = security_socket_sendmsg(sock, msg,
++					  msg_data_left(msg));
++
++	return err ?: sock_sendmsg_nosec(sock, msg);
++}
++
+ /**
+  *	sock_sendmsg - send a message through @sock
+  *	@sock: socket
+@@ -665,10 +673,19 @@ static inline int sock_sendmsg_nosec(struct socket *sock, struct msghdr *msg)
+  */
+ int sock_sendmsg(struct socket *sock, struct msghdr *msg)
+ {
+-	int err = security_socket_sendmsg(sock, msg,
+-					  msg_data_left(msg));
++	struct sockaddr_storage *save_addr = (struct sockaddr_storage *)msg->msg_name;
++	struct sockaddr_storage address;
++	int ret;
+ 
+-	return err ?: sock_sendmsg_nosec(sock, msg);
++	if (msg->msg_name) {
++		memcpy(&address, msg->msg_name, msg->msg_namelen);
++		msg->msg_name = &address;
++	}
++
++	ret = __sock_sendmsg(sock, msg);
++	msg->msg_name = save_addr;
++
++	return ret;
+ }
+ EXPORT_SYMBOL(sock_sendmsg);
+ 
+@@ -995,7 +1012,7 @@ static ssize_t sock_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 	if (sock->type == SOCK_SEQPACKET)
+ 		msg.msg_flags |= MSG_EOR;
+ 
+-	res = sock_sendmsg(sock, &msg);
++	res = __sock_sendmsg(sock, &msg);
+ 	*from = msg.msg_iter;
+ 	return res;
+ }
+@@ -1983,7 +2000,7 @@ int __sys_sendto(int fd, void __user *buff, size_t len, unsigned int flags,
+ 	if (sock->file->f_flags & O_NONBLOCK)
+ 		flags |= MSG_DONTWAIT;
+ 	msg.msg_flags = flags;
+-	err = sock_sendmsg(sock, &msg);
++	err = __sock_sendmsg(sock, &msg);
+ 
+ out_put:
+ 	fput_light(sock->file, fput_needed);
+@@ -2356,7 +2373,7 @@ static int ____sys_sendmsg(struct socket *sock, struct msghdr *msg_sys,
+ 		err = sock_sendmsg_nosec(sock, msg_sys);
+ 		goto out_freectl;
+ 	}
+-	err = sock_sendmsg(sock, msg_sys);
++	err = __sock_sendmsg(sock, msg_sys);
+ 	/*
+ 	 * If this is sendmmsg() and sending to current destination address was
+ 	 * successful, remember it.
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index e1ce0f261f0be..c7c1754f87440 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2354,8 +2354,7 @@ call_status(struct rpc_task *task)
+ 		goto out_exit;
+ 	}
+ 	task->tk_action = call_encode;
+-	if (status != -ECONNRESET && status != -ECONNABORTED)
+-		rpc_check_timeout(task);
++	rpc_check_timeout(task);
+ 	return;
+ out_exit:
+ 	rpc_call_rpcerror(task, status);
+@@ -2630,6 +2629,7 @@ out_msg_denied:
+ 	case rpc_autherr_rejectedverf:
+ 	case rpcsec_gsserr_credproblem:
+ 	case rpcsec_gsserr_ctxproblem:
++		rpcauth_invalcred(task);
+ 		if (!task->tk_cred_retry)
+ 			break;
+ 		task->tk_cred_retry--;
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 2784d69892117..b5aa0a835bced 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -1445,14 +1445,14 @@ static int tipc_crypto_key_revoke(struct net *net, u8 tx_key)
+ 	struct tipc_crypto *tx = tipc_net(net)->crypto_tx;
+ 	struct tipc_key key;
+ 
+-	spin_lock(&tx->lock);
++	spin_lock_bh(&tx->lock);
+ 	key = tx->key;
+ 	WARN_ON(!key.active || tx_key != key.active);
+ 
+ 	/* Free the active key */
+ 	tipc_crypto_key_set_state(tx, key.passive, 0, key.pending);
+ 	tipc_crypto_key_detach(tx->aead[key.active], &tx->lock);
+-	spin_unlock(&tx->lock);
++	spin_unlock_bh(&tx->lock);
+ 
+ 	pr_warn("%s: key is revoked\n", tx->name);
+ 	return -EKEYREVOKED;
+diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
+index 2417dd1dee33c..da4df53ee6955 100644
+--- a/scripts/mod/file2alias.c
++++ b/scripts/mod/file2alias.c
+@@ -1490,7 +1490,7 @@ void handle_moddevtable(struct module *mod, struct elf_info *info,
+ 	/* First handle the "special" cases */
+ 	if (sym_is(name, namelen, "usb"))
+ 		do_usb_table(symval, sym->st_size, mod);
+-	if (sym_is(name, namelen, "of"))
++	else if (sym_is(name, namelen, "of"))
+ 		do_of_table(symval, sym->st_size, mod);
+ 	else if (sym_is(name, namelen, "pnp"))
+ 		do_pnp_device_entry(symval, sym->st_size, mod);
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index 05b8f5bcc37ac..d0d3ff58da491 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -29,9 +29,11 @@ config IMA
+ 	  to learn more about IMA.
+ 	  If unsure, say N.
+ 
++if IMA
++
+ config IMA_KEXEC
+ 	bool "Enable carrying the IMA measurement list across a soft boot"
+-	depends on IMA && TCG_TPM && HAVE_IMA_KEXEC
++	depends on TCG_TPM && HAVE_IMA_KEXEC
+ 	default n
+ 	help
+ 	   TPM PCRs are only reset on a hard reboot.  In order to validate
+@@ -43,7 +45,6 @@ config IMA_KEXEC
+ 
+ config IMA_MEASURE_PCR_IDX
+ 	int
+-	depends on IMA
+ 	range 8 14
+ 	default 10
+ 	help
+@@ -53,7 +54,7 @@ config IMA_MEASURE_PCR_IDX
+ 
+ config IMA_LSM_RULES
+ 	bool
+-	depends on IMA && AUDIT && (SECURITY_SELINUX || SECURITY_SMACK || SECURITY_APPARMOR)
++	depends on AUDIT && (SECURITY_SELINUX || SECURITY_SMACK || SECURITY_APPARMOR)
+ 	default y
+ 	help
+ 	  Disabling this option will disregard LSM based policy rules.
+@@ -61,7 +62,6 @@ config IMA_LSM_RULES
+ choice
+ 	prompt "Default template"
+ 	default IMA_NG_TEMPLATE
+-	depends on IMA
+ 	help
+ 	  Select the default IMA measurement template.
+ 
+@@ -80,14 +80,12 @@ endchoice
+ 
+ config IMA_DEFAULT_TEMPLATE
+ 	string
+-	depends on IMA
+ 	default "ima-ng" if IMA_NG_TEMPLATE
+ 	default "ima-sig" if IMA_SIG_TEMPLATE
+ 
+ choice
+ 	prompt "Default integrity hash algorithm"
+ 	default IMA_DEFAULT_HASH_SHA1
+-	depends on IMA
+ 	help
+ 	   Select the default hash algorithm used for the measurement
+ 	   list, integrity appraisal and audit log.  The compiled default
+@@ -117,7 +115,6 @@ endchoice
+ 
+ config IMA_DEFAULT_HASH
+ 	string
+-	depends on IMA
+ 	default "sha1" if IMA_DEFAULT_HASH_SHA1
+ 	default "sha256" if IMA_DEFAULT_HASH_SHA256
+ 	default "sha512" if IMA_DEFAULT_HASH_SHA512
+@@ -126,7 +123,6 @@ config IMA_DEFAULT_HASH
+ 
+ config IMA_WRITE_POLICY
+ 	bool "Enable multiple writes to the IMA policy"
+-	depends on IMA
+ 	default n
+ 	help
+ 	  IMA policy can now be updated multiple times.  The new rules get
+@@ -137,7 +133,6 @@ config IMA_WRITE_POLICY
+ 
+ config IMA_READ_POLICY
+ 	bool "Enable reading back the current IMA policy"
+-	depends on IMA
+ 	default y if IMA_WRITE_POLICY
+ 	default n if !IMA_WRITE_POLICY
+ 	help
+@@ -147,7 +142,6 @@ config IMA_READ_POLICY
+ 
+ config IMA_APPRAISE
+ 	bool "Appraise integrity measurements"
+-	depends on IMA
+ 	default n
+ 	help
+ 	  This option enables local measurement integrity appraisal.
+@@ -268,7 +262,7 @@ config IMA_KEYRINGS_PERMIT_SIGNED_BY_BUILTIN_OR_SECONDARY
+ config IMA_BLACKLIST_KEYRING
+ 	bool "Create IMA machine owner blacklist keyrings (EXPERIMENTAL)"
+ 	depends on SYSTEM_TRUSTED_KEYRING
+-	depends on IMA_TRUSTED_KEYRING
++	depends on INTEGRITY_TRUSTED_KEYRING
+ 	default n
+ 	help
+ 	   This option creates an IMA blacklist keyring, which contains all
+@@ -278,7 +272,7 @@ config IMA_BLACKLIST_KEYRING
+ 
+ config IMA_LOAD_X509
+ 	bool "Load X509 certificate onto the '.ima' trusted keyring"
+-	depends on IMA_TRUSTED_KEYRING
++	depends on INTEGRITY_TRUSTED_KEYRING
+ 	default n
+ 	help
+ 	   File signature verification is based on the public keys
+@@ -303,7 +297,6 @@ config IMA_APPRAISE_SIGNED_INIT
+ 
+ config IMA_MEASURE_ASYMMETRIC_KEYS
+ 	bool
+-	depends on IMA
+ 	depends on ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
+ 	default y
+ 
+@@ -319,3 +312,5 @@ config IMA_SECURE_AND_OR_TRUSTED_BOOT
+        help
+           This option is selected by architectures to enable secure and/or
+           trusted boot based on IMA runtime policies.
++
++endif
+diff --git a/security/smack/smack.h b/security/smack/smack.h
+index a9768b12716bf..b5187915e074e 100644
+--- a/security/smack/smack.h
++++ b/security/smack/smack.h
+@@ -120,6 +120,7 @@ struct inode_smack {
+ struct task_smack {
+ 	struct smack_known	*smk_task;	/* label for access control */
+ 	struct smack_known	*smk_forked;	/* label when forked */
++	struct smack_known	*smk_transmuted;/* label when transmuted */
+ 	struct list_head	smk_rules;	/* per task access rules */
+ 	struct mutex		smk_rules_lock;	/* lock for the rules */
+ 	struct list_head	smk_relabel;	/* transit allowed labels */
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index b36b8668f1f4a..814518ad4402b 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -972,8 +972,9 @@ static int smack_inode_init_security(struct inode *inode, struct inode *dir,
+ 				     const struct qstr *qstr, const char **name,
+ 				     void **value, size_t *len)
+ {
++	struct task_smack *tsp = smack_cred(current_cred());
+ 	struct inode_smack *issp = smack_inode(inode);
+-	struct smack_known *skp = smk_of_current();
++	struct smack_known *skp = smk_of_task(tsp);
+ 	struct smack_known *isp = smk_of_inode(inode);
+ 	struct smack_known *dsp = smk_of_inode(dir);
+ 	int may;
+@@ -982,20 +983,34 @@ static int smack_inode_init_security(struct inode *inode, struct inode *dir,
+ 		*name = XATTR_SMACK_SUFFIX;
+ 
+ 	if (value && len) {
+-		rcu_read_lock();
+-		may = smk_access_entry(skp->smk_known, dsp->smk_known,
+-				       &skp->smk_rules);
+-		rcu_read_unlock();
++		/*
++		 * If equal, transmuting already occurred in
++		 * smack_dentry_create_files_as(). No need to check again.
++		 */
++		if (tsp->smk_task != tsp->smk_transmuted) {
++			rcu_read_lock();
++			may = smk_access_entry(skp->smk_known, dsp->smk_known,
++					       &skp->smk_rules);
++			rcu_read_unlock();
++		}
+ 
+ 		/*
+-		 * If the access rule allows transmutation and
+-		 * the directory requests transmutation then
+-		 * by all means transmute.
++		 * In addition to having smk_task equal to smk_transmuted,
++		 * if the access rule allows transmutation and the directory
++		 * requests transmutation then by all means transmute.
+ 		 * Mark the inode as changed.
+ 		 */
+-		if (may > 0 && ((may & MAY_TRANSMUTE) != 0) &&
+-		    smk_inode_transmutable(dir)) {
+-			isp = dsp;
++		if ((tsp->smk_task == tsp->smk_transmuted) ||
++		    (may > 0 && ((may & MAY_TRANSMUTE) != 0) &&
++		     smk_inode_transmutable(dir))) {
++			/*
++			 * The caller of smack_dentry_create_files_as()
++			 * should have overridden the current cred, so the
++			 * inode label was already set correctly in
++			 * smack_inode_alloc_security().
++			 */
++			if (tsp->smk_task != tsp->smk_transmuted)
++				isp = dsp;
+ 			issp->smk_flags |= SMK_INODE_CHANGED;
+ 		}
+ 
+@@ -1429,10 +1444,19 @@ static int smack_inode_getsecurity(struct inode *inode,
+ 	struct super_block *sbp;
+ 	struct inode *ip = (struct inode *)inode;
+ 	struct smack_known *isp;
++	struct inode_smack *ispp;
++	size_t label_len;
++	char *label = NULL;
+ 
+-	if (strcmp(name, XATTR_SMACK_SUFFIX) == 0)
++	if (strcmp(name, XATTR_SMACK_SUFFIX) == 0) {
+ 		isp = smk_of_inode(inode);
+-	else {
++	} else if (strcmp(name, XATTR_SMACK_TRANSMUTE) == 0) {
++		ispp = smack_inode(inode);
++		if (ispp->smk_flags & SMK_INODE_TRANSMUTE)
++			label = TRANS_TRUE;
++		else
++			label = "";
++	} else {
+ 		/*
+ 		 * The rest of the Smack xattrs are only on sockets.
+ 		 */
+@@ -1454,13 +1478,18 @@ static int smack_inode_getsecurity(struct inode *inode,
+ 			return -EOPNOTSUPP;
+ 	}
+ 
++	if (!label)
++		label = isp->smk_known;
++
++	label_len = strlen(label);
++
+ 	if (alloc) {
+-		*buffer = kstrdup(isp->smk_known, GFP_KERNEL);
++		*buffer = kstrdup(label, GFP_KERNEL);
+ 		if (*buffer == NULL)
+ 			return -ENOMEM;
+ 	}
+ 
+-	return strlen(isp->smk_known);
++	return label_len;
+ }
+ 
+ 
+@@ -4634,7 +4663,7 @@ static int smack_inode_copy_up(struct dentry *dentry, struct cred **new)
+ 	/*
+ 	 * Get label from overlay inode and set it in create_sid
+ 	 */
+-	isp = smack_inode(d_inode(dentry->d_parent));
++	isp = smack_inode(d_inode(dentry));
+ 	skp = isp->smk_inode;
+ 	tsp->smk_task = skp;
+ 	*new = new_creds;
+@@ -4685,8 +4714,10 @@ static int smack_dentry_create_files_as(struct dentry *dentry, int mode,
+ 		 * providing access is transmuting use the containing
+ 		 * directory label instead of the process label.
+ 		 */
+-		if (may > 0 && (may & MAY_TRANSMUTE))
++		if (may > 0 && (may & MAY_TRANSMUTE)) {
+ 			ntsp->smk_task = isp->smk_inode;
++			ntsp->smk_transmuted = ntsp->smk_task;
++		}
+ 	}
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 1f641712233ef..dfef761d55214 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2271,6 +2271,7 @@ static const struct snd_pci_quirk power_save_denylist[] = {
+ 	SND_PCI_QUIRK(0x8086, 0x2068, "Intel NUC7i3BNB", 0),
+ 	/* https://bugzilla.kernel.org/show_bug.cgi?id=198611 */
+ 	SND_PCI_QUIRK(0x17aa, 0x2227, "Lenovo X1 Carbon 3rd Gen", 0),
++	SND_PCI_QUIRK(0x17aa, 0x316e, "Lenovo ThinkCentre M70q", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1689623 */
+ 	SND_PCI_QUIRK(0x17aa, 0x367b, "Lenovo IdeaCentre B550", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */
+diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
+index 77d8234c7ac49..bb2aab1d2389e 100644
+--- a/sound/soc/fsl/imx-audmix.c
++++ b/sound/soc/fsl/imx-audmix.c
+@@ -322,7 +322,7 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 	if (IS_ERR(priv->cpu_mclk)) {
+ 		ret = PTR_ERR(priv->cpu_mclk);
+ 		dev_err(&cpu_pdev->dev, "failed to get DAI mclk1: %d\n", ret);
+-		return -EINVAL;
++		return ret;
+ 	}
+ 
+ 	priv->audmix_pdev = audmix_pdev;
+diff --git a/sound/soc/meson/axg-spdifin.c b/sound/soc/meson/axg-spdifin.c
+index d0d09f945b489..7aaded1fc376b 100644
+--- a/sound/soc/meson/axg-spdifin.c
++++ b/sound/soc/meson/axg-spdifin.c
+@@ -112,34 +112,6 @@ static int axg_spdifin_prepare(struct snd_pcm_substream *substream,
+ 	return 0;
+ }
+ 
+-static int axg_spdifin_startup(struct snd_pcm_substream *substream,
+-			       struct snd_soc_dai *dai)
+-{
+-	struct axg_spdifin *priv = snd_soc_dai_get_drvdata(dai);
+-	int ret;
+-
+-	ret = clk_prepare_enable(priv->refclk);
+-	if (ret) {
+-		dev_err(dai->dev,
+-			"failed to enable spdifin reference clock\n");
+-		return ret;
+-	}
+-
+-	regmap_update_bits(priv->map, SPDIFIN_CTRL0, SPDIFIN_CTRL0_EN,
+-			   SPDIFIN_CTRL0_EN);
+-
+-	return 0;
+-}
+-
+-static void axg_spdifin_shutdown(struct snd_pcm_substream *substream,
+-				 struct snd_soc_dai *dai)
+-{
+-	struct axg_spdifin *priv = snd_soc_dai_get_drvdata(dai);
+-
+-	regmap_update_bits(priv->map, SPDIFIN_CTRL0, SPDIFIN_CTRL0_EN, 0);
+-	clk_disable_unprepare(priv->refclk);
+-}
+-
+ static void axg_spdifin_write_mode_param(struct regmap *map, int mode,
+ 					 unsigned int val,
+ 					 unsigned int num_per_reg,
+@@ -251,25 +223,38 @@ static int axg_spdifin_dai_probe(struct snd_soc_dai *dai)
+ 	ret = axg_spdifin_sample_mode_config(dai, priv);
+ 	if (ret) {
+ 		dev_err(dai->dev, "mode configuration failed\n");
+-		clk_disable_unprepare(priv->pclk);
+-		return ret;
++		goto pclk_err;
+ 	}
+ 
++	ret = clk_prepare_enable(priv->refclk);
++	if (ret) {
++		dev_err(dai->dev,
++			"failed to enable spdifin reference clock\n");
++		goto pclk_err;
++	}
++
++	regmap_update_bits(priv->map, SPDIFIN_CTRL0, SPDIFIN_CTRL0_EN,
++			   SPDIFIN_CTRL0_EN);
++
+ 	return 0;
++
++pclk_err:
++	clk_disable_unprepare(priv->pclk);
++	return ret;
+ }
+ 
+ static int axg_spdifin_dai_remove(struct snd_soc_dai *dai)
+ {
+ 	struct axg_spdifin *priv = snd_soc_dai_get_drvdata(dai);
+ 
++	regmap_update_bits(priv->map, SPDIFIN_CTRL0, SPDIFIN_CTRL0_EN, 0);
++	clk_disable_unprepare(priv->refclk);
+ 	clk_disable_unprepare(priv->pclk);
+ 	return 0;
+ }
+ 
+ static const struct snd_soc_dai_ops axg_spdifin_ops = {
+ 	.prepare	= axg_spdifin_prepare,
+-	.startup	= axg_spdifin_startup,
+-	.shutdown	= axg_spdifin_shutdown,
+ };
+ 
+ static int axg_spdifin_iec958_info(struct snd_kcontrol *kcontrol,
+diff --git a/tools/include/linux/btf_ids.h b/tools/include/linux/btf_ids.h
+index 57890b357f851..eca91e7a4d394 100644
+--- a/tools/include/linux/btf_ids.h
++++ b/tools/include/linux/btf_ids.h
+@@ -38,7 +38,7 @@ asm(							\
+ 	____BTF_ID(symbol)
+ 
+ #define __ID(prefix) \
+-	__PASTE(prefix, __COUNTER__)
++	__PASTE(__PASTE(prefix, __COUNTER__), __LINE__)
+ 
+ /*
+  * The BTF_ID defines unique symbol for each ID pointing
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index 7943e748916d4..fd1a4d843e6f0 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -976,7 +976,9 @@ union bpf_attr {
+  * 		performed again, if the helper is used in combination with
+  * 		direct packet access.
+  * 	Return
+- * 		0 on success, or a negative error in case of failure.
++ * 		0 on success, or a negative error in case of failure. Positive
++ * 		error indicates a potential drop or congestion in the target
++ * 		device. The particular positive error codes are not defined.
+  *
+  * u64 bpf_get_current_pid_tgid(void)
+  * 	Return
+diff --git a/tools/perf/util/Build b/tools/perf/util/Build
+index 0cf27354aa451..0f9732d5452e6 100644
+--- a/tools/perf/util/Build
++++ b/tools/perf/util/Build
+@@ -253,6 +253,12 @@ ifeq ($(BISON_GE_35),1)
+ else
+   bison_flags += -w
+ endif
++
++BISON_LT_381 := $(shell expr $(shell $(BISON) --version | grep bison | sed -e 's/.\+ \([0-9]\+\).\([0-9]\+\).\([0-9]\+\)/\1\2\3/g') \< 381)
++ifeq ($(BISON_LT_381),1)
++  bison_flags += -DYYNOMEM=YYABORT
++endif
++
+ CFLAGS_parse-events-bison.o += $(bison_flags)
+ CFLAGS_pmu-bison.o          += -DYYLTYPE_IS_TRIVIAL=0 $(bison_flags)
+ CFLAGS_expr-bison.o         += -DYYLTYPE_IS_TRIVIAL=0 $(bison_flags)
+diff --git a/tools/power/cpupower/Makefile b/tools/power/cpupower/Makefile
+index c7bcddbd486d7..3b1594447f294 100644
+--- a/tools/power/cpupower/Makefile
++++ b/tools/power/cpupower/Makefile
+@@ -270,14 +270,14 @@ clean:
+ 	$(MAKE) -C bench O=$(OUTPUT) clean
+ 
+ 
+-install-lib:
++install-lib: libcpupower
+ 	$(INSTALL) -d $(DESTDIR)${libdir}
+ 	$(CP) $(OUTPUT)libcpupower.so* $(DESTDIR)${libdir}/
+ 	$(INSTALL) -d $(DESTDIR)${includedir}
+ 	$(INSTALL_DATA) lib/cpufreq.h $(DESTDIR)${includedir}/cpufreq.h
+ 	$(INSTALL_DATA) lib/cpuidle.h $(DESTDIR)${includedir}/cpuidle.h
+ 
+-install-tools:
++install-tools: $(OUTPUT)cpupower
+ 	$(INSTALL) -d $(DESTDIR)${bindir}
+ 	$(INSTALL_PROGRAM) $(OUTPUT)cpupower $(DESTDIR)${bindir}
+ 	$(INSTALL) -d $(DESTDIR)${bash_completion_dir}
+@@ -293,14 +293,14 @@ install-man:
+ 	$(INSTALL_DATA) -D man/cpupower-info.1 $(DESTDIR)${mandir}/man1/cpupower-info.1
+ 	$(INSTALL_DATA) -D man/cpupower-monitor.1 $(DESTDIR)${mandir}/man1/cpupower-monitor.1
+ 
+-install-gmo:
++install-gmo: create-gmo
+ 	$(INSTALL) -d $(DESTDIR)${localedir}
+ 	for HLANG in $(LANGUAGES); do \
+ 		echo '$(INSTALL_DATA) -D $(OUTPUT)po/$$HLANG.gmo $(DESTDIR)${localedir}/$$HLANG/LC_MESSAGES/cpupower.mo'; \
+ 		$(INSTALL_DATA) -D $(OUTPUT)po/$$HLANG.gmo $(DESTDIR)${localedir}/$$HLANG/LC_MESSAGES/cpupower.mo; \
+ 	done;
+ 
+-install-bench:
++install-bench: compile-bench
+ 	@#DESTDIR must be set from outside to survive
+ 	@sbindir=$(sbindir) bindir=$(bindir) docdir=$(docdir) confdir=$(confdir) $(MAKE) -C bench O=$(OUTPUT) install
+ 
+diff --git a/tools/power/cpupower/bench/Makefile b/tools/power/cpupower/bench/Makefile
+index f68b4bc552739..d9d9923af85c2 100644
+--- a/tools/power/cpupower/bench/Makefile
++++ b/tools/power/cpupower/bench/Makefile
+@@ -27,7 +27,7 @@ $(OUTPUT)cpufreq-bench: $(OBJS)
+ 
+ all: $(OUTPUT)cpufreq-bench
+ 
+-install:
++install: $(OUTPUT)cpufreq-bench
+ 	mkdir -p $(DESTDIR)/$(sbindir)
+ 	mkdir -p $(DESTDIR)/$(bindir)
+ 	mkdir -p $(DESTDIR)/$(docdir)
+diff --git a/tools/testing/selftests/ftrace/test.d/instances/instance-event.tc b/tools/testing/selftests/ftrace/test.d/instances/instance-event.tc
+index 0eb47fbb3f44d..42422e4251078 100644
+--- a/tools/testing/selftests/ftrace/test.d/instances/instance-event.tc
++++ b/tools/testing/selftests/ftrace/test.d/instances/instance-event.tc
+@@ -39,7 +39,7 @@ instance_read() {
+ 
+ instance_set() {
+         while :; do
+-                echo 1 > foo/events/sched/sched_switch
++                echo 1 > foo/events/sched/sched_switch/enable
+         done 2> /dev/null
+ }
+ 
+diff --git a/tools/testing/selftests/kselftest_deps.sh b/tools/testing/selftests/kselftest_deps.sh
+index bbc04646346b2..e6010de678200 100755
+--- a/tools/testing/selftests/kselftest_deps.sh
++++ b/tools/testing/selftests/kselftest_deps.sh
+@@ -46,11 +46,11 @@ fi
+ print_targets=0
+ 
+ while getopts "p" arg; do
+-    case $arg in
+-        p)
++	case $arg in
++		p)
+ 		print_targets=1
+ 	shift;;
+-    esac
++	esac
+ done
+ 
+ if [ $# -eq 0 ]
+@@ -92,6 +92,10 @@ pass_cnt=0
+ # Get all TARGETS from selftests Makefile
+ targets=$(egrep "^TARGETS +|^TARGETS =" Makefile | cut -d "=" -f2)
+ 
++# Initially, in LDLIBS related lines, the dep checker needs
++# to ignore lines containing the following strings:
++filter="\$(VAR_LDLIBS)\|pkg-config\|PKG_CONFIG\|IOURING_EXTRA_LIBS"
++
+ # Single test case
+ if [ $# -eq 2 ]
+ then
+@@ -100,6 +104,8 @@ then
+ 	l1_test $test
+ 	l2_test $test
+ 	l3_test $test
++	l4_test $test
++	l5_test $test
+ 
+ 	print_results $1 $2
+ 	exit $?
+@@ -113,7 +119,7 @@ fi
+ # Append space at the end of the list to append more tests.
+ 
+ l1_tests=$(grep -r --include=Makefile "^LDLIBS" | \
+-		grep -v "VAR_LDLIBS" | awk -F: '{print $1}')
++		grep -v "$filter" | awk -F: '{print $1}' | uniq)
+ 
+ # Level 2: LDLIBS set dynamically.
+ #
+@@ -126,7 +132,7 @@ l1_tests=$(grep -r --include=Makefile "^LDLIBS" | \
+ # Append space at the end of the list to append more tests.
+ 
+ l2_tests=$(grep -r --include=Makefile ": LDLIBS" | \
+-		grep -v "VAR_LDLIBS" | awk -F: '{print $1}')
++		grep -v "$filter" | awk -F: '{print $1}' | uniq)
+ 
+ # Level 3
+ # gpio,  memfd and others use pkg-config to find mount and fuse libs
+@@ -140,11 +146,32 @@ l2_tests=$(grep -r --include=Makefile ": LDLIBS" | \
+ #	VAR_LDLIBS := $(shell pkg-config fuse --libs 2>/dev/null)
+ 
+ l3_tests=$(grep -r --include=Makefile "^VAR_LDLIBS" | \
+-		grep -v "pkg-config" | awk -F: '{print $1}')
++		grep -v "pkg-config\|PKG_CONFIG" | awk -F: '{print $1}' | uniq)
+ 
+-#echo $l1_tests
+-#echo $l2_1_tests
+-#echo $l3_tests
++# Level 4
++# some tests may fall back to default using `|| echo -l<libname>`
++# if pkg-config doesn't find the libs, instead of using VAR_LDLIBS
++# as per level 3 checks.
++# e.g:
++# netfilter/Makefile
++#	LDLIBS += $(shell $(HOSTPKG_CONFIG) --libs libmnl 2>/dev/null || echo -lmnl)
++l4_tests=$(grep -r --include=Makefile "^LDLIBS" | \
++		grep "pkg-config\|PKG_CONFIG" | awk -F: '{print $1}' | uniq)
++
++# Level 5
++# some tests may use IOURING_EXTRA_LIBS to add extra libs to LDLIBS,
++# which in turn may be defined in a sub-Makefile
++# e.g.:
++# mm/Makefile
++#	$(OUTPUT)/gup_longterm: LDLIBS += $(IOURING_EXTRA_LIBS)
++l5_tests=$(grep -r --include=Makefile "LDLIBS +=.*\$(IOURING_EXTRA_LIBS)" | \
++	awk -F: '{print $1}' | uniq)
++
++#echo l1_tests $l1_tests
++#echo l2_tests $l2_tests
++#echo l3_tests $l3_tests
++#echo l4_tests $l4_tests
++#echo l5_tests $l5_tests
+ 
+ all_tests
+ print_results $1 $2
+@@ -166,24 +193,32 @@ all_tests()
+ 	for test in $l3_tests; do
+ 		l3_test $test
+ 	done
++
++	for test in $l4_tests; do
++		l4_test $test
++	done
++
++	for test in $l5_tests; do
++		l5_test $test
++	done
+ }
+ 
+ # Use same parsing used for l1_tests and pick libraries this time.
+ l1_test()
+ {
+ 	test_libs=$(grep --include=Makefile "^LDLIBS" $test | \
+-			grep -v "VAR_LDLIBS" | \
++			grep -v "$filter" | \
+ 			sed -e 's/\:/ /' | \
+ 			sed -e 's/+/ /' | cut -d "=" -f 2)
+ 
+ 	check_libs $test $test_libs
+ }
+ 
+-# Use same parsing used for l2__tests and pick libraries this time.
++# Use same parsing used for l2_tests and pick libraries this time.
+ l2_test()
+ {
+ 	test_libs=$(grep --include=Makefile ": LDLIBS" $test | \
+-			grep -v "VAR_LDLIBS" | \
++			grep -v "$filter" | \
+ 			sed -e 's/\:/ /' | sed -e 's/+/ /' | \
+ 			cut -d "=" -f 2)
+ 
+@@ -199,6 +234,24 @@ l3_test()
+ 	check_libs $test $test_libs
+ }
+ 
++l4_test()
++{
++	test_libs=$(grep --include=Makefile "^VAR_LDLIBS\|^LDLIBS" $test | \
++			grep "\(pkg-config\|PKG_CONFIG\).*|| echo " | \
++			sed -e 's/.*|| echo //' | sed -e 's/)$//')
++
++	check_libs $test $test_libs
++}
++
++l5_test()
++{
++	tests=$(find $(dirname "$test") -type f -name "*.mk")
++	test_libs=$(grep "^IOURING_EXTRA_LIBS +\?=" $tests | \
++			cut -d "=" -f 2)
++
++	check_libs $test $test_libs
++}
++
+ check_libs()
+ {
+ 
+diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
+index b599f1fa99b55..44a25a9f1f722 100644
+--- a/tools/testing/selftests/net/tls.c
++++ b/tools/testing/selftests/net/tls.c
+@@ -384,11 +384,12 @@ TEST_F(tls, sendmsg_large)
+ 
+ 		msg.msg_iov = &vec;
+ 		msg.msg_iovlen = 1;
+-		EXPECT_EQ(sendmsg(self->cfd, &msg, 0), send_len);
++		EXPECT_EQ(sendmsg(self->fd, &msg, 0), send_len);
+ 	}
+ 
+-	while (recvs++ < sends)
+-		EXPECT_NE(recv(self->fd, mem, send_len, 0), -1);
++	while (recvs++ < sends) {
++		EXPECT_NE(recv(self->cfd, mem, send_len, 0), -1);
++	}
+ 
+ 	free(mem);
+ }
+@@ -416,9 +417,9 @@ TEST_F(tls, sendmsg_multiple)
+ 	msg.msg_iov = vec;
+ 	msg.msg_iovlen = iov_len;
+ 
+-	EXPECT_EQ(sendmsg(self->cfd, &msg, 0), total_len);
++	EXPECT_EQ(sendmsg(self->fd, &msg, 0), total_len);
+ 	buf = malloc(total_len);
+-	EXPECT_NE(recv(self->fd, buf, total_len, 0), -1);
++	EXPECT_NE(recv(self->cfd, buf, total_len, 0), -1);
+ 	for (i = 0; i < iov_len; i++) {
+ 		EXPECT_EQ(memcmp(test_strs[i], buf + len_cmp,
+ 				 strlen(test_strs[i])),


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-10-18 20:16 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-10-18 20:16 UTC (permalink / raw
  To: gentoo-commits

commit:     e8228d984659c51468157097b7051cc79b9322eb
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 18 20:16:33 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 18 20:16:33 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e8228d98

TAR override and gcc 14 patch

kheaders: make it possible to override TAR
gcc-plugins: Rename last_stmt() for GCC 14+

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 2930_tar_override.patch                   | 63 +++++++++++++++++++++++++++++++
 2945_handle-gcc-14-last-stmt-rename.patch | 31 +++++++++++++++
 2 files changed, 94 insertions(+)

diff --git a/2930_tar_override.patch b/2930_tar_override.patch
new file mode 100644
index 00000000..babfa83a
--- /dev/null
+++ b/2930_tar_override.patch
@@ -0,0 +1,63 @@
+From: "Michał Górny" <mgorny@gentoo.org>
+To: Dmitry Goldin <dgoldin+lkml@protonmail.ch>
+Cc: "Masahiro Yamada" <yamada.masahiro@socionext.com>,
+	linux-kernel@vger.kernel.org, "Michał Górny" <mgorny@gentoo.org>,
+	"Sam James" <sam@gentoo.org>,
+	"Masahiro Yamada" <masahiroy@kernel.org>
+Subject: [PATCH v2] kheaders: make it possible to override TAR
+Date: Wed, 12 Apr 2023 10:27:43 +0200	[thread overview]
+Message-ID: <20230412082743.350699-1-mgorny@gentoo.org> (raw)
+In-Reply-To: <CAK7LNATfrxu7BK0ZRq+qSjObiz6GpS3U5L=12vDys5_yy=Mdow@mail.gmail.com>
+
+Commit 86cdd2fdc4e39c388d39c7ba2396d1a9dfd66226 ("kheaders: make headers
+archive reproducible") introduced a number of options specific to GNU
+tar to the `tar` invocation in `gen_kheaders.sh` script.  This causes
+the script to fail to work on systems where `tar` is not GNU tar.  This
+can occur e.g. on recent Gentoo Linux installations that support using
+bsdtar from libarchive instead.
+
+Add a `TAR` make variable to make it possible to override the tar
+executable used, e.g. by specifying:
+
+  make TAR=gtar
+
+Link: https://bugs.gentoo.org/884061
+Reported-by: Sam James <sam@gentoo.org>
+Tested-by: Sam James <sam@gentoo.org>
+Co-developed-by: Masahiro Yamada <masahiroy@kernel.org>
+Signed-off-by: Michał Górny <mgorny@gentoo.org>
+---
+--- a/Makefile	2023-10-18 16:13:06.496343048 -0400
++++ b/Makefile	2023-10-18 16:14:00.136587613 -0400
+@@ -471,6 +471,7 @@ LZMA		= lzma
+ LZ4		= lz4c
+ XZ		= xz
+ ZSTD		= zstd
++TAR    = tar
+ 
+ PAHOLE_FLAGS	= $(shell PAHOLE=$(PAHOLE) $(srctree)/scripts/pahole-flags.sh)
+ 
+@@ -519,7 +520,7 @@ CLANG_FLAGS :=
+ export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC
+ export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AWK INSTALLKERNEL
+ export PERL PYTHON PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
+-export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
++export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD TAR
+ export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE
+ 
+ export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
+diff --git a/kernel/gen_kheaders.sh b/kernel/gen_kheaders.sh
+index 1ef9a8751..82d539648 100755
+--- a/kernel/gen_kheaders.sh
++++ b/kernel/gen_kheaders.sh
+@@ -86,7 +86,7 @@ find $cpio_dir -type f -print0 |
+ # For compatibility with older versions of tar, files are fed to tar
+ # pre-sorted, as --sort=name might not be available.
+ find $cpio_dir -printf "./%P\n" | LC_ALL=C sort | \
+-    tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
++    ${TAR:-tar} "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
+     --owner=0 --group=0 --numeric-owner --no-recursion \
+     -I $XZ -cf $tarfile -C $cpio_dir/ -T - > /dev/null
+ 
+-- 
+2.40.0

diff --git a/2945_handle-gcc-14-last-stmt-rename.patch b/2945_handle-gcc-14-last-stmt-rename.patch
new file mode 100644
index 00000000..b04ce8da
--- /dev/null
+++ b/2945_handle-gcc-14-last-stmt-rename.patch
@@ -0,0 +1,31 @@
+From: Kees Cook <keescook@chromium.org>
+To: linux-hardening@vger.kernel.org
+Cc: Kees Cook <keescook@chromium.org>, linux-kernel@vger.kernel.org
+Subject: [PATCH] gcc-plugins: Rename last_stmt() for GCC 14+
+Date: Thu, 10 Aug 2023 23:05:49 -0700	[thread overview]
+Message-ID: <20230811060545.never.564-kees@kernel.org> (raw)
+
+In GCC 14, last_stmt() was renamed to last_nondebug_stmt(). Add a helper
+macro to handle the renaming.
+
+Cc: linux-hardening@vger.kernel.org
+Signed-off-by: Kees Cook <keescook@chromium.org>
+---
+ scripts/gcc-plugins/gcc-common.h | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/scripts/gcc-plugins/gcc-common.h b/scripts/gcc-plugins/gcc-common.h
+index 84c730da36dd..1ae39b9f4a95 100644
+--- a/scripts/gcc-plugins/gcc-common.h
++++ b/scripts/gcc-plugins/gcc-common.h
+@@ -440,4 +440,8 @@ static inline void debug_gimple_stmt(const_gimple s)
+ #define SET_DECL_MODE(decl, mode)	DECL_MODE(decl) = (mode)
+ #endif
+ 
++#if BUILDING_GCC_VERSION >= 14000
++#define last_stmt(x)			last_nondebug_stmt(x)
++#endif
++
+ #endif
+-- 
+2.34.1


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-10-25 11:38 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-10-25 11:38 UTC (permalink / raw
  To: gentoo-commits

commit:     e2383926216bf13a81750a7d37c44f34f21cb159
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 25 11:38:30 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 25 11:38:30 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e2383926

Linux patch 5.10.199

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1198_linux-5.10.199.patch | 7874 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7878 insertions(+)

diff --git a/0000_README b/0000_README
index 8a76a944..cc499ff3 100644
--- a/0000_README
+++ b/0000_README
@@ -835,6 +835,10 @@ Patch:  1197_linux-5.10.198.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.198
 
+Patch:  1198_linux-5.10.199.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.199
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1198_linux-5.10.199.patch b/1198_linux-5.10.199.patch
new file mode 100644
index 00000000..65fc3c47
--- /dev/null
+++ b/1198_linux-5.10.199.patch
@@ -0,0 +1,7874 @@
+diff --git a/Documentation/admin-guide/sysctl/net.rst b/Documentation/admin-guide/sysctl/net.rst
+index 4f1a686b4b64b..1ba6c0b9c1617 100644
+--- a/Documentation/admin-guide/sysctl/net.rst
++++ b/Documentation/admin-guide/sysctl/net.rst
+@@ -31,18 +31,18 @@ see only some of them, depending on your kernel's configuration.
+ 
+ Table : Subdirectories in /proc/sys/net
+ 
+- ========= =================== = ========== ==================
++ ========= =================== = ========== ===================
+  Directory Content               Directory  Content
+- ========= =================== = ========== ==================
+- 802       E802 protocol         mptcp     Multipath TCP
+- appletalk Appletalk protocol    netfilter Network Filter
++ ========= =================== = ========== ===================
++ 802       E802 protocol         mptcp      Multipath TCP
++ appletalk Appletalk protocol    netfilter  Network Filter
+  ax25      AX25                  netrom     NET/ROM
+- bridge    Bridging              rose      X.25 PLP layer
+- core      General parameter     tipc      TIPC
+- ethernet  Ethernet protocol     unix      Unix domain sockets
+- ipv4      IP version 4          x25       X.25 protocol
++ bridge    Bridging              rose       X.25 PLP layer
++ core      General parameter     tipc       TIPC
++ ethernet  Ethernet protocol     unix       Unix domain sockets
++ ipv4      IP version 4          x25        X.25 protocol
+  ipv6      IP version 6
+- ========= =================== = ========== ==================
++ ========= =================== = ========== ===================
+ 
+ 1. /proc/sys/net/core - Network core options
+ ============================================
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index 252212998378e..e912a47765f36 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -1916,6 +1916,14 @@ accept_ra_min_hop_limit - INTEGER
+ 
+ 	Default: 1
+ 
++accept_ra_min_lft - INTEGER
++	Minimum acceptable lifetime value in Router Advertisement.
++
++	RA sections with a lifetime less than this value shall be
++	ignored. Zero lifetimes stay unaffected.
++
++	Default: 0
++
+ accept_ra_pinfo - BOOLEAN
+ 	Learn Prefix Information in Router Advertisement.
+ 
+diff --git a/Makefile b/Makefile
+index 470e11dcf2a3e..5105828bf6dab 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 198
++SUBLEVEL = 199
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/motorola-mapphone-common.dtsi b/arch/arm/boot/dts/motorola-mapphone-common.dtsi
+index 8cb26b924d3ca..f9392a135d387 100644
+--- a/arch/arm/boot/dts/motorola-mapphone-common.dtsi
++++ b/arch/arm/boot/dts/motorola-mapphone-common.dtsi
+@@ -765,6 +765,7 @@
+ &uart3 {
+ 	interrupts-extended = <&wakeupgen GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH
+ 			       &omap4_pmx_core 0x17c>;
++	overrun-throttle-ms = <500>;
+ };
+ 
+ &uart4 {
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index 31ba0ac7db630..7a2007c57d3cf 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -759,7 +759,8 @@ static inline bool system_supports_tlb_range(void)
+ 		cpus_have_const_cap(ARM64_HAS_TLB_RANGE);
+ }
+ 
+-extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
++int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
++bool try_emulate_mrs(struct pt_regs *regs, u32 isn);
+ 
+ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
+ {
+diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
+index 59c3facb8a560..cc5b2203d8761 100644
+--- a/arch/arm64/include/asm/exception.h
++++ b/arch/arm64/include/asm/exception.h
+@@ -33,19 +33,22 @@ asmlinkage void exit_to_user_mode(void);
+ void arm64_enter_nmi(struct pt_regs *regs);
+ void arm64_exit_nmi(struct pt_regs *regs);
+ void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
+-void do_undefinstr(struct pt_regs *regs);
+-void do_bti(struct pt_regs *regs);
++void do_el0_undef(struct pt_regs *regs, unsigned long esr);
++void do_el1_undef(struct pt_regs *regs, unsigned long esr);
++void do_el0_bti(struct pt_regs *regs);
++void do_el1_bti(struct pt_regs *regs, unsigned long esr);
+ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr);
+ void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr,
+ 			struct pt_regs *regs);
+ void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
+ void do_sve_acc(unsigned int esr, struct pt_regs *regs);
+ void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs);
+-void do_sysinstr(unsigned int esr, struct pt_regs *regs);
++void do_el0_sys(unsigned long esr, struct pt_regs *regs);
+ void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
+ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
+-void do_cp15instr(unsigned int esr, struct pt_regs *regs);
++void do_el0_cp15(unsigned long esr, struct pt_regs *regs);
+ void do_el0_svc(struct pt_regs *regs);
+ void do_el0_svc_compat(struct pt_regs *regs);
+-void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr);
++void do_el0_fpac(struct pt_regs *regs, unsigned long esr);
++void do_el1_fpac(struct pt_regs *regs, unsigned long esr);
+ #endif	/* __ASM_EXCEPTION_H */
+diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
+index 4b3a5f050f71f..e48afcb69392b 100644
+--- a/arch/arm64/include/asm/spectre.h
++++ b/arch/arm64/include/asm/spectre.h
+@@ -18,6 +18,7 @@ enum mitigation_state {
+ 	SPECTRE_VULNERABLE,
+ };
+ 
++struct pt_regs;
+ struct task_struct;
+ 
+ enum mitigation_state arm64_get_spectre_v2_state(void);
+@@ -33,4 +34,5 @@ enum mitigation_state arm64_get_spectre_bhb_state(void);
+ bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
+ u8 spectre_bhb_loop_affected(int scope);
+ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
++bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr);
+ #endif	/* __ASM_SPECTRE_H */
+diff --git a/arch/arm64/include/asm/system_misc.h b/arch/arm64/include/asm/system_misc.h
+index 1ab63cfbbaf17..624a608933f13 100644
+--- a/arch/arm64/include/asm/system_misc.h
++++ b/arch/arm64/include/asm/system_misc.h
+@@ -18,7 +18,7 @@
+ 
+ struct pt_regs;
+ 
+-void die(const char *msg, struct pt_regs *regs, int err);
++void die(const char *msg, struct pt_regs *regs, long err);
+ 
+ struct siginfo;
+ void arm64_notify_die(const char *str, struct pt_regs *regs,
+diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h
+index d96dc2c7c09da..ca8f19956b92b 100644
+--- a/arch/arm64/include/asm/traps.h
++++ b/arch/arm64/include/asm/traps.h
+@@ -13,17 +13,16 @@
+ 
+ struct pt_regs;
+ 
+-struct undef_hook {
+-	struct list_head node;
+-	u32 instr_mask;
+-	u32 instr_val;
+-	u64 pstate_mask;
+-	u64 pstate_val;
+-	int (*fn)(struct pt_regs *regs, u32 instr);
+-};
++#ifdef CONFIG_ARMV8_DEPRECATED
++bool try_emulate_armv8_deprecated(struct pt_regs *regs, u32 insn);
++#else
++static inline bool
++try_emulate_armv8_deprecated(struct pt_regs *regs, u32 insn)
++{
++	return false;
++}
++#endif /* CONFIG_ARMV8_DEPRECATED */
+ 
+-void register_undef_hook(struct undef_hook *hook);
+-void unregister_undef_hook(struct undef_hook *hook);
+ void force_signal_inject(int signal, int code, unsigned long address, unsigned int err);
+ void arm64_notify_segfault(unsigned long addr);
+ void arm64_force_sig_fault(int signo, int code, void __user *addr, const char *str);
+diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
+index 91b8a8378ba39..f0ba854f0045e 100644
+--- a/arch/arm64/kernel/armv8_deprecated.c
++++ b/arch/arm64/kernel/armv8_deprecated.c
+@@ -17,7 +17,6 @@
+ #include <asm/sysreg.h>
+ #include <asm/system_misc.h>
+ #include <asm/traps.h>
+-#include <asm/kprobes.h>
+ 
+ #define CREATE_TRACE_POINTS
+ #include "trace-events-emulation.h"
+@@ -39,226 +38,46 @@ enum insn_emulation_mode {
+ enum legacy_insn_status {
+ 	INSN_DEPRECATED,
+ 	INSN_OBSOLETE,
+-};
+-
+-struct insn_emulation_ops {
+-	const char		*name;
+-	enum legacy_insn_status	status;
+-	struct undef_hook	*hooks;
+-	int			(*set_hw_mode)(bool enable);
++	INSN_UNAVAILABLE,
+ };
+ 
+ struct insn_emulation {
+-	struct list_head node;
+-	struct insn_emulation_ops *ops;
++	const char			*name;
++	enum legacy_insn_status		status;
++	bool				(*try_emulate)(struct pt_regs *regs,
++						       u32 insn);
++	int				(*set_hw_mode)(bool enable);
++
+ 	int current_mode;
+ 	int min;
+ 	int max;
+-};
+-
+-static LIST_HEAD(insn_emulation);
+-static int nr_insn_emulated __initdata;
+-static DEFINE_RAW_SPINLOCK(insn_emulation_lock);
+-static DEFINE_MUTEX(insn_emulation_mutex);
+-
+-static void register_emulation_hooks(struct insn_emulation_ops *ops)
+-{
+-	struct undef_hook *hook;
+-
+-	BUG_ON(!ops->hooks);
+-
+-	for (hook = ops->hooks; hook->instr_mask; hook++)
+-		register_undef_hook(hook);
+-
+-	pr_notice("Registered %s emulation handler\n", ops->name);
+-}
+-
+-static void remove_emulation_hooks(struct insn_emulation_ops *ops)
+-{
+-	struct undef_hook *hook;
+-
+-	BUG_ON(!ops->hooks);
+-
+-	for (hook = ops->hooks; hook->instr_mask; hook++)
+-		unregister_undef_hook(hook);
+-
+-	pr_notice("Removed %s emulation handler\n", ops->name);
+-}
+-
+-static void enable_insn_hw_mode(void *data)
+-{
+-	struct insn_emulation *insn = (struct insn_emulation *)data;
+-	if (insn->ops->set_hw_mode)
+-		insn->ops->set_hw_mode(true);
+-}
+-
+-static void disable_insn_hw_mode(void *data)
+-{
+-	struct insn_emulation *insn = (struct insn_emulation *)data;
+-	if (insn->ops->set_hw_mode)
+-		insn->ops->set_hw_mode(false);
+-}
+-
+-/* Run set_hw_mode(mode) on all active CPUs */
+-static int run_all_cpu_set_hw_mode(struct insn_emulation *insn, bool enable)
+-{
+-	if (!insn->ops->set_hw_mode)
+-		return -EINVAL;
+-	if (enable)
+-		on_each_cpu(enable_insn_hw_mode, (void *)insn, true);
+-	else
+-		on_each_cpu(disable_insn_hw_mode, (void *)insn, true);
+-	return 0;
+-}
+-
+-/*
+- * Run set_hw_mode for all insns on a starting CPU.
+- * Returns:
+- *  0 		- If all the hooks ran successfully.
+- * -EINVAL	- At least one hook is not supported by the CPU.
+- */
+-static int run_all_insn_set_hw_mode(unsigned int cpu)
+-{
+-	int rc = 0;
+-	unsigned long flags;
+-	struct insn_emulation *insn;
+-
+-	raw_spin_lock_irqsave(&insn_emulation_lock, flags);
+-	list_for_each_entry(insn, &insn_emulation, node) {
+-		bool enable = (insn->current_mode == INSN_HW);
+-		if (insn->ops->set_hw_mode && insn->ops->set_hw_mode(enable)) {
+-			pr_warn("CPU[%u] cannot support the emulation of %s",
+-				cpu, insn->ops->name);
+-			rc = -EINVAL;
+-		}
+-	}
+-	raw_spin_unlock_irqrestore(&insn_emulation_lock, flags);
+-	return rc;
+-}
+-
+-static int update_insn_emulation_mode(struct insn_emulation *insn,
+-				       enum insn_emulation_mode prev)
+-{
+-	int ret = 0;
+-
+-	switch (prev) {
+-	case INSN_UNDEF: /* Nothing to be done */
+-		break;
+-	case INSN_EMULATE:
+-		remove_emulation_hooks(insn->ops);
+-		break;
+-	case INSN_HW:
+-		if (!run_all_cpu_set_hw_mode(insn, false))
+-			pr_notice("Disabled %s support\n", insn->ops->name);
+-		break;
+-	}
+-
+-	switch (insn->current_mode) {
+-	case INSN_UNDEF:
+-		break;
+-	case INSN_EMULATE:
+-		register_emulation_hooks(insn->ops);
+-		break;
+-	case INSN_HW:
+-		ret = run_all_cpu_set_hw_mode(insn, true);
+-		if (!ret)
+-			pr_notice("Enabled %s support\n", insn->ops->name);
+-		break;
+-	}
+ 
+-	return ret;
+-}
+-
+-static void __init register_insn_emulation(struct insn_emulation_ops *ops)
+-{
+-	unsigned long flags;
+-	struct insn_emulation *insn;
+-
+-	insn = kzalloc(sizeof(*insn), GFP_KERNEL);
+-	if (!insn)
+-		return;
+-
+-	insn->ops = ops;
+-	insn->min = INSN_UNDEF;
+-
+-	switch (ops->status) {
+-	case INSN_DEPRECATED:
+-		insn->current_mode = INSN_EMULATE;
+-		/* Disable the HW mode if it was turned on at early boot time */
+-		run_all_cpu_set_hw_mode(insn, false);
+-		insn->max = INSN_HW;
+-		break;
+-	case INSN_OBSOLETE:
+-		insn->current_mode = INSN_UNDEF;
+-		insn->max = INSN_EMULATE;
+-		break;
+-	}
+-
+-	raw_spin_lock_irqsave(&insn_emulation_lock, flags);
+-	list_add(&insn->node, &insn_emulation);
+-	nr_insn_emulated++;
+-	raw_spin_unlock_irqrestore(&insn_emulation_lock, flags);
+-
+-	/* Register any handlers if required */
+-	update_insn_emulation_mode(insn, INSN_UNDEF);
+-}
+-
+-static int emulation_proc_handler(struct ctl_table *table, int write,
+-				  void *buffer, size_t *lenp,
+-				  loff_t *ppos)
+-{
+-	int ret = 0;
+-	struct insn_emulation *insn = container_of(table->data, struct insn_emulation, current_mode);
+-	enum insn_emulation_mode prev_mode = insn->current_mode;
+-
+-	mutex_lock(&insn_emulation_mutex);
+-	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
++	/*
++	 * sysctl for this emulation + a sentinal entry.
++	 */
++	struct ctl_table sysctl[2];
++};
+ 
+-	if (ret || !write || prev_mode == insn->current_mode)
+-		goto ret;
++#define ARM_OPCODE_CONDTEST_FAIL   0
++#define ARM_OPCODE_CONDTEST_PASS   1
++#define ARM_OPCODE_CONDTEST_UNCOND 2
+ 
+-	ret = update_insn_emulation_mode(insn, prev_mode);
+-	if (ret) {
+-		/* Mode change failed, revert to previous mode. */
+-		insn->current_mode = prev_mode;
+-		update_insn_emulation_mode(insn, INSN_UNDEF);
+-	}
+-ret:
+-	mutex_unlock(&insn_emulation_mutex);
+-	return ret;
+-}
++#define	ARM_OPCODE_CONDITION_UNCOND	0xf
+ 
+-static void __init register_insn_emulation_sysctl(void)
++static unsigned int __maybe_unused aarch32_check_condition(u32 opcode, u32 psr)
+ {
+-	unsigned long flags;
+-	int i = 0;
+-	struct insn_emulation *insn;
+-	struct ctl_table *insns_sysctl, *sysctl;
+-
+-	insns_sysctl = kcalloc(nr_insn_emulated + 1, sizeof(*sysctl),
+-			       GFP_KERNEL);
+-	if (!insns_sysctl)
+-		return;
+-
+-	raw_spin_lock_irqsave(&insn_emulation_lock, flags);
+-	list_for_each_entry(insn, &insn_emulation, node) {
+-		sysctl = &insns_sysctl[i];
+-
+-		sysctl->mode = 0644;
+-		sysctl->maxlen = sizeof(int);
++	u32 cc_bits  = opcode >> 28;
+ 
+-		sysctl->procname = insn->ops->name;
+-		sysctl->data = &insn->current_mode;
+-		sysctl->extra1 = &insn->min;
+-		sysctl->extra2 = &insn->max;
+-		sysctl->proc_handler = emulation_proc_handler;
+-		i++;
++	if (cc_bits != ARM_OPCODE_CONDITION_UNCOND) {
++		if ((*aarch32_opcode_cond_checks[cc_bits])(psr))
++			return ARM_OPCODE_CONDTEST_PASS;
++		else
++			return ARM_OPCODE_CONDTEST_FAIL;
+ 	}
+-	raw_spin_unlock_irqrestore(&insn_emulation_lock, flags);
+-
+-	register_sysctl("abi", insns_sysctl);
++	return ARM_OPCODE_CONDTEST_UNCOND;
+ }
+ 
++#ifdef CONFIG_SWP_EMULATION
+ /*
+  *  Implement emulation of the SWP/SWPB instructions using load-exclusive and
+  *  store-exclusive.
+@@ -345,25 +164,6 @@ static int emulate_swpX(unsigned int address, unsigned int *data,
+ 	return res;
+ }
+ 
+-#define ARM_OPCODE_CONDTEST_FAIL   0
+-#define ARM_OPCODE_CONDTEST_PASS   1
+-#define ARM_OPCODE_CONDTEST_UNCOND 2
+-
+-#define	ARM_OPCODE_CONDITION_UNCOND	0xf
+-
+-static unsigned int __kprobes aarch32_check_condition(u32 opcode, u32 psr)
+-{
+-	u32 cc_bits  = opcode >> 28;
+-
+-	if (cc_bits != ARM_OPCODE_CONDITION_UNCOND) {
+-		if ((*aarch32_opcode_cond_checks[cc_bits])(psr))
+-			return ARM_OPCODE_CONDTEST_PASS;
+-		else
+-			return ARM_OPCODE_CONDTEST_FAIL;
+-	}
+-	return ARM_OPCODE_CONDTEST_UNCOND;
+-}
+-
+ /*
+  * swp_handler logs the id of calling process, dissects the instruction, sanity
+  * checks the memory location, calls emulate_swpX for the actual operation and
+@@ -436,28 +236,27 @@ fault:
+ 	return 0;
+ }
+ 
+-/*
+- * Only emulate SWP/SWPB executed in ARM state/User mode.
+- * The kernel must be SWP free and SWP{B} does not exist in Thumb.
+- */
+-static struct undef_hook swp_hooks[] = {
+-	{
+-		.instr_mask	= 0x0fb00ff0,
+-		.instr_val	= 0x01000090,
+-		.pstate_mask	= PSR_AA32_MODE_MASK,
+-		.pstate_val	= PSR_AA32_MODE_USR,
+-		.fn		= swp_handler
+-	},
+-	{ }
+-};
++static bool try_emulate_swp(struct pt_regs *regs, u32 insn)
++{
++	/* SWP{B} only exists in ARM state and does not exist in Thumb */
++	if (!compat_user_mode(regs) || compat_thumb_mode(regs))
++		return false;
++
++	if ((insn & 0x0fb00ff0) != 0x01000090)
++		return false;
++
++	return swp_handler(regs, insn) == 0;
++}
+ 
+-static struct insn_emulation_ops swp_ops = {
++static struct insn_emulation insn_swp = {
+ 	.name = "swp",
+ 	.status = INSN_OBSOLETE,
+-	.hooks = swp_hooks,
++	.try_emulate = try_emulate_swp,
+ 	.set_hw_mode = NULL,
+ };
++#endif /* CONFIG_SWP_EMULATION */
+ 
++#ifdef CONFIG_CP15_BARRIER_EMULATION
+ static int cp15barrier_handler(struct pt_regs *regs, u32 instr)
+ {
+ 	perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS, 1, regs, regs->pc);
+@@ -520,31 +319,29 @@ static int cp15_barrier_set_hw_mode(bool enable)
+ 	return 0;
+ }
+ 
+-static struct undef_hook cp15_barrier_hooks[] = {
+-	{
+-		.instr_mask	= 0x0fff0fdf,
+-		.instr_val	= 0x0e070f9a,
+-		.pstate_mask	= PSR_AA32_MODE_MASK,
+-		.pstate_val	= PSR_AA32_MODE_USR,
+-		.fn		= cp15barrier_handler,
+-	},
+-	{
+-		.instr_mask	= 0x0fff0fff,
+-		.instr_val	= 0x0e070f95,
+-		.pstate_mask	= PSR_AA32_MODE_MASK,
+-		.pstate_val	= PSR_AA32_MODE_USR,
+-		.fn		= cp15barrier_handler,
+-	},
+-	{ }
+-};
++static bool try_emulate_cp15_barrier(struct pt_regs *regs, u32 insn)
++{
++	if (!compat_user_mode(regs) || compat_thumb_mode(regs))
++		return false;
++
++	if ((insn & 0x0fff0fdf) == 0x0e070f9a)
++		return cp15barrier_handler(regs, insn) == 0;
++
++	if ((insn & 0x0fff0fff) == 0x0e070f95)
++		return cp15barrier_handler(regs, insn) == 0;
++
++	return false;
++}
+ 
+-static struct insn_emulation_ops cp15_barrier_ops = {
++static struct insn_emulation insn_cp15_barrier = {
+ 	.name = "cp15_barrier",
+ 	.status = INSN_DEPRECATED,
+-	.hooks = cp15_barrier_hooks,
++	.try_emulate = try_emulate_cp15_barrier,
+ 	.set_hw_mode = cp15_barrier_set_hw_mode,
+ };
++#endif /* CONFIG_CP15_BARRIER_EMULATION */
+ 
++#ifdef CONFIG_SETEND_EMULATION
+ static int setend_set_hw_mode(bool enable)
+ {
+ 	if (!cpu_supports_mixed_endian_el0())
+@@ -592,31 +389,221 @@ static int t16_setend_handler(struct pt_regs *regs, u32 instr)
+ 	return rc;
+ }
+ 
+-static struct undef_hook setend_hooks[] = {
+-	{
+-		.instr_mask	= 0xfffffdff,
+-		.instr_val	= 0xf1010000,
+-		.pstate_mask	= PSR_AA32_MODE_MASK,
+-		.pstate_val	= PSR_AA32_MODE_USR,
+-		.fn		= a32_setend_handler,
+-	},
+-	{
+-		/* Thumb mode */
+-		.instr_mask	= 0xfffffff7,
+-		.instr_val	= 0x0000b650,
+-		.pstate_mask	= (PSR_AA32_T_BIT | PSR_AA32_MODE_MASK),
+-		.pstate_val	= (PSR_AA32_T_BIT | PSR_AA32_MODE_USR),
+-		.fn		= t16_setend_handler,
+-	},
+-	{}
+-};
++static bool try_emulate_setend(struct pt_regs *regs, u32 insn)
++{
++	if (compat_thumb_mode(regs) &&
++	    (insn & 0xfffffff7) == 0x0000b650)
++		return t16_setend_handler(regs, insn) == 0;
++
++	if (compat_user_mode(regs) &&
++	    (insn & 0xfffffdff) == 0xf1010000)
++		return a32_setend_handler(regs, insn) == 0;
++
++	return false;
++}
+ 
+-static struct insn_emulation_ops setend_ops = {
++static struct insn_emulation insn_setend = {
+ 	.name = "setend",
+ 	.status = INSN_DEPRECATED,
+-	.hooks = setend_hooks,
++	.try_emulate = try_emulate_setend,
+ 	.set_hw_mode = setend_set_hw_mode,
+ };
++#endif /* CONFIG_SETEND_EMULATION */
++
++static struct insn_emulation *insn_emulations[] = {
++#ifdef CONFIG_SWP_EMULATION
++	&insn_swp,
++#endif
++#ifdef CONFIG_CP15_BARRIER_EMULATION
++	&insn_cp15_barrier,
++#endif
++#ifdef CONFIG_SETEND_EMULATION
++	&insn_setend,
++#endif
++};
++
++static DEFINE_MUTEX(insn_emulation_mutex);
++
++static void enable_insn_hw_mode(void *data)
++{
++	struct insn_emulation *insn = (struct insn_emulation *)data;
++	if (insn->set_hw_mode)
++		insn->set_hw_mode(true);
++}
++
++static void disable_insn_hw_mode(void *data)
++{
++	struct insn_emulation *insn = (struct insn_emulation *)data;
++	if (insn->set_hw_mode)
++		insn->set_hw_mode(false);
++}
++
++/* Run set_hw_mode(mode) on all active CPUs */
++static int run_all_cpu_set_hw_mode(struct insn_emulation *insn, bool enable)
++{
++	if (!insn->set_hw_mode)
++		return -EINVAL;
++	if (enable)
++		on_each_cpu(enable_insn_hw_mode, (void *)insn, true);
++	else
++		on_each_cpu(disable_insn_hw_mode, (void *)insn, true);
++	return 0;
++}
++
++/*
++ * Run set_hw_mode for all insns on a starting CPU.
++ * Returns:
++ *  0 		- If all the hooks ran successfully.
++ * -EINVAL	- At least one hook is not supported by the CPU.
++ */
++static int run_all_insn_set_hw_mode(unsigned int cpu)
++{
++	int i;
++	int rc = 0;
++	unsigned long flags;
++
++	/*
++	 * Disable IRQs to serialize against an IPI from
++	 * run_all_cpu_set_hw_mode(), ensuring the HW is programmed to the most
++	 * recent enablement state if the two race with one another.
++	 */
++	local_irq_save(flags);
++	for (i = 0; i < ARRAY_SIZE(insn_emulations); i++) {
++		struct insn_emulation *insn = insn_emulations[i];
++		bool enable = READ_ONCE(insn->current_mode) == INSN_HW;
++		if (insn->set_hw_mode && insn->set_hw_mode(enable)) {
++			pr_warn("CPU[%u] cannot support the emulation of %s",
++				cpu, insn->name);
++			rc = -EINVAL;
++		}
++	}
++	local_irq_restore(flags);
++
++	return rc;
++}
++
++static int update_insn_emulation_mode(struct insn_emulation *insn,
++				       enum insn_emulation_mode prev)
++{
++	int ret = 0;
++
++	switch (prev) {
++	case INSN_UNDEF: /* Nothing to be done */
++		break;
++	case INSN_EMULATE:
++		break;
++	case INSN_HW:
++		if (!run_all_cpu_set_hw_mode(insn, false))
++			pr_notice("Disabled %s support\n", insn->name);
++		break;
++	}
++
++	switch (insn->current_mode) {
++	case INSN_UNDEF:
++		break;
++	case INSN_EMULATE:
++		break;
++	case INSN_HW:
++		ret = run_all_cpu_set_hw_mode(insn, true);
++		if (!ret)
++			pr_notice("Enabled %s support\n", insn->name);
++		break;
++	}
++
++	return ret;
++}
++
++static int emulation_proc_handler(struct ctl_table *table, int write,
++				  void *buffer, size_t *lenp,
++				  loff_t *ppos)
++{
++	int ret = 0;
++	struct insn_emulation *insn = container_of(table->data, struct insn_emulation, current_mode);
++	enum insn_emulation_mode prev_mode = insn->current_mode;
++
++	mutex_lock(&insn_emulation_mutex);
++	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
++
++	if (ret || !write || prev_mode == insn->current_mode)
++		goto ret;
++
++	ret = update_insn_emulation_mode(insn, prev_mode);
++	if (ret) {
++		/* Mode change failed, revert to previous mode. */
++		WRITE_ONCE(insn->current_mode, prev_mode);
++		update_insn_emulation_mode(insn, INSN_UNDEF);
++	}
++ret:
++	mutex_unlock(&insn_emulation_mutex);
++	return ret;
++}
++
++static void __init register_insn_emulation(struct insn_emulation *insn)
++{
++	struct ctl_table *sysctl;
++
++	insn->min = INSN_UNDEF;
++
++	switch (insn->status) {
++	case INSN_DEPRECATED:
++		insn->current_mode = INSN_EMULATE;
++		/* Disable the HW mode if it was turned on at early boot time */
++		run_all_cpu_set_hw_mode(insn, false);
++		insn->max = INSN_HW;
++		break;
++	case INSN_OBSOLETE:
++		insn->current_mode = INSN_UNDEF;
++		insn->max = INSN_EMULATE;
++		break;
++	case INSN_UNAVAILABLE:
++		insn->current_mode = INSN_UNDEF;
++		insn->max = INSN_UNDEF;
++		break;
++	}
++
++	/* Program the HW if required */
++	update_insn_emulation_mode(insn, INSN_UNDEF);
++
++	if (insn->status != INSN_UNAVAILABLE) {
++		sysctl = &insn->sysctl[0];
++
++		sysctl->mode = 0644;
++		sysctl->maxlen = sizeof(int);
++
++		sysctl->procname = insn->name;
++		sysctl->data = &insn->current_mode;
++		sysctl->extra1 = &insn->min;
++		sysctl->extra2 = &insn->max;
++		sysctl->proc_handler = emulation_proc_handler;
++
++		register_sysctl("abi", sysctl);
++	}
++}
++
++bool try_emulate_armv8_deprecated(struct pt_regs *regs, u32 insn)
++{
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(insn_emulations); i++) {
++		struct insn_emulation *ie = insn_emulations[i];
++
++		if (ie->status == INSN_UNAVAILABLE)
++			continue;
++
++		/*
++		 * A trap may race with the mode being changed
++		 * INSN_EMULATE<->INSN_HW. Try to emulate the instruction to
++		 * avoid a spurious UNDEF.
++		 */
++		if (READ_ONCE(ie->current_mode) == INSN_UNDEF)
++			continue;
++
++		if (ie->try_emulate(regs, insn))
++			return true;
++	}
++
++	return false;
++}
+ 
+ /*
+  * Invoked as core_initcall, which guarantees that the instruction
+@@ -624,24 +611,27 @@ static struct insn_emulation_ops setend_ops = {
+  */
+ static int __init armv8_deprecated_init(void)
+ {
+-	if (IS_ENABLED(CONFIG_SWP_EMULATION))
+-		register_insn_emulation(&swp_ops);
++	int i;
+ 
+-	if (IS_ENABLED(CONFIG_CP15_BARRIER_EMULATION))
+-		register_insn_emulation(&cp15_barrier_ops);
++#ifdef CONFIG_SETEND_EMULATION
++	if (!system_supports_mixed_endian_el0()) {
++		insn_setend.status = INSN_UNAVAILABLE;
++		pr_info("setend instruction emulation is not supported on this system\n");
++	}
+ 
+-	if (IS_ENABLED(CONFIG_SETEND_EMULATION)) {
+-		if (system_supports_mixed_endian_el0())
+-			register_insn_emulation(&setend_ops);
+-		else
+-			pr_info("setend instruction emulation is not supported on this system\n");
++#endif
++	for (i = 0; i < ARRAY_SIZE(insn_emulations); i++) {
++		struct insn_emulation *ie = insn_emulations[i];
++
++		if (ie->status == INSN_UNAVAILABLE)
++			continue;
++
++		register_insn_emulation(ie);
+ 	}
+ 
+ 	cpuhp_setup_state_nocalls(CPUHP_AP_ARM64_ISNDEP_STARTING,
+ 				  "arm64/isndep:starting",
+ 				  run_all_insn_set_hw_mode, NULL);
+-	register_insn_emulation_sysctl();
+-
+ 	return 0;
+ }
+ 
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index f3767c1445933..1f0a2deafd643 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -2852,35 +2852,22 @@ int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt)
+ 	return rc;
+ }
+ 
+-static int emulate_mrs(struct pt_regs *regs, u32 insn)
++bool try_emulate_mrs(struct pt_regs *regs, u32 insn)
+ {
+ 	u32 sys_reg, rt;
+ 
++	if (compat_user_mode(regs) || !aarch64_insn_is_mrs(insn))
++		return false;
++
+ 	/*
+ 	 * sys_reg values are defined as used in mrs/msr instruction.
+ 	 * shift the imm value to get the encoding.
+ 	 */
+ 	sys_reg = (u32)aarch64_insn_decode_immediate(AARCH64_INSN_IMM_16, insn) << 5;
+ 	rt = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RT, insn);
+-	return do_emulate_mrs(regs, sys_reg, rt);
++	return do_emulate_mrs(regs, sys_reg, rt) == 0;
+ }
+ 
+-static struct undef_hook mrs_hook = {
+-	.instr_mask = 0xfff00000,
+-	.instr_val  = 0xd5300000,
+-	.pstate_mask = PSR_AA32_MODE_MASK,
+-	.pstate_val = PSR_MODE_EL0t,
+-	.fn = emulate_mrs,
+-};
+-
+-static int __init enable_mrs_emulation(void)
+-{
+-	register_undef_hook(&mrs_hook);
+-	return 0;
+-}
+-
+-core_initcall(enable_mrs_emulation);
+-
+ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
+ 			  char *buf)
+ {
+diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
+index ec120ed18faf4..7a8cd856ee85b 100644
+--- a/arch/arm64/kernel/entry-common.c
++++ b/arch/arm64/kernel/entry-common.c
+@@ -132,11 +132,20 @@ static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
+ 	exit_to_kernel_mode(regs);
+ }
+ 
+-static void noinstr el1_undef(struct pt_regs *regs)
++static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
+ {
+ 	enter_from_kernel_mode(regs);
+ 	local_daif_inherit(regs);
+-	do_undefinstr(regs);
++	do_el1_undef(regs, esr);
++	local_daif_mask();
++	exit_to_kernel_mode(regs);
++}
++
++static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr)
++{
++	enter_from_kernel_mode(regs);
++	local_daif_inherit(regs);
++	do_el1_bti(regs, esr);
+ 	local_daif_mask();
+ 	exit_to_kernel_mode(regs);
+ }
+@@ -187,7 +196,7 @@ static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr)
+ {
+ 	enter_from_kernel_mode(regs);
+ 	local_daif_inherit(regs);
+-	do_ptrauth_fault(regs, esr);
++	do_el1_fpac(regs, esr);
+ 	local_daif_mask();
+ 	exit_to_kernel_mode(regs);
+ }
+@@ -210,7 +219,10 @@ asmlinkage void noinstr el1_sync_handler(struct pt_regs *regs)
+ 		break;
+ 	case ESR_ELx_EC_SYS64:
+ 	case ESR_ELx_EC_UNKNOWN:
+-		el1_undef(regs);
++		el1_undef(regs, esr);
++		break;
++	case ESR_ELx_EC_BTI:
++		el1_bti(regs, esr);
+ 		break;
+ 	case ESR_ELx_EC_BREAKPT_CUR:
+ 	case ESR_ELx_EC_SOFTSTP_CUR:
+@@ -294,7 +306,7 @@ static void noinstr el0_sys(struct pt_regs *regs, unsigned long esr)
+ {
+ 	enter_from_user_mode();
+ 	local_daif_restore(DAIF_PROCCTX);
+-	do_sysinstr(esr, regs);
++	do_el0_sys(esr, regs);
+ }
+ 
+ static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
+@@ -316,18 +328,18 @@ static void noinstr el0_sp(struct pt_regs *regs, unsigned long esr)
+ 	do_sp_pc_abort(regs->sp, esr, regs);
+ }
+ 
+-static void noinstr el0_undef(struct pt_regs *regs)
++static void noinstr el0_undef(struct pt_regs *regs, unsigned long esr)
+ {
+ 	enter_from_user_mode();
+ 	local_daif_restore(DAIF_PROCCTX);
+-	do_undefinstr(regs);
++	do_el0_undef(regs, esr);
+ }
+ 
+ static void noinstr el0_bti(struct pt_regs *regs)
+ {
+ 	enter_from_user_mode();
+ 	local_daif_restore(DAIF_PROCCTX);
+-	do_bti(regs);
++	do_el0_bti(regs);
+ }
+ 
+ static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr)
+@@ -357,7 +369,7 @@ static void noinstr el0_fpac(struct pt_regs *regs, unsigned long esr)
+ {
+ 	enter_from_user_mode();
+ 	local_daif_restore(DAIF_PROCCTX);
+-	do_ptrauth_fault(regs, esr);
++	do_el0_fpac(regs, esr);
+ }
+ 
+ asmlinkage void noinstr el0_sync_handler(struct pt_regs *regs)
+@@ -394,7 +406,7 @@ asmlinkage void noinstr el0_sync_handler(struct pt_regs *regs)
+ 		el0_pc(regs, esr);
+ 		break;
+ 	case ESR_ELx_EC_UNKNOWN:
+-		el0_undef(regs);
++		el0_undef(regs, esr);
+ 		break;
+ 	case ESR_ELx_EC_BTI:
+ 		el0_bti(regs);
+@@ -418,7 +430,7 @@ static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
+ {
+ 	enter_from_user_mode();
+ 	local_daif_restore(DAIF_PROCCTX);
+-	do_cp15instr(esr, regs);
++	do_el0_cp15(esr, regs);
+ }
+ 
+ static void noinstr el0_svc_compat(struct pt_regs *regs)
+@@ -454,7 +466,7 @@ asmlinkage void noinstr el0_sync_compat_handler(struct pt_regs *regs)
+ 	case ESR_ELx_EC_CP14_MR:
+ 	case ESR_ELx_EC_CP14_LS:
+ 	case ESR_ELx_EC_CP14_64:
+-		el0_undef(regs);
++		el0_undef(regs, esr);
+ 		break;
+ 	case ESR_ELx_EC_CP15_32:
+ 	case ESR_ELx_EC_CP15_64:
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index faa8a6bf2376e..9c0e9d9eed6e2 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -537,10 +537,13 @@ bool has_spectre_v4(const struct arm64_cpu_capabilities *cap, int scope)
+ 	return state != SPECTRE_UNAFFECTED;
+ }
+ 
+-static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr)
++bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr)
+ {
+-	if (user_mode(regs))
+-		return 1;
++	const u32 instr_mask = ~(1U << PSTATE_Imm_shift);
++	const u32 instr_val = 0xd500401f | PSTATE_SSBS;
++
++	if ((instr & instr_mask) != instr_val)
++		return false;
+ 
+ 	if (instr & BIT(PSTATE_Imm_shift))
+ 		regs->pstate |= PSR_SSBS_BIT;
+@@ -548,19 +551,11 @@ static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr)
+ 		regs->pstate &= ~PSR_SSBS_BIT;
+ 
+ 	arm64_skip_faulting_instruction(regs, 4);
+-	return 0;
++	return true;
+ }
+ 
+-static struct undef_hook ssbs_emulation_hook = {
+-	.instr_mask	= ~(1U << PSTATE_Imm_shift),
+-	.instr_val	= 0xd500401f | PSTATE_SSBS,
+-	.fn		= ssbs_emulation_handler,
+-};
+-
+ static enum mitigation_state spectre_v4_enable_hw_mitigation(void)
+ {
+-	static bool undef_hook_registered = false;
+-	static DEFINE_RAW_SPINLOCK(hook_lock);
+ 	enum mitigation_state state;
+ 
+ 	/*
+@@ -571,13 +566,6 @@ static enum mitigation_state spectre_v4_enable_hw_mitigation(void)
+ 	if (state != SPECTRE_MITIGATED || !this_cpu_has_cap(ARM64_SSBS))
+ 		return state;
+ 
+-	raw_spin_lock(&hook_lock);
+-	if (!undef_hook_registered) {
+-		register_undef_hook(&ssbs_emulation_hook);
+-		undef_hook_registered = true;
+-	}
+-	raw_spin_unlock(&hook_lock);
+-
+ 	if (spectre_v4_mitigations_off()) {
+ 		sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_DSSBS);
+ 		asm volatile(SET_PSTATE_SSBS(1));
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index 2cdd53425509d..10a58017d0243 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -90,12 +90,12 @@ static void dump_kernel_instr(const char *lvl, struct pt_regs *regs)
+ 
+ #define S_SMP " SMP"
+ 
+-static int __die(const char *str, int err, struct pt_regs *regs)
++static int __die(const char *str, long err, struct pt_regs *regs)
+ {
+ 	static int die_counter;
+ 	int ret;
+ 
+-	pr_emerg("Internal error: %s: %x [#%d]" S_PREEMPT S_SMP "\n",
++	pr_emerg("Internal error: %s: %016lx [#%d]" S_PREEMPT S_SMP "\n",
+ 		 str, err, ++die_counter);
+ 
+ 	/* trap and error numbers are mostly meaningless on ARM */
+@@ -116,7 +116,7 @@ static DEFINE_RAW_SPINLOCK(die_lock);
+ /*
+  * This function is protected against re-entrancy.
+  */
+-void die(const char *str, struct pt_regs *regs, int err)
++void die(const char *str, struct pt_regs *regs, long err)
+ {
+ 	int ret;
+ 	unsigned long flags;
+@@ -282,51 +282,22 @@ void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
+ 		regs->pstate &= ~PSR_BTYPE_MASK;
+ }
+ 
+-static LIST_HEAD(undef_hook);
+-static DEFINE_RAW_SPINLOCK(undef_lock);
+-
+-void register_undef_hook(struct undef_hook *hook)
++static int user_insn_read(struct pt_regs *regs, u32 *insnp)
+ {
+-	unsigned long flags;
+-
+-	raw_spin_lock_irqsave(&undef_lock, flags);
+-	list_add(&hook->node, &undef_hook);
+-	raw_spin_unlock_irqrestore(&undef_lock, flags);
+-}
+-
+-void unregister_undef_hook(struct undef_hook *hook)
+-{
+-	unsigned long flags;
+-
+-	raw_spin_lock_irqsave(&undef_lock, flags);
+-	list_del(&hook->node);
+-	raw_spin_unlock_irqrestore(&undef_lock, flags);
+-}
+-
+-static int call_undef_hook(struct pt_regs *regs)
+-{
+-	struct undef_hook *hook;
+-	unsigned long flags;
+ 	u32 instr;
+-	int (*fn)(struct pt_regs *regs, u32 instr) = NULL;
+ 	void __user *pc = (void __user *)instruction_pointer(regs);
+ 
+-	if (!user_mode(regs)) {
+-		__le32 instr_le;
+-		if (get_kernel_nofault(instr_le, (__force __le32 *)pc))
+-			goto exit;
+-		instr = le32_to_cpu(instr_le);
+-	} else if (compat_thumb_mode(regs)) {
++	if (compat_thumb_mode(regs)) {
+ 		/* 16-bit Thumb instruction */
+ 		__le16 instr_le;
+ 		if (get_user(instr_le, (__le16 __user *)pc))
+-			goto exit;
++			return -EFAULT;
+ 		instr = le16_to_cpu(instr_le);
+ 		if (aarch32_insn_is_wide(instr)) {
+ 			u32 instr2;
+ 
+ 			if (get_user(instr_le, (__le16 __user *)(pc + 2)))
+-				goto exit;
++				return -EFAULT;
+ 			instr2 = le16_to_cpu(instr_le);
+ 			instr = (instr << 16) | instr2;
+ 		}
+@@ -334,19 +305,12 @@ static int call_undef_hook(struct pt_regs *regs)
+ 		/* 32-bit ARM instruction */
+ 		__le32 instr_le;
+ 		if (get_user(instr_le, (__le32 __user *)pc))
+-			goto exit;
++			return -EFAULT;
+ 		instr = le32_to_cpu(instr_le);
+ 	}
+ 
+-	raw_spin_lock_irqsave(&undef_lock, flags);
+-	list_for_each_entry(hook, &undef_hook, node)
+-		if ((instr & hook->instr_mask) == hook->instr_val &&
+-			(regs->pstate & hook->pstate_mask) == hook->pstate_val)
+-			fn = hook->fn;
+-
+-	raw_spin_unlock_irqrestore(&undef_lock, flags);
+-exit:
+-	return fn ? fn(regs, instr) : 1;
++	*insnp = instr;
++	return 0;
+ }
+ 
+ void force_signal_inject(int signal, int code, unsigned long address, unsigned int err)
+@@ -395,37 +359,64 @@ void arm64_notify_segfault(unsigned long addr)
+ 	force_signal_inject(SIGSEGV, code, addr, 0);
+ }
+ 
+-void do_undefinstr(struct pt_regs *regs)
++void do_el0_undef(struct pt_regs *regs, unsigned long esr)
+ {
++	u32 insn;
++
+ 	/* check for AArch32 breakpoint instructions */
+ 	if (!aarch32_break_handler(regs))
+ 		return;
+ 
+-	if (call_undef_hook(regs) == 0)
++	if (user_insn_read(regs, &insn))
++		goto out_err;
++
++	if (try_emulate_mrs(regs, insn))
++		return;
++
++	if (try_emulate_armv8_deprecated(regs, insn))
+ 		return;
+ 
+-	BUG_ON(!user_mode(regs));
++out_err:
+ 	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
+ }
+-NOKPROBE_SYMBOL(do_undefinstr);
+ 
+-void do_bti(struct pt_regs *regs)
++void do_el1_undef(struct pt_regs *regs, unsigned long esr)
++{
++	u32 insn;
++
++	if (aarch64_insn_read((void *)regs->pc, &insn))
++		goto out_err;
++
++	if (try_emulate_el1_ssbs(regs, insn))
++		return;
++
++out_err:
++	die("Oops - Undefined instruction", regs, esr);
++}
++
++void do_el0_bti(struct pt_regs *regs)
+ {
+-	BUG_ON(!user_mode(regs));
+ 	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
+ }
+-NOKPROBE_SYMBOL(do_bti);
+ 
+-void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr)
++void do_el1_bti(struct pt_regs *regs, unsigned long esr)
++{
++	die("Oops - BTI", regs, esr);
++}
++
++void do_el0_fpac(struct pt_regs *regs, unsigned long esr)
++{
++	force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr);
++}
++
++void do_el1_fpac(struct pt_regs *regs, unsigned long esr)
+ {
+ 	/*
+-	 * Unexpected FPAC exception or pointer authentication failure in
+-	 * the kernel: kill the task before it does any more harm.
++	 * Unexpected FPAC exception in the kernel: kill the task before it
++	 * does any more harm.
+ 	 */
+-	BUG_ON(!user_mode(regs));
+-	force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr);
++	die("Oops - FPAC", regs, esr);
+ }
+-NOKPROBE_SYMBOL(do_ptrauth_fault);
+ 
+ #define __user_cache_maint(insn, address, res)			\
+ 	if (address >= user_addr_max()) {			\
+@@ -640,7 +631,7 @@ static const struct sys64_hook cp15_64_hooks[] = {
+ 	{},
+ };
+ 
+-void do_cp15instr(unsigned int esr, struct pt_regs *regs)
++void do_el0_cp15(unsigned long esr, struct pt_regs *regs)
+ {
+ 	const struct sys64_hook *hook, *hook_base;
+ 
+@@ -661,7 +652,7 @@ void do_cp15instr(unsigned int esr, struct pt_regs *regs)
+ 		hook_base = cp15_64_hooks;
+ 		break;
+ 	default:
+-		do_undefinstr(regs);
++		do_el0_undef(regs, esr);
+ 		return;
+ 	}
+ 
+@@ -676,12 +667,11 @@ void do_cp15instr(unsigned int esr, struct pt_regs *regs)
+ 	 * EL0. Fall back to our usual undefined instruction handler
+ 	 * so that we handle these consistently.
+ 	 */
+-	do_undefinstr(regs);
++	do_el0_undef(regs, esr);
+ }
+-NOKPROBE_SYMBOL(do_cp15instr);
+ #endif
+ 
+-void do_sysinstr(unsigned int esr, struct pt_regs *regs)
++void do_el0_sys(unsigned long esr, struct pt_regs *regs)
+ {
+ 	const struct sys64_hook *hook;
+ 
+@@ -696,9 +686,8 @@ void do_sysinstr(unsigned int esr, struct pt_regs *regs)
+ 	 * back to our usual undefined instruction handler so that we handle
+ 	 * these consistently.
+ 	 */
+-	do_undefinstr(regs);
++	do_el0_undef(regs, esr);
+ }
+-NOKPROBE_SYMBOL(do_sysinstr);
+ 
+ static const char *esr_class_str[] = {
+ 	[0 ... ESR_ELx_EC_MAX]		= "UNRECOGNIZED EC",
+@@ -899,7 +888,7 @@ static int bug_handler(struct pt_regs *regs, unsigned int esr)
+ {
+ 	switch (report_bug(regs->pc, regs)) {
+ 	case BUG_TRAP_TYPE_BUG:
+-		die("Oops - BUG", regs, 0);
++		die("Oops - BUG", regs, esr);
+ 		break;
+ 
+ 	case BUG_TRAP_TYPE_WARN:
+@@ -967,7 +956,7 @@ static int kasan_handler(struct pt_regs *regs, unsigned int esr)
+ 	 * This is something that might be fixed at some point in the future.
+ 	 */
+ 	if (!recover)
+-		die("Oops - KASAN", regs, 0);
++		die("Oops - KASAN", regs, esr);
+ 
+ 	/* If thread survives, skip over the brk instruction and continue: */
+ 	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
+diff --git a/arch/powerpc/include/asm/nohash/32/pte-8xx.h b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
+index 1581204467e1d..2b06da0ffd2d2 100644
+--- a/arch/powerpc/include/asm/nohash/32/pte-8xx.h
++++ b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
+@@ -94,6 +94,13 @@ static inline pte_t pte_wrprotect(pte_t pte)
+ 
+ #define pte_wrprotect pte_wrprotect
+ 
++static inline int pte_read(pte_t pte)
++{
++	return (pte_val(pte) & _PAGE_RO) != _PAGE_NA;
++}
++
++#define pte_read pte_read
++
+ static inline int pte_write(pte_t pte)
+ {
+ 	return !(pte_val(pte) & _PAGE_RO);
+diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
+index a4d475c0fc2c0..6075fac882862 100644
+--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
+@@ -216,7 +216,7 @@ static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
+ {
+ 	unsigned long old;
+ 
+-	if (pte_young(*ptep))
++	if (!pte_young(*ptep))
+ 		return 0;
+ 	old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
+ 	return (old & _PAGE_ACCESSED) != 0;
+diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
+index ac75f4ab0dba1..7ad1d1b042a60 100644
+--- a/arch/powerpc/include/asm/nohash/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/pgtable.h
+@@ -45,7 +45,9 @@ static inline int pte_write(pte_t pte)
+ 	return pte_val(pte) & _PAGE_RW;
+ }
+ #endif
++#ifndef pte_read
+ static inline int pte_read(pte_t pte)		{ return 1; }
++#endif
+ static inline int pte_dirty(pte_t pte)		{ return pte_val(pte) & _PAGE_DIRTY; }
+ static inline int pte_special(pte_t pte)	{ return pte_val(pte) & _PAGE_SPECIAL; }
+ static inline int pte_none(pte_t pte)		{ return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
+diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
+index 053dc83e323b6..0cc3dd9d32e64 100644
+--- a/arch/riscv/net/bpf_jit_comp64.c
++++ b/arch/riscv/net/bpf_jit_comp64.c
+@@ -201,7 +201,7 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
+ 	emit_addi(RV_REG_SP, RV_REG_SP, stack_adjust, ctx);
+ 	/* Set return value. */
+ 	if (!is_tail_call)
+-		emit_mv(RV_REG_A0, RV_REG_A5, ctx);
++		emit_addiw(RV_REG_A0, RV_REG_A5, 0, ctx);
+ 	emit_jalr(RV_REG_ZERO, is_tail_call ? RV_REG_T3 : RV_REG_RA,
+ 		  is_tail_call ? 4 : 0, /* skip TCC init */
+ 		  ctx);
+@@ -394,12 +394,12 @@ static void emit_sext_32_rd(u8 *rd, struct rv_jit_context *ctx)
+ 	*rd = RV_REG_T2;
+ }
+ 
+-static int emit_jump_and_link(u8 rd, s64 rvoff, bool force_jalr,
++static int emit_jump_and_link(u8 rd, s64 rvoff, bool fixed_addr,
+ 			      struct rv_jit_context *ctx)
+ {
+ 	s64 upper, lower;
+ 
+-	if (rvoff && is_21b_int(rvoff) && !force_jalr) {
++	if (rvoff && fixed_addr && is_21b_int(rvoff)) {
+ 		emit(rv_jal(rd, rvoff >> 1), ctx);
+ 		return 0;
+ 	} else if (in_auipc_jalr_range(rvoff)) {
+@@ -420,24 +420,17 @@ static bool is_signed_bpf_cond(u8 cond)
+ 		cond == BPF_JSGE || cond == BPF_JSLE;
+ }
+ 
+-static int emit_call(bool fixed, u64 addr, struct rv_jit_context *ctx)
++static int emit_call(u64 addr, bool fixed_addr, struct rv_jit_context *ctx)
+ {
+ 	s64 off = 0;
+ 	u64 ip;
+-	u8 rd;
+-	int ret;
+ 
+ 	if (addr && ctx->insns) {
+ 		ip = (u64)(long)(ctx->insns + ctx->ninsns);
+ 		off = addr - ip;
+ 	}
+ 
+-	ret = emit_jump_and_link(RV_REG_RA, off, !fixed, ctx);
+-	if (ret)
+-		return ret;
+-	rd = bpf_to_rv_reg(BPF_REG_0, ctx);
+-	emit_mv(rd, RV_REG_A0, ctx);
+-	return 0;
++	return emit_jump_and_link(RV_REG_RA, off, fixed_addr, ctx);
+ }
+ 
+ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
+@@ -731,7 +724,7 @@ out_be:
+ 	/* JUMP off */
+ 	case BPF_JMP | BPF_JA:
+ 		rvoff = rv_offset(i, off, ctx);
+-		ret = emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
++		ret = emit_jump_and_link(RV_REG_ZERO, rvoff, true, ctx);
+ 		if (ret)
+ 			return ret;
+ 		break;
+@@ -850,17 +843,21 @@ out_be:
+ 	/* function call */
+ 	case BPF_JMP | BPF_CALL:
+ 	{
+-		bool fixed;
++		bool fixed_addr;
+ 		u64 addr;
+ 
+ 		mark_call(ctx);
+-		ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass, &addr,
+-					    &fixed);
++		ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
++					    &addr, &fixed_addr);
+ 		if (ret < 0)
+ 			return ret;
+-		ret = emit_call(fixed, addr, ctx);
++
++		ret = emit_call(addr, fixed_addr, ctx);
+ 		if (ret)
+ 			return ret;
++
++		if (insn->src_reg != BPF_PSEUDO_CALL)
++			emit_mv(bpf_to_rv_reg(BPF_REG_0, ctx), RV_REG_A0, ctx);
+ 		break;
+ 	}
+ 	/* tail call */
+@@ -875,7 +872,7 @@ out_be:
+ 			break;
+ 
+ 		rvoff = epilogue_offset(ctx);
+-		ret = emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
++		ret = emit_jump_and_link(RV_REG_ZERO, rvoff, true, ctx);
+ 		if (ret)
+ 			return ret;
+ 		break;
+diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c
+index ebc9a49523aa3..f6690a7003c6e 100644
+--- a/arch/s390/pci/pci_dma.c
++++ b/arch/s390/pci/pci_dma.c
+@@ -541,6 +541,17 @@ static void s390_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+ 		s->dma_length = 0;
+ 	}
+ }
++
++static unsigned long *bitmap_vzalloc(size_t bits, gfp_t flags)
++{
++	size_t n = BITS_TO_LONGS(bits);
++	size_t bytes;
++
++	if (unlikely(check_mul_overflow(n, sizeof(unsigned long), &bytes)))
++		return NULL;
++
++	return vzalloc(bytes);
++}
+ 	
+ int zpci_dma_init_device(struct zpci_dev *zdev)
+ {
+@@ -577,13 +588,13 @@ int zpci_dma_init_device(struct zpci_dev *zdev)
+ 				zdev->end_dma - zdev->start_dma + 1);
+ 	zdev->end_dma = zdev->start_dma + zdev->iommu_size - 1;
+ 	zdev->iommu_pages = zdev->iommu_size >> PAGE_SHIFT;
+-	zdev->iommu_bitmap = vzalloc(zdev->iommu_pages / 8);
++	zdev->iommu_bitmap = bitmap_vzalloc(zdev->iommu_pages, GFP_KERNEL);
+ 	if (!zdev->iommu_bitmap) {
+ 		rc = -ENOMEM;
+ 		goto free_dma_table;
+ 	}
+ 	if (!s390_iommu_strict) {
+-		zdev->lazy_bitmap = vzalloc(zdev->iommu_pages / 8);
++		zdev->lazy_bitmap = bitmap_vzalloc(zdev->iommu_pages, GFP_KERNEL);
+ 		if (!zdev->lazy_bitmap) {
+ 			rc = -ENOMEM;
+ 			goto free_bitmap;
+diff --git a/arch/x86/boot/compressed/sev-es.c b/arch/x86/boot/compressed/sev-es.c
+index 27826c265aab4..e23748fa2d5f5 100644
+--- a/arch/x86/boot/compressed/sev-es.c
++++ b/arch/x86/boot/compressed/sev-es.c
+@@ -106,6 +106,16 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
+ 	return ES_OK;
+ }
+ 
++static enum es_result vc_ioio_check(struct es_em_ctxt *ctxt, u16 port, size_t size)
++{
++	return ES_OK;
++}
++
++static bool fault_in_kernel_space(unsigned long address)
++{
++	return false;
++}
++
+ #undef __init
+ #undef __pa
+ #define __init
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 202a52e42a368..8465fc8ffd990 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -541,12 +541,17 @@
+ 
+ #define MSR_AMD64_VIRT_SPEC_CTRL	0xc001011f
+ 
+-/* Fam 17h MSRs */
+-#define MSR_F17H_IRPERF			0xc00000e9
++/* Zen4 */
++#define MSR_ZEN4_BP_CFG			0xc001102e
++#define MSR_ZEN4_BP_CFG_SHARED_BTB_FIX_BIT 5
+ 
++/* Zen 2 */
+ #define MSR_ZEN2_SPECTRAL_CHICKEN	0xc00110e3
+ #define MSR_ZEN2_SPECTRAL_CHICKEN_BIT	BIT_ULL(1)
+ 
++/* Fam 17h MSRs */
++#define MSR_F17H_IRPERF			0xc00000e9
++
+ /* Fam 16h MSRs */
+ #define MSR_F16H_L2I_PERF_CTL		0xc0010230
+ #define MSR_F16H_L2I_PERF_CTR		0xc0010231
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index e6e63a9d27cbe..d58621f0fe515 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -424,6 +424,17 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
+ 	u8 insn_buff[MAX_PATCH_LEN];
+ 
+ 	DPRINTK("alt table %px, -> %px", start, end);
++
++	/*
++	 * In the case CONFIG_X86_5LEVEL=y, KASAN_SHADOW_START is defined using
++	 * cpu_feature_enabled(X86_FEATURE_LA57) and is therefore patched here.
++	 * During the process, KASAN becomes confused seeing partial LA57
++	 * conversion and triggers a false-positive out-of-bound report.
++	 *
++	 * Disable KASAN until the patching is complete.
++	 */
++	kasan_disable_current();
++
+ 	/*
+ 	 * The scan order should be from start to end. A later scanned
+ 	 * alternative code can overwrite previously scanned alternative code.
+@@ -491,6 +502,8 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
+ next:
+ 		optimize_nops(instr, a->instrlen);
+ 	}
++
++	kasan_enable_current();
+ }
+ 
+ #if defined(CONFIG_RETPOLINE) && defined(CONFIG_STACK_VALIDATION)
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 2bc1c78a7cc57..d254e02aa179f 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -81,6 +81,10 @@ static const int amd_div0[] =
+ 	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0x00, 0x0, 0x2f, 0xf),
+ 			   AMD_MODEL_RANGE(0x17, 0x50, 0x0, 0x5f, 0xf));
+ 
++static const int amd_erratum_1485[] =
++	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x19, 0x10, 0x0, 0x1f, 0xf),
++			   AMD_MODEL_RANGE(0x19, 0x60, 0x0, 0xaf, 0xf));
++
+ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
+ {
+ 	int osvw_id = *erratum++;
+@@ -1178,6 +1182,10 @@ static void init_amd(struct cpuinfo_x86 *c)
+ 		pr_notice_once("AMD Zen1 DIV0 bug detected. Disable SMT for full protection.\n");
+ 		setup_force_cpu_bug(X86_BUG_DIV0);
+ 	}
++
++	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) &&
++	     cpu_has_amd_erratum(c, amd_erratum_1485))
++		msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_SHARED_BTB_FIX_BIT);
+ }
+ 
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
+index 82db4014deb21..e9f8a2bc5de96 100644
+--- a/arch/x86/kernel/sev-es-shared.c
++++ b/arch/x86/kernel/sev-es-shared.c
+@@ -217,6 +217,23 @@ fail:
+ 		asm volatile("hlt\n");
+ }
+ 
++static enum es_result vc_insn_string_check(struct es_em_ctxt *ctxt,
++					   unsigned long address,
++					   bool write)
++{
++	if (user_mode(ctxt->regs) && fault_in_kernel_space(address)) {
++		ctxt->fi.vector     = X86_TRAP_PF;
++		ctxt->fi.error_code = X86_PF_USER;
++		ctxt->fi.cr2        = address;
++		if (write)
++			ctxt->fi.error_code |= X86_PF_WRITE;
++
++		return ES_EXCEPTION;
++	}
++
++	return ES_OK;
++}
++
+ static enum es_result vc_insn_string_read(struct es_em_ctxt *ctxt,
+ 					  void *src, char *buf,
+ 					  unsigned int data_size,
+@@ -224,7 +241,12 @@ static enum es_result vc_insn_string_read(struct es_em_ctxt *ctxt,
+ 					  bool backwards)
+ {
+ 	int i, b = backwards ? -1 : 1;
+-	enum es_result ret = ES_OK;
++	unsigned long address = (unsigned long)src;
++	enum es_result ret;
++
++	ret = vc_insn_string_check(ctxt, address, false);
++	if (ret != ES_OK)
++		return ret;
+ 
+ 	for (i = 0; i < count; i++) {
+ 		void *s = src + (i * data_size * b);
+@@ -245,7 +267,12 @@ static enum es_result vc_insn_string_write(struct es_em_ctxt *ctxt,
+ 					   bool backwards)
+ {
+ 	int i, s = backwards ? -1 : 1;
+-	enum es_result ret = ES_OK;
++	unsigned long address = (unsigned long)dst;
++	enum es_result ret;
++
++	ret = vc_insn_string_check(ctxt, address, true);
++	if (ret != ES_OK)
++		return ret;
+ 
+ 	for (i = 0; i < count; i++) {
+ 		void *d = dst + (i * data_size * s);
+@@ -281,6 +308,9 @@ static enum es_result vc_insn_string_write(struct es_em_ctxt *ctxt,
+ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
+ {
+ 	struct insn *insn = &ctxt->insn;
++	size_t size;
++	u64 port;
++
+ 	*exitinfo = 0;
+ 
+ 	switch (insn->opcode.bytes[0]) {
+@@ -289,7 +319,7 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
+ 	case 0x6d:
+ 		*exitinfo |= IOIO_TYPE_INS;
+ 		*exitinfo |= IOIO_SEG_ES;
+-		*exitinfo |= (ctxt->regs->dx & 0xffff) << 16;
++		port	   = ctxt->regs->dx & 0xffff;
+ 		break;
+ 
+ 	/* OUTS opcodes */
+@@ -297,41 +327,43 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
+ 	case 0x6f:
+ 		*exitinfo |= IOIO_TYPE_OUTS;
+ 		*exitinfo |= IOIO_SEG_DS;
+-		*exitinfo |= (ctxt->regs->dx & 0xffff) << 16;
++		port	   = ctxt->regs->dx & 0xffff;
+ 		break;
+ 
+ 	/* IN immediate opcodes */
+ 	case 0xe4:
+ 	case 0xe5:
+ 		*exitinfo |= IOIO_TYPE_IN;
+-		*exitinfo |= (u8)insn->immediate.value << 16;
++		port	   = (u8)insn->immediate.value & 0xffff;
+ 		break;
+ 
+ 	/* OUT immediate opcodes */
+ 	case 0xe6:
+ 	case 0xe7:
+ 		*exitinfo |= IOIO_TYPE_OUT;
+-		*exitinfo |= (u8)insn->immediate.value << 16;
++		port	   = (u8)insn->immediate.value & 0xffff;
+ 		break;
+ 
+ 	/* IN register opcodes */
+ 	case 0xec:
+ 	case 0xed:
+ 		*exitinfo |= IOIO_TYPE_IN;
+-		*exitinfo |= (ctxt->regs->dx & 0xffff) << 16;
++		port	   = ctxt->regs->dx & 0xffff;
+ 		break;
+ 
+ 	/* OUT register opcodes */
+ 	case 0xee:
+ 	case 0xef:
+ 		*exitinfo |= IOIO_TYPE_OUT;
+-		*exitinfo |= (ctxt->regs->dx & 0xffff) << 16;
++		port	   = ctxt->regs->dx & 0xffff;
+ 		break;
+ 
+ 	default:
+ 		return ES_DECODE_FAILED;
+ 	}
+ 
++	*exitinfo |= port << 16;
++
+ 	switch (insn->opcode.bytes[0]) {
+ 	case 0x6c:
+ 	case 0x6e:
+@@ -341,12 +373,15 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
+ 	case 0xee:
+ 		/* Single byte opcodes */
+ 		*exitinfo |= IOIO_DATA_8;
++		size       = 1;
+ 		break;
+ 	default:
+ 		/* Length determined by instruction parsing */
+ 		*exitinfo |= (insn->opnd_bytes == 2) ? IOIO_DATA_16
+ 						     : IOIO_DATA_32;
++		size       = (insn->opnd_bytes == 2) ? 2 : 4;
+ 	}
++
+ 	switch (insn->addr_bytes) {
+ 	case 2:
+ 		*exitinfo |= IOIO_ADDR_16;
+@@ -362,7 +397,7 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
+ 	if (insn_has_rep_prefix(insn))
+ 		*exitinfo |= IOIO_REP;
+ 
+-	return ES_OK;
++	return vc_ioio_check(ctxt, (u16)port, size);
+ }
+ 
+ static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
+diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
+index f441002c23276..b6f9fe0d6173a 100644
+--- a/arch/x86/kernel/sev-es.c
++++ b/arch/x86/kernel/sev-es.c
+@@ -448,6 +448,33 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt
+ 	return ES_OK;
+ }
+ 
++static enum es_result vc_ioio_check(struct es_em_ctxt *ctxt, u16 port, size_t size)
++{
++	BUG_ON(size > 4);
++
++	if (user_mode(ctxt->regs)) {
++		struct thread_struct *t = &current->thread;
++		struct io_bitmap *iobm = t->io_bitmap;
++		size_t idx;
++
++		if (!iobm)
++			goto fault;
++
++		for (idx = port; idx < port + size; ++idx) {
++			if (test_bit(idx, iobm->bitmap))
++				goto fault;
++		}
++	}
++
++	return ES_OK;
++
++fault:
++	ctxt->fi.vector = X86_TRAP_GP;
++	ctxt->fi.error_code = 0;
++
++	return ES_EXCEPTION;
++}
++
+ /* Include code shared with pre-decompression boot stage */
+ #include "sev-es-shared.c"
+ 
+@@ -970,6 +997,9 @@ static enum es_result vc_handle_mmio(struct ghcb *ghcb,
+ 	enum es_result ret;
+ 	long *reg_data;
+ 
++	if (user_mode(ctxt->regs))
++		return ES_UNSUPPORTED;
++
+ 	switch (insn->opcode.bytes[0]) {
+ 	/* MMIO Write */
+ 	case 0x88:
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 21189804524a7..a40408895e230 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -2397,13 +2397,17 @@ int kvm_apic_local_deliver(struct kvm_lapic *apic, int lvt_type)
+ {
+ 	u32 reg = kvm_lapic_get_reg(apic, lvt_type);
+ 	int vector, mode, trig_mode;
++	int r;
+ 
+ 	if (kvm_apic_hw_enabled(apic) && !(reg & APIC_LVT_MASKED)) {
+ 		vector = reg & APIC_VECTOR_MASK;
+ 		mode = reg & APIC_MODE_MASK;
+ 		trig_mode = reg & APIC_LVT_LEVEL_TRIGGER;
+-		return __apic_accept_irq(apic, mode, vector, 1, trig_mode,
+-					NULL);
++
++		r = __apic_accept_irq(apic, mode, vector, 1, trig_mode, NULL);
++		if (r && lvt_type == APIC_LVTPC)
++			kvm_lapic_set_reg(apic, APIC_LVTPC, reg | APIC_LVT_MASKED);
++		return r;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/acpi/irq.c b/drivers/acpi/irq.c
+index e209081d644b5..6a9490ad78ceb 100644
+--- a/drivers/acpi/irq.c
++++ b/drivers/acpi/irq.c
+@@ -52,6 +52,7 @@ int acpi_register_gsi(struct device *dev, u32 gsi, int trigger,
+ 		      int polarity)
+ {
+ 	struct irq_fwspec fwspec;
++	unsigned int irq;
+ 
+ 	if (WARN_ON(!acpi_gsi_domain_id)) {
+ 		pr_warn("GSI: No registered irqchip, giving up\n");
+@@ -63,7 +64,11 @@ int acpi_register_gsi(struct device *dev, u32 gsi, int trigger,
+ 	fwspec.param[1] = acpi_dev_get_irq_type(trigger, polarity);
+ 	fwspec.param_count = 2;
+ 
+-	return irq_create_fwspec_mapping(&fwspec);
++	irq = irq_create_fwspec_mapping(&fwspec);
++	if (!irq)
++		return -EINVAL;
++
++	return irq;
+ }
+ EXPORT_SYMBOL_GPL(acpi_register_gsi);
+ 
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index f2f5f1dc7c61d..bfd821173f863 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -16,6 +16,7 @@
+ #include <linux/ioport.h>
+ #include <linux/slab.h>
+ #include <linux/irq.h>
++#include <linux/dmi.h>
+ 
+ #ifdef CONFIG_X86
+ #define valid_IRQ(i) (((i) != 0) && ((i) != 2))
+@@ -380,21 +381,117 @@ unsigned int acpi_dev_get_irq_type(int triggering, int polarity)
+ }
+ EXPORT_SYMBOL_GPL(acpi_dev_get_irq_type);
+ 
+-static void acpi_dev_irqresource_disabled(struct resource *res, u32 gsi)
++static const struct dmi_system_id medion_laptop[] = {
++	{
++		.ident = "MEDION P15651",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "MEDION"),
++			DMI_MATCH(DMI_BOARD_NAME, "M15T"),
++		},
++	},
++	{ }
++};
++
++static const struct dmi_system_id asus_laptop[] = {
++	{
++		.ident = "Asus Vivobook K3402ZA",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "K3402ZA"),
++		},
++	},
++	{
++		.ident = "Asus Vivobook K3502ZA",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "K3502ZA"),
++		},
++	},
++	{
++		.ident = "Asus Vivobook S5402ZA",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "S5402ZA"),
++		},
++	},
++	{
++		.ident = "Asus Vivobook S5602ZA",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"),
++		},
++	},
++	{
++		.ident = "Asus ExpertBook B1402CBA",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "B1402CBA"),
++		},
++	},
++	{
++		.ident = "Asus ExpertBook B1502CBA",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "B1502CBA"),
++		},
++	},
++	{
++		.ident = "Asus ExpertBook B2402CBA",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "B2402CBA"),
++		},
++	},
++	{
++		.ident = "Asus ExpertBook B2502",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "B2502CBA"),
++		},
++	},
++	{ }
++};
++
++struct irq_override_cmp {
++	const struct dmi_system_id *system;
++	unsigned char irq;
++	unsigned char triggering;
++	unsigned char polarity;
++	unsigned char shareable;
++};
++
++static const struct irq_override_cmp skip_override_table[] = {
++	{ medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0 },
++	{ asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0 },
++};
++
++static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
++				  u8 shareable)
+ {
+-	res->start = gsi;
+-	res->end = gsi;
+-	res->flags = IORESOURCE_IRQ | IORESOURCE_DISABLED | IORESOURCE_UNSET;
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(skip_override_table); i++) {
++		const struct irq_override_cmp *entry = &skip_override_table[i];
++
++		if (dmi_check_system(entry->system) &&
++		    entry->irq == gsi &&
++		    entry->triggering == triggering &&
++		    entry->polarity == polarity &&
++		    entry->shareable == shareable)
++			return false;
++	}
++
++	return true;
+ }
+ 
+ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
+ 				     u8 triggering, u8 polarity, u8 shareable,
+-				     bool legacy)
++				     bool check_override)
+ {
+ 	int irq, p, t;
+ 
+ 	if (!valid_IRQ(gsi)) {
+-		acpi_dev_irqresource_disabled(res, gsi);
++		irqresource_disabled(res, gsi);
+ 		return;
+ 	}
+ 
+@@ -408,7 +505,9 @@ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
+ 	 * using extended IRQ descriptors we take the IRQ configuration
+ 	 * from _CRS directly.
+ 	 */
+-	if (legacy && !acpi_get_override_irq(gsi, &t, &p)) {
++	if (check_override &&
++	    acpi_dev_irq_override(gsi, triggering, polarity, shareable) &&
++	    !acpi_get_override_irq(gsi, &t, &p)) {
+ 		u8 trig = t ? ACPI_LEVEL_SENSITIVE : ACPI_EDGE_SENSITIVE;
+ 		u8 pol = p ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH;
+ 
+@@ -426,7 +525,7 @@ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
+ 		res->start = irq;
+ 		res->end = irq;
+ 	} else {
+-		acpi_dev_irqresource_disabled(res, gsi);
++		irqresource_disabled(res, gsi);
+ 	}
+ }
+ 
+@@ -463,7 +562,7 @@ bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
+ 		 */
+ 		irq = &ares->data.irq;
+ 		if (index >= irq->interrupt_count) {
+-			acpi_dev_irqresource_disabled(res, 0);
++			irqresource_disabled(res, 0);
+ 			return false;
+ 		}
+ 		acpi_dev_get_irqresource(res, irq->interrupts[index],
+@@ -473,7 +572,7 @@ bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
+ 	case ACPI_RESOURCE_TYPE_EXTENDED_IRQ:
+ 		ext_irq = &ares->data.extended_irq;
+ 		if (index >= ext_irq->interrupt_count) {
+-			acpi_dev_irqresource_disabled(res, 0);
++			irqresource_disabled(res, 0);
+ 			return false;
+ 		}
+ 		if (is_gsi(ext_irq))
+@@ -481,7 +580,7 @@ bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
+ 					 ext_irq->triggering, ext_irq->polarity,
+ 					 ext_irq->shareable, false);
+ 		else
+-			acpi_dev_irqresource_disabled(res, 0);
++			irqresource_disabled(res, 0);
+ 		break;
+ 	default:
+ 		res->flags = 0;
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 5fb3eda0a280b..2308c2be85a18 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -2224,7 +2224,7 @@ static void ata_eh_link_report(struct ata_link *link)
+ 	struct ata_eh_context *ehc = &link->eh_context;
+ 	struct ata_queued_cmd *qc;
+ 	const char *frozen, *desc;
+-	char tries_buf[6] = "";
++	char tries_buf[16] = "";
+ 	int tag, nr_failed = 0;
+ 
+ 	if (ehc->i.flags & ATA_EHI_QUIET)
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 55a30afc14a00..3edff8606ac95 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1511,7 +1511,7 @@ static int dev_get_regmap_match(struct device *dev, void *res, void *data)
+ 
+ 	/* If the user didn't specify a name match any */
+ 	if (data)
+-		return !strcmp((*r)->name, data);
++		return (*r)->name && !strcmp((*r)->name, data);
+ 	else
+ 		return 1;
+ }
+diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c
+index 8469f9876dd26..31d70bad83d29 100644
+--- a/drivers/bluetooth/hci_vhci.c
++++ b/drivers/bluetooth/hci_vhci.c
+@@ -67,7 +67,10 @@ static int vhci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
+ 	struct vhci_data *data = hci_get_drvdata(hdev);
+ 
+ 	memcpy(skb_push(skb, 1), &hci_skb_pkt_type(skb), 1);
++
++	mutex_lock(&data->open_mutex);
+ 	skb_queue_tail(&data->readq, skb);
++	mutex_unlock(&data->open_mutex);
+ 
+ 	wake_up_interruptible(&data->read_wait);
+ 	return 0;
+diff --git a/drivers/counter/microchip-tcb-capture.c b/drivers/counter/microchip-tcb-capture.c
+index 85fbbac06d314..6f17644b6e0b1 100644
+--- a/drivers/counter/microchip-tcb-capture.c
++++ b/drivers/counter/microchip-tcb-capture.c
+@@ -111,7 +111,7 @@ static int mchp_tc_count_function_set(struct counter_device *counter,
+ 		priv->qdec_mode = 0;
+ 		/* Set highest rate based on whether soc has gclk or not */
+ 		bmr &= ~(ATMEL_TC_QDEN | ATMEL_TC_POSEN);
+-		if (priv->tc_cfg->has_gclk)
++		if (!priv->tc_cfg->has_gclk)
+ 			cmr |= ATMEL_TC_TIMER_CLOCK2;
+ 		else
+ 			cmr |= ATMEL_TC_TIMER_CLOCK1;
+diff --git a/drivers/dma/mediatek/mtk-uart-apdma.c b/drivers/dma/mediatek/mtk-uart-apdma.c
+index a1517ef1f4a01..0acf6a92a4ad3 100644
+--- a/drivers/dma/mediatek/mtk-uart-apdma.c
++++ b/drivers/dma/mediatek/mtk-uart-apdma.c
+@@ -451,9 +451,8 @@ static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
+ 	mtk_uart_apdma_write(c, VFF_EN, VFF_EN_CLR_B);
+ 	mtk_uart_apdma_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
+ 
+-	synchronize_irq(c->irq);
+-
+ 	spin_unlock_irqrestore(&c->vc.lock, flags);
++	synchronize_irq(c->irq);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c
+index 9d54746c422c6..3e57176ca0ca5 100644
+--- a/drivers/dma/stm32-mdma.c
++++ b/drivers/dma/stm32-mdma.c
+@@ -1206,6 +1206,10 @@ static int stm32_mdma_resume(struct dma_chan *c)
+ 	unsigned long flags;
+ 	u32 status, reg;
+ 
++	/* Transfer can be terminated */
++	if (!chan->desc || (stm32_mdma_read(dmadev, STM32_MDMA_CCR(chan->id)) & STM32_MDMA_CCR_EN))
++		return -EPERM;
++
+ 	hwdesc = chan->desc->node[chan->curr_hwdesc].hwdesc;
+ 
+ 	spin_lock_irqsave(&chan->vchan.lock, flags);
+diff --git a/drivers/gpio/gpio-timberdale.c b/drivers/gpio/gpio-timberdale.c
+index de14949a3fe5a..92c1f2baa4bff 100644
+--- a/drivers/gpio/gpio-timberdale.c
++++ b/drivers/gpio/gpio-timberdale.c
+@@ -43,9 +43,10 @@ static int timbgpio_update_bit(struct gpio_chip *gpio, unsigned index,
+ 	unsigned offset, bool enabled)
+ {
+ 	struct timbgpio *tgpio = gpiochip_get_data(gpio);
++	unsigned long flags;
+ 	u32 reg;
+ 
+-	spin_lock(&tgpio->lock);
++	spin_lock_irqsave(&tgpio->lock, flags);
+ 	reg = ioread32(tgpio->membase + offset);
+ 
+ 	if (enabled)
+@@ -54,7 +55,7 @@ static int timbgpio_update_bit(struct gpio_chip *gpio, unsigned index,
+ 		reg &= ~(1 << index);
+ 
+ 	iowrite32(reg, tgpio->membase + offset);
+-	spin_unlock(&tgpio->lock);
++	spin_unlock_irqrestore(&tgpio->lock, flags);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c
+index 396a687e020f5..c2c38f13801f5 100644
+--- a/drivers/gpio/gpio-vf610.c
++++ b/drivers/gpio/gpio-vf610.c
+@@ -127,14 +127,14 @@ static int vf610_gpio_direction_output(struct gpio_chip *chip, unsigned gpio,
+ 	unsigned long mask = BIT(gpio);
+ 	u32 val;
+ 
++	vf610_gpio_set(chip, gpio, value);
++
+ 	if (port->sdata && port->sdata->have_paddr) {
+ 		val = vf610_gpio_readl(port->gpio_base + GPIO_PDDR);
+ 		val |= mask;
+ 		vf610_gpio_writel(val, port->gpio_base + GPIO_PDDR);
+ 	}
+ 
+-	vf610_gpio_set(chip, gpio, value);
+-
+ 	return pinctrl_gpio_direction_output(chip->base + gpio);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 099542dd31544..36a9e9c84ed44 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -862,12 +862,19 @@ static void disable_vbios_mode_if_required(
+ 		if (stream == NULL)
+ 			continue;
+ 
++		if (stream->apply_seamless_boot_optimization)
++			continue;
++
++		// only looking for first odm pipe
++		if (pipe->prev_odm_pipe)
++			continue;
++
+ 		if (stream->link->local_sink &&
+ 			stream->link->local_sink->sink_signal == SIGNAL_TYPE_EDP) {
+ 			link = stream->link;
+ 		}
+ 
+-		if (link != NULL) {
++		if (link != NULL && link->link_enc->funcs->is_dig_enabled(link->link_enc)) {
+ 			unsigned int enc_inst, tg_inst = 0;
+ 			unsigned int pix_clk_100hz;
+ 
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index 9c3bbe2c3e6f9..c4ed4f1b369c1 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -64,6 +64,14 @@
+  * support can instead use e.g. drm_helper_hpd_irq_event().
+  */
+ 
++/*
++ * Global connector list for drm_connector_find_by_fwnode().
++ * Note drm_connector_[un]register() first take connector->lock and then
++ * take the connector_list_lock.
++ */
++static DEFINE_MUTEX(connector_list_lock);
++static LIST_HEAD(connector_list);
++
+ struct drm_conn_prop_enum_list {
+ 	int type;
+ 	const char *name;
+@@ -265,6 +273,7 @@ int drm_connector_init(struct drm_device *dev,
+ 		goto out_put_type_id;
+ 	}
+ 
++	INIT_LIST_HEAD(&connector->global_connector_list_entry);
+ 	INIT_LIST_HEAD(&connector->probed_modes);
+ 	INIT_LIST_HEAD(&connector->modes);
+ 	mutex_init(&connector->mutex);
+@@ -471,6 +480,8 @@ void drm_connector_cleanup(struct drm_connector *connector)
+ 	drm_mode_object_unregister(dev, &connector->base);
+ 	kfree(connector->name);
+ 	connector->name = NULL;
++	fwnode_handle_put(connector->fwnode);
++	connector->fwnode = NULL;
+ 	spin_lock_irq(&dev->mode_config.connector_list_lock);
+ 	list_del(&connector->head);
+ 	dev->mode_config.num_connector--;
+@@ -532,6 +543,9 @@ int drm_connector_register(struct drm_connector *connector)
+ 	/* Let userspace know we have a new connector */
+ 	drm_sysfs_hotplug_event(connector->dev);
+ 
++	mutex_lock(&connector_list_lock);
++	list_add_tail(&connector->global_connector_list_entry, &connector_list);
++	mutex_unlock(&connector_list_lock);
+ 	goto unlock;
+ 
+ err_debugfs:
+@@ -560,6 +574,10 @@ void drm_connector_unregister(struct drm_connector *connector)
+ 		return;
+ 	}
+ 
++	mutex_lock(&connector_list_lock);
++	list_del_init(&connector->global_connector_list_entry);
++	mutex_unlock(&connector_list_lock);
++
+ 	if (connector->funcs->early_unregister)
+ 		connector->funcs->early_unregister(connector);
+ 
+@@ -2462,6 +2480,67 @@ out:
+ 	return ret;
+ }
+ 
++/**
++ * drm_connector_find_by_fwnode - Find a connector based on the associated fwnode
++ * @fwnode: fwnode for which to find the matching drm_connector
++ *
++ * This functions looks up a drm_connector based on its associated fwnode. When
++ * a connector is found a reference to the connector is returned. The caller must
++ * call drm_connector_put() to release this reference when it is done with the
++ * connector.
++ *
++ * Returns: A reference to the found connector or an ERR_PTR().
++ */
++struct drm_connector *drm_connector_find_by_fwnode(struct fwnode_handle *fwnode)
++{
++	struct drm_connector *connector, *found = ERR_PTR(-ENODEV);
++
++	if (!fwnode)
++		return ERR_PTR(-ENODEV);
++
++	mutex_lock(&connector_list_lock);
++
++	list_for_each_entry(connector, &connector_list, global_connector_list_entry) {
++		if (connector->fwnode == fwnode ||
++		    (connector->fwnode && connector->fwnode->secondary == fwnode)) {
++			drm_connector_get(connector);
++			found = connector;
++			break;
++		}
++	}
++
++	mutex_unlock(&connector_list_lock);
++
++	return found;
++}
++
++/**
++ * drm_connector_oob_hotplug_event - Report out-of-band hotplug event to connector
++ * @connector: connector to report the event on
++ *
++ * On some hardware a hotplug event notification may come from outside the display
++ * driver / device. An example of this is some USB Type-C setups where the hardware
++ * muxes the DisplayPort data and aux-lines but does not pass the altmode HPD
++ * status bit to the GPU's DP HPD pin.
++ *
++ * This function can be used to report these out-of-band events after obtaining
++ * a drm_connector reference through calling drm_connector_find_by_fwnode().
++ */
++void drm_connector_oob_hotplug_event(struct fwnode_handle *connector_fwnode)
++{
++	struct drm_connector *connector;
++
++	connector = drm_connector_find_by_fwnode(connector_fwnode);
++	if (IS_ERR(connector))
++		return;
++
++	if (connector->funcs->oob_hotplug_event)
++		connector->funcs->oob_hotplug_event(connector);
++
++	drm_connector_put(connector);
++}
++EXPORT_SYMBOL(drm_connector_oob_hotplug_event);
++
+ 
+ /**
+  * DOC: Tile group
+diff --git a/drivers/gpu/drm/drm_crtc_internal.h b/drivers/gpu/drm/drm_crtc_internal.h
+index da96b2f64d7e4..c3577eaee4164 100644
+--- a/drivers/gpu/drm/drm_crtc_internal.h
++++ b/drivers/gpu/drm/drm_crtc_internal.h
+@@ -57,6 +57,7 @@ struct drm_property;
+ struct edid;
+ struct kref;
+ struct work_struct;
++struct fwnode_handle;
+ 
+ /* drm_crtc.c */
+ int drm_mode_crtc_set_obj_prop(struct drm_mode_object *obj,
+@@ -182,6 +183,7 @@ int drm_connector_set_obj_prop(struct drm_mode_object *obj,
+ int drm_connector_create_standard_properties(struct drm_device *dev);
+ const char *drm_get_connector_force_name(enum drm_connector_force force);
+ void drm_connector_free_work_fn(struct work_struct *work);
++struct drm_connector *drm_connector_find_by_fwnode(struct fwnode_handle *fwnode);
+ 
+ /* IOCTL */
+ int drm_connector_property_set_ioctl(struct drm_device *dev,
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 6106fa7c43028..43de9dfcba19a 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -44,6 +44,14 @@ static const struct drm_dmi_panel_orientation_data gpd_micropc = {
+ 	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+ 
++static const struct drm_dmi_panel_orientation_data gpd_onemix2s = {
++	.width = 1200,
++	.height = 1920,
++	.bios_dates = (const char * const []){ "05/21/2018", "10/26/2018",
++		"03/04/2019", NULL },
++	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
++};
++
+ static const struct drm_dmi_panel_orientation_data gpd_pocket = {
+ 	.width = 1200,
+ 	.height = 1920,
+@@ -329,6 +337,14 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "LTH17"),
+ 		},
+ 		.driver_data = (void *)&lcd800x1280_rightside_up,
++	}, {	/* One Mix 2S (generic strings, also match on bios date) */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Default string"),
++		  DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Default string"),
++		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
++		},
++		.driver_data = (void *)&gpd_onemix2s,
+ 	},
+ 	{}
+ };
+diff --git a/drivers/gpu/drm/drm_sysfs.c b/drivers/gpu/drm/drm_sysfs.c
+index f0336c8046392..71a0d9596efee 100644
+--- a/drivers/gpu/drm/drm_sysfs.c
++++ b/drivers/gpu/drm/drm_sysfs.c
+@@ -10,6 +10,7 @@
+  * Copyright (c) 2003-2004 IBM Corp.
+  */
+ 
++#include <linux/acpi.h>
+ #include <linux/device.h>
+ #include <linux/err.h>
+ #include <linux/export.h>
+@@ -50,8 +51,45 @@ static struct device_type drm_sysfs_device_minor = {
+ 	.name = "drm_minor"
+ };
+ 
++static struct device_type drm_sysfs_device_connector = {
++	.name = "drm_connector",
++};
++
+ struct class *drm_class;
+ 
++#ifdef CONFIG_ACPI
++static bool drm_connector_acpi_bus_match(struct device *dev)
++{
++	return dev->type == &drm_sysfs_device_connector;
++}
++
++static struct acpi_device *drm_connector_acpi_find_companion(struct device *dev)
++{
++	struct drm_connector *connector = to_drm_connector(dev);
++
++	return to_acpi_device_node(connector->fwnode);
++}
++
++static struct acpi_bus_type drm_connector_acpi_bus = {
++	.name = "drm_connector",
++	.match = drm_connector_acpi_bus_match,
++	.find_companion = drm_connector_acpi_find_companion,
++};
++
++static void drm_sysfs_acpi_register(void)
++{
++	register_acpi_bus_type(&drm_connector_acpi_bus);
++}
++
++static void drm_sysfs_acpi_unregister(void)
++{
++	unregister_acpi_bus_type(&drm_connector_acpi_bus);
++}
++#else
++static void drm_sysfs_acpi_register(void) { }
++static void drm_sysfs_acpi_unregister(void) { }
++#endif
++
+ static char *drm_devnode(struct device *dev, umode_t *mode)
+ {
+ 	return kasprintf(GFP_KERNEL, "dri/%s", dev_name(dev));
+@@ -85,6 +123,8 @@ int drm_sysfs_init(void)
+ 	}
+ 
+ 	drm_class->devnode = drm_devnode;
++
++	drm_sysfs_acpi_register();
+ 	return 0;
+ }
+ 
+@@ -97,11 +137,17 @@ void drm_sysfs_destroy(void)
+ {
+ 	if (IS_ERR_OR_NULL(drm_class))
+ 		return;
++	drm_sysfs_acpi_unregister();
+ 	class_remove_file(drm_class, &class_attr_version.attr);
+ 	class_destroy(drm_class);
+ 	drm_class = NULL;
+ }
+ 
++static void drm_sysfs_release(struct device *dev)
++{
++	kfree(dev);
++}
++
+ /*
+  * Connector properties
+  */
+@@ -274,27 +320,47 @@ static const struct attribute_group *connector_dev_groups[] = {
+ int drm_sysfs_connector_add(struct drm_connector *connector)
+ {
+ 	struct drm_device *dev = connector->dev;
++	struct device *kdev;
++	int r;
+ 
+ 	if (connector->kdev)
+ 		return 0;
+ 
+-	connector->kdev =
+-		device_create_with_groups(drm_class, dev->primary->kdev, 0,
+-					  connector, connector_dev_groups,
+-					  "card%d-%s", dev->primary->index,
+-					  connector->name);
++	kdev = kzalloc(sizeof(*kdev), GFP_KERNEL);
++	if (!kdev)
++		return -ENOMEM;
++
++	device_initialize(kdev);
++	kdev->class = drm_class;
++	kdev->type = &drm_sysfs_device_connector;
++	kdev->parent = dev->primary->kdev;
++	kdev->groups = connector_dev_groups;
++	kdev->release = drm_sysfs_release;
++	dev_set_drvdata(kdev, connector);
++
++	r = dev_set_name(kdev, "card%d-%s", dev->primary->index, connector->name);
++	if (r)
++		goto err_free;
++
+ 	DRM_DEBUG("adding \"%s\" to sysfs\n",
+ 		  connector->name);
+ 
+-	if (IS_ERR(connector->kdev)) {
+-		DRM_ERROR("failed to register connector device: %ld\n", PTR_ERR(connector->kdev));
+-		return PTR_ERR(connector->kdev);
++	r = device_add(kdev);
++	if (r) {
++		drm_err(dev, "failed to register connector device: %d\n", r);
++		goto err_free;
+ 	}
+ 
++	connector->kdev = kdev;
++
+ 	if (connector->ddc)
+ 		return sysfs_create_link(&connector->kdev->kobj,
+ 				 &connector->ddc->dev.kobj, "ddc");
+ 	return 0;
++
++err_free:
++	put_device(kdev);
++	return r;
+ }
+ 
+ void drm_sysfs_connector_remove(struct drm_connector *connector)
+@@ -375,11 +441,6 @@ void drm_sysfs_connector_status_event(struct drm_connector *connector,
+ }
+ EXPORT_SYMBOL(drm_sysfs_connector_status_event);
+ 
+-static void drm_sysfs_release(struct device *dev)
+-{
+-	kfree(dev);
+-}
+-
+ struct device *drm_sysfs_minor_alloc(struct drm_minor *minor)
+ {
+ 	const char *minor_str;
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+index 92dd65befbcb8..01a88b03bc6d3 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+@@ -222,6 +222,7 @@ static vm_fault_t i915_error_to_vmf_fault(int err)
+ 	case 0:
+ 	case -EAGAIN:
+ 	case -ENOSPC: /* transient failure to evict? */
++	case -ENOBUFS: /* temporarily out of fences? */
+ 	case -ERESTARTSYS:
+ 	case -EINTR:
+ 	case -EBUSY:
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+index 7ea90d25a3b69..8aa9f2335f57a 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+@@ -147,6 +147,7 @@ static void _dpu_plane_calc_bw(struct drm_plane *plane,
+ 	const struct dpu_format *fmt = NULL;
+ 	struct dpu_kms *dpu_kms = _dpu_plane_get_kms(plane);
+ 	int src_width, src_height, dst_height, fps;
++	u64 plane_pixel_rate, plane_bit_rate;
+ 	u64 plane_prefill_bw;
+ 	u64 plane_bw;
+ 	u32 hw_latency_lines;
+@@ -168,13 +169,12 @@ static void _dpu_plane_calc_bw(struct drm_plane *plane,
+ 	scale_factor = src_height > dst_height ?
+ 		mult_frac(src_height, 1, dst_height) : 1;
+ 
+-	plane_bw =
+-		src_width * mode->vtotal * fps * fmt->bpp *
+-		scale_factor;
++	plane_pixel_rate = src_width * mode->vtotal * fps;
++	plane_bit_rate = plane_pixel_rate * fmt->bpp;
+ 
+-	plane_prefill_bw =
+-		src_width * hw_latency_lines * fps * fmt->bpp *
+-		scale_factor * mode->vtotal;
++	plane_bw = plane_bit_rate * scale_factor;
++
++	plane_prefill_bw = plane_bw * hw_latency_lines;
+ 
+ 	do_div(plane_prefill_bw, (vbp+vpw));
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 9fac55c24214a..07becbf3c64fc 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1665,13 +1665,6 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 		return rc;
+ 
+ 	while (--link_train_max_retries) {
+-		rc = dp_ctrl_reinitialize_mainlink(ctrl);
+-		if (rc) {
+-			DRM_ERROR("Failed to reinitialize mainlink. rc=%d\n",
+-					rc);
+-			break;
+-		}
+-
+ 		training_step = DP_TRAINING_NONE;
+ 		rc = dp_ctrl_setup_main_link(ctrl, &cr, &training_step);
+ 		if (rc == 0) {
+@@ -1712,6 +1705,12 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 				break; /* lane == 1 already */
+ 			}
+ 		}
++
++		rc = dp_ctrl_reinitialize_mainlink(ctrl);
++		if (rc) {
++			DRM_ERROR("Failed to reinitialize mainlink. rc=%d\n", rc);
++			break;
++		}
+ 	}
+ 
+ 	if (ctrl->link->sink_request & DP_TEST_LINK_PHY_TEST_PATTERN)
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 5a76aa1389173..fb7792ca39e2c 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -1075,9 +1075,21 @@ static void dsi_wait4video_done(struct msm_dsi_host *msm_host)
+ 
+ static void dsi_wait4video_eng_busy(struct msm_dsi_host *msm_host)
+ {
++	u32 data;
++
+ 	if (!(msm_host->mode_flags & MIPI_DSI_MODE_VIDEO))
+ 		return;
+ 
++	data = dsi_read(msm_host, REG_DSI_STATUS0);
++
++	/* if video mode engine is not busy, its because
++	 * either timing engine was not turned on or the
++	 * DSI controller has finished transmitting the video
++	 * data already, so no need to wait in those cases
++	 */
++	if (!(data & DSI_STATUS0_VIDEO_MODE_ENGINE_BUSY))
++		return;
++
+ 	if (msm_host->power_on && msm_host->enabled) {
+ 		dsi_wait4video_done(msm_host);
+ 		/* delay 4 ms to skip BLLP */
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index 4c6c2e5abf95e..00082c679170a 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -1627,7 +1627,7 @@ static int vmw_cmd_tex_state(struct vmw_private *dev_priv,
+ {
+ 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdSetTextureState);
+ 	SVGA3dTextureState *last_state = (SVGA3dTextureState *)
+-	  ((unsigned long) header + header->size + sizeof(header));
++	  ((unsigned long) header + header->size + sizeof(*header));
+ 	SVGA3dTextureState *cur_state = (SVGA3dTextureState *)
+ 		((unsigned long) header + sizeof(*cmd));
+ 	struct vmw_resource *ctx;
+diff --git a/drivers/hid/hid-holtek-kbd.c b/drivers/hid/hid-holtek-kbd.c
+index 403506b9697e7..b346d68a06f5a 100644
+--- a/drivers/hid/hid-holtek-kbd.c
++++ b/drivers/hid/hid-holtek-kbd.c
+@@ -130,6 +130,10 @@ static int holtek_kbd_input_event(struct input_dev *dev, unsigned int type,
+ 		return -ENODEV;
+ 
+ 	boot_hid = usb_get_intfdata(boot_interface);
++	if (list_empty(&boot_hid->inputs)) {
++		hid_err(hid, "no inputs found\n");
++		return -ENODEV;
++	}
+ 	boot_hid_input = list_first_entry(&boot_hid->inputs,
+ 		struct hid_input, list);
+ 
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 6ac0c2d9a147c..651fa0966939e 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -3936,7 +3936,8 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 			goto hid_hw_init_fail;
+ 	}
+ 
+-	hidpp_connect_event(hidpp);
++	schedule_work(&hidpp->work);
++	flush_work(&hidpp->work);
+ 
+ 	if (will_restart) {
+ 		/* Reset the HID node state */
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index dc7c33f6b2c4e..84b12599eaf69 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2058,6 +2058,10 @@ static const struct hid_device_id mt_devices[] = {
+ 			USB_DEVICE_ID_MTP_STM)},
+ 
+ 	/* Synaptics devices */
++	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++			USB_VENDOR_ID_SYNAPTICS, 0xcd7e) },
++
+ 	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
+ 		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+ 			USB_VENDOR_ID_SYNAPTICS, 0xce08) },
+diff --git a/drivers/i2c/i2c-mux.c b/drivers/i2c/i2c-mux.c
+index 774507b54b57b..c90cec8d9656d 100644
+--- a/drivers/i2c/i2c-mux.c
++++ b/drivers/i2c/i2c-mux.c
+@@ -340,7 +340,7 @@ int i2c_mux_add_adapter(struct i2c_mux_core *muxc,
+ 		priv->adap.lock_ops = &i2c_parent_lock_ops;
+ 
+ 	/* Sanity check on class */
+-	if (i2c_mux_parent_classes(parent) & class)
++	if (i2c_mux_parent_classes(parent) & class & ~I2C_CLASS_DEPRECATED)
+ 		dev_err(&parent->dev,
+ 			"Segment %d behind mux can't share classes with ancestors\n",
+ 			chan_id);
+diff --git a/drivers/iio/pressure/bmp280-core.c b/drivers/iio/pressure/bmp280-core.c
+index 6b7da40f99c82..919a338d91814 100644
+--- a/drivers/iio/pressure/bmp280-core.c
++++ b/drivers/iio/pressure/bmp280-core.c
+@@ -1112,7 +1112,7 @@ int bmp280_common_probe(struct device *dev,
+ 	 * however as it happens, the BMP085 shares the chip ID of BMP180
+ 	 * so we look for an IRQ if we have that.
+ 	 */
+-	if (irq > 0 || (chip_id  == BMP180_CHIP_ID)) {
++	if (irq > 0 && (chip_id  == BMP180_CHIP_ID)) {
+ 		ret = bmp085_fetch_eoc_irq(dev, name, irq, data);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/iio/pressure/dps310.c b/drivers/iio/pressure/dps310.c
+index cf8b92fae1b3d..1b6b9530f1662 100644
+--- a/drivers/iio/pressure/dps310.c
++++ b/drivers/iio/pressure/dps310.c
+@@ -57,8 +57,8 @@
+ #define  DPS310_RESET_MAGIC	0x09
+ #define DPS310_COEF_BASE	0x10
+ 
+-/* Make sure sleep time is <= 20ms for usleep_range */
+-#define DPS310_POLL_SLEEP_US(t)		min(20000, (t) / 8)
++/* Make sure sleep time is <= 30ms for usleep_range */
++#define DPS310_POLL_SLEEP_US(t)		min(30000, (t) / 8)
+ /* Silently handle error in rate value here */
+ #define DPS310_POLL_TIMEOUT_US(rc)	((rc) <= 0 ? 1000000 : 1000000 / (rc))
+ 
+@@ -402,8 +402,8 @@ static int dps310_reset_wait(struct dps310_data *data)
+ 	if (rc)
+ 		return rc;
+ 
+-	/* Wait for device chip access: 2.5ms in specification */
+-	usleep_range(2500, 12000);
++	/* Wait for device chip access: 15ms in specification */
++	usleep_range(15000, 55000);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iio/pressure/ms5611_core.c b/drivers/iio/pressure/ms5611_core.c
+index 874a73b3ea9d6..f88d8f2ce6102 100644
+--- a/drivers/iio/pressure/ms5611_core.c
++++ b/drivers/iio/pressure/ms5611_core.c
+@@ -76,7 +76,7 @@ static bool ms5611_prom_is_valid(u16 *prom, size_t len)
+ 
+ 	crc = (crc >> 12) & 0x000F;
+ 
+-	return crc_orig != 0x0000 && crc == crc_orig;
++	return crc == crc_orig;
+ }
+ 
+ static int ms5611_read_prom(struct iio_dev *indio_dev)
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index e42c812e74c3c..8c54b1be04424 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -1965,6 +1965,9 @@ static int send_fw_act_open_req(struct c4iw_ep *ep, unsigned int atid)
+ 	int win;
+ 
+ 	skb = get_skb(NULL, sizeof(*req), GFP_KERNEL);
++	if (!skb)
++		return -ENOMEM;
++
+ 	req = __skb_put_zero(skb, sizeof(*req));
+ 	req->op_compl = htonl(WR_OP_V(FW_OFLD_CONNECTION_WR));
+ 	req->len16_pkd = htonl(FW_WR_LEN16_V(DIV_ROUND_UP(sizeof(*req), 16)));
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 9b9b9557ae746..68df080263f2f 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -965,67 +965,52 @@ static void srp_disconnect_target(struct srp_target_port *target)
+ 	}
+ }
+ 
+-static void srp_free_req_data(struct srp_target_port *target,
+-			      struct srp_rdma_ch *ch)
++static int srp_exit_cmd_priv(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
+ {
++	struct srp_target_port *target = host_to_target(shost);
+ 	struct srp_device *dev = target->srp_host->srp_dev;
+ 	struct ib_device *ibdev = dev->dev;
+-	struct srp_request *req;
+-	int i;
++	struct srp_request *req = scsi_cmd_priv(cmd);
+ 
+-	if (!ch->req_ring)
+-		return;
+-
+-	for (i = 0; i < target->req_ring_size; ++i) {
+-		req = &ch->req_ring[i];
+-		if (dev->use_fast_reg)
+-			kfree(req->fr_list);
+-		if (req->indirect_dma_addr) {
+-			ib_dma_unmap_single(ibdev, req->indirect_dma_addr,
+-					    target->indirect_size,
+-					    DMA_TO_DEVICE);
+-		}
+-		kfree(req->indirect_desc);
++	kfree(req->fr_list);
++	if (req->indirect_dma_addr) {
++		ib_dma_unmap_single(ibdev, req->indirect_dma_addr,
++				    target->indirect_size,
++				    DMA_TO_DEVICE);
+ 	}
++	kfree(req->indirect_desc);
+ 
+-	kfree(ch->req_ring);
+-	ch->req_ring = NULL;
++	return 0;
+ }
+ 
+-static int srp_alloc_req_data(struct srp_rdma_ch *ch)
++static int srp_init_cmd_priv(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
+ {
+-	struct srp_target_port *target = ch->target;
++	struct srp_target_port *target = host_to_target(shost);
+ 	struct srp_device *srp_dev = target->srp_host->srp_dev;
+ 	struct ib_device *ibdev = srp_dev->dev;
+-	struct srp_request *req;
++	struct srp_request *req = scsi_cmd_priv(cmd);
+ 	dma_addr_t dma_addr;
+-	int i, ret = -ENOMEM;
+-
+-	ch->req_ring = kcalloc(target->req_ring_size, sizeof(*ch->req_ring),
+-			       GFP_KERNEL);
+-	if (!ch->req_ring)
+-		goto out;
+-
+-	for (i = 0; i < target->req_ring_size; ++i) {
+-		req = &ch->req_ring[i];
+-		if (srp_dev->use_fast_reg) {
+-			req->fr_list = kmalloc_array(target->mr_per_cmd,
+-						sizeof(void *), GFP_KERNEL);
+-			if (!req->fr_list)
+-				goto out;
+-		}
+-		req->indirect_desc = kmalloc(target->indirect_size, GFP_KERNEL);
+-		if (!req->indirect_desc)
+-			goto out;
++	int ret = -ENOMEM;
+ 
+-		dma_addr = ib_dma_map_single(ibdev, req->indirect_desc,
+-					     target->indirect_size,
+-					     DMA_TO_DEVICE);
+-		if (ib_dma_mapping_error(ibdev, dma_addr))
++	if (srp_dev->use_fast_reg) {
++		req->fr_list = kmalloc_array(target->mr_per_cmd, sizeof(void *),
++					GFP_KERNEL);
++		if (!req->fr_list)
+ 			goto out;
++	}
++	req->indirect_desc = kmalloc(target->indirect_size, GFP_KERNEL);
++	if (!req->indirect_desc)
++		goto out;
+ 
+-		req->indirect_dma_addr = dma_addr;
++	dma_addr = ib_dma_map_single(ibdev, req->indirect_desc,
++				     target->indirect_size,
++				     DMA_TO_DEVICE);
++	if (ib_dma_mapping_error(ibdev, dma_addr)) {
++		srp_exit_cmd_priv(shost, cmd);
++		goto out;
+ 	}
++
++	req->indirect_dma_addr = dma_addr;
+ 	ret = 0;
+ 
+ out:
+@@ -1067,10 +1052,6 @@ static void srp_remove_target(struct srp_target_port *target)
+ 	}
+ 	cancel_work_sync(&target->tl_err_work);
+ 	srp_rport_put(target->rport);
+-	for (i = 0; i < target->ch_count; i++) {
+-		ch = &target->ch[i];
+-		srp_free_req_data(target, ch);
+-	}
+ 	kfree(target->ch);
+ 	target->ch = NULL;
+ 
+@@ -1289,22 +1270,32 @@ static void srp_finish_req(struct srp_rdma_ch *ch, struct srp_request *req,
+ 	}
+ }
+ 
+-static void srp_terminate_io(struct srp_rport *rport)
++struct srp_terminate_context {
++	struct srp_target_port *srp_target;
++	int scsi_result;
++};
++
++static bool srp_terminate_cmd(struct scsi_cmnd *scmnd, void *context_ptr,
++			      bool reserved)
+ {
+-	struct srp_target_port *target = rport->lld_data;
+-	struct srp_rdma_ch *ch;
+-	int i, j;
++	struct srp_terminate_context *context = context_ptr;
++	struct srp_target_port *target = context->srp_target;
++	u32 tag = blk_mq_unique_tag(scmnd->request);
++	struct srp_rdma_ch *ch = &target->ch[blk_mq_unique_tag_to_hwq(tag)];
++	struct srp_request *req = scsi_cmd_priv(scmnd);
+ 
+-	for (i = 0; i < target->ch_count; i++) {
+-		ch = &target->ch[i];
++	srp_finish_req(ch, req, NULL, context->scsi_result);
+ 
+-		for (j = 0; j < target->req_ring_size; ++j) {
+-			struct srp_request *req = &ch->req_ring[j];
++	return true;
++}
+ 
+-			srp_finish_req(ch, req, NULL,
+-				       DID_TRANSPORT_FAILFAST << 16);
+-		}
+-	}
++static void srp_terminate_io(struct srp_rport *rport)
++{
++	struct srp_target_port *target = rport->lld_data;
++	struct srp_terminate_context context = { .srp_target = target,
++		.scsi_result = DID_TRANSPORT_FAILFAST << 16 };
++
++	scsi_host_busy_iter(target->scsi_host, srp_terminate_cmd, &context);
+ }
+ 
+ /* Calculate maximum initiator to target information unit length. */
+@@ -1360,13 +1351,12 @@ static int srp_rport_reconnect(struct srp_rport *rport)
+ 		ch = &target->ch[i];
+ 		ret += srp_new_cm_id(ch);
+ 	}
+-	for (i = 0; i < target->ch_count; i++) {
+-		ch = &target->ch[i];
+-		for (j = 0; j < target->req_ring_size; ++j) {
+-			struct srp_request *req = &ch->req_ring[j];
++	{
++		struct srp_terminate_context context = {
++			.srp_target = target, .scsi_result = DID_RESET << 16};
+ 
+-			srp_finish_req(ch, req, NULL, DID_RESET << 16);
+-		}
++		scsi_host_busy_iter(target->scsi_host, srp_terminate_cmd,
++				    &context);
+ 	}
+ 	for (i = 0; i < target->ch_count; i++) {
+ 		ch = &target->ch[i];
+@@ -1962,11 +1952,9 @@ static void srp_process_rsp(struct srp_rdma_ch *ch, struct srp_rsp *rsp)
+ 		spin_unlock_irqrestore(&ch->lock, flags);
+ 	} else {
+ 		scmnd = scsi_host_find_tag(target->scsi_host, rsp->tag);
+-		if (scmnd && scmnd->host_scribble) {
+-			req = (void *)scmnd->host_scribble;
++		if (scmnd) {
++			req = scsi_cmd_priv(scmnd);
+ 			scmnd = srp_claim_req(ch, req, NULL, scmnd);
+-		} else {
+-			scmnd = NULL;
+ 		}
+ 		if (!scmnd) {
+ 			shost_printk(KERN_ERR, target->scsi_host,
+@@ -1996,7 +1984,6 @@ static void srp_process_rsp(struct srp_rdma_ch *ch, struct srp_rsp *rsp)
+ 		srp_free_req(ch, req, scmnd,
+ 			     be32_to_cpu(rsp->req_lim_delta));
+ 
+-		scmnd->host_scribble = NULL;
+ 		scmnd->scsi_done(scmnd);
+ 	}
+ }
+@@ -2164,13 +2151,12 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
+ {
+ 	struct srp_target_port *target = host_to_target(shost);
+ 	struct srp_rdma_ch *ch;
+-	struct srp_request *req;
++	struct srp_request *req = scsi_cmd_priv(scmnd);
+ 	struct srp_iu *iu;
+ 	struct srp_cmd *cmd;
+ 	struct ib_device *dev;
+ 	unsigned long flags;
+ 	u32 tag;
+-	u16 idx;
+ 	int len, ret;
+ 
+ 	scmnd->result = srp_chkready(target->rport);
+@@ -2180,10 +2166,6 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
+ 	WARN_ON_ONCE(scmnd->request->tag < 0);
+ 	tag = blk_mq_unique_tag(scmnd->request);
+ 	ch = &target->ch[blk_mq_unique_tag_to_hwq(tag)];
+-	idx = blk_mq_unique_tag_to_tag(tag);
+-	WARN_ONCE(idx >= target->req_ring_size, "%s: tag %#x: idx %d >= %d\n",
+-		  dev_name(&shost->shost_gendev), tag, idx,
+-		  target->req_ring_size);
+ 
+ 	spin_lock_irqsave(&ch->lock, flags);
+ 	iu = __srp_get_tx_iu(ch, SRP_IU_CMD);
+@@ -2192,13 +2174,10 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
+ 	if (!iu)
+ 		goto err;
+ 
+-	req = &ch->req_ring[idx];
+ 	dev = target->srp_host->srp_dev->dev;
+ 	ib_dma_sync_single_for_cpu(dev, iu->dma, ch->max_it_iu_len,
+ 				   DMA_TO_DEVICE);
+ 
+-	scmnd->host_scribble = (void *) req;
+-
+ 	cmd = iu->buf;
+ 	memset(cmd, 0, sizeof *cmd);
+ 
+@@ -2799,16 +2778,13 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
+ static int srp_abort(struct scsi_cmnd *scmnd)
+ {
+ 	struct srp_target_port *target = host_to_target(scmnd->device->host);
+-	struct srp_request *req = (struct srp_request *) scmnd->host_scribble;
++	struct srp_request *req = scsi_cmd_priv(scmnd);
+ 	u32 tag;
+ 	u16 ch_idx;
+ 	struct srp_rdma_ch *ch;
+-	int ret;
+ 
+ 	shost_printk(KERN_ERR, target->scsi_host, "SRP abort called\n");
+ 
+-	if (!req)
+-		return SUCCESS;
+ 	tag = blk_mq_unique_tag(scmnd->request);
+ 	ch_idx = blk_mq_unique_tag_to_hwq(tag);
+ 	if (WARN_ON_ONCE(ch_idx >= target->ch_count))
+@@ -2819,19 +2795,14 @@ static int srp_abort(struct scsi_cmnd *scmnd)
+ 	shost_printk(KERN_ERR, target->scsi_host,
+ 		     "Sending SRP abort for tag %#x\n", tag);
+ 	if (srp_send_tsk_mgmt(ch, tag, scmnd->device->lun,
+-			      SRP_TSK_ABORT_TASK, NULL) == 0)
+-		ret = SUCCESS;
+-	else if (target->rport->state == SRP_RPORT_LOST)
+-		ret = FAST_IO_FAIL;
+-	else
+-		ret = FAILED;
+-	if (ret == SUCCESS) {
++			      SRP_TSK_ABORT_TASK, NULL) == 0) {
+ 		srp_free_req(ch, req, scmnd, 0);
+-		scmnd->result = DID_ABORT << 16;
+-		scmnd->scsi_done(scmnd);
++		return SUCCESS;
+ 	}
++	if (target->rport->state == SRP_RPORT_LOST)
++		return FAST_IO_FAIL;
+ 
+-	return ret;
++	return FAILED;
+ }
+ 
+ static int srp_reset_device(struct scsi_cmnd *scmnd)
+@@ -3075,6 +3046,8 @@ static struct scsi_host_template srp_template = {
+ 	.target_alloc			= srp_target_alloc,
+ 	.slave_configure		= srp_slave_configure,
+ 	.info				= srp_target_info,
++	.init_cmd_priv			= srp_init_cmd_priv,
++	.exit_cmd_priv			= srp_exit_cmd_priv,
+ 	.queuecommand			= srp_queuecommand,
+ 	.change_queue_depth             = srp_change_queue_depth,
+ 	.eh_timed_out			= srp_timed_out,
+@@ -3088,6 +3061,7 @@ static struct scsi_host_template srp_template = {
+ 	.cmd_per_lun			= SRP_DEFAULT_CMD_SQ_SIZE,
+ 	.shost_attrs			= srp_host_attrs,
+ 	.track_queue_depth		= 1,
++	.cmd_size			= sizeof(struct srp_request),
+ };
+ 
+ static int srp_sdev_count(struct Scsi_Host *host)
+@@ -3735,8 +3709,6 @@ static ssize_t srp_create_target(struct device *dev,
+ 	if (ret)
+ 		goto out;
+ 
+-	target->req_ring_size = target->queue_size - SRP_TSK_MGMT_SQ_SIZE;
+-
+ 	if (!srp_conn_unique(target->srp_host, target)) {
+ 		if (target->using_rdma_cm) {
+ 			shost_printk(KERN_INFO, target->scsi_host,
+@@ -3839,10 +3811,6 @@ static ssize_t srp_create_target(struct device *dev,
+ 		if (ret)
+ 			goto err_disconnect;
+ 
+-		ret = srp_alloc_req_data(ch);
+-		if (ret)
+-			goto err_disconnect;
+-
+ 		ret = srp_connect_ch(ch, max_iu_len, multich);
+ 		if (ret) {
+ 			char dst[64];
+@@ -3861,7 +3829,6 @@ static ssize_t srp_create_target(struct device *dev,
+ 				goto free_ch;
+ 			} else {
+ 				srp_free_ch_ib(target, ch);
+-				srp_free_req_data(target, ch);
+ 				target->ch_count = ch - target->ch;
+ 				goto connected;
+ 			}
+@@ -3922,7 +3889,6 @@ free_ch:
+ 	for (i = 0; i < target->ch_count; i++) {
+ 		ch = &target->ch[i];
+ 		srp_free_ch_ib(target, ch);
+-		srp_free_req_data(target, ch);
+ 	}
+ 
+ 	kfree(target->ch);
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
+index 85bac20d9007d..152242e8f733d 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.h
++++ b/drivers/infiniband/ulp/srp/ib_srp.h
+@@ -176,7 +176,6 @@ struct srp_rdma_ch {
+ 
+ 	struct srp_iu	      **tx_ring;
+ 	struct srp_iu	      **rx_ring;
+-	struct srp_request     *req_ring;
+ 	int			comp_vector;
+ 
+ 	u64			tsk_mgmt_tag;
+@@ -222,7 +221,6 @@ struct srp_target_port {
+ 	int			mr_pool_size;
+ 	int			mr_per_cmd;
+ 	int			queue_size;
+-	int			req_ring_size;
+ 	int			comp_vector;
+ 	int			tl_retry_count;
+ 
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index b99318fb58dc6..762c502391464 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -252,6 +252,7 @@ static const struct xpad_device {
+ 	{ 0x1038, 0x1430, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 },
+ 	{ 0x1038, 0x1431, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 },
+ 	{ 0x11c9, 0x55f0, "Nacon GC-100XF", 0, XTYPE_XBOX360 },
++	{ 0x11ff, 0x0511, "PXN V900", 0, XTYPE_XBOX360 },
+ 	{ 0x1209, 0x2882, "Ardwiino Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x12ab, 0x0004, "Honey Bee Xbox360 dancepad", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
+ 	{ 0x12ab, 0x0301, "PDP AFTERGLOW AX.1", 0, XTYPE_XBOX360 },
+@@ -446,6 +447,7 @@ static const struct usb_device_id xpad_table[] = {
+ 	XPAD_XBOXONE_VENDOR(0x0f0d),		/* Hori Controllers */
+ 	XPAD_XBOX360_VENDOR(0x1038),		/* SteelSeries Controllers */
+ 	XPAD_XBOX360_VENDOR(0x11c9),		/* Nacon GC100XF */
++	XPAD_XBOX360_VENDOR(0x11ff),		/* PXN V900 */
+ 	XPAD_XBOX360_VENDOR(0x1209),		/* Ardwiino Controllers */
+ 	XPAD_XBOX360_VENDOR(0x12ab),		/* X-Box 360 dance pads */
+ 	XPAD_XBOX360_VENDOR(0x1430),		/* RedOctane X-Box 360 controllers */
+diff --git a/drivers/input/misc/powermate.c b/drivers/input/misc/powermate.c
+index c4e0e1886061f..6b1b95d58e6b5 100644
+--- a/drivers/input/misc/powermate.c
++++ b/drivers/input/misc/powermate.c
+@@ -425,6 +425,7 @@ static void powermate_disconnect(struct usb_interface *intf)
+ 		pm->requires_update = 0;
+ 		usb_kill_urb(pm->irq);
+ 		input_unregister_device(pm->input);
++		usb_kill_urb(pm->config);
+ 		usb_free_urb(pm->irq);
+ 		usb_free_urb(pm->config);
+ 		powermate_free_buffers(interface_to_usbdev(intf), pm);
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 598fcb99f6c9e..400281feb4e8d 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -2112,6 +2112,7 @@ static int elantech_setup_ps2(struct psmouse *psmouse,
+ 	psmouse->protocol_handler = elantech_process_byte;
+ 	psmouse->disconnect = elantech_disconnect;
+ 	psmouse->reconnect = elantech_reconnect;
++	psmouse->fast_reconnect = NULL;
+ 	psmouse->pktsize = info->hw_version > 1 ? 6 : 4;
+ 
+ 	return 0;
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 82577095e175e..afa5d2623c410 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -1619,6 +1619,7 @@ static int synaptics_init_ps2(struct psmouse *psmouse,
+ 	psmouse->set_rate = synaptics_set_rate;
+ 	psmouse->disconnect = synaptics_disconnect;
+ 	psmouse->reconnect = synaptics_reconnect;
++	psmouse->fast_reconnect = NULL;
+ 	psmouse->cleanup = synaptics_reset;
+ 	/* Synaptics can usually stay in sync without extra help */
+ 	psmouse->resync_time = 0;
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index 1bd5898abb97d..09528c0a8a34e 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -609,6 +609,14 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 		},
+ 		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
++	{
++		/* Fujitsu Lifebook E5411 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU CLIENT COMPUTING LIMITED"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E5411"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOAUX)
++	},
+ 	{
+ 		/* Gigabyte M912 */
+ 		.matches = {
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index 098115eb80841..53792a1b6ac39 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -820,6 +820,25 @@ static int goodix_add_acpi_gpio_mappings(struct goodix_ts_data *ts)
+ 		dev_info(dev, "No ACPI GpioInt resource, assuming that the GPIO order is reset, int\n");
+ 		ts->irq_pin_access_method = IRQ_PIN_ACCESS_ACPI_GPIO;
+ 		gpio_mapping = acpi_goodix_int_last_gpios;
++	} else if (ts->gpio_count == 1 && ts->gpio_int_idx == 0) {
++		/*
++		 * On newer devices there is only 1 GpioInt resource and _PS0
++		 * does the whole reset sequence for us.
++		 */
++		acpi_device_fix_up_power(ACPI_COMPANION(dev));
++
++		/*
++		 * Before the _PS0 call the int GPIO may have been in output
++		 * mode and the call should have put the int GPIO in input mode,
++		 * but the GPIO subsys cached state may still think it is
++		 * in output mode, causing gpiochip_lock_as_irq() failure.
++		 *
++		 * Add a mapping for the int GPIO to make the
++		 * gpiod_int = gpiod_get(..., GPIOD_IN) call succeed,
++		 * which will explicitly set the direction to input.
++		 */
++		ts->irq_pin_access_method = IRQ_PIN_ACCESS_NONE;
++		gpio_mapping = acpi_goodix_int_first_gpios;
+ 	} else {
+ 		dev_warn(dev, "Unexpected ACPI resources: gpio_count %d, gpio_int_idx %d\n",
+ 			 ts->gpio_count, ts->gpio_int_idx);
+diff --git a/drivers/mcb/mcb-core.c b/drivers/mcb/mcb-core.c
+index 8b8cd751fe9a9..806a3d9da12f2 100644
+--- a/drivers/mcb/mcb-core.c
++++ b/drivers/mcb/mcb-core.c
+@@ -389,17 +389,13 @@ EXPORT_SYMBOL_NS_GPL(mcb_free_dev, MCB);
+ 
+ static int __mcb_bus_add_devices(struct device *dev, void *data)
+ {
+-	struct mcb_device *mdev = to_mcb_device(dev);
+ 	int retval;
+ 
+-	if (mdev->is_added)
+-		return 0;
+-
+ 	retval = device_attach(dev);
+-	if (retval < 0)
++	if (retval < 0) {
+ 		dev_err(dev, "Error adding device (%d)\n", retval);
+-
+-	mdev->is_added = true;
++		return retval;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mcb/mcb-parse.c b/drivers/mcb/mcb-parse.c
+index aa6938da0db85..c41cbacc75a2c 100644
+--- a/drivers/mcb/mcb-parse.c
++++ b/drivers/mcb/mcb-parse.c
+@@ -99,8 +99,6 @@ static int chameleon_parse_gdd(struct mcb_bus *bus,
+ 	mdev->mem.end = mdev->mem.start + size - 1;
+ 	mdev->mem.flags = IORESOURCE_MEM;
+ 
+-	mdev->is_added = false;
+-
+ 	ret = mcb_device_register(bus, mdev);
+ 	if (ret < 0)
+ 		goto err;
+diff --git a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+index 88a23bce569d9..36109c324cb6c 100644
+--- a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+@@ -1455,6 +1455,7 @@ static int mtk_jpeg_remove(struct platform_device *pdev)
+ {
+ 	struct mtk_jpeg_dev *jpeg = platform_get_drvdata(pdev);
+ 
++	cancel_delayed_work_sync(&jpeg->job_timeout_work);
+ 	pm_runtime_disable(&pdev->dev);
+ 	video_unregister_device(jpeg->vdev);
+ 	video_device_release(jpeg->vdev);
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index 87807ef010a96..91c929bd69c07 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -95,7 +95,7 @@ static int mmc_decode_cid(struct mmc_card *card)
+ 	case 3: /* MMC v3.1 - v3.3 */
+ 	case 4: /* MMC v4 */
+ 		card->cid.manfid	= UNSTUFF_BITS(resp, 120, 8);
+-		card->cid.oemid		= UNSTUFF_BITS(resp, 104, 16);
++		card->cid.oemid		= UNSTUFF_BITS(resp, 104, 8);
+ 		card->cid.prod_name[0]	= UNSTUFF_BITS(resp, 96, 8);
+ 		card->cid.prod_name[1]	= UNSTUFF_BITS(resp, 88, 8);
+ 		card->cid.prod_name[2]	= UNSTUFF_BITS(resp, 80, 8);
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index 99a4ce68d82f1..85c2947ed45e3 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -1075,8 +1075,14 @@ static int mmc_sdio_resume(struct mmc_host *host)
+ 		}
+ 		err = mmc_sdio_reinit_card(host);
+ 	} else if (mmc_card_wake_sdio_irq(host)) {
+-		/* We may have switched to 1-bit mode during suspend */
++		/*
++		 * We may have switched to 1-bit mode during suspend,
++		 * need to hold retuning, because tuning only supprt
++		 * 4-bit mode or 8 bit mode.
++		 */
++		mmc_retune_hold_now(host);
+ 		err = sdio_enable_4bit_bus(host->card);
++		mmc_retune_release(host);
+ 	}
+ 
+ 	if (err)
+diff --git a/drivers/mtd/maps/physmap-core.c b/drivers/mtd/maps/physmap-core.c
+index 4f63b8430c710..9ab795f03c546 100644
+--- a/drivers/mtd/maps/physmap-core.c
++++ b/drivers/mtd/maps/physmap-core.c
+@@ -556,6 +556,17 @@ static int physmap_flash_probe(struct platform_device *dev)
+ 		if (info->probe_type) {
+ 			info->mtds[i] = do_map_probe(info->probe_type,
+ 						     &info->maps[i]);
++
++			/* Fall back to mapping region as ROM */
++			if (!info->mtds[i] && IS_ENABLED(CONFIG_MTD_ROM) &&
++			    strcmp(info->probe_type, "map_rom")) {
++				dev_warn(&dev->dev,
++					 "map_probe() failed for type %s\n",
++					 info->probe_type);
++
++				info->mtds[i] = do_map_probe("map_rom",
++							     &info->maps[i]);
++			}
+ 		} else {
+ 			int j;
+ 
+diff --git a/drivers/mtd/nand/raw/arasan-nand-controller.c b/drivers/mtd/nand/raw/arasan-nand-controller.c
+index 6a0d48c42cfa9..ef062b5ea9cdd 100644
+--- a/drivers/mtd/nand/raw/arasan-nand-controller.c
++++ b/drivers/mtd/nand/raw/arasan-nand-controller.c
+@@ -451,6 +451,7 @@ static int anfc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
+ 	struct mtd_info *mtd = nand_to_mtd(chip);
+ 	unsigned int len = mtd->writesize + (oob_required ? mtd->oobsize : 0);
+ 	dma_addr_t dma_addr;
++	u8 status;
+ 	int ret;
+ 	struct anfc_op nfc_op = {
+ 		.pkt_reg =
+@@ -497,10 +498,21 @@ static int anfc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
+ 	}
+ 
+ 	/* Spare data is not protected */
+-	if (oob_required)
++	if (oob_required) {
+ 		ret = nand_write_oob_std(chip, page);
++		if (ret)
++			return ret;
++	}
+ 
+-	return ret;
++	/* Check write status on the chip side */
++	ret = nand_status_op(chip, &status);
++	if (ret)
++		return ret;
++
++	if (status & NAND_STATUS_FAIL)
++		return -EIO;
++
++	return 0;
+ }
+ 
+ static int anfc_sel_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index 2ef1a5adfcfc1..9ed3ff7f4cf94 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -1148,6 +1148,7 @@ static int marvell_nfc_hw_ecc_hmg_do_write_page(struct nand_chip *chip,
+ 		.ndcb[2] = NDCB2_ADDR5_PAGE(page),
+ 	};
+ 	unsigned int oob_bytes = lt->spare_bytes + (raw ? lt->ecc_bytes : 0);
++	u8 status;
+ 	int ret;
+ 
+ 	/* NFCv2 needs more information about the operation being executed */
+@@ -1181,7 +1182,18 @@ static int marvell_nfc_hw_ecc_hmg_do_write_page(struct nand_chip *chip,
+ 
+ 	ret = marvell_nfc_wait_op(chip,
+ 				  PSEC_TO_MSEC(sdr->tPROG_max));
+-	return ret;
++	if (ret)
++		return ret;
++
++	/* Check write status on the chip side */
++	ret = nand_status_op(chip, &status);
++	if (ret)
++		return ret;
++
++	if (status & NAND_STATUS_FAIL)
++		return -EIO;
++
++	return 0;
+ }
+ 
+ static int marvell_nfc_hw_ecc_hmg_write_page_raw(struct nand_chip *chip,
+@@ -1610,6 +1622,7 @@ static int marvell_nfc_hw_ecc_bch_write_page(struct nand_chip *chip,
+ 	int data_len = lt->data_bytes;
+ 	int spare_len = lt->spare_bytes;
+ 	int chunk, ret;
++	u8 status;
+ 
+ 	marvell_nfc_select_target(chip, chip->cur_cs);
+ 
+@@ -1646,6 +1659,14 @@ static int marvell_nfc_hw_ecc_bch_write_page(struct nand_chip *chip,
+ 	if (ret)
+ 		return ret;
+ 
++	/* Check write status on the chip side */
++	ret = nand_status_op(chip, &status);
++	if (ret)
++		return ret;
++
++	if (status & NAND_STATUS_FAIL)
++		return -EIO;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index bb181e18c7c52..be7190f0499bc 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -2996,7 +2996,7 @@ err_nandc_alloc:
+ err_aon_clk:
+ 	clk_disable_unprepare(nandc->core_clk);
+ err_core_clk:
+-	dma_unmap_resource(dev, res->start, resource_size(res),
++	dma_unmap_resource(dev, nandc->base_dma, resource_size(res),
+ 			   DMA_BIDIRECTIONAL, 0);
+ 	return ret;
+ }
+diff --git a/drivers/mtd/nand/spi/micron.c b/drivers/mtd/nand/spi/micron.c
+index 5d370cfcdaaaa..89e2b46bdc462 100644
+--- a/drivers/mtd/nand/spi/micron.c
++++ b/drivers/mtd/nand/spi/micron.c
+@@ -12,7 +12,7 @@
+ 
+ #define SPINAND_MFR_MICRON		0x2c
+ 
+-#define MICRON_STATUS_ECC_MASK		GENMASK(7, 4)
++#define MICRON_STATUS_ECC_MASK		GENMASK(6, 4)
+ #define MICRON_STATUS_ECC_NO_BITFLIPS	(0 << 4)
+ #define MICRON_STATUS_ECC_1TO3_BITFLIPS	(1 << 4)
+ #define MICRON_STATUS_ECC_4TO6_BITFLIPS	(3 << 4)
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index f2f890e559f3a..a5849663f65c3 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -497,17 +497,16 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
+ 	dn = of_find_compatible_node(NULL, NULL, "brcm,unimac-mdio");
+ 	priv->master_mii_bus = of_mdio_find_bus(dn);
+ 	if (!priv->master_mii_bus) {
+-		of_node_put(dn);
+-		return -EPROBE_DEFER;
++		err = -EPROBE_DEFER;
++		goto err_of_node_put;
+ 	}
+ 
+-	get_device(&priv->master_mii_bus->dev);
+ 	priv->master_mii_dn = dn;
+ 
+ 	priv->slave_mii_bus = mdiobus_alloc();
+ 	if (!priv->slave_mii_bus) {
+-		of_node_put(dn);
+-		return -ENOMEM;
++		err = -ENOMEM;
++		goto err_put_master_mii_bus_dev;
+ 	}
+ 
+ 	priv->slave_mii_bus->priv = priv;
+@@ -564,11 +563,17 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
+ 	}
+ 
+ 	err = mdiobus_register(priv->slave_mii_bus);
+-	if (err && dn) {
+-		mdiobus_free(priv->slave_mii_bus);
+-		of_node_put(dn);
+-	}
++	if (err && dn)
++		goto err_free_slave_mii_bus;
+ 
++	return 0;
++
++err_free_slave_mii_bus:
++	mdiobus_free(priv->slave_mii_bus);
++err_put_master_mii_bus_dev:
++	put_device(&priv->master_mii_bus->dev);
++err_of_node_put:
++	of_node_put(dn);
+ 	return err;
+ }
+ 
+@@ -576,6 +581,7 @@ static void bcm_sf2_mdio_unregister(struct bcm_sf2_priv *priv)
+ {
+ 	mdiobus_unregister(priv->slave_mii_bus);
+ 	mdiobus_free(priv->slave_mii_bus);
++	put_device(&priv->master_mii_bus->dev);
+ 	of_node_put(priv->master_mii_dn);
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
+index ba109073d6052..59a467f7aba3f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
+@@ -1339,7 +1339,7 @@ void i40e_clear_hw(struct i40e_hw *hw)
+ 		     I40E_PFLAN_QALLOC_FIRSTQ_SHIFT;
+ 	j = (val & I40E_PFLAN_QALLOC_LASTQ_MASK) >>
+ 	    I40E_PFLAN_QALLOC_LASTQ_SHIFT;
+-	if (val & I40E_PFLAN_QALLOC_VALID_MASK)
++	if (val & I40E_PFLAN_QALLOC_VALID_MASK && j >= base_queue)
+ 		num_queues = (j - base_queue) + 1;
+ 	else
+ 		num_queues = 0;
+@@ -1349,7 +1349,7 @@ void i40e_clear_hw(struct i40e_hw *hw)
+ 	    I40E_PF_VT_PFALLOC_FIRSTVF_SHIFT;
+ 	j = (val & I40E_PF_VT_PFALLOC_LASTVF_MASK) >>
+ 	    I40E_PF_VT_PFALLOC_LASTVF_SHIFT;
+-	if (val & I40E_PF_VT_PFALLOC_VALID_MASK)
++	if (val & I40E_PF_VT_PFALLOC_VALID_MASK && j >= i)
+ 		num_vfs = (j - i) + 1;
+ 	else
+ 		num_vfs = 0;
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index ea8d868c8f30a..e99c8c10bc61b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -931,8 +931,7 @@ static void ice_set_rss_vsi_ctx(struct ice_vsi_ctx *ctxt, struct ice_vsi *vsi)
+ 
+ 	ctxt->info.q_opt_rss = ((lut_type << ICE_AQ_VSI_Q_OPT_RSS_LUT_S) &
+ 				ICE_AQ_VSI_Q_OPT_RSS_LUT_M) |
+-				((hash_type << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) &
+-				 ICE_AQ_VSI_Q_OPT_RSS_HASH_M);
++				(hash_type & ICE_AQ_VSI_Q_OPT_RSS_HASH_M);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index d2ee760f92942..02f72fbec1042 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -6,6 +6,7 @@
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+ 
+ #include <generated/utsrelease.h>
++#include <linux/crash_dump.h>
+ #include "ice.h"
+ #include "ice_base.h"
+ #include "ice_lib.h"
+@@ -4025,6 +4026,20 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
+ 		return -EINVAL;
+ 	}
+ 
++	/* when under a kdump kernel initiate a reset before enabling the
++	 * device in order to clear out any pending DMA transactions. These
++	 * transactions can cause some systems to machine check when doing
++	 * the pcim_enable_device() below.
++	 */
++	if (is_kdump_kernel()) {
++		pci_save_state(pdev);
++		pci_clear_master(pdev);
++		err = pcie_flr(pdev);
++		if (err)
++			return err;
++		pci_restore_state(pdev);
++	}
++
+ 	/* this driver uses devres, see
+ 	 * Documentation/driver-api/driver-model/devres.rst
+ 	 */
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index 0078ae5926164..5eba086690efa 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -28,6 +28,9 @@ static inline void ixgbe_alloc_vf_macvlans(struct ixgbe_adapter *adapter,
+ 	struct vf_macvlans *mv_list;
+ 	int num_vf_macvlans, i;
+ 
++	/* Initialize list of VF macvlans */
++	INIT_LIST_HEAD(&adapter->vf_mvs.l);
++
+ 	num_vf_macvlans = hw->mac.num_rar_entries -
+ 			  (IXGBE_MAX_PF_MACVLANS + 1 + num_vfs);
+ 	if (!num_vf_macvlans)
+@@ -36,8 +39,6 @@ static inline void ixgbe_alloc_vf_macvlans(struct ixgbe_adapter *adapter,
+ 	mv_list = kcalloc(num_vf_macvlans, sizeof(struct vf_macvlans),
+ 			  GFP_KERNEL);
+ 	if (mv_list) {
+-		/* Initialize list of VF macvlans */
+-		INIT_LIST_HEAD(&adapter->vf_mvs.l);
+ 		for (i = 0; i < num_vf_macvlans; i++) {
+ 			mv_list[i].vf = -1;
+ 			mv_list[i].free = true;
+diff --git a/drivers/net/ethernet/marvell/sky2.h b/drivers/net/ethernet/marvell/sky2.h
+index b2dddd8a246c8..2bd0a7971ae62 100644
+--- a/drivers/net/ethernet/marvell/sky2.h
++++ b/drivers/net/ethernet/marvell/sky2.h
+@@ -2195,7 +2195,7 @@ struct rx_ring_info {
+ 	struct sk_buff	*skb;
+ 	dma_addr_t	data_addr;
+ 	DEFINE_DMA_UNMAP_LEN(data_size);
+-	dma_addr_t	frag_addr[ETH_JUMBO_MTU >> PAGE_SHIFT];
++	dma_addr_t	frag_addr[ETH_JUMBO_MTU >> PAGE_SHIFT ?: 1];
+ };
+ 
+ enum flow_control {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index 5273644fb2bf9..86088ccab23aa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -821,7 +821,7 @@ static void mlx5_fw_tracer_ownership_change(struct work_struct *work)
+ 
+ 	mlx5_core_dbg(tracer->dev, "FWTracer: ownership changed, current=(%d)\n", tracer->owner);
+ 	if (tracer->owner) {
+-		tracer->owner = false;
++		mlx5_fw_tracer_ownership_acquire(tracer);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve_vxlan.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve_vxlan.c
+index 05517c7feaa56..a20ba23f0ed7a 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve_vxlan.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve_vxlan.c
+@@ -294,8 +294,8 @@ const struct mlxsw_sp_nve_ops mlxsw_sp1_nve_vxlan_ops = {
+ 	.fdb_clear_offload = mlxsw_sp_nve_vxlan_clear_offload,
+ };
+ 
+-static bool mlxsw_sp2_nve_vxlan_learning_set(struct mlxsw_sp *mlxsw_sp,
+-					     bool learning_en)
++static int mlxsw_sp2_nve_vxlan_learning_set(struct mlxsw_sp *mlxsw_sp,
++					    bool learning_en)
+ {
+ 	char tnpc_pl[MLXSW_REG_TNPC_LEN];
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+index f2c8273dce67d..3f59834a0d3a9 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+@@ -87,7 +87,10 @@ static void qed_ll2b_complete_tx_packet(void *cxt,
+ static int qed_ll2_alloc_buffer(struct qed_dev *cdev,
+ 				u8 **data, dma_addr_t *phys_addr)
+ {
+-	*data = kmalloc(cdev->ll2->rx_size, GFP_ATOMIC);
++	size_t size = cdev->ll2->rx_size + NET_SKB_PAD +
++		      SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
++
++	*data = kmalloc(size, GFP_ATOMIC);
+ 	if (!(*data)) {
+ 		DP_INFO(cdev, "Failed to allocate LL2 buffer data\n");
+ 		return -ENOMEM;
+@@ -2541,7 +2544,7 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
+ 	INIT_LIST_HEAD(&cdev->ll2->list);
+ 	spin_lock_init(&cdev->ll2->lock);
+ 
+-	cdev->ll2->rx_size = NET_SKB_PAD + ETH_HLEN +
++	cdev->ll2->rx_size = PRM_DMA_PAD_BYTES_NUM + ETH_HLEN +
+ 			     L1_CACHE_BYTES + params->mtu;
+ 
+ 	/* Allocate memory for LL2.
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 410ccd28f6531..f218bacec0013 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -1706,6 +1706,8 @@ static int ravb_close(struct net_device *ndev)
+ 			of_phy_deregister_fixed_link(np);
+ 	}
+ 
++	cancel_work_sync(&priv->work);
++
+ 	if (priv->chip_id != RCAR_GEN2) {
+ 		free_irq(priv->tx_irqs[RAVB_NC], ndev);
+ 		free_irq(priv->rx_irqs[RAVB_NC], ndev);
+@@ -2249,14 +2251,14 @@ static int ravb_remove(struct platform_device *pdev)
+ 	if (priv->chip_id != RCAR_GEN2)
+ 		ravb_ptp_stop(ndev);
+ 
+-	dma_free_coherent(ndev->dev.parent, priv->desc_bat_size, priv->desc_bat,
+-			  priv->desc_bat_dma);
+ 	/* Set reset mode */
+ 	ravb_write(ndev, CCC_OPC_RESET, CCC);
+ 	unregister_netdev(ndev);
+ 	netif_napi_del(&priv->napi[RAVB_NC]);
+ 	netif_napi_del(&priv->napi[RAVB_BE]);
+ 	ravb_mdio_release(priv);
++	dma_free_coherent(ndev->dev.parent, priv->desc_bat_size, priv->desc_bat,
++			  priv->desc_bat_dma);
+ 	pm_runtime_put_sync(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	free_netdev(ndev);
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index 1c5d70c60354b..0ce426c0c0bf1 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -2783,7 +2783,6 @@ static int ca8210_register_ext_clock(struct spi_device *spi)
+ 	struct device_node *np = spi->dev.of_node;
+ 	struct ca8210_priv *priv = spi_get_drvdata(spi);
+ 	struct ca8210_platform_data *pdata = spi->dev.platform_data;
+-	int ret = 0;
+ 
+ 	if (!np)
+ 		return -EFAULT;
+@@ -2800,18 +2799,8 @@ static int ca8210_register_ext_clock(struct spi_device *spi)
+ 		dev_crit(&spi->dev, "Failed to register external clk\n");
+ 		return PTR_ERR(priv->clk);
+ 	}
+-	ret = of_clk_add_provider(np, of_clk_src_simple_get, priv->clk);
+-	if (ret) {
+-		clk_unregister(priv->clk);
+-		dev_crit(
+-			&spi->dev,
+-			"Failed to register external clock as clock provider\n"
+-		);
+-	} else {
+-		dev_info(&spi->dev, "External clock set as clock provider\n");
+-	}
+ 
+-	return ret;
++	return of_clk_add_provider(np, of_clk_src_simple_get, priv->clk);
+ }
+ 
+ /**
+@@ -2823,8 +2812,8 @@ static void ca8210_unregister_ext_clock(struct spi_device *spi)
+ {
+ 	struct ca8210_priv *priv = spi_get_drvdata(spi);
+ 
+-	if (!priv->clk)
+-		return
++	if (IS_ERR_OR_NULL(priv->clk))
++		return;
+ 
+ 	of_clk_del_provider(spi->dev.of_node);
+ 	clk_unregister(priv->clk);
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 4fb58fc5ec95a..0ffcef2fa10af 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -2414,6 +2414,7 @@ static int macsec_upd_txsa(struct sk_buff *skb, struct genl_info *info)
+ 
+ 		ctx.sa.assoc_num = assoc_num;
+ 		ctx.sa.tx_sa = tx_sa;
++		ctx.sa.update_pn = !!prev_pn.full64;
+ 		ctx.secy = secy;
+ 
+ 		ret = macsec_offload(ops->mdo_upd_txsa, &ctx);
+@@ -2507,6 +2508,7 @@ static int macsec_upd_rxsa(struct sk_buff *skb, struct genl_info *info)
+ 
+ 		ctx.sa.assoc_num = assoc_num;
+ 		ctx.sa.rx_sa = rx_sa;
++		ctx.sa.update_pn = !!prev_pn.full64;
+ 		ctx.secy = secy;
+ 
+ 		ret = macsec_offload(ops->mdo_upd_rxsa, &ctx);
+diff --git a/drivers/net/phy/mscc/mscc_macsec.c b/drivers/net/phy/mscc/mscc_macsec.c
+index c00eef457b850..bec270785c594 100644
+--- a/drivers/net/phy/mscc/mscc_macsec.c
++++ b/drivers/net/phy/mscc/mscc_macsec.c
+@@ -880,6 +880,9 @@ static int vsc8584_macsec_upd_rxsa(struct macsec_context *ctx)
+ {
+ 	struct macsec_flow *flow;
+ 
++	if (ctx->sa.update_pn)
++		return -EINVAL;
++
+ 	flow = vsc8584_macsec_find_flow(ctx, MACSEC_INGR);
+ 	if (IS_ERR(flow))
+ 		return PTR_ERR(flow);
+@@ -929,6 +932,9 @@ static int vsc8584_macsec_upd_txsa(struct macsec_context *ctx)
+ {
+ 	struct macsec_flow *flow;
+ 
++	if (ctx->sa.update_pn)
++		return -EINVAL;
++
+ 	flow = vsc8584_macsec_find_flow(ctx, MACSEC_EGR);
+ 	if (IS_ERR(flow))
+ 		return PTR_ERR(flow);
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 191bf0df14661..0b25b59f44033 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -3064,10 +3064,11 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
+ 	struct net *net = sock_net(&tfile->sk);
+ 	struct tun_struct *tun;
+ 	void __user* argp = (void __user*)arg;
+-	unsigned int ifindex, carrier;
++	unsigned int carrier;
+ 	struct ifreq ifr;
+ 	kuid_t owner;
+ 	kgid_t group;
++	int ifindex;
+ 	int sndbuf;
+ 	int vnet_hdr_sz;
+ 	int le;
+@@ -3124,7 +3125,9 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
+ 		ret = -EFAULT;
+ 		if (copy_from_user(&ifindex, argp, sizeof(ifindex)))
+ 			goto unlock;
+-
++		ret = -EINVAL;
++		if (ifindex < 0)
++			goto unlock;
+ 		ret = 0;
+ 		tfile->ifindex = ifindex;
+ 		goto unlock;
+diff --git a/drivers/net/usb/dm9601.c b/drivers/net/usb/dm9601.c
+index 915ac75b55fc7..5aad26600b03e 100644
+--- a/drivers/net/usb/dm9601.c
++++ b/drivers/net/usb/dm9601.c
+@@ -221,13 +221,18 @@ static int dm9601_mdio_read(struct net_device *netdev, int phy_id, int loc)
+ 	struct usbnet *dev = netdev_priv(netdev);
+ 
+ 	__le16 res;
++	int err;
+ 
+ 	if (phy_id) {
+ 		netdev_dbg(dev->net, "Only internal phy supported\n");
+ 		return 0;
+ 	}
+ 
+-	dm_read_shared_word(dev, 1, loc, &res);
++	err = dm_read_shared_word(dev, 1, loc, &res);
++	if (err < 0) {
++		netdev_err(dev->net, "MDIO read error: %d\n", err);
++		return err;
++	}
+ 
+ 	netdev_dbg(dev->net,
+ 		   "dm9601_mdio_read() phy_id=0x%02x, loc=0x%02x, returns=0x%04x\n",
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index 975f52605867f..9297f2078fd2c 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -863,7 +863,7 @@ static int smsc95xx_reset(struct usbnet *dev)
+ 
+ 	if (timeout >= 100) {
+ 		netdev_warn(dev->net, "timeout waiting for completion of Lite Reset\n");
+-		return ret;
++		return -ETIMEDOUT;
+ 	}
+ 
+ 	ret = smsc95xx_write_reg(dev, PM_CTRL, PM_CTL_PHY_RST_);
+diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
+index 97cf5bc48902a..09fc4bc0622e0 100644
+--- a/drivers/net/xen-netback/interface.c
++++ b/drivers/net/xen-netback/interface.c
+@@ -41,7 +41,6 @@
+ #include <asm/xen/hypercall.h>
+ #include <xen/balloon.h>
+ 
+-#define XENVIF_QUEUE_LENGTH 32
+ #define XENVIF_NAPI_WEIGHT  64
+ 
+ /* Number of bytes allowed on the internal guest Rx queue. */
+@@ -528,8 +527,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
+ 	dev->features = dev->hw_features | NETIF_F_RXCSUM;
+ 	dev->ethtool_ops = &xenvif_ethtool_ops;
+ 
+-	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
+-
+ 	dev->min_mtu = ETH_MIN_MTU;
+ 	dev->max_mtu = ETH_MAX_MTU - VLAN_ETH_HLEN;
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 9c67ebd4eac38..970a1b374a669 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3181,7 +3181,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x0a54),	/* Intel P4500/P4600 */
+ 		.driver_data = NVME_QUIRK_STRIPE_SIZE |
+ 				NVME_QUIRK_DEALLOCATE_ZEROES |
+-				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
++				NVME_QUIRK_IGNORE_DEV_SUBNQN |
++				NVME_QUIRK_BOGUS_NID, },
+ 	{ PCI_VDEVICE(INTEL, 0x0a55),	/* Dell Express Flash P4600 */
+ 		.driver_data = NVME_QUIRK_STRIPE_SIZE |
+ 				NVME_QUIRK_DEALLOCATE_ZEROES, },
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 825c961c6fd50..ecc3f822df244 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -644,6 +644,9 @@ static void __nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
+ 
+ static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
+ {
++	if (!test_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags))
++		return;
++
+ 	mutex_lock(&queue->queue_lock);
+ 	if (test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags))
+ 		__nvme_rdma_stop_queue(queue);
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 2ddbd4f4f6281..7ce22d173fc79 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -336,6 +336,7 @@ static void nvmet_tcp_fatal_error(struct nvmet_tcp_queue *queue)
+ 
+ static void nvmet_tcp_socket_error(struct nvmet_tcp_queue *queue, int status)
+ {
++	queue->rcv_state = NVMET_TCP_RECV_ERR;
+ 	if (status == -EPIPE || status == -ECONNRESET)
+ 		kernel_sock_shutdown(queue->sock, SHUT_RDWR);
+ 	else
+@@ -882,15 +883,11 @@ static int nvmet_tcp_handle_icreq(struct nvmet_tcp_queue *queue)
+ 	iov.iov_len = sizeof(*icresp);
+ 	ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len);
+ 	if (ret < 0)
+-		goto free_crypto;
++		return ret; /* queue removal will cleanup */
+ 
+ 	queue->state = NVMET_TCP_Q_LIVE;
+ 	nvmet_prepare_receive_pdu(queue);
+ 	return 0;
+-free_crypto:
+-	if (queue->hdr_digest || queue->data_digest)
+-		nvmet_tcp_free_crypto(queue);
+-	return ret;
+ }
+ 
+ static void nvmet_tcp_handle_req_failure(struct nvmet_tcp_queue *queue,
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 36061aaf026c8..ac4428e8fae92 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -1177,7 +1177,7 @@ static irqreturn_t arm_cmn_handle_irq(int irq, void *dev_id)
+ 		u64 delta;
+ 		int i;
+ 
+-		for (i = 0; i < CMN_DTM_NUM_COUNTERS; i++) {
++		for (i = 0; i < CMN_DT_NUM_COUNTERS; i++) {
+ 			if (status & (1U << i)) {
+ 				ret = IRQ_HANDLED;
+ 				if (WARN_ON(!dtc->counters[i]))
+diff --git a/drivers/phy/motorola/phy-mapphone-mdm6600.c b/drivers/phy/motorola/phy-mapphone-mdm6600.c
+index 3cd4d51c247c3..67802f9e40ba0 100644
+--- a/drivers/phy/motorola/phy-mapphone-mdm6600.c
++++ b/drivers/phy/motorola/phy-mapphone-mdm6600.c
+@@ -122,16 +122,10 @@ static int phy_mdm6600_power_on(struct phy *x)
+ {
+ 	struct phy_mdm6600 *ddata = phy_get_drvdata(x);
+ 	struct gpio_desc *enable_gpio = ddata->ctrl_gpios[PHY_MDM6600_ENABLE];
+-	int error;
+ 
+ 	if (!ddata->enabled)
+ 		return -ENODEV;
+ 
+-	error = pinctrl_pm_select_default_state(ddata->dev);
+-	if (error)
+-		dev_warn(ddata->dev, "%s: error with default_state: %i\n",
+-			 __func__, error);
+-
+ 	gpiod_set_value_cansleep(enable_gpio, 1);
+ 
+ 	/* Allow aggressive PM for USB, it's only needed for n_gsm port */
+@@ -160,11 +154,6 @@ static int phy_mdm6600_power_off(struct phy *x)
+ 
+ 	gpiod_set_value_cansleep(enable_gpio, 0);
+ 
+-	error = pinctrl_pm_select_sleep_state(ddata->dev);
+-	if (error)
+-		dev_warn(ddata->dev, "%s: error with sleep_state: %i\n",
+-			 __func__, error);
+-
+ 	return 0;
+ }
+ 
+@@ -456,6 +445,7 @@ static void phy_mdm6600_device_power_off(struct phy_mdm6600 *ddata)
+ {
+ 	struct gpio_desc *reset_gpio =
+ 		ddata->ctrl_gpios[PHY_MDM6600_RESET];
++	int error;
+ 
+ 	ddata->enabled = false;
+ 	phy_mdm6600_cmd(ddata, PHY_MDM6600_CMD_BP_SHUTDOWN_REQ);
+@@ -471,6 +461,17 @@ static void phy_mdm6600_device_power_off(struct phy_mdm6600 *ddata)
+ 	} else {
+ 		dev_err(ddata->dev, "Timed out powering down\n");
+ 	}
++
++	/*
++	 * Keep reset gpio high with padconf internal pull-up resistor to
++	 * prevent modem from waking up during deeper SoC idle states. The
++	 * gpio bank lines can have glitches if not in the always-on wkup
++	 * domain.
++	 */
++	error = pinctrl_pm_select_sleep_state(ddata->dev);
++	if (error)
++		dev_warn(ddata->dev, "%s: error with sleep_state: %i\n",
++			 __func__, error);
+ }
+ 
+ static void phy_mdm6600_deferred_power_on(struct work_struct *work)
+@@ -571,12 +572,6 @@ static int phy_mdm6600_probe(struct platform_device *pdev)
+ 	ddata->dev = &pdev->dev;
+ 	platform_set_drvdata(pdev, ddata);
+ 
+-	/* Active state selected in phy_mdm6600_power_on() */
+-	error = pinctrl_pm_select_sleep_state(ddata->dev);
+-	if (error)
+-		dev_warn(ddata->dev, "%s: error with sleep_state: %i\n",
+-			 __func__, error);
+-
+ 	error = phy_mdm6600_init_lines(ddata);
+ 	if (error)
+ 		return error;
+@@ -627,10 +622,12 @@ idle:
+ 	pm_runtime_put_autosuspend(ddata->dev);
+ 
+ cleanup:
+-	if (error < 0)
++	if (error < 0) {
+ 		phy_mdm6600_device_power_off(ddata);
+-	pm_runtime_disable(ddata->dev);
+-	pm_runtime_dont_use_autosuspend(ddata->dev);
++		pm_runtime_disable(ddata->dev);
++		pm_runtime_dont_use_autosuspend(ddata->dev);
++	}
++
+ 	return error;
+ }
+ 
+@@ -639,6 +636,7 @@ static int phy_mdm6600_remove(struct platform_device *pdev)
+ 	struct phy_mdm6600 *ddata = platform_get_drvdata(pdev);
+ 	struct gpio_desc *reset_gpio = ddata->ctrl_gpios[PHY_MDM6600_RESET];
+ 
++	pm_runtime_get_noresume(ddata->dev);
+ 	pm_runtime_dont_use_autosuspend(ddata->dev);
+ 	pm_runtime_put_sync(ddata->dev);
+ 	pm_runtime_disable(ddata->dev);
+diff --git a/drivers/pinctrl/renesas/Kconfig b/drivers/pinctrl/renesas/Kconfig
+index e941b8440dbc8..39559ce9d1ed9 100644
+--- a/drivers/pinctrl/renesas/Kconfig
++++ b/drivers/pinctrl/renesas/Kconfig
+@@ -212,6 +212,7 @@ config PINCTRL_RZN1
+ 	depends on OF
+ 	depends on ARCH_RZN1 || COMPILE_TEST
+ 	select GENERIC_PINCONF
++	select PINMUX
+ 	help
+ 	  This selects pinctrl driver for Renesas RZ/N1 devices.
+ 
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index b4a5cbdae904e..04503ad6c7fb0 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -475,6 +475,9 @@ static void asus_nb_wmi_quirks(struct asus_wmi_driver *driver)
+ static const struct key_entry asus_nb_wmi_keymap[] = {
+ 	{ KE_KEY, ASUS_WMI_BRN_DOWN, { KEY_BRIGHTNESSDOWN } },
+ 	{ KE_KEY, ASUS_WMI_BRN_UP, { KEY_BRIGHTNESSUP } },
++	{ KE_KEY, 0x2a, { KEY_SELECTIVE_SCREENSHOT } },
++	{ KE_IGNORE, 0x2b, }, /* PrintScreen (also send via PS/2) on newer models */
++	{ KE_IGNORE, 0x2c, }, /* CapsLock (also send via PS/2) on newer models */
+ 	{ KE_KEY, 0x30, { KEY_VOLUMEUP } },
+ 	{ KE_KEY, 0x31, { KEY_VOLUMEDOWN } },
+ 	{ KE_KEY, 0x32, { KEY_MUTE } },
+diff --git a/drivers/platform/x86/asus-wmi.h b/drivers/platform/x86/asus-wmi.h
+index 1a95c172f94b0..1d0b592e2651b 100644
+--- a/drivers/platform/x86/asus-wmi.h
++++ b/drivers/platform/x86/asus-wmi.h
+@@ -18,7 +18,7 @@
+ #include <linux/i8042.h>
+ 
+ #define ASUS_WMI_KEY_IGNORE (-1)
+-#define ASUS_WMI_BRN_DOWN	0x20
++#define ASUS_WMI_BRN_DOWN	0x2e
+ #define ASUS_WMI_BRN_UP		0x2f
+ 
+ struct module;
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index 55a18cd0c298f..eedff2ae28511 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -726,6 +726,21 @@ static const struct ts_dmi_data pipo_w11_data = {
+ 	.properties	= pipo_w11_props,
+ };
+ 
++static const struct property_entry positivo_c4128b_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 4),
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 13),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1915),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1269),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-positivo-c4128b.fw"),
++	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
++	{ }
++};
++
++static const struct ts_dmi_data positivo_c4128b_data = {
++	.acpi_name	= "MSSL1680:00",
++	.properties	= positivo_c4128b_props,
++};
++
+ static const struct property_entry pov_mobii_wintab_p800w_v20_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-min-x", 32),
+ 	PROPERTY_ENTRY_U32("touchscreen-min-y", 16),
+@@ -1389,6 +1404,14 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BIOS_VERSION, "MOMO.G.WI71C.MABMRBA02"),
+ 		},
+ 	},
++	{
++		/* Positivo C4128B */
++		.driver_data = (void *)&positivo_c4128b_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "C4128B-1"),
++		},
++	},
+ 	{
+ 		/* Point of View mobii wintab p800w (v2.0) */
+ 		.driver_data = (void *)&pov_mobii_wintab_p800w_v20_data,
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 52b75779dbb7e..51c4f604d3b24 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -5483,15 +5483,11 @@ wash:
+ 	mutex_lock(&regulator_list_mutex);
+ 	regulator_ena_gpio_free(rdev);
+ 	mutex_unlock(&regulator_list_mutex);
+-	put_device(&rdev->dev);
+-	rdev = NULL;
+ clean:
+ 	if (dangling_of_gpiod)
+ 		gpiod_put(config->ena_gpiod);
+-	if (rdev && rdev->dev.of_node)
+-		of_node_put(rdev->dev.of_node);
+-	kfree(rdev);
+ 	kfree(config);
++	put_device(&rdev->dev);
+ rinse:
+ 	if (dangling_cfg_gpiod)
+ 		gpiod_put(cfg->ena_gpiod);
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index 12d9c5d6b9e26..3d3ac48243ebd 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -1147,16 +1147,11 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 	pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT);
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
+-
+-	ret = pm_runtime_get_sync(&pdev->dev);
+-	if (ret < 0) {
+-		dev_err(&pdev->dev, "Failed to pm_runtime_get_sync: %d\n", ret);
+-		goto clk_dis_all;
+-	}
+-
+ 	/* QSPI controller initializations */
+ 	zynqmp_qspi_init_hw(xqspi);
+ 
++	pm_runtime_mark_last_busy(&pdev->dev);
++	pm_runtime_put_autosuspend(&pdev->dev);
+ 	xqspi->irq = platform_get_irq(pdev, 0);
+ 	if (xqspi->irq <= 0) {
+ 		ret = -ENXIO;
+@@ -1183,7 +1178,6 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 	ctlr->mode_bits = SPI_CPOL | SPI_CPHA | SPI_RX_DUAL | SPI_RX_QUAD |
+ 			    SPI_TX_DUAL | SPI_TX_QUAD;
+ 	ctlr->dev.of_node = np;
+-	ctlr->auto_runtime_pm = true;
+ 
+ 	ret = devm_spi_register_controller(&pdev->dev, ctlr);
+ 	if (ret) {
+@@ -1191,15 +1185,11 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 		goto clk_dis_all;
+ 	}
+ 
+-	pm_runtime_mark_last_busy(&pdev->dev);
+-	pm_runtime_put_autosuspend(&pdev->dev);
+-
+ 	return 0;
+ 
+ clk_dis_all:
+-	pm_runtime_disable(&pdev->dev);
+-	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_set_suspended(&pdev->dev);
++	pm_runtime_disable(&pdev->dev);
+ 	clk_disable_unprepare(xqspi->refclk);
+ clk_dis_pclk:
+ 	clk_disable_unprepare(xqspi->pclk);
+@@ -1223,15 +1213,11 @@ static int zynqmp_qspi_remove(struct platform_device *pdev)
+ {
+ 	struct zynqmp_qspi *xqspi = platform_get_drvdata(pdev);
+ 
+-	pm_runtime_get_sync(&pdev->dev);
+-
+ 	zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
+-
+-	pm_runtime_disable(&pdev->dev);
+-	pm_runtime_put_noidle(&pdev->dev);
+-	pm_runtime_set_suspended(&pdev->dev);
+ 	clk_disable_unprepare(xqspi->refclk);
+ 	clk_disable_unprepare(xqspi->pclk);
++	pm_runtime_set_suspended(&pdev->dev);
++	pm_runtime_disable(&pdev->dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/tee/amdtee/core.c b/drivers/tee/amdtee/core.c
+index 372d64756ed64..3c15f6a9e91c0 100644
+--- a/drivers/tee/amdtee/core.c
++++ b/drivers/tee/amdtee/core.c
+@@ -217,12 +217,12 @@ unlock:
+ 	return rc;
+ }
+ 
++/* mutex must be held by caller */
+ static void destroy_session(struct kref *ref)
+ {
+ 	struct amdtee_session *sess = container_of(ref, struct amdtee_session,
+ 						   refcount);
+ 
+-	mutex_lock(&session_list_mutex);
+ 	list_del(&sess->list_node);
+ 	mutex_unlock(&session_list_mutex);
+ 	kfree(sess);
+@@ -272,7 +272,8 @@ int amdtee_open_session(struct tee_context *ctx,
+ 	if (arg->ret != TEEC_SUCCESS) {
+ 		pr_err("open_session failed %d\n", arg->ret);
+ 		handle_unload_ta(ta_handle);
+-		kref_put(&sess->refcount, destroy_session);
++		kref_put_mutex(&sess->refcount, destroy_session,
++			       &session_list_mutex);
+ 		goto out;
+ 	}
+ 
+@@ -290,7 +291,8 @@ int amdtee_open_session(struct tee_context *ctx,
+ 		pr_err("reached maximum session count %d\n", TEE_NUM_SESSIONS);
+ 		handle_close_session(ta_handle, session_info);
+ 		handle_unload_ta(ta_handle);
+-		kref_put(&sess->refcount, destroy_session);
++		kref_put_mutex(&sess->refcount, destroy_session,
++			       &session_list_mutex);
+ 		rc = -ENOMEM;
+ 		goto out;
+ 	}
+@@ -331,7 +333,7 @@ int amdtee_close_session(struct tee_context *ctx, u32 session)
+ 	handle_close_session(ta_handle, session_info);
+ 	handle_unload_ta(ta_handle);
+ 
+-	kref_put(&sess->refcount, destroy_session);
++	kref_put_mutex(&sess->refcount, destroy_session, &session_list_mutex);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
+index b2fb3397310e4..90f1d9a534614 100644
+--- a/drivers/thunderbolt/icm.c
++++ b/drivers/thunderbolt/icm.c
+@@ -41,6 +41,7 @@
+ #define PHY_PORT_CS1_LINK_STATE_SHIFT	26
+ 
+ #define ICM_TIMEOUT			5000	/* ms */
++#define ICM_RETRIES			3
+ #define ICM_APPROVE_TIMEOUT		10000	/* ms */
+ #define ICM_MAX_LINK			4
+ 
+@@ -280,10 +281,9 @@ static bool icm_copy(struct tb_cfg_request *req, const struct ctl_pkg *pkg)
+ 
+ static int icm_request(struct tb *tb, const void *request, size_t request_size,
+ 		       void *response, size_t response_size, size_t npackets,
+-		       unsigned int timeout_msec)
++		       int retries, unsigned int timeout_msec)
+ {
+ 	struct icm *icm = tb_priv(tb);
+-	int retries = 3;
+ 
+ 	do {
+ 		struct tb_cfg_request *req;
+@@ -394,7 +394,7 @@ static int icm_fr_get_route(struct tb *tb, u8 link, u8 depth, u64 *route)
+ 		return -ENOMEM;
+ 
+ 	ret = icm_request(tb, &request, sizeof(request), switches,
+-			  sizeof(*switches), npackets, ICM_TIMEOUT);
++			  sizeof(*switches), npackets, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		goto err_free;
+ 
+@@ -447,7 +447,7 @@ icm_fr_driver_ready(struct tb *tb, enum tb_security_level *security_level,
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_TIMEOUT);
++			  1, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -472,7 +472,7 @@ static int icm_fr_approve_switch(struct tb *tb, struct tb_switch *sw)
+ 	memset(&reply, 0, sizeof(reply));
+ 	/* Use larger timeout as establishing tunnels can take some time */
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_APPROVE_TIMEOUT);
++			  1, ICM_RETRIES, ICM_APPROVE_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -499,7 +499,7 @@ static int icm_fr_add_switch_key(struct tb *tb, struct tb_switch *sw)
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_TIMEOUT);
++			  1, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -527,7 +527,7 @@ static int icm_fr_challenge_switch_key(struct tb *tb, struct tb_switch *sw,
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_TIMEOUT);
++			  1, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -559,7 +559,7 @@ static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_TIMEOUT);
++			  1, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -996,7 +996,7 @@ icm_tr_driver_ready(struct tb *tb, enum tb_security_level *security_level,
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, 20000);
++			  1, 10, 2000);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1026,7 +1026,7 @@ static int icm_tr_approve_switch(struct tb *tb, struct tb_switch *sw)
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_APPROVE_TIMEOUT);
++			  1, ICM_RETRIES, ICM_APPROVE_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1054,7 +1054,7 @@ static int icm_tr_add_switch_key(struct tb *tb, struct tb_switch *sw)
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_TIMEOUT);
++			  1, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1083,7 +1083,7 @@ static int icm_tr_challenge_switch_key(struct tb *tb, struct tb_switch *sw,
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_TIMEOUT);
++			  1, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1115,7 +1115,7 @@ static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_TIMEOUT);
++			  1, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1141,7 +1141,7 @@ static int icm_tr_xdomain_tear_down(struct tb *tb, struct tb_xdomain *xd,
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_TIMEOUT);
++			  1, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1460,7 +1460,7 @@ icm_ar_driver_ready(struct tb *tb, enum tb_security_level *security_level,
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_TIMEOUT);
++			  1, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1486,7 +1486,7 @@ static int icm_ar_get_route(struct tb *tb, u8 link, u8 depth, u64 *route)
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_TIMEOUT);
++			  1, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1507,7 +1507,7 @@ static int icm_ar_get_boot_acl(struct tb *tb, uuid_t *uuids, size_t nuuids)
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_TIMEOUT);
++			  1, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1568,7 +1568,7 @@ static int icm_ar_set_boot_acl(struct tb *tb, const uuid_t *uuids,
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, ICM_TIMEOUT);
++			  1, ICM_RETRIES, ICM_TIMEOUT);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1590,7 +1590,7 @@ icm_icl_driver_ready(struct tb *tb, enum tb_security_level *security_level,
+ 
+ 	memset(&reply, 0, sizeof(reply));
+ 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+-			  1, 20000);
++			  1, ICM_RETRIES, 20000);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index e881b72833dcf..22e5c4de345b5 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -2303,6 +2303,13 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
+ 	    !tb_port_is_width_supported(down, 2))
+ 		return 0;
+ 
++	/*
++	 * Both lanes need to be in CL0. Here we assume lane 0 already be in
++	 * CL0 and check just for lane 1.
++	 */
++	if (tb_wait_for_port(down->dual_link_port, false) <= 0)
++		return -ENOTCONN;
++
+ 	ret = tb_port_lane_bonding_enable(up);
+ 	if (ret) {
+ 		tb_port_warn(up, "failed to enable lane bonding\n");
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index e26ac3f42e05c..e7e84aa2c5f84 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -32,6 +32,7 @@
+ #include "8250.h"
+ 
+ #define DEFAULT_CLK_SPEED	48000000
++#define OMAP_UART_REGSHIFT	2
+ 
+ #define UART_ERRATA_i202_MDR1_ACCESS	(1 << 0)
+ #define OMAP_UART_WER_HAS_TX_WAKEUP	(1 << 1)
+@@ -109,6 +110,7 @@
+ #define UART_OMAP_RX_LVL		0x19
+ 
+ struct omap8250_priv {
++	void __iomem *membase;
+ 	int line;
+ 	u8 habit;
+ 	u8 mdr1;
+@@ -152,9 +154,14 @@ static void omap_8250_rx_dma_flush(struct uart_8250_port *p);
+ static inline void omap_8250_rx_dma_flush(struct uart_8250_port *p) { }
+ #endif
+ 
+-static u32 uart_read(struct uart_8250_port *up, u32 reg)
++static u32 uart_read(struct omap8250_priv *priv, u32 reg)
++{
++	return readl(priv->membase + (reg << OMAP_UART_REGSHIFT));
++}
++
++static void uart_write(struct omap8250_priv *priv, u32 reg, u32 val)
+ {
+-	return readl(up->port.membase + (reg << up->port.regshift));
++	writel(val, priv->membase + (reg << OMAP_UART_REGSHIFT));
+ }
+ 
+ /*
+@@ -552,7 +559,7 @@ static void omap_serial_fill_features_erratas(struct uart_8250_port *up,
+ 	u32 mvr, scheme;
+ 	u16 revision, major, minor;
+ 
+-	mvr = uart_read(up, UART_OMAP_MVER);
++	mvr = uart_read(priv, UART_OMAP_MVER);
+ 
+ 	/* Check revision register scheme */
+ 	scheme = mvr >> OMAP_UART_MVR_SCHEME_SHIFT;
+@@ -1336,7 +1343,7 @@ static int omap8250_probe(struct platform_device *pdev)
+ 		UPF_HARD_FLOW;
+ 	up.port.private_data = priv;
+ 
+-	up.port.regshift = 2;
++	up.port.regshift = OMAP_UART_REGSHIFT;
+ 	up.port.fifosize = 64;
+ 	up.tx_loadsz = 64;
+ 	up.capabilities = UART_CAP_FIFO;
+@@ -1397,6 +1404,8 @@ static int omap8250_probe(struct platform_device *pdev)
+ 			 DEFAULT_CLK_SPEED);
+ 	}
+ 
++	priv->membase = membase;
++	priv->line = -ENODEV;
+ 	priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+ 	priv->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+ 	cpu_latency_qos_add_request(&priv->pm_qos_request, priv->latency);
+@@ -1404,6 +1413,8 @@ static int omap8250_probe(struct platform_device *pdev)
+ 
+ 	spin_lock_init(&priv->rx_dma_lock);
+ 
++	platform_set_drvdata(pdev, priv);
++
+ 	device_init_wakeup(&pdev->dev, true);
+ 	pm_runtime_enable(&pdev->dev);
+ 	pm_runtime_use_autosuspend(&pdev->dev);
+@@ -1465,7 +1476,6 @@ static int omap8250_probe(struct platform_device *pdev)
+ 		goto err;
+ 	}
+ 	priv->line = ret;
+-	platform_set_drvdata(pdev, priv);
+ 	pm_runtime_mark_last_busy(&pdev->dev);
+ 	pm_runtime_put_autosuspend(&pdev->dev);
+ 	return 0;
+@@ -1487,11 +1497,12 @@ static int omap8250_remove(struct platform_device *pdev)
+ 	if (err)
+ 		return err;
+ 
++	serial8250_unregister_port(priv->line);
++	priv->line = -ENODEV;
+ 	pm_runtime_dont_use_autosuspend(&pdev->dev);
+ 	pm_runtime_put_sync(&pdev->dev);
+ 	flush_work(&priv->qos_work);
+ 	pm_runtime_disable(&pdev->dev);
+-	serial8250_unregister_port(priv->line);
+ 	cpu_latency_qos_remove_request(&priv->pm_qos_request);
+ 	device_init_wakeup(&pdev->dev, false);
+ 	return 0;
+@@ -1521,7 +1532,7 @@ static int omap8250_suspend(struct device *dev)
+ {
+ 	struct omap8250_priv *priv = dev_get_drvdata(dev);
+ 	struct uart_8250_port *up = serial8250_get_port(priv->line);
+-	int err;
++	int err = 0;
+ 
+ 	serial8250_suspend_port(priv->line);
+ 
+@@ -1531,7 +1542,8 @@ static int omap8250_suspend(struct device *dev)
+ 	if (!device_may_wakeup(dev))
+ 		priv->wer = 0;
+ 	serial_out(up, UART_OMAP_WER, priv->wer);
+-	err = pm_runtime_force_suspend(dev);
++	if (uart_console(&up->port) && console_suspend_enabled)
++		err = pm_runtime_force_suspend(dev);
+ 	flush_work(&priv->qos_work);
+ 
+ 	return err;
+@@ -1540,11 +1552,15 @@ static int omap8250_suspend(struct device *dev)
+ static int omap8250_resume(struct device *dev)
+ {
+ 	struct omap8250_priv *priv = dev_get_drvdata(dev);
++	struct uart_8250_port *up = serial8250_get_port(priv->line);
+ 	int err;
+ 
+-	err = pm_runtime_force_resume(dev);
+-	if (err)
+-		return err;
++	if (uart_console(&up->port) && console_suspend_enabled) {
++		err = pm_runtime_force_resume(dev);
++		if (err)
++			return err;
++	}
++
+ 	serial8250_resume_port(priv->line);
+ 	/* Paired with pm_runtime_resume_and_get() in omap8250_suspend() */
+ 	pm_runtime_mark_last_busy(dev);
+@@ -1577,7 +1593,6 @@ static int omap8250_lost_context(struct uart_8250_port *up)
+ static int omap8250_soft_reset(struct device *dev)
+ {
+ 	struct omap8250_priv *priv = dev_get_drvdata(dev);
+-	struct uart_8250_port *up = serial8250_get_port(priv->line);
+ 	int timeout = 100;
+ 	int sysc;
+ 	int syss;
+@@ -1591,20 +1606,20 @@ static int omap8250_soft_reset(struct device *dev)
+ 	 * needing omap8250_soft_reset() quirk. Do it in two writes as
+ 	 * recommended in the comment for omap8250_update_scr().
+ 	 */
+-	serial_out(up, UART_OMAP_SCR, OMAP_UART_SCR_DMAMODE_1);
+-	serial_out(up, UART_OMAP_SCR,
++	uart_write(priv, UART_OMAP_SCR, OMAP_UART_SCR_DMAMODE_1);
++	uart_write(priv, UART_OMAP_SCR,
+ 		   OMAP_UART_SCR_DMAMODE_1 | OMAP_UART_SCR_DMAMODE_CTL);
+ 
+-	sysc = serial_in(up, UART_OMAP_SYSC);
++	sysc = uart_read(priv, UART_OMAP_SYSC);
+ 
+ 	/* softreset the UART */
+ 	sysc |= OMAP_UART_SYSC_SOFTRESET;
+-	serial_out(up, UART_OMAP_SYSC, sysc);
++	uart_write(priv, UART_OMAP_SYSC, sysc);
+ 
+ 	/* By experiments, 1us enough for reset complete on AM335x */
+ 	do {
+ 		udelay(1);
+-		syss = serial_in(up, UART_OMAP_SYSS);
++		syss = uart_read(priv, UART_OMAP_SYSS);
+ 	} while (--timeout && !(syss & OMAP_UART_SYSS_RESETDONE));
+ 
+ 	if (!timeout) {
+@@ -1618,23 +1633,10 @@ static int omap8250_soft_reset(struct device *dev)
+ static int omap8250_runtime_suspend(struct device *dev)
+ {
+ 	struct omap8250_priv *priv = dev_get_drvdata(dev);
+-	struct uart_8250_port *up;
++	struct uart_8250_port *up = NULL;
+ 
+-	/* In case runtime-pm tries this before we are setup */
+-	if (!priv)
+-		return 0;
+-
+-	up = serial8250_get_port(priv->line);
+-	/*
+-	 * When using 'no_console_suspend', the console UART must not be
+-	 * suspended. Since driver suspend is managed by runtime suspend,
+-	 * preventing runtime suspend (by returning error) will keep device
+-	 * active during suspend.
+-	 */
+-	if (priv->is_suspending && !console_suspend_enabled) {
+-		if (uart_console(&up->port))
+-			return -EBUSY;
+-	}
++	if (priv->line >= 0)
++		up = serial8250_get_port(priv->line);
+ 
+ 	if (priv->habit & UART_ERRATA_CLOCK_DISABLE) {
+ 		int ret;
+@@ -1643,13 +1645,15 @@ static int omap8250_runtime_suspend(struct device *dev)
+ 		if (ret)
+ 			return ret;
+ 
+-		/* Restore to UART mode after reset (for wakeup) */
+-		omap8250_update_mdr1(up, priv);
+-		/* Restore wakeup enable register */
+-		serial_out(up, UART_OMAP_WER, priv->wer);
++		if (up) {
++			/* Restore to UART mode after reset (for wakeup) */
++			omap8250_update_mdr1(up, priv);
++			/* Restore wakeup enable register */
++			serial_out(up, UART_OMAP_WER, priv->wer);
++		}
+ 	}
+ 
+-	if (up->dma && up->dma->rxchan)
++	if (up && up->dma && up->dma->rxchan)
+ 		omap_8250_rx_dma_flush(up);
+ 
+ 	priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+@@ -1661,18 +1665,15 @@ static int omap8250_runtime_suspend(struct device *dev)
+ static int omap8250_runtime_resume(struct device *dev)
+ {
+ 	struct omap8250_priv *priv = dev_get_drvdata(dev);
+-	struct uart_8250_port *up;
+-
+-	/* In case runtime-pm tries this before we are setup */
+-	if (!priv)
+-		return 0;
++	struct uart_8250_port *up = NULL;
+ 
+-	up = serial8250_get_port(priv->line);
++	if (priv->line >= 0)
++		up = serial8250_get_port(priv->line);
+ 
+-	if (omap8250_lost_context(up))
++	if (up && omap8250_lost_context(up))
+ 		omap8250_restore_regs(up);
+ 
+-	if (up->dma && up->dma->rxchan && !(priv->habit & UART_HAS_EFR2))
++	if (up && up->dma && up->dma->rxchan && !(priv->habit & UART_HAS_EFR2))
+ 		omap_8250_rx_dma(up);
+ 
+ 	priv->latency = priv->calc_latency;
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 63bb04d262d84..0a77717d6af20 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -2745,6 +2745,7 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ 
+ 	rhdev->rx_lanes = 1;
+ 	rhdev->tx_lanes = 1;
++	rhdev->ssp_rate = USB_SSP_GEN_UNKNOWN;
+ 
+ 	switch (hcd->speed) {
+ 	case HCD_USB11:
+@@ -2762,8 +2763,11 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ 	case HCD_USB32:
+ 		rhdev->rx_lanes = 2;
+ 		rhdev->tx_lanes = 2;
+-		fallthrough;
++		rhdev->ssp_rate = USB_SSP_GEN_2x2;
++		rhdev->speed = USB_SPEED_SUPER_PLUS;
++		break;
+ 	case HCD_USB31:
++		rhdev->ssp_rate = USB_SSP_GEN_2x1;
+ 		rhdev->speed = USB_SPEED_SUPER_PLUS;
+ 		break;
+ 	default:
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 580604596499a..cfcd4f2ffffaa 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -31,6 +31,7 @@
+ #include <linux/pm_qos.h>
+ #include <linux/kobject.h>
+ 
++#include <linux/bitfield.h>
+ #include <linux/uaccess.h>
+ #include <asm/byteorder.h>
+ 
+@@ -149,6 +150,10 @@ int usb_device_supports_lpm(struct usb_device *udev)
+ 	if (udev->quirks & USB_QUIRK_NO_LPM)
+ 		return 0;
+ 
++	/* Skip if the device BOS descriptor couldn't be read */
++	if (!udev->bos)
++		return 0;
++
+ 	/* USB 2.1 (and greater) devices indicate LPM support through
+ 	 * their USB 2.0 Extended Capabilities BOS descriptor.
+ 	 */
+@@ -325,6 +330,10 @@ static void usb_set_lpm_parameters(struct usb_device *udev)
+ 	if (!udev->lpm_capable || udev->speed < USB_SPEED_SUPER)
+ 		return;
+ 
++	/* Skip if the device BOS descriptor couldn't be read */
++	if (!udev->bos)
++		return;
++
+ 	hub = usb_hub_to_struct_hub(udev->parent);
+ 	/* It doesn't take time to transition the roothub into U0, since it
+ 	 * doesn't have an upstream link.
+@@ -2683,8 +2692,84 @@ out_authorized:
+ 	return result;
+ }
+ 
++/**
++ * get_port_ssp_rate - Match the extended port status to SSP rate
++ * @hdev: The hub device
++ * @ext_portstatus: extended port status
++ *
++ * Match the extended port status speed id to the SuperSpeed Plus sublink speed
++ * capability attributes. Base on the number of connected lanes and speed,
++ * return the corresponding enum usb_ssp_rate.
++ */
++static enum usb_ssp_rate get_port_ssp_rate(struct usb_device *hdev,
++					   u32 ext_portstatus)
++{
++	struct usb_ssp_cap_descriptor *ssp_cap = hdev->bos->ssp_cap;
++	u32 attr;
++	u8 speed_id;
++	u8 ssac;
++	u8 lanes;
++	int i;
++
++	if (!ssp_cap)
++		goto out;
++
++	speed_id = ext_portstatus & USB_EXT_PORT_STAT_RX_SPEED_ID;
++	lanes = USB_EXT_PORT_RX_LANES(ext_portstatus) + 1;
++
++	ssac = le32_to_cpu(ssp_cap->bmAttributes) &
++		USB_SSP_SUBLINK_SPEED_ATTRIBS;
++
++	for (i = 0; i <= ssac; i++) {
++		u8 ssid;
++
++		attr = le32_to_cpu(ssp_cap->bmSublinkSpeedAttr[i]);
++		ssid = FIELD_GET(USB_SSP_SUBLINK_SPEED_SSID, attr);
++		if (speed_id == ssid) {
++			u16 mantissa;
++			u8 lse;
++			u8 type;
++
++			/*
++			 * Note: currently asymmetric lane types are only
++			 * applicable for SSIC operate in SuperSpeed protocol
++			 */
++			type = FIELD_GET(USB_SSP_SUBLINK_SPEED_ST, attr);
++			if (type == USB_SSP_SUBLINK_SPEED_ST_ASYM_RX ||
++			    type == USB_SSP_SUBLINK_SPEED_ST_ASYM_TX)
++				goto out;
++
++			if (FIELD_GET(USB_SSP_SUBLINK_SPEED_LP, attr) !=
++			    USB_SSP_SUBLINK_SPEED_LP_SSP)
++				goto out;
++
++			lse = FIELD_GET(USB_SSP_SUBLINK_SPEED_LSE, attr);
++			mantissa = FIELD_GET(USB_SSP_SUBLINK_SPEED_LSM, attr);
++
++			/* Convert to Gbps */
++			for (; lse < USB_SSP_SUBLINK_SPEED_LSE_GBPS; lse++)
++				mantissa /= 1000;
++
++			if (mantissa >= 10 && lanes == 1)
++				return USB_SSP_GEN_2x1;
++
++			if (mantissa >= 10 && lanes == 2)
++				return USB_SSP_GEN_2x2;
++
++			if (mantissa >= 5 && lanes == 2)
++				return USB_SSP_GEN_1x2;
++
++			goto out;
++		}
++	}
++
++out:
++	return USB_SSP_GEN_UNKNOWN;
++}
++
+ /*
+- * Return 1 if port speed is SuperSpeedPlus, 0 otherwise
++ * Return 1 if port speed is SuperSpeedPlus, 0 otherwise or if the
++ * capability couldn't be checked.
+  * check it from the link protocol field of the current speed ID attribute.
+  * current speed ID is got from ext port status request. Sublink speed attribute
+  * table is returned with the hub BOS SSP device capability descriptor
+@@ -2694,8 +2779,12 @@ static int port_speed_is_ssp(struct usb_device *hdev, int speed_id)
+ 	int ssa_count;
+ 	u32 ss_attr;
+ 	int i;
+-	struct usb_ssp_cap_descriptor *ssp_cap = hdev->bos->ssp_cap;
++	struct usb_ssp_cap_descriptor *ssp_cap;
+ 
++	if (!hdev->bos)
++		return 0;
++
++	ssp_cap = hdev->bos->ssp_cap;
+ 	if (!ssp_cap)
+ 		return 0;
+ 
+@@ -2865,9 +2954,11 @@ static int hub_port_wait_reset(struct usb_hub *hub, int port1,
+ 		/* extended portstatus Rx and Tx lane count are zero based */
+ 		udev->rx_lanes = USB_EXT_PORT_RX_LANES(ext_portstatus) + 1;
+ 		udev->tx_lanes = USB_EXT_PORT_TX_LANES(ext_portstatus) + 1;
++		udev->ssp_rate = get_port_ssp_rate(hub->hdev, ext_portstatus);
+ 	} else {
+ 		udev->rx_lanes = 1;
+ 		udev->tx_lanes = 1;
++		udev->ssp_rate = USB_SSP_GEN_UNKNOWN;
+ 	}
+ 	if (hub_is_wusb(hub))
+ 		udev->speed = USB_SPEED_WIRELESS;
+@@ -4114,8 +4205,15 @@ static void usb_enable_link_state(struct usb_hcd *hcd, struct usb_device *udev,
+ 		enum usb3_link_state state)
+ {
+ 	int timeout, ret;
+-	__u8 u1_mel = udev->bos->ss_cap->bU1devExitLat;
+-	__le16 u2_mel = udev->bos->ss_cap->bU2DevExitLat;
++	__u8 u1_mel;
++	__le16 u2_mel;
++
++	/* Skip if the device BOS descriptor couldn't be read */
++	if (!udev->bos)
++		return;
++
++	u1_mel = udev->bos->ss_cap->bU1devExitLat;
++	u2_mel = udev->bos->ss_cap->bU2DevExitLat;
+ 
+ 	/* If the device says it doesn't have *any* exit latency to come out of
+ 	 * U1 or U2, it's probably lying.  Assume it doesn't implement that link
+diff --git a/drivers/usb/core/hub.h b/drivers/usb/core/hub.h
+index 22ea1f4f2d66d..db4c7e2c5960d 100644
+--- a/drivers/usb/core/hub.h
++++ b/drivers/usb/core/hub.h
+@@ -141,7 +141,7 @@ static inline int hub_is_superspeedplus(struct usb_device *hdev)
+ {
+ 	return (hdev->descriptor.bDeviceProtocol == USB_HUB_PR_SS &&
+ 		le16_to_cpu(hdev->descriptor.bcdUSB) >= 0x0310 &&
+-		hdev->bos->ssp_cap);
++		hdev->bos && hdev->bos->ssp_cap);
+ }
+ 
+ static inline unsigned hub_power_on_good_delay(struct usb_hub *hub)
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index b02b10acb8842..5bc245b4be441 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -277,9 +277,46 @@ int dwc3_core_soft_reset(struct dwc3 *dwc)
+ 	 * XHCI driver will reset the host block. If dwc3 was configured for
+ 	 * host-only mode or current role is host, then we can return early.
+ 	 */
+-	if (dwc->dr_mode == USB_DR_MODE_HOST || dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST)
++	if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST)
+ 		return 0;
+ 
++	/*
++	 * If the dr_mode is host and the dwc->current_dr_role is not the
++	 * corresponding DWC3_GCTL_PRTCAP_HOST, then the dwc3_core_init_mode
++	 * isn't executed yet. Ensure the phy is ready before the controller
++	 * updates the GCTL.PRTCAPDIR or other settings by soft-resetting
++	 * the phy.
++	 *
++	 * Note: GUSB3PIPECTL[n] and GUSB2PHYCFG[n] are port settings where n
++	 * is port index. If this is a multiport host, then we need to reset
++	 * all active ports.
++	 */
++	if (dwc->dr_mode == USB_DR_MODE_HOST) {
++		u32 usb3_port;
++		u32 usb2_port;
++
++		usb3_port = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0));
++		usb3_port |= DWC3_GUSB3PIPECTL_PHYSOFTRST;
++		dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), usb3_port);
++
++		usb2_port = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
++		usb2_port |= DWC3_GUSB2PHYCFG_PHYSOFTRST;
++		dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), usb2_port);
++
++		/* Small delay for phy reset assertion */
++		usleep_range(1000, 2000);
++
++		usb3_port &= ~DWC3_GUSB3PIPECTL_PHYSOFTRST;
++		dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), usb3_port);
++
++		usb2_port &= ~DWC3_GUSB2PHYCFG_PHYSOFTRST;
++		dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), usb2_port);
++
++		/* Wait for clock synchronization */
++		msleep(50);
++		return 0;
++	}
++
+ 	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+ 	reg |= DWC3_DCTL_CSFTRST;
+ 	reg &= ~DWC3_DCTL_RUN_STOP;
+diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
+index f56147489835d..00aea45a04e95 100644
+--- a/drivers/usb/gadget/function/f_ncm.c
++++ b/drivers/usb/gadget/function/f_ncm.c
+@@ -1180,7 +1180,8 @@ static int ncm_unwrap_ntb(struct gether *port,
+ 			  struct sk_buff_head *list)
+ {
+ 	struct f_ncm	*ncm = func_to_ncm(&port->func);
+-	__le16		*tmp = (void *) skb->data;
++	unsigned char	*ntb_ptr = skb->data;
++	__le16		*tmp;
+ 	unsigned	index, index2;
+ 	int		ndp_index;
+ 	unsigned	dg_len, dg_len2;
+@@ -1193,6 +1194,10 @@ static int ncm_unwrap_ntb(struct gether *port,
+ 	const struct ndp_parser_opts *opts = ncm->parser_opts;
+ 	unsigned	crc_len = ncm->is_crc ? sizeof(uint32_t) : 0;
+ 	int		dgram_counter;
++	int		to_process = skb->len;
++
++parse_ntb:
++	tmp = (__le16 *)ntb_ptr;
+ 
+ 	/* dwSignature */
+ 	if (get_unaligned_le32(tmp) != opts->nth_sign) {
+@@ -1239,7 +1244,7 @@ static int ncm_unwrap_ntb(struct gether *port,
+ 		 * walk through NDP
+ 		 * dwSignature
+ 		 */
+-		tmp = (void *)(skb->data + ndp_index);
++		tmp = (__le16 *)(ntb_ptr + ndp_index);
+ 		if (get_unaligned_le32(tmp) != ncm->ndp_sign) {
+ 			INFO(port->func.config->cdev, "Wrong NDP SIGN\n");
+ 			goto err;
+@@ -1296,11 +1301,11 @@ static int ncm_unwrap_ntb(struct gether *port,
+ 			if (ncm->is_crc) {
+ 				uint32_t crc, crc2;
+ 
+-				crc = get_unaligned_le32(skb->data +
++				crc = get_unaligned_le32(ntb_ptr +
+ 							 index + dg_len -
+ 							 crc_len);
+ 				crc2 = ~crc32_le(~0,
+-						 skb->data + index,
++						 ntb_ptr + index,
+ 						 dg_len - crc_len);
+ 				if (crc != crc2) {
+ 					INFO(port->func.config->cdev,
+@@ -1327,7 +1332,7 @@ static int ncm_unwrap_ntb(struct gether *port,
+ 							 dg_len - crc_len);
+ 			if (skb2 == NULL)
+ 				goto err;
+-			skb_put_data(skb2, skb->data + index,
++			skb_put_data(skb2, ntb_ptr + index,
+ 				     dg_len - crc_len);
+ 
+ 			skb_queue_tail(list, skb2);
+@@ -1340,10 +1345,17 @@ static int ncm_unwrap_ntb(struct gether *port,
+ 		} while (ndp_len > 2 * (opts->dgram_item_len * 2));
+ 	} while (ndp_index);
+ 
+-	dev_consume_skb_any(skb);
+-
+ 	VDBG(port->func.config->cdev,
+ 	     "Parsed NTB with %d frames\n", dgram_counter);
++
++	to_process -= block_len;
++	if (to_process != 0) {
++		ntb_ptr = (unsigned char *)(ntb_ptr + block_len);
++		goto parse_ntb;
++	}
++
++	dev_consume_skb_any(skb);
++
+ 	return 0;
+ err:
+ 	skb_queue_purge(list);
+diff --git a/drivers/usb/gadget/udc/udc-xilinx.c b/drivers/usb/gadget/udc/udc-xilinx.c
+index 096f56a09e6a2..47486f0f21e5a 100644
+--- a/drivers/usb/gadget/udc/udc-xilinx.c
++++ b/drivers/usb/gadget/udc/udc-xilinx.c
+@@ -496,11 +496,13 @@ static int xudc_eptxrx(struct xusb_ep *ep, struct xusb_req *req,
+ 		/* Get the Buffer address and copy the transmit data.*/
+ 		eprambase = (u32 __force *)(udc->addr + ep->rambase);
+ 		if (ep->is_in) {
+-			memcpy(eprambase, bufferptr, bytestosend);
++			memcpy_toio((void __iomem *)eprambase, bufferptr,
++				    bytestosend);
+ 			udc->write_fn(udc->addr, ep->offset +
+ 				      XUSB_EP_BUF0COUNT_OFFSET, bufferlen);
+ 		} else {
+-			memcpy(bufferptr, eprambase, bytestosend);
++			memcpy_toio((void __iomem *)bufferptr, eprambase,
++				    bytestosend);
+ 		}
+ 		/*
+ 		 * Enable the buffer for transmission.
+@@ -514,11 +516,13 @@ static int xudc_eptxrx(struct xusb_ep *ep, struct xusb_req *req,
+ 		eprambase = (u32 __force *)(udc->addr + ep->rambase +
+ 			     ep->ep_usb.maxpacket);
+ 		if (ep->is_in) {
+-			memcpy(eprambase, bufferptr, bytestosend);
++			memcpy_toio((void __iomem *)eprambase, bufferptr,
++				    bytestosend);
+ 			udc->write_fn(udc->addr, ep->offset +
+ 				      XUSB_EP_BUF1COUNT_OFFSET, bufferlen);
+ 		} else {
+-			memcpy(bufferptr, eprambase, bytestosend);
++			memcpy_toio((void __iomem *)bufferptr, eprambase,
++				    bytestosend);
+ 		}
+ 		/*
+ 		 * Enable the buffer for transmission.
+@@ -1020,7 +1024,7 @@ static int __xudc_ep0_queue(struct xusb_ep *ep0, struct xusb_req *req)
+ 			   udc->addr);
+ 		length = req->usb_req.actual = min_t(u32, length,
+ 						     EP0_MAX_PACKET);
+-		memcpy(corebuf, req->usb_req.buf, length);
++		memcpy_toio((void __iomem *)corebuf, req->usb_req.buf, length);
+ 		udc->write_fn(udc->addr, XUSB_EP_BUF0COUNT_OFFSET, length);
+ 		udc->write_fn(udc->addr, XUSB_BUFFREADY_OFFSET, 1);
+ 	} else {
+@@ -1746,7 +1750,7 @@ static void xudc_handle_setup(struct xusb_udc *udc)
+ 
+ 	/* Load up the chapter 9 command buffer.*/
+ 	ep0rambase = (u32 __force *) (udc->addr + XUSB_SETUP_PKT_ADDR_OFFSET);
+-	memcpy(&setup, ep0rambase, 8);
++	memcpy_toio((void __iomem *)&setup, ep0rambase, 8);
+ 
+ 	udc->setup = setup;
+ 	udc->setup.wValue = cpu_to_le16(setup.wValue);
+@@ -1833,7 +1837,7 @@ static void xudc_ep0_out(struct xusb_udc *udc)
+ 			     (ep0->rambase << 2));
+ 		buffer = req->usb_req.buf + req->usb_req.actual;
+ 		req->usb_req.actual = req->usb_req.actual + bytes_to_rx;
+-		memcpy(buffer, ep0rambase, bytes_to_rx);
++		memcpy_toio((void __iomem *)buffer, ep0rambase, bytes_to_rx);
+ 
+ 		if (req->usb_req.length == req->usb_req.actual) {
+ 			/* Data transfer completed get ready for Status stage */
+@@ -1909,7 +1913,7 @@ static void xudc_ep0_in(struct xusb_udc *udc)
+ 				     (ep0->rambase << 2));
+ 			buffer = req->usb_req.buf + req->usb_req.actual;
+ 			req->usb_req.actual = req->usb_req.actual + length;
+-			memcpy(ep0rambase, buffer, length);
++			memcpy_toio((void __iomem *)ep0rambase, buffer, length);
+ 		}
+ 		udc->write_fn(udc->addr, XUSB_EP_BUF0COUNT_OFFSET, count);
+ 		udc->write_fn(udc->addr, XUSB_BUFFREADY_OFFSET, 1);
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 7bb3067418076..e92f920256b2e 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -819,7 +819,7 @@ static void xhci_del_comp_mod_timer(struct xhci_hcd *xhci, u32 status,
+ }
+ 
+ static int xhci_handle_usb2_port_link_resume(struct xhci_port *port,
+-					     u32 *status, u32 portsc,
++					     u32 portsc,
+ 					     unsigned long *flags)
+ {
+ 	struct xhci_bus_state *bus_state;
+@@ -834,11 +834,10 @@ static int xhci_handle_usb2_port_link_resume(struct xhci_port *port,
+ 	wIndex = port->hcd_portnum;
+ 
+ 	if ((portsc & PORT_RESET) || !(portsc & PORT_PE)) {
+-		*status = 0xffffffff;
+ 		return -EINVAL;
+ 	}
+ 	/* did port event handler already start resume timing? */
+-	if (!bus_state->resume_done[wIndex]) {
++	if (!port->resume_timestamp) {
+ 		/* If not, maybe we are in a host initated resume? */
+ 		if (test_bit(wIndex, &bus_state->resuming_ports)) {
+ 			/* Host initated resume doesn't time the resume
+@@ -855,28 +854,29 @@ static int xhci_handle_usb2_port_link_resume(struct xhci_port *port,
+ 				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 
+ 			set_bit(wIndex, &bus_state->resuming_ports);
+-			bus_state->resume_done[wIndex] = timeout;
++			port->resume_timestamp = timeout;
+ 			mod_timer(&hcd->rh_timer, timeout);
+ 			usb_hcd_start_port_resume(&hcd->self, wIndex);
+ 		}
+ 	/* Has resume been signalled for USB_RESUME_TIME yet? */
+-	} else if (time_after_eq(jiffies, bus_state->resume_done[wIndex])) {
++	} else if (time_after_eq(jiffies, port->resume_timestamp)) {
+ 		int time_left;
+ 
+ 		xhci_dbg(xhci, "resume USB2 port %d-%d\n",
+ 			 hcd->self.busnum, wIndex + 1);
+ 
+-		bus_state->resume_done[wIndex] = 0;
++		port->resume_timestamp = 0;
+ 		clear_bit(wIndex, &bus_state->resuming_ports);
+ 
+-		set_bit(wIndex, &bus_state->rexit_ports);
++		reinit_completion(&port->rexit_done);
++		port->rexit_active = true;
+ 
+ 		xhci_test_and_clear_bit(xhci, port, PORT_PLC);
+ 		xhci_set_link_state(xhci, port, XDEV_U0);
+ 
+ 		spin_unlock_irqrestore(&xhci->lock, *flags);
+ 		time_left = wait_for_completion_timeout(
+-			&bus_state->rexit_done[wIndex],
++			&port->rexit_done,
+ 			msecs_to_jiffies(XHCI_MAX_REXIT_TIMEOUT_MS));
+ 		spin_lock_irqsave(&xhci->lock, *flags);
+ 
+@@ -885,7 +885,6 @@ static int xhci_handle_usb2_port_link_resume(struct xhci_port *port,
+ 							    wIndex + 1);
+ 			if (!slot_id) {
+ 				xhci_dbg(xhci, "slot_id is zero\n");
+-				*status = 0xffffffff;
+ 				return -ENODEV;
+ 			}
+ 			xhci_ring_device(xhci, slot_id);
+@@ -894,22 +893,19 @@ static int xhci_handle_usb2_port_link_resume(struct xhci_port *port,
+ 
+ 			xhci_warn(xhci, "Port resume timed out, port %d-%d: 0x%x\n",
+ 				  hcd->self.busnum, wIndex + 1, port_status);
+-			*status |= USB_PORT_STAT_SUSPEND;
+-			clear_bit(wIndex, &bus_state->rexit_ports);
++			/*
++			 * keep rexit_active set if U0 transition failed so we
++			 * know to report PORT_STAT_SUSPEND status back to
++			 * usbcore. It will be cleared later once the port is
++			 * out of RESUME/U3 state
++			 */
+ 		}
+ 
+ 		usb_hcd_end_port_resume(&hcd->self, wIndex);
+ 		bus_state->port_c_suspend |= 1 << wIndex;
+ 		bus_state->suspended_ports &= ~(1 << wIndex);
+-	} else {
+-		/*
+-		 * The resume has been signaling for less than
+-		 * USB_RESUME_TIME. Report the port status as SUSPEND,
+-		 * let the usbcore check port status again and clear
+-		 * resume signaling later.
+-		 */
+-		*status |= USB_PORT_STAT_SUSPEND;
+ 	}
++
+ 	return 0;
+ }
+ 
+@@ -961,19 +957,19 @@ static void xhci_get_usb3_port_status(struct xhci_port *port, u32 *status,
+ 		*status |= USB_PORT_STAT_C_CONFIG_ERROR << 16;
+ 
+ 	/* USB3 specific wPortStatus bits */
+-	if (portsc & PORT_POWER) {
++	if (portsc & PORT_POWER)
+ 		*status |= USB_SS_PORT_STAT_POWER;
+-		/* link state handling */
+-		if (link_state == XDEV_U0)
+-			bus_state->suspended_ports &= ~(1 << portnum);
+-	}
+ 
+-	/* remote wake resume signaling complete */
+-	if (bus_state->port_remote_wakeup & (1 << portnum) &&
++	/* no longer suspended or resuming */
++	if (link_state != XDEV_U3 &&
+ 	    link_state != XDEV_RESUME &&
+ 	    link_state != XDEV_RECOVERY) {
+-		bus_state->port_remote_wakeup &= ~(1 << portnum);
+-		usb_hcd_end_port_resume(&hcd->self, portnum);
++		/* remote wake resume signaling complete */
++		if (bus_state->port_remote_wakeup & (1 << portnum)) {
++			bus_state->port_remote_wakeup &= ~(1 << portnum);
++			usb_hcd_end_port_resume(&hcd->self, portnum);
++		}
++		bus_state->suspended_ports &= ~(1 << portnum);
+ 	}
+ 
+ 	xhci_hub_report_usb3_link_state(xhci, status, portsc);
+@@ -986,7 +982,7 @@ static void xhci_get_usb2_port_status(struct xhci_port *port, u32 *status,
+ 	struct xhci_bus_state *bus_state;
+ 	u32 link_state;
+ 	u32 portnum;
+-	int ret;
++	int err;
+ 
+ 	bus_state = &port->rhub->bus_state;
+ 	link_state = portsc & PORT_PLS_MASK;
+@@ -1002,22 +998,35 @@ static void xhci_get_usb2_port_status(struct xhci_port *port, u32 *status,
+ 		if (link_state == XDEV_U2)
+ 			*status |= USB_PORT_STAT_L1;
+ 		if (link_state == XDEV_U0) {
+-			if (bus_state->resume_done[portnum])
+-				usb_hcd_end_port_resume(&port->rhub->hcd->self,
+-							portnum);
+-			bus_state->resume_done[portnum] = 0;
+-			clear_bit(portnum, &bus_state->resuming_ports);
+ 			if (bus_state->suspended_ports & (1 << portnum)) {
+ 				bus_state->suspended_ports &= ~(1 << portnum);
+ 				bus_state->port_c_suspend |= 1 << portnum;
+ 			}
+ 		}
+ 		if (link_state == XDEV_RESUME) {
+-			ret = xhci_handle_usb2_port_link_resume(port, status,
+-								portsc, flags);
+-			if (ret)
+-				return;
++			err = xhci_handle_usb2_port_link_resume(port, portsc,
++								flags);
++			if (err < 0)
++				*status = 0xffffffff;
++			else if (port->resume_timestamp || port->rexit_active)
++				*status |= USB_PORT_STAT_SUSPEND;
++		}
++	}
++
++	/*
++	 * Clear usb2 resume signalling variables if port is no longer suspended
++	 * or resuming. Port either resumed to U0/U1/U2, disconnected, or in a
++	 * error state. Resume related variables should be cleared in all those cases.
++	 */
++	if (link_state != XDEV_U3 && link_state != XDEV_RESUME) {
++		if (port->resume_timestamp ||
++		    test_bit(portnum, &bus_state->resuming_ports)) {
++			port->resume_timestamp = 0;
++			clear_bit(portnum, &bus_state->resuming_ports);
++			usb_hcd_end_port_resume(&port->rhub->hcd->self, portnum);
+ 		}
++		port->rexit_active = 0;
++		bus_state->suspended_ports &= ~(1 << portnum);
+ 	}
+ }
+ 
+@@ -1073,18 +1082,6 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ 	else
+ 		xhci_get_usb2_port_status(port, &status, raw_port_status,
+ 					  flags);
+-	/*
+-	 * Clear stale usb2 resume signalling variables in case port changed
+-	 * state during resume signalling. For example on error
+-	 */
+-	if ((bus_state->resume_done[wIndex] ||
+-	     test_bit(wIndex, &bus_state->resuming_ports)) &&
+-	    (raw_port_status & PORT_PLS_MASK) != XDEV_U3 &&
+-	    (raw_port_status & PORT_PLS_MASK) != XDEV_RESUME) {
+-		bus_state->resume_done[wIndex] = 0;
+-		clear_bit(wIndex, &bus_state->resuming_ports);
+-		usb_hcd_end_port_resume(&hcd->self, wIndex);
+-	}
+ 
+ 	if (bus_state->port_c_suspend & (1 << wIndex))
+ 		status |= USB_PORT_STAT_C_SUSPEND << 16;
+@@ -1108,11 +1105,14 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 	u16 test_mode = 0;
+ 	struct xhci_hub *rhub;
+ 	struct xhci_port **ports;
++	struct xhci_port *port;
++	int portnum1;
+ 
+ 	rhub = xhci_get_rhub(hcd);
+ 	ports = rhub->ports;
+ 	max_ports = rhub->num_ports;
+ 	bus_state = &rhub->bus_state;
++	portnum1 = wIndex & 0xff;
+ 
+ 	spin_lock_irqsave(&xhci->lock, flags);
+ 	switch (typeReq) {
+@@ -1146,10 +1146,12 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		spin_unlock_irqrestore(&xhci->lock, flags);
+ 		return retval;
+ 	case GetPortStatus:
+-		if (!wIndex || wIndex > max_ports)
++		if (!portnum1 || portnum1 > max_ports)
+ 			goto error;
++
+ 		wIndex--;
+-		temp = readl(ports[wIndex]->addr);
++		port = ports[portnum1 - 1];
++		temp = readl(port->addr);
+ 		if (temp == ~(u32)0) {
+ 			xhci_hc_died(xhci);
+ 			retval = -ENODEV;
+@@ -1162,7 +1164,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			goto error;
+ 
+ 		xhci_dbg(xhci, "Get port status %d-%d read: 0x%x, return 0x%x",
+-			 hcd->self.busnum, wIndex + 1, temp, status);
++			 hcd->self.busnum, portnum1, temp, status);
+ 
+ 		put_unaligned(cpu_to_le32(status), (__le32 *) buf);
+ 		/* if USB 3.1 extended port status return additional 4 bytes */
+@@ -1174,7 +1176,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				retval = -EINVAL;
+ 				break;
+ 			}
+-			port_li = readl(ports[wIndex]->addr + PORTLI);
++			port_li = readl(port->addr + PORTLI);
+ 			status = xhci_get_ext_port_status(temp, port_li);
+ 			put_unaligned_le32(status, &buf[4]);
+ 		}
+@@ -1188,11 +1190,14 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			test_mode = (wIndex & 0xff00) >> 8;
+ 		/* The MSB of wIndex is the U1/U2 timeout */
+ 		timeout = (wIndex & 0xff00) >> 8;
++
+ 		wIndex &= 0xff;
+-		if (!wIndex || wIndex > max_ports)
++		if (!portnum1 || portnum1 > max_ports)
+ 			goto error;
++
++		port = ports[portnum1 - 1];
+ 		wIndex--;
+-		temp = readl(ports[wIndex]->addr);
++		temp = readl(port->addr);
+ 		if (temp == ~(u32)0) {
+ 			xhci_hc_died(xhci);
+ 			retval = -ENODEV;
+@@ -1202,11 +1207,10 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		/* FIXME: What new port features do we need to support? */
+ 		switch (wValue) {
+ 		case USB_PORT_FEAT_SUSPEND:
+-			temp = readl(ports[wIndex]->addr);
++			temp = readl(port->addr);
+ 			if ((temp & PORT_PLS_MASK) != XDEV_U0) {
+ 				/* Resume the port to U0 first */
+-				xhci_set_link_state(xhci, ports[wIndex],
+-							XDEV_U0);
++				xhci_set_link_state(xhci, port, XDEV_U0);
+ 				spin_unlock_irqrestore(&xhci->lock, flags);
+ 				msleep(10);
+ 				spin_lock_irqsave(&xhci->lock, flags);
+@@ -1215,16 +1219,16 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			 * a port unless the port reports that it is in the
+ 			 * enabled (PED = ‘1’,PLS < ‘3’) state.
+ 			 */
+-			temp = readl(ports[wIndex]->addr);
++			temp = readl(port->addr);
+ 			if ((temp & PORT_PE) == 0 || (temp & PORT_RESET)
+ 				|| (temp & PORT_PLS_MASK) >= XDEV_U3) {
+ 				xhci_warn(xhci, "USB core suspending port %d-%d not in U0/U1/U2\n",
+-					  hcd->self.busnum, wIndex + 1);
++					  hcd->self.busnum, portnum1);
+ 				goto error;
+ 			}
+ 
+ 			slot_id = xhci_find_slot_id_by_port(hcd, xhci,
+-					wIndex + 1);
++							    portnum1);
+ 			if (!slot_id) {
+ 				xhci_warn(xhci, "slot_id is zero\n");
+ 				goto error;
+@@ -1234,21 +1238,21 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			xhci_stop_device(xhci, slot_id, 1);
+ 			spin_lock_irqsave(&xhci->lock, flags);
+ 
+-			xhci_set_link_state(xhci, ports[wIndex], XDEV_U3);
++			xhci_set_link_state(xhci, port, XDEV_U3);
+ 
+ 			spin_unlock_irqrestore(&xhci->lock, flags);
+ 			msleep(10); /* wait device to enter */
+ 			spin_lock_irqsave(&xhci->lock, flags);
+ 
+-			temp = readl(ports[wIndex]->addr);
++			temp = readl(port->addr);
+ 			bus_state->suspended_ports |= 1 << wIndex;
+ 			break;
+ 		case USB_PORT_FEAT_LINK_STATE:
+-			temp = readl(ports[wIndex]->addr);
++			temp = readl(port->addr);
+ 			/* Disable port */
+ 			if (link_state == USB_SS_PORT_LS_SS_DISABLED) {
+ 				xhci_dbg(xhci, "Disable port %d-%d\n",
+-					 hcd->self.busnum, wIndex + 1);
++					 hcd->self.busnum, portnum1);
+ 				temp = xhci_port_state_to_neutral(temp);
+ 				/*
+ 				 * Clear all change bits, so that we get a new
+@@ -1257,18 +1261,17 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				temp |= PORT_CSC | PORT_PEC | PORT_WRC |
+ 					PORT_OCC | PORT_RC | PORT_PLC |
+ 					PORT_CEC;
+-				writel(temp | PORT_PE, ports[wIndex]->addr);
+-				temp = readl(ports[wIndex]->addr);
++				writel(temp | PORT_PE, port->addr);
++				temp = readl(port->addr);
+ 				break;
+ 			}
+ 
+ 			/* Put link in RxDetect (enable port) */
+ 			if (link_state == USB_SS_PORT_LS_RX_DETECT) {
+ 				xhci_dbg(xhci, "Enable port %d-%d\n",
+-					 hcd->self.busnum, wIndex + 1);
+-				xhci_set_link_state(xhci, ports[wIndex],
+-							link_state);
+-				temp = readl(ports[wIndex]->addr);
++					 hcd->self.busnum, portnum1);
++				xhci_set_link_state(xhci, port,	link_state);
++				temp = readl(port->addr);
+ 				break;
+ 			}
+ 
+@@ -1298,11 +1301,10 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				}
+ 
+ 				xhci_dbg(xhci, "Enable compliance mode transition for port %d-%d\n",
+-					 hcd->self.busnum, wIndex + 1);
+-				xhci_set_link_state(xhci, ports[wIndex],
+-						link_state);
++					 hcd->self.busnum, portnum1);
++				xhci_set_link_state(xhci, port, link_state);
+ 
+-				temp = readl(ports[wIndex]->addr);
++				temp = readl(port->addr);
+ 				break;
+ 			}
+ 			/* Port must be enabled */
+@@ -1313,8 +1315,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			/* Can't set port link state above '3' (U3) */
+ 			if (link_state > USB_SS_PORT_LS_U3) {
+ 				xhci_warn(xhci, "Cannot set port %d-%d link state %d\n",
+-					  hcd->self.busnum, wIndex + 1,
+-					  link_state);
++					  hcd->self.busnum, portnum1, link_state);
+ 				goto error;
+ 			}
+ 
+@@ -1336,30 +1337,29 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				    pls == XDEV_RESUME ||
+ 				    pls == XDEV_RECOVERY) {
+ 					wait_u0 = true;
+-					reinit_completion(&bus_state->u3exit_done[wIndex]);
++					reinit_completion(&port->u3exit_done);
+ 				}
+ 				if (pls <= XDEV_U3) /* U1, U2, U3 */
+-					xhci_set_link_state(xhci, ports[wIndex],
+-							    USB_SS_PORT_LS_U0);
++					xhci_set_link_state(xhci, port, USB_SS_PORT_LS_U0);
+ 				if (!wait_u0) {
+ 					if (pls > XDEV_U3)
+ 						goto error;
+ 					break;
+ 				}
+ 				spin_unlock_irqrestore(&xhci->lock, flags);
+-				if (!wait_for_completion_timeout(&bus_state->u3exit_done[wIndex],
++				if (!wait_for_completion_timeout(&port->u3exit_done,
+ 								 msecs_to_jiffies(500)))
+ 					xhci_dbg(xhci, "missing U0 port change event for port %d-%d\n",
+-						 hcd->self.busnum, wIndex + 1);
++						 hcd->self.busnum, portnum1);
+ 				spin_lock_irqsave(&xhci->lock, flags);
+-				temp = readl(ports[wIndex]->addr);
++				temp = readl(port->addr);
+ 				break;
+ 			}
+ 
+ 			if (link_state == USB_SS_PORT_LS_U3) {
+ 				int retries = 16;
+ 				slot_id = xhci_find_slot_id_by_port(hcd, xhci,
+-						wIndex + 1);
++								    portnum1);
+ 				if (slot_id) {
+ 					/* unlock to execute stop endpoint
+ 					 * commands */
+@@ -1368,16 +1368,16 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 					xhci_stop_device(xhci, slot_id, 1);
+ 					spin_lock_irqsave(&xhci->lock, flags);
+ 				}
+-				xhci_set_link_state(xhci, ports[wIndex], USB_SS_PORT_LS_U3);
++				xhci_set_link_state(xhci, port, USB_SS_PORT_LS_U3);
+ 				spin_unlock_irqrestore(&xhci->lock, flags);
+ 				while (retries--) {
+ 					usleep_range(4000, 8000);
+-					temp = readl(ports[wIndex]->addr);
++					temp = readl(port->addr);
+ 					if ((temp & PORT_PLS_MASK) == XDEV_U3)
+ 						break;
+ 				}
+ 				spin_lock_irqsave(&xhci->lock, flags);
+-				temp = readl(ports[wIndex]->addr);
++				temp = readl(port->addr);
+ 				bus_state->suspended_ports |= 1 << wIndex;
+ 			}
+ 			break;
+@@ -1392,39 +1392,38 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			break;
+ 		case USB_PORT_FEAT_RESET:
+ 			temp = (temp | PORT_RESET);
+-			writel(temp, ports[wIndex]->addr);
++			writel(temp, port->addr);
+ 
+-			temp = readl(ports[wIndex]->addr);
++			temp = readl(port->addr);
+ 			xhci_dbg(xhci, "set port reset, actual port %d-%d status  = 0x%x\n",
+-				 hcd->self.busnum, wIndex + 1, temp);
++				 hcd->self.busnum, portnum1, temp);
+ 			break;
+ 		case USB_PORT_FEAT_REMOTE_WAKE_MASK:
+-			xhci_set_remote_wake_mask(xhci, ports[wIndex],
+-						  wake_mask);
+-			temp = readl(ports[wIndex]->addr);
++			xhci_set_remote_wake_mask(xhci, port, wake_mask);
++			temp = readl(port->addr);
+ 			xhci_dbg(xhci, "set port remote wake mask, actual port %d-%d status  = 0x%x\n",
+-				 hcd->self.busnum, wIndex + 1, temp);
++				 hcd->self.busnum, portnum1, temp);
+ 			break;
+ 		case USB_PORT_FEAT_BH_PORT_RESET:
+ 			temp |= PORT_WR;
+-			writel(temp, ports[wIndex]->addr);
+-			temp = readl(ports[wIndex]->addr);
++			writel(temp, port->addr);
++			temp = readl(port->addr);
+ 			break;
+ 		case USB_PORT_FEAT_U1_TIMEOUT:
+ 			if (hcd->speed < HCD_USB3)
+ 				goto error;
+-			temp = readl(ports[wIndex]->addr + PORTPMSC);
++			temp = readl(port->addr + PORTPMSC);
+ 			temp &= ~PORT_U1_TIMEOUT_MASK;
+ 			temp |= PORT_U1_TIMEOUT(timeout);
+-			writel(temp, ports[wIndex]->addr + PORTPMSC);
++			writel(temp, port->addr + PORTPMSC);
+ 			break;
+ 		case USB_PORT_FEAT_U2_TIMEOUT:
+ 			if (hcd->speed < HCD_USB3)
+ 				goto error;
+-			temp = readl(ports[wIndex]->addr + PORTPMSC);
++			temp = readl(port->addr + PORTPMSC);
+ 			temp &= ~PORT_U2_TIMEOUT_MASK;
+ 			temp |= PORT_U2_TIMEOUT(timeout);
+-			writel(temp, ports[wIndex]->addr + PORTPMSC);
++			writel(temp, port->addr + PORTPMSC);
+ 			break;
+ 		case USB_PORT_FEAT_TEST:
+ 			/* 4.19.6 Port Test Modes (USB2 Test Mode) */
+@@ -1440,13 +1439,16 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			goto error;
+ 		}
+ 		/* unblock any posted writes */
+-		temp = readl(ports[wIndex]->addr);
++		temp = readl(port->addr);
+ 		break;
+ 	case ClearPortFeature:
+-		if (!wIndex || wIndex > max_ports)
++		if (!portnum1 || portnum1 > max_ports)
+ 			goto error;
++
++		port = ports[portnum1 - 1];
++
+ 		wIndex--;
+-		temp = readl(ports[wIndex]->addr);
++		temp = readl(port->addr);
+ 		if (temp == ~(u32)0) {
+ 			xhci_hc_died(xhci);
+ 			retval = -ENODEV;
+@@ -1456,7 +1458,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		temp = xhci_port_state_to_neutral(temp);
+ 		switch (wValue) {
+ 		case USB_PORT_FEAT_SUSPEND:
+-			temp = readl(ports[wIndex]->addr);
++			temp = readl(port->addr);
+ 			xhci_dbg(xhci, "clear USB_PORT_FEAT_SUSPEND\n");
+ 			xhci_dbg(xhci, "PORTSC %04x\n", temp);
+ 			if (temp & PORT_RESET)
+@@ -1467,20 +1469,18 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 
+ 				set_bit(wIndex, &bus_state->resuming_ports);
+ 				usb_hcd_start_port_resume(&hcd->self, wIndex);
+-				xhci_set_link_state(xhci, ports[wIndex],
+-						    XDEV_RESUME);
++				xhci_set_link_state(xhci, port, XDEV_RESUME);
+ 				spin_unlock_irqrestore(&xhci->lock, flags);
+ 				msleep(USB_RESUME_TIMEOUT);
+ 				spin_lock_irqsave(&xhci->lock, flags);
+-				xhci_set_link_state(xhci, ports[wIndex],
+-							XDEV_U0);
++				xhci_set_link_state(xhci, port, XDEV_U0);
+ 				clear_bit(wIndex, &bus_state->resuming_ports);
+ 				usb_hcd_end_port_resume(&hcd->self, wIndex);
+ 			}
+ 			bus_state->port_c_suspend |= 1 << wIndex;
+ 
+ 			slot_id = xhci_find_slot_id_by_port(hcd, xhci,
+-					wIndex + 1);
++					portnum1);
+ 			if (!slot_id) {
+ 				xhci_dbg(xhci, "slot_id is zero\n");
+ 				goto error;
+@@ -1498,11 +1498,11 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		case USB_PORT_FEAT_C_PORT_LINK_STATE:
+ 		case USB_PORT_FEAT_C_PORT_CONFIG_ERROR:
+ 			xhci_clear_port_change_bit(xhci, wValue, wIndex,
+-					ports[wIndex]->addr, temp);
++					port->addr, temp);
+ 			break;
+ 		case USB_PORT_FEAT_ENABLE:
+ 			xhci_disable_port(hcd, xhci, wIndex,
+-					ports[wIndex]->addr, temp);
++					port->addr, temp);
+ 			break;
+ 		case USB_PORT_FEAT_POWER:
+ 			xhci_set_port_power(xhci, hcd, wIndex, false, &flags);
+@@ -1586,8 +1586,8 @@ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf)
+ 
+ 		if ((temp & mask) != 0 ||
+ 			(bus_state->port_c_suspend & 1 << i) ||
+-			(bus_state->resume_done[i] && time_after_eq(
+-			    jiffies, bus_state->resume_done[i]))) {
++			(ports[i]->resume_timestamp && time_after_eq(
++			    jiffies, ports[i]->resume_timestamp))) {
+ 			buf[(i + 1) / 8] |= 1 << (i + 1) % 8;
+ 			status = 1;
+ 		}
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 390bdf823e088..006e1b15fbda9 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -2336,6 +2336,9 @@ static int xhci_setup_port_arrays(struct xhci_hcd *xhci, gfp_t flags)
+ 		xhci->hw_ports[i].addr = &xhci->op_regs->port_status_base +
+ 			NUM_PORT_REGS * i;
+ 		xhci->hw_ports[i].hw_portnum = i;
++
++		init_completion(&xhci->hw_ports[i].rexit_done);
++		init_completion(&xhci->hw_ports[i].u3exit_done);
+ 	}
+ 
+ 	xhci->rh_bw = kcalloc_node(num_ports, sizeof(*xhci->rh_bw), flags,
+@@ -2603,13 +2606,6 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ 	 */
+ 	for (i = 0; i < MAX_HC_SLOTS; i++)
+ 		xhci->devs[i] = NULL;
+-	for (i = 0; i < USB_MAXCHILDREN; i++) {
+-		xhci->usb2_rhub.bus_state.resume_done[i] = 0;
+-		xhci->usb3_rhub.bus_state.resume_done[i] = 0;
+-		/* Only the USB 2.0 completions will ever be used. */
+-		init_completion(&xhci->usb2_rhub.bus_state.rexit_done[i]);
+-		init_completion(&xhci->usb3_rhub.bus_state.u3exit_done[i]);
+-	}
+ 
+ 	if (scratchpad_alloc(xhci, flags))
+ 		goto fail;
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index b69b8c7e7966c..5ee095a5d38aa 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -742,7 +742,7 @@ static void xhci_giveback_urb_in_irq(struct xhci_hcd *xhci,
+ static void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci,
+ 		struct xhci_ring *ring, struct xhci_td *td)
+ {
+-	struct device *dev = xhci_to_hcd(xhci)->self.controller;
++	struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
+ 	struct xhci_segment *seg = td->bounce_seg;
+ 	struct urb *urb = td->urb;
+ 	size_t len;
+@@ -1851,7 +1851,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 			goto cleanup;
+ 		} else if (!test_bit(hcd_portnum, &bus_state->resuming_ports)) {
+ 			xhci_dbg(xhci, "resume HS port %d\n", port_id);
+-			bus_state->resume_done[hcd_portnum] = jiffies +
++			port->resume_timestamp = jiffies +
+ 				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			set_bit(hcd_portnum, &bus_state->resuming_ports);
+ 			/* Do the rest in GetPortStatus after resume time delay.
+@@ -1860,7 +1860,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 			 */
+ 			set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+ 			mod_timer(&hcd->rh_timer,
+-				  bus_state->resume_done[hcd_portnum]);
++				  port->resume_timestamp);
+ 			usb_hcd_start_port_resume(&hcd->self, hcd_portnum);
+ 			bogus_port_status = true;
+ 		}
+@@ -1872,7 +1872,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 	     (portsc & PORT_PLS_MASK) == XDEV_U1 ||
+ 	     (portsc & PORT_PLS_MASK) == XDEV_U2)) {
+ 		xhci_dbg(xhci, "resume SS port %d finished\n", port_id);
+-		complete(&bus_state->u3exit_done[hcd_portnum]);
++		complete(&port->u3exit_done);
+ 		/* We've just brought the device into U0/1/2 through either the
+ 		 * Resume state after a device remote wakeup, or through the
+ 		 * U3Exit state after a host-initiated resume.  If it's a device
+@@ -1897,10 +1897,9 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 	 * RExit to a disconnect state).  If so, let the the driver know it's
+ 	 * out of the RExit state.
+ 	 */
+-	if (!DEV_SUPERSPEED_ANY(portsc) && hcd->speed < HCD_USB3 &&
+-			test_and_clear_bit(hcd_portnum,
+-				&bus_state->rexit_ports)) {
+-		complete(&bus_state->rexit_done[hcd_portnum]);
++	if (hcd->speed < HCD_USB3 && port->rexit_active) {
++		complete(&port->rexit_done);
++		port->rexit_active = false;
+ 		bogus_port_status = true;
+ 		goto cleanup;
+ 	}
+@@ -3325,7 +3324,7 @@ static u32 xhci_td_remainder(struct xhci_hcd *xhci, int transferred,
+ static int xhci_align_td(struct xhci_hcd *xhci, struct urb *urb, u32 enqd_len,
+ 			 u32 *trb_buff_len, struct xhci_segment *seg)
+ {
+-	struct device *dev = xhci_to_hcd(xhci)->self.controller;
++	struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
+ 	unsigned int unalign;
+ 	unsigned int max_pkt;
+ 	u32 new_buff_len;
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 6a7c05940e661..bb3c362a194b2 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1699,13 +1699,8 @@ struct xhci_bus_state {
+ 	u32			port_c_suspend;
+ 	u32			suspended_ports;
+ 	u32			port_remote_wakeup;
+-	unsigned long		resume_done[USB_MAXCHILDREN];
+ 	/* which ports have started to resume */
+ 	unsigned long		resuming_ports;
+-	/* Which ports are waiting on RExit to U0 transition. */
+-	unsigned long		rexit_ports;
+-	struct completion	rexit_done[USB_MAXCHILDREN];
+-	struct completion	u3exit_done[USB_MAXCHILDREN];
+ };
+ 
+ 
+@@ -1729,6 +1724,10 @@ struct xhci_port {
+ 	struct xhci_hub		*rhub;
+ 	struct xhci_port_cap	*port_cap;
+ 	unsigned int		lpm_incapable:1;
++	unsigned long		resume_timestamp;
++	bool			rexit_active;
++	struct completion	rexit_done;
++	struct completion	u3exit_done;
+ };
+ 
+ struct xhci_hub {
+diff --git a/drivers/usb/musb/musb_debugfs.c b/drivers/usb/musb/musb_debugfs.c
+index 30a89aa8a3e7a..5401ae66894eb 100644
+--- a/drivers/usb/musb/musb_debugfs.c
++++ b/drivers/usb/musb/musb_debugfs.c
+@@ -39,7 +39,7 @@ static const struct musb_register_map musb_regmap[] = {
+ 	{ "IntrUsbE",	MUSB_INTRUSBE,	8 },
+ 	{ "DevCtl",	MUSB_DEVCTL,	8 },
+ 	{ "VControl",	0x68,		32 },
+-	{ "HWVers",	0x69,		16 },
++	{ "HWVers",	MUSB_HWVERS,	16 },
+ 	{ "LinkInfo",	MUSB_LINKINFO,	8 },
+ 	{ "VPLen",	MUSB_VPLEN,	8 },
+ 	{ "HS_EOF1",	MUSB_HS_EOF1,	8 },
+diff --git a/drivers/usb/musb/musb_host.c b/drivers/usb/musb/musb_host.c
+index 30c5e7de0761c..1880b0f20df00 100644
+--- a/drivers/usb/musb/musb_host.c
++++ b/drivers/usb/musb/musb_host.c
+@@ -321,10 +321,16 @@ static void musb_advance_schedule(struct musb *musb, struct urb *urb,
+ 	musb_giveback(musb, urb, status);
+ 	qh->is_ready = ready;
+ 
++	/*
++	 * musb->lock had been unlocked in musb_giveback, so qh may
++	 * be freed, need to get it again
++	 */
++	qh = musb_ep_get_qh(hw_ep, is_in);
++
+ 	/* reclaim resources (and bandwidth) ASAP; deschedule it, and
+ 	 * invalidate qh as soon as list_empty(&hep->urb_list)
+ 	 */
+-	if (list_empty(&qh->hep->urb_list)) {
++	if (qh && list_empty(&qh->hep->urb_list)) {
+ 		struct list_head	*head;
+ 		struct dma_controller	*dma = musb->dma_controller;
+ 
+@@ -2404,6 +2410,7 @@ static int musb_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+ 		 * and its URB list has emptied, recycle this qh.
+ 		 */
+ 		if (ready && list_empty(&qh->hep->urb_list)) {
++			musb_ep_set_qh(qh->hw_ep, is_in, NULL);
+ 			qh->hep->hcpriv = NULL;
+ 			list_del(&qh->ring);
+ 			kfree(qh);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 83567e6e32e08..3e0579d6ec82c 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -203,6 +203,9 @@ static void option_instat_callback(struct urb *urb);
+ #define DELL_PRODUCT_5829E_ESIM			0x81e4
+ #define DELL_PRODUCT_5829E			0x81e6
+ 
++#define DELL_PRODUCT_FM101R			0x8213
++#define DELL_PRODUCT_FM101R_ESIM		0x8215
++
+ #define KYOCERA_VENDOR_ID			0x0c88
+ #define KYOCERA_PRODUCT_KPC650			0x17da
+ #define KYOCERA_PRODUCT_KPC680			0x180a
+@@ -1108,6 +1111,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(0) | RSVD(6) },
+ 	{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5829E_ESIM),
+ 	  .driver_info = RSVD(0) | RSVD(6) },
++	{ USB_DEVICE_INTERFACE_CLASS(DELL_VENDOR_ID, DELL_PRODUCT_FM101R, 0xff) },
++	{ USB_DEVICE_INTERFACE_CLASS(DELL_VENDOR_ID, DELL_PRODUCT_FM101R_ESIM, 0xff) },
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_E100A) },	/* ADU-E100, ADU-310 */
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_500A) },
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_620UW) },
+@@ -1290,6 +1295,7 @@ static const struct usb_device_id option_ids[] = {
+ 	 .driver_info = NCTRL(0) | RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1033, 0xff),	/* Telit LE910C1-EUX (ECM) */
+ 	 .driver_info = NCTRL(0) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1035, 0xff) }, /* Telit LE910C4-WWX (ECM) */
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
+ 	  .driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),
+@@ -2262,6 +2268,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) },			/* GosunCn GM500 ECM/NCM */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
+ 	{ } /* Terminating entry */
+diff --git a/drivers/usb/typec/altmodes/Kconfig b/drivers/usb/typec/altmodes/Kconfig
+index 60d375e9c3c7c..1a6b5e872b0d9 100644
+--- a/drivers/usb/typec/altmodes/Kconfig
++++ b/drivers/usb/typec/altmodes/Kconfig
+@@ -4,6 +4,7 @@ menu "USB Type-C Alternate Mode drivers"
+ 
+ config TYPEC_DP_ALTMODE
+ 	tristate "DisplayPort Alternate Mode driver"
++	depends on DRM
+ 	help
+ 	  DisplayPort USB Type-C Alternate Mode allows DisplayPort
+ 	  displays and adapters to be attached to the USB Type-C
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index 0d4b1c0eeefb3..def903e9d2ab4 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -11,8 +11,10 @@
+ #include <linux/delay.h>
+ #include <linux/mutex.h>
+ #include <linux/module.h>
++#include <linux/property.h>
+ #include <linux/usb/pd_vdo.h>
+ #include <linux/usb/typec_dp.h>
++#include <drm/drm_connector.h>
+ #include "displayport.h"
+ 
+ #define DP_HEADER(_dp, cmd)		(VDO((_dp)->alt->svid, 1, cmd) | \
+@@ -57,11 +59,13 @@ struct dp_altmode {
+ 	struct typec_displayport_data data;
+ 
+ 	enum dp_state state;
++	bool hpd;
+ 
+ 	struct mutex lock; /* device lock */
+ 	struct work_struct work;
+ 	struct typec_altmode *alt;
+ 	const struct typec_altmode *port;
++	struct fwnode_handle *connector_fwnode;
+ };
+ 
+ static int dp_altmode_notify(struct dp_altmode *dp)
+@@ -122,6 +126,7 @@ static int dp_altmode_configure(struct dp_altmode *dp, u8 con)
+ static int dp_altmode_status_update(struct dp_altmode *dp)
+ {
+ 	bool configured = !!DP_CONF_GET_PIN_ASSIGN(dp->data.conf);
++	bool hpd = !!(dp->data.status & DP_STATUS_HPD_STATE);
+ 	u8 con = DP_STATUS_CONNECTION(dp->data.status);
+ 	int ret = 0;
+ 
+@@ -134,6 +139,11 @@ static int dp_altmode_status_update(struct dp_altmode *dp)
+ 		ret = dp_altmode_configure(dp, con);
+ 		if (!ret)
+ 			dp->state = DP_STATE_CONFIGURE;
++	} else {
++		if (dp->hpd != hpd) {
++			drm_connector_oob_hotplug_event(dp->connector_fwnode);
++			dp->hpd = hpd;
++		}
+ 	}
+ 
+ 	return ret;
+@@ -275,6 +285,11 @@ static int dp_altmode_vdm(struct typec_altmode *alt,
+ 		case CMD_EXIT_MODE:
+ 			dp->data.status = 0;
+ 			dp->data.conf = 0;
++			if (dp->hpd) {
++				drm_connector_oob_hotplug_event(dp->connector_fwnode);
++				dp->hpd = false;
++				sysfs_notify(&dp->alt->dev.kobj, "displayport", "hpd");
++			}
+ 			break;
+ 		case DP_CMD_STATUS_UPDATE:
+ 			dp->data.status = *vdo;
+@@ -526,6 +541,7 @@ static const struct attribute_group dp_altmode_group = {
+ int dp_altmode_probe(struct typec_altmode *alt)
+ {
+ 	const struct typec_altmode *port = typec_altmode_get_partner(alt);
++	struct fwnode_handle *fwnode;
+ 	struct dp_altmode *dp;
+ 	int ret;
+ 
+@@ -554,6 +570,11 @@ int dp_altmode_probe(struct typec_altmode *alt)
+ 	alt->desc = "DisplayPort";
+ 	alt->ops = &dp_altmode_ops;
+ 
++	fwnode = dev_fwnode(alt->dev.parent->parent); /* typec_port fwnode */
++	dp->connector_fwnode = fwnode_find_reference(fwnode, "displayport", 0);
++	if (IS_ERR(dp->connector_fwnode))
++		dp->connector_fwnode = NULL;
++
+ 	typec_altmode_set_drvdata(alt, dp);
+ 
+ 	dp->state = DP_STATE_ENTER;
+@@ -569,6 +590,13 @@ void dp_altmode_remove(struct typec_altmode *alt)
+ 
+ 	sysfs_remove_group(&alt->dev.kobj, &dp_altmode_group);
+ 	cancel_work_sync(&dp->work);
++
++	if (dp->connector_fwnode) {
++		if (dp->hpd)
++			drm_connector_oob_hotplug_event(dp->connector_fwnode);
++
++		fwnode_handle_put(dp->connector_fwnode);
++	}
+ }
+ EXPORT_SYMBOL_GPL(dp_altmode_remove);
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 4d2f25ebe3048..8f62e171053ba 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -1641,12 +1641,12 @@ static int run_delayed_tree_ref(struct btrfs_trans_handle *trans,
+ 		parent = ref->parent;
+ 	ref_root = ref->root;
+ 
+-	if (node->ref_mod != 1) {
++	if (unlikely(node->ref_mod != 1)) {
+ 		btrfs_err(trans->fs_info,
+-	"btree block(%llu) has %d references rather than 1: action %d ref_root %llu parent %llu",
++	"btree block %llu has %d references rather than 1: action %d ref_root %llu parent %llu",
+ 			  node->bytenr, node->ref_mod, node->action, ref_root,
+ 			  parent);
+-		return -EIO;
++		return -EUCLEAN;
+ 	}
+ 	if (node->action == BTRFS_ADD_DELAYED_REF && insert_reserved) {
+ 		BUG_ON(!extent_op || !extent_op->update_flags);
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 930126b094add..cffd149faf639 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3518,7 +3518,7 @@ static void get_block_group_info(struct list_head *groups_list,
+ static long btrfs_ioctl_space_info(struct btrfs_fs_info *fs_info,
+ 				   void __user *arg)
+ {
+-	struct btrfs_ioctl_space_args space_args;
++	struct btrfs_ioctl_space_args space_args = { 0 };
+ 	struct btrfs_ioctl_space_info space;
+ 	struct btrfs_ioctl_space_info *dest;
+ 	struct btrfs_ioctl_space_info *dest_orig;
+@@ -4858,7 +4858,7 @@ static int _btrfs_ioctl_send(struct file *file, void __user *argp, bool compat)
+ 
+ 	if (compat) {
+ #if defined(CONFIG_64BIT) && defined(CONFIG_COMPAT)
+-		struct btrfs_ioctl_send_args_32 args32;
++		struct btrfs_ioctl_send_args_32 args32 = { 0 };
+ 
+ 		ret = copy_from_user(&args32, argp, sizeof(args32));
+ 		if (ret)
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 9a8dc16673b43..10a0913ffb492 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -4337,7 +4337,7 @@ static int btrfs_log_prealloc_extents(struct btrfs_trans_handle *trans,
+ 	struct extent_buffer *leaf;
+ 	int slot;
+ 	int ins_nr = 0;
+-	int start_slot;
++	int start_slot = 0;
+ 	int ret;
+ 
+ 	if (!(inode->flags & BTRFS_INODE_PREALLOC))
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 943655e36a799..d4974c652e8e4 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -2428,7 +2428,7 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
+ 		ret = do_splice_direct(src_file, &src_off, dst_file,
+ 				       &dst_off, src_objlen, flags);
+ 		/* Abort on short copies or on error */
+-		if (ret < src_objlen) {
++		if (ret < (long)src_objlen) {
+ 			dout("Failed partial copy (%zd)\n", ret);
+ 			goto out;
+ 		}
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 76be50f6f041a..36e3342d3633c 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -635,9 +635,7 @@ int ceph_fill_file_size(struct inode *inode, int issued,
+ 			ci->i_truncate_seq = truncate_seq;
+ 
+ 			/* the MDS should have revoked these caps */
+-			WARN_ON_ONCE(issued & (CEPH_CAP_FILE_EXCL |
+-					       CEPH_CAP_FILE_RD |
+-					       CEPH_CAP_FILE_WR |
++			WARN_ON_ONCE(issued & (CEPH_CAP_FILE_RD |
+ 					       CEPH_CAP_FILE_LAZYIO));
+ 			/*
+ 			 * If we hold relevant caps, or in the case where we're
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 21436721745b6..ed6a3ed83755d 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -2633,31 +2633,44 @@ pnfs_should_return_unused_layout(struct pnfs_layout_hdr *lo,
+ 	return mode == 0;
+ }
+ 
+-static int
+-pnfs_layout_return_unused_byserver(struct nfs_server *server, void *data)
++static int pnfs_layout_return_unused_byserver(struct nfs_server *server,
++					      void *data)
+ {
+ 	const struct pnfs_layout_range *range = data;
++	const struct cred *cred;
+ 	struct pnfs_layout_hdr *lo;
+ 	struct inode *inode;
++	nfs4_stateid stateid;
++	enum pnfs_iomode iomode;
++
+ restart:
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(lo, &server->layouts, plh_layouts) {
+-		if (!pnfs_layout_can_be_returned(lo) ||
++		inode = lo->plh_inode;
++		if (!inode || !pnfs_layout_can_be_returned(lo) ||
+ 		    test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags))
+ 			continue;
+-		inode = lo->plh_inode;
+ 		spin_lock(&inode->i_lock);
+-		if (!pnfs_should_return_unused_layout(lo, range)) {
++		if (!lo->plh_inode ||
++		    !pnfs_should_return_unused_layout(lo, range)) {
+ 			spin_unlock(&inode->i_lock);
+ 			continue;
+ 		}
++		pnfs_get_layout_hdr(lo);
++		pnfs_set_plh_return_info(lo, range->iomode, 0);
++		if (pnfs_mark_matching_lsegs_return(lo, &lo->plh_return_segs,
++						    range, 0) != 0 ||
++		    !pnfs_prepare_layoutreturn(lo, &stateid, &cred, &iomode)) {
++			spin_unlock(&inode->i_lock);
++			rcu_read_unlock();
++			pnfs_put_layout_hdr(lo);
++			cond_resched();
++			goto restart;
++		}
+ 		spin_unlock(&inode->i_lock);
+-		inode = pnfs_grab_inode_layout_hdr(lo);
+-		if (!inode)
+-			continue;
+ 		rcu_read_unlock();
+-		pnfs_mark_layout_for_return(inode, range);
+-		iput(inode);
++		pnfs_send_layoutreturn(lo, &stateid, &cred, iomode, false);
++		pnfs_put_layout_hdr(lo);
+ 		cond_resched();
+ 		goto restart;
+ 	}
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 7ef3c87f8a23d..65ac504595ba4 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -243,7 +243,7 @@ static int ovl_set_timestamps(struct dentry *upperdentry, struct kstat *stat)
+ {
+ 	struct iattr attr = {
+ 		.ia_valid =
+-		     ATTR_ATIME | ATTR_MTIME | ATTR_ATIME_SET | ATTR_MTIME_SET,
++		     ATTR_ATIME | ATTR_MTIME | ATTR_ATIME_SET | ATTR_MTIME_SET | ATTR_CTIME,
+ 		.ia_atime = stat->atime,
+ 		.ia_mtime = stat->mtime,
+ 	};
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index 13a9a17d6a13b..fd2ca079448d5 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -233,19 +233,18 @@ static void put_quota_format(struct quota_format_type *fmt)
+  * All dquots are placed to the end of inuse_list when first created, and this
+  * list is used for invalidate operation, which must look at every dquot.
+  *
+- * When the last reference of a dquot will be dropped, the dquot will be
+- * added to releasing_dquots. We'd then queue work item which would call
++ * When the last reference of a dquot is dropped, the dquot is added to
++ * releasing_dquots. We'll then queue work item which will call
+  * synchronize_srcu() and after that perform the final cleanup of all the
+- * dquots on the list. Both releasing_dquots and free_dquots use the
+- * dq_free list_head in the dquot struct. When a dquot is removed from
+- * releasing_dquots, a reference count is always subtracted, and if
+- * dq_count == 0 at that point, the dquot will be added to the free_dquots.
++ * dquots on the list. Each cleaned up dquot is moved to free_dquots list.
++ * Both releasing_dquots and free_dquots use the dq_free list_head in the dquot
++ * struct.
+  *
+- * Unused dquots (dq_count == 0) are added to the free_dquots list when freed,
+- * and this list is searched whenever we need an available dquot.  Dquots are
+- * removed from the list as soon as they are used again, and
+- * dqstats.free_dquots gives the number of dquots on the list. When
+- * dquot is invalidated it's completely released from memory.
++ * Unused and cleaned up dquots are in the free_dquots list and this list is
++ * searched whenever we need an available dquot. Dquots are removed from the
++ * list as soon as they are used again and dqstats.free_dquots gives the number
++ * of dquots on the list. When dquot is invalidated it's completely released
++ * from memory.
+  *
+  * Dirty dquots are added to the dqi_dirty_list of quota_info when mark
+  * dirtied, and this list is searched when writing dirty dquots back to
+@@ -323,6 +322,7 @@ static inline void put_dquot_last(struct dquot *dquot)
+ static inline void put_releasing_dquots(struct dquot *dquot)
+ {
+ 	list_add_tail(&dquot->dq_free, &releasing_dquots);
++	set_bit(DQ_RELEASING_B, &dquot->dq_flags);
+ }
+ 
+ static inline void remove_free_dquot(struct dquot *dquot)
+@@ -330,8 +330,10 @@ static inline void remove_free_dquot(struct dquot *dquot)
+ 	if (list_empty(&dquot->dq_free))
+ 		return;
+ 	list_del_init(&dquot->dq_free);
+-	if (!atomic_read(&dquot->dq_count))
++	if (!test_bit(DQ_RELEASING_B, &dquot->dq_flags))
+ 		dqstats_dec(DQST_FREE_DQUOTS);
++	else
++		clear_bit(DQ_RELEASING_B, &dquot->dq_flags);
+ }
+ 
+ static inline void put_inuse(struct dquot *dquot)
+@@ -583,12 +585,6 @@ restart:
+ 			continue;
+ 		/* Wait for dquot users */
+ 		if (atomic_read(&dquot->dq_count)) {
+-			/* dquot in releasing_dquots, flush and retry */
+-			if (!list_empty(&dquot->dq_free)) {
+-				spin_unlock(&dq_list_lock);
+-				goto restart;
+-			}
+-
+ 			atomic_inc(&dquot->dq_count);
+ 			spin_unlock(&dq_list_lock);
+ 			/*
+@@ -607,6 +603,15 @@ restart:
+ 			 * restart. */
+ 			goto restart;
+ 		}
++		/*
++		 * The last user already dropped its reference but dquot didn't
++		 * get fully cleaned up yet. Restart the scan which flushes the
++		 * work cleaning up released dquots.
++		 */
++		if (test_bit(DQ_RELEASING_B, &dquot->dq_flags)) {
++			spin_unlock(&dq_list_lock);
++			goto restart;
++		}
+ 		/*
+ 		 * Quota now has no users and it has been written on last
+ 		 * dqput()
+@@ -698,6 +703,13 @@ int dquot_writeback_dquots(struct super_block *sb, int type)
+ 						 dq_dirty);
+ 
+ 			WARN_ON(!dquot_active(dquot));
++			/* If the dquot is releasing we should not touch it */
++			if (test_bit(DQ_RELEASING_B, &dquot->dq_flags)) {
++				spin_unlock(&dq_list_lock);
++				flush_delayed_work(&quota_release_work);
++				spin_lock(&dq_list_lock);
++				continue;
++			}
+ 
+ 			/* Now we have active dquot from which someone is
+  			 * holding reference so we can safely just increase
+@@ -811,18 +823,18 @@ static void quota_release_workfn(struct work_struct *work)
+ 	/* Exchange the list head to avoid livelock. */
+ 	list_replace_init(&releasing_dquots, &rls_head);
+ 	spin_unlock(&dq_list_lock);
++	synchronize_srcu(&dquot_srcu);
+ 
+ restart:
+-	synchronize_srcu(&dquot_srcu);
+ 	spin_lock(&dq_list_lock);
+ 	while (!list_empty(&rls_head)) {
+ 		dquot = list_first_entry(&rls_head, struct dquot, dq_free);
+-		/* Dquot got used again? */
+-		if (atomic_read(&dquot->dq_count) > 1) {
+-			remove_free_dquot(dquot);
+-			atomic_dec(&dquot->dq_count);
+-			continue;
+-		}
++		WARN_ON_ONCE(atomic_read(&dquot->dq_count));
++		/*
++		 * Note that DQ_RELEASING_B protects us from racing with
++		 * invalidate_dquots() calls so we are safe to work with the
++		 * dquot even after we drop dq_list_lock.
++		 */
+ 		if (dquot_dirty(dquot)) {
+ 			spin_unlock(&dq_list_lock);
+ 			/* Commit dquot before releasing */
+@@ -836,7 +848,6 @@ restart:
+ 		}
+ 		/* Dquot is inactive and clean, now move it to free list */
+ 		remove_free_dquot(dquot);
+-		atomic_dec(&dquot->dq_count);
+ 		put_dquot_last(dquot);
+ 	}
+ 	spin_unlock(&dq_list_lock);
+@@ -877,6 +888,7 @@ void dqput(struct dquot *dquot)
+ 	BUG_ON(!list_empty(&dquot->dq_free));
+ #endif
+ 	put_releasing_dquots(dquot);
++	atomic_dec(&dquot->dq_count);
+ 	spin_unlock(&dq_list_lock);
+ 	queue_delayed_work(system_unbound_wq, &quota_release_work, 1);
+ }
+@@ -965,7 +977,7 @@ we_slept:
+ 		dqstats_inc(DQST_LOOKUPS);
+ 	}
+ 	/* Wait for dq_lock - after this we know that either dquot_release() is
+-	 * already finished or it will be canceled due to dq_count > 1 test */
++	 * already finished or it will be canceled due to dq_count > 0 test */
+ 	wait_on_dquot(dquot);
+ 	/* Read the dquot / allocate space in quota file */
+ 	if (!dquot_active(dquot)) {
+diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
+index 928136556174c..03d39532c7080 100644
+--- a/include/drm/drm_connector.h
++++ b/include/drm/drm_connector.h
+@@ -1030,6 +1030,14 @@ struct drm_connector_funcs {
+ 	 */
+ 	void (*atomic_print_state)(struct drm_printer *p,
+ 				   const struct drm_connector_state *state);
++
++	/**
++	 * @oob_hotplug_event:
++	 *
++	 * This will get called when a hotplug-event for a drm-connector
++	 * has been received from a source outside the display driver / device.
++	 */
++	void (*oob_hotplug_event)(struct drm_connector *connector);
+ };
+ 
+ /**
+@@ -1174,6 +1182,14 @@ struct drm_connector {
+ 	struct device *kdev;
+ 	/** @attr: sysfs attributes */
+ 	struct device_attribute *attr;
++	/**
++	 * @fwnode: associated fwnode supplied by platform firmware
++	 *
++	 * Drivers can set this to associate a fwnode with a connector, drivers
++	 * are expected to get a reference on the fwnode when setting this.
++	 * drm_connector_cleanup() will call fwnode_handle_put() on this.
++	 */
++	struct fwnode_handle *fwnode;
+ 
+ 	/**
+ 	 * @head:
+@@ -1185,6 +1201,14 @@ struct drm_connector {
+ 	 */
+ 	struct list_head head;
+ 
++	/**
++	 * @global_connector_list_entry:
++	 *
++	 * Connector entry in the global connector-list, used by
++	 * drm_connector_find_by_fwnode().
++	 */
++	struct list_head global_connector_list_entry;
++
+ 	/** @base: base KMS object */
+ 	struct drm_mode_object base;
+ 
+@@ -1596,6 +1620,7 @@ drm_connector_is_unregistered(struct drm_connector *connector)
+ 		DRM_CONNECTOR_UNREGISTERED;
+ }
+ 
++void drm_connector_oob_hotplug_event(struct fwnode_handle *connector_fwnode);
+ const char *drm_get_connector_type_name(unsigned int connector_type);
+ const char *drm_get_connector_status_name(enum drm_connector_status status);
+ const char *drm_get_subpixel_order_name(enum subpixel_order order);
+diff --git a/include/linux/ioport.h b/include/linux/ioport.h
+index 5135d4b86cd6a..f9bf374f96336 100644
+--- a/include/linux/ioport.h
++++ b/include/linux/ioport.h
+@@ -307,6 +307,13 @@ struct resource *devm_request_free_mem_region(struct device *dev,
+ struct resource *request_free_mem_region(struct resource *base,
+ 		unsigned long size, const char *name);
+ 
++static inline void irqresource_disabled(struct resource *res, u32 irq)
++{
++	res->start = irq;
++	res->end = irq;
++	res->flags = IORESOURCE_IRQ | IORESOURCE_DISABLED | IORESOURCE_UNSET;
++}
++
+ #ifdef CONFIG_IO_STRICT_DEVMEM
+ void revoke_devmem(struct resource *res);
+ #else
+diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
+index 510f876564796..d758c131ed5e1 100644
+--- a/include/linux/ipv6.h
++++ b/include/linux/ipv6.h
+@@ -32,6 +32,7 @@ struct ipv6_devconf {
+ 	__s32		max_addresses;
+ 	__s32		accept_ra_defrtr;
+ 	__s32		accept_ra_min_hop_limit;
++	__s32		accept_ra_min_lft;
+ 	__s32		accept_ra_pinfo;
+ 	__s32		ignore_routes_with_linkdown;
+ #ifdef CONFIG_IPV6_ROUTER_PREF
+diff --git a/include/linux/mcb.h b/include/linux/mcb.h
+index 71dd10a3d9288..01fd26170e6bf 100644
+--- a/include/linux/mcb.h
++++ b/include/linux/mcb.h
+@@ -63,7 +63,6 @@ static inline struct mcb_bus *to_mcb_bus(struct device *dev)
+ struct mcb_device {
+ 	struct device dev;
+ 	struct mcb_bus *bus;
+-	bool is_added;
+ 	struct mcb_driver *driver;
+ 	u16 id;
+ 	int inst;
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 302abfc2a1f63..e814ce78a1965 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -3972,7 +3972,7 @@ static __always_inline int ____dev_forward_skb(struct net_device *dev,
+ 		return NET_RX_DROP;
+ 	}
+ 
+-	skb_scrub_packet(skb, true);
++	skb_scrub_packet(skb, !net_eq(dev_net(dev), dev_net(skb->dev)));
+ 	skb->priority = 0;
+ 	return 0;
+ }
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 93dffe2f3fff2..50557d903a059 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -659,6 +659,7 @@ struct perf_event {
+ 	/* The cumulative AND of all event_caps for events in this group. */
+ 	int				group_caps;
+ 
++	unsigned int			group_generation;
+ 	struct perf_event		*group_leader;
+ 	struct pmu			*pmu;
+ 	void				*pmu_private;
+diff --git a/include/linux/quota.h b/include/linux/quota.h
+index 27aab84fcbaac..b93cb93d19565 100644
+--- a/include/linux/quota.h
++++ b/include/linux/quota.h
+@@ -285,7 +285,9 @@ static inline void dqstats_dec(unsigned int type)
+ #define DQ_FAKE_B	3	/* no limits only usage */
+ #define DQ_READ_B	4	/* dquot was read into memory */
+ #define DQ_ACTIVE_B	5	/* dquot is active (dquot_release not called) */
+-#define DQ_LASTSET_B	6	/* Following 6 bits (see QIF_) are reserved\
++#define DQ_RELEASING_B	6	/* dquot is in releasing_dquots list waiting
++				 * to be cleaned up */
++#define DQ_LASTSET_B	7	/* Following 6 bits (see QIF_) are reserved\
+ 				 * for the mask of entries set via SETQUOTA\
+ 				 * quotactl. They are set under dq_data_lock\
+ 				 * and the quota format handling dquot can\
+diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h
+index a0f6668924d3e..4bc8ff2a66143 100644
+--- a/include/linux/quotaops.h
++++ b/include/linux/quotaops.h
+@@ -56,7 +56,7 @@ static inline bool dquot_is_busy(struct dquot *dquot)
+ {
+ 	if (test_bit(DQ_MOD_B, &dquot->dq_flags))
+ 		return true;
+-	if (atomic_read(&dquot->dq_count) > 1)
++	if (atomic_read(&dquot->dq_count) > 0)
+ 		return true;
+ 	return false;
+ }
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index bc59237727033..8bc1119afc317 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -565,6 +565,7 @@ struct usb3_lpm_parameters {
+  * @speed: device speed: high/full/low (or error)
+  * @rx_lanes: number of rx lanes in use, USB 3.2 adds dual-lane support
+  * @tx_lanes: number of tx lanes in use, USB 3.2 adds dual-lane support
++ * @ssp_rate: SuperSpeed Plus phy signaling rate and lane count
+  * @tt: Transaction Translator info; used with low/full speed dev, highspeed hub
+  * @ttport: device port on that tt hub
+  * @toggle: one bit for each endpoint, with ([0] = IN, [1] = OUT) endpoints
+@@ -642,6 +643,7 @@ struct usb_device {
+ 	enum usb_device_speed	speed;
+ 	unsigned int		rx_lanes;
+ 	unsigned int		tx_lanes;
++	enum usb_ssp_rate	ssp_rate;
+ 
+ 	struct usb_tt	*tt;
+ 	int		ttport;
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 11a92bb4d7a9f..e33433ec4a98f 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -287,7 +287,7 @@ struct hci_dev {
+ 	struct list_head list;
+ 	struct mutex	lock;
+ 
+-	char		name[8];
++	const char	*name;
+ 	unsigned long	flags;
+ 	__u16		id;
+ 	__u8		bus;
+diff --git a/include/net/bluetooth/hci_mon.h b/include/net/bluetooth/hci_mon.h
+index 2d5fcda1bcd05..082f89531b889 100644
+--- a/include/net/bluetooth/hci_mon.h
++++ b/include/net/bluetooth/hci_mon.h
+@@ -56,7 +56,7 @@ struct hci_mon_new_index {
+ 	__u8		type;
+ 	__u8		bus;
+ 	bdaddr_t	bdaddr;
+-	char		name[8];
++	char		name[8] __nonstring;
+ } __packed;
+ #define HCI_MON_NEW_INDEX_SIZE 16
+ 
+diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
+index 088f257cd6fb3..0d3cb34c7abc5 100644
+--- a/include/net/ip_fib.h
++++ b/include/net/ip_fib.h
+@@ -151,6 +151,7 @@ struct fib_info {
+ 	int			fib_nhs;
+ 	bool			fib_nh_is_v6;
+ 	bool			nh_updated;
++	bool			pfsrc_removed;
+ 	struct nexthop		*nh;
+ 	struct rcu_head		rcu;
+ 	struct fib_nh		fib_nh[];
+diff --git a/include/net/macsec.h b/include/net/macsec.h
+index d6fa6b97f6efa..0dc4303329391 100644
+--- a/include/net/macsec.h
++++ b/include/net/macsec.h
+@@ -240,6 +240,7 @@ struct macsec_context {
+ 	struct macsec_secy *secy;
+ 	struct macsec_rx_sc *rx_sc;
+ 	struct {
++		bool update_pn;
+ 		unsigned char assoc_num;
+ 		u8 key[MACSEC_MAX_KEY_LEN];
+ 		union {
+diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h
+index 69e4161462fb4..7b87da22b2950 100644
+--- a/include/net/netns/xfrm.h
++++ b/include/net/netns/xfrm.h
+@@ -49,6 +49,7 @@ struct netns_xfrm {
+ 	struct list_head	policy_all;
+ 	struct hlist_head	*policy_byidx;
+ 	unsigned int		policy_idx_hmask;
++	unsigned int		idx_generator;
+ 	struct hlist_head	policy_inexact[XFRM_POLICY_MAX];
+ 	struct xfrm_policy_hash	policy_bydst[XFRM_POLICY_MAX];
+ 	unsigned int		policy_count[XFRM_POLICY_MAX * 2];
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index cb4b2fddd9eb3..772e593910287 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -141,6 +141,9 @@ void tcp_time_wait(struct sock *sk, int state, int timeo);
+ #define TCP_RTO_MAX	((unsigned)(120*HZ))
+ #define TCP_RTO_MIN	((unsigned)(HZ/5))
+ #define TCP_TIMEOUT_MIN	(2U) /* Min timeout for TCP timers in jiffies */
++
++#define TCP_TIMEOUT_MIN_US (2*USEC_PER_MSEC) /* Min TCP timeout in microsecs */
++
+ #define TCP_TIMEOUT_INIT ((unsigned)(1*HZ))	/* RFC6298 2.1 initial RTO value	*/
+ #define TCP_TIMEOUT_FALLBACK ((unsigned)(3*HZ))	/* RFC 1122 initial RTO value, now
+ 						 * used as a fallback RTO for the
+diff --git a/include/trace/events/neigh.h b/include/trace/events/neigh.h
+index 62bb17516713f..5ade62ac49b47 100644
+--- a/include/trace/events/neigh.h
++++ b/include/trace/events/neigh.h
+@@ -39,7 +39,6 @@ TRACE_EVENT(neigh_create,
+ 	),
+ 
+ 	TP_fast_assign(
+-		struct in6_addr *pin6;
+ 		__be32 *p32;
+ 
+ 		__entry->family = tbl->family;
+@@ -47,7 +46,6 @@ TRACE_EVENT(neigh_create,
+ 		__entry->entries = atomic_read(&tbl->gc_entries);
+ 		__entry->created = n != NULL;
+ 		__entry->gc_exempt = exempt_from_gc;
+-		pin6 = (struct in6_addr *)__entry->primary_key6;
+ 		p32 = (__be32 *)__entry->primary_key4;
+ 
+ 		if (tbl->family == AF_INET)
+@@ -57,6 +55,8 @@ TRACE_EVENT(neigh_create,
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+ 		if (tbl->family == AF_INET6) {
++			struct in6_addr *pin6;
++
+ 			pin6 = (struct in6_addr *)__entry->primary_key6;
+ 			*pin6 = *(struct in6_addr *)pkey;
+ 		}
+diff --git a/include/uapi/linux/ipv6.h b/include/uapi/linux/ipv6.h
+index d44d0483fd73f..4fa8511b1e355 100644
+--- a/include/uapi/linux/ipv6.h
++++ b/include/uapi/linux/ipv6.h
+@@ -192,6 +192,13 @@ enum {
+ 	DEVCONF_ACCEPT_RA_RT_INFO_MIN_PLEN,
+ 	DEVCONF_NDISC_TCLASS,
+ 	DEVCONF_RPL_SEG_ENABLED,
++	DEVCONF_RA_DEFRTR_METRIC,
++	DEVCONF_IOAM6_ENABLED,
++	DEVCONF_IOAM6_ID,
++	DEVCONF_IOAM6_ID_WIDE,
++	DEVCONF_NDISC_EVICT_NOCARRIER,
++	DEVCONF_ACCEPT_UNTRACKED_NA,
++	DEVCONF_ACCEPT_RA_MIN_LFT,
+ 	DEVCONF_MAX
+ };
+ 
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index 433b9e840b387..b044ce3026eb6 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -361,10 +361,9 @@ static int pidlist_array_load(struct cgroup *cgrp, enum cgroup_filetype type,
+ 	}
+ 	css_task_iter_end(&it);
+ 	length = n;
+-	/* now sort & (if procs) strip out duplicates */
++	/* now sort & strip out duplicates (tgids or recycled thread PIDs) */
+ 	sort(array, length, sizeof(pid_t), cmppid, NULL);
+-	if (type == CGROUP_FILE_PROCS)
+-		length = pidlist_uniq(array, length);
++	length = pidlist_uniq(array, length);
+ 
+ 	l = cgroup_pidlist_find_create(cgrp, type);
+ 	if (!l) {
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 8c5400fd227b8..b23961475692c 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -2053,6 +2053,7 @@ static void perf_group_attach(struct perf_event *event)
+ 
+ 	list_add_tail(&event->sibling_list, &group_leader->sibling_list);
+ 	group_leader->nr_siblings++;
++	group_leader->group_generation++;
+ 
+ 	perf_event__header_size(group_leader);
+ 
+@@ -2245,6 +2246,7 @@ static void perf_group_detach(struct perf_event *event)
+ 	if (leader != event) {
+ 		list_del_init(&event->sibling_list);
+ 		event->group_leader->nr_siblings--;
++		event->group_leader->group_generation++;
+ 		goto out;
+ 	}
+ 
+@@ -5222,7 +5224,7 @@ static int __perf_read_group_add(struct perf_event *leader,
+ 					u64 read_format, u64 *values)
+ {
+ 	struct perf_event_context *ctx = leader->ctx;
+-	struct perf_event *sub;
++	struct perf_event *sub, *parent;
+ 	unsigned long flags;
+ 	int n = 1; /* skip @nr */
+ 	int ret;
+@@ -5232,6 +5234,33 @@ static int __perf_read_group_add(struct perf_event *leader,
+ 		return ret;
+ 
+ 	raw_spin_lock_irqsave(&ctx->lock, flags);
++	/*
++	 * Verify the grouping between the parent and child (inherited)
++	 * events is still in tact.
++	 *
++	 * Specifically:
++	 *  - leader->ctx->lock pins leader->sibling_list
++	 *  - parent->child_mutex pins parent->child_list
++	 *  - parent->ctx->mutex pins parent->sibling_list
++	 *
++	 * Because parent->ctx != leader->ctx (and child_list nests inside
++	 * ctx->mutex), group destruction is not atomic between children, also
++	 * see perf_event_release_kernel(). Additionally, parent can grow the
++	 * group.
++	 *
++	 * Therefore it is possible to have parent and child groups in a
++	 * different configuration and summing over such a beast makes no sense
++	 * what so ever.
++	 *
++	 * Reject this.
++	 */
++	parent = leader->parent;
++	if (parent &&
++	    (parent->group_generation != leader->group_generation ||
++	     parent->nr_siblings != leader->nr_siblings)) {
++		ret = -ECHILD;
++		goto unlock;
++	}
+ 
+ 	/*
+ 	 * Since we co-schedule groups, {enabled,running} times of siblings
+@@ -5261,8 +5290,9 @@ static int __perf_read_group_add(struct perf_event *leader,
+ 			values[n++] = primary_event_id(sub);
+ 	}
+ 
++unlock:
+ 	raw_spin_unlock_irqrestore(&ctx->lock, flags);
+-	return 0;
++	return ret;
+ }
+ 
+ static int perf_read_group(struct perf_event *event,
+@@ -5281,10 +5311,6 @@ static int perf_read_group(struct perf_event *event,
+ 
+ 	values[0] = 1 + leader->nr_siblings;
+ 
+-	/*
+-	 * By locking the child_mutex of the leader we effectively
+-	 * lock the child list of all siblings.. XXX explain how.
+-	 */
+ 	mutex_lock(&leader->child_mutex);
+ 
+ 	ret = __perf_read_group_add(leader, read_format, values);
+@@ -12820,6 +12846,7 @@ static int inherit_group(struct perf_event *parent_event,
+ 		    !perf_get_aux_event(child_ctr, leader))
+ 			return -EINVAL;
+ 	}
++	leader->group_generation = parent_event->group_generation;
+ 	return 0;
+ }
+ 
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 321cfda1b3338..c7f0a02442e50 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -2451,6 +2451,7 @@ void trace_event_eval_update(struct trace_eval_map **map, int len)
+ 				update_event_printk(call, map[i]);
+ 			}
+ 		}
++		cond_resched();
+ 	}
+ 	up_write(&trace_event_sem);
+ }
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index fa0a0e59b3851..37d01e44d4837 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -5300,9 +5300,13 @@ static int workqueue_apply_unbound_cpumask(void)
+ 	list_for_each_entry(wq, &workqueues, list) {
+ 		if (!(wq->flags & WQ_UNBOUND))
+ 			continue;
++
+ 		/* creating multiple pwqs breaks ordering guarantee */
+-		if (wq->flags & __WQ_ORDERED)
+-			continue;
++		if (!list_empty(&wq->pwqs)) {
++			if (wq->flags & __WQ_ORDERED_EXPLICIT)
++				continue;
++			wq->flags &= ~__WQ_ORDERED;
++		}
+ 
+ 		ctx = apply_wqattrs_prepare(wq, wq->unbound_attrs);
+ 		if (!ctx) {
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 19c28a34c5f1d..24ca61cf86ddc 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1136,13 +1136,16 @@ config DEBUG_TIMEKEEPING
+ config DEBUG_PREEMPT
+ 	bool "Debug preemptible kernel"
+ 	depends on DEBUG_KERNEL && PREEMPTION && TRACE_IRQFLAGS_SUPPORT
+-	default y
+ 	help
+ 	  If you say Y here then the kernel will use a debug variant of the
+ 	  commonly used smp_processor_id() function and will print warnings
+ 	  if kernel code uses it in a preemption-unsafe way. Also, the kernel
+ 	  will detect preemption count underflows.
+ 
++	  This option has potential to introduce high runtime overhead,
++	  depending on workload as it triggers debugging routines for each
++	  this_cpu operation. It should only be used for debugging purposes.
++
+ menu "Lock Debugging (spinlocks, mutexes, etc...)"
+ 
+ config LOCK_DEBUGGING_SUPPORT
+diff --git a/lib/test_meminit.c b/lib/test_meminit.c
+index 75638404ed573..0f1a3bd09b7b5 100644
+--- a/lib/test_meminit.c
++++ b/lib/test_meminit.c
+@@ -86,7 +86,7 @@ static int __init test_pages(int *total_failures)
+ 	int failures = 0, num_tests = 0;
+ 	int i;
+ 
+-	for (i = 0; i <= MAX_ORDER; i++)
++	for (i = 0; i < MAX_ORDER; i++)
+ 		num_tests += do_alloc_pages_order(i, &failures);
+ 
+ 	REPORT_FAILURES_IN_FN();
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 9ec9e1e677051..5286945470b92 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1279,6 +1279,8 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+ 	struct page *page, *head;
+ 	int ret = 0;
+ 	LIST_HEAD(source);
++	static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
++				      DEFAULT_RATELIMIT_BURST);
+ 
+ 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+ 		if (!pfn_valid(pfn))
+@@ -1325,8 +1327,10 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+ 						    page_is_file_lru(page));
+ 
+ 		} else {
+-			pr_warn("failed to isolate pfn %lx\n", pfn);
+-			dump_page(page, "isolation failed");
++			if (__ratelimit(&migrate_rs)) {
++				pr_warn("failed to isolate pfn %lx\n", pfn);
++				dump_page(page, "isolation failed");
++			}
+ 		}
+ 		put_page(page);
+ 	}
+@@ -1355,9 +1359,11 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+ 			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
+ 		if (ret) {
+ 			list_for_each_entry(page, &source, lru) {
+-				pr_warn("migrating pfn %lx failed ret:%d ",
+-				       page_to_pfn(page), ret);
+-				dump_page(page, "migration failure");
++				if (__ratelimit(&migrate_rs)) {
++					pr_warn("migrating pfn %lx failed ret:%d\n",
++						page_to_pfn(page), ret);
++					dump_page(page, "migration failure");
++				}
+ 			}
+ 			putback_movable_pages(&source);
+ 		}
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 140d9764c77e3..a9f6089a2ae2a 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1249,6 +1249,15 @@ struct hci_conn *hci_connect_acl(struct hci_dev *hdev, bdaddr_t *dst,
+ 		return ERR_PTR(-EOPNOTSUPP);
+ 	}
+ 
++	/* Reject outgoing connection to device with same BD ADDR against
++	 * CVE-2020-26555
++	 */
++	if (!bacmp(&hdev->bdaddr, dst)) {
++		bt_dev_dbg(hdev, "Reject connection with same BD_ADDR %pMR\n",
++			   dst);
++		return ERR_PTR(-ECONNREFUSED);
++	}
++
+ 	acl = hci_conn_hash_lookup_ba(hdev, ACL_LINK, dst);
+ 	if (!acl) {
+ 		acl = hci_conn_add(hdev, ACL_LINK, dst, HCI_ROLE_MASTER);
+@@ -1426,34 +1435,41 @@ int hci_conn_security(struct hci_conn *conn, __u8 sec_level, __u8 auth_type,
+ 	if (!test_bit(HCI_CONN_AUTH, &conn->flags))
+ 		goto auth;
+ 
+-	/* An authenticated FIPS approved combination key has sufficient
+-	 * security for security level 4. */
+-	if (conn->key_type == HCI_LK_AUTH_COMBINATION_P256 &&
+-	    sec_level == BT_SECURITY_FIPS)
+-		goto encrypt;
+-
+-	/* An authenticated combination key has sufficient security for
+-	   security level 3. */
+-	if ((conn->key_type == HCI_LK_AUTH_COMBINATION_P192 ||
+-	     conn->key_type == HCI_LK_AUTH_COMBINATION_P256) &&
+-	    sec_level == BT_SECURITY_HIGH)
+-		goto encrypt;
+-
+-	/* An unauthenticated combination key has sufficient security for
+-	   security level 1 and 2. */
+-	if ((conn->key_type == HCI_LK_UNAUTH_COMBINATION_P192 ||
+-	     conn->key_type == HCI_LK_UNAUTH_COMBINATION_P256) &&
+-	    (sec_level == BT_SECURITY_MEDIUM || sec_level == BT_SECURITY_LOW))
+-		goto encrypt;
+-
+-	/* A combination key has always sufficient security for the security
+-	   levels 1 or 2. High security level requires the combination key
+-	   is generated using maximum PIN code length (16).
+-	   For pre 2.1 units. */
+-	if (conn->key_type == HCI_LK_COMBINATION &&
+-	    (sec_level == BT_SECURITY_MEDIUM || sec_level == BT_SECURITY_LOW ||
+-	     conn->pin_length == 16))
+-		goto encrypt;
++	switch (conn->key_type) {
++	case HCI_LK_AUTH_COMBINATION_P256:
++		/* An authenticated FIPS approved combination key has
++		 * sufficient security for security level 4 or lower.
++		 */
++		if (sec_level <= BT_SECURITY_FIPS)
++			goto encrypt;
++		break;
++	case HCI_LK_AUTH_COMBINATION_P192:
++		/* An authenticated combination key has sufficient security for
++		 * security level 3 or lower.
++		 */
++		if (sec_level <= BT_SECURITY_HIGH)
++			goto encrypt;
++		break;
++	case HCI_LK_UNAUTH_COMBINATION_P192:
++	case HCI_LK_UNAUTH_COMBINATION_P256:
++		/* An unauthenticated combination key has sufficient security
++		 * for security level 2 or lower.
++		 */
++		if (sec_level <= BT_SECURITY_MEDIUM)
++			goto encrypt;
++		break;
++	case HCI_LK_COMBINATION:
++		/* A combination key has always sufficient security for the
++		 * security levels 2 or lower. High security level requires the
++		 * combination key is generated using maximum PIN code length
++		 * (16). For pre 2.1 units.
++		 */
++		if (sec_level <= BT_SECURITY_MEDIUM || conn->pin_length == 16)
++			goto encrypt;
++		break;
++	default:
++		break;
++	}
+ 
+ auth:
+ 	if (test_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags))
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 46e1e51ff28e3..e33fe4b1c4e29 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3734,7 +3734,11 @@ int hci_register_dev(struct hci_dev *hdev)
+ 	if (id < 0)
+ 		return id;
+ 
+-	snprintf(hdev->name, sizeof(hdev->name), "hci%d", id);
++	error = dev_set_name(&hdev->dev, "hci%u", id);
++	if (error)
++		return error;
++
++	hdev->name = dev_name(&hdev->dev);
+ 	hdev->id = id;
+ 
+ 	BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus);
+@@ -3756,8 +3760,6 @@ int hci_register_dev(struct hci_dev *hdev)
+ 	if (!IS_ERR_OR_NULL(bt_debugfs))
+ 		hdev->debugfs = debugfs_create_dir(hdev->name, bt_debugfs);
+ 
+-	dev_set_name(&hdev->dev, "%s", hdev->name);
+-
+ 	error = device_add(&hdev->dev);
+ 	if (error < 0)
+ 		goto err_wqueue;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index eb111504afc60..ad5294de97594 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -25,6 +25,8 @@
+ /* Bluetooth HCI event handling. */
+ 
+ #include <asm/unaligned.h>
++#include <linux/crypto.h>
++#include <crypto/algapi.h>
+ 
+ #include <net/bluetooth/bluetooth.h>
+ #include <net/bluetooth/hci_core.h>
+@@ -2701,6 +2703,16 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 	BT_DBG("%s bdaddr %pMR type 0x%x", hdev->name, &ev->bdaddr,
+ 	       ev->link_type);
+ 
++	/* Reject incoming connection from device with same BD ADDR against
++	 * CVE-2020-26555
++	 */
++	if (hdev && !bacmp(&hdev->bdaddr, &ev->bdaddr)) {
++		bt_dev_dbg(hdev, "Reject connection with same BD_ADDR %pMR\n",
++			   &ev->bdaddr);
++		hci_reject_conn(hdev, &ev->bdaddr);
++		return;
++	}
++
+ 	mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, ev->link_type,
+ 				      &flags);
+ 
+@@ -4065,6 +4077,15 @@ static void hci_link_key_notify_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 	if (!conn)
+ 		goto unlock;
+ 
++	/* Ignore NULL link key against CVE-2020-26555 */
++	if (!crypto_memneq(ev->link_key, ZERO_KEY, HCI_LINK_KEY_SIZE)) {
++		bt_dev_dbg(hdev, "Ignore NULL link key (ZERO KEY) for %pMR",
++			   &ev->bdaddr);
++		hci_disconnect(conn, HCI_ERROR_AUTH_FAILURE);
++		hci_conn_drop(conn);
++		goto unlock;
++	}
++
+ 	hci_conn_hold(conn);
+ 	conn->disc_timeout = HCI_DISCONN_TIMEOUT;
+ 	hci_conn_drop(conn);
+@@ -4569,8 +4590,8 @@ static u8 bredr_oob_data_present(struct hci_conn *conn)
+ 		 * available, then do not declare that OOB data is
+ 		 * present.
+ 		 */
+-		if (!memcmp(data->rand256, ZERO_KEY, 16) ||
+-		    !memcmp(data->hash256, ZERO_KEY, 16))
++		if (!crypto_memneq(data->rand256, ZERO_KEY, 16) ||
++		    !crypto_memneq(data->hash256, ZERO_KEY, 16))
+ 			return 0x00;
+ 
+ 		return 0x02;
+@@ -4580,8 +4601,8 @@ static u8 bredr_oob_data_present(struct hci_conn *conn)
+ 	 * not supported by the hardware, then check that if
+ 	 * P-192 data values are present.
+ 	 */
+-	if (!memcmp(data->rand192, ZERO_KEY, 16) ||
+-	    !memcmp(data->hash192, ZERO_KEY, 16))
++	if (!crypto_memneq(data->rand192, ZERO_KEY, 16) ||
++	    !crypto_memneq(data->hash192, ZERO_KEY, 16))
+ 		return 0x00;
+ 
+ 	return 0x01;
+@@ -4597,7 +4618,7 @@ static void hci_io_capa_request_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 	hci_dev_lock(hdev);
+ 
+ 	conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr);
+-	if (!conn)
++	if (!conn || !hci_conn_ssp_enabled(conn))
+ 		goto unlock;
+ 
+ 	hci_conn_hold(conn);
+@@ -4842,7 +4863,7 @@ static void hci_simple_pair_complete_evt(struct hci_dev *hdev,
+ 	hci_dev_lock(hdev);
+ 
+ 	conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr);
+-	if (!conn)
++	if (!conn || !hci_conn_ssp_enabled(conn))
+ 		goto unlock;
+ 
+ 	/* Reset the authentication requirement to unknown */
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index eafb2bebc12cb..04db39f67c90e 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -438,7 +438,8 @@ static struct sk_buff *create_monitor_event(struct hci_dev *hdev, int event)
+ 		ni->type = hdev->dev_type;
+ 		ni->bus = hdev->bus;
+ 		bacpy(&ni->bdaddr, &hdev->bdaddr);
+-		memcpy(ni->name, hdev->name, 8);
++		memcpy_and_pad(ni->name, sizeof(ni->name), hdev->name,
++			       strnlen(hdev->name, sizeof(ni->name)), '\0');
+ 
+ 		opcode = cpu_to_le16(HCI_MON_NEW_INDEX);
+ 		break;
+diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
+index af0f1fa249375..192a43f8efa28 100644
+--- a/net/ceph/messenger.c
++++ b/net/ceph/messenger.c
+@@ -477,8 +477,8 @@ static int ceph_tcp_connect(struct ceph_connection *con)
+ 	dout("connect %s\n", ceph_pr_addr(&con->peer_addr));
+ 
+ 	con_sock_state_connecting(con);
+-	ret = sock->ops->connect(sock, (struct sockaddr *)&ss, sizeof(ss),
+-				 O_NONBLOCK);
++	ret = kernel_connect(sock, (struct sockaddr *)&ss, sizeof(ss),
++			     O_NONBLOCK);
+ 	if (ret == -EINPROGRESS) {
+ 		dout("connect %s EINPROGRESS sk_state = %u\n",
+ 		     ceph_pr_addr(&con->peer_addr),
+diff --git a/net/core/pktgen.c b/net/core/pktgen.c
+index 3fba429f1f57b..c1e3d3bea1286 100644
+--- a/net/core/pktgen.c
++++ b/net/core/pktgen.c
+@@ -645,19 +645,19 @@ static int pktgen_if_show(struct seq_file *seq, void *v)
+ 	seq_puts(seq, "     Flags: ");
+ 
+ 	for (i = 0; i < NR_PKT_FLAGS; i++) {
+-		if (i == F_FLOW_SEQ)
++		if (i == FLOW_SEQ_SHIFT)
+ 			if (!pkt_dev->cflows)
+ 				continue;
+ 
+-		if (pkt_dev->flags & (1 << i))
++		if (pkt_dev->flags & (1 << i)) {
+ 			seq_printf(seq, "%s  ", pkt_flag_names[i]);
+-		else if (i == F_FLOW_SEQ)
+-			seq_puts(seq, "FLOW_RND  ");
+-
+ #ifdef CONFIG_XFRM
+-		if (i == F_IPSEC && pkt_dev->spi)
+-			seq_printf(seq, "spi:%u", pkt_dev->spi);
++			if (i == IPSEC_SHIFT && pkt_dev->spi)
++				seq_printf(seq, "spi:%u  ", pkt_dev->spi);
+ #endif
++		} else if (i == FLOW_SEQ_SHIFT) {
++			seq_puts(seq, "FLOW_RND  ");
++		}
+ 	}
+ 
+ 	seq_puts(seq, "\n");
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 28252029bd798..412a3c153cad3 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -741,7 +741,9 @@ static inline int esp_remove_trailer(struct sk_buff *skb)
+ 		skb->csum = csum_block_sub(skb->csum, csumdiff,
+ 					   skb->len - trimlen);
+ 	}
+-	pskb_trim(skb, skb->len - trimlen);
++	ret = pskb_trim(skb, skb->len - trimlen);
++	if (unlikely(ret))
++		return ret;
+ 
+ 	ret = nexthdr[1];
+ 
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index ed20d6ac10dc2..bb5255178d75c 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -1345,15 +1345,18 @@ __be32 fib_info_update_nhc_saddr(struct net *net, struct fib_nh_common *nhc,
+ 				 unsigned char scope)
+ {
+ 	struct fib_nh *nh;
++	__be32 saddr;
+ 
+ 	if (nhc->nhc_family != AF_INET)
+ 		return inet_select_addr(nhc->nhc_dev, 0, scope);
+ 
+ 	nh = container_of(nhc, struct fib_nh, nh_common);
+-	nh->nh_saddr = inet_select_addr(nh->fib_nh_dev, nh->fib_nh_gw4, scope);
+-	nh->nh_saddr_genid = atomic_read(&net->ipv4.dev_addr_genid);
++	saddr = inet_select_addr(nh->fib_nh_dev, nh->fib_nh_gw4, scope);
+ 
+-	return nh->nh_saddr;
++	WRITE_ONCE(nh->nh_saddr, saddr);
++	WRITE_ONCE(nh->nh_saddr_genid, atomic_read(&net->ipv4.dev_addr_genid));
++
++	return saddr;
+ }
+ 
+ __be32 fib_result_prefsrc(struct net *net, struct fib_result *res)
+@@ -1367,8 +1370,9 @@ __be32 fib_result_prefsrc(struct net *net, struct fib_result *res)
+ 		struct fib_nh *nh;
+ 
+ 		nh = container_of(nhc, struct fib_nh, nh_common);
+-		if (nh->nh_saddr_genid == atomic_read(&net->ipv4.dev_addr_genid))
+-			return nh->nh_saddr;
++		if (READ_ONCE(nh->nh_saddr_genid) ==
++		    atomic_read(&net->ipv4.dev_addr_genid))
++			return READ_ONCE(nh->nh_saddr);
+ 	}
+ 
+ 	return fib_info_update_nhc_saddr(net, nhc, res->fi->fib_scope);
+@@ -1904,6 +1908,7 @@ int fib_sync_down_addr(struct net_device *dev, __be32 local)
+ 			continue;
+ 		if (fi->fib_prefsrc == local) {
+ 			fi->fib_flags |= RTNH_F_DEAD;
++			fi->pfsrc_removed = true;
+ 			ret++;
+ 		}
+ 	}
+diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
+index 456240d2adc11..3f4f6458d40e9 100644
+--- a/net/ipv4/fib_trie.c
++++ b/net/ipv4/fib_trie.c
+@@ -1977,6 +1977,7 @@ void fib_table_flush_external(struct fib_table *tb)
+ int fib_table_flush(struct net *net, struct fib_table *tb, bool flush_all)
+ {
+ 	struct trie *t = (struct trie *)tb->tb_data;
++	struct nl_info info = { .nl_net = net };
+ 	struct key_vector *pn = t->kv;
+ 	unsigned long cindex = 1;
+ 	struct hlist_node *tmp;
+@@ -2039,6 +2040,9 @@ int fib_table_flush(struct net *net, struct fib_table *tb, bool flush_all)
+ 
+ 			fib_notify_alias_delete(net, n->key, &n->leaf, fa,
+ 						NULL);
++			if (fi->pfsrc_removed)
++				rtmsg_fib(RTM_DELROUTE, htonl(n->key), fa,
++					  KEYLENGTH - fa->fa_slen, tb->tb_id, &info, 0);
+ 			hlist_del_rcu(&fa->fa_list);
+ 			fib_release_info(fa->fa_info);
+ 			alias_free_mem_rcu(fa);
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index b40780fde7915..7a94acbd9f142 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1832,6 +1832,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ #ifdef CONFIG_TLS_DEVICE
+ 	    tail->decrypted != skb->decrypted ||
+ #endif
++	    !mptcp_skb_can_collapse(tail, skb) ||
+ 	    thtail->doff != th->doff ||
+ 	    memcmp(thtail + 1, th + 1, hdrlen - sizeof(*th)))
+ 		goto no_coalesce;
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 6c14d67715d15..4df287885dd75 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -2482,6 +2482,18 @@ static bool tcp_pacing_check(struct sock *sk)
+ 	return true;
+ }
+ 
++static bool tcp_rtx_queue_empty_or_single_skb(const struct sock *sk)
++{
++	const struct rb_node *node = sk->tcp_rtx_queue.rb_node;
++
++	/* No skb in the rtx queue. */
++	if (!node)
++		return true;
++
++	/* Only one skb in rtx queue. */
++	return !node->rb_left && !node->rb_right;
++}
++
+ /* TCP Small Queues :
+  * Control number of packets in qdisc/devices to two packets / or ~1 ms.
+  * (These limits are doubled for retransmits)
+@@ -2519,12 +2531,12 @@ static bool tcp_small_queue_check(struct sock *sk, const struct sk_buff *skb,
+ 		limit += extra_bytes;
+ 	}
+ 	if (refcount_read(&sk->sk_wmem_alloc) > limit) {
+-		/* Always send skb if rtx queue is empty.
++		/* Always send skb if rtx queue is empty or has one skb.
+ 		 * No need to wait for TX completion to call us back,
+ 		 * after softirq/tasklet schedule.
+ 		 * This helps when TX completions are delayed too much.
+ 		 */
+-		if (tcp_rtx_queue_empty(sk))
++		if (tcp_rtx_queue_empty_or_single_skb(sk))
+ 			return false;
+ 
+ 		set_bit(TSQ_THROTTLED, &sk->sk_tsq_flags);
+@@ -2727,7 +2739,7 @@ bool tcp_schedule_loss_probe(struct sock *sk, bool advancing_rto)
+ {
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+ 	struct tcp_sock *tp = tcp_sk(sk);
+-	u32 timeout, rto_delta_us;
++	u32 timeout, timeout_us, rto_delta_us;
+ 	int early_retrans;
+ 
+ 	/* Don't do any loss probe on a Fast Open connection before 3WHS
+@@ -2751,11 +2763,12 @@ bool tcp_schedule_loss_probe(struct sock *sk, bool advancing_rto)
+ 	 * sample is available then probe after TCP_TIMEOUT_INIT.
+ 	 */
+ 	if (tp->srtt_us) {
+-		timeout = usecs_to_jiffies(tp->srtt_us >> 2);
++		timeout_us = tp->srtt_us >> 2;
+ 		if (tp->packets_out == 1)
+-			timeout += TCP_RTO_MIN;
++			timeout_us += tcp_rto_min_us(sk);
+ 		else
+-			timeout += TCP_TIMEOUT_MIN;
++			timeout_us += TCP_TIMEOUT_MIN_US;
++		timeout = usecs_to_jiffies(timeout_us);
+ 	} else {
+ 		timeout = TCP_TIMEOUT_INIT;
+ 	}
+diff --git a/net/ipv4/tcp_recovery.c b/net/ipv4/tcp_recovery.c
+index 21fc9859d421e..f84c5804e2db9 100644
+--- a/net/ipv4/tcp_recovery.c
++++ b/net/ipv4/tcp_recovery.c
+@@ -109,7 +109,7 @@ bool tcp_rack_mark_lost(struct sock *sk)
+ 	tp->rack.advanced = 0;
+ 	tcp_rack_detect_loss(sk, &timeout);
+ 	if (timeout) {
+-		timeout = usecs_to_jiffies(timeout) + TCP_TIMEOUT_MIN;
++		timeout = usecs_to_jiffies(timeout + TCP_TIMEOUT_MIN_US);
+ 		inet_csk_reset_xmit_timer(sk, ICSK_TIME_REO_TIMEOUT,
+ 					  timeout, inet_csk(sk)->icsk_rto);
+ 	}
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 0eafe26c05f77..193e5f2757330 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -207,6 +207,7 @@ static struct ipv6_devconf ipv6_devconf __read_mostly = {
+ 	.accept_ra_defrtr	= 1,
+ 	.accept_ra_from_local	= 0,
+ 	.accept_ra_min_hop_limit= 1,
++	.accept_ra_min_lft	= 0,
+ 	.accept_ra_pinfo	= 1,
+ #ifdef CONFIG_IPV6_ROUTER_PREF
+ 	.accept_ra_rtr_pref	= 1,
+@@ -262,6 +263,7 @@ static struct ipv6_devconf ipv6_devconf_dflt __read_mostly = {
+ 	.accept_ra_defrtr	= 1,
+ 	.accept_ra_from_local	= 0,
+ 	.accept_ra_min_hop_limit= 1,
++	.accept_ra_min_lft	= 0,
+ 	.accept_ra_pinfo	= 1,
+ #ifdef CONFIG_IPV6_ROUTER_PREF
+ 	.accept_ra_rtr_pref	= 1,
+@@ -2724,6 +2726,9 @@ void addrconf_prefix_rcv(struct net_device *dev, u8 *opt, int len, bool sllao)
+ 		return;
+ 	}
+ 
++	if (valid_lft != 0 && valid_lft < in6_dev->cnf.accept_ra_min_lft)
++		goto put;
++
+ 	/*
+ 	 *	Two things going on here:
+ 	 *	1) Add routes for on-link prefixes
+@@ -5559,6 +5564,7 @@ static inline void ipv6_store_devconf(struct ipv6_devconf *cnf,
+ 	array[DEVCONF_DISABLE_POLICY] = cnf->disable_policy;
+ 	array[DEVCONF_NDISC_TCLASS] = cnf->ndisc_tclass;
+ 	array[DEVCONF_RPL_SEG_ENABLED] = cnf->rpl_seg_enabled;
++	array[DEVCONF_ACCEPT_RA_MIN_LFT] = cnf->accept_ra_min_lft;
+ }
+ 
+ static inline size_t inet6_ifla6_size(void)
+@@ -6716,6 +6722,13 @@ static const struct ctl_table addrconf_sysctl[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec,
+ 	},
++	{
++		.procname	= "accept_ra_min_lft",
++		.data		= &ipv6_devconf.accept_ra_min_lft,
++		.maxlen		= sizeof(int),
++		.mode		= 0644,
++		.proc_handler	= proc_dointvec,
++	},
+ 	{
+ 		.procname	= "accept_ra_pinfo",
+ 		.data		= &ipv6_devconf.accept_ra_pinfo,
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index cb28f8928f9ee..fddc811bbde1f 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -776,7 +776,9 @@ static inline int esp_remove_trailer(struct sk_buff *skb)
+ 		skb->csum = csum_block_sub(skb->csum, csumdiff,
+ 					   skb->len - trimlen);
+ 	}
+-	pskb_trim(skb, skb->len - trimlen);
++	ret = pskb_trim(skb, skb->len - trimlen);
++	if (unlikely(ret))
++		return ret;
+ 
+ 	ret = nexthdr[1];
+ 
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index ac1e51087b1d8..14251347c4a50 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -1269,6 +1269,14 @@ static void ndisc_router_discovery(struct sk_buff *skb)
+ 		goto skip_defrtr;
+ 	}
+ 
++	lifetime = ntohs(ra_msg->icmph.icmp6_rt_lifetime);
++	if (lifetime != 0 && lifetime < in6_dev->cnf.accept_ra_min_lft) {
++		ND_PRINTK(2, info,
++			  "RA: router lifetime (%ds) is too short: %s\n",
++			  lifetime, skb->dev->name);
++		goto skip_defrtr;
++	}
++
+ 	/* Do not accept RA with source-addr found on local machine unless
+ 	 * accept_ra_from_local is set to true.
+ 	 */
+@@ -1281,8 +1289,6 @@ static void ndisc_router_discovery(struct sk_buff *skb)
+ 		goto skip_defrtr;
+ 	}
+ 
+-	lifetime = ntohs(ra_msg->icmph.icmp6_rt_lifetime);
+-
+ #ifdef CONFIG_IPV6_ROUTER_PREF
+ 	pref = ra_msg->icmph.icmp6_router_pref;
+ 	/* 10b is handled as if it were 00b (medium) */
+@@ -1453,6 +1459,9 @@ skip_linkparms:
+ 			if (ri->prefix_len == 0 &&
+ 			    !in6_dev->cnf.accept_ra_defrtr)
+ 				continue;
++			if (ri->lifetime != 0 &&
++			    ntohl(ri->lifetime) < in6_dev->cnf.accept_ra_min_lft)
++				continue;
+ 			if (ri->prefix_len < in6_dev->cnf.accept_ra_rt_info_min_plen)
+ 				continue;
+ 			if (ri->prefix_len > in6_dev->cnf.accept_ra_rt_info_max_plen)
+diff --git a/net/ipv6/xfrm6_policy.c b/net/ipv6/xfrm6_policy.c
+index 247296e3294bd..4c3aa97f23faa 100644
+--- a/net/ipv6/xfrm6_policy.c
++++ b/net/ipv6/xfrm6_policy.c
+@@ -120,11 +120,11 @@ static void xfrm6_dst_destroy(struct dst_entry *dst)
+ {
+ 	struct xfrm_dst *xdst = (struct xfrm_dst *)dst;
+ 
+-	if (likely(xdst->u.rt6.rt6i_idev))
+-		in6_dev_put(xdst->u.rt6.rt6i_idev);
+ 	dst_destroy_metrics_generic(dst);
+ 	if (xdst->u.rt6.rt6i_uncached_list)
+ 		rt6_uncached_list_del(&xdst->u.rt6);
++	if (likely(xdst->u.rt6.rt6i_idev))
++		in6_dev_put(xdst->u.rt6.rt6i_idev);
+ 	xfrm_dst_destroy(xdst);
+ }
+ 
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index bbbcc678c655c..788b6a3c14191 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -656,7 +656,8 @@ ieee80211_tx_h_select_key(struct ieee80211_tx_data *tx)
+ 		}
+ 
+ 		if (unlikely(tx->key && tx->key->flags & KEY_FLAG_TAINTED &&
+-			     !ieee80211_is_deauth(hdr->frame_control)))
++			     !ieee80211_is_deauth(hdr->frame_control)) &&
++			     tx->skb->protocol != tx->sdata->control_port_protocol)
+ 			return TX_DROP;
+ 
+ 		if (!skip_hw && tx->key &&
+diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
+index e45ffa762bbed..87b1cf69b9c2f 100644
+--- a/net/netfilter/ipvs/ip_vs_sync.c
++++ b/net/netfilter/ipvs/ip_vs_sync.c
+@@ -1441,7 +1441,7 @@ static int bind_mcastif_addr(struct socket *sock, struct net_device *dev)
+ 	sin.sin_addr.s_addr  = addr;
+ 	sin.sin_port         = 0;
+ 
+-	return sock->ops->bind(sock, (struct sockaddr*)&sin, sizeof(sin));
++	return kernel_bind(sock, (struct sockaddr *)&sin, sizeof(sin));
+ }
+ 
+ static void get_mcast_sockaddr(union ipvs_sockaddr *sa, int *salen,
+@@ -1548,7 +1548,7 @@ static int make_receive_sock(struct netns_ipvs *ipvs, int id,
+ 
+ 	get_mcast_sockaddr(&mcast_addr, &salen, &ipvs->bcfg, id);
+ 	sock->sk->sk_bound_dev_if = dev->ifindex;
+-	result = sock->ops->bind(sock, (struct sockaddr *)&mcast_addr, salen);
++	result = kernel_bind(sock, (struct sockaddr *)&mcast_addr, salen);
+ 	if (result < 0) {
+ 		pr_err("Error binding to the multicast addr\n");
+ 		goto error;
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index b2b63c3653d49..56f6c05362ae8 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -93,7 +93,7 @@ void nft_payload_eval(const struct nft_expr *expr,
+ 
+ 	switch (priv->base) {
+ 	case NFT_PAYLOAD_LL_HEADER:
+-		if (!skb_mac_header_was_set(skb))
++		if (!skb_mac_header_was_set(skb) || skb_mac_header_len(skb) == 0)
+ 			goto err;
+ 
+ 		if (skb_vlan_tag_present(skb)) {
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 17abf17b673e2..12d9d0d0c6022 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -570,6 +570,8 @@ static void *nft_rbtree_deactivate(const struct net *net,
+ 				   nft_rbtree_interval_end(this)) {
+ 				parent = parent->rb_right;
+ 				continue;
++			} else if (nft_set_elem_expired(&rbe->ext)) {
++				break;
+ 			} else if (!nft_set_elem_active(&rbe->ext, genmask)) {
+ 				parent = parent->rb_left;
+ 				continue;
+diff --git a/net/nfc/llcp_core.c b/net/nfc/llcp_core.c
+index b1107570eaee8..92f70686bee0a 100644
+--- a/net/nfc/llcp_core.c
++++ b/net/nfc/llcp_core.c
+@@ -205,17 +205,13 @@ static struct nfc_llcp_sock *nfc_llcp_sock_get(struct nfc_llcp_local *local,
+ 
+ 		if (tmp_sock->ssap == ssap && tmp_sock->dsap == dsap) {
+ 			llcp_sock = tmp_sock;
++			sock_hold(&llcp_sock->sk);
+ 			break;
+ 		}
+ 	}
+ 
+ 	read_unlock(&local->sockets.lock);
+ 
+-	if (llcp_sock == NULL)
+-		return NULL;
+-
+-	sock_hold(&llcp_sock->sk);
+-
+ 	return llcp_sock;
+ }
+ 
+@@ -348,7 +344,8 @@ static int nfc_llcp_wks_sap(const char *service_name, size_t service_name_len)
+ 
+ static
+ struct nfc_llcp_sock *nfc_llcp_sock_from_sn(struct nfc_llcp_local *local,
+-					    const u8 *sn, size_t sn_len)
++					    const u8 *sn, size_t sn_len,
++					    bool needref)
+ {
+ 	struct sock *sk;
+ 	struct nfc_llcp_sock *llcp_sock, *tmp_sock;
+@@ -384,6 +381,8 @@ struct nfc_llcp_sock *nfc_llcp_sock_from_sn(struct nfc_llcp_local *local,
+ 
+ 		if (memcmp(sn, tmp_sock->service_name, sn_len) == 0) {
+ 			llcp_sock = tmp_sock;
++			if (needref)
++				sock_hold(&llcp_sock->sk);
+ 			break;
+ 		}
+ 	}
+@@ -425,7 +424,8 @@ u8 nfc_llcp_get_sdp_ssap(struct nfc_llcp_local *local,
+ 		 * to this service name.
+ 		 */
+ 		if (nfc_llcp_sock_from_sn(local, sock->service_name,
+-					  sock->service_name_len) != NULL) {
++					  sock->service_name_len,
++					  false) != NULL) {
+ 			mutex_unlock(&local->sdp_lock);
+ 
+ 			return LLCP_SAP_MAX;
+@@ -833,16 +833,7 @@ out:
+ static struct nfc_llcp_sock *nfc_llcp_sock_get_sn(struct nfc_llcp_local *local,
+ 						  const u8 *sn, size_t sn_len)
+ {
+-	struct nfc_llcp_sock *llcp_sock;
+-
+-	llcp_sock = nfc_llcp_sock_from_sn(local, sn, sn_len);
+-
+-	if (llcp_sock == NULL)
+-		return NULL;
+-
+-	sock_hold(&llcp_sock->sk);
+-
+-	return llcp_sock;
++	return nfc_llcp_sock_from_sn(local, sn, sn_len, true);
+ }
+ 
+ static const u8 *nfc_llcp_connect_sn(const struct sk_buff *skb, size_t *sn_len)
+@@ -1307,7 +1298,8 @@ static void nfc_llcp_recv_snl(struct nfc_llcp_local *local,
+ 			}
+ 
+ 			llcp_sock = nfc_llcp_sock_from_sn(local, service_name,
+-							  service_name_len);
++							  service_name_len,
++							  true);
+ 			if (!llcp_sock) {
+ 				sap = 0;
+ 				goto add_snl;
+@@ -1327,6 +1319,7 @@ static void nfc_llcp_recv_snl(struct nfc_llcp_local *local,
+ 
+ 				if (sap == LLCP_SAP_MAX) {
+ 					sap = 0;
++					nfc_llcp_sock_put(llcp_sock);
+ 					goto add_snl;
+ 				}
+ 
+@@ -1344,6 +1337,7 @@ static void nfc_llcp_recv_snl(struct nfc_llcp_local *local,
+ 
+ 			pr_debug("%p %d\n", llcp_sock, sap);
+ 
++			nfc_llcp_sock_put(llcp_sock);
+ add_snl:
+ 			sdp = nfc_llcp_build_sdres_tlv(tid, sap);
+ 			if (sdp == NULL)
+diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
+index ed9019d807c78..4c931bd1c1743 100644
+--- a/net/nfc/nci/core.c
++++ b/net/nfc/nci/core.c
+@@ -894,6 +894,11 @@ static int nci_activate_target(struct nfc_dev *nfc_dev,
+ 		return -EINVAL;
+ 	}
+ 
++	if (protocol >= NFC_PROTO_MAX) {
++		pr_err("the requested nfc protocol is invalid\n");
++		return -EINVAL;
++	}
++
+ 	if (!(nci_target->supported_protocols & (1 << protocol))) {
+ 		pr_err("target does not support the requested protocol 0x%x\n",
+ 		       protocol);
+diff --git a/net/nfc/nci/spi.c b/net/nfc/nci/spi.c
+index 7d8e10e27c209..0651640d68683 100644
+--- a/net/nfc/nci/spi.c
++++ b/net/nfc/nci/spi.c
+@@ -151,6 +151,8 @@ static int send_acknowledge(struct nci_spi *nspi, u8 acknowledge)
+ 	int ret;
+ 
+ 	skb = nci_skb_alloc(nspi->ndev, 0, GFP_KERNEL);
++	if (!skb)
++		return -ENOMEM;
+ 
+ 	/* add the NCI SPI header to the start of the buffer */
+ 	hdr = skb_push(skb, NCI_SPI_HDR_LEN);
+diff --git a/net/rds/tcp_connect.c b/net/rds/tcp_connect.c
+index 2f38dac0160e8..09967bab2c36a 100644
+--- a/net/rds/tcp_connect.c
++++ b/net/rds/tcp_connect.c
+@@ -141,7 +141,7 @@ int rds_tcp_conn_path_connect(struct rds_conn_path *cp)
+ 		addrlen = sizeof(sin);
+ 	}
+ 
+-	ret = sock->ops->bind(sock, addr, addrlen);
++	ret = kernel_bind(sock, addr, addrlen);
+ 	if (ret) {
+ 		rdsdebug("bind failed with %d at address %pI6c\n",
+ 			 ret, &conn->c_laddr);
+diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c
+index 09cadd556d1e1..3994eeef95a3c 100644
+--- a/net/rds/tcp_listen.c
++++ b/net/rds/tcp_listen.c
+@@ -301,7 +301,7 @@ struct socket *rds_tcp_listen_init(struct net *net, bool isv6)
+ 		addr_len = sizeof(*sin);
+ 	}
+ 
+-	ret = sock->ops->bind(sock, (struct sockaddr *)&ss, addr_len);
++	ret = kernel_bind(sock, (struct sockaddr *)&ss, addr_len);
+ 	if (ret < 0) {
+ 		rdsdebug("could not bind %s listener socket: %d\n",
+ 			 isv6 ? "IPv6" : "IPv4", ret);
+diff --git a/net/rfkill/rfkill-gpio.c b/net/rfkill/rfkill-gpio.c
+index f5afc9bcdee65..2cc95c8dc4c7b 100644
+--- a/net/rfkill/rfkill-gpio.c
++++ b/net/rfkill/rfkill-gpio.c
+@@ -98,13 +98,13 @@ static int rfkill_gpio_probe(struct platform_device *pdev)
+ 
+ 	rfkill->clk = devm_clk_get(&pdev->dev, NULL);
+ 
+-	gpio = devm_gpiod_get_optional(&pdev->dev, "reset", GPIOD_OUT_LOW);
++	gpio = devm_gpiod_get_optional(&pdev->dev, "reset", GPIOD_ASIS);
+ 	if (IS_ERR(gpio))
+ 		return PTR_ERR(gpio);
+ 
+ 	rfkill->reset_gpio = gpio;
+ 
+-	gpio = devm_gpiod_get_optional(&pdev->dev, "shutdown", GPIOD_OUT_LOW);
++	gpio = devm_gpiod_get_optional(&pdev->dev, "shutdown", GPIOD_ASIS);
+ 	if (IS_ERR(gpio))
+ 		return PTR_ERR(gpio);
+ 
+diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
+index 6076294a632c5..adcf87d417ae4 100644
+--- a/net/sched/sch_hfsc.c
++++ b/net/sched/sch_hfsc.c
+@@ -903,6 +903,14 @@ hfsc_change_usc(struct hfsc_class *cl, struct tc_service_curve *usc,
+ 	cl->cl_flags |= HFSC_USC;
+ }
+ 
++static void
++hfsc_upgrade_rt(struct hfsc_class *cl)
++{
++	cl->cl_fsc = cl->cl_rsc;
++	rtsc_init(&cl->cl_virtual, &cl->cl_fsc, cl->cl_vt, cl->cl_total);
++	cl->cl_flags |= HFSC_FSC;
++}
++
+ static const struct nla_policy hfsc_policy[TCA_HFSC_MAX + 1] = {
+ 	[TCA_HFSC_RSC]	= { .len = sizeof(struct tc_service_curve) },
+ 	[TCA_HFSC_FSC]	= { .len = sizeof(struct tc_service_curve) },
+@@ -1012,10 +1020,6 @@ hfsc_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ 		if (parent == NULL)
+ 			return -ENOENT;
+ 	}
+-	if (!(parent->cl_flags & HFSC_FSC) && parent != &q->root) {
+-		NL_SET_ERR_MSG(extack, "Invalid parent - parent class must have FSC");
+-		return -EINVAL;
+-	}
+ 
+ 	if (classid == 0 || TC_H_MAJ(classid ^ sch->handle) != 0)
+ 		return -EINVAL;
+@@ -1068,6 +1072,12 @@ hfsc_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ 	cl->cf_tree = RB_ROOT;
+ 
+ 	sch_tree_lock(sch);
++	/* Check if the inner class is a misconfigured 'rt' */
++	if (!(parent->cl_flags & HFSC_FSC) && parent != &q->root) {
++		NL_SET_ERR_MSG(extack,
++			       "Forced curve change on parent 'rt' to 'sc'");
++		hfsc_upgrade_rt(parent);
++	}
+ 	qdisc_class_hash_insert(&q->clhash, &cl->cl_common);
+ 	list_add_tail(&cl->siblings, &parent->children);
+ 	if (parent->level == 0)
+diff --git a/net/socket.c b/net/socket.c
+index de89ab55d4759..36e38ee434ea1 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -3414,7 +3414,11 @@ static long compat_sock_ioctl(struct file *file, unsigned int cmd,
+ 
+ int kernel_bind(struct socket *sock, struct sockaddr *addr, int addrlen)
+ {
+-	return sock->ops->bind(sock, addr, addrlen);
++	struct sockaddr_storage address;
++
++	memcpy(&address, addr, addrlen);
++
++	return sock->ops->bind(sock, (struct sockaddr *)&address, addrlen);
+ }
+ EXPORT_SYMBOL(kernel_bind);
+ 
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index ea36d8c47b31a..0ac829c8f1888 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -7467,7 +7467,7 @@ static int nl80211_update_mesh_config(struct sk_buff *skb,
+ 	struct cfg80211_registered_device *rdev = info->user_ptr[0];
+ 	struct net_device *dev = info->user_ptr[1];
+ 	struct wireless_dev *wdev = dev->ieee80211_ptr;
+-	struct mesh_config cfg;
++	struct mesh_config cfg = {};
+ 	u32 mask;
+ 	int err;
+ 
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index f59691936e5b8..1e6dfe204ff36 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -875,6 +875,10 @@ static int cfg80211_scan_6ghz(struct cfg80211_registered_device *rdev)
+ 		    !cfg80211_find_ssid_match(ap, request))
+ 			continue;
+ 
++		if (!is_broadcast_ether_addr(request->bssid) &&
++		    !ether_addr_equal(request->bssid, ap->bssid))
++			continue;
++
+ 		if (!request->n_ssids && ap->multi_bss && !ap->transmitted_bssid)
+ 			continue;
+ 
+diff --git a/net/xfrm/xfrm_interface_core.c b/net/xfrm/xfrm_interface_core.c
+index 4eeec33675754..9eaf0174d9981 100644
+--- a/net/xfrm/xfrm_interface_core.c
++++ b/net/xfrm/xfrm_interface_core.c
+@@ -274,8 +274,8 @@ static int xfrmi_rcv_cb(struct sk_buff *skb, int err)
+ 	skb->dev = dev;
+ 
+ 	if (err) {
+-		dev->stats.rx_errors++;
+-		dev->stats.rx_dropped++;
++		DEV_STATS_INC(dev, rx_errors);
++		DEV_STATS_INC(dev, rx_dropped);
+ 
+ 		return 0;
+ 	}
+@@ -309,7 +309,6 @@ static int
+ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
+ {
+ 	struct xfrm_if *xi = netdev_priv(dev);
+-	struct net_device_stats *stats = &xi->dev->stats;
+ 	struct dst_entry *dst = skb_dst(skb);
+ 	unsigned int length = skb->len;
+ 	struct net_device *tdev;
+@@ -335,7 +334,7 @@ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
+ 	tdev = dst->dev;
+ 
+ 	if (tdev == dev) {
+-		stats->collisions++;
++		DEV_STATS_INC(dev, collisions);
+ 		net_warn_ratelimited("%s: Local routing loop detected!\n",
+ 				     dev->name);
+ 		goto tx_err_dst_release;
+@@ -378,13 +377,13 @@ xmit:
+ 		tstats->tx_packets++;
+ 		u64_stats_update_end(&tstats->syncp);
+ 	} else {
+-		stats->tx_errors++;
+-		stats->tx_aborted_errors++;
++		DEV_STATS_INC(dev, tx_errors);
++		DEV_STATS_INC(dev, tx_aborted_errors);
+ 	}
+ 
+ 	return 0;
+ tx_err_link_failure:
+-	stats->tx_carrier_errors++;
++	DEV_STATS_INC(dev, tx_carrier_errors);
+ 	dst_link_failure(skb);
+ tx_err_dst_release:
+ 	dst_release(dst);
+@@ -394,7 +393,6 @@ tx_err_dst_release:
+ static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct xfrm_if *xi = netdev_priv(dev);
+-	struct net_device_stats *stats = &xi->dev->stats;
+ 	struct dst_entry *dst = skb_dst(skb);
+ 	struct flowi fl;
+ 	int ret;
+@@ -411,7 +409,7 @@ static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev)
+ 			dst = ip6_route_output(dev_net(dev), NULL, &fl.u.ip6);
+ 			if (dst->error) {
+ 				dst_release(dst);
+-				stats->tx_carrier_errors++;
++				DEV_STATS_INC(dev, tx_carrier_errors);
+ 				goto tx_err;
+ 			}
+ 			skb_dst_set(skb, dst);
+@@ -427,7 +425,7 @@ static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev)
+ 			fl.u.ip4.flowi4_flags |= FLOWI_FLAG_ANYSRC;
+ 			rt = __ip_route_output_key(dev_net(dev), &fl.u.ip4);
+ 			if (IS_ERR(rt)) {
+-				stats->tx_carrier_errors++;
++				DEV_STATS_INC(dev, tx_carrier_errors);
+ 				goto tx_err;
+ 			}
+ 			skb_dst_set(skb, &rt->dst);
+@@ -446,8 +444,8 @@ static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	return NETDEV_TX_OK;
+ 
+ tx_err:
+-	stats->tx_errors++;
+-	stats->tx_dropped++;
++	DEV_STATS_INC(dev, tx_errors);
++	DEV_STATS_INC(dev, tx_dropped);
+ 	kfree_skb(skb);
+ 	return NETDEV_TX_OK;
+ }
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 465d28341ed6d..664d55957feb5 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -1371,8 +1371,6 @@ EXPORT_SYMBOL(xfrm_policy_hash_rebuild);
+  * of an absolute inpredictability of ordering of rules. This will not pass. */
+ static u32 xfrm_gen_index(struct net *net, int dir, u32 index)
+ {
+-	static u32 idx_generator;
+-
+ 	for (;;) {
+ 		struct hlist_head *list;
+ 		struct xfrm_policy *p;
+@@ -1380,8 +1378,8 @@ static u32 xfrm_gen_index(struct net *net, int dir, u32 index)
+ 		int found;
+ 
+ 		if (!index) {
+-			idx = (idx_generator | dir);
+-			idx_generator += 8;
++			idx = (net->xfrm.idx_generator | dir);
++			net->xfrm.idx_generator += 8;
+ 		} else {
+ 			idx = index;
+ 			index = 0;
+diff --git a/sound/soc/pxa/pxa-ssp.c b/sound/soc/pxa/pxa-ssp.c
+index c4e7307a44374..d847263a18b93 100644
+--- a/sound/soc/pxa/pxa-ssp.c
++++ b/sound/soc/pxa/pxa-ssp.c
+@@ -797,7 +797,7 @@ static int pxa_ssp_probe(struct snd_soc_dai *dai)
+ 		if (IS_ERR(priv->extclk)) {
+ 			ret = PTR_ERR(priv->extclk);
+ 			if (ret == -EPROBE_DEFER)
+-				return ret;
++				goto err_priv;
+ 
+ 			priv->extclk = NULL;
+ 		}
+diff --git a/tools/testing/selftests/vm/charge_reserved_hugetlb.sh b/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
+index 18d33684faade..7536ff2f890a1 100644
+--- a/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
++++ b/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
+@@ -21,19 +21,23 @@ if [[ "$1" == "-cgroup-v2" ]]; then
+   reservation_usage_file=rsvd.current
+ fi
+ 
+-cgroup_path=/dev/cgroup/memory
+-if [[ ! -e $cgroup_path ]]; then
+-  mkdir -p $cgroup_path
+-  if [[ $cgroup2 ]]; then
++if [[ $cgroup2 ]]; then
++  cgroup_path=$(mount -t cgroup2 | head -1 | awk '{print $3}')
++  if [[ -z "$cgroup_path" ]]; then
++    cgroup_path=/dev/cgroup/memory
+     mount -t cgroup2 none $cgroup_path
+-  else
++    do_umount=1
++  fi
++  echo "+hugetlb" >$cgroup_path/cgroup.subtree_control
++else
++  cgroup_path=$(mount -t cgroup | grep ",hugetlb" | awk '{print $3}')
++  if [[ -z "$cgroup_path" ]]; then
++    cgroup_path=/dev/cgroup/memory
+     mount -t cgroup memory,hugetlb $cgroup_path
++    do_umount=1
+   fi
+ fi
+-
+-if [[ $cgroup2 ]]; then
+-  echo "+hugetlb" >/dev/cgroup/memory/cgroup.subtree_control
+-fi
++export cgroup_path
+ 
+ function cleanup() {
+   if [[ $cgroup2 ]]; then
+@@ -105,7 +109,7 @@ function setup_cgroup() {
+ 
+ function wait_for_hugetlb_memory_to_get_depleted() {
+   local cgroup="$1"
+-  local path="/dev/cgroup/memory/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
++  local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
+   # Wait for hugetlbfs memory to get depleted.
+   while [ $(cat $path) != 0 ]; do
+     echo Waiting for hugetlb memory to get depleted.
+@@ -118,7 +122,7 @@ function wait_for_hugetlb_memory_to_get_reserved() {
+   local cgroup="$1"
+   local size="$2"
+ 
+-  local path="/dev/cgroup/memory/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
++  local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
+   # Wait for hugetlbfs memory to get written.
+   while [ $(cat $path) != $size ]; do
+     echo Waiting for hugetlb memory reservation to reach size $size.
+@@ -131,7 +135,7 @@ function wait_for_hugetlb_memory_to_get_written() {
+   local cgroup="$1"
+   local size="$2"
+ 
+-  local path="/dev/cgroup/memory/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
++  local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
+   # Wait for hugetlbfs memory to get written.
+   while [ $(cat $path) != $size ]; do
+     echo Waiting for hugetlb memory to reach size $size.
+@@ -571,5 +575,7 @@ for populate in "" "-o"; do
+   done     # populate
+ done       # method
+ 
+-umount $cgroup_path
+-rmdir $cgroup_path
++if [[ $do_umount ]]; then
++  umount $cgroup_path
++  rmdir $cgroup_path
++fi
+diff --git a/tools/testing/selftests/vm/hugetlb_reparenting_test.sh b/tools/testing/selftests/vm/hugetlb_reparenting_test.sh
+index d11d1febccc3b..c665b16f1e370 100644
+--- a/tools/testing/selftests/vm/hugetlb_reparenting_test.sh
++++ b/tools/testing/selftests/vm/hugetlb_reparenting_test.sh
+@@ -15,19 +15,24 @@ if [[ "$1" == "-cgroup-v2" ]]; then
+   usage_file=current
+ fi
+ 
+-CGROUP_ROOT='/dev/cgroup/memory'
+-MNT='/mnt/huge/'
+ 
+-if [[ ! -e $CGROUP_ROOT ]]; then
+-  mkdir -p $CGROUP_ROOT
+-  if [[ $cgroup2 ]]; then
++if [[ $cgroup2 ]]; then
++  CGROUP_ROOT=$(mount -t cgroup2 | head -1 | awk '{print $3}')
++  if [[ -z "$CGROUP_ROOT" ]]; then
++    CGROUP_ROOT=/dev/cgroup/memory
+     mount -t cgroup2 none $CGROUP_ROOT
+-    sleep 1
+-    echo "+hugetlb +memory" >$CGROUP_ROOT/cgroup.subtree_control
+-  else
++    do_umount=1
++  fi
++  echo "+hugetlb +memory" >$CGROUP_ROOT/cgroup.subtree_control
++else
++  CGROUP_ROOT=$(mount -t cgroup | grep ",hugetlb" | awk '{print $3}')
++  if [[ -z "$CGROUP_ROOT" ]]; then
++    CGROUP_ROOT=/dev/cgroup/memory
+     mount -t cgroup memory,hugetlb $CGROUP_ROOT
++    do_umount=1
+   fi
+ fi
++MNT='/mnt/huge/'
+ 
+ function get_machine_hugepage_size() {
+   hpz=$(grep -i hugepagesize /proc/meminfo)
+diff --git a/tools/testing/selftests/vm/write_hugetlb_memory.sh b/tools/testing/selftests/vm/write_hugetlb_memory.sh
+index d3d0d108924d4..70a02301f4c27 100644
+--- a/tools/testing/selftests/vm/write_hugetlb_memory.sh
++++ b/tools/testing/selftests/vm/write_hugetlb_memory.sh
+@@ -14,7 +14,7 @@ want_sleep=$8
+ reserve=$9
+ 
+ echo "Putting task in cgroup '$cgroup'"
+-echo $$ > /dev/cgroup/memory/"$cgroup"/cgroup.procs
++echo $$ > ${cgroup_path:-/dev/cgroup/memory}/"$cgroup"/cgroup.procs
+ 
+ echo "Method is $method"
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-11-08 17:28 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-11-08 17:28 UTC (permalink / raw
  To: gentoo-commits

commit:     f59e2bc692d0aef0c8fdc8ce3900008ed3ac7799
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Nov  8 17:28:09 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov  8 17:28:09 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f59e2bc6

Linux patch 5.10.200

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1199_linux-5.10.200.patch | 3491 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3495 insertions(+)

diff --git a/0000_README b/0000_README
index cc499ff3..7b076361 100644
--- a/0000_README
+++ b/0000_README
@@ -839,6 +839,10 @@ Patch:  1198_linux-5.10.199.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.199
 
+Patch:  1199_linux-5.10.200.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.200
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1199_linux-5.10.200.patch b/1199_linux-5.10.200.patch
new file mode 100644
index 00000000..09743b1a
--- /dev/null
+++ b/1199_linux-5.10.200.patch
@@ -0,0 +1,3491 @@
+diff --git a/Makefile b/Makefile
+index 5105828bf6dab..da4a3de444cfd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 199
++SUBLEVEL = 200
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
+index da8c71f321ad3..1e417c3eedfef 100644
+--- a/arch/powerpc/kernel/setup-common.c
++++ b/arch/powerpc/kernel/setup-common.c
+@@ -906,6 +906,8 @@ void __init setup_arch(char **cmdline_p)
+ 
+ 	/* Parse memory topology */
+ 	mem_topology_setup();
++	/* Set max_mapnr before paging_init() */
++	set_max_mapnr(max_pfn);
+ 
+ 	/*
+ 	 * Release secondary cpus out of their spinloops at 0x60 now that
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index 1ed276d2305fa..08e3422eb7926 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -293,7 +293,6 @@ void __init mem_init(void)
+ #endif
+ 
+ 	high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
+-	set_max_mapnr(max_pfn);
+ 
+ 	kasan_late_init();
+ 
+diff --git a/arch/sparc/lib/checksum_32.S b/arch/sparc/lib/checksum_32.S
+index 7488d130faf73..f5a8851e0e55b 100644
+--- a/arch/sparc/lib/checksum_32.S
++++ b/arch/sparc/lib/checksum_32.S
+@@ -463,5 +463,5 @@ ccslow:	cmp	%g1, 0
+  * we only bother with faults on loads... */
+ 
+ cc_fault:
+-	ret
++	retl
+ 	 clr	%o0
+diff --git a/arch/x86/include/asm/i8259.h b/arch/x86/include/asm/i8259.h
+index 89789e8c80f66..e16574c16e933 100644
+--- a/arch/x86/include/asm/i8259.h
++++ b/arch/x86/include/asm/i8259.h
+@@ -67,6 +67,8 @@ struct legacy_pic {
+ 	void (*make_irq)(unsigned int irq);
+ };
+ 
++void legacy_pic_pcat_compat(void);
++
+ extern struct legacy_pic *legacy_pic;
+ extern struct legacy_pic null_legacy_pic;
+ 
+diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
+index 389d851a02c4f..4e1757bf66a89 100644
+--- a/arch/x86/include/asm/setup.h
++++ b/arch/x86/include/asm/setup.h
+@@ -108,27 +108,16 @@ extern unsigned long _brk_end;
+ void *extend_brk(size_t size, size_t align);
+ 
+ /*
+- * Reserve space in the brk section.  The name must be unique within
+- * the file, and somewhat descriptive.  The size is in bytes.  Must be
+- * used at file scope.
++ * Reserve space in the .brk section, which is a block of memory from which the
++ * caller is allowed to allocate very early (before even memblock is available)
++ * by calling extend_brk().  All allocated memory will be eventually converted
++ * to memblock.  Any leftover unallocated memory will be freed.
+  *
+- * (This uses a temp function to wrap the asm so we can pass it the
+- * size parameter; otherwise we wouldn't be able to.  We can't use a
+- * "section" attribute on a normal variable because it always ends up
+- * being @progbits, which ends up allocating space in the vmlinux
+- * executable.)
++ * The size is in bytes.
+  */
+-#define RESERVE_BRK(name,sz)						\
+-	static void __section(".discard.text") __used notrace		\
+-	__brk_reservation_fn_##name##__(void) {				\
+-		asm volatile (						\
+-			".pushsection .brk_reservation,\"aw\",@nobits;" \
+-			".brk." #name ":"				\
+-			" 1:.skip %c0;"					\
+-			" .size .brk." #name ", . - 1b;"		\
+-			" .popsection"					\
+-			: : "i" (sz));					\
+-	}
++#define RESERVE_BRK(name, size)					\
++	__section(".bss..brk") __aligned(1) __used	\
++	static char __brk_##name[size]
+ 
+ /* Helper for reserving space for arrays of things */
+ #define RESERVE_BRK_ARRAY(type, name, entries)		\
+@@ -146,12 +135,19 @@ asmlinkage void __init x86_64_start_reservations(char *real_mode_data);
+ 
+ #endif /* __i386__ */
+ #endif /* _SETUP */
+-#else
+-#define RESERVE_BRK(name,sz)				\
+-	.pushsection .brk_reservation,"aw",@nobits;	\
+-.brk.name:						\
+-1:	.skip sz;					\
+-	.size .brk.name,.-1b;				\
++
++#else  /* __ASSEMBLY */
++
++.macro __RESERVE_BRK name, size
++	.pushsection .bss..brk, "aw"
++SYM_DATA_START(__brk_\name)
++	.skip \size
++SYM_DATA_END(__brk_\name)
+ 	.popsection
++.endm
++
++#define RESERVE_BRK(name, size) __RESERVE_BRK name, size
++
+ #endif /* __ASSEMBLY__ */
++
+ #endif /* _ASM_X86_SETUP_H */
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 55562a9b7f92e..c91851573e5f1 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -141,6 +141,9 @@ static int __init acpi_parse_madt(struct acpi_table_header *table)
+ 		       madt->address);
+ 	}
+ 
++	if (madt->flags & ACPI_MADT_PCAT_COMPAT)
++		legacy_pic_pcat_compat();
++
+ 	default_acpi_madt_oem_check(madt->header.oem_id,
+ 				    madt->header.oem_table_id);
+ 
+diff --git a/arch/x86/kernel/i8259.c b/arch/x86/kernel/i8259.c
+index f325389d03516..4c9f559e1388b 100644
+--- a/arch/x86/kernel/i8259.c
++++ b/arch/x86/kernel/i8259.c
+@@ -32,6 +32,7 @@
+  */
+ static void init_8259A(int auto_eoi);
+ 
++static bool pcat_compat __ro_after_init;
+ static int i8259A_auto_eoi;
+ DEFINE_RAW_SPINLOCK(i8259A_lock);
+ 
+@@ -301,15 +302,32 @@ static void unmask_8259A(void)
+ 
+ static int probe_8259A(void)
+ {
++	unsigned char new_val, probe_val = ~(1 << PIC_CASCADE_IR);
+ 	unsigned long flags;
+-	unsigned char probe_val = ~(1 << PIC_CASCADE_IR);
+-	unsigned char new_val;
++
++	/*
++	 * If MADT has the PCAT_COMPAT flag set, then do not bother probing
++	 * for the PIC. Some BIOSes leave the PIC uninitialized and probing
++	 * fails.
++	 *
++	 * Right now this causes problems as quite some code depends on
++	 * nr_legacy_irqs() > 0 or has_legacy_pic() == true. This is silly
++	 * when the system has an IO/APIC because then PIC is not required
++	 * at all, except for really old machines where the timer interrupt
++	 * must be routed through the PIC. So just pretend that the PIC is
++	 * there and let legacy_pic->init() initialize it for nothing.
++	 *
++	 * Alternatively this could just try to initialize the PIC and
++	 * repeat the probe, but for cases where there is no PIC that's
++	 * just pointless.
++	 */
++	if (pcat_compat)
++		return nr_legacy_irqs();
++
+ 	/*
+-	 * Check to see if we have a PIC.
+-	 * Mask all except the cascade and read
+-	 * back the value we just wrote. If we don't
+-	 * have a PIC, we will read 0xff as opposed to the
+-	 * value we wrote.
++	 * Check to see if we have a PIC.  Mask all except the cascade and
++	 * read back the value we just wrote. If we don't have a PIC, we
++	 * will read 0xff as opposed to the value we wrote.
+ 	 */
+ 	raw_spin_lock_irqsave(&i8259A_lock, flags);
+ 
+@@ -431,5 +449,9 @@ static int __init i8259A_init_ops(void)
+ 
+ 	return 0;
+ }
+-
+ device_initcall(i8259A_init_ops);
++
++void __init legacy_pic_pcat_compat(void)
++{
++	pcat_compat = true;
++}
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 065152d9265e4..e9b483c6f4131 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -64,11 +64,6 @@ RESERVE_BRK(dmi_alloc, 65536);
+ #endif
+ 
+ 
+-/*
+- * Range of the BSS area. The size of the BSS area is determined
+- * at link time, with RESERVE_BRK*() facility reserving additional
+- * chunks.
+- */
+ unsigned long _brk_start = (unsigned long)__brk_base;
+ unsigned long _brk_end   = (unsigned long)__brk_base;
+ 
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index f0d4500ae77ae..740f87d8aa481 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -414,7 +414,7 @@ SECTIONS
+ 	.brk : AT(ADDR(.brk) - LOAD_OFFSET) {
+ 		__brk_base = .;
+ 		. += 64 * 1024;		/* 64k alignment slop space */
+-		*(.brk_reservation)	/* areas brk users have reserved */
++		*(.bss..brk)		/* areas brk users have reserved */
+ 		__brk_limit = .;
+ 	}
+ 
+diff --git a/drivers/base/driver.c b/drivers/base/driver.c
+index 8c0d33e182fd5..1b9d47b10bd0a 100644
+--- a/drivers/base/driver.c
++++ b/drivers/base/driver.c
+@@ -30,6 +30,75 @@ static struct device *next_device(struct klist_iter *i)
+ 	return dev;
+ }
+ 
++/**
++ * driver_set_override() - Helper to set or clear driver override.
++ * @dev: Device to change
++ * @override: Address of string to change (e.g. &device->driver_override);
++ *            The contents will be freed and hold newly allocated override.
++ * @s: NUL-terminated string, new driver name to force a match, pass empty
++ *     string to clear it ("" or "\n", where the latter is only for sysfs
++ *     interface).
++ * @len: length of @s
++ *
++ * Helper to set or clear driver override in a device, intended for the cases
++ * when the driver_override field is allocated by driver/bus code.
++ *
++ * Returns: 0 on success or a negative error code on failure.
++ */
++int driver_set_override(struct device *dev, const char **override,
++			const char *s, size_t len)
++{
++	const char *new, *old;
++	char *cp;
++
++	if (!override || !s)
++		return -EINVAL;
++
++	/*
++	 * The stored value will be used in sysfs show callback (sysfs_emit()),
++	 * which has a length limit of PAGE_SIZE and adds a trailing newline.
++	 * Thus we can store one character less to avoid truncation during sysfs
++	 * show.
++	 */
++	if (len >= (PAGE_SIZE - 1))
++		return -EINVAL;
++
++	if (!len) {
++		/* Empty string passed - clear override */
++		device_lock(dev);
++		old = *override;
++		*override = NULL;
++		device_unlock(dev);
++		kfree(old);
++
++		return 0;
++	}
++
++	cp = strnchr(s, len, '\n');
++	if (cp)
++		len = cp - s;
++
++	new = kstrndup(s, len, GFP_KERNEL);
++	if (!new)
++		return -ENOMEM;
++
++	device_lock(dev);
++	old = *override;
++	if (cp != s) {
++		*override = new;
++	} else {
++		/* "\n" passed - clear override */
++		kfree(new);
++		*override = NULL;
++	}
++	device_unlock(dev);
++
++	kfree(old);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(driver_set_override);
++
+ /**
+  * driver_for_each_device - Iterator for devices bound to a driver.
+  * @drv: Driver we're iterating.
+diff --git a/drivers/base/platform.c b/drivers/base/platform.c
+index 88aef93eb4ddf..647066229fec3 100644
+--- a/drivers/base/platform.c
++++ b/drivers/base/platform.c
+@@ -1046,31 +1046,11 @@ static ssize_t driver_override_store(struct device *dev,
+ 				     const char *buf, size_t count)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+-	char *driver_override, *old, *cp;
+-
+-	/* We need to keep extra room for a newline */
+-	if (count >= (PAGE_SIZE - 1))
+-		return -EINVAL;
+-
+-	driver_override = kstrndup(buf, count, GFP_KERNEL);
+-	if (!driver_override)
+-		return -ENOMEM;
+-
+-	cp = strchr(driver_override, '\n');
+-	if (cp)
+-		*cp = '\0';
+-
+-	device_lock(dev);
+-	old = pdev->driver_override;
+-	if (strlen(driver_override)) {
+-		pdev->driver_override = driver_override;
+-	} else {
+-		kfree(driver_override);
+-		pdev->driver_override = NULL;
+-	}
+-	device_unlock(dev);
++	int ret;
+ 
+-	kfree(old);
++	ret = driver_set_override(dev, &pdev->driver_override, buf, count);
++	if (ret)
++		return ret;
+ 
+ 	return count;
+ }
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 3575afe16a574..62572d59e7e38 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3167,6 +3167,7 @@ static void possible_parent_show(struct seq_file *s, struct clk_core *core,
+ 				 unsigned int i, char terminator)
+ {
+ 	struct clk_core *parent;
++	const char *name = NULL;
+ 
+ 	/*
+ 	 * Go through the following options to fetch a parent's name.
+@@ -3181,18 +3182,20 @@ static void possible_parent_show(struct seq_file *s, struct clk_core *core,
+ 	 * registered (yet).
+ 	 */
+ 	parent = clk_core_get_parent_by_index(core, i);
+-	if (parent)
++	if (parent) {
+ 		seq_puts(s, parent->name);
+-	else if (core->parents[i].name)
++	} else if (core->parents[i].name) {
+ 		seq_puts(s, core->parents[i].name);
+-	else if (core->parents[i].fw_name)
++	} else if (core->parents[i].fw_name) {
+ 		seq_printf(s, "<%s>(fw)", core->parents[i].fw_name);
+-	else if (core->parents[i].index >= 0)
+-		seq_puts(s,
+-			 of_clk_get_parent_name(core->of_node,
+-						core->parents[i].index));
+-	else
+-		seq_puts(s, "(missing)");
++	} else {
++		if (core->parents[i].index >= 0)
++			name = of_clk_get_parent_name(core->of_node, core->parents[i].index);
++		if (!name)
++			name = "(missing)";
++
++		seq_puts(s, name);
++	}
+ 
+ 	seq_putc(s, terminator);
+ }
+diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c
+index d99fec8215083..4c306dd13e865 100644
+--- a/drivers/dma/ste_dma40.c
++++ b/drivers/dma/ste_dma40.c
+@@ -3698,6 +3698,7 @@ static int __init d40_probe(struct platform_device *pdev)
+ 		regulator_disable(base->lcpa_regulator);
+ 		regulator_put(base->lcpa_regulator);
+ 	}
++	pm_runtime_disable(base->dev);
+ 
+ 	kfree(base->lcla_pool.alloc_map);
+ 	kfree(base->lookup_log_chans);
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index b5e15933cb5f4..27305f3398819 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -2612,14 +2612,14 @@ static struct drm_dp_mst_branch *get_mst_branch_device_by_guid_helper(
+ 	struct drm_dp_mst_branch *found_mstb;
+ 	struct drm_dp_mst_port *port;
+ 
++	if (!mstb)
++		return NULL;
++
+ 	if (memcmp(mstb->guid, guid, 16) == 0)
+ 		return mstb;
+ 
+ 
+ 	list_for_each_entry(port, &mstb->ports, next) {
+-		if (!port->mstb)
+-			continue;
+-
+ 		found_mstb = get_mst_branch_device_by_guid_helper(port->mstb, guid);
+ 
+ 		if (found_mstb)
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index dac46bc2fafc8..a1bc8a2c3eadc 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -740,6 +740,8 @@ static void __aspeed_i2c_reg_slave(struct aspeed_i2c_bus *bus, u16 slave_addr)
+ 	func_ctrl_reg_val = readl(bus->base + ASPEED_I2C_FUN_CTRL_REG);
+ 	func_ctrl_reg_val |= ASPEED_I2CD_SLAVE_EN;
+ 	writel(func_ctrl_reg_val, bus->base + ASPEED_I2C_FUN_CTRL_REG);
++
++	bus->slave_state = ASPEED_I2C_SLAVE_INACTIVE;
+ }
+ 
+ static int aspeed_i2c_reg_slave(struct i2c_client *client)
+@@ -756,7 +758,6 @@ static int aspeed_i2c_reg_slave(struct i2c_client *client)
+ 	__aspeed_i2c_reg_slave(bus, client->addr);
+ 
+ 	bus->slave = client;
+-	bus->slave_state = ASPEED_I2C_SLAVE_INACTIVE;
+ 	spin_unlock_irqrestore(&bus->lock, flags);
+ 
+ 	return 0;
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index 0f4c0282028b0..7b9272f9cc211 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -1042,9 +1042,10 @@ static int stm32f7_i2c_smbus_xfer_msg(struct stm32f7_i2c_dev *i2c_dev,
+ 	/* Configure PEC */
+ 	if ((flags & I2C_CLIENT_PEC) && f7_msg->size != I2C_SMBUS_QUICK) {
+ 		cr1 |= STM32F7_I2C_CR1_PECEN;
+-		cr2 |= STM32F7_I2C_CR2_PECBYTE;
+-		if (!f7_msg->read_write)
++		if (!f7_msg->read_write) {
++			cr2 |= STM32F7_I2C_CR2_PECBYTE;
+ 			f7_msg->count++;
++		}
+ 	} else {
+ 		cr1 &= ~STM32F7_I2C_CR1_PECEN;
+ 		cr2 &= ~STM32F7_I2C_CR2_PECBYTE;
+@@ -1132,8 +1133,10 @@ static void stm32f7_i2c_smbus_rep_start(struct stm32f7_i2c_dev *i2c_dev)
+ 	f7_msg->stop = true;
+ 
+ 	/* Add one byte for PEC if needed */
+-	if (cr1 & STM32F7_I2C_CR1_PECEN)
++	if (cr1 & STM32F7_I2C_CR1_PECEN) {
++		cr2 |= STM32F7_I2C_CR2_PECBYTE;
+ 		f7_msg->count++;
++	}
+ 
+ 	/* Set number of bytes to be transferred */
+ 	cr2 &= ~(STM32F7_I2C_CR2_NBYTES_MASK);
+diff --git a/drivers/i2c/muxes/i2c-demux-pinctrl.c b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+index 8e8688e8de0fb..45a3f7e7b3f68 100644
+--- a/drivers/i2c/muxes/i2c-demux-pinctrl.c
++++ b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+@@ -61,7 +61,7 @@ static int i2c_demux_activate_master(struct i2c_demux_pinctrl_priv *priv, u32 ne
+ 	if (ret)
+ 		goto err;
+ 
+-	adap = of_find_i2c_adapter_by_node(priv->chan[new_chan].parent_np);
++	adap = of_get_i2c_adapter_by_node(priv->chan[new_chan].parent_np);
+ 	if (!adap) {
+ 		ret = -ENODEV;
+ 		goto err_with_revert;
+diff --git a/drivers/i2c/muxes/i2c-mux-gpmux.c b/drivers/i2c/muxes/i2c-mux-gpmux.c
+index 33024acaac02b..0ebc12575081c 100644
+--- a/drivers/i2c/muxes/i2c-mux-gpmux.c
++++ b/drivers/i2c/muxes/i2c-mux-gpmux.c
+@@ -52,7 +52,7 @@ static struct i2c_adapter *mux_parent_adapter(struct device *dev)
+ 		dev_err(dev, "Cannot parse i2c-parent\n");
+ 		return ERR_PTR(-ENODEV);
+ 	}
+-	parent = of_find_i2c_adapter_by_node(parent_np);
++	parent = of_get_i2c_adapter_by_node(parent_np);
+ 	of_node_put(parent_np);
+ 	if (!parent)
+ 		return ERR_PTR(-EPROBE_DEFER);
+diff --git a/drivers/i2c/muxes/i2c-mux-pinctrl.c b/drivers/i2c/muxes/i2c-mux-pinctrl.c
+index f1bb00a11ad62..fc991cf002af4 100644
+--- a/drivers/i2c/muxes/i2c-mux-pinctrl.c
++++ b/drivers/i2c/muxes/i2c-mux-pinctrl.c
+@@ -62,7 +62,7 @@ static struct i2c_adapter *i2c_mux_pinctrl_parent_adapter(struct device *dev)
+ 		dev_err(dev, "Cannot parse i2c-parent\n");
+ 		return ERR_PTR(-ENODEV);
+ 	}
+-	parent = of_find_i2c_adapter_by_node(parent_np);
++	parent = of_get_i2c_adapter_by_node(parent_np);
+ 	of_node_put(parent_np);
+ 	if (!parent)
+ 		return ERR_PTR(-EPROBE_DEFER);
+diff --git a/drivers/iio/adc/exynos_adc.c b/drivers/iio/adc/exynos_adc.c
+index 99f4404e9fd15..b6f09e5acb91d 100644
+--- a/drivers/iio/adc/exynos_adc.c
++++ b/drivers/iio/adc/exynos_adc.c
+@@ -821,16 +821,26 @@ static int exynos_adc_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	/* leave out any TS related code if unreachable */
++	if (IS_REACHABLE(CONFIG_INPUT)) {
++		has_ts = of_property_read_bool(pdev->dev.of_node,
++					       "has-touchscreen") || pdata;
++	}
++
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0)
+ 		return irq;
+ 	info->irq = irq;
+ 
+-	irq = platform_get_irq(pdev, 1);
+-	if (irq == -EPROBE_DEFER)
+-		return irq;
++	if (has_ts) {
++		irq = platform_get_irq(pdev, 1);
++		if (irq == -EPROBE_DEFER)
++			return irq;
+ 
+-	info->tsirq = irq;
++		info->tsirq = irq;
++	} else {
++		info->tsirq = -1;
++	}
+ 
+ 	info->dev = &pdev->dev;
+ 
+@@ -895,12 +905,6 @@ static int exynos_adc_probe(struct platform_device *pdev)
+ 	if (info->data->init_hw)
+ 		info->data->init_hw(info);
+ 
+-	/* leave out any TS related code if unreachable */
+-	if (IS_REACHABLE(CONFIG_INPUT)) {
+-		has_ts = of_property_read_bool(pdev->dev.of_node,
+-					       "has-touchscreen") || pdata;
+-	}
+-
+ 	if (pdata)
+ 		info->delay = pdata->delay;
+ 	else
+diff --git a/drivers/iio/adc/xilinx-xadc-core.c b/drivers/iio/adc/xilinx-xadc-core.c
+index f93c34fe58731..30b5a17ce41a7 100644
+--- a/drivers/iio/adc/xilinx-xadc-core.c
++++ b/drivers/iio/adc/xilinx-xadc-core.c
+@@ -19,6 +19,7 @@
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
++#include <linux/overflow.h>
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ #include <linux/sysfs.h>
+@@ -585,15 +586,22 @@ static int xadc_update_scan_mode(struct iio_dev *indio_dev,
+ 	const unsigned long *mask)
+ {
+ 	struct xadc *xadc = iio_priv(indio_dev);
+-	unsigned int n;
++	size_t new_size, n;
++	void *data;
+ 
+ 	n = bitmap_weight(mask, indio_dev->masklength);
+ 
+-	kfree(xadc->data);
+-	xadc->data = kcalloc(n, sizeof(*xadc->data), GFP_KERNEL);
+-	if (!xadc->data)
++	if (check_mul_overflow(n, sizeof(*xadc->data), &new_size))
++		return -ENOMEM;
++
++	data = devm_krealloc(indio_dev->dev.parent, xadc->data,
++			     new_size, GFP_KERNEL);
++	if (!data)
+ 		return -ENOMEM;
+ 
++	memset(data, 0, new_size);
++	xadc->data = data;
++
+ 	return 0;
+ }
+ 
+@@ -705,11 +713,12 @@ static const struct iio_trigger_ops xadc_trigger_ops = {
+ static struct iio_trigger *xadc_alloc_trigger(struct iio_dev *indio_dev,
+ 	const char *name)
+ {
++	struct device *dev = indio_dev->dev.parent;
+ 	struct iio_trigger *trig;
+ 	int ret;
+ 
+-	trig = iio_trigger_alloc("%s%d-%s", indio_dev->name,
+-				indio_dev->id, name);
++	trig = devm_iio_trigger_alloc(dev, "%s%d-%s", indio_dev->name,
++				      indio_dev->id, name);
+ 	if (trig == NULL)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -717,15 +726,11 @@ static struct iio_trigger *xadc_alloc_trigger(struct iio_dev *indio_dev,
+ 	trig->ops = &xadc_trigger_ops;
+ 	iio_trigger_set_drvdata(trig, iio_priv(indio_dev));
+ 
+-	ret = iio_trigger_register(trig);
++	ret = devm_iio_trigger_register(dev, trig);
+ 	if (ret)
+-		goto error_free_trig;
++		return ERR_PTR(ret);
+ 
+ 	return trig;
+-
+-error_free_trig:
+-	iio_trigger_free(trig);
+-	return ERR_PTR(ret);
+ }
+ 
+ static int xadc_power_adc_b(struct xadc *xadc, unsigned int seq_mode)
+@@ -1184,8 +1189,23 @@ static int xadc_parse_dt(struct iio_dev *indio_dev, struct device_node *np,
+ 	return 0;
+ }
+ 
++static void xadc_clk_disable_unprepare(void *data)
++{
++	struct clk *clk = data;
++
++	clk_disable_unprepare(clk);
++}
++
++static void xadc_cancel_delayed_work(void *data)
++{
++	struct delayed_work *work = data;
++
++	cancel_delayed_work_sync(work);
++}
++
+ static int xadc_probe(struct platform_device *pdev)
+ {
++	struct device *dev = &pdev->dev;
+ 	const struct of_device_id *id;
+ 	struct iio_dev *indio_dev;
+ 	unsigned int bipolar_mask;
+@@ -1195,10 +1215,10 @@ static int xadc_probe(struct platform_device *pdev)
+ 	int irq;
+ 	int i;
+ 
+-	if (!pdev->dev.of_node)
++	if (!dev->of_node)
+ 		return -ENODEV;
+ 
+-	id = of_match_node(xadc_of_match_table, pdev->dev.of_node);
++	id = of_match_node(xadc_of_match_table, dev->of_node);
+ 	if (!id)
+ 		return -EINVAL;
+ 
+@@ -1206,7 +1226,7 @@ static int xadc_probe(struct platform_device *pdev)
+ 	if (irq <= 0)
+ 		return -ENXIO;
+ 
+-	indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*xadc));
++	indio_dev = devm_iio_device_alloc(dev, sizeof(*xadc));
+ 	if (!indio_dev)
+ 		return -ENOMEM;
+ 
+@@ -1226,39 +1246,40 @@ static int xadc_probe(struct platform_device *pdev)
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &xadc_info;
+ 
+-	ret = xadc_parse_dt(indio_dev, pdev->dev.of_node, &conf0);
++	ret = xadc_parse_dt(indio_dev, dev->of_node, &conf0);
+ 	if (ret)
+ 		return ret;
+ 
+ 	if (xadc->ops->flags & XADC_FLAGS_BUFFERED) {
+-		ret = iio_triggered_buffer_setup(indio_dev,
+-			&iio_pollfunc_store_time, &xadc_trigger_handler,
+-			&xadc_buffer_ops);
++		ret = devm_iio_triggered_buffer_setup(dev, indio_dev,
++						      &iio_pollfunc_store_time,
++						      &xadc_trigger_handler,
++						      &xadc_buffer_ops);
+ 		if (ret)
+ 			return ret;
+ 
+ 		xadc->convst_trigger = xadc_alloc_trigger(indio_dev, "convst");
+-		if (IS_ERR(xadc->convst_trigger)) {
+-			ret = PTR_ERR(xadc->convst_trigger);
+-			goto err_triggered_buffer_cleanup;
+-		}
++		if (IS_ERR(xadc->convst_trigger))
++			return PTR_ERR(xadc->convst_trigger);
++
+ 		xadc->samplerate_trigger = xadc_alloc_trigger(indio_dev,
+ 			"samplerate");
+-		if (IS_ERR(xadc->samplerate_trigger)) {
+-			ret = PTR_ERR(xadc->samplerate_trigger);
+-			goto err_free_convst_trigger;
+-		}
++		if (IS_ERR(xadc->samplerate_trigger))
++			return PTR_ERR(xadc->samplerate_trigger);
+ 	}
+ 
+-	xadc->clk = devm_clk_get(&pdev->dev, NULL);
+-	if (IS_ERR(xadc->clk)) {
+-		ret = PTR_ERR(xadc->clk);
+-		goto err_free_samplerate_trigger;
+-	}
++	xadc->clk = devm_clk_get(dev, NULL);
++	if (IS_ERR(xadc->clk))
++		return PTR_ERR(xadc->clk);
+ 
+ 	ret = clk_prepare_enable(xadc->clk);
+ 	if (ret)
+-		goto err_free_samplerate_trigger;
++		return ret;
++
++	ret = devm_add_action_or_reset(dev,
++				       xadc_clk_disable_unprepare, xadc->clk);
++	if (ret)
++		return ret;
+ 
+ 	/*
+ 	 * Make sure not to exceed the maximum samplerate since otherwise the
+@@ -1267,22 +1288,28 @@ static int xadc_probe(struct platform_device *pdev)
+ 	if (xadc->ops->flags & XADC_FLAGS_BUFFERED) {
+ 		ret = xadc_read_samplerate(xadc);
+ 		if (ret < 0)
+-			goto err_free_samplerate_trigger;
++			return ret;
++
+ 		if (ret > XADC_MAX_SAMPLERATE) {
+ 			ret = xadc_write_samplerate(xadc, XADC_MAX_SAMPLERATE);
+ 			if (ret < 0)
+-				goto err_free_samplerate_trigger;
++				return ret;
+ 		}
+ 	}
+ 
+-	ret = request_irq(xadc->irq, xadc->ops->interrupt_handler, 0,
+-			dev_name(&pdev->dev), indio_dev);
++	ret = devm_request_irq(dev, xadc->irq, xadc->ops->interrupt_handler, 0,
++			       dev_name(dev), indio_dev);
+ 	if (ret)
+-		goto err_clk_disable_unprepare;
++		return ret;
++
++	ret = devm_add_action_or_reset(dev, xadc_cancel_delayed_work,
++				       &xadc->zynq_unmask_work);
++	if (ret)
++		return ret;
+ 
+ 	ret = xadc->ops->setup(pdev, indio_dev, xadc->irq);
+ 	if (ret)
+-		goto err_free_irq;
++		return ret;
+ 
+ 	for (i = 0; i < 16; i++)
+ 		xadc_read_adc_reg(xadc, XADC_REG_THRESHOLD(i),
+@@ -1290,7 +1317,7 @@ static int xadc_probe(struct platform_device *pdev)
+ 
+ 	ret = xadc_write_adc_reg(xadc, XADC_REG_CONF0, conf0);
+ 	if (ret)
+-		goto err_free_irq;
++		return ret;
+ 
+ 	bipolar_mask = 0;
+ 	for (i = 0; i < indio_dev->num_channels; i++) {
+@@ -1300,85 +1327,21 @@ static int xadc_probe(struct platform_device *pdev)
+ 
+ 	ret = xadc_write_adc_reg(xadc, XADC_REG_INPUT_MODE(0), bipolar_mask);
+ 	if (ret)
+-		goto err_free_irq;
++		return ret;
++
+ 	ret = xadc_write_adc_reg(xadc, XADC_REG_INPUT_MODE(1),
+ 		bipolar_mask >> 16);
+ 	if (ret)
+-		goto err_free_irq;
+-
+-	/* Disable all alarms */
+-	ret = xadc_update_adc_reg(xadc, XADC_REG_CONF1, XADC_CONF1_ALARM_MASK,
+-				  XADC_CONF1_ALARM_MASK);
+-	if (ret)
+-		goto err_free_irq;
+-
+-	/* Set thresholds to min/max */
+-	for (i = 0; i < 16; i++) {
+-		/*
+-		 * Set max voltage threshold and both temperature thresholds to
+-		 * 0xffff, min voltage threshold to 0.
+-		 */
+-		if (i % 8 < 4 || i == 7)
+-			xadc->threshold[i] = 0xffff;
+-		else
+-			xadc->threshold[i] = 0;
+-		ret = xadc_write_adc_reg(xadc, XADC_REG_THRESHOLD(i),
+-			xadc->threshold[i]);
+-		if (ret)
+-			goto err_free_irq;
+-	}
++		return ret;
+ 
+ 	/* Go to non-buffered mode */
+ 	xadc_postdisable(indio_dev);
+ 
+-	ret = iio_device_register(indio_dev);
+-	if (ret)
+-		goto err_free_irq;
+-
+-	platform_set_drvdata(pdev, indio_dev);
+-
+-	return 0;
+-
+-err_free_irq:
+-	free_irq(xadc->irq, indio_dev);
+-	cancel_delayed_work_sync(&xadc->zynq_unmask_work);
+-err_clk_disable_unprepare:
+-	clk_disable_unprepare(xadc->clk);
+-err_free_samplerate_trigger:
+-	if (xadc->ops->flags & XADC_FLAGS_BUFFERED)
+-		iio_trigger_free(xadc->samplerate_trigger);
+-err_free_convst_trigger:
+-	if (xadc->ops->flags & XADC_FLAGS_BUFFERED)
+-		iio_trigger_free(xadc->convst_trigger);
+-err_triggered_buffer_cleanup:
+-	if (xadc->ops->flags & XADC_FLAGS_BUFFERED)
+-		iio_triggered_buffer_cleanup(indio_dev);
+-
+-	return ret;
+-}
+-
+-static int xadc_remove(struct platform_device *pdev)
+-{
+-	struct iio_dev *indio_dev = platform_get_drvdata(pdev);
+-	struct xadc *xadc = iio_priv(indio_dev);
+-
+-	iio_device_unregister(indio_dev);
+-	if (xadc->ops->flags & XADC_FLAGS_BUFFERED) {
+-		iio_trigger_free(xadc->samplerate_trigger);
+-		iio_trigger_free(xadc->convst_trigger);
+-		iio_triggered_buffer_cleanup(indio_dev);
+-	}
+-	free_irq(xadc->irq, indio_dev);
+-	cancel_delayed_work_sync(&xadc->zynq_unmask_work);
+-	clk_disable_unprepare(xadc->clk);
+-	kfree(xadc->data);
+-
+-	return 0;
++	return devm_iio_device_register(dev, indio_dev);
+ }
+ 
+ static struct platform_driver xadc_driver = {
+ 	.probe = xadc_probe,
+-	.remove = xadc_remove,
+ 	.driver = {
+ 		.name = "xadc",
+ 		.of_match_table = xadc_of_match_table,
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index afa5d2623c410..e2c130832c159 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -1749,6 +1749,7 @@ static int synaptics_create_intertouch(struct psmouse *psmouse,
+ 		psmouse_matches_pnp_id(psmouse, topbuttonpad_pnp_ids) &&
+ 		!SYN_CAP_EXT_BUTTONS_STICK(info->ext_cap_10);
+ 	const struct rmi_device_platform_data pdata = {
++		.reset_delay_ms = 30,
+ 		.sensor_pdata = {
+ 			.sensor_type = rmi_sensor_touchpad,
+ 			.axis_align.flip_y = true,
+diff --git a/drivers/input/rmi4/rmi_smbus.c b/drivers/input/rmi4/rmi_smbus.c
+index 2407ea43de59b..f38bf9a5f599d 100644
+--- a/drivers/input/rmi4/rmi_smbus.c
++++ b/drivers/input/rmi4/rmi_smbus.c
+@@ -235,12 +235,29 @@ static void rmi_smb_clear_state(struct rmi_smb_xport *rmi_smb)
+ 
+ static int rmi_smb_enable_smbus_mode(struct rmi_smb_xport *rmi_smb)
+ {
+-	int retval;
++	struct i2c_client *client = rmi_smb->client;
++	int smbus_version;
++
++	/*
++	 * psmouse driver resets the controller, we only need to wait
++	 * to give the firmware chance to fully reinitialize.
++	 */
++	if (rmi_smb->xport.pdata.reset_delay_ms)
++		msleep(rmi_smb->xport.pdata.reset_delay_ms);
+ 
+ 	/* we need to get the smbus version to activate the touchpad */
+-	retval = rmi_smb_get_version(rmi_smb);
+-	if (retval < 0)
+-		return retval;
++	smbus_version = rmi_smb_get_version(rmi_smb);
++	if (smbus_version < 0)
++		return smbus_version;
++
++	rmi_dbg(RMI_DEBUG_XPORT, &client->dev, "Smbus version is %d",
++		smbus_version);
++
++	if (smbus_version != 2 && smbus_version != 3) {
++		dev_err(&client->dev, "Unrecognized SMB version %d\n",
++				smbus_version);
++		return -ENODEV;
++	}
+ 
+ 	return 0;
+ }
+@@ -253,11 +270,10 @@ static int rmi_smb_reset(struct rmi_transport_dev *xport, u16 reset_addr)
+ 	rmi_smb_clear_state(rmi_smb);
+ 
+ 	/*
+-	 * we do not call the actual reset command, it has to be handled in
+-	 * PS/2 or there will be races between PS/2 and SMBus.
+-	 * PS/2 should ensure that a psmouse_reset is called before
+-	 * intializing the device and after it has been removed to be in a known
+-	 * state.
++	 * We do not call the actual reset command, it has to be handled in
++	 * PS/2 or there will be races between PS/2 and SMBus. PS/2 should
++	 * ensure that a psmouse_reset is called before initializing the
++	 * device and after it has been removed to be in a known state.
+ 	 */
+ 	return rmi_smb_enable_smbus_mode(rmi_smb);
+ }
+@@ -273,7 +289,6 @@ static int rmi_smb_probe(struct i2c_client *client,
+ {
+ 	struct rmi_device_platform_data *pdata = dev_get_platdata(&client->dev);
+ 	struct rmi_smb_xport *rmi_smb;
+-	int smbus_version;
+ 	int error;
+ 
+ 	if (!pdata) {
+@@ -312,18 +327,9 @@ static int rmi_smb_probe(struct i2c_client *client,
+ 	rmi_smb->xport.proto_name = "smb";
+ 	rmi_smb->xport.ops = &rmi_smb_ops;
+ 
+-	smbus_version = rmi_smb_get_version(rmi_smb);
+-	if (smbus_version < 0)
+-		return smbus_version;
+-
+-	rmi_dbg(RMI_DEBUG_XPORT, &client->dev, "Smbus version is %d",
+-		smbus_version);
+-
+-	if (smbus_version != 2 && smbus_version != 3) {
+-		dev_err(&client->dev, "Unrecognized SMB version %d\n",
+-				smbus_version);
+-		return -ENODEV;
+-	}
++	error = rmi_smb_enable_smbus_mode(rmi_smb);
++	if (error)
++		return error;
+ 
+ 	i2c_set_clientdata(client, rmi_smb);
+ 
+diff --git a/drivers/irqchip/irq-stm32-exti.c b/drivers/irqchip/irq-stm32-exti.c
+index 8662d7b7b2625..cec9080cccad0 100644
+--- a/drivers/irqchip/irq-stm32-exti.c
++++ b/drivers/irqchip/irq-stm32-exti.c
+@@ -403,6 +403,7 @@ static const struct irq_domain_ops irq_exti_domain_ops = {
+ 	.map	= irq_map_generic_chip,
+ 	.alloc  = stm32_exti_alloc,
+ 	.free	= stm32_exti_free,
++	.xlate	= irq_domain_xlate_twocell,
+ };
+ 
+ static void stm32_irq_ack(struct irq_data *d)
+diff --git a/drivers/mcb/mcb-lpc.c b/drivers/mcb/mcb-lpc.c
+index 506676754538b..d513d61f4ba84 100644
+--- a/drivers/mcb/mcb-lpc.c
++++ b/drivers/mcb/mcb-lpc.c
+@@ -23,7 +23,7 @@ static int mcb_lpc_probe(struct platform_device *pdev)
+ {
+ 	struct resource *res;
+ 	struct priv *priv;
+-	int ret = 0;
++	int ret = 0, table_size;
+ 
+ 	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+@@ -58,16 +58,43 @@ static int mcb_lpc_probe(struct platform_device *pdev)
+ 
+ 	ret = chameleon_parse_cells(priv->bus, priv->mem->start, priv->base);
+ 	if (ret < 0) {
+-		mcb_release_bus(priv->bus);
+-		return ret;
++		goto out_mcb_bus;
+ 	}
+ 
+-	dev_dbg(&pdev->dev, "Found %d cells\n", ret);
++	table_size = ret;
++
++	if (table_size < CHAM_HEADER_SIZE) {
++		/* Release the previous resources */
++		devm_iounmap(&pdev->dev, priv->base);
++		devm_release_mem_region(&pdev->dev, priv->mem->start, resource_size(priv->mem));
++
++		/* Then, allocate it again with the actual chameleon table size */
++		res = devm_request_mem_region(&pdev->dev, priv->mem->start,
++					      table_size,
++					      KBUILD_MODNAME);
++		if (!res) {
++			dev_err(&pdev->dev, "Failed to request PCI memory\n");
++			ret = -EBUSY;
++			goto out_mcb_bus;
++		}
++
++		priv->base = devm_ioremap(&pdev->dev, priv->mem->start, table_size);
++		if (!priv->base) {
++			dev_err(&pdev->dev, "Cannot ioremap\n");
++			ret = -ENOMEM;
++			goto out_mcb_bus;
++		}
++
++		platform_set_drvdata(pdev, priv);
++	}
+ 
+ 	mcb_bus_add_devices(priv->bus);
+ 
+ 	return 0;
+ 
++out_mcb_bus:
++	mcb_release_bus(priv->bus);
++	return ret;
+ }
+ 
+ static int mcb_lpc_remove(struct platform_device *pdev)
+diff --git a/drivers/mcb/mcb-parse.c b/drivers/mcb/mcb-parse.c
+index c41cbacc75a2c..656b6b71c7682 100644
+--- a/drivers/mcb/mcb-parse.c
++++ b/drivers/mcb/mcb-parse.c
+@@ -128,7 +128,7 @@ static void chameleon_parse_bar(void __iomem *base,
+ 	}
+ }
+ 
+-static int chameleon_get_bar(char __iomem **base, phys_addr_t mapbase,
++static int chameleon_get_bar(void __iomem **base, phys_addr_t mapbase,
+ 			     struct chameleon_bar **cb)
+ {
+ 	struct chameleon_bar *c;
+@@ -177,12 +177,13 @@ int chameleon_parse_cells(struct mcb_bus *bus, phys_addr_t mapbase,
+ {
+ 	struct chameleon_fpga_header *header;
+ 	struct chameleon_bar *cb;
+-	char __iomem *p = base;
++	void __iomem *p = base;
+ 	int num_cells = 0;
+ 	uint32_t dtype;
+ 	int bar_count;
+ 	int ret;
+ 	u32 hsize;
++	u32 table_size;
+ 
+ 	hsize = sizeof(struct chameleon_fpga_header);
+ 
+@@ -237,12 +238,16 @@ int chameleon_parse_cells(struct mcb_bus *bus, phys_addr_t mapbase,
+ 		num_cells++;
+ 	}
+ 
+-	if (num_cells == 0)
+-		num_cells = -EINVAL;
++	if (num_cells == 0) {
++		ret = -EINVAL;
++		goto free_bar;
++	}
+ 
++	table_size = p - base;
++	pr_debug("%d cell(s) found. Chameleon table size: 0x%04x bytes\n", num_cells, table_size);
+ 	kfree(cb);
+ 	kfree(header);
+-	return num_cells;
++	return table_size;
+ 
+ free_bar:
+ 	kfree(cb);
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index eed047e971e70..9822efdc6cc23 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -990,11 +990,6 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl,  u32 kernel,
+ 		err = wait_for_completion_interruptible(&ctx->work);
+ 	}
+ 
+-	if (err)
+-		goto bail;
+-
+-	/* Check the response from remote dsp */
+-	err = ctx->retval;
+ 	if (err)
+ 		goto bail;
+ 
+@@ -1007,6 +1002,11 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl,  u32 kernel,
+ 			goto bail;
+ 	}
+ 
++	/* Check the response from remote dsp */
++	err = ctx->retval;
++	if (err)
++		goto bail;
++
+ bail:
+ 	if (err != -ERESTARTSYS && err != -ETIMEDOUT) {
+ 		/* We are done with this compute context */
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index be4c2a848b520..24e524a1b9274 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -571,7 +571,7 @@ static void renesas_sdhi_reset(struct tmio_mmc_host *host)
+ 
+ 	if (host->pdata->flags & TMIO_MMC_MIN_RCAR2)
+ 		sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK,
+-					     TMIO_MASK_INIT_RCAR2);
++					     TMIO_MASK_ALL_RCAR2);
+ }
+ 
+ #define SH_MOBILE_SDHI_MIN_TAP_ROW 3
+@@ -1012,6 +1012,7 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		host->ops.start_signal_voltage_switch =
+ 			renesas_sdhi_start_signal_voltage_switch;
+ 		host->sdcard_irq_setbit_mask = TMIO_STAT_ALWAYS_SET_27;
++		host->sdcard_irq_mask_all = TMIO_MASK_ALL_RCAR2;
+ 		host->reset = renesas_sdhi_reset;
+ 	} else {
+ 		host->sdcard_irq_mask_all = TMIO_MASK_ALL;
+diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h
+index d6ed5e1f8386e..330a17267f7ed 100644
+--- a/drivers/mmc/host/tmio_mmc.h
++++ b/drivers/mmc/host/tmio_mmc.h
+@@ -97,8 +97,8 @@
+ 
+ /* Define some IRQ masks */
+ /* This is the mask used at reset by the chip */
+-#define TMIO_MASK_INIT_RCAR2	0x8b7f031d /* Initial value for R-Car Gen2+ */
+ #define TMIO_MASK_ALL           0x837f031d
++#define TMIO_MASK_ALL_RCAR2	0x8b7f031d
+ #define TMIO_MASK_READOP  (TMIO_STAT_RXRDY | TMIO_STAT_DATAEND)
+ #define TMIO_MASK_WRITEOP (TMIO_STAT_TXRQ | TMIO_STAT_DATAEND)
+ #define TMIO_MASK_CMD     (TMIO_STAT_CMDRESPEND | TMIO_STAT_CMDTIMEOUT | \
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index 964ea3491b80b..7e8a8ea6d8f7d 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -3846,6 +3846,8 @@ int t4_load_phy_fw(struct adapter *adap, int win,
+ 		 FW_PARAMS_PARAM_Z_V(FW_PARAMS_PARAM_DEV_PHYFW_DOWNLOAD));
+ 	ret = t4_set_params_timeout(adap, adap->mbox, adap->pf, 0, 1,
+ 				    &param, &val, 30000);
++	if (ret)
++		return ret;
+ 
+ 	/* If we have version number support, then check to see that the new
+ 	 * firmware got loaded properly.
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 43be33d87e391..88d8f17cefd8e 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -2663,7 +2663,7 @@ tx_only:
+ 		return budget;
+ 	}
+ 
+-	if (vsi->back->flags & I40E_TXR_FLAGS_WB_ON_ITR)
++	if (q_vector->tx.ring[0].flags & I40E_TXR_FLAGS_WB_ON_ITR)
+ 		q_vector->arm_wb_state = false;
+ 
+ 	/* Exit the polling mode, but don't re-enable interrupts if stack might
+diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+index d9de3b8115431..2d1d9090f2cbf 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
++++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+@@ -2987,11 +2987,15 @@ static int igb_add_ethtool_nfc_entry(struct igb_adapter *adapter,
+ 	if (err)
+ 		goto err_out_w_lock;
+ 
+-	igb_update_ethtool_nfc_entry(adapter, input, input->sw_idx);
++	err = igb_update_ethtool_nfc_entry(adapter, input, input->sw_idx);
++	if (err)
++		goto err_out_input_filter;
+ 
+ 	spin_unlock(&adapter->nfc_lock);
+ 	return 0;
+ 
++err_out_input_filter:
++	igb_erase_filter(adapter, input);
+ err_out_w_lock:
+ 	spin_unlock(&adapter->nfc_lock);
+ err_out:
+diff --git a/drivers/net/ethernet/intel/igc/igc_ethtool.c b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+index d28ac3a025ab1..9b01912c6e171 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ethtool.c
++++ b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+@@ -1775,7 +1775,7 @@ igc_ethtool_set_link_ksettings(struct net_device *netdev,
+ 	struct igc_adapter *adapter = netdev_priv(netdev);
+ 	struct net_device *dev = adapter->netdev;
+ 	struct igc_hw *hw = &adapter->hw;
+-	u32 advertising;
++	u16 advertised = 0;
+ 
+ 	/* When adapter in resetting mode, autoneg/speed/duplex
+ 	 * cannot be changed
+@@ -1800,18 +1800,33 @@ igc_ethtool_set_link_ksettings(struct net_device *netdev,
+ 	while (test_and_set_bit(__IGC_RESETTING, &adapter->state))
+ 		usleep_range(1000, 2000);
+ 
+-	ethtool_convert_link_mode_to_legacy_u32(&advertising,
+-						cmd->link_modes.advertising);
+-	/* Converting to legacy u32 drops ETHTOOL_LINK_MODE_2500baseT_Full_BIT.
+-	 * We have to check this and convert it to ADVERTISE_2500_FULL
+-	 * (aka ETHTOOL_LINK_MODE_2500baseX_Full_BIT) explicitly.
+-	 */
+-	if (ethtool_link_ksettings_test_link_mode(cmd, advertising, 2500baseT_Full))
+-		advertising |= ADVERTISE_2500_FULL;
++	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
++						  2500baseT_Full))
++		advertised |= ADVERTISE_2500_FULL;
++
++	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
++						  1000baseT_Full))
++		advertised |= ADVERTISE_1000_FULL;
++
++	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
++						  100baseT_Full))
++		advertised |= ADVERTISE_100_FULL;
++
++	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
++						  100baseT_Half))
++		advertised |= ADVERTISE_100_HALF;
++
++	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
++						  10baseT_Full))
++		advertised |= ADVERTISE_10_FULL;
++
++	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
++						  10baseT_Half))
++		advertised |= ADVERTISE_10_HALF;
+ 
+ 	if (cmd->base.autoneg == AUTONEG_ENABLE) {
+ 		hw->mac.autoneg = 1;
+-		hw->phy.autoneg_advertised = advertising;
++		hw->phy.autoneg_advertised = advertised;
+ 		if (adapter->fc_autoneg)
+ 			hw->fc.requested_mode = igc_fc_default;
+ 	} else {
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index c025dadcce289..37e34d8f7946e 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4470,7 +4470,7 @@ static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp,
+ 		struct sk_buff *skb = tp->tx_skb[entry].skb;
+ 		u32 status;
+ 
+-		status = le32_to_cpu(tp->TxDescArray[entry].opts1);
++		status = le32_to_cpu(READ_ONCE(tp->TxDescArray[entry].opts1));
+ 		if (status & DescOwn)
+ 			break;
+ 
+@@ -4544,7 +4544,7 @@ static int rtl_rx(struct net_device *dev, struct rtl8169_private *tp, u32 budget
+ 		dma_addr_t addr;
+ 		u32 status;
+ 
+-		status = le32_to_cpu(desc->opts1);
++		status = le32_to_cpu(READ_ONCE(desc->opts1));
+ 		if (status & DescOwn)
+ 			break;
+ 
+diff --git a/drivers/net/ethernet/toshiba/ps3_gelic_wireless.c b/drivers/net/ethernet/toshiba/ps3_gelic_wireless.c
+index dc14a66583ff3..44488c153ea25 100644
+--- a/drivers/net/ethernet/toshiba/ps3_gelic_wireless.c
++++ b/drivers/net/ethernet/toshiba/ps3_gelic_wireless.c
+@@ -1217,7 +1217,7 @@ static int gelic_wl_set_encodeext(struct net_device *netdev,
+ 		key_index = wl->current_key;
+ 
+ 	if (!enc->length && (ext->ext_flags & IW_ENCODE_EXT_SET_TX_KEY)) {
+-		/* reques to change default key index */
++		/* request to change default key index */
+ 		pr_debug("%s: request to change default key to %d\n",
+ 			 __func__, key_index);
+ 		wl->current_key = key_index;
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 05ea3a18552b6..ed247cba22916 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -538,8 +538,9 @@ static int gtp_build_skb_ip4(struct sk_buff *skb, struct net_device *dev,
+ 
+ 	rt->dst.ops->update_pmtu(&rt->dst, NULL, skb, mtu, false);
+ 
+-	if (!skb_is_gso(skb) && (iph->frag_off & htons(IP_DF)) &&
+-	    mtu < ntohs(iph->tot_len)) {
++	if (iph->frag_off & htons(IP_DF) &&
++	    ((!skb_is_gso(skb) && skb->len > mtu) ||
++	     (skb_is_gso(skb) && !skb_gso_validate_network_len(skb, mtu)))) {
+ 		netdev_dbg(dev, "packet too big, fragmentation needed\n");
+ 		icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+ 			      htonl(mtu));
+diff --git a/drivers/net/ieee802154/adf7242.c b/drivers/net/ieee802154/adf7242.c
+index 07adbeec19787..14cf8b0dfad90 100644
+--- a/drivers/net/ieee802154/adf7242.c
++++ b/drivers/net/ieee802154/adf7242.c
+@@ -1162,9 +1162,10 @@ static int adf7242_stats_show(struct seq_file *file, void *offset)
+ 
+ static void adf7242_debugfs_init(struct adf7242_local *lp)
+ {
+-	char debugfs_dir_name[DNAME_INLINE_LEN + 1] = "adf7242-";
++	char debugfs_dir_name[DNAME_INLINE_LEN + 1];
+ 
+-	strncat(debugfs_dir_name, dev_name(&lp->spi->dev), DNAME_INLINE_LEN);
++	snprintf(debugfs_dir_name, sizeof(debugfs_dir_name),
++		 "adf7242-%s", dev_name(&lp->spi->dev));
+ 
+ 	lp->debugfs_root = debugfs_create_dir(debugfs_dir_name, NULL);
+ 
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index cc7c86debfa27..0d6f10c9bb139 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -1042,7 +1042,7 @@ int get_registers(struct r8152 *tp, u16 value, u16 index, u16 size, void *data)
+ 
+ 	ret = usb_control_msg(tp->udev, usb_rcvctrlpipe(tp->udev, 0),
+ 			      RTL8152_REQ_GET_REGS, RTL8152_REQT_READ,
+-			      value, index, tmp, size, 500);
++			      value, index, tmp, size, USB_CTRL_GET_TIMEOUT);
+ 	if (ret < 0)
+ 		memset(data, 0xff, size);
+ 	else
+@@ -1065,7 +1065,7 @@ int set_registers(struct r8152 *tp, u16 value, u16 index, u16 size, void *data)
+ 
+ 	ret = usb_control_msg(tp->udev, usb_sndctrlpipe(tp->udev, 0),
+ 			      RTL8152_REQ_SET_REGS, RTL8152_REQT_WRITE,
+-			      value, index, tmp, size, 500);
++			      value, index, tmp, size, USB_CTRL_SET_TIMEOUT);
+ 
+ 	kfree(tmp);
+ 
+@@ -6615,7 +6615,8 @@ static u8 rtl_get_version(struct usb_interface *intf)
+ 
+ 	ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ 			      RTL8152_REQ_GET_REGS, RTL8152_REQT_READ,
+-			      PLA_TCR0, MCU_TYPE_PLA, tmp, sizeof(*tmp), 500);
++			      PLA_TCR0, MCU_TYPE_PLA, tmp, sizeof(*tmp),
++			      USB_CTRL_GET_TIMEOUT);
+ 	if (ret > 0)
+ 		ocp_data = (__le32_to_cpu(*tmp) >> 16) & VERSION_MASK;
+ 
+@@ -6824,6 +6825,10 @@ static int rtl8152_probe(struct usb_interface *intf,
+ 
+ out1:
+ 	tasklet_kill(&tp->tx_tl);
++	cancel_delayed_work_sync(&tp->hw_phy_work);
++	if (tp->rtl_ops.unload)
++		tp->rtl_ops.unload(tp);
++	rtl8152_release_firmware(tp);
+ 	usb_set_intfdata(intf, NULL);
+ out:
+ 	free_netdev(netdev);
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index 9297f2078fd2c..569be01700aa1 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -86,7 +86,9 @@ static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
+ 	ret = fn(dev, USB_VENDOR_REQUEST_READ_REGISTER, USB_DIR_IN
+ 		 | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 		 0, index, &buf, 4);
+-	if (ret < 0) {
++	if (ret < 4) {
++		ret = ret < 0 ? ret : -ENODATA;
++
+ 		if (ret != -ENODEV)
+ 			netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
+ 				    index, ret);
+diff --git a/drivers/nvmem/imx-ocotp.c b/drivers/nvmem/imx-ocotp.c
+index 7a1ebd6fd08b2..fddb5459e4279 100644
+--- a/drivers/nvmem/imx-ocotp.c
++++ b/drivers/nvmem/imx-ocotp.c
+@@ -467,7 +467,7 @@ static const struct ocotp_params imx6sl_params = {
+ };
+ 
+ static const struct ocotp_params imx6sll_params = {
+-	.nregs = 128,
++	.nregs = 80,
+ 	.bank_address_words = 0,
+ 	.set_timing = imx_ocotp_set_imx6_timing,
+ 	.ctrl = IMX_OCOTP_BM_CTRL_DEFAULT,
+@@ -481,14 +481,14 @@ static const struct ocotp_params imx6sx_params = {
+ };
+ 
+ static const struct ocotp_params imx6ul_params = {
+-	.nregs = 128,
++	.nregs = 144,
+ 	.bank_address_words = 0,
+ 	.set_timing = imx_ocotp_set_imx6_timing,
+ 	.ctrl = IMX_OCOTP_BM_CTRL_DEFAULT,
+ };
+ 
+ static const struct ocotp_params imx6ull_params = {
+-	.nregs = 64,
++	.nregs = 80,
+ 	.bank_address_words = 0,
+ 	.set_timing = imx_ocotp_set_imx6_timing,
+ 	.ctrl = IMX_OCOTP_BM_CTRL_DEFAULT,
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index c0d1134811915..158ff4331a141 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -597,7 +597,7 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI,	PCI_DEVICE_ID_ATI_RS100,   quirk_ati_
+ /*
+  * In the AMD NL platform, this device ([1022:7912]) has a class code of
+  * PCI_CLASS_SERIAL_USB_XHCI (0x0c0330), which means the xhci driver will
+- * claim it.
++ * claim it. The same applies on the VanGogh platform device ([1022:163a]).
+  *
+  * But the dwc3 driver is a more specific driver for this device, and we'd
+  * prefer to use it instead of xhci. To prevent xhci from claiming the
+@@ -605,7 +605,7 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI,	PCI_DEVICE_ID_ATI_RS100,   quirk_ati_
+  * defines as "USB device (not host controller)". The dwc3 driver can then
+  * claim it based on its Vendor and Device ID.
+  */
+-static void quirk_amd_nl_class(struct pci_dev *pdev)
++static void quirk_amd_dwc_class(struct pci_dev *pdev)
+ {
+ 	u32 class = pdev->class;
+ 
+@@ -615,7 +615,9 @@ static void quirk_amd_nl_class(struct pci_dev *pdev)
+ 		 class, pdev->class);
+ }
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_NL_USB,
+-		quirk_amd_nl_class);
++		quirk_amd_dwc_class);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VANGOGH_USB,
++		quirk_amd_dwc_class);
+ 
+ /*
+  * Synopsys USB 3.x host HAPS platform has a class code of
+diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
+index 194f3205e5597..767f4406e55f1 100644
+--- a/drivers/platform/mellanox/mlxbf-tmfifo.c
++++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
+@@ -588,24 +588,25 @@ static void mlxbf_tmfifo_rxtx_word(struct mlxbf_tmfifo_vring *vring,
+ 
+ 	if (vring->cur_len + sizeof(u64) <= len) {
+ 		/* The whole word. */
+-		if (!IS_VRING_DROP(vring)) {
+-			if (is_rx)
++		if (is_rx) {
++			if (!IS_VRING_DROP(vring))
+ 				memcpy(addr + vring->cur_len, &data,
+ 				       sizeof(u64));
+-			else
+-				memcpy(&data, addr + vring->cur_len,
+-				       sizeof(u64));
++		} else {
++			memcpy(&data, addr + vring->cur_len,
++			       sizeof(u64));
+ 		}
+ 		vring->cur_len += sizeof(u64);
+ 	} else {
+ 		/* Leftover bytes. */
+-		if (!IS_VRING_DROP(vring)) {
+-			if (is_rx)
++		if (is_rx) {
++			if (!IS_VRING_DROP(vring))
+ 				memcpy(addr + vring->cur_len, &data,
+ 				       len - vring->cur_len);
+-			else
+-				memcpy(&data, addr + vring->cur_len,
+-				       len - vring->cur_len);
++		} else {
++			data = 0;
++			memcpy(&data, addr + vring->cur_len,
++			       len - vring->cur_len);
+ 		}
+ 		vring->cur_len = len;
+ 	}
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index e776d1bfc9767..28b6ae0e1a2fd 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -1379,6 +1379,7 @@ static void qcom_glink_rpdev_release(struct device *dev)
+ 	struct glink_channel *channel = to_glink_channel(rpdev->ept);
+ 
+ 	channel->rpdev = NULL;
++	kfree(rpdev->driver_override);
+ 	kfree(rpdev);
+ }
+ 
+@@ -1607,6 +1608,7 @@ static void qcom_glink_device_release(struct device *dev)
+ 
+ 	/* Release qcom_glink_alloc_channel() reference */
+ 	kref_put(&channel->refcount, qcom_glink_channel_release);
++	kfree(rpdev->driver_override);
+ 	kfree(rpdev);
+ }
+ 
+diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
+index 028ca5961bc2a..fd3d7b3fbbd1f 100644
+--- a/drivers/rpmsg/rpmsg_core.c
++++ b/drivers/rpmsg/rpmsg_core.c
+@@ -332,7 +332,8 @@ field##_store(struct device *dev, struct device_attribute *attr,	\
+ 	      const char *buf, size_t sz)				\
+ {									\
+ 	struct rpmsg_device *rpdev = to_rpmsg_device(dev);		\
+-	char *new, *old;						\
++	const char *old;						\
++	char *new;							\
+ 									\
+ 	new = kstrndup(buf, sz, GFP_KERNEL);				\
+ 	if (!new)							\
+@@ -525,24 +526,52 @@ static struct bus_type rpmsg_bus = {
+ 	.remove		= rpmsg_dev_remove,
+ };
+ 
+-int rpmsg_register_device(struct rpmsg_device *rpdev)
++/*
++ * A helper for registering rpmsg device with driver override and name.
++ * Drivers should not be using it, but instead rpmsg_register_device().
++ */
++int rpmsg_register_device_override(struct rpmsg_device *rpdev,
++				   const char *driver_override)
+ {
+ 	struct device *dev = &rpdev->dev;
+ 	int ret;
+ 
++	if (driver_override)
++		strcpy(rpdev->id.name, driver_override);
++
+ 	dev_set_name(&rpdev->dev, "%s.%s.%d.%d", dev_name(dev->parent),
+ 		     rpdev->id.name, rpdev->src, rpdev->dst);
+ 
+ 	rpdev->dev.bus = &rpmsg_bus;
+ 
+-	ret = device_register(&rpdev->dev);
++	device_initialize(dev);
++	if (driver_override) {
++		ret = driver_set_override(dev, &rpdev->driver_override,
++					  driver_override,
++					  strlen(driver_override));
++		if (ret) {
++			dev_err(dev, "device_set_override failed: %d\n", ret);
++			put_device(dev);
++			return ret;
++		}
++	}
++
++	ret = device_add(dev);
+ 	if (ret) {
+-		dev_err(dev, "device_register failed: %d\n", ret);
++		dev_err(dev, "device_add failed: %d\n", ret);
++		kfree(rpdev->driver_override);
++		rpdev->driver_override = NULL;
+ 		put_device(&rpdev->dev);
+ 	}
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL(rpmsg_register_device_override);
++
++int rpmsg_register_device(struct rpmsg_device *rpdev)
++{
++	return rpmsg_register_device_override(rpdev, NULL);
++}
+ EXPORT_SYMBOL(rpmsg_register_device);
+ 
+ /*
+diff --git a/drivers/rpmsg/rpmsg_internal.h b/drivers/rpmsg/rpmsg_internal.h
+index 3fc83cd50e98f..f305279e2e24c 100644
+--- a/drivers/rpmsg/rpmsg_internal.h
++++ b/drivers/rpmsg/rpmsg_internal.h
+@@ -84,10 +84,7 @@ struct device *rpmsg_find_device(struct device *parent,
+  */
+ static inline int rpmsg_chrdev_register_device(struct rpmsg_device *rpdev)
+ {
+-	strcpy(rpdev->id.name, "rpmsg_chrdev");
+-	rpdev->driver_override = "rpmsg_chrdev";
+-
+-	return rpmsg_register_device(rpdev);
++	return rpmsg_register_device_override(rpdev, "rpmsg_ctrl");
+ }
+ 
+ #endif
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index c3a5978b0efac..e797f6e3982cf 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -11624,8 +11624,10 @@ _mpt3sas_init(void)
+ 	mpt3sas_ctl_init(hbas_to_enumerate);
+ 
+ 	error = pci_register_driver(&mpt3sas_driver);
+-	if (error)
++	if (error) {
++		mpt3sas_ctl_exit(hbas_to_enumerate);
+ 		scsih_exit();
++	}
+ 
+ 	return error;
+ }
+diff --git a/drivers/spi/spi-npcm-fiu.c b/drivers/spi/spi-npcm-fiu.c
+index b62471ab6d7f2..1edaf22e265bf 100644
+--- a/drivers/spi/spi-npcm-fiu.c
++++ b/drivers/spi/spi-npcm-fiu.c
+@@ -334,8 +334,9 @@ static int npcm_fiu_uma_read(struct spi_mem *mem,
+ 		uma_cfg |= ilog2(op->cmd.buswidth);
+ 		uma_cfg |= ilog2(op->addr.buswidth)
+ 			<< NPCM_FIU_UMA_CFG_ADBPCK_SHIFT;
+-		uma_cfg |= ilog2(op->dummy.buswidth)
+-			<< NPCM_FIU_UMA_CFG_DBPCK_SHIFT;
++		if (op->dummy.nbytes)
++			uma_cfg |= ilog2(op->dummy.buswidth)
++				<< NPCM_FIU_UMA_CFG_DBPCK_SHIFT;
+ 		uma_cfg |= ilog2(op->data.buswidth)
+ 			<< NPCM_FIU_UMA_CFG_RDBPCK_SHIFT;
+ 		uma_cfg |= op->dummy.nbytes << NPCM_FIU_UMA_CFG_DBSIZ_SHIFT;
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index fd857d4343266..89b14f5541fa1 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -5132,6 +5132,12 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		0, 0,
+ 		pbn_b1_bt_1_115200 },
+ 
++	/*
++	 * IntaShield IS-100
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0D60,
++		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
++		pbn_b2_1_115200 },
+ 	/*
+ 	 * IntaShield IS-200
+ 	 */
+@@ -5159,10 +5165,14 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		PCI_ANY_ID, PCI_ANY_ID,
+ 		0, 0,
+ 		pbn_b2_1_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0AA2,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_1_115200 },
+ 	/*
+-	 * Brainboxes UC-257
++	 * Brainboxes UC-253/UC-734
+ 	 */
+-	{	PCI_VENDOR_ID_INTASHIELD, 0x0861,
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0CA1,
+ 		PCI_ANY_ID, PCI_ANY_ID,
+ 		0, 0,
+ 		pbn_b2_2_115200 },
+@@ -5177,6 +5187,66 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		PCI_ANY_ID, PCI_ANY_ID,
+ 		PCI_CLASS_COMMUNICATION_MULTISERIAL << 8, 0xffff00,
+ 		pbn_b2_4_115200 },
++	/*
++	 * Brainboxes UP-189
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0AC1,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0AC2,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0AC3,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UP-200
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0B21,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0B22,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0B23,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UP-869
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0C01,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0C02,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0C03,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UP-880
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0C21,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0C22,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0C23,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
+ 	/*
+ 	 * Brainboxes UC-268
+ 	 */
+@@ -5198,6 +5268,14 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		PCI_ANY_ID, PCI_ANY_ID,
+ 		0, 0,
+ 		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x08E2,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x08E3,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
+ 	/*
+ 	 * Brainboxes UC-310
+ 	 */
+@@ -5208,6 +5286,14 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 	/*
+ 	 * Brainboxes UC-313
+ 	 */
++	{       PCI_VENDOR_ID_INTASHIELD, 0x08A1,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{       PCI_VENDOR_ID_INTASHIELD, 0x08A2,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
+ 	{       PCI_VENDOR_ID_INTASHIELD, 0x08A3,
+ 		PCI_ANY_ID, PCI_ANY_ID,
+ 		0, 0,
+@@ -5222,6 +5308,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 	/*
+ 	 * Brainboxes UC-346
+ 	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0B01,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_4_115200 },
+ 	{	PCI_VENDOR_ID_INTASHIELD, 0x0B02,
+ 		PCI_ANY_ID, PCI_ANY_ID,
+ 		0, 0,
+@@ -5233,6 +5323,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		PCI_ANY_ID, PCI_ANY_ID,
+ 		0, 0,
+ 		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0A82,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
+ 	{	PCI_VENDOR_ID_INTASHIELD, 0x0A83,
+ 		PCI_ANY_ID, PCI_ANY_ID,
+ 		0, 0,
+@@ -5245,12 +5339,34 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		0, 0,
+ 		pbn_b2_4_115200 },
+ 	/*
+-	 * Brainboxes UC-420/431
++	 * Brainboxes UC-420
+ 	 */
+ 	{       PCI_VENDOR_ID_INTASHIELD, 0x0921,
+ 		PCI_ANY_ID, PCI_ANY_ID,
+ 		0, 0,
+ 		pbn_b2_4_115200 },
++	/*
++	 * Brainboxes UC-607
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x09A1,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x09A2,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x09A3,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-836
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0D41,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_4_115200 },
+ 	/*
+ 	 * Perle PCI-RAS cards
+ 	 */
+diff --git a/drivers/usb/gadget/legacy/raw_gadget.c b/drivers/usb/gadget/legacy/raw_gadget.c
+index ddb39e6728017..72ecce73ab53c 100644
+--- a/drivers/usb/gadget/legacy/raw_gadget.c
++++ b/drivers/usb/gadget/legacy/raw_gadget.c
+@@ -662,12 +662,12 @@ static int raw_process_ep0_io(struct raw_dev *dev, struct usb_raw_ep_io *io,
+ 	if (WARN_ON(in && dev->ep0_out_pending)) {
+ 		ret = -ENODEV;
+ 		dev->state = STATE_DEV_FAILED;
+-		goto out_done;
++		goto out_unlock;
+ 	}
+ 	if (WARN_ON(!in && dev->ep0_in_pending)) {
+ 		ret = -ENODEV;
+ 		dev->state = STATE_DEV_FAILED;
+-		goto out_done;
++		goto out_unlock;
+ 	}
+ 
+ 	dev->req->buf = data;
+@@ -682,7 +682,7 @@ static int raw_process_ep0_io(struct raw_dev *dev, struct usb_raw_ep_io *io,
+ 				"fail, usb_ep_queue returned %d\n", ret);
+ 		spin_lock_irqsave(&dev->lock, flags);
+ 		dev->state = STATE_DEV_FAILED;
+-		goto out_done;
++		goto out_queue_failed;
+ 	}
+ 
+ 	ret = wait_for_completion_interruptible(&dev->ep0_done);
+@@ -691,13 +691,16 @@ static int raw_process_ep0_io(struct raw_dev *dev, struct usb_raw_ep_io *io,
+ 		usb_ep_dequeue(dev->gadget->ep0, dev->req);
+ 		wait_for_completion(&dev->ep0_done);
+ 		spin_lock_irqsave(&dev->lock, flags);
+-		goto out_done;
++		if (dev->ep0_status == -ECONNRESET)
++			dev->ep0_status = -EINTR;
++		goto out_interrupted;
+ 	}
+ 
+ 	spin_lock_irqsave(&dev->lock, flags);
+-	ret = dev->ep0_status;
+ 
+-out_done:
++out_interrupted:
++	ret = dev->ep0_status;
++out_queue_failed:
+ 	dev->ep0_urb_queued = false;
+ out_unlock:
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+@@ -1059,7 +1062,7 @@ static int raw_process_ep_io(struct raw_dev *dev, struct usb_raw_ep_io *io,
+ 				"fail, usb_ep_queue returned %d\n", ret);
+ 		spin_lock_irqsave(&dev->lock, flags);
+ 		dev->state = STATE_DEV_FAILED;
+-		goto out_done;
++		goto out_queue_failed;
+ 	}
+ 
+ 	ret = wait_for_completion_interruptible(&done);
+@@ -1068,13 +1071,16 @@ static int raw_process_ep_io(struct raw_dev *dev, struct usb_raw_ep_io *io,
+ 		usb_ep_dequeue(ep->ep, ep->req);
+ 		wait_for_completion(&done);
+ 		spin_lock_irqsave(&dev->lock, flags);
+-		goto out_done;
++		if (ep->status == -ECONNRESET)
++			ep->status = -EINTR;
++		goto out_interrupted;
+ 	}
+ 
+ 	spin_lock_irqsave(&dev->lock, flags);
+-	ret = ep->status;
+ 
+-out_done:
++out_interrupted:
++	ret = ep->status;
++out_queue_failed:
+ 	ep->urb_queued = false;
+ out_unlock:
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+diff --git a/drivers/usb/storage/unusual_cypress.h b/drivers/usb/storage/unusual_cypress.h
+index 0547daf116a26..5df40759d77ad 100644
+--- a/drivers/usb/storage/unusual_cypress.h
++++ b/drivers/usb/storage/unusual_cypress.h
+@@ -19,7 +19,7 @@ UNUSUAL_DEV(  0x04b4, 0x6831, 0x0000, 0x9999,
+ 		"Cypress ISD-300LP",
+ 		USB_SC_CYP_ATACB, USB_PR_DEVICE, NULL, 0),
+ 
+-UNUSUAL_DEV( 0x14cd, 0x6116, 0x0160, 0x0160,
++UNUSUAL_DEV( 0x14cd, 0x6116, 0x0150, 0x0160,
+ 		"Super Top",
+ 		"USB 2.0  SATA BRIDGE",
+ 		USB_SC_CYP_ATACB, USB_PR_DEVICE, NULL, 0),
+diff --git a/drivers/video/fbdev/aty/atyfb_base.c b/drivers/video/fbdev/aty/atyfb_base.c
+index c8feff0ee8da9..eb32ff0910d3e 100644
+--- a/drivers/video/fbdev/aty/atyfb_base.c
++++ b/drivers/video/fbdev/aty/atyfb_base.c
+@@ -3440,11 +3440,15 @@ static int atyfb_setup_generic(struct pci_dev *pdev, struct fb_info *info,
+ 	}
+ 
+ 	info->fix.mmio_start = raddr;
++#if defined(__i386__) || defined(__ia64__)
+ 	/*
+ 	 * By using strong UC we force the MTRR to never have an
+ 	 * effect on the MMIO region on both non-PAT and PAT systems.
+ 	 */
+ 	par->ati_regbase = ioremap_uc(info->fix.mmio_start, 0x1000);
++#else
++	par->ati_regbase = ioremap(info->fix.mmio_start, 0x1000);
++#endif
+ 	if (par->ati_regbase == NULL)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/video/fbdev/uvesafb.c b/drivers/video/fbdev/uvesafb.c
+index 661f12742e4f0..d999a7cdb5409 100644
+--- a/drivers/video/fbdev/uvesafb.c
++++ b/drivers/video/fbdev/uvesafb.c
+@@ -1933,10 +1933,10 @@ static void uvesafb_exit(void)
+ 		}
+ 	}
+ 
+-	cn_del_callback(&uvesafb_cn_id);
+ 	driver_remove_file(&uvesafb_driver.driver, &driver_attr_v86d);
+ 	platform_device_unregister(uvesafb_device);
+ 	platform_driver_unregister(&uvesafb_driver);
++	cn_del_callback(&uvesafb_cn_id);
+ }
+ 
+ module_exit(uvesafb_exit);
+diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
+index 481611c09dae1..935ea2f3dac7d 100644
+--- a/drivers/virtio/virtio_balloon.c
++++ b/drivers/virtio/virtio_balloon.c
+@@ -402,7 +402,11 @@ static inline s64 towards_target(struct virtio_balloon *vb)
+ 	virtio_cread_le(vb->vdev, struct virtio_balloon_config, num_pages,
+ 			&num_pages);
+ 
+-	target = num_pages;
++	/*
++	 * Aligned up to guest page size to avoid inflating and deflating
++	 * balloon endlessly.
++	 */
++	target = ALIGN(num_pages, VIRTIO_BALLOON_PAGES_PER_PAGE);
+ 	return target - vb->num_pages;
+ }
+ 
+diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
+index 136f90dbad831..7fcc307fac725 100644
+--- a/drivers/virtio/virtio_mmio.c
++++ b/drivers/virtio/virtio_mmio.c
+@@ -596,14 +596,17 @@ static int virtio_mmio_probe(struct platform_device *pdev)
+ 	spin_lock_init(&vm_dev->lock);
+ 
+ 	vm_dev->base = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(vm_dev->base))
+-		return PTR_ERR(vm_dev->base);
++	if (IS_ERR(vm_dev->base)) {
++		rc = PTR_ERR(vm_dev->base);
++		goto free_vm_dev;
++	}
+ 
+ 	/* Check magic value */
+ 	magic = readl(vm_dev->base + VIRTIO_MMIO_MAGIC_VALUE);
+ 	if (magic != ('v' | 'i' << 8 | 'r' << 16 | 't' << 24)) {
+ 		dev_warn(&pdev->dev, "Wrong magic value 0x%08lx!\n", magic);
+-		return -ENODEV;
++		rc = -ENODEV;
++		goto free_vm_dev;
+ 	}
+ 
+ 	/* Check device version */
+@@ -611,7 +614,8 @@ static int virtio_mmio_probe(struct platform_device *pdev)
+ 	if (vm_dev->version < 1 || vm_dev->version > 2) {
+ 		dev_err(&pdev->dev, "Version %ld not supported!\n",
+ 				vm_dev->version);
+-		return -ENXIO;
++		rc = -ENXIO;
++		goto free_vm_dev;
+ 	}
+ 
+ 	vm_dev->vdev.id.device = readl(vm_dev->base + VIRTIO_MMIO_DEVICE_ID);
+@@ -620,7 +624,8 @@ static int virtio_mmio_probe(struct platform_device *pdev)
+ 		 * virtio-mmio device with an ID 0 is a (dummy) placeholder
+ 		 * with no function. End probing now with no error reported.
+ 		 */
+-		return -ENODEV;
++		rc = -ENODEV;
++		goto free_vm_dev;
+ 	}
+ 	vm_dev->vdev.id.vendor = readl(vm_dev->base + VIRTIO_MMIO_VENDOR_ID);
+ 
+@@ -650,6 +655,10 @@ static int virtio_mmio_probe(struct platform_device *pdev)
+ 		put_device(&vm_dev->vdev.dev);
+ 
+ 	return rc;
++
++free_vm_dev:
++	kfree(vm_dev);
++	return rc;
+ }
+ 
+ static int virtio_mmio_remove(struct platform_device *pdev)
+diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
+index bcc611069308a..7d18b92688176 100644
+--- a/fs/cifs/smbdirect.c
++++ b/fs/cifs/smbdirect.c
+@@ -571,8 +571,13 @@ static struct rdma_cm_id *smbd_create_id(
+ 		log_rdma_event(ERR, "rdma_resolve_addr() failed %i\n", rc);
+ 		goto out;
+ 	}
+-	wait_for_completion_interruptible_timeout(
++	rc = wait_for_completion_interruptible_timeout(
+ 		&info->ri_done, msecs_to_jiffies(RDMA_RESOLVE_TIMEOUT));
++	/* e.g. if interrupted returns -ERESTARTSYS */
++	if (rc < 0) {
++		log_rdma_event(ERR, "rdma_resolve_addr timeout rc: %i\n", rc);
++		goto out;
++	}
+ 	rc = info->ri_rc;
+ 	if (rc) {
+ 		log_rdma_event(ERR, "rdma_resolve_addr() completed %i\n", rc);
+@@ -585,8 +590,13 @@ static struct rdma_cm_id *smbd_create_id(
+ 		log_rdma_event(ERR, "rdma_resolve_route() failed %i\n", rc);
+ 		goto out;
+ 	}
+-	wait_for_completion_interruptible_timeout(
++	rc = wait_for_completion_interruptible_timeout(
+ 		&info->ri_done, msecs_to_jiffies(RDMA_RESOLVE_TIMEOUT));
++	/* e.g. if interrupted returns -ERESTARTSYS */
++	if (rc < 0)  {
++		log_rdma_event(ERR, "rdma_resolve_addr timeout rc: %i\n", rc);
++		goto out;
++	}
+ 	rc = info->ri_rc;
+ 	if (rc) {
+ 		log_rdma_event(ERR, "rdma_resolve_route() completed %i\n", rc);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index b35d59d41c896..c9ac43f407469 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -3515,8 +3515,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
+ 	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
+ 	struct ext4_super_block *es = sbi->s_es;
+ 	int bsbits, max;
+-	ext4_lblk_t end;
+-	loff_t size, start_off;
++	loff_t size, start_off, end;
+ 	loff_t orig_size __maybe_unused;
+ 	ext4_lblk_t start;
+ 	struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
+@@ -3545,7 +3544,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
+ 
+ 	/* first, let's learn actual file size
+ 	 * given current request is allocated */
+-	size = ac->ac_o_ex.fe_logical + EXT4_C2B(sbi, ac->ac_o_ex.fe_len);
++	size = extent_logical_end(sbi, &ac->ac_o_ex);
+ 	size = size << bsbits;
+ 	if (size < i_size_read(ac->ac_inode))
+ 		size = i_size_read(ac->ac_inode);
+@@ -3624,7 +3623,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
+ 	/* check we don't cross already preallocated blocks */
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(pa, &ei->i_prealloc_list, pa_inode_list) {
+-		ext4_lblk_t pa_end;
++		loff_t pa_end;
+ 
+ 		if (pa->pa_deleted)
+ 			continue;
+@@ -3634,8 +3633,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
+ 			continue;
+ 		}
+ 
+-		pa_end = pa->pa_lstart + EXT4_C2B(EXT4_SB(ac->ac_sb),
+-						  pa->pa_len);
++		pa_end = pa_logical_end(EXT4_SB(ac->ac_sb), pa);
+ 
+ 		/* PA must not overlap original request */
+ 		BUG_ON(!(ac->ac_o_ex.fe_logical >= pa_end ||
+@@ -3664,12 +3662,11 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
+ 	/* XXX: extra loop to check we really don't overlap preallocations */
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(pa, &ei->i_prealloc_list, pa_inode_list) {
+-		ext4_lblk_t pa_end;
++		loff_t pa_end;
+ 
+ 		spin_lock(&pa->pa_lock);
+ 		if (pa->pa_deleted == 0) {
+-			pa_end = pa->pa_lstart + EXT4_C2B(EXT4_SB(ac->ac_sb),
+-							  pa->pa_len);
++			pa_end = pa_logical_end(EXT4_SB(ac->ac_sb), pa);
+ 			BUG_ON(!(start >= pa_end || end <= pa->pa_lstart));
+ 		}
+ 		spin_unlock(&pa->pa_lock);
+@@ -3885,8 +3882,7 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
+ 		/* all fields in this condition don't change,
+ 		 * so we can skip locking for them */
+ 		if (ac->ac_o_ex.fe_logical < pa->pa_lstart ||
+-		    ac->ac_o_ex.fe_logical >= (pa->pa_lstart +
+-					       EXT4_C2B(sbi, pa->pa_len)))
++		    ac->ac_o_ex.fe_logical >= pa_logical_end(sbi, pa))
+ 			continue;
+ 
+ 		/* non-extent files can't have physical blocks past 2^32 */
+@@ -4131,8 +4127,11 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 	pa = ac->ac_pa;
+ 
+ 	if (ac->ac_b_ex.fe_len < ac->ac_g_ex.fe_len) {
+-		int new_bex_start;
+-		int new_bex_end;
++		struct ext4_free_extent ex = {
++			.fe_logical = ac->ac_g_ex.fe_logical,
++			.fe_len = ac->ac_g_ex.fe_len,
++		};
++		loff_t orig_goal_end = extent_logical_end(sbi, &ex);
+ 
+ 		/* we can't allocate as much as normalizer wants.
+ 		 * so, found space must get proper lstart
+@@ -4151,29 +4150,23 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 		 *    still cover original start
+ 		 * 3. Else, keep the best ex at start of original request.
+ 		 */
+-		new_bex_end = ac->ac_g_ex.fe_logical +
+-			EXT4_C2B(sbi, ac->ac_g_ex.fe_len);
+-		new_bex_start = new_bex_end - EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
+-		if (ac->ac_o_ex.fe_logical >= new_bex_start)
+-			goto adjust_bex;
++		ex.fe_len = ac->ac_b_ex.fe_len;
+ 
+-		new_bex_start = ac->ac_g_ex.fe_logical;
+-		new_bex_end =
+-			new_bex_start + EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
+-		if (ac->ac_o_ex.fe_logical < new_bex_end)
++		ex.fe_logical = orig_goal_end - EXT4_C2B(sbi, ex.fe_len);
++		if (ac->ac_o_ex.fe_logical >= ex.fe_logical)
+ 			goto adjust_bex;
+ 
+-		new_bex_start = ac->ac_o_ex.fe_logical;
+-		new_bex_end =
+-			new_bex_start + EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
++		ex.fe_logical = ac->ac_g_ex.fe_logical;
++		if (ac->ac_o_ex.fe_logical < extent_logical_end(sbi, &ex))
++			goto adjust_bex;
+ 
++		ex.fe_logical = ac->ac_o_ex.fe_logical;
+ adjust_bex:
+-		ac->ac_b_ex.fe_logical = new_bex_start;
++		ac->ac_b_ex.fe_logical = ex.fe_logical;
+ 
+ 		BUG_ON(ac->ac_o_ex.fe_logical < ac->ac_b_ex.fe_logical);
+ 		BUG_ON(ac->ac_o_ex.fe_len > ac->ac_b_ex.fe_len);
+-		BUG_ON(new_bex_end > (ac->ac_g_ex.fe_logical +
+-				      EXT4_C2B(sbi, ac->ac_g_ex.fe_len)));
++		BUG_ON(extent_logical_end(sbi, &ex) > orig_goal_end);
+ 	}
+ 
+ 	/* preallocation can change ac_b_ex, thus we store actually
+@@ -4704,7 +4697,7 @@ static void ext4_mb_group_or_file(struct ext4_allocation_context *ac)
+ 	if (unlikely(ac->ac_flags & EXT4_MB_HINT_GOAL_ONLY))
+ 		return;
+ 
+-	size = ac->ac_o_ex.fe_logical + EXT4_C2B(sbi, ac->ac_o_ex.fe_len);
++	size = extent_logical_end(sbi, &ac->ac_o_ex);
+ 	isize = (i_size_read(ac->ac_inode) + ac->ac_sb->s_blocksize - 1)
+ 		>> bsbits;
+ 
+diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
+index 7be6288e48ec2..1e9c402189cb5 100644
+--- a/fs/ext4/mballoc.h
++++ b/fs/ext4/mballoc.h
+@@ -199,6 +199,20 @@ static inline ext4_fsblk_t ext4_grp_offs_to_block(struct super_block *sb,
+ 		(fex->fe_start << EXT4_SB(sb)->s_cluster_bits);
+ }
+ 
++static inline loff_t extent_logical_end(struct ext4_sb_info *sbi,
++					struct ext4_free_extent *fex)
++{
++	/* Use loff_t to avoid end exceeding ext4_lblk_t max. */
++	return (loff_t)fex->fe_logical + EXT4_C2B(sbi, fex->fe_len);
++}
++
++static inline loff_t pa_logical_end(struct ext4_sb_info *sbi,
++				    struct ext4_prealloc_space *pa)
++{
++	/* Use loff_t to avoid end exceeding ext4_lblk_t max. */
++	return (loff_t)pa->pa_lstart + EXT4_C2B(sbi, pa->pa_len);
++}
++
+ typedef int (*ext4_mballoc_query_range_fn)(
+ 	struct super_block		*sb,
+ 	ext4_group_t			agno,
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 8e4daee0171f8..dfa99cd195b83 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1451,7 +1451,8 @@ next_step:
+ 
+ 		if (phase == 3) {
+ 			inode = f2fs_iget(sb, dni.ino);
+-			if (IS_ERR(inode) || is_bad_inode(inode)) {
++			if (IS_ERR(inode) || is_bad_inode(inode) ||
++					special_file(inode->i_mode)) {
+ 				set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 				continue;
+ 			}
+diff --git a/include/linux/device/driver.h b/include/linux/device/driver.h
+index ee7ba5b5417e5..a44f5adeaef5a 100644
+--- a/include/linux/device/driver.h
++++ b/include/linux/device/driver.h
+@@ -150,6 +150,8 @@ extern int __must_check driver_create_file(struct device_driver *driver,
+ extern void driver_remove_file(struct device_driver *driver,
+ 			       const struct driver_attribute *attr);
+ 
++int driver_set_override(struct device *dev, const char **override,
++			const char *s, size_t len);
+ extern int __must_check driver_for_each_device(struct device_driver *drv,
+ 					       struct device *start,
+ 					       void *data,
+diff --git a/include/linux/kasan.h b/include/linux/kasan.h
+index 30d343b4a40a5..c0b976dd138b1 100644
+--- a/include/linux/kasan.h
++++ b/include/linux/kasan.h
+@@ -234,10 +234,10 @@ static inline void kasan_release_vmalloc(unsigned long start,
+ 					 unsigned long free_region_end) {}
+ #endif
+ 
+-#ifdef CONFIG_KASAN_INLINE
++#ifdef CONFIG_KASAN
+ void kasan_non_canonical_hook(unsigned long addr);
+-#else /* CONFIG_KASAN_INLINE */
++#else /* CONFIG_KASAN */
+ static inline void kasan_non_canonical_hook(unsigned long addr) { }
+-#endif /* CONFIG_KASAN_INLINE */
++#endif /* CONFIG_KASAN */
+ 
+ #endif /* LINUX_KASAN_H */
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 4b34a5c125999..1a41147b22e8f 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -555,6 +555,7 @@
+ #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F3 0x1493
+ #define PCI_DEVICE_ID_AMD_17H_M60H_DF_F3 0x144b
+ #define PCI_DEVICE_ID_AMD_17H_M70H_DF_F3 0x1443
++#define PCI_DEVICE_ID_AMD_VANGOGH_USB	0x163a
+ #define PCI_DEVICE_ID_AMD_19H_DF_F3	0x1653
+ #define PCI_DEVICE_ID_AMD_CNB17H_F3	0x1703
+ #define PCI_DEVICE_ID_AMD_LANCE		0x2000
+diff --git a/include/linux/platform_device.h b/include/linux/platform_device.h
+index 17f9cd5626c83..e7a83b0218077 100644
+--- a/include/linux/platform_device.h
++++ b/include/linux/platform_device.h
+@@ -30,7 +30,11 @@ struct platform_device {
+ 	struct resource	*resource;
+ 
+ 	const struct platform_device_id	*id_entry;
+-	char *driver_override; /* Driver name to force a match */
++	/*
++	 * Driver name to force a match.  Do not set directly, because core
++	 * frees it.  Use driver_set_override() to set or clear it.
++	 */
++	const char *driver_override;
+ 
+ 	/* MFD cell pointer */
+ 	struct mfd_cell *mfd_cell;
+diff --git a/include/linux/rpmsg.h b/include/linux/rpmsg.h
+index a68972b097b72..267533fecbdd9 100644
+--- a/include/linux/rpmsg.h
++++ b/include/linux/rpmsg.h
+@@ -41,7 +41,9 @@ struct rpmsg_channel_info {
+  * rpmsg_device - device that belong to the rpmsg bus
+  * @dev: the device struct
+  * @id: device id (used to match between rpmsg drivers and devices)
+- * @driver_override: driver name to force a match
++ * @driver_override: driver name to force a match; do not set directly,
++ *                   because core frees it; use driver_set_override() to
++ *                   set or clear it.
+  * @src: local address
+  * @dst: destination address
+  * @ept: the rpmsg endpoint of this channel
+@@ -50,7 +52,7 @@ struct rpmsg_channel_info {
+ struct rpmsg_device {
+ 	struct device dev;
+ 	struct rpmsg_device_id id;
+-	char *driver_override;
++	const char *driver_override;
+ 	u32 src;
+ 	u32 dst;
+ 	struct rpmsg_endpoint *ept;
+@@ -113,6 +115,8 @@ struct rpmsg_driver {
+ 
+ #if IS_ENABLED(CONFIG_RPMSG)
+ 
++int rpmsg_register_device_override(struct rpmsg_device *rpdev,
++				   const char *driver_override);
+ int register_rpmsg_device(struct rpmsg_device *dev);
+ void unregister_rpmsg_device(struct rpmsg_device *dev);
+ int __register_rpmsg_driver(struct rpmsg_driver *drv, struct module *owner);
+@@ -137,6 +141,12 @@ __poll_t rpmsg_poll(struct rpmsg_endpoint *ept, struct file *filp,
+ 
+ #else
+ 
++static inline int rpmsg_register_device_override(struct rpmsg_device *rpdev,
++						 const char *driver_override)
++{
++	return -ENXIO;
++}
++
+ static inline int register_rpmsg_device(struct rpmsg_device *dev)
+ {
+ 	return -ENXIO;
+diff --git a/include/uapi/linux/can/isotp.h b/include/uapi/linux/can/isotp.h
+index 590f8aea2b6d2..439c982f7e811 100644
+--- a/include/uapi/linux/can/isotp.h
++++ b/include/uapi/linux/can/isotp.h
+@@ -124,18 +124,19 @@ struct can_isotp_ll_options {
+ 
+ /* flags for isotp behaviour */
+ 
+-#define CAN_ISOTP_LISTEN_MODE	0x001	/* listen only (do not send FC) */
+-#define CAN_ISOTP_EXTEND_ADDR	0x002	/* enable extended addressing */
+-#define CAN_ISOTP_TX_PADDING	0x004	/* enable CAN frame padding tx path */
+-#define CAN_ISOTP_RX_PADDING	0x008	/* enable CAN frame padding rx path */
+-#define CAN_ISOTP_CHK_PAD_LEN	0x010	/* check received CAN frame padding */
+-#define CAN_ISOTP_CHK_PAD_DATA	0x020	/* check received CAN frame padding */
+-#define CAN_ISOTP_HALF_DUPLEX	0x040	/* half duplex error state handling */
+-#define CAN_ISOTP_FORCE_TXSTMIN	0x080	/* ignore stmin from received FC */
+-#define CAN_ISOTP_FORCE_RXSTMIN	0x100	/* ignore CFs depending on rx stmin */
+-#define CAN_ISOTP_RX_EXT_ADDR	0x200	/* different rx extended addressing */
+-#define CAN_ISOTP_WAIT_TX_DONE	0x400	/* wait for tx completion */
+-#define CAN_ISOTP_SF_BROADCAST	0x800	/* 1-to-N functional addressing */
++#define CAN_ISOTP_LISTEN_MODE	0x0001	/* listen only (do not send FC) */
++#define CAN_ISOTP_EXTEND_ADDR	0x0002	/* enable extended addressing */
++#define CAN_ISOTP_TX_PADDING	0x0004	/* enable CAN frame padding tx path */
++#define CAN_ISOTP_RX_PADDING	0x0008	/* enable CAN frame padding rx path */
++#define CAN_ISOTP_CHK_PAD_LEN	0x0010	/* check received CAN frame padding */
++#define CAN_ISOTP_CHK_PAD_DATA	0x0020	/* check received CAN frame padding */
++#define CAN_ISOTP_HALF_DUPLEX	0x0040	/* half duplex error state handling */
++#define CAN_ISOTP_FORCE_TXSTMIN	0x0080	/* ignore stmin from received FC */
++#define CAN_ISOTP_FORCE_RXSTMIN	0x0100	/* ignore CFs depending on rx stmin */
++#define CAN_ISOTP_RX_EXT_ADDR	0x0200	/* different rx extended addressing */
++#define CAN_ISOTP_WAIT_TX_DONE	0x0400	/* wait for tx completion */
++#define CAN_ISOTP_SF_BROADCAST	0x0800	/* 1-to-N functional addressing */
++#define CAN_ISOTP_CF_BROADCAST	0x1000	/* 1-to-N transmission w/o FC */
+ 
+ /* protocol machine default values */
+ 
+diff --git a/include/uapi/linux/gtp.h b/include/uapi/linux/gtp.h
+index 79f9191bbb24c..82d0e58ec3ce2 100644
+--- a/include/uapi/linux/gtp.h
++++ b/include/uapi/linux/gtp.h
+@@ -32,6 +32,6 @@ enum gtp_attrs {
+ 	GTPA_PAD,
+ 	__GTPA_MAX,
+ };
+-#define GTPA_MAX (__GTPA_MAX + 1)
++#define GTPA_MAX (__GTPA_MAX - 1)
+ 
+ #endif /* _UAPI_LINUX_GTP_H_ */
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index b23961475692c..5072635f0b0c1 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -12846,7 +12846,8 @@ static int inherit_group(struct perf_event *parent_event,
+ 		    !perf_get_aux_event(child_ctr, leader))
+ 			return -EINVAL;
+ 	}
+-	leader->group_generation = parent_event->group_generation;
++	if (leader)
++		leader->group_generation = parent_event->group_generation;
+ 	return 0;
+ }
+ 
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index b882c6519b035..37e1ec1d3ee54 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -952,7 +952,7 @@ EXPORT_SYMBOL_GPL(kprobe_event_cmd_init);
+  * @name: The name of the kprobe event
+  * @loc: The location of the kprobe event
+  * @kretprobe: Is this a return probe?
+- * @args: Variable number of arg (pairs), one pair for each field
++ * @...: Variable number of arg (pairs), one pair for each field
+  *
+  * NOTE: Users normally won't want to call this function directly, but
+  * rather use the kprobe_event_gen_cmd_start() wrapper, which automatically
+@@ -1025,7 +1025,7 @@ EXPORT_SYMBOL_GPL(__kprobe_event_gen_cmd_start);
+ /**
+  * __kprobe_event_add_fields - Add probe fields to a kprobe command from arg list
+  * @cmd: A pointer to the dynevent_cmd struct representing the new event
+- * @args: Variable number of arg (pairs), one pair for each field
++ * @...: Variable number of arg (pairs), one pair for each field
+  *
+  * NOTE: Users normally won't want to call this function directly, but
+  * rather use the kprobe_event_add_fields() wrapper, which
+diff --git a/lib/kobject.c b/lib/kobject.c
+index cd3e1a98eff9e..73047e847e912 100644
+--- a/lib/kobject.c
++++ b/lib/kobject.c
+@@ -144,7 +144,7 @@ static int get_kobj_path_length(struct kobject *kobj)
+ 	return length;
+ }
+ 
+-static void fill_kobj_path(struct kobject *kobj, char *path, int length)
++static int fill_kobj_path(struct kobject *kobj, char *path, int length)
+ {
+ 	struct kobject *parent;
+ 
+@@ -153,12 +153,16 @@ static void fill_kobj_path(struct kobject *kobj, char *path, int length)
+ 		int cur = strlen(kobject_name(parent));
+ 		/* back up enough to print this name with '/' */
+ 		length -= cur;
++		if (length <= 0)
++			return -EINVAL;
+ 		memcpy(path + length, kobject_name(parent), cur);
+ 		*(path + --length) = '/';
+ 	}
+ 
+ 	pr_debug("kobject: '%s' (%p): %s: path = '%s'\n", kobject_name(kobj),
+ 		 kobj, __func__, path);
++
++	return 0;
+ }
+ 
+ /**
+@@ -173,13 +177,17 @@ char *kobject_get_path(struct kobject *kobj, gfp_t gfp_mask)
+ 	char *path;
+ 	int len;
+ 
++retry:
+ 	len = get_kobj_path_length(kobj);
+ 	if (len == 0)
+ 		return NULL;
+ 	path = kzalloc(len, gfp_mask);
+ 	if (!path)
+ 		return NULL;
+-	fill_kobj_path(kobj, path, len);
++	if (fill_kobj_path(kobj, path, len)) {
++		kfree(path);
++		goto retry;
++	}
+ 
+ 	return path;
+ }
+diff --git a/mm/kasan/report.c b/mm/kasan/report.c
+index 2f5e96ac4d008..98b08807c9c26 100644
+--- a/mm/kasan/report.c
++++ b/mm/kasan/report.c
+@@ -560,9 +560,8 @@ bool kasan_report(unsigned long addr, size_t size, bool is_write,
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_KASAN_INLINE
+ /*
+- * With CONFIG_KASAN_INLINE, accesses to bogus pointers (outside the high
++ * With CONFIG_KASAN, accesses to bogus pointers (outside the high
+  * canonical half of the address space) cause out-of-bounds shadow memory reads
+  * before the actual access. For addresses in the low canonical half of the
+  * address space, as well as most non-canonical addresses, that out-of-bounds
+@@ -598,4 +597,3 @@ void kasan_non_canonical_hook(unsigned long addr)
+ 	pr_alert("KASAN: %s in range [0x%016lx-0x%016lx]\n", bug_type,
+ 		 orig_addr, orig_addr + KASAN_SHADOW_MASK);
+ }
+-#endif
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index d85435db35f37..124ab93246104 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -8932,6 +8932,7 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page,
+ 			next_page = page;
+ 			current_buddy = page + size;
+ 		}
++		page = next_page;
+ 
+ 		if (set_page_guard(zone, current_buddy, high, migratetype))
+ 			continue;
+@@ -8939,7 +8940,6 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page,
+ 		if (current_buddy != target) {
+ 			add_to_free_list(current_buddy, zone, high, migratetype);
+ 			set_buddy_order(current_buddy, high);
+-			page = next_page;
+ 		}
+ 	}
+ }
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 16ebc187af1c6..c646fef8f3baf 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -14,7 +14,6 @@
+  * - use CAN_ISOTP_WAIT_TX_DONE flag to block the caller until the PDU is sent
+  * - as we have static buffers the check whether the PDU fits into the buffer
+  *   is done at FF reception time (no support for sending 'wait frames')
+- * - take care of the tx-queue-len as traffic shaping is still on the TODO list
+  *
+  * Copyright (c) 2020 Volkswagen Group Electronic Research
+  * All rights reserved.
+@@ -87,9 +86,9 @@ MODULE_ALIAS("can-proto-6");
+ /* ISO 15765-2:2016 supports more than 4095 byte per ISO PDU as the FF_DL can
+  * take full 32 bit values (4 Gbyte). We would need some good concept to handle
+  * this between user space and kernel space. For now increase the static buffer
+- * to something about 8 kbyte to be able to test this new functionality.
++ * to something about 64 kbyte to be able to test this new functionality.
+  */
+-#define MAX_MSG_LENGTH 8200
++#define MAX_MSG_LENGTH 66000
+ 
+ /* N_PCI type values in bits 7-4 of N_PCI bytes */
+ #define N_PCI_SF 0x00	/* single frame */
+@@ -105,18 +104,23 @@ MODULE_ALIAS("can-proto-6");
+ #define FC_CONTENT_SZ 3	/* flow control content size in byte (FS/BS/STmin) */
+ 
+ #define ISOTP_CHECK_PADDING (CAN_ISOTP_CHK_PAD_LEN | CAN_ISOTP_CHK_PAD_DATA)
++#define ISOTP_ALL_BC_FLAGS (CAN_ISOTP_SF_BROADCAST | CAN_ISOTP_CF_BROADCAST)
+ 
+ /* Flow Status given in FC frame */
+ #define ISOTP_FC_CTS 0		/* clear to send */
+ #define ISOTP_FC_WT 1		/* wait */
+ #define ISOTP_FC_OVFLW 2	/* overflow */
+ 
++#define ISOTP_FC_TIMEOUT 1	/* 1 sec */
++#define ISOTP_ECHO_TIMEOUT 2	/* 2 secs */
++
+ enum {
+ 	ISOTP_IDLE = 0,
+ 	ISOTP_WAIT_FIRST_FC,
+ 	ISOTP_WAIT_FC,
+ 	ISOTP_WAIT_DATA,
+-	ISOTP_SENDING
++	ISOTP_SENDING,
++	ISOTP_SHUTDOWN,
+ };
+ 
+ struct tpcon {
+@@ -137,13 +141,14 @@ struct isotp_sock {
+ 	canid_t rxid;
+ 	ktime_t tx_gap;
+ 	ktime_t lastrxcf_tstamp;
+-	struct hrtimer rxtimer, txtimer;
++	struct hrtimer rxtimer, txtimer, txfrtimer;
+ 	struct can_isotp_options opt;
+ 	struct can_isotp_fc_options rxfc, txfc;
+ 	struct can_isotp_ll_options ll;
+ 	u32 frame_txtime;
+ 	u32 force_tx_stmin;
+ 	u32 force_rx_stmin;
++	u32 cfecho; /* consecutive frame echo tag */
+ 	struct tpcon rx, tx;
+ 	struct list_head notifier;
+ 	wait_queue_head_t wait;
+@@ -159,6 +164,17 @@ static inline struct isotp_sock *isotp_sk(const struct sock *sk)
+ 	return (struct isotp_sock *)sk;
+ }
+ 
++static u32 isotp_bc_flags(struct isotp_sock *so)
++{
++	return so->opt.flags & ISOTP_ALL_BC_FLAGS;
++}
++
++static bool isotp_register_rxid(struct isotp_sock *so)
++{
++	/* no broadcast modes => register rx_id for FC frame reception */
++	return (isotp_bc_flags(so) == 0);
++}
++
+ static enum hrtimer_restart isotp_rx_timer_handler(struct hrtimer *hrtimer)
+ {
+ 	struct isotp_sock *so = container_of(hrtimer, struct isotp_sock,
+@@ -228,8 +244,8 @@ static int isotp_send_fc(struct sock *sk, int ae, u8 flowstatus)
+ 
+ 	can_send_ret = can_send(nskb, 1);
+ 	if (can_send_ret)
+-		pr_notice_once("can-isotp: %s: can_send_ret %d\n",
+-			       __func__, can_send_ret);
++		pr_notice_once("can-isotp: %s: can_send_ret %pe\n",
++			       __func__, ERR_PTR(can_send_ret));
+ 
+ 	dev_put(dev);
+ 
+@@ -240,7 +256,8 @@ static int isotp_send_fc(struct sock *sk, int ae, u8 flowstatus)
+ 	so->lastrxcf_tstamp = ktime_set(0, 0);
+ 
+ 	/* start rx timeout watchdog */
+-	hrtimer_start(&so->rxtimer, ktime_set(1, 0), HRTIMER_MODE_REL_SOFT);
++	hrtimer_start(&so->rxtimer, ktime_set(ISOTP_FC_TIMEOUT, 0),
++		      HRTIMER_MODE_REL_SOFT);
+ 	return 0;
+ }
+ 
+@@ -326,6 +343,8 @@ static int check_pad(struct isotp_sock *so, struct canfd_frame *cf,
+ 	return 0;
+ }
+ 
++static void isotp_send_cframe(struct isotp_sock *so);
++
+ static int isotp_rcv_fc(struct isotp_sock *so, struct canfd_frame *cf, int ae)
+ {
+ 	struct sock *sk = &so->sk;
+@@ -380,14 +399,15 @@ static int isotp_rcv_fc(struct isotp_sock *so, struct canfd_frame *cf, int ae)
+ 	case ISOTP_FC_CTS:
+ 		so->tx.bs = 0;
+ 		so->tx.state = ISOTP_SENDING;
+-		/* start cyclic timer for sending CF frame */
+-		hrtimer_start(&so->txtimer, so->tx_gap,
++		/* send CF frame and enable echo timeout handling */
++		hrtimer_start(&so->txtimer, ktime_set(ISOTP_ECHO_TIMEOUT, 0),
+ 			      HRTIMER_MODE_REL_SOFT);
++		isotp_send_cframe(so);
+ 		break;
+ 
+ 	case ISOTP_FC_WT:
+ 		/* start timer to wait for next FC frame */
+-		hrtimer_start(&so->txtimer, ktime_set(1, 0),
++		hrtimer_start(&so->txtimer, ktime_set(ISOTP_FC_TIMEOUT, 0),
+ 			      HRTIMER_MODE_REL_SOFT);
+ 		break;
+ 
+@@ -582,7 +602,7 @@ static int isotp_rcv_cf(struct sock *sk, struct canfd_frame *cf, int ae,
+ 	/* perform blocksize handling, if enabled */
+ 	if (!so->rxfc.bs || ++so->rx.bs < so->rxfc.bs) {
+ 		/* start rx timeout watchdog */
+-		hrtimer_start(&so->rxtimer, ktime_set(1, 0),
++		hrtimer_start(&so->rxtimer, ktime_set(ISOTP_FC_TIMEOUT, 0),
+ 			      HRTIMER_MODE_REL_SOFT);
+ 		return 0;
+ 	}
+@@ -713,6 +733,63 @@ static void isotp_fill_dataframe(struct canfd_frame *cf, struct isotp_sock *so,
+ 		cf->data[0] = so->opt.ext_address;
+ }
+ 
++static void isotp_send_cframe(struct isotp_sock *so)
++{
++	struct sock *sk = &so->sk;
++	struct sk_buff *skb;
++	struct net_device *dev;
++	struct canfd_frame *cf;
++	int can_send_ret;
++	int ae = (so->opt.flags & CAN_ISOTP_EXTEND_ADDR) ? 1 : 0;
++
++	dev = dev_get_by_index(sock_net(sk), so->ifindex);
++	if (!dev)
++		return;
++
++	skb = alloc_skb(so->ll.mtu + sizeof(struct can_skb_priv), GFP_ATOMIC);
++	if (!skb) {
++		dev_put(dev);
++		return;
++	}
++
++	can_skb_reserve(skb);
++	can_skb_prv(skb)->ifindex = dev->ifindex;
++	can_skb_prv(skb)->skbcnt = 0;
++
++	cf = (struct canfd_frame *)skb->data;
++	skb_put_zero(skb, so->ll.mtu);
++
++	/* create consecutive frame */
++	isotp_fill_dataframe(cf, so, ae, 0);
++
++	/* place consecutive frame N_PCI in appropriate index */
++	cf->data[ae] = N_PCI_CF | so->tx.sn++;
++	so->tx.sn %= 16;
++	so->tx.bs++;
++
++	cf->flags = so->ll.tx_flags;
++
++	skb->dev = dev;
++	can_skb_set_owner(skb, sk);
++
++	/* cfecho should have been zero'ed by init/isotp_rcv_echo() */
++	if (so->cfecho)
++		pr_notice_once("can-isotp: cfecho is %08X != 0\n", so->cfecho);
++
++	/* set consecutive frame echo tag */
++	so->cfecho = *(u32 *)cf->data;
++
++	/* send frame with local echo enabled */
++	can_send_ret = can_send(skb, 1);
++	if (can_send_ret) {
++		pr_notice_once("can-isotp: %s: can_send_ret %pe\n",
++			       __func__, ERR_PTR(can_send_ret));
++		if (can_send_ret == -ENOBUFS)
++			pr_notice_once("can-isotp: tx queue is full\n");
++	}
++	dev_put(dev);
++}
++
+ static void isotp_create_fframe(struct canfd_frame *cf, struct isotp_sock *so,
+ 				int ae)
+ {
+@@ -746,143 +823,120 @@ static void isotp_create_fframe(struct canfd_frame *cf, struct isotp_sock *so,
+ 		cf->data[i] = so->tx.buf[so->tx.idx++];
+ 
+ 	so->tx.sn = 1;
+-	so->tx.state = ISOTP_WAIT_FIRST_FC;
+ }
+ 
+-static enum hrtimer_restart isotp_tx_timer_handler(struct hrtimer *hrtimer)
++static void isotp_rcv_echo(struct sk_buff *skb, void *data)
+ {
+-	struct isotp_sock *so = container_of(hrtimer, struct isotp_sock,
+-					     txtimer);
+-	struct sock *sk = &so->sk;
+-	struct sk_buff *skb;
+-	struct net_device *dev;
+-	struct canfd_frame *cf;
+-	enum hrtimer_restart restart = HRTIMER_NORESTART;
+-	int can_send_ret;
+-	int ae = (so->opt.flags & CAN_ISOTP_EXTEND_ADDR) ? 1 : 0;
++	struct sock *sk = (struct sock *)data;
++	struct isotp_sock *so = isotp_sk(sk);
++	struct canfd_frame *cf = (struct canfd_frame *)skb->data;
+ 
+-	switch (so->tx.state) {
+-	case ISOTP_WAIT_FC:
+-	case ISOTP_WAIT_FIRST_FC:
++	/* only handle my own local echo CF/SF skb's (no FF!) */
++	if (skb->sk != sk || so->cfecho != *(u32 *)cf->data)
++		return;
+ 
+-		/* we did not get any flow control frame in time */
++	/* cancel local echo timeout */
++	hrtimer_cancel(&so->txtimer);
+ 
+-		/* report 'communication error on send' */
+-		sk->sk_err = ECOMM;
+-		if (!sock_flag(sk, SOCK_DEAD))
+-			sk->sk_error_report(sk);
++	/* local echo skb with consecutive frame has been consumed */
++	so->cfecho = 0;
+ 
+-		/* reset tx state */
++	if (so->tx.idx >= so->tx.len) {
++		/* we are done */
+ 		so->tx.state = ISOTP_IDLE;
+ 		wake_up_interruptible(&so->wait);
+-		break;
+-
+-	case ISOTP_SENDING:
+-
+-		/* push out the next segmented pdu */
+-		dev = dev_get_by_index(sock_net(sk), so->ifindex);
+-		if (!dev)
+-			break;
+-
+-isotp_tx_burst:
+-		skb = alloc_skb(so->ll.mtu + sizeof(struct can_skb_priv),
+-				GFP_ATOMIC);
+-		if (!skb) {
+-			dev_put(dev);
+-			break;
+-		}
++		return;
++	}
+ 
+-		can_skb_reserve(skb);
+-		can_skb_prv(skb)->ifindex = dev->ifindex;
+-		can_skb_prv(skb)->skbcnt = 0;
++	if (so->txfc.bs && so->tx.bs >= so->txfc.bs) {
++		/* stop and wait for FC with timeout */
++		so->tx.state = ISOTP_WAIT_FC;
++		hrtimer_start(&so->txtimer, ktime_set(ISOTP_FC_TIMEOUT, 0),
++			      HRTIMER_MODE_REL_SOFT);
++		return;
++	}
+ 
+-		cf = (struct canfd_frame *)skb->data;
+-		skb_put_zero(skb, so->ll.mtu);
++	/* no gap between data frames needed => use burst mode */
++	if (!so->tx_gap) {
++		/* enable echo timeout handling */
++		hrtimer_start(&so->txtimer, ktime_set(ISOTP_ECHO_TIMEOUT, 0),
++			      HRTIMER_MODE_REL_SOFT);
++		isotp_send_cframe(so);
++		return;
++	}
+ 
+-		/* create consecutive frame */
+-		isotp_fill_dataframe(cf, so, ae, 0);
++	/* start timer to send next consecutive frame with correct delay */
++	hrtimer_start(&so->txfrtimer, so->tx_gap, HRTIMER_MODE_REL_SOFT);
++}
+ 
+-		/* place consecutive frame N_PCI in appropriate index */
+-		cf->data[ae] = N_PCI_CF | so->tx.sn++;
+-		so->tx.sn %= 16;
+-		so->tx.bs++;
++static enum hrtimer_restart isotp_tx_timer_handler(struct hrtimer *hrtimer)
++{
++	struct isotp_sock *so = container_of(hrtimer, struct isotp_sock,
++					     txtimer);
++	struct sock *sk = &so->sk;
+ 
+-		cf->flags = so->ll.tx_flags;
++	/* don't handle timeouts in IDLE or SHUTDOWN state */
++	if (so->tx.state == ISOTP_IDLE || so->tx.state == ISOTP_SHUTDOWN)
++		return HRTIMER_NORESTART;
+ 
+-		skb->dev = dev;
+-		can_skb_set_owner(skb, sk);
++	/* we did not get any flow control or echo frame in time */
+ 
+-		can_send_ret = can_send(skb, 1);
+-		if (can_send_ret)
+-			pr_notice_once("can-isotp: %s: can_send_ret %d\n",
+-				       __func__, can_send_ret);
++	/* report 'communication error on send' */
++	sk->sk_err = ECOMM;
++	if (!sock_flag(sk, SOCK_DEAD))
++		sk->sk_error_report(sk);
+ 
+-		if (so->tx.idx >= so->tx.len) {
+-			/* we are done */
+-			so->tx.state = ISOTP_IDLE;
+-			dev_put(dev);
+-			wake_up_interruptible(&so->wait);
+-			break;
+-		}
++	/* reset tx state */
++	so->tx.state = ISOTP_IDLE;
++	wake_up_interruptible(&so->wait);
+ 
+-		if (so->txfc.bs && so->tx.bs >= so->txfc.bs) {
+-			/* stop and wait for FC */
+-			so->tx.state = ISOTP_WAIT_FC;
+-			dev_put(dev);
+-			hrtimer_set_expires(&so->txtimer,
+-					    ktime_add(ktime_get(),
+-						      ktime_set(1, 0)));
+-			restart = HRTIMER_RESTART;
+-			break;
+-		}
++	return HRTIMER_NORESTART;
++}
+ 
+-		/* no gap between data frames needed => use burst mode */
+-		if (!so->tx_gap)
+-			goto isotp_tx_burst;
++static enum hrtimer_restart isotp_txfr_timer_handler(struct hrtimer *hrtimer)
++{
++	struct isotp_sock *so = container_of(hrtimer, struct isotp_sock,
++					     txfrtimer);
+ 
+-		/* start timer to send next data frame with correct delay */
+-		dev_put(dev);
+-		hrtimer_set_expires(&so->txtimer,
+-				    ktime_add(ktime_get(), so->tx_gap));
+-		restart = HRTIMER_RESTART;
+-		break;
++	/* start echo timeout handling and cover below protocol error */
++	hrtimer_start(&so->txtimer, ktime_set(ISOTP_ECHO_TIMEOUT, 0),
++		      HRTIMER_MODE_REL_SOFT);
+ 
+-	default:
+-		WARN_ON_ONCE(1);
+-	}
++	/* cfecho should be consumed by isotp_rcv_echo() here */
++	if (so->tx.state == ISOTP_SENDING && !so->cfecho)
++		isotp_send_cframe(so);
+ 
+-	return restart;
++	return HRTIMER_NORESTART;
+ }
+ 
+ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct isotp_sock *so = isotp_sk(sk);
+-	u32 old_state = so->tx.state;
+ 	struct sk_buff *skb;
+ 	struct net_device *dev;
+ 	struct canfd_frame *cf;
+ 	int ae = (so->opt.flags & CAN_ISOTP_EXTEND_ADDR) ? 1 : 0;
+ 	int wait_tx_done = (so->opt.flags & CAN_ISOTP_WAIT_TX_DONE) ? 1 : 0;
+-	s64 hrtimer_sec = 0;
++	s64 hrtimer_sec = ISOTP_ECHO_TIMEOUT;
+ 	int off;
+ 	int err;
+ 
+-	if (!so->bound)
++	if (!so->bound || so->tx.state == ISOTP_SHUTDOWN)
+ 		return -EADDRNOTAVAIL;
+ 
+-	/* we do not support multiple buffers - for now */
+-	if (cmpxchg(&so->tx.state, ISOTP_IDLE, ISOTP_SENDING) != ISOTP_IDLE ||
+-	    wq_has_sleeper(&so->wait)) {
+-		if (msg->msg_flags & MSG_DONTWAIT) {
+-			err = -EAGAIN;
+-			goto err_out;
+-		}
++	while (cmpxchg(&so->tx.state, ISOTP_IDLE, ISOTP_SENDING) != ISOTP_IDLE) {
++		/* we do not support multiple buffers - for now */
++		if (msg->msg_flags & MSG_DONTWAIT)
++			return -EAGAIN;
++
++		if (so->tx.state == ISOTP_SHUTDOWN)
++			return -EADDRNOTAVAIL;
+ 
+ 		/* wait for complete transmission of current pdu */
+ 		err = wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
+ 		if (err)
+-			goto err_out;
++			goto err_event_drop;
+ 	}
+ 
+ 	if (!size || size > MAX_MSG_LENGTH) {
+@@ -894,7 +948,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	off = (so->tx.ll_dl > CAN_MAX_DLEN) ? 1 : 0;
+ 
+ 	/* does the given data fit into a single frame for SF_BROADCAST? */
+-	if ((so->opt.flags & CAN_ISOTP_SF_BROADCAST) &&
++	if ((isotp_bc_flags(so) == CAN_ISOTP_SF_BROADCAST) &&
+ 	    (size > so->tx.ll_dl - SF_PCI_SZ4 - ae - off)) {
+ 		err = -EINVAL;
+ 		goto err_out_drop;
+@@ -927,6 +981,10 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	cf = (struct canfd_frame *)skb->data;
+ 	skb_put_zero(skb, so->ll.mtu);
+ 
++	/* cfecho should have been zero'ed by init / former isotp_rcv_echo() */
++	if (so->cfecho)
++		pr_notice_once("can-isotp: uninit cfecho %08X\n", so->cfecho);
++
+ 	/* check for single frame transmission depending on TX_DL */
+ 	if (size <= so->tx.ll_dl - SF_PCI_SZ4 - ae - off) {
+ 		/* The message size generally fits into a SingleFrame - good.
+@@ -952,22 +1010,40 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 		else
+ 			cf->data[ae] |= size;
+ 
+-		so->tx.state = ISOTP_IDLE;
+-		wake_up_interruptible(&so->wait);
+-
+-		/* don't enable wait queue for a single frame transmission */
+-		wait_tx_done = 0;
++		/* set CF echo tag for isotp_rcv_echo() (SF-mode) */
++		so->cfecho = *(u32 *)cf->data;
+ 	} else {
+-		/* send first frame and wait for FC */
++		/* send first frame */
+ 
+ 		isotp_create_fframe(cf, so, ae);
+ 
+-		/* start timeout for FC */
+-		hrtimer_sec = 1;
+-		hrtimer_start(&so->txtimer, ktime_set(hrtimer_sec, 0),
+-			      HRTIMER_MODE_REL_SOFT);
++		if (isotp_bc_flags(so) == CAN_ISOTP_CF_BROADCAST) {
++			/* set timer for FC-less operation (STmin = 0) */
++			if (so->opt.flags & CAN_ISOTP_FORCE_TXSTMIN)
++				so->tx_gap = ktime_set(0, so->force_tx_stmin);
++			else
++				so->tx_gap = ktime_set(0, so->frame_txtime);
++
++			/* disable wait for FCs due to activated block size */
++			so->txfc.bs = 0;
++
++			/* set CF echo tag for isotp_rcv_echo() (CF-mode) */
++			so->cfecho = *(u32 *)cf->data;
++		} else {
++			/* standard flow control check */
++			so->tx.state = ISOTP_WAIT_FIRST_FC;
++
++			/* start timeout for FC */
++			hrtimer_sec = ISOTP_FC_TIMEOUT;
++
++			/* no CF echo tag for isotp_rcv_echo() (FF-mode) */
++			so->cfecho = 0;
++		}
+ 	}
+ 
++	hrtimer_start(&so->txtimer, ktime_set(hrtimer_sec, 0),
++		      HRTIMER_MODE_REL_SOFT);
++
+ 	/* send the first or only CAN frame */
+ 	cf->flags = so->ll.tx_flags;
+ 
+@@ -976,19 +1052,23 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	err = can_send(skb, 1);
+ 	dev_put(dev);
+ 	if (err) {
+-		pr_notice_once("can-isotp: %s: can_send_ret %d\n",
+-			       __func__, err);
++		pr_notice_once("can-isotp: %s: can_send_ret %pe\n",
++			       __func__, ERR_PTR(err));
+ 
+ 		/* no transmission -> no timeout monitoring */
+-		if (hrtimer_sec)
+-			hrtimer_cancel(&so->txtimer);
++		hrtimer_cancel(&so->txtimer);
++
++		/* reset consecutive frame echo tag */
++		so->cfecho = 0;
+ 
+ 		goto err_out_drop;
+ 	}
+ 
+ 	if (wait_tx_done) {
+ 		/* wait for complete transmission of current pdu */
+-		wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
++		err = wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
++		if (err)
++			goto err_event_drop;
+ 
+ 		err = sock_error(sk);
+ 		if (err)
+@@ -997,13 +1077,15 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 
+ 	return size;
+ 
++err_event_drop:
++	/* got signal: force tx state machine to be idle */
++	so->tx.state = ISOTP_IDLE;
++	hrtimer_cancel(&so->txfrtimer);
++	hrtimer_cancel(&so->txtimer);
+ err_out_drop:
+ 	/* drop this PDU and unlock a potential wait queue */
+-	old_state = ISOTP_IDLE;
+-err_out:
+-	so->tx.state = old_state;
+-	if (so->tx.state == ISOTP_IDLE)
+-		wake_up_interruptible(&so->wait);
++	so->tx.state = ISOTP_IDLE;
++	wake_up_interruptible(&so->wait);
+ 
+ 	return err;
+ }
+@@ -1067,7 +1149,13 @@ static int isotp_release(struct socket *sock)
+ 	net = sock_net(sk);
+ 
+ 	/* wait for complete transmission of current pdu */
+-	wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
++	while (wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE) == 0 &&
++	       cmpxchg(&so->tx.state, ISOTP_IDLE, ISOTP_SHUTDOWN) != ISOTP_IDLE)
++		;
++
++	/* force state machines to be idle also when a signal occurred */
++	so->tx.state = ISOTP_SHUTDOWN;
++	so->rx.state = ISOTP_IDLE;
+ 
+ 	spin_lock(&isotp_notifier_lock);
+ 	while (isotp_busy_notifier == so) {
+@@ -1081,21 +1169,27 @@ static int isotp_release(struct socket *sock)
+ 	lock_sock(sk);
+ 
+ 	/* remove current filters & unregister */
+-	if (so->bound && (!(so->opt.flags & CAN_ISOTP_SF_BROADCAST))) {
++	if (so->bound) {
+ 		if (so->ifindex) {
+ 			struct net_device *dev;
+ 
+ 			dev = dev_get_by_index(net, so->ifindex);
+ 			if (dev) {
+-				can_rx_unregister(net, dev, so->rxid,
+-						  SINGLE_MASK(so->rxid),
+-						  isotp_rcv, sk);
++				if (isotp_register_rxid(so))
++					can_rx_unregister(net, dev, so->rxid,
++							  SINGLE_MASK(so->rxid),
++							  isotp_rcv, sk);
++
++				can_rx_unregister(net, dev, so->txid,
++						  SINGLE_MASK(so->txid),
++						  isotp_rcv_echo, sk);
+ 				dev_put(dev);
+ 				synchronize_rcu();
+ 			}
+ 		}
+ 	}
+ 
++	hrtimer_cancel(&so->txfrtimer);
+ 	hrtimer_cancel(&so->txtimer);
+ 	hrtimer_cancel(&so->rxtimer);
+ 
+@@ -1119,26 +1213,38 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 	struct net *net = sock_net(sk);
+ 	int ifindex;
+ 	struct net_device *dev;
+-	canid_t tx_id, rx_id;
++	canid_t tx_id = addr->can_addr.tp.tx_id;
++	canid_t rx_id = addr->can_addr.tp.rx_id;
+ 	int err = 0;
+ 	int notify_enetdown = 0;
+-	int do_rx_reg = 1;
+ 
+ 	if (len < ISOTP_MIN_NAMELEN)
+ 		return -EINVAL;
+ 
+-	/* sanitize tx/rx CAN identifiers */
+-	tx_id = addr->can_addr.tp.tx_id;
++	if (addr->can_family != AF_CAN)
++		return -EINVAL;
++
++	/* sanitize tx CAN identifier */
+ 	if (tx_id & CAN_EFF_FLAG)
+ 		tx_id &= (CAN_EFF_FLAG | CAN_EFF_MASK);
+ 	else
+ 		tx_id &= CAN_SFF_MASK;
+ 
+-	rx_id = addr->can_addr.tp.rx_id;
+-	if (rx_id & CAN_EFF_FLAG)
+-		rx_id &= (CAN_EFF_FLAG | CAN_EFF_MASK);
+-	else
+-		rx_id &= CAN_SFF_MASK;
++	/* give feedback on wrong CAN-ID value */
++	if (tx_id != addr->can_addr.tp.tx_id)
++		return -EINVAL;
++
++	/* sanitize rx CAN identifier (if needed) */
++	if (isotp_register_rxid(so)) {
++		if (rx_id & CAN_EFF_FLAG)
++			rx_id &= (CAN_EFF_FLAG | CAN_EFF_MASK);
++		else
++			rx_id &= CAN_SFF_MASK;
++
++		/* give feedback on wrong CAN-ID value */
++		if (rx_id != addr->can_addr.tp.rx_id)
++			return -EINVAL;
++	}
+ 
+ 	if (!addr->can_ifindex)
+ 		return -ENODEV;
+@@ -1150,12 +1256,8 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 		goto out;
+ 	}
+ 
+-	/* do not register frame reception for functional addressing */
+-	if (so->opt.flags & CAN_ISOTP_SF_BROADCAST)
+-		do_rx_reg = 0;
+-
+-	/* do not validate rx address for functional addressing */
+-	if (do_rx_reg && rx_id == tx_id) {
++	/* ensure different CAN IDs when the rx_id is to be registered */
++	if (isotp_register_rxid(so) && rx_id == tx_id) {
+ 		err = -EADDRNOTAVAIL;
+ 		goto out;
+ 	}
+@@ -1180,10 +1282,17 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 
+ 	ifindex = dev->ifindex;
+ 
+-	if (do_rx_reg)
++	if (isotp_register_rxid(so))
+ 		can_rx_register(net, dev, rx_id, SINGLE_MASK(rx_id),
+ 				isotp_rcv, sk, "isotp", sk);
+ 
++	/* no consecutive frame echo skb in flight */
++	so->cfecho = 0;
++
++	/* register for echo skb's */
++	can_rx_register(net, dev, tx_id, SINGLE_MASK(tx_id),
++			isotp_rcv_echo, sk, "isotpe", sk);
++
+ 	dev_put(dev);
+ 
+ 	/* switch to new settings */
+@@ -1244,6 +1353,15 @@ static int isotp_setsockopt_locked(struct socket *sock, int level, int optname,
+ 		if (!(so->opt.flags & CAN_ISOTP_RX_EXT_ADDR))
+ 			so->opt.rx_ext_address = so->opt.ext_address;
+ 
++		/* these broadcast flags are not allowed together */
++		if (isotp_bc_flags(so) == ISOTP_ALL_BC_FLAGS) {
++			/* CAN_ISOTP_SF_BROADCAST is prioritized */
++			so->opt.flags &= ~CAN_ISOTP_CF_BROADCAST;
++
++			/* give user feedback on wrong config attempt */
++			ret = -EINVAL;
++		}
++
+ 		/* check for frame_txtime changes (0 => no changes) */
+ 		if (so->opt.frame_txtime) {
+ 			if (so->opt.frame_txtime == CAN_ISOTP_FRAME_TXTIME_ZERO)
+@@ -1394,10 +1512,16 @@ static void isotp_notify(struct isotp_sock *so, unsigned long msg,
+ 	case NETDEV_UNREGISTER:
+ 		lock_sock(sk);
+ 		/* remove current filters & unregister */
+-		if (so->bound && (!(so->opt.flags & CAN_ISOTP_SF_BROADCAST)))
+-			can_rx_unregister(dev_net(dev), dev, so->rxid,
+-					  SINGLE_MASK(so->rxid),
+-					  isotp_rcv, sk);
++		if (so->bound) {
++			if (isotp_register_rxid(so))
++				can_rx_unregister(dev_net(dev), dev, so->rxid,
++						  SINGLE_MASK(so->rxid),
++						  isotp_rcv, sk);
++
++			can_rx_unregister(dev_net(dev), dev, so->txid,
++					  SINGLE_MASK(so->txid),
++					  isotp_rcv_echo, sk);
++		}
+ 
+ 		so->ifindex = 0;
+ 		so->bound  = 0;
+@@ -1470,6 +1594,8 @@ static int isotp_init(struct sock *sk)
+ 	so->rxtimer.function = isotp_rx_timer_handler;
+ 	hrtimer_init(&so->txtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
+ 	so->txtimer.function = isotp_tx_timer_handler;
++	hrtimer_init(&so->txfrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
++	so->txfrtimer.function = isotp_txfr_timer_handler;
+ 
+ 	init_waitqueue_head(&so->wait);
+ 	spin_lock_init(&so->rx_lock);
+@@ -1550,7 +1676,7 @@ static __init int isotp_module_init(void)
+ 
+ 	err = can_proto_register(&isotp_can_proto);
+ 	if (err < 0)
+-		pr_err("can: registration of isotp protocol failed\n");
++		pr_err("can: registration of isotp protocol failed %pe\n", ERR_PTR(err));
+ 	else
+ 		register_netdevice_notifier(&canisotp_notifier);
+ 
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 15267428c4f83..4c43183a8d93a 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -224,7 +224,8 @@ bool neigh_remove_one(struct neighbour *ndel, struct neigh_table *tbl)
+ 
+ static int neigh_forced_gc(struct neigh_table *tbl)
+ {
+-	int max_clean = atomic_read(&tbl->gc_entries) - tbl->gc_thresh2;
++	int max_clean = atomic_read(&tbl->gc_entries) -
++			READ_ONCE(tbl->gc_thresh2);
+ 	unsigned long tref = jiffies - 5 * HZ;
+ 	struct neighbour *n, *tmp;
+ 	int shrunk = 0;
+@@ -253,7 +254,7 @@ static int neigh_forced_gc(struct neigh_table *tbl)
+ 		}
+ 	}
+ 
+-	tbl->last_flush = jiffies;
++	WRITE_ONCE(tbl->last_flush, jiffies);
+ 
+ 	write_unlock_bh(&tbl->lock);
+ 
+@@ -409,17 +410,17 @@ static struct neighbour *neigh_alloc(struct neigh_table *tbl,
+ {
+ 	struct neighbour *n = NULL;
+ 	unsigned long now = jiffies;
+-	int entries;
++	int entries, gc_thresh3;
+ 
+ 	if (exempt_from_gc)
+ 		goto do_alloc;
+ 
+ 	entries = atomic_inc_return(&tbl->gc_entries) - 1;
+-	if (entries >= tbl->gc_thresh3 ||
+-	    (entries >= tbl->gc_thresh2 &&
+-	     time_after(now, tbl->last_flush + 5 * HZ))) {
+-		if (!neigh_forced_gc(tbl) &&
+-		    entries >= tbl->gc_thresh3) {
++	gc_thresh3 = READ_ONCE(tbl->gc_thresh3);
++	if (entries >= gc_thresh3 ||
++	    (entries >= READ_ONCE(tbl->gc_thresh2) &&
++	     time_after(now, READ_ONCE(tbl->last_flush) + 5 * HZ))) {
++		if (!neigh_forced_gc(tbl) && entries >= gc_thresh3) {
+ 			net_info_ratelimited("%s: neighbor table overflow!\n",
+ 					     tbl->id);
+ 			NEIGH_CACHE_STAT_INC(tbl, table_fulls);
+@@ -902,13 +903,14 @@ static void neigh_periodic_work(struct work_struct *work)
+ 
+ 	if (time_after(jiffies, tbl->last_rand + 300 * HZ)) {
+ 		struct neigh_parms *p;
+-		tbl->last_rand = jiffies;
++
++		WRITE_ONCE(tbl->last_rand, jiffies);
+ 		list_for_each_entry(p, &tbl->parms_list, list)
+ 			p->reachable_time =
+ 				neigh_rand_reach_time(NEIGH_VAR(p, BASE_REACHABLE_TIME));
+ 	}
+ 
+-	if (atomic_read(&tbl->entries) < tbl->gc_thresh1)
++	if (atomic_read(&tbl->entries) < READ_ONCE(tbl->gc_thresh1))
+ 		goto out;
+ 
+ 	for (i = 0 ; i < (1 << nht->hash_shift); i++) {
+@@ -2055,15 +2057,16 @@ static int neightbl_fill_info(struct sk_buff *skb, struct neigh_table *tbl,
+ 	ndtmsg->ndtm_pad2   = 0;
+ 
+ 	if (nla_put_string(skb, NDTA_NAME, tbl->id) ||
+-	    nla_put_msecs(skb, NDTA_GC_INTERVAL, tbl->gc_interval, NDTA_PAD) ||
+-	    nla_put_u32(skb, NDTA_THRESH1, tbl->gc_thresh1) ||
+-	    nla_put_u32(skb, NDTA_THRESH2, tbl->gc_thresh2) ||
+-	    nla_put_u32(skb, NDTA_THRESH3, tbl->gc_thresh3))
++	    nla_put_msecs(skb, NDTA_GC_INTERVAL, READ_ONCE(tbl->gc_interval),
++			  NDTA_PAD) ||
++	    nla_put_u32(skb, NDTA_THRESH1, READ_ONCE(tbl->gc_thresh1)) ||
++	    nla_put_u32(skb, NDTA_THRESH2, READ_ONCE(tbl->gc_thresh2)) ||
++	    nla_put_u32(skb, NDTA_THRESH3, READ_ONCE(tbl->gc_thresh3)))
+ 		goto nla_put_failure;
+ 	{
+ 		unsigned long now = jiffies;
+-		long flush_delta = now - tbl->last_flush;
+-		long rand_delta = now - tbl->last_rand;
++		long flush_delta = now - READ_ONCE(tbl->last_flush);
++		long rand_delta = now - READ_ONCE(tbl->last_rand);
+ 		struct neigh_hash_table *nht;
+ 		struct ndt_config ndc = {
+ 			.ndtc_key_len		= tbl->key_len,
+@@ -2071,7 +2074,7 @@ static int neightbl_fill_info(struct sk_buff *skb, struct neigh_table *tbl,
+ 			.ndtc_entries		= atomic_read(&tbl->entries),
+ 			.ndtc_last_flush	= jiffies_to_msecs(flush_delta),
+ 			.ndtc_last_rand		= jiffies_to_msecs(rand_delta),
+-			.ndtc_proxy_qlen	= tbl->proxy_queue.qlen,
++			.ndtc_proxy_qlen	= READ_ONCE(tbl->proxy_queue.qlen),
+ 		};
+ 
+ 		rcu_read_lock_bh();
+@@ -2094,17 +2097,17 @@ static int neightbl_fill_info(struct sk_buff *skb, struct neigh_table *tbl,
+ 			struct neigh_statistics	*st;
+ 
+ 			st = per_cpu_ptr(tbl->stats, cpu);
+-			ndst.ndts_allocs		+= st->allocs;
+-			ndst.ndts_destroys		+= st->destroys;
+-			ndst.ndts_hash_grows		+= st->hash_grows;
+-			ndst.ndts_res_failed		+= st->res_failed;
+-			ndst.ndts_lookups		+= st->lookups;
+-			ndst.ndts_hits			+= st->hits;
+-			ndst.ndts_rcv_probes_mcast	+= st->rcv_probes_mcast;
+-			ndst.ndts_rcv_probes_ucast	+= st->rcv_probes_ucast;
+-			ndst.ndts_periodic_gc_runs	+= st->periodic_gc_runs;
+-			ndst.ndts_forced_gc_runs	+= st->forced_gc_runs;
+-			ndst.ndts_table_fulls		+= st->table_fulls;
++			ndst.ndts_allocs		+= READ_ONCE(st->allocs);
++			ndst.ndts_destroys		+= READ_ONCE(st->destroys);
++			ndst.ndts_hash_grows		+= READ_ONCE(st->hash_grows);
++			ndst.ndts_res_failed		+= READ_ONCE(st->res_failed);
++			ndst.ndts_lookups		+= READ_ONCE(st->lookups);
++			ndst.ndts_hits			+= READ_ONCE(st->hits);
++			ndst.ndts_rcv_probes_mcast	+= READ_ONCE(st->rcv_probes_mcast);
++			ndst.ndts_rcv_probes_ucast	+= READ_ONCE(st->rcv_probes_ucast);
++			ndst.ndts_periodic_gc_runs	+= READ_ONCE(st->periodic_gc_runs);
++			ndst.ndts_forced_gc_runs	+= READ_ONCE(st->forced_gc_runs);
++			ndst.ndts_table_fulls		+= READ_ONCE(st->table_fulls);
+ 		}
+ 
+ 		if (nla_put_64bit(skb, NDTA_STATS, sizeof(ndst), &ndst,
+@@ -2328,16 +2331,16 @@ static int neightbl_set(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 		goto errout_tbl_lock;
+ 
+ 	if (tb[NDTA_THRESH1])
+-		tbl->gc_thresh1 = nla_get_u32(tb[NDTA_THRESH1]);
++		WRITE_ONCE(tbl->gc_thresh1, nla_get_u32(tb[NDTA_THRESH1]));
+ 
+ 	if (tb[NDTA_THRESH2])
+-		tbl->gc_thresh2 = nla_get_u32(tb[NDTA_THRESH2]);
++		WRITE_ONCE(tbl->gc_thresh2, nla_get_u32(tb[NDTA_THRESH2]));
+ 
+ 	if (tb[NDTA_THRESH3])
+-		tbl->gc_thresh3 = nla_get_u32(tb[NDTA_THRESH3]);
++		WRITE_ONCE(tbl->gc_thresh3, nla_get_u32(tb[NDTA_THRESH3]));
+ 
+ 	if (tb[NDTA_GC_INTERVAL])
+-		tbl->gc_interval = nla_get_msecs(tb[NDTA_GC_INTERVAL]);
++		WRITE_ONCE(tbl->gc_interval, nla_get_msecs(tb[NDTA_GC_INTERVAL]));
+ 
+ 	err = 0;
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 3f2b6a3adf6a9..0c935904ced82 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2185,16 +2185,17 @@ void tcp_enter_loss(struct sock *sk)
+  * restore sanity to the SACK scoreboard. If the apparent reneging
+  * persists until this RTO then we'll clear the SACK scoreboard.
+  */
+-static bool tcp_check_sack_reneging(struct sock *sk, int flag)
++static bool tcp_check_sack_reneging(struct sock *sk, int *ack_flag)
+ {
+-	if (flag & FLAG_SACK_RENEGING &&
+-	    flag & FLAG_SND_UNA_ADVANCED) {
++	if (*ack_flag & FLAG_SACK_RENEGING &&
++	    *ack_flag & FLAG_SND_UNA_ADVANCED) {
+ 		struct tcp_sock *tp = tcp_sk(sk);
+ 		unsigned long delay = max(usecs_to_jiffies(tp->srtt_us >> 4),
+ 					  msecs_to_jiffies(10));
+ 
+ 		inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS,
+ 					  delay, TCP_RTO_MAX);
++		*ack_flag &= ~FLAG_SET_XMIT_TIMER;
+ 		return true;
+ 	}
+ 	return false;
+@@ -2950,7 +2951,7 @@ static void tcp_fastretrans_alert(struct sock *sk, const u32 prior_snd_una,
+ 		tp->prior_ssthresh = 0;
+ 
+ 	/* B. In all the states check for reneging SACKs. */
+-	if (tcp_check_sack_reneging(sk, flag))
++	if (tcp_check_sack_reneging(sk, ack_flag))
+ 		return;
+ 
+ 	/* C. Check consistency of the current state. */
+diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c
+index f087baa95b07b..80c09070ea9fa 100644
+--- a/net/netfilter/nfnetlink_log.c
++++ b/net/netfilter/nfnetlink_log.c
+@@ -683,8 +683,8 @@ nfulnl_log_packet(struct net *net,
+ 	unsigned int plen = 0;
+ 	struct nfnl_log_net *log = nfnl_log_pernet(net);
+ 	const struct nfnl_ct_hook *nfnl_ct = NULL;
++	enum ip_conntrack_info ctinfo = 0;
+ 	struct nf_conn *ct = NULL;
+-	enum ip_conntrack_info ctinfo;
+ 
+ 	if (li_user && li_user->type == NF_LOG_TYPE_ULOG)
+ 		li = li_user;
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index b2d2ba561eba1..f2a0c10682fc8 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -364,7 +364,7 @@ static int u32_init(struct tcf_proto *tp)
+ 	idr_init(&root_ht->handle_idr);
+ 
+ 	if (tp_c == NULL) {
+-		tp_c = kzalloc(struct_size(tp_c, hlist->ht, 1), GFP_KERNEL);
++		tp_c = kzalloc(sizeof(*tp_c), GFP_KERNEL);
+ 		if (tp_c == NULL) {
+ 			kfree(root_ht);
+ 			return -ENOBUFS;
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 801c89a3a1b6f..48c78388c1d20 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -329,6 +329,12 @@ static const struct config_entry config_table[] = {
+ 					DMI_MATCH(DMI_SYS_VENDOR, "Google"),
+ 				}
+ 			},
++			{
++				.ident = "Google firmware",
++				.matches = {
++					DMI_MATCH(DMI_BIOS_VERSION, "Google"),
++				}
++			},
+ 			{}
+ 		}
+ 	},
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index d1533e95a74f6..99d91bfb88122 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -3241,6 +3241,8 @@ int rt5645_set_jack_detect(struct snd_soc_component *component,
+ 				RT5645_GP1_PIN_IRQ, RT5645_GP1_PIN_IRQ);
+ 		regmap_update_bits(rt5645->regmap, RT5645_GEN_CTRL1,
+ 				RT5645_DIG_GATE_CTRL, RT5645_DIG_GATE_CTRL);
++		regmap_update_bits(rt5645->regmap, RT5645_DEPOP_M1,
++				RT5645_HP_CB_MASK, RT5645_HP_CB_PU);
+ 	}
+ 	rt5645_irq(0, rt5645);
+ 
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index bd24951faa094..059b78d08f7af 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -2107,7 +2107,7 @@ static bool is_special_call(struct instruction *insn)
+ 		if (!dest)
+ 			return false;
+ 
+-		if (dest->fentry)
++		if (dest->fentry || dest->embedded_insn)
+ 			return true;
+ 	}
+ 
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_non_uniq_symbol.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_non_uniq_symbol.tc
+new file mode 100644
+index 0000000000000..bc9514428dbaf
+--- /dev/null
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_non_uniq_symbol.tc
+@@ -0,0 +1,13 @@
++#!/bin/sh
++# SPDX-License-Identifier: GPL-2.0
++# description: Test failure of registering kprobe on non unique symbol
++# requires: kprobe_events
++
++SYMBOL='name_show'
++
++# We skip this test on kernel where SYMBOL is unique or does not exist.
++if [ "$(grep -c -E "[[:alnum:]]+ t ${SYMBOL}" /proc/kallsyms)" -le '1' ]; then
++	exit_unsupported
++fi
++
++! echo "p:test_non_unique ${SYMBOL}" > kprobe_events


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-11-20 11:25 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-11-20 11:25 UTC (permalink / raw
  To: gentoo-commits

commit:     acc00e60c0af5285291a995497b845ada326c8c0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Nov 20 11:25:31 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Nov 20 11:25:31 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=acc00e60

Linux patch 5.10.201

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1200_linux-5.10.201.patch | 7298 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7302 insertions(+)

diff --git a/0000_README b/0000_README
index 7b076361..be2bdc48 100644
--- a/0000_README
+++ b/0000_README
@@ -843,6 +843,10 @@ Patch:  1199_linux-5.10.200.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.200
 
+Patch:  1200_linux-5.10.201.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.201
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1200_linux-5.10.201.patch b/1200_linux-5.10.201.patch
new file mode 100644
index 00000000..d8d57dd8
--- /dev/null
+++ b/1200_linux-5.10.201.patch
@@ -0,0 +1,7298 @@
+diff --git a/Documentation/process/deprecated.rst b/Documentation/process/deprecated.rst
+index 9d83b8db88740..86ea327b7e3a7 100644
+--- a/Documentation/process/deprecated.rst
++++ b/Documentation/process/deprecated.rst
+@@ -70,6 +70,9 @@ Instead, the 2-factor form of the allocator should be used::
+ 
+ 	foo = kmalloc_array(count, size, GFP_KERNEL);
+ 
++Specifically, kmalloc() can be replaced with kmalloc_array(), and
++kzalloc() can be replaced with kcalloc().
++
+ If no 2-factor form is available, the saturate-on-overflow helpers should
+ be used::
+ 
+@@ -90,9 +93,20 @@ Instead, use the helper::
+         array usage and switch to a `flexible array member
+         <#zero-length-and-one-element-arrays>`_ instead.
+ 
+-See array_size(), array3_size(), and struct_size(),
+-for more details as well as the related check_add_overflow() and
+-check_mul_overflow() family of functions.
++For other calculations, please compose the use of the size_mul(),
++size_add(), and size_sub() helpers. For example, in the case of::
++
++	foo = krealloc(current_size + chunk_size * (count - 3), GFP_KERNEL);
++
++Instead, use the helpers::
++
++	foo = krealloc(size_add(current_size,
++				size_mul(chunk_size,
++					 size_sub(count, 3))), GFP_KERNEL);
++
++For more details, also see array3_size() and flex_array_size(),
++as well as the related check_mul_overflow(), check_add_overflow(),
++check_sub_overflow(), and check_shl_overflow() family of functions.
+ 
+ simple_strtol(), simple_strtoll(), simple_strtoul(), simple_strtoull()
+ ----------------------------------------------------------------------
+diff --git a/Makefile b/Makefile
+index da4a3de444cfd..7ef22771de767 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 200
++SUBLEVEL = 201
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/qcom-mdm9615.dtsi b/arch/arm/boot/dts/qcom-mdm9615.dtsi
+index ad9b52d53ef9b..982f3c3921965 100644
+--- a/arch/arm/boot/dts/qcom-mdm9615.dtsi
++++ b/arch/arm/boot/dts/qcom-mdm9615.dtsi
+@@ -82,14 +82,12 @@
+ 		};
+ 	};
+ 
+-	regulators {
+-		vsdcc_fixed: vsdcc-regulator {
+-			compatible = "regulator-fixed";
+-			regulator-name = "SDCC Power";
+-			regulator-min-microvolt = <2700000>;
+-			regulator-max-microvolt = <2700000>;
+-			regulator-always-on;
+-		};
++	vsdcc_fixed: vsdcc-regulator {
++		compatible = "regulator-fixed";
++		regulator-name = "SDCC Power";
++		regulator-min-microvolt = <2700000>;
++		regulator-max-microvolt = <2700000>;
++		regulator-always-on;
+ 	};
+ 
+ 	soc: soc {
+diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S
+index 6ca4535c47fb6..e36d053a8a903 100644
+--- a/arch/arm/lib/memset.S
++++ b/arch/arm/lib/memset.S
+@@ -16,6 +16,7 @@
+ ENTRY(mmioset)
+ ENTRY(memset)
+ UNWIND( .fnstart         )
++	and	r1, r1, #255		@ cast to unsigned char
+ 	ands	r3, r0, #3		@ 1 unaligned?
+ 	mov	ip, r0			@ preserve r0 as return value
+ 	bne	6f			@ 1
+diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
+index 8ad576ecd0f1d..3d25fd615250a 100644
+--- a/arch/arm/xen/enlighten.c
++++ b/arch/arm/xen/enlighten.c
+@@ -158,9 +158,6 @@ static int xen_starting_cpu(unsigned int cpu)
+ 	BUG_ON(err);
+ 	per_cpu(xen_vcpu, cpu) = vcpup;
+ 
+-	if (!xen_kernel_unmapped_at_usr())
+-		xen_setup_runstate_info(cpu);
+-
+ after_register_vcpu_info:
+ 	enable_percpu_irq(xen_events_irq, 0);
+ 	return 0;
+@@ -386,9 +383,6 @@ static int __init xen_guest_init(void)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!xen_kernel_unmapped_at_usr())
+-		xen_time_setup_guest();
+-
+ 	if (xen_initial_domain())
+ 		pvclock_gtod_register_notifier(&xen_pvclock_gtod_notifier);
+ 
+@@ -398,7 +392,13 @@ static int __init xen_guest_init(void)
+ }
+ early_initcall(xen_guest_init);
+ 
+-static int __init xen_pm_init(void)
++static int xen_starting_runstate_cpu(unsigned int cpu)
++{
++	xen_setup_runstate_info(cpu);
++	return 0;
++}
++
++static int __init xen_late_init(void)
+ {
+ 	if (!xen_domain())
+ 		return -ENODEV;
+@@ -411,9 +411,16 @@ static int __init xen_pm_init(void)
+ 		do_settimeofday64(&ts);
+ 	}
+ 
+-	return 0;
++	if (xen_kernel_unmapped_at_usr())
++		return 0;
++
++	xen_time_setup_guest();
++
++	return cpuhp_setup_state(CPUHP_AP_ARM_XEN_RUNSTATE_STARTING,
++				 "arm/xen_runstate:starting",
++				 xen_starting_runstate_cpu, NULL);
+ }
+-late_initcall(xen_pm_init);
++late_initcall(xen_late_init);
+ 
+ 
+ /* empty stubs */
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 5b79e4a373311..c39a299fc636f 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -1175,7 +1175,7 @@
+ 			#size-cells = <1>;
+ 			#iommu-cells = <1>;
+ 			compatible = "qcom,msm8916-iommu", "qcom,msm-iommu-v1";
+-			ranges = <0 0x01e20000 0x40000>;
++			ranges = <0 0x01e20000 0x20000>;
+ 			reg = <0x01ef0000 0x3000>;
+ 			clocks = <&gcc GCC_SMMU_CFG_CLK>,
+ 				 <&gcc GCC_APSS_TCU_CLK>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-mtp.dts b/arch/arm64/boot/dts/qcom/sdm845-mtp.dts
+index 1372fe8601f50..e3d2fb9aafe42 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-mtp.dts
+@@ -564,6 +564,8 @@
+ 	vdd-1.8-xo-supply = <&vreg_l7a_1p8>;
+ 	vdd-1.3-rfa-supply = <&vreg_l17a_1p3>;
+ 	vdd-3.3-ch0-supply = <&vreg_l25a_3p3>;
++
++	qcom,snoc-host-cap-8bit-quirk;
+ };
+ 
+ /* PINCTRL - additions to nodes defined in sdm845.dtsi */
+diff --git a/arch/powerpc/include/asm/nohash/32/pte-40x.h b/arch/powerpc/include/asm/nohash/32/pte-40x.h
+index 2d3153cfc0d79..acf61242e85bf 100644
+--- a/arch/powerpc/include/asm/nohash/32/pte-40x.h
++++ b/arch/powerpc/include/asm/nohash/32/pte-40x.h
+@@ -69,9 +69,6 @@
+ 
+ #define _PTE_NONE_MASK	0
+ 
+-/* Until my rework is finished, 40x still needs atomic PTE updates */
+-#define PTE_ATOMIC_UPDATES	1
+-
+ #define _PAGE_BASE_NC	(_PAGE_PRESENT | _PAGE_ACCESSED)
+ #define _PAGE_BASE	(_PAGE_BASE_NC)
+ 
+diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
+index b773c411aa5c2..3e15d0d054b2d 100644
+--- a/arch/powerpc/perf/imc-pmu.c
++++ b/arch/powerpc/perf/imc-pmu.c
+@@ -50,7 +50,7 @@ static int trace_imc_mem_size;
+  * core and trace-imc
+  */
+ static struct imc_pmu_ref imc_global_refc = {
+-	.lock = __SPIN_LOCK_INITIALIZER(imc_global_refc.lock),
++	.lock = __SPIN_LOCK_UNLOCKED(imc_global_refc.lock),
+ 	.id = 0,
+ 	.refc = 0,
+ };
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index 68f3b082245e0..4a3425fb19398 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -523,8 +523,10 @@ static ssize_t vcpudispatch_stats_write(struct file *file, const char __user *p,
+ 
+ 	if (cmd) {
+ 		rc = init_cpu_associativity();
+-		if (rc)
++		if (rc) {
++			destroy_cpu_associativity();
+ 			goto out;
++		}
+ 
+ 		for_each_possible_cpu(cpu) {
+ 			disp = per_cpu_ptr(&vcpu_disp_data, cpu);
+diff --git a/arch/powerpc/sysdev/xive/native.c b/arch/powerpc/sysdev/xive/native.c
+index cb58ec7ce77ac..1c7e49d9eaeea 100644
+--- a/arch/powerpc/sysdev/xive/native.c
++++ b/arch/powerpc/sysdev/xive/native.c
+@@ -779,7 +779,7 @@ int xive_native_get_queue_info(u32 vp_id, u32 prio,
+ 	if (out_qpage)
+ 		*out_qpage = be64_to_cpu(qpage);
+ 	if (out_qsize)
+-		*out_qsize = be32_to_cpu(qsize);
++		*out_qsize = be64_to_cpu(qsize);
+ 	if (out_qeoi_page)
+ 		*out_qeoi_page = be64_to_cpu(qeoi_page);
+ 	if (out_escalate_irq)
+diff --git a/arch/sh/Kconfig.debug b/arch/sh/Kconfig.debug
+index 7bc1b10b81c96..f030dd4c08607 100644
+--- a/arch/sh/Kconfig.debug
++++ b/arch/sh/Kconfig.debug
+@@ -25,6 +25,17 @@ config STACK_DEBUG
+ 	  every function call and will therefore incur a major
+ 	  performance hit. Most users should say N.
+ 
++config EARLY_PRINTK
++	bool "Early printk"
++	depends on SH_STANDARD_BIOS
++	help
++	  Say Y here to redirect kernel printk messages to the serial port
++	  used by the SH-IPL bootloader, starting very early in the boot
++	  process and ending when the kernel's serial console is initialised.
++	  This option is only useful while porting the kernel to a new machine,
++	  when the kernel may crash or hang before the serial console is
++	  initialised.  If unsure, say N.
++
+ config 4KSTACKS
+ 	bool "Use 4Kb for kernel stacks instead of 8Kb"
+ 	depends on DEBUG_KERNEL && (MMU || BROKEN) && !PAGE_SIZE_64KB
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index d87421acddc39..5667b8b994e34 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -1360,20 +1360,10 @@ static void pt_addr_filters_fini(struct perf_event *event)
+ }
+ 
+ #ifdef CONFIG_X86_64
+-static u64 canonical_address(u64 vaddr, u8 vaddr_bits)
+-{
+-	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
+-}
+-
+-static u64 is_canonical_address(u64 vaddr, u8 vaddr_bits)
+-{
+-	return canonical_address(vaddr, vaddr_bits) == vaddr;
+-}
+-
+ /* Clamp to a canonical address greater-than-or-equal-to the address given */
+ static u64 clamp_to_ge_canonical_addr(u64 vaddr, u8 vaddr_bits)
+ {
+-	return is_canonical_address(vaddr, vaddr_bits) ?
++	return __is_canonical_address(vaddr, vaddr_bits) ?
+ 	       vaddr :
+ 	       -BIT_ULL(vaddr_bits - 1);
+ }
+@@ -1381,7 +1371,7 @@ static u64 clamp_to_ge_canonical_addr(u64 vaddr, u8 vaddr_bits)
+ /* Clamp to a canonical address less-than-or-equal-to the address given */
+ static u64 clamp_to_le_canonical_addr(u64 vaddr, u8 vaddr_bits)
+ {
+-	return is_canonical_address(vaddr, vaddr_bits) ?
++	return __is_canonical_address(vaddr, vaddr_bits) ?
+ 	       vaddr :
+ 	       BIT_ULL(vaddr_bits - 1) - 1;
+ }
+diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
+index 7555b48803a8c..ffae5ea9fd4e1 100644
+--- a/arch/x86/include/asm/page.h
++++ b/arch/x86/include/asm/page.h
+@@ -71,6 +71,16 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
+ extern bool __virt_addr_valid(unsigned long kaddr);
+ #define virt_addr_valid(kaddr)	__virt_addr_valid((unsigned long) (kaddr))
+ 
++static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
++{
++	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
++}
++
++static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
++{
++	return __canonical_address(vaddr, vaddr_bits) == vaddr;
++}
++
+ #endif	/* __ASSEMBLY__ */
+ 
+ #include <asm-generic/memory_model.h>
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index bb1430283c726..bf2561a5eb581 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -446,7 +446,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
+ #define copy_mc_to_kernel copy_mc_to_kernel
+ 
+ unsigned long __must_check
+-copy_mc_to_user(void *to, const void *from, unsigned len);
++copy_mc_to_user(void __user *to, const void *from, unsigned len);
+ #endif
+ 
+ /*
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index ec3ddb9a456ba..d9fda0b6eb19e 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -2407,7 +2407,7 @@ static void __init srso_select_mitigation(void)
+ 	pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
+ 
+ pred_cmd:
+-	if ((boot_cpu_has(X86_FEATURE_SRSO_NO) || srso_cmd == SRSO_CMD_OFF) &&
++	if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&
+ 	     boot_cpu_has(X86_FEATURE_SBPB))
+ 		x86_pred_cmd = PRED_CMD_SBPB;
+ }
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index efe13ab366f47..8596b4dca9455 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -80,7 +80,7 @@ static struct desc_struct startup_gdt[GDT_ENTRIES] = {
+  * while the kernel still uses a direct mapping.
+  */
+ static struct desc_ptr startup_gdt_descr = {
+-	.size = sizeof(startup_gdt),
++	.size = sizeof(startup_gdt)-1,
+ 	.address = 0,
+ };
+ 
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 63efccc8f4292..56750febf4604 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -688,7 +688,7 @@ static inline u8 ctxt_virt_addr_bits(struct x86_emulate_ctxt *ctxt)
+ static inline bool emul_is_noncanonical_address(u64 la,
+ 						struct x86_emulate_ctxt *ctxt)
+ {
+-	return get_canonical(la, ctxt_virt_addr_bits(ctxt)) != la;
++	return !__is_canonical_address(la, ctxt_virt_addr_bits(ctxt));
+ }
+ 
+ /*
+@@ -738,7 +738,7 @@ static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
+ 	case X86EMUL_MODE_PROT64:
+ 		*linear = la;
+ 		va_bits = ctxt_virt_addr_bits(ctxt);
+-		if (get_canonical(la, va_bits) != la)
++		if (!__is_canonical_address(la, va_bits))
+ 			goto bad;
+ 
+ 		*max_size = min_t(u64, ~0u, (1ull << va_bits) - la);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 9d3015863e581..c2899ff31a068 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1640,7 +1640,7 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data,
+ 		 * value, and that something deterministic happens if the guest
+ 		 * invokes 64-bit SYSENTER.
+ 		 */
+-		data = get_canonical(data, vcpu_virt_addr_bits(vcpu));
++		data = __canonical_address(data, vcpu_virt_addr_bits(vcpu));
+ 	}
+ 
+ 	msr.data = data;
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index 2bff44f1efec8..4037b3cc704e8 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -156,14 +156,9 @@ static inline u8 vcpu_virt_addr_bits(struct kvm_vcpu *vcpu)
+ 	return kvm_read_cr4_bits(vcpu, X86_CR4_LA57) ? 57 : 48;
+ }
+ 
+-static inline u64 get_canonical(u64 la, u8 vaddr_bits)
+-{
+-	return ((int64_t)la << (64 - vaddr_bits)) >> (64 - vaddr_bits);
+-}
+-
+ static inline bool is_noncanonical_address(u64 la, struct kvm_vcpu *vcpu)
+ {
+-	return get_canonical(la, vcpu_virt_addr_bits(vcpu)) != la;
++	return !__is_canonical_address(la, vcpu_virt_addr_bits(vcpu));
+ }
+ 
+ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
+diff --git a/arch/x86/lib/copy_mc.c b/arch/x86/lib/copy_mc.c
+index c13e8c9ee926b..e058ef2d454d0 100644
+--- a/arch/x86/lib/copy_mc.c
++++ b/arch/x86/lib/copy_mc.c
+@@ -74,23 +74,23 @@ unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src, unsigne
+ }
+ EXPORT_SYMBOL_GPL(copy_mc_to_kernel);
+ 
+-unsigned long __must_check copy_mc_to_user(void *dst, const void *src, unsigned len)
++unsigned long __must_check copy_mc_to_user(void __user *dst, const void *src, unsigned len)
+ {
+ 	unsigned long ret;
+ 
+ 	if (copy_mc_fragile_enabled) {
+ 		__uaccess_begin();
+-		ret = copy_mc_fragile(dst, src, len);
++		ret = copy_mc_fragile((__force void *)dst, src, len);
+ 		__uaccess_end();
+ 		return ret;
+ 	}
+ 
+ 	if (static_cpu_has(X86_FEATURE_ERMS)) {
+ 		__uaccess_begin();
+-		ret = copy_mc_enhanced_fast_string(dst, src, len);
++		ret = copy_mc_enhanced_fast_string((__force void *)dst, src, len);
+ 		__uaccess_end();
+ 		return ret;
+ 	}
+ 
+-	return copy_user_generic(dst, src, len);
++	return copy_user_generic((__force void *)dst, src, len);
+ }
+diff --git a/arch/x86/mm/maccess.c b/arch/x86/mm/maccess.c
+index 92ec176a72937..6993f026adec9 100644
+--- a/arch/x86/mm/maccess.c
++++ b/arch/x86/mm/maccess.c
+@@ -4,22 +4,26 @@
+ #include <linux/kernel.h>
+ 
+ #ifdef CONFIG_X86_64
+-static __always_inline u64 canonical_address(u64 vaddr, u8 vaddr_bits)
+-{
+-	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
+-}
+-
+ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
+ {
+ 	unsigned long vaddr = (unsigned long)unsafe_src;
+ 
+ 	/*
+-	 * Range covering the highest possible canonical userspace address
+-	 * as well as non-canonical address range. For the canonical range
+-	 * we also need to include the userspace guard page.
++	 * Do not allow userspace addresses.  This disallows
++	 * normal userspace and the userspace guard page:
+ 	 */
+-	return vaddr >= TASK_SIZE_MAX + PAGE_SIZE &&
+-	       canonical_address(vaddr, boot_cpu_data.x86_virt_bits) == vaddr;
++	if (vaddr < TASK_SIZE_MAX + PAGE_SIZE)
++		return false;
++
++	/*
++	 * Allow everything during early boot before 'x86_virt_bits'
++	 * is initialized.  Needed for instruction decoding in early
++	 * exception handlers.
++	 */
++	if (!boot_cpu_data.x86_virt_bits)
++		return true;
++
++	return __is_canonical_address(vaddr, boot_cpu_data.x86_virt_bits);
+ }
+ #else
+ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
+diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c
+index fe8c7e79f4726..566067a855a13 100644
+--- a/drivers/acpi/device_sysfs.c
++++ b/drivers/acpi/device_sysfs.c
+@@ -156,8 +156,8 @@ static int create_pnp_modalias(struct acpi_device *acpi_dev, char *modalias,
+ 		return 0;
+ 
+ 	len = snprintf(modalias, size, "acpi:");
+-	if (len <= 0)
+-		return len;
++	if (len >= size)
++		return -ENOMEM;
+ 
+ 	size -= len;
+ 
+@@ -210,8 +210,10 @@ static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias,
+ 	len = snprintf(modalias, size, "of:N%sT", (char *)buf.pointer);
+ 	ACPI_FREE(buf.pointer);
+ 
+-	if (len <= 0)
+-		return len;
++	if (len >= size)
++		return -ENOMEM;
++
++	size -= len;
+ 
+ 	of_compatible = acpi_dev->data.of_compatible;
+ 	if (of_compatible->type == ACPI_TYPE_PACKAGE) {
+diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
+index 211a335a608d7..ed54dc31e6fd4 100644
+--- a/drivers/base/regmap/regmap-debugfs.c
++++ b/drivers/base/regmap/regmap-debugfs.c
+@@ -48,7 +48,7 @@ static ssize_t regmap_name_read_file(struct file *file,
+ 		name = map->dev->driver->name;
+ 
+ 	ret = snprintf(buf, PAGE_SIZE, "%s\n", name);
+-	if (ret < 0) {
++	if (ret >= PAGE_SIZE) {
+ 		kfree(buf);
+ 		return ret;
+ 	}
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 3edff8606ac95..7bc603145bd98 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1643,17 +1643,19 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+ 	}
+ 
+ 	if (!map->cache_bypass && map->format.parse_val) {
+-		unsigned int ival;
++		unsigned int ival, offset;
+ 		int val_bytes = map->format.val_bytes;
+-		for (i = 0; i < val_len / val_bytes; i++) {
+-			ival = map->format.parse_val(val + (i * val_bytes));
+-			ret = regcache_write(map,
+-					     reg + regmap_get_offset(map, i),
+-					     ival);
++
++		/* Cache the last written value for noinc writes */
++		i = noinc ? val_len - val_bytes : 0;
++		for (; i < val_len; i += val_bytes) {
++			ival = map->format.parse_val(val + i);
++			offset = noinc ? 0 : regmap_get_offset(map, i / val_bytes);
++			ret = regcache_write(map, reg + offset, ival);
+ 			if (ret) {
+ 				dev_err(map->dev,
+ 					"Error in caching of register: %x ret: %d\n",
+-					reg + regmap_get_offset(map, i), ret);
++					reg + offset, ret);
+ 				return ret;
+ 			}
+ 		}
+diff --git a/drivers/char/hw_random/geode-rng.c b/drivers/char/hw_random/geode-rng.c
+index 207272979f233..2f8289865ec81 100644
+--- a/drivers/char/hw_random/geode-rng.c
++++ b/drivers/char/hw_random/geode-rng.c
+@@ -58,7 +58,8 @@ struct amd_geode_priv {
+ 
+ static int geode_rng_data_read(struct hwrng *rng, u32 *data)
+ {
+-	void __iomem *mem = (void __iomem *)rng->priv;
++	struct amd_geode_priv *priv = (struct amd_geode_priv *)rng->priv;
++	void __iomem *mem = priv->membase;
+ 
+ 	*data = readl(mem + GEODE_RNG_DATA_REG);
+ 
+@@ -67,7 +68,8 @@ static int geode_rng_data_read(struct hwrng *rng, u32 *data)
+ 
+ static int geode_rng_data_present(struct hwrng *rng, int wait)
+ {
+-	void __iomem *mem = (void __iomem *)rng->priv;
++	struct amd_geode_priv *priv = (struct amd_geode_priv *)rng->priv;
++	void __iomem *mem = priv->membase;
+ 	int data, i;
+ 
+ 	for (i = 0; i < 20; i++) {
+diff --git a/drivers/clk/clk-asm9260.c b/drivers/clk/clk-asm9260.c
+index bacebd457e6f3..8b3c059e19a12 100644
+--- a/drivers/clk/clk-asm9260.c
++++ b/drivers/clk/clk-asm9260.c
+@@ -80,7 +80,7 @@ struct asm9260_mux_clock {
+ 	u8			mask;
+ 	u32			*table;
+ 	const char		*name;
+-	const char		**parent_names;
++	const struct clk_parent_data *parent_data;
+ 	u8			num_parents;
+ 	unsigned long		offset;
+ 	unsigned long		flags;
+@@ -232,10 +232,10 @@ static const struct asm9260_gate_data asm9260_ahb_gates[] __initconst = {
+ 		HW_AHBCLKCTRL1,	16 },
+ };
+ 
+-static const char __initdata *main_mux_p[] =   { NULL, NULL };
+-static const char __initdata *i2s0_mux_p[] =   { NULL, NULL, "i2s0m_div"};
+-static const char __initdata *i2s1_mux_p[] =   { NULL, NULL, "i2s1m_div"};
+-static const char __initdata *clkout_mux_p[] = { NULL, NULL, "rtc"};
++static struct clk_parent_data __initdata main_mux_p[] =   { { .index = 0, }, { .name = "pll" } };
++static struct clk_parent_data __initdata i2s0_mux_p[] =   { { .index = 0, }, { .name = "pll" }, { .name = "i2s0m_div"} };
++static struct clk_parent_data __initdata i2s1_mux_p[] =   { { .index = 0, }, { .name = "pll" }, { .name = "i2s1m_div"} };
++static struct clk_parent_data __initdata clkout_mux_p[] = { { .index = 0, }, { .name = "pll" }, { .name = "rtc"} };
+ static u32 three_mux_table[] = {0, 1, 3};
+ 
+ static struct asm9260_mux_clock asm9260_mux_clks[] __initdata = {
+@@ -255,9 +255,10 @@ static struct asm9260_mux_clock asm9260_mux_clks[] __initdata = {
+ 
+ static void __init asm9260_acc_init(struct device_node *np)
+ {
+-	struct clk_hw *hw;
++	struct clk_hw *hw, *pll_hw;
+ 	struct clk_hw **hws;
+-	const char *ref_clk, *pll_clk = "pll";
++	const char *pll_clk = "pll";
++	struct clk_parent_data pll_parent_data = { .index = 0 };
+ 	u32 rate;
+ 	int n;
+ 
+@@ -274,21 +275,15 @@ static void __init asm9260_acc_init(struct device_node *np)
+ 	/* register pll */
+ 	rate = (ioread32(base + HW_SYSPLLCTRL) & 0xffff) * 1000000;
+ 
+-	/* TODO: Convert to DT parent scheme */
+-	ref_clk = of_clk_get_parent_name(np, 0);
+-	hw = __clk_hw_register_fixed_rate(NULL, NULL, pll_clk,
+-			ref_clk, NULL, NULL, 0, rate, 0,
+-			CLK_FIXED_RATE_PARENT_ACCURACY);
+-
+-	if (IS_ERR(hw))
++	pll_hw = clk_hw_register_fixed_rate_parent_accuracy(NULL, pll_clk, &pll_parent_data,
++							0, rate);
++	if (IS_ERR(pll_hw))
+ 		panic("%pOFn: can't register REFCLK. Check DT!", np);
+ 
+ 	for (n = 0; n < ARRAY_SIZE(asm9260_mux_clks); n++) {
+ 		const struct asm9260_mux_clock *mc = &asm9260_mux_clks[n];
+ 
+-		mc->parent_names[0] = ref_clk;
+-		mc->parent_names[1] = pll_clk;
+-		hw = clk_hw_register_mux_table(NULL, mc->name, mc->parent_names,
++		hw = clk_hw_register_mux_table_parent_data(NULL, mc->name, mc->parent_data,
+ 				mc->num_parents, mc->flags, base + mc->offset,
+ 				0, mc->mask, 0, mc->table, &asm9260_clk_lock);
+ 	}
+diff --git a/drivers/clk/clk-npcm7xx.c b/drivers/clk/clk-npcm7xx.c
+index 27a86b7a34dbf..c82df105b0a21 100644
+--- a/drivers/clk/clk-npcm7xx.c
++++ b/drivers/clk/clk-npcm7xx.c
+@@ -647,7 +647,7 @@ static void __init npcm7xx_clk_init(struct device_node *clk_np)
+ 	return;
+ 
+ npcm7xx_init_fail:
+-	kfree(npcm7xx_clk_data->hws);
++	kfree(npcm7xx_clk_data);
+ npcm7xx_init_np_err:
+ 	iounmap(clk_base);
+ npcm7xx_init_error:
+diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c
+index c754dfbb73fd4..c62636fb4aca8 100644
+--- a/drivers/clk/clk-scmi.c
++++ b/drivers/clk/clk-scmi.c
+@@ -170,6 +170,7 @@ static int scmi_clocks_probe(struct scmi_device *sdev)
+ 		sclk->info = handle->clk_ops->info_get(handle, idx);
+ 		if (!sclk->info) {
+ 			dev_dbg(dev, "invalid clock info for idx %d\n", idx);
++			devm_kfree(dev, sclk);
+ 			continue;
+ 		}
+ 
+diff --git a/drivers/clk/imx/Kconfig b/drivers/clk/imx/Kconfig
+index 47d9ec3abd2f7..d3d730610cb4f 100644
+--- a/drivers/clk/imx/Kconfig
++++ b/drivers/clk/imx/Kconfig
+@@ -96,5 +96,6 @@ config CLK_IMX8QXP
+ 	depends on (ARCH_MXC && ARM64) || COMPILE_TEST
+ 	depends on IMX_SCU && HAVE_ARM_SMCCC
+ 	select MXC_CLK_SCU
++	select MXC_CLK
+ 	help
+ 	  Build the driver for IMX8QXP SCU based clocks.
+diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c
+index f679e5cc320b5..89313dd7a57f6 100644
+--- a/drivers/clk/imx/clk-imx8mq.c
++++ b/drivers/clk/imx/clk-imx8mq.c
+@@ -280,8 +280,7 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ 	void __iomem *base;
+ 	int err;
+ 
+-	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+-					  IMX8MQ_CLK_END), GFP_KERNEL);
++	clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws, IMX8MQ_CLK_END), GFP_KERNEL);
+ 	if (WARN_ON(!clk_hw_data))
+ 		return -ENOMEM;
+ 
+@@ -298,10 +297,12 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MQ_CLK_EXT4] = imx_obtain_fixed_clk_hw(np, "clk_ext4");
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx8mq-anatop");
+-	base = of_iomap(np, 0);
++	base = devm_of_iomap(dev, np, 0, NULL);
+ 	of_node_put(np);
+-	if (WARN_ON(!base))
+-		return -ENOMEM;
++	if (WARN_ON(IS_ERR(base))) {
++		err = PTR_ERR(base);
++		goto unregister_hws;
++	}
+ 
+ 	hws[IMX8MQ_ARM_PLL_REF_SEL] = imx_clk_hw_mux("arm_pll_ref_sel", base + 0x28, 16, 2, pll_ref_sels, ARRAY_SIZE(pll_ref_sels));
+ 	hws[IMX8MQ_GPU_PLL_REF_SEL] = imx_clk_hw_mux("gpu_pll_ref_sel", base + 0x18, 16, 2, pll_ref_sels, ARRAY_SIZE(pll_ref_sels));
+@@ -373,8 +374,10 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ 
+ 	np = dev->of_node;
+ 	base = devm_platform_ioremap_resource(pdev, 0);
+-	if (WARN_ON(IS_ERR(base)))
+-		return PTR_ERR(base);
++	if (WARN_ON(IS_ERR(base))) {
++		err = PTR_ERR(base);
++		goto unregister_hws;
++	}
+ 
+ 	/* CORE */
+ 	hws[IMX8MQ_CLK_A53_DIV] = imx8m_clk_hw_composite_core("arm_a53_div", imx8mq_a53_sels, base + 0x8000);
+diff --git a/drivers/clk/keystone/pll.c b/drivers/clk/keystone/pll.c
+index ee5c72369334f..6bbdd4705d71f 100644
+--- a/drivers/clk/keystone/pll.c
++++ b/drivers/clk/keystone/pll.c
+@@ -281,12 +281,13 @@ static void __init of_pll_div_clk_init(struct device_node *node)
+ 
+ 	clk = clk_register_divider(NULL, clk_name, parent_name, 0, reg, shift,
+ 				 mask, 0, NULL);
+-	if (clk) {
+-		of_clk_add_provider(node, of_clk_src_simple_get, clk);
+-	} else {
++	if (IS_ERR(clk)) {
+ 		pr_err("%s: error registering divider %s\n", __func__, clk_name);
+ 		iounmap(reg);
++		return;
+ 	}
++
++	of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ }
+ CLK_OF_DECLARE(pll_divider_clock, "ti,keystone,pll-divider-clock", of_pll_div_clk_init);
+ 
+@@ -328,10 +329,12 @@ static void __init of_pll_mux_clk_init(struct device_node *node)
+ 	clk = clk_register_mux(NULL, clk_name, (const char **)&parents,
+ 				ARRAY_SIZE(parents) , 0, reg, shift, mask,
+ 				0, NULL);
+-	if (clk)
+-		of_clk_add_provider(node, of_clk_src_simple_get, clk);
+-	else
++	if (IS_ERR(clk)) {
+ 		pr_err("%s: error registering mux %s\n", __func__, clk_name);
++		return;
++	}
++
++	of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ }
+ CLK_OF_DECLARE(pll_mux_clock, "ti,keystone,pll-mux-clock", of_pll_mux_clk_init);
+ 
+diff --git a/drivers/clk/mediatek/clk-mt2701.c b/drivers/clk/mediatek/clk-mt2701.c
+index 695be0f774270..c67cd73aca171 100644
+--- a/drivers/clk/mediatek/clk-mt2701.c
++++ b/drivers/clk/mediatek/clk-mt2701.c
+@@ -675,6 +675,8 @@ static int mtk_topckgen_init(struct platform_device *pdev)
+ 		return PTR_ERR(base);
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_TOP_NR);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_fixed_clks(top_fixed_clks, ARRAY_SIZE(top_fixed_clks),
+ 								clk_data);
+@@ -742,6 +744,8 @@ static void __init mtk_infrasys_init_early(struct device_node *node)
+ 
+ 	if (!infra_clk_data) {
+ 		infra_clk_data = mtk_alloc_clk_data(CLK_INFRA_NR);
++		if (!infra_clk_data)
++			return;
+ 
+ 		for (i = 0; i < CLK_INFRA_NR; i++)
+ 			infra_clk_data->clks[i] = ERR_PTR(-EPROBE_DEFER);
+@@ -768,6 +772,8 @@ static int mtk_infrasys_init(struct platform_device *pdev)
+ 
+ 	if (!infra_clk_data) {
+ 		infra_clk_data = mtk_alloc_clk_data(CLK_INFRA_NR);
++		if (!infra_clk_data)
++			return -ENOMEM;
+ 	} else {
+ 		for (i = 0; i < CLK_INFRA_NR; i++) {
+ 			if (infra_clk_data->clks[i] == ERR_PTR(-EPROBE_DEFER))
+@@ -896,6 +902,8 @@ static int mtk_pericfg_init(struct platform_device *pdev)
+ 		return PTR_ERR(base);
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_PERI_NR);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_gates(node, peri_clks, ARRAY_SIZE(peri_clks),
+ 						clk_data);
+diff --git a/drivers/clk/mediatek/clk-mt6765.c b/drivers/clk/mediatek/clk-mt6765.c
+index d77ea5aff2920..17352342b6989 100644
+--- a/drivers/clk/mediatek/clk-mt6765.c
++++ b/drivers/clk/mediatek/clk-mt6765.c
+@@ -785,6 +785,8 @@ static int clk_mt6765_apmixed_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_APMIXED_NR_CLK);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_plls(node, plls, ARRAY_SIZE(plls), clk_data);
+ 
+@@ -820,6 +822,8 @@ static int clk_mt6765_top_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_TOP_NR_CLK);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_fixed_clks(fixed_clks, ARRAY_SIZE(fixed_clks),
+ 				    clk_data);
+@@ -860,6 +864,8 @@ static int clk_mt6765_ifr_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_IFR_NR_CLK);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_gates(node, ifr_clks, ARRAY_SIZE(ifr_clks),
+ 			       clk_data);
+diff --git a/drivers/clk/mediatek/clk-mt6779.c b/drivers/clk/mediatek/clk-mt6779.c
+index 6e0d3a1667291..cf720651fc536 100644
+--- a/drivers/clk/mediatek/clk-mt6779.c
++++ b/drivers/clk/mediatek/clk-mt6779.c
+@@ -1216,6 +1216,8 @@ static int clk_mt6779_apmixed_probe(struct platform_device *pdev)
+ 	struct device_node *node = pdev->dev.of_node;
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_APMIXED_NR_CLK);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_plls(node, plls, ARRAY_SIZE(plls), clk_data);
+ 
+@@ -1236,6 +1238,8 @@ static int clk_mt6779_top_probe(struct platform_device *pdev)
+ 		return PTR_ERR(base);
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_TOP_NR_CLK);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_fixed_clks(top_fixed_clks, ARRAY_SIZE(top_fixed_clks),
+ 				    clk_data);
+diff --git a/drivers/clk/mediatek/clk-mt6797.c b/drivers/clk/mediatek/clk-mt6797.c
+index 428eb24ffec55..98d456023f4e4 100644
+--- a/drivers/clk/mediatek/clk-mt6797.c
++++ b/drivers/clk/mediatek/clk-mt6797.c
+@@ -391,6 +391,8 @@ static int mtk_topckgen_init(struct platform_device *pdev)
+ 		return PTR_ERR(base);
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_TOP_NR);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_factors(top_fixed_divs, ARRAY_SIZE(top_fixed_divs),
+ 				 clk_data);
+@@ -563,6 +565,8 @@ static void mtk_infrasys_init_early(struct device_node *node)
+ 
+ 	if (!infra_clk_data) {
+ 		infra_clk_data = mtk_alloc_clk_data(CLK_INFRA_NR);
++		if (!infra_clk_data)
++			return;
+ 
+ 		for (i = 0; i < CLK_INFRA_NR; i++)
+ 			infra_clk_data->clks[i] = ERR_PTR(-EPROBE_DEFER);
+@@ -587,6 +591,8 @@ static int mtk_infrasys_init(struct platform_device *pdev)
+ 
+ 	if (!infra_clk_data) {
+ 		infra_clk_data = mtk_alloc_clk_data(CLK_INFRA_NR);
++		if (!infra_clk_data)
++			return -ENOMEM;
+ 	} else {
+ 		for (i = 0; i < CLK_INFRA_NR; i++) {
+ 			if (infra_clk_data->clks[i] == ERR_PTR(-EPROBE_DEFER))
+diff --git a/drivers/clk/mediatek/clk-mt7629-eth.c b/drivers/clk/mediatek/clk-mt7629-eth.c
+index 88279d0ea1a76..3ab7b672f8c70 100644
+--- a/drivers/clk/mediatek/clk-mt7629-eth.c
++++ b/drivers/clk/mediatek/clk-mt7629-eth.c
+@@ -83,6 +83,8 @@ static int clk_mt7629_ethsys_init(struct platform_device *pdev)
+ 	int r;
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_ETH_NR_CLK);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_gates(node, eth_clks, CLK_ETH_NR_CLK, clk_data);
+ 
+@@ -105,6 +107,8 @@ static int clk_mt7629_sgmiisys_init(struct platform_device *pdev)
+ 	int r;
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_SGMII_NR_CLK);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_gates(node, sgmii_clks[id++], CLK_SGMII_NR_CLK,
+ 			       clk_data);
+diff --git a/drivers/clk/mediatek/clk-mt7629.c b/drivers/clk/mediatek/clk-mt7629.c
+index a0ee079670c7e..f791e53b812ab 100644
+--- a/drivers/clk/mediatek/clk-mt7629.c
++++ b/drivers/clk/mediatek/clk-mt7629.c
+@@ -580,6 +580,8 @@ static int mtk_topckgen_init(struct platform_device *pdev)
+ 		return PTR_ERR(base);
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_TOP_NR_CLK);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_fixed_clks(top_fixed_clks, ARRAY_SIZE(top_fixed_clks),
+ 				    clk_data);
+@@ -603,6 +605,8 @@ static int mtk_infrasys_init(struct platform_device *pdev)
+ 	struct clk_onecell_data *clk_data;
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_INFRA_NR_CLK);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_gates(node, infra_clks, ARRAY_SIZE(infra_clks),
+ 			       clk_data);
+@@ -626,6 +630,8 @@ static int mtk_pericfg_init(struct platform_device *pdev)
+ 		return PTR_ERR(base);
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_PERI_NR_CLK);
++	if (!clk_data)
++		return -ENOMEM;
+ 
+ 	mtk_clk_register_gates(node, peri_clks, ARRAY_SIZE(peri_clks),
+ 			       clk_data);
+diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
+index 3a965bd326d5f..3998e25c4192f 100644
+--- a/drivers/clk/qcom/Kconfig
++++ b/drivers/clk/qcom/Kconfig
+@@ -110,6 +110,7 @@ config IPQ_APSS_6018
+ 	tristate "IPQ APSS Clock Controller"
+ 	select IPQ_APSS_PLL
+ 	depends on QCOM_APCS_IPC || COMPILE_TEST
++	depends on QCOM_SMEM
+ 	help
+ 	  Support for APSS clock controller on IPQ platforms. The
+ 	  APSS clock controller manages the Mux and enable block that feeds the
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index 71a0d30cf44df..eb4fd803bae0d 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -147,17 +147,11 @@ static int clk_rcg2_set_parent(struct clk_hw *hw, u8 index)
+ static unsigned long
+ calc_rate(unsigned long rate, u32 m, u32 n, u32 mode, u32 hid_div)
+ {
+-	if (hid_div) {
+-		rate *= 2;
+-		rate /= hid_div + 1;
+-	}
++	if (hid_div)
++		rate = mult_frac(rate, 2, hid_div + 1);
+ 
+-	if (mode) {
+-		u64 tmp = rate;
+-		tmp *= m;
+-		do_div(tmp, n);
+-		rate = tmp;
+-	}
++	if (mode)
++		rate = mult_frac(rate, m, n);
+ 
+ 	return rate;
+ }
+diff --git a/drivers/clk/qcom/gcc-sm8150.c b/drivers/clk/qcom/gcc-sm8150.c
+index 8e9b5b3cceaf7..3d9ba3ccb6b68 100644
+--- a/drivers/clk/qcom/gcc-sm8150.c
++++ b/drivers/clk/qcom/gcc-sm8150.c
+@@ -241,7 +241,7 @@ static struct clk_rcg2 gcc_cpuss_ahb_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_cpuss_ahb_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -264,7 +264,7 @@ static struct clk_rcg2 gcc_emac_ptp_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_emac_ptp_clk_src",
+ 		.parent_data = gcc_parents_5,
+-		.num_parents = 5,
++		.num_parents = ARRAY_SIZE(gcc_parents_5),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -290,7 +290,7 @@ static struct clk_rcg2 gcc_emac_rgmii_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_emac_rgmii_clk_src",
+ 		.parent_data = gcc_parents_5,
+-		.num_parents = 5,
++		.num_parents = ARRAY_SIZE(gcc_parents_5),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -314,7 +314,7 @@ static struct clk_rcg2 gcc_gp1_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_gp1_clk_src",
+ 		.parent_data = gcc_parents_1,
+-		.num_parents = 5,
++		.num_parents = ARRAY_SIZE(gcc_parents_1),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -329,7 +329,7 @@ static struct clk_rcg2 gcc_gp2_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_gp2_clk_src",
+ 		.parent_data = gcc_parents_1,
+-		.num_parents = 5,
++		.num_parents = ARRAY_SIZE(gcc_parents_1),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -344,7 +344,7 @@ static struct clk_rcg2 gcc_gp3_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_gp3_clk_src",
+ 		.parent_data = gcc_parents_1,
+-		.num_parents = 5,
++		.num_parents = ARRAY_SIZE(gcc_parents_1),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -365,7 +365,7 @@ static struct clk_rcg2 gcc_pcie_0_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_pcie_0_aux_clk_src",
+ 		.parent_data = gcc_parents_2,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parents_2),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -380,7 +380,7 @@ static struct clk_rcg2 gcc_pcie_1_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_pcie_1_aux_clk_src",
+ 		.parent_data = gcc_parents_2,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parents_2),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -401,7 +401,7 @@ static struct clk_rcg2 gcc_pcie_phy_refgen_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_pcie_phy_refgen_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -423,7 +423,7 @@ static struct clk_rcg2 gcc_pdm2_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_pdm2_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -446,7 +446,7 @@ static struct clk_rcg2 gcc_qspi_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qspi_core_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -480,7 +480,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap0_s0_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -495,7 +495,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap0_s1_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -510,7 +510,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap0_s2_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -525,7 +525,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap0_s3_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -540,7 +540,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap0_s4_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -555,7 +555,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap0_s5_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -570,7 +570,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s6_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap0_s6_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -585,7 +585,7 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s7_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap0_s7_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -600,7 +600,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap1_s0_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -615,7 +615,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap1_s1_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -630,7 +630,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap1_s2_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -645,7 +645,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap1_s3_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -660,7 +660,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap1_s4_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -675,7 +675,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap1_s5_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -690,7 +690,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap2_s0_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -705,7 +705,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap2_s1_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -720,7 +720,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap2_s2_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -735,7 +735,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap2_s3_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -750,7 +750,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap2_s4_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -765,7 +765,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_qupv3_wrap2_s5_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -791,8 +791,8 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_sdcc2_apps_clk_src",
+ 		.parent_data = gcc_parents_6,
+-		.num_parents = 5,
+-		.flags = CLK_SET_RATE_PARENT,
++		.num_parents = ARRAY_SIZE(gcc_parents_6),
++		.flags = CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+@@ -816,7 +816,7 @@ static struct clk_rcg2 gcc_sdcc4_apps_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_sdcc4_apps_clk_src",
+ 		.parent_data = gcc_parents_3,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parents_3),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_floor_ops,
+ 	},
+@@ -836,7 +836,7 @@ static struct clk_rcg2 gcc_tsif_ref_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_tsif_ref_clk_src",
+ 		.parent_data = gcc_parents_7,
+-		.num_parents = 5,
++		.num_parents = ARRAY_SIZE(gcc_parents_7),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -860,7 +860,7 @@ static struct clk_rcg2 gcc_ufs_card_axi_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_card_axi_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -883,7 +883,7 @@ static struct clk_rcg2 gcc_ufs_card_ice_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_card_ice_core_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -903,7 +903,7 @@ static struct clk_rcg2 gcc_ufs_card_phy_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_card_phy_aux_clk_src",
+ 		.parent_data = gcc_parents_4,
+-		.num_parents = 2,
++		.num_parents = ARRAY_SIZE(gcc_parents_4),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -925,7 +925,7 @@ static struct clk_rcg2 gcc_ufs_card_unipro_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_card_unipro_core_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -949,7 +949,7 @@ static struct clk_rcg2 gcc_ufs_phy_axi_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_phy_axi_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -964,7 +964,7 @@ static struct clk_rcg2 gcc_ufs_phy_ice_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_phy_ice_core_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -979,7 +979,7 @@ static struct clk_rcg2 gcc_ufs_phy_phy_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_phy_phy_aux_clk_src",
+ 		.parent_data = gcc_parents_4,
+-		.num_parents = 2,
++		.num_parents = ARRAY_SIZE(gcc_parents_4),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -994,7 +994,7 @@ static struct clk_rcg2 gcc_ufs_phy_unipro_core_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_ufs_phy_unipro_core_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -1018,7 +1018,7 @@ static struct clk_rcg2 gcc_usb30_prim_master_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb30_prim_master_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -1040,7 +1040,7 @@ static struct clk_rcg2 gcc_usb30_prim_mock_utmi_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb30_prim_mock_utmi_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -1055,7 +1055,7 @@ static struct clk_rcg2 gcc_usb30_sec_master_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb30_sec_master_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -1070,7 +1070,7 @@ static struct clk_rcg2 gcc_usb30_sec_mock_utmi_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb30_sec_mock_utmi_clk_src",
+ 		.parent_data = gcc_parents_0,
+-		.num_parents = 4,
++		.num_parents = ARRAY_SIZE(gcc_parents_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -1085,7 +1085,7 @@ static struct clk_rcg2 gcc_usb3_prim_phy_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb3_prim_phy_aux_clk_src",
+ 		.parent_data = gcc_parents_2,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parents_2),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -1100,7 +1100,7 @@ static struct clk_rcg2 gcc_usb3_sec_phy_aux_clk_src = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gcc_usb3_sec_phy_aux_clk_src",
+ 		.parent_data = gcc_parents_2,
+-		.num_parents = 3,
++		.num_parents = ARRAY_SIZE(gcc_parents_2),
+ 		.flags = CLK_SET_RATE_PARENT,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+diff --git a/drivers/clk/qcom/mmcc-msm8998.c b/drivers/clk/qcom/mmcc-msm8998.c
+index dd68983fe22ef..a68764cfb7930 100644
+--- a/drivers/clk/qcom/mmcc-msm8998.c
++++ b/drivers/clk/qcom/mmcc-msm8998.c
+@@ -1211,6 +1211,8 @@ static struct clk_rcg2 vfe1_clk_src = {
+ 
+ static struct clk_branch misc_ahb_clk = {
+ 	.halt_reg = 0x328,
++	.hwcg_reg = 0x328,
++	.hwcg_bit = 1,
+ 	.clkr = {
+ 		.enable_reg = 0x328,
+ 		.enable_mask = BIT(0),
+@@ -1241,6 +1243,8 @@ static struct clk_branch video_core_clk = {
+ 
+ static struct clk_branch video_ahb_clk = {
+ 	.halt_reg = 0x1030,
++	.hwcg_reg = 0x1030,
++	.hwcg_bit = 1,
+ 	.clkr = {
+ 		.enable_reg = 0x1030,
+ 		.enable_mask = BIT(0),
+@@ -1315,6 +1319,8 @@ static struct clk_branch video_subcore1_clk = {
+ 
+ static struct clk_branch mdss_ahb_clk = {
+ 	.halt_reg = 0x2308,
++	.hwcg_reg = 0x2308,
++	.hwcg_bit = 1,
+ 	.clkr = {
+ 		.enable_reg = 0x2308,
+ 		.enable_mask = BIT(0),
+@@ -2481,6 +2487,7 @@ static struct clk_branch fd_ahb_clk = {
+ 
+ static struct clk_branch mnoc_ahb_clk = {
+ 	.halt_reg = 0x5024,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x5024,
+ 		.enable_mask = BIT(0),
+@@ -2496,6 +2503,9 @@ static struct clk_branch mnoc_ahb_clk = {
+ 
+ static struct clk_branch bimc_smmu_ahb_clk = {
+ 	.halt_reg = 0xe004,
++	.halt_check = BRANCH_HALT_SKIP,
++	.hwcg_reg = 0xe004,
++	.hwcg_bit = 1,
+ 	.clkr = {
+ 		.enable_reg = 0xe004,
+ 		.enable_mask = BIT(0),
+@@ -2511,6 +2521,9 @@ static struct clk_branch bimc_smmu_ahb_clk = {
+ 
+ static struct clk_branch bimc_smmu_axi_clk = {
+ 	.halt_reg = 0xe008,
++	.halt_check = BRANCH_HALT_SKIP,
++	.hwcg_reg = 0xe008,
++	.hwcg_bit = 1,
+ 	.clkr = {
+ 		.enable_reg = 0xe008,
+ 		.enable_mask = BIT(0),
+@@ -2649,11 +2662,13 @@ static struct gdsc camss_cpp_gdsc = {
+ static struct gdsc bimc_smmu_gdsc = {
+ 	.gdscr = 0xe020,
+ 	.gds_hw_ctrl = 0xe024,
++	.cxcs = (unsigned int []){ 0xe008 },
++	.cxc_count = 1,
+ 	.pd = {
+ 		.name = "bimc_smmu",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = HW_CTRL,
++	.flags = VOTABLE,
+ };
+ 
+ static struct clk_regmap *mmcc_msm8998_clocks[] = {
+diff --git a/drivers/clk/ti/apll.c b/drivers/clk/ti/apll.c
+index ac5bc8857a514..f921c6812852f 100644
+--- a/drivers/clk/ti/apll.c
++++ b/drivers/clk/ti/apll.c
+@@ -139,6 +139,7 @@ static void __init omap_clk_register_apll(void *user,
+ 	struct clk_hw *hw = user;
+ 	struct clk_hw_omap *clk_hw = to_clk_hw_omap(hw);
+ 	struct dpll_data *ad = clk_hw->dpll_data;
++	const char *name;
+ 	struct clk *clk;
+ 	const struct clk_init_data *init = clk_hw->hw.init;
+ 
+@@ -166,7 +167,8 @@ static void __init omap_clk_register_apll(void *user,
+ 
+ 	ad->clk_bypass = __clk_get_hw(clk);
+ 
+-	clk = ti_clk_register_omap_hw(NULL, &clk_hw->hw, node->name);
++	name = ti_dt_clk_name(node);
++	clk = of_ti_clk_register_omap_hw(node, &clk_hw->hw, name);
+ 	if (!IS_ERR(clk)) {
+ 		of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ 		kfree(init->parent_names);
+@@ -198,7 +200,7 @@ static void __init of_dra7_apll_setup(struct device_node *node)
+ 	clk_hw->dpll_data = ad;
+ 	clk_hw->hw.init = init;
+ 
+-	init->name = node->name;
++	init->name = ti_dt_clk_name(node);
+ 	init->ops = &apll_ck_ops;
+ 
+ 	init->num_parents = of_clk_get_parent_count(node);
+@@ -347,6 +349,7 @@ static void __init of_omap2_apll_setup(struct device_node *node)
+ 	struct dpll_data *ad = NULL;
+ 	struct clk_hw_omap *clk_hw = NULL;
+ 	struct clk_init_data *init = NULL;
++	const char *name;
+ 	struct clk *clk;
+ 	const char *parent_name;
+ 	u32 val;
+@@ -362,7 +365,8 @@ static void __init of_omap2_apll_setup(struct device_node *node)
+ 	clk_hw->dpll_data = ad;
+ 	clk_hw->hw.init = init;
+ 	init->ops = &omap2_apll_ops;
+-	init->name = node->name;
++	name = ti_dt_clk_name(node);
++	init->name = name;
+ 	clk_hw->ops = &omap2_apll_hwops;
+ 
+ 	init->num_parents = of_clk_get_parent_count(node);
+@@ -403,7 +407,8 @@ static void __init of_omap2_apll_setup(struct device_node *node)
+ 	if (ret)
+ 		goto cleanup;
+ 
+-	clk = ti_clk_register_omap_hw(NULL, &clk_hw->hw, node->name);
++	name = ti_dt_clk_name(node);
++	clk = of_ti_clk_register_omap_hw(node, &clk_hw->hw, name);
+ 	if (!IS_ERR(clk)) {
+ 		of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ 		kfree(init);
+diff --git a/drivers/clk/ti/autoidle.c b/drivers/clk/ti/autoidle.c
+index f6f8a409f148f..d6e5f1511ace8 100644
+--- a/drivers/clk/ti/autoidle.c
++++ b/drivers/clk/ti/autoidle.c
+@@ -205,7 +205,7 @@ int __init of_ti_clk_autoidle_setup(struct device_node *node)
+ 		return -ENOMEM;
+ 
+ 	clk->shift = shift;
+-	clk->name = node->name;
++	clk->name = ti_dt_clk_name(node);
+ 	ret = ti_clk_get_reg_addr(node, 0, &clk->reg);
+ 	if (ret) {
+ 		kfree(clk);
+diff --git a/drivers/clk/ti/clk-dra7-atl.c b/drivers/clk/ti/clk-dra7-atl.c
+index e2e59d78c173f..62508e74a47a7 100644
+--- a/drivers/clk/ti/clk-dra7-atl.c
++++ b/drivers/clk/ti/clk-dra7-atl.c
+@@ -173,6 +173,7 @@ static void __init of_dra7_atl_clock_setup(struct device_node *node)
+ 	struct dra7_atl_desc *clk_hw = NULL;
+ 	struct clk_init_data init = { NULL };
+ 	const char **parent_names = NULL;
++	const char *name;
+ 	struct clk *clk;
+ 
+ 	clk_hw = kzalloc(sizeof(*clk_hw), GFP_KERNEL);
+@@ -183,7 +184,8 @@ static void __init of_dra7_atl_clock_setup(struct device_node *node)
+ 
+ 	clk_hw->hw.init = &init;
+ 	clk_hw->divider = 1;
+-	init.name = node->name;
++	name = ti_dt_clk_name(node);
++	init.name = name;
+ 	init.ops = &atl_clk_ops;
+ 	init.flags = CLK_IGNORE_UNUSED;
+ 	init.num_parents = of_clk_get_parent_count(node);
+@@ -203,7 +205,7 @@ static void __init of_dra7_atl_clock_setup(struct device_node *node)
+ 
+ 	init.parent_names = parent_names;
+ 
+-	clk = ti_clk_register(NULL, &clk_hw->hw, node->name);
++	clk = of_ti_clk_register(node, &clk_hw->hw, name);
+ 
+ 	if (!IS_ERR(clk)) {
+ 		of_clk_add_provider(node, of_clk_src_simple_get, clk);
+diff --git a/drivers/clk/ti/clk.c b/drivers/clk/ti/clk.c
+index 29eafab4353ef..6a39fb051b2ee 100644
+--- a/drivers/clk/ti/clk.c
++++ b/drivers/clk/ti/clk.c
+@@ -402,6 +402,24 @@ static const struct of_device_id simple_clk_match_table[] __initconst = {
+ 	{ }
+ };
+ 
++/**
++ * ti_dt_clk_name - init clock name from first output name or node name
++ * @np: device node
++ *
++ * Use the first clock-output-name for the clock name if found. Fall back
++ * to legacy naming based on node name.
++ */
++const char *ti_dt_clk_name(struct device_node *np)
++{
++	const char *name;
++
++	if (!of_property_read_string_index(np, "clock-output-names", 0,
++					   &name))
++		return name;
++
++	return np->name;
++}
++
+ /**
+  * ti_clk_add_aliases - setup clock aliases
+  *
+@@ -418,7 +436,7 @@ void __init ti_clk_add_aliases(void)
+ 		clkspec.np = np;
+ 		clk = of_clk_get_from_provider(&clkspec);
+ 
+-		ti_clk_add_alias(NULL, clk, np->name);
++		ti_clk_add_alias(clk, ti_dt_clk_name(np));
+ 	}
+ }
+ 
+@@ -471,7 +489,6 @@ void omap2_clk_enable_init_clocks(const char **clk_names, u8 num_clocks)
+ 
+ /**
+  * ti_clk_add_alias - add a clock alias for a TI clock
+- * @dev: device alias for this clock
+  * @clk: clock handle to create alias for
+  * @con: connection ID for this clock
+  *
+@@ -479,7 +496,7 @@ void omap2_clk_enable_init_clocks(const char **clk_names, u8 num_clocks)
+  * and assigns the data to it. Returns 0 if successful, negative error
+  * value otherwise.
+  */
+-int ti_clk_add_alias(struct device *dev, struct clk *clk, const char *con)
++int ti_clk_add_alias(struct clk *clk, const char *con)
+ {
+ 	struct clk_lookup *cl;
+ 
+@@ -493,8 +510,6 @@ int ti_clk_add_alias(struct device *dev, struct clk *clk, const char *con)
+ 	if (!cl)
+ 		return -ENOMEM;
+ 
+-	if (dev)
+-		cl->dev_id = dev_name(dev);
+ 	cl->con_id = con;
+ 	cl->clk = clk;
+ 
+@@ -504,8 +519,8 @@ int ti_clk_add_alias(struct device *dev, struct clk *clk, const char *con)
+ }
+ 
+ /**
+- * ti_clk_register - register a TI clock to the common clock framework
+- * @dev: device for this clock
++ * of_ti_clk_register - register a TI clock to the common clock framework
++ * @node: device node for this clock
+  * @hw: hardware clock handle
+  * @con: connection ID for this clock
+  *
+@@ -513,17 +528,18 @@ int ti_clk_add_alias(struct device *dev, struct clk *clk, const char *con)
+  * alias for it. Returns a handle to the registered clock if successful,
+  * ERR_PTR value in failure.
+  */
+-struct clk *ti_clk_register(struct device *dev, struct clk_hw *hw,
+-			    const char *con)
++struct clk *of_ti_clk_register(struct device_node *node, struct clk_hw *hw,
++			       const char *con)
+ {
+ 	struct clk *clk;
+ 	int ret;
+ 
+-	clk = clk_register(dev, hw);
+-	if (IS_ERR(clk))
+-		return clk;
++	ret = of_clk_hw_register(node, hw);
++	if (ret)
++		return ERR_PTR(ret);
+ 
+-	ret = ti_clk_add_alias(dev, clk, con);
++	clk = hw->clk;
++	ret = ti_clk_add_alias(clk, con);
+ 	if (ret) {
+ 		clk_unregister(clk);
+ 		return ERR_PTR(ret);
+@@ -533,8 +549,8 @@ struct clk *ti_clk_register(struct device *dev, struct clk_hw *hw,
+ }
+ 
+ /**
+- * ti_clk_register_omap_hw - register a clk_hw_omap to the clock framework
+- * @dev: device for this clock
++ * of_ti_clk_register_omap_hw - register a clk_hw_omap to the clock framework
++ * @node: device node for this clock
+  * @hw: hardware clock handle
+  * @con: connection ID for this clock
+  *
+@@ -543,13 +559,13 @@ struct clk *ti_clk_register(struct device *dev, struct clk_hw *hw,
+  * Returns a handle to the registered clock if successful, ERR_PTR value
+  * in failure.
+  */
+-struct clk *ti_clk_register_omap_hw(struct device *dev, struct clk_hw *hw,
+-				    const char *con)
++struct clk *of_ti_clk_register_omap_hw(struct device_node *node,
++				       struct clk_hw *hw, const char *con)
+ {
+ 	struct clk *clk;
+ 	struct clk_hw_omap *oclk;
+ 
+-	clk = ti_clk_register(dev, hw, con);
++	clk = of_ti_clk_register(node, hw, con);
+ 	if (IS_ERR(clk))
+ 		return clk;
+ 
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 157abc46dcf44..1424b615a4cc5 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -317,7 +317,7 @@ _ti_clkctrl_clk_register(struct omap_clkctrl_provider *provider,
+ 	init.ops = ops;
+ 	init.flags = 0;
+ 
+-	clk = ti_clk_register(NULL, clk_hw, init.name);
++	clk = of_ti_clk_register(node, clk_hw, init.name);
+ 	if (IS_ERR_OR_NULL(clk)) {
+ 		ret = -EINVAL;
+ 		goto cleanup;
+@@ -701,7 +701,7 @@ clkdm_found:
+ 		init.ops = &omap4_clkctrl_clk_ops;
+ 		hw->hw.init = &init;
+ 
+-		clk = ti_clk_register_omap_hw(NULL, &hw->hw, init.name);
++		clk = of_ti_clk_register_omap_hw(node, &hw->hw, init.name);
+ 		if (IS_ERR_OR_NULL(clk))
+ 			goto cleanup;
+ 
+diff --git a/drivers/clk/ti/clock.h b/drivers/clk/ti/clock.h
+index f1dd62de2bfcb..821f33ee330e4 100644
+--- a/drivers/clk/ti/clock.h
++++ b/drivers/clk/ti/clock.h
+@@ -210,11 +210,12 @@ extern const struct omap_clkctrl_data dm816_clkctrl_data[];
+ 
+ typedef void (*ti_of_clk_init_cb_t)(void *, struct device_node *);
+ 
+-struct clk *ti_clk_register(struct device *dev, struct clk_hw *hw,
+-			    const char *con);
+-struct clk *ti_clk_register_omap_hw(struct device *dev, struct clk_hw *hw,
+-				    const char *con);
+-int ti_clk_add_alias(struct device *dev, struct clk *clk, const char *con);
++struct clk *of_ti_clk_register(struct device_node *node, struct clk_hw *hw,
++			       const char *con);
++struct clk *of_ti_clk_register_omap_hw(struct device_node *node,
++				       struct clk_hw *hw, const char *con);
++const char *ti_dt_clk_name(struct device_node *np);
++int ti_clk_add_alias(struct clk *clk, const char *con);
+ void ti_clk_add_aliases(void);
+ 
+ void ti_clk_latch(struct clk_omap_reg *reg, s8 shift);
+diff --git a/drivers/clk/ti/clockdomain.c b/drivers/clk/ti/clockdomain.c
+index 700b7f44f6716..e5f447f4377b7 100644
+--- a/drivers/clk/ti/clockdomain.c
++++ b/drivers/clk/ti/clockdomain.c
+@@ -131,7 +131,7 @@ static void __init of_ti_clockdomain_setup(struct device_node *node)
+ {
+ 	struct clk *clk;
+ 	struct clk_hw *clk_hw;
+-	const char *clkdm_name = node->name;
++	const char *clkdm_name = ti_dt_clk_name(node);
+ 	int i;
+ 	unsigned int num_clks;
+ 
+diff --git a/drivers/clk/ti/composite.c b/drivers/clk/ti/composite.c
+index eaa43575cfa5e..78d44158fb7d9 100644
+--- a/drivers/clk/ti/composite.c
++++ b/drivers/clk/ti/composite.c
+@@ -125,6 +125,7 @@ static void __init _register_composite(void *user,
+ 	struct component_clk *comp;
+ 	int num_parents = 0;
+ 	const char **parent_names = NULL;
++	const char *name;
+ 	int i;
+ 	int ret;
+ 
+@@ -172,7 +173,8 @@ static void __init _register_composite(void *user,
+ 		goto cleanup;
+ 	}
+ 
+-	clk = clk_register_composite(NULL, node->name,
++	name = ti_dt_clk_name(node);
++	clk = clk_register_composite(NULL, name,
+ 				     parent_names, num_parents,
+ 				     _get_hw(cclk, CLK_COMPONENT_TYPE_MUX),
+ 				     &ti_clk_mux_ops,
+@@ -182,7 +184,7 @@ static void __init _register_composite(void *user,
+ 				     &ti_composite_gate_ops, 0);
+ 
+ 	if (!IS_ERR(clk)) {
+-		ret = ti_clk_add_alias(NULL, clk, node->name);
++		ret = ti_clk_add_alias(clk, name);
+ 		if (ret) {
+ 			clk_unregister(clk);
+ 			goto cleanup;
+diff --git a/drivers/clk/ti/divider.c b/drivers/clk/ti/divider.c
+index 28080df92f722..4cc0aaa6cb139 100644
+--- a/drivers/clk/ti/divider.c
++++ b/drivers/clk/ti/divider.c
+@@ -317,13 +317,14 @@ static struct clk *_register_divider(struct device_node *node,
+ 				     u32 flags,
+ 				     struct clk_omap_divider *div)
+ {
+-	struct clk *clk;
+ 	struct clk_init_data init;
+ 	const char *parent_name;
++	const char *name;
+ 
+ 	parent_name = of_clk_get_parent_name(node, 0);
+ 
+-	init.name = node->name;
++	name = ti_dt_clk_name(node);
++	init.name = name;
+ 	init.ops = &ti_clk_divider_ops;
+ 	init.flags = flags;
+ 	init.parent_names = (parent_name ? &parent_name : NULL);
+@@ -332,12 +333,7 @@ static struct clk *_register_divider(struct device_node *node,
+ 	div->hw.init = &init;
+ 
+ 	/* register the clock */
+-	clk = ti_clk_register(NULL, &div->hw, node->name);
+-
+-	if (IS_ERR(clk))
+-		kfree(div);
+-
+-	return clk;
++	return of_ti_clk_register(node, &div->hw, name);
+ }
+ 
+ int ti_clk_parse_divider_data(int *div_table, int num_dividers, int max_div,
+diff --git a/drivers/clk/ti/dpll.c b/drivers/clk/ti/dpll.c
+index 247510e306e2a..13d01594516d1 100644
+--- a/drivers/clk/ti/dpll.c
++++ b/drivers/clk/ti/dpll.c
+@@ -164,6 +164,7 @@ static void __init _register_dpll(void *user,
+ 	struct clk_hw *hw = user;
+ 	struct clk_hw_omap *clk_hw = to_clk_hw_omap(hw);
+ 	struct dpll_data *dd = clk_hw->dpll_data;
++	const char *name;
+ 	struct clk *clk;
+ 	const struct clk_init_data *init = hw->init;
+ 
+@@ -193,7 +194,8 @@ static void __init _register_dpll(void *user,
+ 	dd->clk_bypass = __clk_get_hw(clk);
+ 
+ 	/* register the clock */
+-	clk = ti_clk_register_omap_hw(NULL, &clk_hw->hw, node->name);
++	name = ti_dt_clk_name(node);
++	clk = of_ti_clk_register_omap_hw(node, &clk_hw->hw, name);
+ 
+ 	if (!IS_ERR(clk)) {
+ 		of_clk_add_provider(node, of_clk_src_simple_get, clk);
+@@ -227,7 +229,7 @@ static void _register_dpll_x2(struct device_node *node,
+ 	struct clk *clk;
+ 	struct clk_init_data init = { NULL };
+ 	struct clk_hw_omap *clk_hw;
+-	const char *name = node->name;
++	const char *name = ti_dt_clk_name(node);
+ 	const char *parent_name;
+ 
+ 	parent_name = of_clk_get_parent_name(node, 0);
+@@ -265,7 +267,7 @@ static void _register_dpll_x2(struct device_node *node,
+ #endif
+ 
+ 	/* register the clock */
+-	clk = ti_clk_register_omap_hw(NULL, &clk_hw->hw, name);
++	clk = of_ti_clk_register_omap_hw(node, &clk_hw->hw, name);
+ 
+ 	if (IS_ERR(clk))
+ 		kfree(clk_hw);
+@@ -302,7 +304,7 @@ static void __init of_ti_dpll_setup(struct device_node *node,
+ 	clk_hw->ops = &clkhwops_omap3_dpll;
+ 	clk_hw->hw.init = init;
+ 
+-	init->name = node->name;
++	init->name = ti_dt_clk_name(node);
+ 	init->ops = ops;
+ 
+ 	init->num_parents = of_clk_get_parent_count(node);
+diff --git a/drivers/clk/ti/fapll.c b/drivers/clk/ti/fapll.c
+index 8024c6d2b9e95..749c6b73abff3 100644
+--- a/drivers/clk/ti/fapll.c
++++ b/drivers/clk/ti/fapll.c
+@@ -19,6 +19,8 @@
+ #include <linux/of_address.h>
+ #include <linux/clk/ti.h>
+ 
++#include "clock.h"
++
+ /* FAPLL Control Register PLL_CTRL */
+ #define FAPLL_MAIN_MULT_N_SHIFT	16
+ #define FAPLL_MAIN_DIV_P_SHIFT	8
+@@ -542,6 +544,7 @@ static void __init ti_fapll_setup(struct device_node *node)
+ 	struct clk_init_data *init = NULL;
+ 	const char *parent_name[2];
+ 	struct clk *pll_clk;
++	const char *name;
+ 	int i;
+ 
+ 	fd = kzalloc(sizeof(*fd), GFP_KERNEL);
+@@ -559,7 +562,8 @@ static void __init ti_fapll_setup(struct device_node *node)
+ 		goto free;
+ 
+ 	init->ops = &ti_fapll_ops;
+-	init->name = node->name;
++	name = ti_dt_clk_name(node);
++	init->name = name;
+ 
+ 	init->num_parents = of_clk_get_parent_count(node);
+ 	if (init->num_parents != 2) {
+@@ -591,7 +595,7 @@ static void __init ti_fapll_setup(struct device_node *node)
+ 	if (fapll_is_ddr_pll(fd->base))
+ 		fd->bypass_bit_inverted = true;
+ 
+-	fd->name = node->name;
++	fd->name = name;
+ 	fd->hw.init = init;
+ 
+ 	/* Register the parent PLL */
+@@ -638,8 +642,7 @@ static void __init ti_fapll_setup(struct device_node *node)
+ 				freq = NULL;
+ 		}
+ 		synth_clk = ti_fapll_synth_setup(fd, freq, div, output_instance,
+-						 output_name, node->name,
+-						 pll_clk);
++						 output_name, name, pll_clk);
+ 		if (IS_ERR(synth_clk))
+ 			continue;
+ 
+diff --git a/drivers/clk/ti/fixed-factor.c b/drivers/clk/ti/fixed-factor.c
+index 7cbe896db0716..a4f9c1c156137 100644
+--- a/drivers/clk/ti/fixed-factor.c
++++ b/drivers/clk/ti/fixed-factor.c
+@@ -36,7 +36,7 @@
+ static void __init of_ti_fixed_factor_clk_setup(struct device_node *node)
+ {
+ 	struct clk *clk;
+-	const char *clk_name = node->name;
++	const char *clk_name = ti_dt_clk_name(node);
+ 	const char *parent_name;
+ 	u32 div, mult;
+ 	u32 flags = 0;
+@@ -62,7 +62,7 @@ static void __init of_ti_fixed_factor_clk_setup(struct device_node *node)
+ 	if (!IS_ERR(clk)) {
+ 		of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ 		of_ti_clk_autoidle_setup(node);
+-		ti_clk_add_alias(NULL, clk, clk_name);
++		ti_clk_add_alias(clk, clk_name);
+ 	}
+ }
+ CLK_OF_DECLARE(ti_fixed_factor_clk, "ti,fixed-factor-clock",
+diff --git a/drivers/clk/ti/gate.c b/drivers/clk/ti/gate.c
+index 42389558418c5..0cc1babad661d 100644
+--- a/drivers/clk/ti/gate.c
++++ b/drivers/clk/ti/gate.c
+@@ -93,7 +93,7 @@ static int omap36xx_gate_clk_enable_with_hsdiv_restore(struct clk_hw *hw)
+ 	return ret;
+ }
+ 
+-static struct clk *_register_gate(struct device *dev, const char *name,
++static struct clk *_register_gate(struct device_node *node, const char *name,
+ 				  const char *parent_name, unsigned long flags,
+ 				  struct clk_omap_reg *reg, u8 bit_idx,
+ 				  u8 clk_gate_flags, const struct clk_ops *ops,
+@@ -123,7 +123,7 @@ static struct clk *_register_gate(struct device *dev, const char *name,
+ 
+ 	init.flags = flags;
+ 
+-	clk = ti_clk_register_omap_hw(NULL, &clk_hw->hw, name);
++	clk = of_ti_clk_register_omap_hw(node, &clk_hw->hw, name);
+ 
+ 	if (IS_ERR(clk))
+ 		kfree(clk_hw);
+@@ -138,6 +138,7 @@ static void __init _of_ti_gate_clk_setup(struct device_node *node,
+ 	struct clk *clk;
+ 	const char *parent_name;
+ 	struct clk_omap_reg reg;
++	const char *name;
+ 	u8 enable_bit = 0;
+ 	u32 val;
+ 	u32 flags = 0;
+@@ -164,7 +165,8 @@ static void __init _of_ti_gate_clk_setup(struct device_node *node,
+ 	if (of_property_read_bool(node, "ti,set-bit-to-disable"))
+ 		clk_gate_flags |= INVERT_ENABLE;
+ 
+-	clk = _register_gate(NULL, node->name, parent_name, flags, &reg,
++	name = ti_dt_clk_name(node);
++	clk = _register_gate(node, name, parent_name, flags, &reg,
+ 			     enable_bit, clk_gate_flags, ops, hw_ops);
+ 
+ 	if (!IS_ERR(clk))
+diff --git a/drivers/clk/ti/interface.c b/drivers/clk/ti/interface.c
+index 83e34429d3b10..1ccd5dbf2bb48 100644
+--- a/drivers/clk/ti/interface.c
++++ b/drivers/clk/ti/interface.c
+@@ -32,7 +32,8 @@ static const struct clk_ops ti_interface_clk_ops = {
+ 	.is_enabled	= &omap2_dflt_clk_is_enabled,
+ };
+ 
+-static struct clk *_register_interface(struct device *dev, const char *name,
++static struct clk *_register_interface(struct device_node *node,
++				       const char *name,
+ 				       const char *parent_name,
+ 				       struct clk_omap_reg *reg, u8 bit_idx,
+ 				       const struct clk_hw_omap_ops *ops)
+@@ -57,7 +58,7 @@ static struct clk *_register_interface(struct device *dev, const char *name,
+ 	init.num_parents = 1;
+ 	init.parent_names = &parent_name;
+ 
+-	clk = ti_clk_register_omap_hw(NULL, &clk_hw->hw, name);
++	clk = of_ti_clk_register_omap_hw(node, &clk_hw->hw, name);
+ 
+ 	if (IS_ERR(clk))
+ 		kfree(clk_hw);
+@@ -72,6 +73,7 @@ static void __init _of_ti_interface_clk_setup(struct device_node *node,
+ 	const char *parent_name;
+ 	struct clk_omap_reg reg;
+ 	u8 enable_bit = 0;
++	const char *name;
+ 	u32 val;
+ 
+ 	if (ti_clk_get_reg_addr(node, 0, &reg))
+@@ -86,7 +88,8 @@ static void __init _of_ti_interface_clk_setup(struct device_node *node,
+ 		return;
+ 	}
+ 
+-	clk = _register_interface(NULL, node->name, parent_name, &reg,
++	name = ti_dt_clk_name(node);
++	clk = _register_interface(node, name, parent_name, &reg,
+ 				  enable_bit, ops);
+ 
+ 	if (!IS_ERR(clk))
+diff --git a/drivers/clk/ti/mux.c b/drivers/clk/ti/mux.c
+index 0069e7cf3ebcc..4205ff4bad217 100644
+--- a/drivers/clk/ti/mux.c
++++ b/drivers/clk/ti/mux.c
+@@ -126,7 +126,7 @@ const struct clk_ops ti_clk_mux_ops = {
+ 	.restore_context = clk_mux_restore_context,
+ };
+ 
+-static struct clk *_register_mux(struct device *dev, const char *name,
++static struct clk *_register_mux(struct device_node *node, const char *name,
+ 				 const char * const *parent_names,
+ 				 u8 num_parents, unsigned long flags,
+ 				 struct clk_omap_reg *reg, u8 shift, u32 mask,
+@@ -156,7 +156,7 @@ static struct clk *_register_mux(struct device *dev, const char *name,
+ 	mux->table = table;
+ 	mux->hw.init = &init;
+ 
+-	clk = ti_clk_register(dev, &mux->hw, name);
++	clk = of_ti_clk_register(node, &mux->hw, name);
+ 
+ 	if (IS_ERR(clk))
+ 		kfree(mux);
+@@ -176,6 +176,7 @@ static void of_mux_clk_setup(struct device_node *node)
+ 	struct clk_omap_reg reg;
+ 	unsigned int num_parents;
+ 	const char **parent_names;
++	const char *name;
+ 	u8 clk_mux_flags = 0;
+ 	u32 mask = 0;
+ 	u32 shift = 0;
+@@ -213,7 +214,8 @@ static void of_mux_clk_setup(struct device_node *node)
+ 
+ 	mask = (1 << fls(mask)) - 1;
+ 
+-	clk = _register_mux(NULL, node->name, parent_names, num_parents,
++	name = ti_dt_clk_name(node);
++	clk = _register_mux(node, name, parent_names, num_parents,
+ 			    flags, &reg, shift, mask, latch, clk_mux_flags,
+ 			    NULL);
+ 
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index 8697ae53b0633..6b513555df348 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -566,7 +566,8 @@ static int chachapoly_setkey(struct crypto_aead *aead, const u8 *key,
+ 	if (keylen != CHACHA_KEY_SIZE + saltlen)
+ 		return -EINVAL;
+ 
+-	ctx->cdata.key_virt = key;
++	memcpy(ctx->key, key, keylen);
++	ctx->cdata.key_virt = ctx->key;
+ 	ctx->cdata.keylen = keylen - saltlen;
+ 
+ 	return chachapoly_set_sh_desc(aead);
+diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
+index 5a40c7d10cc9a..43bed47ce59be 100644
+--- a/drivers/crypto/caam/caamalg_qi2.c
++++ b/drivers/crypto/caam/caamalg_qi2.c
+@@ -636,7 +636,8 @@ static int chachapoly_setkey(struct crypto_aead *aead, const u8 *key,
+ 	if (keylen != CHACHA_KEY_SIZE + saltlen)
+ 		return -EINVAL;
+ 
+-	ctx->cdata.key_virt = key;
++	memcpy(ctx->key, key, keylen);
++	ctx->cdata.key_virt = ctx->key;
+ 	ctx->cdata.keylen = keylen - saltlen;
+ 
+ 	return chachapoly_set_sh_desc(aead);
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
+index a33394d91bbf8..8da9d6dc6e87a 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
+@@ -637,7 +637,7 @@ static int hpre_cluster_debugfs_init(struct hisi_qm *qm)
+ 
+ 	for (i = 0; i < HPRE_CLUSTERS_NUM; i++) {
+ 		ret = snprintf(buf, HPRE_DBGFS_VAL_MAX_LEN, "cluster%d", i);
+-		if (ret < 0)
++		if (ret >= HPRE_DBGFS_VAL_MAX_LEN)
+ 			return -EINVAL;
+ 		tmp_d = debugfs_create_dir(buf, qm->debug.debug_root);
+ 
+diff --git a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c
+index aee494d3da529..4b2f5aa833919 100644
+--- a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c
++++ b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c
+@@ -17,15 +17,33 @@ static struct adf_hw_device_class c3xxx_class = {
+ 	.instances = 0
+ };
+ 
+-static u32 get_accel_mask(u32 fuse)
++static u32 get_accel_mask(struct adf_hw_device_data *self)
+ {
+-	return (~fuse) >> ADF_C3XXX_ACCELERATORS_REG_OFFSET &
+-		ADF_C3XXX_ACCELERATORS_MASK;
++	u32 straps = self->straps;
++	u32 fuses = self->fuses;
++	u32 accel;
++
++	accel = ~(fuses | straps) >> ADF_C3XXX_ACCELERATORS_REG_OFFSET;
++	accel &= ADF_C3XXX_ACCELERATORS_MASK;
++
++	return accel;
+ }
+ 
+-static u32 get_ae_mask(u32 fuse)
++static u32 get_ae_mask(struct adf_hw_device_data *self)
+ {
+-	return (~fuse) & ADF_C3XXX_ACCELENGINES_MASK;
++	u32 straps = self->straps;
++	u32 fuses = self->fuses;
++	unsigned long disabled;
++	u32 ae_disable;
++	int accel;
++
++	/* If an accel is disabled, then disable the corresponding two AEs */
++	disabled = ~get_accel_mask(self) & ADF_C3XXX_ACCELERATORS_MASK;
++	ae_disable = BIT(1) | BIT(0);
++	for_each_set_bit(accel, &disabled, ADF_C3XXX_MAX_ACCELERATORS)
++		straps |= ae_disable << (accel << 1);
++
++	return ~(fuses | straps) & ADF_C3XXX_ACCELENGINES_MASK;
+ }
+ 
+ static u32 get_num_accels(struct adf_hw_device_data *self)
+@@ -109,11 +127,13 @@ static void adf_enable_error_correction(struct adf_accel_dev *accel_dev)
+ {
+ 	struct adf_hw_device_data *hw_device = accel_dev->hw_device;
+ 	struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_C3XXX_PMISC_BAR];
++	unsigned long accel_mask = hw_device->accel_mask;
++	unsigned long ae_mask = hw_device->ae_mask;
+ 	void __iomem *csr = misc_bar->virt_addr;
+ 	unsigned int val, i;
+ 
+ 	/* Enable Accel Engine error detection & correction */
+-	for (i = 0; i < hw_device->get_num_aes(hw_device); i++) {
++	for_each_set_bit(i, &ae_mask, GET_MAX_ACCELENGINES(accel_dev)) {
+ 		val = ADF_CSR_RD(csr, ADF_C3XXX_AE_CTX_ENABLES(i));
+ 		val |= ADF_C3XXX_ENABLE_AE_ECC_ERR;
+ 		ADF_CSR_WR(csr, ADF_C3XXX_AE_CTX_ENABLES(i), val);
+@@ -123,7 +143,7 @@ static void adf_enable_error_correction(struct adf_accel_dev *accel_dev)
+ 	}
+ 
+ 	/* Enable shared memory error detection & correction */
+-	for (i = 0; i < hw_device->get_num_accels(hw_device); i++) {
++	for_each_set_bit(i, &accel_mask, ADF_C3XXX_MAX_ACCELERATORS) {
+ 		val = ADF_CSR_RD(csr, ADF_C3XXX_UERRSSMSH(i));
+ 		val |= ADF_C3XXX_ERRSSMSH_EN;
+ 		ADF_CSR_WR(csr, ADF_C3XXX_UERRSSMSH(i), val);
+diff --git a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h
+index 8b5dd2c94ebfa..94097816f68ae 100644
+--- a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h
++++ b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h
+@@ -18,6 +18,7 @@
+ #define ADF_C3XXX_SMIAPF1_MASK_OFFSET (0x3A000 + 0x30)
+ #define ADF_C3XXX_SMIA0_MASK 0xFFFF
+ #define ADF_C3XXX_SMIA1_MASK 0x1
++#define ADF_C3XXX_SOFTSTRAP_CSR_OFFSET 0x2EC
+ /* Error detection and correction */
+ #define ADF_C3XXX_AE_CTX_ENABLES(i) (i * 0x1000 + 0x20818)
+ #define ADF_C3XXX_AE_MISC_CONTROL(i) (i * 0x1000 + 0x20960)
+diff --git a/drivers/crypto/qat/qat_c3xxx/adf_drv.c b/drivers/crypto/qat/qat_c3xxx/adf_drv.c
+index ed0e8e33fe4b3..da6e880269881 100644
+--- a/drivers/crypto/qat/qat_c3xxx/adf_drv.c
++++ b/drivers/crypto/qat/qat_c3xxx/adf_drv.c
+@@ -126,10 +126,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	pci_read_config_byte(pdev, PCI_REVISION_ID, &accel_pci_dev->revid);
+ 	pci_read_config_dword(pdev, ADF_DEVICE_FUSECTL_OFFSET,
+ 			      &hw_data->fuses);
++	pci_read_config_dword(pdev, ADF_C3XXX_SOFTSTRAP_CSR_OFFSET,
++			      &hw_data->straps);
+ 
+ 	/* Get Accelerators and Accelerators Engines masks */
+-	hw_data->accel_mask = hw_data->get_accel_mask(hw_data->fuses);
+-	hw_data->ae_mask = hw_data->get_ae_mask(hw_data->fuses);
++	hw_data->accel_mask = hw_data->get_accel_mask(hw_data);
++	hw_data->ae_mask = hw_data->get_ae_mask(hw_data);
+ 	accel_pci_dev->sku = hw_data->get_sku(hw_data);
+ 	/* If the device has no acceleration engines then ignore it. */
+ 	if (!hw_data->accel_mask || !hw_data->ae_mask ||
+diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
+index 9709f29b64540..26b13973f9ac9 100644
+--- a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
++++ b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
+@@ -11,12 +11,12 @@ static struct adf_hw_device_class c3xxxiov_class = {
+ 	.instances = 0
+ };
+ 
+-static u32 get_accel_mask(u32 fuse)
++static u32 get_accel_mask(struct adf_hw_device_data *self)
+ {
+ 	return ADF_C3XXXIOV_ACCELERATORS_MASK;
+ }
+ 
+-static u32 get_ae_mask(u32 fuse)
++static u32 get_ae_mask(struct adf_hw_device_data *self)
+ {
+ 	return ADF_C3XXXIOV_ACCELENGINES_MASK;
+ }
+diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+index ea932b6c4534f..067ca5e17d387 100644
+--- a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+@@ -119,8 +119,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	adf_init_hw_data_c3xxxiov(accel_dev->hw_device);
+ 
+ 	/* Get Accelerators and Accelerators Engines masks */
+-	hw_data->accel_mask = hw_data->get_accel_mask(hw_data->fuses);
+-	hw_data->ae_mask = hw_data->get_ae_mask(hw_data->fuses);
++	hw_data->accel_mask = hw_data->get_accel_mask(hw_data);
++	hw_data->ae_mask = hw_data->get_ae_mask(hw_data);
+ 	accel_pci_dev->sku = hw_data->get_sku(hw_data);
+ 
+ 	/* Create dev top level debugfs entry */
+diff --git a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c
+index 844ad5ed33fcd..c0b5751e96821 100644
+--- a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c
++++ b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c
+@@ -22,15 +22,33 @@ static struct adf_hw_device_class c62x_class = {
+ 	.instances = 0
+ };
+ 
+-static u32 get_accel_mask(u32 fuse)
++static u32 get_accel_mask(struct adf_hw_device_data *self)
+ {
+-	return (~fuse) >> ADF_C62X_ACCELERATORS_REG_OFFSET &
+-			  ADF_C62X_ACCELERATORS_MASK;
++	u32 straps = self->straps;
++	u32 fuses = self->fuses;
++	u32 accel;
++
++	accel = ~(fuses | straps) >> ADF_C62X_ACCELERATORS_REG_OFFSET;
++	accel &= ADF_C62X_ACCELERATORS_MASK;
++
++	return accel;
+ }
+ 
+-static u32 get_ae_mask(u32 fuse)
++static u32 get_ae_mask(struct adf_hw_device_data *self)
+ {
+-	return (~fuse) & ADF_C62X_ACCELENGINES_MASK;
++	u32 straps = self->straps;
++	u32 fuses = self->fuses;
++	unsigned long disabled;
++	u32 ae_disable;
++	int accel;
++
++	/* If an accel is disabled, then disable the corresponding two AEs */
++	disabled = ~get_accel_mask(self) & ADF_C62X_ACCELERATORS_MASK;
++	ae_disable = BIT(1) | BIT(0);
++	for_each_set_bit(accel, &disabled, ADF_C62X_MAX_ACCELERATORS)
++		straps |= ae_disable << (accel << 1);
++
++	return ~(fuses | straps) & ADF_C62X_ACCELENGINES_MASK;
+ }
+ 
+ static u32 get_num_accels(struct adf_hw_device_data *self)
+@@ -119,11 +137,13 @@ static void adf_enable_error_correction(struct adf_accel_dev *accel_dev)
+ {
+ 	struct adf_hw_device_data *hw_device = accel_dev->hw_device;
+ 	struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_C62X_PMISC_BAR];
++	unsigned long accel_mask = hw_device->accel_mask;
++	unsigned long ae_mask = hw_device->ae_mask;
+ 	void __iomem *csr = misc_bar->virt_addr;
+ 	unsigned int val, i;
+ 
+ 	/* Enable Accel Engine error detection & correction */
+-	for (i = 0; i < hw_device->get_num_aes(hw_device); i++) {
++	for_each_set_bit(i, &ae_mask, GET_MAX_ACCELENGINES(accel_dev)) {
+ 		val = ADF_CSR_RD(csr, ADF_C62X_AE_CTX_ENABLES(i));
+ 		val |= ADF_C62X_ENABLE_AE_ECC_ERR;
+ 		ADF_CSR_WR(csr, ADF_C62X_AE_CTX_ENABLES(i), val);
+@@ -133,7 +153,7 @@ static void adf_enable_error_correction(struct adf_accel_dev *accel_dev)
+ 	}
+ 
+ 	/* Enable shared memory error detection & correction */
+-	for (i = 0; i < hw_device->get_num_accels(hw_device); i++) {
++	for_each_set_bit(i, &accel_mask, ADF_C62X_MAX_ACCELERATORS) {
+ 		val = ADF_CSR_RD(csr, ADF_C62X_UERRSSMSH(i));
+ 		val |= ADF_C62X_ERRSSMSH_EN;
+ 		ADF_CSR_WR(csr, ADF_C62X_UERRSSMSH(i), val);
+diff --git a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.h b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.h
+index 88504d2bf30d5..a2e2961a21022 100644
+--- a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.h
++++ b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.h
+@@ -19,6 +19,7 @@
+ #define ADF_C62X_SMIAPF1_MASK_OFFSET (0x3A000 + 0x30)
+ #define ADF_C62X_SMIA0_MASK 0xFFFF
+ #define ADF_C62X_SMIA1_MASK 0x1
++#define ADF_C62X_SOFTSTRAP_CSR_OFFSET 0x2EC
+ /* Error detection and correction */
+ #define ADF_C62X_AE_CTX_ENABLES(i) (i * 0x1000 + 0x20818)
+ #define ADF_C62X_AE_MISC_CONTROL(i) (i * 0x1000 + 0x20960)
+diff --git a/drivers/crypto/qat/qat_c62x/adf_drv.c b/drivers/crypto/qat/qat_c62x/adf_drv.c
+index d8e7c9c255903..3da697a566ad7 100644
+--- a/drivers/crypto/qat/qat_c62x/adf_drv.c
++++ b/drivers/crypto/qat/qat_c62x/adf_drv.c
+@@ -126,10 +126,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	pci_read_config_byte(pdev, PCI_REVISION_ID, &accel_pci_dev->revid);
+ 	pci_read_config_dword(pdev, ADF_DEVICE_FUSECTL_OFFSET,
+ 			      &hw_data->fuses);
++	pci_read_config_dword(pdev, ADF_C62X_SOFTSTRAP_CSR_OFFSET,
++			      &hw_data->straps);
+ 
+ 	/* Get Accelerators and Accelerators Engines masks */
+-	hw_data->accel_mask = hw_data->get_accel_mask(hw_data->fuses);
+-	hw_data->ae_mask = hw_data->get_ae_mask(hw_data->fuses);
++	hw_data->accel_mask = hw_data->get_accel_mask(hw_data);
++	hw_data->ae_mask = hw_data->get_ae_mask(hw_data);
+ 	accel_pci_dev->sku = hw_data->get_sku(hw_data);
+ 	/* If the device has no acceleration engines then ignore it. */
+ 	if (!hw_data->accel_mask || !hw_data->ae_mask ||
+diff --git a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
+index 5e6909d6cfc65..ff5a57824eca4 100644
+--- a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
++++ b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
+@@ -11,12 +11,12 @@ static struct adf_hw_device_class c62xiov_class = {
+ 	.instances = 0
+ };
+ 
+-static u32 get_accel_mask(u32 fuse)
++static u32 get_accel_mask(struct adf_hw_device_data *self)
+ {
+ 	return ADF_C62XIOV_ACCELERATORS_MASK;
+ }
+ 
+-static u32 get_ae_mask(u32 fuse)
++static u32 get_ae_mask(struct adf_hw_device_data *self)
+ {
+ 	return ADF_C62XIOV_ACCELENGINES_MASK;
+ }
+diff --git a/drivers/crypto/qat/qat_c62xvf/adf_drv.c b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+index 6200ad448b119..51ea88c0b17d7 100644
+--- a/drivers/crypto/qat/qat_c62xvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+@@ -119,8 +119,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	adf_init_hw_data_c62xiov(accel_dev->hw_device);
+ 
+ 	/* Get Accelerators and Accelerators Engines masks */
+-	hw_data->accel_mask = hw_data->get_accel_mask(hw_data->fuses);
+-	hw_data->ae_mask = hw_data->get_ae_mask(hw_data->fuses);
++	hw_data->accel_mask = hw_data->get_accel_mask(hw_data);
++	hw_data->ae_mask = hw_data->get_ae_mask(hw_data);
+ 	accel_pci_dev->sku = hw_data->get_sku(hw_data);
+ 
+ 	/* Create dev top level debugfs entry */
+diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h
+index 06952ece53d91..961a8b3650c73 100644
+--- a/drivers/crypto/qat/qat_common/adf_accel_devices.h
++++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h
+@@ -23,7 +23,7 @@
+ #define ADF_PCI_MAX_BARS 3
+ #define ADF_DEVICE_NAME_LENGTH 32
+ #define ADF_ETR_MAX_RINGS_PER_BANK 16
+-#define ADF_MAX_MSIX_VECTOR_NAME 16
++#define ADF_MAX_MSIX_VECTOR_NAME 48
+ #define ADF_DEVICE_NAME_PREFIX "qat_"
+ 
+ enum adf_accel_capabilities {
+@@ -104,8 +104,8 @@ struct adf_etr_ring_data;
+ 
+ struct adf_hw_device_data {
+ 	struct adf_hw_device_class *dev_class;
+-	u32 (*get_accel_mask)(u32 fuse);
+-	u32 (*get_ae_mask)(u32 fuse);
++	u32 (*get_accel_mask)(struct adf_hw_device_data *self);
++	u32 (*get_ae_mask)(struct adf_hw_device_data *self);
+ 	u32 (*get_sram_bar_id)(struct adf_hw_device_data *self);
+ 	u32 (*get_misc_bar_id)(struct adf_hw_device_data *self);
+ 	u32 (*get_etr_bar_id)(struct adf_hw_device_data *self);
+@@ -131,6 +131,7 @@ struct adf_hw_device_data {
+ 	const char *fw_name;
+ 	const char *fw_mmp_name;
+ 	u32 fuses;
++	u32 straps;
+ 	u32 accel_capabilities_mask;
+ 	u32 instance_id;
+ 	u16 accel_mask;
+diff --git a/drivers/crypto/qat/qat_common/adf_transport_debug.c b/drivers/crypto/qat/qat_common/adf_transport_debug.c
+index dac25ba47260b..e6bdbd3c9b1f2 100644
+--- a/drivers/crypto/qat/qat_common/adf_transport_debug.c
++++ b/drivers/crypto/qat/qat_common/adf_transport_debug.c
+@@ -89,7 +89,7 @@ DEFINE_SEQ_ATTRIBUTE(adf_ring_debug);
+ int adf_ring_debugfs_add(struct adf_etr_ring_data *ring, const char *name)
+ {
+ 	struct adf_etr_ring_debug_entry *ring_debug;
+-	char entry_name[8];
++	char entry_name[16];
+ 
+ 	ring_debug = kzalloc(sizeof(*ring_debug), GFP_KERNEL);
+ 	if (!ring_debug)
+@@ -184,7 +184,7 @@ int adf_bank_debugfs_add(struct adf_etr_bank_data *bank)
+ {
+ 	struct adf_accel_dev *accel_dev = bank->accel_dev;
+ 	struct dentry *parent = accel_dev->transport->debug;
+-	char name[8];
++	char name[16];
+ 
+ 	snprintf(name, sizeof(name), "bank_%02d", bank->bank_number);
+ 	bank->bank_debug_dir = debugfs_create_dir(name, parent);
+diff --git a/drivers/crypto/qat/qat_common/qat_hal.c b/drivers/crypto/qat/qat_common/qat_hal.c
+index b40e81e0088f0..76d8470651b85 100644
+--- a/drivers/crypto/qat/qat_common/qat_hal.c
++++ b/drivers/crypto/qat/qat_common/qat_hal.c
+@@ -346,11 +346,12 @@ static void qat_hal_put_wakeup_event(struct icp_qat_fw_loader_handle *handle,
+ 
+ static int qat_hal_check_ae_alive(struct icp_qat_fw_loader_handle *handle)
+ {
++	unsigned long ae_mask = handle->hal_handle->ae_mask;
+ 	unsigned int base_cnt, cur_cnt;
+ 	unsigned char ae;
+ 	int times = MAX_RETRY_TIMES;
+ 
+-	for (ae = 0; ae < handle->hal_handle->ae_max_num; ae++) {
++	for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) {
+ 		base_cnt = qat_hal_rd_ae_csr(handle, ae, PROFILE_COUNT);
+ 		base_cnt &= 0xffff;
+ 
+@@ -384,6 +385,7 @@ int qat_hal_check_ae_active(struct icp_qat_fw_loader_handle *handle,
+ 
+ static void qat_hal_reset_timestamp(struct icp_qat_fw_loader_handle *handle)
+ {
++	unsigned long ae_mask = handle->hal_handle->ae_mask;
+ 	unsigned int misc_ctl;
+ 	unsigned char ae;
+ 
+@@ -393,7 +395,7 @@ static void qat_hal_reset_timestamp(struct icp_qat_fw_loader_handle *handle)
+ 		SET_GLB_CSR(handle, MISC_CONTROL, misc_ctl &
+ 			    (~MC_TIMESTAMP_ENABLE));
+ 
+-	for (ae = 0; ae < handle->hal_handle->ae_max_num; ae++) {
++	for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) {
+ 		qat_hal_wr_ae_csr(handle, ae, TIMESTAMP_LOW, 0);
+ 		qat_hal_wr_ae_csr(handle, ae, TIMESTAMP_HIGH, 0);
+ 	}
+@@ -438,6 +440,7 @@ static int qat_hal_init_esram(struct icp_qat_fw_loader_handle *handle)
+ #define SHRAM_INIT_CYCLES 2060
+ int qat_hal_clr_reset(struct icp_qat_fw_loader_handle *handle)
+ {
++	unsigned long ae_mask = handle->hal_handle->ae_mask;
+ 	unsigned int ae_reset_csr;
+ 	unsigned char ae;
+ 	unsigned int clk_csr;
+@@ -464,7 +467,7 @@ int qat_hal_clr_reset(struct icp_qat_fw_loader_handle *handle)
+ 		goto out_err;
+ 
+ 	/* Set undefined power-up/reset states to reasonable default values */
+-	for (ae = 0; ae < handle->hal_handle->ae_max_num; ae++) {
++	for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) {
+ 		qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES,
+ 				  INIT_CTX_ENABLE_VALUE);
+ 		qat_hal_wr_indr_csr(handle, ae, ICP_QAT_UCLO_AE_ALL_CTX,
+@@ -570,10 +573,11 @@ static void qat_hal_enable_ctx(struct icp_qat_fw_loader_handle *handle,
+ 
+ static void qat_hal_clear_xfer(struct icp_qat_fw_loader_handle *handle)
+ {
++	unsigned long ae_mask = handle->hal_handle->ae_mask;
+ 	unsigned char ae;
+ 	unsigned short reg;
+ 
+-	for (ae = 0; ae < handle->hal_handle->ae_max_num; ae++) {
++	for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) {
+ 		for (reg = 0; reg < ICP_QAT_UCLO_MAX_GPR_REG; reg++) {
+ 			qat_hal_init_rd_xfer(handle, ae, 0, ICP_SR_RD_ABS,
+ 					     reg, 0);
+@@ -585,6 +589,7 @@ static void qat_hal_clear_xfer(struct icp_qat_fw_loader_handle *handle)
+ 
+ static int qat_hal_clear_gpr(struct icp_qat_fw_loader_handle *handle)
+ {
++	unsigned long ae_mask = handle->hal_handle->ae_mask;
+ 	unsigned char ae;
+ 	unsigned int ctx_mask = ICP_QAT_UCLO_AE_ALL_CTX;
+ 	int times = MAX_RETRY_TIMES;
+@@ -592,7 +597,7 @@ static int qat_hal_clear_gpr(struct icp_qat_fw_loader_handle *handle)
+ 	unsigned int savctx = 0;
+ 	int ret = 0;
+ 
+-	for (ae = 0; ae < handle->hal_handle->ae_max_num; ae++) {
++	for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) {
+ 		csr_val = qat_hal_rd_ae_csr(handle, ae, AE_MISC_CONTROL);
+ 		csr_val &= ~(1 << MMC_SHARE_CS_BITPOS);
+ 		qat_hal_wr_ae_csr(handle, ae, AE_MISC_CONTROL, csr_val);
+@@ -613,7 +618,7 @@ static int qat_hal_clear_gpr(struct icp_qat_fw_loader_handle *handle)
+ 		qat_hal_wr_ae_csr(handle, ae, CTX_SIG_EVENTS_ACTIVE, 0);
+ 		qat_hal_enable_ctx(handle, ae, ctx_mask);
+ 	}
+-	for (ae = 0; ae < handle->hal_handle->ae_max_num; ae++) {
++	for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) {
+ 		/* wait for AE to finish */
+ 		do {
+ 			ret = qat_hal_wait_cycles(handle, ae, 20, 1);
+@@ -654,6 +659,8 @@ int qat_hal_init(struct adf_accel_dev *accel_dev)
+ 	struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ 	struct adf_bar *misc_bar =
+ 			&pci_info->pci_bars[hw_data->get_misc_bar_id(hw_data)];
++	unsigned long ae_mask = hw_data->ae_mask;
++	unsigned int csr_val = 0;
+ 	struct adf_bar *sram_bar;
+ 
+ 	handle = kzalloc(sizeof(*handle), GFP_KERNEL);
+@@ -689,9 +696,7 @@ int qat_hal_init(struct adf_accel_dev *accel_dev)
+ 	/* create AE objects */
+ 	handle->hal_handle->upc_mask = 0x1ffff;
+ 	handle->hal_handle->max_ustore = 0x4000;
+-	for (ae = 0; ae < ICP_QAT_UCLO_MAX_AE; ae++) {
+-		if (!(hw_data->ae_mask & (1 << ae)))
+-			continue;
++	for_each_set_bit(ae, &ae_mask, ICP_QAT_UCLO_MAX_AE) {
+ 		handle->hal_handle->aes[ae].free_addr = 0;
+ 		handle->hal_handle->aes[ae].free_size =
+ 		    handle->hal_handle->max_ustore;
+@@ -714,9 +719,7 @@ int qat_hal_init(struct adf_accel_dev *accel_dev)
+ 	}
+ 
+ 	/* Set SIGNATURE_ENABLE[0] to 0x1 in order to enable ALU_OUT csr */
+-	for (ae = 0; ae < handle->hal_handle->ae_max_num; ae++) {
+-		unsigned int csr_val = 0;
+-
++	for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) {
+ 		csr_val = qat_hal_rd_ae_csr(handle, ae, SIGNATURE_ENABLE);
+ 		csr_val |= 0x1;
+ 		qat_hal_wr_ae_csr(handle, ae, SIGNATURE_ENABLE, csr_val);
+diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
+index b975c263446db..6a0d01103136f 100644
+--- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
++++ b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
+@@ -24,15 +24,19 @@ static struct adf_hw_device_class dh895xcc_class = {
+ 	.instances = 0
+ };
+ 
+-static u32 get_accel_mask(u32 fuse)
++static u32 get_accel_mask(struct adf_hw_device_data *self)
+ {
+-	return (~fuse) >> ADF_DH895XCC_ACCELERATORS_REG_OFFSET &
+-			  ADF_DH895XCC_ACCELERATORS_MASK;
++	u32 fuses = self->fuses;
++
++	return ~fuses >> ADF_DH895XCC_ACCELERATORS_REG_OFFSET &
++			 ADF_DH895XCC_ACCELERATORS_MASK;
+ }
+ 
+-static u32 get_ae_mask(u32 fuse)
++static u32 get_ae_mask(struct adf_hw_device_data *self)
+ {
+-	return (~fuse) & ADF_DH895XCC_ACCELENGINES_MASK;
++	u32 fuses = self->fuses;
++
++	return ~fuses & ADF_DH895XCC_ACCELENGINES_MASK;
+ }
+ 
+ static u32 get_num_accels(struct adf_hw_device_data *self)
+@@ -131,11 +135,13 @@ static void adf_enable_error_correction(struct adf_accel_dev *accel_dev)
+ {
+ 	struct adf_hw_device_data *hw_device = accel_dev->hw_device;
+ 	struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_DH895XCC_PMISC_BAR];
++	unsigned long accel_mask = hw_device->accel_mask;
++	unsigned long ae_mask = hw_device->ae_mask;
+ 	void __iomem *csr = misc_bar->virt_addr;
+ 	unsigned int val, i;
+ 
+ 	/* Enable Accel Engine error detection & correction */
+-	for (i = 0; i < hw_device->get_num_aes(hw_device); i++) {
++	for_each_set_bit(i, &ae_mask, GET_MAX_ACCELENGINES(accel_dev)) {
+ 		val = ADF_CSR_RD(csr, ADF_DH895XCC_AE_CTX_ENABLES(i));
+ 		val |= ADF_DH895XCC_ENABLE_AE_ECC_ERR;
+ 		ADF_CSR_WR(csr, ADF_DH895XCC_AE_CTX_ENABLES(i), val);
+@@ -145,7 +151,7 @@ static void adf_enable_error_correction(struct adf_accel_dev *accel_dev)
+ 	}
+ 
+ 	/* Enable shared memory error detection & correction */
+-	for (i = 0; i < hw_device->get_num_accels(hw_device); i++) {
++	for_each_set_bit(i, &accel_mask, ADF_DH895XCC_MAX_ACCELERATORS) {
+ 		val = ADF_CSR_RD(csr, ADF_DH895XCC_UERRSSMSH(i));
+ 		val |= ADF_DH895XCC_ERRSSMSH_EN;
+ 		ADF_CSR_WR(csr, ADF_DH895XCC_UERRSSMSH(i), val);
+diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
+index ecb4f6f20e22b..d7941bc2bafd6 100644
+--- a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
++++ b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
+@@ -128,8 +128,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 			      &hw_data->fuses);
+ 
+ 	/* Get Accelerators and Accelerators Engines masks */
+-	hw_data->accel_mask = hw_data->get_accel_mask(hw_data->fuses);
+-	hw_data->ae_mask = hw_data->get_ae_mask(hw_data->fuses);
++	hw_data->accel_mask = hw_data->get_accel_mask(hw_data);
++	hw_data->ae_mask = hw_data->get_ae_mask(hw_data);
+ 	accel_pci_dev->sku = hw_data->get_sku(hw_data);
+ 	/* If the device has no acceleration engines then ignore it. */
+ 	if (!hw_data->accel_mask || !hw_data->ae_mask ||
+diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
+index fc4cf141b1dea..7930e4c7883db 100644
+--- a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
++++ b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
+@@ -11,12 +11,12 @@ static struct adf_hw_device_class dh895xcciov_class = {
+ 	.instances = 0
+ };
+ 
+-static u32 get_accel_mask(u32 fuse)
++static u32 get_accel_mask(struct adf_hw_device_data *self)
+ {
+ 	return ADF_DH895XCCIOV_ACCELERATORS_MASK;
+ }
+ 
+-static u32 get_ae_mask(u32 fuse)
++static u32 get_ae_mask(struct adf_hw_device_data *self)
+ {
+ 	return ADF_DH895XCCIOV_ACCELENGINES_MASK;
+ }
+diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+index 737508ded37b4..29999da716cc9 100644
+--- a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+@@ -119,8 +119,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	adf_init_hw_data_dh895xcciov(accel_dev->hw_device);
+ 
+ 	/* Get Accelerators and Accelerators Engines masks */
+-	hw_data->accel_mask = hw_data->get_accel_mask(hw_data->fuses);
+-	hw_data->ae_mask = hw_data->get_ae_mask(hw_data->fuses);
++	hw_data->accel_mask = hw_data->get_accel_mask(hw_data);
++	hw_data->ae_mask = hw_data->get_ae_mask(hw_data);
+ 	accel_pci_dev->sku = hw_data->get_sku(hw_data);
+ 
+ 	/* Create dev top level debugfs entry */
+diff --git a/drivers/devfreq/event/rockchip-dfi.c b/drivers/devfreq/event/rockchip-dfi.c
+index 9a88faaf8b27f..4dafdf23197b9 100644
+--- a/drivers/devfreq/event/rockchip-dfi.c
++++ b/drivers/devfreq/event/rockchip-dfi.c
+@@ -194,14 +194,15 @@ static int rockchip_dfi_probe(struct platform_device *pdev)
+ 		return PTR_ERR(data->clk);
+ 	}
+ 
+-	/* try to find the optional reference to the pmu syscon */
+ 	node = of_parse_phandle(np, "rockchip,pmu", 0);
+-	if (node) {
+-		data->regmap_pmu = syscon_node_to_regmap(node);
+-		of_node_put(node);
+-		if (IS_ERR(data->regmap_pmu))
+-			return PTR_ERR(data->regmap_pmu);
+-	}
++	if (!node)
++		return dev_err_probe(&pdev->dev, -ENODEV, "Can't find pmu_grf registers\n");
++
++	data->regmap_pmu = syscon_node_to_regmap(node);
++	of_node_put(node);
++	if (IS_ERR(data->regmap_pmu))
++		return PTR_ERR(data->regmap_pmu);
++
+ 	data->dev = dev;
+ 
+ 	desc = devm_kzalloc(dev, sizeof(*desc), GFP_KERNEL);
+diff --git a/drivers/dma/pxa_dma.c b/drivers/dma/pxa_dma.c
+index 68d9d60c051d9..9ce75ff9fa1cc 100644
+--- a/drivers/dma/pxa_dma.c
++++ b/drivers/dma/pxa_dma.c
+@@ -723,7 +723,6 @@ static void pxad_free_desc(struct virt_dma_desc *vd)
+ 	dma_addr_t dma;
+ 	struct pxad_desc_sw *sw_desc = to_pxad_sw_desc(vd);
+ 
+-	BUG_ON(sw_desc->nb_desc == 0);
+ 	for (i = sw_desc->nb_desc - 1; i >= 0; i--) {
+ 		if (i > 0)
+ 			dma = sw_desc->hw_desc[i - 1]->ddadr;
+diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
+index 35d81bd857f11..a1adc8d91fd8d 100644
+--- a/drivers/dma/ti/edma.c
++++ b/drivers/dma/ti/edma.c
+@@ -2459,7 +2459,7 @@ static int edma_probe(struct platform_device *pdev)
+ 	if (irq < 0 && node)
+ 		irq = irq_of_parse_and_map(node, 0);
+ 
+-	if (irq >= 0) {
++	if (irq > 0) {
+ 		irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_ccint",
+ 					  dev_name(dev));
+ 		ret = devm_request_irq(dev, irq, dma_irq_handler, 0, irq_name,
+@@ -2475,7 +2475,7 @@ static int edma_probe(struct platform_device *pdev)
+ 	if (irq < 0 && node)
+ 		irq = irq_of_parse_and_map(node, 2);
+ 
+-	if (irq >= 0) {
++	if (irq > 0) {
+ 		irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_ccerrint",
+ 					  dev_name(dev));
+ 		ret = devm_request_irq(dev, irq, dma_ccerr_handler, 0, irq_name,
+diff --git a/drivers/firmware/ti_sci.c b/drivers/firmware/ti_sci.c
+index 896f53ec78571..fe6be0771a076 100644
+--- a/drivers/firmware/ti_sci.c
++++ b/drivers/firmware/ti_sci.c
+@@ -190,19 +190,6 @@ static int ti_sci_debugfs_create(struct platform_device *pdev,
+ 	return 0;
+ }
+ 
+-/**
+- * ti_sci_debugfs_destroy() - clean up log debug file
+- * @pdev:	platform device pointer
+- * @info:	Pointer to SCI entity information
+- */
+-static void ti_sci_debugfs_destroy(struct platform_device *pdev,
+-				   struct ti_sci_info *info)
+-{
+-	if (IS_ERR(info->debug_region))
+-		return;
+-
+-	debugfs_remove(info->d);
+-}
+ #else /* CONFIG_DEBUG_FS */
+ static inline int ti_sci_debugfs_create(struct platform_device *dev,
+ 					struct ti_sci_info *info)
+@@ -3510,43 +3497,12 @@ out:
+ 	return ret;
+ }
+ 
+-static int ti_sci_remove(struct platform_device *pdev)
+-{
+-	struct ti_sci_info *info;
+-	struct device *dev = &pdev->dev;
+-	int ret = 0;
+-
+-	of_platform_depopulate(dev);
+-
+-	info = platform_get_drvdata(pdev);
+-
+-	if (info->nb.notifier_call)
+-		unregister_restart_handler(&info->nb);
+-
+-	mutex_lock(&ti_sci_list_mutex);
+-	if (info->users)
+-		ret = -EBUSY;
+-	else
+-		list_del(&info->node);
+-	mutex_unlock(&ti_sci_list_mutex);
+-
+-	if (!ret) {
+-		ti_sci_debugfs_destroy(pdev, info);
+-
+-		/* Safe to free channels since no more users */
+-		mbox_free_channel(info->chan_tx);
+-		mbox_free_channel(info->chan_rx);
+-	}
+-
+-	return ret;
+-}
+-
+ static struct platform_driver ti_sci_driver = {
+ 	.probe = ti_sci_probe,
+-	.remove = ti_sci_remove,
+ 	.driver = {
+ 		   .name = "ti-sci",
+ 		   .of_match_table = of_match_ptr(ti_sci_of_match),
++		   .suppress_bind_attrs = true,
+ 	},
+ };
+ module_platform_driver(ti_sci_driver);
+diff --git a/drivers/gpu/drm/bridge/tc358768.c b/drivers/gpu/drm/bridge/tc358768.c
+index b4a69b2104514..48dab19f3e236 100644
+--- a/drivers/gpu/drm/bridge/tc358768.c
++++ b/drivers/gpu/drm/bridge/tc358768.c
+@@ -217,6 +217,10 @@ static void tc358768_update_bits(struct tc358768_priv *priv, u32 reg, u32 mask,
+ 	u32 tmp, orig;
+ 
+ 	tc358768_read(priv, reg, &orig);
++
++	if (priv->error)
++		return;
++
+ 	tmp = orig & ~mask;
+ 	tmp |= val & mask;
+ 	if (tmp != orig)
+@@ -633,6 +637,7 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ {
+ 	struct tc358768_priv *priv = bridge_to_tc358768(bridge);
+ 	struct mipi_dsi_device *dsi_dev = priv->output.dev;
++	unsigned long mode_flags = dsi_dev->mode_flags;
+ 	u32 val, val2, lptxcnt, hact, data_type;
+ 	s32 raw_val;
+ 	const struct drm_display_mode *mode;
+@@ -640,6 +645,11 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 	u32 dsiclk, dsibclk;
+ 	int ret, i;
+ 
++	if (mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS) {
++		dev_warn_once(priv->dev, "Non-continuous mode unimplemented, falling back to continuous\n");
++		mode_flags &= ~MIPI_DSI_CLOCK_NON_CONTINUOUS;
++	}
++
+ 	tc358768_hw_enable(priv);
+ 
+ 	ret = tc358768_sw_reset(priv);
+@@ -775,8 +785,8 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 		val |= BIT(i + 1);
+ 	tc358768_write(priv, TC358768_HSTXVREGEN, val);
+ 
+-	if (!(dsi_dev->mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS))
+-		tc358768_write(priv, TC358768_TXOPTIONCNTRL, 0x1);
++	tc358768_write(priv, TC358768_TXOPTIONCNTRL,
++		       (mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS) ? 0 : BIT(0));
+ 
+ 	/* TXTAGOCNT[26:16] RXTASURECNT[10:0] */
+ 	val = tc358768_to_ns((lptxcnt + 1) * dsibclk_nsk * 4);
+@@ -812,11 +822,12 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 	tc358768_write(priv, TC358768_DSI_HACT, hact);
+ 
+ 	/* VSYNC polarity */
+-	if (!(mode->flags & DRM_MODE_FLAG_NVSYNC))
+-		tc358768_update_bits(priv, TC358768_CONFCTL, BIT(5), BIT(5));
++	tc358768_update_bits(priv, TC358768_CONFCTL, BIT(5),
++			     (mode->flags & DRM_MODE_FLAG_PVSYNC) ? BIT(5) : 0);
++
+ 	/* HSYNC polarity */
+-	if (mode->flags & DRM_MODE_FLAG_PHSYNC)
+-		tc358768_update_bits(priv, TC358768_PP_MISC, BIT(0), BIT(0));
++	tc358768_update_bits(priv, TC358768_PP_MISC, BIT(0),
++			     (mode->flags & DRM_MODE_FLAG_PHSYNC) ? BIT(0) : 0);
+ 
+ 	/* Start DSI Tx */
+ 	tc358768_write(priv, TC358768_DSI_START, 0x1);
+@@ -832,7 +843,7 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 
+ 	val |= TC358768_DSI_CONTROL_TXMD;
+ 
+-	if (!(dsi_dev->mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS))
++	if (!(mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS))
+ 		val |= TC358768_DSI_CONTROL_HSCKMD;
+ 
+ 	if (dsi_dev->mode_flags & MIPI_DSI_MODE_EOT_PACKET)
+diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
+index 993898a2c7791..738e60139db90 100644
+--- a/drivers/gpu/drm/drm_syncobj.c
++++ b/drivers/gpu/drm/drm_syncobj.c
+@@ -983,7 +983,8 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
+ 		fence = drm_syncobj_fence_get(syncobjs[i]);
+ 		if (!fence || dma_fence_chain_find_seqno(&fence, points[i])) {
+ 			dma_fence_put(fence);
+-			if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
++			if (flags & (DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
++				     DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE)) {
+ 				continue;
+ 			} else {
+ 				timeout = -EINVAL;
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index e83b1c406b96a..cc3cb5b63d444 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -320,6 +320,9 @@ static int mtk_crtc_ddp_hw_init(struct mtk_drm_crtc *mtk_crtc)
+ 		unsigned int local_layer;
+ 
+ 		plane_state = to_mtk_plane_state(plane->state);
++
++		/* should not enable layer before crtc enabled */
++		plane_state->pending.enable = false;
+ 		comp = mtk_drm_ddp_comp_for_plane(crtc, plane, &local_layer);
+ 		if (comp)
+ 			mtk_ddp_comp_layer_config(comp, local_layer,
+diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
+index 14d90dc376e71..061ef6c008592 100644
+--- a/drivers/gpu/drm/radeon/evergreen.c
++++ b/drivers/gpu/drm/radeon/evergreen.c
+@@ -4819,14 +4819,15 @@ restart_ih:
+ 			break;
+ 		case 44: /* hdmi */
+ 			afmt_idx = src_data;
+-			if (!(afmt_status[afmt_idx] & AFMT_AZ_FORMAT_WTRIG))
+-				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+-
+ 			if (afmt_idx > 5) {
+ 				DRM_ERROR("Unhandled interrupt: %d %d\n",
+ 					  src_id, src_data);
+ 				break;
+ 			}
++
++			if (!(afmt_status[afmt_idx] & AFMT_AZ_FORMAT_WTRIG))
++				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
+ 			afmt_status[afmt_idx] &= ~AFMT_AZ_FORMAT_WTRIG;
+ 			queue_hdmi = true;
+ 			DRM_DEBUG("IH: HDMI%d\n", afmt_idx + 1);
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+index adeaa0140f0f7..53cad1003ad77 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+@@ -1145,6 +1145,7 @@ static int cdn_dp_probe(struct platform_device *pdev)
+ 	struct cdn_dp_device *dp;
+ 	struct extcon_dev *extcon;
+ 	struct phy *phy;
++	int ret;
+ 	int i;
+ 
+ 	dp = devm_kzalloc(dev, sizeof(*dp), GFP_KERNEL);
+@@ -1185,9 +1186,19 @@ static int cdn_dp_probe(struct platform_device *pdev)
+ 	mutex_init(&dp->lock);
+ 	dev_set_drvdata(dev, dp);
+ 
+-	cdn_dp_audio_codec_init(dp, dev);
++	ret = cdn_dp_audio_codec_init(dp, dev);
++	if (ret)
++		return ret;
++
++	ret = component_add(dev, &cdn_dp_component_ops);
++	if (ret)
++		goto err_audio_deinit;
+ 
+-	return component_add(dev, &cdn_dp_component_ops);
++	return 0;
++
++err_audio_deinit:
++	platform_device_unregister(dp->audio_pdev);
++	return ret;
+ }
+ 
+ static int cdn_dp_remove(struct platform_device *pdev)
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+index 22ff4a5929768..6038aafa29c68 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+@@ -38,7 +38,7 @@ static int rockchip_gem_iommu_map(struct rockchip_gem_object *rk_obj)
+ 
+ 	ret = iommu_map_sgtable(private->domain, rk_obj->dma_addr, rk_obj->sgt,
+ 				prot);
+-	if (ret < rk_obj->base.size) {
++	if (ret < (ssize_t)rk_obj->base.size) {
+ 		DRM_ERROR("failed to map buffer: size=%zd request_size=%zd\n",
+ 			  ret, rk_obj->base.size);
+ 		ret = -ENOMEM;
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 65dde9df9793e..05fcc9e078d6d 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -1533,7 +1533,8 @@ static struct drm_crtc_state *vop_crtc_duplicate_state(struct drm_crtc *crtc)
+ 	if (WARN_ON(!crtc->state))
+ 		return NULL;
+ 
+-	rockchip_state = kzalloc(sizeof(*rockchip_state), GFP_KERNEL);
++	rockchip_state = kmemdup(to_rockchip_crtc_state(crtc->state),
++				 sizeof(*rockchip_state), GFP_KERNEL);
+ 	if (!rockchip_state)
+ 		return NULL;
+ 
+@@ -1558,7 +1559,10 @@ static void vop_crtc_reset(struct drm_crtc *crtc)
+ 	if (crtc->state)
+ 		vop_crtc_destroy_state(crtc, crtc->state);
+ 
+-	__drm_atomic_helper_crtc_reset(crtc, &crtc_state->base);
++	if (crtc_state)
++		__drm_atomic_helper_crtc_reset(crtc, &crtc_state->base);
++	else
++		__drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+ 
+ #ifdef CONFIG_DRM_ANALOGIX_DP
+diff --git a/drivers/hid/hid-cp2112.c b/drivers/hid/hid-cp2112.c
+index d902fe43cb818..6a11b3a329aec 100644
+--- a/drivers/hid/hid-cp2112.c
++++ b/drivers/hid/hid-cp2112.c
+@@ -1157,8 +1157,6 @@ static unsigned int cp2112_gpio_irq_startup(struct irq_data *d)
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ 	struct cp2112_device *dev = gpiochip_get_data(gc);
+ 
+-	INIT_DELAYED_WORK(&dev->gpio_poll_worker, cp2112_gpio_poll_callback);
+-
+ 	if (!dev->gpio_poll) {
+ 		dev->gpio_poll = true;
+ 		schedule_delayed_work(&dev->gpio_poll_worker, 0);
+@@ -1173,7 +1171,10 @@ static void cp2112_gpio_irq_shutdown(struct irq_data *d)
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ 	struct cp2112_device *dev = gpiochip_get_data(gc);
+ 
+-	cancel_delayed_work_sync(&dev->gpio_poll_worker);
++	if (!dev->irq_mask) {
++		dev->gpio_poll = false;
++		cancel_delayed_work_sync(&dev->gpio_poll_worker);
++	}
+ }
+ 
+ static int cp2112_gpio_irq_type(struct irq_data *d, unsigned int type)
+@@ -1354,6 +1355,8 @@ static int cp2112_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	girq->handler = handle_simple_irq;
+ 	girq->threaded = true;
+ 
++	INIT_DELAYED_WORK(&dev->gpio_poll_worker, cp2112_gpio_poll_callback);
++
+ 	ret = gpiochip_add_data(&dev->gc, dev);
+ 	if (ret < 0) {
+ 		hid_err(hdev, "error registering gpio chip\n");
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 651fa0966939e..8bdcd4027416f 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -31,11 +31,6 @@ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Benjamin Tissoires <benjamin.tissoires@gmail.com>");
+ MODULE_AUTHOR("Nestor Lopez Casado <nlopezcasad@logitech.com>");
+ 
+-static bool disable_raw_mode;
+-module_param(disable_raw_mode, bool, 0644);
+-MODULE_PARM_DESC(disable_raw_mode,
+-	"Disable Raw mode reporting for touchpads and keep firmware gestures.");
+-
+ static bool disable_tap_to_click;
+ module_param(disable_tap_to_click, bool, 0644);
+ MODULE_PARM_DESC(disable_tap_to_click,
+@@ -66,7 +61,7 @@ MODULE_PARM_DESC(disable_tap_to_click,
+ /* bits 2..20 are reserved for classes */
+ /* #define HIDPP_QUIRK_CONNECT_EVENTS		BIT(21) disabled */
+ #define HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS	BIT(22)
+-#define HIDPP_QUIRK_NO_HIDINPUT			BIT(23)
++#define HIDPP_QUIRK_DELAYED_INIT		BIT(23)
+ #define HIDPP_QUIRK_FORCE_OUTPUT_REPORTS	BIT(24)
+ #define HIDPP_QUIRK_UNIFYING			BIT(25)
+ #define HIDPP_QUIRK_HI_RES_SCROLL_1P0		BIT(26)
+@@ -85,8 +80,6 @@ MODULE_PARM_DESC(disable_tap_to_click,
+ 					 HIDPP_QUIRK_HI_RES_SCROLL_X2120 | \
+ 					 HIDPP_QUIRK_HI_RES_SCROLL_X2121)
+ 
+-#define HIDPP_QUIRK_DELAYED_INIT		HIDPP_QUIRK_NO_HIDINPUT
+-
+ #define HIDPP_CAPABILITY_HIDPP10_BATTERY	BIT(0)
+ #define HIDPP_CAPABILITY_HIDPP20_BATTERY	BIT(1)
+ #define HIDPP_CAPABILITY_BATTERY_MILEAGE	BIT(2)
+@@ -1495,15 +1488,14 @@ static int hidpp_battery_get_property(struct power_supply *psy,
+ /* -------------------------------------------------------------------------- */
+ #define HIDPP_PAGE_WIRELESS_DEVICE_STATUS			0x1d4b
+ 
+-static int hidpp_set_wireless_feature_index(struct hidpp_device *hidpp)
++static int hidpp_get_wireless_feature_index(struct hidpp_device *hidpp, u8 *feature_index)
+ {
+ 	u8 feature_type;
+ 	int ret;
+ 
+ 	ret = hidpp_root_get_feature(hidpp,
+ 				     HIDPP_PAGE_WIRELESS_DEVICE_STATUS,
+-				     &hidpp->wireless_feature_index,
+-				     &feature_type);
++				     feature_index, &feature_type);
+ 
+ 	return ret;
+ }
+@@ -3673,6 +3665,13 @@ static void hidpp_connect_event(struct hidpp_device *hidpp)
+ 		}
+ 	}
+ 
++	if (hidpp->protocol_major >= 2) {
++		u8 feature_index;
++
++		if (!hidpp_get_wireless_feature_index(hidpp, &feature_index))
++			hidpp->wireless_feature_index = feature_index;
++	}
++
+ 	if (hidpp->name == hdev->name && hidpp->protocol_major >= 2) {
+ 		name = hidpp_get_device_name(hidpp);
+ 		if (name) {
+@@ -3707,7 +3706,7 @@ static void hidpp_connect_event(struct hidpp_device *hidpp)
+ 	if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL)
+ 		hi_res_scroll_enable(hidpp);
+ 
+-	if (!(hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT) || hidpp->delayed_input)
++	if (!(hidpp->quirks & HIDPP_QUIRK_DELAYED_INIT) || hidpp->delayed_input)
+ 		/* if the input nodes are already created, we can stop now */
+ 		return;
+ 
+@@ -3810,7 +3809,6 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	bool connected;
+ 	unsigned int connect_mask = HID_CONNECT_DEFAULT;
+ 	struct hidpp_ff_private_data data;
+-	bool will_restart = false;
+ 
+ 	/* report_fixup needs drvdata to be set before we call hid_parse */
+ 	hidpp = devm_kzalloc(&hdev->dev, sizeof(*hidpp), GFP_KERNEL);
+@@ -3851,11 +3849,6 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	    hidpp_application_equals(hdev, HID_GD_KEYBOARD))
+ 		hidpp->quirks |= HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS;
+ 
+-	if (disable_raw_mode) {
+-		hidpp->quirks &= ~HIDPP_QUIRK_CLASS_WTP;
+-		hidpp->quirks &= ~HIDPP_QUIRK_NO_HIDINPUT;
+-	}
+-
+ 	if (hidpp->quirks & HIDPP_QUIRK_CLASS_WTP) {
+ 		ret = wtp_allocate(hdev, id);
+ 		if (ret)
+@@ -3866,10 +3859,6 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 			return ret;
+ 	}
+ 
+-	if (hidpp->quirks & HIDPP_QUIRK_DELAYED_INIT ||
+-	    hidpp->quirks & HIDPP_QUIRK_UNIFYING)
+-		will_restart = true;
+-
+ 	INIT_WORK(&hidpp->work, delayed_work_cb);
+ 	mutex_init(&hidpp->send_mutex);
+ 	init_waitqueue_head(&hidpp->wait);
+@@ -3881,10 +3870,12 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 			 hdev->name);
+ 
+ 	/*
+-	 * Plain USB connections need to actually call start and open
+-	 * on the transport driver to allow incoming data.
++	 * First call hid_hw_start(hdev, 0) to allow IO without connecting any
++	 * hid subdrivers (hid-input, hidraw). This allows retrieving the dev's
++	 * name and serial number and store these in hdev->name and hdev->uniq,
++	 * before the hid-input and hidraw drivers expose these to userspace.
+ 	 */
+-	ret = hid_hw_start(hdev, will_restart ? 0 : connect_mask);
++	ret = hid_hw_start(hdev, 0);
+ 	if (ret) {
+ 		hid_err(hdev, "hw start failed\n");
+ 		goto hid_hw_start_fail;
+@@ -3917,15 +3908,6 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 		hidpp_overwrite_name(hdev);
+ 	}
+ 
+-	if (connected && hidpp->protocol_major >= 2) {
+-		ret = hidpp_set_wireless_feature_index(hidpp);
+-		if (ret == -ENOENT)
+-			hidpp->wireless_feature_index = 0;
+-		else if (ret)
+-			goto hid_hw_init_fail;
+-		ret = 0;
+-	}
+-
+ 	if (connected && (hidpp->quirks & HIDPP_QUIRK_CLASS_WTP)) {
+ 		ret = wtp_get_config(hidpp);
+ 		if (ret)
+@@ -3939,21 +3921,14 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	schedule_work(&hidpp->work);
+ 	flush_work(&hidpp->work);
+ 
+-	if (will_restart) {
+-		/* Reset the HID node state */
+-		hid_device_io_stop(hdev);
+-		hid_hw_close(hdev);
+-		hid_hw_stop(hdev);
++	if (hidpp->quirks & HIDPP_QUIRK_DELAYED_INIT)
++		connect_mask &= ~HID_CONNECT_HIDINPUT;
+ 
+-		if (hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT)
+-			connect_mask &= ~HID_CONNECT_HIDINPUT;
+-
+-		/* Now export the actual inputs and hidraw nodes to the world */
+-		ret = hid_hw_start(hdev, connect_mask);
+-		if (ret) {
+-			hid_err(hdev, "%s:hid_hw_start returned error\n", __func__);
+-			goto hid_hw_start_fail;
+-		}
++	/* Now export the actual inputs and hidraw nodes to the world */
++	ret = hid_connect(hdev, connect_mask);
++	if (ret) {
++		hid_err(hdev, "%s:hid_connect returned error %d\n", __func__, ret);
++		goto hid_hw_init_fail;
+ 	}
+ 
+ 	if (hidpp->quirks & HIDPP_QUIRK_CLASS_G920) {
+@@ -3964,6 +3939,11 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 				 ret);
+ 	}
+ 
++	/*
++	 * This relies on logi_dj_ll_close() being a no-op so that DJ connection
++	 * events will still be received.
++	 */
++	hid_hw_close(hdev);
+ 	return ret;
+ 
+ hid_hw_init_fail:
+diff --git a/drivers/hwmon/axi-fan-control.c b/drivers/hwmon/axi-fan-control.c
+index e3f6b03e6764b..4b8250f2bb421 100644
+--- a/drivers/hwmon/axi-fan-control.c
++++ b/drivers/hwmon/axi-fan-control.c
+@@ -8,6 +8,7 @@
+ #include <linux/clk.h>
+ #include <linux/fpga/adi-axi-common.h>
+ #include <linux/hwmon.h>
++#include <linux/hwmon-sysfs.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
+ #include <linux/kernel.h>
+@@ -23,6 +24,14 @@
+ #define ADI_REG_PWM_PERIOD	0x00c0
+ #define ADI_REG_TACH_MEASUR	0x00c4
+ #define ADI_REG_TEMPERATURE	0x00c8
++#define ADI_REG_TEMP_00_H	0x0100
++#define ADI_REG_TEMP_25_L	0x0104
++#define ADI_REG_TEMP_25_H	0x0108
++#define ADI_REG_TEMP_50_L	0x010c
++#define ADI_REG_TEMP_50_H	0x0110
++#define ADI_REG_TEMP_75_L	0x0114
++#define ADI_REG_TEMP_75_H	0x0118
++#define ADI_REG_TEMP_100_L	0x011c
+ 
+ #define ADI_REG_IRQ_MASK	0x0040
+ #define ADI_REG_IRQ_PENDING	0x0044
+@@ -62,6 +71,39 @@ static inline u32 axi_ioread(const u32 reg,
+ 	return ioread32(ctl->base + reg);
+ }
+ 
++/*
++ * The core calculates the temperature as:
++ *	T = /raw * 509.3140064 / 65535) - 280.2308787
++ */
++static ssize_t axi_fan_control_show(struct device *dev, struct device_attribute *da, char *buf)
++{
++	struct axi_fan_control_data *ctl = dev_get_drvdata(dev);
++	struct sensor_device_attribute *attr = to_sensor_dev_attr(da);
++	u32 temp = axi_ioread(attr->index, ctl);
++
++	temp = DIV_ROUND_CLOSEST_ULL(temp * 509314ULL, 65535) - 280230;
++
++	return sprintf(buf, "%u\n", temp);
++}
++
++static ssize_t axi_fan_control_store(struct device *dev, struct device_attribute *da,
++				     const char *buf, size_t count)
++{
++	struct axi_fan_control_data *ctl = dev_get_drvdata(dev);
++	struct sensor_device_attribute *attr = to_sensor_dev_attr(da);
++	u32 temp;
++	int ret;
++
++	ret = kstrtou32(buf, 10, &temp);
++	if (ret)
++		return ret;
++
++	temp = DIV_ROUND_CLOSEST_ULL((temp + 280230) * 65535ULL, 509314);
++	axi_iowrite(temp, attr->index, ctl);
++
++	return count;
++}
++
+ static long axi_fan_control_get_pwm_duty(const struct axi_fan_control_data *ctl)
+ {
+ 	u32 pwm_width = axi_ioread(ADI_REG_PWM_WIDTH, ctl);
+@@ -370,6 +412,36 @@ static const struct hwmon_chip_info axi_chip_info = {
+ 	.info = axi_fan_control_info,
+ };
+ 
++/* temperature threshold below which PWM should be 0% */
++static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point1_temp_hyst, axi_fan_control, ADI_REG_TEMP_00_H);
++/* temperature threshold above which PWM should be 25% */
++static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point1_temp, axi_fan_control, ADI_REG_TEMP_25_L);
++/* temperature threshold below which PWM should be 25% */
++static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point2_temp_hyst, axi_fan_control, ADI_REG_TEMP_25_H);
++/* temperature threshold above which PWM should be 50% */
++static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point2_temp, axi_fan_control, ADI_REG_TEMP_50_L);
++/* temperature threshold below which PWM should be 50% */
++static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point3_temp_hyst, axi_fan_control, ADI_REG_TEMP_50_H);
++/* temperature threshold above which PWM should be 75% */
++static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point3_temp, axi_fan_control, ADI_REG_TEMP_75_L);
++/* temperature threshold below which PWM should be 75% */
++static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point4_temp_hyst, axi_fan_control, ADI_REG_TEMP_75_H);
++/* temperature threshold above which PWM should be 100% */
++static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point4_temp, axi_fan_control, ADI_REG_TEMP_100_L);
++
++static struct attribute *axi_fan_control_attrs[] = {
++	&sensor_dev_attr_pwm1_auto_point1_temp_hyst.dev_attr.attr,
++	&sensor_dev_attr_pwm1_auto_point1_temp.dev_attr.attr,
++	&sensor_dev_attr_pwm1_auto_point2_temp_hyst.dev_attr.attr,
++	&sensor_dev_attr_pwm1_auto_point2_temp.dev_attr.attr,
++	&sensor_dev_attr_pwm1_auto_point3_temp_hyst.dev_attr.attr,
++	&sensor_dev_attr_pwm1_auto_point3_temp.dev_attr.attr,
++	&sensor_dev_attr_pwm1_auto_point4_temp_hyst.dev_attr.attr,
++	&sensor_dev_attr_pwm1_auto_point4_temp.dev_attr.attr,
++	NULL,
++};
++ATTRIBUTE_GROUPS(axi_fan_control);
++
+ static const u32 version_1_0_0 = ADI_AXI_PCORE_VER(1, 0, 'a');
+ 
+ static const struct of_device_id axi_fan_control_of_match[] = {
+@@ -423,6 +495,21 @@ static int axi_fan_control_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
++	ret = axi_fan_control_init(ctl, pdev->dev.of_node);
++	if (ret) {
++		dev_err(&pdev->dev, "Failed to initialize device\n");
++		return ret;
++	}
++
++	ctl->hdev = devm_hwmon_device_register_with_info(&pdev->dev,
++							 name,
++							 ctl,
++							 &axi_chip_info,
++							 axi_fan_control_groups);
++
++	if (IS_ERR(ctl->hdev))
++		return PTR_ERR(ctl->hdev);
++
+ 	ctl->irq = platform_get_irq(pdev, 0);
+ 	if (ctl->irq < 0)
+ 		return ctl->irq;
+@@ -436,19 +523,7 @@ static int axi_fan_control_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	ret = axi_fan_control_init(ctl, pdev->dev.of_node);
+-	if (ret) {
+-		dev_err(&pdev->dev, "Failed to initialize device\n");
+-		return ret;
+-	}
+-
+-	ctl->hdev = devm_hwmon_device_register_with_info(&pdev->dev,
+-							 name,
+-							 ctl,
+-							 &axi_chip_info,
+-							 NULL);
+-
+-	return PTR_ERR_OR_ZERO(ctl->hdev);
++	return 0;
+ }
+ 
+ static struct platform_driver axi_fan_control_driver = {
+diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
+index eaae5de2ab616..5b2057ce5a59d 100644
+--- a/drivers/hwmon/coretemp.c
++++ b/drivers/hwmon/coretemp.c
+@@ -41,7 +41,7 @@ MODULE_PARM_DESC(tjmax, "TjMax value in degrees Celsius");
+ #define PKG_SYSFS_ATTR_NO	1	/* Sysfs attribute for package temp */
+ #define BASE_SYSFS_ATTR_NO	2	/* Sysfs Base attr no for coretemp */
+ #define NUM_REAL_CORES		128	/* Number of Real cores per cpu */
+-#define CORETEMP_NAME_LENGTH	19	/* String Length of attrs */
++#define CORETEMP_NAME_LENGTH	28	/* String Length of attrs */
+ #define MAX_CORE_ATTRS		4	/* Maximum no of basic attrs */
+ #define TOTAL_ATTRS		(MAX_CORE_ATTRS + 1)
+ #define MAX_CORE_DATA		(NUM_REAL_CORES + BASE_SYSFS_ATTR_NO)
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 1c6b78ad5ade4..828fb236a63ae 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -1509,9 +1509,11 @@ i3c_master_register_new_i3c_devs(struct i3c_master_controller *master)
+ 			desc->dev->dev.of_node = desc->boardinfo->of_node;
+ 
+ 		ret = device_register(&desc->dev->dev);
+-		if (ret)
++		if (ret) {
+ 			dev_err(&master->dev,
+ 				"Failed to add I3C device (err = %d)\n", ret);
++			put_device(&desc->dev->dev);
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/infiniband/hw/hfi1/efivar.c b/drivers/infiniband/hw/hfi1/efivar.c
+index c22ab7b5163b3..49b2c751197e2 100644
+--- a/drivers/infiniband/hw/hfi1/efivar.c
++++ b/drivers/infiniband/hw/hfi1/efivar.c
+@@ -152,7 +152,7 @@ int read_hfi1_efi_var(struct hfi1_devdata *dd, const char *kind,
+ 		      unsigned long *size, void **return_data)
+ {
+ 	char prefix_name[64];
+-	char name[64];
++	char name[128];
+ 	int result;
+ 	int i;
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 322f341f41458..518b38e9158d4 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -247,7 +247,7 @@ static bool check_inl_data_len(struct hns_roce_qp *qp, unsigned int len)
+ 	struct hns_roce_dev *hr_dev = to_hr_dev(qp->ibqp.device);
+ 	int mtu = ib_mtu_enum_to_int(qp->path_mtu);
+ 
+-	if (len > qp->max_inline_data || len > mtu) {
++	if (mtu < 0 || len > qp->max_inline_data || len > mtu) {
+ 		ibdev_err(&hr_dev->ib_dev,
+ 			  "invalid length of data, data len = %u, max inline len = %u, path mtu = %d.\n",
+ 			  len, qp->max_inline_data, mtu);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index c42c6761382d1..d1c07f1f8fe98 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -906,7 +906,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ {
+ 	struct hns_roce_ib_create_qp_resp resp = {};
+ 	struct ib_device *ibdev = &hr_dev->ib_dev;
+-	struct hns_roce_ib_create_qp ucmd;
++	struct hns_roce_ib_create_qp ucmd = {};
+ 	int ret;
+ 
+ 	mutex_init(&hr_qp->mutex);
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 0c47e3e24b2a4..e3cc856e70e5d 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -3714,6 +3714,30 @@ static unsigned int get_tx_affinity(struct ib_qp *qp,
+ 	return tx_affinity;
+ }
+ 
++static int __mlx5_ib_qp_set_raw_qp_counter(struct mlx5_ib_qp *qp, u32 set_id,
++					   struct mlx5_core_dev *mdev)
++{
++	struct mlx5_ib_raw_packet_qp *raw_packet_qp = &qp->raw_packet_qp;
++	struct mlx5_ib_rq *rq = &raw_packet_qp->rq;
++	u32 in[MLX5_ST_SZ_DW(modify_rq_in)] = {};
++	void *rqc;
++
++	if (!qp->rq.wqe_cnt)
++		return 0;
++
++	MLX5_SET(modify_rq_in, in, rq_state, rq->state);
++	MLX5_SET(modify_rq_in, in, uid, to_mpd(qp->ibqp.pd)->uid);
++
++	rqc = MLX5_ADDR_OF(modify_rq_in, in, ctx);
++	MLX5_SET(rqc, rqc, state, MLX5_RQC_STATE_RDY);
++
++	MLX5_SET64(modify_rq_in, in, modify_bitmask,
++		   MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_RQ_COUNTER_SET_ID);
++	MLX5_SET(rqc, rqc, counter_set_id, set_id);
++
++	return mlx5_core_modify_rq(mdev, rq->base.mqp.qpn, in);
++}
++
+ static int __mlx5_ib_qp_set_counter(struct ib_qp *qp,
+ 				    struct rdma_counter *counter)
+ {
+@@ -3729,6 +3753,9 @@ static int __mlx5_ib_qp_set_counter(struct ib_qp *qp,
+ 	else
+ 		set_id = mlx5_ib_get_counters_id(dev, mqp->port - 1);
+ 
++	if (mqp->type == IB_QPT_RAW_PACKET)
++		return __mlx5_ib_qp_set_raw_qp_counter(mqp, set_id, dev->mdev);
++
+ 	base = &mqp->trans_qp.base;
+ 	MLX5_SET(rts2rts_qp_in, in, opcode, MLX5_CMD_OP_RTS2RTS_QP);
+ 	MLX5_SET(rts2rts_qp_in, in, qpn, base->mqp.qpn);
+diff --git a/drivers/input/rmi4/rmi_bus.c b/drivers/input/rmi4/rmi_bus.c
+index 47d1b97ed6cf3..a1b2c86ef611a 100644
+--- a/drivers/input/rmi4/rmi_bus.c
++++ b/drivers/input/rmi4/rmi_bus.c
+@@ -276,11 +276,11 @@ void rmi_unregister_function(struct rmi_function *fn)
+ 
+ 	device_del(&fn->dev);
+ 	of_node_put(fn->dev.of_node);
+-	put_device(&fn->dev);
+ 
+ 	for (i = 0; i < fn->num_of_irqs; i++)
+ 		irq_dispose_mapping(fn->irq[i]);
+ 
++	put_device(&fn->dev);
+ }
+ 
+ /**
+diff --git a/drivers/interconnect/qcom/sc7180.c b/drivers/interconnect/qcom/sc7180.c
+index 597a7ee7a9bbf..f5b3c68bb66c2 100644
+--- a/drivers/interconnect/qcom/sc7180.c
++++ b/drivers/interconnect/qcom/sc7180.c
+@@ -153,30 +153,238 @@ DEFINE_QNODE(srvc_snoc, SC7180_SLAVE_SERVICE_SNOC, 1, 4);
+ DEFINE_QNODE(xs_qdss_stm, SC7180_SLAVE_QDSS_STM, 1, 4);
+ DEFINE_QNODE(xs_sys_tcu_cfg, SC7180_SLAVE_TCU, 1, 8);
+ 
+-DEFINE_QBCM(bcm_acv, "ACV", false, &ebi);
+-DEFINE_QBCM(bcm_mc0, "MC0", true, &ebi);
+-DEFINE_QBCM(bcm_sh0, "SH0", true, &qns_llcc);
+-DEFINE_QBCM(bcm_mm0, "MM0", false, &qns_mem_noc_hf);
+-DEFINE_QBCM(bcm_ce0, "CE0", false, &qxm_crypto);
+-DEFINE_QBCM(bcm_cn0, "CN0", true, &qnm_snoc, &xm_qdss_dap, &qhs_a1_noc_cfg, &qhs_a2_noc_cfg, &qhs_ahb2phy0, &qhs_aop, &qhs_aoss, &qhs_boot_rom, &qhs_camera_cfg, &qhs_camera_nrt_throttle_cfg, &qhs_camera_rt_throttle_cfg, &qhs_clk_ctl, &qhs_cpr_cx, &qhs_cpr_mx, &qhs_crypto0_cfg, &qhs_dcc_cfg, &qhs_ddrss_cfg, &qhs_display_cfg, &qhs_display_rt_throttle_cfg, &qhs_display_throttle_cfg, &qhs_glm, &qhs_gpuss_cfg, &qhs_imem_cfg, &qhs_ipa, &qhs_mnoc_cfg, &qhs_mss_cfg, &qhs_npu_cfg, &qhs_npu_dma_throttle_cfg, &qhs_npu_dsp_throttle_cfg, &qhs_pimem_cfg, &qhs_prng, &qhs_qdss_cfg, &qhs_qm_cfg, &qhs_qm_mpu_cfg, &qhs_qup0, &qhs_qup1, &qhs_security, &qhs_snoc_cfg, &qhs_tcsr, &qhs_tlmm_1, &qhs_tlmm_2, &qhs_tlmm_3, &qhs_ufs_mem_cfg, &qhs_usb3, &qhs_venus_cfg, &qhs_venus_throttle_cfg, &qhs_vsense_ctrl_cfg, &srvc_cnoc);
+-DEFINE_QBCM(bcm_mm1, "MM1", false, &qxm_camnoc_hf0_uncomp, &qxm_camnoc_hf1_uncomp, &qxm_camnoc_sf_uncomp, &qhm_mnoc_cfg, &qxm_mdp0, &qxm_rot, &qxm_venus0, &qxm_venus_arm9);
+-DEFINE_QBCM(bcm_sh2, "SH2", false, &acm_sys_tcu);
+-DEFINE_QBCM(bcm_mm2, "MM2", false, &qns_mem_noc_sf);
+-DEFINE_QBCM(bcm_qup0, "QUP0", false, &qup_core_master_1, &qup_core_master_2);
+-DEFINE_QBCM(bcm_sh3, "SH3", false, &qnm_cmpnoc);
+-DEFINE_QBCM(bcm_sh4, "SH4", false, &acm_apps0);
+-DEFINE_QBCM(bcm_sn0, "SN0", true, &qns_gemnoc_sf);
+-DEFINE_QBCM(bcm_co0, "CO0", false, &qns_cdsp_gemnoc);
+-DEFINE_QBCM(bcm_sn1, "SN1", false, &qxs_imem);
+-DEFINE_QBCM(bcm_cn1, "CN1", false, &qhm_qspi, &xm_sdc2, &xm_emmc, &qhs_ahb2phy2, &qhs_emmc_cfg, &qhs_pdm, &qhs_qspi, &qhs_sdc2);
+-DEFINE_QBCM(bcm_sn2, "SN2", false, &qxm_pimem, &qns_gemnoc_gc);
+-DEFINE_QBCM(bcm_co2, "CO2", false, &qnm_npu);
+-DEFINE_QBCM(bcm_sn3, "SN3", false, &qxs_pimem);
+-DEFINE_QBCM(bcm_co3, "CO3", false, &qxm_npu_dsp);
+-DEFINE_QBCM(bcm_sn4, "SN4", false, &xs_qdss_stm);
+-DEFINE_QBCM(bcm_sn7, "SN7", false, &qnm_aggre1_noc);
+-DEFINE_QBCM(bcm_sn9, "SN9", false, &qnm_aggre2_noc);
+-DEFINE_QBCM(bcm_sn12, "SN12", false, &qnm_gemnoc);
++static struct qcom_icc_bcm bcm_acv = {
++	.name = "ACV",
++	.enable_mask = BIT(3),
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &ebi },
++};
++
++static struct qcom_icc_bcm bcm_mc0 = {
++	.name = "MC0",
++	.keepalive = true,
++	.num_nodes = 1,
++	.nodes = { &ebi },
++};
++
++static struct qcom_icc_bcm bcm_sh0 = {
++	.name = "SH0",
++	.keepalive = true,
++	.num_nodes = 1,
++	.nodes = { &qns_llcc },
++};
++
++static struct qcom_icc_bcm bcm_mm0 = {
++	.name = "MM0",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &qns_mem_noc_hf },
++};
++
++static struct qcom_icc_bcm bcm_ce0 = {
++	.name = "CE0",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &qxm_crypto },
++};
++
++static struct qcom_icc_bcm bcm_cn0 = {
++	.name = "CN0",
++	.keepalive = true,
++	.num_nodes = 48,
++	.nodes = { &qnm_snoc,
++		   &xm_qdss_dap,
++		   &qhs_a1_noc_cfg,
++		   &qhs_a2_noc_cfg,
++		   &qhs_ahb2phy0,
++		   &qhs_aop,
++		   &qhs_aoss,
++		   &qhs_boot_rom,
++		   &qhs_camera_cfg,
++		   &qhs_camera_nrt_throttle_cfg,
++		   &qhs_camera_rt_throttle_cfg,
++		   &qhs_clk_ctl,
++		   &qhs_cpr_cx,
++		   &qhs_cpr_mx,
++		   &qhs_crypto0_cfg,
++		   &qhs_dcc_cfg,
++		   &qhs_ddrss_cfg,
++		   &qhs_display_cfg,
++		   &qhs_display_rt_throttle_cfg,
++		   &qhs_display_throttle_cfg,
++		   &qhs_glm,
++		   &qhs_gpuss_cfg,
++		   &qhs_imem_cfg,
++		   &qhs_ipa,
++		   &qhs_mnoc_cfg,
++		   &qhs_mss_cfg,
++		   &qhs_npu_cfg,
++		   &qhs_npu_dma_throttle_cfg,
++		   &qhs_npu_dsp_throttle_cfg,
++		   &qhs_pimem_cfg,
++		   &qhs_prng,
++		   &qhs_qdss_cfg,
++		   &qhs_qm_cfg,
++		   &qhs_qm_mpu_cfg,
++		   &qhs_qup0,
++		   &qhs_qup1,
++		   &qhs_security,
++		   &qhs_snoc_cfg,
++		   &qhs_tcsr,
++		   &qhs_tlmm_1,
++		   &qhs_tlmm_2,
++		   &qhs_tlmm_3,
++		   &qhs_ufs_mem_cfg,
++		   &qhs_usb3,
++		   &qhs_venus_cfg,
++		   &qhs_venus_throttle_cfg,
++		   &qhs_vsense_ctrl_cfg,
++		   &srvc_cnoc
++	},
++};
++
++static struct qcom_icc_bcm bcm_mm1 = {
++	.name = "MM1",
++	.keepalive = false,
++	.num_nodes = 8,
++	.nodes = { &qxm_camnoc_hf0_uncomp,
++		   &qxm_camnoc_hf1_uncomp,
++		   &qxm_camnoc_sf_uncomp,
++		   &qhm_mnoc_cfg,
++		   &qxm_mdp0,
++		   &qxm_rot,
++		   &qxm_venus0,
++		   &qxm_venus_arm9
++	},
++};
++
++static struct qcom_icc_bcm bcm_sh2 = {
++	.name = "SH2",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &acm_sys_tcu },
++};
++
++static struct qcom_icc_bcm bcm_mm2 = {
++	.name = "MM2",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &qns_mem_noc_sf },
++};
++
++static struct qcom_icc_bcm bcm_qup0 = {
++	.name = "QUP0",
++	.keepalive = false,
++	.num_nodes = 2,
++	.nodes = { &qup_core_master_1, &qup_core_master_2 },
++};
++
++static struct qcom_icc_bcm bcm_sh3 = {
++	.name = "SH3",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &qnm_cmpnoc },
++};
++
++static struct qcom_icc_bcm bcm_sh4 = {
++	.name = "SH4",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &acm_apps0 },
++};
++
++static struct qcom_icc_bcm bcm_sn0 = {
++	.name = "SN0",
++	.keepalive = true,
++	.num_nodes = 1,
++	.nodes = { &qns_gemnoc_sf },
++};
++
++static struct qcom_icc_bcm bcm_co0 = {
++	.name = "CO0",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &qns_cdsp_gemnoc },
++};
++
++static struct qcom_icc_bcm bcm_sn1 = {
++	.name = "SN1",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &qxs_imem },
++};
++
++static struct qcom_icc_bcm bcm_cn1 = {
++	.name = "CN1",
++	.keepalive = false,
++	.num_nodes = 8,
++	.nodes = { &qhm_qspi,
++		   &xm_sdc2,
++		   &xm_emmc,
++		   &qhs_ahb2phy2,
++		   &qhs_emmc_cfg,
++		   &qhs_pdm,
++		   &qhs_qspi,
++		   &qhs_sdc2
++	},
++};
++
++static struct qcom_icc_bcm bcm_sn2 = {
++	.name = "SN2",
++	.keepalive = false,
++	.num_nodes = 2,
++	.nodes = { &qxm_pimem, &qns_gemnoc_gc },
++};
++
++static struct qcom_icc_bcm bcm_co2 = {
++	.name = "CO2",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &qnm_npu },
++};
++
++static struct qcom_icc_bcm bcm_sn3 = {
++	.name = "SN3",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &qxs_pimem },
++};
++
++static struct qcom_icc_bcm bcm_co3 = {
++	.name = "CO3",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &qxm_npu_dsp },
++};
++
++static struct qcom_icc_bcm bcm_sn4 = {
++	.name = "SN4",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &xs_qdss_stm },
++};
++
++static struct qcom_icc_bcm bcm_sn7 = {
++	.name = "SN7",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &qnm_aggre1_noc },
++};
++
++static struct qcom_icc_bcm bcm_sn9 = {
++	.name = "SN9",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &qnm_aggre2_noc },
++};
++
++static struct qcom_icc_bcm bcm_sn12 = {
++	.name = "SN12",
++	.keepalive = false,
++	.num_nodes = 1,
++	.nodes = { &qnm_gemnoc },
++};
+ 
+ static struct qcom_icc_bcm *aggre1_noc_bcms[] = {
+ 	&bcm_cn1,
+diff --git a/drivers/leds/leds-pwm.c b/drivers/leds/leds-pwm.c
+index f53f9309ca6cc..f4c0507becb31 100644
+--- a/drivers/leds/leds-pwm.c
++++ b/drivers/leds/leds-pwm.c
+@@ -51,7 +51,7 @@ static int led_pwm_set(struct led_classdev *led_cdev,
+ 		duty = led_dat->pwmstate.period - duty;
+ 
+ 	led_dat->pwmstate.duty_cycle = duty;
+-	led_dat->pwmstate.enabled = duty > 0;
++	led_dat->pwmstate.enabled = true;
+ 	return pwm_apply_state(led_dat->pwm, &led_dat->pwmstate);
+ }
+ 
+diff --git a/drivers/leds/trigger/ledtrig-cpu.c b/drivers/leds/trigger/ledtrig-cpu.c
+index fca62d5035909..f19baed615023 100644
+--- a/drivers/leds/trigger/ledtrig-cpu.c
++++ b/drivers/leds/trigger/ledtrig-cpu.c
+@@ -130,7 +130,7 @@ static int ledtrig_prepare_down_cpu(unsigned int cpu)
+ 
+ static int __init ledtrig_cpu_init(void)
+ {
+-	int cpu;
++	unsigned int cpu;
+ 	int ret;
+ 
+ 	/* Supports up to 9999 cpu cores */
+@@ -152,7 +152,7 @@ static int __init ledtrig_cpu_init(void)
+ 		if (cpu >= 8)
+ 			continue;
+ 
+-		snprintf(trig->name, MAX_NAME_LEN, "cpu%d", cpu);
++		snprintf(trig->name, MAX_NAME_LEN, "cpu%u", cpu);
+ 
+ 		led_trigger_register_simple(trig->name, &trig->_trig);
+ 	}
+diff --git a/drivers/media/i2c/max9286.c b/drivers/media/i2c/max9286.c
+index 62ce27552dd3c..a358eb3fe0f4d 100644
+--- a/drivers/media/i2c/max9286.c
++++ b/drivers/media/i2c/max9286.c
+@@ -1143,7 +1143,6 @@ static int max9286_parse_dt(struct max9286_priv *priv)
+ 
+ 		i2c_mux_mask |= BIT(id);
+ 	}
+-	of_node_put(node);
+ 	of_node_put(i2c_mux);
+ 
+ 	/* Parse the endpoints */
+@@ -1207,7 +1206,6 @@ static int max9286_parse_dt(struct max9286_priv *priv)
+ 		priv->source_mask |= BIT(ep.port);
+ 		priv->nsources++;
+ 	}
+-	of_node_put(node);
+ 
+ 	priv->route_mask = priv->source_mask;
+ 
+diff --git a/drivers/media/pci/bt8xx/bttv-driver.c b/drivers/media/pci/bt8xx/bttv-driver.c
+index 1f0e4b913a053..5f1bd9b38e75f 100644
+--- a/drivers/media/pci/bt8xx/bttv-driver.c
++++ b/drivers/media/pci/bt8xx/bttv-driver.c
+@@ -4258,6 +4258,7 @@ static void bttv_remove(struct pci_dev *pci_dev)
+ 
+ 	/* free resources */
+ 	free_irq(btv->c.pci->irq,btv);
++	del_timer_sync(&btv->timeout);
+ 	iounmap(btv->bt848_mmio);
+ 	release_mem_region(pci_resource_start(btv->c.pci,0),
+ 			   pci_resource_len(btv->c.pci,0));
+diff --git a/drivers/media/platform/s3c-camif/camif-capture.c b/drivers/media/platform/s3c-camif/camif-capture.c
+index 9ca49af29542d..a3ba72a08daec 100644
+--- a/drivers/media/platform/s3c-camif/camif-capture.c
++++ b/drivers/media/platform/s3c-camif/camif-capture.c
+@@ -1132,12 +1132,12 @@ int s3c_camif_register_video_node(struct camif_dev *camif, int idx)
+ 
+ 	ret = vb2_queue_init(q);
+ 	if (ret)
+-		goto err_vd_rel;
++		return ret;
+ 
+ 	vp->pad.flags = MEDIA_PAD_FL_SINK;
+ 	ret = media_entity_pads_init(&vfd->entity, 1, &vp->pad);
+ 	if (ret)
+-		goto err_vd_rel;
++		return ret;
+ 
+ 	video_set_drvdata(vfd, vp);
+ 
+@@ -1170,8 +1170,6 @@ err_ctrlh_free:
+ 	v4l2_ctrl_handler_free(&vp->ctrl_handler);
+ err_me_cleanup:
+ 	media_entity_cleanup(&vfd->entity);
+-err_vd_rel:
+-	video_device_release(vfd);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/test-drivers/vidtv/vidtv_mux.c b/drivers/media/test-drivers/vidtv/vidtv_mux.c
+index b51e6a3b8cbeb..f99878eff7ace 100644
+--- a/drivers/media/test-drivers/vidtv/vidtv_mux.c
++++ b/drivers/media/test-drivers/vidtv/vidtv_mux.c
+@@ -504,13 +504,16 @@ struct vidtv_mux *vidtv_mux_init(struct dvb_frontend *fe,
+ 	m->priv = args->priv;
+ 	m->network_id = args->network_id;
+ 	m->network_name = kstrdup(args->network_name, GFP_KERNEL);
++	if (!m->network_name)
++		goto free_mux_buf;
++
+ 	m->timing.current_jiffies = get_jiffies_64();
+ 
+ 	if (args->channels)
+ 		m->channels = args->channels;
+ 	else
+ 		if (vidtv_channels_init(m) < 0)
+-			goto free_mux_buf;
++			goto free_mux_network_name;
+ 
+ 	/* will alloc data for pmt_sections after initializing pat */
+ 	if (vidtv_channel_si_init(m) < 0)
+@@ -527,6 +530,8 @@ free_channel_si:
+ 	vidtv_channel_si_destroy(m);
+ free_channels:
+ 	vidtv_channels_destroy(m);
++free_mux_network_name:
++	kfree(m->network_name);
+ free_mux_buf:
+ 	vfree(m->mux_buf);
+ free_mux:
+diff --git a/drivers/media/test-drivers/vidtv/vidtv_psi.c b/drivers/media/test-drivers/vidtv/vidtv_psi.c
+index 1724bb485e670..1726e76f0106a 100644
+--- a/drivers/media/test-drivers/vidtv/vidtv_psi.c
++++ b/drivers/media/test-drivers/vidtv/vidtv_psi.c
+@@ -308,16 +308,29 @@ struct vidtv_psi_desc_service *vidtv_psi_service_desc_init(struct vidtv_psi_desc
+ 
+ 	desc->service_name_len = service_name_len;
+ 
+-	if (service_name && service_name_len)
++	if (service_name && service_name_len) {
+ 		desc->service_name = kstrdup(service_name, GFP_KERNEL);
++		if (!desc->service_name)
++			goto free_desc;
++	}
+ 
+ 	desc->provider_name_len = provider_name_len;
+ 
+-	if (provider_name && provider_name_len)
++	if (provider_name && provider_name_len) {
+ 		desc->provider_name = kstrdup(provider_name, GFP_KERNEL);
++		if (!desc->provider_name)
++			goto free_desc_service_name;
++	}
+ 
+ 	vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc);
+ 	return desc;
++
++free_desc_service_name:
++	if (service_name && service_name_len)
++		kfree(desc->service_name);
++free_desc:
++	kfree(desc);
++	return NULL;
+ }
+ 
+ struct vidtv_psi_desc_registration
+@@ -362,8 +375,13 @@ struct vidtv_psi_desc_network_name
+ 
+ 	desc->length = network_name_len;
+ 
+-	if (network_name && network_name_len)
++	if (network_name && network_name_len) {
+ 		desc->network_name = kstrdup(network_name, GFP_KERNEL);
++		if (!desc->network_name) {
++			kfree(desc);
++			return NULL;
++		}
++	}
+ 
+ 	vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc);
+ 	return desc;
+@@ -449,15 +467,32 @@ struct vidtv_psi_desc_short_event
+ 		iso_language_code = "eng";
+ 
+ 	desc->iso_language_code = kstrdup(iso_language_code, GFP_KERNEL);
++	if (!desc->iso_language_code)
++		goto free_desc;
+ 
+-	if (event_name && event_name_len)
++	if (event_name && event_name_len) {
+ 		desc->event_name = kstrdup(event_name, GFP_KERNEL);
++		if (!desc->event_name)
++			goto free_desc_language_code;
++	}
+ 
+-	if (text && text_len)
++	if (text && text_len) {
+ 		desc->text = kstrdup(text, GFP_KERNEL);
++		if (!desc->text)
++			goto free_desc_event_name;
++	}
+ 
+ 	vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc);
+ 	return desc;
++
++free_desc_event_name:
++	if (event_name && event_name_len)
++		kfree(desc->event_name);
++free_desc_language_code:
++	kfree(desc->iso_language_code);
++free_desc:
++	kfree(desc);
++	return NULL;
+ }
+ 
+ struct vidtv_psi_desc *vidtv_psi_desc_clone(struct vidtv_psi_desc *desc)
+diff --git a/drivers/media/usb/dvb-usb-v2/af9035.c b/drivers/media/usb/dvb-usb-v2/af9035.c
+index 8cbaab9a60844..f0bc3e060ab8d 100644
+--- a/drivers/media/usb/dvb-usb-v2/af9035.c
++++ b/drivers/media/usb/dvb-usb-v2/af9035.c
+@@ -322,8 +322,10 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ 			ret = -EOPNOTSUPP;
+ 		} else if ((msg[0].addr == state->af9033_i2c_addr[0]) ||
+ 			   (msg[0].addr == state->af9033_i2c_addr[1])) {
+-			if (msg[0].len < 3 || msg[1].len < 1)
+-				return -EOPNOTSUPP;
++			if (msg[0].len < 3 || msg[1].len < 1) {
++				ret = -EOPNOTSUPP;
++				goto unlock;
++			}
+ 			/* demod access via firmware interface */
+ 			reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
+ 					msg[0].buf[2];
+@@ -383,8 +385,10 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ 			ret = -EOPNOTSUPP;
+ 		} else if ((msg[0].addr == state->af9033_i2c_addr[0]) ||
+ 			   (msg[0].addr == state->af9033_i2c_addr[1])) {
+-			if (msg[0].len < 3)
+-				return -EOPNOTSUPP;
++			if (msg[0].len < 3) {
++				ret = -EOPNOTSUPP;
++				goto unlock;
++			}
+ 			/* demod access via firmware interface */
+ 			reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
+ 					msg[0].buf[2];
+@@ -459,6 +463,7 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ 		ret = -EOPNOTSUPP;
+ 	}
+ 
++unlock:
+ 	mutex_unlock(&d->i2c_mutex);
+ 
+ 	if (ret < 0)
+diff --git a/drivers/mfd/dln2.c b/drivers/mfd/dln2.c
+index fc65f9e25fda8..852129ea07666 100644
+--- a/drivers/mfd/dln2.c
++++ b/drivers/mfd/dln2.c
+@@ -836,7 +836,6 @@ out_stop_rx:
+ 	dln2_stop_rx_urbs(dln2);
+ 
+ out_free:
+-	usb_put_dev(dln2->usb_dev);
+ 	dln2_free(dln2);
+ 
+ 	return ret;
+diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c
+index a3a6faa99de05..c0083e38d5273 100644
+--- a/drivers/mfd/mfd-core.c
++++ b/drivers/mfd/mfd-core.c
+@@ -171,6 +171,7 @@ static int mfd_add_device(struct device *parent, int id,
+ 	struct platform_device *pdev;
+ 	struct device_node *np = NULL;
+ 	struct mfd_of_node_entry *of_entry, *tmp;
++	bool disabled = false;
+ 	int ret = -ENOMEM;
+ 	int platform_id;
+ 	int r;
+@@ -208,11 +209,10 @@ static int mfd_add_device(struct device *parent, int id,
+ 	if (IS_ENABLED(CONFIG_OF) && parent->of_node && cell->of_compatible) {
+ 		for_each_child_of_node(parent->of_node, np) {
+ 			if (of_device_is_compatible(np, cell->of_compatible)) {
+-				/* Ignore 'disabled' devices error free */
++				/* Skip 'disabled' devices */
+ 				if (!of_device_is_available(np)) {
+-					of_node_put(np);
+-					ret = 0;
+-					goto fail_alias;
++					disabled = true;
++					continue;
+ 				}
+ 
+ 				ret = mfd_match_of_node_to_dev(pdev, np, cell);
+@@ -222,10 +222,17 @@ static int mfd_add_device(struct device *parent, int id,
+ 				if (ret)
+ 					goto fail_alias;
+ 
+-				break;
++				goto match;
+ 			}
+ 		}
+ 
++		if (disabled) {
++			/* Ignore 'disabled' devices error free */
++			ret = 0;
++			goto fail_alias;
++		}
++
++match:
+ 		if (!pdev->dev.of_node)
+ 			pr_warn("%s: Failed to locate of_node [id: %d]\n",
+ 				cell->name, platform_id);
+diff --git a/drivers/misc/ti-st/st_core.c b/drivers/misc/ti-st/st_core.c
+index f4ddd1e670151..ca115f344fa29 100644
+--- a/drivers/misc/ti-st/st_core.c
++++ b/drivers/misc/ti-st/st_core.c
+@@ -15,6 +15,7 @@
+ #include <linux/skbuff.h>
+ 
+ #include <linux/ti_wilink_st.h>
++#include <linux/netdevice.h>
+ 
+ extern void st_kim_recv(void *, const unsigned char *, long);
+ void st_int_recv(void *, const unsigned char *, long);
+@@ -436,7 +437,7 @@ static void st_int_enqueue(struct st_data_s *st_gdata, struct sk_buff *skb)
+ 	case ST_LL_AWAKE_TO_ASLEEP:
+ 		pr_err("ST LL is illegal state(%ld),"
+ 			   "purging received skb.", st_ll_getstate(st_gdata));
+-		kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 		break;
+ 	case ST_LL_ASLEEP:
+ 		skb_queue_tail(&st_gdata->tx_waitq, skb);
+@@ -445,7 +446,7 @@ static void st_int_enqueue(struct st_data_s *st_gdata, struct sk_buff *skb)
+ 	default:
+ 		pr_err("ST LL is illegal state(%ld),"
+ 			   "purging received skb.", st_ll_getstate(st_gdata));
+-		kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 		break;
+ 	}
+ 
+@@ -499,7 +500,7 @@ void st_tx_wakeup(struct st_data_s *st_data)
+ 				spin_unlock_irqrestore(&st_data->lock, flags);
+ 				break;
+ 			}
+-			kfree_skb(skb);
++			dev_kfree_skb_irq(skb);
+ 			spin_unlock_irqrestore(&st_data->lock, flags);
+ 		}
+ 		/* if wake-up is set in another context- restart sending */
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index 91c929bd69c07..87807ef010a96 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -95,7 +95,7 @@ static int mmc_decode_cid(struct mmc_card *card)
+ 	case 3: /* MMC v3.1 - v3.3 */
+ 	case 4: /* MMC v4 */
+ 		card->cid.manfid	= UNSTUFF_BITS(resp, 120, 8);
+-		card->cid.oemid		= UNSTUFF_BITS(resp, 104, 8);
++		card->cid.oemid		= UNSTUFF_BITS(resp, 104, 16);
+ 		card->cid.prod_name[0]	= UNSTUFF_BITS(resp, 96, 8);
+ 		card->cid.prod_name[1]	= UNSTUFF_BITS(resp, 88, 8);
+ 		card->cid.prod_name[2]	= UNSTUFF_BITS(resp, 80, 8);
+diff --git a/drivers/net/can/dev/dev.c b/drivers/net/can/dev/dev.c
+index 2b38a99884f2f..b5e79d63d59b5 100644
+--- a/drivers/net/can/dev/dev.c
++++ b/drivers/net/can/dev/dev.c
+@@ -578,7 +578,8 @@ static void can_restart(struct net_device *dev)
+ 	struct can_frame *cf;
+ 	int err;
+ 
+-	BUG_ON(netif_carrier_ok(dev));
++	if (netif_carrier_ok(dev))
++		netdev_err(dev, "Attempt to restart for bus-off recovery, but carrier is OK?\n");
+ 
+ 	/* No synchronization needed because the device is bus-off and
+ 	 * no messages can come in or go out.
+@@ -602,11 +603,12 @@ restart:
+ 	priv->can_stats.restarts++;
+ 
+ 	/* Now restart the device */
+-	err = priv->do_set_mode(dev, CAN_MODE_START);
+-
+ 	netif_carrier_on(dev);
+-	if (err)
++	err = priv->do_set_mode(dev, CAN_MODE_START);
++	if (err) {
+ 		netdev_err(dev, "Error %d during restart", err);
++		netif_carrier_off(dev);
++	}
+ }
+ 
+ static void can_restart_work(struct work_struct *work)
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index d14f37be1eb3e..5647833303a44 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -18156,7 +18156,8 @@ static void tg3_shutdown(struct pci_dev *pdev)
+ 	if (netif_running(dev))
+ 		dev_close(dev);
+ 
+-	tg3_power_down(tp);
++	if (system_state == SYSTEM_POWER_OFF)
++		tg3_power_down(tp);
+ 
+ 	rtnl_unlock();
+ 
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+index cd6e016e62103..ccf2bec283d35 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+@@ -2259,7 +2259,7 @@ static void chtls_rx_ack(struct sock *sk, struct sk_buff *skb)
+ 
+ 		if (tp->snd_una != snd_una) {
+ 			tp->snd_una = snd_una;
+-			tp->rcv_tstamp = tcp_time_stamp(tp);
++			tp->rcv_tstamp = tcp_jiffies32;
+ 			if (tp->snd_una == tp->snd_nxt &&
+ 			    !csk_flag_nochk(csk, CSK_TX_FAILOVER))
+ 				csk_reset_flag(csk, CSK_TX_WAIT_IDLE);
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index f0c1e6c80b61c..b76d1d019a81d 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -128,7 +128,7 @@ static int gve_alloc_stats_report(struct gve_priv *priv)
+ 	rx_stats_num = (GVE_RX_STATS_REPORT_NUM + NIC_RX_STATS_REPORT_NUM) *
+ 		       priv->rx_cfg.num_queues;
+ 	priv->stats_report_len = struct_size(priv->stats_report, stats,
+-					     tx_stats_num + rx_stats_num);
++					     size_add(tx_stats_num, rx_stats_num));
+ 	priv->stats_report =
+ 		dma_alloc_coherent(&priv->pdev->dev, priv->stats_report_len,
+ 				   &priv->stats_report_bus, GFP_KERNEL);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index d23a467d0d209..64e1f6f407b48 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -15601,11 +15601,15 @@ static void i40e_remove(struct pci_dev *pdev)
+ 			i40e_switch_branch_release(pf->veb[i]);
+ 	}
+ 
+-	/* Now we can shutdown the PF's VSI, just before we kill
++	/* Now we can shutdown the PF's VSIs, just before we kill
+ 	 * adminq and hmc.
+ 	 */
+-	if (pf->vsi[pf->lan_vsi])
+-		i40e_vsi_release(pf->vsi[pf->lan_vsi]);
++	for (i = pf->num_alloc_vsi; i--;)
++		if (pf->vsi[i]) {
++			i40e_vsi_close(pf->vsi[i]);
++			i40e_vsi_release(pf->vsi[i]);
++			pf->vsi[i] = NULL;
++		}
+ 
+ 	i40e_cloud_filter_exit(pf);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
+index dbd3bebf11eca..2e8b17e3b9358 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
+@@ -251,7 +251,7 @@ mlxsw_sp_acl_bf_init(struct mlxsw_sp *mlxsw_sp, unsigned int num_erp_banks)
+ 	 * is 2^ACL_MAX_BF_LOG
+ 	 */
+ 	bf_bank_size = 1 << MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_BF_LOG);
+-	bf = kzalloc(struct_size(bf, refcnt, bf_bank_size * num_erp_banks),
++	bf = kzalloc(struct_size(bf, refcnt, size_mul(bf_bank_size, num_erp_banks)),
+ 		     GFP_KERNEL);
+ 	if (!bf)
+ 		return ERR_PTR(-ENOMEM);
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 37e34d8f7946e..6e0fe77d1019c 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -2577,9 +2577,13 @@ static void rtl_set_rx_mode(struct net_device *dev)
+ 
+ 	if (dev->flags & IFF_PROMISC) {
+ 		rx_mode |= AcceptAllPhys;
++	} else if (!(dev->flags & IFF_MULTICAST)) {
++		rx_mode &= ~AcceptMulticast;
+ 	} else if (netdev_mc_count(dev) > MC_FILTER_LIMIT ||
+ 		   dev->flags & IFF_ALLMULTI ||
+-		   tp->mac_version == RTL_GIGA_MAC_VER_35) {
++		   tp->mac_version == RTL_GIGA_MAC_VER_35 ||
++		   tp->mac_version == RTL_GIGA_MAC_VER_46 ||
++		   tp->mac_version == RTL_GIGA_MAC_VER_48) {
+ 		/* accept all multicasts */
+ 	} else if (netdev_mc_empty(dev)) {
+ 		rx_mode &= ~AcceptMulticast;
+@@ -4688,12 +4692,17 @@ static int rtl8169_poll(struct napi_struct *napi, int budget)
+ static void r8169_phylink_handler(struct net_device *ndev)
+ {
+ 	struct rtl8169_private *tp = netdev_priv(ndev);
++	struct device *d = tp_to_dev(tp);
+ 
+ 	if (netif_carrier_ok(ndev)) {
+ 		rtl_link_chg_patch(tp);
+-		pm_request_resume(&tp->pci_dev->dev);
++		pm_request_resume(d);
++		netif_wake_queue(tp->dev);
+ 	} else {
+-		pm_runtime_idle(&tp->pci_dev->dev);
++		/* In few cases rx is broken after link-down otherwise */
++		if (rtl_is_8125(tp))
++			rtl_reset_work(tp);
++		pm_runtime_idle(d);
+ 	}
+ 
+ 	if (net_ratelimit())
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
+index 6c3b8a950f58d..eee58e0513877 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
+@@ -222,7 +222,7 @@
+ 	((val) << XGMAC_PPS_MINIDX(x))
+ #define XGMAC_PPSCMD_START		0x2
+ #define XGMAC_PPSCMD_STOP		0x5
+-#define XGMAC_PPSEN0			BIT(4)
++#define XGMAC_PPSENx(x)			BIT(4 + (x) * 8)
+ #define XGMAC_PPSx_TARGET_TIME_SEC(x)	(0x00000d80 + (x) * 0x10)
+ #define XGMAC_PPSx_TARGET_TIME_NSEC(x)	(0x00000d84 + (x) * 0x10)
+ #define XGMAC_TRGTBUSY0			BIT(31)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+index ad4df9bddcf35..b060667463028 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+@@ -1132,7 +1132,19 @@ static int dwxgmac2_flex_pps_config(void __iomem *ioaddr, int index,
+ 
+ 	val |= XGMAC_PPSCMDx(index, XGMAC_PPSCMD_START);
+ 	val |= XGMAC_TRGTMODSELx(index, XGMAC_PPSCMD_START);
+-	val |= XGMAC_PPSEN0;
++
++	/* XGMAC Core has 4 PPS outputs at most.
++	 *
++	 * Prior XGMAC Core 3.20, Fixed mode or Flexible mode are selectable for
++	 * PPS0 only via PPSEN0. PPS{1,2,3} are in Flexible mode by default,
++	 * and can not be switched to Fixed mode, since PPSEN{1,2,3} are
++	 * read-only reserved to 0.
++	 * But we always set PPSEN{1,2,3} do not make things worse ;-)
++	 *
++	 * From XGMAC Core 3.20 and later, PPSEN{0,1,2,3} are writable and must
++	 * be set, or the PPS outputs stay in Fixed PPS mode by default.
++	 */
++	val |= XGMAC_PPSENx(index);
+ 
+ 	writel(cfg->start.tv_sec, ioaddr + XGMAC_PPSx_TARGET_TIME_SEC(index));
+ 
+diff --git a/drivers/net/ethernet/toshiba/spider_net.c b/drivers/net/ethernet/toshiba/spider_net.c
+index 5f5b33e6653b2..9d4c49f28d31f 100644
+--- a/drivers/net/ethernet/toshiba/spider_net.c
++++ b/drivers/net/ethernet/toshiba/spider_net.c
+@@ -2311,7 +2311,7 @@ spider_net_alloc_card(void)
+ 	struct spider_net_card *card;
+ 
+ 	netdev = alloc_etherdev(struct_size(card, darray,
+-					    tx_descriptors + rx_descriptors));
++					    size_add(tx_descriptors, rx_descriptors)));
+ 	if (!netdev)
+ 		return NULL;
+ 
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index ab09d110760ec..b5a61b16a7eab 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -442,12 +442,12 @@ static int ipvlan_process_v4_outbound(struct sk_buff *skb)
+ 
+ 	err = ip_local_out(net, skb->sk, skb);
+ 	if (unlikely(net_xmit_eval(err)))
+-		dev->stats.tx_errors++;
++		DEV_STATS_INC(dev, tx_errors);
+ 	else
+ 		ret = NET_XMIT_SUCCESS;
+ 	goto out;
+ err:
+-	dev->stats.tx_errors++;
++	DEV_STATS_INC(dev, tx_errors);
+ 	kfree_skb(skb);
+ out:
+ 	return ret;
+@@ -483,12 +483,12 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
+ 
+ 	err = ip6_local_out(net, skb->sk, skb);
+ 	if (unlikely(net_xmit_eval(err)))
+-		dev->stats.tx_errors++;
++		DEV_STATS_INC(dev, tx_errors);
+ 	else
+ 		ret = NET_XMIT_SUCCESS;
+ 	goto out;
+ err:
+-	dev->stats.tx_errors++;
++	DEV_STATS_INC(dev, tx_errors);
+ 	kfree_skb(skb);
+ out:
+ 	return ret;
+diff --git a/drivers/net/ipvlan/ipvlan_main.c b/drivers/net/ipvlan/ipvlan_main.c
+index 93be7dd571fc5..f59ef2e2a614b 100644
+--- a/drivers/net/ipvlan/ipvlan_main.c
++++ b/drivers/net/ipvlan/ipvlan_main.c
+@@ -322,6 +322,7 @@ static void ipvlan_get_stats64(struct net_device *dev,
+ 		s->rx_dropped = rx_errs;
+ 		s->tx_dropped = tx_drps;
+ 	}
++	s->tx_errors = DEV_STATS_READ(dev, tx_errors);
+ }
+ 
+ static int ipvlan_vlan_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid)
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 0ffcef2fa10af..83b02dc7dfd2d 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -3686,9 +3686,9 @@ static void macsec_get_stats64(struct net_device *dev,
+ 
+ 	dev_fetch_sw_netstats(s, dev->tstats);
+ 
+-	s->rx_dropped = atomic_long_read(&dev->stats.__rx_dropped);
+-	s->tx_dropped = atomic_long_read(&dev->stats.__tx_dropped);
+-	s->rx_errors = atomic_long_read(&dev->stats.__rx_errors);
++	s->rx_dropped = DEV_STATS_READ(dev, rx_dropped);
++	s->tx_dropped = DEV_STATS_READ(dev, tx_dropped);
++	s->rx_errors = DEV_STATS_READ(dev, rx_errors);
+ }
+ 
+ static int macsec_get_iflink(const struct net_device *dev)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/core.c b/drivers/net/wireless/mediatek/mt76/mt7603/core.c
+index 60a996b63c0c0..915b8349146af 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/core.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/core.c
+@@ -42,11 +42,13 @@ irqreturn_t mt7603_irq_handler(int irq, void *dev_instance)
+ 	}
+ 
+ 	if (intr & MT_INT_RX_DONE(0)) {
++		dev->rx_pse_check = 0;
+ 		mt7603_irq_disable(dev, MT_INT_RX_DONE(0));
+ 		napi_schedule(&dev->mt76.napi[0]);
+ 	}
+ 
+ 	if (intr & MT_INT_RX_DONE(1)) {
++		dev->rx_pse_check = 0;
+ 		mt7603_irq_disable(dev, MT_INT_RX_DONE(1));
+ 		napi_schedule(&dev->mt76.napi[1]);
+ 	}
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/mac.c b/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
+index f665a1c95eed2..9eb898ebbb445 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
+@@ -1535,20 +1535,29 @@ static bool mt7603_rx_pse_busy(struct mt7603_dev *dev)
+ {
+ 	u32 addr, val;
+ 
+-	if (mt76_rr(dev, MT_MCU_DEBUG_RESET) & MT_MCU_DEBUG_RESET_QUEUES)
+-		return true;
+-
+ 	if (mt7603_rx_fifo_busy(dev))
+-		return false;
++		goto out;
+ 
+ 	addr = mt7603_reg_map(dev, MT_CLIENT_BASE_PHYS_ADDR + MT_CLIENT_STATUS);
+ 	mt76_wr(dev, addr, 3);
+ 	val = mt76_rr(dev, addr) >> 16;
+ 
+-	if (is_mt7628(dev) && (val & 0x4001) == 0x4001)
+-		return true;
++	if (!(val & BIT(0)))
++		return false;
++
++	if (is_mt7628(dev))
++		val &= 0xa000;
++	else
++		val &= 0x8000;
++	if (!val)
++		return false;
++
++out:
++	if (mt76_rr(dev, MT_INT_SOURCE_CSR) &
++	    (MT_INT_RX_DONE(0) | MT_INT_RX_DONE(1)))
++		return false;
+ 
+-	return (val & 0x8001) == 0x8001 || (val & 0xe001) == 0xe001;
++	return true;
+ }
+ 
+ static bool
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/dm.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/dm.c
+index d10c14c694da8..a1b920843b869 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/dm.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/dm.c
+@@ -799,7 +799,7 @@ static void rtl88e_dm_check_edca_turbo(struct ieee80211_hw *hw)
+ 	}
+ 
+ 	if (rtlpriv->btcoexist.bt_edca_dl != 0) {
+-		edca_be_ul = rtlpriv->btcoexist.bt_edca_dl;
++		edca_be_dl = rtlpriv->btcoexist.bt_edca_dl;
+ 		bt_change_edca = true;
+ 	}
+ 
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/dm_common.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/dm_common.c
+index 265a1a336304e..c493e50b7bc58 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/dm_common.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/dm_common.c
+@@ -640,7 +640,7 @@ static void rtl92c_dm_check_edca_turbo(struct ieee80211_hw *hw)
+ 	}
+ 
+ 	if (rtlpriv->btcoexist.bt_edca_dl != 0) {
+-		edca_be_ul = rtlpriv->btcoexist.bt_edca_dl;
++		edca_be_dl = rtlpriv->btcoexist.bt_edca_dl;
+ 		bt_change_edca = true;
+ 	}
+ 
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/dm.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/dm.c
+index 8ada31380efa4..0ff8e355c23a4 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/dm.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/dm.c
+@@ -466,7 +466,7 @@ static void rtl8723e_dm_check_edca_turbo(struct ieee80211_hw *hw)
+ 	}
+ 
+ 	if (rtlpriv->btcoexist.bt_edca_dl != 0) {
+-		edca_be_ul = rtlpriv->btcoexist.bt_edca_dl;
++		edca_be_dl = rtlpriv->btcoexist.bt_edca_dl;
+ 		bt_change_edca = true;
+ 	}
+ 
+diff --git a/drivers/net/wireless/realtek/rtw88/debug.c b/drivers/net/wireless/realtek/rtw88/debug.c
+index 8bb6cc8ca74e5..83413cda9bc5e 100644
+--- a/drivers/net/wireless/realtek/rtw88/debug.c
++++ b/drivers/net/wireless/realtek/rtw88/debug.c
+@@ -901,9 +901,9 @@ static struct rtw_debugfs_priv rtw_debug_priv_coex_info = {
+ #define rtw_debugfs_add_core(name, mode, fopname, parent)		\
+ 	do {								\
+ 		rtw_debug_priv_ ##name.rtwdev = rtwdev;			\
+-		if (!debugfs_create_file(#name, mode,			\
++		if (IS_ERR(debugfs_create_file(#name, mode,		\
+ 					 parent, &rtw_debug_priv_ ##name,\
+-					 &file_ops_ ##fopname))		\
++					 &file_ops_ ##fopname)))	\
+ 			pr_debug("Unable to initialize debugfs:%s\n",	\
+ 			       #name);					\
+ 	} while (0)
+diff --git a/drivers/nvdimm/of_pmem.c b/drivers/nvdimm/of_pmem.c
+index 10dbdcdfb9ce9..0243789ba914b 100644
+--- a/drivers/nvdimm/of_pmem.c
++++ b/drivers/nvdimm/of_pmem.c
+@@ -30,7 +30,13 @@ static int of_pmem_region_probe(struct platform_device *pdev)
+ 	if (!priv)
+ 		return -ENOMEM;
+ 
+-	priv->bus_desc.provider_name = kstrdup(pdev->name, GFP_KERNEL);
++	priv->bus_desc.provider_name = devm_kstrdup(&pdev->dev, pdev->name,
++							GFP_KERNEL);
++	if (!priv->bus_desc.provider_name) {
++		kfree(priv);
++		return -ENOMEM;
++	}
++
+ 	priv->bus_desc.module = THIS_MODULE;
+ 	priv->bus_desc.of_node = np;
+ 
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index 1d72653b5c8d1..0a75948cde5a1 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -959,7 +959,8 @@ unsigned int nd_region_acquire_lane(struct nd_region *nd_region)
+ {
+ 	unsigned int cpu, lane;
+ 
+-	cpu = get_cpu();
++	migrate_disable();
++	cpu = smp_processor_id();
+ 	if (nd_region->num_lanes < nr_cpu_ids) {
+ 		struct nd_percpu_lane *ndl_lock, *ndl_count;
+ 
+@@ -978,16 +979,15 @@ EXPORT_SYMBOL(nd_region_acquire_lane);
+ void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane)
+ {
+ 	if (nd_region->num_lanes < nr_cpu_ids) {
+-		unsigned int cpu = get_cpu();
++		unsigned int cpu = smp_processor_id();
+ 		struct nd_percpu_lane *ndl_lock, *ndl_count;
+ 
+ 		ndl_count = per_cpu_ptr(nd_region->lane, cpu);
+ 		ndl_lock = per_cpu_ptr(nd_region->lane, lane);
+ 		if (--ndl_count->count == 0)
+ 			spin_unlock(&ndl_lock->lock);
+-		put_cpu();
+ 	}
+-	put_cpu();
++	migrate_enable();
+ }
+ EXPORT_SYMBOL(nd_region_release_lane);
+ 
+diff --git a/drivers/pcmcia/cs.c b/drivers/pcmcia/cs.c
+index f70197154a362..820cce7c8b400 100644
+--- a/drivers/pcmcia/cs.c
++++ b/drivers/pcmcia/cs.c
+@@ -605,6 +605,7 @@ static int pccardd(void *__skt)
+ 		dev_warn(&skt->dev, "PCMCIA: unable to register socket\n");
+ 		skt->thread = NULL;
+ 		complete(&skt->thread_done);
++		put_device(&skt->dev);
+ 		return 0;
+ 	}
+ 	ret = pccard_sysfs_add_socket(&skt->dev);
+diff --git a/drivers/pcmcia/ds.c b/drivers/pcmcia/ds.c
+index 72114907c0e4d..bf2e856f53e97 100644
+--- a/drivers/pcmcia/ds.c
++++ b/drivers/pcmcia/ds.c
+@@ -518,9 +518,6 @@ static struct pcmcia_device *pcmcia_device_add(struct pcmcia_socket *s,
+ 	/* by default don't allow DMA */
+ 	p_dev->dma_mask = 0;
+ 	p_dev->dev.dma_mask = &p_dev->dma_mask;
+-	dev_set_name(&p_dev->dev, "%d.%d", p_dev->socket->sock, p_dev->device_no);
+-	if (!dev_name(&p_dev->dev))
+-		goto err_free;
+ 	p_dev->devname = kasprintf(GFP_KERNEL, "pcmcia%s", dev_name(&p_dev->dev));
+ 	if (!p_dev->devname)
+ 		goto err_free;
+@@ -578,8 +575,15 @@ static struct pcmcia_device *pcmcia_device_add(struct pcmcia_socket *s,
+ 
+ 	pcmcia_device_query(p_dev);
+ 
+-	if (device_register(&p_dev->dev))
+-		goto err_unreg;
++	dev_set_name(&p_dev->dev, "%d.%d", p_dev->socket->sock, p_dev->device_no);
++	if (device_register(&p_dev->dev)) {
++		mutex_lock(&s->ops_mutex);
++		list_del(&p_dev->socket_device_list);
++		s->device_count--;
++		mutex_unlock(&s->ops_mutex);
++		put_device(&p_dev->dev);
++		return NULL;
++	}
+ 
+ 	return p_dev;
+ 
+diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
+index 567c28705cb1b..62c673660c9a5 100644
+--- a/drivers/platform/x86/wmi.c
++++ b/drivers/platform/x86/wmi.c
+@@ -185,7 +185,7 @@ static int get_subobj_info(acpi_handle handle, const char *pathname,
+ 
+ static acpi_status wmi_method_enable(struct wmi_block *wblock, int enable)
+ {
+-	struct guid_block *block = NULL;
++	struct guid_block *block;
+ 	char method[5];
+ 	acpi_status status;
+ 	acpi_handle handle;
+@@ -259,8 +259,8 @@ EXPORT_SYMBOL_GPL(wmi_evaluate_method);
+ acpi_status wmidev_evaluate_method(struct wmi_device *wdev, u8 instance,
+ 	u32 method_id, const struct acpi_buffer *in, struct acpi_buffer *out)
+ {
+-	struct guid_block *block = NULL;
+-	struct wmi_block *wblock = NULL;
++	struct guid_block *block;
++	struct wmi_block *wblock;
+ 	acpi_handle handle;
+ 	acpi_status status;
+ 	struct acpi_object_list input;
+@@ -307,7 +307,7 @@ EXPORT_SYMBOL_GPL(wmidev_evaluate_method);
+ static acpi_status __query_block(struct wmi_block *wblock, u8 instance,
+ 				 struct acpi_buffer *out)
+ {
+-	struct guid_block *block = NULL;
++	struct guid_block *block;
+ 	acpi_handle handle;
+ 	acpi_status status, wc_status = AE_ERROR;
+ 	struct acpi_object_list input;
+@@ -420,8 +420,8 @@ EXPORT_SYMBOL_GPL(wmidev_block_query);
+ acpi_status wmi_set_block(const char *guid_string, u8 instance,
+ 			  const struct acpi_buffer *in)
+ {
+-	struct guid_block *block = NULL;
+ 	struct wmi_block *wblock = NULL;
++	struct guid_block *block;
+ 	acpi_handle handle;
+ 	struct acpi_object_list input;
+ 	union acpi_object params[2];
+@@ -820,21 +820,13 @@ static int wmi_dev_match(struct device *dev, struct device_driver *driver)
+ }
+ static int wmi_char_open(struct inode *inode, struct file *filp)
+ {
+-	const char *driver_name = filp->f_path.dentry->d_iname;
+-	struct wmi_block *wblock = NULL;
+-	struct wmi_block *next = NULL;
+-
+-	list_for_each_entry_safe(wblock, next, &wmi_block_list, list) {
+-		if (!wblock->dev.dev.driver)
+-			continue;
+-		if (strcmp(driver_name, wblock->dev.dev.driver->name) == 0) {
+-			filp->private_data = wblock;
+-			break;
+-		}
+-	}
++	/*
++	 * The miscdevice already stores a pointer to itself
++	 * inside filp->private_data
++	 */
++	struct wmi_block *wblock = container_of(filp->private_data, struct wmi_block, char_dev);
+ 
+-	if (!filp->private_data)
+-		return -ENODEV;
++	filp->private_data = wblock;
+ 
+ 	return nonseekable_open(inode, filp);
+ }
+@@ -854,8 +846,8 @@ static long wmi_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 	struct wmi_ioctl_buffer __user *input =
+ 		(struct wmi_ioctl_buffer __user *) arg;
+ 	struct wmi_block *wblock = filp->private_data;
+-	struct wmi_ioctl_buffer *buf = NULL;
+-	struct wmi_driver *wdriver = NULL;
++	struct wmi_ioctl_buffer *buf;
++	struct wmi_driver *wdriver;
+ 	int ret;
+ 
+ 	if (_IOC_TYPE(cmd) != WMI_IOC)
+@@ -1157,8 +1149,8 @@ static int parse_wdg(struct device *wmi_bus_dev, struct acpi_device *device)
+ 	struct wmi_block *wblock, *next;
+ 	union acpi_object *obj;
+ 	acpi_status status;
+-	int retval = 0;
+ 	u32 i, total;
++	int retval;
+ 
+ 	status = acpi_evaluate_object(device->handle, "_WDG", NULL, &out);
+ 	if (ACPI_FAILURE(status))
+@@ -1169,8 +1161,8 @@ static int parse_wdg(struct device *wmi_bus_dev, struct acpi_device *device)
+ 		return -ENXIO;
+ 
+ 	if (obj->type != ACPI_TYPE_BUFFER) {
+-		retval = -ENXIO;
+-		goto out_free_pointer;
++		kfree(obj);
++		return -ENXIO;
+ 	}
+ 
+ 	gblock = (const struct guid_block *)obj->buffer.pointer;
+@@ -1191,8 +1183,8 @@ static int parse_wdg(struct device *wmi_bus_dev, struct acpi_device *device)
+ 
+ 		wblock = kzalloc(sizeof(struct wmi_block), GFP_KERNEL);
+ 		if (!wblock) {
+-			retval = -ENOMEM;
+-			break;
++			dev_err(wmi_bus_dev, "Failed to allocate %pUL\n", &gblock[i].guid);
++			continue;
+ 		}
+ 
+ 		wblock->acpi_device = device;
+@@ -1231,9 +1223,9 @@ static int parse_wdg(struct device *wmi_bus_dev, struct acpi_device *device)
+ 		}
+ 	}
+ 
+-out_free_pointer:
+-	kfree(out.pointer);
+-	return retval;
++	kfree(obj);
++
++	return 0;
+ }
+ 
+ /*
+diff --git a/drivers/pwm/pwm-brcmstb.c b/drivers/pwm/pwm-brcmstb.c
+index fea612c45f200..fd8479cd67715 100644
+--- a/drivers/pwm/pwm-brcmstb.c
++++ b/drivers/pwm/pwm-brcmstb.c
+@@ -298,7 +298,7 @@ static int brcmstb_pwm_suspend(struct device *dev)
+ {
+ 	struct brcmstb_pwm *p = dev_get_drvdata(dev);
+ 
+-	clk_disable(p->clk);
++	clk_disable_unprepare(p->clk);
+ 
+ 	return 0;
+ }
+@@ -307,7 +307,7 @@ static int brcmstb_pwm_resume(struct device *dev)
+ {
+ 	struct brcmstb_pwm *p = dev_get_drvdata(dev);
+ 
+-	clk_enable(p->clk);
++	clk_prepare_enable(p->clk);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/pwm/pwm-sti.c b/drivers/pwm/pwm-sti.c
+index 1508616d794cd..9b2174adab3db 100644
+--- a/drivers/pwm/pwm-sti.c
++++ b/drivers/pwm/pwm-sti.c
+@@ -79,6 +79,7 @@ struct sti_pwm_compat_data {
+ 	unsigned int cpt_num_devs;
+ 	unsigned int max_pwm_cnt;
+ 	unsigned int max_prescale;
++	struct sti_cpt_ddata *ddata;
+ };
+ 
+ struct sti_pwm_chip {
+@@ -314,7 +315,7 @@ static int sti_pwm_capture(struct pwm_chip *chip, struct pwm_device *pwm,
+ {
+ 	struct sti_pwm_chip *pc = to_sti_pwmchip(chip);
+ 	struct sti_pwm_compat_data *cdata = pc->cdata;
+-	struct sti_cpt_ddata *ddata = pwm_get_chip_data(pwm);
++	struct sti_cpt_ddata *ddata = &cdata->ddata[pwm->hwpwm];
+ 	struct device *dev = pc->dev;
+ 	unsigned int effective_ticks;
+ 	unsigned long long high, low;
+@@ -417,7 +418,7 @@ static irqreturn_t sti_pwm_interrupt(int irq, void *data)
+ 	while (cpt_int_stat) {
+ 		devicenum = ffs(cpt_int_stat) - 1;
+ 
+-		ddata = pwm_get_chip_data(&pc->chip.pwms[devicenum]);
++		ddata = &pc->cdata->ddata[devicenum];
+ 
+ 		/*
+ 		 * Capture input:
+@@ -593,61 +594,55 @@ static int sti_pwm_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (!cdata->pwm_num_devs)
+-		goto skip_pwm;
+-
+-	pc->pwm_clk = of_clk_get_by_name(dev->of_node, "pwm");
+-	if (IS_ERR(pc->pwm_clk)) {
+-		dev_err(dev, "failed to get PWM clock\n");
+-		return PTR_ERR(pc->pwm_clk);
+-	}
++	if (cdata->pwm_num_devs) {
++		pc->pwm_clk = of_clk_get_by_name(dev->of_node, "pwm");
++		if (IS_ERR(pc->pwm_clk)) {
++			dev_err(dev, "failed to get PWM clock\n");
++			return PTR_ERR(pc->pwm_clk);
++		}
+ 
+-	ret = clk_prepare(pc->pwm_clk);
+-	if (ret) {
+-		dev_err(dev, "failed to prepare clock\n");
+-		return ret;
++		ret = clk_prepare(pc->pwm_clk);
++		if (ret) {
++			dev_err(dev, "failed to prepare clock\n");
++			return ret;
++		}
+ 	}
+ 
+-skip_pwm:
+-	if (!cdata->cpt_num_devs)
+-		goto skip_cpt;
++	if (cdata->cpt_num_devs) {
++		pc->cpt_clk = of_clk_get_by_name(dev->of_node, "capture");
++		if (IS_ERR(pc->cpt_clk)) {
++			dev_err(dev, "failed to get PWM capture clock\n");
++			return PTR_ERR(pc->cpt_clk);
++		}
+ 
+-	pc->cpt_clk = of_clk_get_by_name(dev->of_node, "capture");
+-	if (IS_ERR(pc->cpt_clk)) {
+-		dev_err(dev, "failed to get PWM capture clock\n");
+-		return PTR_ERR(pc->cpt_clk);
+-	}
++		ret = clk_prepare(pc->cpt_clk);
++		if (ret) {
++			dev_err(dev, "failed to prepare clock\n");
++			return ret;
++		}
+ 
+-	ret = clk_prepare(pc->cpt_clk);
+-	if (ret) {
+-		dev_err(dev, "failed to prepare clock\n");
+-		return ret;
++		cdata->ddata = devm_kzalloc(dev, cdata->cpt_num_devs * sizeof(*cdata->ddata), GFP_KERNEL);
++		if (!cdata->ddata)
++			return -ENOMEM;
+ 	}
+ 
+-skip_cpt:
+ 	pc->chip.dev = dev;
+ 	pc->chip.ops = &sti_pwm_ops;
+ 	pc->chip.base = -1;
+ 	pc->chip.npwm = pc->cdata->pwm_num_devs;
+ 
+-	ret = pwmchip_add(&pc->chip);
+-	if (ret < 0) {
+-		clk_unprepare(pc->pwm_clk);
+-		clk_unprepare(pc->cpt_clk);
+-		return ret;
+-	}
+-
+ 	for (i = 0; i < cdata->cpt_num_devs; i++) {
+-		struct sti_cpt_ddata *ddata;
+-
+-		ddata = devm_kzalloc(dev, sizeof(*ddata), GFP_KERNEL);
+-		if (!ddata)
+-			return -ENOMEM;
++		struct sti_cpt_ddata *ddata = &cdata->ddata[i];
+ 
+ 		init_waitqueue_head(&ddata->wait);
+ 		mutex_init(&ddata->lock);
++	}
+ 
+-		pwm_set_chip_data(&pc->chip.pwms[i], ddata);
++	ret = pwmchip_add(&pc->chip);
++	if (ret < 0) {
++		clk_unprepare(pc->pwm_clk);
++		clk_unprepare(pc->cpt_clk);
++		return ret;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, pc);
+diff --git a/drivers/rtc/rtc-pcf85363.c b/drivers/rtc/rtc-pcf85363.c
+index 3450d615974d5..bb962dce3ab26 100644
+--- a/drivers/rtc/rtc-pcf85363.c
++++ b/drivers/rtc/rtc-pcf85363.c
+@@ -407,7 +407,7 @@ static int pcf85363_probe(struct i2c_client *client,
+ 	if (client->irq > 0) {
+ 		regmap_write(pcf85363->regmap, CTRL_FLAGS, 0);
+ 		regmap_update_bits(pcf85363->regmap, CTRL_PIN_IO,
+-				   PIN_IO_INTA_OUT, PIN_IO_INTAPM);
++				   PIN_IO_INTAPM, PIN_IO_INTA_OUT);
+ 		ret = devm_request_threaded_irq(&client->dev, client->irq,
+ 						NULL, pcf85363_rtc_handle_irq,
+ 						IRQF_TRIGGER_LOW | IRQF_ONESHOT,
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index f3389e9131794..a432aebd14be6 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -3339,7 +3339,7 @@ int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index,
+ 		 */
+ 		ret = utf16s_to_utf8s(uc_str->uc,
+ 				      uc_str->len - QUERY_DESC_HDR_SIZE,
+-				      UTF16_BIG_ENDIAN, str, ascii_len);
++				      UTF16_BIG_ENDIAN, str, ascii_len - 1);
+ 
+ 		/* replace non-printable or non-ASCII characters with spaces */
+ 		for (i = 0; i < ret; i++)
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index c60fe98f03e37..8cf7b142e1410 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -413,6 +413,9 @@ static int qcom_llcc_probe(struct platform_device *pdev)
+ 	const struct llcc_slice_config *llcc_cfg;
+ 	u32 sz;
+ 
++	if (!IS_ERR(drv_data))
++		return -EBUSY;
++
+ 	drv_data = devm_kzalloc(dev, sizeof(*drv_data), GFP_KERNEL);
+ 	if (!drv_data) {
+ 		ret = -ENOMEM;
+diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
+index 4d98ce7571df0..5fd9b515f6f14 100644
+--- a/drivers/spi/Kconfig
++++ b/drivers/spi/Kconfig
+@@ -949,6 +949,7 @@ config SPI_XTENSA_XTFPGA
+ config SPI_ZYNQ_QSPI
+ 	tristate "Xilinx Zynq QSPI controller"
+ 	depends on ARCH_ZYNQ || COMPILE_TEST
++	depends on SPI_MEM
+ 	help
+ 	  This enables support for the Zynq Quad SPI controller
+ 	  in master mode.
+diff --git a/drivers/spi/spi-nxp-fspi.c b/drivers/spi/spi-nxp-fspi.c
+index 90b5fbc914ae2..f40b93960b893 100644
+--- a/drivers/spi/spi-nxp-fspi.c
++++ b/drivers/spi/spi-nxp-fspi.c
+@@ -685,7 +685,7 @@ static int nxp_fspi_read_ahb(struct nxp_fspi *f, const struct spi_mem_op *op)
+ 		f->memmap_len = len > NXP_FSPI_MIN_IOMAP ?
+ 				len : NXP_FSPI_MIN_IOMAP;
+ 
+-		f->ahb_addr = ioremap_wc(f->memmap_phy + f->memmap_start,
++		f->ahb_addr = ioremap(f->memmap_phy + f->memmap_start,
+ 					 f->memmap_len);
+ 
+ 		if (!f->ahb_addr) {
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_hw.c b/drivers/staging/media/sunxi/cedrus/cedrus_hw.c
+index bcf050a04ffc4..e782731f0a6a4 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_hw.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_hw.c
+@@ -145,12 +145,12 @@ int cedrus_hw_suspend(struct device *device)
+ {
+ 	struct cedrus_dev *dev = dev_get_drvdata(device);
+ 
+-	reset_control_assert(dev->rstc);
+-
+ 	clk_disable_unprepare(dev->ram_clk);
+ 	clk_disable_unprepare(dev->mod_clk);
+ 	clk_disable_unprepare(dev->ahb_clk);
+ 
++	reset_control_assert(dev->rstc);
++
+ 	return 0;
+ }
+ 
+@@ -159,11 +159,18 @@ int cedrus_hw_resume(struct device *device)
+ 	struct cedrus_dev *dev = dev_get_drvdata(device);
+ 	int ret;
+ 
++	ret = reset_control_reset(dev->rstc);
++	if (ret) {
++		dev_err(dev->dev, "Failed to apply reset\n");
++
++		return ret;
++	}
++
+ 	ret = clk_prepare_enable(dev->ahb_clk);
+ 	if (ret) {
+ 		dev_err(dev->dev, "Failed to enable AHB clock\n");
+ 
+-		return ret;
++		goto err_rst;
+ 	}
+ 
+ 	ret = clk_prepare_enable(dev->mod_clk);
+@@ -180,21 +187,14 @@ int cedrus_hw_resume(struct device *device)
+ 		goto err_mod_clk;
+ 	}
+ 
+-	ret = reset_control_reset(dev->rstc);
+-	if (ret) {
+-		dev_err(dev->dev, "Failed to apply reset\n");
+-
+-		goto err_ram_clk;
+-	}
+-
+ 	return 0;
+ 
+-err_ram_clk:
+-	clk_disable_unprepare(dev->ram_clk);
+ err_mod_clk:
+ 	clk_disable_unprepare(dev->mod_clk);
+ err_ahb_clk:
+ 	clk_disable_unprepare(dev->ahb_clk);
++err_rst:
++	reset_control_assert(dev->rstc);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index dd449945e1e5e..1cf49912dc96c 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -879,7 +879,8 @@ int thermal_zone_bind_cooling_device(struct thermal_zone_device *tz,
+ 	if (result)
+ 		goto release_ida;
+ 
+-	sprintf(dev->attr_name, "cdev%d_trip_point", dev->id);
++	snprintf(dev->attr_name, sizeof(dev->attr_name), "cdev%d_trip_point",
++		 dev->id);
+ 	sysfs_attr_init(&dev->attr.attr);
+ 	dev->attr.attr.name = dev->attr_name;
+ 	dev->attr.attr.mode = 0444;
+@@ -888,7 +889,8 @@ int thermal_zone_bind_cooling_device(struct thermal_zone_device *tz,
+ 	if (result)
+ 		goto remove_symbol_link;
+ 
+-	sprintf(dev->weight_attr_name, "cdev%d_weight", dev->id);
++	snprintf(dev->weight_attr_name, sizeof(dev->weight_attr_name),
++		 "cdev%d_weight", dev->id);
+ 	sysfs_attr_init(&dev->weight_attr.attr);
+ 	dev->weight_attr.attr.name = dev->weight_attr_name;
+ 	dev->weight_attr.attr.mode = S_IWUSR | S_IRUGO;
+diff --git a/drivers/tty/tty_jobctrl.c b/drivers/tty/tty_jobctrl.c
+index 95d67613b25b6..a27d021871748 100644
+--- a/drivers/tty/tty_jobctrl.c
++++ b/drivers/tty/tty_jobctrl.c
+@@ -291,12 +291,7 @@ void disassociate_ctty(int on_exit)
+ 		return;
+ 	}
+ 
+-	spin_lock_irq(&current->sighand->siglock);
+-	put_pid(current->signal->tty_old_pgrp);
+-	current->signal->tty_old_pgrp = NULL;
+-	tty = tty_kref_get(current->signal->tty);
+-	spin_unlock_irq(&current->sighand->siglock);
+-
++	tty = get_current_tty();
+ 	if (tty) {
+ 		unsigned long flags;
+ 
+@@ -311,6 +306,16 @@ void disassociate_ctty(int on_exit)
+ 		tty_kref_put(tty);
+ 	}
+ 
++	/* If tty->ctrl.pgrp is not NULL, it may be assigned to
++	 * current->signal->tty_old_pgrp in a race condition, and
++	 * cause pid memleak. Release current->signal->tty_old_pgrp
++	 * after tty->ctrl.pgrp set to NULL.
++	 */
++	spin_lock_irq(&current->sighand->siglock);
++	put_pid(current->signal->tty_old_pgrp);
++	current->signal->tty_old_pgrp = NULL;
++	spin_unlock_irq(&current->sighand->siglock);
++
+ 	/* Now clear signal->tty under the lock */
+ 	read_lock(&tasklist_lock);
+ 	session_clear_tty(task_session(current));
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 9279d3d3698c2..14925fedb01aa 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -4684,8 +4684,8 @@ fail3:
+ 	if (qh_allocated && qh->channel && qh->channel->qh == qh)
+ 		qh->channel->qh = NULL;
+ fail2:
+-	spin_unlock_irqrestore(&hsotg->lock, flags);
+ 	urb->hcpriv = NULL;
++	spin_unlock_irqrestore(&hsotg->lock, flags);
+ 	kfree(qtd);
+ fail1:
+ 	if (qh_allocated) {
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 8034e643a4afd..7d0c2cccbfc0f 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -345,6 +345,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	/* xHC spec requires PCI devices to support D3hot and D3cold */
+ 	if (xhci->hci_version >= 0x120)
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
++	else if (pdev->vendor == PCI_VENDOR_ID_AMD && xhci->hci_version >= 0x110)
++		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+ 
+ 	if (xhci->quirks & XHCI_RESET_ON_RESUME)
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 972a44b2a7f12..e56a1fb9715a7 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -466,23 +466,38 @@ static int __maybe_unused xhci_plat_resume(struct device *dev)
+ 	int ret;
+ 
+ 	if (!device_may_wakeup(dev) && (xhci->quirks & XHCI_SUSPEND_RESUME_CLKS)) {
+-		clk_prepare_enable(xhci->clk);
+-		clk_prepare_enable(xhci->reg_clk);
++		ret = clk_prepare_enable(xhci->clk);
++		if (ret)
++			return ret;
++
++		ret = clk_prepare_enable(xhci->reg_clk);
++		if (ret) {
++			clk_disable_unprepare(xhci->clk);
++			return ret;
++		}
+ 	}
+ 
+ 	ret = xhci_priv_resume_quirk(hcd);
+ 	if (ret)
+-		return ret;
++		goto disable_clks;
+ 
+ 	ret = xhci_resume(xhci, 0);
+ 	if (ret)
+-		return ret;
++		goto disable_clks;
+ 
+ 	pm_runtime_disable(dev);
+ 	pm_runtime_set_active(dev);
+ 	pm_runtime_enable(dev);
+ 
+ 	return 0;
++
++disable_clks:
++	if (!device_may_wakeup(dev) && (xhci->quirks & XHCI_SUSPEND_RESUME_CLKS)) {
++		clk_disable_unprepare(xhci->clk);
++		clk_disable_unprepare(xhci->reg_clk);
++	}
++
++	return ret;
+ }
+ 
+ static int __maybe_unused xhci_plat_runtime_suspend(struct device *dev)
+diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c
+index 3c6d452e3bf40..4104eea03e806 100644
+--- a/drivers/usb/usbip/stub_dev.c
++++ b/drivers/usb/usbip/stub_dev.c
+@@ -462,8 +462,13 @@ static void stub_disconnect(struct usb_device *udev)
+ 	/* release port */
+ 	rc = usb_hub_release_port(udev->parent, udev->portnum,
+ 				  (struct usb_dev_state *) udev);
+-	if (rc) {
+-		dev_dbg(&udev->dev, "unable to release port\n");
++	/*
++	 * NOTE: If a HUB disconnect triggered disconnect of the down stream
++	 * device usb_hub_release_port will return -ENODEV so we can safely ignore
++	 * that error here.
++	 */
++	if (rc && (rc != -ENODEV)) {
++		dev_dbg(&udev->dev, "unable to release port (%i)\n", rc);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/video/fbdev/fsl-diu-fb.c b/drivers/video/fbdev/fsl-diu-fb.c
+index a547c21c7e928..5d564e8670c52 100644
+--- a/drivers/video/fbdev/fsl-diu-fb.c
++++ b/drivers/video/fbdev/fsl-diu-fb.c
+@@ -490,7 +490,7 @@ static enum fsl_diu_monitor_port fsl_diu_name_to_port(const char *s)
+  * Workaround for failed writing desc register of planes.
+  * Needed with MPC5121 DIU rev 2.0 silicon.
+  */
+-void wr_reg_wa(u32 *reg, u32 val)
++static void wr_reg_wa(u32 *reg, u32 val)
+ {
+ 	do {
+ 		out_be32(reg, val);
+diff --git a/drivers/video/fbdev/imsttfb.c b/drivers/video/fbdev/imsttfb.c
+index 1b2fb8ed76237..e559c844436bd 100644
+--- a/drivers/video/fbdev/imsttfb.c
++++ b/drivers/video/fbdev/imsttfb.c
+@@ -1489,8 +1489,8 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	if (!request_mem_region(addr, size, "imsttfb")) {
+ 		printk(KERN_ERR "imsttfb: Can't reserve memory region\n");
+-		framebuffer_release(info);
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto release_info;
+ 	}
+ 
+ 	switch (pdev->device) {
+@@ -1507,34 +1507,39 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 			printk(KERN_INFO "imsttfb: Device 0x%x unknown, "
+ 					 "contact maintainer.\n", pdev->device);
+ 			ret = -ENODEV;
+-			goto error;
++			goto release_mem_region;
+ 	}
+ 
+ 	info->fix.smem_start = addr;
+ 	info->screen_base = (__u8 *)ioremap(addr, par->ramdac == IBM ?
+ 					    0x400000 : 0x800000);
+ 	if (!info->screen_base)
+-		goto error;
++		goto release_mem_region;
+ 	info->fix.mmio_start = addr + 0x800000;
+ 	par->dc_regs = ioremap(addr + 0x800000, 0x1000);
+ 	if (!par->dc_regs)
+-		goto error;
++		goto unmap_screen_base;
+ 	par->cmap_regs_phys = addr + 0x840000;
+ 	par->cmap_regs = (__u8 *)ioremap(addr + 0x840000, 0x1000);
+ 	if (!par->cmap_regs)
+-		goto error;
++		goto unmap_dc_regs;
+ 	info->pseudo_palette = par->palette;
+ 	ret = init_imstt(info);
+-	if (!ret)
+-		pci_set_drvdata(pdev, info);
+-	return ret;
++	if (ret)
++		goto unmap_cmap_regs;
++
++	pci_set_drvdata(pdev, info);
++	return 0;
+ 
+-error:
+-	if (par->dc_regs)
+-		iounmap(par->dc_regs);
+-	if (info->screen_base)
+-		iounmap(info->screen_base);
++unmap_cmap_regs:
++	iounmap(par->cmap_regs);
++unmap_dc_regs:
++	iounmap(par->dc_regs);
++unmap_screen_base:
++	iounmap(info->screen_base);
++release_mem_region:
+ 	release_mem_region(addr, size);
++release_info:
+ 	framebuffer_release(info);
+ 	return ret;
+ }
+diff --git a/drivers/xen/xen-pciback/conf_space.c b/drivers/xen/xen-pciback/conf_space.c
+index 059de92aea7d0..d47eee6c51435 100644
+--- a/drivers/xen/xen-pciback/conf_space.c
++++ b/drivers/xen/xen-pciback/conf_space.c
+@@ -288,12 +288,6 @@ int xen_pcibk_get_interrupt_type(struct pci_dev *dev)
+ 	u16 val;
+ 	int ret = 0;
+ 
+-	err = pci_read_config_word(dev, PCI_COMMAND, &val);
+-	if (err)
+-		return err;
+-	if (!(val & PCI_COMMAND_INTX_DISABLE))
+-		ret |= INTERRUPT_TYPE_INTX;
+-
+ 	/*
+ 	 * Do not trust dev->msi(x)_enabled here, as enabling could be done
+ 	 * bypassing the pci_*msi* functions, by the qemu.
+@@ -316,6 +310,19 @@ int xen_pcibk_get_interrupt_type(struct pci_dev *dev)
+ 		if (val & PCI_MSIX_FLAGS_ENABLE)
+ 			ret |= INTERRUPT_TYPE_MSIX;
+ 	}
++
++	/*
++	 * PCIe spec says device cannot use INTx if MSI/MSI-X is enabled,
++	 * so check for INTx only when both are disabled.
++	 */
++	if (!ret) {
++		err = pci_read_config_word(dev, PCI_COMMAND, &val);
++		if (err)
++			return err;
++		if (!(val & PCI_COMMAND_INTX_DISABLE))
++			ret |= INTERRUPT_TYPE_INTX;
++	}
++
+ 	return ret ?: INTERRUPT_TYPE_NONE;
+ }
+ 
+diff --git a/drivers/xen/xen-pciback/conf_space_capability.c b/drivers/xen/xen-pciback/conf_space_capability.c
+index 097316a741268..1948a9700c8fa 100644
+--- a/drivers/xen/xen-pciback/conf_space_capability.c
++++ b/drivers/xen/xen-pciback/conf_space_capability.c
+@@ -236,10 +236,16 @@ static int msi_msix_flags_write(struct pci_dev *dev, int offset, u16 new_value,
+ 		return PCIBIOS_SET_FAILED;
+ 
+ 	if (new_value & field_config->enable_bit) {
+-		/* don't allow enabling together with other interrupt types */
++		/*
++		 * Don't allow enabling together with other interrupt type, but do
++		 * allow enabling MSI(-X) while INTx is still active to please Linuxes
++		 * MSI(-X) startup sequence. It is safe to do, as according to PCI
++		 * spec, device with enabled MSI(-X) shouldn't use INTx.
++		 */
+ 		int int_type = xen_pcibk_get_interrupt_type(dev);
+ 
+ 		if (int_type == INTERRUPT_TYPE_NONE ||
++		    int_type == INTERRUPT_TYPE_INTX ||
+ 		    int_type == field_config->int_type)
+ 			goto write;
+ 		return PCIBIOS_SET_FAILED;
+diff --git a/drivers/xen/xen-pciback/conf_space_header.c b/drivers/xen/xen-pciback/conf_space_header.c
+index ac45cdc38e859..fcaa050d692d2 100644
+--- a/drivers/xen/xen-pciback/conf_space_header.c
++++ b/drivers/xen/xen-pciback/conf_space_header.c
+@@ -104,24 +104,9 @@ static int command_write(struct pci_dev *dev, int offset, u16 value, void *data)
+ 		pci_clear_mwi(dev);
+ 	}
+ 
+-	if (dev_data && dev_data->allow_interrupt_control) {
+-		if ((cmd->val ^ value) & PCI_COMMAND_INTX_DISABLE) {
+-			if (value & PCI_COMMAND_INTX_DISABLE) {
+-				pci_intx(dev, 0);
+-			} else {
+-				/* Do not allow enabling INTx together with MSI or MSI-X. */
+-				switch (xen_pcibk_get_interrupt_type(dev)) {
+-				case INTERRUPT_TYPE_NONE:
+-					pci_intx(dev, 1);
+-					break;
+-				case INTERRUPT_TYPE_INTX:
+-					break;
+-				default:
+-					return PCIBIOS_SET_FAILED;
+-				}
+-			}
+-		}
+-	}
++	if (dev_data && dev_data->allow_interrupt_control &&
++	    ((cmd->val ^ value) & PCI_COMMAND_INTX_DISABLE))
++		pci_intx(dev, !(value & PCI_COMMAND_INTX_DISABLE));
+ 
+ 	cmd->val = value;
+ 
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index cffd149faf639..6da8587e2ae37 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -2104,7 +2104,7 @@ static noinline int key_in_sk(struct btrfs_key *key,
+ static noinline int copy_to_sk(struct btrfs_path *path,
+ 			       struct btrfs_key *key,
+ 			       struct btrfs_ioctl_search_key *sk,
+-			       size_t *buf_size,
++			       u64 *buf_size,
+ 			       char __user *ubuf,
+ 			       unsigned long *sk_offset,
+ 			       int *num_found)
+@@ -2236,7 +2236,7 @@ out:
+ 
+ static noinline int search_ioctl(struct inode *inode,
+ 				 struct btrfs_ioctl_search_key *sk,
+-				 size_t *buf_size,
++				 u64 *buf_size,
+ 				 char __user *ubuf)
+ {
+ 	struct btrfs_fs_info *info = btrfs_sb(inode->i_sb);
+@@ -2306,7 +2306,7 @@ static noinline int btrfs_ioctl_tree_search(struct file *file,
+ 	struct btrfs_ioctl_search_key sk;
+ 	struct inode *inode;
+ 	int ret;
+-	size_t buf_size;
++	u64 buf_size;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+@@ -2340,8 +2340,8 @@ static noinline int btrfs_ioctl_tree_search_v2(struct file *file,
+ 	struct btrfs_ioctl_search_args_v2 args;
+ 	struct inode *inode;
+ 	int ret;
+-	size_t buf_size;
+-	const size_t buf_limit = SZ_16M;
++	u64 buf_size;
++	const u64 buf_limit = SZ_16M;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 2c2e1cc43e0e8..193b13630ac1e 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -1003,6 +1003,11 @@ static int ext4_ext_insert_index(handle_t *handle, struct inode *inode,
+ 		ix = curp->p_idx;
+ 	}
+ 
++	if (unlikely(ix > EXT_MAX_INDEX(curp->p_hdr))) {
++		EXT4_ERROR_INODE(inode, "ix > EXT_MAX_INDEX!");
++		return -EFSCORRUPTED;
++	}
++
+ 	len = EXT_LAST_INDEX(curp->p_hdr) - ix + 1;
+ 	BUG_ON(len < 0);
+ 	if (len > 0) {
+@@ -1012,11 +1017,6 @@ static int ext4_ext_insert_index(handle_t *handle, struct inode *inode,
+ 		memmove(ix + 1, ix, len * sizeof(struct ext4_extent_idx));
+ 	}
+ 
+-	if (unlikely(ix > EXT_MAX_INDEX(curp->p_hdr))) {
+-		EXT4_ERROR_INODE(inode, "ix > EXT_MAX_INDEX!");
+-		return -EFSCORRUPTED;
+-	}
+-
+ 	ix->ei_block = cpu_to_le32(logical);
+ 	ext4_idx_store_pblock(ix, ptr);
+ 	le16_add_cpu(&curp->p_hdr->eh_entries, 1);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 6e91be5b8c30f..4e6b93f167589 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -3315,6 +3315,7 @@ int f2fs_precache_extents(struct inode *inode)
+ 		return -EOPNOTSUPP;
+ 
+ 	map.m_lblk = 0;
++	map.m_pblk = 0;
+ 	map.m_next_pgofs = NULL;
+ 	map.m_next_extent = &m_next_extent;
+ 	map.m_seg_type = NO_CHECK_TYPE;
+diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
+index ce03c3dbb5c30..d59f13b1fb96b 100644
+--- a/fs/pstore/platform.c
++++ b/fs/pstore/platform.c
+@@ -558,6 +558,8 @@ out:
+  */
+ int pstore_register(struct pstore_info *psi)
+ {
++	char *new_backend;
++
+ 	if (backend && strcmp(backend, psi->name)) {
+ 		pr_warn("ignoring unexpected backend '%s'\n", psi->name);
+ 		return -EPERM;
+@@ -577,11 +579,16 @@ int pstore_register(struct pstore_info *psi)
+ 		return -EINVAL;
+ 	}
+ 
++	new_backend = kstrdup(psi->name, GFP_KERNEL);
++	if (!new_backend)
++		return -ENOMEM;
++
+ 	mutex_lock(&psinfo_lock);
+ 	if (psinfo) {
+ 		pr_warn("backend '%s' already loaded: ignoring '%s'\n",
+ 			psinfo->name, psi->name);
+ 		mutex_unlock(&psinfo_lock);
++		kfree(new_backend);
+ 		return -EBUSY;
+ 	}
+ 
+@@ -614,7 +621,7 @@ int pstore_register(struct pstore_info *psi)
+ 	 * Update the module parameter backend, so it is visible
+ 	 * through /sys/module/pstore/parameters/backend
+ 	 */
+-	backend = kstrdup(psi->name, GFP_KERNEL);
++	backend = new_backend;
+ 
+ 	pr_info("Registered %s as persistent store backend\n", psi->name);
+ 
+diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
+index 03a5de5f99f4a..aa8cbf8829145 100644
+--- a/include/linux/clk-provider.h
++++ b/include/linux/clk-provider.h
+@@ -61,7 +61,7 @@ struct clk_rate_request {
+ };
+ 
+ /**
+- * struct clk_duty - Struture encoding the duty cycle ratio of a clock
++ * struct clk_duty - Structure encoding the duty cycle ratio of a clock
+  *
+  * @num:	Numerator of the duty cycle ratio
+  * @den:	Denominator of the duty cycle ratio
+@@ -116,7 +116,7 @@ struct clk_duty {
+  * @restore_context: Restore the context of the clock after a restoration
+  *		of power.
+  *
+- * @recalc_rate	Recalculate the rate of this clock, by querying hardware. The
++ * @recalc_rate: Recalculate the rate of this clock, by querying hardware. The
+  *		parent rate is an input parameter.  It is up to the caller to
+  *		ensure that the prepare_mutex is held across this call.
+  *		Returns the calculated rate.  Optional, but recommended - if
+@@ -429,7 +429,7 @@ struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+  * clock with the clock framework
+  * @dev: device that is registering this clock
+  * @name: name of this clock
+- * @parent_name: name of clock's parent
++ * @parent_data: name of clock's parent
+  * @flags: framework-specific flags
+  * @fixed_rate: non-adjustable clock rate
+  * @fixed_accuracy: non-adjustable clock accuracy
+@@ -439,6 +439,20 @@ struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+ 	__clk_hw_register_fixed_rate((dev), NULL, (name), NULL, NULL,	      \
+ 				     (parent_data), NULL, (flags),	      \
+ 				     (fixed_rate), (fixed_accuracy), 0)
++/**
++ * clk_hw_register_fixed_rate_parent_accuracy - register fixed-rate clock with
++ * the clock framework
++ * @dev: device that is registering this clock
++ * @name: name of this clock
++ * @parent_data: name of clock's parent
++ * @flags: framework-specific flags
++ * @fixed_rate: non-adjustable clock rate
++ */
++#define clk_hw_register_fixed_rate_parent_accuracy(dev, name, parent_data,    \
++						   flags, fixed_rate)	      \
++	__clk_hw_register_fixed_rate((dev), NULL, (name), NULL, NULL,      \
++				     (parent_data), (flags), (fixed_rate), 0,    \
++				     CLK_FIXED_RATE_PARENT_ACCURACY)
+ 
+ void clk_unregister_fixed_rate(struct clk *clk);
+ void clk_hw_unregister_fixed_rate(struct clk_hw *hw);
+@@ -566,7 +580,7 @@ struct clk_div_table {
+  * Clock with an adjustable divider affecting its output frequency.  Implements
+  * .recalc_rate, .set_rate and .round_rate
+  *
+- * Flags:
++ * @flags:
+  * CLK_DIVIDER_ONE_BASED - by default the divisor is the value read from the
+  *	register plus one.  If CLK_DIVIDER_ONE_BASED is set then the divider is
+  *	the raw value read from the register, with the value of zero considered
+@@ -858,6 +872,13 @@ struct clk *clk_register_mux_table(struct device *dev, const char *name,
+ 			      (parent_names), NULL, NULL, (flags), (reg),     \
+ 			      (shift), (mask), (clk_mux_flags), (table),      \
+ 			      (lock))
++#define clk_hw_register_mux_table_parent_data(dev, name, parent_data,	      \
++				  num_parents, flags, reg, shift, mask,	      \
++				  clk_mux_flags, table, lock)		      \
++	__clk_hw_register_mux((dev), NULL, (name), (num_parents),	      \
++			      NULL, NULL, (parent_data), (flags), (reg),      \
++			      (shift), (mask), (clk_mux_flags), (table),      \
++			      (lock))
+ #define clk_hw_register_mux(dev, name, parent_names, num_parents, flags, reg, \
+ 			    shift, width, clk_mux_flags, lock)		      \
+ 	__clk_hw_register_mux((dev), NULL, (name), (num_parents),	      \
+@@ -924,11 +945,12 @@ void clk_hw_unregister_fixed_factor(struct clk_hw *hw);
+  * @mwidth:	width of the numerator bit field
+  * @nshift:	shift to the denominator bit field
+  * @nwidth:	width of the denominator bit field
++ * @approximation: clk driver's callback for calculating the divider clock
+  * @lock:	register lock
+  *
+  * Clock with adjustable fractional divider affecting its output frequency.
+  *
+- * Flags:
++ * @flags:
+  * CLK_FRAC_DIVIDER_ZERO_BASED - by default the numerator and denominator
+  *	is the value read from the register. If CLK_FRAC_DIVIDER_ZERO_BASED
+  *	is set then the numerator and denominator are both the value read
+@@ -981,7 +1003,7 @@ void clk_hw_unregister_fractional_divider(struct clk_hw *hw);
+  * Clock with an adjustable multiplier affecting its output frequency.
+  * Implements .recalc_rate, .set_rate and .round_rate
+  *
+- * Flags:
++ * @flags:
+  * CLK_MULTIPLIER_ZERO_BYPASS - By default, the multiplier is the value read
+  *	from the register, with 0 being a valid value effectively
+  *	zeroing the output clock rate. If CLK_MULTIPLIER_ZERO_BYPASS is
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index cb87247da5ba1..7cc2889608e0f 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -144,6 +144,7 @@ enum cpuhp_state {
+ 	/* Must be the last timer callback */
+ 	CPUHP_AP_DUMMY_TIMER_STARTING,
+ 	CPUHP_AP_ARM_XEN_STARTING,
++	CPUHP_AP_ARM_XEN_RUNSTATE_STARTING,
+ 	CPUHP_AP_ARM_CORESIGHT_STARTING,
+ 	CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,
+ 	CPUHP_AP_ARM64_ISNDEP_STARTING,
+diff --git a/include/linux/idr.h b/include/linux/idr.h
+index a0dce14090a9e..da5f5fa4a3a6a 100644
+--- a/include/linux/idr.h
++++ b/include/linux/idr.h
+@@ -200,7 +200,7 @@ static inline void idr_preload_end(void)
+  */
+ #define idr_for_each_entry_ul(idr, entry, tmp, id)			\
+ 	for (tmp = 0, id = 0;						\
+-	     tmp <= id && ((entry) = idr_get_next_ul(idr, &(id))) != NULL; \
++	     ((entry) = tmp <= id ? idr_get_next_ul(idr, &(id)) : NULL) != NULL; \
+ 	     tmp = id, ++id)
+ 
+ /**
+@@ -224,10 +224,12 @@ static inline void idr_preload_end(void)
+  * @id: Entry ID.
+  *
+  * Continue to iterate over entries, continuing after the current position.
++ * After normal termination @entry is left with the value NULL.  This
++ * is convenient for a "not found" value.
+  */
+ #define idr_for_each_entry_continue_ul(idr, entry, tmp, id)		\
+ 	for (tmp = id;							\
+-	     tmp <= id && ((entry) = idr_get_next_ul(idr, &(id))) != NULL; \
++	     ((entry) = tmp <= id ? idr_get_next_ul(idr, &(id)) : NULL) != NULL; \
+ 	     tmp = id, ++id)
+ 
+ /*
+diff --git a/include/linux/mfd/core.h b/include/linux/mfd/core.h
+index 4b35baa14d308..8974c7142c94c 100644
+--- a/include/linux/mfd/core.h
++++ b/include/linux/mfd/core.h
+@@ -92,7 +92,7 @@ struct mfd_cell {
+ 	 * (above) when matching OF nodes with devices that have identical
+ 	 * compatible strings
+ 	 */
+-	const u64 of_reg;
++	u64 of_reg;
+ 
+ 	/* Set to 'true' to use 'of_reg' (above) - allows for of_reg=0 */
+ 	bool use_of_reg;
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index e814ce78a1965..3380668478e8a 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -5286,5 +5286,6 @@ extern struct net_device *blackhole_netdev;
+ #define DEV_STATS_INC(DEV, FIELD) atomic_long_inc(&(DEV)->stats.__##FIELD)
+ #define DEV_STATS_ADD(DEV, FIELD, VAL) 	\
+ 		atomic_long_add((VAL), &(DEV)->stats.__##FIELD)
++#define DEV_STATS_READ(DEV, FIELD) atomic_long_read(&(DEV)->stats.__##FIELD)
+ 
+ #endif	/* _LINUX_NETDEVICE_H */
+diff --git a/include/linux/overflow.h b/include/linux/overflow.h
+index ef74051d5cfed..35af574d006f5 100644
+--- a/include/linux/overflow.h
++++ b/include/linux/overflow.h
+@@ -250,81 +250,94 @@ static inline bool __must_check __must_check_overflow(bool overflow)
+ }))
+ 
+ /**
+- * array_size() - Calculate size of 2-dimensional array.
+- *
+- * @a: dimension one
+- * @b: dimension two
++ * size_mul() - Calculate size_t multiplication with saturation at SIZE_MAX
+  *
+- * Calculates size of 2-dimensional array: @a * @b.
++ * @factor1: first factor
++ * @factor2: second factor
+  *
+- * Returns: number of bytes needed to represent the array or SIZE_MAX on
+- * overflow.
++ * Returns: calculate @factor1 * @factor2, both promoted to size_t,
++ * with any overflow causing the return value to be SIZE_MAX. The
++ * lvalue must be size_t to avoid implicit type conversion.
+  */
+-static inline __must_check size_t array_size(size_t a, size_t b)
++static inline size_t __must_check size_mul(size_t factor1, size_t factor2)
+ {
+ 	size_t bytes;
+ 
+-	if (check_mul_overflow(a, b, &bytes))
++	if (check_mul_overflow(factor1, factor2, &bytes))
+ 		return SIZE_MAX;
+ 
+ 	return bytes;
+ }
+ 
+ /**
+- * array3_size() - Calculate size of 3-dimensional array.
++ * size_add() - Calculate size_t addition with saturation at SIZE_MAX
+  *
+- * @a: dimension one
+- * @b: dimension two
+- * @c: dimension three
+- *
+- * Calculates size of 3-dimensional array: @a * @b * @c.
++ * @addend1: first addend
++ * @addend2: second addend
+  *
+- * Returns: number of bytes needed to represent the array or SIZE_MAX on
+- * overflow.
++ * Returns: calculate @addend1 + @addend2, both promoted to size_t,
++ * with any overflow causing the return value to be SIZE_MAX. The
++ * lvalue must be size_t to avoid implicit type conversion.
+  */
+-static inline __must_check size_t array3_size(size_t a, size_t b, size_t c)
++static inline size_t __must_check size_add(size_t addend1, size_t addend2)
+ {
+ 	size_t bytes;
+ 
+-	if (check_mul_overflow(a, b, &bytes))
+-		return SIZE_MAX;
+-	if (check_mul_overflow(bytes, c, &bytes))
++	if (check_add_overflow(addend1, addend2, &bytes))
+ 		return SIZE_MAX;
+ 
+ 	return bytes;
+ }
+ 
+-/*
+- * Compute a*b+c, returning SIZE_MAX on overflow. Internal helper for
+- * struct_size() below.
++/**
++ * size_sub() - Calculate size_t subtraction with saturation at SIZE_MAX
++ *
++ * @minuend: value to subtract from
++ * @subtrahend: value to subtract from @minuend
++ *
++ * Returns: calculate @minuend - @subtrahend, both promoted to size_t,
++ * with any overflow causing the return value to be SIZE_MAX. For
++ * composition with the size_add() and size_mul() helpers, neither
++ * argument may be SIZE_MAX (or the result with be forced to SIZE_MAX).
++ * The lvalue must be size_t to avoid implicit type conversion.
+  */
+-static inline __must_check size_t __ab_c_size(size_t a, size_t b, size_t c)
++static inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)
+ {
+ 	size_t bytes;
+ 
+-	if (check_mul_overflow(a, b, &bytes))
+-		return SIZE_MAX;
+-	if (check_add_overflow(bytes, c, &bytes))
++	if (minuend == SIZE_MAX || subtrahend == SIZE_MAX ||
++	    check_sub_overflow(minuend, subtrahend, &bytes))
+ 		return SIZE_MAX;
+ 
+ 	return bytes;
+ }
+ 
+ /**
+- * struct_size() - Calculate size of structure with trailing array.
+- * @p: Pointer to the structure.
+- * @member: Name of the array member.
+- * @count: Number of elements in the array.
++ * array_size() - Calculate size of 2-dimensional array.
+  *
+- * Calculates size of memory needed for structure @p followed by an
+- * array of @count number of @member elements.
++ * @a: dimension one
++ * @b: dimension two
+  *
+- * Return: number of bytes needed or SIZE_MAX on overflow.
++ * Calculates size of 2-dimensional array: @a * @b.
++ *
++ * Returns: number of bytes needed to represent the array or SIZE_MAX on
++ * overflow.
+  */
+-#define struct_size(p, member, count)					\
+-	__ab_c_size(count,						\
+-		    sizeof(*(p)->member) + __must_be_array((p)->member),\
+-		    sizeof(*(p)))
++#define array_size(a, b)	size_mul(a, b)
++
++/**
++ * array3_size() - Calculate size of 3-dimensional array.
++ *
++ * @a: dimension one
++ * @b: dimension two
++ * @c: dimension three
++ *
++ * Calculates size of 3-dimensional array: @a * @b * @c.
++ *
++ * Returns: number of bytes needed to represent the array or SIZE_MAX on
++ * overflow.
++ */
++#define array3_size(a, b, c)	size_mul(size_mul(a, b), c)
+ 
+ /**
+  * flex_array_size() - Calculate size of a flexible array member
+@@ -340,7 +353,22 @@ static inline __must_check size_t __ab_c_size(size_t a, size_t b, size_t c)
+  * Return: number of bytes needed or SIZE_MAX on overflow.
+  */
+ #define flex_array_size(p, member, count)				\
+-	array_size(count,						\
+-		    sizeof(*(p)->member) + __must_be_array((p)->member))
++	size_mul(count,							\
++		 sizeof(*(p)->member) + __must_be_array((p)->member))
++
++/**
++ * struct_size() - Calculate size of structure with trailing flexible array.
++ *
++ * @p: Pointer to the structure.
++ * @member: Name of the array member.
++ * @count: Number of elements in the array.
++ *
++ * Calculates size of memory needed for structure @p followed by an
++ * array of @count number of @member elements.
++ *
++ * Return: number of bytes needed or SIZE_MAX on overflow.
++ */
++#define struct_size(p, member, count)					\
++	size_add(sizeof(*(p)), flex_array_size(p, member, count))
+ 
+ #endif /* __LINUX_OVERFLOW_H */
+diff --git a/include/linux/padata.h b/include/linux/padata.h
+index a433f13fc4bf7..495b16b6b4d72 100644
+--- a/include/linux/padata.h
++++ b/include/linux/padata.h
+@@ -12,6 +12,7 @@
+ #ifndef PADATA_H
+ #define PADATA_H
+ 
++#include <linux/refcount.h>
+ #include <linux/compiler_types.h>
+ #include <linux/workqueue.h>
+ #include <linux/spinlock.h>
+@@ -96,7 +97,7 @@ struct parallel_data {
+ 	struct padata_shell		*ps;
+ 	struct padata_list		__percpu *reorder_list;
+ 	struct padata_serial_queue	__percpu *squeue;
+-	atomic_t			refcnt;
++	refcount_t			refcnt;
+ 	unsigned int			seq_nr;
+ 	unsigned int			processed;
+ 	int				cpu;
+diff --git a/include/net/flow.h b/include/net/flow.h
+index 236d0941c69f0..7ffa1fe1107cc 100644
+--- a/include/net/flow.h
++++ b/include/net/flow.h
+@@ -39,8 +39,8 @@ struct flowi_common {
+ #define FLOWI_FLAG_SKIP_NH_OIF		0x04
+ 	__u32	flowic_secid;
+ 	kuid_t  flowic_uid;
+-	struct flowi_tunnel flowic_tun_key;
+ 	__u32		flowic_multipath_hash;
++	struct flowi_tunnel flowic_tun_key;
+ };
+ 
+ union flowi_uli {
+diff --git a/include/net/netfilter/nf_nat_redirect.h b/include/net/netfilter/nf_nat_redirect.h
+index 2418653a66db1..279380de904c8 100644
+--- a/include/net/netfilter/nf_nat_redirect.h
++++ b/include/net/netfilter/nf_nat_redirect.h
+@@ -6,8 +6,7 @@
+ #include <uapi/linux/netfilter/nf_nat.h>
+ 
+ unsigned int
+-nf_nat_redirect_ipv4(struct sk_buff *skb,
+-		     const struct nf_nat_ipv4_multi_range_compat *mr,
++nf_nat_redirect_ipv4(struct sk_buff *skb, const struct nf_nat_range2 *range,
+ 		     unsigned int hooknum);
+ unsigned int
+ nf_nat_redirect_ipv6(struct sk_buff *skb, const struct nf_nat_range2 *range,
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 772e593910287..5c03dc6d0f792 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -776,7 +776,7 @@ static inline u32 tcp_time_stamp(const struct tcp_sock *tp)
+ }
+ 
+ /* Convert a nsec timestamp into TCP TSval timestamp (ms based currently) */
+-static inline u32 tcp_ns_to_ts(u64 ns)
++static inline u64 tcp_ns_to_ts(u64 ns)
+ {
+ 	return div_u64(ns, NSEC_PER_SEC / TCP_TS_HZ);
+ }
+diff --git a/kernel/futex/core.c b/kernel/futex/core.c
+index 8dd0bc50ac36d..cde0ca876b935 100644
+--- a/kernel/futex/core.c
++++ b/kernel/futex/core.c
+@@ -514,7 +514,17 @@ static int get_futex_key(u32 __user *uaddr, bool fshared, union futex_key *key,
+ 	 *        but access_ok() should be faster than find_vma()
+ 	 */
+ 	if (!fshared) {
+-		key->private.mm = mm;
++		/*
++		 * On no-MMU, shared futexes are treated as private, therefore
++		 * we must not include the current process in the key. Since
++		 * there is only one address space, the address is a unique key
++		 * on its own.
++		 */
++		if (IS_ENABLED(CONFIG_MMU))
++			key->private.mm = mm;
++		else
++			key->private.mm = NULL;
++
+ 		key->private.address = address;
+ 		return 0;
+ 	}
+diff --git a/kernel/irq/matrix.c b/kernel/irq/matrix.c
+index 8e586858bcf41..d25edbb87119f 100644
+--- a/kernel/irq/matrix.c
++++ b/kernel/irq/matrix.c
+@@ -466,16 +466,16 @@ unsigned int irq_matrix_reserved(struct irq_matrix *m)
+ }
+ 
+ /**
+- * irq_matrix_allocated - Get the number of allocated irqs on the local cpu
++ * irq_matrix_allocated - Get the number of allocated non-managed irqs on the local CPU
+  * @m:		Pointer to the matrix to search
+  *
+- * This returns number of allocated irqs
++ * This returns number of allocated non-managed interrupts.
+  */
+ unsigned int irq_matrix_allocated(struct irq_matrix *m)
+ {
+ 	struct cpumap *cm = this_cpu_ptr(m->maps);
+ 
+-	return cm->allocated;
++	return cm->allocated - cm->managed_allocated;
+ }
+ 
+ #ifdef CONFIG_GENERIC_IRQ_DEBUGFS
+diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
+index e8bdce6fdd647..f5faf935c2d8f 100644
+--- a/kernel/livepatch/core.c
++++ b/kernel/livepatch/core.c
+@@ -245,7 +245,7 @@ static int klp_resolve_symbols(Elf_Shdr *sechdrs, const char *strtab,
+ 		 * symbols are exported and normal relas can be used instead.
+ 		 */
+ 		if (!sec_vmlinux && sym_vmlinux) {
+-			pr_err("invalid access to vmlinux symbol '%s' from module-specific livepatch relocation section",
++			pr_err("invalid access to vmlinux symbol '%s' from module-specific livepatch relocation section\n",
+ 			       sym_name);
+ 			return -EINVAL;
+ 		}
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 11ca3ebd8b123..7d500219f96bd 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -211,7 +211,7 @@ int padata_do_parallel(struct padata_shell *ps,
+ 	if ((pinst->flags & PADATA_RESET))
+ 		goto out;
+ 
+-	atomic_inc(&pd->refcnt);
++	refcount_inc(&pd->refcnt);
+ 	padata->pd = pd;
+ 	padata->cb_cpu = *cb_cpu;
+ 
+@@ -385,7 +385,7 @@ static void padata_serial_worker(struct work_struct *serial_work)
+ 	}
+ 	local_bh_enable();
+ 
+-	if (atomic_sub_and_test(cnt, &pd->refcnt))
++	if (refcount_sub_and_test(cnt, &pd->refcnt))
+ 		padata_free_pd(pd);
+ }
+ 
+@@ -598,7 +598,7 @@ static struct parallel_data *padata_alloc_pd(struct padata_shell *ps)
+ 	padata_init_reorder_list(pd);
+ 	padata_init_squeues(pd);
+ 	pd->seq_nr = -1;
+-	atomic_set(&pd->refcnt, 1);
++	refcount_set(&pd->refcnt, 1);
+ 	spin_lock_init(&pd->lock);
+ 	pd->cpu = cpumask_first(pd->cpumask.pcpu);
+ 	INIT_WORK(&pd->reorder_work, invoke_padata_reorder);
+@@ -672,7 +672,7 @@ static int padata_replace(struct padata_instance *pinst)
+ 	synchronize_rcu();
+ 
+ 	list_for_each_entry_continue_reverse(ps, &pinst->pslist, list)
+-		if (atomic_dec_and_test(&ps->opd->refcnt))
++		if (refcount_dec_and_test(&ps->opd->refcnt))
+ 			padata_free_pd(ps->opd);
+ 
+ 	pinst->flags &= ~PADATA_RESET;
+@@ -1107,12 +1107,16 @@ EXPORT_SYMBOL(padata_alloc_shell);
+  */
+ void padata_free_shell(struct padata_shell *ps)
+ {
++	struct parallel_data *pd;
++
+ 	if (!ps)
+ 		return;
+ 
+ 	mutex_lock(&ps->pinst->lock);
+ 	list_del(&ps->list);
+-	padata_free_pd(rcu_dereference_protected(ps->pd, 1));
++	pd = rcu_dereference_protected(ps->pd, 1);
++	if (refcount_dec_and_test(&pd->refcnt))
++		padata_free_pd(pd);
+ 	mutex_unlock(&ps->pinst->lock);
+ 
+ 	kfree(ps);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index d53f57ac76094..73a89fbd81be8 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3927,22 +3927,6 @@ static inline unsigned long task_util_est(struct task_struct *p)
+ 	return max(task_util(p), _task_util_est(p));
+ }
+ 
+-#ifdef CONFIG_UCLAMP_TASK
+-static inline unsigned long uclamp_task_util(struct task_struct *p,
+-					     unsigned long uclamp_min,
+-					     unsigned long uclamp_max)
+-{
+-	return clamp(task_util_est(p), uclamp_min, uclamp_max);
+-}
+-#else
+-static inline unsigned long uclamp_task_util(struct task_struct *p,
+-					     unsigned long uclamp_min,
+-					     unsigned long uclamp_max)
+-{
+-	return task_util_est(p);
+-}
+-#endif
+-
+ static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
+ 				    struct task_struct *p)
+ {
+@@ -6842,7 +6826,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
+ 		goto fail;
+ 
+ 	sync_entity_load_avg(&p->se);
+-	if (!uclamp_task_util(p, p_util_min, p_util_max))
++	if (!task_util_est(p) && p_util_min == 0)
+ 		goto unlock;
+ 
+ 	for (; pd; pd = pd->next) {
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 37e1ec1d3ee54..7183572898998 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -949,9 +949,9 @@ EXPORT_SYMBOL_GPL(kprobe_event_cmd_init);
+ /**
+  * __kprobe_event_gen_cmd_start - Generate a kprobe event command from arg list
+  * @cmd: A pointer to the dynevent_cmd struct representing the new event
++ * @kretprobe: Is this a return probe?
+  * @name: The name of the kprobe event
+  * @loc: The location of the kprobe event
+- * @kretprobe: Is this a return probe?
+  * @...: Variable number of arg (pairs), one pair for each field
+  *
+  * NOTE: Users normally won't want to call this function directly, but
+diff --git a/lib/test_overflow.c b/lib/test_overflow.c
+index 7a4b6f6c5473c..7a5a5738d2d21 100644
+--- a/lib/test_overflow.c
++++ b/lib/test_overflow.c
+@@ -588,12 +588,110 @@ static int __init test_overflow_allocation(void)
+ 	return err;
+ }
+ 
++struct __test_flex_array {
++	unsigned long flags;
++	size_t count;
++	unsigned long data[];
++};
++
++static int __init test_overflow_size_helpers(void)
++{
++	struct __test_flex_array *obj;
++	int count = 0;
++	int err = 0;
++	int var;
++
++#define check_one_size_helper(expected, func, args...)	({	\
++	bool __failure = false;					\
++	size_t _r;						\
++								\
++	_r = func(args);					\
++	if (_r != (expected)) {					\
++		pr_warn("expected " #func "(" #args ") "	\
++			"to return %zu but got %zu instead\n",	\
++			(size_t)(expected), _r);		\
++		__failure = true;				\
++	}							\
++	count++;						\
++	__failure;						\
++})
++
++	var = 4;
++	err |= check_one_size_helper(20,       size_mul, var++, 5);
++	err |= check_one_size_helper(20,       size_mul, 4, var++);
++	err |= check_one_size_helper(0,	       size_mul, 0, 3);
++	err |= check_one_size_helper(0,	       size_mul, 3, 0);
++	err |= check_one_size_helper(6,	       size_mul, 2, 3);
++	err |= check_one_size_helper(SIZE_MAX, size_mul, SIZE_MAX,  1);
++	err |= check_one_size_helper(SIZE_MAX, size_mul, SIZE_MAX,  3);
++	err |= check_one_size_helper(SIZE_MAX, size_mul, SIZE_MAX, -3);
++
++	var = 4;
++	err |= check_one_size_helper(9,        size_add, var++, 5);
++	err |= check_one_size_helper(9,        size_add, 4, var++);
++	err |= check_one_size_helper(9,	       size_add, 9, 0);
++	err |= check_one_size_helper(9,	       size_add, 0, 9);
++	err |= check_one_size_helper(5,	       size_add, 2, 3);
++	err |= check_one_size_helper(SIZE_MAX, size_add, SIZE_MAX,  1);
++	err |= check_one_size_helper(SIZE_MAX, size_add, SIZE_MAX,  3);
++	err |= check_one_size_helper(SIZE_MAX, size_add, SIZE_MAX, -3);
++
++	var = 4;
++	err |= check_one_size_helper(1,        size_sub, var--, 3);
++	err |= check_one_size_helper(1,        size_sub, 4, var--);
++	err |= check_one_size_helper(1,        size_sub, 3, 2);
++	err |= check_one_size_helper(9,	       size_sub, 9, 0);
++	err |= check_one_size_helper(SIZE_MAX, size_sub, 9, -3);
++	err |= check_one_size_helper(SIZE_MAX, size_sub, 0, 9);
++	err |= check_one_size_helper(SIZE_MAX, size_sub, 2, 3);
++	err |= check_one_size_helper(SIZE_MAX, size_sub, SIZE_MAX,  0);
++	err |= check_one_size_helper(SIZE_MAX, size_sub, SIZE_MAX, 10);
++	err |= check_one_size_helper(SIZE_MAX, size_sub, 0,  SIZE_MAX);
++	err |= check_one_size_helper(SIZE_MAX, size_sub, 14, SIZE_MAX);
++	err |= check_one_size_helper(SIZE_MAX - 2, size_sub, SIZE_MAX - 1,  1);
++	err |= check_one_size_helper(SIZE_MAX - 4, size_sub, SIZE_MAX - 1,  3);
++	err |= check_one_size_helper(1,		size_sub, SIZE_MAX - 1, -3);
++
++	var = 4;
++	err |= check_one_size_helper(4 * sizeof(*obj->data),
++				     flex_array_size, obj, data, var++);
++	err |= check_one_size_helper(5 * sizeof(*obj->data),
++				     flex_array_size, obj, data, var++);
++	err |= check_one_size_helper(0, flex_array_size, obj, data, 0);
++	err |= check_one_size_helper(sizeof(*obj->data),
++				     flex_array_size, obj, data, 1);
++	err |= check_one_size_helper(7 * sizeof(*obj->data),
++				     flex_array_size, obj, data, 7);
++	err |= check_one_size_helper(SIZE_MAX,
++				     flex_array_size, obj, data, -1);
++	err |= check_one_size_helper(SIZE_MAX,
++				     flex_array_size, obj, data, SIZE_MAX - 4);
++
++	var = 4;
++	err |= check_one_size_helper(sizeof(*obj) + (4 * sizeof(*obj->data)),
++				     struct_size, obj, data, var++);
++	err |= check_one_size_helper(sizeof(*obj) + (5 * sizeof(*obj->data)),
++				     struct_size, obj, data, var++);
++	err |= check_one_size_helper(sizeof(*obj), struct_size, obj, data, 0);
++	err |= check_one_size_helper(sizeof(*obj) + sizeof(*obj->data),
++				     struct_size, obj, data, 1);
++	err |= check_one_size_helper(SIZE_MAX,
++				     struct_size, obj, data, -3);
++	err |= check_one_size_helper(SIZE_MAX,
++				     struct_size, obj, data, SIZE_MAX - 3);
++
++	pr_info("%d overflow size helper tests finished\n", count);
++
++	return err;
++}
++
+ static int __init test_module_init(void)
+ {
+ 	int err = 0;
+ 
+ 	err |= test_overflow_calculation();
+ 	err |= test_overflow_shift();
++	err |= test_overflow_size_helpers();
+ 	err |= test_overflow_allocation();
+ 
+ 	if (err) {
+diff --git a/mm/readahead.c b/mm/readahead.c
+index c5b0457415bef..d30bcf4bc63be 100644
+--- a/mm/readahead.c
++++ b/mm/readahead.c
+@@ -625,7 +625,8 @@ ssize_t ksys_readahead(int fd, loff_t offset, size_t count)
+ 	 */
+ 	ret = -EINVAL;
+ 	if (!f.file->f_mapping || !f.file->f_mapping->a_ops ||
+-	    !S_ISREG(file_inode(f.file)->i_mode))
++	    (!S_ISREG(file_inode(f.file)->i_mode) &&
++	    !S_ISBLK(file_inode(f.file)->i_mode)))
+ 		goto out;
+ 
+ 	ret = vfs_fadvise(f.file, offset, count, POSIX_FADV_WILLNEED);
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index f2a0a4e6dd748..2c7c1bdd39e14 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -613,9 +613,6 @@ int dccp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
+ 	if (dccp_parse_options(sk, dreq, skb))
+ 		goto drop_and_free;
+ 
+-	if (security_inet_conn_request(sk, skb, req))
+-		goto drop_and_free;
+-
+ 	ireq = inet_rsk(req);
+ 	sk_rcv_saddr_set(req_to_sk(req), ip_hdr(skb)->daddr);
+ 	sk_daddr_set(req_to_sk(req), ip_hdr(skb)->saddr);
+@@ -623,6 +620,9 @@ int dccp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
+ 	ireq->ireq_family = AF_INET;
+ 	ireq->ir_iif = sk->sk_bound_dev_if;
+ 
++	if (security_inet_conn_request(sk, skb, req))
++		goto drop_and_free;
++
+ 	/*
+ 	 * Step 3: Process LISTEN state
+ 	 *
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 6d6bbd43a1419..991ca2dc2c029 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -349,15 +349,15 @@ static int dccp_v6_conn_request(struct sock *sk, struct sk_buff *skb)
+ 	if (dccp_parse_options(sk, dreq, skb))
+ 		goto drop_and_free;
+ 
+-	if (security_inet_conn_request(sk, skb, req))
+-		goto drop_and_free;
+-
+ 	ireq = inet_rsk(req);
+ 	ireq->ir_v6_rmt_addr = ipv6_hdr(skb)->saddr;
+ 	ireq->ir_v6_loc_addr = ipv6_hdr(skb)->daddr;
+ 	ireq->ireq_family = AF_INET6;
+ 	ireq->ir_mark = inet_request_mark(sk, skb);
+ 
++	if (security_inet_conn_request(sk, skb, req))
++		goto drop_and_free;
++
+ 	if (ipv6_opt_accepted(sk, skb, IP6CB(skb)) ||
+ 	    np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo ||
+ 	    np->rxopt.bits.rxhlim || np->rxopt.bits.rxohlim) {
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index 2a02cb2edec2f..0c115d8ded03c 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -296,9 +296,7 @@ struct sk_buff *prp_create_tagged_frame(struct hsr_frame_info *frame,
+ 	skb = skb_copy_expand(frame->skb_std, 0,
+ 			      skb_tailroom(frame->skb_std) + HSR_HLEN,
+ 			      GFP_ATOMIC);
+-	prp_fill_rct(skb, frame, port);
+-
+-	return skb;
++	return prp_fill_rct(skb, frame, port);
+ }
+ 
+ static void hsr_deliver_master(struct sk_buff *skb, struct net_device *dev,
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index 542b66783493b..cc860f2dcf658 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -43,7 +43,6 @@ static siphash_key_t syncookie_secret[2] __read_mostly;
+  * requested/supported by the syn/synack exchange.
+  */
+ #define TSBITS	6
+-#define TSMASK	(((__u32)1 << TSBITS) - 1)
+ 
+ static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
+ 		       u32 count, int c)
+@@ -64,27 +63,22 @@ static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
+  */
+ u64 cookie_init_timestamp(struct request_sock *req, u64 now)
+ {
+-	struct inet_request_sock *ireq;
+-	u32 ts, ts_now = tcp_ns_to_ts(now);
++	const struct inet_request_sock *ireq = inet_rsk(req);
++	u64 ts, ts_now = tcp_ns_to_ts(now);
+ 	u32 options = 0;
+ 
+-	ireq = inet_rsk(req);
+-
+ 	options = ireq->wscale_ok ? ireq->snd_wscale : TS_OPT_WSCALE_MASK;
+ 	if (ireq->sack_ok)
+ 		options |= TS_OPT_SACK;
+ 	if (ireq->ecn_ok)
+ 		options |= TS_OPT_ECN;
+ 
+-	ts = ts_now & ~TSMASK;
++	ts = (ts_now >> TSBITS) << TSBITS;
+ 	ts |= options;
+-	if (ts > ts_now) {
+-		ts >>= TSBITS;
+-		ts--;
+-		ts <<= TSBITS;
+-		ts |= options;
+-	}
+-	return (u64)ts * (NSEC_PER_SEC / TCP_TS_HZ);
++	if (ts > ts_now)
++		ts -= (1UL << TSBITS);
++
++	return ts * (NSEC_PER_SEC / TCP_TS_HZ);
+ }
+ 
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 0c935904ced82..a8948c76d19b6 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -6349,22 +6349,23 @@ reset_and_undo:
+ 
+ static void tcp_rcv_synrecv_state_fastopen(struct sock *sk)
+ {
++	struct tcp_sock *tp = tcp_sk(sk);
+ 	struct request_sock *req;
+ 
+ 	/* If we are still handling the SYNACK RTO, see if timestamp ECR allows
+ 	 * undo. If peer SACKs triggered fast recovery, we can't undo here.
+ 	 */
+-	if (inet_csk(sk)->icsk_ca_state == TCP_CA_Loss)
+-		tcp_try_undo_loss(sk, false);
++	if (inet_csk(sk)->icsk_ca_state == TCP_CA_Loss && !tp->packets_out)
++		tcp_try_undo_recovery(sk);
+ 
+ 	/* Reset rtx states to prevent spurious retransmits_timed_out() */
+-	tcp_sk(sk)->retrans_stamp = 0;
++	tp->retrans_stamp = 0;
+ 	inet_csk(sk)->icsk_retransmits = 0;
+ 
+ 	/* Once we leave TCP_SYN_RECV or TCP_FIN_WAIT_1,
+ 	 * we no longer need req so release it.
+ 	 */
+-	req = rcu_dereference_protected(tcp_sk(sk)->fastopen_rsk,
++	req = rcu_dereference_protected(tp->fastopen_rsk,
+ 					lockdep_sock_is_held(sk));
+ 	reqsk_fastopen_remove(sk, req, false);
+ 
+diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
+index a707fa1dbcafd..f823a15b973c4 100644
+--- a/net/ipv4/tcp_metrics.c
++++ b/net/ipv4/tcp_metrics.c
+@@ -470,11 +470,15 @@ void tcp_init_metrics(struct sock *sk)
+ 	u32 val, crtt = 0; /* cached RTT scaled by 8 */
+ 
+ 	sk_dst_confirm(sk);
++	/* ssthresh may have been reduced unnecessarily during.
++	 * 3WHS. Restore it back to its initial default.
++	 */
++	tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
+ 	if (!dst)
+ 		goto reset;
+ 
+ 	rcu_read_lock();
+-	tm = tcp_get_metrics(sk, dst, true);
++	tm = tcp_get_metrics(sk, dst, false);
+ 	if (!tm) {
+ 		rcu_read_unlock();
+ 		goto reset;
+@@ -489,11 +493,6 @@ void tcp_init_metrics(struct sock *sk)
+ 		tp->snd_ssthresh = val;
+ 		if (tp->snd_ssthresh > tp->snd_cwnd_clamp)
+ 			tp->snd_ssthresh = tp->snd_cwnd_clamp;
+-	} else {
+-		/* ssthresh may have been reduced unnecessarily during.
+-		 * 3WHS. Restore it back to its initial default.
+-		 */
+-		tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
+ 	}
+ 	val = tcp_metric_get(tm, TCP_METRIC_REORDERING);
+ 	if (val && tp->reordering != val)
+@@ -908,7 +907,7 @@ static void tcp_metrics_flush_all(struct net *net)
+ 			match = net ? net_eq(tm_net(tm), net) :
+ 				!refcount_read(&tm_net(tm)->count);
+ 			if (match) {
+-				*pp = tm->tcpm_next;
++				rcu_assign_pointer(*pp, tm->tcpm_next);
+ 				kfree_rcu(tm, rcu_head);
+ 			} else {
+ 				pp = &tm->tcpm_next;
+@@ -949,7 +948,7 @@ static int tcp_metrics_nl_cmd_del(struct sk_buff *skb, struct genl_info *info)
+ 		if (addr_same(&tm->tcpm_daddr, &daddr) &&
+ 		    (!src || addr_same(&tm->tcpm_saddr, &saddr)) &&
+ 		    net_eq(tm_net(tm), net)) {
+-			*pp = tm->tcpm_next;
++			rcu_assign_pointer(*pp, tm->tcpm_next);
+ 			kfree_rcu(tm, rcu_head);
+ 			found = true;
+ 		} else {
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 913966e7703fc..476f79f1563a8 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2645,10 +2645,12 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
+ 		case UDP_ENCAP_ESPINUDP_NON_IKE:
+ #if IS_ENABLED(CONFIG_IPV6)
+ 			if (sk->sk_family == AF_INET6)
+-				up->encap_rcv = ipv6_stub->xfrm6_udp_encap_rcv;
++				WRITE_ONCE(up->encap_rcv,
++					   ipv6_stub->xfrm6_udp_encap_rcv);
+ 			else
+ #endif
+-				up->encap_rcv = xfrm4_udp_encap_rcv;
++				WRITE_ONCE(up->encap_rcv,
++					   xfrm4_udp_encap_rcv);
+ #endif
+ 			fallthrough;
+ 		case UDP_ENCAP_L2TPINUDP:
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 58b5ab5fcdbf1..4126be15e0d9b 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -178,7 +178,13 @@ ip6_finish_output_gso_slowpath_drop(struct net *net, struct sock *sk,
+ 		int err;
+ 
+ 		skb_mark_not_on_list(segs);
+-		err = ip6_fragment(net, sk, segs, ip6_finish_output2);
++		/* Last GSO segment can be smaller than gso_size (and MTU).
++		 * Adding a fragment header would produce an "atomic fragment",
++		 * which is considered harmful (RFC-8021). Avoid that.
++		 */
++		err = segs->len > mtu ?
++			ip6_fragment(net, sk, segs, ip6_finish_output2) :
++			ip6_finish_output2(net, sk, segs);
+ 		if (err && ret == 0)
+ 			ret = err;
+ 	}
+diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
+index 12ae817aaf2ec..c10a68bb85de3 100644
+--- a/net/ipv6/syncookies.c
++++ b/net/ipv6/syncookies.c
+@@ -180,14 +180,15 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
+ 	treq = tcp_rsk(req);
+ 	treq->tfo_listener = false;
+ 
+-	if (security_inet_conn_request(sk, skb, req))
+-		goto out_free;
+-
+ 	req->mss = mss;
+ 	ireq->ir_rmt_port = th->source;
+ 	ireq->ir_num = ntohs(th->dest);
+ 	ireq->ir_v6_rmt_addr = ipv6_hdr(skb)->saddr;
+ 	ireq->ir_v6_loc_addr = ipv6_hdr(skb)->daddr;
++
++	if (security_inet_conn_request(sk, skb, req))
++		goto out_free;
++
+ 	if (ipv6_opt_accepted(sk, skb, &TCP_SKB_CB(skb)->header.h6) ||
+ 	    np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo ||
+ 	    np->rxopt.bits.rxhlim || np->rxopt.bits.rxohlim) {
+diff --git a/net/llc/llc_input.c b/net/llc/llc_input.c
+index 7cac441862e21..51bccfb00a9cd 100644
+--- a/net/llc/llc_input.c
++++ b/net/llc/llc_input.c
+@@ -127,8 +127,14 @@ static inline int llc_fixup_skb(struct sk_buff *skb)
+ 	skb->transport_header += llc_len;
+ 	skb_pull(skb, llc_len);
+ 	if (skb->protocol == htons(ETH_P_802_2)) {
+-		__be16 pdulen = eth_hdr(skb)->h_proto;
+-		s32 data_size = ntohs(pdulen) - llc_len;
++		__be16 pdulen;
++		s32 data_size;
++
++		if (skb->mac_len < ETH_HLEN)
++			return 0;
++
++		pdulen = eth_hdr(skb)->h_proto;
++		data_size = ntohs(pdulen) - llc_len;
+ 
+ 		if (data_size < 0 ||
+ 		    !pskb_may_pull(skb, data_size))
+diff --git a/net/llc/llc_s_ac.c b/net/llc/llc_s_ac.c
+index 9fa3342c7a829..df26557a02448 100644
+--- a/net/llc/llc_s_ac.c
++++ b/net/llc/llc_s_ac.c
+@@ -153,6 +153,9 @@ int llc_sap_action_send_test_r(struct llc_sap *sap, struct sk_buff *skb)
+ 	int rc = 1;
+ 	u32 data_size;
+ 
++	if (skb->mac_len < ETH_HLEN)
++		return 1;
++
+ 	llc_pdu_decode_sa(skb, mac_da);
+ 	llc_pdu_decode_da(skb, mac_sa);
+ 	llc_pdu_decode_ssap(skb, &dsap);
+diff --git a/net/llc/llc_station.c b/net/llc/llc_station.c
+index c29170e767a8c..64e2c67e16ba3 100644
+--- a/net/llc/llc_station.c
++++ b/net/llc/llc_station.c
+@@ -77,6 +77,9 @@ static int llc_station_ac_send_test_r(struct sk_buff *skb)
+ 	u32 data_size;
+ 	struct sk_buff *nskb;
+ 
++	if (skb->mac_len < ETH_HLEN)
++		goto out;
++
+ 	/* The test request command is type U (llc_len = 3) */
+ 	data_size = ntohs(eth_hdr(skb)->h_proto) - 3;
+ 	nskb = llc_alloc_frame(NULL, skb->dev, LLC_PDU_TYPE_U, data_size);
+diff --git a/net/netfilter/nf_nat_redirect.c b/net/netfilter/nf_nat_redirect.c
+index f91579c821e9a..5b37487d9d11f 100644
+--- a/net/netfilter/nf_nat_redirect.c
++++ b/net/netfilter/nf_nat_redirect.c
+@@ -10,6 +10,7 @@
+ 
+ #include <linux/if.h>
+ #include <linux/inetdevice.h>
++#include <linux/in.h>
+ #include <linux/ip.h>
+ #include <linux/kernel.h>
+ #include <linux/netdevice.h>
+@@ -24,81 +25,104 @@
+ #include <net/netfilter/nf_nat.h>
+ #include <net/netfilter/nf_nat_redirect.h>
+ 
++static unsigned int
++nf_nat_redirect(struct sk_buff *skb, const struct nf_nat_range2 *range,
++		const union nf_inet_addr *newdst)
++{
++	struct nf_nat_range2 newrange;
++	enum ip_conntrack_info ctinfo;
++	struct nf_conn *ct;
++
++	ct = nf_ct_get(skb, &ctinfo);
++
++	memset(&newrange, 0, sizeof(newrange));
++
++	newrange.flags		= range->flags | NF_NAT_RANGE_MAP_IPS;
++	newrange.min_addr	= *newdst;
++	newrange.max_addr	= *newdst;
++	newrange.min_proto	= range->min_proto;
++	newrange.max_proto	= range->max_proto;
++
++	return nf_nat_setup_info(ct, &newrange, NF_NAT_MANIP_DST);
++}
++
+ unsigned int
+-nf_nat_redirect_ipv4(struct sk_buff *skb,
+-		     const struct nf_nat_ipv4_multi_range_compat *mr,
++nf_nat_redirect_ipv4(struct sk_buff *skb, const struct nf_nat_range2 *range,
+ 		     unsigned int hooknum)
+ {
+-	struct nf_conn *ct;
+-	enum ip_conntrack_info ctinfo;
+-	__be32 newdst;
+-	struct nf_nat_range2 newrange;
++	union nf_inet_addr newdst = {};
+ 
+ 	WARN_ON(hooknum != NF_INET_PRE_ROUTING &&
+ 		hooknum != NF_INET_LOCAL_OUT);
+ 
+-	ct = nf_ct_get(skb, &ctinfo);
+-	WARN_ON(!(ct && (ctinfo == IP_CT_NEW || ctinfo == IP_CT_RELATED)));
+-
+ 	/* Local packets: make them go to loopback */
+ 	if (hooknum == NF_INET_LOCAL_OUT) {
+-		newdst = htonl(0x7F000001);
++		newdst.ip = htonl(INADDR_LOOPBACK);
+ 	} else {
+ 		const struct in_device *indev;
+ 
+-		newdst = 0;
+-
+ 		indev = __in_dev_get_rcu(skb->dev);
+ 		if (indev) {
+ 			const struct in_ifaddr *ifa;
+ 
+ 			ifa = rcu_dereference(indev->ifa_list);
+ 			if (ifa)
+-				newdst = ifa->ifa_local;
++				newdst.ip = ifa->ifa_local;
+ 		}
+ 
+-		if (!newdst)
++		if (!newdst.ip)
+ 			return NF_DROP;
+ 	}
+ 
+-	/* Transfer from original range. */
+-	memset(&newrange.min_addr, 0, sizeof(newrange.min_addr));
+-	memset(&newrange.max_addr, 0, sizeof(newrange.max_addr));
+-	newrange.flags	     = mr->range[0].flags | NF_NAT_RANGE_MAP_IPS;
+-	newrange.min_addr.ip = newdst;
+-	newrange.max_addr.ip = newdst;
+-	newrange.min_proto   = mr->range[0].min;
+-	newrange.max_proto   = mr->range[0].max;
+-
+-	/* Hand modified range to generic setup. */
+-	return nf_nat_setup_info(ct, &newrange, NF_NAT_MANIP_DST);
++	return nf_nat_redirect(skb, range, &newdst);
+ }
+ EXPORT_SYMBOL_GPL(nf_nat_redirect_ipv4);
+ 
+ static const struct in6_addr loopback_addr = IN6ADDR_LOOPBACK_INIT;
+ 
++static bool nf_nat_redirect_ipv6_usable(const struct inet6_ifaddr *ifa, unsigned int scope)
++{
++	unsigned int ifa_addr_type = ipv6_addr_type(&ifa->addr);
++
++	if (ifa_addr_type & IPV6_ADDR_MAPPED)
++		return false;
++
++	if ((ifa->flags & IFA_F_TENTATIVE) && (!(ifa->flags & IFA_F_OPTIMISTIC)))
++		return false;
++
++	if (scope) {
++		unsigned int ifa_scope = ifa_addr_type & IPV6_ADDR_SCOPE_MASK;
++
++		if (!(scope & ifa_scope))
++			return false;
++	}
++
++	return true;
++}
++
+ unsigned int
+ nf_nat_redirect_ipv6(struct sk_buff *skb, const struct nf_nat_range2 *range,
+ 		     unsigned int hooknum)
+ {
+-	struct nf_nat_range2 newrange;
+-	struct in6_addr newdst;
+-	enum ip_conntrack_info ctinfo;
+-	struct nf_conn *ct;
++	union nf_inet_addr newdst = {};
+ 
+-	ct = nf_ct_get(skb, &ctinfo);
+ 	if (hooknum == NF_INET_LOCAL_OUT) {
+-		newdst = loopback_addr;
++		newdst.in6 = loopback_addr;
+ 	} else {
++		unsigned int scope = ipv6_addr_scope(&ipv6_hdr(skb)->daddr);
+ 		struct inet6_dev *idev;
+-		struct inet6_ifaddr *ifa;
+ 		bool addr = false;
+ 
+ 		idev = __in6_dev_get(skb->dev);
+ 		if (idev != NULL) {
++			const struct inet6_ifaddr *ifa;
++
+ 			read_lock_bh(&idev->lock);
+ 			list_for_each_entry(ifa, &idev->addr_list, if_list) {
+-				newdst = ifa->addr;
++				if (!nf_nat_redirect_ipv6_usable(ifa, scope))
++					continue;
++
++				newdst.in6 = ifa->addr;
+ 				addr = true;
+ 				break;
+ 			}
+@@ -109,12 +133,6 @@ nf_nat_redirect_ipv6(struct sk_buff *skb, const struct nf_nat_range2 *range,
+ 			return NF_DROP;
+ 	}
+ 
+-	newrange.flags		= range->flags | NF_NAT_RANGE_MAP_IPS;
+-	newrange.min_addr.in6	= newdst;
+-	newrange.max_addr.in6	= newdst;
+-	newrange.min_proto	= range->min_proto;
+-	newrange.max_proto	= range->max_proto;
+-
+-	return nf_nat_setup_info(ct, &newrange, NF_NAT_MANIP_DST);
++	return nf_nat_redirect(skb, range, &newdst);
+ }
+ EXPORT_SYMBOL_GPL(nf_nat_redirect_ipv6);
+diff --git a/net/netfilter/nft_redir.c b/net/netfilter/nft_redir.c
+index e64f531d66cfc..677ce42183691 100644
+--- a/net/netfilter/nft_redir.c
++++ b/net/netfilter/nft_redir.c
+@@ -64,6 +64,8 @@ static int nft_redir_init(const struct nft_ctx *ctx,
+ 		} else {
+ 			priv->sreg_proto_max = priv->sreg_proto_min;
+ 		}
++
++		priv->flags |= NF_NAT_RANGE_PROTO_SPECIFIED;
+ 	}
+ 
+ 	if (tb[NFTA_REDIR_FLAGS]) {
+@@ -98,25 +100,37 @@ nla_put_failure:
+ 	return -1;
+ }
+ 
+-static void nft_redir_ipv4_eval(const struct nft_expr *expr,
+-				struct nft_regs *regs,
+-				const struct nft_pktinfo *pkt)
++static void nft_redir_eval(const struct nft_expr *expr,
++			   struct nft_regs *regs,
++			   const struct nft_pktinfo *pkt)
+ {
+-	struct nft_redir *priv = nft_expr_priv(expr);
+-	struct nf_nat_ipv4_multi_range_compat mr;
++	const struct nft_redir *priv = nft_expr_priv(expr);
++	struct nf_nat_range2 range;
+ 
+-	memset(&mr, 0, sizeof(mr));
++	memset(&range, 0, sizeof(range));
++	range.flags = priv->flags;
+ 	if (priv->sreg_proto_min) {
+-		mr.range[0].min.all = (__force __be16)nft_reg_load16(
+-			&regs->data[priv->sreg_proto_min]);
+-		mr.range[0].max.all = (__force __be16)nft_reg_load16(
+-			&regs->data[priv->sreg_proto_max]);
+-		mr.range[0].flags |= NF_NAT_RANGE_PROTO_SPECIFIED;
++		range.min_proto.all = (__force __be16)
++			nft_reg_load16(&regs->data[priv->sreg_proto_min]);
++		range.max_proto.all = (__force __be16)
++			nft_reg_load16(&regs->data[priv->sreg_proto_max]);
+ 	}
+ 
+-	mr.range[0].flags |= priv->flags;
+-
+-	regs->verdict.code = nf_nat_redirect_ipv4(pkt->skb, &mr, nft_hook(pkt));
++	switch (nft_pf(pkt)) {
++	case NFPROTO_IPV4:
++		regs->verdict.code = nf_nat_redirect_ipv4(pkt->skb, &range,
++							  nft_hook(pkt));
++		break;
++#ifdef CONFIG_NF_TABLES_IPV6
++	case NFPROTO_IPV6:
++		regs->verdict.code = nf_nat_redirect_ipv6(pkt->skb, &range,
++							  nft_hook(pkt));
++		break;
++#endif
++	default:
++		WARN_ON_ONCE(1);
++		break;
++	}
+ }
+ 
+ static void
+@@ -129,7 +143,7 @@ static struct nft_expr_type nft_redir_ipv4_type;
+ static const struct nft_expr_ops nft_redir_ipv4_ops = {
+ 	.type		= &nft_redir_ipv4_type,
+ 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_redir)),
+-	.eval		= nft_redir_ipv4_eval,
++	.eval		= nft_redir_eval,
+ 	.init		= nft_redir_init,
+ 	.destroy	= nft_redir_ipv4_destroy,
+ 	.dump		= nft_redir_dump,
+@@ -146,28 +160,6 @@ static struct nft_expr_type nft_redir_ipv4_type __read_mostly = {
+ };
+ 
+ #ifdef CONFIG_NF_TABLES_IPV6
+-static void nft_redir_ipv6_eval(const struct nft_expr *expr,
+-				struct nft_regs *regs,
+-				const struct nft_pktinfo *pkt)
+-{
+-	struct nft_redir *priv = nft_expr_priv(expr);
+-	struct nf_nat_range2 range;
+-
+-	memset(&range, 0, sizeof(range));
+-	if (priv->sreg_proto_min) {
+-		range.min_proto.all = (__force __be16)nft_reg_load16(
+-			&regs->data[priv->sreg_proto_min]);
+-		range.max_proto.all = (__force __be16)nft_reg_load16(
+-			&regs->data[priv->sreg_proto_max]);
+-		range.flags |= NF_NAT_RANGE_PROTO_SPECIFIED;
+-	}
+-
+-	range.flags |= priv->flags;
+-
+-	regs->verdict.code =
+-		nf_nat_redirect_ipv6(pkt->skb, &range, nft_hook(pkt));
+-}
+-
+ static void
+ nft_redir_ipv6_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
+ {
+@@ -178,7 +170,7 @@ static struct nft_expr_type nft_redir_ipv6_type;
+ static const struct nft_expr_ops nft_redir_ipv6_ops = {
+ 	.type		= &nft_redir_ipv6_type,
+ 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_redir)),
+-	.eval		= nft_redir_ipv6_eval,
++	.eval		= nft_redir_eval,
+ 	.init		= nft_redir_init,
+ 	.destroy	= nft_redir_ipv6_destroy,
+ 	.dump		= nft_redir_dump,
+@@ -196,20 +188,6 @@ static struct nft_expr_type nft_redir_ipv6_type __read_mostly = {
+ #endif
+ 
+ #ifdef CONFIG_NF_TABLES_INET
+-static void nft_redir_inet_eval(const struct nft_expr *expr,
+-				struct nft_regs *regs,
+-				const struct nft_pktinfo *pkt)
+-{
+-	switch (nft_pf(pkt)) {
+-	case NFPROTO_IPV4:
+-		return nft_redir_ipv4_eval(expr, regs, pkt);
+-	case NFPROTO_IPV6:
+-		return nft_redir_ipv6_eval(expr, regs, pkt);
+-	}
+-
+-	WARN_ON_ONCE(1);
+-}
+-
+ static void
+ nft_redir_inet_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
+ {
+@@ -220,7 +198,7 @@ static struct nft_expr_type nft_redir_inet_type;
+ static const struct nft_expr_ops nft_redir_inet_ops = {
+ 	.type		= &nft_redir_inet_type,
+ 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_redir)),
+-	.eval		= nft_redir_inet_eval,
++	.eval		= nft_redir_eval,
+ 	.init		= nft_redir_init,
+ 	.destroy	= nft_redir_inet_destroy,
+ 	.dump		= nft_redir_dump,
+diff --git a/net/netfilter/xt_REDIRECT.c b/net/netfilter/xt_REDIRECT.c
+index 353ca7801251a..ff66b56a3f97d 100644
+--- a/net/netfilter/xt_REDIRECT.c
++++ b/net/netfilter/xt_REDIRECT.c
+@@ -46,7 +46,6 @@ static void redirect_tg_destroy(const struct xt_tgdtor_param *par)
+ 	nf_ct_netns_put(par->net, par->family);
+ }
+ 
+-/* FIXME: Take multiple ranges --RR */
+ static int redirect_tg4_check(const struct xt_tgchk_param *par)
+ {
+ 	const struct nf_nat_ipv4_multi_range_compat *mr = par->targinfo;
+@@ -65,7 +64,14 @@ static int redirect_tg4_check(const struct xt_tgchk_param *par)
+ static unsigned int
+ redirect_tg4(struct sk_buff *skb, const struct xt_action_param *par)
+ {
+-	return nf_nat_redirect_ipv4(skb, par->targinfo, xt_hooknum(par));
++	const struct nf_nat_ipv4_multi_range_compat *mr = par->targinfo;
++	struct nf_nat_range2 range = {
++		.flags       = mr->range[0].flags,
++		.min_proto   = mr->range[0].min,
++		.max_proto   = mr->range[0].max,
++	};
++
++	return nf_nat_redirect_ipv4(skb, &range, xt_hooknum(par));
+ }
+ 
+ static struct xt_target redirect_tg_reg[] __read_mostly = {
+diff --git a/net/netfilter/xt_recent.c b/net/netfilter/xt_recent.c
+index 0446307516cdf..39937ff245275 100644
+--- a/net/netfilter/xt_recent.c
++++ b/net/netfilter/xt_recent.c
+@@ -561,7 +561,7 @@ recent_mt_proc_write(struct file *file, const char __user *input,
+ {
+ 	struct recent_table *t = PDE_DATA(file_inode(file));
+ 	struct recent_entry *e;
+-	char buf[sizeof("+b335:1d35:1e55:dead:c0de:1715:5afe:c0de")];
++	char buf[sizeof("+b335:1d35:1e55:dead:c0de:1715:255.255.255.255")];
+ 	const char *c = buf;
+ 	union nf_inet_addr addr = {};
+ 	u_int16_t family;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 8ab84926816f6..9fc47292b68d8 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -143,7 +143,7 @@ static int __smc_release(struct smc_sock *smc)
+ 
+ 	if (!smc->use_fallback) {
+ 		rc = smc_close_active(smc);
+-		sock_set_flag(sk, SOCK_DEAD);
++		smc_sock_set_flag(sk, SOCK_DEAD);
+ 		sk->sk_shutdown |= SHUTDOWN_MASK;
+ 	} else {
+ 		if (sk->sk_state != SMC_CLOSED) {
+@@ -1169,7 +1169,7 @@ static int smc_clcsock_accept(struct smc_sock *lsmc, struct smc_sock **new_smc)
+ 		if (new_clcsock)
+ 			sock_release(new_clcsock);
+ 		new_sk->sk_state = SMC_CLOSED;
+-		sock_set_flag(new_sk, SOCK_DEAD);
++		smc_sock_set_flag(new_sk, SOCK_DEAD);
+ 		sock_put(new_sk); /* final */
+ 		*new_smc = NULL;
+ 		goto out;
+diff --git a/net/smc/smc.h b/net/smc/smc.h
+index e6919fe31617b..1eee40ec1d969 100644
+--- a/net/smc/smc.h
++++ b/net/smc/smc.h
+@@ -297,4 +297,9 @@ static inline bool using_ipsec(struct smc_sock *smc)
+ struct sock *smc_accept_dequeue(struct sock *parent, struct socket *new_sock);
+ void smc_close_non_accepted(struct sock *sk);
+ 
++static inline void smc_sock_set_flag(struct sock *sk, enum sock_flags flag)
++{
++	set_bit(flag, &sk->sk_flags);
++}
++
+ #endif	/* __SMC_H */
+diff --git a/net/smc/smc_cdc.c b/net/smc/smc_cdc.c
+index 9125d28d9ff5d..b92b2cee2079c 100644
+--- a/net/smc/smc_cdc.c
++++ b/net/smc/smc_cdc.c
+@@ -28,13 +28,15 @@ static void smc_cdc_tx_handler(struct smc_wr_tx_pend_priv *pnd_snd,
+ {
+ 	struct smc_cdc_tx_pend *cdcpend = (struct smc_cdc_tx_pend *)pnd_snd;
+ 	struct smc_connection *conn = cdcpend->conn;
++	struct smc_buf_desc *sndbuf_desc;
+ 	struct smc_sock *smc;
+ 	int diff;
+ 
++	sndbuf_desc = conn->sndbuf_desc;
+ 	smc = container_of(conn, struct smc_sock, conn);
+ 	bh_lock_sock(&smc->sk);
+-	if (!wc_status) {
+-		diff = smc_curs_diff(cdcpend->conn->sndbuf_desc->len,
++	if (!wc_status && sndbuf_desc) {
++		diff = smc_curs_diff(sndbuf_desc->len,
+ 				     &cdcpend->conn->tx_curs_fin,
+ 				     &cdcpend->cursor);
+ 		/* sndbuf_space is decreased in smc_sendmsg */
+@@ -104,9 +106,6 @@ int smc_cdc_msg_send(struct smc_connection *conn,
+ 	union smc_host_cursor cfed;
+ 	int rc;
+ 
+-	if (unlikely(!READ_ONCE(conn->sndbuf_desc)))
+-		return -ENOBUFS;
+-
+ 	smc_cdc_add_pending_send(conn, pend);
+ 
+ 	conn->tx_cdc_seq++;
+@@ -370,7 +369,7 @@ static void smc_cdc_msg_recv_action(struct smc_sock *smc,
+ 		smc->sk.sk_shutdown |= RCV_SHUTDOWN;
+ 		if (smc->clcsock && smc->clcsock->sk)
+ 			smc->clcsock->sk->sk_shutdown |= RCV_SHUTDOWN;
+-		sock_set_flag(&smc->sk, SOCK_DONE);
++		smc_sock_set_flag(&smc->sk, SOCK_DONE);
+ 		sock_hold(&smc->sk); /* sock_put in close_work */
+ 		if (!queue_work(smc_close_wq, &conn->close_work))
+ 			sock_put(&smc->sk);
+diff --git a/net/smc/smc_close.c b/net/smc/smc_close.c
+index 149a59ecd299f..bcd3ea894555d 100644
+--- a/net/smc/smc_close.c
++++ b/net/smc/smc_close.c
+@@ -113,7 +113,8 @@ static void smc_close_cancel_work(struct smc_sock *smc)
+ 	struct sock *sk = &smc->sk;
+ 
+ 	release_sock(sk);
+-	cancel_work_sync(&smc->conn.close_work);
++	if (cancel_work_sync(&smc->conn.close_work))
++		sock_put(sk);
+ 	cancel_delayed_work_sync(&smc->conn.tx_work);
+ 	lock_sock(sk);
+ }
+@@ -170,7 +171,7 @@ void smc_close_active_abort(struct smc_sock *smc)
+ 		break;
+ 	}
+ 
+-	sock_set_flag(sk, SOCK_DEAD);
++	smc_sock_set_flag(sk, SOCK_DEAD);
+ 	sk->sk_state_change(sk);
+ 
+ 	if (release_clcsock) {
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index dbb1bc722ba9b..5f849c7300283 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -1410,7 +1410,7 @@ u16 tipc_get_gap_ack_blks(struct tipc_gap_ack_blks **ga, struct tipc_link *l,
+ 		p = (struct tipc_gap_ack_blks *)msg_data(hdr);
+ 		sz = ntohs(p->len);
+ 		/* Sanity check */
+-		if (sz == struct_size(p, gacks, p->ugack_cnt + p->bgack_cnt)) {
++		if (sz == struct_size(p, gacks, size_add(p->ugack_cnt, p->bgack_cnt))) {
+ 			/* Good, check if the desired type exists */
+ 			if ((uc && p->ugack_cnt) || (!uc && p->bgack_cnt))
+ 				goto ok;
+@@ -1497,7 +1497,7 @@ static u16 tipc_build_gap_ack_blks(struct tipc_link *l, struct tipc_msg *hdr)
+ 			__tipc_build_gap_ack_blks(ga, l, ga->bgack_cnt) : 0;
+ 
+ 	/* Total len */
+-	len = struct_size(ga, gacks, ga->bgack_cnt + ga->ugack_cnt);
++	len = struct_size(ga, gacks, size_add(ga->bgack_cnt, ga->ugack_cnt));
+ 	ga->len = htons(len);
+ 	return len;
+ }
+diff --git a/net/tipc/netlink.c b/net/tipc/netlink.c
+index c447cb5f879e7..f10ed873858f8 100644
+--- a/net/tipc/netlink.c
++++ b/net/tipc/netlink.c
+@@ -88,7 +88,7 @@ const struct nla_policy tipc_nl_net_policy[TIPC_NLA_NET_MAX + 1] = {
+ 
+ const struct nla_policy tipc_nl_link_policy[TIPC_NLA_LINK_MAX + 1] = {
+ 	[TIPC_NLA_LINK_UNSPEC]		= { .type = NLA_UNSPEC },
+-	[TIPC_NLA_LINK_NAME]		= { .type = NLA_STRING,
++	[TIPC_NLA_LINK_NAME]		= { .type = NLA_NUL_STRING,
+ 					    .len = TIPC_MAX_LINK_NAME },
+ 	[TIPC_NLA_LINK_MTU]		= { .type = NLA_U32 },
+ 	[TIPC_NLA_LINK_BROADCAST]	= { .type = NLA_FLAG },
+@@ -125,7 +125,7 @@ const struct nla_policy tipc_nl_prop_policy[TIPC_NLA_PROP_MAX + 1] = {
+ 
+ const struct nla_policy tipc_nl_bearer_policy[TIPC_NLA_BEARER_MAX + 1]	= {
+ 	[TIPC_NLA_BEARER_UNSPEC]	= { .type = NLA_UNSPEC },
+-	[TIPC_NLA_BEARER_NAME]		= { .type = NLA_STRING,
++	[TIPC_NLA_BEARER_NAME]		= { .type = NLA_NUL_STRING,
+ 					    .len = TIPC_MAX_BEARER_NAME },
+ 	[TIPC_NLA_BEARER_PROP]		= { .type = NLA_NESTED },
+ 	[TIPC_NLA_BEARER_DOMAIN]	= { .type = NLA_U32 }
+diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
+index da4df53ee6955..7154df094f40b 100644
+--- a/scripts/mod/file2alias.c
++++ b/scripts/mod/file2alias.c
+@@ -1326,13 +1326,13 @@ static int do_typec_entry(const char *filename, void *symval, char *alias)
+ /* Looks like: tee:uuid */
+ static int do_tee_entry(const char *filename, void *symval, char *alias)
+ {
+-	DEF_FIELD(symval, tee_client_device_id, uuid);
++	DEF_FIELD_ADDR(symval, tee_client_device_id, uuid);
+ 
+ 	sprintf(alias, "tee:%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-%02x%02x%02x%02x%02x%02x",
+-		uuid.b[0], uuid.b[1], uuid.b[2], uuid.b[3], uuid.b[4],
+-		uuid.b[5], uuid.b[6], uuid.b[7], uuid.b[8], uuid.b[9],
+-		uuid.b[10], uuid.b[11], uuid.b[12], uuid.b[13], uuid.b[14],
+-		uuid.b[15]);
++		uuid->b[0], uuid->b[1], uuid->b[2], uuid->b[3], uuid->b[4],
++		uuid->b[5], uuid->b[6], uuid->b[7], uuid->b[8], uuid->b[9],
++		uuid->b[10], uuid->b[11], uuid->b[12], uuid->b[13], uuid->b[14],
++		uuid->b[15]);
+ 
+ 	add_wildcard(alias);
+ 	return 1;
+diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c
+index 3cf1f40e68924..acd500c3b9d04 100644
+--- a/sound/soc/fsl/fsl_easrc.c
++++ b/sound/soc/fsl/fsl_easrc.c
+@@ -1971,17 +1971,21 @@ static int fsl_easrc_probe(struct platform_device *pdev)
+ 					      &fsl_easrc_dai, 1);
+ 	if (ret) {
+ 		dev_err(dev, "failed to register ASoC DAI\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	ret = devm_snd_soc_register_component(dev, &fsl_asrc_component,
+ 					      NULL, 0);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to register ASoC platform\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	return 0;
++
++err_pm_disable:
++	pm_runtime_disable(&pdev->dev);
++	return ret;
+ }
+ 
+ static int fsl_easrc_remove(struct platform_device *pdev)
+diff --git a/sound/soc/fsl/mpc5200_dma.c b/sound/soc/fsl/mpc5200_dma.c
+index 2319848821762..2e82731072c98 100644
+--- a/sound/soc/fsl/mpc5200_dma.c
++++ b/sound/soc/fsl/mpc5200_dma.c
+@@ -107,6 +107,9 @@ static int psc_dma_hw_free(struct snd_soc_component *component,
+ 
+ /**
+  * psc_dma_trigger: start and stop the DMA transfer.
++ * @component: triggered component
++ * @substream: triggered substream
++ * @cmd: triggered command
+  *
+  * This function is called by ALSA to start, stop, pause, and resume the DMA
+  * transfer of data.
+diff --git a/sound/soc/intel/skylake/skl-sst-utils.c b/sound/soc/intel/skylake/skl-sst-utils.c
+index 57ea815d3f041..b776c58dcf47a 100644
+--- a/sound/soc/intel/skylake/skl-sst-utils.c
++++ b/sound/soc/intel/skylake/skl-sst-utils.c
+@@ -299,6 +299,7 @@ int snd_skl_parse_uuids(struct sst_dsp *ctx, const struct firmware *fw,
+ 		module->instance_id = devm_kzalloc(ctx->dev, size, GFP_KERNEL);
+ 		if (!module->instance_id) {
+ 			ret = -ENOMEM;
++			kfree(module);
+ 			goto free_uuid_list;
+ 		}
+ 
+diff --git a/sound/soc/ti/ams-delta.c b/sound/soc/ti/ams-delta.c
+index 57feb473a579c..fbd75732a68b3 100644
+--- a/sound/soc/ti/ams-delta.c
++++ b/sound/soc/ti/ams-delta.c
+@@ -303,7 +303,7 @@ static int cx81801_open(struct tty_struct *tty)
+ static void cx81801_close(struct tty_struct *tty)
+ {
+ 	struct snd_soc_component *component = tty->disc_data;
+-	struct snd_soc_dapm_context *dapm = &component->card->dapm;
++	struct snd_soc_dapm_context *dapm;
+ 
+ 	del_timer_sync(&cx81801_timer);
+ 
+@@ -315,6 +315,8 @@ static void cx81801_close(struct tty_struct *tty)
+ 
+ 	v253_ops.close(tty);
+ 
++	dapm = &component->card->dapm;
++
+ 	/* Revert back to default audio input/output constellation */
+ 	snd_soc_dapm_mutex_lock(dapm);
+ 
+diff --git a/tools/iio/iio_generic_buffer.c b/tools/iio/iio_generic_buffer.c
+index 34d63bcebcd28..2fd10eab75b53 100644
+--- a/tools/iio/iio_generic_buffer.c
++++ b/tools/iio/iio_generic_buffer.c
+@@ -49,12 +49,15 @@ enum autochan {
+  * Has the side effect of filling the channels[i].location values used
+  * in processing the buffer output.
+  **/
+-int size_from_channelarray(struct iio_channel_info *channels, int num_channels)
++static unsigned int size_from_channelarray(struct iio_channel_info *channels, int num_channels)
+ {
+-	int bytes = 0;
+-	int i = 0;
++	unsigned int bytes = 0;
++	int i = 0, max = 0;
++	unsigned int misalignment;
+ 
+ 	while (i < num_channels) {
++		if (channels[i].bytes > max)
++			max = channels[i].bytes;
+ 		if (bytes % channels[i].bytes == 0)
+ 			channels[i].location = bytes;
+ 		else
+@@ -64,11 +67,19 @@ int size_from_channelarray(struct iio_channel_info *channels, int num_channels)
+ 		bytes = channels[i].location + channels[i].bytes;
+ 		i++;
+ 	}
++	/*
++	 * We want the data in next sample to also be properly aligned so
++	 * we'll add padding at the end if needed. Adding padding only
++	 * works for channel data which size is 2^n bytes.
++	 */
++	misalignment = bytes % max;
++	if (misalignment)
++		bytes += max - misalignment;
+ 
+ 	return bytes;
+ }
+ 
+-void print1byte(uint8_t input, struct iio_channel_info *info)
++static void print1byte(uint8_t input, struct iio_channel_info *info)
+ {
+ 	/*
+ 	 * Shift before conversion to avoid sign extension
+@@ -85,7 +96,7 @@ void print1byte(uint8_t input, struct iio_channel_info *info)
+ 	}
+ }
+ 
+-void print2byte(uint16_t input, struct iio_channel_info *info)
++static void print2byte(uint16_t input, struct iio_channel_info *info)
+ {
+ 	/* First swap if incorrect endian */
+ 	if (info->be)
+@@ -108,7 +119,7 @@ void print2byte(uint16_t input, struct iio_channel_info *info)
+ 	}
+ }
+ 
+-void print4byte(uint32_t input, struct iio_channel_info *info)
++static void print4byte(uint32_t input, struct iio_channel_info *info)
+ {
+ 	/* First swap if incorrect endian */
+ 	if (info->be)
+@@ -131,7 +142,7 @@ void print4byte(uint32_t input, struct iio_channel_info *info)
+ 	}
+ }
+ 
+-void print8byte(uint64_t input, struct iio_channel_info *info)
++static void print8byte(uint64_t input, struct iio_channel_info *info)
+ {
+ 	/* First swap if incorrect endian */
+ 	if (info->be)
+@@ -167,9 +178,8 @@ void print8byte(uint64_t input, struct iio_channel_info *info)
+  *			      to fill the location offsets.
+  * @num_channels:	number of channels
+  **/
+-void process_scan(char *data,
+-		  struct iio_channel_info *channels,
+-		  int num_channels)
++static void process_scan(char *data, struct iio_channel_info *channels,
++			 int num_channels)
+ {
+ 	int k;
+ 
+@@ -238,7 +248,7 @@ static int enable_disable_all_channels(char *dev_dir_name, int enable)
+ 	return 0;
+ }
+ 
+-void print_usage(void)
++static void print_usage(void)
+ {
+ 	fprintf(stderr, "Usage: generic_buffer [options]...\n"
+ 		"Capture, convert and output data from IIO device buffer\n"
+@@ -257,12 +267,12 @@ void print_usage(void)
+ 		"  -w <n>     Set delay between reads in us (event-less mode)\n");
+ }
+ 
+-enum autochan autochannels = AUTOCHANNELS_DISABLED;
+-char *dev_dir_name = NULL;
+-char *buf_dir_name = NULL;
+-bool current_trigger_set = false;
++static enum autochan autochannels = AUTOCHANNELS_DISABLED;
++static char *dev_dir_name = NULL;
++static char *buf_dir_name = NULL;
++static bool current_trigger_set = false;
+ 
+-void cleanup(void)
++static void cleanup(void)
+ {
+ 	int ret;
+ 
+@@ -294,14 +304,14 @@ void cleanup(void)
+ 	}
+ }
+ 
+-void sig_handler(int signum)
++static void sig_handler(int signum)
+ {
+ 	fprintf(stderr, "Caught signal %d\n", signum);
+ 	cleanup();
+ 	exit(-signum);
+ }
+ 
+-void register_cleanup(void)
++static void register_cleanup(void)
+ {
+ 	struct sigaction sa = { .sa_handler = sig_handler };
+ 	const int signums[] = { SIGINT, SIGTERM, SIGABRT };
+@@ -343,7 +353,7 @@ int main(int argc, char **argv)
+ 	ssize_t read_size;
+ 	int dev_num = -1, trig_num = -1;
+ 	char *buffer_access = NULL;
+-	int scan_size;
++	unsigned int scan_size;
+ 	int noevents = 0;
+ 	int notrigger = 0;
+ 	char *dummy;
+@@ -613,7 +623,16 @@ int main(int argc, char **argv)
+ 	}
+ 
+ 	scan_size = size_from_channelarray(channels, num_channels);
+-	data = malloc(scan_size * buf_len);
++
++	size_t total_buf_len = scan_size * buf_len;
++
++	if (scan_size > 0 && total_buf_len / scan_size != buf_len) {
++		ret = -EFAULT;
++		perror("Integer overflow happened when calculate scan_size * buf_len");
++		goto error;
++	}
++
++	data = malloc(total_buf_len);
+ 	if (!data) {
+ 		ret = -ENOMEM;
+ 		goto error;
+diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
+index 8a793e4c9400a..c78d8813811cc 100644
+--- a/tools/perf/util/hist.c
++++ b/tools/perf/util/hist.c
+@@ -2624,8 +2624,6 @@ void hist__account_cycles(struct branch_stack *bs, struct addr_location *al,
+ 
+ 	/* If we have branch cycles always annotate them. */
+ 	if (bs && bs->nr && entries[0].flags.cycles) {
+-		int i;
+-
+ 		bi = sample__resolve_bstack(sample, al);
+ 		if (bi) {
+ 			struct addr_map_symbol *prev = NULL;
+@@ -2640,7 +2638,7 @@ void hist__account_cycles(struct branch_stack *bs, struct addr_location *al,
+ 			 * Note that perf stores branches reversed from
+ 			 * program order!
+ 			 */
+-			for (i = bs->nr - 1; i >= 0; i--) {
++			for (int i = bs->nr - 1; i >= 0; i--) {
+ 				addr_map_symbol__account_cycles(&bi[i].from,
+ 					nonany_branch_mode ? NULL : prev,
+ 					bi[i].flags.cycles);
+@@ -2649,6 +2647,12 @@ void hist__account_cycles(struct branch_stack *bs, struct addr_location *al,
+ 				if (total_cycles)
+ 					*total_cycles += bi[i].flags.cycles;
+ 			}
++			for (unsigned int i = 0; i < bs->nr; i++) {
++				map__put(bi[i].to.ms.map);
++				maps__put(bi[i].to.ms.maps);
++				map__put(bi[i].from.ms.map);
++				maps__put(bi[i].from.ms.maps);
++			}
+ 			free(bi);
+ 		}
+ 	}
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index df515cd8d0184..eec926c313b13 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2387,16 +2387,18 @@ static int lbr_callchain_add_lbr_ip(struct thread *thread,
+ 		save_lbr_cursor_node(thread, cursor, i);
+ 	}
+ 
+-	/* Add LBR ip from first entries.to */
+-	ip = entries[0].to;
+-	flags = &entries[0].flags;
+-	*branch_from = entries[0].from;
+-	err = add_callchain_ip(thread, cursor, parent,
+-			       root_al, &cpumode, ip,
+-			       true, flags, NULL,
+-			       *branch_from);
+-	if (err)
+-		return err;
++	if (lbr_nr > 0) {
++		/* Add LBR ip from first entries.to */
++		ip = entries[0].to;
++		flags = &entries[0].flags;
++		*branch_from = entries[0].from;
++		err = add_callchain_ip(thread, cursor, parent,
++				root_al, &cpumode, ip,
++				true, flags, NULL,
++				*branch_from);
++		if (err)
++			return err;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/tools/testing/selftests/pidfd/pidfd_fdinfo_test.c b/tools/testing/selftests/pidfd/pidfd_fdinfo_test.c
+index 3fd8e903118f5..3bc46d6151f44 100644
+--- a/tools/testing/selftests/pidfd/pidfd_fdinfo_test.c
++++ b/tools/testing/selftests/pidfd/pidfd_fdinfo_test.c
+@@ -62,7 +62,7 @@ static void error_report(struct error *err, const char *test_name)
+ 		break;
+ 
+ 	case PIDFD_PASS:
+-		ksft_test_result_pass("%s test: Passed\n");
++		ksft_test_result_pass("%s test: Passed\n", test_name);
+ 		break;
+ 
+ 	default:
+diff --git a/tools/testing/selftests/pidfd/pidfd_test.c b/tools/testing/selftests/pidfd/pidfd_test.c
+index 9a2d64901d591..79f543ad394c2 100644
+--- a/tools/testing/selftests/pidfd/pidfd_test.c
++++ b/tools/testing/selftests/pidfd/pidfd_test.c
+@@ -380,13 +380,13 @@ static int test_pidfd_send_signal_syscall_support(void)
+ 
+ static void *test_pidfd_poll_exec_thread(void *priv)
+ {
+-	ksft_print_msg("Child Thread: starting. pid %d tid %d ; and sleeping\n",
++	ksft_print_msg("Child Thread: starting. pid %d tid %ld ; and sleeping\n",
+ 			getpid(), syscall(SYS_gettid));
+ 	ksft_print_msg("Child Thread: doing exec of sleep\n");
+ 
+ 	execl("/bin/sleep", "sleep", str(CHILD_THREAD_MIN_WAIT), (char *)NULL);
+ 
+-	ksft_print_msg("Child Thread: DONE. pid %d tid %d\n",
++	ksft_print_msg("Child Thread: DONE. pid %d tid %ld\n",
+ 			getpid(), syscall(SYS_gettid));
+ 	return NULL;
+ }
+@@ -426,7 +426,7 @@ static int child_poll_exec_test(void *args)
+ {
+ 	pthread_t t1;
+ 
+-	ksft_print_msg("Child (pidfd): starting. pid %d tid %d\n", getpid(),
++	ksft_print_msg("Child (pidfd): starting. pid %d tid %ld\n", getpid(),
+ 			syscall(SYS_gettid));
+ 	pthread_create(&t1, NULL, test_pidfd_poll_exec_thread, NULL);
+ 	/*
+@@ -477,10 +477,10 @@ static void test_pidfd_poll_exec(int use_waitpid)
+ 
+ static void *test_pidfd_poll_leader_exit_thread(void *priv)
+ {
+-	ksft_print_msg("Child Thread: starting. pid %d tid %d ; and sleeping\n",
++	ksft_print_msg("Child Thread: starting. pid %d tid %ld ; and sleeping\n",
+ 			getpid(), syscall(SYS_gettid));
+ 	sleep(CHILD_THREAD_MIN_WAIT);
+-	ksft_print_msg("Child Thread: DONE. pid %d tid %d\n", getpid(), syscall(SYS_gettid));
++	ksft_print_msg("Child Thread: DONE. pid %d tid %ld\n", getpid(), syscall(SYS_gettid));
+ 	return NULL;
+ }
+ 
+@@ -489,7 +489,7 @@ static int child_poll_leader_exit_test(void *args)
+ {
+ 	pthread_t t1, t2;
+ 
+-	ksft_print_msg("Child: starting. pid %d tid %d\n", getpid(), syscall(SYS_gettid));
++	ksft_print_msg("Child: starting. pid %d tid %ld\n", getpid(), syscall(SYS_gettid));
+ 	pthread_create(&t1, NULL, test_pidfd_poll_leader_exit_thread, NULL);
+ 	pthread_create(&t2, NULL, test_pidfd_poll_leader_exit_thread, NULL);
+ 
+diff --git a/tools/testing/selftests/resctrl/resctrl_tests.c b/tools/testing/selftests/resctrl/resctrl_tests.c
+index bd98746c6f858..b33d1d6dd99ae 100644
+--- a/tools/testing/selftests/resctrl/resctrl_tests.c
++++ b/tools/testing/selftests/resctrl/resctrl_tests.c
+@@ -132,9 +132,14 @@ int main(int argc, char **argv)
+ 	detect_amd();
+ 
+ 	if (has_ben) {
++		if (argc - ben_ind >= BENCHMARK_ARGS)
++			ksft_exit_fail_msg("Too long benchmark command.\n");
++
+ 		/* Extract benchmark command from command line. */
+ 		for (i = ben_ind; i < argc; i++) {
+ 			benchmark_cmd[i - ben_ind] = benchmark_cmd_area[i];
++			if (strlen(argv[i]) >= BENCHMARK_ARG_SIZE)
++				ksft_exit_fail_msg("Too long benchmark command argument.\n");
+ 			sprintf(benchmark_cmd[i - ben_ind], "%s", argv[i]);
+ 		}
+ 		benchmark_cmd[ben_count] = NULL;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-11-28 17:52 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-11-28 17:52 UTC (permalink / raw
  To: gentoo-commits

commit:     3fb7e3159d4516b7da427d1270beda3d0a670de2
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Nov 28 17:52:42 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Nov 28 17:52:42 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3fb7e315

Linux patch 5.10.202

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1201_linux-5.10.202.patch | 5378 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5382 insertions(+)

diff --git a/0000_README b/0000_README
index be2bdc48..5b0c512c 100644
--- a/0000_README
+++ b/0000_README
@@ -847,6 +847,10 @@ Patch:  1200_linux-5.10.201.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.201
 
+Patch:  1201_linux-5.10.202.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.202
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1201_linux-5.10.202.patch b/1201_linux-5.10.202.patch
new file mode 100644
index 00000000..5fae3b09
--- /dev/null
+++ b/1201_linux-5.10.202.patch
@@ -0,0 +1,5378 @@
+diff --git a/Makefile b/Makefile
+index 7ef22771de767..ed0e42c99c906 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 201
++SUBLEVEL = 202
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/include/asm/exception.h b/arch/arm/include/asm/exception.h
+index 58e039a851af0..3c82975d46db3 100644
+--- a/arch/arm/include/asm/exception.h
++++ b/arch/arm/include/asm/exception.h
+@@ -10,10 +10,6 @@
+ 
+ #include <linux/interrupt.h>
+ 
+-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ #define __exception_irq_entry	__irq_entry
+-#else
+-#define __exception_irq_entry
+-#endif
+ 
+ #endif /* __ASM_ARM_EXCEPTION_H */
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 34bd4cba81e66..13cf137da999a 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -986,6 +986,8 @@ choice
+ config CPU_BIG_ENDIAN
+ 	bool "Build big-endian kernel"
+ 	depends on !LD_IS_LLD || LLD_VERSION >= 130000
++	# https://github.com/llvm/llvm-project/commit/1379b150991f70a5782e9a143c2ba5308da1161c
++	depends on AS_IS_GNU || AS_VERSION >= 150000
+ 	help
+ 	  Say Y if you plan on running a kernel with a big-endian userspace.
+ 
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 2a1f03cdb52c7..3677209106773 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -129,12 +129,6 @@
+ 		};
+ 	};
+ 
+-	tcsr_mutex: hwlock {
+-		compatible = "qcom,tcsr-mutex";
+-		syscon = <&tcsr_mutex_regs 0 0x80>;
+-		#hwlock-cells = <1>;
+-	};
+-
+ 	pmuv8: pmu {
+ 		compatible = "arm,cortex-a53-pmu";
+ 		interrupts = <GIC_PPI 7 (GIC_CPU_MASK_SIMPLE(4) |
+@@ -175,7 +169,7 @@
+ 	smem {
+ 		compatible = "qcom,smem";
+ 		memory-region = <&smem_region>;
+-		hwlocks = <&tcsr_mutex 0>;
++		hwlocks = <&tcsr_mutex 3>;
+ 	};
+ 
+ 	soc: soc {
+@@ -242,9 +236,10 @@
+ 			#reset-cells = <1>;
+ 		};
+ 
+-		tcsr_mutex_regs: syscon@1905000 {
+-			compatible = "syscon";
+-			reg = <0x0 0x01905000 0x0 0x8000>;
++		tcsr_mutex: hwlock@1905000 {
++			compatible = "qcom,ipq6018-tcsr-mutex", "qcom,tcsr-mutex";
++			reg = <0x0 0x01905000 0x0 0x20000>;
++			#hwlock-cells = <1>;
+ 		};
+ 
+ 		tcsr_q6: syscon@1945000 {
+diff --git a/arch/parisc/include/uapi/asm/pdc.h b/arch/parisc/include/uapi/asm/pdc.h
+index 15211723ebf54..ee903551ae105 100644
+--- a/arch/parisc/include/uapi/asm/pdc.h
++++ b/arch/parisc/include/uapi/asm/pdc.h
+@@ -465,6 +465,7 @@ struct pdc_model {		/* for PDC_MODEL */
+ 	unsigned long arch_rev;
+ 	unsigned long pot_key;
+ 	unsigned long curr_key;
++	unsigned long width;	/* default of PSW_W bit (1=enabled) */
+ };
+ 
+ struct pdc_cache_cf {		/* for PDC_CACHE  (I/D-caches) */
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index 05bed27eef859..25bef679290f7 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -497,13 +497,13 @@
+ 	 * to a CPU TLB 4k PFN (4k => 12 bits to shift) */
+ 	#define PAGE_ADD_SHIFT		(PAGE_SHIFT-12)
+ 	#define PAGE_ADD_HUGE_SHIFT	(REAL_HPAGE_SHIFT-12)
++	#define PFN_START_BIT	(63-ASM_PFN_PTE_SHIFT+(63-58)-PAGE_ADD_SHIFT)
+ 
+ 	/* Drop prot bits and convert to page addr for iitlbt and idtlbt */
+ 	.macro		convert_for_tlb_insert20 pte,tmp
+ #ifdef CONFIG_HUGETLB_PAGE
+ 	copy		\pte,\tmp
+-	extrd,u		\tmp,(63-ASM_PFN_PTE_SHIFT)+(63-58)+PAGE_ADD_SHIFT,\
+-				64-PAGE_SHIFT-PAGE_ADD_SHIFT,\pte
++	extrd,u		\tmp,PFN_START_BIT,PFN_START_BIT+1,\pte
+ 
+ 	depdi		_PAGE_SIZE_ENCODING_DEFAULT,63,\
+ 				(63-58)+PAGE_ADD_SHIFT,\pte
+@@ -511,8 +511,7 @@
+ 	depdi		_HUGE_PAGE_SIZE_ENCODING_DEFAULT,63,\
+ 				(63-58)+PAGE_ADD_HUGE_SHIFT,\pte
+ #else /* Huge pages disabled */
+-	extrd,u		\pte,(63-ASM_PFN_PTE_SHIFT)+(63-58)+PAGE_ADD_SHIFT,\
+-				64-PAGE_SHIFT-PAGE_ADD_SHIFT,\pte
++	extrd,u		\pte,PFN_START_BIT,PFN_START_BIT+1,\pte
+ 	depdi		_PAGE_SIZE_ENCODING_DEFAULT,63,\
+ 				(63-58)+PAGE_ADD_SHIFT,\pte
+ #endif
+diff --git a/arch/parisc/kernel/head.S b/arch/parisc/kernel/head.S
+index 598d0938449da..2f95c2429f772 100644
+--- a/arch/parisc/kernel/head.S
++++ b/arch/parisc/kernel/head.S
+@@ -69,9 +69,8 @@ $bss_loop:
+ 	stw,ma          %arg2,4(%r1)
+ 	stw,ma          %arg3,4(%r1)
+ 
+-#if !defined(CONFIG_64BIT) && defined(CONFIG_PA20)
+-	/* This 32-bit kernel was compiled for PA2.0 CPUs. Check current CPU
+-	 * and halt kernel if we detect a PA1.x CPU. */
++#if defined(CONFIG_PA20)
++	/* check for 64-bit capable CPU as required by current kernel */
+ 	ldi		32,%r10
+ 	mtctl		%r10,%cr11
+ 	.level 2.0
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index 6e3e50614353b..1dc98e2fce2e8 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -1289,8 +1289,7 @@ static void power_pmu_disable(struct pmu *pmu)
+ 		/*
+ 		 * Disable instruction sampling if it was enabled
+ 		 */
+-		if (cpuhw->mmcr.mmcra & MMCRA_SAMPLE_ENABLE)
+-			val &= ~MMCRA_SAMPLE_ENABLE;
++		val &= ~MMCRA_SAMPLE_ENABLE;
+ 
+ 		/* Disable BHRB via mmcra (BHRBRD) for p10 */
+ 		if (ppmu->flags & PPMU_ARCH_31)
+@@ -1301,7 +1300,7 @@ static void power_pmu_disable(struct pmu *pmu)
+ 		 * instruction sampling or BHRB.
+ 		 */
+ 		if (val != mmcra) {
+-			mtspr(SPRN_MMCRA, mmcra);
++			mtspr(SPRN_MMCRA, val);
+ 			mb();
+ 			isync();
+ 		}
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 8465fc8ffd990..7d7a3cbb8e017 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -505,6 +505,7 @@
+ #define MSR_AMD64_CPUID_FN_1		0xc0011004
+ #define MSR_AMD64_LS_CFG		0xc0011020
+ #define MSR_AMD64_DC_CFG		0xc0011022
++#define MSR_AMD64_TW_CFG		0xc0011023
+ 
+ #define MSR_AMD64_DE_CFG		0xc0011029
+ #define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT	 1
+diff --git a/arch/x86/include/asm/numa.h b/arch/x86/include/asm/numa.h
+index e3bae2b60a0db..ef2844d691735 100644
+--- a/arch/x86/include/asm/numa.h
++++ b/arch/x86/include/asm/numa.h
+@@ -12,13 +12,6 @@
+ 
+ #define NR_NODE_MEMBLKS		(MAX_NUMNODES*2)
+ 
+-/*
+- * Too small node sizes may confuse the VM badly. Usually they
+- * result from BIOS bugs. So dont recognize nodes as standalone
+- * NUMA entities that have less than this amount of RAM listed:
+- */
+-#define NODE_MIN_SIZE (4*1024*1024)
+-
+ extern int numa_off;
+ 
+ /*
+diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
+index 205fa420ee7ca..3f5c00b15e2c1 100644
+--- a/arch/x86/kernel/cpu/hygon.c
++++ b/arch/x86/kernel/cpu/hygon.c
+@@ -89,8 +89,12 @@ static void hygon_get_topology(struct cpuinfo_x86 *c)
+ 		if (!err)
+ 			c->x86_coreid_bits = get_count_order(c->x86_max_cores);
+ 
+-		/* Socket ID is ApicId[6] for these processors. */
+-		c->phys_proc_id = c->apicid >> APICID_SOCKET_ID_BIT;
++		/*
++		 * Socket ID is ApicId[6] for the processors with model <= 0x3
++		 * when running on host.
++		 */
++		if (!boot_cpu_has(X86_FEATURE_HYPERVISOR) && c->x86_model <= 0x3)
++			c->phys_proc_id = c->apicid >> APICID_SOCKET_ID_BIT;
+ 
+ 		cacheinfo_hygon_init_llc_id(c, cpu);
+ 	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index e03e320847cdd..20eb8f55e1f1e 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -674,10 +674,12 @@ static int stimer_set_count(struct kvm_vcpu_hv_stimer *stimer, u64 count,
+ 
+ 	stimer_cleanup(stimer);
+ 	stimer->count = count;
+-	if (stimer->count == 0)
+-		stimer->config.enable = 0;
+-	else if (stimer->config.auto_enable)
+-		stimer->config.enable = 1;
++	if (!host) {
++		if (stimer->count == 0)
++			stimer->config.enable = 0;
++		else if (stimer->config.auto_enable)
++			stimer->config.enable = 1;
++	}
+ 
+ 	if (stimer->config.enable)
+ 		stimer_mark_pending(stimer, false);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index c2899ff31a068..13e4699a0744f 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3132,6 +3132,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_AMD64_PATCH_LOADER:
+ 	case MSR_AMD64_BU_CFG2:
+ 	case MSR_AMD64_DC_CFG:
++	case MSR_AMD64_TW_CFG:
+ 	case MSR_F15H_EX_CFG:
+ 		break;
+ 
+@@ -3485,6 +3486,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_AMD64_BU_CFG2:
+ 	case MSR_IA32_PERF_CTL:
+ 	case MSR_AMD64_DC_CFG:
++	case MSR_AMD64_TW_CFG:
+ 	case MSR_F15H_EX_CFG:
+ 	/*
+ 	 * Intel Sandy Bridge CPUs must support the RAPL (running average power
+diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
+index 9dc31996c7edb..62a119170376b 100644
+--- a/arch/x86/mm/numa.c
++++ b/arch/x86/mm/numa.c
+@@ -602,13 +602,6 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
+ 		if (start >= end)
+ 			continue;
+ 
+-		/*
+-		 * Don't confuse VM with a node that doesn't have the
+-		 * minimum amount of memory:
+-		 */
+-		if (end && (end - start) < NODE_MIN_SIZE)
+-			continue;
+-
+ 		alloc_node_data(nid);
+ 	}
+ 
+diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
+index 9d10b846ccf73..005a36cb21bc4 100644
+--- a/crypto/pcrypt.c
++++ b/crypto/pcrypt.c
+@@ -117,6 +117,8 @@ static int pcrypt_aead_encrypt(struct aead_request *req)
+ 	err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu);
+ 	if (!err)
+ 		return -EINPROGRESS;
++	if (err == -EBUSY)
++		return -EAGAIN;
+ 
+ 	return err;
+ }
+@@ -164,6 +166,8 @@ static int pcrypt_aead_decrypt(struct aead_request *req)
+ 	err = padata_do_parallel(ictx->psdec, padata, &ctx->cb_cpu);
+ 	if (!err)
+ 		return -EINPROGRESS;
++	if (err == -EBUSY)
++		return -EAGAIN;
+ 
+ 	return err;
+ }
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index bfd821173f863..ba70bcded1283 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -442,6 +442,18 @@ static const struct dmi_system_id asus_laptop[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "B2402CBA"),
+ 		},
+ 	},
++	{
++		/* TongFang GMxXGxx/TUXEDO Polaris 15 Gen5 AMD */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "GMxXGxx"),
++		},
++	},
++	{
++		/* TongFang GM6XGxX/TUXEDO Stellaris 16 Gen5 AMD */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "GM6XGxX"),
++		},
++	},
+ 	{
+ 		.ident = "Asus ExpertBook B2502",
+ 		.matches = {
+diff --git a/drivers/atm/iphase.c b/drivers/atm/iphase.c
+index a59554e5b8b0f..cc90f550ab75a 100644
+--- a/drivers/atm/iphase.c
++++ b/drivers/atm/iphase.c
+@@ -2290,19 +2290,21 @@ static int get_esi(struct atm_dev *dev)
+ static int reset_sar(struct atm_dev *dev)  
+ {  
+ 	IADEV *iadev;  
+-	int i, error = 1;  
++	int i, error;
+ 	unsigned int pci[64];  
+ 	  
+ 	iadev = INPH_IA_DEV(dev);  
+-	for(i=0; i<64; i++)  
+-	  if ((error = pci_read_config_dword(iadev->pci,  
+-				i*4, &pci[i])) != PCIBIOS_SUCCESSFUL)  
+-  	      return error;  
++	for (i = 0; i < 64; i++) {
++		error = pci_read_config_dword(iadev->pci, i * 4, &pci[i]);
++		if (error != PCIBIOS_SUCCESSFUL)
++			return error;
++	}
+ 	writel(0, iadev->reg+IPHASE5575_EXT_RESET);  
+-	for(i=0; i<64; i++)  
+-	  if ((error = pci_write_config_dword(iadev->pci,  
+-					i*4, pci[i])) != PCIBIOS_SUCCESSFUL)  
+-	    return error;  
++	for (i = 0; i < 64; i++) {
++		error = pci_write_config_dword(iadev->pci, i * 4, pci[i]);
++		if (error != PCIBIOS_SUCCESSFUL)
++			return error;
++	}
+ 	udelay(5);  
+ 	return 0;  
+ }  
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index f99d190770204..a1a8b282b99b1 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -415,6 +415,18 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x13d3, 0x3586), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 
++	/* Realtek 8852BE Bluetooth devices */
++	{ USB_DEVICE(0x0cb8, 0xc559), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0bda, 0x887b), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0bda, 0xb85b), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x13d3, 0x3570), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x13d3, 0x3571), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++
+ 	/* Realtek Bluetooth devices */
+ 	{ USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01),
+ 	  .driver_info = BTUSB_REALTEK },
+@@ -3095,6 +3107,9 @@ static int btusb_mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 		goto err_free_wc;
+ 	}
+ 
++	if (data->evt_skb == NULL)
++		goto err_free_wc;
++
+ 	/* Parse and handle the return WMT event */
+ 	wmt_evt = (struct btmtk_hci_wmt_evt *)data->evt_skb->data;
+ 	if (wmt_evt->whdr.op != hdr->op) {
+diff --git a/drivers/clk/qcom/gcc-ipq6018.c b/drivers/clk/qcom/gcc-ipq6018.c
+index cde62a11f5736..4c5c7a8f41d08 100644
+--- a/drivers/clk/qcom/gcc-ipq6018.c
++++ b/drivers/clk/qcom/gcc-ipq6018.c
+@@ -75,7 +75,6 @@ static struct clk_fixed_factor gpll0_out_main_div2 = {
+ 				&gpll0_main.clkr.hw },
+ 		.num_parents = 1,
+ 		.ops = &clk_fixed_factor_ops,
+-		.flags = CLK_SET_RATE_PARENT,
+ 	},
+ };
+ 
+@@ -89,7 +88,6 @@ static struct clk_alpha_pll_postdiv gpll0 = {
+ 				&gpll0_main.clkr.hw },
+ 		.num_parents = 1,
+ 		.ops = &clk_alpha_pll_postdiv_ro_ops,
+-		.flags = CLK_SET_RATE_PARENT,
+ 	},
+ };
+ 
+@@ -164,7 +162,6 @@ static struct clk_alpha_pll_postdiv gpll6 = {
+ 				&gpll6_main.clkr.hw },
+ 		.num_parents = 1,
+ 		.ops = &clk_alpha_pll_postdiv_ro_ops,
+-		.flags = CLK_SET_RATE_PARENT,
+ 	},
+ };
+ 
+@@ -195,7 +192,6 @@ static struct clk_alpha_pll_postdiv gpll4 = {
+ 				&gpll4_main.clkr.hw },
+ 		.num_parents = 1,
+ 		.ops = &clk_alpha_pll_postdiv_ro_ops,
+-		.flags = CLK_SET_RATE_PARENT,
+ 	},
+ };
+ 
+@@ -246,7 +242,6 @@ static struct clk_alpha_pll_postdiv gpll2 = {
+ 				&gpll2_main.clkr.hw },
+ 		.num_parents = 1,
+ 		.ops = &clk_alpha_pll_postdiv_ro_ops,
+-		.flags = CLK_SET_RATE_PARENT,
+ 	},
+ };
+ 
+@@ -277,7 +272,6 @@ static struct clk_alpha_pll_postdiv nss_crypto_pll = {
+ 				&nss_crypto_pll_main.clkr.hw },
+ 		.num_parents = 1,
+ 		.ops = &clk_alpha_pll_postdiv_ro_ops,
+-		.flags = CLK_SET_RATE_PARENT,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index d6d5defb82c9f..0393154fea2f9 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -418,7 +418,6 @@ static struct clk_fixed_factor gpll0_out_main_div2 = {
+ 		},
+ 		.num_parents = 1,
+ 		.ops = &clk_fixed_factor_ops,
+-		.flags = CLK_SET_RATE_PARENT,
+ 	},
+ };
+ 
+@@ -465,7 +464,6 @@ static struct clk_alpha_pll_postdiv gpll2 = {
+ 		},
+ 		.num_parents = 1,
+ 		.ops = &clk_alpha_pll_postdiv_ro_ops,
+-		.flags = CLK_SET_RATE_PARENT,
+ 	},
+ };
+ 
+@@ -498,7 +496,6 @@ static struct clk_alpha_pll_postdiv gpll4 = {
+ 		},
+ 		.num_parents = 1,
+ 		.ops = &clk_alpha_pll_postdiv_ro_ops,
+-		.flags = CLK_SET_RATE_PARENT,
+ 	},
+ };
+ 
+@@ -532,7 +529,6 @@ static struct clk_alpha_pll_postdiv gpll6 = {
+ 		},
+ 		.num_parents = 1,
+ 		.ops = &clk_alpha_pll_postdiv_ro_ops,
+-		.flags = CLK_SET_RATE_PARENT,
+ 	},
+ };
+ 
+@@ -546,7 +542,6 @@ static struct clk_fixed_factor gpll6_out_main_div2 = {
+ 		},
+ 		.num_parents = 1,
+ 		.ops = &clk_fixed_factor_ops,
+-		.flags = CLK_SET_RATE_PARENT,
+ 	},
+ };
+ 
+@@ -611,7 +606,6 @@ static struct clk_alpha_pll_postdiv nss_crypto_pll = {
+ 		},
+ 		.num_parents = 1,
+ 		.ops = &clk_alpha_pll_postdiv_ro_ops,
+-		.flags = CLK_SET_RATE_PARENT,
+ 	},
+ };
+ 
+diff --git a/drivers/clocksource/timer-atmel-tcb.c b/drivers/clocksource/timer-atmel-tcb.c
+index 787dbebbb4324..5ea1efd87f580 100644
+--- a/drivers/clocksource/timer-atmel-tcb.c
++++ b/drivers/clocksource/timer-atmel-tcb.c
+@@ -315,6 +315,7 @@ static void __init tcb_setup_dual_chan(struct atmel_tc *tc, int mck_divisor_idx)
+ 	writel(mck_divisor_idx			/* likely divide-by-8 */
+ 			| ATMEL_TC_WAVE
+ 			| ATMEL_TC_WAVESEL_UP		/* free-run */
++			| ATMEL_TC_ASWTRG_SET		/* TIOA0 rises at software trigger */
+ 			| ATMEL_TC_ACPA_SET		/* TIOA0 rises at 0 */
+ 			| ATMEL_TC_ACPC_CLEAR,		/* (duty cycle 50%) */
+ 			tcaddr + ATMEL_TC_REG(0, CMR));
+diff --git a/drivers/clocksource/timer-imx-gpt.c b/drivers/clocksource/timer-imx-gpt.c
+index 7b2c70f2f353b..fabff69e52e58 100644
+--- a/drivers/clocksource/timer-imx-gpt.c
++++ b/drivers/clocksource/timer-imx-gpt.c
+@@ -454,12 +454,16 @@ static int __init mxc_timer_init_dt(struct device_node *np,  enum imx_gpt_type t
+ 		return -ENOMEM;
+ 
+ 	imxtm->base = of_iomap(np, 0);
+-	if (!imxtm->base)
+-		return -ENXIO;
++	if (!imxtm->base) {
++		ret = -ENXIO;
++		goto err_kfree;
++	}
+ 
+ 	imxtm->irq = irq_of_parse_and_map(np, 0);
+-	if (imxtm->irq <= 0)
+-		return -EINVAL;
++	if (imxtm->irq <= 0) {
++		ret = -EINVAL;
++		goto err_kfree;
++	}
+ 
+ 	imxtm->clk_ipg = of_clk_get_by_name(np, "ipg");
+ 
+@@ -472,11 +476,15 @@ static int __init mxc_timer_init_dt(struct device_node *np,  enum imx_gpt_type t
+ 
+ 	ret = _mxc_timer_init(imxtm);
+ 	if (ret)
+-		return ret;
++		goto err_kfree;
+ 
+ 	initialized = 1;
+ 
+ 	return 0;
++
++err_kfree:
++	kfree(imxtm);
++	return ret;
+ }
+ 
+ static int __init imx1_timer_init_dt(struct device_node *np)
+diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c
+index 6cd5c8ab5d49f..ab856dd2d5fc0 100644
+--- a/drivers/cpufreq/cpufreq_stats.c
++++ b/drivers/cpufreq/cpufreq_stats.c
+@@ -131,25 +131,25 @@ static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
+ 	len += scnprintf(buf + len, PAGE_SIZE - len, "   From  :    To\n");
+ 	len += scnprintf(buf + len, PAGE_SIZE - len, "         : ");
+ 	for (i = 0; i < stats->state_num; i++) {
+-		if (len >= PAGE_SIZE)
++		if (len >= PAGE_SIZE - 1)
+ 			break;
+ 		len += scnprintf(buf + len, PAGE_SIZE - len, "%9u ",
+ 				stats->freq_table[i]);
+ 	}
+-	if (len >= PAGE_SIZE)
+-		return PAGE_SIZE;
++	if (len >= PAGE_SIZE - 1)
++		return PAGE_SIZE - 1;
+ 
+ 	len += scnprintf(buf + len, PAGE_SIZE - len, "\n");
+ 
+ 	for (i = 0; i < stats->state_num; i++) {
+-		if (len >= PAGE_SIZE)
++		if (len >= PAGE_SIZE - 1)
+ 			break;
+ 
+ 		len += scnprintf(buf + len, PAGE_SIZE - len, "%9u: ",
+ 				stats->freq_table[i]);
+ 
+ 		for (j = 0; j < stats->state_num; j++) {
+-			if (len >= PAGE_SIZE)
++			if (len >= PAGE_SIZE - 1)
+ 				break;
+ 
+ 			if (pending)
+@@ -159,12 +159,12 @@ static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
+ 
+ 			len += scnprintf(buf + len, PAGE_SIZE - len, "%9u ", count);
+ 		}
+-		if (len >= PAGE_SIZE)
++		if (len >= PAGE_SIZE - 1)
+ 			break;
+ 		len += scnprintf(buf + len, PAGE_SIZE - len, "\n");
+ 	}
+ 
+-	if (len >= PAGE_SIZE) {
++	if (len >= PAGE_SIZE - 1) {
+ 		pr_warn_once("cpufreq transition table exceeds PAGE_SIZE. Disabling\n");
+ 		return -EFBIG;
+ 	}
+diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c
+index 3e57176ca0ca5..40b052c6733a1 100644
+--- a/drivers/dma/stm32-mdma.c
++++ b/drivers/dma/stm32-mdma.c
+@@ -509,7 +509,7 @@ static int stm32_mdma_set_xfer_param(struct stm32_mdma_chan *chan,
+ 	src_maxburst = chan->dma_config.src_maxburst;
+ 	dst_maxburst = chan->dma_config.dst_maxburst;
+ 
+-	ccr = stm32_mdma_read(dmadev, STM32_MDMA_CCR(chan->id));
++	ccr = stm32_mdma_read(dmadev, STM32_MDMA_CCR(chan->id)) & ~STM32_MDMA_CCR_EN;
+ 	ctcr = stm32_mdma_read(dmadev, STM32_MDMA_CTCR(chan->id));
+ 	ctbr = stm32_mdma_read(dmadev, STM32_MDMA_CTBR(chan->id));
+ 
+@@ -937,7 +937,7 @@ stm32_mdma_prep_dma_memcpy(struct dma_chan *c, dma_addr_t dest, dma_addr_t src,
+ 	if (!desc)
+ 		return NULL;
+ 
+-	ccr = stm32_mdma_read(dmadev, STM32_MDMA_CCR(chan->id));
++	ccr = stm32_mdma_read(dmadev, STM32_MDMA_CCR(chan->id)) & ~STM32_MDMA_CCR_EN;
+ 	ctcr = stm32_mdma_read(dmadev, STM32_MDMA_CTCR(chan->id));
+ 	ctbr = stm32_mdma_read(dmadev, STM32_MDMA_CTBR(chan->id));
+ 	cbndtr = stm32_mdma_read(dmadev, STM32_MDMA_CBNDTR(chan->id));
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index 96086e7df9100..3e3e37e0e6607 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -136,6 +136,12 @@ static enum qcom_scm_convention __get_convention(void)
+ 	if (likely(qcom_scm_convention != SMC_CONVENTION_UNKNOWN))
+ 		return qcom_scm_convention;
+ 
++	/*
++	 * Per the "SMC calling convention specification", the 64-bit calling
++	 * convention can only be used when the client is 64-bit, otherwise
++	 * system will encounter the undefined behaviour.
++	 */
++#if IS_ENABLED(CONFIG_ARM64)
+ 	/*
+ 	 * Device isn't required as there is only one argument - no device
+ 	 * needed to dma_map_single to secure world
+@@ -156,6 +162,7 @@ static enum qcom_scm_convention __get_convention(void)
+ 		forced = true;
+ 		goto found;
+ 	}
++#endif
+ 
+ 	probed_convention = SMC_CONVENTION_ARM_32;
+ 	ret = __scm_smc_call(NULL, &desc, probed_convention, &res, true);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+index 714178f1b6c6e..9fb8012007e2e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+@@ -178,6 +178,7 @@ int amdgpu_bo_list_get(struct amdgpu_fpriv *fpriv, int id,
+ 	}
+ 
+ 	rcu_read_unlock();
++	*result = NULL;
+ 	return -ENOENT;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+index 00a190929b55c..48df32dd352ed 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+@@ -575,6 +575,9 @@ static ssize_t amdgpu_debugfs_regs_smc_read(struct file *f, char __user *buf,
+ 	ssize_t result = 0;
+ 	int r;
+ 
++	if (!adev->smc_rreg)
++		return -EPERM;
++
+ 	if (size & 0x3 || *pos & 0x3)
+ 		return -EINVAL;
+ 
+@@ -634,6 +637,9 @@ static ssize_t amdgpu_debugfs_regs_smc_write(struct file *f, const char __user *
+ 	ssize_t result = 0;
+ 	int r;
+ 
++	if (!adev->smc_wreg)
++		return -EPERM;
++
+ 	if (size & 0x3 || *pos & 0x3)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index f0db9724ca85e..a093f1b277244 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4530,7 +4530,8 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ 	 * Flush RAM to disk so that after reboot
+ 	 * the user can read log and see why the system rebooted.
+ 	 */
+-	if (need_emergency_restart && amdgpu_ras_get_context(adev)->reboot) {
++	if (need_emergency_restart && amdgpu_ras_get_context(adev) &&
++		amdgpu_ras_get_context(adev)->reboot) {
+ 		DRM_WARN("Emergency reboot.");
+ 
+ 		ksys_sync_helper();
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index 3638f0e12a2b8..a8f1c4969fac7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -1031,7 +1031,8 @@ static void amdgpu_ras_sysfs_remove_bad_page_node(struct amdgpu_device *adev)
+ {
+ 	struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
+ 
+-	sysfs_remove_file_from_group(&adev->dev->kobj,
++	if (adev->dev->kobj.sd)
++		sysfs_remove_file_from_group(&adev->dev->kobj,
+ 				&con->badpages_attr.attr,
+ 				RAS_FS_NAME);
+ }
+@@ -1048,7 +1049,8 @@ static int amdgpu_ras_sysfs_remove_feature_node(struct amdgpu_device *adev)
+ 		.attrs = attrs,
+ 	};
+ 
+-	sysfs_remove_group(&adev->dev->kobj, &group);
++	if (adev->dev->kobj.sd)
++		sysfs_remove_group(&adev->dev->kobj, &group);
+ 
+ 	return 0;
+ }
+@@ -1096,7 +1098,8 @@ int amdgpu_ras_sysfs_remove(struct amdgpu_device *adev,
+ 	if (!obj || !obj->attr_inuse)
+ 		return -EINVAL;
+ 
+-	sysfs_remove_file_from_group(&adev->dev->kobj,
++	if (adev->dev->kobj.sd)
++		sysfs_remove_file_from_group(&adev->dev->kobj,
+ 				&obj->sysfs_attr.attr,
+ 				RAS_FS_NAME);
+ 	obj->attr_inuse = 0;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 652ddec188385..54d6b4128721e 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1293,7 +1293,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
+ 	struct dmub_srv_create_params create_params;
+ 	struct dmub_srv_region_params region_params;
+ 	struct dmub_srv_region_info region_info;
+-	struct dmub_srv_fb_params fb_params;
++	struct dmub_srv_memory_params memory_params;
+ 	struct dmub_srv_fb_info *fb_info;
+ 	struct dmub_srv *dmub_srv;
+ 	const struct dmcub_firmware_header_v1_0 *hdr;
+@@ -1389,6 +1389,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
+ 		adev->dm.dmub_fw->data +
+ 		le32_to_cpu(hdr->header.ucode_array_offset_bytes) +
+ 		PSP_HEADER_BYTES;
++	region_params.is_mailbox_in_inbox = false;
+ 
+ 	status = dmub_srv_calc_region_info(dmub_srv, &region_params,
+ 					   &region_info);
+@@ -1410,10 +1411,10 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
+ 		return r;
+ 
+ 	/* Rebase the regions on the framebuffer address. */
+-	memset(&fb_params, 0, sizeof(fb_params));
+-	fb_params.cpu_addr = adev->dm.dmub_bo_cpu_addr;
+-	fb_params.gpu_addr = adev->dm.dmub_bo_gpu_addr;
+-	fb_params.region_info = &region_info;
++	memset(&memory_params, 0, sizeof(memory_params));
++	memory_params.cpu_fb_addr = adev->dm.dmub_bo_cpu_addr;
++	memory_params.gpu_fb_addr = adev->dm.dmub_bo_gpu_addr;
++	memory_params.region_info = &region_info;
+ 
+ 	adev->dm.dmub_fb_info =
+ 		kzalloc(sizeof(*adev->dm.dmub_fb_info), GFP_KERNEL);
+@@ -1425,7 +1426,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
+ 		return -ENOMEM;
+ 	}
+ 
+-	status = dmub_srv_calc_fb_info(dmub_srv, &fb_params, fb_info);
++	status = dmub_srv_calc_mem_info(dmub_srv, &memory_params, fb_info);
+ 	if (status != DMUB_STATUS_OK) {
+ 		DRM_ERROR("Error calculating DMUB FB info: %d\n", status);
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+index d48fd87d3b953..8206c6edba746 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+@@ -534,7 +534,7 @@ uint32_t dc_stream_get_vblank_counter(const struct dc_stream_state *stream)
+ 	for (i = 0; i < MAX_PIPES; i++) {
+ 		struct timing_generator *tg = res_ctx->pipe_ctx[i].stream_res.tg;
+ 
+-		if (res_ctx->pipe_ctx[i].stream != stream)
++		if (res_ctx->pipe_ctx[i].stream != stream || !tg)
+ 			continue;
+ 
+ 		return tg->funcs->get_frame_count(tg);
+@@ -593,7 +593,7 @@ bool dc_stream_get_scanoutpos(const struct dc_stream_state *stream,
+ 	for (i = 0; i < MAX_PIPES; i++) {
+ 		struct timing_generator *tg = res_ctx->pipe_ctx[i].stream_res.tg;
+ 
+-		if (res_ctx->pipe_ctx[i].stream != stream)
++		if (res_ctx->pipe_ctx[i].stream != stream || !tg)
+ 			continue;
+ 
+ 		tg->funcs->get_scanoutpos(tg,
+diff --git a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
+index 882b4e2816b53..733e41b81a4e6 100644
+--- a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
++++ b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
+@@ -152,6 +152,7 @@ struct dmub_srv_region_params {
+ 	uint32_t vbios_size;
+ 	const uint8_t *fw_inst_const;
+ 	const uint8_t *fw_bss_data;
++	bool is_mailbox_in_inbox;
+ };
+ 
+ /**
+@@ -171,20 +172,25 @@ struct dmub_srv_region_params {
+  */
+ struct dmub_srv_region_info {
+ 	uint32_t fb_size;
++	uint32_t inbox_size;
+ 	uint8_t num_regions;
+ 	struct dmub_region regions[DMUB_WINDOW_TOTAL];
+ };
+ 
+ /**
+- * struct dmub_srv_fb_params - parameters used for driver fb setup
++ * struct dmub_srv_memory_params - parameters used for driver fb setup
+  * @region_info: region info calculated by dmub service
+- * @cpu_addr: base cpu address for the framebuffer
+- * @gpu_addr: base gpu virtual address for the framebuffer
++ * @cpu_fb_addr: base cpu address for the framebuffer
++ * @cpu_inbox_addr: base cpu address for the gart
++ * @gpu_fb_addr: base gpu virtual address for the framebuffer
++ * @gpu_inbox_addr: base gpu virtual address for the gart
+  */
+-struct dmub_srv_fb_params {
++struct dmub_srv_memory_params {
+ 	const struct dmub_srv_region_info *region_info;
+-	void *cpu_addr;
+-	uint64_t gpu_addr;
++	void *cpu_fb_addr;
++	void *cpu_inbox_addr;
++	uint64_t gpu_fb_addr;
++	uint64_t gpu_inbox_addr;
+ };
+ 
+ /**
+@@ -398,8 +404,8 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
+  *   DMUB_STATUS_OK - success
+  *   DMUB_STATUS_INVALID - unspecified error
+  */
+-enum dmub_status dmub_srv_calc_fb_info(struct dmub_srv *dmub,
+-				       const struct dmub_srv_fb_params *params,
++enum dmub_status dmub_srv_calc_mem_info(struct dmub_srv *dmub,
++				       const struct dmub_srv_memory_params *params,
+ 				       struct dmub_srv_fb_info *out);
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
+index 08da423b24a1c..56c671d21dc8d 100644
+--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
++++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
+@@ -250,7 +250,7 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
+ 	uint32_t fw_state_size = DMUB_FW_STATE_SIZE;
+ 	uint32_t trace_buffer_size = DMUB_TRACE_BUFFER_SIZE;
+ 	uint32_t scratch_mem_size = DMUB_SCRATCH_MEM_SIZE;
+-
++	uint32_t previous_top = 0;
+ 	if (!dmub->sw_init)
+ 		return DMUB_STATUS_INVALID;
+ 
+@@ -275,8 +275,15 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
+ 	bios->base = dmub_align(stack->top, 256);
+ 	bios->top = bios->base + params->vbios_size;
+ 
+-	mail->base = dmub_align(bios->top, 256);
+-	mail->top = mail->base + DMUB_MAILBOX_SIZE;
++	if (params->is_mailbox_in_inbox) {
++		mail->base = 0;
++		mail->top = mail->base + DMUB_MAILBOX_SIZE;
++		previous_top = bios->top;
++	} else {
++		mail->base = dmub_align(bios->top, 256);
++		mail->top = mail->base + DMUB_MAILBOX_SIZE;
++		previous_top = mail->top;
++	}
+ 
+ 	fw_info = dmub_get_fw_meta_info(params);
+ 
+@@ -295,7 +302,7 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
+ 			dmub->fw_version = fw_info->fw_version;
+ 	}
+ 
+-	trace_buff->base = dmub_align(mail->top, 256);
++	trace_buff->base = dmub_align(previous_top, 256);
+ 	trace_buff->top = trace_buff->base + dmub_align(trace_buffer_size, 64);
+ 
+ 	fw_state->base = dmub_align(trace_buff->top, 256);
+@@ -306,11 +313,14 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
+ 
+ 	out->fb_size = dmub_align(scratch_mem->top, 4096);
+ 
++	if (params->is_mailbox_in_inbox)
++		out->inbox_size = dmub_align(mail->top, 4096);
++
+ 	return DMUB_STATUS_OK;
+ }
+ 
+-enum dmub_status dmub_srv_calc_fb_info(struct dmub_srv *dmub,
+-				       const struct dmub_srv_fb_params *params,
++enum dmub_status dmub_srv_calc_mem_info(struct dmub_srv *dmub,
++				       const struct dmub_srv_memory_params *params,
+ 				       struct dmub_srv_fb_info *out)
+ {
+ 	uint8_t *cpu_base;
+@@ -325,8 +335,8 @@ enum dmub_status dmub_srv_calc_fb_info(struct dmub_srv *dmub,
+ 	if (params->region_info->num_regions != DMUB_NUM_WINDOWS)
+ 		return DMUB_STATUS_INVALID;
+ 
+-	cpu_base = (uint8_t *)params->cpu_addr;
+-	gpu_base = params->gpu_addr;
++	cpu_base = (uint8_t *)params->cpu_fb_addr;
++	gpu_base = params->gpu_fb_addr;
+ 
+ 	for (i = 0; i < DMUB_NUM_WINDOWS; ++i) {
+ 		const struct dmub_region *reg =
+@@ -334,6 +344,12 @@ enum dmub_status dmub_srv_calc_fb_info(struct dmub_srv *dmub,
+ 
+ 		out->fb[i].cpu_addr = cpu_base + reg->base;
+ 		out->fb[i].gpu_addr = gpu_base + reg->base;
++
++		if (i == DMUB_WINDOW_4_MAILBOX && params->cpu_inbox_addr != 0) {
++			out->fb[i].cpu_addr = (uint8_t *)params->cpu_inbox_addr + reg->base;
++			out->fb[i].gpu_addr = params->gpu_inbox_addr + reg->base;
++		}
++
+ 		out->fb[i].size = reg->top - reg->base;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/include/pptable.h b/drivers/gpu/drm/amd/include/pptable.h
+index 0b6a057e0a4c4..5aac8d545bdc6 100644
+--- a/drivers/gpu/drm/amd/include/pptable.h
++++ b/drivers/gpu/drm/amd/include/pptable.h
+@@ -78,7 +78,7 @@ typedef struct _ATOM_PPLIB_THERMALCONTROLLER
+ typedef struct _ATOM_PPLIB_STATE
+ {
+     UCHAR ucNonClockStateIndex;
+-    UCHAR ucClockStateIndices[1]; // variable-sized
++    UCHAR ucClockStateIndices[]; // variable-sized
+ } ATOM_PPLIB_STATE;
+ 
+ 
+@@ -473,7 +473,7 @@ typedef struct _ATOM_PPLIB_STATE_V2
+       /**
+       * Driver will read the first ucNumDPMLevels in this array
+       */
+-      UCHAR clockInfoIndex[1];
++      UCHAR clockInfoIndex[];
+ } ATOM_PPLIB_STATE_V2;
+ 
+ typedef struct _StateArray{
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index d58a59cf4f853..fb1aaf959e7c3 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -776,7 +776,7 @@ static ssize_t amdgpu_set_pp_od_clk_voltage(struct device *dev,
+ 	if (amdgpu_in_reset(adev))
+ 		return -EPERM;
+ 
+-	if (count > 127)
++	if (count > 127 || count == 0)
+ 		return -EINVAL;
+ 
+ 	if (*buf == 's')
+@@ -792,7 +792,8 @@ static ssize_t amdgpu_set_pp_od_clk_voltage(struct device *dev,
+ 	else
+ 		return -EINVAL;
+ 
+-	memcpy(buf_cpy, buf, count+1);
++	memcpy(buf_cpy, buf, count);
++	buf_cpy[count] = 0;
+ 
+ 	tmp_str = buf_cpy;
+ 
+@@ -807,6 +808,9 @@ static ssize_t amdgpu_set_pp_od_clk_voltage(struct device *dev,
+ 			return -EINVAL;
+ 		parameter_size++;
+ 
++		if (!tmp_str)
++			break;
++
+ 		while (isspace(*tmp_str))
+ 			tmp_str++;
+ 	}
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pptable_v1_0.h b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pptable_v1_0.h
+index 1e870f58dd12a..0c61e2bc14cde 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pptable_v1_0.h
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pptable_v1_0.h
+@@ -164,7 +164,7 @@ typedef struct _ATOM_Tonga_State {
+ typedef struct _ATOM_Tonga_State_Array {
+ 	UCHAR ucRevId;
+ 	UCHAR ucNumEntries;		/* Number of entries. */
+-	ATOM_Tonga_State entries[1];	/* Dynamically allocate entries. */
++	ATOM_Tonga_State entries[];	/* Dynamically allocate entries. */
+ } ATOM_Tonga_State_Array;
+ 
+ typedef struct _ATOM_Tonga_MCLK_Dependency_Record {
+@@ -179,7 +179,7 @@ typedef struct _ATOM_Tonga_MCLK_Dependency_Record {
+ typedef struct _ATOM_Tonga_MCLK_Dependency_Table {
+ 	UCHAR ucRevId;
+ 	UCHAR ucNumEntries; 										/* Number of entries. */
+-	ATOM_Tonga_MCLK_Dependency_Record entries[1];				/* Dynamically allocate entries. */
++	ATOM_Tonga_MCLK_Dependency_Record entries[];				/* Dynamically allocate entries. */
+ } ATOM_Tonga_MCLK_Dependency_Table;
+ 
+ typedef struct _ATOM_Tonga_SCLK_Dependency_Record {
+@@ -194,7 +194,7 @@ typedef struct _ATOM_Tonga_SCLK_Dependency_Record {
+ typedef struct _ATOM_Tonga_SCLK_Dependency_Table {
+ 	UCHAR ucRevId;
+ 	UCHAR ucNumEntries; 										/* Number of entries. */
+-	ATOM_Tonga_SCLK_Dependency_Record entries[1];				 /* Dynamically allocate entries. */
++	ATOM_Tonga_SCLK_Dependency_Record entries[];				 /* Dynamically allocate entries. */
+ } ATOM_Tonga_SCLK_Dependency_Table;
+ 
+ typedef struct _ATOM_Polaris_SCLK_Dependency_Record {
+@@ -210,7 +210,7 @@ typedef struct _ATOM_Polaris_SCLK_Dependency_Record {
+ typedef struct _ATOM_Polaris_SCLK_Dependency_Table {
+ 	UCHAR ucRevId;
+ 	UCHAR ucNumEntries;							/* Number of entries. */
+-	ATOM_Polaris_SCLK_Dependency_Record entries[1];				 /* Dynamically allocate entries. */
++	ATOM_Polaris_SCLK_Dependency_Record entries[];				 /* Dynamically allocate entries. */
+ } ATOM_Polaris_SCLK_Dependency_Table;
+ 
+ typedef struct _ATOM_Tonga_PCIE_Record {
+@@ -222,7 +222,7 @@ typedef struct _ATOM_Tonga_PCIE_Record {
+ typedef struct _ATOM_Tonga_PCIE_Table {
+ 	UCHAR ucRevId;
+ 	UCHAR ucNumEntries; 										/* Number of entries. */
+-	ATOM_Tonga_PCIE_Record entries[1];							/* Dynamically allocate entries. */
++	ATOM_Tonga_PCIE_Record entries[];							/* Dynamically allocate entries. */
+ } ATOM_Tonga_PCIE_Table;
+ 
+ typedef struct _ATOM_Polaris10_PCIE_Record {
+@@ -235,7 +235,7 @@ typedef struct _ATOM_Polaris10_PCIE_Record {
+ typedef struct _ATOM_Polaris10_PCIE_Table {
+ 	UCHAR ucRevId;
+ 	UCHAR ucNumEntries;                                         /* Number of entries. */
+-	ATOM_Polaris10_PCIE_Record entries[1];                      /* Dynamically allocate entries. */
++	ATOM_Polaris10_PCIE_Record entries[];                      /* Dynamically allocate entries. */
+ } ATOM_Polaris10_PCIE_Table;
+ 
+ 
+@@ -252,7 +252,7 @@ typedef struct _ATOM_Tonga_MM_Dependency_Record {
+ typedef struct _ATOM_Tonga_MM_Dependency_Table {
+ 	UCHAR ucRevId;
+ 	UCHAR ucNumEntries; 										/* Number of entries. */
+-	ATOM_Tonga_MM_Dependency_Record entries[1]; 			   /* Dynamically allocate entries. */
++	ATOM_Tonga_MM_Dependency_Record entries[]; 			   /* Dynamically allocate entries. */
+ } ATOM_Tonga_MM_Dependency_Table;
+ 
+ typedef struct _ATOM_Tonga_Voltage_Lookup_Record {
+@@ -265,7 +265,7 @@ typedef struct _ATOM_Tonga_Voltage_Lookup_Record {
+ typedef struct _ATOM_Tonga_Voltage_Lookup_Table {
+ 	UCHAR ucRevId;
+ 	UCHAR ucNumEntries; 										/* Number of entries. */
+-	ATOM_Tonga_Voltage_Lookup_Record entries[1];				/* Dynamically allocate entries. */
++	ATOM_Tonga_Voltage_Lookup_Record entries[];				/* Dynamically allocate entries. */
+ } ATOM_Tonga_Voltage_Lookup_Table;
+ 
+ typedef struct _ATOM_Tonga_Fan_Table {
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
+index c3cdf283ecefa..1e922703e26b2 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
+@@ -1223,7 +1223,7 @@ int komeda_build_display_data_flow(struct komeda_crtc *kcrtc,
+ 	return 0;
+ }
+ 
+-static void
++static int
+ komeda_pipeline_unbound_components(struct komeda_pipeline *pipe,
+ 				   struct komeda_pipeline_state *new)
+ {
+@@ -1243,8 +1243,12 @@ komeda_pipeline_unbound_components(struct komeda_pipeline *pipe,
+ 		c = komeda_pipeline_get_component(pipe, id);
+ 		c_st = komeda_component_get_state_and_set_user(c,
+ 				drm_st, NULL, new->crtc);
++		if (PTR_ERR(c_st) == -EDEADLK)
++			return -EDEADLK;
+ 		WARN_ON(IS_ERR(c_st));
+ 	}
++
++	return 0;
+ }
+ 
+ /* release unclaimed pipeline resource */
+@@ -1266,9 +1270,8 @@ int komeda_release_unclaimed_resources(struct komeda_pipeline *pipe,
+ 	if (WARN_ON(IS_ERR_OR_NULL(st)))
+ 		return -EINVAL;
+ 
+-	komeda_pipeline_unbound_components(pipe, st);
++	return komeda_pipeline_unbound_components(pipe, st);
+ 
+-	return 0;
+ }
+ 
+ /* Since standalong disabled components must be disabled separately and in the
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 4e8a19114e87d..93a2ee0f772fc 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -264,26 +264,9 @@ int dp_panel_get_modes(struct dp_panel *dp_panel,
+ 
+ static u8 dp_panel_get_edid_checksum(struct edid *edid)
+ {
+-	struct edid *last_block;
+-	u8 *raw_edid;
+-	bool is_edid_corrupt = false;
++	edid += edid->extensions;
+ 
+-	if (!edid) {
+-		DRM_ERROR("invalid edid input\n");
+-		return 0;
+-	}
+-
+-	raw_edid = (u8 *)edid;
+-	raw_edid += (edid->extensions * EDID_LENGTH);
+-	last_block = (struct edid *)raw_edid;
+-
+-	/* block type extension */
+-	drm_edid_block_valid(raw_edid, 1, false, &is_edid_corrupt);
+-	if (!is_edid_corrupt)
+-		return last_block->checksum;
+-
+-	DRM_ERROR("Invalid block, no checksum\n");
+-	return 0;
++	return edid->checksum;
+ }
+ 
+ void dp_panel_handle_sink_request(struct dp_panel *dp_panel)
+diff --git a/drivers/gpu/drm/panel/panel-arm-versatile.c b/drivers/gpu/drm/panel/panel-arm-versatile.c
+index abb0788843c60..503ecea72c5ea 100644
+--- a/drivers/gpu/drm/panel/panel-arm-versatile.c
++++ b/drivers/gpu/drm/panel/panel-arm-versatile.c
+@@ -267,6 +267,8 @@ static int versatile_panel_get_modes(struct drm_panel *panel,
+ 	connector->display_info.bus_flags = vpanel->panel_type->bus_flags;
+ 
+ 	mode = drm_mode_duplicate(connector->dev, &vpanel->panel_type->mode);
++	if (!mode)
++		return -ENOMEM;
+ 	drm_mode_set_name(mode);
+ 	mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
+ 
+diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7703.c b/drivers/gpu/drm/panel/panel-sitronix-st7703.c
+index c22e7c49e0778..67e1da0a7db53 100644
+--- a/drivers/gpu/drm/panel/panel-sitronix-st7703.c
++++ b/drivers/gpu/drm/panel/panel-sitronix-st7703.c
+@@ -428,29 +428,30 @@ static int st7703_prepare(struct drm_panel *panel)
+ 		return 0;
+ 
+ 	dev_dbg(ctx->dev, "Resetting the panel\n");
+-	ret = regulator_enable(ctx->vcc);
++	gpiod_set_value_cansleep(ctx->reset_gpio, 1);
++
++	ret = regulator_enable(ctx->iovcc);
+ 	if (ret < 0) {
+-		dev_err(ctx->dev, "Failed to enable vcc supply: %d\n", ret);
++		dev_err(ctx->dev, "Failed to enable iovcc supply: %d\n", ret);
+ 		return ret;
+ 	}
+-	ret = regulator_enable(ctx->iovcc);
++
++	ret = regulator_enable(ctx->vcc);
+ 	if (ret < 0) {
+-		dev_err(ctx->dev, "Failed to enable iovcc supply: %d\n", ret);
+-		goto disable_vcc;
++		dev_err(ctx->dev, "Failed to enable vcc supply: %d\n", ret);
++		regulator_disable(ctx->iovcc);
++		return ret;
+ 	}
+ 
+-	gpiod_set_value_cansleep(ctx->reset_gpio, 1);
+-	usleep_range(20, 40);
++	/* Give power supplies time to stabilize before deasserting reset. */
++	usleep_range(10000, 20000);
++
+ 	gpiod_set_value_cansleep(ctx->reset_gpio, 0);
+-	msleep(20);
++	usleep_range(15000, 20000);
+ 
+ 	ctx->prepared = true;
+ 
+ 	return 0;
+-
+-disable_vcc:
+-	regulator_disable(ctx->vcc);
+-	return ret;
+ }
+ 
+ static int st7703_get_modes(struct drm_panel *panel,
+diff --git a/drivers/gpu/drm/panel/panel-tpo-tpg110.c b/drivers/gpu/drm/panel/panel-tpo-tpg110.c
+index d57ed75a977c3..494cec50a682b 100644
+--- a/drivers/gpu/drm/panel/panel-tpo-tpg110.c
++++ b/drivers/gpu/drm/panel/panel-tpo-tpg110.c
+@@ -378,6 +378,8 @@ static int tpg110_get_modes(struct drm_panel *panel,
+ 	connector->display_info.bus_flags = tpg->panel_mode->bus_flags;
+ 
+ 	mode = drm_mode_duplicate(connector->dev, &tpg->panel_mode->mode);
++	if (!mode)
++		return -ENOMEM;
+ 	drm_mode_set_name(mode);
+ 	mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
+ 
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 6712d99ad80da..7c688d7f8ccff 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -345,6 +345,7 @@
+ 
+ #define USB_VENDOR_ID_DELL				0x413c
+ #define USB_DEVICE_ID_DELL_PIXART_USB_OPTICAL_MOUSE	0x301a
++#define USB_DEVICE_ID_DELL_PRO_WIRELESS_KM5221W		0x4503
+ 
+ #define USB_VENDOR_ID_DELORME		0x1163
+ #define USB_DEVICE_ID_DELORME_EARTHMATE	0x0100
+diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c
+index 0ff03fed97709..71f7b0d539df5 100644
+--- a/drivers/hid/hid-lenovo.c
++++ b/drivers/hid/hid-lenovo.c
+@@ -50,7 +50,12 @@ struct lenovo_drvdata {
+ 	int select_right;
+ 	int sensitivity;
+ 	int press_speed;
+-	u8 middlebutton_state; /* 0:Up, 1:Down (undecided), 2:Scrolling */
++	/* 0: Up
++	 * 1: Down (undecided)
++	 * 2: Scrolling
++	 * 3: Patched firmware, disable workaround
++	 */
++	u8 middlebutton_state;
+ 	bool fn_lock;
+ };
+ 
+@@ -478,31 +483,48 @@ static int lenovo_event_cptkbd(struct hid_device *hdev,
+ {
+ 	struct lenovo_drvdata *cptkbd_data = hid_get_drvdata(hdev);
+ 
+-	/* "wheel" scroll events */
+-	if (usage->type == EV_REL && (usage->code == REL_WHEEL ||
+-			usage->code == REL_HWHEEL)) {
+-		/* Scroll events disable middle-click event */
+-		cptkbd_data->middlebutton_state = 2;
+-		return 0;
+-	}
++	if (cptkbd_data->middlebutton_state != 3) {
++		/* REL_X and REL_Y events during middle button pressed
++		 * are only possible on patched, bug-free firmware
++		 * so set middlebutton_state to 3
++		 * to never apply workaround anymore
++		 */
++		if (cptkbd_data->middlebutton_state == 1 &&
++				usage->type == EV_REL &&
++				(usage->code == REL_X || usage->code == REL_Y)) {
++			cptkbd_data->middlebutton_state = 3;
++			/* send middle button press which was hold before */
++			input_event(field->hidinput->input,
++				EV_KEY, BTN_MIDDLE, 1);
++			input_sync(field->hidinput->input);
++		}
+ 
+-	/* Middle click events */
+-	if (usage->type == EV_KEY && usage->code == BTN_MIDDLE) {
+-		if (value == 1) {
+-			cptkbd_data->middlebutton_state = 1;
+-		} else if (value == 0) {
+-			if (cptkbd_data->middlebutton_state == 1) {
+-				/* No scrolling inbetween, send middle-click */
+-				input_event(field->hidinput->input,
+-					EV_KEY, BTN_MIDDLE, 1);
+-				input_sync(field->hidinput->input);
+-				input_event(field->hidinput->input,
+-					EV_KEY, BTN_MIDDLE, 0);
+-				input_sync(field->hidinput->input);
++		/* "wheel" scroll events */
++		if (usage->type == EV_REL && (usage->code == REL_WHEEL ||
++				usage->code == REL_HWHEEL)) {
++			/* Scroll events disable middle-click event */
++			cptkbd_data->middlebutton_state = 2;
++			return 0;
++		}
++
++		/* Middle click events */
++		if (usage->type == EV_KEY && usage->code == BTN_MIDDLE) {
++			if (value == 1) {
++				cptkbd_data->middlebutton_state = 1;
++			} else if (value == 0) {
++				if (cptkbd_data->middlebutton_state == 1) {
++					/* No scrolling inbetween, send middle-click */
++					input_event(field->hidinput->input,
++						EV_KEY, BTN_MIDDLE, 1);
++					input_sync(field->hidinput->input);
++					input_event(field->hidinput->input,
++						EV_KEY, BTN_MIDDLE, 0);
++					input_sync(field->hidinput->input);
++				}
++				cptkbd_data->middlebutton_state = 0;
+ 			}
+-			cptkbd_data->middlebutton_state = 0;
++			return 1;
+ 		}
+-		return 1;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 4229e5de06745..787349f2de01d 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -66,6 +66,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_STRAFE), HID_QUIRK_NO_INIT_REPORTS | HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CREATIVELABS, USB_DEVICE_ID_CREATIVE_SB_OMNI_SURROUND_51), HID_QUIRK_NOGET },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DELL, USB_DEVICE_ID_DELL_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_DELL, USB_DEVICE_ID_DELL_PRO_WIRELESS_KM5221W), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC), HID_QUIRK_NOGET },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DRACAL_RAPHNET, USB_DEVICE_ID_RAPHNET_2NES2SNES), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DRACAL_RAPHNET, USB_DEVICE_ID_RAPHNET_4NES4SNES), HID_QUIRK_MULTI_INPUT },
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index 106080b25e81c..8a3d35cacdcb5 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -362,10 +362,16 @@ i2c_dw_xfer_msg(struct dw_i2c_dev *dev)
+ 
+ 		/*
+ 		 * Because we don't know the buffer length in the
+-		 * I2C_FUNC_SMBUS_BLOCK_DATA case, we can't stop
+-		 * the transaction here.
++		 * I2C_FUNC_SMBUS_BLOCK_DATA case, we can't stop the
++		 * transaction here. Also disable the TX_EMPTY IRQ
++		 * while waiting for the data length byte to avoid the
++		 * bogus interrupts flood.
+ 		 */
+-		if (buf_len > 0 || flags & I2C_M_RECV_LEN) {
++		if (flags & I2C_M_RECV_LEN) {
++			dev->status |= STATUS_WRITE_IN_PROGRESS;
++			intr_mask &= ~DW_IC_INTR_TX_EMPTY;
++			break;
++		} else if (buf_len > 0) {
+ 			/* more bytes to be written */
+ 			dev->status |= STATUS_WRITE_IN_PROGRESS;
+ 			break;
+@@ -401,6 +407,13 @@ i2c_dw_recv_len(struct dw_i2c_dev *dev, u8 len)
+ 	msgs[dev->msg_read_idx].len = len;
+ 	msgs[dev->msg_read_idx].flags &= ~I2C_M_RECV_LEN;
+ 
++	/*
++	 * Received buffer length, re-enable TX_EMPTY interrupt
++	 * to resume the SMBUS transaction.
++	 */
++	regmap_update_bits(dev->map, DW_IC_INTR_MASK, DW_IC_INTR_TX_EMPTY,
++			   DW_IC_INTR_TX_EMPTY);
++
+ 	return len;
+ }
+ 
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 4aec451d72510..cb8f560225928 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -735,15 +735,11 @@ static int i801_block_transaction_byte_by_byte(struct i801_priv *priv,
+ 		return i801_check_post(priv, status);
+ 	}
+ 
+-	for (i = 1; i <= len; i++) {
+-		if (i == len && read_write == I2C_SMBUS_READ)
+-			smbcmd |= SMBHSTCNT_LAST_BYTE;
+-		outb_p(smbcmd, SMBHSTCNT(priv));
+-
+-		if (i == 1)
+-			outb_p(inb(SMBHSTCNT(priv)) | SMBHSTCNT_START,
+-			       SMBHSTCNT(priv));
++	if (len == 1 && read_write == I2C_SMBUS_READ)
++		smbcmd |= SMBHSTCNT_LAST_BYTE;
++	outb_p(smbcmd | SMBHSTCNT_START, SMBHSTCNT(priv));
+ 
++	for (i = 1; i <= len; i++) {
+ 		status = i801_wait_byte_done(priv);
+ 		if (status)
+ 			goto exit;
+@@ -766,9 +762,12 @@ static int i801_block_transaction_byte_by_byte(struct i801_priv *priv,
+ 			data->block[0] = len;
+ 		}
+ 
+-		/* Retrieve/store value in SMBBLKDAT */
+-		if (read_write == I2C_SMBUS_READ)
++		if (read_write == I2C_SMBUS_READ) {
+ 			data->block[i] = inb_p(SMBBLKDAT(priv));
++			if (i == len - 1)
++				outb_p(smbcmd | SMBHSTCNT_LAST_BYTE, SMBHSTCNT(priv));
++		}
++
+ 		if (read_write == I2C_SMBUS_WRITE && i+1 <= len)
+ 			outb_p(data->block[i+1], SMBBLKDAT(priv));
+ 
+diff --git a/drivers/i2c/busses/i2c-sun6i-p2wi.c b/drivers/i2c/busses/i2c-sun6i-p2wi.c
+index 2f6f6468214dd..4f7a4f5a1150a 100644
+--- a/drivers/i2c/busses/i2c-sun6i-p2wi.c
++++ b/drivers/i2c/busses/i2c-sun6i-p2wi.c
+@@ -201,6 +201,11 @@ static int p2wi_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
++	if (clk_freq == 0) {
++		dev_err(dev, "clock-frequency is set to 0 in DT\n");
++		return -EINVAL;
++	}
++
+ 	if (of_get_child_count(np) > 1) {
+ 		dev_err(dev, "P2WI only supports one slave device\n");
+ 		return -EINVAL;
+diff --git a/drivers/i2c/i2c-core.h b/drivers/i2c/i2c-core.h
+index 8ce261167a2d3..ea17f13b44c84 100644
+--- a/drivers/i2c/i2c-core.h
++++ b/drivers/i2c/i2c-core.h
+@@ -29,7 +29,7 @@ int i2c_dev_irq_from_resources(const struct resource *resources,
+  */
+ static inline bool i2c_in_atomic_xfer_mode(void)
+ {
+-	return system_state > SYSTEM_RUNNING && irqs_disabled();
++	return system_state > SYSTEM_RUNNING && !preemptible();
+ }
+ 
+ static inline int __i2c_lock_bus_helper(struct i2c_adapter *adap)
+diff --git a/drivers/i3c/master/i3c-master-cdns.c b/drivers/i3c/master/i3c-master-cdns.c
+index 3f2226928fe05..6b9df33ac5618 100644
+--- a/drivers/i3c/master/i3c-master-cdns.c
++++ b/drivers/i3c/master/i3c-master-cdns.c
+@@ -192,7 +192,7 @@
+ #define SLV_STATUS1_HJ_DIS		BIT(18)
+ #define SLV_STATUS1_MR_DIS		BIT(17)
+ #define SLV_STATUS1_PROT_ERR		BIT(16)
+-#define SLV_STATUS1_DA(x)		(((s) & GENMASK(15, 9)) >> 9)
++#define SLV_STATUS1_DA(s)		(((s) & GENMASK(15, 9)) >> 9)
+ #define SLV_STATUS1_HAS_DA		BIT(8)
+ #define SLV_STATUS1_DDR_RX_FULL		BIT(7)
+ #define SLV_STATUS1_DDR_TX_FULL		BIT(6)
+@@ -1622,13 +1622,13 @@ static int cdns_i3c_master_probe(struct platform_device *pdev)
+ 	/* Device ID0 is reserved to describe this master. */
+ 	master->maxdevs = CONF_STATUS0_DEVS_NUM(val);
+ 	master->free_rr_slots = GENMASK(master->maxdevs, 1);
++	master->caps.ibirfifodepth = CONF_STATUS0_IBIR_DEPTH(val);
++	master->caps.cmdrfifodepth = CONF_STATUS0_CMDR_DEPTH(val);
+ 
+ 	val = readl(master->regs + CONF_STATUS1);
+ 	master->caps.cmdfifodepth = CONF_STATUS1_CMD_DEPTH(val);
+ 	master->caps.rxfifodepth = CONF_STATUS1_RX_DEPTH(val);
+ 	master->caps.txfifodepth = CONF_STATUS1_TX_DEPTH(val);
+-	master->caps.ibirfifodepth = CONF_STATUS0_IBIR_DEPTH(val);
+-	master->caps.cmdrfifodepth = CONF_STATUS0_CMDR_DEPTH(val);
+ 
+ 	spin_lock_init(&master->ibi.lock);
+ 	master->ibi.num_slots = CONF_STATUS1_IBI_HW_RES(val);
+diff --git a/drivers/infiniband/hw/hfi1/pcie.c b/drivers/infiniband/hw/hfi1/pcie.c
+index 18d32f053d26e..3aa0215fca419 100644
+--- a/drivers/infiniband/hw/hfi1/pcie.c
++++ b/drivers/infiniband/hw/hfi1/pcie.c
+@@ -45,6 +45,7 @@
+  *
+  */
+ 
++#include <linux/bitfield.h>
+ #include <linux/pci.h>
+ #include <linux/io.h>
+ #include <linux/delay.h>
+@@ -261,12 +262,6 @@ static u32 extract_speed(u16 linkstat)
+ 	return speed;
+ }
+ 
+-/* return the PCIe link speed from the given link status */
+-static u32 extract_width(u16 linkstat)
+-{
+-	return (linkstat & PCI_EXP_LNKSTA_NLW) >> PCI_EXP_LNKSTA_NLW_SHIFT;
+-}
+-
+ /* read the link status and set dd->{lbus_width,lbus_speed,lbus_info} */
+ static void update_lbus_info(struct hfi1_devdata *dd)
+ {
+@@ -279,7 +274,7 @@ static void update_lbus_info(struct hfi1_devdata *dd)
+ 		return;
+ 	}
+ 
+-	dd->lbus_width = extract_width(linkstat);
++	dd->lbus_width = FIELD_GET(PCI_EXP_LNKSTA_NLW, linkstat);
+ 	dd->lbus_speed = extract_speed(linkstat);
+ 	snprintf(dd->lbus_info, sizeof(dd->lbus_info),
+ 		 "PCIe,%uMHz,x%u", dd->lbus_speed, dd->lbus_width);
+diff --git a/drivers/interconnect/qcom/bcm-voter.c b/drivers/interconnect/qcom/bcm-voter.c
+index 3c0809095a31c..320e418cf753e 100644
+--- a/drivers/interconnect/qcom/bcm-voter.c
++++ b/drivers/interconnect/qcom/bcm-voter.c
+@@ -90,6 +90,11 @@ static void bcm_aggregate(struct qcom_icc_bcm *bcm)
+ 
+ 		temp = agg_peak[bucket] * bcm->vote_scale;
+ 		bcm->vote_y[bucket] = bcm_div(temp, bcm->aux_data.unit);
++
++		if (bcm->enable_mask && (bcm->vote_x[bucket] || bcm->vote_y[bucket])) {
++			bcm->vote_x[bucket] = 0;
++			bcm->vote_y[bucket] = bcm->enable_mask;
++		}
+ 	}
+ 
+ 	if (bcm->keepalive && bcm->vote_x[QCOM_ICC_BUCKET_AMC] == 0 &&
+diff --git a/drivers/interconnect/qcom/icc-rpmh.h b/drivers/interconnect/qcom/icc-rpmh.h
+index e5f61ab989e71..029a350c2884f 100644
+--- a/drivers/interconnect/qcom/icc-rpmh.h
++++ b/drivers/interconnect/qcom/icc-rpmh.h
+@@ -81,6 +81,7 @@ struct qcom_icc_node {
+  * @vote_x: aggregated threshold values, represents sum_bw when @type is bw bcm
+  * @vote_y: aggregated threshold values, represents peak_bw when @type is bw bcm
+  * @vote_scale: scaling factor for vote_x and vote_y
++ * @enable_mask: optional mask to send as vote instead of vote_x/vote_y
+  * @dirty: flag used to indicate whether the bcm needs to be committed
+  * @keepalive: flag used to indicate whether a keepalive is required
+  * @aux_data: auxiliary data used when calculating threshold values and
+@@ -97,6 +98,7 @@ struct qcom_icc_bcm {
+ 	u64 vote_x[QCOM_ICC_NUM_BUCKETS];
+ 	u64 vote_y[QCOM_ICC_NUM_BUCKETS];
+ 	u64 vote_scale;
++	u32 enable_mask;
+ 	bool dirty;
+ 	bool keepalive;
+ 	struct bcm_db aux_data;
+diff --git a/drivers/mcb/mcb-core.c b/drivers/mcb/mcb-core.c
+index 806a3d9da12f2..2ca69184ca1ae 100644
+--- a/drivers/mcb/mcb-core.c
++++ b/drivers/mcb/mcb-core.c
+@@ -248,6 +248,7 @@ int mcb_device_register(struct mcb_bus *bus, struct mcb_device *dev)
+ 	return 0;
+ 
+ out:
++	put_device(&dev->dev);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/mcb/mcb-parse.c b/drivers/mcb/mcb-parse.c
+index 656b6b71c7682..1ae37e693de04 100644
+--- a/drivers/mcb/mcb-parse.c
++++ b/drivers/mcb/mcb-parse.c
+@@ -106,7 +106,7 @@ static int chameleon_parse_gdd(struct mcb_bus *bus,
+ 	return 0;
+ 
+ err:
+-	put_device(&mdev->dev);
++	mcb_free_dev(mdev);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/pci/cobalt/cobalt-driver.c b/drivers/media/pci/cobalt/cobalt-driver.c
+index 1bd8bbe57a30e..1f230b14cbfdd 100644
+--- a/drivers/media/pci/cobalt/cobalt-driver.c
++++ b/drivers/media/pci/cobalt/cobalt-driver.c
+@@ -8,6 +8,7 @@
+  *  All rights reserved.
+  */
+ 
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+ #include <media/i2c/adv7604.h>
+ #include <media/i2c/adv7842.h>
+@@ -210,17 +211,17 @@ void cobalt_pcie_status_show(struct cobalt *cobalt)
+ 	pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &stat);
+ 	cobalt_info("PCIe link capability 0x%08x: %s per lane and %u lanes\n",
+ 			capa, get_link_speed(capa),
+-			(capa & PCI_EXP_LNKCAP_MLW) >> 4);
++			FIELD_GET(PCI_EXP_LNKCAP_MLW, capa));
+ 	cobalt_info("PCIe link control 0x%04x\n", ctrl);
+ 	cobalt_info("PCIe link status 0x%04x: %s per lane and %u lanes\n",
+ 		    stat, get_link_speed(stat),
+-		    (stat & PCI_EXP_LNKSTA_NLW) >> 4);
++		    FIELD_GET(PCI_EXP_LNKSTA_NLW, stat));
+ 
+ 	/* Bus */
+ 	pcie_capability_read_dword(pci_bus_dev, PCI_EXP_LNKCAP, &capa);
+ 	cobalt_info("PCIe bus link capability 0x%08x: %s per lane and %u lanes\n",
+ 			capa, get_link_speed(capa),
+-			(capa & PCI_EXP_LNKCAP_MLW) >> 4);
++			FIELD_GET(PCI_EXP_LNKCAP_MLW, capa));
+ 
+ 	/* Slot */
+ 	pcie_capability_read_dword(pci_dev, PCI_EXP_SLTCAP, &capa);
+@@ -239,7 +240,7 @@ static unsigned pcie_link_get_lanes(struct cobalt *cobalt)
+ 	if (!pci_is_pcie(pci_dev))
+ 		return 0;
+ 	pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &link);
+-	return (link & PCI_EXP_LNKSTA_NLW) >> 4;
++	return FIELD_GET(PCI_EXP_LNKSTA_NLW, link);
+ }
+ 
+ static unsigned pcie_bus_link_get_lanes(struct cobalt *cobalt)
+@@ -250,7 +251,7 @@ static unsigned pcie_bus_link_get_lanes(struct cobalt *cobalt)
+ 	if (!pci_is_pcie(pci_dev))
+ 		return 0;
+ 	pcie_capability_read_dword(pci_dev, PCI_EXP_LNKCAP, &link);
+-	return (link & PCI_EXP_LNKCAP_MLW) >> 4;
++	return FIELD_GET(PCI_EXP_LNKCAP_MLW, link);
+ }
+ 
+ static void msi_config_show(struct cobalt *cobalt, struct pci_dev *pci_dev)
+diff --git a/drivers/media/platform/qcom/camss/camss-vfe.c b/drivers/media/platform/qcom/camss/camss-vfe.c
+index b7d2293a5004f..a9fee4a2e214d 100644
+--- a/drivers/media/platform/qcom/camss/camss-vfe.c
++++ b/drivers/media/platform/qcom/camss/camss-vfe.c
+@@ -1282,7 +1282,7 @@ static int vfe_get(struct vfe_device *vfe)
+ 	} else {
+ 		ret = vfe_check_clock_rates(vfe);
+ 		if (ret < 0)
+-			goto error_pm_runtime_get;
++			goto error_pm_domain;
+ 	}
+ 	vfe->power_count++;
+ 
+diff --git a/drivers/media/platform/qcom/venus/hfi_msgs.c b/drivers/media/platform/qcom/venus/hfi_msgs.c
+index 06a1908ca225f..1d9b743062f16 100644
+--- a/drivers/media/platform/qcom/venus/hfi_msgs.c
++++ b/drivers/media/platform/qcom/venus/hfi_msgs.c
+@@ -351,7 +351,7 @@ session_get_prop_buf_req(struct hfi_msg_session_property_info_pkt *pkt,
+ 		memcpy(&bufreq[idx], buf_req, sizeof(*bufreq));
+ 		idx++;
+ 
+-		if (idx > HFI_BUFFER_TYPE_MAX)
++		if (idx >= HFI_BUFFER_TYPE_MAX)
+ 			return HFI_ERR_SESSION_INVALID_PARAMETER;
+ 
+ 		req_bytes -= sizeof(struct hfi_buffer_requirements);
+diff --git a/drivers/media/platform/qcom/venus/hfi_parser.c b/drivers/media/platform/qcom/venus/hfi_parser.c
+index 2dcf7eaea4ce2..e13e555b35150 100644
+--- a/drivers/media/platform/qcom/venus/hfi_parser.c
++++ b/drivers/media/platform/qcom/venus/hfi_parser.c
+@@ -19,6 +19,9 @@ static void init_codecs(struct venus_core *core)
+ 	struct venus_caps *caps = core->caps, *cap;
+ 	unsigned long bit;
+ 
++	if (hweight_long(core->dec_codecs) + hweight_long(core->enc_codecs) > MAX_CODEC_NUM)
++		return;
++
+ 	for_each_set_bit(bit, &core->dec_codecs, MAX_CODEC_NUM) {
+ 		cap = &caps[core->codecs_count++];
+ 		cap->codec = BIT(bit);
+@@ -86,6 +89,9 @@ static void fill_profile_level(struct venus_caps *cap, const void *data,
+ {
+ 	const struct hfi_profile_level *pl = data;
+ 
++	if (cap->num_pl + num >= HFI_MAX_PROFILE_COUNT)
++		return;
++
+ 	memcpy(&cap->pl[cap->num_pl], pl, num * sizeof(*pl));
+ 	cap->num_pl += num;
+ }
+@@ -111,6 +117,9 @@ fill_caps(struct venus_caps *cap, const void *data, unsigned int num)
+ {
+ 	const struct hfi_capability *caps = data;
+ 
++	if (cap->num_caps + num >= MAX_CAP_ENTRIES)
++		return;
++
+ 	memcpy(&cap->caps[cap->num_caps], caps, num * sizeof(*caps));
+ 	cap->num_caps += num;
+ }
+@@ -137,6 +146,9 @@ static void fill_raw_fmts(struct venus_caps *cap, const void *fmts,
+ {
+ 	const struct raw_formats *formats = fmts;
+ 
++	if (cap->num_fmts + num_fmts >= MAX_FMT_ENTRIES)
++		return;
++
+ 	memcpy(&cap->fmts[cap->num_fmts], formats, num_fmts * sizeof(*formats));
+ 	cap->num_fmts += num_fmts;
+ }
+@@ -159,6 +171,9 @@ parse_raw_formats(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ 		rawfmts[i].buftype = fmt->buffer_type;
+ 		i++;
+ 
++		if (i >= MAX_FMT_ENTRIES)
++			return;
++
+ 		if (pinfo->num_planes > MAX_PLANES)
+ 			break;
+ 
+diff --git a/drivers/media/platform/qcom/venus/hfi_venus.c b/drivers/media/platform/qcom/venus/hfi_venus.c
+index 9d939f63d16f4..8b1375c97c81f 100644
+--- a/drivers/media/platform/qcom/venus/hfi_venus.c
++++ b/drivers/media/platform/qcom/venus/hfi_venus.c
+@@ -206,6 +206,11 @@ static int venus_write_queue(struct venus_hfi_device *hdev,
+ 
+ 	new_wr_idx = wr_idx + dwords;
+ 	wr_ptr = (u32 *)(queue->qmem.kva + (wr_idx << 2));
++
++	if (wr_ptr < (u32 *)queue->qmem.kva ||
++	    wr_ptr > (u32 *)(queue->qmem.kva + queue->qmem.size - sizeof(*wr_ptr)))
++		return -EINVAL;
++
+ 	if (new_wr_idx < qsize) {
+ 		memcpy(wr_ptr, packet, dwords << 2);
+ 	} else {
+@@ -273,6 +278,11 @@ static int venus_read_queue(struct venus_hfi_device *hdev,
+ 	}
+ 
+ 	rd_ptr = (u32 *)(queue->qmem.kva + (rd_idx << 2));
++
++	if (rd_ptr < (u32 *)queue->qmem.kva ||
++	    rd_ptr > (u32 *)(queue->qmem.kva + queue->qmem.size - sizeof(*rd_ptr)))
++		return -EINVAL;
++
+ 	dwords = *rd_ptr >> 2;
+ 	if (!dwords)
+ 		return -EINVAL;
+diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c
+index 98a38755c694e..253a1d1a840a0 100644
+--- a/drivers/media/rc/imon.c
++++ b/drivers/media/rc/imon.c
+@@ -2430,6 +2430,12 @@ static int imon_probe(struct usb_interface *interface,
+ 		goto fail;
+ 	}
+ 
++	if (first_if->dev.driver != interface->dev.driver) {
++		dev_err(&interface->dev, "inconsistent driver matching\n");
++		ret = -EINVAL;
++		goto fail;
++	}
++
+ 	if (ifnum == 0) {
+ 		ictx = imon_init_intf0(interface, id);
+ 		if (!ictx) {
+diff --git a/drivers/media/rc/ir-sharp-decoder.c b/drivers/media/rc/ir-sharp-decoder.c
+index d09c38c07dbdb..053151cd8f214 100644
+--- a/drivers/media/rc/ir-sharp-decoder.c
++++ b/drivers/media/rc/ir-sharp-decoder.c
+@@ -15,7 +15,9 @@
+ #define SHARP_UNIT		40  /* us */
+ #define SHARP_BIT_PULSE		(8    * SHARP_UNIT) /* 320us */
+ #define SHARP_BIT_0_PERIOD	(25   * SHARP_UNIT) /* 1ms (680us space) */
+-#define SHARP_BIT_1_PERIOD	(50   * SHARP_UNIT) /* 2ms (1680ms space) */
++#define SHARP_BIT_1_PERIOD	(50   * SHARP_UNIT) /* 2ms (1680us space) */
++#define SHARP_BIT_0_SPACE	(17   * SHARP_UNIT) /* 680us space */
++#define SHARP_BIT_1_SPACE	(42   * SHARP_UNIT) /* 1680us space */
+ #define SHARP_ECHO_SPACE	(1000 * SHARP_UNIT) /* 40 ms */
+ #define SHARP_TRAILER_SPACE	(125  * SHARP_UNIT) /* 5 ms (even longer) */
+ 
+@@ -168,8 +170,8 @@ static const struct ir_raw_timings_pd ir_sharp_timings = {
+ 	.header_pulse  = 0,
+ 	.header_space  = 0,
+ 	.bit_pulse     = SHARP_BIT_PULSE,
+-	.bit_space[0]  = SHARP_BIT_0_PERIOD,
+-	.bit_space[1]  = SHARP_BIT_1_PERIOD,
++	.bit_space[0]  = SHARP_BIT_0_SPACE,
++	.bit_space[1]  = SHARP_BIT_1_SPACE,
+ 	.trailer_pulse = SHARP_BIT_PULSE,
+ 	.trailer_space = SHARP_ECHO_SPACE,
+ 	.msb_first     = 1,
+diff --git a/drivers/media/rc/lirc_dev.c b/drivers/media/rc/lirc_dev.c
+index 220363b9a868d..9c888047fa994 100644
+--- a/drivers/media/rc/lirc_dev.c
++++ b/drivers/media/rc/lirc_dev.c
+@@ -286,7 +286,11 @@ static ssize_t lirc_transmit(struct file *file, const char __user *buf,
+ 		if (ret < 0)
+ 			goto out_kfree_raw;
+ 
+-		count = ret;
++		/* drop trailing space */
++		if (!(ret % 2))
++			count = ret - 1;
++		else
++			count = ret;
+ 
+ 		txbuf = kmalloc_array(count, sizeof(unsigned int), GFP_KERNEL);
+ 		if (!txbuf) {
+diff --git a/drivers/media/test-drivers/vivid/vivid-rds-gen.c b/drivers/media/test-drivers/vivid/vivid-rds-gen.c
+index b5b104ee64c99..c57771119a34b 100644
+--- a/drivers/media/test-drivers/vivid/vivid-rds-gen.c
++++ b/drivers/media/test-drivers/vivid/vivid-rds-gen.c
+@@ -145,7 +145,7 @@ void vivid_rds_gen_fill(struct vivid_rds_gen *rds, unsigned freq,
+ 	rds->ta = alt;
+ 	rds->ms = true;
+ 	snprintf(rds->psname, sizeof(rds->psname), "%6d.%1d",
+-		 freq / 16, ((freq & 0xf) * 10) / 16);
++		 (freq / 16) % 1000000, (((freq & 0xf) * 10) / 16) % 10);
+ 	if (alt)
+ 		strscpy(rds->radiotext,
+ 			" The Radio Data System can switch between different Radio Texts ",
+diff --git a/drivers/media/usb/gspca/cpia1.c b/drivers/media/usb/gspca/cpia1.c
+index d93d384286c16..de945e13c7c6b 100644
+--- a/drivers/media/usb/gspca/cpia1.c
++++ b/drivers/media/usb/gspca/cpia1.c
+@@ -18,6 +18,7 @@
+ 
+ #include <linux/input.h>
+ #include <linux/sched/signal.h>
++#include <linux/bitops.h>
+ 
+ #include "gspca.h"
+ 
+@@ -1027,6 +1028,8 @@ static int set_flicker(struct gspca_dev *gspca_dev, int on, int apply)
+ 			sd->params.exposure.expMode = 2;
+ 			sd->exposure_status = EXPOSURE_NORMAL;
+ 		}
++		if (sd->params.exposure.gain >= BITS_PER_TYPE(currentexp))
++			return -EINVAL;
+ 		currentexp = currentexp << sd->params.exposure.gain;
+ 		sd->params.exposure.gain = 0;
+ 		/* round down current exposure to nearest value */
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index 6c4c85eb71479..b4a07a166605a 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -79,6 +79,7 @@
+ #define PCI_DEVICE_ID_RENESAS_R8A774B1		0x002b
+ #define PCI_DEVICE_ID_RENESAS_R8A774C0		0x002d
+ #define PCI_DEVICE_ID_RENESAS_R8A774E1		0x0025
++#define PCI_DEVICE_ID_RENESAS_R8A779F0		0x0031
+ 
+ static DEFINE_IDA(pci_endpoint_test_ida);
+ 
+@@ -993,6 +994,9 @@ static const struct pci_device_id pci_endpoint_test_tbl[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774B1),},
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774C0),},
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774E1),},
++	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A779F0),
++	  .driver_data = (kernel_ulong_t)&default_data,
++	},
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E),
+ 	  .driver_data = (kernel_ulong_t)&j721e_data,
+ 	},
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index 1992eea8b777e..78c9a56b051dd 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -800,7 +800,6 @@ static void meson_mmc_start_cmd(struct mmc_host *mmc, struct mmc_command *cmd)
+ 
+ 	cmd_cfg |= FIELD_PREP(CMD_CFG_CMD_INDEX_MASK, cmd->opcode);
+ 	cmd_cfg |= CMD_CFG_OWNER;  /* owned by CPU */
+-	cmd_cfg |= CMD_CFG_ERROR; /* stop in case of error */
+ 
+ 	meson_mmc_set_response_bits(cmd, &cmd_cfg);
+ 
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index 8e52905458f9c..eb52e0c5a0202 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -600,7 +600,7 @@ static int sdhci_am654_get_otap_delay(struct sdhci_host *host,
+ 		return 0;
+ 	}
+ 
+-	for (i = MMC_TIMING_MMC_HS; i <= MMC_TIMING_MMC_HS400; i++) {
++	for (i = MMC_TIMING_LEGACY; i <= MMC_TIMING_MMC_HS400; i++) {
+ 
+ 		ret = device_property_read_u32(dev, td[i].otap_binding,
+ 					       &sdhci_am654->otap_del_sel[i]);
+diff --git a/drivers/mmc/host/vub300.c b/drivers/mmc/host/vub300.c
+index 7dc0e91dabfc7..05ffd5bf5a6f0 100644
+--- a/drivers/mmc/host/vub300.c
++++ b/drivers/mmc/host/vub300.c
+@@ -2311,6 +2311,7 @@ static int vub300_probe(struct usb_interface *interface,
+ 		vub300->read_only =
+ 			(0x0010 & vub300->system_port_status.port_flags) ? 1 : 0;
+ 	} else {
++		retval = -EINVAL;
+ 		goto error5;
+ 	}
+ 	usb_set_intfdata(interface, vub300);
+diff --git a/drivers/mtd/chips/cfi_cmdset_0001.c b/drivers/mtd/chips/cfi_cmdset_0001.c
+index 42001c49833b9..a45a27c70c873 100644
+--- a/drivers/mtd/chips/cfi_cmdset_0001.c
++++ b/drivers/mtd/chips/cfi_cmdset_0001.c
+@@ -420,9 +420,25 @@ read_pri_intelext(struct map_info *map, __u16 adr)
+ 		extra_size = 0;
+ 
+ 		/* Protection Register info */
+-		if (extp->NumProtectionFields)
++		if (extp->NumProtectionFields) {
++			struct cfi_intelext_otpinfo *otp =
++				(struct cfi_intelext_otpinfo *)&extp->extra[0];
++
+ 			extra_size += (extp->NumProtectionFields - 1) *
+-				      sizeof(struct cfi_intelext_otpinfo);
++				sizeof(struct cfi_intelext_otpinfo);
++
++			if (extp_size >= sizeof(*extp) + extra_size) {
++				int i;
++
++				/* Do some byteswapping if necessary */
++				for (i = 0; i < extp->NumProtectionFields - 1; i++) {
++					otp->ProtRegAddr = le32_to_cpu(otp->ProtRegAddr);
++					otp->FactGroups = le16_to_cpu(otp->FactGroups);
++					otp->UserGroups = le16_to_cpu(otp->UserGroups);
++					otp++;
++				}
++			}
++		}
+ 	}
+ 
+ 	if (extp->MinorVersion >= '1') {
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index bcb019121d835..50fabba042488 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1433,6 +1433,10 @@ done:
+ static void bond_setup_by_slave(struct net_device *bond_dev,
+ 				struct net_device *slave_dev)
+ {
++	bool was_up = !!(bond_dev->flags & IFF_UP);
++
++	dev_close(bond_dev);
++
+ 	bond_dev->header_ops	    = slave_dev->header_ops;
+ 
+ 	bond_dev->type		    = slave_dev->type;
+@@ -1447,6 +1451,8 @@ static void bond_setup_by_slave(struct net_device *bond_dev,
+ 		bond_dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
+ 		bond_dev->flags |= (IFF_POINTOPOINT | IFF_NOARP);
+ 	}
++	if (was_up)
++		dev_open(bond_dev, NULL);
+ }
+ 
+ /* On bonding slaves other than the currently active slave, suppress
+diff --git a/drivers/net/dsa/lan9303_mdio.c b/drivers/net/dsa/lan9303_mdio.c
+index 9cbe80460b53c..a5787dc370eb2 100644
+--- a/drivers/net/dsa/lan9303_mdio.c
++++ b/drivers/net/dsa/lan9303_mdio.c
+@@ -32,7 +32,7 @@ static int lan9303_mdio_write(void *ctx, uint32_t reg, uint32_t val)
+ 	struct lan9303_mdio *sw_dev = (struct lan9303_mdio *)ctx;
+ 
+ 	reg <<= 2; /* reg num to offset */
+-	mutex_lock(&sw_dev->device->bus->mdio_lock);
++	mutex_lock_nested(&sw_dev->device->bus->mdio_lock, MDIO_MUTEX_NESTED);
+ 	lan9303_mdio_real_write(sw_dev->device, reg, val & 0xffff);
+ 	lan9303_mdio_real_write(sw_dev->device, reg + 2, (val >> 16) & 0xffff);
+ 	mutex_unlock(&sw_dev->device->bus->mdio_lock);
+@@ -50,7 +50,7 @@ static int lan9303_mdio_read(void *ctx, uint32_t reg, uint32_t *val)
+ 	struct lan9303_mdio *sw_dev = (struct lan9303_mdio *)ctx;
+ 
+ 	reg <<= 2; /* reg num to offset */
+-	mutex_lock(&sw_dev->device->bus->mdio_lock);
++	mutex_lock_nested(&sw_dev->device->bus->mdio_lock, MDIO_MUTEX_NESTED);
+ 	*val = lan9303_mdio_real_read(sw_dev->device, reg);
+ 	*val |= (lan9303_mdio_real_read(sw_dev->device, reg + 2) << 16);
+ 	mutex_unlock(&sw_dev->device->bus->mdio_lock);
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 85ea073b742fb..c78587ddb32fd 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -433,8 +433,8 @@ static const struct gmac_max_framelen gmac_maxlens[] = {
+ 		.val = CONFIG0_MAXLEN_1536,
+ 	},
+ 	{
+-		.max_l3_len = 1542,
+-		.val = CONFIG0_MAXLEN_1542,
++		.max_l3_len = 1548,
++		.val = CONFIG0_MAXLEN_1548,
+ 	},
+ 	{
+ 		.max_l3_len = 9212,
+@@ -1146,6 +1146,7 @@ static int gmac_map_tx_bufs(struct net_device *netdev, struct sk_buff *skb,
+ 	dma_addr_t mapping;
+ 	unsigned short mtu;
+ 	void *buffer;
++	int ret;
+ 
+ 	mtu  = ETH_HLEN;
+ 	mtu += netdev->mtu;
+@@ -1160,9 +1161,30 @@ static int gmac_map_tx_bufs(struct net_device *netdev, struct sk_buff *skb,
+ 		word3 |= mtu;
+ 	}
+ 
+-	if (skb->ip_summed != CHECKSUM_NONE) {
++	if (skb->len >= ETH_FRAME_LEN) {
++		/* Hardware offloaded checksumming isn't working on frames
++		 * bigger than 1514 bytes. A hypothesis about this is that the
++		 * checksum buffer is only 1518 bytes, so when the frames get
++		 * bigger they get truncated, or the last few bytes get
++		 * overwritten by the FCS.
++		 *
++		 * Just use software checksumming and bypass on bigger frames.
++		 */
++		if (skb->ip_summed == CHECKSUM_PARTIAL) {
++			ret = skb_checksum_help(skb);
++			if (ret)
++				return ret;
++		}
++		word1 |= TSS_BYPASS_BIT;
++	} else if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		int tcp = 0;
+ 
++		/* We do not switch off the checksumming on non TCP/UDP
++		 * frames: as is shown from tests, the checksumming engine
++		 * is smart enough to see that a frame is not actually TCP
++		 * or UDP and then just pass it through without any changes
++		 * to the frame.
++		 */
+ 		if (skb->protocol == htons(ETH_P_IP)) {
+ 			word1 |= TSS_IP_CHKSUM_BIT;
+ 			tcp = ip_hdr(skb)->protocol == IPPROTO_TCP;
+@@ -1979,15 +2001,6 @@ static int gmac_change_mtu(struct net_device *netdev, int new_mtu)
+ 	return 0;
+ }
+ 
+-static netdev_features_t gmac_fix_features(struct net_device *netdev,
+-					   netdev_features_t features)
+-{
+-	if (netdev->mtu + ETH_HLEN + VLAN_HLEN > MTU_SIZE_BIT_MASK)
+-		features &= ~GMAC_OFFLOAD_FEATURES;
+-
+-	return features;
+-}
+-
+ static int gmac_set_features(struct net_device *netdev,
+ 			     netdev_features_t features)
+ {
+@@ -2205,7 +2218,6 @@ static const struct net_device_ops gmac_351x_ops = {
+ 	.ndo_set_mac_address	= gmac_set_mac_address,
+ 	.ndo_get_stats64	= gmac_get_stats64,
+ 	.ndo_change_mtu		= gmac_change_mtu,
+-	.ndo_fix_features	= gmac_fix_features,
+ 	.ndo_set_features	= gmac_set_features,
+ };
+ 
+@@ -2463,11 +2475,12 @@ static int gemini_ethernet_port_probe(struct platform_device *pdev)
+ 
+ 	netdev->hw_features = GMAC_OFFLOAD_FEATURES;
+ 	netdev->features |= GMAC_OFFLOAD_FEATURES | NETIF_F_GRO;
+-	/* We can handle jumbo frames up to 10236 bytes so, let's accept
+-	 * payloads of 10236 bytes minus VLAN and ethernet header
++	/* We can receive jumbo frames up to 10236 bytes but only
++	 * transmit 2047 bytes so, let's accept payloads of 2047
++	 * bytes minus VLAN and ethernet header
+ 	 */
+ 	netdev->min_mtu = ETH_MIN_MTU;
+-	netdev->max_mtu = 10236 - VLAN_ETH_HLEN;
++	netdev->max_mtu = MTU_SIZE_BIT_MASK - VLAN_ETH_HLEN;
+ 
+ 	port->freeq_refill = 0;
+ 	netif_napi_add(netdev, &port->napi, gmac_napi_poll,
+diff --git a/drivers/net/ethernet/cortina/gemini.h b/drivers/net/ethernet/cortina/gemini.h
+index 9fdf77d5eb374..24bb989981f23 100644
+--- a/drivers/net/ethernet/cortina/gemini.h
++++ b/drivers/net/ethernet/cortina/gemini.h
+@@ -502,7 +502,7 @@ union gmac_txdesc_3 {
+ #define SOF_BIT			0x80000000
+ #define EOF_BIT			0x40000000
+ #define EOFIE_BIT		BIT(29)
+-#define MTU_SIZE_BIT_MASK	0x1fff
++#define MTU_SIZE_BIT_MASK	0x7ff /* Max MTU 2047 bytes */
+ 
+ /* GMAC Tx Descriptor */
+ struct gmac_txdesc {
+@@ -787,7 +787,7 @@ union gmac_config0 {
+ #define  CONFIG0_MAXLEN_1536	0
+ #define  CONFIG0_MAXLEN_1518	1
+ #define  CONFIG0_MAXLEN_1522	2
+-#define  CONFIG0_MAXLEN_1542	3
++#define  CONFIG0_MAXLEN_1548	3
+ #define  CONFIG0_MAXLEN_9k	4	/* 9212 */
+ #define  CONFIG0_MAXLEN_10k	5	/* 10236 */
+ #define  CONFIG0_MAXLEN_1518__6	6
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index ae7cd73c823b7..4df5e91e86ce7 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -3974,7 +3974,7 @@ static int hns3_init_mac_addr(struct net_device *netdev)
+ {
+ 	struct hns3_nic_priv *priv = netdev_priv(netdev);
+ 	struct hnae3_handle *h = priv->ae_handle;
+-	u8 mac_addr_temp[ETH_ALEN];
++	u8 mac_addr_temp[ETH_ALEN] = {0};
+ 	int ret = 0;
+ 
+ 	if (h->ae_algo->ops->get_mac_addr)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 7d05915c35e38..2bb0ce1761fb0 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2363,8 +2363,18 @@ static enum hclgevf_evt_cause hclgevf_check_evt_cause(struct hclgevf_dev *hdev,
+ 	return HCLGEVF_VECTOR0_EVENT_OTHER;
+ }
+ 
++static void hclgevf_reset_timer(struct timer_list *t)
++{
++	struct hclgevf_dev *hdev = from_timer(hdev, t, reset_timer);
++
++	hclgevf_clear_event_cause(hdev, HCLGEVF_VECTOR0_EVENT_RST);
++	hclgevf_reset_task_schedule(hdev);
++}
++
+ static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data)
+ {
++#define HCLGEVF_RESET_DELAY	5
++
+ 	enum hclgevf_evt_cause event_cause;
+ 	struct hclgevf_dev *hdev = data;
+ 	u32 clearval;
+@@ -2376,7 +2386,8 @@ static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data)
+ 
+ 	switch (event_cause) {
+ 	case HCLGEVF_VECTOR0_EVENT_RST:
+-		hclgevf_reset_task_schedule(hdev);
++		mod_timer(&hdev->reset_timer,
++			  jiffies + msecs_to_jiffies(HCLGEVF_RESET_DELAY));
+ 		break;
+ 	case HCLGEVF_VECTOR0_EVENT_MBX:
+ 		hclgevf_mbx_handler(hdev);
+@@ -3269,6 +3280,7 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+ 		 HCLGEVF_DRIVER_NAME);
+ 
+ 	hclgevf_task_schedule(hdev, round_jiffies_relative(HZ));
++	timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+index c9b0fa5e8589d..9469af8c49ace 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+@@ -274,6 +274,7 @@ struct hclgevf_dev {
+ 	enum hnae3_reset_type reset_level;
+ 	unsigned long reset_pending;
+ 	enum hnae3_reset_type reset_type;
++	struct timer_list reset_timer;
+ 
+ #define HCLGEVF_RESET_REQUESTED		0
+ #define HCLGEVF_RESET_PENDING		1
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index 0e699330ae77c..060561f633114 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -52,7 +52,7 @@ mlx5_devlink_info_get(struct devlink *devlink, struct devlink_info_req *req,
+ 	u32 running_fw, stored_fw;
+ 	int err;
+ 
+-	err = devlink_info_driver_name_put(req, DRIVER_NAME);
++	err = devlink_info_driver_name_put(req, KBUILD_MODNAME);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index 90930e54b6f28..05bcd69994eca 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -267,9 +267,6 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
+ 	if (err)
+ 		goto destroy_neigh_entry;
+ 
+-	e->encap_size = ipv4_encap_size;
+-	e->encap_header = encap_header;
+-
+ 	if (!(nud_state & NUD_VALID)) {
+ 		neigh_event_send(n, NULL);
+ 		/* the encap entry will be made valid on neigh update event
+@@ -286,6 +283,8 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
+ 		goto destroy_neigh_entry;
+ 	}
+ 
++	e->encap_size = ipv4_encap_size;
++	e->encap_header = encap_header;
+ 	e->flags |= MLX5_ENCAP_ENTRY_VALID;
+ 	mlx5e_rep_queue_neigh_stats_work(netdev_priv(out_dev));
+ 	mlx5e_route_lookup_ipv4_put(route_dev, n);
+@@ -431,9 +430,6 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
+ 	if (err)
+ 		goto destroy_neigh_entry;
+ 
+-	e->encap_size = ipv6_encap_size;
+-	e->encap_header = encap_header;
+-
+ 	if (!(nud_state & NUD_VALID)) {
+ 		neigh_event_send(n, NULL);
+ 		/* the encap entry will be made valid on neigh update event
+@@ -451,6 +447,8 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
+ 		goto destroy_neigh_entry;
+ 	}
+ 
++	e->encap_size = ipv6_encap_size;
++	e->encap_header = encap_header;
+ 	e->flags |= MLX5_ENCAP_ENTRY_VALID;
+ 	mlx5e_rep_queue_neigh_stats_work(netdev_priv(out_dev));
+ 	mlx5e_route_lookup_ipv6_put(route_dev, n);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index 6a1b1363ac16a..d3817dd07e3dc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -40,9 +40,7 @@ void mlx5e_ethtool_get_drvinfo(struct mlx5e_priv *priv,
+ {
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 
+-	strlcpy(drvinfo->driver, DRIVER_NAME, sizeof(drvinfo->driver));
+-	strlcpy(drvinfo->version, DRIVER_VERSION,
+-		sizeof(drvinfo->version));
++	strlcpy(drvinfo->driver, KBUILD_MODNAME, sizeof(drvinfo->driver));
+ 	snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
+ 		 "%d.%d.%04d (%.16s)",
+ 		 fw_rev_maj(mdev), fw_rev_min(mdev), fw_rev_sub(mdev),
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index b991f03c7e991..f9f1a79d6bddb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -61,14 +61,17 @@ static void mlx5e_rep_get_drvinfo(struct net_device *dev,
+ {
+ 	struct mlx5e_priv *priv = netdev_priv(dev);
+ 	struct mlx5_core_dev *mdev = priv->mdev;
++	int count;
+ 
+ 	strlcpy(drvinfo->driver, mlx5e_rep_driver_name,
+ 		sizeof(drvinfo->driver));
+-	strlcpy(drvinfo->version, UTS_RELEASE, sizeof(drvinfo->version));
+-	snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
+-		 "%d.%d.%04d (%.16s)",
+-		 fw_rev_maj(mdev), fw_rev_min(mdev),
+-		 fw_rev_sub(mdev), mdev->board_id);
++	count = snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
++			 "%d.%d.%04d (%.16s)", fw_rev_maj(mdev),
++			 fw_rev_min(mdev), fw_rev_sub(mdev), mdev->board_id);
++	if (count == sizeof(drvinfo->fw_version))
++		snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
++			 "%d.%d.%04d", fw_rev_maj(mdev),
++			 fw_rev_min(mdev), fw_rev_sub(mdev));
+ }
+ 
+ static void mlx5e_uplink_rep_get_drvinfo(struct net_device *dev,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
+index 2cf7f0fc170b8..d7bda76507673 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
+@@ -39,7 +39,7 @@ static void mlx5i_get_drvinfo(struct net_device *dev,
+ 	struct mlx5e_priv *priv = mlx5i_epriv(dev);
+ 
+ 	mlx5e_ethtool_get_drvinfo(priv, drvinfo);
+-	strlcpy(drvinfo->driver, DRIVER_NAME "[ib_ipoib]",
++	strlcpy(drvinfo->driver, KBUILD_MODNAME "[ib_ipoib]",
+ 		sizeof(drvinfo->driver));
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 22907f6364f54..35e11cb883c97 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -77,7 +77,6 @@
+ MODULE_AUTHOR("Eli Cohen <eli@mellanox.com>");
+ MODULE_DESCRIPTION("Mellanox 5th generation network adapters (ConnectX series) core driver");
+ MODULE_LICENSE("Dual BSD/GPL");
+-MODULE_VERSION(DRIVER_VERSION);
+ 
+ unsigned int mlx5_core_debug_mask;
+ module_param_named(debug_mask, mlx5_core_debug_mask, uint, 0644);
+@@ -228,7 +227,7 @@ static void mlx5_set_driver_version(struct mlx5_core_dev *dev)
+ 	strncat(string, ",", remaining_size);
+ 
+ 	remaining_size = max_t(int, 0, driver_ver_sz - strlen(string));
+-	strncat(string, DRIVER_NAME, remaining_size);
++	strncat(string, KBUILD_MODNAME, remaining_size);
+ 
+ 	remaining_size = max_t(int, 0, driver_ver_sz - strlen(string));
+ 	strncat(string, ",", remaining_size);
+@@ -313,7 +312,7 @@ static int request_bar(struct pci_dev *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	err = pci_request_regions(pdev, DRIVER_NAME);
++	err = pci_request_regions(pdev, KBUILD_MODNAME);
+ 	if (err)
+ 		dev_err(&pdev->dev, "Couldn't get PCI resources, aborting\n");
+ 
+@@ -1620,7 +1619,7 @@ void mlx5_recover_device(struct mlx5_core_dev *dev)
+ }
+ 
+ static struct pci_driver mlx5_core_driver = {
+-	.name           = DRIVER_NAME,
++	.name           = KBUILD_MODNAME,
+ 	.id_table       = mlx5_core_pci_table,
+ 	.probe          = init_one,
+ 	.remove         = remove_one,
+@@ -1646,6 +1645,9 @@ static int __init mlx5_init(void)
+ {
+ 	int err;
+ 
++	WARN_ONCE(strcmp(MLX5_ADEV_NAME, KBUILD_MODNAME),
++		  "mlx5_core name not in sync with kernel module name");
++
+ 	get_random_bytes(&sw_owner_id, sizeof(sw_owner_id));
+ 
+ 	mlx5_core_verify_params();
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+index 8cec85ab419d0..b285f1515e4e8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+@@ -42,9 +42,6 @@
+ #include <linux/mlx5/fs.h>
+ #include <linux/mlx5/driver.h>
+ 
+-#define DRIVER_NAME "mlx5_core"
+-#define DRIVER_VERSION "5.0-0"
+-
+ extern uint mlx5_core_debug_mask;
+ 
+ #define mlx5_core_dbg(__dev, format, ...)				\
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 6e0fe77d1019c..ab6af1f1ad5bb 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -2581,9 +2581,7 @@ static void rtl_set_rx_mode(struct net_device *dev)
+ 		rx_mode &= ~AcceptMulticast;
+ 	} else if (netdev_mc_count(dev) > MC_FILTER_LIMIT ||
+ 		   dev->flags & IFF_ALLMULTI ||
+-		   tp->mac_version == RTL_GIGA_MAC_VER_35 ||
+-		   tp->mac_version == RTL_GIGA_MAC_VER_46 ||
+-		   tp->mac_version == RTL_GIGA_MAC_VER_48) {
++		   tp->mac_version == RTL_GIGA_MAC_VER_35) {
+ 		/* accept all multicasts */
+ 	} else if (netdev_mc_empty(dev)) {
+ 		rx_mode &= ~AcceptMulticast;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 83e9a4d019c16..59a07a01e80ca 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3858,10 +3858,10 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ 			len = 0;
+ 		}
+ 
++read_again:
+ 		if (count >= limit)
+ 			break;
+ 
+-read_again:
+ 		buf1_len = 0;
+ 		buf2_len = 0;
+ 		entry = next_entry;
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index b5a61b16a7eab..bfea28bd45027 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -412,7 +412,7 @@ struct ipvl_addr *ipvlan_addr_lookup(struct ipvl_port *port, void *lyr3h,
+ 	return addr;
+ }
+ 
+-static int ipvlan_process_v4_outbound(struct sk_buff *skb)
++static noinline_for_stack int ipvlan_process_v4_outbound(struct sk_buff *skb)
+ {
+ 	const struct iphdr *ip4h = ip_hdr(skb);
+ 	struct net_device *dev = skb->dev;
+@@ -454,13 +454,11 @@ out:
+ }
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+-static int ipvlan_process_v6_outbound(struct sk_buff *skb)
++
++static noinline_for_stack int
++ipvlan_route_v6_outbound(struct net_device *dev, struct sk_buff *skb)
+ {
+ 	const struct ipv6hdr *ip6h = ipv6_hdr(skb);
+-	struct net_device *dev = skb->dev;
+-	struct net *net = dev_net(dev);
+-	struct dst_entry *dst;
+-	int err, ret = NET_XMIT_DROP;
+ 	struct flowi6 fl6 = {
+ 		.flowi6_oif = dev->ifindex,
+ 		.daddr = ip6h->daddr,
+@@ -470,27 +468,38 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
+ 		.flowi6_mark = skb->mark,
+ 		.flowi6_proto = ip6h->nexthdr,
+ 	};
++	struct dst_entry *dst;
++	int err;
+ 
+-	dst = ip6_route_output(net, NULL, &fl6);
+-	if (dst->error) {
+-		ret = dst->error;
++	dst = ip6_route_output(dev_net(dev), NULL, &fl6);
++	err = dst->error;
++	if (err) {
+ 		dst_release(dst);
+-		goto err;
++		return err;
+ 	}
+ 	skb_dst_set(skb, dst);
++	return 0;
++}
++
++static int ipvlan_process_v6_outbound(struct sk_buff *skb)
++{
++	struct net_device *dev = skb->dev;
++	int err, ret = NET_XMIT_DROP;
++
++	err = ipvlan_route_v6_outbound(dev, skb);
++	if (unlikely(err)) {
++		DEV_STATS_INC(dev, tx_errors);
++		kfree_skb(skb);
++		return err;
++	}
+ 
+ 	memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
+ 
+-	err = ip6_local_out(net, skb->sk, skb);
++	err = ip6_local_out(dev_net(dev), skb->sk, skb);
+ 	if (unlikely(net_xmit_eval(err)))
+ 		DEV_STATS_INC(dev, tx_errors);
+ 	else
+ 		ret = NET_XMIT_SUCCESS;
+-	goto out;
+-err:
+-	DEV_STATS_INC(dev, tx_errors);
+-	kfree_skb(skb);
+-out:
+ 	return ret;
+ }
+ #else
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index 5869bc2c3aa79..9c77e6ab2b307 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -765,7 +765,7 @@ static void macvlan_change_rx_flags(struct net_device *dev, int change)
+ 	if (dev->flags & IFF_UP) {
+ 		if (change & IFF_ALLMULTI)
+ 			dev_set_allmulti(lowerdev, dev->flags & IFF_ALLMULTI ? 1 : -1);
+-		if (change & IFF_PROMISC)
++		if (!macvlan_passthru(vlan->port) && change & IFF_PROMISC)
+ 			dev_set_promiscuity(lowerdev,
+ 					    dev->flags & IFF_PROMISC ? 1 : -1);
+ 
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 57b1b138522e0..cbdd01f311625 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -838,6 +838,7 @@ struct phylink *phylink_create(struct phylink_config *config,
+ 	pl->config = config;
+ 	if (config->type == PHYLINK_NETDEV) {
+ 		pl->netdev = to_net_dev(config->dev);
++		netif_carrier_off(pl->netdev);
+ 	} else if (config->type == PHYLINK_DEV) {
+ 		pl->dev = config->dev;
+ 	} else {
+diff --git a/drivers/net/ppp/ppp_synctty.c b/drivers/net/ppp/ppp_synctty.c
+index f774b7e52da44..7174316362758 100644
+--- a/drivers/net/ppp/ppp_synctty.c
++++ b/drivers/net/ppp/ppp_synctty.c
+@@ -464,6 +464,10 @@ ppp_sync_ioctl(struct ppp_channel *chan, unsigned int cmd, unsigned long arg)
+ 	case PPPIOCSMRU:
+ 		if (get_user(val, (int __user *) argp))
+ 			break;
++		if (val > U16_MAX) {
++			err = -EINVAL;
++			break;
++		}
+ 		if (val < PPP_MRU)
+ 			val = PPP_MRU;
+ 		ap->mru = val;
+@@ -699,7 +703,7 @@ ppp_sync_input(struct syncppp *ap, const unsigned char *buf,
+ 
+ 	/* strip address/control field if present */
+ 	p = skb->data;
+-	if (p[0] == PPP_ALLSTATIONS && p[1] == PPP_UI) {
++	if (skb->len >= 2 && p[0] == PPP_ALLSTATIONS && p[1] == PPP_UI) {
+ 		/* chop off address/control */
+ 		if (skb->len < 3)
+ 			goto err;
+diff --git a/drivers/net/wireless/ath/ath10k/debug.c b/drivers/net/wireless/ath/ath10k/debug.c
+index e8250a6654338..ab737177a86bf 100644
+--- a/drivers/net/wireless/ath/ath10k/debug.c
++++ b/drivers/net/wireless/ath/ath10k/debug.c
+@@ -1139,7 +1139,7 @@ void ath10k_debug_get_et_strings(struct ieee80211_hw *hw,
+ 				 u32 sset, u8 *data)
+ {
+ 	if (sset == ETH_SS_STATS)
+-		memcpy(data, *ath10k_gstrings_stats,
++		memcpy(data, ath10k_gstrings_stats,
+ 		       sizeof(ath10k_gstrings_stats));
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index 4870a3dab0ded..f7ee1032b1729 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -827,12 +827,20 @@ static void ath10k_snoc_hif_get_default_pipe(struct ath10k *ar,
+ 
+ static inline void ath10k_snoc_irq_disable(struct ath10k *ar)
+ {
+-	ath10k_ce_disable_interrupts(ar);
++	struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
++	int id;
++
++	for (id = 0; id < CE_COUNT_MAX; id++)
++		disable_irq(ar_snoc->ce_irqs[id].irq_line);
+ }
+ 
+ static inline void ath10k_snoc_irq_enable(struct ath10k *ar)
+ {
+-	ath10k_ce_enable_interrupts(ar);
++	struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
++	int id;
++
++	for (id = 0; id < CE_COUNT_MAX; id++)
++		enable_irq(ar_snoc->ce_irqs[id].irq_line);
+ }
+ 
+ static void ath10k_snoc_rx_pipe_cleanup(struct ath10k_snoc_pipe *snoc_pipe)
+@@ -1048,6 +1056,8 @@ static int ath10k_snoc_hif_power_up(struct ath10k *ar,
+ 		goto err_free_rri;
+ 	}
+ 
++	ath10k_ce_enable_interrupts(ar);
++
+ 	return 0;
+ 
+ err_free_rri:
+@@ -1209,8 +1219,8 @@ static int ath10k_snoc_request_irq(struct ath10k *ar)
+ 
+ 	for (id = 0; id < CE_COUNT_MAX; id++) {
+ 		ret = request_irq(ar_snoc->ce_irqs[id].irq_line,
+-				  ath10k_snoc_per_engine_handler, 0,
+-				  ce_name[id], ar);
++				  ath10k_snoc_per_engine_handler,
++				  IRQF_NO_AUTOEN, ce_name[id], ar);
+ 		if (ret) {
+ 			ath10k_err(ar,
+ 				   "failed to register IRQ handler for CE %d: %d\n",
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 583bcf148403b..a50325f4634ba 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -1578,14 +1578,20 @@ static void ath11k_htt_pktlog(struct ath11k_base *ab, struct sk_buff *skb)
+ 	u8 pdev_id;
+ 
+ 	pdev_id = FIELD_GET(HTT_T2H_PPDU_STATS_INFO_PDEV_ID, data->hdr);
++
++	rcu_read_lock();
++
+ 	ar = ath11k_mac_get_ar_by_pdev_id(ab, pdev_id);
+ 	if (!ar) {
+ 		ath11k_warn(ab, "invalid pdev id %d on htt pktlog\n", pdev_id);
+-		return;
++		goto out;
+ 	}
+ 
+ 	trace_ath11k_htt_pktlog(ar, data->payload, hdr->size,
+ 				ar->ab->pktlog_defs_checksum);
++
++out:
++	rcu_read_unlock();
+ }
+ 
+ static void ath11k_htt_backpressure_event_handler(struct ath11k_base *ab,
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 53846dc9a5c5a..7c2d74533521f 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -6355,6 +6355,8 @@ ath11k_wmi_pdev_dfs_radar_detected_event(struct ath11k_base *ab, struct sk_buff
+ 		   ev->detector_id, ev->segment_id, ev->timestamp, ev->is_chirp,
+ 		   ev->freq_offset, ev->sidx);
+ 
++	rcu_read_lock();
++
+ 	ar = ath11k_mac_get_ar_by_pdev_id(ab, ev->pdev_id);
+ 
+ 	if (!ar) {
+@@ -6372,6 +6374,8 @@ ath11k_wmi_pdev_dfs_radar_detected_event(struct ath11k_base *ab, struct sk_buff
+ 		ieee80211_radar_detected(ar->hw);
+ 
+ exit:
++	rcu_read_unlock();
++
+ 	kfree(tb);
+ }
+ 
+@@ -6401,15 +6405,19 @@ ath11k_wmi_pdev_temperature_event(struct ath11k_base *ab,
+ 	ath11k_dbg(ab, ATH11K_DBG_WMI,
+ 		   "pdev temperature ev temp %d pdev_id %d\n", ev->temp, ev->pdev_id);
+ 
++	rcu_read_lock();
++
+ 	ar = ath11k_mac_get_ar_by_pdev_id(ab, ev->pdev_id);
+ 	if (!ar) {
+ 		ath11k_warn(ab, "invalid pdev id in pdev temperature ev %d", ev->pdev_id);
+-		kfree(tb);
+-		return;
++		goto exit;
+ 	}
+ 
+ 	ath11k_thermal_event_temperature(ar, ev->temp);
+ 
++exit:
++	rcu_read_unlock();
++
+ 	kfree(tb);
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/debug.c b/drivers/net/wireless/ath/ath9k/debug.c
+index 859a865c59950..8d98347e0ddff 100644
+--- a/drivers/net/wireless/ath/ath9k/debug.c
++++ b/drivers/net/wireless/ath/ath9k/debug.c
+@@ -1284,7 +1284,7 @@ void ath9k_get_et_strings(struct ieee80211_hw *hw,
+ 			  u32 sset, u8 *data)
+ {
+ 	if (sset == ETH_SS_STATS)
+-		memcpy(data, *ath9k_gstrings_stats,
++		memcpy(data, ath9k_gstrings_stats,
+ 		       sizeof(ath9k_gstrings_stats));
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
+index c55aab01fff5d..e79bbcd3279af 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
+@@ -428,7 +428,7 @@ void ath9k_htc_get_et_strings(struct ieee80211_hw *hw,
+ 			      u32 sset, u8 *data)
+ {
+ 	if (sset == ETH_SS_STATS)
+-		memcpy(data, *ath9k_htc_gstrings_stats,
++		memcpy(data, ath9k_htc_gstrings_stats,
+ 		       sizeof(ath9k_htc_gstrings_stats));
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index d310337b16251..99150fec151b8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -532,16 +532,20 @@ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 			flags |= IWL_TX_FLAGS_ENCRYPT_DIS;
+ 
+ 		/*
+-		 * For data packets rate info comes from the fw. Only
+-		 * set rate/antenna during connection establishment or in case
+-		 * no station is given.
++		 * For data and mgmt packets rate info comes from the fw. Only
++		 * set rate/antenna for injected frames with fixed rate, or
++		 * when no sta is given.
+ 		 */
+-		if (!sta || !ieee80211_is_data(hdr->frame_control) ||
+-		    mvmsta->sta_state < IEEE80211_STA_AUTHORIZED) {
++		if (unlikely(!sta ||
++			     info->control.flags & IEEE80211_TX_CTRL_RATE_INJECT)) {
+ 			flags |= IWL_TX_FLAGS_CMD_RATE;
+ 			rate_n_flags =
+ 				iwl_mvm_get_tx_rate_n_flags(mvm, info, sta,
+ 							    hdr->frame_control);
++		} else if (!ieee80211_is_data(hdr->frame_control) ||
++			   mvmsta->sta_state < IEEE80211_STA_AUTHORIZED) {
++			/* These are important frames */
++			flags |= IWL_TX_FLAGS_HIGH_PRI;
+ 		}
+ 
+ 		if (mvm->trans->trans_cfg->device_family >=
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 0d41f172a1dc2..037358606a51a 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -2543,7 +2543,7 @@ static void mac80211_hwsim_get_et_strings(struct ieee80211_hw *hw,
+ 					  u32 sset, u8 *data)
+ {
+ 	if (sset == ETH_SS_STATS)
+-		memcpy(data, *mac80211_hwsim_gstrings_stats,
++		memcpy(data, mac80211_hwsim_gstrings_stats,
+ 		       sizeof(mac80211_hwsim_gstrings_stats));
+ }
+ 
+diff --git a/drivers/pci/controller/dwc/pci-exynos.c b/drivers/pci/controller/dwc/pci-exynos.c
+index 242683cde04a5..3823e58819964 100644
+--- a/drivers/pci/controller/dwc/pci-exynos.c
++++ b/drivers/pci/controller/dwc/pci-exynos.c
+@@ -503,7 +503,7 @@ fail_probe:
+ 	return ret;
+ }
+ 
+-static int __exit exynos_pcie_remove(struct platform_device *pdev)
++static int exynos_pcie_remove(struct platform_device *pdev)
+ {
+ 	struct exynos_pcie *ep = platform_get_drvdata(pdev);
+ 
+@@ -522,7 +522,7 @@ static const struct of_device_id exynos_pcie_of_match[] = {
+ };
+ 
+ static struct platform_driver exynos_pcie_driver = {
+-	.remove		= __exit_p(exynos_pcie_remove),
++	.remove		= exynos_pcie_remove,
+ 	.driver = {
+ 		.name	= "exynos-pcie",
+ 		.of_match_table = exynos_pcie_of_match,
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index 90482d5246ff1..5b722287aac92 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -1142,7 +1142,7 @@ static const struct of_device_id ks_pcie_of_match[] = {
+ 	{ },
+ };
+ 
+-static int __init ks_pcie_probe(struct platform_device *pdev)
++static int ks_pcie_probe(struct platform_device *pdev)
+ {
+ 	const struct dw_pcie_host_ops *host_ops;
+ 	const struct dw_pcie_ep_ops *ep_ops;
+@@ -1338,7 +1338,7 @@ err_link:
+ 	return ret;
+ }
+ 
+-static int __exit ks_pcie_remove(struct platform_device *pdev)
++static int ks_pcie_remove(struct platform_device *pdev)
+ {
+ 	struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev);
+ 	struct device_link **link = ks_pcie->link;
+@@ -1354,9 +1354,9 @@ static int __exit ks_pcie_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static struct platform_driver ks_pcie_driver __refdata = {
++static struct platform_driver ks_pcie_driver = {
+ 	.probe  = ks_pcie_probe,
+-	.remove = __exit_p(ks_pcie_remove),
++	.remove = ks_pcie_remove,
+ 	.driver = {
+ 		.name	= "keystone-pcie",
+ 		.of_match_table = of_match_ptr(ks_pcie_of_match),
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index a215777df96c7..80c2015b49d8f 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -7,6 +7,7 @@
+  * Author: Vidya Sagar <vidyas@nvidia.com>
+  */
+ 
++#include <linux/bitfield.h>
+ #include <linux/clk.h>
+ #include <linux/debugfs.h>
+ #include <linux/delay.h>
+@@ -346,8 +347,7 @@ static void apply_bad_link_workaround(struct pcie_port *pp)
+ 	 */
+ 	val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
+ 	if (val & PCI_EXP_LNKSTA_LBMS) {
+-		current_link_width = (val & PCI_EXP_LNKSTA_NLW) >>
+-				     PCI_EXP_LNKSTA_NLW_SHIFT;
++		current_link_width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val);
+ 		if (pcie->init_link_width > current_link_width) {
+ 			dev_warn(pci->dev, "PCIe link is bad, width reduced\n");
+ 			val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
+@@ -731,8 +731,7 @@ static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp)
+ 
+ 	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
+ 				  PCI_EXP_LNKSTA);
+-	pcie->init_link_width = (val_w & PCI_EXP_LNKSTA_NLW) >>
+-				PCI_EXP_LNKSTA_NLW_SHIFT;
++	pcie->init_link_width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val_w);
+ 
+ 	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
+ 				  PCI_EXP_LNKCTL);
+@@ -885,7 +884,7 @@ static void tegra_pcie_prepare_host(struct pcie_port *pp)
+ 	/* Configure Max lane width from DT */
+ 	val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP);
+ 	val &= ~PCI_EXP_LNKCAP_MLW;
+-	val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT);
++	val |= FIELD_PREP(PCI_EXP_LNKCAP_MLW, pcie->num_lanes);
+ 	dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val);
+ 
+ 	config_gen3_gen4_eq_presets(pcie);
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index 745a4e0c4994f..bda45c5241879 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -909,7 +909,7 @@ static pci_power_t acpi_pci_choose_state(struct pci_dev *pdev)
+ {
+ 	int acpi_state, d_max;
+ 
+-	if (pdev->no_d3cold)
++	if (pdev->no_d3cold || !pdev->d3cold_allowed)
+ 		d_max = ACPI_STATE_D3_HOT;
+ 	else
+ 		d_max = ACPI_STATE_D3_COLD;
+diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
+index 361de072dfdca..e14c83f59b48a 100644
+--- a/drivers/pci/pci-sysfs.c
++++ b/drivers/pci/pci-sysfs.c
+@@ -500,10 +500,7 @@ static ssize_t d3cold_allowed_store(struct device *dev,
+ 		return -EINVAL;
+ 
+ 	pdev->d3cold_allowed = !!val;
+-	if (pdev->d3cold_allowed)
+-		pci_d3cold_enable(pdev);
+-	else
+-		pci_d3cold_disable(pdev);
++	pci_bridge_d3_update(pdev);
+ 
+ 	pm_runtime_resume(dev);
+ 
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index ef6f0ceb92f9f..8ab8abd79e896 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -1250,6 +1250,8 @@ static ssize_t aspm_attr_store_common(struct device *dev,
+ 			link->aspm_disable &= ~ASPM_STATE_L1;
+ 	} else {
+ 		link->aspm_disable |= state;
++		if (state & ASPM_STATE_L1)
++			link->aspm_disable |= ASPM_STATE_L1SS;
+ 	}
+ 
+ 	pcie_config_aspm_link(link, policy_to_aspm_state(link));
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index d8d241344d22d..00ca996b4d4b9 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -9718,6 +9718,7 @@ static const struct tpacpi_quirk battery_quirk_table[] __initconst = {
+ 	 * Individual addressing is broken on models that expose the
+ 	 * primary battery as BAT1.
+ 	 */
++	TPACPI_Q_LNV('8', 'F', true),       /* Thinkpad X120e */
+ 	TPACPI_Q_LNV('J', '7', true),       /* B5400 */
+ 	TPACPI_Q_LNV('J', 'I', true),       /* Thinkpad 11e */
+ 	TPACPI_Q_LNV3('R', '0', 'B', true), /* Thinkpad 11e gen 3 */
+diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c
+index af3bc65c4595d..9311f3d09c8fc 100644
+--- a/drivers/ptp/ptp_chardev.c
++++ b/drivers/ptp/ptp_chardev.c
+@@ -487,7 +487,8 @@ ssize_t ptp_read(struct posix_clock *pc,
+ 
+ 	for (i = 0; i < cnt; i++) {
+ 		event[i] = queue->buf[queue->head];
+-		queue->head = (queue->head + 1) % PTP_MAX_TIMESTAMPS;
++		/* Paired with READ_ONCE() in queue_cnt() */
++		WRITE_ONCE(queue->head, (queue->head + 1) % PTP_MAX_TIMESTAMPS);
+ 	}
+ 
+ 	spin_unlock_irqrestore(&queue->lock, flags);
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index 21c4c34c52d8d..ed766943a3563 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -55,10 +55,11 @@ static void enqueue_external_timestamp(struct timestamp_event_queue *queue,
+ 	dst->t.sec = seconds;
+ 	dst->t.nsec = remainder;
+ 
++	/* Both WRITE_ONCE() are paired with READ_ONCE() in queue_cnt() */
+ 	if (!queue_free(queue))
+-		queue->head = (queue->head + 1) % PTP_MAX_TIMESTAMPS;
++		WRITE_ONCE(queue->head, (queue->head + 1) % PTP_MAX_TIMESTAMPS);
+ 
+-	queue->tail = (queue->tail + 1) % PTP_MAX_TIMESTAMPS;
++	WRITE_ONCE(queue->tail, (queue->tail + 1) % PTP_MAX_TIMESTAMPS);
+ 
+ 	spin_unlock_irqrestore(&queue->lock, flags);
+ }
+diff --git a/drivers/ptp/ptp_private.h b/drivers/ptp/ptp_private.h
+index 6b97155148f11..d2cb956706763 100644
+--- a/drivers/ptp/ptp_private.h
++++ b/drivers/ptp/ptp_private.h
+@@ -55,9 +55,13 @@ struct ptp_clock {
+  * that a writer might concurrently increment the tail does not
+  * matter, since the queue remains nonempty nonetheless.
+  */
+-static inline int queue_cnt(struct timestamp_event_queue *q)
++static inline int queue_cnt(const struct timestamp_event_queue *q)
+ {
+-	int cnt = q->tail - q->head;
++	/*
++	 * Paired with WRITE_ONCE() in enqueue_external_timestamp(),
++	 * ptp_read(), extts_fifo_show().
++	 */
++	int cnt = READ_ONCE(q->tail) - READ_ONCE(q->head);
+ 	return cnt < 0 ? PTP_MAX_TIMESTAMPS + cnt : cnt;
+ }
+ 
+diff --git a/drivers/ptp/ptp_sysfs.c b/drivers/ptp/ptp_sysfs.c
+index 8cd59e8481631..8d52815e05b31 100644
+--- a/drivers/ptp/ptp_sysfs.c
++++ b/drivers/ptp/ptp_sysfs.c
+@@ -78,7 +78,8 @@ static ssize_t extts_fifo_show(struct device *dev,
+ 	qcnt = queue_cnt(queue);
+ 	if (qcnt) {
+ 		event = queue->buf[queue->head];
+-		queue->head = (queue->head + 1) % PTP_MAX_TIMESTAMPS;
++		/* Paired with READ_ONCE() in queue_cnt() */
++		WRITE_ONCE(queue->head, (queue->head + 1) % PTP_MAX_TIMESTAMPS);
+ 	}
+ 	spin_unlock_irqrestore(&queue->lock, flags);
+ 
+diff --git a/drivers/scsi/libfc/fc_lport.c b/drivers/scsi/libfc/fc_lport.c
+index abb14b206be04..82b8477c7d737 100644
+--- a/drivers/scsi/libfc/fc_lport.c
++++ b/drivers/scsi/libfc/fc_lport.c
+@@ -238,6 +238,12 @@ static void fc_lport_ptp_setup(struct fc_lport *lport,
+ 	}
+ 	mutex_lock(&lport->disc.disc_mutex);
+ 	lport->ptp_rdata = fc_rport_create(lport, remote_fid);
++	if (!lport->ptp_rdata) {
++		printk(KERN_WARNING "libfc: Failed to setup lport 0x%x\n",
++			lport->port_id);
++		mutex_unlock(&lport->disc.disc_mutex);
++		return;
++	}
+ 	kref_get(&lport->ptp_rdata->kref);
+ 	lport->ptp_rdata->ids.port_name = remote_wwpn;
+ 	lport->ptp_rdata->ids.node_name = remote_wwnn;
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index ec9a19d94855c..365279d7c9829 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -248,13 +248,13 @@ u32 megasas_readl(struct megasas_instance *instance,
+ 	 * Fusion registers could intermittently return all zeroes.
+ 	 * This behavior is transient in nature and subsequent reads will
+ 	 * return valid value. As a workaround in driver, retry readl for
+-	 * upto three times until a non-zero value is read.
++	 * up to thirty times until a non-zero value is read.
+ 	 */
+ 	if (instance->adapter_type == AERO_SERIES) {
+ 		do {
+ 			ret_val = readl(addr);
+ 			i++;
+-		} while (ret_val == 0 && i < 3);
++		} while (ret_val == 0 && i < 30);
+ 		return ret_val;
+ 	} else {
+ 		return readl(addr);
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 3728e4cf6e057..814ac25238058 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -216,8 +216,8 @@ _base_readl_ext_retry(const volatile void __iomem *addr)
+ 
+ 	for (i = 0 ; i < 30 ; i++) {
+ 		ret_val = readl(addr);
+-		if (ret_val == 0)
+-			continue;
++		if (ret_val != 0)
++			break;
+ 	}
+ 
+ 	return ret_val;
+diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
+index 6f387a4fd96a4..cf0fb650a9247 100644
+--- a/drivers/tty/hvc/hvc_xen.c
++++ b/drivers/tty/hvc/hvc_xen.c
+@@ -363,18 +363,21 @@ void xen_console_resume(void)
+ #ifdef CONFIG_HVC_XEN_FRONTEND
+ static void xencons_disconnect_backend(struct xencons_info *info)
+ {
+-	if (info->irq > 0)
+-		unbind_from_irqhandler(info->irq, NULL);
+-	info->irq = 0;
++	if (info->hvc != NULL)
++		hvc_remove(info->hvc);
++	info->hvc = NULL;
++	if (info->irq > 0) {
++		evtchn_put(info->evtchn);
++		info->irq = 0;
++		info->evtchn = 0;
++	}
++	/* evtchn_put() will also close it so this is only an error path */
+ 	if (info->evtchn > 0)
+ 		xenbus_free_evtchn(info->xbdev, info->evtchn);
+ 	info->evtchn = 0;
+ 	if (info->gntref > 0)
+ 		gnttab_free_grant_references(info->gntref);
+ 	info->gntref = 0;
+-	if (info->hvc != NULL)
+-		hvc_remove(info->hvc);
+-	info->hvc = NULL;
+ }
+ 
+ static void xencons_free(struct xencons_info *info)
+@@ -538,10 +541,23 @@ static void xencons_backend_changed(struct xenbus_device *dev,
+ 		if (dev->state == XenbusStateClosed)
+ 			break;
+ 		fallthrough;	/* Missed the backend's CLOSING state */
+-	case XenbusStateClosing:
++	case XenbusStateClosing: {
++		struct xencons_info *info = dev_get_drvdata(&dev->dev);;
++
++		/*
++		 * Don't tear down the evtchn and grant ref before the other
++		 * end has disconnected, but do stop userspace from trying
++		 * to use the device before we allow the backend to close.
++		 */
++		if (info->hvc) {
++			hvc_remove(info->hvc);
++			info->hvc = NULL;
++		}
++
+ 		xenbus_frontend_closed(dev);
+ 		break;
+ 	}
++	}
+ }
+ 
+ static const struct xenbus_device_id xencons_ids[] = {
+@@ -572,7 +588,7 @@ static int __init xen_hvc_init(void)
+ 		ops = &dom0_hvc_ops;
+ 		r = xen_initial_domain_console_init();
+ 		if (r < 0)
+-			return r;
++			goto register_fe;
+ 		info = vtermno_to_xencons(HVC_COOKIE);
+ 	} else {
+ 		ops = &domU_hvc_ops;
+@@ -581,7 +597,7 @@ static int __init xen_hvc_init(void)
+ 		else
+ 			r = xen_pv_console_init();
+ 		if (r < 0)
+-			return r;
++			goto register_fe;
+ 
+ 		info = vtermno_to_xencons(HVC_COOKIE);
+ 		info->irq = bind_evtchn_to_irq_lateeoi(info->evtchn);
+@@ -600,12 +616,13 @@ static int __init xen_hvc_init(void)
+ 		list_del(&info->list);
+ 		spin_unlock_irqrestore(&xencons_lock, flags);
+ 		if (info->irq)
+-			unbind_from_irqhandler(info->irq, NULL);
++			evtchn_put(info->evtchn);
+ 		kfree(info);
+ 		return r;
+ 	}
+ 
+ 	r = 0;
++ register_fe:
+ #ifdef CONFIG_HVC_XEN_FRONTEND
+ 	r = xenbus_register_frontend(&xencons_driver);
+ #endif
+diff --git a/drivers/tty/serial/meson_uart.c b/drivers/tty/serial/meson_uart.c
+index 91b7359b79a2f..bb66a3f06626c 100644
+--- a/drivers/tty/serial/meson_uart.c
++++ b/drivers/tty/serial/meson_uart.c
+@@ -370,10 +370,14 @@ static void meson_uart_set_termios(struct uart_port *port,
+ 	else
+ 		val |= AML_UART_STOP_BIT_1SB;
+ 
+-	if (cflags & CRTSCTS)
+-		val &= ~AML_UART_TWO_WIRE_EN;
+-	else
++	if (cflags & CRTSCTS) {
++		if (port->flags & UPF_HARD_FLOW)
++			val &= ~AML_UART_TWO_WIRE_EN;
++		else
++			termios->c_cflag &= ~CRTSCTS;
++	} else {
+ 		val |= AML_UART_TWO_WIRE_EN;
++	}
+ 
+ 	writel(val, port->membase + AML_UART_CONTROL);
+ 
+@@ -726,15 +730,19 @@ static int meson_uart_probe_clocks(struct platform_device *pdev,
+ 
+ static int meson_uart_probe(struct platform_device *pdev)
+ {
+-	struct resource *res_mem, *res_irq;
++	struct resource *res_mem;
+ 	struct uart_port *port;
++	u32 fifosize = 64; /* Default is 64, 128 for EE UART_0 */
+ 	int ret = 0;
+-	int id = -1;
++	int irq;
++	bool has_rtscts;
+ 
+ 	if (pdev->dev.of_node)
+ 		pdev->id = of_alias_get_id(pdev->dev.of_node, "serial");
+ 
+ 	if (pdev->id < 0) {
++		int id;
++
+ 		for (id = AML_UART_PORT_OFFSET; id < AML_UART_PORT_NUM; id++) {
+ 			if (!meson_ports[id]) {
+ 				pdev->id = id;
+@@ -750,9 +758,12 @@ static int meson_uart_probe(struct platform_device *pdev)
+ 	if (!res_mem)
+ 		return -ENODEV;
+ 
+-	res_irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+-	if (!res_irq)
+-		return -ENODEV;
++	irq = platform_get_irq(pdev, 0);
++	if (irq < 0)
++		return irq;
++
++	of_property_read_u32(pdev->dev.of_node, "fifo-size", &fifosize);
++	has_rtscts = of_property_read_bool(pdev->dev.of_node, "uart-has-rtscts");
+ 
+ 	if (meson_ports[pdev->id]) {
+ 		dev_err(&pdev->dev, "port %d already allocated\n", pdev->id);
+@@ -775,15 +786,17 @@ static int meson_uart_probe(struct platform_device *pdev)
+ 	port->iotype = UPIO_MEM;
+ 	port->mapbase = res_mem->start;
+ 	port->mapsize = resource_size(res_mem);
+-	port->irq = res_irq->start;
++	port->irq = irq;
+ 	port->flags = UPF_BOOT_AUTOCONF | UPF_LOW_LATENCY;
++	if (has_rtscts)
++		port->flags |= UPF_HARD_FLOW;
+ 	port->has_sysrq = IS_ENABLED(CONFIG_SERIAL_MESON_CONSOLE);
+ 	port->dev = &pdev->dev;
+ 	port->line = pdev->id;
+ 	port->type = PORT_MESON;
+ 	port->x_char = 0;
+ 	port->ops = &meson_uart_ops;
+-	port->fifosize = 64;
++	port->fifosize = fifosize;
+ 
+ 	meson_ports[pdev->id] = port;
+ 	platform_set_drvdata(pdev, port);
+diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
+index 7ca209d4e0883..ad08eddc15326 100644
+--- a/drivers/tty/sysrq.c
++++ b/drivers/tty/sysrq.c
+@@ -262,13 +262,14 @@ static void sysrq_handle_showallcpus(int key)
+ 		if (in_irq())
+ 			regs = get_irq_regs();
+ 
+-		pr_info("CPU%d:\n", smp_processor_id());
++		pr_info("CPU%d:\n", get_cpu());
+ 		if (regs)
+ 			show_regs(regs);
+ 		else
+ 			show_stack(NULL, NULL, KERN_INFO);
+ 
+ 		schedule_work(&sysrq_showallcpus);
++		put_cpu();
+ 	}
+ }
+ 
+diff --git a/drivers/tty/vcc.c b/drivers/tty/vcc.c
+index 9ffd42e333b83..6b2d35ac6e3b3 100644
+--- a/drivers/tty/vcc.c
++++ b/drivers/tty/vcc.c
+@@ -587,18 +587,22 @@ static int vcc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ 		return -ENOMEM;
+ 
+ 	name = kstrdup(dev_name(&vdev->dev), GFP_KERNEL);
++	if (!name) {
++		rv = -ENOMEM;
++		goto free_port;
++	}
+ 
+ 	rv = vio_driver_init(&port->vio, vdev, VDEV_CONSOLE_CON, vcc_versions,
+ 			     ARRAY_SIZE(vcc_versions), NULL, name);
+ 	if (rv)
+-		goto free_port;
++		goto free_name;
+ 
+ 	port->vio.debug = vcc_dbg_vio;
+ 	vcc_ldc_cfg.debug = vcc_dbg_ldc;
+ 
+ 	rv = vio_ldc_alloc(&port->vio, &vcc_ldc_cfg, port);
+ 	if (rv)
+-		goto free_port;
++		goto free_name;
+ 
+ 	spin_lock_init(&port->lock);
+ 
+@@ -632,6 +636,11 @@ static int vcc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ 		goto unreg_tty;
+ 	}
+ 	port->domain = kstrdup(domain, GFP_KERNEL);
++	if (!port->domain) {
++		rv = -ENOMEM;
++		goto unreg_tty;
++	}
++
+ 
+ 	mdesc_release(hp);
+ 
+@@ -661,8 +670,9 @@ free_table:
+ 	vcc_table_remove(port->index);
+ free_ldc:
+ 	vio_ldc_free(&port->vio);
+-free_port:
++free_name:
+ 	kfree(name);
++free_port:
+ 	kfree(port);
+ 
+ 	return rv;
+diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
+index 00aea45a04e95..d42cd1d036bdf 100644
+--- a/drivers/usb/gadget/function/f_ncm.c
++++ b/drivers/usb/gadget/function/f_ncm.c
+@@ -1435,7 +1435,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
+ 	struct usb_composite_dev *cdev = c->cdev;
+ 	struct f_ncm		*ncm = func_to_ncm(f);
+ 	struct usb_string	*us;
+-	int			status;
++	int			status = 0;
+ 	struct usb_ep		*ep;
+ 	struct f_ncm_opts	*ncm_opts;
+ 
+@@ -1453,22 +1453,17 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
+ 		f->os_desc_table[0].os_desc = &ncm_opts->ncm_os_desc;
+ 	}
+ 
+-	/*
+-	 * in drivers/usb/gadget/configfs.c:configfs_composite_bind()
+-	 * configurations are bound in sequence with list_for_each_entry,
+-	 * in each configuration its functions are bound in sequence
+-	 * with list_for_each_entry, so we assume no race condition
+-	 * with regard to ncm_opts->bound access
+-	 */
+-	if (!ncm_opts->bound) {
+-		mutex_lock(&ncm_opts->lock);
+-		gether_set_gadget(ncm_opts->net, cdev->gadget);
++	mutex_lock(&ncm_opts->lock);
++	gether_set_gadget(ncm_opts->net, cdev->gadget);
++	if (!ncm_opts->bound)
+ 		status = gether_register_netdev(ncm_opts->net);
+-		mutex_unlock(&ncm_opts->lock);
+-		if (status)
+-			goto fail;
+-		ncm_opts->bound = true;
+-	}
++	mutex_unlock(&ncm_opts->lock);
++
++	if (status)
++		goto fail;
++
++	ncm_opts->bound = true;
++
+ 	us = usb_gstrings_attach(cdev, ncm_strings,
+ 				 ARRAY_SIZE(ncm_string_defs));
+ 	if (IS_ERR(us)) {
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 7d0c2cccbfc0f..0933a94ebe29e 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -505,7 +505,9 @@ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 	/* USB-2 and USB-3 roothubs initialized, allow runtime pm suspend */
+ 	pm_runtime_put_noidle(&dev->dev);
+ 
+-	if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW)
++	if (pci_choose_state(dev, PMSG_SUSPEND) == PCI_D0)
++		pm_runtime_forbid(&dev->dev);
++	else if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW)
+ 		pm_runtime_allow(&dev->dev);
+ 
+ 	dma_set_max_seg_size(&dev->dev, UINT_MAX);
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 5ee095a5d38aa..eb70f07e3623a 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -868,6 +868,58 @@ static void xhci_handle_halted_endpoint(struct xhci_hcd *xhci,
+ 	xhci_ring_cmd_db(xhci);
+ }
+ 
++/*
++ * Fix up the ep ring first, so HW stops executing cancelled TDs.
++ * We have the xHCI lock, so nothing can modify this list until we drop it.
++ * We're also in the event handler, so we can't get re-interrupted if another
++ * Stop Endpoint command completes.
++ */
++
++static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep,
++					 struct xhci_dequeue_state *deq_state)
++{
++	struct xhci_hcd		*xhci;
++	struct xhci_td		*td = NULL;
++	struct xhci_td		*tmp_td = NULL;
++	struct xhci_ring	*ring;
++	u64			hw_deq;
++
++	xhci = ep->xhci;
++
++	list_for_each_entry_safe(td, tmp_td, &ep->cancelled_td_list, cancelled_td_list) {
++		xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
++				"Removing canceled TD starting at 0x%llx (dma).",
++				(unsigned long long)xhci_trb_virt_to_dma(
++					td->start_seg, td->first_trb));
++		list_del_init(&td->td_list);
++		ring = xhci_urb_to_transfer_ring(xhci, td->urb);
++		if (!ring) {
++			xhci_warn(xhci, "WARN Cancelled URB %p has invalid stream ID %u.\n",
++				  td->urb, td->urb->stream_id);
++			continue;
++		}
++		/*
++		 * If ring stopped on the TD we need to cancel, then we have to
++		 * move the xHC endpoint ring dequeue pointer past this TD.
++		 */
++		hw_deq = xhci_get_hw_deq(xhci, ep->vdev, ep->ep_index,
++					 td->urb->stream_id);
++		hw_deq &= ~0xf;
++
++		if (trb_in_td(xhci, td->start_seg, td->first_trb,
++			      td->last_trb, hw_deq, false)) {
++			xhci_find_new_dequeue_state(xhci, ep->vdev->slot_id,
++						    ep->ep_index,
++						    td->urb->stream_id,
++						    td, deq_state);
++		} else {
++			td_to_noop(xhci, ring, td, false);
++		}
++
++	}
++	return 0;
++}
++
+ /*
+  * When we get a command completion for a Stop Endpoint Command, we need to
+  * unlink any cancelled TDs from the ring.  There are two ways to do that:
+@@ -888,7 +940,6 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id,
+ 	struct xhci_td *last_unlinked_td;
+ 	struct xhci_ep_ctx *ep_ctx;
+ 	struct xhci_virt_device *vdev;
+-	u64 hw_deq;
+ 	struct xhci_dequeue_state deq_state;
+ 
+ 	if (unlikely(TRB_TO_SUSPEND_PORT(le32_to_cpu(trb->generic.field[3])))) {
+@@ -919,60 +970,7 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id,
+ 		return;
+ 	}
+ 
+-	/* Fix up the ep ring first, so HW stops executing cancelled TDs.
+-	 * We have the xHCI lock, so nothing can modify this list until we drop
+-	 * it.  We're also in the event handler, so we can't get re-interrupted
+-	 * if another Stop Endpoint command completes
+-	 */
+-	list_for_each_entry(cur_td, &ep->cancelled_td_list, cancelled_td_list) {
+-		xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
+-				"Removing canceled TD starting at 0x%llx (dma).",
+-				(unsigned long long)xhci_trb_virt_to_dma(
+-					cur_td->start_seg, cur_td->first_trb));
+-		ep_ring = xhci_urb_to_transfer_ring(xhci, cur_td->urb);
+-		if (!ep_ring) {
+-			/* This shouldn't happen unless a driver is mucking
+-			 * with the stream ID after submission.  This will
+-			 * leave the TD on the hardware ring, and the hardware
+-			 * will try to execute it, and may access a buffer
+-			 * that has already been freed.  In the best case, the
+-			 * hardware will execute it, and the event handler will
+-			 * ignore the completion event for that TD, since it was
+-			 * removed from the td_list for that endpoint.  In
+-			 * short, don't muck with the stream ID after
+-			 * submission.
+-			 */
+-			xhci_warn(xhci, "WARN Cancelled URB %p "
+-					"has invalid stream ID %u.\n",
+-					cur_td->urb,
+-					cur_td->urb->stream_id);
+-			goto remove_finished_td;
+-		}
+-		/*
+-		 * If we stopped on the TD we need to cancel, then we have to
+-		 * move the xHC endpoint ring dequeue pointer past this TD.
+-		 */
+-		hw_deq = xhci_get_hw_deq(xhci, vdev, ep_index,
+-					 cur_td->urb->stream_id);
+-		hw_deq &= ~0xf;
+-
+-		if (trb_in_td(xhci, cur_td->start_seg, cur_td->first_trb,
+-			      cur_td->last_trb, hw_deq, false)) {
+-			xhci_find_new_dequeue_state(xhci, slot_id, ep_index,
+-						    cur_td->urb->stream_id,
+-						    cur_td, &deq_state);
+-		} else {
+-			td_to_noop(xhci, ep_ring, cur_td, false);
+-		}
+-
+-remove_finished_td:
+-		/*
+-		 * The event handler won't see a completion for this TD anymore,
+-		 * so remove it from the endpoint ring's TD list.  Keep it in
+-		 * the cancelled TD list for URB completion later.
+-		 */
+-		list_del_init(&cur_td->td_list);
+-	}
++	xhci_invalidate_cancelled_tds(ep, &deq_state);
+ 
+ 	xhci_stop_watchdog_timer_in_irq(xhci, ep);
+ 
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 52891546e6973..24e39984914fe 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -551,7 +551,9 @@ static void lateeoi_list_add(struct irq_info *info)
+ 
+ 	spin_lock_irqsave(&eoi->eoi_list_lock, flags);
+ 
+-	if (list_empty(&eoi->eoi_list)) {
++	elem = list_first_entry_or_null(&eoi->eoi_list, struct irq_info,
++					eoi_list);
++	if (!elem || info->eoi_time < elem->eoi_time) {
+ 		list_add(&info->eoi_list, &eoi->eoi_list);
+ 		mod_delayed_work_on(info->eoi_cpu, system_wq,
+ 				    &eoi->delayed, delay);
+diff --git a/fs/btrfs/delalloc-space.c b/fs/btrfs/delalloc-space.c
+index bacee09b7bfdd..45b2a5ecd63a6 100644
+--- a/fs/btrfs/delalloc-space.c
++++ b/fs/btrfs/delalloc-space.c
+@@ -307,9 +307,6 @@ int btrfs_delalloc_reserve_metadata(struct btrfs_inode *inode, u64 num_bytes)
+ 	} else {
+ 		if (current->journal_info)
+ 			flush = BTRFS_RESERVE_FLUSH_LIMIT;
+-
+-		if (btrfs_transaction_in_commit(fs_info))
+-			schedule_timeout(1);
+ 	}
+ 
+ 	num_bytes = ALIGN(num_bytes, fs_info->sectorsize);
+diff --git a/fs/cifs/cifs_spnego.c b/fs/cifs/cifs_spnego.c
+index 7b9b876b513bd..4f9d08ac9dde5 100644
+--- a/fs/cifs/cifs_spnego.c
++++ b/fs/cifs/cifs_spnego.c
+@@ -76,8 +76,8 @@ struct key_type cifs_spnego_key_type = {
+  * strlen(";sec=ntlmsspi") */
+ #define MAX_MECH_STR_LEN	13
+ 
+-/* strlen of "host=" */
+-#define HOST_KEY_LEN		5
++/* strlen of ";host=" */
++#define HOST_KEY_LEN		6
+ 
+ /* strlen of ";ip4=" or ";ip6=" */
+ #define IP_KEY_LEN		5
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index e6fa76ab70be7..d659eb70df76d 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -433,6 +433,8 @@ generate_smb3signingkey(struct cifs_ses *ses,
+ 				  ptriplet->encryption.context,
+ 				  ses->smb3encryptionkey,
+ 				  SMB3_ENC_DEC_KEY_SIZE);
++		if (rc)
++			return rc;
+ 		rc = generate_key(ses, ptriplet->decryption.label,
+ 				  ptriplet->decryption.context,
+ 				  ses->smb3decryptionkey,
+@@ -441,9 +443,6 @@ generate_smb3signingkey(struct cifs_ses *ses,
+ 			return rc;
+ 	}
+ 
+-	if (rc)
+-		return rc;
+-
+ #ifdef CONFIG_CIFS_DEBUG_DUMP_KEYS
+ 	cifs_dbg(VFS, "%s: dumping generated AES session keys\n", __func__);
+ 	/*
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index bd00afc5e4c16..d62d961e278d9 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -330,14 +330,20 @@ static int exfat_find_empty_entry(struct inode *inode,
+ 		if (exfat_check_max_dentries(inode))
+ 			return -ENOSPC;
+ 
+-		/* we trust p_dir->size regardless of FAT type */
+-		if (exfat_find_last_cluster(sb, p_dir, &last_clu))
+-			return -EIO;
+-
+ 		/*
+ 		 * Allocate new cluster to this directory
+ 		 */
+-		exfat_chain_set(&clu, last_clu + 1, 0, p_dir->flags);
++		if (ei->start_clu != EXFAT_EOF_CLUSTER) {
++			/* we trust p_dir->size regardless of FAT type */
++			if (exfat_find_last_cluster(sb, p_dir, &last_clu))
++				return -EIO;
++
++			exfat_chain_set(&clu, last_clu + 1, 0, p_dir->flags);
++		} else {
++			/* This directory is empty */
++			exfat_chain_set(&clu, EXFAT_EOF_CLUSTER, 0,
++					ALLOC_NO_FAT_CHAIN);
++		}
+ 
+ 		/* allocate a cluster */
+ 		ret = exfat_alloc_cluster(inode, 1, &clu);
+@@ -347,6 +353,11 @@ static int exfat_find_empty_entry(struct inode *inode,
+ 		if (exfat_zeroed_cluster(inode, clu.dir))
+ 			return -EIO;
+ 
++		if (ei->start_clu == EXFAT_EOF_CLUSTER) {
++			ei->start_clu = clu.dir;
++			p_dir->dir = clu.dir;
++		}
++
+ 		/* append to the FAT chain */
+ 		if (clu.flags != p_dir->flags) {
+ 			/* no-fat-chain bit is disabled,
+@@ -644,7 +655,7 @@ static int exfat_find(struct inode *dir, struct qstr *qname,
+ 	info->type = exfat_get_entry_type(ep);
+ 	info->attr = le16_to_cpu(ep->dentry.file.attr);
+ 	info->size = le64_to_cpu(ep2->dentry.stream.valid_size);
+-	if ((info->type == TYPE_FILE) && (info->size == 0)) {
++	if (info->size == 0) {
+ 		info->flags = ALLOC_NO_FAT_CHAIN;
+ 		info->start_clu = EXFAT_EOF_CLUSTER;
+ 	} else {
+@@ -890,6 +901,9 @@ static int exfat_check_dir_empty(struct super_block *sb,
+ 
+ 	dentries_per_clu = sbi->dentries_per_clu;
+ 
++	if (p_dir->dir == EXFAT_EOF_CLUSTER)
++		return 0;
++
+ 	exfat_chain_dup(&clu, p_dir);
+ 
+ 	while (clu.dir != EXFAT_EOF_CLUSTER) {
+@@ -1296,7 +1310,8 @@ static int __exfat_rename(struct inode *old_parent_inode,
+ 		}
+ 
+ 		/* Free the clusters if new_inode is a dir(as if exfat_rmdir) */
+-		if (new_entry_type == TYPE_DIR) {
++		if (new_entry_type == TYPE_DIR &&
++		    new_ei->start_clu != EXFAT_EOF_CLUSTER) {
+ 			/* new_ei, new_clu_to_free */
+ 			struct exfat_chain new_clu_to_free;
+ 
+diff --git a/fs/ext4/acl.h b/fs/ext4/acl.h
+index 9b63f5416a2f0..7f3b25b3fa6d3 100644
+--- a/fs/ext4/acl.h
++++ b/fs/ext4/acl.h
+@@ -67,6 +67,11 @@ extern int ext4_init_acl(handle_t *, struct inode *, struct inode *);
+ static inline int
+ ext4_init_acl(handle_t *handle, struct inode *inode, struct inode *dir)
+ {
++	/* usually, the umask is applied by posix_acl_create(), but if
++	   ext4 ACL support is disabled at compile time, we need to do
++	   it here, because posix_acl_create() will never be called */
++	inode->i_mode &= ~current_umask();
++
+ 	return 0;
+ }
+ #endif  /* CONFIG_EXT4_FS_POSIX_ACL */
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index fee54ab42bbaa..7806adcc41a7a 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -1366,8 +1366,8 @@ retry:
+ 			}
+ 		}
+ 		if (count_reserved)
+-			count_rsvd(inode, lblk, orig_es.es_len - len1 - len2,
+-				   &orig_es, &rc);
++			count_rsvd(inode, orig_es.es_lblk + len1,
++				   orig_es.es_len - len1 - len2, &orig_es, &rc);
+ 		goto out_get_reserved;
+ 	}
+ 
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 51cebc1990eb1..9b4199a1e0397 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -554,13 +554,8 @@ static int setup_new_flex_group_blocks(struct super_block *sb,
+ 		if (meta_bg == 0 && !ext4_bg_has_super(sb, group))
+ 			goto handle_itb;
+ 
+-		if (meta_bg == 1) {
+-			ext4_group_t first_group;
+-			first_group = ext4_meta_bg_first_group(sb, group);
+-			if (first_group != group + 1 &&
+-			    first_group != group + EXT4_DESC_PER_BLOCK(sb) - 1)
+-				goto handle_itb;
+-		}
++		if (meta_bg == 1)
++			goto handle_itb;
+ 
+ 		block = start + ext4_bg_has_super(sb, group);
+ 		/* Copy all of the GDT blocks into the backup in this group */
+@@ -1543,6 +1538,8 @@ exit_journal:
+ 		int gdb_num_end = ((group + flex_gd->count - 1) /
+ 				   EXT4_DESC_PER_BLOCK(sb));
+ 		int meta_bg = ext4_has_feature_meta_bg(sb);
++		sector_t padding_blocks = meta_bg ? 0 : sbi->s_sbh->b_blocknr -
++					 ext4_group_first_block_no(sb, 0);
+ 		sector_t old_gdb = 0;
+ 
+ 		update_backups(sb, ext4_group_first_block_no(sb, 0),
+@@ -1554,8 +1551,8 @@ exit_journal:
+ 						     gdb_num);
+ 			if (old_gdb == gdb_bh->b_blocknr)
+ 				continue;
+-			update_backups(sb, gdb_bh->b_blocknr, gdb_bh->b_data,
+-				       gdb_bh->b_size, meta_bg);
++			update_backups(sb, gdb_bh->b_blocknr - padding_blocks,
++				       gdb_bh->b_data, gdb_bh->b_size, meta_bg);
+ 			old_gdb = gdb_bh->b_blocknr;
+ 		}
+ 	}
+@@ -1916,9 +1913,7 @@ static int ext4_convert_meta_bg(struct super_block *sb, struct inode *inode)
+ 
+ errout:
+ 	ret = ext4_journal_stop(handle);
+-	if (!err)
+-		err = ret;
+-	return ret;
++	return err ? err : ret;
+ 
+ invalid_resize_inode:
+ 	ext4_error(sb, "corrupted/inconsistent resize inode");
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 1be9de40f0b5a..a94e102d15866 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1574,7 +1574,7 @@ unlock:
+ int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi)
+ {
+ 	dev_t dev = sbi->sb->s_bdev->bd_dev;
+-	char slab_name[32];
++	char slab_name[35];
+ 
+ 	sprintf(slab_name, "f2fs_page_array_entry-%u:%u", MAJOR(dev), MINOR(dev));
+ 
+diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
+index 74a6b0800e059..d75d56d9ea0ca 100644
+--- a/fs/gfs2/inode.c
++++ b/fs/gfs2/inode.c
+@@ -1835,16 +1835,24 @@ out:
+ 
+ int gfs2_permission(struct inode *inode, int mask)
+ {
++	int may_not_block = mask & MAY_NOT_BLOCK;
+ 	struct gfs2_inode *ip;
+ 	struct gfs2_holder i_gh;
++	struct gfs2_glock *gl;
+ 	int error;
+ 
+ 	gfs2_holder_mark_uninitialized(&i_gh);
+ 	ip = GFS2_I(inode);
+-	if (gfs2_glock_is_locked_by_me(ip->i_gl) == NULL) {
+-		if (mask & MAY_NOT_BLOCK)
++	gl = rcu_dereference_check(ip->i_gl, !may_not_block);
++	if (unlikely(!gl)) {
++		/* inode is getting torn down, must be RCU mode */
++		WARN_ON_ONCE(!may_not_block);
++		return -ECHILD;
++        }
++	if (gfs2_glock_is_locked_by_me(gl) == NULL) {
++		if (may_not_block)
+ 			return -ECHILD;
+-		error = gfs2_glock_nq_init(ip->i_gl, LM_ST_SHARED, LM_FLAG_ANY, &i_gh);
++		error = gfs2_glock_nq_init(gl, LM_ST_SHARED, LM_FLAG_ANY, &i_gh);
+ 		if (error)
+ 			return error;
+ 	}
+diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
+index ad953ecb58532..8c226aa286336 100644
+--- a/fs/gfs2/quota.c
++++ b/fs/gfs2/quota.c
+@@ -431,6 +431,17 @@ static int qd_check_sync(struct gfs2_sbd *sdp, struct gfs2_quota_data *qd,
+ 	    (sync_gen && (qd->qd_sync_gen >= *sync_gen)))
+ 		return 0;
+ 
++	/*
++	 * If qd_change is 0 it means a pending quota change was negated.
++	 * We should not sync it, but we still have a qd reference and slot
++	 * reference taken by gfs2_quota_change -> do_qc that need to be put.
++	 */
++	if (!qd->qd_change && test_and_clear_bit(QDF_CHANGE, &qd->qd_flags)) {
++		slot_put(qd);
++		qd_put(qd);
++		return 0;
++	}
++
+ 	if (!lockref_get_not_dead(&qd->qd_lockref))
+ 		return 0;
+ 
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index b61de8dab51a0..8cf4ef61cdc41 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1471,7 +1471,7 @@ out:
+ 		wait_on_bit_io(&ip->i_flags, GIF_GLOP_PENDING, TASK_UNINTERRUPTIBLE);
+ 		gfs2_glock_add_to_lru(ip->i_gl);
+ 		gfs2_glock_put_eventually(ip->i_gl);
+-		ip->i_gl = NULL;
++		rcu_assign_pointer(ip->i_gl, NULL);
+ 	}
+ }
+ 
+diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
+index 1ae1697fe99bd..3755a462d7c72 100644
+--- a/fs/jbd2/recovery.c
++++ b/fs/jbd2/recovery.c
+@@ -287,6 +287,8 @@ int jbd2_journal_recover(journal_t *journal)
+ 	journal_superblock_t *	sb;
+ 
+ 	struct recovery_info	info;
++	errseq_t		wb_err;
++	struct address_space	*mapping;
+ 
+ 	memset(&info, 0, sizeof(info));
+ 	sb = journal->j_superblock;
+@@ -304,6 +306,9 @@ int jbd2_journal_recover(journal_t *journal)
+ 		return 0;
+ 	}
+ 
++	wb_err = 0;
++	mapping = journal->j_fs_dev->bd_inode->i_mapping;
++	errseq_check_and_advance(&mapping->wb_err, &wb_err);
+ 	err = do_one_pass(journal, &info, PASS_SCAN);
+ 	if (!err)
+ 		err = do_one_pass(journal, &info, PASS_REVOKE);
+@@ -322,6 +327,9 @@ int jbd2_journal_recover(journal_t *journal)
+ 
+ 	jbd2_journal_clear_revoke(journal);
+ 	err2 = sync_blockdev(journal->j_fs_dev);
++	if (!err)
++		err = err2;
++	err2 = errseq_check_and_advance(&mapping->wb_err, &wb_err);
+ 	if (!err)
+ 		err = err2;
+ 	/* Make sure all replayed data is on permanent storage */
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index a9c078fc2302a..72eb5ed54c2ab 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -87,7 +87,7 @@ static int dbAllocCtl(struct bmap * bmp, s64 nblocks, int l2nb, s64 blkno,
+ static int dbExtend(struct inode *ip, s64 blkno, s64 nblocks, s64 addnblocks);
+ static int dbFindBits(u32 word, int l2nb);
+ static int dbFindCtl(struct bmap * bmp, int l2nb, int level, s64 * blkno);
+-static int dbFindLeaf(dmtree_t * tp, int l2nb, int *leafidx);
++static int dbFindLeaf(dmtree_t *tp, int l2nb, int *leafidx, bool is_ctl);
+ static int dbFreeBits(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 		      int nblocks);
+ static int dbFreeDmap(struct bmap * bmp, struct dmap * dp, s64 blkno,
+@@ -180,7 +180,8 @@ int dbMount(struct inode *ipbmap)
+ 	bmp->db_nfree = le64_to_cpu(dbmp_le->dn_nfree);
+ 
+ 	bmp->db_l2nbperpage = le32_to_cpu(dbmp_le->dn_l2nbperpage);
+-	if (bmp->db_l2nbperpage > L2PSIZE - L2MINBLOCKSIZE) {
++	if (bmp->db_l2nbperpage > L2PSIZE - L2MINBLOCKSIZE ||
++		bmp->db_l2nbperpage < 0) {
+ 		err = -EINVAL;
+ 		goto err_release_metapage;
+ 	}
+@@ -194,6 +195,12 @@ int dbMount(struct inode *ipbmap)
+ 	bmp->db_maxlevel = le32_to_cpu(dbmp_le->dn_maxlevel);
+ 	bmp->db_maxag = le32_to_cpu(dbmp_le->dn_maxag);
+ 	bmp->db_agpref = le32_to_cpu(dbmp_le->dn_agpref);
++	if (bmp->db_maxag >= MAXAG || bmp->db_maxag < 0 ||
++		bmp->db_agpref >= MAXAG || bmp->db_agpref < 0) {
++		err = -EINVAL;
++		goto err_release_metapage;
++	}
++
+ 	bmp->db_aglevel = le32_to_cpu(dbmp_le->dn_aglevel);
+ 	bmp->db_agheight = le32_to_cpu(dbmp_le->dn_agheight);
+ 	bmp->db_agwidth = le32_to_cpu(dbmp_le->dn_agwidth);
+@@ -1778,7 +1785,7 @@ static int dbFindCtl(struct bmap * bmp, int l2nb, int level, s64 * blkno)
+ 		 * dbFindLeaf() returns the index of the leaf at which
+ 		 * free space was found.
+ 		 */
+-		rc = dbFindLeaf((dmtree_t *) dcp, l2nb, &leafidx);
++		rc = dbFindLeaf((dmtree_t *) dcp, l2nb, &leafidx, true);
+ 
+ 		/* release the buffer.
+ 		 */
+@@ -2025,7 +2032,7 @@ dbAllocDmapLev(struct bmap * bmp,
+ 	 * free space.  if sufficient free space is found, dbFindLeaf()
+ 	 * returns the index of the leaf at which free space was found.
+ 	 */
+-	if (dbFindLeaf((dmtree_t *) & dp->tree, l2nb, &leafidx))
++	if (dbFindLeaf((dmtree_t *) &dp->tree, l2nb, &leafidx, false))
+ 		return -ENOSPC;
+ 
+ 	if (leafidx < 0)
+@@ -2985,14 +2992,18 @@ static void dbAdjTree(dmtree_t * tp, int leafno, int newval)
+  *	leafidx	- return pointer to be set to the index of the leaf
+  *		  describing at least l2nb free blocks if sufficient
+  *		  free blocks are found.
++ *	is_ctl	- determines if the tree is of type ctl
+  *
+  * RETURN VALUES:
+  *	0	- success
+  *	-ENOSPC	- insufficient free blocks.
+  */
+-static int dbFindLeaf(dmtree_t * tp, int l2nb, int *leafidx)
++static int dbFindLeaf(dmtree_t *tp, int l2nb, int *leafidx, bool is_ctl)
+ {
+ 	int ti, n = 0, k, x = 0;
++	int max_size;
++
++	max_size = is_ctl ? CTLTREESIZE : TREESIZE;
+ 
+ 	/* first check the root of the tree to see if there is
+ 	 * sufficient free space.
+@@ -3013,6 +3024,8 @@ static int dbFindLeaf(dmtree_t * tp, int l2nb, int *leafidx)
+ 			/* sufficient free space found.  move to the next
+ 			 * level (or quit if this is the last level).
+ 			 */
++			if (x + n > max_size)
++				return -ENOSPC;
+ 			if (l2nb <= tp->dmt_stree[x + n])
+ 				break;
+ 		}
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index 67c67604b8c85..14f918a4831d3 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -1322,7 +1322,7 @@ diInitInode(struct inode *ip, int iagno, int ino, int extno, struct iag * iagp)
+ int diAlloc(struct inode *pip, bool dir, struct inode *ip)
+ {
+ 	int rc, ino, iagno, addext, extno, bitno, sword;
+-	int nwords, rem, i, agno;
++	int nwords, rem, i, agno, dn_numag;
+ 	u32 mask, inosmap, extsmap;
+ 	struct inode *ipimap;
+ 	struct metapage *mp;
+@@ -1358,6 +1358,9 @@ int diAlloc(struct inode *pip, bool dir, struct inode *ip)
+ 
+ 	/* get the ag number of this iag */
+ 	agno = BLKTOAG(JFS_IP(pip)->agstart, JFS_SBI(pip->i_sb));
++	dn_numag = JFS_SBI(pip->i_sb)->bmap->db_numag;
++	if (agno < 0 || agno > dn_numag)
++		return -EIO;
+ 
+ 	if (atomic_read(&JFS_SBI(pip->i_sb)->bmap->db_active[agno])) {
+ 		/*
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 1c2ed14bccef2..f3f41027f6977 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -5508,7 +5508,7 @@ static void nfs4_proc_write_setup(struct nfs_pgio_header *hdr,
+ 
+ 	msg->rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_WRITE];
+ 	nfs4_init_sequence(&hdr->args.seq_args, &hdr->res.seq_res, 0, 0);
+-	nfs4_state_protect_write(server->nfs_client, clnt, msg, hdr);
++	nfs4_state_protect_write(hdr->ds_clp ? hdr->ds_clp : server->nfs_client, clnt, msg, hdr);
+ }
+ 
+ static void nfs4_proc_commit_rpc_prepare(struct rpc_task *task, struct nfs_commit_data *data)
+@@ -5549,7 +5549,8 @@ static void nfs4_proc_commit_setup(struct nfs_commit_data *data, struct rpc_mess
+ 	data->res.server = server;
+ 	msg->rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_COMMIT];
+ 	nfs4_init_sequence(&data->args.seq_args, &data->res.seq_res, 1, 0);
+-	nfs4_state_protect(server->nfs_client, NFS_SP4_MACH_CRED_COMMIT, clnt, msg);
++	nfs4_state_protect(data->ds_clp ? data->ds_clp : server->nfs_client,
++			NFS_SP4_MACH_CRED_COMMIT, clnt, msg);
+ }
+ 
+ static int _nfs4_proc_commit(struct file *dst, struct nfs_commitargs *args,
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index b045be7394a08..d402ca0b535f0 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -2647,7 +2647,7 @@ static int client_opens_release(struct inode *inode, struct file *file)
+ 
+ 	/* XXX: alternatively, we could get/drop in seq start/stop */
+ 	drop_client(clp);
+-	return 0;
++	return seq_release(inode, file);
+ }
+ 
+ static const struct file_operations client_states_fops = {
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index e0384095ca960..5d7df839902df 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -2028,7 +2028,7 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ 	sb->s_xattr = ovl_xattr_handlers;
+ 	sb->s_fs_info = ofs;
+ 	sb->s_flags |= SB_POSIXACL;
+-	sb->s_iflags |= SB_I_SKIP_SYNC | SB_I_IMA_UNVERIFIABLE_SIGNATURE;
++	sb->s_iflags |= SB_I_SKIP_SYNC;
+ 
+ 	err = -ENOMEM;
+ 	root_dentry = ovl_get_root(sb, upperpath.dentry, oe);
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index 682f2bf2e5259..aff9593feb73c 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -1767,7 +1767,6 @@ static const struct sysctl_alias sysctl_aliases[] = {
+ 	{"hung_task_panic",			"kernel.hung_task_panic" },
+ 	{"numa_zonelist_order",			"vm.numa_zonelist_order" },
+ 	{"softlockup_all_cpu_backtrace",	"kernel.softlockup_all_cpu_backtrace" },
+-	{"softlockup_panic",			"kernel.softlockup_panic" },
+ 	{ }
+ };
+ 
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index fd2ca079448d5..4bb4b4b79827a 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -2398,6 +2398,20 @@ static int vfs_setup_quota_inode(struct inode *inode, int type)
+ 	if (sb_has_quota_loaded(sb, type))
+ 		return -EBUSY;
+ 
++	/*
++	 * Quota files should never be encrypted.  They should be thought of as
++	 * filesystem metadata, not user data.  New-style internal quota files
++	 * cannot be encrypted by users anyway, but old-style external quota
++	 * files could potentially be incorrectly created in an encrypted
++	 * directory, hence this explicit check.  Some reasons why encrypted
++	 * quota files don't work include: (1) some filesystems that support
++	 * encryption don't handle it in their quota_read and quota_write, and
++	 * (2) cleaning up encrypted quota files at unmount would need special
++	 * consideration, as quota files are cleaned up later than user files.
++	 */
++	if (IS_ENCRYPTED(inode))
++		return -EINVAL;
++
+ 	dqopt->files[type] = igrab(inode);
+ 	if (!dqopt->files[type])
+ 		return -EIO;
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index d13631a5e9087..693ed9c614b65 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -48,7 +48,7 @@ LSM_HOOK(int, 0, quota_on, struct dentry *dentry)
+ LSM_HOOK(int, 0, syslog, int type)
+ LSM_HOOK(int, 0, settime, const struct timespec64 *ts,
+ 	 const struct timezone *tz)
+-LSM_HOOK(int, 0, vm_enough_memory, struct mm_struct *mm, long pages)
++LSM_HOOK(int, 1, vm_enough_memory, struct mm_struct *mm, long pages)
+ LSM_HOOK(int, 0, bprm_creds_for_exec, struct linux_binprm *bprm)
+ LSM_HOOK(int, 0, bprm_creds_from_file, struct linux_binprm *bprm, struct file *file)
+ LSM_HOOK(int, 0, bprm_check_security, struct linux_binprm *bprm)
+@@ -255,7 +255,7 @@ LSM_HOOK(void, LSM_RET_VOID, release_secctx, char *secdata, u32 seclen)
+ LSM_HOOK(void, LSM_RET_VOID, inode_invalidate_secctx, struct inode *inode)
+ LSM_HOOK(int, 0, inode_notifysecctx, struct inode *inode, void *ctx, u32 ctxlen)
+ LSM_HOOK(int, 0, inode_setsecctx, struct dentry *dentry, void *ctx, u32 ctxlen)
+-LSM_HOOK(int, 0, inode_getsecctx, struct inode *inode, void **ctx,
++LSM_HOOK(int, -EOPNOTSUPP, inode_getsecctx, struct inode *inode, void **ctx,
+ 	 u32 *ctxlen)
+ 
+ #if defined(CONFIG_SECURITY) && defined(CONFIG_WATCH_QUEUE)
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 4f95b98215d81..2cd89af4dbf62 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -56,6 +56,8 @@
+ #include <linux/ptp_clock_kernel.h>
+ #include <net/devlink.h>
+ 
++#define MLX5_ADEV_NAME "mlx5_core"
++
+ enum {
+ 	MLX5_BOARD_ID_LEN = 64,
+ };
+diff --git a/include/linux/pwm.h b/include/linux/pwm.h
+index a13ff383fa1d5..c0cf6613373f9 100644
+--- a/include/linux/pwm.h
++++ b/include/linux/pwm.h
+@@ -44,8 +44,8 @@ struct pwm_args {
+ };
+ 
+ enum {
+-	PWMF_REQUESTED = 1 << 0,
+-	PWMF_EXPORTED = 1 << 1,
++	PWMF_REQUESTED = 0,
++	PWMF_EXPORTED = 1,
+ };
+ 
+ /*
+diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
+index 02e7a5863d289..41ed614e69209 100644
+--- a/include/linux/sunrpc/clnt.h
++++ b/include/linux/sunrpc/clnt.h
+@@ -79,6 +79,7 @@ struct rpc_clnt {
+ 		struct work_struct	cl_work;
+ 	};
+ 	const struct cred	*cl_cred;
++	struct super_block *pipefs_sb;
+ };
+ 
+ /*
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index 6fb20722f49b7..f7ed0471d5a85 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -369,6 +369,7 @@ enum {
+ 	EVENT_FILE_FL_TRIGGER_COND_BIT,
+ 	EVENT_FILE_FL_PID_FILTER_BIT,
+ 	EVENT_FILE_FL_WAS_ENABLED_BIT,
++	EVENT_FILE_FL_FREED_BIT,
+ };
+ 
+ extern struct trace_event_file *trace_get_event_file(const char *instance,
+@@ -507,6 +508,7 @@ extern int __kprobe_event_add_fields(struct dynevent_cmd *cmd, ...);
+  *  TRIGGER_COND  - When set, one or more triggers has an associated filter
+  *  PID_FILTER    - When set, the event is filtered based on pid
+  *  WAS_ENABLED   - Set when enabled to know to clear trace on module removal
++ *  FREED         - File descriptor is freed, all fields should be considered invalid
+  */
+ enum {
+ 	EVENT_FILE_FL_ENABLED		= (1 << EVENT_FILE_FL_ENABLED_BIT),
+@@ -520,6 +522,7 @@ enum {
+ 	EVENT_FILE_FL_TRIGGER_COND	= (1 << EVENT_FILE_FL_TRIGGER_COND_BIT),
+ 	EVENT_FILE_FL_PID_FILTER	= (1 << EVENT_FILE_FL_PID_FILTER_BIT),
+ 	EVENT_FILE_FL_WAS_ENABLED	= (1 << EVENT_FILE_FL_WAS_ENABLED_BIT),
++	EVENT_FILE_FL_FREED		= (1 << EVENT_FILE_FL_FREED_BIT),
+ };
+ 
+ struct trace_event_file {
+@@ -548,6 +551,7 @@ struct trace_event_file {
+ 	 * caching and such. Which is mostly OK ;-)
+ 	 */
+ 	unsigned long		flags;
++	atomic_t		ref;	/* ref count for opened files */
+ 	atomic_t		sm_ref;	/* soft-mode reference counter */
+ 	atomic_t		tm_ref;	/* trigger-mode reference counter */
+ };
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 1d59a109417d2..2237657514e14 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1479,13 +1479,10 @@ struct nft_trans_chain {
+ 
+ struct nft_trans_table {
+ 	bool				update;
+-	bool				enable;
+ };
+ 
+ #define nft_trans_table_update(trans)	\
+ 	(((struct nft_trans_table *)trans->data)->update)
+-#define nft_trans_table_enable(trans)	\
+-	(((struct nft_trans_table *)trans->data)->enable)
+ 
+ struct nft_trans_elem {
+ 	struct nft_set			*set;
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 234196d904238..87ee284ea9cb3 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1853,21 +1853,33 @@ static inline void sk_tx_queue_set(struct sock *sk, int tx_queue)
+ 	/* sk_tx_queue_mapping accept only upto a 16-bit value */
+ 	if (WARN_ON_ONCE((unsigned short)tx_queue >= USHRT_MAX))
+ 		return;
+-	sk->sk_tx_queue_mapping = tx_queue;
++	/* Paired with READ_ONCE() in sk_tx_queue_get() and
++	 * other WRITE_ONCE() because socket lock might be not held.
++	 */
++	WRITE_ONCE(sk->sk_tx_queue_mapping, tx_queue);
+ }
+ 
+ #define NO_QUEUE_MAPPING	USHRT_MAX
+ 
+ static inline void sk_tx_queue_clear(struct sock *sk)
+ {
+-	sk->sk_tx_queue_mapping = NO_QUEUE_MAPPING;
++	/* Paired with READ_ONCE() in sk_tx_queue_get() and
++	 * other WRITE_ONCE() because socket lock might be not held.
++	 */
++	WRITE_ONCE(sk->sk_tx_queue_mapping, NO_QUEUE_MAPPING);
+ }
+ 
+ static inline int sk_tx_queue_get(const struct sock *sk)
+ {
+-	if (sk && sk->sk_tx_queue_mapping != NO_QUEUE_MAPPING)
+-		return sk->sk_tx_queue_mapping;
++	if (sk) {
++		/* Paired with WRITE_ONCE() in sk_tx_queue_clear()
++		 * and sk_tx_queue_set().
++		 */
++		int val = READ_ONCE(sk->sk_tx_queue_mapping);
+ 
++		if (val != NO_QUEUE_MAPPING)
++			return val;
++	}
+ 	return -1;
+ }
+ 
+@@ -2001,7 +2013,7 @@ static inline void __dst_negative_advice(struct sock *sk)
+ 		if (ndst != dst) {
+ 			rcu_assign_pointer(sk->sk_dst_cache, ndst);
+ 			sk_tx_queue_clear(sk);
+-			sk->sk_dst_pending_confirm = 0;
++			WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
+ 		}
+ 	}
+ }
+@@ -2018,7 +2030,7 @@ __sk_dst_set(struct sock *sk, struct dst_entry *dst)
+ 	struct dst_entry *old_dst;
+ 
+ 	sk_tx_queue_clear(sk);
+-	sk->sk_dst_pending_confirm = 0;
++	WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
+ 	old_dst = rcu_dereference_protected(sk->sk_dst_cache,
+ 					    lockdep_sock_is_held(sk));
+ 	rcu_assign_pointer(sk->sk_dst_cache, dst);
+@@ -2031,7 +2043,7 @@ sk_dst_set(struct sock *sk, struct dst_entry *dst)
+ 	struct dst_entry *old_dst;
+ 
+ 	sk_tx_queue_clear(sk);
+-	sk->sk_dst_pending_confirm = 0;
++	WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
+ 	old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst);
+ 	dst_release(old_dst);
+ }
+diff --git a/include/sound/soc-card.h b/include/sound/soc-card.h
+index 4f2cc4fb56b7f..9a5429260ece5 100644
+--- a/include/sound/soc-card.h
++++ b/include/sound/soc-card.h
+@@ -40,6 +40,43 @@ int snd_soc_card_add_dai_link(struct snd_soc_card *card,
+ void snd_soc_card_remove_dai_link(struct snd_soc_card *card,
+ 				  struct snd_soc_dai_link *dai_link);
+ 
++#ifdef CONFIG_PCI
++static inline void snd_soc_card_set_pci_ssid(struct snd_soc_card *card,
++					     unsigned short vendor,
++					     unsigned short device)
++{
++	card->pci_subsystem_vendor = vendor;
++	card->pci_subsystem_device = device;
++	card->pci_subsystem_set = true;
++}
++
++static inline int snd_soc_card_get_pci_ssid(struct snd_soc_card *card,
++					    unsigned short *vendor,
++					    unsigned short *device)
++{
++	if (!card->pci_subsystem_set)
++		return -ENOENT;
++
++	*vendor = card->pci_subsystem_vendor;
++	*device = card->pci_subsystem_device;
++
++	return 0;
++}
++#else /* !CONFIG_PCI */
++static inline void snd_soc_card_set_pci_ssid(struct snd_soc_card *card,
++					     unsigned short vendor,
++					     unsigned short device)
++{
++}
++
++static inline int snd_soc_card_get_pci_ssid(struct snd_soc_card *card,
++					    unsigned short *vendor,
++					    unsigned short *device)
++{
++	return -ENOENT;
++}
++#endif /* CONFIG_PCI */
++
+ /* device driver data */
+ static inline void snd_soc_card_set_drvdata(struct snd_soc_card *card,
+ 					    void *data)
+diff --git a/include/sound/soc.h b/include/sound/soc.h
+index 3b038c563ae14..e973044143bc9 100644
+--- a/include/sound/soc.h
++++ b/include/sound/soc.h
+@@ -977,6 +977,17 @@ struct snd_soc_card {
+ #ifdef CONFIG_DMI
+ 	char dmi_longname[80];
+ #endif /* CONFIG_DMI */
++
++#ifdef CONFIG_PCI
++	/*
++	 * PCI does not define 0 as invalid, so pci_subsystem_set indicates
++	 * whether a value has been written to these fields.
++	 */
++	unsigned short pci_subsystem_vendor;
++	unsigned short pci_subsystem_device;
++	bool pci_subsystem_set;
++#endif /* CONFIG_PCI */
++
+ 	char topology_shortname[32];
+ 
+ 	struct device *dev;
+diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h
+index 1d8dd58f83a50..163b7ec577e74 100644
+--- a/include/uapi/linux/netfilter/nf_tables.h
++++ b/include/uapi/linux/netfilter/nf_tables.h
+@@ -165,6 +165,7 @@ enum nft_hook_attributes {
+ enum nft_table_flags {
+ 	NFT_TABLE_F_DORMANT	= 0x1,
+ };
++#define NFT_TABLE_F_MASK       (NFT_TABLE_F_DORMANT)
+ 
+ /**
+  * enum nft_table_attributes - nf_tables table netlink attributes
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 800b5cc385af3..1fe05e70cc79f 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -10238,7 +10238,7 @@ static int io_uring_show_cred(struct seq_file *m, unsigned int id,
+ 
+ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
+ {
+-	struct io_sq_data *sq = NULL;
++	int sq_pid = -1, sq_cpu = -1;
+ 	bool has_lock;
+ 	int i;
+ 
+@@ -10251,13 +10251,19 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
+ 	has_lock = mutex_trylock(&ctx->uring_lock);
+ 
+ 	if (has_lock && (ctx->flags & IORING_SETUP_SQPOLL)) {
+-		sq = ctx->sq_data;
+-		if (!sq->thread)
+-			sq = NULL;
++		struct io_sq_data *sq = ctx->sq_data;
++
++		if (mutex_trylock(&sq->lock)) {
++			if (sq->thread) {
++				sq_pid = task_pid_nr(sq->thread);
++				sq_cpu = task_cpu(sq->thread);
++			}
++			mutex_unlock(&sq->lock);
++		}
+ 	}
+ 
+-	seq_printf(m, "SqThread:\t%d\n", sq ? task_pid_nr(sq->thread) : -1);
+-	seq_printf(m, "SqThreadCpu:\t%d\n", sq ? task_cpu(sq->thread) : -1);
++	seq_printf(m, "SqThread:\t%d\n", sq_pid);
++	seq_printf(m, "SqThreadCpu:\t%d\n", sq_cpu);
+ 	seq_printf(m, "UserFiles:\t%u\n", ctx->nr_user_files);
+ 	for (i = 0; has_lock && i < ctx->nr_user_files; i++) {
+ 		struct file *f = io_file_from_index(ctx, i);
+diff --git a/kernel/audit_watch.c b/kernel/audit_watch.c
+index 2acf7ca491542..edbeffee64b8e 100644
+--- a/kernel/audit_watch.c
++++ b/kernel/audit_watch.c
+@@ -527,11 +527,18 @@ int audit_exe_compare(struct task_struct *tsk, struct audit_fsnotify_mark *mark)
+ 	unsigned long ino;
+ 	dev_t dev;
+ 
+-	exe_file = get_task_exe_file(tsk);
++	/* only do exe filtering if we are recording @current events/records */
++	if (tsk != current)
++		return 0;
++
++	if (!current->mm)
++		return 0;
++	exe_file = get_mm_exe_file(current->mm);
+ 	if (!exe_file)
+ 		return 0;
+ 	ino = file_inode(exe_file)->i_ino;
+ 	dev = file_inode(exe_file)->i_sb->s_dev;
+ 	fput(exe_file);
++
+ 	return audit_mark_compare(mark, ino, dev);
+ }
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index d3f6a070875cb..33ea6ab12f47c 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -602,7 +602,11 @@ static __always_inline int bpf_tree_comp(void *key, struct latch_tree_node *n)
+ 
+ 	if (val < ksym->start)
+ 		return -1;
+-	if (val >= ksym->end)
++	/* Ensure that we detect return addresses as part of the program, when
++	 * the final instruction is a call for a program part of the stack
++	 * trace. Therefore, do val > ksym->end instead of val >= ksym->end.
++	 */
++	if (val > ksym->end)
+ 		return  1;
+ 
+ 	return 0;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 8f1e43df8c5fa..45c50ee9b0370 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1841,7 +1841,12 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 	if (class == BPF_ALU || class == BPF_ALU64) {
+ 		if (!(*reg_mask & dreg))
+ 			return 0;
+-		if (opcode == BPF_MOV) {
++		if (opcode == BPF_END || opcode == BPF_NEG) {
++			/* sreg is reserved and unused
++			 * dreg still need precision before this insn
++			 */
++			return 0;
++		} else if (opcode == BPF_MOV) {
+ 			if (BPF_SRC(insn->code) == BPF_X) {
+ 				/* dreg = sreg
+ 				 * dreg needs precision after this insn
+@@ -2534,7 +2539,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 		   insn->imm != 0 && env->bpf_capable) {
+ 		struct bpf_reg_state fake_reg = {};
+ 
+-		__mark_reg_known(&fake_reg, (u32)insn->imm);
++		__mark_reg_known(&fake_reg, insn->imm);
+ 		fake_reg.type = SCALAR_VALUE;
+ 		save_register_state(state, spi, &fake_reg, size);
+ 	} else if (reg && is_spillable_regtype(reg->type)) {
+diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
+index 0f31b22abe8d9..ef54254a5dd13 100644
+--- a/kernel/debug/debug_core.c
++++ b/kernel/debug/debug_core.c
+@@ -1022,6 +1022,9 @@ void kgdb_panic(const char *msg)
+ 	if (panic_timeout)
+ 		return;
+ 
++	debug_locks_off();
++	console_flush_on_panic(CONSOLE_FLUSH_PENDING);
++
+ 	if (dbg_kdb_mode)
+ 		kdb_printf("PANIC: %s\n", msg);
+ 
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 4032cd4750001..01351e7e25435 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -691,6 +691,12 @@ int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
+ 		max_order--;
+ 	}
+ 
++	/*
++	 * kcalloc_node() is unable to allocate buffer if the size is larger
++	 * than: PAGE_SIZE << MAX_ORDER; directly bail out in this case.
++	 */
++	if (get_order((unsigned long)nr_pages * sizeof(void *)) > MAX_ORDER)
++		return -ENOMEM;
+ 	rb->aux_pages = kcalloc_node(nr_pages, sizeof(void *), GFP_KERNEL,
+ 				     node);
+ 	if (!rb->aux_pages)
+diff --git a/kernel/irq/generic-chip.c b/kernel/irq/generic-chip.c
+index e2999a070a99a..4195e7ad1ff2f 100644
+--- a/kernel/irq/generic-chip.c
++++ b/kernel/irq/generic-chip.c
+@@ -537,21 +537,34 @@ EXPORT_SYMBOL_GPL(irq_setup_alt_chip);
+ void irq_remove_generic_chip(struct irq_chip_generic *gc, u32 msk,
+ 			     unsigned int clr, unsigned int set)
+ {
+-	unsigned int i = gc->irq_base;
++	unsigned int i, virq;
+ 
+ 	raw_spin_lock(&gc_lock);
+ 	list_del(&gc->list);
+ 	raw_spin_unlock(&gc_lock);
+ 
+-	for (; msk; msk >>= 1, i++) {
++	for (i = 0; msk; msk >>= 1, i++) {
+ 		if (!(msk & 0x01))
+ 			continue;
+ 
++		/*
++		 * Interrupt domain based chips store the base hardware
++		 * interrupt number in gc::irq_base. Otherwise gc::irq_base
++		 * contains the base Linux interrupt number.
++		 */
++		if (gc->domain) {
++			virq = irq_find_mapping(gc->domain, gc->irq_base + i);
++			if (!virq)
++				continue;
++		} else {
++			virq = gc->irq_base + i;
++		}
++
+ 		/* Remove handler first. That will mask the irq line */
+-		irq_set_handler(i, NULL);
+-		irq_set_chip(i, &no_irq_chip);
+-		irq_set_chip_data(i, NULL);
+-		irq_modify_status(i, clr, set);
++		irq_set_handler(virq, NULL);
++		irq_set_chip(virq, &no_irq_chip);
++		irq_set_chip_data(virq, NULL);
++		irq_modify_status(virq, clr, set);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(irq_remove_generic_chip);
+diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
+index 3e82f449b4ff7..da36997d8742c 100644
+--- a/kernel/locking/test-ww_mutex.c
++++ b/kernel/locking/test-ww_mutex.c
+@@ -426,7 +426,6 @@ retry:
+ 	} while (!time_after(jiffies, stress->timeout));
+ 
+ 	kfree(order);
+-	kfree(stress);
+ }
+ 
+ struct reorder_lock {
+@@ -491,7 +490,6 @@ out:
+ 	list_for_each_entry_safe(ll, ln, &locks, link)
+ 		kfree(ll);
+ 	kfree(order);
+-	kfree(stress);
+ }
+ 
+ static void stress_one_work(struct work_struct *work)
+@@ -512,8 +510,6 @@ static void stress_one_work(struct work_struct *work)
+ 			break;
+ 		}
+ 	} while (!time_after(jiffies, stress->timeout));
+-
+-	kfree(stress);
+ }
+ 
+ #define STRESS_INORDER BIT(0)
+@@ -524,15 +520,24 @@ static void stress_one_work(struct work_struct *work)
+ static int stress(int nlocks, int nthreads, unsigned int flags)
+ {
+ 	struct ww_mutex *locks;
+-	int n;
++	struct stress *stress_array;
++	int n, count;
+ 
+ 	locks = kmalloc_array(nlocks, sizeof(*locks), GFP_KERNEL);
+ 	if (!locks)
+ 		return -ENOMEM;
+ 
++	stress_array = kmalloc_array(nthreads, sizeof(*stress_array),
++				     GFP_KERNEL);
++	if (!stress_array) {
++		kfree(locks);
++		return -ENOMEM;
++	}
++
+ 	for (n = 0; n < nlocks; n++)
+ 		ww_mutex_init(&locks[n], &ww_class);
+ 
++	count = 0;
+ 	for (n = 0; nthreads; n++) {
+ 		struct stress *stress;
+ 		void (*fn)(struct work_struct *work);
+@@ -556,9 +561,7 @@ static int stress(int nlocks, int nthreads, unsigned int flags)
+ 		if (!fn)
+ 			continue;
+ 
+-		stress = kmalloc(sizeof(*stress), GFP_KERNEL);
+-		if (!stress)
+-			break;
++		stress = &stress_array[count++];
+ 
+ 		INIT_WORK(&stress->work, fn);
+ 		stress->locks = locks;
+@@ -573,6 +576,7 @@ static int stress(int nlocks, int nthreads, unsigned int flags)
+ 
+ 	for (n = 0; n < nlocks; n++)
+ 		ww_mutex_destroy(&locks[n]);
++	kfree(stress_array);
+ 	kfree(locks);
+ 
+ 	return 0;
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 7d500219f96bd..fdcd78302cd72 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -207,7 +207,7 @@ int padata_do_parallel(struct padata_shell *ps,
+ 		*cb_cpu = cpu;
+ 	}
+ 
+-	err =  -EBUSY;
++	err = -EBUSY;
+ 	if ((pinst->flags & PADATA_RESET))
+ 		goto out;
+ 
+diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
+index f5dccd445d360..c1bdaae07f4c7 100644
+--- a/kernel/power/snapshot.c
++++ b/kernel/power/snapshot.c
+@@ -2372,8 +2372,9 @@ static void *get_highmem_page_buffer(struct page *page,
+ 		pbe->copy_page = tmp;
+ 	} else {
+ 		/* Copy of the page will be stored in normal memory */
+-		kaddr = safe_pages_list;
+-		safe_pages_list = safe_pages_list->next;
++		kaddr = __get_safe_page(ca->gfp_mask);
++		if (!kaddr)
++			return ERR_PTR(-ENOMEM);
+ 		pbe->copy_page = virt_to_page(kaddr);
+ 	}
+ 	pbe->next = highmem_pblist;
+@@ -2553,8 +2554,9 @@ static void *get_buffer(struct memory_bitmap *bm, struct chain_allocator *ca)
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 	pbe->orig_address = page_address(page);
+-	pbe->address = safe_pages_list;
+-	safe_pages_list = safe_pages_list->next;
++	pbe->address = __get_safe_page(ca->gfp_mask);
++	if (!pbe->address)
++		return ERR_PTR(-ENOMEM);
+ 	pbe->next = restore_pblist;
+ 	restore_pblist = pbe;
+ 	return pbe->address;
+@@ -2585,8 +2587,6 @@ int snapshot_write_next(struct snapshot_handle *handle)
+ 	if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages)
+ 		return 0;
+ 
+-	handle->sync_read = 1;
+-
+ 	if (!handle->cur) {
+ 		if (!buffer)
+ 			/* This makes the buffer be freed by swsusp_free() */
+@@ -2622,7 +2622,6 @@ int snapshot_write_next(struct snapshot_handle *handle)
+ 			memory_bm_position_reset(&orig_bm);
+ 			restore_pblist = NULL;
+ 			handle->buffer = get_buffer(&orig_bm, &ca);
+-			handle->sync_read = 0;
+ 			if (IS_ERR(handle->buffer))
+ 				return PTR_ERR(handle->buffer);
+ 		}
+@@ -2632,9 +2631,8 @@ int snapshot_write_next(struct snapshot_handle *handle)
+ 		handle->buffer = get_buffer(&orig_bm, &ca);
+ 		if (IS_ERR(handle->buffer))
+ 			return PTR_ERR(handle->buffer);
+-		if (handle->buffer != buffer)
+-			handle->sync_read = 0;
+ 	}
++	handle->sync_read = (handle->buffer == buffer);
+ 	handle->cur++;
+ 	return PAGE_SIZE;
+ }
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index eec8e2f7537eb..06bfe61d3cd38 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -31,6 +31,7 @@
+ #include <linux/bitops.h>
+ #include <linux/export.h>
+ #include <linux/completion.h>
++#include <linux/kmemleak.h>
+ #include <linux/moduleparam.h>
+ #include <linux/percpu.h>
+ #include <linux/notifier.h>
+@@ -3547,6 +3548,14 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
+ 
+ 	WRITE_ONCE(krcp->count, krcp->count + 1);
+ 
++	/*
++	 * The kvfree_rcu() caller considers the pointer freed at this point
++	 * and likely removes any references to it. Since the actual slab
++	 * freeing (and kmemleak_free()) is deferred, tell kmemleak to ignore
++	 * this object (no scanning or false positives reporting).
++	 */
++	kmemleak_ignore(ptr);
++
+ 	// Set timer to drain after KFREE_DRAIN_JIFFIES.
+ 	if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
+ 	    !krcp->monitor_todo) {
+diff --git a/kernel/reboot.c b/kernel/reboot.c
+index af6f23d8bea16..e297b35fc6521 100644
+--- a/kernel/reboot.c
++++ b/kernel/reboot.c
+@@ -64,6 +64,7 @@ EXPORT_SYMBOL_GPL(pm_power_off_prepare);
+ void emergency_restart(void)
+ {
+ 	kmsg_dump(KMSG_DUMP_EMERG);
++	system_state = SYSTEM_RESTART;
+ 	machine_emergency_restart();
+ }
+ EXPORT_SYMBOL_GPL(emergency_restart);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 196eec0423ff2..89a8bb8e24df2 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -4572,6 +4572,20 @@ int tracing_open_file_tr(struct inode *inode, struct file *filp)
+ 	if (ret)
+ 		return ret;
+ 
++	mutex_lock(&event_mutex);
++
++	/* Fail if the file is marked for removal */
++	if (file->flags & EVENT_FILE_FL_FREED) {
++		trace_array_put(file->tr);
++		ret = -ENODEV;
++	} else {
++		event_file_get(file);
++	}
++
++	mutex_unlock(&event_mutex);
++	if (ret)
++		return ret;
++
+ 	filp->private_data = inode->i_private;
+ 
+ 	return 0;
+@@ -4582,6 +4596,7 @@ int tracing_release_file_tr(struct inode *inode, struct file *filp)
+ 	struct trace_event_file *file = inode->i_private;
+ 
+ 	trace_array_put(file->tr);
++	event_file_put(file);
+ 
+ 	return 0;
+ }
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 7fa00b83dfa4b..7c90872f2435d 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -1784,6 +1784,9 @@ extern int register_event_command(struct event_command *cmd);
+ extern int unregister_event_command(struct event_command *cmd);
+ extern int register_trigger_hist_enable_disable_cmds(void);
+ 
++extern void event_file_get(struct trace_event_file *file);
++extern void event_file_put(struct trace_event_file *file);
++
+ /**
+  * struct event_trigger_ops - callbacks for trace event triggers
+  *
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index c7f0a02442e50..4b5a8d7275be7 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -746,26 +746,38 @@ static void remove_subsystem(struct trace_subsystem_dir *dir)
+ 	}
+ }
+ 
+-static void remove_event_file_dir(struct trace_event_file *file)
++void event_file_get(struct trace_event_file *file)
+ {
+-	struct dentry *dir = file->dir;
+-	struct dentry *child;
++	atomic_inc(&file->ref);
++}
+ 
+-	if (dir) {
+-		spin_lock(&dir->d_lock);	/* probably unneeded */
+-		list_for_each_entry(child, &dir->d_subdirs, d_child) {
+-			if (d_really_is_positive(child))	/* probably unneeded */
+-				d_inode(child)->i_private = NULL;
+-		}
+-		spin_unlock(&dir->d_lock);
++void event_file_put(struct trace_event_file *file)
++{
++	if (WARN_ON_ONCE(!atomic_read(&file->ref))) {
++		if (file->flags & EVENT_FILE_FL_FREED)
++			kmem_cache_free(file_cachep, file);
++		return;
++	}
+ 
+-		tracefs_remove(dir);
++	if (atomic_dec_and_test(&file->ref)) {
++		/* Count should only go to zero when it is freed */
++		if (WARN_ON_ONCE(!(file->flags & EVENT_FILE_FL_FREED)))
++			return;
++		kmem_cache_free(file_cachep, file);
+ 	}
++}
++
++static void remove_event_file_dir(struct trace_event_file *file)
++{
++	struct dentry *dir = file->dir;
++
++	tracefs_remove(dir);
+ 
+ 	list_del(&file->list);
+ 	remove_subsystem(file->system);
+ 	free_event_filter(file->filter);
+-	kmem_cache_free(file_cachep, file);
++	file->flags |= EVENT_FILE_FL_FREED;
++	event_file_put(file);
+ }
+ 
+ /*
+@@ -1138,7 +1150,7 @@ event_enable_read(struct file *filp, char __user *ubuf, size_t cnt,
+ 		flags = file->flags;
+ 	mutex_unlock(&event_mutex);
+ 
+-	if (!file)
++	if (!file || flags & EVENT_FILE_FL_FREED)
+ 		return -ENODEV;
+ 
+ 	if (flags & EVENT_FILE_FL_ENABLED &&
+@@ -1176,7 +1188,7 @@ event_enable_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ 		ret = -ENODEV;
+ 		mutex_lock(&event_mutex);
+ 		file = event_file_data(filp);
+-		if (likely(file))
++		if (likely(file && !(file->flags & EVENT_FILE_FL_FREED)))
+ 			ret = ftrace_event_enable_disable(file, val);
+ 		mutex_unlock(&event_mutex);
+ 		break;
+@@ -1445,7 +1457,7 @@ event_filter_read(struct file *filp, char __user *ubuf, size_t cnt,
+ 
+ 	mutex_lock(&event_mutex);
+ 	file = event_file_data(filp);
+-	if (file)
++	if (file && !(file->flags & EVENT_FILE_FL_FREED))
+ 		print_event_filter(file, s);
+ 	mutex_unlock(&event_mutex);
+ 
+@@ -2482,6 +2494,7 @@ trace_create_new_event(struct trace_event_call *call,
+ 	atomic_set(&file->tm_ref, 0);
+ 	INIT_LIST_HEAD(&file->triggers);
+ 	list_add(&file->list, &tr->events);
++	event_file_get(file);
+ 
+ 	return file;
+ }
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index a255ffbe342f3..c1db5b62d8114 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -1893,6 +1893,9 @@ int apply_event_filter(struct trace_event_file *file, char *filter_string)
+ 	struct event_filter *filter = NULL;
+ 	int err;
+ 
++	if (file->flags & EVENT_FILE_FL_FREED)
++		return -ENODEV;
++
+ 	if (!strcmp(strstrip(filter_string), "0")) {
+ 		filter_disable(file);
+ 		filter = event_filter(file);
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index ec34d9f2eab2d..78a4603e14cdf 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -176,6 +176,13 @@ static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts);
+ static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
+ static unsigned long soft_lockup_nmi_warn;
+ 
++static int __init softlockup_panic_setup(char *str)
++{
++	softlockup_panic = simple_strtoul(str, NULL, 0);
++	return 1;
++}
++__setup("softlockup_panic=", softlockup_panic_setup);
++
+ static int __init nowatchdog_setup(char *str)
+ {
+ 	watchdog_user_enabled = 0;
+diff --git a/mm/cma.c b/mm/cma.c
+index 7f415d7cda9f0..3c3f1436f6436 100644
+--- a/mm/cma.c
++++ b/mm/cma.c
+@@ -482,7 +482,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
+ 	 */
+ 	if (page) {
+ 		for (i = 0; i < count; i++)
+-			page_kasan_tag_reset(page + i);
++			page_kasan_tag_reset(nth_page(page, i));
+ 	}
+ 
+ 	if (ret && !no_warn) {
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 751e3670d7b0c..ddc8ed096deca 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -2892,7 +2892,8 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg)
+  * Moreover, it should not come from DMA buffer and is not readily
+  * reclaimable. So those GFP bits should be masked off.
+  */
+-#define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT)
++#define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_RECLAIMABLE | \
++				 __GFP_ACCOUNT | __GFP_NOFAIL)
+ 
+ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
+ 				 gfp_t gfp)
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 5286945470b92..553b0705dce8e 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1263,7 +1263,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
+ 		head = compound_head(page);
+ 		if (page_huge_active(head))
+ 			goto found;
+-		skip = compound_nr(head) - (page - head);
++		skip = compound_nr(head) - (pfn - page_to_pfn(head));
+ 		pfn += skip - 1;
+ 	}
+ 	return -ENOENT;
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index e070a0b8e5ca3..63f4d2067059e 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -823,14 +823,21 @@ static int p9_fd_open(struct p9_client *client, int rfd, int wfd)
+ 		goto out_free_ts;
+ 	if (!(ts->rd->f_mode & FMODE_READ))
+ 		goto out_put_rd;
+-	/* prevent workers from hanging on IO when fd is a pipe */
+-	ts->rd->f_flags |= O_NONBLOCK;
++	/* Prevent workers from hanging on IO when fd is a pipe.
++	 * It's technically possible for userspace or concurrent mounts to
++	 * modify this flag concurrently, which will likely result in a
++	 * broken filesystem. However, just having bad flags here should
++	 * not crash the kernel or cause any other sort of bug, so mark this
++	 * particular data race as intentional so that tooling (like KCSAN)
++	 * can allow it and detect further problems.
++	 */
++	data_race(ts->rd->f_flags |= O_NONBLOCK);
+ 	ts->wr = fget(wfd);
+ 	if (!ts->wr)
+ 		goto out_put_rd;
+ 	if (!(ts->wr->f_mode & FMODE_WRITE))
+ 		goto out_put_wr;
+-	ts->wr->f_flags |= O_NONBLOCK;
++	data_race(ts->wr->f_flags |= O_NONBLOCK);
+ 
+ 	client->trans = ts;
+ 	client->status = Connected;
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index a9f6089a2ae2a..74721c3e49b34 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -135,13 +135,11 @@ static void hci_conn_cleanup(struct hci_conn *conn)
+ 			hdev->notify(hdev, HCI_NOTIFY_CONN_DEL);
+ 	}
+ 
+-	hci_conn_del_sysfs(conn);
+-
+ 	debugfs_remove_recursive(conn->debugfs);
+ 
+-	hci_dev_put(hdev);
++	hci_conn_del_sysfs(conn);
+ 
+-	hci_conn_put(conn);
++	hci_dev_put(hdev);
+ }
+ 
+ static void le_scan_cleanup(struct work_struct *work)
+diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c
+index ccd2c377bf83c..266112c960ee8 100644
+--- a/net/bluetooth/hci_sysfs.c
++++ b/net/bluetooth/hci_sysfs.c
+@@ -33,7 +33,7 @@ void hci_conn_init_sysfs(struct hci_conn *conn)
+ {
+ 	struct hci_dev *hdev = conn->hdev;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(hdev, "conn %p", conn);
+ 
+ 	conn->dev.type = &bt_link;
+ 	conn->dev.class = bt_class;
+@@ -46,27 +46,30 @@ void hci_conn_add_sysfs(struct hci_conn *conn)
+ {
+ 	struct hci_dev *hdev = conn->hdev;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(hdev, "conn %p", conn);
+ 
+ 	if (device_is_registered(&conn->dev))
+ 		return;
+ 
+ 	dev_set_name(&conn->dev, "%s:%d", hdev->name, conn->handle);
+ 
+-	if (device_add(&conn->dev) < 0) {
++	if (device_add(&conn->dev) < 0)
+ 		bt_dev_err(hdev, "failed to register connection device");
+-		return;
+-	}
+-
+-	hci_dev_hold(hdev);
+ }
+ 
+ void hci_conn_del_sysfs(struct hci_conn *conn)
+ {
+ 	struct hci_dev *hdev = conn->hdev;
+ 
+-	if (!device_is_registered(&conn->dev))
++	bt_dev_dbg(hdev, "conn %p", conn);
++
++	if (!device_is_registered(&conn->dev)) {
++		/* If device_add() has *not* succeeded, use *only* put_device()
++		 * to drop the reference count.
++		 */
++		put_device(&conn->dev);
+ 		return;
++	}
+ 
+ 	while (1) {
+ 		struct device *dev;
+@@ -78,9 +81,7 @@ void hci_conn_del_sysfs(struct hci_conn *conn)
+ 		put_device(dev);
+ 	}
+ 
+-	device_del(&conn->dev);
+-
+-	hci_dev_put(hdev);
++	device_unregister(&conn->dev);
+ }
+ 
+ static void bt_host_release(struct device *dev)
+diff --git a/net/bridge/netfilter/nf_conntrack_bridge.c b/net/bridge/netfilter/nf_conntrack_bridge.c
+index fdbed31585553..d14b2dbbd1dfb 100644
+--- a/net/bridge/netfilter/nf_conntrack_bridge.c
++++ b/net/bridge/netfilter/nf_conntrack_bridge.c
+@@ -36,7 +36,7 @@ static int nf_br_ip_fragment(struct net *net, struct sock *sk,
+ 	ktime_t tstamp = skb->tstamp;
+ 	struct ip_frag_state state;
+ 	struct iphdr *iph;
+-	int err;
++	int err = 0;
+ 
+ 	/* for offloaded checksums cleanup checksum before fragmentation */
+ 	if (skb->ip_summed == CHECKSUM_PARTIAL &&
+diff --git a/net/core/sock.c b/net/core/sock.c
+index fcb998dc2dc68..a069b5476df46 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -532,7 +532,7 @@ struct dst_entry *__sk_dst_check(struct sock *sk, u32 cookie)
+ 
+ 	if (dst && dst->obsolete && dst->ops->check(dst, cookie) == NULL) {
+ 		sk_tx_queue_clear(sk);
+-		sk->sk_dst_pending_confirm = 0;
++		WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
+ 		RCU_INIT_POINTER(sk->sk_dst_cache, NULL);
+ 		dst_release(dst);
+ 		return NULL;
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 4df287885dd75..f8ad8465f76cb 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1320,7 +1320,7 @@ static int __tcp_transmit_skb(struct sock *sk, struct sk_buff *skb,
+ 	skb_set_hash_from_sk(skb, sk);
+ 	refcount_add(skb->truesize, &sk->sk_wmem_alloc);
+ 
+-	skb_set_dst_pending_confirm(skb, sk->sk_dst_pending_confirm);
++	skb_set_dst_pending_confirm(skb, READ_ONCE(sk->sk_dst_pending_confirm));
+ 
+ 	/* Build TCP header and checksum it. */
+ 	th = (struct tcphdr *)skb->data;
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index c6a7f1c99abc5..45bb6f2755987 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -2726,6 +2726,10 @@ static int ieee80211_get_tx_power(struct wiphy *wiphy,
+ 	else
+ 		*dbm = sdata->vif.bss_conf.txpower;
+ 
++	/* INT_MIN indicates no power level was set yet */
++	if (*dbm == INT_MIN)
++		return -EINVAL;
++
+ 	return 0;
+ }
+ 
+diff --git a/net/ncsi/ncsi-aen.c b/net/ncsi/ncsi-aen.c
+index f8854bff286cb..62fb1031763d1 100644
+--- a/net/ncsi/ncsi-aen.c
++++ b/net/ncsi/ncsi-aen.c
+@@ -89,11 +89,6 @@ static int ncsi_aen_handler_lsc(struct ncsi_dev_priv *ndp,
+ 	if ((had_link == has_link) || chained)
+ 		return 0;
+ 
+-	if (had_link)
+-		netif_carrier_off(ndp->ndev.dev);
+-	else
+-		netif_carrier_on(ndp->ndev.dev);
+-
+ 	if (!ndp->multi_package && !nc->package->multi_channel) {
+ 		if (had_link) {
+ 			ndp->flags |= NCSI_DEV_RESHUFFLE;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 78b268bd7f012..dfb347f8d3adc 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -863,7 +863,8 @@ static int nf_tables_fill_table_info(struct sk_buff *skb, struct net *net,
+ 		goto nla_put_failure;
+ 
+ 	if (nla_put_string(skb, NFTA_TABLE_NAME, table->name) ||
+-	    nla_put_be32(skb, NFTA_TABLE_FLAGS, htonl(table->flags)) ||
++	    nla_put_be32(skb, NFTA_TABLE_FLAGS,
++			 htonl(table->flags & NFT_TABLE_F_MASK)) ||
+ 	    nla_put_be32(skb, NFTA_TABLE_USE, htonl(table->use)) ||
+ 	    nla_put_be64(skb, NFTA_TABLE_HANDLE, cpu_to_be64(table->handle),
+ 			 NFTA_TABLE_PAD))
+@@ -1071,14 +1072,22 @@ err_register_hooks:
+ 
+ static void nf_tables_table_disable(struct net *net, struct nft_table *table)
+ {
++	table->flags &= ~NFT_TABLE_F_DORMANT;
+ 	nft_table_disable(net, table, 0);
++	table->flags |= NFT_TABLE_F_DORMANT;
+ }
+ 
++#define __NFT_TABLE_F_INTERNAL		(NFT_TABLE_F_MASK + 1)
++#define __NFT_TABLE_F_WAS_DORMANT	(__NFT_TABLE_F_INTERNAL << 0)
++#define __NFT_TABLE_F_WAS_AWAKEN	(__NFT_TABLE_F_INTERNAL << 1)
++#define __NFT_TABLE_F_UPDATE		(__NFT_TABLE_F_WAS_DORMANT | \
++					 __NFT_TABLE_F_WAS_AWAKEN)
++
+ static int nf_tables_updtable(struct nft_ctx *ctx)
+ {
+ 	struct nft_trans *trans;
+ 	u32 flags;
+-	int ret = 0;
++	int ret;
+ 
+ 	if (!ctx->nla[NFTA_TABLE_FLAGS])
+ 		return 0;
+@@ -1090,6 +1099,10 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
+ 	if (flags == ctx->table->flags)
+ 		return 0;
+ 
++	/* No dormant off/on/off/on games in single transaction */
++	if (ctx->table->flags & __NFT_TABLE_F_UPDATE)
++		return -EINVAL;
++
+ 	trans = nft_trans_alloc(ctx, NFT_MSG_NEWTABLE,
+ 				sizeof(struct nft_trans_table));
+ 	if (trans == NULL)
+@@ -1097,23 +1110,27 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
+ 
+ 	if ((flags & NFT_TABLE_F_DORMANT) &&
+ 	    !(ctx->table->flags & NFT_TABLE_F_DORMANT)) {
+-		nft_trans_table_enable(trans) = false;
++		ctx->table->flags |= NFT_TABLE_F_DORMANT;
++		if (!(ctx->table->flags & __NFT_TABLE_F_UPDATE))
++			ctx->table->flags |= __NFT_TABLE_F_WAS_AWAKEN;
+ 	} else if (!(flags & NFT_TABLE_F_DORMANT) &&
+ 		   ctx->table->flags & NFT_TABLE_F_DORMANT) {
+ 		ctx->table->flags &= ~NFT_TABLE_F_DORMANT;
+-		ret = nf_tables_table_enable(ctx->net, ctx->table);
+-		if (ret >= 0)
+-			nft_trans_table_enable(trans) = true;
+-		else
+-			ctx->table->flags |= NFT_TABLE_F_DORMANT;
++		if (!(ctx->table->flags & __NFT_TABLE_F_UPDATE)) {
++			ret = nf_tables_table_enable(ctx->net, ctx->table);
++			if (ret < 0)
++				goto err_register_hooks;
++
++			ctx->table->flags |= __NFT_TABLE_F_WAS_DORMANT;
++		}
+ 	}
+-	if (ret < 0)
+-		goto err;
+ 
+ 	nft_trans_table_update(trans) = true;
+ 	nft_trans_commit_list_add_tail(ctx->net, trans);
++
+ 	return 0;
+-err:
++
++err_register_hooks:
+ 	nft_trans_destroy(trans);
+ 	return ret;
+ }
+@@ -8475,11 +8492,14 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 		switch (trans->msg_type) {
+ 		case NFT_MSG_NEWTABLE:
+ 			if (nft_trans_table_update(trans)) {
+-				if (!nft_trans_table_enable(trans)) {
+-					nf_tables_table_disable(net,
+-								trans->ctx.table);
+-					trans->ctx.table->flags |= NFT_TABLE_F_DORMANT;
++				if (!(trans->ctx.table->flags & __NFT_TABLE_F_UPDATE)) {
++					nft_trans_destroy(trans);
++					break;
+ 				}
++				if (trans->ctx.table->flags & NFT_TABLE_F_DORMANT)
++					nf_tables_table_disable(net, trans->ctx.table);
++
++				trans->ctx.table->flags &= ~__NFT_TABLE_F_UPDATE;
+ 			} else {
+ 				nft_clear(net, trans->ctx.table);
+ 			}
+@@ -8728,11 +8748,17 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 		switch (trans->msg_type) {
+ 		case NFT_MSG_NEWTABLE:
+ 			if (nft_trans_table_update(trans)) {
+-				if (nft_trans_table_enable(trans)) {
+-					nf_tables_table_disable(net,
+-								trans->ctx.table);
++				if (!(trans->ctx.table->flags & __NFT_TABLE_F_UPDATE)) {
++					nft_trans_destroy(trans);
++					break;
++				}
++				if (trans->ctx.table->flags & __NFT_TABLE_F_WAS_DORMANT) {
++					nf_tables_table_disable(net, trans->ctx.table);
+ 					trans->ctx.table->flags |= NFT_TABLE_F_DORMANT;
++				} else if (trans->ctx.table->flags & __NFT_TABLE_F_WAS_AWAKEN) {
++					trans->ctx.table->flags &= ~NFT_TABLE_F_DORMANT;
+ 				}
++				trans->ctx.table->flags &= ~__NFT_TABLE_F_UPDATE;
+ 				nft_trans_destroy(trans);
+ 			} else {
+ 				list_del_rcu(&trans->ctx.table->list);
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index c7c1754f87440..360a3bcd91fe1 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -109,7 +109,8 @@ static void rpc_clnt_remove_pipedir(struct rpc_clnt *clnt)
+ 
+ 	pipefs_sb = rpc_get_sb_net(net);
+ 	if (pipefs_sb) {
+-		__rpc_clnt_remove_pipedir(clnt);
++		if (pipefs_sb == clnt->pipefs_sb)
++			__rpc_clnt_remove_pipedir(clnt);
+ 		rpc_put_sb_net(net);
+ 	}
+ }
+@@ -149,6 +150,8 @@ rpc_setup_pipedir(struct super_block *pipefs_sb, struct rpc_clnt *clnt)
+ {
+ 	struct dentry *dentry;
+ 
++	clnt->pipefs_sb = pipefs_sb;
++
+ 	if (clnt->cl_program->pipe_dir_name != NULL) {
+ 		dentry = rpc_setup_pipedir_sb(pipefs_sb, clnt);
+ 		if (IS_ERR(dentry))
+@@ -2074,6 +2077,7 @@ call_connect_status(struct rpc_task *task)
+ 	task->tk_status = 0;
+ 	switch (status) {
+ 	case -ECONNREFUSED:
++	case -ECONNRESET:
+ 		/* A positive refusal suggests a rebind is needed. */
+ 		if (RPC_IS_SOFTCONN(task))
+ 			break;
+@@ -2082,7 +2086,6 @@ call_connect_status(struct rpc_task *task)
+ 			goto out_retry;
+ 		}
+ 		fallthrough;
+-	case -ECONNRESET:
+ 	case -ECONNABORTED:
+ 	case -ENETDOWN:
+ 	case -ENETUNREACH:
+diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
+index 38fe2ce8a5aa1..8fad45320e1b9 100644
+--- a/net/sunrpc/rpcb_clnt.c
++++ b/net/sunrpc/rpcb_clnt.c
+@@ -743,6 +743,10 @@ void rpcb_getport_async(struct rpc_task *task)
+ 
+ 	child = rpcb_call_async(rpcb_clnt, map, proc);
+ 	rpc_release_client(rpcb_clnt);
++	if (IS_ERR(child)) {
++		/* rpcb_map_release() has freed the arguments */
++		return;
++	}
+ 
+ 	xprt->stat.bind_count++;
+ 	rpc_put_task(child);
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index 2d62932b59878..6f0c09b6a1531 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -101,6 +101,7 @@ static int tipc_add_tlv(struct sk_buff *skb, u16 type, void *data, u16 len)
+ 		return -EMSGSIZE;
+ 
+ 	skb_put(skb, TLV_SPACE(len));
++	memset(tlv, 0, TLV_SPACE(len));
+ 	tlv->tlv_type = htons(type);
+ 	tlv->tlv_len = htons(TLV_LENGTH(len));
+ 	if (len && data)
+diff --git a/scripts/gcc-plugins/randomize_layout_plugin.c b/scripts/gcc-plugins/randomize_layout_plugin.c
+index bd29e4e7a5241..c7ff92b4189cb 100644
+--- a/scripts/gcc-plugins/randomize_layout_plugin.c
++++ b/scripts/gcc-plugins/randomize_layout_plugin.c
+@@ -209,12 +209,14 @@ static void partition_struct(tree *fields, unsigned long length, struct partitio
+ 
+ static void performance_shuffle(tree *newtree, unsigned long length, ranctx *prng_state)
+ {
+-	unsigned long i, x;
++	unsigned long i, x, index;
+ 	struct partition_group size_group[length];
+ 	unsigned long num_groups = 0;
+ 	unsigned long randnum;
+ 
+ 	partition_struct(newtree, length, (struct partition_group *)&size_group, &num_groups);
++
++	/* FIXME: this group shuffle is currently a no-op. */
+ 	for (i = num_groups - 1; i > 0; i--) {
+ 		struct partition_group tmp;
+ 		randnum = ranval(prng_state) % (i + 1);
+@@ -224,11 +226,14 @@ static void performance_shuffle(tree *newtree, unsigned long length, ranctx *prn
+ 	}
+ 
+ 	for (x = 0; x < num_groups; x++) {
+-		for (i = size_group[x].start + size_group[x].length - 1; i > size_group[x].start; i--) {
++		for (index = size_group[x].length - 1; index > 0; index--) {
+ 			tree tmp;
++
++			i = size_group[x].start + index;
+ 			if (DECL_BIT_FIELD_TYPE(newtree[i]))
+ 				continue;
+-			randnum = ranval(prng_state) % (i + 1);
++			randnum = ranval(prng_state) % (index + 1);
++			randnum += size_group[x].start;
+ 			// we could handle this case differently if desired
+ 			if (DECL_BIT_FIELD_TYPE(newtree[randnum]))
+ 				continue;
+diff --git a/security/integrity/ima/ima_api.c b/security/integrity/ima/ima_api.c
+index 4f39fb93f278a..70efd4aa1bd11 100644
+--- a/security/integrity/ima/ima_api.c
++++ b/security/integrity/ima/ima_api.c
+@@ -212,6 +212,7 @@ int ima_collect_measurement(struct integrity_iint_cache *iint,
+ {
+ 	const char *audit_cause = "failed";
+ 	struct inode *inode = file_inode(file);
++	struct inode *real_inode = d_real_inode(file_dentry(file));
+ 	const char *filename = file->f_path.dentry->d_name.name;
+ 	int result = 0;
+ 	int length;
+@@ -262,6 +263,10 @@ int ima_collect_measurement(struct integrity_iint_cache *iint,
+ 	iint->ima_hash = tmpbuf;
+ 	memcpy(iint->ima_hash, &hash, length);
+ 	iint->version = i_version;
++	if (real_inode != inode) {
++		iint->real_ino = real_inode->i_ino;
++		iint->real_dev = real_inode->i_sb->s_dev;
++	}
+ 
+ 	/* Possibly temporary failure due to type of read (eg. O_DIRECT) */
+ 	if (!result)
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index dd4b28b11ebe3..8e0fe0ce61646 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -26,6 +26,7 @@
+ #include <linux/ima.h>
+ #include <linux/iversion.h>
+ #include <linux/fs.h>
++#include <linux/iversion.h>
+ 
+ #include "ima.h"
+ 
+@@ -197,7 +198,7 @@ static int process_measurement(struct file *file, const struct cred *cred,
+ 			       u32 secid, char *buf, loff_t size, int mask,
+ 			       enum ima_hooks func)
+ {
+-	struct inode *inode = file_inode(file);
++	struct inode *backing_inode, *inode = file_inode(file);
+ 	struct integrity_iint_cache *iint = NULL;
+ 	struct ima_template_desc *template_desc = NULL;
+ 	char *pathbuf = NULL;
+@@ -271,6 +272,19 @@ static int process_measurement(struct file *file, const struct cred *cred,
+ 		iint->measured_pcrs = 0;
+ 	}
+ 
++	/* Detect and re-evaluate changes made to the backing file. */
++	backing_inode = d_real_inode(file_dentry(file));
++	if (backing_inode != inode &&
++	    (action & IMA_DO_MASK) && (iint->flags & IMA_DONE_MASK)) {
++		if (!IS_I_VERSION(backing_inode) ||
++		    backing_inode->i_sb->s_dev != iint->real_dev ||
++		    backing_inode->i_ino != iint->real_ino ||
++		    !inode_eq_iversion(backing_inode, iint->version)) {
++			iint->flags &= ~IMA_DONE_MASK;
++			iint->measured_pcrs = 0;
++		}
++	}
++
+ 	/* Determine if already appraised/measured based on bitmask
+ 	 * (IMA_MEASURE, IMA_MEASURED, IMA_XXXX_APPRAISE, IMA_XXXX_APPRAISED,
+ 	 *  IMA_AUDIT, IMA_AUDITED)
+diff --git a/security/integrity/integrity.h b/security/integrity/integrity.h
+index 413c803c52089..bc51a4e839897 100644
+--- a/security/integrity/integrity.h
++++ b/security/integrity/integrity.h
+@@ -131,6 +131,8 @@ struct integrity_iint_cache {
+ 	unsigned long flags;
+ 	unsigned long measured_pcrs;
+ 	unsigned long atomic_flags;
++	unsigned long real_ino;
++	dev_t real_dev;
+ 	enum integrity_status ima_file_status:4;
+ 	enum integrity_status ima_mmap_status:4;
+ 	enum integrity_status ima_bprm_status:4;
+diff --git a/sound/core/info.c b/sound/core/info.c
+index d6fb11c3250c4..ebae11b494294 100644
+--- a/sound/core/info.c
++++ b/sound/core/info.c
+@@ -57,7 +57,7 @@ struct snd_info_private_data {
+ };
+ 
+ static int snd_info_version_init(void);
+-static void snd_info_disconnect(struct snd_info_entry *entry);
++static void snd_info_clear_entries(struct snd_info_entry *entry);
+ 
+ /*
+ 
+@@ -570,11 +570,16 @@ void snd_info_card_disconnect(struct snd_card *card)
+ {
+ 	if (!card)
+ 		return;
+-	mutex_lock(&info_mutex);
++
+ 	proc_remove(card->proc_root_link);
+-	card->proc_root_link = NULL;
+ 	if (card->proc_root)
+-		snd_info_disconnect(card->proc_root);
++		proc_remove(card->proc_root->p);
++
++	mutex_lock(&info_mutex);
++	if (card->proc_root)
++		snd_info_clear_entries(card->proc_root);
++	card->proc_root_link = NULL;
++	card->proc_root = NULL;
+ 	mutex_unlock(&info_mutex);
+ }
+ 
+@@ -746,15 +751,14 @@ struct snd_info_entry *snd_info_create_card_entry(struct snd_card *card,
+ }
+ EXPORT_SYMBOL(snd_info_create_card_entry);
+ 
+-static void snd_info_disconnect(struct snd_info_entry *entry)
++static void snd_info_clear_entries(struct snd_info_entry *entry)
+ {
+ 	struct snd_info_entry *p;
+ 
+ 	if (!entry->p)
+ 		return;
+ 	list_for_each_entry(p, &entry->children, list)
+-		snd_info_disconnect(p);
+-	proc_remove(entry->p);
++		snd_info_clear_entries(p);
+ 	entry->p = NULL;
+ }
+ 
+@@ -771,8 +775,9 @@ void snd_info_free_entry(struct snd_info_entry * entry)
+ 	if (!entry)
+ 		return;
+ 	if (entry->p) {
++		proc_remove(entry->p);
+ 		mutex_lock(&info_mutex);
+-		snd_info_disconnect(entry);
++		snd_info_clear_entries(entry);
+ 		mutex_unlock(&info_mutex);
+ 	}
+ 
+diff --git a/sound/hda/hdac_stream.c b/sound/hda/hdac_stream.c
+index 1e0f61affd979..5570722458caf 100644
+--- a/sound/hda/hdac_stream.c
++++ b/sound/hda/hdac_stream.c
+@@ -320,8 +320,10 @@ struct hdac_stream *snd_hdac_stream_assign(struct hdac_bus *bus,
+ 	struct hdac_stream *res = NULL;
+ 
+ 	/* make a non-zero unique key for the substream */
+-	int key = (substream->pcm->device << 16) | (substream->number << 2) |
+-		(substream->stream + 1);
++	int key = (substream->number << 2) | (substream->stream + 1);
++
++	if (substream->pcm)
++		key |= (substream->pcm->device << 16);
+ 
+ 	spin_lock_irq(&bus->reg_lock);
+ 	list_for_each_entry(azx_dev, &bus->stream_list, list) {
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index adfab80b8189d..65f8dc65b9675 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9036,6 +9036,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x10a1, "ASUS UX391UA", ALC294_FIXUP_ASUS_SPK),
+ 	SND_PCI_QUIRK(0x1043, 0x10c0, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x10d0, "ASUS X540LA/X540LJ", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1043, 0x10d3, "ASUS K6500ZC", ALC294_FIXUP_ASUS_SPK),
+ 	SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x1043, 0x11c0, "ASUS X556UR", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x125e, "ASUS Q524UQK", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+@@ -9881,22 +9882,6 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x12, 0x90a60130},
+ 		{0x17, 0x90170110},
+ 		{0x21, 0x03211020}),
+-	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
+-		{0x14, 0x90170110},
+-		{0x21, 0x04211020}),
+-	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
+-		{0x14, 0x90170110},
+-		{0x21, 0x04211030}),
+-	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
+-		ALC295_STANDARD_PINS,
+-		{0x17, 0x21014020},
+-		{0x18, 0x21a19030}),
+-	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
+-		ALC295_STANDARD_PINS,
+-		{0x17, 0x21014040},
+-		{0x18, 0x21a19050}),
+-	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
+-		ALC295_STANDARD_PINS),
+ 	SND_HDA_PIN_QUIRK(0x10ec0298, 0x1028, "Dell", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		ALC298_STANDARD_PINS,
+ 		{0x17, 0x90170110}),
+@@ -9940,6 +9925,9 @@ static const struct snd_hda_pin_quirk alc269_fallback_pin_fixup_tbl[] = {
+ 	SND_HDA_PIN_QUIRK(0x10ec0289, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
+ 		{0x19, 0x40000000},
+ 		{0x1b, 0x40000000}),
++	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
++		{0x19, 0x40000000},
++		{0x1b, 0x40000000}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		{0x19, 0x40000000},
+ 		{0x1a, 0x40000000}),
+diff --git a/sound/soc/ti/omap-mcbsp.c b/sound/soc/ti/omap-mcbsp.c
+index 6025b30bbe77e..9a88992ac5f33 100644
+--- a/sound/soc/ti/omap-mcbsp.c
++++ b/sound/soc/ti/omap-mcbsp.c
+@@ -74,14 +74,16 @@ static int omap2_mcbsp_set_clks_src(struct omap_mcbsp *mcbsp, u8 fck_src_id)
+ 		return -EINVAL;
+ 	}
+ 
+-	pm_runtime_put_sync(mcbsp->dev);
++	if (mcbsp->active)
++		pm_runtime_put_sync(mcbsp->dev);
+ 
+ 	r = clk_set_parent(mcbsp->fclk, fck_src);
+ 	if (r)
+ 		dev_err(mcbsp->dev, "CLKS: could not clk_set_parent() to %s\n",
+ 			src);
+ 
+-	pm_runtime_get_sync(mcbsp->dev);
++	if (mcbsp->active)
++		pm_runtime_get_sync(mcbsp->dev);
+ 
+ 	clk_put(fck_src);
+ 
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index d33c9d427e573..9d4a249cc98bb 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -1995,7 +1995,7 @@ retry:
+ 	if ((DO_BIC(BIC_CPU_c6) || soft_c1_residency_display(BIC_CPU_c6)) && !do_knl_cstates) {
+ 		if (get_msr(cpu, MSR_CORE_C6_RESIDENCY, &c->c6))
+ 			return -7;
+-	} else if (do_knl_cstates || soft_c1_residency_display(BIC_CPU_c6)) {
++	} else if (do_knl_cstates && soft_c1_residency_display(BIC_CPU_c6)) {
+ 		if (get_msr(cpu, MSR_KNL_CORE_C6_RESIDENCY, &c->c6))
+ 			return -7;
+ 	}
+diff --git a/tools/testing/selftests/efivarfs/create-read.c b/tools/testing/selftests/efivarfs/create-read.c
+index 9674a19396a32..7bc7af4eb2c17 100644
+--- a/tools/testing/selftests/efivarfs/create-read.c
++++ b/tools/testing/selftests/efivarfs/create-read.c
+@@ -32,8 +32,10 @@ int main(int argc, char **argv)
+ 	rc = read(fd, buf, sizeof(buf));
+ 	if (rc != 0) {
+ 		fprintf(stderr, "Reading a new var should return EOF\n");
++		close(fd);
+ 		return EXIT_FAILURE;
+ 	}
+ 
++	close(fd);
+ 	return EXIT_SUCCESS;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-12-01 17:47 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-12-01 17:47 UTC (permalink / raw
  To: gentoo-commits

commit:     d85daa9ff86b2761cd1156c29b8f2edd6a802da5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Dec  1 17:46:45 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Dec  1 17:46:45 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d85daa9f

neighbour: Fix __randomize_layout crash in struct neighbour

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                           |  4 ++++
 2010_Fix_randomize_layout_crash_in_struct_neigh.patch | 11 +++++++++++
 2 files changed, 15 insertions(+)

diff --git a/0000_README b/0000_README
index 5b0c512c..52122716 100644
--- a/0000_README
+++ b/0000_README
@@ -863,6 +863,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2010_Fix_randomize_layout_crash_in_struct_neigh.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=45b3fae4675d
+Desc:   neighbour: Fix __randomize_layout crash in struct neighbour
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2010_Fix_randomize_layout_crash_in_struct_neigh.patch b/2010_Fix_randomize_layout_crash_in_struct_neigh.patch
new file mode 100644
index 00000000..8364902a
--- /dev/null
+++ b/2010_Fix_randomize_layout_crash_in_struct_neigh.patch
@@ -0,0 +1,11 @@
+--- a/include/net/neighbour.h	2023-12-01 12:42:53.249733734 -0500
++++ b/include/net/neighbour.h	2023-12-01 12:43:07.539739154 -0500
+@@ -157,7 +157,7 @@ struct neighbour {
+ 	struct list_head	gc_list;
+ 	struct rcu_head		rcu;
+ 	struct net_device	*dev;
+-	u8			primary_key[0];
++	u8			primary_key[];
+ } __randomize_layout;
+ 
+ struct neigh_ops {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-12-08 11:16 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-12-08 11:16 UTC (permalink / raw
  To: gentoo-commits

commit:     9397d201e000dc60f6df20b1a5d9d3231d10771d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Dec  8 11:16:01 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Dec  8 11:16:01 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9397d201

Linux patch 5.10.203

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1202_linux-5.10.203.patch | 4503 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4507 insertions(+)

diff --git a/0000_README b/0000_README
index 52122716..9fa95630 100644
--- a/0000_README
+++ b/0000_README
@@ -851,6 +851,10 @@ Patch:  1201_linux-5.10.202.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.202
 
+Patch:  1202_linux-5.10.203.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.203
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1202_linux-5.10.203.patch b/1202_linux-5.10.203.patch
new file mode 100644
index 00000000..d35ef548
--- /dev/null
+++ b/1202_linux-5.10.203.patch
@@ -0,0 +1,4503 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-usb b/Documentation/ABI/testing/sysfs-bus-usb
+index bf2c1968525f0..73eb23bc1f343 100644
+--- a/Documentation/ABI/testing/sysfs-bus-usb
++++ b/Documentation/ABI/testing/sysfs-bus-usb
+@@ -154,17 +154,6 @@ Description:
+ 		files hold a string value (enable or disable) indicating whether
+ 		or not USB3 hardware LPM U1 or U2 is enabled for the device.
+ 
+-What:		/sys/bus/usb/devices/.../removable
+-Date:		February 2012
+-Contact:	Matthew Garrett <mjg@redhat.com>
+-Description:
+-		Some information about whether a given USB device is
+-		physically fixed to the platform can be inferred from a
+-		combination of hub descriptor bits and platform-specific data
+-		such as ACPI. This file will read either "removable" or
+-		"fixed" if the information is available, and "unknown"
+-		otherwise.
+-
+ What:		/sys/bus/usb/devices/.../ltm_capable
+ Date:		July 2012
+ Contact:	Sarah Sharp <sarah.a.sharp@linux.intel.com>
+diff --git a/Documentation/ABI/testing/sysfs-devices-removable b/Documentation/ABI/testing/sysfs-devices-removable
+new file mode 100644
+index 0000000000000..acf7766e800bd
+--- /dev/null
++++ b/Documentation/ABI/testing/sysfs-devices-removable
+@@ -0,0 +1,17 @@
++What:		/sys/devices/.../removable
++Date:		May 2021
++Contact:	Rajat Jain <rajatxjain@gmail.com>
++Description:
++		Information about whether a given device can be removed from the
++		platform by the	user. This is determined by its subsystem in a
++		bus / platform-specific way. This attribute is only present for
++		devices that can support determining such information:
++
++		"removable": device can be removed from the platform by the user
++		"fixed":     device is fixed to the platform / cannot be removed
++			     by the user.
++		"unknown":   The information is unavailable / cannot be deduced.
++
++		Currently this is only supported by USB (which infers the
++		information from a combination of hub descriptor bits and
++		platform-specific data such as ACPI).
+diff --git a/Makefile b/Makefile
+index ed0e42c99c906..4cbeb198196b0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 202
++SUBLEVEL = 203
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
+index 3d25fd615250a..518c7557f78a2 100644
+--- a/arch/arm/xen/enlighten.c
++++ b/arch/arm/xen/enlighten.c
+@@ -351,7 +351,8 @@ static int __init xen_guest_init(void)
+ 	 * for secondary CPUs as they are brought up.
+ 	 * For uniformity we use VCPUOP_register_vcpu_info even on cpu0.
+ 	 */
+-	xen_vcpu_info = alloc_percpu(struct vcpu_info);
++	xen_vcpu_info = __alloc_percpu(sizeof(struct vcpu_info),
++				       1 << fls(sizeof(struct vcpu_info) - 1));
+ 	if (xen_vcpu_info == NULL)
+ 		return -ENOMEM;
+ 
+diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
+index 28c366d307e70..38a5be10a3dbc 100644
+--- a/arch/mips/kvm/mmu.c
++++ b/arch/mips/kvm/mmu.c
+@@ -667,7 +667,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
+ 	gfn_t gfn = gpa >> PAGE_SHIFT;
+ 	int srcu_idx, err;
+ 	kvm_pfn_t pfn;
+-	pte_t *ptep, entry, old_pte;
++	pte_t *ptep, entry;
+ 	bool writeable;
+ 	unsigned long prot_bits;
+ 	unsigned long mmu_seq;
+@@ -739,7 +739,6 @@ retry:
+ 	entry = pfn_pte(pfn, __pgprot(prot_bits));
+ 
+ 	/* Write the PTE */
+-	old_pte = *ptep;
+ 	set_pte(ptep, entry);
+ 
+ 	err = 0;
+diff --git a/arch/parisc/include/uapi/asm/errno.h b/arch/parisc/include/uapi/asm/errno.h
+index 87245c584784e..8d94739d75c67 100644
+--- a/arch/parisc/include/uapi/asm/errno.h
++++ b/arch/parisc/include/uapi/asm/errno.h
+@@ -75,7 +75,6 @@
+ 
+ /* We now return you to your regularly scheduled HPUX. */
+ 
+-#define ENOSYM		215	/* symbol does not exist in executable */
+ #define	ENOTSOCK	216	/* Socket operation on non-socket */
+ #define	EDESTADDRREQ	217	/* Destination address required */
+ #define	EMSGSIZE	218	/* Message too long */
+@@ -101,7 +100,6 @@
+ #define	ETIMEDOUT	238	/* Connection timed out */
+ #define	ECONNREFUSED	239	/* Connection refused */
+ #define	EREFUSED	ECONNREFUSED	/* for HP's NFS apparently */
+-#define	EREMOTERELEASE	240	/* Remote peer released connection */
+ #define	EHOSTDOWN	241	/* Host is down */
+ #define	EHOSTUNREACH	242	/* No route to host */
+ 
+diff --git a/arch/powerpc/kernel/fpu.S b/arch/powerpc/kernel/fpu.S
+index 3ff9a8fafa467..a56597c88e94e 100644
+--- a/arch/powerpc/kernel/fpu.S
++++ b/arch/powerpc/kernel/fpu.S
+@@ -23,6 +23,15 @@
+ #include <asm/feature-fixups.h>
+ 
+ #ifdef CONFIG_VSX
++#define __REST_1FPVSR(n,c,base)						\
++BEGIN_FTR_SECTION							\
++	b	2f;							\
++END_FTR_SECTION_IFSET(CPU_FTR_VSX);					\
++	REST_FPR(n,base);						\
++	b	3f;							\
++2:	REST_VSR(n,c,base);						\
++3:
++
+ #define __REST_32FPVSRS(n,c,base)					\
+ BEGIN_FTR_SECTION							\
+ 	b	2f;							\
+@@ -41,9 +50,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX);					\
+ 2:	SAVE_32VSRS(n,c,base);						\
+ 3:
+ #else
++#define __REST_1FPVSR(n,b,base)		REST_FPR(n, base)
+ #define __REST_32FPVSRS(n,b,base)	REST_32FPRS(n, base)
+ #define __SAVE_32FPVSRS(n,b,base)	SAVE_32FPRS(n, base)
+ #endif
++#define REST_1FPVSR(n,c,base)   __REST_1FPVSR(n,__REG_##c,__REG_##base)
+ #define REST_32FPVSRS(n,c,base) __REST_32FPVSRS(n,__REG_##c,__REG_##base)
+ #define SAVE_32FPVSRS(n,c,base) __SAVE_32FPVSRS(n,__REG_##c,__REG_##base)
+ 
+@@ -67,6 +78,7 @@ _GLOBAL(store_fp_state)
+ 	SAVE_32FPVSRS(0, R4, R3)
+ 	mffs	fr0
+ 	stfd	fr0,FPSTATE_FPSCR(r3)
++	REST_1FPVSR(0, R4, R3)
+ 	blr
+ EXPORT_SYMBOL(store_fp_state)
+ 
+@@ -132,4 +144,5 @@ _GLOBAL(save_fpu)
+ 2:	SAVE_32FPVSRS(0, R4, R6)
+ 	mffs	fr0
+ 	stfd	fr0,FPSTATE_FPSCR(r6)
++	REST_1FPVSR(0, R4, R6)
+ 	blr
+diff --git a/arch/powerpc/kernel/vector.S b/arch/powerpc/kernel/vector.S
+index 801dc28fdcca5..6bd47d4242d03 100644
+--- a/arch/powerpc/kernel/vector.S
++++ b/arch/powerpc/kernel/vector.S
+@@ -32,6 +32,7 @@ _GLOBAL(store_vr_state)
+ 	mfvscr	v0
+ 	li	r4, VRSTATE_VSCR
+ 	stvx	v0, r4, r3
++	lvx	v0, 0, r3
+ 	blr
+ EXPORT_SYMBOL(store_vr_state)
+ 
+@@ -104,6 +105,7 @@ _GLOBAL(save_altivec)
+ 	mfvscr	v0
+ 	li	r4,VRSTATE_VSCR
+ 	stvx	v0,r4,r7
++	lvx	v0,0,r7
+ 	blr
+ 
+ #ifdef CONFIG_VSX
+diff --git a/arch/s390/mm/page-states.c b/arch/s390/mm/page-states.c
+index 567c69f3069e7..4e5312fa2c539 100644
+--- a/arch/s390/mm/page-states.c
++++ b/arch/s390/mm/page-states.c
+@@ -112,7 +112,7 @@ static void mark_kernel_pmd(pud_t *pud, unsigned long addr, unsigned long end)
+ 		next = pmd_addr_end(addr, end);
+ 		if (pmd_none(*pmd) || pmd_large(*pmd))
+ 			continue;
+-		page = virt_to_page(pmd_val(*pmd));
++		page = phys_to_page(pmd_val(*pmd));
+ 		set_bit(PG_arch_1, &page->flags);
+ 	} while (pmd++, addr = next, addr != end);
+ }
+@@ -130,8 +130,8 @@ static void mark_kernel_pud(p4d_t *p4d, unsigned long addr, unsigned long end)
+ 		if (pud_none(*pud) || pud_large(*pud))
+ 			continue;
+ 		if (!pud_folded(*pud)) {
+-			page = virt_to_page(pud_val(*pud));
+-			for (i = 0; i < 3; i++)
++			page = phys_to_page(pud_val(*pud));
++			for (i = 0; i < 4; i++)
+ 				set_bit(PG_arch_1, &page[i].flags);
+ 		}
+ 		mark_kernel_pmd(pud, addr, next);
+@@ -151,8 +151,8 @@ static void mark_kernel_p4d(pgd_t *pgd, unsigned long addr, unsigned long end)
+ 		if (p4d_none(*p4d))
+ 			continue;
+ 		if (!p4d_folded(*p4d)) {
+-			page = virt_to_page(p4d_val(*p4d));
+-			for (i = 0; i < 3; i++)
++			page = phys_to_page(p4d_val(*p4d));
++			for (i = 0; i < 4; i++)
+ 				set_bit(PG_arch_1, &page[i].flags);
+ 		}
+ 		mark_kernel_pud(p4d, addr, next);
+@@ -173,8 +173,8 @@ static void mark_kernel_pgd(void)
+ 		if (pgd_none(*pgd))
+ 			continue;
+ 		if (!pgd_folded(*pgd)) {
+-			page = virt_to_page(pgd_val(*pgd));
+-			for (i = 0; i < 3; i++)
++			page = phys_to_page(pgd_val(*pgd));
++			for (i = 0; i < 4; i++)
+ 				set_bit(PG_arch_1, &page[i].flags);
+ 		}
+ 		mark_kernel_p4d(pgd, addr, next);
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index ba70bcded1283..508d22728ce8c 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -448,6 +448,13 @@ static const struct dmi_system_id asus_laptop[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "GMxXGxx"),
+ 		},
+ 	},
++	{
++		/* Asus ExpertBook B1402CVA */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "B1402CVA"),
++		},
++	},
+ 	{
+ 		/* TongFang GM6XGxX/TUXEDO Stellaris 16 Gen5 AMD */
+ 		.matches = {
+diff --git a/drivers/ata/pata_isapnp.c b/drivers/ata/pata_isapnp.c
+index 43bb224430d3c..8892931ea8676 100644
+--- a/drivers/ata/pata_isapnp.c
++++ b/drivers/ata/pata_isapnp.c
+@@ -82,6 +82,9 @@ static int isapnp_init_one(struct pnp_dev *idev, const struct pnp_device_id *dev
+ 	if (pnp_port_valid(idev, 1)) {
+ 		ctl_addr = devm_ioport_map(&idev->dev,
+ 					   pnp_port_start(idev, 1), 1);
++		if (!ctl_addr)
++			return -ENOMEM;
++
+ 		ap->ioaddr.altstatus_addr = ctl_addr;
+ 		ap->ioaddr.ctl_addr = ctl_addr;
+ 		ap->ops = &isapnp_port_ops;
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index cb859febd03cf..d98cab88c38af 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -2053,6 +2053,25 @@ static ssize_t online_store(struct device *dev, struct device_attribute *attr,
+ }
+ static DEVICE_ATTR_RW(online);
+ 
++static ssize_t removable_show(struct device *dev, struct device_attribute *attr,
++			      char *buf)
++{
++	const char *loc;
++
++	switch (dev->removable) {
++	case DEVICE_REMOVABLE:
++		loc = "removable";
++		break;
++	case DEVICE_FIXED:
++		loc = "fixed";
++		break;
++	default:
++		loc = "unknown";
++	}
++	return sysfs_emit(buf, "%s\n", loc);
++}
++static DEVICE_ATTR_RO(removable);
++
+ int device_add_groups(struct device *dev, const struct attribute_group **groups)
+ {
+ 	return sysfs_create_groups(&dev->kobj, groups);
+@@ -2230,8 +2249,16 @@ static int device_add_attrs(struct device *dev)
+ 			goto err_remove_dev_online;
+ 	}
+ 
++	if (dev_removable_is_valid(dev)) {
++		error = device_create_file(dev, &dev_attr_removable);
++		if (error)
++			goto err_remove_dev_waiting_for_supplier;
++	}
++
+ 	return 0;
+ 
++ err_remove_dev_waiting_for_supplier:
++	device_remove_file(dev, &dev_attr_waiting_for_supplier);
+  err_remove_dev_online:
+ 	device_remove_file(dev, &dev_attr_online);
+  err_remove_dev_groups:
+@@ -2251,6 +2278,7 @@ static void device_remove_attrs(struct device *dev)
+ 	struct class *class = dev->class;
+ 	const struct device_type *type = dev->type;
+ 
++	device_remove_file(dev, &dev_attr_removable);
+ 	device_remove_file(dev, &dev_attr_waiting_for_supplier);
+ 	device_remove_file(dev, &dev_attr_online);
+ 	device_remove_groups(dev, dev->groups);
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 503c01d4015dc..1e8318acf6218 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -1187,8 +1187,6 @@ static void __device_release_driver(struct device *dev, struct device *parent)
+ 		else if (drv->remove)
+ 			drv->remove(dev);
+ 
+-		device_links_driver_cleanup(dev);
+-
+ 		devres_release_all(dev);
+ 		arch_teardown_dma_ops(dev);
+ 		kfree(dev->dma_range_map);
+@@ -1200,6 +1198,8 @@ static void __device_release_driver(struct device *dev, struct device *parent)
+ 		pm_runtime_reinit(dev);
+ 		dev_pm_set_driver_flags(dev, 0);
+ 
++		device_links_driver_cleanup(dev);
++
+ 		klist_remove(&dev->p->knode_driver);
+ 		device_pm_check_callbacks(dev);
+ 		if (dev->bus)
+diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c
+index 5bf5fc759881f..00f7ad7466680 100644
+--- a/drivers/cpufreq/imx6q-cpufreq.c
++++ b/drivers/cpufreq/imx6q-cpufreq.c
+@@ -209,6 +209,14 @@ static struct cpufreq_driver imx6q_cpufreq_driver = {
+ 	.suspend = cpufreq_generic_suspend,
+ };
+ 
++static void imx6x_disable_freq_in_opp(struct device *dev, unsigned long freq)
++{
++	int ret = dev_pm_opp_disable(dev, freq);
++
++	if (ret < 0 && ret != -ENODEV)
++		dev_warn(dev, "failed to disable %ldMHz OPP\n", freq / 1000000);
++}
++
+ #define OCOTP_CFG3			0x440
+ #define OCOTP_CFG3_SPEED_SHIFT		16
+ #define OCOTP_CFG3_SPEED_1P2GHZ		0x3
+@@ -254,17 +262,15 @@ static int imx6q_opp_check_speed_grading(struct device *dev)
+ 	val &= 0x3;
+ 
+ 	if (val < OCOTP_CFG3_SPEED_996MHZ)
+-		if (dev_pm_opp_disable(dev, 996000000))
+-			dev_warn(dev, "failed to disable 996MHz OPP\n");
++		imx6x_disable_freq_in_opp(dev, 996000000);
+ 
+ 	if (of_machine_is_compatible("fsl,imx6q") ||
+ 	    of_machine_is_compatible("fsl,imx6qp")) {
+ 		if (val != OCOTP_CFG3_SPEED_852MHZ)
+-			if (dev_pm_opp_disable(dev, 852000000))
+-				dev_warn(dev, "failed to disable 852MHz OPP\n");
++			imx6x_disable_freq_in_opp(dev, 852000000);
++
+ 		if (val != OCOTP_CFG3_SPEED_1P2GHZ)
+-			if (dev_pm_opp_disable(dev, 1200000000))
+-				dev_warn(dev, "failed to disable 1.2GHz OPP\n");
++			imx6x_disable_freq_in_opp(dev, 1200000000);
+ 	}
+ 
+ 	return 0;
+@@ -316,20 +322,16 @@ static int imx6ul_opp_check_speed_grading(struct device *dev)
+ 	val >>= OCOTP_CFG3_SPEED_SHIFT;
+ 	val &= 0x3;
+ 
+-	if (of_machine_is_compatible("fsl,imx6ul")) {
++	if (of_machine_is_compatible("fsl,imx6ul"))
+ 		if (val != OCOTP_CFG3_6UL_SPEED_696MHZ)
+-			if (dev_pm_opp_disable(dev, 696000000))
+-				dev_warn(dev, "failed to disable 696MHz OPP\n");
+-	}
++			imx6x_disable_freq_in_opp(dev, 696000000);
+ 
+ 	if (of_machine_is_compatible("fsl,imx6ull")) {
+-		if (val != OCOTP_CFG3_6ULL_SPEED_792MHZ)
+-			if (dev_pm_opp_disable(dev, 792000000))
+-				dev_warn(dev, "failed to disable 792MHz OPP\n");
++		if (val < OCOTP_CFG3_6ULL_SPEED_792MHZ)
++			imx6x_disable_freq_in_opp(dev, 792000000);
+ 
+ 		if (val != OCOTP_CFG3_6ULL_SPEED_900MHZ)
+-			if (dev_pm_opp_disable(dev, 900000000))
+-				dev_warn(dev, "failed to disable 900MHz OPP\n");
++			imx6x_disable_freq_in_opp(dev, 900000000);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/firewire/core-device.c b/drivers/firewire/core-device.c
+index 80db43a220698..94ae27865b9ed 100644
+--- a/drivers/firewire/core-device.c
++++ b/drivers/firewire/core-device.c
+@@ -719,14 +719,11 @@ static void create_units(struct fw_device *device)
+ 					fw_unit_attributes,
+ 					&unit->attribute_group);
+ 
+-		if (device_register(&unit->device) < 0)
+-			goto skip_unit;
+-
+ 		fw_device_get(device);
+-		continue;
+-
+-	skip_unit:
+-		kfree(unit);
++		if (device_register(&unit->device) < 0) {
++			put_device(&unit->device);
++			continue;
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
+index 4b568ee932435..02a73b9e08d7d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
+@@ -29,6 +29,7 @@
+ #include "amdgpu.h"
+ #include "atom.h"
+ 
++#include <linux/device.h>
+ #include <linux/pci.h>
+ #include <linux/slab.h>
+ #include <linux/acpi.h>
+@@ -285,6 +286,10 @@ static bool amdgpu_atrm_get_bios(struct amdgpu_device *adev)
+ 	if (adev->flags & AMD_IS_APU)
+ 		return false;
+ 
++	/* ATRM is for on-platform devices only */
++	if (dev_is_removable(&adev->pdev->dev))
++		return false;
++
+ 	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
+ 		dhandle = ACPI_HANDLE(&pdev->dev);
+ 		if (!dhandle)
+diff --git a/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c b/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
+index db9d0b86d5428..9e518213a54ff 100644
+--- a/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
++++ b/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
+@@ -36,6 +36,7 @@ struct panel_desc {
+ 	const struct panel_init_cmd *init_cmds;
+ 	unsigned int lanes;
+ 	bool discharge_on_disable;
++	bool lp11_before_reset;
+ };
+ 
+ struct boe_panel {
+@@ -551,6 +552,10 @@ static int boe_panel_prepare(struct drm_panel *panel)
+ 
+ 	usleep_range(5000, 10000);
+ 
++	if (boe->desc->lp11_before_reset) {
++		mipi_dsi_dcs_nop(boe->dsi);
++		usleep_range(1000, 2000);
++	}
+ 	gpiod_set_value(boe->enable_gpio, 1);
+ 	usleep_range(1000, 2000);
+ 	gpiod_set_value(boe->enable_gpio, 0);
+@@ -692,6 +697,7 @@ static const struct panel_desc auo_b101uan08_3_desc = {
+ 	.mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
+ 		      MIPI_DSI_MODE_LPM,
+ 	.init_cmds = auo_b101uan08_3_init_cmd,
++	.lp11_before_reset = true,
+ };
+ 
+ static const struct drm_display_mode boe_tv105wum_nw0_default_mode = {
+@@ -719,6 +725,7 @@ static const struct panel_desc boe_tv105wum_nw0_desc = {
+ 	.mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
+ 		      MIPI_DSI_MODE_LPM,
+ 	.init_cmds = boe_init_cmd,
++	.lp11_before_reset = true,
+ };
+ 
+ static int boe_panel_get_modes(struct drm_panel *panel,
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index e90b518118881..ee01b61a6bafa 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2154,13 +2154,13 @@ static const struct panel_desc innolux_g070y2_l01 = {
+ static const struct display_timing innolux_g101ice_l01_timing = {
+ 	.pixelclock = { 60400000, 71100000, 74700000 },
+ 	.hactive = { 1280, 1280, 1280 },
+-	.hfront_porch = { 41, 80, 100 },
+-	.hback_porch = { 40, 79, 99 },
+-	.hsync_len = { 1, 1, 1 },
++	.hfront_porch = { 30, 60, 70 },
++	.hback_porch = { 30, 60, 70 },
++	.hsync_len = { 22, 40, 60 },
+ 	.vactive = { 800, 800, 800 },
+-	.vfront_porch = { 5, 11, 14 },
+-	.vback_porch = { 4, 11, 14 },
+-	.vsync_len = { 1, 1, 1 },
++	.vfront_porch = { 3, 8, 14 },
++	.vback_porch = { 3, 8, 14 },
++	.vsync_len = { 4, 7, 12 },
+ 	.flags = DISPLAY_FLAGS_DE_HIGH,
+ };
+ 
+@@ -2177,6 +2177,7 @@ static const struct panel_desc innolux_g101ice_l01 = {
+ 		.disable = 200,
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
++	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
+ 	.connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+ 
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 05fcc9e078d6d..682d78fab9a59 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -248,14 +248,22 @@ static inline void vop_cfg_done(struct vop *vop)
+ 	VOP_REG_SET(vop, common, cfg_done, 1);
+ }
+ 
+-static bool has_rb_swapped(uint32_t format)
++static bool has_rb_swapped(uint32_t version, uint32_t format)
+ {
+ 	switch (format) {
+ 	case DRM_FORMAT_XBGR8888:
+ 	case DRM_FORMAT_ABGR8888:
+-	case DRM_FORMAT_BGR888:
+ 	case DRM_FORMAT_BGR565:
+ 		return true;
++	/*
++	 * full framework (IP version 3.x) only need rb swapped for RGB888 and
++	 * little framework (IP version 2.x) only need rb swapped for BGR888,
++	 * check for 3.x to also only rb swap BGR888 for unknown vop version
++	 */
++	case DRM_FORMAT_RGB888:
++		return VOP_MAJOR(version) == 3;
++	case DRM_FORMAT_BGR888:
++		return VOP_MAJOR(version) != 3;
+ 	default:
+ 		return false;
+ 	}
+@@ -988,7 +996,7 @@ static void vop_plane_atomic_update(struct drm_plane *plane,
+ 	VOP_WIN_SET(vop, win, dsp_info, dsp_info);
+ 	VOP_WIN_SET(vop, win, dsp_st, dsp_st);
+ 
+-	rb_swap = has_rb_swapped(fb->format->format);
++	rb_swap = has_rb_swapped(vop->data->version, fb->format->format);
+ 	VOP_WIN_SET(vop, win, rb_swap, rb_swap);
+ 
+ 	/*
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 524d6d712e724..476967ab6294c 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -702,15 +702,22 @@ static void hid_close_report(struct hid_device *device)
+  * Free a device structure, all reports, and all fields.
+  */
+ 
+-static void hid_device_release(struct device *dev)
++void hiddev_free(struct kref *ref)
+ {
+-	struct hid_device *hid = to_hid_device(dev);
++	struct hid_device *hid = container_of(ref, struct hid_device, ref);
+ 
+ 	hid_close_report(hid);
+ 	kfree(hid->dev_rdesc);
+ 	kfree(hid);
+ }
+ 
++static void hid_device_release(struct device *dev)
++{
++	struct hid_device *hid = to_hid_device(dev);
++
++	kref_put(&hid->ref, hiddev_free);
++}
++
+ /*
+  * Fetch a report description item from the data stream. We support long
+  * items, though they are not used yet.
+@@ -2444,10 +2451,12 @@ int hid_add_device(struct hid_device *hdev)
+ 			hid_warn(hdev, "bad device descriptor (%d)\n", ret);
+ 	}
+ 
++	hdev->id = atomic_inc_return(&id);
++
+ 	/* XXX hack, any other cleaner solution after the driver core
+ 	 * is converted to allow more than 20 bytes as the device name? */
+ 	dev_set_name(&hdev->dev, "%04X:%04X:%04X.%04X", hdev->bus,
+-		     hdev->vendor, hdev->product, atomic_inc_return(&id));
++		     hdev->vendor, hdev->product, hdev->id);
+ 
+ 	hid_debug_register(hdev, dev_name(&hdev->dev));
+ 	ret = device_add(&hdev->dev);
+@@ -2490,6 +2499,7 @@ struct hid_device *hid_allocate_device(void)
+ 	spin_lock_init(&hdev->debug_list_lock);
+ 	sema_init(&hdev->driver_input_lock, 1);
+ 	mutex_init(&hdev->ll_open_lock);
++	kref_init(&hdev->ref);
+ 
+ 	return hdev;
+ }
+diff --git a/drivers/hid/hid-debug.c b/drivers/hid/hid-debug.c
+index 1f60a381ae63e..81da80f0c75b5 100644
+--- a/drivers/hid/hid-debug.c
++++ b/drivers/hid/hid-debug.c
+@@ -1082,6 +1082,7 @@ static int hid_debug_events_open(struct inode *inode, struct file *file)
+ 		goto out;
+ 	}
+ 	list->hdev = (struct hid_device *) inode->i_private;
++	kref_get(&list->hdev->ref);
+ 	file->private_data = list;
+ 	mutex_init(&list->read_mutex);
+ 
+@@ -1174,6 +1175,8 @@ static int hid_debug_events_release(struct inode *inode, struct file *file)
+ 	list_del(&list->node);
+ 	spin_unlock_irqrestore(&list->hdev->debug_list_lock, flags);
+ 	kfifo_free(&list->hid_debug_fifo);
++
++	kref_put(&list->hdev->ref, hiddev_free);
+ 	kfree(list);
+ 
+ 	return 0;
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_ctrl.c b/drivers/infiniband/hw/i40iw/i40iw_ctrl.c
+index 86d3f8aff329c..2b18bb36e4e37 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_ctrl.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_ctrl.c
+@@ -3033,6 +3033,9 @@ static enum i40iw_status_code i40iw_sc_alloc_stag(
+ 	u64 header;
+ 	enum i40iw_page_size page_size;
+ 
++	if (!info->total_len && !info->all_memory)
++		return -EINVAL;
++
+ 	page_size = (info->page_size == 0x200000) ? I40IW_PAGE_SIZE_2M : I40IW_PAGE_SIZE_4K;
+ 	cqp = dev->cqp;
+ 	wqe = i40iw_sc_cqp_get_next_send_wqe(cqp, scratch);
+@@ -3091,6 +3094,9 @@ static enum i40iw_status_code i40iw_sc_mr_reg_non_shared(
+ 	u8 addr_type;
+ 	enum i40iw_page_size page_size;
+ 
++	if (!info->total_len && !info->all_memory)
++		return -EINVAL;
++
+ 	page_size = (info->page_size == 0x200000) ? I40IW_PAGE_SIZE_2M : I40IW_PAGE_SIZE_4K;
+ 	if (info->access_rights & (I40IW_ACCESS_FLAGS_REMOTEREAD_ONLY |
+ 				   I40IW_ACCESS_FLAGS_REMOTEWRITE_ONLY))
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_type.h b/drivers/infiniband/hw/i40iw/i40iw_type.h
+index c3babf3cbb8e5..341aa6b1b6c1e 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_type.h
++++ b/drivers/infiniband/hw/i40iw/i40iw_type.h
+@@ -786,6 +786,7 @@ struct i40iw_allocate_stag_info {
+ 	bool use_hmc_fcn_index;
+ 	u8 hmc_fcn_index;
+ 	bool use_pf_rid;
++	bool all_memory;
+ };
+ 
+ struct i40iw_reg_ns_stag_info {
+@@ -804,6 +805,7 @@ struct i40iw_reg_ns_stag_info {
+ 	bool use_hmc_fcn_index;
+ 	u8 hmc_fcn_index;
+ 	bool use_pf_rid;
++	bool all_memory;
+ };
+ 
+ struct i40iw_fast_reg_stag_info {
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+index 533f3caecb7a9..89654dc91d81a 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+@@ -1494,7 +1494,8 @@ static int i40iw_handle_q_mem(struct i40iw_device *iwdev,
+ static int i40iw_hw_alloc_stag(struct i40iw_device *iwdev, struct i40iw_mr *iwmr)
+ {
+ 	struct i40iw_allocate_stag_info *info;
+-	struct i40iw_pd *iwpd = to_iwpd(iwmr->ibmr.pd);
++	struct ib_pd *pd = iwmr->ibmr.pd;
++	struct i40iw_pd *iwpd = to_iwpd(pd);
+ 	enum i40iw_status_code status;
+ 	int err = 0;
+ 	struct i40iw_cqp_request *cqp_request;
+@@ -1511,6 +1512,7 @@ static int i40iw_hw_alloc_stag(struct i40iw_device *iwdev, struct i40iw_mr *iwmr
+ 	info->stag_idx = iwmr->stag >> I40IW_CQPSQ_STAG_IDX_SHIFT;
+ 	info->pd_id = iwpd->sc_pd.pd_id;
+ 	info->total_len = iwmr->length;
++	info->all_memory = pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY;
+ 	info->remote_access = true;
+ 	cqp_info->cqp_cmd = OP_ALLOC_STAG;
+ 	cqp_info->post_sq = 1;
+@@ -1563,6 +1565,8 @@ static struct ib_mr *i40iw_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type,
+ 	iwmr->type = IW_MEMREG_TYPE_MEM;
+ 	palloc = &iwpbl->pble_alloc;
+ 	iwmr->page_cnt = max_num_sg;
++	/* Use system PAGE_SIZE as the sg page sizes are unknown at this point */
++	iwmr->length = max_num_sg * PAGE_SIZE;
+ 	mutex_lock(&iwdev->pbl_mutex);
+ 	status = i40iw_get_pble(&iwdev->sc_dev, iwdev->pble_rsrc, palloc, iwmr->page_cnt);
+ 	mutex_unlock(&iwdev->pbl_mutex);
+@@ -1659,7 +1663,8 @@ static int i40iw_hwreg_mr(struct i40iw_device *iwdev,
+ {
+ 	struct i40iw_pbl *iwpbl = &iwmr->iwpbl;
+ 	struct i40iw_reg_ns_stag_info *stag_info;
+-	struct i40iw_pd *iwpd = to_iwpd(iwmr->ibmr.pd);
++	struct ib_pd *pd = iwmr->ibmr.pd;
++	struct i40iw_pd *iwpd = to_iwpd(pd);
+ 	struct i40iw_pble_alloc *palloc = &iwpbl->pble_alloc;
+ 	enum i40iw_status_code status;
+ 	int err = 0;
+@@ -1679,6 +1684,7 @@ static int i40iw_hwreg_mr(struct i40iw_device *iwdev,
+ 	stag_info->total_len = iwmr->length;
+ 	stag_info->access_rights = access;
+ 	stag_info->pd_id = iwpd->sc_pd.pd_id;
++	stag_info->all_memory = pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY;
+ 	stag_info->addr_type = I40IW_ADDR_TYPE_VA_BASED;
+ 	stag_info->page_size = iwmr->page_size;
+ 
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 762c502391464..beedad0fe09ae 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -119,6 +119,7 @@ static const struct xpad_device {
+ 	{ 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX },
+ 	{ 0x044f, 0x0f10, "Thrustmaster Modena GT Wheel", 0, XTYPE_XBOX },
+ 	{ 0x044f, 0xb326, "Thrustmaster Gamepad GP XID", 0, XTYPE_XBOX360 },
++	{ 0x03f0, 0x0495, "HyperX Clutch Gladiate", 0, XTYPE_XBOXONE },
+ 	{ 0x045e, 0x0202, "Microsoft X-Box pad v1 (US)", 0, XTYPE_XBOX },
+ 	{ 0x045e, 0x0285, "Microsoft X-Box pad (Japan)", 0, XTYPE_XBOX },
+ 	{ 0x045e, 0x0287, "Microsoft Xbox Controller S", 0, XTYPE_XBOX },
+@@ -431,6 +432,7 @@ static const struct usb_device_id xpad_table[] = {
+ 	XPAD_XBOX360_VENDOR(0x0079),		/* GPD Win 2 Controller */
+ 	XPAD_XBOX360_VENDOR(0x03eb),		/* Wooting Keyboards (Legacy) */
+ 	XPAD_XBOX360_VENDOR(0x044f),		/* Thrustmaster X-Box 360 controllers */
++	XPAD_XBOXONE_VENDOR(0x03f0),		/* HP HyperX Xbox One Controllers */
+ 	XPAD_XBOX360_VENDOR(0x045e),		/* Microsoft X-Box 360 controllers */
+ 	XPAD_XBOXONE_VENDOR(0x045e),		/* Microsoft X-Box One controllers */
+ 	XPAD_XBOX360_VENDOR(0x046d),		/* Logitech X-Box 360 style controllers */
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 47666c9b4ba11..6be92e0afdb06 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -6325,7 +6325,7 @@ static void quirk_igfx_skip_te_disable(struct pci_dev *dev)
+ 	ver = (dev->device >> 8) & 0xff;
+ 	if (ver != 0x45 && ver != 0x46 && ver != 0x4c &&
+ 	    ver != 0x4e && ver != 0x8a && ver != 0x98 &&
+-	    ver != 0x9a && ver != 0xa7)
++	    ver != 0x9a && ver != 0xa7 && ver != 0x7d)
+ 		return;
+ 
+ 	if (risky_device(dev))
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 24c57bb85b359..3d6a7fbcbb15e 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -1342,7 +1342,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
+ 	memset(new_nodes, 0, sizeof(new_nodes));
+ 	closure_init_stack(&cl);
+ 
+-	while (nodes < GC_MERGE_NODES && !IS_ERR(r[nodes].b))
++	while (nodes < GC_MERGE_NODES && !IS_ERR_OR_NULL(r[nodes].b))
+ 		keys += r[nodes++].keys;
+ 
+ 	blocks = btree_default_blocks(b->c) * 2 / 3;
+@@ -1489,7 +1489,7 @@ out_nocoalesce:
+ 	bch_keylist_free(&keylist);
+ 
+ 	for (i = 0; i < nodes; i++)
+-		if (!IS_ERR(new_nodes[i])) {
++		if (!IS_ERR_OR_NULL(new_nodes[i])) {
+ 			btree_node_free(new_nodes[i]);
+ 			rw_unlock(true, new_nodes[i]);
+ 		}
+@@ -1506,6 +1506,8 @@ static int btree_gc_rewrite_node(struct btree *b, struct btree_op *op,
+ 		return 0;
+ 
+ 	n = btree_node_alloc_replacement(replace, NULL);
++	if (IS_ERR(n))
++		return 0;
+ 
+ 	/* recheck reserve after allocating replacement node */
+ 	if (btree_check_reserve(b, NULL)) {
+diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
+index 554e3afc9b688..ca3e2f000cd4d 100644
+--- a/drivers/md/bcache/sysfs.c
++++ b/drivers/md/bcache/sysfs.c
+@@ -1078,7 +1078,7 @@ SHOW(__bch_cache)
+ 			sum += INITIAL_PRIO - cached[i];
+ 
+ 		if (n)
+-			do_div(sum, n);
++			sum = div64_u64(sum, n);
+ 
+ 		for (i = 0; i < ARRAY_SIZE(q); i++)
+ 			q[i] = INITIAL_PRIO - cached[n * (i + 1) /
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 6324c922f6ba4..94e899ce38554 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -921,24 +921,35 @@ static int bch_btre_dirty_init_thread_nr(void)
+ void bch_sectors_dirty_init(struct bcache_device *d)
+ {
+ 	int i;
++	struct btree *b = NULL;
+ 	struct bkey *k = NULL;
+ 	struct btree_iter iter;
+ 	struct sectors_dirty_init op;
+ 	struct cache_set *c = d->c;
+ 	struct bch_dirty_init_state state;
+ 
++retry_lock:
++	b = c->root;
++	rw_lock(0, b, b->level);
++	if (b != c->root) {
++		rw_unlock(0, b);
++		goto retry_lock;
++	}
++
+ 	/* Just count root keys if no leaf node */
+-	rw_lock(0, c->root, c->root->level);
+ 	if (c->root->level == 0) {
+ 		bch_btree_op_init(&op.op, -1);
+ 		op.inode = d->id;
+ 		op.count = 0;
+ 
+ 		for_each_key_filter(&c->root->keys,
+-				    k, &iter, bch_ptr_invalid)
++				    k, &iter, bch_ptr_invalid) {
++			if (KEY_INODE(k) != op.inode)
++				continue;
+ 			sectors_dirty_init_fn(&op.op, c->root, k);
++		}
+ 
+-		rw_unlock(0, c->root);
++		rw_unlock(0, b);
+ 		return;
+ 	}
+ 
+@@ -958,23 +969,24 @@ void bch_sectors_dirty_init(struct bcache_device *d)
+ 		if (atomic_read(&state.enough))
+ 			break;
+ 
++		atomic_inc(&state.started);
+ 		state.infos[i].state = &state;
+ 		state.infos[i].thread =
+ 			kthread_run(bch_dirty_init_thread, &state.infos[i],
+ 				    "bch_dirtcnt[%d]", i);
+ 		if (IS_ERR(state.infos[i].thread)) {
+ 			pr_err("fails to run thread bch_dirty_init[%d]\n", i);
++			atomic_dec(&state.started);
+ 			for (--i; i >= 0; i--)
+ 				kthread_stop(state.infos[i].thread);
+ 			goto out;
+ 		}
+-		atomic_inc(&state.started);
+ 	}
+ 
+ out:
+ 	/* Must wait for all threads to stop. */
+ 	wait_event(state.wait, atomic_read(&state.started) == 0);
+-	rw_unlock(0, c->root);
++	rw_unlock(0, b);
+ }
+ 
+ void bch_cached_dev_writeback_init(struct cached_dev *dc)
+diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c
+index 2628a832787b0..d58b9ae6f2870 100644
+--- a/drivers/md/dm-delay.c
++++ b/drivers/md/dm-delay.c
+@@ -30,7 +30,7 @@ struct delay_c {
+ 	struct workqueue_struct *kdelayd_wq;
+ 	struct work_struct flush_expired_bios;
+ 	struct list_head delayed_bios;
+-	atomic_t may_delay;
++	bool may_delay;
+ 
+ 	struct delay_class read;
+ 	struct delay_class write;
+@@ -191,7 +191,7 @@ static int delay_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 	INIT_WORK(&dc->flush_expired_bios, flush_expired_bios);
+ 	INIT_LIST_HEAD(&dc->delayed_bios);
+ 	mutex_init(&dc->timer_lock);
+-	atomic_set(&dc->may_delay, 1);
++	dc->may_delay = true;
+ 	dc->argc = argc;
+ 
+ 	ret = delay_class_ctr(ti, &dc->read, argv);
+@@ -245,7 +245,7 @@ static int delay_bio(struct delay_c *dc, struct delay_class *c, struct bio *bio)
+ 	struct dm_delay_info *delayed;
+ 	unsigned long expires = 0;
+ 
+-	if (!c->delay || !atomic_read(&dc->may_delay))
++	if (!c->delay)
+ 		return DM_MAPIO_REMAPPED;
+ 
+ 	delayed = dm_per_bio_data(bio, sizeof(struct dm_delay_info));
+@@ -254,6 +254,10 @@ static int delay_bio(struct delay_c *dc, struct delay_class *c, struct bio *bio)
+ 	delayed->expires = expires = jiffies + msecs_to_jiffies(c->delay);
+ 
+ 	mutex_lock(&delayed_bios_lock);
++	if (unlikely(!dc->may_delay)) {
++		mutex_unlock(&delayed_bios_lock);
++		return DM_MAPIO_REMAPPED;
++	}
+ 	c->ops++;
+ 	list_add_tail(&delayed->list, &dc->delayed_bios);
+ 	mutex_unlock(&delayed_bios_lock);
+@@ -267,7 +271,10 @@ static void delay_presuspend(struct dm_target *ti)
+ {
+ 	struct delay_c *dc = ti->private;
+ 
+-	atomic_set(&dc->may_delay, 0);
++	mutex_lock(&delayed_bios_lock);
++	dc->may_delay = false;
++	mutex_unlock(&delayed_bios_lock);
++
+ 	del_timer_sync(&dc->delay_timer);
+ 	flush_bios(flush_delayed_bios(dc, 1));
+ }
+@@ -276,7 +283,7 @@ static void delay_resume(struct dm_target *ti)
+ {
+ 	struct delay_c *dc = ti->private;
+ 
+-	atomic_set(&dc->may_delay, 1);
++	dc->may_delay = true;
+ }
+ 
+ static int delay_map(struct dm_target *ti, struct bio *bio)
+diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
+index cea2b37897367..442437e4e03ba 100644
+--- a/drivers/md/dm-verity-fec.c
++++ b/drivers/md/dm-verity-fec.c
+@@ -24,7 +24,8 @@ bool verity_fec_is_enabled(struct dm_verity *v)
+  */
+ static inline struct dm_verity_fec_io *fec_io(struct dm_verity_io *io)
+ {
+-	return (struct dm_verity_fec_io *) verity_io_digest_end(io->v, io);
++	return (struct dm_verity_fec_io *)
++		((char *)io + io->v->ti->per_io_data_size - sizeof(struct dm_verity_fec_io));
+ }
+ 
+ /*
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index 0c2048d2b847e..7671949c8a2d9 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -583,7 +583,9 @@ static void verity_end_io(struct bio *bio)
+ 	struct dm_verity_io *io = bio->bi_private;
+ 
+ 	if (bio->bi_status &&
+-	    (!verity_fec_is_enabled(io->v) || verity_is_system_shutting_down())) {
++	    (!verity_fec_is_enabled(io->v) ||
++	     verity_is_system_shutting_down() ||
++	     (bio->bi_opf & REQ_RAHEAD))) {
+ 		verity_finish_io(io, bio->bi_status);
+ 		return;
+ 	}
+diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h
+index 4e769d13473a9..78d1e51195ada 100644
+--- a/drivers/md/dm-verity.h
++++ b/drivers/md/dm-verity.h
+@@ -111,12 +111,6 @@ static inline u8 *verity_io_want_digest(struct dm_verity *v,
+ 	return (u8 *)(io + 1) + v->ahash_reqsize + v->digest_size;
+ }
+ 
+-static inline u8 *verity_io_digest_end(struct dm_verity *v,
+-				       struct dm_verity_io *io)
+-{
+-	return verity_io_want_digest(v, io) + v->digest_size;
+-}
+-
+ extern int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io,
+ 			       struct bvec_iter *iter,
+ 			       int (*process)(struct dm_verity *v,
+diff --git a/drivers/media/i2c/smiapp/smiapp-core.c b/drivers/media/i2c/smiapp/smiapp-core.c
+index 6fc0680a93d04..3bf96b0471326 100644
+--- a/drivers/media/i2c/smiapp/smiapp-core.c
++++ b/drivers/media/i2c/smiapp/smiapp-core.c
+@@ -2647,7 +2647,7 @@ static int smiapp_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
+ 		try_fmt->code = sensor->internal_csi_format->code;
+ 		try_fmt->field = V4L2_FIELD_NONE;
+ 
+-		if (ssd != sensor->pixel_array)
++		if (ssd == sensor->pixel_array)
+ 			continue;
+ 
+ 		try_comp = v4l2_subdev_get_try_compose(sd, fh->pad, i);
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index b4a07a166605a..cdb9a55a2c706 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -70,6 +70,9 @@
+ 
+ #define PCI_DEVICE_ID_TI_J721E			0xb00d
+ #define PCI_DEVICE_ID_TI_AM654			0xb00c
++#define PCI_DEVICE_ID_TI_J7200			0xb00f
++#define PCI_DEVICE_ID_TI_AM64			0xb010
++#define PCI_DEVICE_ID_TI_J721S2		0xb013
+ #define PCI_DEVICE_ID_LS1088A			0x80c0
+ 
+ #define is_am654_pci_dev(pdev)		\
+@@ -1000,6 +1003,15 @@ static const struct pci_device_id pci_endpoint_test_tbl[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E),
+ 	  .driver_data = (kernel_ulong_t)&j721e_data,
+ 	},
++	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J7200),
++	  .driver_data = (kernel_ulong_t)&j721e_data,
++	},
++	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM64),
++	  .driver_data = (kernel_ulong_t)&j721e_data,
++	},
++	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721S2),
++	  .driver_data = (kernel_ulong_t)&j721e_data,
++	},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(pci, pci_endpoint_test_tbl);
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index d81baf750aebb..41d98d7198be5 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1418,6 +1418,8 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
+ 			blk_mq_requeue_request(req, true);
+ 		else
+ 			__blk_mq_end_request(req, BLK_STS_OK);
++	} else if (mq->in_recovery) {
++		blk_mq_requeue_request(req, true);
+ 	} else {
+ 		blk_mq_end_request(req, BLK_STS_OK);
+ 	}
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index 8f24653942536..fdeaaae080603 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -547,22 +547,25 @@ int mmc_cqe_recovery(struct mmc_host *host)
+ 	host->cqe_ops->cqe_recovery_start(host);
+ 
+ 	memset(&cmd, 0, sizeof(cmd));
+-	cmd.opcode       = MMC_STOP_TRANSMISSION,
+-	cmd.flags        = MMC_RSP_R1B | MMC_CMD_AC,
++	cmd.opcode       = MMC_STOP_TRANSMISSION;
++	cmd.flags        = MMC_RSP_R1B | MMC_CMD_AC;
+ 	cmd.flags       &= ~MMC_RSP_CRC; /* Ignore CRC */
+-	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT,
+-	mmc_wait_for_cmd(host, &cmd, 0);
++	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT;
++	mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES);
+ 
+ 	memset(&cmd, 0, sizeof(cmd));
+ 	cmd.opcode       = MMC_CMDQ_TASK_MGMT;
+ 	cmd.arg          = 1; /* Discard entire queue */
+ 	cmd.flags        = MMC_RSP_R1B | MMC_CMD_AC;
+ 	cmd.flags       &= ~MMC_RSP_CRC; /* Ignore CRC */
+-	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT,
+-	err = mmc_wait_for_cmd(host, &cmd, 0);
++	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT;
++	err = mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES);
+ 
+ 	host->cqe_ops->cqe_recovery_finish(host);
+ 
++	if (err)
++		err = mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES);
++
+ 	mmc_retune_release(host);
+ 
+ 	return err;
+diff --git a/drivers/mmc/core/regulator.c b/drivers/mmc/core/regulator.c
+index 609201a467ef9..4dcbc2281d2b5 100644
+--- a/drivers/mmc/core/regulator.c
++++ b/drivers/mmc/core/regulator.c
+@@ -271,3 +271,44 @@ int mmc_regulator_get_supply(struct mmc_host *mmc)
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(mmc_regulator_get_supply);
++
++/**
++ * mmc_regulator_enable_vqmmc - enable VQMMC regulator for a host
++ * @mmc: the host to regulate
++ *
++ * Returns 0 or errno. Enables the regulator for vqmmc.
++ * Keeps track of the enable status for ensuring that calls to
++ * regulator_enable/disable are balanced.
++ */
++int mmc_regulator_enable_vqmmc(struct mmc_host *mmc)
++{
++	int ret = 0;
++
++	if (!IS_ERR(mmc->supply.vqmmc) && !mmc->vqmmc_enabled) {
++		ret = regulator_enable(mmc->supply.vqmmc);
++		if (ret < 0)
++			dev_err(mmc_dev(mmc), "enabling vqmmc regulator failed\n");
++		else
++			mmc->vqmmc_enabled = true;
++	}
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(mmc_regulator_enable_vqmmc);
++
++/**
++ * mmc_regulator_disable_vqmmc - disable VQMMC regulator for a host
++ * @mmc: the host to regulate
++ *
++ * Returns 0 or errno. Disables the regulator for vqmmc.
++ * Keeps track of the enable status for ensuring that calls to
++ * regulator_enable/disable are balanced.
++ */
++void mmc_regulator_disable_vqmmc(struct mmc_host *mmc)
++{
++	if (!IS_ERR(mmc->supply.vqmmc) && mmc->vqmmc_enabled) {
++		regulator_disable(mmc->supply.vqmmc);
++		mmc->vqmmc_enabled = false;
++	}
++}
++EXPORT_SYMBOL_GPL(mmc_regulator_disable_vqmmc);
+diff --git a/drivers/mmc/host/cqhci.c b/drivers/mmc/host/cqhci.c
+index 7ba4f714106f9..23cf7912c1ba3 100644
+--- a/drivers/mmc/host/cqhci.c
++++ b/drivers/mmc/host/cqhci.c
+@@ -890,8 +890,8 @@ static bool cqhci_clear_all_tasks(struct mmc_host *mmc, unsigned int timeout)
+ 	ret = cqhci_tasks_cleared(cq_host);
+ 
+ 	if (!ret)
+-		pr_debug("%s: cqhci: Failed to clear tasks\n",
+-			 mmc_hostname(mmc));
++		pr_warn("%s: cqhci: Failed to clear tasks\n",
++			mmc_hostname(mmc));
+ 
+ 	return ret;
+ }
+@@ -924,7 +924,7 @@ static bool cqhci_halt(struct mmc_host *mmc, unsigned int timeout)
+ 	ret = cqhci_halted(cq_host);
+ 
+ 	if (!ret)
+-		pr_debug("%s: cqhci: Failed to halt\n", mmc_hostname(mmc));
++		pr_warn("%s: cqhci: Failed to halt\n", mmc_hostname(mmc));
+ 
+ 	return ret;
+ }
+@@ -932,10 +932,10 @@ static bool cqhci_halt(struct mmc_host *mmc, unsigned int timeout)
+ /*
+  * After halting we expect to be able to use the command line. We interpret the
+  * failure to halt to mean the data lines might still be in use (and the upper
+- * layers will need to send a STOP command), so we set the timeout based on a
+- * generous command timeout.
++ * layers will need to send a STOP command), however failing to halt complicates
++ * the recovery, so set a timeout that would reasonably allow I/O to complete.
+  */
+-#define CQHCI_START_HALT_TIMEOUT	5
++#define CQHCI_START_HALT_TIMEOUT	500
+ 
+ static void cqhci_recovery_start(struct mmc_host *mmc)
+ {
+@@ -1023,28 +1023,28 @@ static void cqhci_recovery_finish(struct mmc_host *mmc)
+ 
+ 	ok = cqhci_halt(mmc, CQHCI_FINISH_HALT_TIMEOUT);
+ 
+-	if (!cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT))
+-		ok = false;
+-
+ 	/*
+ 	 * The specification contradicts itself, by saying that tasks cannot be
+ 	 * cleared if CQHCI does not halt, but if CQHCI does not halt, it should
+ 	 * be disabled/re-enabled, but not to disable before clearing tasks.
+ 	 * Have a go anyway.
+ 	 */
+-	if (!ok) {
+-		pr_debug("%s: cqhci: disable / re-enable\n", mmc_hostname(mmc));
+-		cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
+-		cqcfg &= ~CQHCI_ENABLE;
+-		cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+-		cqcfg |= CQHCI_ENABLE;
+-		cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+-		/* Be sure that there are no tasks */
+-		ok = cqhci_halt(mmc, CQHCI_FINISH_HALT_TIMEOUT);
+-		if (!cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT))
+-			ok = false;
+-		WARN_ON(!ok);
+-	}
++	if (!cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT))
++		ok = false;
++
++	/* Disable to make sure tasks really are cleared */
++	cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
++	cqcfg &= ~CQHCI_ENABLE;
++	cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
++
++	cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
++	cqcfg |= CQHCI_ENABLE;
++	cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
++
++	cqhci_halt(mmc, CQHCI_FINISH_HALT_TIMEOUT);
++
++	if (!ok)
++		cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT);
+ 
+ 	cqhci_recover_mrqs(cq_host);
+ 
+diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c
+index 540ebccaa9a35..d8e412bbb93bf 100644
+--- a/drivers/mmc/host/sdhci-sprd.c
++++ b/drivers/mmc/host/sdhci-sprd.c
+@@ -392,12 +392,33 @@ static void sdhci_sprd_request_done(struct sdhci_host *host,
+ 	mmc_request_done(host->mmc, mrq);
+ }
+ 
++static void sdhci_sprd_set_power(struct sdhci_host *host, unsigned char mode,
++				 unsigned short vdd)
++{
++	struct mmc_host *mmc = host->mmc;
++
++	switch (mode) {
++	case MMC_POWER_OFF:
++		mmc_regulator_set_ocr(host->mmc, mmc->supply.vmmc, 0);
++
++		mmc_regulator_disable_vqmmc(mmc);
++		break;
++	case MMC_POWER_ON:
++		mmc_regulator_enable_vqmmc(mmc);
++		break;
++	case MMC_POWER_UP:
++		mmc_regulator_set_ocr(host->mmc, mmc->supply.vmmc, vdd);
++		break;
++	}
++}
++
+ static struct sdhci_ops sdhci_sprd_ops = {
+ 	.read_l = sdhci_sprd_readl,
+ 	.write_l = sdhci_sprd_writel,
+ 	.write_w = sdhci_sprd_writew,
+ 	.write_b = sdhci_sprd_writeb,
+ 	.set_clock = sdhci_sprd_set_clock,
++	.set_power = sdhci_sprd_set_power,
+ 	.get_max_clock = sdhci_sprd_get_max_clock,
+ 	.get_min_clock = sdhci_sprd_get_min_clock,
+ 	.set_bus_width = sdhci_set_bus_width,
+@@ -663,6 +684,10 @@ static int sdhci_sprd_probe(struct platform_device *pdev)
+ 	host->caps1 &= ~(SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_SDR104 |
+ 			 SDHCI_SUPPORT_DDR50);
+ 
++	ret = mmc_regulator_get_supply(host->mmc);
++	if (ret)
++		goto pm_runtime_disable;
++
+ 	ret = sdhci_setup_host(host);
+ 	if (ret)
+ 		goto pm_runtime_disable;
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+index a5d6faf7b89e1..23401958bc135 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+@@ -682,10 +682,24 @@ static void xgbe_service(struct work_struct *work)
+ static void xgbe_service_timer(struct timer_list *t)
+ {
+ 	struct xgbe_prv_data *pdata = from_timer(pdata, t, service_timer);
++	struct xgbe_channel *channel;
++	unsigned int i;
+ 
+ 	queue_work(pdata->dev_workqueue, &pdata->service_work);
+ 
+ 	mod_timer(&pdata->service_timer, jiffies + HZ);
++
++	if (!pdata->tx_usecs)
++		return;
++
++	for (i = 0; i < pdata->channel_count; i++) {
++		channel = pdata->channel[i];
++		if (!channel->tx_ring || channel->tx_timer_active)
++			break;
++		channel->tx_timer_active = 1;
++		mod_timer(&channel->tx_timer,
++			  jiffies + usecs_to_jiffies(pdata->tx_usecs));
++	}
+ }
+ 
+ static void xgbe_init_timers(struct xgbe_prv_data *pdata)
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
+index 61f39a0e04f95..64aebf922dda9 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
+@@ -314,10 +314,15 @@ static int xgbe_get_link_ksettings(struct net_device *netdev,
+ 
+ 	cmd->base.phy_address = pdata->phy.address;
+ 
+-	cmd->base.autoneg = pdata->phy.autoneg;
+-	cmd->base.speed = pdata->phy.speed;
+-	cmd->base.duplex = pdata->phy.duplex;
++	if (netif_carrier_ok(netdev)) {
++		cmd->base.speed = pdata->phy.speed;
++		cmd->base.duplex = pdata->phy.duplex;
++	} else {
++		cmd->base.speed = SPEED_UNKNOWN;
++		cmd->base.duplex = DUPLEX_UNKNOWN;
++	}
+ 
++	cmd->base.autoneg = pdata->phy.autoneg;
+ 	cmd->base.port = PORT_NONE;
+ 
+ 	XGBE_LM_COPY(cmd, supported, lks, supported);
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+index ca7372369b3e6..60be836b294bb 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+@@ -1178,7 +1178,19 @@ static int xgbe_phy_config_fixed(struct xgbe_prv_data *pdata)
+ 	if (pdata->phy.duplex != DUPLEX_FULL)
+ 		return -EINVAL;
+ 
+-	xgbe_set_mode(pdata, mode);
++	/* Force the mode change for SFI in Fixed PHY config.
++	 * Fixed PHY configs needs PLL to be enabled while doing mode set.
++	 * When the SFP module isn't connected during boot, driver assumes
++	 * AN is ON and attempts autonegotiation. However, if the connected
++	 * SFP comes up in Fixed PHY config, the link will not come up as
++	 * PLL isn't enabled while the initial mode set command is issued.
++	 * So, force the mode change for SFI in Fixed PHY configuration to
++	 * fix link issues.
++	 */
++	if (mode == XGBE_MODE_SFI)
++		xgbe_change_mode(pdata, mode);
++	else
++		xgbe_set_mode(pdata, mode);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 35401202523ef..07ba0438f9655 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -928,14 +928,12 @@ static int dpaa2_eth_build_single_fd(struct dpaa2_eth_priv *priv,
+ 	dma_addr_t addr;
+ 
+ 	buffer_start = skb->data - dpaa2_eth_needed_headroom(skb);
+-
+-	/* If there's enough room to align the FD address, do it.
+-	 * It will help hardware optimize accesses.
+-	 */
+ 	aligned_start = PTR_ALIGN(buffer_start - DPAA2_ETH_TX_BUF_ALIGN,
+ 				  DPAA2_ETH_TX_BUF_ALIGN);
+ 	if (aligned_start >= skb->head)
+ 		buffer_start = aligned_start;
++	else
++		return -ENOMEM;
+ 
+ 	/* Store a backpointer to the skb at the beginning of the buffer
+ 	 * (in the private data area) such that we can release it
+@@ -4337,6 +4335,8 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
+ 	if (err)
+ 		goto err_dl_port_add;
+ 
++	net_dev->needed_headroom = DPAA2_ETH_SWA_SIZE + DPAA2_ETH_TX_BUF_ALIGN;
++
+ 	err = register_netdev(net_dev);
+ 	if (err < 0) {
+ 		dev_err(dev, "register_netdev() failed\n");
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+index d236b8695c39c..2825f53e7e9b1 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+@@ -664,7 +664,7 @@ static inline bool dpaa2_eth_rx_pause_enabled(u64 link_options)
+ 
+ static inline unsigned int dpaa2_eth_needed_headroom(struct sk_buff *skb)
+ {
+-	unsigned int headroom = DPAA2_ETH_SWA_SIZE;
++	unsigned int headroom = DPAA2_ETH_SWA_SIZE + DPAA2_ETH_TX_BUF_ALIGN;
+ 
+ 	/* If we don't have an skb (e.g. XDP buffer), we only need space for
+ 	 * the software annotation area
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 000dd89c4baff..d6f7a2a58aee8 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -556,7 +556,9 @@ static irqreturn_t otx2_pfvf_mbox_intr_handler(int irq, void *pf_irq)
+ 		otx2_write64(pf, RVU_PF_VFPF_MBOX_INTX(1), intr);
+ 		otx2_queue_work(mbox, pf->mbox_pfvf_wq, 64, vfs, intr,
+ 				TYPE_PFVF);
+-		vfs -= 64;
++		if (intr)
++			trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
++		vfs = 64;
+ 	}
+ 
+ 	intr = otx2_read64(pf, RVU_PF_VFPF_MBOX_INTX(0));
+@@ -564,7 +566,8 @@ static irqreturn_t otx2_pfvf_mbox_intr_handler(int irq, void *pf_irq)
+ 
+ 	otx2_queue_work(mbox, pf->mbox_pfvf_wq, 0, vfs, intr, TYPE_PFVF);
+ 
+-	trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
++	if (intr)
++		trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index ab6af1f1ad5bb..88253cedd9d96 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -582,6 +582,8 @@ struct rtl8169_tc_offsets {
+ enum rtl_flag {
+ 	RTL_FLAG_TASK_ENABLED = 0,
+ 	RTL_FLAG_TASK_RESET_PENDING,
++	RTL_FLAG_TASK_RESET_NO_QUEUE_WAKE,
++	RTL_FLAG_TASK_TX_TIMEOUT,
+ 	RTL_FLAG_MAX
+ };
+ 
+@@ -4036,7 +4038,7 @@ static void rtl8169_tx_timeout(struct net_device *dev, unsigned int txqueue)
+ {
+ 	struct rtl8169_private *tp = netdev_priv(dev);
+ 
+-	rtl_schedule_task(tp, RTL_FLAG_TASK_RESET_PENDING);
++	rtl_schedule_task(tp, RTL_FLAG_TASK_TX_TIMEOUT);
+ }
+ 
+ static int rtl8169_tx_map(struct rtl8169_private *tp, const u32 *opts, u32 len,
+@@ -4656,6 +4658,7 @@ static void rtl_task(struct work_struct *work)
+ {
+ 	struct rtl8169_private *tp =
+ 		container_of(work, struct rtl8169_private, wk.work);
++	int ret;
+ 
+ 	rtnl_lock();
+ 
+@@ -4663,9 +4666,21 @@ static void rtl_task(struct work_struct *work)
+ 	    !test_bit(RTL_FLAG_TASK_ENABLED, tp->wk.flags))
+ 		goto out_unlock;
+ 
++	if (test_and_clear_bit(RTL_FLAG_TASK_TX_TIMEOUT, tp->wk.flags)) {
++		/* ASPM compatibility issues are a typical reason for tx timeouts */
++		ret = pci_disable_link_state(tp->pci_dev, PCIE_LINK_STATE_L1 |
++							  PCIE_LINK_STATE_L0S);
++		if (!ret)
++			netdev_warn_once(tp->dev, "ASPM disabled on Tx timeout\n");
++		goto reset;
++	}
++
+ 	if (test_and_clear_bit(RTL_FLAG_TASK_RESET_PENDING, tp->wk.flags)) {
++reset:
+ 		rtl_reset_work(tp);
+ 		netif_wake_queue(tp->dev);
++	} else if (test_and_clear_bit(RTL_FLAG_TASK_RESET_NO_QUEUE_WAKE, tp->wk.flags)) {
++		rtl_reset_work(tp);
+ 	}
+ out_unlock:
+ 	rtnl_unlock();
+@@ -4699,7 +4714,7 @@ static void r8169_phylink_handler(struct net_device *ndev)
+ 	} else {
+ 		/* In few cases rx is broken after link-down otherwise */
+ 		if (rtl_is_8125(tp))
+-			rtl_reset_work(tp);
++			rtl_schedule_task(tp, RTL_FLAG_TASK_RESET_NO_QUEUE_WAKE);
+ 		pm_runtime_idle(d);
+ 	}
+ 
+@@ -4769,7 +4784,7 @@ static int rtl8169_close(struct net_device *dev)
+ 	rtl8169_down(tp);
+ 	rtl8169_rx_clear(tp);
+ 
+-	cancel_work_sync(&tp->wk.work);
++	cancel_work(&tp->wk.work);
+ 
+ 	free_irq(pci_irq_vector(pdev, 0), tp);
+ 
+@@ -5035,6 +5050,8 @@ static void rtl_remove_one(struct pci_dev *pdev)
+ 	if (pci_dev_run_wake(pdev))
+ 		pm_runtime_get_noresume(&pdev->dev);
+ 
++	cancel_work_sync(&tp->wk.work);
++
+ 	unregister_netdev(tp->dev);
+ 
+ 	if (r8168_check_dash(tp))
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index f218bacec0013..f092f468016bd 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -1383,13 +1383,13 @@ static int ravb_open(struct net_device *ndev)
+ 	if (priv->chip_id == RCAR_GEN2)
+ 		ravb_ptp_init(ndev, priv->pdev);
+ 
+-	netif_tx_start_all_queues(ndev);
+-
+ 	/* PHY control start */
+ 	error = ravb_phy_start(ndev);
+ 	if (error)
+ 		goto out_ptp_stop;
+ 
++	netif_tx_start_all_queues(ndev);
++
+ 	return 0;
+ 
+ out_ptp_stop:
+@@ -1438,6 +1438,12 @@ static void ravb_tx_timeout_work(struct work_struct *work)
+ 	struct net_device *ndev = priv->ndev;
+ 	int error;
+ 
++	if (!rtnl_trylock()) {
++		usleep_range(1000, 2000);
++		schedule_work(&priv->work);
++		return;
++	}
++
+ 	netif_tx_stop_all_queues(ndev);
+ 
+ 	/* Stop PTP Clock driver */
+@@ -1470,7 +1476,7 @@ static void ravb_tx_timeout_work(struct work_struct *work)
+ 		 */
+ 		netdev_err(ndev, "%s: ravb_dmac_init() failed, error %d\n",
+ 			   __func__, error);
+-		return;
++		goto out_unlock;
+ 	}
+ 	ravb_emac_init(ndev);
+ 
+@@ -1480,6 +1486,9 @@ out:
+ 		ravb_ptp_init(ndev, priv->pdev);
+ 
+ 	netif_tx_start_all_queues(ndev);
++
++out_unlock:
++	rtnl_unlock();
+ }
+ 
+ /* Packet transmit function for Ethernet AVB */
+@@ -2063,7 +2072,9 @@ static int ravb_probe(struct platform_device *pdev)
+ 	ndev->hw_features = NETIF_F_RXCSUM;
+ 
+ 	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_get_sync(&pdev->dev);
++	error = pm_runtime_resume_and_get(&pdev->dev);
++	if (error < 0)
++		goto out_rpm_disable;
+ 
+ 	/* The Ether-specific entries in the device structure. */
+ 	ndev->base_addr = res->start;
+@@ -2238,6 +2249,7 @@ out_release:
+ 	free_netdev(ndev);
+ 
+ 	pm_runtime_put(&pdev->dev);
++out_rpm_disable:
+ 	pm_runtime_disable(&pdev->dev);
+ 	return error;
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/mmc_core.c b/drivers/net/ethernet/stmicro/stmmac/mmc_core.c
+index a57b0fa815aba..a510bac0b825b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/mmc_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/mmc_core.c
+@@ -177,8 +177,10 @@
+ #define MMC_XGMAC_RX_DISCARD_OCT_GB	0x1b4
+ #define MMC_XGMAC_RX_ALIGN_ERR_PKT	0x1bc
+ 
++#define MMC_XGMAC_TX_FPE_INTR_MASK	0x204
+ #define MMC_XGMAC_TX_FPE_FRAG		0x208
+ #define MMC_XGMAC_TX_HOLD_REQ		0x20c
++#define MMC_XGMAC_RX_FPE_INTR_MASK	0x224
+ #define MMC_XGMAC_RX_PKT_ASSEMBLY_ERR	0x228
+ #define MMC_XGMAC_RX_PKT_SMD_ERR	0x22c
+ #define MMC_XGMAC_RX_PKT_ASSEMBLY_OK	0x230
+@@ -352,6 +354,8 @@ static void dwxgmac_mmc_intr_all_mask(void __iomem *mmcaddr)
+ {
+ 	writel(0x0, mmcaddr + MMC_RX_INTR_MASK);
+ 	writel(0x0, mmcaddr + MMC_TX_INTR_MASK);
++	writel(MMC_DEFAULT_MASK, mmcaddr + MMC_XGMAC_TX_FPE_INTR_MASK);
++	writel(MMC_DEFAULT_MASK, mmcaddr + MMC_XGMAC_RX_FPE_INTR_MASK);
+ 	writel(MMC_DEFAULT_MASK, mmcaddr + MMC_XGMAC_RX_IPC_INTR_MASK);
+ }
+ 
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 9d362283196aa..2a5a3f8761c30 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -763,7 +763,7 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 		if (lp->features & XAE_FEATURE_FULL_TX_CSUM) {
+ 			/* Tx Full Checksum Offload Enabled */
+ 			cur_p->app0 |= 2;
+-		} else if (lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) {
++		} else if (lp->features & XAE_FEATURE_PARTIAL_TX_CSUM) {
+ 			csum_start_off = skb_transport_offset(skb);
+ 			csum_index_off = csum_start_off + skb->csum_offset;
+ 			/* Tx Partial Checksum Offload Enabled */
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index f2020be43cfea..790bf750281ad 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2217,9 +2217,6 @@ static int netvsc_vf_join(struct net_device *vf_netdev,
+ 		goto upper_link_failed;
+ 	}
+ 
+-	/* set slave flag before open to prevent IPv6 addrconf */
+-	vf_netdev->flags |= IFF_SLAVE;
+-
+ 	schedule_delayed_work(&ndev_ctx->vf_takeover, VF_TAKEOVER_INT);
+ 
+ 	call_netdevice_notifiers(NETDEV_JOIN, vf_netdev);
+@@ -2317,16 +2314,18 @@ static struct net_device *get_netvsc_byslot(const struct net_device *vf_netdev)
+ 			return hv_get_drvdata(ndev_ctx->device_ctx);
+ 	}
+ 
+-	/* Fallback path to check synthetic vf with
+-	 * help of mac addr
++	/* Fallback path to check synthetic vf with help of mac addr.
++	 * Because this function can be called before vf_netdev is
++	 * initialized (NETDEV_POST_INIT) when its perm_addr has not been copied
++	 * from dev_addr, also try to match to its dev_addr.
++	 * Note: On Hyper-V and Azure, it's not possible to set a MAC address
++	 * on a VF that matches to the MAC of a unrelated NETVSC device.
+ 	 */
+ 	list_for_each_entry(ndev_ctx, &netvsc_dev_list, list) {
+ 		ndev = hv_get_drvdata(ndev_ctx->device_ctx);
+-		if (ether_addr_equal(vf_netdev->perm_addr, ndev->perm_addr)) {
+-			netdev_notice(vf_netdev,
+-				      "falling back to mac addr based matching\n");
++		if (ether_addr_equal(vf_netdev->perm_addr, ndev->perm_addr) ||
++		    ether_addr_equal(vf_netdev->dev_addr, ndev->perm_addr))
+ 			return ndev;
+-		}
+ 	}
+ 
+ 	netdev_notice(vf_netdev,
+@@ -2334,6 +2333,19 @@ static struct net_device *get_netvsc_byslot(const struct net_device *vf_netdev)
+ 	return NULL;
+ }
+ 
++static int netvsc_prepare_bonding(struct net_device *vf_netdev)
++{
++	struct net_device *ndev;
++
++	ndev = get_netvsc_byslot(vf_netdev);
++	if (!ndev)
++		return NOTIFY_DONE;
++
++	/* set slave flag before open to prevent IPv6 addrconf */
++	vf_netdev->flags |= IFF_SLAVE;
++	return NOTIFY_DONE;
++}
++
+ static int netvsc_register_vf(struct net_device *vf_netdev)
+ {
+ 	struct net_device_context *net_device_ctx;
+@@ -2516,15 +2528,6 @@ static int netvsc_probe(struct hv_device *dev,
+ 		goto devinfo_failed;
+ 	}
+ 
+-	nvdev = rndis_filter_device_add(dev, device_info);
+-	if (IS_ERR(nvdev)) {
+-		ret = PTR_ERR(nvdev);
+-		netdev_err(net, "unable to add netvsc device (ret %d)\n", ret);
+-		goto rndis_failed;
+-	}
+-
+-	memcpy(net->dev_addr, device_info->mac_adr, ETH_ALEN);
+-
+ 	/* We must get rtnl lock before scheduling nvdev->subchan_work,
+ 	 * otherwise netvsc_subchan_work() can get rtnl lock first and wait
+ 	 * all subchannels to show up, but that may not happen because
+@@ -2532,9 +2535,23 @@ static int netvsc_probe(struct hv_device *dev,
+ 	 * -> ... -> device_add() -> ... -> __device_attach() can't get
+ 	 * the device lock, so all the subchannels can't be processed --
+ 	 * finally netvsc_subchan_work() hangs forever.
++	 *
++	 * The rtnl lock also needs to be held before rndis_filter_device_add()
++	 * which advertises nvsp_2_vsc_capability / sriov bit, and triggers
++	 * VF NIC offering and registering. If VF NIC finished register_netdev()
++	 * earlier it may cause name based config failure.
+ 	 */
+ 	rtnl_lock();
+ 
++	nvdev = rndis_filter_device_add(dev, device_info);
++	if (IS_ERR(nvdev)) {
++		ret = PTR_ERR(nvdev);
++		netdev_err(net, "unable to add netvsc device (ret %d)\n", ret);
++		goto rndis_failed;
++	}
++
++	memcpy(net->dev_addr, device_info->mac_adr, ETH_ALEN);
++
+ 	if (nvdev->num_chn > 1)
+ 		schedule_work(&nvdev->subchan_work);
+ 
+@@ -2568,9 +2585,9 @@ static int netvsc_probe(struct hv_device *dev,
+ 	return 0;
+ 
+ register_failed:
+-	rtnl_unlock();
+ 	rndis_filter_device_remove(dev, nvdev);
+ rndis_failed:
++	rtnl_unlock();
+ 	netvsc_devinfo_put(device_info);
+ devinfo_failed:
+ 	free_percpu(net_device_ctx->vf_stats);
+@@ -2737,6 +2754,8 @@ static int netvsc_netdev_event(struct notifier_block *this,
+ 		return NOTIFY_DONE;
+ 
+ 	switch (event) {
++	case NETDEV_POST_INIT:
++		return netvsc_prepare_bonding(event_dev);
+ 	case NETDEV_REGISTER:
+ 		return netvsc_register_vf(event_dev);
+ 	case NETDEV_UNREGISTER:
+@@ -2771,12 +2790,17 @@ static int __init netvsc_drv_init(void)
+ 	}
+ 	netvsc_ring_bytes = ring_size * PAGE_SIZE;
+ 
++	register_netdevice_notifier(&netvsc_netdev_notifier);
++
+ 	ret = vmbus_driver_register(&netvsc_drv);
+ 	if (ret)
+-		return ret;
++		goto err_vmbus_reg;
+ 
+-	register_netdevice_notifier(&netvsc_netdev_notifier);
+ 	return 0;
++
++err_vmbus_reg:
++	unregister_netdevice_notifier(&netvsc_netdev_notifier);
++	return ret;
+ }
+ 
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index 79a53fe245e5c..38cb863ccb911 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -1700,11 +1700,11 @@ static int ax88179_reset(struct usbnet *dev)
+ 
+ 	*tmp16 = AX_PHYPWR_RSTCTL_IPRL;
+ 	ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_PHYPWR_RSTCTL, 2, 2, tmp16);
+-	msleep(200);
++	msleep(500);
+ 
+ 	*tmp = AX_CLK_SELECT_ACS | AX_CLK_SELECT_BCS;
+ 	ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_CLK_SELECT, 1, 1, tmp);
+-	msleep(100);
++	msleep(200);
+ 
+ 	/* Ethernet PHY Auto Detach*/
+ 	ax88179_auto_detach(dev, 0);
+diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
+index e0693cd965ec4..713ca20feaef4 100644
+--- a/drivers/net/wireguard/device.c
++++ b/drivers/net/wireguard/device.c
+@@ -193,7 +193,7 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	 */
+ 	while (skb_queue_len(&peer->staged_packet_queue) > MAX_STAGED_PACKETS) {
+ 		dev_kfree_skb(__skb_dequeue(&peer->staged_packet_queue));
+-		++dev->stats.tx_dropped;
++		DEV_STATS_INC(dev, tx_dropped);
+ 	}
+ 	skb_queue_splice_tail(&packets, &peer->staged_packet_queue);
+ 	spin_unlock_bh(&peer->staged_packet_queue.lock);
+@@ -211,7 +211,7 @@ err_icmp:
+ 	else if (skb->protocol == htons(ETH_P_IPV6))
+ 		icmpv6_ndo_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0);
+ err:
+-	++dev->stats.tx_errors;
++	DEV_STATS_INC(dev, tx_errors);
+ 	kfree_skb(skb);
+ 	return ret;
+ }
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
+index f500aaf678370..d38b24339a1f9 100644
+--- a/drivers/net/wireguard/receive.c
++++ b/drivers/net/wireguard/receive.c
+@@ -423,20 +423,20 @@ dishonest_packet_peer:
+ 	net_dbg_skb_ratelimited("%s: Packet has unallowed src IP (%pISc) from peer %llu (%pISpfsc)\n",
+ 				dev->name, skb, peer->internal_id,
+ 				&peer->endpoint.addr);
+-	++dev->stats.rx_errors;
+-	++dev->stats.rx_frame_errors;
++	DEV_STATS_INC(dev, rx_errors);
++	DEV_STATS_INC(dev, rx_frame_errors);
+ 	goto packet_processed;
+ dishonest_packet_type:
+ 	net_dbg_ratelimited("%s: Packet is neither ipv4 nor ipv6 from peer %llu (%pISpfsc)\n",
+ 			    dev->name, peer->internal_id, &peer->endpoint.addr);
+-	++dev->stats.rx_errors;
+-	++dev->stats.rx_frame_errors;
++	DEV_STATS_INC(dev, rx_errors);
++	DEV_STATS_INC(dev, rx_frame_errors);
+ 	goto packet_processed;
+ dishonest_packet_size:
+ 	net_dbg_ratelimited("%s: Packet has incorrect size from peer %llu (%pISpfsc)\n",
+ 			    dev->name, peer->internal_id, &peer->endpoint.addr);
+-	++dev->stats.rx_errors;
+-	++dev->stats.rx_length_errors;
++	DEV_STATS_INC(dev, rx_errors);
++	DEV_STATS_INC(dev, rx_length_errors);
+ 	goto packet_processed;
+ packet_processed:
+ 	dev_kfree_skb(skb);
+diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c
+index 95c853b59e1da..0d48e0f4a1ba3 100644
+--- a/drivers/net/wireguard/send.c
++++ b/drivers/net/wireguard/send.c
+@@ -333,7 +333,8 @@ err:
+ void wg_packet_purge_staged_packets(struct wg_peer *peer)
+ {
+ 	spin_lock_bh(&peer->staged_packet_queue.lock);
+-	peer->device->dev->stats.tx_dropped += peer->staged_packet_queue.qlen;
++	DEV_STATS_ADD(peer->device->dev, tx_dropped,
++		      peer->staged_packet_queue.qlen);
+ 	__skb_queue_purge(&peer->staged_packet_queue);
+ 	spin_unlock_bh(&peer->staged_packet_queue.lock);
+ }
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index a82a0796a6148..59109eb8e8e46 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -1189,19 +1189,19 @@ static void nvmet_init_cap(struct nvmet_ctrl *ctrl)
+ 	ctrl->cap |= NVMET_QUEUE_SIZE - 1;
+ }
+ 
+-u16 nvmet_ctrl_find_get(const char *subsysnqn, const char *hostnqn, u16 cntlid,
+-		struct nvmet_req *req, struct nvmet_ctrl **ret)
++struct nvmet_ctrl *nvmet_ctrl_find_get(const char *subsysnqn,
++				       const char *hostnqn, u16 cntlid,
++				       struct nvmet_req *req)
+ {
++	struct nvmet_ctrl *ctrl = NULL;
+ 	struct nvmet_subsys *subsys;
+-	struct nvmet_ctrl *ctrl;
+-	u16 status = 0;
+ 
+ 	subsys = nvmet_find_get_subsys(req->port, subsysnqn);
+ 	if (!subsys) {
+ 		pr_warn("connect request for invalid subsystem %s!\n",
+ 			subsysnqn);
+ 		req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn);
+-		return NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
++		goto out;
+ 	}
+ 
+ 	mutex_lock(&subsys->lock);
+@@ -1214,20 +1214,21 @@ u16 nvmet_ctrl_find_get(const char *subsysnqn, const char *hostnqn, u16 cntlid,
+ 			if (!kref_get_unless_zero(&ctrl->ref))
+ 				continue;
+ 
+-			*ret = ctrl;
+-			goto out;
++			/* ctrl found */
++			goto found;
+ 		}
+ 	}
+ 
++	ctrl = NULL; /* ctrl not found */
+ 	pr_warn("could not find controller %d for subsys %s / host %s\n",
+ 		cntlid, subsysnqn, hostnqn);
+ 	req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(cntlid);
+-	status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
+ 
+-out:
++found:
+ 	mutex_unlock(&subsys->lock);
+ 	nvmet_subsys_put(subsys);
+-	return status;
++out:
++	return ctrl;
+ }
+ 
+ u16 nvmet_check_ctrl_status(struct nvmet_req *req, struct nvme_command *cmd)
+diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
+index e62d3d0fa6c85..fb4f62982cb7e 100644
+--- a/drivers/nvme/target/fabrics-cmd.c
++++ b/drivers/nvme/target/fabrics-cmd.c
+@@ -189,6 +189,8 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
+ 		goto out;
+ 	}
+ 
++	d->subsysnqn[NVMF_NQN_FIELD_LEN - 1] = '\0';
++	d->hostnqn[NVMF_NQN_FIELD_LEN - 1] = '\0';
+ 	status = nvmet_alloc_ctrl(d->subsysnqn, d->hostnqn, req,
+ 				  le32_to_cpu(c->kato), &ctrl);
+ 	if (status) {
+@@ -223,7 +225,7 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
+ {
+ 	struct nvmf_connect_command *c = &req->cmd->connect;
+ 	struct nvmf_connect_data *d;
+-	struct nvmet_ctrl *ctrl = NULL;
++	struct nvmet_ctrl *ctrl;
+ 	u16 qid = le16_to_cpu(c->qid);
+ 	u16 status = 0;
+ 
+@@ -250,11 +252,14 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
+ 		goto out;
+ 	}
+ 
+-	status = nvmet_ctrl_find_get(d->subsysnqn, d->hostnqn,
+-				     le16_to_cpu(d->cntlid),
+-				     req, &ctrl);
+-	if (status)
++	d->subsysnqn[NVMF_NQN_FIELD_LEN - 1] = '\0';
++	d->hostnqn[NVMF_NQN_FIELD_LEN - 1] = '\0';
++	ctrl = nvmet_ctrl_find_get(d->subsysnqn, d->hostnqn,
++				   le16_to_cpu(d->cntlid), req);
++	if (!ctrl) {
++		status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
+ 		goto out;
++	}
+ 
+ 	if (unlikely(qid > ctrl->subsys->max_qid)) {
+ 		pr_warn("invalid queue id (%d)\n", qid);
+diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
+index 4bf6d21290c23..ef162b64fabef 100644
+--- a/drivers/nvme/target/nvmet.h
++++ b/drivers/nvme/target/nvmet.h
+@@ -430,8 +430,9 @@ void nvmet_ctrl_fatal_error(struct nvmet_ctrl *ctrl);
+ void nvmet_update_cc(struct nvmet_ctrl *ctrl, u32 new);
+ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
+ 		struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp);
+-u16 nvmet_ctrl_find_get(const char *subsysnqn, const char *hostnqn, u16 cntlid,
+-		struct nvmet_req *req, struct nvmet_ctrl **ret);
++struct nvmet_ctrl *nvmet_ctrl_find_get(const char *subsysnqn,
++				       const char *hostnqn, u16 cntlid,
++				       struct nvmet_req *req);
+ void nvmet_ctrl_put(struct nvmet_ctrl *ctrl);
+ u16 nvmet_check_ctrl_status(struct nvmet_req *req, struct nvme_command *cmd);
+ 
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index 5b722287aac92..afaea201a5afc 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -865,8 +865,8 @@ static irqreturn_t ks_pcie_err_irq_handler(int irq, void *priv)
+ 	return ks_pcie_handle_error_irq(ks_pcie);
+ }
+ 
+-static int __init ks_pcie_add_pcie_port(struct keystone_pcie *ks_pcie,
+-					struct platform_device *pdev)
++static int ks_pcie_add_pcie_port(struct keystone_pcie *ks_pcie,
++				 struct platform_device *pdev)
+ {
+ 	struct dw_pcie *pci = ks_pcie->pci;
+ 	struct pcie_port *pp = &pci->pp;
+@@ -978,8 +978,8 @@ static const struct dw_pcie_ep_ops ks_pcie_am654_ep_ops = {
+ 	.get_features = &ks_pcie_am654_get_features,
+ };
+ 
+-static int __init ks_pcie_add_pcie_ep(struct keystone_pcie *ks_pcie,
+-				      struct platform_device *pdev)
++static int ks_pcie_add_pcie_ep(struct keystone_pcie *ks_pcie,
++			       struct platform_device *pdev)
+ {
+ 	int ret;
+ 	struct dw_pcie_ep *ep;
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 840000870d5a0..4d52494369935 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -1239,17 +1239,17 @@ static void pinctrl_link_add(struct pinctrl_dev *pctldev,
+ static int pinctrl_commit_state(struct pinctrl *p, struct pinctrl_state *state)
+ {
+ 	struct pinctrl_setting *setting, *setting2;
+-	struct pinctrl_state *old_state = p->state;
++	struct pinctrl_state *old_state = READ_ONCE(p->state);
+ 	int ret;
+ 
+-	if (p->state) {
++	if (old_state) {
+ 		/*
+ 		 * For each pinmux setting in the old state, forget SW's record
+ 		 * of mux owner for that pingroup. Any pingroups which are
+ 		 * still owned by the new state will be re-acquired by the call
+ 		 * to pinmux_enable_setting() in the loop below.
+ 		 */
+-		list_for_each_entry(setting, &p->state->settings, node) {
++		list_for_each_entry(setting, &old_state->settings, node) {
+ 			if (setting->type != PIN_MAP_TYPE_MUX_GROUP)
+ 				continue;
+ 			pinmux_disable_setting(setting);
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index 09e932a7b17f4..81de5c98221a6 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -737,18 +737,20 @@ static void dasd_profile_start(struct dasd_block *block,
+ 	 * we count each request only once.
+ 	 */
+ 	device = cqr->startdev;
+-	if (device->profile.data) {
+-		counter = 1; /* request is not yet queued on the start device */
+-		list_for_each(l, &device->ccw_queue)
+-			if (++counter >= 31)
+-				break;
+-	}
++	if (!device->profile.data)
++		return;
++
++	spin_lock(get_ccwdev_lock(device->cdev));
++	counter = 1; /* request is not yet queued on the start device */
++	list_for_each(l, &device->ccw_queue)
++		if (++counter >= 31)
++			break;
++	spin_unlock(get_ccwdev_lock(device->cdev));
++
+ 	spin_lock(&device->profile.lock);
+-	if (device->profile.data) {
+-		device->profile.data->dasd_io_nr_req[counter]++;
+-		if (rq_data_dir(req) == READ)
+-			device->profile.data->dasd_read_nr_req[counter]++;
+-	}
++	device->profile.data->dasd_io_nr_req[counter]++;
++	if (rq_data_dir(req) == READ)
++		device->profile.data->dasd_read_nr_req[counter]++;
+ 	spin_unlock(&device->profile.lock);
+ }
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 8d199deaf3b12..0930bf996cd30 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -839,7 +839,7 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ 		uint16_t hwq;
+ 		struct qla_qpair *qpair = NULL;
+ 
+-		tag = blk_mq_unique_tag(cmd->request);
++		tag = blk_mq_unique_tag(scsi_cmd_to_rq(cmd));
+ 		hwq = blk_mq_unique_tag_to_hwq(tag);
+ 		qpair = ha->queue_pair_map[hwq];
+ 
+@@ -1714,8 +1714,16 @@ static void qla2x00_abort_srb(struct qla_qpair *qp, srb_t *sp, const int res,
+ 		}
+ 
+ 		spin_lock_irqsave(qp->qp_lock_ptr, *flags);
+-		if (ret_cmd && blk_mq_request_started(cmd->request))
+-			sp->done(sp, res);
++		switch (sp->type) {
++		case SRB_SCSI_CMD:
++			if (ret_cmd && blk_mq_request_started(scsi_cmd_to_rq(cmd)))
++				sp->done(sp, res);
++			break;
++		default:
++			if (ret_cmd)
++				sp->done(sp, res);
++			break;
++		}
+ 	} else {
+ 		sp->done(sp, res);
+ 	}
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index 39f1eca60a714..2f8a1225b6976 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -61,7 +61,7 @@ static void usb_parse_ssp_isoc_endpoint_companion(struct device *ddev,
+ 	desc = (struct usb_ssp_isoc_ep_comp_descriptor *) buffer;
+ 	if (desc->bDescriptorType != USB_DT_SSP_ISOC_ENDPOINT_COMP ||
+ 	    size < USB_DT_SSP_ISOC_EP_COMP_SIZE) {
+-		dev_warn(ddev, "Invalid SuperSpeedPlus isoc endpoint companion"
++		dev_notice(ddev, "Invalid SuperSpeedPlus isoc endpoint companion"
+ 			 "for config %d interface %d altsetting %d ep %d.\n",
+ 			 cfgno, inum, asnum, ep->desc.bEndpointAddress);
+ 		return;
+@@ -83,7 +83,7 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
+ 
+ 	if (desc->bDescriptorType != USB_DT_SS_ENDPOINT_COMP ||
+ 			size < USB_DT_SS_EP_COMP_SIZE) {
+-		dev_warn(ddev, "No SuperSpeed endpoint companion for config %d "
++		dev_notice(ddev, "No SuperSpeed endpoint companion for config %d "
+ 				" interface %d altsetting %d ep %d: "
+ 				"using minimum values\n",
+ 				cfgno, inum, asnum, ep->desc.bEndpointAddress);
+@@ -109,13 +109,13 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
+ 
+ 	/* Check the various values */
+ 	if (usb_endpoint_xfer_control(&ep->desc) && desc->bMaxBurst != 0) {
+-		dev_warn(ddev, "Control endpoint with bMaxBurst = %d in "
++		dev_notice(ddev, "Control endpoint with bMaxBurst = %d in "
+ 				"config %d interface %d altsetting %d ep %d: "
+ 				"setting to zero\n", desc->bMaxBurst,
+ 				cfgno, inum, asnum, ep->desc.bEndpointAddress);
+ 		ep->ss_ep_comp.bMaxBurst = 0;
+ 	} else if (desc->bMaxBurst > 15) {
+-		dev_warn(ddev, "Endpoint with bMaxBurst = %d in "
++		dev_notice(ddev, "Endpoint with bMaxBurst = %d in "
+ 				"config %d interface %d altsetting %d ep %d: "
+ 				"setting to 15\n", desc->bMaxBurst,
+ 				cfgno, inum, asnum, ep->desc.bEndpointAddress);
+@@ -125,7 +125,7 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
+ 	if ((usb_endpoint_xfer_control(&ep->desc) ||
+ 			usb_endpoint_xfer_int(&ep->desc)) &&
+ 				desc->bmAttributes != 0) {
+-		dev_warn(ddev, "%s endpoint with bmAttributes = %d in "
++		dev_notice(ddev, "%s endpoint with bmAttributes = %d in "
+ 				"config %d interface %d altsetting %d ep %d: "
+ 				"setting to zero\n",
+ 				usb_endpoint_xfer_control(&ep->desc) ? "Control" : "Bulk",
+@@ -134,7 +134,7 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
+ 		ep->ss_ep_comp.bmAttributes = 0;
+ 	} else if (usb_endpoint_xfer_bulk(&ep->desc) &&
+ 			desc->bmAttributes > 16) {
+-		dev_warn(ddev, "Bulk endpoint with more than 65536 streams in "
++		dev_notice(ddev, "Bulk endpoint with more than 65536 streams in "
+ 				"config %d interface %d altsetting %d ep %d: "
+ 				"setting to max\n",
+ 				cfgno, inum, asnum, ep->desc.bEndpointAddress);
+@@ -142,7 +142,7 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
+ 	} else if (usb_endpoint_xfer_isoc(&ep->desc) &&
+ 		   !USB_SS_SSP_ISOC_COMP(desc->bmAttributes) &&
+ 		   USB_SS_MULT(desc->bmAttributes) > 3) {
+-		dev_warn(ddev, "Isoc endpoint has Mult of %d in "
++		dev_notice(ddev, "Isoc endpoint has Mult of %d in "
+ 				"config %d interface %d altsetting %d ep %d: "
+ 				"setting to 3\n",
+ 				USB_SS_MULT(desc->bmAttributes),
+@@ -160,7 +160,7 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
+ 	else
+ 		max_tx = 999999;
+ 	if (le16_to_cpu(desc->wBytesPerInterval) > max_tx) {
+-		dev_warn(ddev, "%s endpoint with wBytesPerInterval of %d in "
++		dev_notice(ddev, "%s endpoint with wBytesPerInterval of %d in "
+ 				"config %d interface %d altsetting %d ep %d: "
+ 				"setting to %d\n",
+ 				usb_endpoint_xfer_isoc(&ep->desc) ? "Isoc" : "Int",
+@@ -273,7 +273,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 	else if (d->bLength >= USB_DT_ENDPOINT_SIZE)
+ 		n = USB_DT_ENDPOINT_SIZE;
+ 	else {
+-		dev_warn(ddev, "config %d interface %d altsetting %d has an "
++		dev_notice(ddev, "config %d interface %d altsetting %d has an "
+ 		    "invalid endpoint descriptor of length %d, skipping\n",
+ 		    cfgno, inum, asnum, d->bLength);
+ 		goto skip_to_next_endpoint_or_interface_descriptor;
+@@ -281,7 +281,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 
+ 	i = d->bEndpointAddress & ~USB_ENDPOINT_DIR_MASK;
+ 	if (i >= 16 || i == 0) {
+-		dev_warn(ddev, "config %d interface %d altsetting %d has an "
++		dev_notice(ddev, "config %d interface %d altsetting %d has an "
+ 		    "invalid endpoint with address 0x%X, skipping\n",
+ 		    cfgno, inum, asnum, d->bEndpointAddress);
+ 		goto skip_to_next_endpoint_or_interface_descriptor;
+@@ -293,7 +293,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 
+ 	/* Check for duplicate endpoint addresses */
+ 	if (config_endpoint_is_duplicate(config, inum, asnum, d)) {
+-		dev_warn(ddev, "config %d interface %d altsetting %d has a duplicate endpoint with address 0x%X, skipping\n",
++		dev_notice(ddev, "config %d interface %d altsetting %d has a duplicate endpoint with address 0x%X, skipping\n",
+ 				cfgno, inum, asnum, d->bEndpointAddress);
+ 		goto skip_to_next_endpoint_or_interface_descriptor;
+ 	}
+@@ -301,7 +301,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 	/* Ignore some endpoints */
+ 	if (udev->quirks & USB_QUIRK_ENDPOINT_IGNORE) {
+ 		if (usb_endpoint_is_ignored(udev, ifp, d)) {
+-			dev_warn(ddev, "config %d interface %d altsetting %d has an ignored endpoint with address 0x%X, skipping\n",
++			dev_notice(ddev, "config %d interface %d altsetting %d has an ignored endpoint with address 0x%X, skipping\n",
+ 					cfgno, inum, asnum,
+ 					d->bEndpointAddress);
+ 			goto skip_to_next_endpoint_or_interface_descriptor;
+@@ -378,7 +378,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 		}
+ 	}
+ 	if (d->bInterval < i || d->bInterval > j) {
+-		dev_warn(ddev, "config %d interface %d altsetting %d "
++		dev_notice(ddev, "config %d interface %d altsetting %d "
+ 		    "endpoint 0x%X has an invalid bInterval %d, "
+ 		    "changing to %d\n",
+ 		    cfgno, inum, asnum,
+@@ -391,7 +391,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 	 * them usable, we will try treating them as Interrupt endpoints.
+ 	 */
+ 	if (udev->speed == USB_SPEED_LOW && usb_endpoint_xfer_bulk(d)) {
+-		dev_warn(ddev, "config %d interface %d altsetting %d "
++		dev_notice(ddev, "config %d interface %d altsetting %d "
+ 		    "endpoint 0x%X is Bulk; changing to Interrupt\n",
+ 		    cfgno, inum, asnum, d->bEndpointAddress);
+ 		endpoint->desc.bmAttributes = USB_ENDPOINT_XFER_INT;
+@@ -408,7 +408,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 	 */
+ 	maxp = le16_to_cpu(endpoint->desc.wMaxPacketSize);
+ 	if (maxp == 0 && !(usb_endpoint_xfer_isoc(d) && asnum == 0)) {
+-		dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has invalid wMaxPacketSize 0\n",
++		dev_notice(ddev, "config %d interface %d altsetting %d endpoint 0x%X has invalid wMaxPacketSize 0\n",
+ 		    cfgno, inum, asnum, d->bEndpointAddress);
+ 	}
+ 
+@@ -439,7 +439,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 	j = maxpacket_maxes[usb_endpoint_type(&endpoint->desc)];
+ 
+ 	if (maxp > j) {
+-		dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has invalid maxpacket %d, setting to %d\n",
++		dev_notice(ddev, "config %d interface %d altsetting %d endpoint 0x%X has invalid maxpacket %d, setting to %d\n",
+ 		    cfgno, inum, asnum, d->bEndpointAddress, maxp, j);
+ 		maxp = j;
+ 		endpoint->desc.wMaxPacketSize = cpu_to_le16(i | maxp);
+@@ -452,7 +452,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 	 */
+ 	if (udev->speed == USB_SPEED_HIGH && usb_endpoint_xfer_bulk(d)) {
+ 		if (maxp != 512)
+-			dev_warn(ddev, "config %d interface %d altsetting %d "
++			dev_notice(ddev, "config %d interface %d altsetting %d "
+ 				"bulk endpoint 0x%X has invalid maxpacket %d\n",
+ 				cfgno, inum, asnum, d->bEndpointAddress,
+ 				maxp);
+@@ -533,7 +533,7 @@ static int usb_parse_interface(struct device *ddev, int cfgno,
+ 	      i < intfc->num_altsetting;
+ 	     (++i, ++alt)) {
+ 		if (alt->desc.bAlternateSetting == asnum) {
+-			dev_warn(ddev, "Duplicate descriptor for config %d "
++			dev_notice(ddev, "Duplicate descriptor for config %d "
+ 			    "interface %d altsetting %d, skipping\n",
+ 			    cfgno, inum, asnum);
+ 			goto skip_to_next_interface_descriptor;
+@@ -559,7 +559,7 @@ static int usb_parse_interface(struct device *ddev, int cfgno,
+ 	num_ep = num_ep_orig = alt->desc.bNumEndpoints;
+ 	alt->desc.bNumEndpoints = 0;		/* Use as a counter */
+ 	if (num_ep > USB_MAXENDPOINTS) {
+-		dev_warn(ddev, "too many endpoints for config %d interface %d "
++		dev_notice(ddev, "too many endpoints for config %d interface %d "
+ 		    "altsetting %d: %d, using maximum allowed: %d\n",
+ 		    cfgno, inum, asnum, num_ep, USB_MAXENDPOINTS);
+ 		num_ep = USB_MAXENDPOINTS;
+@@ -590,7 +590,7 @@ static int usb_parse_interface(struct device *ddev, int cfgno,
+ 	}
+ 
+ 	if (n != num_ep_orig)
+-		dev_warn(ddev, "config %d interface %d altsetting %d has %d "
++		dev_notice(ddev, "config %d interface %d altsetting %d has %d "
+ 		    "endpoint descriptor%s, different from the interface "
+ 		    "descriptor's value: %d\n",
+ 		    cfgno, inum, asnum, n, plural(n), num_ep_orig);
+@@ -625,7 +625,7 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 	if (config->desc.bDescriptorType != USB_DT_CONFIG ||
+ 	    config->desc.bLength < USB_DT_CONFIG_SIZE ||
+ 	    config->desc.bLength > size) {
+-		dev_err(ddev, "invalid descriptor for config index %d: "
++		dev_notice(ddev, "invalid descriptor for config index %d: "
+ 		    "type = 0x%X, length = %d\n", cfgidx,
+ 		    config->desc.bDescriptorType, config->desc.bLength);
+ 		return -EINVAL;
+@@ -636,7 +636,7 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 	size -= config->desc.bLength;
+ 
+ 	if (nintf > USB_MAXINTERFACES) {
+-		dev_warn(ddev, "config %d has too many interfaces: %d, "
++		dev_notice(ddev, "config %d has too many interfaces: %d, "
+ 		    "using maximum allowed: %d\n",
+ 		    cfgno, nintf, USB_MAXINTERFACES);
+ 		nintf = USB_MAXINTERFACES;
+@@ -650,7 +650,7 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 	     (buffer2 += header->bLength, size2 -= header->bLength)) {
+ 
+ 		if (size2 < sizeof(struct usb_descriptor_header)) {
+-			dev_warn(ddev, "config %d descriptor has %d excess "
++			dev_notice(ddev, "config %d descriptor has %d excess "
+ 			    "byte%s, ignoring\n",
+ 			    cfgno, size2, plural(size2));
+ 			break;
+@@ -658,7 +658,7 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 
+ 		header = (struct usb_descriptor_header *) buffer2;
+ 		if ((header->bLength > size2) || (header->bLength < 2)) {
+-			dev_warn(ddev, "config %d has an invalid descriptor "
++			dev_notice(ddev, "config %d has an invalid descriptor "
+ 			    "of length %d, skipping remainder of the config\n",
+ 			    cfgno, header->bLength);
+ 			break;
+@@ -670,7 +670,7 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 
+ 			d = (struct usb_interface_descriptor *) header;
+ 			if (d->bLength < USB_DT_INTERFACE_SIZE) {
+-				dev_warn(ddev, "config %d has an invalid "
++				dev_notice(ddev, "config %d has an invalid "
+ 				    "interface descriptor of length %d, "
+ 				    "skipping\n", cfgno, d->bLength);
+ 				continue;
+@@ -680,7 +680,7 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 
+ 			if ((dev->quirks & USB_QUIRK_HONOR_BNUMINTERFACES) &&
+ 			    n >= nintf_orig) {
+-				dev_warn(ddev, "config %d has more interface "
++				dev_notice(ddev, "config %d has more interface "
+ 				    "descriptors, than it declares in "
+ 				    "bNumInterfaces, ignoring interface "
+ 				    "number: %d\n", cfgno, inum);
+@@ -688,7 +688,7 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 			}
+ 
+ 			if (inum >= nintf_orig)
+-				dev_warn(ddev, "config %d has an invalid "
++				dev_notice(ddev, "config %d has an invalid "
+ 				    "interface number: %d but max is %d\n",
+ 				    cfgno, inum, nintf_orig - 1);
+ 
+@@ -713,14 +713,14 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 
+ 			d = (struct usb_interface_assoc_descriptor *)header;
+ 			if (d->bLength < USB_DT_INTERFACE_ASSOCIATION_SIZE) {
+-				dev_warn(ddev,
++				dev_notice(ddev,
+ 					 "config %d has an invalid interface association descriptor of length %d, skipping\n",
+ 					 cfgno, d->bLength);
+ 				continue;
+ 			}
+ 
+ 			if (iad_num == USB_MAXIADS) {
+-				dev_warn(ddev, "found more Interface "
++				dev_notice(ddev, "found more Interface "
+ 					       "Association Descriptors "
+ 					       "than allocated for in "
+ 					       "configuration %d\n", cfgno);
+@@ -731,7 +731,7 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 
+ 		} else if (header->bDescriptorType == USB_DT_DEVICE ||
+ 			    header->bDescriptorType == USB_DT_CONFIG)
+-			dev_warn(ddev, "config %d contains an unexpected "
++			dev_notice(ddev, "config %d contains an unexpected "
+ 			    "descriptor of type 0x%X, skipping\n",
+ 			    cfgno, header->bDescriptorType);
+ 
+@@ -740,11 +740,11 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 	config->desc.wTotalLength = cpu_to_le16(buffer2 - buffer0);
+ 
+ 	if (n != nintf)
+-		dev_warn(ddev, "config %d has %d interface%s, different from "
++		dev_notice(ddev, "config %d has %d interface%s, different from "
+ 		    "the descriptor's value: %d\n",
+ 		    cfgno, n, plural(n), nintf_orig);
+ 	else if (n == 0)
+-		dev_warn(ddev, "config %d has no interfaces?\n", cfgno);
++		dev_notice(ddev, "config %d has no interfaces?\n", cfgno);
+ 	config->desc.bNumInterfaces = nintf = n;
+ 
+ 	/* Check for missing interface numbers */
+@@ -754,7 +754,7 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 				break;
+ 		}
+ 		if (j >= nintf)
+-			dev_warn(ddev, "config %d has no interface number "
++			dev_notice(ddev, "config %d has no interface number "
+ 			    "%d\n", cfgno, i);
+ 	}
+ 
+@@ -762,7 +762,7 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 	for (i = 0; i < nintf; ++i) {
+ 		j = nalts[i];
+ 		if (j > USB_MAXALTSETTING) {
+-			dev_warn(ddev, "too many alternate settings for "
++			dev_notice(ddev, "too many alternate settings for "
+ 			    "config %d interface %d: %d, "
+ 			    "using maximum allowed: %d\n",
+ 			    cfgno, inums[i], j, USB_MAXALTSETTING);
+@@ -811,7 +811,7 @@ static int usb_parse_configuration(struct usb_device *dev, int cfgidx,
+ 					break;
+ 			}
+ 			if (n >= intfc->num_altsetting)
+-				dev_warn(ddev, "config %d interface %d has no "
++				dev_notice(ddev, "config %d interface %d has no "
+ 				    "altsetting %d\n", cfgno, inums[i], j);
+ 		}
+ 	}
+@@ -868,7 +868,7 @@ int usb_get_configuration(struct usb_device *dev)
+ 	int result;
+ 
+ 	if (ncfg > USB_MAXCONFIG) {
+-		dev_warn(ddev, "too many configurations: %d, "
++		dev_notice(ddev, "too many configurations: %d, "
+ 		    "using maximum allowed: %d\n", ncfg, USB_MAXCONFIG);
+ 		dev->descriptor.bNumConfigurations = ncfg = USB_MAXCONFIG;
+ 	}
+@@ -902,7 +902,7 @@ int usb_get_configuration(struct usb_device *dev)
+ 			    "descriptor/%s: %d\n", cfgno, "start", result);
+ 			if (result != -EPIPE)
+ 				goto err;
+-			dev_err(ddev, "chopping to %d config(s)\n", cfgno);
++			dev_notice(ddev, "chopping to %d config(s)\n", cfgno);
+ 			dev->descriptor.bNumConfigurations = cfgno;
+ 			break;
+ 		} else if (result < 4) {
+@@ -934,7 +934,7 @@ int usb_get_configuration(struct usb_device *dev)
+ 			goto err;
+ 		}
+ 		if (result < length) {
+-			dev_warn(ddev, "config index %d descriptor too short "
++			dev_notice(ddev, "config index %d descriptor too short "
+ 			    "(expected %i, got %i)\n", cfgno, length, result);
+ 			length = result;
+ 		}
+@@ -993,7 +993,7 @@ int usb_get_bos_descriptor(struct usb_device *dev)
+ 	/* Get BOS descriptor */
+ 	ret = usb_get_descriptor(dev, USB_DT_BOS, 0, bos, USB_DT_BOS_SIZE);
+ 	if (ret < USB_DT_BOS_SIZE || bos->bLength < USB_DT_BOS_SIZE) {
+-		dev_err(ddev, "unable to get BOS descriptor or descriptor too short\n");
++		dev_notice(ddev, "unable to get BOS descriptor or descriptor too short\n");
+ 		if (ret >= 0)
+ 			ret = -ENOMSG;
+ 		kfree(bos);
+@@ -1021,7 +1021,7 @@ int usb_get_bos_descriptor(struct usb_device *dev)
+ 
+ 	ret = usb_get_descriptor(dev, USB_DT_BOS, 0, buffer, total_len);
+ 	if (ret < total_len) {
+-		dev_err(ddev, "unable to get BOS descriptor set\n");
++		dev_notice(ddev, "unable to get BOS descriptor set\n");
+ 		if (ret >= 0)
+ 			ret = -ENOMSG;
+ 		goto err;
+@@ -1046,8 +1046,8 @@ int usb_get_bos_descriptor(struct usb_device *dev)
+ 		}
+ 
+ 		if (cap->bDescriptorType != USB_DT_DEVICE_CAPABILITY) {
+-			dev_warn(ddev, "descriptor type invalid, skip\n");
+-			continue;
++			dev_notice(ddev, "descriptor type invalid, skip\n");
++			goto skip_to_next_descriptor;
+ 		}
+ 
+ 		switch (cap_type) {
+@@ -1080,6 +1080,7 @@ int usb_get_bos_descriptor(struct usb_device *dev)
+ 			break;
+ 		}
+ 
++skip_to_next_descriptor:
+ 		total_len -= length;
+ 		buffer += length;
+ 	}
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index cfcd4f2ffffaa..331f41c6cc75e 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -2450,6 +2450,8 @@ static void set_usb_port_removable(struct usb_device *udev)
+ 	u16 wHubCharacteristics;
+ 	bool removable = true;
+ 
++	dev_set_removable(&udev->dev, DEVICE_REMOVABLE_UNKNOWN);
++
+ 	if (!hdev)
+ 		return;
+ 
+@@ -2461,11 +2463,11 @@ static void set_usb_port_removable(struct usb_device *udev)
+ 	 */
+ 	switch (hub->ports[udev->portnum - 1]->connect_type) {
+ 	case USB_PORT_CONNECT_TYPE_HOT_PLUG:
+-		udev->removable = USB_DEVICE_REMOVABLE;
++		dev_set_removable(&udev->dev, DEVICE_REMOVABLE);
+ 		return;
+ 	case USB_PORT_CONNECT_TYPE_HARD_WIRED:
+ 	case USB_PORT_NOT_USED:
+-		udev->removable = USB_DEVICE_FIXED;
++		dev_set_removable(&udev->dev, DEVICE_FIXED);
+ 		return;
+ 	default:
+ 		break;
+@@ -2490,9 +2492,9 @@ static void set_usb_port_removable(struct usb_device *udev)
+ 	}
+ 
+ 	if (removable)
+-		udev->removable = USB_DEVICE_REMOVABLE;
++		dev_set_removable(&udev->dev, DEVICE_REMOVABLE);
+ 	else
+-		udev->removable = USB_DEVICE_FIXED;
++		dev_set_removable(&udev->dev, DEVICE_FIXED);
+ 
+ }
+ 
+@@ -2564,8 +2566,7 @@ int usb_new_device(struct usb_device *udev)
+ 	device_enable_async_suspend(&udev->dev);
+ 
+ 	/* check whether the hub or firmware marks this port as non-removable */
+-	if (udev->parent)
+-		set_usb_port_removable(udev);
++	set_usb_port_removable(udev);
+ 
+ 	/* Register the device.  The device driver is responsible
+ 	 * for configuring the device and invoking the add-device
+diff --git a/drivers/usb/core/sysfs.c b/drivers/usb/core/sysfs.c
+index a2ca38e25e0c3..35ce8b87e9396 100644
+--- a/drivers/usb/core/sysfs.c
++++ b/drivers/usb/core/sysfs.c
+@@ -298,29 +298,6 @@ static ssize_t urbnum_show(struct device *dev, struct device_attribute *attr,
+ }
+ static DEVICE_ATTR_RO(urbnum);
+ 
+-static ssize_t removable_show(struct device *dev, struct device_attribute *attr,
+-			      char *buf)
+-{
+-	struct usb_device *udev;
+-	char *state;
+-
+-	udev = to_usb_device(dev);
+-
+-	switch (udev->removable) {
+-	case USB_DEVICE_REMOVABLE:
+-		state = "removable";
+-		break;
+-	case USB_DEVICE_FIXED:
+-		state = "fixed";
+-		break;
+-	default:
+-		state = "unknown";
+-	}
+-
+-	return sprintf(buf, "%s\n", state);
+-}
+-static DEVICE_ATTR_RO(removable);
+-
+ static ssize_t ltm_capable_show(struct device *dev,
+ 				struct device_attribute *attr, char *buf)
+ {
+@@ -825,7 +802,6 @@ static struct attribute *dev_attrs[] = {
+ 	&dev_attr_avoid_reset_quirk.attr,
+ 	&dev_attr_authorized.attr,
+ 	&dev_attr_remove.attr,
+-	&dev_attr_removable.attr,
+ 	&dev_attr_ltm_capable.attr,
+ #ifdef CONFIG_OF
+ 	&dev_attr_devspec.attr,
+diff --git a/drivers/usb/dwc2/hcd_intr.c b/drivers/usb/dwc2/hcd_intr.c
+index d5f4ec1b73b15..08e2792cb7323 100644
+--- a/drivers/usb/dwc2/hcd_intr.c
++++ b/drivers/usb/dwc2/hcd_intr.c
+@@ -2045,15 +2045,17 @@ static void dwc2_hc_n_intr(struct dwc2_hsotg *hsotg, int chnum)
+ {
+ 	struct dwc2_qtd *qtd;
+ 	struct dwc2_host_chan *chan;
+-	u32 hcint, hcintmsk;
++	u32 hcint, hcintraw, hcintmsk;
+ 
+ 	chan = hsotg->hc_ptr_array[chnum];
+ 
+-	hcint = dwc2_readl(hsotg, HCINT(chnum));
++	hcintraw = dwc2_readl(hsotg, HCINT(chnum));
+ 	hcintmsk = dwc2_readl(hsotg, HCINTMSK(chnum));
++	hcint = hcintraw & hcintmsk;
++	dwc2_writel(hsotg, hcint, HCINT(chnum));
++
+ 	if (!chan) {
+ 		dev_err(hsotg->dev, "## hc_ptr_array for channel is NULL ##\n");
+-		dwc2_writel(hsotg, hcint, HCINT(chnum));
+ 		return;
+ 	}
+ 
+@@ -2062,11 +2064,9 @@ static void dwc2_hc_n_intr(struct dwc2_hsotg *hsotg, int chnum)
+ 			 chnum);
+ 		dev_vdbg(hsotg->dev,
+ 			 "  hcint 0x%08x, hcintmsk 0x%08x, hcint&hcintmsk 0x%08x\n",
+-			 hcint, hcintmsk, hcint & hcintmsk);
++			 hcintraw, hcintmsk, hcint);
+ 	}
+ 
+-	dwc2_writel(hsotg, hcint, HCINT(chnum));
+-
+ 	/*
+ 	 * If we got an interrupt after someone called
+ 	 * dwc2_hcd_endpoint_disable() we don't want to crash below
+@@ -2076,8 +2076,7 @@ static void dwc2_hc_n_intr(struct dwc2_hsotg *hsotg, int chnum)
+ 		return;
+ 	}
+ 
+-	chan->hcint = hcint;
+-	hcint &= hcintmsk;
++	chan->hcint = hcintraw;
+ 
+ 	/*
+ 	 * If the channel was halted due to a dequeue, the qtd list might
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 5bc245b4be441..214a8ff2d69c8 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1627,6 +1627,8 @@ static int dwc3_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_put(dev);
+ 
++	dma_set_max_seg_size(dev, UINT_MAX);
++
+ 	return 0;
+ 
+ err5:
+diff --git a/drivers/usb/dwc3/drd.c b/drivers/usb/dwc3/drd.c
+index 0a96f44ccca78..8bcfbb29ce70f 100644
+--- a/drivers/usb/dwc3/drd.c
++++ b/drivers/usb/dwc3/drd.c
+@@ -547,6 +547,7 @@ static int dwc3_setup_role_switch(struct dwc3 *dwc)
+ 		dwc->role_switch_default_mode = USB_DR_MODE_PERIPHERAL;
+ 		mode = DWC3_GCTL_PRTCAP_DEVICE;
+ 	}
++	dwc3_set_mode(dwc, mode);
+ 
+ 	dwc3_role_switch.fwnode = dev_fwnode(dwc->dev);
+ 	dwc3_role_switch.set = dwc3_usb_role_switch_set;
+@@ -556,7 +557,6 @@ static int dwc3_setup_role_switch(struct dwc3 *dwc)
+ 	if (IS_ERR(dwc->role_sw))
+ 		return PTR_ERR(dwc->role_sw);
+ 
+-	dwc3_set_mode(dwc, mode);
+ 	return 0;
+ }
+ #else
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 3973f6c18857e..db3559a102077 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -492,7 +492,7 @@ static int dwc3_qcom_setup_irq(struct platform_device *pdev)
+ 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
+ 		ret = devm_request_threaded_irq(qcom->dev, irq, NULL,
+ 					qcom_dwc3_resume_irq,
+-					IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
++					IRQF_ONESHOT,
+ 					"qcom_dwc3 HS", qcom);
+ 		if (ret) {
+ 			dev_err(qcom->dev, "hs_phy_irq failed: %d\n", ret);
+@@ -507,7 +507,7 @@ static int dwc3_qcom_setup_irq(struct platform_device *pdev)
+ 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
+ 		ret = devm_request_threaded_irq(qcom->dev, irq, NULL,
+ 					qcom_dwc3_resume_irq,
+-					IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
++					IRQF_ONESHOT,
+ 					"qcom_dwc3 DP_HS", qcom);
+ 		if (ret) {
+ 			dev_err(qcom->dev, "dp_hs_phy_irq failed: %d\n", ret);
+@@ -522,7 +522,7 @@ static int dwc3_qcom_setup_irq(struct platform_device *pdev)
+ 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
+ 		ret = devm_request_threaded_irq(qcom->dev, irq, NULL,
+ 					qcom_dwc3_resume_irq,
+-					IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
++					IRQF_ONESHOT,
+ 					"qcom_dwc3 DM_HS", qcom);
+ 		if (ret) {
+ 			dev_err(qcom->dev, "dm_hs_phy_irq failed: %d\n", ret);
+@@ -537,7 +537,7 @@ static int dwc3_qcom_setup_irq(struct platform_device *pdev)
+ 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
+ 		ret = devm_request_threaded_irq(qcom->dev, irq, NULL,
+ 					qcom_dwc3_resume_irq,
+-					IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
++					IRQF_ONESHOT,
+ 					"qcom_dwc3 SS", qcom);
+ 		if (ret) {
+ 			dev_err(qcom->dev, "ss_phy_irq failed: %d\n", ret);
+@@ -690,6 +690,7 @@ static int dwc3_qcom_of_register_core(struct platform_device *pdev)
+ 	if (!qcom->dwc3) {
+ 		ret = -ENODEV;
+ 		dev_err(dev, "failed to get dwc3 platform device\n");
++		of_platform_depopulate(dev);
+ 	}
+ 
+ node_put:
+@@ -698,9 +699,9 @@ node_put:
+ 	return ret;
+ }
+ 
+-static struct platform_device *
+-dwc3_qcom_create_urs_usb_platdev(struct device *dev)
++static struct platform_device *dwc3_qcom_create_urs_usb_platdev(struct device *dev)
+ {
++	struct platform_device *urs_usb = NULL;
+ 	struct fwnode_handle *fwh;
+ 	struct acpi_device *adev;
+ 	char name[8];
+@@ -720,9 +721,26 @@ dwc3_qcom_create_urs_usb_platdev(struct device *dev)
+ 
+ 	adev = to_acpi_device_node(fwh);
+ 	if (!adev)
+-		return NULL;
++		goto err_put_handle;
++
++	urs_usb = acpi_create_platform_device(adev, NULL);
++	if (IS_ERR_OR_NULL(urs_usb))
++		goto err_put_handle;
++
++	return urs_usb;
++
++err_put_handle:
++	fwnode_handle_put(fwh);
++
++	return urs_usb;
++}
++
++static void dwc3_qcom_destroy_urs_usb_platdev(struct platform_device *urs_usb)
++{
++	struct fwnode_handle *fwh = urs_usb->dev.fwnode;
+ 
+-	return acpi_create_platform_device(adev, NULL);
++	platform_device_unregister(urs_usb);
++	fwnode_handle_put(fwh);
+ }
+ 
+ static int dwc3_qcom_probe(struct platform_device *pdev)
+@@ -807,13 +825,13 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 	if (IS_ERR(qcom->qscratch_base)) {
+ 		dev_err(dev, "failed to map qscratch, err=%d\n", ret);
+ 		ret = PTR_ERR(qcom->qscratch_base);
+-		goto clk_disable;
++		goto free_urs;
+ 	}
+ 
+ 	ret = dwc3_qcom_setup_irq(pdev);
+ 	if (ret) {
+ 		dev_err(dev, "failed to setup IRQs, err=%d\n", ret);
+-		goto clk_disable;
++		goto free_urs;
+ 	}
+ 
+ 	/*
+@@ -832,7 +850,7 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 
+ 	if (ret) {
+ 		dev_err(dev, "failed to register DWC3 Core, err=%d\n", ret);
+-		goto depopulate;
++		goto free_urs;
+ 	}
+ 
+ 	ret = dwc3_qcom_interconnect_init(qcom);
+@@ -864,7 +882,11 @@ depopulate:
+ 	if (np)
+ 		of_platform_depopulate(&pdev->dev);
+ 	else
+-		platform_device_put(pdev);
++		platform_device_del(qcom->dwc3);
++	platform_device_put(qcom->dwc3);
++free_urs:
++	if (qcom->urs_usb)
++		dwc3_qcom_destroy_urs_usb_platdev(qcom->urs_usb);
+ clk_disable:
+ 	for (i = qcom->num_clocks - 1; i >= 0; i--) {
+ 		clk_disable_unprepare(qcom->clks[i]);
+@@ -886,7 +908,11 @@ static int dwc3_qcom_remove(struct platform_device *pdev)
+ 	if (np)
+ 		of_platform_depopulate(&pdev->dev);
+ 	else
+-		platform_device_put(pdev);
++		platform_device_del(qcom->dwc3);
++	platform_device_put(qcom->dwc3);
++
++	if (qcom->urs_usb)
++		dwc3_qcom_destroy_urs_usb_platdev(qcom->urs_usb);
+ 
+ 	for (i = qcom->num_clocks - 1; i >= 0; i--) {
+ 		clk_disable_unprepare(qcom->clks[i]);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 3e0579d6ec82c..851c242539406 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -203,8 +203,8 @@ static void option_instat_callback(struct urb *urb);
+ #define DELL_PRODUCT_5829E_ESIM			0x81e4
+ #define DELL_PRODUCT_5829E			0x81e6
+ 
+-#define DELL_PRODUCT_FM101R			0x8213
+-#define DELL_PRODUCT_FM101R_ESIM		0x8215
++#define DELL_PRODUCT_FM101R_ESIM		0x8213
++#define DELL_PRODUCT_FM101R			0x8215
+ 
+ #define KYOCERA_VENDOR_ID			0x0c88
+ #define KYOCERA_PRODUCT_KPC650			0x17da
+@@ -609,6 +609,8 @@ static void option_instat_callback(struct urb *urb);
+ #define UNISOC_VENDOR_ID			0x1782
+ /* TOZED LT70-C based on UNISOC SL8563 uses UNISOC's vendor ID */
+ #define TOZED_PRODUCT_LT70C			0x4055
++/* Luat Air72*U series based on UNISOC UIS8910 uses UNISOC's vendor ID */
++#define LUAT_PRODUCT_AIR720U			0x4e00
+ 
+ /* Device flags */
+ 
+@@ -1546,7 +1548,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0165, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0167, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(4) },
+-	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0189, 0xff, 0xff, 0xff) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0189, 0xff, 0xff, 0xff),
++	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0191, 0xff, 0xff, 0xff), /* ZTE EuFi890 */
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0196, 0xff, 0xff, 0xff) },
+@@ -2249,6 +2252,7 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
+ 	{ USB_DEVICE(0x1782, 0x4d10) },						/* Fibocom L610 (AT mode) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x1782, 0x4d11, 0xff) },			/* Fibocom L610 (ECM/RNDIS mode) */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x0001, 0xff, 0xff, 0xff) },	/* Fibocom L716-EU (ECM/RNDIS mode) */
+ 	{ USB_DEVICE(0x2cb7, 0x0104),						/* Fibocom NL678 series */
+ 	  .driver_info = RSVD(4) | RSVD(5) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0105, 0xff),			/* Fibocom NL678 series */
+@@ -2271,6 +2275,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) },
+ 	{ } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/drivers/video/fbdev/sticore.h b/drivers/video/fbdev/sticore.h
+index 0ebdd28a0b813..d83ab3ded5f3d 100644
+--- a/drivers/video/fbdev/sticore.h
++++ b/drivers/video/fbdev/sticore.h
+@@ -231,7 +231,7 @@ struct sti_rom_font {
+ 	 u8 height;
+ 	 u8 font_type;		/* language type */
+ 	 u8 bytes_per_char;
+-	u32 next_font;
++	s32 next_font;		/* note: signed int */
+ 	 u8 underline_height;
+ 	 u8 underline_pos;
+ 	 u8 res008[2];
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index 2b385c1b4a99c..ad3ee4857e154 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -579,4 +579,5 @@ const struct dma_map_ops xen_swiotlb_dma_ops = {
+ 	.get_sgtable = dma_common_get_sgtable,
+ 	.alloc_pages = dma_common_alloc_pages,
+ 	.free_pages = dma_common_free_pages,
++	.max_mapping_size = swiotlb_max_mapping_size,
+ };
+diff --git a/fs/afs/dynroot.c b/fs/afs/dynroot.c
+index db832cc931c87..b35c6081dbfe1 100644
+--- a/fs/afs/dynroot.c
++++ b/fs/afs/dynroot.c
+@@ -131,8 +131,8 @@ static int afs_probe_cell_name(struct dentry *dentry)
+ 
+ 	ret = dns_query(net->net, "afsdb", name, len, "srv=1",
+ 			NULL, NULL, false);
+-	if (ret == -ENODATA)
+-		ret = -EDESTADDRREQ;
++	if (ret == -ENODATA || ret == -ENOKEY)
++		ret = -ENOENT;
+ 	return ret;
+ }
+ 
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 637cbe549397c..31c7a562147c2 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -546,6 +546,7 @@ struct afs_server_entry {
+ };
+ 
+ struct afs_server_list {
++	struct rcu_head		rcu;
+ 	afs_volid_t		vids[AFS_MAXTYPES]; /* Volume IDs */
+ 	refcount_t		usage;
+ 	unsigned char		nr_servers;
+diff --git a/fs/afs/server_list.c b/fs/afs/server_list.c
+index ed9056703505f..b59896b1de0af 100644
+--- a/fs/afs/server_list.c
++++ b/fs/afs/server_list.c
+@@ -17,7 +17,7 @@ void afs_put_serverlist(struct afs_net *net, struct afs_server_list *slist)
+ 		for (i = 0; i < slist->nr_servers; i++)
+ 			afs_unuse_server(net, slist->servers[i].server,
+ 					 afs_server_trace_put_slist);
+-		kfree(slist);
++		kfree_rcu(slist, rcu);
+ 	}
+ }
+ 
+diff --git a/fs/afs/super.c b/fs/afs/super.c
+index e38bb1e7a4d22..1b62a99b36731 100644
+--- a/fs/afs/super.c
++++ b/fs/afs/super.c
+@@ -406,6 +406,8 @@ static int afs_validate_fc(struct fs_context *fc)
+ 			return PTR_ERR(volume);
+ 
+ 		ctx->volume = volume;
++		if (volume->type != AFSVL_RWVOL)
++			ctx->flock_mode = afs_flock_mode_local;
+ 	}
+ 
+ 	return 0;
+diff --git a/fs/afs/vl_rotate.c b/fs/afs/vl_rotate.c
+index 488e58490b16e..eb415ce563600 100644
+--- a/fs/afs/vl_rotate.c
++++ b/fs/afs/vl_rotate.c
+@@ -58,6 +58,12 @@ static bool afs_start_vl_iteration(struct afs_vl_cursor *vc)
+ 		}
+ 
+ 		/* Status load is ordered after lookup counter load */
++		if (cell->dns_status == DNS_LOOKUP_GOT_NOT_FOUND) {
++			pr_warn("No record of cell %s\n", cell->name);
++			vc->error = -ENOENT;
++			return false;
++		}
++
+ 		if (cell->dns_source == DNS_RECORD_UNAVAILABLE) {
+ 			vc->error = -EDESTADDRREQ;
+ 			return false;
+@@ -285,6 +291,7 @@ failed:
+  */
+ static void afs_vl_dump_edestaddrreq(const struct afs_vl_cursor *vc)
+ {
++	struct afs_cell *cell = vc->cell;
+ 	static int count;
+ 	int i;
+ 
+@@ -294,6 +301,9 @@ static void afs_vl_dump_edestaddrreq(const struct afs_vl_cursor *vc)
+ 
+ 	rcu_read_lock();
+ 	pr_notice("EDESTADDR occurred\n");
++	pr_notice("CELL: %s err=%d\n", cell->name, cell->error);
++	pr_notice("DNS: src=%u st=%u lc=%x\n",
++		  cell->dns_source, cell->dns_status, cell->dns_lookup_count);
+ 	pr_notice("VC: ut=%lx ix=%u ni=%hu fl=%hx err=%hd\n",
+ 		  vc->untried, vc->index, vc->nr_iterations, vc->flags, vc->error);
+ 
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 0e25a3f64b2e0..cd689d5c4c821 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2972,6 +2972,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 		goto fail_alloc;
+ 	}
+ 
++	btrfs_info(fs_info, "first mount of filesystem %pU", disk_super->fsid);
+ 	/*
+ 	 * Verify the type first, if that or the checksum value are
+ 	 * corrupted, we'll find out
+diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c
+index 78693d3dd15bc..bd3bb94cc56bd 100644
+--- a/fs/btrfs/ref-verify.c
++++ b/fs/btrfs/ref-verify.c
+@@ -804,6 +804,7 @@ int btrfs_ref_tree_mod(struct btrfs_fs_info *fs_info,
+ 			dump_ref_action(fs_info, ra);
+ 			kfree(ref);
+ 			kfree(ra);
++			kfree(re);
+ 			goto out_unlock;
+ 		} else if (be->num_refs == 0) {
+ 			btrfs_err(fs_info,
+@@ -813,6 +814,7 @@ int btrfs_ref_tree_mod(struct btrfs_fs_info *fs_info,
+ 			dump_ref_action(fs_info, ra);
+ 			kfree(ref);
+ 			kfree(ra);
++			kfree(re);
+ 			goto out_unlock;
+ 		}
+ 
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index b081b61e97c8d..af9701afcab77 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -7303,7 +7303,7 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 	sctx->flags = arg->flags;
+ 
+ 	sctx->send_filp = fget(arg->send_fd);
+-	if (!sctx->send_filp) {
++	if (!sctx->send_filp || !(sctx->send_filp->f_mode & FMODE_WRITE)) {
+ 		ret = -EBADF;
+ 		goto out;
+ 	}
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index ea731fa8bd350..fd4edd9664500 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -318,7 +318,10 @@ void __btrfs_panic(struct btrfs_fs_info *fs_info, const char *function,
+ 
+ static void btrfs_put_super(struct super_block *sb)
+ {
+-	close_ctree(btrfs_sb(sb));
++	struct btrfs_fs_info *fs_info = btrfs_sb(sb);
++
++	btrfs_info(fs_info, "last unmount of filesystem %pU", fs_info->fs_devices->fsid);
++	close_ctree(fs_info);
+ }
+ 
+ enum {
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 86c50e0570a5e..eaf5cd043dace 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -2993,15 +2993,16 @@ struct extent_map *btrfs_get_chunk_map(struct btrfs_fs_info *fs_info,
+ 	read_unlock(&em_tree->lock);
+ 
+ 	if (!em) {
+-		btrfs_crit(fs_info, "unable to find logical %llu length %llu",
++		btrfs_crit(fs_info,
++			   "unable to find chunk map for logical %llu length %llu",
+ 			   logical, length);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+-	if (em->start > logical || em->start + em->len < logical) {
++	if (em->start > logical || em->start + em->len <= logical) {
+ 		btrfs_crit(fs_info,
+-			   "found a bad mapping, wanted %llu-%llu, found %llu-%llu",
+-			   logical, length, em->start, em->start + em->len);
++			   "found a bad chunk map, wanted %llu-%llu, found %llu-%llu",
++			   logical, logical + length, em->start, em->start + em->len);
+ 		free_extent_map(em);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 0d8d7b57636f7..b87d6792b14ad 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -1076,6 +1076,7 @@ const struct inode_operations cifs_file_inode_ops = {
+ 
+ const struct inode_operations cifs_symlink_inode_ops = {
+ 	.get_link = cifs_get_link,
++	.setattr = cifs_setattr,
+ 	.permission = cifs_permission,
+ 	.listxattr = cifs_listxattr,
+ };
+diff --git a/fs/cifs/xattr.c b/fs/cifs/xattr.c
+index b8299173ea7e3..c6931eab3f97e 100644
+--- a/fs/cifs/xattr.c
++++ b/fs/cifs/xattr.c
+@@ -157,10 +157,13 @@ static int cifs_xattr_set(const struct xattr_handler *handler,
+ 		if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_XATTR)
+ 			goto out;
+ 
+-		if (pTcon->ses->server->ops->set_EA)
++		if (pTcon->ses->server->ops->set_EA) {
+ 			rc = pTcon->ses->server->ops->set_EA(xid, pTcon,
+ 				full_path, name, value, (__u16)size,
+ 				cifs_sb->local_nls, cifs_sb);
++			if (rc == 0)
++				inode_set_ctime_current(inode);
++		}
+ 		break;
+ 
+ 	case XATTR_CIFS_ACL:
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index 7806adcc41a7a..cccbdfd49a86b 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -144,14 +144,17 @@
+ static struct kmem_cache *ext4_es_cachep;
+ static struct kmem_cache *ext4_pending_cachep;
+ 
+-static int __es_insert_extent(struct inode *inode, struct extent_status *newes);
++static int __es_insert_extent(struct inode *inode, struct extent_status *newes,
++			      struct extent_status *prealloc);
+ static int __es_remove_extent(struct inode *inode, ext4_lblk_t lblk,
+-			      ext4_lblk_t end, int *reserved);
++			      ext4_lblk_t end, int *reserved,
++			      struct extent_status *prealloc);
+ static int es_reclaim_extents(struct ext4_inode_info *ei, int *nr_to_scan);
+ static int __es_shrink(struct ext4_sb_info *sbi, int nr_to_scan,
+ 		       struct ext4_inode_info *locked_ei);
+-static void __revise_pending(struct inode *inode, ext4_lblk_t lblk,
+-			     ext4_lblk_t len);
++static int __revise_pending(struct inode *inode, ext4_lblk_t lblk,
++			    ext4_lblk_t len,
++			    struct pending_reservation **prealloc);
+ 
+ int __init ext4_init_es(void)
+ {
+@@ -448,22 +451,49 @@ static void ext4_es_list_del(struct inode *inode)
+ 	spin_unlock(&sbi->s_es_lock);
+ }
+ 
+-static struct extent_status *
+-ext4_es_alloc_extent(struct inode *inode, ext4_lblk_t lblk, ext4_lblk_t len,
+-		     ext4_fsblk_t pblk)
++static inline struct pending_reservation *__alloc_pending(bool nofail)
++{
++	if (!nofail)
++		return kmem_cache_alloc(ext4_pending_cachep, GFP_ATOMIC);
++
++	return kmem_cache_zalloc(ext4_pending_cachep, GFP_KERNEL | __GFP_NOFAIL);
++}
++
++static inline void __free_pending(struct pending_reservation *pr)
++{
++	kmem_cache_free(ext4_pending_cachep, pr);
++}
++
++/*
++ * Returns true if we cannot fail to allocate memory for this extent_status
++ * entry and cannot reclaim it until its status changes.
++ */
++static inline bool ext4_es_must_keep(struct extent_status *es)
++{
++	/* fiemap, bigalloc, and seek_data/hole need to use it. */
++	if (ext4_es_is_delayed(es))
++		return true;
++
++	return false;
++}
++
++static inline struct extent_status *__es_alloc_extent(bool nofail)
++{
++	if (!nofail)
++		return kmem_cache_alloc(ext4_es_cachep, GFP_ATOMIC);
++
++	return kmem_cache_zalloc(ext4_es_cachep, GFP_KERNEL | __GFP_NOFAIL);
++}
++
++static void ext4_es_init_extent(struct inode *inode, struct extent_status *es,
++		ext4_lblk_t lblk, ext4_lblk_t len, ext4_fsblk_t pblk)
+ {
+-	struct extent_status *es;
+-	es = kmem_cache_alloc(ext4_es_cachep, GFP_ATOMIC);
+-	if (es == NULL)
+-		return NULL;
+ 	es->es_lblk = lblk;
+ 	es->es_len = len;
+ 	es->es_pblk = pblk;
+ 
+-	/*
+-	 * We don't count delayed extent because we never try to reclaim them
+-	 */
+-	if (!ext4_es_is_delayed(es)) {
++	/* We never try to reclaim a must kept extent, so we don't count it. */
++	if (!ext4_es_must_keep(es)) {
+ 		if (!EXT4_I(inode)->i_es_shk_nr++)
+ 			ext4_es_list_add(inode);
+ 		percpu_counter_inc(&EXT4_SB(inode->i_sb)->
+@@ -472,8 +502,11 @@ ext4_es_alloc_extent(struct inode *inode, ext4_lblk_t lblk, ext4_lblk_t len,
+ 
+ 	EXT4_I(inode)->i_es_all_nr++;
+ 	percpu_counter_inc(&EXT4_SB(inode->i_sb)->s_es_stats.es_stats_all_cnt);
++}
+ 
+-	return es;
++static inline void __es_free_extent(struct extent_status *es)
++{
++	kmem_cache_free(ext4_es_cachep, es);
+ }
+ 
+ static void ext4_es_free_extent(struct inode *inode, struct extent_status *es)
+@@ -481,8 +514,8 @@ static void ext4_es_free_extent(struct inode *inode, struct extent_status *es)
+ 	EXT4_I(inode)->i_es_all_nr--;
+ 	percpu_counter_dec(&EXT4_SB(inode->i_sb)->s_es_stats.es_stats_all_cnt);
+ 
+-	/* Decrease the shrink counter when this es is not delayed */
+-	if (!ext4_es_is_delayed(es)) {
++	/* Decrease the shrink counter when we can reclaim the extent. */
++	if (!ext4_es_must_keep(es)) {
+ 		BUG_ON(EXT4_I(inode)->i_es_shk_nr == 0);
+ 		if (!--EXT4_I(inode)->i_es_shk_nr)
+ 			ext4_es_list_del(inode);
+@@ -490,7 +523,7 @@ static void ext4_es_free_extent(struct inode *inode, struct extent_status *es)
+ 					s_es_stats.es_stats_shk_cnt);
+ 	}
+ 
+-	kmem_cache_free(ext4_es_cachep, es);
++	__es_free_extent(es);
+ }
+ 
+ /*
+@@ -752,7 +785,8 @@ static inline void ext4_es_insert_extent_check(struct inode *inode,
+ }
+ #endif
+ 
+-static int __es_insert_extent(struct inode *inode, struct extent_status *newes)
++static int __es_insert_extent(struct inode *inode, struct extent_status *newes,
++			      struct extent_status *prealloc)
+ {
+ 	struct ext4_es_tree *tree = &EXT4_I(inode)->i_es_tree;
+ 	struct rb_node **p = &tree->root.rb_node;
+@@ -792,10 +826,15 @@ static int __es_insert_extent(struct inode *inode, struct extent_status *newes)
+ 		}
+ 	}
+ 
+-	es = ext4_es_alloc_extent(inode, newes->es_lblk, newes->es_len,
+-				  newes->es_pblk);
++	if (prealloc)
++		es = prealloc;
++	else
++		es = __es_alloc_extent(false);
+ 	if (!es)
+ 		return -ENOMEM;
++	ext4_es_init_extent(inode, es, newes->es_lblk, newes->es_len,
++			    newes->es_pblk);
++
+ 	rb_link_node(&es->rb_node, parent, p);
+ 	rb_insert_color(&es->rb_node, &tree->root);
+ 
+@@ -816,8 +855,12 @@ int ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk,
+ {
+ 	struct extent_status newes;
+ 	ext4_lblk_t end = lblk + len - 1;
+-	int err = 0;
++	int err1 = 0, err2 = 0, err3 = 0;
+ 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	struct extent_status *es1 = NULL;
++	struct extent_status *es2 = NULL;
++	struct pending_reservation *pr = NULL;
++	bool revise_pending = false;
+ 
+ 	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
+ 		return 0;
+@@ -845,29 +888,57 @@ int ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk,
+ 
+ 	ext4_es_insert_extent_check(inode, &newes);
+ 
++	revise_pending = sbi->s_cluster_ratio > 1 &&
++			 test_opt(inode->i_sb, DELALLOC) &&
++			 (status & (EXTENT_STATUS_WRITTEN |
++				    EXTENT_STATUS_UNWRITTEN));
++retry:
++	if (err1 && !es1)
++		es1 = __es_alloc_extent(true);
++	if ((err1 || err2) && !es2)
++		es2 = __es_alloc_extent(true);
++	if ((err1 || err2 || err3) && revise_pending && !pr)
++		pr = __alloc_pending(true);
+ 	write_lock(&EXT4_I(inode)->i_es_lock);
+-	err = __es_remove_extent(inode, lblk, end, NULL);
+-	if (err != 0)
++
++	err1 = __es_remove_extent(inode, lblk, end, NULL, es1);
++	if (err1 != 0)
+ 		goto error;
+-retry:
+-	err = __es_insert_extent(inode, &newes);
+-	if (err == -ENOMEM && __es_shrink(EXT4_SB(inode->i_sb),
+-					  128, EXT4_I(inode)))
+-		goto retry;
+-	if (err == -ENOMEM && !ext4_es_is_delayed(&newes))
+-		err = 0;
++	/* Free preallocated extent if it didn't get used. */
++	if (es1) {
++		if (!es1->es_len)
++			__es_free_extent(es1);
++		es1 = NULL;
++	}
+ 
+-	if (sbi->s_cluster_ratio > 1 && test_opt(inode->i_sb, DELALLOC) &&
+-	    (status & EXTENT_STATUS_WRITTEN ||
+-	     status & EXTENT_STATUS_UNWRITTEN))
+-		__revise_pending(inode, lblk, len);
++	err2 = __es_insert_extent(inode, &newes, es2);
++	if (err2 == -ENOMEM && !ext4_es_must_keep(&newes))
++		err2 = 0;
++	if (err2 != 0)
++		goto error;
++	/* Free preallocated extent if it didn't get used. */
++	if (es2) {
++		if (!es2->es_len)
++			__es_free_extent(es2);
++		es2 = NULL;
++	}
+ 
++	if (revise_pending) {
++		err3 = __revise_pending(inode, lblk, len, &pr);
++		if (err3 != 0)
++			goto error;
++		if (pr) {
++			__free_pending(pr);
++			pr = NULL;
++		}
++	}
+ error:
+ 	write_unlock(&EXT4_I(inode)->i_es_lock);
++	if (err1 || err2 || err3)
++		goto retry;
+ 
+ 	ext4_es_print_tree(inode);
+-
+-	return err;
++	return 0;
+ }
+ 
+ /*
+@@ -900,7 +971,7 @@ void ext4_es_cache_extent(struct inode *inode, ext4_lblk_t lblk,
+ 
+ 	es = __es_tree_search(&EXT4_I(inode)->i_es_tree.root, lblk);
+ 	if (!es || es->es_lblk > end)
+-		__es_insert_extent(inode, &newes);
++		__es_insert_extent(inode, &newes, NULL);
+ 	write_unlock(&EXT4_I(inode)->i_es_lock);
+ }
+ 
+@@ -1271,7 +1342,7 @@ static unsigned int get_rsvd(struct inode *inode, ext4_lblk_t end,
+ 				rc->ndelonly--;
+ 				node = rb_next(&pr->rb_node);
+ 				rb_erase(&pr->rb_node, &tree->root);
+-				kmem_cache_free(ext4_pending_cachep, pr);
++				__free_pending(pr);
+ 				if (!node)
+ 					break;
+ 				pr = rb_entry(node, struct pending_reservation,
+@@ -1290,6 +1361,7 @@ static unsigned int get_rsvd(struct inode *inode, ext4_lblk_t end,
+  * @lblk - first block in range
+  * @end - last block in range
+  * @reserved - number of cluster reservations released
++ * @prealloc - pre-allocated es to avoid memory allocation failures
+  *
+  * If @reserved is not NULL and delayed allocation is enabled, counts
+  * block/cluster reservations freed by removing range and if bigalloc
+@@ -1297,7 +1369,8 @@ static unsigned int get_rsvd(struct inode *inode, ext4_lblk_t end,
+  * error code on failure.
+  */
+ static int __es_remove_extent(struct inode *inode, ext4_lblk_t lblk,
+-			      ext4_lblk_t end, int *reserved)
++			      ext4_lblk_t end, int *reserved,
++			      struct extent_status *prealloc)
+ {
+ 	struct ext4_es_tree *tree = &EXT4_I(inode)->i_es_tree;
+ 	struct rb_node *node;
+@@ -1305,14 +1378,12 @@ static int __es_remove_extent(struct inode *inode, ext4_lblk_t lblk,
+ 	struct extent_status orig_es;
+ 	ext4_lblk_t len1, len2;
+ 	ext4_fsblk_t block;
+-	int err;
++	int err = 0;
+ 	bool count_reserved = true;
+ 	struct rsvd_count rc;
+ 
+ 	if (reserved == NULL || !test_opt(inode->i_sb, DELALLOC))
+ 		count_reserved = false;
+-retry:
+-	err = 0;
+ 
+ 	es = __es_tree_search(&tree->root, lblk);
+ 	if (!es)
+@@ -1346,14 +1417,13 @@ retry:
+ 					orig_es.es_len - len2;
+ 			ext4_es_store_pblock_status(&newes, block,
+ 						    ext4_es_status(&orig_es));
+-			err = __es_insert_extent(inode, &newes);
++			err = __es_insert_extent(inode, &newes, prealloc);
+ 			if (err) {
++				if (!ext4_es_must_keep(&newes))
++					return 0;
++
+ 				es->es_lblk = orig_es.es_lblk;
+ 				es->es_len = orig_es.es_len;
+-				if ((err == -ENOMEM) &&
+-				    __es_shrink(EXT4_SB(inode->i_sb),
+-							128, EXT4_I(inode)))
+-					goto retry;
+ 				goto out;
+ 			}
+ 		} else {
+@@ -1433,6 +1503,7 @@ int ext4_es_remove_extent(struct inode *inode, ext4_lblk_t lblk,
+ 	ext4_lblk_t end;
+ 	int err = 0;
+ 	int reserved = 0;
++	struct extent_status *es = NULL;
+ 
+ 	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
+ 		return 0;
+@@ -1447,17 +1518,29 @@ int ext4_es_remove_extent(struct inode *inode, ext4_lblk_t lblk,
+ 	end = lblk + len - 1;
+ 	BUG_ON(end < lblk);
+ 
++retry:
++	if (err && !es)
++		es = __es_alloc_extent(true);
+ 	/*
+ 	 * ext4_clear_inode() depends on us taking i_es_lock unconditionally
+ 	 * so that we are sure __es_shrink() is done with the inode before it
+ 	 * is reclaimed.
+ 	 */
+ 	write_lock(&EXT4_I(inode)->i_es_lock);
+-	err = __es_remove_extent(inode, lblk, end, &reserved);
++	err = __es_remove_extent(inode, lblk, end, &reserved, es);
++	/* Free preallocated extent if it didn't get used. */
++	if (es) {
++		if (!es->es_len)
++			__es_free_extent(es);
++		es = NULL;
++	}
+ 	write_unlock(&EXT4_I(inode)->i_es_lock);
++	if (err)
++		goto retry;
++
+ 	ext4_es_print_tree(inode);
+ 	ext4_da_release_space(inode, reserved);
+-	return err;
++	return 0;
+ }
+ 
+ static int __es_shrink(struct ext4_sb_info *sbi, int nr_to_scan,
+@@ -1704,11 +1787,8 @@ static int es_do_reclaim_extents(struct ext4_inode_info *ei, ext4_lblk_t end,
+ 
+ 		(*nr_to_scan)--;
+ 		node = rb_next(&es->rb_node);
+-		/*
+-		 * We can't reclaim delayed extent from status tree because
+-		 * fiemap, bigallic, and seek_data/hole need to use it.
+-		 */
+-		if (ext4_es_is_delayed(es))
++
++		if (ext4_es_must_keep(es))
+ 			goto next;
+ 		if (ext4_es_is_referenced(es)) {
+ 			ext4_es_clear_referenced(es);
+@@ -1772,7 +1852,7 @@ void ext4_clear_inode_es(struct inode *inode)
+ 	while (node) {
+ 		es = rb_entry(node, struct extent_status, rb_node);
+ 		node = rb_next(node);
+-		if (!ext4_es_is_delayed(es)) {
++		if (!ext4_es_must_keep(es)) {
+ 			rb_erase(&es->rb_node, &tree->root);
+ 			ext4_es_free_extent(inode, es);
+ 		}
+@@ -1859,11 +1939,13 @@ static struct pending_reservation *__get_pending(struct inode *inode,
+  *
+  * @inode - file containing the cluster
+  * @lblk - logical block in the cluster to be added
++ * @prealloc - preallocated pending entry
+  *
+  * Returns 0 on successful insertion and -ENOMEM on failure.  If the
+  * pending reservation is already in the set, returns successfully.
+  */
+-static int __insert_pending(struct inode *inode, ext4_lblk_t lblk)
++static int __insert_pending(struct inode *inode, ext4_lblk_t lblk,
++			    struct pending_reservation **prealloc)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ 	struct ext4_pending_tree *tree = &EXT4_I(inode)->i_pending_tree;
+@@ -1889,10 +1971,15 @@ static int __insert_pending(struct inode *inode, ext4_lblk_t lblk)
+ 		}
+ 	}
+ 
+-	pr = kmem_cache_alloc(ext4_pending_cachep, GFP_ATOMIC);
+-	if (pr == NULL) {
+-		ret = -ENOMEM;
+-		goto out;
++	if (likely(*prealloc == NULL)) {
++		pr = __alloc_pending(false);
++		if (!pr) {
++			ret = -ENOMEM;
++			goto out;
++		}
++	} else {
++		pr = *prealloc;
++		*prealloc = NULL;
+ 	}
+ 	pr->lclu = lclu;
+ 
+@@ -1922,7 +2009,7 @@ static void __remove_pending(struct inode *inode, ext4_lblk_t lblk)
+ 	if (pr != NULL) {
+ 		tree = &EXT4_I(inode)->i_pending_tree;
+ 		rb_erase(&pr->rb_node, &tree->root);
+-		kmem_cache_free(ext4_pending_cachep, pr);
++		__free_pending(pr);
+ 	}
+ }
+ 
+@@ -1983,7 +2070,10 @@ int ext4_es_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk,
+ 				 bool allocated)
+ {
+ 	struct extent_status newes;
+-	int err = 0;
++	int err1 = 0, err2 = 0, err3 = 0;
++	struct extent_status *es1 = NULL;
++	struct extent_status *es2 = NULL;
++	struct pending_reservation *pr = NULL;
+ 
+ 	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
+ 		return 0;
+@@ -1998,29 +2088,52 @@ int ext4_es_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk,
+ 
+ 	ext4_es_insert_extent_check(inode, &newes);
+ 
++retry:
++	if (err1 && !es1)
++		es1 = __es_alloc_extent(true);
++	if ((err1 || err2) && !es2)
++		es2 = __es_alloc_extent(true);
++	if ((err1 || err2 || err3) && allocated && !pr)
++		pr = __alloc_pending(true);
+ 	write_lock(&EXT4_I(inode)->i_es_lock);
+ 
+-	err = __es_remove_extent(inode, lblk, lblk, NULL);
+-	if (err != 0)
+-		goto error;
+-retry:
+-	err = __es_insert_extent(inode, &newes);
+-	if (err == -ENOMEM && __es_shrink(EXT4_SB(inode->i_sb),
+-					  128, EXT4_I(inode)))
+-		goto retry;
+-	if (err != 0)
++	err1 = __es_remove_extent(inode, lblk, lblk, NULL, es1);
++	if (err1 != 0)
+ 		goto error;
++	/* Free preallocated extent if it didn't get used. */
++	if (es1) {
++		if (!es1->es_len)
++			__es_free_extent(es1);
++		es1 = NULL;
++	}
+ 
+-	if (allocated)
+-		__insert_pending(inode, lblk);
++	err2 = __es_insert_extent(inode, &newes, es2);
++	if (err2 != 0)
++		goto error;
++	/* Free preallocated extent if it didn't get used. */
++	if (es2) {
++		if (!es2->es_len)
++			__es_free_extent(es2);
++		es2 = NULL;
++	}
+ 
++	if (allocated) {
++		err3 = __insert_pending(inode, lblk, &pr);
++		if (err3 != 0)
++			goto error;
++		if (pr) {
++			__free_pending(pr);
++			pr = NULL;
++		}
++	}
+ error:
+ 	write_unlock(&EXT4_I(inode)->i_es_lock);
++	if (err1 || err2 || err3)
++		goto retry;
+ 
+ 	ext4_es_print_tree(inode);
+ 	ext4_print_pending_tree(inode);
+-
+-	return err;
++	return 0;
+ }
+ 
+ /*
+@@ -2121,21 +2234,24 @@ unsigned int ext4_es_delayed_clu(struct inode *inode, ext4_lblk_t lblk,
+  * @inode - file containing the range
+  * @lblk - logical block defining the start of range
+  * @len  - length of range in blocks
++ * @prealloc - preallocated pending entry
+  *
+  * Used after a newly allocated extent is added to the extents status tree.
+  * Requires that the extents in the range have either written or unwritten
+  * status.  Must be called while holding i_es_lock.
+  */
+-static void __revise_pending(struct inode *inode, ext4_lblk_t lblk,
+-			     ext4_lblk_t len)
++static int __revise_pending(struct inode *inode, ext4_lblk_t lblk,
++			    ext4_lblk_t len,
++			    struct pending_reservation **prealloc)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ 	ext4_lblk_t end = lblk + len - 1;
+ 	ext4_lblk_t first, last;
+ 	bool f_del = false, l_del = false;
++	int ret = 0;
+ 
+ 	if (len == 0)
+-		return;
++		return 0;
+ 
+ 	/*
+ 	 * Two cases - block range within single cluster and block range
+@@ -2156,7 +2272,9 @@ static void __revise_pending(struct inode *inode, ext4_lblk_t lblk,
+ 			f_del = __es_scan_range(inode, &ext4_es_is_delonly,
+ 						first, lblk - 1);
+ 		if (f_del) {
+-			__insert_pending(inode, first);
++			ret = __insert_pending(inode, first, prealloc);
++			if (ret < 0)
++				goto out;
+ 		} else {
+ 			last = EXT4_LBLK_CMASK(sbi, end) +
+ 			       sbi->s_cluster_ratio - 1;
+@@ -2164,9 +2282,11 @@ static void __revise_pending(struct inode *inode, ext4_lblk_t lblk,
+ 				l_del = __es_scan_range(inode,
+ 							&ext4_es_is_delonly,
+ 							end + 1, last);
+-			if (l_del)
+-				__insert_pending(inode, last);
+-			else
++			if (l_del) {
++				ret = __insert_pending(inode, last, prealloc);
++				if (ret < 0)
++					goto out;
++			} else
+ 				__remove_pending(inode, last);
+ 		}
+ 	} else {
+@@ -2174,18 +2294,24 @@ static void __revise_pending(struct inode *inode, ext4_lblk_t lblk,
+ 		if (first != lblk)
+ 			f_del = __es_scan_range(inode, &ext4_es_is_delonly,
+ 						first, lblk - 1);
+-		if (f_del)
+-			__insert_pending(inode, first);
+-		else
++		if (f_del) {
++			ret = __insert_pending(inode, first, prealloc);
++			if (ret < 0)
++				goto out;
++		} else
+ 			__remove_pending(inode, first);
+ 
+ 		last = EXT4_LBLK_CMASK(sbi, end) + sbi->s_cluster_ratio - 1;
+ 		if (last != end)
+ 			l_del = __es_scan_range(inode, &ext4_es_is_delonly,
+ 						end + 1, last);
+-		if (l_del)
+-			__insert_pending(inode, last);
+-		else
++		if (l_del) {
++			ret = __insert_pending(inode, last, prealloc);
++			if (ret < 0)
++				goto out;
++		} else
+ 			__remove_pending(inode, last);
+ 	}
++out:
++	return ret;
+ }
+diff --git a/fs/inode.c b/fs/inode.c
+index 311237e8d3595..5c7139aa2bda7 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -2392,6 +2392,22 @@ int vfs_ioc_fssetxattr_check(struct inode *inode, const struct fsxattr *old_fa,
+ }
+ EXPORT_SYMBOL(vfs_ioc_fssetxattr_check);
+ 
++/**
++ * inode_set_ctime_current - set the ctime to current_time
++ * @inode: inode
++ *
++ * Set the inode->i_ctime to the current value for the inode. Returns
++ * the current value that was assigned to i_ctime.
++ */
++struct timespec64 inode_set_ctime_current(struct inode *inode)
++{
++	struct timespec64 now = current_time(inode);
++
++	inode_set_ctime(inode, now.tv_sec, now.tv_nsec);
++	return now;
++}
++EXPORT_SYMBOL(inode_set_ctime_current);
++
+ /**
+  * in_group_or_capable - check whether caller is CAP_FSETID privileged
+  * @inode:	inode to check
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index b09ead06a2490..31edb883afd0d 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -1758,6 +1758,12 @@ nfsd_rename(struct svc_rqst *rqstp, struct svc_fh *ffhp, char *fname, int flen,
+ 	if (!flen || isdotent(fname, flen) || !tlen || isdotent(tname, tlen))
+ 		goto out;
+ 
++	err = (rqstp->rq_vers == 2) ? nfserr_acces : nfserr_xdev;
++	if (ffhp->fh_export->ex_path.mnt != tfhp->fh_export->ex_path.mnt)
++		goto out;
++	if (ffhp->fh_export->ex_path.dentry != tfhp->fh_export->ex_path.dentry)
++		goto out;
++
+ retry:
+ 	host_err = fh_want_write(ffhp);
+ 	if (host_err) {
+@@ -1792,12 +1798,6 @@ retry:
+ 	if (ndentry == trap)
+ 		goto out_dput_new;
+ 
+-	host_err = -EXDEV;
+-	if (ffhp->fh_export->ex_path.mnt != tfhp->fh_export->ex_path.mnt)
+-		goto out_dput_new;
+-	if (ffhp->fh_export->ex_path.dentry != tfhp->fh_export->ex_path.dentry)
+-		goto out_dput_new;
+-
+ 	if (nfsd_has_cached_files(ndentry)) {
+ 		has_cached = true;
+ 		goto out_dput_old;
+diff --git a/include/linux/device.h b/include/linux/device.h
+index 4f7e0c85e11fa..6394c4b70a090 100644
+--- a/include/linux/device.h
++++ b/include/linux/device.h
+@@ -348,6 +348,22 @@ enum dl_dev_state {
+ 	DL_DEV_UNBINDING,
+ };
+ 
++/**
++ * enum device_removable - Whether the device is removable. The criteria for a
++ * device to be classified as removable is determined by its subsystem or bus.
++ * @DEVICE_REMOVABLE_NOT_SUPPORTED: This attribute is not supported for this
++ *				    device (default).
++ * @DEVICE_REMOVABLE_UNKNOWN:  Device location is Unknown.
++ * @DEVICE_FIXED: Device is not removable by the user.
++ * @DEVICE_REMOVABLE: Device is removable by the user.
++ */
++enum device_removable {
++	DEVICE_REMOVABLE_NOT_SUPPORTED = 0, /* must be 0 */
++	DEVICE_REMOVABLE_UNKNOWN,
++	DEVICE_FIXED,
++	DEVICE_REMOVABLE,
++};
++
+ /**
+  * struct dev_links_info - Device data related to device links.
+  * @suppliers: List of links to supplier devices.
+@@ -435,6 +451,9 @@ struct dev_links_info {
+  * 		device (i.e. the bus driver that discovered the device).
+  * @iommu_group: IOMMU group the device belongs to.
+  * @iommu:	Per device generic IOMMU runtime data
++ * @removable:  Whether the device can be removed from the system. This
++ *              should be set by the subsystem / bus driver that discovered
++ *              the device.
+  *
+  * @offline_disabled: If set, the device is permanently online.
+  * @offline:	Set after successful invocation of bus type's .offline().
+@@ -546,6 +565,8 @@ struct device {
+ 	struct iommu_group	*iommu_group;
+ 	struct dev_iommu	*iommu;
+ 
++	enum device_removable	removable;
++
+ 	bool			offline_disabled:1;
+ 	bool			offline:1;
+ 	bool			of_node_reused:1;
+@@ -781,6 +802,22 @@ static inline bool dev_has_sync_state(struct device *dev)
+ 	return false;
+ }
+ 
++static inline void dev_set_removable(struct device *dev,
++				     enum device_removable removable)
++{
++	dev->removable = removable;
++}
++
++static inline bool dev_is_removable(struct device *dev)
++{
++	return dev->removable == DEVICE_REMOVABLE;
++}
++
++static inline bool dev_removable_is_valid(struct device *dev)
++{
++	return dev->removable != DEVICE_REMOVABLE_NOT_SUPPORTED;
++}
++
+ /*
+  * High level routines for use by the bus drivers
+  */
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index f9e25d0a7b9c8..82316863c71fd 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1580,7 +1580,50 @@ static inline void i_gid_write(struct inode *inode, gid_t gid)
+ 	inode->i_gid = make_kgid(inode->i_sb->s_user_ns, gid);
+ }
+ 
+-extern struct timespec64 current_time(struct inode *inode);
++struct timespec64 current_time(struct inode *inode);
++struct timespec64 inode_set_ctime_current(struct inode *inode);
++
++/**
++ * inode_get_ctime - fetch the current ctime from the inode
++ * @inode: inode from which to fetch ctime
++ *
++ * Grab the current ctime from the inode and return it.
++ */
++static inline struct timespec64 inode_get_ctime(const struct inode *inode)
++{
++	return inode->i_ctime;
++}
++
++/**
++ * inode_set_ctime_to_ts - set the ctime in the inode
++ * @inode: inode in which to set the ctime
++ * @ts: value to set in the ctime field
++ *
++ * Set the ctime in @inode to @ts
++ */
++static inline struct timespec64 inode_set_ctime_to_ts(struct inode *inode,
++						      struct timespec64 ts)
++{
++	inode->i_ctime = ts;
++	return ts;
++}
++
++/**
++ * inode_set_ctime - set the ctime in the inode
++ * @inode: inode in which to set the ctime
++ * @sec: tv_sec value to set
++ * @nsec: tv_nsec value to set
++ *
++ * Set the ctime in @inode to { @sec, @nsec }
++ */
++static inline struct timespec64 inode_set_ctime(struct inode *inode,
++						time64_t sec, long nsec)
++{
++	struct timespec64 ts = { .tv_sec  = sec,
++				 .tv_nsec = nsec };
++
++	return inode_set_ctime_to_ts(inode, ts);
++}
+ 
+ /*
+  * Snapshotting support.
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 256f34f49167c..9e306bf9959df 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -624,8 +624,13 @@ struct hid_device {							/* device report descriptor */
+ 	struct list_head debug_list;
+ 	spinlock_t  debug_list_lock;
+ 	wait_queue_head_t debug_wait;
++	struct kref			ref;
++
++	unsigned int id;						/* system unique id */
+ };
+ 
++void hiddev_free(struct kref *ref);
++
+ #define to_hid_device(pdev) \
+ 	container_of(pdev, struct hid_device, dev)
+ 
+diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
+index fb294cbb9081d..fb08b86acdbf3 100644
+--- a/include/linux/mmc/host.h
++++ b/include/linux/mmc/host.h
+@@ -405,6 +405,7 @@ struct mmc_host {
+ 	unsigned int		use_blk_mq:1;	/* use blk-mq */
+ 	unsigned int		retune_crc_disable:1; /* don't trigger retune upon crc */
+ 	unsigned int		can_dma_map_merge:1; /* merging can be used */
++	unsigned int		vqmmc_enabled:1; /* vqmmc regulator is enabled */
+ 
+ 	int			rescan_disable;	/* disable card detection */
+ 	int			rescan_entered;	/* used with nonremovable devices */
+@@ -546,6 +547,8 @@ static inline int mmc_regulator_set_vqmmc(struct mmc_host *mmc,
+ #endif
+ 
+ int mmc_regulator_get_supply(struct mmc_host *mmc);
++int mmc_regulator_enable_vqmmc(struct mmc_host *mmc);
++void mmc_regulator_disable_vqmmc(struct mmc_host *mmc);
+ 
+ static inline int mmc_card_is_removable(struct mmc_host *host)
+ {
+diff --git a/include/linux/platform_data/x86/soc.h b/include/linux/platform_data/x86/soc.h
+new file mode 100644
+index 0000000000000..da05f425587a0
+--- /dev/null
++++ b/include/linux/platform_data/x86/soc.h
+@@ -0,0 +1,65 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Helpers for Intel SoC model detection
++ *
++ * Copyright (c) 2019, Intel Corporation.
++ */
++
++#ifndef __PLATFORM_DATA_X86_SOC_H
++#define __PLATFORM_DATA_X86_SOC_H
++
++#if IS_ENABLED(CONFIG_X86)
++
++#include <asm/cpu_device_id.h>
++#include <asm/intel-family.h>
++
++#define SOC_INTEL_IS_CPU(soc, type)				\
++static inline bool soc_intel_is_##soc(void)			\
++{								\
++	static const struct x86_cpu_id soc##_cpu_ids[] = {	\
++		X86_MATCH_INTEL_FAM6_MODEL(type, NULL),		\
++		{}						\
++	};							\
++	const struct x86_cpu_id *id;				\
++								\
++	id = x86_match_cpu(soc##_cpu_ids);			\
++	if (id)							\
++		return true;					\
++	return false;						\
++}
++
++SOC_INTEL_IS_CPU(byt, ATOM_SILVERMONT);
++SOC_INTEL_IS_CPU(cht, ATOM_AIRMONT);
++SOC_INTEL_IS_CPU(apl, ATOM_GOLDMONT);
++SOC_INTEL_IS_CPU(glk, ATOM_GOLDMONT_PLUS);
++SOC_INTEL_IS_CPU(cml, KABYLAKE_L);
++
++#else /* IS_ENABLED(CONFIG_X86) */
++
++static inline bool soc_intel_is_byt(void)
++{
++	return false;
++}
++
++static inline bool soc_intel_is_cht(void)
++{
++	return false;
++}
++
++static inline bool soc_intel_is_apl(void)
++{
++	return false;
++}
++
++static inline bool soc_intel_is_glk(void)
++{
++	return false;
++}
++
++static inline bool soc_intel_is_cml(void)
++{
++	return false;
++}
++#endif /* IS_ENABLED(CONFIG_X86) */
++
++#endif /* __PLATFORM_DATA_X86_SOC_H */
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index 8bc1119afc317..e02cf70ca52f6 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -478,12 +478,6 @@ struct usb_dev_state;
+ 
+ struct usb_tt;
+ 
+-enum usb_device_removable {
+-	USB_DEVICE_REMOVABLE_UNKNOWN = 0,
+-	USB_DEVICE_REMOVABLE,
+-	USB_DEVICE_FIXED,
+-};
+-
+ enum usb_port_connect_type {
+ 	USB_PORT_CONNECT_TYPE_UNKNOWN = 0,
+ 	USB_PORT_CONNECT_TYPE_HOT_PLUG,
+@@ -710,7 +704,6 @@ struct usb_device {
+ #endif
+ 	struct wusb_dev *wusb_dev;
+ 	int slot_id;
+-	enum usb_device_removable removable;
+ 	struct usb2_lpm_parameters l1_params;
+ 	struct usb3_lpm_parameters u1_params;
+ 	struct usb3_lpm_parameters u2_params;
+diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
+index 460c58fa011ae..0383fac637e19 100644
+--- a/include/linux/workqueue.h
++++ b/include/linux/workqueue.h
+@@ -460,6 +460,7 @@ extern int schedule_on_each_cpu(work_func_t func);
+ int execute_in_process_context(work_func_t fn, struct execute_work *);
+ 
+ extern bool flush_work(struct work_struct *work);
++extern bool cancel_work(struct work_struct *work);
+ extern bool cancel_work_sync(struct work_struct *work);
+ 
+ extern bool flush_delayed_work(struct delayed_work *dwork);
+diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
+index 4d272e834ca2e..b1c9b52876f3c 100644
+--- a/include/scsi/scsi_cmnd.h
++++ b/include/scsi/scsi_cmnd.h
+@@ -144,6 +144,12 @@ struct scsi_cmnd {
+ 	unsigned int extra_len;	/* length of alignment and padding */
+ };
+ 
++/* Variant of blk_mq_rq_from_pdu() that verifies the type of its argument. */
++static inline struct request *scsi_cmd_to_rq(struct scsi_cmnd *scmd)
++{
++	return blk_mq_rq_from_pdu(scmd);
++}
++
+ /*
+  * Return the driver private allocation behind the command.
+  * Only works if cmd_size is set in the host template.
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 1fe05e70cc79f..0c4f159865b95 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -3149,7 +3149,7 @@ static int __io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter
+ 		 */
+ 		const struct bio_vec *bvec = imu->bvec;
+ 
+-		if (offset <= bvec->bv_len) {
++		if (offset < bvec->bv_len) {
+ 			iov_iter_advance(iter, offset);
+ 		} else {
+ 			unsigned long seg_skip;
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 6cbd2b4444769..7471d85f54ae5 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -3357,7 +3357,8 @@ static int alloc_chain_hlocks(int req)
+ 		size = chain_block_size(curr);
+ 		if (likely(size >= req)) {
+ 			del_chain_block(0, size, chain_block_next(curr));
+-			add_chain_block(curr + req, size - req);
++			if (size > req)
++				add_chain_block(curr + req, size - req);
+ 			return curr;
+ 		}
+ 	}
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 37d01e44d4837..63140e4dd5dfc 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3240,6 +3240,15 @@ static bool __cancel_work(struct work_struct *work, bool is_dwork)
+ 	return ret;
+ }
+ 
++/*
++ * See cancel_delayed_work()
++ */
++bool cancel_work(struct work_struct *work)
++{
++	return __cancel_work(work, false);
++}
++EXPORT_SYMBOL(cancel_work);
++
+ /**
+  * cancel_delayed_work - cancel a delayed work
+  * @dwork: delayed_work to cancel
+diff --git a/lib/errname.c b/lib/errname.c
+index a5799a8c9cab9..fec08e7c8a1db 100644
+--- a/lib/errname.c
++++ b/lib/errname.c
+@@ -110,9 +110,6 @@ static const char *names_0[] = {
+ 	E(ENOSPC),
+ 	E(ENOSR),
+ 	E(ENOSTR),
+-#ifdef ENOSYM
+-	E(ENOSYM),
+-#endif
+ 	E(ENOSYS),
+ 	E(ENOTBLK),
+ 	E(ENOTCONN),
+@@ -143,9 +140,6 @@ static const char *names_0[] = {
+ #endif
+ 	E(EREMOTE),
+ 	E(EREMOTEIO),
+-#ifdef EREMOTERELEASE
+-	E(EREMOTERELEASE),
+-#endif
+ 	E(ERESTART),
+ 	E(ERFKILL),
+ 	E(EROFS),
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 4cffdb9795ded..cb55fede03c04 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -216,8 +216,10 @@ static void igmp_start_timer(struct ip_mc_list *im, int max_delay)
+ 	int tv = prandom_u32() % max_delay;
+ 
+ 	im->tm_running = 1;
+-	if (!mod_timer(&im->timer, jiffies+tv+2))
+-		refcount_inc(&im->refcnt);
++	if (refcount_inc_not_zero(&im->refcnt)) {
++		if (mod_timer(&im->timer, jiffies + tv + 2))
++			ip_ma_put(im);
++	}
+ }
+ 
+ static void igmp_gq_start_timer(struct in_device *in_dev)
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 445b1a2966d79..d360c7d70e8a2 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -808,7 +808,7 @@ static void __ip_do_redirect(struct rtable *rt, struct sk_buff *skb, struct flow
+ 			goto reject_redirect;
+ 	}
+ 
+-	n = __ipv4_neigh_lookup(rt->dst.dev, new_gw);
++	n = __ipv4_neigh_lookup(rt->dst.dev, (__force u32)new_gw);
+ 	if (!n)
+ 		n = neigh_create(&arp_tbl, &new_gw, rt->dst.dev);
+ 	if (!IS_ERR(n)) {
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 9fc47292b68d8..664ddf5641dea 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -396,8 +396,12 @@ static int smcr_clnt_conf_first_link(struct smc_sock *smc)
+ 	struct smc_llc_qentry *qentry;
+ 	int rc;
+ 
+-	/* receive CONFIRM LINK request from server over RoCE fabric */
+-	qentry = smc_llc_wait(link->lgr, NULL, SMC_LLC_WAIT_TIME,
++	/* Receive CONFIRM LINK request from server over RoCE fabric.
++	 * Increasing the client's timeout by twice as much as the server's
++	 * timeout by default can temporarily avoid decline messages of
++	 * both sides crossing or colliding
++	 */
++	qentry = smc_llc_wait(link->lgr, NULL, 2 * SMC_LLC_WAIT_TIME,
+ 			      SMC_LLC_CONFIRM_LINK);
+ 	if (!qentry) {
+ 		struct smc_clc_msg_decline dclc;
+diff --git a/security/integrity/iint.c b/security/integrity/iint.c
+index 9ed2d5bfbed5e..b6e33ac70aef5 100644
+--- a/security/integrity/iint.c
++++ b/security/integrity/iint.c
+@@ -66,9 +66,32 @@ struct integrity_iint_cache *integrity_iint_find(struct inode *inode)
+ 	return iint;
+ }
+ 
+-static void iint_free(struct integrity_iint_cache *iint)
++#define IMA_MAX_NESTING (FILESYSTEM_MAX_STACK_DEPTH+1)
++
++/*
++ * It is not clear that IMA should be nested at all, but as long is it measures
++ * files both on overlayfs and on underlying fs, we need to annotate the iint
++ * mutex to avoid lockdep false positives related to IMA + overlayfs.
++ * See ovl_lockdep_annotate_inode_mutex_key() for more details.
++ */
++static inline void iint_lockdep_annotate(struct integrity_iint_cache *iint,
++					 struct inode *inode)
++{
++#ifdef CONFIG_LOCKDEP
++	static struct lock_class_key iint_mutex_key[IMA_MAX_NESTING];
++
++	int depth = inode->i_sb->s_stack_depth;
++
++	if (WARN_ON_ONCE(depth < 0 || depth >= IMA_MAX_NESTING))
++		depth = 0;
++
++	lockdep_set_class(&iint->mutex, &iint_mutex_key[depth]);
++#endif
++}
++
++static void iint_init_always(struct integrity_iint_cache *iint,
++			     struct inode *inode)
+ {
+-	kfree(iint->ima_hash);
+ 	iint->ima_hash = NULL;
+ 	iint->version = 0;
+ 	iint->flags = 0UL;
+@@ -80,6 +103,14 @@ static void iint_free(struct integrity_iint_cache *iint)
+ 	iint->ima_creds_status = INTEGRITY_UNKNOWN;
+ 	iint->evm_status = INTEGRITY_UNKNOWN;
+ 	iint->measured_pcrs = 0;
++	mutex_init(&iint->mutex);
++	iint_lockdep_annotate(iint, inode);
++}
++
++static void iint_free(struct integrity_iint_cache *iint)
++{
++	kfree(iint->ima_hash);
++	mutex_destroy(&iint->mutex);
+ 	kmem_cache_free(iint_cache, iint);
+ }
+ 
+@@ -112,6 +143,8 @@ struct integrity_iint_cache *integrity_inode_get(struct inode *inode)
+ 	if (!iint)
+ 		return NULL;
+ 
++	iint_init_always(iint, inode);
++
+ 	write_lock(&integrity_iint_lock);
+ 
+ 	p = &integrity_iint_tree.rb_node;
+@@ -161,25 +194,18 @@ void integrity_inode_free(struct inode *inode)
+ 	iint_free(iint);
+ }
+ 
+-static void init_once(void *foo)
++static void iint_init_once(void *foo)
+ {
+ 	struct integrity_iint_cache *iint = foo;
+ 
+ 	memset(iint, 0, sizeof(*iint));
+-	iint->ima_file_status = INTEGRITY_UNKNOWN;
+-	iint->ima_mmap_status = INTEGRITY_UNKNOWN;
+-	iint->ima_bprm_status = INTEGRITY_UNKNOWN;
+-	iint->ima_read_status = INTEGRITY_UNKNOWN;
+-	iint->ima_creds_status = INTEGRITY_UNKNOWN;
+-	iint->evm_status = INTEGRITY_UNKNOWN;
+-	mutex_init(&iint->mutex);
+ }
+ 
+ static int __init integrity_iintcache_init(void)
+ {
+ 	iint_cache =
+ 	    kmem_cache_create("iint_cache", sizeof(struct integrity_iint_cache),
+-			      0, SLAB_PANIC, init_once);
++			      0, SLAB_PANIC, iint_init_once);
+ 	return 0;
+ }
+ DEFINE_LSM(integrity) = {
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index dfef761d55214..12c6eb76fca31 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2278,6 +2278,8 @@ static const struct snd_pci_quirk power_save_denylist[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x36a7, "Lenovo C50 All in one", 0),
+ 	/* https://bugs.launchpad.net/bugs/1821663 */
+ 	SND_PCI_QUIRK(0x1631, 0xe017, "Packard Bell NEC IMEDIA 5204", 0),
++	/* KONTRON SinglePC may cause a stall at runtime resume */
++	SND_PCI_QUIRK(0x1734, 0x1232, "KONTRON SinglePC", 0),
+ 	{}
+ };
+ #endif /* CONFIG_PM */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 65f8dc65b9675..3d94a2f41d820 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -1994,6 +1994,7 @@ enum {
+ 	ALC887_FIXUP_ASUS_AUDIO,
+ 	ALC887_FIXUP_ASUS_HMIC,
+ 	ALCS1200A_FIXUP_MIC_VREF,
++	ALC888VD_FIXUP_MIC_100VREF,
+ };
+ 
+ static void alc889_fixup_coef(struct hda_codec *codec,
+@@ -2547,6 +2548,13 @@ static const struct hda_fixup alc882_fixups[] = {
+ 			{}
+ 		}
+ 	},
++	[ALC888VD_FIXUP_MIC_100VREF] = {
++		.type = HDA_FIXUP_PINCTLS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x18, PIN_VREF100 }, /* headset mic */
++			{}
++		}
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+@@ -2616,6 +2624,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x106b, 0x4a00, "Macbook 5,2", ALC889_FIXUP_MBA11_VREF),
+ 
+ 	SND_PCI_QUIRK(0x1071, 0x8258, "Evesham Voyaeger", ALC882_FIXUP_EAPD),
++	SND_PCI_QUIRK(0x10ec, 0x12d8, "iBase Elo Touch", ALC888VD_FIXUP_MIC_100VREF),
+ 	SND_PCI_QUIRK(0x13fe, 0x1009, "Advantech MIT-W101", ALC886_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+@@ -3257,6 +3266,7 @@ static void alc_disable_headset_jack_key(struct hda_codec *codec)
+ 	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x10ec0257:
+ 	case 0x19e58326:
+ 		alc_write_coef_idx(codec, 0x48, 0x0);
+ 		alc_update_coef_idx(codec, 0x49, 0x0045, 0x0);
+@@ -3286,6 +3296,7 @@ static void alc_enable_headset_jack_key(struct hda_codec *codec)
+ 	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x10ec0257:
+ 	case 0x19e58326:
+ 		alc_write_coef_idx(codec, 0x48, 0xd011);
+ 		alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045);
+@@ -6415,6 +6426,7 @@ static void alc_combo_jack_hp_jd_restart(struct hda_codec *codec)
+ 	case 0x10ec0236:
+ 	case 0x10ec0255:
+ 	case 0x10ec0256:
++	case 0x10ec0257:
+ 	case 0x19e58326:
+ 		alc_update_coef_idx(codec, 0x1b, 0x8000, 1 << 15); /* Reset HP JD */
+ 		alc_update_coef_idx(codec, 0x1b, 0x8000, 0 << 15);
+diff --git a/sound/soc/generic/simple-card.c b/sound/soc/generic/simple-card.c
+index ac97e8b7978c7..8723d540ac2fa 100644
+--- a/sound/soc/generic/simple-card.c
++++ b/sound/soc/generic/simple-card.c
+@@ -634,10 +634,12 @@ static int asoc_simple_probe(struct platform_device *pdev)
+ 
+ 		int dai_idx = 0;
+ 
++		ret = -EINVAL;
++
+ 		cinfo = dev->platform_data;
+ 		if (!cinfo) {
+ 			dev_err(dev, "no info for asoc-simple-card\n");
+-			return -EINVAL;
++			goto err;
+ 		}
+ 
+ 		if (!cinfo->name ||
+@@ -646,7 +648,7 @@ static int asoc_simple_probe(struct platform_device *pdev)
+ 		    !cinfo->platform ||
+ 		    !cinfo->cpu_dai.name) {
+ 			dev_err(dev, "insufficient asoc_simple_card_info settings\n");
+-			return -EINVAL;
++			goto err;
+ 		}
+ 
+ 		dai_props->cpu_dai	= &priv->dais[dai_idx++];
+diff --git a/sound/soc/intel/common/soc-intel-quirks.h b/sound/soc/intel/common/soc-intel-quirks.h
+index a93987ab7f4d7..de4e550c5b34d 100644
+--- a/sound/soc/intel/common/soc-intel-quirks.h
++++ b/sound/soc/intel/common/soc-intel-quirks.h
+@@ -9,34 +9,13 @@
+ #ifndef _SND_SOC_INTEL_QUIRKS_H
+ #define _SND_SOC_INTEL_QUIRKS_H
+ 
++#include <linux/platform_data/x86/soc.h>
++
+ #if IS_ENABLED(CONFIG_X86)
+ 
+ #include <linux/dmi.h>
+-#include <asm/cpu_device_id.h>
+-#include <asm/intel-family.h>
+ #include <asm/iosf_mbi.h>
+ 
+-#define SOC_INTEL_IS_CPU(soc, type)				\
+-static inline bool soc_intel_is_##soc(void)			\
+-{								\
+-	static const struct x86_cpu_id soc##_cpu_ids[] = {	\
+-		X86_MATCH_INTEL_FAM6_MODEL(type, NULL),		\
+-		{}						\
+-	};							\
+-	const struct x86_cpu_id *id;				\
+-								\
+-	id = x86_match_cpu(soc##_cpu_ids);			\
+-	if (id)							\
+-		return true;					\
+-	return false;						\
+-}
+-
+-SOC_INTEL_IS_CPU(byt, ATOM_SILVERMONT);
+-SOC_INTEL_IS_CPU(cht, ATOM_AIRMONT);
+-SOC_INTEL_IS_CPU(apl, ATOM_GOLDMONT);
+-SOC_INTEL_IS_CPU(glk, ATOM_GOLDMONT_PLUS);
+-SOC_INTEL_IS_CPU(cml, KABYLAKE_L);
+-
+ static inline bool soc_intel_is_byt_cr(struct platform_device *pdev)
+ {
+ 	/*
+@@ -114,30 +93,6 @@ static inline bool soc_intel_is_byt_cr(struct platform_device *pdev)
+ 	return false;
+ }
+ 
+-static inline bool soc_intel_is_byt(void)
+-{
+-	return false;
+-}
+-
+-static inline bool soc_intel_is_cht(void)
+-{
+-	return false;
+-}
+-
+-static inline bool soc_intel_is_apl(void)
+-{
+-	return false;
+-}
+-
+-static inline bool soc_intel_is_glk(void)
+-{
+-	return false;
+-}
+-
+-static inline bool soc_intel_is_cml(void)
+-{
+-	return false;
+-}
+ #endif
+ 
+- #endif /* _SND_SOC_INTEL_QUIRKS_H */
++#endif /* _SND_SOC_INTEL_QUIRKS_H */
+diff --git a/sound/soc/sof/sof-pci-dev.c b/sound/soc/sof/sof-pci-dev.c
+index fe9feaab6a0ac..ade292a61ed6d 100644
+--- a/sound/soc/sof/sof-pci-dev.c
++++ b/sound/soc/sof/sof-pci-dev.c
+@@ -12,6 +12,7 @@
+ #include <linux/dmi.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
++#include <linux/platform_data/x86/soc.h>
+ #include <linux/pm_runtime.h>
+ #include <sound/intel-dsp-config.h>
+ #include <sound/soc-acpi.h>
+@@ -31,17 +32,22 @@ static char *tplg_path;
+ module_param(tplg_path, charp, 0444);
+ MODULE_PARM_DESC(tplg_path, "alternate path for SOF topology.");
+ 
++static char *tplg_filename;
++module_param(tplg_filename, charp, 0444);
++MODULE_PARM_DESC(tplg_filename, "alternate filename for SOF topology.");
++
+ static int sof_pci_debug;
+ module_param_named(sof_pci_debug, sof_pci_debug, int, 0444);
+ MODULE_PARM_DESC(sof_pci_debug, "SOF PCI debug options (0x0 all off)");
+ 
+-static const char *sof_override_tplg_name;
++static const char *sof_dmi_override_tplg_name;
++static bool sof_dmi_use_community_key;
+ 
+ #define SOF_PCI_DISABLE_PM_RUNTIME BIT(0)
+ 
+ static int sof_tplg_cb(const struct dmi_system_id *id)
+ {
+-	sof_override_tplg_name = id->driver_data;
++	sof_dmi_override_tplg_name = id->driver_data;
+ 	return 1;
+ }
+ 
+@@ -57,25 +63,44 @@ static const struct dmi_system_id sof_tplg_table[] = {
+ 	{}
+ };
+ 
++/* all Up boards use the community key */
++static int up_use_community_key(const struct dmi_system_id *id)
++{
++	sof_dmi_use_community_key = true;
++	return 1;
++}
++
++/*
++ * For ApolloLake Chromebooks we want to force the use of the Intel production key.
++ * All newer platforms use the community key
++ */
++static int chromebook_use_community_key(const struct dmi_system_id *id)
++{
++	if (!soc_intel_is_apl())
++		sof_dmi_use_community_key = true;
++	return 1;
++}
++
+ static const struct dmi_system_id community_key_platforms[] = {
+ 	{
+-		.ident = "Up Squared",
++		.ident = "Up boards",
++		.callback = up_use_community_key,
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "AAEON"),
+-			DMI_MATCH(DMI_BOARD_NAME, "UP-APL01"),
+ 		}
+ 	},
+ 	{
+-		.ident = "Up Extreme",
++		.ident = "Google Chromebooks",
++		.callback = chromebook_use_community_key,
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "AAEON"),
+-			DMI_MATCH(DMI_BOARD_NAME, "UP-WHL01"),
++			DMI_MATCH(DMI_PRODUCT_FAMILY, "Google"),
+ 		}
+ 	},
+ 	{
+-		.ident = "Google Chromebooks",
++		.ident = "Google firmware",
++		.callback = chromebook_use_community_key,
+ 		.matches = {
+-			DMI_MATCH(DMI_PRODUCT_FAMILY, "Google"),
++			DMI_MATCH(DMI_BIOS_VERSION, "Google"),
+ 		}
+ 	},
+ 	{},
+@@ -379,7 +404,7 @@ static int sof_pci_probe(struct pci_dev *pci,
+ 			"Module parameter used, changed fw path to %s\n",
+ 			sof_pdata->fw_filename_prefix);
+ 
+-	} else if (dmi_check_system(community_key_platforms)) {
++	} else if (dmi_check_system(community_key_platforms) && sof_dmi_use_community_key) {
+ 		sof_pdata->fw_filename_prefix =
+ 			devm_kasprintf(dev, GFP_KERNEL, "%s/%s",
+ 				       sof_pdata->desc->default_fw_path,
+@@ -399,9 +424,20 @@ static int sof_pci_probe(struct pci_dev *pci,
+ 		sof_pdata->tplg_filename_prefix =
+ 			sof_pdata->desc->default_tplg_path;
+ 
+-	dmi_check_system(sof_tplg_table);
+-	if (sof_override_tplg_name)
+-		sof_pdata->tplg_filename = sof_override_tplg_name;
++	/*
++	 * the topology filename will be provided in the machine descriptor, unless
++	 * it is overridden by a module parameter or DMI quirk.
++	 */
++	if (tplg_filename) {
++		sof_pdata->tplg_filename = tplg_filename;
++
++		dev_dbg(dev, "Module parameter used, changed tplg filename to %s\n",
++			sof_pdata->tplg_filename);
++	} else {
++		dmi_check_system(sof_tplg_table);
++		if (sof_dmi_override_tplg_name)
++			sof_pdata->tplg_filename = sof_dmi_override_tplg_name;
++	}
+ 
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_PROBE_WORK_QUEUE)
+ 	/* set callback to enable runtime_pm */
+diff --git a/tools/arch/parisc/include/uapi/asm/errno.h b/tools/arch/parisc/include/uapi/asm/errno.h
+index 87245c584784e..8d94739d75c67 100644
+--- a/tools/arch/parisc/include/uapi/asm/errno.h
++++ b/tools/arch/parisc/include/uapi/asm/errno.h
+@@ -75,7 +75,6 @@
+ 
+ /* We now return you to your regularly scheduled HPUX. */
+ 
+-#define ENOSYM		215	/* symbol does not exist in executable */
+ #define	ENOTSOCK	216	/* Socket operation on non-socket */
+ #define	EDESTADDRREQ	217	/* Destination address required */
+ #define	EMSGSIZE	218	/* Message too long */
+@@ -101,7 +100,6 @@
+ #define	ETIMEDOUT	238	/* Connection timed out */
+ #define	ECONNREFUSED	239	/* Connection refused */
+ #define	EREFUSED	ECONNREFUSED	/* for HP's NFS apparently */
+-#define	EREMOTERELEASE	240	/* Remote peer released connection */
+ #define	EHOSTDOWN	241	/* Host is down */
+ #define	EHOSTUNREACH	242	/* No route to host */
+ 
+diff --git a/tools/testing/selftests/net/ipsec.c b/tools/testing/selftests/net/ipsec.c
+index 17ced7d6ce259..03b048b668315 100644
+--- a/tools/testing/selftests/net/ipsec.c
++++ b/tools/testing/selftests/net/ipsec.c
+@@ -2117,7 +2117,7 @@ static int check_results(void)
+ 
+ int main(int argc, char **argv)
+ {
+-	unsigned int nr_process = 1;
++	long nr_process = 1;
+ 	int route_sock = -1, ret = KSFT_SKIP;
+ 	int test_desc_fd[2];
+ 	uint32_t route_seq;
+@@ -2138,7 +2138,7 @@ int main(int argc, char **argv)
+ 			exit_usage(argv);
+ 		}
+ 
+-		if (nr_process > MAX_PROCESSES || !nr_process) {
++		if (nr_process > MAX_PROCESSES || nr_process < 1) {
+ 			printk("nr_process should be between [1; %u]",
+ 					MAX_PROCESSES);
+ 			exit_usage(argv);
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index 77bb62feb8726..37c1ec888ba2e 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -15,6 +15,7 @@
+ #include <unistd.h>
+ 
+ #include <sys/poll.h>
++#include <sys/random.h>
+ #include <sys/sendfile.h>
+ #include <sys/stat.h>
+ #include <sys/socket.h>
+@@ -734,15 +735,11 @@ int main_loop_s(int listensock)
+ 
+ static void init_rng(void)
+ {
+-	int fd = open("/dev/urandom", O_RDONLY);
+ 	unsigned int foo;
+ 
+-	if (fd > 0) {
+-		int ret = read(fd, &foo, sizeof(foo));
+-
+-		if (ret < 0)
+-			srand(fd + foo);
+-		close(fd);
++	if (getrandom(&foo, sizeof(foo), 0) == -1) {
++		perror("getrandom");
++		exit(1);
+ 	}
+ 
+ 	srand(foo);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-12-13 18:29 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-12-13 18:29 UTC (permalink / raw
  To: gentoo-commits

commit:     fff95d7d155d1df9764c8f7c13a36f332b7bc142
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 13 18:29:15 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec 13 18:29:15 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fff95d7d

Linux patch 5.10.204

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1203_linux-5.10.204.patch | 5161 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5165 insertions(+)

diff --git a/0000_README b/0000_README
index 9fa95630..0ee398ed 100644
--- a/0000_README
+++ b/0000_README
@@ -855,6 +855,10 @@ Patch:  1202_linux-5.10.203.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.203
 
+Patch:  1203_linux-5.10.204.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.204
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1203_linux-5.10.204.patch b/1203_linux-5.10.204.patch
new file mode 100644
index 00000000..88f6b212
--- /dev/null
+++ b/1203_linux-5.10.204.patch
@@ -0,0 +1,5161 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-optee-devices b/Documentation/ABI/testing/sysfs-bus-optee-devices
+index 0f58701367b66..af31e5a22d89f 100644
+--- a/Documentation/ABI/testing/sysfs-bus-optee-devices
++++ b/Documentation/ABI/testing/sysfs-bus-optee-devices
+@@ -6,3 +6,12 @@ Description:
+ 		OP-TEE bus provides reference to registered drivers under this directory. The <uuid>
+ 		matches Trusted Application (TA) driver and corresponding TA in secure OS. Drivers
+ 		are free to create needed API under optee-ta-<uuid> directory.
++
++What:		/sys/bus/tee/devices/optee-ta-<uuid>/need_supplicant
++Date:		November 2023
++KernelVersion:	6.7
++Contact:	op-tee@lists.trustedfirmware.org
++Description:
++		Allows to distinguish whether an OP-TEE based TA/device requires user-space
++		tee-supplicant to function properly or not. This attribute will be present for
++		devices which depend on tee-supplicant to be running.
+diff --git a/Documentation/ABI/testing/sysfs-platform-asus-wmi b/Documentation/ABI/testing/sysfs-platform-asus-wmi
+index 04885738cf156..0f8f0772d6f3b 100644
+--- a/Documentation/ABI/testing/sysfs-platform-asus-wmi
++++ b/Documentation/ABI/testing/sysfs-platform-asus-wmi
+@@ -57,3 +57,12 @@ Description:
+ 			* 0 - default,
+ 			* 1 - overboost,
+ 			* 2 - silent
++
++What:		/sys/devices/platform/<platform>/dgpu_disable
++Date:		Aug 2022
++KernelVersion:	5.17
++Contact:	"Luke Jones" <luke@ljones.dev>
++Description:
++		Disable discrete GPU:
++			* 0 - Enable dGPU,
++			* 1 - Disable dGPU
+diff --git a/Makefile b/Makefile
+index 4cbeb198196b0..87886018d774c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 203
++SUBLEVEL = 204
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index 854c380cccea4..03bde2fb9bb11 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -437,7 +437,7 @@
+ 			};
+ 
+ 			gpt1: timer@302d0000 {
+-				compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
++				compatible = "fsl,imx7d-gpt", "fsl,imx6dl-gpt";
+ 				reg = <0x302d0000 0x10000>;
+ 				interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX7D_GPT1_ROOT_CLK>,
+@@ -446,7 +446,7 @@
+ 			};
+ 
+ 			gpt2: timer@302e0000 {
+-				compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
++				compatible = "fsl,imx7d-gpt", "fsl,imx6dl-gpt";
+ 				reg = <0x302e0000 0x10000>;
+ 				interrupts = <GIC_SPI 54 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX7D_GPT2_ROOT_CLK>,
+@@ -456,7 +456,7 @@
+ 			};
+ 
+ 			gpt3: timer@302f0000 {
+-				compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
++				compatible = "fsl,imx7d-gpt", "fsl,imx6dl-gpt";
+ 				reg = <0x302f0000 0x10000>;
+ 				interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX7D_GPT3_ROOT_CLK>,
+@@ -466,7 +466,7 @@
+ 			};
+ 
+ 			gpt4: timer@30300000 {
+-				compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
++				compatible = "fsl,imx7d-gpt", "fsl,imx6dl-gpt";
+ 				reg = <0x30300000 0x10000>;
+ 				interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX7D_GPT4_ROOT_CLK>,
+diff --git a/arch/arm/mach-imx/mmdc.c b/arch/arm/mach-imx/mmdc.c
+index b9efe9da06e0b..3d76e8c28c51d 100644
+--- a/arch/arm/mach-imx/mmdc.c
++++ b/arch/arm/mach-imx/mmdc.c
+@@ -502,6 +502,10 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
+ 
+ 	name = devm_kasprintf(&pdev->dev,
+ 				GFP_KERNEL, "mmdc%d", ret);
++	if (!name) {
++		ret = -ENOMEM;
++		goto pmu_release_id;
++	}
+ 
+ 	pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
+ 	pmu_mmdc->devtype_data = (struct fsl_mmdc_devtype_data *)of_id->data;
+@@ -524,9 +528,10 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
+ 
+ pmu_register_err:
+ 	pr_warn("MMDC Perf PMU failed (%d), disabled\n", ret);
+-	ida_simple_remove(&mmdc_ida, pmu_mmdc->id);
+ 	cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
+ 	hrtimer_cancel(&pmu_mmdc->hrtimer);
++pmu_release_id:
++	ida_simple_remove(&mmdc_ida, pmu_mmdc->id);
+ pmu_free:
+ 	kfree(pmu_mmdc);
+ 	return ret;
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+index 3053f484c8cc1..7e6cffdc5a551 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+@@ -69,7 +69,7 @@
+ 		};
+ 	};
+ 
+-	memory {
++	memory@40000000 {
+ 		reg = <0 0x40000000 0 0x40000000>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
+index 08ad0ffb24df3..993f033d0bf04 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
+@@ -55,7 +55,7 @@
+ 		};
+ 	};
+ 
+-	memory {
++	memory@40000000 {
+ 		reg = <0 0x40000000 0 0x20000000>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-evb.dts b/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
+index 6dffada2e66b4..2b66afcf026e1 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
+@@ -43,7 +43,7 @@
+ 		id-gpio = <&pio 16 GPIO_ACTIVE_HIGH>;
+ 	};
+ 
+-	usb_p1_vbus: regulator@0 {
++	usb_p1_vbus: regulator-usb-p1 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "usb_vbus";
+ 		regulator-min-microvolt = <5000000>;
+@@ -52,7 +52,7 @@
+ 		enable-active-high;
+ 	};
+ 
+-	usb_p0_vbus: regulator@1 {
++	usb_p0_vbus: regulator-usb-p0 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vbus";
+ 		regulator-min-microvolt = <5000000>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-evb.dts b/arch/arm64/boot/dts/mediatek/mt8183-evb.dts
+index cba2d8933e793..1d2d2e7239a39 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-evb.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-evb.dts
+@@ -30,7 +30,7 @@
+ 		#address-cells = <2>;
+ 		#size-cells = <2>;
+ 		ranges;
+-		scp_mem_reserved: scp_mem_region {
++		scp_mem_reserved: memory@50000000 {
+ 			compatible = "shared-dma-pool";
+ 			reg = <0 0x50000000 0 0x2900000>;
+ 			no-map;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+index 85f7c33ba4461..a4f860bb4a842 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+@@ -95,7 +95,7 @@
+ 		#size-cells = <2>;
+ 		ranges;
+ 
+-		scp_mem_reserved: scp_mem_region {
++		scp_mem_reserved: memory@50000000 {
+ 			compatible = "shared-dma-pool";
+ 			reg = <0 0x50000000 0 0x2900000>;
+ 			no-map;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index c5f3d4f8f4d21..3180f576ed02e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -1022,7 +1022,9 @@
+ 			power-domain@RK3399_PD_VDU {
+ 				reg = <RK3399_PD_VDU>;
+ 				clocks = <&cru ACLK_VDU>,
+-					 <&cru HCLK_VDU>;
++					 <&cru HCLK_VDU>,
++					 <&cru SCLK_VDU_CA>,
++					 <&cru SCLK_VDU_CORE>;
+ 				pm_qos = <&qos_video_m1_r>,
+ 					 <&qos_video_m1_w>;
+ 			};
+@@ -1276,7 +1278,7 @@
+ 
+ 	vdec: video-codec@ff660000 {
+ 		compatible = "rockchip,rk3399-vdec";
+-		reg = <0x0 0xff660000 0x0 0x400>;
++		reg = <0x0 0xff660000 0x0 0x480>;
+ 		interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH 0>;
+ 		clocks = <&cru ACLK_VDU>, <&cru HCLK_VDU>,
+ 			 <&cru SCLK_VDU_CA>, <&cru SCLK_VDU_CORE>;
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 57839f63074f7..18ebacf298891 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -468,6 +468,7 @@ config MACH_LOONGSON2EF
+ 
+ config MACH_LOONGSON64
+ 	bool "Loongson 64-bit family of machines"
++	select ARCH_DMA_DEFAULT_COHERENT
+ 	select ARCH_SPARSEMEM_ENABLE
+ 	select ARCH_MIGHT_HAVE_PC_PARPORT
+ 	select ARCH_MIGHT_HAVE_PC_SERIO
+@@ -1379,6 +1380,7 @@ config CPU_LOONGSON64
+ 	select CPU_SUPPORTS_MSA
+ 	select CPU_DIEI_BROKEN if !LOONGSON3_ENHANCEMENT
+ 	select CPU_MIPSR2_IRQ_VI
++	select DMA_NONCOHERENT
+ 	select WEAK_ORDERING
+ 	select WEAK_REORDERING_BEYOND_LLSC
+ 	select MIPS_ASID_BITS_VARIABLE
+diff --git a/arch/mips/include/asm/mach-loongson64/boot_param.h b/arch/mips/include/asm/mach-loongson64/boot_param.h
+index afc92b7a61c60..de0bd14d798aa 100644
+--- a/arch/mips/include/asm/mach-loongson64/boot_param.h
++++ b/arch/mips/include/asm/mach-loongson64/boot_param.h
+@@ -117,7 +117,8 @@ struct irq_source_routing_table {
+ 	u64 pci_io_start_addr;
+ 	u64 pci_io_end_addr;
+ 	u64 pci_config_addr;
+-	u32 dma_mask_bits;
++	u16 dma_mask_bits;
++	u16 dma_noncoherent;
+ } __packed;
+ 
+ struct interface_info {
+diff --git a/arch/mips/loongson64/env.c b/arch/mips/loongson64/env.c
+index 134cb8e9efc21..a59bae36f86a7 100644
+--- a/arch/mips/loongson64/env.c
++++ b/arch/mips/loongson64/env.c
+@@ -13,6 +13,8 @@
+  * Copyright (C) 2009 Lemote Inc.
+  * Author: Wu Zhangjin, wuzhangjin@gmail.com
+  */
++
++#include <linux/dma-map-ops.h>
+ #include <linux/export.h>
+ #include <linux/pci_ids.h>
+ #include <asm/bootinfo.h>
+@@ -131,8 +133,14 @@ void __init prom_init_env(void)
+ 	loongson_sysconf.pci_io_base = eirq_source->pci_io_start_addr;
+ 	loongson_sysconf.dma_mask_bits = eirq_source->dma_mask_bits;
+ 	if (loongson_sysconf.dma_mask_bits < 32 ||
+-		loongson_sysconf.dma_mask_bits > 64)
++			loongson_sysconf.dma_mask_bits > 64) {
+ 		loongson_sysconf.dma_mask_bits = 32;
++		dma_default_coherent = true;
++	} else {
++		dma_default_coherent = !eirq_source->dma_noncoherent;
++	}
++
++	pr_info("Firmware: Coherent DMA: %s\n", dma_default_coherent ? "on" : "off");
+ 
+ 	loongson_sysconf.restart_addr = boot_p->reset_system.ResetWarm;
+ 	loongson_sysconf.poweroff_addr = boot_p->reset_system.Shutdown;
+diff --git a/arch/mips/loongson64/init.c b/arch/mips/loongson64/init.c
+index 052cce6a8a998..4071cc8841e33 100644
+--- a/arch/mips/loongson64/init.c
++++ b/arch/mips/loongson64/init.c
+@@ -140,6 +140,11 @@ static __init void reserve_pio_range(void)
+ 			}
+ 		}
+ 	}
++
++	/* Reserve vgabios if it comes from firmware */
++	if (loongson_sysconf.vgabios_addr)
++		memblock_reserve(virt_to_phys((void *)loongson_sysconf.vgabios_addr),
++				SZ_256K);
+ }
+ 
+ void __init arch_init_irq(void)
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index 46c4dafe3ba0e..b246c3dc69930 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -344,16 +344,14 @@ int handle_misaligned_store(struct pt_regs *regs)
+ 	} else if ((insn & INSN_MASK_C_SD) == INSN_MATCH_C_SD) {
+ 		len = 8;
+ 		val.data_ulong = GET_RS2S(insn, regs);
+-	} else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP &&
+-		   ((insn >> SH_RD) & 0x1f)) {
++	} else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP) {
+ 		len = 8;
+ 		val.data_ulong = GET_RS2C(insn, regs);
+ #endif
+ 	} else if ((insn & INSN_MASK_C_SW) == INSN_MATCH_C_SW) {
+ 		len = 4;
+ 		val.data_ulong = GET_RS2S(insn, regs);
+-	} else if ((insn & INSN_MASK_C_SWSP) == INSN_MATCH_C_SWSP &&
+-		   ((insn >> SH_RD) & 0x1f)) {
++	} else if ((insn & INSN_MASK_C_SWSP) == INSN_MATCH_C_SWSP) {
+ 		len = 4;
+ 		val.data_ulong = GET_RS2C(insn, regs);
+ 	} else {
+diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
+index 1c05caf68e7d8..5b62e385af386 100644
+--- a/arch/s390/mm/pgtable.c
++++ b/arch/s390/mm/pgtable.c
+@@ -717,7 +717,7 @@ void ptep_zap_unused(struct mm_struct *mm, unsigned long addr,
+ 		pte_clear(mm, addr, ptep);
+ 	}
+ 	if (reset)
+-		pgste_val(pgste) &= ~_PGSTE_GPS_USAGE_MASK;
++		pgste_val(pgste) &= ~(_PGSTE_GPS_USAGE_MASK | _PGSTE_GPS_NODAT);
+ 	pgste_set_unlock(ptep, pgste);
+ 	preempt_enable();
+ }
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index d254e02aa179f..f29c6bed9d657 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1328,6 +1328,9 @@ static void zenbleed_check_cpu(void *unused)
+ 
+ void amd_check_microcode(void)
+ {
++	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
++		return;
++
+ 	on_each_cpu(zenbleed_check_cpu, NULL, 1);
+ }
+ 
+diff --git a/drivers/base/devcoredump.c b/drivers/base/devcoredump.c
+index 9243468e2c99f..4d725d1fd61ae 100644
+--- a/drivers/base/devcoredump.c
++++ b/drivers/base/devcoredump.c
+@@ -29,6 +29,47 @@ struct devcd_entry {
+ 	struct device devcd_dev;
+ 	void *data;
+ 	size_t datalen;
++	/*
++	 * Here, mutex is required to serialize the calls to del_wk work between
++	 * user/kernel space which happens when devcd is added with device_add()
++	 * and that sends uevent to user space. User space reads the uevents,
++	 * and calls to devcd_data_write() which try to modify the work which is
++	 * not even initialized/queued from devcoredump.
++	 *
++	 *
++	 *
++	 *        cpu0(X)                                 cpu1(Y)
++	 *
++	 *        dev_coredump() uevent sent to user space
++	 *        device_add()  ======================> user space process Y reads the
++	 *                                              uevents writes to devcd fd
++	 *                                              which results into writes to
++	 *
++	 *                                             devcd_data_write()
++	 *                                               mod_delayed_work()
++	 *                                                 try_to_grab_pending()
++	 *                                                   del_timer()
++	 *                                                     debug_assert_init()
++	 *       INIT_DELAYED_WORK()
++	 *       schedule_delayed_work()
++	 *
++	 *
++	 * Also, mutex alone would not be enough to avoid scheduling of
++	 * del_wk work after it get flush from a call to devcd_free()
++	 * mentioned as below.
++	 *
++	 *	disabled_store()
++	 *        devcd_free()
++	 *          mutex_lock()             devcd_data_write()
++	 *          flush_delayed_work()
++	 *          mutex_unlock()
++	 *                                   mutex_lock()
++	 *                                   mod_delayed_work()
++	 *                                   mutex_unlock()
++	 * So, delete_work flag is required.
++	 */
++	struct mutex mutex;
++	bool delete_work;
+ 	struct module *owner;
+ 	ssize_t (*read)(char *buffer, loff_t offset, size_t count,
+ 			void *data, size_t datalen);
+@@ -88,7 +129,12 @@ static ssize_t devcd_data_write(struct file *filp, struct kobject *kobj,
+ 	struct device *dev = kobj_to_dev(kobj);
+ 	struct devcd_entry *devcd = dev_to_devcd(dev);
+ 
+-	mod_delayed_work(system_wq, &devcd->del_wk, 0);
++	mutex_lock(&devcd->mutex);
++	if (!devcd->delete_work) {
++		devcd->delete_work = true;
++		mod_delayed_work(system_wq, &devcd->del_wk, 0);
++	}
++	mutex_unlock(&devcd->mutex);
+ 
+ 	return count;
+ }
+@@ -116,7 +162,12 @@ static int devcd_free(struct device *dev, void *data)
+ {
+ 	struct devcd_entry *devcd = dev_to_devcd(dev);
+ 
++	mutex_lock(&devcd->mutex);
++	if (!devcd->delete_work)
++		devcd->delete_work = true;
++
+ 	flush_delayed_work(&devcd->del_wk);
++	mutex_unlock(&devcd->mutex);
+ 	return 0;
+ }
+ 
+@@ -126,6 +177,30 @@ static ssize_t disabled_show(struct class *class, struct class_attribute *attr,
+ 	return sysfs_emit(buf, "%d\n", devcd_disabled);
+ }
+ 
++/*
++ *
++ *	disabled_store()                                	worker()
++ *	 class_for_each_device(&devcd_class,
++ *		NULL, NULL, devcd_free)
++ *         ...
++ *         ...
++ *	   while ((dev = class_dev_iter_next(&iter))
++ *                                                             devcd_del()
++ *                                                               device_del()
++ *                                                                 put_device() <- last reference
++ *             error = fn(dev, data)                           devcd_dev_release()
++ *             devcd_free(dev, data)                           kfree(devcd)
++ *             mutex_lock(&devcd->mutex);
++ *
++ *
++ * In the above diagram, It looks like disabled_store() would be racing with parallely
++ * running devcd_del() and result in memory abort while acquiring devcd->mutex which
++ * is called after kfree of devcd memory  after dropping its last reference with
++ * put_device(). However, this will not happens as fn(dev, data) runs
++ * with its own reference to device via klist_node so it is not its last reference.
++ * so, above situation would not occur.
++ */
++
+ static ssize_t disabled_store(struct class *class, struct class_attribute *attr,
+ 			      const char *buf, size_t count)
+ {
+@@ -282,13 +357,17 @@ void dev_coredumpm(struct device *dev, struct module *owner,
+ 	devcd->read = read;
+ 	devcd->free = free;
+ 	devcd->failing_dev = get_device(dev);
++	devcd->delete_work = false;
+ 
++	mutex_init(&devcd->mutex);
+ 	device_initialize(&devcd->devcd_dev);
+ 
+ 	dev_set_name(&devcd->devcd_dev, "devcd%d",
+ 		     atomic_inc_return(&devcd_count));
+ 	devcd->devcd_dev.class = &devcd_class;
+ 
++	mutex_lock(&devcd->mutex);
++	dev_set_uevent_suppress(&devcd->devcd_dev, true);
+ 	if (device_add(&devcd->devcd_dev))
+ 		goto put_device;
+ 
+@@ -300,12 +379,15 @@ void dev_coredumpm(struct device *dev, struct module *owner,
+ 			      "devcoredump"))
+ 		/* nothing - symlink will be missing */;
+ 
++	dev_set_uevent_suppress(&devcd->devcd_dev, false);
++	kobject_uevent(&devcd->devcd_dev.kobj, KOBJ_ADD);
+ 	INIT_DELAYED_WORK(&devcd->del_wk, devcd_del);
+ 	schedule_delayed_work(&devcd->del_wk, DEVCD_TIMEOUT);
+-
++	mutex_unlock(&devcd->mutex);
+ 	return;
+  put_device:
+ 	put_device(&devcd->devcd_dev);
++	mutex_unlock(&devcd->mutex);
+  put_module:
+ 	module_put(owner);
+  free:
+diff --git a/drivers/gpio/gpiolib-sysfs.c b/drivers/gpio/gpiolib-sysfs.c
+index fa5d945b2f286..f45ba0f789d4f 100644
+--- a/drivers/gpio/gpiolib-sysfs.c
++++ b/drivers/gpio/gpiolib-sysfs.c
+@@ -491,14 +491,17 @@ static ssize_t export_store(struct class *class,
+ 	}
+ 
+ 	status = gpiod_set_transitory(desc, false);
+-	if (!status) {
+-		status = gpiod_export(desc, true);
+-		if (status < 0)
+-			gpiod_free(desc);
+-		else
+-			set_bit(FLAG_SYSFS, &desc->flags);
++	if (status) {
++		gpiod_free(desc);
++		goto done;
+ 	}
+ 
++	status = gpiod_export(desc, true);
++	if (status < 0)
++		gpiod_free(desc);
++	else
++		set_bit(FLAG_SYSFS, &desc->flags);
++
+ done:
+ 	if (status)
+ 		pr_debug("%s: status %d\n", __func__, status);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 7f2adac82e3a6..addeda42339fa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -143,7 +143,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
+ 	}
+ 
+ 	for (i = 0; i < p->nchunks; i++) {
+-		struct drm_amdgpu_cs_chunk __user **chunk_ptr = NULL;
++		struct drm_amdgpu_cs_chunk __user *chunk_ptr = NULL;
+ 		struct drm_amdgpu_cs_chunk user_chunk;
+ 		uint32_t __user *cdata;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+index 7cc7af2a6822e..3671a700189d0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+@@ -299,13 +299,11 @@ int amdgpu_display_crtc_set_config(struct drm_mode_set *set,
+ 		adev->have_disp_power_ref = true;
+ 		return ret;
+ 	}
+-	/* if we have no active crtcs, then drop the power ref
+-	   we got before */
+-	if (!active && adev->have_disp_power_ref) {
+-		pm_runtime_put_autosuspend(dev->dev);
++	/* if we have no active crtcs, then go to
++	 * drop the power ref we got before
++	 */
++	if (!active && adev->have_disp_power_ref)
+ 		adev->have_disp_power_ref = false;
+-	}
+-
+ out:
+ 	/* drop the power reference we got coming in here */
+ 	pm_runtime_put_autosuspend(dev->dev);
+diff --git a/drivers/hwmon/acpi_power_meter.c b/drivers/hwmon/acpi_power_meter.c
+index a270b975e90bb..5794d44538e9c 100644
+--- a/drivers/hwmon/acpi_power_meter.c
++++ b/drivers/hwmon/acpi_power_meter.c
+@@ -32,6 +32,7 @@ ACPI_MODULE_NAME(ACPI_POWER_METER_NAME);
+ #define POWER_METER_CAN_NOTIFY	(1 << 3)
+ #define POWER_METER_IS_BATTERY	(1 << 8)
+ #define UNKNOWN_HYSTERESIS	0xFFFFFFFF
++#define UNKNOWN_POWER		0xFFFFFFFF
+ 
+ #define METER_NOTIFY_CONFIG	0x80
+ #define METER_NOTIFY_TRIP	0x81
+@@ -343,6 +344,9 @@ static ssize_t show_power(struct device *dev,
+ 	update_meter(resource);
+ 	mutex_unlock(&resource->lock);
+ 
++	if (resource->power == UNKNOWN_POWER)
++		return -ENODATA;
++
+ 	return sprintf(buf, "%llu\n", resource->power * 1000);
+ }
+ 
+diff --git a/drivers/i2c/busses/i2c-designware-common.c b/drivers/i2c/busses/i2c-designware-common.c
+index 682fffaab2b40..c9f7783ac7cb1 100644
+--- a/drivers/i2c/busses/i2c-designware-common.c
++++ b/drivers/i2c/busses/i2c-designware-common.c
+@@ -63,7 +63,7 @@ static int dw_reg_read(void *context, unsigned int reg, unsigned int *val)
+ {
+ 	struct dw_i2c_dev *dev = context;
+ 
+-	*val = readl_relaxed(dev->base + reg);
++	*val = readl(dev->base + reg);
+ 
+ 	return 0;
+ }
+@@ -72,7 +72,7 @@ static int dw_reg_write(void *context, unsigned int reg, unsigned int val)
+ {
+ 	struct dw_i2c_dev *dev = context;
+ 
+-	writel_relaxed(val, dev->base + reg);
++	writel(val, dev->base + reg);
+ 
+ 	return 0;
+ }
+@@ -81,7 +81,7 @@ static int dw_reg_read_swab(void *context, unsigned int reg, unsigned int *val)
+ {
+ 	struct dw_i2c_dev *dev = context;
+ 
+-	*val = swab32(readl_relaxed(dev->base + reg));
++	*val = swab32(readl(dev->base + reg));
+ 
+ 	return 0;
+ }
+@@ -90,7 +90,7 @@ static int dw_reg_write_swab(void *context, unsigned int reg, unsigned int val)
+ {
+ 	struct dw_i2c_dev *dev = context;
+ 
+-	writel_relaxed(swab32(val), dev->base + reg);
++	writel(swab32(val), dev->base + reg);
+ 
+ 	return 0;
+ }
+@@ -99,8 +99,8 @@ static int dw_reg_read_word(void *context, unsigned int reg, unsigned int *val)
+ {
+ 	struct dw_i2c_dev *dev = context;
+ 
+-	*val = readw_relaxed(dev->base + reg) |
+-		(readw_relaxed(dev->base + reg + 2) << 16);
++	*val = readw(dev->base + reg) |
++		(readw(dev->base + reg + 2) << 16);
+ 
+ 	return 0;
+ }
+@@ -109,8 +109,8 @@ static int dw_reg_write_word(void *context, unsigned int reg, unsigned int val)
+ {
+ 	struct dw_i2c_dev *dev = context;
+ 
+-	writew_relaxed(val, dev->base + reg);
+-	writew_relaxed(val >> 16, dev->base + reg + 2);
++	writew(val, dev->base + reg);
++	writew(val >> 16, dev->base + reg + 2);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 8a618769915d5..76030eac49a37 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -70,7 +70,7 @@ static char version[] =
+ 		BNXT_RE_DESC "\n";
+ 
+ MODULE_AUTHOR("Eddie Wai <eddie.wai@broadcom.com>");
+-MODULE_DESCRIPTION(BNXT_RE_DESC " Driver");
++MODULE_DESCRIPTION(BNXT_RE_DESC);
+ MODULE_LICENSE("Dual BSD/GPL");
+ 
+ /* globals */
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 1a5805260778b..e8b2b58cc9bc6 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -387,7 +387,7 @@ static void complete_rdma_req(struct rtrs_clt_io_req *req, int errno,
+ 	struct rtrs_clt_sess *sess;
+ 	int err;
+ 
+-	if (WARN_ON(!req->in_use))
++	if (!req->in_use)
+ 		return;
+ 	if (WARN_ON(!req->con))
+ 		return;
+diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c
+index d5c3f7d54634c..aa87542f116c2 100644
+--- a/drivers/misc/mei/client.c
++++ b/drivers/misc/mei/client.c
+@@ -1969,7 +1969,7 @@ ssize_t mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb)
+ 
+ 	mei_hdr = mei_msg_hdr_init(cb);
+ 	if (IS_ERR(mei_hdr)) {
+-		rets = -PTR_ERR(mei_hdr);
++		rets = PTR_ERR(mei_hdr);
+ 		mei_hdr = NULL;
+ 		goto err;
+ 	}
+@@ -1993,7 +1993,7 @@ ssize_t mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb)
+ 
+ 	hbuf_slots = mei_hbuf_empty_slots(dev);
+ 	if (hbuf_slots < 0) {
+-		rets = -EOVERFLOW;
++		buf_len = -EOVERFLOW;
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index fdeaaae080603..d5ca59bd1c995 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -553,6 +553,8 @@ int mmc_cqe_recovery(struct mmc_host *host)
+ 	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT;
+ 	mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES);
+ 
++	mmc_poll_for_busy(host->card, MMC_CQE_RECOVERY_TIMEOUT, MMC_BUSY_IO);
++
+ 	memset(&cmd, 0, sizeof(cmd));
+ 	cmd.opcode       = MMC_CMDQ_TASK_MGMT;
+ 	cmd.arg          = 1; /* Discard entire queue */
+diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
+index ebad70e4481af..b6bb7f7faee2b 100644
+--- a/drivers/mmc/core/mmc_ops.c
++++ b/drivers/mmc/core/mmc_ops.c
+@@ -452,7 +452,7 @@ static int mmc_busy_status(struct mmc_card *card, bool retry_crc_err,
+ 	u32 status = 0;
+ 	int err;
+ 
+-	if (host->ops->card_busy) {
++	if (busy_cmd != MMC_BUSY_IO && host->ops->card_busy) {
+ 		*busy = host->ops->card_busy(host);
+ 		return 0;
+ 	}
+@@ -473,6 +473,7 @@ static int mmc_busy_status(struct mmc_card *card, bool retry_crc_err,
+ 		err = R1_STATUS(status) ? -EIO : 0;
+ 		break;
+ 	case MMC_BUSY_HPI:
++	case MMC_BUSY_IO:
+ 		break;
+ 	default:
+ 		err = -EINVAL;
+diff --git a/drivers/mmc/core/mmc_ops.h b/drivers/mmc/core/mmc_ops.h
+index 632009260e51d..e3cbb1ddc31c9 100644
+--- a/drivers/mmc/core/mmc_ops.h
++++ b/drivers/mmc/core/mmc_ops.h
+@@ -14,6 +14,7 @@ enum mmc_busy_cmd {
+ 	MMC_BUSY_CMD6,
+ 	MMC_BUSY_ERASE,
+ 	MMC_BUSY_HPI,
++	MMC_BUSY_IO,
+ };
+ 
+ struct mmc_host;
+diff --git a/drivers/net/arcnet/arcdevice.h b/drivers/net/arcnet/arcdevice.h
+index 5d4a4c7efbbff..deeabd6ec2e81 100644
+--- a/drivers/net/arcnet/arcdevice.h
++++ b/drivers/net/arcnet/arcdevice.h
+@@ -186,6 +186,8 @@ do {									\
+ #define ARC_IS_5MBIT    1   /* card default speed is 5MBit */
+ #define ARC_CAN_10MBIT  2   /* card uses COM20022, supporting 10MBit,
+ 				 but default is 2.5MBit. */
++#define ARC_HAS_LED     4   /* card has software controlled LEDs */
++#define ARC_HAS_ROTARY  8   /* card has rotary encoder */
+ 
+ /* information needed to define an encapsulation driver */
+ struct ArcProto {
+diff --git a/drivers/net/arcnet/com20020-pci.c b/drivers/net/arcnet/com20020-pci.c
+index b4f8798d8c509..9d9e4200064f9 100644
+--- a/drivers/net/arcnet/com20020-pci.c
++++ b/drivers/net/arcnet/com20020-pci.c
+@@ -127,6 +127,8 @@ static int com20020pci_probe(struct pci_dev *pdev,
+ 	int i, ioaddr, ret;
+ 	struct resource *r;
+ 
++	ret = 0;
++
+ 	if (pci_enable_device(pdev))
+ 		return -EIO;
+ 
+@@ -142,6 +144,8 @@ static int com20020pci_probe(struct pci_dev *pdev,
+ 	priv->ci = ci;
+ 	mm = &ci->misc_map;
+ 
++	pci_set_drvdata(pdev, priv);
++
+ 	INIT_LIST_HEAD(&priv->list_dev);
+ 
+ 	if (mm->size) {
+@@ -164,7 +168,7 @@ static int com20020pci_probe(struct pci_dev *pdev,
+ 		dev = alloc_arcdev(device);
+ 		if (!dev) {
+ 			ret = -ENOMEM;
+-			goto out_port;
++			break;
+ 		}
+ 		dev->dev_port = i;
+ 
+@@ -181,7 +185,7 @@ static int com20020pci_probe(struct pci_dev *pdev,
+ 			pr_err("IO region %xh-%xh already allocated\n",
+ 			       ioaddr, ioaddr + cm->size - 1);
+ 			ret = -EBUSY;
+-			goto out_port;
++			goto err_free_arcdev;
+ 		}
+ 
+ 		/* Dummy access after Reset
+@@ -209,76 +213,79 @@ static int com20020pci_probe(struct pci_dev *pdev,
+ 		if (!strncmp(ci->name, "EAE PLX-PCI FB2", 15))
+ 			lp->backplane = 1;
+ 
+-		/* Get the dev_id from the PLX rotary coder */
+-		if (!strncmp(ci->name, "EAE PLX-PCI MA1", 15))
+-			dev_id_mask = 0x3;
+-		dev->dev_id = (inb(priv->misc + ci->rotary) >> 4) & dev_id_mask;
+-
+-		snprintf(dev->name, sizeof(dev->name), "arc%d-%d", dev->dev_id, i);
++		if (ci->flags & ARC_HAS_ROTARY) {
++			/* Get the dev_id from the PLX rotary coder */
++			if (!strncmp(ci->name, "EAE PLX-PCI MA1", 15))
++				dev_id_mask = 0x3;
++			dev->dev_id = (inb(priv->misc + ci->rotary) >> 4) & dev_id_mask;
++			snprintf(dev->name, sizeof(dev->name), "arc%d-%d", dev->dev_id, i);
++		}
+ 
+ 		if (arcnet_inb(ioaddr, COM20020_REG_R_STATUS) == 0xFF) {
+ 			pr_err("IO address %Xh is empty!\n", ioaddr);
+ 			ret = -EIO;
+-			goto out_port;
++			goto err_free_arcdev;
+ 		}
+ 		if (com20020_check(dev)) {
+ 			ret = -EIO;
+-			goto out_port;
++			goto err_free_arcdev;
+ 		}
+ 
++		ret = com20020_found(dev, IRQF_SHARED);
++		if (ret)
++			goto err_free_arcdev;
++
+ 		card = devm_kzalloc(&pdev->dev, sizeof(struct com20020_dev),
+ 				    GFP_KERNEL);
+ 		if (!card) {
+ 			ret = -ENOMEM;
+-			goto out_port;
++			goto err_free_arcdev;
+ 		}
+ 
+ 		card->index = i;
+ 		card->pci_priv = priv;
+-		card->tx_led.brightness_set = led_tx_set;
+-		card->tx_led.default_trigger = devm_kasprintf(&pdev->dev,
+-						GFP_KERNEL, "arc%d-%d-tx",
+-						dev->dev_id, i);
+-		card->tx_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+-						"pci:green:tx:%d-%d",
+-						dev->dev_id, i);
+-
+-		card->tx_led.dev = &dev->dev;
+-		card->recon_led.brightness_set = led_recon_set;
+-		card->recon_led.default_trigger = devm_kasprintf(&pdev->dev,
+-						GFP_KERNEL, "arc%d-%d-recon",
+-						dev->dev_id, i);
+-		card->recon_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+-						"pci:red:recon:%d-%d",
+-						dev->dev_id, i);
+-		card->recon_led.dev = &dev->dev;
+-		card->dev = dev;
+-
+-		ret = devm_led_classdev_register(&pdev->dev, &card->tx_led);
+-		if (ret)
+-			goto out_port;
+-
+-		ret = devm_led_classdev_register(&pdev->dev, &card->recon_led);
+-		if (ret)
+-			goto out_port;
+-
+-		dev_set_drvdata(&dev->dev, card);
+ 
+-		ret = com20020_found(dev, IRQF_SHARED);
+-		if (ret)
+-			goto out_port;
+-
+-		devm_arcnet_led_init(dev, dev->dev_id, i);
++		if (ci->flags & ARC_HAS_LED) {
++			card->tx_led.brightness_set = led_tx_set;
++			card->tx_led.default_trigger = devm_kasprintf(&pdev->dev,
++							GFP_KERNEL, "arc%d-%d-tx",
++							dev->dev_id, i);
++			card->tx_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
++							"pci:green:tx:%d-%d",
++							dev->dev_id, i);
++
++			card->tx_led.dev = &dev->dev;
++			card->recon_led.brightness_set = led_recon_set;
++			card->recon_led.default_trigger = devm_kasprintf(&pdev->dev,
++							GFP_KERNEL, "arc%d-%d-recon",
++							dev->dev_id, i);
++			card->recon_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
++							"pci:red:recon:%d-%d",
++							dev->dev_id, i);
++			card->recon_led.dev = &dev->dev;
++
++			ret = devm_led_classdev_register(&pdev->dev, &card->tx_led);
++			if (ret)
++				goto err_free_arcdev;
++
++			ret = devm_led_classdev_register(&pdev->dev, &card->recon_led);
++			if (ret)
++				goto err_free_arcdev;
++
++			dev_set_drvdata(&dev->dev, card);
++			devm_arcnet_led_init(dev, dev->dev_id, i);
++		}
+ 
++		card->dev = dev;
+ 		list_add(&card->list, &priv->list_dev);
+-	}
+-
+-	pci_set_drvdata(pdev, priv);
+-
+-	return 0;
++		continue;
+ 
+-out_port:
+-	com20020pci_remove(pdev);
++err_free_arcdev:
++		free_arcdev(dev);
++		break;
++	}
++	if (ret)
++		com20020pci_remove(pdev);
+ 	return ret;
+ }
+ 
+@@ -325,7 +332,7 @@ static struct com20020_pci_card_info card_info_5mbit = {
+ };
+ 
+ static struct com20020_pci_card_info card_info_sohard = {
+-	.name = "PLX-PCI",
++	.name = "SOHARD SH ARC-PCI",
+ 	.devcount = 1,
+ 	/* SOHARD needs PCI base addr 4 */
+ 	.chan_map_tbl = {
+@@ -360,7 +367,7 @@ static struct com20020_pci_card_info card_info_eae_arc1 = {
+ 		},
+ 	},
+ 	.rotary = 0x0,
+-	.flags = ARC_CAN_10MBIT,
++	.flags = ARC_HAS_ROTARY | ARC_HAS_LED | ARC_CAN_10MBIT,
+ };
+ 
+ static struct com20020_pci_card_info card_info_eae_ma1 = {
+@@ -392,7 +399,7 @@ static struct com20020_pci_card_info card_info_eae_ma1 = {
+ 		},
+ 	},
+ 	.rotary = 0x0,
+-	.flags = ARC_CAN_10MBIT,
++	.flags = ARC_HAS_ROTARY | ARC_HAS_LED | ARC_CAN_10MBIT,
+ };
+ 
+ static struct com20020_pci_card_info card_info_eae_fb2 = {
+@@ -417,7 +424,7 @@ static struct com20020_pci_card_info card_info_eae_fb2 = {
+ 		},
+ 	},
+ 	.rotary = 0x0,
+-	.flags = ARC_CAN_10MBIT,
++	.flags = ARC_HAS_ROTARY | ARC_HAS_LED | ARC_CAN_10MBIT,
+ };
+ 
+ static const struct pci_device_id com20020pci_id_table[] = {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+index 3e9b1f59e381d..775d0b7521ca0 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+@@ -2061,6 +2061,7 @@ destroy_flow_table:
+ 	rhashtable_destroy(&tc_info->flow_table);
+ free_tc_info:
+ 	kfree(tc_info);
++	bp->tc_info = NULL;
+ 	return rc;
+ }
+ 
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index 5647833303a44..b010f28b0abf4 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -6864,7 +6864,7 @@ static int tg3_rx(struct tg3_napi *tnapi, int budget)
+ 				       desc_idx, *post_ptr);
+ 		drop_it_no_recycle:
+ 			/* Other statistics kept track of by card. */
+-			tp->rx_dropped++;
++			tnapi->rx_dropped++;
+ 			goto next_pkt;
+ 		}
+ 
+@@ -7890,8 +7890,10 @@ static int tg3_tso_bug(struct tg3 *tp, struct tg3_napi *tnapi,
+ 
+ 	segs = skb_gso_segment(skb, tp->dev->features &
+ 				    ~(NETIF_F_TSO | NETIF_F_TSO6));
+-	if (IS_ERR(segs) || !segs)
++	if (IS_ERR(segs) || !segs) {
++		tnapi->tx_dropped++;
+ 		goto tg3_tso_bug_end;
++	}
+ 
+ 	skb_list_walk_safe(segs, seg, next) {
+ 		skb_mark_not_on_list(seg);
+@@ -8161,7 +8163,7 @@ dma_error:
+ drop:
+ 	dev_kfree_skb_any(skb);
+ drop_nofree:
+-	tp->tx_dropped++;
++	tnapi->tx_dropped++;
+ 	return NETDEV_TX_OK;
+ }
+ 
+@@ -9340,7 +9342,7 @@ static void __tg3_set_rx_mode(struct net_device *);
+ /* tp->lock is held. */
+ static int tg3_halt(struct tg3 *tp, int kind, bool silent)
+ {
+-	int err;
++	int err, i;
+ 
+ 	tg3_stop_fw(tp);
+ 
+@@ -9361,6 +9363,13 @@ static int tg3_halt(struct tg3 *tp, int kind, bool silent)
+ 
+ 		/* And make sure the next sample is new data */
+ 		memset(tp->hw_stats, 0, sizeof(struct tg3_hw_stats));
++
++		for (i = 0; i < TG3_IRQ_MAX_VECS; ++i) {
++			struct tg3_napi *tnapi = &tp->napi[i];
++
++			tnapi->rx_dropped = 0;
++			tnapi->tx_dropped = 0;
++		}
+ 	}
+ 
+ 	return err;
+@@ -11915,6 +11924,9 @@ static void tg3_get_nstats(struct tg3 *tp, struct rtnl_link_stats64 *stats)
+ {
+ 	struct rtnl_link_stats64 *old_stats = &tp->net_stats_prev;
+ 	struct tg3_hw_stats *hw_stats = tp->hw_stats;
++	unsigned long rx_dropped;
++	unsigned long tx_dropped;
++	int i;
+ 
+ 	stats->rx_packets = old_stats->rx_packets +
+ 		get_stat64(&hw_stats->rx_ucast_packets) +
+@@ -11961,8 +11973,26 @@ static void tg3_get_nstats(struct tg3 *tp, struct rtnl_link_stats64 *stats)
+ 	stats->rx_missed_errors = old_stats->rx_missed_errors +
+ 		get_stat64(&hw_stats->rx_discards);
+ 
+-	stats->rx_dropped = tp->rx_dropped;
+-	stats->tx_dropped = tp->tx_dropped;
++	/* Aggregate per-queue counters. The per-queue counters are updated
++	 * by a single writer, race-free. The result computed by this loop
++	 * might not be 100% accurate (counters can be updated in the middle of
++	 * the loop) but the next tg3_get_nstats() will recompute the current
++	 * value so it is acceptable.
++	 *
++	 * Note that these counters wrap around at 4G on 32bit machines.
++	 */
++	rx_dropped = (unsigned long)(old_stats->rx_dropped);
++	tx_dropped = (unsigned long)(old_stats->tx_dropped);
++
++	for (i = 0; i < tp->irq_cnt; i++) {
++		struct tg3_napi *tnapi = &tp->napi[i];
++
++		rx_dropped += tnapi->rx_dropped;
++		tx_dropped += tnapi->tx_dropped;
++	}
++
++	stats->rx_dropped = rx_dropped;
++	stats->tx_dropped = tx_dropped;
+ }
+ 
+ static int tg3_get_regs_len(struct net_device *dev)
+diff --git a/drivers/net/ethernet/broadcom/tg3.h b/drivers/net/ethernet/broadcom/tg3.h
+index 1000c894064f0..8d753f8c5b065 100644
+--- a/drivers/net/ethernet/broadcom/tg3.h
++++ b/drivers/net/ethernet/broadcom/tg3.h
+@@ -3018,6 +3018,7 @@ struct tg3_napi {
+ 	u16				*rx_rcb_prod_idx;
+ 	struct tg3_rx_prodring_set	prodring;
+ 	struct tg3_rx_buffer_desc	*rx_rcb;
++	unsigned long			rx_dropped;
+ 
+ 	u32				tx_prod	____cacheline_aligned;
+ 	u32				tx_cons;
+@@ -3026,6 +3027,7 @@ struct tg3_napi {
+ 	u32				prodmbox;
+ 	struct tg3_tx_buffer_desc	*tx_ring;
+ 	struct tg3_tx_ring_info		*tx_buffers;
++	unsigned long			tx_dropped;
+ 
+ 	dma_addr_t			status_mapping;
+ 	dma_addr_t			rx_rcb_mapping;
+@@ -3219,8 +3221,6 @@ struct tg3 {
+ 
+ 
+ 	/* begin "everything else" cacheline(s) section */
+-	unsigned long			rx_dropped;
+-	unsigned long			tx_dropped;
+ 	struct rtnl_link_stats64	net_stats_prev;
+ 	struct tg3_ethtool_stats	estats_prev;
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+index 4a448138b4ec1..1f44a6463f45b 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+@@ -66,6 +66,27 @@ static enum mac_mode hns_get_enet_interface(const struct hns_mac_cb *mac_cb)
+ 	}
+ }
+ 
++static u32 hns_mac_link_anti_shake(struct mac_driver *mac_ctrl_drv)
++{
++#define HNS_MAC_LINK_WAIT_TIME 5
++#define HNS_MAC_LINK_WAIT_CNT 40
++
++	u32 link_status = 0;
++	int i;
++
++	if (!mac_ctrl_drv->get_link_status)
++		return link_status;
++
++	for (i = 0; i < HNS_MAC_LINK_WAIT_CNT; i++) {
++		msleep(HNS_MAC_LINK_WAIT_TIME);
++		mac_ctrl_drv->get_link_status(mac_ctrl_drv, &link_status);
++		if (!link_status)
++			break;
++	}
++
++	return link_status;
++}
++
+ void hns_mac_get_link_status(struct hns_mac_cb *mac_cb, u32 *link_status)
+ {
+ 	struct mac_driver *mac_ctrl_drv;
+@@ -83,6 +104,14 @@ void hns_mac_get_link_status(struct hns_mac_cb *mac_cb, u32 *link_status)
+ 							       &sfp_prsnt);
+ 		if (!ret)
+ 			*link_status = *link_status && sfp_prsnt;
++
++		/* for FIBER port, it may have a fake link up.
++		 * when the link status changes from down to up, we need to do
++		 * anti-shake. the anti-shake time is base on tests.
++		 * only FIBER port need to do this.
++		 */
++		if (*link_status && !mac_cb->link)
++			*link_status = hns_mac_link_anti_shake(mac_ctrl_drv);
+ 	}
+ 
+ 	mac_cb->link = *link_status;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 64e1f6f407b48..36e387ae967f7 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -15480,7 +15480,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	       I40E_PRTGL_SAH_MFS_MASK) >> I40E_PRTGL_SAH_MFS_SHIFT;
+ 	if (val < MAX_FRAME_SIZE_DEFAULT)
+ 		dev_warn(&pdev->dev, "MFS for port %x has been set below the default: %x\n",
+-			 i, val);
++			 pf->hw.port, val);
+ 
+ 	/* Add a filter to drop all Flow control frames from any VSI from being
+ 	 * transmitted. By doing so we stop a malicious VF from sending out
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+index c6d408de06050..fc4ca8246df24 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+@@ -254,9 +254,12 @@ static void otx2_get_pauseparam(struct net_device *netdev,
+ 	if (is_otx2_lbkvf(pfvf->pdev))
+ 		return;
+ 
++	mutex_lock(&pfvf->mbox.lock);
+ 	req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(&pfvf->mbox);
+-	if (!req)
++	if (!req) {
++		mutex_unlock(&pfvf->mbox.lock);
+ 		return;
++	}
+ 
+ 	if (!otx2_sync_mbox_msg(&pfvf->mbox)) {
+ 		rsp = (struct cgx_pause_frm_cfg *)
+@@ -264,6 +267,7 @@ static void otx2_get_pauseparam(struct net_device *netdev,
+ 		pause->rx_pause = rsp->rx_pause;
+ 		pause->tx_pause = rsp->tx_pause;
+ 	}
++	mutex_unlock(&pfvf->mbox.lock);
+ }
+ 
+ static int otx2_set_pauseparam(struct net_device *netdev,
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+index 6c243b17312c7..64d27e8e07725 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+@@ -191,7 +191,7 @@ struct ionic_desc_info {
+ 	void *cb_arg;
+ };
+ 
+-#define IONIC_QUEUE_NAME_MAX_SZ		32
++#define IONIC_QUEUE_NAME_MAX_SZ		16
+ 
+ struct ionic_queue {
+ 	struct device *dev;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 098772601df8c..49c28134ac2cc 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -45,24 +45,24 @@ static void ionic_lif_queue_identify(struct ionic_lif *lif);
+ static void ionic_dim_work(struct work_struct *work)
+ {
+ 	struct dim *dim = container_of(work, struct dim, work);
++	struct ionic_intr_info *intr;
+ 	struct dim_cq_moder cur_moder;
+ 	struct ionic_qcq *qcq;
++	struct ionic_lif *lif;
+ 	u32 new_coal;
+ 
+ 	cur_moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
+ 	qcq = container_of(dim, struct ionic_qcq, dim);
+-	new_coal = ionic_coal_usec_to_hw(qcq->q.lif->ionic, cur_moder.usec);
++	lif = qcq->q.lif;
++	new_coal = ionic_coal_usec_to_hw(lif->ionic, cur_moder.usec);
+ 	new_coal = new_coal ? new_coal : 1;
+ 
+-	if (qcq->intr.dim_coal_hw != new_coal) {
+-		unsigned int qi = qcq->cq.bound_q->index;
+-		struct ionic_lif *lif = qcq->q.lif;
+-
+-		qcq->intr.dim_coal_hw = new_coal;
++	intr = &qcq->intr;
++	if (intr->dim_coal_hw != new_coal) {
++		intr->dim_coal_hw = new_coal;
+ 
+ 		ionic_intr_coal_init(lif->ionic->idev.intr_ctrl,
+-				     lif->rxqcqs[qi]->intr.index,
+-				     qcq->intr.dim_coal_hw);
++				     intr->index, intr->dim_coal_hw);
+ 	}
+ 
+ 	dim->state = DIM_START_MEASURE;
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 88253cedd9d96..c72ff0fd38c42 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -205,6 +205,7 @@ enum rtl_registers {
+ 					/* No threshold before first PCI xfer */
+ #define	RX_FIFO_THRESH			(7 << RXCFG_FIFO_SHIFT)
+ #define	RX_EARLY_OFF			(1 << 11)
++#define	RX_PAUSE_SLOT_ON		(1 << 11)	/* 8125b and later */
+ #define	RXCFG_DMA_SHIFT			8
+ 					/* Unlimited maximum PCI burst. */
+ #define	RX_DMA_BURST			(7 << RXCFG_DMA_SHIFT)
+@@ -2318,9 +2319,13 @@ static void rtl_init_rxcfg(struct rtl8169_private *tp)
+ 	case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_52:
+ 		RTL_W32(tp, RxConfig, RX128_INT_EN | RX_MULTI_EN | RX_DMA_BURST | RX_EARLY_OFF);
+ 		break;
+-	case RTL_GIGA_MAC_VER_60 ... RTL_GIGA_MAC_VER_63:
++	case RTL_GIGA_MAC_VER_61:
+ 		RTL_W32(tp, RxConfig, RX_FETCH_DFLT_8125 | RX_DMA_BURST);
+ 		break;
++	case RTL_GIGA_MAC_VER_63:
++		RTL_W32(tp, RxConfig, RX_FETCH_DFLT_8125 | RX_DMA_BURST |
++			RX_PAUSE_SLOT_ON);
++		break;
+ 	default:
+ 		RTL_W32(tp, RxConfig, RX128_INT_EN | RX_DMA_BURST);
+ 		break;
+diff --git a/drivers/net/hyperv/Kconfig b/drivers/net/hyperv/Kconfig
+index ca7bf7f897d36..c8cbd85adcf99 100644
+--- a/drivers/net/hyperv/Kconfig
++++ b/drivers/net/hyperv/Kconfig
+@@ -3,5 +3,6 @@ config HYPERV_NET
+ 	tristate "Microsoft Hyper-V virtual network driver"
+ 	depends on HYPERV
+ 	select UCS2_STRING
++	select NLS
+ 	help
+ 	  Select this option to enable the Hyper-V virtual network driver.
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index a44a0e7ba2510..eb02974f36bdb 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -244,7 +244,7 @@ struct device_node *__of_find_all_nodes(struct device_node *prev)
+  * @prev:	Previous node or NULL to start iteration
+  *		of_node_put() will be called on it
+  *
+- * Returns a node pointer with refcount incremented, use
++ * Return: A node pointer with refcount incremented, use
+  * of_node_put() on it when done.
+  */
+ struct device_node *of_find_all_nodes(struct device_node *prev)
+@@ -305,7 +305,7 @@ bool __weak arch_match_cpu_phys_id(int cpu, u64 phys_id)
+ 	return (u32)phys_id == cpu;
+ }
+ 
+-/**
++/*
+  * Checks if the given "prop_name" property holds the physical id of the
+  * core/thread corresponding to the logical cpu 'cpu'. If 'thread' is not
+  * NULL, local thread number within the core is returned in it.
+@@ -374,7 +374,7 @@ bool __weak arch_find_n_match_cpu_physical_id(struct device_node *cpun,
+  * before booting secondary cores. This function uses arch_match_cpu_phys_id
+  * which can be overridden by architecture specific implementation.
+  *
+- * Returns a node pointer for the logical cpu with refcount incremented, use
++ * Return: A node pointer for the logical cpu with refcount incremented, use
+  * of_node_put() on it when done. Returns NULL if not found.
+  */
+ struct device_node *of_get_cpu_node(int cpu, unsigned int *thread)
+@@ -394,8 +394,8 @@ EXPORT_SYMBOL(of_get_cpu_node);
+  *
+  * @cpu_node: Pointer to the device_node for CPU.
+  *
+- * Returns the logical CPU number of the given CPU device_node.
+- * Returns -ENODEV if the CPU is not found.
++ * Return: The logical CPU number of the given CPU device_node or -ENODEV if the
++ * CPU is not found.
+  */
+ int of_cpu_node_to_id(struct device_node *cpu_node)
+ {
+@@ -427,7 +427,7 @@ EXPORT_SYMBOL(of_cpu_node_to_id);
+  * bindings. This function check for both and returns the idle state node for
+  * the requested index.
+  *
+- * In case an idle state node is found at @index, the refcount is incremented
++ * Return: An idle state node if found at @index. The refcount is incremented
+  * for it, so call of_node_put() on it when done. Returns NULL if not found.
+  */
+ struct device_node *of_get_cpu_state_node(struct device_node *cpu_node,
+@@ -561,7 +561,7 @@ int of_device_compatible_match(struct device_node *device,
+  * of_machine_is_compatible - Test root of device tree for a given compatible value
+  * @compat: compatible string to look for in root node's compatible property.
+  *
+- * Returns a positive integer if the root node has the given value in its
++ * Return: A positive integer if the root node has the given value in its
+  * compatible property.
+  */
+ int of_machine_is_compatible(const char *compat)
+@@ -583,7 +583,7 @@ EXPORT_SYMBOL(of_machine_is_compatible);
+  *
+  *  @device: Node to check for availability, with locks already held
+  *
+- *  Returns true if the status property is absent or set to "okay" or "ok",
++ *  Return: True if the status property is absent or set to "okay" or "ok",
+  *  false otherwise
+  */
+ static bool __of_device_is_available(const struct device_node *device)
+@@ -611,7 +611,7 @@ static bool __of_device_is_available(const struct device_node *device)
+  *
+  *  @device: Node to check for availability
+  *
+- *  Returns true if the status property is absent or set to "okay" or "ok",
++ *  Return: True if the status property is absent or set to "okay" or "ok",
+  *  false otherwise
+  */
+ bool of_device_is_available(const struct device_node *device)
+@@ -632,7 +632,7 @@ EXPORT_SYMBOL(of_device_is_available);
+  *
+  *  @device: Node to check for endianness
+  *
+- *  Returns true if the device has a "big-endian" property, or if the kernel
++ *  Return: True if the device has a "big-endian" property, or if the kernel
+  *  was compiled for BE *and* the device has a "native-endian" property.
+  *  Returns false otherwise.
+  *
+@@ -651,11 +651,11 @@ bool of_device_is_big_endian(const struct device_node *device)
+ EXPORT_SYMBOL(of_device_is_big_endian);
+ 
+ /**
+- *	of_get_parent - Get a node's parent if any
+- *	@node:	Node to get parent
++ * of_get_parent - Get a node's parent if any
++ * @node:	Node to get parent
+  *
+- *	Returns a node pointer with refcount incremented, use
+- *	of_node_put() on it when done.
++ * Return: A node pointer with refcount incremented, use
++ * of_node_put() on it when done.
+  */
+ struct device_node *of_get_parent(const struct device_node *node)
+ {
+@@ -673,15 +673,15 @@ struct device_node *of_get_parent(const struct device_node *node)
+ EXPORT_SYMBOL(of_get_parent);
+ 
+ /**
+- *	of_get_next_parent - Iterate to a node's parent
+- *	@node:	Node to get parent of
++ * of_get_next_parent - Iterate to a node's parent
++ * @node:	Node to get parent of
+  *
+- *	This is like of_get_parent() except that it drops the
+- *	refcount on the passed node, making it suitable for iterating
+- *	through a node's parents.
++ * This is like of_get_parent() except that it drops the
++ * refcount on the passed node, making it suitable for iterating
++ * through a node's parents.
+  *
+- *	Returns a node pointer with refcount incremented, use
+- *	of_node_put() on it when done.
++ * Return: A node pointer with refcount incremented, use
++ * of_node_put() on it when done.
+  */
+ struct device_node *of_get_next_parent(struct device_node *node)
+ {
+@@ -719,13 +719,13 @@ static struct device_node *__of_get_next_child(const struct device_node *node,
+ 	     child = __of_get_next_child(parent, child))
+ 
+ /**
+- *	of_get_next_child - Iterate a node childs
+- *	@node:	parent node
+- *	@prev:	previous child of the parent node, or NULL to get first
++ * of_get_next_child - Iterate a node childs
++ * @node:	parent node
++ * @prev:	previous child of the parent node, or NULL to get first
+  *
+- *	Returns a node pointer with refcount incremented, use of_node_put() on
+- *	it when done. Returns NULL when prev is the last child. Decrements the
+- *	refcount of prev.
++ * Return: A node pointer with refcount incremented, use of_node_put() on
++ * it when done. Returns NULL when prev is the last child. Decrements the
++ * refcount of prev.
+  */
+ struct device_node *of_get_next_child(const struct device_node *node,
+ 	struct device_node *prev)
+@@ -741,12 +741,12 @@ struct device_node *of_get_next_child(const struct device_node *node,
+ EXPORT_SYMBOL(of_get_next_child);
+ 
+ /**
+- *	of_get_next_available_child - Find the next available child node
+- *	@node:	parent node
+- *	@prev:	previous child of the parent node, or NULL to get first
++ * of_get_next_available_child - Find the next available child node
++ * @node:	parent node
++ * @prev:	previous child of the parent node, or NULL to get first
+  *
+- *      This function is like of_get_next_child(), except that it
+- *      automatically skips any disabled nodes (i.e. status = "disabled").
++ * This function is like of_get_next_child(), except that it
++ * automatically skips any disabled nodes (i.e. status = "disabled").
+  */
+ struct device_node *of_get_next_available_child(const struct device_node *node,
+ 	struct device_node *prev)
+@@ -772,12 +772,12 @@ struct device_node *of_get_next_available_child(const struct device_node *node,
+ EXPORT_SYMBOL(of_get_next_available_child);
+ 
+ /**
+- *	of_get_next_cpu_node - Iterate on cpu nodes
+- *	@prev:	previous child of the /cpus node, or NULL to get first
++ * of_get_next_cpu_node - Iterate on cpu nodes
++ * @prev:	previous child of the /cpus node, or NULL to get first
+  *
+- *	Returns a cpu node pointer with refcount incremented, use of_node_put()
+- *	on it when done. Returns NULL when prev is the last child. Decrements
+- *	the refcount of prev.
++ * Return: A cpu node pointer with refcount incremented, use of_node_put()
++ * on it when done. Returns NULL when prev is the last child. Decrements
++ * the refcount of prev.
+  */
+ struct device_node *of_get_next_cpu_node(struct device_node *prev)
+ {
+@@ -816,7 +816,7 @@ EXPORT_SYMBOL(of_get_next_cpu_node);
+  * Lookup child node whose compatible property contains the given compatible
+  * string.
+  *
+- * Returns a node pointer with refcount incremented, use of_node_put() on it
++ * Return: a node pointer with refcount incremented, use of_node_put() on it
+  * when done; or NULL if not found.
+  */
+ struct device_node *of_get_compatible_child(const struct device_node *parent,
+@@ -834,15 +834,15 @@ struct device_node *of_get_compatible_child(const struct device_node *parent,
+ EXPORT_SYMBOL(of_get_compatible_child);
+ 
+ /**
+- *	of_get_child_by_name - Find the child node by name for a given parent
+- *	@node:	parent node
+- *	@name:	child name to look for.
++ * of_get_child_by_name - Find the child node by name for a given parent
++ * @node:	parent node
++ * @name:	child name to look for.
+  *
+- *      This function looks for child node for given matching name
++ * This function looks for child node for given matching name
+  *
+- *	Returns a node pointer if found, with refcount incremented, use
+- *	of_node_put() on it when done.
+- *	Returns NULL if node is not found.
++ * Return: A node pointer if found, with refcount incremented, use
++ * of_node_put() on it when done.
++ * Returns NULL if node is not found.
+  */
+ struct device_node *of_get_child_by_name(const struct device_node *node,
+ 				const char *name)
+@@ -893,22 +893,22 @@ struct device_node *__of_find_node_by_full_path(struct device_node *node,
+ }
+ 
+ /**
+- *	of_find_node_opts_by_path - Find a node matching a full OF path
+- *	@path: Either the full path to match, or if the path does not
+- *	       start with '/', the name of a property of the /aliases
+- *	       node (an alias).  In the case of an alias, the node
+- *	       matching the alias' value will be returned.
+- *	@opts: Address of a pointer into which to store the start of
+- *	       an options string appended to the end of the path with
+- *	       a ':' separator.
+- *
+- *	Valid paths:
+- *		/foo/bar	Full path
+- *		foo		Valid alias
+- *		foo/bar		Valid alias + relative path
+- *
+- *	Returns a node pointer with refcount incremented, use
+- *	of_node_put() on it when done.
++ * of_find_node_opts_by_path - Find a node matching a full OF path
++ * @path: Either the full path to match, or if the path does not
++ *       start with '/', the name of a property of the /aliases
++ *       node (an alias).  In the case of an alias, the node
++ *       matching the alias' value will be returned.
++ * @opts: Address of a pointer into which to store the start of
++ *       an options string appended to the end of the path with
++ *       a ':' separator.
++ *
++ * Valid paths:
++ *  * /foo/bar	Full path
++ *  * foo	Valid alias
++ *  * foo/bar	Valid alias + relative path
++ *
++ * Return: A node pointer with refcount incremented, use
++ * of_node_put() on it when done.
+  */
+ struct device_node *of_find_node_opts_by_path(const char *path, const char **opts)
+ {
+@@ -958,15 +958,15 @@ struct device_node *of_find_node_opts_by_path(const char *path, const char **opt
+ EXPORT_SYMBOL(of_find_node_opts_by_path);
+ 
+ /**
+- *	of_find_node_by_name - Find a node by its "name" property
+- *	@from:	The node to start searching from or NULL; the node
++ * of_find_node_by_name - Find a node by its "name" property
++ * @from:	The node to start searching from or NULL; the node
+  *		you pass will not be searched, only the next one
+  *		will. Typically, you pass what the previous call
+  *		returned. of_node_put() will be called on @from.
+- *	@name:	The name string to match against
++ * @name:	The name string to match against
+  *
+- *	Returns a node pointer with refcount incremented, use
+- *	of_node_put() on it when done.
++ * Return: A node pointer with refcount incremented, use
++ * of_node_put() on it when done.
+  */
+ struct device_node *of_find_node_by_name(struct device_node *from,
+ 	const char *name)
+@@ -985,16 +985,16 @@ struct device_node *of_find_node_by_name(struct device_node *from,
+ EXPORT_SYMBOL(of_find_node_by_name);
+ 
+ /**
+- *	of_find_node_by_type - Find a node by its "device_type" property
+- *	@from:	The node to start searching from, or NULL to start searching
++ * of_find_node_by_type - Find a node by its "device_type" property
++ * @from:	The node to start searching from, or NULL to start searching
+  *		the entire device tree. The node you pass will not be
+  *		searched, only the next one will; typically, you pass
+  *		what the previous call returned. of_node_put() will be
+  *		called on from for you.
+- *	@type:	The type string to match against
++ * @type:	The type string to match against
+  *
+- *	Returns a node pointer with refcount incremented, use
+- *	of_node_put() on it when done.
++ * Return: A node pointer with refcount incremented, use
++ * of_node_put() on it when done.
+  */
+ struct device_node *of_find_node_by_type(struct device_node *from,
+ 	const char *type)
+@@ -1013,18 +1013,18 @@ struct device_node *of_find_node_by_type(struct device_node *from,
+ EXPORT_SYMBOL(of_find_node_by_type);
+ 
+ /**
+- *	of_find_compatible_node - Find a node based on type and one of the
++ * of_find_compatible_node - Find a node based on type and one of the
+  *                                tokens in its "compatible" property
+- *	@from:		The node to start searching from or NULL, the node
+- *			you pass will not be searched, only the next one
+- *			will; typically, you pass what the previous call
+- *			returned. of_node_put() will be called on it
+- *	@type:		The type string to match "device_type" or NULL to ignore
+- *	@compatible:	The string to match to one of the tokens in the device
+- *			"compatible" list.
+- *
+- *	Returns a node pointer with refcount incremented, use
+- *	of_node_put() on it when done.
++ * @from:	The node to start searching from or NULL, the node
++ *		you pass will not be searched, only the next one
++ *		will; typically, you pass what the previous call
++ *		returned. of_node_put() will be called on it
++ * @type:	The type string to match "device_type" or NULL to ignore
++ * @compatible:	The string to match to one of the tokens in the device
++ *		"compatible" list.
++ *
++ * Return: A node pointer with refcount incremented, use
++ * of_node_put() on it when done.
+  */
+ struct device_node *of_find_compatible_node(struct device_node *from,
+ 	const char *type, const char *compatible)
+@@ -1044,16 +1044,16 @@ struct device_node *of_find_compatible_node(struct device_node *from,
+ EXPORT_SYMBOL(of_find_compatible_node);
+ 
+ /**
+- *	of_find_node_with_property - Find a node which has a property with
+- *                                   the given name.
+- *	@from:		The node to start searching from or NULL, the node
+- *			you pass will not be searched, only the next one
+- *			will; typically, you pass what the previous call
+- *			returned. of_node_put() will be called on it
+- *	@prop_name:	The name of the property to look for.
+- *
+- *	Returns a node pointer with refcount incremented, use
+- *	of_node_put() on it when done.
++ * of_find_node_with_property - Find a node which has a property with
++ *                              the given name.
++ * @from:	The node to start searching from or NULL, the node
++ *		you pass will not be searched, only the next one
++ *		will; typically, you pass what the previous call
++ *		returned. of_node_put() will be called on it
++ * @prop_name:	The name of the property to look for.
++ *
++ * Return: A node pointer with refcount incremented, use
++ * of_node_put() on it when done.
+  */
+ struct device_node *of_find_node_with_property(struct device_node *from,
+ 	const char *prop_name)
+@@ -1102,10 +1102,10 @@ const struct of_device_id *__of_match_node(const struct of_device_id *matches,
+ 
+ /**
+  * of_match_node - Tell if a device_node has a matching of_match structure
+- *	@matches:	array of of device match structures to search in
+- *	@node:		the of device structure to match against
++ * @matches:	array of of device match structures to search in
++ * @node:	the of device structure to match against
+  *
+- *	Low level utility function used by device matching.
++ * Low level utility function used by device matching.
+  */
+ const struct of_device_id *of_match_node(const struct of_device_id *matches,
+ 					 const struct device_node *node)
+@@ -1121,17 +1121,17 @@ const struct of_device_id *of_match_node(const struct of_device_id *matches,
+ EXPORT_SYMBOL(of_match_node);
+ 
+ /**
+- *	of_find_matching_node_and_match - Find a node based on an of_device_id
+- *					  match table.
+- *	@from:		The node to start searching from or NULL, the node
+- *			you pass will not be searched, only the next one
+- *			will; typically, you pass what the previous call
+- *			returned. of_node_put() will be called on it
+- *	@matches:	array of of device match structures to search in
+- *	@match		Updated to point at the matches entry which matched
+- *
+- *	Returns a node pointer with refcount incremented, use
+- *	of_node_put() on it when done.
++ * of_find_matching_node_and_match - Find a node based on an of_device_id
++ *				     match table.
++ * @from:	The node to start searching from or NULL, the node
++ *		you pass will not be searched, only the next one
++ *		will; typically, you pass what the previous call
++ *		returned. of_node_put() will be called on it
++ * @matches:	array of of device match structures to search in
++ * @match:	Updated to point at the matches entry which matched
++ *
++ * Return: A node pointer with refcount incremented, use
++ * of_node_put() on it when done.
+  */
+ struct device_node *of_find_matching_node_and_match(struct device_node *from,
+ 					const struct of_device_id *matches,
+@@ -1170,7 +1170,7 @@ EXPORT_SYMBOL(of_find_matching_node_and_match);
+  * It does this by stripping the manufacturer prefix (as delimited by a ',')
+  * from the first entry in the compatible list property.
+  *
+- * This routine returns 0 on success, <0 on failure.
++ * Return: This routine returns 0 on success, <0 on failure.
+  */
+ int of_modalias_node(struct device_node *node, char *modalias, int len)
+ {
+@@ -1190,7 +1190,7 @@ EXPORT_SYMBOL_GPL(of_modalias_node);
+  * of_find_node_by_phandle - Find a node given a phandle
+  * @handle:	phandle of the node to find
+  *
+- * Returns a node pointer with refcount incremented, use
++ * Return: A node pointer with refcount incremented, use
+  * of_node_put() on it when done.
+  */
+ struct device_node *of_find_node_by_phandle(phandle handle)
+@@ -1431,7 +1431,7 @@ static int __of_parse_phandle_with_args(const struct device_node *np,
+  * @index: For properties holding a table of phandles, this is the index into
+  *         the table
+  *
+- * Returns the device_node pointer with refcount incremented.  Use
++ * Return: The device_node pointer with refcount incremented.  Use
+  * of_node_put() on it when done.
+  */
+ struct device_node *of_parse_phandle(const struct device_node *np,
+@@ -1465,21 +1465,21 @@ EXPORT_SYMBOL(of_parse_phandle);
+  * Caller is responsible to call of_node_put() on the returned out_args->np
+  * pointer.
+  *
+- * Example:
++ * Example::
+  *
+- * phandle1: node1 {
++ *  phandle1: node1 {
+  *	#list-cells = <2>;
+- * }
++ *  };
+  *
+- * phandle2: node2 {
++ *  phandle2: node2 {
+  *	#list-cells = <1>;
+- * }
++ *  };
+  *
+- * node3 {
++ *  node3 {
+  *	list = <&phandle1 1 2 &phandle2 3>;
+- * }
++ *  };
+  *
+- * To get a device_node of the `node2' node you may call this:
++ * To get a device_node of the ``node2`` node you may call this:
+  * of_parse_phandle_with_args(node3, "list", "#list-cells", 1, &args);
+  */
+ int of_parse_phandle_with_args(const struct device_node *np, const char *list_name,
+@@ -1517,29 +1517,29 @@ EXPORT_SYMBOL(of_parse_phandle_with_args);
+  * Caller is responsible to call of_node_put() on the returned out_args->np
+  * pointer.
+  *
+- * Example:
++ * Example::
+  *
+- * phandle1: node1 {
+- *	#list-cells = <2>;
+- * }
++ *  phandle1: node1 {
++ *  	#list-cells = <2>;
++ *  };
+  *
+- * phandle2: node2 {
+- *	#list-cells = <1>;
+- * }
++ *  phandle2: node2 {
++ *  	#list-cells = <1>;
++ *  };
+  *
+- * phandle3: node3 {
+- * 	#list-cells = <1>;
+- * 	list-map = <0 &phandle2 3>,
+- * 		   <1 &phandle2 2>,
+- * 		   <2 &phandle1 5 1>;
+- *	list-map-mask = <0x3>;
+- * };
++ *  phandle3: node3 {
++ *  	#list-cells = <1>;
++ *  	list-map = <0 &phandle2 3>,
++ *  		   <1 &phandle2 2>,
++ *  		   <2 &phandle1 5 1>;
++ *  	list-map-mask = <0x3>;
++ *  };
+  *
+- * node4 {
+- *	list = <&phandle1 1 2 &phandle3 0>;
+- * }
++ *  node4 {
++ *  	list = <&phandle1 1 2 &phandle3 0>;
++ *  };
+  *
+- * To get a device_node of the `node2' node you may call this:
++ * To get a device_node of the ``node2`` node you may call this:
+  * of_parse_phandle_with_args(node4, "list", "list", 1, &args);
+  */
+ int of_parse_phandle_with_args_map(const struct device_node *np,
+@@ -1699,19 +1699,19 @@ EXPORT_SYMBOL(of_parse_phandle_with_args_map);
+  * Caller is responsible to call of_node_put() on the returned out_args->np
+  * pointer.
+  *
+- * Example:
++ * Example::
+  *
+- * phandle1: node1 {
+- * }
++ *  phandle1: node1 {
++ *  };
+  *
+- * phandle2: node2 {
+- * }
++ *  phandle2: node2 {
++ *  };
+  *
+- * node3 {
+- *	list = <&phandle1 0 2 &phandle2 2 3>;
+- * }
++ *  node3 {
++ *  	list = <&phandle1 0 2 &phandle2 2 3>;
++ *  };
+  *
+- * To get a device_node of the `node2' node you may call this:
++ * To get a device_node of the ``node2`` node you may call this:
+  * of_parse_phandle_with_fixed_args(node3, "list", 2, 1, &args);
+  */
+ int of_parse_phandle_with_fixed_args(const struct device_node *np,
+@@ -1731,7 +1731,7 @@ EXPORT_SYMBOL(of_parse_phandle_with_fixed_args);
+  * @list_name:	property name that contains a list
+  * @cells_name:	property name that specifies phandles' arguments count
+  *
+- * Returns the number of phandle + argument tuples within a property. It
++ * Return: The number of phandle + argument tuples within a property. It
+  * is a typical pattern to encode a list of phandle and variable
+  * arguments into a single property. The number of arguments is encoded
+  * by a property in the phandle-target node. For example, a gpios
+@@ -1779,6 +1779,8 @@ EXPORT_SYMBOL(of_count_phandle_with_args);
+ 
+ /**
+  * __of_add_property - Add a property to a node without lock operations
++ * @np:		Caller's Device Node
++ * @prob:	Property to add
+  */
+ int __of_add_property(struct device_node *np, struct property *prop)
+ {
+@@ -1800,6 +1802,8 @@ int __of_add_property(struct device_node *np, struct property *prop)
+ 
+ /**
+  * of_add_property - Add a property to a node
++ * @np:		Caller's Device Node
++ * @prob:	Property to add
+  */
+ int of_add_property(struct device_node *np, struct property *prop)
+ {
+@@ -1844,6 +1848,8 @@ int __of_remove_property(struct device_node *np, struct property *prop)
+ 
+ /**
+  * of_remove_property - Remove a property from a node.
++ * @np:		Caller's Device Node
++ * @prob:	Property to remove
+  *
+  * Note that we don't actually remove it, since we have given out
+  * who-knows-how-many pointers to the data using get-property.
+@@ -1951,13 +1957,12 @@ static void of_alias_add(struct alias_prop *ap, struct device_node *np,
+ 
+ /**
+  * of_alias_scan - Scan all properties of the 'aliases' node
++ * @dt_alloc:	An allocator that provides a virtual address to memory
++ *		for storing the resulting tree
+  *
+  * The function scans all the properties of the 'aliases' node and populates
+  * the global lookup table with the properties.  It returns the
+  * number of alias properties found, or an error code in case of failure.
+- *
+- * @dt_alloc:	An allocator that provides a virtual address to memory
+- *		for storing the resulting tree
+  */
+ void of_alias_scan(void * (*dt_alloc)(u64 size, u64 align))
+ {
+@@ -2026,7 +2031,9 @@ void of_alias_scan(void * (*dt_alloc)(u64 size, u64 align))
+  * @stem:	Alias stem of the given device_node
+  *
+  * The function travels the lookup table to get the alias id for the given
+- * device_node and alias stem.  It returns the alias id if found.
++ * device_node and alias stem.
++ *
++ * Return: The alias id if found.
+  */
+ int of_alias_get_id(struct device_node *np, const char *stem)
+ {
+@@ -2130,13 +2137,14 @@ EXPORT_SYMBOL_GPL(of_alias_get_highest_id);
+ 
+ /**
+  * of_console_check() - Test and setup console for DT setup
+- * @dn - Pointer to device node
+- * @name - Name to use for preferred console without index. ex. "ttyS"
+- * @index - Index to use for preferred console.
++ * @dn: Pointer to device node
++ * @name: Name to use for preferred console without index. ex. "ttyS"
++ * @index: Index to use for preferred console.
+  *
+  * Check if the given device node matches the stdout-path property in the
+- * /chosen node. If it does then register it as the preferred console and return
+- * TRUE. Otherwise return FALSE.
++ * /chosen node. If it does then register it as the preferred console.
++ *
++ * Return: TRUE if console successfully setup. Otherwise return FALSE.
+  */
+ bool of_console_check(struct device_node *dn, char *name, int index)
+ {
+@@ -2152,12 +2160,12 @@ bool of_console_check(struct device_node *dn, char *name, int index)
+ EXPORT_SYMBOL_GPL(of_console_check);
+ 
+ /**
+- *	of_find_next_cache_node - Find a node's subsidiary cache
+- *	@np:	node of type "cpu" or "cache"
++ * of_find_next_cache_node - Find a node's subsidiary cache
++ * @np:	node of type "cpu" or "cache"
+  *
+- *	Returns a node pointer with refcount incremented, use
+- *	of_node_put() on it when done.  Caller should hold a reference
+- *	to np.
++ * Return: A node pointer with refcount incremented, use
++ * of_node_put() on it when done.  Caller should hold a reference
++ * to np.
+  */
+ struct device_node *of_find_next_cache_node(const struct device_node *np)
+ {
+@@ -2187,7 +2195,7 @@ struct device_node *of_find_next_cache_node(const struct device_node *np)
+  *
+  * @cpu: cpu number(logical index) for which the last cache level is needed
+  *
+- * Returns the the level at which the last cache is present. It is exactly
++ * Return: The the level at which the last cache is present. It is exactly
+  * same as  the total number of cache levels for the given logical cpu.
+  */
+ int of_find_last_cache_level(unsigned int cpu)
+diff --git a/drivers/of/dynamic.c b/drivers/of/dynamic.c
+index 7d0232af9c23d..b6a3ee65437b9 100644
+--- a/drivers/of/dynamic.c
++++ b/drivers/of/dynamic.c
+@@ -27,7 +27,7 @@ static struct device_node *kobj_to_device_node(struct kobject *kobj)
+  * @node:	Node to inc refcount, NULL is supported to simplify writing of
+  *		callers
+  *
+- * Returns node.
++ * Return: The node with refcount incremented.
+  */
+ struct device_node *of_node_get(struct device_node *node)
+ {
+@@ -103,8 +103,10 @@ int of_reconfig_notify(unsigned long action, struct of_reconfig_data *p)
+  * @arg		- argument of the of notifier
+  *
+  * Returns the new state of a device based on the notifier used.
+- * Returns 0 on device going from enabled to disabled, 1 on device
+- * going from disabled to enabled and -1 on no change.
++ *
++ * Return: OF_RECONFIG_CHANGE_REMOVE on device going from enabled to
++ * disabled, OF_RECONFIG_CHANGE_ADD on device going from disabled to
++ * enabled and OF_RECONFIG_NO_CHANGE on no change.
+  */
+ int of_reconfig_get_state_change(unsigned long action, struct of_reconfig_data *pr)
+ {
+@@ -370,7 +372,8 @@ void of_node_release(struct kobject *kobj)
+  * property structure and the property name & contents. The property's
+  * flags have the OF_DYNAMIC bit set so that we can differentiate between
+  * dynamically allocated properties and not.
+- * Returns the newly allocated property or NULL on out of memory error.
++ *
++ * Return: The newly allocated property or NULL on out of memory error.
+  */
+ struct property *__of_prop_dup(const struct property *prop, gfp_t allocflags)
+ {
+@@ -413,7 +416,7 @@ struct property *__of_prop_dup(const struct property *prop, gfp_t allocflags)
+  * another node.  The node data are dynamically allocated and all the node
+  * flags have the OF_DYNAMIC & OF_DETACHED bits set.
+  *
+- * Returns the newly allocated node or NULL on out of memory error.
++ * Return: The newly allocated node or NULL on out of memory error.
+  */
+ struct device_node *__of_node_dup(const struct device_node *np,
+ 				  const char *full_name)
+@@ -764,7 +767,8 @@ static int __of_changeset_apply(struct of_changeset *ocs)
+  * Any side-effects of live tree state changes are applied here on
+  * success, like creation/destruction of devices and side-effects
+  * like creation of sysfs properties and directories.
+- * Returns 0 on success, a negative error value in case of an error.
++ *
++ * Return: 0 on success, a negative error value in case of an error.
+  * On error the partially applied effects are reverted.
+  */
+ int of_changeset_apply(struct of_changeset *ocs)
+@@ -858,7 +862,8 @@ static int __of_changeset_revert(struct of_changeset *ocs)
+  * was before the application.
+  * Any side-effects like creation/destruction of devices and
+  * removal of sysfs properties and directories are applied.
+- * Returns 0 on success, a negative error value in case of an error.
++ *
++ * Return: 0 on success, a negative error value in case of an error.
+  */
+ int of_changeset_revert(struct of_changeset *ocs)
+ {
+@@ -886,7 +891,8 @@ EXPORT_SYMBOL_GPL(of_changeset_revert);
+  * + OF_RECONFIG_ADD_PROPERTY
+  * + OF_RECONFIG_REMOVE_PROPERTY,
+  * + OF_RECONFIG_UPDATE_PROPERTY
+- * Returns 0 on success, a negative error value in case of an error.
++ *
++ * Return: 0 on success, a negative error value in case of an error.
+  */
+ int of_changeset_action(struct of_changeset *ocs, unsigned long action,
+ 		struct device_node *np, struct property *prop)
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index 5a1b8688b4605..384510d0200cd 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -282,7 +282,7 @@ static void reverse_nodes(struct device_node *parent)
+  * @dad: Parent struct device_node
+  * @nodepp: The device_node tree created by the call
+  *
+- * It returns the size of unflattened device tree or error code
++ * Return: The size of unflattened device tree or error code
+  */
+ static int unflatten_dt_nodes(const void *blob,
+ 			      void *mem,
+@@ -349,11 +349,6 @@ static int unflatten_dt_nodes(const void *blob,
+ 
+ /**
+  * __unflatten_device_tree - create tree of device_nodes from flat blob
+- *
+- * unflattens a device-tree, creating the
+- * tree of struct device_node. It also fills the "name" and "type"
+- * pointers of the nodes so the normal device-tree walking functions
+- * can be used.
+  * @blob: The blob to expand
+  * @dad: Parent device node
+  * @mynodes: The device_node tree created by the call
+@@ -361,7 +356,11 @@ static int unflatten_dt_nodes(const void *blob,
+  * for the resulting tree
+  * @detached: if true set OF_DETACHED on @mynodes
+  *
+- * Returns NULL on failure or the memory chunk containing the unflattened
++ * unflattens a device-tree, creating the tree of struct device_node. It also
++ * fills the "name" and "type" pointers of the nodes so the normal device-tree
++ * walking functions can be used.
++ *
++ * Return: NULL on failure or the memory chunk containing the unflattened
+  * device tree on success.
+  */
+ void *__unflatten_device_tree(const void *blob,
+@@ -442,7 +441,7 @@ static DEFINE_MUTEX(of_fdt_unflatten_mutex);
+  * pointers of the nodes so the normal device-tree walking functions
+  * can be used.
+  *
+- * Returns NULL on failure or the memory chunk containing the unflattened
++ * Return: NULL on failure or the memory chunk containing the unflattened
+  * device tree on success.
+  */
+ void *of_fdt_unflatten_tree(const unsigned long *blob,
+@@ -716,7 +715,7 @@ const void *__init of_get_flat_dt_prop(unsigned long node, const char *name,
+  * @node: node to test
+  * @compat: compatible string to compare with compatible list.
+  *
+- * On match, returns a non-zero value with smaller values returned for more
++ * Return: a non-zero value on match with smaller values returned for more
+  * specific compatible values.
+  */
+ static int of_fdt_is_compatible(const void *blob,
+diff --git a/drivers/of/irq.c b/drivers/of/irq.c
+index 25d17b8a1a1aa..352e14b007e78 100644
+--- a/drivers/of/irq.c
++++ b/drivers/of/irq.c
+@@ -48,7 +48,7 @@ EXPORT_SYMBOL_GPL(irq_of_parse_and_map);
+  * of_irq_find_parent - Given a device node, find its interrupt parent node
+  * @child: pointer to device node
+  *
+- * Returns a pointer to the interrupt parent node, or NULL if the interrupt
++ * Return: A pointer to the interrupt parent node, or NULL if the interrupt
+  * parent could not be determined.
+  */
+ struct device_node *of_irq_find_parent(struct device_node *child)
+@@ -81,14 +81,14 @@ EXPORT_SYMBOL_GPL(of_irq_find_parent);
+  * @addr:	address specifier (start of "reg" property of the device) in be32 format
+  * @out_irq:	structure of_phandle_args updated by this function
+  *
+- * Returns 0 on success and a negative number on error
+- *
+  * This function is a low-level interrupt tree walking function. It
+  * can be used to do a partial walk with synthetized reg and interrupts
+  * properties, for example when resolving PCI interrupts when no device
+  * node exist for the parent. It takes an interrupt specifier structure as
+  * input, walks the tree looking for any interrupt-map properties, translates
+  * the specifier for each map, and then returns the translated map.
++ *
++ * Return: 0 on success and a negative number on error
+  */
+ int of_irq_parse_raw(const __be32 *addr, struct of_phandle_args *out_irq)
+ {
+@@ -380,7 +380,7 @@ EXPORT_SYMBOL_GPL(of_irq_to_resource);
+  * @dev: pointer to device tree node
+  * @index: zero-based index of the IRQ
+  *
+- * Returns Linux IRQ number on success, or 0 on the IRQ mapping failure, or
++ * Return: Linux IRQ number on success, or 0 on the IRQ mapping failure, or
+  * -EPROBE_DEFER if the IRQ domain is not yet created, or error code in case
+  * of any other failure.
+  */
+@@ -407,7 +407,7 @@ EXPORT_SYMBOL_GPL(of_irq_get);
+  * @dev: pointer to device tree node
+  * @name: IRQ name
+  *
+- * Returns Linux IRQ number on success, or 0 on the IRQ mapping failure, or
++ * Return: Linux IRQ number on success, or 0 on the IRQ mapping failure, or
+  * -EPROBE_DEFER if the IRQ domain is not yet created, or error code in case
+  * of any other failure.
+  */
+@@ -447,7 +447,7 @@ int of_irq_count(struct device_node *dev)
+  * @res: array of resources to fill in
+  * @nr_irqs: the number of IRQs (and upper bound for num of @res elements)
+  *
+- * Returns the size of the filled in table (up to @nr_irqs).
++ * Return: The size of the filled in table (up to @nr_irqs).
+  */
+ int of_irq_to_resource_table(struct device_node *dev, struct resource *res,
+ 		int nr_irqs)
+@@ -602,7 +602,7 @@ static u32 __of_msi_map_id(struct device *dev, struct device_node **np,
+  * Walk up the device hierarchy looking for devices with a "msi-map"
+  * property.  If found, apply the mapping to @id_in.
+  *
+- * Returns the mapped MSI ID.
++ * Return: The mapped MSI ID.
+  */
+ u32 of_msi_map_id(struct device *dev, struct device_node *msi_np, u32 id_in)
+ {
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index 67b404f36e79c..3ac874e2be080 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -296,7 +296,7 @@ err_free_target_path:
+  *
+  * Update of property in symbols node is not allowed.
+  *
+- * Returns 0 on success, -ENOMEM if memory allocation failure, or -EINVAL if
++ * Return: 0 on success, -ENOMEM if memory allocation failure, or -EINVAL if
+  * invalid @overlay.
+  */
+ static int add_changeset_property(struct overlay_changeset *ovcs,
+@@ -401,7 +401,7 @@ static int add_changeset_property(struct overlay_changeset *ovcs,
+  *
+  * NOTE_2: Multiple mods of created nodes not supported.
+  *
+- * Returns 0 on success, -ENOMEM if memory allocation failure, or -EINVAL if
++ * Return: 0 on success, -ENOMEM if memory allocation failure, or -EINVAL if
+  * invalid @overlay.
+  */
+ static int add_changeset_node(struct overlay_changeset *ovcs,
+@@ -473,7 +473,7 @@ static int add_changeset_node(struct overlay_changeset *ovcs,
+  *
+  * Do not allow symbols node to have any children.
+  *
+- * Returns 0 on success, -ENOMEM if memory allocation failure, or -EINVAL if
++ * Return: 0 on success, -ENOMEM if memory allocation failure, or -EINVAL if
+  * invalid @overlay_node.
+  */
+ static int build_changeset_next_level(struct overlay_changeset *ovcs,
+@@ -604,7 +604,7 @@ static int find_dup_cset_prop(struct overlay_changeset *ovcs,
+  * the same node or duplicate {add, delete, or update} properties entries
+  * for the same property.
+  *
+- * Returns 0 on success, or -EINVAL if duplicate changeset entry found.
++ * Return: 0 on success, or -EINVAL if duplicate changeset entry found.
+  */
+ static int changeset_dup_entry_check(struct overlay_changeset *ovcs)
+ {
+@@ -628,7 +628,7 @@ static int changeset_dup_entry_check(struct overlay_changeset *ovcs)
+  * any portions of the changeset that were successfully created will remain
+  * in @ovcs->cset.
+  *
+- * Returns 0 on success, -ENOMEM if memory allocation failure, or -EINVAL if
++ * Return: 0 on success, -ENOMEM if memory allocation failure, or -EINVAL if
+  * invalid overlay in @ovcs->fragments[].
+  */
+ static int build_changeset(struct overlay_changeset *ovcs)
+@@ -724,7 +724,7 @@ static struct device_node *find_target(struct device_node *info_node)
+  * the top level of @tree.  The relevant top level nodes are the fragment
+  * nodes and the __symbols__ node.  Any other top level node will be ignored.
+  *
+- * Returns 0 on success, -ENOMEM if memory allocation failure, -EINVAL if error
++ * Return: 0 on success, -ENOMEM if memory allocation failure, -EINVAL if error
+  * detected in @tree, or -ENOSPC if idr_alloc() error.
+  */
+ static int init_overlay_changeset(struct overlay_changeset *ovcs,
+@@ -1179,7 +1179,7 @@ static int overlay_removal_is_ok(struct overlay_changeset *remove_ovcs)
+  * If an error is returned by an overlay changeset post-remove notifier
+  * then no further overlay changeset post-remove notifier will be called.
+  *
+- * Returns 0 on success, or a negative error number.  *ovcs_id is set to
++ * Return: 0 on success, or a negative error number.  *ovcs_id is set to
+  * zero after reverting the changeset, even if a subsequent error occurs.
+  */
+ int of_overlay_remove(int *ovcs_id)
+@@ -1257,7 +1257,7 @@ EXPORT_SYMBOL_GPL(of_overlay_remove);
+  *
+  * Removes all overlays from the system in the correct order.
+  *
+- * Returns 0 on success, or a negative error number
++ * Return: 0 on success, or a negative error number
+  */
+ int of_overlay_remove_all(void)
+ {
+diff --git a/drivers/of/platform.c b/drivers/of/platform.c
+index b557a0fcd4ba0..43748c6480c80 100644
+--- a/drivers/of/platform.c
++++ b/drivers/of/platform.c
+@@ -44,7 +44,7 @@ static const struct of_device_id of_skipped_node_table[] = {
+  * Takes a reference to the embedded struct device which needs to be dropped
+  * after use.
+  *
+- * Returns platform_device pointer, or NULL if not found
++ * Return: platform_device pointer, or NULL if not found
+  */
+ struct platform_device *of_find_device_by_node(struct device_node *np)
+ {
+@@ -160,7 +160,7 @@ EXPORT_SYMBOL(of_device_alloc);
+  * @platform_data: pointer to populate platform_data pointer with
+  * @parent: Linux device model parent device.
+  *
+- * Returns pointer to created platform device, or NULL if a device was not
++ * Return: Pointer to created platform device, or NULL if a device was not
+  * registered.  Unavailable devices will not get registered.
+  */
+ static struct platform_device *of_platform_device_create_pdata(
+@@ -204,7 +204,7 @@ err_clear_flag:
+  * @bus_id: name to assign device
+  * @parent: Linux device model parent device.
+  *
+- * Returns pointer to created platform device, or NULL if a device was not
++ * Return: Pointer to created platform device, or NULL if a device was not
+  * registered.  Unavailable devices will not get registered.
+  */
+ struct platform_device *of_platform_device_create(struct device_node *np,
+@@ -463,7 +463,7 @@ EXPORT_SYMBOL(of_platform_bus_probe);
+  * New board support should be using this function instead of
+  * of_platform_bus_probe().
+  *
+- * Returns 0 on success, < 0 on failure.
++ * Return: 0 on success, < 0 on failure.
+  */
+ int of_platform_populate(struct device_node *root,
+ 			const struct of_device_id *matches,
+@@ -608,7 +608,7 @@ static void devm_of_platform_populate_release(struct device *dev, void *res)
+  * Similar to of_platform_populate(), but will automatically call
+  * of_platform_depopulate() when the device is unbound from the bus.
+  *
+- * Returns 0 on success, < 0 on failure.
++ * Return: 0 on success, < 0 on failure.
+  */
+ int devm_of_platform_populate(struct device *dev)
+ {
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index 8f998351bf4fb..a411460d2b211 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -60,9 +60,11 @@ EXPORT_SYMBOL(of_graph_is_present);
+  * @elem_size:	size of the individual element
+  *
+  * Search for a property in a device node and count the number of elements of
+- * size elem_size in it. Returns number of elements on sucess, -EINVAL if the
+- * property does not exist or its length does not match a multiple of elem_size
+- * and -ENODATA if the property does not have a value.
++ * size elem_size in it.
++ *
++ * Return: The number of elements on sucess, -EINVAL if the property does not
++ * exist or its length does not match a multiple of elem_size and -ENODATA if
++ * the property does not have a value.
+  */
+ int of_property_count_elems_of_size(const struct device_node *np,
+ 				const char *propname, int elem_size)
+@@ -94,8 +96,9 @@ EXPORT_SYMBOL_GPL(of_property_count_elems_of_size);
+  * @len:	if !=NULL, actual length is written to here
+  *
+  * Search for a property in a device node and valid the requested size.
+- * Returns the property value on success, -EINVAL if the property does not
+- *  exist, -ENODATA if property does not have a value, and -EOVERFLOW if the
++ *
++ * Return: The property value on success, -EINVAL if the property does not
++ * exist, -ENODATA if property does not have a value, and -EOVERFLOW if the
+  * property data is too small or too large.
+  *
+  */
+@@ -128,7 +131,9 @@ static void *of_find_property_value_of_size(const struct device_node *np,
+  * @out_value:	pointer to return value, modified only if no error.
+  *
+  * Search for a property in a device node and read nth 32-bit value from
+- * it. Returns 0 on success, -EINVAL if the property does not exist,
++ * it.
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist,
+  * -ENODATA if property does not have a value, and -EOVERFLOW if the
+  * property data isn't large enough.
+  *
+@@ -160,7 +165,9 @@ EXPORT_SYMBOL_GPL(of_property_read_u32_index);
+  * @out_value:	pointer to return value, modified only if no error.
+  *
+  * Search for a property in a device node and read nth 64-bit value from
+- * it. Returns 0 on success, -EINVAL if the property does not exist,
++ * it.
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist,
+  * -ENODATA if property does not have a value, and -EOVERFLOW if the
+  * property data isn't large enough.
+  *
+@@ -195,12 +202,14 @@ EXPORT_SYMBOL_GPL(of_property_read_u64_index);
+  *		sz_min will be read.
+  *
+  * Search for a property in a device node and read 8-bit value(s) from
+- * it. Returns number of elements read on success, -EINVAL if the property
+- * does not exist, -ENODATA if property does not have a value, and -EOVERFLOW
+- * if the property data is smaller than sz_min or longer than sz_max.
++ * it.
+  *
+  * dts entry of array should be like:
+- *	property = /bits/ 8 <0x50 0x60 0x70>;
++ *  ``property = /bits/ 8 <0x50 0x60 0x70>;``
++ *
++ * Return: The number of elements read on success, -EINVAL if the property
++ * does not exist, -ENODATA if property does not have a value, and -EOVERFLOW
++ * if the property data is smaller than sz_min or longer than sz_max.
+  *
+  * The out_values is modified only if a valid u8 value can be decoded.
+  */
+@@ -243,12 +252,14 @@ EXPORT_SYMBOL_GPL(of_property_read_variable_u8_array);
+  *		sz_min will be read.
+  *
+  * Search for a property in a device node and read 16-bit value(s) from
+- * it. Returns number of elements read on success, -EINVAL if the property
+- * does not exist, -ENODATA if property does not have a value, and -EOVERFLOW
+- * if the property data is smaller than sz_min or longer than sz_max.
++ * it.
+  *
+  * dts entry of array should be like:
+- *	property = /bits/ 16 <0x5000 0x6000 0x7000>;
++ *  ``property = /bits/ 16 <0x5000 0x6000 0x7000>;``
++ *
++ * Return: The number of elements read on success, -EINVAL if the property
++ * does not exist, -ENODATA if property does not have a value, and -EOVERFLOW
++ * if the property data is smaller than sz_min or longer than sz_max.
+  *
+  * The out_values is modified only if a valid u16 value can be decoded.
+  */
+@@ -291,7 +302,9 @@ EXPORT_SYMBOL_GPL(of_property_read_variable_u16_array);
+  *		sz_min will be read.
+  *
+  * Search for a property in a device node and read 32-bit value(s) from
+- * it. Returns number of elements read on success, -EINVAL if the property
++ * it.
++ *
++ * Return: The number of elements read on success, -EINVAL if the property
+  * does not exist, -ENODATA if property does not have a value, and -EOVERFLOW
+  * if the property data is smaller than sz_min or longer than sz_max.
+  *
+@@ -330,7 +343,9 @@ EXPORT_SYMBOL_GPL(of_property_read_variable_u32_array);
+  * @out_value:	pointer to return value, modified only if return value is 0.
+  *
+  * Search for a property in a device node and read a 64-bit value from
+- * it. Returns 0 on success, -EINVAL if the property does not exist,
++ * it.
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist,
+  * -ENODATA if property does not have a value, and -EOVERFLOW if the
+  * property data isn't large enough.
+  *
+@@ -365,7 +380,9 @@ EXPORT_SYMBOL_GPL(of_property_read_u64);
+  *		sz_min will be read.
+  *
+  * Search for a property in a device node and read 64-bit value(s) from
+- * it. Returns number of elements read on success, -EINVAL if the property
++ * it.
++ *
++ * Return: The number of elements read on success, -EINVAL if the property
+  * does not exist, -ENODATA if property does not have a value, and -EOVERFLOW
+  * if the property data is smaller than sz_min or longer than sz_max.
+  *
+@@ -407,10 +424,11 @@ EXPORT_SYMBOL_GPL(of_property_read_variable_u64_array);
+  *		return value is 0.
+  *
+  * Search for a property in a device tree node and retrieve a null
+- * terminated string value (pointer to data, not a copy). Returns 0 on
+- * success, -EINVAL if the property does not exist, -ENODATA if property
+- * does not have a value, and -EILSEQ if the string is not null-terminated
+- * within the length of the property data.
++ * terminated string value (pointer to data, not a copy).
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist, -ENODATA if
++ * property does not have a value, and -EILSEQ if the string is not
++ * null-terminated within the length of the property data.
+  *
+  * The out_string pointer is modified only if a valid string can be decoded.
+  */
+@@ -774,7 +792,7 @@ EXPORT_SYMBOL(of_graph_get_remote_port_parent);
+  * @node: pointer to a local endpoint device_node
+  *
+  * Return: Remote port node associated with remote endpoint node linked
+- *	   to @node. Use of_node_put() on it when done.
++ * to @node. Use of_node_put() on it when done.
+  */
+ struct device_node *of_graph_get_remote_port(const struct device_node *node)
+ {
+@@ -807,7 +825,7 @@ EXPORT_SYMBOL(of_graph_get_endpoint_count);
+  * @endpoint: identifier (value of reg property) of the endpoint node
+  *
+  * Return: Remote device node associated with remote endpoint node linked
+- *	   to @node. Use of_node_put() on it when done.
++ * to @node. Use of_node_put() on it when done.
+  */
+ struct device_node *of_graph_get_remote_node(const struct device_node *node,
+ 					     u32 port, u32 endpoint)
+diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c
+index 925be41eeebec..de5a823f30310 100644
+--- a/drivers/parport/parport_pc.c
++++ b/drivers/parport/parport_pc.c
+@@ -2613,6 +2613,8 @@ enum parport_pc_pci_cards {
+ 	netmos_9865,
+ 	quatech_sppxp100,
+ 	wch_ch382l,
++	brainboxes_uc146,
++	brainboxes_px203,
+ };
+ 
+ 
+@@ -2676,6 +2678,8 @@ static struct parport_pc_pci {
+ 	/* netmos_9865 */               { 1, { { 0, -1 }, } },
+ 	/* quatech_sppxp100 */		{ 1, { { 0, 1 }, } },
+ 	/* wch_ch382l */		{ 1, { { 2, -1 }, } },
++	/* brainboxes_uc146 */	{ 1, { { 3, -1 }, } },
++	/* brainboxes_px203 */	{ 1, { { 0, -1 }, } },
+ };
+ 
+ static const struct pci_device_id parport_pc_pci_tbl[] = {
+@@ -2767,6 +2771,23 @@ static const struct pci_device_id parport_pc_pci_tbl[] = {
+ 	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, quatech_sppxp100 },
+ 	/* WCH CH382L PCI-E single parallel port card */
+ 	{ 0x1c00, 0x3050, 0x1c00, 0x3050, 0, 0, wch_ch382l },
++	/* Brainboxes IX-500/550 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x402a,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, oxsemi_pcie_pport },
++	/* Brainboxes UC-146/UC-157 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0be1,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc146 },
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0be2,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc146 },
++	/* Brainboxes PX-146/PX-257 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x401c,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, oxsemi_pcie_pport },
++	/* Brainboxes PX-203 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x4007,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_px203 },
++	/* Brainboxes PX-475 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x401f,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, oxsemi_pcie_pport },
+ 	{ 0, } /* terminate list */
+ };
+ MODULE_DEVICE_TABLE(pci, parport_pc_pci_tbl);
+diff --git a/drivers/platform/mellanox/mlxbf-bootctl.c b/drivers/platform/mellanox/mlxbf-bootctl.c
+index 5d21c6adf1ab6..9911d4f854696 100644
+--- a/drivers/platform/mellanox/mlxbf-bootctl.c
++++ b/drivers/platform/mellanox/mlxbf-bootctl.c
+@@ -17,6 +17,7 @@
+ 
+ #define MLXBF_BOOTCTL_SB_SECURE_MASK		0x03
+ #define MLXBF_BOOTCTL_SB_TEST_MASK		0x0c
++#define MLXBF_BOOTCTL_SB_DEV_MASK		BIT(4)
+ 
+ #define MLXBF_SB_KEY_NUM			4
+ 
+@@ -37,11 +38,18 @@ static struct mlxbf_bootctl_name boot_names[] = {
+ 	{ MLXBF_BOOTCTL_NONE, "none" },
+ };
+ 
++enum {
++	MLXBF_BOOTCTL_SB_LIFECYCLE_PRODUCTION = 0,
++	MLXBF_BOOTCTL_SB_LIFECYCLE_GA_SECURE = 1,
++	MLXBF_BOOTCTL_SB_LIFECYCLE_GA_NON_SECURE = 2,
++	MLXBF_BOOTCTL_SB_LIFECYCLE_RMA = 3
++};
++
+ static const char * const mlxbf_bootctl_lifecycle_states[] = {
+-	[0] = "Production",
+-	[1] = "GA Secured",
+-	[2] = "GA Non-Secured",
+-	[3] = "RMA",
++	[MLXBF_BOOTCTL_SB_LIFECYCLE_PRODUCTION] = "Production",
++	[MLXBF_BOOTCTL_SB_LIFECYCLE_GA_SECURE] = "GA Secured",
++	[MLXBF_BOOTCTL_SB_LIFECYCLE_GA_NON_SECURE] = "GA Non-Secured",
++	[MLXBF_BOOTCTL_SB_LIFECYCLE_RMA] = "RMA",
+ };
+ 
+ /* ARM SMC call which is atomic and no need for lock. */
+@@ -165,25 +173,30 @@ static ssize_t second_reset_action_store(struct device *dev,
+ static ssize_t lifecycle_state_show(struct device *dev,
+ 				    struct device_attribute *attr, char *buf)
+ {
++	int status_bits;
++	int use_dev_key;
++	int test_state;
+ 	int lc_state;
+ 
+-	lc_state = mlxbf_bootctl_smc(MLXBF_BOOTCTL_GET_TBB_FUSE_STATUS,
+-				     MLXBF_BOOTCTL_FUSE_STATUS_LIFECYCLE);
+-	if (lc_state < 0)
+-		return lc_state;
++	status_bits = mlxbf_bootctl_smc(MLXBF_BOOTCTL_GET_TBB_FUSE_STATUS,
++					MLXBF_BOOTCTL_FUSE_STATUS_LIFECYCLE);
++	if (status_bits < 0)
++		return status_bits;
+ 
+-	lc_state &=
+-		MLXBF_BOOTCTL_SB_TEST_MASK | MLXBF_BOOTCTL_SB_SECURE_MASK;
++	use_dev_key = status_bits & MLXBF_BOOTCTL_SB_DEV_MASK;
++	test_state = status_bits & MLXBF_BOOTCTL_SB_TEST_MASK;
++	lc_state = status_bits & MLXBF_BOOTCTL_SB_SECURE_MASK;
+ 
+ 	/*
+ 	 * If the test bits are set, we specify that the current state may be
+ 	 * due to using the test bits.
+ 	 */
+-	if (lc_state & MLXBF_BOOTCTL_SB_TEST_MASK) {
+-		lc_state &= MLXBF_BOOTCTL_SB_SECURE_MASK;
+-
++	if (test_state) {
+ 		return sprintf(buf, "%s(test)\n",
+ 			       mlxbf_bootctl_lifecycle_states[lc_state]);
++	} else if (use_dev_key &&
++		   (lc_state == MLXBF_BOOTCTL_SB_LIFECYCLE_GA_SECURE)) {
++		return sprintf(buf, "Secured (development)\n");
+ 	}
+ 
+ 	return sprintf(buf, "%s\n", mlxbf_bootctl_lifecycle_states[lc_state]);
+diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig
+index 84c5b922f245e..c14c145b88d80 100644
+--- a/drivers/platform/x86/Kconfig
++++ b/drivers/platform/x86/Kconfig
+@@ -243,6 +243,7 @@ config ASUS_WMI
+ 	depends on RFKILL || RFKILL = n
+ 	depends on HOTPLUG_PCI
+ 	depends on ACPI_VIDEO || ACPI_VIDEO = n
++	depends on SERIO_I8042 || SERIO_I8042 = n
+ 	select INPUT_SPARSEKMAP
+ 	select LEDS_CLASS
+ 	select NEW_LEDS
+@@ -256,7 +257,6 @@ config ASUS_WMI
+ config ASUS_NB_WMI
+ 	tristate "Asus Notebook WMI Driver"
+ 	depends on ASUS_WMI
+-	depends on SERIO_I8042 || SERIO_I8042 = n
+ 	help
+ 	  This is a driver for newer Asus notebooks. It adds extra features
+ 	  like wireless radio and bluetooth control, leds, hotkeys, backlight...
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index 04503ad6c7fb0..49505939352ae 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -41,6 +41,10 @@ static int wapf = -1;
+ module_param(wapf, uint, 0444);
+ MODULE_PARM_DESC(wapf, "WAPF value");
+ 
++static int tablet_mode_sw = -1;
++module_param(tablet_mode_sw, uint, 0444);
++MODULE_PARM_DESC(tablet_mode_sw, "Tablet mode detect: -1:auto 0:disable 1:kbd-dock 2:lid-flip");
++
+ static struct quirk_entry *quirks;
+ 
+ static bool asus_q500a_i8042_filter(unsigned char data, unsigned char str,
+@@ -111,7 +115,17 @@ static struct quirk_entry quirk_asus_forceals = {
+ };
+ 
+ static struct quirk_entry quirk_asus_use_kbd_dock_devid = {
+-	.use_kbd_dock_devid = true,
++	.tablet_switch_mode = asus_wmi_kbd_dock_devid,
++};
++
++static struct quirk_entry quirk_asus_use_lid_flip_devid = {
++	.wmi_backlight_set_devstate = true,
++	.tablet_switch_mode = asus_wmi_lid_flip_devid,
++};
++
++static struct quirk_entry quirk_asus_tablet_mode = {
++	.wmi_backlight_set_devstate = true,
++	.tablet_switch_mode = asus_wmi_lid_flip_rog_devid,
+ };
+ 
+ static int dmi_matched(const struct dmi_system_id *dmi)
+@@ -443,13 +457,39 @@ static const struct dmi_system_id asus_quirks[] = {
+ 		},
+ 		.driver_data = &quirk_asus_use_kbd_dock_devid,
+ 	},
++	{
++		.callback = dmi_matched,
++		.ident = "ASUS ZenBook Flip UX360",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			/* Match UX360* */
++			DMI_MATCH(DMI_PRODUCT_NAME, "UX360"),
++		},
++		.driver_data = &quirk_asus_use_lid_flip_devid,
++	},
++	{
++		.callback = dmi_matched,
++		.ident = "ASUS TP200s / E205SA",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "E205SA"),
++		},
++		.driver_data = &quirk_asus_use_lid_flip_devid,
++	},
++	{
++		.callback = dmi_matched,
++		.ident = "ASUS ROG FLOW X13",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "GV301Q"),
++		},
++		.driver_data = &quirk_asus_tablet_mode,
++	},
+ 	{},
+ };
+ 
+ static void asus_nb_wmi_quirks(struct asus_wmi_driver *driver)
+ {
+-	int ret;
+-
+ 	quirks = &quirk_asus_unknown;
+ 	dmi_check_system(asus_quirks);
+ 
+@@ -462,14 +502,8 @@ static void asus_nb_wmi_quirks(struct asus_wmi_driver *driver)
+ 	else
+ 		wapf = quirks->wapf;
+ 
+-	if (quirks->i8042_filter) {
+-		ret = i8042_install_filter(quirks->i8042_filter);
+-		if (ret) {
+-			pr_warn("Unable to install key filter\n");
+-			return;
+-		}
+-		pr_info("Using i8042 filter function for receiving events\n");
+-	}
++	if (tablet_mode_sw != -1)
++		quirks->tablet_switch_mode = tablet_mode_sw;
+ }
+ 
+ static const struct key_entry asus_nb_wmi_keymap[] = {
+@@ -541,6 +575,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
+ 	{ KE_KEY, 0xC5, { KEY_KBDILLUMDOWN } },
+ 	{ KE_IGNORE, 0xC6, },  /* Ambient Light Sensor notification */
+ 	{ KE_KEY, 0xFA, { KEY_PROG2 } },           /* Lid flip action */
++	{ KE_KEY, 0xBD, { KEY_PROG2 } },           /* Lid flip action on ROG xflow laptops */
+ 	{ KE_END, 0},
+ };
+ 
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index db369cf261119..9316564947682 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -63,6 +63,8 @@ MODULE_LICENSE("GPL");
+ #define NOTIFY_KBD_BRTTOGGLE		0xc7
+ #define NOTIFY_KBD_FBM			0x99
+ #define NOTIFY_KBD_TTP			0xae
++#define NOTIFY_LID_FLIP			0xfa
++#define NOTIFY_LID_FLIP_ROG		0xbd
+ 
+ #define ASUS_WMI_FNLOCK_BIOS_DISABLED	BIT(0)
+ 
+@@ -198,6 +200,10 @@ struct asus_wmi {
+ 	struct asus_rfkill gps;
+ 	struct asus_rfkill uwb;
+ 
++	int tablet_switch_event_code;
++	u32 tablet_switch_dev_id;
++	bool tablet_switch_inverted;
++
+ 	enum fan_type fan_type;
+ 	int fan_pwm_mode;
+ 	int agfn_pwm;
+@@ -206,6 +212,9 @@ struct asus_wmi {
+ 	u8 fan_boost_mode_mask;
+ 	u8 fan_boost_mode;
+ 
++	bool dgpu_disable_available;
++	bool dgpu_disable;
++
+ 	bool throttle_thermal_policy_available;
+ 	u8 throttle_thermal_policy_mode;
+ 
+@@ -346,10 +355,35 @@ static bool asus_wmi_dev_is_present(struct asus_wmi *asus, u32 dev_id)
+ }
+ 
+ /* Input **********************************************************************/
++static void asus_wmi_tablet_sw_report(struct asus_wmi *asus, bool value)
++{
++	input_report_switch(asus->inputdev, SW_TABLET_MODE,
++			    asus->tablet_switch_inverted ? !value : value);
++	input_sync(asus->inputdev);
++}
++
++static void asus_wmi_tablet_sw_init(struct asus_wmi *asus, u32 dev_id, int event_code)
++{
++	struct device *dev = &asus->platform_device->dev;
++	int result;
++
++	result = asus_wmi_get_devstate_simple(asus, dev_id);
++	if (result >= 0) {
++		input_set_capability(asus->inputdev, EV_SW, SW_TABLET_MODE);
++		asus_wmi_tablet_sw_report(asus, result);
++		asus->tablet_switch_dev_id = dev_id;
++		asus->tablet_switch_event_code = event_code;
++	} else if (result == -ENODEV) {
++		dev_err(dev, "This device has tablet-mode-switch quirk but got ENODEV checking it. This is a bug.");
++	} else {
++		dev_err(dev, "Error checking for tablet-mode-switch: %d\n", result);
++	}
++}
+ 
+ static int asus_wmi_input_init(struct asus_wmi *asus)
+ {
+-	int err, result;
++	struct device *dev = &asus->platform_device->dev;
++	int err;
+ 
+ 	asus->inputdev = input_allocate_device();
+ 	if (!asus->inputdev)
+@@ -358,21 +392,26 @@ static int asus_wmi_input_init(struct asus_wmi *asus)
+ 	asus->inputdev->name = asus->driver->input_name;
+ 	asus->inputdev->phys = asus->driver->input_phys;
+ 	asus->inputdev->id.bustype = BUS_HOST;
+-	asus->inputdev->dev.parent = &asus->platform_device->dev;
++	asus->inputdev->dev.parent = dev;
+ 	set_bit(EV_REP, asus->inputdev->evbit);
+ 
+ 	err = sparse_keymap_setup(asus->inputdev, asus->driver->keymap, NULL);
+ 	if (err)
+ 		goto err_free_dev;
+ 
+-	if (asus->driver->quirks->use_kbd_dock_devid) {
+-		result = asus_wmi_get_devstate_simple(asus, ASUS_WMI_DEVID_KBD_DOCK);
+-		if (result >= 0) {
+-			input_set_capability(asus->inputdev, EV_SW, SW_TABLET_MODE);
+-			input_report_switch(asus->inputdev, SW_TABLET_MODE, !result);
+-		} else if (result != -ENODEV) {
+-			pr_err("Error checking for keyboard-dock: %d\n", result);
+-		}
++	switch (asus->driver->quirks->tablet_switch_mode) {
++	case asus_wmi_no_tablet_switch:
++		break;
++	case asus_wmi_kbd_dock_devid:
++		asus->tablet_switch_inverted = true;
++		asus_wmi_tablet_sw_init(asus, ASUS_WMI_DEVID_KBD_DOCK, NOTIFY_KBD_DOCK_CHANGE);
++		break;
++	case asus_wmi_lid_flip_devid:
++		asus_wmi_tablet_sw_init(asus, ASUS_WMI_DEVID_LID_FLIP, NOTIFY_LID_FLIP);
++		break;
++	case asus_wmi_lid_flip_rog_devid:
++		asus_wmi_tablet_sw_init(asus, ASUS_WMI_DEVID_LID_FLIP_ROG, NOTIFY_LID_FLIP_ROG);
++		break;
+ 	}
+ 
+ 	err = input_register_device(asus->inputdev);
+@@ -394,6 +433,107 @@ static void asus_wmi_input_exit(struct asus_wmi *asus)
+ 	asus->inputdev = NULL;
+ }
+ 
++/* Tablet mode ****************************************************************/
++
++static void asus_wmi_tablet_mode_get_state(struct asus_wmi *asus)
++{
++	int result;
++
++	if (!asus->tablet_switch_dev_id)
++		return;
++
++	result = asus_wmi_get_devstate_simple(asus, asus->tablet_switch_dev_id);
++	if (result >= 0)
++		asus_wmi_tablet_sw_report(asus, result);
++}
++
++/* dGPU ********************************************************************/
++static int dgpu_disable_check_present(struct asus_wmi *asus)
++{
++	u32 result;
++	int err;
++
++	asus->dgpu_disable_available = false;
++
++	err = asus_wmi_get_devstate(asus, ASUS_WMI_DEVID_DGPU, &result);
++	if (err) {
++		if (err == -ENODEV)
++			return 0;
++		return err;
++	}
++
++	if (result & ASUS_WMI_DSTS_PRESENCE_BIT) {
++		asus->dgpu_disable_available = true;
++		asus->dgpu_disable = result & ASUS_WMI_DSTS_STATUS_BIT;
++	}
++
++	return 0;
++}
++
++static int dgpu_disable_write(struct asus_wmi *asus)
++{
++	u32 retval;
++	u8 value;
++	int err;
++
++	/* Don't rely on type conversion */
++	value = asus->dgpu_disable ? 1 : 0;
++
++	err = asus_wmi_set_devstate(ASUS_WMI_DEVID_DGPU, value, &retval);
++	if (err) {
++		pr_warn("Failed to set dgpu disable: %d\n", err);
++		return err;
++	}
++
++	if (retval > 1 || retval < 0) {
++		pr_warn("Failed to set dgpu disable (retval): 0x%x\n", retval);
++		return -EIO;
++	}
++
++	sysfs_notify(&asus->platform_device->dev.kobj, NULL, "dgpu_disable");
++
++	return 0;
++}
++
++static ssize_t dgpu_disable_show(struct device *dev,
++				   struct device_attribute *attr, char *buf)
++{
++	struct asus_wmi *asus = dev_get_drvdata(dev);
++	u8 mode = asus->dgpu_disable;
++
++	return sysfs_emit(buf, "%d\n", mode);
++}
++
++/*
++ * A user may be required to store the value twice, typcial store first, then
++ * rescan PCI bus to activate power, then store a second time to save correctly.
++ * The reason for this is that an extra code path in the ACPI is enabled when
++ * the device and bus are powered.
++ */
++static ssize_t dgpu_disable_store(struct device *dev,
++				    struct device_attribute *attr,
++				    const char *buf, size_t count)
++{
++	bool disable;
++	int result;
++
++	struct asus_wmi *asus = dev_get_drvdata(dev);
++
++	result = kstrtobool(buf, &disable);
++	if (result)
++		return result;
++
++	asus->dgpu_disable = disable;
++
++	result = dgpu_disable_write(asus);
++	if (result)
++		return result;
++
++	return count;
++}
++
++static DEVICE_ATTR_RW(dgpu_disable);
++
+ /* Battery ********************************************************************/
+ 
+ /* The battery maximum charging percentage */
+@@ -2074,9 +2214,7 @@ static void asus_wmi_handle_event_code(int code, struct asus_wmi *asus)
+ {
+ 	unsigned int key_value = 1;
+ 	bool autorelease = 1;
+-	int result, orig_code;
+-
+-	orig_code = code;
++	int orig_code = code;
+ 
+ 	if (asus->driver->key_filter) {
+ 		asus->driver->key_filter(asus->driver, &code, &key_value,
+@@ -2119,14 +2257,8 @@ static void asus_wmi_handle_event_code(int code, struct asus_wmi *asus)
+ 		return;
+ 	}
+ 
+-	if (asus->driver->quirks->use_kbd_dock_devid && code == NOTIFY_KBD_DOCK_CHANGE) {
+-		result = asus_wmi_get_devstate_simple(asus,
+-						      ASUS_WMI_DEVID_KBD_DOCK);
+-		if (result >= 0) {
+-			input_report_switch(asus->inputdev, SW_TABLET_MODE,
+-					    !result);
+-			input_sync(asus->inputdev);
+-		}
++	if (code == asus->tablet_switch_event_code) {
++		asus_wmi_tablet_mode_get_state(asus);
+ 		return;
+ 	}
+ 
+@@ -2287,6 +2419,7 @@ static struct attribute *platform_attributes[] = {
+ 	&dev_attr_camera.attr,
+ 	&dev_attr_cardr.attr,
+ 	&dev_attr_touchpad.attr,
++	&dev_attr_dgpu_disable.attr,
+ 	&dev_attr_lid_resume.attr,
+ 	&dev_attr_als_enable.attr,
+ 	&dev_attr_fan_boost_mode.attr,
+@@ -2312,6 +2445,8 @@ static umode_t asus_sysfs_is_visible(struct kobject *kobj,
+ 		devid = ASUS_WMI_DEVID_LID_RESUME;
+ 	else if (attr == &dev_attr_als_enable.attr)
+ 		devid = ASUS_WMI_DEVID_ALS_ENABLE;
++	else if (attr == &dev_attr_dgpu_disable.attr)
++		ok = asus->dgpu_disable_available;
+ 	else if (attr == &dev_attr_fan_boost_mode.attr)
+ 		ok = asus->fan_boost_mode_available;
+ 	else if (attr == &dev_attr_throttle_thermal_policy.attr)
+@@ -2571,6 +2706,10 @@ static int asus_wmi_add(struct platform_device *pdev)
+ 	if (err)
+ 		goto fail_platform;
+ 
++	err = dgpu_disable_check_present(asus);
++	if (err)
++		goto fail_dgpu_disable;
++
+ 	err = fan_boost_mode_check_present(asus);
+ 	if (err)
+ 		goto fail_fan_boost_mode;
+@@ -2647,6 +2786,12 @@ static int asus_wmi_add(struct platform_device *pdev)
+ 		goto fail_wmi_handler;
+ 	}
+ 
++	if (asus->driver->quirks->i8042_filter) {
++		err = i8042_install_filter(asus->driver->quirks->i8042_filter);
++		if (err)
++			pr_warn("Unable to install key filter - %d\n", err);
++	}
++
+ 	asus_wmi_battery_init(asus);
+ 
+ 	asus_wmi_debugfs_init(asus);
+@@ -2667,6 +2812,7 @@ fail_input:
+ fail_sysfs:
+ fail_throttle_thermal_policy:
+ fail_fan_boost_mode:
++fail_dgpu_disable:
+ fail_platform:
+ 	kfree(asus);
+ 	return err;
+@@ -2677,6 +2823,8 @@ static int asus_wmi_remove(struct platform_device *device)
+ 	struct asus_wmi *asus;
+ 
+ 	asus = platform_get_drvdata(device);
++	if (asus->driver->quirks->i8042_filter)
++		i8042_remove_filter(asus->driver->quirks->i8042_filter);
+ 	wmi_remove_notify_handler(asus->driver->event_guid);
+ 	asus_wmi_backlight_exit(asus);
+ 	asus_wmi_input_exit(asus);
+@@ -2721,6 +2869,8 @@ static int asus_hotk_resume(struct device *device)
+ 
+ 	if (asus_wmi_has_fnlock_key(asus))
+ 		asus_wmi_fnlock_update(asus);
++
++	asus_wmi_tablet_mode_get_state(asus);
+ 	return 0;
+ }
+ 
+@@ -2759,6 +2909,8 @@ static int asus_hotk_restore(struct device *device)
+ 
+ 	if (asus_wmi_has_fnlock_key(asus))
+ 		asus_wmi_fnlock_update(asus);
++
++	asus_wmi_tablet_mode_get_state(asus);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/platform/x86/asus-wmi.h b/drivers/platform/x86/asus-wmi.h
+index 1d0b592e2651b..b817a312f2e1a 100644
+--- a/drivers/platform/x86/asus-wmi.h
++++ b/drivers/platform/x86/asus-wmi.h
+@@ -25,6 +25,13 @@ struct module;
+ struct key_entry;
+ struct asus_wmi;
+ 
++enum asus_wmi_tablet_switch_mode {
++	asus_wmi_no_tablet_switch,
++	asus_wmi_kbd_dock_devid,
++	asus_wmi_lid_flip_devid,
++	asus_wmi_lid_flip_rog_devid,
++};
++
+ struct quirk_entry {
+ 	bool hotplug_wireless;
+ 	bool scalar_panel_brightness;
+@@ -33,7 +40,7 @@ struct quirk_entry {
+ 	bool wmi_backlight_native;
+ 	bool wmi_backlight_set_devstate;
+ 	bool wmi_force_als_set;
+-	bool use_kbd_dock_devid;
++	enum asus_wmi_tablet_switch_mode tablet_switch_mode;
+ 	int wapf;
+ 	/*
+ 	 * For machines with AMD graphic chips, it will send out WMI event
+diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
+index b977e039bb789..5e5c359eaf987 100644
+--- a/drivers/scsi/be2iscsi/be_main.c
++++ b/drivers/scsi/be2iscsi/be_main.c
+@@ -2694,6 +2694,7 @@ init_wrb_hndl_failed:
+ 		kfree(pwrb_context->pwrb_handle_base);
+ 		kfree(pwrb_context->pwrb_handle_basestd);
+ 	}
++	kfree(phwi_ctxt->be_wrbq);
+ 	return -ENOMEM;
+ }
+ 
+diff --git a/drivers/tee/optee/device.c b/drivers/tee/optee/device.c
+index 60ffc54da0033..3cb39f02fae0d 100644
+--- a/drivers/tee/optee/device.c
++++ b/drivers/tee/optee/device.c
+@@ -60,7 +60,16 @@ static void optee_release_device(struct device *dev)
+ 	kfree(optee_device);
+ }
+ 
+-static int optee_register_device(const uuid_t *device_uuid)
++static ssize_t need_supplicant_show(struct device *dev,
++				    struct device_attribute *attr,
++				    char *buf)
++{
++	return 0;
++}
++
++static DEVICE_ATTR_RO(need_supplicant);
++
++static int optee_register_device(const uuid_t *device_uuid, u32 func)
+ {
+ 	struct tee_client_device *optee_device = NULL;
+ 	int rc;
+@@ -83,6 +92,10 @@ static int optee_register_device(const uuid_t *device_uuid)
+ 		put_device(&optee_device->dev);
+ 	}
+ 
++	if (func == PTA_CMD_GET_DEVICES_SUPP)
++		device_create_file(&optee_device->dev,
++				   &dev_attr_need_supplicant);
++
+ 	return rc;
+ }
+ 
+@@ -143,7 +156,7 @@ static int __optee_enumerate_devices(u32 func)
+ 	num_devices = shm_size / sizeof(uuid_t);
+ 
+ 	for (idx = 0; idx < num_devices; idx++) {
+-		rc = optee_register_device(&device_uuid[idx]);
++		rc = optee_register_device(&device_uuid[idx], func);
+ 		if (rc)
+ 			goto out_shm;
+ 	}
+diff --git a/drivers/tty/serial/8250/8250_early.c b/drivers/tty/serial/8250/8250_early.c
+index 70d7826788f53..ccacbf0f14b3e 100644
+--- a/drivers/tty/serial/8250/8250_early.c
++++ b/drivers/tty/serial/8250/8250_early.c
+@@ -199,6 +199,7 @@ static int __init early_omap8250_setup(struct earlycon_device *device,
+ OF_EARLYCON_DECLARE(omap8250, "ti,omap2-uart", early_omap8250_setup);
+ OF_EARLYCON_DECLARE(omap8250, "ti,omap3-uart", early_omap8250_setup);
+ OF_EARLYCON_DECLARE(omap8250, "ti,omap4-uart", early_omap8250_setup);
++OF_EARLYCON_DECLARE(omap8250, "ti,am654-uart", early_omap8250_setup);
+ 
+ #endif
+ 
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index e7e84aa2c5f84..bd4118f1b6944 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -844,7 +844,7 @@ static void __dma_rx_do_complete(struct uart_8250_port *p)
+ 	if (priv->habit & UART_HAS_RHR_IT_DIS) {
+ 		reg = serial_in(p, UART_OMAP_IER2);
+ 		reg &= ~UART_OMAP_IER2_RHR_IT_DIS;
+-		serial_out(p, UART_OMAP_IER2, UART_OMAP_IER2_RHR_IT_DIS);
++		serial_out(p, UART_OMAP_IER2, reg);
+ 	}
+ 
+ 	dmaengine_tx_status(rxchan, cookie, &state);
+@@ -986,7 +986,7 @@ static int omap_8250_rx_dma(struct uart_8250_port *p)
+ 	if (priv->habit & UART_HAS_RHR_IT_DIS) {
+ 		reg = serial_in(p, UART_OMAP_IER2);
+ 		reg |= UART_OMAP_IER2_RHR_IT_DIS;
+-		serial_out(p, UART_OMAP_IER2, UART_OMAP_IER2_RHR_IT_DIS);
++		serial_out(p, UART_OMAP_IER2, reg);
+ 	}
+ 
+ 	dma_async_issue_pending(dma->rxchan);
+@@ -1209,10 +1209,12 @@ static int omap_8250_dma_handle_irq(struct uart_port *port)
+ 
+ 	status = serial_port_in(port, UART_LSR);
+ 
+-	if (priv->habit & UART_HAS_EFR2)
+-		am654_8250_handle_rx_dma(up, iir, status);
+-	else
+-		status = omap_8250_handle_rx_dma(up, iir, status);
++	if ((iir & 0x3f) != UART_IIR_THRI) {
++		if (priv->habit & UART_HAS_EFR2)
++			am654_8250_handle_rx_dma(up, iir, status);
++		else
++			status = omap_8250_handle_rx_dma(up, iir, status);
++	}
+ 
+ 	serial8250_modem_status(up);
+ 	if (status & UART_LSR_THRE && up->dma->tx_err) {
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 348d4b2a391a3..d4a93a94b4ca6 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -222,17 +222,18 @@ static struct vendor_data vendor_zte = {
+ 
+ /* Deals with DMA transactions */
+ 
+-struct pl011_sgbuf {
+-	struct scatterlist sg;
+-	char *buf;
++struct pl011_dmabuf {
++	dma_addr_t		dma;
++	size_t			len;
++	char			*buf;
+ };
+ 
+ struct pl011_dmarx_data {
+ 	struct dma_chan		*chan;
+ 	struct completion	complete;
+ 	bool			use_buf_b;
+-	struct pl011_sgbuf	sgbuf_a;
+-	struct pl011_sgbuf	sgbuf_b;
++	struct pl011_dmabuf	dbuf_a;
++	struct pl011_dmabuf	dbuf_b;
+ 	dma_cookie_t		cookie;
+ 	bool			running;
+ 	struct timer_list	timer;
+@@ -245,7 +246,8 @@ struct pl011_dmarx_data {
+ 
+ struct pl011_dmatx_data {
+ 	struct dma_chan		*chan;
+-	struct scatterlist	sg;
++	dma_addr_t		dma;
++	size_t			len;
+ 	char			*buf;
+ 	bool			queued;
+ };
+@@ -366,32 +368,24 @@ static int pl011_fifo_to_tty(struct uart_amba_port *uap)
+ 
+ #define PL011_DMA_BUFFER_SIZE PAGE_SIZE
+ 
+-static int pl011_sgbuf_init(struct dma_chan *chan, struct pl011_sgbuf *sg,
++static int pl011_dmabuf_init(struct dma_chan *chan, struct pl011_dmabuf *db,
+ 	enum dma_data_direction dir)
+ {
+-	dma_addr_t dma_addr;
+-
+-	sg->buf = dma_alloc_coherent(chan->device->dev,
+-		PL011_DMA_BUFFER_SIZE, &dma_addr, GFP_KERNEL);
+-	if (!sg->buf)
++	db->buf = dma_alloc_coherent(chan->device->dev, PL011_DMA_BUFFER_SIZE,
++				     &db->dma, GFP_KERNEL);
++	if (!db->buf)
+ 		return -ENOMEM;
+-
+-	sg_init_table(&sg->sg, 1);
+-	sg_set_page(&sg->sg, phys_to_page(dma_addr),
+-		PL011_DMA_BUFFER_SIZE, offset_in_page(dma_addr));
+-	sg_dma_address(&sg->sg) = dma_addr;
+-	sg_dma_len(&sg->sg) = PL011_DMA_BUFFER_SIZE;
++	db->len = PL011_DMA_BUFFER_SIZE;
+ 
+ 	return 0;
+ }
+ 
+-static void pl011_sgbuf_free(struct dma_chan *chan, struct pl011_sgbuf *sg,
++static void pl011_dmabuf_free(struct dma_chan *chan, struct pl011_dmabuf *db,
+ 	enum dma_data_direction dir)
+ {
+-	if (sg->buf) {
++	if (db->buf) {
+ 		dma_free_coherent(chan->device->dev,
+-			PL011_DMA_BUFFER_SIZE, sg->buf,
+-			sg_dma_address(&sg->sg));
++				  PL011_DMA_BUFFER_SIZE, db->buf, db->dma);
+ 	}
+ }
+ 
+@@ -552,8 +546,8 @@ static void pl011_dma_tx_callback(void *data)
+ 
+ 	spin_lock_irqsave(&uap->port.lock, flags);
+ 	if (uap->dmatx.queued)
+-		dma_unmap_sg(dmatx->chan->device->dev, &dmatx->sg, 1,
+-			     DMA_TO_DEVICE);
++		dma_unmap_single(dmatx->chan->device->dev, dmatx->dma,
++				dmatx->len, DMA_TO_DEVICE);
+ 
+ 	dmacr = uap->dmacr;
+ 	uap->dmacr = dmacr & ~UART011_TXDMAE;
+@@ -639,18 +633,19 @@ static int pl011_dma_tx_refill(struct uart_amba_port *uap)
+ 			memcpy(&dmatx->buf[first], &xmit->buf[0], second);
+ 	}
+ 
+-	dmatx->sg.length = count;
+-
+-	if (dma_map_sg(dma_dev->dev, &dmatx->sg, 1, DMA_TO_DEVICE) != 1) {
++	dmatx->len = count;
++	dmatx->dma = dma_map_single(dma_dev->dev, dmatx->buf, count,
++				    DMA_TO_DEVICE);
++	if (dmatx->dma == DMA_MAPPING_ERROR) {
+ 		uap->dmatx.queued = false;
+ 		dev_dbg(uap->port.dev, "unable to map TX DMA\n");
+ 		return -EBUSY;
+ 	}
+ 
+-	desc = dmaengine_prep_slave_sg(chan, &dmatx->sg, 1, DMA_MEM_TO_DEV,
++	desc = dmaengine_prep_slave_single(chan, dmatx->dma, dmatx->len, DMA_MEM_TO_DEV,
+ 					     DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ 	if (!desc) {
+-		dma_unmap_sg(dma_dev->dev, &dmatx->sg, 1, DMA_TO_DEVICE);
++		dma_unmap_single(dma_dev->dev, dmatx->dma, dmatx->len, DMA_TO_DEVICE);
+ 		uap->dmatx.queued = false;
+ 		/*
+ 		 * If DMA cannot be used right now, we complete this
+@@ -814,8 +809,8 @@ __acquires(&uap->port.lock)
+ 	dmaengine_terminate_async(uap->dmatx.chan);
+ 
+ 	if (uap->dmatx.queued) {
+-		dma_unmap_sg(uap->dmatx.chan->device->dev, &uap->dmatx.sg, 1,
+-			     DMA_TO_DEVICE);
++		dma_unmap_single(uap->dmatx.chan->device->dev, uap->dmatx.dma,
++				 uap->dmatx.len, DMA_TO_DEVICE);
+ 		uap->dmatx.queued = false;
+ 		uap->dmacr &= ~UART011_TXDMAE;
+ 		pl011_write(uap->dmacr, uap, REG_DMACR);
+@@ -829,15 +824,15 @@ static int pl011_dma_rx_trigger_dma(struct uart_amba_port *uap)
+ 	struct dma_chan *rxchan = uap->dmarx.chan;
+ 	struct pl011_dmarx_data *dmarx = &uap->dmarx;
+ 	struct dma_async_tx_descriptor *desc;
+-	struct pl011_sgbuf *sgbuf;
++	struct pl011_dmabuf *dbuf;
+ 
+ 	if (!rxchan)
+ 		return -EIO;
+ 
+ 	/* Start the RX DMA job */
+-	sgbuf = uap->dmarx.use_buf_b ?
+-		&uap->dmarx.sgbuf_b : &uap->dmarx.sgbuf_a;
+-	desc = dmaengine_prep_slave_sg(rxchan, &sgbuf->sg, 1,
++	dbuf = uap->dmarx.use_buf_b ?
++		&uap->dmarx.dbuf_b : &uap->dmarx.dbuf_a;
++	desc = dmaengine_prep_slave_single(rxchan, dbuf->dma, dbuf->len,
+ 					DMA_DEV_TO_MEM,
+ 					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ 	/*
+@@ -877,8 +872,8 @@ static void pl011_dma_rx_chars(struct uart_amba_port *uap,
+ 			       bool readfifo)
+ {
+ 	struct tty_port *port = &uap->port.state->port;
+-	struct pl011_sgbuf *sgbuf = use_buf_b ?
+-		&uap->dmarx.sgbuf_b : &uap->dmarx.sgbuf_a;
++	struct pl011_dmabuf *dbuf = use_buf_b ?
++		&uap->dmarx.dbuf_b : &uap->dmarx.dbuf_a;
+ 	int dma_count = 0;
+ 	u32 fifotaken = 0; /* only used for vdbg() */
+ 
+@@ -887,7 +882,7 @@ static void pl011_dma_rx_chars(struct uart_amba_port *uap,
+ 
+ 	if (uap->dmarx.poll_rate) {
+ 		/* The data can be taken by polling */
+-		dmataken = sgbuf->sg.length - dmarx->last_residue;
++		dmataken = dbuf->len - dmarx->last_residue;
+ 		/* Recalculate the pending size */
+ 		if (pending >= dmataken)
+ 			pending -= dmataken;
+@@ -901,7 +896,7 @@ static void pl011_dma_rx_chars(struct uart_amba_port *uap,
+ 		 * Note that tty_insert_flip_buf() tries to take as many chars
+ 		 * as it can.
+ 		 */
+-		dma_count = tty_insert_flip_string(port, sgbuf->buf + dmataken,
++		dma_count = tty_insert_flip_string(port, dbuf->buf + dmataken,
+ 				pending);
+ 
+ 		uap->port.icount.rx += dma_count;
+@@ -912,7 +907,7 @@ static void pl011_dma_rx_chars(struct uart_amba_port *uap,
+ 
+ 	/* Reset the last_residue for Rx DMA poll */
+ 	if (uap->dmarx.poll_rate)
+-		dmarx->last_residue = sgbuf->sg.length;
++		dmarx->last_residue = dbuf->len;
+ 
+ 	/*
+ 	 * Only continue with trying to read the FIFO if all DMA chars have
+@@ -949,8 +944,8 @@ static void pl011_dma_rx_irq(struct uart_amba_port *uap)
+ {
+ 	struct pl011_dmarx_data *dmarx = &uap->dmarx;
+ 	struct dma_chan *rxchan = dmarx->chan;
+-	struct pl011_sgbuf *sgbuf = dmarx->use_buf_b ?
+-		&dmarx->sgbuf_b : &dmarx->sgbuf_a;
++	struct pl011_dmabuf *dbuf = dmarx->use_buf_b ?
++		&dmarx->dbuf_b : &dmarx->dbuf_a;
+ 	size_t pending;
+ 	struct dma_tx_state state;
+ 	enum dma_status dmastat;
+@@ -972,7 +967,7 @@ static void pl011_dma_rx_irq(struct uart_amba_port *uap)
+ 	pl011_write(uap->dmacr, uap, REG_DMACR);
+ 	uap->dmarx.running = false;
+ 
+-	pending = sgbuf->sg.length - state.residue;
++	pending = dbuf->len - state.residue;
+ 	BUG_ON(pending > PL011_DMA_BUFFER_SIZE);
+ 	/* Then we terminate the transfer - we now know our residue */
+ 	dmaengine_terminate_all(rxchan);
+@@ -999,8 +994,8 @@ static void pl011_dma_rx_callback(void *data)
+ 	struct pl011_dmarx_data *dmarx = &uap->dmarx;
+ 	struct dma_chan *rxchan = dmarx->chan;
+ 	bool lastbuf = dmarx->use_buf_b;
+-	struct pl011_sgbuf *sgbuf = dmarx->use_buf_b ?
+-		&dmarx->sgbuf_b : &dmarx->sgbuf_a;
++	struct pl011_dmabuf *dbuf = dmarx->use_buf_b ?
++		&dmarx->dbuf_b : &dmarx->dbuf_a;
+ 	size_t pending;
+ 	struct dma_tx_state state;
+ 	int ret;
+@@ -1018,7 +1013,7 @@ static void pl011_dma_rx_callback(void *data)
+ 	 * the DMA irq handler. So we check the residue here.
+ 	 */
+ 	rxchan->device->device_tx_status(rxchan, dmarx->cookie, &state);
+-	pending = sgbuf->sg.length - state.residue;
++	pending = dbuf->len - state.residue;
+ 	BUG_ON(pending > PL011_DMA_BUFFER_SIZE);
+ 	/* Then we terminate the transfer - we now know our residue */
+ 	dmaengine_terminate_all(rxchan);
+@@ -1070,16 +1065,16 @@ static void pl011_dma_rx_poll(struct timer_list *t)
+ 	unsigned long flags = 0;
+ 	unsigned int dmataken = 0;
+ 	unsigned int size = 0;
+-	struct pl011_sgbuf *sgbuf;
++	struct pl011_dmabuf *dbuf;
+ 	int dma_count;
+ 	struct dma_tx_state state;
+ 
+-	sgbuf = dmarx->use_buf_b ? &uap->dmarx.sgbuf_b : &uap->dmarx.sgbuf_a;
++	dbuf = dmarx->use_buf_b ? &uap->dmarx.dbuf_b : &uap->dmarx.dbuf_a;
+ 	rxchan->device->device_tx_status(rxchan, dmarx->cookie, &state);
+ 	if (likely(state.residue < dmarx->last_residue)) {
+-		dmataken = sgbuf->sg.length - dmarx->last_residue;
++		dmataken = dbuf->len - dmarx->last_residue;
+ 		size = dmarx->last_residue - state.residue;
+-		dma_count = tty_insert_flip_string(port, sgbuf->buf + dmataken,
++		dma_count = tty_insert_flip_string(port, dbuf->buf + dmataken,
+ 				size);
+ 		if (dma_count == size)
+ 			dmarx->last_residue =  state.residue;
+@@ -1126,7 +1121,7 @@ static void pl011_dma_startup(struct uart_amba_port *uap)
+ 		return;
+ 	}
+ 
+-	sg_init_one(&uap->dmatx.sg, uap->dmatx.buf, PL011_DMA_BUFFER_SIZE);
++	uap->dmatx.len = PL011_DMA_BUFFER_SIZE;
+ 
+ 	/* The DMA buffer is now the FIFO the TTY subsystem can use */
+ 	uap->port.fifosize = PL011_DMA_BUFFER_SIZE;
+@@ -1136,7 +1131,7 @@ static void pl011_dma_startup(struct uart_amba_port *uap)
+ 		goto skip_rx;
+ 
+ 	/* Allocate and map DMA RX buffers */
+-	ret = pl011_sgbuf_init(uap->dmarx.chan, &uap->dmarx.sgbuf_a,
++	ret = pl011_dmabuf_init(uap->dmarx.chan, &uap->dmarx.dbuf_a,
+ 			       DMA_FROM_DEVICE);
+ 	if (ret) {
+ 		dev_err(uap->port.dev, "failed to init DMA %s: %d\n",
+@@ -1144,12 +1139,12 @@ static void pl011_dma_startup(struct uart_amba_port *uap)
+ 		goto skip_rx;
+ 	}
+ 
+-	ret = pl011_sgbuf_init(uap->dmarx.chan, &uap->dmarx.sgbuf_b,
++	ret = pl011_dmabuf_init(uap->dmarx.chan, &uap->dmarx.dbuf_b,
+ 			       DMA_FROM_DEVICE);
+ 	if (ret) {
+ 		dev_err(uap->port.dev, "failed to init DMA %s: %d\n",
+ 			"RX buffer B", ret);
+-		pl011_sgbuf_free(uap->dmarx.chan, &uap->dmarx.sgbuf_a,
++		pl011_dmabuf_free(uap->dmarx.chan, &uap->dmarx.dbuf_a,
+ 				 DMA_FROM_DEVICE);
+ 		goto skip_rx;
+ 	}
+@@ -1203,8 +1198,9 @@ static void pl011_dma_shutdown(struct uart_amba_port *uap)
+ 		/* In theory, this should already be done by pl011_dma_flush_buffer */
+ 		dmaengine_terminate_all(uap->dmatx.chan);
+ 		if (uap->dmatx.queued) {
+-			dma_unmap_sg(uap->dmatx.chan->device->dev, &uap->dmatx.sg, 1,
+-				     DMA_TO_DEVICE);
++			dma_unmap_single(uap->dmatx.chan->device->dev,
++					 uap->dmatx.dma, uap->dmatx.len,
++					 DMA_TO_DEVICE);
+ 			uap->dmatx.queued = false;
+ 		}
+ 
+@@ -1215,8 +1211,8 @@ static void pl011_dma_shutdown(struct uart_amba_port *uap)
+ 	if (uap->using_rx_dma) {
+ 		dmaengine_terminate_all(uap->dmarx.chan);
+ 		/* Clean up the RX DMA */
+-		pl011_sgbuf_free(uap->dmarx.chan, &uap->dmarx.sgbuf_a, DMA_FROM_DEVICE);
+-		pl011_sgbuf_free(uap->dmarx.chan, &uap->dmarx.sgbuf_b, DMA_FROM_DEVICE);
++		pl011_dmabuf_free(uap->dmarx.chan, &uap->dmarx.dbuf_a, DMA_FROM_DEVICE);
++		pl011_dmabuf_free(uap->dmarx.chan, &uap->dmarx.dbuf_b, DMA_FROM_DEVICE);
+ 		if (uap->dmarx.poll_rate)
+ 			del_timer_sync(&uap->dmarx.timer);
+ 		uap->using_rx_dma = false;
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index c681ed589303c..fd9be81bcfd86 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -694,6 +694,18 @@ static bool sc16is7xx_port_irq(struct sc16is7xx_port *s, int portno)
+ 		case SC16IS7XX_IIR_RTOI_SRC:
+ 		case SC16IS7XX_IIR_XOFFI_SRC:
+ 			rxlen = sc16is7xx_port_read(port, SC16IS7XX_RXLVL_REG);
++
++			/*
++			 * There is a silicon bug that makes the chip report a
++			 * time-out interrupt but no data in the FIFO. This is
++			 * described in errata section 18.1.4.
++			 *
++			 * When this happens, read one byte from the FIFO to
++			 * clear the interrupt.
++			 */
++			if (iir == SC16IS7XX_IIR_RTOI_SRC && !rxlen)
++				rxlen = 1;
++
+ 			if (rxlen)
+ 				sc16is7xx_handle_rx(port, rxlen, iir);
+ 			break;
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index e7cf56b13c643..ba018aeb21d8c 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -88,6 +88,7 @@ static void hidg_release(struct device *dev)
+ {
+ 	struct f_hidg *hidg = container_of(dev, struct f_hidg, dev);
+ 
++	kfree(hidg->report_desc);
+ 	kfree(hidg->set_report_buf);
+ 	kfree(hidg);
+ }
+@@ -1287,9 +1288,9 @@ static struct usb_function *hidg_alloc(struct usb_function_instance *fi)
+ 	hidg->report_length = opts->report_length;
+ 	hidg->report_desc_length = opts->report_desc_length;
+ 	if (opts->report_desc) {
+-		hidg->report_desc = devm_kmemdup(&hidg->dev, opts->report_desc,
+-						 opts->report_desc_length,
+-						 GFP_KERNEL);
++		hidg->report_desc = kmemdup(opts->report_desc,
++					    opts->report_desc_length,
++					    GFP_KERNEL);
+ 		if (!hidg->report_desc) {
+ 			put_device(&hidg->dev);
+ 			--opts->refcnt;
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 0933a94ebe29e..53f8327873501 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -345,8 +345,6 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	/* xHC spec requires PCI devices to support D3hot and D3cold */
+ 	if (xhci->hci_version >= 0x120)
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+-	else if (pdev->vendor == PCI_VENDOR_ID_AMD && xhci->hci_version >= 0x110)
+-		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+ 
+ 	if (xhci->quirks & XHCI_RESET_ON_RESUME)
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
+diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
+index 9d3a35b2046d3..18b35e8173614 100644
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -194,7 +194,7 @@ static void typec_altmode_put_partner(struct altmode *altmode)
+ 	if (!partner)
+ 		return;
+ 
+-	adev = &partner->adev;
++	adev = &altmode->adev;
+ 
+ 	if (is_typec_plug(adev->dev.parent)) {
+ 		struct typec_plug *plug = to_typec_plug(adev->dev.parent);
+@@ -424,7 +424,8 @@ static void typec_altmode_release(struct device *dev)
+ {
+ 	struct altmode *alt = to_altmode(to_typec_altmode(dev));
+ 
+-	typec_altmode_put_partner(alt);
++	if (!is_typec_port(dev->parent))
++		typec_altmode_put_partner(alt);
+ 
+ 	altmode_id_remove(alt->adev.dev.parent, alt->id);
+ 	kfree(alt);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index cd689d5c4c821..0e25a3f64b2e0 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2972,7 +2972,6 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 		goto fail_alloc;
+ 	}
+ 
+-	btrfs_info(fs_info, "first mount of filesystem %pU", disk_super->fsid);
+ 	/*
+ 	 * Verify the type first, if that or the checksum value are
+ 	 * corrupted, we'll find out
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index fd4edd9664500..ea731fa8bd350 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -318,10 +318,7 @@ void __btrfs_panic(struct btrfs_fs_info *fs_info, const char *function,
+ 
+ static void btrfs_put_super(struct super_block *sb)
+ {
+-	struct btrfs_fs_info *fs_info = btrfs_sb(sb);
+-
+-	btrfs_info(fs_info, "last unmount of filesystem %pU", fs_info->fs_devices->fsid);
+-	close_ctree(fs_info);
++	close_ctree(btrfs_sb(sb));
+ }
+ 
+ enum {
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index b87d6792b14ad..16155529e449f 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -1093,7 +1093,9 @@ static loff_t cifs_remap_file_range(struct file *src_file, loff_t off,
+ 	unsigned int xid;
+ 	int rc;
+ 
+-	if (remap_flags & ~(REMAP_FILE_DEDUP | REMAP_FILE_ADVISORY))
++	if (remap_flags & REMAP_FILE_DEDUP)
++		return -EOPNOTSUPP;
++	if (remap_flags & ~REMAP_FILE_ADVISORY)
+ 		return -EINVAL;
+ 
+ 	cifs_dbg(FYI, "clone range\n");
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 015b7b37edee5..e58525a958270 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -2765,6 +2765,8 @@ smb2_get_dfs_refer(const unsigned int xid, struct cifs_ses *ses,
+ 				(char **)&dfs_rsp, &dfs_rsp_size);
+ 	} while (rc == -EAGAIN);
+ 
++	if (!rc && !dfs_rsp)
++		rc = -EIO;
+ 	if (rc) {
+ 		if ((rc != -ENOENT) && (rc != -EOPNOTSUPP))
+ 			cifs_tcon_dbg(VFS, "ioctl error in %s rc=%d\n", __func__, rc);
+diff --git a/fs/nilfs2/sufile.c b/fs/nilfs2/sufile.c
+index b3abe69382fd0..23b4b8863e7f9 100644
+--- a/fs/nilfs2/sufile.c
++++ b/fs/nilfs2/sufile.c
+@@ -501,15 +501,38 @@ int nilfs_sufile_mark_dirty(struct inode *sufile, __u64 segnum)
+ 
+ 	down_write(&NILFS_MDT(sufile)->mi_sem);
+ 	ret = nilfs_sufile_get_segment_usage_block(sufile, segnum, 0, &bh);
+-	if (!ret) {
+-		mark_buffer_dirty(bh);
+-		nilfs_mdt_mark_dirty(sufile);
+-		kaddr = kmap_atomic(bh->b_page);
+-		su = nilfs_sufile_block_get_segment_usage(sufile, segnum, bh, kaddr);
++	if (ret)
++		goto out_sem;
++
++	kaddr = kmap_atomic(bh->b_page);
++	su = nilfs_sufile_block_get_segment_usage(sufile, segnum, bh, kaddr);
++	if (unlikely(nilfs_segment_usage_error(su))) {
++		struct the_nilfs *nilfs = sufile->i_sb->s_fs_info;
++
++		kunmap_atomic(kaddr);
++		brelse(bh);
++		if (nilfs_segment_is_active(nilfs, segnum)) {
++			nilfs_error(sufile->i_sb,
++				    "active segment %llu is erroneous",
++				    (unsigned long long)segnum);
++		} else {
++			/*
++			 * Segments marked erroneous are never allocated by
++			 * nilfs_sufile_alloc(); only active segments, ie,
++			 * the segments indexed by ns_segnum or ns_nextnum,
++			 * can be erroneous here.
++			 */
++			WARN_ON_ONCE(1);
++		}
++		ret = -EIO;
++	} else {
+ 		nilfs_segment_usage_set_dirty(su);
+ 		kunmap_atomic(kaddr);
++		mark_buffer_dirty(bh);
++		nilfs_mdt_mark_dirty(sufile);
+ 		brelse(bh);
+ 	}
++out_sem:
+ 	up_write(&NILFS_MDT(sufile)->mi_sem);
+ 	return ret;
+ }
+@@ -536,9 +559,14 @@ int nilfs_sufile_set_segment_usage(struct inode *sufile, __u64 segnum,
+ 
+ 	kaddr = kmap_atomic(bh->b_page);
+ 	su = nilfs_sufile_block_get_segment_usage(sufile, segnum, bh, kaddr);
+-	WARN_ON(nilfs_segment_usage_error(su));
+-	if (modtime)
++	if (modtime) {
++		/*
++		 * Check segusage error and set su_lastmod only when updating
++		 * this entry with a valid timestamp, not for cancellation.
++		 */
++		WARN_ON_ONCE(nilfs_segment_usage_error(su));
+ 		su->su_lastmod = cpu_to_le64(modtime);
++	}
+ 	su->su_nblocks = cpu_to_le32(nblocks);
+ 	kunmap_atomic(kaddr);
+ 
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index 1566e86e8e555..a8374d89da7c8 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -717,7 +717,11 @@ int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb, char *data)
+ 			goto failed_sbh;
+ 		}
+ 		nilfs_release_super_block(nilfs);
+-		sb_set_blocksize(sb, blocksize);
++		if (!sb_set_blocksize(sb, blocksize)) {
++			nilfs_err(sb, "bad blocksize %d", blocksize);
++			err = -EINVAL;
++			goto out;
++		}
+ 
+ 		err = nilfs_load_super_block(nilfs, sb, blocksize, &sbp);
+ 		if (err)
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index 7cc2889608e0f..f5a5df3a8cfd1 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -149,6 +149,7 @@ enum cpuhp_state {
+ 	CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,
+ 	CPUHP_AP_ARM64_ISNDEP_STARTING,
+ 	CPUHP_AP_SMPCFD_DYING,
++	CPUHP_AP_HRTIMERS_DYING,
+ 	CPUHP_AP_X86_TBOOT_DYING,
+ 	CPUHP_AP_ARM_CACHE_B15_RAC_DYING,
+ 	CPUHP_AP_ONLINE,
+diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
+index 7f1b8549ebcee..a88be8bd4e1d1 100644
+--- a/include/linux/hrtimer.h
++++ b/include/linux/hrtimer.h
+@@ -526,9 +526,9 @@ extern void sysrq_timer_list_show(void);
+ 
+ int hrtimers_prepare_cpu(unsigned int cpu);
+ #ifdef CONFIG_HOTPLUG_CPU
+-int hrtimers_dead_cpu(unsigned int cpu);
++int hrtimers_cpu_dying(unsigned int cpu);
+ #else
+-#define hrtimers_dead_cpu	NULL
++#define hrtimers_cpu_dying	NULL
+ #endif
+ 
+ #endif
+diff --git a/include/linux/of.h b/include/linux/of.h
+index 0f4e81e6fb232..57f2d3dddc0ce 100644
+--- a/include/linux/of.h
++++ b/include/linux/of.h
+@@ -424,12 +424,14 @@ extern int of_detach_node(struct device_node *);
+  * @sz:		number of array elements to read
+  *
+  * Search for a property in a device node and read 8-bit value(s) from
+- * it. Returns 0 on success, -EINVAL if the property does not exist,
+- * -ENODATA if property does not have a value, and -EOVERFLOW if the
+- * property data isn't large enough.
++ * it.
+  *
+  * dts entry of array should be like:
+- *	property = /bits/ 8 <0x50 0x60 0x70>;
++ *  ``property = /bits/ 8 <0x50 0x60 0x70>;``
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist,
++ * -ENODATA if property does not have a value, and -EOVERFLOW if the
++ * property data isn't large enough.
+  *
+  * The out_values is modified only if a valid u8 value can be decoded.
+  */
+@@ -454,12 +456,14 @@ static inline int of_property_read_u8_array(const struct device_node *np,
+  * @sz:		number of array elements to read
+  *
+  * Search for a property in a device node and read 16-bit value(s) from
+- * it. Returns 0 on success, -EINVAL if the property does not exist,
+- * -ENODATA if property does not have a value, and -EOVERFLOW if the
+- * property data isn't large enough.
++ * it.
+  *
+  * dts entry of array should be like:
+- *	property = /bits/ 16 <0x5000 0x6000 0x7000>;
++ *  ``property = /bits/ 16 <0x5000 0x6000 0x7000>;``
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist,
++ * -ENODATA if property does not have a value, and -EOVERFLOW if the
++ * property data isn't large enough.
+  *
+  * The out_values is modified only if a valid u16 value can be decoded.
+  */
+@@ -485,7 +489,9 @@ static inline int of_property_read_u16_array(const struct device_node *np,
+  * @sz:		number of array elements to read
+  *
+  * Search for a property in a device node and read 32-bit value(s) from
+- * it. Returns 0 on success, -EINVAL if the property does not exist,
++ * it.
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist,
+  * -ENODATA if property does not have a value, and -EOVERFLOW if the
+  * property data isn't large enough.
+  *
+@@ -513,7 +519,9 @@ static inline int of_property_read_u32_array(const struct device_node *np,
+  * @sz:		number of array elements to read
+  *
+  * Search for a property in a device node and read 64-bit value(s) from
+- * it. Returns 0 on success, -EINVAL if the property does not exist,
++ * it.
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist,
+  * -ENODATA if property does not have a value, and -EOVERFLOW if the
+  * property data isn't large enough.
+  *
+@@ -1063,7 +1071,9 @@ static inline bool of_node_is_type(const struct device_node *np, const char *typ
+  * @propname:	name of the property to be searched.
+  *
+  * Search for a property in a device node and count the number of u8 elements
+- * in it. Returns number of elements on sucess, -EINVAL if the property does
++ * in it.
++ *
++ * Return: The number of elements on sucess, -EINVAL if the property does
+  * not exist or its length does not match a multiple of u8 and -ENODATA if the
+  * property does not have a value.
+  */
+@@ -1080,7 +1090,9 @@ static inline int of_property_count_u8_elems(const struct device_node *np,
+  * @propname:	name of the property to be searched.
+  *
+  * Search for a property in a device node and count the number of u16 elements
+- * in it. Returns number of elements on sucess, -EINVAL if the property does
++ * in it.
++ *
++ * Return: The number of elements on sucess, -EINVAL if the property does
+  * not exist or its length does not match a multiple of u16 and -ENODATA if the
+  * property does not have a value.
+  */
+@@ -1097,7 +1109,9 @@ static inline int of_property_count_u16_elems(const struct device_node *np,
+  * @propname:	name of the property to be searched.
+  *
+  * Search for a property in a device node and count the number of u32 elements
+- * in it. Returns number of elements on sucess, -EINVAL if the property does
++ * in it.
++ *
++ * Return: The number of elements on sucess, -EINVAL if the property does
+  * not exist or its length does not match a multiple of u32 and -ENODATA if the
+  * property does not have a value.
+  */
+@@ -1114,7 +1128,9 @@ static inline int of_property_count_u32_elems(const struct device_node *np,
+  * @propname:	name of the property to be searched.
+  *
+  * Search for a property in a device node and count the number of u64 elements
+- * in it. Returns number of elements on sucess, -EINVAL if the property does
++ * in it.
++ *
++ * Return: The number of elements on sucess, -EINVAL if the property does
+  * not exist or its length does not match a multiple of u64 and -ENODATA if the
+  * property does not have a value.
+  */
+@@ -1135,7 +1151,7 @@ static inline int of_property_count_u64_elems(const struct device_node *np,
+  * Search for a property in a device tree node and retrieve a list of
+  * terminated string values (pointer to data, not a copy) in that property.
+  *
+- * If @out_strs is NULL, the number of strings in the property is returned.
++ * Return: If @out_strs is NULL, the number of strings in the property is returned.
+  */
+ static inline int of_property_read_string_array(const struct device_node *np,
+ 						const char *propname, const char **out_strs,
+@@ -1151,10 +1167,11 @@ static inline int of_property_read_string_array(const struct device_node *np,
+  * @propname:	name of the property to be searched.
+  *
+  * Search for a property in a device tree node and retrieve the number of null
+- * terminated string contain in it. Returns the number of strings on
+- * success, -EINVAL if the property does not exist, -ENODATA if property
+- * does not have a value, and -EILSEQ if the string is not null-terminated
+- * within the length of the property data.
++ * terminated string contain in it.
++ *
++ * Return: The number of strings on success, -EINVAL if the property does not
++ * exist, -ENODATA if property does not have a value, and -EILSEQ if the string
++ * is not null-terminated within the length of the property data.
+  */
+ static inline int of_property_count_strings(const struct device_node *np,
+ 					    const char *propname)
+@@ -1174,7 +1191,8 @@ static inline int of_property_count_strings(const struct device_node *np,
+  * Search for a property in a device tree node and retrieve a null
+  * terminated string value (pointer to data, not a copy) in the list of strings
+  * contained in that property.
+- * Returns 0 on success, -EINVAL if the property does not exist, -ENODATA if
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist, -ENODATA if
+  * property does not have a value, and -EILSEQ if the string is not
+  * null-terminated within the length of the property data.
+  *
+@@ -1194,7 +1212,8 @@ static inline int of_property_read_string_index(const struct device_node *np,
+  * @propname:	name of the property to be searched.
+  *
+  * Search for a property in a device node.
+- * Returns true if the property exists false otherwise.
++ *
++ * Return: true if the property exists false otherwise.
+  */
+ static inline bool of_property_read_bool(const struct device_node *np,
+ 					 const char *propname)
+@@ -1440,7 +1459,7 @@ static inline int of_reconfig_get_state_change(unsigned long action,
+  * of_device_is_system_power_controller - Tells if system-power-controller is found for device_node
+  * @np: Pointer to the given device_node
+  *
+- * return true if present false otherwise
++ * Return: true if present false otherwise
+  */
+ static inline bool of_device_is_system_power_controller(const struct device_node *np)
+ {
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 50557d903a059..3b038a2cc0071 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -744,6 +744,8 @@ struct perf_event {
+ 	struct pid_namespace		*ns;
+ 	u64				id;
+ 
++	atomic64_t			lost_samples;
++
+ 	u64				(*clock)(void);
+ 	perf_overflow_handler_t		overflow_handler;
+ 	void				*overflow_handler_context;
+diff --git a/include/linux/platform_data/x86/asus-wmi.h b/include/linux/platform_data/x86/asus-wmi.h
+index 897b8332a39f4..60cad2aac5c1c 100644
+--- a/include/linux/platform_data/x86/asus-wmi.h
++++ b/include/linux/platform_data/x86/asus-wmi.h
+@@ -62,6 +62,8 @@
+ 
+ /* Misc */
+ #define ASUS_WMI_DEVID_CAMERA		0x00060013
++#define ASUS_WMI_DEVID_LID_FLIP		0x00060062
++#define ASUS_WMI_DEVID_LID_FLIP_ROG	0x00060077
+ 
+ /* Storage */
+ #define ASUS_WMI_DEVID_CARDREADER	0x00080013
+@@ -88,6 +90,9 @@
+ /* Keyboard dock */
+ #define ASUS_WMI_DEVID_KBD_DOCK		0x00120063
+ 
++/* dgpu on/off */
++#define ASUS_WMI_DEVID_DGPU		0x00090020
++
+ /* DSTS masks */
+ #define ASUS_WMI_DSTS_STATUS_BIT	0x00000001
+ #define ASUS_WMI_DSTS_UNKNOWN_BIT	0x00000002
+diff --git a/include/net/genetlink.h b/include/net/genetlink.h
+index e55ec1597ce79..3057c8e6dcfe9 100644
+--- a/include/net/genetlink.h
++++ b/include/net/genetlink.h
+@@ -11,9 +11,12 @@
+ /**
+  * struct genl_multicast_group - generic netlink multicast group
+  * @name: name of the multicast group, names are per-family
++ * @cap_sys_admin: whether %CAP_SYS_ADMIN is required for binding
+  */
+ struct genl_multicast_group {
+ 	char			name[GENL_NAMSIZ];
++	u8			flags;
++	u8			cap_sys_admin:1;
+ };
+ 
+ struct genl_ops;
+diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
+index b95d3c485d27e..6ca63ab6bee5d 100644
+--- a/include/uapi/linux/perf_event.h
++++ b/include/uapi/linux/perf_event.h
+@@ -279,6 +279,7 @@ enum {
+  *	  { u64		time_enabled; } && PERF_FORMAT_TOTAL_TIME_ENABLED
+  *	  { u64		time_running; } && PERF_FORMAT_TOTAL_TIME_RUNNING
+  *	  { u64		id;           } && PERF_FORMAT_ID
++ *	  { u64		lost;         } && PERF_FORMAT_LOST
+  *	} && !PERF_FORMAT_GROUP
+  *
+  *	{ u64		nr;
+@@ -286,6 +287,7 @@ enum {
+  *	  { u64		time_running; } && PERF_FORMAT_TOTAL_TIME_RUNNING
+  *	  { u64		value;
+  *	    { u64	id;           } && PERF_FORMAT_ID
++ *	    { u64	lost;         } && PERF_FORMAT_LOST
+  *	  }		cntr[nr];
+  *	} && PERF_FORMAT_GROUP
+  * };
+@@ -295,8 +297,9 @@ enum perf_event_read_format {
+ 	PERF_FORMAT_TOTAL_TIME_RUNNING		= 1U << 1,
+ 	PERF_FORMAT_ID				= 1U << 2,
+ 	PERF_FORMAT_GROUP			= 1U << 3,
++	PERF_FORMAT_LOST			= 1U << 4,
+ 
+-	PERF_FORMAT_MAX = 1U << 4,		/* non-ABI */
++	PERF_FORMAT_MAX = 1U << 5,		/* non-ABI */
+ };
+ 
+ #define PERF_ATTR_SIZE_VER0	64	/* sizeof first published struct */
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 0c4f159865b95..aa15e8d878a9e 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -8452,49 +8452,6 @@ out_free:
+ 	return ret;
+ }
+ 
+-static int io_sqe_file_register(struct io_ring_ctx *ctx, struct file *file,
+-				int index)
+-{
+-#if defined(CONFIG_UNIX)
+-	struct sock *sock = ctx->ring_sock->sk;
+-	struct sk_buff_head *head = &sock->sk_receive_queue;
+-	struct sk_buff *skb;
+-
+-	/*
+-	 * See if we can merge this file into an existing skb SCM_RIGHTS
+-	 * file set. If there's no room, fall back to allocating a new skb
+-	 * and filling it in.
+-	 */
+-	spin_lock_irq(&head->lock);
+-	skb = skb_peek(head);
+-	if (skb) {
+-		struct scm_fp_list *fpl = UNIXCB(skb).fp;
+-
+-		if (fpl->count < SCM_MAX_FD) {
+-			__skb_unlink(skb, head);
+-			spin_unlock_irq(&head->lock);
+-			fpl->fp[fpl->count] = get_file(file);
+-			unix_inflight(fpl->user, fpl->fp[fpl->count]);
+-			fpl->count++;
+-			spin_lock_irq(&head->lock);
+-			__skb_queue_head(head, skb);
+-		} else {
+-			skb = NULL;
+-		}
+-	}
+-	spin_unlock_irq(&head->lock);
+-
+-	if (skb) {
+-		fput(file);
+-		return 0;
+-	}
+-
+-	return __io_sqe_files_scm(ctx, 1, index);
+-#else
+-	return 0;
+-#endif
+-}
+-
+ static int io_queue_rsrc_removal(struct io_rsrc_data *data, unsigned idx,
+ 				 struct io_rsrc_node *node, void *rsrc)
+ {
+@@ -8552,12 +8509,6 @@ static int io_install_fixed_file(struct io_kiocb *req, struct file *file,
+ 
+ 	*io_get_tag_slot(ctx->file_data, slot_index) = 0;
+ 	io_fixed_file_set(file_slot, file);
+-	ret = io_sqe_file_register(ctx, file, slot_index);
+-	if (ret) {
+-		file_slot->file_ptr = 0;
+-		goto err;
+-	}
+-
+ 	ret = 0;
+ err:
+ 	if (needs_switch)
+@@ -8671,12 +8622,6 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+ 			}
+ 			*io_get_tag_slot(data, i) = tag;
+ 			io_fixed_file_set(file_slot, file);
+-			err = io_sqe_file_register(ctx, file, i);
+-			if (err) {
+-				file_slot->file_ptr = 0;
+-				fput(file);
+-				break;
+-			}
+ 		}
+ 	}
+ 
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 008b50da22246..abf717c4f57c2 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -1595,7 +1595,7 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ 	[CPUHP_HRTIMERS_PREPARE] = {
+ 		.name			= "hrtimers:prepare",
+ 		.startup.single		= hrtimers_prepare_cpu,
+-		.teardown.single	= hrtimers_dead_cpu,
++		.teardown.single	= NULL,
+ 	},
+ 	[CPUHP_SMPCFD_PREPARE] = {
+ 		.name			= "smpcfd:prepare",
+@@ -1662,6 +1662,12 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ 		.startup.single		= NULL,
+ 		.teardown.single	= smpcfd_dying_cpu,
+ 	},
++	[CPUHP_AP_HRTIMERS_DYING] = {
++		.name			= "hrtimers:dying",
++		.startup.single		= NULL,
++		.teardown.single	= hrtimers_cpu_dying,
++	},
++
+ 	/* Entry state on starting. Interrupts enabled from here on. Transient
+ 	 * state for synchronsization */
+ 	[CPUHP_AP_ONLINE] = {
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 5072635f0b0c1..0a6855d648031 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -1912,28 +1912,34 @@ static inline void perf_event__state_init(struct perf_event *event)
+ 					      PERF_EVENT_STATE_INACTIVE;
+ }
+ 
+-static void __perf_event_read_size(struct perf_event *event, int nr_siblings)
++static int __perf_event_read_size(u64 read_format, int nr_siblings)
+ {
+ 	int entry = sizeof(u64); /* value */
+ 	int size = 0;
+ 	int nr = 1;
+ 
+-	if (event->attr.read_format & PERF_FORMAT_TOTAL_TIME_ENABLED)
++	if (read_format & PERF_FORMAT_TOTAL_TIME_ENABLED)
+ 		size += sizeof(u64);
+ 
+-	if (event->attr.read_format & PERF_FORMAT_TOTAL_TIME_RUNNING)
++	if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING)
+ 		size += sizeof(u64);
+ 
+-	if (event->attr.read_format & PERF_FORMAT_ID)
++	if (read_format & PERF_FORMAT_ID)
++		entry += sizeof(u64);
++
++	if (read_format & PERF_FORMAT_LOST)
+ 		entry += sizeof(u64);
+ 
+-	if (event->attr.read_format & PERF_FORMAT_GROUP) {
++	if (read_format & PERF_FORMAT_GROUP) {
+ 		nr += nr_siblings;
+ 		size += sizeof(u64);
+ 	}
+ 
+-	size += entry * nr;
+-	event->read_size = size;
++	/*
++	 * Since perf_event_validate_size() limits this to 16k and inhibits
++	 * adding more siblings, this will never overflow.
++	 */
++	return size + nr * entry;
+ }
+ 
+ static void __perf_event_header_size(struct perf_event *event, u64 sample_type)
+@@ -1977,8 +1983,9 @@ static void __perf_event_header_size(struct perf_event *event, u64 sample_type)
+  */
+ static void perf_event__header_size(struct perf_event *event)
+ {
+-	__perf_event_read_size(event,
+-			       event->group_leader->nr_siblings);
++	event->read_size =
++		__perf_event_read_size(event->attr.read_format,
++				       event->group_leader->nr_siblings);
+ 	__perf_event_header_size(event, event->attr.sample_type);
+ }
+ 
+@@ -2009,24 +2016,35 @@ static void perf_event__id_header_size(struct perf_event *event)
+ 	event->id_header_size = size;
+ }
+ 
++/*
++ * Check that adding an event to the group does not result in anybody
++ * overflowing the 64k event limit imposed by the output buffer.
++ *
++ * Specifically, check that the read_size for the event does not exceed 16k,
++ * read_size being the one term that grows with groups size. Since read_size
++ * depends on per-event read_format, also (re)check the existing events.
++ *
++ * This leaves 48k for the constant size fields and things like callchains,
++ * branch stacks and register sets.
++ */
+ static bool perf_event_validate_size(struct perf_event *event)
+ {
+-	/*
+-	 * The values computed here will be over-written when we actually
+-	 * attach the event.
+-	 */
+-	__perf_event_read_size(event, event->group_leader->nr_siblings + 1);
+-	__perf_event_header_size(event, event->attr.sample_type & ~PERF_SAMPLE_READ);
+-	perf_event__id_header_size(event);
++	struct perf_event *sibling, *group_leader = event->group_leader;
+ 
+-	/*
+-	 * Sum the lot; should not exceed the 64k limit we have on records.
+-	 * Conservative limit to allow for callchains and other variable fields.
+-	 */
+-	if (event->read_size + event->header_size +
+-	    event->id_header_size + sizeof(struct perf_event_header) >= 16*1024)
++	if (__perf_event_read_size(event->attr.read_format,
++				   group_leader->nr_siblings + 1) > 16*1024)
++		return false;
++
++	if (__perf_event_read_size(group_leader->attr.read_format,
++				   group_leader->nr_siblings + 1) > 16*1024)
+ 		return false;
+ 
++	for_each_sibling_event(sibling, group_leader) {
++		if (__perf_event_read_size(sibling->attr.read_format,
++					   group_leader->nr_siblings + 1) > 16*1024)
++			return false;
++	}
++
+ 	return true;
+ }
+ 
+@@ -5283,11 +5301,15 @@ static int __perf_read_group_add(struct perf_event *leader,
+ 	values[n++] += perf_event_count(leader);
+ 	if (read_format & PERF_FORMAT_ID)
+ 		values[n++] = primary_event_id(leader);
++	if (read_format & PERF_FORMAT_LOST)
++		values[n++] = atomic64_read(&leader->lost_samples);
+ 
+ 	for_each_sibling_event(sub, leader) {
+ 		values[n++] += perf_event_count(sub);
+ 		if (read_format & PERF_FORMAT_ID)
+ 			values[n++] = primary_event_id(sub);
++		if (read_format & PERF_FORMAT_LOST)
++			values[n++] = atomic64_read(&sub->lost_samples);
+ 	}
+ 
+ unlock:
+@@ -5341,7 +5363,7 @@ static int perf_read_one(struct perf_event *event,
+ 				 u64 read_format, char __user *buf)
+ {
+ 	u64 enabled, running;
+-	u64 values[4];
++	u64 values[5];
+ 	int n = 0;
+ 
+ 	values[n++] = __perf_event_read_value(event, &enabled, &running);
+@@ -5351,6 +5373,8 @@ static int perf_read_one(struct perf_event *event,
+ 		values[n++] = running;
+ 	if (read_format & PERF_FORMAT_ID)
+ 		values[n++] = primary_event_id(event);
++	if (read_format & PERF_FORMAT_LOST)
++		values[n++] = atomic64_read(&event->lost_samples);
+ 
+ 	if (copy_to_user(buf, values, n * sizeof(u64)))
+ 		return -EFAULT;
+@@ -6830,7 +6854,7 @@ static void perf_output_read_one(struct perf_output_handle *handle,
+ 				 u64 enabled, u64 running)
+ {
+ 	u64 read_format = event->attr.read_format;
+-	u64 values[4];
++	u64 values[5];
+ 	int n = 0;
+ 
+ 	values[n++] = perf_event_count(event);
+@@ -6844,6 +6868,8 @@ static void perf_output_read_one(struct perf_output_handle *handle,
+ 	}
+ 	if (read_format & PERF_FORMAT_ID)
+ 		values[n++] = primary_event_id(event);
++	if (read_format & PERF_FORMAT_LOST)
++		values[n++] = atomic64_read(&event->lost_samples);
+ 
+ 	__output_copy(handle, values, n * sizeof(u64));
+ }
+@@ -6854,7 +6880,7 @@ static void perf_output_read_group(struct perf_output_handle *handle,
+ {
+ 	struct perf_event *leader = event->group_leader, *sub;
+ 	u64 read_format = event->attr.read_format;
+-	u64 values[5];
++	u64 values[6];
+ 	int n = 0;
+ 
+ 	values[n++] = 1 + leader->nr_siblings;
+@@ -6872,6 +6898,8 @@ static void perf_output_read_group(struct perf_output_handle *handle,
+ 	values[n++] = perf_event_count(leader);
+ 	if (read_format & PERF_FORMAT_ID)
+ 		values[n++] = primary_event_id(leader);
++	if (read_format & PERF_FORMAT_LOST)
++		values[n++] = atomic64_read(&leader->lost_samples);
+ 
+ 	__output_copy(handle, values, n * sizeof(u64));
+ 
+@@ -6885,6 +6913,8 @@ static void perf_output_read_group(struct perf_output_handle *handle,
+ 		values[n++] = perf_event_count(sub);
+ 		if (read_format & PERF_FORMAT_ID)
+ 			values[n++] = primary_event_id(sub);
++		if (read_format & PERF_FORMAT_LOST)
++			values[n++] = atomic64_read(&sub->lost_samples);
+ 
+ 		__output_copy(handle, values, n * sizeof(u64));
+ 	}
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 01351e7e25435..ca27946fdaaf2 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -172,8 +172,10 @@ __perf_output_begin(struct perf_output_handle *handle,
+ 		goto out;
+ 
+ 	if (unlikely(rb->paused)) {
+-		if (rb->nr_pages)
++		if (rb->nr_pages) {
+ 			local_inc(&rb->lost);
++			atomic64_inc(&event->lost_samples);
++		}
+ 		goto out;
+ 	}
+ 
+@@ -254,6 +256,7 @@ __perf_output_begin(struct perf_output_handle *handle,
+ 
+ fail:
+ 	local_inc(&rb->lost);
++	atomic64_inc(&event->lost_samples);
+ 	perf_output_put_handle(handle);
+ out:
+ 	rcu_read_unlock();
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 70deb2f01e97a..ede09dda36e90 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2114,29 +2114,22 @@ static void migrate_hrtimer_list(struct hrtimer_clock_base *old_base,
+ 	}
+ }
+ 
+-int hrtimers_dead_cpu(unsigned int scpu)
++int hrtimers_cpu_dying(unsigned int dying_cpu)
+ {
+ 	struct hrtimer_cpu_base *old_base, *new_base;
+-	int i;
++	int i, ncpu = cpumask_first(cpu_active_mask);
+ 
+-	BUG_ON(cpu_online(scpu));
+-	tick_cancel_sched_timer(scpu);
++	tick_cancel_sched_timer(dying_cpu);
++
++	old_base = this_cpu_ptr(&hrtimer_bases);
++	new_base = &per_cpu(hrtimer_bases, ncpu);
+ 
+-	/*
+-	 * this BH disable ensures that raise_softirq_irqoff() does
+-	 * not wakeup ksoftirqd (and acquire the pi-lock) while
+-	 * holding the cpu_base lock
+-	 */
+-	local_bh_disable();
+-	local_irq_disable();
+-	old_base = &per_cpu(hrtimer_bases, scpu);
+-	new_base = this_cpu_ptr(&hrtimer_bases);
+ 	/*
+ 	 * The caller is globally serialized and nobody else
+ 	 * takes two locks at once, deadlock is not possible.
+ 	 */
+-	raw_spin_lock(&new_base->lock);
+-	raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
++	raw_spin_lock(&old_base->lock);
++	raw_spin_lock_nested(&new_base->lock, SINGLE_DEPTH_NESTING);
+ 
+ 	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
+ 		migrate_hrtimer_list(&old_base->clock_base[i],
+@@ -2147,15 +2140,13 @@ int hrtimers_dead_cpu(unsigned int scpu)
+ 	 * The migration might have changed the first expiring softirq
+ 	 * timer on this CPU. Update it.
+ 	 */
+-	hrtimer_update_softirq_timer(new_base, false);
++	__hrtimer_get_next_event(new_base, HRTIMER_ACTIVE_SOFT);
++	/* Tell the other CPU to retrigger the next event */
++	smp_call_function_single(ncpu, retrigger_next_event, NULL, 0);
+ 
+-	raw_spin_unlock(&old_base->lock);
+ 	raw_spin_unlock(&new_base->lock);
++	raw_spin_unlock(&old_base->lock);
+ 
+-	/* Check, if we got expired work to do */
+-	__hrtimer_peek_ahead_timers();
+-	local_irq_enable();
+-	local_bh_enable();
+ 	return 0;
+ }
+ 
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 0938222b45988..7e1148aafd284 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -2903,22 +2903,19 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
+ 			local_read(&bpage->write) & ~RB_WRITE_MASK;
+ 		unsigned long event_length = rb_event_length(event);
+ 
++		/*
++		 * For the before_stamp to be different than the write_stamp
++		 * to make sure that the next event adds an absolute
++		 * value and does not rely on the saved write stamp, which
++		 * is now going to be bogus.
++		 */
++		rb_time_set(&cpu_buffer->before_stamp, 0);
++
+ 		/* Something came in, can't discard */
+ 		if (!rb_time_cmpxchg(&cpu_buffer->write_stamp,
+ 				       write_stamp, write_stamp - delta))
+ 			return 0;
+ 
+-		/*
+-		 * It's possible that the event time delta is zero
+-		 * (has the same time stamp as the previous event)
+-		 * in which case write_stamp and before_stamp could
+-		 * be the same. In such a case, force before_stamp
+-		 * to be different than write_stamp. It doesn't
+-		 * matter what it is, as long as its different.
+-		 */
+-		if (!delta)
+-			rb_time_set(&cpu_buffer->before_stamp, 0);
+-
+ 		/*
+ 		 * If an event were to come in now, it would see that the
+ 		 * write_stamp and the before_stamp are different, and assume
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 89a8bb8e24df2..4e0411b19ef96 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2287,13 +2287,7 @@ int is_tracing_stopped(void)
+ 	return global_trace.stop_count;
+ }
+ 
+-/**
+- * tracing_start - quick start of the tracer
+- *
+- * If tracing is enabled but was stopped by tracing_stop,
+- * this will start the tracer back up.
+- */
+-void tracing_start(void)
++static void tracing_start_tr(struct trace_array *tr)
+ {
+ 	struct trace_buffer *buffer;
+ 	unsigned long flags;
+@@ -2301,119 +2295,83 @@ void tracing_start(void)
+ 	if (tracing_disabled)
+ 		return;
+ 
+-	raw_spin_lock_irqsave(&global_trace.start_lock, flags);
+-	if (--global_trace.stop_count) {
+-		if (global_trace.stop_count < 0) {
++	raw_spin_lock_irqsave(&tr->start_lock, flags);
++	if (--tr->stop_count) {
++		if (WARN_ON_ONCE(tr->stop_count < 0)) {
+ 			/* Someone screwed up their debugging */
+-			WARN_ON_ONCE(1);
+-			global_trace.stop_count = 0;
++			tr->stop_count = 0;
+ 		}
+ 		goto out;
+ 	}
+ 
+ 	/* Prevent the buffers from switching */
+-	arch_spin_lock(&global_trace.max_lock);
++	arch_spin_lock(&tr->max_lock);
+ 
+-	buffer = global_trace.array_buffer.buffer;
++	buffer = tr->array_buffer.buffer;
+ 	if (buffer)
+ 		ring_buffer_record_enable(buffer);
+ 
+ #ifdef CONFIG_TRACER_MAX_TRACE
+-	buffer = global_trace.max_buffer.buffer;
++	buffer = tr->max_buffer.buffer;
+ 	if (buffer)
+ 		ring_buffer_record_enable(buffer);
+ #endif
+ 
+-	arch_spin_unlock(&global_trace.max_lock);
+-
+- out:
+-	raw_spin_unlock_irqrestore(&global_trace.start_lock, flags);
+-}
+-
+-static void tracing_start_tr(struct trace_array *tr)
+-{
+-	struct trace_buffer *buffer;
+-	unsigned long flags;
+-
+-	if (tracing_disabled)
+-		return;
+-
+-	/* If global, we need to also start the max tracer */
+-	if (tr->flags & TRACE_ARRAY_FL_GLOBAL)
+-		return tracing_start();
+-
+-	raw_spin_lock_irqsave(&tr->start_lock, flags);
+-
+-	if (--tr->stop_count) {
+-		if (tr->stop_count < 0) {
+-			/* Someone screwed up their debugging */
+-			WARN_ON_ONCE(1);
+-			tr->stop_count = 0;
+-		}
+-		goto out;
+-	}
+-
+-	buffer = tr->array_buffer.buffer;
+-	if (buffer)
+-		ring_buffer_record_enable(buffer);
++	arch_spin_unlock(&tr->max_lock);
+ 
+  out:
+ 	raw_spin_unlock_irqrestore(&tr->start_lock, flags);
+ }
+ 
+ /**
+- * tracing_stop - quick stop of the tracer
++ * tracing_start - quick start of the tracer
+  *
+- * Light weight way to stop tracing. Use in conjunction with
+- * tracing_start.
++ * If tracing is enabled but was stopped by tracing_stop,
++ * this will start the tracer back up.
+  */
+-void tracing_stop(void)
++void tracing_start(void)
++
++{
++	return tracing_start_tr(&global_trace);
++}
++
++static void tracing_stop_tr(struct trace_array *tr)
+ {
+ 	struct trace_buffer *buffer;
+ 	unsigned long flags;
+ 
+-	raw_spin_lock_irqsave(&global_trace.start_lock, flags);
+-	if (global_trace.stop_count++)
++	raw_spin_lock_irqsave(&tr->start_lock, flags);
++	if (tr->stop_count++)
+ 		goto out;
+ 
+ 	/* Prevent the buffers from switching */
+-	arch_spin_lock(&global_trace.max_lock);
++	arch_spin_lock(&tr->max_lock);
+ 
+-	buffer = global_trace.array_buffer.buffer;
++	buffer = tr->array_buffer.buffer;
+ 	if (buffer)
+ 		ring_buffer_record_disable(buffer);
+ 
+ #ifdef CONFIG_TRACER_MAX_TRACE
+-	buffer = global_trace.max_buffer.buffer;
++	buffer = tr->max_buffer.buffer;
+ 	if (buffer)
+ 		ring_buffer_record_disable(buffer);
+ #endif
+ 
+-	arch_spin_unlock(&global_trace.max_lock);
++	arch_spin_unlock(&tr->max_lock);
+ 
+  out:
+-	raw_spin_unlock_irqrestore(&global_trace.start_lock, flags);
++	raw_spin_unlock_irqrestore(&tr->start_lock, flags);
+ }
+ 
+-static void tracing_stop_tr(struct trace_array *tr)
++/**
++ * tracing_stop - quick stop of the tracer
++ *
++ * Light weight way to stop tracing. Use in conjunction with
++ * tracing_start.
++ */
++void tracing_stop(void)
+ {
+-	struct trace_buffer *buffer;
+-	unsigned long flags;
+-
+-	/* If global, we need to also stop the max tracer */
+-	if (tr->flags & TRACE_ARRAY_FL_GLOBAL)
+-		return tracing_stop();
+-
+-	raw_spin_lock_irqsave(&tr->start_lock, flags);
+-	if (tr->stop_count++)
+-		goto out;
+-
+-	buffer = tr->array_buffer.buffer;
+-	if (buffer)
+-		ring_buffer_record_disable(buffer);
+-
+- out:
+-	raw_spin_unlock_irqrestore(&tr->start_lock, flags);
++	return tracing_stop_tr(&global_trace);
+ }
+ 
+ static int trace_save_cmdline(struct task_struct *tsk)
+@@ -2687,8 +2645,11 @@ void trace_buffered_event_enable(void)
+ 	for_each_tracing_cpu(cpu) {
+ 		page = alloc_pages_node(cpu_to_node(cpu),
+ 					GFP_KERNEL | __GFP_NORETRY, 0);
+-		if (!page)
+-			goto failed;
++		/* This is just an optimization and can handle failures */
++		if (!page) {
++			pr_err("Failed to allocate event buffer\n");
++			break;
++		}
+ 
+ 		event = page_address(page);
+ 		memset(event, 0, sizeof(*event));
+@@ -2702,10 +2663,6 @@ void trace_buffered_event_enable(void)
+ 			WARN_ON_ONCE(1);
+ 		preempt_enable();
+ 	}
+-
+-	return;
+- failed:
+-	trace_buffered_event_disable();
+ }
+ 
+ static void enable_trace_buffered_event(void *data)
+@@ -2740,11 +2697,9 @@ void trace_buffered_event_disable(void)
+ 	if (--trace_buffered_event_ref)
+ 		return;
+ 
+-	preempt_disable();
+ 	/* For each CPU, set the buffer as used. */
+-	smp_call_function_many(tracing_buffer_mask,
+-			       disable_trace_buffered_event, NULL, 1);
+-	preempt_enable();
++	on_each_cpu_mask(tracing_buffer_mask, disable_trace_buffered_event,
++			 NULL, true);
+ 
+ 	/* Wait for all current users to finish */
+ 	synchronize_rcu();
+@@ -2753,17 +2708,19 @@ void trace_buffered_event_disable(void)
+ 		free_page((unsigned long)per_cpu(trace_buffered_event, cpu));
+ 		per_cpu(trace_buffered_event, cpu) = NULL;
+ 	}
++
+ 	/*
+-	 * Make sure trace_buffered_event is NULL before clearing
+-	 * trace_buffered_event_cnt.
++	 * Wait for all CPUs that potentially started checking if they can use
++	 * their event buffer only after the previous synchronize_rcu() call and
++	 * they still read a valid pointer from trace_buffered_event. It must be
++	 * ensured they don't see cleared trace_buffered_event_cnt else they
++	 * could wrongly decide to use the pointed-to buffer which is now freed.
+ 	 */
+-	smp_wmb();
++	synchronize_rcu();
+ 
+-	preempt_disable();
+-	/* Do the work on each cpu */
+-	smp_call_function_many(tracing_buffer_mask,
+-			       enable_trace_buffered_event, NULL, 1);
+-	preempt_enable();
++	/* For each CPU, relinquish the buffer */
++	on_each_cpu_mask(tracing_buffer_mask, enable_trace_buffered_event, NULL,
++			 true);
+ }
+ 
+ static struct trace_buffer *temp_buffer;
+@@ -5895,6 +5852,15 @@ static void set_buffer_entries(struct array_buffer *buf, unsigned long val)
+ 		per_cpu_ptr(buf->data, cpu)->entries = val;
+ }
+ 
++static void update_buffer_entries(struct array_buffer *buf, int cpu)
++{
++	if (cpu == RING_BUFFER_ALL_CPUS) {
++		set_buffer_entries(buf, ring_buffer_size(buf->buffer, 0));
++	} else {
++		per_cpu_ptr(buf->data, cpu)->entries = ring_buffer_size(buf->buffer, cpu);
++	}
++}
++
+ #ifdef CONFIG_TRACER_MAX_TRACE
+ /* resize @tr's buffer to the size of @size_tr's entries */
+ static int resize_buffer_duplicate_size(struct array_buffer *trace_buf,
+@@ -5939,13 +5905,15 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
+ 	if (!tr->array_buffer.buffer)
+ 		return 0;
+ 
++	/* Do not allow tracing while resizng ring buffer */
++	tracing_stop_tr(tr);
++
+ 	ret = ring_buffer_resize(tr->array_buffer.buffer, size, cpu);
+ 	if (ret < 0)
+-		return ret;
++		goto out_start;
+ 
+ #ifdef CONFIG_TRACER_MAX_TRACE
+-	if (!(tr->flags & TRACE_ARRAY_FL_GLOBAL) ||
+-	    !tr->current_trace->use_max_tr)
++	if (!tr->current_trace->use_max_tr)
+ 		goto out;
+ 
+ 	ret = ring_buffer_resize(tr->max_buffer.buffer, size, cpu);
+@@ -5970,22 +5938,17 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
+ 			WARN_ON(1);
+ 			tracing_disabled = 1;
+ 		}
+-		return ret;
++		goto out_start;
+ 	}
+ 
+-	if (cpu == RING_BUFFER_ALL_CPUS)
+-		set_buffer_entries(&tr->max_buffer, size);
+-	else
+-		per_cpu_ptr(tr->max_buffer.data, cpu)->entries = size;
++	update_buffer_entries(&tr->max_buffer, cpu);
+ 
+  out:
+ #endif /* CONFIG_TRACER_MAX_TRACE */
+ 
+-	if (cpu == RING_BUFFER_ALL_CPUS)
+-		set_buffer_entries(&tr->array_buffer, size);
+-	else
+-		per_cpu_ptr(tr->array_buffer.data, cpu)->entries = size;
+-
++	update_buffer_entries(&tr->array_buffer, cpu);
++ out_start:
++	tracing_start_tr(tr);
+ 	return ret;
+ }
+ 
+diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
+index ed9dd17f9348c..7742ee689141f 100644
+--- a/net/core/drop_monitor.c
++++ b/net/core/drop_monitor.c
+@@ -183,7 +183,7 @@ out:
+ }
+ 
+ static const struct genl_multicast_group dropmon_mcgrps[] = {
+-	{ .name = "events", },
++	{ .name = "events", .cap_sys_admin = 1 },
+ };
+ 
+ static void send_dm_alert(struct work_struct *work)
+@@ -1616,11 +1616,13 @@ static const struct genl_small_ops dropmon_ops[] = {
+ 		.cmd = NET_DM_CMD_START,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ 		.doit = net_dm_cmd_trace,
++		.flags = GENL_ADMIN_PERM,
+ 	},
+ 	{
+ 		.cmd = NET_DM_CMD_STOP,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ 		.doit = net_dm_cmd_trace,
++		.flags = GENL_ADMIN_PERM,
+ 	},
+ 	{
+ 		.cmd = NET_DM_CMD_CONFIG_GET,
+diff --git a/net/core/filter.c b/net/core/filter.c
+index ea8ab9c704832..6cfc8fb0562a2 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2595,6 +2595,22 @@ BPF_CALL_2(bpf_msg_cork_bytes, struct sk_msg *, msg, u32, bytes)
+ 	return 0;
+ }
+ 
++static void sk_msg_reset_curr(struct sk_msg *msg)
++{
++	u32 i = msg->sg.start;
++	u32 len = 0;
++
++	do {
++		len += sk_msg_elem(msg, i)->length;
++		sk_msg_iter_var_next(i);
++		if (len >= msg->sg.size)
++			break;
++	} while (i != msg->sg.end);
++
++	msg->sg.curr = i;
++	msg->sg.copybreak = 0;
++}
++
+ static const struct bpf_func_proto bpf_msg_cork_bytes_proto = {
+ 	.func           = bpf_msg_cork_bytes,
+ 	.gpl_only       = false,
+@@ -2714,6 +2730,7 @@ BPF_CALL_4(bpf_msg_pull_data, struct sk_msg *, msg, u32, start,
+ 		      msg->sg.end - shift + NR_MSG_FRAG_IDS :
+ 		      msg->sg.end - shift;
+ out:
++	sk_msg_reset_curr(msg);
+ 	msg->data = sg_virt(&msg->sg.data[first_sge]) + start - offset;
+ 	msg->data_end = msg->data + bytes;
+ 	return 0;
+@@ -2850,6 +2867,7 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+ 		msg->sg.data[new] = rsge;
+ 	}
+ 
++	sk_msg_reset_curr(msg);
+ 	sk_msg_compute_data_pointers(msg);
+ 	return 0;
+ }
+@@ -3018,6 +3036,7 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ 
+ 	sk_mem_uncharge(msg->sk, len - pop);
+ 	msg->sg.size -= (len - pop);
++	sk_msg_reset_curr(msg);
+ 	sk_msg_compute_data_pointers(msg);
+ 	return 0;
+ }
+diff --git a/net/core/scm.c b/net/core/scm.c
+index 8156d4fb8a396..3c7f160720d34 100644
+--- a/net/core/scm.c
++++ b/net/core/scm.c
+@@ -26,6 +26,7 @@
+ #include <linux/nsproxy.h>
+ #include <linux/slab.h>
+ #include <linux/errqueue.h>
++#include <linux/io_uring.h>
+ 
+ #include <linux/uaccess.h>
+ 
+@@ -103,6 +104,11 @@ static int scm_fp_copy(struct cmsghdr *cmsg, struct scm_fp_list **fplp)
+ 
+ 		if (fd < 0 || !(file = fget_raw(fd)))
+ 			return -EBADF;
++		/* don't allow io_uring files */
++		if (io_uring_get_socket(file)) {
++			fput(file);
++			return -EINVAL;
++		}
+ 		*fpp++ = file;
+ 		fpl->count++;
+ 	}
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 9d1a506571043..a6ad0fe1387c0 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -629,15 +629,18 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
+ 	}
+ 
+ 	if (dev->header_ops) {
++		int pull_len = tunnel->hlen + sizeof(struct iphdr);
++
+ 		if (skb_cow_head(skb, 0))
+ 			goto free_skb;
+ 
+ 		tnl_params = (const struct iphdr *)skb->data;
+ 
+-		/* Pull skb since ip_tunnel_xmit() needs skb->data pointing
+-		 * to gre header.
+-		 */
+-		skb_pull(skb, tunnel->hlen + sizeof(struct iphdr));
++		if (!pskb_network_may_pull(skb, pull_len))
++			goto free_skb;
++
++		/* ip_tunnel_xmit() needs skb->data pointing to gre header. */
++		skb_pull(skb, pull_len);
+ 		skb_reset_mac_header(skb);
+ 
+ 		if (skb->ip_summed == CHECKSUM_PARTIAL &&
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index a8948c76d19b6..0f9fe5edad142 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -3772,8 +3772,12 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
+ 	 * then we can probably ignore it.
+ 	 */
+ 	if (before(ack, prior_snd_una)) {
++		u32 max_window;
++
++		/* do not accept ACK for bytes we never sent. */
++		max_window = min_t(u64, tp->max_window, tp->bytes_acked);
+ 		/* RFC 5961 5.2 [Blind Data Injection Attack].[Mitigation] */
+-		if (before(ack, prior_snd_una - tp->max_window)) {
++		if (before(ack, prior_snd_una - max_window)) {
+ 			if (!(flag & FLAG_NO_CHALLENGE_ACK))
+ 				tcp_send_challenge_ack(sk, skb);
+ 			return -1;
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index c783b91231321..608205c632c8c 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -1499,13 +1499,9 @@ out:
+ 			if (!pn_leaf && !(pn->fn_flags & RTN_RTINFO)) {
+ 				pn_leaf = fib6_find_prefix(info->nl_net, table,
+ 							   pn);
+-#if RT6_DEBUG >= 2
+-				if (!pn_leaf) {
+-					WARN_ON(!pn_leaf);
++				if (!pn_leaf)
+ 					pn_leaf =
+ 					    info->nl_net->ipv6.fib6_null_entry;
+-				}
+-#endif
+ 				fib6_info_hold(pn_leaf);
+ 				rcu_assign_pointer(pn->leaf, pn_leaf);
+ 			}
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index 26613e3731d02..24f81826ed4a5 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -61,6 +61,8 @@ MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_IPSET);
+ 	ip_set_dereference((inst)->ip_set_list)[id]
+ #define ip_set_ref_netlink(inst,id)	\
+ 	rcu_dereference_raw((inst)->ip_set_list)[id]
++#define ip_set_dereference_nfnl(p)	\
++	rcu_dereference_check(p, lockdep_nfnl_is_held(NFNL_SUBSYS_IPSET))
+ 
+ /* The set types are implemented in modules and registered set types
+  * can be found in ip_set_type_list. Adding/deleting types is
+@@ -708,15 +710,10 @@ __ip_set_put_netlink(struct ip_set *set)
+ static struct ip_set *
+ ip_set_rcu_get(struct net *net, ip_set_id_t index)
+ {
+-	struct ip_set *set;
+ 	struct ip_set_net *inst = ip_set_pernet(net);
+ 
+-	rcu_read_lock();
+-	/* ip_set_list itself needs to be protected */
+-	set = rcu_dereference(inst->ip_set_list)[index];
+-	rcu_read_unlock();
+-
+-	return set;
++	/* ip_set_list and the set pointer need to be protected */
++	return ip_set_dereference_nfnl(inst->ip_set_list)[index];
+ }
+ 
+ static inline void
+@@ -1407,6 +1404,9 @@ static int ip_set_swap(struct net *net, struct sock *ctnl, struct sk_buff *skb,
+ 	ip_set(inst, to_id) = from;
+ 	write_unlock_bh(&ip_set_ref_lock);
+ 
++	/* Make sure all readers of the old set pointers are completed. */
++	synchronize_rcu();
++
+ 	return 0;
+ }
+ 
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index fbfcc3275cadf..bc30bd121ff2f 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -2028,6 +2028,9 @@ static void nft_pipapo_walk(const struct nft_ctx *ctx, struct nft_set *set,
+ 
+ 		e = f->mt[r].e;
+ 
++		if (!nft_set_elem_active(&e->ext, iter->genmask))
++			goto cont;
++
+ 		elem.priv = e;
+ 
+ 		iter->err = iter->fn(ctx, set, iter, &elem);
+diff --git a/net/netfilter/xt_owner.c b/net/netfilter/xt_owner.c
+index e85ce69924aee..50332888c8d23 100644
+--- a/net/netfilter/xt_owner.c
++++ b/net/netfilter/xt_owner.c
+@@ -76,18 +76,23 @@ owner_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ 		 */
+ 		return false;
+ 
+-	filp = sk->sk_socket->file;
+-	if (filp == NULL)
++	read_lock_bh(&sk->sk_callback_lock);
++	filp = sk->sk_socket ? sk->sk_socket->file : NULL;
++	if (filp == NULL) {
++		read_unlock_bh(&sk->sk_callback_lock);
+ 		return ((info->match ^ info->invert) &
+ 		       (XT_OWNER_UID | XT_OWNER_GID)) == 0;
++	}
+ 
+ 	if (info->match & XT_OWNER_UID) {
+ 		kuid_t uid_min = make_kuid(net->user_ns, info->uid_min);
+ 		kuid_t uid_max = make_kuid(net->user_ns, info->uid_max);
+ 		if ((uid_gte(filp->f_cred->fsuid, uid_min) &&
+ 		     uid_lte(filp->f_cred->fsuid, uid_max)) ^
+-		    !(info->invert & XT_OWNER_UID))
++		    !(info->invert & XT_OWNER_UID)) {
++			read_unlock_bh(&sk->sk_callback_lock);
+ 			return false;
++		}
+ 	}
+ 
+ 	if (info->match & XT_OWNER_GID) {
+@@ -112,10 +117,13 @@ owner_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ 			}
+ 		}
+ 
+-		if (match ^ !(info->invert & XT_OWNER_GID))
++		if (match ^ !(info->invert & XT_OWNER_GID)) {
++			read_unlock_bh(&sk->sk_callback_lock);
+ 			return false;
++		}
+ 	}
+ 
++	read_unlock_bh(&sk->sk_callback_lock);
+ 	return true;
+ }
+ 
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 9737c3229c12a..901358a5b5931 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1021,7 +1021,6 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr,
+ 			return -EINVAL;
+ 	}
+ 
+-	netlink_lock_table();
+ 	if (nlk->netlink_bind && groups) {
+ 		int group;
+ 
+@@ -1033,13 +1032,14 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr,
+ 			if (!err)
+ 				continue;
+ 			netlink_undo_bind(group, groups, sk);
+-			goto unlock;
++			return err;
+ 		}
+ 	}
+ 
+ 	/* No need for barriers here as we return to user-space without
+ 	 * using any of the bound attributes.
+ 	 */
++	netlink_lock_table();
+ 	if (!bound) {
+ 		err = nladdr->nl_pid ?
+ 			netlink_insert(sk, nladdr->nl_pid) :
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 9fd7ba01b9f8b..e9035de655467 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -1364,11 +1364,46 @@ static struct genl_family genl_ctrl __ro_after_init = {
+ 	.netnsok = true,
+ };
+ 
++static int genl_bind(struct net *net, int group)
++{
++	const struct genl_family *family;
++	unsigned int id;
++	int ret = 0;
++
++	genl_lock_all();
++
++	idr_for_each_entry(&genl_fam_idr, family, id) {
++		const struct genl_multicast_group *grp;
++		int i;
++
++		if (family->n_mcgrps == 0)
++			continue;
++
++		i = group - family->mcgrp_offset;
++		if (i < 0 || i >= family->n_mcgrps)
++			continue;
++
++		grp = &family->mcgrps[i];
++		if ((grp->flags & GENL_UNS_ADMIN_PERM) &&
++		    !ns_capable(net->user_ns, CAP_NET_ADMIN))
++			ret = -EPERM;
++		if (grp->cap_sys_admin &&
++		    !ns_capable(net->user_ns, CAP_SYS_ADMIN))
++			ret = -EPERM;
++
++		break;
++	}
++
++	genl_unlock_all();
++	return ret;
++}
++
+ static int __net_init genl_pernet_init(struct net *net)
+ {
+ 	struct netlink_kernel_cfg cfg = {
+ 		.input		= genl_rcv,
+ 		.flags		= NL_CFG_F_NONROOT_RECV,
++		.bind		= genl_bind,
+ 	};
+ 
+ 	/* we'll bump the group number right afterwards */
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index bbdb32acac324..b292d58fdcc4c 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4248,7 +4248,7 @@ static void packet_mm_open(struct vm_area_struct *vma)
+ 	struct sock *sk = sock->sk;
+ 
+ 	if (sk)
+-		atomic_inc(&pkt_sk(sk)->mapped);
++		atomic_long_inc(&pkt_sk(sk)->mapped);
+ }
+ 
+ static void packet_mm_close(struct vm_area_struct *vma)
+@@ -4258,7 +4258,7 @@ static void packet_mm_close(struct vm_area_struct *vma)
+ 	struct sock *sk = sock->sk;
+ 
+ 	if (sk)
+-		atomic_dec(&pkt_sk(sk)->mapped);
++		atomic_long_dec(&pkt_sk(sk)->mapped);
+ }
+ 
+ static const struct vm_operations_struct packet_mmap_ops = {
+@@ -4353,7 +4353,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 
+ 	err = -EBUSY;
+ 	if (!closing) {
+-		if (atomic_read(&po->mapped))
++		if (atomic_long_read(&po->mapped))
+ 			goto out;
+ 		if (packet_read_pending(rb))
+ 			goto out;
+@@ -4456,7 +4456,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 
+ 	err = -EBUSY;
+ 	mutex_lock(&po->pg_vec_lock);
+-	if (closing || atomic_read(&po->mapped) == 0) {
++	if (closing || atomic_long_read(&po->mapped) == 0) {
+ 		err = 0;
+ 		spin_lock_bh(&rb_queue->lock);
+ 		swap(rb->pg_vec, pg_vec);
+@@ -4474,9 +4474,9 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 		po->prot_hook.func = (po->rx_ring.pg_vec) ?
+ 						tpacket_rcv : packet_rcv;
+ 		skb_queue_purge(rb_queue);
+-		if (atomic_read(&po->mapped))
+-			pr_err("packet_mmap: vma is busy: %d\n",
+-			       atomic_read(&po->mapped));
++		if (atomic_long_read(&po->mapped))
++			pr_err("packet_mmap: vma is busy: %ld\n",
++			       atomic_long_read(&po->mapped));
+ 	}
+ 	mutex_unlock(&po->pg_vec_lock);
+ 
+@@ -4554,7 +4554,7 @@ static int packet_mmap(struct file *file, struct socket *sock,
+ 		}
+ 	}
+ 
+-	atomic_inc(&po->mapped);
++	atomic_long_inc(&po->mapped);
+ 	vma->vm_ops = &packet_mmap_ops;
+ 	err = 0;
+ 
+diff --git a/net/packet/internal.h b/net/packet/internal.h
+index 3938cb413d5d3..69e58cb744929 100644
+--- a/net/packet/internal.h
++++ b/net/packet/internal.h
+@@ -126,7 +126,7 @@ struct packet_sock {
+ 	__be16			num;
+ 	struct packet_rollover	*rollover;
+ 	struct packet_mclist	*mclist;
+-	atomic_t		mapped;
++	atomic_long_t		mapped;
+ 	enum tpacket_versions	tp_version;
+ 	unsigned int		tp_hdrlen;
+ 	unsigned int		tp_reserve;
+diff --git a/net/psample/psample.c b/net/psample/psample.c
+index 482c07f2766b1..5913da77c71d2 100644
+--- a/net/psample/psample.c
++++ b/net/psample/psample.c
+@@ -30,7 +30,8 @@ enum psample_nl_multicast_groups {
+ 
+ static const struct genl_multicast_group psample_nl_mcgrps[] = {
+ 	[PSAMPLE_NL_MCGRP_CONFIG] = { .name = PSAMPLE_NL_MCGRP_CONFIG_NAME },
+-	[PSAMPLE_NL_MCGRP_SAMPLE] = { .name = PSAMPLE_NL_MCGRP_SAMPLE_NAME },
++	[PSAMPLE_NL_MCGRP_SAMPLE] = { .name = PSAMPLE_NL_MCGRP_SAMPLE_NAME,
++				      .flags = GENL_UNS_ADMIN_PERM },
+ };
+ 
+ static struct genl_family psample_nl_family __ro_after_init;
+diff --git a/scripts/checkstack.pl b/scripts/checkstack.pl
+index d2c38584ece6f..758884b61f923 100755
+--- a/scripts/checkstack.pl
++++ b/scripts/checkstack.pl
+@@ -142,15 +142,11 @@ $total_size = 0;
+ while (my $line = <STDIN>) {
+ 	if ($line =~ m/$funcre/) {
+ 		$func = $1;
+-		next if $line !~ m/^($xs*)/;
++		next if $line !~ m/^($x*)/;
+ 		if ($total_size > $min_stack) {
+ 			push @stack, "$intro$total_size\n";
+ 		}
+-
+-		$addr = $1;
+-		$addr =~ s/ /0/g;
+-		$addr = "0x$addr";
+-
++		$addr = "0x$1";
+ 		$intro = "$addr $func [$file]:";
+ 		my $padlen = 56 - length($intro);
+ 		while ($padlen > 0) {
+diff --git a/scripts/kconfig/symbol.c b/scripts/kconfig/symbol.c
+index ffa3ec65cc907..a2056fa80de2b 100644
+--- a/scripts/kconfig/symbol.c
++++ b/scripts/kconfig/symbol.c
+@@ -123,9 +123,9 @@ static long long sym_get_range_val(struct symbol *sym, int base)
+ static void sym_validate_range(struct symbol *sym)
+ {
+ 	struct property *prop;
++	struct symbol *range_sym;
+ 	int base;
+ 	long long val, val2;
+-	char str[64];
+ 
+ 	switch (sym->type) {
+ 	case S_INT:
+@@ -141,17 +141,15 @@ static void sym_validate_range(struct symbol *sym)
+ 	if (!prop)
+ 		return;
+ 	val = strtoll(sym->curr.val, NULL, base);
+-	val2 = sym_get_range_val(prop->expr->left.sym, base);
++	range_sym = prop->expr->left.sym;
++	val2 = sym_get_range_val(range_sym, base);
+ 	if (val >= val2) {
+-		val2 = sym_get_range_val(prop->expr->right.sym, base);
++		range_sym = prop->expr->right.sym;
++		val2 = sym_get_range_val(range_sym, base);
+ 		if (val <= val2)
+ 			return;
+ 	}
+-	if (sym->type == S_INT)
+-		sprintf(str, "%lld", val2);
+-	else
+-		sprintf(str, "0x%llx", val2);
+-	sym->curr.val = xstrdup(str);
++	sym->curr.val = range_sym->curr.val;
+ }
+ 
+ static void sym_set_changed(struct symbol *sym)
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index 59d222446d777..bec81f6ba6ba5 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -251,6 +251,7 @@ static const char * const snd_pcm_state_names[] = {
+ 	STATE(DRAINING),
+ 	STATE(PAUSED),
+ 	STATE(SUSPENDED),
++	STATE(DISCONNECTED),
+ };
+ 
+ static const char * const snd_pcm_access_names[] = {
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 3d94a2f41d820..d9140630cf3d9 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -11227,6 +11227,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x32f7, "Lenovo ThinkCentre M90", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x3321, "Lenovo ThinkCentre M70 Gen4", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x331b, "Lenovo ThinkCentre M90 Gen4", ALC897_FIXUP_HEADSET_MIC_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x3364, "Lenovo ThinkCentre M90 Gen5", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x3742, "Lenovo TianYi510Pro-14IOB", ALC897_FIXUP_HEADSET_MIC_PIN2),
+ 	SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo Ideapad Y550P", ALC662_FIXUP_IDEAPAD),
+ 	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Ideapad Y550", ALC662_FIXUP_IDEAPAD),
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 10189f44af28f..ef7936fce7a23 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -3763,12 +3763,12 @@ static int wm_adsp_buffer_populate(struct wm_adsp_compr_buf *buf)
+ 		ret = wm_adsp_buffer_read(buf, caps->region_defs[i].base_offset,
+ 					  &region->base_addr);
+ 		if (ret < 0)
+-			return ret;
++			goto err;
+ 
+ 		ret = wm_adsp_buffer_read(buf, caps->region_defs[i].size_offset,
+ 					  &offset);
+ 		if (ret < 0)
+-			return ret;
++			goto err;
+ 
+ 		region->cumulative_size = offset;
+ 
+@@ -3779,6 +3779,10 @@ static int wm_adsp_buffer_populate(struct wm_adsp_compr_buf *buf)
+ 	}
+ 
+ 	return 0;
++
++err:
++	kfree(buf->regions);
++	return ret;
+ }
+ 
+ static void wm_adsp_buffer_clear(struct wm_adsp_compr_buf *buf)
+diff --git a/tools/include/uapi/linux/perf_event.h b/tools/include/uapi/linux/perf_event.h
+index b95d3c485d27e..6ca63ab6bee5d 100644
+--- a/tools/include/uapi/linux/perf_event.h
++++ b/tools/include/uapi/linux/perf_event.h
+@@ -279,6 +279,7 @@ enum {
+  *	  { u64		time_enabled; } && PERF_FORMAT_TOTAL_TIME_ENABLED
+  *	  { u64		time_running; } && PERF_FORMAT_TOTAL_TIME_RUNNING
+  *	  { u64		id;           } && PERF_FORMAT_ID
++ *	  { u64		lost;         } && PERF_FORMAT_LOST
+  *	} && !PERF_FORMAT_GROUP
+  *
+  *	{ u64		nr;
+@@ -286,6 +287,7 @@ enum {
+  *	  { u64		time_running; } && PERF_FORMAT_TOTAL_TIME_RUNNING
+  *	  { u64		value;
+  *	    { u64	id;           } && PERF_FORMAT_ID
++ *	    { u64	lost;         } && PERF_FORMAT_LOST
+  *	  }		cntr[nr];
+  *	} && PERF_FORMAT_GROUP
+  * };
+@@ -295,8 +297,9 @@ enum perf_event_read_format {
+ 	PERF_FORMAT_TOTAL_TIME_RUNNING		= 1U << 1,
+ 	PERF_FORMAT_ID				= 1U << 2,
+ 	PERF_FORMAT_GROUP			= 1U << 3,
++	PERF_FORMAT_LOST			= 1U << 4,
+ 
+-	PERF_FORMAT_MAX = 1U << 4,		/* non-ABI */
++	PERF_FORMAT_MAX = 1U << 5,		/* non-ABI */
+ };
+ 
+ #define PERF_ATTR_SIZE_VER0	64	/* sizeof first published struct */


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2023-12-20 15:21 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2023-12-20 15:21 UTC (permalink / raw
  To: gentoo-commits

commit:     0497334717dee4962ffa897db229380db63d4af4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 20 15:20:43 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec 20 15:20:43 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=04973347

Linux patch 5.10.205

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1204_linux-5.10.205.patch | 2021 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2025 insertions(+)

diff --git a/0000_README b/0000_README
index 0ee398ed..161db05e 100644
--- a/0000_README
+++ b/0000_README
@@ -859,6 +859,10 @@ Patch:  1203_linux-5.10.204.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.204
 
+Patch:  1204_linux-5.10.205.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.205
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1204_linux-5.10.205.patch b/1204_linux-5.10.205.patch
new file mode 100644
index 00000000..84b67075
--- /dev/null
+++ b/1204_linux-5.10.205.patch
@@ -0,0 +1,2021 @@
+diff --git a/Makefile b/Makefile
+index 87886018d774c..5ba86be42e7c9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 204
++SUBLEVEL = 205
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index 4eedfd784cf63..7756a365d4c49 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -744,6 +744,12 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+ 	if (pte_hw_dirty(pte))
+ 		pte = pte_mkdirty(pte);
+ 	pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
++	/*
++	 * If we end up clearing hw dirtiness for a sw-dirty PTE, set hardware
++	 * dirtiness again.
++	 */
++	if (pte_sw_dirty(pte))
++		pte = pte_mkdirty(pte);
+ 	return pte;
+ }
+ 
+diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+index f9fd5f743eba3..0bc39ff532331 100644
+--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
++++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+@@ -36,6 +36,9 @@ _GLOBAL(ftrace_regs_caller)
+ 	/* Save the original return address in A's stack frame */
+ 	std	r0,LRSAVE(r1)
+ 
++	/* Create a minimal stack frame for representing B */
++	stdu	r1, -STACK_FRAME_MIN_SIZE(r1)
++
+ 	/* Create our stack frame + pt_regs */
+ 	stdu	r1,-SWITCH_FRAME_SIZE(r1)
+ 
+@@ -52,7 +55,7 @@ _GLOBAL(ftrace_regs_caller)
+ 	SAVE_10GPRS(22, r1)
+ 
+ 	/* Save previous stack pointer (r1) */
+-	addi	r8, r1, SWITCH_FRAME_SIZE
++	addi	r8, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE
+ 	std	r8, GPR1(r1)
+ 
+ 	/* Load special regs for save below */
+@@ -65,6 +68,8 @@ _GLOBAL(ftrace_regs_caller)
+ 	mflr	r7
+ 	/* Save it as pt_regs->nip */
+ 	std     r7, _NIP(r1)
++	/* Also save it in B's stackframe header for proper unwind */
++	std	r7, LRSAVE+SWITCH_FRAME_SIZE(r1)
+ 	/* Save the read LR in pt_regs->link */
+ 	std     r0, _LINK(r1)
+ 
+@@ -121,7 +126,7 @@ ftrace_regs_call:
+ 	ld	r2, 24(r1)
+ 
+ 	/* Pop our stack frame */
+-	addi r1, r1, SWITCH_FRAME_SIZE
++	addi r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE
+ 
+ #ifdef CONFIG_LIVEPATCH
+         /* Based on the cmpd above, if the NIP was altered handle livepatch */
+@@ -145,7 +150,7 @@ ftrace_no_trace:
+ 	mflr	r3
+ 	mtctr	r3
+ 	REST_GPR(3, r1)
+-	addi	r1, r1, SWITCH_FRAME_SIZE
++	addi	r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE
+ 	mtlr	r0
+ 	bctr
+ 
+@@ -153,6 +158,9 @@ _GLOBAL(ftrace_caller)
+ 	/* Save the original return address in A's stack frame */
+ 	std	r0, LRSAVE(r1)
+ 
++	/* Create a minimal stack frame for representing B */
++	stdu	r1, -STACK_FRAME_MIN_SIZE(r1)
++
+ 	/* Create our stack frame + pt_regs */
+ 	stdu	r1, -SWITCH_FRAME_SIZE(r1)
+ 
+@@ -166,6 +174,7 @@ _GLOBAL(ftrace_caller)
+ 	/* Get the _mcount() call site out of LR */
+ 	mflr	r7
+ 	std     r7, _NIP(r1)
++	std	r7, LRSAVE+SWITCH_FRAME_SIZE(r1)
+ 
+ 	/* Save callee's TOC in the ABI compliant location */
+ 	std	r2, 24(r1)
+@@ -200,7 +209,7 @@ ftrace_call:
+ 	ld	r2, 24(r1)
+ 
+ 	/* Pop our stack frame */
+-	addi	r1, r1, SWITCH_FRAME_SIZE
++	addi	r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE
+ 
+ 	/* Reload original LR */
+ 	ld	r0, LRSAVE(r1)
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index c526fdd0a7b90..4bf514a7bd82c 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -1409,6 +1409,7 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
+ 		   tg_bps_limit(tg, READ), tg_bps_limit(tg, WRITE),
+ 		   tg_iops_limit(tg, READ), tg_iops_limit(tg, WRITE));
+ 
++	rcu_read_lock();
+ 	/*
+ 	 * Update has_rules[] flags for the updated tg's subtree.  A tg is
+ 	 * considered to have rules if either the tg itself or any of its
+@@ -1436,6 +1437,7 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
+ 		this_tg->latency_target = max(this_tg->latency_target,
+ 				parent_tg->latency_target);
+ 	}
++	rcu_read_unlock();
+ 
+ 	/*
+ 	 * We're already holding queue_lock and know @tg is valid.  Let's
+diff --git a/drivers/atm/solos-pci.c b/drivers/atm/solos-pci.c
+index 94fbc3abe60e6..d3c30a28c410e 100644
+--- a/drivers/atm/solos-pci.c
++++ b/drivers/atm/solos-pci.c
+@@ -449,9 +449,9 @@ static ssize_t console_show(struct device *dev, struct device_attribute *attr,
+ 	struct sk_buff *skb;
+ 	unsigned int len;
+ 
+-	spin_lock(&card->cli_queue_lock);
++	spin_lock_bh(&card->cli_queue_lock);
+ 	skb = skb_dequeue(&card->cli_queue[SOLOS_CHAN(atmdev)]);
+-	spin_unlock(&card->cli_queue_lock);
++	spin_unlock_bh(&card->cli_queue_lock);
+ 	if(skb == NULL)
+ 		return sprintf(buf, "No data.\n");
+ 
+@@ -956,14 +956,14 @@ static void pclose(struct atm_vcc *vcc)
+ 	struct pkt_hdr *header;
+ 
+ 	/* Remove any yet-to-be-transmitted packets from the pending queue */
+-	spin_lock(&card->tx_queue_lock);
++	spin_lock_bh(&card->tx_queue_lock);
+ 	skb_queue_walk_safe(&card->tx_queue[port], skb, tmpskb) {
+ 		if (SKB_CB(skb)->vcc == vcc) {
+ 			skb_unlink(skb, &card->tx_queue[port]);
+ 			solos_pop(vcc, skb);
+ 		}
+ 	}
+-	spin_unlock(&card->tx_queue_lock);
++	spin_unlock_bh(&card->tx_queue_lock);
+ 
+ 	skb = alloc_skb(sizeof(*header), GFP_KERNEL);
+ 	if (!skb) {
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index cc3cb5b63d444..1eaf513166a1a 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -582,6 +582,7 @@ static void mtk_drm_crtc_atomic_begin(struct drm_crtc *crtc,
+ {
+ 	struct mtk_crtc_state *state = to_mtk_crtc_state(crtc->state);
+ 	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
++	unsigned long flags;
+ 
+ 	if (mtk_crtc->event && state->base.event)
+ 		DRM_ERROR("new event while there is still a pending event\n");
+@@ -589,7 +590,11 @@ static void mtk_drm_crtc_atomic_begin(struct drm_crtc *crtc,
+ 	if (state->base.event) {
+ 		state->base.event->pipe = drm_crtc_index(crtc);
+ 		WARN_ON(drm_crtc_vblank_get(crtc) != 0);
++
++		spin_lock_irqsave(&crtc->dev->event_lock, flags);
+ 		mtk_crtc->event = state->base.event;
++		spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
++
+ 		state->base.event = NULL;
+ 	}
+ }
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index 6865cab33cf8a..b46fb92d28e58 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -336,7 +336,7 @@ static int asus_raw_event(struct hid_device *hdev,
+ 	return 0;
+ }
+ 
+-static int asus_kbd_set_report(struct hid_device *hdev, u8 *buf, size_t buf_size)
++static int asus_kbd_set_report(struct hid_device *hdev, const u8 *buf, size_t buf_size)
+ {
+ 	unsigned char *dmabuf;
+ 	int ret;
+@@ -355,7 +355,7 @@ static int asus_kbd_set_report(struct hid_device *hdev, u8 *buf, size_t buf_size
+ 
+ static int asus_kbd_init(struct hid_device *hdev)
+ {
+-	u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x41, 0x53, 0x55, 0x53, 0x20, 0x54,
++	const u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x41, 0x53, 0x55, 0x53, 0x20, 0x54,
+ 		     0x65, 0x63, 0x68, 0x2e, 0x49, 0x6e, 0x63, 0x2e, 0x00 };
+ 	int ret;
+ 
+@@ -369,7 +369,7 @@ static int asus_kbd_init(struct hid_device *hdev)
+ static int asus_kbd_get_functions(struct hid_device *hdev,
+ 				  unsigned char *kbd_func)
+ {
+-	u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x05, 0x20, 0x31, 0x00, 0x08 };
++	const u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x05, 0x20, 0x31, 0x00, 0x08 };
+ 	u8 *readbuf;
+ 	int ret;
+ 
+@@ -901,6 +901,24 @@ static int asus_start_multitouch(struct hid_device *hdev)
+ 	return 0;
+ }
+ 
++static int __maybe_unused asus_resume(struct hid_device *hdev) {
++	struct asus_drvdata *drvdata = hid_get_drvdata(hdev);
++	int ret = 0;
++
++	if (drvdata->kbd_backlight) {
++		const u8 buf[] = { FEATURE_KBD_REPORT_ID, 0xba, 0xc5, 0xc4,
++				drvdata->kbd_backlight->cdev.brightness };
++		ret = asus_kbd_set_report(hdev, buf, sizeof(buf));
++		if (ret < 0) {
++			hid_err(hdev, "Asus failed to set keyboard backlight: %d\n", ret);
++			goto asus_resume_err;
++		}
++	}
++
++asus_resume_err:
++	return ret;
++}
++
+ static int __maybe_unused asus_reset_resume(struct hid_device *hdev)
+ {
+ 	struct asus_drvdata *drvdata = hid_get_drvdata(hdev);
+@@ -1177,6 +1195,7 @@ static struct hid_driver asus_driver = {
+ 	.input_configured       = asus_input_configured,
+ #ifdef CONFIG_PM
+ 	.reset_resume           = asus_reset_resume,
++	.resume					= asus_resume,
+ #endif
+ 	.event			= asus_event,
+ 	.raw_event		= asus_raw_event
+diff --git a/drivers/hid/hid-glorious.c b/drivers/hid/hid-glorious.c
+index 558eb08c19ef9..281b3a7187cec 100644
+--- a/drivers/hid/hid-glorious.c
++++ b/drivers/hid/hid-glorious.c
+@@ -21,6 +21,10 @@ MODULE_DESCRIPTION("HID driver for Glorious PC Gaming Race mice");
+  * Glorious Model O and O- specify the const flag in the consumer input
+  * report descriptor, which leads to inputs being ignored. Fix this
+  * by patching the descriptor.
++ *
++ * Glorious Model I incorrectly specifes the Usage Minimum for its
++ * keyboard HID report, causing keycodes to be misinterpreted.
++ * Fix this by setting Usage Minimum to 0 in that report.
+  */
+ static __u8 *glorious_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 		unsigned int *rsize)
+@@ -32,6 +36,10 @@ static __u8 *glorious_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 		rdesc[85] = rdesc[113] = rdesc[141] = \
+ 			HID_MAIN_ITEM_VARIABLE | HID_MAIN_ITEM_RELATIVE;
+ 	}
++	if (*rsize == 156 && rdesc[41] == 1) {
++		hid_info(hdev, "patching Glorious Model I keyboard report descriptor\n");
++		rdesc[41] = 0;
++	}
+ 	return rdesc;
+ }
+ 
+@@ -44,6 +52,8 @@ static void glorious_update_name(struct hid_device *hdev)
+ 		model = "Model O"; break;
+ 	case USB_DEVICE_ID_GLORIOUS_MODEL_D:
+ 		model = "Model D"; break;
++	case USB_DEVICE_ID_GLORIOUS_MODEL_I:
++		model = "Model I"; break;
+ 	}
+ 
+ 	snprintf(hdev->name, sizeof(hdev->name), "%s %s", "Glorious", model);
+@@ -66,10 +76,12 @@ static int glorious_probe(struct hid_device *hdev,
+ }
+ 
+ static const struct hid_device_id glorious_devices[] = {
+-	{ HID_USB_DEVICE(USB_VENDOR_ID_GLORIOUS,
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SINOWEALTH,
+ 		USB_DEVICE_ID_GLORIOUS_MODEL_O) },
+-	{ HID_USB_DEVICE(USB_VENDOR_ID_GLORIOUS,
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SINOWEALTH,
+ 		USB_DEVICE_ID_GLORIOUS_MODEL_D) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_LAVIEW,
++		USB_DEVICE_ID_GLORIOUS_MODEL_I) },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(hid, glorious_devices);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 7c688d7f8ccff..6273ab615af89 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -471,10 +471,6 @@
+ #define USB_DEVICE_ID_GENERAL_TOUCH_WIN8_PIT_010A 0x010a
+ #define USB_DEVICE_ID_GENERAL_TOUCH_WIN8_PIT_E100 0xe100
+ 
+-#define USB_VENDOR_ID_GLORIOUS  0x258a
+-#define USB_DEVICE_ID_GLORIOUS_MODEL_D 0x0033
+-#define USB_DEVICE_ID_GLORIOUS_MODEL_O 0x0036
+-
+ #define I2C_VENDOR_ID_GOODIX		0x27c6
+ #define I2C_DEVICE_ID_GOODIX_01F0	0x01f0
+ 
+@@ -697,6 +693,9 @@
+ #define USB_VENDOR_ID_LABTEC		0x1020
+ #define USB_DEVICE_ID_LABTEC_WIRELESS_KEYBOARD	0x0006
+ 
++#define USB_VENDOR_ID_LAVIEW		0x22D4
++#define USB_DEVICE_ID_GLORIOUS_MODEL_I	0x1503
++
+ #define USB_VENDOR_ID_LCPOWER		0x1241
+ #define USB_DEVICE_ID_LCPOWER_LC1000	0xf767
+ 
+@@ -1068,6 +1067,10 @@
+ #define USB_VENDOR_ID_SIGMATEL		0x066F
+ #define USB_DEVICE_ID_SIGMATEL_STMP3780	0x3780
+ 
++#define USB_VENDOR_ID_SINOWEALTH  0x258a
++#define USB_DEVICE_ID_GLORIOUS_MODEL_D 0x0033
++#define USB_DEVICE_ID_GLORIOUS_MODEL_O 0x0036
++
+ #define USB_VENDOR_ID_SIS_TOUCH		0x0457
+ #define USB_DEVICE_ID_SIS9200_TOUCH	0x9200
+ #define USB_DEVICE_ID_SIS817_TOUCH	0x0817
+diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c
+index 71f7b0d539df5..249af8d26fe78 100644
+--- a/drivers/hid/hid-lenovo.c
++++ b/drivers/hid/hid-lenovo.c
+@@ -489,7 +489,8 @@ static int lenovo_event_cptkbd(struct hid_device *hdev,
+ 		 * so set middlebutton_state to 3
+ 		 * to never apply workaround anymore
+ 		 */
+-		if (cptkbd_data->middlebutton_state == 1 &&
++		if (hdev->product == USB_DEVICE_ID_LENOVO_CUSBKBD &&
++				cptkbd_data->middlebutton_state == 1 &&
+ 				usage->type == EV_REL &&
+ 				(usage->code == REL_X || usage->code == REL_Y)) {
+ 			cptkbd_data->middlebutton_state = 3;
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 84b12599eaf69..7d43d62df2409 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1962,6 +1962,11 @@ static const struct hid_device_id mt_devices[] = {
+ 		MT_USB_DEVICE(USB_VENDOR_ID_HANVON_ALT,
+ 			USB_DEVICE_ID_HANVON_ALT_MULTITOUCH) },
+ 
++	/* HONOR GLO-GXXX panel */
++	{ .driver_data = MT_CLS_VTL,
++		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++			0x347d, 0x7853) },
++
+ 	/* Ilitek dual touch panel */
+ 	{  .driver_data = MT_CLS_NSMU,
+ 		MT_USB_DEVICE(USB_VENDOR_ID_ILITEK,
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 787349f2de01d..1b3a83fa76168 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -33,6 +33,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_AKAI, USB_DEVICE_ID_AKAI_MPKMINI2), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ALPS, USB_DEVICE_ID_IBM_GAMEPAD), HID_QUIRK_BADPAD },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_AMI, USB_DEVICE_ID_AMI_VIRT_KEYBOARD_AND_MOUSE), HID_QUIRK_ALWAYS_POLL },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_REVB_ANSI), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_2PORTKVM), HID_QUIRK_NOGET },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVMC), HID_QUIRK_NOGET },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVM), HID_QUIRK_NOGET },
+diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
+index e8bf4f752e8be..5804a8e6e215a 100644
+--- a/drivers/md/bcache/bcache.h
++++ b/drivers/md/bcache/bcache.h
+@@ -265,6 +265,7 @@ struct bcache_device {
+ #define BCACHE_DEV_WB_RUNNING		3
+ #define BCACHE_DEV_RATE_DW_RUNNING	4
+ 	int			nr_stripes;
++#define BCH_MIN_STRIPE_SZ		((4 << 20) >> SECTOR_SHIFT)
+ 	unsigned int		stripe_size;
+ 	atomic_t		*stripe_sectors_dirty;
+ 	unsigned long		*full_dirty_stripes;
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 3d6a7fbcbb15e..1a1a9554474ae 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -974,6 +974,9 @@ err:
+  *
+  * The btree node will have either a read or a write lock held, depending on
+  * level and op->lock.
++ *
++ * Note: Only error code or btree pointer will be returned, it is unncessary
++ *       for callers to check NULL pointer.
+  */
+ struct btree *bch_btree_node_get(struct cache_set *c, struct btree_op *op,
+ 				 struct bkey *k, int level, bool write,
+@@ -1085,6 +1088,10 @@ retry:
+ 	mutex_unlock(&b->c->bucket_lock);
+ }
+ 
++/*
++ * Only error code or btree pointer will be returned, it is unncessary for
++ * callers to check NULL pointer.
++ */
+ struct btree *__bch_btree_node_alloc(struct cache_set *c, struct btree_op *op,
+ 				     int level, bool wait,
+ 				     struct btree *parent)
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index e8c8077667f9e..04ddaa4bbd77f 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -917,6 +917,8 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
+ 
+ 	if (!d->stripe_size)
+ 		d->stripe_size = 1 << 31;
++	else if (d->stripe_size < BCH_MIN_STRIPE_SZ)
++		d->stripe_size = roundup(BCH_MIN_STRIPE_SZ, d->stripe_size);
+ 
+ 	n = DIV_ROUND_UP_ULL(sectors, d->stripe_size);
+ 	if (!n || n > max_stripes) {
+@@ -2041,7 +2043,7 @@ static int run_cache_set(struct cache_set *c)
+ 		c->root = bch_btree_node_get(c, NULL, k,
+ 					     j->btree_level,
+ 					     true, NULL);
+-		if (IS_ERR_OR_NULL(c->root))
++		if (IS_ERR(c->root))
+ 			goto err;
+ 
+ 		list_del_init(&c->root->list);
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 94e899ce38554..8e3f5f004c397 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -857,7 +857,7 @@ static int bch_dirty_init_thread(void *arg)
+ 	int cur_idx, prev_idx, skip_nr;
+ 
+ 	k = p = NULL;
+-	cur_idx = prev_idx = 0;
++	prev_idx = 0;
+ 
+ 	bch_btree_iter_init(&c->root->keys, &iter, NULL);
+ 	k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad);
+diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.c b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
+index 032ab9f204388..3af9dbf245f2a 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_eth_com.c
++++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
+@@ -316,9 +316,6 @@ static int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq,
+ 	 * compare it to the stored version, just create the meta
+ 	 */
+ 	if (io_sq->disable_meta_caching) {
+-		if (unlikely(!ena_tx_ctx->meta_valid))
+-			return -EINVAL;
+-
+ 		*have_meta = true;
+ 		return ena_com_create_meta(io_sq, ena_meta);
+ 	}
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index 1722d4091ea3f..e13ae04d2f0fd 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -77,6 +77,8 @@ static void ena_unmap_tx_buff(struct ena_ring *tx_ring,
+ 			      struct ena_tx_buffer *tx_info);
+ static int ena_create_io_tx_queues_in_range(struct ena_adapter *adapter,
+ 					    int first_index, int count);
++static void ena_free_all_io_tx_resources_in_range(struct ena_adapter *adapter,
++						  int first_index, int count);
+ 
+ static void ena_tx_timeout(struct net_device *dev, unsigned int txqueue)
+ {
+@@ -388,23 +390,22 @@ static void ena_init_all_xdp_queues(struct ena_adapter *adapter)
+ 
+ static int ena_setup_and_create_all_xdp_queues(struct ena_adapter *adapter)
+ {
++	u32 xdp_first_ring = adapter->xdp_first_ring;
++	u32 xdp_num_queues = adapter->xdp_num_queues;
+ 	int rc = 0;
+ 
+-	rc = ena_setup_tx_resources_in_range(adapter, adapter->xdp_first_ring,
+-					     adapter->xdp_num_queues);
++	rc = ena_setup_tx_resources_in_range(adapter, xdp_first_ring, xdp_num_queues);
+ 	if (rc)
+ 		goto setup_err;
+ 
+-	rc = ena_create_io_tx_queues_in_range(adapter,
+-					      adapter->xdp_first_ring,
+-					      adapter->xdp_num_queues);
++	rc = ena_create_io_tx_queues_in_range(adapter, xdp_first_ring, xdp_num_queues);
+ 	if (rc)
+ 		goto create_err;
+ 
+ 	return 0;
+ 
+ create_err:
+-	ena_free_all_io_tx_resources(adapter);
++	ena_free_all_io_tx_resources_in_range(adapter, xdp_first_ring, xdp_num_queues);
+ setup_err:
+ 	return rc;
+ }
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index e9c6f1fa0b1a7..98e8997f80366 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -577,11 +577,14 @@ void aq_ring_free(struct aq_ring_s *self)
+ 		return;
+ 
+ 	kfree(self->buff_ring);
++	self->buff_ring = NULL;
+ 
+-	if (self->dx_ring)
++	if (self->dx_ring) {
+ 		dma_free_coherent(aq_nic_get_dev(self->aq_nic),
+ 				  self->size * self->dx_size, self->dx_ring,
+ 				  self->dx_ring_pa);
++		self->dx_ring = NULL;
++	}
+ }
+ 
+ unsigned int aq_ring_fill_stats_data(struct aq_ring_s *self, u64 *data)
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index 4f669e7c75587..4509a29ff73f9 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -1924,8 +1924,7 @@ u16 bnx2x_select_queue(struct net_device *dev, struct sk_buff *skb,
+ 
+ 		/* Skip VLAN tag if present */
+ 		if (ether_type == ETH_P_8021Q) {
+-			struct vlan_ethhdr *vhdr =
+-				(struct vlan_ethhdr *)skb->data;
++			struct vlan_ethhdr *vhdr = skb_vlan_eth_hdr(skb);
+ 
+ 			ether_type = ntohs(vhdr->h_vlan_encapsulated_proto);
+ 		}
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 52b399aa3213d..edd4dd73b3e32 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -1125,7 +1125,7 @@ static struct sk_buff *be_lancer_xmit_workarounds(struct be_adapter *adapter,
+ 						  struct be_wrb_params
+ 						  *wrb_params)
+ {
+-	struct vlan_ethhdr *veh = (struct vlan_ethhdr *)skb->data;
++	struct vlan_ethhdr *veh = skb_vlan_eth_hdr(skb);
+ 	unsigned int eth_hdr_len;
+ 	struct iphdr *ip;
+ 
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index e18b3b72fc0df..4ce913559c91d 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3266,31 +3266,26 @@ static int fec_set_features(struct net_device *netdev,
+ 	return 0;
+ }
+ 
+-static u16 fec_enet_get_raw_vlan_tci(struct sk_buff *skb)
+-{
+-	struct vlan_ethhdr *vhdr;
+-	unsigned short vlan_TCI = 0;
+-
+-	if (skb->protocol == htons(ETH_P_ALL)) {
+-		vhdr = (struct vlan_ethhdr *)(skb->data);
+-		vlan_TCI = ntohs(vhdr->h_vlan_TCI);
+-	}
+-
+-	return vlan_TCI;
+-}
+-
+ static u16 fec_enet_select_queue(struct net_device *ndev, struct sk_buff *skb,
+ 				 struct net_device *sb_dev)
+ {
+ 	struct fec_enet_private *fep = netdev_priv(ndev);
+-	u16 vlan_tag;
++	u16 vlan_tag = 0;
+ 
+ 	if (!(fep->quirks & FEC_QUIRK_HAS_AVB))
+ 		return netdev_pick_tx(ndev, skb, NULL);
+ 
+-	vlan_tag = fec_enet_get_raw_vlan_tci(skb);
+-	if (!vlan_tag)
++	/* VLAN is present in the payload.*/
++	if (eth_type_vlan(skb->protocol)) {
++		struct vlan_ethhdr *vhdr = skb_vlan_eth_hdr(skb);
++
++		vlan_tag = ntohs(vhdr->h_vlan_TCI);
++	/*  VLAN is present in the skb but not yet pushed in the payload.*/
++	} else if (skb_vlan_tag_present(skb)) {
++		vlan_tag = skb->vlan_tci;
++	} else {
+ 		return vlan_tag;
++	}
+ 
+ 	return fec_enet_vlan_pri_to_queue[vlan_tag >> 13];
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 4df5e91e86ce7..a4ab3e7efa5e4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1002,7 +1002,7 @@ static int hns3_handle_vtags(struct hns3_enet_ring *tx_ring,
+ 	if (unlikely(rc < 0))
+ 		return rc;
+ 
+-	vhdr = (struct vlan_ethhdr *)skb->data;
++	vhdr = skb_vlan_eth_hdr(skb);
+ 	vhdr->h_vlan_TCI |= cpu_to_be16((skb->priority << VLAN_PRIO_SHIFT)
+ 					 & VLAN_PRIO_MASK);
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 88d8f17cefd8e..57667ccc28f54 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -2879,7 +2879,7 @@ static inline int i40e_tx_prepare_vlan_flags(struct sk_buff *skb,
+ 			rc = skb_cow_head(skb, 0);
+ 			if (rc < 0)
+ 				return rc;
+-			vhdr = (struct vlan_ethhdr *)skb->data;
++			vhdr = skb_vlan_eth_hdr(skb);
+ 			vhdr->h_vlan_TCI = htons(tx_flags >>
+ 						 I40E_TX_FLAGS_VLAN_SHIFT);
+ 		} else {
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 2b100b7b325a5..5829d81f2cb11 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -8707,7 +8707,7 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct sk_buff *skb,
+ 
+ 			if (skb_cow_head(skb, 0))
+ 				goto out_drop;
+-			vhdr = (struct vlan_ethhdr *)skb->data;
++			vhdr = skb_vlan_eth_hdr(skb);
+ 			vhdr->h_vlan_TCI = htons(tx_flags >>
+ 						 IXGBE_TX_FLAGS_VLAN_SHIFT);
+ 		} else {
+diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
+index e2046b6d65a30..2c54600835643 100644
+--- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
++++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
+@@ -1861,7 +1861,7 @@ netxen_tso_check(struct net_device *netdev,
+ 
+ 	if (protocol == cpu_to_be16(ETH_P_8021Q)) {
+ 
+-		vh = (struct vlan_ethhdr *)skb->data;
++		vh = skb_vlan_eth_hdr(skb);
+ 		protocol = vh->h_vlan_encapsulated_proto;
+ 		flags = FLAGS_VLAN_TAGGED;
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_cxt.c b/drivers/net/ethernet/qlogic/qed/qed_cxt.c
+index 0a22f8ce9a2c3..1c3737921b6cf 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_cxt.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_cxt.c
+@@ -933,6 +933,7 @@ static void qed_ilt_shadow_free(struct qed_hwfn *p_hwfn)
+ 		p_dma->virt_addr = NULL;
+ 	}
+ 	kfree(p_mngr->ilt_shadow);
++	p_mngr->ilt_shadow = NULL;
+ }
+ 
+ static int qed_ilt_blk_alloc(struct qed_hwfn *p_hwfn,
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
+index bdf15d2a64313..7260e57d79f2e 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
+@@ -317,7 +317,7 @@ static void qlcnic_send_filter(struct qlcnic_adapter *adapter,
+ 
+ 	if (adapter->flags & QLCNIC_VLAN_FILTERING) {
+ 		if (protocol == ETH_P_8021Q) {
+-			vh = (struct vlan_ethhdr *)skb->data;
++			vh = skb_vlan_eth_hdr(skb);
+ 			vlan_id = ntohs(vh->h_vlan_TCI);
+ 		} else if (skb_vlan_tag_present(skb)) {
+ 			vlan_id = skb_vlan_tag_get(skb);
+@@ -467,7 +467,7 @@ static int qlcnic_tx_pkt(struct qlcnic_adapter *adapter,
+ 	u32 producer = tx_ring->producer;
+ 
+ 	if (protocol == ETH_P_8021Q) {
+-		vh = (struct vlan_ethhdr *)skb->data;
++		vh = skb_vlan_eth_hdr(skb);
+ 		flags = QLCNIC_FLAGS_VLAN_TAGGED;
+ 		vlan_tci = ntohs(vh->h_vlan_TCI);
+ 		protocol = ntohs(vh->h_vlan_encapsulated_proto);
+diff --git a/drivers/net/ethernet/qualcomm/qca_debug.c b/drivers/net/ethernet/qualcomm/qca_debug.c
+index 702aa217a27ad..66229b300c5a4 100644
+--- a/drivers/net/ethernet/qualcomm/qca_debug.c
++++ b/drivers/net/ethernet/qualcomm/qca_debug.c
+@@ -30,6 +30,8 @@
+ 
+ #define QCASPI_MAX_REGS 0x20
+ 
++#define QCASPI_RX_MAX_FRAMES 4
++
+ static const u16 qcaspi_spi_regs[] = {
+ 	SPI_REG_BFR_SIZE,
+ 	SPI_REG_WRBUF_SPC_AVA,
+@@ -249,31 +251,30 @@ qcaspi_get_ringparam(struct net_device *dev, struct ethtool_ringparam *ring)
+ {
+ 	struct qcaspi *qca = netdev_priv(dev);
+ 
+-	ring->rx_max_pending = 4;
++	ring->rx_max_pending = QCASPI_RX_MAX_FRAMES;
+ 	ring->tx_max_pending = TX_RING_MAX_LEN;
+-	ring->rx_pending = 4;
++	ring->rx_pending = QCASPI_RX_MAX_FRAMES;
+ 	ring->tx_pending = qca->txr.count;
+ }
+ 
+ static int
+ qcaspi_set_ringparam(struct net_device *dev, struct ethtool_ringparam *ring)
+ {
+-	const struct net_device_ops *ops = dev->netdev_ops;
+ 	struct qcaspi *qca = netdev_priv(dev);
+ 
+-	if ((ring->rx_pending) ||
++	if (ring->rx_pending != QCASPI_RX_MAX_FRAMES ||
+ 	    (ring->rx_mini_pending) ||
+ 	    (ring->rx_jumbo_pending))
+ 		return -EINVAL;
+ 
+-	if (netif_running(dev))
+-		ops->ndo_stop(dev);
++	if (qca->spi_thread)
++		kthread_park(qca->spi_thread);
+ 
+ 	qca->txr.count = max_t(u32, ring->tx_pending, TX_RING_MIN_LEN);
+ 	qca->txr.count = min_t(u16, qca->txr.count, TX_RING_MAX_LEN);
+ 
+-	if (netif_running(dev))
+-		ops->ndo_open(dev);
++	if (qca->spi_thread)
++		kthread_unpark(qca->spi_thread);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index 44fa959ebcaa5..ffa1846f5b4c4 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -573,6 +573,18 @@ qcaspi_spi_thread(void *data)
+ 	netdev_info(qca->net_dev, "SPI thread created\n");
+ 	while (!kthread_should_stop()) {
+ 		set_current_state(TASK_INTERRUPTIBLE);
++		if (kthread_should_park()) {
++			netif_tx_disable(qca->net_dev);
++			netif_carrier_off(qca->net_dev);
++			qcaspi_flush_tx_ring(qca);
++			kthread_parkme();
++			if (qca->sync == QCASPI_SYNC_READY) {
++				netif_carrier_on(qca->net_dev);
++				netif_wake_queue(qca->net_dev);
++			}
++			continue;
++		}
++
+ 		if ((qca->intr_req == qca->intr_svc) &&
+ 		    !qca->txr.skb[qca->txr.head])
+ 			schedule();
+@@ -601,11 +613,17 @@ qcaspi_spi_thread(void *data)
+ 			if (intr_cause & SPI_INT_CPU_ON) {
+ 				qcaspi_qca7k_sync(qca, QCASPI_EVENT_CPUON);
+ 
++				/* Frame decoding in progress */
++				if (qca->frm_handle.state != qca->frm_handle.init)
++					qca->net_dev->stats.rx_dropped++;
++
++				qcafrm_fsm_init_spi(&qca->frm_handle);
++				qca->stats.device_reset++;
++
+ 				/* not synced. */
+ 				if (qca->sync != QCASPI_SYNC_READY)
+ 					continue;
+ 
+-				qca->stats.device_reset++;
+ 				netif_wake_queue(qca->net_dev);
+ 				netif_carrier_on(qca->net_dev);
+ 			}
+diff --git a/drivers/net/ethernet/sfc/tx_tso.c b/drivers/net/ethernet/sfc/tx_tso.c
+index 898e5c61d9086..d381d8164f07c 100644
+--- a/drivers/net/ethernet/sfc/tx_tso.c
++++ b/drivers/net/ethernet/sfc/tx_tso.c
+@@ -147,7 +147,7 @@ static __be16 efx_tso_check_protocol(struct sk_buff *skb)
+ 	EFX_WARN_ON_ONCE_PARANOID(((struct ethhdr *)skb->data)->h_proto !=
+ 				  protocol);
+ 	if (protocol == htons(ETH_P_8021Q)) {
+-		struct vlan_ethhdr *veh = (struct vlan_ethhdr *)skb->data;
++		struct vlan_ethhdr *veh = skb_vlan_eth_hdr(skb);
+ 
+ 		protocol = veh->h_vlan_encapsulated_proto;
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 59a07a01e80ca..8416a186cd7f3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3660,13 +3660,10 @@ dma_map_err:
+ 
+ static void stmmac_rx_vlan(struct net_device *dev, struct sk_buff *skb)
+ {
+-	struct vlan_ethhdr *veth;
+-	__be16 vlan_proto;
++	struct vlan_ethhdr *veth = skb_vlan_eth_hdr(skb);
++	__be16 vlan_proto = veth->h_vlan_proto;
+ 	u16 vlanid;
+ 
+-	veth = (struct vlan_ethhdr *)skb->data;
+-	vlan_proto = veth->h_vlan_proto;
+-
+ 	if ((vlan_proto == htons(ETH_P_8021Q) &&
+ 	     dev->features & NETIF_F_HW_VLAN_CTAG_RX) ||
+ 	    (vlan_proto == htons(ETH_P_8021AD) &&
+@@ -5190,9 +5187,9 @@ int stmmac_dvr_probe(struct device *device,
+ 		/* MDIO bus Registration */
+ 		ret = stmmac_mdio_register(ndev);
+ 		if (ret < 0) {
+-			dev_err(priv->device,
+-				"%s: MDIO bus (id: %d) registration failed",
+-				__func__, priv->plat->bus_id);
++			dev_err_probe(priv->device, ret,
++				      "%s: MDIO bus (id: %d) registration failed\n",
++				      __func__, priv->plat->bus_id);
+ 			goto error_mdio_register;
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+index 7c1a14b256da3..d9b76e1e09493 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+@@ -459,8 +459,12 @@ int stmmac_mdio_register(struct net_device *ndev)
+ 	new_bus->parent = priv->device;
+ 
+ 	err = of_mdiobus_register(new_bus, mdio_node);
+-	if (err != 0) {
+-		dev_err(dev, "Cannot register the MDIO bus\n");
++	if (err == -ENODEV) {
++		err = 0;
++		dev_info(dev, "MDIO bus is disabled\n");
++		goto bus_register_fail;
++	} else if (err) {
++		dev_err_probe(dev, err, "Cannot register the MDIO bus\n");
+ 		goto bus_register_fail;
+ 	}
+ 
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 97a77dabed64c..49d7030ddc1b4 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -284,8 +284,10 @@ static int __team_options_register(struct team *team,
+ 	return 0;
+ 
+ inst_rollback:
+-	for (i--; i >= 0; i--)
++	for (i--; i >= 0; i--) {
+ 		__team_option_inst_del_option(team, dst_opts[i]);
++		list_del(&dst_opts[i]->list);
++	}
+ 
+ 	i = option_count;
+ alloc_rollback:
+diff --git a/drivers/net/usb/aqc111.c b/drivers/net/usb/aqc111.c
+index c9c4095181744..4ea02116be182 100644
+--- a/drivers/net/usb/aqc111.c
++++ b/drivers/net/usb/aqc111.c
+@@ -1079,17 +1079,17 @@ static int aqc111_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 	u16 pkt_count = 0;
+ 	u64 desc_hdr = 0;
+ 	u16 vlan_tag = 0;
+-	u32 skb_len = 0;
++	u32 skb_len;
+ 
+ 	if (!skb)
+ 		goto err;
+ 
+-	if (skb->len == 0)
++	skb_len = skb->len;
++	if (skb_len < sizeof(desc_hdr))
+ 		goto err;
+ 
+-	skb_len = skb->len;
+ 	/* RX Descriptor Header */
+-	skb_trim(skb, skb->len - sizeof(desc_hdr));
++	skb_trim(skb, skb_len - sizeof(desc_hdr));
+ 	desc_hdr = le64_to_cpup((u64 *)skb_tail_pointer(skb));
+ 
+ 	/* Check these packets */
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 5cf7f389bf4ef..3d342908f57a0 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1225,6 +1225,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x19d2, 0x0168, 4)},
+ 	{QMI_FIXED_INTF(0x19d2, 0x0176, 3)},
+ 	{QMI_FIXED_INTF(0x19d2, 0x0178, 3)},
++	{QMI_FIXED_INTF(0x19d2, 0x0189, 4)},    /* ZTE MF290 */
+ 	{QMI_FIXED_INTF(0x19d2, 0x0191, 4)},	/* ZTE EuFi890 */
+ 	{QMI_FIXED_INTF(0x19d2, 0x0199, 1)},	/* ZTE MF820S */
+ 	{QMI_FIXED_INTF(0x19d2, 0x0200, 1)},
+diff --git a/drivers/pci/controller/pci-loongson.c b/drivers/pci/controller/pci-loongson.c
+index e73e18a73833b..8b40b45590a02 100644
+--- a/drivers/pci/controller/pci-loongson.c
++++ b/drivers/pci/controller/pci-loongson.c
+@@ -65,13 +65,49 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ 			DEV_LS7A_LPC, system_bus_quirk);
+ 
++/*
++ * Some Loongson PCIe ports have hardware limitations on their Maximum Read
++ * Request Size. They can't handle anything larger than this.  Sane
++ * firmware will set proper MRRS at boot, so we only need no_inc_mrrs for
++ * bridges. However, some MIPS Loongson firmware doesn't set MRRS properly,
++ * so we have to enforce maximum safe MRRS, which is 256 bytes.
++ */
++#ifdef CONFIG_MIPS
++static void loongson_set_min_mrrs_quirk(struct pci_dev *pdev)
++{
++	struct pci_bus *bus = pdev->bus;
++	struct pci_dev *bridge;
++	static const struct pci_device_id bridge_devids[] = {
++		{ PCI_VDEVICE(LOONGSON, DEV_LS2K_PCIE_PORT0) },
++		{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT0) },
++		{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT1) },
++		{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT2) },
++		{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT3) },
++		{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT4) },
++		{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT5) },
++		{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT6) },
++		{ 0, },
++	};
++
++	/* look for the matching bridge */
++	while (!pci_is_root_bus(bus)) {
++		bridge = bus->self;
++		bus = bus->parent;
++
++		if (pci_match_id(bridge_devids, bridge)) {
++			if (pcie_get_readrq(pdev) > 256) {
++				pci_info(pdev, "limiting MRRS to 256\n");
++				pcie_set_readrq(pdev, 256);
++			}
++			break;
++		}
++	}
++}
++DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, loongson_set_min_mrrs_quirk);
++#endif
++
+ static void loongson_mrrs_quirk(struct pci_dev *pdev)
+ {
+-	/*
+-	 * Some Loongson PCIe ports have h/w limitations of maximum read
+-	 * request size. They can't handle anything larger than this. So
+-	 * force this limit on any devices attached under these ports.
+-	 */
+ 	struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus);
+ 
+ 	bridge->no_inc_mrrs = 1;
+diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
+index 0a37967b0a939..f031302ad4019 100644
+--- a/drivers/pci/hotplug/acpiphp_glue.c
++++ b/drivers/pci/hotplug/acpiphp_glue.c
+@@ -503,15 +503,12 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge)
+ 				if (pass && dev->subordinate) {
+ 					check_hotplug_bridge(slot, dev);
+ 					pcibios_resource_survey_bus(dev->subordinate);
+-					if (pci_is_root_bus(bus))
+-						__pci_bus_size_bridges(dev->subordinate, &add_list);
++					__pci_bus_size_bridges(dev->subordinate,
++							       &add_list);
+ 				}
+ 			}
+ 		}
+-		if (pci_is_root_bus(bus))
+-			__pci_bus_assign_resources(bus, &add_list, NULL);
+-		else
+-			pci_assign_unassigned_bridge_resources(bus->self);
++		__pci_bus_assign_resources(bus, &add_list, NULL);
+ 	}
+ 
+ 	acpiphp_sanitize_bus(bus);
+diff --git a/drivers/platform/x86/intel_telemetry_core.c b/drivers/platform/x86/intel_telemetry_core.c
+index fdf55b5d69480..e4be40f73eebf 100644
+--- a/drivers/platform/x86/intel_telemetry_core.c
++++ b/drivers/platform/x86/intel_telemetry_core.c
+@@ -102,7 +102,7 @@ static const struct telemetry_core_ops telm_defpltops = {
+ /**
+  * telemetry_update_events() - Update telemetry Configuration
+  * @pss_evtconfig: PSS related config. No change if num_evts = 0.
+- * @pss_evtconfig: IOSS related config. No change if num_evts = 0.
++ * @ioss_evtconfig: IOSS related config. No change if num_evts = 0.
+  *
+  * This API updates the IOSS & PSS Telemetry configuration. Old config
+  * is overwritten. Call telemetry_reset_events when logging is over
+@@ -176,7 +176,7 @@ EXPORT_SYMBOL_GPL(telemetry_reset_events);
+ /**
+  * telemetry_get_eventconfig() - Returns the pss and ioss events enabled
+  * @pss_evtconfig: Pointer to PSS related configuration.
+- * @pss_evtconfig: Pointer to IOSS related configuration.
++ * @ioss_evtconfig: Pointer to IOSS related configuration.
+  * @pss_len:	   Number of u32 elements allocated for pss_evtconfig array
+  * @ioss_len:	   Number of u32 elements allocated for ioss_evtconfig array
+  *
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index 79554d28f08aa..a377c3d02c559 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -724,14 +724,15 @@ error_1:
+  * sdw_ml_sync_bank_switch: Multilink register bank switch
+  *
+  * @bus: SDW bus instance
++ * @multi_link: whether this is a multi-link stream with hardware-based sync
+  *
+  * Caller function should free the buffers on error
+  */
+-static int sdw_ml_sync_bank_switch(struct sdw_bus *bus)
++static int sdw_ml_sync_bank_switch(struct sdw_bus *bus, bool multi_link)
+ {
+ 	unsigned long time_left;
+ 
+-	if (!bus->multi_link)
++	if (!multi_link)
+ 		return 0;
+ 
+ 	/* Wait for completion of transfer */
+@@ -827,7 +828,7 @@ static int do_bank_switch(struct sdw_stream_runtime *stream)
+ 			bus->bank_switch_timeout = DEFAULT_BANK_SWITCH_TIMEOUT;
+ 
+ 		/* Check if bank switch was successful */
+-		ret = sdw_ml_sync_bank_switch(bus);
++		ret = sdw_ml_sync_bank_switch(bus, multi_link);
+ 		if (ret < 0) {
+ 			dev_err(bus->dev,
+ 				"multi link bank switch failed: %d\n", ret);
+diff --git a/drivers/staging/gdm724x/gdm_lte.c b/drivers/staging/gdm724x/gdm_lte.c
+index de30262c3fae0..e0787647093d1 100644
+--- a/drivers/staging/gdm724x/gdm_lte.c
++++ b/drivers/staging/gdm724x/gdm_lte.c
+@@ -350,7 +350,7 @@ static s32 gdm_lte_tx_nic_type(struct net_device *dev, struct sk_buff *skb)
+ 	/* Get ethernet protocol */
+ 	eth = (struct ethhdr *)skb->data;
+ 	if (ntohs(eth->h_proto) == ETH_P_8021Q) {
+-		vlan_eth = (struct vlan_ethhdr *)skb->data;
++		vlan_eth = skb_vlan_eth_hdr(skb);
+ 		mac_proto = ntohs(vlan_eth->h_vlan_encapsulated_proto);
+ 		network_data = skb->data + VLAN_ETH_HLEN;
+ 		nic_type |= NIC_TYPE_F_VLAN;
+@@ -436,7 +436,7 @@ static netdev_tx_t gdm_lte_tx(struct sk_buff *skb, struct net_device *dev)
+ 	 * driver based on the NIC mac
+ 	 */
+ 	if (nic_type & NIC_TYPE_F_VLAN) {
+-		struct vlan_ethhdr *vlan_eth = (struct vlan_ethhdr *)skb->data;
++		struct vlan_ethhdr *vlan_eth = skb_vlan_eth_hdr(skb);
+ 
+ 		nic->vlan_id = ntohs(vlan_eth->h_vlan_TCI) & VLAN_VID_MASK;
+ 		data_buf = skb->data + (VLAN_ETH_HLEN - ETH_HLEN);
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index 3693ad9f45212..fa49529682cec 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -235,6 +235,7 @@ struct gsm_mux {
+ 	struct gsm_dlci *dlci[NUM_DLCI];
+ 	int old_c_iflag;		/* termios c_iflag value before attach */
+ 	bool constipated;		/* Asked by remote to shut up */
++	bool has_devices;		/* Devices were registered */
+ 
+ 	spinlock_t tx_lock;
+ 	unsigned int tx_bytes;		/* TX data outstanding */
+@@ -464,6 +465,68 @@ static u8 gsm_encode_modem(const struct gsm_dlci *dlci)
+ 	return modembits;
+ }
+ 
++/**
++ *	gsm_register_devices	-	register all tty devices for a given mux index
++ *
++ *	@driver: the tty driver that describes the tty devices
++ *	@index:  the mux number is used to calculate the minor numbers of the
++ *	         ttys for this mux and may differ from the position in the
++ *	         mux array.
++ */
++static int gsm_register_devices(struct tty_driver *driver, unsigned int index)
++{
++	struct device *dev;
++	int i;
++	unsigned int base;
++
++	if (!driver || index >= MAX_MUX)
++		return -EINVAL;
++
++	base = index * NUM_DLCI; /* first minor for this index */
++	for (i = 1; i < NUM_DLCI; i++) {
++		/* Don't register device 0 - this is the control channel
++		 * and not a usable tty interface
++		 */
++		dev = tty_register_device(gsm_tty_driver, base + i, NULL);
++		if (IS_ERR(dev)) {
++			if (debug & 8)
++				pr_info("%s failed to register device minor %u",
++					__func__, base + i);
++			for (i--; i >= 1; i--)
++				tty_unregister_device(gsm_tty_driver, base + i);
++			return PTR_ERR(dev);
++		}
++	}
++
++	return 0;
++}
++
++/**
++ *	gsm_unregister_devices	-	unregister all tty devices for a given mux index
++ *
++ *	@driver: the tty driver that describes the tty devices
++ *	@index:  the mux number is used to calculate the minor numbers of the
++ *	         ttys for this mux and may differ from the position in the
++ *	         mux array.
++ */
++static void gsm_unregister_devices(struct tty_driver *driver,
++				   unsigned int index)
++{
++	int i;
++	unsigned int base;
++
++	if (!driver || index >= MAX_MUX)
++		return;
++
++	base = index * NUM_DLCI; /* first minor for this index */
++	for (i = 1; i < NUM_DLCI; i++) {
++		/* Don't unregister device 0 - this is the control
++		 * channel and not a usable tty interface
++		 */
++		tty_unregister_device(gsm_tty_driver, base + i);
++	}
++}
++
+ /**
+  *	gsm_print_packet	-	display a frame for debug
+  *	@hdr: header to print before decode
+@@ -2178,6 +2241,10 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
+ 	del_timer_sync(&gsm->t2_timer);
+ 
+ 	/* Free up any link layer users and finally the control channel */
++	if (gsm->has_devices) {
++		gsm_unregister_devices(gsm_tty_driver, gsm->num);
++		gsm->has_devices = false;
++	}
+ 	for (i = NUM_DLCI - 1; i >= 0; i--)
+ 		if (gsm->dlci[i])
+ 			gsm_dlci_release(gsm->dlci[i]);
+@@ -2201,15 +2268,21 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
+ static int gsm_activate_mux(struct gsm_mux *gsm)
+ {
+ 	struct gsm_dlci *dlci;
++	int ret;
+ 
+ 	if (gsm->encoding == 0)
+ 		gsm->receive = gsm0_receive;
+ 	else
+ 		gsm->receive = gsm1_receive;
+ 
++	ret = gsm_register_devices(gsm_tty_driver, gsm->num);
++	if (ret)
++		return ret;
++
+ 	dlci = gsm_dlci_alloc(gsm, 0);
+ 	if (dlci == NULL)
+ 		return -ENOMEM;
++	gsm->has_devices = true;
+ 	gsm->dead = false;		/* Tty opens are now permissible */
+ 	return 0;
+ }
+@@ -2475,39 +2548,14 @@ static int gsmld_output(struct gsm_mux *gsm, u8 *data, int len)
+  *	will need moving to an ioctl path.
+  */
+ 
+-static int gsmld_attach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
++static void gsmld_attach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
+ {
+-	unsigned int base;
+-	int ret, i;
+-
+ 	gsm->tty = tty_kref_get(tty);
+ 	/* Turn off tty XON/XOFF handling to handle it explicitly. */
+ 	gsm->old_c_iflag = tty->termios.c_iflag;
+ 	tty->termios.c_iflag &= (IXON | IXOFF);
+-	ret =  gsm_activate_mux(gsm);
+-	if (ret != 0)
+-		tty_kref_put(gsm->tty);
+-	else {
+-		/* Don't register device 0 - this is the control channel and not
+-		   a usable tty interface */
+-		base = mux_num_to_base(gsm); /* Base for this MUX */
+-		for (i = 1; i < NUM_DLCI; i++) {
+-			struct device *dev;
+-
+-			dev = tty_register_device(gsm_tty_driver,
+-							base + i, NULL);
+-			if (IS_ERR(dev)) {
+-				for (i--; i >= 1; i--)
+-					tty_unregister_device(gsm_tty_driver,
+-								base + i);
+-				return PTR_ERR(dev);
+-			}
+-		}
+-	}
+-	return ret;
+ }
+ 
+-
+ /**
+  *	gsmld_detach_gsm	-	stop doing 0710 mux
+  *	@tty: tty attached to the mux
+@@ -2518,12 +2566,7 @@ static int gsmld_attach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
+ 
+ static void gsmld_detach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
+ {
+-	unsigned int base = mux_num_to_base(gsm); /* Base for this MUX */
+-	int i;
+-
+ 	WARN_ON(tty != gsm->tty);
+-	for (i = 1; i < NUM_DLCI; i++)
+-		tty_unregister_device(gsm_tty_driver, base + i);
+ 	/* Restore tty XON/XOFF handling. */
+ 	gsm->tty->termios.c_iflag = gsm->old_c_iflag;
+ 	tty_kref_put(gsm->tty);
+@@ -2534,27 +2577,25 @@ static void gsmld_receive_buf(struct tty_struct *tty, const unsigned char *cp,
+ 			      char *fp, int count)
+ {
+ 	struct gsm_mux *gsm = tty->disc_data;
+-	const unsigned char *dp;
+-	char *f;
+-	int i;
+ 	char flags = TTY_NORMAL;
+ 
+ 	if (debug & 4)
+ 		print_hex_dump_bytes("gsmld_receive: ", DUMP_PREFIX_OFFSET,
+ 				     cp, count);
+ 
+-	for (i = count, dp = cp, f = fp; i; i--, dp++) {
+-		if (f)
+-			flags = *f++;
++	for (; count; count--, cp++) {
++		if (fp)
++			flags = *fp++;
+ 		switch (flags) {
+ 		case TTY_NORMAL:
+-			gsm->receive(gsm, *dp);
++			if (gsm->receive)
++				gsm->receive(gsm, *cp);
+ 			break;
+ 		case TTY_OVERRUN:
+ 		case TTY_BREAK:
+ 		case TTY_PARITY:
+ 		case TTY_FRAME:
+-			gsm_error(gsm, *dp, flags);
++			gsm_error(gsm, *cp, flags);
+ 			break;
+ 		default:
+ 			WARN_ONCE(1, "%s: unknown flag %d\n",
+@@ -2619,7 +2660,6 @@ static void gsmld_close(struct tty_struct *tty)
+ static int gsmld_open(struct tty_struct *tty)
+ {
+ 	struct gsm_mux *gsm;
+-	int ret;
+ 
+ 	if (tty->ops->write == NULL)
+ 		return -EINVAL;
+@@ -2635,12 +2675,11 @@ static int gsmld_open(struct tty_struct *tty)
+ 	/* Attach the initial passive connection */
+ 	gsm->encoding = 1;
+ 
+-	ret = gsmld_attach_gsm(tty, gsm);
+-	if (ret != 0) {
+-		gsm_cleanup_mux(gsm, false);
+-		mux_put(gsm);
+-	}
+-	return ret;
++	gsmld_attach_gsm(tty, gsm);
++
++	timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0);
++
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 14d9d1ee16fc4..7b3c0787d5a45 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1353,8 +1353,6 @@ static void usb_gadget_remove_driver(struct usb_udc *udc)
+ 	dev_dbg(&udc->dev, "unregistering UDC driver [%s]\n",
+ 			udc->driver->function);
+ 
+-	kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+-
+ 	usb_gadget_disconnect(udc->gadget);
+ 	if (udc->gadget->irq)
+ 		synchronize_irq(udc->gadget->irq);
+@@ -1363,6 +1361,8 @@ static void usb_gadget_remove_driver(struct usb_udc *udc)
+ 
+ 	udc->driver = NULL;
+ 	udc->gadget->dev.driver = NULL;
++
++	kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+ }
+ 
+ /**
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index 535d28b44bca3..1820b53657a6c 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -491,7 +491,7 @@ error_kill_call:
+ 	if (call->async) {
+ 		if (cancel_work_sync(&call->async_work))
+ 			afs_put_call(call);
+-		afs_put_call(call);
++		afs_set_call_complete(call, ret, 0);
+ 	}
+ 
+ 	ac->error = ret;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index c9ac43f407469..3babc07ae613e 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -3603,6 +3603,10 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
+ 	start = max(start, rounddown(ac->ac_o_ex.fe_logical,
+ 			(ext4_lblk_t)EXT4_BLOCKS_PER_GROUP(ac->ac_sb)));
+ 
++	/* avoid unnecessary preallocation that may trigger assertions */
++	if (start + size > EXT_MAX_BLOCKS)
++		size = EXT_MAX_BLOCKS - start;
++
+ 	/* don't cover already allocated blocks in selected range */
+ 	if (ar->pleft && start <= ar->lleft) {
+ 		size -= ar->lleft + 1 - start;
+diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c
+index ff99ab2a3c430..2739e218df643 100644
+--- a/fs/fuse/dax.c
++++ b/fs/fuse/dax.c
+@@ -1228,6 +1228,7 @@ void fuse_dax_conn_free(struct fuse_conn *fc)
+ 	if (fc->dax) {
+ 		fuse_free_dax_mem_ranges(&fc->dax->free_ranges);
+ 		kfree(fc->dax);
++		fc->dax = NULL;
+ 	}
+ }
+ 
+diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
+index 4fe7fd0fe834d..d1ddbcf71791d 100644
+--- a/include/asm-generic/qspinlock.h
++++ b/include/asm-generic/qspinlock.h
+@@ -41,7 +41,7 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
+  */
+ static __always_inline int queued_spin_value_unlocked(struct qspinlock lock)
+ {
+-	return !atomic_read(&lock.val);
++	return !lock.val.counter;
+ }
+ 
+ /**
+diff --git a/include/linux/cred.h b/include/linux/cred.h
+index 18639c069263f..d35e4867d5cc7 100644
+--- a/include/linux/cred.h
++++ b/include/linux/cred.h
+@@ -109,7 +109,7 @@ static inline int groups_search(const struct group_info *group_info, kgid_t grp)
+  * same context as task->real_cred.
+  */
+ struct cred {
+-	atomic_t	usage;
++	atomic_long_t	usage;
+ #ifdef CONFIG_DEBUG_CREDENTIALS
+ 	atomic_t	subscribers;	/* number of processes subscribed */
+ 	void		*put_addr;
+@@ -227,7 +227,7 @@ static inline bool cap_ambient_invariant_ok(const struct cred *cred)
+  */
+ static inline struct cred *get_new_cred(struct cred *cred)
+ {
+-	atomic_inc(&cred->usage);
++	atomic_long_inc(&cred->usage);
+ 	return cred;
+ }
+ 
+@@ -259,7 +259,7 @@ static inline const struct cred *get_cred_rcu(const struct cred *cred)
+ 	struct cred *nonconst_cred = (struct cred *) cred;
+ 	if (!cred)
+ 		return NULL;
+-	if (!atomic_inc_not_zero(&nonconst_cred->usage))
++	if (!atomic_long_inc_not_zero(&nonconst_cred->usage))
+ 		return NULL;
+ 	validate_creds(cred);
+ 	nonconst_cred->non_rcu = 0;
+@@ -283,7 +283,7 @@ static inline void put_cred(const struct cred *_cred)
+ 
+ 	if (cred) {
+ 		validate_creds(cred);
+-		if (atomic_dec_and_test(&(cred)->usage))
++		if (atomic_long_dec_and_test(&(cred)->usage))
+ 			__put_cred(cred);
+ 	}
+ }
+diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
+index 4e7e72f3da5bd..ce6714bec65fd 100644
+--- a/include/linux/if_vlan.h
++++ b/include/linux/if_vlan.h
+@@ -60,6 +60,14 @@ static inline struct vlan_ethhdr *vlan_eth_hdr(const struct sk_buff *skb)
+ 	return (struct vlan_ethhdr *)skb_mac_header(skb);
+ }
+ 
++/* Prefer this version in TX path, instead of
++ * skb_reset_mac_header() + vlan_eth_hdr()
++ */
++static inline struct vlan_ethhdr *skb_vlan_eth_hdr(const struct sk_buff *skb)
++{
++	return (struct vlan_ethhdr *)skb->data;
++}
++
+ #define VLAN_PRIO_MASK		0xe000 /* Priority Code Point */
+ #define VLAN_PRIO_SHIFT		13
+ #define VLAN_CFI_MASK		0x1000 /* Canonical Format Indicator / Drop Eligible Indicator */
+@@ -526,7 +534,7 @@ static inline void __vlan_hwaccel_put_tag(struct sk_buff *skb,
+  */
+ static inline int __vlan_get_tag(const struct sk_buff *skb, u16 *vlan_tci)
+ {
+-	struct vlan_ethhdr *veth = (struct vlan_ethhdr *)skb->data;
++	struct vlan_ethhdr *veth = skb_vlan_eth_hdr(skb);
+ 
+ 	if (!eth_type_vlan(veth->h_vlan_proto))
+ 		return -EINVAL;
+@@ -727,7 +735,7 @@ static inline bool skb_vlan_tagged_multi(struct sk_buff *skb)
+ 		if (unlikely(!pskb_may_pull(skb, VLAN_ETH_HLEN)))
+ 			return false;
+ 
+-		veh = (struct vlan_ethhdr *)skb->data;
++		veh = skb_vlan_eth_hdr(skb);
+ 		protocol = veh->h_vlan_encapsulated_proto;
+ 	}
+ 
+diff --git a/include/net/addrconf.h b/include/net/addrconf.h
+index edba74a536839..4d0c4cf1d4c88 100644
+--- a/include/net/addrconf.h
++++ b/include/net/addrconf.h
+@@ -31,17 +31,22 @@ struct prefix_info {
+ 	__u8			length;
+ 	__u8			prefix_len;
+ 
++	union __packed {
++		__u8		flags;
++		struct __packed {
+ #if defined(__BIG_ENDIAN_BITFIELD)
+-	__u8			onlink : 1,
++			__u8	onlink : 1,
+ 			 	autoconf : 1,
+ 				reserved : 6;
+ #elif defined(__LITTLE_ENDIAN_BITFIELD)
+-	__u8			reserved : 6,
++			__u8	reserved : 6,
+ 				autoconf : 1,
+ 				onlink : 1;
+ #else
+ #error "Please fix <asm/byteorder.h>"
+ #endif
++		};
++	};
+ 	__be32			valid;
+ 	__be32			prefered;
+ 	__be32			reserved2;
+@@ -49,6 +54,9 @@ struct prefix_info {
+ 	struct in6_addr		prefix;
+ };
+ 
++/* rfc4861 4.6.2: IPv6 PIO is 32 bytes in size */
++static_assert(sizeof(struct prefix_info) == 32);
++
+ #include <linux/ipv6.h>
+ #include <linux/netdevice.h>
+ #include <net/if_inet6.h>
+diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h
+index e03ba8e80781a..d4d7f4d52b152 100644
+--- a/include/net/if_inet6.h
++++ b/include/net/if_inet6.h
+@@ -22,10 +22,6 @@
+ #define IF_RS_SENT	0x10
+ #define IF_READY	0x80000000
+ 
+-/* prefix flags */
+-#define IF_PREFIX_ONLINK	0x01
+-#define IF_PREFIX_AUTOCONF	0x02
+-
+ enum {
+ 	INET6_IFADDR_STATE_PREDAD,
+ 	INET6_IFADDR_STATE_DAD,
+diff --git a/kernel/cred.c b/kernel/cred.c
+index 421b1149c6516..54042ebed5d2a 100644
+--- a/kernel/cred.c
++++ b/kernel/cred.c
+@@ -98,17 +98,17 @@ static void put_cred_rcu(struct rcu_head *rcu)
+ 
+ #ifdef CONFIG_DEBUG_CREDENTIALS
+ 	if (cred->magic != CRED_MAGIC_DEAD ||
+-	    atomic_read(&cred->usage) != 0 ||
++	    atomic_long_read(&cred->usage) != 0 ||
+ 	    read_cred_subscribers(cred) != 0)
+ 		panic("CRED: put_cred_rcu() sees %p with"
+-		      " mag %x, put %p, usage %d, subscr %d\n",
++		      " mag %x, put %p, usage %ld, subscr %d\n",
+ 		      cred, cred->magic, cred->put_addr,
+-		      atomic_read(&cred->usage),
++		      atomic_long_read(&cred->usage),
+ 		      read_cred_subscribers(cred));
+ #else
+-	if (atomic_read(&cred->usage) != 0)
+-		panic("CRED: put_cred_rcu() sees %p with usage %d\n",
+-		      cred, atomic_read(&cred->usage));
++	if (atomic_long_read(&cred->usage) != 0)
++		panic("CRED: put_cred_rcu() sees %p with usage %ld\n",
++		      cred, atomic_long_read(&cred->usage));
+ #endif
+ 
+ 	security_cred_free(cred);
+@@ -131,11 +131,11 @@ static void put_cred_rcu(struct rcu_head *rcu)
+  */
+ void __put_cred(struct cred *cred)
+ {
+-	kdebug("__put_cred(%p{%d,%d})", cred,
+-	       atomic_read(&cred->usage),
++	kdebug("__put_cred(%p{%ld,%d})", cred,
++	       atomic_long_read(&cred->usage),
+ 	       read_cred_subscribers(cred));
+ 
+-	BUG_ON(atomic_read(&cred->usage) != 0);
++	BUG_ON(atomic_long_read(&cred->usage) != 0);
+ #ifdef CONFIG_DEBUG_CREDENTIALS
+ 	BUG_ON(read_cred_subscribers(cred) != 0);
+ 	cred->magic = CRED_MAGIC_DEAD;
+@@ -158,8 +158,8 @@ void exit_creds(struct task_struct *tsk)
+ {
+ 	struct cred *cred;
+ 
+-	kdebug("exit_creds(%u,%p,%p,{%d,%d})", tsk->pid, tsk->real_cred, tsk->cred,
+-	       atomic_read(&tsk->cred->usage),
++	kdebug("exit_creds(%u,%p,%p,{%ld,%d})", tsk->pid, tsk->real_cred, tsk->cred,
++	       atomic_long_read(&tsk->cred->usage),
+ 	       read_cred_subscribers(tsk->cred));
+ 
+ 	cred = (struct cred *) tsk->real_cred;
+@@ -218,7 +218,7 @@ struct cred *cred_alloc_blank(void)
+ 	if (!new)
+ 		return NULL;
+ 
+-	atomic_set(&new->usage, 1);
++	atomic_long_set(&new->usage, 1);
+ #ifdef CONFIG_DEBUG_CREDENTIALS
+ 	new->magic = CRED_MAGIC;
+ #endif
+@@ -265,7 +265,7 @@ struct cred *prepare_creds(void)
+ 	memcpy(new, old, sizeof(struct cred));
+ 
+ 	new->non_rcu = 0;
+-	atomic_set(&new->usage, 1);
++	atomic_long_set(&new->usage, 1);
+ 	set_cred_subscribers(new, 0);
+ 	get_group_info(new->group_info);
+ 	get_uid(new->user);
+@@ -348,8 +348,8 @@ int copy_creds(struct task_struct *p, unsigned long clone_flags)
+ 		p->real_cred = get_cred(p->cred);
+ 		get_cred(p->cred);
+ 		alter_cred_subscribers(p->cred, 2);
+-		kdebug("share_creds(%p{%d,%d})",
+-		       p->cred, atomic_read(&p->cred->usage),
++		kdebug("share_creds(%p{%ld,%d})",
++		       p->cred, atomic_long_read(&p->cred->usage),
+ 		       read_cred_subscribers(p->cred));
+ 		atomic_inc(&p->cred->user->processes);
+ 		return 0;
+@@ -439,8 +439,8 @@ int commit_creds(struct cred *new)
+ 	struct task_struct *task = current;
+ 	const struct cred *old = task->real_cred;
+ 
+-	kdebug("commit_creds(%p{%d,%d})", new,
+-	       atomic_read(&new->usage),
++	kdebug("commit_creds(%p{%ld,%d})", new,
++	       atomic_long_read(&new->usage),
+ 	       read_cred_subscribers(new));
+ 
+ 	BUG_ON(task->cred != old);
+@@ -449,7 +449,7 @@ int commit_creds(struct cred *new)
+ 	validate_creds(old);
+ 	validate_creds(new);
+ #endif
+-	BUG_ON(atomic_read(&new->usage) < 1);
++	BUG_ON(atomic_long_read(&new->usage) < 1);
+ 
+ 	get_cred(new); /* we will require a ref for the subj creds too */
+ 
+@@ -522,14 +522,14 @@ EXPORT_SYMBOL(commit_creds);
+  */
+ void abort_creds(struct cred *new)
+ {
+-	kdebug("abort_creds(%p{%d,%d})", new,
+-	       atomic_read(&new->usage),
++	kdebug("abort_creds(%p{%ld,%d})", new,
++	       atomic_long_read(&new->usage),
+ 	       read_cred_subscribers(new));
+ 
+ #ifdef CONFIG_DEBUG_CREDENTIALS
+ 	BUG_ON(read_cred_subscribers(new) != 0);
+ #endif
+-	BUG_ON(atomic_read(&new->usage) < 1);
++	BUG_ON(atomic_long_read(&new->usage) < 1);
+ 	put_cred(new);
+ }
+ EXPORT_SYMBOL(abort_creds);
+@@ -545,8 +545,8 @@ const struct cred *override_creds(const struct cred *new)
+ {
+ 	const struct cred *old = current->cred;
+ 
+-	kdebug("override_creds(%p{%d,%d})", new,
+-	       atomic_read(&new->usage),
++	kdebug("override_creds(%p{%ld,%d})", new,
++	       atomic_long_read(&new->usage),
+ 	       read_cred_subscribers(new));
+ 
+ 	validate_creds(old);
+@@ -568,8 +568,8 @@ const struct cred *override_creds(const struct cred *new)
+ 	rcu_assign_pointer(current->cred, new);
+ 	alter_cred_subscribers(old, -1);
+ 
+-	kdebug("override_creds() = %p{%d,%d}", old,
+-	       atomic_read(&old->usage),
++	kdebug("override_creds() = %p{%ld,%d}", old,
++	       atomic_long_read(&old->usage),
+ 	       read_cred_subscribers(old));
+ 	return old;
+ }
+@@ -586,8 +586,8 @@ void revert_creds(const struct cred *old)
+ {
+ 	const struct cred *override = current->cred;
+ 
+-	kdebug("revert_creds(%p{%d,%d})", old,
+-	       atomic_read(&old->usage),
++	kdebug("revert_creds(%p{%ld,%d})", old,
++	       atomic_long_read(&old->usage),
+ 	       read_cred_subscribers(old));
+ 
+ 	validate_creds(old);
+@@ -699,7 +699,7 @@ struct cred *prepare_kernel_cred(struct task_struct *daemon)
+ 
+ 	*new = *old;
+ 	new->non_rcu = 0;
+-	atomic_set(&new->usage, 1);
++	atomic_long_set(&new->usage, 1);
+ 	set_cred_subscribers(new, 0);
+ 	get_uid(new->user);
+ 	get_user_ns(new->user_ns);
+@@ -809,8 +809,8 @@ static void dump_invalid_creds(const struct cred *cred, const char *label,
+ 	       cred == tsk->cred ? "[eff]" : "");
+ 	printk(KERN_ERR "CRED: ->magic=%x, put_addr=%p\n",
+ 	       cred->magic, cred->put_addr);
+-	printk(KERN_ERR "CRED: ->usage=%d, subscr=%d\n",
+-	       atomic_read(&cred->usage),
++	printk(KERN_ERR "CRED: ->usage=%ld, subscr=%d\n",
++	       atomic_long_read(&cred->usage),
+ 	       read_cred_subscribers(cred));
+ 	printk(KERN_ERR "CRED: ->*uid = { %d,%d,%d,%d }\n",
+ 		from_kuid_munged(&init_user_ns, cred->uid),
+@@ -882,9 +882,9 @@ EXPORT_SYMBOL(__validate_process_creds);
+  */
+ void validate_creds_for_do_exit(struct task_struct *tsk)
+ {
+-	kdebug("validate_creds_for_do_exit(%p,%p{%d,%d})",
++	kdebug("validate_creds_for_do_exit(%p,%p{%ld,%d})",
+ 	       tsk->real_cred, tsk->cred,
+-	       atomic_read(&tsk->cred->usage),
++	       atomic_long_read(&tsk->cred->usage),
+ 	       read_cred_subscribers(tsk->cred));
+ 
+ 	__validate_process_creds(tsk, __FILE__, __LINE__);
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 0a6855d648031..afedd008e0afd 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -2039,6 +2039,16 @@ static bool perf_event_validate_size(struct perf_event *event)
+ 				   group_leader->nr_siblings + 1) > 16*1024)
+ 		return false;
+ 
++	/*
++	 * When creating a new group leader, group_leader->ctx is initialized
++	 * after the size has been validated, but we cannot safely use
++	 * for_each_sibling_event() until group_leader->ctx is set. A new group
++	 * leader cannot have any siblings yet, so we can safely skip checking
++	 * the non-existent siblings.
++	 */
++	if (event == group_leader)
++		return true;
++
+ 	for_each_sibling_event(sibling, group_leader) {
+ 		if (__perf_event_read_size(sibling->attr.read_format,
+ 					   group_leader->nr_siblings + 1) > 16*1024)
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 7e1148aafd284..364fa91ab33e4 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -703,6 +703,9 @@ static int rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
+ 	unsigned long cnt2, top2, bottom2;
+ 	u64 val;
+ 
++	/* Any interruptions in this function should cause a failure */
++	cnt = local_read(&t->cnt);
++
+ 	/* The cmpxchg always fails if it interrupted an update */
+ 	 if (!__rb_time_read(t, &val, &cnt2))
+ 		 return false;
+@@ -710,7 +713,6 @@ static int rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
+ 	 if (val != expect)
+ 		 return false;
+ 
+-	 cnt = local_read(&t->cnt);
+ 	 if ((cnt & 3) != cnt2)
+ 		 return false;
+ 
+@@ -1667,6 +1669,8 @@ static void rb_free_cpu_buffer(struct ring_buffer_per_cpu *cpu_buffer)
+ 		free_buffer_page(bpage);
+ 	}
+ 
++	free_page((unsigned long)cpu_buffer->free_page);
++
+ 	kfree(cpu_buffer);
+ }
+ 
+@@ -2273,7 +2277,7 @@ rb_iter_head_event(struct ring_buffer_iter *iter)
+ 	 */
+ 	barrier();
+ 
+-	if ((iter->head + length) > commit || length > BUF_MAX_DATA_SIZE)
++	if ((iter->head + length) > commit || length > BUF_PAGE_SIZE)
+ 		/* Writer corrupted the read? */
+ 		goto reset;
+ 
+@@ -3299,7 +3303,10 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer,
+ 		 * absolute timestamp.
+ 		 * Don't bother if this is the start of a new page (w == 0).
+ 		 */
+-		if (unlikely(!a_ok || !b_ok || (info->before != info->after && w))) {
++		if (!w) {
++			/* Use the sub-buffer timestamp */
++			info->delta = 0;
++		} else if (unlikely(!a_ok || !b_ok || info->before != info->after)) {
+ 			info->add_timestamp |= RB_ADD_STAMP_FORCE | RB_ADD_STAMP_EXTEND;
+ 			info->length += RB_LEN_TIME_EXTEND;
+ 		} else {
+@@ -3454,6 +3461,8 @@ rb_reserve_next_event(struct trace_buffer *buffer,
+ 	if (ring_buffer_time_stamp_abs(cpu_buffer->buffer)) {
+ 		add_ts_default = RB_ADD_STAMP_ABSOLUTE;
+ 		info.length += RB_LEN_TIME_EXTEND;
++		if (info.length > BUF_MAX_DATA_SIZE)
++			goto out_fail;
+ 	} else {
+ 		add_ts_default = RB_ADD_STAMP_NONE;
+ 	}
+@@ -4834,7 +4843,8 @@ ring_buffer_read_prepare(struct trace_buffer *buffer, int cpu, gfp_t flags)
+ 	if (!iter)
+ 		return NULL;
+ 
+-	iter->event = kmalloc(BUF_MAX_DATA_SIZE, flags);
++	/* Holds the entire event: data and meta data */
++	iter->event = kmalloc(BUF_PAGE_SIZE, flags);
+ 	if (!iter->event) {
+ 		kfree(iter);
+ 		return NULL;
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 4e0411b19ef96..b2527624481de 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -5905,7 +5905,7 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
+ 	if (!tr->array_buffer.buffer)
+ 		return 0;
+ 
+-	/* Do not allow tracing while resizng ring buffer */
++	/* Do not allow tracing while resizing ring buffer */
+ 	tracing_stop_tr(tr);
+ 
+ 	ret = ring_buffer_resize(tr->array_buffer.buffer, size, cpu);
+@@ -5913,7 +5913,7 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
+ 		goto out_start;
+ 
+ #ifdef CONFIG_TRACER_MAX_TRACE
+-	if (!tr->current_trace->use_max_tr)
++	if (!tr->allocated_snapshot)
+ 		goto out;
+ 
+ 	ret = ring_buffer_resize(tr->max_buffer.buffer, size, cpu);
+diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c
+index c94b212d8e7ca..46adb8cefccf2 100644
+--- a/net/appletalk/ddp.c
++++ b/net/appletalk/ddp.c
+@@ -1811,15 +1811,14 @@ static int atalk_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 		break;
+ 	}
+ 	case TIOCINQ: {
+-		/*
+-		 * These two are safe on a single CPU system as only
+-		 * user tasks fiddle here
+-		 */
+-		struct sk_buff *skb = skb_peek(&sk->sk_receive_queue);
++		struct sk_buff *skb;
+ 		long amount = 0;
+ 
++		spin_lock_irq(&sk->sk_receive_queue.lock);
++		skb = skb_peek(&sk->sk_receive_queue);
+ 		if (skb)
+ 			amount = skb->len - sizeof(struct ddpehdr);
++		spin_unlock_irq(&sk->sk_receive_queue.lock);
+ 		rc = put_user(amount, (int __user *)argp);
+ 		break;
+ 	}
+diff --git a/net/atm/ioctl.c b/net/atm/ioctl.c
+index 838ebf0cabbfb..f81f8d56f5c0c 100644
+--- a/net/atm/ioctl.c
++++ b/net/atm/ioctl.c
+@@ -73,14 +73,17 @@ static int do_vcc_ioctl(struct socket *sock, unsigned int cmd,
+ 	case SIOCINQ:
+ 	{
+ 		struct sk_buff *skb;
++		int amount;
+ 
+ 		if (sock->state != SS_CONNECTED) {
+ 			error = -EINVAL;
+ 			goto done;
+ 		}
++		spin_lock_irq(&sk->sk_receive_queue.lock);
+ 		skb = skb_peek(&sk->sk_receive_queue);
+-		error = put_user(skb ? skb->len : 0,
+-				 (int __user *)argp) ? -EFAULT : 0;
++		amount = skb ? skb->len : 0;
++		spin_unlock_irq(&sk->sk_receive_queue.lock);
++		error = put_user(amount, (int __user *)argp) ? -EFAULT : 0;
+ 		goto done;
+ 	}
+ 	case ATM_SETSC:
+diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
+index 7ac16d7b94a2f..0e6d0a5e68413 100644
+--- a/net/batman-adv/soft-interface.c
++++ b/net/batman-adv/soft-interface.c
+@@ -454,7 +454,7 @@ void batadv_interface_rx(struct net_device *soft_iface,
+ 		if (!pskb_may_pull(skb, VLAN_ETH_HLEN))
+ 			goto dropped;
+ 
+-		vhdr = (struct vlan_ethhdr *)skb->data;
++		vhdr = skb_vlan_eth_hdr(skb);
+ 
+ 		/* drop batman-in-batman packets to prevent loops */
+ 		if (vhdr->h_vlan_encapsulated_proto != htons(ETH_P_BATMAN))
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index f8ad8465f76cb..f0df14782ee01 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -3171,7 +3171,13 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs)
+ 	if (skb_still_in_host_queue(sk, skb))
+ 		return -EBUSY;
+ 
++start:
+ 	if (before(TCP_SKB_CB(skb)->seq, tp->snd_una)) {
++		if (unlikely(TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN)) {
++			TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_SYN;
++			TCP_SKB_CB(skb)->seq++;
++			goto start;
++		}
+ 		if (unlikely(before(TCP_SKB_CB(skb)->end_seq, tp->snd_una))) {
+ 			WARN_ON_ONCE(1);
+ 			return -EINVAL;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 193e5f2757330..79787a1f5ab34 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -6034,11 +6034,7 @@ static int inet6_fill_prefix(struct sk_buff *skb, struct inet6_dev *idev,
+ 	pmsg->prefix_len = pinfo->prefix_len;
+ 	pmsg->prefix_type = pinfo->type;
+ 	pmsg->prefix_pad3 = 0;
+-	pmsg->prefix_flags = 0;
+-	if (pinfo->onlink)
+-		pmsg->prefix_flags |= IF_PREFIX_ONLINK;
+-	if (pinfo->autoconf)
+-		pmsg->prefix_flags |= IF_PREFIX_AUTOCONF;
++	pmsg->prefix_flags = pinfo->flags;
+ 
+ 	if (nla_put(skb, PREFIX_ADDRESS, sizeof(pinfo->prefix), &pinfo->prefix))
+ 		goto nla_put_failure;
+diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c
+index cb69a299f10c5..c9f89f035ccff 100644
+--- a/net/netfilter/nft_exthdr.c
++++ b/net/netfilter/nft_exthdr.c
+@@ -214,7 +214,7 @@ static void nft_exthdr_tcp_eval(const struct nft_expr *expr,
+ 
+ 		offset = i + priv->offset;
+ 		if (priv->flags & NFT_EXTHDR_F_PRESENT) {
+-			*dest = 1;
++			nft_reg_store8(dest, 1);
+ 		} else {
+ 			if (priv->len % NFT_REG32_SIZE)
+ 				dest[priv->len / NFT_REG32_SIZE] = 0;
+diff --git a/net/netfilter/nft_fib.c b/net/netfilter/nft_fib.c
+index b10ce732b337c..1fd4b2054e8f7 100644
+--- a/net/netfilter/nft_fib.c
++++ b/net/netfilter/nft_fib.c
+@@ -140,11 +140,15 @@ void nft_fib_store_result(void *reg, const struct nft_fib *priv,
+ 	switch (priv->result) {
+ 	case NFT_FIB_RESULT_OIF:
+ 		index = dev ? dev->ifindex : 0;
+-		*dreg = (priv->flags & NFTA_FIB_F_PRESENT) ? !!index : index;
++		if (priv->flags & NFTA_FIB_F_PRESENT)
++			nft_reg_store8(dreg, !!index);
++		else
++			*dreg = index;
++
+ 		break;
+ 	case NFT_FIB_RESULT_OIFNAME:
+ 		if (priv->flags & NFTA_FIB_F_PRESENT)
+-			*dreg = !!dev;
++			nft_reg_store8(dreg, !!dev);
+ 		else
+ 			strncpy(reg, dev ? dev->name : "", IFNAMSIZ);
+ 		break;
+diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
+index 86c93cf1744b0..b3e7a92f1ec19 100644
+--- a/net/rose/af_rose.c
++++ b/net/rose/af_rose.c
+@@ -1307,9 +1307,11 @@ static int rose_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 	case TIOCINQ: {
+ 		struct sk_buff *skb;
+ 		long amount = 0L;
+-		/* These two are safe on a single CPU system as only user tasks fiddle here */
++
++		spin_lock_irq(&sk->sk_receive_queue.lock);
+ 		if ((skb = skb_peek(&sk->sk_receive_queue)) != NULL)
+ 			amount = skb->len;
++		spin_unlock_irq(&sk->sk_receive_queue.lock);
+ 		return put_user(amount, (unsigned int __user *) argp);
+ 	}
+ 
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index c9ee9259af48a..080e8f2bf9855 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -436,7 +436,7 @@ static s64 virtio_transport_has_space(struct vsock_sock *vsk)
+ 	struct virtio_vsock_sock *vvs = vsk->trans;
+ 	s64 bytes;
+ 
+-	bytes = vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->peer_fwd_cnt);
++	bytes = (s64)vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->peer_fwd_cnt);
+ 	if (bytes < 0)
+ 		bytes = 0;
+ 
+diff --git a/scripts/sign-file.c b/scripts/sign-file.c
+index 7434e9ea926e2..12acc70e5a7a5 100644
+--- a/scripts/sign-file.c
++++ b/scripts/sign-file.c
+@@ -322,7 +322,7 @@ int main(int argc, char **argv)
+ 				     CMS_NOSMIMECAP | use_keyid |
+ 				     use_signed_attrs),
+ 		    "CMS_add1_signer");
+-		ERR(CMS_final(cms, bm, NULL, CMS_NOCERTS | CMS_BINARY) < 0,
++		ERR(CMS_final(cms, bm, NULL, CMS_NOCERTS | CMS_BINARY) != 1,
+ 		    "CMS_final");
+ 
+ #else
+@@ -341,10 +341,10 @@ int main(int argc, char **argv)
+ 			b = BIO_new_file(sig_file_name, "wb");
+ 			ERR(!b, "%s", sig_file_name);
+ #ifndef USE_PKCS7
+-			ERR(i2d_CMS_bio_stream(b, cms, NULL, 0) < 0,
++			ERR(i2d_CMS_bio_stream(b, cms, NULL, 0) != 1,
+ 			    "%s", sig_file_name);
+ #else
+-			ERR(i2d_PKCS7_bio(b, pkcs7) < 0,
++			ERR(i2d_PKCS7_bio(b, pkcs7) != 1,
+ 			    "%s", sig_file_name);
+ #endif
+ 			BIO_free(b);
+@@ -374,9 +374,9 @@ int main(int argc, char **argv)
+ 
+ 	if (!raw_sig) {
+ #ifndef USE_PKCS7
+-		ERR(i2d_CMS_bio_stream(bd, cms, NULL, 0) < 0, "%s", dest_name);
++		ERR(i2d_CMS_bio_stream(bd, cms, NULL, 0) != 1, "%s", dest_name);
+ #else
+-		ERR(i2d_PKCS7_bio(bd, pkcs7) < 0, "%s", dest_name);
++		ERR(i2d_PKCS7_bio(bd, pkcs7) != 1, "%s", dest_name);
+ #endif
+ 	} else {
+ 		BIO *b;
+@@ -396,7 +396,7 @@ int main(int argc, char **argv)
+ 	ERR(BIO_write(bd, &sig_info, sizeof(sig_info)) < 0, "%s", dest_name);
+ 	ERR(BIO_write(bd, magic_number, sizeof(magic_number) - 1) < 0, "%s", dest_name);
+ 
+-	ERR(BIO_free(bd) < 0, "%s", dest_name);
++	ERR(BIO_free(bd) != 1, "%s", dest_name);
+ 
+ 	/* Finally, if we're signing in place, replace the original. */
+ 	if (replace_orig)
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index c19afe4861949..86a43b955db91 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1967,6 +1967,8 @@ static const struct snd_pci_quirk force_connect_list[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x871a, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x8711, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x8715, "HP", 1),
++	SND_PCI_QUIRK(0x1043, 0x86ae, "ASUS", 1),  /* Z170 PRO */
++	SND_PCI_QUIRK(0x1043, 0x86c7, "ASUS", 1),  /* Z170M PLUS */
+ 	SND_PCI_QUIRK(0x1462, 0xec94, "MS-7C94", 1),
+ 	{}
+ };
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d9140630cf3d9..0743fcd747079 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8986,6 +8986,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x841c, "HP Pavilion 15-CK0xx", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++	SND_PCI_QUIRK(0x103c, 0x84ae, "HP 15-db0403ng", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-01-05 14:29 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-01-05 14:29 UTC (permalink / raw
  To: gentoo-commits

commit:     4e840e6b00bef81289a9642bd0203617cc0d7b89
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan  5 14:29:27 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan  5 14:29:27 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4e840e6b

Linux patch 5.10.206

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1205_linux-5.10.206.patch | 3471 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3475 insertions(+)

diff --git a/0000_README b/0000_README
index 161db05e..45acc0d1 100644
--- a/0000_README
+++ b/0000_README
@@ -863,6 +863,10 @@ Patch:  1204_linux-5.10.205.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.205
 
+Patch:  1205_linux-5.10.206.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.206
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1205_linux-5.10.206.patch b/1205_linux-5.10.206.patch
new file mode 100644
index 00000000..aec73015
--- /dev/null
+++ b/1205_linux-5.10.206.patch
@@ -0,0 +1,3471 @@
+diff --git a/Documentation/devicetree/bindings/nvmem/mxs-ocotp.yaml b/Documentation/devicetree/bindings/nvmem/mxs-ocotp.yaml
+index ff317fd7c15bf..2e1fcff3c2801 100644
+--- a/Documentation/devicetree/bindings/nvmem/mxs-ocotp.yaml
++++ b/Documentation/devicetree/bindings/nvmem/mxs-ocotp.yaml
+@@ -14,9 +14,11 @@ allOf:
+ 
+ properties:
+   compatible:
+-    enum:
+-      - fsl,imx23-ocotp
+-      - fsl,imx28-ocotp
++    items:
++      - enum:
++          - fsl,imx23-ocotp
++          - fsl,imx28-ocotp
++      - const: fsl,ocotp
+ 
+   "#address-cells":
+     const: 1
+@@ -40,7 +42,7 @@ additionalProperties: false
+ examples:
+   - |
+     ocotp: efuse@8002c000 {
+-        compatible = "fsl,imx28-ocotp";
++        compatible = "fsl,imx28-ocotp", "fsl,ocotp";
+         #address-cells = <1>;
+         #size-cells = <1>;
+         reg = <0x8002c000 0x2000>;
+diff --git a/Makefile b/Makefile
+index 5ba86be42e7c9..134fba99314d9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 205
++SUBLEVEL = 206
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/am33xx.dtsi b/arch/arm/boot/dts/am33xx.dtsi
+index f09a61cac2dc9..3d064eb290997 100644
+--- a/arch/arm/boot/dts/am33xx.dtsi
++++ b/arch/arm/boot/dts/am33xx.dtsi
+@@ -347,6 +347,7 @@
+ 					<SYSC_IDLE_NO>,
+ 					<SYSC_IDLE_SMART>,
+ 					<SYSC_IDLE_SMART_WKUP>;
++			ti,sysc-delay-us = <2>;
+ 			clocks = <&l3s_clkctrl AM3_L3S_USB_OTG_HS_CLKCTRL 0>;
+ 			clock-names = "fck";
+ 			#address-cells = <1>;
+diff --git a/arch/arm/mach-omap2/id.c b/arch/arm/mach-omap2/id.c
+index 59755b5a1ad7a..75091aa7269ae 100644
+--- a/arch/arm/mach-omap2/id.c
++++ b/arch/arm/mach-omap2/id.c
+@@ -793,11 +793,16 @@ void __init omap_soc_device_init(void)
+ 
+ 	soc_dev_attr->machine  = soc_name;
+ 	soc_dev_attr->family   = omap_get_family();
++	if (!soc_dev_attr->family) {
++		kfree(soc_dev_attr);
++		return;
++	}
+ 	soc_dev_attr->revision = soc_rev;
+ 	soc_dev_attr->custom_attr_group = omap_soc_groups[0];
+ 
+ 	soc_dev = soc_device_register(soc_dev_attr);
+ 	if (IS_ERR(soc_dev)) {
++		kfree(soc_dev_attr->family);
+ 		kfree(soc_dev_attr);
+ 		return;
+ 	}
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 18ebacf298891..57839f63074f7 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -468,7 +468,6 @@ config MACH_LOONGSON2EF
+ 
+ config MACH_LOONGSON64
+ 	bool "Loongson 64-bit family of machines"
+-	select ARCH_DMA_DEFAULT_COHERENT
+ 	select ARCH_SPARSEMEM_ENABLE
+ 	select ARCH_MIGHT_HAVE_PC_PARPORT
+ 	select ARCH_MIGHT_HAVE_PC_SERIO
+@@ -1380,7 +1379,6 @@ config CPU_LOONGSON64
+ 	select CPU_SUPPORTS_MSA
+ 	select CPU_DIEI_BROKEN if !LOONGSON3_ENHANCEMENT
+ 	select CPU_MIPSR2_IRQ_VI
+-	select DMA_NONCOHERENT
+ 	select WEAK_ORDERING
+ 	select WEAK_REORDERING_BEYOND_LLSC
+ 	select MIPS_ASID_BITS_VARIABLE
+diff --git a/arch/mips/include/asm/mach-loongson64/boot_param.h b/arch/mips/include/asm/mach-loongson64/boot_param.h
+index de0bd14d798aa..afc92b7a61c60 100644
+--- a/arch/mips/include/asm/mach-loongson64/boot_param.h
++++ b/arch/mips/include/asm/mach-loongson64/boot_param.h
+@@ -117,8 +117,7 @@ struct irq_source_routing_table {
+ 	u64 pci_io_start_addr;
+ 	u64 pci_io_end_addr;
+ 	u64 pci_config_addr;
+-	u16 dma_mask_bits;
+-	u16 dma_noncoherent;
++	u32 dma_mask_bits;
+ } __packed;
+ 
+ struct interface_info {
+diff --git a/arch/mips/loongson64/env.c b/arch/mips/loongson64/env.c
+index a59bae36f86a7..134cb8e9efc21 100644
+--- a/arch/mips/loongson64/env.c
++++ b/arch/mips/loongson64/env.c
+@@ -13,8 +13,6 @@
+  * Copyright (C) 2009 Lemote Inc.
+  * Author: Wu Zhangjin, wuzhangjin@gmail.com
+  */
+-
+-#include <linux/dma-map-ops.h>
+ #include <linux/export.h>
+ #include <linux/pci_ids.h>
+ #include <asm/bootinfo.h>
+@@ -133,14 +131,8 @@ void __init prom_init_env(void)
+ 	loongson_sysconf.pci_io_base = eirq_source->pci_io_start_addr;
+ 	loongson_sysconf.dma_mask_bits = eirq_source->dma_mask_bits;
+ 	if (loongson_sysconf.dma_mask_bits < 32 ||
+-			loongson_sysconf.dma_mask_bits > 64) {
++		loongson_sysconf.dma_mask_bits > 64)
+ 		loongson_sysconf.dma_mask_bits = 32;
+-		dma_default_coherent = true;
+-	} else {
+-		dma_default_coherent = !eirq_source->dma_noncoherent;
+-	}
+-
+-	pr_info("Firmware: Coherent DMA: %s\n", dma_default_coherent ? "on" : "off");
+ 
+ 	loongson_sysconf.restart_addr = boot_p->reset_system.ResetWarm;
+ 	loongson_sysconf.poweroff_addr = boot_p->reset_system.Shutdown;
+diff --git a/arch/s390/include/asm/fpu/api.h b/arch/s390/include/asm/fpu/api.h
+index 34a7ae68485c6..be16a6c0f1276 100644
+--- a/arch/s390/include/asm/fpu/api.h
++++ b/arch/s390/include/asm/fpu/api.h
+@@ -76,7 +76,7 @@ static inline int test_fp_ctl(u32 fpc)
+ #define KERNEL_VXR_HIGH		(KERNEL_VXR_V16V23|KERNEL_VXR_V24V31)
+ 
+ #define KERNEL_VXR		(KERNEL_VXR_LOW|KERNEL_VXR_HIGH)
+-#define KERNEL_FPR		(KERNEL_FPC|KERNEL_VXR_V0V7)
++#define KERNEL_FPR		(KERNEL_FPC|KERNEL_VXR_LOW)
+ 
+ struct kernel_fpu;
+ 
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index d58621f0fe515..9e0a3daa838c2 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -1093,8 +1093,8 @@ void __init_or_module text_poke_early(void *addr, const void *opcode,
+ 	} else {
+ 		local_irq_save(flags);
+ 		memcpy(addr, opcode, len);
+-		local_irq_restore(flags);
+ 		sync_core();
++		local_irq_restore(flags);
+ 
+ 		/*
+ 		 * Could also do a CLFLUSH here to speed up CPU recovery; but
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index ef8c7bfd79a8f..b1aa793b9eeda 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -2093,13 +2093,23 @@ static int sysc_reset(struct sysc *ddata)
+ 		sysc_val = sysc_read_sysconfig(ddata);
+ 		sysc_val |= sysc_mask;
+ 		sysc_write(ddata, sysc_offset, sysc_val);
+-		/* Flush posted write */
++
++		/*
++		 * Some devices need a delay before reading registers
++		 * after reset. Presumably a srst_udelay is not needed
++		 * for devices that use a rstctrl register reset.
++		 */
++		if (ddata->cfg.srst_udelay)
++			fsleep(ddata->cfg.srst_udelay);
++
++		/*
++		 * Flush posted write. For devices needing srst_udelay
++		 * this should trigger an interconnect error if the
++		 * srst_udelay value is needed but not configured.
++		 */
+ 		sysc_val = sysc_read_sysconfig(ddata);
+ 	}
+ 
+-	if (ddata->cfg.srst_udelay)
+-		fsleep(ddata->cfg.srst_udelay);
+-
+ 	if (ddata->post_reset_quirk)
+ 		ddata->post_reset_quirk(ddata);
+ 
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index a1bc8a2c3eadc..b915dfcff00d8 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -250,18 +250,46 @@ static u32 aspeed_i2c_slave_irq(struct aspeed_i2c_bus *bus, u32 irq_status)
+ 	if (!slave)
+ 		return 0;
+ 
+-	command = readl(bus->base + ASPEED_I2C_CMD_REG);
++	/*
++	 * Handle stop conditions early, prior to SLAVE_MATCH. Some masters may drive
++	 * transfers with low enough latency between the nak/stop phase of the current
++	 * command and the start/address phase of the following command that the
++	 * interrupts are coalesced by the time we process them.
++	 */
++	if (irq_status & ASPEED_I2CD_INTR_NORMAL_STOP) {
++		irq_handled |= ASPEED_I2CD_INTR_NORMAL_STOP;
++		bus->slave_state = ASPEED_I2C_SLAVE_STOP;
++	}
++
++	if (irq_status & ASPEED_I2CD_INTR_TX_NAK &&
++	    bus->slave_state == ASPEED_I2C_SLAVE_READ_PROCESSED) {
++		irq_handled |= ASPEED_I2CD_INTR_TX_NAK;
++		bus->slave_state = ASPEED_I2C_SLAVE_STOP;
++	}
++
++	/* Propagate any stop conditions to the slave implementation. */
++	if (bus->slave_state == ASPEED_I2C_SLAVE_STOP) {
++		i2c_slave_event(slave, I2C_SLAVE_STOP, &value);
++		bus->slave_state = ASPEED_I2C_SLAVE_INACTIVE;
++	}
+ 
+-	/* Slave was requested, restart state machine. */
++	/*
++	 * Now that we've dealt with any potentially coalesced stop conditions,
++	 * address any start conditions.
++	 */
+ 	if (irq_status & ASPEED_I2CD_INTR_SLAVE_MATCH) {
+ 		irq_handled |= ASPEED_I2CD_INTR_SLAVE_MATCH;
+ 		bus->slave_state = ASPEED_I2C_SLAVE_START;
+ 	}
+ 
+-	/* Slave is not currently active, irq was for someone else. */
++	/*
++	 * If the slave has been stopped and not started then slave interrupt
++	 * handling is complete.
++	 */
+ 	if (bus->slave_state == ASPEED_I2C_SLAVE_INACTIVE)
+ 		return irq_handled;
+ 
++	command = readl(bus->base + ASPEED_I2C_CMD_REG);
+ 	dev_dbg(bus->dev, "slave irq status 0x%08x, cmd 0x%08x\n",
+ 		irq_status, command);
+ 
+@@ -280,17 +308,6 @@ static u32 aspeed_i2c_slave_irq(struct aspeed_i2c_bus *bus, u32 irq_status)
+ 		irq_handled |= ASPEED_I2CD_INTR_RX_DONE;
+ 	}
+ 
+-	/* Slave was asked to stop. */
+-	if (irq_status & ASPEED_I2CD_INTR_NORMAL_STOP) {
+-		irq_handled |= ASPEED_I2CD_INTR_NORMAL_STOP;
+-		bus->slave_state = ASPEED_I2C_SLAVE_STOP;
+-	}
+-	if (irq_status & ASPEED_I2CD_INTR_TX_NAK &&
+-	    bus->slave_state == ASPEED_I2C_SLAVE_READ_PROCESSED) {
+-		irq_handled |= ASPEED_I2CD_INTR_TX_NAK;
+-		bus->slave_state = ASPEED_I2C_SLAVE_STOP;
+-	}
+-
+ 	switch (bus->slave_state) {
+ 	case ASPEED_I2C_SLAVE_READ_REQUESTED:
+ 		if (unlikely(irq_status & ASPEED_I2CD_INTR_TX_ACK))
+@@ -319,8 +336,7 @@ static u32 aspeed_i2c_slave_irq(struct aspeed_i2c_bus *bus, u32 irq_status)
+ 		i2c_slave_event(slave, I2C_SLAVE_WRITE_RECEIVED, &value);
+ 		break;
+ 	case ASPEED_I2C_SLAVE_STOP:
+-		i2c_slave_event(slave, I2C_SLAVE_STOP, &value);
+-		bus->slave_state = ASPEED_I2C_SLAVE_INACTIVE;
++		/* Stop event handling is done early. Unreachable. */
+ 		break;
+ 	case ASPEED_I2C_SLAVE_START:
+ 		/* Slave was just started. Waiting for the next event. */;
+diff --git a/drivers/iio/adc/ti_am335x_adc.c b/drivers/iio/adc/ti_am335x_adc.c
+index e946903b09936..0ae57d6cfc494 100644
+--- a/drivers/iio/adc/ti_am335x_adc.c
++++ b/drivers/iio/adc/ti_am335x_adc.c
+@@ -640,8 +640,10 @@ static int tiadc_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, indio_dev);
+ 
+ 	err = tiadc_request_dma(pdev, adc_dev);
+-	if (err && err == -EPROBE_DEFER)
++	if (err && err != -ENODEV) {
++		dev_err_probe(&pdev->dev, err, "DMA request failed\n");
+ 		goto err_dma;
++	}
+ 
+ 	return 0;
+ 
+diff --git a/drivers/iio/common/ms_sensors/ms_sensors_i2c.c b/drivers/iio/common/ms_sensors/ms_sensors_i2c.c
+index b9e2038d05ef4..74806807a0716 100644
+--- a/drivers/iio/common/ms_sensors/ms_sensors_i2c.c
++++ b/drivers/iio/common/ms_sensors/ms_sensors_i2c.c
+@@ -15,8 +15,8 @@
+ /* Conversion times in us */
+ static const u16 ms_sensors_ht_t_conversion_time[] = { 50000, 25000,
+ 						       13000, 7000 };
+-static const u16 ms_sensors_ht_h_conversion_time[] = { 16000, 3000,
+-						       5000, 8000 };
++static const u16 ms_sensors_ht_h_conversion_time[] = { 16000, 5000,
++						       3000, 8000 };
+ static const u16 ms_sensors_tp_conversion_time[] = { 500, 1100, 2100,
+ 						     4100, 8220, 16440 };
+ 
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+index ae391ec4a7275..533c71a0dd01a 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+@@ -707,13 +707,13 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev,
+ 			ret = inv_mpu6050_sensor_show(st, st->reg->gyro_offset,
+ 						chan->channel2, val);
+ 			mutex_unlock(&st->lock);
+-			return IIO_VAL_INT;
++			return ret;
+ 		case IIO_ACCEL:
+ 			mutex_lock(&st->lock);
+ 			ret = inv_mpu6050_sensor_show(st, st->reg->accl_offset,
+ 						chan->channel2, val);
+ 			mutex_unlock(&st->lock);
+-			return IIO_VAL_INT;
++			return ret;
+ 
+ 		default:
+ 			return -EINVAL;
+diff --git a/drivers/input/keyboard/ipaq-micro-keys.c b/drivers/input/keyboard/ipaq-micro-keys.c
+index e3f9e445e8800..fe5a9c54ad583 100644
+--- a/drivers/input/keyboard/ipaq-micro-keys.c
++++ b/drivers/input/keyboard/ipaq-micro-keys.c
+@@ -105,6 +105,9 @@ static int micro_key_probe(struct platform_device *pdev)
+ 	keys->codes = devm_kmemdup(&pdev->dev, micro_keycodes,
+ 			   keys->input->keycodesize * keys->input->keycodemax,
+ 			   GFP_KERNEL);
++	if (!keys->codes)
++		return -ENOMEM;
++
+ 	keys->input->keycode = keys->codes;
+ 
+ 	__set_bit(EV_KEY, keys->input->evbit);
+diff --git a/drivers/input/misc/soc_button_array.c b/drivers/input/misc/soc_button_array.c
+index 67a134c8448d2..b9ef03af5263e 100644
+--- a/drivers/input/misc/soc_button_array.c
++++ b/drivers/input/misc/soc_button_array.c
+@@ -299,6 +299,11 @@ static int soc_button_parse_btn_desc(struct device *dev,
+ 		info->name = "power";
+ 		info->event_code = KEY_POWER;
+ 		info->wakeup = true;
++	} else if (upage == 0x01 && usage == 0xc6) {
++		info->name = "airplane mode switch";
++		info->event_type = EV_SW;
++		info->event_code = SW_RFKILL_ALL;
++		info->active_low = false;
+ 	} else if (upage == 0x01 && usage == 0xca) {
+ 		info->name = "rotation lock switch";
+ 		info->event_type = EV_SW;
+diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
+index 7db6d0fc6ec2e..9390e22106631 100644
+--- a/drivers/interconnect/core.c
++++ b/drivers/interconnect/core.c
+@@ -380,6 +380,9 @@ struct icc_node_data *of_icc_get_from_provider(struct of_phandle_args *spec)
+ 	}
+ 	mutex_unlock(&icc_lock);
+ 
++	if (!node)
++		return ERR_PTR(-EINVAL);
++
+ 	if (IS_ERR(node))
+ 		return ERR_CAST(node);
+ 
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 1667ac1406098..62cae34ca3b43 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -1657,11 +1657,12 @@ static void integrity_metadata(struct work_struct *w)
+ 		sectors_to_process = dio->range.n_sectors;
+ 
+ 		__bio_for_each_segment(bv, bio, iter, dio->bio_details.bi_iter) {
++			struct bio_vec bv_copy = bv;
+ 			unsigned pos;
+ 			char *mem, *checksums_ptr;
+ 
+ again:
+-			mem = (char *)kmap_atomic(bv.bv_page) + bv.bv_offset;
++			mem = (char *)kmap_atomic(bv_copy.bv_page) + bv_copy.bv_offset;
+ 			pos = 0;
+ 			checksums_ptr = checksums;
+ 			do {
+@@ -1670,7 +1671,7 @@ again:
+ 				sectors_to_process -= ic->sectors_per_block;
+ 				pos += ic->sectors_per_block << SECTOR_SHIFT;
+ 				sector += ic->sectors_per_block;
+-			} while (pos < bv.bv_len && sectors_to_process && checksums != checksums_onstack);
++			} while (pos < bv_copy.bv_len && sectors_to_process && checksums != checksums_onstack);
+ 			kunmap_atomic(mem);
+ 
+ 			r = dm_integrity_rw_tag(ic, checksums, &dio->metadata_block, &dio->metadata_offset,
+@@ -1691,9 +1692,9 @@ again:
+ 			if (!sectors_to_process)
+ 				break;
+ 
+-			if (unlikely(pos < bv.bv_len)) {
+-				bv.bv_offset += pos;
+-				bv.bv_len -= pos;
++			if (unlikely(pos < bv_copy.bv_len)) {
++				bv_copy.bv_offset += pos;
++				bv_copy.bv_len -= pos;
+ 				goto again;
+ 			}
+ 		}
+diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
+index 696ce3c5a8ba3..e8772972bc2f0 100644
+--- a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
++++ b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
+@@ -866,10 +866,13 @@ static int atl1e_setup_ring_resources(struct atl1e_adapter *adapter)
+ 		netdev_err(adapter->netdev, "offset(%d) > ring size(%d) !!\n",
+ 			   offset, adapter->ring_size);
+ 		err = -1;
+-		goto failed;
++		goto free_buffer;
+ 	}
+ 
+ 	return 0;
++free_buffer:
++	kfree(tx_ring->tx_buffer);
++	tx_ring->tx_buffer = NULL;
+ failed:
+ 	if (adapter->ring_vir_addr != NULL) {
+ 		dma_free_coherent(&pdev->dev, adapter->ring_size,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index 86088ccab23aa..d49fd21f49637 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -691,7 +691,7 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work)
+ 
+ 	while (block_timestamp > tracer->last_timestamp) {
+ 		/* Check block override if it's not the first block */
+-		if (!tracer->last_timestamp) {
++		if (tracer->last_timestamp) {
+ 			u64 *ts_event;
+ 			/* To avoid block override be the HW in case of buffer
+ 			 * wraparound, the time stamp of the previous block
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index 05bcd69994eca..90930e54b6f28 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -267,6 +267,9 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
+ 	if (err)
+ 		goto destroy_neigh_entry;
+ 
++	e->encap_size = ipv4_encap_size;
++	e->encap_header = encap_header;
++
+ 	if (!(nud_state & NUD_VALID)) {
+ 		neigh_event_send(n, NULL);
+ 		/* the encap entry will be made valid on neigh update event
+@@ -283,8 +286,6 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
+ 		goto destroy_neigh_entry;
+ 	}
+ 
+-	e->encap_size = ipv4_encap_size;
+-	e->encap_header = encap_header;
+ 	e->flags |= MLX5_ENCAP_ENTRY_VALID;
+ 	mlx5e_rep_queue_neigh_stats_work(netdev_priv(out_dev));
+ 	mlx5e_route_lookup_ipv4_put(route_dev, n);
+@@ -430,6 +431,9 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
+ 	if (err)
+ 		goto destroy_neigh_entry;
+ 
++	e->encap_size = ipv6_encap_size;
++	e->encap_header = encap_header;
++
+ 	if (!(nud_state & NUD_VALID)) {
+ 		neigh_event_send(n, NULL);
+ 		/* the encap entry will be made valid on neigh update event
+@@ -447,8 +451,6 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
+ 		goto destroy_neigh_entry;
+ 	}
+ 
+-	e->encap_size = ipv6_encap_size;
+-	e->encap_header = encap_header;
+ 	e->flags |= MLX5_ENCAP_ENTRY_VALID;
+ 	mlx5e_rep_queue_neigh_stats_work(netdev_priv(out_dev));
+ 	mlx5e_route_lookup_ipv6_put(route_dev, n);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index f9f1a79d6bddb..6512bb231e7e0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -68,7 +68,7 @@ static void mlx5e_rep_get_drvinfo(struct net_device *dev,
+ 	count = snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
+ 			 "%d.%d.%04d (%.16s)", fw_rev_maj(mdev),
+ 			 fw_rev_min(mdev), fw_rev_sub(mdev), mdev->board_id);
+-	if (count == sizeof(drvinfo->fw_version))
++	if (count >= sizeof(drvinfo->fw_version))
+ 		snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
+ 			 "%d.%d.%04d", fw_rev_maj(mdev),
+ 			 fw_rev_min(mdev), fw_rev_sub(mdev));
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/vport.c b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+index fc91bbf7d0c37..e77cf11356c07 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/vport.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+@@ -276,7 +276,7 @@ int mlx5_query_nic_vport_mac_list(struct mlx5_core_dev *dev,
+ 		req_list_size = max_list_size;
+ 	}
+ 
+-	out_sz = MLX5_ST_SZ_BYTES(query_nic_vport_context_in) +
++	out_sz = MLX5_ST_SZ_BYTES(query_nic_vport_context_out) +
+ 			req_list_size * MLX5_ST_SZ_BYTES(mac_address_layout);
+ 
+ 	out = kzalloc(out_sz, GFP_KERNEL);
+diff --git a/drivers/net/ethernet/micrel/ks8851.h b/drivers/net/ethernet/micrel/ks8851.h
+index 2b319e4511217..0f37448d26659 100644
+--- a/drivers/net/ethernet/micrel/ks8851.h
++++ b/drivers/net/ethernet/micrel/ks8851.h
+@@ -350,6 +350,8 @@ union ks8851_tx_hdr {
+  * @rxd: Space for receiving SPI data, in DMA-able space.
+  * @txd: Space for transmitting SPI data, in DMA-able space.
+  * @msg_enable: The message flags controlling driver output (see ethtool).
++ * @tx_space: Free space in the hardware TX buffer (cached copy of KS_TXMIR).
++ * @queued_len: Space required in hardware TX buffer for queued packets in txq.
+  * @fid: Incrementing frame id tag.
+  * @rc_ier: Cached copy of KS_IER.
+  * @rc_ccr: Cached copy of KS_CCR.
+@@ -398,6 +400,7 @@ struct ks8851_net {
+ 	struct work_struct	rxctrl_work;
+ 
+ 	struct sk_buff_head	txq;
++	unsigned int		queued_len;
+ 
+ 	struct eeprom_93cx6	eeprom;
+ 	struct regulator	*vdd_reg;
+diff --git a/drivers/net/ethernet/micrel/ks8851_common.c b/drivers/net/ethernet/micrel/ks8851_common.c
+index f74eae8eed02f..3d0ac7f3c87e1 100644
+--- a/drivers/net/ethernet/micrel/ks8851_common.c
++++ b/drivers/net/ethernet/micrel/ks8851_common.c
+@@ -363,16 +363,18 @@ static irqreturn_t ks8851_irq(int irq, void *_ks)
+ 		handled |= IRQ_RXPSI;
+ 
+ 	if (status & IRQ_TXI) {
+-		handled |= IRQ_TXI;
++		unsigned short tx_space = ks8851_rdreg16(ks, KS_TXMIR);
+ 
+-		/* no lock here, tx queue should have been stopped */
++		netif_dbg(ks, intr, ks->netdev,
++			  "%s: txspace %d\n", __func__, tx_space);
+ 
+-		/* update our idea of how much tx space is available to the
+-		 * system */
+-		ks->tx_space = ks8851_rdreg16(ks, KS_TXMIR);
++		spin_lock(&ks->statelock);
++		ks->tx_space = tx_space;
++		if (netif_queue_stopped(ks->netdev))
++			netif_wake_queue(ks->netdev);
++		spin_unlock(&ks->statelock);
+ 
+-		netif_dbg(ks, intr, ks->netdev,
+-			  "%s: txspace %d\n", __func__, ks->tx_space);
++		handled |= IRQ_TXI;
+ 	}
+ 
+ 	if (status & IRQ_RXI)
+@@ -415,9 +417,6 @@ static irqreturn_t ks8851_irq(int irq, void *_ks)
+ 	if (status & IRQ_LCI)
+ 		mii_check_link(&ks->mii);
+ 
+-	if (status & IRQ_TXI)
+-		netif_wake_queue(ks->netdev);
+-
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -501,6 +500,7 @@ static int ks8851_net_open(struct net_device *dev)
+ 	ks8851_wrreg16(ks, KS_ISR, ks->rc_ier);
+ 	ks8851_wrreg16(ks, KS_IER, ks->rc_ier);
+ 
++	ks->queued_len = 0;
+ 	netif_start_queue(ks->netdev);
+ 
+ 	netif_dbg(ks, ifup, ks->netdev, "network device up\n");
+diff --git a/drivers/net/ethernet/micrel/ks8851_spi.c b/drivers/net/ethernet/micrel/ks8851_spi.c
+index 4ec7f16159775..8fb5a4cd21bb7 100644
+--- a/drivers/net/ethernet/micrel/ks8851_spi.c
++++ b/drivers/net/ethernet/micrel/ks8851_spi.c
+@@ -288,6 +288,18 @@ static void ks8851_wrfifo_spi(struct ks8851_net *ks, struct sk_buff *txp,
+ 		netdev_err(ks->netdev, "%s: spi_sync() failed\n", __func__);
+ }
+ 
++/**
++ * calc_txlen - calculate size of message to send packet
++ * @len: Length of data
++ *
++ * Returns the size of the TXFIFO message needed to send
++ * this packet.
++ */
++static unsigned int calc_txlen(unsigned int len)
++{
++	return ALIGN(len + 4, 4);
++}
++
+ /**
+  * ks8851_rx_skb_spi - receive skbuff
+  * @ks: The device state
+@@ -307,7 +319,9 @@ static void ks8851_rx_skb_spi(struct ks8851_net *ks, struct sk_buff *skb)
+  */
+ static void ks8851_tx_work(struct work_struct *work)
+ {
++	unsigned int dequeued_len = 0;
+ 	struct ks8851_net_spi *kss;
++	unsigned short tx_space;
+ 	struct ks8851_net *ks;
+ 	unsigned long flags;
+ 	struct sk_buff *txb;
+@@ -324,6 +338,8 @@ static void ks8851_tx_work(struct work_struct *work)
+ 		last = skb_queue_empty(&ks->txq);
+ 
+ 		if (txb) {
++			dequeued_len += calc_txlen(txb->len);
++
+ 			ks8851_wrreg16_spi(ks, KS_RXQCR,
+ 					   ks->rc_rxqcr | RXQCR_SDA);
+ 			ks8851_wrfifo_spi(ks, txb, last);
+@@ -334,6 +350,13 @@ static void ks8851_tx_work(struct work_struct *work)
+ 		}
+ 	}
+ 
++	tx_space = ks8851_rdreg16_spi(ks, KS_TXMIR);
++
++	spin_lock(&ks->statelock);
++	ks->queued_len -= dequeued_len;
++	ks->tx_space = tx_space;
++	spin_unlock(&ks->statelock);
++
+ 	ks8851_unlock_spi(ks, &flags);
+ }
+ 
+@@ -348,18 +371,6 @@ static void ks8851_flush_tx_work_spi(struct ks8851_net *ks)
+ 	flush_work(&kss->tx_work);
+ }
+ 
+-/**
+- * calc_txlen - calculate size of message to send packet
+- * @len: Length of data
+- *
+- * Returns the size of the TXFIFO message needed to send
+- * this packet.
+- */
+-static unsigned int calc_txlen(unsigned int len)
+-{
+-	return ALIGN(len + 4, 4);
+-}
+-
+ /**
+  * ks8851_start_xmit_spi - transmit packet using SPI
+  * @skb: The buffer to transmit
+@@ -388,16 +399,17 @@ static netdev_tx_t ks8851_start_xmit_spi(struct sk_buff *skb,
+ 
+ 	spin_lock(&ks->statelock);
+ 
+-	if (needed > ks->tx_space) {
++	if (ks->queued_len + needed > ks->tx_space) {
+ 		netif_stop_queue(dev);
+ 		ret = NETDEV_TX_BUSY;
+ 	} else {
+-		ks->tx_space -= needed;
++		ks->queued_len += needed;
+ 		skb_queue_tail(&ks->txq, skb);
+ 	}
+ 
+ 	spin_unlock(&ks->statelock);
+-	schedule_work(&kss->tx_work);
++	if (ret == NETDEV_TX_OK)
++		schedule_work(&kss->tx_work);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
+index bf8aa0ea35d1b..e26c496cdfb85 100644
+--- a/drivers/pinctrl/pinctrl-at91-pio4.c
++++ b/drivers/pinctrl/pinctrl-at91-pio4.c
+@@ -999,6 +999,13 @@ static const struct of_device_id atmel_pctrl_of_match[] = {
+ 	}
+ };
+ 
++/*
++ * This lock class allows to tell lockdep that parent IRQ and children IRQ do
++ * not share the same class so it does not raise false positive
++ */
++static struct lock_class_key atmel_lock_key;
++static struct lock_class_key atmel_request_key;
++
+ static int atmel_pinctrl_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -1148,6 +1155,7 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
+ 		irq_set_chip_and_handler(irq, &atmel_gpio_irq_chip,
+ 					 handle_simple_irq);
+ 		irq_set_chip_data(irq, atmel_pioctrl);
++		irq_set_lockdep_class(irq, &atmel_lock_key, &atmel_request_key);
+ 		dev_dbg(dev,
+ 			"atmel gpio irq domain: hwirq: %d, linux irq: %d\n",
+ 			i, irq);
+diff --git a/drivers/reset/core.c b/drivers/reset/core.c
+index f93388b9a4a1f..662867435fbe8 100644
+--- a/drivers/reset/core.c
++++ b/drivers/reset/core.c
+@@ -599,6 +599,9 @@ static void __reset_control_put_internal(struct reset_control *rstc)
+ {
+ 	lockdep_assert_held(&reset_list_mutex);
+ 
++	if (IS_ERR_OR_NULL(rstc))
++		return;
++
+ 	kref_put(&rstc->refcnt, __reset_control_release);
+ }
+ 
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+index 8f47bf83694f6..45dbab8cbb548 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+@@ -430,7 +430,6 @@ static int bnx2fc_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	struct fcoe_ctlr *ctlr;
+ 	struct fcoe_rcv_info *fr;
+ 	struct fcoe_percpu_s *bg;
+-	struct sk_buff *tmp_skb;
+ 
+ 	interface = container_of(ptype, struct bnx2fc_interface,
+ 				 fcoe_packet_type);
+@@ -442,11 +441,9 @@ static int bnx2fc_rcv(struct sk_buff *skb, struct net_device *dev,
+ 		goto err;
+ 	}
+ 
+-	tmp_skb = skb_share_check(skb, GFP_ATOMIC);
+-	if (!tmp_skb)
+-		goto err;
+-
+-	skb = tmp_skb;
++	skb = skb_share_check(skb, GFP_ATOMIC);
++	if (!skb)
++		return -1;
+ 
+ 	if (unlikely(eth_hdr(skb)->h_proto != htons(ETH_P_FCOE))) {
+ 		printk(KERN_ERR PFX "bnx2fc_rcv: Wrong FC type frame\n");
+diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
+index d6c25a88cebc9..034f2c8a9e0b5 100644
+--- a/drivers/scsi/scsi.c
++++ b/drivers/scsi/scsi.c
+@@ -197,7 +197,7 @@ void scsi_finish_command(struct scsi_cmnd *cmd)
+ 				"(result %x)\n", cmd->result));
+ 
+ 	good_bytes = scsi_bufflen(cmd);
+-	if (!blk_rq_is_passthrough(cmd->request)) {
++	if (!blk_rq_is_passthrough(scsi_cmd_to_rq(cmd))) {
+ 		int old_good_bytes = good_bytes;
+ 		drv = scsi_cmd_to_driver(cmd);
+ 		if (drv->done)
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 0c4bc42b55c20..30eb8769dbab9 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -50,8 +50,6 @@
+ 
+ #include <asm/unaligned.h>
+ 
+-static void scsi_eh_done(struct scsi_cmnd *scmd);
+-
+ /*
+  * These should *probably* be handled by the host itself.
+  * Since it is allowed to sleep, it probably should.
+@@ -230,7 +228,7 @@ scsi_abort_command(struct scsi_cmnd *scmd)
+  */
+ static void scsi_eh_reset(struct scsi_cmnd *scmd)
+ {
+-	if (!blk_rq_is_passthrough(scmd->request)) {
++	if (!blk_rq_is_passthrough(scsi_cmd_to_rq(scmd))) {
+ 		struct scsi_driver *sdrv = scsi_cmd_to_driver(scmd);
+ 		if (sdrv->eh_reset)
+ 			sdrv->eh_reset(scmd);
+@@ -500,7 +498,8 @@ int scsi_check_sense(struct scsi_cmnd *scmd)
+ 		/* handler does not care. Drop down to default handling */
+ 	}
+ 
+-	if (scmd->cmnd[0] == TEST_UNIT_READY && scmd->scsi_done != scsi_eh_done)
++	if (scmd->cmnd[0] == TEST_UNIT_READY &&
++	    scmd->submitter != SUBMITTED_BY_SCSI_ERROR_HANDLER)
+ 		/*
+ 		 * nasty: for mid-layer issued TURs, we need to return the
+ 		 * actual sense data without any recovery attempt.  For eh
+@@ -768,7 +767,7 @@ static int scsi_eh_completed_normally(struct scsi_cmnd *scmd)
+  * scsi_eh_done - Completion function for error handling.
+  * @scmd:	Cmd that is done.
+  */
+-static void scsi_eh_done(struct scsi_cmnd *scmd)
++void scsi_eh_done(struct scsi_cmnd *scmd)
+ {
+ 	struct completion *eh_action;
+ 
+@@ -1068,7 +1067,8 @@ retry:
+ 	shost->eh_action = &done;
+ 
+ 	scsi_log_send(scmd);
+-	scmd->scsi_done = scsi_eh_done;
++	scmd->submitter = SUBMITTED_BY_SCSI_ERROR_HANDLER;
++	scmd->flags |= SCMD_LAST;
+ 
+ 	/*
+ 	 * Lock sdev->state_mutex to avoid that scsi_device_quiesce() can
+@@ -1095,6 +1095,7 @@ retry:
+ 	if (rtn) {
+ 		if (timeleft > stall_for) {
+ 			scsi_eh_restore_cmnd(scmd, &ses);
++
+ 			timeleft -= stall_for;
+ 			msleep(jiffies_to_msecs(stall_for));
+ 			goto retry;
+@@ -1167,7 +1168,7 @@ static int scsi_request_sense(struct scsi_cmnd *scmd)
+ 
+ static int scsi_eh_action(struct scsi_cmnd *scmd, int rtn)
+ {
+-	if (!blk_rq_is_passthrough(scmd->request)) {
++	if (!blk_rq_is_passthrough(scsi_cmd_to_rq(scmd))) {
+ 		struct scsi_driver *sdrv = scsi_cmd_to_driver(scmd);
+ 		if (sdrv->eh_action)
+ 			rtn = sdrv->eh_action(scmd, rtn);
+@@ -1733,22 +1734,24 @@ static void scsi_eh_offline_sdevs(struct list_head *work_q,
+  */
+ int scsi_noretry_cmd(struct scsi_cmnd *scmd)
+ {
++	struct request *req = scsi_cmd_to_rq(scmd);
++
+ 	switch (host_byte(scmd->result)) {
+ 	case DID_OK:
+ 		break;
+ 	case DID_TIME_OUT:
+ 		goto check_type;
+ 	case DID_BUS_BUSY:
+-		return (scmd->request->cmd_flags & REQ_FAILFAST_TRANSPORT);
++		return req->cmd_flags & REQ_FAILFAST_TRANSPORT;
+ 	case DID_PARITY:
+-		return (scmd->request->cmd_flags & REQ_FAILFAST_DEV);
++		return req->cmd_flags & REQ_FAILFAST_DEV;
+ 	case DID_ERROR:
+ 		if (msg_byte(scmd->result) == COMMAND_COMPLETE &&
+ 		    status_byte(scmd->result) == RESERVATION_CONFLICT)
+ 			return 0;
+ 		fallthrough;
+ 	case DID_SOFT_ERROR:
+-		return (scmd->request->cmd_flags & REQ_FAILFAST_DRIVER);
++		return req->cmd_flags & REQ_FAILFAST_DRIVER;
+ 	}
+ 
+ 	if (status_byte(scmd->result) != CHECK_CONDITION)
+@@ -1759,8 +1762,7 @@ check_type:
+ 	 * assume caller has checked sense and determined
+ 	 * the check condition was retryable.
+ 	 */
+-	if (scmd->request->cmd_flags & REQ_FAILFAST_DEV ||
+-	    blk_rq_is_passthrough(scmd->request))
++	if (req->cmd_flags & REQ_FAILFAST_DEV || blk_rq_is_passthrough(req))
+ 		return 1;
+ 
+ 	return 0;
+@@ -2321,11 +2323,6 @@ void scsi_report_device_reset(struct Scsi_Host *shost, int channel, int target)
+ }
+ EXPORT_SYMBOL(scsi_report_device_reset);
+ 
+-static void
+-scsi_reset_provider_done_command(struct scsi_cmnd *scmd)
+-{
+-}
+-
+ /**
+  * scsi_ioctl_reset: explicitly reset a host/bus/target/device
+  * @dev:	scsi_device to operate on
+@@ -2362,7 +2359,8 @@ scsi_ioctl_reset(struct scsi_device *dev, int __user *arg)
+ 	scmd->request = rq;
+ 	scmd->cmnd = scsi_req(rq)->cmd;
+ 
+-	scmd->scsi_done		= scsi_reset_provider_done_command;
++	scmd->submitter = SUBMITTED_BY_SCSI_RESET_IOCTL;
++	scmd->flags |= SCMD_LAST;
+ 	memset(&scmd->sdb, 0, sizeof(scmd->sdb));
+ 
+ 	scmd->cmd_len			= 0;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 99b90031500b2..20c2700e1f639 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -153,13 +153,15 @@ scsi_set_blocked(struct scsi_cmnd *cmd, int reason)
+ 
+ static void scsi_mq_requeue_cmd(struct scsi_cmnd *cmd)
+ {
+-	if (cmd->request->rq_flags & RQF_DONTPREP) {
+-		cmd->request->rq_flags &= ~RQF_DONTPREP;
++	struct request *rq = scsi_cmd_to_rq(cmd);
++
++	if (rq->rq_flags & RQF_DONTPREP) {
++		rq->rq_flags &= ~RQF_DONTPREP;
+ 		scsi_mq_uninit_cmd(cmd);
+ 	} else {
+ 		WARN_ON_ONCE(true);
+ 	}
+-	blk_mq_requeue_request(cmd->request, true);
++	blk_mq_requeue_request(rq, true);
+ }
+ 
+ /**
+@@ -198,7 +200,7 @@ static void __scsi_queue_insert(struct scsi_cmnd *cmd, int reason, bool unbusy)
+ 	 */
+ 	cmd->result = 0;
+ 
+-	blk_mq_requeue_request(cmd->request, true);
++	blk_mq_requeue_request(scsi_cmd_to_rq(cmd), true);
+ }
+ 
+ /**
+@@ -508,7 +510,7 @@ void scsi_run_host_queues(struct Scsi_Host *shost)
+ 
+ static void scsi_uninit_cmd(struct scsi_cmnd *cmd)
+ {
+-	if (!blk_rq_is_passthrough(cmd->request)) {
++	if (!blk_rq_is_passthrough(scsi_cmd_to_rq(cmd))) {
+ 		struct scsi_driver *drv = scsi_cmd_to_driver(cmd);
+ 
+ 		if (drv->uninit_command)
+@@ -658,7 +660,7 @@ static void scsi_io_completion_reprep(struct scsi_cmnd *cmd,
+ 
+ static bool scsi_cmd_runtime_exceeced(struct scsi_cmnd *cmd)
+ {
+-	struct request *req = cmd->request;
++	struct request *req = scsi_cmd_to_rq(cmd);
+ 	unsigned long wait_for;
+ 
+ 	if (cmd->allowed == SCSI_CMD_RETRIES_NO_LIMIT)
+@@ -677,7 +679,7 @@ static bool scsi_cmd_runtime_exceeced(struct scsi_cmnd *cmd)
+ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
+ {
+ 	struct request_queue *q = cmd->device->request_queue;
+-	struct request *req = cmd->request;
++	struct request *req = scsi_cmd_to_rq(cmd);
+ 	int level = 0;
+ 	enum {ACTION_FAIL, ACTION_REPREP, ACTION_RETRY,
+ 	      ACTION_DELAYED_RETRY} action;
+@@ -849,7 +851,7 @@ static int scsi_io_completion_nz_result(struct scsi_cmnd *cmd, int result,
+ {
+ 	bool sense_valid;
+ 	bool sense_current = true;	/* false implies "deferred sense" */
+-	struct request *req = cmd->request;
++	struct request *req = scsi_cmd_to_rq(cmd);
+ 	struct scsi_sense_hdr sshdr;
+ 
+ 	sense_valid = scsi_command_normalize_sense(cmd, &sshdr);
+@@ -938,7 +940,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
+ {
+ 	int result = cmd->result;
+ 	struct request_queue *q = cmd->device->request_queue;
+-	struct request *req = cmd->request;
++	struct request *req = scsi_cmd_to_rq(cmd);
+ 	blk_status_t blk_stat = BLK_STS_OK;
+ 
+ 	if (unlikely(result))	/* a nz result may or may not be an error */
+@@ -1006,7 +1008,7 @@ static inline bool scsi_cmd_needs_dma_drain(struct scsi_device *sdev,
+ blk_status_t scsi_alloc_sgtables(struct scsi_cmnd *cmd)
+ {
+ 	struct scsi_device *sdev = cmd->device;
+-	struct request *rq = cmd->request;
++	struct request *rq = scsi_cmd_to_rq(cmd);
+ 	unsigned short nr_segs = blk_rq_nr_phys_segments(rq);
+ 	struct scatterlist *last_sg = NULL;
+ 	blk_status_t ret;
+@@ -1135,7 +1137,7 @@ void scsi_init_command(struct scsi_device *dev, struct scsi_cmnd *cmd)
+ {
+ 	void *buf = cmd->sense_buffer;
+ 	void *prot = cmd->prot_sdb;
+-	struct request *rq = blk_mq_rq_from_pdu(cmd);
++	struct request *rq = scsi_cmd_to_rq(cmd);
+ 	unsigned int flags = cmd->flags & SCMD_PRESERVED_FLAGS;
+ 	unsigned long jiffies_at_alloc;
+ 	int retries, to_clear;
+@@ -1594,12 +1596,21 @@ static blk_status_t scsi_prepare_cmd(struct request *req)
+ 
+ static void scsi_mq_done(struct scsi_cmnd *cmd)
+ {
+-	if (unlikely(blk_should_fake_timeout(cmd->request->q)))
++	switch (cmd->submitter) {
++	case SUBMITTED_BY_BLOCK_LAYER:
++		break;
++	case SUBMITTED_BY_SCSI_ERROR_HANDLER:
++		return scsi_eh_done(cmd);
++	case SUBMITTED_BY_SCSI_RESET_IOCTL:
++		return;
++	}
++
++	if (unlikely(blk_should_fake_timeout(scsi_cmd_to_rq(cmd)->q)))
+ 		return;
+ 	if (unlikely(test_and_set_bit(SCMD_STATE_COMPLETE, &cmd->state)))
+ 		return;
+ 	trace_scsi_dispatch_cmd_done(cmd);
+-	blk_mq_complete_request(cmd->request);
++	blk_mq_complete_request(scsi_cmd_to_rq(cmd));
+ }
+ 
+ static void scsi_mq_put_budget(struct request_queue *q)
+@@ -1683,6 +1694,7 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 
+ 	scsi_set_resid(cmd, 0);
+ 	memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
++	cmd->submitter = SUBMITTED_BY_BLOCK_LAYER;
+ 	cmd->scsi_done = scsi_mq_done;
+ 
+ 	blk_mq_start_request(req);
+diff --git a/drivers/scsi/scsi_logging.c b/drivers/scsi/scsi_logging.c
+index 8ea44c6595efa..f0ae55ad09738 100644
+--- a/drivers/scsi/scsi_logging.c
++++ b/drivers/scsi/scsi_logging.c
+@@ -28,8 +28,9 @@ static void scsi_log_release_buffer(char *bufptr)
+ 
+ static inline const char *scmd_name(const struct scsi_cmnd *scmd)
+ {
+-	return scmd->request->rq_disk ?
+-		scmd->request->rq_disk->disk_name : NULL;
++	struct request *rq = scsi_cmd_to_rq((struct scsi_cmnd *)scmd);
++
++	return rq->rq_disk ? rq->rq_disk->disk_name : NULL;
+ }
+ 
+ static size_t sdev_format_header(char *logbuf, size_t logbuf_len,
+@@ -91,7 +92,7 @@ void scmd_printk(const char *level, const struct scsi_cmnd *scmd,
+ 	if (!logbuf)
+ 		return;
+ 	off = sdev_format_header(logbuf, logbuf_len, scmd_name(scmd),
+-				 scmd->request->tag);
++				 scsi_cmd_to_rq((struct scsi_cmnd *)scmd)->tag);
+ 	if (off < logbuf_len) {
+ 		va_start(args, fmt);
+ 		off += vscnprintf(logbuf + off, logbuf_len - off, fmt, args);
+@@ -188,7 +189,7 @@ void scsi_print_command(struct scsi_cmnd *cmd)
+ 		return;
+ 
+ 	off = sdev_format_header(logbuf, logbuf_len,
+-				 scmd_name(cmd), cmd->request->tag);
++				 scmd_name(cmd), scsi_cmd_to_rq(cmd)->tag);
+ 	if (off >= logbuf_len)
+ 		goto out_printk;
+ 	off += scnprintf(logbuf + off, logbuf_len - off, "CDB: ");
+@@ -210,7 +211,7 @@ void scsi_print_command(struct scsi_cmnd *cmd)
+ 
+ 			off = sdev_format_header(logbuf, logbuf_len,
+ 						 scmd_name(cmd),
+-						 cmd->request->tag);
++						 scsi_cmd_to_rq(cmd)->tag);
+ 			if (!WARN_ON(off > logbuf_len - 58)) {
+ 				off += scnprintf(logbuf + off, logbuf_len - off,
+ 						 "CDB[%02x]: ", k);
+@@ -373,7 +374,8 @@ EXPORT_SYMBOL(__scsi_print_sense);
+ /* Normalize and print sense buffer in SCSI command */
+ void scsi_print_sense(const struct scsi_cmnd *cmd)
+ {
+-	scsi_log_print_sense(cmd->device, scmd_name(cmd), cmd->request->tag,
++	scsi_log_print_sense(cmd->device, scmd_name(cmd),
++			     scsi_cmd_to_rq((struct scsi_cmnd *)cmd)->tag,
+ 			     cmd->sense_buffer, SCSI_SENSE_BUFFERSIZE);
+ }
+ EXPORT_SYMBOL(scsi_print_sense);
+@@ -392,8 +394,8 @@ void scsi_print_result(const struct scsi_cmnd *cmd, const char *msg,
+ 	if (!logbuf)
+ 		return;
+ 
+-	off = sdev_format_header(logbuf, logbuf_len,
+-				 scmd_name(cmd), cmd->request->tag);
++	off = sdev_format_header(logbuf, logbuf_len, scmd_name(cmd),
++				 scsi_cmd_to_rq((struct scsi_cmnd *)cmd)->tag);
+ 
+ 	if (off >= logbuf_len)
+ 		goto out_printk;
+diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h
+index 180636d54982d..89992d8879acd 100644
+--- a/drivers/scsi/scsi_priv.h
++++ b/drivers/scsi/scsi_priv.h
+@@ -82,6 +82,7 @@ void scsi_eh_ready_devs(struct Scsi_Host *shost,
+ int scsi_eh_get_sense(struct list_head *work_q,
+ 		      struct list_head *done_q);
+ int scsi_noretry_cmd(struct scsi_cmnd *scmd);
++void scsi_eh_done(struct scsi_cmnd *scmd);
+ 
+ /* scsi_lib.c */
+ extern int scsi_maybe_unblock_host(struct scsi_device *sdev);
+diff --git a/drivers/spi/spi-atmel.c b/drivers/spi/spi-atmel.c
+index 1db43cbead575..26fa3f8260fb6 100644
+--- a/drivers/spi/spi-atmel.c
++++ b/drivers/spi/spi-atmel.c
+@@ -352,8 +352,6 @@ static void cs_activate(struct atmel_spi *as, struct spi_device *spi)
+ 		}
+ 
+ 		mr = spi_readl(as, MR);
+-		if (spi->cs_gpiod)
+-			gpiod_set_value(spi->cs_gpiod, 1);
+ 	} else {
+ 		u32 cpol = (spi->mode & SPI_CPOL) ? SPI_BIT(CPOL) : 0;
+ 		int i;
+@@ -369,8 +367,6 @@ static void cs_activate(struct atmel_spi *as, struct spi_device *spi)
+ 
+ 		mr = spi_readl(as, MR);
+ 		mr = SPI_BFINS(PCS, ~(1 << chip_select), mr);
+-		if (spi->cs_gpiod)
+-			gpiod_set_value(spi->cs_gpiod, 1);
+ 		spi_writel(as, MR, mr);
+ 	}
+ 
+@@ -400,8 +396,6 @@ static void cs_deactivate(struct atmel_spi *as, struct spi_device *spi)
+ 
+ 	if (!spi->cs_gpiod)
+ 		spi_writel(as, CR, SPI_BIT(LASTXFER));
+-	else
+-		gpiod_set_value(spi->cs_gpiod, 0);
+ }
+ 
+ static void atmel_spi_lock(struct atmel_spi *as) __acquires(&as->lock)
+@@ -867,7 +861,6 @@ static int atmel_spi_set_xfer_speed(struct atmel_spi *as,
+  * lock is held, spi irq is blocked
+  */
+ static void atmel_spi_pdc_next_xfer(struct spi_master *master,
+-					struct spi_message *msg,
+ 					struct spi_transfer *xfer)
+ {
+ 	struct atmel_spi	*as = spi_master_get_devdata(master);
+@@ -883,12 +876,12 @@ static void atmel_spi_pdc_next_xfer(struct spi_master *master,
+ 	spi_writel(as, RPR, rx_dma);
+ 	spi_writel(as, TPR, tx_dma);
+ 
+-	if (msg->spi->bits_per_word > 8)
++	if (xfer->bits_per_word > 8)
+ 		len >>= 1;
+ 	spi_writel(as, RCR, len);
+ 	spi_writel(as, TCR, len);
+ 
+-	dev_dbg(&msg->spi->dev,
++	dev_dbg(&master->dev,
+ 		"  start xfer %p: len %u tx %p/%08llx rx %p/%08llx\n",
+ 		xfer, xfer->len, xfer->tx_buf,
+ 		(unsigned long long)xfer->tx_dma, xfer->rx_buf,
+@@ -902,12 +895,12 @@ static void atmel_spi_pdc_next_xfer(struct spi_master *master,
+ 		spi_writel(as, RNPR, rx_dma);
+ 		spi_writel(as, TNPR, tx_dma);
+ 
+-		if (msg->spi->bits_per_word > 8)
++		if (xfer->bits_per_word > 8)
+ 			len >>= 1;
+ 		spi_writel(as, RNCR, len);
+ 		spi_writel(as, TNCR, len);
+ 
+-		dev_dbg(&msg->spi->dev,
++		dev_dbg(&master->dev,
+ 			"  next xfer %p: len %u tx %p/%08llx rx %p/%08llx\n",
+ 			xfer, xfer->len, xfer->tx_buf,
+ 			(unsigned long long)xfer->tx_dma, xfer->rx_buf,
+@@ -1277,12 +1270,28 @@ static int atmel_spi_setup(struct spi_device *spi)
+ 	return 0;
+ }
+ 
++static void atmel_spi_set_cs(struct spi_device *spi, bool enable)
++{
++	struct atmel_spi *as = spi_master_get_devdata(spi->master);
++	/* the core doesn't really pass us enable/disable, but CS HIGH vs CS LOW
++	 * since we already have routines for activate/deactivate translate
++	 * high/low to active/inactive
++	 */
++	enable = (!!(spi->mode & SPI_CS_HIGH) == enable);
++
++	if (enable) {
++		cs_activate(as, spi);
++	} else {
++		cs_deactivate(as, spi);
++	}
++
++}
++
+ static int atmel_spi_one_transfer(struct spi_master *master,
+-					struct spi_message *msg,
++					struct spi_device *spi,
+ 					struct spi_transfer *xfer)
+ {
+ 	struct atmel_spi	*as;
+-	struct spi_device	*spi = msg->spi;
+ 	u8			bits;
+ 	u32			len;
+ 	struct atmel_spi_device	*asd;
+@@ -1291,11 +1300,8 @@ static int atmel_spi_one_transfer(struct spi_master *master,
+ 	unsigned long		dma_timeout;
+ 
+ 	as = spi_master_get_devdata(master);
+-
+-	if (!(xfer->tx_buf || xfer->rx_buf) && xfer->len) {
+-		dev_dbg(&spi->dev, "missing rx or tx buf\n");
+-		return -EINVAL;
+-	}
++	/* This lock was orignally taken in atmel_spi_trasfer_one_message */
++	atmel_spi_lock(as);
+ 
+ 	asd = spi->controller_state;
+ 	bits = (asd->csr >> 4) & 0xf;
+@@ -1309,13 +1315,13 @@ static int atmel_spi_one_transfer(struct spi_master *master,
+ 	 * DMA map early, for performance (empties dcache ASAP) and
+ 	 * better fault reporting.
+ 	 */
+-	if ((!msg->is_dma_mapped)
++	if ((!master->cur_msg->is_dma_mapped)
+ 		&& as->use_pdc) {
+ 		if (atmel_spi_dma_map_xfer(as, xfer) < 0)
+ 			return -ENOMEM;
+ 	}
+ 
+-	atmel_spi_set_xfer_speed(as, msg->spi, xfer);
++	atmel_spi_set_xfer_speed(as, spi, xfer);
+ 
+ 	as->done_status = 0;
+ 	as->current_transfer = xfer;
+@@ -1324,7 +1330,7 @@ static int atmel_spi_one_transfer(struct spi_master *master,
+ 		reinit_completion(&as->xfer_completion);
+ 
+ 		if (as->use_pdc) {
+-			atmel_spi_pdc_next_xfer(master, msg, xfer);
++			atmel_spi_pdc_next_xfer(master, xfer);
+ 		} else if (atmel_spi_use_dma(as, xfer)) {
+ 			len = as->current_remaining_bytes;
+ 			ret = atmel_spi_next_xfer_dma_submit(master,
+@@ -1332,7 +1338,8 @@ static int atmel_spi_one_transfer(struct spi_master *master,
+ 			if (ret) {
+ 				dev_err(&spi->dev,
+ 					"unable to use DMA, fallback to PIO\n");
+-				atmel_spi_next_xfer_pio(master, xfer);
++				as->done_status = ret;
++				break;
+ 			} else {
+ 				as->current_remaining_bytes -= len;
+ 				if (as->current_remaining_bytes < 0)
+@@ -1385,90 +1392,18 @@ static int atmel_spi_one_transfer(struct spi_master *master,
+ 		} else if (atmel_spi_use_dma(as, xfer)) {
+ 			atmel_spi_stop_dma(master);
+ 		}
+-
+-		if (!msg->is_dma_mapped
+-			&& as->use_pdc)
+-			atmel_spi_dma_unmap_xfer(master, xfer);
+-
+-		return 0;
+-
+-	} else {
+-		/* only update length if no error */
+-		msg->actual_length += xfer->len;
+ 	}
+ 
+-	if (!msg->is_dma_mapped
++	if (!master->cur_msg->is_dma_mapped
+ 		&& as->use_pdc)
+ 		atmel_spi_dma_unmap_xfer(master, xfer);
+ 
+-	spi_transfer_delay_exec(xfer);
+-
+-	if (xfer->cs_change) {
+-		if (list_is_last(&xfer->transfer_list,
+-				 &msg->transfers)) {
+-			as->keep_cs = true;
+-		} else {
+-			cs_deactivate(as, msg->spi);
+-			udelay(10);
+-			cs_activate(as, msg->spi);
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+-static int atmel_spi_transfer_one_message(struct spi_master *master,
+-						struct spi_message *msg)
+-{
+-	struct atmel_spi *as;
+-	struct spi_transfer *xfer;
+-	struct spi_device *spi = msg->spi;
+-	int ret = 0;
+-
+-	as = spi_master_get_devdata(master);
+-
+-	dev_dbg(&spi->dev, "new message %p submitted for %s\n",
+-					msg, dev_name(&spi->dev));
+-
+-	atmel_spi_lock(as);
+-	cs_activate(as, spi);
+-
+-	as->keep_cs = false;
+-
+-	msg->status = 0;
+-	msg->actual_length = 0;
+-
+-	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+-		trace_spi_transfer_start(msg, xfer);
+-
+-		ret = atmel_spi_one_transfer(master, msg, xfer);
+-		if (ret)
+-			goto msg_done;
+-
+-		trace_spi_transfer_stop(msg, xfer);
+-	}
+-
+ 	if (as->use_pdc)
+ 		atmel_spi_disable_pdc_transfer(as);
+ 
+-	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+-		dev_dbg(&spi->dev,
+-			"  xfer %p: len %u tx %p/%pad rx %p/%pad\n",
+-			xfer, xfer->len,
+-			xfer->tx_buf, &xfer->tx_dma,
+-			xfer->rx_buf, &xfer->rx_dma);
+-	}
+-
+-msg_done:
+-	if (!as->keep_cs)
+-		cs_deactivate(as, msg->spi);
+-
+ 	atmel_spi_unlock(as);
+ 
+-	msg->status = as->done_status;
+-	spi_finalize_current_message(spi->master);
+-
+-	return ret;
++	return as->done_status;
+ }
+ 
+ static void atmel_spi_cleanup(struct spi_device *spi)
+@@ -1557,8 +1492,10 @@ static int atmel_spi_probe(struct platform_device *pdev)
+ 	master->bus_num = pdev->id;
+ 	master->num_chipselect = 4;
+ 	master->setup = atmel_spi_setup;
+-	master->flags = (SPI_MASTER_MUST_RX | SPI_MASTER_MUST_TX);
+-	master->transfer_one_message = atmel_spi_transfer_one_message;
++	master->flags = (SPI_MASTER_MUST_RX | SPI_MASTER_MUST_TX |
++			SPI_MASTER_GPIO_SS);
++	master->transfer_one = atmel_spi_one_transfer;
++	master->set_cs = atmel_spi_set_cs;
+ 	master->cleanup = atmel_spi_cleanup;
+ 	master->auto_runtime_pm = true;
+ 	master->max_dma_len = SPI_MAX_DMA_XFER;
+diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c
+index ff0b3457fd342..de925433d4c5f 100644
+--- a/drivers/usb/host/fotg210-hcd.c
++++ b/drivers/usb/host/fotg210-hcd.c
+@@ -429,8 +429,6 @@ static void qh_lines(struct fotg210_hcd *fotg210, struct fotg210_qh *qh,
+ 			temp = size;
+ 		size -= temp;
+ 		next += temp;
+-		if (temp == size)
+-			goto done;
+ 	}
+ 
+ 	temp = snprintf(next, size, "\n");
+@@ -440,7 +438,6 @@ static void qh_lines(struct fotg210_hcd *fotg210, struct fotg210_qh *qh,
+ 	size -= temp;
+ 	next += temp;
+ 
+-done:
+ 	*sizep = size;
+ 	*nextp = next;
+ }
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 3bfa395c31120..4d7f4a4ab69fb 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1011,9 +1011,9 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(FTDI_VID, ACTISENSE_USG_PID) },
+ 	{ USB_DEVICE(FTDI_VID, ACTISENSE_NGT_PID) },
+ 	{ USB_DEVICE(FTDI_VID, ACTISENSE_NGW_PID) },
+-	{ USB_DEVICE(FTDI_VID, ACTISENSE_D9AC_PID) },
+-	{ USB_DEVICE(FTDI_VID, ACTISENSE_D9AD_PID) },
+-	{ USB_DEVICE(FTDI_VID, ACTISENSE_D9AE_PID) },
++	{ USB_DEVICE(FTDI_VID, ACTISENSE_UID_PID) },
++	{ USB_DEVICE(FTDI_VID, ACTISENSE_USA_PID) },
++	{ USB_DEVICE(FTDI_VID, ACTISENSE_NGX_PID) },
+ 	{ USB_DEVICE(FTDI_VID, ACTISENSE_D9AF_PID) },
+ 	{ USB_DEVICE(FTDI_VID, CHETCO_SEAGAUGE_PID) },
+ 	{ USB_DEVICE(FTDI_VID, CHETCO_SEASWITCH_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 31c8ccabbbb78..9a0f9fc991246 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -1561,9 +1561,9 @@
+ #define ACTISENSE_USG_PID		0xD9A9 /* USG USB Serial Adapter */
+ #define ACTISENSE_NGT_PID		0xD9AA /* NGT NMEA2000 Interface */
+ #define ACTISENSE_NGW_PID		0xD9AB /* NGW NMEA2000 Gateway */
+-#define ACTISENSE_D9AC_PID		0xD9AC /* Actisense Reserved */
+-#define ACTISENSE_D9AD_PID		0xD9AD /* Actisense Reserved */
+-#define ACTISENSE_D9AE_PID		0xD9AE /* Actisense Reserved */
++#define ACTISENSE_UID_PID		0xD9AC /* USB Isolating Device */
++#define ACTISENSE_USA_PID		0xD9AD /* USB to Serial Adapter */
++#define ACTISENSE_NGX_PID		0xD9AE /* NGX NMEA2000 Gateway */
+ #define ACTISENSE_D9AF_PID		0xD9AF /* Actisense Reserved */
+ #define CHETCO_SEAGAUGE_PID		0xA548 /* SeaGauge USB Adapter */
+ #define CHETCO_SEASWITCH_PID		0xA549 /* SeaSwitch USB Adapter */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 851c242539406..6be7358ca1aff 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -272,6 +272,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_RM500Q			0x0800
+ #define QUECTEL_PRODUCT_RM520N			0x0801
+ #define QUECTEL_PRODUCT_EC200U			0x0901
++#define QUECTEL_PRODUCT_EG912Y			0x6001
+ #define QUECTEL_PRODUCT_EC200S_CN		0x6002
+ #define QUECTEL_PRODUCT_EC200A			0x6005
+ #define QUECTEL_PRODUCT_EM061K_LWW		0x6008
+@@ -1232,6 +1233,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, 0x0700, 0xff), /* BG95 */
+ 	  .driver_info = RSVD(3) | ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10),
+ 	  .driver_info = ZLP },
+@@ -1244,6 +1246,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200U, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG912Y, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },
+ 
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+@@ -2242,6 +2245,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ 	{ USB_DEVICE(0x0489, 0xe0b5),						/* Foxconn T77W968 ESIM */
+ 	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe0da, 0xff),                     /* Foxconn T99W265 MBIM variant */
++	  .driver_info = RSVD(3) | RSVD(5) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe0db, 0xff),			/* Foxconn T99W265 MBIM */
+ 	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe0ee, 0xff),			/* Foxconn T99W368 MBIM */
+diff --git a/fs/afs/cell.c b/fs/afs/cell.c
+index 887b673f62230..cfd3c4dabdb2c 100644
+--- a/fs/afs/cell.c
++++ b/fs/afs/cell.c
+@@ -407,10 +407,12 @@ static int afs_update_cell(struct afs_cell *cell)
+ 		if (ret == -ENOMEM)
+ 			goto out_wake;
+ 
+-		ret = -ENOMEM;
+ 		vllist = afs_alloc_vlserver_list(0);
+-		if (!vllist)
++		if (!vllist) {
++			if (ret >= 0)
++				ret = -ENOMEM;
+ 			goto out_wake;
++		}
+ 
+ 		switch (ret) {
+ 		case -ENODATA:
+diff --git a/fs/afs/dynroot.c b/fs/afs/dynroot.c
+index b35c6081dbfe1..96b404d9e13ac 100644
+--- a/fs/afs/dynroot.c
++++ b/fs/afs/dynroot.c
+@@ -113,6 +113,7 @@ static int afs_probe_cell_name(struct dentry *dentry)
+ 	struct afs_net *net = afs_d2net(dentry);
+ 	const char *name = dentry->d_name.name;
+ 	size_t len = dentry->d_name.len;
++	char *result = NULL;
+ 	int ret;
+ 
+ 	/* Names prefixed with a dot are R/W mounts. */
+@@ -130,9 +131,22 @@ static int afs_probe_cell_name(struct dentry *dentry)
+ 	}
+ 
+ 	ret = dns_query(net->net, "afsdb", name, len, "srv=1",
+-			NULL, NULL, false);
+-	if (ret == -ENODATA || ret == -ENOKEY)
++			&result, NULL, false);
++	if (ret == -ENODATA || ret == -ENOKEY || ret == 0)
+ 		ret = -ENOENT;
++	if (ret > 0 && ret >= sizeof(struct dns_server_list_v1_header)) {
++		struct dns_server_list_v1_header *v1 = (void *)result;
++
++		if (v1->hdr.zero == 0 &&
++		    v1->hdr.content == DNS_PAYLOAD_IS_SERVER_LIST &&
++		    v1->hdr.version == 1 &&
++		    (v1->status != DNS_LOOKUP_GOOD &&
++		     v1->status != DNS_LOOKUP_GOOD_WITH_BAD))
++			return -ENOENT;
++
++	}
++
++	kfree(result);
+ 	return ret;
+ }
+ 
+@@ -251,20 +265,9 @@ static int afs_dynroot_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 	return 1;
+ }
+ 
+-/*
+- * Allow the VFS to enquire as to whether a dentry should be unhashed (mustn't
+- * sleep)
+- * - called from dput() when d_count is going to 0.
+- * - return 1 to request dentry be unhashed, 0 otherwise
+- */
+-static int afs_dynroot_d_delete(const struct dentry *dentry)
+-{
+-	return d_really_is_positive(dentry);
+-}
+-
+ const struct dentry_operations afs_dynroot_dentry_operations = {
+ 	.d_revalidate	= afs_dynroot_d_revalidate,
+-	.d_delete	= afs_dynroot_d_delete,
++	.d_delete	= always_delete_dentry,
+ 	.d_release	= afs_d_release,
+ 	.d_automount	= afs_d_automount,
+ };
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 6da8587e2ae37..f06824bea4686 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -1868,6 +1868,15 @@ static noinline int __btrfs_ioctl_snap_create(struct file *file,
+ 			 * are limited to own subvolumes only
+ 			 */
+ 			ret = -EPERM;
++		} else if (btrfs_ino(BTRFS_I(src_inode)) != BTRFS_FIRST_FREE_OBJECTID) {
++			/*
++			 * Snapshots must be made with the src_inode referring
++			 * to the subvolume inode, otherwise the permission
++			 * checking above is useless because we may have
++			 * permission on a lower directory but not the subvol
++			 * itself.
++			 */
++			ret = -EINVAL;
+ 		} else {
+ 			ret = btrfs_mksnapshot(&file->f_path, name, namelen,
+ 					     BTRFS_I(src_inode)->root,
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 9044b0fca9a3d..2d46018b02839 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -353,6 +353,10 @@ checkSMB(char *buf, unsigned int total_read, struct TCP_Server_Info *server)
+ 			cifs_dbg(VFS, "Length less than smb header size\n");
+ 		}
+ 		return -EIO;
++	} else if (total_read < sizeof(*smb) + 2 * smb->WordCount) {
++		cifs_dbg(VFS, "%s: can't read BCC due to invalid WordCount(%u)\n",
++			 __func__, smb->WordCount);
++		return -EIO;
+ 	}
+ 
+ 	/* otherwise, there is enough to get to the BCC */
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index be3df90bb2bcc..b98bba887f84b 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -305,6 +305,9 @@ static const bool has_smb2_data_area[NUMBER_OF_SMB2_COMMANDS] = {
+ char *
+ smb2_get_data_area_len(int *off, int *len, struct smb2_sync_hdr *shdr)
+ {
++	const int max_off = 4096;
++	const int max_len = 128 * 1024;
++
+ 	*off = 0;
+ 	*len = 0;
+ 
+@@ -376,29 +379,20 @@ smb2_get_data_area_len(int *off, int *len, struct smb2_sync_hdr *shdr)
+ 	 * Invalid length or offset probably means data area is invalid, but
+ 	 * we have little choice but to ignore the data area in this case.
+ 	 */
+-	if (*off > 4096) {
+-		cifs_dbg(VFS, "offset %d too large, data area ignored\n", *off);
+-		*len = 0;
+-		*off = 0;
+-	} else if (*off < 0) {
+-		cifs_dbg(VFS, "negative offset %d to data invalid ignore data area\n",
+-			 *off);
++	if (unlikely(*off < 0 || *off > max_off ||
++		     *len < 0 || *len > max_len)) {
++		cifs_dbg(VFS, "%s: invalid data area (off=%d len=%d)\n",
++			 __func__, *off, *len);
+ 		*off = 0;
+ 		*len = 0;
+-	} else if (*len < 0) {
+-		cifs_dbg(VFS, "negative data length %d invalid, data area ignored\n",
+-			 *len);
+-		*len = 0;
+-	} else if (*len > 128 * 1024) {
+-		cifs_dbg(VFS, "data area larger than 128K: %d\n", *len);
++	} else if (*off == 0) {
+ 		*len = 0;
+ 	}
+ 
+ 	/* return pointer to beginning of data area, ie offset from SMB start */
+-	if ((*off != 0) && (*len != 0))
++	if (*off > 0 && *len > 0)
+ 		return (char *)shdr + *off;
+-	else
+-		return NULL;
++	return NULL;
+ }
+ 
+ /*
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index e58525a958270..26edaeb4245d8 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -3099,7 +3099,7 @@ smb2_query_reparse_tag(const unsigned int xid, struct cifs_tcon *tcon,
+ 	struct kvec close_iov[1];
+ 	struct smb2_ioctl_rsp *ioctl_rsp;
+ 	struct reparse_data_buffer *reparse_buf;
+-	u32 plen;
++	u32 off, count, len;
+ 
+ 	cifs_dbg(FYI, "%s: path: %s\n", __func__, full_path);
+ 
+@@ -3178,16 +3178,22 @@ smb2_query_reparse_tag(const unsigned int xid, struct cifs_tcon *tcon,
+ 	 */
+ 	if (rc == 0) {
+ 		/* See MS-FSCC 2.3.23 */
++		off = le32_to_cpu(ioctl_rsp->OutputOffset);
++		count = le32_to_cpu(ioctl_rsp->OutputCount);
++		if (check_add_overflow(off, count, &len) ||
++		    len > rsp_iov[1].iov_len) {
++			cifs_tcon_dbg(VFS, "%s: invalid ioctl: off=%d count=%d\n",
++				      __func__, off, count);
++			rc = -EIO;
++			goto query_rp_exit;
++		}
+ 
+-		reparse_buf = (struct reparse_data_buffer *)
+-			((char *)ioctl_rsp +
+-			 le32_to_cpu(ioctl_rsp->OutputOffset));
+-		plen = le32_to_cpu(ioctl_rsp->OutputCount);
+-
+-		if (plen + le32_to_cpu(ioctl_rsp->OutputOffset) >
+-		    rsp_iov[1].iov_len) {
+-			cifs_tcon_dbg(FYI, "srv returned invalid ioctl len: %d\n",
+-				 plen);
++		reparse_buf = (void *)((u8 *)ioctl_rsp + off);
++		len = sizeof(*reparse_buf);
++		if (count < len ||
++		    count < le16_to_cpu(reparse_buf->ReparseDataLength) + len) {
++			cifs_tcon_dbg(VFS, "%s: invalid ioctl: off=%d count=%d\n",
++				      __func__, off, count);
+ 			rc = -EIO;
+ 			goto query_rp_exit;
+ 		}
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 9a80047bc9b7b..76679dc4e6328 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -373,10 +373,15 @@ static int __smb2_plain_req_init(__le16 smb2_command, struct cifs_tcon *tcon,
+ 				 void **request_buf, unsigned int *total_len)
+ {
+ 	/* BB eventually switch this to SMB2 specific small buf size */
+-	if (smb2_command == SMB2_SET_INFO)
++	switch (smb2_command) {
++	case SMB2_SET_INFO:
++	case SMB2_QUERY_INFO:
+ 		*request_buf = cifs_buf_get();
+-	else
++		break;
++	default:
+ 		*request_buf = cifs_small_buf_get();
++		break;
++	}
+ 	if (*request_buf == NULL) {
+ 		/* BB should we add a retry in here if not a writepage? */
+ 		return -ENOMEM;
+@@ -3346,8 +3351,13 @@ SMB2_query_info_init(struct cifs_tcon *tcon, struct TCP_Server_Info *server,
+ 	struct smb2_query_info_req *req;
+ 	struct kvec *iov = rqst->rq_iov;
+ 	unsigned int total_len;
++	size_t len;
+ 	int rc;
+ 
++	if (unlikely(check_add_overflow(input_len, sizeof(*req), &len) ||
++		     len > CIFSMaxBufSize))
++		return -EINVAL;
++
+ 	rc = smb2_plain_req_init(SMB2_QUERY_INFO, tcon, server,
+ 				 (void **) &req, &total_len);
+ 	if (rc)
+@@ -3369,7 +3379,7 @@ SMB2_query_info_init(struct cifs_tcon *tcon, struct TCP_Server_Info *server,
+ 
+ 	iov[0].iov_base = (char *)req;
+ 	/* 1 for Buffer */
+-	iov[0].iov_len = total_len - 1 + input_len;
++	iov[0].iov_len = len;
+ 	return 0;
+ }
+ 
+@@ -3377,7 +3387,7 @@ void
+ SMB2_query_info_free(struct smb_rqst *rqst)
+ {
+ 	if (rqst && rqst->rq_iov)
+-		cifs_small_buf_release(rqst->rq_iov[0].iov_base); /* request */
++		cifs_buf_release(rqst->rq_iov[0].iov_base); /* request */
+ }
+ 
+ static int
+@@ -5104,6 +5114,11 @@ build_qfs_info_req(struct kvec *iov, struct cifs_tcon *tcon,
+ 	return 0;
+ }
+ 
++static inline void free_qfs_info_req(struct kvec *iov)
++{
++	cifs_buf_release(iov->iov_base);
++}
++
+ int
+ SMB311_posix_qfs_info(const unsigned int xid, struct cifs_tcon *tcon,
+ 	      u64 persistent_fid, u64 volatile_fid, struct kstatfs *fsdata)
+@@ -5135,7 +5150,7 @@ SMB311_posix_qfs_info(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ 	rc = cifs_send_recv(xid, ses, server,
+ 			    &rqst, &resp_buftype, flags, &rsp_iov);
+-	cifs_small_buf_release(iov.iov_base);
++	free_qfs_info_req(&iov);
+ 	if (rc) {
+ 		cifs_stats_fail_inc(tcon, SMB2_QUERY_INFO_HE);
+ 		goto posix_qfsinf_exit;
+@@ -5186,7 +5201,7 @@ SMB2_QFS_info(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ 	rc = cifs_send_recv(xid, ses, server,
+ 			    &rqst, &resp_buftype, flags, &rsp_iov);
+-	cifs_small_buf_release(iov.iov_base);
++	free_qfs_info_req(&iov);
+ 	if (rc) {
+ 		cifs_stats_fail_inc(tcon, SMB2_QUERY_INFO_HE);
+ 		goto qfsinf_exit;
+@@ -5253,7 +5268,7 @@ SMB2_QFS_attr(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ 	rc = cifs_send_recv(xid, ses, server,
+ 			    &rqst, &resp_buftype, flags, &rsp_iov);
+-	cifs_small_buf_release(iov.iov_base);
++	free_qfs_info_req(&iov);
+ 	if (rc) {
+ 		cifs_stats_fail_inc(tcon, SMB2_QUERY_INFO_HE);
+ 		goto qfsattr_exit;
+diff --git a/fs/cifs/smb2pdu.h b/fs/cifs/smb2pdu.h
+index 56ec9fba3925b..89a732b31390e 100644
+--- a/fs/cifs/smb2pdu.h
++++ b/fs/cifs/smb2pdu.h
+@@ -755,7 +755,7 @@ struct smb2_tree_disconnect_rsp {
+ #define SMB2_CREATE_SD_BUFFER			"SecD" /* security descriptor */
+ #define SMB2_CREATE_DURABLE_HANDLE_REQUEST	"DHnQ"
+ #define SMB2_CREATE_DURABLE_HANDLE_RECONNECT	"DHnC"
+-#define SMB2_CREATE_ALLOCATION_SIZE		"AISi"
++#define SMB2_CREATE_ALLOCATION_SIZE		"AlSi"
+ #define SMB2_CREATE_QUERY_MAXIMAL_ACCESS_REQUEST "MxAc"
+ #define SMB2_CREATE_TIMEWARP_REQUEST		"TWrp"
+ #define SMB2_CREATE_QUERY_ON_DISK_ID		"QFid"
+diff --git a/include/linux/key-type.h b/include/linux/key-type.h
+index 2ab2d6d6aeab8..7de851a9af8f3 100644
+--- a/include/linux/key-type.h
++++ b/include/linux/key-type.h
+@@ -72,6 +72,7 @@ struct key_type {
+ 
+ 	unsigned int flags;
+ #define KEY_TYPE_NET_DOMAIN	0x00000001 /* Keys of this type have a net namespace domain */
++#define KEY_TYPE_INSTANT_REAP	0x00000002 /* Keys of this type don't have a delay after expiring */
+ 
+ 	/* vet a description */
+ 	int (*vet_description)(const char *description);
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index e33433ec4a98f..a168a64696b6b 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -174,6 +174,7 @@ struct blocked_key {
+ struct smp_csrk {
+ 	bdaddr_t bdaddr;
+ 	u8 bdaddr_type;
++	u8 link_type;
+ 	u8 type;
+ 	u8 val[16];
+ };
+@@ -183,6 +184,7 @@ struct smp_ltk {
+ 	struct rcu_head rcu;
+ 	bdaddr_t bdaddr;
+ 	u8 bdaddr_type;
++	u8 link_type;
+ 	u8 authenticated;
+ 	u8 type;
+ 	u8 enc_size;
+@@ -197,6 +199,7 @@ struct smp_irk {
+ 	bdaddr_t rpa;
+ 	bdaddr_t bdaddr;
+ 	u8 addr_type;
++	u8 link_type;
+ 	u8 val[16];
+ };
+ 
+@@ -204,6 +207,8 @@ struct link_key {
+ 	struct list_head list;
+ 	struct rcu_head rcu;
+ 	bdaddr_t bdaddr;
++	u8 bdaddr_type;
++	u8 link_type;
+ 	u8 type;
+ 	u8 val[HCI_LINK_KEY_SIZE];
+ 	u8 pin_len;
+diff --git a/include/net/bluetooth/mgmt.h b/include/net/bluetooth/mgmt.h
+index 6b55155e05e97..faaba22e0d386 100644
+--- a/include/net/bluetooth/mgmt.h
++++ b/include/net/bluetooth/mgmt.h
+@@ -202,7 +202,7 @@ struct mgmt_cp_load_link_keys {
+ struct mgmt_ltk_info {
+ 	struct mgmt_addr_info addr;
+ 	__u8	type;
+-	__u8	master;
++	__u8	initiator;
+ 	__u8	enc_size;
+ 	__le16	ediv;
+ 	__le64	rand;
+diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
+index b1c9b52876f3c..2e26eb0d353e8 100644
+--- a/include/scsi/scsi_cmnd.h
++++ b/include/scsi/scsi_cmnd.h
+@@ -65,6 +65,12 @@ struct scsi_pointer {
+ #define SCMD_STATE_COMPLETE	0
+ #define SCMD_STATE_INFLIGHT	1
+ 
++enum scsi_cmnd_submitter {
++	SUBMITTED_BY_BLOCK_LAYER = 0,
++	SUBMITTED_BY_SCSI_ERROR_HANDLER = 1,
++	SUBMITTED_BY_SCSI_RESET_IOCTL = 2,
++} __packed;
++
+ struct scsi_cmnd {
+ 	struct scsi_request req;
+ 	struct scsi_device *device;
+@@ -88,6 +94,7 @@ struct scsi_cmnd {
+ 	unsigned char prot_op;
+ 	unsigned char prot_type;
+ 	unsigned char prot_flags;
++	enum scsi_cmnd_submitter submitter;
+ 
+ 	unsigned short cmd_len;
+ 	enum dma_data_direction sc_data_direction;
+@@ -162,7 +169,9 @@ static inline void *scsi_cmd_priv(struct scsi_cmnd *cmd)
+ /* make sure not to use it with passthrough commands */
+ static inline struct scsi_driver *scsi_cmd_to_driver(struct scsi_cmnd *cmd)
+ {
+-	return *(struct scsi_driver **)cmd->request->rq_disk->private_data;
++	struct request *rq = scsi_cmd_to_rq(cmd);
++
++	return *(struct scsi_driver **)rq->rq_disk->private_data;
+ }
+ 
+ extern void scsi_finish_command(struct scsi_cmnd *cmd);
+@@ -224,6 +233,18 @@ static inline int scsi_sg_copy_to_buffer(struct scsi_cmnd *cmd,
+ 				 buf, buflen);
+ }
+ 
++static inline sector_t scsi_get_sector(struct scsi_cmnd *scmd)
++{
++	return blk_rq_pos(scsi_cmd_to_rq(scmd));
++}
++
++static inline sector_t scsi_get_lba(struct scsi_cmnd *scmd)
++{
++	unsigned int shift = ilog2(scmd->device->sector_size) - SECTOR_SHIFT;
++
++	return blk_rq_pos(scsi_cmd_to_rq(scmd)) >> shift;
++}
++
+ /*
+  * The operations below are hints that tell the controller driver how
+  * to handle I/Os with DIF or similar types of protection information.
+@@ -286,9 +307,11 @@ static inline unsigned char scsi_get_prot_type(struct scsi_cmnd *scmd)
+ 	return scmd->prot_type;
+ }
+ 
+-static inline sector_t scsi_get_lba(struct scsi_cmnd *scmd)
++static inline u32 scsi_prot_ref_tag(struct scsi_cmnd *scmd)
+ {
+-	return blk_rq_pos(scmd->request);
++	struct request *rq = blk_mq_rq_from_pdu(scmd);
++
++	return t10_pi_ref_tag(rq);
+ }
+ 
+ static inline unsigned int scsi_prot_interval(struct scsi_cmnd *scmd)
+diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
+index 1a5c9a3df6d69..993e1e79dd0ce 100644
+--- a/include/scsi/scsi_device.h
++++ b/include/scsi/scsi_device.h
+@@ -264,13 +264,15 @@ sdev_prefix_printk(const char *, const struct scsi_device *, const char *,
+ __printf(3, 4) void
+ scmd_printk(const char *, const struct scsi_cmnd *, const char *, ...);
+ 
+-#define scmd_dbg(scmd, fmt, a...)					   \
+-	do {								   \
+-		if ((scmd)->request->rq_disk)				   \
+-			sdev_dbg((scmd)->device, "[%s] " fmt,		   \
+-				 (scmd)->request->rq_disk->disk_name, ##a);\
+-		else							   \
+-			sdev_dbg((scmd)->device, fmt, ##a);		   \
++#define scmd_dbg(scmd, fmt, a...)					\
++	do {								\
++		struct request *__rq = scsi_cmd_to_rq((scmd));		\
++									\
++		if (__rq->rq_disk)					\
++			sdev_dbg((scmd)->device, "[%s] " fmt,		\
++				 __rq->rq_disk->disk_name, ##a);	\
++		else							\
++			sdev_dbg((scmd)->device, fmt, ##a);		\
+ 	} while (0)
+ 
+ enum scsi_target_state {
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 364fa91ab33e4..5abe88091803e 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -810,9 +810,14 @@ static __always_inline bool full_hit(struct trace_buffer *buffer, int cpu, int f
+ 	if (!nr_pages || !full)
+ 		return true;
+ 
+-	dirty = ring_buffer_nr_dirty_pages(buffer, cpu);
++	/*
++	 * Add one as dirty will never equal nr_pages, as the sub-buffer
++	 * that the writer is on is not counted as dirty.
++	 * This is needed if "buffer_percent" is set to 100.
++	 */
++	dirty = ring_buffer_nr_dirty_pages(buffer, cpu) + 1;
+ 
+-	return (dirty * 100) > (full * nr_pages);
++	return (dirty * 100) >= (full * nr_pages);
+ }
+ 
+ /*
+@@ -861,7 +866,8 @@ void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu)
+ 	/* make sure the waiters see the new index */
+ 	smp_wmb();
+ 
+-	rb_wake_up_waiters(&rbwork->work);
++	/* This can be called in any context */
++	irq_work_queue(&rbwork->work);
+ }
+ 
+ /**
+diff --git a/kernel/trace/synth_event_gen_test.c b/kernel/trace/synth_event_gen_test.c
+index a6a2813afb87f..a5050f3f6f028 100644
+--- a/kernel/trace/synth_event_gen_test.c
++++ b/kernel/trace/synth_event_gen_test.c
+@@ -477,6 +477,17 @@ static int __init synth_event_gen_test_init(void)
+ 
+ 	ret = test_trace_synth_event();
+ 	WARN_ON(ret);
++
++	/* Disable when done */
++	trace_array_set_clr_event(gen_synth_test->tr,
++				  "synthetic",
++				  "gen_synth_test", false);
++	trace_array_set_clr_event(empty_synth_test->tr,
++				  "synthetic",
++				  "empty_synth_test", false);
++	trace_array_set_clr_event(create_synth_test->tr,
++				  "synthetic",
++				  "create_synth_test", false);
+  out:
+ 	return ret;
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index b2527624481de..0cbf833bebccf 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1892,17 +1892,31 @@ update_max_tr_single(struct trace_array *tr, struct task_struct *tsk, int cpu)
+ 
+ 	__update_max_tr(tr, tsk, cpu);
+ 	arch_spin_unlock(&tr->max_lock);
++
++	/* Any waiters on the old snapshot buffer need to wake up */
++	ring_buffer_wake_waiters(tr->array_buffer.buffer, RING_BUFFER_ALL_CPUS);
+ }
+ #endif /* CONFIG_TRACER_MAX_TRACE */
+ 
+ static int wait_on_pipe(struct trace_iterator *iter, int full)
+ {
++	int ret;
++
+ 	/* Iterators are static, they should be filled or empty */
+ 	if (trace_buffer_iter(iter, iter->cpu_file))
+ 		return 0;
+ 
+-	return ring_buffer_wait(iter->array_buffer->buffer, iter->cpu_file,
+-				full);
++	ret = ring_buffer_wait(iter->array_buffer->buffer, iter->cpu_file, full);
++
++#ifdef CONFIG_TRACER_MAX_TRACE
++	/*
++	 * Make sure this is still the snapshot buffer, as if a snapshot were
++	 * to happen, this would now be the main buffer.
++	 */
++	if (iter->snapshot)
++		iter->array_buffer = &iter->tr->max_buffer;
++#endif
++	return ret;
+ }
+ 
+ #ifdef CONFIG_FTRACE_STARTUP_TEST
+@@ -7953,7 +7967,7 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
+ 		if ((file->f_flags & O_NONBLOCK) || (flags & SPLICE_F_NONBLOCK))
+ 			goto out;
+ 
+-		ret = wait_on_pipe(iter, iter->tr->buffer_percent);
++		ret = wait_on_pipe(iter, iter->snapshot ? 0 : iter->tr->buffer_percent);
+ 		if (ret)
+ 			goto out;
+ 
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index daf32a489dc06..b08b8ee1bbc01 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -1984,15 +1984,20 @@ char *fwnode_full_name_string(struct fwnode_handle *fwnode, char *buf,
+ 
+ 	/* Loop starting from the root node to the current node. */
+ 	for (depth = fwnode_count_parents(fwnode); depth >= 0; depth--) {
+-		struct fwnode_handle *__fwnode =
+-			fwnode_get_nth_parent(fwnode, depth);
++		/*
++		 * Only get a reference for other nodes (i.e. parent nodes).
++		 * fwnode refcount may be 0 here.
++		 */
++		struct fwnode_handle *__fwnode = depth ?
++			fwnode_get_nth_parent(fwnode, depth) : fwnode;
+ 
+ 		buf = string(buf, end, fwnode_get_name_prefix(__fwnode),
+ 			     default_str_spec);
+ 		buf = string(buf, end, fwnode_get_name(__fwnode),
+ 			     default_str_spec);
+ 
+-		fwnode_handle_put(__fwnode);
++		if (depth)
++			fwnode_handle_put(__fwnode);
+ 	}
+ 
+ 	return buf;
+diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c
+index 78ec2e1b14d15..43aea97c57620 100644
+--- a/net/8021q/vlan_core.c
++++ b/net/8021q/vlan_core.c
+@@ -406,6 +406,8 @@ int vlan_vids_add_by_dev(struct net_device *dev,
+ 		return 0;
+ 
+ 	list_for_each_entry(vid_info, &vlan_info->vid_list, list) {
++		if (!vlan_hw_filter_capable(by_dev, vid_info->proto))
++			continue;
+ 		err = vlan_vid_add(dev, vid_info->proto, vid_info->vid);
+ 		if (err)
+ 			goto unwind;
+@@ -416,6 +418,8 @@ unwind:
+ 	list_for_each_entry_continue_reverse(vid_info,
+ 					     &vlan_info->vid_list,
+ 					     list) {
++		if (!vlan_hw_filter_capable(by_dev, vid_info->proto))
++			continue;
+ 		vlan_vid_del(dev, vid_info->proto, vid_info->vid);
+ 	}
+ 
+@@ -435,8 +439,11 @@ void vlan_vids_del_by_dev(struct net_device *dev,
+ 	if (!vlan_info)
+ 		return;
+ 
+-	list_for_each_entry(vid_info, &vlan_info->vid_list, list)
++	list_for_each_entry(vid_info, &vlan_info->vid_list, list) {
++		if (!vlan_hw_filter_capable(by_dev, vid_info->proto))
++			continue;
+ 		vlan_vid_del(dev, vid_info->proto, vid_info->vid);
++	}
+ }
+ EXPORT_SYMBOL(vlan_vids_del_by_dev);
+ 
+diff --git a/net/9p/client.c b/net/9p/client.c
+index e8862cd4f91b4..cd85a4b6448b4 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -520,11 +520,14 @@ static int p9_check_errors(struct p9_client *c, struct p9_req_t *req)
+ 		return 0;
+ 
+ 	if (!p9_is_proto_dotl(c)) {
+-		char *ename;
++		char *ename = NULL;
++
+ 		err = p9pdu_readf(&req->rc, c->proto_version, "s?d",
+ 				  &ename, &ecode);
+-		if (err)
++		if (err) {
++			kfree(ename);
+ 			goto out_err;
++		}
+ 
+ 		if (p9_is_proto_dotu(c) && ecode < 512)
+ 			err = -ecode;
+diff --git a/net/9p/protocol.c b/net/9p/protocol.c
+index 03593eb240d87..ef0a82dbbe042 100644
+--- a/net/9p/protocol.c
++++ b/net/9p/protocol.c
+@@ -228,6 +228,8 @@ p9pdu_vreadf(struct p9_fcall *pdu, int proto_version, const char *fmt,
+ 				uint16_t *nwname = va_arg(ap, uint16_t *);
+ 				char ***wnames = va_arg(ap, char ***);
+ 
++				*wnames = NULL;
++
+ 				errcode = p9pdu_readf(pdu, proto_version,
+ 								"w", nwname);
+ 				if (!errcode) {
+@@ -237,6 +239,8 @@ p9pdu_vreadf(struct p9_fcall *pdu, int proto_version, const char *fmt,
+ 							  GFP_NOFS);
+ 					if (!*wnames)
+ 						errcode = -ENOMEM;
++					else
++						(*wnames)[0] = NULL;
+ 				}
+ 
+ 				if (!errcode) {
+@@ -248,8 +252,10 @@ p9pdu_vreadf(struct p9_fcall *pdu, int proto_version, const char *fmt,
+ 								proto_version,
+ 								"s",
+ 								&(*wnames)[i]);
+-						if (errcode)
++						if (errcode) {
++							(*wnames)[i] = NULL;
+ 							break;
++						}
+ 					}
+ 				}
+ 
+@@ -257,11 +263,14 @@ p9pdu_vreadf(struct p9_fcall *pdu, int proto_version, const char *fmt,
+ 					if (*wnames) {
+ 						int i;
+ 
+-						for (i = 0; i < *nwname; i++)
++						for (i = 0; i < *nwname; i++) {
++							if (!(*wnames)[i])
++								break;
+ 							kfree((*wnames)[i]);
++						}
++						kfree(*wnames);
++						*wnames = NULL;
+ 					}
+-					kfree(*wnames);
+-					*wnames = NULL;
+ 				}
+ 			}
+ 			break;
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index 2f87f57e7a4fd..14a917e70f3ee 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -263,11 +263,14 @@ int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 	if (flags & MSG_OOB)
+ 		return -EOPNOTSUPP;
+ 
++	lock_sock(sk);
++
+ 	skb = skb_recv_datagram(sk, flags, noblock, &err);
+ 	if (!skb) {
+ 		if (sk->sk_shutdown & RCV_SHUTDOWN)
+-			return 0;
++			err = 0;
+ 
++		release_sock(sk);
+ 		return err;
+ 	}
+ 
+@@ -293,6 +296,8 @@ int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 
+ 	skb_free_datagram(sk, skb);
+ 
++	release_sock(sk);
++
+ 	if (flags & MSG_TRUNC)
+ 		copied = skblen;
+ 
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index ad5294de97594..ee2c1a17366a2 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -1834,7 +1834,8 @@ static void hci_cs_inquiry(struct hci_dev *hdev, __u8 status)
+ 		return;
+ 	}
+ 
+-	set_bit(HCI_INQUIRY, &hdev->flags);
++	if (hci_sent_cmd_data(hdev, HCI_OP_INQUIRY))
++		set_bit(HCI_INQUIRY, &hdev->flags);
+ }
+ 
+ static void hci_cs_create_conn(struct hci_dev *hdev, __u8 status)
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 7b40e4737a2bb..cf78a48085eda 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -6488,6 +6488,14 @@ drop:
+ 	kfree_skb(skb);
+ }
+ 
++static inline void l2cap_sig_send_rej(struct l2cap_conn *conn, u16 ident)
++{
++	struct l2cap_cmd_rej_unk rej;
++
++	rej.reason = cpu_to_le16(L2CAP_REJ_NOT_UNDERSTOOD);
++	l2cap_send_cmd(conn, ident, L2CAP_COMMAND_REJ, sizeof(rej), &rej);
++}
++
+ static inline void l2cap_sig_channel(struct l2cap_conn *conn,
+ 				     struct sk_buff *skb)
+ {
+@@ -6513,23 +6521,24 @@ static inline void l2cap_sig_channel(struct l2cap_conn *conn,
+ 
+ 		if (len > skb->len || !cmd->ident) {
+ 			BT_DBG("corrupted command");
++			l2cap_sig_send_rej(conn, cmd->ident);
+ 			break;
+ 		}
+ 
+ 		err = l2cap_bredr_sig_cmd(conn, cmd, len, skb->data);
+ 		if (err) {
+-			struct l2cap_cmd_rej_unk rej;
+-
+ 			BT_ERR("Wrong link type (%d)", err);
+-
+-			rej.reason = cpu_to_le16(L2CAP_REJ_NOT_UNDERSTOOD);
+-			l2cap_send_cmd(conn, cmd->ident, L2CAP_COMMAND_REJ,
+-				       sizeof(rej), &rej);
++			l2cap_sig_send_rej(conn, cmd->ident);
+ 		}
+ 
+ 		skb_pull(skb, len);
+ 	}
+ 
++	if (skb->len > 0) {
++		BT_DBG("corrupted command");
++		l2cap_sig_send_rej(conn, 0);
++	}
++
+ drop:
+ 	kfree_skb(skb);
+ }
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 878bf73822449..bd8cfcfca7aef 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -2373,7 +2373,8 @@ static int load_link_keys(struct sock *sk, struct hci_dev *hdev, void *data,
+ 	for (i = 0; i < key_count; i++) {
+ 		struct mgmt_link_key_info *key = &cp->keys[i];
+ 
+-		if (key->addr.type != BDADDR_BREDR || key->type > 0x08)
++		/* Considering SMP over BREDR/LE, there is no need to check addr_type */
++		if (key->type > 0x08)
+ 			return mgmt_cmd_status(sk, hdev->id,
+ 					       MGMT_OP_LOAD_LINK_KEYS,
+ 					       MGMT_STATUS_INVALID_PARAMS);
+@@ -5914,6 +5915,7 @@ static int load_irks(struct sock *sk, struct hci_dev *hdev, void *cp_data,
+ 
+ 	for (i = 0; i < irk_count; i++) {
+ 		struct mgmt_irk_info *irk = &cp->irks[i];
++		u8 addr_type = le_addr_type(irk->addr.type);
+ 
+ 		if (hci_is_blocked_key(hdev,
+ 				       HCI_BLOCKED_KEY_TYPE_IRK,
+@@ -5923,8 +5925,12 @@ static int load_irks(struct sock *sk, struct hci_dev *hdev, void *cp_data,
+ 			continue;
+ 		}
+ 
++		/* When using SMP over BR/EDR, the addr type should be set to BREDR */
++		if (irk->addr.type == BDADDR_BREDR)
++			addr_type = BDADDR_BREDR;
++
+ 		hci_add_irk(hdev, &irk->addr.bdaddr,
+-			    le_addr_type(irk->addr.type), irk->val,
++			    addr_type, irk->val,
+ 			    BDADDR_ANY);
+ 	}
+ 
+@@ -5939,7 +5945,7 @@ static int load_irks(struct sock *sk, struct hci_dev *hdev, void *cp_data,
+ 
+ static bool ltk_is_valid(struct mgmt_ltk_info *key)
+ {
+-	if (key->master != 0x00 && key->master != 0x01)
++	if (key->initiator != 0x00 && key->initiator != 0x01)
+ 		return false;
+ 
+ 	switch (key->addr.type) {
+@@ -6005,6 +6011,7 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
+ 	for (i = 0; i < key_count; i++) {
+ 		struct mgmt_ltk_info *key = &cp->keys[i];
+ 		u8 type, authenticated;
++		u8 addr_type = le_addr_type(key->addr.type);
+ 
+ 		if (hci_is_blocked_key(hdev,
+ 				       HCI_BLOCKED_KEY_TYPE_LTK,
+@@ -6017,11 +6024,11 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
+ 		switch (key->type) {
+ 		case MGMT_LTK_UNAUTHENTICATED:
+ 			authenticated = 0x00;
+-			type = key->master ? SMP_LTK : SMP_LTK_SLAVE;
++			type = key->initiator ? SMP_LTK : SMP_LTK_RESPONDER;
+ 			break;
+ 		case MGMT_LTK_AUTHENTICATED:
+ 			authenticated = 0x01;
+-			type = key->master ? SMP_LTK : SMP_LTK_SLAVE;
++			type = key->initiator ? SMP_LTK : SMP_LTK_RESPONDER;
+ 			break;
+ 		case MGMT_LTK_P256_UNAUTH:
+ 			authenticated = 0x00;
+@@ -6039,8 +6046,12 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
+ 			continue;
+ 		}
+ 
++		/* When using SMP over BR/EDR, the addr type should be set to BREDR */
++		if (key->addr.type == BDADDR_BREDR)
++			addr_type = BDADDR_BREDR;
++
+ 		hci_add_ltk(hdev, &key->addr.bdaddr,
+-			    le_addr_type(key->addr.type), type, authenticated,
++			    addr_type, type, authenticated,
+ 			    key->val, key->enc_size, key->ediv, key->rand);
+ 	}
+ 
+@@ -8043,7 +8054,7 @@ void mgmt_new_link_key(struct hci_dev *hdev, struct link_key *key,
+ 
+ 	ev.store_hint = persistent;
+ 	bacpy(&ev.key.addr.bdaddr, &key->bdaddr);
+-	ev.key.addr.type = BDADDR_BREDR;
++	ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type);
+ 	ev.key.type = key->type;
+ 	memcpy(ev.key.val, key->val, HCI_LINK_KEY_SIZE);
+ 	ev.key.pin_len = key->pin_len;
+@@ -8055,7 +8066,7 @@ static u8 mgmt_ltk_type(struct smp_ltk *ltk)
+ {
+ 	switch (ltk->type) {
+ 	case SMP_LTK:
+-	case SMP_LTK_SLAVE:
++	case SMP_LTK_RESPONDER:
+ 		if (ltk->authenticated)
+ 			return MGMT_LTK_AUTHENTICATED;
+ 		return MGMT_LTK_UNAUTHENTICATED;
+@@ -8094,14 +8105,14 @@ void mgmt_new_ltk(struct hci_dev *hdev, struct smp_ltk *key, bool persistent)
+ 		ev.store_hint = persistent;
+ 
+ 	bacpy(&ev.key.addr.bdaddr, &key->bdaddr);
+-	ev.key.addr.type = link_to_bdaddr(LE_LINK, key->bdaddr_type);
++	ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type);
+ 	ev.key.type = mgmt_ltk_type(key);
+ 	ev.key.enc_size = key->enc_size;
+ 	ev.key.ediv = key->ediv;
+ 	ev.key.rand = key->rand;
+ 
+ 	if (key->type == SMP_LTK)
+-		ev.key.master = 1;
++		ev.key.initiator = 1;
+ 
+ 	/* Make sure we copy only the significant bytes based on the
+ 	 * encryption key size, and set the rest of the value to zeroes.
+@@ -8123,7 +8134,7 @@ void mgmt_new_irk(struct hci_dev *hdev, struct smp_irk *irk, bool persistent)
+ 
+ 	bacpy(&ev.rpa, &irk->rpa);
+ 	bacpy(&ev.irk.addr.bdaddr, &irk->bdaddr);
+-	ev.irk.addr.type = link_to_bdaddr(LE_LINK, irk->addr_type);
++	ev.irk.addr.type = link_to_bdaddr(irk->link_type, irk->addr_type);
+ 	memcpy(ev.irk.val, irk->val, sizeof(irk->val));
+ 
+ 	mgmt_event(MGMT_EV_NEW_IRK, hdev, &ev, sizeof(ev), NULL);
+@@ -8152,7 +8163,7 @@ void mgmt_new_csrk(struct hci_dev *hdev, struct smp_csrk *csrk,
+ 		ev.store_hint = persistent;
+ 
+ 	bacpy(&ev.key.addr.bdaddr, &csrk->bdaddr);
+-	ev.key.addr.type = link_to_bdaddr(LE_LINK, csrk->bdaddr_type);
++	ev.key.addr.type = link_to_bdaddr(csrk->link_type, csrk->bdaddr_type);
+ 	ev.key.type = csrk->type;
+ 	memcpy(ev.key.val, csrk->val, sizeof(csrk->val));
+ 
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index b7374dbee23a3..27381e7425a8f 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -112,9 +112,9 @@ struct smp_chan {
+ 	u8		id_addr_type;
+ 	u8		irk[16];
+ 	struct smp_csrk	*csrk;
+-	struct smp_csrk	*slave_csrk;
++	struct smp_csrk	*responder_csrk;
+ 	struct smp_ltk	*ltk;
+-	struct smp_ltk	*slave_ltk;
++	struct smp_ltk	*responder_ltk;
+ 	struct smp_irk	*remote_irk;
+ 	u8		*link_key;
+ 	unsigned long	flags;
+@@ -596,7 +596,7 @@ static void smp_send_cmd(struct l2cap_conn *conn, u8 code, u16 len, void *data)
+ 	if (!chan)
+ 		return;
+ 
+-	BT_DBG("code 0x%2.2x", code);
++	bt_dev_dbg(conn->hcon->hdev, "code 0x%2.2x", code);
+ 
+ 	iv[0].iov_base = &code;
+ 	iv[0].iov_len = 1;
+@@ -754,7 +754,7 @@ static void smp_chan_destroy(struct l2cap_conn *conn)
+ 	mgmt_smp_complete(hcon, complete);
+ 
+ 	kfree_sensitive(smp->csrk);
+-	kfree_sensitive(smp->slave_csrk);
++	kfree_sensitive(smp->responder_csrk);
+ 	kfree_sensitive(smp->link_key);
+ 
+ 	crypto_free_shash(smp->tfm_cmac);
+@@ -777,9 +777,9 @@ static void smp_chan_destroy(struct l2cap_conn *conn)
+ 			kfree_rcu(smp->ltk, rcu);
+ 		}
+ 
+-		if (smp->slave_ltk) {
+-			list_del_rcu(&smp->slave_ltk->list);
+-			kfree_rcu(smp->slave_ltk, rcu);
++		if (smp->responder_ltk) {
++			list_del_rcu(&smp->responder_ltk->list);
++			kfree_rcu(smp->responder_ltk, rcu);
+ 		}
+ 
+ 		if (smp->remote_irk) {
+@@ -860,7 +860,8 @@ static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth,
+ 	memset(smp->tk, 0, sizeof(smp->tk));
+ 	clear_bit(SMP_FLAG_TK_VALID, &smp->flags);
+ 
+-	BT_DBG("tk_request: auth:%d lcl:%d rem:%d", auth, local_io, remote_io);
++	bt_dev_dbg(hcon->hdev, "auth:%d lcl:%d rem:%d", auth, local_io,
++		   remote_io);
+ 
+ 	/* If neither side wants MITM, either "just" confirm an incoming
+ 	 * request or use just-works for outgoing ones. The JUST_CFM
+@@ -925,7 +926,7 @@ static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth,
+ 		get_random_bytes(&passkey, sizeof(passkey));
+ 		passkey %= 1000000;
+ 		put_unaligned_le32(passkey, smp->tk);
+-		BT_DBG("PassKey: %d", passkey);
++		bt_dev_dbg(hcon->hdev, "PassKey: %d", passkey);
+ 		set_bit(SMP_FLAG_TK_VALID, &smp->flags);
+ 	}
+ 
+@@ -950,7 +951,7 @@ static u8 smp_confirm(struct smp_chan *smp)
+ 	struct smp_cmd_pairing_confirm cp;
+ 	int ret;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(conn->hcon->hdev, "conn %p", conn);
+ 
+ 	ret = smp_c1(smp->tk, smp->prnd, smp->preq, smp->prsp,
+ 		     conn->hcon->init_addr_type, &conn->hcon->init_addr,
+@@ -978,7 +979,8 @@ static u8 smp_random(struct smp_chan *smp)
+ 	u8 confirm[16];
+ 	int ret;
+ 
+-	BT_DBG("conn %p %s", conn, conn->hcon->out ? "master" : "slave");
++	bt_dev_dbg(conn->hcon->hdev, "conn %p %s", conn,
++		   conn->hcon->out ? "initiator" : "responder");
+ 
+ 	ret = smp_c1(smp->tk, smp->rrnd, smp->preq, smp->prsp,
+ 		     hcon->init_addr_type, &hcon->init_addr,
+@@ -1020,8 +1022,8 @@ static u8 smp_random(struct smp_chan *smp)
+ 		else
+ 			auth = 0;
+ 
+-		/* Even though there's no _SLAVE suffix this is the
+-		 * slave STK we're adding for later lookup (the master
++		/* Even though there's no _RESPONDER suffix this is the
++		 * responder STK we're adding for later lookup (the initiator
+ 		 * STK never needs to be stored).
+ 		 */
+ 		hci_add_ltk(hcon->hdev, &hcon->dst, hcon->dst_type,
+@@ -1057,6 +1059,7 @@ static void smp_notify_keys(struct l2cap_conn *conn)
+ 	}
+ 
+ 	if (smp->remote_irk) {
++		smp->remote_irk->link_type = hcon->type;
+ 		mgmt_new_irk(hdev, smp->remote_irk, persistent);
+ 
+ 		/* Now that user space can be considered to know the
+@@ -1071,27 +1074,31 @@ static void smp_notify_keys(struct l2cap_conn *conn)
+ 	}
+ 
+ 	if (smp->csrk) {
++		smp->csrk->link_type = hcon->type;
+ 		smp->csrk->bdaddr_type = hcon->dst_type;
+ 		bacpy(&smp->csrk->bdaddr, &hcon->dst);
+ 		mgmt_new_csrk(hdev, smp->csrk, persistent);
+ 	}
+ 
+-	if (smp->slave_csrk) {
+-		smp->slave_csrk->bdaddr_type = hcon->dst_type;
+-		bacpy(&smp->slave_csrk->bdaddr, &hcon->dst);
+-		mgmt_new_csrk(hdev, smp->slave_csrk, persistent);
++	if (smp->responder_csrk) {
++		smp->responder_csrk->link_type = hcon->type;
++		smp->responder_csrk->bdaddr_type = hcon->dst_type;
++		bacpy(&smp->responder_csrk->bdaddr, &hcon->dst);
++		mgmt_new_csrk(hdev, smp->responder_csrk, persistent);
+ 	}
+ 
+ 	if (smp->ltk) {
++		smp->ltk->link_type = hcon->type;
+ 		smp->ltk->bdaddr_type = hcon->dst_type;
+ 		bacpy(&smp->ltk->bdaddr, &hcon->dst);
+ 		mgmt_new_ltk(hdev, smp->ltk, persistent);
+ 	}
+ 
+-	if (smp->slave_ltk) {
+-		smp->slave_ltk->bdaddr_type = hcon->dst_type;
+-		bacpy(&smp->slave_ltk->bdaddr, &hcon->dst);
+-		mgmt_new_ltk(hdev, smp->slave_ltk, persistent);
++	if (smp->responder_ltk) {
++		smp->responder_ltk->link_type = hcon->type;
++		smp->responder_ltk->bdaddr_type = hcon->dst_type;
++		bacpy(&smp->responder_ltk->bdaddr, &hcon->dst);
++		mgmt_new_ltk(hdev, smp->responder_ltk, persistent);
+ 	}
+ 
+ 	if (smp->link_key) {
+@@ -1108,6 +1115,8 @@ static void smp_notify_keys(struct l2cap_conn *conn)
+ 		key = hci_add_link_key(hdev, smp->conn->hcon, &hcon->dst,
+ 				       smp->link_key, type, 0, &persistent);
+ 		if (key) {
++			key->link_type = hcon->type;
++			key->bdaddr_type = hcon->dst_type;
+ 			mgmt_new_link_key(hdev, key, persistent);
+ 
+ 			/* Don't keep debug keys around if the relevant
+@@ -1237,7 +1246,7 @@ static void smp_distribute_keys(struct smp_chan *smp)
+ 	struct hci_dev *hdev = hcon->hdev;
+ 	__u8 *keydist;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(hdev, "conn %p", conn);
+ 
+ 	rsp = (void *) &smp->prsp[1];
+ 
+@@ -1267,11 +1276,11 @@ static void smp_distribute_keys(struct smp_chan *smp)
+ 		*keydist &= ~SMP_SC_NO_DIST;
+ 	}
+ 
+-	BT_DBG("keydist 0x%x", *keydist);
++	bt_dev_dbg(hdev, "keydist 0x%x", *keydist);
+ 
+ 	if (*keydist & SMP_DIST_ENC_KEY) {
+ 		struct smp_cmd_encrypt_info enc;
+-		struct smp_cmd_master_ident ident;
++		struct smp_cmd_initiator_ident ident;
+ 		struct smp_ltk *ltk;
+ 		u8 authenticated;
+ 		__le16 ediv;
+@@ -1292,14 +1301,15 @@ static void smp_distribute_keys(struct smp_chan *smp)
+ 
+ 		authenticated = hcon->sec_level == BT_SECURITY_HIGH;
+ 		ltk = hci_add_ltk(hdev, &hcon->dst, hcon->dst_type,
+-				  SMP_LTK_SLAVE, authenticated, enc.ltk,
++				  SMP_LTK_RESPONDER, authenticated, enc.ltk,
+ 				  smp->enc_key_size, ediv, rand);
+-		smp->slave_ltk = ltk;
++		smp->responder_ltk = ltk;
+ 
+ 		ident.ediv = ediv;
+ 		ident.rand = rand;
+ 
+-		smp_send_cmd(conn, SMP_CMD_MASTER_IDENT, sizeof(ident), &ident);
++		smp_send_cmd(conn, SMP_CMD_INITIATOR_IDENT, sizeof(ident),
++			     &ident);
+ 
+ 		*keydist &= ~SMP_DIST_ENC_KEY;
+ 	}
+@@ -1342,7 +1352,7 @@ static void smp_distribute_keys(struct smp_chan *smp)
+ 				csrk->type = MGMT_CSRK_LOCAL_UNAUTHENTICATED;
+ 			memcpy(csrk->val, sign.csrk, sizeof(csrk->val));
+ 		}
+-		smp->slave_csrk = csrk;
++		smp->responder_csrk = csrk;
+ 
+ 		smp_send_cmd(conn, SMP_CMD_SIGN_INFO, sizeof(sign), &sign);
+ 
+@@ -1367,13 +1377,14 @@ static void smp_timeout(struct work_struct *work)
+ 					    security_timer.work);
+ 	struct l2cap_conn *conn = smp->conn;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(conn->hcon->hdev, "conn %p", conn);
+ 
+ 	hci_disconnect(conn->hcon, HCI_ERROR_REMOTE_USER_TERM);
+ }
+ 
+ static struct smp_chan *smp_chan_create(struct l2cap_conn *conn)
+ {
++	struct hci_conn *hcon = conn->hcon;
+ 	struct l2cap_chan *chan = conn->smp;
+ 	struct smp_chan *smp;
+ 
+@@ -1383,13 +1394,13 @@ static struct smp_chan *smp_chan_create(struct l2cap_conn *conn)
+ 
+ 	smp->tfm_cmac = crypto_alloc_shash("cmac(aes)", 0, 0);
+ 	if (IS_ERR(smp->tfm_cmac)) {
+-		BT_ERR("Unable to create CMAC crypto context");
++		bt_dev_err(hcon->hdev, "Unable to create CMAC crypto context");
+ 		goto zfree_smp;
+ 	}
+ 
+ 	smp->tfm_ecdh = crypto_alloc_kpp("ecdh", 0, 0);
+ 	if (IS_ERR(smp->tfm_ecdh)) {
+-		BT_ERR("Unable to create ECDH crypto context");
++		bt_dev_err(hcon->hdev, "Unable to create ECDH crypto context");
+ 		goto free_shash;
+ 	}
+ 
+@@ -1400,7 +1411,7 @@ static struct smp_chan *smp_chan_create(struct l2cap_conn *conn)
+ 
+ 	INIT_DELAYED_WORK(&smp->security_timer, smp_timeout);
+ 
+-	hci_conn_hold(conn->hcon);
++	hci_conn_hold(hcon);
+ 
+ 	return smp;
+ 
+@@ -1565,8 +1576,8 @@ static u8 sc_passkey_round(struct smp_chan *smp, u8 smp_op)
+ 		if (!hcon->out)
+ 			return 0;
+ 
+-		BT_DBG("%s Starting passkey round %u", hdev->name,
+-		       smp->passkey_round + 1);
++		bt_dev_dbg(hdev, "Starting passkey round %u",
++			   smp->passkey_round + 1);
+ 
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_CONFIRM);
+ 
+@@ -1626,7 +1637,7 @@ int smp_user_confirm_reply(struct hci_conn *hcon, u16 mgmt_op, __le32 passkey)
+ 	u32 value;
+ 	int err;
+ 
+-	BT_DBG("");
++	bt_dev_dbg(conn->hcon->hdev, "");
+ 
+ 	if (!conn)
+ 		return -ENOTCONN;
+@@ -1652,7 +1663,7 @@ int smp_user_confirm_reply(struct hci_conn *hcon, u16 mgmt_op, __le32 passkey)
+ 	case MGMT_OP_USER_PASSKEY_REPLY:
+ 		value = le32_to_cpu(passkey);
+ 		memset(smp->tk, 0, sizeof(smp->tk));
+-		BT_DBG("PassKey: %d", value);
++		bt_dev_dbg(conn->hcon->hdev, "PassKey: %d", value);
+ 		put_unaligned_le32(value, smp->tk);
+ 		fallthrough;
+ 	case MGMT_OP_USER_CONFIRM_REPLY:
+@@ -1734,7 +1745,7 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	u8 key_size, auth, sec_level;
+ 	int ret;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(hdev, "conn %p", conn);
+ 
+ 	if (skb->len < sizeof(*req))
+ 		return SMP_INVALID_PARAMS;
+@@ -1888,7 +1899,7 @@ static u8 sc_send_public_key(struct smp_chan *smp)
+ 	}
+ 
+ 	if (hci_dev_test_flag(hdev, HCI_USE_DEBUG_KEYS)) {
+-		BT_DBG("Using debug keys");
++		bt_dev_dbg(hdev, "Using debug keys");
+ 		if (set_ecdh_privkey(smp->tfm_ecdh, debug_sk))
+ 			return SMP_UNSPECIFIED;
+ 		memcpy(smp->local_pk, debug_pk, 64);
+@@ -1925,7 +1936,7 @@ static u8 smp_cmd_pairing_rsp(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	u8 key_size, auth;
+ 	int ret;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(hdev, "conn %p", conn);
+ 
+ 	if (skb->len < sizeof(*rsp))
+ 		return SMP_INVALID_PARAMS;
+@@ -2020,7 +2031,7 @@ static u8 sc_check_confirm(struct smp_chan *smp)
+ {
+ 	struct l2cap_conn *conn = smp->conn;
+ 
+-	BT_DBG("");
++	bt_dev_dbg(conn->hcon->hdev, "");
+ 
+ 	if (smp->method == REQ_PASSKEY || smp->method == DSP_PASSKEY)
+ 		return sc_passkey_round(smp, SMP_CMD_PAIRING_CONFIRM);
+@@ -2046,7 +2057,7 @@ static int fixup_sc_false_positive(struct smp_chan *smp)
+ 	struct smp_cmd_pairing *req, *rsp;
+ 	u8 auth;
+ 
+-	/* The issue is only observed when we're in slave role */
++	/* The issue is only observed when we're in responder role */
+ 	if (hcon->out)
+ 		return SMP_UNSPECIFIED;
+ 
+@@ -2079,8 +2090,11 @@ static u8 smp_cmd_pairing_confirm(struct l2cap_conn *conn, struct sk_buff *skb)
+ {
+ 	struct l2cap_chan *chan = conn->smp;
+ 	struct smp_chan *smp = chan->data;
++	struct hci_conn *hcon = conn->hcon;
++	struct hci_dev *hdev = hcon->hdev;
+ 
+-	BT_DBG("conn %p %s", conn, conn->hcon->out ? "master" : "slave");
++	bt_dev_dbg(hdev, "conn %p %s", conn,
++		   hcon->out ? "initiator" : "responder");
+ 
+ 	if (skb->len < sizeof(smp->pcnf))
+ 		return SMP_INVALID_PARAMS;
+@@ -2095,7 +2109,7 @@ static u8 smp_cmd_pairing_confirm(struct l2cap_conn *conn, struct sk_buff *skb)
+ 		if (test_bit(SMP_FLAG_REMOTE_PK, &smp->flags))
+ 			return sc_check_confirm(smp);
+ 
+-		BT_ERR("Unexpected SMP Pairing Confirm");
++		bt_dev_err(hdev, "Unexpected SMP Pairing Confirm");
+ 
+ 		ret = fixup_sc_false_positive(smp);
+ 		if (ret)
+@@ -2126,7 +2140,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	u32 passkey;
+ 	int err;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(hcon->hdev, "conn %p", conn);
+ 
+ 	if (skb->len < sizeof(smp->rrnd))
+ 		return SMP_INVALID_PARAMS;
+@@ -2247,7 +2261,7 @@ static bool smp_ltk_encrypt(struct l2cap_conn *conn, u8 sec_level)
+ 	hci_le_start_enc(hcon, key->ediv, key->rand, key->val, key->enc_size);
+ 	hcon->enc_key_size = key->enc_size;
+ 
+-	/* We never store STKs for master role, so clear this flag */
++	/* We never store STKs for initiator role, so clear this flag */
+ 	clear_bit(HCI_CONN_STK_ENCRYPT, &hcon->flags);
+ 
+ 	return true;
+@@ -2285,7 +2299,7 @@ static u8 smp_cmd_security_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	struct smp_chan *smp;
+ 	u8 sec_level, auth;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(hdev, "conn %p", conn);
+ 
+ 	if (skb->len < sizeof(*rp))
+ 		return SMP_INVALID_PARAMS;
+@@ -2348,7 +2362,8 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level)
+ 	__u8 authreq;
+ 	int ret;
+ 
+-	BT_DBG("conn %p hcon %p level 0x%2.2x", conn, hcon, sec_level);
++	bt_dev_dbg(hcon->hdev, "conn %p hcon %p level 0x%2.2x", conn, hcon,
++		   sec_level);
+ 
+ 	/* This may be NULL if there's an unexpected disconnection */
+ 	if (!conn)
+@@ -2462,7 +2477,7 @@ int smp_cancel_and_remove_pairing(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 		/* Set keys to NULL to make sure smp_failure() does not try to
+ 		 * remove and free already invalidated rcu list entries. */
+ 		smp->ltk = NULL;
+-		smp->slave_ltk = NULL;
++		smp->responder_ltk = NULL;
+ 		smp->remote_irk = NULL;
+ 
+ 		if (test_bit(SMP_FLAG_COMPLETE, &smp->flags))
+@@ -2484,7 +2499,7 @@ static int smp_cmd_encrypt_info(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	struct l2cap_chan *chan = conn->smp;
+ 	struct smp_chan *smp = chan->data;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(conn->hcon->hdev, "conn %p", conn);
+ 
+ 	if (skb->len < sizeof(*rp))
+ 		return SMP_INVALID_PARAMS;
+@@ -2498,7 +2513,7 @@ static int smp_cmd_encrypt_info(struct l2cap_conn *conn, struct sk_buff *skb)
+ 		return SMP_INVALID_PARAMS;
+ 	}
+ 
+-	SMP_ALLOW_CMD(smp, SMP_CMD_MASTER_IDENT);
++	SMP_ALLOW_CMD(smp, SMP_CMD_INITIATOR_IDENT);
+ 
+ 	skb_pull(skb, sizeof(*rp));
+ 
+@@ -2507,9 +2522,9 @@ static int smp_cmd_encrypt_info(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	return 0;
+ }
+ 
+-static int smp_cmd_master_ident(struct l2cap_conn *conn, struct sk_buff *skb)
++static int smp_cmd_initiator_ident(struct l2cap_conn *conn, struct sk_buff *skb)
+ {
+-	struct smp_cmd_master_ident *rp = (void *) skb->data;
++	struct smp_cmd_initiator_ident *rp = (void *)skb->data;
+ 	struct l2cap_chan *chan = conn->smp;
+ 	struct smp_chan *smp = chan->data;
+ 	struct hci_dev *hdev = conn->hcon->hdev;
+@@ -2517,7 +2532,7 @@ static int smp_cmd_master_ident(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	struct smp_ltk *ltk;
+ 	u8 authenticated;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(hdev, "conn %p", conn);
+ 
+ 	if (skb->len < sizeof(*rp))
+ 		return SMP_INVALID_PARAMS;
+@@ -2549,7 +2564,7 @@ static int smp_cmd_ident_info(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	struct l2cap_chan *chan = conn->smp;
+ 	struct smp_chan *smp = chan->data;
+ 
+-	BT_DBG("");
++	bt_dev_dbg(conn->hcon->hdev, "");
+ 
+ 	if (skb->len < sizeof(*info))
+ 		return SMP_INVALID_PARAMS;
+@@ -2581,7 +2596,7 @@ static int smp_cmd_ident_addr_info(struct l2cap_conn *conn,
+ 	struct hci_conn *hcon = conn->hcon;
+ 	bdaddr_t rpa;
+ 
+-	BT_DBG("");
++	bt_dev_dbg(hcon->hdev, "");
+ 
+ 	if (skb->len < sizeof(*info))
+ 		return SMP_INVALID_PARAMS;
+@@ -2648,7 +2663,7 @@ static int smp_cmd_sign_info(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	struct smp_chan *smp = chan->data;
+ 	struct smp_csrk *csrk;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(conn->hcon->hdev, "conn %p", conn);
+ 
+ 	if (skb->len < sizeof(*rp))
+ 		return SMP_INVALID_PARAMS;
+@@ -2728,7 +2743,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	struct smp_cmd_pairing_confirm cfm;
+ 	int err;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(hdev, "conn %p", conn);
+ 
+ 	if (skb->len < sizeof(*key))
+ 		return SMP_INVALID_PARAMS;
+@@ -2792,7 +2807,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 
+ 	smp->method = sc_select_method(smp);
+ 
+-	BT_DBG("%s selected method 0x%02x", hdev->name, smp->method);
++	bt_dev_dbg(hdev, "selected method 0x%02x", smp->method);
+ 
+ 	/* JUST_WORKS and JUST_CFM result in an unauthenticated key */
+ 	if (smp->method == JUST_WORKS || smp->method == JUST_CFM)
+@@ -2867,7 +2882,7 @@ static int smp_cmd_dhkey_check(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	u8 io_cap[3], r[16], e[16];
+ 	int err;
+ 
+-	BT_DBG("conn %p", conn);
++	bt_dev_dbg(hcon->hdev, "conn %p", conn);
+ 
+ 	if (skb->len < sizeof(*check))
+ 		return SMP_INVALID_PARAMS;
+@@ -2908,7 +2923,7 @@ static int smp_cmd_dhkey_check(struct l2cap_conn *conn, struct sk_buff *skb)
+ 			return 0;
+ 		}
+ 
+-		/* Slave sends DHKey check as response to master */
++		/* Responder sends DHKey check as response to initiator */
+ 		sc_dhkey_check(smp);
+ 	}
+ 
+@@ -2927,7 +2942,7 @@ static int smp_cmd_keypress_notify(struct l2cap_conn *conn,
+ {
+ 	struct smp_cmd_keypress_notify *kp = (void *) skb->data;
+ 
+-	BT_DBG("value 0x%02x", kp->value);
++	bt_dev_dbg(conn->hcon->hdev, "value 0x%02x", kp->value);
+ 
+ 	return 0;
+ }
+@@ -2995,8 +3010,8 @@ static int smp_sig_channel(struct l2cap_chan *chan, struct sk_buff *skb)
+ 		reason = smp_cmd_encrypt_info(conn, skb);
+ 		break;
+ 
+-	case SMP_CMD_MASTER_IDENT:
+-		reason = smp_cmd_master_ident(conn, skb);
++	case SMP_CMD_INITIATOR_IDENT:
++		reason = smp_cmd_initiator_ident(conn, skb);
+ 		break;
+ 
+ 	case SMP_CMD_IDENT_INFO:
+@@ -3024,7 +3039,7 @@ static int smp_sig_channel(struct l2cap_chan *chan, struct sk_buff *skb)
+ 		break;
+ 
+ 	default:
+-		BT_DBG("Unknown command code 0x%2.2x", code);
++		bt_dev_dbg(hcon->hdev, "Unknown command code 0x%2.2x", code);
+ 		reason = SMP_CMD_NOTSUPP;
+ 		goto done;
+ 	}
+@@ -3049,7 +3064,7 @@ static void smp_teardown_cb(struct l2cap_chan *chan, int err)
+ {
+ 	struct l2cap_conn *conn = chan->conn;
+ 
+-	BT_DBG("chan %p", chan);
++	bt_dev_dbg(conn->hcon->hdev, "chan %p", chan);
+ 
+ 	if (chan->data)
+ 		smp_chan_destroy(conn);
+@@ -3066,7 +3081,7 @@ static void bredr_pairing(struct l2cap_chan *chan)
+ 	struct smp_cmd_pairing req;
+ 	struct smp_chan *smp;
+ 
+-	BT_DBG("chan %p", chan);
++	bt_dev_dbg(hdev, "chan %p", chan);
+ 
+ 	/* Only new pairings are interesting */
+ 	if (!test_bit(HCI_CONN_NEW_LINK_KEY, &hcon->flags))
+@@ -3113,7 +3128,7 @@ static void bredr_pairing(struct l2cap_chan *chan)
+ 
+ 	set_bit(SMP_FLAG_SC, &smp->flags);
+ 
+-	BT_DBG("%s starting SMP over BR/EDR", hdev->name);
++	bt_dev_dbg(hdev, "starting SMP over BR/EDR");
+ 
+ 	/* Prepare and send the BR/EDR SMP Pairing Request */
+ 	build_bredr_pairing_cmd(smp, &req, NULL);
+@@ -3131,7 +3146,7 @@ static void smp_resume_cb(struct l2cap_chan *chan)
+ 	struct l2cap_conn *conn = chan->conn;
+ 	struct hci_conn *hcon = conn->hcon;
+ 
+-	BT_DBG("chan %p", chan);
++	bt_dev_dbg(hcon->hdev, "chan %p", chan);
+ 
+ 	if (hcon->type == ACL_LINK) {
+ 		bredr_pairing(chan);
+@@ -3154,7 +3169,7 @@ static void smp_ready_cb(struct l2cap_chan *chan)
+ 	struct l2cap_conn *conn = chan->conn;
+ 	struct hci_conn *hcon = conn->hcon;
+ 
+-	BT_DBG("chan %p", chan);
++	bt_dev_dbg(hcon->hdev, "chan %p", chan);
+ 
+ 	/* No need to call l2cap_chan_hold() here since we already own
+ 	 * the reference taken in smp_new_conn_cb(). This is just the
+@@ -3172,7 +3187,7 @@ static int smp_recv_cb(struct l2cap_chan *chan, struct sk_buff *skb)
+ {
+ 	int err;
+ 
+-	BT_DBG("chan %p", chan);
++	bt_dev_dbg(chan->conn->hcon->hdev, "chan %p", chan);
+ 
+ 	err = smp_sig_channel(chan, skb);
+ 	if (err) {
+@@ -3286,14 +3301,14 @@ static struct l2cap_chan *smp_add_cid(struct hci_dev *hdev, u16 cid)
+ 
+ 	tfm_cmac = crypto_alloc_shash("cmac(aes)", 0, 0);
+ 	if (IS_ERR(tfm_cmac)) {
+-		BT_ERR("Unable to create CMAC crypto context");
++		bt_dev_err(hdev, "Unable to create CMAC crypto context");
+ 		kfree_sensitive(smp);
+ 		return ERR_CAST(tfm_cmac);
+ 	}
+ 
+ 	tfm_ecdh = crypto_alloc_kpp("ecdh", 0, 0);
+ 	if (IS_ERR(tfm_ecdh)) {
+-		BT_ERR("Unable to create ECDH crypto context");
++		bt_dev_err(hdev, "Unable to create ECDH crypto context");
+ 		crypto_free_shash(tfm_cmac);
+ 		kfree_sensitive(smp);
+ 		return ERR_CAST(tfm_ecdh);
+@@ -3422,7 +3437,7 @@ int smp_register(struct hci_dev *hdev)
+ {
+ 	struct l2cap_chan *chan;
+ 
+-	BT_DBG("%s", hdev->name);
++	bt_dev_dbg(hdev, "");
+ 
+ 	/* If the controller does not support Low Energy operation, then
+ 	 * there is also no need to register any SMP channel.
+diff --git a/net/bluetooth/smp.h b/net/bluetooth/smp.h
+index 121edadd5f8da..5fe68e255cb29 100644
+--- a/net/bluetooth/smp.h
++++ b/net/bluetooth/smp.h
+@@ -79,8 +79,8 @@ struct smp_cmd_encrypt_info {
+ 	__u8	ltk[16];
+ } __packed;
+ 
+-#define SMP_CMD_MASTER_IDENT	0x07
+-struct smp_cmd_master_ident {
++#define SMP_CMD_INITIATOR_IDENT	0x07
++struct smp_cmd_initiator_ident {
+ 	__le16	ediv;
+ 	__le64	rand;
+ } __packed;
+@@ -146,7 +146,7 @@ struct smp_cmd_keypress_notify {
+ enum {
+ 	SMP_STK,
+ 	SMP_LTK,
+-	SMP_LTK_SLAVE,
++	SMP_LTK_RESPONDER,
+ 	SMP_LTK_P256,
+ 	SMP_LTK_P256_DEBUG,
+ };
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 3fc27b52bf429..fc881d60a9dcc 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3523,6 +3523,14 @@ static netdev_features_t gso_features_check(const struct sk_buff *skb,
+ 	if (gso_segs > dev->gso_max_segs)
+ 		return features & ~NETIF_F_GSO_MASK;
+ 
++	if (unlikely(skb->len >= READ_ONCE(dev->gso_max_size)))
++		return features & ~NETIF_F_GSO_MASK;
++
++	if (!skb_shinfo(skb)->gso_type) {
++		skb_warn_bad_offload(skb);
++		return features & ~NETIF_F_GSO_MASK;
++	}
++
+ 	/* Support for GSO partial features requires software
+ 	 * intervention before we can actually process the packets
+ 	 * so we need to strip support for any partial features now
+diff --git a/net/dns_resolver/dns_key.c b/net/dns_resolver/dns_key.c
+index 3aced951d5ab8..03f8f33dc134c 100644
+--- a/net/dns_resolver/dns_key.c
++++ b/net/dns_resolver/dns_key.c
+@@ -91,6 +91,7 @@ const struct cred *dns_resolver_cache;
+ static int
+ dns_resolver_preparse(struct key_preparsed_payload *prep)
+ {
++	const struct dns_server_list_v1_header *v1;
+ 	const struct dns_payload_header *bin;
+ 	struct user_key_payload *upayload;
+ 	unsigned long derrno;
+@@ -122,6 +123,13 @@ dns_resolver_preparse(struct key_preparsed_payload *prep)
+ 			return -EINVAL;
+ 		}
+ 
++		v1 = (const struct dns_server_list_v1_header *)bin;
++		if ((v1->status != DNS_LOOKUP_GOOD &&
++		     v1->status != DNS_LOOKUP_GOOD_WITH_BAD)) {
++			if (prep->expiry == TIME64_MAX)
++				prep->expiry = ktime_get_real_seconds() + 1;
++		}
++
+ 		result_len = datalen;
+ 		goto store_result;
+ 	}
+@@ -314,7 +322,7 @@ static long dns_resolver_read(const struct key *key,
+ 
+ struct key_type key_type_dns_resolver = {
+ 	.name		= "dns_resolver",
+-	.flags		= KEY_TYPE_NET_DOMAIN,
++	.flags		= KEY_TYPE_NET_DOMAIN | KEY_TYPE_INSTANT_REAP,
+ 	.preparse	= dns_resolver_preparse,
+ 	.free_preparse	= dns_resolver_free_preparse,
+ 	.instantiate	= generic_key_instantiate,
+diff --git a/net/ife/ife.c b/net/ife/ife.c
+index 13bbf8cb6a396..be05b690b9ef2 100644
+--- a/net/ife/ife.c
++++ b/net/ife/ife.c
+@@ -82,6 +82,7 @@ void *ife_decode(struct sk_buff *skb, u16 *metalen)
+ 	if (unlikely(!pskb_may_pull(skb, total_pull)))
+ 		return NULL;
+ 
++	ifehdr = (struct ifeheadr *)(skb->data + skb->dev->hard_header_len);
+ 	skb_set_mac_header(skb, total_pull);
+ 	__skb_pull(skb, total_pull);
+ 	*metalen = ifehdrln - IFE_METAHDRLEN;
+diff --git a/net/mac80211/mesh_plink.c b/net/mac80211/mesh_plink.c
+index aca26df7587dc..ee8b5013d67d7 100644
+--- a/net/mac80211/mesh_plink.c
++++ b/net/mac80211/mesh_plink.c
+@@ -1050,8 +1050,8 @@ mesh_plink_get_event(struct ieee80211_sub_if_data *sdata,
+ 	case WLAN_SP_MESH_PEERING_OPEN:
+ 		if (!matches_local)
+ 			event = OPN_RJCT;
+-		if (!mesh_plink_free_count(sdata) ||
+-		    (sta->mesh->plid && sta->mesh->plid != plid))
++		else if (!mesh_plink_free_count(sdata) ||
++			 (sta->mesh->plid && sta->mesh->plid != plid))
+ 			event = OPN_IGNR;
+ 		else
+ 			event = OPN_ACPT;
+@@ -1059,9 +1059,9 @@ mesh_plink_get_event(struct ieee80211_sub_if_data *sdata,
+ 	case WLAN_SP_MESH_PEERING_CONFIRM:
+ 		if (!matches_local)
+ 			event = CNF_RJCT;
+-		if (!mesh_plink_free_count(sdata) ||
+-		    sta->mesh->llid != llid ||
+-		    (sta->mesh->plid && sta->mesh->plid != plid))
++		else if (!mesh_plink_free_count(sdata) ||
++			 sta->mesh->llid != llid ||
++			 (sta->mesh->plid && sta->mesh->plid != plid))
+ 			event = CNF_IGNR;
+ 		else
+ 			event = CNF_ACPT;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index dfb347f8d3adc..f244a4323a43b 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -8377,7 +8377,7 @@ static void nft_set_commit_update(struct list_head *set_update_list)
+ 	list_for_each_entry_safe(set, next, set_update_list, pending_update) {
+ 		list_del_init(&set->pending_update);
+ 
+-		if (!set->ops->commit)
++		if (!set->ops->commit || set->dead)
+ 			continue;
+ 
+ 		set->ops->commit(set);
+diff --git a/net/rfkill/rfkill-gpio.c b/net/rfkill/rfkill-gpio.c
+index 2cc95c8dc4c7b..f74baefd855d3 100644
+--- a/net/rfkill/rfkill-gpio.c
++++ b/net/rfkill/rfkill-gpio.c
+@@ -116,6 +116,14 @@ static int rfkill_gpio_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
++	ret = gpiod_direction_output(rfkill->reset_gpio, true);
++	if (ret)
++		return ret;
++
++	ret = gpiod_direction_output(rfkill->shutdown_gpio, true);
++	if (ret)
++		return ret;
++
+ 	rfkill->rfkill_dev = rfkill_alloc(rfkill->name, &pdev->dev,
+ 					  rfkill->type, &rfkill_gpio_ops,
+ 					  rfkill);
+diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
+index b3e7a92f1ec19..1d95ff34b13c9 100644
+--- a/net/rose/af_rose.c
++++ b/net/rose/af_rose.c
+@@ -181,21 +181,47 @@ void rose_kill_by_neigh(struct rose_neigh *neigh)
+  */
+ static void rose_kill_by_device(struct net_device *dev)
+ {
+-	struct sock *s;
++	struct sock *sk, *array[16];
++	struct rose_sock *rose;
++	bool rescan;
++	int i, cnt;
+ 
++start:
++	rescan = false;
++	cnt = 0;
+ 	spin_lock_bh(&rose_list_lock);
+-	sk_for_each(s, &rose_list) {
+-		struct rose_sock *rose = rose_sk(s);
++	sk_for_each(sk, &rose_list) {
++		rose = rose_sk(sk);
++		if (rose->device == dev) {
++			if (cnt == ARRAY_SIZE(array)) {
++				rescan = true;
++				break;
++			}
++			sock_hold(sk);
++			array[cnt++] = sk;
++		}
++	}
++	spin_unlock_bh(&rose_list_lock);
+ 
++	for (i = 0; i < cnt; i++) {
++		sk = array[cnt];
++		rose = rose_sk(sk);
++		lock_sock(sk);
++		spin_lock_bh(&rose_list_lock);
+ 		if (rose->device == dev) {
+-			rose_disconnect(s, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
++			rose_disconnect(sk, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
+ 			if (rose->neighbour)
+ 				rose->neighbour->use--;
+ 			dev_put(rose->device);
+ 			rose->device = NULL;
+ 		}
++		spin_unlock_bh(&rose_list_lock);
++		release_sock(sk);
++		sock_put(sk);
++		cond_resched();
+ 	}
+-	spin_unlock_bh(&rose_list_lock);
++	if (rescan)
++		goto start;
+ }
+ 
+ /*
+@@ -655,7 +681,10 @@ static int rose_release(struct socket *sock)
+ 		break;
+ 	}
+ 
++	spin_lock_bh(&rose_list_lock);
+ 	dev_put(rose->device);
++	rose->device = NULL;
++	spin_unlock_bh(&rose_list_lock);
+ 	sock->sk = NULL;
+ 	release_sock(sk);
+ 	sock_put(sk);
+diff --git a/net/wireless/certs/wens.hex b/net/wireless/certs/wens.hex
+new file mode 100644
+index 0000000000000..0d50369bede98
+--- /dev/null
++++ b/net/wireless/certs/wens.hex
+@@ -0,0 +1,87 @@
++/* Chen-Yu Tsai's regdb certificate */
++0x30, 0x82, 0x02, 0xa7, 0x30, 0x82, 0x01, 0x8f,
++0x02, 0x14, 0x61, 0xc0, 0x38, 0x65, 0x1a, 0xab,
++0xdc, 0xf9, 0x4b, 0xd0, 0xac, 0x7f, 0xf0, 0x6c,
++0x72, 0x48, 0xdb, 0x18, 0xc6, 0x00, 0x30, 0x0d,
++0x06, 0x09, 0x2a, 0x86, 0x48, 0x86, 0xf7, 0x0d,
++0x01, 0x01, 0x0b, 0x05, 0x00, 0x30, 0x0f, 0x31,
++0x0d, 0x30, 0x0b, 0x06, 0x03, 0x55, 0x04, 0x03,
++0x0c, 0x04, 0x77, 0x65, 0x6e, 0x73, 0x30, 0x20,
++0x17, 0x0d, 0x32, 0x33, 0x31, 0x32, 0x30, 0x31,
++0x30, 0x37, 0x34, 0x31, 0x31, 0x34, 0x5a, 0x18,
++0x0f, 0x32, 0x31, 0x32, 0x33, 0x31, 0x31, 0x30,
++0x37, 0x30, 0x37, 0x34, 0x31, 0x31, 0x34, 0x5a,
++0x30, 0x0f, 0x31, 0x0d, 0x30, 0x0b, 0x06, 0x03,
++0x55, 0x04, 0x03, 0x0c, 0x04, 0x77, 0x65, 0x6e,
++0x73, 0x30, 0x82, 0x01, 0x22, 0x30, 0x0d, 0x06,
++0x09, 0x2a, 0x86, 0x48, 0x86, 0xf7, 0x0d, 0x01,
++0x01, 0x01, 0x05, 0x00, 0x03, 0x82, 0x01, 0x0f,
++0x00, 0x30, 0x82, 0x01, 0x0a, 0x02, 0x82, 0x01,
++0x01, 0x00, 0xa9, 0x7a, 0x2c, 0x78, 0x4d, 0xa7,
++0x19, 0x2d, 0x32, 0x52, 0xa0, 0x2e, 0x6c, 0xef,
++0x88, 0x7f, 0x15, 0xc5, 0xb6, 0x69, 0x54, 0x16,
++0x43, 0x14, 0x79, 0x53, 0xb7, 0xae, 0x88, 0xfe,
++0xc0, 0xb7, 0x5d, 0x47, 0x8e, 0x1a, 0xe1, 0xef,
++0xb3, 0x90, 0x86, 0xda, 0xd3, 0x64, 0x81, 0x1f,
++0xce, 0x5d, 0x9e, 0x4b, 0x6e, 0x58, 0x02, 0x3e,
++0xb2, 0x6f, 0x5e, 0x42, 0x47, 0x41, 0xf4, 0x2c,
++0xb8, 0xa8, 0xd4, 0xaa, 0xc0, 0x0e, 0xe6, 0x48,
++0xf0, 0xa8, 0xce, 0xcb, 0x08, 0xae, 0x37, 0xaf,
++0xf6, 0x40, 0x39, 0xcb, 0x55, 0x6f, 0x5b, 0x4f,
++0x85, 0x34, 0xe6, 0x69, 0x10, 0x50, 0x72, 0x5e,
++0x4e, 0x9d, 0x4c, 0xba, 0x38, 0x36, 0x0d, 0xce,
++0x73, 0x38, 0xd7, 0x27, 0x02, 0x2a, 0x79, 0x03,
++0xe1, 0xac, 0xcf, 0xb0, 0x27, 0x85, 0x86, 0x93,
++0x17, 0xab, 0xec, 0x42, 0x77, 0x37, 0x65, 0x8a,
++0x44, 0xcb, 0xd6, 0x42, 0x93, 0x92, 0x13, 0xe3,
++0x39, 0x45, 0xc5, 0x6e, 0x00, 0x4a, 0x7f, 0xcb,
++0x42, 0x17, 0x2b, 0x25, 0x8c, 0xb8, 0x17, 0x3b,
++0x15, 0x36, 0x59, 0xde, 0x42, 0xce, 0x21, 0xe6,
++0xb6, 0xc7, 0x6e, 0x5e, 0x26, 0x1f, 0xf7, 0x8a,
++0x57, 0x9e, 0xa5, 0x96, 0x72, 0xb7, 0x02, 0x32,
++0xeb, 0x07, 0x2b, 0x73, 0xe2, 0x4f, 0x66, 0x58,
++0x9a, 0xeb, 0x0f, 0x07, 0xb6, 0xab, 0x50, 0x8b,
++0xc3, 0x8f, 0x17, 0xfa, 0x0a, 0x99, 0xc2, 0x16,
++0x25, 0xbf, 0x2d, 0x6b, 0x1a, 0xaa, 0xe6, 0x3e,
++0x5f, 0xeb, 0x6d, 0x9b, 0x5d, 0x4d, 0x42, 0x83,
++0x2d, 0x39, 0xb8, 0xc9, 0xac, 0xdb, 0x3a, 0x91,
++0x50, 0xdf, 0xbb, 0xb1, 0x76, 0x6d, 0x15, 0x73,
++0xfd, 0xc6, 0xe6, 0x6b, 0x71, 0x9e, 0x67, 0x36,
++0x22, 0x83, 0x79, 0xb1, 0xd6, 0xb8, 0x84, 0x52,
++0xaf, 0x96, 0x5b, 0xc3, 0x63, 0x02, 0x4e, 0x78,
++0x70, 0x57, 0x02, 0x03, 0x01, 0x00, 0x01, 0x30,
++0x0d, 0x06, 0x09, 0x2a, 0x86, 0x48, 0x86, 0xf7,
++0x0d, 0x01, 0x01, 0x0b, 0x05, 0x00, 0x03, 0x82,
++0x01, 0x01, 0x00, 0x24, 0x28, 0xee, 0x22, 0x74,
++0x7f, 0x7c, 0xfa, 0x6c, 0x1f, 0xb3, 0x18, 0xd1,
++0xc2, 0x3d, 0x7d, 0x29, 0x42, 0x88, 0xad, 0x82,
++0xa5, 0xb1, 0x8a, 0x05, 0xd0, 0xec, 0x5c, 0x91,
++0x20, 0xf6, 0x82, 0xfd, 0xd5, 0x67, 0x60, 0x5f,
++0x31, 0xf5, 0xbd, 0x88, 0x91, 0x70, 0xbd, 0xb8,
++0xb9, 0x8c, 0x88, 0xfe, 0x53, 0xc9, 0x54, 0x9b,
++0x43, 0xc4, 0x7a, 0x43, 0x74, 0x6b, 0xdd, 0xb0,
++0xb1, 0x3b, 0x33, 0x45, 0x46, 0x78, 0xa3, 0x1c,
++0xef, 0x54, 0x68, 0xf7, 0x85, 0x9c, 0xe4, 0x51,
++0x6f, 0x06, 0xaf, 0x81, 0xdb, 0x2a, 0x7b, 0x7b,
++0x6f, 0xa8, 0x9c, 0x67, 0xd8, 0xcb, 0xc9, 0x91,
++0x40, 0x00, 0xae, 0xd9, 0xa1, 0x9f, 0xdd, 0xa6,
++0x43, 0x0e, 0x28, 0x7b, 0xaa, 0x1b, 0xe9, 0x84,
++0xdb, 0x76, 0x64, 0x42, 0x70, 0xc9, 0xc0, 0xeb,
++0xae, 0x84, 0x11, 0x16, 0x68, 0x4e, 0x84, 0x9e,
++0x7e, 0x92, 0x36, 0xee, 0x1c, 0x3b, 0x08, 0x63,
++0xeb, 0x79, 0x84, 0x15, 0x08, 0x9d, 0xaf, 0xc8,
++0x9a, 0xc7, 0x34, 0xd3, 0x94, 0x4b, 0xd1, 0x28,
++0x97, 0xbe, 0xd1, 0x45, 0x75, 0xdc, 0x35, 0x62,
++0xac, 0x1d, 0x1f, 0xb7, 0xb7, 0x15, 0x87, 0xc8,
++0x98, 0xc0, 0x24, 0x31, 0x56, 0x8d, 0xed, 0xdb,
++0x06, 0xc6, 0x46, 0xbf, 0x4b, 0x6d, 0xa6, 0xd5,
++0xab, 0xcc, 0x60, 0xfc, 0xe5, 0x37, 0xb6, 0x53,
++0x7d, 0x58, 0x95, 0xa9, 0x56, 0xc7, 0xf7, 0xee,
++0xc3, 0xa0, 0x76, 0xf7, 0x65, 0x4d, 0x53, 0xfa,
++0xff, 0x5f, 0x76, 0x33, 0x5a, 0x08, 0xfa, 0x86,
++0x92, 0x5a, 0x13, 0xfa, 0x1a, 0xfc, 0xf2, 0x1b,
++0x8c, 0x7f, 0x42, 0x6d, 0xb7, 0x7e, 0xb7, 0xb4,
++0xf0, 0xc7, 0x83, 0xbb, 0xa2, 0x81, 0x03, 0x2d,
++0xd4, 0x2a, 0x63, 0x3f, 0xf7, 0x31, 0x2e, 0x40,
++0x33, 0x5c, 0x46, 0xbc, 0x9b, 0xc1, 0x05, 0xa5,
++0x45, 0x4e, 0xc3,
+diff --git a/security/keys/gc.c b/security/keys/gc.c
+index 3c90807476eb0..eaddaceda14ea 100644
+--- a/security/keys/gc.c
++++ b/security/keys/gc.c
+@@ -66,6 +66,19 @@ void key_schedule_gc(time64_t gc_at)
+ 	}
+ }
+ 
++/*
++ * Set the expiration time on a key.
++ */
++void key_set_expiry(struct key *key, time64_t expiry)
++{
++	key->expiry = expiry;
++	if (expiry != TIME64_MAX) {
++		if (!(key->type->flags & KEY_TYPE_INSTANT_REAP))
++			expiry += key_gc_delay;
++		key_schedule_gc(expiry);
++	}
++}
++
+ /*
+  * Schedule a dead links collection run.
+  */
+@@ -176,7 +189,6 @@ static void key_garbage_collector(struct work_struct *work)
+ 	static u8 gc_state;		/* Internal persistent state */
+ #define KEY_GC_REAP_AGAIN	0x01	/* - Need another cycle */
+ #define KEY_GC_REAPING_LINKS	0x02	/* - We need to reap links */
+-#define KEY_GC_SET_TIMER	0x04	/* - We need to restart the timer */
+ #define KEY_GC_REAPING_DEAD_1	0x10	/* - We need to mark dead keys */
+ #define KEY_GC_REAPING_DEAD_2	0x20	/* - We need to reap dead key links */
+ #define KEY_GC_REAPING_DEAD_3	0x40	/* - We need to reap dead keys */
+@@ -184,21 +196,17 @@ static void key_garbage_collector(struct work_struct *work)
+ 
+ 	struct rb_node *cursor;
+ 	struct key *key;
+-	time64_t new_timer, limit;
++	time64_t new_timer, limit, expiry;
+ 
+ 	kenter("[%lx,%x]", key_gc_flags, gc_state);
+ 
+ 	limit = ktime_get_real_seconds();
+-	if (limit > key_gc_delay)
+-		limit -= key_gc_delay;
+-	else
+-		limit = key_gc_delay;
+ 
+ 	/* Work out what we're going to be doing in this pass */
+ 	gc_state &= KEY_GC_REAPING_DEAD_1 | KEY_GC_REAPING_DEAD_2;
+ 	gc_state <<= 1;
+ 	if (test_and_clear_bit(KEY_GC_KEY_EXPIRED, &key_gc_flags))
+-		gc_state |= KEY_GC_REAPING_LINKS | KEY_GC_SET_TIMER;
++		gc_state |= KEY_GC_REAPING_LINKS;
+ 
+ 	if (test_and_clear_bit(KEY_GC_REAP_KEYTYPE, &key_gc_flags))
+ 		gc_state |= KEY_GC_REAPING_DEAD_1;
+@@ -233,8 +241,11 @@ continue_scanning:
+ 			}
+ 		}
+ 
+-		if (gc_state & KEY_GC_SET_TIMER) {
+-			if (key->expiry > limit && key->expiry < new_timer) {
++		expiry = key->expiry;
++		if (expiry != TIME64_MAX) {
++			if (!(key->type->flags & KEY_TYPE_INSTANT_REAP))
++				expiry += key_gc_delay;
++			if (expiry > limit && expiry < new_timer) {
+ 				kdebug("will expire %x in %lld",
+ 				       key_serial(key), key->expiry - limit);
+ 				new_timer = key->expiry;
+@@ -276,7 +287,7 @@ maybe_resched:
+ 	 */
+ 	kdebug("pass complete");
+ 
+-	if (gc_state & KEY_GC_SET_TIMER && new_timer != (time64_t)TIME64_MAX) {
++	if (new_timer != TIME64_MAX) {
+ 		new_timer += key_gc_delay;
+ 		key_schedule_gc(new_timer);
+ 	}
+diff --git a/security/keys/internal.h b/security/keys/internal.h
+index 9b9cf3b6fcbb4..bede6c71ffd97 100644
+--- a/security/keys/internal.h
++++ b/security/keys/internal.h
+@@ -176,6 +176,7 @@ extern unsigned key_gc_delay;
+ extern void keyring_gc(struct key *keyring, time64_t limit);
+ extern void keyring_restriction_gc(struct key *keyring,
+ 				   struct key_type *dead_type);
++void key_set_expiry(struct key *key, time64_t expiry);
+ extern void key_schedule_gc(time64_t gc_at);
+ extern void key_schedule_gc_links(void);
+ extern void key_gc_keytype(struct key_type *ktype);
+@@ -224,10 +225,18 @@ extern struct key *key_get_instantiation_authkey(key_serial_t target_id);
+  */
+ static inline bool key_is_dead(const struct key *key, time64_t limit)
+ {
++	time64_t expiry = key->expiry;
++
++	if (expiry != TIME64_MAX) {
++		if (!(key->type->flags & KEY_TYPE_INSTANT_REAP))
++			expiry += key_gc_delay;
++		if (expiry <= limit)
++			return true;
++	}
++
+ 	return
+ 		key->flags & ((1 << KEY_FLAG_DEAD) |
+ 			      (1 << KEY_FLAG_INVALIDATED)) ||
+-		(key->expiry > 0 && key->expiry <= limit) ||
+ 		key->domain_tag->removed;
+ }
+ 
+diff --git a/security/keys/key.c b/security/keys/key.c
+index 151ff39b68030..67ad0826e385c 100644
+--- a/security/keys/key.c
++++ b/security/keys/key.c
+@@ -294,6 +294,7 @@ struct key *key_alloc(struct key_type *type, const char *desc,
+ 	key->uid = uid;
+ 	key->gid = gid;
+ 	key->perm = perm;
++	key->expiry = TIME64_MAX;
+ 	key->restrict_link = restrict_link;
+ 	key->last_used_at = ktime_get_real_seconds();
+ 
+@@ -463,10 +464,7 @@ static int __key_instantiate_and_link(struct key *key,
+ 			if (authkey)
+ 				key_invalidate(authkey);
+ 
+-			if (prep->expiry != TIME64_MAX) {
+-				key->expiry = prep->expiry;
+-				key_schedule_gc(prep->expiry + key_gc_delay);
+-			}
++			key_set_expiry(key, prep->expiry);
+ 		}
+ 	}
+ 
+@@ -605,8 +603,7 @@ int key_reject_and_link(struct key *key,
+ 		atomic_inc(&key->user->nikeys);
+ 		mark_key_instantiated(key, -error);
+ 		notify_key(key, NOTIFY_KEY_INSTANTIATED, -error);
+-		key->expiry = ktime_get_real_seconds() + timeout;
+-		key_schedule_gc(key->expiry + key_gc_delay);
++		key_set_expiry(key, ktime_get_real_seconds() + timeout);
+ 
+ 		if (test_and_clear_bit(KEY_FLAG_USER_CONSTRUCT, &key->flags))
+ 			awaken = 1;
+@@ -721,16 +718,14 @@ found_kernel_type:
+ 
+ void key_set_timeout(struct key *key, unsigned timeout)
+ {
+-	time64_t expiry = 0;
++	time64_t expiry = TIME64_MAX;
+ 
+ 	/* make the changes with the locks held to prevent races */
+ 	down_write(&key->sem);
+ 
+ 	if (timeout > 0)
+ 		expiry = ktime_get_real_seconds() + timeout;
+-
+-	key->expiry = expiry;
+-	key_schedule_gc(key->expiry + key_gc_delay);
++	key_set_expiry(key, expiry);
+ 
+ 	up_write(&key->sem);
+ }
+diff --git a/security/keys/proc.c b/security/keys/proc.c
+index d0cde6685627f..4f4e2c1824f18 100644
+--- a/security/keys/proc.c
++++ b/security/keys/proc.c
+@@ -198,7 +198,7 @@ static int proc_keys_show(struct seq_file *m, void *v)
+ 
+ 	/* come up with a suitable timeout value */
+ 	expiry = READ_ONCE(key->expiry);
+-	if (expiry == 0) {
++	if (expiry == TIME64_MAX) {
+ 		memcpy(xbuf, "perm", 5);
+ 	} else if (now >= expiry) {
+ 		memcpy(xbuf, "expd", 5);
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 86a43b955db91..0d1c6c4c1ee62 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1970,6 +1970,8 @@ static const struct snd_pci_quirk force_connect_list[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x86ae, "ASUS", 1),  /* Z170 PRO */
+ 	SND_PCI_QUIRK(0x1043, 0x86c7, "ASUS", 1),  /* Z170M PLUS */
+ 	SND_PCI_QUIRK(0x1462, 0xec94, "MS-7C94", 1),
++	SND_PCI_QUIRK(0x8086, 0x2060, "Intel NUC5CPYB", 1),
++	SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", 1),
+ 	{}
+ };
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-01-12 20:35 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-01-12 20:35 UTC (permalink / raw
  To: gentoo-commits

commit:     0f0e44d7c81eec3d8342229bcdb41197fbb1fb31
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 12 20:35:16 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan 12 20:35:16 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0f0e44d7

Linux patch 5.10.207

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 +
 1206_linux-5.10.207.patch | 454 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 458 insertions(+)

diff --git a/0000_README b/0000_README
index 45acc0d1..6163bbfe 100644
--- a/0000_README
+++ b/0000_README
@@ -867,6 +867,10 @@ Patch:  1205_linux-5.10.206.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.206
 
+Patch:  1206_linux-5.10.207.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.207
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1206_linux-5.10.207.patch b/1206_linux-5.10.207.patch
new file mode 100644
index 00000000..bd76f97f
--- /dev/null
+++ b/1206_linux-5.10.207.patch
@@ -0,0 +1,454 @@
+diff --git a/Makefile b/Makefile
+index 134fba99314d9..2435bf3197de5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 206
++SUBLEVEL = 207
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
+index 034f2c8a9e0b5..d6c25a88cebc9 100644
+--- a/drivers/scsi/scsi.c
++++ b/drivers/scsi/scsi.c
+@@ -197,7 +197,7 @@ void scsi_finish_command(struct scsi_cmnd *cmd)
+ 				"(result %x)\n", cmd->result));
+ 
+ 	good_bytes = scsi_bufflen(cmd);
+-	if (!blk_rq_is_passthrough(scsi_cmd_to_rq(cmd))) {
++	if (!blk_rq_is_passthrough(cmd->request)) {
+ 		int old_good_bytes = good_bytes;
+ 		drv = scsi_cmd_to_driver(cmd);
+ 		if (drv->done)
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 30eb8769dbab9..3d3d139127eec 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -50,6 +50,8 @@
+ 
+ #include <asm/unaligned.h>
+ 
++static void scsi_eh_done(struct scsi_cmnd *scmd);
++
+ /*
+  * These should *probably* be handled by the host itself.
+  * Since it is allowed to sleep, it probably should.
+@@ -228,7 +230,7 @@ scsi_abort_command(struct scsi_cmnd *scmd)
+  */
+ static void scsi_eh_reset(struct scsi_cmnd *scmd)
+ {
+-	if (!blk_rq_is_passthrough(scsi_cmd_to_rq(scmd))) {
++	if (!blk_rq_is_passthrough(scmd->request)) {
+ 		struct scsi_driver *sdrv = scsi_cmd_to_driver(scmd);
+ 		if (sdrv->eh_reset)
+ 			sdrv->eh_reset(scmd);
+@@ -498,8 +500,7 @@ int scsi_check_sense(struct scsi_cmnd *scmd)
+ 		/* handler does not care. Drop down to default handling */
+ 	}
+ 
+-	if (scmd->cmnd[0] == TEST_UNIT_READY &&
+-	    scmd->submitter != SUBMITTED_BY_SCSI_ERROR_HANDLER)
++	if (scmd->cmnd[0] == TEST_UNIT_READY && scmd->scsi_done != scsi_eh_done)
+ 		/*
+ 		 * nasty: for mid-layer issued TURs, we need to return the
+ 		 * actual sense data without any recovery attempt.  For eh
+@@ -767,7 +768,7 @@ static int scsi_eh_completed_normally(struct scsi_cmnd *scmd)
+  * scsi_eh_done - Completion function for error handling.
+  * @scmd:	Cmd that is done.
+  */
+-void scsi_eh_done(struct scsi_cmnd *scmd)
++static void scsi_eh_done(struct scsi_cmnd *scmd)
+ {
+ 	struct completion *eh_action;
+ 
+@@ -1067,7 +1068,7 @@ retry:
+ 	shost->eh_action = &done;
+ 
+ 	scsi_log_send(scmd);
+-	scmd->submitter = SUBMITTED_BY_SCSI_ERROR_HANDLER;
++	scmd->scsi_done = scsi_eh_done;
+ 	scmd->flags |= SCMD_LAST;
+ 
+ 	/*
+@@ -1095,7 +1096,6 @@ retry:
+ 	if (rtn) {
+ 		if (timeleft > stall_for) {
+ 			scsi_eh_restore_cmnd(scmd, &ses);
+-
+ 			timeleft -= stall_for;
+ 			msleep(jiffies_to_msecs(stall_for));
+ 			goto retry;
+@@ -1168,7 +1168,7 @@ static int scsi_request_sense(struct scsi_cmnd *scmd)
+ 
+ static int scsi_eh_action(struct scsi_cmnd *scmd, int rtn)
+ {
+-	if (!blk_rq_is_passthrough(scsi_cmd_to_rq(scmd))) {
++	if (!blk_rq_is_passthrough(scmd->request)) {
+ 		struct scsi_driver *sdrv = scsi_cmd_to_driver(scmd);
+ 		if (sdrv->eh_action)
+ 			rtn = sdrv->eh_action(scmd, rtn);
+@@ -1734,24 +1734,22 @@ static void scsi_eh_offline_sdevs(struct list_head *work_q,
+  */
+ int scsi_noretry_cmd(struct scsi_cmnd *scmd)
+ {
+-	struct request *req = scsi_cmd_to_rq(scmd);
+-
+ 	switch (host_byte(scmd->result)) {
+ 	case DID_OK:
+ 		break;
+ 	case DID_TIME_OUT:
+ 		goto check_type;
+ 	case DID_BUS_BUSY:
+-		return req->cmd_flags & REQ_FAILFAST_TRANSPORT;
++		return (scmd->request->cmd_flags & REQ_FAILFAST_TRANSPORT);
+ 	case DID_PARITY:
+-		return req->cmd_flags & REQ_FAILFAST_DEV;
++		return (scmd->request->cmd_flags & REQ_FAILFAST_DEV);
+ 	case DID_ERROR:
+ 		if (msg_byte(scmd->result) == COMMAND_COMPLETE &&
+ 		    status_byte(scmd->result) == RESERVATION_CONFLICT)
+ 			return 0;
+ 		fallthrough;
+ 	case DID_SOFT_ERROR:
+-		return req->cmd_flags & REQ_FAILFAST_DRIVER;
++		return (scmd->request->cmd_flags & REQ_FAILFAST_DRIVER);
+ 	}
+ 
+ 	if (status_byte(scmd->result) != CHECK_CONDITION)
+@@ -1762,7 +1760,8 @@ check_type:
+ 	 * assume caller has checked sense and determined
+ 	 * the check condition was retryable.
+ 	 */
+-	if (req->cmd_flags & REQ_FAILFAST_DEV || blk_rq_is_passthrough(req))
++	if (scmd->request->cmd_flags & REQ_FAILFAST_DEV ||
++	    blk_rq_is_passthrough(scmd->request))
+ 		return 1;
+ 
+ 	return 0;
+@@ -2323,6 +2322,11 @@ void scsi_report_device_reset(struct Scsi_Host *shost, int channel, int target)
+ }
+ EXPORT_SYMBOL(scsi_report_device_reset);
+ 
++static void
++scsi_reset_provider_done_command(struct scsi_cmnd *scmd)
++{
++}
++
+ /**
+  * scsi_ioctl_reset: explicitly reset a host/bus/target/device
+  * @dev:	scsi_device to operate on
+@@ -2358,9 +2362,9 @@ scsi_ioctl_reset(struct scsi_device *dev, int __user *arg)
+ 	scsi_init_command(dev, scmd);
+ 	scmd->request = rq;
+ 	scmd->cmnd = scsi_req(rq)->cmd;
+-
+-	scmd->submitter = SUBMITTED_BY_SCSI_RESET_IOCTL;
+ 	scmd->flags |= SCMD_LAST;
++
++	scmd->scsi_done		= scsi_reset_provider_done_command;
+ 	memset(&scmd->sdb, 0, sizeof(scmd->sdb));
+ 
+ 	scmd->cmd_len			= 0;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 20c2700e1f639..99b90031500b2 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -153,15 +153,13 @@ scsi_set_blocked(struct scsi_cmnd *cmd, int reason)
+ 
+ static void scsi_mq_requeue_cmd(struct scsi_cmnd *cmd)
+ {
+-	struct request *rq = scsi_cmd_to_rq(cmd);
+-
+-	if (rq->rq_flags & RQF_DONTPREP) {
+-		rq->rq_flags &= ~RQF_DONTPREP;
++	if (cmd->request->rq_flags & RQF_DONTPREP) {
++		cmd->request->rq_flags &= ~RQF_DONTPREP;
+ 		scsi_mq_uninit_cmd(cmd);
+ 	} else {
+ 		WARN_ON_ONCE(true);
+ 	}
+-	blk_mq_requeue_request(rq, true);
++	blk_mq_requeue_request(cmd->request, true);
+ }
+ 
+ /**
+@@ -200,7 +198,7 @@ static void __scsi_queue_insert(struct scsi_cmnd *cmd, int reason, bool unbusy)
+ 	 */
+ 	cmd->result = 0;
+ 
+-	blk_mq_requeue_request(scsi_cmd_to_rq(cmd), true);
++	blk_mq_requeue_request(cmd->request, true);
+ }
+ 
+ /**
+@@ -510,7 +508,7 @@ void scsi_run_host_queues(struct Scsi_Host *shost)
+ 
+ static void scsi_uninit_cmd(struct scsi_cmnd *cmd)
+ {
+-	if (!blk_rq_is_passthrough(scsi_cmd_to_rq(cmd))) {
++	if (!blk_rq_is_passthrough(cmd->request)) {
+ 		struct scsi_driver *drv = scsi_cmd_to_driver(cmd);
+ 
+ 		if (drv->uninit_command)
+@@ -660,7 +658,7 @@ static void scsi_io_completion_reprep(struct scsi_cmnd *cmd,
+ 
+ static bool scsi_cmd_runtime_exceeced(struct scsi_cmnd *cmd)
+ {
+-	struct request *req = scsi_cmd_to_rq(cmd);
++	struct request *req = cmd->request;
+ 	unsigned long wait_for;
+ 
+ 	if (cmd->allowed == SCSI_CMD_RETRIES_NO_LIMIT)
+@@ -679,7 +677,7 @@ static bool scsi_cmd_runtime_exceeced(struct scsi_cmnd *cmd)
+ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
+ {
+ 	struct request_queue *q = cmd->device->request_queue;
+-	struct request *req = scsi_cmd_to_rq(cmd);
++	struct request *req = cmd->request;
+ 	int level = 0;
+ 	enum {ACTION_FAIL, ACTION_REPREP, ACTION_RETRY,
+ 	      ACTION_DELAYED_RETRY} action;
+@@ -851,7 +849,7 @@ static int scsi_io_completion_nz_result(struct scsi_cmnd *cmd, int result,
+ {
+ 	bool sense_valid;
+ 	bool sense_current = true;	/* false implies "deferred sense" */
+-	struct request *req = scsi_cmd_to_rq(cmd);
++	struct request *req = cmd->request;
+ 	struct scsi_sense_hdr sshdr;
+ 
+ 	sense_valid = scsi_command_normalize_sense(cmd, &sshdr);
+@@ -940,7 +938,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
+ {
+ 	int result = cmd->result;
+ 	struct request_queue *q = cmd->device->request_queue;
+-	struct request *req = scsi_cmd_to_rq(cmd);
++	struct request *req = cmd->request;
+ 	blk_status_t blk_stat = BLK_STS_OK;
+ 
+ 	if (unlikely(result))	/* a nz result may or may not be an error */
+@@ -1008,7 +1006,7 @@ static inline bool scsi_cmd_needs_dma_drain(struct scsi_device *sdev,
+ blk_status_t scsi_alloc_sgtables(struct scsi_cmnd *cmd)
+ {
+ 	struct scsi_device *sdev = cmd->device;
+-	struct request *rq = scsi_cmd_to_rq(cmd);
++	struct request *rq = cmd->request;
+ 	unsigned short nr_segs = blk_rq_nr_phys_segments(rq);
+ 	struct scatterlist *last_sg = NULL;
+ 	blk_status_t ret;
+@@ -1137,7 +1135,7 @@ void scsi_init_command(struct scsi_device *dev, struct scsi_cmnd *cmd)
+ {
+ 	void *buf = cmd->sense_buffer;
+ 	void *prot = cmd->prot_sdb;
+-	struct request *rq = scsi_cmd_to_rq(cmd);
++	struct request *rq = blk_mq_rq_from_pdu(cmd);
+ 	unsigned int flags = cmd->flags & SCMD_PRESERVED_FLAGS;
+ 	unsigned long jiffies_at_alloc;
+ 	int retries, to_clear;
+@@ -1596,21 +1594,12 @@ static blk_status_t scsi_prepare_cmd(struct request *req)
+ 
+ static void scsi_mq_done(struct scsi_cmnd *cmd)
+ {
+-	switch (cmd->submitter) {
+-	case SUBMITTED_BY_BLOCK_LAYER:
+-		break;
+-	case SUBMITTED_BY_SCSI_ERROR_HANDLER:
+-		return scsi_eh_done(cmd);
+-	case SUBMITTED_BY_SCSI_RESET_IOCTL:
+-		return;
+-	}
+-
+-	if (unlikely(blk_should_fake_timeout(scsi_cmd_to_rq(cmd)->q)))
++	if (unlikely(blk_should_fake_timeout(cmd->request->q)))
+ 		return;
+ 	if (unlikely(test_and_set_bit(SCMD_STATE_COMPLETE, &cmd->state)))
+ 		return;
+ 	trace_scsi_dispatch_cmd_done(cmd);
+-	blk_mq_complete_request(scsi_cmd_to_rq(cmd));
++	blk_mq_complete_request(cmd->request);
+ }
+ 
+ static void scsi_mq_put_budget(struct request_queue *q)
+@@ -1694,7 +1683,6 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 
+ 	scsi_set_resid(cmd, 0);
+ 	memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+-	cmd->submitter = SUBMITTED_BY_BLOCK_LAYER;
+ 	cmd->scsi_done = scsi_mq_done;
+ 
+ 	blk_mq_start_request(req);
+diff --git a/drivers/scsi/scsi_logging.c b/drivers/scsi/scsi_logging.c
+index f0ae55ad09738..8ea44c6595efa 100644
+--- a/drivers/scsi/scsi_logging.c
++++ b/drivers/scsi/scsi_logging.c
+@@ -28,9 +28,8 @@ static void scsi_log_release_buffer(char *bufptr)
+ 
+ static inline const char *scmd_name(const struct scsi_cmnd *scmd)
+ {
+-	struct request *rq = scsi_cmd_to_rq((struct scsi_cmnd *)scmd);
+-
+-	return rq->rq_disk ? rq->rq_disk->disk_name : NULL;
++	return scmd->request->rq_disk ?
++		scmd->request->rq_disk->disk_name : NULL;
+ }
+ 
+ static size_t sdev_format_header(char *logbuf, size_t logbuf_len,
+@@ -92,7 +91,7 @@ void scmd_printk(const char *level, const struct scsi_cmnd *scmd,
+ 	if (!logbuf)
+ 		return;
+ 	off = sdev_format_header(logbuf, logbuf_len, scmd_name(scmd),
+-				 scsi_cmd_to_rq((struct scsi_cmnd *)scmd)->tag);
++				 scmd->request->tag);
+ 	if (off < logbuf_len) {
+ 		va_start(args, fmt);
+ 		off += vscnprintf(logbuf + off, logbuf_len - off, fmt, args);
+@@ -189,7 +188,7 @@ void scsi_print_command(struct scsi_cmnd *cmd)
+ 		return;
+ 
+ 	off = sdev_format_header(logbuf, logbuf_len,
+-				 scmd_name(cmd), scsi_cmd_to_rq(cmd)->tag);
++				 scmd_name(cmd), cmd->request->tag);
+ 	if (off >= logbuf_len)
+ 		goto out_printk;
+ 	off += scnprintf(logbuf + off, logbuf_len - off, "CDB: ");
+@@ -211,7 +210,7 @@ void scsi_print_command(struct scsi_cmnd *cmd)
+ 
+ 			off = sdev_format_header(logbuf, logbuf_len,
+ 						 scmd_name(cmd),
+-						 scsi_cmd_to_rq(cmd)->tag);
++						 cmd->request->tag);
+ 			if (!WARN_ON(off > logbuf_len - 58)) {
+ 				off += scnprintf(logbuf + off, logbuf_len - off,
+ 						 "CDB[%02x]: ", k);
+@@ -374,8 +373,7 @@ EXPORT_SYMBOL(__scsi_print_sense);
+ /* Normalize and print sense buffer in SCSI command */
+ void scsi_print_sense(const struct scsi_cmnd *cmd)
+ {
+-	scsi_log_print_sense(cmd->device, scmd_name(cmd),
+-			     scsi_cmd_to_rq((struct scsi_cmnd *)cmd)->tag,
++	scsi_log_print_sense(cmd->device, scmd_name(cmd), cmd->request->tag,
+ 			     cmd->sense_buffer, SCSI_SENSE_BUFFERSIZE);
+ }
+ EXPORT_SYMBOL(scsi_print_sense);
+@@ -394,8 +392,8 @@ void scsi_print_result(const struct scsi_cmnd *cmd, const char *msg,
+ 	if (!logbuf)
+ 		return;
+ 
+-	off = sdev_format_header(logbuf, logbuf_len, scmd_name(cmd),
+-				 scsi_cmd_to_rq((struct scsi_cmnd *)cmd)->tag);
++	off = sdev_format_header(logbuf, logbuf_len,
++				 scmd_name(cmd), cmd->request->tag);
+ 
+ 	if (off >= logbuf_len)
+ 		goto out_printk;
+diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h
+index 89992d8879acd..180636d54982d 100644
+--- a/drivers/scsi/scsi_priv.h
++++ b/drivers/scsi/scsi_priv.h
+@@ -82,7 +82,6 @@ void scsi_eh_ready_devs(struct Scsi_Host *shost,
+ int scsi_eh_get_sense(struct list_head *work_q,
+ 		      struct list_head *done_q);
+ int scsi_noretry_cmd(struct scsi_cmnd *scmd);
+-void scsi_eh_done(struct scsi_cmnd *scmd);
+ 
+ /* scsi_lib.c */
+ extern int scsi_maybe_unblock_host(struct scsi_device *sdev);
+diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
+index 2e26eb0d353e8..b1c9b52876f3c 100644
+--- a/include/scsi/scsi_cmnd.h
++++ b/include/scsi/scsi_cmnd.h
+@@ -65,12 +65,6 @@ struct scsi_pointer {
+ #define SCMD_STATE_COMPLETE	0
+ #define SCMD_STATE_INFLIGHT	1
+ 
+-enum scsi_cmnd_submitter {
+-	SUBMITTED_BY_BLOCK_LAYER = 0,
+-	SUBMITTED_BY_SCSI_ERROR_HANDLER = 1,
+-	SUBMITTED_BY_SCSI_RESET_IOCTL = 2,
+-} __packed;
+-
+ struct scsi_cmnd {
+ 	struct scsi_request req;
+ 	struct scsi_device *device;
+@@ -94,7 +88,6 @@ struct scsi_cmnd {
+ 	unsigned char prot_op;
+ 	unsigned char prot_type;
+ 	unsigned char prot_flags;
+-	enum scsi_cmnd_submitter submitter;
+ 
+ 	unsigned short cmd_len;
+ 	enum dma_data_direction sc_data_direction;
+@@ -169,9 +162,7 @@ static inline void *scsi_cmd_priv(struct scsi_cmnd *cmd)
+ /* make sure not to use it with passthrough commands */
+ static inline struct scsi_driver *scsi_cmd_to_driver(struct scsi_cmnd *cmd)
+ {
+-	struct request *rq = scsi_cmd_to_rq(cmd);
+-
+-	return *(struct scsi_driver **)rq->rq_disk->private_data;
++	return *(struct scsi_driver **)cmd->request->rq_disk->private_data;
+ }
+ 
+ extern void scsi_finish_command(struct scsi_cmnd *cmd);
+@@ -233,18 +224,6 @@ static inline int scsi_sg_copy_to_buffer(struct scsi_cmnd *cmd,
+ 				 buf, buflen);
+ }
+ 
+-static inline sector_t scsi_get_sector(struct scsi_cmnd *scmd)
+-{
+-	return blk_rq_pos(scsi_cmd_to_rq(scmd));
+-}
+-
+-static inline sector_t scsi_get_lba(struct scsi_cmnd *scmd)
+-{
+-	unsigned int shift = ilog2(scmd->device->sector_size) - SECTOR_SHIFT;
+-
+-	return blk_rq_pos(scsi_cmd_to_rq(scmd)) >> shift;
+-}
+-
+ /*
+  * The operations below are hints that tell the controller driver how
+  * to handle I/Os with DIF or similar types of protection information.
+@@ -307,11 +286,9 @@ static inline unsigned char scsi_get_prot_type(struct scsi_cmnd *scmd)
+ 	return scmd->prot_type;
+ }
+ 
+-static inline u32 scsi_prot_ref_tag(struct scsi_cmnd *scmd)
++static inline sector_t scsi_get_lba(struct scsi_cmnd *scmd)
+ {
+-	struct request *rq = blk_mq_rq_from_pdu(scmd);
+-
+-	return t10_pi_ref_tag(rq);
++	return blk_rq_pos(scmd->request);
+ }
+ 
+ static inline unsigned int scsi_prot_interval(struct scsi_cmnd *scmd)
+diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
+index 993e1e79dd0ce..1a5c9a3df6d69 100644
+--- a/include/scsi/scsi_device.h
++++ b/include/scsi/scsi_device.h
+@@ -264,15 +264,13 @@ sdev_prefix_printk(const char *, const struct scsi_device *, const char *,
+ __printf(3, 4) void
+ scmd_printk(const char *, const struct scsi_cmnd *, const char *, ...);
+ 
+-#define scmd_dbg(scmd, fmt, a...)					\
+-	do {								\
+-		struct request *__rq = scsi_cmd_to_rq((scmd));		\
+-									\
+-		if (__rq->rq_disk)					\
+-			sdev_dbg((scmd)->device, "[%s] " fmt,		\
+-				 __rq->rq_disk->disk_name, ##a);	\
+-		else							\
+-			sdev_dbg((scmd)->device, fmt, ##a);		\
++#define scmd_dbg(scmd, fmt, a...)					   \
++	do {								   \
++		if ((scmd)->request->rq_disk)				   \
++			sdev_dbg((scmd)->device, "[%s] " fmt,		   \
++				 (scmd)->request->rq_disk->disk_name, ##a);\
++		else							   \
++			sdev_dbg((scmd)->device, fmt, ##a);		   \
+ 	} while (0)
+ 
+ enum scsi_target_state {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-01-15 18:49 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-01-15 18:49 UTC (permalink / raw
  To: gentoo-commits

commit:     e95f1c8e8c742d31dae1737a225a50c14ddc2f10
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jan 15 18:49:00 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jan 15 18:49:00 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e95f1c8e

Linux patch 5.10.208

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1207_linux-5.10.208.patch | 1630 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1634 insertions(+)

diff --git a/0000_README b/0000_README
index 6163bbfe..437d971c 100644
--- a/0000_README
+++ b/0000_README
@@ -871,6 +871,10 @@ Patch:  1206_linux-5.10.207.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.207
 
+Patch:  1207_linux-5.10.208.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.208
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1207_linux-5.10.208.patch b/1207_linux-5.10.208.patch
new file mode 100644
index 00000000..3d625c77
--- /dev/null
+++ b/1207_linux-5.10.208.patch
@@ -0,0 +1,1630 @@
+diff --git a/Makefile b/Makefile
+index 2435bf3197de5..a4b42141ba1b2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 207
++SUBLEVEL = 208
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/mach-sunxi/mc_smp.c b/arch/arm/mach-sunxi/mc_smp.c
+index 26cbce1353387..b2f5f4f28705f 100644
+--- a/arch/arm/mach-sunxi/mc_smp.c
++++ b/arch/arm/mach-sunxi/mc_smp.c
+@@ -808,12 +808,12 @@ static int __init sunxi_mc_smp_init(void)
+ 			break;
+ 	}
+ 
+-	is_a83t = sunxi_mc_smp_data[i].is_a83t;
+-
+ 	of_node_put(node);
+ 	if (ret)
+ 		return -ENODEV;
+ 
++	is_a83t = sunxi_mc_smp_data[i].is_a83t;
++
+ 	if (!sunxi_mc_smp_cpu_table_init())
+ 		return -EINVAL;
+ 
+diff --git a/arch/powerpc/kernel/ppc_save_regs.S b/arch/powerpc/kernel/ppc_save_regs.S
+index 2d4d21bb46a97..5d284f78b0b4d 100644
+--- a/arch/powerpc/kernel/ppc_save_regs.S
++++ b/arch/powerpc/kernel/ppc_save_regs.S
+@@ -58,10 +58,10 @@ _GLOBAL(ppc_save_regs)
+ 	lbz	r0,PACAIRQSOFTMASK(r13)
+ 	PPC_STL	r0,SOFTE-STACK_FRAME_OVERHEAD(r3)
+ #endif
+-	/* go up one stack frame for SP */
+-	PPC_LL	r4,0(r1)
+-	PPC_STL	r4,1*SZL(r3)
++	/* store current SP */
++	PPC_STL r1,1*SZL(r3)
+ 	/* get caller's LR */
++	PPC_LL	r4,0(r1)
+ 	PPC_LL	r0,LRSAVE(r4)
+ 	PPC_STL	r0,_LINK-STACK_FRAME_OVERHEAD(r3)
+ 	mflr	r0
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index c78b4946385e7..e7edc9e4c6cd9 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -569,7 +569,8 @@ static void kprobe_emulate_call_indirect(struct kprobe *p, struct pt_regs *regs)
+ {
+ 	unsigned long offs = addrmode_regoffs[p->ainsn.indirect.reg];
+ 
+-	int3_emulate_call(regs, regs_get_register(regs, offs));
++	int3_emulate_push(regs, regs->ip - INT3_INSN_SIZE + p->ainsn.size);
++	int3_emulate_jmp(regs, regs_get_register(regs, offs));
+ }
+ NOKPROBE_SYMBOL(kprobe_emulate_call_indirect);
+ 
+diff --git a/drivers/firewire/ohci.c b/drivers/firewire/ohci.c
+index 9811c40956e54..45d19cc0aeac0 100644
+--- a/drivers/firewire/ohci.c
++++ b/drivers/firewire/ohci.c
+@@ -279,6 +279,51 @@ static char ohci_driver_name[] = KBUILD_MODNAME;
+ #define QUIRK_TI_SLLZ059		0x20
+ #define QUIRK_IR_WAKE			0x40
+ 
++// On PCI Express Root Complex in any type of AMD Ryzen machine, VIA VT6306/6307/6308 with Asmedia
++// ASM1083/1085 brings an inconvenience that the read accesses to 'Isochronous Cycle Timer' register
++// (at offset 0xf0 in PCI I/O space) often causes unexpected system reboot. The mechanism is not
++// clear, since the read access to the other registers is enough safe; e.g. 'Node ID' register,
++// while it is probable due to detection of any type of PCIe error.
++#define QUIRK_REBOOT_BY_CYCLE_TIMER_READ	0x80000000
++
++#if IS_ENABLED(CONFIG_X86)
++
++static bool has_reboot_by_cycle_timer_read_quirk(const struct fw_ohci *ohci)
++{
++	return !!(ohci->quirks & QUIRK_REBOOT_BY_CYCLE_TIMER_READ);
++}
++
++#define PCI_DEVICE_ID_ASMEDIA_ASM108X	0x1080
++
++static bool detect_vt630x_with_asm1083_on_amd_ryzen_machine(const struct pci_dev *pdev)
++{
++	const struct pci_dev *pcie_to_pci_bridge;
++
++	// Detect any type of AMD Ryzen machine.
++	if (!static_cpu_has(X86_FEATURE_ZEN))
++		return false;
++
++	// Detect VIA VT6306/6307/6308.
++	if (pdev->vendor != PCI_VENDOR_ID_VIA)
++		return false;
++	if (pdev->device != PCI_DEVICE_ID_VIA_VT630X)
++		return false;
++
++	// Detect Asmedia ASM1083/1085.
++	pcie_to_pci_bridge = pdev->bus->self;
++	if (pcie_to_pci_bridge->vendor != PCI_VENDOR_ID_ASMEDIA)
++		return false;
++	if (pcie_to_pci_bridge->device != PCI_DEVICE_ID_ASMEDIA_ASM108X)
++		return false;
++
++	return true;
++}
++
++#else
++#define has_reboot_by_cycle_timer_read_quirk(ohci) false
++#define detect_vt630x_with_asm1083_on_amd_ryzen_machine(pdev)	false
++#endif
++
+ /* In case of multiple matches in ohci_quirks[], only the first one is used. */
+ static const struct {
+ 	unsigned short vendor, device, revision, flags;
+@@ -1713,6 +1758,9 @@ static u32 get_cycle_time(struct fw_ohci *ohci)
+ 	s32 diff01, diff12;
+ 	int i;
+ 
++	if (has_reboot_by_cycle_timer_read_quirk(ohci))
++		return 0;
++
+ 	c2 = reg_read(ohci, OHCI1394_IsochronousCycleTimer);
+ 
+ 	if (ohci->quirks & QUIRK_CYCLE_TIMER) {
+@@ -3615,6 +3663,9 @@ static int pci_probe(struct pci_dev *dev,
+ 	if (param_quirks)
+ 		ohci->quirks = param_quirks;
+ 
++	if (detect_vt630x_with_asm1083_on_amd_ryzen_machine(dev))
++		ohci->quirks |= QUIRK_REBOOT_BY_CYCLE_TIMER_READ;
++
+ 	/*
+ 	 * Because dma_alloc_coherent() allocates at least one page,
+ 	 * we save space by using a common buffer for the AR request/
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 7f633f8b3239a..a79c62c43a6ff 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -5584,7 +5584,7 @@ void intel_dp_process_phy_request(struct intel_dp *intel_dp)
+ 	intel_dp_autotest_phy_ddi_enable(intel_dp, data->num_lanes);
+ 
+ 	drm_dp_set_phy_test_pattern(&intel_dp->aux, data,
+-				    link_status[DP_DPCD_REV]);
++				    intel_dp->dpcd[DP_DPCD_REV]);
+ }
+ 
+ static u8 intel_dp_autotest_phy_pattern(struct intel_dp *intel_dp)
+diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
+index aae90a9ee1dbc..ee59ef2cba773 100644
+--- a/drivers/gpu/drm/qxl/qxl_drv.h
++++ b/drivers/gpu/drm/qxl/qxl_drv.h
+@@ -329,7 +329,7 @@ int qxl_gem_object_create_with_handle(struct qxl_device *qdev,
+ 				      u32 domain,
+ 				      size_t size,
+ 				      struct qxl_surface *surf,
+-				      struct qxl_bo **qobj,
++				      struct drm_gem_object **gobj,
+ 				      uint32_t *handle);
+ void qxl_gem_object_free(struct drm_gem_object *gobj);
+ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
+diff --git a/drivers/gpu/drm/qxl/qxl_dumb.c b/drivers/gpu/drm/qxl/qxl_dumb.c
+index e377bdbff90dd..f7bafc791b1e6 100644
+--- a/drivers/gpu/drm/qxl/qxl_dumb.c
++++ b/drivers/gpu/drm/qxl/qxl_dumb.c
+@@ -34,6 +34,7 @@ int qxl_mode_dumb_create(struct drm_file *file_priv,
+ {
+ 	struct qxl_device *qdev = to_qxl(dev);
+ 	struct qxl_bo *qobj;
++	struct drm_gem_object *gobj;
+ 	uint32_t handle;
+ 	int r;
+ 	struct qxl_surface surf;
+@@ -62,11 +63,13 @@ int qxl_mode_dumb_create(struct drm_file *file_priv,
+ 
+ 	r = qxl_gem_object_create_with_handle(qdev, file_priv,
+ 					      QXL_GEM_DOMAIN_SURFACE,
+-					      args->size, &surf, &qobj,
++					      args->size, &surf, &gobj,
+ 					      &handle);
+ 	if (r)
+ 		return r;
++	qobj = gem_to_qxl_bo(gobj);
+ 	qobj->is_dumb = true;
++	drm_gem_object_put(gobj);
+ 	args->pitch = pitch;
+ 	args->handle = handle;
+ 	return 0;
+diff --git a/drivers/gpu/drm/qxl/qxl_gem.c b/drivers/gpu/drm/qxl/qxl_gem.c
+index a08da0bd9098b..fc5e3763c3595 100644
+--- a/drivers/gpu/drm/qxl/qxl_gem.c
++++ b/drivers/gpu/drm/qxl/qxl_gem.c
+@@ -72,32 +72,41 @@ int qxl_gem_object_create(struct qxl_device *qdev, int size,
+ 	return 0;
+ }
+ 
++/*
++ * If the caller passed a valid gobj pointer, it is responsible to call
++ * drm_gem_object_put() when it no longer needs to acess the object.
++ *
++ * If gobj is NULL, it is handled internally.
++ */
+ int qxl_gem_object_create_with_handle(struct qxl_device *qdev,
+ 				      struct drm_file *file_priv,
+ 				      u32 domain,
+ 				      size_t size,
+ 				      struct qxl_surface *surf,
+-				      struct qxl_bo **qobj,
++				      struct drm_gem_object **gobj,
+ 				      uint32_t *handle)
+ {
+-	struct drm_gem_object *gobj;
+ 	int r;
++	struct drm_gem_object *local_gobj;
+ 
+-	BUG_ON(!qobj);
+ 	BUG_ON(!handle);
+ 
+ 	r = qxl_gem_object_create(qdev, size, 0,
+ 				  domain,
+ 				  false, false, surf,
+-				  &gobj);
++				  &local_gobj);
+ 	if (r)
+ 		return -ENOMEM;
+-	r = drm_gem_handle_create(file_priv, gobj, handle);
++	r = drm_gem_handle_create(file_priv, local_gobj, handle);
+ 	if (r)
+ 		return r;
+-	/* drop reference from allocate - handle holds it now */
+-	*qobj = gem_to_qxl_bo(gobj);
+-	drm_gem_object_put(gobj);
++
++	if (gobj)
++		*gobj = local_gobj;
++	else
++		/* drop reference from allocate - handle holds it now */
++		drm_gem_object_put(local_gobj);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/qxl/qxl_ioctl.c b/drivers/gpu/drm/qxl/qxl_ioctl.c
+index 5cea6eea72abb..9a02c48714007 100644
+--- a/drivers/gpu/drm/qxl/qxl_ioctl.c
++++ b/drivers/gpu/drm/qxl/qxl_ioctl.c
+@@ -39,7 +39,6 @@ static int qxl_alloc_ioctl(struct drm_device *dev, void *data,
+ 	struct qxl_device *qdev = to_qxl(dev);
+ 	struct drm_qxl_alloc *qxl_alloc = data;
+ 	int ret;
+-	struct qxl_bo *qobj;
+ 	uint32_t handle;
+ 	u32 domain = QXL_GEM_DOMAIN_VRAM;
+ 
+@@ -51,7 +50,7 @@ static int qxl_alloc_ioctl(struct drm_device *dev, void *data,
+ 						domain,
+ 						qxl_alloc->size,
+ 						NULL,
+-						&qobj, &handle);
++						NULL, &handle);
+ 	if (ret) {
+ 		DRM_ERROR("%s: failed to create gem ret=%d\n",
+ 			  __func__, ret);
+@@ -393,7 +392,6 @@ static int qxl_alloc_surf_ioctl(struct drm_device *dev, void *data,
+ {
+ 	struct qxl_device *qdev = to_qxl(dev);
+ 	struct drm_qxl_alloc_surf *param = data;
+-	struct qxl_bo *qobj;
+ 	int handle;
+ 	int ret;
+ 	int size, actual_stride;
+@@ -413,7 +411,7 @@ static int qxl_alloc_surf_ioctl(struct drm_device *dev, void *data,
+ 						QXL_GEM_DOMAIN_SURFACE,
+ 						size,
+ 						&surf,
+-						&qobj, &handle);
++						NULL, &handle);
+ 	if (ret) {
+ 		DRM_ERROR("%s: failed to create gem ret=%d\n",
+ 			  __func__, ret);
+diff --git a/drivers/i2c/i2c-core.h b/drivers/i2c/i2c-core.h
+index ea17f13b44c84..ec28024a646d9 100644
+--- a/drivers/i2c/i2c-core.h
++++ b/drivers/i2c/i2c-core.h
+@@ -3,6 +3,7 @@
+  * i2c-core.h - interfaces internal to the I2C framework
+  */
+ 
++#include <linux/kconfig.h>
+ #include <linux/rwsem.h>
+ 
+ struct i2c_devinfo {
+@@ -29,7 +30,8 @@ int i2c_dev_irq_from_resources(const struct resource *resources,
+  */
+ static inline bool i2c_in_atomic_xfer_mode(void)
+ {
+-	return system_state > SYSTEM_RUNNING && !preemptible();
++	return system_state > SYSTEM_RUNNING &&
++	       (IS_ENABLED(CONFIG_PREEMPT_COUNT) ? !preemptible() : irqs_disabled());
+ }
+ 
+ static inline int __i2c_lock_bus_helper(struct i2c_adapter *adap)
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 41d98d7198be5..8d842ff241b29 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -800,9 +800,10 @@ static const struct block_device_operations mmc_bdops = {
+ static int mmc_blk_part_switch_pre(struct mmc_card *card,
+ 				   unsigned int part_type)
+ {
++	const unsigned int mask = EXT_CSD_PART_CONFIG_ACC_RPMB;
+ 	int ret = 0;
+ 
+-	if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
++	if ((part_type & mask) == mask) {
+ 		if (card->ext_csd.cmdq_en) {
+ 			ret = mmc_cmdq_disable(card);
+ 			if (ret)
+@@ -817,9 +818,10 @@ static int mmc_blk_part_switch_pre(struct mmc_card *card,
+ static int mmc_blk_part_switch_post(struct mmc_card *card,
+ 				    unsigned int part_type)
+ {
++	const unsigned int mask = EXT_CSD_PART_CONFIG_ACC_RPMB;
+ 	int ret = 0;
+ 
+-	if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
++	if ((part_type & mask) == mask) {
+ 		mmc_retune_unpause(card->host);
+ 		if (card->reenable_cmdq && !card->ext_csd.cmdq_en)
+ 			ret = mmc_cmdq_enable(card);
+@@ -3102,4 +3104,3 @@ module_exit(mmc_blk_exit);
+ 
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Multimedia Card (MMC) block device driver");
+-
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index 1f46694b2e531..b949a4468bf58 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -612,6 +612,7 @@ EXPORT_SYMBOL(mmc_remove_host);
+  */
+ void mmc_free_host(struct mmc_host *host)
+ {
++	cancel_delayed_work_sync(&host->detect);
+ 	mmc_pwrseq_free(host);
+ 	put_device(&host->class_dev);
+ }
+diff --git a/drivers/mmc/host/meson-mx-sdhc-mmc.c b/drivers/mmc/host/meson-mx-sdhc-mmc.c
+index 28aa78aa08f3f..ba59061fea8b8 100644
+--- a/drivers/mmc/host/meson-mx-sdhc-mmc.c
++++ b/drivers/mmc/host/meson-mx-sdhc-mmc.c
+@@ -269,7 +269,7 @@ static int meson_mx_sdhc_enable_clks(struct mmc_host *mmc)
+ static int meson_mx_sdhc_set_clk(struct mmc_host *mmc, struct mmc_ios *ios)
+ {
+ 	struct meson_mx_sdhc_host *host = mmc_priv(mmc);
+-	u32 rx_clk_phase;
++	u32 val, rx_clk_phase;
+ 	int ret;
+ 
+ 	meson_mx_sdhc_disable_clks(mmc);
+@@ -290,27 +290,11 @@ static int meson_mx_sdhc_set_clk(struct mmc_host *mmc, struct mmc_ios *ios)
+ 		mmc->actual_clock = clk_get_rate(host->sd_clk);
+ 
+ 		/*
+-		 * according to Amlogic the following latching points are
+-		 * selected with empirical values, there is no (known) formula
+-		 * to calculate these.
++		 * Phase 90 should work in most cases. For data transmission,
++		 * meson_mx_sdhc_execute_tuning() will find a accurate value
+ 		 */
+-		if (mmc->actual_clock > 100000000) {
+-			rx_clk_phase = 1;
+-		} else if (mmc->actual_clock > 45000000) {
+-			if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_330)
+-				rx_clk_phase = 15;
+-			else
+-				rx_clk_phase = 11;
+-		} else if (mmc->actual_clock >= 25000000) {
+-			rx_clk_phase = 15;
+-		} else if (mmc->actual_clock > 5000000) {
+-			rx_clk_phase = 23;
+-		} else if (mmc->actual_clock > 1000000) {
+-			rx_clk_phase = 55;
+-		} else {
+-			rx_clk_phase = 1061;
+-		}
+-
++		regmap_read(host->regmap, MESON_SDHC_CLKC, &val);
++		rx_clk_phase = FIELD_GET(MESON_SDHC_CLKC_CLK_DIV, val) / 4;
+ 		regmap_update_bits(host->regmap, MESON_SDHC_CLK2,
+ 				   MESON_SDHC_CLK2_RX_CLK_PHASE,
+ 				   FIELD_PREP(MESON_SDHC_CLK2_RX_CLK_PHASE,
+diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c
+index d8e412bbb93bf..52bfe356f1e54 100644
+--- a/drivers/mmc/host/sdhci-sprd.c
++++ b/drivers/mmc/host/sdhci-sprd.c
+@@ -224,15 +224,19 @@ static inline void _sdhci_sprd_set_clock(struct sdhci_host *host,
+ 	div = ((div & 0x300) >> 2) | ((div & 0xFF) << 8);
+ 	sdhci_enable_clk(host, div);
+ 
++	val = sdhci_readl(host, SDHCI_SPRD_REG_32_BUSY_POSI);
++	mask = SDHCI_SPRD_BIT_OUTR_CLK_AUTO_EN | SDHCI_SPRD_BIT_INNR_CLK_AUTO_EN;
+ 	/* Enable CLK_AUTO when the clock is greater than 400K. */
+ 	if (clk > 400000) {
+-		val = sdhci_readl(host, SDHCI_SPRD_REG_32_BUSY_POSI);
+-		mask = SDHCI_SPRD_BIT_OUTR_CLK_AUTO_EN |
+-			SDHCI_SPRD_BIT_INNR_CLK_AUTO_EN;
+ 		if (mask != (val & mask)) {
+ 			val |= mask;
+ 			sdhci_writel(host, val, SDHCI_SPRD_REG_32_BUSY_POSI);
+ 		}
++	} else {
++		if (val & mask) {
++			val &= ~mask;
++			sdhci_writel(host, val, SDHCI_SPRD_REG_32_BUSY_POSI);
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index c67a108c2c07f..584f365de563f 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -11143,6 +11143,8 @@ static void bnxt_sp_task(struct work_struct *work)
+ 		bnxt_cfg_ntp_filters(bp);
+ 	if (test_and_clear_bit(BNXT_HWRM_EXEC_FWD_REQ_SP_EVENT, &bp->sp_event))
+ 		bnxt_hwrm_exec_fwd_req(bp);
++	if (test_and_clear_bit(BNXT_HWRM_PF_UNLOAD_SP_EVENT, &bp->sp_event))
++		netdev_info(bp->dev, "Receive PF driver unload event!\n");
+ 	if (test_and_clear_bit(BNXT_PERIODIC_STATS_SP_EVENT, &bp->sp_event)) {
+ 		bnxt_hwrm_port_qstats(bp, 0);
+ 		bnxt_hwrm_port_qstats_ext(bp, 0);
+@@ -12097,8 +12099,6 @@ static void bnxt_cfg_ntp_filters(struct bnxt *bp)
+ 			}
+ 		}
+ 	}
+-	if (test_and_clear_bit(BNXT_HWRM_PF_UNLOAD_SP_EVENT, &bp->sp_event))
+-		netdev_info(bp->dev, "Receive PF driver unload event!\n");
+ }
+ 
+ #else
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 145488449f133..8edf12077e663 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -2086,8 +2086,10 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		/* Note: if we ever change from DMA_TX_APPEND_CRC below we
+ 		 * will need to restore software padding of "runt" packets
+ 		 */
++		len_stat |= DMA_TX_APPEND_CRC;
++
+ 		if (!i) {
+-			len_stat |= DMA_TX_APPEND_CRC | DMA_SOP;
++			len_stat |= DMA_SOP;
+ 			if (skb->ip_summed == CHECKSUM_PARTIAL)
+ 				len_stat |= DMA_TX_DO_CSUM;
+ 		}
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 36e387ae967f7..d83b96aa3e42a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -101,12 +101,18 @@ static struct workqueue_struct *i40e_wq;
+ static void netdev_hw_addr_refcnt(struct i40e_mac_filter *f,
+ 				  struct net_device *netdev, int delta)
+ {
++	struct netdev_hw_addr_list *ha_list;
+ 	struct netdev_hw_addr *ha;
+ 
+ 	if (!f || !netdev)
+ 		return;
+ 
+-	netdev_for_each_mc_addr(ha, netdev) {
++	if (is_unicast_ether_addr(f->macaddr) || is_link_local_ether_addr(f->macaddr))
++		ha_list = &netdev->uc;
++	else
++		ha_list = &netdev->mc;
++
++	netdev_hw_addr_list_for_each(ha, ha_list) {
+ 		if (ether_addr_equal(ha->addr, f->macaddr)) {
+ 			ha->refcount += delta;
+ 			if (ha->refcount <= 0)
+@@ -15758,6 +15764,9 @@ static void i40e_pci_error_reset_done(struct pci_dev *pdev)
+ 	struct i40e_pf *pf = pci_get_drvdata(pdev);
+ 
+ 	i40e_reset_and_rebuild(pf, false, false);
++#ifdef CONFIG_PCI_IOV
++	i40e_restore_all_vfs_msi_state(pdev);
++#endif /* CONFIG_PCI_IOV */
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index dfaa34f2473ab..7b0ed15f4df32 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -99,6 +99,32 @@ void i40e_vc_notify_reset(struct i40e_pf *pf)
+ 			     (u8 *)&pfe, sizeof(struct virtchnl_pf_event));
+ }
+ 
++#ifdef CONFIG_PCI_IOV
++void i40e_restore_all_vfs_msi_state(struct pci_dev *pdev)
++{
++	u16 vf_id;
++	u16 pos;
++
++	/* Continue only if this is a PF */
++	if (!pdev->is_physfn)
++		return;
++
++	if (!pci_num_vf(pdev))
++		return;
++
++	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV);
++	if (pos) {
++		struct pci_dev *vf_dev = NULL;
++
++		pci_read_config_word(pdev, pos + PCI_SRIOV_VF_DID, &vf_id);
++		while ((vf_dev = pci_get_device(pdev->vendor, vf_id, vf_dev))) {
++			if (vf_dev->is_virtfn && vf_dev->physfn == pdev)
++				pci_restore_msi_state(vf_dev);
++		}
++	}
++}
++#endif /* CONFIG_PCI_IOV */
++
+ /**
+  * i40e_vc_notify_vf_reset
+  * @vf: pointer to the VF structure
+@@ -3369,16 +3395,16 @@ static int i40e_validate_cloud_filter(struct i40e_vf *vf,
+ 	bool found = false;
+ 	int bkt;
+ 
+-	if (!tc_filter->action) {
++	if (tc_filter->action != VIRTCHNL_ACTION_TC_REDIRECT) {
+ 		dev_info(&pf->pdev->dev,
+-			 "VF %d: Currently ADq doesn't support Drop Action\n",
+-			 vf->vf_id);
++			 "VF %d: ADQ doesn't support this action (%d)\n",
++			 vf->vf_id, tc_filter->action);
+ 		goto err;
+ 	}
+ 
+ 	/* action_meta is TC number here to which the filter is applied */
+ 	if (!tc_filter->action_meta ||
+-	    tc_filter->action_meta > I40E_MAX_VF_VSI) {
++	    tc_filter->action_meta > vf->num_tc) {
+ 		dev_info(&pf->pdev->dev, "VF %d: Invalid TC number %u\n",
+ 			 vf->vf_id, tc_filter->action_meta);
+ 		goto err;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index 358bbdb587951..bd497cc5303a1 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -135,6 +135,9 @@ int i40e_ndo_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool enable);
+ 
+ void i40e_vc_notify_link_state(struct i40e_pf *pf);
+ void i40e_vc_notify_reset(struct i40e_pf *pf);
++#ifdef CONFIG_PCI_IOV
++void i40e_restore_all_vfs_msi_state(struct pci_dev *pdev);
++#endif /* CONFIG_PCI_IOV */
+ int i40e_get_vf_stats(struct net_device *netdev, int vf_id,
+ 		      struct ifla_vf_stats *vf_stats);
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+index 407b9477da248..dc34e564c9192 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+@@ -359,7 +359,7 @@ struct npc_lt_def {
+ 	u8	ltype_mask;
+ 	u8	ltype_match;
+ 	u8	lid;
+-};
++} __packed;
+ 
+ struct npc_lt_def_ipsec {
+ 	u8	ltype_mask;
+@@ -367,7 +367,7 @@ struct npc_lt_def_ipsec {
+ 	u8	lid;
+ 	u8	spi_offset;
+ 	u8	spi_nz;
+-};
++} __packed;
+ 
+ struct npc_lt_def_cfg {
+ 	struct npc_lt_def	rx_ol2;
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index 99fd35a8ca750..127daad4410b9 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -315,12 +315,11 @@ static void ql_release_to_lrg_buf_free_list(struct ql3_adapter *qdev,
+ 			 * buffer
+ 			 */
+ 			skb_reserve(lrg_buf_cb->skb, QL_HEADER_SPACE);
+-			map = pci_map_single(qdev->pdev,
++			map = dma_map_single(&qdev->pdev->dev,
+ 					     lrg_buf_cb->skb->data,
+-					     qdev->lrg_buffer_len -
+-					     QL_HEADER_SPACE,
+-					     PCI_DMA_FROMDEVICE);
+-			err = pci_dma_mapping_error(qdev->pdev, map);
++					     qdev->lrg_buffer_len - QL_HEADER_SPACE,
++					     DMA_FROM_DEVICE);
++			err = dma_mapping_error(&qdev->pdev->dev, map);
+ 			if (err) {
+ 				netdev_err(qdev->ndev,
+ 					   "PCI mapping failed with error: %d\n",
+@@ -1802,13 +1801,12 @@ static int ql_populate_free_queue(struct ql3_adapter *qdev)
+ 				 * first buffer
+ 				 */
+ 				skb_reserve(lrg_buf_cb->skb, QL_HEADER_SPACE);
+-				map = pci_map_single(qdev->pdev,
++				map = dma_map_single(&qdev->pdev->dev,
+ 						     lrg_buf_cb->skb->data,
+-						     qdev->lrg_buffer_len -
+-						     QL_HEADER_SPACE,
+-						     PCI_DMA_FROMDEVICE);
++						     qdev->lrg_buffer_len - QL_HEADER_SPACE,
++						     DMA_FROM_DEVICE);
+ 
+-				err = pci_dma_mapping_error(qdev->pdev, map);
++				err = dma_mapping_error(&qdev->pdev->dev, map);
+ 				if (err) {
+ 					netdev_err(qdev->ndev,
+ 						   "PCI mapping failed with error: %d\n",
+@@ -1943,18 +1941,16 @@ static void ql_process_mac_tx_intr(struct ql3_adapter *qdev,
+ 		goto invalid_seg_count;
+ 	}
+ 
+-	pci_unmap_single(qdev->pdev,
++	dma_unmap_single(&qdev->pdev->dev,
+ 			 dma_unmap_addr(&tx_cb->map[0], mapaddr),
+-			 dma_unmap_len(&tx_cb->map[0], maplen),
+-			 PCI_DMA_TODEVICE);
++			 dma_unmap_len(&tx_cb->map[0], maplen), DMA_TO_DEVICE);
+ 	tx_cb->seg_count--;
+ 	if (tx_cb->seg_count) {
+ 		for (i = 1; i < tx_cb->seg_count; i++) {
+-			pci_unmap_page(qdev->pdev,
+-				       dma_unmap_addr(&tx_cb->map[i],
+-						      mapaddr),
++			dma_unmap_page(&qdev->pdev->dev,
++				       dma_unmap_addr(&tx_cb->map[i], mapaddr),
+ 				       dma_unmap_len(&tx_cb->map[i], maplen),
+-				       PCI_DMA_TODEVICE);
++				       DMA_TO_DEVICE);
+ 		}
+ 	}
+ 	qdev->ndev->stats.tx_packets++;
+@@ -2021,10 +2017,9 @@ static void ql_process_mac_rx_intr(struct ql3_adapter *qdev,
+ 	qdev->ndev->stats.rx_bytes += length;
+ 
+ 	skb_put(skb, length);
+-	pci_unmap_single(qdev->pdev,
++	dma_unmap_single(&qdev->pdev->dev,
+ 			 dma_unmap_addr(lrg_buf_cb2, mapaddr),
+-			 dma_unmap_len(lrg_buf_cb2, maplen),
+-			 PCI_DMA_FROMDEVICE);
++			 dma_unmap_len(lrg_buf_cb2, maplen), DMA_FROM_DEVICE);
+ 	prefetch(skb->data);
+ 	skb_checksum_none_assert(skb);
+ 	skb->protocol = eth_type_trans(skb, qdev->ndev);
+@@ -2067,10 +2062,9 @@ static void ql_process_macip_rx_intr(struct ql3_adapter *qdev,
+ 	skb2 = lrg_buf_cb2->skb;
+ 
+ 	skb_put(skb2, length);	/* Just the second buffer length here. */
+-	pci_unmap_single(qdev->pdev,
++	dma_unmap_single(&qdev->pdev->dev,
+ 			 dma_unmap_addr(lrg_buf_cb2, mapaddr),
+-			 dma_unmap_len(lrg_buf_cb2, maplen),
+-			 PCI_DMA_FROMDEVICE);
++			 dma_unmap_len(lrg_buf_cb2, maplen), DMA_FROM_DEVICE);
+ 	prefetch(skb2->data);
+ 
+ 	skb_checksum_none_assert(skb2);
+@@ -2319,9 +2313,9 @@ static int ql_send_map(struct ql3_adapter *qdev,
+ 	/*
+ 	 * Map the skb buffer first.
+ 	 */
+-	map = pci_map_single(qdev->pdev, skb->data, len, PCI_DMA_TODEVICE);
++	map = dma_map_single(&qdev->pdev->dev, skb->data, len, DMA_TO_DEVICE);
+ 
+-	err = pci_dma_mapping_error(qdev->pdev, map);
++	err = dma_mapping_error(&qdev->pdev->dev, map);
+ 	if (err) {
+ 		netdev_err(qdev->ndev, "PCI mapping failed with error: %d\n",
+ 			   err);
+@@ -2357,11 +2351,11 @@ static int ql_send_map(struct ql3_adapter *qdev,
+ 		    (seg == 7 && seg_cnt > 8) ||
+ 		    (seg == 12 && seg_cnt > 13) ||
+ 		    (seg == 17 && seg_cnt > 18)) {
+-			map = pci_map_single(qdev->pdev, oal,
++			map = dma_map_single(&qdev->pdev->dev, oal,
+ 					     sizeof(struct oal),
+-					     PCI_DMA_TODEVICE);
++					     DMA_TO_DEVICE);
+ 
+-			err = pci_dma_mapping_error(qdev->pdev, map);
++			err = dma_mapping_error(&qdev->pdev->dev, map);
+ 			if (err) {
+ 				netdev_err(qdev->ndev,
+ 					   "PCI mapping outbound address list with error: %d\n",
+@@ -2423,24 +2417,24 @@ map_error:
+ 		    (seg == 7 && seg_cnt > 8) ||
+ 		    (seg == 12 && seg_cnt > 13) ||
+ 		    (seg == 17 && seg_cnt > 18)) {
+-			pci_unmap_single(qdev->pdev,
+-				dma_unmap_addr(&tx_cb->map[seg], mapaddr),
+-				dma_unmap_len(&tx_cb->map[seg], maplen),
+-				 PCI_DMA_TODEVICE);
++			dma_unmap_single(&qdev->pdev->dev,
++					 dma_unmap_addr(&tx_cb->map[seg], mapaddr),
++					 dma_unmap_len(&tx_cb->map[seg], maplen),
++					 DMA_TO_DEVICE);
+ 			oal++;
+ 			seg++;
+ 		}
+ 
+-		pci_unmap_page(qdev->pdev,
++		dma_unmap_page(&qdev->pdev->dev,
+ 			       dma_unmap_addr(&tx_cb->map[seg], mapaddr),
+ 			       dma_unmap_len(&tx_cb->map[seg], maplen),
+-			       PCI_DMA_TODEVICE);
++			       DMA_TO_DEVICE);
+ 	}
+ 
+-	pci_unmap_single(qdev->pdev,
++	dma_unmap_single(&qdev->pdev->dev,
+ 			 dma_unmap_addr(&tx_cb->map[0], mapaddr),
+ 			 dma_unmap_addr(&tx_cb->map[0], maplen),
+-			 PCI_DMA_TODEVICE);
++			 DMA_TO_DEVICE);
+ 
+ 	return NETDEV_TX_BUSY;
+ 
+@@ -2526,9 +2520,8 @@ static int ql_alloc_net_req_rsp_queues(struct ql3_adapter *qdev)
+ 	wmb();
+ 
+ 	qdev->req_q_virt_addr =
+-	    pci_alloc_consistent(qdev->pdev,
+-				 (size_t) qdev->req_q_size,
+-				 &qdev->req_q_phy_addr);
++	    dma_alloc_coherent(&qdev->pdev->dev, (size_t)qdev->req_q_size,
++			       &qdev->req_q_phy_addr, GFP_KERNEL);
+ 
+ 	if ((qdev->req_q_virt_addr == NULL) ||
+ 	    LS_64BITS(qdev->req_q_phy_addr) & (qdev->req_q_size - 1)) {
+@@ -2537,16 +2530,14 @@ static int ql_alloc_net_req_rsp_queues(struct ql3_adapter *qdev)
+ 	}
+ 
+ 	qdev->rsp_q_virt_addr =
+-	    pci_alloc_consistent(qdev->pdev,
+-				 (size_t) qdev->rsp_q_size,
+-				 &qdev->rsp_q_phy_addr);
++	    dma_alloc_coherent(&qdev->pdev->dev, (size_t)qdev->rsp_q_size,
++			       &qdev->rsp_q_phy_addr, GFP_KERNEL);
+ 
+ 	if ((qdev->rsp_q_virt_addr == NULL) ||
+ 	    LS_64BITS(qdev->rsp_q_phy_addr) & (qdev->rsp_q_size - 1)) {
+ 		netdev_err(qdev->ndev, "rspQ allocation failed\n");
+-		pci_free_consistent(qdev->pdev, (size_t) qdev->req_q_size,
+-				    qdev->req_q_virt_addr,
+-				    qdev->req_q_phy_addr);
++		dma_free_coherent(&qdev->pdev->dev, (size_t)qdev->req_q_size,
++				  qdev->req_q_virt_addr, qdev->req_q_phy_addr);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -2562,15 +2553,13 @@ static void ql_free_net_req_rsp_queues(struct ql3_adapter *qdev)
+ 		return;
+ 	}
+ 
+-	pci_free_consistent(qdev->pdev,
+-			    qdev->req_q_size,
+-			    qdev->req_q_virt_addr, qdev->req_q_phy_addr);
++	dma_free_coherent(&qdev->pdev->dev, qdev->req_q_size,
++			  qdev->req_q_virt_addr, qdev->req_q_phy_addr);
+ 
+ 	qdev->req_q_virt_addr = NULL;
+ 
+-	pci_free_consistent(qdev->pdev,
+-			    qdev->rsp_q_size,
+-			    qdev->rsp_q_virt_addr, qdev->rsp_q_phy_addr);
++	dma_free_coherent(&qdev->pdev->dev, qdev->rsp_q_size,
++			  qdev->rsp_q_virt_addr, qdev->rsp_q_phy_addr);
+ 
+ 	qdev->rsp_q_virt_addr = NULL;
+ 
+@@ -2594,12 +2583,13 @@ static int ql_alloc_buffer_queues(struct ql3_adapter *qdev)
+ 		return -ENOMEM;
+ 
+ 	qdev->lrg_buf_q_alloc_virt_addr =
+-		pci_alloc_consistent(qdev->pdev,
+-				     qdev->lrg_buf_q_alloc_size,
+-				     &qdev->lrg_buf_q_alloc_phy_addr);
++		dma_alloc_coherent(&qdev->pdev->dev,
++				   qdev->lrg_buf_q_alloc_size,
++				   &qdev->lrg_buf_q_alloc_phy_addr, GFP_KERNEL);
+ 
+ 	if (qdev->lrg_buf_q_alloc_virt_addr == NULL) {
+ 		netdev_err(qdev->ndev, "lBufQ failed\n");
++		kfree(qdev->lrg_buf);
+ 		return -ENOMEM;
+ 	}
+ 	qdev->lrg_buf_q_virt_addr = qdev->lrg_buf_q_alloc_virt_addr;
+@@ -2614,15 +2604,17 @@ static int ql_alloc_buffer_queues(struct ql3_adapter *qdev)
+ 		qdev->small_buf_q_alloc_size = qdev->small_buf_q_size * 2;
+ 
+ 	qdev->small_buf_q_alloc_virt_addr =
+-		pci_alloc_consistent(qdev->pdev,
+-				     qdev->small_buf_q_alloc_size,
+-				     &qdev->small_buf_q_alloc_phy_addr);
++		dma_alloc_coherent(&qdev->pdev->dev,
++				   qdev->small_buf_q_alloc_size,
++				   &qdev->small_buf_q_alloc_phy_addr, GFP_KERNEL);
+ 
+ 	if (qdev->small_buf_q_alloc_virt_addr == NULL) {
+ 		netdev_err(qdev->ndev, "Small Buffer Queue allocation failed\n");
+-		pci_free_consistent(qdev->pdev, qdev->lrg_buf_q_alloc_size,
+-				    qdev->lrg_buf_q_alloc_virt_addr,
+-				    qdev->lrg_buf_q_alloc_phy_addr);
++		dma_free_coherent(&qdev->pdev->dev,
++				  qdev->lrg_buf_q_alloc_size,
++				  qdev->lrg_buf_q_alloc_virt_addr,
++				  qdev->lrg_buf_q_alloc_phy_addr);
++		kfree(qdev->lrg_buf);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -2639,17 +2631,15 @@ static void ql_free_buffer_queues(struct ql3_adapter *qdev)
+ 		return;
+ 	}
+ 	kfree(qdev->lrg_buf);
+-	pci_free_consistent(qdev->pdev,
+-			    qdev->lrg_buf_q_alloc_size,
+-			    qdev->lrg_buf_q_alloc_virt_addr,
+-			    qdev->lrg_buf_q_alloc_phy_addr);
++	dma_free_coherent(&qdev->pdev->dev, qdev->lrg_buf_q_alloc_size,
++			  qdev->lrg_buf_q_alloc_virt_addr,
++			  qdev->lrg_buf_q_alloc_phy_addr);
+ 
+ 	qdev->lrg_buf_q_virt_addr = NULL;
+ 
+-	pci_free_consistent(qdev->pdev,
+-			    qdev->small_buf_q_alloc_size,
+-			    qdev->small_buf_q_alloc_virt_addr,
+-			    qdev->small_buf_q_alloc_phy_addr);
++	dma_free_coherent(&qdev->pdev->dev, qdev->small_buf_q_alloc_size,
++			  qdev->small_buf_q_alloc_virt_addr,
++			  qdev->small_buf_q_alloc_phy_addr);
+ 
+ 	qdev->small_buf_q_virt_addr = NULL;
+ 
+@@ -2667,9 +2657,9 @@ static int ql_alloc_small_buffers(struct ql3_adapter *qdev)
+ 		 QL_SMALL_BUFFER_SIZE);
+ 
+ 	qdev->small_buf_virt_addr =
+-		pci_alloc_consistent(qdev->pdev,
+-				     qdev->small_buf_total_size,
+-				     &qdev->small_buf_phy_addr);
++		dma_alloc_coherent(&qdev->pdev->dev,
++				   qdev->small_buf_total_size,
++				   &qdev->small_buf_phy_addr, GFP_KERNEL);
+ 
+ 	if (qdev->small_buf_virt_addr == NULL) {
+ 		netdev_err(qdev->ndev, "Failed to get small buffer memory\n");
+@@ -2702,10 +2692,10 @@ static void ql_free_small_buffers(struct ql3_adapter *qdev)
+ 		return;
+ 	}
+ 	if (qdev->small_buf_virt_addr != NULL) {
+-		pci_free_consistent(qdev->pdev,
+-				    qdev->small_buf_total_size,
+-				    qdev->small_buf_virt_addr,
+-				    qdev->small_buf_phy_addr);
++		dma_free_coherent(&qdev->pdev->dev,
++				  qdev->small_buf_total_size,
++				  qdev->small_buf_virt_addr,
++				  qdev->small_buf_phy_addr);
+ 
+ 		qdev->small_buf_virt_addr = NULL;
+ 	}
+@@ -2720,10 +2710,10 @@ static void ql_free_large_buffers(struct ql3_adapter *qdev)
+ 		lrg_buf_cb = &qdev->lrg_buf[i];
+ 		if (lrg_buf_cb->skb) {
+ 			dev_kfree_skb(lrg_buf_cb->skb);
+-			pci_unmap_single(qdev->pdev,
++			dma_unmap_single(&qdev->pdev->dev,
+ 					 dma_unmap_addr(lrg_buf_cb, mapaddr),
+ 					 dma_unmap_len(lrg_buf_cb, maplen),
+-					 PCI_DMA_FROMDEVICE);
++					 DMA_FROM_DEVICE);
+ 			memset(lrg_buf_cb, 0, sizeof(struct ql_rcv_buf_cb));
+ 		} else {
+ 			break;
+@@ -2775,13 +2765,11 @@ static int ql_alloc_large_buffers(struct ql3_adapter *qdev)
+ 			 * buffer
+ 			 */
+ 			skb_reserve(skb, QL_HEADER_SPACE);
+-			map = pci_map_single(qdev->pdev,
+-					     skb->data,
+-					     qdev->lrg_buffer_len -
+-					     QL_HEADER_SPACE,
+-					     PCI_DMA_FROMDEVICE);
++			map = dma_map_single(&qdev->pdev->dev, skb->data,
++					     qdev->lrg_buffer_len - QL_HEADER_SPACE,
++					     DMA_FROM_DEVICE);
+ 
+-			err = pci_dma_mapping_error(qdev->pdev, map);
++			err = dma_mapping_error(&qdev->pdev->dev, map);
+ 			if (err) {
+ 				netdev_err(qdev->ndev,
+ 					   "PCI mapping failed with error: %d\n",
+@@ -2866,8 +2854,8 @@ static int ql_alloc_mem_resources(struct ql3_adapter *qdev)
+ 	 * Network Completion Queue Producer Index Register
+ 	 */
+ 	qdev->shadow_reg_virt_addr =
+-		pci_alloc_consistent(qdev->pdev,
+-				     PAGE_SIZE, &qdev->shadow_reg_phy_addr);
++		dma_alloc_coherent(&qdev->pdev->dev, PAGE_SIZE,
++				   &qdev->shadow_reg_phy_addr, GFP_KERNEL);
+ 
+ 	if (qdev->shadow_reg_virt_addr != NULL) {
+ 		qdev->preq_consumer_index = qdev->shadow_reg_virt_addr;
+@@ -2922,10 +2910,9 @@ err_small_buffers:
+ err_buffer_queues:
+ 	ql_free_net_req_rsp_queues(qdev);
+ err_req_rsp:
+-	pci_free_consistent(qdev->pdev,
+-			    PAGE_SIZE,
+-			    qdev->shadow_reg_virt_addr,
+-			    qdev->shadow_reg_phy_addr);
++	dma_free_coherent(&qdev->pdev->dev, PAGE_SIZE,
++			  qdev->shadow_reg_virt_addr,
++			  qdev->shadow_reg_phy_addr);
+ 
+ 	return -ENOMEM;
+ }
+@@ -2938,10 +2925,9 @@ static void ql_free_mem_resources(struct ql3_adapter *qdev)
+ 	ql_free_buffer_queues(qdev);
+ 	ql_free_net_req_rsp_queues(qdev);
+ 	if (qdev->shadow_reg_virt_addr != NULL) {
+-		pci_free_consistent(qdev->pdev,
+-				    PAGE_SIZE,
+-				    qdev->shadow_reg_virt_addr,
+-				    qdev->shadow_reg_phy_addr);
++		dma_free_coherent(&qdev->pdev->dev, PAGE_SIZE,
++				  qdev->shadow_reg_virt_addr,
++				  qdev->shadow_reg_phy_addr);
+ 		qdev->shadow_reg_virt_addr = NULL;
+ 	}
+ }
+@@ -3642,18 +3628,15 @@ static void ql_reset_work(struct work_struct *work)
+ 			if (tx_cb->skb) {
+ 				netdev_printk(KERN_DEBUG, ndev,
+ 					      "Freeing lost SKB\n");
+-				pci_unmap_single(qdev->pdev,
+-					 dma_unmap_addr(&tx_cb->map[0],
+-							mapaddr),
+-					 dma_unmap_len(&tx_cb->map[0], maplen),
+-					 PCI_DMA_TODEVICE);
++				dma_unmap_single(&qdev->pdev->dev,
++						 dma_unmap_addr(&tx_cb->map[0], mapaddr),
++						 dma_unmap_len(&tx_cb->map[0], maplen),
++						 DMA_TO_DEVICE);
+ 				for (j = 1; j < tx_cb->seg_count; j++) {
+-					pci_unmap_page(qdev->pdev,
+-					       dma_unmap_addr(&tx_cb->map[j],
+-							      mapaddr),
+-					       dma_unmap_len(&tx_cb->map[j],
+-							     maplen),
+-					       PCI_DMA_TODEVICE);
++					dma_unmap_page(&qdev->pdev->dev,
++						       dma_unmap_addr(&tx_cb->map[j], mapaddr),
++						       dma_unmap_len(&tx_cb->map[j], maplen),
++						       DMA_TO_DEVICE);
+ 				}
+ 				dev_kfree_skb(tx_cb->skb);
+ 				tx_cb->skb = NULL;
+@@ -3785,13 +3768,10 @@ static int ql3xxx_probe(struct pci_dev *pdev,
+ 
+ 	pci_set_master(pdev);
+ 
+-	if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
++	if (!dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
+ 		pci_using_dac = 1;
+-		err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
+-	} else if (!(err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)))) {
++	else if (!(err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))))
+ 		pci_using_dac = 0;
+-		err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
+-	}
+ 
+ 	if (err) {
+ 		pr_err("%s no usable DMA configuration\n", pci_name(pdev));
+diff --git a/drivers/net/ethernet/sfc/rx_common.c b/drivers/net/ethernet/sfc/rx_common.c
+index 36b46ddb67107..0ea3168e08960 100644
+--- a/drivers/net/ethernet/sfc/rx_common.c
++++ b/drivers/net/ethernet/sfc/rx_common.c
+@@ -837,8 +837,10 @@ int efx_probe_filters(struct efx_nic *efx)
+ 		}
+ 
+ 		if (!success) {
+-			efx_for_each_channel(channel, efx)
++			efx_for_each_channel(channel, efx) {
+ 				kfree(channel->rps_flow_id);
++				channel->rps_flow_id = NULL;
++			}
+ 			efx->type->filter_table_remove(efx);
+ 			rc = -ENOMEM;
+ 			goto out_unlock;
+diff --git a/drivers/net/usb/ax88172a.c b/drivers/net/usb/ax88172a.c
+index fd3a04d98dc14..2bdb163e458ad 100644
+--- a/drivers/net/usb/ax88172a.c
++++ b/drivers/net/usb/ax88172a.c
+@@ -175,7 +175,9 @@ static int ax88172a_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	u8 buf[ETH_ALEN];
+ 	struct ax88172a_private *priv;
+ 
+-	usbnet_get_endpoints(dev, intf);
++	ret = usbnet_get_endpoints(dev, intf);
++	if (ret)
++		return ret;
+ 
+ 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+diff --git a/drivers/nvme/host/trace.h b/drivers/nvme/host/trace.h
+index b258f7b8788e1..700fdce2ecf15 100644
+--- a/drivers/nvme/host/trace.h
++++ b/drivers/nvme/host/trace.h
+@@ -98,7 +98,7 @@ TRACE_EVENT(nvme_complete_rq,
+ 	    TP_fast_assign(
+ 		__entry->ctrl_id = nvme_req(req)->ctrl->instance;
+ 		__entry->qid = nvme_req_qid(req);
+-		__entry->cid = nvme_req(req)->cmd->common.command_id;
++		__entry->cid = req->tag;
+ 		__entry->result = le64_to_cpu(nvme_req(req)->result.u64);
+ 		__entry->retries = nvme_req(req)->retries;
+ 		__entry->flags = nvme_req(req)->flags;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 158ff4331a141..500905dad6434 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5353,6 +5353,12 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0420, quirk_no_ext_tags);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0422, quirk_no_ext_tags);
+ 
+ #ifdef CONFIG_PCI_ATS
++static void quirk_no_ats(struct pci_dev *pdev)
++{
++	pci_info(pdev, "disabling ATS\n");
++	pdev->ats_cap = 0;
++}
++
+ /*
+  * Some devices require additional driver setup to enable ATS.  Don't use
+  * ATS for those devices as ATS will be enabled before the driver has had a
+@@ -5365,8 +5371,7 @@ static void quirk_amd_harvest_no_ats(struct pci_dev *pdev)
+ 	    (pdev->device == 0x7341 && pdev->revision != 0x00))
+ 		return;
+ 
+-	pci_info(pdev, "disabling ATS\n");
+-	pdev->ats_cap = 0;
++	quirk_no_ats(pdev);
+ }
+ 
+ /* AMD Stoney platform GPU */
+@@ -5378,6 +5383,25 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7312, quirk_amd_harvest_no_ats);
+ /* AMD Navi14 dGPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7340, quirk_amd_harvest_no_ats);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7341, quirk_amd_harvest_no_ats);
++
++/*
++ * Intel IPU E2000 revisions before C0 implement incorrect endianness
++ * in ATS Invalidate Request message body. Disable ATS for those devices.
++ */
++static void quirk_intel_e2000_no_ats(struct pci_dev *pdev)
++{
++	if (pdev->revision < 0x20)
++		quirk_no_ats(pdev);
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1451, quirk_intel_e2000_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1452, quirk_intel_e2000_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1453, quirk_intel_e2000_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1454, quirk_intel_e2000_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1455, quirk_intel_e2000_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1457, quirk_intel_e2000_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1459, quirk_intel_e2000_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x145a, quirk_intel_e2000_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x145c, quirk_intel_e2000_no_ats);
+ #endif /* CONFIG_PCI_ATS */
+ 
+ /* Freescale PCIe doesn't support MSI in RC mode */
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 29f020c4b2d0d..906f985c74e7c 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -2031,22 +2031,33 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
+ 	if ((start | len) & (bdev_logical_block_size(bdev) - 1))
+ 		return -EINVAL;
+ 
+-	/* Invalidate the page cache, including dirty pages. */
+-	error = truncate_bdev_range(bdev, file->f_mode, start, end);
+-	if (error)
+-		return error;
+-
++	/*
++	 * Invalidate the page cache, including dirty pages, for valid
++	 * de-allocate mode calls to fallocate().
++	 */
+ 	switch (mode) {
+ 	case FALLOC_FL_ZERO_RANGE:
+ 	case FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE:
++		error = truncate_bdev_range(bdev, file->f_mode, start, end);
++		if (error)
++			break;
++
+ 		error = blkdev_issue_zeroout(bdev, start >> 9, len >> 9,
+ 					    GFP_KERNEL, BLKDEV_ZERO_NOUNMAP);
+ 		break;
+ 	case FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE:
++		error = truncate_bdev_range(bdev, file->f_mode, start, end);
++		if (error)
++			break;
++
+ 		error = blkdev_issue_zeroout(bdev, start >> 9, len >> 9,
+ 					     GFP_KERNEL, BLKDEV_ZERO_NOFALLBACK);
+ 		break;
+ 	case FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE | FALLOC_FL_NO_HIDE_STALE:
++		error = truncate_bdev_range(bdev, file->f_mode, start, end);
++		if (error)
++			break;
++
+ 		error = blkdev_issue_discard(bdev, start >> 9, len >> 9,
+ 					     GFP_KERNEL, 0);
+ 		break;
+diff --git a/include/net/dst_ops.h b/include/net/dst_ops.h
+index 88ff7bb2bb9bd..632086b2f644a 100644
+--- a/include/net/dst_ops.h
++++ b/include/net/dst_ops.h
+@@ -16,7 +16,7 @@ struct dst_ops {
+ 	unsigned short		family;
+ 	unsigned int		gc_thresh;
+ 
+-	int			(*gc)(struct dst_ops *ops);
++	void			(*gc)(struct dst_ops *ops);
+ 	struct dst_entry *	(*check)(struct dst_entry *, __u32 cookie);
+ 	unsigned int		(*default_advmss)(const struct dst_entry *);
+ 	unsigned int		(*mtu)(const struct dst_entry *);
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 652283a1353d7..f320ff02cc196 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1010,7 +1010,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
+ 	 * This check implies we don't kill processes if their pages
+ 	 * are in the swap cache early. Those are always late kills.
+ 	 */
+-	if (!page_mapped(hpage))
++	if (!page_mapped(p))
+ 		return true;
+ 
+ 	if (PageKsm(p)) {
+@@ -1075,12 +1075,12 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
+ 				unmap_success = false;
+ 			}
+ 		} else {
+-			unmap_success = try_to_unmap(hpage, ttu);
++			unmap_success = try_to_unmap(p, ttu);
+ 		}
+ 	}
+ 	if (!unmap_success)
+ 		pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n",
+-		       pfn, page_mapcount(hpage));
++		       pfn, page_mapcount(p));
+ 
+ 	/*
+ 	 * try_to_unmap() might put mlocked page in lru cache, so call
+diff --git a/mm/memory.c b/mm/memory.c
+index cbc0a163d7057..1d101aeae416a 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3300,8 +3300,8 @@ void unmap_mapping_pages(struct address_space *mapping, pgoff_t start,
+ void unmap_mapping_range(struct address_space *mapping,
+ 		loff_t const holebegin, loff_t const holelen, int even_cows)
+ {
+-	pgoff_t hba = holebegin >> PAGE_SHIFT;
+-	pgoff_t hlen = (holelen + PAGE_SIZE - 1) >> PAGE_SHIFT;
++	pgoff_t hba = (pgoff_t)(holebegin) >> PAGE_SHIFT;
++	pgoff_t hlen = ((pgoff_t)(holelen) + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ 
+ 	/* Check for overflow. */
+ 	if (sizeof(holelen) > sizeof(hlen)) {
+diff --git a/net/core/dst.c b/net/core/dst.c
+index fb3bcba87744d..453ec8aafc4ab 100644
+--- a/net/core/dst.c
++++ b/net/core/dst.c
+@@ -83,12 +83,8 @@ void *dst_alloc(struct dst_ops *ops, struct net_device *dev,
+ 
+ 	if (ops->gc &&
+ 	    !(flags & DST_NOCOUNT) &&
+-	    dst_entries_get_fast(ops) > ops->gc_thresh) {
+-		if (ops->gc(ops)) {
+-			pr_notice_ratelimited("Route cache is full: consider increasing sysctl net.ipv6.route.max_size.\n");
+-			return NULL;
+-		}
+-	}
++	    dst_entries_get_fast(ops) > ops->gc_thresh)
++		ops->gc(ops);
+ 
+ 	dst = kmem_cache_alloc(ops->kmem_cachep, GFP_ATOMIC);
+ 	if (!dst)
+diff --git a/net/core/sock.c b/net/core/sock.c
+index a069b5476df46..769e969cd1dc5 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2395,6 +2395,7 @@ int __sock_cmsg_send(struct sock *sk, struct msghdr *msg, struct cmsghdr *cmsg,
+ 		sockc->mark = *(u32 *)CMSG_DATA(cmsg);
+ 		break;
+ 	case SO_TIMESTAMPING_OLD:
++	case SO_TIMESTAMPING_NEW:
+ 		if (cmsg->cmsg_len != CMSG_LEN(sizeof(u32)))
+ 			return -EINVAL;
+ 
+diff --git a/net/dns_resolver/dns_key.c b/net/dns_resolver/dns_key.c
+index 03f8f33dc134c..8324e9f970668 100644
+--- a/net/dns_resolver/dns_key.c
++++ b/net/dns_resolver/dns_key.c
+@@ -91,8 +91,6 @@ const struct cred *dns_resolver_cache;
+ static int
+ dns_resolver_preparse(struct key_preparsed_payload *prep)
+ {
+-	const struct dns_server_list_v1_header *v1;
+-	const struct dns_payload_header *bin;
+ 	struct user_key_payload *upayload;
+ 	unsigned long derrno;
+ 	int ret;
+@@ -103,27 +101,28 @@ dns_resolver_preparse(struct key_preparsed_payload *prep)
+ 		return -EINVAL;
+ 
+ 	if (data[0] == 0) {
++		const struct dns_server_list_v1_header *v1;
++
+ 		/* It may be a server list. */
+-		if (datalen <= sizeof(*bin))
++		if (datalen <= sizeof(*v1))
+ 			return -EINVAL;
+ 
+-		bin = (const struct dns_payload_header *)data;
+-		kenter("[%u,%u],%u", bin->content, bin->version, datalen);
+-		if (bin->content != DNS_PAYLOAD_IS_SERVER_LIST) {
++		v1 = (const struct dns_server_list_v1_header *)data;
++		kenter("[%u,%u],%u", v1->hdr.content, v1->hdr.version, datalen);
++		if (v1->hdr.content != DNS_PAYLOAD_IS_SERVER_LIST) {
+ 			pr_warn_ratelimited(
+ 				"dns_resolver: Unsupported content type (%u)\n",
+-				bin->content);
++				v1->hdr.content);
+ 			return -EINVAL;
+ 		}
+ 
+-		if (bin->version != 1) {
++		if (v1->hdr.version != 1) {
+ 			pr_warn_ratelimited(
+ 				"dns_resolver: Unsupported server list version (%u)\n",
+-				bin->version);
++				v1->hdr.version);
+ 			return -EINVAL;
+ 		}
+ 
+-		v1 = (const struct dns_server_list_v1_header *)bin;
+ 		if ((v1->status != DNS_LOOKUP_GOOD &&
+ 		     v1->status != DNS_LOOKUP_GOOD_WITH_BAD)) {
+ 			if (prep->expiry == TIME64_MAX)
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index a6d5c99f65a3a..b23e42efb3dff 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -89,7 +89,7 @@ static struct dst_entry *ip6_negative_advice(struct dst_entry *);
+ static void		ip6_dst_destroy(struct dst_entry *);
+ static void		ip6_dst_ifdown(struct dst_entry *,
+ 				       struct net_device *dev, int how);
+-static int		 ip6_dst_gc(struct dst_ops *ops);
++static void		 ip6_dst_gc(struct dst_ops *ops);
+ 
+ static int		ip6_pkt_discard(struct sk_buff *skb);
+ static int		ip6_pkt_discard_out(struct net *net, struct sock *sk, struct sk_buff *skb);
+@@ -3184,11 +3184,10 @@ out:
+ 	return dst;
+ }
+ 
+-static int ip6_dst_gc(struct dst_ops *ops)
++static void ip6_dst_gc(struct dst_ops *ops)
+ {
+ 	struct net *net = container_of(ops, struct net, ipv6.ip6_dst_ops);
+ 	int rt_min_interval = net->ipv6.sysctl.ip6_rt_gc_min_interval;
+-	int rt_max_size = net->ipv6.sysctl.ip6_rt_max_size;
+ 	int rt_elasticity = net->ipv6.sysctl.ip6_rt_gc_elasticity;
+ 	int rt_gc_timeout = net->ipv6.sysctl.ip6_rt_gc_timeout;
+ 	unsigned long rt_last_gc = net->ipv6.ip6_rt_last_gc;
+@@ -3196,11 +3195,10 @@ static int ip6_dst_gc(struct dst_ops *ops)
+ 	int entries;
+ 
+ 	entries = dst_entries_get_fast(ops);
+-	if (entries > rt_max_size)
++	if (entries > ops->gc_thresh)
+ 		entries = dst_entries_get_slow(ops);
+ 
+-	if (time_after(rt_last_gc + rt_min_interval, jiffies) &&
+-	    entries <= rt_max_size)
++	if (time_after(rt_last_gc + rt_min_interval, jiffies))
+ 		goto out;
+ 
+ 	fib6_run_gc(atomic_inc_return(&net->ipv6.ip6_rt_gc_expire), net, true);
+@@ -3210,7 +3208,6 @@ static int ip6_dst_gc(struct dst_ops *ops)
+ out:
+ 	val = atomic_read(&net->ipv6.ip6_rt_gc_expire);
+ 	atomic_set(&net->ipv6.ip6_rt_gc_expire, val - (val >> rt_elasticity));
+-	return entries > rt_max_size;
+ }
+ 
+ static int ip6_nh_lookup_table(struct net *net, struct fib6_config *cfg,
+@@ -6363,7 +6360,7 @@ static int __net_init ip6_route_net_init(struct net *net)
+ #endif
+ 
+ 	net->ipv6.sysctl.flush_delay = 0;
+-	net->ipv6.sysctl.ip6_rt_max_size = 4096;
++	net->ipv6.sysctl.ip6_rt_max_size = INT_MAX;
+ 	net->ipv6.sysctl.ip6_rt_gc_min_interval = HZ / 2;
+ 	net->ipv6.sysctl.ip6_rt_gc_timeout = 60*HZ;
+ 	net->ipv6.sysctl.ip6_rt_gc_interval = 30*HZ;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index f244a4323a43b..4d1a009dab450 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1186,6 +1186,30 @@ static int nft_objname_hash_cmp(struct rhashtable_compare_arg *arg,
+ 	return strcmp(obj->key.name, k->name);
+ }
+ 
++static bool nft_supported_family(u8 family)
++{
++	return false
++#ifdef CONFIG_NF_TABLES_INET
++		|| family == NFPROTO_INET
++#endif
++#ifdef CONFIG_NF_TABLES_IPV4
++		|| family == NFPROTO_IPV4
++#endif
++#ifdef CONFIG_NF_TABLES_ARP
++		|| family == NFPROTO_ARP
++#endif
++#ifdef CONFIG_NF_TABLES_NETDEV
++		|| family == NFPROTO_NETDEV
++#endif
++#if IS_ENABLED(CONFIG_NF_TABLES_BRIDGE)
++		|| family == NFPROTO_BRIDGE
++#endif
++#ifdef CONFIG_NF_TABLES_IPV6
++		|| family == NFPROTO_IPV6
++#endif
++		;
++}
++
+ static int nf_tables_newtable(struct net *net, struct sock *nlsk,
+ 			      struct sk_buff *skb, const struct nlmsghdr *nlh,
+ 			      const struct nlattr * const nla[],
+@@ -1201,6 +1225,9 @@ static int nf_tables_newtable(struct net *net, struct sock *nlsk,
+ 	u32 flags = 0;
+ 	int err;
+ 
++	if (!nft_supported_family(family))
++		return -EOPNOTSUPP;
++
+ 	lockdep_assert_held(&nft_net->commit_mutex);
+ 	attr = nla[NFTA_TABLE_NAME];
+ 	table = nft_table_lookup(net, attr, family, genmask);
+@@ -8994,26 +9021,38 @@ EXPORT_SYMBOL_GPL(nft_chain_validate_hooks);
+ static int nf_tables_check_loops(const struct nft_ctx *ctx,
+ 				 const struct nft_chain *chain);
+ 
++static int nft_check_loops(const struct nft_ctx *ctx,
++			   const struct nft_set_ext *ext)
++{
++	const struct nft_data *data;
++	int ret;
++
++	data = nft_set_ext_data(ext);
++	switch (data->verdict.code) {
++	case NFT_JUMP:
++	case NFT_GOTO:
++		ret = nf_tables_check_loops(ctx, data->verdict.chain);
++		break;
++	default:
++		ret = 0;
++		break;
++	}
++
++	return ret;
++}
++
+ static int nf_tables_loop_check_setelem(const struct nft_ctx *ctx,
+ 					struct nft_set *set,
+ 					const struct nft_set_iter *iter,
+ 					struct nft_set_elem *elem)
+ {
+ 	const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+-	const struct nft_data *data;
+ 
+ 	if (nft_set_ext_exists(ext, NFT_SET_EXT_FLAGS) &&
+ 	    *nft_set_ext_flags(ext) & NFT_SET_ELEM_INTERVAL_END)
+ 		return 0;
+ 
+-	data = nft_set_ext_data(ext);
+-	switch (data->verdict.code) {
+-	case NFT_JUMP:
+-	case NFT_GOTO:
+-		return nf_tables_check_loops(ctx, data->verdict.chain);
+-	default:
+-		return 0;
+-	}
++	return nft_check_loops(ctx, ext);
+ }
+ 
+ static int nf_tables_check_loops(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c
+index 7d5b63c5a30af..d154fe67ca8a6 100644
+--- a/net/netfilter/nft_immediate.c
++++ b/net/netfilter/nft_immediate.c
+@@ -78,7 +78,7 @@ static int nft_immediate_init(const struct nft_ctx *ctx,
+ 		case NFT_GOTO:
+ 			err = nf_tables_bind_chain(ctx, chain);
+ 			if (err < 0)
+-				return err;
++				goto err1;
+ 			break;
+ 		default:
+ 			break;
+diff --git a/net/nfc/llcp_core.c b/net/nfc/llcp_core.c
+index 92f70686bee0a..da3cb0d29b972 100644
+--- a/net/nfc/llcp_core.c
++++ b/net/nfc/llcp_core.c
+@@ -147,6 +147,13 @@ static void nfc_llcp_socket_release(struct nfc_llcp_local *local, bool device,
+ 
+ static struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local)
+ {
++	/* Since using nfc_llcp_local may result in usage of nfc_dev, whenever
++	 * we hold a reference to local, we also need to hold a reference to
++	 * the device to avoid UAF.
++	 */
++	if (!nfc_get_device(local->dev->idx))
++		return NULL;
++
+ 	kref_get(&local->ref);
+ 
+ 	return local;
+@@ -179,10 +186,18 @@ static void local_release(struct kref *ref)
+ 
+ int nfc_llcp_local_put(struct nfc_llcp_local *local)
+ {
++	struct nfc_dev *dev;
++	int ret;
++
+ 	if (local == NULL)
+ 		return 0;
+ 
+-	return kref_put(&local->ref, local_release);
++	dev = local->dev;
++
++	ret = kref_put(&local->ref, local_release);
++	nfc_put_device(dev);
++
++	return ret;
+ }
+ 
+ static struct nfc_llcp_sock *nfc_llcp_sock_get(struct nfc_llcp_local *local,
+@@ -968,8 +983,17 @@ static void nfc_llcp_recv_connect(struct nfc_llcp_local *local,
+ 	}
+ 
+ 	new_sock = nfc_llcp_sock(new_sk);
+-	new_sock->dev = local->dev;
++
+ 	new_sock->local = nfc_llcp_local_get(local);
++	if (!new_sock->local) {
++		reason = LLCP_DM_REJ;
++		sock_put(&new_sock->sk);
++		release_sock(&sock->sk);
++		sock_put(&sock->sk);
++		goto fail;
++	}
++
++	new_sock->dev = local->dev;
+ 	new_sock->rw = sock->rw;
+ 	new_sock->miux = sock->miux;
+ 	new_sock->nfc_protocol = sock->nfc_protocol;
+@@ -1607,7 +1631,16 @@ int nfc_llcp_register_device(struct nfc_dev *ndev)
+ 	if (local == NULL)
+ 		return -ENOMEM;
+ 
+-	local->dev = ndev;
++	/* As we are going to initialize local's refcount, we need to get the
++	 * nfc_dev to avoid UAF, otherwise there is no point in continuing.
++	 * See nfc_llcp_local_get().
++	 */
++	local->dev = nfc_get_device(ndev->idx);
++	if (!local->dev) {
++		kfree(local);
++		return -ENODEV;
++	}
++
+ 	INIT_LIST_HEAD(&local->list);
+ 	kref_init(&local->ref);
+ 	mutex_init(&local->sdp_lock);
+diff --git a/net/sched/em_text.c b/net/sched/em_text.c
+index 6f3c1fb2fb44c..f176afb70559e 100644
+--- a/net/sched/em_text.c
++++ b/net/sched/em_text.c
+@@ -97,8 +97,10 @@ retry:
+ 
+ static void em_text_destroy(struct tcf_ematch *m)
+ {
+-	if (EM_TEXT_PRIV(m) && EM_TEXT_PRIV(m)->config)
++	if (EM_TEXT_PRIV(m) && EM_TEXT_PRIV(m)->config) {
+ 		textsearch_destroy(EM_TEXT_PRIV(m)->config);
++		kfree(EM_TEXT_PRIV(m));
++	}
+ }
+ 
+ static int em_text_dump(struct sk_buff *skb, struct tcf_ematch *m)
+diff --git a/net/socket.c b/net/socket.c
+index 36e38ee434ea1..2a48aa89c035b 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -675,6 +675,7 @@ int sock_sendmsg(struct socket *sock, struct msghdr *msg)
+ {
+ 	struct sockaddr_storage *save_addr = (struct sockaddr_storage *)msg->msg_name;
+ 	struct sockaddr_storage address;
++	int save_len = msg->msg_namelen;
+ 	int ret;
+ 
+ 	if (msg->msg_name) {
+@@ -684,6 +685,7 @@ int sock_sendmsg(struct socket *sock, struct msghdr *msg)
+ 
+ 	ret = __sock_sendmsg(sock, msg);
+ 	msg->msg_name = save_addr;
++	msg->msg_namelen = save_len;
+ 
+ 	return ret;
+ }
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 50eae668578a7..dd980438f201f 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1215,6 +1215,8 @@ alloc_payload:
+ 		}
+ 
+ 		sk_msg_page_add(msg_pl, page, copy, offset);
++		msg_pl->sg.copybreak = 0;
++		msg_pl->sg.curr = msg_pl->sg.end;
+ 		sk_mem_charge(sk, copy);
+ 
+ 		offset += copy;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 0743fcd747079..99ba89723cd31 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8990,6 +8990,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
++	SND_PCI_QUIRK(0x103c, 0x8537, "HP ProBook 440 G6", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+diff --git a/sound/soc/meson/g12a-toacodec.c b/sound/soc/meson/g12a-toacodec.c
+index 9339fabccb796..5ddeb22ac685a 100644
+--- a/sound/soc/meson/g12a-toacodec.c
++++ b/sound/soc/meson/g12a-toacodec.c
+@@ -46,6 +46,9 @@ static int g12a_toacodec_mux_put_enum(struct snd_kcontrol *kcontrol,
+ 	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ 	unsigned int mux, changed;
+ 
++	if (ucontrol->value.enumerated.item[0] >= e->items)
++		return -EINVAL;
++
+ 	mux = snd_soc_enum_item_to_val(e, ucontrol->value.enumerated.item[0]);
+ 	changed = snd_soc_component_test_bits(component, e->reg,
+ 					      CTRL0_DAT_SEL,
+@@ -82,7 +85,7 @@ static int g12a_toacodec_mux_put_enum(struct snd_kcontrol *kcontrol,
+ 
+ 	snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static SOC_ENUM_SINGLE_DECL(g12a_toacodec_mux_enum, TOACODEC_CTRL0,
+diff --git a/sound/soc/meson/g12a-tohdmitx.c b/sound/soc/meson/g12a-tohdmitx.c
+index 6c99052feafd8..4a9b67421c705 100644
+--- a/sound/soc/meson/g12a-tohdmitx.c
++++ b/sound/soc/meson/g12a-tohdmitx.c
+@@ -45,6 +45,9 @@ static int g12a_tohdmitx_i2s_mux_put_enum(struct snd_kcontrol *kcontrol,
+ 	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ 	unsigned int mux, changed;
+ 
++	if (ucontrol->value.enumerated.item[0] >= e->items)
++		return -EINVAL;
++
+ 	mux = snd_soc_enum_item_to_val(e, ucontrol->value.enumerated.item[0]);
+ 	changed = snd_soc_component_test_bits(component, e->reg,
+ 					      CTRL0_I2S_DAT_SEL,
+@@ -93,6 +96,9 @@ static int g12a_tohdmitx_spdif_mux_put_enum(struct snd_kcontrol *kcontrol,
+ 	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ 	unsigned int mux, changed;
+ 
++	if (ucontrol->value.enumerated.item[0] >= e->items)
++		return -EINVAL;
++
+ 	mux = snd_soc_enum_item_to_val(e, ucontrol->value.enumerated.item[0]);
+ 	changed = snd_soc_component_test_bits(component, TOHDMITX_CTRL0,
+ 					      CTRL0_SPDIF_SEL,
+@@ -112,7 +118,7 @@ static int g12a_tohdmitx_spdif_mux_put_enum(struct snd_kcontrol *kcontrol,
+ 
+ 	snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static SOC_ENUM_SINGLE_DECL(g12a_tohdmitx_spdif_mux_enum, TOHDMITX_CTRL0,


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-01-25 23:34 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-01-25 23:34 UTC (permalink / raw
  To: gentoo-commits

commit:     bbf6a329d076ead96c79d51029458317fbf21c2c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 25 23:34:25 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 25 23:34:25 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bbf6a329

Linux patch 5.10.209

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1208_linux-5.10.209.patch | 10120 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 10124 insertions(+)

diff --git a/0000_README b/0000_README
index 437d971c..8e064c6a 100644
--- a/0000_README
+++ b/0000_README
@@ -875,6 +875,10 @@ Patch:  1207_linux-5.10.208.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.208
 
+Patch:  1208_linux-5.10.209.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.209
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1208_linux-5.10.209.patch b/1208_linux-5.10.209.patch
new file mode 100644
index 00000000..abcd0533
--- /dev/null
+++ b/1208_linux-5.10.209.patch
@@ -0,0 +1,10120 @@
+diff --git a/Makefile b/Makefile
+index a4b42141ba1b2..613b25d330b0a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 208
++SUBLEVEL = 209
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arc/kernel/signal.c b/arch/arc/kernel/signal.c
+index 4868bdebf586d..c43d323bcb593 100644
+--- a/arch/arc/kernel/signal.c
++++ b/arch/arc/kernel/signal.c
+@@ -61,7 +61,7 @@ struct rt_sigframe {
+ 	unsigned int sigret_magic;
+ };
+ 
+-static int save_arcv2_regs(struct sigcontext *mctx, struct pt_regs *regs)
++static int save_arcv2_regs(struct sigcontext __user *mctx, struct pt_regs *regs)
+ {
+ 	int err = 0;
+ #ifndef CONFIG_ISA_ARCOMPACT
+@@ -74,12 +74,12 @@ static int save_arcv2_regs(struct sigcontext *mctx, struct pt_regs *regs)
+ #else
+ 	v2abi.r58 = v2abi.r59 = 0;
+ #endif
+-	err = __copy_to_user(&mctx->v2abi, &v2abi, sizeof(v2abi));
++	err = __copy_to_user(&mctx->v2abi, (void const *)&v2abi, sizeof(v2abi));
+ #endif
+ 	return err;
+ }
+ 
+-static int restore_arcv2_regs(struct sigcontext *mctx, struct pt_regs *regs)
++static int restore_arcv2_regs(struct sigcontext __user *mctx, struct pt_regs *regs)
+ {
+ 	int err = 0;
+ #ifndef CONFIG_ISA_ARCOMPACT
+diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
+index fb25ede1ce9f9..3f1002c34446c 100644
+--- a/arch/arm/boot/dts/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
+@@ -760,7 +760,7 @@
+ 
+ 				xoadc: xoadc@197 {
+ 					compatible = "qcom,pm8921-adc";
+-					reg = <197>;
++					reg = <0x197>;
+ 					interrupts-extended = <&pmicintc 78 IRQ_TYPE_EDGE_RISING>;
+ 					#address-cells = <2>;
+ 					#size-cells = <0>;
+diff --git a/arch/arm/mach-davinci/Kconfig b/arch/arm/mach-davinci/Kconfig
+index de11030748d0b..b30221f7dfa48 100644
+--- a/arch/arm/mach-davinci/Kconfig
++++ b/arch/arm/mach-davinci/Kconfig
+@@ -3,6 +3,7 @@
+ menuconfig ARCH_DAVINCI
+ 	bool "TI DaVinci"
+ 	depends on ARCH_MULTI_V5
++	select CPU_ARM926T
+ 	select DAVINCI_TIMER
+ 	select ZONE_DMA
+ 	select PM_GENERIC_DOMAINS if PM
+diff --git a/arch/arm/mach-sunxi/mc_smp.c b/arch/arm/mach-sunxi/mc_smp.c
+index b2f5f4f28705f..f779e386b6e7d 100644
+--- a/arch/arm/mach-sunxi/mc_smp.c
++++ b/arch/arm/mach-sunxi/mc_smp.c
+@@ -804,12 +804,12 @@ static int __init sunxi_mc_smp_init(void)
+ 	for (i = 0; i < ARRAY_SIZE(sunxi_mc_smp_data); i++) {
+ 		ret = of_property_match_string(node, "enable-method",
+ 					       sunxi_mc_smp_data[i].enable_method);
+-		if (!ret)
++		if (ret >= 0)
+ 			break;
+ 	}
+ 
+ 	of_node_put(node);
+-	if (ret)
++	if (ret < 0)
+ 		return -ENODEV;
+ 
+ 	is_a83t = sunxi_mc_smp_data[i].is_a83t;
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index eea8d23683dc1..39e9b38757616 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -129,7 +129,7 @@
+ 		compatible = "microchip,mcp7940x";
+ 		reg = <0x6f>;
+ 		interrupt-parent = <&gpiosb>;
+-		interrupts = <5 0>; /* GPIO2_5 */
++		interrupts = <5 IRQ_TYPE_EDGE_FALLING>; /* GPIO2_5 */
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts b/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
+index 949fee6949e61..5516d2067c823 100644
+--- a/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
++++ b/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
+@@ -38,8 +38,8 @@
+ 		user4 {
+ 			label = "green:user4";
+ 			gpios = <&pm8150_gpios 10 GPIO_ACTIVE_HIGH>;
+-			linux,default-trigger = "panic-indicator";
+ 			default-state = "off";
++			panic-indicator;
+ 		};
+ 
+ 		wlan {
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+index 1e889ca932e41..31f4f05750940 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+@@ -55,8 +55,8 @@
+ 		user4 {
+ 			label = "green:user4";
+ 			gpios = <&pm8998_gpio 13 GPIO_ACTIVE_HIGH>;
+-			linux,default-trigger = "panic-indicator";
+ 			default-state = "off";
++			panic-indicator;
+ 		};
+ 
+ 		wlan {
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index 4265f627ca167..a3538279d7106 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -865,7 +865,7 @@
+ 		assigned-clocks = <&k3_clks 67 2>;
+ 		assigned-clock-parents = <&k3_clks 67 5>;
+ 
+-		interrupts = <GIC_SPI 166 IRQ_TYPE_EDGE_RISING>;
++		interrupts = <GIC_SPI 166 IRQ_TYPE_LEVEL_HIGH>;
+ 
+ 		status = "disabled";
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
+index 23710bf5a86b6..62f261b8eb62f 100644
+--- a/arch/arm64/kvm/vgic/vgic-its.c
++++ b/arch/arm64/kvm/vgic/vgic-its.c
+@@ -584,7 +584,11 @@ static struct vgic_irq *vgic_its_check_cache(struct kvm *kvm, phys_addr_t db,
+ 	unsigned long flags;
+ 
+ 	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
++
+ 	irq = __vgic_its_check_cache(dist, db, devid, eventid);
++	if (irq)
++		vgic_get_irq_kref(irq);
++
+ 	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);
+ 
+ 	return irq;
+@@ -763,6 +767,7 @@ int vgic_its_inject_cached_translation(struct kvm *kvm, struct kvm_msi *msi)
+ 	raw_spin_lock_irqsave(&irq->irq_lock, flags);
+ 	irq->pending_latch = true;
+ 	vgic_queue_irq_unlock(kvm, irq, flags);
++	vgic_put_irq(kvm, irq);
+ 
+ 	return 0;
+ }
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+index 15a6c98ee92f0..7a6360eba3214 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+@@ -356,19 +356,26 @@ static int vgic_v3_uaccess_write_pending(struct kvm_vcpu *vcpu,
+ 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
+ 
+ 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+-		if (test_bit(i, &val)) {
+-			/*
+-			 * pending_latch is set irrespective of irq type
+-			 * (level or edge) to avoid dependency that VM should
+-			 * restore irq config before pending info.
+-			 */
+-			irq->pending_latch = true;
+-			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+-		} else {
++
++		/*
++		 * pending_latch is set irrespective of irq type
++		 * (level or edge) to avoid dependency that VM should
++		 * restore irq config before pending info.
++		 */
++		irq->pending_latch = test_bit(i, &val);
++
++		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
++			irq_set_irqchip_state(irq->host_irq,
++					      IRQCHIP_STATE_PENDING,
++					      irq->pending_latch);
+ 			irq->pending_latch = false;
+-			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+ 		}
+ 
++		if (irq->pending_latch)
++			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
++		else
++			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
++
+ 		vgic_put_irq(vcpu->kvm, irq);
+ 	}
+ 
+diff --git a/arch/mips/alchemy/devboards/db1200.c b/arch/mips/alchemy/devboards/db1200.c
+index 414f92eacb5e5..9ad26215b0042 100644
+--- a/arch/mips/alchemy/devboards/db1200.c
++++ b/arch/mips/alchemy/devboards/db1200.c
+@@ -847,7 +847,7 @@ int __init db1200_dev_setup(void)
+ 	i2c_register_board_info(0, db1200_i2c_devs,
+ 				ARRAY_SIZE(db1200_i2c_devs));
+ 	spi_register_board_info(db1200_spi_devs,
+-				ARRAY_SIZE(db1200_i2c_devs));
++				ARRAY_SIZE(db1200_spi_devs));
+ 
+ 	/* SWITCHES:	S6.8 I2C/SPI selector  (OFF=I2C	 ON=SPI)
+ 	 *		S6.7 AC97/I2S selector (OFF=AC97 ON=I2S)
+diff --git a/arch/mips/alchemy/devboards/db1550.c b/arch/mips/alchemy/devboards/db1550.c
+index 752b93d91ac9a..06811a5db71d7 100644
+--- a/arch/mips/alchemy/devboards/db1550.c
++++ b/arch/mips/alchemy/devboards/db1550.c
+@@ -588,7 +588,7 @@ int __init db1550_dev_setup(void)
+ 	i2c_register_board_info(0, db1550_i2c_devs,
+ 				ARRAY_SIZE(db1550_i2c_devs));
+ 	spi_register_board_info(db1550_spi_devs,
+-				ARRAY_SIZE(db1550_i2c_devs));
++				ARRAY_SIZE(db1550_spi_devs));
+ 
+ 	c = clk_get(NULL, "psc0_intclk");
+ 	if (!IS_ERR(c)) {
+diff --git a/arch/mips/include/asm/dmi.h b/arch/mips/include/asm/dmi.h
+index 27415a288adf5..dc397f630c660 100644
+--- a/arch/mips/include/asm/dmi.h
++++ b/arch/mips/include/asm/dmi.h
+@@ -5,7 +5,7 @@
+ #include <linux/io.h>
+ #include <linux/memblock.h>
+ 
+-#define dmi_early_remap(x, l)		ioremap_cache(x, l)
++#define dmi_early_remap(x, l)		ioremap(x, l)
+ #define dmi_early_unmap(x, l)		iounmap(x)
+ #define dmi_remap(x, l)			ioremap_cache(x, l)
+ #define dmi_unmap(x)			iounmap(x)
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index b7eb7dd96e179..66643cb659630 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -322,11 +322,11 @@ static void __init bootmem_init(void)
+ 		panic("Incorrect memory mapping !!!");
+ 
+ 	if (max_pfn > PFN_DOWN(HIGHMEM_START)) {
++		max_low_pfn = PFN_DOWN(HIGHMEM_START);
+ #ifdef CONFIG_HIGHMEM
+-		highstart_pfn = PFN_DOWN(HIGHMEM_START);
++		highstart_pfn = max_low_pfn;
+ 		highend_pfn = max_pfn;
+ #else
+-		max_low_pfn = PFN_DOWN(HIGHMEM_START);
+ 		max_pfn = max_low_pfn;
+ #endif
+ 	}
+diff --git a/arch/powerpc/include/asm/sections.h b/arch/powerpc/include/asm/sections.h
+index 324d7b298ec34..6e4af4492a144 100644
+--- a/arch/powerpc/include/asm/sections.h
++++ b/arch/powerpc/include/asm/sections.h
+@@ -38,14 +38,6 @@ extern char start_virt_trampolines[];
+ extern char end_virt_trampolines[];
+ #endif
+ 
+-static inline int in_kernel_text(unsigned long addr)
+-{
+-	if (addr >= (unsigned long)_stext && addr < (unsigned long)__init_end)
+-		return 1;
+-
+-	return 0;
+-}
+-
+ static inline unsigned long kernel_toc_addr(void)
+ {
+ 	/* Defined by the linker, see vmlinux.lds.S */
+diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
+index 0697a0e014ae8..321cab5c3ea02 100644
+--- a/arch/powerpc/lib/Makefile
++++ b/arch/powerpc/lib/Makefile
+@@ -38,7 +38,7 @@ obj-$(CONFIG_FUNCTION_ERROR_INJECTION)	+= error-inject.o
+ # so it is only needed for modules, and only for older linkers which
+ # do not support --save-restore-funcs
+ ifeq ($(call ld-ifversion, -lt, 225000000, y),y)
+-extra-$(CONFIG_PPC64)	+= crtsavres.o
++always-$(CONFIG_PPC64)	+= crtsavres.o
+ endif
+ 
+ obj-$(CONFIG_PPC_BOOK3S_64) += copyuser_power7.o copypage_power7.o \
+diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
+index 3e15d0d054b2d..5464c87511fa1 100644
+--- a/arch/powerpc/perf/imc-pmu.c
++++ b/arch/powerpc/perf/imc-pmu.c
+@@ -292,6 +292,8 @@ static int update_events_in_group(struct device_node *node, struct imc_pmu *pmu)
+ 	attr_group->attrs = attrs;
+ 	do {
+ 		ev_val_str = kasprintf(GFP_KERNEL, "event=0x%x", pmu->events[i].value);
++		if (!ev_val_str)
++			continue;
+ 		dev_str = device_str_attr_create(pmu->events[i].name, ev_val_str);
+ 		if (!dev_str)
+ 			continue;
+@@ -299,6 +301,8 @@ static int update_events_in_group(struct device_node *node, struct imc_pmu *pmu)
+ 		attrs[j++] = dev_str;
+ 		if (pmu->events[i].scale) {
+ 			ev_scale_str = kasprintf(GFP_KERNEL, "%s.scale", pmu->events[i].name);
++			if (!ev_scale_str)
++				continue;
+ 			dev_str = device_str_attr_create(ev_scale_str, pmu->events[i].scale);
+ 			if (!dev_str)
+ 				continue;
+@@ -308,6 +312,8 @@ static int update_events_in_group(struct device_node *node, struct imc_pmu *pmu)
+ 
+ 		if (pmu->events[i].unit) {
+ 			ev_unit_str = kasprintf(GFP_KERNEL, "%s.unit", pmu->events[i].name);
++			if (!ev_unit_str)
++				continue;
+ 			dev_str = device_str_attr_create(ev_unit_str, pmu->events[i].unit);
+ 			if (!dev_str)
+ 				continue;
+diff --git a/arch/powerpc/platforms/44x/Kconfig b/arch/powerpc/platforms/44x/Kconfig
+index 78ac6d67a9356..9bc852c7e92f4 100644
+--- a/arch/powerpc/platforms/44x/Kconfig
++++ b/arch/powerpc/platforms/44x/Kconfig
+@@ -177,6 +177,7 @@ config ISS4xx
+ config CURRITUCK
+ 	bool "IBM Currituck (476fpe) Support"
+ 	depends on PPC_47x
++	select I2C
+ 	select SWIOTLB
+ 	select 476FPE
+ 	select FORCE_PCI
+diff --git a/arch/powerpc/platforms/powernv/opal-irqchip.c b/arch/powerpc/platforms/powernv/opal-irqchip.c
+index c164419e254df..dcec0f760c8f8 100644
+--- a/arch/powerpc/platforms/powernv/opal-irqchip.c
++++ b/arch/powerpc/platforms/powernv/opal-irqchip.c
+@@ -278,6 +278,8 @@ int __init opal_event_init(void)
+ 		else
+ 			name = kasprintf(GFP_KERNEL, "opal");
+ 
++		if (!name)
++			continue;
+ 		/* Install interrupt handler */
+ 		rc = request_irq(r->start, opal_interrupt, r->flags & IRQD_TRIGGER_MASK,
+ 				 name, NULL);
+diff --git a/arch/powerpc/platforms/powernv/opal-powercap.c b/arch/powerpc/platforms/powernv/opal-powercap.c
+index c16d44f6f1d12..ce9ec3962cef0 100644
+--- a/arch/powerpc/platforms/powernv/opal-powercap.c
++++ b/arch/powerpc/platforms/powernv/opal-powercap.c
+@@ -196,6 +196,12 @@ void __init opal_powercap_init(void)
+ 
+ 		j = 0;
+ 		pcaps[i].pg.name = kasprintf(GFP_KERNEL, "%pOFn", node);
++		if (!pcaps[i].pg.name) {
++			kfree(pcaps[i].pattrs);
++			kfree(pcaps[i].pg.attrs);
++			goto out_pcaps_pattrs;
++		}
++
+ 		if (has_min) {
+ 			powercap_add_attr(min, "powercap-min",
+ 					  &pcaps[i].pattrs[j]);
+diff --git a/arch/powerpc/platforms/powernv/opal-xscom.c b/arch/powerpc/platforms/powernv/opal-xscom.c
+index fd510d961b8c7..d5814c5046ba6 100644
+--- a/arch/powerpc/platforms/powernv/opal-xscom.c
++++ b/arch/powerpc/platforms/powernv/opal-xscom.c
+@@ -165,6 +165,11 @@ static int scom_debug_init_one(struct dentry *root, struct device_node *dn,
+ 	ent->chip = chip;
+ 	snprintf(ent->name, 16, "%08x", chip);
+ 	ent->path.data = (void *)kasprintf(GFP_KERNEL, "%pOF", dn);
++	if (!ent->path.data) {
++		kfree(ent);
++		return -ENOMEM;
++	}
++
+ 	ent->path.size = strlen((char *)ent->path.data);
+ 
+ 	dir = debugfs_create_dir(ent->name, root);
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index a5f968b5fa3a8..96dd82d63e987 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -481,7 +481,7 @@ static int dlpar_memory_remove_by_index(u32 drc_index)
+ 	int lmb_found;
+ 	int rc;
+ 
+-	pr_info("Attempting to hot-remove LMB, drc index %x\n", drc_index);
++	pr_debug("Attempting to hot-remove LMB, drc index %x\n", drc_index);
+ 
+ 	lmb_found = 0;
+ 	for_each_drmem_lmb(lmb) {
+@@ -495,14 +495,15 @@ static int dlpar_memory_remove_by_index(u32 drc_index)
+ 		}
+ 	}
+ 
+-	if (!lmb_found)
++	if (!lmb_found) {
++		pr_debug("Failed to look up LMB for drc index %x\n", drc_index);
+ 		rc = -EINVAL;
+-
+-	if (rc)
+-		pr_info("Failed to hot-remove memory at %llx\n",
+-			lmb->base_addr);
+-	else
+-		pr_info("Memory at %llx was hot-removed\n", lmb->base_addr);
++	} else if (rc) {
++		pr_debug("Failed to hot-remove memory at %llx\n",
++			 lmb->base_addr);
++	} else {
++		pr_debug("Memory at %llx was hot-removed\n", lmb->base_addr);
++	}
+ 
+ 	return rc;
+ }
+@@ -719,8 +720,8 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
+ 			if (!drmem_lmb_reserved(lmb))
+ 				continue;
+ 
+-			pr_info("Memory at %llx (drc index %x) was hot-added\n",
+-				lmb->base_addr, lmb->drc_index);
++			pr_debug("Memory at %llx (drc index %x) was hot-added\n",
++				 lmb->base_addr, lmb->drc_index);
+ 			drmem_remove_lmb_reservation(lmb);
+ 		}
+ 		rc = 0;
+diff --git a/arch/s390/include/asm/pci_io.h b/arch/s390/include/asm/pci_io.h
+index 287bb88f76986..2686bee800e3d 100644
+--- a/arch/s390/include/asm/pci_io.h
++++ b/arch/s390/include/asm/pci_io.h
+@@ -11,6 +11,8 @@
+ /* I/O size constraints */
+ #define ZPCI_MAX_READ_SIZE	8
+ #define ZPCI_MAX_WRITE_SIZE	128
++#define ZPCI_BOUNDARY_SIZE	(1 << 12)
++#define ZPCI_BOUNDARY_MASK	(ZPCI_BOUNDARY_SIZE - 1)
+ 
+ /* I/O Map */
+ #define ZPCI_IOMAP_SHIFT		48
+@@ -125,16 +127,18 @@ out:
+ int zpci_write_block(volatile void __iomem *dst, const void *src,
+ 		     unsigned long len);
+ 
+-static inline u8 zpci_get_max_write_size(u64 src, u64 dst, int len, int max)
++static inline int zpci_get_max_io_size(u64 src, u64 dst, int len, int max)
+ {
+-	int count = len > max ? max : len, size = 1;
++	int offset = dst & ZPCI_BOUNDARY_MASK;
++	int size;
+ 
+-	while (!(src & 0x1) && !(dst & 0x1) && ((size << 1) <= count)) {
+-		dst = dst >> 1;
+-		src = src >> 1;
+-		size = size << 1;
+-	}
+-	return size;
++	size = min3(len, ZPCI_BOUNDARY_SIZE - offset, max);
++	if (IS_ALIGNED(src, 8) && IS_ALIGNED(dst, 8) && IS_ALIGNED(size, 8))
++		return size;
++
++	if (size >= 8)
++		return 8;
++	return rounddown_pow_of_two(size);
+ }
+ 
+ static inline int zpci_memcpy_fromio(void *dst,
+@@ -144,9 +148,9 @@ static inline int zpci_memcpy_fromio(void *dst,
+ 	int size, rc = 0;
+ 
+ 	while (n > 0) {
+-		size = zpci_get_max_write_size((u64 __force) src,
+-					       (u64) dst, n,
+-					       ZPCI_MAX_READ_SIZE);
++		size = zpci_get_max_io_size((u64 __force) src,
++					    (u64) dst, n,
++					    ZPCI_MAX_READ_SIZE);
+ 		rc = zpci_read_single(dst, src, size);
+ 		if (rc)
+ 			break;
+@@ -166,9 +170,9 @@ static inline int zpci_memcpy_toio(volatile void __iomem *dst,
+ 		return -EINVAL;
+ 
+ 	while (n > 0) {
+-		size = zpci_get_max_write_size((u64 __force) dst,
+-					       (u64) src, n,
+-					       ZPCI_MAX_WRITE_SIZE);
++		size = zpci_get_max_io_size((u64 __force) dst,
++					    (u64) src, n,
++					    ZPCI_MAX_WRITE_SIZE);
+ 		if (size > 8) /* main path */
+ 			rc = zpci_write_block(dst, src, size);
+ 		else
+diff --git a/arch/s390/pci/pci_mmio.c b/arch/s390/pci/pci_mmio.c
+index 1ec8076209cab..6e7c4762bd231 100644
+--- a/arch/s390/pci/pci_mmio.c
++++ b/arch/s390/pci/pci_mmio.c
+@@ -100,9 +100,9 @@ static inline int __memcpy_toio_inuser(void __iomem *dst,
+ 
+ 	old_fs = enable_sacf_uaccess();
+ 	while (n > 0) {
+-		size = zpci_get_max_write_size((u64 __force) dst,
+-					       (u64 __force) src, n,
+-					       ZPCI_MAX_WRITE_SIZE);
++		size = zpci_get_max_io_size((u64 __force) dst,
++					    (u64 __force) src, n,
++					    ZPCI_MAX_WRITE_SIZE);
+ 		if (size > 8) /* main path */
+ 			rc = __pcistb_mio_inuser(dst, src, size, &status);
+ 		else
+@@ -252,9 +252,9 @@ static inline int __memcpy_fromio_inuser(void __user *dst,
+ 
+ 	old_fs = enable_sacf_uaccess();
+ 	while (n > 0) {
+-		size = zpci_get_max_write_size((u64 __force) src,
+-					       (u64 __force) dst, n,
+-					       ZPCI_MAX_READ_SIZE);
++		size = zpci_get_max_io_size((u64 __force) src,
++					    (u64 __force) dst, n,
++					    ZPCI_MAX_READ_SIZE);
+ 		rc = __pcilg_mio_inuser(dst, src, size, &status);
+ 		if (rc)
+ 			break;
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index bb657e2e6c687..b79b4ee49fbd0 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -24,8 +24,8 @@
+ 
+ static int kvmclock __initdata = 1;
+ static int kvmclock_vsyscall __initdata = 1;
+-static int msr_kvm_system_time __ro_after_init = MSR_KVM_SYSTEM_TIME;
+-static int msr_kvm_wall_clock __ro_after_init = MSR_KVM_WALL_CLOCK;
++static int msr_kvm_system_time __ro_after_init;
++static int msr_kvm_wall_clock __ro_after_init;
+ static u64 kvm_sched_clock_offset __ro_after_init;
+ 
+ static int __init parse_no_kvmclock(char *arg)
+@@ -196,7 +196,8 @@ static void kvm_setup_secondary_clock(void)
+ 
+ void kvmclock_disable(void)
+ {
+-	native_write_msr(msr_kvm_system_time, 0, 0);
++	if (msr_kvm_system_time)
++		native_write_msr(msr_kvm_system_time, 0, 0);
+ }
+ 
+ static void __init kvmclock_init_mem(void)
+@@ -292,7 +293,10 @@ void __init kvmclock_init(void)
+ 	if (kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE2)) {
+ 		msr_kvm_system_time = MSR_KVM_SYSTEM_TIME_NEW;
+ 		msr_kvm_wall_clock = MSR_KVM_WALL_CLOCK_NEW;
+-	} else if (!kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE)) {
++	} else if (kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE)) {
++		msr_kvm_system_time = MSR_KVM_SYSTEM_TIME;
++		msr_kvm_wall_clock = MSR_KVM_WALL_CLOCK;
++	} else {
+ 		return;
+ 	}
+ 
+diff --git a/arch/x86/lib/misc.c b/arch/x86/lib/misc.c
+index a018ec4fba53e..c97be9a1430a0 100644
+--- a/arch/x86/lib/misc.c
++++ b/arch/x86/lib/misc.c
+@@ -6,7 +6,7 @@
+  */
+ int num_digits(int val)
+ {
+-	int m = 10;
++	long long m = 10;
+ 	int d = 1;
+ 
+ 	if (val < 0) {
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index 9acb9d2c4bcf9..755e6caf18d28 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -1039,9 +1039,13 @@ EXPORT_SYMBOL_GPL(af_alg_sendpage);
+ void af_alg_free_resources(struct af_alg_async_req *areq)
+ {
+ 	struct sock *sk = areq->sk;
++	struct af_alg_ctx *ctx;
+ 
+ 	af_alg_free_areq_sgls(areq);
+ 	sock_kfree_s(sk, areq, areq->areqlen);
++
++	ctx = alg_sk(sk)->private;
++	ctx->inflight = false;
+ }
+ EXPORT_SYMBOL_GPL(af_alg_free_resources);
+ 
+@@ -1105,11 +1109,19 @@ EXPORT_SYMBOL_GPL(af_alg_poll);
+ struct af_alg_async_req *af_alg_alloc_areq(struct sock *sk,
+ 					   unsigned int areqlen)
+ {
+-	struct af_alg_async_req *areq = sock_kmalloc(sk, areqlen, GFP_KERNEL);
++	struct af_alg_ctx *ctx = alg_sk(sk)->private;
++	struct af_alg_async_req *areq;
++
++	/* Only one AIO request can be in flight. */
++	if (ctx->inflight)
++		return ERR_PTR(-EBUSY);
+ 
++	areq = sock_kmalloc(sk, areqlen, GFP_KERNEL);
+ 	if (unlikely(!areq))
+ 		return ERR_PTR(-ENOMEM);
+ 
++	ctx->inflight = true;
++
+ 	areq->areqlen = areqlen;
+ 	areq->sk = sk;
+ 	areq->last_rsgl = NULL;
+diff --git a/crypto/scompress.c b/crypto/scompress.c
+index 738f4f8f0f41a..4d6366a444007 100644
+--- a/crypto/scompress.c
++++ b/crypto/scompress.c
+@@ -124,6 +124,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
+ 	struct crypto_scomp *scomp = *tfm_ctx;
+ 	void **ctx = acomp_request_ctx(req);
+ 	struct scomp_scratch *scratch;
++	unsigned int dlen;
+ 	int ret;
+ 
+ 	if (!req->src || !req->slen || req->slen > SCOMP_SCRATCH_SIZE)
+@@ -135,6 +136,8 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
+ 	if (!req->dlen || req->dlen > SCOMP_SCRATCH_SIZE)
+ 		req->dlen = SCOMP_SCRATCH_SIZE;
+ 
++	dlen = req->dlen;
++
+ 	scratch = raw_cpu_ptr(&scomp_scratch);
+ 	spin_lock(&scratch->lock);
+ 
+@@ -152,6 +155,9 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
+ 				ret = -ENOMEM;
+ 				goto out;
+ 			}
++		} else if (req->dlen > dlen) {
++			ret = -ENOSPC;
++			goto out;
+ 		}
+ 		scatterwalk_map_and_copy(scratch->dst, req->dst, 0, req->dlen,
+ 					 1);
+diff --git a/drivers/acpi/acpi_extlog.c b/drivers/acpi/acpi_extlog.c
+index e648158368a7d..088db2356998f 100644
+--- a/drivers/acpi/acpi_extlog.c
++++ b/drivers/acpi/acpi_extlog.c
+@@ -145,9 +145,14 @@ static int extlog_print(struct notifier_block *nb, unsigned long val,
+ 	static u32 err_seq;
+ 
+ 	estatus = extlog_elog_entry_check(cpu, bank);
+-	if (estatus == NULL || (mce->kflags & MCE_HANDLED_CEC))
++	if (!estatus)
+ 		return NOTIFY_DONE;
+ 
++	if (mce->kflags & MCE_HANDLED_CEC) {
++		estatus->block_status = 0;
++		return NOTIFY_DONE;
++	}
++
+ 	memcpy(elog_buf, (void *)estatus, ELOG_ENTRY_LEN);
+ 	/* clear record status to enable BIOS to update it again */
+ 	estatus->block_status = 0;
+diff --git a/drivers/acpi/acpi_lpit.c b/drivers/acpi/acpi_lpit.c
+index 48e5059d67cab..7de59730030c2 100644
+--- a/drivers/acpi/acpi_lpit.c
++++ b/drivers/acpi/acpi_lpit.c
+@@ -98,7 +98,7 @@ static void lpit_update_residency(struct lpit_residency_info *info,
+ 				 struct acpi_lpit_native *lpit_native)
+ {
+ 	info->frequency = lpit_native->counter_frequency ?
+-				lpit_native->counter_frequency : tsc_khz * 1000;
++				lpit_native->counter_frequency : mul_u32_u32(tsc_khz, 1000U);
+ 	if (!info->frequency)
+ 		info->frequency = 1;
+ 
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index cf6c9ffe04a2d..9d384656323a9 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -1788,12 +1788,12 @@ static void acpi_video_dev_register_backlight(struct acpi_video_device *device)
+ 		return;
+ 	count++;
+ 
+-	acpi_get_parent(device->dev->handle, &acpi_parent);
+-
+-	pdev = acpi_get_pci_dev(acpi_parent);
+-	if (pdev) {
+-		parent = &pdev->dev;
+-		pci_dev_put(pdev);
++	if (ACPI_SUCCESS(acpi_get_parent(device->dev->handle, &acpi_parent))) {
++		pdev = acpi_get_pci_dev(acpi_parent);
++		if (pdev) {
++			parent = &pdev->dev;
++			pci_dev_put(pdev);
++		}
+ 	}
+ 
+ 	memset(&props, 0, sizeof(struct backlight_properties));
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index 80e92c298055d..cf872dc5b07a6 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -639,6 +639,7 @@ acpi_fwnode_get_named_child_node(const struct fwnode_handle *fwnode,
+  * @index: Index of the reference to return
+  * @num_args: Maximum number of arguments after each reference
+  * @args: Location to store the returned reference with optional arguments
++ *	  (may be NULL)
+  *
+  * Find property with @name, verifify that it is a package containing at least
+  * one object reference and if so, store the ACPI device object pointer to the
+@@ -697,6 +698,9 @@ int __acpi_node_get_property_reference(const struct fwnode_handle *fwnode,
+ 		if (ret)
+ 			return ret == -ENODEV ? -EINVAL : ret;
+ 
++		if (!args)
++			return 0;
++
+ 		args->fwnode = acpi_fwnode_handle(device);
+ 		args->nargs = 0;
+ 		return 0;
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 508d22728ce8c..42b1b06efda6f 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -455,6 +455,13 @@ static const struct dmi_system_id asus_laptop[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "B1402CVA"),
+ 		},
+ 	},
++	{
++		/* TongFang GMxXGxx sold as Eluktronics Inc. RP-15 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Eluktronics Inc."),
++			DMI_MATCH(DMI_BOARD_NAME, "RP-15"),
++		},
++	},
+ 	{
+ 		/* TongFang GM6XGxX/TUXEDO Stellaris 16 Gen5 AMD */
+ 		.matches = {
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 2aaccb78235b5..7db748cfcbc67 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -5173,7 +5173,7 @@ static __poll_t binder_poll(struct file *filp,
+ 
+ 	thread = binder_get_thread(proc);
+ 	if (!thread)
+-		return POLLERR;
++		return EPOLLERR;
+ 
+ 	binder_inner_proc_lock(thread->proc);
+ 	thread->looper |= BINDER_LOOPER_STATE_POLL;
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index b6bf9caaf1d1e..b655bc3956c79 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -271,7 +271,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
+ 	}
+ 	if (mm) {
+ 		mmap_write_unlock(mm);
+-		mmput(mm);
++		mmput_async(mm);
+ 	}
+ 	return 0;
+ 
+@@ -304,7 +304,7 @@ err_page_ptr_cleared:
+ err_no_vma:
+ 	if (mm) {
+ 		mmap_write_unlock(mm);
+-		mmput(mm);
++		mmput_async(mm);
+ 	}
+ 	return vma ? -ENOMEM : -ESRCH;
+ }
+@@ -359,8 +359,7 @@ static void debug_low_async_space_locked(struct binder_alloc *alloc, int pid)
+ 			continue;
+ 		if (!buffer->async_transaction)
+ 			continue;
+-		total_alloc_size += binder_alloc_buffer_size(alloc, buffer)
+-			+ sizeof(struct binder_buffer);
++		total_alloc_size += binder_alloc_buffer_size(alloc, buffer);
+ 		num_buffers++;
+ 	}
+ 
+@@ -415,17 +414,17 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
+ 				alloc->pid, extra_buffers_size);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+-	if (is_async &&
+-	    alloc->free_async_space < size + sizeof(struct binder_buffer)) {
++
++	/* Pad 0-size buffers so they get assigned unique addresses */
++	size = max(size, sizeof(void *));
++
++	if (is_async && alloc->free_async_space < size) {
+ 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+ 			     "%d: binder_alloc_buf size %zd failed, no async space left\n",
+ 			      alloc->pid, size);
+ 		return ERR_PTR(-ENOSPC);
+ 	}
+ 
+-	/* Pad 0-size buffers so they get assigned unique addresses */
+-	size = max(size, sizeof(void *));
+-
+ 	while (n) {
+ 		buffer = rb_entry(n, struct binder_buffer, rb_node);
+ 		BUG_ON(!buffer->free);
+@@ -526,7 +525,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
+ 	buffer->extra_buffers_size = extra_buffers_size;
+ 	buffer->pid = pid;
+ 	if (is_async) {
+-		alloc->free_async_space -= size + sizeof(struct binder_buffer);
++		alloc->free_async_space -= size;
+ 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
+ 			     "%d: binder_alloc_buf size %zd async free %zd\n",
+ 			      alloc->pid, size, alloc->free_async_space);
+@@ -562,7 +561,7 @@ err_alloc_buf_struct_failed:
+  * is the sum of the three given sizes (each rounded up to
+  * pointer-sized boundary)
+  *
+- * Return:	The allocated buffer or %NULL if error
++ * Return:	The allocated buffer or %ERR_PTR(-errno) if error
+  */
+ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
+ 					   size_t data_size,
+@@ -662,8 +661,7 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
+ 	BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size);
+ 
+ 	if (buffer->async_transaction) {
+-		alloc->free_async_space += buffer_size + sizeof(struct binder_buffer);
+-
++		alloc->free_async_space += buffer_size;
+ 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
+ 			     "%d: binder_free_buf size %zd async free %zd\n",
+ 			      alloc->pid, size, alloc->free_async_space);
+@@ -711,7 +709,7 @@ void binder_alloc_free_buf(struct binder_alloc *alloc,
+ 	/*
+ 	 * We could eliminate the call to binder_alloc_clear_buf()
+ 	 * from binder_alloc_deferred_release() by moving this to
+-	 * binder_alloc_free_buf_locked(). However, that could
++	 * binder_free_buf_locked(). However, that could
+ 	 * increase contention for the alloc mutex if clear_on_free
+ 	 * is used frequently for large buffers. The mutex is not
+ 	 * needed for correctness here.
+@@ -1002,7 +1000,9 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
+ 		goto err_mmget;
+ 	if (!mmap_read_trylock(mm))
+ 		goto err_mmap_read_lock_failed;
+-	vma = binder_alloc_get_vma(alloc);
++	vma = find_vma(mm, page_addr);
++	if (vma && vma != binder_alloc_get_vma(alloc))
++		goto err_invalid_vma;
+ 
+ 	list_lru_isolate(lru, item);
+ 	spin_unlock(lock);
+@@ -1028,6 +1028,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
+ 	mutex_unlock(&alloc->mutex);
+ 	return LRU_REMOVED_RETRY;
+ 
++err_invalid_vma:
++	mmap_read_unlock(mm);
+ err_mmap_read_lock_failed:
+ 	mmput_async(mm);
+ err_mmget:
+diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
+index d2fb3eb5816c3..b664c36388e24 100644
+--- a/drivers/base/swnode.c
++++ b/drivers/base/swnode.c
+@@ -531,6 +531,9 @@ software_node_get_reference_args(const struct fwnode_handle *fwnode,
+ 	if (nargs > NR_FWNODE_REFERENCE_ARGS)
+ 		return -EINVAL;
+ 
++	if (!args)
++		return 0;
++
+ 	args->fwnode = software_node_get(refnode);
+ 	args->nargs = nargs;
+ 
+diff --git a/drivers/bluetooth/btmtkuart.c b/drivers/bluetooth/btmtkuart.c
+index 719d4685a2ddd..1a908827b1f2b 100644
+--- a/drivers/bluetooth/btmtkuart.c
++++ b/drivers/bluetooth/btmtkuart.c
+@@ -471,7 +471,7 @@ mtk_stp_split(struct btmtkuart_dev *bdev, const unsigned char *data, int count,
+ 	return data;
+ }
+ 
+-static int btmtkuart_recv(struct hci_dev *hdev, const u8 *data, size_t count)
++static void btmtkuart_recv(struct hci_dev *hdev, const u8 *data, size_t count)
+ {
+ 	struct btmtkuart_dev *bdev = hci_get_drvdata(hdev);
+ 	const unsigned char *p_left = data, *p_h4;
+@@ -510,25 +510,20 @@ static int btmtkuart_recv(struct hci_dev *hdev, const u8 *data, size_t count)
+ 			bt_dev_err(bdev->hdev,
+ 				   "Frame reassembly failed (%d)", err);
+ 			bdev->rx_skb = NULL;
+-			return err;
++			return;
+ 		}
+ 
+ 		sz_left -= sz_h4;
+ 		p_left += sz_h4;
+ 	}
+-
+-	return 0;
+ }
+ 
+ static int btmtkuart_receive_buf(struct serdev_device *serdev, const u8 *data,
+ 				 size_t count)
+ {
+ 	struct btmtkuart_dev *bdev = serdev_device_get_drvdata(serdev);
+-	int err;
+ 
+-	err = btmtkuart_recv(bdev->hdev, data, count);
+-	if (err < 0)
+-		return err;
++	btmtkuart_recv(bdev->hdev, data, count);
+ 
+ 	bdev->hdev->stat.byte_rx += count;
+ 
+diff --git a/drivers/clk/clk-fixed-rate.c b/drivers/clk/clk-fixed-rate.c
+index 45501637705c3..62e994d18fe24 100644
+--- a/drivers/clk/clk-fixed-rate.c
++++ b/drivers/clk/clk-fixed-rate.c
+@@ -49,12 +49,24 @@ const struct clk_ops clk_fixed_rate_ops = {
+ };
+ EXPORT_SYMBOL_GPL(clk_fixed_rate_ops);
+ 
++static void devm_clk_hw_register_fixed_rate_release(struct device *dev, void *res)
++{
++	struct clk_fixed_rate *fix = res;
++
++	/*
++	 * We can not use clk_hw_unregister_fixed_rate, since it will kfree()
++	 * the hw, resulting in double free. Just unregister the hw and let
++	 * devres code kfree() it.
++	 */
++	clk_hw_unregister(&fix->hw);
++}
++
+ struct clk_hw *__clk_hw_register_fixed_rate(struct device *dev,
+ 		struct device_node *np, const char *name,
+ 		const char *parent_name, const struct clk_hw *parent_hw,
+ 		const struct clk_parent_data *parent_data, unsigned long flags,
+ 		unsigned long fixed_rate, unsigned long fixed_accuracy,
+-		unsigned long clk_fixed_flags)
++		unsigned long clk_fixed_flags, bool devm)
+ {
+ 	struct clk_fixed_rate *fixed;
+ 	struct clk_hw *hw;
+@@ -62,7 +74,11 @@ struct clk_hw *__clk_hw_register_fixed_rate(struct device *dev,
+ 	int ret = -EINVAL;
+ 
+ 	/* allocate fixed-rate clock */
+-	fixed = kzalloc(sizeof(*fixed), GFP_KERNEL);
++	if (devm)
++		fixed = devres_alloc(devm_clk_hw_register_fixed_rate_release,
++				     sizeof(*fixed), GFP_KERNEL);
++	else
++		fixed = kzalloc(sizeof(*fixed), GFP_KERNEL);
+ 	if (!fixed)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -90,9 +106,13 @@ struct clk_hw *__clk_hw_register_fixed_rate(struct device *dev,
+ 	else if (np)
+ 		ret = of_clk_hw_register(np, hw);
+ 	if (ret) {
+-		kfree(fixed);
++		if (devm)
++			devres_free(fixed);
++		else
++			kfree(fixed);
+ 		hw = ERR_PTR(ret);
+-	}
++	} else if (devm)
++		devres_add(dev, fixed);
+ 
+ 	return hw;
+ }
+diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
+index 4dea29fa901d4..d6515661580c1 100644
+--- a/drivers/clk/clk-si5341.c
++++ b/drivers/clk/clk-si5341.c
+@@ -888,10 +888,8 @@ static int si5341_output_clk_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	r[0] = r_div ? (r_div & 0xff) : 1;
+ 	r[1] = (r_div >> 8) & 0xff;
+ 	r[2] = (r_div >> 16) & 0xff;
+-	err = regmap_bulk_write(output->data->regmap,
++	return regmap_bulk_write(output->data->regmap,
+ 			SI5341_OUT_R_REG(output), r, 3);
+-
+-	return 0;
+ }
+ 
+ static int si5341_output_reparent(struct clk_si5341_output *output, u8 index)
+diff --git a/drivers/clk/qcom/gpucc-sm8150.c b/drivers/clk/qcom/gpucc-sm8150.c
+index 27c40754b2c79..3d9b296a6e4a8 100644
+--- a/drivers/clk/qcom/gpucc-sm8150.c
++++ b/drivers/clk/qcom/gpucc-sm8150.c
+@@ -38,8 +38,8 @@ static struct alpha_pll_config gpu_cc_pll1_config = {
+ 	.config_ctl_hi_val = 0x00002267,
+ 	.config_ctl_hi1_val = 0x00000024,
+ 	.test_ctl_val = 0x00000000,
+-	.test_ctl_hi_val = 0x00000002,
+-	.test_ctl_hi1_val = 0x00000000,
++	.test_ctl_hi_val = 0x00000000,
++	.test_ctl_hi1_val = 0x00000020,
+ 	.user_ctl_val = 0x00000000,
+ 	.user_ctl_hi_val = 0x00000805,
+ 	.user_ctl_hi1_val = 0x000000d0,
+diff --git a/drivers/clk/qcom/videocc-sm8150.c b/drivers/clk/qcom/videocc-sm8150.c
+index 3087e2ec8fd47..61089cde4222e 100644
+--- a/drivers/clk/qcom/videocc-sm8150.c
++++ b/drivers/clk/qcom/videocc-sm8150.c
+@@ -37,6 +37,7 @@ static struct alpha_pll_config video_pll0_config = {
+ 	.config_ctl_val = 0x20485699,
+ 	.config_ctl_hi_val = 0x00002267,
+ 	.config_ctl_hi1_val = 0x00000024,
++	.test_ctl_hi1_val = 0x00000020,
+ 	.user_ctl_val = 0x00000000,
+ 	.user_ctl_hi_val = 0x00000805,
+ 	.user_ctl_hi1_val = 0x000000D0,
+@@ -218,6 +219,10 @@ static const struct regmap_config video_cc_sm8150_regmap_config = {
+ 
+ static const struct qcom_reset_map video_cc_sm8150_resets[] = {
+ 	[VIDEO_CC_MVSC_CORE_CLK_BCR] = { 0x850, 2 },
++	[VIDEO_CC_INTERFACE_BCR] = { 0x8f0 },
++	[VIDEO_CC_MVS0_BCR] = { 0x870 },
++	[VIDEO_CC_MVS1_BCR] = { 0x8b0 },
++	[VIDEO_CC_MVSC_BCR] = { 0x810 },
+ };
+ 
+ static const struct qcom_cc_desc video_cc_sm8150_desc = {
+diff --git a/drivers/clk/rockchip/clk-rk3128.c b/drivers/clk/rockchip/clk-rk3128.c
+index 4b1122e98e167..ddfe1c402e80b 100644
+--- a/drivers/clk/rockchip/clk-rk3128.c
++++ b/drivers/clk/rockchip/clk-rk3128.c
+@@ -489,7 +489,7 @@ static struct rockchip_clk_branch common_clk_branches[] __initdata = {
+ 	GATE(HCLK_I2S_2CH, "hclk_i2s_2ch", "hclk_peri", 0, RK2928_CLKGATE_CON(7), 2, GFLAGS),
+ 	GATE(0, "hclk_usb_peri", "hclk_peri", CLK_IGNORE_UNUSED, RK2928_CLKGATE_CON(9), 13, GFLAGS),
+ 	GATE(HCLK_HOST2, "hclk_host2", "hclk_peri", 0, RK2928_CLKGATE_CON(7), 3, GFLAGS),
+-	GATE(HCLK_OTG, "hclk_otg", "hclk_peri", 0, RK2928_CLKGATE_CON(3), 13, GFLAGS),
++	GATE(HCLK_OTG, "hclk_otg", "hclk_peri", 0, RK2928_CLKGATE_CON(5), 13, GFLAGS),
+ 	GATE(0, "hclk_peri_ahb", "hclk_peri", CLK_IGNORE_UNUSED, RK2928_CLKGATE_CON(9), 14, GFLAGS),
+ 	GATE(HCLK_SPDIF, "hclk_spdif", "hclk_peri", 0, RK2928_CLKGATE_CON(10), 9, GFLAGS),
+ 	GATE(HCLK_TSP, "hclk_tsp", "hclk_peri", 0, RK2928_CLKGATE_CON(10), 12, GFLAGS),
+diff --git a/drivers/clk/zynqmp/clk-mux-zynqmp.c b/drivers/clk/zynqmp/clk-mux-zynqmp.c
+index 06194149be831..46ff5cb733ee1 100644
+--- a/drivers/clk/zynqmp/clk-mux-zynqmp.c
++++ b/drivers/clk/zynqmp/clk-mux-zynqmp.c
+@@ -83,7 +83,7 @@ static int zynqmp_clk_mux_set_parent(struct clk_hw *hw, u8 index)
+ static const struct clk_ops zynqmp_clk_mux_ops = {
+ 	.get_parent = zynqmp_clk_mux_get_parent,
+ 	.set_parent = zynqmp_clk_mux_set_parent,
+-	.determine_rate = __clk_mux_determine_rate,
++	.determine_rate = __clk_mux_determine_rate_closest,
+ };
+ 
+ static const struct clk_ops zynqmp_clk_mux_ro_ops = {
+diff --git a/drivers/clk/zynqmp/divider.c b/drivers/clk/zynqmp/divider.c
+index 66da02b83d393..acfd4878cce20 100644
+--- a/drivers/clk/zynqmp/divider.c
++++ b/drivers/clk/zynqmp/divider.c
+@@ -109,49 +109,6 @@ static unsigned long zynqmp_clk_divider_recalc_rate(struct clk_hw *hw,
+ 	return DIV_ROUND_UP_ULL(parent_rate, value);
+ }
+ 
+-static void zynqmp_get_divider2_val(struct clk_hw *hw,
+-				    unsigned long rate,
+-				    struct zynqmp_clk_divider *divider,
+-				    int *bestdiv)
+-{
+-	int div1;
+-	int div2;
+-	long error = LONG_MAX;
+-	unsigned long div1_prate;
+-	struct clk_hw *div1_parent_hw;
+-	struct clk_hw *div2_parent_hw = clk_hw_get_parent(hw);
+-	struct zynqmp_clk_divider *pdivider =
+-				to_zynqmp_clk_divider(div2_parent_hw);
+-
+-	if (!pdivider)
+-		return;
+-
+-	div1_parent_hw = clk_hw_get_parent(div2_parent_hw);
+-	if (!div1_parent_hw)
+-		return;
+-
+-	div1_prate = clk_hw_get_rate(div1_parent_hw);
+-	*bestdiv = 1;
+-	for (div1 = 1; div1 <= pdivider->max_div;) {
+-		for (div2 = 1; div2 <= divider->max_div;) {
+-			long new_error = ((div1_prate / div1) / div2) - rate;
+-
+-			if (abs(new_error) < abs(error)) {
+-				*bestdiv = div2;
+-				error = new_error;
+-			}
+-			if (divider->flags & CLK_DIVIDER_POWER_OF_TWO)
+-				div2 = div2 << 1;
+-			else
+-				div2++;
+-		}
+-		if (pdivider->flags & CLK_DIVIDER_POWER_OF_TWO)
+-			div1 = div1 << 1;
+-		else
+-			div1++;
+-	}
+-}
+-
+ /**
+  * zynqmp_clk_divider_round_rate() - Round rate of divider clock
+  * @hw:			handle between common and hardware-specific interfaces
+@@ -170,6 +127,7 @@ static long zynqmp_clk_divider_round_rate(struct clk_hw *hw,
+ 	u32 div_type = divider->div_type;
+ 	u32 bestdiv;
+ 	int ret;
++	u8 width;
+ 
+ 	/* if read only, just return current value */
+ 	if (divider->flags & CLK_DIVIDER_READ_ONLY) {
+@@ -189,23 +147,12 @@ static long zynqmp_clk_divider_round_rate(struct clk_hw *hw,
+ 		return DIV_ROUND_UP_ULL((u64)*prate, bestdiv);
+ 	}
+ 
+-	bestdiv = zynqmp_divider_get_val(*prate, rate, divider->flags);
+-
+-	/*
+-	 * In case of two divisors, compute best divider values and return
+-	 * divider2 value based on compute value. div1 will  be automatically
+-	 * set to optimum based on required total divider value.
+-	 */
+-	if (div_type == TYPE_DIV2 &&
+-	    (clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT)) {
+-		zynqmp_get_divider2_val(hw, rate, divider, &bestdiv);
+-	}
++	width = fls(divider->max_div);
+ 
+-	if ((clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) && divider->is_frac)
+-		bestdiv = rate % *prate ? 1 : bestdiv;
++	rate = divider_round_rate(hw, rate, prate, NULL, width, divider->flags);
+ 
+-	bestdiv = min_t(u32, bestdiv, divider->max_div);
+-	*prate = rate * bestdiv;
++	if (divider->is_frac && (clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) && (rate % *prate))
++		*prate = rate;
+ 
+ 	return rate;
+ }
+diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
+index aea285651fbaf..e1dcdb0ea1c44 100644
+--- a/drivers/cpufreq/cpufreq-dt-platdev.c
++++ b/drivers/cpufreq/cpufreq-dt-platdev.c
+@@ -163,7 +163,7 @@ static bool __init cpu0_node_has_opp_v2_prop(void)
+ 	struct device_node *np = of_cpu_device_node_get(0);
+ 	bool ret = false;
+ 
+-	if (of_get_property(np, "operating-points-v2", NULL))
++	if (of_property_present(np, "operating-points-v2"))
+ 		ret = true;
+ 
+ 	of_node_put(np);
+diff --git a/drivers/cpufreq/imx-cpufreq-dt.c b/drivers/cpufreq/imx-cpufreq-dt.c
+index 3fe9125156b44..0942498b348c3 100644
+--- a/drivers/cpufreq/imx-cpufreq-dt.c
++++ b/drivers/cpufreq/imx-cpufreq-dt.c
+@@ -89,7 +89,7 @@ static int imx_cpufreq_dt_probe(struct platform_device *pdev)
+ 
+ 	cpu_dev = get_cpu_device(0);
+ 
+-	if (!of_find_property(cpu_dev->of_node, "cpu-supply", NULL))
++	if (!of_property_present(cpu_dev->of_node, "cpu-supply"))
+ 		return -ENODEV;
+ 
+ 	if (of_machine_is_compatible("fsl,imx7ulp")) {
+diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c
+index 00f7ad7466680..49cbc43f1fa92 100644
+--- a/drivers/cpufreq/imx6q-cpufreq.c
++++ b/drivers/cpufreq/imx6q-cpufreq.c
+@@ -230,7 +230,7 @@ static int imx6q_opp_check_speed_grading(struct device *dev)
+ 	u32 val;
+ 	int ret;
+ 
+-	if (of_find_property(dev->of_node, "nvmem-cells", NULL)) {
++	if (of_property_present(dev->of_node, "nvmem-cells")) {
+ 		ret = nvmem_cell_read_u32(dev, "speed_grade", &val);
+ 		if (ret)
+ 			return ret;
+@@ -285,7 +285,7 @@ static int imx6ul_opp_check_speed_grading(struct device *dev)
+ 	u32 val;
+ 	int ret = 0;
+ 
+-	if (of_find_property(dev->of_node, "nvmem-cells", NULL)) {
++	if (of_property_present(dev->of_node, "nvmem-cells")) {
+ 		ret = nvmem_cell_read_u32(dev, "speed_grade", &val);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
+index 8286205c7165d..bb1389f276d7a 100644
+--- a/drivers/cpufreq/scmi-cpufreq.c
++++ b/drivers/cpufreq/scmi-cpufreq.c
+@@ -238,8 +238,11 @@ static int scmi_cpufreq_probe(struct scmi_device *sdev)
+ 
+ #ifdef CONFIG_COMMON_CLK
+ 	/* dummy clock provider as needed by OPP if clocks property is used */
+-	if (of_find_property(dev->of_node, "#clock-cells", NULL))
+-		devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, NULL);
++	if (of_property_present(dev->of_node, "#clock-cells")) {
++		ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, NULL);
++		if (ret)
++			return dev_err_probe(dev, ret, "%s: registering clock provider failed\n", __func__);
++	}
+ #endif
+ 
+ 	ret = cpufreq_register_driver(&scmi_cpufreq_driver);
+diff --git a/drivers/cpufreq/tegra20-cpufreq.c b/drivers/cpufreq/tegra20-cpufreq.c
+index 8c893043953e4..111d4ff6fe86f 100644
+--- a/drivers/cpufreq/tegra20-cpufreq.c
++++ b/drivers/cpufreq/tegra20-cpufreq.c
+@@ -25,7 +25,7 @@ static bool cpu0_node_has_opp_v2_prop(void)
+ 	struct device_node *np = of_cpu_device_node_get(0);
+ 	bool ret = false;
+ 
+-	if (of_get_property(np, "operating-points-v2", NULL))
++	if (of_property_present(np, "operating-points-v2"))
+ 		ret = true;
+ 
+ 	of_node_put(np);
+diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
+index c15625e8ff66e..7e2fbba945cdc 100644
+--- a/drivers/crypto/ccp/ccp-ops.c
++++ b/drivers/crypto/ccp/ccp-ops.c
+@@ -179,8 +179,11 @@ static int ccp_init_dm_workarea(struct ccp_dm_workarea *wa,
+ 
+ 		wa->dma.address = dma_map_single(wa->dev, wa->address, len,
+ 						 dir);
+-		if (dma_mapping_error(wa->dev, wa->dma.address))
++		if (dma_mapping_error(wa->dev, wa->dma.address)) {
++			kfree(wa->address);
++			wa->address = NULL;
+ 			return -ENOMEM;
++		}
+ 
+ 		wa->dma.length = len;
+ 	}
+diff --git a/drivers/crypto/sa2ul.c b/drivers/crypto/sa2ul.c
+index f15fc1fb37079..0888f4489a765 100644
+--- a/drivers/crypto/sa2ul.c
++++ b/drivers/crypto/sa2ul.c
+@@ -1848,9 +1848,8 @@ static int sa_aead_setkey(struct crypto_aead *authenc,
+ 	crypto_aead_set_flags(ctx->fallback.aead,
+ 			      crypto_aead_get_flags(authenc) &
+ 			      CRYPTO_TFM_REQ_MASK);
+-	crypto_aead_setkey(ctx->fallback.aead, key, keylen);
+ 
+-	return 0;
++	return crypto_aead_setkey(ctx->fallback.aead, key, keylen);
+ }
+ 
+ static int sa_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
+diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
+index 2043dd0611217..c2a3a10c34845 100644
+--- a/drivers/crypto/sahara.c
++++ b/drivers/crypto/sahara.c
+@@ -43,7 +43,6 @@
+ #define FLAGS_MODE_MASK		0x000f
+ #define FLAGS_ENCRYPT		BIT(0)
+ #define FLAGS_CBC		BIT(1)
+-#define FLAGS_NEW_KEY		BIT(3)
+ 
+ #define SAHARA_HDR_BASE			0x00800000
+ #define SAHARA_HDR_SKHA_ALG_AES	0
+@@ -141,8 +140,6 @@ struct sahara_hw_link {
+ };
+ 
+ struct sahara_ctx {
+-	unsigned long flags;
+-
+ 	/* AES-specific context */
+ 	int keylen;
+ 	u8 key[AES_KEYSIZE_128];
+@@ -151,6 +148,7 @@ struct sahara_ctx {
+ 
+ struct sahara_aes_reqctx {
+ 	unsigned long mode;
++	u8 iv_out[AES_BLOCK_SIZE];
+ 	struct skcipher_request fallback_req;	// keep at the end
+ };
+ 
+@@ -446,27 +444,24 @@ static int sahara_hw_descriptor_create(struct sahara_dev *dev)
+ 	int ret;
+ 	int i, j;
+ 	int idx = 0;
++	u32 len;
+ 
+-	/* Copy new key if necessary */
+-	if (ctx->flags & FLAGS_NEW_KEY) {
+-		memcpy(dev->key_base, ctx->key, ctx->keylen);
+-		ctx->flags &= ~FLAGS_NEW_KEY;
++	memcpy(dev->key_base, ctx->key, ctx->keylen);
+ 
+-		if (dev->flags & FLAGS_CBC) {
+-			dev->hw_desc[idx]->len1 = AES_BLOCK_SIZE;
+-			dev->hw_desc[idx]->p1 = dev->iv_phys_base;
+-		} else {
+-			dev->hw_desc[idx]->len1 = 0;
+-			dev->hw_desc[idx]->p1 = 0;
+-		}
+-		dev->hw_desc[idx]->len2 = ctx->keylen;
+-		dev->hw_desc[idx]->p2 = dev->key_phys_base;
+-		dev->hw_desc[idx]->next = dev->hw_phys_desc[1];
++	if (dev->flags & FLAGS_CBC) {
++		dev->hw_desc[idx]->len1 = AES_BLOCK_SIZE;
++		dev->hw_desc[idx]->p1 = dev->iv_phys_base;
++	} else {
++		dev->hw_desc[idx]->len1 = 0;
++		dev->hw_desc[idx]->p1 = 0;
++	}
++	dev->hw_desc[idx]->len2 = ctx->keylen;
++	dev->hw_desc[idx]->p2 = dev->key_phys_base;
++	dev->hw_desc[idx]->next = dev->hw_phys_desc[1];
++	dev->hw_desc[idx]->hdr = sahara_aes_key_hdr(dev);
+ 
+-		dev->hw_desc[idx]->hdr = sahara_aes_key_hdr(dev);
++	idx++;
+ 
+-		idx++;
+-	}
+ 
+ 	dev->nb_in_sg = sg_nents_for_len(dev->in_sg, dev->total);
+ 	if (dev->nb_in_sg < 0) {
+@@ -488,24 +483,27 @@ static int sahara_hw_descriptor_create(struct sahara_dev *dev)
+ 			 DMA_TO_DEVICE);
+ 	if (ret != dev->nb_in_sg) {
+ 		dev_err(dev->device, "couldn't map in sg\n");
+-		goto unmap_in;
++		return -EINVAL;
+ 	}
++
+ 	ret = dma_map_sg(dev->device, dev->out_sg, dev->nb_out_sg,
+ 			 DMA_FROM_DEVICE);
+ 	if (ret != dev->nb_out_sg) {
+ 		dev_err(dev->device, "couldn't map out sg\n");
+-		goto unmap_out;
++		goto unmap_in;
+ 	}
+ 
+ 	/* Create input links */
+ 	dev->hw_desc[idx]->p1 = dev->hw_phys_link[0];
+ 	sg = dev->in_sg;
++	len = dev->total;
+ 	for (i = 0; i < dev->nb_in_sg; i++) {
+-		dev->hw_link[i]->len = sg->length;
++		dev->hw_link[i]->len = min(len, sg->length);
+ 		dev->hw_link[i]->p = sg->dma_address;
+ 		if (i == (dev->nb_in_sg - 1)) {
+ 			dev->hw_link[i]->next = 0;
+ 		} else {
++			len -= min(len, sg->length);
+ 			dev->hw_link[i]->next = dev->hw_phys_link[i + 1];
+ 			sg = sg_next(sg);
+ 		}
+@@ -514,12 +512,14 @@ static int sahara_hw_descriptor_create(struct sahara_dev *dev)
+ 	/* Create output links */
+ 	dev->hw_desc[idx]->p2 = dev->hw_phys_link[i];
+ 	sg = dev->out_sg;
++	len = dev->total;
+ 	for (j = i; j < dev->nb_out_sg + i; j++) {
+-		dev->hw_link[j]->len = sg->length;
++		dev->hw_link[j]->len = min(len, sg->length);
+ 		dev->hw_link[j]->p = sg->dma_address;
+ 		if (j == (dev->nb_out_sg + i - 1)) {
+ 			dev->hw_link[j]->next = 0;
+ 		} else {
++			len -= min(len, sg->length);
+ 			dev->hw_link[j]->next = dev->hw_phys_link[j + 1];
+ 			sg = sg_next(sg);
+ 		}
+@@ -538,9 +538,6 @@ static int sahara_hw_descriptor_create(struct sahara_dev *dev)
+ 
+ 	return 0;
+ 
+-unmap_out:
+-	dma_unmap_sg(dev->device, dev->out_sg, dev->nb_out_sg,
+-		DMA_FROM_DEVICE);
+ unmap_in:
+ 	dma_unmap_sg(dev->device, dev->in_sg, dev->nb_in_sg,
+ 		DMA_TO_DEVICE);
+@@ -548,8 +545,24 @@ unmap_in:
+ 	return -EINVAL;
+ }
+ 
++static void sahara_aes_cbc_update_iv(struct skcipher_request *req)
++{
++	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
++	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
++	unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
++
++	/* Update IV buffer to contain the last ciphertext block */
++	if (rctx->mode & FLAGS_ENCRYPT) {
++		sg_pcopy_to_buffer(req->dst, sg_nents(req->dst), req->iv,
++				   ivsize, req->cryptlen - ivsize);
++	} else {
++		memcpy(req->iv, rctx->iv_out, ivsize);
++	}
++}
++
+ static int sahara_aes_process(struct skcipher_request *req)
+ {
++	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+ 	struct sahara_dev *dev = dev_ptr;
+ 	struct sahara_ctx *ctx;
+ 	struct sahara_aes_reqctx *rctx;
+@@ -571,8 +584,17 @@ static int sahara_aes_process(struct skcipher_request *req)
+ 	rctx->mode &= FLAGS_MODE_MASK;
+ 	dev->flags = (dev->flags & ~FLAGS_MODE_MASK) | rctx->mode;
+ 
+-	if ((dev->flags & FLAGS_CBC) && req->iv)
+-		memcpy(dev->iv_base, req->iv, AES_KEYSIZE_128);
++	if ((dev->flags & FLAGS_CBC) && req->iv) {
++		unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
++
++		memcpy(dev->iv_base, req->iv, ivsize);
++
++		if (!(dev->flags & FLAGS_ENCRYPT)) {
++			sg_pcopy_to_buffer(req->src, sg_nents(req->src),
++					   rctx->iv_out, ivsize,
++					   req->cryptlen - ivsize);
++		}
++	}
+ 
+ 	/* assign new context to device */
+ 	dev->ctx = ctx;
+@@ -585,16 +607,20 @@ static int sahara_aes_process(struct skcipher_request *req)
+ 
+ 	timeout = wait_for_completion_timeout(&dev->dma_completion,
+ 				msecs_to_jiffies(SAHARA_TIMEOUT_MS));
+-	if (!timeout) {
+-		dev_err(dev->device, "AES timeout\n");
+-		return -ETIMEDOUT;
+-	}
+ 
+ 	dma_unmap_sg(dev->device, dev->out_sg, dev->nb_out_sg,
+ 		DMA_FROM_DEVICE);
+ 	dma_unmap_sg(dev->device, dev->in_sg, dev->nb_in_sg,
+ 		DMA_TO_DEVICE);
+ 
++	if (!timeout) {
++		dev_err(dev->device, "AES timeout\n");
++		return -ETIMEDOUT;
++	}
++
++	if ((dev->flags & FLAGS_CBC) && req->iv)
++		sahara_aes_cbc_update_iv(req);
++
+ 	return 0;
+ }
+ 
+@@ -608,7 +634,6 @@ static int sahara_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ 	/* SAHARA only supports 128bit keys */
+ 	if (keylen == AES_KEYSIZE_128) {
+ 		memcpy(ctx->key, key, keylen);
+-		ctx->flags |= FLAGS_NEW_KEY;
+ 		return 0;
+ 	}
+ 
+@@ -624,12 +649,40 @@ static int sahara_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ 	return crypto_skcipher_setkey(ctx->fallback, key, keylen);
+ }
+ 
++static int sahara_aes_fallback(struct skcipher_request *req, unsigned long mode)
++{
++	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
++	struct sahara_ctx *ctx = crypto_skcipher_ctx(
++		crypto_skcipher_reqtfm(req));
++
++	skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
++	skcipher_request_set_callback(&rctx->fallback_req,
++				      req->base.flags,
++				      req->base.complete,
++				      req->base.data);
++	skcipher_request_set_crypt(&rctx->fallback_req, req->src,
++				   req->dst, req->cryptlen, req->iv);
++
++	if (mode & FLAGS_ENCRYPT)
++		return crypto_skcipher_encrypt(&rctx->fallback_req);
++
++	return crypto_skcipher_decrypt(&rctx->fallback_req);
++}
++
+ static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode)
+ {
+ 	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
++	struct sahara_ctx *ctx = crypto_skcipher_ctx(
++		crypto_skcipher_reqtfm(req));
+ 	struct sahara_dev *dev = dev_ptr;
+ 	int err = 0;
+ 
++	if (!req->cryptlen)
++		return 0;
++
++	if (unlikely(ctx->keylen != AES_KEYSIZE_128))
++		return sahara_aes_fallback(req, mode);
++
+ 	dev_dbg(dev->device, "nbytes: %d, enc: %d, cbc: %d\n",
+ 		req->cryptlen, !!(mode & FLAGS_ENCRYPT), !!(mode & FLAGS_CBC));
+ 
+@@ -652,81 +705,21 @@ static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode)
+ 
+ static int sahara_aes_ecb_encrypt(struct skcipher_request *req)
+ {
+-	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
+-	struct sahara_ctx *ctx = crypto_skcipher_ctx(
+-		crypto_skcipher_reqtfm(req));
+-
+-	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
+-		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+-		skcipher_request_set_callback(&rctx->fallback_req,
+-					      req->base.flags,
+-					      req->base.complete,
+-					      req->base.data);
+-		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+-					   req->dst, req->cryptlen, req->iv);
+-		return crypto_skcipher_encrypt(&rctx->fallback_req);
+-	}
+-
+ 	return sahara_aes_crypt(req, FLAGS_ENCRYPT);
+ }
+ 
+ static int sahara_aes_ecb_decrypt(struct skcipher_request *req)
+ {
+-	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
+-	struct sahara_ctx *ctx = crypto_skcipher_ctx(
+-		crypto_skcipher_reqtfm(req));
+-
+-	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
+-		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+-		skcipher_request_set_callback(&rctx->fallback_req,
+-					      req->base.flags,
+-					      req->base.complete,
+-					      req->base.data);
+-		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+-					   req->dst, req->cryptlen, req->iv);
+-		return crypto_skcipher_decrypt(&rctx->fallback_req);
+-	}
+-
+ 	return sahara_aes_crypt(req, 0);
+ }
+ 
+ static int sahara_aes_cbc_encrypt(struct skcipher_request *req)
+ {
+-	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
+-	struct sahara_ctx *ctx = crypto_skcipher_ctx(
+-		crypto_skcipher_reqtfm(req));
+-
+-	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
+-		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+-		skcipher_request_set_callback(&rctx->fallback_req,
+-					      req->base.flags,
+-					      req->base.complete,
+-					      req->base.data);
+-		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+-					   req->dst, req->cryptlen, req->iv);
+-		return crypto_skcipher_encrypt(&rctx->fallback_req);
+-	}
+-
+ 	return sahara_aes_crypt(req, FLAGS_ENCRYPT | FLAGS_CBC);
+ }
+ 
+ static int sahara_aes_cbc_decrypt(struct skcipher_request *req)
+ {
+-	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
+-	struct sahara_ctx *ctx = crypto_skcipher_ctx(
+-		crypto_skcipher_reqtfm(req));
+-
+-	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
+-		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+-		skcipher_request_set_callback(&rctx->fallback_req,
+-					      req->base.flags,
+-					      req->base.complete,
+-					      req->base.data);
+-		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+-					   req->dst, req->cryptlen, req->iv);
+-		return crypto_skcipher_decrypt(&rctx->fallback_req);
+-	}
+-
+ 	return sahara_aes_crypt(req, FLAGS_CBC);
+ }
+ 
+@@ -783,6 +776,7 @@ static int sahara_sha_hw_links_create(struct sahara_dev *dev,
+ 				       int start)
+ {
+ 	struct scatterlist *sg;
++	unsigned int len;
+ 	unsigned int i;
+ 	int ret;
+ 
+@@ -804,12 +798,14 @@ static int sahara_sha_hw_links_create(struct sahara_dev *dev,
+ 	if (!ret)
+ 		return -EFAULT;
+ 
++	len = rctx->total;
+ 	for (i = start; i < dev->nb_in_sg + start; i++) {
+-		dev->hw_link[i]->len = sg->length;
++		dev->hw_link[i]->len = min(len, sg->length);
+ 		dev->hw_link[i]->p = sg->dma_address;
+ 		if (i == (dev->nb_in_sg + start - 1)) {
+ 			dev->hw_link[i]->next = 0;
+ 		} else {
++			len -= min(len, sg->length);
+ 			dev->hw_link[i]->next = dev->hw_phys_link[i + 1];
+ 			sg = sg_next(sg);
+ 		}
+@@ -890,24 +886,6 @@ static int sahara_sha_hw_context_descriptor_create(struct sahara_dev *dev,
+ 	return 0;
+ }
+ 
+-static int sahara_walk_and_recalc(struct scatterlist *sg, unsigned int nbytes)
+-{
+-	if (!sg || !sg->length)
+-		return nbytes;
+-
+-	while (nbytes && sg) {
+-		if (nbytes <= sg->length) {
+-			sg->length = nbytes;
+-			sg_mark_end(sg);
+-			break;
+-		}
+-		nbytes -= sg->length;
+-		sg = sg_next(sg);
+-	}
+-
+-	return nbytes;
+-}
+-
+ static int sahara_sha_prepare_request(struct ahash_request *req)
+ {
+ 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+@@ -944,36 +922,20 @@ static int sahara_sha_prepare_request(struct ahash_request *req)
+ 					hash_later, 0);
+ 	}
+ 
+-	/* nbytes should now be multiple of blocksize */
+-	req->nbytes = req->nbytes - hash_later;
+-
+-	sahara_walk_and_recalc(req->src, req->nbytes);
+-
++	rctx->total = len - hash_later;
+ 	/* have data from previous operation and current */
+ 	if (rctx->buf_cnt && req->nbytes) {
+ 		sg_init_table(rctx->in_sg_chain, 2);
+ 		sg_set_buf(rctx->in_sg_chain, rctx->rembuf, rctx->buf_cnt);
+-
+ 		sg_chain(rctx->in_sg_chain, 2, req->src);
+-
+-		rctx->total = req->nbytes + rctx->buf_cnt;
+ 		rctx->in_sg = rctx->in_sg_chain;
+-
+-		req->src = rctx->in_sg_chain;
+ 	/* only data from previous operation */
+ 	} else if (rctx->buf_cnt) {
+-		if (req->src)
+-			rctx->in_sg = req->src;
+-		else
+-			rctx->in_sg = rctx->in_sg_chain;
+-		/* buf was copied into rembuf above */
++		rctx->in_sg = rctx->in_sg_chain;
+ 		sg_init_one(rctx->in_sg, rctx->rembuf, rctx->buf_cnt);
+-		rctx->total = rctx->buf_cnt;
+ 	/* no data from previous operation */
+ 	} else {
+ 		rctx->in_sg = req->src;
+-		rctx->total = req->nbytes;
+-		req->src = rctx->in_sg;
+ 	}
+ 
+ 	/* on next call, we only have the remaining data in the buffer */
+@@ -994,7 +956,10 @@ static int sahara_sha_process(struct ahash_request *req)
+ 		return ret;
+ 
+ 	if (rctx->first) {
+-		sahara_sha_hw_data_descriptor_create(dev, rctx, req, 0);
++		ret = sahara_sha_hw_data_descriptor_create(dev, rctx, req, 0);
++		if (ret)
++			return ret;
++
+ 		dev->hw_desc[0]->next = 0;
+ 		rctx->first = 0;
+ 	} else {
+@@ -1002,7 +967,10 @@ static int sahara_sha_process(struct ahash_request *req)
+ 
+ 		sahara_sha_hw_context_descriptor_create(dev, rctx, req, 0);
+ 		dev->hw_desc[0]->next = dev->hw_phys_desc[1];
+-		sahara_sha_hw_data_descriptor_create(dev, rctx, req, 1);
++		ret = sahara_sha_hw_data_descriptor_create(dev, rctx, req, 1);
++		if (ret)
++			return ret;
++
+ 		dev->hw_desc[1]->next = 0;
+ 	}
+ 
+@@ -1015,18 +983,19 @@ static int sahara_sha_process(struct ahash_request *req)
+ 
+ 	timeout = wait_for_completion_timeout(&dev->dma_completion,
+ 				msecs_to_jiffies(SAHARA_TIMEOUT_MS));
+-	if (!timeout) {
+-		dev_err(dev->device, "SHA timeout\n");
+-		return -ETIMEDOUT;
+-	}
+ 
+ 	if (rctx->sg_in_idx)
+ 		dma_unmap_sg(dev->device, dev->in_sg, dev->nb_in_sg,
+ 			     DMA_TO_DEVICE);
+ 
++	if (!timeout) {
++		dev_err(dev->device, "SHA timeout\n");
++		return -ETIMEDOUT;
++	}
++
+ 	memcpy(rctx->context, dev->context_base, rctx->context_size);
+ 
+-	if (req->result)
++	if (req->result && rctx->last)
+ 		memcpy(req->result, rctx->context, rctx->digest_size);
+ 
+ 	return 0;
+@@ -1170,8 +1139,7 @@ static int sahara_sha_import(struct ahash_request *req, const void *in)
+ static int sahara_sha_cra_init(struct crypto_tfm *tfm)
+ {
+ 	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
+-				 sizeof(struct sahara_sha_reqctx) +
+-				 SHA_BUFFER_LEN + SHA256_BLOCK_SIZE);
++				 sizeof(struct sahara_sha_reqctx));
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/virtio/Kconfig b/drivers/crypto/virtio/Kconfig
+index b894e3a8be4fa..5f8915f4a9ffe 100644
+--- a/drivers/crypto/virtio/Kconfig
++++ b/drivers/crypto/virtio/Kconfig
+@@ -3,8 +3,11 @@ config CRYPTO_DEV_VIRTIO
+ 	tristate "VirtIO crypto driver"
+ 	depends on VIRTIO
+ 	select CRYPTO_AEAD
++	select CRYPTO_AKCIPHER2
+ 	select CRYPTO_SKCIPHER
+ 	select CRYPTO_ENGINE
++	select CRYPTO_RSA
++	select MPILIB
+ 	help
+ 	  This driver provides support for virtio crypto device. If you
+ 	  choose 'M' here, this module will be called virtio_crypto.
+diff --git a/drivers/crypto/virtio/Makefile b/drivers/crypto/virtio/Makefile
+index cbfccccfa135d..f2b839473d61c 100644
+--- a/drivers/crypto/virtio/Makefile
++++ b/drivers/crypto/virtio/Makefile
+@@ -2,5 +2,6 @@
+ obj-$(CONFIG_CRYPTO_DEV_VIRTIO) += virtio_crypto.o
+ virtio_crypto-objs := \
+ 	virtio_crypto_algs.o \
++	virtio_crypto_akcipher_algs.o \
+ 	virtio_crypto_mgr.o \
+ 	virtio_crypto_core.o
+diff --git a/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c b/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
+new file mode 100644
+index 0000000000000..2cfc36d141c07
+--- /dev/null
++++ b/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
+@@ -0,0 +1,591 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++ /* Asymmetric algorithms supported by virtio crypto device
++  *
++  * Authors: zhenwei pi <pizhenwei@bytedance.com>
++  *          lei he <helei.sig11@bytedance.com>
++  *
++  * Copyright 2022 Bytedance CO., LTD.
++  */
++
++#include <linux/mpi.h>
++#include <linux/scatterlist.h>
++#include <crypto/algapi.h>
++#include <crypto/internal/akcipher.h>
++#include <crypto/internal/rsa.h>
++#include <linux/err.h>
++#include <crypto/scatterwalk.h>
++#include <linux/atomic.h>
++
++#include <uapi/linux/virtio_crypto.h>
++#include "virtio_crypto_common.h"
++
++struct virtio_crypto_rsa_ctx {
++	MPI n;
++};
++
++struct virtio_crypto_akcipher_ctx {
++	struct crypto_engine_ctx enginectx;
++	struct virtio_crypto *vcrypto;
++	struct crypto_akcipher *tfm;
++	bool session_valid;
++	__u64 session_id;
++	union {
++		struct virtio_crypto_rsa_ctx rsa_ctx;
++	};
++};
++
++struct virtio_crypto_akcipher_request {
++	struct virtio_crypto_request base;
++	struct virtio_crypto_akcipher_ctx *akcipher_ctx;
++	struct akcipher_request *akcipher_req;
++	void *src_buf;
++	void *dst_buf;
++	uint32_t opcode;
++};
++
++struct virtio_crypto_akcipher_algo {
++	uint32_t algonum;
++	uint32_t service;
++	unsigned int active_devs;
++	struct akcipher_alg algo;
++};
++
++static DEFINE_MUTEX(algs_lock);
++
++static void virtio_crypto_akcipher_finalize_req(
++	struct virtio_crypto_akcipher_request *vc_akcipher_req,
++	struct akcipher_request *req, int err)
++{
++	kfree(vc_akcipher_req->src_buf);
++	kfree(vc_akcipher_req->dst_buf);
++	vc_akcipher_req->src_buf = NULL;
++	vc_akcipher_req->dst_buf = NULL;
++	virtcrypto_clear_request(&vc_akcipher_req->base);
++
++	crypto_finalize_akcipher_request(vc_akcipher_req->base.dataq->engine, req, err);
++}
++
++static void virtio_crypto_dataq_akcipher_callback(struct virtio_crypto_request *vc_req, int len)
++{
++	struct virtio_crypto_akcipher_request *vc_akcipher_req =
++		container_of(vc_req, struct virtio_crypto_akcipher_request, base);
++	struct akcipher_request *akcipher_req;
++	int error;
++
++	switch (vc_req->status) {
++	case VIRTIO_CRYPTO_OK:
++		error = 0;
++		break;
++	case VIRTIO_CRYPTO_INVSESS:
++	case VIRTIO_CRYPTO_ERR:
++		error = -EINVAL;
++		break;
++	case VIRTIO_CRYPTO_BADMSG:
++		error = -EBADMSG;
++		break;
++
++	case VIRTIO_CRYPTO_KEY_REJECTED:
++		error = -EKEYREJECTED;
++		break;
++
++	default:
++		error = -EIO;
++		break;
++	}
++
++	akcipher_req = vc_akcipher_req->akcipher_req;
++	if (vc_akcipher_req->opcode != VIRTIO_CRYPTO_AKCIPHER_VERIFY)
++		sg_copy_from_buffer(akcipher_req->dst, sg_nents(akcipher_req->dst),
++				    vc_akcipher_req->dst_buf, akcipher_req->dst_len);
++	virtio_crypto_akcipher_finalize_req(vc_akcipher_req, akcipher_req, error);
++}
++
++static int virtio_crypto_alg_akcipher_init_session(struct virtio_crypto_akcipher_ctx *ctx,
++		struct virtio_crypto_ctrl_header *header, void *para,
++		const uint8_t *key, unsigned int keylen)
++{
++	struct scatterlist outhdr_sg, key_sg, inhdr_sg, *sgs[3];
++	struct virtio_crypto *vcrypto = ctx->vcrypto;
++	uint8_t *pkey;
++	int err;
++	unsigned int num_out = 0, num_in = 0;
++	struct virtio_crypto_op_ctrl_req *ctrl;
++	struct virtio_crypto_session_input *input;
++	struct virtio_crypto_ctrl_request *vc_ctrl_req;
++
++	pkey = kmemdup(key, keylen, GFP_ATOMIC);
++	if (!pkey)
++		return -ENOMEM;
++
++	vc_ctrl_req = kzalloc(sizeof(*vc_ctrl_req), GFP_KERNEL);
++	if (!vc_ctrl_req) {
++		err = -ENOMEM;
++		goto out;
++	}
++
++	ctrl = &vc_ctrl_req->ctrl;
++	memcpy(&ctrl->header, header, sizeof(ctrl->header));
++	memcpy(&ctrl->u, para, sizeof(ctrl->u));
++	input = &vc_ctrl_req->input;
++	input->status = cpu_to_le32(VIRTIO_CRYPTO_ERR);
++
++	sg_init_one(&outhdr_sg, ctrl, sizeof(*ctrl));
++	sgs[num_out++] = &outhdr_sg;
++
++	sg_init_one(&key_sg, pkey, keylen);
++	sgs[num_out++] = &key_sg;
++
++	sg_init_one(&inhdr_sg, input, sizeof(*input));
++	sgs[num_out + num_in++] = &inhdr_sg;
++
++	err = virtio_crypto_ctrl_vq_request(vcrypto, sgs, num_out, num_in, vc_ctrl_req);
++	if (err < 0)
++		goto out;
++
++	if (le32_to_cpu(input->status) != VIRTIO_CRYPTO_OK) {
++		pr_err("virtio_crypto: Create session failed status: %u\n",
++			le32_to_cpu(input->status));
++		err = -EINVAL;
++		goto out;
++	}
++
++	ctx->session_id = le64_to_cpu(input->session_id);
++	ctx->session_valid = true;
++	err = 0;
++
++out:
++	kfree(vc_ctrl_req);
++	kfree_sensitive(pkey);
++
++	return err;
++}
++
++static int virtio_crypto_alg_akcipher_close_session(struct virtio_crypto_akcipher_ctx *ctx)
++{
++	struct scatterlist outhdr_sg, inhdr_sg, *sgs[2];
++	struct virtio_crypto_destroy_session_req *destroy_session;
++	struct virtio_crypto *vcrypto = ctx->vcrypto;
++	unsigned int num_out = 0, num_in = 0;
++	int err;
++	struct virtio_crypto_op_ctrl_req *ctrl;
++	struct virtio_crypto_inhdr *ctrl_status;
++	struct virtio_crypto_ctrl_request *vc_ctrl_req;
++
++	if (!ctx->session_valid)
++		return 0;
++
++	vc_ctrl_req = kzalloc(sizeof(*vc_ctrl_req), GFP_KERNEL);
++	if (!vc_ctrl_req)
++		return -ENOMEM;
++
++	ctrl_status = &vc_ctrl_req->ctrl_status;
++	ctrl_status->status = VIRTIO_CRYPTO_ERR;
++	ctrl = &vc_ctrl_req->ctrl;
++	ctrl->header.opcode = cpu_to_le32(VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION);
++	ctrl->header.queue_id = 0;
++
++	destroy_session = &ctrl->u.destroy_session;
++	destroy_session->session_id = cpu_to_le64(ctx->session_id);
++
++	sg_init_one(&outhdr_sg, ctrl, sizeof(*ctrl));
++	sgs[num_out++] = &outhdr_sg;
++
++	sg_init_one(&inhdr_sg, &ctrl_status->status, sizeof(ctrl_status->status));
++	sgs[num_out + num_in++] = &inhdr_sg;
++
++	err = virtio_crypto_ctrl_vq_request(vcrypto, sgs, num_out, num_in, vc_ctrl_req);
++	if (err < 0)
++		goto out;
++
++	if (ctrl_status->status != VIRTIO_CRYPTO_OK) {
++		pr_err("virtio_crypto: Close session failed status: %u, session_id: 0x%llx\n",
++			ctrl_status->status, destroy_session->session_id);
++		err = -EINVAL;
++		goto out;
++	}
++
++	err = 0;
++	ctx->session_valid = false;
++
++out:
++	kfree(vc_ctrl_req);
++
++	return err;
++}
++
++static int __virtio_crypto_akcipher_do_req(struct virtio_crypto_akcipher_request *vc_akcipher_req,
++		struct akcipher_request *req, struct data_queue *data_vq)
++{
++	struct virtio_crypto_akcipher_ctx *ctx = vc_akcipher_req->akcipher_ctx;
++	struct virtio_crypto_request *vc_req = &vc_akcipher_req->base;
++	struct virtio_crypto *vcrypto = ctx->vcrypto;
++	struct virtio_crypto_op_data_req *req_data = vc_req->req_data;
++	struct scatterlist *sgs[4], outhdr_sg, inhdr_sg, srcdata_sg, dstdata_sg;
++	void *src_buf = NULL, *dst_buf = NULL;
++	unsigned int num_out = 0, num_in = 0;
++	int node = dev_to_node(&vcrypto->vdev->dev);
++	unsigned long flags;
++	int ret = -ENOMEM;
++	bool verify = vc_akcipher_req->opcode == VIRTIO_CRYPTO_AKCIPHER_VERIFY;
++	unsigned int src_len = verify ? req->src_len + req->dst_len : req->src_len;
++
++	/* out header */
++	sg_init_one(&outhdr_sg, req_data, sizeof(*req_data));
++	sgs[num_out++] = &outhdr_sg;
++
++	/* src data */
++	src_buf = kcalloc_node(src_len, 1, GFP_KERNEL, node);
++	if (!src_buf)
++		goto err;
++
++	if (verify) {
++		/* for verify operation, both src and dst data work as OUT direction */
++		sg_copy_to_buffer(req->src, sg_nents(req->src), src_buf, src_len);
++		sg_init_one(&srcdata_sg, src_buf, src_len);
++		sgs[num_out++] = &srcdata_sg;
++	} else {
++		sg_copy_to_buffer(req->src, sg_nents(req->src), src_buf, src_len);
++		sg_init_one(&srcdata_sg, src_buf, src_len);
++		sgs[num_out++] = &srcdata_sg;
++
++		/* dst data */
++		dst_buf = kcalloc_node(req->dst_len, 1, GFP_KERNEL, node);
++		if (!dst_buf)
++			goto err;
++
++		sg_init_one(&dstdata_sg, dst_buf, req->dst_len);
++		sgs[num_out + num_in++] = &dstdata_sg;
++	}
++
++	vc_akcipher_req->src_buf = src_buf;
++	vc_akcipher_req->dst_buf = dst_buf;
++
++	/* in header */
++	sg_init_one(&inhdr_sg, &vc_req->status, sizeof(vc_req->status));
++	sgs[num_out + num_in++] = &inhdr_sg;
++
++	spin_lock_irqsave(&data_vq->lock, flags);
++	ret = virtqueue_add_sgs(data_vq->vq, sgs, num_out, num_in, vc_req, GFP_ATOMIC);
++	virtqueue_kick(data_vq->vq);
++	spin_unlock_irqrestore(&data_vq->lock, flags);
++	if (ret)
++		goto err;
++
++	return 0;
++
++err:
++	kfree(src_buf);
++	kfree(dst_buf);
++
++	return -ENOMEM;
++}
++
++static int virtio_crypto_rsa_do_req(struct crypto_engine *engine, void *vreq)
++{
++	struct akcipher_request *req = container_of(vreq, struct akcipher_request, base);
++	struct virtio_crypto_akcipher_request *vc_akcipher_req = akcipher_request_ctx(req);
++	struct virtio_crypto_request *vc_req = &vc_akcipher_req->base;
++	struct virtio_crypto_akcipher_ctx *ctx = vc_akcipher_req->akcipher_ctx;
++	struct virtio_crypto *vcrypto = ctx->vcrypto;
++	struct data_queue *data_vq = vc_req->dataq;
++	struct virtio_crypto_op_header *header;
++	struct virtio_crypto_akcipher_data_req *akcipher_req;
++	int ret;
++
++	vc_req->sgs = NULL;
++	vc_req->req_data = kzalloc_node(sizeof(*vc_req->req_data),
++		GFP_KERNEL, dev_to_node(&vcrypto->vdev->dev));
++	if (!vc_req->req_data)
++		return -ENOMEM;
++
++	/* build request header */
++	header = &vc_req->req_data->header;
++	header->opcode = cpu_to_le32(vc_akcipher_req->opcode);
++	header->algo = cpu_to_le32(VIRTIO_CRYPTO_AKCIPHER_RSA);
++	header->session_id = cpu_to_le64(ctx->session_id);
++
++	/* build request akcipher data */
++	akcipher_req = &vc_req->req_data->u.akcipher_req;
++	akcipher_req->para.src_data_len = cpu_to_le32(req->src_len);
++	akcipher_req->para.dst_data_len = cpu_to_le32(req->dst_len);
++
++	ret = __virtio_crypto_akcipher_do_req(vc_akcipher_req, req, data_vq);
++	if (ret < 0) {
++		kfree_sensitive(vc_req->req_data);
++		vc_req->req_data = NULL;
++		return ret;
++	}
++
++	return 0;
++}
++
++static int virtio_crypto_rsa_req(struct akcipher_request *req, uint32_t opcode)
++{
++	struct crypto_akcipher *atfm = crypto_akcipher_reqtfm(req);
++	struct virtio_crypto_akcipher_ctx *ctx = akcipher_tfm_ctx(atfm);
++	struct virtio_crypto_akcipher_request *vc_akcipher_req = akcipher_request_ctx(req);
++	struct virtio_crypto_request *vc_req = &vc_akcipher_req->base;
++	struct virtio_crypto *vcrypto = ctx->vcrypto;
++	/* Use the first data virtqueue as default */
++	struct data_queue *data_vq = &vcrypto->data_vq[0];
++
++	vc_req->dataq = data_vq;
++	vc_req->alg_cb = virtio_crypto_dataq_akcipher_callback;
++	vc_akcipher_req->akcipher_ctx = ctx;
++	vc_akcipher_req->akcipher_req = req;
++	vc_akcipher_req->opcode = opcode;
++
++	return crypto_transfer_akcipher_request_to_engine(data_vq->engine, req);
++}
++
++static int virtio_crypto_rsa_encrypt(struct akcipher_request *req)
++{
++	return virtio_crypto_rsa_req(req, VIRTIO_CRYPTO_AKCIPHER_ENCRYPT);
++}
++
++static int virtio_crypto_rsa_decrypt(struct akcipher_request *req)
++{
++	return virtio_crypto_rsa_req(req, VIRTIO_CRYPTO_AKCIPHER_DECRYPT);
++}
++
++static int virtio_crypto_rsa_sign(struct akcipher_request *req)
++{
++	return virtio_crypto_rsa_req(req, VIRTIO_CRYPTO_AKCIPHER_SIGN);
++}
++
++static int virtio_crypto_rsa_verify(struct akcipher_request *req)
++{
++	return virtio_crypto_rsa_req(req, VIRTIO_CRYPTO_AKCIPHER_VERIFY);
++}
++
++static int virtio_crypto_rsa_set_key(struct crypto_akcipher *tfm,
++				     const void *key,
++				     unsigned int keylen,
++				     bool private,
++				     int padding_algo,
++				     int hash_algo)
++{
++	struct virtio_crypto_akcipher_ctx *ctx = akcipher_tfm_ctx(tfm);
++	struct virtio_crypto_rsa_ctx *rsa_ctx = &ctx->rsa_ctx;
++	struct virtio_crypto *vcrypto;
++	struct virtio_crypto_ctrl_header header;
++	struct virtio_crypto_akcipher_session_para para;
++	struct rsa_key rsa_key = {0};
++	int node = virtio_crypto_get_current_node();
++	uint32_t keytype;
++	int ret;
++
++	/* mpi_free will test n, just free it. */
++	mpi_free(rsa_ctx->n);
++	rsa_ctx->n = NULL;
++
++	if (private) {
++		keytype = VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE;
++		ret = rsa_parse_priv_key(&rsa_key, key, keylen);
++	} else {
++		keytype = VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC;
++		ret = rsa_parse_pub_key(&rsa_key, key, keylen);
++	}
++
++	if (ret)
++		return ret;
++
++	rsa_ctx->n = mpi_read_raw_data(rsa_key.n, rsa_key.n_sz);
++	if (!rsa_ctx->n)
++		return -ENOMEM;
++
++	if (!ctx->vcrypto) {
++		vcrypto = virtcrypto_get_dev_node(node, VIRTIO_CRYPTO_SERVICE_AKCIPHER,
++						VIRTIO_CRYPTO_AKCIPHER_RSA);
++		if (!vcrypto) {
++			pr_err("virtio_crypto: Could not find a virtio device in the system or unsupported algo\n");
++			return -ENODEV;
++		}
++
++		ctx->vcrypto = vcrypto;
++	} else {
++		virtio_crypto_alg_akcipher_close_session(ctx);
++	}
++
++	/* set ctrl header */
++	header.opcode =	cpu_to_le32(VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION);
++	header.algo = cpu_to_le32(VIRTIO_CRYPTO_AKCIPHER_RSA);
++	header.queue_id = 0;
++
++	/* set RSA para */
++	para.algo = cpu_to_le32(VIRTIO_CRYPTO_AKCIPHER_RSA);
++	para.keytype = cpu_to_le32(keytype);
++	para.keylen = cpu_to_le32(keylen);
++	para.u.rsa.padding_algo = cpu_to_le32(padding_algo);
++	para.u.rsa.hash_algo = cpu_to_le32(hash_algo);
++
++	return virtio_crypto_alg_akcipher_init_session(ctx, &header, &para, key, keylen);
++}
++
++static int virtio_crypto_rsa_raw_set_priv_key(struct crypto_akcipher *tfm,
++					      const void *key,
++					      unsigned int keylen)
++{
++	return virtio_crypto_rsa_set_key(tfm, key, keylen, 1,
++					 VIRTIO_CRYPTO_RSA_RAW_PADDING,
++					 VIRTIO_CRYPTO_RSA_NO_HASH);
++}
++
++
++static int virtio_crypto_p1pad_rsa_sha1_set_priv_key(struct crypto_akcipher *tfm,
++						     const void *key,
++						     unsigned int keylen)
++{
++	return virtio_crypto_rsa_set_key(tfm, key, keylen, 1,
++					 VIRTIO_CRYPTO_RSA_PKCS1_PADDING,
++					 VIRTIO_CRYPTO_RSA_SHA1);
++}
++
++static int virtio_crypto_rsa_raw_set_pub_key(struct crypto_akcipher *tfm,
++					     const void *key,
++					     unsigned int keylen)
++{
++	return virtio_crypto_rsa_set_key(tfm, key, keylen, 0,
++					 VIRTIO_CRYPTO_RSA_RAW_PADDING,
++					 VIRTIO_CRYPTO_RSA_NO_HASH);
++}
++
++static int virtio_crypto_p1pad_rsa_sha1_set_pub_key(struct crypto_akcipher *tfm,
++						    const void *key,
++						    unsigned int keylen)
++{
++	return virtio_crypto_rsa_set_key(tfm, key, keylen, 0,
++					 VIRTIO_CRYPTO_RSA_PKCS1_PADDING,
++					 VIRTIO_CRYPTO_RSA_SHA1);
++}
++
++static unsigned int virtio_crypto_rsa_max_size(struct crypto_akcipher *tfm)
++{
++	struct virtio_crypto_akcipher_ctx *ctx = akcipher_tfm_ctx(tfm);
++	struct virtio_crypto_rsa_ctx *rsa_ctx = &ctx->rsa_ctx;
++
++	return mpi_get_size(rsa_ctx->n);
++}
++
++static int virtio_crypto_rsa_init_tfm(struct crypto_akcipher *tfm)
++{
++	struct virtio_crypto_akcipher_ctx *ctx = akcipher_tfm_ctx(tfm);
++
++	ctx->tfm = tfm;
++	ctx->enginectx.op.do_one_request = virtio_crypto_rsa_do_req;
++	ctx->enginectx.op.prepare_request = NULL;
++	ctx->enginectx.op.unprepare_request = NULL;
++
++	return 0;
++}
++
++static void virtio_crypto_rsa_exit_tfm(struct crypto_akcipher *tfm)
++{
++	struct virtio_crypto_akcipher_ctx *ctx = akcipher_tfm_ctx(tfm);
++	struct virtio_crypto_rsa_ctx *rsa_ctx = &ctx->rsa_ctx;
++
++	virtio_crypto_alg_akcipher_close_session(ctx);
++	virtcrypto_dev_put(ctx->vcrypto);
++	mpi_free(rsa_ctx->n);
++	rsa_ctx->n = NULL;
++}
++
++static struct virtio_crypto_akcipher_algo virtio_crypto_akcipher_algs[] = {
++	{
++		.algonum = VIRTIO_CRYPTO_AKCIPHER_RSA,
++		.service = VIRTIO_CRYPTO_SERVICE_AKCIPHER,
++		.algo = {
++			.encrypt = virtio_crypto_rsa_encrypt,
++			.decrypt = virtio_crypto_rsa_decrypt,
++			.set_pub_key = virtio_crypto_rsa_raw_set_pub_key,
++			.set_priv_key = virtio_crypto_rsa_raw_set_priv_key,
++			.max_size = virtio_crypto_rsa_max_size,
++			.init = virtio_crypto_rsa_init_tfm,
++			.exit = virtio_crypto_rsa_exit_tfm,
++			.reqsize = sizeof(struct virtio_crypto_akcipher_request),
++			.base = {
++				.cra_name = "rsa",
++				.cra_driver_name = "virtio-crypto-rsa",
++				.cra_priority = 150,
++				.cra_module = THIS_MODULE,
++				.cra_ctxsize = sizeof(struct virtio_crypto_akcipher_ctx),
++			},
++		},
++	},
++	{
++		.algonum = VIRTIO_CRYPTO_AKCIPHER_RSA,
++		.service = VIRTIO_CRYPTO_SERVICE_AKCIPHER,
++		.algo = {
++			.encrypt = virtio_crypto_rsa_encrypt,
++			.decrypt = virtio_crypto_rsa_decrypt,
++			.sign = virtio_crypto_rsa_sign,
++			.verify = virtio_crypto_rsa_verify,
++			.set_pub_key = virtio_crypto_p1pad_rsa_sha1_set_pub_key,
++			.set_priv_key = virtio_crypto_p1pad_rsa_sha1_set_priv_key,
++			.max_size = virtio_crypto_rsa_max_size,
++			.init = virtio_crypto_rsa_init_tfm,
++			.exit = virtio_crypto_rsa_exit_tfm,
++			.reqsize = sizeof(struct virtio_crypto_akcipher_request),
++			.base = {
++				.cra_name = "pkcs1pad(rsa,sha1)",
++				.cra_driver_name = "virtio-pkcs1-rsa-with-sha1",
++				.cra_priority = 150,
++				.cra_module = THIS_MODULE,
++				.cra_ctxsize = sizeof(struct virtio_crypto_akcipher_ctx),
++			},
++		},
++	},
++};
++
++int virtio_crypto_akcipher_algs_register(struct virtio_crypto *vcrypto)
++{
++	int ret = 0;
++	int i = 0;
++
++	mutex_lock(&algs_lock);
++
++	for (i = 0; i < ARRAY_SIZE(virtio_crypto_akcipher_algs); i++) {
++		uint32_t service = virtio_crypto_akcipher_algs[i].service;
++		uint32_t algonum = virtio_crypto_akcipher_algs[i].algonum;
++
++		if (!virtcrypto_algo_is_supported(vcrypto, service, algonum))
++			continue;
++
++		if (virtio_crypto_akcipher_algs[i].active_devs == 0) {
++			ret = crypto_register_akcipher(&virtio_crypto_akcipher_algs[i].algo);
++			if (ret)
++				goto unlock;
++		}
++
++		virtio_crypto_akcipher_algs[i].active_devs++;
++		dev_info(&vcrypto->vdev->dev, "Registered akcipher algo %s\n",
++			 virtio_crypto_akcipher_algs[i].algo.base.cra_name);
++	}
++
++unlock:
++	mutex_unlock(&algs_lock);
++	return ret;
++}
++
++void virtio_crypto_akcipher_algs_unregister(struct virtio_crypto *vcrypto)
++{
++	int i = 0;
++
++	mutex_lock(&algs_lock);
++
++	for (i = 0; i < ARRAY_SIZE(virtio_crypto_akcipher_algs); i++) {
++		uint32_t service = virtio_crypto_akcipher_algs[i].service;
++		uint32_t algonum = virtio_crypto_akcipher_algs[i].algonum;
++
++		if (virtio_crypto_akcipher_algs[i].active_devs == 0 ||
++		    !virtcrypto_algo_is_supported(vcrypto, service, algonum))
++			continue;
++
++		if (virtio_crypto_akcipher_algs[i].active_devs == 1)
++			crypto_unregister_akcipher(&virtio_crypto_akcipher_algs[i].algo);
++
++		virtio_crypto_akcipher_algs[i].active_devs--;
++	}
++
++	mutex_unlock(&algs_lock);
++}
+diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c b/drivers/crypto/virtio/virtio_crypto_algs.c
+index 583c0b535d13b..ae645c9d47a7a 100644
+--- a/drivers/crypto/virtio/virtio_crypto_algs.c
++++ b/drivers/crypto/virtio/virtio_crypto_algs.c
+@@ -118,11 +118,14 @@ static int virtio_crypto_alg_skcipher_init_session(
+ 		int encrypt)
+ {
+ 	struct scatterlist outhdr, key_sg, inhdr, *sgs[3];
+-	unsigned int tmp;
+ 	struct virtio_crypto *vcrypto = ctx->vcrypto;
+ 	int op = encrypt ? VIRTIO_CRYPTO_OP_ENCRYPT : VIRTIO_CRYPTO_OP_DECRYPT;
+ 	int err;
+ 	unsigned int num_out = 0, num_in = 0;
++	struct virtio_crypto_op_ctrl_req *ctrl;
++	struct virtio_crypto_session_input *input;
++	struct virtio_crypto_sym_create_session_req *sym_create_session;
++	struct virtio_crypto_ctrl_request *vc_ctrl_req;
+ 
+ 	/*
+ 	 * Avoid to do DMA from the stack, switch to using
+@@ -133,26 +136,29 @@ static int virtio_crypto_alg_skcipher_init_session(
+ 	if (!cipher_key)
+ 		return -ENOMEM;
+ 
+-	spin_lock(&vcrypto->ctrl_lock);
++	vc_ctrl_req = kzalloc(sizeof(*vc_ctrl_req), GFP_KERNEL);
++	if (!vc_ctrl_req) {
++		err = -ENOMEM;
++		goto out;
++	}
++
+ 	/* Pad ctrl header */
+-	vcrypto->ctrl.header.opcode =
+-		cpu_to_le32(VIRTIO_CRYPTO_CIPHER_CREATE_SESSION);
+-	vcrypto->ctrl.header.algo = cpu_to_le32(alg);
++	ctrl = &vc_ctrl_req->ctrl;
++	ctrl->header.opcode = cpu_to_le32(VIRTIO_CRYPTO_CIPHER_CREATE_SESSION);
++	ctrl->header.algo = cpu_to_le32(alg);
+ 	/* Set the default dataqueue id to 0 */
+-	vcrypto->ctrl.header.queue_id = 0;
++	ctrl->header.queue_id = 0;
+ 
+-	vcrypto->input.status = cpu_to_le32(VIRTIO_CRYPTO_ERR);
++	input = &vc_ctrl_req->input;
++	input->status = cpu_to_le32(VIRTIO_CRYPTO_ERR);
+ 	/* Pad cipher's parameters */
+-	vcrypto->ctrl.u.sym_create_session.op_type =
+-		cpu_to_le32(VIRTIO_CRYPTO_SYM_OP_CIPHER);
+-	vcrypto->ctrl.u.sym_create_session.u.cipher.para.algo =
+-		vcrypto->ctrl.header.algo;
+-	vcrypto->ctrl.u.sym_create_session.u.cipher.para.keylen =
+-		cpu_to_le32(keylen);
+-	vcrypto->ctrl.u.sym_create_session.u.cipher.para.op =
+-		cpu_to_le32(op);
+-
+-	sg_init_one(&outhdr, &vcrypto->ctrl, sizeof(vcrypto->ctrl));
++	sym_create_session = &ctrl->u.sym_create_session;
++	sym_create_session->op_type = cpu_to_le32(VIRTIO_CRYPTO_SYM_OP_CIPHER);
++	sym_create_session->u.cipher.para.algo = ctrl->header.algo;
++	sym_create_session->u.cipher.para.keylen = cpu_to_le32(keylen);
++	sym_create_session->u.cipher.para.op = cpu_to_le32(op);
++
++	sg_init_one(&outhdr, ctrl, sizeof(*ctrl));
+ 	sgs[num_out++] = &outhdr;
+ 
+ 	/* Set key */
+@@ -160,45 +166,30 @@ static int virtio_crypto_alg_skcipher_init_session(
+ 	sgs[num_out++] = &key_sg;
+ 
+ 	/* Return status and session id back */
+-	sg_init_one(&inhdr, &vcrypto->input, sizeof(vcrypto->input));
++	sg_init_one(&inhdr, input, sizeof(*input));
+ 	sgs[num_out + num_in++] = &inhdr;
+ 
+-	err = virtqueue_add_sgs(vcrypto->ctrl_vq, sgs, num_out,
+-				num_in, vcrypto, GFP_ATOMIC);
+-	if (err < 0) {
+-		spin_unlock(&vcrypto->ctrl_lock);
+-		kfree_sensitive(cipher_key);
+-		return err;
+-	}
+-	virtqueue_kick(vcrypto->ctrl_vq);
+-
+-	/*
+-	 * Trapping into the hypervisor, so the request should be
+-	 * handled immediately.
+-	 */
+-	while (!virtqueue_get_buf(vcrypto->ctrl_vq, &tmp) &&
+-	       !virtqueue_is_broken(vcrypto->ctrl_vq))
+-		cpu_relax();
++	err = virtio_crypto_ctrl_vq_request(vcrypto, sgs, num_out, num_in, vc_ctrl_req);
++	if (err < 0)
++		goto out;
+ 
+-	if (le32_to_cpu(vcrypto->input.status) != VIRTIO_CRYPTO_OK) {
+-		spin_unlock(&vcrypto->ctrl_lock);
++	if (le32_to_cpu(input->status) != VIRTIO_CRYPTO_OK) {
+ 		pr_err("virtio_crypto: Create session failed status: %u\n",
+-			le32_to_cpu(vcrypto->input.status));
+-		kfree_sensitive(cipher_key);
+-		return -EINVAL;
++			le32_to_cpu(input->status));
++		err = -EINVAL;
++		goto out;
+ 	}
+ 
+ 	if (encrypt)
+-		ctx->enc_sess_info.session_id =
+-			le64_to_cpu(vcrypto->input.session_id);
++		ctx->enc_sess_info.session_id = le64_to_cpu(input->session_id);
+ 	else
+-		ctx->dec_sess_info.session_id =
+-			le64_to_cpu(vcrypto->input.session_id);
+-
+-	spin_unlock(&vcrypto->ctrl_lock);
++		ctx->dec_sess_info.session_id = le64_to_cpu(input->session_id);
+ 
++	err = 0;
++out:
++	kfree(vc_ctrl_req);
+ 	kfree_sensitive(cipher_key);
+-	return 0;
++	return err;
+ }
+ 
+ static int virtio_crypto_alg_skcipher_close_session(
+@@ -206,60 +197,56 @@ static int virtio_crypto_alg_skcipher_close_session(
+ 		int encrypt)
+ {
+ 	struct scatterlist outhdr, status_sg, *sgs[2];
+-	unsigned int tmp;
+ 	struct virtio_crypto_destroy_session_req *destroy_session;
+ 	struct virtio_crypto *vcrypto = ctx->vcrypto;
+ 	int err;
+ 	unsigned int num_out = 0, num_in = 0;
++	struct virtio_crypto_op_ctrl_req *ctrl;
++	struct virtio_crypto_inhdr *ctrl_status;
++	struct virtio_crypto_ctrl_request *vc_ctrl_req;
++
++	vc_ctrl_req = kzalloc(sizeof(*vc_ctrl_req), GFP_KERNEL);
++	if (!vc_ctrl_req)
++		return -ENOMEM;
+ 
+-	spin_lock(&vcrypto->ctrl_lock);
+-	vcrypto->ctrl_status.status = VIRTIO_CRYPTO_ERR;
++	ctrl_status = &vc_ctrl_req->ctrl_status;
++	ctrl_status->status = VIRTIO_CRYPTO_ERR;
+ 	/* Pad ctrl header */
+-	vcrypto->ctrl.header.opcode =
+-		cpu_to_le32(VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION);
++	ctrl = &vc_ctrl_req->ctrl;
++	ctrl->header.opcode = cpu_to_le32(VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION);
+ 	/* Set the default virtqueue id to 0 */
+-	vcrypto->ctrl.header.queue_id = 0;
++	ctrl->header.queue_id = 0;
+ 
+-	destroy_session = &vcrypto->ctrl.u.destroy_session;
++	destroy_session = &ctrl->u.destroy_session;
+ 
+ 	if (encrypt)
+-		destroy_session->session_id =
+-			cpu_to_le64(ctx->enc_sess_info.session_id);
++		destroy_session->session_id = cpu_to_le64(ctx->enc_sess_info.session_id);
+ 	else
+-		destroy_session->session_id =
+-			cpu_to_le64(ctx->dec_sess_info.session_id);
++		destroy_session->session_id = cpu_to_le64(ctx->dec_sess_info.session_id);
+ 
+-	sg_init_one(&outhdr, &vcrypto->ctrl, sizeof(vcrypto->ctrl));
++	sg_init_one(&outhdr, ctrl, sizeof(*ctrl));
+ 	sgs[num_out++] = &outhdr;
+ 
+ 	/* Return status and session id back */
+-	sg_init_one(&status_sg, &vcrypto->ctrl_status.status,
+-		sizeof(vcrypto->ctrl_status.status));
++	sg_init_one(&status_sg, &ctrl_status->status, sizeof(ctrl_status->status));
+ 	sgs[num_out + num_in++] = &status_sg;
+ 
+-	err = virtqueue_add_sgs(vcrypto->ctrl_vq, sgs, num_out,
+-			num_in, vcrypto, GFP_ATOMIC);
+-	if (err < 0) {
+-		spin_unlock(&vcrypto->ctrl_lock);
+-		return err;
+-	}
+-	virtqueue_kick(vcrypto->ctrl_vq);
++	err = virtio_crypto_ctrl_vq_request(vcrypto, sgs, num_out, num_in, vc_ctrl_req);
++	if (err < 0)
++		goto out;
+ 
+-	while (!virtqueue_get_buf(vcrypto->ctrl_vq, &tmp) &&
+-	       !virtqueue_is_broken(vcrypto->ctrl_vq))
+-		cpu_relax();
+-
+-	if (vcrypto->ctrl_status.status != VIRTIO_CRYPTO_OK) {
+-		spin_unlock(&vcrypto->ctrl_lock);
++	if (ctrl_status->status != VIRTIO_CRYPTO_OK) {
+ 		pr_err("virtio_crypto: Close session failed status: %u, session_id: 0x%llx\n",
+-			vcrypto->ctrl_status.status,
+-			destroy_session->session_id);
++			ctrl_status->status, destroy_session->session_id);
+ 
+-		return -EINVAL;
++		err = -EINVAL;
++		goto out;
+ 	}
+-	spin_unlock(&vcrypto->ctrl_lock);
+ 
+-	return 0;
++	err = 0;
++out:
++	kfree(vc_ctrl_req);
++	return err;
+ }
+ 
+ static int virtio_crypto_alg_skcipher_init_sessions(
+diff --git a/drivers/crypto/virtio/virtio_crypto_common.h b/drivers/crypto/virtio/virtio_crypto_common.h
+index a24f85c589e7e..d61c991d77735 100644
+--- a/drivers/crypto/virtio/virtio_crypto_common.h
++++ b/drivers/crypto/virtio/virtio_crypto_common.h
+@@ -10,9 +10,11 @@
+ #include <linux/virtio.h>
+ #include <linux/crypto.h>
+ #include <linux/spinlock.h>
++#include <linux/interrupt.h>
+ #include <crypto/aead.h>
+ #include <crypto/aes.h>
+ #include <crypto/engine.h>
++#include <uapi/linux/virtio_crypto.h>
+ 
+ 
+ /* Internal representation of a data virtqueue */
+@@ -27,6 +29,7 @@ struct data_queue {
+ 	char name[32];
+ 
+ 	struct crypto_engine *engine;
++	struct tasklet_struct done_task;
+ };
+ 
+ struct virtio_crypto {
+@@ -56,6 +59,7 @@ struct virtio_crypto {
+ 	u32 mac_algo_l;
+ 	u32 mac_algo_h;
+ 	u32 aead_algo;
++	u32 akcipher_algo;
+ 
+ 	/* Maximum length of cipher key */
+ 	u32 max_cipher_key_len;
+@@ -64,11 +68,6 @@ struct virtio_crypto {
+ 	/* Maximum size of per request */
+ 	u64 max_size;
+ 
+-	/* Control VQ buffers: protected by the ctrl_lock */
+-	struct virtio_crypto_op_ctrl_req ctrl;
+-	struct virtio_crypto_session_input input;
+-	struct virtio_crypto_inhdr ctrl_status;
+-
+ 	unsigned long status;
+ 	atomic_t ref_count;
+ 	struct list_head list;
+@@ -84,6 +83,18 @@ struct virtio_crypto_sym_session_info {
+ 	__u64 session_id;
+ };
+ 
++/*
++ * Note: there are padding fields in request, clear them to zero before
++ *       sending to host to avoid to divulge any information.
++ * Ex, virtio_crypto_ctrl_request::ctrl::u::destroy_session::padding[48]
++ */
++struct virtio_crypto_ctrl_request {
++	struct virtio_crypto_op_ctrl_req ctrl;
++	struct virtio_crypto_session_input input;
++	struct virtio_crypto_inhdr ctrl_status;
++	struct completion compl;
++};
++
+ struct virtio_crypto_request;
+ typedef void (*virtio_crypto_data_callback)
+ 		(struct virtio_crypto_request *vc_req, int len);
+@@ -131,5 +142,10 @@ static inline int virtio_crypto_get_current_node(void)
+ 
+ int virtio_crypto_algs_register(struct virtio_crypto *vcrypto);
+ void virtio_crypto_algs_unregister(struct virtio_crypto *vcrypto);
++int virtio_crypto_akcipher_algs_register(struct virtio_crypto *vcrypto);
++void virtio_crypto_akcipher_algs_unregister(struct virtio_crypto *vcrypto);
++int virtio_crypto_ctrl_vq_request(struct virtio_crypto *vcrypto, struct scatterlist *sgs[],
++				  unsigned int out_sgs, unsigned int in_sgs,
++				  struct virtio_crypto_ctrl_request *vc_ctrl_req);
+ 
+ #endif /* _VIRTIO_CRYPTO_COMMON_H */
+diff --git a/drivers/crypto/virtio/virtio_crypto_core.c b/drivers/crypto/virtio/virtio_crypto_core.c
+index 080955a1dd9c0..3da956145892e 100644
+--- a/drivers/crypto/virtio/virtio_crypto_core.c
++++ b/drivers/crypto/virtio/virtio_crypto_core.c
+@@ -22,27 +22,78 @@ virtcrypto_clear_request(struct virtio_crypto_request *vc_req)
+ 	}
+ }
+ 
+-static void virtcrypto_dataq_callback(struct virtqueue *vq)
++static void virtio_crypto_ctrlq_callback(struct virtio_crypto_ctrl_request *vc_ctrl_req)
++{
++	complete(&vc_ctrl_req->compl);
++}
++
++static void virtcrypto_ctrlq_callback(struct virtqueue *vq)
+ {
+ 	struct virtio_crypto *vcrypto = vq->vdev->priv;
+-	struct virtio_crypto_request *vc_req;
++	struct virtio_crypto_ctrl_request *vc_ctrl_req;
+ 	unsigned long flags;
+ 	unsigned int len;
+-	unsigned int qid = vq->index;
+ 
+-	spin_lock_irqsave(&vcrypto->data_vq[qid].lock, flags);
++	spin_lock_irqsave(&vcrypto->ctrl_lock, flags);
++	do {
++		virtqueue_disable_cb(vq);
++		while ((vc_ctrl_req = virtqueue_get_buf(vq, &len)) != NULL) {
++			spin_unlock_irqrestore(&vcrypto->ctrl_lock, flags);
++			virtio_crypto_ctrlq_callback(vc_ctrl_req);
++			spin_lock_irqsave(&vcrypto->ctrl_lock, flags);
++		}
++		if (unlikely(virtqueue_is_broken(vq)))
++			break;
++	} while (!virtqueue_enable_cb(vq));
++	spin_unlock_irqrestore(&vcrypto->ctrl_lock, flags);
++}
++
++int virtio_crypto_ctrl_vq_request(struct virtio_crypto *vcrypto, struct scatterlist *sgs[],
++		unsigned int out_sgs, unsigned int in_sgs,
++		struct virtio_crypto_ctrl_request *vc_ctrl_req)
++{
++	int err;
++	unsigned long flags;
++
++	init_completion(&vc_ctrl_req->compl);
++
++	spin_lock_irqsave(&vcrypto->ctrl_lock, flags);
++	err = virtqueue_add_sgs(vcrypto->ctrl_vq, sgs, out_sgs, in_sgs, vc_ctrl_req, GFP_ATOMIC);
++	if (err < 0) {
++		spin_unlock_irqrestore(&vcrypto->ctrl_lock, flags);
++		return err;
++	}
++
++	virtqueue_kick(vcrypto->ctrl_vq);
++	spin_unlock_irqrestore(&vcrypto->ctrl_lock, flags);
++
++	wait_for_completion(&vc_ctrl_req->compl);
++
++	return 0;
++}
++
++static void virtcrypto_done_task(unsigned long data)
++{
++	struct data_queue *data_vq = (struct data_queue *)data;
++	struct virtqueue *vq = data_vq->vq;
++	struct virtio_crypto_request *vc_req;
++	unsigned int len;
++
+ 	do {
+ 		virtqueue_disable_cb(vq);
+ 		while ((vc_req = virtqueue_get_buf(vq, &len)) != NULL) {
+-			spin_unlock_irqrestore(
+-				&vcrypto->data_vq[qid].lock, flags);
+ 			if (vc_req->alg_cb)
+ 				vc_req->alg_cb(vc_req, len);
+-			spin_lock_irqsave(
+-				&vcrypto->data_vq[qid].lock, flags);
+ 		}
+ 	} while (!virtqueue_enable_cb(vq));
+-	spin_unlock_irqrestore(&vcrypto->data_vq[qid].lock, flags);
++}
++
++static void virtcrypto_dataq_callback(struct virtqueue *vq)
++{
++	struct virtio_crypto *vcrypto = vq->vdev->priv;
++	struct data_queue *dq = &vcrypto->data_vq[vq->index];
++
++	tasklet_schedule(&dq->done_task);
+ }
+ 
+ static int virtcrypto_find_vqs(struct virtio_crypto *vi)
+@@ -73,7 +124,7 @@ static int virtcrypto_find_vqs(struct virtio_crypto *vi)
+ 		goto err_names;
+ 
+ 	/* Parameters for control virtqueue */
+-	callbacks[total_vqs - 1] = NULL;
++	callbacks[total_vqs - 1] = virtcrypto_ctrlq_callback;
+ 	names[total_vqs - 1] = "controlq";
+ 
+ 	/* Allocate/initialize parameters for data virtqueues */
+@@ -99,6 +150,8 @@ static int virtcrypto_find_vqs(struct virtio_crypto *vi)
+ 			ret = -ENOMEM;
+ 			goto err_engine;
+ 		}
++		tasklet_init(&vi->data_vq[i].done_task, virtcrypto_done_task,
++				(unsigned long)&vi->data_vq[i]);
+ 	}
+ 
+ 	kfree(names);
+@@ -297,6 +350,7 @@ static int virtcrypto_probe(struct virtio_device *vdev)
+ 	u32 mac_algo_l = 0;
+ 	u32 mac_algo_h = 0;
+ 	u32 aead_algo = 0;
++	u32 akcipher_algo = 0;
+ 	u32 crypto_services = 0;
+ 
+ 	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
+@@ -348,6 +402,9 @@ static int virtcrypto_probe(struct virtio_device *vdev)
+ 			mac_algo_h, &mac_algo_h);
+ 	virtio_cread_le(vdev, struct virtio_crypto_config,
+ 			aead_algo, &aead_algo);
++	if (crypto_services & (1 << VIRTIO_CRYPTO_SERVICE_AKCIPHER))
++		virtio_cread_le(vdev, struct virtio_crypto_config,
++				akcipher_algo, &akcipher_algo);
+ 
+ 	/* Add virtio crypto device to global table */
+ 	err = virtcrypto_devmgr_add_dev(vcrypto);
+@@ -374,7 +431,7 @@ static int virtcrypto_probe(struct virtio_device *vdev)
+ 	vcrypto->mac_algo_h = mac_algo_h;
+ 	vcrypto->hash_algo = hash_algo;
+ 	vcrypto->aead_algo = aead_algo;
+-
++	vcrypto->akcipher_algo = akcipher_algo;
+ 
+ 	dev_info(&vdev->dev,
+ 		"max_queues: %u, max_cipher_key_len: %u, max_auth_key_len: %u, max_size 0x%llx\n",
+@@ -431,11 +488,14 @@ static void virtcrypto_free_unused_reqs(struct virtio_crypto *vcrypto)
+ static void virtcrypto_remove(struct virtio_device *vdev)
+ {
+ 	struct virtio_crypto *vcrypto = vdev->priv;
++	int i;
+ 
+ 	dev_info(&vdev->dev, "Start virtcrypto_remove.\n");
+ 
+ 	if (virtcrypto_dev_started(vcrypto))
+ 		virtcrypto_dev_stop(vcrypto);
++	for (i = 0; i < vcrypto->max_data_queues; i++)
++		tasklet_kill(&vcrypto->data_vq[i].done_task);
+ 	vdev->config->reset(vdev);
+ 	virtcrypto_free_unused_reqs(vcrypto);
+ 	virtcrypto_clear_crypto_engines(vcrypto);
+diff --git a/drivers/crypto/virtio/virtio_crypto_mgr.c b/drivers/crypto/virtio/virtio_crypto_mgr.c
+index 6860f8180c7c1..1cb92418b3216 100644
+--- a/drivers/crypto/virtio/virtio_crypto_mgr.c
++++ b/drivers/crypto/virtio/virtio_crypto_mgr.c
+@@ -242,6 +242,12 @@ int virtcrypto_dev_start(struct virtio_crypto *vcrypto)
+ 		return -EFAULT;
+ 	}
+ 
++	if (virtio_crypto_akcipher_algs_register(vcrypto)) {
++		pr_err("virtio_crypto: Failed to register crypto akcipher algs\n");
++		virtio_crypto_algs_unregister(vcrypto);
++		return -EFAULT;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -258,6 +264,7 @@ int virtcrypto_dev_start(struct virtio_crypto *vcrypto)
+ void virtcrypto_dev_stop(struct virtio_crypto *vcrypto)
+ {
+ 	virtio_crypto_algs_unregister(vcrypto);
++	virtio_crypto_akcipher_algs_unregister(vcrypto);
+ }
+ 
+ /*
+@@ -312,6 +319,10 @@ bool virtcrypto_algo_is_supported(struct virtio_crypto *vcrypto,
+ 	case VIRTIO_CRYPTO_SERVICE_AEAD:
+ 		algo_mask = vcrypto->aead_algo;
+ 		break;
++
++	case VIRTIO_CRYPTO_SERVICE_AKCIPHER:
++		algo_mask = vcrypto->akcipher_algo;
++		break;
+ 	}
+ 
+ 	if (!(algo_mask & (1u << algo)))
+diff --git a/drivers/edac/thunderx_edac.c b/drivers/edac/thunderx_edac.c
+index 0eb5eb97fd742..3b0e1fe27d938 100644
+--- a/drivers/edac/thunderx_edac.c
++++ b/drivers/edac/thunderx_edac.c
+@@ -1133,7 +1133,7 @@ static irqreturn_t thunderx_ocx_com_threaded_isr(int irq, void *irq_id)
+ 		decode_register(other, OCX_OTHER_SIZE,
+ 				ocx_com_errors, ctx->reg_com_int);
+ 
+-		strncat(msg, other, OCX_MESSAGE_SIZE);
++		strlcat(msg, other, OCX_MESSAGE_SIZE);
+ 
+ 		for (lane = 0; lane < OCX_RX_LANES; lane++)
+ 			if (ctx->reg_com_int & BIT(lane)) {
+@@ -1142,12 +1142,12 @@ static irqreturn_t thunderx_ocx_com_threaded_isr(int irq, void *irq_id)
+ 					 lane, ctx->reg_lane_int[lane],
+ 					 lane, ctx->reg_lane_stat11[lane]);
+ 
+-				strncat(msg, other, OCX_MESSAGE_SIZE);
++				strlcat(msg, other, OCX_MESSAGE_SIZE);
+ 
+ 				decode_register(other, OCX_OTHER_SIZE,
+ 						ocx_lane_errors,
+ 						ctx->reg_lane_int[lane]);
+-				strncat(msg, other, OCX_MESSAGE_SIZE);
++				strlcat(msg, other, OCX_MESSAGE_SIZE);
+ 			}
+ 
+ 		if (ctx->reg_com_int & OCX_COM_INT_CE)
+@@ -1217,7 +1217,7 @@ static irqreturn_t thunderx_ocx_lnk_threaded_isr(int irq, void *irq_id)
+ 		decode_register(other, OCX_OTHER_SIZE,
+ 				ocx_com_link_errors, ctx->reg_com_link_int);
+ 
+-		strncat(msg, other, OCX_MESSAGE_SIZE);
++		strlcat(msg, other, OCX_MESSAGE_SIZE);
+ 
+ 		if (ctx->reg_com_link_int & OCX_COM_LINK_INT_UE)
+ 			edac_device_handle_ue(ocx->edac_dev, 0, 0, msg);
+@@ -1896,7 +1896,7 @@ static irqreturn_t thunderx_l2c_threaded_isr(int irq, void *irq_id)
+ 
+ 		decode_register(other, L2C_OTHER_SIZE, l2_errors, ctx->reg_int);
+ 
+-		strncat(msg, other, L2C_MESSAGE_SIZE);
++		strlcat(msg, other, L2C_MESSAGE_SIZE);
+ 
+ 		if (ctx->reg_int & mask_ue)
+ 			edac_device_handle_ue(l2c->edac_dev, 0, 0, msg);
+diff --git a/drivers/firmware/meson/meson_sm.c b/drivers/firmware/meson/meson_sm.c
+index ed27ff2e503ef..7dff8833132dc 100644
+--- a/drivers/firmware/meson/meson_sm.c
++++ b/drivers/firmware/meson/meson_sm.c
+@@ -313,11 +313,14 @@ static int __init meson_sm_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, fw);
+ 
+-	pr_info("secure-monitor enabled\n");
++	if (devm_of_platform_populate(dev))
++		goto out_in_base;
+ 
+ 	if (sysfs_create_group(&pdev->dev.kobj, &meson_sm_sysfs_attr_group))
+ 		goto out_in_base;
+ 
++	pr_info("secure-monitor enabled\n");
++
+ 	return 0;
+ 
+ out_in_base:
+diff --git a/drivers/firmware/ti_sci.c b/drivers/firmware/ti_sci.c
+index fe6be0771a076..b0576cec263ba 100644
+--- a/drivers/firmware/ti_sci.c
++++ b/drivers/firmware/ti_sci.c
+@@ -161,7 +161,7 @@ static int ti_sci_debugfs_create(struct platform_device *pdev,
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct resource *res;
+-	char debug_name[50] = "ti_sci_debug@";
++	char debug_name[50];
+ 
+ 	/* Debug region is optional */
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+@@ -178,10 +178,10 @@ static int ti_sci_debugfs_create(struct platform_device *pdev,
+ 	/* Setup NULL termination */
+ 	info->debug_buffer[info->debug_region_size] = 0;
+ 
+-	info->d = debugfs_create_file(strncat(debug_name, dev_name(dev),
+-					      sizeof(debug_name) -
+-					      sizeof("ti_sci_debug@")),
+-				      0444, NULL, info, &ti_sci_debug_fops);
++	snprintf(debug_name, sizeof(debug_name), "ti_sci_debug@%s",
++		 dev_name(dev));
++	info->d = debugfs_create_file(debug_name, 0444, NULL, info,
++				      &ti_sci_debug_fops);
+ 	if (IS_ERR(info->d))
+ 		return PTR_ERR(info->d);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+index 48df32dd352ed..8a1cb1de2b13a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+@@ -459,6 +459,9 @@ static ssize_t amdgpu_debugfs_regs_didt_read(struct file *f, char __user *buf,
+ 	if (size & 0x3 || *pos & 0x3)
+ 		return -EINVAL;
+ 
++	if (!adev->didt_rreg)
++		return -EOPNOTSUPP;
++
+ 	r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
+ 	if (r < 0) {
+ 		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
+@@ -518,6 +521,9 @@ static ssize_t amdgpu_debugfs_regs_didt_write(struct file *f, const char __user
+ 	if (size & 0x3 || *pos & 0x3)
+ 		return -EINVAL;
+ 
++	if (!adev->didt_wreg)
++		return -EOPNOTSUPP;
++
+ 	r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
+ 	if (r < 0) {
+ 		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
+@@ -576,7 +582,7 @@ static ssize_t amdgpu_debugfs_regs_smc_read(struct file *f, char __user *buf,
+ 	int r;
+ 
+ 	if (!adev->smc_rreg)
+-		return -EPERM;
++		return -EOPNOTSUPP;
+ 
+ 	if (size & 0x3 || *pos & 0x3)
+ 		return -EINVAL;
+@@ -638,7 +644,7 @@ static ssize_t amdgpu_debugfs_regs_smc_write(struct file *f, const char __user *
+ 	int r;
+ 
+ 	if (!adev->smc_wreg)
+-		return -EPERM;
++		return -EOPNOTSUPP;
+ 
+ 	if (size & 0x3 || *pos & 0x3)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
+index c8a5a5698edd9..6eb6f05c11367 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
+@@ -2733,10 +2733,8 @@ static int kv_parse_power_table(struct amdgpu_device *adev)
+ 		non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
+ 			&non_clock_info_array->nonClockInfo[non_clock_array_index];
+ 		ps = kzalloc(sizeof(struct kv_ps), GFP_KERNEL);
+-		if (ps == NULL) {
+-			kfree(adev->pm.dpm.ps);
++		if (ps == NULL)
+ 			return -ENOMEM;
+-		}
+ 		adev->pm.dpm.ps[i].ps_priv = ps;
+ 		k = 0;
+ 		idx = (u8 *)&power_state->v2.clockInfoIndex[0];
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+index d6544a6dabc7c..6f0653c81f8fb 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+@@ -7349,10 +7349,9 @@ static int si_dpm_init(struct amdgpu_device *adev)
+ 		kcalloc(4,
+ 			sizeof(struct amdgpu_clock_voltage_dependency_entry),
+ 			GFP_KERNEL);
+-	if (!adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries) {
+-		amdgpu_free_extended_power_table(adev);
++	if (!adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries)
+ 		return -ENOMEM;
+-	}
++
+ 	adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.count = 4;
+ 	adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries[0].clk = 0;
+ 	adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries[0].v = 0;
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index b4f7e7a7f7c51..9c905634fec79 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -1637,7 +1637,7 @@ static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 	} else {
+ 		if (tc->hpd_pin < 0 || tc->hpd_pin > 1) {
+ 			dev_err(dev, "failed to parse HPD number\n");
+-			return ret;
++			return -EINVAL;
+ 		}
+ 	}
+ 
+diff --git a/drivers/gpu/drm/bridge/ti-tpd12s015.c b/drivers/gpu/drm/bridge/ti-tpd12s015.c
+index e0e015243a602..b588fea12502d 100644
+--- a/drivers/gpu/drm/bridge/ti-tpd12s015.c
++++ b/drivers/gpu/drm/bridge/ti-tpd12s015.c
+@@ -179,7 +179,7 @@ static int tpd12s015_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static int __exit tpd12s015_remove(struct platform_device *pdev)
++static int tpd12s015_remove(struct platform_device *pdev)
+ {
+ 	struct tpd12s015_device *tpd = platform_get_drvdata(pdev);
+ 
+@@ -197,7 +197,7 @@ MODULE_DEVICE_TABLE(of, tpd12s015_of_match);
+ 
+ static struct platform_driver tpd12s015_driver = {
+ 	.probe	= tpd12s015_probe,
+-	.remove	= __exit_p(tpd12s015_remove),
++	.remove = tpd12s015_remove,
+ 	.driver	= {
+ 		.name	= "tpd12s015",
+ 		.of_match_table = tpd12s015_of_match,
+diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
+index aecdd7ea26dc8..4ed3fc28d4dab 100644
+--- a/drivers/gpu/drm/drm_crtc.c
++++ b/drivers/gpu/drm/drm_crtc.c
+@@ -562,8 +562,7 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
+ 	struct drm_mode_set set;
+ 	uint32_t __user *set_connectors_ptr;
+ 	struct drm_modeset_acquire_ctx ctx;
+-	int ret;
+-	int i;
++	int ret, i, num_connectors = 0;
+ 
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+@@ -721,6 +720,7 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
+ 					connector->name);
+ 
+ 			connector_set[i] = connector;
++			num_connectors++;
+ 		}
+ 	}
+ 
+@@ -729,7 +729,7 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
+ 	set.y = crtc_req->y;
+ 	set.mode = mode;
+ 	set.connectors = connector_set;
+-	set.num_connectors = crtc_req->count_connectors;
++	set.num_connectors = num_connectors;
+ 	set.fb = fb;
+ 
+ 	if (drm_drv_uses_atomic_modeset(dev))
+@@ -742,7 +742,7 @@ out:
+ 		drm_framebuffer_put(fb);
+ 
+ 	if (connector_set) {
+-		for (i = 0; i < crtc_req->count_connectors; i++) {
++		for (i = 0; i < num_connectors; i++) {
+ 			if (connector_set[i])
+ 				drm_connector_put(connector_set[i]);
+ 		}
+diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
+index 4ca995ce19af6..10831d8d71486 100644
+--- a/drivers/gpu/drm/drm_drv.c
++++ b/drivers/gpu/drm/drm_drv.c
+@@ -892,8 +892,11 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags)
+ 			goto err_minors;
+ 	}
+ 
+-	if (drm_core_check_feature(dev, DRIVER_MODESET))
+-		drm_modeset_register_all(dev);
++	if (drm_core_check_feature(dev, DRIVER_MODESET)) {
++		ret = drm_modeset_register_all(dev);
++		if (ret)
++			goto err_unload;
++	}
+ 
+ 	ret = 0;
+ 
+@@ -905,6 +908,9 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags)
+ 
+ 	goto out_unlock;
+ 
++err_unload:
++	if (dev->driver->unload)
++		dev->driver->unload(dev);
+ err_minors:
+ 	remove_compat_control_link(dev);
+ 	drm_minor_unregister(dev, DRM_MINOR_PRIMARY);
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_dma.c b/drivers/gpu/drm/exynos/exynos_drm_dma.c
+index bf33c3084cb41..6b4d6da3b1f4e 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_dma.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_dma.c
+@@ -108,18 +108,16 @@ int exynos_drm_register_dma(struct drm_device *drm, struct device *dev,
+ 		return 0;
+ 
+ 	if (!priv->mapping) {
+-		void *mapping;
++		void *mapping = NULL;
+ 
+ 		if (IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU))
+ 			mapping = arm_iommu_create_mapping(&platform_bus_type,
+ 				EXYNOS_DEV_ADDR_START, EXYNOS_DEV_ADDR_SIZE);
+ 		else if (IS_ENABLED(CONFIG_IOMMU_DMA))
+ 			mapping = iommu_get_domain_for_dev(priv->dma_dev);
+-		else
+-			mapping = ERR_PTR(-ENODEV);
+ 
+-		if (IS_ERR(mapping))
+-			return PTR_ERR(mapping);
++		if (!mapping)
++			return -ENODEV;
+ 		priv->mapping = mapping;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c b/drivers/gpu/drm/exynos/exynos_hdmi.c
+index dc01c188c0e09..981bffacda243 100644
+--- a/drivers/gpu/drm/exynos/exynos_hdmi.c
++++ b/drivers/gpu/drm/exynos/exynos_hdmi.c
+@@ -1849,6 +1849,8 @@ static int hdmi_bind(struct device *dev, struct device *master, void *data)
+ 		return ret;
+ 
+ 	crtc = exynos_drm_crtc_get_by_type(drm_dev, EXYNOS_DISPLAY_TYPE_HDMI);
++	if (IS_ERR(crtc))
++		return PTR_ERR(crtc);
+ 	crtc->pipe_clk = &hdata->phy_clk;
+ 
+ 	ret = hdmi_create_connector(encoder);
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
+index a0253297bc769..bf54b9e4fd61a 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
+@@ -268,6 +268,7 @@ static void mdp4_crtc_atomic_disable(struct drm_crtc *crtc,
+ {
+ 	struct mdp4_crtc *mdp4_crtc = to_mdp4_crtc(crtc);
+ 	struct mdp4_kms *mdp4_kms = get_kms(crtc);
++	unsigned long flags;
+ 
+ 	DBG("%s", mdp4_crtc->name);
+ 
+@@ -280,6 +281,14 @@ static void mdp4_crtc_atomic_disable(struct drm_crtc *crtc,
+ 	mdp_irq_unregister(&mdp4_kms->base, &mdp4_crtc->err);
+ 	mdp4_disable(mdp4_kms);
+ 
++	if (crtc->state->event && !crtc->state->active) {
++		WARN_ON(mdp4_crtc->event);
++		spin_lock_irqsave(&mdp4_kms->dev->event_lock, flags);
++		drm_crtc_send_vblank_event(crtc, crtc->state->event);
++		crtc->state->event = NULL;
++		spin_unlock_irqrestore(&mdp4_kms->dev->event_lock, flags);
++	}
++
+ 	mdp4_crtc->enabled = false;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+index 2e0be85ec3947..10eacfd95fb1c 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+@@ -558,7 +558,9 @@ static int dsi_phy_enable_resource(struct msm_dsi_phy *phy)
+ 	struct device *dev = &phy->pdev->dev;
+ 	int ret;
+ 
+-	pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
++	if (ret)
++		return ret;
+ 
+ 	ret = clk_prepare_enable(phy->ahb_clk);
+ 	if (ret) {
+diff --git a/drivers/gpu/drm/nouveau/nv04_fence.c b/drivers/gpu/drm/nouveau/nv04_fence.c
+index 5b71a5a5cd85c..cdbc75e3d1f66 100644
+--- a/drivers/gpu/drm/nouveau/nv04_fence.c
++++ b/drivers/gpu/drm/nouveau/nv04_fence.c
+@@ -39,7 +39,7 @@ struct nv04_fence_priv {
+ static int
+ nv04_fence_emit(struct nouveau_fence *fence)
+ {
+-	struct nvif_push *push = fence->channel->chan.push;
++	struct nvif_push *push = unrcu_pointer(fence->channel)->chan.push;
+ 	int ret = PUSH_WAIT(push, 2);
+ 	if (ret == 0) {
+ 		PUSH_NVSQ(push, NV_SW, 0x0150, fence->base.seqno);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmtu102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmtu102.c
+index b1294d0076c08..72449bf613bf1 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmtu102.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmtu102.c
+@@ -32,7 +32,7 @@ tu102_vmm_flush(struct nvkm_vmm *vmm, int depth)
+ 
+ 	type |= 0x00000001; /* PAGE_ALL */
+ 	if (atomic_read(&vmm->engref[NVKM_SUBDEV_BAR]))
+-		type |= 0x00000004; /* HUB_ONLY */
++		type |= 0x00000006; /* HUB_ONLY | ALL PDB (hack) */
+ 
+ 	mutex_lock(&subdev->mutex);
+ 
+diff --git a/drivers/gpu/drm/panel/panel-elida-kd35t133.c b/drivers/gpu/drm/panel/panel-elida-kd35t133.c
+index fe5ac3ef90185..9c1591f2920c6 100644
+--- a/drivers/gpu/drm/panel/panel-elida-kd35t133.c
++++ b/drivers/gpu/drm/panel/panel-elida-kd35t133.c
+@@ -111,6 +111,8 @@ static int kd35t133_unprepare(struct drm_panel *panel)
+ 		return ret;
+ 	}
+ 
++	gpiod_set_value_cansleep(ctx->reset_gpio, 1);
++
+ 	regulator_disable(ctx->iovcc);
+ 	regulator_disable(ctx->vdd);
+ 
+diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
+index 24c8db673931a..6e4600c216974 100644
+--- a/drivers/gpu/drm/radeon/r100.c
++++ b/drivers/gpu/drm/radeon/r100.c
+@@ -2313,7 +2313,7 @@ int r100_cs_track_check(struct radeon_device *rdev, struct r100_cs_track *track)
+ 	switch (prim_walk) {
+ 	case 1:
+ 		for (i = 0; i < track->num_arrays; i++) {
+-			size = track->arrays[i].esize * track->max_indx * 4;
++			size = track->arrays[i].esize * track->max_indx * 4UL;
+ 			if (track->arrays[i].robj == NULL) {
+ 				DRM_ERROR("(PW %u) Vertex array %u no buffer "
+ 					  "bound\n", prim_walk, i);
+@@ -2332,7 +2332,7 @@ int r100_cs_track_check(struct radeon_device *rdev, struct r100_cs_track *track)
+ 		break;
+ 	case 2:
+ 		for (i = 0; i < track->num_arrays; i++) {
+-			size = track->arrays[i].esize * (nverts - 1) * 4;
++			size = track->arrays[i].esize * (nverts - 1) * 4UL;
+ 			if (track->arrays[i].robj == NULL) {
+ 				DRM_ERROR("(PW %u) Vertex array %u no buffer "
+ 					  "bound\n", prim_walk, i);
+diff --git a/drivers/gpu/drm/radeon/r600_cs.c b/drivers/gpu/drm/radeon/r600_cs.c
+index 390a9621604ae..1e6ad9daff534 100644
+--- a/drivers/gpu/drm/radeon/r600_cs.c
++++ b/drivers/gpu/drm/radeon/r600_cs.c
+@@ -1276,7 +1276,7 @@ static int r600_cs_check_reg(struct radeon_cs_parser *p, u32 reg, u32 idx)
+ 			return -EINVAL;
+ 		}
+ 		tmp = (reg - CB_COLOR0_BASE) / 4;
+-		track->cb_color_bo_offset[tmp] = radeon_get_ib_value(p, idx) << 8;
++		track->cb_color_bo_offset[tmp] = (u64)radeon_get_ib_value(p, idx) << 8;
+ 		ib[idx] += (u32)((reloc->gpu_offset >> 8) & 0xffffffff);
+ 		track->cb_color_base_last[tmp] = ib[idx];
+ 		track->cb_color_bo[tmp] = reloc->robj;
+@@ -1303,7 +1303,7 @@ static int r600_cs_check_reg(struct radeon_cs_parser *p, u32 reg, u32 idx)
+ 					"0x%04X\n", reg);
+ 			return -EINVAL;
+ 		}
+-		track->htile_offset = radeon_get_ib_value(p, idx) << 8;
++		track->htile_offset = (u64)radeon_get_ib_value(p, idx) << 8;
+ 		ib[idx] += (u32)((reloc->gpu_offset >> 8) & 0xffffffff);
+ 		track->htile_bo = reloc->robj;
+ 		track->db_dirty = true;
+diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
+index 71bdafac9210d..07d23a1e62a07 100644
+--- a/drivers/gpu/drm/radeon/radeon_display.c
++++ b/drivers/gpu/drm/radeon/radeon_display.c
+@@ -689,11 +689,16 @@ static void radeon_crtc_init(struct drm_device *dev, int index)
+ 	if (radeon_crtc == NULL)
+ 		return;
+ 
++	radeon_crtc->flip_queue = alloc_workqueue("radeon-crtc", WQ_HIGHPRI, 0);
++	if (!radeon_crtc->flip_queue) {
++		kfree(radeon_crtc);
++		return;
++	}
++
+ 	drm_crtc_init(dev, &radeon_crtc->base, &radeon_crtc_funcs);
+ 
+ 	drm_mode_crtc_set_gamma_size(&radeon_crtc->base, 256);
+ 	radeon_crtc->crtc_id = index;
+-	radeon_crtc->flip_queue = alloc_workqueue("radeon-crtc", WQ_HIGHPRI, 0);
+ 	rdev->mode_info.crtcs[index] = radeon_crtc;
+ 
+ 	if (rdev->family >= CHIP_BONAIRE) {
+diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c
+index 27b14eff532cb..cb75ff1f6f2ce 100644
+--- a/drivers/gpu/drm/radeon/radeon_vm.c
++++ b/drivers/gpu/drm/radeon/radeon_vm.c
+@@ -1206,13 +1206,17 @@ int radeon_vm_init(struct radeon_device *rdev, struct radeon_vm *vm)
+ 	r = radeon_bo_create(rdev, pd_size, align, true,
+ 			     RADEON_GEM_DOMAIN_VRAM, 0, NULL,
+ 			     NULL, &vm->page_directory);
+-	if (r)
++	if (r) {
++		kfree(vm->page_tables);
++		vm->page_tables = NULL;
+ 		return r;
+-
++	}
+ 	r = radeon_vm_clear_bo(rdev, vm->page_directory);
+ 	if (r) {
+ 		radeon_bo_unref(&vm->page_directory);
+ 		vm->page_directory = NULL;
++		kfree(vm->page_tables);
++		vm->page_tables = NULL;
+ 		return r;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
+index 31e2c1083b089..e2eac766666e2 100644
+--- a/drivers/gpu/drm/radeon/si.c
++++ b/drivers/gpu/drm/radeon/si.c
+@@ -3616,6 +3616,10 @@ static int si_cp_start(struct radeon_device *rdev)
+ 	for (i = RADEON_RING_TYPE_GFX_INDEX; i <= CAYMAN_RING_TYPE_CP2_INDEX; ++i) {
+ 		ring = &rdev->ring[i];
+ 		r = radeon_ring_lock(rdev, ring, 2);
++		if (r) {
++			DRM_ERROR("radeon: cp failed to lock ring (%d).\n", r);
++			return r;
++		}
+ 
+ 		/* clear the compute context state */
+ 		radeon_ring_write(ring, PACKET3_COMPUTE(PACKET3_CLEAR_STATE, 0));
+diff --git a/drivers/gpu/drm/radeon/sumo_dpm.c b/drivers/gpu/drm/radeon/sumo_dpm.c
+index b95d5d390cafd..45d04996adf5b 100644
+--- a/drivers/gpu/drm/radeon/sumo_dpm.c
++++ b/drivers/gpu/drm/radeon/sumo_dpm.c
+@@ -1493,8 +1493,10 @@ static int sumo_parse_power_table(struct radeon_device *rdev)
+ 		non_clock_array_index = power_state->v2.nonClockInfoIndex;
+ 		non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
+ 			&non_clock_info_array->nonClockInfo[non_clock_array_index];
+-		if (!rdev->pm.power_state[i].clock_info)
++		if (!rdev->pm.power_state[i].clock_info) {
++			kfree(rdev->pm.dpm.ps);
+ 			return -EINVAL;
++		}
+ 		ps = kzalloc(sizeof(struct sumo_ps), GFP_KERNEL);
+ 		if (ps == NULL) {
+ 			kfree(rdev->pm.dpm.ps);
+diff --git a/drivers/gpu/drm/radeon/trinity_dpm.c b/drivers/gpu/drm/radeon/trinity_dpm.c
+index 4d93b84aa7397..49c28fbe366e6 100644
+--- a/drivers/gpu/drm/radeon/trinity_dpm.c
++++ b/drivers/gpu/drm/radeon/trinity_dpm.c
+@@ -1770,8 +1770,10 @@ static int trinity_parse_power_table(struct radeon_device *rdev)
+ 		non_clock_array_index = power_state->v2.nonClockInfoIndex;
+ 		non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
+ 			&non_clock_info_array->nonClockInfo[non_clock_array_index];
+-		if (!rdev->pm.power_state[i].clock_info)
++		if (!rdev->pm.power_state[i].clock_info) {
++			kfree(rdev->pm.dpm.ps);
+ 			return -EINVAL;
++		}
+ 		ps = kzalloc(sizeof(struct sumo_ps), GFP_KERNEL);
+ 		if (ps == NULL) {
+ 			kfree(rdev->pm.dpm.ps);
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 1b5e5e9f577db..726a5d76615d2 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2612,8 +2612,8 @@ static void wacom_wac_finger_slot(struct wacom_wac *wacom_wac,
+ {
+ 	struct hid_data *hid_data = &wacom_wac->hid_data;
+ 	bool mt = wacom_wac->features.touch_max > 1;
+-	bool prox = hid_data->tipswitch &&
+-		    report_touch_events(wacom_wac);
++	bool touch_down = hid_data->tipswitch && hid_data->confidence;
++	bool prox = touch_down && report_touch_events(wacom_wac);
+ 
+ 	if (wacom_wac->shared->has_mute_touch_switch &&
+ 	    !wacom_wac->shared->is_touch_on) {
+@@ -2652,24 +2652,6 @@ static void wacom_wac_finger_slot(struct wacom_wac *wacom_wac,
+ 	}
+ }
+ 
+-static bool wacom_wac_slot_is_active(struct input_dev *dev, int key)
+-{
+-	struct input_mt *mt = dev->mt;
+-	struct input_mt_slot *s;
+-
+-	if (!mt)
+-		return false;
+-
+-	for (s = mt->slots; s != mt->slots + mt->num_slots; s++) {
+-		if (s->key == key &&
+-			input_mt_get_value(s, ABS_MT_TRACKING_ID) >= 0) {
+-			return true;
+-		}
+-	}
+-
+-	return false;
+-}
+-
+ static void wacom_wac_finger_event(struct hid_device *hdev,
+ 		struct hid_field *field, struct hid_usage *usage, __s32 value)
+ {
+@@ -2717,14 +2699,8 @@ static void wacom_wac_finger_event(struct hid_device *hdev,
+ 	}
+ 
+ 	if (usage->usage_index + 1 == field->report_count) {
+-		if (equivalent_usage == wacom_wac->hid_data.last_slot_field) {
+-			bool touch_removed = wacom_wac_slot_is_active(wacom_wac->touch_input,
+-				wacom_wac->hid_data.id) && !wacom_wac->hid_data.tipswitch;
+-
+-			if (wacom_wac->hid_data.confidence || touch_removed) {
+-				wacom_wac_finger_slot(wacom_wac, wacom_wac->touch_input);
+-			}
+-		}
++		if (equivalent_usage == wacom_wac->hid_data.last_slot_field)
++			wacom_wac_finger_slot(wacom_wac, wacom_wac->touch_input);
+ 	}
+ }
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
+index eefc7371c6c4d..2c994a8c4a767 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.h
++++ b/drivers/hwtracing/coresight/coresight-etm4x.h
+@@ -440,7 +440,7 @@ struct etmv4_drvdata {
+ 	u8				ctxid_size;
+ 	u8				vmid_size;
+ 	u8				ccsize;
+-	u8				ccitmin;
++	u16				ccitmin;
+ 	u8				s_ex_level;
+ 	u8				ns_ex_level;
+ 	u8				q_support;
+diff --git a/drivers/i2c/busses/i2c-rk3x.c b/drivers/i2c/busses/i2c-rk3x.c
+index 13c14eb175e94..6abcf975a2db9 100644
+--- a/drivers/i2c/busses/i2c-rk3x.c
++++ b/drivers/i2c/busses/i2c-rk3x.c
+@@ -178,6 +178,7 @@ struct rk3x_i2c_soc_data {
+  * @clk: function clk for rk3399 or function & Bus clks for others
+  * @pclk: Bus clk for rk3399
+  * @clk_rate_nb: i2c clk rate change notify
++ * @irq: irq number
+  * @t: I2C known timing information
+  * @lock: spinlock for the i2c bus
+  * @wait: the waitqueue to wait for i2c transfer
+@@ -200,6 +201,7 @@ struct rk3x_i2c {
+ 	struct clk *clk;
+ 	struct clk *pclk;
+ 	struct notifier_block clk_rate_nb;
++	int irq;
+ 
+ 	/* Settings */
+ 	struct i2c_timings t;
+@@ -1087,13 +1089,18 @@ static int rk3x_i2c_xfer_common(struct i2c_adapter *adap,
+ 
+ 		spin_unlock_irqrestore(&i2c->lock, flags);
+ 
+-		rk3x_i2c_start(i2c);
+-
+ 		if (!polling) {
++			rk3x_i2c_start(i2c);
++
+ 			timeout = wait_event_timeout(i2c->wait, !i2c->busy,
+ 						     msecs_to_jiffies(WAIT_TIMEOUT));
+ 		} else {
++			disable_irq(i2c->irq);
++			rk3x_i2c_start(i2c);
++
+ 			timeout = rk3x_i2c_wait_xfer_poll(i2c);
++
++			enable_irq(i2c->irq);
+ 		}
+ 
+ 		spin_lock_irqsave(&i2c->lock, flags);
+@@ -1301,6 +1308,8 @@ static int rk3x_i2c_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	i2c->irq = irq;
++
+ 	platform_set_drvdata(pdev, i2c);
+ 
+ 	if (i2c->soc_data->calc_timings == rk3x_i2c_v0_calc_timings) {
+diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c
+index 05831848b7bf6..fd0969cd7dc67 100644
+--- a/drivers/i2c/busses/i2c-s3c2410.c
++++ b/drivers/i2c/busses/i2c-s3c2410.c
+@@ -223,8 +223,17 @@ static bool is_ack(struct s3c24xx_i2c *i2c)
+ 	int tries;
+ 
+ 	for (tries = 50; tries; --tries) {
+-		if (readl(i2c->regs + S3C2410_IICCON)
+-			& S3C2410_IICCON_IRQPEND) {
++		unsigned long tmp = readl(i2c->regs + S3C2410_IICCON);
++
++		if (!(tmp & S3C2410_IICCON_ACKEN)) {
++			/*
++			 * Wait a bit for the bus to stabilize,
++			 * delay estimated experimentally.
++			 */
++			usleep_range(100, 200);
++			return true;
++		}
++		if (tmp & S3C2410_IICCON_IRQPEND) {
+ 			if (!(readl(i2c->regs + S3C2410_IICSTAT)
+ 				& S3C2410_IICSTAT_LASTBIT))
+ 				return true;
+@@ -277,16 +286,6 @@ static void s3c24xx_i2c_message_start(struct s3c24xx_i2c *i2c,
+ 
+ 	stat |= S3C2410_IICSTAT_START;
+ 	writel(stat, i2c->regs + S3C2410_IICSTAT);
+-
+-	if (i2c->quirks & QUIRK_POLL) {
+-		while ((i2c->msg_num != 0) && is_ack(i2c)) {
+-			i2c_s3c_irq_nextbyte(i2c, stat);
+-			stat = readl(i2c->regs + S3C2410_IICSTAT);
+-
+-			if (stat & S3C2410_IICSTAT_ARBITR)
+-				dev_err(i2c->dev, "deal with arbitration loss\n");
+-		}
+-	}
+ }
+ 
+ static inline void s3c24xx_i2c_stop(struct s3c24xx_i2c *i2c, int ret)
+@@ -693,7 +692,7 @@ static void s3c24xx_i2c_wait_idle(struct s3c24xx_i2c *i2c)
+ static int s3c24xx_i2c_doxfer(struct s3c24xx_i2c *i2c,
+ 			      struct i2c_msg *msgs, int num)
+ {
+-	unsigned long timeout;
++	unsigned long timeout = 0;
+ 	int ret;
+ 
+ 	ret = s3c24xx_i2c_set_master(i2c);
+@@ -713,16 +712,19 @@ static int s3c24xx_i2c_doxfer(struct s3c24xx_i2c *i2c,
+ 	s3c24xx_i2c_message_start(i2c, msgs);
+ 
+ 	if (i2c->quirks & QUIRK_POLL) {
+-		ret = i2c->msg_idx;
++		while ((i2c->msg_num != 0) && is_ack(i2c)) {
++			unsigned long stat = readl(i2c->regs + S3C2410_IICSTAT);
+ 
+-		if (ret != num)
+-			dev_dbg(i2c->dev, "incomplete xfer (%d)\n", ret);
++			i2c_s3c_irq_nextbyte(i2c, stat);
+ 
+-		goto out;
++			stat = readl(i2c->regs + S3C2410_IICSTAT);
++			if (stat & S3C2410_IICSTAT_ARBITR)
++				dev_err(i2c->dev, "deal with arbitration loss\n");
++		}
++	} else {
++		timeout = wait_event_timeout(i2c->wait, i2c->msg_num == 0, HZ * 5);
+ 	}
+ 
+-	timeout = wait_event_timeout(i2c->wait, i2c->msg_num == 0, HZ * 5);
+-
+ 	ret = i2c->msg_idx;
+ 
+ 	/*
+diff --git a/drivers/iio/adc/ad7091r-base.c b/drivers/iio/adc/ad7091r-base.c
+index 63b4d6ea4566a..811f04448d8d9 100644
+--- a/drivers/iio/adc/ad7091r-base.c
++++ b/drivers/iio/adc/ad7091r-base.c
+@@ -174,8 +174,8 @@ static const struct iio_info ad7091r_info = {
+ 
+ static irqreturn_t ad7091r_event_handler(int irq, void *private)
+ {
+-	struct ad7091r_state *st = (struct ad7091r_state *) private;
+-	struct iio_dev *iio_dev = dev_get_drvdata(st->dev);
++	struct iio_dev *iio_dev = private;
++	struct ad7091r_state *st = iio_priv(iio_dev);
+ 	unsigned int i, read_val;
+ 	int ret;
+ 	s64 timestamp = iio_get_time_ns(iio_dev);
+@@ -234,7 +234,7 @@ int ad7091r_probe(struct device *dev, const char *name,
+ 	if (irq) {
+ 		ret = devm_request_threaded_irq(dev, irq, NULL,
+ 				ad7091r_event_handler,
+-				IRQF_TRIGGER_FALLING | IRQF_ONESHOT, name, st);
++				IRQF_TRIGGER_FALLING | IRQF_ONESHOT, name, iio_dev);
+ 		if (ret)
+ 			return ret;
+ 	}
+diff --git a/drivers/iio/adc/ad9467.c b/drivers/iio/adc/ad9467.c
+index 19a45dd437967..6b9627bfebb0a 100644
+--- a/drivers/iio/adc/ad9467.c
++++ b/drivers/iio/adc/ad9467.c
+@@ -119,9 +119,9 @@ struct ad9467_state {
+ 	struct spi_device		*spi;
+ 	struct clk			*clk;
+ 	unsigned int			output_mode;
++	unsigned int                    (*scales)[2];
+ 
+ 	struct gpio_desc		*pwrdown_gpio;
+-	struct gpio_desc		*reset_gpio;
+ };
+ 
+ static int ad9467_spi_read(struct spi_device *spi, unsigned int reg)
+@@ -163,9 +163,10 @@ static int ad9467_reg_access(struct adi_axi_adc_conv *conv, unsigned int reg,
+ 
+ 	if (readval == NULL) {
+ 		ret = ad9467_spi_write(spi, reg, writeval);
+-		ad9467_spi_write(spi, AN877_ADC_REG_TRANSFER,
+-				 AN877_ADC_TRANSFER_SYNC);
+-		return ret;
++		if (ret)
++			return ret;
++		return ad9467_spi_write(spi, AN877_ADC_REG_TRANSFER,
++					AN877_ADC_TRANSFER_SYNC);
+ 	}
+ 
+ 	ret = ad9467_spi_read(spi, reg);
+@@ -212,6 +213,7 @@ static void __ad9467_get_scale(struct adi_axi_adc_conv *conv, int index,
+ 	.channel = _chan,						\
+ 	.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE) |		\
+ 		BIT(IIO_CHAN_INFO_SAMP_FREQ),				\
++	.info_mask_shared_by_type_available = BIT(IIO_CHAN_INFO_SCALE), \
+ 	.scan_index = _si,						\
+ 	.scan_type = {							\
+ 		.sign = _sign,						\
+@@ -273,10 +275,13 @@ static int ad9467_get_scale(struct adi_axi_adc_conv *conv, int *val, int *val2)
+ 	const struct ad9467_chip_info *info1 = to_ad9467_chip_info(info);
+ 	struct ad9467_state *st = adi_axi_adc_conv_priv(conv);
+ 	unsigned int i, vref_val;
++	int ret;
+ 
+-	vref_val = ad9467_spi_read(st->spi, AN877_ADC_REG_VREF);
++	ret = ad9467_spi_read(st->spi, AN877_ADC_REG_VREF);
++	if (ret < 0)
++		return ret;
+ 
+-	vref_val &= info1->vref_mask;
++	vref_val = ret & info1->vref_mask;
+ 
+ 	for (i = 0; i < info->num_scales; i++) {
+ 		if (vref_val == info->scale_table[i][1])
+@@ -297,6 +302,7 @@ static int ad9467_set_scale(struct adi_axi_adc_conv *conv, int val, int val2)
+ 	struct ad9467_state *st = adi_axi_adc_conv_priv(conv);
+ 	unsigned int scale_val[2];
+ 	unsigned int i;
++	int ret;
+ 
+ 	if (val != 0)
+ 		return -EINVAL;
+@@ -306,11 +312,13 @@ static int ad9467_set_scale(struct adi_axi_adc_conv *conv, int val, int val2)
+ 		if (scale_val[0] != val || scale_val[1] != val2)
+ 			continue;
+ 
+-		ad9467_spi_write(st->spi, AN877_ADC_REG_VREF,
+-				 info->scale_table[i][1]);
+-		ad9467_spi_write(st->spi, AN877_ADC_REG_TRANSFER,
+-				 AN877_ADC_TRANSFER_SYNC);
+-		return 0;
++		ret = ad9467_spi_write(st->spi, AN877_ADC_REG_VREF,
++				       info->scale_table[i][1]);
++		if (ret < 0)
++			return ret;
++
++		return ad9467_spi_write(st->spi, AN877_ADC_REG_TRANSFER,
++					AN877_ADC_TRANSFER_SYNC);
+ 	}
+ 
+ 	return -EINVAL;
+@@ -359,6 +367,26 @@ static int ad9467_write_raw(struct adi_axi_adc_conv *conv,
+ 	}
+ }
+ 
++static int ad9467_read_avail(struct adi_axi_adc_conv *conv,
++			     struct iio_chan_spec const *chan,
++			     const int **vals, int *type, int *length,
++			     long mask)
++{
++	const struct adi_axi_adc_chip_info *info = conv->chip_info;
++	struct ad9467_state *st = adi_axi_adc_conv_priv(conv);
++
++	switch (mask) {
++	case IIO_CHAN_INFO_SCALE:
++		*vals = (const int *)st->scales;
++		*type = IIO_VAL_INT_PLUS_MICRO;
++		/* Values are stored in a 2D matrix */
++		*length = info->num_scales * 2;
++		return IIO_AVAIL_LIST;
++	default:
++		return -EINVAL;
++	}
++}
++
+ static int ad9467_outputmode_set(struct spi_device *spi, unsigned int mode)
+ {
+ 	int ret;
+@@ -371,6 +399,26 @@ static int ad9467_outputmode_set(struct spi_device *spi, unsigned int mode)
+ 				AN877_ADC_TRANSFER_SYNC);
+ }
+ 
++static int ad9467_scale_fill(struct adi_axi_adc_conv *conv)
++{
++	const struct adi_axi_adc_chip_info *info = conv->chip_info;
++	struct ad9467_state *st = adi_axi_adc_conv_priv(conv);
++	unsigned int i, val1, val2;
++
++	st->scales = devm_kmalloc_array(&st->spi->dev, info->num_scales,
++					sizeof(*st->scales), GFP_KERNEL);
++	if (!st->scales)
++		return -ENOMEM;
++
++	for (i = 0; i < info->num_scales; i++) {
++		__ad9467_get_scale(conv, i, &val1, &val2);
++		st->scales[i][0] = val1;
++		st->scales[i][1] = val2;
++	}
++
++	return 0;
++}
++
+ static int ad9467_preenable_setup(struct adi_axi_adc_conv *conv)
+ {
+ 	struct ad9467_state *st = adi_axi_adc_conv_priv(conv);
+@@ -378,11 +426,19 @@ static int ad9467_preenable_setup(struct adi_axi_adc_conv *conv)
+ 	return ad9467_outputmode_set(st->spi, st->output_mode);
+ }
+ 
+-static void ad9467_clk_disable(void *data)
++static int ad9467_reset(struct device *dev)
+ {
+-	struct ad9467_state *st = data;
++	struct gpio_desc *gpio;
+ 
+-	clk_disable_unprepare(st->clk);
++	gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
++	if (IS_ERR_OR_NULL(gpio))
++		return PTR_ERR_OR_ZERO(gpio);
++
++	fsleep(1);
++	gpiod_set_value_cansleep(gpio, 0);
++	fsleep(10 * USEC_PER_MSEC);
++
++	return 0;
+ }
+ 
+ static int ad9467_probe(struct spi_device *spi)
+@@ -404,40 +460,27 @@ static int ad9467_probe(struct spi_device *spi)
+ 	st = adi_axi_adc_conv_priv(conv);
+ 	st->spi = spi;
+ 
+-	st->clk = devm_clk_get(&spi->dev, "adc-clk");
++	st->clk = devm_clk_get_enabled(&spi->dev, "adc-clk");
+ 	if (IS_ERR(st->clk))
+ 		return PTR_ERR(st->clk);
+ 
+-	ret = clk_prepare_enable(st->clk);
+-	if (ret < 0)
+-		return ret;
+-
+-	ret = devm_add_action_or_reset(&spi->dev, ad9467_clk_disable, st);
+-	if (ret)
+-		return ret;
+-
+ 	st->pwrdown_gpio = devm_gpiod_get_optional(&spi->dev, "powerdown",
+ 						   GPIOD_OUT_LOW);
+ 	if (IS_ERR(st->pwrdown_gpio))
+ 		return PTR_ERR(st->pwrdown_gpio);
+ 
+-	st->reset_gpio = devm_gpiod_get_optional(&spi->dev, "reset",
+-						 GPIOD_OUT_LOW);
+-	if (IS_ERR(st->reset_gpio))
+-		return PTR_ERR(st->reset_gpio);
+-
+-	if (st->reset_gpio) {
+-		udelay(1);
+-		ret = gpiod_direction_output(st->reset_gpio, 1);
+-		if (ret)
+-			return ret;
+-		mdelay(10);
+-	}
++	ret = ad9467_reset(&spi->dev);
++	if (ret)
++		return ret;
+ 
+ 	spi_set_drvdata(spi, st);
+ 
+ 	conv->chip_info = &info->axi_adc_info;
+ 
++	ret = ad9467_scale_fill(conv);
++	if (ret)
++		return ret;
++
+ 	id = ad9467_spi_read(spi, AN877_ADC_REG_CHIP_ID);
+ 	if (id != conv->chip_info->id) {
+ 		dev_err(&spi->dev, "Mismatch CHIP_ID, got 0x%X, expected 0x%X\n",
+@@ -448,6 +491,7 @@ static int ad9467_probe(struct spi_device *spi)
+ 	conv->reg_access = ad9467_reg_access;
+ 	conv->write_raw = ad9467_write_raw;
+ 	conv->read_raw = ad9467_read_raw;
++	conv->read_avail = ad9467_read_avail;
+ 	conv->preenable_setup = ad9467_preenable_setup;
+ 
+ 	st->output_mode = info->default_output_mode |
+diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
+index cbe1011a2408a..d721fb1bcae7b 100644
+--- a/drivers/iio/adc/adi-axi-adc.c
++++ b/drivers/iio/adc/adi-axi-adc.c
+@@ -150,6 +150,20 @@ static int adi_axi_adc_write_raw(struct iio_dev *indio_dev,
+ 	return conv->write_raw(conv, chan, val, val2, mask);
+ }
+ 
++static int adi_axi_adc_read_avail(struct iio_dev *indio_dev,
++				  struct iio_chan_spec const *chan,
++				  const int **vals, int *type, int *length,
++				  long mask)
++{
++	struct adi_axi_adc_state *st = iio_priv(indio_dev);
++	struct adi_axi_adc_conv *conv = &st->client->conv;
++
++	if (!conv->read_avail)
++		return -EOPNOTSUPP;
++
++	return conv->read_avail(conv, chan, vals, type, length, mask);
++}
++
+ static int adi_axi_adc_update_scan_mode(struct iio_dev *indio_dev,
+ 					const unsigned long *scan_mask)
+ {
+@@ -238,69 +252,11 @@ struct adi_axi_adc_conv *devm_adi_axi_adc_conv_register(struct device *dev,
+ }
+ EXPORT_SYMBOL_GPL(devm_adi_axi_adc_conv_register);
+ 
+-static ssize_t in_voltage_scale_available_show(struct device *dev,
+-					       struct device_attribute *attr,
+-					       char *buf)
+-{
+-	struct iio_dev *indio_dev = dev_to_iio_dev(dev);
+-	struct adi_axi_adc_state *st = iio_priv(indio_dev);
+-	struct adi_axi_adc_conv *conv = &st->client->conv;
+-	size_t len = 0;
+-	int i;
+-
+-	for (i = 0; i < conv->chip_info->num_scales; i++) {
+-		const unsigned int *s = conv->chip_info->scale_table[i];
+-
+-		len += scnprintf(buf + len, PAGE_SIZE - len,
+-				 "%u.%06u ", s[0], s[1]);
+-	}
+-	buf[len - 1] = '\n';
+-
+-	return len;
+-}
+-
+-static IIO_DEVICE_ATTR_RO(in_voltage_scale_available, 0);
+-
+-enum {
+-	ADI_AXI_ATTR_SCALE_AVAIL,
+-};
+-
+-#define ADI_AXI_ATTR(_en_, _file_)			\
+-	[ADI_AXI_ATTR_##_en_] = &iio_dev_attr_##_file_.dev_attr.attr
+-
+-static struct attribute *adi_axi_adc_attributes[] = {
+-	ADI_AXI_ATTR(SCALE_AVAIL, in_voltage_scale_available),
+-	NULL
+-};
+-
+-static umode_t axi_adc_attr_is_visible(struct kobject *kobj,
+-				       struct attribute *attr, int n)
+-{
+-	struct device *dev = kobj_to_dev(kobj);
+-	struct iio_dev *indio_dev = dev_to_iio_dev(dev);
+-	struct adi_axi_adc_state *st = iio_priv(indio_dev);
+-	struct adi_axi_adc_conv *conv = &st->client->conv;
+-
+-	switch (n) {
+-	case ADI_AXI_ATTR_SCALE_AVAIL:
+-		if (!conv->chip_info->num_scales)
+-			return 0;
+-		return attr->mode;
+-	default:
+-		return attr->mode;
+-	}
+-}
+-
+-static const struct attribute_group adi_axi_adc_attribute_group = {
+-	.attrs = adi_axi_adc_attributes,
+-	.is_visible = axi_adc_attr_is_visible,
+-};
+-
+ static const struct iio_info adi_axi_adc_info = {
+ 	.read_raw = &adi_axi_adc_read_raw,
+ 	.write_raw = &adi_axi_adc_write_raw,
+-	.attrs = &adi_axi_adc_attribute_group,
+ 	.update_scan_mode = &adi_axi_adc_update_scan_mode,
++	.read_avail = &adi_axi_adc_read_avail,
+ };
+ 
+ static const struct adi_axi_adc_core_info adi_axi_adc_10_0_a_info = {
+diff --git a/drivers/infiniband/hw/mthca/mthca_cmd.c b/drivers/infiniband/hw/mthca/mthca_cmd.c
+index bdf5ed38de220..0307c45aa6d31 100644
+--- a/drivers/infiniband/hw/mthca/mthca_cmd.c
++++ b/drivers/infiniband/hw/mthca/mthca_cmd.c
+@@ -635,7 +635,7 @@ void mthca_free_mailbox(struct mthca_dev *dev, struct mthca_mailbox *mailbox)
+ 
+ int mthca_SYS_EN(struct mthca_dev *dev)
+ {
+-	u64 out;
++	u64 out = 0;
+ 	int ret;
+ 
+ 	ret = mthca_cmd_imm(dev, 0, &out, 0, 0, CMD_SYS_EN, CMD_TIME_CLASS_D);
+@@ -1955,7 +1955,7 @@ int mthca_WRITE_MGM(struct mthca_dev *dev, int index,
+ int mthca_MGID_HASH(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
+ 		    u16 *hash)
+ {
+-	u64 imm;
++	u64 imm = 0;
+ 	int err;
+ 
+ 	err = mthca_cmd_imm(dev, mailbox->dma, &imm, 0, 0, CMD_MGID_HASH,
+diff --git a/drivers/infiniband/hw/mthca/mthca_main.c b/drivers/infiniband/hw/mthca/mthca_main.c
+index fe9654a7af713..3acd1372c8140 100644
+--- a/drivers/infiniband/hw/mthca/mthca_main.c
++++ b/drivers/infiniband/hw/mthca/mthca_main.c
+@@ -382,7 +382,7 @@ static int mthca_init_icm(struct mthca_dev *mdev,
+ 			  struct mthca_init_hca_param *init_hca,
+ 			  u64 icm_size)
+ {
+-	u64 aux_pages;
++	u64 aux_pages = 0;
+ 	int err;
+ 
+ 	err = mthca_SET_ICM_SIZE(mdev, icm_size, &aux_pages);
+diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.h b/drivers/infiniband/ulp/iser/iscsi_iser.h
+index 78ee9445f8019..45a2d2b82b093 100644
+--- a/drivers/infiniband/ulp/iser/iscsi_iser.h
++++ b/drivers/infiniband/ulp/iser/iscsi_iser.h
+@@ -322,12 +322,10 @@ struct iser_device {
+  *
+  * @mr:         memory region
+  * @sig_mr:     signature memory region
+- * @mr_valid:   is mr valid indicator
+  */
+ struct iser_reg_resources {
+ 	struct ib_mr                     *mr;
+ 	struct ib_mr                     *sig_mr;
+-	u8				  mr_valid:1;
+ };
+ 
+ /**
+diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c
+index 27a6f75a9912f..9ea88dd6a414e 100644
+--- a/drivers/infiniband/ulp/iser/iser_initiator.c
++++ b/drivers/infiniband/ulp/iser/iser_initiator.c
+@@ -602,7 +602,10 @@ iser_inv_desc(struct iser_fr_desc *desc, u32 rkey)
+ 		return -EINVAL;
+ 	}
+ 
+-	desc->rsc.mr_valid = 0;
++	if (desc->sig_protected)
++		desc->rsc.sig_mr->need_inval = false;
++	else
++		desc->rsc.mr->need_inval = false;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c
+index d4e057fac219b..519fea5ec3a1b 100644
+--- a/drivers/infiniband/ulp/iser/iser_memory.c
++++ b/drivers/infiniband/ulp/iser/iser_memory.c
+@@ -250,7 +250,7 @@ iser_reg_sig_mr(struct iscsi_iser_task *iser_task,
+ 
+ 	iser_set_prot_checks(iser_task->sc, &sig_attrs->check_mask);
+ 
+-	if (rsc->mr_valid)
++	if (rsc->sig_mr->need_inval)
+ 		iser_inv_rkey(&tx_desc->inv_wr, mr, cqe, &wr->wr);
+ 
+ 	ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));
+@@ -274,7 +274,7 @@ iser_reg_sig_mr(struct iscsi_iser_task *iser_task,
+ 	wr->access = IB_ACCESS_LOCAL_WRITE |
+ 		     IB_ACCESS_REMOTE_READ |
+ 		     IB_ACCESS_REMOTE_WRITE;
+-	rsc->mr_valid = 1;
++	rsc->sig_mr->need_inval = true;
+ 
+ 	sig_reg->sge.lkey = mr->lkey;
+ 	sig_reg->rkey = mr->rkey;
+@@ -299,7 +299,7 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task,
+ 	struct ib_reg_wr *wr = &tx_desc->reg_wr;
+ 	int n;
+ 
+-	if (rsc->mr_valid)
++	if (rsc->mr->need_inval)
+ 		iser_inv_rkey(&tx_desc->inv_wr, mr, cqe, &wr->wr);
+ 
+ 	ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));
+@@ -322,7 +322,7 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task,
+ 		     IB_ACCESS_REMOTE_WRITE |
+ 		     IB_ACCESS_REMOTE_READ;
+ 
+-	rsc->mr_valid = 1;
++	rsc->mr->need_inval = true;
+ 
+ 	reg->sge.lkey = mr->lkey;
+ 	reg->rkey = mr->rkey;
+diff --git a/drivers/infiniband/ulp/iser/iser_verbs.c b/drivers/infiniband/ulp/iser/iser_verbs.c
+index 2bd18b0068934..b5127479860d8 100644
+--- a/drivers/infiniband/ulp/iser/iser_verbs.c
++++ b/drivers/infiniband/ulp/iser/iser_verbs.c
+@@ -136,7 +136,6 @@ iser_create_fastreg_desc(struct iser_device *device,
+ 			goto err_alloc_mr_integrity;
+ 		}
+ 	}
+-	desc->rsc.mr_valid = 0;
+ 
+ 	return desc;
+ 
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index beedad0fe09ae..239471cf7e4c2 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -266,6 +266,7 @@ static const struct xpad_device {
+ 	{ 0x146b, 0x0604, "Bigben Interactive DAIJA Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ 	{ 0x1532, 0x0a00, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+ 	{ 0x1532, 0x0a03, "Razer Wildcat", 0, XTYPE_XBOXONE },
++	{ 0x1532, 0x0a29, "Razer Wolverine V2", 0, XTYPE_XBOXONE },
+ 	{ 0x15e4, 0x3f00, "Power A Mini Pro Elite", 0, XTYPE_XBOX360 },
+ 	{ 0x15e4, 0x3f0a, "Xbox Airflo wired controller", 0, XTYPE_XBOX360 },
+ 	{ 0x15e4, 0x3f10, "Batarang Xbox 360 controller", 0, XTYPE_XBOX360 },
+diff --git a/drivers/input/keyboard/atkbd.c b/drivers/input/keyboard/atkbd.c
+index edc613efc158c..4912582c54dad 100644
+--- a/drivers/input/keyboard/atkbd.c
++++ b/drivers/input/keyboard/atkbd.c
+@@ -756,6 +756,44 @@ static void atkbd_deactivate(struct atkbd *atkbd)
+ 			ps2dev->serio->phys);
+ }
+ 
++#ifdef CONFIG_X86
++static bool atkbd_is_portable_device(void)
++{
++	static const char * const chassis_types[] = {
++		"8",	/* Portable */
++		"9",	/* Laptop */
++		"10",	/* Notebook */
++		"14",	/* Sub-Notebook */
++		"31",	/* Convertible */
++		"32",	/* Detachable */
++	};
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(chassis_types); i++)
++		if (dmi_match(DMI_CHASSIS_TYPE, chassis_types[i]))
++			return true;
++
++	return false;
++}
++
++/*
++ * On many modern laptops ATKBD_CMD_GETID may cause problems, on these laptops
++ * the controller is always in translated mode. In this mode mice/touchpads will
++ * not work. So in this case simply assume a keyboard is connected to avoid
++ * confusing some laptop keyboards.
++ *
++ * Skipping ATKBD_CMD_GETID ends up using a fake keyboard id. Using the standard
++ * 0xab83 id is ok in translated mode, only atkbd_select_set() checks atkbd->id
++ * and in translated mode that is a no-op.
++ */
++static bool atkbd_skip_getid(struct atkbd *atkbd)
++{
++	return atkbd->translated && atkbd_is_portable_device();
++}
++#else
++static inline bool atkbd_skip_getid(struct atkbd *atkbd) { return false; }
++#endif
++
+ /*
+  * atkbd_probe() probes for an AT keyboard on a serio port.
+  */
+@@ -764,6 +802,7 @@ static int atkbd_probe(struct atkbd *atkbd)
+ {
+ 	struct ps2dev *ps2dev = &atkbd->ps2dev;
+ 	unsigned char param[2];
++	bool skip_getid;
+ 
+ /*
+  * Some systems, where the bit-twiddling when testing the io-lines of the
+@@ -785,17 +824,18 @@ static int atkbd_probe(struct atkbd *atkbd)
+  */
+ 
+ 	param[0] = param[1] = 0xa5;	/* initialize with invalid values */
+-	if (ps2_command(ps2dev, param, ATKBD_CMD_GETID)) {
++	skip_getid = atkbd_skip_getid(atkbd);
++	if (skip_getid || ps2_command(ps2dev, param, ATKBD_CMD_GETID)) {
+ 
+ /*
+- * If the get ID command failed, we check if we can at least set the LEDs on
+- * the keyboard. This should work on every keyboard out there. It also turns
+- * the LEDs off, which we want anyway.
++ * If the get ID command was skipped or failed, we check if we can at least set
++ * the LEDs on the keyboard. This should work on every keyboard out there.
++ * It also turns the LEDs off, which we want anyway.
+  */
+ 		param[0] = 0;
+ 		if (ps2_command(ps2dev, param, ATKBD_CMD_SETLEDS))
+ 			return -1;
+-		atkbd->id = 0xabba;
++		atkbd->id = skip_getid ? 0xab83 : 0xabba;
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index 09528c0a8a34e..124ab98ea43a4 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -351,6 +351,14 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 		},
+ 		.driver_data = (void *)(SERIO_QUIRK_DRITEK)
+ 	},
++	{
++		/* Acer TravelMate P459-G2-M */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate P459-G2-M"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
+ 	{
+ 		/* Amoi M636/A737 */
+ 		.matches = {
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 1598a1ddbf694..a5164d5cb6a35 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -21,6 +21,7 @@ static struct qcom_smmu *to_qcom_smmu(struct arm_smmu_device *smmu)
+ 
+ static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
+ 	{ .compatible = "qcom,adreno" },
++	{ .compatible = "qcom,adreno-gmu" },
+ 	{ .compatible = "qcom,mdp4" },
+ 	{ .compatible = "qcom,mdss" },
+ 	{ .compatible = "qcom,sc7180-mdss" },
+diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig
+index ad84be4f68171..03c4c5e4d35ca 100644
+--- a/drivers/leds/Kconfig
++++ b/drivers/leds/Kconfig
+@@ -116,6 +116,7 @@ config LEDS_AS3645A
+ config LEDS_AW2013
+ 	tristate "LED support for Awinic AW2013"
+ 	depends on LEDS_CLASS && I2C && OF
++	select REGMAP_I2C
+ 	help
+ 	  This option enables support for the AW2013 3-channel
+ 	  LED driver.
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index 2ff8a1b776fb4..3a83e8e092568 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -114,6 +114,8 @@ static int dvb_device_open(struct inode *inode, struct file *file)
+ 			err = file->f_op->open(inode, file);
+ 		up_read(&minor_rwsem);
+ 		mutex_unlock(&dvbdev_mutex);
++		if (err)
++			dvb_device_put(dvbdev);
+ 		return err;
+ 	}
+ fail:
+diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c
+index ff106d6ece68a..94c6cc59372c6 100644
+--- a/drivers/media/dvb-frontends/m88ds3103.c
++++ b/drivers/media/dvb-frontends/m88ds3103.c
+@@ -1898,7 +1898,7 @@ static int m88ds3103_probe(struct i2c_client *client,
+ 		/* get frontend address */
+ 		ret = regmap_read(dev->regmap, 0x29, &utmp);
+ 		if (ret)
+-			goto err_kfree;
++			goto err_del_adapters;
+ 		dev->dt_addr = ((utmp & 0x80) == 0) ? 0x42 >> 1 : 0x40 >> 1;
+ 		dev_dbg(&client->dev, "dt addr is 0x%02x\n", dev->dt_addr);
+ 
+@@ -1906,11 +1906,14 @@ static int m88ds3103_probe(struct i2c_client *client,
+ 						      dev->dt_addr);
+ 		if (IS_ERR(dev->dt_client)) {
+ 			ret = PTR_ERR(dev->dt_client);
+-			goto err_kfree;
++			goto err_del_adapters;
+ 		}
+ 	}
+ 
+ 	return 0;
++
++err_del_adapters:
++	i2c_mux_del_adapters(dev->muxc);
+ err_kfree:
+ 	kfree(dev);
+ err:
+diff --git a/drivers/media/usb/cx231xx/cx231xx-core.c b/drivers/media/usb/cx231xx/cx231xx-core.c
+index 05d91caaed0cd..46e34225b45c0 100644
+--- a/drivers/media/usb/cx231xx/cx231xx-core.c
++++ b/drivers/media/usb/cx231xx/cx231xx-core.c
+@@ -1024,6 +1024,7 @@ int cx231xx_init_isoc(struct cx231xx *dev, int max_packets,
+ 	if (!dev->video_mode.isoc_ctl.urb) {
+ 		dev_err(dev->dev,
+ 			"cannot alloc memory for usb buffers\n");
++		kfree(dma_q->p_left_data);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -1033,6 +1034,7 @@ int cx231xx_init_isoc(struct cx231xx *dev, int max_packets,
+ 		dev_err(dev->dev,
+ 			"cannot allocate memory for usbtransfer\n");
+ 		kfree(dev->video_mode.isoc_ctl.urb);
++		kfree(dma_q->p_left_data);
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-context.c b/drivers/media/usb/pvrusb2/pvrusb2-context.c
+index 14170a5d72b35..1764674de98bc 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-context.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-context.c
+@@ -268,7 +268,8 @@ void pvr2_context_disconnect(struct pvr2_context *mp)
+ {
+ 	pvr2_hdw_disconnect(mp->hdw);
+ 	mp->disconnect_flag = !0;
+-	pvr2_context_notify(mp);
++	if (!pvr2_context_shutok())
++		pvr2_context_notify(mp);
+ }
+ 
+ 
+diff --git a/drivers/mfd/syscon.c b/drivers/mfd/syscon.c
+index df5cebb372a59..60f74144a4f88 100644
+--- a/drivers/mfd/syscon.c
++++ b/drivers/mfd/syscon.c
+@@ -103,6 +103,10 @@ static struct syscon *of_syscon_register(struct device_node *np, bool check_clk)
+ 
+ 	syscon_config.name = kasprintf(GFP_KERNEL, "%pOFn@%llx", np,
+ 				       (u64)res.start);
++	if (!syscon_config.name) {
++		ret = -ENOMEM;
++		goto err_regmap;
++	}
+ 	syscon_config.reg_stride = reg_io_width;
+ 	syscon_config.val_bits = reg_io_width * 8;
+ 	syscon_config.max_register = resource_size(&res) - reg_io_width;
+diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
+index a5b2bf0e40cc1..8fe4a0fd6ef18 100644
+--- a/drivers/mmc/host/Kconfig
++++ b/drivers/mmc/host/Kconfig
+@@ -1065,14 +1065,15 @@ config MMC_SDHCI_XENON
+ 
+ config MMC_SDHCI_OMAP
+ 	tristate "TI SDHCI Controller Support"
++	depends on ARCH_OMAP2PLUS || ARCH_KEYSTONE || COMPILE_TEST
+ 	depends on MMC_SDHCI_PLTFM && OF
+ 	select THERMAL
+ 	imply TI_SOC_THERMAL
+ 	select MMC_SDHCI_EXTERNAL_DMA if DMA_ENGINE
+ 	help
+ 	  This selects the Secure Digital Host Controller Interface (SDHCI)
+-	  support present in TI's DRA7 SOCs. The controller supports
+-	  SD/MMC/SDIO devices.
++	  support present in TI's Keystone/OMAP2+/DRA7 SOCs. The controller
++	  supports SD/MMC/SDIO devices.
+ 
+ 	  If you have a controller with this interface, say Y or M here.
+ 
+@@ -1080,14 +1081,15 @@ config MMC_SDHCI_OMAP
+ 
+ config MMC_SDHCI_AM654
+ 	tristate "Support for the SDHCI Controller in TI's AM654 SOCs"
++	depends on ARCH_K3 || COMPILE_TEST
+ 	depends on MMC_SDHCI_PLTFM && OF
+ 	select MMC_SDHCI_IO_ACCESSORS
+ 	select MMC_CQHCI
+ 	select REGMAP_MMIO
+ 	help
+ 	  This selects the Secure Digital Host Controller Interface (SDHCI)
+-	  support present in TI's AM654 SOCs. The controller supports
+-	  SD/MMC/SDIO devices.
++	  support present in TI's AM65x/AM64x/AM62x/J721E SOCs. The controller
++	  supports SD/MMC/SDIO devices.
+ 
+ 	  If you have a controller with this interface, say Y or M here.
+ 
+diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
+index 0c05f77f9b216..dd0d0bf5f57f9 100644
+--- a/drivers/mtd/mtd_blkdevs.c
++++ b/drivers/mtd/mtd_blkdevs.c
+@@ -533,7 +533,7 @@ static void blktrans_notify_add(struct mtd_info *mtd)
+ {
+ 	struct mtd_blktrans_ops *tr;
+ 
+-	if (mtd->type == MTD_ABSENT)
++	if (mtd->type == MTD_ABSENT || mtd->type == MTD_UBIVOLUME)
+ 		return;
+ 
+ 	list_for_each_entry(tr, &blktrans_majors, list)
+@@ -576,7 +576,7 @@ int register_mtd_blktrans(struct mtd_blktrans_ops *tr)
+ 	list_add(&tr->list, &blktrans_majors);
+ 
+ 	mtd_for_each_device(mtd)
+-		if (mtd->type != MTD_ABSENT)
++		if (mtd->type != MTD_ABSENT && mtd->type != MTD_UBIVOLUME)
+ 			tr->add_mtd(tr, mtd);
+ 
+ 	mutex_unlock(&mtd_table_mutex);
+diff --git a/drivers/mtd/nand/raw/fsl_ifc_nand.c b/drivers/mtd/nand/raw/fsl_ifc_nand.c
+index e345f9d9f8e8d..fcda744e8a406 100644
+--- a/drivers/mtd/nand/raw/fsl_ifc_nand.c
++++ b/drivers/mtd/nand/raw/fsl_ifc_nand.c
+@@ -21,7 +21,7 @@
+ 
+ #define ERR_BYTE		0xFF /* Value returned for read
+ 					bytes when read failed	*/
+-#define IFC_TIMEOUT_MSECS	500  /* Maximum number of mSecs to wait
++#define IFC_TIMEOUT_MSECS	1000 /* Maximum timeout to wait
+ 					for IFC NAND Machine	*/
+ 
+ struct fsl_ifc_ctrl;
+diff --git a/drivers/net/dsa/vitesse-vsc73xx-core.c b/drivers/net/dsa/vitesse-vsc73xx-core.c
+index 80eadf509c0a9..018988b95035e 100644
+--- a/drivers/net/dsa/vitesse-vsc73xx-core.c
++++ b/drivers/net/dsa/vitesse-vsc73xx-core.c
+@@ -1119,6 +1119,8 @@ static int vsc73xx_gpio_probe(struct vsc73xx *vsc)
+ 
+ 	vsc->gc.label = devm_kasprintf(vsc->dev, GFP_KERNEL, "VSC%04x",
+ 				       vsc->chipid);
++	if (!vsc->gc.label)
++		return -ENOMEM;
+ 	vsc->gc.ngpio = 4;
+ 	vsc->gc.owner = THIS_MODULE;
+ 	vsc->gc.parent = vsc->dev;
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index b010f28b0abf4..fe2c9b110e606 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -6454,6 +6454,14 @@ static void tg3_dump_state(struct tg3 *tp)
+ 	int i;
+ 	u32 *regs;
+ 
++	/* If it is a PCI error, all registers will be 0xffff,
++	 * we don't dump them out, just report the error and return
++	 */
++	if (tp->pdev->error_state != pci_channel_io_normal) {
++		netdev_err(tp->dev, "PCI channel ERROR!\n");
++		return;
++	}
++
+ 	regs = kzalloc(TG3_REG_BLK_SIZE, GFP_ATOMIC);
+ 	if (!regs)
+ 		return;
+@@ -11195,7 +11203,8 @@ static void tg3_reset_task(struct work_struct *work)
+ 	rtnl_lock();
+ 	tg3_full_lock(tp, 0);
+ 
+-	if (tp->pcierr_recovery || !netif_running(tp->dev)) {
++	if (tp->pcierr_recovery || !netif_running(tp->dev) ||
++	    tp->pdev->error_state != pci_channel_io_normal) {
+ 		tg3_flag_clear(tp, RESET_TASK_PENDING);
+ 		tg3_full_unlock(tp);
+ 		rtnl_unlock();
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index aa9e616cc1d59..011210e6842de 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -2302,7 +2302,6 @@ static int mtk_open(struct net_device *dev)
+ 	if (!refcount_read(&eth->dma_refcnt)) {
+ 		int err = mtk_start_dma(eth);
+ 
+-		if (err)
+ 		if (err) {
+ 			phylink_disconnect_phy(mac->phylink);
+ 			return err;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c
+index ded4cf6586809..4b713832fdd55 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c
+@@ -119,7 +119,6 @@ mlxsw_sp_acl_atcam_region_12kb_init(struct mlxsw_sp_acl_atcam_region *aregion)
+ {
+ 	struct mlxsw_sp *mlxsw_sp = aregion->region->mlxsw_sp;
+ 	struct mlxsw_sp_acl_atcam_region_12kb *region_12kb;
+-	size_t alloc_size;
+ 	u64 max_lkey_id;
+ 	int err;
+ 
+@@ -131,8 +130,7 @@ mlxsw_sp_acl_atcam_region_12kb_init(struct mlxsw_sp_acl_atcam_region *aregion)
+ 	if (!region_12kb)
+ 		return -ENOMEM;
+ 
+-	alloc_size = BITS_TO_LONGS(max_lkey_id) * sizeof(unsigned long);
+-	region_12kb->used_lkey_id = kzalloc(alloc_size, GFP_KERNEL);
++	region_12kb->used_lkey_id = bitmap_zalloc(max_lkey_id, GFP_KERNEL);
+ 	if (!region_12kb->used_lkey_id) {
+ 		err = -ENOMEM;
+ 		goto err_used_lkey_id_alloc;
+@@ -149,7 +147,7 @@ mlxsw_sp_acl_atcam_region_12kb_init(struct mlxsw_sp_acl_atcam_region *aregion)
+ 	return 0;
+ 
+ err_rhashtable_init:
+-	kfree(region_12kb->used_lkey_id);
++	bitmap_free(region_12kb->used_lkey_id);
+ err_used_lkey_id_alloc:
+ 	kfree(region_12kb);
+ 	return err;
+@@ -161,7 +159,7 @@ mlxsw_sp_acl_atcam_region_12kb_fini(struct mlxsw_sp_acl_atcam_region *aregion)
+ 	struct mlxsw_sp_acl_atcam_region_12kb *region_12kb = aregion->priv;
+ 
+ 	rhashtable_destroy(&region_12kb->lkey_ht);
+-	kfree(region_12kb->used_lkey_id);
++	bitmap_free(region_12kb->used_lkey_id);
+ 	kfree(region_12kb);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
+index 4c98950380d53..d231f4d2888be 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
+@@ -301,6 +301,7 @@ mlxsw_sp_acl_erp_table_alloc(struct mlxsw_sp_acl_erp_core *erp_core,
+ 			     unsigned long *p_index)
+ {
+ 	unsigned int num_rows, entry_size;
++	unsigned long index;
+ 
+ 	/* We only allow allocations of entire rows */
+ 	if (num_erps % erp_core->num_erp_banks != 0)
+@@ -309,10 +310,11 @@ mlxsw_sp_acl_erp_table_alloc(struct mlxsw_sp_acl_erp_core *erp_core,
+ 	entry_size = erp_core->erpt_entries_size[region_type];
+ 	num_rows = num_erps / erp_core->num_erp_banks;
+ 
+-	*p_index = gen_pool_alloc(erp_core->erp_tables, num_rows * entry_size);
+-	if (*p_index == 0)
++	index = gen_pool_alloc(erp_core->erp_tables, num_rows * entry_size);
++	if (!index)
+ 		return -ENOBUFS;
+-	*p_index -= MLXSW_SP_ACL_ERP_GENALLOC_OFFSET;
++
++	*p_index = index - MLXSW_SP_ACL_ERP_GENALLOC_OFFSET;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+index 7cccc41dd69c9..483c8b75bebb8 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+@@ -29,70 +29,6 @@ size_t mlxsw_sp_acl_tcam_priv_size(struct mlxsw_sp *mlxsw_sp)
+ #define MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_MIN 3000 /* ms */
+ #define MLXSW_SP_ACL_TCAM_VREGION_REHASH_CREDITS 100 /* number of entries */
+ 
+-int mlxsw_sp_acl_tcam_init(struct mlxsw_sp *mlxsw_sp,
+-			   struct mlxsw_sp_acl_tcam *tcam)
+-{
+-	const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops;
+-	u64 max_tcam_regions;
+-	u64 max_regions;
+-	u64 max_groups;
+-	size_t alloc_size;
+-	int err;
+-
+-	mutex_init(&tcam->lock);
+-	tcam->vregion_rehash_intrvl =
+-			MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_DFLT;
+-	INIT_LIST_HEAD(&tcam->vregion_list);
+-
+-	max_tcam_regions = MLXSW_CORE_RES_GET(mlxsw_sp->core,
+-					      ACL_MAX_TCAM_REGIONS);
+-	max_regions = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_REGIONS);
+-
+-	/* Use 1:1 mapping between ACL region and TCAM region */
+-	if (max_tcam_regions < max_regions)
+-		max_regions = max_tcam_regions;
+-
+-	alloc_size = sizeof(tcam->used_regions[0]) * BITS_TO_LONGS(max_regions);
+-	tcam->used_regions = kzalloc(alloc_size, GFP_KERNEL);
+-	if (!tcam->used_regions)
+-		return -ENOMEM;
+-	tcam->max_regions = max_regions;
+-
+-	max_groups = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_GROUPS);
+-	alloc_size = sizeof(tcam->used_groups[0]) * BITS_TO_LONGS(max_groups);
+-	tcam->used_groups = kzalloc(alloc_size, GFP_KERNEL);
+-	if (!tcam->used_groups) {
+-		err = -ENOMEM;
+-		goto err_alloc_used_groups;
+-	}
+-	tcam->max_groups = max_groups;
+-	tcam->max_group_size = MLXSW_CORE_RES_GET(mlxsw_sp->core,
+-						 ACL_MAX_GROUP_SIZE);
+-
+-	err = ops->init(mlxsw_sp, tcam->priv, tcam);
+-	if (err)
+-		goto err_tcam_init;
+-
+-	return 0;
+-
+-err_tcam_init:
+-	kfree(tcam->used_groups);
+-err_alloc_used_groups:
+-	kfree(tcam->used_regions);
+-	return err;
+-}
+-
+-void mlxsw_sp_acl_tcam_fini(struct mlxsw_sp *mlxsw_sp,
+-			    struct mlxsw_sp_acl_tcam *tcam)
+-{
+-	const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops;
+-
+-	mutex_destroy(&tcam->lock);
+-	ops->fini(mlxsw_sp, tcam->priv);
+-	kfree(tcam->used_groups);
+-	kfree(tcam->used_regions);
+-}
+-
+ int mlxsw_sp_acl_tcam_priority_get(struct mlxsw_sp *mlxsw_sp,
+ 				   struct mlxsw_sp_acl_rule_info *rulei,
+ 				   u32 *priority, bool fillup_priority)
+@@ -1545,6 +1481,73 @@ mlxsw_sp_acl_tcam_vregion_rehash(struct mlxsw_sp *mlxsw_sp,
+ 		mlxsw_sp_acl_tcam_vregion_rehash_end(mlxsw_sp, vregion, ctx);
+ }
+ 
++int mlxsw_sp_acl_tcam_init(struct mlxsw_sp *mlxsw_sp,
++			   struct mlxsw_sp_acl_tcam *tcam)
++{
++	const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops;
++	u64 max_tcam_regions;
++	u64 max_regions;
++	u64 max_groups;
++	int err;
++
++	mutex_init(&tcam->lock);
++	tcam->vregion_rehash_intrvl =
++			MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_DFLT;
++	INIT_LIST_HEAD(&tcam->vregion_list);
++
++	max_tcam_regions = MLXSW_CORE_RES_GET(mlxsw_sp->core,
++					      ACL_MAX_TCAM_REGIONS);
++	max_regions = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_REGIONS);
++
++	/* Use 1:1 mapping between ACL region and TCAM region */
++	if (max_tcam_regions < max_regions)
++		max_regions = max_tcam_regions;
++
++	tcam->used_regions = bitmap_zalloc(max_regions, GFP_KERNEL);
++	if (!tcam->used_regions) {
++		err = -ENOMEM;
++		goto err_alloc_used_regions;
++	}
++	tcam->max_regions = max_regions;
++
++	max_groups = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_GROUPS);
++	tcam->used_groups = bitmap_zalloc(max_groups, GFP_KERNEL);
++	if (!tcam->used_groups) {
++		err = -ENOMEM;
++		goto err_alloc_used_groups;
++	}
++	tcam->max_groups = max_groups;
++	tcam->max_group_size = MLXSW_CORE_RES_GET(mlxsw_sp->core,
++						  ACL_MAX_GROUP_SIZE);
++	tcam->max_group_size = min_t(unsigned int, tcam->max_group_size,
++				     MLXSW_REG_PAGT_ACL_MAX_NUM);
++
++	err = ops->init(mlxsw_sp, tcam->priv, tcam);
++	if (err)
++		goto err_tcam_init;
++
++	return 0;
++
++err_tcam_init:
++	bitmap_free(tcam->used_groups);
++err_alloc_used_groups:
++	bitmap_free(tcam->used_regions);
++err_alloc_used_regions:
++	mutex_destroy(&tcam->lock);
++	return err;
++}
++
++void mlxsw_sp_acl_tcam_fini(struct mlxsw_sp *mlxsw_sp,
++			    struct mlxsw_sp_acl_tcam *tcam)
++{
++	const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops;
++
++	ops->fini(mlxsw_sp, tcam->priv);
++	bitmap_free(tcam->used_groups);
++	bitmap_free(tcam->used_regions);
++	mutex_destroy(&tcam->lock);
++}
++
+ static const enum mlxsw_afk_element mlxsw_sp_acl_tcam_pattern_ipv4[] = {
+ 	MLXSW_AFK_ELEMENT_SRC_SYS_PORT,
+ 	MLXSW_AFK_ELEMENT_DMAC_32_47,
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.c
+index b65b93a2b9bc6..fc2257753b9b3 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.c
+@@ -122,7 +122,6 @@ int mlxsw_sp_counter_pool_init(struct mlxsw_sp *mlxsw_sp)
+ 	unsigned int sub_pools_count = ARRAY_SIZE(mlxsw_sp_counter_sub_pools);
+ 	struct devlink *devlink = priv_to_devlink(mlxsw_sp->core);
+ 	struct mlxsw_sp_counter_pool *pool;
+-	unsigned int map_size;
+ 	int err;
+ 
+ 	pool = kzalloc(struct_size(pool, sub_pools, sub_pools_count),
+@@ -143,9 +142,7 @@ int mlxsw_sp_counter_pool_init(struct mlxsw_sp *mlxsw_sp)
+ 	devlink_resource_occ_get_register(devlink, MLXSW_SP_RESOURCE_COUNTERS,
+ 					  mlxsw_sp_counter_pool_occ_get, pool);
+ 
+-	map_size = BITS_TO_LONGS(pool->pool_size) * sizeof(unsigned long);
+-
+-	pool->usage = kzalloc(map_size, GFP_KERNEL);
++	pool->usage = bitmap_zalloc(pool->pool_size, GFP_KERNEL);
+ 	if (!pool->usage) {
+ 		err = -ENOMEM;
+ 		goto err_usage_alloc;
+@@ -158,7 +155,7 @@ int mlxsw_sp_counter_pool_init(struct mlxsw_sp *mlxsw_sp)
+ 	return 0;
+ 
+ err_sub_pools_init:
+-	kfree(pool->usage);
++	bitmap_free(pool->usage);
+ err_usage_alloc:
+ 	devlink_resource_occ_get_unregister(devlink,
+ 					    MLXSW_SP_RESOURCE_COUNTERS);
+@@ -176,7 +173,7 @@ void mlxsw_sp_counter_pool_fini(struct mlxsw_sp *mlxsw_sp)
+ 	WARN_ON(find_first_bit(pool->usage, pool->pool_size) !=
+ 			       pool->pool_size);
+ 	WARN_ON(atomic_read(&pool->active_entries_count));
+-	kfree(pool->usage);
++	bitmap_free(pool->usage);
+ 	devlink_resource_occ_get_unregister(devlink,
+ 					    MLXSW_SP_RESOURCE_COUNTERS);
+ 	kfree(pool);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index 368fa0e5ad315..ea37f5000caa1 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -1631,16 +1631,13 @@ mlxsw_sp_mid *__mlxsw_sp_mc_alloc(struct mlxsw_sp *mlxsw_sp,
+ 				  u16 fid)
+ {
+ 	struct mlxsw_sp_mid *mid;
+-	size_t alloc_size;
+ 
+ 	mid = kzalloc(sizeof(*mid), GFP_KERNEL);
+ 	if (!mid)
+ 		return NULL;
+ 
+-	alloc_size = sizeof(unsigned long) *
+-		     BITS_TO_LONGS(mlxsw_core_max_ports(mlxsw_sp->core));
+-
+-	mid->ports_in_mid = kzalloc(alloc_size, GFP_KERNEL);
++	mid->ports_in_mid = bitmap_zalloc(mlxsw_core_max_ports(mlxsw_sp->core),
++					  GFP_KERNEL);
+ 	if (!mid->ports_in_mid)
+ 		goto err_ports_in_mid_alloc;
+ 
+@@ -1659,7 +1656,7 @@ out:
+ 	return mid;
+ 
+ err_write_mdb_entry:
+-	kfree(mid->ports_in_mid);
++	bitmap_free(mid->ports_in_mid);
+ err_ports_in_mid_alloc:
+ 	kfree(mid);
+ 	return NULL;
+@@ -1676,7 +1673,7 @@ static int mlxsw_sp_port_remove_from_mid(struct mlxsw_sp_port *mlxsw_sp_port,
+ 			 mlxsw_core_max_ports(mlxsw_sp->core))) {
+ 		err = mlxsw_sp_mc_remove_mdb_entry(mlxsw_sp, mid);
+ 		list_del(&mid->list);
+-		kfree(mid->ports_in_mid);
++		bitmap_free(mid->ports_in_mid);
+ 		kfree(mid);
+ 	}
+ 	return err;
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+index 8d51b0cb545ca..93fa10ad08a0c 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+@@ -389,7 +389,7 @@ nla_put_failure:
+ 
+ struct rtnl_link_ops rmnet_link_ops __read_mostly = {
+ 	.kind		= "rmnet",
+-	.maxtype	= __IFLA_RMNET_MAX,
++	.maxtype	= IFLA_RMNET_MAX,
+ 	.priv_size	= sizeof(struct rmnet_priv),
+ 	.setup		= rmnet_vnd_setup,
+ 	.validate	= rmnet_rtnl_validate,
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index f092f468016bd..8a4dff0566f7d 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -1500,7 +1500,7 @@ static netdev_tx_t ravb_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	struct ravb_tstamp_skb *ts_skb;
+ 	struct ravb_tx_desc *desc;
+ 	unsigned long flags;
+-	u32 dma_addr;
++	dma_addr_t dma_addr;
+ 	void *buffer;
+ 	u32 entry;
+ 	u32 len;
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index d103244313542..94e36deefe88a 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -51,7 +51,7 @@
+ #define AM65_CPSW_MAX_PORTS	8
+ 
+ #define AM65_CPSW_MIN_PACKET_SIZE	VLAN_ETH_ZLEN
+-#define AM65_CPSW_MAX_PACKET_SIZE	(VLAN_ETH_FRAME_LEN + ETH_FCS_LEN)
++#define AM65_CPSW_MAX_PACKET_SIZE	2024
+ 
+ #define AM65_CPSW_REG_CTL		0x004
+ #define AM65_CPSW_REG_STAT_PORT_EN	0x014
+@@ -1853,7 +1853,8 @@ static int am65_cpsw_nuss_init_ndev_2g(struct am65_cpsw_common *common)
+ 	ether_addr_copy(port->ndev->dev_addr, port->slave.mac_addr);
+ 
+ 	port->ndev->min_mtu = AM65_CPSW_MIN_PACKET_SIZE;
+-	port->ndev->max_mtu = AM65_CPSW_MAX_PACKET_SIZE;
++	port->ndev->max_mtu = AM65_CPSW_MAX_PACKET_SIZE -
++			      (VLAN_ETH_HLEN + ETH_FCS_LEN);
+ 	port->ndev->hw_features = NETIF_F_SG |
+ 				  NETIF_F_RXCSUM |
+ 				  NETIF_F_HW_CSUM |
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index bbbe198f83e88..2b7616f161d69 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -1345,6 +1345,7 @@ static struct phy_driver ksphy_driver[] = {
+ 	/* PHY_GBIT_FEATURES */
+ 	.driver_data	= &ksz9021_type,
+ 	.probe		= kszphy_probe,
++	.soft_reset	= genphy_soft_reset,
+ 	.config_init	= ksz9131_config_init,
+ 	.read_status	= genphy_read_status,
+ 	.ack_interrupt	= kszphy_ack_interrupt,
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index 190bc5712e965..24006ddfba89f 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -625,8 +625,8 @@ static int ath11k_core_get_rproc(struct ath11k_base *ab)
+ 
+ 	prproc = rproc_get_by_phandle(rproc_phandle);
+ 	if (!prproc) {
+-		ath11k_err(ab, "failed to get rproc\n");
+-		return -EINVAL;
++		ath11k_dbg(ab, ATH11K_DBG_AHB, "failed to get rproc, deferring\n");
++		return -EPROBE_DEFER;
+ 	}
+ 	ab_ahb->tgt_rproc = prproc;
+ 
+diff --git a/drivers/net/wireless/marvell/libertas/Kconfig b/drivers/net/wireless/marvell/libertas/Kconfig
+index 6d62ab49aa8d4..c7d02adb3eead 100644
+--- a/drivers/net/wireless/marvell/libertas/Kconfig
++++ b/drivers/net/wireless/marvell/libertas/Kconfig
+@@ -2,8 +2,6 @@
+ config LIBERTAS
+ 	tristate "Marvell 8xxx Libertas WLAN driver support"
+ 	depends on CFG80211
+-	select WIRELESS_EXT
+-	select WEXT_SPY
+ 	select LIB80211
+ 	select FW_LOADER
+ 	help
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 3d1b5d3d295ae..2f5f1ff22a601 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -1980,6 +1980,8 @@ static int mwifiex_cfg80211_start_ap(struct wiphy *wiphy,
+ 
+ 	mwifiex_set_sys_config_invalid_data(bss_cfg);
+ 
++	memcpy(bss_cfg->mac_addr, priv->curr_addr, ETH_ALEN);
++
+ 	if (params->beacon_interval)
+ 		bss_cfg->beacon_period = params->beacon_interval;
+ 	if (params->dtim_period)
+diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
+index 470d669c7f149..96c42b979e9be 100644
+--- a/drivers/net/wireless/marvell/mwifiex/fw.h
++++ b/drivers/net/wireless/marvell/mwifiex/fw.h
+@@ -177,6 +177,7 @@ enum MWIFIEX_802_11_PRIVACY_FILTER {
+ #define TLV_TYPE_STA_MAC_ADDR       (PROPRIETARY_TLV_BASE_ID + 32)
+ #define TLV_TYPE_BSSID              (PROPRIETARY_TLV_BASE_ID + 35)
+ #define TLV_TYPE_CHANNELBANDLIST    (PROPRIETARY_TLV_BASE_ID + 42)
++#define TLV_TYPE_UAP_MAC_ADDRESS    (PROPRIETARY_TLV_BASE_ID + 43)
+ #define TLV_TYPE_UAP_BEACON_PERIOD  (PROPRIETARY_TLV_BASE_ID + 44)
+ #define TLV_TYPE_UAP_DTIM_PERIOD    (PROPRIETARY_TLV_BASE_ID + 45)
+ #define TLV_TYPE_UAP_BCAST_SSID     (PROPRIETARY_TLV_BASE_ID + 48)
+diff --git a/drivers/net/wireless/marvell/mwifiex/ioctl.h b/drivers/net/wireless/marvell/mwifiex/ioctl.h
+index 3db449efa167c..cdb5b3881782f 100644
+--- a/drivers/net/wireless/marvell/mwifiex/ioctl.h
++++ b/drivers/net/wireless/marvell/mwifiex/ioctl.h
+@@ -119,6 +119,7 @@ struct mwifiex_uap_bss_param {
+ 	u8 qos_info;
+ 	u8 power_constraint;
+ 	struct mwifiex_types_wmm_info wmm_info;
++	u8 mac_addr[ETH_ALEN];
+ };
+ 
+ enum {
+diff --git a/drivers/net/wireless/marvell/mwifiex/uap_cmd.c b/drivers/net/wireless/marvell/mwifiex/uap_cmd.c
+index b48a85d791f68..2d444e660ac59 100644
+--- a/drivers/net/wireless/marvell/mwifiex/uap_cmd.c
++++ b/drivers/net/wireless/marvell/mwifiex/uap_cmd.c
+@@ -479,6 +479,7 @@ void mwifiex_config_uap_11d(struct mwifiex_private *priv,
+ static int
+ mwifiex_uap_bss_param_prepare(u8 *tlv, void *cmd_buf, u16 *param_size)
+ {
++	struct host_cmd_tlv_mac_addr *mac_tlv;
+ 	struct host_cmd_tlv_dtim_period *dtim_period;
+ 	struct host_cmd_tlv_beacon_period *beacon_period;
+ 	struct host_cmd_tlv_ssid *ssid;
+@@ -498,6 +499,13 @@ mwifiex_uap_bss_param_prepare(u8 *tlv, void *cmd_buf, u16 *param_size)
+ 	int i;
+ 	u16 cmd_size = *param_size;
+ 
++	mac_tlv = (struct host_cmd_tlv_mac_addr *)tlv;
++	mac_tlv->header.type = cpu_to_le16(TLV_TYPE_UAP_MAC_ADDRESS);
++	mac_tlv->header.len = cpu_to_le16(ETH_ALEN);
++	memcpy(mac_tlv->mac_addr, bss_cfg->mac_addr, ETH_ALEN);
++	cmd_size += sizeof(struct host_cmd_tlv_mac_addr);
++	tlv += sizeof(struct host_cmd_tlv_mac_addr);
++
+ 	if (bss_cfg->ssid.ssid_len) {
+ 		ssid = (struct host_cmd_tlv_ssid *)tlv;
+ 		ssid->header.type = cpu_to_le16(TLV_TYPE_UAP_SSID);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index 3776495fd9d03..679ae786cf450 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -164,21 +164,29 @@ static bool _rtl_pci_platform_switch_device_pci_aspm(
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 	struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
+ 
++	value &= PCI_EXP_LNKCTL_ASPMC;
++
+ 	if (rtlhal->hw_type != HARDWARE_TYPE_RTL8192SE)
+-		value |= 0x40;
++		value |= PCI_EXP_LNKCTL_CCC;
+ 
+-	pci_write_config_byte(rtlpci->pdev, 0x80, value);
++	pcie_capability_clear_and_set_word(rtlpci->pdev, PCI_EXP_LNKCTL,
++					   PCI_EXP_LNKCTL_ASPMC | value,
++					   value);
+ 
+ 	return false;
+ }
+ 
+-/*When we set 0x01 to enable clk request. Set 0x0 to disable clk req.*/
+-static void _rtl_pci_switch_clk_req(struct ieee80211_hw *hw, u8 value)
++/* @value is PCI_EXP_LNKCTL_CLKREQ_EN or 0 to enable/disable clk request. */
++static void _rtl_pci_switch_clk_req(struct ieee80211_hw *hw, u16 value)
+ {
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 	struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
+ 
+-	pci_write_config_byte(rtlpci->pdev, 0x81, value);
++	value &= PCI_EXP_LNKCTL_CLKREQ_EN;
++
++	pcie_capability_clear_and_set_word(rtlpci->pdev, PCI_EXP_LNKCTL,
++					   PCI_EXP_LNKCTL_CLKREQ_EN,
++					   value);
+ 
+ 	if (rtlhal->hw_type == HARDWARE_TYPE_RTL8192SE)
+ 		udelay(100);
+@@ -192,11 +200,8 @@ static void rtl_pci_disable_aspm(struct ieee80211_hw *hw)
+ 	struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw));
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 	u8 pcibridge_vendor = pcipriv->ndis_adapter.pcibridge_vendor;
+-	u8 num4bytes = pcipriv->ndis_adapter.num4bytes;
+ 	/*Retrieve original configuration settings. */
+ 	u8 linkctrl_reg = pcipriv->ndis_adapter.linkctrl_reg;
+-	u16 pcibridge_linkctrlreg = pcipriv->ndis_adapter.
+-				pcibridge_linkctrlreg;
+ 	u16 aspmlevel = 0;
+ 	u8 tmp_u1b = 0;
+ 
+@@ -221,16 +226,8 @@ static void rtl_pci_disable_aspm(struct ieee80211_hw *hw)
+ 	/*Set corresponding value. */
+ 	aspmlevel |= BIT(0) | BIT(1);
+ 	linkctrl_reg &= ~aspmlevel;
+-	pcibridge_linkctrlreg &= ~(BIT(0) | BIT(1));
+ 
+ 	_rtl_pci_platform_switch_device_pci_aspm(hw, linkctrl_reg);
+-	udelay(50);
+-
+-	/*4 Disable Pci Bridge ASPM */
+-	pci_write_config_byte(rtlpci->pdev, (num4bytes << 2),
+-			      pcibridge_linkctrlreg);
+-
+-	udelay(50);
+ }
+ 
+ /*Enable RTL8192SE ASPM & Enable Pci Bridge ASPM for
+@@ -245,9 +242,7 @@ static void rtl_pci_enable_aspm(struct ieee80211_hw *hw)
+ 	struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw));
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 	u8 pcibridge_vendor = pcipriv->ndis_adapter.pcibridge_vendor;
+-	u8 num4bytes = pcipriv->ndis_adapter.num4bytes;
+ 	u16 aspmlevel;
+-	u8 u_pcibridge_aspmsetting;
+ 	u8 u_device_aspmsetting;
+ 
+ 	if (!ppsc->support_aspm)
+@@ -259,25 +254,6 @@ static void rtl_pci_enable_aspm(struct ieee80211_hw *hw)
+ 		return;
+ 	}
+ 
+-	/*4 Enable Pci Bridge ASPM */
+-
+-	u_pcibridge_aspmsetting =
+-	    pcipriv->ndis_adapter.pcibridge_linkctrlreg |
+-	    rtlpci->const_hostpci_aspm_setting;
+-
+-	if (pcibridge_vendor == PCI_BRIDGE_VENDOR_INTEL)
+-		u_pcibridge_aspmsetting &= ~BIT(0);
+-
+-	pci_write_config_byte(rtlpci->pdev, (num4bytes << 2),
+-			      u_pcibridge_aspmsetting);
+-
+-	rtl_dbg(rtlpriv, COMP_INIT, DBG_LOUD,
+-		"PlatformEnableASPM(): Write reg[%x] = %x\n",
+-		(pcipriv->ndis_adapter.pcibridge_pciehdr_offset + 0x10),
+-		u_pcibridge_aspmsetting);
+-
+-	udelay(50);
+-
+ 	/*Get ASPM level (with/without Clock Req) */
+ 	aspmlevel = rtlpci->const_devicepci_aspm_setting;
+ 	u_device_aspmsetting = pcipriv->ndis_adapter.linkctrl_reg;
+@@ -291,7 +267,8 @@ static void rtl_pci_enable_aspm(struct ieee80211_hw *hw)
+ 
+ 	if (ppsc->reg_rfps_level & RT_RF_OFF_LEVL_CLK_REQ) {
+ 		_rtl_pci_switch_clk_req(hw, (ppsc->reg_rfps_level &
+-					     RT_RF_OFF_LEVL_CLK_REQ) ? 1 : 0);
++					     RT_RF_OFF_LEVL_CLK_REQ) ?
++					     PCI_EXP_LNKCTL_CLKREQ_EN : 0);
+ 		RT_SET_PS_LEVEL(ppsc, RT_RF_OFF_LEVL_CLK_REQ);
+ 	}
+ 	udelay(100);
+@@ -359,22 +336,6 @@ static bool rtl_pci_check_buddy_priv(struct ieee80211_hw *hw,
+ 	return find_buddy_priv;
+ }
+ 
+-static void rtl_pci_get_linkcontrol_field(struct ieee80211_hw *hw)
+-{
+-	struct rtl_pci_priv *pcipriv = rtl_pcipriv(hw);
+-	struct rtl_pci *rtlpci = rtl_pcidev(pcipriv);
+-	u8 capabilityoffset = pcipriv->ndis_adapter.pcibridge_pciehdr_offset;
+-	u8 linkctrl_reg;
+-	u8 num4bbytes;
+-
+-	num4bbytes = (capabilityoffset + 0x10) / 4;
+-
+-	/*Read  Link Control Register */
+-	pci_read_config_byte(rtlpci->pdev, (num4bbytes << 2), &linkctrl_reg);
+-
+-	pcipriv->ndis_adapter.pcibridge_linkctrlreg = linkctrl_reg;
+-}
+-
+ static void rtl_pci_parse_configuration(struct pci_dev *pdev,
+ 					struct ieee80211_hw *hw)
+ {
+@@ -2035,12 +1996,6 @@ static bool _rtl_pci_find_adapter(struct pci_dev *pdev,
+ 		    PCI_SLOT(bridge_pdev->devfn);
+ 		pcipriv->ndis_adapter.pcibridge_funcnum =
+ 		    PCI_FUNC(bridge_pdev->devfn);
+-		pcipriv->ndis_adapter.pcibridge_pciehdr_offset =
+-		    pci_pcie_cap(bridge_pdev);
+-		pcipriv->ndis_adapter.num4bytes =
+-		    (pcipriv->ndis_adapter.pcibridge_pciehdr_offset + 0x10) / 4;
+-
+-		rtl_pci_get_linkcontrol_field(hw);
+ 
+ 		if (pcipriv->ndis_adapter.pcibridge_vendor ==
+ 		    PCI_BRIDGE_VENDOR_AMD) {
+@@ -2057,13 +2012,11 @@ static bool _rtl_pci_find_adapter(struct pci_dev *pdev,
+ 		pdev->vendor, pcipriv->ndis_adapter.linkctrl_reg);
+ 
+ 	rtl_dbg(rtlpriv, COMP_INIT, DBG_DMESG,
+-		"pci_bridge busnumber:devnumber:funcnumber:vendor:pcie_cap:link_ctl_reg:amd %d:%d:%d:%x:%x:%x:%x\n",
++		"pci_bridge busnumber:devnumber:funcnumber:vendor:amd %d:%d:%d:%x:%x\n",
+ 		pcipriv->ndis_adapter.pcibridge_busnum,
+ 		pcipriv->ndis_adapter.pcibridge_devnum,
+ 		pcipriv->ndis_adapter.pcibridge_funcnum,
+ 		pcibridge_vendors[pcipriv->ndis_adapter.pcibridge_vendor],
+-		pcipriv->ndis_adapter.pcibridge_pciehdr_offset,
+-		pcipriv->ndis_adapter.pcibridge_linkctrlreg,
+ 		pcipriv->ndis_adapter.amd_l1_patch);
+ 
+ 	rtl_pci_parse_configuration(pdev, hw);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.h b/drivers/net/wireless/realtek/rtlwifi/pci.h
+index 866861626a0a1..d6307197dfea0 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.h
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.h
+@@ -236,11 +236,6 @@ struct mp_adapter {
+ 	u16 pcibridge_vendorid;
+ 	u16 pcibridge_deviceid;
+ 
+-	u8 num4bytes;
+-
+-	u8 pcibridge_pciehdr_offset;
+-	u8 pcibridge_linkctrlreg;
+-
+ 	bool amd_l1_patch;
+ };
+ 
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/phy.c
+index 9be032e8ec95b..3395601eaa59d 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/phy.c
+@@ -16,12 +16,6 @@ static u32 _rtl88e_phy_rf_serial_read(struct ieee80211_hw *hw,
+ static void _rtl88e_phy_rf_serial_write(struct ieee80211_hw *hw,
+ 					enum radio_path rfpath, u32 offset,
+ 					u32 data);
+-static u32 _rtl88e_phy_calculate_bit_shift(u32 bitmask)
+-{
+-	u32 i = ffs(bitmask);
+-
+-	return i ? i - 1 : 32;
+-}
+ static bool _rtl88e_phy_bb8188e_config_parafile(struct ieee80211_hw *hw);
+ static bool _rtl88e_phy_config_mac_with_headerfile(struct ieee80211_hw *hw);
+ static bool phy_config_bb_with_headerfile(struct ieee80211_hw *hw,
+@@ -51,7 +45,7 @@ u32 rtl88e_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+ 		"regaddr(%#x), bitmask(%#x)\n", regaddr, bitmask);
+ 	originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-	bitshift = _rtl88e_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	returnvalue = (originalvalue & bitmask) >> bitshift;
+ 
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+@@ -74,7 +68,7 @@ void rtl88e_phy_set_bb_reg(struct ieee80211_hw *hw,
+ 
+ 	if (bitmask != MASKDWORD) {
+ 		originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-		bitshift = _rtl88e_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = ((originalvalue & (~bitmask)) | (data << bitshift));
+ 	}
+ 
+@@ -99,7 +93,7 @@ u32 rtl88e_phy_query_rf_reg(struct ieee80211_hw *hw,
+ 
+ 
+ 	original_value = _rtl88e_phy_rf_serial_read(hw, rfpath, regaddr);
+-	bitshift = _rtl88e_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+@@ -127,7 +121,7 @@ void rtl88e_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 			original_value = _rtl88e_phy_rf_serial_read(hw,
+ 								    rfpath,
+ 								    regaddr);
+-			bitshift = _rtl88e_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.c
+index 3d29c8dbb2559..144ee780e1b6a 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.c
+@@ -17,7 +17,7 @@ u32 rtl92c_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE, "regaddr(%#x), bitmask(%#x)\n",
+ 		regaddr, bitmask);
+ 	originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-	bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	returnvalue = (originalvalue & bitmask) >> bitshift;
+ 
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+@@ -40,7 +40,7 @@ void rtl92c_phy_set_bb_reg(struct ieee80211_hw *hw,
+ 
+ 	if (bitmask != MASKDWORD) {
+ 		originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-		bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = ((originalvalue & (~bitmask)) | (data << bitshift));
+ 	}
+ 
+@@ -143,14 +143,6 @@ void _rtl92c_phy_rf_serial_write(struct ieee80211_hw *hw,
+ }
+ EXPORT_SYMBOL(_rtl92c_phy_rf_serial_write);
+ 
+-u32 _rtl92c_phy_calculate_bit_shift(u32 bitmask)
+-{
+-	u32 i = ffs(bitmask);
+-
+-	return i ? i - 1 : 32;
+-}
+-EXPORT_SYMBOL(_rtl92c_phy_calculate_bit_shift);
+-
+ static void _rtl92c_phy_bb_config_1t(struct ieee80211_hw *hw)
+ {
+ 	rtl_set_bbreg(hw, RFPGA0_TXINFO, 0x3, 0x2);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.h
+index 75afa6253ad02..e64d377dfe9e2 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.h
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.h
+@@ -196,7 +196,6 @@ bool rtl92c_phy_set_rf_power_state(struct ieee80211_hw *hw,
+ void rtl92ce_phy_set_rf_on(struct ieee80211_hw *hw);
+ void rtl92c_phy_set_io(struct ieee80211_hw *hw);
+ void rtl92c_bb_block_on(struct ieee80211_hw *hw);
+-u32 _rtl92c_phy_calculate_bit_shift(u32 bitmask);
+ long _rtl92c_phy_txpwr_idx_to_dbm(struct ieee80211_hw *hw,
+ 				  enum wireless_mode wirelessmode,
+ 				  u8 txpwridx);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.c
+index 04735da11168a..6b98e77768e96 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.c
+@@ -39,7 +39,7 @@ u32 rtl92c_phy_query_rf_reg(struct ieee80211_hw *hw,
+ 							       rfpath, regaddr);
+ 	}
+ 
+-	bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+@@ -110,7 +110,7 @@ void rtl92ce_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 			original_value = _rtl92c_phy_rf_serial_read(hw,
+ 								    rfpath,
+ 								    regaddr);
+-			bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+@@ -122,7 +122,7 @@ void rtl92ce_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 			original_value = _rtl92c_phy_fw_rf_serial_read(hw,
+ 								       rfpath,
+ 								       regaddr);
+-			bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.h
+index 7582a162bd112..c7a0d4c776f0a 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.h
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.h
+@@ -94,7 +94,6 @@ u32 _rtl92c_phy_rf_serial_read(struct ieee80211_hw *hw, enum radio_path rfpath,
+ 			       u32 offset);
+ u32 _rtl92c_phy_fw_rf_serial_read(struct ieee80211_hw *hw,
+ 				  enum radio_path rfpath, u32 offset);
+-u32 _rtl92c_phy_calculate_bit_shift(u32 bitmask);
+ void _rtl92c_phy_rf_serial_write(struct ieee80211_hw *hw,
+ 				 enum radio_path rfpath, u32 offset, u32 data);
+ void _rtl92c_phy_fw_rf_serial_write(struct ieee80211_hw *hw,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/phy.c
+index a8d9fe269f313..0b8cb7e61fd80 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/phy.c
+@@ -32,7 +32,7 @@ u32 rtl92cu_phy_query_rf_reg(struct ieee80211_hw *hw,
+ 		original_value = _rtl92c_phy_fw_rf_serial_read(hw,
+ 							       rfpath, regaddr);
+ 	}
+-	bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+ 		"regaddr(%#x), rfpath(%#x), bitmask(%#x), original_value(%#x)\n",
+@@ -56,7 +56,7 @@ void rtl92cu_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 			original_value = _rtl92c_phy_rf_serial_read(hw,
+ 								    rfpath,
+ 								    regaddr);
+-			bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+@@ -67,7 +67,7 @@ void rtl92cu_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 			original_value = _rtl92c_phy_fw_rf_serial_read(hw,
+ 								       rfpath,
+ 								       regaddr);
+-			bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
+index d3027f8fbd383..af0c7d74b3f5a 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
+@@ -160,12 +160,14 @@ static u32 targetchnl_2g[TARGET_CHNL_NUM_2G] = {
+ 	25711, 25658, 25606, 25554, 25502, 25451, 25328
+ };
+ 
+-static u32 _rtl92d_phy_calculate_bit_shift(u32 bitmask)
+-{
+-	u32 i = ffs(bitmask);
+-
+-	return i ? i - 1 : 32;
+-}
++static const u8 channel_all[59] = {
++	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
++	36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58,
++	60, 62, 64, 100, 102, 104, 106, 108, 110, 112,
++	114, 116, 118, 120, 122, 124, 126, 128,	130,
++	132, 134, 136, 138, 140, 149, 151, 153, 155,
++	157, 159, 161, 163, 165
++};
+ 
+ u32 rtl92d_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ {
+@@ -189,7 +191,7 @@ u32 rtl92d_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ 	} else {
+ 		originalvalue = rtl_read_dword(rtlpriv, regaddr);
+ 	}
+-	bitshift = _rtl92d_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	returnvalue = (originalvalue & bitmask) >> bitshift;
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+ 		"BBR MASK=0x%x Addr[0x%x]=0x%x\n",
+@@ -221,7 +223,7 @@ void rtl92d_phy_set_bb_reg(struct ieee80211_hw *hw,
+ 					dbi_direct);
+ 		else
+ 			originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-		bitshift = _rtl92d_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = ((originalvalue & (~bitmask)) | (data << bitshift));
+ 	}
+ 	if (rtlhal->during_mac1init_radioa || rtlhal->during_mac0init_radiob)
+@@ -308,7 +310,7 @@ u32 rtl92d_phy_query_rf_reg(struct ieee80211_hw *hw,
+ 		regaddr, rfpath, bitmask);
+ 	spin_lock(&rtlpriv->locks.rf_lock);
+ 	original_value = _rtl92d_phy_rf_serial_read(hw, rfpath, regaddr);
+-	bitshift = _rtl92d_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+@@ -334,7 +336,7 @@ void rtl92d_phy_set_rf_reg(struct ieee80211_hw *hw, enum radio_path rfpath,
+ 		if (bitmask != RFREG_OFFSET_MASK) {
+ 			original_value = _rtl92d_phy_rf_serial_read(hw,
+ 				rfpath, regaddr);
+-			bitshift = _rtl92d_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data = ((original_value & (~bitmask)) |
+ 				(data << bitshift));
+ 		}
+@@ -1354,14 +1356,6 @@ static void _rtl92d_phy_switch_rf_setting(struct ieee80211_hw *hw, u8 channel)
+ 
+ u8 rtl92d_get_rightchnlplace_for_iqk(u8 chnl)
+ {
+-	u8 channel_all[59] = {
+-		1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
+-		36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58,
+-		60, 62, 64, 100, 102, 104, 106, 108, 110, 112,
+-		114, 116, 118, 120, 122, 124, 126, 128,	130,
+-		132, 134, 136, 138, 140, 149, 151, 153, 155,
+-		157, 159, 161, 163, 165
+-	};
+ 	u8 place = chnl;
+ 
+ 	if (chnl > 14) {
+@@ -3216,37 +3210,28 @@ void rtl92d_phy_config_macphymode_info(struct ieee80211_hw *hw)
+ u8 rtl92d_get_chnlgroup_fromarray(u8 chnl)
+ {
+ 	u8 group;
+-	u8 channel_info[59] = {
+-		1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
+-		36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56,
+-		58, 60, 62, 64, 100, 102, 104, 106, 108,
+-		110, 112, 114, 116, 118, 120, 122, 124,
+-		126, 128, 130, 132, 134, 136, 138, 140,
+-		149, 151, 153, 155, 157, 159, 161, 163,
+-		165
+-	};
+ 
+-	if (channel_info[chnl] <= 3)
++	if (channel_all[chnl] <= 3)
+ 		group = 0;
+-	else if (channel_info[chnl] <= 9)
++	else if (channel_all[chnl] <= 9)
+ 		group = 1;
+-	else if (channel_info[chnl] <= 14)
++	else if (channel_all[chnl] <= 14)
+ 		group = 2;
+-	else if (channel_info[chnl] <= 44)
++	else if (channel_all[chnl] <= 44)
+ 		group = 3;
+-	else if (channel_info[chnl] <= 54)
++	else if (channel_all[chnl] <= 54)
+ 		group = 4;
+-	else if (channel_info[chnl] <= 64)
++	else if (channel_all[chnl] <= 64)
+ 		group = 5;
+-	else if (channel_info[chnl] <= 112)
++	else if (channel_all[chnl] <= 112)
+ 		group = 6;
+-	else if (channel_info[chnl] <= 126)
++	else if (channel_all[chnl] <= 126)
+ 		group = 7;
+-	else if (channel_info[chnl] <= 140)
++	else if (channel_all[chnl] <= 140)
+ 		group = 8;
+-	else if (channel_info[chnl] <= 153)
++	else if (channel_all[chnl] <= 153)
+ 		group = 9;
+-	else if (channel_info[chnl] <= 159)
++	else if (channel_all[chnl] <= 159)
+ 		group = 10;
+ 	else
+ 		group = 11;
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/phy.c
+index cc0bcaf13e96e..73ef602bfb01a 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/phy.c
+@@ -16,7 +16,6 @@ static u32 _rtl92ee_phy_rf_serial_read(struct ieee80211_hw *hw,
+ static void _rtl92ee_phy_rf_serial_write(struct ieee80211_hw *hw,
+ 					 enum radio_path rfpath, u32 offset,
+ 					 u32 data);
+-static u32 _rtl92ee_phy_calculate_bit_shift(u32 bitmask);
+ static bool _rtl92ee_phy_bb8192ee_config_parafile(struct ieee80211_hw *hw);
+ static bool _rtl92ee_phy_config_mac_with_headerfile(struct ieee80211_hw *hw);
+ static bool phy_config_bb_with_hdr_file(struct ieee80211_hw *hw,
+@@ -46,7 +45,7 @@ u32 rtl92ee_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+ 		"regaddr(%#x), bitmask(%#x)\n", regaddr, bitmask);
+ 	originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-	bitshift = _rtl92ee_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	returnvalue = (originalvalue & bitmask) >> bitshift;
+ 
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+@@ -68,7 +67,7 @@ void rtl92ee_phy_set_bb_reg(struct ieee80211_hw *hw, u32 regaddr,
+ 
+ 	if (bitmask != MASKDWORD) {
+ 		originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-		bitshift = _rtl92ee_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = ((originalvalue & (~bitmask)) | (data << bitshift));
+ 	}
+ 
+@@ -92,7 +91,7 @@ u32 rtl92ee_phy_query_rf_reg(struct ieee80211_hw *hw,
+ 	spin_lock(&rtlpriv->locks.rf_lock);
+ 
+ 	original_value = _rtl92ee_phy_rf_serial_read(hw , rfpath, regaddr);
+-	bitshift = _rtl92ee_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+@@ -119,7 +118,7 @@ void rtl92ee_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 
+ 	if (bitmask != RFREG_OFFSET_MASK) {
+ 		original_value = _rtl92ee_phy_rf_serial_read(hw, rfpath, addr);
+-		bitshift = _rtl92ee_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = (original_value & (~bitmask)) | (data << bitshift);
+ 	}
+ 
+@@ -201,13 +200,6 @@ static void _rtl92ee_phy_rf_serial_write(struct ieee80211_hw *hw,
+ 		pphyreg->rf3wire_offset, data_and_addr);
+ }
+ 
+-static u32 _rtl92ee_phy_calculate_bit_shift(u32 bitmask)
+-{
+-	u32 i = ffs(bitmask);
+-
+-	return i ? i - 1 : 32;
+-}
+-
+ bool rtl92ee_phy_mac_config(struct ieee80211_hw *hw)
+ {
+ 	return _rtl92ee_phy_config_mac_with_headerfile(hw);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/phy.c
+index 63283d9e74850..cd735d61f6304 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/phy.c
+@@ -14,13 +14,6 @@
+ #include "hw.h"
+ #include "table.h"
+ 
+-static u32 _rtl92s_phy_calculate_bit_shift(u32 bitmask)
+-{
+-	u32 i = ffs(bitmask);
+-
+-	return i ? i - 1 : 32;
+-}
+-
+ u32 rtl92s_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+@@ -30,7 +23,7 @@ u32 rtl92s_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ 		regaddr, bitmask);
+ 
+ 	originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-	bitshift = _rtl92s_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	returnvalue = (originalvalue & bitmask) >> bitshift;
+ 
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE, "BBR MASK=0x%x Addr[0x%x]=0x%x\n",
+@@ -52,7 +45,7 @@ void rtl92s_phy_set_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask,
+ 
+ 	if (bitmask != MASKDWORD) {
+ 		originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-		bitshift = _rtl92s_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = ((originalvalue & (~bitmask)) | (data << bitshift));
+ 	}
+ 
+@@ -160,7 +153,7 @@ u32 rtl92s_phy_query_rf_reg(struct ieee80211_hw *hw, enum radio_path rfpath,
+ 
+ 	original_value = _rtl92s_phy_rf_serial_read(hw, rfpath, regaddr);
+ 
+-	bitshift = _rtl92s_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+@@ -191,7 +184,7 @@ void rtl92s_phy_set_rf_reg(struct ieee80211_hw *hw, enum radio_path rfpath,
+ 	if (bitmask != RFREG_OFFSET_MASK) {
+ 		original_value = _rtl92s_phy_rf_serial_read(hw, rfpath,
+ 							    regaddr);
+-		bitshift = _rtl92s_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = ((original_value & (~bitmask)) | (data << bitshift));
+ 	}
+ 
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+index c0c06ab6d3e76..fb143a5f9cc3c 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+@@ -29,9 +29,10 @@ static void _rtl8821ae_phy_rf_serial_write(struct ieee80211_hw *hw,
+ 					   u32 data);
+ static u32 _rtl8821ae_phy_calculate_bit_shift(u32 bitmask)
+ {
+-	u32 i = ffs(bitmask);
++	if (WARN_ON_ONCE(!bitmask))
++		return 0;
+ 
+-	return i ? i - 1 : 32;
++	return __ffs(bitmask);
+ }
+ static bool _rtl8821ae_phy_bb8821a_config_parafile(struct ieee80211_hw *hw);
+ /*static bool _rtl8812ae_phy_config_mac_with_headerfile(struct ieee80211_hw *hw);*/
+diff --git a/drivers/net/wireless/realtek/rtlwifi/wifi.h b/drivers/net/wireless/realtek/rtlwifi/wifi.h
+index fdccfd29fd618..a89e232d6963f 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/wifi.h
++++ b/drivers/net/wireless/realtek/rtlwifi/wifi.h
+@@ -3111,4 +3111,11 @@ static inline struct ieee80211_sta *rtl_find_sta(struct ieee80211_hw *hw,
+ 	return ieee80211_find_sta(mac->vif, mac_addr);
+ }
+ 
++static inline u32 calculate_bit_shift(u32 bitmask)
++{
++	if (WARN_ON_ONCE(!bitmask))
++		return 0;
++
++	return __ffs(bitmask);
++}
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtw88/mac80211.c b/drivers/net/wireless/realtek/rtw88/mac80211.c
+index c92fba2fa4808..0a37667813475 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw88/mac80211.c
+@@ -264,9 +264,9 @@ static void rtw_ops_configure_filter(struct ieee80211_hw *hw,
+ 
+ 	if (changed_flags & FIF_ALLMULTI) {
+ 		if (*new_flags & FIF_ALLMULTI)
+-			rtwdev->hal.rcr |= BIT_AM | BIT_AB;
++			rtwdev->hal.rcr |= BIT_AM;
+ 		else
+-			rtwdev->hal.rcr &= ~(BIT_AM | BIT_AB);
++			rtwdev->hal.rcr &= ~(BIT_AM);
+ 	}
+ 	if (changed_flags & FIF_FCSFAIL) {
+ 		if (*new_flags & FIF_FCSFAIL)
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index 1c366ddf62bc5..d25bb5b9a54cd 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -463,12 +463,25 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 	}
+ 
+ 	for (shinfo->nr_frags = 0; nr_slots > 0 && shinfo->nr_frags < MAX_SKB_FRAGS;
+-	     shinfo->nr_frags++, gop++, nr_slots--) {
++	     nr_slots--) {
++		if (unlikely(!txp->size)) {
++			unsigned long flags;
++
++			spin_lock_irqsave(&queue->response_lock, flags);
++			make_tx_response(queue, txp, 0, XEN_NETIF_RSP_OKAY);
++			push_tx_responses(queue);
++			spin_unlock_irqrestore(&queue->response_lock, flags);
++			++txp;
++			continue;
++		}
++
+ 		index = pending_index(queue->pending_cons++);
+ 		pending_idx = queue->pending_ring[index];
+ 		xenvif_tx_create_map_op(queue, pending_idx, txp,
+ 				        txp == first ? extra_count : 0, gop);
+ 		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
++		++shinfo->nr_frags;
++		++gop;
+ 
+ 		if (txp == first)
+ 			txp = txfrags;
+@@ -481,20 +494,39 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 		shinfo = skb_shinfo(nskb);
+ 		frags = shinfo->frags;
+ 
+-		for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots;
+-		     shinfo->nr_frags++, txp++, gop++) {
++		for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots; ++txp) {
++			if (unlikely(!txp->size)) {
++				unsigned long flags;
++
++				spin_lock_irqsave(&queue->response_lock, flags);
++				make_tx_response(queue, txp, 0,
++						 XEN_NETIF_RSP_OKAY);
++				push_tx_responses(queue);
++				spin_unlock_irqrestore(&queue->response_lock,
++						       flags);
++				continue;
++			}
++
+ 			index = pending_index(queue->pending_cons++);
+ 			pending_idx = queue->pending_ring[index];
+ 			xenvif_tx_create_map_op(queue, pending_idx, txp, 0,
+ 						gop);
+ 			frag_set_pending_idx(&frags[shinfo->nr_frags],
+ 					     pending_idx);
++			++shinfo->nr_frags;
++			++gop;
+ 		}
+ 
+-		skb_shinfo(skb)->frag_list = nskb;
+-	} else if (nskb) {
++		if (shinfo->nr_frags) {
++			skb_shinfo(skb)->frag_list = nskb;
++			nskb = NULL;
++		}
++	}
++
++	if (nskb) {
+ 		/* A frag_list skb was allocated but it is no longer needed
+-		 * because enough slots were converted to copy ops above.
++		 * because enough slots were converted to copy ops above or some
++		 * were empty.
+ 		 */
+ 		kfree_skb(nskb);
+ 	}
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 07c41a149328a..30a642c8f5374 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2071,9 +2071,10 @@ static void nvme_update_disk_info(struct gendisk *disk,
+ 
+ 	/*
+ 	 * The block layer can't support LBA sizes larger than the page size
+-	 * yet, so catch this early and don't allow block I/O.
++	 * or smaller than a sector size yet, so catch this early and don't
++	 * allow block I/O.
+ 	 */
+-	if (ns->lba_shift > PAGE_SHIFT) {
++	if (ns->lba_shift > PAGE_SHIFT || ns->lba_shift < SECTOR_SHIFT) {
+ 		capacity = 0;
+ 		bs = (1 << 9);
+ 	}
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index c3e4d9b6f9c0d..1e56fe8e8157c 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -354,6 +354,11 @@ struct nvme_ctrl {
+ 	struct nvme_fault_inject fault_inject;
+ };
+ 
++static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl)
++{
++	return READ_ONCE(ctrl->state);
++}
++
+ enum nvme_iopolicy {
+ 	NVME_IOPOLICY_NUMA,
+ 	NVME_IOPOLICY_RR,
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 7ce22d173fc79..116ae6fd35e2d 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -18,6 +18,7 @@
+ #include "nvmet.h"
+ 
+ #define NVMET_TCP_DEF_INLINE_DATA_SIZE	(4 * PAGE_SIZE)
++#define NVMET_TCP_MAXH2CDATA		0x400000 /* 16M arbitrary limit */
+ 
+ /* Define the socket priority to use for connections were it is desirable
+  * that the NIC consider performing optimized packet processing or filtering.
+@@ -872,7 +873,7 @@ static int nvmet_tcp_handle_icreq(struct nvmet_tcp_queue *queue)
+ 	icresp->hdr.pdo = 0;
+ 	icresp->hdr.plen = cpu_to_le32(icresp->hdr.hlen);
+ 	icresp->pfv = cpu_to_le16(NVME_TCP_PFV_1_0);
+-	icresp->maxdata = cpu_to_le32(0x400000); /* 16M arbitrary limit */
++	icresp->maxdata = cpu_to_le32(NVMET_TCP_MAXH2CDATA);
+ 	icresp->cpda = 0;
+ 	if (queue->hdr_digest)
+ 		icresp->digest |= NVME_TCP_HDR_DIGEST_ENABLE;
+@@ -918,6 +919,7 @@ static int nvmet_tcp_handle_h2c_data_pdu(struct nvmet_tcp_queue *queue)
+ {
+ 	struct nvme_tcp_data_pdu *data = &queue->pdu.data;
+ 	struct nvmet_tcp_cmd *cmd;
++	unsigned int exp_data_len;
+ 
+ 	if (likely(queue->nr_cmds)) {
+ 		if (unlikely(data->ttag >= queue->nr_cmds)) {
+@@ -936,12 +938,24 @@ static int nvmet_tcp_handle_h2c_data_pdu(struct nvmet_tcp_queue *queue)
+ 			data->ttag, le32_to_cpu(data->data_offset),
+ 			cmd->rbytes_done);
+ 		/* FIXME: use path and transport errors */
+-		nvmet_req_complete(&cmd->req,
+-			NVME_SC_INVALID_FIELD | NVME_SC_DNR);
++		nvmet_tcp_fatal_error(queue);
+ 		return -EPROTO;
+ 	}
+ 
++	exp_data_len = le32_to_cpu(data->hdr.plen) -
++			nvmet_tcp_hdgst_len(queue) -
++			nvmet_tcp_ddgst_len(queue) -
++			sizeof(*data);
++
+ 	cmd->pdu_len = le32_to_cpu(data->data_length);
++	if (unlikely(cmd->pdu_len != exp_data_len ||
++		     cmd->pdu_len == 0 ||
++		     cmd->pdu_len > NVMET_TCP_MAXH2CDATA)) {
++		pr_err("H2CData PDU len %u is invalid\n", cmd->pdu_len);
++		/* FIXME: use proper transport errors */
++		nvmet_tcp_fatal_error(queue);
++		return -EPROTO;
++	}
+ 	cmd->pdu_recv = 0;
+ 	nvmet_tcp_map_pdu_iovec(cmd);
+ 	queue->cmd = cmd;
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index eb02974f36bdb..0e428880d88bd 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -1670,6 +1670,7 @@ int of_parse_phandle_with_args_map(const struct device_node *np,
+ 		out_args->np = new;
+ 		of_node_put(cur);
+ 		cur = new;
++		new = NULL;
+ 	}
+ put:
+ 	of_node_put(cur);
+diff --git a/drivers/of/unittest-data/tests-phandle.dtsi b/drivers/of/unittest-data/tests-phandle.dtsi
+index 6b33be4c4416c..aa0d7027ffa68 100644
+--- a/drivers/of/unittest-data/tests-phandle.dtsi
++++ b/drivers/of/unittest-data/tests-phandle.dtsi
+@@ -38,6 +38,13 @@
+ 				phandle-map-pass-thru = <0x0 0xf0>;
+ 			};
+ 
++			provider5: provider5 {
++				#phandle-cells = <2>;
++				phandle-map = <2 7 &provider4 2 3>;
++				phandle-map-mask = <0xff 0xf>;
++				phandle-map-pass-thru = <0x0 0xf0>;
++			};
++
+ 			consumer-a {
+ 				phandle-list =	<&provider1 1>,
+ 						<&provider2 2 0>,
+@@ -64,7 +71,8 @@
+ 						<&provider4 4 0x100>,
+ 						<&provider4 0 0x61>,
+ 						<&provider0>,
+-						<&provider4 19 0x20>;
++						<&provider4 19 0x20>,
++						<&provider5 2 7>;
+ 				phandle-list-bad-phandle = <12345678 0 0>;
+ 				phandle-list-bad-args = <&provider2 1 0>,
+ 							<&provider4 0>;
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 412d7ddb3b8b2..f9083c868a36d 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -447,6 +447,9 @@ static void __init of_unittest_parse_phandle_with_args(void)
+ 
+ 		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
+ 			 i, args.np, rc);
++
++		if (rc == 0)
++			of_node_put(args.np);
+ 	}
+ 
+ 	/* Check for missing list property */
+@@ -536,8 +539,9 @@ static void __init of_unittest_parse_phandle_with_args(void)
+ 
+ static void __init of_unittest_parse_phandle_with_args_map(void)
+ {
+-	struct device_node *np, *p0, *p1, *p2, *p3;
++	struct device_node *np, *p[6] = {};
+ 	struct of_phandle_args args;
++	unsigned int prefs[6];
+ 	int i, rc;
+ 
+ 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
+@@ -546,34 +550,24 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 		return;
+ 	}
+ 
+-	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
+-	if (!p0) {
+-		pr_err("missing testcase data\n");
+-		return;
+-	}
+-
+-	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
+-	if (!p1) {
+-		pr_err("missing testcase data\n");
+-		return;
+-	}
+-
+-	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
+-	if (!p2) {
+-		pr_err("missing testcase data\n");
+-		return;
+-	}
+-
+-	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
+-	if (!p3) {
+-		pr_err("missing testcase data\n");
+-		return;
++	p[0] = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
++	p[1] = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
++	p[2] = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
++	p[3] = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
++	p[4] = of_find_node_by_path("/testcase-data/phandle-tests/provider4");
++	p[5] = of_find_node_by_path("/testcase-data/phandle-tests/provider5");
++	for (i = 0; i < ARRAY_SIZE(p); ++i) {
++		if (!p[i]) {
++			pr_err("missing testcase data\n");
++			return;
++		}
++		prefs[i] = kref_read(&p[i]->kobj.kref);
+ 	}
+ 
+ 	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
+-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
++	unittest(rc == 8, "of_count_phandle_with_args() returned %i, expected 8\n", rc);
+ 
+-	for (i = 0; i < 8; i++) {
++	for (i = 0; i < 9; i++) {
+ 		bool passed = true;
+ 
+ 		memset(&args, 0, sizeof(args));
+@@ -584,13 +578,13 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 		switch (i) {
+ 		case 0:
+ 			passed &= !rc;
+-			passed &= (args.np == p1);
++			passed &= (args.np == p[1]);
+ 			passed &= (args.args_count == 1);
+ 			passed &= (args.args[0] == 1);
+ 			break;
+ 		case 1:
+ 			passed &= !rc;
+-			passed &= (args.np == p3);
++			passed &= (args.np == p[3]);
+ 			passed &= (args.args_count == 3);
+ 			passed &= (args.args[0] == 2);
+ 			passed &= (args.args[1] == 5);
+@@ -601,28 +595,36 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 			break;
+ 		case 3:
+ 			passed &= !rc;
+-			passed &= (args.np == p0);
++			passed &= (args.np == p[0]);
+ 			passed &= (args.args_count == 0);
+ 			break;
+ 		case 4:
+ 			passed &= !rc;
+-			passed &= (args.np == p1);
++			passed &= (args.np == p[1]);
+ 			passed &= (args.args_count == 1);
+ 			passed &= (args.args[0] == 3);
+ 			break;
+ 		case 5:
+ 			passed &= !rc;
+-			passed &= (args.np == p0);
++			passed &= (args.np == p[0]);
+ 			passed &= (args.args_count == 0);
+ 			break;
+ 		case 6:
+ 			passed &= !rc;
+-			passed &= (args.np == p2);
++			passed &= (args.np == p[2]);
+ 			passed &= (args.args_count == 2);
+ 			passed &= (args.args[0] == 15);
+ 			passed &= (args.args[1] == 0x20);
+ 			break;
+ 		case 7:
++			passed &= !rc;
++			passed &= (args.np == p[3]);
++			passed &= (args.args_count == 3);
++			passed &= (args.args[0] == 2);
++			passed &= (args.args[1] == 5);
++			passed &= (args.args[2] == 3);
++			break;
++		case 8:
+ 			passed &= (rc == -ENOENT);
+ 			break;
+ 		default:
+@@ -631,6 +633,9 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 
+ 		unittest(passed, "index %i - data error on node %s rc=%i\n",
+ 			 i, args.np->full_name, rc);
++
++		if (rc == 0)
++			of_node_put(args.np);
+ 	}
+ 
+ 	/* Check for missing list property */
+@@ -677,6 +682,13 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 		   "OF: /testcase-data/phandle-tests/consumer-b: #phandle-cells = 2 found -1");
+ 
+ 	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
++
++	for (i = 0; i < ARRAY_SIZE(p); ++i) {
++		unittest(prefs[i] == kref_read(&p[i]->kobj.kref),
++			 "provider%d: expected:%d got:%d\n",
++			 i, prefs[i], kref_read(&p[i]->kobj.kref));
++		of_node_put(p[i]);
++	}
+ }
+ 
+ static void __init of_unittest_property_string(void)
+diff --git a/drivers/parport/parport_serial.c b/drivers/parport/parport_serial.c
+index 96b888bb49c6e..46f2132706899 100644
+--- a/drivers/parport/parport_serial.c
++++ b/drivers/parport/parport_serial.c
+@@ -65,6 +65,10 @@ enum parport_pc_pci_cards {
+ 	sunix_5069a,
+ 	sunix_5079a,
+ 	sunix_5099a,
++	brainboxes_uc257,
++	brainboxes_is300,
++	brainboxes_uc414,
++	brainboxes_px263,
+ };
+ 
+ /* each element directly indexed from enum list, above */
+@@ -158,6 +162,10 @@ static struct parport_pc_pci cards[] = {
+ 	/* sunix_5069a */		{ 1, { { 1, 2 }, } },
+ 	/* sunix_5079a */		{ 1, { { 1, 2 }, } },
+ 	/* sunix_5099a */		{ 1, { { 1, 2 }, } },
++	/* brainboxes_uc257 */	{ 1, { { 3, -1 }, } },
++	/* brainboxes_is300 */	{ 1, { { 3, -1 }, } },
++	/* brainboxes_uc414 */  { 1, { { 3, -1 }, } },
++	/* brainboxes_px263 */	{ 1, { { 3, -1 }, } },
+ };
+ 
+ static struct pci_device_id parport_serial_pci_tbl[] = {
+@@ -277,6 +285,38 @@ static struct pci_device_id parport_serial_pci_tbl[] = {
+ 	{ PCI_VENDOR_ID_SUNIX, PCI_DEVICE_ID_SUNIX_1999, PCI_VENDOR_ID_SUNIX,
+ 	  0x0104, 0, 0, sunix_5099a },
+ 
++	/* Brainboxes UC-203 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0bc1,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0bc2,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++
++	/* Brainboxes UC-257 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0861,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0862,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0863,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++
++	/* Brainboxes UC-414 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0e61,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc414 },
++
++	/* Brainboxes UC-475 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0981,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0982,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++
++	/* Brainboxes IS-300/IS-500 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0da0,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_is300 },
++
++	/* Brainboxes PX-263/PX-295 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x402c,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_px263 },
++
+ 	{ 0, } /* terminate list */
+ };
+ MODULE_DEVICE_TABLE(pci,parport_serial_pci_tbl);
+@@ -542,6 +582,30 @@ static struct pciserial_board pci_parport_serial_boards[] = {
+ 		.base_baud      = 921600,
+ 		.uart_offset	= 0x8,
+ 	},
++	[brainboxes_uc257] = {
++		.flags		= FL_BASE2,
++		.num_ports	= 2,
++		.base_baud	= 115200,
++		.uart_offset	= 8,
++	},
++	[brainboxes_is300] = {
++		.flags		= FL_BASE2,
++		.num_ports	= 1,
++		.base_baud	= 115200,
++		.uart_offset	= 8,
++	},
++	[brainboxes_uc414] = {
++		.flags		= FL_BASE2,
++		.num_ports	= 4,
++		.base_baud	= 115200,
++		.uart_offset	= 8,
++	},
++	[brainboxes_px263] = {
++		.flags		= FL_BASE2,
++		.num_ports	= 4,
++		.base_baud	= 921600,
++		.uart_offset	= 8,
++	},
+ };
+ 
+ struct parport_serial_private {
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index afaea201a5afc..d3c3ca3ef4bae 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -1261,7 +1261,16 @@ static int ks_pcie_probe(struct platform_device *pdev)
+ 		goto err_link;
+ 	}
+ 
++	/* Obtain references to the PHYs */
++	for (i = 0; i < num_lanes; i++)
++		phy_pm_runtime_get_sync(ks_pcie->phy[i]);
++
+ 	ret = ks_pcie_enable_phy(ks_pcie);
++
++	/* Release references to the PHYs */
++	for (i = 0; i < num_lanes; i++)
++		phy_pm_runtime_put_sync(ks_pcie->phy[i]);
++
+ 	if (ret) {
+ 		dev_err(dev, "failed to enable phy\n");
+ 		goto err_link;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 500905dad6434..21661feeeeb65 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4551,17 +4551,21 @@ static int pci_quirk_xgene_acs(struct pci_dev *dev, u16 acs_flags)
+  * But the implementation could block peer-to-peer transactions between them
+  * and provide ACS-like functionality.
+  */
+-static int  pci_quirk_zhaoxin_pcie_ports_acs(struct pci_dev *dev, u16 acs_flags)
++static int pci_quirk_zhaoxin_pcie_ports_acs(struct pci_dev *dev, u16 acs_flags)
+ {
+ 	if (!pci_is_pcie(dev) ||
+ 	    ((pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) &&
+ 	     (pci_pcie_type(dev) != PCI_EXP_TYPE_DOWNSTREAM)))
+ 		return -ENOTTY;
+ 
++	/*
++	 * Future Zhaoxin Root Ports and Switch Downstream Ports will
++	 * implement ACS capability in accordance with the PCIe Spec.
++	 */
+ 	switch (dev->device) {
+ 	case 0x0710 ... 0x071e:
+ 	case 0x0721:
+-	case 0x0723 ... 0x0732:
++	case 0x0723 ... 0x0752:
+ 		return pci_acs_ctrl_enabled(acs_flags,
+ 			PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+ 	}
+diff --git a/drivers/pinctrl/cirrus/Kconfig b/drivers/pinctrl/cirrus/Kconfig
+index 530426a74f751..b3cea8d56c4f6 100644
+--- a/drivers/pinctrl/cirrus/Kconfig
++++ b/drivers/pinctrl/cirrus/Kconfig
+@@ -1,7 +1,8 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ config PINCTRL_LOCHNAGAR
+ 	tristate "Cirrus Logic Lochnagar pinctrl driver"
+-	depends on MFD_LOCHNAGAR
++	# Avoid clash caused by MIPS defining RST, which is used in the driver
++	depends on MFD_LOCHNAGAR && !MIPS
+ 	select GPIOLIB
+ 	select PINMUX
+ 	select PINCONF
+diff --git a/drivers/power/supply/cw2015_battery.c b/drivers/power/supply/cw2015_battery.c
+index de1fa71be1e83..d1071dbb904ee 100644
+--- a/drivers/power/supply/cw2015_battery.c
++++ b/drivers/power/supply/cw2015_battery.c
+@@ -490,7 +490,7 @@ static int cw_battery_get_property(struct power_supply *psy,
+ 
+ 	case POWER_SUPPLY_PROP_TIME_TO_EMPTY_NOW:
+ 		if (cw_battery_valid_time_to_empty(cw_bat))
+-			val->intval = cw_bat->time_to_empty;
++			val->intval = cw_bat->time_to_empty * 60;
+ 		else
+ 			val->intval = 0;
+ 		break;
+diff --git a/drivers/pwm/pwm-jz4740.c b/drivers/pwm/pwm-jz4740.c
+index 00c642fa2eed1..db951a37c0488 100644
+--- a/drivers/pwm/pwm-jz4740.c
++++ b/drivers/pwm/pwm-jz4740.c
+@@ -60,9 +60,10 @@ static int jz4740_pwm_request(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	snprintf(name, sizeof(name), "timer%u", pwm->hwpwm);
+ 
+ 	clk = clk_get(chip->dev, name);
+-	if (IS_ERR(clk))
+-		return dev_err_probe(chip->dev, PTR_ERR(clk),
+-				     "Failed to get clock\n");
++	if (IS_ERR(clk)) {
++		dev_err(chip->dev, "error %pe: Failed to get clock\n", clk);
++		return PTR_ERR(clk);
++	}
+ 
+ 	err = clk_prepare_enable(clk);
+ 	if (err < 0) {
+diff --git a/drivers/pwm/pwm-stm32.c b/drivers/pwm/pwm-stm32.c
+index d3be944f2ae96..69b7bc6049466 100644
+--- a/drivers/pwm/pwm-stm32.c
++++ b/drivers/pwm/pwm-stm32.c
+@@ -115,14 +115,14 @@ static int stm32_pwm_raw_capture(struct stm32_pwm *priv, struct pwm_device *pwm,
+ 	int ret;
+ 
+ 	/* Ensure registers have been updated, enable counter and capture */
+-	regmap_update_bits(priv->regmap, TIM_EGR, TIM_EGR_UG, TIM_EGR_UG);
+-	regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, TIM_CR1_CEN);
++	regmap_set_bits(priv->regmap, TIM_EGR, TIM_EGR_UG);
++	regmap_set_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN);
+ 
+ 	/* Use cc1 or cc3 DMA resp for PWM input channels 1 & 2 or 3 & 4 */
+ 	dma_id = pwm->hwpwm < 2 ? STM32_TIMERS_DMA_CH1 : STM32_TIMERS_DMA_CH3;
+ 	ccen = pwm->hwpwm < 2 ? TIM_CCER_CC12E : TIM_CCER_CC34E;
+ 	ccr = pwm->hwpwm < 2 ? TIM_CCR1 : TIM_CCR3;
+-	regmap_update_bits(priv->regmap, TIM_CCER, ccen, ccen);
++	regmap_set_bits(priv->regmap, TIM_CCER, ccen);
+ 
+ 	/*
+ 	 * Timer DMA burst mode. Request 2 registers, 2 bursts, to get both
+@@ -160,8 +160,8 @@ static int stm32_pwm_raw_capture(struct stm32_pwm *priv, struct pwm_device *pwm,
+ 	}
+ 
+ stop:
+-	regmap_update_bits(priv->regmap, TIM_CCER, ccen, 0);
+-	regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, 0);
++	regmap_clear_bits(priv->regmap, TIM_CCER, ccen);
++	regmap_clear_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN);
+ 
+ 	return ret;
+ }
+@@ -359,7 +359,7 @@ static int stm32_pwm_config(struct stm32_pwm *priv, int ch,
+ 
+ 	regmap_write(priv->regmap, TIM_PSC, prescaler);
+ 	regmap_write(priv->regmap, TIM_ARR, prd - 1);
+-	regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_ARPE, TIM_CR1_ARPE);
++	regmap_set_bits(priv->regmap, TIM_CR1, TIM_CR1_ARPE);
+ 
+ 	/* Calculate the duty cycles */
+ 	dty = prd * duty_ns;
+@@ -377,7 +377,7 @@ static int stm32_pwm_config(struct stm32_pwm *priv, int ch,
+ 	else
+ 		regmap_update_bits(priv->regmap, TIM_CCMR2, mask, ccmr);
+ 
+-	regmap_update_bits(priv->regmap, TIM_BDTR, TIM_BDTR_MOE, TIM_BDTR_MOE);
++	regmap_set_bits(priv->regmap, TIM_BDTR, TIM_BDTR_MOE);
+ 
+ 	return 0;
+ }
+@@ -411,13 +411,13 @@ static int stm32_pwm_enable(struct stm32_pwm *priv, int ch)
+ 	if (priv->have_complementary_output)
+ 		mask |= TIM_CCER_CC1NE << (ch * 4);
+ 
+-	regmap_update_bits(priv->regmap, TIM_CCER, mask, mask);
++	regmap_set_bits(priv->regmap, TIM_CCER, mask);
+ 
+ 	/* Make sure that registers are updated */
+-	regmap_update_bits(priv->regmap, TIM_EGR, TIM_EGR_UG, TIM_EGR_UG);
++	regmap_set_bits(priv->regmap, TIM_EGR, TIM_EGR_UG);
+ 
+ 	/* Enable controller */
+-	regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, TIM_CR1_CEN);
++	regmap_set_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN);
+ 
+ 	return 0;
+ }
+@@ -431,11 +431,11 @@ static void stm32_pwm_disable(struct stm32_pwm *priv, int ch)
+ 	if (priv->have_complementary_output)
+ 		mask |= TIM_CCER_CC1NE << (ch * 4);
+ 
+-	regmap_update_bits(priv->regmap, TIM_CCER, mask, 0);
++	regmap_clear_bits(priv->regmap, TIM_CCER, mask);
+ 
+ 	/* When all channels are disabled, we can disable the controller */
+ 	if (!active_channels(priv))
+-		regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, 0);
++		regmap_clear_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN);
+ 
+ 	clk_disable(priv->clk);
+ }
+@@ -568,41 +568,30 @@ static void stm32_pwm_detect_complementary(struct stm32_pwm *priv)
+ 	 * If complementary bit doesn't exist writing 1 will have no
+ 	 * effect so we can detect it.
+ 	 */
+-	regmap_update_bits(priv->regmap,
+-			   TIM_CCER, TIM_CCER_CC1NE, TIM_CCER_CC1NE);
++	regmap_set_bits(priv->regmap, TIM_CCER, TIM_CCER_CC1NE);
+ 	regmap_read(priv->regmap, TIM_CCER, &ccer);
+-	regmap_update_bits(priv->regmap, TIM_CCER, TIM_CCER_CC1NE, 0);
++	regmap_clear_bits(priv->regmap, TIM_CCER, TIM_CCER_CC1NE);
+ 
+ 	priv->have_complementary_output = (ccer != 0);
+ }
+ 
+-static int stm32_pwm_detect_channels(struct stm32_pwm *priv)
++static unsigned int stm32_pwm_detect_channels(struct stm32_pwm *priv,
++					      unsigned int *num_enabled)
+ {
+-	u32 ccer;
+-	int npwm = 0;
++	u32 ccer, ccer_backup;
+ 
+ 	/*
+ 	 * If channels enable bits don't exist writing 1 will have no
+ 	 * effect so we can detect and count them.
+ 	 */
+-	regmap_update_bits(priv->regmap,
+-			   TIM_CCER, TIM_CCER_CCXE, TIM_CCER_CCXE);
++	regmap_read(priv->regmap, TIM_CCER, &ccer_backup);
++	regmap_set_bits(priv->regmap, TIM_CCER, TIM_CCER_CCXE);
+ 	regmap_read(priv->regmap, TIM_CCER, &ccer);
+-	regmap_update_bits(priv->regmap, TIM_CCER, TIM_CCER_CCXE, 0);
+-
+-	if (ccer & TIM_CCER_CC1E)
+-		npwm++;
++	regmap_write(priv->regmap, TIM_CCER, ccer_backup);
+ 
+-	if (ccer & TIM_CCER_CC2E)
+-		npwm++;
++	*num_enabled = hweight32(ccer_backup & TIM_CCER_CCXE);
+ 
+-	if (ccer & TIM_CCER_CC3E)
+-		npwm++;
+-
+-	if (ccer & TIM_CCER_CC4E)
+-		npwm++;
+-
+-	return npwm;
++	return hweight32(ccer & TIM_CCER_CCXE);
+ }
+ 
+ static int stm32_pwm_probe(struct platform_device *pdev)
+@@ -611,6 +600,8 @@ static int stm32_pwm_probe(struct platform_device *pdev)
+ 	struct device_node *np = dev->of_node;
+ 	struct stm32_timers *ddata = dev_get_drvdata(pdev->dev.parent);
+ 	struct stm32_pwm *priv;
++	unsigned int num_enabled;
++	unsigned int i;
+ 	int ret;
+ 
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+@@ -636,7 +627,11 @@ static int stm32_pwm_probe(struct platform_device *pdev)
+ 	priv->chip.base = -1;
+ 	priv->chip.dev = dev;
+ 	priv->chip.ops = &stm32pwm_ops;
+-	priv->chip.npwm = stm32_pwm_detect_channels(priv);
++	priv->chip.npwm = stm32_pwm_detect_channels(priv, &num_enabled);
++
++	/* Initialize clock refcount to number of enabled PWM channels. */
++	for (i = 0; i < num_enabled; i++)
++		clk_enable(priv->clk);
+ 
+ 	ret = pwmchip_add(&priv->chip);
+ 	if (ret < 0)
+diff --git a/drivers/reset/hisilicon/hi6220_reset.c b/drivers/reset/hisilicon/hi6220_reset.c
+index 19926506d0335..2a7688fa9b9ba 100644
+--- a/drivers/reset/hisilicon/hi6220_reset.c
++++ b/drivers/reset/hisilicon/hi6220_reset.c
+@@ -164,7 +164,7 @@ static int hi6220_reset_probe(struct platform_device *pdev)
+ 	if (!data)
+ 		return -ENOMEM;
+ 
+-	type = (enum hi6220_reset_ctrl_type)of_device_get_match_data(dev);
++	type = (uintptr_t)of_device_get_match_data(dev);
+ 
+ 	regmap = syscon_node_to_regmap(np);
+ 	if (IS_ERR(regmap)) {
+diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
+index a4f6f2e62b1dc..b5b36217b15eb 100644
+--- a/drivers/s390/block/scm_blk.c
++++ b/drivers/s390/block/scm_blk.c
+@@ -18,6 +18,7 @@
+ #include <linux/genhd.h>
+ #include <linux/slab.h>
+ #include <linux/list.h>
++#include <linux/io.h>
+ #include <asm/eadm.h>
+ #include "scm_blk.h"
+ 
+@@ -131,7 +132,7 @@ static void scm_request_done(struct scm_request *scmrq)
+ 
+ 	for (i = 0; i < nr_requests_per_io && scmrq->request[i]; i++) {
+ 		msb = &scmrq->aob->msb[i];
+-		aidaw = msb->data_addr;
++		aidaw = (u64)phys_to_virt(msb->data_addr);
+ 
+ 		if ((msb->flags & MSB_FLAG_IDA) && aidaw &&
+ 		    IS_ALIGNED(aidaw, PAGE_SIZE))
+@@ -196,12 +197,12 @@ static int scm_request_prepare(struct scm_request *scmrq)
+ 	msb->scm_addr = scmdev->address + ((u64) blk_rq_pos(req) << 9);
+ 	msb->oc = (rq_data_dir(req) == READ) ? MSB_OC_READ : MSB_OC_WRITE;
+ 	msb->flags |= MSB_FLAG_IDA;
+-	msb->data_addr = (u64) aidaw;
++	msb->data_addr = (u64)virt_to_phys(aidaw);
+ 
+ 	rq_for_each_segment(bv, req, iter) {
+ 		WARN_ON(bv.bv_offset);
+ 		msb->blk_count += bv.bv_len >> 12;
+-		aidaw->data_addr = (u64) page_address(bv.bv_page);
++		aidaw->data_addr = virt_to_phys(page_address(bv.bv_page));
+ 		aidaw++;
+ 	}
+ 
+diff --git a/drivers/scsi/fnic/fnic_debugfs.c b/drivers/scsi/fnic/fnic_debugfs.c
+index 6c049360f136b..56d52efc7314e 100644
+--- a/drivers/scsi/fnic/fnic_debugfs.c
++++ b/drivers/scsi/fnic/fnic_debugfs.c
+@@ -67,9 +67,10 @@ int fnic_debugfs_init(void)
+ 		fc_trc_flag->fnic_trace = 2;
+ 		fc_trc_flag->fc_trace = 3;
+ 		fc_trc_flag->fc_clear = 4;
++		return 0;
+ 	}
+ 
+-	return 0;
++	return -ENOMEM;
+ }
+ 
+ /*
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index e5b9229310a0f..8e5d23c6b8de5 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -1584,10 +1584,10 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ 		queue_work(hisi_hba->wq, &hisi_hba->debugfs_work);
+ 
+ 	if (!hisi_hba->hw->soft_reset)
+-		return -1;
++		return -ENOENT;
+ 
+ 	if (test_and_set_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
+-		return -1;
++		return -EPERM;
+ 
+ 	dev_info(dev, "controller resetting...\n");
+ 	hisi_sas_controller_reset_prepare(hisi_hba);
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 0d21c64efa817..f03a09c9e865e 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -3479,7 +3479,7 @@ static int _suspend_v3_hw(struct device *device)
+ 	}
+ 
+ 	if (test_and_set_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
+-		return -1;
++		return -EPERM;
+ 
+ 	scsi_block_requests(shost);
+ 	set_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
+diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
+index 5fd9b515f6f14..74ea6b6b5f746 100644
+--- a/drivers/spi/Kconfig
++++ b/drivers/spi/Kconfig
+@@ -957,9 +957,10 @@ config SPI_ZYNQ_QSPI
+ 
+ config SPI_ZYNQMP_GQSPI
+ 	tristate "Xilinx ZynqMP GQSPI controller"
+-	depends on (SPI_MASTER && HAS_DMA) || COMPILE_TEST
++	depends on (SPI_MEM && HAS_DMA) || COMPILE_TEST
+ 	help
+ 	  Enables Xilinx GQSPI controller driver for Zynq UltraScale+ MPSoC.
++	  This controller only supports SPI memory interface.
+ 
+ config SPI_AMD
+ 	tristate "AMD SPI controller"
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index b2579af0e3eb0..35d30378256f6 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -30,12 +30,15 @@
+ 
+ #include <asm/unaligned.h>
+ 
++#define SH_MSIOF_FLAG_FIXED_DTDL_200	BIT(0)
++
+ struct sh_msiof_chipdata {
+ 	u32 bits_per_word_mask;
+ 	u16 tx_fifo_size;
+ 	u16 rx_fifo_size;
+ 	u16 ctlr_flags;
+ 	u16 min_div_pow;
++	u32 flags;
+ };
+ 
+ struct sh_msiof_spi_priv {
+@@ -1069,6 +1072,16 @@ static const struct sh_msiof_chipdata rcar_gen3_data = {
+ 	.min_div_pow = 1,
+ };
+ 
++static const struct sh_msiof_chipdata rcar_r8a7795_data = {
++	.bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(16) |
++			      SPI_BPW_MASK(24) | SPI_BPW_MASK(32),
++	.tx_fifo_size = 64,
++	.rx_fifo_size = 64,
++	.ctlr_flags = SPI_CONTROLLER_MUST_TX,
++	.min_div_pow = 1,
++	.flags = SH_MSIOF_FLAG_FIXED_DTDL_200,
++};
++
+ static const struct of_device_id sh_msiof_match[] = {
+ 	{ .compatible = "renesas,sh-mobile-msiof", .data = &sh_data },
+ 	{ .compatible = "renesas,msiof-r8a7743",   .data = &rcar_gen2_data },
+@@ -1079,6 +1092,7 @@ static const struct of_device_id sh_msiof_match[] = {
+ 	{ .compatible = "renesas,msiof-r8a7793",   .data = &rcar_gen2_data },
+ 	{ .compatible = "renesas,msiof-r8a7794",   .data = &rcar_gen2_data },
+ 	{ .compatible = "renesas,rcar-gen2-msiof", .data = &rcar_gen2_data },
++	{ .compatible = "renesas,msiof-r8a7795",   .data = &rcar_r8a7795_data },
+ 	{ .compatible = "renesas,msiof-r8a7796",   .data = &rcar_gen3_data },
+ 	{ .compatible = "renesas,rcar-gen3-msiof", .data = &rcar_gen3_data },
+ 	{ .compatible = "renesas,sh-msiof",        .data = &sh_data }, /* Deprecated */
+@@ -1274,6 +1288,9 @@ static int sh_msiof_spi_probe(struct platform_device *pdev)
+ 		return -ENXIO;
+ 	}
+ 
++	if (chipdata->flags & SH_MSIOF_FLAG_FIXED_DTDL_200)
++		info->dtdl = 200;
++
+ 	if (info->mode == MSIOF_SPI_SLAVE)
+ 		ctlr = spi_alloc_slave(&pdev->dev,
+ 				       sizeof(struct sh_msiof_spi_priv));
+diff --git a/drivers/staging/media/rkisp1/rkisp1-dev.c b/drivers/staging/media/rkisp1/rkisp1-dev.c
+index 06de5540c8af4..663b2efec9b09 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-dev.c
++++ b/drivers/staging/media/rkisp1/rkisp1-dev.c
+@@ -518,7 +518,7 @@ static int rkisp1_probe(struct platform_device *pdev)
+ 
+ 	ret = v4l2_device_register(rkisp1->dev, &rkisp1->v4l2_dev);
+ 	if (ret)
+-		return ret;
++		goto err_pm_runtime_disable;
+ 
+ 	ret = media_device_register(&rkisp1->media_dev);
+ 	if (ret) {
+@@ -538,6 +538,7 @@ err_unreg_media_dev:
+ 	media_device_unregister(&rkisp1->media_dev);
+ err_unreg_v4l2_dev:
+ 	v4l2_device_unregister(&rkisp1->v4l2_dev);
++err_pm_runtime_disable:
+ 	pm_runtime_disable(&pdev->dev);
+ 	return ret;
+ }
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index bd4118f1b6944..25765ebb756ae 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -1497,7 +1497,7 @@ static int omap8250_remove(struct platform_device *pdev)
+ 
+ 	err = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (err)
+-		return err;
++		dev_err(&pdev->dev, "Failed to resume hardware\n");
+ 
+ 	serial8250_unregister_port(priv->line);
+ 	priv->line = -ENODEV;
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 164597e2e0044..6e49928bb8646 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -461,13 +461,13 @@ static void imx_uart_stop_tx(struct uart_port *port)
+ 	ucr1 = imx_uart_readl(sport, UCR1);
+ 	imx_uart_writel(sport, ucr1 & ~UCR1_TRDYEN, UCR1);
+ 
++	ucr4 = imx_uart_readl(sport, UCR4);
+ 	usr2 = imx_uart_readl(sport, USR2);
+-	if (!(usr2 & USR2_TXDC)) {
++	if ((!(usr2 & USR2_TXDC)) && (ucr4 & UCR4_TCEN)) {
+ 		/* The shifter is still busy, so retry once TC triggers */
+ 		return;
+ 	}
+ 
+-	ucr4 = imx_uart_readl(sport, UCR4);
+ 	ucr4 &= ~UCR4_TCEN;
+ 	imx_uart_writel(sport, ucr4, UCR4);
+ 
+@@ -2346,7 +2346,7 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 	/* For register access, we only need to enable the ipg clock. */
+ 	ret = clk_prepare_enable(sport->clk_ipg);
+ 	if (ret) {
+-		dev_err(&pdev->dev, "failed to enable per clk: %d\n", ret);
++		dev_err(&pdev->dev, "failed to enable ipg clk: %d\n", ret);
+ 		return ret;
+ 	}
+ 
+@@ -2358,10 +2358,8 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 	sport->ufcr = readl(sport->port.membase + UFCR);
+ 
+ 	ret = uart_get_rs485_mode(&sport->port);
+-	if (ret) {
+-		clk_disable_unprepare(sport->clk_ipg);
+-		return ret;
+-	}
++	if (ret)
++		goto err_clk;
+ 
+ 	if (sport->port.rs485.flags & SER_RS485_ENABLED &&
+ 	    (!sport->have_rtscts && !sport->have_rtsgpio))
+@@ -2415,8 +2413,6 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 		imx_uart_writel(sport, ucr3, UCR3);
+ 	}
+ 
+-	clk_disable_unprepare(sport->clk_ipg);
+-
+ 	hrtimer_init(&sport->trigger_start_tx, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	hrtimer_init(&sport->trigger_stop_tx, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	sport->trigger_start_tx.function = imx_trigger_start_tx;
+@@ -2432,7 +2428,7 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to request rx irq: %d\n",
+ 				ret);
+-			return ret;
++			goto err_clk;
+ 		}
+ 
+ 		ret = devm_request_irq(&pdev->dev, txirq, imx_uart_txint, 0,
+@@ -2440,7 +2436,7 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to request tx irq: %d\n",
+ 				ret);
+-			return ret;
++			goto err_clk;
+ 		}
+ 
+ 		ret = devm_request_irq(&pdev->dev, rtsirq, imx_uart_rtsint, 0,
+@@ -2448,14 +2444,14 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to request rts irq: %d\n",
+ 				ret);
+-			return ret;
++			goto err_clk;
+ 		}
+ 	} else {
+ 		ret = devm_request_irq(&pdev->dev, rxirq, imx_uart_int, 0,
+ 				       dev_name(&pdev->dev), sport);
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to request irq: %d\n", ret);
+-			return ret;
++			goto err_clk;
+ 		}
+ 	}
+ 
+@@ -2463,7 +2459,12 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, sport);
+ 
+-	return uart_add_one_port(&imx_uart_uart_driver, &sport->port);
++	ret = uart_add_one_port(&imx_uart_uart_driver, &sport->port);
++
++err_clk:
++	clk_disable_unprepare(sport->clk_ipg);
++
++	return ret;
+ }
+ 
+ static int imx_uart_remove(struct platform_device *pdev)
+diff --git a/drivers/tty/tty.h b/drivers/tty/tty.h
+index 1908f27a795a0..3d2d82ff6a034 100644
+--- a/drivers/tty/tty.h
++++ b/drivers/tty/tty.h
+@@ -64,7 +64,7 @@ int tty_check_change(struct tty_struct *tty);
+ void __stop_tty(struct tty_struct *tty);
+ void __start_tty(struct tty_struct *tty);
+ void tty_write_unlock(struct tty_struct *tty);
+-int tty_write_lock(struct tty_struct *tty, int ndelay);
++int tty_write_lock(struct tty_struct *tty, bool ndelay);
+ void tty_vhangup_session(struct tty_struct *tty);
+ void tty_open_proc_set_tty(struct file *filp, struct tty_struct *tty);
+ int tty_signal_session_leader(struct tty_struct *tty, int exit_session);
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 094e82a12d298..984e3098e6317 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -948,7 +948,7 @@ void tty_write_unlock(struct tty_struct *tty)
+ 	wake_up_interruptible_poll(&tty->write_wait, EPOLLOUT);
+ }
+ 
+-int tty_write_lock(struct tty_struct *tty, int ndelay)
++int tty_write_lock(struct tty_struct *tty, bool ndelay)
+ {
+ 	if (!mutex_trylock(&tty->atomic_write_lock)) {
+ 		if (ndelay)
+@@ -1167,7 +1167,7 @@ int tty_send_xchar(struct tty_struct *tty, char ch)
+ 		return 0;
+ 	}
+ 
+-	if (tty_write_lock(tty, 0) < 0)
++	if (tty_write_lock(tty, false) < 0)
+ 		return -ERESTARTSYS;
+ 
+ 	down_read(&tty->termios_rwsem);
+@@ -2470,22 +2470,25 @@ static int send_break(struct tty_struct *tty, unsigned int duration)
+ 		return 0;
+ 
+ 	if (tty->driver->flags & TTY_DRIVER_HARDWARE_BREAK)
+-		retval = tty->ops->break_ctl(tty, duration);
+-	else {
+-		/* Do the work ourselves */
+-		if (tty_write_lock(tty, 0) < 0)
+-			return -EINTR;
+-		retval = tty->ops->break_ctl(tty, -1);
+-		if (retval)
+-			goto out;
+-		if (!signal_pending(current))
+-			msleep_interruptible(duration);
++		return tty->ops->break_ctl(tty, duration);
++
++	/* Do the work ourselves */
++	if (tty_write_lock(tty, false) < 0)
++		return -EINTR;
++
++	retval = tty->ops->break_ctl(tty, -1);
++	if (!retval) {
++		msleep_interruptible(duration);
+ 		retval = tty->ops->break_ctl(tty, 0);
+-out:
+-		tty_write_unlock(tty);
+-		if (signal_pending(current))
+-			retval = -EINTR;
++	} else if (retval == -EOPNOTSUPP) {
++		/* some drivers can tell only dynamically */
++		retval = 0;
+ 	}
++	tty_write_unlock(tty);
++
++	if (signal_pending(current))
++		retval = -EINTR;
++
+ 	return retval;
+ }
+ 
+diff --git a/drivers/tty/tty_ioctl.c b/drivers/tty/tty_ioctl.c
+index 68b07250dcb60..12a30329abdb0 100644
+--- a/drivers/tty/tty_ioctl.c
++++ b/drivers/tty/tty_ioctl.c
+@@ -404,7 +404,7 @@ retry_write_wait:
+ 		if (retval < 0)
+ 			return retval;
+ 
+-		if (tty_write_lock(tty, 0) < 0)
++		if (tty_write_lock(tty, false) < 0)
+ 			goto retry_write_wait;
+ 
+ 		/* Racing writer? */
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index be06f1a961c2c..f453b3c7ee422 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -464,13 +464,13 @@ static int uio_open(struct inode *inode, struct file *filep)
+ 
+ 	mutex_lock(&minor_lock);
+ 	idev = idr_find(&uio_idr, iminor(inode));
+-	mutex_unlock(&minor_lock);
+ 	if (!idev) {
+ 		ret = -ENODEV;
++		mutex_unlock(&minor_lock);
+ 		goto out;
+ 	}
+-
+ 	get_device(&idev->dev);
++	mutex_unlock(&minor_lock);
+ 
+ 	if (!try_module_get(idev->owner)) {
+ 		ret = -ENODEV;
+@@ -1062,9 +1062,8 @@ void uio_unregister_device(struct uio_info *info)
+ 	wake_up_interruptible(&idev->wait);
+ 	kill_fasync(&idev->async_queue, SIGIO, POLL_HUP);
+ 
+-	device_unregister(&idev->dev);
+-
+ 	uio_free_minor(minor);
++	device_unregister(&idev->dev);
+ 
+ 	return;
+ }
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 3d18599c5b9e4..b8bb4cdadff8f 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -516,6 +516,13 @@ static irqreturn_t ci_irq_handler(int irq, void *data)
+ 	u32 otgsc = 0;
+ 
+ 	if (ci->in_lpm) {
++		/*
++		 * If we already have a wakeup irq pending there,
++		 * let's just return to wait resume finished firstly.
++		 */
++		if (ci->wakeup_int)
++			return IRQ_HANDLED;
++
+ 		disable_irq_nosync(irq);
+ 		ci->wakeup_int = true;
+ 		pm_runtime_get(ci->dev);
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 070b838c7da98..4e4a71307d63c 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -896,6 +896,9 @@ static int acm_tty_break_ctl(struct tty_struct *tty, int state)
+ 	struct acm *acm = tty->driver_data;
+ 	int retval;
+ 
++	if (!(acm->ctrl_caps & USB_CDC_CAP_BRK))
++		return -EOPNOTSUPP;
++
+ 	retval = acm_send_break(acm, state ? 0xffff : 0);
+ 	if (retval < 0)
+ 		dev_dbg(&acm->control->dev,
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 214a8ff2d69c8..26f9928b972c6 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -275,48 +275,11 @@ int dwc3_core_soft_reset(struct dwc3 *dwc)
+ 	/*
+ 	 * We're resetting only the device side because, if we're in host mode,
+ 	 * XHCI driver will reset the host block. If dwc3 was configured for
+-	 * host-only mode or current role is host, then we can return early.
++	 * host-only mode, then we can return early.
+ 	 */
+ 	if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST)
+ 		return 0;
+ 
+-	/*
+-	 * If the dr_mode is host and the dwc->current_dr_role is not the
+-	 * corresponding DWC3_GCTL_PRTCAP_HOST, then the dwc3_core_init_mode
+-	 * isn't executed yet. Ensure the phy is ready before the controller
+-	 * updates the GCTL.PRTCAPDIR or other settings by soft-resetting
+-	 * the phy.
+-	 *
+-	 * Note: GUSB3PIPECTL[n] and GUSB2PHYCFG[n] are port settings where n
+-	 * is port index. If this is a multiport host, then we need to reset
+-	 * all active ports.
+-	 */
+-	if (dwc->dr_mode == USB_DR_MODE_HOST) {
+-		u32 usb3_port;
+-		u32 usb2_port;
+-
+-		usb3_port = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0));
+-		usb3_port |= DWC3_GUSB3PIPECTL_PHYSOFTRST;
+-		dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), usb3_port);
+-
+-		usb2_port = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+-		usb2_port |= DWC3_GUSB2PHYCFG_PHYSOFTRST;
+-		dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), usb2_port);
+-
+-		/* Small delay for phy reset assertion */
+-		usleep_range(1000, 2000);
+-
+-		usb3_port &= ~DWC3_GUSB3PIPECTL_PHYSOFTRST;
+-		dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), usb3_port);
+-
+-		usb2_port &= ~DWC3_GUSB2PHYCFG_PHYSOFTRST;
+-		dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), usb2_port);
+-
+-		/* Wait for clock synchronization */
+-		msleep(50);
+-		return 0;
+-	}
+-
+ 	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+ 	reg |= DWC3_DCTL_CSFTRST;
+ 	reg &= ~DWC3_DCTL_RUN_STOP;
+diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
+index 3cd2942643725..14bdef97090b2 100644
+--- a/drivers/usb/dwc3/ep0.c
++++ b/drivers/usb/dwc3/ep0.c
+@@ -236,7 +236,10 @@ static void dwc3_ep0_stall_and_restart(struct dwc3 *dwc)
+ 		struct dwc3_request	*req;
+ 
+ 		req = next_request(&dep->pending_list);
+-		dwc3_gadget_giveback(dep, req, -ECONNRESET);
++		if (!dwc->connected)
++			dwc3_gadget_giveback(dep, req, -ESHUTDOWN);
++		else
++			dwc3_gadget_giveback(dep, req, -ECONNRESET);
+ 	}
+ 
+ 	dwc->ep0state = EP0_SETUP_PHASE;
+diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
+index 094e812e9e692..35483217b1f6c 100644
+--- a/drivers/usb/mon/mon_bin.c
++++ b/drivers/usb/mon/mon_bin.c
+@@ -1247,14 +1247,19 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
+ 	struct mon_reader_bin *rp = vmf->vma->vm_private_data;
+ 	unsigned long offset, chunk_idx;
+ 	struct page *pageptr;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&rp->b_lock, flags);
+ 	offset = vmf->pgoff << PAGE_SHIFT;
+-	if (offset >= rp->b_size)
++	if (offset >= rp->b_size) {
++		spin_unlock_irqrestore(&rp->b_lock, flags);
+ 		return VM_FAULT_SIGBUS;
++	}
+ 	chunk_idx = offset / CHUNK_SIZE;
+ 	pageptr = rp->b_vec[chunk_idx].pg;
+ 	get_page(pageptr);
+ 	vmf->page = pageptr;
++	spin_unlock_irqrestore(&rp->b_lock, flags);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/phy/phy-mxs-usb.c b/drivers/usb/phy/phy-mxs-usb.c
+index 70e23334b27f9..e3cddcac3252c 100644
+--- a/drivers/usb/phy/phy-mxs-usb.c
++++ b/drivers/usb/phy/phy-mxs-usb.c
+@@ -388,8 +388,7 @@ static void __mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool disconnect)
+ 
+ static bool mxs_phy_is_otg_host(struct mxs_phy *mxs_phy)
+ {
+-	return IS_ENABLED(CONFIG_USB_OTG) &&
+-		mxs_phy->phy.last_event == USB_EVENT_ID;
++	return mxs_phy->phy.last_event == USB_EVENT_ID;
+ }
+ 
+ static void mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool on)
+diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
+index 18b35e8173614..7fa95e7012446 100644
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -190,11 +190,13 @@ static void typec_altmode_put_partner(struct altmode *altmode)
+ {
+ 	struct altmode *partner = altmode->partner;
+ 	struct typec_altmode *adev;
++	struct typec_altmode *partner_adev;
+ 
+ 	if (!partner)
+ 		return;
+ 
+ 	adev = &altmode->adev;
++	partner_adev = &partner->adev;
+ 
+ 	if (is_typec_plug(adev->dev.parent)) {
+ 		struct typec_plug *plug = to_typec_plug(adev->dev.parent);
+@@ -203,7 +205,7 @@ static void typec_altmode_put_partner(struct altmode *altmode)
+ 	} else {
+ 		partner->partner = NULL;
+ 	}
+-	put_device(&adev->dev);
++	put_device(&partner_adev->dev);
+ }
+ 
+ /**
+diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
+index a591d291b231a..0708e214c5a33 100644
+--- a/drivers/video/fbdev/core/fb_defio.c
++++ b/drivers/video/fbdev/core/fb_defio.c
+@@ -78,11 +78,7 @@ int fb_deferred_io_fsync(struct file *file, loff_t start, loff_t end, int datasy
+ 		return 0;
+ 
+ 	inode_lock(inode);
+-	/* Kill off the delayed work */
+-	cancel_delayed_work_sync(&info->deferred_work);
+-
+-	/* Run it immediately */
+-	schedule_delayed_work(&info->deferred_work, 0);
++	flush_delayed_work(&info->deferred_work);
+ 	inode_unlock(inode);
+ 
+ 	return 0;
+diff --git a/drivers/watchdog/bcm2835_wdt.c b/drivers/watchdog/bcm2835_wdt.c
+index dec6ca019beaa..3a8dec05b5911 100644
+--- a/drivers/watchdog/bcm2835_wdt.c
++++ b/drivers/watchdog/bcm2835_wdt.c
+@@ -42,6 +42,7 @@
+ 
+ #define SECS_TO_WDOG_TICKS(x) ((x) << 16)
+ #define WDOG_TICKS_TO_SECS(x) ((x) >> 16)
++#define WDOG_TICKS_TO_MSECS(x) ((x) * 1000 >> 16)
+ 
+ struct bcm2835_wdt {
+ 	void __iomem		*base;
+@@ -140,7 +141,7 @@ static struct watchdog_device bcm2835_wdt_wdd = {
+ 	.info =		&bcm2835_wdt_info,
+ 	.ops =		&bcm2835_wdt_ops,
+ 	.min_timeout =	1,
+-	.max_timeout =	WDOG_TICKS_TO_SECS(PM_WDOG_TIME_SET),
++	.max_hw_heartbeat_ms =	WDOG_TICKS_TO_MSECS(PM_WDOG_TIME_SET),
+ 	.timeout =	WDOG_TICKS_TO_SECS(PM_WDOG_TIME_SET),
+ };
+ 
+diff --git a/drivers/watchdog/hpwdt.c b/drivers/watchdog/hpwdt.c
+index 7d34bcf1c45b4..53573c3ddd1a6 100644
+--- a/drivers/watchdog/hpwdt.c
++++ b/drivers/watchdog/hpwdt.c
+@@ -174,7 +174,7 @@ static int hpwdt_pretimeout(unsigned int ulReason, struct pt_regs *regs)
+ 		"3. OA Forward Progress Log\n"
+ 		"4. iLO Event Log";
+ 
+-	if (ilo5 && ulReason == NMI_UNKNOWN && !mynmi)
++	if (ulReason == NMI_UNKNOWN && !mynmi)
+ 		return NMI_DONE;
+ 
+ 	if (ilo5 && !pretimeout && !mynmi)
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index 46c2a4bd9ebe9..daa00f3c5a6af 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -70,6 +70,11 @@ static int rti_wdt_start(struct watchdog_device *wdd)
+ {
+ 	u32 timer_margin;
+ 	struct rti_wdt_device *wdt = watchdog_get_drvdata(wdd);
++	int ret;
++
++	ret = pm_runtime_resume_and_get(wdd->parent);
++	if (ret)
++		return ret;
+ 
+ 	/* set timeout period */
+ 	timer_margin = (u64)wdd->timeout * wdt->freq;
+@@ -296,6 +301,9 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ 	if (last_ping)
+ 		watchdog_set_last_hw_keepalive(wdd, last_ping);
+ 
++	if (!watchdog_hw_running(wdd))
++		pm_runtime_put_sync(&pdev->dev);
++
+ 	return 0;
+ 
+ err_iomap:
+@@ -310,7 +318,10 @@ static int rti_wdt_remove(struct platform_device *pdev)
+ 	struct rti_wdt_device *wdt = platform_get_drvdata(pdev);
+ 
+ 	watchdog_unregister_device(&wdt->wdd);
+-	pm_runtime_put(&pdev->dev);
++
++	if (!pm_runtime_suspended(&pdev->dev))
++		pm_runtime_put(&pdev->dev);
++
+ 	pm_runtime_disable(&pdev->dev);
+ 
+ 	return 0;
+diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
+index f37255cd75fdf..24d0470af81ca 100644
+--- a/drivers/watchdog/watchdog_dev.c
++++ b/drivers/watchdog/watchdog_dev.c
+@@ -1028,6 +1028,7 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ 
+ 	/* Fill in the data structures */
+ 	cdev_init(&wd_data->cdev, &watchdog_fops);
++	wd_data->cdev.owner = wdd->ops->owner;
+ 
+ 	/* Add the device */
+ 	err = cdev_device_add(&wd_data->cdev, &wd_data->dev);
+@@ -1042,8 +1043,6 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ 		return err;
+ 	}
+ 
+-	wd_data->cdev.owner = wdd->ops->owner;
+-
+ 	/* Record time of most recent heartbeat as 'just before now'. */
+ 	wd_data->last_hw_keepalive = ktime_sub(ktime_get(), 1);
+ 	watchdog_set_open_deadline(wd_data);
+diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
+index 42bab9270e7d6..9c0aadedfbffe 100644
+--- a/fs/debugfs/file.c
++++ b/fs/debugfs/file.c
+@@ -84,6 +84,14 @@ int debugfs_file_get(struct dentry *dentry)
+ 	struct debugfs_fsdata *fsd;
+ 	void *d_fsd;
+ 
++	/*
++	 * This could only happen if some debugfs user erroneously calls
++	 * debugfs_file_get() on a dentry that isn't even a file, let
++	 * them know about it.
++	 */
++	if (WARN_ON(!d_is_reg(dentry)))
++		return -EINVAL;
++
+ 	d_fsd = READ_ONCE(dentry->d_fsdata);
+ 	if (!((unsigned long)d_fsd & DEBUGFS_FSDATA_IS_REAL_FOPS_BIT)) {
+ 		fsd = d_fsd;
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index f47f0a7d2c3b9..d04930c199cb4 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -210,17 +210,19 @@ static const struct super_operations debugfs_super_operations = {
+ 
+ static void debugfs_release_dentry(struct dentry *dentry)
+ {
+-	void *fsd = dentry->d_fsdata;
++	struct debugfs_fsdata *fsd = dentry->d_fsdata;
+ 
+-	if (!((unsigned long)fsd & DEBUGFS_FSDATA_IS_REAL_FOPS_BIT))
+-		kfree(dentry->d_fsdata);
++	if ((unsigned long)fsd & DEBUGFS_FSDATA_IS_REAL_FOPS_BIT)
++		return;
++
++	kfree(fsd);
+ }
+ 
+ static struct vfsmount *debugfs_automount(struct path *path)
+ {
+-	debugfs_automount_t f;
+-	f = (debugfs_automount_t)path->dentry->d_fsdata;
+-	return f(path->dentry, d_inode(path->dentry)->i_private);
++	struct debugfs_fsdata *fsd = path->dentry->d_fsdata;
++
++	return fsd->automount(path->dentry, d_inode(path->dentry)->i_private);
+ }
+ 
+ static const struct dentry_operations debugfs_dops = {
+@@ -598,13 +600,23 @@ struct dentry *debugfs_create_automount(const char *name,
+ 					void *data)
+ {
+ 	struct dentry *dentry = start_creating(name, parent);
++	struct debugfs_fsdata *fsd;
+ 	struct inode *inode;
+ 
+ 	if (IS_ERR(dentry))
+ 		return dentry;
+ 
++	fsd = kzalloc(sizeof(*fsd), GFP_KERNEL);
++	if (!fsd) {
++		failed_creating(dentry);
++		return ERR_PTR(-ENOMEM);
++	}
++
++	fsd->automount = f;
++
+ 	if (!(debugfs_allow & DEBUGFS_ALLOW_API)) {
+ 		failed_creating(dentry);
++		kfree(fsd);
+ 		return ERR_PTR(-EPERM);
+ 	}
+ 
+@@ -612,13 +624,14 @@ struct dentry *debugfs_create_automount(const char *name,
+ 	if (unlikely(!inode)) {
+ 		pr_err("out of free dentries, can not create automount '%s'\n",
+ 		       name);
++		kfree(fsd);
+ 		return failed_creating(dentry);
+ 	}
+ 
+ 	make_empty_dir_inode(inode);
+ 	inode->i_flags |= S_AUTOMOUNT;
+ 	inode->i_private = data;
+-	dentry->d_fsdata = (void *)f;
++	dentry->d_fsdata = fsd;
+ 	/* directory inodes start off with i_nlink == 2 (for "." entry) */
+ 	inc_nlink(inode);
+ 	d_instantiate(dentry, inode);
+diff --git a/fs/debugfs/internal.h b/fs/debugfs/internal.h
+index 92af8ae313134..f7c489b5a368c 100644
+--- a/fs/debugfs/internal.h
++++ b/fs/debugfs/internal.h
+@@ -17,8 +17,14 @@ extern const struct file_operations debugfs_full_proxy_file_operations;
+ 
+ struct debugfs_fsdata {
+ 	const struct file_operations *real_fops;
+-	refcount_t active_users;
+-	struct completion active_users_drained;
++	union {
++		/* automount_fn is used when real_fops is NULL */
++		debugfs_automount_t automount;
++		struct {
++			refcount_t active_users;
++			struct completion active_users_drained;
++		};
++	};
+ };
+ 
+ /*
+diff --git a/fs/efivarfs/super.c b/fs/efivarfs/super.c
+index 15880a68faadc..3626816b174ad 100644
+--- a/fs/efivarfs/super.c
++++ b/fs/efivarfs/super.c
+@@ -13,6 +13,7 @@
+ #include <linux/ucs2_string.h>
+ #include <linux/slab.h>
+ #include <linux/magic.h>
++#include <linux/printk.h>
+ 
+ #include "internal.h"
+ 
+@@ -231,8 +232,19 @@ static int efivarfs_get_tree(struct fs_context *fc)
+ 	return get_tree_single(fc, efivarfs_fill_super);
+ }
+ 
++static int efivarfs_reconfigure(struct fs_context *fc)
++{
++	if (!efivar_supports_writes() && !(fc->sb_flags & SB_RDONLY)) {
++		pr_err("Firmware does not support SetVariableRT. Can not remount with rw\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static const struct fs_context_operations efivarfs_context_ops = {
+ 	.get_tree	= efivarfs_get_tree,
++	.reconfigure	= efivarfs_reconfigure,
+ };
+ 
+ static int efivarfs_init_fs_context(struct fs_context *fc)
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 4e6b93f167589..55818bd510fb0 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -42,7 +42,7 @@ static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf)
+ 	ret = filemap_fault(vmf);
+ 	up_read(&F2FS_I(inode)->i_mmap_sem);
+ 
+-	if (!ret)
++	if (ret & VM_FAULT_LOCKED)
+ 		f2fs_update_iostat(F2FS_I_SB(inode), APP_MAPPED_READ_IO,
+ 							F2FS_BLKSIZE);
+ 
+@@ -2816,6 +2816,11 @@ static int f2fs_move_file_range(struct file *file_in, loff_t pos_in,
+ 			goto out;
+ 	}
+ 
++	if (f2fs_compressed_file(src) || f2fs_compressed_file(dst)) {
++		ret = -EOPNOTSUPP;
++		goto out_unlock;
++	}
++
+ 	ret = -EINVAL;
+ 	if (pos_in + len > src->i_size || pos_in + len < pos_in)
+ 		goto out_unlock;
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 72b109685db47..99e4ec48d2a46 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -1066,7 +1066,7 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	}
+ 
+ 	if (old_dir_entry) {
+-		if (old_dir != new_dir && !whiteout)
++		if (old_dir != new_dir)
+ 			f2fs_set_link(old_inode, old_dir_entry,
+ 						old_dir_page, new_dir);
+ 		else
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index f44c60114379e..dd50b747b671e 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -741,6 +741,12 @@ static int __f2fs_setxattr(struct inode *inode, int index,
+ 		memcpy(pval, value, size);
+ 		last->e_value_size = cpu_to_le16(size);
+ 		new_hsize += newsize;
++		/*
++		 * Explicitly add the null terminator.  The unused xattr space
++		 * is supposed to always be zeroed, which would make this
++		 * unnecessary, but don't depend on that.
++		 */
++		*(u32 *)((u8 *)last + newsize) = 0;
+ 	}
+ 
+ 	error = write_all_xattrs(inode, new_hsize, base_addr, ipage);
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index c5bde789a16db..1ffdc4ad6246b 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -1247,6 +1247,8 @@ static int update_rgrp_lvb(struct gfs2_rgrpd *rgd)
+ 		rgd->rd_flags &= ~GFS2_RDF_CHECK;
+ 	rgd->rd_free = be32_to_cpu(rgd->rd_rgl->rl_free);
+ 	rgd->rd_free_clone = rgd->rd_free;
++	/* max out the rgrp allocation failure point */
++	rgd->rd_extfail_pt = rgd->rd_free;
+ 	rgd->rd_dinodes = be32_to_cpu(rgd->rd_rgl->rl_dinodes);
+ 	rgd->rd_igeneration = be64_to_cpu(rgd->rd_rgl->rl_igeneration);
+ 	return 0;
+@@ -1648,7 +1650,7 @@ static int gfs2_reservation_check_and_update(struct gfs2_rbm *rbm,
+ 	 * If we have a minimum extent length, then skip over any extent
+ 	 * which is less than the min extent length in size.
+ 	 */
+-	if (minext) {
++	if (minext > 1) {
+ 		extlen = gfs2_free_extlen(rbm, minext);
+ 		if (extlen <= maxext->len)
+ 			goto fail;
+@@ -1680,7 +1682,7 @@ fail:
+  * gfs2_rbm_find - Look for blocks of a particular state
+  * @rbm: Value/result starting position and final position
+  * @state: The state which we want to find
+- * @minext: Pointer to the requested extent length (NULL for a single block)
++ * @minext: Pointer to the requested extent length
+  *          This is updated to be the actual reservation size.
+  * @ip: If set, check for reservations
+  * @nowrap: Stop looking at the end of the rgrp, rather than wrapping
+@@ -1717,8 +1719,7 @@ static int gfs2_rbm_find(struct gfs2_rbm *rbm, u8 state, u32 *minext,
+ 
+ 	while(1) {
+ 		bi = rbm_bi(rbm);
+-		if ((ip == NULL || !gfs2_rs_active(&ip->i_res)) &&
+-		    test_bit(GBF_FULL, &bi->bi_flags) &&
++		if (test_bit(GBF_FULL, &bi->bi_flags) &&
+ 		    (state == GFS2_BLKST_FREE))
+ 			goto next_bitmap;
+ 
+@@ -1737,8 +1738,7 @@ static int gfs2_rbm_find(struct gfs2_rbm *rbm, u8 state, u32 *minext,
+ 		if (ip == NULL)
+ 			return 0;
+ 
+-		ret = gfs2_reservation_check_and_update(rbm, ip,
+-							minext ? *minext : 0,
++		ret = gfs2_reservation_check_and_update(rbm, ip, *minext,
+ 							&maxext);
+ 		if (ret == 0)
+ 			return 0;
+@@ -1770,7 +1770,7 @@ next_iter:
+ 			break;
+ 	}
+ 
+-	if (minext == NULL || state != GFS2_BLKST_FREE)
++	if (state != GFS2_BLKST_FREE)
+ 		return -ENOSPC;
+ 
+ 	/* If the extent was too small, and it's smaller than the smallest
+@@ -1778,7 +1778,7 @@ next_iter:
+ 	   useless to search this rgrp again for this amount or more. */
+ 	if (wrapped && (scan_from_start || rbm->bii > last_bii) &&
+ 	    *minext < rbm->rgd->rd_extfail_pt)
+-		rbm->rgd->rd_extfail_pt = *minext;
++		rbm->rgd->rd_extfail_pt = *minext - 1;
+ 
+ 	/* If the maximum extent we found is big enough to fulfill the
+ 	   minimum requirements, use it anyway. */
+@@ -2231,7 +2231,7 @@ void gfs2_rgrp_dump(struct seq_file *seq, struct gfs2_rgrpd *rgd,
+ 		       (unsigned long long)rgd->rd_addr, rgd->rd_flags,
+ 		       rgd->rd_free, rgd->rd_free_clone, rgd->rd_dinodes,
+ 		       rgd->rd_reserved, rgd->rd_extfail_pt);
+-	if (rgd->rd_sbd->sd_args.ar_rgrplvb) {
++	if (rgd->rd_sbd->sd_args.ar_rgrplvb && rgd->rd_rgl) {
+ 		struct gfs2_rgrp_lvb *rgl = rgd->rd_rgl;
+ 
+ 		gfs2_print_dbg(seq, "%s  L: f:%02x b:%u i:%u\n", fs_id_buf,
+@@ -2352,14 +2352,15 @@ int gfs2_alloc_blocks(struct gfs2_inode *ip, u64 *bn, unsigned int *nblocks,
+ 	struct gfs2_rbm rbm = { .rgd = ip->i_res.rs_rbm.rgd, };
+ 	unsigned int ndata;
+ 	u64 block; /* block, within the file system scope */
++	u32 minext = 1;
+ 	int error;
+ 
+ 	gfs2_set_alloc_start(&rbm, ip, dinode);
+-	error = gfs2_rbm_find(&rbm, GFS2_BLKST_FREE, NULL, ip, false);
++	error = gfs2_rbm_find(&rbm, GFS2_BLKST_FREE, &minext, ip, false);
+ 
+ 	if (error == -ENOSPC) {
+ 		gfs2_set_alloc_start(&rbm, ip, dinode);
+-		error = gfs2_rbm_find(&rbm, GFS2_BLKST_FREE, NULL, NULL, false);
++		error = gfs2_rbm_find(&rbm, GFS2_BLKST_FREE, &minext, NULL, false);
+ 	}
+ 
+ 	/* Since all blocks are reserved in advance, this shouldn't happen */
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index fa24b407a9dcb..db137671a41f1 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -300,6 +300,7 @@ static int journal_finish_inode_data_buffers(journal_t *journal,
+ 			if (!ret)
+ 				ret = err;
+ 		}
++		cond_resched();
+ 		spin_lock(&journal->j_list_lock);
+ 		jinode->i_flags &= ~JI_COMMIT_RUNNING;
+ 		smp_mb();
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index fee325d62bfd9..effd837b8c1ff 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1568,9 +1568,11 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ 		return -EIO;
+ 	}
+ 
+-	trace_jbd2_write_superblock(journal, write_flags);
+ 	if (!(journal->j_flags & JBD2_BARRIER))
+ 		write_flags &= ~(REQ_FUA | REQ_PREFLUSH);
++
++	trace_jbd2_write_superblock(journal, write_flags);
++
+ 	if (buffer_write_io_error(bh)) {
+ 		/*
+ 		 * Oh, dear.  A previous attempt to write the journal
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 046b084136c51..b020a12c53a2a 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2627,7 +2627,12 @@ static int do_remount(struct path *path, int ms_flags, int sb_flags,
+ 	if (IS_ERR(fc))
+ 		return PTR_ERR(fc);
+ 
++	/*
++	 * Indicate to the filesystem that the remount request is coming
++	 * from the legacy mount system call.
++	 */
+ 	fc->oldapi = true;
++
+ 	err = parse_monolithic_mount_data(fc, data);
+ 	if (!err) {
+ 		down_write(&sb->s_umount);
+@@ -2886,6 +2891,12 @@ static int do_new_mount(struct path *path, const char *fstype, int sb_flags,
+ 	if (IS_ERR(fc))
+ 		return PTR_ERR(fc);
+ 
++	/*
++	 * Indicate to the filesystem that the mount request is coming
++	 * from the legacy mount system call.
++	 */
++	fc->oldapi = true;
++
+ 	if (subtype)
+ 		err = vfs_parse_fs_string(fc, "subtype",
+ 					  subtype, strlen(subtype));
+diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
+index 08108b6d2fa10..73000aa2d220b 100644
+--- a/fs/nfs/blocklayout/blocklayout.c
++++ b/fs/nfs/blocklayout/blocklayout.c
+@@ -604,6 +604,8 @@ retry:
+ 		nfs4_delete_deviceid(node->ld, node->nfs_client, id);
+ 		goto retry;
+ 	}
++
++	nfs4_put_deviceid_node(node);
+ 	return ERR_PTR(-ENODEV);
+ }
+ 
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index f3f41027f6977..7c3c96ed60853 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -177,6 +177,7 @@ static int nfs4_map_errors(int err)
+ 	case -NFS4ERR_RESOURCE:
+ 	case -NFS4ERR_LAYOUTTRYLATER:
+ 	case -NFS4ERR_RECALLCONFLICT:
++	case -NFS4ERR_RETURNCONFLICT:
+ 		return -EREMOTEIO;
+ 	case -NFS4ERR_WRONGSEC:
+ 	case -NFS4ERR_WRONG_CRED:
+@@ -563,6 +564,7 @@ static int nfs4_do_handle_exception(struct nfs_server *server,
+ 		case -NFS4ERR_GRACE:
+ 		case -NFS4ERR_LAYOUTTRYLATER:
+ 		case -NFS4ERR_RECALLCONFLICT:
++		case -NFS4ERR_RETURNCONFLICT:
+ 			exception->delay = 1;
+ 			return 0;
+ 
+@@ -9445,6 +9447,7 @@ nfs4_layoutget_handle_exception(struct rpc_task *task,
+ 		status = -EBUSY;
+ 		break;
+ 	case -NFS4ERR_RECALLCONFLICT:
++	case -NFS4ERR_RETURNCONFLICT:
+ 		status = -ERECALLCONFLICT;
+ 		break;
+ 	case -NFS4ERR_DELEG_REVOKED:
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index a0fa3820ef2ab..5ac9b1f155a81 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -190,7 +190,7 @@ static int persistent_ram_init_ecc(struct persistent_ram_zone *prz,
+ {
+ 	int numerr;
+ 	struct persistent_ram_buffer *buffer = prz->buffer;
+-	int ecc_blocks;
++	size_t ecc_blocks;
+ 	size_t ecc_total;
+ 
+ 	if (!ecc_info || !ecc_info->ecc_size)
+diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
+index a5db86670bdfa..a406e281ae571 100644
+--- a/include/crypto/if_alg.h
++++ b/include/crypto/if_alg.h
+@@ -138,6 +138,7 @@ struct af_alg_async_req {
+  *			recvmsg is invoked.
+  * @init:		True if metadata has been sent.
+  * @len:		Length of memory allocated for this data structure.
++ * @inflight:		Non-zero when AIO requests are in flight.
+  */
+ struct af_alg_ctx {
+ 	struct list_head tsgl_list;
+@@ -156,6 +157,8 @@ struct af_alg_ctx {
+ 	bool init;
+ 
+ 	unsigned int len;
++
++	unsigned int inflight;
+ };
+ 
+ int af_alg_register_type(const struct af_alg_type *type);
+diff --git a/include/drm/drm_bridge.h b/include/drm/drm_bridge.h
+index 055486e35e68f..3826cf9553c0b 100644
+--- a/include/drm/drm_bridge.h
++++ b/include/drm/drm_bridge.h
+@@ -186,7 +186,7 @@ struct drm_bridge_funcs {
+ 	 * or &drm_encoder_helper_funcs.dpms hook.
+ 	 *
+ 	 * The bridge must assume that the display pipe (i.e. clocks and timing
+-	 * singals) feeding it is no longer running when this callback is
++	 * signals) feeding it is no longer running when this callback is
+ 	 * called.
+ 	 *
+ 	 * The @post_disable callback is optional.
+diff --git a/include/dt-bindings/clock/qcom,videocc-sm8150.h b/include/dt-bindings/clock/qcom,videocc-sm8150.h
+index e24ee840cfdb8..c557b78dc572f 100644
+--- a/include/dt-bindings/clock/qcom,videocc-sm8150.h
++++ b/include/dt-bindings/clock/qcom,videocc-sm8150.h
+@@ -16,6 +16,10 @@
+ 
+ /* VIDEO_CC Resets */
+ #define VIDEO_CC_MVSC_CORE_CLK_BCR	0
++#define VIDEO_CC_INTERFACE_BCR		1
++#define VIDEO_CC_MVS0_BCR		2
++#define VIDEO_CC_MVS1_BCR		3
++#define VIDEO_CC_MVSC_BCR		4
+ 
+ /* VIDEO_CC GDSCRs */
+ #define VENUS_GDSC			0
+diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
+index aa8cbf8829145..6fa85be64b896 100644
+--- a/include/linux/clk-provider.h
++++ b/include/linux/clk-provider.h
+@@ -350,7 +350,7 @@ struct clk_hw *__clk_hw_register_fixed_rate(struct device *dev,
+ 		const char *parent_name, const struct clk_hw *parent_hw,
+ 		const struct clk_parent_data *parent_data, unsigned long flags,
+ 		unsigned long fixed_rate, unsigned long fixed_accuracy,
+-		unsigned long clk_fixed_flags);
++		unsigned long clk_fixed_flags, bool devm);
+ struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+ 		const char *parent_name, unsigned long flags,
+ 		unsigned long fixed_rate);
+@@ -365,7 +365,20 @@ struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+  */
+ #define clk_hw_register_fixed_rate(dev, name, parent_name, flags, fixed_rate)  \
+ 	__clk_hw_register_fixed_rate((dev), NULL, (name), (parent_name), NULL, \
+-				     NULL, (flags), (fixed_rate), 0, 0)
++				     NULL, (flags), (fixed_rate), 0, 0, false)
++
++/**
++ * devm_clk_hw_register_fixed_rate - register fixed-rate clock with the clock
++ * framework
++ * @dev: device that is registering this clock
++ * @name: name of this clock
++ * @parent_name: name of clock's parent
++ * @flags: framework-specific flags
++ * @fixed_rate: non-adjustable clock rate
++ */
++#define devm_clk_hw_register_fixed_rate(dev, name, parent_name, flags, fixed_rate)  \
++	__clk_hw_register_fixed_rate((dev), NULL, (name), (parent_name), NULL, \
++				     NULL, (flags), (fixed_rate), 0, 0, true)
+ /**
+  * clk_hw_register_fixed_rate_parent_hw - register fixed-rate clock with
+  * the clock framework
+@@ -378,7 +391,7 @@ struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+ #define clk_hw_register_fixed_rate_parent_hw(dev, name, parent_hw, flags,     \
+ 					     fixed_rate)		      \
+ 	__clk_hw_register_fixed_rate((dev), NULL, (name), NULL, (parent_hw),  \
+-				     NULL, (flags), (fixed_rate), 0, 0)
++				     NULL, (flags), (fixed_rate), 0, 0, false)
+ /**
+  * clk_hw_register_fixed_rate_parent_data - register fixed-rate clock with
+  * the clock framework
+@@ -392,7 +405,7 @@ struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+ 					     fixed_rate)		      \
+ 	__clk_hw_register_fixed_rate((dev), NULL, (name), NULL, NULL,	      \
+ 				     (parent_data), (flags), (fixed_rate), 0, \
+-				     0)
++				     0, false)
+ /**
+  * clk_hw_register_fixed_rate_with_accuracy - register fixed-rate clock with
+  * the clock framework
+@@ -408,7 +421,7 @@ struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+ 						 fixed_accuracy)	      \
+ 	__clk_hw_register_fixed_rate((dev), NULL, (name), (parent_name),      \
+ 				     NULL, NULL, (flags), (fixed_rate),       \
+-				     (fixed_accuracy), 0)
++				     (fixed_accuracy), 0, false)
+ /**
+  * clk_hw_register_fixed_rate_with_accuracy_parent_hw - register fixed-rate
+  * clock with the clock framework
+@@ -421,9 +434,9 @@ struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+  */
+ #define clk_hw_register_fixed_rate_with_accuracy_parent_hw(dev, name,	      \
+ 		parent_hw, flags, fixed_rate, fixed_accuracy)		      \
+-	__clk_hw_register_fixed_rate((dev), NULL, (name), NULL, (parent_hw)   \
+-				     NULL, NULL, (flags), (fixed_rate),	      \
+-				     (fixed_accuracy), 0)
++	__clk_hw_register_fixed_rate((dev), NULL, (name), NULL, (parent_hw),  \
++				     NULL, (flags), (fixed_rate),	      \
++				     (fixed_accuracy), 0, false)
+ /**
+  * clk_hw_register_fixed_rate_with_accuracy_parent_data - register fixed-rate
+  * clock with the clock framework
+@@ -438,7 +451,7 @@ struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+ 		parent_data, flags, fixed_rate, fixed_accuracy)		      \
+ 	__clk_hw_register_fixed_rate((dev), NULL, (name), NULL, NULL,	      \
+ 				     (parent_data), NULL, (flags),	      \
+-				     (fixed_rate), (fixed_accuracy), 0)
++				     (fixed_rate), (fixed_accuracy), 0, false)
+ /**
+  * clk_hw_register_fixed_rate_parent_accuracy - register fixed-rate clock with
+  * the clock framework
+@@ -452,7 +465,7 @@ struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+ 						   flags, fixed_rate)	      \
+ 	__clk_hw_register_fixed_rate((dev), NULL, (name), NULL, NULL,      \
+ 				     (parent_data), (flags), (fixed_rate), 0,    \
+-				     CLK_FIXED_RATE_PARENT_ACCURACY)
++				     CLK_FIXED_RATE_PARENT_ACCURACY, false)
+ 
+ void clk_unregister_fixed_rate(struct clk *clk);
+ void clk_hw_unregister_fixed_rate(struct clk_hw *hw);
+diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
+index a5f89fc4d6df1..ba1d93c662c6c 100644
+--- a/include/linux/dma-map-ops.h
++++ b/include/linux/dma-map-ops.h
+@@ -165,6 +165,7 @@ static inline void dma_pernuma_cma_reserve(void) { }
+ #ifdef CONFIG_DMA_DECLARE_COHERENT
+ int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
+ 		dma_addr_t device_addr, size_t size);
++void dma_release_coherent_memory(struct device *dev);
+ int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size,
+ 		dma_addr_t *dma_handle, void **ret);
+ int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr);
+@@ -183,9 +184,11 @@ static inline int dma_declare_coherent_memory(struct device *dev,
+ {
+ 	return -ENOSYS;
+ }
++
+ #define dma_alloc_from_dev_coherent(dev, size, handle, ret) (0)
+ #define dma_release_from_dev_coherent(dev, order, vaddr) (0)
+ #define dma_mmap_from_dev_coherent(dev, vma, vaddr, order, ret) (0)
++static inline void dma_release_coherent_memory(struct device *dev) { }
+ 
+ static inline void *dma_alloc_from_global_coherent(struct device *dev,
+ 		ssize_t size, dma_addr_t *dma_handle)
+diff --git a/include/linux/iio/adc/adi-axi-adc.h b/include/linux/iio/adc/adi-axi-adc.h
+index c5d48e1c2d361..77b7f66e6fa80 100644
+--- a/include/linux/iio/adc/adi-axi-adc.h
++++ b/include/linux/iio/adc/adi-axi-adc.h
+@@ -41,6 +41,7 @@ struct adi_axi_adc_chip_info {
+  * @reg_access		IIO debugfs_reg_access hook for the client ADC
+  * @read_raw		IIO read_raw hook for the client ADC
+  * @write_raw		IIO write_raw hook for the client ADC
++ * @read_avail		IIO read_avail hook for the client ADC
+  */
+ struct adi_axi_adc_conv {
+ 	const struct adi_axi_adc_chip_info		*chip_info;
+@@ -54,6 +55,9 @@ struct adi_axi_adc_conv {
+ 	int (*write_raw)(struct adi_axi_adc_conv *conv,
+ 			 struct iio_chan_spec const *chan,
+ 			 int val, int val2, long mask);
++	int (*read_avail)(struct adi_axi_adc_conv *conv,
++			  struct iio_chan_spec const *chan,
++			  const int **val, int *type, int *length, long mask);
+ };
+ 
+ struct adi_axi_adc_conv *devm_adi_axi_adc_conv_register(struct device *dev,
+diff --git a/include/linux/of.h b/include/linux/of.h
+index 57f2d3dddc0ce..4ed8dce624e7b 100644
+--- a/include/linux/of.h
++++ b/include/linux/of.h
+@@ -415,130 +415,6 @@ extern int of_detach_node(struct device_node *);
+ 
+ #define of_match_ptr(_ptr)	(_ptr)
+ 
+-/**
+- * of_property_read_u8_array - Find and read an array of u8 from a property.
+- *
+- * @np:		device node from which the property value is to be read.
+- * @propname:	name of the property to be searched.
+- * @out_values:	pointer to return value, modified only if return value is 0.
+- * @sz:		number of array elements to read
+- *
+- * Search for a property in a device node and read 8-bit value(s) from
+- * it.
+- *
+- * dts entry of array should be like:
+- *  ``property = /bits/ 8 <0x50 0x60 0x70>;``
+- *
+- * Return: 0 on success, -EINVAL if the property does not exist,
+- * -ENODATA if property does not have a value, and -EOVERFLOW if the
+- * property data isn't large enough.
+- *
+- * The out_values is modified only if a valid u8 value can be decoded.
+- */
+-static inline int of_property_read_u8_array(const struct device_node *np,
+-					    const char *propname,
+-					    u8 *out_values, size_t sz)
+-{
+-	int ret = of_property_read_variable_u8_array(np, propname, out_values,
+-						     sz, 0);
+-	if (ret >= 0)
+-		return 0;
+-	else
+-		return ret;
+-}
+-
+-/**
+- * of_property_read_u16_array - Find and read an array of u16 from a property.
+- *
+- * @np:		device node from which the property value is to be read.
+- * @propname:	name of the property to be searched.
+- * @out_values:	pointer to return value, modified only if return value is 0.
+- * @sz:		number of array elements to read
+- *
+- * Search for a property in a device node and read 16-bit value(s) from
+- * it.
+- *
+- * dts entry of array should be like:
+- *  ``property = /bits/ 16 <0x5000 0x6000 0x7000>;``
+- *
+- * Return: 0 on success, -EINVAL if the property does not exist,
+- * -ENODATA if property does not have a value, and -EOVERFLOW if the
+- * property data isn't large enough.
+- *
+- * The out_values is modified only if a valid u16 value can be decoded.
+- */
+-static inline int of_property_read_u16_array(const struct device_node *np,
+-					     const char *propname,
+-					     u16 *out_values, size_t sz)
+-{
+-	int ret = of_property_read_variable_u16_array(np, propname, out_values,
+-						      sz, 0);
+-	if (ret >= 0)
+-		return 0;
+-	else
+-		return ret;
+-}
+-
+-/**
+- * of_property_read_u32_array - Find and read an array of 32 bit integers
+- * from a property.
+- *
+- * @np:		device node from which the property value is to be read.
+- * @propname:	name of the property to be searched.
+- * @out_values:	pointer to return value, modified only if return value is 0.
+- * @sz:		number of array elements to read
+- *
+- * Search for a property in a device node and read 32-bit value(s) from
+- * it.
+- *
+- * Return: 0 on success, -EINVAL if the property does not exist,
+- * -ENODATA if property does not have a value, and -EOVERFLOW if the
+- * property data isn't large enough.
+- *
+- * The out_values is modified only if a valid u32 value can be decoded.
+- */
+-static inline int of_property_read_u32_array(const struct device_node *np,
+-					     const char *propname,
+-					     u32 *out_values, size_t sz)
+-{
+-	int ret = of_property_read_variable_u32_array(np, propname, out_values,
+-						      sz, 0);
+-	if (ret >= 0)
+-		return 0;
+-	else
+-		return ret;
+-}
+-
+-/**
+- * of_property_read_u64_array - Find and read an array of 64 bit integers
+- * from a property.
+- *
+- * @np:		device node from which the property value is to be read.
+- * @propname:	name of the property to be searched.
+- * @out_values:	pointer to return value, modified only if return value is 0.
+- * @sz:		number of array elements to read
+- *
+- * Search for a property in a device node and read 64-bit value(s) from
+- * it.
+- *
+- * Return: 0 on success, -EINVAL if the property does not exist,
+- * -ENODATA if property does not have a value, and -EOVERFLOW if the
+- * property data isn't large enough.
+- *
+- * The out_values is modified only if a valid u64 value can be decoded.
+- */
+-static inline int of_property_read_u64_array(const struct device_node *np,
+-					     const char *propname,
+-					     u64 *out_values, size_t sz)
+-{
+-	int ret = of_property_read_variable_u64_array(np, propname, out_values,
+-						      sz, 0);
+-	if (ret >= 0)
+-		return 0;
+-	else
+-		return ret;
+-}
+-
+ /*
+  * struct property *prop;
+  * const __be32 *p;
+@@ -726,32 +602,6 @@ static inline int of_property_count_elems_of_size(const struct device_node *np,
+ 	return -ENOSYS;
+ }
+ 
+-static inline int of_property_read_u8_array(const struct device_node *np,
+-			const char *propname, u8 *out_values, size_t sz)
+-{
+-	return -ENOSYS;
+-}
+-
+-static inline int of_property_read_u16_array(const struct device_node *np,
+-			const char *propname, u16 *out_values, size_t sz)
+-{
+-	return -ENOSYS;
+-}
+-
+-static inline int of_property_read_u32_array(const struct device_node *np,
+-					     const char *propname,
+-					     u32 *out_values, size_t sz)
+-{
+-	return -ENOSYS;
+-}
+-
+-static inline int of_property_read_u64_array(const struct device_node *np,
+-					     const char *propname,
+-					     u64 *out_values, size_t sz)
+-{
+-	return -ENOSYS;
+-}
+-
+ static inline int of_property_read_u32_index(const struct device_node *np,
+ 			const char *propname, u32 index, u32 *out_value)
+ {
+@@ -1211,7 +1061,8 @@ static inline int of_property_read_string_index(const struct device_node *np,
+  * @np:		device node from which the property value is to be read.
+  * @propname:	name of the property to be searched.
+  *
+- * Search for a property in a device node.
++ * Search for a boolean property in a device node. Usage on non-boolean
++ * property types is deprecated.
+  *
+  * Return: true if the property exists false otherwise.
+  */
+@@ -1223,6 +1074,144 @@ static inline bool of_property_read_bool(const struct device_node *np,
+ 	return prop ? true : false;
+ }
+ 
++/**
++ * of_property_present - Test if a property is present in a node
++ * @np:		device node to search for the property.
++ * @propname:	name of the property to be searched.
++ *
++ * Test for a property present in a device node.
++ *
++ * Return: true if the property exists false otherwise.
++ */
++static inline bool of_property_present(const struct device_node *np, const char *propname)
++{
++	return of_property_read_bool(np, propname);
++}
++
++/**
++ * of_property_read_u8_array - Find and read an array of u8 from a property.
++ *
++ * @np:		device node from which the property value is to be read.
++ * @propname:	name of the property to be searched.
++ * @out_values:	pointer to return value, modified only if return value is 0.
++ * @sz:		number of array elements to read
++ *
++ * Search for a property in a device node and read 8-bit value(s) from
++ * it.
++ *
++ * dts entry of array should be like:
++ *  ``property = /bits/ 8 <0x50 0x60 0x70>;``
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist,
++ * -ENODATA if property does not have a value, and -EOVERFLOW if the
++ * property data isn't large enough.
++ *
++ * The out_values is modified only if a valid u8 value can be decoded.
++ */
++static inline int of_property_read_u8_array(const struct device_node *np,
++					    const char *propname,
++					    u8 *out_values, size_t sz)
++{
++	int ret = of_property_read_variable_u8_array(np, propname, out_values,
++						     sz, 0);
++	if (ret >= 0)
++		return 0;
++	else
++		return ret;
++}
++
++/**
++ * of_property_read_u16_array - Find and read an array of u16 from a property.
++ *
++ * @np:		device node from which the property value is to be read.
++ * @propname:	name of the property to be searched.
++ * @out_values:	pointer to return value, modified only if return value is 0.
++ * @sz:		number of array elements to read
++ *
++ * Search for a property in a device node and read 16-bit value(s) from
++ * it.
++ *
++ * dts entry of array should be like:
++ *  ``property = /bits/ 16 <0x5000 0x6000 0x7000>;``
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist,
++ * -ENODATA if property does not have a value, and -EOVERFLOW if the
++ * property data isn't large enough.
++ *
++ * The out_values is modified only if a valid u16 value can be decoded.
++ */
++static inline int of_property_read_u16_array(const struct device_node *np,
++					     const char *propname,
++					     u16 *out_values, size_t sz)
++{
++	int ret = of_property_read_variable_u16_array(np, propname, out_values,
++						      sz, 0);
++	if (ret >= 0)
++		return 0;
++	else
++		return ret;
++}
++
++/**
++ * of_property_read_u32_array - Find and read an array of 32 bit integers
++ * from a property.
++ *
++ * @np:		device node from which the property value is to be read.
++ * @propname:	name of the property to be searched.
++ * @out_values:	pointer to return value, modified only if return value is 0.
++ * @sz:		number of array elements to read
++ *
++ * Search for a property in a device node and read 32-bit value(s) from
++ * it.
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist,
++ * -ENODATA if property does not have a value, and -EOVERFLOW if the
++ * property data isn't large enough.
++ *
++ * The out_values is modified only if a valid u32 value can be decoded.
++ */
++static inline int of_property_read_u32_array(const struct device_node *np,
++					     const char *propname,
++					     u32 *out_values, size_t sz)
++{
++	int ret = of_property_read_variable_u32_array(np, propname, out_values,
++						      sz, 0);
++	if (ret >= 0)
++		return 0;
++	else
++		return ret;
++}
++
++/**
++ * of_property_read_u64_array - Find and read an array of 64 bit integers
++ * from a property.
++ *
++ * @np:		device node from which the property value is to be read.
++ * @propname:	name of the property to be searched.
++ * @out_values:	pointer to return value, modified only if return value is 0.
++ * @sz:		number of array elements to read
++ *
++ * Search for a property in a device node and read 64-bit value(s) from
++ * it.
++ *
++ * Return: 0 on success, -EINVAL if the property does not exist,
++ * -ENODATA if property does not have a value, and -EOVERFLOW if the
++ * property data isn't large enough.
++ *
++ * The out_values is modified only if a valid u64 value can be decoded.
++ */
++static inline int of_property_read_u64_array(const struct device_node *np,
++					     const char *propname,
++					     u64 *out_values, size_t sz)
++{
++	int ret = of_property_read_variable_u64_array(np, propname, out_values,
++						      sz, 0);
++	if (ret >= 0)
++		return 0;
++	else
++		return ret;
++}
++
+ static inline int of_property_read_u8(const struct device_node *np,
+ 				       const char *propname,
+ 				       u8 *out_value)
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index ef8d56b18da6b..8716a17063518 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -366,6 +366,20 @@ static inline void rcu_preempt_sleep_check(void) { }
+ #define rcu_check_sparse(p, space)
+ #endif /* #else #ifdef __CHECKER__ */
+ 
++/**
++ * unrcu_pointer - mark a pointer as not being RCU protected
++ * @p: pointer needing to lose its __rcu property
++ *
++ * Converts @p from an __rcu pointer to a __kernel pointer.
++ * This allows an __rcu pointer to be used with xchg() and friends.
++ */
++#define unrcu_pointer(p)						\
++({									\
++	typeof(*p) *_________p1 = (typeof(*p) *__force)(p);		\
++	rcu_check_sparse(p, __rcu); 					\
++	((typeof(*p) __force __kernel *)(_________p1)); 		\
++})
++
+ #define __rcu_access_pointer(p, space) \
+ ({ \
+ 	typeof(*p) *_________p1 = (typeof(*p) *__force)READ_ONCE(p); \
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index a168a64696b6b..33873266b2bc7 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -819,7 +819,6 @@ void hci_inquiry_cache_flush(struct hci_dev *hdev);
+ /* ----- HCI Connections ----- */
+ enum {
+ 	HCI_CONN_AUTH_PEND,
+-	HCI_CONN_REAUTH_PEND,
+ 	HCI_CONN_ENCRYPT_PEND,
+ 	HCI_CONN_RSWITCH_PEND,
+ 	HCI_CONN_MODE_CHANGE_PEND,
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 36ddfb98b70ea..29cc0eb2e4885 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -3424,6 +3424,8 @@ union bpf_attr {
+  * long bpf_get_task_stack(struct task_struct *task, void *buf, u32 size, u64 flags)
+  *	Description
+  *		Return a user or a kernel stack in bpf program provided buffer.
++ *		Note: the user stack will only be populated if the *task* is
++ *		the current task; all other tasks will return -EOPNOTSUPP.
+  *		To achieve this, the helper needs *task*, which is a valid
+  *		pointer to **struct task_struct**. To store the stacktrace, the
+  *		bpf program provides *buf* with a nonnegative *size*.
+@@ -3435,6 +3437,7 @@ union bpf_attr {
+  *
+  *		**BPF_F_USER_STACK**
+  *			Collect a user space stack instead of a kernel stack.
++ *			The *task* must be the current task.
+  *		**BPF_F_USER_BUILD_ID**
+  *			Collect buildid+offset instead of ips for user stack,
+  *			only valid if **BPF_F_USER_STACK** is also specified.
+diff --git a/include/uapi/linux/virtio_crypto.h b/include/uapi/linux/virtio_crypto.h
+index a03932f105652..71a54a6849ca9 100644
+--- a/include/uapi/linux/virtio_crypto.h
++++ b/include/uapi/linux/virtio_crypto.h
+@@ -37,6 +37,7 @@
+ #define VIRTIO_CRYPTO_SERVICE_HASH   1
+ #define VIRTIO_CRYPTO_SERVICE_MAC    2
+ #define VIRTIO_CRYPTO_SERVICE_AEAD   3
++#define VIRTIO_CRYPTO_SERVICE_AKCIPHER 4
+ 
+ #define VIRTIO_CRYPTO_OPCODE(service, op)   (((service) << 8) | (op))
+ 
+@@ -57,6 +58,10 @@ struct virtio_crypto_ctrl_header {
+ 	   VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x02)
+ #define VIRTIO_CRYPTO_AEAD_DESTROY_SESSION \
+ 	   VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x03)
++#define VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION \
++	   VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x04)
++#define VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION \
++	   VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x05)
+ 	__le32 opcode;
+ 	__le32 algo;
+ 	__le32 flag;
+@@ -180,6 +185,58 @@ struct virtio_crypto_aead_create_session_req {
+ 	__u8 padding[32];
+ };
+ 
++struct virtio_crypto_rsa_session_para {
++#define VIRTIO_CRYPTO_RSA_RAW_PADDING   0
++#define VIRTIO_CRYPTO_RSA_PKCS1_PADDING 1
++	__le32 padding_algo;
++
++#define VIRTIO_CRYPTO_RSA_NO_HASH   0
++#define VIRTIO_CRYPTO_RSA_MD2       1
++#define VIRTIO_CRYPTO_RSA_MD3       2
++#define VIRTIO_CRYPTO_RSA_MD4       3
++#define VIRTIO_CRYPTO_RSA_MD5       4
++#define VIRTIO_CRYPTO_RSA_SHA1      5
++#define VIRTIO_CRYPTO_RSA_SHA256    6
++#define VIRTIO_CRYPTO_RSA_SHA384    7
++#define VIRTIO_CRYPTO_RSA_SHA512    8
++#define VIRTIO_CRYPTO_RSA_SHA224    9
++	__le32 hash_algo;
++};
++
++struct virtio_crypto_ecdsa_session_para {
++#define VIRTIO_CRYPTO_CURVE_UNKNOWN   0
++#define VIRTIO_CRYPTO_CURVE_NIST_P192 1
++#define VIRTIO_CRYPTO_CURVE_NIST_P224 2
++#define VIRTIO_CRYPTO_CURVE_NIST_P256 3
++#define VIRTIO_CRYPTO_CURVE_NIST_P384 4
++#define VIRTIO_CRYPTO_CURVE_NIST_P521 5
++	__le32 curve_id;
++	__le32 padding;
++};
++
++struct virtio_crypto_akcipher_session_para {
++#define VIRTIO_CRYPTO_NO_AKCIPHER    0
++#define VIRTIO_CRYPTO_AKCIPHER_RSA   1
++#define VIRTIO_CRYPTO_AKCIPHER_DSA   2
++#define VIRTIO_CRYPTO_AKCIPHER_ECDSA 3
++	__le32 algo;
++
++#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC  1
++#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE 2
++	__le32 keytype;
++	__le32 keylen;
++
++	union {
++		struct virtio_crypto_rsa_session_para rsa;
++		struct virtio_crypto_ecdsa_session_para ecdsa;
++	} u;
++};
++
++struct virtio_crypto_akcipher_create_session_req {
++	struct virtio_crypto_akcipher_session_para para;
++	__u8 padding[36];
++};
++
+ struct virtio_crypto_alg_chain_session_para {
+ #define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER  1
+ #define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH  2
+@@ -247,6 +304,8 @@ struct virtio_crypto_op_ctrl_req {
+ 			mac_create_session;
+ 		struct virtio_crypto_aead_create_session_req
+ 			aead_create_session;
++		struct virtio_crypto_akcipher_create_session_req
++			akcipher_create_session;
+ 		struct virtio_crypto_destroy_session_req
+ 			destroy_session;
+ 		__u8 padding[56];
+@@ -266,6 +325,14 @@ struct virtio_crypto_op_header {
+ 	VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x00)
+ #define VIRTIO_CRYPTO_AEAD_DECRYPT \
+ 	VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x01)
++#define VIRTIO_CRYPTO_AKCIPHER_ENCRYPT \
++	VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x00)
++#define VIRTIO_CRYPTO_AKCIPHER_DECRYPT \
++	VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x01)
++#define VIRTIO_CRYPTO_AKCIPHER_SIGN \
++	VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x02)
++#define VIRTIO_CRYPTO_AKCIPHER_VERIFY \
++	VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x03)
+ 	__le32 opcode;
+ 	/* algo should be service-specific algorithms */
+ 	__le32 algo;
+@@ -390,6 +457,16 @@ struct virtio_crypto_aead_data_req {
+ 	__u8 padding[32];
+ };
+ 
++struct virtio_crypto_akcipher_para {
++	__le32 src_data_len;
++	__le32 dst_data_len;
++};
++
++struct virtio_crypto_akcipher_data_req {
++	struct virtio_crypto_akcipher_para para;
++	__u8 padding[40];
++};
++
+ /* The request of the data virtqueue's packet */
+ struct virtio_crypto_op_data_req {
+ 	struct virtio_crypto_op_header header;
+@@ -399,6 +476,7 @@ struct virtio_crypto_op_data_req {
+ 		struct virtio_crypto_hash_data_req hash_req;
+ 		struct virtio_crypto_mac_data_req mac_req;
+ 		struct virtio_crypto_aead_data_req aead_req;
++		struct virtio_crypto_akcipher_data_req akcipher_req;
+ 		__u8 padding[48];
+ 	} u;
+ };
+@@ -408,6 +486,8 @@ struct virtio_crypto_op_data_req {
+ #define VIRTIO_CRYPTO_BADMSG    2
+ #define VIRTIO_CRYPTO_NOTSUPP   3
+ #define VIRTIO_CRYPTO_INVSESS   4 /* Invalid session id */
++#define VIRTIO_CRYPTO_NOSPC     5 /* no free session ID */
++#define VIRTIO_CRYPTO_KEY_REJECTED 6 /* Signature verification failed */
+ 
+ /* The accelerator hardware is ready */
+ #define VIRTIO_CRYPTO_S_HW_READY  (1 << 0)
+@@ -438,7 +518,7 @@ struct virtio_crypto_config {
+ 	__le32 max_cipher_key_len;
+ 	/* Maximum length of authenticated key */
+ 	__le32 max_auth_key_len;
+-	__le32 reserve;
++	__le32 akcipher_algo;
+ 	/* Maximum size of each crypto request's content */
+ 	__le64 max_size;
+ };
+diff --git a/init/do_mounts.c b/init/do_mounts.c
+index b5f9604d0c98a..8ef154fb418c1 100644
+--- a/init/do_mounts.c
++++ b/init/do_mounts.c
+@@ -649,7 +649,10 @@ struct file_system_type rootfs_fs_type = {
+ 
+ void __init init_rootfs(void)
+ {
+-	if (IS_ENABLED(CONFIG_TMPFS) && !saved_root_name[0] &&
+-		(!root_fs_names || strstr(root_fs_names, "tmpfs")))
+-		is_tmpfs = true;
++	if (IS_ENABLED(CONFIG_TMPFS)) {
++		if (!saved_root_name[0] && !root_fs_names)
++			is_tmpfs = true;
++		else if (root_fs_names && !!strstr(root_fs_names, "tmpfs"))
++			is_tmpfs = true;
++	}
+ }
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index aa15e8d878a9e..936abc6ee450c 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -3485,14 +3485,17 @@ static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
+ 	struct iovec *iov = iorw->fast_iov;
+ 	int ret;
+ 
++	iorw->bytes_done = 0;
++	iorw->free_iovec = NULL;
++
+ 	ret = io_import_iovec(rw, req, &iov, &iorw->iter, false);
+ 	if (unlikely(ret < 0))
+ 		return ret;
+ 
+-	iorw->bytes_done = 0;
+-	iorw->free_iovec = iov;
+-	if (iov)
++	if (iov) {
++		iorw->free_iovec = iov;
+ 		req->flags |= REQ_F_NEED_CLEANUP;
++	}
+ 	iov_iter_save_state(&iorw->iter, &iorw->iter_state);
+ 	return 0;
+ }
+diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
+index 00e32f2ec3e60..3c2d8722d45b3 100644
+--- a/kernel/bpf/lpm_trie.c
++++ b/kernel/bpf/lpm_trie.c
+@@ -230,6 +230,9 @@ static void *trie_lookup_elem(struct bpf_map *map, void *_key)
+ 	struct lpm_trie_node *node, *found = NULL;
+ 	struct bpf_lpm_trie_key *key = _key;
+ 
++	if (key->prefixlen > trie->max_prefixlen)
++		return NULL;
++
+ 	/* Start walking the trie from the root node ... */
+ 
+ 	for (node = rcu_dereference(trie->root); node;) {
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index 0c5bf98d55767..b8afea2ceeeb1 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -575,6 +575,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
+ {
+ 	u32 trace_nr, copy_len, elem_size, num_elem, max_depth;
+ 	bool user_build_id = flags & BPF_F_USER_BUILD_ID;
++	bool crosstask = task && task != current;
+ 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
+ 	bool user = flags & BPF_F_USER_STACK;
+ 	struct perf_callchain_entry *trace;
+@@ -597,6 +598,14 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
+ 	if (task && user && !user_mode(regs))
+ 		goto err_fault;
+ 
++	/* get_perf_callchain does not support crosstask user stack walking
++	 * but returns an empty stack instead of NULL.
++	 */
++	if (crosstask && user) {
++		err = -EOPNOTSUPP;
++		goto clear;
++	}
++
+ 	num_elem = size / elem_size;
+ 	max_depth = num_elem + skip;
+ 	if (sysctl_perf_event_max_stack < max_depth)
+@@ -608,7 +617,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
+ 		trace = get_callchain_entry_for_task(task, max_depth);
+ 	else
+ 		trace = get_perf_callchain(regs, 0, kernel, user, max_depth,
+-					   false, false);
++					   crosstask, false);
+ 	if (unlikely(!trace))
+ 		goto err_fault;
+ 
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 45c50ee9b0370..fce2345f600f2 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2493,7 +2493,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 	 * so it's aligned access and [off, off + size) are within stack limits
+ 	 */
+ 	if (!env->allow_ptr_leaks &&
+-	    state->stack[spi].slot_type[0] == STACK_SPILL &&
++	    is_spilled_reg(&state->stack[spi]) &&
+ 	    size != BPF_REG_SIZE) {
+ 		verbose(env, "attempt to corrupt spilled pointer on stack\n");
+ 		return -EACCES;
+@@ -3926,10 +3926,7 @@ static int check_stack_access_within_bounds(
+ 
+ 	if (tnum_is_const(reg->var_off)) {
+ 		min_off = reg->var_off.value + off;
+-		if (access_size > 0)
+-			max_off = min_off + access_size - 1;
+-		else
+-			max_off = min_off;
++		max_off = min_off + access_size;
+ 	} else {
+ 		if (reg->smax_value >= BPF_MAX_VAR_OFF ||
+ 		    reg->smin_value <= -BPF_MAX_VAR_OFF) {
+@@ -3938,15 +3935,12 @@ static int check_stack_access_within_bounds(
+ 			return -EACCES;
+ 		}
+ 		min_off = reg->smin_value + off;
+-		if (access_size > 0)
+-			max_off = reg->smax_value + off + access_size - 1;
+-		else
+-			max_off = min_off;
++		max_off = reg->smax_value + off + access_size;
+ 	}
+ 
+ 	err = check_stack_slot_within_bounds(min_off, state, type);
+-	if (!err)
+-		err = check_stack_slot_within_bounds(max_off, state, type);
++	if (!err && max_off > 0)
++		err = -EINVAL; /* out of stack access into non-negative offsets */
+ 
+ 	if (err) {
+ 		if (tnum_is_const(reg->var_off)) {
+diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
+index 4e09fab52faf5..c27b3dfa19210 100644
+--- a/kernel/debug/kdb/kdb_main.c
++++ b/kernel/debug/kdb/kdb_main.c
+@@ -1364,8 +1364,6 @@ do_full_getstr:
+ 		/* PROMPT can only be set if we have MEM_READ permission. */
+ 		snprintf(kdb_prompt_str, CMD_BUFLEN, kdbgetenv("PROMPT"),
+ 			 raw_smp_processor_id());
+-		if (defcmd_in_progress)
+-			strncat(kdb_prompt_str, "[defcmd]", CMD_BUFLEN);
+ 
+ 		/*
+ 		 * Fetch command from keyboard
+diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
+index 5b5b6c7ec7f28..49aaad3936f19 100644
+--- a/kernel/dma/coherent.c
++++ b/kernel/dma/coherent.c
+@@ -84,7 +84,7 @@ out:
+ 	return ret;
+ }
+ 
+-static void dma_release_coherent_memory(struct dma_coherent_mem *mem)
++static void _dma_release_coherent_memory(struct dma_coherent_mem *mem)
+ {
+ 	if (!mem)
+ 		return;
+@@ -136,10 +136,18 @@ int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
+ 
+ 	ret = dma_assign_coherent_memory(dev, mem);
+ 	if (ret)
+-		dma_release_coherent_memory(mem);
++		_dma_release_coherent_memory(mem);
+ 	return ret;
+ }
+ 
++void dma_release_coherent_memory(struct device *dev)
++{
++	if (dev) {
++		_dma_release_coherent_memory(dev->dma_mem);
++		dev->dma_mem = NULL;
++	}
++}
++
+ static void *__dma_alloc_from_coherent(struct device *dev,
+ 				       struct dma_coherent_mem *mem,
+ 				       ssize_t size, dma_addr_t *dma_handle)
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index f1ea3123f3832..05d3e156a7d63 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -545,17 +545,15 @@ static void do_unoptimize_kprobes(void)
+ 	/* See comment in do_optimize_kprobes() */
+ 	lockdep_assert_cpus_held();
+ 
+-	/* Unoptimization must be done anytime */
+-	if (list_empty(&unoptimizing_list))
+-		return;
++	if (!list_empty(&unoptimizing_list))
++		arch_unoptimize_kprobes(&unoptimizing_list, &freeing_list);
+ 
+-	arch_unoptimize_kprobes(&unoptimizing_list, &freeing_list);
+-	/* Loop free_list for disarming */
++	/* Loop on 'freeing_list' for disarming and removing from kprobe hash list */
+ 	list_for_each_entry_safe(op, tmp, &freeing_list, list) {
+ 		/* Switching from detour code to origin */
+ 		op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
+-		/* Disarm probes if marked disabled */
+-		if (kprobe_disabled(&op->kp))
++		/* Disarm probes if marked disabled and not gone */
++		if (kprobe_disabled(&op->kp) && !kprobe_gone(&op->kp))
+ 			arch_disarm_kprobe(&op->kp);
+ 		if (kprobe_unused(&op->kp)) {
+ 			/*
+@@ -784,14 +782,13 @@ static void kill_optimized_kprobe(struct kprobe *p)
+ 	op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
+ 
+ 	if (kprobe_unused(p)) {
+-		/* Enqueue if it is unused */
+-		list_add(&op->list, &freeing_list);
+ 		/*
+-		 * Remove unused probes from the hash list. After waiting
+-		 * for synchronization, this probe is reclaimed.
+-		 * (reclaiming is done by do_free_cleaned_kprobes().)
++		 * Unused kprobe is on unoptimizing or freeing list. We move it
++		 * to freeing_list and let the kprobe_optimizer() remove it from
++		 * the kprobe hash list and free it.
+ 		 */
+-		hlist_del_rcu(&op->kp.hlist);
++		if (optprobe_queued_unopt(op))
++			list_move(&op->list, &freeing_list);
+ 	}
+ 
+ 	/* Don't touch the code, because it is already freed. */
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index fc79b04b59470..bc00ab0118e6c 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -1439,13 +1439,18 @@ void tick_setup_sched_timer(void)
+ void tick_cancel_sched_timer(int cpu)
+ {
+ 	struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
++	ktime_t idle_sleeptime, iowait_sleeptime;
+ 
+ # ifdef CONFIG_HIGH_RES_TIMERS
+ 	if (ts->sched_timer.base)
+ 		hrtimer_cancel(&ts->sched_timer);
+ # endif
+ 
++	idle_sleeptime = ts->idle_sleeptime;
++	iowait_sleeptime = ts->iowait_sleeptime;
+ 	memset(ts, 0, sizeof(*ts));
++	ts->idle_sleeptime = idle_sleeptime;
++	ts->iowait_sleeptime = iowait_sleeptime;
+ }
+ #endif
+ 
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 5abe88091803e..041b91c2ba10a 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -3444,6 +3444,12 @@ rb_reserve_next_event(struct trace_buffer *buffer,
+ 	int nr_loops = 0;
+ 	int add_ts_default;
+ 
++	/* ring buffer does cmpxchg, make sure it is safe in NMI context */
++	if (!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) &&
++	    (unlikely(in_nmi()))) {
++		return NULL;
++	}
++
+ 	rb_start_commit(cpu_buffer);
+ 	/* The commit page can not change after this */
+ 
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 0cbf833bebccf..548f694fc8574 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -4339,7 +4339,11 @@ static int s_show(struct seq_file *m, void *v)
+ 		iter->leftover = ret;
+ 
+ 	} else {
+-		print_trace_line(iter);
++		ret = print_trace_line(iter);
++		if (ret == TRACE_TYPE_PARTIAL_LINE) {
++			iter->seq.full = 0;
++			trace_seq_puts(&iter->seq, "[LINE TOO BIG]\n");
++		}
+ 		ret = trace_print_seq(m, &iter->seq);
+ 		/*
+ 		 * If we overflow the seq_file buffer, then it will
+diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
+index 94b0991717b6d..753b84c50848a 100644
+--- a/kernel/trace/trace_output.c
++++ b/kernel/trace/trace_output.c
+@@ -1313,11 +1313,12 @@ static enum print_line_t trace_print_print(struct trace_iterator *iter,
+ {
+ 	struct print_entry *field;
+ 	struct trace_seq *s = &iter->seq;
++	int max = iter->ent_size - offsetof(struct print_entry, buf);
+ 
+ 	trace_assign_type(field, iter->ent);
+ 
+ 	seq_print_ip_sym(s, field->ip, flags);
+-	trace_seq_printf(s, ": %s", field->buf);
++	trace_seq_printf(s, ": %.*s", max, field->buf);
+ 
+ 	return trace_handle_return(s);
+ }
+@@ -1326,10 +1327,11 @@ static enum print_line_t trace_print_raw(struct trace_iterator *iter, int flags,
+ 					 struct trace_event *event)
+ {
+ 	struct print_entry *field;
++	int max = iter->ent_size - offsetof(struct print_entry, buf);
+ 
+ 	trace_assign_type(field, iter->ent);
+ 
+-	trace_seq_printf(&iter->seq, "# %lx %s", field->ip, field->buf);
++	trace_seq_printf(&iter->seq, "# %lx %.*s", field->ip, max, field->buf);
+ 
+ 	return trace_handle_return(&iter->seq);
+ }
+diff --git a/lib/idr.c b/lib/idr.c
+index 13f2758c23773..da36054c3ca02 100644
+--- a/lib/idr.c
++++ b/lib/idr.c
+@@ -508,7 +508,7 @@ void ida_free(struct ida *ida, unsigned int id)
+ 			goto delete;
+ 		xas_store(&xas, xa_mk_value(v));
+ 	} else {
+-		if (!test_bit(bit, bitmap->bitmap))
++		if (!bitmap || !test_bit(bit, bitmap->bitmap))
+ 			goto err;
+ 		__clear_bit(bit, bitmap->bitmap);
+ 		xas_set_mark(&xas, XA_FREE_MARK);
+diff --git a/lib/test_ida.c b/lib/test_ida.c
+index b068806259615..55105baa19da9 100644
+--- a/lib/test_ida.c
++++ b/lib/test_ida.c
+@@ -150,6 +150,45 @@ static void ida_check_conv(struct ida *ida)
+ 	IDA_BUG_ON(ida, !ida_is_empty(ida));
+ }
+ 
++/*
++ * Check various situations where we attempt to free an ID we don't own.
++ */
++static void ida_check_bad_free(struct ida *ida)
++{
++	unsigned long i;
++
++	printk("vvv Ignore \"not allocated\" warnings\n");
++	/* IDA is empty; all of these will fail */
++	ida_free(ida, 0);
++	for (i = 0; i < 31; i++)
++		ida_free(ida, 1 << i);
++
++	/* IDA contains a single value entry */
++	IDA_BUG_ON(ida, ida_alloc_min(ida, 3, GFP_KERNEL) != 3);
++	ida_free(ida, 0);
++	for (i = 0; i < 31; i++)
++		ida_free(ida, 1 << i);
++
++	/* IDA contains a single bitmap */
++	IDA_BUG_ON(ida, ida_alloc_min(ida, 1023, GFP_KERNEL) != 1023);
++	ida_free(ida, 0);
++	for (i = 0; i < 31; i++)
++		ida_free(ida, 1 << i);
++
++	/* IDA contains a tree */
++	IDA_BUG_ON(ida, ida_alloc_min(ida, (1 << 20) - 1, GFP_KERNEL) != (1 << 20) - 1);
++	ida_free(ida, 0);
++	for (i = 0; i < 31; i++)
++		ida_free(ida, 1 << i);
++	printk("^^^ \"not allocated\" warnings over\n");
++
++	ida_free(ida, 3);
++	ida_free(ida, 1023);
++	ida_free(ida, (1 << 20) - 1);
++
++	IDA_BUG_ON(ida, !ida_is_empty(ida));
++}
++
+ static DEFINE_IDA(ida);
+ 
+ static int ida_checks(void)
+@@ -162,6 +201,7 @@ static int ida_checks(void)
+ 	ida_check_leaf(&ida, 1024 * 64);
+ 	ida_check_max(&ida);
+ 	ida_check_conv(&ida);
++	ida_check_bad_free(&ida);
+ 
+ 	printk("IDA: %u of %u tests passed\n", tests_passed, tests_run);
+ 	return (tests_run != tests_passed) ? 0 : -EINVAL;
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 74721c3e49b34..52e512f41da3c 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1385,12 +1385,10 @@ static int hci_conn_auth(struct hci_conn *conn, __u8 sec_level, __u8 auth_type)
+ 		hci_send_cmd(conn->hdev, HCI_OP_AUTH_REQUESTED,
+ 			     sizeof(cp), &cp);
+ 
+-		/* If we're already encrypted set the REAUTH_PEND flag,
+-		 * otherwise set the ENCRYPT_PEND.
++		/* Set the ENCRYPT_PEND to trigger encryption after
++		 * authentication.
+ 		 */
+-		if (test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+-			set_bit(HCI_CONN_REAUTH_PEND, &conn->flags);
+-		else
++		if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ 			set_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags);
+ 	}
+ 
+diff --git a/net/bluetooth/hci_debugfs.c b/net/bluetooth/hci_debugfs.c
+index 338833f123659..d4efc4aa55af8 100644
+--- a/net/bluetooth/hci_debugfs.c
++++ b/net/bluetooth/hci_debugfs.c
+@@ -994,10 +994,12 @@ static int min_key_size_set(void *data, u64 val)
+ {
+ 	struct hci_dev *hdev = data;
+ 
+-	if (val > hdev->le_max_key_size || val < SMP_MIN_ENC_KEY_SIZE)
++	hci_dev_lock(hdev);
++	if (val > hdev->le_max_key_size || val < SMP_MIN_ENC_KEY_SIZE) {
++		hci_dev_unlock(hdev);
+ 		return -EINVAL;
++	}
+ 
+-	hci_dev_lock(hdev);
+ 	hdev->le_min_key_size = val;
+ 	hci_dev_unlock(hdev);
+ 
+@@ -1022,10 +1024,12 @@ static int max_key_size_set(void *data, u64 val)
+ {
+ 	struct hci_dev *hdev = data;
+ 
+-	if (val > SMP_MAX_ENC_KEY_SIZE || val < hdev->le_min_key_size)
++	hci_dev_lock(hdev);
++	if (val > SMP_MAX_ENC_KEY_SIZE || val < hdev->le_min_key_size) {
++		hci_dev_unlock(hdev);
+ 		return -EINVAL;
++	}
+ 
+-	hci_dev_lock(hdev);
+ 	hdev->le_max_key_size = val;
+ 	hci_dev_unlock(hdev);
+ 
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index ee2c1a17366a2..4027c79786fd3 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -2924,14 +2924,8 @@ static void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 
+ 	if (!ev->status) {
+ 		clear_bit(HCI_CONN_AUTH_FAILURE, &conn->flags);
+-
+-		if (!hci_conn_ssp_enabled(conn) &&
+-		    test_bit(HCI_CONN_REAUTH_PEND, &conn->flags)) {
+-			bt_dev_info(hdev, "re-auth of legacy device is not possible.");
+-		} else {
+-			set_bit(HCI_CONN_AUTH, &conn->flags);
+-			conn->sec_level = conn->pending_sec_level;
+-		}
++		set_bit(HCI_CONN_AUTH, &conn->flags);
++		conn->sec_level = conn->pending_sec_level;
+ 	} else {
+ 		if (ev->status == HCI_ERROR_PIN_OR_KEY_MISSING)
+ 			set_bit(HCI_CONN_AUTH_FAILURE, &conn->flags);
+@@ -2940,7 +2934,6 @@ static void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 	}
+ 
+ 	clear_bit(HCI_CONN_AUTH_PEND, &conn->flags);
+-	clear_bit(HCI_CONN_REAUTH_PEND, &conn->flags);
+ 
+ 	if (conn->state == BT_CONFIG) {
+ 		if (!ev->status && hci_conn_ssp_enabled(conn)) {
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 4c43183a8d93a..432e3a64dc4a5 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -226,9 +226,11 @@ static int neigh_forced_gc(struct neigh_table *tbl)
+ {
+ 	int max_clean = atomic_read(&tbl->gc_entries) -
+ 			READ_ONCE(tbl->gc_thresh2);
++	u64 tmax = ktime_get_ns() + NSEC_PER_MSEC;
+ 	unsigned long tref = jiffies - 5 * HZ;
+ 	struct neighbour *n, *tmp;
+ 	int shrunk = 0;
++	int loop = 0;
+ 
+ 	NEIGH_CACHE_STAT_INC(tbl, forced_gc_runs);
+ 
+@@ -251,11 +253,16 @@ static int neigh_forced_gc(struct neigh_table *tbl)
+ 				shrunk++;
+ 			if (shrunk >= max_clean)
+ 				break;
++			if (++loop == 16) {
++				if (ktime_get_ns() > tmax)
++					goto unlock;
++				loop = 0;
++			}
+ 		}
+ 	}
+ 
+ 	WRITE_ONCE(tbl->last_flush, jiffies);
+-
++unlock:
+ 	write_unlock_bh(&tbl->lock);
+ 
+ 	return shrunk;
+diff --git a/net/dns_resolver/dns_key.c b/net/dns_resolver/dns_key.c
+index 8324e9f970668..26a9d8434c234 100644
+--- a/net/dns_resolver/dns_key.c
++++ b/net/dns_resolver/dns_key.c
+@@ -104,7 +104,7 @@ dns_resolver_preparse(struct key_preparsed_payload *prep)
+ 		const struct dns_server_list_v1_header *v1;
+ 
+ 		/* It may be a server list. */
+-		if (datalen <= sizeof(*v1))
++		if (datalen < sizeof(*v1))
+ 			return -EINVAL;
+ 
+ 		v1 = (const struct dns_server_list_v1_header *)data;
+diff --git a/net/ethtool/features.c b/net/ethtool/features.c
+index 1c9f4df273bd5..faccab84d8656 100644
+--- a/net/ethtool/features.c
++++ b/net/ethtool/features.c
+@@ -235,17 +235,20 @@ int ethnl_set_features(struct sk_buff *skb, struct genl_info *info)
+ 	dev = req_info.dev;
+ 
+ 	rtnl_lock();
++	ret = ethnl_ops_begin(dev);
++	if (ret < 0)
++		goto out_rtnl;
+ 	ethnl_features_to_bitmap(old_active, dev->features);
+ 	ethnl_features_to_bitmap(old_wanted, dev->wanted_features);
+ 	ret = ethnl_parse_bitset(req_wanted, req_mask, NETDEV_FEATURE_COUNT,
+ 				 tb[ETHTOOL_A_FEATURES_WANTED],
+ 				 netdev_features_strings, info->extack);
+ 	if (ret < 0)
+-		goto out_rtnl;
++		goto out_ops;
+ 	if (ethnl_bitmap_to_features(req_mask) & ~NETIF_F_ETHTOOL_BITS) {
+ 		GENL_SET_ERR_MSG(info, "attempt to change non-ethtool features");
+ 		ret = -EINVAL;
+-		goto out_rtnl;
++		goto out_ops;
+ 	}
+ 
+ 	/* set req_wanted bits not in req_mask from old_wanted */
+@@ -282,6 +285,8 @@ int ethnl_set_features(struct sk_buff *skb, struct genl_info *info)
+ 	if (mod)
+ 		netdev_features_change(dev);
+ 
++out_ops:
++	ethnl_ops_complete(dev);
+ out_rtnl:
+ 	rtnl_unlock();
+ 	dev_put(dev);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index a03a322e0cc1c..edf4a842506f2 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -427,7 +427,7 @@ __u16 ip6_tnl_parse_tlv_enc_lim(struct sk_buff *skb, __u8 *raw)
+ 	const struct ipv6hdr *ipv6h = (const struct ipv6hdr *)raw;
+ 	unsigned int nhoff = raw - skb->data;
+ 	unsigned int off = nhoff + sizeof(*ipv6h);
+-	u8 next, nexthdr = ipv6h->nexthdr;
++	u8 nexthdr = ipv6h->nexthdr;
+ 
+ 	while (ipv6_ext_hdr(nexthdr) && nexthdr != NEXTHDR_NONE) {
+ 		struct ipv6_opt_hdr *hdr;
+@@ -438,25 +438,25 @@ __u16 ip6_tnl_parse_tlv_enc_lim(struct sk_buff *skb, __u8 *raw)
+ 
+ 		hdr = (struct ipv6_opt_hdr *)(skb->data + off);
+ 		if (nexthdr == NEXTHDR_FRAGMENT) {
+-			struct frag_hdr *frag_hdr = (struct frag_hdr *) hdr;
+-			if (frag_hdr->frag_off)
+-				break;
+ 			optlen = 8;
+ 		} else if (nexthdr == NEXTHDR_AUTH) {
+ 			optlen = ipv6_authlen(hdr);
+ 		} else {
+ 			optlen = ipv6_optlen(hdr);
+ 		}
+-		/* cache hdr->nexthdr, since pskb_may_pull() might
+-		 * invalidate hdr
+-		 */
+-		next = hdr->nexthdr;
+-		if (nexthdr == NEXTHDR_DEST) {
+-			u16 i = 2;
+ 
+-			/* Remember : hdr is no longer valid at this point. */
+-			if (!pskb_may_pull(skb, off + optlen))
++		if (!pskb_may_pull(skb, off + optlen))
++			break;
++
++		hdr = (struct ipv6_opt_hdr *)(skb->data + off);
++		if (nexthdr == NEXTHDR_FRAGMENT) {
++			struct frag_hdr *frag_hdr = (struct frag_hdr *)hdr;
++
++			if (frag_hdr->frag_off)
+ 				break;
++		}
++		if (nexthdr == NEXTHDR_DEST) {
++			u16 i = 2;
+ 
+ 			while (1) {
+ 				struct ipv6_tlv_tnl_enc_lim *tel;
+@@ -477,7 +477,7 @@ __u16 ip6_tnl_parse_tlv_enc_lim(struct sk_buff *skb, __u8 *raw)
+ 					i++;
+ 			}
+ 		}
+-		nexthdr = next;
++		nexthdr = hdr->nexthdr;
+ 		off += optlen;
+ 	}
+ 	return 0;
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 64afe71e2129a..c389d7e47135d 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -92,6 +92,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 			mp_opt->dss = 1;
+ 			mp_opt->use_map = 1;
+ 			mp_opt->mpc_map = 1;
++			mp_opt->use_ack = 0;
+ 			mp_opt->data_len = get_unaligned_be16(ptr);
+ 			ptr += 2;
+ 		}
+diff --git a/net/ncsi/internal.h b/net/ncsi/internal.h
+index e37102546be6a..ec765f2a75691 100644
+--- a/net/ncsi/internal.h
++++ b/net/ncsi/internal.h
+@@ -98,9 +98,12 @@ enum {
+ 
+ 
+ struct ncsi_channel_version {
+-	u32 version;		/* Supported BCD encoded NCSI version */
+-	u32 alpha2;		/* Supported BCD encoded NCSI version */
+-	u8  fw_name[12];	/* Firware name string                */
++	u8   major;		/* NCSI version major */
++	u8   minor;		/* NCSI version minor */
++	u8   update;		/* NCSI version update */
++	char alpha1;		/* NCSI version alpha1 */
++	char alpha2;		/* NCSI version alpha2 */
++	u8  fw_name[12];	/* Firmware name string                */
+ 	u32 fw_version;		/* Firmware version                   */
+ 	u16 pci_ids[4];		/* PCI identification                 */
+ 	u32 mf_id;		/* Manufacture ID                     */
+diff --git a/net/ncsi/ncsi-netlink.c b/net/ncsi/ncsi-netlink.c
+index c189b4c8a1823..db350b8f5d88b 100644
+--- a/net/ncsi/ncsi-netlink.c
++++ b/net/ncsi/ncsi-netlink.c
+@@ -71,8 +71,8 @@ static int ncsi_write_channel_info(struct sk_buff *skb,
+ 	if (nc == nc->package->preferred_channel)
+ 		nla_put_flag(skb, NCSI_CHANNEL_ATTR_FORCED);
+ 
+-	nla_put_u32(skb, NCSI_CHANNEL_ATTR_VERSION_MAJOR, nc->version.version);
+-	nla_put_u32(skb, NCSI_CHANNEL_ATTR_VERSION_MINOR, nc->version.alpha2);
++	nla_put_u32(skb, NCSI_CHANNEL_ATTR_VERSION_MAJOR, nc->version.major);
++	nla_put_u32(skb, NCSI_CHANNEL_ATTR_VERSION_MINOR, nc->version.minor);
+ 	nla_put_string(skb, NCSI_CHANNEL_ATTR_VERSION_STR, nc->version.fw_name);
+ 
+ 	vid_nest = nla_nest_start_noflag(skb, NCSI_CHANNEL_ATTR_VLAN_LIST);
+diff --git a/net/ncsi/ncsi-pkt.h b/net/ncsi/ncsi-pkt.h
+index 80938b338feee..3fbea7e74fb1c 100644
+--- a/net/ncsi/ncsi-pkt.h
++++ b/net/ncsi/ncsi-pkt.h
+@@ -191,9 +191,12 @@ struct ncsi_rsp_gls_pkt {
+ /* Get Version ID */
+ struct ncsi_rsp_gvi_pkt {
+ 	struct ncsi_rsp_pkt_hdr rsp;          /* Response header */
+-	__be32                  ncsi_version; /* NCSI version    */
++	unsigned char           major;        /* NCSI version major */
++	unsigned char           minor;        /* NCSI version minor */
++	unsigned char           update;       /* NCSI version update */
++	unsigned char           alpha1;       /* NCSI version alpha1 */
+ 	unsigned char           reserved[3];  /* Reserved        */
+-	unsigned char           alpha2;       /* NCSI version    */
++	unsigned char           alpha2;       /* NCSI version alpha2 */
+ 	unsigned char           fw_name[12];  /* f/w name string */
+ 	__be32                  fw_version;   /* f/w version     */
+ 	__be16                  pci_ids[4];   /* PCI IDs         */
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index 888ccc2d4e34b..6a46388116601 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -19,6 +19,19 @@
+ #include "ncsi-pkt.h"
+ #include "ncsi-netlink.h"
+ 
++/* Nibbles within [0xA, 0xF] add zero "0" to the returned value.
++ * Optional fields (encoded as 0xFF) will default to zero.
++ */
++static u8 decode_bcd_u8(u8 x)
++{
++	int lo = x & 0xF;
++	int hi = x >> 4;
++
++	lo = lo < 0xA ? lo : 0;
++	hi = hi < 0xA ? hi : 0;
++	return lo + hi * 10;
++}
++
+ static int ncsi_validate_rsp_pkt(struct ncsi_request *nr,
+ 				 unsigned short payload)
+ {
+@@ -755,9 +768,18 @@ static int ncsi_rsp_handler_gvi(struct ncsi_request *nr)
+ 	if (!nc)
+ 		return -ENODEV;
+ 
+-	/* Update to channel's version info */
++	/* Update channel's version info
++	 *
++	 * Major, minor, and update fields are supposed to be
++	 * unsigned integers encoded as packed BCD.
++	 *
++	 * Alpha1 and alpha2 are ISO/IEC 8859-1 characters.
++	 */
+ 	ncv = &nc->version;
+-	ncv->version = ntohl(rsp->ncsi_version);
++	ncv->major = decode_bcd_u8(rsp->major);
++	ncv->minor = decode_bcd_u8(rsp->minor);
++	ncv->update = decode_bcd_u8(rsp->update);
++	ncv->alpha1 = rsp->alpha1;
+ 	ncv->alpha2 = rsp->alpha2;
+ 	memcpy(ncv->fw_name, rsp->fw_name, 12);
+ 	ncv->fw_version = ntohl(rsp->fw_version);
+diff --git a/net/netfilter/ipvs/ip_vs_xmit.c b/net/netfilter/ipvs/ip_vs_xmit.c
+index cd2130e98836b..c87dbc8970023 100644
+--- a/net/netfilter/ipvs/ip_vs_xmit.c
++++ b/net/netfilter/ipvs/ip_vs_xmit.c
+@@ -271,7 +271,7 @@ static inline bool decrement_ttl(struct netns_ipvs *ipvs,
+ 			skb->dev = dst->dev;
+ 			icmpv6_send(skb, ICMPV6_TIME_EXCEED,
+ 				    ICMPV6_EXC_HOPLIMIT, 0);
+-			__IP6_INC_STATS(net, idev, IPSTATS_MIB_INHDRERRORS);
++			IP6_INC_STATS(net, idev, IPSTATS_MIB_INHDRERRORS);
+ 
+ 			return false;
+ 		}
+@@ -286,7 +286,7 @@ static inline bool decrement_ttl(struct netns_ipvs *ipvs,
+ 	{
+ 		if (ip_hdr(skb)->ttl <= 1) {
+ 			/* Tell the sender its packet died... */
+-			__IP_INC_STATS(net, IPSTATS_MIB_INHDRERRORS);
++			IP_INC_STATS(net, IPSTATS_MIB_INHDRERRORS);
+ 			icmp_send(skb, ICMP_TIME_EXCEEDED, ICMP_EXC_TTL, 0);
+ 			return false;
+ 		}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 4d1a009dab450..fca8f9a360632 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4302,8 +4302,8 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr,
+ static int nft_set_desc_concat(struct nft_set_desc *desc,
+ 			       const struct nlattr *nla)
+ {
++	u32 num_regs = 0, key_num_regs = 0;
+ 	struct nlattr *attr;
+-	u32 num_regs = 0;
+ 	int rem, err, i;
+ 
+ 	nla_for_each_nested(attr, nla, rem) {
+@@ -4318,6 +4318,10 @@ static int nft_set_desc_concat(struct nft_set_desc *desc,
+ 	for (i = 0; i < desc->field_count; i++)
+ 		num_regs += DIV_ROUND_UP(desc->field_len[i], sizeof(u32));
+ 
++	key_num_regs = DIV_ROUND_UP(desc->klen, sizeof(u32));
++	if (key_num_regs != num_regs)
++		return -EINVAL;
++
+ 	if (num_regs > NFT_REG32_COUNT)
+ 		return -E2BIG;
+ 
+@@ -4462,8 +4466,12 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 		if (err < 0)
+ 			return err;
+ 
+-		if (desc.field_count > 1 && !(flags & NFT_SET_CONCAT))
++		if (desc.field_count > 1) {
++			if (!(flags & NFT_SET_CONCAT))
++				return -EINVAL;
++		} else if (flags & NFT_SET_CONCAT) {
+ 			return -EINVAL;
++		}
+ 	} else if (flags & NFT_SET_CONCAT) {
+ 		return -EINVAL;
+ 	}
+@@ -5007,7 +5015,7 @@ static int nf_tables_dump_setelem(const struct nft_ctx *ctx,
+ 	const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ 	struct nft_set_dump_args *args;
+ 
+-	if (nft_set_elem_expired(ext))
++	if (nft_set_elem_expired(ext) || nft_set_elem_is_dead(ext))
+ 		return 0;
+ 
+ 	args = container_of(iter, struct nft_set_dump_args, iter);
+@@ -8840,6 +8848,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 				nft_trans_destroy(trans);
+ 				break;
+ 			}
++			nft_trans_set(trans)->dead = 1;
+ 			list_del_rcu(&nft_trans_set(trans)->list);
+ 			break;
+ 		case NFT_MSG_DELSET:
+diff --git a/net/netlabel/netlabel_calipso.c b/net/netlabel/netlabel_calipso.c
+index 4e62f2ad35757..84ef4a29864bb 100644
+--- a/net/netlabel/netlabel_calipso.c
++++ b/net/netlabel/netlabel_calipso.c
+@@ -54,6 +54,28 @@ static const struct nla_policy calipso_genl_policy[NLBL_CALIPSO_A_MAX + 1] = {
+ 	[NLBL_CALIPSO_A_MTYPE] = { .type = NLA_U32 },
+ };
+ 
++static const struct netlbl_calipso_ops *calipso_ops;
++
++/**
++ * netlbl_calipso_ops_register - Register the CALIPSO operations
++ * @ops: ops to register
++ *
++ * Description:
++ * Register the CALIPSO packet engine operations.
++ *
++ */
++const struct netlbl_calipso_ops *
++netlbl_calipso_ops_register(const struct netlbl_calipso_ops *ops)
++{
++	return xchg(&calipso_ops, ops);
++}
++EXPORT_SYMBOL(netlbl_calipso_ops_register);
++
++static const struct netlbl_calipso_ops *netlbl_calipso_ops_get(void)
++{
++	return READ_ONCE(calipso_ops);
++}
++
+ /* NetLabel Command Handlers
+  */
+ /**
+@@ -96,16 +118,19 @@ static int netlbl_calipso_add_pass(struct genl_info *info,
+  *
+  */
+ static int netlbl_calipso_add(struct sk_buff *skb, struct genl_info *info)
+-
+ {
+ 	int ret_val = -EINVAL;
+ 	struct netlbl_audit audit_info;
++	const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
+ 
+ 	if (!info->attrs[NLBL_CALIPSO_A_DOI] ||
+ 	    !info->attrs[NLBL_CALIPSO_A_MTYPE])
+ 		return -EINVAL;
+ 
+-	netlbl_netlink_auditinfo(skb, &audit_info);
++	if (!ops)
++		return -EOPNOTSUPP;
++
++	netlbl_netlink_auditinfo(&audit_info);
+ 	switch (nla_get_u32(info->attrs[NLBL_CALIPSO_A_MTYPE])) {
+ 	case CALIPSO_MAP_PASS:
+ 		ret_val = netlbl_calipso_add_pass(info, &audit_info);
+@@ -287,7 +312,7 @@ static int netlbl_calipso_remove(struct sk_buff *skb, struct genl_info *info)
+ 	if (!info->attrs[NLBL_CALIPSO_A_DOI])
+ 		return -EINVAL;
+ 
+-	netlbl_netlink_auditinfo(skb, &audit_info);
++	netlbl_netlink_auditinfo(&audit_info);
+ 	cb_arg.doi = nla_get_u32(info->attrs[NLBL_CALIPSO_A_DOI]);
+ 	cb_arg.audit_info = &audit_info;
+ 	ret_val = netlbl_domhsh_walk(&skip_bkt, &skip_chain,
+@@ -362,27 +387,6 @@ int __init netlbl_calipso_genl_init(void)
+ 	return genl_register_family(&netlbl_calipso_gnl_family);
+ }
+ 
+-static const struct netlbl_calipso_ops *calipso_ops;
+-
+-/**
+- * netlbl_calipso_ops_register - Register the CALIPSO operations
+- *
+- * Description:
+- * Register the CALIPSO packet engine operations.
+- *
+- */
+-const struct netlbl_calipso_ops *
+-netlbl_calipso_ops_register(const struct netlbl_calipso_ops *ops)
+-{
+-	return xchg(&calipso_ops, ops);
+-}
+-EXPORT_SYMBOL(netlbl_calipso_ops_register);
+-
+-static const struct netlbl_calipso_ops *netlbl_calipso_ops_get(void)
+-{
+-	return READ_ONCE(calipso_ops);
+-}
+-
+ /**
+  * calipso_doi_add - Add a new DOI to the CALIPSO protocol engine
+  * @doi_def: the DOI structure
+diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c
+index f3f1df1b0f8e2..894e6b8f1a868 100644
+--- a/net/netlabel/netlabel_cipso_v4.c
++++ b/net/netlabel/netlabel_cipso_v4.c
+@@ -410,7 +410,7 @@ static int netlbl_cipsov4_add(struct sk_buff *skb, struct genl_info *info)
+ 	    !info->attrs[NLBL_CIPSOV4_A_MTYPE])
+ 		return -EINVAL;
+ 
+-	netlbl_netlink_auditinfo(skb, &audit_info);
++	netlbl_netlink_auditinfo(&audit_info);
+ 	switch (nla_get_u32(info->attrs[NLBL_CIPSOV4_A_MTYPE])) {
+ 	case CIPSO_V4_MAP_TRANS:
+ 		ret_val = netlbl_cipsov4_add_std(info, &audit_info);
+@@ -709,7 +709,7 @@ static int netlbl_cipsov4_remove(struct sk_buff *skb, struct genl_info *info)
+ 	if (!info->attrs[NLBL_CIPSOV4_A_DOI])
+ 		return -EINVAL;
+ 
+-	netlbl_netlink_auditinfo(skb, &audit_info);
++	netlbl_netlink_auditinfo(&audit_info);
+ 	cb_arg.doi = nla_get_u32(info->attrs[NLBL_CIPSOV4_A_DOI]);
+ 	cb_arg.audit_info = &audit_info;
+ 	ret_val = netlbl_domhsh_walk(&skip_bkt, &skip_chain,
+diff --git a/net/netlabel/netlabel_mgmt.c b/net/netlabel/netlabel_mgmt.c
+index 02a97bca1a1a2..750a7a842ab99 100644
+--- a/net/netlabel/netlabel_mgmt.c
++++ b/net/netlabel/netlabel_mgmt.c
+@@ -435,7 +435,7 @@ static int netlbl_mgmt_add(struct sk_buff *skb, struct genl_info *info)
+ 	     (info->attrs[NLBL_MGMT_A_IPV6MASK] != NULL)))
+ 		return -EINVAL;
+ 
+-	netlbl_netlink_auditinfo(skb, &audit_info);
++	netlbl_netlink_auditinfo(&audit_info);
+ 
+ 	return netlbl_mgmt_add_common(info, &audit_info);
+ }
+@@ -458,7 +458,7 @@ static int netlbl_mgmt_remove(struct sk_buff *skb, struct genl_info *info)
+ 	if (!info->attrs[NLBL_MGMT_A_DOMAIN])
+ 		return -EINVAL;
+ 
+-	netlbl_netlink_auditinfo(skb, &audit_info);
++	netlbl_netlink_auditinfo(&audit_info);
+ 
+ 	domain = nla_data(info->attrs[NLBL_MGMT_A_DOMAIN]);
+ 	return netlbl_domhsh_remove(domain, AF_UNSPEC, &audit_info);
+@@ -558,7 +558,7 @@ static int netlbl_mgmt_adddef(struct sk_buff *skb, struct genl_info *info)
+ 	     (info->attrs[NLBL_MGMT_A_IPV6MASK] != NULL)))
+ 		return -EINVAL;
+ 
+-	netlbl_netlink_auditinfo(skb, &audit_info);
++	netlbl_netlink_auditinfo(&audit_info);
+ 
+ 	return netlbl_mgmt_add_common(info, &audit_info);
+ }
+@@ -577,7 +577,7 @@ static int netlbl_mgmt_removedef(struct sk_buff *skb, struct genl_info *info)
+ {
+ 	struct netlbl_audit audit_info;
+ 
+-	netlbl_netlink_auditinfo(skb, &audit_info);
++	netlbl_netlink_auditinfo(&audit_info);
+ 
+ 	return netlbl_domhsh_remove_default(AF_UNSPEC, &audit_info);
+ }
+diff --git a/net/netlabel/netlabel_unlabeled.c b/net/netlabel/netlabel_unlabeled.c
+index ccb4916428116..3049fff0b7f86 100644
+--- a/net/netlabel/netlabel_unlabeled.c
++++ b/net/netlabel/netlabel_unlabeled.c
+@@ -814,7 +814,7 @@ static int netlbl_unlabel_accept(struct sk_buff *skb, struct genl_info *info)
+ 	if (info->attrs[NLBL_UNLABEL_A_ACPTFLG]) {
+ 		value = nla_get_u8(info->attrs[NLBL_UNLABEL_A_ACPTFLG]);
+ 		if (value == 1 || value == 0) {
+-			netlbl_netlink_auditinfo(skb, &audit_info);
++			netlbl_netlink_auditinfo(&audit_info);
+ 			netlbl_unlabel_acceptflg_set(value, &audit_info);
+ 			return 0;
+ 		}
+@@ -897,7 +897,7 @@ static int netlbl_unlabel_staticadd(struct sk_buff *skb,
+ 	       !info->attrs[NLBL_UNLABEL_A_IPV6MASK])))
+ 		return -EINVAL;
+ 
+-	netlbl_netlink_auditinfo(skb, &audit_info);
++	netlbl_netlink_auditinfo(&audit_info);
+ 
+ 	ret_val = netlbl_unlabel_addrinfo_get(info, &addr, &mask, &addr_len);
+ 	if (ret_val != 0)
+@@ -947,7 +947,7 @@ static int netlbl_unlabel_staticadddef(struct sk_buff *skb,
+ 	       !info->attrs[NLBL_UNLABEL_A_IPV6MASK])))
+ 		return -EINVAL;
+ 
+-	netlbl_netlink_auditinfo(skb, &audit_info);
++	netlbl_netlink_auditinfo(&audit_info);
+ 
+ 	ret_val = netlbl_unlabel_addrinfo_get(info, &addr, &mask, &addr_len);
+ 	if (ret_val != 0)
+@@ -994,7 +994,7 @@ static int netlbl_unlabel_staticremove(struct sk_buff *skb,
+ 	       !info->attrs[NLBL_UNLABEL_A_IPV6MASK])))
+ 		return -EINVAL;
+ 
+-	netlbl_netlink_auditinfo(skb, &audit_info);
++	netlbl_netlink_auditinfo(&audit_info);
+ 
+ 	ret_val = netlbl_unlabel_addrinfo_get(info, &addr, &mask, &addr_len);
+ 	if (ret_val != 0)
+@@ -1034,7 +1034,7 @@ static int netlbl_unlabel_staticremovedef(struct sk_buff *skb,
+ 	       !info->attrs[NLBL_UNLABEL_A_IPV6MASK])))
+ 		return -EINVAL;
+ 
+-	netlbl_netlink_auditinfo(skb, &audit_info);
++	netlbl_netlink_auditinfo(&audit_info);
+ 
+ 	ret_val = netlbl_unlabel_addrinfo_get(info, &addr, &mask, &addr_len);
+ 	if (ret_val != 0)
+diff --git a/net/netlabel/netlabel_user.h b/net/netlabel/netlabel_user.h
+index 3c67afce64f12..32d8f92c9a20a 100644
+--- a/net/netlabel/netlabel_user.h
++++ b/net/netlabel/netlabel_user.h
+@@ -28,11 +28,9 @@
+ 
+ /**
+  * netlbl_netlink_auditinfo - Fetch the audit information from a NETLINK msg
+- * @skb: the packet
+  * @audit_info: NetLabel audit information
+  */
+-static inline void netlbl_netlink_auditinfo(struct sk_buff *skb,
+-					    struct netlbl_audit *audit_info)
++static inline void netlbl_netlink_auditinfo(struct netlbl_audit *audit_info)
+ {
+ 	security_task_getsecid(current, &audit_info->secid);
+ 	audit_info->loginuid = audit_get_loginuid(current);
+diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c
+index 713e9940d88bb..c92dd960bfefa 100644
+--- a/net/qrtr/ns.c
++++ b/net/qrtr/ns.c
+@@ -577,7 +577,9 @@ static int ctrl_cmd_del_server(struct sockaddr_qrtr *from,
+ 	if (!node)
+ 		return -ENOENT;
+ 
+-	return server_del(node, port, true);
++	server_del(node, port, true);
++
++	return 0;
+ }
+ 
+ static int ctrl_cmd_new_lookup(struct sockaddr_qrtr *from,
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 080e8f2bf9855..4102689b3348a 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -340,6 +340,8 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
+ 	struct virtio_vsock_pkt *pkt;
+ 	size_t bytes, total = 0;
+ 	u32 free_space;
++	u32 fwd_cnt_delta;
++	bool low_rx_bytes;
+ 	int err = -EFAULT;
+ 
+ 	spin_lock_bh(&vvs->rx_lock);
+@@ -371,7 +373,10 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
+ 		}
+ 	}
+ 
+-	free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);
++	fwd_cnt_delta = vvs->fwd_cnt - vvs->last_fwd_cnt;
++	free_space = vvs->buf_alloc - fwd_cnt_delta;
++	low_rx_bytes = (vvs->rx_bytes <
++			sock_rcvlowat(sk_vsock(vsk), 0, INT_MAX));
+ 
+ 	spin_unlock_bh(&vvs->rx_lock);
+ 
+@@ -381,9 +386,11 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
+ 	 * too high causes extra messages. Too low causes transmitter
+ 	 * stalls. As stalls are in theory more expensive than extra
+ 	 * messages, we set the limit to a high value. TODO: experiment
+-	 * with different values.
++	 * with different values. Also send credit update message when
++	 * number of bytes in rx queue is not enough to wake up reader.
+ 	 */
+-	if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE) {
++	if (fwd_cnt_delta &&
++	    (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE || low_rx_bytes)) {
+ 		virtio_transport_send_credit_update(vsk,
+ 						    VIRTIO_VSOCK_TYPE_STREAM,
+ 						    NULL);
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 10896d69c442a..6c2a536173b5b 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -696,6 +696,10 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
+ 
+ 	tmpname = aa_splitn_fqname(name, strlen(name), &tmpns, &ns_len);
+ 	if (tmpns) {
++		if (!tmpname) {
++			info = "empty profile name";
++			goto fail;
++		}
+ 		*ns_name = kstrndup(tmpns, ns_len, GFP_KERNEL);
+ 		if (!*ns_name) {
+ 			info = "out of memory";
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index ee37ce2e2619b..f545321d96dc3 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -4620,6 +4620,13 @@ static int selinux_socket_bind(struct socket *sock, struct sockaddr *address, in
+ 				return -EINVAL;
+ 			addr4 = (struct sockaddr_in *)address;
+ 			if (family_sa == AF_UNSPEC) {
++				if (family == PF_INET6) {
++					/* Length check from inet6_bind_sk() */
++					if (addrlen < SIN6_LEN_RFC2133)
++						return -EINVAL;
++					/* Family check from __inet6_bind() */
++					goto err_af;
++				}
+ 				/* see __inet_bind(), we only want to allow
+ 				 * AF_UNSPEC if the address is INADDR_ANY
+ 				 */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 99ba89723cd31..412fbe098e0c7 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6797,6 +6797,7 @@ enum {
+ 	ALC290_FIXUP_SUBWOOFER_HSJACK,
+ 	ALC269_FIXUP_THINKPAD_ACPI,
+ 	ALC269_FIXUP_DMIC_THINKPAD_ACPI,
++	ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO,
+ 	ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
+ 	ALC255_FIXUP_ASUS_MIC_NO_PRESENCE,
+ 	ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+@@ -7094,6 +7095,14 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc269_fixup_pincfg_U7x7_headset_mic,
+ 	},
++	[ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x18, 0x03a19020 }, /* headset mic */
++			{ 0x1b, 0x90170150 }, /* speaker */
++			{ }
++		},
++	},
+ 	[ALC269_FIXUP_AMIC] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -9025,6 +9034,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f6, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
++	SND_PCI_QUIRK(0x103c, 0x87fe, "HP Laptop 15s-fq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8811, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+@@ -9320,6 +9330,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
++	SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
+ 	SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+ 	SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+ 	SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10),
+diff --git a/sound/pci/oxygen/oxygen_mixer.c b/sound/pci/oxygen/oxygen_mixer.c
+index 46705ec77b481..eb3aca16359c5 100644
+--- a/sound/pci/oxygen/oxygen_mixer.c
++++ b/sound/pci/oxygen/oxygen_mixer.c
+@@ -718,7 +718,7 @@ static int ac97_fp_rec_volume_put(struct snd_kcontrol *ctl,
+ 	oldreg = oxygen_read_ac97(chip, 1, AC97_REC_GAIN);
+ 	newreg = oldreg & ~0x0707;
+ 	newreg = newreg | (value->value.integer.value[0] & 7);
+-	newreg = newreg | ((value->value.integer.value[0] & 7) << 8);
++	newreg = newreg | ((value->value.integer.value[1] & 7) << 8);
+ 	change = newreg != oldreg;
+ 	if (change)
+ 		oxygen_write_ac97(chip, 1, AC97_REC_GAIN, newreg);
+diff --git a/sound/soc/atmel/sam9g20_wm8731.c b/sound/soc/atmel/sam9g20_wm8731.c
+index d243de5f23dc1..8a55d59a6c2aa 100644
+--- a/sound/soc/atmel/sam9g20_wm8731.c
++++ b/sound/soc/atmel/sam9g20_wm8731.c
+@@ -46,6 +46,35 @@
+  */
+ #undef ENABLE_MIC_INPUT
+ 
++static struct clk *mclk;
++
++static int at91sam9g20ek_set_bias_level(struct snd_soc_card *card,
++					struct snd_soc_dapm_context *dapm,
++					enum snd_soc_bias_level level)
++{
++	static int mclk_on;
++	int ret = 0;
++
++	switch (level) {
++	case SND_SOC_BIAS_ON:
++	case SND_SOC_BIAS_PREPARE:
++		if (!mclk_on)
++			ret = clk_enable(mclk);
++		if (ret == 0)
++			mclk_on = 1;
++		break;
++
++	case SND_SOC_BIAS_OFF:
++	case SND_SOC_BIAS_STANDBY:
++		if (mclk_on)
++			clk_disable(mclk);
++		mclk_on = 0;
++		break;
++	}
++
++	return ret;
++}
++
+ static const struct snd_soc_dapm_widget at91sam9g20ek_dapm_widgets[] = {
+ 	SND_SOC_DAPM_MIC("Int Mic", NULL),
+ 	SND_SOC_DAPM_SPK("Ext Spk", NULL),
+@@ -106,6 +135,7 @@ static struct snd_soc_card snd_soc_at91sam9g20ek = {
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &at91sam9g20ek_dai,
+ 	.num_links = 1,
++	.set_bias_level = at91sam9g20ek_set_bias_level,
+ 
+ 	.dapm_widgets = at91sam9g20ek_dapm_widgets,
+ 	.num_dapm_widgets = ARRAY_SIZE(at91sam9g20ek_dapm_widgets),
+@@ -118,6 +148,7 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
+ {
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct device_node *codec_np, *cpu_np;
++	struct clk *pllb;
+ 	struct snd_soc_card *card = &snd_soc_at91sam9g20ek;
+ 	int ret;
+ 
+@@ -131,6 +162,31 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * Codec MCLK is supplied by PCK0 - set it up.
++	 */
++	mclk = clk_get(NULL, "pck0");
++	if (IS_ERR(mclk)) {
++		dev_err(&pdev->dev, "Failed to get MCLK\n");
++		ret = PTR_ERR(mclk);
++		goto err;
++	}
++
++	pllb = clk_get(NULL, "pllb");
++	if (IS_ERR(pllb)) {
++		dev_err(&pdev->dev, "Failed to get PLLB\n");
++		ret = PTR_ERR(pllb);
++		goto err_mclk;
++	}
++	ret = clk_set_parent(mclk, pllb);
++	clk_put(pllb);
++	if (ret != 0) {
++		dev_err(&pdev->dev, "Failed to set MCLK parent\n");
++		goto err_mclk;
++	}
++
++	clk_set_rate(mclk, MCLK_RATE);
++
+ 	card->dev = &pdev->dev;
+ 
+ 	/* Parse device node info */
+@@ -174,6 +230,9 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
+ 
+ 	return ret;
+ 
++err_mclk:
++	clk_put(mclk);
++	mclk = NULL;
+ err:
+ 	atmel_ssc_put_audio(0);
+ 	return ret;
+@@ -183,6 +242,8 @@ static int at91sam9g20ek_audio_remove(struct platform_device *pdev)
+ {
+ 	struct snd_soc_card *card = platform_get_drvdata(pdev);
+ 
++	clk_disable(mclk);
++	mclk = NULL;
+ 	snd_soc_unregister_card(card);
+ 	atmel_ssc_put_audio(0);
+ 
+diff --git a/sound/soc/codecs/cs35l33.c b/sound/soc/codecs/cs35l33.c
+index 8894369e329af..87b299d24bd8e 100644
+--- a/sound/soc/codecs/cs35l33.c
++++ b/sound/soc/codecs/cs35l33.c
+@@ -22,13 +22,11 @@
+ #include <sound/soc-dapm.h>
+ #include <sound/initval.h>
+ #include <sound/tlv.h>
+-#include <linux/gpio.h>
+ #include <linux/gpio/consumer.h>
+ #include <sound/cs35l33.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/regulator/machine.h>
+-#include <linux/of_gpio.h>
+ #include <linux/of.h>
+ #include <linux/of_device.h>
+ #include <linux/of_irq.h>
+@@ -1168,7 +1166,7 @@ static int cs35l33_i2c_probe(struct i2c_client *i2c_client,
+ 
+ 	/* We could issue !RST or skip it based on AMP topology */
+ 	cs35l33->reset_gpio = devm_gpiod_get_optional(&i2c_client->dev,
+-			"reset-gpios", GPIOD_OUT_HIGH);
++			"reset", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(cs35l33->reset_gpio)) {
+ 		dev_err(&i2c_client->dev, "%s ERROR: Can't get reset GPIO\n",
+ 			__func__);
+diff --git a/sound/soc/codecs/cs35l34.c b/sound/soc/codecs/cs35l34.c
+index b792c006e530d..d9f975b52b211 100644
+--- a/sound/soc/codecs/cs35l34.c
++++ b/sound/soc/codecs/cs35l34.c
+@@ -20,14 +20,12 @@
+ #include <linux/regulator/machine.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/of_device.h>
+-#include <linux/of_gpio.h>
+ #include <linux/of_irq.h>
+ #include <sound/core.h>
+ #include <sound/pcm.h>
+ #include <sound/pcm_params.h>
+ #include <sound/soc.h>
+ #include <sound/soc-dapm.h>
+-#include <linux/gpio.h>
+ #include <linux/gpio/consumer.h>
+ #include <sound/initval.h>
+ #include <sound/tlv.h>
+@@ -1058,7 +1056,7 @@ static int cs35l34_i2c_probe(struct i2c_client *i2c_client,
+ 		dev_err(&i2c_client->dev, "Failed to request IRQ: %d\n", ret);
+ 
+ 	cs35l34->reset_gpio = devm_gpiod_get_optional(&i2c_client->dev,
+-				"reset-gpios", GPIOD_OUT_LOW);
++				"reset", GPIOD_OUT_LOW);
+ 	if (IS_ERR(cs35l34->reset_gpio))
+ 		return PTR_ERR(cs35l34->reset_gpio);
+ 
+diff --git a/sound/soc/codecs/cs43130.c b/sound/soc/codecs/cs43130.c
+index 8f70dee958786..02fb9317b6970 100644
+--- a/sound/soc/codecs/cs43130.c
++++ b/sound/soc/codecs/cs43130.c
+@@ -578,7 +578,7 @@ static int cs43130_set_sp_fmt(int dai_id, unsigned int bitwidth_sclk,
+ 		break;
+ 	case SND_SOC_DAIFMT_LEFT_J:
+ 		hi_size = bitwidth_sclk;
+-		frm_delay = 2;
++		frm_delay = 0;
+ 		frm_phase = 1;
+ 		break;
+ 	case SND_SOC_DAIFMT_DSP_A:
+@@ -1683,7 +1683,7 @@ static ssize_t cs43130_show_dc_r(struct device *dev,
+ 	return cs43130_show_dc(dev, buf, HP_RIGHT);
+ }
+ 
+-static u16 const cs43130_ac_freq[CS43130_AC_FREQ] = {
++static const u16 cs43130_ac_freq[CS43130_AC_FREQ] = {
+ 	24,
+ 	43,
+ 	93,
+@@ -2364,7 +2364,7 @@ static const struct regmap_config cs43130_regmap = {
+ 	.use_single_write	= true,
+ };
+ 
+-static u16 const cs43130_dc_threshold[CS43130_DC_THRESHOLD] = {
++static const u16 cs43130_dc_threshold[CS43130_DC_THRESHOLD] = {
+ 	50,
+ 	120,
+ };
+diff --git a/sound/soc/codecs/da7219-aad.c b/sound/soc/codecs/da7219-aad.c
+index b316d613a7090..b6030709b6b6d 100644
+--- a/sound/soc/codecs/da7219-aad.c
++++ b/sound/soc/codecs/da7219-aad.c
+@@ -654,7 +654,7 @@ static struct da7219_aad_pdata *da7219_aad_fw_to_pdata(struct device *dev)
+ 		aad_pdata->mic_det_thr =
+ 			da7219_aad_fw_mic_det_thr(dev, fw_val32);
+ 	else
+-		aad_pdata->mic_det_thr = DA7219_AAD_MIC_DET_THR_500_OHMS;
++		aad_pdata->mic_det_thr = DA7219_AAD_MIC_DET_THR_200_OHMS;
+ 
+ 	if (fwnode_property_read_u32(aad_np, "dlg,jack-ins-deb", &fw_val32) >= 0)
+ 		aad_pdata->jack_ins_deb =
+diff --git a/sound/soc/codecs/nau8822.c b/sound/soc/codecs/nau8822.c
+index d831959d8ff73..4ce15cd9ed020 100644
+--- a/sound/soc/codecs/nau8822.c
++++ b/sound/soc/codecs/nau8822.c
+@@ -184,6 +184,7 @@ static int nau8822_eq_get(struct snd_kcontrol *kcontrol,
+ 	struct soc_bytes_ext *params = (void *)kcontrol->private_value;
+ 	int i, reg;
+ 	u16 reg_val, *val;
++	__be16 tmp;
+ 
+ 	val = (u16 *)ucontrol->value.bytes.data;
+ 	reg = NAU8822_REG_EQ1;
+@@ -192,8 +193,8 @@ static int nau8822_eq_get(struct snd_kcontrol *kcontrol,
+ 		/* conversion of 16-bit integers between native CPU format
+ 		 * and big endian format
+ 		 */
+-		reg_val = cpu_to_be16(reg_val);
+-		memcpy(val + i, &reg_val, sizeof(reg_val));
++		tmp = cpu_to_be16(reg_val);
++		memcpy(val + i, &tmp, sizeof(tmp));
+ 	}
+ 
+ 	return 0;
+@@ -216,6 +217,7 @@ static int nau8822_eq_put(struct snd_kcontrol *kcontrol,
+ 	void *data;
+ 	u16 *val, value;
+ 	int i, reg, ret;
++	__be16 *tmp;
+ 
+ 	data = kmemdup(ucontrol->value.bytes.data,
+ 		params->max, GFP_KERNEL | GFP_DMA);
+@@ -228,7 +230,8 @@ static int nau8822_eq_put(struct snd_kcontrol *kcontrol,
+ 		/* conversion of 16-bit integers between native CPU format
+ 		 * and big endian format
+ 		 */
+-		value = be16_to_cpu(*(val + i));
++		tmp = (__be16 *)(val + i);
++		value = be16_to_cpup(tmp);
+ 		ret = snd_soc_component_write(component, reg + i, value);
+ 		if (ret) {
+ 			dev_err(component->dev,
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index 99d91bfb88122..7dc80183921ed 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -421,6 +421,7 @@ struct rt5645_priv {
+ 	struct regulator_bulk_data supplies[ARRAY_SIZE(rt5645_supply_names)];
+ 	struct rt5645_eq_param_s *eq_param;
+ 	struct timer_list btn_check_timer;
++	struct mutex jd_mutex;
+ 
+ 	int codec_type;
+ 	int sysclk;
+@@ -3179,6 +3180,8 @@ static int rt5645_jack_detect(struct snd_soc_component *component, int jack_inse
+ 				rt5645_enable_push_button_irq(component, true);
+ 			}
+ 		} else {
++			if (rt5645->en_button_func)
++				rt5645_enable_push_button_irq(component, false);
+ 			snd_soc_dapm_disable_pin(dapm, "Mic Det Power");
+ 			snd_soc_dapm_sync(dapm);
+ 			rt5645->jack_type = SND_JACK_HEADPHONE;
+@@ -3259,6 +3262,8 @@ static void rt5645_jack_detect_work(struct work_struct *work)
+ 	if (!rt5645->component)
+ 		return;
+ 
++	mutex_lock(&rt5645->jd_mutex);
++
+ 	switch (rt5645->pdata.jd_mode) {
+ 	case 0: /* Not using rt5645 JD */
+ 		if (rt5645->gpiod_hp_det) {
+@@ -3283,7 +3288,7 @@ static void rt5645_jack_detect_work(struct work_struct *work)
+ 
+ 	if (!val && (rt5645->jack_type == 0)) { /* jack in */
+ 		report = rt5645_jack_detect(rt5645->component, 1);
+-	} else if (!val && rt5645->jack_type != 0) {
++	} else if (!val && rt5645->jack_type == SND_JACK_HEADSET) {
+ 		/* for push button and jack out */
+ 		btn_type = 0;
+ 		if (snd_soc_component_read(rt5645->component, RT5645_INT_IRQ_ST) & 0x4) {
+@@ -3339,6 +3344,8 @@ static void rt5645_jack_detect_work(struct work_struct *work)
+ 		rt5645_jack_detect(rt5645->component, 0);
+ 	}
+ 
++	mutex_unlock(&rt5645->jd_mutex);
++
+ 	snd_soc_jack_report(rt5645->hp_jack, report, SND_JACK_HEADPHONE);
+ 	snd_soc_jack_report(rt5645->mic_jack, report, SND_JACK_MICROPHONE);
+ 	if (rt5645->en_button_func)
+@@ -4062,6 +4069,7 @@ static int rt5645_i2c_probe(struct i2c_client *i2c,
+ 	}
+ 	timer_setup(&rt5645->btn_check_timer, rt5645_btn_check_callback, 0);
+ 
++	mutex_init(&rt5645->jd_mutex);
+ 	INIT_DELAYED_WORK(&rt5645->jack_detect_work, rt5645_jack_detect_work);
+ 	INIT_DELAYED_WORK(&rt5645->rcclock_work, rt5645_rcclock_work);
+ 
+diff --git a/sound/soc/codecs/wm8974.c b/sound/soc/codecs/wm8974.c
+index c86231dfcf4f8..600e93d61a90f 100644
+--- a/sound/soc/codecs/wm8974.c
++++ b/sound/soc/codecs/wm8974.c
+@@ -186,7 +186,7 @@ SOC_DAPM_SINGLE("PCM Playback Switch", WM8974_MONOMIX, 0, 1, 0),
+ 
+ /* Boost mixer */
+ static const struct snd_kcontrol_new wm8974_boost_mixer[] = {
+-SOC_DAPM_SINGLE("Aux Switch", WM8974_INPPGA, 6, 1, 1),
++SOC_DAPM_SINGLE("PGA Switch", WM8974_INPPGA, 6, 1, 1),
+ };
+ 
+ /* Input PGA */
+@@ -246,8 +246,8 @@ static const struct snd_soc_dapm_route wm8974_dapm_routes[] = {
+ 
+ 	/* Boost Mixer */
+ 	{"ADC", NULL, "Boost Mixer"},
+-	{"Boost Mixer", "Aux Switch", "Aux Input"},
+-	{"Boost Mixer", NULL, "Input PGA"},
++	{"Boost Mixer", NULL, "Aux Input"},
++	{"Boost Mixer", "PGA Switch", "Input PGA"},
+ 	{"Boost Mixer", NULL, "MICP"},
+ 
+ 	/* Input PGA */
+diff --git a/sound/soc/intel/skylake/skl-pcm.c b/sound/soc/intel/skylake/skl-pcm.c
+index b531d9dfc2d64..935c871abdaa6 100644
+--- a/sound/soc/intel/skylake/skl-pcm.c
++++ b/sound/soc/intel/skylake/skl-pcm.c
+@@ -251,8 +251,10 @@ static int skl_pcm_open(struct snd_pcm_substream *substream,
+ 	snd_pcm_set_sync(substream);
+ 
+ 	mconfig = skl_tplg_fe_get_cpr_module(dai, substream->stream);
+-	if (!mconfig)
++	if (!mconfig) {
++		kfree(dma_params);
+ 		return -EINVAL;
++	}
+ 
+ 	skl_tplg_d0i3_get(skl, mconfig->d0i3_caps);
+ 
+@@ -1475,6 +1477,7 @@ int skl_platform_register(struct device *dev)
+ 		dais = krealloc(skl->dais, sizeof(skl_fe_dai) +
+ 				sizeof(skl_platform_dai), GFP_KERNEL);
+ 		if (!dais) {
++			kfree(skl->dais);
+ 			ret = -ENOMEM;
+ 			goto err;
+ 		}
+@@ -1487,8 +1490,10 @@ int skl_platform_register(struct device *dev)
+ 
+ 	ret = devm_snd_soc_register_component(dev, &skl_component,
+ 					 skl->dais, num_dais);
+-	if (ret)
++	if (ret) {
++		kfree(skl->dais);
+ 		dev_err(dev, "soc component registration failed %d\n", ret);
++	}
+ err:
+ 	return ret;
+ }
+diff --git a/sound/soc/intel/skylake/skl-sst-ipc.c b/sound/soc/intel/skylake/skl-sst-ipc.c
+index 7a425271b08b1..fd9624ad5f72b 100644
+--- a/sound/soc/intel/skylake/skl-sst-ipc.c
++++ b/sound/soc/intel/skylake/skl-sst-ipc.c
+@@ -1003,8 +1003,10 @@ int skl_ipc_get_large_config(struct sst_generic_ipc *ipc,
+ 
+ 	reply.size = (reply.header >> 32) & IPC_DATA_OFFSET_SZ_MASK;
+ 	buf = krealloc(reply.data, reply.size, GFP_KERNEL);
+-	if (!buf)
++	if (!buf) {
++		kfree(reply.data);
+ 		return -ENOMEM;
++	}
+ 	*payload = buf;
+ 	*bytes = reply.size;
+ 
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index fd1a4d843e6f0..63ea5bc6f1c4f 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -3424,6 +3424,8 @@ union bpf_attr {
+  * long bpf_get_task_stack(struct task_struct *task, void *buf, u32 size, u64 flags)
+  *	Description
+  *		Return a user or a kernel stack in bpf program provided buffer.
++ *		Note: the user stack will only be populated if the *task* is
++ *		the current task; all other tasks will return -EOPNOTSUPP.
+  *		To achieve this, the helper needs *task*, which is a valid
+  *		pointer to **struct task_struct**. To store the stacktrace, the
+  *		bpf program provides *buf* with a nonnegative *size*.
+@@ -3435,6 +3437,7 @@ union bpf_attr {
+  *
+  *		**BPF_F_USER_STACK**
+  *			Collect a user space stack instead of a kernel stack.
++ *			The *task* must be the current task.
+  *		**BPF_F_USER_BUILD_ID**
+  *			Collect buildid+offset instead of ips for user stack,
+  *			only valid if **BPF_F_USER_STACK** is also specified.
+diff --git a/tools/lib/api/io.h b/tools/lib/api/io.h
+index 777c20f6b6047..458acd294237d 100644
+--- a/tools/lib/api/io.h
++++ b/tools/lib/api/io.h
+@@ -9,6 +9,7 @@
+ 
+ #include <stdlib.h>
+ #include <unistd.h>
++#include <linux/types.h>
+ 
+ struct io {
+ 	/* File descriptor being read/ */
+diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
+index 4eb02762104ba..c50d2c7a264fe 100644
+--- a/tools/perf/util/bpf-event.c
++++ b/tools/perf/util/bpf-event.c
+@@ -533,9 +533,9 @@ int evlist__add_bpf_sb_event(struct evlist *evlist, struct perf_env *env)
+ 	return perf_evlist__add_sb_event(evlist, &attr, bpf_event__sb_cb, env);
+ }
+ 
+-void bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
+-				    struct perf_env *env,
+-				    FILE *fp)
++void __bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
++				      struct perf_env *env,
++				      FILE *fp)
+ {
+ 	__u32 *prog_lens = (__u32 *)(uintptr_t)(info->jited_func_lens);
+ 	__u64 *prog_addrs = (__u64 *)(uintptr_t)(info->jited_ksyms);
+@@ -551,7 +551,7 @@ void bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
+ 	if (info->btf_id) {
+ 		struct btf_node *node;
+ 
+-		node = perf_env__find_btf(env, info->btf_id);
++		node = __perf_env__find_btf(env, info->btf_id);
+ 		if (node)
+ 			btf = btf__new((__u8 *)(node->data),
+ 				       node->data_size);
+diff --git a/tools/perf/util/bpf-event.h b/tools/perf/util/bpf-event.h
+index 68f315c3df5be..50f7412464dfc 100644
+--- a/tools/perf/util/bpf-event.h
++++ b/tools/perf/util/bpf-event.h
+@@ -34,9 +34,9 @@ struct btf_node {
+ int machine__process_bpf(struct machine *machine, union perf_event *event,
+ 			 struct perf_sample *sample);
+ int evlist__add_bpf_sb_event(struct evlist *evlist, struct perf_env *env);
+-void bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
+-				    struct perf_env *env,
+-				    FILE *fp);
++void __bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
++				      struct perf_env *env,
++				      FILE *fp);
+ #else
+ static inline int machine__process_bpf(struct machine *machine __maybe_unused,
+ 				       union perf_event *event __maybe_unused,
+@@ -51,9 +51,9 @@ static inline int evlist__add_bpf_sb_event(struct evlist *evlist __maybe_unused,
+ 	return 0;
+ }
+ 
+-static inline void bpf_event__print_bpf_prog_info(struct bpf_prog_info *info __maybe_unused,
+-						  struct perf_env *env __maybe_unused,
+-						  FILE *fp __maybe_unused)
++static inline void __bpf_event__print_bpf_prog_info(struct bpf_prog_info *info __maybe_unused,
++						    struct perf_env *env __maybe_unused,
++						    FILE *fp __maybe_unused)
+ {
+ 
+ }
+diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
+index d81ed1bc14bdc..ed2a42abe1270 100644
+--- a/tools/perf/util/env.c
++++ b/tools/perf/util/env.c
+@@ -17,13 +17,19 @@ struct perf_env perf_env;
+ 
+ void perf_env__insert_bpf_prog_info(struct perf_env *env,
+ 				    struct bpf_prog_info_node *info_node)
++{
++	down_write(&env->bpf_progs.lock);
++	__perf_env__insert_bpf_prog_info(env, info_node);
++	up_write(&env->bpf_progs.lock);
++}
++
++void __perf_env__insert_bpf_prog_info(struct perf_env *env, struct bpf_prog_info_node *info_node)
+ {
+ 	__u32 prog_id = info_node->info_linear->info.id;
+ 	struct bpf_prog_info_node *node;
+ 	struct rb_node *parent = NULL;
+ 	struct rb_node **p;
+ 
+-	down_write(&env->bpf_progs.lock);
+ 	p = &env->bpf_progs.infos.rb_node;
+ 
+ 	while (*p != NULL) {
+@@ -35,15 +41,13 @@ void perf_env__insert_bpf_prog_info(struct perf_env *env,
+ 			p = &(*p)->rb_right;
+ 		} else {
+ 			pr_debug("duplicated bpf prog info %u\n", prog_id);
+-			goto out;
++			return;
+ 		}
+ 	}
+ 
+ 	rb_link_node(&info_node->rb_node, parent, p);
+ 	rb_insert_color(&info_node->rb_node, &env->bpf_progs.infos);
+ 	env->bpf_progs.infos_cnt++;
+-out:
+-	up_write(&env->bpf_progs.lock);
+ }
+ 
+ struct bpf_prog_info_node *perf_env__find_bpf_prog_info(struct perf_env *env,
+@@ -72,14 +76,22 @@ out:
+ }
+ 
+ bool perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node)
++{
++	bool ret;
++
++	down_write(&env->bpf_progs.lock);
++	ret = __perf_env__insert_btf(env, btf_node);
++	up_write(&env->bpf_progs.lock);
++	return ret;
++}
++
++bool __perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node)
+ {
+ 	struct rb_node *parent = NULL;
+ 	__u32 btf_id = btf_node->id;
+ 	struct btf_node *node;
+ 	struct rb_node **p;
+-	bool ret = true;
+ 
+-	down_write(&env->bpf_progs.lock);
+ 	p = &env->bpf_progs.btfs.rb_node;
+ 
+ 	while (*p != NULL) {
+@@ -91,25 +103,31 @@ bool perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node)
+ 			p = &(*p)->rb_right;
+ 		} else {
+ 			pr_debug("duplicated btf %u\n", btf_id);
+-			ret = false;
+-			goto out;
++			return false;
+ 		}
+ 	}
+ 
+ 	rb_link_node(&btf_node->rb_node, parent, p);
+ 	rb_insert_color(&btf_node->rb_node, &env->bpf_progs.btfs);
+ 	env->bpf_progs.btfs_cnt++;
+-out:
+-	up_write(&env->bpf_progs.lock);
+-	return ret;
++	return true;
+ }
+ 
+ struct btf_node *perf_env__find_btf(struct perf_env *env, __u32 btf_id)
++{
++	struct btf_node *res;
++
++	down_read(&env->bpf_progs.lock);
++	res = __perf_env__find_btf(env, btf_id);
++	up_read(&env->bpf_progs.lock);
++	return res;
++}
++
++struct btf_node *__perf_env__find_btf(struct perf_env *env, __u32 btf_id)
+ {
+ 	struct btf_node *node = NULL;
+ 	struct rb_node *n;
+ 
+-	down_read(&env->bpf_progs.lock);
+ 	n = env->bpf_progs.btfs.rb_node;
+ 
+ 	while (n) {
+@@ -119,13 +137,9 @@ struct btf_node *perf_env__find_btf(struct perf_env *env, __u32 btf_id)
+ 		else if (btf_id > node->id)
+ 			n = n->rb_right;
+ 		else
+-			goto out;
++			return node;
+ 	}
+-	node = NULL;
+-
+-out:
+-	up_read(&env->bpf_progs.lock);
+-	return node;
++	return NULL;
+ }
+ 
+ /* purge data in bpf_progs.infos tree */
+diff --git a/tools/perf/util/env.h b/tools/perf/util/env.h
+index 01378a955dd5e..ef0fd544cd672 100644
+--- a/tools/perf/util/env.h
++++ b/tools/perf/util/env.h
+@@ -139,12 +139,16 @@ const char *perf_env__raw_arch(struct perf_env *env);
+ int perf_env__nr_cpus_avail(struct perf_env *env);
+ 
+ void perf_env__init(struct perf_env *env);
++void __perf_env__insert_bpf_prog_info(struct perf_env *env,
++				      struct bpf_prog_info_node *info_node);
+ void perf_env__insert_bpf_prog_info(struct perf_env *env,
+ 				    struct bpf_prog_info_node *info_node);
+ struct bpf_prog_info_node *perf_env__find_bpf_prog_info(struct perf_env *env,
+ 							__u32 prog_id);
+ bool perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node);
++bool __perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node);
+ struct btf_node *perf_env__find_btf(struct perf_env *env, __u32 btf_id);
++struct btf_node *__perf_env__find_btf(struct perf_env *env, __u32 btf_id);
+ 
+ int perf_env__numa_node(struct perf_env *env, int cpu);
+ #endif /* __PERF_ENV_H */
+diff --git a/tools/perf/util/genelf.c b/tools/perf/util/genelf.c
+index 02cd9f75e3d2f..89a85601485d9 100644
+--- a/tools/perf/util/genelf.c
++++ b/tools/perf/util/genelf.c
+@@ -291,9 +291,9 @@ jit_write_elf(int fd, uint64_t load_addr, const char *sym,
+ 	 */
+ 	phdr = elf_newphdr(e, 1);
+ 	phdr[0].p_type = PT_LOAD;
+-	phdr[0].p_offset = 0;
+-	phdr[0].p_vaddr = 0;
+-	phdr[0].p_paddr = 0;
++	phdr[0].p_offset = GEN_ELF_TEXT_OFFSET;
++	phdr[0].p_vaddr = GEN_ELF_TEXT_OFFSET;
++	phdr[0].p_paddr = GEN_ELF_TEXT_OFFSET;
+ 	phdr[0].p_filesz = csize;
+ 	phdr[0].p_memsz = csize;
+ 	phdr[0].p_flags = PF_X | PF_R;
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index dd06770b43f1c..d2812d98968df 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -1655,8 +1655,8 @@ static void print_bpf_prog_info(struct feat_fd *ff, FILE *fp)
+ 		node = rb_entry(next, struct bpf_prog_info_node, rb_node);
+ 		next = rb_next(&node->rb_node);
+ 
+-		bpf_event__print_bpf_prog_info(&node->info_linear->info,
+-					       env, fp);
++		__bpf_event__print_bpf_prog_info(&node->info_linear->info,
++						 env, fp);
+ 	}
+ 
+ 	up_read(&env->bpf_progs.lock);
+@@ -2927,7 +2927,7 @@ static int process_bpf_prog_info(struct feat_fd *ff, void *data __maybe_unused)
+ 		/* after reading from file, translate offset to address */
+ 		bpf_program__bpil_offs_to_addr(info_linear);
+ 		info_node->info_linear = info_linear;
+-		perf_env__insert_bpf_prog_info(env, info_node);
++		__perf_env__insert_bpf_prog_info(env, info_node);
+ 	}
+ 
+ 	up_write(&env->bpf_progs.lock);
+@@ -2980,7 +2980,7 @@ static int process_bpf_btf(struct feat_fd *ff, void *data __maybe_unused)
+ 		if (__do_read(ff, node->data, data_size))
+ 			goto out;
+ 
+-		perf_env__insert_btf(env, node);
++		__perf_env__insert_btf(env, node);
+ 		node = NULL;
+ 	}
+ 
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh b/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh
+index 5c7700212f753..56761de1ca3b7 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh
+@@ -121,6 +121,9 @@ h2_destroy()
+ 
+ switch_create()
+ {
++	local lanes_swp4
++	local pg1_size
++
+ 	# pools
+ 	# -----
+ 
+@@ -171,7 +174,7 @@ switch_create()
+ 	# assignment.
+ 	tc qdisc replace dev $swp1 root handle 1: \
+ 	   ets bands 8 strict 8 priomap 7 6
+-	__mlnx_qos -i $swp1 --prio2buffer=0,1,0,0,0,0,0,0 >/dev/null
++	dcb buffer set dev $swp1 prio-buffer all:0 1:1
+ 
+ 	# $swp2
+ 	# -----
+@@ -209,8 +212,8 @@ switch_create()
+ 	# the lossless prio into a buffer of its own. Don't bother with buffer
+ 	# sizes though, there is not going to be any pressure in the "backward"
+ 	# direction.
+-	__mlnx_qos -i $swp3 --prio2buffer=0,1,0,0,0,0,0,0 >/dev/null
+-	__mlnx_qos -i $swp3 --pfc=0,1,0,0,0,0,0,0 >/dev/null
++	dcb buffer set dev $swp3 prio-buffer all:0 1:1
++	dcb pfc set dev $swp3 prio-pfc all:off 1:on
+ 
+ 	# $swp4
+ 	# -----
+@@ -226,11 +229,24 @@ switch_create()
+ 	# Configure qdisc so that we can hand-tune headroom.
+ 	tc qdisc replace dev $swp4 root handle 1: \
+ 	   ets bands 8 strict 8 priomap 7 6
+-	__mlnx_qos -i $swp4 --prio2buffer=0,1,0,0,0,0,0,0 >/dev/null
+-	__mlnx_qos -i $swp4 --pfc=0,1,0,0,0,0,0,0 >/dev/null
++	dcb buffer set dev $swp4 prio-buffer all:0 1:1
++	dcb pfc set dev $swp4 prio-pfc all:off 1:on
+ 	# PG0 will get autoconfigured to Xoff, give PG1 arbitrarily 100K, which
+ 	# is (-2*MTU) about 80K of delay provision.
+-	__mlnx_qos -i $swp4 --buffer_size=0,$_100KB,0,0,0,0,0,0 >/dev/null
++	pg1_size=$_100KB
++
++	setup_wait_dev_with_timeout $swp4
++
++	lanes_swp4=$(ethtool $swp4 | grep 'Lanes:')
++	lanes_swp4=${lanes_swp4#*"Lanes: "}
++
++	# 8-lane ports use two buffers among which the configured buffer
++	# is split, so double the size to get twice (20K + 80K).
++	if [[ $lanes_swp4 -eq 8 ]]; then
++		pg1_size=$((pg1_size * 2))
++	fi
++
++	dcb buffer set dev $swp4 buffer-size all:0 1:$pg1_size
+ 
+ 	# bridges
+ 	# -------
+@@ -273,9 +289,9 @@ switch_destroy()
+ 	# $swp4
+ 	# -----
+ 
+-	__mlnx_qos -i $swp4 --buffer_size=0,0,0,0,0,0,0,0 >/dev/null
+-	__mlnx_qos -i $swp4 --pfc=0,0,0,0,0,0,0,0 >/dev/null
+-	__mlnx_qos -i $swp4 --prio2buffer=0,0,0,0,0,0,0,0 >/dev/null
++	dcb buffer set dev $swp4 buffer-size all:0
++	dcb pfc set dev $swp4 prio-pfc all:off
++	dcb buffer set dev $swp4 prio-buffer all:0
+ 	tc qdisc del dev $swp4 root
+ 
+ 	devlink_tc_bind_pool_th_restore $swp4 1 ingress
+@@ -288,8 +304,8 @@ switch_destroy()
+ 	# $swp3
+ 	# -----
+ 
+-	__mlnx_qos -i $swp3 --pfc=0,0,0,0,0,0,0,0 >/dev/null
+-	__mlnx_qos -i $swp3 --prio2buffer=0,0,0,0,0,0,0,0 >/dev/null
++	dcb pfc set dev $swp3 prio-pfc all:off
++	dcb buffer set dev $swp3 prio-buffer all:0
+ 	tc qdisc del dev $swp3 root
+ 
+ 	devlink_tc_bind_pool_th_restore $swp3 1 egress
+@@ -315,7 +331,7 @@ switch_destroy()
+ 	# $swp1
+ 	# -----
+ 
+-	__mlnx_qos -i $swp1 --prio2buffer=0,0,0,0,0,0,0,0 >/dev/null
++	dcb buffer set dev $swp1 prio-buffer all:0
+ 	tc qdisc del dev $swp1 root
+ 
+ 	devlink_tc_bind_pool_th_restore $swp1 1 ingress
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh
+index fb850e0ec8375..616d3581419ca 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh
+@@ -10,7 +10,8 @@ lib_dir=$(dirname $0)/../../../../net/forwarding
+ ALL_TESTS="single_mask_test identical_filters_test two_masks_test \
+ 	multiple_masks_test ctcam_edge_cases_test delta_simple_test \
+ 	delta_two_masks_one_key_test delta_simple_rehash_test \
+-	bloom_simple_test bloom_complex_test bloom_delta_test"
++	bloom_simple_test bloom_complex_test bloom_delta_test \
++	max_erp_entries_test max_group_size_test"
+ NUM_NETIFS=2
+ source $lib_dir/lib.sh
+ source $lib_dir/tc_common.sh
+@@ -983,6 +984,109 @@ bloom_delta_test()
+ 	log_test "bloom delta test ($tcflags)"
+ }
+ 
++max_erp_entries_test()
++{
++	# The number of eRP entries is limited. Once the maximum number of eRPs
++	# has been reached, filters cannot be added. This test verifies that
++	# when this limit is reached, inserstion fails without crashing.
++
++	RET=0
++
++	local num_masks=32
++	local num_regions=15
++	local chain_failed
++	local mask_failed
++	local ret
++
++	if [[ "$tcflags" != "skip_sw" ]]; then
++		return 0;
++	fi
++
++	for ((i=1; i < $num_regions; i++)); do
++		for ((j=$num_masks; j >= 0; j--)); do
++			tc filter add dev $h2 ingress chain $i protocol ip \
++				pref $i	handle $j flower $tcflags \
++				dst_ip 192.1.0.0/$j &> /dev/null
++			ret=$?
++
++			if [ $ret -ne 0 ]; then
++				chain_failed=$i
++				mask_failed=$j
++				break 2
++			fi
++		done
++	done
++
++	# We expect to exceed the maximum number of eRP entries, so that
++	# insertion eventually fails. Otherwise, the test should be adjusted to
++	# add more filters.
++	check_fail $ret "expected to exceed number of eRP entries"
++
++	for ((; i >= 1; i--)); do
++		for ((j=0; j <= $num_masks; j++)); do
++			tc filter del dev $h2 ingress chain $i protocol ip \
++				pref $i handle $j flower &> /dev/null
++		done
++	done
++
++	log_test "max eRP entries test ($tcflags). " \
++		"max chain $chain_failed, mask $mask_failed"
++}
++
++max_group_size_test()
++{
++	# The number of ACLs in an ACL group is limited. Once the maximum
++	# number of ACLs has been reached, filters cannot be added. This test
++	# verifies that when this limit is reached, insertion fails without
++	# crashing.
++
++	RET=0
++
++	local num_acls=32
++	local max_size
++	local ret
++
++	if [[ "$tcflags" != "skip_sw" ]]; then
++		return 0;
++	fi
++
++	for ((i=1; i < $num_acls; i++)); do
++		if [[ $(( i % 2 )) == 1 ]]; then
++			tc filter add dev $h2 ingress pref $i proto ipv4 \
++				flower $tcflags dst_ip 198.51.100.1/32 \
++				ip_proto tcp tcp_flags 0x01/0x01 \
++				action drop &> /dev/null
++		else
++			tc filter add dev $h2 ingress pref $i proto ipv6 \
++				flower $tcflags dst_ip 2001:db8:1::1/128 \
++				action drop &> /dev/null
++		fi
++
++		ret=$?
++		[[ $ret -ne 0 ]] && max_size=$((i - 1)) && break
++	done
++
++	# We expect to exceed the maximum number of ACLs in a group, so that
++	# insertion eventually fails. Otherwise, the test should be adjusted to
++	# add more filters.
++	check_fail $ret "expected to exceed number of ACLs in a group"
++
++	for ((; i >= 1; i--)); do
++		if [[ $(( i % 2 )) == 1 ]]; then
++			tc filter del dev $h2 ingress pref $i proto ipv4 \
++				flower $tcflags dst_ip 198.51.100.1/32 \
++				ip_proto tcp tcp_flags 0x01/0x01 \
++				action drop &> /dev/null
++		else
++			tc filter del dev $h2 ingress pref $i proto ipv6 \
++				flower $tcflags dst_ip 2001:db8:1::1/128 \
++				action drop &> /dev/null
++		fi
++	done
++
++	log_test "max ACL group size test ($tcflags). max size $max_size"
++}
++
+ setup_prepare()
+ {
+ 	h1=${NETIFS[p1]}
+diff --git a/tools/testing/selftests/net/fib_nexthop_multiprefix.sh b/tools/testing/selftests/net/fib_nexthop_multiprefix.sh
+index 51df5e305855a..b52d59547fc59 100755
+--- a/tools/testing/selftests/net/fib_nexthop_multiprefix.sh
++++ b/tools/testing/selftests/net/fib_nexthop_multiprefix.sh
+@@ -209,12 +209,12 @@ validate_v6_exception()
+ 		echo "Route get"
+ 		ip -netns h0 -6 ro get ${dst}
+ 		echo "Searching for:"
+-		echo "    ${dst} from :: via ${r1} dev eth0 src ${h0} .* mtu ${mtu}"
++		echo "    ${dst}.* via ${r1} dev eth0 src ${h0} .* mtu ${mtu}"
+ 		echo
+ 	fi
+ 
+ 	ip -netns h0 -6 ro get ${dst} | \
+-	grep -q "${dst} from :: via ${r1} dev eth0 src ${h0} .* mtu ${mtu}"
++	grep -q "${dst}.* via ${r1} dev eth0 src ${h0} .* mtu ${mtu}"
+ 	rc=$?
+ 
+ 	log_test $rc 0 "IPv6: host 0 to host ${i}, mtu ${mtu}"
+diff --git a/tools/testing/selftests/powerpc/math/fpu_preempt.c b/tools/testing/selftests/powerpc/math/fpu_preempt.c
+index 5235bdc8c0b11..3e5b5663d2449 100644
+--- a/tools/testing/selftests/powerpc/math/fpu_preempt.c
++++ b/tools/testing/selftests/powerpc/math/fpu_preempt.c
+@@ -37,19 +37,20 @@ __thread double darray[] = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0,
+ int threads_starting;
+ int running;
+ 
+-extern void preempt_fpu(double *darray, int *threads_starting, int *running);
++extern int preempt_fpu(double *darray, int *threads_starting, int *running);
+ 
+ void *preempt_fpu_c(void *p)
+ {
++	long rc;
+ 	int i;
++
+ 	srand(pthread_self());
+ 	for (i = 0; i < 21; i++)
+ 		darray[i] = rand();
+ 
+-	/* Test failed if it ever returns */
+-	preempt_fpu(darray, &threads_starting, &running);
++	rc = preempt_fpu(darray, &threads_starting, &running);
+ 
+-	return p;
++	return (void *)rc;
+ }
+ 
+ int test_preempt_fpu(void)
+diff --git a/tools/testing/selftests/powerpc/math/vmx_preempt.c b/tools/testing/selftests/powerpc/math/vmx_preempt.c
+index 6761d6ce30eca..6f7cf400c6875 100644
+--- a/tools/testing/selftests/powerpc/math/vmx_preempt.c
++++ b/tools/testing/selftests/powerpc/math/vmx_preempt.c
+@@ -37,19 +37,21 @@ __thread vector int varray[] = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10,11,12},
+ int threads_starting;
+ int running;
+ 
+-extern void preempt_vmx(vector int *varray, int *threads_starting, int *running);
++extern int preempt_vmx(vector int *varray, int *threads_starting, int *running);
+ 
+ void *preempt_vmx_c(void *p)
+ {
+ 	int i, j;
++	long rc;
++
+ 	srand(pthread_self());
+ 	for (i = 0; i < 12; i++)
+ 		for (j = 0; j < 4; j++)
+ 			varray[i][j] = rand();
+ 
+-	/* Test fails if it ever returns */
+-	preempt_vmx(varray, &threads_starting, &running);
+-	return p;
++	rc = preempt_vmx(varray, &threads_starting, &running);
++
++	return (void *)rc;
+ }
+ 
+ int test_preempt_vmx(void)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-02-23 12:39 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-02-23 12:39 UTC (permalink / raw
  To: gentoo-commits

commit:     7d56c0468d15e24e81f39a9552f671b363f8181d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 23 12:39:42 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb 23 12:39:42 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7d56c046

Linux patch 5.10.210

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1209_linux-5.10.210.patch | 17743 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 17747 insertions(+)

diff --git a/0000_README b/0000_README
index 8e064c6a..2aa6b81e 100644
--- a/0000_README
+++ b/0000_README
@@ -879,6 +879,10 @@ Patch:  1208_linux-5.10.209.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.209
 
+Patch:  1209_linux-5.10.210.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.210
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1209_linux-5.10.210.patch b/1209_linux-5.10.210.patch
new file mode 100644
index 00000000..3b6a2814
--- /dev/null
+++ b/1209_linux-5.10.210.patch
@@ -0,0 +1,17743 @@
+diff --git a/Documentation/ABI/testing/sysfs-class-net-queues b/Documentation/ABI/testing/sysfs-class-net-queues
+index 978b76358661a..40d5aab8452d5 100644
+--- a/Documentation/ABI/testing/sysfs-class-net-queues
++++ b/Documentation/ABI/testing/sysfs-class-net-queues
+@@ -1,4 +1,4 @@
+-What:		/sys/class/<iface>/queues/rx-<queue>/rps_cpus
++What:		/sys/class/net/<iface>/queues/rx-<queue>/rps_cpus
+ Date:		March 2010
+ KernelVersion:	2.6.35
+ Contact:	netdev@vger.kernel.org
+@@ -8,7 +8,7 @@ Description:
+ 		network device queue. Possible values depend on the number
+ 		of available CPU(s) in the system.
+ 
+-What:		/sys/class/<iface>/queues/rx-<queue>/rps_flow_cnt
++What:		/sys/class/net/<iface>/queues/rx-<queue>/rps_flow_cnt
+ Date:		April 2010
+ KernelVersion:	2.6.35
+ Contact:	netdev@vger.kernel.org
+@@ -16,7 +16,7 @@ Description:
+ 		Number of Receive Packet Steering flows being currently
+ 		processed by this particular network device receive queue.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/tx_timeout
++What:		/sys/class/net/<iface>/queues/tx-<queue>/tx_timeout
+ Date:		November 2011
+ KernelVersion:	3.3
+ Contact:	netdev@vger.kernel.org
+@@ -24,7 +24,7 @@ Description:
+ 		Indicates the number of transmit timeout events seen by this
+ 		network interface transmit queue.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/tx_maxrate
++What:		/sys/class/net/<iface>/queues/tx-<queue>/tx_maxrate
+ Date:		March 2015
+ KernelVersion:	4.1
+ Contact:	netdev@vger.kernel.org
+@@ -32,7 +32,7 @@ Description:
+ 		A Mbps max-rate set for the queue, a value of zero means disabled,
+ 		default is disabled.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/xps_cpus
++What:		/sys/class/net/<iface>/queues/tx-<queue>/xps_cpus
+ Date:		November 2010
+ KernelVersion:	2.6.38
+ Contact:	netdev@vger.kernel.org
+@@ -42,7 +42,7 @@ Description:
+ 		network device transmit queue. Possible vaules depend on the
+ 		number of available CPU(s) in the system.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/xps_rxqs
++What:		/sys/class/net/<iface>/queues/tx-<queue>/xps_rxqs
+ Date:		June 2018
+ KernelVersion:	4.18.0
+ Contact:	netdev@vger.kernel.org
+@@ -53,7 +53,7 @@ Description:
+ 		number of available receive queue(s) in the network device.
+ 		Default is disabled.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/hold_time
++What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/hold_time
+ Date:		November 2011
+ KernelVersion:	3.3
+ Contact:	netdev@vger.kernel.org
+@@ -62,7 +62,7 @@ Description:
+ 		of this particular network device transmit queue.
+ 		Default value is 1000.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/inflight
++What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/inflight
+ Date:		November 2011
+ KernelVersion:	3.3
+ Contact:	netdev@vger.kernel.org
+@@ -70,7 +70,7 @@ Description:
+ 		Indicates the number of bytes (objects) in flight on this
+ 		network device transmit queue.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit
++What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit
+ Date:		November 2011
+ KernelVersion:	3.3
+ Contact:	netdev@vger.kernel.org
+@@ -79,7 +79,7 @@ Description:
+ 		on this network device transmit queue. This value is clamped
+ 		to be within the bounds defined by limit_max and limit_min.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit_max
++What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit_max
+ Date:		November 2011
+ KernelVersion:	3.3
+ Contact:	netdev@vger.kernel.org
+@@ -88,7 +88,7 @@ Description:
+ 		queued on this network device transmit queue. See
+ 		include/linux/dynamic_queue_limits.h for the default value.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit_min
++What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit_min
+ Date:		November 2011
+ KernelVersion:	3.3
+ Contact:	netdev@vger.kernel.org
+diff --git a/Documentation/ABI/testing/sysfs-class-net-statistics b/Documentation/ABI/testing/sysfs-class-net-statistics
+index 55db27815361b..53e508c6936a5 100644
+--- a/Documentation/ABI/testing/sysfs-class-net-statistics
++++ b/Documentation/ABI/testing/sysfs-class-net-statistics
+@@ -1,4 +1,4 @@
+-What:		/sys/class/<iface>/statistics/collisions
++What:		/sys/class/net/<iface>/statistics/collisions
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -6,7 +6,7 @@ Description:
+ 		Indicates the number of collisions seen by this network device.
+ 		This value might not be relevant with all MAC layers.
+ 
+-What:		/sys/class/<iface>/statistics/multicast
++What:		/sys/class/net/<iface>/statistics/multicast
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -14,7 +14,7 @@ Description:
+ 		Indicates the number of multicast packets received by this
+ 		network device.
+ 
+-What:		/sys/class/<iface>/statistics/rx_bytes
++What:		/sys/class/net/<iface>/statistics/rx_bytes
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -23,7 +23,7 @@ Description:
+ 		See the network driver for the exact meaning of when this
+ 		value is incremented.
+ 
+-What:		/sys/class/<iface>/statistics/rx_compressed
++What:		/sys/class/net/<iface>/statistics/rx_compressed
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -32,7 +32,7 @@ Description:
+ 		network device. This value might only be relevant for interfaces
+ 		that support packet compression (e.g: PPP).
+ 
+-What:		/sys/class/<iface>/statistics/rx_crc_errors
++What:		/sys/class/net/<iface>/statistics/rx_crc_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -41,7 +41,7 @@ Description:
+ 		by this network device. Note that the specific meaning might
+ 		depend on the MAC layer used by the interface.
+ 
+-What:		/sys/class/<iface>/statistics/rx_dropped
++What:		/sys/class/net/<iface>/statistics/rx_dropped
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -51,7 +51,7 @@ Description:
+ 		packet processing. See the network driver for the exact
+ 		meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_errors
++What:		/sys/class/net/<iface>/statistics/rx_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -59,7 +59,7 @@ Description:
+ 		Indicates the number of receive errors on this network device.
+ 		See the network driver for the exact meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_fifo_errors
++What:		/sys/class/net/<iface>/statistics/rx_fifo_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -68,7 +68,7 @@ Description:
+ 		network device. See the network driver for the exact
+ 		meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_frame_errors
++What:		/sys/class/net/<iface>/statistics/rx_frame_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -78,7 +78,7 @@ Description:
+ 		on the MAC layer protocol used. See the network driver for
+ 		the exact meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_length_errors
++What:		/sys/class/net/<iface>/statistics/rx_length_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -87,7 +87,7 @@ Description:
+ 		error, oversized or undersized. See the network driver for the
+ 		exact meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_missed_errors
++What:		/sys/class/net/<iface>/statistics/rx_missed_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -96,7 +96,7 @@ Description:
+ 		due to lack of capacity in the receive side. See the network
+ 		driver for the exact meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_nohandler
++What:		/sys/class/net/<iface>/statistics/rx_nohandler
+ Date:		February 2016
+ KernelVersion:	4.6
+ Contact:	netdev@vger.kernel.org
+@@ -104,7 +104,7 @@ Description:
+ 		Indicates the number of received packets that were dropped on
+ 		an inactive device by the network core.
+ 
+-What:		/sys/class/<iface>/statistics/rx_over_errors
++What:		/sys/class/net/<iface>/statistics/rx_over_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -114,7 +114,7 @@ Description:
+ 		(e.g: larger than MTU). See the network driver for the exact
+ 		meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_packets
++What:		/sys/class/net/<iface>/statistics/rx_packets
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -122,7 +122,7 @@ Description:
+ 		Indicates the total number of good packets received by this
+ 		network device.
+ 
+-What:		/sys/class/<iface>/statistics/tx_aborted_errors
++What:		/sys/class/net/<iface>/statistics/tx_aborted_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -132,7 +132,7 @@ Description:
+ 		a medium collision). See the network driver for the exact
+ 		meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/tx_bytes
++What:		/sys/class/net/<iface>/statistics/tx_bytes
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -143,7 +143,7 @@ Description:
+ 		transmitted packets or all packets that have been queued for
+ 		transmission.
+ 
+-What:		/sys/class/<iface>/statistics/tx_carrier_errors
++What:		/sys/class/net/<iface>/statistics/tx_carrier_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -152,7 +152,7 @@ Description:
+ 		because of carrier errors (e.g: physical link down). See the
+ 		network driver for the exact meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/tx_compressed
++What:		/sys/class/net/<iface>/statistics/tx_compressed
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -161,7 +161,7 @@ Description:
+ 		this might only be relevant for devices that support
+ 		compression (e.g: PPP).
+ 
+-What:		/sys/class/<iface>/statistics/tx_dropped
++What:		/sys/class/net/<iface>/statistics/tx_dropped
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -170,7 +170,7 @@ Description:
+ 		See the driver for the exact reasons as to why the packets were
+ 		dropped.
+ 
+-What:		/sys/class/<iface>/statistics/tx_errors
++What:		/sys/class/net/<iface>/statistics/tx_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -179,7 +179,7 @@ Description:
+ 		a network device. See the driver for the exact reasons as to
+ 		why the packets were dropped.
+ 
+-What:		/sys/class/<iface>/statistics/tx_fifo_errors
++What:		/sys/class/net/<iface>/statistics/tx_fifo_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -188,7 +188,7 @@ Description:
+ 		FIFO error. See the driver for the exact reasons as to why the
+ 		packets were dropped.
+ 
+-What:		/sys/class/<iface>/statistics/tx_heartbeat_errors
++What:		/sys/class/net/<iface>/statistics/tx_heartbeat_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -197,7 +197,7 @@ Description:
+ 		reported as heartbeat errors. See the driver for the exact
+ 		reasons as to why the packets were dropped.
+ 
+-What:		/sys/class/<iface>/statistics/tx_packets
++What:		/sys/class/net/<iface>/statistics/tx_packets
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -206,7 +206,7 @@ Description:
+ 		device. See the driver for whether this reports the number of all
+ 		attempted or successful transmissions.
+ 
+-What:		/sys/class/<iface>/statistics/tx_window_errors
++What:		/sys/class/net/<iface>/statistics/tx_window_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+diff --git a/Documentation/filesystems/directory-locking.rst b/Documentation/filesystems/directory-locking.rst
+index dccd61c7c5c3b..193c22687851a 100644
+--- a/Documentation/filesystems/directory-locking.rst
++++ b/Documentation/filesystems/directory-locking.rst
+@@ -22,13 +22,16 @@ exclusive.
+ 3) object removal.  Locking rules: caller locks parent, finds victim,
+ locks victim and calls the method.  Locks are exclusive.
+ 
+-4) rename() that is _not_ cross-directory.  Locking rules: caller locks the
+-parent and finds source and target.  We lock both (provided they exist).  If we
+-need to lock two inodes of different type (dir vs non-dir), we lock directory
+-first.  If we need to lock two inodes of the same type, lock them in inode
+-pointer order.  Then call the method.  All locks are exclusive.
+-NB: we might get away with locking the source (and target in exchange
+-case) shared.
++4) rename() that is _not_ cross-directory.  Locking rules: caller locks
++the parent and finds source and target.  Then we decide which of the
++source and target need to be locked.  Source needs to be locked if it's a
++non-directory; target - if it's a non-directory or about to be removed.
++Take the locks that need to be taken, in inode pointer order if need
++to take both (that can happen only when both source and target are
++non-directories - the source because it wouldn't be locked otherwise
++and the target because mixing directory and non-directory is allowed
++only with RENAME_EXCHANGE, and that won't be removing the target).
++After the locks had been taken, call the method.  All locks are exclusive.
+ 
+ 5) link creation.  Locking rules:
+ 
+@@ -44,20 +47,17 @@ rules:
+ 
+ 	* lock the filesystem
+ 	* lock parents in "ancestors first" order. If one is not ancestor of
+-	  the other, lock them in inode pointer order.
++	  the other, lock the parent of source first.
+ 	* find source and target.
+ 	* if old parent is equal to or is a descendent of target
+ 	  fail with -ENOTEMPTY
+ 	* if new parent is equal to or is a descendent of source
+ 	  fail with -ELOOP
+-	* Lock both the source and the target provided they exist. If we
+-	  need to lock two inodes of different type (dir vs non-dir), we lock
+-	  the directory first. If we need to lock two inodes of the same type,
+-	  lock them in inode pointer order.
++	* Lock subdirectories involved (source before target).
++	* Lock non-directories involved, in inode pointer order.
+ 	* call the method.
+ 
+-All ->i_rwsem are taken exclusive.  Again, we might get away with locking
+-the source (and target in exchange case) shared.
++All ->i_rwsem are taken exclusive.
+ 
+ The rules above obviously guarantee that all directories that are going to be
+ read, modified or removed by method will be locked by caller.
+@@ -67,6 +67,7 @@ If no directory is its own ancestor, the scheme above is deadlock-free.
+ 
+ Proof:
+ 
++[XXX: will be updated once we are done massaging the lock_rename()]
+ 	First of all, at any moment we have a linear ordering of the
+ 	objects - A < B iff (A is an ancestor of B) or (B is not an ancestor
+         of A and ptr(A) < ptr(B)).
+diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
+index c0f2c7586531b..fbd695d66905f 100644
+--- a/Documentation/filesystems/locking.rst
++++ b/Documentation/filesystems/locking.rst
+@@ -95,7 +95,7 @@ symlink:	exclusive
+ mkdir:		exclusive
+ unlink:		exclusive (both)
+ rmdir:		exclusive (both)(see below)
+-rename:		exclusive (all)	(see below)
++rename:		exclusive (both parents, some children)	(see below)
+ readlink:	no
+ get_link:	no
+ setattr:	exclusive
+@@ -113,6 +113,9 @@ tmpfile:	no
+ 	Additionally, ->rmdir(), ->unlink() and ->rename() have ->i_rwsem
+ 	exclusive on victim.
+ 	cross-directory ->rename() has (per-superblock) ->s_vfs_rename_sem.
++	->unlink() and ->rename() have ->i_rwsem exclusive on all non-directories
++	involved.
++	->rename() has ->i_rwsem exclusive on any subdirectory that changes parent.
+ 
+ See Documentation/filesystems/directory-locking.rst for more detailed discussion
+ of the locking scheme for directory operations.
+diff --git a/Documentation/filesystems/porting.rst b/Documentation/filesystems/porting.rst
+index 867036aa90b83..0a2d29d844190 100644
+--- a/Documentation/filesystems/porting.rst
++++ b/Documentation/filesystems/porting.rst
+@@ -865,3 +865,21 @@ no matter what.  Everything is handled by the caller.
+ 
+ clone_private_mount() returns a longterm mount now, so the proper destructor of
+ its result is kern_unmount() or kern_unmount_array().
++
++---
++
++**mandatory**
++
++If ->rename() update of .. on cross-directory move needs an exclusion with
++directory modifications, do *not* lock the subdirectory in question in your
++->rename() - it's done by the caller now [that item should've been added in
++28eceeda130f "fs: Lock moved directories"].
++
++---
++
++**mandatory**
++
++On same-directory ->rename() the (tautological) update of .. is not protected
++by any locks; just don't do it if the old parent is the same as the new one.
++We really can't lock two subdirectories in same-directory rename - not without
++deadlocks.
+diff --git a/Documentation/sound/soc/dapm.rst b/Documentation/sound/soc/dapm.rst
+index 8e44107933abf..c3154ce6e1b27 100644
+--- a/Documentation/sound/soc/dapm.rst
++++ b/Documentation/sound/soc/dapm.rst
+@@ -234,7 +234,7 @@ corresponding soft power control. In this case it is necessary to create
+ a virtual widget - a widget with no control bits e.g.
+ ::
+ 
+-  SND_SOC_DAPM_MIXER("AC97 Mixer", SND_SOC_DAPM_NOPM, 0, 0, NULL, 0),
++  SND_SOC_DAPM_MIXER("AC97 Mixer", SND_SOC_NOPM, 0, 0, NULL, 0),
+ 
+ This can be used to merge to signal paths together in software.
+ 
+diff --git a/Makefile b/Makefile
+index 613b25d330b0a..6e9ee164b9dfd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 209
++SUBLEVEL = 210
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/exynos4210-i9100.dts b/arch/arm/boot/dts/exynos4210-i9100.dts
+index d186b93144e38..5256181971973 100644
+--- a/arch/arm/boot/dts/exynos4210-i9100.dts
++++ b/arch/arm/boot/dts/exynos4210-i9100.dts
+@@ -464,6 +464,14 @@ vtcam_reg: LDO12 {
+ 				regulator-name = "VT_CAM_1.8V";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
++
++				/*
++				 * Force-enable this regulator; otherwise the
++				 * kernel hangs very early in the boot process
++				 * for about 12 seconds, without apparent
++				 * reason.
++				 */
++				regulator-always-on;
+ 			};
+ 
+ 			vcclcd_reg: LDO13 {
+diff --git a/arch/arm/boot/dts/imx1-ads.dts b/arch/arm/boot/dts/imx1-ads.dts
+index 5833fb6f15d88..2c817c4a4c68f 100644
+--- a/arch/arm/boot/dts/imx1-ads.dts
++++ b/arch/arm/boot/dts/imx1-ads.dts
+@@ -65,7 +65,7 @@ &weim {
+ 	pinctrl-0 = <&pinctrl_weim>;
+ 	status = "okay";
+ 
+-	nor: nor@0,0 {
++	nor: flash@0,0 {
+ 		compatible = "cfi-flash";
+ 		reg = <0 0x00000000 0x02000000>;
+ 		bank-width = <4>;
+diff --git a/arch/arm/boot/dts/imx1-apf9328.dts b/arch/arm/boot/dts/imx1-apf9328.dts
+index 77b21aa7a1469..27e72b07b517a 100644
+--- a/arch/arm/boot/dts/imx1-apf9328.dts
++++ b/arch/arm/boot/dts/imx1-apf9328.dts
+@@ -45,7 +45,7 @@ &weim {
+ 	pinctrl-0 = <&pinctrl_weim>;
+ 	status = "okay";
+ 
+-	nor: nor@0,0 {
++	nor: flash@0,0 {
+ 		compatible = "cfi-flash";
+ 		reg = <0 0x00000000 0x02000000>;
+ 		bank-width = <2>;
+diff --git a/arch/arm/boot/dts/imx1.dtsi b/arch/arm/boot/dts/imx1.dtsi
+index 9b940987864c7..8d6e900a9081e 100644
+--- a/arch/arm/boot/dts/imx1.dtsi
++++ b/arch/arm/boot/dts/imx1.dtsi
+@@ -268,9 +268,12 @@ weim: weim@220000 {
+ 			status = "disabled";
+ 		};
+ 
+-		esram: esram@300000 {
++		esram: sram@300000 {
+ 			compatible = "mmio-sram";
+ 			reg = <0x00300000 0x20000>;
++			ranges = <0 0x00300000 0x20000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/imx23-sansa.dts b/arch/arm/boot/dts/imx23-sansa.dts
+index 46057d9bf555b..c2efcc20ae802 100644
+--- a/arch/arm/boot/dts/imx23-sansa.dts
++++ b/arch/arm/boot/dts/imx23-sansa.dts
+@@ -175,10 +175,8 @@ i2c-0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 		compatible = "i2c-gpio";
+-		gpios = <
+-			&gpio1 24 0		/* SDA */
+-			&gpio1 22 0		/* SCL */
+-		>;
++		sda-gpios = <&gpio1 24 0>;
++		scl-gpios = <&gpio1 22 0>;
+ 		i2c-gpio,delay-us = <2>;	/* ~100 kHz */
+ 	};
+ 
+@@ -186,10 +184,8 @@ i2c-1 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 		compatible = "i2c-gpio";
+-		gpios = <
+-			&gpio0 31 0		/* SDA */
+-			&gpio0 30 0		/* SCL */
+-		>;
++		sda-gpios = <&gpio0 31 0>;
++		scl-gpios = <&gpio0 30 0>;
+ 		i2c-gpio,delay-us = <2>;	/* ~100 kHz */
+ 
+ 		touch: touch@20 {
+diff --git a/arch/arm/boot/dts/imx23.dtsi b/arch/arm/boot/dts/imx23.dtsi
+index ce3d6360a7efb..b236d23f80715 100644
+--- a/arch/arm/boot/dts/imx23.dtsi
++++ b/arch/arm/boot/dts/imx23.dtsi
+@@ -414,7 +414,7 @@ emi@80020000 {
+ 				status = "disabled";
+ 			};
+ 
+-			dma_apbx: dma-apbx@80024000 {
++			dma_apbx: dma-controller@80024000 {
+ 				compatible = "fsl,imx23-dma-apbx";
+ 				reg = <0x80024000 0x2000>;
+ 				interrupts = <7 5 9 26
+diff --git a/arch/arm/boot/dts/imx25-eukrea-cpuimx25.dtsi b/arch/arm/boot/dts/imx25-eukrea-cpuimx25.dtsi
+index 0703f62d10d1c..93a6e4e680b45 100644
+--- a/arch/arm/boot/dts/imx25-eukrea-cpuimx25.dtsi
++++ b/arch/arm/boot/dts/imx25-eukrea-cpuimx25.dtsi
+@@ -27,7 +27,7 @@ &i2c1 {
+ 	pinctrl-0 = <&pinctrl_i2c1>;
+ 	status = "okay";
+ 
+-	pcf8563@51 {
++	rtc@51 {
+ 		compatible = "nxp,pcf8563";
+ 		reg = <0x51>;
+ 	};
+diff --git a/arch/arm/boot/dts/imx25-eukrea-mbimxsd25-baseboard-cmo-qvga.dts b/arch/arm/boot/dts/imx25-eukrea-mbimxsd25-baseboard-cmo-qvga.dts
+index 7d4301b22b902..1ed3fb7b9ce62 100644
+--- a/arch/arm/boot/dts/imx25-eukrea-mbimxsd25-baseboard-cmo-qvga.dts
++++ b/arch/arm/boot/dts/imx25-eukrea-mbimxsd25-baseboard-cmo-qvga.dts
+@@ -16,7 +16,7 @@ cmo_qvga: display {
+ 		bus-width = <18>;
+ 		display-timings {
+ 			native-mode = <&qvga_timings>;
+-			qvga_timings: 320x240 {
++			qvga_timings: timing0 {
+ 				clock-frequency = <6500000>;
+ 				hactive = <320>;
+ 				vactive = <240>;
+diff --git a/arch/arm/boot/dts/imx25-eukrea-mbimxsd25-baseboard-dvi-svga.dts b/arch/arm/boot/dts/imx25-eukrea-mbimxsd25-baseboard-dvi-svga.dts
+index 80a7f96de4c6a..64b2ffac463b2 100644
+--- a/arch/arm/boot/dts/imx25-eukrea-mbimxsd25-baseboard-dvi-svga.dts
++++ b/arch/arm/boot/dts/imx25-eukrea-mbimxsd25-baseboard-dvi-svga.dts
+@@ -16,7 +16,7 @@ dvi_svga: display {
+ 		bus-width = <18>;
+ 		display-timings {
+ 			native-mode = <&dvi_svga_timings>;
+-			dvi_svga_timings: 800x600 {
++			dvi_svga_timings: timing0 {
+ 				clock-frequency = <40000000>;
+ 				hactive = <800>;
+ 				vactive = <600>;
+diff --git a/arch/arm/boot/dts/imx25-eukrea-mbimxsd25-baseboard-dvi-vga.dts b/arch/arm/boot/dts/imx25-eukrea-mbimxsd25-baseboard-dvi-vga.dts
+index 24027a1fb46d1..fb074bfdaa8dc 100644
+--- a/arch/arm/boot/dts/imx25-eukrea-mbimxsd25-baseboard-dvi-vga.dts
++++ b/arch/arm/boot/dts/imx25-eukrea-mbimxsd25-baseboard-dvi-vga.dts
+@@ -16,7 +16,7 @@ dvi_vga: display {
+ 		bus-width = <18>;
+ 		display-timings {
+ 			native-mode = <&dvi_vga_timings>;
+-			dvi_vga_timings: 640x480 {
++			dvi_vga_timings: timing0 {
+ 				clock-frequency = <31250000>;
+ 				hactive = <640>;
+ 				vactive = <480>;
+diff --git a/arch/arm/boot/dts/imx25-pdk.dts b/arch/arm/boot/dts/imx25-pdk.dts
+index fb66884d8a2fa..59b40d13a6401 100644
+--- a/arch/arm/boot/dts/imx25-pdk.dts
++++ b/arch/arm/boot/dts/imx25-pdk.dts
+@@ -78,7 +78,7 @@ wvga: display {
+ 		bus-width = <18>;
+ 		display-timings {
+ 			native-mode = <&wvga_timings>;
+-			wvga_timings: 640x480 {
++			wvga_timings: timing0 {
+ 				hactive = <640>;
+ 				vactive = <480>;
+ 				hback-porch = <45>;
+diff --git a/arch/arm/boot/dts/imx25.dtsi b/arch/arm/boot/dts/imx25.dtsi
+index d24b1da18766b..99886ba367240 100644
+--- a/arch/arm/boot/dts/imx25.dtsi
++++ b/arch/arm/boot/dts/imx25.dtsi
+@@ -543,7 +543,7 @@ pwm1: pwm@53fe0000 {
+ 			};
+ 
+ 			iim: efuse@53ff0000 {
+-				compatible = "fsl,imx25-iim", "fsl,imx27-iim";
++				compatible = "fsl,imx25-iim";
+ 				reg = <0x53ff0000 0x4000>;
+ 				interrupts = <19>;
+ 				clocks = <&clks 99>;
+diff --git a/arch/arm/boot/dts/imx27-apf27dev.dts b/arch/arm/boot/dts/imx27-apf27dev.dts
+index 6f1e8ce9e76e9..3d9bb7fc3be2e 100644
+--- a/arch/arm/boot/dts/imx27-apf27dev.dts
++++ b/arch/arm/boot/dts/imx27-apf27dev.dts
+@@ -16,7 +16,7 @@ display: display {
+ 		fsl,pcr = <0xfae80083>;	/* non-standard but required */
+ 		display-timings {
+ 			native-mode = <&timing0>;
+-			timing0: 800x480 {
++			timing0: timing0 {
+ 				clock-frequency = <33000033>;
+ 				hactive = <800>;
+ 				vactive = <480>;
+@@ -47,7 +47,7 @@ leds {
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_gpio_leds>;
+ 
+-		user {
++		led-user {
+ 			label = "Heartbeat";
+ 			gpios = <&gpio6 14 GPIO_ACTIVE_HIGH>;
+ 			linux,default-trigger = "heartbeat";
+diff --git a/arch/arm/boot/dts/imx27-eukrea-cpuimx27.dtsi b/arch/arm/boot/dts/imx27-eukrea-cpuimx27.dtsi
+index 74110bbcd9d4f..c7e9235848782 100644
+--- a/arch/arm/boot/dts/imx27-eukrea-cpuimx27.dtsi
++++ b/arch/arm/boot/dts/imx27-eukrea-cpuimx27.dtsi
+@@ -33,7 +33,7 @@ &i2c1 {
+ 	pinctrl-0 = <&pinctrl_i2c1>;
+ 	status = "okay";
+ 
+-	pcf8563@51 {
++	rtc@51 {
+ 		compatible = "nxp,pcf8563";
+ 		reg = <0x51>;
+ 	};
+@@ -90,7 +90,7 @@ &usbotg {
+ &weim {
+ 	status = "okay";
+ 
+-	nor: nor@0,0 {
++	nor: flash@0,0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "cfi-flash";
+diff --git a/arch/arm/boot/dts/imx27-eukrea-mbimxsd27-baseboard.dts b/arch/arm/boot/dts/imx27-eukrea-mbimxsd27-baseboard.dts
+index 9c3ec82ec7e5a..50fa0bd4c8a18 100644
+--- a/arch/arm/boot/dts/imx27-eukrea-mbimxsd27-baseboard.dts
++++ b/arch/arm/boot/dts/imx27-eukrea-mbimxsd27-baseboard.dts
+@@ -16,7 +16,7 @@ display0: CMO-QVGA {
+ 
+ 		display-timings {
+ 			native-mode = <&timing0>;
+-			timing0: 320x240 {
++			timing0: timing0 {
+ 				clock-frequency = <6500000>;
+ 				hactive = <320>;
+ 				vactive = <240>;
+diff --git a/arch/arm/boot/dts/imx27-phytec-phycard-s-rdk.dts b/arch/arm/boot/dts/imx27-phytec-phycard-s-rdk.dts
+index 188639738dc3e..7f36af150a254 100644
+--- a/arch/arm/boot/dts/imx27-phytec-phycard-s-rdk.dts
++++ b/arch/arm/boot/dts/imx27-phytec-phycard-s-rdk.dts
+@@ -19,7 +19,7 @@ display: display {
+ 		fsl,pcr = <0xf0c88080>;	/* non-standard but required */
+ 		display-timings {
+ 			native-mode = <&timing0>;
+-			timing0: 640x480 {
++			timing0: timing0 {
+ 				hactive = <640>;
+ 				vactive = <480>;
+ 				hback-porch = <112>;
+diff --git a/arch/arm/boot/dts/imx27-phytec-phycore-rdk.dts b/arch/arm/boot/dts/imx27-phytec-phycore-rdk.dts
+index 344e777901524..d133b9f08b3a0 100644
+--- a/arch/arm/boot/dts/imx27-phytec-phycore-rdk.dts
++++ b/arch/arm/boot/dts/imx27-phytec-phycore-rdk.dts
+@@ -19,7 +19,7 @@ display0: LQ035Q7 {
+ 
+ 		display-timings {
+ 			native-mode = <&timing0>;
+-			timing0: 240x320 {
++			timing0: timing0 {
+ 				clock-frequency = <5500000>;
+ 				hactive = <240>;
+ 				vactive = <320>;
+diff --git a/arch/arm/boot/dts/imx27-phytec-phycore-som.dtsi b/arch/arm/boot/dts/imx27-phytec-phycore-som.dtsi
+index 3d10273177e9b..a5fdc2fd4ce5a 100644
+--- a/arch/arm/boot/dts/imx27-phytec-phycore-som.dtsi
++++ b/arch/arm/boot/dts/imx27-phytec-phycore-som.dtsi
+@@ -322,7 +322,7 @@ &usbotg {
+ &weim {
+ 	status = "okay";
+ 
+-	nor: nor@0,0 {
++	nor: flash@0,0 {
+ 		compatible = "cfi-flash";
+ 		reg = <0 0x00000000 0x02000000>;
+ 		bank-width = <2>;
+diff --git a/arch/arm/boot/dts/imx27.dtsi b/arch/arm/boot/dts/imx27.dtsi
+index 7bc132737a375..8ae24c8655217 100644
+--- a/arch/arm/boot/dts/imx27.dtsi
++++ b/arch/arm/boot/dts/imx27.dtsi
+@@ -588,6 +588,9 @@ weim: weim@d8002000 {
+ 		iram: sram@ffff4c00 {
+ 			compatible = "mmio-sram";
+ 			reg = <0xffff4c00 0xb400>;
++			ranges = <0 0xffff4c00 0xb400>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/imx28.dtsi b/arch/arm/boot/dts/imx28.dtsi
+index 6cab8b66db805..23ef4a322995d 100644
+--- a/arch/arm/boot/dts/imx28.dtsi
++++ b/arch/arm/boot/dts/imx28.dtsi
+@@ -982,7 +982,7 @@ etm: etm@80022000 {
+ 				status = "disabled";
+ 			};
+ 
+-			dma_apbx: dma-apbx@80024000 {
++			dma_apbx: dma-controller@80024000 {
+ 				compatible = "fsl,imx28-dma-apbx";
+ 				reg = <0x80024000 0x2000>;
+ 				interrupts = <78 79 66 0
+diff --git a/arch/arm/boot/dts/imx7d.dtsi b/arch/arm/boot/dts/imx7d.dtsi
+index b0bcfa9094a30..8ad3e60fd7d1c 100644
+--- a/arch/arm/boot/dts/imx7d.dtsi
++++ b/arch/arm/boot/dts/imx7d.dtsi
+@@ -209,9 +209,6 @@ pcie: pcie@33800000 {
+ };
+ 
+ &ca_funnel_in_ports {
+-	#address-cells = <1>;
+-	#size-cells = <0>;
+-
+ 	port@1 {
+ 		reg = <1>;
+ 		ca_funnel_in_port1: endpoint {
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index 03bde2fb9bb11..b4cab6a214370 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -173,7 +173,11 @@ funnel@30041000 {
+ 			clock-names = "apb_pclk";
+ 
+ 			ca_funnel_in_ports: in-ports {
+-				port {
++				#address-cells = <1>;
++				#size-cells = <0>;
++
++				port@0 {
++					reg = <0>;
+ 					ca_funnel_in_port0: endpoint {
+ 						remote-endpoint = <&etm0_out_port>;
+ 					};
+@@ -769,7 +773,7 @@ csi_from_csi_mux: endpoint {
+ 			};
+ 
+ 			lcdif: lcdif@30730000 {
+-				compatible = "fsl,imx7d-lcdif", "fsl,imx28-lcdif";
++				compatible = "fsl,imx7d-lcdif", "fsl,imx6sx-lcdif";
+ 				reg = <0x30730000 0x10000>;
+ 				interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX7D_LCDIF_PIXEL_ROOT_CLK>,
+@@ -1231,7 +1235,7 @@ dma_apbh: dma-controller@33000000 {
+ 		gpmi: nand-controller@33002000{
+ 			compatible = "fsl,imx7d-gpmi-nand";
+ 			#address-cells = <1>;
+-			#size-cells = <1>;
++			#size-cells = <0>;
+ 			reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
+ 			reg-names = "gpmi-nand", "bch";
+ 			interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/rk3036.dtsi b/arch/arm/boot/dts/rk3036.dtsi
+index 093567022386d..5f47b638f5327 100644
+--- a/arch/arm/boot/dts/rk3036.dtsi
++++ b/arch/arm/boot/dts/rk3036.dtsi
+@@ -336,12 +336,20 @@ hdmi: hdmi@20034000 {
+ 		pinctrl-0 = <&hdmi_ctl>;
+ 		status = "disabled";
+ 
+-		hdmi_in: port {
++		ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+-			hdmi_in_vop: endpoint@0 {
++
++			hdmi_in: port@0 {
+ 				reg = <0>;
+-				remote-endpoint = <&vop_out_hdmi>;
++
++				hdmi_in_vop: endpoint {
++					remote-endpoint = <&vop_out_hdmi>;
++				};
++			};
++
++			hdmi_out: port@1 {
++				reg = <1>;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 0bc5fefb7a49b..d766f3b5c03ec 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -139,6 +139,19 @@ memory {
+ 		reg = <0 0 0 0>;
+ 	};
+ 
++	etm {
++		compatible = "qcom,coresight-remote-etm";
++
++		out-ports {
++			port {
++				modem_etm_out_funnel_in2: endpoint {
++					remote-endpoint =
++					  <&funnel_in2_in_modem_etm>;
++				};
++			};
++		};
++	};
++
+ 	psci {
+ 		compatible = "arm,psci-1.0";
+ 		method = "smc";
+@@ -1374,6 +1387,14 @@ funnel@3023000 {
+ 			clocks = <&rpmcc RPM_QDSS_CLK>, <&rpmcc RPM_QDSS_A_CLK>;
+ 			clock-names = "apb_pclk", "atclk";
+ 
++			in-ports {
++				port {
++					funnel_in2_in_modem_etm: endpoint {
++						remote-endpoint =
++						  <&modem_etm_out_funnel_in2>;
++					};
++				};
++			};
+ 
+ 			out-ports {
+ 				port {
+diff --git a/arch/arm64/boot/dts/qcom/msm8998.dtsi b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+index 7c8d69ca91cf4..ca8e7848769a6 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+@@ -1577,9 +1577,11 @@ etm5: etm@7c40000 {
+ 
+ 			cpu = <&CPU4>;
+ 
+-			port{
+-				etm4_out: endpoint {
+-					remote-endpoint = <&apss_funnel_in4>;
++			out-ports {
++				port{
++					etm4_out: endpoint {
++						remote-endpoint = <&apss_funnel_in4>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -1594,9 +1596,11 @@ etm6: etm@7d40000 {
+ 
+ 			cpu = <&CPU5>;
+ 
+-			port{
+-				etm5_out: endpoint {
+-					remote-endpoint = <&apss_funnel_in5>;
++			out-ports {
++				port{
++					etm5_out: endpoint {
++						remote-endpoint = <&apss_funnel_in5>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -1611,9 +1615,11 @@ etm7: etm@7e40000 {
+ 
+ 			cpu = <&CPU6>;
+ 
+-			port{
+-				etm6_out: endpoint {
+-					remote-endpoint = <&apss_funnel_in6>;
++			out-ports {
++				port{
++					etm6_out: endpoint {
++						remote-endpoint = <&apss_funnel_in6>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -1628,9 +1634,11 @@ etm8: etm@7f40000 {
+ 
+ 			cpu = <&CPU7>;
+ 
+-			port{
+-				etm7_out: endpoint {
+-					remote-endpoint = <&apss_funnel_in7>;
++			out-ports {
++				port{
++					etm7_out: endpoint {
++						remote-endpoint = <&apss_funnel_in7>;
++					};
+ 				};
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index eb07a882d43b3..be40821dfeb9d 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -2688,10 +2688,10 @@ usb_1: usb@a6f8800 {
+ 					  <&gcc GCC_USB30_PRIM_MASTER_CLK>;
+ 			assigned-clock-rates = <19200000>, <150000000>;
+ 
+-			interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 486 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 488 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 489 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 6 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 8 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 9 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
+ 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 5c696ebf5c20c..e3c6b05869e7f 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -3565,10 +3565,10 @@ usb_1: usb@a6f8800 {
+ 					  <&gcc GCC_USB30_PRIM_MASTER_CLK>;
+ 			assigned-clock-rates = <19200000>, <150000000>;
+ 
+-			interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 486 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 488 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 489 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
++					      <&intc GIC_SPI 486 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc_intc 8 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc_intc 9 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
+ 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
+ 
+@@ -3613,10 +3613,10 @@ usb_2: usb@a8f8800 {
+ 					  <&gcc GCC_USB30_SEC_MASTER_CLK>;
+ 			assigned-clock-rates = <19200000>, <150000000>;
+ 
+-			interrupts = <GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 487 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 490 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 491 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&intc GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
++					      <&intc GIC_SPI 487 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc_intc 10 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc_intc 11 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
+ 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
+ 
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index a94acea770c7c..020a455824bed 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -431,7 +431,9 @@ SYM_CODE_END(__swpan_exit_el0)
+ 
+ 	.macro	irq_stack_entry
+ 	mov	x19, sp			// preserve the original sp
+-	scs_save tsk			// preserve the original shadow stack
++#ifdef CONFIG_SHADOW_CALL_STACK
++	mov	x24, scs_sp		// preserve the original shadow stack
++#endif
+ 
+ 	/*
+ 	 * Compare sp with the base of the task stack.
+@@ -465,7 +467,9 @@ SYM_CODE_END(__swpan_exit_el0)
+ 	 */
+ 	.macro	irq_stack_exit
+ 	mov	sp, x19
+-	scs_load_current
++#ifdef CONFIG_SHADOW_CALL_STACK
++	mov	scs_sp, x24
++#endif
+ 	.endm
+ 
+ /* GPRs used by entry code */
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index cdb3d4549b3a9..8e428f8dd108b 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -171,7 +171,11 @@ armv8pmu_events_sysfs_show(struct device *dev,
+ 	}).attr.attr)
+ 
+ static struct attribute *armv8_pmuv3_event_attrs[] = {
+-	ARMV8_EVENT_ATTR(sw_incr, ARMV8_PMUV3_PERFCTR_SW_INCR),
++	/*
++	 * Don't expose the sw_incr event in /sys. It's not usable as writes to
++	 * PMSWINC_EL0 will trap as PMUSERENR.{SW,EN}=={0,0} and event rotation
++	 * means we don't have a fixed event<->counter relationship regardless.
++	 */
+ 	ARMV8_EVENT_ATTR(l1i_cache_refill, ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL),
+ 	ARMV8_EVENT_ATTR(l1i_tlb_refill, ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL),
+ 	ARMV8_EVENT_ATTR(l1d_cache_refill, ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL),
+diff --git a/arch/mips/include/asm/checksum.h b/arch/mips/include/asm/checksum.h
+index 5f80c28f52534..6c837a256cf66 100644
+--- a/arch/mips/include/asm/checksum.h
++++ b/arch/mips/include/asm/checksum.h
+@@ -242,7 +242,8 @@ static __inline__ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
+ 	"	.set	pop"
+ 	: "=&r" (sum), "=&r" (tmp)
+ 	: "r" (saddr), "r" (daddr),
+-	  "0" (htonl(len)), "r" (htonl(proto)), "r" (sum));
++	  "0" (htonl(len)), "r" (htonl(proto)), "r" (sum)
++	: "memory");
+ 
+ 	return csum_fold(sum);
+ }
+diff --git a/arch/mips/kernel/elf.c b/arch/mips/kernel/elf.c
+index 7b045d2a0b51b..bbc6f07d81243 100644
+--- a/arch/mips/kernel/elf.c
++++ b/arch/mips/kernel/elf.c
+@@ -11,6 +11,7 @@
+ 
+ #include <asm/cpu-features.h>
+ #include <asm/cpu-info.h>
++#include <asm/fpu.h>
+ 
+ #ifdef CONFIG_MIPS_FP_SUPPORT
+ 
+@@ -309,6 +310,11 @@ void mips_set_personality_nan(struct arch_elf_state *state)
+ 	struct cpuinfo_mips *c = &boot_cpu_data;
+ 	struct task_struct *t = current;
+ 
++	/* Do this early so t->thread.fpu.fcr31 won't be clobbered in case
++	 * we are preempted before the lose_fpu(0) in start_thread.
++	 */
++	lose_fpu(0);
++
+ 	t->thread.fpu.fcr31 = c->fpu_csr31;
+ 	switch (state->nan_2008) {
+ 	case 0:
+diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
+index 07e84a7749387..32e7b869a2910 100644
+--- a/arch/mips/mm/init.c
++++ b/arch/mips/mm/init.c
+@@ -421,7 +421,12 @@ void __init paging_init(void)
+ 		       (highend_pfn - max_low_pfn) << (PAGE_SHIFT - 10));
+ 		max_zone_pfns[ZONE_HIGHMEM] = max_low_pfn;
+ 	}
++
++	max_mapnr = highend_pfn ? highend_pfn : max_low_pfn;
++#else
++	max_mapnr = max_low_pfn;
+ #endif
++	high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
+ 
+ 	free_area_init(max_zone_pfns);
+ }
+@@ -457,16 +462,6 @@ void __init mem_init(void)
+ 	 */
+ 	BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT) && (_PFN_SHIFT > PAGE_SHIFT));
+ 
+-#ifdef CONFIG_HIGHMEM
+-#ifdef CONFIG_DISCONTIGMEM
+-#error "CONFIG_HIGHMEM and CONFIG_DISCONTIGMEM dont work together yet"
+-#endif
+-	max_mapnr = highend_pfn ? highend_pfn : max_low_pfn;
+-#else
+-	max_mapnr = max_low_pfn;
+-#endif
+-	high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
+-
+ 	maar_init();
+ 	memblock_free_all();
+ 	setup_zero_pages();	/* Setup zeroed pages.  */
+diff --git a/arch/parisc/kernel/firmware.c b/arch/parisc/kernel/firmware.c
+index 7ed28ddcaba7d..25050b0ab6fde 100644
+--- a/arch/parisc/kernel/firmware.c
++++ b/arch/parisc/kernel/firmware.c
+@@ -123,10 +123,10 @@ static unsigned long f_extend(unsigned long address)
+ #ifdef CONFIG_64BIT
+ 	if(unlikely(parisc_narrow_firmware)) {
+ 		if((address & 0xff000000) == 0xf0000000)
+-			return 0xf0f0f0f000000000UL | (u32)address;
++			return (0xfffffff0UL << 32) | (u32)address;
+ 
+ 		if((address & 0xf0000000) == 0xf0000000)
+-			return 0xffffffff00000000UL | (u32)address;
++			return (0xffffffffUL << 32) | (u32)address;
+ 	}
+ #endif
+ 	return address;
+diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
+index 255a1837e9f7f..3a5a27318a0e5 100644
+--- a/arch/powerpc/include/asm/mmu.h
++++ b/arch/powerpc/include/asm/mmu.h
+@@ -390,5 +390,9 @@ extern void *abatron_pteptrs[2];
+ #include <asm/nohash/mmu.h>
+ #endif
+ 
++#if defined(CONFIG_FA_DUMP) || defined(CONFIG_PRESERVE_FA_DUMP)
++#define __HAVE_ARCH_RESERVED_KERNEL_PAGES
++#endif
++
+ #endif /* __KERNEL__ */
+ #endif /* _ASM_POWERPC_MMU_H_ */
+diff --git a/arch/powerpc/include/asm/mmzone.h b/arch/powerpc/include/asm/mmzone.h
+index 6cda76b57c5dd..bd1a8d7256ff2 100644
+--- a/arch/powerpc/include/asm/mmzone.h
++++ b/arch/powerpc/include/asm/mmzone.h
+@@ -42,9 +42,6 @@ u64 memory_hotplug_max(void);
+ #else
+ #define memory_hotplug_max() memblock_end_of_DRAM()
+ #endif /* CONFIG_NEED_MULTIPLE_NODES */
+-#ifdef CONFIG_FA_DUMP
+-#define __HAVE_ARCH_RESERVED_KERNEL_PAGES
+-#endif
+ 
+ #ifdef CONFIG_MEMORY_HOTPLUG
+ extern int create_section_mapping(unsigned long start, unsigned long end,
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index 5e5a2448ae79a..b0e87dce2b9a0 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -1432,10 +1432,12 @@ static int emulate_instruction(struct pt_regs *regs)
+ 	return -EINVAL;
+ }
+ 
++#ifdef CONFIG_GENERIC_BUG
+ int is_valid_bugaddr(unsigned long addr)
+ {
+ 	return is_kernel_addr(addr);
+ }
++#endif
+ 
+ #ifdef CONFIG_MATH_EMULATION
+ static int emulate_math(struct pt_regs *regs)
+diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
+index 3dd58b4ee33e5..5f6b3f80023de 100644
+--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
++++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
+@@ -250,7 +250,7 @@ int kvmppc_uvmem_slot_init(struct kvm *kvm, const struct kvm_memory_slot *slot)
+ 	p = kzalloc(sizeof(*p), GFP_KERNEL);
+ 	if (!p)
+ 		return -ENOMEM;
+-	p->pfns = vzalloc(array_size(slot->npages, sizeof(*p->pfns)));
++	p->pfns = vcalloc(slot->npages, sizeof(*p->pfns));
+ 	if (!p->pfns) {
+ 		kfree(p);
+ 		return -ENOMEM;
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index 2d19655328f12..ca4733fbd02de 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -512,6 +512,8 @@ static int do_fp_load(struct instruction_op *op, unsigned long ea,
+ 	} u;
+ 
+ 	nb = GETSIZE(op->type);
++	if (nb > sizeof(u))
++		return -EINVAL;
+ 	if (!address_ok(regs, ea, nb))
+ 		return -EFAULT;
+ 	rn = op->reg;
+@@ -562,6 +564,8 @@ static int do_fp_store(struct instruction_op *op, unsigned long ea,
+ 	} u;
+ 
+ 	nb = GETSIZE(op->type);
++	if (nb > sizeof(u))
++		return -EINVAL;
+ 	if (!address_ok(regs, ea, nb))
+ 		return -EFAULT;
+ 	rn = op->reg;
+@@ -606,6 +610,9 @@ static nokprobe_inline int do_vec_load(int rn, unsigned long ea,
+ 		u8 b[sizeof(__vector128)];
+ 	} u = {};
+ 
++	if (size > sizeof(u))
++		return -EINVAL;
++
+ 	if (!address_ok(regs, ea & ~0xfUL, 16))
+ 		return -EFAULT;
+ 	/* align to multiple of size */
+@@ -633,6 +640,9 @@ static nokprobe_inline int do_vec_store(int rn, unsigned long ea,
+ 		u8 b[sizeof(__vector128)];
+ 	} u;
+ 
++	if (size > sizeof(u))
++		return -EINVAL;
++
+ 	if (!address_ok(regs, ea & ~0xfUL, 16))
+ 		return -EFAULT;
+ 	/* align to multiple of size */
+diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
+index e18ae50a275c8..a86d932a7c306 100644
+--- a/arch/powerpc/mm/book3s64/pgtable.c
++++ b/arch/powerpc/mm/book3s64/pgtable.c
+@@ -446,6 +446,7 @@ void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
+ 	set_pte_at(vma->vm_mm, addr, ptep, pte);
+ }
+ 
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ /*
+  * For hash translation mode, we use the deposited table to store hash slot
+  * information and they are stored at PTRS_PER_PMD offset from related pmd
+@@ -467,6 +468,7 @@ int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
+ 
+ 	return true;
+ }
++#endif
+ 
+ /*
+  * Does the CPU support tlbie?
+diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
+index 8e0d792ac2967..52a20c97e46ed 100644
+--- a/arch/powerpc/mm/init-common.c
++++ b/arch/powerpc/mm/init-common.c
+@@ -111,7 +111,7 @@ void pgtable_cache_add(unsigned int shift)
+ 	 * as to leave enough 0 bits in the address to contain it. */
+ 	unsigned long minalign = max(MAX_PGTABLE_INDEX_SIZE + 1,
+ 				     HUGEPD_SHIFT_MASK + 1);
+-	struct kmem_cache *new;
++	struct kmem_cache *new = NULL;
+ 
+ 	/* It would be nice if this was a BUILD_BUG_ON(), but at the
+ 	 * moment, gcc doesn't seem to recognize is_power_of_2 as a
+@@ -124,7 +124,8 @@ void pgtable_cache_add(unsigned int shift)
+ 
+ 	align = max_t(unsigned long, align, minalign);
+ 	name = kasprintf(GFP_KERNEL, "pgtable-2^%d", shift);
+-	new = kmem_cache_create(name, table_size, align, 0, ctor(shift));
++	if (name)
++		new = kmem_cache_create(name, table_size, align, 0, ctor(shift));
+ 	if (!new)
+ 		panic("Could not allocate pgtable cache for order %d", shift);
+ 
+diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
+index f3e4d069e0ba7..643fc525897da 100644
+--- a/arch/powerpc/mm/kasan/kasan_init_32.c
++++ b/arch/powerpc/mm/kasan/kasan_init_32.c
+@@ -64,6 +64,7 @@ int __init __weak kasan_init_region(void *start, size_t size)
+ 	if (ret)
+ 		return ret;
+ 
++	k_start = k_start & PAGE_MASK;
+ 	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
+ 	if (!block)
+ 		return -ENOMEM;
+diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
+index 812730e6bfffd..8ef4e5034765b 100644
+--- a/arch/s390/crypto/aes_s390.c
++++ b/arch/s390/crypto/aes_s390.c
+@@ -600,7 +600,9 @@ static int ctr_aes_crypt(struct skcipher_request *req)
+ 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
+ 	 */
+ 	if (nbytes) {
+-		cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
++		memset(buf, 0, AES_BLOCK_SIZE);
++		memcpy(buf, walk.src.virt.addr, nbytes);
++		cpacf_kmctr(sctx->fc, sctx->key, buf, buf,
+ 			    AES_BLOCK_SIZE, walk.iv);
+ 		memcpy(walk.dst.virt.addr, buf, nbytes);
+ 		crypto_inc(walk.iv, AES_BLOCK_SIZE);
+diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
+index a6727ad58d65a..6a0a22621671c 100644
+--- a/arch/s390/crypto/paes_s390.c
++++ b/arch/s390/crypto/paes_s390.c
+@@ -676,9 +676,11 @@ static int ctr_paes_crypt(struct skcipher_request *req)
+ 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
+ 	 */
+ 	if (nbytes) {
++		memset(buf, 0, AES_BLOCK_SIZE);
++		memcpy(buf, walk.src.virt.addr, nbytes);
+ 		while (1) {
+ 			if (cpacf_kmctr(ctx->fc, &param, buf,
+-					walk.src.virt.addr, AES_BLOCK_SIZE,
++					buf, AES_BLOCK_SIZE,
+ 					walk.iv) == AES_BLOCK_SIZE)
+ 				break;
+ 			if (__paes_convert_key(ctx))
+diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c
+index 3009bb5272524..f381caddd9059 100644
+--- a/arch/s390/kernel/ptrace.c
++++ b/arch/s390/kernel/ptrace.c
+@@ -411,6 +411,7 @@ static int __poke_user(struct task_struct *child, addr_t addr, addr_t data)
+ 		/*
+ 		 * floating point control reg. is in the thread structure
+ 		 */
++		save_fpu_regs();
+ 		if ((unsigned int) data != 0 ||
+ 		    test_fp_ctl(data >> (BITS_PER_LONG - 32)))
+ 			return -EINVAL;
+@@ -771,6 +772,7 @@ static int __poke_user_compat(struct task_struct *child,
+ 		/*
+ 		 * floating point control reg. is in the thread structure
+ 		 */
++		save_fpu_regs();
+ 		if (test_fp_ctl(tmp))
+ 			return -EINVAL;
+ 		child->thread.fpu.fpc = data;
+@@ -1010,9 +1012,7 @@ static int s390_fpregs_set(struct task_struct *target,
+ 	int rc = 0;
+ 	freg_t fprs[__NUM_FPRS];
+ 
+-	if (target == current)
+-		save_fpu_regs();
+-
++	save_fpu_regs();
+ 	if (MACHINE_HAS_VX)
+ 		convert_vx_to_fp(fprs, target->thread.fpu.vxrs);
+ 	else
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 7a326d03087ab..f6c27b44766f0 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -3649,10 +3649,6 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+ 
+ 	vcpu_load(vcpu);
+ 
+-	if (test_fp_ctl(fpu->fpc)) {
+-		ret = -EINVAL;
+-		goto out;
+-	}
+ 	vcpu->run->s.regs.fpc = fpu->fpc;
+ 	if (MACHINE_HAS_VX)
+ 		convert_fp_to_vx((__vector128 *) vcpu->run->s.regs.vrs,
+@@ -3660,7 +3656,6 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+ 	else
+ 		memcpy(vcpu->run->s.regs.fprs, &fpu->fprs, sizeof(fpu->fprs));
+ 
+-out:
+ 	vcpu_put(vcpu);
+ 	return ret;
+ }
+diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c
+index 1802cf4ef5a5a..ee55333255d02 100644
+--- a/arch/um/drivers/net_kern.c
++++ b/arch/um/drivers/net_kern.c
+@@ -204,7 +204,7 @@ static int uml_net_close(struct net_device *dev)
+ 	return 0;
+ }
+ 
+-static int uml_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t uml_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct uml_net_private *lp = netdev_priv(dev);
+ 	unsigned long flags;
+diff --git a/arch/um/include/shared/kern_util.h b/arch/um/include/shared/kern_util.h
+index 9c08e728a675e..83171f9e0912d 100644
+--- a/arch/um/include/shared/kern_util.h
++++ b/arch/um/include/shared/kern_util.h
+@@ -51,7 +51,7 @@ extern void do_uml_exitcalls(void);
+  * Are we disallowed to sleep? Used to choose between GFP_KERNEL and
+  * GFP_ATOMIC.
+  */
+-extern int __cant_sleep(void);
++extern int __uml_cant_sleep(void);
+ extern int get_current_pid(void);
+ extern int copy_from_user_proc(void *to, void *from, int size);
+ extern int cpu(void);
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index e6c9b11b20334..76faaf1082cec 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -221,7 +221,7 @@ void arch_cpu_idle(void)
+ 	raw_local_irq_enable();
+ }
+ 
+-int __cant_sleep(void) {
++int __uml_cant_sleep(void) {
+ 	return in_atomic() || irqs_disabled() || in_interrupt();
+ 	/* Is in_interrupt() really needed? */
+ }
+diff --git a/arch/um/os-Linux/helper.c b/arch/um/os-Linux/helper.c
+index 9fa6e4187d4fb..57a27555092fc 100644
+--- a/arch/um/os-Linux/helper.c
++++ b/arch/um/os-Linux/helper.c
+@@ -45,7 +45,7 @@ int run_helper(void (*pre_exec)(void *), void *pre_data, char **argv)
+ 	unsigned long stack, sp;
+ 	int pid, fds[2], ret, n;
+ 
+-	stack = alloc_stack(0, __cant_sleep());
++	stack = alloc_stack(0, __uml_cant_sleep());
+ 	if (stack == 0)
+ 		return -ENOMEM;
+ 
+@@ -69,7 +69,7 @@ int run_helper(void (*pre_exec)(void *), void *pre_data, char **argv)
+ 	data.pre_data = pre_data;
+ 	data.argv = argv;
+ 	data.fd = fds[1];
+-	data.buf = __cant_sleep() ? uml_kmalloc(PATH_MAX, UM_GFP_ATOMIC) :
++	data.buf = __uml_cant_sleep() ? uml_kmalloc(PATH_MAX, UM_GFP_ATOMIC) :
+ 					uml_kmalloc(PATH_MAX, UM_GFP_KERNEL);
+ 	pid = clone(helper_child, (void *) sp, CLONE_VM, &data);
+ 	if (pid < 0) {
+@@ -116,7 +116,7 @@ int run_helper_thread(int (*proc)(void *), void *arg, unsigned int flags,
+ 	unsigned long stack, sp;
+ 	int pid, status, err;
+ 
+-	stack = alloc_stack(0, __cant_sleep());
++	stack = alloc_stack(0, __uml_cant_sleep());
+ 	if (stack == 0)
+ 		return -ENOMEM;
+ 
+diff --git a/arch/um/os-Linux/util.c b/arch/um/os-Linux/util.c
+index 07327425d06ea..56d9589e1cd1f 100644
+--- a/arch/um/os-Linux/util.c
++++ b/arch/um/os-Linux/util.c
+@@ -166,23 +166,38 @@ __uml_setup("quiet", quiet_cmd_param,
+ "quiet\n"
+ "    Turns off information messages during boot.\n\n");
+ 
++/*
++ * The os_info/os_warn functions will be called by helper threads. These
++ * have a very limited stack size and using the libc formatting functions
++ * may overflow the stack.
++ * So pull in the kernel vscnprintf and use that instead with a fixed
++ * on-stack buffer.
++ */
++int vscnprintf(char *buf, size_t size, const char *fmt, va_list args);
++
+ void os_info(const char *fmt, ...)
+ {
++	char buf[256];
+ 	va_list list;
++	int len;
+ 
+ 	if (quiet_info)
+ 		return;
+ 
+ 	va_start(list, fmt);
+-	vfprintf(stderr, fmt, list);
++	len = vscnprintf(buf, sizeof(buf), fmt, list);
++	fwrite(buf, len, 1, stderr);
+ 	va_end(list);
+ }
+ 
+ void os_warn(const char *fmt, ...)
+ {
++	char buf[256];
+ 	va_list list;
++	int len;
+ 
+ 	va_start(list, fmt);
+-	vfprintf(stderr, fmt, list);
++	len = vscnprintf(buf, sizeof(buf), fmt, list);
++	fwrite(buf, len, 1, stderr);
+ 	va_end(list);
+ }
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index 814fe0d349b01..6f55609ba7067 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -379,7 +379,7 @@ config X86_CMOV
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+ 	default "64" if X86_64
+-	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8)
++	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCORE2 || MK7 || MK8)
+ 	default "5" if X86_32 && X86_CMPXCHG64
+ 	default "4"
+ 
+diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
+index f4a2e6d373b29..1e4eb3894ec4d 100644
+--- a/arch/x86/boot/compressed/ident_map_64.c
++++ b/arch/x86/boot/compressed/ident_map_64.c
+@@ -367,3 +367,8 @@ void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
+ 	 */
+ 	add_identity_map(address, end);
+ }
++
++void do_boot_nmi_trap(struct pt_regs *regs, unsigned long error_code)
++{
++	/* Empty handler to ignore NMI during early boot */
++}
+diff --git a/arch/x86/boot/compressed/idt_64.c b/arch/x86/boot/compressed/idt_64.c
+index 804a502ee0d28..eb30bb20c33b3 100644
+--- a/arch/x86/boot/compressed/idt_64.c
++++ b/arch/x86/boot/compressed/idt_64.c
+@@ -45,6 +45,7 @@ void load_stage2_idt(void)
+ 	boot_idt_desc.address = (unsigned long)boot_idt;
+ 
+ 	set_idt_entry(X86_TRAP_PF, boot_page_fault);
++	set_idt_entry(X86_TRAP_NMI, boot_nmi_trap);
+ 
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ 	set_idt_entry(X86_TRAP_VC, boot_stage2_vc);
+diff --git a/arch/x86/boot/compressed/idt_handlers_64.S b/arch/x86/boot/compressed/idt_handlers_64.S
+index 22890e199f5b4..4d03c8562f637 100644
+--- a/arch/x86/boot/compressed/idt_handlers_64.S
++++ b/arch/x86/boot/compressed/idt_handlers_64.S
+@@ -70,6 +70,7 @@ SYM_FUNC_END(\name)
+ 	.code64
+ 
+ EXCEPTION_HANDLER	boot_page_fault do_boot_page_fault error_code=1
++EXCEPTION_HANDLER	boot_nmi_trap do_boot_nmi_trap error_code=0
+ 
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ EXCEPTION_HANDLER	boot_stage1_vc do_vc_no_ghcb		error_code=1
+diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
+index d9a631c5973c7..0ccc327184831 100644
+--- a/arch/x86/boot/compressed/misc.h
++++ b/arch/x86/boot/compressed/misc.h
+@@ -156,6 +156,7 @@ extern struct desc_ptr boot_idt_desc;
+ 
+ /* IDT Entry Points */
+ void boot_page_fault(void);
++void boot_nmi_trap(void);
+ void boot_stage1_vc(void);
+ void boot_stage2_vc(void);
+ 
+diff --git a/arch/x86/include/asm/syscall_wrapper.h b/arch/x86/include/asm/syscall_wrapper.h
+index a84333adeef23..a507be3689275 100644
+--- a/arch/x86/include/asm/syscall_wrapper.h
++++ b/arch/x86/include/asm/syscall_wrapper.h
+@@ -58,12 +58,29 @@ extern long __ia32_sys_ni_syscall(const struct pt_regs *regs);
+ 		,,regs->di,,regs->si,,regs->dx				\
+ 		,,regs->r10,,regs->r8,,regs->r9)			\
+ 
++
++/* SYSCALL_PT_ARGS is Adapted from s390x */
++#define SYSCALL_PT_ARG6(m, t1, t2, t3, t4, t5, t6)			\
++	SYSCALL_PT_ARG5(m, t1, t2, t3, t4, t5), m(t6, (regs->bp))
++#define SYSCALL_PT_ARG5(m, t1, t2, t3, t4, t5)				\
++	SYSCALL_PT_ARG4(m, t1, t2, t3, t4),  m(t5, (regs->di))
++#define SYSCALL_PT_ARG4(m, t1, t2, t3, t4)				\
++	SYSCALL_PT_ARG3(m, t1, t2, t3),  m(t4, (regs->si))
++#define SYSCALL_PT_ARG3(m, t1, t2, t3)					\
++	SYSCALL_PT_ARG2(m, t1, t2), m(t3, (regs->dx))
++#define SYSCALL_PT_ARG2(m, t1, t2)					\
++	SYSCALL_PT_ARG1(m, t1), m(t2, (regs->cx))
++#define SYSCALL_PT_ARG1(m, t1) m(t1, (regs->bx))
++#define SYSCALL_PT_ARGS(x, ...) SYSCALL_PT_ARG##x(__VA_ARGS__)
++
++#define __SC_COMPAT_CAST(t, a)						\
++	(__typeof(__builtin_choose_expr(__TYPE_IS_L(t), 0, 0U)))	\
++	(unsigned int)a
++
+ /* Mapping of registers to parameters for syscalls on i386 */
+ #define SC_IA32_REGS_TO_ARGS(x, ...)					\
+-	__MAP(x,__SC_ARGS						\
+-	      ,,(unsigned int)regs->bx,,(unsigned int)regs->cx		\
+-	      ,,(unsigned int)regs->dx,,(unsigned int)regs->si		\
+-	      ,,(unsigned int)regs->di,,(unsigned int)regs->bp)
++	SYSCALL_PT_ARGS(x, __SC_COMPAT_CAST,				\
++			__MAP(x, __SC_TYPE, __VA_ARGS__))		\
+ 
+ #define __SYS_STUB0(abi, name)						\
+ 	long __##abi##_##name(const struct pt_regs *regs);		\
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 0b7c81389c50a..18a6ed2afca03 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -44,6 +44,7 @@
+ #include <linux/sync_core.h>
+ #include <linux/task_work.h>
+ #include <linux/hardirq.h>
++#include <linux/kexec.h>
+ 
+ #include <asm/intel-family.h>
+ #include <asm/processor.h>
+@@ -274,6 +275,7 @@ static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
+ 	struct llist_node *pending;
+ 	struct mce_evt_llist *l;
+ 	int apei_err = 0;
++	struct page *p;
+ 
+ 	/*
+ 	 * Allow instrumentation around external facilities usage. Not that it
+@@ -329,6 +331,20 @@ static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
+ 	if (!fake_panic) {
+ 		if (panic_timeout == 0)
+ 			panic_timeout = mca_cfg.panic_timeout;
++
++		/*
++		 * Kdump skips the poisoned page in order to avoid
++		 * touching the error bits again. Poison the page even
++		 * if the error is fatal and the machine is about to
++		 * panic.
++		 */
++		if (kexec_crash_loaded()) {
++			if (final && (final->status & MCI_STATUS_ADDRV)) {
++				p = pfn_to_online_page(final->addr >> PAGE_SHIFT);
++				if (p)
++					SetPageHWPoison(p);
++			}
++		}
+ 		panic(msg);
+ 	} else
+ 		pr_emerg(HW_ERR "Fake kernel panic: %s\n", msg);
+diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
+index 81cf4babbd0b4..3c379335ea477 100644
+--- a/arch/x86/kvm/mmu/page_track.c
++++ b/arch/x86/kvm/mmu/page_track.c
+@@ -35,7 +35,7 @@ int kvm_page_track_create_memslot(struct kvm_memory_slot *slot,
+ 
+ 	for (i = 0; i < KVM_PAGE_TRACK_MAX; i++) {
+ 		slot->arch.gfn_track[i] =
+-			kvcalloc(npages, sizeof(*slot->arch.gfn_track[i]),
++			__vcalloc(npages, sizeof(*slot->arch.gfn_track[i]),
+ 				 GFP_KERNEL_ACCOUNT);
+ 		if (!slot->arch.gfn_track[i])
+ 			goto track_free;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 13e4699a0744f..6c2bf7cd7aec6 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10826,14 +10826,14 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
+ 				      slot->base_gfn, level) + 1;
+ 
+ 		slot->arch.rmap[i] =
+-			kvcalloc(lpages, sizeof(*slot->arch.rmap[i]),
++			__vcalloc(lpages, sizeof(*slot->arch.rmap[i]),
+ 				 GFP_KERNEL_ACCOUNT);
+ 		if (!slot->arch.rmap[i])
+ 			goto out_free;
+ 		if (i == 0)
+ 			continue;
+ 
+-		linfo = kvcalloc(lpages, sizeof(*linfo), GFP_KERNEL_ACCOUNT);
++		linfo = __vcalloc(lpages, sizeof(*linfo), GFP_KERNEL_ACCOUNT);
+ 		if (!linfo)
+ 			goto out_free;
+ 
+diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
+index 968d7005f4a72..f50cc210a9818 100644
+--- a/arch/x86/mm/ident_map.c
++++ b/arch/x86/mm/ident_map.c
+@@ -26,18 +26,31 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
+ 	for (; addr < end; addr = next) {
+ 		pud_t *pud = pud_page + pud_index(addr);
+ 		pmd_t *pmd;
++		bool use_gbpage;
+ 
+ 		next = (addr & PUD_MASK) + PUD_SIZE;
+ 		if (next > end)
+ 			next = end;
+ 
+-		if (info->direct_gbpages) {
+-			pud_t pudval;
++		/* if this is already a gbpage, this portion is already mapped */
++		if (pud_large(*pud))
++			continue;
++
++		/* Is using a gbpage allowed? */
++		use_gbpage = info->direct_gbpages;
+ 
+-			if (pud_present(*pud))
+-				continue;
++		/* Don't use gbpage if it maps more than the requested region. */
++		/* at the begining: */
++		use_gbpage &= ((addr & ~PUD_MASK) == 0);
++		/* ... or at the end: */
++		use_gbpage &= ((next & ~PUD_MASK) == 0);
++
++		/* Never overwrite existing mappings */
++		use_gbpage &= !pud_present(*pud);
++
++		if (use_gbpage) {
++			pud_t pudval;
+ 
+-			addr &= PUD_MASK;
+ 			pudval = __pud((addr - info->offset) | info->page_flag);
+ 			set_pud(pud, pudval);
+ 			continue;
+diff --git a/block/bio.c b/block/bio.c
+index 6d6e7b96b0021..6f7a1aa9ea225 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -770,7 +770,7 @@ static bool bio_try_merge_hw_seg(struct request_queue *q, struct bio *bio,
+ 
+ 	if ((addr1 | mask) != (addr2 | mask))
+ 		return false;
+-	if (bv->bv_len + len > queue_max_segment_size(q))
++	if (len > queue_max_segment_size(q) - bv->bv_len)
+ 		return false;
+ 	return __bio_try_merge_page(bio, page, len, offset, same_page);
+ }
+@@ -954,7 +954,7 @@ void bio_release_pages(struct bio *bio, bool mark_dirty)
+ 		return;
+ 
+ 	bio_for_each_segment_all(bvec, bio, iter_all) {
+-		if (mark_dirty && !PageCompound(bvec->bv_page))
++		if (mark_dirty)
+ 			set_page_dirty_lock(bvec->bv_page);
+ 		put_page(bvec->bv_page);
+ 	}
+@@ -1326,8 +1326,7 @@ void bio_set_pages_dirty(struct bio *bio)
+ 	struct bvec_iter_all iter_all;
+ 
+ 	bio_for_each_segment_all(bvec, bio, iter_all) {
+-		if (!PageCompound(bvec->bv_page))
+-			set_page_dirty_lock(bvec->bv_page);
++		set_page_dirty_lock(bvec->bv_page);
+ 	}
+ }
+ 
+@@ -1375,7 +1374,7 @@ void bio_check_pages_dirty(struct bio *bio)
+ 	struct bvec_iter_all iter_all;
+ 
+ 	bio_for_each_segment_all(bvec, bio, iter_all) {
+-		if (!PageDirty(bvec->bv_page) && !PageCompound(bvec->bv_page))
++		if (!PageDirty(bvec->bv_page))
+ 			goto defer;
+ 	}
+ 
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 7ba7c4e4e4c93..63a8fb456b283 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1296,6 +1296,13 @@ static bool iocg_kick_delay(struct ioc_gq *iocg, struct ioc_now *now)
+ 
+ 	lockdep_assert_held(&iocg->waitq.lock);
+ 
++	/*
++	 * If the delay is set by another CPU, we may be in the past. No need to
++	 * change anything if so. This avoids decay calculation underflow.
++	 */
++	if (time_before64(now->now, iocg->delay_at))
++		return false;
++
+ 	/* calculate the current delay in effect - 1/2 every second */
+ 	tdelta = now->now - iocg->delay_at;
+ 	if (iocg->delay)
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index e153a36c9ba3a..a7a31d7090aed 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1188,6 +1188,22 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ 	wait->flags &= ~WQ_FLAG_EXCLUSIVE;
+ 	__add_wait_queue(wq, wait);
+ 
++	/*
++	 * Add one explicit barrier since blk_mq_get_driver_tag() may
++	 * not imply barrier in case of failure.
++	 *
++	 * Order adding us to wait queue and allocating driver tag.
++	 *
++	 * The pair is the one implied in sbitmap_queue_wake_up() which
++	 * orders clearing sbitmap tag bits and waitqueue_active() in
++	 * __sbitmap_queue_wake_up(), since waitqueue_active() is lockless
++	 *
++	 * Otherwise, re-order of adding wait queue and getting driver tag
++	 * may cause __sbitmap_queue_wake_up() to wake up nothing because
++	 * the waitqueue_active() may not observe us in wait queue.
++	 */
++	smp_mb();
++
+ 	/*
+ 	 * It's possible that a tag was freed in the window between the
+ 	 * allocation failure and adding the hardware queue to the wait
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 5d422e725b267..bb03bed14f740 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -258,6 +258,7 @@ static struct crypto_larval *__crypto_register_alg(struct crypto_alg *alg)
+ 		}
+ 
+ 		if (!strcmp(q->cra_driver_name, alg->cra_name) ||
++		    !strcmp(q->cra_driver_name, alg->cra_driver_name) ||
+ 		    !strcmp(q->cra_name, alg->cra_driver_name))
+ 			goto err;
+ 	}
+diff --git a/drivers/acpi/acpi_extlog.c b/drivers/acpi/acpi_extlog.c
+index 088db2356998f..0a84d5afd37c1 100644
+--- a/drivers/acpi/acpi_extlog.c
++++ b/drivers/acpi/acpi_extlog.c
+@@ -308,9 +308,10 @@ static int __init extlog_init(void)
+ static void __exit extlog_exit(void)
+ {
+ 	mce_unregister_decode_chain(&extlog_mce_dec);
+-	((struct extlog_l1_head *)extlog_l1_addr)->flags &= ~FLAG_OS_OPTIN;
+-	if (extlog_l1_addr)
++	if (extlog_l1_addr) {
++		((struct extlog_l1_head *)extlog_l1_addr)->flags &= ~FLAG_OS_OPTIN;
+ 		acpi_os_unmap_iomem(extlog_l1_addr, l1_size);
++	}
+ 	if (elog_addr)
+ 		acpi_os_unmap_iomem(elog_addr, elog_size);
+ 	release_mem_region(elog_base, elog_size);
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index 9d384656323a9..b2364ac455f34 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -568,6 +568,15 @@ static const struct dmi_system_id video_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3350"),
+ 		},
+ 	},
++	{
++	 .callback = video_set_report_key_events,
++	 .driver_data = (void *)((uintptr_t)REPORT_BRIGHTNESS_KEY_EVENTS),
++	 .ident = "COLORFUL X15 AT 23",
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "COLORFUL"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "X15 AT 23"),
++		},
++	},
+ 	/*
+ 	 * Some machines change the brightness themselves when a brightness
+ 	 * hotkey gets pressed, despite us telling them not to. In this case
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 8678e162181f4..160606af8b4f5 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -99,6 +99,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes)
+ 	return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
+ }
+ 
++/*
++ * A platform may describe one error source for the handling of synchronous
++ * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
++ * or External Interrupt). On x86, the HEST notifications are always
++ * asynchronous, so only SEA on ARM is delivered as a synchronous
++ * notification.
++ */
++static inline bool is_hest_sync_notify(struct ghes *ghes)
++{
++	u8 notify_type = ghes->generic->notify.type;
++
++	return notify_type == ACPI_HEST_NOTIFY_SEA;
++}
++
+ /*
+  * This driver isn't really modular, however for the time being,
+  * continuing to use module_param is the easiest way to remain
+@@ -461,7 +475,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
+ }
+ 
+ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
+-				       int sev)
++				       int sev, bool sync)
+ {
+ 	int flags = -1;
+ 	int sec_sev = ghes_severity(gdata->error_severity);
+@@ -475,7 +489,7 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
+ 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
+ 		flags = MF_SOFT_OFFLINE;
+ 	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
+-		flags = 0;
++		flags = sync ? MF_ACTION_REQUIRED : 0;
+ 
+ 	if (flags != -1)
+ 		return ghes_do_memory_failure(mem_err->physical_addr, flags);
+@@ -483,9 +497,11 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
+ 	return false;
+ }
+ 
+-static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev)
++static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
++				       int sev, bool sync)
+ {
+ 	struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
++	int flags = sync ? MF_ACTION_REQUIRED : 0;
+ 	bool queued = false;
+ 	int sec_sev, i;
+ 	char *p;
+@@ -510,7 +526,7 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int s
+ 		 * and don't filter out 'corrected' error here.
+ 		 */
+ 		if (is_cache && has_pa) {
+-			queued = ghes_do_memory_failure(err_info->physical_fault_addr, 0);
++			queued = ghes_do_memory_failure(err_info->physical_fault_addr, flags);
+ 			p += err_info->length;
+ 			continue;
+ 		}
+@@ -631,6 +647,7 @@ static bool ghes_do_proc(struct ghes *ghes,
+ 	const guid_t *fru_id = &guid_null;
+ 	char *fru_text = "";
+ 	bool queued = false;
++	bool sync = is_hest_sync_notify(ghes);
+ 
+ 	sev = ghes_severity(estatus->error_severity);
+ 	apei_estatus_for_each_section(estatus, gdata) {
+@@ -648,13 +665,13 @@ static bool ghes_do_proc(struct ghes *ghes,
+ 			ghes_edac_report_mem_error(sev, mem_err);
+ 
+ 			arch_apei_report_mem_error(sev, mem_err);
+-			queued = ghes_handle_memory_failure(gdata, sev);
++			queued = ghes_handle_memory_failure(gdata, sev, sync);
+ 		}
+ 		else if (guid_equal(sec_type, &CPER_SEC_PCIE)) {
+ 			ghes_handle_aer(gdata);
+ 		}
+ 		else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
+-			queued = ghes_handle_arm_hw_error(gdata, sev);
++			queued = ghes_handle_arm_hw_error(gdata, sev, sync);
+ 		} else {
+ 			void *err = acpi_hest_get_payload(gdata);
+ 
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 7db748cfcbc67..708c91215ec06 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -836,6 +836,16 @@ binder_enqueue_thread_work_ilocked(struct binder_thread *thread,
+ {
+ 	WARN_ON(!list_empty(&thread->waiting_thread_node));
+ 	binder_enqueue_work_ilocked(work, &thread->todo);
++
++	/* (e)poll-based threads require an explicit wakeup signal when
++	 * queuing their own work; they rely on these events to consume
++	 * messages without I/O block. Without it, threads risk waiting
++	 * indefinitely without handling the work.
++	 */
++	if (thread->looper & BINDER_LOOPER_STATE_POLL &&
++	    thread->pid == current->pid && !thread->process_todo)
++		wake_up_interruptible_sync(&thread->wait);
++
+ 	thread->process_todo = true;
+ }
+ 
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 2308c2be85a18..48130b2543966 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -1607,7 +1607,7 @@ static unsigned int ata_eh_analyze_tf(struct ata_queued_cmd *qc,
+ 	}
+ 
+ 	if (qc->flags & ATA_QCFLAG_SENSE_VALID) {
+-		int ret = scsi_check_sense(qc->scsicmd);
++		enum scsi_disposition ret = scsi_check_sense(qc->scsicmd);
+ 		/*
+ 		 * SUCCESS here means that the sense code could be
+ 		 * evaluated and should be passed to the upper layers
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index a217b50439e72..e616e33c8a209 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -2936,6 +2936,8 @@ open_card_ubr0(struct idt77252_dev *card)
+ 	vc->scq = alloc_scq(card, vc->class);
+ 	if (!vc->scq) {
+ 		printk("%s: can't get SCQ.\n", card->name);
++		kfree(card->vcs[0]);
++		card->vcs[0] = NULL;
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 8a90f08c9682b..f5a032b6b8d69 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -958,7 +958,7 @@ static int __init genpd_power_off_unused(void)
+ 
+ 	return 0;
+ }
+-late_initcall(genpd_power_off_unused);
++late_initcall_sync(genpd_power_off_unused);
+ 
+ #ifdef CONFIG_PM_SLEEP
+ 
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 1dbaaddf540e1..fbc57c4fcdd01 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -16,6 +16,7 @@
+  */
+ 
+ #define pr_fmt(fmt) "PM: " fmt
++#define dev_fmt pr_fmt
+ 
+ #include <linux/device.h>
+ #include <linux/export.h>
+@@ -449,8 +450,8 @@ static void pm_dev_dbg(struct device *dev, pm_message_t state, const char *info)
+ static void pm_dev_err(struct device *dev, pm_message_t state, const char *info,
+ 			int error)
+ {
+-	pr_err("Device %s failed to %s%s: error %d\n",
+-	       dev_name(dev), pm_verb(state.event), info, error);
++	dev_err(dev, "failed to %s%s: error %d\n", pm_verb(state.event), info,
++		error);
+ }
+ 
+ static void dpm_show_time(ktime_t starttime, pm_message_t state, int error,
+@@ -582,7 +583,7 @@ bool dev_pm_skip_resume(struct device *dev)
+ }
+ 
+ /**
+- * device_resume_noirq - Execute a "noirq resume" callback for given device.
++ * __device_resume_noirq - Execute a "noirq resume" callback for given device.
+  * @dev: Device to handle.
+  * @state: PM transition of the system being carried out.
+  * @async: If true, the device is being resumed asynchronously.
+@@ -590,7 +591,7 @@ bool dev_pm_skip_resume(struct device *dev)
+  * The driver of @dev will not receive interrupts while this function is being
+  * executed.
+  */
+-static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
++static void __device_resume_noirq(struct device *dev, pm_message_t state, bool async)
+ {
+ 	pm_callback_t callback = NULL;
+ 	const char *info = NULL;
+@@ -658,7 +659,13 @@ static int device_resume_noirq(struct device *dev, pm_message_t state, bool asyn
+ Out:
+ 	complete_all(&dev->power.completion);
+ 	TRACE_RESUME(error);
+-	return error;
++
++	if (error) {
++		suspend_stats.failed_resume_noirq++;
++		dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
++		dpm_save_failed_dev(dev_name(dev));
++		pm_dev_err(dev, state, async ? " async noirq" : " noirq", error);
++	}
+ }
+ 
+ static bool is_async(struct device *dev)
+@@ -671,27 +678,35 @@ static bool dpm_async_fn(struct device *dev, async_func_t func)
+ {
+ 	reinit_completion(&dev->power.completion);
+ 
+-	if (is_async(dev)) {
+-		get_device(dev);
+-		async_schedule_dev(func, dev);
++	if (!is_async(dev))
++		return false;
++
++	get_device(dev);
++
++	if (async_schedule_dev_nocall(func, dev))
+ 		return true;
+-	}
++
++	put_device(dev);
+ 
+ 	return false;
+ }
+ 
+ static void async_resume_noirq(void *data, async_cookie_t cookie)
+ {
+-	struct device *dev = (struct device *)data;
+-	int error;
+-
+-	error = device_resume_noirq(dev, pm_transition, true);
+-	if (error)
+-		pm_dev_err(dev, pm_transition, " async", error);
++	struct device *dev = data;
+ 
++	__device_resume_noirq(dev, pm_transition, true);
+ 	put_device(dev);
+ }
+ 
++static void device_resume_noirq(struct device *dev)
++{
++	if (dpm_async_fn(dev, async_resume_noirq))
++		return;
++
++	__device_resume_noirq(dev, pm_transition, false);
++}
++
+ static void dpm_noirq_resume_devices(pm_message_t state)
+ {
+ 	struct device *dev;
+@@ -701,34 +716,18 @@ static void dpm_noirq_resume_devices(pm_message_t state)
+ 	mutex_lock(&dpm_list_mtx);
+ 	pm_transition = state;
+ 
+-	/*
+-	 * Advanced the async threads upfront,
+-	 * in case the starting of async threads is
+-	 * delayed by non-async resuming devices.
+-	 */
+-	list_for_each_entry(dev, &dpm_noirq_list, power.entry)
+-		dpm_async_fn(dev, async_resume_noirq);
+-
+ 	while (!list_empty(&dpm_noirq_list)) {
+ 		dev = to_device(dpm_noirq_list.next);
+ 		get_device(dev);
+ 		list_move_tail(&dev->power.entry, &dpm_late_early_list);
++
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+-		if (!is_async(dev)) {
+-			int error;
++		device_resume_noirq(dev);
+ 
+-			error = device_resume_noirq(dev, state, false);
+-			if (error) {
+-				suspend_stats.failed_resume_noirq++;
+-				dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
+-				dpm_save_failed_dev(dev_name(dev));
+-				pm_dev_err(dev, state, " noirq", error);
+-			}
+-		}
++		put_device(dev);
+ 
+ 		mutex_lock(&dpm_list_mtx);
+-		put_device(dev);
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+ 	async_synchronize_full();
+@@ -754,14 +753,14 @@ void dpm_resume_noirq(pm_message_t state)
+ }
+ 
+ /**
+- * device_resume_early - Execute an "early resume" callback for given device.
++ * __device_resume_early - Execute an "early resume" callback for given device.
+  * @dev: Device to handle.
+  * @state: PM transition of the system being carried out.
+  * @async: If true, the device is being resumed asynchronously.
+  *
+  * Runtime PM is disabled for @dev while this function is being executed.
+  */
+-static int device_resume_early(struct device *dev, pm_message_t state, bool async)
++static void __device_resume_early(struct device *dev, pm_message_t state, bool async)
+ {
+ 	pm_callback_t callback = NULL;
+ 	const char *info = NULL;
+@@ -814,21 +813,31 @@ static int device_resume_early(struct device *dev, pm_message_t state, bool asyn
+ 
+ 	pm_runtime_enable(dev);
+ 	complete_all(&dev->power.completion);
+-	return error;
++
++	if (error) {
++		suspend_stats.failed_resume_early++;
++		dpm_save_failed_step(SUSPEND_RESUME_EARLY);
++		dpm_save_failed_dev(dev_name(dev));
++		pm_dev_err(dev, state, async ? " async early" : " early", error);
++	}
+ }
+ 
+ static void async_resume_early(void *data, async_cookie_t cookie)
+ {
+-	struct device *dev = (struct device *)data;
+-	int error;
+-
+-	error = device_resume_early(dev, pm_transition, true);
+-	if (error)
+-		pm_dev_err(dev, pm_transition, " async", error);
++	struct device *dev = data;
+ 
++	__device_resume_early(dev, pm_transition, true);
+ 	put_device(dev);
+ }
+ 
++static void device_resume_early(struct device *dev)
++{
++	if (dpm_async_fn(dev, async_resume_early))
++		return;
++
++	__device_resume_early(dev, pm_transition, false);
++}
++
+ /**
+  * dpm_resume_early - Execute "early resume" callbacks for all devices.
+  * @state: PM transition of the system being carried out.
+@@ -842,33 +851,18 @@ void dpm_resume_early(pm_message_t state)
+ 	mutex_lock(&dpm_list_mtx);
+ 	pm_transition = state;
+ 
+-	/*
+-	 * Advanced the async threads upfront,
+-	 * in case the starting of async threads is
+-	 * delayed by non-async resuming devices.
+-	 */
+-	list_for_each_entry(dev, &dpm_late_early_list, power.entry)
+-		dpm_async_fn(dev, async_resume_early);
+-
+ 	while (!list_empty(&dpm_late_early_list)) {
+ 		dev = to_device(dpm_late_early_list.next);
+ 		get_device(dev);
+ 		list_move_tail(&dev->power.entry, &dpm_suspended_list);
++
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+-		if (!is_async(dev)) {
+-			int error;
++		device_resume_early(dev);
+ 
+-			error = device_resume_early(dev, state, false);
+-			if (error) {
+-				suspend_stats.failed_resume_early++;
+-				dpm_save_failed_step(SUSPEND_RESUME_EARLY);
+-				dpm_save_failed_dev(dev_name(dev));
+-				pm_dev_err(dev, state, " early", error);
+-			}
+-		}
+-		mutex_lock(&dpm_list_mtx);
+ 		put_device(dev);
++
++		mutex_lock(&dpm_list_mtx);
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+ 	async_synchronize_full();
+@@ -888,12 +882,12 @@ void dpm_resume_start(pm_message_t state)
+ EXPORT_SYMBOL_GPL(dpm_resume_start);
+ 
+ /**
+- * device_resume - Execute "resume" callbacks for given device.
++ * __device_resume - Execute "resume" callbacks for given device.
+  * @dev: Device to handle.
+  * @state: PM transition of the system being carried out.
+  * @async: If true, the device is being resumed asynchronously.
+  */
+-static int device_resume(struct device *dev, pm_message_t state, bool async)
++static void __device_resume(struct device *dev, pm_message_t state, bool async)
+ {
+ 	pm_callback_t callback = NULL;
+ 	const char *info = NULL;
+@@ -975,20 +969,30 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
+ 
+ 	TRACE_RESUME(error);
+ 
+-	return error;
++	if (error) {
++		suspend_stats.failed_resume++;
++		dpm_save_failed_step(SUSPEND_RESUME);
++		dpm_save_failed_dev(dev_name(dev));
++		pm_dev_err(dev, state, async ? " async" : "", error);
++	}
+ }
+ 
+ static void async_resume(void *data, async_cookie_t cookie)
+ {
+-	struct device *dev = (struct device *)data;
+-	int error;
++	struct device *dev = data;
+ 
+-	error = device_resume(dev, pm_transition, true);
+-	if (error)
+-		pm_dev_err(dev, pm_transition, " async", error);
++	__device_resume(dev, pm_transition, true);
+ 	put_device(dev);
+ }
+ 
++static void device_resume(struct device *dev)
++{
++	if (dpm_async_fn(dev, async_resume))
++		return;
++
++	__device_resume(dev, pm_transition, false);
++}
++
+ /**
+  * dpm_resume - Execute "resume" callbacks for non-sysdev devices.
+  * @state: PM transition of the system being carried out.
+@@ -1008,30 +1012,25 @@ void dpm_resume(pm_message_t state)
+ 	pm_transition = state;
+ 	async_error = 0;
+ 
+-	list_for_each_entry(dev, &dpm_suspended_list, power.entry)
+-		dpm_async_fn(dev, async_resume);
+-
+ 	while (!list_empty(&dpm_suspended_list)) {
+ 		dev = to_device(dpm_suspended_list.next);
++
+ 		get_device(dev);
+-		if (!is_async(dev)) {
+-			int error;
+ 
+-			mutex_unlock(&dpm_list_mtx);
++		mutex_unlock(&dpm_list_mtx);
+ 
+-			error = device_resume(dev, state, false);
+-			if (error) {
+-				suspend_stats.failed_resume++;
+-				dpm_save_failed_step(SUSPEND_RESUME);
+-				dpm_save_failed_dev(dev_name(dev));
+-				pm_dev_err(dev, state, "", error);
+-			}
++		device_resume(dev);
++
++		mutex_lock(&dpm_list_mtx);
+ 
+-			mutex_lock(&dpm_list_mtx);
+-		}
+ 		if (!list_empty(&dev->power.entry))
+ 			list_move_tail(&dev->power.entry, &dpm_prepared_list);
++
++		mutex_unlock(&dpm_list_mtx);
++
+ 		put_device(dev);
++
++		mutex_lock(&dpm_list_mtx);
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+ 	async_synchronize_full();
+@@ -1109,14 +1108,16 @@ void dpm_complete(pm_message_t state)
+ 		get_device(dev);
+ 		dev->power.is_prepared = false;
+ 		list_move(&dev->power.entry, &list);
++
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+ 		trace_device_pm_callback_start(dev, "", state.event);
+ 		device_complete(dev, state);
+ 		trace_device_pm_callback_end(dev, 0);
+ 
+-		mutex_lock(&dpm_list_mtx);
+ 		put_device(dev);
++
++		mutex_lock(&dpm_list_mtx);
+ 	}
+ 	list_splice(&list, &dpm_list);
+ 	mutex_unlock(&dpm_list_mtx);
+@@ -1262,7 +1263,7 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
+ 
+ static void async_suspend_noirq(void *data, async_cookie_t cookie)
+ {
+-	struct device *dev = (struct device *)data;
++	struct device *dev = data;
+ 	int error;
+ 
+ 	error = __device_suspend_noirq(dev, pm_transition, true);
+@@ -1301,17 +1302,21 @@ static int dpm_noirq_suspend_devices(pm_message_t state)
+ 		error = device_suspend_noirq(dev);
+ 
+ 		mutex_lock(&dpm_list_mtx);
++
+ 		if (error) {
+ 			pm_dev_err(dev, state, " noirq", error);
+ 			dpm_save_failed_dev(dev_name(dev));
+-			put_device(dev);
+-			break;
+-		}
+-		if (!list_empty(&dev->power.entry))
++		} else if (!list_empty(&dev->power.entry)) {
+ 			list_move(&dev->power.entry, &dpm_noirq_list);
++		}
++
++		mutex_unlock(&dpm_list_mtx);
++
+ 		put_device(dev);
+ 
+-		if (async_error)
++		mutex_lock(&dpm_list_mtx);
++
++		if (error || async_error)
+ 			break;
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+@@ -1441,7 +1446,7 @@ static int __device_suspend_late(struct device *dev, pm_message_t state, bool as
+ 
+ static void async_suspend_late(void *data, async_cookie_t cookie)
+ {
+-	struct device *dev = (struct device *)data;
++	struct device *dev = data;
+ 	int error;
+ 
+ 	error = __device_suspend_late(dev, pm_transition, true);
+@@ -1478,23 +1483,28 @@ int dpm_suspend_late(pm_message_t state)
+ 		struct device *dev = to_device(dpm_suspended_list.prev);
+ 
+ 		get_device(dev);
++
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+ 		error = device_suspend_late(dev);
+ 
+ 		mutex_lock(&dpm_list_mtx);
++
+ 		if (!list_empty(&dev->power.entry))
+ 			list_move(&dev->power.entry, &dpm_late_early_list);
+ 
+ 		if (error) {
+ 			pm_dev_err(dev, state, " late", error);
+ 			dpm_save_failed_dev(dev_name(dev));
+-			put_device(dev);
+-			break;
+ 		}
++
++		mutex_unlock(&dpm_list_mtx);
++
+ 		put_device(dev);
+ 
+-		if (async_error)
++		mutex_lock(&dpm_list_mtx);
++
++		if (error || async_error)
+ 			break;
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+@@ -1712,7 +1722,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 
+ static void async_suspend(void *data, async_cookie_t cookie)
+ {
+-	struct device *dev = (struct device *)data;
++	struct device *dev = data;
+ 	int error;
+ 
+ 	error = __device_suspend(dev, pm_transition, true);
+@@ -1754,21 +1764,27 @@ int dpm_suspend(pm_message_t state)
+ 		struct device *dev = to_device(dpm_prepared_list.prev);
+ 
+ 		get_device(dev);
++
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+ 		error = device_suspend(dev);
+ 
+ 		mutex_lock(&dpm_list_mtx);
++
+ 		if (error) {
+ 			pm_dev_err(dev, state, "", error);
+ 			dpm_save_failed_dev(dev_name(dev));
+-			put_device(dev);
+-			break;
+-		}
+-		if (!list_empty(&dev->power.entry))
++		} else if (!list_empty(&dev->power.entry)) {
+ 			list_move(&dev->power.entry, &dpm_suspended_list);
++		}
++
++		mutex_unlock(&dpm_list_mtx);
++
+ 		put_device(dev);
+-		if (async_error)
++
++		mutex_lock(&dpm_list_mtx);
++
++		if (error || async_error)
+ 			break;
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+@@ -1881,10 +1897,11 @@ int dpm_prepare(pm_message_t state)
+ 	device_block_probing();
+ 
+ 	mutex_lock(&dpm_list_mtx);
+-	while (!list_empty(&dpm_list)) {
++	while (!list_empty(&dpm_list) && !error) {
+ 		struct device *dev = to_device(dpm_list.next);
+ 
+ 		get_device(dev);
++
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+ 		trace_device_pm_callback_start(dev, "", state.event);
+@@ -1892,21 +1909,23 @@ int dpm_prepare(pm_message_t state)
+ 		trace_device_pm_callback_end(dev, error);
+ 
+ 		mutex_lock(&dpm_list_mtx);
+-		if (error) {
+-			if (error == -EAGAIN) {
+-				put_device(dev);
+-				error = 0;
+-				continue;
+-			}
+-			pr_info("Device %s not prepared for power transition: code %d\n",
+-				dev_name(dev), error);
+-			put_device(dev);
+-			break;
++
++		if (!error) {
++			dev->power.is_prepared = true;
++			if (!list_empty(&dev->power.entry))
++				list_move_tail(&dev->power.entry, &dpm_prepared_list);
++		} else if (error == -EAGAIN) {
++			error = 0;
++		} else {
++			dev_info(dev, "not prepared for power transition: code %d\n",
++				 error);
+ 		}
+-		dev->power.is_prepared = true;
+-		if (!list_empty(&dev->power.entry))
+-			list_move_tail(&dev->power.entry, &dpm_prepared_list);
++
++		mutex_unlock(&dpm_list_mtx);
++
+ 		put_device(dev);
++
++		mutex_lock(&dpm_list_mtx);
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+ 	trace_suspend_resume(TPS("dpm_prepare"), state.event, false);
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index fbbc3ed143f27..f5c9e6629f0c7 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1479,6 +1479,28 @@ void pm_runtime_enable(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(pm_runtime_enable);
+ 
++static void pm_runtime_disable_action(void *data)
++{
++	pm_runtime_dont_use_autosuspend(data);
++	pm_runtime_disable(data);
++}
++
++/**
++ * devm_pm_runtime_enable - devres-enabled version of pm_runtime_enable.
++ *
++ * NOTE: this will also handle calling pm_runtime_dont_use_autosuspend() for
++ * you at driver exit time if needed.
++ *
++ * @dev: Device to handle.
++ */
++int devm_pm_runtime_enable(struct device *dev)
++{
++	pm_runtime_enable(dev);
++
++	return devm_add_action_or_reset(dev, pm_runtime_disable_action, dev);
++}
++EXPORT_SYMBOL_GPL(devm_pm_runtime_enable);
++
+ /**
+  * pm_runtime_forbid - Block runtime PM of a device.
+  * @dev: Device to handle.
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index b0f7930524ba0..5b102d333a410 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -3517,14 +3517,15 @@ static bool rbd_lock_add_request(struct rbd_img_request *img_req)
+ static void rbd_lock_del_request(struct rbd_img_request *img_req)
+ {
+ 	struct rbd_device *rbd_dev = img_req->rbd_dev;
+-	bool need_wakeup;
++	bool need_wakeup = false;
+ 
+ 	lockdep_assert_held(&rbd_dev->lock_rwsem);
+ 	spin_lock(&rbd_dev->lock_lists_lock);
+-	rbd_assert(!list_empty(&img_req->lock_item));
+-	list_del_init(&img_req->lock_item);
+-	need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING &&
+-		       list_empty(&rbd_dev->running_list));
++	if (!list_empty(&img_req->lock_item)) {
++		list_del_init(&img_req->lock_item);
++		need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING &&
++			       list_empty(&rbd_dev->running_list));
++	}
+ 	spin_unlock(&rbd_dev->lock_lists_lock);
+ 	if (need_wakeup)
+ 		complete(&rbd_dev->releasing_wait);
+@@ -3907,14 +3908,19 @@ static void wake_lock_waiters(struct rbd_device *rbd_dev, int result)
+ 		return;
+ 	}
+ 
+-	list_for_each_entry(img_req, &rbd_dev->acquiring_list, lock_item) {
++	while (!list_empty(&rbd_dev->acquiring_list)) {
++		img_req = list_first_entry(&rbd_dev->acquiring_list,
++					   struct rbd_img_request, lock_item);
+ 		mutex_lock(&img_req->state_mutex);
+ 		rbd_assert(img_req->state == RBD_IMG_EXCLUSIVE_LOCK);
++		if (!result)
++			list_move_tail(&img_req->lock_item,
++				       &rbd_dev->running_list);
++		else
++			list_del_init(&img_req->lock_item);
+ 		rbd_img_schedule(img_req, result);
+ 		mutex_unlock(&img_req->state_mutex);
+ 	}
+-
+-	list_splice_tail_init(&rbd_dev->acquiring_list, &rbd_dev->running_list);
+ }
+ 
+ static bool locker_equal(const struct ceph_locker *lhs,
+diff --git a/drivers/block/rnbd/rnbd-srv.c b/drivers/block/rnbd/rnbd-srv.c
+index e1bc8b4cd5929..9c5d52335e17c 100644
+--- a/drivers/block/rnbd/rnbd-srv.c
++++ b/drivers/block/rnbd/rnbd-srv.c
+@@ -591,6 +591,7 @@ static char *rnbd_srv_get_full_path(struct rnbd_srv_session *srv_sess,
+ {
+ 	char *full_path;
+ 	char *a, *b;
++	int len;
+ 
+ 	full_path = kmalloc(PATH_MAX, GFP_KERNEL);
+ 	if (!full_path)
+@@ -602,19 +603,19 @@ static char *rnbd_srv_get_full_path(struct rnbd_srv_session *srv_sess,
+ 	 */
+ 	a = strnstr(dev_search_path, "%SESSNAME%", sizeof(dev_search_path));
+ 	if (a) {
+-		int len = a - dev_search_path;
++		len = a - dev_search_path;
+ 
+ 		len = snprintf(full_path, PATH_MAX, "%.*s/%s/%s", len,
+ 			       dev_search_path, srv_sess->sessname, dev_name);
+-		if (len >= PATH_MAX) {
+-			pr_err("Too long path: %s, %s, %s\n",
+-			       dev_search_path, srv_sess->sessname, dev_name);
+-			kfree(full_path);
+-			return ERR_PTR(-EINVAL);
+-		}
+ 	} else {
+-		snprintf(full_path, PATH_MAX, "%s/%s",
+-			 dev_search_path, dev_name);
++		len = snprintf(full_path, PATH_MAX, "%s/%s",
++			       dev_search_path, dev_name);
++	}
++	if (len >= PATH_MAX) {
++		pr_err("Too long path: %s, %s, %s\n",
++		       dev_search_path, srv_sess->sessname, dev_name);
++		kfree(full_path);
++		return ERR_PTR(-EINVAL);
+ 	}
+ 
+ 	/* eliminitate duplicated slashes */
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index bc0850d3f7d28..6e0c0762fbabf 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -1814,6 +1814,7 @@ static const struct qca_device_data qca_soc_data_wcn3998 = {
+ static const struct qca_device_data qca_soc_data_qca6390 = {
+ 	.soc_type = QCA_QCA6390,
+ 	.num_vregs = 0,
++	.capabilities = QCA_CAP_WIDEBAND_SPEECH | QCA_CAP_VALID_LE_STATES,
+ };
+ 
+ static void qca_power_shutdown(struct hci_uart *hu)
+diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
+index 614dd287cb4ff..49c0f5ad0b73f 100644
+--- a/drivers/bus/mhi/host/main.c
++++ b/drivers/bus/mhi/host/main.c
+@@ -569,6 +569,8 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+ 			mhi_del_ring_element(mhi_cntrl, tre_ring);
+ 			local_rp = tre_ring->rp;
+ 
++			read_unlock_bh(&mhi_chan->lock);
++
+ 			/* notify client */
+ 			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+ 
+@@ -591,6 +593,8 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+ 					kfree(buf_info->cb_buf);
+ 				}
+ 			}
++
++			read_lock_bh(&mhi_chan->lock);
+ 		}
+ 		break;
+ 	} /* CC_EOT */
+diff --git a/drivers/bus/moxtet.c b/drivers/bus/moxtet.c
+index b20fdcbd035b2..34377195bf877 100644
+--- a/drivers/bus/moxtet.c
++++ b/drivers/bus/moxtet.c
+@@ -832,6 +832,12 @@ static int moxtet_remove(struct spi_device *spi)
+ 	return 0;
+ }
+ 
++static const struct spi_device_id moxtet_spi_ids[] = {
++	{ "moxtet" },
++	{ },
++};
++MODULE_DEVICE_TABLE(spi, moxtet_spi_ids);
++
+ static const struct of_device_id moxtet_dt_ids[] = {
+ 	{ .compatible = "cznic,moxtet" },
+ 	{},
+@@ -843,6 +849,7 @@ static struct spi_driver moxtet_spi_driver = {
+ 		.name		= "moxtet",
+ 		.of_match_table = moxtet_dt_ids,
+ 	},
++	.id_table	= moxtet_spi_ids,
+ 	.probe		= moxtet_probe,
+ 	.remove		= moxtet_remove,
+ };
+diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
+index 5749998feaa46..6e2c1ba18012a 100644
+--- a/drivers/char/hw_random/core.c
++++ b/drivers/char/hw_random/core.c
+@@ -24,10 +24,13 @@
+ #include <linux/random.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
++#include <linux/string.h>
+ #include <linux/uaccess.h>
+ 
+ #define RNG_MODULE_NAME		"hw_random"
+ 
++#define RNG_BUFFER_SIZE (SMP_CACHE_BYTES < 32 ? 32 : SMP_CACHE_BYTES)
++
+ static struct hwrng *current_rng;
+ /* the current rng has been explicitly chosen by user via sysfs */
+ static int cur_rng_set_by_user;
+@@ -59,7 +62,7 @@ static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size,
+ 
+ static size_t rng_buffer_size(void)
+ {
+-	return SMP_CACHE_BYTES < 32 ? 32 : SMP_CACHE_BYTES;
++	return RNG_BUFFER_SIZE;
+ }
+ 
+ static void add_early_randomness(struct hwrng *rng)
+@@ -206,6 +209,7 @@ static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size,
+ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
+ 			    size_t size, loff_t *offp)
+ {
++	u8 buffer[RNG_BUFFER_SIZE];
+ 	ssize_t ret = 0;
+ 	int err = 0;
+ 	int bytes_read, len;
+@@ -233,34 +237,37 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
+ 			if (bytes_read < 0) {
+ 				err = bytes_read;
+ 				goto out_unlock_reading;
++			} else if (bytes_read == 0 &&
++				   (filp->f_flags & O_NONBLOCK)) {
++				err = -EAGAIN;
++				goto out_unlock_reading;
+ 			}
++
+ 			data_avail = bytes_read;
+ 		}
+ 
+-		if (!data_avail) {
+-			if (filp->f_flags & O_NONBLOCK) {
+-				err = -EAGAIN;
+-				goto out_unlock_reading;
+-			}
+-		} else {
+-			len = data_avail;
++		len = data_avail;
++		if (len) {
+ 			if (len > size)
+ 				len = size;
+ 
+ 			data_avail -= len;
+ 
+-			if (copy_to_user(buf + ret, rng_buffer + data_avail,
+-								len)) {
++			memcpy(buffer, rng_buffer + data_avail, len);
++		}
++		mutex_unlock(&reading_mutex);
++		put_rng(rng);
++
++		if (len) {
++			if (copy_to_user(buf + ret, buffer, len)) {
+ 				err = -EFAULT;
+-				goto out_unlock_reading;
++				goto out;
+ 			}
+ 
+ 			size -= len;
+ 			ret += len;
+ 		}
+ 
+-		mutex_unlock(&reading_mutex);
+-		put_rng(rng);
+ 
+ 		if (need_resched())
+ 			schedule_timeout_interruptible(1);
+@@ -271,6 +278,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
+ 		}
+ 	}
+ out:
++	memzero_explicit(buffer, sizeof(buffer));
+ 	return ret ? : err;
+ 
+ out_unlock_reading:
+diff --git a/drivers/clk/hisilicon/clk-hi3620.c b/drivers/clk/hisilicon/clk-hi3620.c
+index a3d04c7c3da87..eb9c139babc33 100644
+--- a/drivers/clk/hisilicon/clk-hi3620.c
++++ b/drivers/clk/hisilicon/clk-hi3620.c
+@@ -467,8 +467,10 @@ static void __init hi3620_mmc_clk_init(struct device_node *node)
+ 		return;
+ 
+ 	clk_data->clks = kcalloc(num, sizeof(*clk_data->clks), GFP_KERNEL);
+-	if (!clk_data->clks)
++	if (!clk_data->clks) {
++		kfree(clk_data);
+ 		return;
++	}
+ 
+ 	for (i = 0; i < num; i++) {
+ 		struct hisi_mmc_clock *mmc_clk = &hi3620_mmc_clks[i];
+diff --git a/drivers/clk/mmp/clk-of-pxa168.c b/drivers/clk/mmp/clk-of-pxa168.c
+index f110c02e83cb6..9674c6c06dca9 100644
+--- a/drivers/clk/mmp/clk-of-pxa168.c
++++ b/drivers/clk/mmp/clk-of-pxa168.c
+@@ -258,18 +258,21 @@ static void __init pxa168_clk_init(struct device_node *np)
+ 	pxa_unit->mpmu_base = of_iomap(np, 0);
+ 	if (!pxa_unit->mpmu_base) {
+ 		pr_err("failed to map mpmu registers\n");
++		kfree(pxa_unit);
+ 		return;
+ 	}
+ 
+ 	pxa_unit->apmu_base = of_iomap(np, 1);
+ 	if (!pxa_unit->apmu_base) {
+ 		pr_err("failed to map apmu registers\n");
++		kfree(pxa_unit);
+ 		return;
+ 	}
+ 
+ 	pxa_unit->apbc_base = of_iomap(np, 2);
+ 	if (!pxa_unit->apbc_base) {
+ 		pr_err("failed to map apbc registers\n");
++		kfree(pxa_unit);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 8e2672ec6e038..055cbb2ad75e1 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -304,10 +304,16 @@ EXPORT_SYMBOL_GPL(sev_platform_init);
+ 
+ static int __sev_platform_shutdown_locked(int *error)
+ {
+-	struct sev_device *sev = psp_master->sev_data;
++	struct psp_device *psp = psp_master;
++	struct sev_device *sev;
+ 	int ret;
+ 
+-	if (!sev || sev->state == SEV_STATE_UNINIT)
++	if (!psp || !psp->sev_data)
++		return 0;
++
++	sev = psp->sev_data;
++
++	if (sev->state == SEV_STATE_UNINIT)
+ 		return 0;
+ 
+ 	ret = __sev_do_cmd_locked(SEV_CMD_SHUTDOWN, NULL, error);
+diff --git a/drivers/crypto/stm32/stm32-crc32.c b/drivers/crypto/stm32/stm32-crc32.c
+index 90a920e7f6642..c439be1650c84 100644
+--- a/drivers/crypto/stm32/stm32-crc32.c
++++ b/drivers/crypto/stm32/stm32-crc32.c
+@@ -104,7 +104,7 @@ static struct stm32_crc *stm32_crc_get_next_crc(void)
+ 	struct stm32_crc *crc;
+ 
+ 	spin_lock_bh(&crc_list.lock);
+-	crc = list_first_entry(&crc_list.dev_list, struct stm32_crc, list);
++	crc = list_first_entry_or_null(&crc_list.dev_list, struct stm32_crc, list);
+ 	if (crc)
+ 		list_move_tail(&crc->list, &crc_list.dev_list);
+ 	spin_unlock_bh(&crc_list.lock);
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 42c1eed445296..216594b861191 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -438,10 +438,14 @@ static void devfreq_monitor(struct work_struct *work)
+ 	if (err)
+ 		dev_err(&devfreq->dev, "dvfs failed with (%d) error\n", err);
+ 
++	if (devfreq->stop_polling)
++		goto out;
++
+ 	queue_delayed_work(devfreq_wq, &devfreq->work,
+ 				msecs_to_jiffies(devfreq->profile->polling_ms));
+-	mutex_unlock(&devfreq->lock);
+ 
++out:
++	mutex_unlock(&devfreq->lock);
+ 	trace_devfreq_monitor(devfreq);
+ }
+ 
+@@ -459,6 +463,10 @@ void devfreq_monitor_start(struct devfreq *devfreq)
+ 	if (devfreq->governor->interrupt_driven)
+ 		return;
+ 
++	mutex_lock(&devfreq->lock);
++	if (delayed_work_pending(&devfreq->work))
++		goto out;
++
+ 	switch (devfreq->profile->timer) {
+ 	case DEVFREQ_TIMER_DEFERRABLE:
+ 		INIT_DEFERRABLE_WORK(&devfreq->work, devfreq_monitor);
+@@ -467,12 +475,16 @@ void devfreq_monitor_start(struct devfreq *devfreq)
+ 		INIT_DELAYED_WORK(&devfreq->work, devfreq_monitor);
+ 		break;
+ 	default:
+-		return;
++		goto out;
+ 	}
+ 
+ 	if (devfreq->profile->polling_ms)
+ 		queue_delayed_work(devfreq_wq, &devfreq->work,
+ 			msecs_to_jiffies(devfreq->profile->polling_ms));
++
++out:
++	devfreq->stop_polling = false;
++	mutex_unlock(&devfreq->lock);
+ }
+ EXPORT_SYMBOL(devfreq_monitor_start);
+ 
+@@ -489,6 +501,14 @@ void devfreq_monitor_stop(struct devfreq *devfreq)
+ 	if (devfreq->governor->interrupt_driven)
+ 		return;
+ 
++	mutex_lock(&devfreq->lock);
++	if (devfreq->stop_polling) {
++		mutex_unlock(&devfreq->lock);
++		return;
++	}
++
++	devfreq->stop_polling = true;
++	mutex_unlock(&devfreq->lock);
+ 	cancel_delayed_work_sync(&devfreq->work);
+ }
+ EXPORT_SYMBOL(devfreq_monitor_stop);
+diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
+index 4ec7bb58c195f..9559ebd61f3bb 100644
+--- a/drivers/dma/dmaengine.c
++++ b/drivers/dma/dmaengine.c
+@@ -1108,6 +1108,9 @@ EXPORT_SYMBOL_GPL(dma_async_device_channel_register);
+ static void __dma_async_device_channel_unregister(struct dma_device *device,
+ 						  struct dma_chan *chan)
+ {
++	if (chan->local == NULL)
++		return;
++
+ 	WARN_ONCE(!device->device_release && chan->client_count,
+ 		  "%s called while %d clients hold a reference\n",
+ 		  __func__, chan->client_count);
+diff --git a/drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c b/drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c
+index 4ae057922ef1f..2d905f0633d57 100644
+--- a/drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c
++++ b/drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c
+@@ -38,15 +38,17 @@ static int dpaa2_qdma_alloc_chan_resources(struct dma_chan *chan)
+ 	if (!dpaa2_chan->fd_pool)
+ 		goto err;
+ 
+-	dpaa2_chan->fl_pool = dma_pool_create("fl_pool", dev,
+-					      sizeof(struct dpaa2_fl_entry),
+-					      sizeof(struct dpaa2_fl_entry), 0);
++	dpaa2_chan->fl_pool =
++		dma_pool_create("fl_pool", dev,
++				 sizeof(struct dpaa2_fl_entry) * 3,
++				 sizeof(struct dpaa2_fl_entry), 0);
++
+ 	if (!dpaa2_chan->fl_pool)
+ 		goto err_fd;
+ 
+ 	dpaa2_chan->sdd_pool =
+ 		dma_pool_create("sdd_pool", dev,
+-				sizeof(struct dpaa2_qdma_sd_d),
++				sizeof(struct dpaa2_qdma_sd_d) * 2,
+ 				sizeof(struct dpaa2_qdma_sd_d), 0);
+ 	if (!dpaa2_chan->sdd_pool)
+ 		goto err_fl;
+diff --git a/drivers/dma/fsl-qdma.c b/drivers/dma/fsl-qdma.c
+index 045ead46ec8fc..69385f32e2756 100644
+--- a/drivers/dma/fsl-qdma.c
++++ b/drivers/dma/fsl-qdma.c
+@@ -514,11 +514,11 @@ static struct fsl_qdma_queue
+ 			queue_temp = queue_head + i + (j * queue_num);
+ 
+ 			queue_temp->cq =
+-			dma_alloc_coherent(&pdev->dev,
+-					   sizeof(struct fsl_qdma_format) *
+-					   queue_size[i],
+-					   &queue_temp->bus_addr,
+-					   GFP_KERNEL);
++			dmam_alloc_coherent(&pdev->dev,
++					    sizeof(struct fsl_qdma_format) *
++					    queue_size[i],
++					    &queue_temp->bus_addr,
++					    GFP_KERNEL);
+ 			if (!queue_temp->cq)
+ 				return NULL;
+ 			queue_temp->block_base = fsl_qdma->block_base +
+@@ -563,11 +563,11 @@ static struct fsl_qdma_queue
+ 	/*
+ 	 * Buffer for queue command
+ 	 */
+-	status_head->cq = dma_alloc_coherent(&pdev->dev,
+-					     sizeof(struct fsl_qdma_format) *
+-					     status_size,
+-					     &status_head->bus_addr,
+-					     GFP_KERNEL);
++	status_head->cq = dmam_alloc_coherent(&pdev->dev,
++					      sizeof(struct fsl_qdma_format) *
++					      status_size,
++					      &status_head->bus_addr,
++					      GFP_KERNEL);
+ 	if (!status_head->cq) {
+ 		devm_kfree(&pdev->dev, status_head);
+ 		return NULL;
+@@ -1272,8 +1272,6 @@ static void fsl_qdma_cleanup_vchan(struct dma_device *dmadev)
+ 
+ static int fsl_qdma_remove(struct platform_device *pdev)
+ {
+-	int i;
+-	struct fsl_qdma_queue *status;
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct fsl_qdma_engine *fsl_qdma = platform_get_drvdata(pdev);
+ 
+@@ -1282,11 +1280,6 @@ static int fsl_qdma_remove(struct platform_device *pdev)
+ 	of_dma_controller_free(np);
+ 	dma_async_device_unregister(&fsl_qdma->dma_dev);
+ 
+-	for (i = 0; i < fsl_qdma->block_number; i++) {
+-		status = fsl_qdma->status[i];
+-		dma_free_coherent(&pdev->dev, sizeof(struct fsl_qdma_format) *
+-				status->n_cq, status->cq, status->bus_addr);
+-	}
+ 	return 0;
+ }
+ 
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index d3902784cae24..15eecb757619e 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -2877,6 +2877,7 @@ static void udma_desc_pre_callback(struct virt_dma_chan *vc,
+ {
+ 	struct udma_chan *uc = to_udma_chan(&vc->chan);
+ 	struct udma_desc *d;
++	u8 status;
+ 
+ 	if (!vd)
+ 		return;
+@@ -2886,12 +2887,12 @@ static void udma_desc_pre_callback(struct virt_dma_chan *vc,
+ 	if (d->metadata_size)
+ 		udma_fetch_epib(uc, d);
+ 
+-	/* Provide residue information for the client */
+ 	if (result) {
+ 		void *desc_vaddr = udma_curr_cppi5_desc_vaddr(d, d->desc_idx);
+ 
+ 		if (cppi5_desc_get_type(desc_vaddr) ==
+ 		    CPPI5_INFO0_DESC_TYPE_VAL_HOST) {
++			/* Provide residue information for the client */
+ 			result->residue = d->residue -
+ 					  cppi5_hdesc_get_pktlen(desc_vaddr);
+ 			if (result->residue)
+@@ -2900,7 +2901,12 @@ static void udma_desc_pre_callback(struct virt_dma_chan *vc,
+ 				result->result = DMA_TRANS_NOERROR;
+ 		} else {
+ 			result->residue = 0;
+-			result->result = DMA_TRANS_NOERROR;
++			/* Propagate TR Response errors to the client */
++			status = d->hwdesc[0].tr_resp_base->status;
++			if (status)
++				result->result = DMA_TRANS_ABORTED;
++			else
++				result->result = DMA_TRANS_NOERROR;
+ 		}
+ 	}
+ }
+diff --git a/drivers/firewire/core-device.c b/drivers/firewire/core-device.c
+index 94ae27865b9ed..9bc181865ecc3 100644
+--- a/drivers/firewire/core-device.c
++++ b/drivers/firewire/core-device.c
+@@ -100,10 +100,9 @@ static int textual_leaf_to_string(const u32 *block, char *buf, size_t size)
+  * @buf:	where to put the string
+  * @size:	size of @buf, in bytes
+  *
+- * The string is taken from a minimal ASCII text descriptor leaf after
+- * the immediate entry with @key.  The string is zero-terminated.
+- * An overlong string is silently truncated such that it and the
+- * zero byte fit into @size.
++ * The string is taken from a minimal ASCII text descriptor leaf just after the entry with the
++ * @key. The string is zero-terminated. An overlong string is silently truncated such that it
++ * and the zero byte fit into @size.
+  *
+  * Returns strlen(buf) or a negative error code.
+  */
+diff --git a/drivers/gpio/gpio-eic-sprd.c b/drivers/gpio/gpio-eic-sprd.c
+index 865ab2b34fdda..3dfb8b6c6c710 100644
+--- a/drivers/gpio/gpio-eic-sprd.c
++++ b/drivers/gpio/gpio-eic-sprd.c
+@@ -318,20 +318,27 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
+ 		switch (flow_type) {
+ 		case IRQ_TYPE_LEVEL_HIGH:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IEV, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IC, 1);
+ 			break;
+ 		case IRQ_TYPE_LEVEL_LOW:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IEV, 0);
++			sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IC, 1);
+ 			break;
+ 		case IRQ_TYPE_EDGE_RISING:
+ 		case IRQ_TYPE_EDGE_FALLING:
+ 		case IRQ_TYPE_EDGE_BOTH:
+ 			state = sprd_eic_get(chip, offset);
+-			if (state)
++			if (state) {
+ 				sprd_eic_update(chip, offset,
+ 						SPRD_EIC_DBNC_IEV, 0);
+-			else
++				sprd_eic_update(chip, offset,
++						SPRD_EIC_DBNC_IC, 1);
++			} else {
+ 				sprd_eic_update(chip, offset,
+ 						SPRD_EIC_DBNC_IEV, 1);
++				sprd_eic_update(chip, offset,
++						SPRD_EIC_DBNC_IC, 1);
++			}
+ 			break;
+ 		default:
+ 			return -ENOTSUPP;
+@@ -343,20 +350,27 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
+ 		switch (flow_type) {
+ 		case IRQ_TYPE_LEVEL_HIGH:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTPOL, 0);
++			sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTCLR, 1);
+ 			break;
+ 		case IRQ_TYPE_LEVEL_LOW:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTPOL, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTCLR, 1);
+ 			break;
+ 		case IRQ_TYPE_EDGE_RISING:
+ 		case IRQ_TYPE_EDGE_FALLING:
+ 		case IRQ_TYPE_EDGE_BOTH:
+ 			state = sprd_eic_get(chip, offset);
+-			if (state)
++			if (state) {
+ 				sprd_eic_update(chip, offset,
+ 						SPRD_EIC_LATCH_INTPOL, 0);
+-			else
++				sprd_eic_update(chip, offset,
++						SPRD_EIC_LATCH_INTCLR, 1);
++			} else {
+ 				sprd_eic_update(chip, offset,
+ 						SPRD_EIC_LATCH_INTPOL, 1);
++				sprd_eic_update(chip, offset,
++						SPRD_EIC_LATCH_INTCLR, 1);
++			}
+ 			break;
+ 		default:
+ 			return -ENOTSUPP;
+@@ -370,29 +384,34 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_EDGE_FALLING:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 0);
++			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_EDGE_BOTH:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_LEVEL_HIGH:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 1);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_level_irq);
+ 			break;
+ 		case IRQ_TYPE_LEVEL_LOW:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 1);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 0);
++			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_level_irq);
+ 			break;
+ 		default:
+@@ -405,29 +424,34 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_EDGE_FALLING:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 0);
++			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_EDGE_BOTH:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_LEVEL_HIGH:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 1);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_level_irq);
+ 			break;
+ 		case IRQ_TYPE_LEVEL_LOW:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 1);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 0);
++			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_level_irq);
+ 			break;
+ 		default:
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index 44ee319da1b35..12012e1645d7b 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -1479,6 +1479,20 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
+ 			.ignore_wake = "INT33FF:01@0",
+ 		},
+ 	},
++	{
++		/*
++		 * Spurious wakeups from TP_ATTN# pin
++		 * Found in BIOS 0.35
++		 * https://gitlab.freedesktop.org/drm/amd/-/issues/3073
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "GPD"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "G1619-04"),
++		},
++		.driver_data = &(struct acpi_gpiolib_dmi_quirk) {
++			.ignore_wake = "PNP0C50:00@8",
++		},
++	},
+ 	{} /* Terminating entry */
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
+index 3107b95759291..eef7517c9d24b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
+@@ -88,7 +88,7 @@ struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f)
+ 		return NULL;
+ 
+ 	fence = container_of(f, struct amdgpu_amdkfd_fence, base);
+-	if (fence && f->ops == &amdkfd_fence_ops)
++	if (f->ops == &amdkfd_fence_ops)
+ 		return fence;
+ 
+ 	return NULL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index a093f1b277244..e833c02fabff3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -1184,6 +1184,7 @@ bool amdgpu_device_need_post(struct amdgpu_device *adev)
+ 				return true;
+ 
+ 			fw_ver = *((uint32_t *)adev->pm.fw->data + 69);
++			release_firmware(adev->pm.fw);
+ 			if (fw_ver < 0x00160e00)
+ 				return true;
+ 		}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
+index 8ea6c49529e7d..6a22bc41c2056 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
+@@ -241,7 +241,8 @@ int amdgpu_sync_resv(struct amdgpu_device *adev, struct amdgpu_sync *sync,
+ 
+ 		/* Never sync to VM updates either. */
+ 		if (fence_owner == AMDGPU_FENCE_OWNER_VM &&
+-		    owner != AMDGPU_FENCE_OWNER_UNDEFINED)
++		    owner != AMDGPU_FENCE_OWNER_UNDEFINED &&
++	    owner != AMDGPU_FENCE_OWNER_KFD)
+ 			continue;
+ 
+ 		/* Ignore fences depending on the sync mode */
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 36a9e9c84ed44..272252cd05001 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1440,6 +1440,10 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
+ 		wait_for_no_pipes_pending(dc, context);
+ 		/* pplib is notified if disp_num changed */
+ 		dc->hwss.optimize_bandwidth(dc, context);
++		/* Need to do otg sync again as otg could be out of sync due to otg
++		 * workaround applied during clock update
++		 */
++		dc_trigger_sync(dc, context);
+ 	}
+ 
+ 	context->stream_mask = get_stream_mask(dc, context);
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/process_pptables_v1_0.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/process_pptables_v1_0.c
+index b760f95e7fa7a..5998c78ad536c 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/process_pptables_v1_0.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/process_pptables_v1_0.c
+@@ -204,7 +204,7 @@ static int get_platform_power_management_table(
+ 		struct pp_hwmgr *hwmgr,
+ 		ATOM_Tonga_PPM_Table *atom_ppm_table)
+ {
+-	struct phm_ppm_table *ptr = kzalloc(sizeof(ATOM_Tonga_PPM_Table), GFP_KERNEL);
++	struct phm_ppm_table *ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
+ 	struct phm_ppt_v1_information *pp_table_information =
+ 		(struct phm_ppt_v1_information *)(hwmgr->pptable);
+ 
+diff --git a/drivers/gpu/drm/bridge/nxp-ptn3460.c b/drivers/gpu/drm/bridge/nxp-ptn3460.c
+index e941c11325984..eebb2dece768b 100644
+--- a/drivers/gpu/drm/bridge/nxp-ptn3460.c
++++ b/drivers/gpu/drm/bridge/nxp-ptn3460.c
+@@ -54,13 +54,13 @@ static int ptn3460_read_bytes(struct ptn3460_bridge *ptn_bridge, char addr,
+ 	int ret;
+ 
+ 	ret = i2c_master_send(ptn_bridge->client, &addr, 1);
+-	if (ret <= 0) {
++	if (ret < 0) {
+ 		DRM_ERROR("Failed to send i2c command, ret=%d\n", ret);
+ 		return ret;
+ 	}
+ 
+ 	ret = i2c_master_recv(ptn_bridge->client, buf, len);
+-	if (ret <= 0) {
++	if (ret < 0) {
+ 		DRM_ERROR("Failed to recv i2c data, ret=%d\n", ret);
+ 		return ret;
+ 	}
+@@ -78,7 +78,7 @@ static int ptn3460_write_byte(struct ptn3460_bridge *ptn_bridge, char addr,
+ 	buf[1] = val;
+ 
+ 	ret = i2c_master_send(ptn_bridge->client, buf, ARRAY_SIZE(buf));
+-	if (ret <= 0) {
++	if (ret < 0) {
+ 		DRM_ERROR("Failed to send i2c command, ret=%d\n", ret);
+ 		return ret;
+ 	}
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index 537e7de8e9c33..93da7b5d785be 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -411,7 +411,7 @@ int drm_open(struct inode *inode, struct file *filp)
+ {
+ 	struct drm_device *dev;
+ 	struct drm_minor *minor;
+-	int retcode;
++	int retcode = 0;
+ 	int need_setup = 0;
+ 
+ 	minor = drm_minor_acquire(iminor(inode));
+diff --git a/drivers/gpu/drm/drm_framebuffer.c b/drivers/gpu/drm/drm_framebuffer.c
+index 2f5b0c2bb0fe3..e490ef42441f3 100644
+--- a/drivers/gpu/drm/drm_framebuffer.c
++++ b/drivers/gpu/drm/drm_framebuffer.c
+@@ -570,7 +570,7 @@ int drm_mode_getfb2_ioctl(struct drm_device *dev,
+ 	struct drm_mode_fb_cmd2 *r = data;
+ 	struct drm_framebuffer *fb;
+ 	unsigned int i;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
+index 0c806e99e8690..83918ac1f6086 100644
+--- a/drivers/gpu/drm/drm_mipi_dsi.c
++++ b/drivers/gpu/drm/drm_mipi_dsi.c
+@@ -300,7 +300,8 @@ static int mipi_dsi_remove_device_fn(struct device *dev, void *priv)
+ {
+ 	struct mipi_dsi_device *dsi = to_mipi_dsi_device(dev);
+ 
+-	mipi_dsi_detach(dsi);
++	if (dsi->attached)
++		mipi_dsi_detach(dsi);
+ 	mipi_dsi_device_unregister(dsi);
+ 
+ 	return 0;
+@@ -323,11 +324,18 @@ EXPORT_SYMBOL(mipi_dsi_host_unregister);
+ int mipi_dsi_attach(struct mipi_dsi_device *dsi)
+ {
+ 	const struct mipi_dsi_host_ops *ops = dsi->host->ops;
++	int ret;
+ 
+ 	if (!ops || !ops->attach)
+ 		return -ENOSYS;
+ 
+-	return ops->attach(dsi->host, dsi);
++	ret = ops->attach(dsi->host, dsi);
++	if (ret)
++		return ret;
++
++	dsi->attached = true;
++
++	return 0;
+ }
+ EXPORT_SYMBOL(mipi_dsi_attach);
+ 
+@@ -339,9 +347,14 @@ int mipi_dsi_detach(struct mipi_dsi_device *dsi)
+ {
+ 	const struct mipi_dsi_host_ops *ops = dsi->host->ops;
+ 
++	if (WARN_ON(!dsi->attached))
++		return -EINVAL;
++
+ 	if (!ops || !ops->detach)
+ 		return -ENOSYS;
+ 
++	dsi->attached = false;
++
+ 	return ops->detach(dsi->host, dsi);
+ }
+ EXPORT_SYMBOL(mipi_dsi_detach);
+diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
+index 24f643982903a..79249568bdec3 100644
+--- a/drivers/gpu/drm/drm_plane.c
++++ b/drivers/gpu/drm/drm_plane.c
+@@ -1213,6 +1213,7 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev,
+ out:
+ 	if (fb)
+ 		drm_framebuffer_put(fb);
++	fb = NULL;
+ 	if (plane->old_fb)
+ 		drm_framebuffer_put(plane->old_fb);
+ 	plane->old_fb = NULL;
+diff --git a/drivers/gpu/drm/exynos/exynos5433_drm_decon.c b/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
+index c277d2fc50c66..e43dfea09527f 100644
+--- a/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
++++ b/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
+@@ -318,9 +318,9 @@ static void decon_win_set_bldmod(struct decon_context *ctx, unsigned int win,
+ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
+ 				 struct drm_framebuffer *fb)
+ {
+-	struct exynos_drm_plane plane = ctx->planes[win];
++	struct exynos_drm_plane *plane = &ctx->planes[win];
+ 	struct exynos_drm_plane_state *state =
+-		to_exynos_plane_state(plane.base.state);
++		to_exynos_plane_state(plane->base.state);
+ 	unsigned int alpha = state->base.alpha;
+ 	unsigned int pixel_alpha;
+ 	unsigned long val;
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.c b/drivers/gpu/drm/exynos/exynos_drm_drv.c
+index dbd80f1e4c78b..7e13c15500837 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_drv.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_drv.c
+@@ -344,6 +344,7 @@ static int exynos_drm_bind(struct device *dev)
+ 	drm_mode_config_cleanup(drm);
+ 	exynos_drm_cleanup_dma(drm);
+ 	kfree(private);
++	dev_set_drvdata(dev, NULL);
+ err_free_drm:
+ 	drm_dev_put(drm);
+ 
+@@ -358,6 +359,7 @@ static void exynos_drm_unbind(struct device *dev)
+ 
+ 	exynos_drm_fbdev_fini(drm);
+ 	drm_kms_helper_poll_fini(drm);
++	drm_atomic_helper_shutdown(drm);
+ 
+ 	component_unbind_all(drm->dev, drm);
+ 	drm_mode_config_cleanup(drm);
+@@ -395,9 +397,18 @@ static int exynos_drm_platform_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static void exynos_drm_platform_shutdown(struct platform_device *pdev)
++{
++	struct drm_device *drm = platform_get_drvdata(pdev);
++
++	if (drm)
++		drm_atomic_helper_shutdown(drm);
++}
++
+ static struct platform_driver exynos_drm_platform_driver = {
+ 	.probe	= exynos_drm_platform_probe,
+ 	.remove	= exynos_drm_platform_remove,
++	.shutdown = exynos_drm_platform_shutdown,
+ 	.driver	= {
+ 		.name	= "exynos-drm",
+ 		.pm	= &exynos_drm_pm_ops,
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+index bb67cad8371f0..c045330f9c48f 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+@@ -637,9 +637,9 @@ static void fimd_win_set_bldmod(struct fimd_context *ctx, unsigned int win,
+ static void fimd_win_set_pixfmt(struct fimd_context *ctx, unsigned int win,
+ 				struct drm_framebuffer *fb, int width)
+ {
+-	struct exynos_drm_plane plane = ctx->planes[win];
++	struct exynos_drm_plane *plane = &ctx->planes[win];
+ 	struct exynos_drm_plane_state *state =
+-		to_exynos_plane_state(plane.base.state);
++		to_exynos_plane_state(plane->base.state);
+ 	uint32_t pixel_format = fb->format->format;
+ 	unsigned int alpha = state->base.alpha;
+ 	u32 val = WINCONx_ENWIN;
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_gsc.c b/drivers/gpu/drm/exynos/exynos_drm_gsc.c
+index 45e9aee8366a8..bcf830c5b8ea9 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_gsc.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_gsc.c
+@@ -1344,7 +1344,7 @@ static int __maybe_unused gsc_runtime_resume(struct device *dev)
+ 	for (i = 0; i < ctx->num_clocks; i++) {
+ 		ret = clk_prepare_enable(ctx->clocks[i]);
+ 		if (ret) {
+-			while (--i > 0)
++			while (--i >= 0)
+ 				clk_disable_unprepare(ctx->clocks[i]);
+ 			return ret;
+ 		}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 408fc6c8a6df8..44033a6394196 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -45,6 +45,9 @@
+ 		(p) ? ((p)->hw_pp ? (p)->hw_pp->idx - PINGPONG_0 : -1) : -1, \
+ 		##__VA_ARGS__)
+ 
++#define DPU_ERROR_ENC_RATELIMITED(e, fmt, ...) DPU_ERROR_RATELIMITED("enc%d " fmt,\
++		(e) ? (e)->base.base.id : -1, ##__VA_ARGS__)
++
+ /*
+  * Two to anticipate panels that can do cmd/vid dynamic switching
+  * plan is to create all possible physical encoder types, and switch between
+@@ -2135,7 +2138,7 @@ static void dpu_encoder_frame_done_timeout(struct timer_list *t)
+ 		return;
+ 	}
+ 
+-	DPU_ERROR_ENC(dpu_enc, "frame done timeout\n");
++	DPU_ERROR_ENC_RATELIMITED(dpu_enc, "frame done timeout\n");
+ 
+ 	event = DPU_ENCODER_FRAME_EVENT_ERROR;
+ 	trace_dpu_enc_frame_done_timeout(DRMID(drm_enc), event);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+index 1c0e4c0c9ffb3..bb7c7e437242e 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+@@ -52,6 +52,7 @@
+ 	} while (0)
+ 
+ #define DPU_ERROR(fmt, ...) pr_err("[dpu error]" fmt, ##__VA_ARGS__)
++#define DPU_ERROR_RATELIMITED(fmt, ...) pr_err_ratelimited("[dpu error]" fmt, ##__VA_ARGS__)
+ 
+ /**
+  * ktime_compare_safe - compare two ktime structures
+diff --git a/drivers/gpu/drm/msm/dp/dp_link.c b/drivers/gpu/drm/msm/dp/dp_link.c
+index be986da78c4a5..172a33e8fd8fb 100644
+--- a/drivers/gpu/drm/msm/dp/dp_link.c
++++ b/drivers/gpu/drm/msm/dp/dp_link.c
+@@ -7,6 +7,7 @@
+ 
+ #include <drm/drm_print.h>
+ 
++#include "dp_reg.h"
+ #include "dp_link.h"
+ #include "dp_panel.h"
+ 
+@@ -1078,7 +1079,7 @@ int dp_link_process_request(struct dp_link *dp_link)
+ 
+ int dp_link_get_colorimetry_config(struct dp_link *dp_link)
+ {
+-	u32 cc;
++	u32 cc = DP_MISC0_COLORIMERY_CFG_LEGACY_RGB;
+ 	struct dp_link_private *link;
+ 
+ 	if (!dp_link) {
+@@ -1092,10 +1093,11 @@ int dp_link_get_colorimetry_config(struct dp_link *dp_link)
+ 	 * Unless a video pattern CTS test is ongoing, use RGB_VESA
+ 	 * Only RGB_VESA and RGB_CEA supported for now
+ 	 */
+-	if (dp_link_is_video_pattern_requested(link))
+-		cc = link->dp_link.test_video.test_dyn_range;
+-	else
+-		cc = DP_TEST_DYNAMIC_RANGE_VESA;
++	if (dp_link_is_video_pattern_requested(link)) {
++		if (link->dp_link.test_video.test_dyn_range &
++					DP_TEST_DYNAMIC_RANGE_CEA)
++			cc = DP_MISC0_COLORIMERY_CFG_CEA_RGB;
++	}
+ 
+ 	return cc;
+ }
+diff --git a/drivers/gpu/drm/msm/dp/dp_reg.h b/drivers/gpu/drm/msm/dp/dp_reg.h
+index 268602803d9a3..176a503ece9c0 100644
+--- a/drivers/gpu/drm/msm/dp/dp_reg.h
++++ b/drivers/gpu/drm/msm/dp/dp_reg.h
+@@ -129,6 +129,9 @@
+ #define DP_MISC0_COLORIMETRY_CFG_SHIFT		(0x00000001)
+ #define DP_MISC0_TEST_BITS_DEPTH_SHIFT		(0x00000005)
+ 
++#define DP_MISC0_COLORIMERY_CFG_LEGACY_RGB	(0)
++#define DP_MISC0_COLORIMERY_CFG_CEA_RGB		(0x04)
++
+ #define REG_DP_VALID_BOUNDARY			(0x00000030)
+ #define REG_DP_VALID_BOUNDARY_2			(0x00000034)
+ 
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+index 10eacfd95fb1c..b49135f38583a 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+@@ -710,6 +710,10 @@ static int dsi_phy_driver_probe(struct platform_device *pdev)
+ 			goto fail;
+ 	}
+ 
++	ret = devm_pm_runtime_enable(&pdev->dev);
++	if (ret)
++		return ret;
++
+ 	/* PLL init will call into clk_register which requires
+ 	 * register access, so we need to enable power and ahb clock.
+ 	 */
+diff --git a/drivers/gpu/drm/nouveau/nouveau_vmm.c b/drivers/gpu/drm/nouveau/nouveau_vmm.c
+index a49e88129c922..ce1d53b8597f5 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_vmm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_vmm.c
+@@ -108,6 +108,9 @@ nouveau_vma_new(struct nouveau_bo *nvbo, struct nouveau_vmm *vmm,
+ 	} else {
+ 		ret = nvif_vmm_get(&vmm->vmm, PTES, false, mem->mem.page, 0,
+ 				   mem->mem.size, &tmp);
++		if (ret)
++			goto done;
++
+ 		vma->addr = tmp.addr;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index ee01b61a6bafa..51470020ba61d 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -3635,6 +3635,7 @@ static const struct panel_desc tianma_tm070jdhg30 = {
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
+ 	.connector_type = DRM_MODE_CONNECTOR_LVDS,
++	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
+ };
+ 
+ static const struct panel_desc tianma_tm070jvhg33 = {
+@@ -3647,6 +3648,7 @@ static const struct panel_desc tianma_tm070jvhg33 = {
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
+ 	.connector_type = DRM_MODE_CONNECTOR_LVDS,
++	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
+ };
+ 
+ static const struct display_timing tianma_tm070rvhg71_timing = {
+diff --git a/drivers/gpu/drm/tidss/tidss_crtc.c b/drivers/gpu/drm/tidss/tidss_crtc.c
+index 3c5744a91d4a0..26fd2761e80db 100644
+--- a/drivers/gpu/drm/tidss/tidss_crtc.c
++++ b/drivers/gpu/drm/tidss/tidss_crtc.c
+@@ -168,13 +168,13 @@ static void tidss_crtc_atomic_flush(struct drm_crtc *crtc,
+ 	struct tidss_device *tidss = to_tidss(ddev);
+ 	unsigned long flags;
+ 
+-	dev_dbg(ddev->dev,
+-		"%s: %s enabled %d, needs modeset %d, event %p\n", __func__,
+-		crtc->name, drm_atomic_crtc_needs_modeset(crtc->state),
+-		crtc->state->enable, crtc->state->event);
++	dev_dbg(ddev->dev, "%s: %s is %sactive, %s modeset, event %p\n",
++		__func__, crtc->name, crtc->state->active ? "" : "not ",
++		drm_atomic_crtc_needs_modeset(crtc->state) ? "needs" : "doesn't need",
++		crtc->state->event);
+ 
+ 	/* There is nothing to do if CRTC is not going to be enabled. */
+-	if (!crtc->state->enable)
++	if (!crtc->state->active)
+ 		return;
+ 
+ 	/*
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index e5d2e7e9541b8..0dc55465b452e 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -70,6 +70,28 @@ struct apple_key_translation {
+ 	u8 flags;
+ };
+ 
++static const struct apple_key_translation apple2021_fn_keys[] = {
++	{ KEY_BACKSPACE, KEY_DELETE },
++	{ KEY_ENTER,	KEY_INSERT },
++	{ KEY_F1,	KEY_BRIGHTNESSDOWN, APPLE_FLAG_FKEY },
++	{ KEY_F2,	KEY_BRIGHTNESSUP,   APPLE_FLAG_FKEY },
++	{ KEY_F3,	KEY_SCALE,          APPLE_FLAG_FKEY },
++	{ KEY_F4,	KEY_SEARCH,         APPLE_FLAG_FKEY },
++	{ KEY_F5,	KEY_MICMUTE,        APPLE_FLAG_FKEY },
++	{ KEY_F6,	KEY_SLEEP,          APPLE_FLAG_FKEY },
++	{ KEY_F7,	KEY_PREVIOUSSONG,   APPLE_FLAG_FKEY },
++	{ KEY_F8,	KEY_PLAYPAUSE,      APPLE_FLAG_FKEY },
++	{ KEY_F9,	KEY_NEXTSONG,       APPLE_FLAG_FKEY },
++	{ KEY_F10,	KEY_MUTE,           APPLE_FLAG_FKEY },
++	{ KEY_F11,	KEY_VOLUMEDOWN,     APPLE_FLAG_FKEY },
++	{ KEY_F12,	KEY_VOLUMEUP,       APPLE_FLAG_FKEY },
++	{ KEY_UP,	KEY_PAGEUP },
++	{ KEY_DOWN,	KEY_PAGEDOWN },
++	{ KEY_LEFT,	KEY_HOME },
++	{ KEY_RIGHT,	KEY_END },
++	{ }
++};
++
+ static const struct apple_key_translation macbookair_fn_keys[] = {
+ 	{ KEY_BACKSPACE, KEY_DELETE },
+ 	{ KEY_ENTER,	KEY_INSERT },
+@@ -204,7 +226,9 @@ static int hidinput_apple_event(struct hid_device *hid, struct input_dev *input,
+ 	}
+ 
+ 	if (fnmode) {
+-		if (hid->product >= USB_DEVICE_ID_APPLE_WELLSPRING4_ANSI &&
++		if (hid->product == USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021)
++			table = apple2021_fn_keys;
++		else if (hid->product >= USB_DEVICE_ID_APPLE_WELLSPRING4_ANSI &&
+ 				hid->product <= USB_DEVICE_ID_APPLE_WELLSPRING4A_JIS)
+ 			table = macbookair_fn_keys;
+ 		else if (hid->product < 0x21d || hid->product >= 0x300)
+@@ -363,6 +387,9 @@ static void apple_setup_input(struct input_dev *input)
+ 	for (trans = apple_iso_keyboard; trans->from; trans++)
+ 		set_bit(trans->to, input->keybit);
+ 
++	for (trans = apple2021_fn_keys; trans->from; trans++)
++		set_bit(trans->to, input->keybit);
++
+ 	if (swap_fn_leftctrl) {
+ 		for (trans = swapped_fn_leftctrl_keys; trans->from; trans++)
+ 			set_bit(trans->to, input->keybit);
+@@ -624,6 +651,10 @@ static const struct hid_device_id apple_devices[] = {
+ 		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER1_TP_ONLY),
+ 		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021),
++		.driver_data = APPLE_HAS_FN },
++	{ HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021),
++		.driver_data = APPLE_HAS_FN },
+ 
+ 	{ }
+ };
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 6273ab615af89..0732fe6c7a853 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -173,6 +173,7 @@
+ #define USB_DEVICE_ID_APPLE_IRCONTROL3	0x8241
+ #define USB_DEVICE_ID_APPLE_IRCONTROL4	0x8242
+ #define USB_DEVICE_ID_APPLE_IRCONTROL5	0x8243
++#define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021   0x029c
+ 
+ #define USB_VENDOR_ID_ASUS		0x0486
+ #define USB_DEVICE_ID_ASUS_T91MT	0x0185
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 1b3a83fa76168..67953cdae31c6 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -309,6 +309,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_FOUNTAIN_TP_ONLY) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER1_TP_ONLY) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021) },
+ #endif
+ #if IS_ENABLED(CONFIG_HID_APPLEIR)
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_IRCONTROL) },
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 1a7e1d3e7a379..eacbd7eae2e6d 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2088,7 +2088,7 @@ static int wacom_allocate_inputs(struct wacom *wacom)
+ 	return 0;
+ }
+ 
+-static int wacom_register_inputs(struct wacom *wacom)
++static int wacom_setup_inputs(struct wacom *wacom)
+ {
+ 	struct input_dev *pen_input_dev, *touch_input_dev, *pad_input_dev;
+ 	struct wacom_wac *wacom_wac = &(wacom->wacom_wac);
+@@ -2107,10 +2107,6 @@ static int wacom_register_inputs(struct wacom *wacom)
+ 		input_free_device(pen_input_dev);
+ 		wacom_wac->pen_input = NULL;
+ 		pen_input_dev = NULL;
+-	} else {
+-		error = input_register_device(pen_input_dev);
+-		if (error)
+-			goto fail;
+ 	}
+ 
+ 	error = wacom_setup_touch_input_capabilities(touch_input_dev, wacom_wac);
+@@ -2119,10 +2115,6 @@ static int wacom_register_inputs(struct wacom *wacom)
+ 		input_free_device(touch_input_dev);
+ 		wacom_wac->touch_input = NULL;
+ 		touch_input_dev = NULL;
+-	} else {
+-		error = input_register_device(touch_input_dev);
+-		if (error)
+-			goto fail;
+ 	}
+ 
+ 	error = wacom_setup_pad_input_capabilities(pad_input_dev, wacom_wac);
+@@ -2131,7 +2123,34 @@ static int wacom_register_inputs(struct wacom *wacom)
+ 		input_free_device(pad_input_dev);
+ 		wacom_wac->pad_input = NULL;
+ 		pad_input_dev = NULL;
+-	} else {
++	}
++
++	return 0;
++}
++
++static int wacom_register_inputs(struct wacom *wacom)
++{
++	struct input_dev *pen_input_dev, *touch_input_dev, *pad_input_dev;
++	struct wacom_wac *wacom_wac = &(wacom->wacom_wac);
++	int error = 0;
++
++	pen_input_dev = wacom_wac->pen_input;
++	touch_input_dev = wacom_wac->touch_input;
++	pad_input_dev = wacom_wac->pad_input;
++
++	if (pen_input_dev) {
++		error = input_register_device(pen_input_dev);
++		if (error)
++			goto fail;
++	}
++
++	if (touch_input_dev) {
++		error = input_register_device(touch_input_dev);
++		if (error)
++			goto fail;
++	}
++
++	if (pad_input_dev) {
+ 		error = input_register_device(pad_input_dev);
+ 		if (error)
+ 			goto fail;
+@@ -2381,6 +2400,20 @@ static int wacom_parse_and_register(struct wacom *wacom, bool wireless)
+ 			goto fail;
+ 	}
+ 
++	error = wacom_setup_inputs(wacom);
++	if (error)
++		goto fail;
++
++	if (features->type == HID_GENERIC)
++		connect_mask |= HID_CONNECT_DRIVER;
++
++	/* Regular HID work starts now */
++	error = hid_hw_start(hdev, connect_mask);
++	if (error) {
++		hid_err(hdev, "hw start failed\n");
++		goto fail;
++	}
++
+ 	error = wacom_register_inputs(wacom);
+ 	if (error)
+ 		goto fail;
+@@ -2395,16 +2428,6 @@ static int wacom_parse_and_register(struct wacom *wacom, bool wireless)
+ 			goto fail;
+ 	}
+ 
+-	if (features->type == HID_GENERIC)
+-		connect_mask |= HID_CONNECT_DRIVER;
+-
+-	/* Regular HID work starts now */
+-	error = hid_hw_start(hdev, connect_mask);
+-	if (error) {
+-		hid_err(hdev, "hw start failed\n");
+-		goto fail;
+-	}
+-
+ 	if (!wireless) {
+ 		/* Note that if query fails it is not a hard failure */
+ 		wacom_query_tablet_data(wacom);
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 726a5d76615d2..c454768ffb490 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2540,7 +2540,14 @@ static void wacom_wac_pen_report(struct hid_device *hdev,
+ 				wacom_wac->hid_data.tipswitch);
+ 		input_report_key(input, wacom_wac->tool[0], sense);
+ 		if (wacom_wac->serial[0]) {
+-			input_event(input, EV_MSC, MSC_SERIAL, wacom_wac->serial[0]);
++			/*
++			 * xf86-input-wacom does not accept a serial number
++			 * of '0'. Report the low 32 bits if possible, but
++			 * if they are zero, report the upper ones instead.
++			 */
++			__u32 serial_lo = wacom_wac->serial[0] & 0xFFFFFFFFu;
++			__u32 serial_hi = wacom_wac->serial[0] >> 32;
++			input_event(input, EV_MSC, MSC_SERIAL, (int)(serial_lo ? serial_lo : serial_hi));
+ 			input_report_abs(input, ABS_MISC, sense ? id : 0);
+ 		}
+ 
+diff --git a/drivers/hwmon/aspeed-pwm-tacho.c b/drivers/hwmon/aspeed-pwm-tacho.c
+index 3d8239fd66ed6..3dc97041a704c 100644
+--- a/drivers/hwmon/aspeed-pwm-tacho.c
++++ b/drivers/hwmon/aspeed-pwm-tacho.c
+@@ -194,6 +194,8 @@ struct aspeed_pwm_tacho_data {
+ 	u8 fan_tach_ch_source[16];
+ 	struct aspeed_cooling_device *cdev[8];
+ 	const struct attribute_group *groups[3];
++	/* protects access to shared ASPEED_PTCR_RESULT */
++	struct mutex tach_lock;
+ };
+ 
+ enum type { TYPEM, TYPEN, TYPEO };
+@@ -528,6 +530,8 @@ static int aspeed_get_fan_tach_ch_rpm(struct aspeed_pwm_tacho_data *priv,
+ 	u8 fan_tach_ch_source, type, mode, both;
+ 	int ret;
+ 
++	mutex_lock(&priv->tach_lock);
++
+ 	regmap_write(priv->regmap, ASPEED_PTCR_TRIGGER, 0);
+ 	regmap_write(priv->regmap, ASPEED_PTCR_TRIGGER, 0x1 << fan_tach_ch);
+ 
+@@ -545,6 +549,8 @@ static int aspeed_get_fan_tach_ch_rpm(struct aspeed_pwm_tacho_data *priv,
+ 		ASPEED_RPM_STATUS_SLEEP_USEC,
+ 		usec);
+ 
++	mutex_unlock(&priv->tach_lock);
++
+ 	/* return -ETIMEDOUT if we didn't get an answer. */
+ 	if (ret)
+ 		return ret;
+@@ -904,6 +910,7 @@ static int aspeed_pwm_tacho_probe(struct platform_device *pdev)
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
++	mutex_init(&priv->tach_lock);
+ 	priv->regmap = devm_regmap_init(dev, NULL, (__force void *)regs,
+ 			&aspeed_pwm_tacho_regmap_config);
+ 	if (IS_ERR(priv->regmap))
+diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
+index 5b2057ce5a59d..d67d972d18aa2 100644
+--- a/drivers/hwmon/coretemp.c
++++ b/drivers/hwmon/coretemp.c
+@@ -380,7 +380,7 @@ static int get_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
+ }
+ 
+ static int create_core_attrs(struct temp_data *tdata, struct device *dev,
+-			     int attr_no)
++			     int index)
+ {
+ 	int i;
+ 	static ssize_t (*const rd_ptr[TOTAL_ATTRS]) (struct device *dev,
+@@ -392,13 +392,20 @@ static int create_core_attrs(struct temp_data *tdata, struct device *dev,
+ 	};
+ 
+ 	for (i = 0; i < tdata->attr_size; i++) {
++		/*
++		 * We map the attr number to core id of the CPU
++		 * The attr number is always core id + 2
++		 * The Pkgtemp will always show up as temp1_*, if available
++		 */
++		int attr_no = tdata->is_pkg_data ? 1 : tdata->cpu_core_id + 2;
++
+ 		snprintf(tdata->attr_name[i], CORETEMP_NAME_LENGTH,
+ 			 "temp%d_%s", attr_no, suffixes[i]);
+ 		sysfs_attr_init(&tdata->sd_attrs[i].dev_attr.attr);
+ 		tdata->sd_attrs[i].dev_attr.attr.name = tdata->attr_name[i];
+ 		tdata->sd_attrs[i].dev_attr.attr.mode = 0444;
+ 		tdata->sd_attrs[i].dev_attr.show = rd_ptr[i];
+-		tdata->sd_attrs[i].index = attr_no;
++		tdata->sd_attrs[i].index = index;
+ 		tdata->attrs[i] = &tdata->sd_attrs[i].dev_attr.attr;
+ 	}
+ 	tdata->attr_group.attrs = tdata->attrs;
+@@ -456,27 +463,22 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
+ 	struct platform_data *pdata = platform_get_drvdata(pdev);
+ 	struct cpuinfo_x86 *c = &cpu_data(cpu);
+ 	u32 eax, edx;
+-	int err, index, attr_no;
++	int err, index;
+ 
+ 	/*
+-	 * Find attr number for sysfs:
+-	 * We map the attr number to core id of the CPU
+-	 * The attr number is always core id + 2
+-	 * The Pkgtemp will always show up as temp1_*, if available
++	 * Get the index of tdata in pdata->core_data[]
++	 * tdata for package: pdata->core_data[1]
++	 * tdata for core: pdata->core_data[2] .. pdata->core_data[NUM_REAL_CORES + 1]
+ 	 */
+ 	if (pkg_flag) {
+-		attr_no = PKG_SYSFS_ATTR_NO;
++		index = PKG_SYSFS_ATTR_NO;
+ 	} else {
+-		index = ida_alloc(&pdata->ida, GFP_KERNEL);
++		index = ida_alloc_max(&pdata->ida, NUM_REAL_CORES - 1, GFP_KERNEL);
+ 		if (index < 0)
+ 			return index;
+-		pdata->cpu_map[index] = topology_core_id(cpu);
+-		attr_no = index + BASE_SYSFS_ATTR_NO;
+-	}
+ 
+-	if (attr_no > MAX_CORE_DATA - 1) {
+-		err = -ERANGE;
+-		goto ida_free;
++		pdata->cpu_map[index] = topology_core_id(cpu);
++		index += BASE_SYSFS_ATTR_NO;
+ 	}
+ 
+ 	tdata = init_temp_data(cpu, pkg_flag);
+@@ -508,20 +510,20 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
+ 		}
+ 	}
+ 
+-	pdata->core_data[attr_no] = tdata;
++	pdata->core_data[index] = tdata;
+ 
+ 	/* Create sysfs interfaces */
+-	err = create_core_attrs(tdata, pdata->hwmon_dev, attr_no);
++	err = create_core_attrs(tdata, pdata->hwmon_dev, index);
+ 	if (err)
+ 		goto exit_free;
+ 
+ 	return 0;
+ exit_free:
+-	pdata->core_data[attr_no] = NULL;
++	pdata->core_data[index] = NULL;
+ 	kfree(tdata);
+ ida_free:
+ 	if (!pkg_flag)
+-		ida_free(&pdata->ida, index);
++		ida_free(&pdata->ida, index - BASE_SYSFS_ATTR_NO);
+ 	return err;
+ }
+ 
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index cb8f560225928..d6b945f5b8872 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -540,12 +540,13 @@ static int i801_block_transaction_by_block(struct i801_priv *priv,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	inb_p(SMBHSTCNT(priv)); /* reset the data buffer index */
++	/* Set block buffer mode */
++	outb_p(inb_p(SMBAUXCTL(priv)) | SMBAUXCTL_E32B, SMBAUXCTL(priv));
+ 
+-	/* Use 32-byte buffer to process this transaction */
+ 	if (read_write == I2C_SMBUS_WRITE) {
+ 		len = data->block[0];
+ 		outb_p(len, SMBHSTDAT0(priv));
++		inb_p(SMBHSTCNT(priv));	/* reset the data buffer index */
+ 		for (i = 0; i < len; i++)
+ 			outb_p(data->block[i+1], SMBBLKDAT(priv));
+ 	}
+@@ -561,6 +562,7 @@ static int i801_block_transaction_by_block(struct i801_priv *priv,
+ 			return -EPROTO;
+ 
+ 		data->block[0] = len;
++		inb_p(SMBHSTCNT(priv));	/* reset the data buffer index */
+ 		for (i = 0; i < len; i++)
+ 			data->block[i + 1] = inb_p(SMBBLKDAT(priv));
+ 	}
+@@ -780,14 +782,6 @@ static int i801_block_transaction_byte_by_byte(struct i801_priv *priv,
+ 	return i801_check_post(priv, status);
+ }
+ 
+-static int i801_set_block_buffer_mode(struct i801_priv *priv)
+-{
+-	outb_p(inb_p(SMBAUXCTL(priv)) | SMBAUXCTL_E32B, SMBAUXCTL(priv));
+-	if ((inb_p(SMBAUXCTL(priv)) & SMBAUXCTL_E32B) == 0)
+-		return -EIO;
+-	return 0;
+-}
+-
+ /* Block transaction function */
+ static int i801_block_transaction(struct i801_priv *priv,
+ 				  union i2c_smbus_data *data, char read_write,
+@@ -817,9 +811,8 @@ static int i801_block_transaction(struct i801_priv *priv,
+ 	/* Experience has shown that the block buffer can only be used for
+ 	   SMBus (not I2C) block transactions, even though the datasheet
+ 	   doesn't mention this limitation. */
+-	if ((priv->features & FEATURE_BLOCK_BUFFER)
+-	 && command != I2C_SMBUS_I2C_BLOCK_DATA
+-	 && i801_set_block_buffer_mode(priv) == 0)
++	if ((priv->features & FEATURE_BLOCK_BUFFER) &&
++	    command != I2C_SMBUS_I2C_BLOCK_DATA)
+ 		result = i801_block_transaction_by_block(priv, data,
+ 							 read_write,
+ 							 command, hwpec);
+diff --git a/drivers/i3c/master/i3c-master-cdns.c b/drivers/i3c/master/i3c-master-cdns.c
+index 6b9df33ac5618..6b126fce5a9e0 100644
+--- a/drivers/i3c/master/i3c-master-cdns.c
++++ b/drivers/i3c/master/i3c-master-cdns.c
+@@ -77,7 +77,8 @@
+ #define PRESCL_CTRL0			0x14
+ #define PRESCL_CTRL0_I2C(x)		((x) << 16)
+ #define PRESCL_CTRL0_I3C(x)		(x)
+-#define PRESCL_CTRL0_MAX		GENMASK(9, 0)
++#define PRESCL_CTRL0_I3C_MAX		GENMASK(9, 0)
++#define PRESCL_CTRL0_I2C_MAX		GENMASK(15, 0)
+ 
+ #define PRESCL_CTRL1			0x18
+ #define PRESCL_CTRL1_PP_LOW_MASK	GENMASK(15, 8)
+@@ -1234,7 +1235,7 @@ static int cdns_i3c_master_bus_init(struct i3c_master_controller *m)
+ 		return -EINVAL;
+ 
+ 	pres = DIV_ROUND_UP(sysclk_rate, (bus->scl_rate.i3c * 4)) - 1;
+-	if (pres > PRESCL_CTRL0_MAX)
++	if (pres > PRESCL_CTRL0_I3C_MAX)
+ 		return -ERANGE;
+ 
+ 	bus->scl_rate.i3c = sysclk_rate / ((pres + 1) * 4);
+@@ -1247,7 +1248,7 @@ static int cdns_i3c_master_bus_init(struct i3c_master_controller *m)
+ 	max_i2cfreq = bus->scl_rate.i2c;
+ 
+ 	pres = (sysclk_rate / (max_i2cfreq * 5)) - 1;
+-	if (pres > PRESCL_CTRL0_MAX)
++	if (pres > PRESCL_CTRL0_I2C_MAX)
+ 		return -ERANGE;
+ 
+ 	bus->scl_rate.i2c = sysclk_rate / ((pres + 1) * 5);
+diff --git a/drivers/iio/accel/Kconfig b/drivers/iio/accel/Kconfig
+index 8acf277b8b258..1c82840ac32af 100644
+--- a/drivers/iio/accel/Kconfig
++++ b/drivers/iio/accel/Kconfig
+@@ -128,10 +128,12 @@ config BMA400
+ 
+ config BMA400_I2C
+ 	tristate
++	select REGMAP_I2C
+ 	depends on BMA400
+ 
+ config BMA400_SPI
+ 	tristate
++	select REGMAP_SPI
+ 	depends on BMA400
+ 
+ config BMC150_ACCEL
+diff --git a/drivers/iio/adc/ad7091r-base.c b/drivers/iio/adc/ad7091r-base.c
+index 811f04448d8d9..76002b91c86a4 100644
+--- a/drivers/iio/adc/ad7091r-base.c
++++ b/drivers/iio/adc/ad7091r-base.c
+@@ -6,6 +6,7 @@
+  */
+ 
+ #include <linux/bitops.h>
++#include <linux/bitfield.h>
+ #include <linux/iio/events.h>
+ #include <linux/iio/iio.h>
+ #include <linux/interrupt.h>
+@@ -28,6 +29,7 @@
+ #define AD7091R_REG_RESULT_CONV_RESULT(x)   ((x) & 0xfff)
+ 
+ /* AD7091R_REG_CONF */
++#define AD7091R_REG_CONF_ALERT_EN   BIT(4)
+ #define AD7091R_REG_CONF_AUTO   BIT(8)
+ #define AD7091R_REG_CONF_CMD    BIT(10)
+ 
+@@ -49,6 +51,27 @@ struct ad7091r_state {
+ 	struct mutex lock; /*lock to prevent concurent reads */
+ };
+ 
++const struct iio_event_spec ad7091r_events[] = {
++	{
++		.type = IIO_EV_TYPE_THRESH,
++		.dir = IIO_EV_DIR_RISING,
++		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
++				 BIT(IIO_EV_INFO_ENABLE),
++	},
++	{
++		.type = IIO_EV_TYPE_THRESH,
++		.dir = IIO_EV_DIR_FALLING,
++		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
++				 BIT(IIO_EV_INFO_ENABLE),
++	},
++	{
++		.type = IIO_EV_TYPE_THRESH,
++		.dir = IIO_EV_DIR_EITHER,
++		.mask_separate = BIT(IIO_EV_INFO_HYSTERESIS),
++	},
++};
++EXPORT_SYMBOL_NS_GPL(ad7091r_events, IIO_AD7091R);
++
+ static int ad7091r_set_mode(struct ad7091r_state *st, enum ad7091r_mode mode)
+ {
+ 	int ret, conf;
+@@ -168,8 +191,142 @@ static int ad7091r_read_raw(struct iio_dev *iio_dev,
+ 	return ret;
+ }
+ 
++static int ad7091r_read_event_config(struct iio_dev *indio_dev,
++				     const struct iio_chan_spec *chan,
++				     enum iio_event_type type,
++				     enum iio_event_direction dir)
++{
++	struct ad7091r_state *st = iio_priv(indio_dev);
++	int val, ret;
++
++	switch (dir) {
++	case IIO_EV_DIR_RISING:
++		ret = regmap_read(st->map,
++				  AD7091R_REG_CH_HIGH_LIMIT(chan->channel),
++				  &val);
++		if (ret)
++			return ret;
++		return val != AD7091R_HIGH_LIMIT;
++	case IIO_EV_DIR_FALLING:
++		ret = regmap_read(st->map,
++				  AD7091R_REG_CH_LOW_LIMIT(chan->channel),
++				  &val);
++		if (ret)
++			return ret;
++		return val != AD7091R_LOW_LIMIT;
++	default:
++		return -EINVAL;
++	}
++}
++
++static int ad7091r_write_event_config(struct iio_dev *indio_dev,
++				      const struct iio_chan_spec *chan,
++				      enum iio_event_type type,
++				      enum iio_event_direction dir, int state)
++{
++	struct ad7091r_state *st = iio_priv(indio_dev);
++
++	if (state) {
++		return regmap_set_bits(st->map, AD7091R_REG_CONF,
++				       AD7091R_REG_CONF_ALERT_EN);
++	} else {
++		/*
++		 * Set thresholds either to 0 or to 2^12 - 1 as appropriate to
++		 * prevent alerts and thus disable event generation.
++		 */
++		switch (dir) {
++		case IIO_EV_DIR_RISING:
++			return regmap_write(st->map,
++					    AD7091R_REG_CH_HIGH_LIMIT(chan->channel),
++					    AD7091R_HIGH_LIMIT);
++		case IIO_EV_DIR_FALLING:
++			return regmap_write(st->map,
++					    AD7091R_REG_CH_LOW_LIMIT(chan->channel),
++					    AD7091R_LOW_LIMIT);
++		default:
++			return -EINVAL;
++		}
++	}
++}
++
++static int ad7091r_read_event_value(struct iio_dev *indio_dev,
++				    const struct iio_chan_spec *chan,
++				    enum iio_event_type type,
++				    enum iio_event_direction dir,
++				    enum iio_event_info info, int *val, int *val2)
++{
++	struct ad7091r_state *st = iio_priv(indio_dev);
++	int ret;
++
++	switch (info) {
++	case IIO_EV_INFO_VALUE:
++		switch (dir) {
++		case IIO_EV_DIR_RISING:
++			ret = regmap_read(st->map,
++					  AD7091R_REG_CH_HIGH_LIMIT(chan->channel),
++					  val);
++			if (ret)
++				return ret;
++			return IIO_VAL_INT;
++		case IIO_EV_DIR_FALLING:
++			ret = regmap_read(st->map,
++					  AD7091R_REG_CH_LOW_LIMIT(chan->channel),
++					  val);
++			if (ret)
++				return ret;
++			return IIO_VAL_INT;
++		default:
++			return -EINVAL;
++		}
++	case IIO_EV_INFO_HYSTERESIS:
++		ret = regmap_read(st->map,
++				  AD7091R_REG_CH_HYSTERESIS(chan->channel),
++				  val);
++		if (ret)
++			return ret;
++		return IIO_VAL_INT;
++	default:
++		return -EINVAL;
++	}
++}
++
++static int ad7091r_write_event_value(struct iio_dev *indio_dev,
++				     const struct iio_chan_spec *chan,
++				     enum iio_event_type type,
++				     enum iio_event_direction dir,
++				     enum iio_event_info info, int val, int val2)
++{
++	struct ad7091r_state *st = iio_priv(indio_dev);
++
++	switch (info) {
++	case IIO_EV_INFO_VALUE:
++		switch (dir) {
++		case IIO_EV_DIR_RISING:
++			return regmap_write(st->map,
++					    AD7091R_REG_CH_HIGH_LIMIT(chan->channel),
++					    val);
++		case IIO_EV_DIR_FALLING:
++			return regmap_write(st->map,
++					    AD7091R_REG_CH_LOW_LIMIT(chan->channel),
++					    val);
++		default:
++			return -EINVAL;
++		}
++	case IIO_EV_INFO_HYSTERESIS:
++		return regmap_write(st->map,
++				    AD7091R_REG_CH_HYSTERESIS(chan->channel),
++				    val);
++	default:
++		return -EINVAL;
++	}
++}
++
+ static const struct iio_info ad7091r_info = {
+ 	.read_raw = ad7091r_read_raw,
++	.read_event_config = &ad7091r_read_event_config,
++	.write_event_config = &ad7091r_write_event_config,
++	.read_event_value = &ad7091r_read_event_value,
++	.write_event_value = &ad7091r_write_event_value,
+ };
+ 
+ static irqreturn_t ad7091r_event_handler(int irq, void *private)
+@@ -232,6 +389,11 @@ int ad7091r_probe(struct device *dev, const char *name,
+ 	iio_dev->channels = chip_info->channels;
+ 
+ 	if (irq) {
++		ret = regmap_update_bits(st->map, AD7091R_REG_CONF,
++					 AD7091R_REG_CONF_ALERT_EN, BIT(4));
++		if (ret)
++			return ret;
++
+ 		ret = devm_request_threaded_irq(dev, irq, NULL,
+ 				ad7091r_event_handler,
+ 				IRQF_TRIGGER_FALLING | IRQF_ONESHOT, name, iio_dev);
+@@ -243,7 +405,14 @@ int ad7091r_probe(struct device *dev, const char *name,
+ 	if (IS_ERR(st->vref)) {
+ 		if (PTR_ERR(st->vref) == -EPROBE_DEFER)
+ 			return -EPROBE_DEFER;
++
+ 		st->vref = NULL;
++		/* Enable internal vref */
++		ret = regmap_set_bits(st->map, AD7091R_REG_CONF,
++				      AD7091R_REG_CONF_INT_VREF);
++		if (ret)
++			return dev_err_probe(st->dev, ret,
++					     "Error on enable internal reference\n");
+ 	} else {
+ 		ret = regulator_enable(st->vref);
+ 		if (ret)
+@@ -260,7 +429,7 @@ int ad7091r_probe(struct device *dev, const char *name,
+ 
+ 	return devm_iio_device_register(dev, iio_dev);
+ }
+-EXPORT_SYMBOL_GPL(ad7091r_probe);
++EXPORT_SYMBOL_NS_GPL(ad7091r_probe, IIO_AD7091R);
+ 
+ static bool ad7091r_writeable_reg(struct device *dev, unsigned int reg)
+ {
+@@ -290,7 +459,7 @@ const struct regmap_config ad7091r_regmap_config = {
+ 	.writeable_reg = ad7091r_writeable_reg,
+ 	.volatile_reg = ad7091r_volatile_reg,
+ };
+-EXPORT_SYMBOL_GPL(ad7091r_regmap_config);
++EXPORT_SYMBOL_NS_GPL(ad7091r_regmap_config, IIO_AD7091R);
+ 
+ MODULE_AUTHOR("Beniamin Bia <beniamin.bia@analog.com>");
+ MODULE_DESCRIPTION("Analog Devices AD7091Rx multi-channel converters");
+diff --git a/drivers/iio/adc/ad7091r-base.h b/drivers/iio/adc/ad7091r-base.h
+index 509748aef9b19..b9e1c8bf3440a 100644
+--- a/drivers/iio/adc/ad7091r-base.h
++++ b/drivers/iio/adc/ad7091r-base.h
+@@ -8,6 +8,12 @@
+ #ifndef __DRIVERS_IIO_ADC_AD7091R_BASE_H__
+ #define __DRIVERS_IIO_ADC_AD7091R_BASE_H__
+ 
++#define AD7091R_REG_CONF_INT_VREF	BIT(0)
++
++/* AD7091R_REG_CH_LIMIT */
++#define AD7091R_HIGH_LIMIT		0xFFF
++#define AD7091R_LOW_LIMIT		0x0
++
+ struct device;
+ struct ad7091r_state;
+ 
+@@ -17,6 +23,8 @@ struct ad7091r_chip_info {
+ 	unsigned int vref_mV;
+ };
+ 
++extern const struct iio_event_spec ad7091r_events[3];
++
+ extern const struct regmap_config ad7091r_regmap_config;
+ 
+ int ad7091r_probe(struct device *dev, const char *name,
+diff --git a/drivers/iio/adc/ad7091r5.c b/drivers/iio/adc/ad7091r5.c
+index 9665679c3ea6d..12d475463945d 100644
+--- a/drivers/iio/adc/ad7091r5.c
++++ b/drivers/iio/adc/ad7091r5.c
+@@ -12,26 +12,6 @@
+ 
+ #include "ad7091r-base.h"
+ 
+-static const struct iio_event_spec ad7091r5_events[] = {
+-	{
+-		.type = IIO_EV_TYPE_THRESH,
+-		.dir = IIO_EV_DIR_RISING,
+-		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
+-				 BIT(IIO_EV_INFO_ENABLE),
+-	},
+-	{
+-		.type = IIO_EV_TYPE_THRESH,
+-		.dir = IIO_EV_DIR_FALLING,
+-		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
+-				 BIT(IIO_EV_INFO_ENABLE),
+-	},
+-	{
+-		.type = IIO_EV_TYPE_THRESH,
+-		.dir = IIO_EV_DIR_EITHER,
+-		.mask_separate = BIT(IIO_EV_INFO_HYSTERESIS),
+-	},
+-};
+-
+ #define AD7091R_CHANNEL(idx, bits, ev, num_ev) { \
+ 	.type = IIO_VOLTAGE, \
+ 	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
+@@ -44,10 +24,10 @@ static const struct iio_event_spec ad7091r5_events[] = {
+ 	.scan_type.realbits = bits, \
+ }
+ static const struct iio_chan_spec ad7091r5_channels_irq[] = {
+-	AD7091R_CHANNEL(0, 12, ad7091r5_events, ARRAY_SIZE(ad7091r5_events)),
+-	AD7091R_CHANNEL(1, 12, ad7091r5_events, ARRAY_SIZE(ad7091r5_events)),
+-	AD7091R_CHANNEL(2, 12, ad7091r5_events, ARRAY_SIZE(ad7091r5_events)),
+-	AD7091R_CHANNEL(3, 12, ad7091r5_events, ARRAY_SIZE(ad7091r5_events)),
++	AD7091R_CHANNEL(0, 12, ad7091r_events, ARRAY_SIZE(ad7091r_events)),
++	AD7091R_CHANNEL(1, 12, ad7091r_events, ARRAY_SIZE(ad7091r_events)),
++	AD7091R_CHANNEL(2, 12, ad7091r_events, ARRAY_SIZE(ad7091r_events)),
++	AD7091R_CHANNEL(3, 12, ad7091r_events, ARRAY_SIZE(ad7091r_events)),
+ };
+ 
+ static const struct iio_chan_spec ad7091r5_channels_noirq[] = {
+@@ -111,3 +91,4 @@ module_i2c_driver(ad7091r5_driver);
+ MODULE_AUTHOR("Beniamin Bia <beniamin.bia@analog.com>");
+ MODULE_DESCRIPTION("Analog Devices AD7091R5 multi-channel ADC driver");
+ MODULE_LICENSE("GPL v2");
++MODULE_IMPORT_NS(IIO_AD7091R);
+diff --git a/drivers/iio/magnetometer/rm3100-core.c b/drivers/iio/magnetometer/rm3100-core.c
+index 720234a91db11..e7690d38621ad 100644
+--- a/drivers/iio/magnetometer/rm3100-core.c
++++ b/drivers/iio/magnetometer/rm3100-core.c
+@@ -538,6 +538,7 @@ int rm3100_common_probe(struct device *dev, struct regmap *regmap, int irq)
+ 	struct rm3100_data *data;
+ 	unsigned int tmp;
+ 	int ret;
++	int samp_rate_index;
+ 
+ 	indio_dev = devm_iio_device_alloc(dev, sizeof(*data));
+ 	if (!indio_dev)
+@@ -596,9 +597,14 @@ int rm3100_common_probe(struct device *dev, struct regmap *regmap, int irq)
+ 	ret = regmap_read(regmap, RM3100_REG_TMRC, &tmp);
+ 	if (ret < 0)
+ 		return ret;
++
++	samp_rate_index = tmp - RM3100_TMRC_OFFSET;
++	if (samp_rate_index < 0 || samp_rate_index >=  RM3100_SAMP_NUM) {
++		dev_err(dev, "The value read from RM3100_REG_TMRC is invalid!\n");
++		return -EINVAL;
++	}
+ 	/* Initializing max wait time, which is double conversion time. */
+-	data->conversion_time = rm3100_samp_rates[tmp - RM3100_TMRC_OFFSET][2]
+-				* 2;
++	data->conversion_time = rm3100_samp_rates[samp_rate_index][2] * 2;
+ 
+ 	/* Cycle count values may not be what we want. */
+ 	if ((tmp - RM3100_TMRC_OFFSET) == 0)
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+index 86e4ed64e4e21..e009123c703b0 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+@@ -542,21 +542,18 @@ static int ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
+ 			/* SM supports sendonly-fullmember, otherwise fallback to full-member */
+ 			rec.join_state = SENDONLY_FULLMEMBER_JOIN;
+ 	}
+-	spin_unlock_irq(&priv->lock);
+ 
+ 	multicast = ib_sa_join_multicast(&ipoib_sa_client, priv->ca, priv->port,
+-					 &rec, comp_mask, GFP_KERNEL,
++					 &rec, comp_mask, GFP_ATOMIC,
+ 					 ipoib_mcast_join_complete, mcast);
+-	spin_lock_irq(&priv->lock);
+ 	if (IS_ERR(multicast)) {
+ 		ret = PTR_ERR(multicast);
+ 		ipoib_warn(priv, "ib_sa_join_multicast failed, status %d\n", ret);
+ 		/* Requeue this join task with a backoff delay */
+ 		__ipoib_mcast_schedule_join_thread(priv, mcast, 1);
+ 		clear_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
+-		spin_unlock_irq(&priv->lock);
+ 		complete(&mcast->done);
+-		spin_lock_irq(&priv->lock);
++		return ret;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/input/keyboard/atkbd.c b/drivers/input/keyboard/atkbd.c
+index 4912582c54dad..3e73eb465e18c 100644
+--- a/drivers/input/keyboard/atkbd.c
++++ b/drivers/input/keyboard/atkbd.c
+@@ -802,7 +802,6 @@ static int atkbd_probe(struct atkbd *atkbd)
+ {
+ 	struct ps2dev *ps2dev = &atkbd->ps2dev;
+ 	unsigned char param[2];
+-	bool skip_getid;
+ 
+ /*
+  * Some systems, where the bit-twiddling when testing the io-lines of the
+@@ -816,6 +815,11 @@ static int atkbd_probe(struct atkbd *atkbd)
+ 				 "keyboard reset failed on %s\n",
+ 				 ps2dev->serio->phys);
+ 
++	if (atkbd_skip_getid(atkbd)) {
++		atkbd->id = 0xab83;
++		return 0;
++	}
++
+ /*
+  * Then we check the keyboard ID. We should get 0xab83 under normal conditions.
+  * Some keyboards report different values, but the first byte is always 0xab or
+@@ -824,18 +828,17 @@ static int atkbd_probe(struct atkbd *atkbd)
+  */
+ 
+ 	param[0] = param[1] = 0xa5;	/* initialize with invalid values */
+-	skip_getid = atkbd_skip_getid(atkbd);
+-	if (skip_getid || ps2_command(ps2dev, param, ATKBD_CMD_GETID)) {
++	if (ps2_command(ps2dev, param, ATKBD_CMD_GETID)) {
+ 
+ /*
+- * If the get ID command was skipped or failed, we check if we can at least set
++ * If the get ID command failed, we check if we can at least set
+  * the LEDs on the keyboard. This should work on every keyboard out there.
+  * It also turns the LEDs off, which we want anyway.
+  */
+ 		param[0] = 0;
+ 		if (ps2_command(ps2dev, param, ATKBD_CMD_SETLEDS))
+ 			return -1;
+-		atkbd->id = skip_getid ? 0xab83 : 0xabba;
++		atkbd->id = 0xabba;
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index 124ab98ea43a4..cd21c92a6b2cd 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -1179,6 +1179,12 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 					SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP |
+ 					SERIO_QUIRK_NOPNP)
+ 	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "NS5x_7xPU"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOAUX)
++	},
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "NJ50_70CU"),
+diff --git a/drivers/irqchip/irq-brcmstb-l2.c b/drivers/irqchip/irq-brcmstb-l2.c
+index a4aee16db5314..a46a4f83682f8 100644
+--- a/drivers/irqchip/irq-brcmstb-l2.c
++++ b/drivers/irqchip/irq-brcmstb-l2.c
+@@ -2,7 +2,7 @@
+ /*
+  * Generic Broadcom Set Top Box Level 2 Interrupt controller driver
+  *
+- * Copyright (C) 2014-2017 Broadcom
++ * Copyright (C) 2014-2024 Broadcom
+  */
+ 
+ #define pr_fmt(fmt)	KBUILD_MODNAME	": " fmt
+@@ -113,6 +113,9 @@ static void brcmstb_l2_intc_irq_handle(struct irq_desc *desc)
+ 		generic_handle_irq(irq_linear_revmap(b->domain, irq));
+ 	} while (status);
+ out:
++	/* Don't ack parent before all device writes are done */
++	wmb();
++
+ 	chained_irq_exit(chip, desc);
+ }
+ 
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index f1fa98e5ea13f..c1f3cd82caf33 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -3782,8 +3782,9 @@ static int its_vpe_set_affinity(struct irq_data *d,
+ 				bool force)
+ {
+ 	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+-	int from, cpu = cpumask_first(mask_val);
++	struct cpumask common, *table_mask;
+ 	unsigned long flags;
++	int from, cpu;
+ 
+ 	/*
+ 	 * Changing affinity is mega expensive, so let's be as lazy as
+@@ -3799,19 +3800,22 @@ static int its_vpe_set_affinity(struct irq_data *d,
+ 	 * taken on any vLPI handling path that evaluates vpe->col_idx.
+ 	 */
+ 	from = vpe_to_cpuid_lock(vpe, &flags);
+-	if (from == cpu)
+-		goto out;
+-
+-	vpe->col_idx = cpu;
++	table_mask = gic_data_rdist_cpu(from)->vpe_table_mask;
+ 
+ 	/*
+-	 * GICv4.1 allows us to skip VMOVP if moving to a cpu whose RD
+-	 * is sharing its VPE table with the current one.
++	 * If we are offered another CPU in the same GICv4.1 ITS
++	 * affinity, pick this one. Otherwise, any CPU will do.
+ 	 */
+-	if (gic_data_rdist_cpu(cpu)->vpe_table_mask &&
+-	    cpumask_test_cpu(from, gic_data_rdist_cpu(cpu)->vpe_table_mask))
++	if (table_mask && cpumask_and(&common, mask_val, table_mask))
++		cpu = cpumask_test_cpu(from, &common) ? from : cpumask_first(&common);
++	else
++		cpu = cpumask_first(mask_val);
++
++	if (from == cpu)
+ 		goto out;
+ 
++	vpe->col_idx = cpu;
++
+ 	its_send_vmovp(vpe);
+ 	its_vpe_db_proxy_move(vpe, from, cpu);
+ 
+diff --git a/drivers/leds/trigger/ledtrig-panic.c b/drivers/leds/trigger/ledtrig-panic.c
+index 5751cd032f9db..4bf232465dfd0 100644
+--- a/drivers/leds/trigger/ledtrig-panic.c
++++ b/drivers/leds/trigger/ledtrig-panic.c
+@@ -63,10 +63,13 @@ static long led_panic_blink(int state)
+ 
+ static int __init ledtrig_panic_init(void)
+ {
++	led_trigger_register_simple("panic", &trigger);
++	if (!trigger)
++		return -ENOMEM;
++
+ 	atomic_notifier_chain_register(&panic_notifier_list,
+ 				       &led_trigger_panic_nb);
+ 
+-	led_trigger_register_simple("panic", &trigger);
+ 	panic_blink = led_panic_blink;
+ 	return 0;
+ }
+diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
+index 3db92d9a030b9..ff73b2c17be53 100644
+--- a/drivers/md/dm-core.h
++++ b/drivers/md/dm-core.h
+@@ -19,6 +19,8 @@
+ #include "dm.h"
+ 
+ #define DM_RESERVED_MAX_IOS		1024
++#define DM_MAX_TARGETS			1048576
++#define DM_MAX_TARGET_PARAMS		1024
+ 
+ struct dm_kobject_holder {
+ 	struct kobject kobj;
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index 5f9b9178c647e..4184c8a2d4977 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1760,7 +1760,8 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
+ 	if (copy_from_user(param_kernel, user, minimum_data_size))
+ 		return -EFAULT;
+ 
+-	if (param_kernel->data_size < minimum_data_size)
++	if (unlikely(param_kernel->data_size < minimum_data_size) ||
++	    unlikely(param_kernel->data_size > DM_MAX_TARGETS * DM_MAX_TARGET_PARAMS))
+ 		return -EINVAL;
+ 
+ 	secure_data = param_kernel->flags & DM_SECURE_DATA_FLAG;
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 5c590895c14c3..31bcdcd93c7a8 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -144,7 +144,12 @@ static int alloc_targets(struct dm_table *t, unsigned int num)
+ int dm_table_create(struct dm_table **result, fmode_t mode,
+ 		    unsigned num_targets, struct mapped_device *md)
+ {
+-	struct dm_table *t = kzalloc(sizeof(*t), GFP_KERNEL);
++	struct dm_table *t;
++
++	if (num_targets > DM_MAX_TARGETS)
++		return -EOVERFLOW;
++
++	t = kzalloc(sizeof(*t), GFP_KERNEL);
+ 
+ 	if (!t)
+ 		return -ENOMEM;
+@@ -158,7 +163,7 @@ int dm_table_create(struct dm_table **result, fmode_t mode,
+ 
+ 	if (!num_targets) {
+ 		kfree(t);
+-		return -ENOMEM;
++		return -EOVERFLOW;
+ 	}
+ 
+ 	if (alloc_targets(t, num_targets)) {
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 6efe49f7bdf5e..03d2e31dda2f6 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -1179,6 +1179,7 @@ struct super_type  {
+ 					  struct md_rdev *refdev,
+ 					  int minor_version);
+ 	int		    (*validate_super)(struct mddev *mddev,
++					      struct md_rdev *freshest,
+ 					      struct md_rdev *rdev);
+ 	void		    (*sync_super)(struct mddev *mddev,
+ 					  struct md_rdev *rdev);
+@@ -1317,8 +1318,9 @@ static int super_90_load(struct md_rdev *rdev, struct md_rdev *refdev, int minor
+ 
+ /*
+  * validate_super for 0.90.0
++ * note: we are not using "freshest" for 0.9 superblock
+  */
+-static int super_90_validate(struct mddev *mddev, struct md_rdev *rdev)
++static int super_90_validate(struct mddev *mddev, struct md_rdev *freshest, struct md_rdev *rdev)
+ {
+ 	mdp_disk_t *desc;
+ 	mdp_super_t *sb = page_address(rdev->sb_page);
+@@ -1833,7 +1835,7 @@ static int super_1_load(struct md_rdev *rdev, struct md_rdev *refdev, int minor_
+ 	return ret;
+ }
+ 
+-static int super_1_validate(struct mddev *mddev, struct md_rdev *rdev)
++static int super_1_validate(struct mddev *mddev, struct md_rdev *freshest, struct md_rdev *rdev)
+ {
+ 	struct mdp_superblock_1 *sb = page_address(rdev->sb_page);
+ 	__u64 ev1 = le64_to_cpu(sb->events);
+@@ -1929,13 +1931,15 @@ static int super_1_validate(struct mddev *mddev, struct md_rdev *rdev)
+ 		}
+ 	} else if (mddev->pers == NULL) {
+ 		/* Insist of good event counter while assembling, except for
+-		 * spares (which don't need an event count) */
+-		++ev1;
++		 * spares (which don't need an event count).
++		 * Similar to mdadm, we allow event counter difference of 1
++		 * from the freshest device.
++		 */
+ 		if (rdev->desc_nr >= 0 &&
+ 		    rdev->desc_nr < le32_to_cpu(sb->max_dev) &&
+ 		    (le16_to_cpu(sb->dev_roles[rdev->desc_nr]) < MD_DISK_ROLE_MAX ||
+ 		     le16_to_cpu(sb->dev_roles[rdev->desc_nr]) == MD_DISK_ROLE_JOURNAL))
+-			if (ev1 < mddev->events)
++			if (ev1 + 1 < mddev->events)
+ 				return -EINVAL;
+ 	} else if (mddev->bitmap) {
+ 		/* If adding to array with a bitmap, then we can accept an
+@@ -1956,8 +1960,38 @@ static int super_1_validate(struct mddev *mddev, struct md_rdev *rdev)
+ 		    rdev->desc_nr >= le32_to_cpu(sb->max_dev)) {
+ 			role = MD_DISK_ROLE_SPARE;
+ 			rdev->desc_nr = -1;
+-		} else
++		} else if (mddev->pers == NULL && freshest && ev1 < mddev->events) {
++			/*
++			 * If we are assembling, and our event counter is smaller than the
++			 * highest event counter, we cannot trust our superblock about the role.
++			 * It could happen that our rdev was marked as Faulty, and all other
++			 * superblocks were updated with +1 event counter.
++			 * Then, before the next superblock update, which typically happens when
++			 * remove_and_add_spares() removes the device from the array, there was
++			 * a crash or reboot.
++			 * If we allow current rdev without consulting the freshest superblock,
++			 * we could cause data corruption.
++			 * Note that in this case our event counter is smaller by 1 than the
++			 * highest, otherwise, this rdev would not be allowed into array;
++			 * both kernel and mdadm allow event counter difference of 1.
++			 */
++			struct mdp_superblock_1 *freshest_sb = page_address(freshest->sb_page);
++			u32 freshest_max_dev = le32_to_cpu(freshest_sb->max_dev);
++
++			if (rdev->desc_nr >= freshest_max_dev) {
++				/* this is unexpected, better not proceed */
++				pr_warn("md: %s: rdev[%pg]: desc_nr(%d) >= freshest(%pg)->sb->max_dev(%u)\n",
++						mdname(mddev), rdev->bdev, rdev->desc_nr,
++						freshest->bdev, freshest_max_dev);
++				return -EUCLEAN;
++			}
++
++			role = le16_to_cpu(freshest_sb->dev_roles[rdev->desc_nr]);
++			pr_debug("md: %s: rdev[%pg]: role=%d(0x%x) according to freshest %pg\n",
++				     mdname(mddev), rdev->bdev, role, role, freshest->bdev);
++		} else {
+ 			role = le16_to_cpu(sb->dev_roles[rdev->desc_nr]);
++		}
+ 		switch(role) {
+ 		case MD_DISK_ROLE_SPARE: /* spare */
+ 			break;
+@@ -2896,7 +2930,7 @@ static int add_bound_rdev(struct md_rdev *rdev)
+ 		 * and should be added immediately.
+ 		 */
+ 		super_types[mddev->major_version].
+-			validate_super(mddev, rdev);
++			validate_super(mddev, NULL/*freshest*/, rdev);
+ 		if (add_journal)
+ 			mddev_suspend(mddev);
+ 		err = mddev->pers->hot_add_disk(mddev, rdev);
+@@ -3814,7 +3848,7 @@ static int analyze_sbs(struct mddev *mddev)
+ 	}
+ 
+ 	super_types[mddev->major_version].
+-		validate_super(mddev, freshest);
++		validate_super(mddev, NULL/*freshest*/, freshest);
+ 
+ 	i = 0;
+ 	rdev_for_each_safe(rdev, tmp, mddev) {
+@@ -3829,7 +3863,7 @@ static int analyze_sbs(struct mddev *mddev)
+ 		}
+ 		if (rdev != freshest) {
+ 			if (super_types[mddev->major_version].
+-			    validate_super(mddev, rdev)) {
++			    validate_super(mddev, freshest, rdev)) {
+ 				pr_warn("md: kicking non-fresh %s from array!\n",
+ 					bdevname(rdev->bdev,b));
+ 				md_kick_rdev_from_array(rdev);
+@@ -6817,7 +6851,7 @@ int md_add_new_disk(struct mddev *mddev, struct mdu_disk_info_s *info)
+ 			rdev->saved_raid_disk = rdev->raid_disk;
+ 		} else
+ 			super_types[mddev->major_version].
+-				validate_super(mddev, rdev);
++				validate_super(mddev, NULL/*freshest*/, rdev);
+ 		if ((info->state & (1<<MD_DISK_SYNC)) &&
+ 		     rdev->raid_disk != info->raid_disk) {
+ 			/* This was a hot-add request, but events doesn't
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 9f114b9d8dc6b..00995e60d46b1 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -36,7 +36,6 @@
+  */
+ 
+ #include <linux/blkdev.h>
+-#include <linux/delay.h>
+ #include <linux/kthread.h>
+ #include <linux/raid/pq.h>
+ #include <linux/async_tx.h>
+@@ -6520,18 +6519,7 @@ static void raid5d(struct md_thread *thread)
+ 			spin_unlock_irq(&conf->device_lock);
+ 			md_check_recovery(mddev);
+ 			spin_lock_irq(&conf->device_lock);
+-
+-			/*
+-			 * Waiting on MD_SB_CHANGE_PENDING below may deadlock
+-			 * seeing md_check_recovery() is needed to clear
+-			 * the flag when using mdmon.
+-			 */
+-			continue;
+ 		}
+-
+-		wait_event_lock_irq(mddev->sb_wait,
+-			!test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags),
+-			conf->device_lock);
+ 	}
+ 	pr_debug("%d stripes handled\n", handled);
+ 
+diff --git a/drivers/media/pci/ddbridge/ddbridge-main.c b/drivers/media/pci/ddbridge/ddbridge-main.c
+index 03dc9924fa2cc..bb7fb6402d6e5 100644
+--- a/drivers/media/pci/ddbridge/ddbridge-main.c
++++ b/drivers/media/pci/ddbridge/ddbridge-main.c
+@@ -247,7 +247,7 @@ static int ddb_probe(struct pci_dev *pdev,
+ 	ddb_unmap(dev);
+ 	pci_set_drvdata(pdev, NULL);
+ 	pci_disable_device(pdev);
+-	return -1;
++	return stat;
+ }
+ 
+ /****************************************************************************/
+diff --git a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+index 36109c324cb6c..3519c2252ae88 100644
+--- a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+@@ -977,13 +977,13 @@ static void mtk_jpeg_dec_device_run(void *priv)
+ 	if (ret < 0)
+ 		goto dec_end;
+ 
+-	schedule_delayed_work(&jpeg->job_timeout_work,
+-			      msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC));
+-
+ 	mtk_jpeg_set_dec_src(ctx, &src_buf->vb2_buf, &bs);
+ 	if (mtk_jpeg_set_dec_dst(ctx, &jpeg_src_buf->dec_param, &dst_buf->vb2_buf, &fb))
+ 		goto dec_end;
+ 
++	schedule_delayed_work(&jpeg->job_timeout_work,
++			      msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC));
++
+ 	spin_lock_irqsave(&jpeg->hw_lock, flags);
+ 	mtk_jpeg_dec_reset(jpeg->reg_base);
+ 	mtk_jpeg_dec_set_config(jpeg->reg_base,
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index e3246344fb724..bcbbd1408b368 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -187,25 +187,16 @@ static int rga_setup_ctrls(struct rga_ctx *ctx)
+ static struct rga_fmt formats[] = {
+ 	{
+ 		.fourcc = V4L2_PIX_FMT_ARGB32,
+-		.color_swap = RGA_COLOR_RB_SWAP,
++		.color_swap = RGA_COLOR_ALPHA_SWAP,
+ 		.hw_format = RGA_COLOR_FMT_ABGR8888,
+ 		.depth = 32,
+ 		.uv_factor = 1,
+ 		.y_div = 1,
+ 		.x_div = 1,
+ 	},
+-	{
+-		.fourcc = V4L2_PIX_FMT_XRGB32,
+-		.color_swap = RGA_COLOR_RB_SWAP,
+-		.hw_format = RGA_COLOR_FMT_XBGR8888,
+-		.depth = 32,
+-		.uv_factor = 1,
+-		.y_div = 1,
+-		.x_div = 1,
+-	},
+ 	{
+ 		.fourcc = V4L2_PIX_FMT_ABGR32,
+-		.color_swap = RGA_COLOR_ALPHA_SWAP,
++		.color_swap = RGA_COLOR_RB_SWAP,
+ 		.hw_format = RGA_COLOR_FMT_ABGR8888,
+ 		.depth = 32,
+ 		.uv_factor = 1,
+@@ -214,7 +205,7 @@ static struct rga_fmt formats[] = {
+ 	},
+ 	{
+ 		.fourcc = V4L2_PIX_FMT_XBGR32,
+-		.color_swap = RGA_COLOR_ALPHA_SWAP,
++		.color_swap = RGA_COLOR_RB_SWAP,
+ 		.hw_format = RGA_COLOR_FMT_XBGR8888,
+ 		.depth = 32,
+ 		.uv_factor = 1,
+diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c
+index afae0afe3f810..a8c55e4bfaee2 100644
+--- a/drivers/media/rc/bpf-lirc.c
++++ b/drivers/media/rc/bpf-lirc.c
+@@ -249,7 +249,7 @@ int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+ 	if (attr->attach_flags)
+ 		return -EINVAL;
+ 
+-	rcdev = rc_dev_get_from_fd(attr->target_fd);
++	rcdev = rc_dev_get_from_fd(attr->target_fd, true);
+ 	if (IS_ERR(rcdev))
+ 		return PTR_ERR(rcdev);
+ 
+@@ -274,7 +274,7 @@ int lirc_prog_detach(const union bpf_attr *attr)
+ 	if (IS_ERR(prog))
+ 		return PTR_ERR(prog);
+ 
+-	rcdev = rc_dev_get_from_fd(attr->target_fd);
++	rcdev = rc_dev_get_from_fd(attr->target_fd, true);
+ 	if (IS_ERR(rcdev)) {
+ 		bpf_prog_put(prog);
+ 		return PTR_ERR(rcdev);
+@@ -299,7 +299,7 @@ int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
+ 	if (attr->query.query_flags)
+ 		return -EINVAL;
+ 
+-	rcdev = rc_dev_get_from_fd(attr->query.target_fd);
++	rcdev = rc_dev_get_from_fd(attr->query.target_fd, false);
+ 	if (IS_ERR(rcdev))
+ 		return PTR_ERR(rcdev);
+ 
+diff --git a/drivers/media/rc/ir_toy.c b/drivers/media/rc/ir_toy.c
+index 7f394277478b3..cd2fddf003bd6 100644
+--- a/drivers/media/rc/ir_toy.c
++++ b/drivers/media/rc/ir_toy.c
+@@ -324,6 +324,7 @@ static int irtoy_tx(struct rc_dev *rc, uint *txbuf, uint count)
+ 			    sizeof(COMMAND_SMODE_EXIT), STATE_RESET);
+ 	if (err) {
+ 		dev_err(irtoy->dev, "exit sample mode: %d\n", err);
++		kfree(buf);
+ 		return err;
+ 	}
+ 
+@@ -331,6 +332,7 @@ static int irtoy_tx(struct rc_dev *rc, uint *txbuf, uint count)
+ 			    sizeof(COMMAND_SMODE_ENTER), STATE_COMMAND);
+ 	if (err) {
+ 		dev_err(irtoy->dev, "enter sample mode: %d\n", err);
++		kfree(buf);
+ 		return err;
+ 	}
+ 
+diff --git a/drivers/media/rc/lirc_dev.c b/drivers/media/rc/lirc_dev.c
+index 9c888047fa994..14243ce03b46e 100644
+--- a/drivers/media/rc/lirc_dev.c
++++ b/drivers/media/rc/lirc_dev.c
+@@ -826,7 +826,7 @@ void __exit lirc_dev_exit(void)
+ 	unregister_chrdev_region(lirc_base_dev, RC_DEV_MAX);
+ }
+ 
+-struct rc_dev *rc_dev_get_from_fd(int fd)
++struct rc_dev *rc_dev_get_from_fd(int fd, bool write)
+ {
+ 	struct fd f = fdget(fd);
+ 	struct lirc_fh *fh;
+@@ -840,6 +840,9 @@ struct rc_dev *rc_dev_get_from_fd(int fd)
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
++	if (write && !(f.file->f_mode & FMODE_WRITE))
++		return ERR_PTR(-EPERM);
++
+ 	fh = f.file->private_data;
+ 	dev = fh->rc;
+ 
+diff --git a/drivers/media/rc/rc-core-priv.h b/drivers/media/rc/rc-core-priv.h
+index 62f032dffd33a..dfe0352c0f0a6 100644
+--- a/drivers/media/rc/rc-core-priv.h
++++ b/drivers/media/rc/rc-core-priv.h
+@@ -325,7 +325,7 @@ void lirc_raw_event(struct rc_dev *dev, struct ir_raw_event ev);
+ void lirc_scancode_event(struct rc_dev *dev, struct lirc_scancode *lsc);
+ int lirc_register(struct rc_dev *dev);
+ void lirc_unregister(struct rc_dev *dev);
+-struct rc_dev *rc_dev_get_from_fd(int fd);
++struct rc_dev *rc_dev_get_from_fd(int fd, bool write);
+ #else
+ static inline int lirc_dev_init(void) { return 0; }
+ static inline void lirc_dev_exit(void) {}
+diff --git a/drivers/media/usb/stk1160/stk1160-video.c b/drivers/media/usb/stk1160/stk1160-video.c
+index 202b084f65a22..4cf540d1b2501 100644
+--- a/drivers/media/usb/stk1160/stk1160-video.c
++++ b/drivers/media/usb/stk1160/stk1160-video.c
+@@ -107,8 +107,7 @@ void stk1160_copy_video(struct stk1160 *dev, u8 *src, int len)
+ 
+ 	/*
+ 	 * TODO: These stk1160_dbg are very spammy!
+-	 * We should 1) check why we are getting them
+-	 * and 2) add ratelimit.
++	 * We should check why we are getting them.
+ 	 *
+ 	 * UPDATE: One of the reasons (the only one?) for getting these
+ 	 * is incorrect standard (mismatch between expected and configured).
+@@ -151,7 +150,7 @@ void stk1160_copy_video(struct stk1160 *dev, u8 *src, int len)
+ 
+ 	/* Let the bug hunt begin! sanity checks! */
+ 	if (lencopy < 0) {
+-		stk1160_dbg("copy skipped: negative lencopy\n");
++		printk_ratelimited(KERN_DEBUG "copy skipped: negative lencopy\n");
+ 		return;
+ 	}
+ 
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index b8847ae04d938..c5c6608ccc84e 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -1382,6 +1382,7 @@ config MFD_DAVINCI_VOICECODEC
+ 
+ config MFD_TI_AM335X_TSCADC
+ 	tristate "TI ADC / Touch Screen chip support"
++	depends on ARCH_OMAP2PLUS || ARCH_K3 || COMPILE_TEST
+ 	select MFD_CORE
+ 	select REGMAP
+ 	select REGMAP_MMIO
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 9822efdc6cc23..af050cfdcb8f3 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1592,7 +1592,7 @@ static int fastrpc_cb_remove(struct platform_device *pdev)
+ 	int i;
+ 
+ 	spin_lock_irqsave(&cctx->lock, flags);
+-	for (i = 1; i < FASTRPC_MAX_SESSIONS; i++) {
++	for (i = 0; i < FASTRPC_MAX_SESSIONS; i++) {
+ 		if (cctx->session[i].sid == sess->sid) {
+ 			cctx->session[i].valid = false;
+ 			cctx->sesscount--;
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 8d842ff241b29..2058f31a1bce6 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -346,6 +346,10 @@ struct mmc_blk_ioc_data {
+ 	struct mmc_ioc_cmd ic;
+ 	unsigned char *buf;
+ 	u64 buf_bytes;
++	unsigned int flags;
++#define MMC_BLK_IOC_DROP	BIT(0)	/* drop this mrq */
++#define MMC_BLK_IOC_SBC	BIT(1)	/* use mrq.sbc */
++
+ 	struct mmc_rpmb_data *rpmb;
+ };
+ 
+@@ -447,7 +451,7 @@ static int card_busy_detect(struct mmc_card *card, unsigned int timeout_ms,
+ }
+ 
+ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+-			       struct mmc_blk_ioc_data *idata)
++			       struct mmc_blk_ioc_data **idatas, int i)
+ {
+ 	struct mmc_command cmd = {}, sbc = {};
+ 	struct mmc_data data = {};
+@@ -455,10 +459,18 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 	struct scatterlist sg;
+ 	int err;
+ 	unsigned int target_part;
++	struct mmc_blk_ioc_data *idata = idatas[i];
++	struct mmc_blk_ioc_data *prev_idata = NULL;
+ 
+ 	if (!card || !md || !idata)
+ 		return -EINVAL;
+ 
++	if (idata->flags & MMC_BLK_IOC_DROP)
++		return 0;
++
++	if (idata->flags & MMC_BLK_IOC_SBC)
++		prev_idata = idatas[i - 1];
++
+ 	/*
+ 	 * The RPMB accesses comes in from the character device, so we
+ 	 * need to target these explicitly. Else we just target the
+@@ -525,7 +537,7 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 			return err;
+ 	}
+ 
+-	if (idata->rpmb) {
++	if (idata->rpmb || prev_idata) {
+ 		sbc.opcode = MMC_SET_BLOCK_COUNT;
+ 		/*
+ 		 * We don't do any blockcount validation because the max size
+@@ -533,6 +545,8 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 		 * 'Reliable Write' bit here.
+ 		 */
+ 		sbc.arg = data.blocks | (idata->ic.write_flag & BIT(31));
++		if (prev_idata)
++			sbc.arg = prev_idata->ic.arg;
+ 		sbc.flags = MMC_RSP_R1 | MMC_CMD_AC;
+ 		mrq.sbc = &sbc;
+ 	}
+@@ -544,6 +558,15 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 	mmc_wait_for_req(card->host, &mrq);
+ 	memcpy(&idata->ic.response, cmd.resp, sizeof(cmd.resp));
+ 
++	if (prev_idata) {
++		memcpy(&prev_idata->ic.response, sbc.resp, sizeof(sbc.resp));
++		if (sbc.error) {
++			dev_err(mmc_dev(card->host), "%s: sbc error %d\n",
++							__func__, sbc.error);
++			return sbc.error;
++		}
++	}
++
+ 	if (cmd.error) {
+ 		dev_err(mmc_dev(card->host), "%s: cmd error %d\n",
+ 						__func__, cmd.error);
+@@ -985,6 +1008,20 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)
+ 	md->reset_done &= ~type;
+ }
+ 
++static void mmc_blk_check_sbc(struct mmc_queue_req *mq_rq)
++{
++	struct mmc_blk_ioc_data **idata = mq_rq->drv_op_data;
++	int i;
++
++	for (i = 1; i < mq_rq->ioc_count; i++) {
++		if (idata[i - 1]->ic.opcode == MMC_SET_BLOCK_COUNT &&
++		    mmc_op_multi(idata[i]->ic.opcode)) {
++			idata[i - 1]->flags |= MMC_BLK_IOC_DROP;
++			idata[i]->flags |= MMC_BLK_IOC_SBC;
++		}
++	}
++}
++
+ /*
+  * The non-block commands come back from the block layer after it queued it and
+  * processed it with all other requests and then they get issued in this
+@@ -1012,11 +1049,14 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
+ 			if (ret)
+ 				break;
+ 		}
++
++		mmc_blk_check_sbc(mq_rq);
++
+ 		fallthrough;
+ 	case MMC_DRV_OP_IOCTL_RPMB:
+ 		idata = mq_rq->drv_op_data;
+ 		for (i = 0, ret = 0; i < mq_rq->ioc_count; i++) {
+-			ret = __mmc_blk_ioctl_cmd(card, md, idata[i]);
++			ret = __mmc_blk_ioctl_cmd(card, md, idata, i);
+ 			if (ret)
+ 				break;
+ 		}
+diff --git a/drivers/mmc/core/slot-gpio.c b/drivers/mmc/core/slot-gpio.c
+index 05e907451df90..681653d097ef5 100644
+--- a/drivers/mmc/core/slot-gpio.c
++++ b/drivers/mmc/core/slot-gpio.c
+@@ -62,11 +62,15 @@ int mmc_gpio_alloc(struct mmc_host *host)
+ int mmc_gpio_get_ro(struct mmc_host *host)
+ {
+ 	struct mmc_gpio *ctx = host->slot.handler_priv;
++	int cansleep;
+ 
+ 	if (!ctx || !ctx->ro_gpio)
+ 		return -ENOSYS;
+ 
+-	return gpiod_get_value_cansleep(ctx->ro_gpio);
++	cansleep = gpiod_cansleep(ctx->ro_gpio);
++	return cansleep ?
++		gpiod_get_value_cansleep(ctx->ro_gpio) :
++		gpiod_get_value(ctx->ro_gpio);
+ }
+ EXPORT_SYMBOL(mmc_gpio_get_ro);
+ 
+diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
+index 1d814919eb6be..a1fb5d0e9553a 100644
+--- a/drivers/mmc/host/mmc_spi.c
++++ b/drivers/mmc/host/mmc_spi.c
+@@ -15,7 +15,7 @@
+ #include <linux/slab.h>
+ #include <linux/module.h>
+ #include <linux/bio.h>
+-#include <linux/dma-mapping.h>
++#include <linux/dma-direction.h>
+ #include <linux/crc7.h>
+ #include <linux/crc-itu-t.h>
+ #include <linux/scatterlist.h>
+@@ -119,19 +119,14 @@ struct mmc_spi_host {
+ 	struct spi_transfer	status;
+ 	struct spi_message	readback;
+ 
+-	/* underlying DMA-aware controller, or null */
+-	struct device		*dma_dev;
+-
+ 	/* buffer used for commands and for message "overhead" */
+ 	struct scratch		*data;
+-	dma_addr_t		data_dma;
+ 
+ 	/* Specs say to write ones most of the time, even when the card
+ 	 * has no need to read its input data; and many cards won't care.
+ 	 * This is our source of those ones.
+ 	 */
+ 	void			*ones;
+-	dma_addr_t		ones_dma;
+ };
+ 
+ 
+@@ -147,11 +142,8 @@ static inline int mmc_cs_off(struct mmc_spi_host *host)
+ 	return spi_setup(host->spi);
+ }
+ 
+-static int
+-mmc_spi_readbytes(struct mmc_spi_host *host, unsigned len)
++static int mmc_spi_readbytes(struct mmc_spi_host *host, unsigned int len)
+ {
+-	int status;
+-
+ 	if (len > sizeof(*host->data)) {
+ 		WARN_ON(1);
+ 		return -EIO;
+@@ -159,19 +151,7 @@ mmc_spi_readbytes(struct mmc_spi_host *host, unsigned len)
+ 
+ 	host->status.len = len;
+ 
+-	if (host->dma_dev)
+-		dma_sync_single_for_device(host->dma_dev,
+-				host->data_dma, sizeof(*host->data),
+-				DMA_FROM_DEVICE);
+-
+-	status = spi_sync_locked(host->spi, &host->readback);
+-
+-	if (host->dma_dev)
+-		dma_sync_single_for_cpu(host->dma_dev,
+-				host->data_dma, sizeof(*host->data),
+-				DMA_FROM_DEVICE);
+-
+-	return status;
++	return spi_sync_locked(host->spi, &host->readback);
+ }
+ 
+ static int mmc_spi_skip(struct mmc_spi_host *host, unsigned long timeout,
+@@ -513,23 +493,11 @@ mmc_spi_command_send(struct mmc_spi_host *host,
+ 	t = &host->t;
+ 	memset(t, 0, sizeof(*t));
+ 	t->tx_buf = t->rx_buf = data->status;
+-	t->tx_dma = t->rx_dma = host->data_dma;
+ 	t->len = cp - data->status;
+ 	t->cs_change = 1;
+ 	spi_message_add_tail(t, &host->m);
+ 
+-	if (host->dma_dev) {
+-		host->m.is_dma_mapped = 1;
+-		dma_sync_single_for_device(host->dma_dev,
+-				host->data_dma, sizeof(*host->data),
+-				DMA_BIDIRECTIONAL);
+-	}
+ 	status = spi_sync_locked(host->spi, &host->m);
+-
+-	if (host->dma_dev)
+-		dma_sync_single_for_cpu(host->dma_dev,
+-				host->data_dma, sizeof(*host->data),
+-				DMA_BIDIRECTIONAL);
+ 	if (status < 0) {
+ 		dev_dbg(&host->spi->dev, "  ... write returned %d\n", status);
+ 		cmd->error = status;
+@@ -547,9 +515,6 @@ mmc_spi_command_send(struct mmc_spi_host *host,
+  * We always provide TX data for data and CRC.  The MMC/SD protocol
+  * requires us to write ones; but Linux defaults to writing zeroes;
+  * so we explicitly initialize it to all ones on RX paths.
+- *
+- * We also handle DMA mapping, so the underlying SPI controller does
+- * not need to (re)do it for each message.
+  */
+ static void
+ mmc_spi_setup_data_message(
+@@ -559,11 +524,8 @@ mmc_spi_setup_data_message(
+ {
+ 	struct spi_transfer	*t;
+ 	struct scratch		*scratch = host->data;
+-	dma_addr_t		dma = host->data_dma;
+ 
+ 	spi_message_init(&host->m);
+-	if (dma)
+-		host->m.is_dma_mapped = 1;
+ 
+ 	/* for reads, readblock() skips 0xff bytes before finding
+ 	 * the token; for writes, this transfer issues that token.
+@@ -577,8 +539,6 @@ mmc_spi_setup_data_message(
+ 		else
+ 			scratch->data_token = SPI_TOKEN_SINGLE;
+ 		t->tx_buf = &scratch->data_token;
+-		if (dma)
+-			t->tx_dma = dma + offsetof(struct scratch, data_token);
+ 		spi_message_add_tail(t, &host->m);
+ 	}
+ 
+@@ -588,7 +548,6 @@ mmc_spi_setup_data_message(
+ 	t = &host->t;
+ 	memset(t, 0, sizeof(*t));
+ 	t->tx_buf = host->ones;
+-	t->tx_dma = host->ones_dma;
+ 	/* length and actual buffer info are written later */
+ 	spi_message_add_tail(t, &host->m);
+ 
+@@ -598,14 +557,9 @@ mmc_spi_setup_data_message(
+ 	if (direction == DMA_TO_DEVICE) {
+ 		/* the actual CRC may get written later */
+ 		t->tx_buf = &scratch->crc_val;
+-		if (dma)
+-			t->tx_dma = dma + offsetof(struct scratch, crc_val);
+ 	} else {
+ 		t->tx_buf = host->ones;
+-		t->tx_dma = host->ones_dma;
+ 		t->rx_buf = &scratch->crc_val;
+-		if (dma)
+-			t->rx_dma = dma + offsetof(struct scratch, crc_val);
+ 	}
+ 	spi_message_add_tail(t, &host->m);
+ 
+@@ -628,10 +582,7 @@ mmc_spi_setup_data_message(
+ 		memset(t, 0, sizeof(*t));
+ 		t->len = (direction == DMA_TO_DEVICE) ? sizeof(scratch->status) : 1;
+ 		t->tx_buf = host->ones;
+-		t->tx_dma = host->ones_dma;
+ 		t->rx_buf = scratch->status;
+-		if (dma)
+-			t->rx_dma = dma + offsetof(struct scratch, status);
+ 		t->cs_change = 1;
+ 		spi_message_add_tail(t, &host->m);
+ 	}
+@@ -660,23 +611,13 @@ mmc_spi_writeblock(struct mmc_spi_host *host, struct spi_transfer *t,
+ 
+ 	if (host->mmc->use_spi_crc)
+ 		scratch->crc_val = cpu_to_be16(crc_itu_t(0, t->tx_buf, t->len));
+-	if (host->dma_dev)
+-		dma_sync_single_for_device(host->dma_dev,
+-				host->data_dma, sizeof(*scratch),
+-				DMA_BIDIRECTIONAL);
+ 
+ 	status = spi_sync_locked(spi, &host->m);
+-
+ 	if (status != 0) {
+ 		dev_dbg(&spi->dev, "write error (%d)\n", status);
+ 		return status;
+ 	}
+ 
+-	if (host->dma_dev)
+-		dma_sync_single_for_cpu(host->dma_dev,
+-				host->data_dma, sizeof(*scratch),
+-				DMA_BIDIRECTIONAL);
+-
+ 	/*
+ 	 * Get the transmission data-response reply.  It must follow
+ 	 * immediately after the data block we transferred.  This reply
+@@ -725,8 +666,6 @@ mmc_spi_writeblock(struct mmc_spi_host *host, struct spi_transfer *t,
+ 	}
+ 
+ 	t->tx_buf += t->len;
+-	if (host->dma_dev)
+-		t->tx_dma += t->len;
+ 
+ 	/* Return when not busy.  If we didn't collect that status yet,
+ 	 * we'll need some more I/O.
+@@ -790,30 +729,12 @@ mmc_spi_readblock(struct mmc_spi_host *host, struct spi_transfer *t,
+ 	}
+ 	leftover = status << 1;
+ 
+-	if (host->dma_dev) {
+-		dma_sync_single_for_device(host->dma_dev,
+-				host->data_dma, sizeof(*scratch),
+-				DMA_BIDIRECTIONAL);
+-		dma_sync_single_for_device(host->dma_dev,
+-				t->rx_dma, t->len,
+-				DMA_FROM_DEVICE);
+-	}
+-
+ 	status = spi_sync_locked(spi, &host->m);
+ 	if (status < 0) {
+ 		dev_dbg(&spi->dev, "read error %d\n", status);
+ 		return status;
+ 	}
+ 
+-	if (host->dma_dev) {
+-		dma_sync_single_for_cpu(host->dma_dev,
+-				host->data_dma, sizeof(*scratch),
+-				DMA_BIDIRECTIONAL);
+-		dma_sync_single_for_cpu(host->dma_dev,
+-				t->rx_dma, t->len,
+-				DMA_FROM_DEVICE);
+-	}
+-
+ 	if (bitshift) {
+ 		/* Walk through the data and the crc and do
+ 		 * all the magic to get byte-aligned data.
+@@ -848,8 +769,6 @@ mmc_spi_readblock(struct mmc_spi_host *host, struct spi_transfer *t,
+ 	}
+ 
+ 	t->rx_buf += t->len;
+-	if (host->dma_dev)
+-		t->rx_dma += t->len;
+ 
+ 	return 0;
+ }
+@@ -864,7 +783,6 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
+ 		struct mmc_data *data, u32 blk_size)
+ {
+ 	struct spi_device	*spi = host->spi;
+-	struct device		*dma_dev = host->dma_dev;
+ 	struct spi_transfer	*t;
+ 	enum dma_data_direction	direction;
+ 	struct scatterlist	*sg;
+@@ -891,31 +809,8 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
+ 	 */
+ 	for_each_sg(data->sg, sg, data->sg_len, n_sg) {
+ 		int			status = 0;
+-		dma_addr_t		dma_addr = 0;
+ 		void			*kmap_addr;
+ 		unsigned		length = sg->length;
+-		enum dma_data_direction	dir = direction;
+-
+-		/* set up dma mapping for controller drivers that might
+-		 * use DMA ... though they may fall back to PIO
+-		 */
+-		if (dma_dev) {
+-			/* never invalidate whole *shared* pages ... */
+-			if ((sg->offset != 0 || length != PAGE_SIZE)
+-					&& dir == DMA_FROM_DEVICE)
+-				dir = DMA_BIDIRECTIONAL;
+-
+-			dma_addr = dma_map_page(dma_dev, sg_page(sg), 0,
+-						PAGE_SIZE, dir);
+-			if (dma_mapping_error(dma_dev, dma_addr)) {
+-				data->error = -EFAULT;
+-				break;
+-			}
+-			if (direction == DMA_TO_DEVICE)
+-				t->tx_dma = dma_addr + sg->offset;
+-			else
+-				t->rx_dma = dma_addr + sg->offset;
+-		}
+ 
+ 		/* allow pio too; we don't allow highmem */
+ 		kmap_addr = kmap(sg_page(sg));
+@@ -951,8 +846,6 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
+ 		if (direction == DMA_FROM_DEVICE)
+ 			flush_kernel_dcache_page(sg_page(sg));
+ 		kunmap(sg_page(sg));
+-		if (dma_dev)
+-			dma_unmap_page(dma_dev, dma_addr, PAGE_SIZE, dir);
+ 
+ 		if (status < 0) {
+ 			data->error = status;
+@@ -989,21 +882,9 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
+ 		scratch->status[0] = SPI_TOKEN_STOP_TRAN;
+ 
+ 		host->early_status.tx_buf = host->early_status.rx_buf;
+-		host->early_status.tx_dma = host->early_status.rx_dma;
+ 		host->early_status.len = statlen;
+ 
+-		if (host->dma_dev)
+-			dma_sync_single_for_device(host->dma_dev,
+-					host->data_dma, sizeof(*scratch),
+-					DMA_BIDIRECTIONAL);
+-
+ 		tmp = spi_sync_locked(spi, &host->m);
+-
+-		if (host->dma_dev)
+-			dma_sync_single_for_cpu(host->dma_dev,
+-					host->data_dma, sizeof(*scratch),
+-					DMA_BIDIRECTIONAL);
+-
+ 		if (tmp < 0) {
+ 			if (!data->error)
+ 				data->error = tmp;
+@@ -1278,52 +1159,6 @@ mmc_spi_detect_irq(int irq, void *mmc)
+ 	return IRQ_HANDLED;
+ }
+ 
+-#ifdef CONFIG_HAS_DMA
+-static int mmc_spi_dma_alloc(struct mmc_spi_host *host)
+-{
+-	struct spi_device *spi = host->spi;
+-	struct device *dev;
+-
+-	if (!spi->master->dev.parent->dma_mask)
+-		return 0;
+-
+-	dev = spi->master->dev.parent;
+-
+-	host->ones_dma = dma_map_single(dev, host->ones, MMC_SPI_BLOCKSIZE,
+-					DMA_TO_DEVICE);
+-	if (dma_mapping_error(dev, host->ones_dma))
+-		return -ENOMEM;
+-
+-	host->data_dma = dma_map_single(dev, host->data, sizeof(*host->data),
+-					DMA_BIDIRECTIONAL);
+-	if (dma_mapping_error(dev, host->data_dma)) {
+-		dma_unmap_single(dev, host->ones_dma, MMC_SPI_BLOCKSIZE,
+-				 DMA_TO_DEVICE);
+-		return -ENOMEM;
+-	}
+-
+-	dma_sync_single_for_cpu(dev, host->data_dma, sizeof(*host->data),
+-				DMA_BIDIRECTIONAL);
+-
+-	host->dma_dev = dev;
+-	return 0;
+-}
+-
+-static void mmc_spi_dma_free(struct mmc_spi_host *host)
+-{
+-	if (!host->dma_dev)
+-		return;
+-
+-	dma_unmap_single(host->dma_dev, host->ones_dma, MMC_SPI_BLOCKSIZE,
+-			 DMA_TO_DEVICE);
+-	dma_unmap_single(host->dma_dev, host->data_dma,	sizeof(*host->data),
+-			 DMA_BIDIRECTIONAL);
+-}
+-#else
+-static inline int mmc_spi_dma_alloc(struct mmc_spi_host *host) { return 0; }
+-static inline void mmc_spi_dma_free(struct mmc_spi_host *host) {}
+-#endif
+-
+ static int mmc_spi_probe(struct spi_device *spi)
+ {
+ 	void			*ones;
+@@ -1415,24 +1250,17 @@ static int mmc_spi_probe(struct spi_device *spi)
+ 
+ 	dev_set_drvdata(&spi->dev, mmc);
+ 
+-	/* preallocate dma buffers */
++	/* Preallocate buffers */
+ 	host->data = kmalloc(sizeof(*host->data), GFP_KERNEL);
+ 	if (!host->data)
+ 		goto fail_nobuf1;
+ 
+-	status = mmc_spi_dma_alloc(host);
+-	if (status)
+-		goto fail_dma;
+-
+ 	/* setup message for status/busy readback */
+ 	spi_message_init(&host->readback);
+-	host->readback.is_dma_mapped = (host->dma_dev != NULL);
+ 
+ 	spi_message_add_tail(&host->status, &host->readback);
+ 	host->status.tx_buf = host->ones;
+-	host->status.tx_dma = host->ones_dma;
+ 	host->status.rx_buf = &host->data->status;
+-	host->status.rx_dma = host->data_dma + offsetof(struct scratch, status);
+ 	host->status.cs_change = 1;
+ 
+ 	/* register card detect irq */
+@@ -1477,9 +1305,8 @@ static int mmc_spi_probe(struct spi_device *spi)
+ 	if (!status)
+ 		has_ro = true;
+ 
+-	dev_info(&spi->dev, "SD/MMC host %s%s%s%s%s\n",
++	dev_info(&spi->dev, "SD/MMC host %s%s%s%s\n",
+ 			dev_name(&mmc->class_dev),
+-			host->dma_dev ? "" : ", no DMA",
+ 			has_ro ? "" : ", no WP",
+ 			(host->pdata && host->pdata->setpower)
+ 				? "" : ", no poweroff",
+@@ -1490,8 +1317,6 @@ static int mmc_spi_probe(struct spi_device *spi)
+ fail_gpiod_request:
+ 	mmc_remove_host(mmc);
+ fail_glue_init:
+-	mmc_spi_dma_free(host);
+-fail_dma:
+ 	kfree(host->data);
+ fail_nobuf1:
+ 	mmc_free_host(mmc);
+@@ -1513,7 +1338,6 @@ static int mmc_spi_remove(struct spi_device *spi)
+ 
+ 	mmc_remove_host(mmc);
+ 
+-	mmc_spi_dma_free(host);
+ 	kfree(host->data);
+ 	kfree(host->ones);
+ 
+diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
+index 64ba465741a78..81a5e7622ea7d 100644
+--- a/drivers/net/bonding/bond_alb.c
++++ b/drivers/net/bonding/bond_alb.c
+@@ -971,7 +971,8 @@ static int alb_upper_dev_walk(struct net_device *upper,
+ 	if (netif_is_macvlan(upper) && !strict_match) {
+ 		tags = bond_verify_device_path(bond->dev, upper, 0);
+ 		if (IS_ERR_OR_NULL(tags))
+-			BUG();
++			return -ENOMEM;
++
+ 		alb_send_lp_vid(slave, upper->dev_addr,
+ 				tags[0].vlan_proto, tags[0].vlan_id);
+ 		kfree(tags);
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
+index 51a7ff44478ec..67e52c4815048 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.h
++++ b/drivers/net/dsa/mv88e6xxx/chip.h
+@@ -536,8 +536,8 @@ struct mv88e6xxx_ops {
+ 	int (*serdes_get_sset_count)(struct mv88e6xxx_chip *chip, int port);
+ 	int (*serdes_get_strings)(struct mv88e6xxx_chip *chip,  int port,
+ 				  uint8_t *data);
+-	int (*serdes_get_stats)(struct mv88e6xxx_chip *chip,  int port,
+-				uint64_t *data);
++	size_t (*serdes_get_stats)(struct mv88e6xxx_chip *chip, int port,
++				   uint64_t *data);
+ 
+ 	/* SERDES registers for ethtool */
+ 	int (*serdes_get_regs_len)(struct mv88e6xxx_chip *chip,  int port);
+diff --git a/drivers/net/dsa/mv88e6xxx/serdes.c b/drivers/net/dsa/mv88e6xxx/serdes.c
+index 6920e62c864df..9494d75eec625 100644
+--- a/drivers/net/dsa/mv88e6xxx/serdes.c
++++ b/drivers/net/dsa/mv88e6xxx/serdes.c
+@@ -314,8 +314,8 @@ static uint64_t mv88e6352_serdes_get_stat(struct mv88e6xxx_chip *chip,
+ 	return val;
+ }
+ 
+-int mv88e6352_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
+-			       uint64_t *data)
++size_t mv88e6352_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
++				  uint64_t *data)
+ {
+ 	struct mv88e6xxx_port *mv88e6xxx_port = &chip->ports[port];
+ 	struct mv88e6352_serdes_hw_stat *stat;
+@@ -631,8 +631,8 @@ static uint64_t mv88e6390_serdes_get_stat(struct mv88e6xxx_chip *chip, int lane,
+ 	return reg[0] | ((u64)reg[1] << 16) | ((u64)reg[2] << 32);
+ }
+ 
+-int mv88e6390_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
+-			       uint64_t *data)
++size_t mv88e6390_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
++				  uint64_t *data)
+ {
+ 	struct mv88e6390_serdes_hw_stat *stat;
+ 	int lane;
+diff --git a/drivers/net/dsa/mv88e6xxx/serdes.h b/drivers/net/dsa/mv88e6xxx/serdes.h
+index 14315f26228a3..035688659b50f 100644
+--- a/drivers/net/dsa/mv88e6xxx/serdes.h
++++ b/drivers/net/dsa/mv88e6xxx/serdes.h
+@@ -116,13 +116,13 @@ irqreturn_t mv88e6390_serdes_irq_status(struct mv88e6xxx_chip *chip, int port,
+ int mv88e6352_serdes_get_sset_count(struct mv88e6xxx_chip *chip, int port);
+ int mv88e6352_serdes_get_strings(struct mv88e6xxx_chip *chip,
+ 				 int port, uint8_t *data);
+-int mv88e6352_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
+-			       uint64_t *data);
++size_t mv88e6352_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
++				  uint64_t *data);
+ int mv88e6390_serdes_get_sset_count(struct mv88e6xxx_chip *chip, int port);
+ int mv88e6390_serdes_get_strings(struct mv88e6xxx_chip *chip,
+ 				 int port, uint8_t *data);
+-int mv88e6390_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
+-			       uint64_t *data);
++size_t mv88e6390_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
++				  uint64_t *data);
+ 
+ int mv88e6352_serdes_get_regs_len(struct mv88e6xxx_chip *chip, int port);
+ void mv88e6352_serdes_get_regs(struct mv88e6xxx_chip *chip, int port, void *_p);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 584f365de563f..059552f4154d1 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -11331,6 +11331,11 @@ static int bnxt_fw_init_one_p1(struct bnxt *bp)
+ 
+ 	bp->fw_cap = 0;
+ 	rc = bnxt_hwrm_ver_get(bp);
++	/* FW may be unresponsive after FLR. FLR must complete within 100 msec
++	 * so wait before continuing with recovery.
++	 */
++	if (rc)
++		msleep(100);
+ 	bnxt_try_map_fw_health_reg(bp);
+ 	if (rc) {
+ 		if (bp->fw_health && bp->fw_health->status_reliable) {
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 8edf12077e663..ed0589a1a00d8 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -1244,7 +1244,8 @@ static void bcmgenet_get_ethtool_stats(struct net_device *dev,
+ 	}
+ }
+ 
+-static void bcmgenet_eee_enable_set(struct net_device *dev, bool enable)
++void bcmgenet_eee_enable_set(struct net_device *dev, bool enable,
++			     bool tx_lpi_enabled)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 	u32 off = priv->hw_params->tbuf_offset + TBUF_ENERGY_CTRL;
+@@ -1264,7 +1265,7 @@ static void bcmgenet_eee_enable_set(struct net_device *dev, bool enable)
+ 
+ 	/* Enable EEE and switch to a 27Mhz clock automatically */
+ 	reg = bcmgenet_readl(priv->base + off);
+-	if (enable)
++	if (tx_lpi_enabled)
+ 		reg |= TBUF_EEE_EN | TBUF_PM_EN;
+ 	else
+ 		reg &= ~(TBUF_EEE_EN | TBUF_PM_EN);
+@@ -1285,6 +1286,7 @@ static void bcmgenet_eee_enable_set(struct net_device *dev, bool enable)
+ 
+ 	priv->eee.eee_enabled = enable;
+ 	priv->eee.eee_active = enable;
++	priv->eee.tx_lpi_enabled = tx_lpi_enabled;
+ }
+ 
+ static int bcmgenet_get_eee(struct net_device *dev, struct ethtool_eee *e)
+@@ -1300,6 +1302,7 @@ static int bcmgenet_get_eee(struct net_device *dev, struct ethtool_eee *e)
+ 
+ 	e->eee_enabled = p->eee_enabled;
+ 	e->eee_active = p->eee_active;
++	e->tx_lpi_enabled = p->tx_lpi_enabled;
+ 	e->tx_lpi_timer = bcmgenet_umac_readl(priv, UMAC_EEE_LPI_TIMER);
+ 
+ 	return phy_ethtool_get_eee(dev->phydev, e);
+@@ -1309,7 +1312,6 @@ static int bcmgenet_set_eee(struct net_device *dev, struct ethtool_eee *e)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 	struct ethtool_eee *p = &priv->eee;
+-	int ret = 0;
+ 
+ 	if (GENET_IS_V1(priv))
+ 		return -EOPNOTSUPP;
+@@ -1320,16 +1322,11 @@ static int bcmgenet_set_eee(struct net_device *dev, struct ethtool_eee *e)
+ 	p->eee_enabled = e->eee_enabled;
+ 
+ 	if (!p->eee_enabled) {
+-		bcmgenet_eee_enable_set(dev, false);
++		bcmgenet_eee_enable_set(dev, false, false);
+ 	} else {
+-		ret = phy_init_eee(dev->phydev, 0);
+-		if (ret) {
+-			netif_err(priv, hw, dev, "EEE initialization failed\n");
+-			return ret;
+-		}
+-
++		p->eee_active = phy_init_eee(dev->phydev, false) >= 0;
+ 		bcmgenet_umac_writel(priv, e->tx_lpi_timer, UMAC_EEE_LPI_TIMER);
+-		bcmgenet_eee_enable_set(dev, true);
++		bcmgenet_eee_enable_set(dev, p->eee_active, e->tx_lpi_enabled);
+ 	}
+ 
+ 	return phy_ethtool_set_eee(dev->phydev, e);
+@@ -4217,9 +4214,6 @@ static int bcmgenet_resume(struct device *d)
+ 	if (!device_may_wakeup(d))
+ 		phy_resume(dev->phydev);
+ 
+-	if (priv->eee.eee_enabled)
+-		bcmgenet_eee_enable_set(dev, true);
+-
+ 	bcmgenet_netif_start(dev);
+ 
+ 	netif_device_attach(dev);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+index f6ca01da141d4..c7853d5304b09 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+@@ -756,4 +756,7 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv,
+ void bcmgenet_wol_power_up_cfg(struct bcmgenet_priv *priv,
+ 			       enum bcmgenet_power_mode mode);
+ 
++void bcmgenet_eee_enable_set(struct net_device *dev, bool enable,
++			     bool tx_lpi_enabled);
++
+ #endif /* __BCMGENET_H__ */
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 2b0538f2af639..becc717aad131 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -25,6 +25,7 @@
+ 
+ #include "bcmgenet.h"
+ 
++
+ /* setup netdev link state when PHY link status change and
+  * update UMAC and RGMII block when link up
+  */
+@@ -102,6 +103,11 @@ void bcmgenet_mii_setup(struct net_device *dev)
+ 			reg |= CMD_TX_EN | CMD_RX_EN;
+ 		}
+ 		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
++
++		priv->eee.eee_active = phy_init_eee(phydev, 0) >= 0;
++		bcmgenet_eee_enable_set(dev,
++					priv->eee.eee_enabled && priv->eee.eee_active,
++					priv->eee.tx_lpi_enabled);
+ 	} else {
+ 		/* done if nothing has changed */
+ 		if (!status_changed)
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 4ce913559c91d..fe29769cb1589 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1766,6 +1766,7 @@ static void fec_enet_adjust_link(struct net_device *ndev)
+ 
+ 		/* if any of the above changed restart the FEC */
+ 		if (status_change) {
++			netif_stop_queue(ndev);
+ 			napi_disable(&fep->napi);
+ 			netif_tx_lock_bh(ndev);
+ 			fec_restart(ndev);
+@@ -1775,6 +1776,7 @@ static void fec_enet_adjust_link(struct net_device *ndev)
+ 		}
+ 	} else {
+ 		if (fep->link) {
++			netif_stop_queue(ndev);
+ 			napi_disable(&fep->napi);
+ 			netif_tx_lock_bh(ndev);
+ 			fec_stop(ndev);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index d83b96aa3e42a..135acd74497f3 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -5153,7 +5153,7 @@ static int i40e_pf_wait_queues_disabled(struct i40e_pf *pf)
+ {
+ 	int v, ret = 0;
+ 
+-	for (v = 0; v < pf->hw.func_caps.num_vsis; v++) {
++	for (v = 0; v < pf->num_alloc_vsi; v++) {
+ 		if (pf->vsi[v]) {
+ 			ret = i40e_vsi_wait_queues_disabled(pf->vsi[v]);
+ 			if (ret)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 7b0ed15f4df32..f79795cc91521 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -2545,6 +2545,14 @@ static int i40e_vc_enable_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 	i40e_status aq_ret = 0;
+ 	int i;
+ 
++	if (vf->is_disabled_from_host) {
++		aq_ret = -EPERM;
++		dev_info(&pf->pdev->dev,
++			 "Admin has disabled VF %d, will not enable queues\n",
++			 vf->vf_id);
++		goto error_param;
++	}
++
+ 	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+ 		aq_ret = I40E_ERR_PARAM;
+ 		goto error_param;
+@@ -4587,9 +4595,12 @@ int i40e_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link)
+ 	struct i40e_pf *pf = np->vsi->back;
+ 	struct virtchnl_pf_event pfe;
+ 	struct i40e_hw *hw = &pf->hw;
++	struct i40e_vsi *vsi;
++	unsigned long q_map;
+ 	struct i40e_vf *vf;
+ 	int abs_vf_id;
+ 	int ret = 0;
++	int tmp;
+ 
+ 	if (test_and_set_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state)) {
+ 		dev_warn(&pf->pdev->dev, "Unable to configure VFs, other operation is pending.\n");
+@@ -4612,6 +4623,9 @@ int i40e_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link)
+ 	switch (link) {
+ 	case IFLA_VF_LINK_STATE_AUTO:
+ 		vf->link_forced = false;
++		vf->is_disabled_from_host = false;
++		/* reset needed to reinit VF resources */
++		i40e_vc_reset_vf(vf, true);
+ 		pfe.event_data.link_event.link_status =
+ 			pf->hw.phy.link_info.link_info & I40E_AQ_LINK_UP;
+ 		pfe.event_data.link_event.link_speed =
+@@ -4621,6 +4635,9 @@ int i40e_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link)
+ 	case IFLA_VF_LINK_STATE_ENABLE:
+ 		vf->link_forced = true;
+ 		vf->link_up = true;
++		vf->is_disabled_from_host = false;
++		/* reset needed to reinit VF resources */
++		i40e_vc_reset_vf(vf, true);
+ 		pfe.event_data.link_event.link_status = true;
+ 		pfe.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
+ 		break;
+@@ -4629,6 +4646,21 @@ int i40e_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link)
+ 		vf->link_up = false;
+ 		pfe.event_data.link_event.link_status = false;
+ 		pfe.event_data.link_event.link_speed = 0;
++
++		vsi = pf->vsi[vf->lan_vsi_idx];
++		q_map = BIT(vsi->num_queue_pairs) - 1;
++
++		vf->is_disabled_from_host = true;
++
++		/* Try to stop both Tx&Rx rings even if one of the calls fails
++		 * to ensure we stop the rings even in case of errors.
++		 * If any of them returns with an error then the first
++		 * error that occurred will be returned.
++		 */
++		tmp = i40e_ctrl_vf_tx_rings(vsi, q_map, false);
++		ret = i40e_ctrl_vf_rx_rings(vsi, q_map, false);
++
++		ret = tmp ? tmp : ret;
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index bd497cc5303a1..97e9c34d7c6cd 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -98,6 +98,7 @@ struct i40e_vf {
+ 	bool link_forced;
+ 	bool link_up;		/* only valid if VF link is forced */
+ 	bool spoofchk;
++	bool is_disabled_from_host; /* PF ctrl of VF enable/disable */
+ 	u16 num_vlan;
+ 
+ 	/* ADq related variables */
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c
+index 95c92fe890a14..ed35e06537a01 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c
+@@ -123,14 +123,14 @@ static s32 ixgbe_init_phy_ops_82598(struct ixgbe_hw *hw)
+ 		if (ret_val)
+ 			return ret_val;
+ 		if (hw->phy.sfp_type == ixgbe_sfp_type_unknown)
+-			return IXGBE_ERR_SFP_NOT_SUPPORTED;
++			return -EOPNOTSUPP;
+ 
+ 		/* Check to see if SFP+ module is supported */
+ 		ret_val = ixgbe_get_sfp_init_sequence_offsets(hw,
+ 							    &list_offset,
+ 							    &data_offset);
+ 		if (ret_val)
+-			return IXGBE_ERR_SFP_NOT_SUPPORTED;
++			return -EOPNOTSUPP;
+ 		break;
+ 	default:
+ 		break;
+@@ -213,7 +213,7 @@ static s32 ixgbe_get_link_capabilities_82598(struct ixgbe_hw *hw,
+ 		break;
+ 
+ 	default:
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -283,7 +283,7 @@ static s32 ixgbe_fc_enable_82598(struct ixgbe_hw *hw)
+ 
+ 	/* Validate the water mark configuration */
+ 	if (!hw->fc.pause_time)
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 
+ 	/* Low water mark of zero causes XOFF floods */
+ 	for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
+@@ -292,7 +292,7 @@ static s32 ixgbe_fc_enable_82598(struct ixgbe_hw *hw)
+ 			if (!hw->fc.low_water[i] ||
+ 			    hw->fc.low_water[i] >= hw->fc.high_water[i]) {
+ 				hw_dbg(hw, "Invalid water mark configuration\n");
+-				return IXGBE_ERR_INVALID_LINK_SETTINGS;
++				return -EINVAL;
+ 			}
+ 		}
+ 	}
+@@ -369,7 +369,7 @@ static s32 ixgbe_fc_enable_82598(struct ixgbe_hw *hw)
+ 		break;
+ 	default:
+ 		hw_dbg(hw, "Flow control param set incorrectly\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	/* Set 802.3x based flow control settings. */
+@@ -438,7 +438,7 @@ static s32 ixgbe_start_mac_link_82598(struct ixgbe_hw *hw,
+ 				msleep(100);
+ 			}
+ 			if (!(links_reg & IXGBE_LINKS_KX_AN_COMP)) {
+-				status = IXGBE_ERR_AUTONEG_NOT_COMPLETE;
++				status = -EIO;
+ 				hw_dbg(hw, "Autonegotiation did not complete.\n");
+ 			}
+ 		}
+@@ -478,7 +478,7 @@ static s32 ixgbe_validate_link_ready(struct ixgbe_hw *hw)
+ 
+ 	if (timeout == IXGBE_VALIDATE_LINK_READY_TIMEOUT) {
+ 		hw_dbg(hw, "Link was indicated but link is down\n");
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -594,7 +594,7 @@ static s32 ixgbe_setup_mac_link_82598(struct ixgbe_hw *hw,
+ 	speed &= link_capabilities;
+ 
+ 	if (speed == IXGBE_LINK_SPEED_UNKNOWN)
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EINVAL;
+ 
+ 	/* Set KX4/KX support according to speed requested */
+ 	else if (link_mode == IXGBE_AUTOC_LMS_KX4_AN ||
+@@ -701,9 +701,9 @@ static s32 ixgbe_reset_hw_82598(struct ixgbe_hw *hw)
+ 
+ 		/* Init PHY and function pointers, perform SFP setup */
+ 		phy_status = hw->phy.ops.init(hw);
+-		if (phy_status == IXGBE_ERR_SFP_NOT_SUPPORTED)
++		if (phy_status == -EOPNOTSUPP)
+ 			return phy_status;
+-		if (phy_status == IXGBE_ERR_SFP_NOT_PRESENT)
++		if (phy_status == -ENOENT)
+ 			goto mac_reset_top;
+ 
+ 		hw->phy.ops.reset(hw);
+@@ -727,7 +727,7 @@ static s32 ixgbe_reset_hw_82598(struct ixgbe_hw *hw)
+ 		udelay(1);
+ 	}
+ 	if (ctrl & IXGBE_CTRL_RST) {
+-		status = IXGBE_ERR_RESET_FAILED;
++		status = -EIO;
+ 		hw_dbg(hw, "Reset polling failed to complete.\n");
+ 	}
+ 
+@@ -789,7 +789,7 @@ static s32 ixgbe_set_vmdq_82598(struct ixgbe_hw *hw, u32 rar, u32 vmdq)
+ 	/* Make sure we are using a valid rar index range */
+ 	if (rar >= rar_entries) {
+ 		hw_dbg(hw, "RAR index %d is out of range.\n", rar);
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	rar_high = IXGBE_READ_REG(hw, IXGBE_RAH(rar));
+@@ -814,7 +814,7 @@ static s32 ixgbe_clear_vmdq_82598(struct ixgbe_hw *hw, u32 rar, u32 vmdq)
+ 	/* Make sure we are using a valid rar index range */
+ 	if (rar >= rar_entries) {
+ 		hw_dbg(hw, "RAR index %d is out of range.\n", rar);
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	rar_high = IXGBE_READ_REG(hw, IXGBE_RAH(rar));
+@@ -845,7 +845,7 @@ static s32 ixgbe_set_vfta_82598(struct ixgbe_hw *hw, u32 vlan, u32 vind,
+ 	u32 vftabyte;
+ 
+ 	if (vlan > 4095)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* Determine 32-bit word position in array */
+ 	regindex = (vlan >> 5) & 0x7F;   /* upper seven bits */
+@@ -964,7 +964,7 @@ static s32 ixgbe_read_i2c_phy_82598(struct ixgbe_hw *hw, u8 dev_addr,
+ 		gssr = IXGBE_GSSR_PHY0_SM;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, gssr) != 0)
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	if (hw->phy.type == ixgbe_phy_nl) {
+ 		/*
+@@ -993,7 +993,7 @@ static s32 ixgbe_read_i2c_phy_82598(struct ixgbe_hw *hw, u8 dev_addr,
+ 
+ 		if (sfp_stat != IXGBE_I2C_EEPROM_STATUS_PASS) {
+ 			hw_dbg(hw, "EEPROM read did not pass.\n");
+-			status = IXGBE_ERR_SFP_NOT_PRESENT;
++			status = -ENOENT;
+ 			goto out;
+ 		}
+ 
+@@ -1003,7 +1003,7 @@ static s32 ixgbe_read_i2c_phy_82598(struct ixgbe_hw *hw, u8 dev_addr,
+ 
+ 		*eeprom_data = (u8)(sfp_data >> 8);
+ 	} else {
+-		status = IXGBE_ERR_PHY;
++		status = -EIO;
+ 	}
+ 
+ out:
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
+index 8d3798a32f0e4..46ed20005d673 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
+@@ -117,7 +117,7 @@ static s32 ixgbe_setup_sfp_modules_82599(struct ixgbe_hw *hw)
+ 		ret_val = hw->mac.ops.acquire_swfw_sync(hw,
+ 							IXGBE_GSSR_MAC_CSR_SM);
+ 		if (ret_val)
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 
+ 		if (hw->eeprom.ops.read(hw, ++data_offset, &data_value))
+ 			goto setup_sfp_err;
+@@ -144,7 +144,7 @@ static s32 ixgbe_setup_sfp_modules_82599(struct ixgbe_hw *hw)
+ 
+ 		if (ret_val) {
+ 			hw_dbg(hw, " sfp module setup not complete\n");
+-			return IXGBE_ERR_SFP_SETUP_NOT_COMPLETE;
++			return -EIO;
+ 		}
+ 	}
+ 
+@@ -159,7 +159,7 @@ static s32 ixgbe_setup_sfp_modules_82599(struct ixgbe_hw *hw)
+ 	usleep_range(hw->eeprom.semaphore_delay * 1000,
+ 		     hw->eeprom.semaphore_delay * 2000);
+ 	hw_err(hw, "eeprom read at offset %d failed\n", data_offset);
+-	return IXGBE_ERR_SFP_SETUP_NOT_COMPLETE;
++	return -EIO;
+ }
+ 
+ /**
+@@ -184,7 +184,7 @@ static s32 prot_autoc_read_82599(struct ixgbe_hw *hw, bool *locked,
+ 		ret_val = hw->mac.ops.acquire_swfw_sync(hw,
+ 					IXGBE_GSSR_MAC_CSR_SM);
+ 		if (ret_val)
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 
+ 		*locked = true;
+ 	}
+@@ -219,7 +219,7 @@ static s32 prot_autoc_write_82599(struct ixgbe_hw *hw, u32 autoc, bool locked)
+ 		ret_val = hw->mac.ops.acquire_swfw_sync(hw,
+ 					IXGBE_GSSR_MAC_CSR_SM);
+ 		if (ret_val)
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 
+ 		locked = true;
+ 	}
+@@ -400,7 +400,7 @@ static s32 ixgbe_get_link_capabilities_82599(struct ixgbe_hw *hw,
+ 		break;
+ 
+ 	default:
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EIO;
+ 	}
+ 
+ 	if (hw->phy.multispeed_fiber) {
+@@ -541,7 +541,7 @@ static s32 ixgbe_start_mac_link_82599(struct ixgbe_hw *hw,
+ 				msleep(100);
+ 			}
+ 			if (!(links_reg & IXGBE_LINKS_KX_AN_COMP)) {
+-				status = IXGBE_ERR_AUTONEG_NOT_COMPLETE;
++				status = -EIO;
+ 				hw_dbg(hw, "Autoneg did not complete.\n");
+ 			}
+ 		}
+@@ -794,7 +794,7 @@ static s32 ixgbe_setup_mac_link_82599(struct ixgbe_hw *hw,
+ 	speed &= link_capabilities;
+ 
+ 	if (speed == IXGBE_LINK_SPEED_UNKNOWN)
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EINVAL;
+ 
+ 	/* Use stored value (EEPROM defaults) of AUTOC to find KR/KX4 support*/
+ 	if (hw->mac.orig_link_settings_stored)
+@@ -861,8 +861,7 @@ static s32 ixgbe_setup_mac_link_82599(struct ixgbe_hw *hw,
+ 					msleep(100);
+ 				}
+ 				if (!(links_reg & IXGBE_LINKS_KX_AN_COMP)) {
+-					status =
+-						IXGBE_ERR_AUTONEG_NOT_COMPLETE;
++					status = -EIO;
+ 					hw_dbg(hw, "Autoneg did not complete.\n");
+ 				}
+ 			}
+@@ -927,7 +926,7 @@ static s32 ixgbe_reset_hw_82599(struct ixgbe_hw *hw)
+ 	/* Identify PHY and related function pointers */
+ 	status = hw->phy.ops.init(hw);
+ 
+-	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED)
++	if (status == -EOPNOTSUPP)
+ 		return status;
+ 
+ 	/* Setup SFP module if there is one present. */
+@@ -936,7 +935,7 @@ static s32 ixgbe_reset_hw_82599(struct ixgbe_hw *hw)
+ 		hw->phy.sfp_setup_needed = false;
+ 	}
+ 
+-	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED)
++	if (status == -EOPNOTSUPP)
+ 		return status;
+ 
+ 	/* Reset PHY */
+@@ -974,7 +973,7 @@ static s32 ixgbe_reset_hw_82599(struct ixgbe_hw *hw)
+ 	}
+ 
+ 	if (ctrl & IXGBE_CTRL_RST_MASK) {
+-		status = IXGBE_ERR_RESET_FAILED;
++		status = -EIO;
+ 		hw_dbg(hw, "Reset polling failed to complete.\n");
+ 	}
+ 
+@@ -1093,7 +1092,7 @@ static s32 ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, u32 *fdircmd)
+ 		udelay(10);
+ 	}
+ 
+-	return IXGBE_ERR_FDIR_CMD_INCOMPLETE;
++	return -EIO;
+ }
+ 
+ /**
+@@ -1155,7 +1154,7 @@ s32 ixgbe_reinit_fdir_tables_82599(struct ixgbe_hw *hw)
+ 	}
+ 	if (i >= IXGBE_FDIR_INIT_DONE_POLL) {
+ 		hw_dbg(hw, "Flow Director Signature poll time exceeded!\n");
+-		return IXGBE_ERR_FDIR_REINIT_FAILED;
++		return -EIO;
+ 	}
+ 
+ 	/* Clear FDIR statistics registers (read to clear) */
+@@ -1387,7 +1386,7 @@ s32 ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw,
+ 		break;
+ 	default:
+ 		hw_dbg(hw, " Error on flow type input\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	/* configure FDIRCMD register */
+@@ -1546,7 +1545,7 @@ s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw,
+ 		break;
+ 	default:
+ 		hw_dbg(hw, " Error on vm pool mask\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	switch (input_mask->formatted.flow_type & IXGBE_ATR_L4TYPE_MASK) {
+@@ -1555,13 +1554,13 @@ s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw,
+ 		if (input_mask->formatted.dst_port ||
+ 		    input_mask->formatted.src_port) {
+ 			hw_dbg(hw, " Error on src/dst port mask\n");
+-			return IXGBE_ERR_CONFIG;
++			return -EIO;
+ 		}
+ 	case IXGBE_ATR_L4TYPE_MASK:
+ 		break;
+ 	default:
+ 		hw_dbg(hw, " Error on flow type mask\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	switch (ntohs(input_mask->formatted.vlan_id) & 0xEFFF) {
+@@ -1582,7 +1581,7 @@ s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw,
+ 		break;
+ 	default:
+ 		hw_dbg(hw, " Error on VLAN mask\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	switch ((__force u16)input_mask->formatted.flex_bytes & 0xFFFF) {
+@@ -1594,7 +1593,7 @@ s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw,
+ 		break;
+ 	default:
+ 		hw_dbg(hw, " Error on flexible byte mask\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	/* Now mask VM pool and destination IPv6 - bits 5 and 2 */
+@@ -1823,7 +1822,7 @@ static s32 ixgbe_identify_phy_82599(struct ixgbe_hw *hw)
+ 
+ 	/* Return error if SFP module has been detected but is not supported */
+ 	if (hw->phy.type == ixgbe_phy_sfp_unsupported)
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 
+ 	return status;
+ }
+@@ -1862,13 +1861,13 @@ static s32 ixgbe_enable_rx_dma_82599(struct ixgbe_hw *hw, u32 regval)
+  *  Verifies that installed the firmware version is 0.6 or higher
+  *  for SFI devices. All 82599 SFI devices should have version 0.6 or higher.
+  *
+- *  Returns IXGBE_ERR_EEPROM_VERSION if the FW is not present or
+- *  if the FW version is not supported.
++ *  Return: -EACCES if the FW is not present or if the FW version is
++ *  not supported.
+  **/
+ static s32 ixgbe_verify_fw_version_82599(struct ixgbe_hw *hw)
+ {
+-	s32 status = IXGBE_ERR_EEPROM_VERSION;
+ 	u16 fw_offset, fw_ptp_cfg_offset;
++	s32 status = -EACCES;
+ 	u16 offset;
+ 	u16 fw_version = 0;
+ 
+@@ -1882,7 +1881,7 @@ static s32 ixgbe_verify_fw_version_82599(struct ixgbe_hw *hw)
+ 		goto fw_version_err;
+ 
+ 	if (fw_offset == 0 || fw_offset == 0xFFFF)
+-		return IXGBE_ERR_EEPROM_VERSION;
++		return -EACCES;
+ 
+ 	/* get the offset to the Pass Through Patch Configuration block */
+ 	offset = fw_offset + IXGBE_FW_PASSTHROUGH_PATCH_CONFIG_PTR;
+@@ -1890,7 +1889,7 @@ static s32 ixgbe_verify_fw_version_82599(struct ixgbe_hw *hw)
+ 		goto fw_version_err;
+ 
+ 	if (fw_ptp_cfg_offset == 0 || fw_ptp_cfg_offset == 0xFFFF)
+-		return IXGBE_ERR_EEPROM_VERSION;
++		return -EACCES;
+ 
+ 	/* get the firmware version */
+ 	offset = fw_ptp_cfg_offset + IXGBE_FW_PATCH_VERSION_4;
+@@ -1904,7 +1903,7 @@ static s32 ixgbe_verify_fw_version_82599(struct ixgbe_hw *hw)
+ 
+ fw_version_err:
+ 	hw_err(hw, "eeprom read at offset %d failed\n", offset);
+-	return IXGBE_ERR_EEPROM_VERSION;
++	return -EACCES;
+ }
+ 
+ /**
+@@ -2037,7 +2036,7 @@ static s32 ixgbe_reset_pipeline_82599(struct ixgbe_hw *hw)
+ 
+ 	if (!(anlp1_reg & IXGBE_ANLP1_AN_STATE_MASK)) {
+ 		hw_dbg(hw, "auto negotiation not completed\n");
+-		ret_val = IXGBE_ERR_RESET_FAILED;
++		ret_val = -EIO;
+ 		goto reset_pipeline_out;
+ 	}
+ 
+@@ -2086,7 +2085,7 @@ static s32 ixgbe_read_i2c_byte_82599(struct ixgbe_hw *hw, u8 byte_offset,
+ 
+ 		if (!timeout) {
+ 			hw_dbg(hw, "Driver can't access resource, acquiring I2C bus timeout.\n");
+-			status = IXGBE_ERR_I2C;
++			status = -EIO;
+ 			goto release_i2c_access;
+ 		}
+ 	}
+@@ -2140,7 +2139,7 @@ static s32 ixgbe_write_i2c_byte_82599(struct ixgbe_hw *hw, u8 byte_offset,
+ 
+ 		if (!timeout) {
+ 			hw_dbg(hw, "Driver can't access resource, acquiring I2C bus timeout.\n");
+-			status = IXGBE_ERR_I2C;
++			status = -EIO;
+ 			goto release_i2c_access;
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
+index 62ddb452f8623..22595d22167db 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
+@@ -30,7 +30,7 @@ static s32 ixgbe_write_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset,
+ 					     u16 words, u16 *data);
+ static s32 ixgbe_detect_eeprom_page_size_generic(struct ixgbe_hw *hw,
+ 						 u16 offset);
+-static s32 ixgbe_disable_pcie_master(struct ixgbe_hw *hw);
++static s32 ixgbe_disable_pcie_primary(struct ixgbe_hw *hw);
+ 
+ /* Base table for registers values that change by MAC */
+ const u32 ixgbe_mvals_8259X[IXGBE_MVALS_IDX_LIMIT] = {
+@@ -123,7 +123,7 @@ s32 ixgbe_setup_fc_generic(struct ixgbe_hw *hw)
+ 	 */
+ 	if (hw->fc.strict_ieee && hw->fc.requested_mode == ixgbe_fc_rx_pause) {
+ 		hw_dbg(hw, "ixgbe_fc_rx_pause not valid in strict IEEE mode\n");
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 	}
+ 
+ 	/*
+@@ -214,7 +214,7 @@ s32 ixgbe_setup_fc_generic(struct ixgbe_hw *hw)
+ 		break;
+ 	default:
+ 		hw_dbg(hw, "Flow control param set incorrectly\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	if (hw->mac.type != ixgbe_mac_X540) {
+@@ -499,7 +499,7 @@ s32 ixgbe_read_pba_string_generic(struct ixgbe_hw *hw, u8 *pba_num,
+ 
+ 	if (pba_num == NULL) {
+ 		hw_dbg(hw, "PBA string buffer was null\n");
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	ret_val = hw->eeprom.ops.read(hw, IXGBE_PBANUM0_PTR, &data);
+@@ -525,7 +525,7 @@ s32 ixgbe_read_pba_string_generic(struct ixgbe_hw *hw, u8 *pba_num,
+ 		/* we will need 11 characters to store the PBA */
+ 		if (pba_num_size < 11) {
+ 			hw_dbg(hw, "PBA string buffer too small\n");
+-			return IXGBE_ERR_NO_SPACE;
++			return -ENOSPC;
+ 		}
+ 
+ 		/* extract hex string from data and pba_ptr */
+@@ -562,13 +562,13 @@ s32 ixgbe_read_pba_string_generic(struct ixgbe_hw *hw, u8 *pba_num,
+ 
+ 	if (length == 0xFFFF || length == 0) {
+ 		hw_dbg(hw, "NVM PBA number section invalid length\n");
+-		return IXGBE_ERR_PBA_SECTION;
++		return -EIO;
+ 	}
+ 
+ 	/* check if pba_num buffer is big enough */
+ 	if (pba_num_size  < (((u32)length * 2) - 1)) {
+ 		hw_dbg(hw, "PBA string buffer too small\n");
+-		return IXGBE_ERR_NO_SPACE;
++		return -ENOSPC;
+ 	}
+ 
+ 	/* trim pba length from start of string */
+@@ -745,10 +745,10 @@ s32 ixgbe_stop_adapter_generic(struct ixgbe_hw *hw)
+ 	usleep_range(1000, 2000);
+ 
+ 	/*
+-	 * Prevent the PCI-E bus from from hanging by disabling PCI-E master
++	 * Prevent the PCI-E bus from hanging by disabling PCI-E primary
+ 	 * access and verify no pending requests
+ 	 */
+-	return ixgbe_disable_pcie_master(hw);
++	return ixgbe_disable_pcie_primary(hw);
+ }
+ 
+ /**
+@@ -804,7 +804,7 @@ s32 ixgbe_led_on_generic(struct ixgbe_hw *hw, u32 index)
+ 	u32 led_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL);
+ 
+ 	if (index > 3)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* To turn on the LED, set mode to ON. */
+ 	led_reg &= ~IXGBE_LED_MODE_MASK(index);
+@@ -825,7 +825,7 @@ s32 ixgbe_led_off_generic(struct ixgbe_hw *hw, u32 index)
+ 	u32 led_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL);
+ 
+ 	if (index > 3)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* To turn off the LED, set mode to OFF. */
+ 	led_reg &= ~IXGBE_LED_MODE_MASK(index);
+@@ -903,11 +903,8 @@ s32 ixgbe_write_eeprom_buffer_bit_bang_generic(struct ixgbe_hw *hw, u16 offset,
+ 
+ 	hw->eeprom.ops.init_params(hw);
+ 
+-	if (words == 0)
+-		return IXGBE_ERR_INVALID_ARGUMENT;
+-
+-	if (offset + words > hw->eeprom.word_size)
+-		return IXGBE_ERR_EEPROM;
++	if (words == 0 || (offset + words > hw->eeprom.word_size))
++		return -EINVAL;
+ 
+ 	/*
+ 	 * The EEPROM page size cannot be queried from the chip. We do lazy
+@@ -961,7 +958,7 @@ static s32 ixgbe_write_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset,
+ 
+ 	if (ixgbe_ready_eeprom(hw) != 0) {
+ 		ixgbe_release_eeprom(hw);
+-		return IXGBE_ERR_EEPROM;
++		return -EIO;
+ 	}
+ 
+ 	for (i = 0; i < words; i++) {
+@@ -1027,7 +1024,7 @@ s32 ixgbe_write_eeprom_generic(struct ixgbe_hw *hw, u16 offset, u16 data)
+ 	hw->eeprom.ops.init_params(hw);
+ 
+ 	if (offset >= hw->eeprom.word_size)
+-		return IXGBE_ERR_EEPROM;
++		return -EINVAL;
+ 
+ 	return ixgbe_write_eeprom_buffer_bit_bang(hw, offset, 1, &data);
+ }
+@@ -1049,11 +1046,8 @@ s32 ixgbe_read_eeprom_buffer_bit_bang_generic(struct ixgbe_hw *hw, u16 offset,
+ 
+ 	hw->eeprom.ops.init_params(hw);
+ 
+-	if (words == 0)
+-		return IXGBE_ERR_INVALID_ARGUMENT;
+-
+-	if (offset + words > hw->eeprom.word_size)
+-		return IXGBE_ERR_EEPROM;
++	if (words == 0 || (offset + words > hw->eeprom.word_size))
++		return -EINVAL;
+ 
+ 	/*
+ 	 * We cannot hold synchronization semaphores for too long
+@@ -1098,7 +1092,7 @@ static s32 ixgbe_read_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset,
+ 
+ 	if (ixgbe_ready_eeprom(hw) != 0) {
+ 		ixgbe_release_eeprom(hw);
+-		return IXGBE_ERR_EEPROM;
++		return -EIO;
+ 	}
+ 
+ 	for (i = 0; i < words; i++) {
+@@ -1141,7 +1135,7 @@ s32 ixgbe_read_eeprom_bit_bang_generic(struct ixgbe_hw *hw, u16 offset,
+ 	hw->eeprom.ops.init_params(hw);
+ 
+ 	if (offset >= hw->eeprom.word_size)
+-		return IXGBE_ERR_EEPROM;
++		return -EINVAL;
+ 
+ 	return ixgbe_read_eeprom_buffer_bit_bang(hw, offset, 1, data);
+ }
+@@ -1164,11 +1158,8 @@ s32 ixgbe_read_eerd_buffer_generic(struct ixgbe_hw *hw, u16 offset,
+ 
+ 	hw->eeprom.ops.init_params(hw);
+ 
+-	if (words == 0)
+-		return IXGBE_ERR_INVALID_ARGUMENT;
+-
+-	if (offset >= hw->eeprom.word_size)
+-		return IXGBE_ERR_EEPROM;
++	if (words == 0 || offset >= hw->eeprom.word_size)
++		return -EINVAL;
+ 
+ 	for (i = 0; i < words; i++) {
+ 		eerd = ((offset + i) << IXGBE_EEPROM_RW_ADDR_SHIFT) |
+@@ -1261,11 +1252,8 @@ s32 ixgbe_write_eewr_buffer_generic(struct ixgbe_hw *hw, u16 offset,
+ 
+ 	hw->eeprom.ops.init_params(hw);
+ 
+-	if (words == 0)
+-		return IXGBE_ERR_INVALID_ARGUMENT;
+-
+-	if (offset >= hw->eeprom.word_size)
+-		return IXGBE_ERR_EEPROM;
++	if (words == 0 || offset >= hw->eeprom.word_size)
++		return -EINVAL;
+ 
+ 	for (i = 0; i < words; i++) {
+ 		eewr = ((offset + i) << IXGBE_EEPROM_RW_ADDR_SHIFT) |
+@@ -1327,7 +1315,7 @@ static s32 ixgbe_poll_eerd_eewr_done(struct ixgbe_hw *hw, u32 ee_reg)
+ 		}
+ 		udelay(5);
+ 	}
+-	return IXGBE_ERR_EEPROM;
++	return -EIO;
+ }
+ 
+ /**
+@@ -1343,7 +1331,7 @@ static s32 ixgbe_acquire_eeprom(struct ixgbe_hw *hw)
+ 	u32 i;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM) != 0)
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	eec = IXGBE_READ_REG(hw, IXGBE_EEC(hw));
+ 
+@@ -1365,7 +1353,7 @@ static s32 ixgbe_acquire_eeprom(struct ixgbe_hw *hw)
+ 		hw_dbg(hw, "Could not acquire EEPROM grant\n");
+ 
+ 		hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM);
+-		return IXGBE_ERR_EEPROM;
++		return -EIO;
+ 	}
+ 
+ 	/* Setup EEPROM for Read/Write */
+@@ -1418,7 +1406,7 @@ static s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw)
+ 		swsm = IXGBE_READ_REG(hw, IXGBE_SWSM(hw));
+ 		if (swsm & IXGBE_SWSM_SMBI) {
+ 			hw_dbg(hw, "Software semaphore SMBI between device drivers not granted.\n");
+-			return IXGBE_ERR_EEPROM;
++			return -EIO;
+ 		}
+ 	}
+ 
+@@ -1446,7 +1434,7 @@ static s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw)
+ 	if (i >= timeout) {
+ 		hw_dbg(hw, "SWESMBI Software EEPROM semaphore not granted.\n");
+ 		ixgbe_release_eeprom_semaphore(hw);
+-		return IXGBE_ERR_EEPROM;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -1502,7 +1490,7 @@ static s32 ixgbe_ready_eeprom(struct ixgbe_hw *hw)
+ 	 */
+ 	if (i >= IXGBE_EEPROM_MAX_RETRY_SPI) {
+ 		hw_dbg(hw, "SPI EEPROM Status error\n");
+-		return IXGBE_ERR_EEPROM;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -1714,7 +1702,7 @@ s32 ixgbe_calc_eeprom_checksum_generic(struct ixgbe_hw *hw)
+ 	for (i = IXGBE_PCIE_ANALOG_PTR; i < IXGBE_FW_PTR; i++) {
+ 		if (hw->eeprom.ops.read(hw, i, &pointer)) {
+ 			hw_dbg(hw, "EEPROM read failed\n");
+-			return IXGBE_ERR_EEPROM;
++			return -EIO;
+ 		}
+ 
+ 		/* If the pointer seems invalid */
+@@ -1723,7 +1711,7 @@ s32 ixgbe_calc_eeprom_checksum_generic(struct ixgbe_hw *hw)
+ 
+ 		if (hw->eeprom.ops.read(hw, pointer, &length)) {
+ 			hw_dbg(hw, "EEPROM read failed\n");
+-			return IXGBE_ERR_EEPROM;
++			return -EIO;
+ 		}
+ 
+ 		if (length == 0xFFFF || length == 0)
+@@ -1732,7 +1720,7 @@ s32 ixgbe_calc_eeprom_checksum_generic(struct ixgbe_hw *hw)
+ 		for (j = pointer + 1; j <= pointer + length; j++) {
+ 			if (hw->eeprom.ops.read(hw, j, &word)) {
+ 				hw_dbg(hw, "EEPROM read failed\n");
+-				return IXGBE_ERR_EEPROM;
++				return -EIO;
+ 			}
+ 			checksum += word;
+ 		}
+@@ -1785,7 +1773,7 @@ s32 ixgbe_validate_eeprom_checksum_generic(struct ixgbe_hw *hw,
+ 	 * calculated checksum
+ 	 */
+ 	if (read_checksum != checksum)
+-		status = IXGBE_ERR_EEPROM_CHECKSUM;
++		status = -EIO;
+ 
+ 	/* If the user cares, return the calculated checksum */
+ 	if (checksum_val)
+@@ -1844,7 +1832,7 @@ s32 ixgbe_set_rar_generic(struct ixgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
+ 	/* Make sure we are using a valid rar index range */
+ 	if (index >= rar_entries) {
+ 		hw_dbg(hw, "RAR index %d is out of range.\n", index);
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	/* setup VMDq pool selection before this RAR gets enabled */
+@@ -1896,7 +1884,7 @@ s32 ixgbe_clear_rar_generic(struct ixgbe_hw *hw, u32 index)
+ 	/* Make sure we are using a valid rar index range */
+ 	if (index >= rar_entries) {
+ 		hw_dbg(hw, "RAR index %d is out of range.\n", index);
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	/*
+@@ -2145,7 +2133,7 @@ s32 ixgbe_fc_enable_generic(struct ixgbe_hw *hw)
+ 
+ 	/* Validate the water mark configuration. */
+ 	if (!hw->fc.pause_time)
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 
+ 	/* Low water mark of zero causes XOFF floods */
+ 	for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
+@@ -2154,7 +2142,7 @@ s32 ixgbe_fc_enable_generic(struct ixgbe_hw *hw)
+ 			if (!hw->fc.low_water[i] ||
+ 			    hw->fc.low_water[i] >= hw->fc.high_water[i]) {
+ 				hw_dbg(hw, "Invalid water mark configuration\n");
+-				return IXGBE_ERR_INVALID_LINK_SETTINGS;
++				return -EINVAL;
+ 			}
+ 		}
+ 	}
+@@ -2211,7 +2199,7 @@ s32 ixgbe_fc_enable_generic(struct ixgbe_hw *hw)
+ 		break;
+ 	default:
+ 		hw_dbg(hw, "Flow control param set incorrectly\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	/* Set 802.3x based flow control settings. */
+@@ -2268,7 +2256,7 @@ s32 ixgbe_negotiate_fc(struct ixgbe_hw *hw, u32 adv_reg, u32 lp_reg,
+ 		       u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 lp_asm)
+ {
+ 	if ((!(adv_reg)) ||  (!(lp_reg)))
+-		return IXGBE_ERR_FC_NOT_NEGOTIATED;
++		return -EINVAL;
+ 
+ 	if ((adv_reg & adv_sym) && (lp_reg & lp_sym)) {
+ 		/*
+@@ -2320,7 +2308,7 @@ static s32 ixgbe_fc_autoneg_fiber(struct ixgbe_hw *hw)
+ 	linkstat = IXGBE_READ_REG(hw, IXGBE_PCS1GLSTA);
+ 	if ((!!(linkstat & IXGBE_PCS1GLSTA_AN_COMPLETE) == 0) ||
+ 	    (!!(linkstat & IXGBE_PCS1GLSTA_AN_TIMED_OUT) == 1))
+-		return IXGBE_ERR_FC_NOT_NEGOTIATED;
++		return -EIO;
+ 
+ 	pcs_anadv_reg = IXGBE_READ_REG(hw, IXGBE_PCS1GANA);
+ 	pcs_lpab_reg = IXGBE_READ_REG(hw, IXGBE_PCS1GANLP);
+@@ -2352,12 +2340,12 @@ static s32 ixgbe_fc_autoneg_backplane(struct ixgbe_hw *hw)
+ 	 */
+ 	links = IXGBE_READ_REG(hw, IXGBE_LINKS);
+ 	if ((links & IXGBE_LINKS_KX_AN_COMP) == 0)
+-		return IXGBE_ERR_FC_NOT_NEGOTIATED;
++		return -EIO;
+ 
+ 	if (hw->mac.type == ixgbe_mac_82599EB) {
+ 		links2 = IXGBE_READ_REG(hw, IXGBE_LINKS2);
+ 		if ((links2 & IXGBE_LINKS2_AN_SUPPORTED) == 0)
+-			return IXGBE_ERR_FC_NOT_NEGOTIATED;
++			return -EIO;
+ 	}
+ 	/*
+ 	 * Read the 10g AN autoc and LP ability registers and resolve
+@@ -2406,8 +2394,8 @@ static s32 ixgbe_fc_autoneg_copper(struct ixgbe_hw *hw)
+  **/
+ void ixgbe_fc_autoneg(struct ixgbe_hw *hw)
+ {
+-	s32 ret_val = IXGBE_ERR_FC_NOT_NEGOTIATED;
+ 	ixgbe_link_speed speed;
++	s32 ret_val = -EIO;
+ 	bool link_up;
+ 
+ 	/*
+@@ -2505,15 +2493,15 @@ static u32 ixgbe_pcie_timeout_poll(struct ixgbe_hw *hw)
+ }
+ 
+ /**
+- *  ixgbe_disable_pcie_master - Disable PCI-express master access
++ *  ixgbe_disable_pcie_primary - Disable PCI-express primary access
+  *  @hw: pointer to hardware structure
+  *
+- *  Disables PCI-Express master access and verifies there are no pending
+- *  requests. IXGBE_ERR_MASTER_REQUESTS_PENDING is returned if master disable
+- *  bit hasn't caused the master requests to be disabled, else 0
+- *  is returned signifying master requests disabled.
++ *  Disables PCI-Express primary access and verifies there are no pending
++ *  requests. -EALREADY is returned if primary disable
++ *  bit hasn't caused the primary requests to be disabled, else 0
++ *  is returned signifying primary requests disabled.
+  **/
+-static s32 ixgbe_disable_pcie_master(struct ixgbe_hw *hw)
++static s32 ixgbe_disable_pcie_primary(struct ixgbe_hw *hw)
+ {
+ 	u32 i, poll;
+ 	u16 value;
+@@ -2522,23 +2510,23 @@ static s32 ixgbe_disable_pcie_master(struct ixgbe_hw *hw)
+ 	IXGBE_WRITE_REG(hw, IXGBE_CTRL, IXGBE_CTRL_GIO_DIS);
+ 
+ 	/* Poll for bit to read as set */
+-	for (i = 0; i < IXGBE_PCI_MASTER_DISABLE_TIMEOUT; i++) {
++	for (i = 0; i < IXGBE_PCI_PRIMARY_DISABLE_TIMEOUT; i++) {
+ 		if (IXGBE_READ_REG(hw, IXGBE_CTRL) & IXGBE_CTRL_GIO_DIS)
+ 			break;
+ 		usleep_range(100, 120);
+ 	}
+-	if (i >= IXGBE_PCI_MASTER_DISABLE_TIMEOUT) {
++	if (i >= IXGBE_PCI_PRIMARY_DISABLE_TIMEOUT) {
+ 		hw_dbg(hw, "GIO disable did not set - requesting resets\n");
+ 		goto gio_disable_fail;
+ 	}
+ 
+-	/* Exit if master requests are blocked */
++	/* Exit if primary requests are blocked */
+ 	if (!(IXGBE_READ_REG(hw, IXGBE_STATUS) & IXGBE_STATUS_GIO) ||
+ 	    ixgbe_removed(hw->hw_addr))
+ 		return 0;
+ 
+-	/* Poll for master request bit to clear */
+-	for (i = 0; i < IXGBE_PCI_MASTER_DISABLE_TIMEOUT; i++) {
++	/* Poll for primary request bit to clear */
++	for (i = 0; i < IXGBE_PCI_PRIMARY_DISABLE_TIMEOUT; i++) {
+ 		udelay(100);
+ 		if (!(IXGBE_READ_REG(hw, IXGBE_STATUS) & IXGBE_STATUS_GIO))
+ 			return 0;
+@@ -2546,13 +2534,13 @@ static s32 ixgbe_disable_pcie_master(struct ixgbe_hw *hw)
+ 
+ 	/*
+ 	 * Two consecutive resets are required via CTRL.RST per datasheet
+-	 * 5.2.5.3.2 Master Disable.  We set a flag to inform the reset routine
+-	 * of this need.  The first reset prevents new master requests from
++	 * 5.2.5.3.2 Primary Disable.  We set a flag to inform the reset routine
++	 * of this need.  The first reset prevents new primary requests from
+ 	 * being issued by our device.  We then must wait 1usec or more for any
+ 	 * remaining completions from the PCIe bus to trickle in, and then reset
+ 	 * again to clear out any effects they may have had on our device.
+ 	 */
+-	hw_dbg(hw, "GIO Master Disable bit didn't clear - requesting resets\n");
++	hw_dbg(hw, "GIO Primary Disable bit didn't clear - requesting resets\n");
+ gio_disable_fail:
+ 	hw->mac.flags |= IXGBE_FLAGS_DOUBLE_RESET_REQUIRED;
+ 
+@@ -2574,7 +2562,7 @@ static s32 ixgbe_disable_pcie_master(struct ixgbe_hw *hw)
+ 	}
+ 
+ 	hw_dbg(hw, "PCIe transaction pending bit also did not clear.\n");
+-	return IXGBE_ERR_MASTER_REQUESTS_PENDING;
++	return -EALREADY;
+ }
+ 
+ /**
+@@ -2599,7 +2587,7 @@ s32 ixgbe_acquire_swfw_sync(struct ixgbe_hw *hw, u32 mask)
+ 		 * SW_FW_SYNC bits (not just NVM)
+ 		 */
+ 		if (ixgbe_get_eeprom_semaphore(hw))
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 
+ 		gssr = IXGBE_READ_REG(hw, IXGBE_GSSR);
+ 		if (!(gssr & (fwmask | swmask))) {
+@@ -2619,7 +2607,7 @@ s32 ixgbe_acquire_swfw_sync(struct ixgbe_hw *hw, u32 mask)
+ 		ixgbe_release_swfw_sync(hw, gssr & (fwmask | swmask));
+ 
+ 	usleep_range(5000, 10000);
+-	return IXGBE_ERR_SWFW_SYNC;
++	return -EBUSY;
+ }
+ 
+ /**
+@@ -2756,7 +2744,7 @@ s32 ixgbe_blink_led_start_generic(struct ixgbe_hw *hw, u32 index)
+ 	s32 ret_val;
+ 
+ 	if (index > 3)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/*
+ 	 * Link must be up to auto-blink the LEDs;
+@@ -2802,7 +2790,7 @@ s32 ixgbe_blink_led_stop_generic(struct ixgbe_hw *hw, u32 index)
+ 	s32 ret_val;
+ 
+ 	if (index > 3)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	ret_val = hw->mac.ops.prot_autoc_read(hw, &locked, &autoc_reg);
+ 	if (ret_val)
+@@ -2962,7 +2950,7 @@ s32 ixgbe_clear_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq)
+ 	/* Make sure we are using a valid rar index range */
+ 	if (rar >= rar_entries) {
+ 		hw_dbg(hw, "RAR index %d is out of range.\n", rar);
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	mpsar_lo = IXGBE_READ_REG(hw, IXGBE_MPSAR_LO(rar));
+@@ -3013,7 +3001,7 @@ s32 ixgbe_set_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq)
+ 	/* Make sure we are using a valid rar index range */
+ 	if (rar >= rar_entries) {
+ 		hw_dbg(hw, "RAR index %d is out of range.\n", rar);
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	if (vmdq < 32) {
+@@ -3090,7 +3078,7 @@ static s32 ixgbe_find_vlvf_slot(struct ixgbe_hw *hw, u32 vlan, bool vlvf_bypass)
+ 	 * will simply bypass the VLVF if there are no entries present in the
+ 	 * VLVF that contain our VLAN
+ 	 */
+-	first_empty_slot = vlvf_bypass ? IXGBE_ERR_NO_SPACE : 0;
++	first_empty_slot = vlvf_bypass ? -ENOSPC : 0;
+ 
+ 	/* add VLAN enable bit for comparison */
+ 	vlan |= IXGBE_VLVF_VIEN;
+@@ -3114,7 +3102,7 @@ static s32 ixgbe_find_vlvf_slot(struct ixgbe_hw *hw, u32 vlan, bool vlvf_bypass)
+ 	if (!first_empty_slot)
+ 		hw_dbg(hw, "No space in VLVF.\n");
+ 
+-	return first_empty_slot ? : IXGBE_ERR_NO_SPACE;
++	return first_empty_slot ? : -ENOSPC;
+ }
+ 
+ /**
+@@ -3134,7 +3122,7 @@ s32 ixgbe_set_vfta_generic(struct ixgbe_hw *hw, u32 vlan, u32 vind,
+ 	s32 vlvf_index;
+ 
+ 	if ((vlan > 4095) || (vind > 63))
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/*
+ 	 * this is a 2 part operation - first the VFTA, then the
+@@ -3595,7 +3583,8 @@ u8 ixgbe_calculate_checksum(u8 *buffer, u32 length)
+  *
+  *  Communicates with the manageability block. On success return 0
+  *  else returns semaphore error when encountering an error acquiring
+- *  semaphore or IXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
++ *  semaphore, -EINVAL when incorrect parameters passed or -EIO when
++ *  command fails.
+  *
+  *  This function assumes that the IXGBE_GSSR_SW_MNG_SM semaphore is held
+  *  by the caller.
+@@ -3608,7 +3597,7 @@ s32 ixgbe_hic_unlocked(struct ixgbe_hw *hw, u32 *buffer, u32 length,
+ 
+ 	if (!length || length > IXGBE_HI_MAX_BLOCK_BYTE_LENGTH) {
+ 		hw_dbg(hw, "Buffer length failure buffersize-%d.\n", length);
+-		return IXGBE_ERR_HOST_INTERFACE_COMMAND;
++		return -EINVAL;
+ 	}
+ 
+ 	/* Set bit 9 of FWSTS clearing FW reset indication */
+@@ -3619,13 +3608,13 @@ s32 ixgbe_hic_unlocked(struct ixgbe_hw *hw, u32 *buffer, u32 length,
+ 	hicr = IXGBE_READ_REG(hw, IXGBE_HICR);
+ 	if (!(hicr & IXGBE_HICR_EN)) {
+ 		hw_dbg(hw, "IXGBE_HOST_EN bit disabled.\n");
+-		return IXGBE_ERR_HOST_INTERFACE_COMMAND;
++		return -EIO;
+ 	}
+ 
+ 	/* Calculate length in DWORDs. We must be DWORD aligned */
+ 	if (length % sizeof(u32)) {
+ 		hw_dbg(hw, "Buffer length failure, not aligned to dword");
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	dword_len = length >> 2;
+@@ -3650,7 +3639,7 @@ s32 ixgbe_hic_unlocked(struct ixgbe_hw *hw, u32 *buffer, u32 length,
+ 	/* Check command successful completion. */
+ 	if ((timeout && i == timeout) ||
+ 	    !(IXGBE_READ_REG(hw, IXGBE_HICR) & IXGBE_HICR_SV))
+-		return IXGBE_ERR_HOST_INTERFACE_COMMAND;
++		return -EIO;
+ 
+ 	return 0;
+ }
+@@ -3670,7 +3659,7 @@ s32 ixgbe_hic_unlocked(struct ixgbe_hw *hw, u32 *buffer, u32 length,
+  *  in these cases.
+  *
+  *  Communicates with the manageability block.  On success return 0
+- *  else return IXGBE_ERR_HOST_INTERFACE_COMMAND.
++ *  else return -EIO or -EINVAL.
+  **/
+ s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, void *buffer,
+ 				 u32 length, u32 timeout,
+@@ -3687,7 +3676,7 @@ s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, void *buffer,
+ 
+ 	if (!length || length > IXGBE_HI_MAX_BLOCK_BYTE_LENGTH) {
+ 		hw_dbg(hw, "Buffer length failure buffersize-%d.\n", length);
+-		return IXGBE_ERR_HOST_INTERFACE_COMMAND;
++		return -EINVAL;
+ 	}
+ 	/* Take management host interface semaphore */
+ 	status = hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_SW_MNG_SM);
+@@ -3717,7 +3706,7 @@ s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, void *buffer,
+ 
+ 	if (length < round_up(buf_len, 4) + hdr_size) {
+ 		hw_dbg(hw, "Buffer not large enough for reply message.\n");
+-		status = IXGBE_ERR_HOST_INTERFACE_COMMAND;
++		status = -EIO;
+ 		goto rel_out;
+ 	}
+ 
+@@ -3748,8 +3737,8 @@ s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, void *buffer,
+  *
+  *  Sends driver version number to firmware through the manageability
+  *  block.  On success return 0
+- *  else returns IXGBE_ERR_SWFW_SYNC when encountering an error acquiring
+- *  semaphore or IXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
++ *  else returns -EBUSY when encountering an error acquiring
++ *  semaphore or -EIO when command fails.
+  **/
+ s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min,
+ 				 u8 build, u8 sub, __always_unused u16 len,
+@@ -3785,7 +3774,7 @@ s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min,
+ 		    FW_CEM_RESP_STATUS_SUCCESS)
+ 			ret_val = 0;
+ 		else
+-			ret_val = IXGBE_ERR_HOST_INTERFACE_COMMAND;
++			ret_val = -EIO;
+ 
+ 		break;
+ 	}
+@@ -3883,14 +3872,14 @@ static s32 ixgbe_get_ets_data(struct ixgbe_hw *hw, u16 *ets_cfg,
+ 		return status;
+ 
+ 	if ((*ets_offset == 0x0000) || (*ets_offset == 0xFFFF))
+-		return IXGBE_NOT_IMPLEMENTED;
++		return -EOPNOTSUPP;
+ 
+ 	status = hw->eeprom.ops.read(hw, *ets_offset, ets_cfg);
+ 	if (status)
+ 		return status;
+ 
+ 	if ((*ets_cfg & IXGBE_ETS_TYPE_MASK) != IXGBE_ETS_TYPE_EMC_SHIFTED)
+-		return IXGBE_NOT_IMPLEMENTED;
++		return -EOPNOTSUPP;
+ 
+ 	return 0;
+ }
+@@ -3913,7 +3902,7 @@ s32 ixgbe_get_thermal_sensor_data_generic(struct ixgbe_hw *hw)
+ 
+ 	/* Only support thermal sensors attached to physical port 0 */
+ 	if ((IXGBE_READ_REG(hw, IXGBE_STATUS) & IXGBE_STATUS_LAN_ID_1))
+-		return IXGBE_NOT_IMPLEMENTED;
++		return -EOPNOTSUPP;
+ 
+ 	status = ixgbe_get_ets_data(hw, &ets_cfg, &ets_offset);
+ 	if (status)
+@@ -3973,7 +3962,7 @@ s32 ixgbe_init_thermal_sensor_thresh_generic(struct ixgbe_hw *hw)
+ 
+ 	/* Only support thermal sensors attached to physical port 0 */
+ 	if ((IXGBE_READ_REG(hw, IXGBE_STATUS) & IXGBE_STATUS_LAN_ID_1))
+-		return IXGBE_NOT_IMPLEMENTED;
++		return -EOPNOTSUPP;
+ 
+ 	status = ixgbe_get_ets_data(hw, &ets_cfg, &ets_offset);
+ 	if (status)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+index 2eb1331834731..93532f3a3fb90 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+@@ -3346,7 +3346,7 @@ static int ixgbe_get_module_eeprom(struct net_device *dev,
+ {
+ 	struct ixgbe_adapter *adapter = netdev_priv(dev);
+ 	struct ixgbe_hw *hw = &adapter->hw;
+-	s32 status = IXGBE_ERR_PHY_ADDR_INVALID;
++	s32 status = -EFAULT;
+ 	u8 databyte = 0xFF;
+ 	int i = 0;
+ 
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 5829d81f2cb11..b16cb2365d960 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -2766,7 +2766,6 @@ static void ixgbe_check_overtemp_subtask(struct ixgbe_adapter *adapter)
+ {
+ 	struct ixgbe_hw *hw = &adapter->hw;
+ 	u32 eicr = adapter->interrupt_event;
+-	s32 rc;
+ 
+ 	if (test_bit(__IXGBE_DOWN, &adapter->state))
+ 		return;
+@@ -2800,14 +2799,13 @@ static void ixgbe_check_overtemp_subtask(struct ixgbe_adapter *adapter)
+ 		}
+ 
+ 		/* Check if this is not due to overtemp */
+-		if (hw->phy.ops.check_overtemp(hw) != IXGBE_ERR_OVERTEMP)
++		if (!hw->phy.ops.check_overtemp(hw))
+ 			return;
+ 
+ 		break;
+ 	case IXGBE_DEV_ID_X550EM_A_1G_T:
+ 	case IXGBE_DEV_ID_X550EM_A_1G_T_L:
+-		rc = hw->phy.ops.check_overtemp(hw);
+-		if (rc != IXGBE_ERR_OVERTEMP)
++		if (!hw->phy.ops.check_overtemp(hw))
+ 			return;
+ 		break;
+ 	default:
+@@ -5520,7 +5518,7 @@ static int ixgbe_non_sfp_link_config(struct ixgbe_hw *hw)
+ {
+ 	u32 speed;
+ 	bool autoneg, link_up = false;
+-	int ret = IXGBE_ERR_LINK_SETUP;
++	int ret = -EIO;
+ 
+ 	if (hw->mac.ops.check_link)
+ 		ret = hw->mac.ops.check_link(hw, &speed, &link_up, false);
+@@ -5946,13 +5944,13 @@ void ixgbe_reset(struct ixgbe_adapter *adapter)
+ 	err = hw->mac.ops.init_hw(hw);
+ 	switch (err) {
+ 	case 0:
+-	case IXGBE_ERR_SFP_NOT_PRESENT:
+-	case IXGBE_ERR_SFP_NOT_SUPPORTED:
++	case -ENOENT:
++	case -EOPNOTSUPP:
+ 		break;
+-	case IXGBE_ERR_MASTER_REQUESTS_PENDING:
+-		e_dev_err("master disable timed out\n");
++	case -EALREADY:
++		e_dev_err("primary disable timed out\n");
+ 		break;
+-	case IXGBE_ERR_EEPROM_VERSION:
++	case -EACCES:
+ 		/* We are running on a pre-production device, log a warning */
+ 		e_dev_warn("This device is a pre-production adapter/LOM. "
+ 			   "Please be aware there may be issues associated with "
+@@ -7735,10 +7733,10 @@ static void ixgbe_sfp_detection_subtask(struct ixgbe_adapter *adapter)
+ 	adapter->sfp_poll_time = jiffies + IXGBE_SFP_POLL_JIFFIES - 1;
+ 
+ 	err = hw->phy.ops.identify_sfp(hw);
+-	if (err == IXGBE_ERR_SFP_NOT_SUPPORTED)
++	if (err == -EOPNOTSUPP)
+ 		goto sfp_out;
+ 
+-	if (err == IXGBE_ERR_SFP_NOT_PRESENT) {
++	if (err == -ENOENT) {
+ 		/* If no cable is present, then we need to reset
+ 		 * the next time we find a good cable. */
+ 		adapter->flags2 |= IXGBE_FLAG2_SFP_NEEDS_RESET;
+@@ -7764,7 +7762,7 @@ static void ixgbe_sfp_detection_subtask(struct ixgbe_adapter *adapter)
+ 	else
+ 		err = hw->mac.ops.setup_sfp(hw);
+ 
+-	if (err == IXGBE_ERR_SFP_NOT_SUPPORTED)
++	if (err == -EOPNOTSUPP)
+ 		goto sfp_out;
+ 
+ 	adapter->flags |= IXGBE_FLAG_NEED_LINK_CONFIG;
+@@ -7773,8 +7771,8 @@ static void ixgbe_sfp_detection_subtask(struct ixgbe_adapter *adapter)
+ sfp_out:
+ 	clear_bit(__IXGBE_IN_SFP_INIT, &adapter->state);
+ 
+-	if ((err == IXGBE_ERR_SFP_NOT_SUPPORTED) &&
+-	    (adapter->netdev->reg_state == NETREG_REGISTERED)) {
++	if (err == -EOPNOTSUPP &&
++	    adapter->netdev->reg_state == NETREG_REGISTERED) {
+ 		e_dev_err("failed to initialize because an unsupported "
+ 			  "SFP+ module type was detected.\n");
+ 		e_dev_err("Reload the driver after installing a "
+@@ -7844,7 +7842,7 @@ static void ixgbe_service_timer(struct timer_list *t)
+ static void ixgbe_phy_interrupt_subtask(struct ixgbe_adapter *adapter)
+ {
+ 	struct ixgbe_hw *hw = &adapter->hw;
+-	u32 status;
++	bool overtemp;
+ 
+ 	if (!(adapter->flags2 & IXGBE_FLAG2_PHY_INTERRUPT))
+ 		return;
+@@ -7854,11 +7852,9 @@ static void ixgbe_phy_interrupt_subtask(struct ixgbe_adapter *adapter)
+ 	if (!hw->phy.ops.handle_lasi)
+ 		return;
+ 
+-	status = hw->phy.ops.handle_lasi(&adapter->hw);
+-	if (status != IXGBE_ERR_OVERTEMP)
+-		return;
+-
+-	e_crit(drv, "%s\n", ixgbe_overheat_msg);
++	hw->phy.ops.handle_lasi(&adapter->hw, &overtemp);
++	if (overtemp)
++		e_crit(drv, "%s\n", ixgbe_overheat_msg);
+ }
+ 
+ static void ixgbe_reset_subtask(struct ixgbe_adapter *adapter)
+@@ -10796,9 +10792,9 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	err = hw->mac.ops.reset_hw(hw);
+ 	hw->phy.reset_if_overtemp = false;
+ 	ixgbe_set_eee_capable(adapter);
+-	if (err == IXGBE_ERR_SFP_NOT_PRESENT) {
++	if (err == -ENOENT) {
+ 		err = 0;
+-	} else if (err == IXGBE_ERR_SFP_NOT_SUPPORTED) {
++	} else if (err == -EOPNOTSUPP) {
+ 		e_dev_err("failed to load because an unsupported SFP+ or QSFP module type was detected.\n");
+ 		e_dev_err("Reload the driver after installing a supported module.\n");
+ 		goto err_sw_init;
+@@ -11015,7 +11011,7 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	/* reset the hardware with the new settings */
+ 	err = hw->mac.ops.start_hw(hw);
+-	if (err == IXGBE_ERR_EEPROM_VERSION) {
++	if (err == -EACCES) {
+ 		/* We are running on a pre-production device, log a warning */
+ 		e_dev_warn("This device is a pre-production adapter/LOM. "
+ 			   "Please be aware there may be issues associated "
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
+index 5679293e53f7a..fe7ef5773369a 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
+@@ -24,7 +24,7 @@ s32 ixgbe_read_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id)
+ 		size = mbx->size;
+ 
+ 	if (!mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	return mbx->ops->read(hw, msg, size, mbx_id);
+ }
+@@ -43,10 +43,10 @@ s32 ixgbe_write_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id)
+ 	struct ixgbe_mbx_info *mbx = &hw->mbx;
+ 
+ 	if (size > mbx->size)
+-		return IXGBE_ERR_MBX;
++		return -EINVAL;
+ 
+ 	if (!mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	return mbx->ops->write(hw, msg, size, mbx_id);
+ }
+@@ -63,7 +63,7 @@ s32 ixgbe_check_for_msg(struct ixgbe_hw *hw, u16 mbx_id)
+ 	struct ixgbe_mbx_info *mbx = &hw->mbx;
+ 
+ 	if (!mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	return mbx->ops->check_for_msg(hw, mbx_id);
+ }
+@@ -80,7 +80,7 @@ s32 ixgbe_check_for_ack(struct ixgbe_hw *hw, u16 mbx_id)
+ 	struct ixgbe_mbx_info *mbx = &hw->mbx;
+ 
+ 	if (!mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	return mbx->ops->check_for_ack(hw, mbx_id);
+ }
+@@ -97,7 +97,7 @@ s32 ixgbe_check_for_rst(struct ixgbe_hw *hw, u16 mbx_id)
+ 	struct ixgbe_mbx_info *mbx = &hw->mbx;
+ 
+ 	if (!mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	return mbx->ops->check_for_rst(hw, mbx_id);
+ }
+@@ -115,12 +115,12 @@ static s32 ixgbe_poll_for_msg(struct ixgbe_hw *hw, u16 mbx_id)
+ 	int countdown = mbx->timeout;
+ 
+ 	if (!countdown || !mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	while (mbx->ops->check_for_msg(hw, mbx_id)) {
+ 		countdown--;
+ 		if (!countdown)
+-			return IXGBE_ERR_MBX;
++			return -EIO;
+ 		udelay(mbx->usec_delay);
+ 	}
+ 
+@@ -140,12 +140,12 @@ static s32 ixgbe_poll_for_ack(struct ixgbe_hw *hw, u16 mbx_id)
+ 	int countdown = mbx->timeout;
+ 
+ 	if (!countdown || !mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	while (mbx->ops->check_for_ack(hw, mbx_id)) {
+ 		countdown--;
+ 		if (!countdown)
+-			return IXGBE_ERR_MBX;
++			return -EIO;
+ 		udelay(mbx->usec_delay);
+ 	}
+ 
+@@ -169,7 +169,7 @@ static s32 ixgbe_read_posted_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size,
+ 	s32 ret_val;
+ 
+ 	if (!mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	ret_val = ixgbe_poll_for_msg(hw, mbx_id);
+ 	if (ret_val)
+@@ -197,7 +197,7 @@ static s32 ixgbe_write_posted_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size,
+ 
+ 	/* exit if either we can't write or there isn't a defined timeout */
+ 	if (!mbx->ops || !mbx->timeout)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	/* send msg */
+ 	ret_val = mbx->ops->write(hw, msg, size, mbx_id);
+@@ -217,7 +217,7 @@ static s32 ixgbe_check_for_bit_pf(struct ixgbe_hw *hw, u32 mask, s32 index)
+ 		return 0;
+ 	}
+ 
+-	return IXGBE_ERR_MBX;
++	return -EIO;
+ }
+ 
+ /**
+@@ -238,7 +238,7 @@ static s32 ixgbe_check_for_msg_pf(struct ixgbe_hw *hw, u16 vf_number)
+ 		return 0;
+ 	}
+ 
+-	return IXGBE_ERR_MBX;
++	return -EIO;
+ }
+ 
+ /**
+@@ -259,7 +259,7 @@ static s32 ixgbe_check_for_ack_pf(struct ixgbe_hw *hw, u16 vf_number)
+ 		return 0;
+ 	}
+ 
+-	return IXGBE_ERR_MBX;
++	return -EIO;
+ }
+ 
+ /**
+@@ -295,7 +295,7 @@ static s32 ixgbe_check_for_rst_pf(struct ixgbe_hw *hw, u16 vf_number)
+ 		return 0;
+ 	}
+ 
+-	return IXGBE_ERR_MBX;
++	return -EIO;
+ }
+ 
+ /**
+@@ -317,7 +317,7 @@ static s32 ixgbe_obtain_mbx_lock_pf(struct ixgbe_hw *hw, u16 vf_number)
+ 	if (p2v_mailbox & IXGBE_PFMAILBOX_PFU)
+ 		return 0;
+ 
+-	return IXGBE_ERR_MBX;
++	return -EIO;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h
+index a148534d7256d..def067b158738 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h
+@@ -7,7 +7,6 @@
+ #include "ixgbe_type.h"
+ 
+ #define IXGBE_VFMAILBOX_SIZE        16 /* 16 32 bit words - 64 bytes */
+-#define IXGBE_ERR_MBX               -100
+ 
+ #define IXGBE_VFMAILBOX             0x002FC
+ #define IXGBE_VFMBMEM               0x00200
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+index b0413904b798c..9d8b018b4f23d 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+@@ -102,7 +102,7 @@ s32 ixgbe_read_i2c_combined_generic_int(struct ixgbe_hw *hw, u8 addr,
+ 	csum = ~csum;
+ 	do {
+ 		if (lock && hw->mac.ops.acquire_swfw_sync(hw, swfw_mask))
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 		ixgbe_i2c_start(hw);
+ 		/* Device Address and write indication */
+ 		if (ixgbe_out_i2c_byte_ack(hw, addr))
+@@ -150,7 +150,7 @@ s32 ixgbe_read_i2c_combined_generic_int(struct ixgbe_hw *hw, u8 addr,
+ 			hw_dbg(hw, "I2C byte read combined error.\n");
+ 	} while (retry < max_retry);
+ 
+-	return IXGBE_ERR_I2C;
++	return -EIO;
+ }
+ 
+ /**
+@@ -179,7 +179,7 @@ s32 ixgbe_write_i2c_combined_generic_int(struct ixgbe_hw *hw, u8 addr,
+ 	csum = ~csum;
+ 	do {
+ 		if (lock && hw->mac.ops.acquire_swfw_sync(hw, swfw_mask))
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 		ixgbe_i2c_start(hw);
+ 		/* Device Address and write indication */
+ 		if (ixgbe_out_i2c_byte_ack(hw, addr))
+@@ -215,7 +215,7 @@ s32 ixgbe_write_i2c_combined_generic_int(struct ixgbe_hw *hw, u8 addr,
+ 			hw_dbg(hw, "I2C byte write combined error.\n");
+ 	} while (retry < max_retry);
+ 
+-	return IXGBE_ERR_I2C;
++	return -EIO;
+ }
+ 
+ /**
+@@ -262,8 +262,8 @@ static bool ixgbe_probe_phy(struct ixgbe_hw *hw, u16 phy_addr)
+  **/
+ s32 ixgbe_identify_phy_generic(struct ixgbe_hw *hw)
+ {
++	u32 status = -EFAULT;
+ 	u32 phy_addr;
+-	u32 status = IXGBE_ERR_PHY_ADDR_INVALID;
+ 
+ 	if (!hw->phy.phy_semaphore_mask) {
+ 		if (hw->bus.lan_id)
+@@ -282,7 +282,7 @@ s32 ixgbe_identify_phy_generic(struct ixgbe_hw *hw)
+ 		if (ixgbe_probe_phy(hw, phy_addr))
+ 			return 0;
+ 		else
+-			return IXGBE_ERR_PHY_ADDR_INVALID;
++			return -EFAULT;
+ 	}
+ 
+ 	for (phy_addr = 0; phy_addr < IXGBE_MAX_PHY_ADDR; phy_addr++) {
+@@ -405,8 +405,7 @@ s32 ixgbe_reset_phy_generic(struct ixgbe_hw *hw)
+ 		return status;
+ 
+ 	/* Don't reset PHY if it's shut down due to overtemp. */
+-	if (!hw->phy.reset_if_overtemp &&
+-	    (IXGBE_ERR_OVERTEMP == hw->phy.ops.check_overtemp(hw)))
++	if (!hw->phy.reset_if_overtemp && hw->phy.ops.check_overtemp(hw))
+ 		return 0;
+ 
+ 	/* Blocked by MNG FW so bail */
+@@ -454,7 +453,7 @@ s32 ixgbe_reset_phy_generic(struct ixgbe_hw *hw)
+ 
+ 	if (ctrl & MDIO_CTRL1_RESET) {
+ 		hw_dbg(hw, "PHY reset polling failed to complete.\n");
+-		return IXGBE_ERR_RESET_FAILED;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -496,7 +495,7 @@ s32 ixgbe_read_phy_reg_mdi(struct ixgbe_hw *hw, u32 reg_addr, u32 device_type,
+ 
+ 	if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) {
+ 		hw_dbg(hw, "PHY address command did not complete.\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	/* Address cycle complete, setup and write the read
+@@ -523,7 +522,7 @@ s32 ixgbe_read_phy_reg_mdi(struct ixgbe_hw *hw, u32 reg_addr, u32 device_type,
+ 
+ 	if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) {
+ 		hw_dbg(hw, "PHY read command didn't complete\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	/* Read operation is complete.  Get the data
+@@ -555,7 +554,7 @@ s32 ixgbe_read_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr,
+ 						phy_data);
+ 		hw->mac.ops.release_swfw_sync(hw, gssr);
+ 	} else {
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 	}
+ 
+ 	return status;
+@@ -600,7 +599,7 @@ s32 ixgbe_write_phy_reg_mdi(struct ixgbe_hw *hw, u32 reg_addr,
+ 
+ 	if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) {
+ 		hw_dbg(hw, "PHY address cmd didn't complete\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	/*
+@@ -628,7 +627,7 @@ s32 ixgbe_write_phy_reg_mdi(struct ixgbe_hw *hw, u32 reg_addr,
+ 
+ 	if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) {
+ 		hw_dbg(hw, "PHY write cmd didn't complete\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -653,7 +652,7 @@ s32 ixgbe_write_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr,
+ 						 phy_data);
+ 		hw->mac.ops.release_swfw_sync(hw, gssr);
+ 	} else {
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 	}
+ 
+ 	return status;
+@@ -1299,7 +1298,7 @@ s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw)
+ 
+ 	if ((phy_data & MDIO_CTRL1_RESET) != 0) {
+ 		hw_dbg(hw, "PHY reset did not complete.\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	/* Get init offsets */
+@@ -1356,12 +1355,12 @@ s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw)
+ 				hw_dbg(hw, "SOL\n");
+ 			} else {
+ 				hw_dbg(hw, "Bad control value\n");
+-				return IXGBE_ERR_PHY;
++				return -EIO;
+ 			}
+ 			break;
+ 		default:
+ 			hw_dbg(hw, "Bad control type\n");
+-			return IXGBE_ERR_PHY;
++			return -EIO;
+ 		}
+ 	}
+ 
+@@ -1369,7 +1368,7 @@ s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw)
+ 
+ err_eeprom:
+ 	hw_err(hw, "eeprom read at offset %d failed\n", data_offset);
+-	return IXGBE_ERR_PHY;
++	return -EIO;
+ }
+ 
+ /**
+@@ -1387,10 +1386,10 @@ s32 ixgbe_identify_module_generic(struct ixgbe_hw *hw)
+ 		return ixgbe_identify_qsfp_module_generic(hw);
+ 	default:
+ 		hw->phy.sfp_type = ixgbe_sfp_type_not_present;
+-		return IXGBE_ERR_SFP_NOT_PRESENT;
++		return -ENOENT;
+ 	}
+ 
+-	return IXGBE_ERR_SFP_NOT_PRESENT;
++	return -ENOENT;
+ }
+ 
+ /**
+@@ -1415,7 +1414,7 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
+ 
+ 	if (hw->mac.ops.get_media_type(hw) != ixgbe_media_type_fiber) {
+ 		hw->phy.sfp_type = ixgbe_sfp_type_not_present;
+-		return IXGBE_ERR_SFP_NOT_PRESENT;
++		return -ENOENT;
+ 	}
+ 
+ 	/* LAN ID is needed for sfp_type determination */
+@@ -1430,7 +1429,7 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
+ 
+ 	if (identifier != IXGBE_SFF_IDENTIFIER_SFP) {
+ 		hw->phy.type = ixgbe_phy_sfp_unsupported;
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 	}
+ 	status = hw->phy.ops.read_i2c_eeprom(hw,
+ 					     IXGBE_SFF_1GBE_COMP_CODES,
+@@ -1621,7 +1620,7 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
+ 	      hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 ||
+ 	      hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1)) {
+ 		hw->phy.type = ixgbe_phy_sfp_unsupported;
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	/* Anything else 82598-based is supported */
+@@ -1645,7 +1644,7 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
+ 		}
+ 		hw_dbg(hw, "SFP+ module not supported\n");
+ 		hw->phy.type = ixgbe_phy_sfp_unsupported;
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 	}
+ 	return 0;
+ 
+@@ -1655,7 +1654,7 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
+ 		hw->phy.id = 0;
+ 		hw->phy.type = ixgbe_phy_unknown;
+ 	}
+-	return IXGBE_ERR_SFP_NOT_PRESENT;
++	return -ENOENT;
+ }
+ 
+ /**
+@@ -1682,7 +1681,7 @@ static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
+ 
+ 	if (hw->mac.ops.get_media_type(hw) != ixgbe_media_type_fiber_qsfp) {
+ 		hw->phy.sfp_type = ixgbe_sfp_type_not_present;
+-		return IXGBE_ERR_SFP_NOT_PRESENT;
++		return -ENOENT;
+ 	}
+ 
+ 	/* LAN ID is needed for sfp_type determination */
+@@ -1696,7 +1695,7 @@ static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
+ 
+ 	if (identifier != IXGBE_SFF_IDENTIFIER_QSFP_PLUS) {
+ 		hw->phy.type = ixgbe_phy_sfp_unsupported;
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	hw->phy.id = identifier;
+@@ -1764,7 +1763,7 @@ static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
+ 		} else {
+ 			/* unsupported module type */
+ 			hw->phy.type = ixgbe_phy_sfp_unsupported;
+-			return IXGBE_ERR_SFP_NOT_SUPPORTED;
++			return -EOPNOTSUPP;
+ 		}
+ 	}
+ 
+@@ -1824,7 +1823,7 @@ static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
+ 			}
+ 			hw_dbg(hw, "QSFP module not supported\n");
+ 			hw->phy.type = ixgbe_phy_sfp_unsupported;
+-			return IXGBE_ERR_SFP_NOT_SUPPORTED;
++			return -EOPNOTSUPP;
+ 		}
+ 		return 0;
+ 	}
+@@ -1835,7 +1834,7 @@ static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
+ 	hw->phy.id = 0;
+ 	hw->phy.type = ixgbe_phy_unknown;
+ 
+-	return IXGBE_ERR_SFP_NOT_PRESENT;
++	return -ENOENT;
+ }
+ 
+ /**
+@@ -1855,14 +1854,14 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
+ 	u16 sfp_type = hw->phy.sfp_type;
+ 
+ 	if (hw->phy.sfp_type == ixgbe_sfp_type_unknown)
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 
+ 	if (hw->phy.sfp_type == ixgbe_sfp_type_not_present)
+-		return IXGBE_ERR_SFP_NOT_PRESENT;
++		return -ENOENT;
+ 
+ 	if ((hw->device_id == IXGBE_DEV_ID_82598_SR_DUAL_PORT_EM) &&
+ 	    (hw->phy.sfp_type == ixgbe_sfp_type_da_cu))
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 
+ 	/*
+ 	 * Limiting active cables and 1G Phys must be initialized as
+@@ -1883,11 +1882,11 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
+ 	if (hw->eeprom.ops.read(hw, IXGBE_PHY_INIT_OFFSET_NL, list_offset)) {
+ 		hw_err(hw, "eeprom read at %d failed\n",
+ 		       IXGBE_PHY_INIT_OFFSET_NL);
+-		return IXGBE_ERR_SFP_NO_INIT_SEQ_PRESENT;
++		return -EIO;
+ 	}
+ 
+ 	if ((!*list_offset) || (*list_offset == 0xFFFF))
+-		return IXGBE_ERR_SFP_NO_INIT_SEQ_PRESENT;
++		return -EIO;
+ 
+ 	/* Shift offset to first ID word */
+ 	(*list_offset)++;
+@@ -1906,7 +1905,7 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
+ 				goto err_phy;
+ 			if ((!*data_offset) || (*data_offset == 0xFFFF)) {
+ 				hw_dbg(hw, "SFP+ module not supported\n");
+-				return IXGBE_ERR_SFP_NOT_SUPPORTED;
++				return -EOPNOTSUPP;
+ 			} else {
+ 				break;
+ 			}
+@@ -1919,14 +1918,14 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
+ 
+ 	if (sfp_id == IXGBE_PHY_INIT_END_NL) {
+ 		hw_dbg(hw, "No matching SFP+ module found\n");
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	return 0;
+ 
+ err_phy:
+ 	hw_err(hw, "eeprom read at offset %d failed\n", *list_offset);
+-	return IXGBE_ERR_PHY;
++	return -EIO;
+ }
+ 
+ /**
+@@ -2021,7 +2020,7 @@ static s32 ixgbe_read_i2c_byte_generic_int(struct ixgbe_hw *hw, u8 byte_offset,
+ 
+ 	do {
+ 		if (lock && hw->mac.ops.acquire_swfw_sync(hw, swfw_mask))
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 
+ 		ixgbe_i2c_start(hw);
+ 
+@@ -2137,7 +2136,7 @@ static s32 ixgbe_write_i2c_byte_generic_int(struct ixgbe_hw *hw, u8 byte_offset,
+ 	u32 swfw_mask = hw->phy.phy_semaphore_mask;
+ 
+ 	if (lock && hw->mac.ops.acquire_swfw_sync(hw, swfw_mask))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	do {
+ 		ixgbe_i2c_start(hw);
+@@ -2379,7 +2378,7 @@ static s32 ixgbe_get_i2c_ack(struct ixgbe_hw *hw)
+ 
+ 	if (ack == 1) {
+ 		hw_dbg(hw, "I2C ack was not received.\n");
+-		status = IXGBE_ERR_I2C;
++		status = -EIO;
+ 	}
+ 
+ 	ixgbe_lower_i2c_clk(hw, &i2cctl);
+@@ -2451,7 +2450,7 @@ static s32 ixgbe_clock_out_i2c_bit(struct ixgbe_hw *hw, bool data)
+ 		udelay(IXGBE_I2C_T_LOW);
+ 	} else {
+ 		hw_dbg(hw, "I2C data was not set to %X\n", data);
+-		return IXGBE_ERR_I2C;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -2547,7 +2546,7 @@ static s32 ixgbe_set_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl, bool data)
+ 	*i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL(hw));
+ 	if (data != ixgbe_get_i2c_data(hw, i2cctl)) {
+ 		hw_dbg(hw, "Error - I2C data was not set to %X.\n", data);
+-		return IXGBE_ERR_I2C;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -2617,22 +2616,24 @@ static void ixgbe_i2c_bus_clear(struct ixgbe_hw *hw)
+  *  @hw: pointer to hardware structure
+  *
+  *  Checks if the LASI temp alarm status was triggered due to overtemp
++ *
++ *  Return true when an overtemp event detected, otherwise false.
+  **/
+-s32 ixgbe_tn_check_overtemp(struct ixgbe_hw *hw)
++bool ixgbe_tn_check_overtemp(struct ixgbe_hw *hw)
+ {
+ 	u16 phy_data = 0;
++	u32 status;
+ 
+ 	if (hw->device_id != IXGBE_DEV_ID_82599_T3_LOM)
+-		return 0;
++		return false;
+ 
+ 	/* Check that the LASI temp alarm status was triggered */
+-	hw->phy.ops.read_reg(hw, IXGBE_TN_LASI_STATUS_REG,
+-			     MDIO_MMD_PMAPMD, &phy_data);
+-
+-	if (!(phy_data & IXGBE_TN_LASI_STATUS_TEMP_ALARM))
+-		return 0;
++	status = hw->phy.ops.read_reg(hw, IXGBE_TN_LASI_STATUS_REG,
++				      MDIO_MMD_PMAPMD, &phy_data);
++	if (status)
++		return false;
+ 
+-	return IXGBE_ERR_OVERTEMP;
++	return !!(phy_data & IXGBE_TN_LASI_STATUS_TEMP_ALARM);
+ }
+ 
+ /** ixgbe_set_copper_phy_power - Control power for copper phy
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+index 6544c4539c0de..ef72729d7c933 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+@@ -155,7 +155,7 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw);
+ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
+ 					u16 *list_offset,
+ 					u16 *data_offset);
+-s32 ixgbe_tn_check_overtemp(struct ixgbe_hw *hw);
++bool ixgbe_tn_check_overtemp(struct ixgbe_hw *hw);
+ s32 ixgbe_read_i2c_byte_generic(struct ixgbe_hw *hw, u8 byte_offset,
+ 				u8 dev_addr, u8 *data);
+ s32 ixgbe_read_i2c_byte_generic_unlocked(struct ixgbe_hw *hw, u8 byte_offset,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index 5eba086690efa..0cd8bec6ae5ee 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -1279,7 +1279,7 @@ static int ixgbe_rcv_msg_from_vf(struct ixgbe_adapter *adapter, u32 vf)
+ 		break;
+ 	default:
+ 		e_err(drv, "Unhandled Msg %8.8x\n", msgbuf[0]);
+-		retval = IXGBE_ERR_MBX;
++		retval = -EIO;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+index 2be1c4c724354..e84dbf6a3cb81 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+@@ -1247,7 +1247,7 @@ struct ixgbe_nvm_version {
+ #define IXGBE_PSRTYPE_RQPL_SHIFT    29
+ 
+ /* CTRL Bit Masks */
+-#define IXGBE_CTRL_GIO_DIS      0x00000004 /* Global IO Master Disable bit */
++#define IXGBE_CTRL_GIO_DIS      0x00000004 /* Global IO Primary Disable bit */
+ #define IXGBE_CTRL_LNK_RST      0x00000008 /* Link Reset. Resets everything. */
+ #define IXGBE_CTRL_RST          0x04000000 /* Reset (SW) */
+ #define IXGBE_CTRL_RST_MASK     (IXGBE_CTRL_LNK_RST | IXGBE_CTRL_RST)
+@@ -1810,7 +1810,7 @@ enum {
+ /* STATUS Bit Masks */
+ #define IXGBE_STATUS_LAN_ID         0x0000000C /* LAN ID */
+ #define IXGBE_STATUS_LAN_ID_SHIFT   2          /* LAN ID Shift*/
+-#define IXGBE_STATUS_GIO            0x00080000 /* GIO Master Enable Status */
++#define IXGBE_STATUS_GIO            0x00080000 /* GIO Primary Enable Status */
+ 
+ #define IXGBE_STATUS_LAN_ID_0   0x00000000 /* LAN ID 0 */
+ #define IXGBE_STATUS_LAN_ID_1   0x00000004 /* LAN ID 1 */
+@@ -2192,8 +2192,8 @@ enum {
+ #define IXGBE_PCIDEVCTRL2_4_8s		0xd
+ #define IXGBE_PCIDEVCTRL2_17_34s	0xe
+ 
+-/* Number of 100 microseconds we wait for PCI Express master disable */
+-#define IXGBE_PCI_MASTER_DISABLE_TIMEOUT	800
++/* Number of 100 microseconds we wait for PCI Express primary disable */
++#define IXGBE_PCI_PRIMARY_DISABLE_TIMEOUT	800
+ 
+ /* RAH */
+ #define IXGBE_RAH_VIND_MASK     0x003C0000
+@@ -3505,10 +3505,10 @@ struct ixgbe_phy_operations {
+ 	s32 (*read_i2c_sff8472)(struct ixgbe_hw *, u8 , u8 *);
+ 	s32 (*read_i2c_eeprom)(struct ixgbe_hw *, u8 , u8 *);
+ 	s32 (*write_i2c_eeprom)(struct ixgbe_hw *, u8, u8);
+-	s32 (*check_overtemp)(struct ixgbe_hw *);
++	bool (*check_overtemp)(struct ixgbe_hw *);
+ 	s32 (*set_phy_power)(struct ixgbe_hw *, bool on);
+ 	s32 (*enter_lplu)(struct ixgbe_hw *);
+-	s32 (*handle_lasi)(struct ixgbe_hw *hw);
++	s32 (*handle_lasi)(struct ixgbe_hw *hw, bool *);
+ 	s32 (*read_i2c_byte_unlocked)(struct ixgbe_hw *, u8 offset, u8 addr,
+ 				      u8 *value);
+ 	s32 (*write_i2c_byte_unlocked)(struct ixgbe_hw *, u8 offset, u8 addr,
+@@ -3661,45 +3661,6 @@ struct ixgbe_info {
+ 	const u32			*mvals;
+ };
+ 
+-
+-/* Error Codes */
+-#define IXGBE_ERR_EEPROM                        -1
+-#define IXGBE_ERR_EEPROM_CHECKSUM               -2
+-#define IXGBE_ERR_PHY                           -3
+-#define IXGBE_ERR_CONFIG                        -4
+-#define IXGBE_ERR_PARAM                         -5
+-#define IXGBE_ERR_MAC_TYPE                      -6
+-#define IXGBE_ERR_UNKNOWN_PHY                   -7
+-#define IXGBE_ERR_LINK_SETUP                    -8
+-#define IXGBE_ERR_ADAPTER_STOPPED               -9
+-#define IXGBE_ERR_INVALID_MAC_ADDR              -10
+-#define IXGBE_ERR_DEVICE_NOT_SUPPORTED          -11
+-#define IXGBE_ERR_MASTER_REQUESTS_PENDING       -12
+-#define IXGBE_ERR_INVALID_LINK_SETTINGS         -13
+-#define IXGBE_ERR_AUTONEG_NOT_COMPLETE          -14
+-#define IXGBE_ERR_RESET_FAILED                  -15
+-#define IXGBE_ERR_SWFW_SYNC                     -16
+-#define IXGBE_ERR_PHY_ADDR_INVALID              -17
+-#define IXGBE_ERR_I2C                           -18
+-#define IXGBE_ERR_SFP_NOT_SUPPORTED             -19
+-#define IXGBE_ERR_SFP_NOT_PRESENT               -20
+-#define IXGBE_ERR_SFP_NO_INIT_SEQ_PRESENT       -21
+-#define IXGBE_ERR_NO_SAN_ADDR_PTR               -22
+-#define IXGBE_ERR_FDIR_REINIT_FAILED            -23
+-#define IXGBE_ERR_EEPROM_VERSION                -24
+-#define IXGBE_ERR_NO_SPACE                      -25
+-#define IXGBE_ERR_OVERTEMP                      -26
+-#define IXGBE_ERR_FC_NOT_NEGOTIATED             -27
+-#define IXGBE_ERR_FC_NOT_SUPPORTED              -28
+-#define IXGBE_ERR_SFP_SETUP_NOT_COMPLETE        -30
+-#define IXGBE_ERR_PBA_SECTION                   -31
+-#define IXGBE_ERR_INVALID_ARGUMENT              -32
+-#define IXGBE_ERR_HOST_INTERFACE_COMMAND        -33
+-#define IXGBE_ERR_FDIR_CMD_INCOMPLETE		-38
+-#define IXGBE_ERR_FW_RESP_INVALID		-39
+-#define IXGBE_ERR_TOKEN_RETRY			-40
+-#define IXGBE_NOT_IMPLEMENTED                   0x7FFFFFFF
+-
+ #define IXGBE_FUSES0_GROUP(_i)		(0x11158 + ((_i) * 4))
+ #define IXGBE_FUSES0_300MHZ		BIT(5)
+ #define IXGBE_FUSES0_REV_MASK		(3u << 6)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
+index 4b93ba149ec5c..fb4ced963c883 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
+@@ -84,7 +84,7 @@ s32 ixgbe_reset_hw_X540(struct ixgbe_hw *hw)
+ 	status = hw->mac.ops.acquire_swfw_sync(hw, swfw_mask);
+ 	if (status) {
+ 		hw_dbg(hw, "semaphore failed with %d", status);
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 	}
+ 
+ 	ctrl = IXGBE_CTRL_RST;
+@@ -103,7 +103,7 @@ s32 ixgbe_reset_hw_X540(struct ixgbe_hw *hw)
+ 	}
+ 
+ 	if (ctrl & IXGBE_CTRL_RST_MASK) {
+-		status = IXGBE_ERR_RESET_FAILED;
++		status = -EIO;
+ 		hw_dbg(hw, "Reset polling failed to complete.\n");
+ 	}
+ 	msleep(100);
+@@ -220,7 +220,7 @@ static s32 ixgbe_read_eerd_X540(struct ixgbe_hw *hw, u16 offset, u16 *data)
+ 	s32 status;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = ixgbe_read_eerd_generic(hw, offset, data);
+ 
+@@ -243,7 +243,7 @@ static s32 ixgbe_read_eerd_buffer_X540(struct ixgbe_hw *hw,
+ 	s32 status;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = ixgbe_read_eerd_buffer_generic(hw, offset, words, data);
+ 
+@@ -264,7 +264,7 @@ static s32 ixgbe_write_eewr_X540(struct ixgbe_hw *hw, u16 offset, u16 data)
+ 	s32 status;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = ixgbe_write_eewr_generic(hw, offset, data);
+ 
+@@ -287,7 +287,7 @@ static s32 ixgbe_write_eewr_buffer_X540(struct ixgbe_hw *hw,
+ 	s32 status;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = ixgbe_write_eewr_buffer_generic(hw, offset, words, data);
+ 
+@@ -324,7 +324,7 @@ static s32 ixgbe_calc_eeprom_checksum_X540(struct ixgbe_hw *hw)
+ 	for (i = 0; i < checksum_last_word; i++) {
+ 		if (ixgbe_read_eerd_generic(hw, i, &word)) {
+ 			hw_dbg(hw, "EEPROM read failed\n");
+-			return IXGBE_ERR_EEPROM;
++			return -EIO;
+ 		}
+ 		checksum += word;
+ 	}
+@@ -349,7 +349,7 @@ static s32 ixgbe_calc_eeprom_checksum_X540(struct ixgbe_hw *hw)
+ 
+ 		if (ixgbe_read_eerd_generic(hw, pointer, &length)) {
+ 			hw_dbg(hw, "EEPROM read failed\n");
+-			return IXGBE_ERR_EEPROM;
++			return -EIO;
+ 		}
+ 
+ 		/* Skip pointer section if length is invalid. */
+@@ -360,7 +360,7 @@ static s32 ixgbe_calc_eeprom_checksum_X540(struct ixgbe_hw *hw)
+ 		for (j = pointer + 1; j <= pointer + length; j++) {
+ 			if (ixgbe_read_eerd_generic(hw, j, &word)) {
+ 				hw_dbg(hw, "EEPROM read failed\n");
+-				return IXGBE_ERR_EEPROM;
++				return -EIO;
+ 			}
+ 			checksum += word;
+ 		}
+@@ -397,7 +397,7 @@ static s32 ixgbe_validate_eeprom_checksum_X540(struct ixgbe_hw *hw,
+ 	}
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = hw->eeprom.ops.calc_checksum(hw);
+ 	if (status < 0)
+@@ -418,7 +418,7 @@ static s32 ixgbe_validate_eeprom_checksum_X540(struct ixgbe_hw *hw,
+ 	 */
+ 	if (read_checksum != checksum) {
+ 		hw_dbg(hw, "Invalid EEPROM checksum");
+-		status = IXGBE_ERR_EEPROM_CHECKSUM;
++		status = -EIO;
+ 	}
+ 
+ 	/* If the user cares, return the calculated checksum */
+@@ -455,7 +455,7 @@ static s32 ixgbe_update_eeprom_checksum_X540(struct ixgbe_hw *hw)
+ 	}
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM))
+-		return  IXGBE_ERR_SWFW_SYNC;
++		return  -EBUSY;
+ 
+ 	status = hw->eeprom.ops.calc_checksum(hw);
+ 	if (status < 0)
+@@ -490,7 +490,7 @@ static s32 ixgbe_update_flash_X540(struct ixgbe_hw *hw)
+ 	s32 status;
+ 
+ 	status = ixgbe_poll_flash_update_done_X540(hw);
+-	if (status == IXGBE_ERR_EEPROM) {
++	if (status == -EIO) {
+ 		hw_dbg(hw, "Flash update time out\n");
+ 		return status;
+ 	}
+@@ -540,7 +540,7 @@ static s32 ixgbe_poll_flash_update_done_X540(struct ixgbe_hw *hw)
+ 			return 0;
+ 		udelay(5);
+ 	}
+-	return IXGBE_ERR_EEPROM;
++	return -EIO;
+ }
+ 
+ /**
+@@ -575,7 +575,7 @@ s32 ixgbe_acquire_swfw_sync_X540(struct ixgbe_hw *hw, u32 mask)
+ 		 * SW_FW_SYNC bits (not just NVM)
+ 		 */
+ 		if (ixgbe_get_swfw_sync_semaphore(hw))
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 
+ 		swfw_sync = IXGBE_READ_REG(hw, IXGBE_SWFW_SYNC(hw));
+ 		if (!(swfw_sync & (fwmask | swmask | hwmask))) {
+@@ -599,7 +599,7 @@ s32 ixgbe_acquire_swfw_sync_X540(struct ixgbe_hw *hw, u32 mask)
+ 	 * bits in the SW_FW_SYNC register.
+ 	 */
+ 	if (ixgbe_get_swfw_sync_semaphore(hw))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 	swfw_sync = IXGBE_READ_REG(hw, IXGBE_SWFW_SYNC(hw));
+ 	if (swfw_sync & (fwmask | hwmask)) {
+ 		swfw_sync |= swmask;
+@@ -622,11 +622,11 @@ s32 ixgbe_acquire_swfw_sync_X540(struct ixgbe_hw *hw, u32 mask)
+ 			rmask |= IXGBE_GSSR_I2C_MASK;
+ 		ixgbe_release_swfw_sync_X540(hw, rmask);
+ 		ixgbe_release_swfw_sync_semaphore(hw);
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 	}
+ 	ixgbe_release_swfw_sync_semaphore(hw);
+ 
+-	return IXGBE_ERR_SWFW_SYNC;
++	return -EBUSY;
+ }
+ 
+ /**
+@@ -680,7 +680,7 @@ static s32 ixgbe_get_swfw_sync_semaphore(struct ixgbe_hw *hw)
+ 	if (i == timeout) {
+ 		hw_dbg(hw,
+ 		       "Software semaphore SMBI between device drivers not granted.\n");
+-		return IXGBE_ERR_EEPROM;
++		return -EIO;
+ 	}
+ 
+ 	/* Now get the semaphore between SW/FW through the REGSMP bit */
+@@ -697,7 +697,7 @@ static s32 ixgbe_get_swfw_sync_semaphore(struct ixgbe_hw *hw)
+ 	 */
+ 	hw_dbg(hw, "REGSMP Software NVM semaphore not granted\n");
+ 	ixgbe_release_swfw_sync_semaphore(hw);
+-	return IXGBE_ERR_EEPROM;
++	return -EIO;
+ }
+ 
+ /**
+@@ -768,7 +768,7 @@ s32 ixgbe_blink_led_start_X540(struct ixgbe_hw *hw, u32 index)
+ 	bool link_up;
+ 
+ 	if (index > 3)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* Link should be up in order for the blink bit in the LED control
+ 	 * register to work. Force link and speed in the MAC if link is down.
+@@ -804,7 +804,7 @@ s32 ixgbe_blink_led_stop_X540(struct ixgbe_hw *hw, u32 index)
+ 	u32 ledctl_reg;
+ 
+ 	if (index > 3)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* Restore the LED to its default value. */
+ 	ledctl_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+index 37f2bc6de4b65..9347dc786b5b7 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+@@ -206,13 +206,13 @@ static s32 ixgbe_reset_cs4227(struct ixgbe_hw *hw)
+ 	}
+ 	if (retry == IXGBE_CS4227_RETRIES) {
+ 		hw_err(hw, "CS4227 reset did not complete\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	status = ixgbe_read_cs4227(hw, IXGBE_CS4227_EEPROM_STATUS, &value);
+ 	if (status || !(value & IXGBE_CS4227_EEPROM_LOAD_OK)) {
+ 		hw_err(hw, "CS4227 EEPROM did not load successfully\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -350,13 +350,13 @@ static s32 ixgbe_identify_phy_x550em(struct ixgbe_hw *hw)
+ static s32 ixgbe_read_phy_reg_x550em(struct ixgbe_hw *hw, u32 reg_addr,
+ 				     u32 device_type, u16 *phy_data)
+ {
+-	return IXGBE_NOT_IMPLEMENTED;
++	return -EOPNOTSUPP;
+ }
+ 
+ static s32 ixgbe_write_phy_reg_x550em(struct ixgbe_hw *hw, u32 reg_addr,
+ 				      u32 device_type, u16 phy_data)
+ {
+-	return IXGBE_NOT_IMPLEMENTED;
++	return -EOPNOTSUPP;
+ }
+ 
+ /**
+@@ -463,7 +463,7 @@ s32 ixgbe_fw_phy_activity(struct ixgbe_hw *hw, u16 activity,
+ 		--retries;
+ 	} while (retries > 0);
+ 
+-	return IXGBE_ERR_HOST_INTERFACE_COMMAND;
++	return -EIO;
+ }
+ 
+ static const struct {
+@@ -511,7 +511,7 @@ static s32 ixgbe_get_phy_id_fw(struct ixgbe_hw *hw)
+ 	hw->phy.id |= phy_id_lo & IXGBE_PHY_REVISION_MASK;
+ 	hw->phy.revision = phy_id_lo & ~IXGBE_PHY_REVISION_MASK;
+ 	if (!hw->phy.id || hw->phy.id == IXGBE_PHY_REVISION_MASK)
+-		return IXGBE_ERR_PHY_ADDR_INVALID;
++		return -EFAULT;
+ 
+ 	hw->phy.autoneg_advertised = hw->phy.speeds_supported;
+ 	hw->phy.eee_speeds_supported = IXGBE_LINK_SPEED_100_FULL |
+@@ -568,7 +568,7 @@ static s32 ixgbe_setup_fw_link(struct ixgbe_hw *hw)
+ 
+ 	if (hw->fc.strict_ieee && hw->fc.requested_mode == ixgbe_fc_rx_pause) {
+ 		hw_err(hw, "rx_pause not valid in strict IEEE mode\n");
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 	}
+ 
+ 	switch (hw->fc.requested_mode) {
+@@ -600,8 +600,10 @@ static s32 ixgbe_setup_fw_link(struct ixgbe_hw *hw)
+ 	rc = ixgbe_fw_phy_activity(hw, FW_PHY_ACT_SETUP_LINK, &setup);
+ 	if (rc)
+ 		return rc;
++
+ 	if (setup[0] == FW_PHY_ACT_SETUP_LINK_RSP_DOWN)
+-		return IXGBE_ERR_OVERTEMP;
++		return -EIO;
++
+ 	return 0;
+ }
+ 
+@@ -675,7 +677,7 @@ static s32 ixgbe_iosf_wait(struct ixgbe_hw *hw, u32 *ctrl)
+ 		*ctrl = command;
+ 	if (i == IXGBE_MDIO_COMMAND_TIMEOUT) {
+ 		hw_dbg(hw, "IOSF wait timed out\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -715,7 +717,8 @@ static s32 ixgbe_read_iosf_sb_reg_x550(struct ixgbe_hw *hw, u32 reg_addr,
+ 		error = (command & IXGBE_SB_IOSF_CTRL_CMPL_ERR_MASK) >>
+ 			 IXGBE_SB_IOSF_CTRL_CMPL_ERR_SHIFT;
+ 		hw_dbg(hw, "Failed to read, error %x\n", error);
+-		return IXGBE_ERR_PHY;
++		ret = -EIO;
++		goto out;
+ 	}
+ 
+ 	if (!ret)
+@@ -750,9 +753,9 @@ static s32 ixgbe_get_phy_token(struct ixgbe_hw *hw)
+ 	if (token_cmd.hdr.cmd_or_resp.ret_status == FW_PHY_TOKEN_OK)
+ 		return 0;
+ 	if (token_cmd.hdr.cmd_or_resp.ret_status != FW_PHY_TOKEN_RETRY)
+-		return IXGBE_ERR_FW_RESP_INVALID;
++		return -EIO;
+ 
+-	return IXGBE_ERR_TOKEN_RETRY;
++	return -EAGAIN;
+ }
+ 
+ /**
+@@ -778,7 +781,7 @@ static s32 ixgbe_put_phy_token(struct ixgbe_hw *hw)
+ 		return status;
+ 	if (token_cmd.hdr.cmd_or_resp.ret_status == FW_PHY_TOKEN_OK)
+ 		return 0;
+-	return IXGBE_ERR_FW_RESP_INVALID;
++	return -EIO;
+ }
+ 
+ /**
+@@ -942,7 +945,7 @@ static s32 ixgbe_checksum_ptr_x550(struct ixgbe_hw *hw, u16 ptr,
+ 		local_buffer = buf;
+ 	} else {
+ 		if (buffer_size < ptr)
+-			return  IXGBE_ERR_PARAM;
++			return  -EINVAL;
+ 		local_buffer = &buffer[ptr];
+ 	}
+ 
+@@ -960,7 +963,7 @@ static s32 ixgbe_checksum_ptr_x550(struct ixgbe_hw *hw, u16 ptr,
+ 	}
+ 
+ 	if (buffer && ((u32)start + (u32)length > buffer_size))
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	for (i = start; length; i++, length--) {
+ 		if (i == bufsz && !buffer) {
+@@ -1012,7 +1015,7 @@ static s32 ixgbe_calc_checksum_X550(struct ixgbe_hw *hw, u16 *buffer,
+ 		local_buffer = eeprom_ptrs;
+ 	} else {
+ 		if (buffer_size < IXGBE_EEPROM_LAST_WORD)
+-			return IXGBE_ERR_PARAM;
++			return -EINVAL;
+ 		local_buffer = buffer;
+ 	}
+ 
+@@ -1148,7 +1151,7 @@ static s32 ixgbe_validate_eeprom_checksum_X550(struct ixgbe_hw *hw,
+ 	 * calculated checksum
+ 	 */
+ 	if (read_checksum != checksum) {
+-		status = IXGBE_ERR_EEPROM_CHECKSUM;
++		status = -EIO;
+ 		hw_dbg(hw, "Invalid EEPROM checksum");
+ 	}
+ 
+@@ -1203,7 +1206,7 @@ static s32 ixgbe_write_ee_hostif_X550(struct ixgbe_hw *hw, u16 offset, u16 data)
+ 		hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM);
+ 	} else {
+ 		hw_dbg(hw, "write ee hostif failed to get semaphore");
+-		status = IXGBE_ERR_SWFW_SYNC;
++		status = -EBUSY;
+ 	}
+ 
+ 	return status;
+@@ -1415,7 +1418,7 @@ static s32 ixgbe_write_iosf_sb_reg_x550(struct ixgbe_hw *hw, u32 reg_addr,
+ 		error = (command & IXGBE_SB_IOSF_CTRL_CMPL_ERR_MASK) >>
+ 			 IXGBE_SB_IOSF_CTRL_CMPL_ERR_SHIFT;
+ 		hw_dbg(hw, "Failed to write, error %x\n", error);
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ out:
+@@ -1558,7 +1561,7 @@ static s32 ixgbe_setup_ixfi_x550em(struct ixgbe_hw *hw, ixgbe_link_speed *speed)
+ 
+ 	/* iXFI is only supported with X552 */
+ 	if (mac->type != ixgbe_mac_X550EM_x)
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EIO;
+ 
+ 	/* Disable AN and force speed to 10G Serial. */
+ 	status = ixgbe_read_iosf_sb_reg_x550(hw,
+@@ -1580,7 +1583,7 @@ static s32 ixgbe_setup_ixfi_x550em(struct ixgbe_hw *hw, ixgbe_link_speed *speed)
+ 		break;
+ 	default:
+ 		/* Other link speeds are not supported by internal KR PHY. */
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EINVAL;
+ 	}
+ 
+ 	status = ixgbe_write_iosf_sb_reg_x550(hw,
+@@ -1611,7 +1614,7 @@ static s32 ixgbe_supported_sfp_modules_X550em(struct ixgbe_hw *hw, bool *linear)
+ {
+ 	switch (hw->phy.sfp_type) {
+ 	case ixgbe_sfp_type_not_present:
+-		return IXGBE_ERR_SFP_NOT_PRESENT;
++		return -ENOENT;
+ 	case ixgbe_sfp_type_da_cu_core0:
+ 	case ixgbe_sfp_type_da_cu_core1:
+ 		*linear = true;
+@@ -1630,7 +1633,7 @@ static s32 ixgbe_supported_sfp_modules_X550em(struct ixgbe_hw *hw, bool *linear)
+ 	case ixgbe_sfp_type_1g_cu_core0:
+ 	case ixgbe_sfp_type_1g_cu_core1:
+ 	default:
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	return 0;
+@@ -1660,7 +1663,7 @@ ixgbe_setup_mac_link_sfp_x550em(struct ixgbe_hw *hw,
+ 	 * there is no reason to configure CS4227 and SFP not present error is
+ 	 * not accepted in the setup MAC link flow.
+ 	 */
+-	if (status == IXGBE_ERR_SFP_NOT_PRESENT)
++	if (status == -ENOENT)
+ 		return 0;
+ 
+ 	if (status)
+@@ -1718,7 +1721,7 @@ static s32 ixgbe_setup_sfi_x550a(struct ixgbe_hw *hw, ixgbe_link_speed *speed)
+ 		break;
+ 	default:
+ 		/* Other link speeds are not supported by internal PHY. */
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EINVAL;
+ 	}
+ 
+ 	status = mac->ops.write_iosf_sb_reg(hw,
+@@ -1753,7 +1756,7 @@ ixgbe_setup_mac_link_sfp_n(struct ixgbe_hw *hw, ixgbe_link_speed speed,
+ 	/* If no SFP module present, then return success. Return success since
+ 	 * SFP not present error is not excepted in the setup MAC link flow.
+ 	 */
+-	if (ret_val == IXGBE_ERR_SFP_NOT_PRESENT)
++	if (ret_val == -ENOENT)
+ 		return 0;
+ 
+ 	if (ret_val)
+@@ -1803,7 +1806,7 @@ ixgbe_setup_mac_link_sfp_x550a(struct ixgbe_hw *hw, ixgbe_link_speed speed,
+ 	/* If no SFP module present, then return success. Return success since
+ 	 * SFP not present error is not excepted in the setup MAC link flow.
+ 	 */
+-	if (ret_val == IXGBE_ERR_SFP_NOT_PRESENT)
++	if (ret_val == -ENOENT)
+ 		return 0;
+ 
+ 	if (ret_val)
+@@ -1813,7 +1816,7 @@ ixgbe_setup_mac_link_sfp_x550a(struct ixgbe_hw *hw, ixgbe_link_speed speed,
+ 	ixgbe_setup_kr_speed_x550em(hw, speed);
+ 
+ 	if (hw->phy.mdio.prtad == MDIO_PRTAD_NONE)
+-		return IXGBE_ERR_PHY_ADDR_INVALID;
++		return -EFAULT;
+ 
+ 	/* Get external PHY SKU id */
+ 	ret_val = hw->phy.ops.read_reg(hw, IXGBE_CS4227_EFUSE_PDF_SKU,
+@@ -1912,7 +1915,7 @@ static s32 ixgbe_check_link_t_X550em(struct ixgbe_hw *hw,
+ 	u16 i, autoneg_status;
+ 
+ 	if (hw->mac.ops.get_media_type(hw) != ixgbe_media_type_copper)
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 
+ 	status = ixgbe_check_mac_link_generic(hw, speed, link_up,
+ 					      link_up_wait_to_complete);
+@@ -2095,9 +2098,9 @@ static s32 ixgbe_setup_sgmii_fw(struct ixgbe_hw *hw, ixgbe_link_speed speed,
+  */
+ static void ixgbe_fc_autoneg_sgmii_x550em_a(struct ixgbe_hw *hw)
+ {
+-	s32 status = IXGBE_ERR_FC_NOT_NEGOTIATED;
+ 	u32 info[FW_PHY_ACT_DATA_COUNT] = { 0 };
+ 	ixgbe_link_speed speed;
++	s32 status = -EIO;
+ 	bool link_up;
+ 
+ 	/* AN should have completed when the cable was plugged in.
+@@ -2115,7 +2118,7 @@ static void ixgbe_fc_autoneg_sgmii_x550em_a(struct ixgbe_hw *hw)
+ 	/* Check if auto-negotiation has completed */
+ 	status = ixgbe_fw_phy_activity(hw, FW_PHY_ACT_GET_LINK_INFO, &info);
+ 	if (status || !(info[0] & FW_PHY_ACT_GET_LINK_INFO_AN_COMPLETE)) {
+-		status = IXGBE_ERR_FC_NOT_NEGOTIATED;
++		status = -EIO;
+ 		goto out;
+ 	}
+ 
+@@ -2319,18 +2322,18 @@ static s32 ixgbe_get_link_capabilities_X550em(struct ixgbe_hw *hw,
+  * @hw: pointer to hardware structure
+  * @lsc: pointer to boolean flag which indicates whether external Base T
+  *	 PHY interrupt is lsc
++ * @is_overtemp: indicate whether an overtemp event encountered
+  *
+  * Determime if external Base T PHY interrupt cause is high temperature
+  * failure alarm or link status change.
+- *
+- * Return IXGBE_ERR_OVERTEMP if interrupt is high temperature
+- * failure alarm, else return PHY access status.
+  **/
+-static s32 ixgbe_get_lasi_ext_t_x550em(struct ixgbe_hw *hw, bool *lsc)
++static s32 ixgbe_get_lasi_ext_t_x550em(struct ixgbe_hw *hw, bool *lsc,
++				       bool *is_overtemp)
+ {
+ 	u32 status;
+ 	u16 reg;
+ 
++	*is_overtemp = false;
+ 	*lsc = false;
+ 
+ 	/* Vendor alarm triggered */
+@@ -2362,7 +2365,8 @@ static s32 ixgbe_get_lasi_ext_t_x550em(struct ixgbe_hw *hw, bool *lsc)
+ 	if (reg & IXGBE_MDIO_GLOBAL_ALM_1_HI_TMP_FAIL) {
+ 		/* power down the PHY in case the PHY FW didn't already */
+ 		ixgbe_set_copper_phy_power(hw, false);
+-		return IXGBE_ERR_OVERTEMP;
++		*is_overtemp = true;
++		return -EIO;
+ 	}
+ 	if (reg & IXGBE_MDIO_GLOBAL_ALM_1_DEV_FAULT) {
+ 		/*  device fault alarm triggered */
+@@ -2376,7 +2380,8 @@ static s32 ixgbe_get_lasi_ext_t_x550em(struct ixgbe_hw *hw, bool *lsc)
+ 		if (reg == IXGBE_MDIO_GLOBAL_FAULT_MSG_HI_TMP) {
+ 			/* power down the PHY in case the PHY FW didn't */
+ 			ixgbe_set_copper_phy_power(hw, false);
+-			return IXGBE_ERR_OVERTEMP;
++			*is_overtemp = true;
++			return -EIO;
+ 		}
+ 	}
+ 
+@@ -2412,12 +2417,12 @@ static s32 ixgbe_get_lasi_ext_t_x550em(struct ixgbe_hw *hw, bool *lsc)
+  **/
+ static s32 ixgbe_enable_lasi_ext_t_x550em(struct ixgbe_hw *hw)
+ {
++	bool lsc, overtemp;
+ 	u32 status;
+ 	u16 reg;
+-	bool lsc;
+ 
+ 	/* Clear interrupt flags */
+-	status = ixgbe_get_lasi_ext_t_x550em(hw, &lsc);
++	status = ixgbe_get_lasi_ext_t_x550em(hw, &lsc, &overtemp);
+ 
+ 	/* Enable link status change alarm */
+ 
+@@ -2496,21 +2501,20 @@ static s32 ixgbe_enable_lasi_ext_t_x550em(struct ixgbe_hw *hw)
+ /**
+  * ixgbe_handle_lasi_ext_t_x550em - Handle external Base T PHY interrupt
+  * @hw: pointer to hardware structure
++ * @is_overtemp: indicate whether an overtemp event encountered
+  *
+  * Handle external Base T PHY interrupt. If high temperature
+  * failure alarm then return error, else if link status change
+  * then setup internal/external PHY link
+- *
+- * Return IXGBE_ERR_OVERTEMP if interrupt is high temperature
+- * failure alarm, else return PHY access status.
+  **/
+-static s32 ixgbe_handle_lasi_ext_t_x550em(struct ixgbe_hw *hw)
++static s32 ixgbe_handle_lasi_ext_t_x550em(struct ixgbe_hw *hw,
++					  bool *is_overtemp)
+ {
+ 	struct ixgbe_phy_info *phy = &hw->phy;
+ 	bool lsc;
+ 	u32 status;
+ 
+-	status = ixgbe_get_lasi_ext_t_x550em(hw, &lsc);
++	status = ixgbe_get_lasi_ext_t_x550em(hw, &lsc, is_overtemp);
+ 	if (status)
+ 		return status;
+ 
+@@ -2642,7 +2646,7 @@ static s32 ixgbe_setup_internal_phy_t_x550em(struct ixgbe_hw *hw)
+ 	u16 speed;
+ 
+ 	if (hw->mac.ops.get_media_type(hw) != ixgbe_media_type_copper)
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 
+ 	if (!(hw->mac.type == ixgbe_mac_X550EM_x &&
+ 	      !(hw->phy.nw_mng_if_sel & IXGBE_NW_MNG_IF_SEL_INT_PHY_MODE))) {
+@@ -2685,7 +2689,7 @@ static s32 ixgbe_setup_internal_phy_t_x550em(struct ixgbe_hw *hw)
+ 		break;
+ 	default:
+ 		/* Internal PHY does not support anything else */
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 	}
+ 
+ 	return ixgbe_setup_ixfi_x550em(hw, &force_speed);
+@@ -2717,7 +2721,7 @@ static s32 ixgbe_led_on_t_x550em(struct ixgbe_hw *hw, u32 led_idx)
+ 	u16 phy_data;
+ 
+ 	if (led_idx >= IXGBE_X557_MAX_LED_INDEX)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* To turn on the LED, set mode to ON. */
+ 	hw->phy.ops.read_reg(hw, IXGBE_X557_LED_PROVISIONING + led_idx,
+@@ -2739,7 +2743,7 @@ static s32 ixgbe_led_off_t_x550em(struct ixgbe_hw *hw, u32 led_idx)
+ 	u16 phy_data;
+ 
+ 	if (led_idx >= IXGBE_X557_MAX_LED_INDEX)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* To turn on the LED, set mode to ON. */
+ 	hw->phy.ops.read_reg(hw, IXGBE_X557_LED_PROVISIONING + led_idx,
+@@ -2763,8 +2767,9 @@ static s32 ixgbe_led_off_t_x550em(struct ixgbe_hw *hw, u32 led_idx)
+  *
+  *  Sends driver version number to firmware through the manageability
+  *  block.  On success return 0
+- *  else returns IXGBE_ERR_SWFW_SYNC when encountering an error acquiring
+- *  semaphore or IXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
++ *  else returns -EBUSY when encountering an error acquiring
++ *  semaphore, -EIO when command fails or -ENIVAL when incorrect
++ *  params passed.
+  **/
+ static s32 ixgbe_set_fw_drv_ver_x550(struct ixgbe_hw *hw, u8 maj, u8 min,
+ 				     u8 build, u8 sub, u16 len,
+@@ -2775,7 +2780,7 @@ static s32 ixgbe_set_fw_drv_ver_x550(struct ixgbe_hw *hw, u8 maj, u8 min,
+ 	int i;
+ 
+ 	if (!len || !driver_ver || (len > sizeof(fw_cmd.driver_string)))
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 
+ 	fw_cmd.hdr.cmd = FW_CEM_CMD_DRIVER_INFO;
+ 	fw_cmd.hdr.buf_len = FW_CEM_CMD_DRIVER_INFO_LEN + len;
+@@ -2800,7 +2805,7 @@ static s32 ixgbe_set_fw_drv_ver_x550(struct ixgbe_hw *hw, u8 maj, u8 min,
+ 
+ 		if (fw_cmd.hdr.cmd_or_resp.ret_status !=
+ 		    FW_CEM_RESP_STATUS_SUCCESS)
+-			return IXGBE_ERR_HOST_INTERFACE_COMMAND;
++			return -EIO;
+ 		return 0;
+ 	}
+ 
+@@ -2857,7 +2862,7 @@ static s32 ixgbe_setup_fc_x550em(struct ixgbe_hw *hw)
+ 	/* Validate the requested mode */
+ 	if (hw->fc.strict_ieee && hw->fc.requested_mode == ixgbe_fc_rx_pause) {
+ 		hw_err(hw, "ixgbe_fc_rx_pause not valid in strict IEEE mode\n");
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 	}
+ 
+ 	/* 10gig parts do not have a word in the EEPROM to determine the
+@@ -2892,7 +2897,7 @@ static s32 ixgbe_setup_fc_x550em(struct ixgbe_hw *hw)
+ 		break;
+ 	default:
+ 		hw_err(hw, "Flow control param set incorrectly\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	switch (hw->device_id) {
+@@ -2936,8 +2941,8 @@ static s32 ixgbe_setup_fc_x550em(struct ixgbe_hw *hw)
+ static void ixgbe_fc_autoneg_backplane_x550em_a(struct ixgbe_hw *hw)
+ {
+ 	u32 link_s1, lp_an_page_low, an_cntl_1;
+-	s32 status = IXGBE_ERR_FC_NOT_NEGOTIATED;
+ 	ixgbe_link_speed speed;
++	s32 status = -EIO;
+ 	bool link_up;
+ 
+ 	/* AN should have completed when the cable was plugged in.
+@@ -2963,7 +2968,7 @@ static void ixgbe_fc_autoneg_backplane_x550em_a(struct ixgbe_hw *hw)
+ 
+ 	if (status || (link_s1 & IXGBE_KRM_LINK_S1_MAC_AN_COMPLETE) == 0) {
+ 		hw_dbg(hw, "Auto-Negotiation did not complete\n");
+-		status = IXGBE_ERR_FC_NOT_NEGOTIATED;
++		status = -EIO;
+ 		goto out;
+ 	}
+ 
+@@ -3137,21 +3142,23 @@ static s32 ixgbe_reset_phy_fw(struct ixgbe_hw *hw)
+ /**
+  * ixgbe_check_overtemp_fw - Check firmware-controlled PHYs for overtemp
+  * @hw: pointer to hardware structure
++ *
++ * Return true when an overtemp event detected, otherwise false.
+  */
+-static s32 ixgbe_check_overtemp_fw(struct ixgbe_hw *hw)
++static bool ixgbe_check_overtemp_fw(struct ixgbe_hw *hw)
+ {
+ 	u32 store[FW_PHY_ACT_DATA_COUNT] = { 0 };
+ 	s32 rc;
+ 
+ 	rc = ixgbe_fw_phy_activity(hw, FW_PHY_ACT_GET_LINK_INFO, &store);
+ 	if (rc)
+-		return rc;
++		return false;
+ 
+ 	if (store[0] & FW_PHY_ACT_GET_LINK_INFO_TEMP) {
+ 		ixgbe_shutdown_fw_phy(hw);
+-		return IXGBE_ERR_OVERTEMP;
++		return true;
+ 	}
+-	return 0;
++	return false;
+ }
+ 
+ /**
+@@ -3201,8 +3208,7 @@ static s32 ixgbe_init_phy_ops_X550em(struct ixgbe_hw *hw)
+ 
+ 	/* Identify the PHY or SFP module */
+ 	ret_val = phy->ops.identify(hw);
+-	if (ret_val == IXGBE_ERR_SFP_NOT_SUPPORTED ||
+-	    ret_val == IXGBE_ERR_PHY_ADDR_INVALID)
++	if (ret_val == -EOPNOTSUPP || ret_val == -EFAULT)
+ 		return ret_val;
+ 
+ 	/* Setup function pointers based on detected hardware */
+@@ -3410,8 +3416,7 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
+ 
+ 	/* PHY ops must be identified and initialized prior to reset */
+ 	status = hw->phy.ops.init(hw);
+-	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED ||
+-	    status == IXGBE_ERR_PHY_ADDR_INVALID)
++	if (status == -EOPNOTSUPP || status == -EFAULT)
+ 		return status;
+ 
+ 	/* start the external PHY */
+@@ -3427,7 +3432,7 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
+ 		hw->phy.sfp_setup_needed = false;
+ 	}
+ 
+-	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED)
++	if (status == -EOPNOTSUPP)
+ 		return status;
+ 
+ 	/* Reset PHY */
+@@ -3451,7 +3456,7 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
+ 	status = hw->mac.ops.acquire_swfw_sync(hw, swfw_mask);
+ 	if (status) {
+ 		hw_dbg(hw, "semaphore failed with %d", status);
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 	}
+ 
+ 	ctrl |= IXGBE_READ_REG(hw, IXGBE_CTRL);
+@@ -3469,7 +3474,7 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
+ 	}
+ 
+ 	if (ctrl & IXGBE_CTRL_RST_MASK) {
+-		status = IXGBE_ERR_RESET_FAILED;
++		status = -EIO;
+ 		hw_dbg(hw, "Reset polling failed to complete.\n");
+ 	}
+ 
+@@ -3565,7 +3570,7 @@ static s32 ixgbe_setup_fc_backplane_x550em_a(struct ixgbe_hw *hw)
+ 	/* Validate the requested mode */
+ 	if (hw->fc.strict_ieee && hw->fc.requested_mode == ixgbe_fc_rx_pause) {
+ 		hw_err(hw, "ixgbe_fc_rx_pause not valid in strict IEEE mode\n");
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 	}
+ 
+ 	if (hw->fc.requested_mode == ixgbe_fc_default)
+@@ -3622,7 +3627,7 @@ static s32 ixgbe_setup_fc_backplane_x550em_a(struct ixgbe_hw *hw)
+ 		break;
+ 	default:
+ 		hw_err(hw, "Flow control param set incorrectly\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	status = hw->mac.ops.write_iosf_sb_reg(hw,
+@@ -3718,7 +3723,7 @@ static s32 ixgbe_acquire_swfw_sync_x550em_a(struct ixgbe_hw *hw, u32 mask)
+ 			return 0;
+ 		if (hmask)
+ 			ixgbe_release_swfw_sync_X540(hw, hmask);
+-		if (status != IXGBE_ERR_TOKEN_RETRY)
++		if (status != -EAGAIN)
+ 			return status;
+ 		msleep(FW_PHY_TOKEN_DELAY);
+ 	}
+@@ -3762,7 +3767,7 @@ static s32 ixgbe_read_phy_reg_x550a(struct ixgbe_hw *hw, u32 reg_addr,
+ 	s32 status;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, mask))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = hw->phy.ops.read_reg_mdi(hw, reg_addr, device_type, phy_data);
+ 
+@@ -3788,7 +3793,7 @@ static s32 ixgbe_write_phy_reg_x550a(struct ixgbe_hw *hw, u32 reg_addr,
+ 	s32 status;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, mask))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = ixgbe_write_phy_reg_mdi(hw, reg_addr, device_type, phy_data);
+ 	hw->mac.ops.release_swfw_sync(hw, mask);
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index e0e6275b3e20c..e4e80c2b1ce40 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -581,12 +581,38 @@ static int mvpp2_bm_pools_init(struct device *dev, struct mvpp2 *priv)
+ 	return err;
+ }
+ 
++/* Cleanup pool before actual initialization in the OS */
++static void mvpp2_bm_pool_cleanup(struct mvpp2 *priv, int pool_id)
++{
++	unsigned int thread = mvpp2_cpu_to_thread(priv, get_cpu());
++	u32 val;
++	int i;
++
++	/* Drain the BM from all possible residues left by firmware */
++	for (i = 0; i < MVPP2_BM_POOL_SIZE_MAX; i++)
++		mvpp2_thread_read(priv, thread, MVPP2_BM_PHY_ALLOC_REG(pool_id));
++
++	put_cpu();
++
++	/* Stop the BM pool */
++	val = mvpp2_read(priv, MVPP2_BM_POOL_CTRL_REG(pool_id));
++	val |= MVPP2_BM_STOP_MASK;
++	mvpp2_write(priv, MVPP2_BM_POOL_CTRL_REG(pool_id), val);
++}
++
+ static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv)
+ {
+ 	enum dma_data_direction dma_dir = DMA_FROM_DEVICE;
+ 	int i, err, poolnum = MVPP2_BM_POOLS_NUM;
+ 	struct mvpp2_port *port;
+ 
++	if (priv->percpu_pools)
++		poolnum = mvpp2_get_nrxqs(priv) * 2;
++
++	/* Clean up the pool state in case it contains stale state */
++	for (i = 0; i < poolnum; i++)
++		mvpp2_bm_pool_cleanup(priv, i);
++
+ 	if (priv->percpu_pools) {
+ 		for (i = 0; i < priv->port_count; i++) {
+ 			port = priv->port_list[i];
+@@ -596,7 +622,6 @@ static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv)
+ 			}
+ 		}
+ 
+-		poolnum = mvpp2_get_nrxqs(priv) * 2;
+ 		for (i = 0; i < poolnum; i++) {
+ 			/* the pool in use */
+ 			int pn = i / (poolnum / 2);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
+index 39475f6565c73..7c436bdcf5b5f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
+@@ -208,11 +208,13 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
+ 
+ 	ft->g = kcalloc(MLX5E_ARFS_NUM_GROUPS,
+ 			sizeof(*ft->g), GFP_KERNEL);
+-	in = kvzalloc(inlen, GFP_KERNEL);
+-	if  (!in || !ft->g) {
+-		kfree(ft->g);
+-		kvfree(in);
++	if (!ft->g)
+ 		return -ENOMEM;
++
++	in = kvzalloc(inlen, GFP_KERNEL);
++	if (!in) {
++		err = -ENOMEM;
++		goto err_free_g;
+ 	}
+ 
+ 	mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
+@@ -232,7 +234,7 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
+ 		break;
+ 	default:
+ 		err = -EINVAL;
+-		goto out;
++		goto err_free_in;
+ 	}
+ 
+ 	switch (type) {
+@@ -254,7 +256,7 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
+ 		break;
+ 	default:
+ 		err = -EINVAL;
+-		goto out;
++		goto err_free_in;
+ 	}
+ 
+ 	MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+@@ -263,7 +265,7 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
+ 	MLX5_SET_CFG(in, end_flow_index, ix - 1);
+ 	ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
+ 	if (IS_ERR(ft->g[ft->num_groups]))
+-		goto err;
++		goto err_clean_group;
+ 	ft->num_groups++;
+ 
+ 	memset(in, 0, inlen);
+@@ -272,18 +274,20 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
+ 	MLX5_SET_CFG(in, end_flow_index, ix - 1);
+ 	ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
+ 	if (IS_ERR(ft->g[ft->num_groups]))
+-		goto err;
++		goto err_clean_group;
+ 	ft->num_groups++;
+ 
+ 	kvfree(in);
+ 	return 0;
+ 
+-err:
++err_clean_group:
+ 	err = PTR_ERR(ft->g[ft->num_groups]);
+ 	ft->g[ft->num_groups] = NULL;
+-out:
++err_free_in:
+ 	kvfree(in);
+-
++err_free_g:
++	kfree(ft->g);
++	ft->g = NULL;
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
+index df1363a34a429..9721fe58eb7b0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
+@@ -667,6 +667,7 @@ int mlx5dr_actions_build_ste_arr(struct mlx5dr_matcher *matcher,
+ 		switch (action_type) {
+ 		case DR_ACTION_TYP_DROP:
+ 			attr.final_icm_addr = nic_dmn->drop_icm_addr;
++			attr.hit_gvmi = nic_dmn->drop_icm_addr >> 48;
+ 			break;
+ 		case DR_ACTION_TYP_FT:
+ 			dest_action = action;
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+index 088ceac07b805..08d74d001aca8 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+@@ -927,7 +927,7 @@ nfp_tunnel_add_shared_mac(struct nfp_app *app, struct net_device *netdev,
+ 	u16 nfp_mac_idx = 0;
+ 
+ 	entry = nfp_tunnel_lookup_offloaded_macs(app, netdev->dev_addr);
+-	if (entry && nfp_tunnel_is_mac_idx_global(entry->index)) {
++	if (entry && (nfp_tunnel_is_mac_idx_global(entry->index) || netif_is_lag_port(netdev))) {
+ 		if (entry->bridge_count ||
+ 		    !nfp_flower_is_supported_bridge(netdev)) {
+ 			nfp_tunnel_offloaded_macs_inc_ref_and_link(entry,
+diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c
+index 252fe06f58aac..4c513e7755f7f 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c
++++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c
+@@ -542,11 +542,13 @@ static int enable_bars(struct nfp6000_pcie *nfp, u16 interface)
+ 	const u32 barcfg_msix_general =
+ 		NFP_PCIE_BAR_PCIE2CPP_MapType(
+ 			NFP_PCIE_BAR_PCIE2CPP_MapType_GENERAL) |
+-		NFP_PCIE_BAR_PCIE2CPP_LengthSelect_32BIT;
++		NFP_PCIE_BAR_PCIE2CPP_LengthSelect(
++			NFP_PCIE_BAR_PCIE2CPP_LengthSelect_32BIT);
+ 	const u32 barcfg_msix_xpb =
+ 		NFP_PCIE_BAR_PCIE2CPP_MapType(
+ 			NFP_PCIE_BAR_PCIE2CPP_MapType_BULK) |
+-		NFP_PCIE_BAR_PCIE2CPP_LengthSelect_32BIT |
++		NFP_PCIE_BAR_PCIE2CPP_LengthSelect(
++			NFP_PCIE_BAR_PCIE2CPP_LengthSelect_32BIT) |
+ 		NFP_PCIE_BAR_PCIE2CPP_Target_BaseAddress(
+ 			NFP_CPP_TARGET_ISLAND_XPB);
+ 	const u32 barcfg_explicit[4] = {
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+index dc5fbc2704f3a..b5f681918f6ee 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+@@ -200,6 +200,7 @@ void ionic_dev_cmd_comp(struct ionic_dev *idev, union ionic_dev_cmd_comp *comp)
+ 
+ void ionic_dev_cmd_go(struct ionic_dev *idev, union ionic_dev_cmd *cmd)
+ {
++	idev->opcode = cmd->cmd.opcode;
+ 	memcpy_toio(&idev->dev_cmd_regs->cmd, cmd, sizeof(*cmd));
+ 	iowrite32(0, &idev->dev_cmd_regs->done);
+ 	iowrite32(1, &idev->dev_cmd_regs->doorbell);
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+index 64d27e8e07725..1ce0d307a9d0f 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+@@ -136,6 +136,7 @@ struct ionic_dev {
+ 	unsigned long last_hb_time;
+ 	u32 last_hb;
+ 	u8 last_fw_status;
++	u8 opcode;
+ 
+ 	u64 __iomem *db_pages;
+ 	dma_addr_t phy_db_pages;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+index 00b6985edea04..694e710244e69 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+@@ -333,7 +333,7 @@ int ionic_dev_cmd_wait(struct ionic *ionic, unsigned long max_seconds)
+ 	 */
+ 	max_wait = jiffies + (max_seconds * HZ);
+ try_again:
+-	opcode = readb(&idev->dev_cmd_regs->cmd.cmd.opcode);
++	opcode = idev->opcode;
+ 	start_time = jiffies;
+ 	do {
+ 		done = ionic_dev_cmd_done(idev);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
+index af43035239297..0bc345aff1cbd 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/common.h
++++ b/drivers/net/ethernet/stmicro/stmmac/common.h
+@@ -189,6 +189,7 @@ struct stmmac_safety_stats {
+ 	unsigned long mac_errors[32];
+ 	unsigned long mtl_errors[32];
+ 	unsigned long dma_errors[32];
++	unsigned long dma_dpp_errors[32];
+ };
+ 
+ /* Number of fields in Safety Stats */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
+index eee58e0513877..4426cb923ac8f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
+@@ -282,6 +282,8 @@
+ #define XGMAC_RXCEIE			BIT(4)
+ #define XGMAC_TXCEIE			BIT(0)
+ #define XGMAC_MTL_ECC_INT_STATUS	0x000010cc
++#define XGMAC_MTL_DPP_CONTROL		0x000010e0
++#define XGMAC_DPP_DISABLE		BIT(0)
+ #define XGMAC_MTL_TXQ_OPMODE(x)		(0x00001100 + (0x80 * (x)))
+ #define XGMAC_TQS			GENMASK(25, 16)
+ #define XGMAC_TQS_SHIFT			16
+@@ -364,6 +366,7 @@
+ #define XGMAC_DCEIE			BIT(1)
+ #define XGMAC_TCEIE			BIT(0)
+ #define XGMAC_DMA_ECC_INT_STATUS	0x0000306c
++#define XGMAC_DMA_DPP_INT_STATUS	0x00003074
+ #define XGMAC_DMA_CH_CONTROL(x)		(0x00003100 + (0x80 * (x)))
+ #define XGMAC_SPH			BIT(24)
+ #define XGMAC_PBLx8			BIT(16)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+index b060667463028..9a5dc5fde24ae 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+@@ -788,6 +788,44 @@ static const struct dwxgmac3_error_desc dwxgmac3_dma_errors[32]= {
+ 	{ false, "UNKNOWN", "Unknown Error" }, /* 31 */
+ };
+ 
++#define DPP_RX_ERR "Read Rx Descriptor Parity checker Error"
++#define DPP_TX_ERR "Read Tx Descriptor Parity checker Error"
++
++static const struct dwxgmac3_error_desc dwxgmac3_dma_dpp_errors[32] = {
++	{ true, "TDPES0", DPP_TX_ERR },
++	{ true, "TDPES1", DPP_TX_ERR },
++	{ true, "TDPES2", DPP_TX_ERR },
++	{ true, "TDPES3", DPP_TX_ERR },
++	{ true, "TDPES4", DPP_TX_ERR },
++	{ true, "TDPES5", DPP_TX_ERR },
++	{ true, "TDPES6", DPP_TX_ERR },
++	{ true, "TDPES7", DPP_TX_ERR },
++	{ true, "TDPES8", DPP_TX_ERR },
++	{ true, "TDPES9", DPP_TX_ERR },
++	{ true, "TDPES10", DPP_TX_ERR },
++	{ true, "TDPES11", DPP_TX_ERR },
++	{ true, "TDPES12", DPP_TX_ERR },
++	{ true, "TDPES13", DPP_TX_ERR },
++	{ true, "TDPES14", DPP_TX_ERR },
++	{ true, "TDPES15", DPP_TX_ERR },
++	{ true, "RDPES0", DPP_RX_ERR },
++	{ true, "RDPES1", DPP_RX_ERR },
++	{ true, "RDPES2", DPP_RX_ERR },
++	{ true, "RDPES3", DPP_RX_ERR },
++	{ true, "RDPES4", DPP_RX_ERR },
++	{ true, "RDPES5", DPP_RX_ERR },
++	{ true, "RDPES6", DPP_RX_ERR },
++	{ true, "RDPES7", DPP_RX_ERR },
++	{ true, "RDPES8", DPP_RX_ERR },
++	{ true, "RDPES9", DPP_RX_ERR },
++	{ true, "RDPES10", DPP_RX_ERR },
++	{ true, "RDPES11", DPP_RX_ERR },
++	{ true, "RDPES12", DPP_RX_ERR },
++	{ true, "RDPES13", DPP_RX_ERR },
++	{ true, "RDPES14", DPP_RX_ERR },
++	{ true, "RDPES15", DPP_RX_ERR },
++};
++
+ static void dwxgmac3_handle_dma_err(struct net_device *ndev,
+ 				    void __iomem *ioaddr, bool correctable,
+ 				    struct stmmac_safety_stats *stats)
+@@ -799,6 +837,13 @@ static void dwxgmac3_handle_dma_err(struct net_device *ndev,
+ 
+ 	dwxgmac3_log_error(ndev, value, correctable, "DMA",
+ 			   dwxgmac3_dma_errors, STAT_OFF(dma_errors), stats);
++
++	value = readl(ioaddr + XGMAC_DMA_DPP_INT_STATUS);
++	writel(value, ioaddr + XGMAC_DMA_DPP_INT_STATUS);
++
++	dwxgmac3_log_error(ndev, value, false, "DMA_DPP",
++			   dwxgmac3_dma_dpp_errors,
++			   STAT_OFF(dma_dpp_errors), stats);
+ }
+ 
+ static int dwxgmac3_safety_feat_config(void __iomem *ioaddr, unsigned int asp)
+@@ -835,6 +880,12 @@ static int dwxgmac3_safety_feat_config(void __iomem *ioaddr, unsigned int asp)
+ 	value |= XGMAC_TMOUTEN; /* FSM Timeout Feature */
+ 	writel(value, ioaddr + XGMAC_MAC_FSM_CONTROL);
+ 
++	/* 5. Enable Data Path Parity Protection */
++	value = readl(ioaddr + XGMAC_MTL_DPP_CONTROL);
++	/* already enabled by default, explicit enable it again */
++	value &= ~XGMAC_DPP_DISABLE;
++	writel(value, ioaddr + XGMAC_MTL_DPP_CONTROL);
++
+ 	return 0;
+ }
+ 
+@@ -868,7 +919,11 @@ static int dwxgmac3_safety_feat_irq_status(struct net_device *ndev,
+ 		ret |= !corr;
+ 	}
+ 
+-	err = dma & (XGMAC_DEUIS | XGMAC_DECIS);
++	/* DMA_DPP_Interrupt_Status is indicated by MCSIS bit in
++	 * DMA_Safety_Interrupt_Status, so we handle DMA Data Path
++	 * Parity Errors here
++	 */
++	err = dma & (XGMAC_DEUIS | XGMAC_DECIS | XGMAC_MCSIS);
+ 	corr = dma & XGMAC_DECIS;
+ 	if (err) {
+ 		dwxgmac3_handle_dma_err(ndev, ioaddr, corr, stats);
+@@ -884,6 +939,7 @@ static const struct dwxgmac3_error {
+ 	{ dwxgmac3_mac_errors },
+ 	{ dwxgmac3_mtl_errors },
+ 	{ dwxgmac3_dma_errors },
++	{ dwxgmac3_dma_dpp_errors },
+ };
+ 
+ static int dwxgmac3_safety_feat_dump(struct stmmac_safety_stats *stats,
+diff --git a/drivers/net/fjes/fjes_hw.c b/drivers/net/fjes/fjes_hw.c
+index 065bb0a40b1d1..a1405a3e294c3 100644
+--- a/drivers/net/fjes/fjes_hw.c
++++ b/drivers/net/fjes/fjes_hw.c
+@@ -220,21 +220,25 @@ static int fjes_hw_setup(struct fjes_hw *hw)
+ 
+ 	mem_size = FJES_DEV_REQ_BUF_SIZE(hw->max_epid);
+ 	hw->hw_info.req_buf = kzalloc(mem_size, GFP_KERNEL);
+-	if (!(hw->hw_info.req_buf))
+-		return -ENOMEM;
++	if (!(hw->hw_info.req_buf)) {
++		result = -ENOMEM;
++		goto free_ep_info;
++	}
+ 
+ 	hw->hw_info.req_buf_size = mem_size;
+ 
+ 	mem_size = FJES_DEV_RES_BUF_SIZE(hw->max_epid);
+ 	hw->hw_info.res_buf = kzalloc(mem_size, GFP_KERNEL);
+-	if (!(hw->hw_info.res_buf))
+-		return -ENOMEM;
++	if (!(hw->hw_info.res_buf)) {
++		result = -ENOMEM;
++		goto free_req_buf;
++	}
+ 
+ 	hw->hw_info.res_buf_size = mem_size;
+ 
+ 	result = fjes_hw_alloc_shared_status_region(hw);
+ 	if (result)
+-		return result;
++		goto free_res_buf;
+ 
+ 	hw->hw_info.buffer_share_bit = 0;
+ 	hw->hw_info.buffer_unshare_reserve_bit = 0;
+@@ -245,11 +249,11 @@ static int fjes_hw_setup(struct fjes_hw *hw)
+ 
+ 			result = fjes_hw_alloc_epbuf(&buf_pair->tx);
+ 			if (result)
+-				return result;
++				goto free_epbuf;
+ 
+ 			result = fjes_hw_alloc_epbuf(&buf_pair->rx);
+ 			if (result)
+-				return result;
++				goto free_epbuf;
+ 
+ 			spin_lock_irqsave(&hw->rx_status_lock, flags);
+ 			fjes_hw_setup_epbuf(&buf_pair->tx, mac,
+@@ -272,6 +276,25 @@ static int fjes_hw_setup(struct fjes_hw *hw)
+ 	fjes_hw_init_command_registers(hw, &param);
+ 
+ 	return 0;
++
++free_epbuf:
++	for (epidx = 0; epidx < hw->max_epid ; epidx++) {
++		if (epidx == hw->my_epid)
++			continue;
++		fjes_hw_free_epbuf(&hw->ep_shm_info[epidx].tx);
++		fjes_hw_free_epbuf(&hw->ep_shm_info[epidx].rx);
++	}
++	fjes_hw_free_shared_status_region(hw);
++free_res_buf:
++	kfree(hw->hw_info.res_buf);
++	hw->hw_info.res_buf = NULL;
++free_req_buf:
++	kfree(hw->hw_info.req_buf);
++	hw->hw_info.req_buf = NULL;
++free_ep_info:
++	kfree(hw->ep_shm_info);
++	hw->ep_shm_info = NULL;
++	return result;
+ }
+ 
+ static void fjes_hw_cleanup(struct fjes_hw *hw)
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index d15da8287df32..3eae31c0f97a6 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -654,7 +654,10 @@ void netvsc_device_remove(struct hv_device *device)
+ 	/* Disable NAPI and disassociate its context from the device. */
+ 	for (i = 0; i < net_device->num_chn; i++) {
+ 		/* See also vmbus_reset_channel_cb(). */
+-		napi_disable(&net_device->chan_table[i].napi);
++		/* only disable enabled NAPI channel */
++		if (i < ndev->real_num_rx_queues)
++			napi_disable(&net_device->chan_table[i].napi);
++
+ 		netif_napi_del(&net_device->chan_table[i].napi);
+ 	}
+ 
+diff --git a/drivers/net/ppp/ppp_async.c b/drivers/net/ppp/ppp_async.c
+index f14a9d190de91..aada8a3c18213 100644
+--- a/drivers/net/ppp/ppp_async.c
++++ b/drivers/net/ppp/ppp_async.c
+@@ -471,6 +471,10 @@ ppp_async_ioctl(struct ppp_channel *chan, unsigned int cmd, unsigned long arg)
+ 	case PPPIOCSMRU:
+ 		if (get_user(val, p))
+ 			break;
++		if (val > U16_MAX) {
++			err = -EINVAL;
++			break;
++		}
+ 		if (val < PPP_MRU)
+ 			val = PPP_MRU;
+ 		ap->mru = val;
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 2fd5d2b7a2092..4029c56dfcf0f 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -2819,10 +2819,11 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
+ {
+ 	vq_callback_t **callbacks;
+ 	struct virtqueue **vqs;
+-	int ret = -ENOMEM;
+-	int i, total_vqs;
+ 	const char **names;
++	int ret = -ENOMEM;
++	int total_vqs;
+ 	bool *ctx;
++	u16 i;
+ 
+ 	/* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by
+ 	 * possible N-1 RX/TX queue pairs used in multiqueue mode, followed by
+@@ -2859,8 +2860,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
+ 	for (i = 0; i < vi->max_queue_pairs; i++) {
+ 		callbacks[rxq2vq(i)] = skb_recv_done;
+ 		callbacks[txq2vq(i)] = skb_xmit_done;
+-		sprintf(vi->rq[i].name, "input.%d", i);
+-		sprintf(vi->sq[i].name, "output.%d", i);
++		sprintf(vi->rq[i].name, "input.%u", i);
++		sprintf(vi->sq[i].name, "output.%u", i);
+ 		names[rxq2vq(i)] = vi->rq[i].name;
+ 		names[txq2vq(i)] = vi->sq[i].name;
+ 		if (ctx)
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+index 622fc7f170402..5037142c5a822 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+@@ -647,9 +647,10 @@ void ath9k_htc_txstatus(struct ath9k_htc_priv *priv, void *wmi_event)
+ 	struct ath9k_htc_tx_event *tx_pend;
+ 	int i;
+ 
+-	for (i = 0; i < txs->cnt; i++) {
+-		WARN_ON(txs->cnt > HTC_MAX_TX_STATUS);
++	if (WARN_ON_ONCE(txs->cnt > HTC_MAX_TX_STATUS))
++		return;
+ 
++	for (i = 0; i < txs->cnt; i++) {
+ 		__txs = &txs->txstatus[i];
+ 
+ 		skb = ath9k_htc_tx_get_packet(priv, __txs);
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index fdf2c6ea41d96..bcaec8a184cd6 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -876,7 +876,7 @@ static int iwl_dbg_tlv_override_trig_node(struct iwl_fw_runtime *fwrt,
+ 		node_trig = (void *)node_tlv->data;
+ 	}
+ 
+-	memcpy(node_trig->data + offset, trig->data, trig_data_len);
++	memcpy((u8 *)node_trig->data + offset, trig->data, trig_data_len);
+ 	node_tlv->length = cpu_to_le32(size);
+ 
+ 	if (policy & IWL_FW_INI_APPLY_POLICY_OVERRIDE_CFG) {
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c b/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
+index b04f76551ca48..be3c153ab3b0b 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
+@@ -101,6 +101,7 @@ void rt2x00lib_disable_radio(struct rt2x00_dev *rt2x00dev)
+ 	rt2x00link_stop_tuner(rt2x00dev);
+ 	rt2x00queue_stop_queues(rt2x00dev);
+ 	rt2x00queue_flush_queues(rt2x00dev, true);
++	rt2x00queue_stop_queue(rt2x00dev->bcn);
+ 
+ 	/*
+ 	 * Disable radio.
+@@ -1272,6 +1273,7 @@ int rt2x00lib_start(struct rt2x00_dev *rt2x00dev)
+ 	rt2x00dev->intf_ap_count = 0;
+ 	rt2x00dev->intf_sta_count = 0;
+ 	rt2x00dev->intf_associated = 0;
++	rt2x00dev->intf_beaconing = 0;
+ 
+ 	/* Enable the radio */
+ 	retval = rt2x00lib_enable_radio(rt2x00dev);
+@@ -1298,6 +1300,7 @@ void rt2x00lib_stop(struct rt2x00_dev *rt2x00dev)
+ 	rt2x00dev->intf_ap_count = 0;
+ 	rt2x00dev->intf_sta_count = 0;
+ 	rt2x00dev->intf_associated = 0;
++	rt2x00dev->intf_beaconing = 0;
+ }
+ 
+ static inline void rt2x00lib_set_if_combinations(struct rt2x00_dev *rt2x00dev)
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c b/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
+index 2f68a31072ae4..795bd3b0ebd8f 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
+@@ -599,6 +599,17 @@ void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw,
+ 	 */
+ 	if (changes & BSS_CHANGED_BEACON_ENABLED) {
+ 		mutex_lock(&intf->beacon_skb_mutex);
++
++		/*
++		 * Clear the 'enable_beacon' flag and clear beacon because
++		 * the beacon queue has been stopped after hardware reset.
++		 */
++		if (test_bit(DEVICE_STATE_RESET, &rt2x00dev->flags) &&
++		    intf->enable_beacon) {
++			intf->enable_beacon = false;
++			rt2x00queue_clear_beacon(rt2x00dev, vif);
++		}
++
+ 		if (!bss_conf->enable_beacon && intf->enable_beacon) {
+ 			rt2x00dev->intf_beaconing--;
+ 			intf->enable_beacon = false;
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 004778faf3d07..3051fb358fdd5 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -6973,6 +6973,18 @@ static const struct usb_device_id dev_table[] = {
+ 	.driver_info = (unsigned long)&rtl8192eu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x818c, 0xff, 0xff, 0xff),
+ 	.driver_info = (unsigned long)&rtl8192eu_fops},
++/* D-Link DWA-131 rev C1 */
++{USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x3312, 0xff, 0xff, 0xff),
++	.driver_info = (unsigned long)&rtl8192eu_fops},
++/* TP-Link TL-WN8200ND V2 */
++{USB_DEVICE_AND_INTERFACE_INFO(0x2357, 0x0126, 0xff, 0xff, 0xff),
++	.driver_info = (unsigned long)&rtl8192eu_fops},
++/* Mercusys MW300UM */
++{USB_DEVICE_AND_INTERFACE_INFO(0x2c4e, 0x0100, 0xff, 0xff, 0xff),
++	.driver_info = (unsigned long)&rtl8192eu_fops},
++/* Mercusys MW300UH */
++{USB_DEVICE_AND_INTERFACE_INFO(0x2c4e, 0x0104, 0xff, 0xff, 0xff),
++	.driver_info = (unsigned long)&rtl8192eu_fops},
+ #endif
+ { }
+ };
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c
+index fa0eed434d4f6..d26dda8e46fdb 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c
+@@ -49,7 +49,7 @@ u32 rtl8723e_phy_query_rf_reg(struct ieee80211_hw *hw,
+ 							    rfpath, regaddr);
+ 	}
+ 
+-	bitshift = rtl8723_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+@@ -80,7 +80,7 @@ void rtl8723e_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 			original_value = rtl8723_phy_rf_serial_read(hw,
+ 								    rfpath,
+ 								    regaddr);
+-			bitshift = rtl8723_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+@@ -89,7 +89,7 @@ void rtl8723e_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 		rtl8723_phy_rf_serial_write(hw, rfpath, regaddr, data);
+ 	} else {
+ 		if (bitmask != RFREG_OFFSET_MASK) {
+-			bitshift = rtl8723_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/phy.c
+index f09f55b0468a4..35dfea54ae9c6 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/phy.c
+@@ -41,7 +41,7 @@ u32 rtl8723be_phy_query_rf_reg(struct ieee80211_hw *hw, enum radio_path rfpath,
+ 	spin_lock(&rtlpriv->locks.rf_lock);
+ 
+ 	original_value = rtl8723_phy_rf_serial_read(hw, rfpath, regaddr);
+-	bitshift = rtl8723_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+@@ -68,7 +68,7 @@ void rtl8723be_phy_set_rf_reg(struct ieee80211_hw *hw, enum radio_path path,
+ 	if (bitmask != RFREG_OFFSET_MASK) {
+ 			original_value = rtl8723_phy_rf_serial_read(hw, path,
+ 								    regaddr);
+-			bitshift = rtl8723_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data = ((original_value & (~bitmask)) |
+ 				(data << bitshift));
+ 		}
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index d25bb5b9a54cd..f5c5cf650b48e 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -104,13 +104,12 @@ bool provides_xdp_headroom = true;
+ module_param(provides_xdp_headroom, bool, 0644);
+ 
+ static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
+-			       u8 status);
++			       s8 status);
+ 
+ static void make_tx_response(struct xenvif_queue *queue,
+-			     struct xen_netif_tx_request *txp,
++			     const struct xen_netif_tx_request *txp,
+ 			     unsigned int extra_count,
+-			     s8       st);
+-static void push_tx_responses(struct xenvif_queue *queue);
++			     s8 status);
+ 
+ static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);
+ 
+@@ -208,13 +207,9 @@ static void xenvif_tx_err(struct xenvif_queue *queue,
+ 			  unsigned int extra_count, RING_IDX end)
+ {
+ 	RING_IDX cons = queue->tx.req_cons;
+-	unsigned long flags;
+ 
+ 	do {
+-		spin_lock_irqsave(&queue->response_lock, flags);
+ 		make_tx_response(queue, txp, extra_count, XEN_NETIF_RSP_ERROR);
+-		push_tx_responses(queue);
+-		spin_unlock_irqrestore(&queue->response_lock, flags);
+ 		if (cons == end)
+ 			break;
+ 		RING_COPY_REQUEST(&queue->tx, cons++, txp);
+@@ -465,12 +460,7 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 	for (shinfo->nr_frags = 0; nr_slots > 0 && shinfo->nr_frags < MAX_SKB_FRAGS;
+ 	     nr_slots--) {
+ 		if (unlikely(!txp->size)) {
+-			unsigned long flags;
+-
+-			spin_lock_irqsave(&queue->response_lock, flags);
+ 			make_tx_response(queue, txp, 0, XEN_NETIF_RSP_OKAY);
+-			push_tx_responses(queue);
+-			spin_unlock_irqrestore(&queue->response_lock, flags);
+ 			++txp;
+ 			continue;
+ 		}
+@@ -496,14 +486,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 
+ 		for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots; ++txp) {
+ 			if (unlikely(!txp->size)) {
+-				unsigned long flags;
+-
+-				spin_lock_irqsave(&queue->response_lock, flags);
+ 				make_tx_response(queue, txp, 0,
+ 						 XEN_NETIF_RSP_OKAY);
+-				push_tx_responses(queue);
+-				spin_unlock_irqrestore(&queue->response_lock,
+-						       flags);
+ 				continue;
+ 			}
+ 
+@@ -997,7 +981,6 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
+ 					 (ret == 0) ?
+ 					 XEN_NETIF_RSP_OKAY :
+ 					 XEN_NETIF_RSP_ERROR);
+-			push_tx_responses(queue);
+ 			continue;
+ 		}
+ 
+@@ -1009,7 +992,6 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
+ 
+ 			make_tx_response(queue, &txreq, extra_count,
+ 					 XEN_NETIF_RSP_OKAY);
+-			push_tx_responses(queue);
+ 			continue;
+ 		}
+ 
+@@ -1444,8 +1426,35 @@ int xenvif_tx_action(struct xenvif_queue *queue, int budget)
+ 	return work_done;
+ }
+ 
++static void _make_tx_response(struct xenvif_queue *queue,
++			     const struct xen_netif_tx_request *txp,
++			     unsigned int extra_count,
++			     s8 status)
++{
++	RING_IDX i = queue->tx.rsp_prod_pvt;
++	struct xen_netif_tx_response *resp;
++
++	resp = RING_GET_RESPONSE(&queue->tx, i);
++	resp->id     = txp->id;
++	resp->status = status;
++
++	while (extra_count-- != 0)
++		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
++
++	queue->tx.rsp_prod_pvt = ++i;
++}
++
++static void push_tx_responses(struct xenvif_queue *queue)
++{
++	int notify;
++
++	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
++	if (notify)
++		notify_remote_via_irq(queue->tx_irq);
++}
++
+ static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
+-			       u8 status)
++			       s8 status)
+ {
+ 	struct pending_tx_info *pending_tx_info;
+ 	pending_ring_idx_t index;
+@@ -1455,8 +1464,8 @@ static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
+ 
+ 	spin_lock_irqsave(&queue->response_lock, flags);
+ 
+-	make_tx_response(queue, &pending_tx_info->req,
+-			 pending_tx_info->extra_count, status);
++	_make_tx_response(queue, &pending_tx_info->req,
++			  pending_tx_info->extra_count, status);
+ 
+ 	/* Release the pending index before pusing the Tx response so
+ 	 * its available before a new Tx request is pushed by the
+@@ -1470,32 +1479,19 @@ static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
+ 	spin_unlock_irqrestore(&queue->response_lock, flags);
+ }
+ 
+-
+ static void make_tx_response(struct xenvif_queue *queue,
+-			     struct xen_netif_tx_request *txp,
++			     const struct xen_netif_tx_request *txp,
+ 			     unsigned int extra_count,
+-			     s8       st)
++			     s8 status)
+ {
+-	RING_IDX i = queue->tx.rsp_prod_pvt;
+-	struct xen_netif_tx_response *resp;
+-
+-	resp = RING_GET_RESPONSE(&queue->tx, i);
+-	resp->id     = txp->id;
+-	resp->status = st;
+-
+-	while (extra_count-- != 0)
+-		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
++	unsigned long flags;
+ 
+-	queue->tx.rsp_prod_pvt = ++i;
+-}
++	spin_lock_irqsave(&queue->response_lock, flags);
+ 
+-static void push_tx_responses(struct xenvif_queue *queue)
+-{
+-	int notify;
++	_make_tx_response(queue, txp, extra_count, status);
++	push_tx_responses(queue);
+ 
+-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
+-	if (notify)
+-		notify_remote_via_irq(queue->tx_irq);
++	spin_unlock_irqrestore(&queue->response_lock, flags);
+ }
+ 
+ static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index a411460d2b211..758d6db590aa8 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -1306,7 +1306,7 @@ DEFINE_SIMPLE_PROP(clocks, "clocks", "#clock-cells")
+ DEFINE_SIMPLE_PROP(interconnects, "interconnects", "#interconnect-cells")
+ DEFINE_SIMPLE_PROP(iommus, "iommus", "#iommu-cells")
+ DEFINE_SIMPLE_PROP(mboxes, "mboxes", "#mbox-cells")
+-DEFINE_SIMPLE_PROP(io_channels, "io-channel", "#io-channel-cells")
++DEFINE_SIMPLE_PROP(io_channels, "io-channels", "#io-channel-cells")
+ DEFINE_SIMPLE_PROP(interrupt_parent, "interrupt-parent", NULL)
+ DEFINE_SIMPLE_PROP(dmas, "dmas", "#dma-cells")
+ DEFINE_SIMPLE_PROP(power_domains, "power-domains", "#power-domain-cells")
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index f9083c868a36d..a334c68db3395 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -48,6 +48,12 @@ static struct unittest_results {
+ 	failed; \
+ })
+ 
++#ifdef CONFIG_OF_KOBJ
++#define OF_KREF_READ(NODE) kref_read(&(NODE)->kobj.kref)
++#else
++#define OF_KREF_READ(NODE) 1
++#endif
++
+ /*
+  * Expected message may have a message level other than KERN_INFO.
+  * Print the expected message only if the current loglevel will allow
+@@ -561,7 +567,7 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 			pr_err("missing testcase data\n");
+ 			return;
+ 		}
+-		prefs[i] = kref_read(&p[i]->kobj.kref);
++		prefs[i] = OF_KREF_READ(p[i]);
+ 	}
+ 
+ 	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
+@@ -684,9 +690,9 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(p); ++i) {
+-		unittest(prefs[i] == kref_read(&p[i]->kobj.kref),
++		unittest(prefs[i] == OF_KREF_READ(p[i]),
+ 			 "provider%d: expected:%d got:%d\n",
+-			 i, prefs[i], kref_read(&p[i]->kobj.kref));
++			 i, prefs[i], OF_KREF_READ(p[i]));
+ 		of_node_put(p[i]);
+ 	}
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index 95ed719402d75..339318e790e21 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -6,6 +6,7 @@
+  * Author: Kishon Vijay Abraham I <kishon@ti.com>
+  */
+ 
++#include <linux/kernel.h>
+ #include <linux/of.h>
+ 
+ #include "pcie-designware.h"
+@@ -593,6 +594,7 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
+ 	}
+ 
+ 	aligned_offset = msg_addr & (epc->mem->window.page_size - 1);
++	msg_addr = ALIGN_DOWN(msg_addr, epc->mem->window.page_size);
+ 	ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys,  msg_addr,
+ 				  epc->mem->window.page_size);
+ 	if (ret)
+diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
+index 23548b517e4b6..ea91d63c8be15 100644
+--- a/drivers/pci/controller/pcie-mediatek.c
++++ b/drivers/pci/controller/pcie-mediatek.c
+@@ -620,14 +620,20 @@ static void mtk_pcie_intr_handler(struct irq_desc *desc)
+ 		if (status & MSI_STATUS){
+ 			unsigned long imsi_status;
+ 
++			/*
++			 * The interrupt status can be cleared even if the
++			 * MSI status remains pending. As such, given the
++			 * edge-triggered interrupt type, its status should
++			 * be cleared before being dispatched to the
++			 * handler of the underlying device.
++			 */
++			writel(MSI_STATUS, port->base + PCIE_INT_STATUS);
+ 			while ((imsi_status = readl(port->base + PCIE_IMSI_STATUS))) {
+ 				for_each_set_bit(bit, &imsi_status, MTK_MSI_IRQS_NUM) {
+ 					virq = irq_find_mapping(port->inner_domain, bit);
+ 					generic_handle_irq(virq);
+ 				}
+ 			}
+-			/* Clear MSI interrupt status */
+-			writel(MSI_STATUS, port->base + PCIE_INT_STATUS);
+ 		}
+ 	}
+ 
+diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
+index 9564b74003f0f..d58b02237075c 100644
+--- a/drivers/pci/pcie/aer.c
++++ b/drivers/pci/pcie/aer.c
+@@ -741,7 +741,7 @@ static void aer_print_port_info(struct pci_dev *dev, struct aer_err_info *info)
+ 	u8 bus = info->id >> 8;
+ 	u8 devfn = info->id & 0xff;
+ 
+-	pci_info(dev, "%s%s error received: %04x:%02x:%02x.%d\n",
++	pci_info(dev, "%s%s error message received from %04x:%02x:%02x.%d\n",
+ 		 info->multi_error_valid ? "Multiple " : "",
+ 		 aer_error_severity_string[info->severity],
+ 		 pci_domain_nr(dev->bus), bus, PCI_SLOT(devfn),
+@@ -926,7 +926,12 @@ static bool find_source_device(struct pci_dev *parent,
+ 	pci_walk_bus(parent->subordinate, find_device_iter, e_info);
+ 
+ 	if (!e_info->error_dev_num) {
+-		pci_info(parent, "can't find device of ID%04x\n", e_info->id);
++		u8 bus = e_info->id >> 8;
++		u8 devfn = e_info->id & 0xff;
++
++		pci_info(parent, "found no error details for %04x:%02x:%02x.%d\n",
++			 pci_domain_nr(parent->bus), bus, PCI_SLOT(devfn),
++			 PCI_FUNC(devfn));
+ 		return false;
+ 	}
+ 	return true;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 21661feeeeb65..b67aea8d8f197 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -609,10 +609,13 @@ static void quirk_amd_dwc_class(struct pci_dev *pdev)
+ {
+ 	u32 class = pdev->class;
+ 
+-	/* Use "USB Device (not host controller)" class */
+-	pdev->class = PCI_CLASS_SERIAL_USB_DEVICE;
+-	pci_info(pdev, "PCI class overridden (%#08x -> %#08x) so dwc3 driver can claim this instead of xhci\n",
+-		 class, pdev->class);
++	if (class != PCI_CLASS_SERIAL_USB_DEVICE) {
++		/* Use "USB Device (not host controller)" class */
++		pdev->class = PCI_CLASS_SERIAL_USB_DEVICE;
++		pci_info(pdev,
++			"PCI class overridden (%#08x -> %#08x) so dwc3 driver can claim this instead of xhci\n",
++			class, pdev->class);
++	}
+ }
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_NL_USB,
+ 		quirk_amd_dwc_class);
+@@ -3638,6 +3641,19 @@ static void quirk_no_pm_reset(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_ATI, PCI_ANY_ID,
+ 			       PCI_CLASS_DISPLAY_VGA, 8, quirk_no_pm_reset);
+ 
++/*
++ * Spectrum-{1,2,3,4} devices report that a D3hot->D0 transition causes a reset
++ * (i.e., they advertise NoSoftRst-). However, this transition does not have
++ * any effect on the device: It continues to be operational and network ports
++ * remain up. Advertising this support makes it seem as if a PM reset is viable
++ * for these devices. Mark it as unavailable to skip it when testing reset
++ * methods.
++ */
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, 0xcb84, quirk_no_pm_reset);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, 0xcf6c, quirk_no_pm_reset);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, 0xcf70, quirk_no_pm_reset);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, 0xcf80, quirk_no_pm_reset);
++
+ /*
+  * Thunderbolt controllers with broken MSI hotplug signaling:
+  * Entire 1st generation (Light Ridge, Eagle Ridge, Light Peak) and part
+diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
+index ba52459928f7f..5cea3ad290c54 100644
+--- a/drivers/pci/switch/switchtec.c
++++ b/drivers/pci/switch/switchtec.c
+@@ -1251,13 +1251,6 @@ static void stdev_release(struct device *dev)
+ {
+ 	struct switchtec_dev *stdev = to_stdev(dev);
+ 
+-	if (stdev->dma_mrpc) {
+-		iowrite32(0, &stdev->mmio_mrpc->dma_en);
+-		flush_wc_buf(stdev);
+-		writeq(0, &stdev->mmio_mrpc->dma_addr);
+-		dma_free_coherent(&stdev->pdev->dev, sizeof(*stdev->dma_mrpc),
+-				stdev->dma_mrpc, stdev->dma_mrpc_dma_addr);
+-	}
+ 	kfree(stdev);
+ }
+ 
+@@ -1301,7 +1294,7 @@ static struct switchtec_dev *stdev_create(struct pci_dev *pdev)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	stdev->alive = true;
+-	stdev->pdev = pdev;
++	stdev->pdev = pci_dev_get(pdev);
+ 	INIT_LIST_HEAD(&stdev->mrpc_queue);
+ 	mutex_init(&stdev->mrpc_mutex);
+ 	stdev->mrpc_busy = 0;
+@@ -1335,6 +1328,7 @@ static struct switchtec_dev *stdev_create(struct pci_dev *pdev)
+ 	return stdev;
+ 
+ err_put:
++	pci_dev_put(stdev->pdev);
+ 	put_device(&stdev->dev);
+ 	return ERR_PTR(rc);
+ }
+@@ -1587,6 +1581,18 @@ static int switchtec_init_pci(struct switchtec_dev *stdev,
+ 	return 0;
+ }
+ 
++static void switchtec_exit_pci(struct switchtec_dev *stdev)
++{
++	if (stdev->dma_mrpc) {
++		iowrite32(0, &stdev->mmio_mrpc->dma_en);
++		flush_wc_buf(stdev);
++		writeq(0, &stdev->mmio_mrpc->dma_addr);
++		dma_free_coherent(&stdev->pdev->dev, sizeof(*stdev->dma_mrpc),
++				  stdev->dma_mrpc, stdev->dma_mrpc_dma_addr);
++		stdev->dma_mrpc = NULL;
++	}
++}
++
+ static int switchtec_pci_probe(struct pci_dev *pdev,
+ 			       const struct pci_device_id *id)
+ {
+@@ -1646,6 +1652,9 @@ static void switchtec_pci_remove(struct pci_dev *pdev)
+ 	ida_simple_remove(&switchtec_minor_ida, MINOR(stdev->dev.devt));
+ 	dev_info(&stdev->dev, "unregistered.\n");
+ 	stdev_kill(stdev);
++	switchtec_exit_pci(stdev);
++	pci_dev_put(stdev->pdev);
++	stdev->pdev = NULL;
+ 	put_device(&stdev->dev);
+ }
+ 
+diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+index 2cb949f931b69..c0802152f30bc 100644
+--- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c
++++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+@@ -633,8 +633,6 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ 	channel->irq = platform_get_irq_optional(pdev, 0);
+ 	channel->dr_mode = rcar_gen3_get_dr_mode(dev->of_node);
+ 	if (channel->dr_mode != USB_DR_MODE_UNKNOWN) {
+-		int ret;
+-
+ 		channel->is_otg_channel = true;
+ 		channel->uses_otg_pins = !of_property_read_bool(dev->of_node,
+ 							"renesas,no-otg-pins");
+@@ -693,8 +691,6 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ 		ret = PTR_ERR(provider);
+ 		goto error;
+ 	} else if (channel->is_otg_channel) {
+-		int ret;
+-
+ 		ret = device_create_file(dev, &dev_attr_role);
+ 		if (ret < 0)
+ 			goto error;
+diff --git a/drivers/phy/ti/phy-omap-usb2.c b/drivers/phy/ti/phy-omap-usb2.c
+index f77ac041d8368..95e72f7a3199d 100644
+--- a/drivers/phy/ti/phy-omap-usb2.c
++++ b/drivers/phy/ti/phy-omap-usb2.c
+@@ -116,7 +116,7 @@ static int omap_usb_set_vbus(struct usb_otg *otg, bool enabled)
+ {
+ 	struct omap_usb *phy = phy_to_omapusb(otg->usb_phy);
+ 
+-	if (!phy->comparator)
++	if (!phy->comparator || !phy->comparator->set_vbus)
+ 		return -ENODEV;
+ 
+ 	return phy->comparator->set_vbus(phy->comparator, enabled);
+@@ -126,7 +126,7 @@ static int omap_usb_start_srp(struct usb_otg *otg)
+ {
+ 	struct omap_usb *phy = phy_to_omapusb(otg->usb_phy);
+ 
+-	if (!phy->comparator)
++	if (!phy->comparator || !phy->comparator->start_srp)
+ 		return -ENODEV;
+ 
+ 	return phy->comparator->start_srp(phy->comparator);
+diff --git a/drivers/pnp/pnpacpi/rsparser.c b/drivers/pnp/pnpacpi/rsparser.c
+index da78dc77aed32..9879deb4dc0b5 100644
+--- a/drivers/pnp/pnpacpi/rsparser.c
++++ b/drivers/pnp/pnpacpi/rsparser.c
+@@ -151,13 +151,13 @@ static int vendor_resource_matches(struct pnp_dev *dev,
+ static void pnpacpi_parse_allocated_vendor(struct pnp_dev *dev,
+ 				    struct acpi_resource_vendor_typed *vendor)
+ {
+-	if (vendor_resource_matches(dev, vendor, &hp_ccsr_uuid, 16)) {
+-		u64 start, length;
++	struct { u64 start, length; } range;
+ 
+-		memcpy(&start, vendor->byte_data, sizeof(start));
+-		memcpy(&length, vendor->byte_data + 8, sizeof(length));
+-
+-		pnp_add_mem_resource(dev, start, start + length - 1, 0);
++	if (vendor_resource_matches(dev, vendor, &hp_ccsr_uuid,
++				    sizeof(range))) {
++		memcpy(&range, vendor->byte_data, sizeof(range));
++		pnp_add_mem_resource(dev, range.start, range.start +
++				     range.length - 1, 0);
+ 	}
+ }
+ 
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 51c4f604d3b24..54330eb0d03b8 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -2768,7 +2768,8 @@ static int _regulator_enable(struct regulator *regulator)
+ 		/* Fallthrough on positive return values - already enabled */
+ 	}
+ 
+-	rdev->use_count++;
++	if (regulator->enable_count == 1)
++		rdev->use_count++;
+ 
+ 	return 0;
+ 
+@@ -2846,37 +2847,40 @@ static int _regulator_disable(struct regulator *regulator)
+ 
+ 	lockdep_assert_held_once(&rdev->mutex.base);
+ 
+-	if (WARN(rdev->use_count <= 0,
++	if (WARN(regulator->enable_count == 0,
+ 		 "unbalanced disables for %s\n", rdev_get_name(rdev)))
+ 		return -EIO;
+ 
+-	/* are we the last user and permitted to disable ? */
+-	if (rdev->use_count == 1 &&
+-	    (rdev->constraints && !rdev->constraints->always_on)) {
+-
+-		/* we are last user */
+-		if (regulator_ops_is_valid(rdev, REGULATOR_CHANGE_STATUS)) {
+-			ret = _notifier_call_chain(rdev,
+-						   REGULATOR_EVENT_PRE_DISABLE,
+-						   NULL);
+-			if (ret & NOTIFY_STOP_MASK)
+-				return -EINVAL;
+-
+-			ret = _regulator_do_disable(rdev);
+-			if (ret < 0) {
+-				rdev_err(rdev, "failed to disable: %pe\n", ERR_PTR(ret));
+-				_notifier_call_chain(rdev,
+-						REGULATOR_EVENT_ABORT_DISABLE,
++	if (regulator->enable_count == 1) {
++	/* disabling last enable_count from this regulator */
++		/* are we the last user and permitted to disable ? */
++		if (rdev->use_count == 1 &&
++		    (rdev->constraints && !rdev->constraints->always_on)) {
++
++			/* we are last user */
++			if (regulator_ops_is_valid(rdev, REGULATOR_CHANGE_STATUS)) {
++				ret = _notifier_call_chain(rdev,
++							   REGULATOR_EVENT_PRE_DISABLE,
++							   NULL);
++				if (ret & NOTIFY_STOP_MASK)
++					return -EINVAL;
++
++				ret = _regulator_do_disable(rdev);
++				if (ret < 0) {
++					rdev_err(rdev, "failed to disable: %pe\n", ERR_PTR(ret));
++					_notifier_call_chain(rdev,
++							REGULATOR_EVENT_ABORT_DISABLE,
++							NULL);
++					return ret;
++				}
++				_notifier_call_chain(rdev, REGULATOR_EVENT_DISABLE,
+ 						NULL);
+-				return ret;
+ 			}
+-			_notifier_call_chain(rdev, REGULATOR_EVENT_DISABLE,
+-					NULL);
+-		}
+ 
+-		rdev->use_count = 0;
+-	} else if (rdev->use_count > 1) {
+-		rdev->use_count--;
++			rdev->use_count = 0;
++		} else if (rdev->use_count > 1) {
++			rdev->use_count--;
++		}
+ 	}
+ 
+ 	if (ret == 0)
+diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
+index 7d7ed4e5cce7b..a8b5d10880c0d 100644
+--- a/drivers/rpmsg/virtio_rpmsg_bus.c
++++ b/drivers/rpmsg/virtio_rpmsg_bus.c
+@@ -387,6 +387,7 @@ static void virtio_rpmsg_release_device(struct device *dev)
+ 	struct rpmsg_device *rpdev = to_rpmsg_device(dev);
+ 	struct virtio_rpmsg_channel *vch = to_virtio_rpmsg_channel(rpdev);
+ 
++	kfree(rpdev->driver_override);
+ 	kfree(vch);
+ }
+ 
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index 7f560937bf7cb..2c4ccab6e462d 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -292,7 +292,7 @@ static int cmos_read_alarm(struct device *dev, struct rtc_wkalrm *t)
+ 
+ 	/* This not only a rtc_op, but also called directly */
+ 	if (!is_valid_irq(cmos->irq))
+-		return -EIO;
++		return -ETIMEDOUT;
+ 
+ 	/* Basic alarms only support hour, minute, and seconds fields.
+ 	 * Some also support day and month, for alarms up to a year in
+@@ -557,7 +557,7 @@ static int cmos_set_alarm(struct device *dev, struct rtc_wkalrm *t)
+ 	 * Use mc146818_avoid_UIP() to avoid this.
+ 	 */
+ 	if (!mc146818_avoid_UIP(cmos_set_alarm_callback, &p))
+-		return -EIO;
++		return -ETIMEDOUT;
+ 
+ 	cmos->alarm_expires = rtc_tm_to_time64(&t->time);
+ 
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index d8cdf90241268..fee7b09ebc226 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -257,9 +257,10 @@ static void qeth_l3_clear_ip_htable(struct qeth_card *card, int recover)
+ 		if (!recover) {
+ 			hash_del(&addr->hnode);
+ 			kfree(addr);
+-			continue;
++		} else {
++			/* prepare for recovery */
++			addr->disp_flag = QETH_DISP_ADDR_ADD;
+ 		}
+-		addr->disp_flag = QETH_DISP_ADDR_ADD;
+ 	}
+ 
+ 	mutex_unlock(&card->ip_lock);
+@@ -280,9 +281,11 @@ static void qeth_l3_recover_ip(struct qeth_card *card)
+ 		if (addr->disp_flag == QETH_DISP_ADDR_ADD) {
+ 			rc = qeth_l3_register_addr_entry(card, addr);
+ 
+-			if (!rc) {
++			if (!rc || rc == -EADDRINUSE || rc == -ENETDOWN) {
++				/* keep it in the records */
+ 				addr->disp_flag = QETH_DISP_ADDR_DO_NOTHING;
+ 			} else {
++				/* bad address */
+ 				hash_del(&addr->hnode);
+ 				kfree(addr);
+ 			}
+diff --git a/drivers/scsi/arcmsr/arcmsr.h b/drivers/scsi/arcmsr/arcmsr.h
+index 5d054d5c70a59..f2e587e66e19d 100644
+--- a/drivers/scsi/arcmsr/arcmsr.h
++++ b/drivers/scsi/arcmsr/arcmsr.h
+@@ -77,9 +77,13 @@ struct device_attribute;
+ #ifndef PCI_DEVICE_ID_ARECA_1203
+ #define PCI_DEVICE_ID_ARECA_1203	0x1203
+ #endif
++#ifndef PCI_DEVICE_ID_ARECA_1883
++#define PCI_DEVICE_ID_ARECA_1883	0x1883
++#endif
+ #ifndef PCI_DEVICE_ID_ARECA_1884
+ #define PCI_DEVICE_ID_ARECA_1884	0x1884
+ #endif
++#define PCI_DEVICE_ID_ARECA_1886_0	0x1886
+ #define PCI_DEVICE_ID_ARECA_1886	0x188A
+ #define	ARCMSR_HOURS			(1000 * 60 * 60 * 4)
+ #define	ARCMSR_MINUTES			(1000 * 60 * 60)
+diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c
+index 9294a2c677b3e..199b102f31a24 100644
+--- a/drivers/scsi/arcmsr/arcmsr_hba.c
++++ b/drivers/scsi/arcmsr/arcmsr_hba.c
+@@ -208,8 +208,12 @@ static struct pci_device_id arcmsr_device_id_table[] = {
+ 		.driver_data = ACB_ADAPTER_TYPE_A},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1880),
+ 		.driver_data = ACB_ADAPTER_TYPE_C},
++	{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1883),
++		.driver_data = ACB_ADAPTER_TYPE_C},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1884),
+ 		.driver_data = ACB_ADAPTER_TYPE_E},
++	{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1886_0),
++		.driver_data = ACB_ADAPTER_TYPE_F},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1886),
+ 		.driver_data = ACB_ADAPTER_TYPE_F},
+ 	{0, 0}, /* Terminating entry */
+@@ -4701,9 +4705,11 @@ static const char *arcmsr_info(struct Scsi_Host *host)
+ 	case PCI_DEVICE_ID_ARECA_1680:
+ 	case PCI_DEVICE_ID_ARECA_1681:
+ 	case PCI_DEVICE_ID_ARECA_1880:
++	case PCI_DEVICE_ID_ARECA_1883:
+ 	case PCI_DEVICE_ID_ARECA_1884:
+ 		type = "SAS/SATA";
+ 		break;
++	case PCI_DEVICE_ID_ARECA_1886_0:
+ 	case PCI_DEVICE_ID_ARECA_1886:
+ 		type = "NVMe/SAS/SATA";
+ 		break;
+diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
+index bf0b3178f84d0..4371d8b006564 100644
+--- a/drivers/scsi/device_handler/scsi_dh_alua.c
++++ b/drivers/scsi/device_handler/scsi_dh_alua.c
+@@ -405,8 +405,8 @@ static char print_alua_state(unsigned char state)
+ 	}
+ }
+ 
+-static int alua_check_sense(struct scsi_device *sdev,
+-			    struct scsi_sense_hdr *sense_hdr)
++static enum scsi_disposition alua_check_sense(struct scsi_device *sdev,
++					      struct scsi_sense_hdr *sense_hdr)
+ {
+ 	switch (sense_hdr->sense_key) {
+ 	case NOT_READY:
+diff --git a/drivers/scsi/device_handler/scsi_dh_emc.c b/drivers/scsi/device_handler/scsi_dh_emc.c
+index caa685cfe3d45..bd28ec6cfb72f 100644
+--- a/drivers/scsi/device_handler/scsi_dh_emc.c
++++ b/drivers/scsi/device_handler/scsi_dh_emc.c
+@@ -280,8 +280,8 @@ static int send_trespass_cmd(struct scsi_device *sdev,
+ 	return res;
+ }
+ 
+-static int clariion_check_sense(struct scsi_device *sdev,
+-				struct scsi_sense_hdr *sense_hdr)
++static enum scsi_disposition clariion_check_sense(struct scsi_device *sdev,
++					struct scsi_sense_hdr *sense_hdr)
+ {
+ 	switch (sense_hdr->sense_key) {
+ 	case NOT_READY:
+diff --git a/drivers/scsi/device_handler/scsi_dh_rdac.c b/drivers/scsi/device_handler/scsi_dh_rdac.c
+index 85a71bafaea76..66652ab409cc9 100644
+--- a/drivers/scsi/device_handler/scsi_dh_rdac.c
++++ b/drivers/scsi/device_handler/scsi_dh_rdac.c
+@@ -656,8 +656,8 @@ static blk_status_t rdac_prep_fn(struct scsi_device *sdev, struct request *req)
+ 	return BLK_STS_OK;
+ }
+ 
+-static int rdac_check_sense(struct scsi_device *sdev,
+-				struct scsi_sense_hdr *sense_hdr)
++static enum scsi_disposition rdac_check_sense(struct scsi_device *sdev,
++					      struct scsi_sense_hdr *sense_hdr)
+ {
+ 	struct rdac_dh_data *h = sdev->handler_data;
+ 
+diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
+index a2d60ad2a6835..bbc5d6b9be737 100644
+--- a/drivers/scsi/fcoe/fcoe_ctlr.c
++++ b/drivers/scsi/fcoe/fcoe_ctlr.c
+@@ -319,17 +319,16 @@ static void fcoe_ctlr_announce(struct fcoe_ctlr *fip)
+ {
+ 	struct fcoe_fcf *sel;
+ 	struct fcoe_fcf *fcf;
+-	unsigned long flags;
+ 
+ 	mutex_lock(&fip->ctlr_mutex);
+-	spin_lock_irqsave(&fip->ctlr_lock, flags);
++	spin_lock_bh(&fip->ctlr_lock);
+ 
+ 	kfree_skb(fip->flogi_req);
+ 	fip->flogi_req = NULL;
+ 	list_for_each_entry(fcf, &fip->fcfs, list)
+ 		fcf->flogi_sent = 0;
+ 
+-	spin_unlock_irqrestore(&fip->ctlr_lock, flags);
++	spin_unlock_bh(&fip->ctlr_lock);
+ 	sel = fip->sel_fcf;
+ 
+ 	if (sel && ether_addr_equal(sel->fcf_mac, fip->dest_addr))
+@@ -700,7 +699,6 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport,
+ {
+ 	struct fc_frame *fp;
+ 	struct fc_frame_header *fh;
+-	unsigned long flags;
+ 	u16 old_xid;
+ 	u8 op;
+ 	u8 mac[ETH_ALEN];
+@@ -734,11 +732,11 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport,
+ 		op = FIP_DT_FLOGI;
+ 		if (fip->mode == FIP_MODE_VN2VN)
+ 			break;
+-		spin_lock_irqsave(&fip->ctlr_lock, flags);
++		spin_lock_bh(&fip->ctlr_lock);
+ 		kfree_skb(fip->flogi_req);
+ 		fip->flogi_req = skb;
+ 		fip->flogi_req_send = 1;
+-		spin_unlock_irqrestore(&fip->ctlr_lock, flags);
++		spin_unlock_bh(&fip->ctlr_lock);
+ 		schedule_work(&fip->timer_work);
+ 		return -EINPROGRESS;
+ 	case ELS_FDISC:
+@@ -1715,11 +1713,10 @@ static int fcoe_ctlr_flogi_send_locked(struct fcoe_ctlr *fip)
+ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
+ {
+ 	struct fcoe_fcf *fcf;
+-	unsigned long flags;
+ 	int error;
+ 
+ 	mutex_lock(&fip->ctlr_mutex);
+-	spin_lock_irqsave(&fip->ctlr_lock, flags);
++	spin_lock_bh(&fip->ctlr_lock);
+ 	LIBFCOE_FIP_DBG(fip, "re-sending FLOGI - reselect\n");
+ 	fcf = fcoe_ctlr_select(fip);
+ 	if (!fcf || fcf->flogi_sent) {
+@@ -1730,7 +1727,7 @@ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
+ 		fcoe_ctlr_solicit(fip, NULL);
+ 		error = fcoe_ctlr_flogi_send_locked(fip);
+ 	}
+-	spin_unlock_irqrestore(&fip->ctlr_lock, flags);
++	spin_unlock_bh(&fip->ctlr_lock);
+ 	mutex_unlock(&fip->ctlr_mutex);
+ 	return error;
+ }
+@@ -1747,9 +1744,8 @@ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
+ static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip)
+ {
+ 	struct fcoe_fcf *fcf;
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&fip->ctlr_lock, flags);
++	spin_lock_bh(&fip->ctlr_lock);
+ 	fcf = fip->sel_fcf;
+ 	if (!fcf || !fip->flogi_req_send)
+ 		goto unlock;
+@@ -1776,7 +1772,7 @@ static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip)
+ 	} else /* XXX */
+ 		LIBFCOE_FIP_DBG(fip, "No FCF selected - defer send\n");
+ unlock:
+-	spin_unlock_irqrestore(&fip->ctlr_lock, flags);
++	spin_unlock_bh(&fip->ctlr_lock);
+ }
+ 
+ /**
+diff --git a/drivers/scsi/isci/request.c b/drivers/scsi/isci/request.c
+index b6d68d871b6cb..a4129e456efa0 100644
+--- a/drivers/scsi/isci/request.c
++++ b/drivers/scsi/isci/request.c
+@@ -3398,7 +3398,7 @@ static enum sci_status isci_io_request_build(struct isci_host *ihost,
+ 		return SCI_FAILURE;
+ 	}
+ 
+-	return SCI_SUCCESS;
++	return status;
+ }
+ 
+ static struct isci_request *isci_request_from_tag(struct isci_host *ihost, u16 tag)
+diff --git a/drivers/scsi/libfc/fc_fcp.c b/drivers/scsi/libfc/fc_fcp.c
+index 7cfeb6886237c..61c12dde967ee 100644
+--- a/drivers/scsi/libfc/fc_fcp.c
++++ b/drivers/scsi/libfc/fc_fcp.c
+@@ -270,6 +270,11 @@ static int fc_fcp_send_abort(struct fc_fcp_pkt *fsp)
+ 	if (!fsp->seq_ptr)
+ 		return -EINVAL;
+ 
++	if (fsp->state & FC_SRB_ABORT_PENDING) {
++		FC_FCP_DBG(fsp, "abort already pending\n");
++		return -EBUSY;
++	}
++
+ 	per_cpu_ptr(fsp->lp->stats, get_cpu())->FcpPktAborts++;
+ 	put_cpu();
+ 
+@@ -1681,7 +1686,7 @@ static void fc_fcp_rec_error(struct fc_fcp_pkt *fsp, struct fc_frame *fp)
+ 		if (fsp->recov_retry++ < FC_MAX_RECOV_RETRY)
+ 			fc_fcp_rec(fsp);
+ 		else
+-			fc_fcp_recovery(fsp, FC_ERROR);
++			fc_fcp_recovery(fsp, FC_TIMED_OUT);
+ 		break;
+ 	}
+ 	fc_fcp_unlock_pkt(fsp);
+@@ -1700,11 +1705,12 @@ static void fc_fcp_recovery(struct fc_fcp_pkt *fsp, u8 code)
+ 	fsp->status_code = code;
+ 	fsp->cdb_status = 0;
+ 	fsp->io_status = 0;
+-	/*
+-	 * if this fails then we let the scsi command timer fire and
+-	 * scsi-ml escalate.
+-	 */
+-	fc_fcp_send_abort(fsp);
++	if (!fsp->cmd)
++		/*
++		 * Only abort non-scsi commands; otherwise let the
++		 * scsi command timer fire and scsi-ml escalate.
++		 */
++		fc_fcp_send_abort(fsp);
+ }
+ 
+ /**
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 03bc472f302a2..cf69f831a7253 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -32,6 +32,7 @@
+ struct lpfc_sli2_slim;
+ 
+ #define ELX_MODEL_NAME_SIZE	80
++#define ELX_FW_NAME_SIZE	84
+ 
+ #define LPFC_PCI_DEV_LP		0x1
+ #define LPFC_PCI_DEV_OC		0x2
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 1bb3c96a04bd6..5f2009327a593 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -13026,7 +13026,7 @@ lpfc_write_firmware(const struct firmware *fw, void *context)
+ int
+ lpfc_sli4_request_firmware_update(struct lpfc_hba *phba, uint8_t fw_upgrade)
+ {
+-	uint8_t file_name[ELX_MODEL_NAME_SIZE];
++	char file_name[ELX_FW_NAME_SIZE] = {0};
+ 	int ret;
+ 	const struct firmware *fw;
+ 
+@@ -13035,7 +13035,7 @@ lpfc_sli4_request_firmware_update(struct lpfc_hba *phba, uint8_t fw_upgrade)
+ 	    LPFC_SLI_INTF_IF_TYPE_2)
+ 		return -EPERM;
+ 
+-	snprintf(file_name, ELX_MODEL_NAME_SIZE, "%s.grp", phba->ModelName);
++	scnprintf(file_name, sizeof(file_name), "%s.grp", phba->ModelName);
+ 
+ 	if (fw_upgrade == INT_FW_UPGRADE) {
+ 		ret = request_firmware_nowait(THIS_MODULE, FW_ACTION_HOTPLUG,
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 3d3d139127eec..ffc6f3031e82b 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -60,14 +60,14 @@ static void scsi_eh_done(struct scsi_cmnd *scmd);
+ #define HOST_RESET_SETTLE_TIME  (10)
+ 
+ static int scsi_eh_try_stu(struct scsi_cmnd *scmd);
+-static int scsi_try_to_abort_cmd(struct scsi_host_template *,
+-				 struct scsi_cmnd *);
++static enum scsi_disposition scsi_try_to_abort_cmd(struct scsi_host_template *,
++						   struct scsi_cmnd *);
+ 
+-void scsi_eh_wakeup(struct Scsi_Host *shost)
++void scsi_eh_wakeup(struct Scsi_Host *shost, unsigned int busy)
+ {
+ 	lockdep_assert_held(shost->host_lock);
+ 
+-	if (scsi_host_busy(shost) == shost->host_failed) {
++	if (busy == shost->host_failed) {
+ 		trace_scsi_eh_wakeup(shost);
+ 		wake_up_process(shost->ehandler);
+ 		SCSI_LOG_ERROR_RECOVERY(5, shost_printk(KERN_INFO, shost,
+@@ -90,7 +90,7 @@ void scsi_schedule_eh(struct Scsi_Host *shost)
+ 	if (scsi_host_set_state(shost, SHOST_RECOVERY) == 0 ||
+ 	    scsi_host_set_state(shost, SHOST_CANCEL_RECOVERY) == 0) {
+ 		shost->host_eh_scheduled++;
+-		scsi_eh_wakeup(shost);
++		scsi_eh_wakeup(shost, scsi_host_busy(shost));
+ 	}
+ 
+ 	spin_unlock_irqrestore(shost->host_lock, flags);
+@@ -140,7 +140,7 @@ scmd_eh_abort_handler(struct work_struct *work)
+ 	struct scsi_cmnd *scmd =
+ 		container_of(work, struct scsi_cmnd, abort_work.work);
+ 	struct scsi_device *sdev = scmd->device;
+-	int rtn;
++	enum scsi_disposition rtn;
+ 
+ 	if (scsi_host_eh_past_deadline(sdev->host)) {
+ 		SCSI_LOG_ERROR_RECOVERY(3,
+@@ -241,11 +241,12 @@ static void scsi_eh_inc_host_failed(struct rcu_head *head)
+ {
+ 	struct scsi_cmnd *scmd = container_of(head, typeof(*scmd), rcu);
+ 	struct Scsi_Host *shost = scmd->device->host;
++	unsigned int busy = scsi_host_busy(shost);
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(shost->host_lock, flags);
+ 	shost->host_failed++;
+-	scsi_eh_wakeup(shost);
++	scsi_eh_wakeup(shost, busy);
+ 	spin_unlock_irqrestore(shost->host_lock, flags);
+ }
+ 
+@@ -478,7 +479,7 @@ static void scsi_report_sense(struct scsi_device *sdev,
+  *	When a deferred error is detected the current command has
+  *	not been executed and needs retrying.
+  */
+-int scsi_check_sense(struct scsi_cmnd *scmd)
++enum scsi_disposition scsi_check_sense(struct scsi_cmnd *scmd)
+ {
+ 	struct scsi_device *sdev = scmd->device;
+ 	struct scsi_sense_hdr sshdr;
+@@ -492,7 +493,7 @@ int scsi_check_sense(struct scsi_cmnd *scmd)
+ 		return NEEDS_RETRY;
+ 
+ 	if (sdev->handler && sdev->handler->check_sense) {
+-		int rc;
++		enum scsi_disposition rc;
+ 
+ 		rc = sdev->handler->check_sense(sdev, &sshdr);
+ 		if (rc != SCSI_RETURN_NOT_HANDLED)
+@@ -703,7 +704,7 @@ static void scsi_handle_queue_full(struct scsi_device *sdev)
+  *    don't allow for the possibility of retries here, and we are a lot
+  *    more restrictive about what we consider acceptable.
+  */
+-static int scsi_eh_completed_normally(struct scsi_cmnd *scmd)
++static enum scsi_disposition scsi_eh_completed_normally(struct scsi_cmnd *scmd)
+ {
+ 	/*
+ 	 * first check the host byte, to see if there is anything in there
+@@ -784,10 +785,10 @@ static void scsi_eh_done(struct scsi_cmnd *scmd)
+  * scsi_try_host_reset - ask host adapter to reset itself
+  * @scmd:	SCSI cmd to send host reset.
+  */
+-static int scsi_try_host_reset(struct scsi_cmnd *scmd)
++static enum scsi_disposition scsi_try_host_reset(struct scsi_cmnd *scmd)
+ {
+ 	unsigned long flags;
+-	int rtn;
++	enum scsi_disposition rtn;
+ 	struct Scsi_Host *host = scmd->device->host;
+ 	struct scsi_host_template *hostt = host->hostt;
+ 
+@@ -814,10 +815,10 @@ static int scsi_try_host_reset(struct scsi_cmnd *scmd)
+  * scsi_try_bus_reset - ask host to perform a bus reset
+  * @scmd:	SCSI cmd to send bus reset.
+  */
+-static int scsi_try_bus_reset(struct scsi_cmnd *scmd)
++static enum scsi_disposition scsi_try_bus_reset(struct scsi_cmnd *scmd)
+ {
+ 	unsigned long flags;
+-	int rtn;
++	enum scsi_disposition rtn;
+ 	struct Scsi_Host *host = scmd->device->host;
+ 	struct scsi_host_template *hostt = host->hostt;
+ 
+@@ -856,10 +857,10 @@ static void __scsi_report_device_reset(struct scsi_device *sdev, void *data)
+  *    timer on it, and set the host back to a consistent state prior to
+  *    returning.
+  */
+-static int scsi_try_target_reset(struct scsi_cmnd *scmd)
++static enum scsi_disposition scsi_try_target_reset(struct scsi_cmnd *scmd)
+ {
+ 	unsigned long flags;
+-	int rtn;
++	enum scsi_disposition rtn;
+ 	struct Scsi_Host *host = scmd->device->host;
+ 	struct scsi_host_template *hostt = host->hostt;
+ 
+@@ -887,9 +888,9 @@ static int scsi_try_target_reset(struct scsi_cmnd *scmd)
+  *    timer on it, and set the host back to a consistent state prior to
+  *    returning.
+  */
+-static int scsi_try_bus_device_reset(struct scsi_cmnd *scmd)
++static enum scsi_disposition scsi_try_bus_device_reset(struct scsi_cmnd *scmd)
+ {
+-	int rtn;
++	enum scsi_disposition rtn;
+ 	struct scsi_host_template *hostt = scmd->device->host->hostt;
+ 
+ 	if (!hostt->eh_device_reset_handler)
+@@ -918,8 +919,8 @@ static int scsi_try_bus_device_reset(struct scsi_cmnd *scmd)
+  *    if the device is temporarily unavailable (eg due to a
+  *    link down on FibreChannel)
+  */
+-static int scsi_try_to_abort_cmd(struct scsi_host_template *hostt,
+-				 struct scsi_cmnd *scmd)
++static enum scsi_disposition
++scsi_try_to_abort_cmd(struct scsi_host_template *hostt, struct scsi_cmnd *scmd)
+ {
+ 	if (!hostt->eh_abort_handler)
+ 		return FAILED;
+@@ -1052,8 +1053,8 @@ EXPORT_SYMBOL(scsi_eh_restore_cmnd);
+  * Return value:
+  *    SUCCESS or FAILED or NEEDS_RETRY
+  */
+-static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, unsigned char *cmnd,
+-			     int cmnd_size, int timeout, unsigned sense_bytes)
++static enum scsi_disposition scsi_send_eh_cmnd(struct scsi_cmnd *scmd,
++	unsigned char *cmnd, int cmnd_size, int timeout, unsigned sense_bytes)
+ {
+ 	struct scsi_device *sdev = scmd->device;
+ 	struct Scsi_Host *shost = sdev->host;
+@@ -1161,12 +1162,13 @@ static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, unsigned char *cmnd,
+  *    that we obtain it on our own. This function will *not* return until
+  *    the command either times out, or it completes.
+  */
+-static int scsi_request_sense(struct scsi_cmnd *scmd)
++static enum scsi_disposition scsi_request_sense(struct scsi_cmnd *scmd)
+ {
+ 	return scsi_send_eh_cmnd(scmd, NULL, 0, scmd->device->eh_timeout, ~0);
+ }
+ 
+-static int scsi_eh_action(struct scsi_cmnd *scmd, int rtn)
++static enum scsi_disposition
++scsi_eh_action(struct scsi_cmnd *scmd, enum scsi_disposition rtn)
+ {
+ 	if (!blk_rq_is_passthrough(scmd->request)) {
+ 		struct scsi_driver *sdrv = scsi_cmd_to_driver(scmd);
+@@ -1219,7 +1221,7 @@ int scsi_eh_get_sense(struct list_head *work_q,
+ {
+ 	struct scsi_cmnd *scmd, *next;
+ 	struct Scsi_Host *shost;
+-	int rtn;
++	enum scsi_disposition rtn;
+ 
+ 	/*
+ 	 * If SCSI_EH_ABORT_SCHEDULED has been set, it is timeout IO,
+@@ -1297,7 +1299,8 @@ EXPORT_SYMBOL_GPL(scsi_eh_get_sense);
+ static int scsi_eh_tur(struct scsi_cmnd *scmd)
+ {
+ 	static unsigned char tur_command[6] = {TEST_UNIT_READY, 0, 0, 0, 0, 0};
+-	int retry_cnt = 1, rtn;
++	int retry_cnt = 1;
++	enum scsi_disposition rtn;
+ 
+ retry_tur:
+ 	rtn = scsi_send_eh_cmnd(scmd, tur_command, 6,
+@@ -1385,7 +1388,8 @@ static int scsi_eh_try_stu(struct scsi_cmnd *scmd)
+ 	static unsigned char stu_command[6] = {START_STOP, 0, 0, 0, 1, 0};
+ 
+ 	if (scmd->device->allow_restart) {
+-		int i, rtn = NEEDS_RETRY;
++		int i;
++		enum scsi_disposition rtn = NEEDS_RETRY;
+ 
+ 		for (i = 0; rtn == NEEDS_RETRY && i < 2; i++)
+ 			rtn = scsi_send_eh_cmnd(scmd, stu_command, 6, scmd->device->request_queue->rq_timeout, 0);
+@@ -1479,7 +1483,7 @@ static int scsi_eh_bus_device_reset(struct Scsi_Host *shost,
+ {
+ 	struct scsi_cmnd *scmd, *bdr_scmd, *next;
+ 	struct scsi_device *sdev;
+-	int rtn;
++	enum scsi_disposition rtn;
+ 
+ 	shost_for_each_device(sdev, shost) {
+ 		if (scsi_host_eh_past_deadline(shost)) {
+@@ -1546,7 +1550,7 @@ static int scsi_eh_target_reset(struct Scsi_Host *shost,
+ 
+ 	while (!list_empty(&tmp_list)) {
+ 		struct scsi_cmnd *next, *scmd;
+-		int rtn;
++		enum scsi_disposition rtn;
+ 		unsigned int id;
+ 
+ 		if (scsi_host_eh_past_deadline(shost)) {
+@@ -1604,7 +1608,7 @@ static int scsi_eh_bus_reset(struct Scsi_Host *shost,
+ 	struct scsi_cmnd *scmd, *chan_scmd, *next;
+ 	LIST_HEAD(check_list);
+ 	unsigned int channel;
+-	int rtn;
++	enum scsi_disposition rtn;
+ 
+ 	/*
+ 	 * we really want to loop over the various channels, and do this on
+@@ -1675,7 +1679,7 @@ static int scsi_eh_host_reset(struct Scsi_Host *shost,
+ {
+ 	struct scsi_cmnd *scmd, *next;
+ 	LIST_HEAD(check_list);
+-	int rtn;
++	enum scsi_disposition rtn;
+ 
+ 	if (!list_empty(work_q)) {
+ 		scmd = list_entry(work_q->next,
+@@ -1781,9 +1785,9 @@ int scsi_noretry_cmd(struct scsi_cmnd *scmd)
+  *    doesn't require the error handler read (i.e. we don't need to
+  *    abort/reset), this function should return SUCCESS.
+  */
+-int scsi_decide_disposition(struct scsi_cmnd *scmd)
++enum scsi_disposition scsi_decide_disposition(struct scsi_cmnd *scmd)
+ {
+-	int rtn;
++	enum scsi_disposition rtn;
+ 
+ 	/*
+ 	 * if the device is offline, then we clearly just pass the result back
+@@ -2339,7 +2343,8 @@ scsi_ioctl_reset(struct scsi_device *dev, int __user *arg)
+ 	struct Scsi_Host *shost = dev->host;
+ 	struct request *rq;
+ 	unsigned long flags;
+-	int error = 0, rtn, val;
++	int error = 0, val;
++	enum scsi_disposition rtn;
+ 
+ 	if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO))
+ 		return -EACCES;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 99b90031500b2..14dec86ff749e 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -310,9 +310,11 @@ static void scsi_dec_host_busy(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
+ 	rcu_read_lock();
+ 	__clear_bit(SCMD_STATE_INFLIGHT, &cmd->state);
+ 	if (unlikely(scsi_host_in_recovery(shost))) {
++		unsigned int busy = scsi_host_busy(shost);
++
+ 		spin_lock_irqsave(shost->host_lock, flags);
+ 		if (shost->host_failed || shost->host_eh_scheduled)
+-			scsi_eh_wakeup(shost);
++			scsi_eh_wakeup(shost, busy);
+ 		spin_unlock_irqrestore(shost->host_lock, flags);
+ 	}
+ 	rcu_read_unlock();
+@@ -1426,7 +1428,7 @@ static bool scsi_mq_lld_busy(struct request_queue *q)
+ static void scsi_softirq_done(struct request *rq)
+ {
+ 	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
+-	int disposition;
++	enum scsi_disposition disposition;
+ 
+ 	INIT_LIST_HEAD(&cmd->eh_entry);
+ 
+diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h
+index 180636d54982d..1183dbed687c6 100644
+--- a/drivers/scsi/scsi_priv.h
++++ b/drivers/scsi/scsi_priv.h
+@@ -73,8 +73,8 @@ extern void scsi_exit_devinfo(void);
+ extern void scmd_eh_abort_handler(struct work_struct *work);
+ extern enum blk_eh_timer_return scsi_times_out(struct request *req);
+ extern int scsi_error_handler(void *host);
+-extern int scsi_decide_disposition(struct scsi_cmnd *cmd);
+-extern void scsi_eh_wakeup(struct Scsi_Host *shost);
++extern enum scsi_disposition scsi_decide_disposition(struct scsi_cmnd *cmd);
++extern void scsi_eh_wakeup(struct Scsi_Host *shost, unsigned int busy);
+ extern void scsi_eh_scmd_add(struct scsi_cmnd *);
+ void scsi_eh_ready_devs(struct Scsi_Host *shost,
+ 			struct list_head *work_q,
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 2c734ea0784b7..898658ab1dcd4 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -19,7 +19,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ #include <linux/spi/spi.h>
+-#include <linux/spi/spi-mem.h>
++#include <linux/mtd/spi-nor.h>
+ #include <linux/sysfs.h>
+ #include <linux/types.h>
+ #include "spi-bcm-qspi.h"
+@@ -1048,7 +1048,7 @@ static int bcm_qspi_exec_mem_op(struct spi_mem *mem,
+ 
+ 	/* non-aligned and very short transfers are handled by MSPI */
+ 	if (!IS_ALIGNED((uintptr_t)addr, 4) || !IS_ALIGNED((uintptr_t)buf, 4) ||
+-	    len < 4)
++	    len < 4 || op->cmd.opcode == SPINOR_OP_RDSFDP)
+ 		mspi_read = true;
+ 
+ 	if (!has_bspi(qspi) || mspi_read)
+diff --git a/drivers/spi/spi-ppc4xx.c b/drivers/spi/spi-ppc4xx.c
+index d8ee363fb7145..4200b12fc347f 100644
+--- a/drivers/spi/spi-ppc4xx.c
++++ b/drivers/spi/spi-ppc4xx.c
+@@ -166,10 +166,8 @@ static int spi_ppc4xx_setupxfer(struct spi_device *spi, struct spi_transfer *t)
+ 	int scr;
+ 	u8 cdm = 0;
+ 	u32 speed;
+-	u8 bits_per_word;
+ 
+ 	/* Start with the generic configuration for this device. */
+-	bits_per_word = spi->bits_per_word;
+ 	speed = spi->max_speed_hz;
+ 
+ 	/*
+@@ -177,9 +175,6 @@ static int spi_ppc4xx_setupxfer(struct spi_device *spi, struct spi_transfer *t)
+ 	 * the transfer to overwrite the generic configuration with zeros.
+ 	 */
+ 	if (t) {
+-		if (t->bits_per_word)
+-			bits_per_word = t->bits_per_word;
+-
+ 		if (t->speed_hz)
+ 			speed = min(t->speed_hz, spi->max_speed_hz);
+ 	}
+diff --git a/drivers/staging/iio/impedance-analyzer/ad5933.c b/drivers/staging/iio/impedance-analyzer/ad5933.c
+index dba78896ea8f2..7d91d64b26f3b 100644
+--- a/drivers/staging/iio/impedance-analyzer/ad5933.c
++++ b/drivers/staging/iio/impedance-analyzer/ad5933.c
+@@ -625,7 +625,7 @@ static void ad5933_work(struct work_struct *work)
+ 		struct ad5933_state, work.work);
+ 	struct iio_dev *indio_dev = i2c_get_clientdata(st->client);
+ 	__be16 buf[2];
+-	int val[2];
++	u16 val[2];
+ 	unsigned char status;
+ 	int ret;
+ 
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index 43f2eed6df78e..355ee338d752c 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -1026,6 +1026,7 @@ int serial8250_register_8250_port(struct uart_8250_port *up)
+ 		uart->port.throttle	= up->port.throttle;
+ 		uart->port.unthrottle	= up->port.unthrottle;
+ 		uart->port.rs485_config	= up->port.rs485_config;
++		uart->port.rs485_supported = up->port.rs485_supported;
+ 		uart->port.rs485	= up->port.rs485;
+ 		uart->rs485_start_tx	= up->rs485_start_tx;
+ 		uart->rs485_stop_tx	= up->rs485_stop_tx;
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 5c2adf14049b7..6e33c74e569f0 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -123,6 +123,7 @@ struct exar8250;
+ 
+ struct exar8250_platform {
+ 	int (*rs485_config)(struct uart_port *, struct serial_rs485 *);
++	const struct serial_rs485 *rs485_supported;
+ 	int (*register_gpio)(struct pci_dev *, struct uart_8250_port *);
+ };
+ 
+@@ -423,9 +424,14 @@ static int generic_rs485_config(struct uart_port *port,
+ 	return 0;
+ }
+ 
++static const struct serial_rs485 generic_rs485_supported = {
++	.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND,
++};
++
+ static const struct exar8250_platform exar8250_default_platform = {
+ 	.register_gpio = xr17v35x_register_gpio,
+ 	.rs485_config = generic_rs485_config,
++	.rs485_supported = &generic_rs485_supported,
+ };
+ 
+ static int iot2040_rs485_config(struct uart_port *port,
+@@ -461,6 +467,11 @@ static int iot2040_rs485_config(struct uart_port *port,
+ 	return generic_rs485_config(port, rs485);
+ }
+ 
++static const struct serial_rs485 iot2040_rs485_supported = {
++	.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND |
++		 SER_RS485_RX_DURING_TX | SER_RS485_TERMINATE_BUS,
++};
++
+ static const struct property_entry iot2040_gpio_properties[] = {
+ 	PROPERTY_ENTRY_U32("exar,first-pin", 10),
+ 	PROPERTY_ENTRY_U32("ngpios", 1),
+@@ -485,6 +496,7 @@ static int iot2040_register_gpio(struct pci_dev *pcidev,
+ 
+ static const struct exar8250_platform iot2040_platform = {
+ 	.rs485_config = iot2040_rs485_config,
++	.rs485_supported = &iot2040_rs485_supported,
+ 	.register_gpio = iot2040_register_gpio,
+ };
+ 
+@@ -522,6 +534,7 @@ pci_xr17v35x_setup(struct exar8250 *priv, struct pci_dev *pcidev,
+ 
+ 	port->port.uartclk = baud * 16;
+ 	port->port.rs485_config = platform->rs485_config;
++	port->port.rs485_supported = platform->rs485_supported;
+ 
+ 	/*
+ 	 * Setup the UART clock for the devices on expansion slot to
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 5bf8dd6198bbd..14537878f9855 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -235,6 +235,10 @@
+ #define MAX310x_REV_MASK		(0xf8)
+ #define MAX310X_WRITE_BIT		0x80
+ 
++/* Crystal-related definitions */
++#define MAX310X_XTAL_WAIT_RETRIES	20 /* Number of retries */
++#define MAX310X_XTAL_WAIT_DELAY_MS	10 /* Delay between retries */
++
+ /* MAX3107 specific */
+ #define MAX3107_REV_ID			(0xa0)
+ 
+@@ -610,12 +614,19 @@ static int max310x_set_ref_clk(struct device *dev, struct max310x_port *s,
+ 
+ 	/* Wait for crystal */
+ 	if (xtal) {
+-		unsigned int val;
+-		msleep(10);
+-		regmap_read(s->regmap, MAX310X_STS_IRQSTS_REG, &val);
+-		if (!(val & MAX310X_STS_CLKREADY_BIT)) {
++		bool stable = false;
++		unsigned int try = 0, val = 0;
++
++		do {
++			msleep(MAX310X_XTAL_WAIT_DELAY_MS);
++			regmap_read(s->regmap, MAX310X_STS_IRQSTS_REG, &val);
++
++			if (val & MAX310X_STS_CLKREADY_BIT)
++				stable = true;
++		} while (!stable && (++try < MAX310X_XTAL_WAIT_RETRIES));
++
++		if (!stable)
+ 			dev_warn(dev, "clock is not stable yet\n");
+-		}
+ 	}
+ 
+ 	return (int)bestfreq;
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index fd9be81bcfd86..31e0c5c3ddeac 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -24,6 +24,7 @@
+ #include <linux/tty_flip.h>
+ #include <linux/spi/spi.h>
+ #include <linux/uaccess.h>
++#include <linux/units.h>
+ #include <uapi/linux/sched/types.h>
+ 
+ #define SC16IS7XX_NAME			"sc16is7xx"
+@@ -1449,9 +1450,12 @@ static int sc16is7xx_spi_probe(struct spi_device *spi)
+ 
+ 	/* Setup SPI bus */
+ 	spi->bits_per_word	= 8;
+-	/* only supports mode 0 on SC16IS762 */
++	/* For all variants, only mode 0 is supported */
++	if ((spi->mode & SPI_MODE_X_MASK) != SPI_MODE_0)
++		return dev_err_probe(&spi->dev, -EINVAL, "Unsupported SPI mode\n");
++
+ 	spi->mode		= spi->mode ? : SPI_MODE_0;
+-	spi->max_speed_hz	= spi->max_speed_hz ? : 15000000;
++	spi->max_speed_hz	= spi->max_speed_hz ? : 4 * HZ_PER_MHZ;
+ 	ret = spi_setup(spi);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/tty/tty_ioctl.c b/drivers/tty/tty_ioctl.c
+index 12a30329abdb0..7ae2630cb7506 100644
+--- a/drivers/tty/tty_ioctl.c
++++ b/drivers/tty/tty_ioctl.c
+@@ -763,7 +763,7 @@ int tty_mode_ioctl(struct tty_struct *tty, struct file *file,
+ 			ret = -EFAULT;
+ 		return ret;
+ 	case TIOCSLCKTRMIOS:
+-		if (!capable(CAP_SYS_ADMIN))
++		if (!checkpoint_restore_ns_capable(&init_user_ns))
+ 			return -EPERM;
+ 		copy_termios_locked(real_tty, &kterm);
+ 		if (user_termios_to_kernel_termios(&kterm,
+@@ -780,7 +780,7 @@ int tty_mode_ioctl(struct tty_struct *tty, struct file *file,
+ 			ret = -EFAULT;
+ 		return ret;
+ 	case TIOCSLCKTRMIOS:
+-		if (!capable(CAP_SYS_ADMIN))
++		if (!checkpoint_restore_ns_capable(&init_user_ns))
+ 			return -EPERM;
+ 		copy_termios_locked(real_tty, &kterm);
+ 		if (user_termios_to_kernel_termios_1(&kterm,
+diff --git a/drivers/usb/cdns3/ep0.c b/drivers/usb/cdns3/ep0.c
+index 30d3516c7f988..4241c513b9f62 100644
+--- a/drivers/usb/cdns3/ep0.c
++++ b/drivers/usb/cdns3/ep0.c
+@@ -364,7 +364,7 @@ static int cdns3_ep0_feature_handle_endpoint(struct cdns3_device *priv_dev,
+ 	if (le16_to_cpu(ctrl->wValue) != USB_ENDPOINT_HALT)
+ 		return -EINVAL;
+ 
+-	if (!(ctrl->wIndex & ~USB_DIR_IN))
++	if (!(le16_to_cpu(ctrl->wIndex) & ~USB_DIR_IN))
+ 		return 0;
+ 
+ 	index = cdns3_ep_addr_to_index(le16_to_cpu(ctrl->wIndex));
+@@ -790,7 +790,7 @@ int cdns3_gadget_ep_set_wedge(struct usb_ep *ep)
+ 	return 0;
+ }
+ 
+-const struct usb_ep_ops cdns3_gadget_ep0_ops = {
++static const struct usb_ep_ops cdns3_gadget_ep0_ops = {
+ 	.enable = cdns3_gadget_ep0_enable,
+ 	.disable = cdns3_gadget_ep0_disable,
+ 	.alloc_request = cdns3_gadget_ep_alloc_request,
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index 210c1d6150825..8a1f0a636848b 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -1118,6 +1118,8 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 	dma_addr_t trb_dma;
+ 	u32 togle_pcs = 1;
+ 	int sg_iter = 0;
++	int num_trb_req;
++	int trb_burst;
+ 	int num_trb;
+ 	int address;
+ 	u32 control;
+@@ -1126,15 +1128,13 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 	struct scatterlist *s = NULL;
+ 	bool sg_supported = !!(request->num_mapped_sgs);
+ 
++	num_trb_req = sg_supported ? request->num_mapped_sgs : 1;
++
++	/* ISO transfer require each SOF have a TD, each TD include some TRBs */
+ 	if (priv_ep->type == USB_ENDPOINT_XFER_ISOC)
+-		num_trb = priv_ep->interval;
++		num_trb = priv_ep->interval * num_trb_req;
+ 	else
+-		num_trb = sg_supported ? request->num_mapped_sgs : 1;
+-
+-	if (num_trb > priv_ep->free_trbs) {
+-		priv_ep->flags |= EP_RING_FULL;
+-		return -ENOBUFS;
+-	}
++		num_trb = num_trb_req;
+ 
+ 	priv_req = to_cdns3_request(request);
+ 	address = priv_ep->endpoint.desc->bEndpointAddress;
+@@ -1183,14 +1183,31 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 
+ 		link_trb->control = cpu_to_le32(((priv_ep->pcs) ? TRB_CYCLE : 0) |
+ 				    TRB_TYPE(TRB_LINK) | TRB_TOGGLE | ch_bit);
++
++		if (priv_ep->type == USB_ENDPOINT_XFER_ISOC) {
++			/*
++			 * ISO require LINK TRB must be first one of TD.
++			 * Fill LINK TRBs for left trb space to simply software process logic.
++			 */
++			while (priv_ep->enqueue) {
++				*trb = *link_trb;
++				trace_cdns3_prepare_trb(priv_ep, trb);
++
++				cdns3_ep_inc_enq(priv_ep);
++				trb = priv_ep->trb_pool + priv_ep->enqueue;
++				priv_req->trb = trb;
++			}
++		}
++	}
++
++	if (num_trb > priv_ep->free_trbs) {
++		priv_ep->flags |= EP_RING_FULL;
++		return -ENOBUFS;
+ 	}
+ 
+ 	if (priv_dev->dev_ver <= DEV_VER_V2)
+ 		togle_pcs = cdns3_wa1_update_guard(priv_ep, trb);
+ 
+-	if (sg_supported)
+-		s = request->sg;
+-
+ 	/* set incorrect Cycle Bit for first trb*/
+ 	control = priv_ep->pcs ? 0 : TRB_CYCLE;
+ 	trb->length = 0;
+@@ -1200,7 +1217,7 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 		td_size = DIV_ROUND_UP(request->length,
+ 				       priv_ep->endpoint.maxpacket);
+ 		if (priv_dev->gadget.speed == USB_SPEED_SUPER)
+-			trb->length = TRB_TDL_SS_SIZE(td_size);
++			trb->length = cpu_to_le32(TRB_TDL_SS_SIZE(td_size));
+ 		else
+ 			control |= TRB_TDL_HS_SIZE(td_size);
+ 	}
+@@ -1208,6 +1225,9 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 	do {
+ 		u32 length;
+ 
++		if (!(sg_iter % num_trb_req) && sg_supported)
++			s = request->sg;
++
+ 		/* fill TRB */
+ 		control |= TRB_TYPE(TRB_NORMAL);
+ 		if (sg_supported) {
+@@ -1222,7 +1242,36 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 			total_tdl += DIV_ROUND_UP(length,
+ 					       priv_ep->endpoint.maxpacket);
+ 
+-		trb->length |= cpu_to_le32(TRB_BURST_LEN(priv_ep->trb_burst_size) |
++		trb_burst = priv_ep->trb_burst_size;
++
++		/*
++		 * Supposed DMA cross 4k bounder problem should be fixed at DEV_VER_V2, but still
++		 * met problem when do ISO transfer if sg enabled.
++		 *
++		 * Data pattern likes below when sg enabled, package size is 1k and mult is 2
++		 *       [UVC Header(8B) ] [data(3k - 8)] ...
++		 *
++		 * The received data at offset 0xd000 will get 0xc000 data, len 0x70. Error happen
++		 * as below pattern:
++		 *	0xd000: wrong
++		 *	0xe000: wrong
++		 *	0xf000: correct
++		 *	0x10000: wrong
++		 *	0x11000: wrong
++		 *	0x12000: correct
++		 *	...
++		 *
++		 * But it is still unclear about why error have not happen below 0xd000, it should
++		 * cross 4k bounder. But anyway, the below code can fix this problem.
++		 *
++		 * To avoid DMA cross 4k bounder at ISO transfer, reduce burst len according to 16.
++		 */
++		if (priv_ep->type == USB_ENDPOINT_XFER_ISOC && priv_dev->dev_ver <= DEV_VER_V2)
++			if (ALIGN_DOWN(trb->buffer, SZ_4K) !=
++			    ALIGN_DOWN(trb->buffer + length, SZ_4K))
++				trb_burst = 16;
++
++		trb->length |= cpu_to_le32(TRB_BURST_LEN(trb_burst) |
+ 					TRB_LEN(length));
+ 		pcs = priv_ep->pcs ? TRB_CYCLE : 0;
+ 
+@@ -1247,10 +1296,10 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 			priv_req->trb->control = cpu_to_le32(control);
+ 
+ 		if (sg_supported) {
+-			trb->control |= TRB_ISP;
++			trb->control |= cpu_to_le32(TRB_ISP);
+ 			/* Don't set chain bit for last TRB */
+-			if (sg_iter < num_trb - 1)
+-				trb->control |= TRB_CHAIN;
++			if ((sg_iter % num_trb_req) < num_trb_req - 1)
++				trb->control |= cpu_to_le32(TRB_CHAIN);
+ 
+ 			s = sg_next(s);
+ 		}
+@@ -1507,6 +1556,12 @@ static void cdns3_transfer_completed(struct cdns3_device *priv_dev,
+ 
+ 		/* The TRB was changed as link TRB, and the request was handled at ep_dequeue */
+ 		while (TRB_FIELD_TO_TYPE(le32_to_cpu(trb->control)) == TRB_LINK) {
++
++			/* ISO ep_traddr may stop at LINK TRB */
++			if (priv_ep->dequeue == cdns3_get_dma_pos(priv_dev, priv_ep) &&
++			    priv_ep->type == USB_ENDPOINT_XFER_ISOC)
++				break;
++
+ 			trace_cdns3_complete_trb(priv_ep, trb);
+ 			cdns3_ep_inc_deq(priv_ep);
+ 			trb = priv_ep->trb_pool + priv_ep->dequeue;
+@@ -1539,6 +1594,10 @@ static void cdns3_transfer_completed(struct cdns3_device *priv_dev,
+ 			}
+ 
+ 			if (request_handled) {
++				/* TRBs are duplicated by priv_ep->interval time for ISO IN */
++				if (priv_ep->type == USB_ENDPOINT_XFER_ISOC && priv_ep->dir)
++					request->actual /= priv_ep->interval;
++
+ 				cdns3_gadget_giveback(priv_ep, priv_req, 0);
+ 				request_handled = false;
+ 				transfer_end = false;
+@@ -2034,11 +2093,10 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 	bool is_iso_ep = (priv_ep->type == USB_ENDPOINT_XFER_ISOC);
+ 	struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
+ 	u32 bEndpointAddress = priv_ep->num | priv_ep->dir;
+-	u32 max_packet_size = 0;
+-	u8 maxburst = 0;
++	u32 max_packet_size = priv_ep->wMaxPacketSize;
++	u8 maxburst = priv_ep->bMaxBurst;
+ 	u32 ep_cfg = 0;
+ 	u8 buffering;
+-	u8 mult = 0;
+ 	int ret;
+ 
+ 	buffering = priv_dev->ep_buf_size - 1;
+@@ -2060,8 +2118,7 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 		break;
+ 	default:
+ 		ep_cfg = EP_CFG_EPTYPE(USB_ENDPOINT_XFER_ISOC);
+-		mult = priv_dev->ep_iso_burst - 1;
+-		buffering = mult + 1;
++		buffering = (priv_ep->bMaxBurst + 1) * (priv_ep->mult + 1) - 1;
+ 	}
+ 
+ 	switch (priv_dev->gadget.speed) {
+@@ -2072,17 +2129,8 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 		max_packet_size = is_iso_ep ? 1024 : 512;
+ 		break;
+ 	case USB_SPEED_SUPER:
+-		/* It's limitation that driver assumes in driver. */
+-		mult = 0;
+-		max_packet_size = 1024;
+-		if (priv_ep->type == USB_ENDPOINT_XFER_ISOC) {
+-			maxburst = priv_dev->ep_iso_burst - 1;
+-			buffering = (mult + 1) *
+-				    (maxburst + 1);
+-
+-			if (priv_ep->interval > 1)
+-				buffering++;
+-		} else {
++		if (priv_ep->type != USB_ENDPOINT_XFER_ISOC) {
++			max_packet_size = 1024;
+ 			maxburst = priv_dev->ep_buf_size - 1;
+ 		}
+ 		break;
+@@ -2111,7 +2159,6 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 	if (priv_dev->dev_ver < DEV_VER_V2)
+ 		priv_ep->trb_burst_size = 16;
+ 
+-	mult = min_t(u8, mult, EP_CFG_MULT_MAX);
+ 	buffering = min_t(u8, buffering, EP_CFG_BUFFERING_MAX);
+ 	maxburst = min_t(u8, maxburst, EP_CFG_MAXBURST_MAX);
+ 
+@@ -2145,7 +2192,7 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 	}
+ 
+ 	ep_cfg |= EP_CFG_MAXPKTSIZE(max_packet_size) |
+-		  EP_CFG_MULT(mult) |
++		  EP_CFG_MULT(priv_ep->mult) |			/* must match EP setting */
+ 		  EP_CFG_BUFFERING(buffering) |
+ 		  EP_CFG_MAXBURST(maxburst);
+ 
+@@ -2235,6 +2282,13 @@ usb_ep *cdns3_gadget_match_ep(struct usb_gadget *gadget,
+ 	priv_ep->type = usb_endpoint_type(desc);
+ 	priv_ep->flags |= EP_CLAIMED;
+ 	priv_ep->interval = desc->bInterval ? BIT(desc->bInterval - 1) : 0;
++	priv_ep->wMaxPacketSize =  usb_endpoint_maxp(desc);
++	priv_ep->mult = USB_EP_MAXP_MULT(priv_ep->wMaxPacketSize);
++	priv_ep->wMaxPacketSize &= USB_ENDPOINT_MAXP_MASK;
++	if (priv_ep->type == USB_ENDPOINT_XFER_ISOC && comp_desc) {
++		priv_ep->mult =  USB_SS_MULT(comp_desc->bmAttributes) - 1;
++		priv_ep->bMaxBurst = comp_desc->bMaxBurst;
++	}
+ 
+ 	spin_unlock_irqrestore(&priv_dev->lock, flags);
+ 	return &priv_ep->endpoint;
+@@ -3001,23 +3055,43 @@ static int cdns3_gadget_udc_stop(struct usb_gadget *gadget)
+ static int cdns3_gadget_check_config(struct usb_gadget *gadget)
+ {
+ 	struct cdns3_device *priv_dev = gadget_to_cdns3_device(gadget);
++	struct cdns3_endpoint *priv_ep;
+ 	struct usb_ep *ep;
+ 	int n_in = 0;
++	int iso = 0;
++	int out = 1;
+ 	int total;
++	int n;
+ 
+ 	list_for_each_entry(ep, &gadget->ep_list, ep_list) {
+-		if (ep->claimed && (ep->address & USB_DIR_IN))
+-			n_in++;
++		priv_ep = ep_to_cdns3_ep(ep);
++		if (!(priv_ep->flags & EP_CLAIMED))
++			continue;
++
++		n = (priv_ep->mult + 1) * (priv_ep->bMaxBurst + 1);
++		if (ep->address & USB_DIR_IN) {
++			/*
++			 * ISO transfer: DMA start move data when get ISO, only transfer
++			 * data as min(TD size, iso). No benefit for allocate bigger
++			 * internal memory than 'iso'.
++			 */
++			if (priv_ep->type == USB_ENDPOINT_XFER_ISOC)
++				iso += n;
++			else
++				n_in++;
++		} else {
++			if (priv_ep->type == USB_ENDPOINT_XFER_ISOC)
++				out = max_t(int, out, n);
++		}
+ 	}
+ 
+ 	/* 2KB are reserved for EP0, 1KB for out*/
+-	total = 2 + n_in + 1;
++	total = 2 + n_in + out + iso;
+ 
+ 	if (total > priv_dev->onchip_buffers)
+ 		return -ENOMEM;
+ 
+-	priv_dev->ep_buf_size = priv_dev->ep_iso_burst =
+-			(priv_dev->onchip_buffers - 2) / (n_in + 1);
++	priv_dev->ep_buf_size = (priv_dev->onchip_buffers - 2 - iso) / (n_in + out);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/cdns3/gadget.h b/drivers/usb/cdns3/gadget.h
+index 32825477edd3e..aeb2211228c13 100644
+--- a/drivers/usb/cdns3/gadget.h
++++ b/drivers/usb/cdns3/gadget.h
+@@ -1167,6 +1167,9 @@ struct cdns3_endpoint {
+ 	u8			dir;
+ 	u8			num;
+ 	u8			type;
++	u8			mult;
++	u8			bMaxBurst;
++	u16			wMaxPacketSize;
+ 	int			interval;
+ 
+ 	int			free_trbs;
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 331f41c6cc75e..91b974aa59bff 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -46,8 +46,8 @@
+ #define USB_VENDOR_TEXAS_INSTRUMENTS		0x0451
+ #define USB_PRODUCT_TUSB8041_USB3		0x8140
+ #define USB_PRODUCT_TUSB8041_USB2		0x8142
+-#define HUB_QUIRK_CHECK_PORT_AUTOSUSPEND	0x01
+-#define HUB_QUIRK_DISABLE_AUTOSUSPEND		0x02
++#define HUB_QUIRK_CHECK_PORT_AUTOSUSPEND	BIT(0)
++#define HUB_QUIRK_DISABLE_AUTOSUSPEND		BIT(1)
+ 
+ #define USB_TP_TRANSMISSION_DELAY	40	/* ns */
+ #define USB_TP_TRANSMISSION_DELAY_MAX	65535	/* ns */
+@@ -2367,17 +2367,25 @@ static int usb_enumerate_device_otg(struct usb_device *udev)
+ 			}
+ 		} else if (desc->bLength == sizeof
+ 				(struct usb_otg_descriptor)) {
+-			/* Set a_alt_hnp_support for legacy otg device */
+-			err = usb_control_msg(udev,
+-				usb_sndctrlpipe(udev, 0),
+-				USB_REQ_SET_FEATURE, 0,
+-				USB_DEVICE_A_ALT_HNP_SUPPORT,
+-				0, NULL, 0,
+-				USB_CTRL_SET_TIMEOUT);
+-			if (err < 0)
+-				dev_err(&udev->dev,
+-					"set a_alt_hnp_support failed: %d\n",
+-					err);
++			/*
++			 * We are operating on a legacy OTP device
++			 * These should be told that they are operating
++			 * on the wrong port if we have another port that does
++			 * support HNP
++			 */
++			if (bus->otg_port != 0) {
++				/* Set a_alt_hnp_support for legacy otg device */
++				err = usb_control_msg(udev,
++					usb_sndctrlpipe(udev, 0),
++					USB_REQ_SET_FEATURE, 0,
++					USB_DEVICE_A_ALT_HNP_SUPPORT,
++					0, NULL, 0,
++					USB_CTRL_SET_TIMEOUT);
++				if (err < 0)
++					dev_err(&udev->dev,
++						"set a_alt_hnp_support failed: %d\n",
++						err);
++			}
+ 		}
+ 	}
+ #endif
+diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c
+index 553547f12fd20..d20ca59749074 100644
+--- a/drivers/usb/gadget/function/f_mass_storage.c
++++ b/drivers/usb/gadget/function/f_mass_storage.c
+@@ -575,21 +575,37 @@ static int start_transfer(struct fsg_dev *fsg, struct usb_ep *ep,
+ 
+ static bool start_in_transfer(struct fsg_common *common, struct fsg_buffhd *bh)
+ {
++	int rc;
++
+ 	if (!fsg_is_set(common))
+ 		return false;
+ 	bh->state = BUF_STATE_SENDING;
+-	if (start_transfer(common->fsg, common->fsg->bulk_in, bh->inreq))
++	rc = start_transfer(common->fsg, common->fsg->bulk_in, bh->inreq);
++	if (rc) {
+ 		bh->state = BUF_STATE_EMPTY;
++		if (rc == -ESHUTDOWN) {
++			common->running = 0;
++			return false;
++		}
++	}
+ 	return true;
+ }
+ 
+ static bool start_out_transfer(struct fsg_common *common, struct fsg_buffhd *bh)
+ {
++	int rc;
++
+ 	if (!fsg_is_set(common))
+ 		return false;
+ 	bh->state = BUF_STATE_RECEIVING;
+-	if (start_transfer(common->fsg, common->fsg->bulk_out, bh->outreq))
++	rc = start_transfer(common->fsg, common->fsg->bulk_out, bh->outreq);
++	if (rc) {
+ 		bh->state = BUF_STATE_FULL;
++		if (rc == -ESHUTDOWN) {
++			common->running = 0;
++			return false;
++		}
++	}
+ 	return true;
+ }
+ 
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index e56a1fb9715a7..83c7dffa945c3 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -323,6 +323,9 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ 		if (device_property_read_bool(tmpdev, "quirk-broken-port-ped"))
+ 			xhci->quirks |= XHCI_BROKEN_PORT_PED;
+ 
++		if (device_property_read_bool(tmpdev, "xhci-sg-trb-cache-size-quirk"))
++			xhci->quirks |= XHCI_SG_TRB_CACHE_SIZE_QUIRK;
++
+ 		device_property_read_u32(tmpdev, "imod-interval-ns",
+ 					 &xhci->imod_interval);
+ 	}
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 045e24174e1ae..d161b64416a48 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -150,6 +150,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x85F8) }, /* Virtenio Preon32 */
+ 	{ USB_DEVICE(0x10C4, 0x8664) }, /* AC-Services CAN-IF */
+ 	{ USB_DEVICE(0x10C4, 0x8665) }, /* AC-Services OBD-IF */
++	{ USB_DEVICE(0x10C4, 0x87ED) }, /* IMST USB-Stick for Smart Meter */
+ 	{ USB_DEVICE(0x10C4, 0x8856) },	/* CEL EM357 ZigBee USB Stick - LR */
+ 	{ USB_DEVICE(0x10C4, 0x8857) },	/* CEL EM357 ZigBee USB Stick */
+ 	{ USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 6be7358ca1aff..43e8cb17b4c7a 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -2269,6 +2269,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0111, 0xff) },			/* Fibocom FM160 (MBIM mode) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) },			/* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a2, 0xff) },			/* Fibocom FM101-GL (laptop MBIM) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a3, 0xff) },			/* Fibocom FM101-GL (laptop MBIM) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a4, 0xff),			/* Fibocom FM101-GL (laptop MBIM) */
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) },			/* LongSung M5710 */
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index b1e844bf31f81..703a9c5635573 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -184,6 +184,8 @@ static const struct usb_device_id id_table[] = {
+ 	{DEVICE_SWI(0x413c, 0x81d0)},   /* Dell Wireless 5819 */
+ 	{DEVICE_SWI(0x413c, 0x81d1)},   /* Dell Wireless 5818 */
+ 	{DEVICE_SWI(0x413c, 0x81d2)},   /* Dell Wireless 5818 */
++	{DEVICE_SWI(0x413c, 0x8217)},	/* Dell Wireless DW5826e */
++	{DEVICE_SWI(0x413c, 0x8218)},	/* Dell Wireless DW5826e QDL */
+ 
+ 	/* Huawei devices */
+ 	{DEVICE_HWI(0x03f0, 0x581d)},	/* HP lt4112 LTE/HSPA+ Gobi 4G Modem (Huawei me906e) */
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index 04976435ad736..0c88d5bf09cae 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -70,9 +70,13 @@ static int ucsi_acpi_sync_write(struct ucsi *ucsi, unsigned int offset,
+ 				const void *val, size_t val_len)
+ {
+ 	struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
++	bool ack = UCSI_COMMAND(*(u64 *)val) == UCSI_ACK_CC_CI;
+ 	int ret;
+ 
+-	set_bit(COMMAND_PENDING, &ua->flags);
++	if (ack)
++		set_bit(ACK_PENDING, &ua->flags);
++	else
++		set_bit(COMMAND_PENDING, &ua->flags);
+ 
+ 	ret = ucsi_acpi_async_write(ucsi, offset, val, val_len);
+ 	if (ret)
+@@ -82,7 +86,10 @@ static int ucsi_acpi_sync_write(struct ucsi *ucsi, unsigned int offset,
+ 		ret = -ETIMEDOUT;
+ 
+ out_clear_bit:
+-	clear_bit(COMMAND_PENDING, &ua->flags);
++	if (ack)
++		clear_bit(ACK_PENDING, &ua->flags);
++	else
++		clear_bit(COMMAND_PENDING, &ua->flags);
+ 
+ 	return ret;
+ }
+@@ -106,8 +113,10 @@ static void ucsi_acpi_notify(acpi_handle handle, u32 event, void *data)
+ 	if (UCSI_CCI_CONNECTOR(cci))
+ 		ucsi_connector_change(ua->ucsi, UCSI_CCI_CONNECTOR(cci));
+ 
+-	if (test_bit(COMMAND_PENDING, &ua->flags) &&
+-	    cci & (UCSI_CCI_ACK_COMPLETE | UCSI_CCI_COMMAND_COMPLETE))
++	if (cci & UCSI_CCI_ACK_COMPLETE && test_bit(ACK_PENDING, &ua->flags))
++		complete(&ua->complete);
++	if (cci & UCSI_CCI_COMMAND_COMPLETE &&
++	    test_bit(COMMAND_PENDING, &ua->flags))
+ 		complete(&ua->complete);
+ }
+ 
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index de110363af521..ab67160f72841 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -2577,12 +2577,11 @@ EXPORT_SYMBOL_GPL(vhost_disable_notify);
+ /* Create a new message. */
+ struct vhost_msg_node *vhost_new_msg(struct vhost_virtqueue *vq, int type)
+ {
+-	struct vhost_msg_node *node = kmalloc(sizeof *node, GFP_KERNEL);
++	/* Make sure all padding within the structure is initialized. */
++	struct vhost_msg_node *node = kzalloc(sizeof(*node), GFP_KERNEL);
+ 	if (!node)
+ 		return NULL;
+ 
+-	/* Make sure all padding within the structure is initialized. */
+-	memset(&node->msg, 0, sizeof node->msg);
+ 	node->vq = vq;
+ 	node->msg.type = type;
+ 	return node;
+diff --git a/drivers/watchdog/it87_wdt.c b/drivers/watchdog/it87_wdt.c
+index 2b48318421627..6340ca058f890 100644
+--- a/drivers/watchdog/it87_wdt.c
++++ b/drivers/watchdog/it87_wdt.c
+@@ -263,6 +263,7 @@ static struct watchdog_device wdt_dev = {
+ static int __init it87_wdt_init(void)
+ {
+ 	u8  chip_rev;
++	u8 ctrl;
+ 	int rc;
+ 
+ 	rc = superio_enter();
+@@ -321,7 +322,18 @@ static int __init it87_wdt_init(void)
+ 
+ 	superio_select(GPIO);
+ 	superio_outb(WDT_TOV1, WDTCFG);
+-	superio_outb(0x00, WDTCTRL);
++
++	switch (chip_type) {
++	case IT8784_ID:
++	case IT8786_ID:
++		ctrl = superio_inb(WDTCTRL);
++		ctrl &= 0x08;
++		superio_outb(ctrl, WDTCTRL);
++		break;
++	default:
++		superio_outb(0x00, WDTCTRL);
++	}
++
+ 	superio_exit();
+ 
+ 	if (timeout < 1 || timeout > max_units * 60) {
+diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
+index 4c13cbc99896a..398ea69c176c1 100644
+--- a/drivers/xen/gntdev-dmabuf.c
++++ b/drivers/xen/gntdev-dmabuf.c
+@@ -11,6 +11,7 @@
+ #include <linux/kernel.h>
+ #include <linux/errno.h>
+ #include <linux/dma-buf.h>
++#include <linux/dma-direct.h>
+ #include <linux/slab.h>
+ #include <linux/types.h>
+ #include <linux/uaccess.h>
+@@ -56,7 +57,7 @@ struct gntdev_dmabuf {
+ 
+ 	/* Number of pages this buffer has. */
+ 	int nr_pages;
+-	/* Pages of this buffer. */
++	/* Pages of this buffer (only for dma-buf export). */
+ 	struct page **pages;
+ };
+ 
+@@ -490,7 +491,7 @@ static int dmabuf_exp_from_refs(struct gntdev_priv *priv, int flags,
+ /* DMA buffer import support. */
+ 
+ static int
+-dmabuf_imp_grant_foreign_access(struct page **pages, u32 *refs,
++dmabuf_imp_grant_foreign_access(unsigned long *gfns, u32 *refs,
+ 				int count, int domid)
+ {
+ 	grant_ref_t priv_gref_head;
+@@ -513,7 +514,7 @@ dmabuf_imp_grant_foreign_access(struct page **pages, u32 *refs,
+ 		}
+ 
+ 		gnttab_grant_foreign_access_ref(cur_ref, domid,
+-						xen_page_to_gfn(pages[i]), 0);
++						gfns[i], 0);
+ 		refs[i] = cur_ref;
+ 	}
+ 
+@@ -535,7 +536,6 @@ static void dmabuf_imp_end_foreign_access(u32 *refs, int count)
+ 
+ static void dmabuf_imp_free_storage(struct gntdev_dmabuf *gntdev_dmabuf)
+ {
+-	kfree(gntdev_dmabuf->pages);
+ 	kfree(gntdev_dmabuf->u.imp.refs);
+ 	kfree(gntdev_dmabuf);
+ }
+@@ -555,12 +555,6 @@ static struct gntdev_dmabuf *dmabuf_imp_alloc_storage(int count)
+ 	if (!gntdev_dmabuf->u.imp.refs)
+ 		goto fail;
+ 
+-	gntdev_dmabuf->pages = kcalloc(count,
+-				       sizeof(gntdev_dmabuf->pages[0]),
+-				       GFP_KERNEL);
+-	if (!gntdev_dmabuf->pages)
+-		goto fail;
+-
+ 	gntdev_dmabuf->nr_pages = count;
+ 
+ 	for (i = 0; i < count; i++)
+@@ -582,7 +576,8 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
+ 	struct dma_buf *dma_buf;
+ 	struct dma_buf_attachment *attach;
+ 	struct sg_table *sgt;
+-	struct sg_page_iter sg_iter;
++	struct sg_dma_page_iter sg_iter;
++	unsigned long *gfns;
+ 	int i;
+ 
+ 	dma_buf = dma_buf_get(fd);
+@@ -630,26 +625,31 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
+ 
+ 	gntdev_dmabuf->u.imp.sgt = sgt;
+ 
+-	/* Now convert sgt to array of pages and check for page validity. */
++	gfns = kcalloc(count, sizeof(*gfns), GFP_KERNEL);
++	if (!gfns) {
++		ret = ERR_PTR(-ENOMEM);
++		goto fail_unmap;
++	}
++
++	/*
++	 * Now convert sgt to array of gfns without accessing underlying pages.
++	 * It is not allowed to access the underlying struct page of an sg table
++	 * exported by DMA-buf, but since we deal with special Xen dma device here
++	 * (not a normal physical one) look at the dma addresses in the sg table
++	 * and then calculate gfns directly from them.
++	 */
+ 	i = 0;
+-	for_each_sgtable_page(sgt, &sg_iter, 0) {
+-		struct page *page = sg_page_iter_page(&sg_iter);
+-		/*
+-		 * Check if page is valid: this can happen if we are given
+-		 * a page from VRAM or other resources which are not backed
+-		 * by a struct page.
+-		 */
+-		if (!pfn_valid(page_to_pfn(page))) {
+-			ret = ERR_PTR(-EINVAL);
+-			goto fail_unmap;
+-		}
++	for_each_sgtable_dma_page(sgt, &sg_iter, 0) {
++		dma_addr_t addr = sg_page_iter_dma_address(&sg_iter);
++		unsigned long pfn = bfn_to_pfn(XEN_PFN_DOWN(dma_to_phys(dev, addr)));
+ 
+-		gntdev_dmabuf->pages[i++] = page;
++		gfns[i++] = pfn_to_gfn(pfn);
+ 	}
+ 
+-	ret = ERR_PTR(dmabuf_imp_grant_foreign_access(gntdev_dmabuf->pages,
++	ret = ERR_PTR(dmabuf_imp_grant_foreign_access(gfns,
+ 						      gntdev_dmabuf->u.imp.refs,
+ 						      count, domid));
++	kfree(gfns);
+ 	if (IS_ERR(ret))
+ 		goto fail_end_access;
+ 
+diff --git a/fs/afs/callback.c b/fs/afs/callback.c
+index 7d9b23d981bf1..229308c7f7449 100644
+--- a/fs/afs/callback.c
++++ b/fs/afs/callback.c
+@@ -70,13 +70,14 @@ static struct afs_volume *afs_lookup_volume_rcu(struct afs_cell *cell,
+ {
+ 	struct afs_volume *volume = NULL;
+ 	struct rb_node *p;
+-	int seq = 0;
++	int seq = 1;
+ 
+ 	do {
+ 		/* Unfortunately, rbtree walking doesn't give reliable results
+ 		 * under just the RCU read lock, so we have to check for
+ 		 * changes.
+ 		 */
++		seq++; /* 2 on the 1st/lockless path, otherwise odd */
+ 		read_seqbegin_or_lock(&cell->volume_lock, &seq);
+ 
+ 		p = rcu_dereference_raw(cell->volumes.rb_node);
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index a59d6293a32b2..0b927736ca728 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -418,6 +418,14 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode,
+ 			continue;
+ 		}
+ 
++		/* Don't expose silly rename entries to userspace. */
++		if (nlen > 6 &&
++		    dire->u.name[0] == '.' &&
++		    ctx->actor != afs_lookup_filldir &&
++		    ctx->actor != afs_lookup_one_filldir &&
++		    memcmp(dire->u.name, ".__afs", 6) == 0)
++			continue;
++
+ 		/* found the next entry */
+ 		if (!dir_emit(ctx, dire->u.name, nlen,
+ 			      ntohl(dire->u.vnode),
+diff --git a/fs/afs/server.c b/fs/afs/server.c
+index 684a2b02b9ff7..733e3c470f7e3 100644
+--- a/fs/afs/server.c
++++ b/fs/afs/server.c
+@@ -27,7 +27,7 @@ struct afs_server *afs_find_server(struct afs_net *net,
+ 	const struct afs_addr_list *alist;
+ 	struct afs_server *server = NULL;
+ 	unsigned int i;
+-	int seq = 0, diff;
++	int seq = 1, diff;
+ 
+ 	rcu_read_lock();
+ 
+@@ -35,6 +35,7 @@ struct afs_server *afs_find_server(struct afs_net *net,
+ 		if (server)
+ 			afs_unuse_server_notime(net, server, afs_server_trace_put_find_rsq);
+ 		server = NULL;
++		seq++; /* 2 on the 1st/lockless path, otherwise odd */
+ 		read_seqbegin_or_lock(&net->fs_addr_lock, &seq);
+ 
+ 		if (srx->transport.family == AF_INET6) {
+@@ -90,7 +91,7 @@ struct afs_server *afs_find_server_by_uuid(struct afs_net *net, const uuid_t *uu
+ {
+ 	struct afs_server *server = NULL;
+ 	struct rb_node *p;
+-	int diff, seq = 0;
++	int diff, seq = 1;
+ 
+ 	_enter("%pU", uuid);
+ 
+@@ -102,7 +103,7 @@ struct afs_server *afs_find_server_by_uuid(struct afs_net *net, const uuid_t *uu
+ 		if (server)
+ 			afs_unuse_server(net, server, afs_server_trace_put_uuid_rsq);
+ 		server = NULL;
+-
++		seq++; /* 2 on the 1st/lockless path, otherwise odd */
+ 		read_seqbegin_or_lock(&net->fs_lock, &seq);
+ 
+ 		p = net->fs_servers.rb_node;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 0e25a3f64b2e0..019f0925fa73c 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1553,8 +1553,17 @@ static struct btrfs_root *btrfs_get_root_ref(struct btrfs_fs_info *fs_info,
+ again:
+ 	root = btrfs_lookup_fs_root(fs_info, objectid);
+ 	if (root) {
+-		/* Shouldn't get preallocated anon_dev for cached roots */
+-		ASSERT(!anon_dev);
++		/*
++		 * Some other caller may have read out the newly inserted
++		 * subvolume already (for things like backref walk etc).  Not
++		 * that common but still possible.  In that case, we just need
++		 * to free the anon_dev.
++		 */
++		if (unlikely(anon_dev)) {
++			free_anon_bdev(anon_dev);
++			anon_dev = 0;
++		}
++
+ 		if (check_ref && btrfs_root_refs(&root->root_item) == 0) {
+ 			btrfs_put_root(root);
+ 			return ERR_PTR(-ENOENT);
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 8f62e171053ba..3ba43a40032cd 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -1202,7 +1202,8 @@ static int btrfs_issue_discard(struct block_device *bdev, u64 start, u64 len,
+ 	u64 bytes_left, end;
+ 	u64 aligned_start = ALIGN(start, 1 << 9);
+ 
+-	if (WARN_ON(start != aligned_start)) {
++	/* Adjust the range to be aligned to 512B sectors if necessary. */
++	if (start != aligned_start) {
+ 		len -= aligned_start - start;
+ 		len = round_down(len, 1 << 9);
+ 		start = aligned_start;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index c900a39666e38..250b6064876de 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4007,7 +4007,8 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ 	struct btrfs_block_rsv block_rsv;
+ 	u64 root_flags;
+ 	int ret;
+-	int err;
++
++	down_write(&fs_info->subvol_sem);
+ 
+ 	/*
+ 	 * Don't allow to delete a subvolume with send in progress. This is
+@@ -4020,25 +4021,25 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ 		btrfs_warn(fs_info,
+ 			   "attempt to delete subvolume %llu during send",
+ 			   dest->root_key.objectid);
+-		return -EPERM;
++		ret = -EPERM;
++		goto out_up_write;
+ 	}
+ 	if (atomic_read(&dest->nr_swapfiles)) {
+ 		spin_unlock(&dest->root_item_lock);
+ 		btrfs_warn(fs_info,
+ 			   "attempt to delete subvolume %llu with active swapfile",
+ 			   root->root_key.objectid);
+-		return -EPERM;
++		ret = -EPERM;
++		goto out_up_write;
+ 	}
+ 	root_flags = btrfs_root_flags(&dest->root_item);
+ 	btrfs_set_root_flags(&dest->root_item,
+ 			     root_flags | BTRFS_ROOT_SUBVOL_DEAD);
+ 	spin_unlock(&dest->root_item_lock);
+ 
+-	down_write(&fs_info->subvol_sem);
+-
+-	err = may_destroy_subvol(dest);
+-	if (err)
+-		goto out_up_write;
++	ret = may_destroy_subvol(dest);
++	if (ret)
++		goto out_undead;
+ 
+ 	btrfs_init_block_rsv(&block_rsv, BTRFS_BLOCK_RSV_TEMP);
+ 	/*
+@@ -4046,13 +4047,13 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ 	 * two for dir entries,
+ 	 * two for root ref/backref.
+ 	 */
+-	err = btrfs_subvolume_reserve_metadata(root, &block_rsv, 5, true);
+-	if (err)
+-		goto out_up_write;
++	ret = btrfs_subvolume_reserve_metadata(root, &block_rsv, 5, true);
++	if (ret)
++		goto out_undead;
+ 
+ 	trans = btrfs_start_transaction(root, 0);
+ 	if (IS_ERR(trans)) {
+-		err = PTR_ERR(trans);
++		ret = PTR_ERR(trans);
+ 		goto out_release;
+ 	}
+ 	trans->block_rsv = &block_rsv;
+@@ -4062,7 +4063,6 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ 
+ 	ret = btrfs_unlink_subvol(trans, dir, dentry);
+ 	if (ret) {
+-		err = ret;
+ 		btrfs_abort_transaction(trans, ret);
+ 		goto out_end_trans;
+ 	}
+@@ -4080,7 +4080,6 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ 					dest->root_key.objectid);
+ 		if (ret) {
+ 			btrfs_abort_transaction(trans, ret);
+-			err = ret;
+ 			goto out_end_trans;
+ 		}
+ 	}
+@@ -4090,7 +4089,6 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ 				  dest->root_key.objectid);
+ 	if (ret && ret != -ENOENT) {
+ 		btrfs_abort_transaction(trans, ret);
+-		err = ret;
+ 		goto out_end_trans;
+ 	}
+ 	if (!btrfs_is_empty_uuid(dest->root_item.received_uuid)) {
+@@ -4100,7 +4098,6 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ 					  dest->root_key.objectid);
+ 		if (ret && ret != -ENOENT) {
+ 			btrfs_abort_transaction(trans, ret);
+-			err = ret;
+ 			goto out_end_trans;
+ 		}
+ 	}
+@@ -4111,20 +4108,20 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ 	trans->block_rsv = NULL;
+ 	trans->bytes_reserved = 0;
+ 	ret = btrfs_end_transaction(trans);
+-	if (ret && !err)
+-		err = ret;
+ 	inode->i_flags |= S_DEAD;
+ out_release:
+ 	btrfs_subvolume_release_metadata(root, &block_rsv);
+-out_up_write:
+-	up_write(&fs_info->subvol_sem);
+-	if (err) {
++out_undead:
++	if (ret) {
+ 		spin_lock(&dest->root_item_lock);
+ 		root_flags = btrfs_root_flags(&dest->root_item);
+ 		btrfs_set_root_flags(&dest->root_item,
+ 				root_flags & ~BTRFS_ROOT_SUBVOL_DEAD);
+ 		spin_unlock(&dest->root_item_lock);
+-	} else {
++	}
++out_up_write:
++	up_write(&fs_info->subvol_sem);
++	if (!ret) {
+ 		d_invalidate(dentry);
+ 		btrfs_prune_dentries(dest);
+ 		ASSERT(dest->send_in_progress == 0);
+@@ -4136,7 +4133,7 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ 		}
+ 	}
+ 
+-	return err;
++	return ret;
+ }
+ 
+ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index f06824bea4686..049b837934e5d 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -798,6 +798,9 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 	struct btrfs_trans_handle *trans;
+ 	int ret;
+ 
++	if (btrfs_root_refs(&root->root_item) == 0)
++		return -ENOENT;
++
+ 	if (!test_bit(BTRFS_ROOT_SHAREABLE, &root->state))
+ 		return -EINVAL;
+ 
+@@ -3190,6 +3193,10 @@ static int btrfs_ioctl_defrag(struct file *file, void __user *argp)
+ 				kfree(range);
+ 				goto out;
+ 			}
++			if (range->flags & ~BTRFS_DEFRAG_RANGE_FLAGS_SUPP) {
++				ret = -EOPNOTSUPP;
++				goto out;
++			}
+ 			/* compression requires us to start the IO */
+ 			if ((range->flags & BTRFS_DEFRAG_RANGE_COMPRESS)) {
+ 				range->flags |= BTRFS_DEFRAG_RANGE_START_IO;
+@@ -4318,6 +4325,11 @@ static long btrfs_ioctl_qgroup_create(struct file *file, void __user *arg)
+ 		goto out;
+ 	}
+ 
++	if (sa->create && is_fstree(sa->qgroupid)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	trans = btrfs_join_transaction(root);
+ 	if (IS_ERR(trans)) {
+ 		ret = PTR_ERR(trans);
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index a67323c2d41f7..7f849310303b1 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1608,6 +1608,15 @@ int btrfs_create_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ 	return ret;
+ }
+ 
++static bool qgroup_has_usage(struct btrfs_qgroup *qgroup)
++{
++	return (qgroup->rfer > 0 || qgroup->rfer_cmpr > 0 ||
++		qgroup->excl > 0 || qgroup->excl_cmpr > 0 ||
++		qgroup->rsv.values[BTRFS_QGROUP_RSV_DATA] > 0 ||
++		qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC] > 0 ||
++		qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS] > 0);
++}
++
+ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+@@ -1627,6 +1636,11 @@ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ 		goto out;
+ 	}
+ 
++	if (is_fstree(qgroupid) && qgroup_has_usage(qgroup)) {
++		ret = -EBUSY;
++		goto out;
++	}
++
+ 	/* Check if there are no children of this qgroup */
+ 	if (!list_empty(&qgroup->members)) {
+ 		ret = -EBUSY;
+diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c
+index bd3bb94cc56bd..c3711598a9be5 100644
+--- a/fs/btrfs/ref-verify.c
++++ b/fs/btrfs/ref-verify.c
+@@ -899,8 +899,10 @@ int btrfs_ref_tree_mod(struct btrfs_fs_info *fs_info,
+ out_unlock:
+ 	spin_unlock(&fs_info->ref_verify_lock);
+ out:
+-	if (ret)
++	if (ret) {
++		btrfs_free_ref_cache(fs_info);
+ 		btrfs_clear_opt(fs_info->mount_opt, REF_VERIFY);
++	}
+ 	return ret;
+ }
+ 
+@@ -1029,8 +1031,8 @@ int btrfs_build_ref_tree(struct btrfs_fs_info *fs_info)
+ 		}
+ 	}
+ 	if (ret) {
+-		btrfs_clear_opt(fs_info->mount_opt, REF_VERIFY);
+ 		btrfs_free_ref_cache(fs_info);
++		btrfs_clear_opt(fs_info->mount_opt, REF_VERIFY);
+ 	}
+ 	btrfs_free_path(path);
+ 	return ret;
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index af9701afcab77..0b04adfd4a4a4 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -7285,7 +7285,7 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 	}
+ 
+ 	if (arg->flags & ~BTRFS_SEND_FLAG_MASK) {
+-		ret = -EINVAL;
++		ret = -EOPNOTSUPP;
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 32f1b15b25dcc..c0eda3816f685 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -1334,7 +1334,7 @@ static int check_extent_item(struct extent_buffer *leaf,
+ 		if (ptr + btrfs_extent_inline_ref_size(inline_type) > end) {
+ 			extent_err(leaf, slot,
+ "inline ref item overflows extent item, ptr %lu iref size %u end %lu",
+-				   ptr, inline_type, end);
++				   ptr, btrfs_extent_inline_ref_size(inline_type), end);
+ 			return -EUCLEAN;
+ 		}
+ 
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 432dc2a16e282..8e43d07ffa8bd 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1402,7 +1402,7 @@ static void __prep_cap(struct cap_msg_args *arg, struct ceph_cap *cap,
+ 	if (flushing & CEPH_CAP_XATTR_EXCL) {
+ 		arg->old_xattr_buf = __ceph_build_xattrs_blob(ci);
+ 		arg->xattr_version = ci->i_xattrs.version;
+-		arg->xattr_buf = ci->i_xattrs.blob;
++		arg->xattr_buf = ceph_buffer_get(ci->i_xattrs.blob);
+ 	} else {
+ 		arg->xattr_buf = NULL;
+ 		arg->old_xattr_buf = NULL;
+@@ -1468,6 +1468,7 @@ static void __send_cap(struct cap_msg_args *arg, struct ceph_inode_info *ci)
+ 	encode_cap_msg(msg, arg);
+ 	ceph_con_send(&arg->session->s_con, msg);
+ 	ceph_buffer_put(arg->old_xattr_buf);
++	ceph_buffer_put(arg->xattr_buf);
+ 	if (arg->wake)
+ 		wake_up_all(&ci->i_cap_wq);
+ }
+@@ -4598,12 +4599,14 @@ int ceph_encode_dentry_release(void **p, struct dentry *dentry,
+ 			       struct inode *dir,
+ 			       int mds, int drop, int unless)
+ {
+-	struct dentry *parent = NULL;
+ 	struct ceph_mds_request_release *rel = *p;
+ 	struct ceph_dentry_info *di = ceph_dentry(dentry);
+ 	int force = 0;
+ 	int ret;
+ 
++	/* This shouldn't happen */
++	BUG_ON(!dir);
++
+ 	/*
+ 	 * force an record for the directory caps if we have a dentry lease.
+ 	 * this is racy (can't take i_ceph_lock and d_lock together), but it
+@@ -4613,14 +4616,9 @@ int ceph_encode_dentry_release(void **p, struct dentry *dentry,
+ 	spin_lock(&dentry->d_lock);
+ 	if (di->lease_session && di->lease_session->s_mds == mds)
+ 		force = 1;
+-	if (!dir) {
+-		parent = dget(dentry->d_parent);
+-		dir = d_inode(parent);
+-	}
+ 	spin_unlock(&dentry->d_lock);
+ 
+ 	ret = ceph_encode_inode_release(p, dir, mds, drop, unless, force);
+-	dput(parent);
+ 
+ 	spin_lock(&dentry->d_lock);
+ 	if (ret && di->lease_session && di->lease_session->s_mds == mds) {
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index b98bba887f84b..660e00eb42060 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -117,7 +117,7 @@ static __u32 get_neg_ctxt_len(struct smb2_sync_hdr *hdr, __u32 len,
+ 	} else if (nc_offset + 1 == non_ctxlen) {
+ 		cifs_dbg(FYI, "no SPNEGO security blob in negprot rsp\n");
+ 		size_of_pad_before_neg_ctxts = 0;
+-	} else if (non_ctxlen == SMB311_NEGPROT_BASE_SIZE)
++	} else if (non_ctxlen == SMB311_NEGPROT_BASE_SIZE + 1)
+ 		/* has padding, but no SPNEGO blob */
+ 		size_of_pad_before_neg_ctxts = nc_offset - non_ctxlen + 1;
+ 	else
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 26edaeb4245d8..84850a55c8b7e 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -5561,7 +5561,7 @@ struct smb_version_values smb20_values = {
+ 	.header_size = sizeof(struct smb2_sync_hdr),
+ 	.header_preamble_size = 0,
+ 	.max_header_size = MAX_SMB2_HDR_SIZE,
+-	.read_rsp_size = sizeof(struct smb2_read_rsp) - 1,
++	.read_rsp_size = sizeof(struct smb2_read_rsp),
+ 	.lock_cmd = SMB2_LOCK,
+ 	.cap_unix = 0,
+ 	.cap_nt_find = SMB2_NT_FIND,
+@@ -5583,7 +5583,7 @@ struct smb_version_values smb21_values = {
+ 	.header_size = sizeof(struct smb2_sync_hdr),
+ 	.header_preamble_size = 0,
+ 	.max_header_size = MAX_SMB2_HDR_SIZE,
+-	.read_rsp_size = sizeof(struct smb2_read_rsp) - 1,
++	.read_rsp_size = sizeof(struct smb2_read_rsp),
+ 	.lock_cmd = SMB2_LOCK,
+ 	.cap_unix = 0,
+ 	.cap_nt_find = SMB2_NT_FIND,
+@@ -5604,7 +5604,7 @@ struct smb_version_values smb3any_values = {
+ 	.header_size = sizeof(struct smb2_sync_hdr),
+ 	.header_preamble_size = 0,
+ 	.max_header_size = MAX_SMB2_HDR_SIZE,
+-	.read_rsp_size = sizeof(struct smb2_read_rsp) - 1,
++	.read_rsp_size = sizeof(struct smb2_read_rsp),
+ 	.lock_cmd = SMB2_LOCK,
+ 	.cap_unix = 0,
+ 	.cap_nt_find = SMB2_NT_FIND,
+@@ -5625,7 +5625,7 @@ struct smb_version_values smbdefault_values = {
+ 	.header_size = sizeof(struct smb2_sync_hdr),
+ 	.header_preamble_size = 0,
+ 	.max_header_size = MAX_SMB2_HDR_SIZE,
+-	.read_rsp_size = sizeof(struct smb2_read_rsp) - 1,
++	.read_rsp_size = sizeof(struct smb2_read_rsp),
+ 	.lock_cmd = SMB2_LOCK,
+ 	.cap_unix = 0,
+ 	.cap_nt_find = SMB2_NT_FIND,
+@@ -5646,7 +5646,7 @@ struct smb_version_values smb30_values = {
+ 	.header_size = sizeof(struct smb2_sync_hdr),
+ 	.header_preamble_size = 0,
+ 	.max_header_size = MAX_SMB2_HDR_SIZE,
+-	.read_rsp_size = sizeof(struct smb2_read_rsp) - 1,
++	.read_rsp_size = sizeof(struct smb2_read_rsp),
+ 	.lock_cmd = SMB2_LOCK,
+ 	.cap_unix = 0,
+ 	.cap_nt_find = SMB2_NT_FIND,
+@@ -5667,7 +5667,7 @@ struct smb_version_values smb302_values = {
+ 	.header_size = sizeof(struct smb2_sync_hdr),
+ 	.header_preamble_size = 0,
+ 	.max_header_size = MAX_SMB2_HDR_SIZE,
+-	.read_rsp_size = sizeof(struct smb2_read_rsp) - 1,
++	.read_rsp_size = sizeof(struct smb2_read_rsp),
+ 	.lock_cmd = SMB2_LOCK,
+ 	.cap_unix = 0,
+ 	.cap_nt_find = SMB2_NT_FIND,
+@@ -5688,7 +5688,7 @@ struct smb_version_values smb311_values = {
+ 	.header_size = sizeof(struct smb2_sync_hdr),
+ 	.header_preamble_size = 0,
+ 	.max_header_size = MAX_SMB2_HDR_SIZE,
+-	.read_rsp_size = sizeof(struct smb2_read_rsp) - 1,
++	.read_rsp_size = sizeof(struct smb2_read_rsp),
+ 	.lock_cmd = SMB2_LOCK,
+ 	.cap_unix = 0,
+ 	.cap_nt_find = SMB2_NT_FIND,
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 76679dc4e6328..4aec01841f0f2 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1261,7 +1261,7 @@ SMB2_sess_sendreceive(struct SMB2_sess_data *sess_data)
+ 
+ 	/* Testing shows that buffer offset must be at location of Buffer[0] */
+ 	req->SecurityBufferOffset =
+-		cpu_to_le16(sizeof(struct smb2_sess_setup_req) - 1 /* pad */);
++		cpu_to_le16(sizeof(struct smb2_sess_setup_req));
+ 	req->SecurityBufferLength = cpu_to_le16(sess_data->iov[1].iov_len);
+ 
+ 	memset(&rqst, 0, sizeof(struct smb_rqst));
+@@ -1760,8 +1760,7 @@ SMB2_tcon(const unsigned int xid, struct cifs_ses *ses, const char *tree,
+ 	iov[0].iov_len = total_len - 1;
+ 
+ 	/* Testing shows that buffer offset must be at location of Buffer[0] */
+-	req->PathOffset = cpu_to_le16(sizeof(struct smb2_tree_connect_req)
+-			- 1 /* pad */);
++	req->PathOffset = cpu_to_le16(sizeof(struct smb2_tree_connect_req));
+ 	req->PathLength = cpu_to_le16(unc_path_len - 2);
+ 	iov[1].iov_base = unc_path;
+ 	iov[1].iov_len = unc_path_len;
+@@ -4676,7 +4675,7 @@ int SMB2_query_directory_init(const unsigned int xid,
+ 	memcpy(bufptr, &asteriks, len);
+ 
+ 	req->FileNameOffset =
+-		cpu_to_le16(sizeof(struct smb2_query_directory_req) - 1);
++		cpu_to_le16(sizeof(struct smb2_query_directory_req));
+ 	req->FileNameLength = cpu_to_le16(len);
+ 	/*
+ 	 * BB could be 30 bytes or so longer if we used SMB2 specific
+@@ -4873,7 +4872,7 @@ SMB2_set_info_init(struct cifs_tcon *tcon, struct TCP_Server_Info *server,
+ 	req->AdditionalInformation = cpu_to_le32(additional_info);
+ 
+ 	req->BufferOffset =
+-			cpu_to_le16(sizeof(struct smb2_set_info_req) - 1);
++			cpu_to_le16(sizeof(struct smb2_set_info_req));
+ 	req->BufferLength = cpu_to_le32(*size);
+ 
+ 	memcpy(req->Buffer, *data, *size);
+@@ -5105,9 +5104,9 @@ build_qfs_info_req(struct kvec *iov, struct cifs_tcon *tcon,
+ 	req->VolatileFileId = volatile_fid;
+ 	/* 1 for pad */
+ 	req->InputBufferOffset =
+-			cpu_to_le16(sizeof(struct smb2_query_info_req) - 1);
++			cpu_to_le16(sizeof(struct smb2_query_info_req));
+ 	req->OutputBufferLength = cpu_to_le32(
+-		outbuf_len + sizeof(struct smb2_query_info_rsp) - 1);
++		outbuf_len + sizeof(struct smb2_query_info_rsp));
+ 
+ 	iov->iov_base = (char *)req;
+ 	iov->iov_len = total_len;
+diff --git a/fs/cifs/smb2pdu.h b/fs/cifs/smb2pdu.h
+index 89a732b31390e..eaa873175318a 100644
+--- a/fs/cifs/smb2pdu.h
++++ b/fs/cifs/smb2pdu.h
+@@ -220,7 +220,7 @@ struct smb2_err_rsp {
+ 	__le16 StructureSize;
+ 	__le16 Reserved; /* MBZ */
+ 	__le32 ByteCount;  /* even if zero, at least one byte follows */
+-	__u8   ErrorData[1];  /* variable length */
++	__u8   ErrorData[];  /* variable length */
+ } __packed;
+ 
+ #define SYMLINK_ERROR_TAG 0x4c4d5953
+@@ -464,7 +464,7 @@ struct smb2_negotiate_rsp {
+ 	__le16 SecurityBufferOffset;
+ 	__le16 SecurityBufferLength;
+ 	__le32 NegotiateContextOffset;	/* Pre:SMB3.1.1 was reserved/ignored */
+-	__u8   Buffer[1];	/* variable length GSS security buffer */
++	__u8   Buffer[];	/* variable length GSS security buffer */
+ } __packed;
+ 
+ /* Flags */
+@@ -481,7 +481,7 @@ struct smb2_sess_setup_req {
+ 	__le16 SecurityBufferOffset;
+ 	__le16 SecurityBufferLength;
+ 	__u64 PreviousSessionId;
+-	__u8   Buffer[1];	/* variable length GSS security buffer */
++	__u8   Buffer[];	/* variable length GSS security buffer */
+ } __packed;
+ 
+ /* Currently defined SessionFlags */
+@@ -494,7 +494,7 @@ struct smb2_sess_setup_rsp {
+ 	__le16 SessionFlags;
+ 	__le16 SecurityBufferOffset;
+ 	__le16 SecurityBufferLength;
+-	__u8   Buffer[1];	/* variable length GSS security buffer */
++	__u8   Buffer[];	/* variable length GSS security buffer */
+ } __packed;
+ 
+ struct smb2_logoff_req {
+@@ -520,7 +520,7 @@ struct smb2_tree_connect_req {
+ 	__le16 Flags; /* Reserved MBZ for dialects prior to SMB3.1.1 */
+ 	__le16 PathOffset;
+ 	__le16 PathLength;
+-	__u8   Buffer[1];	/* variable length */
++	__u8   Buffer[];	/* variable length */
+ } __packed;
+ 
+ /* See MS-SMB2 section 2.2.9.2 */
+@@ -828,7 +828,7 @@ struct smb2_create_rsp {
+ 	__u64  VolatileFileId; /* opaque endianness */
+ 	__le32 CreateContextsOffset;
+ 	__le32 CreateContextsLength;
+-	__u8   Buffer[1];
++	__u8   Buffer[];
+ } __packed;
+ 
+ struct create_context {
+@@ -1289,7 +1289,7 @@ struct smb2_read_plain_req {
+ 	__le32 RemainingBytes;
+ 	__le16 ReadChannelInfoOffset;
+ 	__le16 ReadChannelInfoLength;
+-	__u8   Buffer[1];
++	__u8   Buffer[];
+ } __packed;
+ 
+ /* Read flags */
+@@ -1304,7 +1304,7 @@ struct smb2_read_rsp {
+ 	__le32 DataLength;
+ 	__le32 DataRemaining;
+ 	__u32  Flags;
+-	__u8   Buffer[1];
++	__u8   Buffer[];
+ } __packed;
+ 
+ /* For write request Flags field below the following flags are defined: */
+@@ -1324,7 +1324,7 @@ struct smb2_write_req {
+ 	__le16 WriteChannelInfoOffset;
+ 	__le16 WriteChannelInfoLength;
+ 	__le32 Flags;
+-	__u8   Buffer[1];
++	__u8   Buffer[];
+ } __packed;
+ 
+ struct smb2_write_rsp {
+@@ -1335,7 +1335,7 @@ struct smb2_write_rsp {
+ 	__le32 DataLength;
+ 	__le32 DataRemaining;
+ 	__u32  Reserved2;
+-	__u8   Buffer[1];
++	__u8   Buffer[];
+ } __packed;
+ 
+ /* notify flags */
+@@ -1371,7 +1371,7 @@ struct smb2_change_notify_rsp {
+ 	__le16	StructureSize;  /* Must be 9 */
+ 	__le16	OutputBufferOffset;
+ 	__le32	OutputBufferLength;
+-	__u8	Buffer[1]; /* array of file notify structs */
++	__u8	Buffer[]; /* array of file notify structs */
+ } __packed;
+ 
+ #define SMB2_LOCKFLAG_SHARED_LOCK	0x0001
+@@ -1394,7 +1394,10 @@ struct smb2_lock_req {
+ 	__u64  PersistentFileId; /* opaque endianness */
+ 	__u64  VolatileFileId; /* opaque endianness */
+ 	/* Followed by at least one */
+-	struct smb2_lock_element locks[1];
++	union {
++		struct smb2_lock_element lock;
++		DECLARE_FLEX_ARRAY(struct smb2_lock_element, locks);
++	};
+ } __packed;
+ 
+ struct smb2_lock_rsp {
+@@ -1434,7 +1437,7 @@ struct smb2_query_directory_req {
+ 	__le16 FileNameOffset;
+ 	__le16 FileNameLength;
+ 	__le32 OutputBufferLength;
+-	__u8   Buffer[1];
++	__u8   Buffer[];
+ } __packed;
+ 
+ struct smb2_query_directory_rsp {
+@@ -1442,7 +1445,7 @@ struct smb2_query_directory_rsp {
+ 	__le16 StructureSize; /* Must be 9 */
+ 	__le16 OutputBufferOffset;
+ 	__le32 OutputBufferLength;
+-	__u8   Buffer[1];
++	__u8   Buffer[];
+ } __packed;
+ 
+ /* Possible InfoType values */
+@@ -1483,7 +1486,7 @@ struct smb2_query_info_req {
+ 	__le32 Flags;
+ 	__u64  PersistentFileId; /* opaque endianness */
+ 	__u64  VolatileFileId; /* opaque endianness */
+-	__u8   Buffer[1];
++	__u8   Buffer[];
+ } __packed;
+ 
+ struct smb2_query_info_rsp {
+@@ -1491,7 +1494,7 @@ struct smb2_query_info_rsp {
+ 	__le16 StructureSize; /* Must be 9 */
+ 	__le16 OutputBufferOffset;
+ 	__le32 OutputBufferLength;
+-	__u8   Buffer[1];
++	__u8   Buffer[];
+ } __packed;
+ 
+ /*
+@@ -1514,7 +1517,7 @@ struct smb2_set_info_req {
+ 	__le32 AdditionalInformation;
+ 	__u64  PersistentFileId; /* opaque endianness */
+ 	__u64  VolatileFileId; /* opaque endianness */
+-	__u8   Buffer[1];
++	__u8   Buffer[];
+ } __packed;
+ 
+ struct smb2_set_info_rsp {
+@@ -1716,7 +1719,10 @@ struct smb2_file_all_info { /* data block encoding of response to level 18 */
+ 	__le32 Mode;
+ 	__le32 AlignmentRequirement;
+ 	__le32 FileNameLength;
+-	char   FileName[1];
++	union {
++		char __pad;     /* Legacy structure padding */
++		DECLARE_FLEX_ARRAY(char, FileName);
++	};
+ } __packed; /* level 18 Query */
+ 
+ struct smb2_file_eof_info { /* encoding of request for level 10 */
+diff --git a/fs/dcache.c b/fs/dcache.c
+index ea0485861d937..976c7474d62a9 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -759,12 +759,12 @@ static inline bool fast_dput(struct dentry *dentry)
+ 	 */
+ 	if (unlikely(ret < 0)) {
+ 		spin_lock(&dentry->d_lock);
+-		if (dentry->d_lockref.count > 1) {
+-			dentry->d_lockref.count--;
++		if (WARN_ON_ONCE(dentry->d_lockref.count <= 0)) {
+ 			spin_unlock(&dentry->d_lock);
+ 			return true;
+ 		}
+-		return false;
++		dentry->d_lockref.count--;
++		goto locked;
+ 	}
+ 
+ 	/*
+@@ -815,6 +815,7 @@ static inline bool fast_dput(struct dentry *dentry)
+ 	 * else could have killed it and marked it dead. Either way, we
+ 	 * don't need to do anything else.
+ 	 */
++locked:
+ 	if (dentry->d_lockref.count) {
+ 		spin_unlock(&dentry->d_lock);
+ 		return true;
+diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
+index e23752d9a79f3..c867a0d62f360 100644
+--- a/fs/ecryptfs/inode.c
++++ b/fs/ecryptfs/inode.c
+@@ -76,6 +76,14 @@ static struct inode *__ecryptfs_get_inode(struct inode *lower_inode,
+ 
+ 	if (lower_inode->i_sb != ecryptfs_superblock_to_lower(sb))
+ 		return ERR_PTR(-EXDEV);
++
++	/* Reject dealing with casefold directories. */
++	if (IS_CASEFOLDED(lower_inode)) {
++		pr_err_ratelimited("%s: Can't handle casefolded directory.\n",
++				   __func__);
++		return ERR_PTR(-EREMOTE);
++	}
++
+ 	if (!igrab(lower_inode))
+ 		return ERR_PTR(-ESTALE);
+ 	inode = iget5_locked(sb, (unsigned long)lower_inode,
+diff --git a/fs/exec.c b/fs/exec.c
+index 983295c0b8acf..2006e245b8f30 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1392,6 +1392,9 @@ int begin_new_exec(struct linux_binprm * bprm)
+ 
+ out_unlock:
+ 	up_write(&me->signal->exec_update_lock);
++	if (!bprm->cred)
++		mutex_unlock(&me->signal->cred_guard_mutex);
++
+ out:
+ 	return retval;
+ }
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 3babc07ae613e..9bec75847b856 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -5895,11 +5895,16 @@ __acquires(bitlock)
+ static ext4_grpblk_t ext4_last_grp_cluster(struct super_block *sb,
+ 					   ext4_group_t grp)
+ {
+-	if (grp < ext4_get_groups_count(sb))
+-		return EXT4_CLUSTERS_PER_GROUP(sb) - 1;
+-	return (ext4_blocks_count(EXT4_SB(sb)->s_es) -
+-		ext4_group_first_block_no(sb, grp) - 1) >>
+-					EXT4_CLUSTER_BITS(sb);
++	unsigned long nr_clusters_in_group;
++
++	if (grp < (ext4_get_groups_count(sb) - 1))
++		nr_clusters_in_group = EXT4_CLUSTERS_PER_GROUP(sb);
++	else
++		nr_clusters_in_group = (ext4_blocks_count(EXT4_SB(sb)->s_es) -
++					ext4_group_first_block_no(sb, grp))
++				       >> EXT4_CLUSTER_BITS(sb);
++
++	return nr_clusters_in_group - 1;
+ }
+ 
+ static bool ext4_trim_interrupted(void)
+@@ -5911,13 +5916,15 @@ static int ext4_try_to_trim_range(struct super_block *sb,
+ 		struct ext4_buddy *e4b, ext4_grpblk_t start,
+ 		ext4_grpblk_t max, ext4_grpblk_t minblocks)
+ {
+-	ext4_grpblk_t next, count, free_count;
++	ext4_grpblk_t next, count, free_count, last, origin_start;
+ 	bool set_trimmed = false;
+ 	void *bitmap;
+ 
++	last = ext4_last_grp_cluster(sb, e4b->bd_group);
+ 	bitmap = e4b->bd_bitmap;
+-	if (start == 0 && max >= ext4_last_grp_cluster(sb, e4b->bd_group))
++	if (start == 0 && max >= last)
+ 		set_trimmed = true;
++	origin_start = start;
+ 	start = max(e4b->bd_info->bb_first_free, start);
+ 	count = 0;
+ 	free_count = 0;
+@@ -5926,7 +5933,10 @@ static int ext4_try_to_trim_range(struct super_block *sb,
+ 		start = mb_find_next_zero_bit(bitmap, max + 1, start);
+ 		if (start > max)
+ 			break;
+-		next = mb_find_next_bit(bitmap, max + 1, start);
++
++		next = mb_find_next_bit(bitmap, last + 1, start);
++		if (origin_start == 0 && next >= last)
++			set_trimmed = true;
+ 
+ 		if ((next - start) >= minblocks) {
+ 			int ret = ext4_trim_extent(sb, start, next - start, e4b);
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index 64a579734f934..f8dd5d972c337 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -615,6 +615,7 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp, __u64 orig_blk,
+ 		goto out;
+ 	o_end = o_start + len;
+ 
++	*moved_len = 0;
+ 	while (o_start < o_end) {
+ 		struct ext4_extent *ex;
+ 		ext4_lblk_t cur_blk, next_blk;
+@@ -670,7 +671,7 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp, __u64 orig_blk,
+ 		 */
+ 		ext4_double_up_write_data_sem(orig_inode, donor_inode);
+ 		/* Swap original branches with new branches */
+-		move_extent_per_page(o_filp, donor_inode,
++		*moved_len += move_extent_per_page(o_filp, donor_inode,
+ 				     orig_page_index, donor_page_index,
+ 				     offset_in_page, cur_len,
+ 				     unwritten, &ret);
+@@ -680,9 +681,6 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp, __u64 orig_blk,
+ 		o_start += cur_len;
+ 		d_start += cur_len;
+ 	}
+-	*moved_len = o_start - orig_blk;
+-	if (*moved_len > len)
+-		*moved_len = len;
+ 
+ out:
+ 	if (*moved_len) {
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 9b4199a1e0397..06e0eaf2ea4e1 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -227,17 +227,24 @@ struct ext4_new_flex_group_data {
+ 						   in the flex group */
+ 	__u16 *bg_flags;			/* block group flags of groups
+ 						   in @groups */
++	ext4_group_t resize_bg;			/* number of allocated
++						   new_group_data */
+ 	ext4_group_t count;			/* number of groups in @groups
+ 						 */
+ };
+ 
++/*
++ * Avoiding memory allocation failures due to too many groups added each time.
++ */
++#define MAX_RESIZE_BG				16384
++
+ /*
+  * alloc_flex_gd() allocates a ext4_new_flex_group_data with size of
+  * @flexbg_size.
+  *
+  * Returns NULL on failure otherwise address of the allocated structure.
+  */
+-static struct ext4_new_flex_group_data *alloc_flex_gd(unsigned long flexbg_size)
++static struct ext4_new_flex_group_data *alloc_flex_gd(unsigned int flexbg_size)
+ {
+ 	struct ext4_new_flex_group_data *flex_gd;
+ 
+@@ -245,17 +252,18 @@ static struct ext4_new_flex_group_data *alloc_flex_gd(unsigned long flexbg_size)
+ 	if (flex_gd == NULL)
+ 		goto out3;
+ 
+-	if (flexbg_size >= UINT_MAX / sizeof(struct ext4_new_group_data))
+-		goto out2;
+-	flex_gd->count = flexbg_size;
++	if (unlikely(flexbg_size > MAX_RESIZE_BG))
++		flex_gd->resize_bg = MAX_RESIZE_BG;
++	else
++		flex_gd->resize_bg = flexbg_size;
+ 
+-	flex_gd->groups = kmalloc_array(flexbg_size,
++	flex_gd->groups = kmalloc_array(flex_gd->resize_bg,
+ 					sizeof(struct ext4_new_group_data),
+ 					GFP_NOFS);
+ 	if (flex_gd->groups == NULL)
+ 		goto out2;
+ 
+-	flex_gd->bg_flags = kmalloc_array(flexbg_size, sizeof(__u16),
++	flex_gd->bg_flags = kmalloc_array(flex_gd->resize_bg, sizeof(__u16),
+ 					  GFP_NOFS);
+ 	if (flex_gd->bg_flags == NULL)
+ 		goto out1;
+@@ -292,7 +300,7 @@ static void free_flex_gd(struct ext4_new_flex_group_data *flex_gd)
+  */
+ static int ext4_alloc_group_tables(struct super_block *sb,
+ 				struct ext4_new_flex_group_data *flex_gd,
+-				int flexbg_size)
++				unsigned int flexbg_size)
+ {
+ 	struct ext4_new_group_data *group_data = flex_gd->groups;
+ 	ext4_fsblk_t start_blk;
+@@ -393,12 +401,12 @@ static int ext4_alloc_group_tables(struct super_block *sb,
+ 		group = group_data[0].group;
+ 
+ 		printk(KERN_DEBUG "EXT4-fs: adding a flex group with "
+-		       "%d groups, flexbg size is %d:\n", flex_gd->count,
++		       "%u groups, flexbg size is %u:\n", flex_gd->count,
+ 		       flexbg_size);
+ 
+ 		for (i = 0; i < flex_gd->count; i++) {
+ 			ext4_debug(
+-			       "adding %s group %u: %u blocks (%d free, %d mdata blocks)\n",
++			       "adding %s group %u: %u blocks (%u free, %u mdata blocks)\n",
+ 			       ext4_bg_has_super(sb, group + i) ? "normal" :
+ 			       "no-super", group + i,
+ 			       group_data[i].blocks_count,
+@@ -1562,8 +1570,7 @@ static int ext4_flex_group_add(struct super_block *sb,
+ 
+ static int ext4_setup_next_flex_gd(struct super_block *sb,
+ 				    struct ext4_new_flex_group_data *flex_gd,
+-				    ext4_fsblk_t n_blocks_count,
+-				    unsigned long flexbg_size)
++				    ext4_fsblk_t n_blocks_count)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	struct ext4_super_block *es = sbi->s_es;
+@@ -1587,7 +1594,7 @@ static int ext4_setup_next_flex_gd(struct super_block *sb,
+ 	BUG_ON(last);
+ 	ext4_get_group_no_and_offset(sb, n_blocks_count - 1, &n_group, &last);
+ 
+-	last_group = group | (flexbg_size - 1);
++	last_group = group | (flex_gd->resize_bg - 1);
+ 	if (last_group > n_group)
+ 		last_group = n_group;
+ 
+@@ -1941,8 +1948,9 @@ int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count)
+ 	ext4_fsblk_t o_blocks_count;
+ 	ext4_fsblk_t n_blocks_count_retry = 0;
+ 	unsigned long last_update_time = 0;
+-	int err = 0, flexbg_size = 1 << sbi->s_log_groups_per_flex;
++	int err = 0;
+ 	int meta_bg;
++	unsigned int flexbg_size = ext4_flex_bg_size(sbi);
+ 
+ 	/* See if the device is actually as big as what was requested */
+ 	bh = ext4_sb_bread(sb, n_blocks_count - 1, 0);
+@@ -2083,8 +2091,7 @@ int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count)
+ 	/* Add flex groups. Note that a regular group is a
+ 	 * flex group with 1 group.
+ 	 */
+-	while (ext4_setup_next_flex_gd(sb, flex_gd, n_blocks_count,
+-					      flexbg_size)) {
++	while (ext4_setup_next_flex_gd(sb, flex_gd, n_blocks_count)) {
+ 		if (jiffies - last_update_time > HZ * 10) {
+ 			if (last_update_time)
+ 				ext4_msg(sb, KERN_INFO,
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index c3c527afdd074..cd56af93df427 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -641,7 +641,16 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
+ 		 */
+ 		if (dest == NEW_ADDR) {
+ 			f2fs_truncate_data_blocks_range(&dn, 1);
+-			f2fs_reserve_new_block(&dn);
++			do {
++				err = f2fs_reserve_new_block(&dn);
++				if (err == -ENOSPC) {
++					f2fs_bug_on(sbi, 1);
++					break;
++				}
++			} while (err &&
++				IS_ENABLED(CONFIG_F2FS_FAULT_INJECTION));
++			if (err)
++				goto err;
+ 			continue;
+ 		}
+ 
+@@ -649,12 +658,14 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
+ 		if (f2fs_is_valid_blkaddr(sbi, dest, META_POR)) {
+ 
+ 			if (src == NULL_ADDR) {
+-				err = f2fs_reserve_new_block(&dn);
+-				while (err &&
+-				       IS_ENABLED(CONFIG_F2FS_FAULT_INJECTION))
++				do {
+ 					err = f2fs_reserve_new_block(&dn);
+-				/* We should not get -ENOSPC */
+-				f2fs_bug_on(sbi, err);
++					if (err == -ENOSPC) {
++						f2fs_bug_on(sbi, 1);
++						break;
++					}
++				} while (err &&
++					IS_ENABLED(CONFIG_F2FS_FAULT_INJECTION));
+ 				if (err)
+ 					goto err;
+ 			}
+@@ -844,6 +855,8 @@ int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
+ 	if (!err && fix_curseg_write_pointer && !f2fs_readonly(sbi->sb) &&
+ 			f2fs_sb_has_blkzoned(sbi)) {
+ 		err = f2fs_fix_curseg_write_pointer(sbi);
++		if (!err)
++			err = f2fs_check_write_pointer(sbi);
+ 		ret = err;
+ 	}
+ 
+diff --git a/fs/ioctl.c b/fs/ioctl.c
+index 7bcc60091287c..b2d06f016ec66 100644
+--- a/fs/ioctl.c
++++ b/fs/ioctl.c
+@@ -799,8 +799,7 @@ COMPAT_SYSCALL_DEFINE3(ioctl, unsigned int, fd, unsigned int, cmd,
+ 	if (!f.file)
+ 		return -EBADF;
+ 
+-	/* RED-PEN how should LSM module know it's handling 32bit? */
+-	error = security_file_ioctl(f.file, cmd, arg);
++	error = security_file_ioctl_compat(f.file, cmd, arg);
+ 	if (error)
+ 		goto out;
+ 
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 72eb5ed54c2ab..9b6849b9bfdb9 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -63,10 +63,10 @@
+  */
+ static void dbAllocBits(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 			int nblocks);
+-static void dbSplit(dmtree_t * tp, int leafno, int splitsz, int newval);
+-static int dbBackSplit(dmtree_t * tp, int leafno);
+-static int dbJoin(dmtree_t * tp, int leafno, int newval);
+-static void dbAdjTree(dmtree_t * tp, int leafno, int newval);
++static void dbSplit(dmtree_t *tp, int leafno, int splitsz, int newval, bool is_ctl);
++static int dbBackSplit(dmtree_t *tp, int leafno, bool is_ctl);
++static int dbJoin(dmtree_t *tp, int leafno, int newval, bool is_ctl);
++static void dbAdjTree(dmtree_t *tp, int leafno, int newval, bool is_ctl);
+ static int dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc,
+ 		    int level);
+ static int dbAllocAny(struct bmap * bmp, s64 nblocks, int l2nb, s64 * results);
+@@ -2171,7 +2171,7 @@ static int dbFreeDmap(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 		 * system.
+ 		 */
+ 		if (dp->tree.stree[word] == NOFREE)
+-			dbBackSplit((dmtree_t *) & dp->tree, word);
++			dbBackSplit((dmtree_t *)&dp->tree, word, false);
+ 
+ 		dbAllocBits(bmp, dp, blkno, nblocks);
+ 	}
+@@ -2257,7 +2257,7 @@ static void dbAllocBits(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 			 * the binary system of the leaves if need be.
+ 			 */
+ 			dbSplit(tp, word, BUDMIN,
+-				dbMaxBud((u8 *) & dp->wmap[word]));
++				dbMaxBud((u8 *)&dp->wmap[word]), false);
+ 
+ 			word += 1;
+ 		} else {
+@@ -2297,7 +2297,7 @@ static void dbAllocBits(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 				 * system of the leaves to reflect the current
+ 				 * allocation (size).
+ 				 */
+-				dbSplit(tp, word, size, NOFREE);
++				dbSplit(tp, word, size, NOFREE, false);
+ 
+ 				/* get the number of dmap words handled */
+ 				nw = BUDSIZE(size, BUDMIN);
+@@ -2404,7 +2404,7 @@ static int dbFreeBits(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 			/* update the leaf for this dmap word.
+ 			 */
+ 			rc = dbJoin(tp, word,
+-				    dbMaxBud((u8 *) & dp->wmap[word]));
++				    dbMaxBud((u8 *)&dp->wmap[word]), false);
+ 			if (rc)
+ 				return rc;
+ 
+@@ -2437,7 +2437,7 @@ static int dbFreeBits(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 
+ 				/* update the leaf.
+ 				 */
+-				rc = dbJoin(tp, word, size);
++				rc = dbJoin(tp, word, size, false);
+ 				if (rc)
+ 					return rc;
+ 
+@@ -2589,14 +2589,14 @@ dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc, int level)
+ 		 * that it is at the front of a binary buddy system.
+ 		 */
+ 		if (oldval == NOFREE) {
+-			rc = dbBackSplit((dmtree_t *) dcp, leafno);
++			rc = dbBackSplit((dmtree_t *)dcp, leafno, true);
+ 			if (rc)
+ 				return rc;
+ 			oldval = dcp->stree[ti];
+ 		}
+-		dbSplit((dmtree_t *) dcp, leafno, dcp->budmin, newval);
++		dbSplit((dmtree_t *) dcp, leafno, dcp->budmin, newval, true);
+ 	} else {
+-		rc = dbJoin((dmtree_t *) dcp, leafno, newval);
++		rc = dbJoin((dmtree_t *) dcp, leafno, newval, true);
+ 		if (rc)
+ 			return rc;
+ 	}
+@@ -2625,7 +2625,7 @@ dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc, int level)
+ 				 */
+ 				if (alloc) {
+ 					dbJoin((dmtree_t *) dcp, leafno,
+-					       oldval);
++					       oldval, true);
+ 				} else {
+ 					/* the dbJoin() above might have
+ 					 * caused a larger binary buddy system
+@@ -2635,9 +2635,9 @@ dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc, int level)
+ 					 */
+ 					if (dcp->stree[ti] == NOFREE)
+ 						dbBackSplit((dmtree_t *)
+-							    dcp, leafno);
++							    dcp, leafno, true);
+ 					dbSplit((dmtree_t *) dcp, leafno,
+-						dcp->budmin, oldval);
++						dcp->budmin, oldval, true);
+ 				}
+ 
+ 				/* release the buffer and return the error.
+@@ -2685,7 +2685,7 @@ dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc, int level)
+  *
+  * serialization: IREAD_LOCK(ipbmap) or IWRITE_LOCK(ipbmap) held on entry/exit;
+  */
+-static void dbSplit(dmtree_t * tp, int leafno, int splitsz, int newval)
++static void dbSplit(dmtree_t *tp, int leafno, int splitsz, int newval, bool is_ctl)
+ {
+ 	int budsz;
+ 	int cursz;
+@@ -2707,7 +2707,7 @@ static void dbSplit(dmtree_t * tp, int leafno, int splitsz, int newval)
+ 		while (cursz >= splitsz) {
+ 			/* update the buddy's leaf with its new value.
+ 			 */
+-			dbAdjTree(tp, leafno ^ budsz, cursz);
++			dbAdjTree(tp, leafno ^ budsz, cursz, is_ctl);
+ 
+ 			/* on to the next size and buddy.
+ 			 */
+@@ -2719,7 +2719,7 @@ static void dbSplit(dmtree_t * tp, int leafno, int splitsz, int newval)
+ 	/* adjust the dmap tree to reflect the specified leaf's new
+ 	 * value.
+ 	 */
+-	dbAdjTree(tp, leafno, newval);
++	dbAdjTree(tp, leafno, newval, is_ctl);
+ }
+ 
+ 
+@@ -2750,7 +2750,7 @@ static void dbSplit(dmtree_t * tp, int leafno, int splitsz, int newval)
+  *
+  * serialization: IREAD_LOCK(ipbmap) or IWRITE_LOCK(ipbmap) held on entry/exit;
+  */
+-static int dbBackSplit(dmtree_t * tp, int leafno)
++static int dbBackSplit(dmtree_t *tp, int leafno, bool is_ctl)
+ {
+ 	int budsz, bud, w, bsz, size;
+ 	int cursz;
+@@ -2801,7 +2801,7 @@ static int dbBackSplit(dmtree_t * tp, int leafno)
+ 				 * system in two.
+ 				 */
+ 				cursz = leaf[bud] - 1;
+-				dbSplit(tp, bud, cursz, cursz);
++				dbSplit(tp, bud, cursz, cursz, is_ctl);
+ 				break;
+ 			}
+ 		}
+@@ -2829,7 +2829,7 @@ static int dbBackSplit(dmtree_t * tp, int leafno)
+  *
+  * RETURN VALUES: none
+  */
+-static int dbJoin(dmtree_t * tp, int leafno, int newval)
++static int dbJoin(dmtree_t *tp, int leafno, int newval, bool is_ctl)
+ {
+ 	int budsz, buddy;
+ 	s8 *leaf;
+@@ -2884,12 +2884,12 @@ static int dbJoin(dmtree_t * tp, int leafno, int newval)
+ 			if (leafno < buddy) {
+ 				/* leafno is the left buddy.
+ 				 */
+-				dbAdjTree(tp, buddy, NOFREE);
++				dbAdjTree(tp, buddy, NOFREE, is_ctl);
+ 			} else {
+ 				/* buddy is the left buddy and becomes
+ 				 * leafno.
+ 				 */
+-				dbAdjTree(tp, leafno, NOFREE);
++				dbAdjTree(tp, leafno, NOFREE, is_ctl);
+ 				leafno = buddy;
+ 			}
+ 
+@@ -2902,7 +2902,7 @@ static int dbJoin(dmtree_t * tp, int leafno, int newval)
+ 
+ 	/* update the leaf value.
+ 	 */
+-	dbAdjTree(tp, leafno, newval);
++	dbAdjTree(tp, leafno, newval, is_ctl);
+ 
+ 	return 0;
+ }
+@@ -2923,15 +2923,20 @@ static int dbJoin(dmtree_t * tp, int leafno, int newval)
+  *
+  * RETURN VALUES: none
+  */
+-static void dbAdjTree(dmtree_t * tp, int leafno, int newval)
++static void dbAdjTree(dmtree_t *tp, int leafno, int newval, bool is_ctl)
+ {
+ 	int lp, pp, k;
+-	int max;
++	int max, size;
++
++	size = is_ctl ? CTLTREESIZE : TREESIZE;
+ 
+ 	/* pick up the index of the leaf for this leafno.
+ 	 */
+ 	lp = leafno + le32_to_cpu(tp->dmt_leafidx);
+ 
++	if (WARN_ON_ONCE(lp >= size || lp < 0))
++		return;
++
+ 	/* is the current value the same as the old value ?  if so,
+ 	 * there is nothing to do.
+ 	 */
+diff --git a/fs/jfs/jfs_dtree.c b/fs/jfs/jfs_dtree.c
+index 837d42f61464b..a222a9d71887f 100644
+--- a/fs/jfs/jfs_dtree.c
++++ b/fs/jfs/jfs_dtree.c
+@@ -633,6 +633,11 @@ int dtSearch(struct inode *ip, struct component_name * key, ino_t * data,
+ 		for (base = 0, lim = p->header.nextindex; lim; lim >>= 1) {
+ 			index = base + (lim >> 1);
+ 
++			if (stbl[index] < 0) {
++				rc = -EIO;
++				goto out;
++			}
++
+ 			if (p->header.flag & BT_LEAF) {
+ 				/* uppercase leaf name to compare */
+ 				cmp =
+@@ -1970,7 +1975,7 @@ static int dtSplitRoot(tid_t tid,
+ 		do {
+ 			f = &rp->slot[fsi];
+ 			fsi = f->next;
+-		} while (fsi != -1);
++		} while (fsi >= 0);
+ 
+ 		f->next = n;
+ 	}
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index 14f918a4831d3..b0965f3ef1865 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -2181,6 +2181,9 @@ static int diNewExt(struct inomap * imap, struct iag * iagp, int extno)
+ 	/* get the ag and iag numbers for this iag.
+ 	 */
+ 	agno = BLKTOAG(le64_to_cpu(iagp->agstart), sbi);
++	if (agno >= MAXAG || agno < 0)
++		return -EIO;
++
+ 	iagno = le32_to_cpu(iagp->iagnum);
+ 
+ 	/* check if this is the last free extent within the
+diff --git a/fs/jfs/jfs_mount.c b/fs/jfs/jfs_mount.c
+index aa4ff7bcaff23..55702b31ab3c4 100644
+--- a/fs/jfs/jfs_mount.c
++++ b/fs/jfs/jfs_mount.c
+@@ -172,15 +172,15 @@ int jfs_mount(struct super_block *sb)
+ 	}
+ 	jfs_info("jfs_mount: ipimap:0x%p", ipimap);
+ 
+-	/* map further access of per fileset inodes by the fileset inode */
+-	sbi->ipimap = ipimap;
+-
+ 	/* initialize fileset inode allocation map */
+ 	if ((rc = diMount(ipimap))) {
+ 		jfs_err("jfs_mount: diMount failed w/rc = %d", rc);
+ 		goto err_ipimap;
+ 	}
+ 
++	/* map further access of per fileset inodes by the fileset inode */
++	sbi->ipimap = ipimap;
++
+ 	return rc;
+ 
+ 	/*
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index c91ee05cce74f..0ba056e06e489 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -696,6 +696,18 @@ struct kernfs_node *kernfs_new_node(struct kernfs_node *parent,
+ {
+ 	struct kernfs_node *kn;
+ 
++	if (parent->mode & S_ISGID) {
++		/* this code block imitates inode_init_owner() for
++		 * kernfs
++		 */
++
++		if (parent->iattr)
++			gid = parent->iattr->ia_gid;
++
++		if (flags & KERNFS_DIR)
++			mode |= S_ISGID;
++	}
++
+ 	kn = __kernfs_new_node(kernfs_root(parent), parent,
+ 			       name, mode, uid, gid, flags);
+ 	if (kn) {
+diff --git a/fs/namei.c b/fs/namei.c
+index 3ff954a2bbd1d..cb37d7c477e0b 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -2771,20 +2771,14 @@ struct dentry *lock_rename(struct dentry *p1, struct dentry *p2)
+ 	p = d_ancestor(p2, p1);
+ 	if (p) {
+ 		inode_lock_nested(p2->d_inode, I_MUTEX_PARENT);
+-		inode_lock_nested(p1->d_inode, I_MUTEX_CHILD);
++		inode_lock_nested(p1->d_inode, I_MUTEX_PARENT2);
+ 		return p;
+ 	}
+ 
+ 	p = d_ancestor(p1, p2);
+-	if (p) {
+-		inode_lock_nested(p1->d_inode, I_MUTEX_PARENT);
+-		inode_lock_nested(p2->d_inode, I_MUTEX_CHILD);
+-		return p;
+-	}
+-
+-	lock_two_inodes(p1->d_inode, p2->d_inode,
+-			I_MUTEX_PARENT, I_MUTEX_PARENT2);
+-	return NULL;
++	inode_lock_nested(p1->d_inode, I_MUTEX_PARENT);
++	inode_lock_nested(p2->d_inode, I_MUTEX_PARENT2);
++	return p;
+ }
+ EXPORT_SYMBOL(lock_rename);
+ 
+@@ -4260,11 +4254,12 @@ SYSCALL_DEFINE2(link, const char __user *, oldname, const char __user *, newname
+  *
+  *	a) we can get into loop creation.
+  *	b) race potential - two innocent renames can create a loop together.
+- *	   That's where 4.4 screws up. Current fix: serialization on
++ *	   That's where 4.4BSD screws up. Current fix: serialization on
+  *	   sb->s_vfs_rename_mutex. We might be more accurate, but that's another
+  *	   story.
+- *	c) we have to lock _four_ objects - parents and victim (if it exists),
+- *	   and source.
++ *	c) we may have to lock up to _four_ objects - parents and victim (if it exists),
++ *	   and source (if it's a non-directory or a subdirectory that moves to
++ *	   different parent).
+  *	   And that - after we got ->i_mutex on parents (until then we don't know
+  *	   whether the target exists).  Solution: try to be smart with locking
+  *	   order for inodes.  We rely on the fact that tree topology may change
+@@ -4293,6 +4288,7 @@ int vfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	bool new_is_dir = false;
+ 	unsigned max_links = new_dir->i_sb->s_max_links;
+ 	struct name_snapshot old_name;
++	bool lock_old_subdir, lock_new_subdir;
+ 
+ 	if (source == target)
+ 		return 0;
+@@ -4342,15 +4338,32 @@ int vfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	take_dentry_name_snapshot(&old_name, old_dentry);
+ 	dget(new_dentry);
+ 	/*
+-	 * Lock all moved children. Moved directories may need to change parent
+-	 * pointer so they need the lock to prevent against concurrent
+-	 * directory changes moving parent pointer. For regular files we've
+-	 * historically always done this. The lockdep locking subclasses are
+-	 * somewhat arbitrary but RENAME_EXCHANGE in particular can swap
+-	 * regular files and directories so it's difficult to tell which
+-	 * subclasses to use.
++	 * Lock children.
++	 * The source subdirectory needs to be locked on cross-directory
++	 * rename or cross-directory exchange since its parent changes.
++	 * The target subdirectory needs to be locked on cross-directory
++	 * exchange due to parent change and on any rename due to becoming
++	 * a victim.
++	 * Non-directories need locking in all cases (for NFS reasons);
++	 * they get locked after any subdirectories (in inode address order).
++	 *
++	 * NOTE: WE ONLY LOCK UNRELATED DIRECTORIES IN CROSS-DIRECTORY CASE.
++	 * NEVER, EVER DO THAT WITHOUT ->s_vfs_rename_mutex.
+ 	 */
+-	lock_two_inodes(source, target, I_MUTEX_NORMAL, I_MUTEX_NONDIR2);
++	lock_old_subdir = new_dir != old_dir;
++	lock_new_subdir = new_dir != old_dir || !(flags & RENAME_EXCHANGE);
++	if (is_dir) {
++		if (lock_old_subdir)
++			inode_lock_nested(source, I_MUTEX_CHILD);
++		if (target && (!new_is_dir || lock_new_subdir))
++			inode_lock(target);
++	} else if (new_is_dir) {
++		if (lock_new_subdir)
++			inode_lock_nested(target, I_MUTEX_CHILD);
++		inode_lock(source);
++	} else {
++		lock_two_nondirectories(source, target);
++	}
+ 
+ 	error = -EBUSY;
+ 	if (is_local_mountpoint(old_dentry) || is_local_mountpoint(new_dentry))
+@@ -4394,8 +4407,9 @@ int vfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 			d_exchange(old_dentry, new_dentry);
+ 	}
+ out:
+-	inode_unlock(source);
+-	if (target)
++	if (!is_dir || lock_old_subdir)
++		inode_unlock(source);
++	if (target && (!new_is_dir || lock_new_subdir))
+ 		inode_unlock(target);
+ 	dput(new_dentry);
+ 	if (!error) {
+diff --git a/fs/nilfs2/dat.c b/fs/nilfs2/dat.c
+index 8fedc7104320d..22b1ca5c379da 100644
+--- a/fs/nilfs2/dat.c
++++ b/fs/nilfs2/dat.c
+@@ -40,8 +40,21 @@ static inline struct nilfs_dat_info *NILFS_DAT_I(struct inode *dat)
+ static int nilfs_dat_prepare_entry(struct inode *dat,
+ 				   struct nilfs_palloc_req *req, int create)
+ {
+-	return nilfs_palloc_get_entry_block(dat, req->pr_entry_nr,
+-					    create, &req->pr_entry_bh);
++	int ret;
++
++	ret = nilfs_palloc_get_entry_block(dat, req->pr_entry_nr,
++					   create, &req->pr_entry_bh);
++	if (unlikely(ret == -ENOENT)) {
++		nilfs_err(dat->i_sb,
++			  "DAT doesn't have a block to manage vblocknr = %llu",
++			  (unsigned long long)req->pr_entry_nr);
++		/*
++		 * Return internal code -EINVAL to notify bmap layer of
++		 * metadata corruption.
++		 */
++		ret = -EINVAL;
++	}
++	return ret;
+ }
+ 
+ static void nilfs_dat_commit_entry(struct inode *dat,
+@@ -123,11 +136,7 @@ static void nilfs_dat_commit_free(struct inode *dat,
+ 
+ int nilfs_dat_prepare_start(struct inode *dat, struct nilfs_palloc_req *req)
+ {
+-	int ret;
+-
+-	ret = nilfs_dat_prepare_entry(dat, req, 0);
+-	WARN_ON(ret == -ENOENT);
+-	return ret;
++	return nilfs_dat_prepare_entry(dat, req, 0);
+ }
+ 
+ void nilfs_dat_commit_start(struct inode *dat, struct nilfs_palloc_req *req,
+@@ -154,10 +163,8 @@ int nilfs_dat_prepare_end(struct inode *dat, struct nilfs_palloc_req *req)
+ 	int ret;
+ 
+ 	ret = nilfs_dat_prepare_entry(dat, req, 0);
+-	if (ret < 0) {
+-		WARN_ON(ret == -ENOENT);
++	if (ret < 0)
+ 		return ret;
+-	}
+ 
+ 	kaddr = kmap_atomic(req->pr_entry_bh->b_page);
+ 	entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
+diff --git a/fs/nilfs2/file.c b/fs/nilfs2/file.c
+index e1bd592ce7001..5611a35344a75 100644
+--- a/fs/nilfs2/file.c
++++ b/fs/nilfs2/file.c
+@@ -105,7 +105,13 @@ static vm_fault_t nilfs_page_mkwrite(struct vm_fault *vmf)
+ 	nilfs_transaction_commit(inode->i_sb);
+ 
+  mapped:
+-	wait_for_stable_page(page);
++	/*
++	 * Since checksumming including data blocks is performed to determine
++	 * the validity of the log to be written and used for recovery, it is
++	 * necessary to wait for writeback to finish here, regardless of the
++	 * stable write requirement of the backing device.
++	 */
++	wait_on_page_writeback(page);
+  out:
+ 	sb_end_pagefault(inode->i_sb);
+ 	return block_page_mkwrite_return(ret);
+diff --git a/fs/nilfs2/recovery.c b/fs/nilfs2/recovery.c
+index 2217f904a7cfb..188b8cc52e2b6 100644
+--- a/fs/nilfs2/recovery.c
++++ b/fs/nilfs2/recovery.c
+@@ -472,9 +472,10 @@ static int nilfs_prepare_segment_for_recovery(struct the_nilfs *nilfs,
+ 
+ static int nilfs_recovery_copy_block(struct the_nilfs *nilfs,
+ 				     struct nilfs_recovery_block *rb,
+-				     struct page *page)
++				     loff_t pos, struct page *page)
+ {
+ 	struct buffer_head *bh_org;
++	size_t from = pos & ~PAGE_MASK;
+ 	void *kaddr;
+ 
+ 	bh_org = __bread(nilfs->ns_bdev, rb->blocknr, nilfs->ns_blocksize);
+@@ -482,7 +483,7 @@ static int nilfs_recovery_copy_block(struct the_nilfs *nilfs,
+ 		return -EIO;
+ 
+ 	kaddr = kmap_atomic(page);
+-	memcpy(kaddr + bh_offset(bh_org), bh_org->b_data, bh_org->b_size);
++	memcpy(kaddr + from, bh_org->b_data, bh_org->b_size);
+ 	kunmap_atomic(kaddr);
+ 	brelse(bh_org);
+ 	return 0;
+@@ -521,7 +522,7 @@ static int nilfs_recover_dsync_blocks(struct the_nilfs *nilfs,
+ 			goto failed_inode;
+ 		}
+ 
+-		err = nilfs_recovery_copy_block(nilfs, rb, page);
++		err = nilfs_recovery_copy_block(nilfs, rb, pos, page);
+ 		if (unlikely(err))
+ 			goto failed_page;
+ 
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 418055ac910b6..be0ca35b8aa4b 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -1707,7 +1707,6 @@ static void nilfs_segctor_prepare_write(struct nilfs_sc_info *sci)
+ 
+ 		list_for_each_entry(bh, &segbuf->sb_payload_buffers,
+ 				    b_assoc_buffers) {
+-			set_buffer_async_write(bh);
+ 			if (bh == segbuf->sb_super_root) {
+ 				if (bh->b_page != bd_page) {
+ 					lock_page(bd_page);
+@@ -1718,6 +1717,7 @@ static void nilfs_segctor_prepare_write(struct nilfs_sc_info *sci)
+ 				}
+ 				break;
+ 			}
++			set_buffer_async_write(bh);
+ 			if (bh->b_page != fs_page) {
+ 				nilfs_begin_page_io(fs_page);
+ 				fs_page = bh->b_page;
+@@ -1803,7 +1803,6 @@ static void nilfs_abort_logs(struct list_head *logs, int err)
+ 
+ 		list_for_each_entry(bh, &segbuf->sb_payload_buffers,
+ 				    b_assoc_buffers) {
+-			clear_buffer_async_write(bh);
+ 			if (bh == segbuf->sb_super_root) {
+ 				clear_buffer_uptodate(bh);
+ 				if (bh->b_page != bd_page) {
+@@ -1812,6 +1811,7 @@ static void nilfs_abort_logs(struct list_head *logs, int err)
+ 				}
+ 				break;
+ 			}
++			clear_buffer_async_write(bh);
+ 			if (bh->b_page != fs_page) {
+ 				nilfs_end_page_io(fs_page, err);
+ 				fs_page = bh->b_page;
+@@ -1899,8 +1899,9 @@ static void nilfs_segctor_complete_write(struct nilfs_sc_info *sci)
+ 				 BIT(BH_Delay) | BIT(BH_NILFS_Volatile) |
+ 				 BIT(BH_NILFS_Redirected));
+ 
+-			set_mask_bits(&bh->b_state, clear_bits, set_bits);
+ 			if (bh == segbuf->sb_super_root) {
++				set_buffer_uptodate(bh);
++				clear_buffer_dirty(bh);
+ 				if (bh->b_page != bd_page) {
+ 					end_page_writeback(bd_page);
+ 					bd_page = bh->b_page;
+@@ -1908,6 +1909,7 @@ static void nilfs_segctor_complete_write(struct nilfs_sc_info *sci)
+ 				update_sr = true;
+ 				break;
+ 			}
++			set_mask_bits(&bh->b_state, clear_bits, set_bits);
+ 			if (bh->b_page != fs_page) {
+ 				nilfs_end_page_io(fs_page, 0);
+ 				fs_page = bh->b_page;
+diff --git a/fs/pipe.c b/fs/pipe.c
+index dbb090e1b026c..588fe37d8d955 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -435,12 +435,10 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
+ 		goto out;
+ 	}
+ 
+-#ifdef CONFIG_WATCH_QUEUE
+-	if (pipe->watch_queue) {
++	if (pipe_has_watch_queue(pipe)) {
+ 		ret = -EXDEV;
+ 		goto out;
+ 	}
+-#endif
+ 
+ 	/*
+ 	 * If it wasn't empty we try to merge new data into
+@@ -1302,6 +1300,11 @@ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots)
+ 	pipe->tail = tail;
+ 	pipe->head = head;
+ 
++	if (!pipe_has_watch_queue(pipe)) {
++		pipe->max_usage = nr_slots;
++		pipe->nr_accounted = nr_slots;
++	}
++
+ 	spin_unlock_irq(&pipe->rd_wait.lock);
+ 
+ 	/* This might have made more room for writers */
+@@ -1319,10 +1322,8 @@ static long pipe_set_size(struct pipe_inode_info *pipe, unsigned long arg)
+ 	unsigned int nr_slots, size;
+ 	long ret = 0;
+ 
+-#ifdef CONFIG_WATCH_QUEUE
+-	if (pipe->watch_queue)
++	if (pipe_has_watch_queue(pipe))
+ 		return -EBUSY;
+-#endif
+ 
+ 	size = round_pipe_size(arg);
+ 	nr_slots = size >> PAGE_SHIFT;
+@@ -1355,8 +1356,6 @@ static long pipe_set_size(struct pipe_inode_info *pipe, unsigned long arg)
+ 	if (ret < 0)
+ 		goto out_revert_acct;
+ 
+-	pipe->max_usage = nr_slots;
+-	pipe->nr_accounted = nr_slots;
+ 	return pipe->max_usage * PAGE_SIZE;
+ 
+ out_revert_acct:
+@@ -1375,10 +1374,8 @@ struct pipe_inode_info *get_pipe_info(struct file *file, bool for_splice)
+ 
+ 	if (file->f_op != &pipefifo_fops || !pipe)
+ 		return NULL;
+-#ifdef CONFIG_WATCH_QUEUE
+-	if (for_splice && pipe->watch_queue)
++	if (for_splice && pipe_has_watch_queue(pipe))
+ 		return NULL;
+-#endif
+ 	return pipe;
+ }
+ 
+diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
+index 98e579ce0d633..44fc3b3962882 100644
+--- a/fs/pstore/ram.c
++++ b/fs/pstore/ram.c
+@@ -519,6 +519,7 @@ static int ramoops_init_przs(const char *name,
+ 	}
+ 
+ 	zone_sz = mem_sz / *cnt;
++	zone_sz = ALIGN_DOWN(zone_sz, 2);
+ 	if (!zone_sz) {
+ 		dev_err(dev, "%s zone size == 0\n", name);
+ 		goto fail;
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index bc562b1072d3e..11cd921df6dac 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -1198,6 +1198,8 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ 	dir_ui->ui_size = dir->i_size;
+ 	mutex_unlock(&dir_ui->ui_mutex);
+ out_inode:
++	/* Free inode->i_link before inode is marked as bad. */
++	fscrypt_free_inode(inode);
+ 	make_bad_inode(inode);
+ 	iput(inode);
+ out_fname:
+diff --git a/include/drm/drm_color_mgmt.h b/include/drm/drm_color_mgmt.h
+index 81c298488b0c8..6b5eec10c3db3 100644
+--- a/include/drm/drm_color_mgmt.h
++++ b/include/drm/drm_color_mgmt.h
+@@ -24,6 +24,7 @@
+ #define __DRM_COLOR_MGMT_H__
+ 
+ #include <linux/ctype.h>
++#include <linux/math64.h>
+ #include <drm/drm_property.h>
+ 
+ struct drm_crtc;
+diff --git a/include/drm/drm_mipi_dsi.h b/include/drm/drm_mipi_dsi.h
+index 31ba85a4110a8..3c0d1495c062d 100644
+--- a/include/drm/drm_mipi_dsi.h
++++ b/include/drm/drm_mipi_dsi.h
+@@ -161,6 +161,7 @@ struct mipi_dsi_device_info {
+  * struct mipi_dsi_device - DSI peripheral device
+  * @host: DSI host for this peripheral
+  * @dev: driver model device node for this peripheral
++ * @attached: the DSI device has been successfully attached
+  * @name: DSI peripheral chip type
+  * @channel: virtual channel assigned to the peripheral
+  * @format: pixel format for video mode
+@@ -176,6 +177,7 @@ struct mipi_dsi_device_info {
+ struct mipi_dsi_device {
+ 	struct mipi_dsi_host *host;
+ 	struct device dev;
++	bool attached;
+ 
+ 	char name[DSI_DEV_NAME_SIZE];
+ 	unsigned int channel;
+diff --git a/include/linux/async.h b/include/linux/async.h
+index 0a17cd27f3485..d5496a520a381 100644
+--- a/include/linux/async.h
++++ b/include/linux/async.h
+@@ -90,6 +90,8 @@ async_schedule_dev(async_func_t func, struct device *dev)
+ 	return async_schedule_node(func, dev, dev_to_node(dev));
+ }
+ 
++bool async_schedule_dev_nocall(async_func_t func, struct device *dev);
++
+ /**
+  * async_schedule_dev_domain - A device specific version of async_schedule_domain
+  * @func: function to execute asynchronously
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 8f4379e93ad49..bfdf40be5360a 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -82,7 +82,11 @@ struct bpf_map_ops {
+ 	/* funcs called by prog_array and perf_event_array map */
+ 	void *(*map_fd_get_ptr)(struct bpf_map *map, struct file *map_file,
+ 				int fd);
+-	void (*map_fd_put_ptr)(void *ptr);
++	/* If need_defer is true, the implementation should guarantee that
++	 * the to-be-put element is still alive before the bpf program, which
++	 * may manipulate it, exists.
++	 */
++	void (*map_fd_put_ptr)(struct bpf_map *map, void *ptr, bool need_defer);
+ 	int (*map_gen_lookup)(struct bpf_map *map, struct bpf_insn *insn_buf);
+ 	u32 (*map_fd_sys_lookup_elem)(void *ptr);
+ 	void (*map_seq_show_elem)(struct bpf_map *map, void *key,
+diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
+index dd357a747780f..4e4cce0ad4d79 100644
+--- a/include/linux/dmaengine.h
++++ b/include/linux/dmaengine.h
+@@ -949,7 +949,8 @@ static inline int dmaengine_slave_config(struct dma_chan *chan,
+ 
+ static inline bool is_slave_direction(enum dma_transfer_direction direction)
+ {
+-	return (direction == DMA_MEM_TO_DEV) || (direction == DMA_DEV_TO_MEM);
++	return (direction == DMA_MEM_TO_DEV) || (direction == DMA_DEV_TO_MEM) ||
++	       (direction == DMA_DEV_TO_DEV);
+ }
+ 
+ static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_single(
+diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
+index a88be8bd4e1d1..54a3ad7bff581 100644
+--- a/include/linux/hrtimer.h
++++ b/include/linux/hrtimer.h
+@@ -197,6 +197,7 @@ enum  hrtimer_base_type {
+  * @max_hang_time:	Maximum time spent in hrtimer_interrupt
+  * @softirq_expiry_lock: Lock which is taken while softirq based hrtimer are
+  *			 expired
++ * @online:		CPU is online from an hrtimers point of view
+  * @timer_waiters:	A hrtimer_cancel() invocation waits for the timer
+  *			callback to finish.
+  * @expires_next:	absolute time of the next event, is required for remote
+@@ -219,7 +220,8 @@ struct hrtimer_cpu_base {
+ 	unsigned int			hres_active		: 1,
+ 					in_hrtirq		: 1,
+ 					hang_detected		: 1,
+-					softirq_activated       : 1;
++					softirq_activated       : 1,
++					online			: 1;
+ #ifdef CONFIG_HIGH_RES_TIMERS
+ 	unsigned int			nr_events;
+ 	unsigned short			nr_retries;
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index 693ed9c614b65..92a76ce0c382d 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -158,6 +158,8 @@ LSM_HOOK(int, 0, file_alloc_security, struct file *file)
+ LSM_HOOK(void, LSM_RET_VOID, file_free_security, struct file *file)
+ LSM_HOOK(int, 0, file_ioctl, struct file *file, unsigned int cmd,
+ 	 unsigned long arg)
++LSM_HOOK(int, 0, file_ioctl_compat, struct file *file, unsigned int cmd,
++	 unsigned long arg)
+ LSM_HOOK(int, 0, mmap_addr, unsigned long addr)
+ LSM_HOOK(int, 0, mmap_file, struct file *file, unsigned long reqprot,
+ 	 unsigned long prot, unsigned long flags)
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index b2e4599b88832..ffae2b3308180 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -1188,6 +1188,7 @@ static inline unsigned long section_nr_to_pfn(unsigned long sec)
+ #define SUBSECTION_ALIGN_DOWN(pfn) ((pfn) & PAGE_SUBSECTION_MASK)
+ 
+ struct mem_section_usage {
++	struct rcu_head rcu;
+ #ifdef CONFIG_SPARSEMEM_VMEMMAP
+ 	DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
+ #endif
+@@ -1353,7 +1354,7 @@ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+ {
+ 	int idx = subsection_map_index(pfn);
+ 
+-	return test_bit(idx, ms->usage->subsection_map);
++	return test_bit(idx, READ_ONCE(ms->usage)->subsection_map);
+ }
+ #else
+ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+@@ -1366,17 +1367,24 @@ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+ static inline int pfn_valid(unsigned long pfn)
+ {
+ 	struct mem_section *ms;
++	int ret;
+ 
+ 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
+ 		return 0;
+-	ms = __nr_to_section(pfn_to_section_nr(pfn));
+-	if (!valid_section(ms))
++	ms = __pfn_to_section(pfn);
++	rcu_read_lock();
++	if (!valid_section(ms)) {
++		rcu_read_unlock();
+ 		return 0;
++	}
+ 	/*
+ 	 * Traditionally early sections always returned pfn_valid() for
+ 	 * the entire section-sized span.
+ 	 */
+-	return early_section(ms) || pfn_section_valid(ms, pfn);
++	ret = early_section(ms) || pfn_section_valid(ms, pfn);
++	rcu_read_unlock();
++
++	return ret;
+ }
+ #endif
+ 
+@@ -1384,7 +1392,7 @@ static inline int pfn_in_present_section(unsigned long pfn)
+ {
+ 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
+ 		return 0;
+-	return present_section(__nr_to_section(pfn_to_section_nr(pfn)));
++	return present_section(__pfn_to_section(pfn));
+ }
+ 
+ static inline unsigned long next_present_section_nr(unsigned long section_nr)
+diff --git a/include/linux/netfilter/ipset/ip_set.h b/include/linux/netfilter/ipset/ip_set.h
+index 62f7e7e257c10..f27894e50ef19 100644
+--- a/include/linux/netfilter/ipset/ip_set.h
++++ b/include/linux/netfilter/ipset/ip_set.h
+@@ -188,6 +188,8 @@ struct ip_set_type_variant {
+ 	/* Return true if "b" set is the same as "a"
+ 	 * according to the create set parameters */
+ 	bool (*same_set)(const struct ip_set *a, const struct ip_set *b);
++	/* Cancel ongoing garbage collectors before destroying the set*/
++	void (*cancel_gc)(struct ip_set *set);
+ 	/* Region-locking is used */
+ 	bool region_lock;
+ };
+@@ -239,6 +241,8 @@ extern void ip_set_type_unregister(struct ip_set_type *set_type);
+ 
+ /* A generic IP set */
+ struct ip_set {
++	/* For call_cru in destroy */
++	struct rcu_head rcu;
+ 	/* The name of the set */
+ 	char name[IPSET_MAXNAMELEN];
+ 	/* Lock protecting the set data */
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 1a41147b22e8f..80744a7b5e333 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -3020,6 +3020,7 @@
+ #define PCI_DEVICE_ID_INTEL_82443GX_0	0x71a0
+ #define PCI_DEVICE_ID_INTEL_82443GX_2	0x71a2
+ #define PCI_DEVICE_ID_INTEL_82372FB_1	0x7601
++#define PCI_DEVICE_ID_INTEL_HDA_ARL	0x7728
+ #define PCI_DEVICE_ID_INTEL_SCH_LPC	0x8119
+ #define PCI_DEVICE_ID_INTEL_SCH_IDE	0x811a
+ #define PCI_DEVICE_ID_INTEL_E6XX_CU	0x8183
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index ef236dbaa2945..7b72d93c26530 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -124,6 +124,22 @@ struct pipe_buf_operations {
+ 	bool (*get)(struct pipe_inode_info *, struct pipe_buffer *);
+ };
+ 
++/**
++ * pipe_has_watch_queue - Check whether the pipe is a watch_queue,
++ * i.e. it was created with O_NOTIFICATION_PIPE
++ * @pipe: The pipe to check
++ *
++ * Return: true if pipe is a watch queue, false otherwise.
++ */
++static inline bool pipe_has_watch_queue(const struct pipe_inode_info *pipe)
++{
++#ifdef CONFIG_WATCH_QUEUE
++	return pipe->watch_queue != NULL;
++#else
++	return false;
++#endif
++}
++
+ /**
+  * pipe_empty - Return true if the pipe is empty
+  * @head: The pipe ring head pointer
+diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
+index 718600e83020a..ca856e5829145 100644
+--- a/include/linux/pm_runtime.h
++++ b/include/linux/pm_runtime.h
+@@ -60,6 +60,8 @@ extern void pm_runtime_new_link(struct device *dev);
+ extern void pm_runtime_drop_link(struct device_link *link);
+ extern void pm_runtime_release_supplier(struct device_link *link);
+ 
++extern int devm_pm_runtime_enable(struct device *dev);
++
+ /**
+  * pm_runtime_get_if_in_use - Conditionally bump up runtime PM usage counter.
+  * @dev: Target device.
+@@ -254,6 +256,8 @@ static inline void __pm_runtime_disable(struct device *dev, bool c) {}
+ static inline void pm_runtime_allow(struct device *dev) {}
+ static inline void pm_runtime_forbid(struct device *dev) {}
+ 
++static inline int devm_pm_runtime_enable(struct device *dev) { return 0; }
++
+ static inline void pm_suspend_ignore_children(struct device *dev, bool enable) {}
+ static inline void pm_runtime_get_noresume(struct device *dev) {}
+ static inline void pm_runtime_put_noidle(struct device *dev) {}
+@@ -535,6 +539,10 @@ static inline void pm_runtime_disable(struct device *dev)
+  * Allow the runtime PM autosuspend mechanism to be used for @dev whenever
+  * requested (or "autosuspend" will be handled as direct runtime-suspend for
+  * it).
++ *
++ * NOTE: It's important to undo this with pm_runtime_dont_use_autosuspend()
++ * at driver exit time unless your driver initially enabled pm_runtime
++ * with devm_pm_runtime_enable() (which handles it for you).
+  */
+ static inline void pm_runtime_use_autosuspend(struct device *dev)
+ {
+diff --git a/include/linux/security.h b/include/linux/security.h
+index e9b4b54106147..e388b1666bcfc 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -368,6 +368,8 @@ int security_file_permission(struct file *file, int mask);
+ int security_file_alloc(struct file *file);
+ void security_file_free(struct file *file);
+ int security_file_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
++int security_file_ioctl_compat(struct file *file, unsigned int cmd,
++			       unsigned long arg);
+ int security_mmap_file(struct file *file, unsigned long prot,
+ 			unsigned long flags);
+ int security_mmap_addr(unsigned long addr);
+@@ -925,6 +927,13 @@ static inline int security_file_ioctl(struct file *file, unsigned int cmd,
+ 	return 0;
+ }
+ 
++static inline int security_file_ioctl_compat(struct file *file,
++					     unsigned int cmd,
++					     unsigned long arg)
++{
++	return 0;
++}
++
+ static inline int security_mmap_file(struct file *file, unsigned long prot,
+ 				     unsigned long flags)
+ {
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index 6df4c3356ae61..46a21984c0b22 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -254,6 +254,7 @@ struct uart_port {
+ 	struct attribute_group	*attr_group;		/* port specific attributes */
+ 	const struct attribute_group **tty_groups;	/* all attributes (serial core use only) */
+ 	struct serial_rs485     rs485;
++	const struct serial_rs485	*rs485_supported;	/* Supported mask for serial_rs485 */
+ 	struct gpio_desc	*rs485_term_gpio;	/* enable RS485 bus termination */
+ 	struct serial_iso7816   iso7816;
+ 	void			*private_data;		/* generic platform data pointer */
+diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
+index e1d88630ff243..ab7747549d23f 100644
+--- a/include/linux/spi/spi.h
++++ b/include/linux/spi/spi.h
+@@ -171,6 +171,7 @@ struct spi_device {
+ #define	SPI_MODE_1	(0|SPI_CPHA)
+ #define	SPI_MODE_2	(SPI_CPOL|0)
+ #define	SPI_MODE_3	(SPI_CPOL|SPI_CPHA)
++#define	SPI_MODE_X_MASK	(SPI_CPOL|SPI_CPHA)
+ #define	SPI_CS_HIGH	0x04			/* chipselect active high? */
+ #define	SPI_LSB_FIRST	0x08			/* per-word bits-on-wire */
+ #define	SPI_3WIRE	0x10			/* SI/SO signals shared */
+diff --git a/include/linux/stddef.h b/include/linux/stddef.h
+index 938216f8ab7e7..31fdbb784c24e 100644
+--- a/include/linux/stddef.h
++++ b/include/linux/stddef.h
+@@ -84,4 +84,17 @@ enum {
+ #define struct_group_tagged(TAG, NAME, MEMBERS...) \
+ 	__struct_group(TAG, NAME, /* no attrs */, MEMBERS)
+ 
++/**
++ * DECLARE_FLEX_ARRAY() - Declare a flexible array usable in a union
++ *
++ * @TYPE: The type of each flexible array element
++ * @NAME: The name of the flexible array member
++ *
++ * In order to have a flexible array member in a union or alone in a
++ * struct, it needs to be wrapped in an anonymous struct with at least 1
++ * named member, but that member can be empty.
++ */
++#define DECLARE_FLEX_ARRAY(TYPE, NAME) \
++	__DECLARE_FLEX_ARRAY(TYPE, NAME)
++
+ #endif
+diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
+index a058c96cf2138..17a24e1180dad 100644
+--- a/include/linux/syscalls.h
++++ b/include/linux/syscalls.h
+@@ -119,6 +119,7 @@ struct open_how;
+ #define __TYPE_IS_LL(t) (__TYPE_AS(t, 0LL) || __TYPE_AS(t, 0ULL))
+ #define __SC_LONG(t, a) __typeof(__builtin_choose_expr(__TYPE_IS_LL(t), 0LL, 0L)) a
+ #define __SC_CAST(t, a)	(__force t) a
++#define __SC_TYPE(t, a)	t
+ #define __SC_ARGS(t, a)	a
+ #define __SC_TEST(t, a) (void)BUILD_BUG_ON_ZERO(!__TYPE_IS_LL(t) && sizeof(t) > sizeof(long))
+ 
+diff --git a/include/linux/units.h b/include/linux/units.h
+index 3457179f7116a..b61e3f6d50991 100644
+--- a/include/linux/units.h
++++ b/include/linux/units.h
+@@ -20,9 +20,13 @@
+ #define PICO	1000000000000ULL
+ #define FEMTO	1000000000000000ULL
+ 
+-#define MILLIWATT_PER_WATT	1000L
+-#define MICROWATT_PER_MILLIWATT	1000L
+-#define MICROWATT_PER_WATT	1000000L
++#define HZ_PER_KHZ		1000UL
++#define KHZ_PER_MHZ		1000UL
++#define HZ_PER_MHZ		1000000UL
++
++#define MILLIWATT_PER_WATT	1000UL
++#define MICROWATT_PER_MILLIWATT	1000UL
++#define MICROWATT_PER_WATT	1000000UL
+ 
+ #define ABSOLUTE_ZERO_MILLICELSIUS -273150
+ 
+diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
+index 76dad53a410ac..0fd47f2f39eb0 100644
+--- a/include/linux/vmalloc.h
++++ b/include/linux/vmalloc.h
+@@ -112,6 +112,11 @@ extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
+ void *__vmalloc_node(unsigned long size, unsigned long align, gfp_t gfp_mask,
+ 		int node, const void *caller);
+ 
++extern void *__vmalloc_array(size_t n, size_t size, gfp_t flags);
++extern void *vmalloc_array(size_t n, size_t size);
++extern void *__vcalloc(size_t n, size_t size, gfp_t flags);
++extern void *vcalloc(size_t n, size_t size);
++
+ extern void vfree(const void *addr);
+ extern void vfree_atomic(const void *addr);
+ 
+diff --git a/include/net/af_unix.h b/include/net/af_unix.h
+index f42fdddecd417..a6b6ce8b918b7 100644
+--- a/include/net/af_unix.h
++++ b/include/net/af_unix.h
+@@ -47,12 +47,6 @@ struct scm_stat {
+ 
+ #define UNIXCB(skb)	(*(struct unix_skb_parms *)&((skb)->cb))
+ 
+-#define unix_state_lock(s)	spin_lock(&unix_sk(s)->lock)
+-#define unix_state_unlock(s)	spin_unlock(&unix_sk(s)->lock)
+-#define unix_state_lock_nested(s) \
+-				spin_lock_nested(&unix_sk(s)->lock, \
+-				SINGLE_DEPTH_NESTING)
+-
+ /* The AF_UNIX socket */
+ struct unix_sock {
+ 	/* WARNING: sk has to be the first member */
+@@ -77,6 +71,20 @@ static inline struct unix_sock *unix_sk(const struct sock *sk)
+ 	return (struct unix_sock *)sk;
+ }
+ 
++#define unix_state_lock(s)	spin_lock(&unix_sk(s)->lock)
++#define unix_state_unlock(s)	spin_unlock(&unix_sk(s)->lock)
++enum unix_socket_lock_class {
++	U_LOCK_NORMAL,
++	U_LOCK_SECOND,	/* for double locking, see unix_state_double_lock(). */
++	U_LOCK_DIAG, /* used while dumping icons, see sk_diag_dump_icons(). */
++};
++
++static inline void unix_state_lock_nested(struct sock *sk,
++				   enum unix_socket_lock_class subclass)
++{
++	spin_lock_nested(&unix_sk(sk)->lock, subclass);
++}
++
+ #define peer_wait peer_wq.wait
+ 
+ long unix_inq_len(struct sock *sk);
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index ff901aade442f..568121fa0965c 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -339,4 +339,12 @@ static inline bool inet_csk_has_ulp(struct sock *sk)
+ 	return inet_sk(sk)->is_icsk && !!inet_csk(sk)->icsk_ulp_ops;
+ }
+ 
++static inline void inet_init_csk_locks(struct sock *sk)
++{
++	struct inet_connection_sock *icsk = inet_csk(sk);
++
++	spin_lock_init(&icsk->icsk_accept_queue.rskq_lock);
++	spin_lock_init(&icsk->icsk_accept_queue.fastopenq.lock);
++}
++
+ #endif /* _INET_CONNECTION_SOCK_H */
+diff --git a/include/net/llc_pdu.h b/include/net/llc_pdu.h
+index 49aa79c7b278a..581cd37aa98b7 100644
+--- a/include/net/llc_pdu.h
++++ b/include/net/llc_pdu.h
+@@ -262,8 +262,7 @@ static inline void llc_pdu_header_init(struct sk_buff *skb, u8 type,
+  */
+ static inline void llc_pdu_decode_sa(struct sk_buff *skb, u8 *sa)
+ {
+-	if (skb->protocol == htons(ETH_P_802_2))
+-		memcpy(sa, eth_hdr(skb)->h_source, ETH_ALEN);
++	memcpy(sa, eth_hdr(skb)->h_source, ETH_ALEN);
+ }
+ 
+ /**
+@@ -275,8 +274,7 @@ static inline void llc_pdu_decode_sa(struct sk_buff *skb, u8 *sa)
+  */
+ static inline void llc_pdu_decode_da(struct sk_buff *skb, u8 *da)
+ {
+-	if (skb->protocol == htons(ETH_P_802_2))
+-		memcpy(da, eth_hdr(skb)->h_dest, ETH_ALEN);
++	memcpy(da, eth_hdr(skb)->h_dest, ETH_ALEN);
+ }
+ 
+ /**
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 2237657514e14..2da11d8c0f45e 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -142,9 +142,9 @@ static inline u16 nft_reg_load16(const u32 *sreg)
+ 	return *(u16 *)sreg;
+ }
+ 
+-static inline void nft_reg_store64(u32 *dreg, u64 val)
++static inline void nft_reg_store64(u64 *dreg, u64 val)
+ {
+-	put_unaligned(val, (u64 *)dreg);
++	put_unaligned(val, dreg);
+ }
+ 
+ static inline u64 nft_reg_load64(const u32 *sreg)
+diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h
+index 5339baadc082e..39c7a36cd6ce8 100644
+--- a/include/scsi/scsi.h
++++ b/include/scsi/scsi.h
+@@ -178,16 +178,17 @@ static inline int scsi_is_wlun(u64 lun)
+ /*
+  * Internal return values.
+  */
+-
+-#define NEEDS_RETRY     0x2001
+-#define SUCCESS         0x2002
+-#define FAILED          0x2003
+-#define QUEUED          0x2004
+-#define SOFT_ERROR      0x2005
+-#define ADD_TO_MLQUEUE  0x2006
+-#define TIMEOUT_ERROR   0x2007
+-#define SCSI_RETURN_NOT_HANDLED   0x2008
+-#define FAST_IO_FAIL	0x2009
++enum scsi_disposition {
++	NEEDS_RETRY		= 0x2001,
++	SUCCESS			= 0x2002,
++	FAILED			= 0x2003,
++	QUEUED			= 0x2004,
++	SOFT_ERROR		= 0x2005,
++	ADD_TO_MLQUEUE		= 0x2006,
++	TIMEOUT_ERROR		= 0x2007,
++	SCSI_RETURN_NOT_HANDLED	= 0x2008,
++	FAST_IO_FAIL		= 0x2009,
++};
+ 
+ /*
+  * Midlevel queue return values.
+diff --git a/include/scsi/scsi_dh.h b/include/scsi/scsi_dh.h
+index 2852e470a8edb..47ccf2f11d897 100644
+--- a/include/scsi/scsi_dh.h
++++ b/include/scsi/scsi_dh.h
+@@ -52,7 +52,8 @@ struct scsi_device_handler {
+ 	/* Filled by the hardware handler */
+ 	struct module *module;
+ 	const char *name;
+-	int (*check_sense)(struct scsi_device *, struct scsi_sense_hdr *);
++	enum scsi_disposition (*check_sense)(struct scsi_device *,
++					     struct scsi_sense_hdr *);
+ 	int (*attach)(struct scsi_device *);
+ 	void (*detach)(struct scsi_device *);
+ 	int (*activate)(struct scsi_device *, activate_complete, void *);
+diff --git a/include/scsi/scsi_eh.h b/include/scsi/scsi_eh.h
+index 6bd5ed695a5e8..468094254b3cc 100644
+--- a/include/scsi/scsi_eh.h
++++ b/include/scsi/scsi_eh.h
+@@ -17,7 +17,7 @@ extern void scsi_report_device_reset(struct Scsi_Host *, int, int);
+ extern int scsi_block_when_processing_errors(struct scsi_device *);
+ extern bool scsi_command_normalize_sense(const struct scsi_cmnd *cmd,
+ 					 struct scsi_sense_hdr *sshdr);
+-extern int scsi_check_sense(struct scsi_cmnd *);
++extern enum scsi_disposition scsi_check_sense(struct scsi_cmnd *);
+ 
+ static inline bool scsi_sense_is_deferred(const struct scsi_sense_hdr *sshdr)
+ {
+diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h
+index e65300d63d7c4..cf4431b748c23 100644
+--- a/include/uapi/linux/btrfs.h
++++ b/include/uapi/linux/btrfs.h
+@@ -576,6 +576,9 @@ struct btrfs_ioctl_clone_range_args {
+  */
+ #define BTRFS_DEFRAG_RANGE_COMPRESS 1
+ #define BTRFS_DEFRAG_RANGE_START_IO 2
++#define BTRFS_DEFRAG_RANGE_FLAGS_SUPP	(BTRFS_DEFRAG_RANGE_COMPRESS |		\
++					 BTRFS_DEFRAG_RANGE_START_IO)
++
+ struct btrfs_ioctl_defrag_range_args {
+ 	/* start of the defrag operation */
+ 	__u64 start;
+diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h
+index 163b7ec577e74..f93ffb1b67398 100644
+--- a/include/uapi/linux/netfilter/nf_tables.h
++++ b/include/uapi/linux/netfilter/nf_tables.h
+@@ -262,9 +262,11 @@ enum nft_rule_attributes {
+ /**
+  * enum nft_rule_compat_flags - nf_tables rule compat flags
+  *
++ * @NFT_RULE_COMPAT_F_UNUSED: unused
+  * @NFT_RULE_COMPAT_F_INV: invert the check result
+  */
+ enum nft_rule_compat_flags {
++	NFT_RULE_COMPAT_F_UNUSED = (1 << 0),
+ 	NFT_RULE_COMPAT_F_INV	= (1 << 1),
+ 	NFT_RULE_COMPAT_F_MASK	= NFT_RULE_COMPAT_F_INV,
+ };
+diff --git a/include/uapi/linux/stddef.h b/include/uapi/linux/stddef.h
+index c3725b4922632..46c7bab501cbd 100644
+--- a/include/uapi/linux/stddef.h
++++ b/include/uapi/linux/stddef.h
+@@ -28,4 +28,27 @@
+ 		struct { MEMBERS } ATTRS; \
+ 		struct TAG { MEMBERS } ATTRS NAME; \
+ 	}
++
++#ifdef __cplusplus
++/* sizeof(struct{}) is 1 in C++, not 0, can't use C version of the macro. */
++#define __DECLARE_FLEX_ARRAY(T, member)	\
++	T member[0]
++#else
++/**
++ * __DECLARE_FLEX_ARRAY() - Declare a flexible array usable in a union
++ *
++ * @TYPE: The type of each flexible array element
++ * @NAME: The name of the flexible array member
++ *
++ * In order to have a flexible array member in a union or alone in a
++ * struct, it needs to be wrapped in an anonymous struct with at least 1
++ * named member, but that member can be empty.
++ */
++#define __DECLARE_FLEX_ARRAY(TYPE, NAME)	\
++	struct { \
++		struct { } __empty_ ## NAME; \
++		TYPE NAME[]; \
++	}
++#endif
++
+ #endif
+diff --git a/kernel/async.c b/kernel/async.c
+index 1746cd65e271b..5dba7461fc75d 100644
+--- a/kernel/async.c
++++ b/kernel/async.c
+@@ -145,6 +145,39 @@ static void async_run_entry_fn(struct work_struct *work)
+ 	wake_up(&async_done);
+ }
+ 
++static async_cookie_t __async_schedule_node_domain(async_func_t func,
++						   void *data, int node,
++						   struct async_domain *domain,
++						   struct async_entry *entry)
++{
++	async_cookie_t newcookie;
++	unsigned long flags;
++
++	INIT_LIST_HEAD(&entry->domain_list);
++	INIT_LIST_HEAD(&entry->global_list);
++	INIT_WORK(&entry->work, async_run_entry_fn);
++	entry->func = func;
++	entry->data = data;
++	entry->domain = domain;
++
++	spin_lock_irqsave(&async_lock, flags);
++
++	/* allocate cookie and queue */
++	newcookie = entry->cookie = next_cookie++;
++
++	list_add_tail(&entry->domain_list, &domain->pending);
++	if (domain->registered)
++		list_add_tail(&entry->global_list, &async_global_pending);
++
++	atomic_inc(&entry_count);
++	spin_unlock_irqrestore(&async_lock, flags);
++
++	/* schedule for execution */
++	queue_work_node(node, system_unbound_wq, &entry->work);
++
++	return newcookie;
++}
++
+ /**
+  * async_schedule_node_domain - NUMA specific version of async_schedule_domain
+  * @func: function to execute asynchronously
+@@ -186,29 +219,8 @@ async_cookie_t async_schedule_node_domain(async_func_t func, void *data,
+ 		func(data, newcookie);
+ 		return newcookie;
+ 	}
+-	INIT_LIST_HEAD(&entry->domain_list);
+-	INIT_LIST_HEAD(&entry->global_list);
+-	INIT_WORK(&entry->work, async_run_entry_fn);
+-	entry->func = func;
+-	entry->data = data;
+-	entry->domain = domain;
+-
+-	spin_lock_irqsave(&async_lock, flags);
+-
+-	/* allocate cookie and queue */
+-	newcookie = entry->cookie = next_cookie++;
+-
+-	list_add_tail(&entry->domain_list, &domain->pending);
+-	if (domain->registered)
+-		list_add_tail(&entry->global_list, &async_global_pending);
+-
+-	atomic_inc(&entry_count);
+-	spin_unlock_irqrestore(&async_lock, flags);
+-
+-	/* schedule for execution */
+-	queue_work_node(node, system_unbound_wq, &entry->work);
+ 
+-	return newcookie;
++	return __async_schedule_node_domain(func, data, node, domain, entry);
+ }
+ EXPORT_SYMBOL_GPL(async_schedule_node_domain);
+ 
+@@ -231,6 +243,35 @@ async_cookie_t async_schedule_node(async_func_t func, void *data, int node)
+ }
+ EXPORT_SYMBOL_GPL(async_schedule_node);
+ 
++/**
++ * async_schedule_dev_nocall - A simplified variant of async_schedule_dev()
++ * @func: function to execute asynchronously
++ * @dev: device argument to be passed to function
++ *
++ * @dev is used as both the argument for the function and to provide NUMA
++ * context for where to run the function.
++ *
++ * If the asynchronous execution of @func is scheduled successfully, return
++ * true. Otherwise, do nothing and return false, unlike async_schedule_dev()
++ * that will run the function synchronously then.
++ */
++bool async_schedule_dev_nocall(async_func_t func, struct device *dev)
++{
++	struct async_entry *entry;
++
++	entry = kzalloc(sizeof(struct async_entry), GFP_KERNEL);
++
++	/* Give up if there is no memory or too much work. */
++	if (!entry || atomic_read(&entry_count) > MAX_WORK) {
++		kfree(entry);
++		return false;
++	}
++
++	__async_schedule_node_domain(func, dev, dev_to_node(dev),
++				     &async_dfl_domain, entry);
++	return true;
++}
++
+ /**
+  * async_synchronize_full - synchronize all asynchronous function calls
+  *
+diff --git a/kernel/audit.c b/kernel/audit.c
+index aeec86ed47088..2ab04e0a74418 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -490,15 +490,19 @@ static void auditd_conn_free(struct rcu_head *rcu)
+  * @pid: auditd PID
+  * @portid: auditd netlink portid
+  * @net: auditd network namespace pointer
++ * @skb: the netlink command from the audit daemon
++ * @ack: netlink ack flag, cleared if ack'd here
+  *
+  * Description:
+  * This function will obtain and drop network namespace references as
+  * necessary.  Returns zero on success, negative values on failure.
+  */
+-static int auditd_set(struct pid *pid, u32 portid, struct net *net)
++static int auditd_set(struct pid *pid, u32 portid, struct net *net,
++		      struct sk_buff *skb, bool *ack)
+ {
+ 	unsigned long flags;
+ 	struct auditd_connection *ac_old, *ac_new;
++	struct nlmsghdr *nlh;
+ 
+ 	if (!pid || !net)
+ 		return -EINVAL;
+@@ -510,6 +514,13 @@ static int auditd_set(struct pid *pid, u32 portid, struct net *net)
+ 	ac_new->portid = portid;
+ 	ac_new->net = get_net(net);
+ 
++	/* send the ack now to avoid a race with the queue backlog */
++	if (*ack) {
++		nlh = nlmsg_hdr(skb);
++		netlink_ack(skb, nlh, 0, NULL);
++		*ack = false;
++	}
++
+ 	spin_lock_irqsave(&auditd_conn_lock, flags);
+ 	ac_old = rcu_dereference_protected(auditd_conn,
+ 					   lockdep_is_held(&auditd_conn_lock));
+@@ -1203,7 +1214,8 @@ static int audit_replace(struct pid *pid)
+ 	return auditd_send_unicast_skb(skb);
+ }
+ 
+-static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
++static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
++			     bool *ack)
+ {
+ 	u32			seq;
+ 	void			*data;
+@@ -1296,7 +1308,8 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+ 				/* register a new auditd connection */
+ 				err = auditd_set(req_pid,
+ 						 NETLINK_CB(skb).portid,
+-						 sock_net(NETLINK_CB(skb).sk));
++						 sock_net(NETLINK_CB(skb).sk),
++						 skb, ack);
+ 				if (audit_enabled != AUDIT_OFF)
+ 					audit_log_config_change("audit_pid",
+ 								new_pid,
+@@ -1541,9 +1554,10 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+  * Parse the provided skb and deal with any messages that may be present,
+  * malformed skbs are discarded.
+  */
+-static void audit_receive(struct sk_buff  *skb)
++static void audit_receive(struct sk_buff *skb)
+ {
+ 	struct nlmsghdr *nlh;
++	bool ack;
+ 	/*
+ 	 * len MUST be signed for nlmsg_next to be able to dec it below 0
+ 	 * if the nlmsg_len was not aligned
+@@ -1556,9 +1570,12 @@ static void audit_receive(struct sk_buff  *skb)
+ 
+ 	audit_ctl_lock();
+ 	while (nlmsg_ok(nlh, len)) {
+-		err = audit_receive_msg(skb, nlh);
+-		/* if err or if this message says it wants a response */
+-		if (err || (nlh->nlmsg_flags & NLM_F_ACK))
++		ack = nlh->nlmsg_flags & NLM_F_ACK;
++		err = audit_receive_msg(skb, nlh, &ack);
++
++		/* send an ack if the user asked for one and audit_receive_msg
++		 * didn't already do it, or if there was an error. */
++		if (ack || err)
+ 			netlink_ack(skb, nlh, err, NULL);
+ 
+ 		nlh = nlmsg_next(nlh, &len);
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index f241bda2679d4..5102338129d5f 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -764,7 +764,7 @@ int bpf_fd_array_map_update_elem(struct bpf_map *map, struct file *map_file,
+ 	}
+ 
+ 	if (old_ptr)
+-		map->ops->map_fd_put_ptr(old_ptr);
++		map->ops->map_fd_put_ptr(map, old_ptr, true);
+ 	return 0;
+ }
+ 
+@@ -787,7 +787,7 @@ static int fd_array_map_delete_elem(struct bpf_map *map, void *key)
+ 	}
+ 
+ 	if (old_ptr) {
+-		map->ops->map_fd_put_ptr(old_ptr);
++		map->ops->map_fd_put_ptr(map, old_ptr, true);
+ 		return 0;
+ 	} else {
+ 		return -ENOENT;
+@@ -811,8 +811,9 @@ static void *prog_fd_array_get_ptr(struct bpf_map *map,
+ 	return prog;
+ }
+ 
+-static void prog_fd_array_put_ptr(void *ptr)
++static void prog_fd_array_put_ptr(struct bpf_map *map, void *ptr, bool need_defer)
+ {
++	/* bpf_prog is freed after one RCU or tasks trace grace period */
+ 	bpf_prog_put(ptr);
+ }
+ 
+@@ -1139,8 +1140,9 @@ static void *perf_event_fd_array_get_ptr(struct bpf_map *map,
+ 	return ee;
+ }
+ 
+-static void perf_event_fd_array_put_ptr(void *ptr)
++static void perf_event_fd_array_put_ptr(struct bpf_map *map, void *ptr, bool need_defer)
+ {
++	/* bpf_perf_event is freed after one RCU grace period */
+ 	bpf_event_entry_free_rcu(ptr);
+ }
+ 
+@@ -1195,7 +1197,7 @@ static void *cgroup_fd_array_get_ptr(struct bpf_map *map,
+ 	return cgroup_get_from_fd(fd);
+ }
+ 
+-static void cgroup_fd_array_put_ptr(void *ptr)
++static void cgroup_fd_array_put_ptr(struct bpf_map *map, void *ptr, bool need_defer)
+ {
+ 	/* cgroup_put free cgrp after a rcu grace period */
+ 	cgroup_put(ptr);
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 0ce445aadfdfb..ec84973142725 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -786,7 +786,7 @@ static void htab_put_fd_value(struct bpf_htab *htab, struct htab_elem *l)
+ 
+ 	if (map->ops->map_fd_put_ptr) {
+ 		ptr = fd_htab_map_get_ptr(map, l);
+-		map->ops->map_fd_put_ptr(ptr);
++		map->ops->map_fd_put_ptr(map, ptr, true);
+ 	}
+ }
+ 
+@@ -2023,7 +2023,7 @@ static void fd_htab_map_free(struct bpf_map *map)
+ 		hlist_nulls_for_each_entry_safe(l, n, head, hash_node) {
+ 			void *ptr = fd_htab_map_get_ptr(map, l);
+ 
+-			map->ops->map_fd_put_ptr(ptr);
++			map->ops->map_fd_put_ptr(map, ptr, false);
+ 		}
+ 	}
+ 
+@@ -2064,7 +2064,7 @@ int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file,
+ 
+ 	ret = htab_map_update_elem(map, key, &ptr, map_flags);
+ 	if (ret)
+-		map->ops->map_fd_put_ptr(ptr);
++		map->ops->map_fd_put_ptr(map, ptr, false);
+ 
+ 	return ret;
+ }
+diff --git a/kernel/bpf/map_in_map.c b/kernel/bpf/map_in_map.c
+index 39ab0b68cade5..0cf4cb6858105 100644
+--- a/kernel/bpf/map_in_map.c
++++ b/kernel/bpf/map_in_map.c
+@@ -100,7 +100,7 @@ void *bpf_map_fd_get_ptr(struct bpf_map *map,
+ 	return inner_map;
+ }
+ 
+-void bpf_map_fd_put_ptr(void *ptr)
++void bpf_map_fd_put_ptr(struct bpf_map *map, void *ptr, bool need_defer)
+ {
+ 	/* ptr->ops->map_free() has to go through one
+ 	 * rcu grace period by itself.
+diff --git a/kernel/bpf/map_in_map.h b/kernel/bpf/map_in_map.h
+index bcb7534afb3c0..7d61602354de8 100644
+--- a/kernel/bpf/map_in_map.h
++++ b/kernel/bpf/map_in_map.h
+@@ -13,7 +13,7 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd);
+ void bpf_map_meta_free(struct bpf_map *map_meta);
+ void *bpf_map_fd_get_ptr(struct bpf_map *map, struct file *map_file,
+ 			 int ufd);
+-void bpf_map_fd_put_ptr(void *ptr);
++void bpf_map_fd_put_ptr(struct bpf_map *map, void *ptr, bool need_defer);
+ u32 bpf_map_fd_sys_lookup_elem(void *ptr);
+ 
+ #endif
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index aaad2dce2be6f..16affa09db5c9 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1285,6 +1285,9 @@ int generic_map_delete_batch(struct bpf_map *map,
+ 	if (!max_count)
+ 		return 0;
+ 
++	if (put_user(0, &uattr->batch.count))
++		return -EFAULT;
++
+ 	key = kmalloc(map->key_size, GFP_USER | __GFP_NOWARN);
+ 	if (!key)
+ 		return -ENOMEM;
+@@ -1343,6 +1346,9 @@ int generic_map_update_batch(struct bpf_map *map,
+ 	if (!max_count)
+ 		return 0;
+ 
++	if (put_user(0, &uattr->batch.count))
++		return -EFAULT;
++
+ 	key = kmalloc(map->key_size, GFP_USER | __GFP_NOWARN);
+ 	if (!key)
+ 		return -ENOMEM;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index afedd008e0afd..bd569cf235699 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10855,9 +10855,30 @@ static DEVICE_ATTR_RW(perf_event_mux_interval_ms);
+ static struct attribute *pmu_dev_attrs[] = {
+ 	&dev_attr_type.attr,
+ 	&dev_attr_perf_event_mux_interval_ms.attr,
++	&dev_attr_nr_addr_filters.attr,
++	NULL,
++};
++
++static umode_t pmu_dev_is_visible(struct kobject *kobj, struct attribute *a, int n)
++{
++	struct device *dev = kobj_to_dev(kobj);
++	struct pmu *pmu = dev_get_drvdata(dev);
++
++	if (n == 2 && !pmu->nr_addr_filters)
++		return 0;
++
++	return a->mode;
++}
++
++static struct attribute_group pmu_dev_attr_group = {
++	.is_visible = pmu_dev_is_visible,
++	.attrs = pmu_dev_attrs,
++};
++
++static const struct attribute_group *pmu_dev_groups[] = {
++	&pmu_dev_attr_group,
+ 	NULL,
+ };
+-ATTRIBUTE_GROUPS(pmu_dev);
+ 
+ static int pmu_bus_running;
+ static struct bus_type pmu_bus = {
+@@ -10893,18 +10914,11 @@ static int pmu_dev_alloc(struct pmu *pmu)
+ 	if (ret)
+ 		goto free_dev;
+ 
+-	/* For PMUs with address filters, throw in an extra attribute: */
+-	if (pmu->nr_addr_filters)
+-		ret = device_create_file(pmu->dev, &dev_attr_nr_addr_filters);
+-
+-	if (ret)
+-		goto del_dev;
+-
+-	if (pmu->attr_update)
++	if (pmu->attr_update) {
+ 		ret = sysfs_update_groups(&pmu->dev->kobj, pmu->attr_update);
+-
+-	if (ret)
+-		goto del_dev;
++		if (ret)
++			goto del_dev;
++	}
+ 
+ out:
+ 	return ret;
+diff --git a/kernel/power/swap.c b/kernel/power/swap.c
+index 25e7cb96bb884..b288aba8040c2 100644
+--- a/kernel/power/swap.c
++++ b/kernel/power/swap.c
+@@ -603,11 +603,11 @@ static int crc32_threadfn(void *data)
+ 	unsigned i;
+ 
+ 	while (1) {
+-		wait_event(d->go, atomic_read(&d->ready) ||
++		wait_event(d->go, atomic_read_acquire(&d->ready) ||
+ 		                  kthread_should_stop());
+ 		if (kthread_should_stop()) {
+ 			d->thr = NULL;
+-			atomic_set(&d->stop, 1);
++			atomic_set_release(&d->stop, 1);
+ 			wake_up(&d->done);
+ 			break;
+ 		}
+@@ -616,7 +616,7 @@ static int crc32_threadfn(void *data)
+ 		for (i = 0; i < d->run_threads; i++)
+ 			*d->crc32 = crc32_le(*d->crc32,
+ 			                     d->unc[i], *d->unc_len[i]);
+-		atomic_set(&d->stop, 1);
++		atomic_set_release(&d->stop, 1);
+ 		wake_up(&d->done);
+ 	}
+ 	return 0;
+@@ -646,12 +646,12 @@ static int lzo_compress_threadfn(void *data)
+ 	struct cmp_data *d = data;
+ 
+ 	while (1) {
+-		wait_event(d->go, atomic_read(&d->ready) ||
++		wait_event(d->go, atomic_read_acquire(&d->ready) ||
+ 		                  kthread_should_stop());
+ 		if (kthread_should_stop()) {
+ 			d->thr = NULL;
+ 			d->ret = -1;
+-			atomic_set(&d->stop, 1);
++			atomic_set_release(&d->stop, 1);
+ 			wake_up(&d->done);
+ 			break;
+ 		}
+@@ -660,7 +660,7 @@ static int lzo_compress_threadfn(void *data)
+ 		d->ret = lzo1x_1_compress(d->unc, d->unc_len,
+ 		                          d->cmp + LZO_HEADER, &d->cmp_len,
+ 		                          d->wrk);
+-		atomic_set(&d->stop, 1);
++		atomic_set_release(&d->stop, 1);
+ 		wake_up(&d->done);
+ 	}
+ 	return 0;
+@@ -798,7 +798,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
+ 
+ 			data[thr].unc_len = off;
+ 
+-			atomic_set(&data[thr].ready, 1);
++			atomic_set_release(&data[thr].ready, 1);
+ 			wake_up(&data[thr].go);
+ 		}
+ 
+@@ -806,12 +806,12 @@ static int save_image_lzo(struct swap_map_handle *handle,
+ 			break;
+ 
+ 		crc->run_threads = thr;
+-		atomic_set(&crc->ready, 1);
++		atomic_set_release(&crc->ready, 1);
+ 		wake_up(&crc->go);
+ 
+ 		for (run_threads = thr, thr = 0; thr < run_threads; thr++) {
+ 			wait_event(data[thr].done,
+-			           atomic_read(&data[thr].stop));
++				atomic_read_acquire(&data[thr].stop));
+ 			atomic_set(&data[thr].stop, 0);
+ 
+ 			ret = data[thr].ret;
+@@ -850,7 +850,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
+ 			}
+ 		}
+ 
+-		wait_event(crc->done, atomic_read(&crc->stop));
++		wait_event(crc->done, atomic_read_acquire(&crc->stop));
+ 		atomic_set(&crc->stop, 0);
+ 	}
+ 
+@@ -1132,12 +1132,12 @@ static int lzo_decompress_threadfn(void *data)
+ 	struct dec_data *d = data;
+ 
+ 	while (1) {
+-		wait_event(d->go, atomic_read(&d->ready) ||
++		wait_event(d->go, atomic_read_acquire(&d->ready) ||
+ 		                  kthread_should_stop());
+ 		if (kthread_should_stop()) {
+ 			d->thr = NULL;
+ 			d->ret = -1;
+-			atomic_set(&d->stop, 1);
++			atomic_set_release(&d->stop, 1);
+ 			wake_up(&d->done);
+ 			break;
+ 		}
+@@ -1150,7 +1150,7 @@ static int lzo_decompress_threadfn(void *data)
+ 			flush_icache_range((unsigned long)d->unc,
+ 					   (unsigned long)d->unc + d->unc_len);
+ 
+-		atomic_set(&d->stop, 1);
++		atomic_set_release(&d->stop, 1);
+ 		wake_up(&d->done);
+ 	}
+ 	return 0;
+@@ -1338,7 +1338,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
+ 		}
+ 
+ 		if (crc->run_threads) {
+-			wait_event(crc->done, atomic_read(&crc->stop));
++			wait_event(crc->done, atomic_read_acquire(&crc->stop));
+ 			atomic_set(&crc->stop, 0);
+ 			crc->run_threads = 0;
+ 		}
+@@ -1374,7 +1374,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
+ 					pg = 0;
+ 			}
+ 
+-			atomic_set(&data[thr].ready, 1);
++			atomic_set_release(&data[thr].ready, 1);
+ 			wake_up(&data[thr].go);
+ 		}
+ 
+@@ -1393,7 +1393,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
+ 
+ 		for (run_threads = thr, thr = 0; thr < run_threads; thr++) {
+ 			wait_event(data[thr].done,
+-			           atomic_read(&data[thr].stop));
++				atomic_read_acquire(&data[thr].stop));
+ 			atomic_set(&data[thr].stop, 0);
+ 
+ 			ret = data[thr].ret;
+@@ -1424,7 +1424,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
+ 				ret = snapshot_write_next(snapshot);
+ 				if (ret <= 0) {
+ 					crc->run_threads = thr + 1;
+-					atomic_set(&crc->ready, 1);
++					atomic_set_release(&crc->ready, 1);
+ 					wake_up(&crc->go);
+ 					goto out_finish;
+ 				}
+@@ -1432,13 +1432,13 @@ static int load_image_lzo(struct swap_map_handle *handle,
+ 		}
+ 
+ 		crc->run_threads = thr;
+-		atomic_set(&crc->ready, 1);
++		atomic_set_release(&crc->ready, 1);
+ 		wake_up(&crc->go);
+ 	}
+ 
+ out_finish:
+ 	if (crc->run_threads) {
+-		wait_event(crc->done, atomic_read(&crc->stop));
++		wait_event(crc->done, atomic_read_acquire(&crc->stop));
+ 		atomic_set(&crc->stop, 0);
+ 	}
+ 	stop = ktime_get();
+diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
+index cc7cd512e4e33..1b7c3bdba8f75 100644
+--- a/kernel/sched/membarrier.c
++++ b/kernel/sched/membarrier.c
+@@ -34,6 +34,8 @@
+ 	| MEMBARRIER_PRIVATE_EXPEDITED_SYNC_CORE_BITMASK		\
+ 	| MEMBARRIER_PRIVATE_EXPEDITED_RSEQ_BITMASK)
+ 
++static DEFINE_MUTEX(membarrier_ipi_mutex);
++
+ static void ipi_mb(void *info)
+ {
+ 	smp_mb();	/* IPIs should be serializing but paranoid. */
+@@ -119,6 +121,7 @@ static int membarrier_global_expedited(void)
+ 	if (!zalloc_cpumask_var(&tmpmask, GFP_KERNEL))
+ 		return -ENOMEM;
+ 
++	mutex_lock(&membarrier_ipi_mutex);
+ 	cpus_read_lock();
+ 	rcu_read_lock();
+ 	for_each_online_cpu(cpu) {
+@@ -165,6 +168,8 @@ static int membarrier_global_expedited(void)
+ 	 * rq->curr modification in scheduler.
+ 	 */
+ 	smp_mb();	/* exit from system call is not a mb */
++	mutex_unlock(&membarrier_ipi_mutex);
++
+ 	return 0;
+ }
+ 
+@@ -208,6 +213,7 @@ static int membarrier_private_expedited(int flags, int cpu_id)
+ 	if (cpu_id < 0 && !zalloc_cpumask_var(&tmpmask, GFP_KERNEL))
+ 		return -ENOMEM;
+ 
++	mutex_lock(&membarrier_ipi_mutex);
+ 	cpus_read_lock();
+ 
+ 	if (cpu_id >= 0) {
+@@ -280,6 +286,7 @@ static int membarrier_private_expedited(int flags, int cpu_id)
+ 	 * rq->curr modification in scheduler.
+ 	 */
+ 	smp_mb();	/* exit from system call is not a mb */
++	mutex_unlock(&membarrier_ipi_mutex);
+ 
+ 	return 0;
+ }
+@@ -321,6 +328,7 @@ static int sync_runqueues_membarrier_state(struct mm_struct *mm)
+ 	 * between threads which are users of @mm has its membarrier state
+ 	 * updated.
+ 	 */
++	mutex_lock(&membarrier_ipi_mutex);
+ 	cpus_read_lock();
+ 	rcu_read_lock();
+ 	for_each_online_cpu(cpu) {
+@@ -337,6 +345,7 @@ static int sync_runqueues_membarrier_state(struct mm_struct *mm)
+ 
+ 	free_cpumask_var(tmpmask);
+ 	cpus_read_unlock();
++	mutex_unlock(&membarrier_ipi_mutex);
+ 
+ 	return 0;
+ }
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 86e0fbe583f2b..754e93edb2f79 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -118,6 +118,7 @@ static DECLARE_WORK(watchdog_work, clocksource_watchdog_work);
+ static DEFINE_SPINLOCK(watchdog_lock);
+ static int watchdog_running;
+ static atomic_t watchdog_reset_pending;
++static int64_t watchdog_max_interval;
+ 
+ static inline void clocksource_watchdog_lock(unsigned long *flags)
+ {
+@@ -136,6 +137,7 @@ static void __clocksource_change_rating(struct clocksource *cs, int rating);
+  * Interval: 0.5sec.
+  */
+ #define WATCHDOG_INTERVAL (HZ >> 1)
++#define WATCHDOG_INTERVAL_MAX_NS ((2 * WATCHDOG_INTERVAL) * (NSEC_PER_SEC / HZ))
+ 
+ static void clocksource_watchdog_work(struct work_struct *work)
+ {
+@@ -324,8 +326,8 @@ static inline void clocksource_reset_watchdog(void)
+ static void clocksource_watchdog(struct timer_list *unused)
+ {
+ 	u64 csnow, wdnow, cslast, wdlast, delta;
++	int64_t wd_nsec, cs_nsec, interval;
+ 	int next_cpu, reset_pending;
+-	int64_t wd_nsec, cs_nsec;
+ 	struct clocksource *cs;
+ 	enum wd_read_status read_ret;
+ 	unsigned long extra_wait = 0;
+@@ -395,6 +397,27 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 		if (atomic_read(&watchdog_reset_pending))
+ 			continue;
+ 
++		/*
++		 * The processing of timer softirqs can get delayed (usually
++		 * on account of ksoftirqd not getting to run in a timely
++		 * manner), which causes the watchdog interval to stretch.
++		 * Skew detection may fail for longer watchdog intervals
++		 * on account of fixed margins being used.
++		 * Some clocksources, e.g. acpi_pm, cannot tolerate
++		 * watchdog intervals longer than a few seconds.
++		 */
++		interval = max(cs_nsec, wd_nsec);
++		if (unlikely(interval > WATCHDOG_INTERVAL_MAX_NS)) {
++			if (system_state > SYSTEM_SCHEDULING &&
++			    interval > 2 * watchdog_max_interval) {
++				watchdog_max_interval = interval;
++				pr_warn("Long readout interval, skipping watchdog check: cs_nsec: %lld wd_nsec: %lld\n",
++					cs_nsec, wd_nsec);
++			}
++			watchdog_timer.expires = jiffies;
++			continue;
++		}
++
+ 		/* Check the deviation from the watchdog clocksource. */
+ 		md = cs->uncertainty_margin + watchdog->uncertainty_margin;
+ 		if (abs(cs_nsec - wd_nsec) > md) {
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index ede09dda36e90..2b2a6e29219dc 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -980,6 +980,7 @@ static int enqueue_hrtimer(struct hrtimer *timer,
+ 			   enum hrtimer_mode mode)
+ {
+ 	debug_activate(timer, mode);
++	WARN_ON_ONCE(!base->cpu_base->online);
+ 
+ 	base->cpu_base->active_bases |= 1 << base->index;
+ 
+@@ -2078,6 +2079,7 @@ int hrtimers_prepare_cpu(unsigned int cpu)
+ 	cpu_base->softirq_next_timer = NULL;
+ 	cpu_base->expires_next = KTIME_MAX;
+ 	cpu_base->softirq_expires_next = KTIME_MAX;
++	cpu_base->online = 1;
+ 	hrtimer_cpu_base_init_expiry_lock(cpu_base);
+ 	return 0;
+ }
+@@ -2145,6 +2147,7 @@ int hrtimers_cpu_dying(unsigned int dying_cpu)
+ 	smp_call_function_single(ncpu, retrigger_next_event, NULL, 0);
+ 
+ 	raw_spin_unlock(&new_base->lock);
++	old_base->online = 0;
+ 	raw_spin_unlock(&old_base->lock);
+ 
+ 	return 0;
+@@ -2161,7 +2164,7 @@ void __init hrtimers_init(void)
+ /**
+  * schedule_hrtimeout_range_clock - sleep until timeout
+  * @expires:	timeout value (ktime_t)
+- * @delta:	slack in expires timeout (ktime_t)
++ * @delta:	slack in expires timeout (ktime_t) for SCHED_OTHER tasks
+  * @mode:	timer mode
+  * @clock_id:	timer clock to be used
+  */
+@@ -2188,6 +2191,13 @@ schedule_hrtimeout_range_clock(ktime_t *expires, u64 delta,
+ 		return -EINTR;
+ 	}
+ 
++	/*
++	 * Override any slack passed by the user if under
++	 * rt contraints.
++	 */
++	if (rt_task(current))
++		delta = 0;
++
+ 	hrtimer_init_sleeper_on_stack(&t, clock_id, mode);
+ 	hrtimer_set_expires_range_ns(&t.timer, *expires, delta);
+ 	hrtimer_sleeper_start_expires(&t, mode);
+@@ -2207,7 +2217,7 @@ EXPORT_SYMBOL_GPL(schedule_hrtimeout_range_clock);
+ /**
+  * schedule_hrtimeout_range - sleep until timeout
+  * @expires:	timeout value (ktime_t)
+- * @delta:	slack in expires timeout (ktime_t)
++ * @delta:	slack in expires timeout (ktime_t) for SCHED_OTHER tasks
+  * @mode:	timer mode
+  *
+  * Make the current task sleep until the given expiry time has
+@@ -2215,7 +2225,8 @@ EXPORT_SYMBOL_GPL(schedule_hrtimeout_range_clock);
+  * the current task state has been set (see set_current_state()).
+  *
+  * The @delta argument gives the kernel the freedom to schedule the
+- * actual wakeup to a time that is both power and performance friendly.
++ * actual wakeup to a time that is both power and performance friendly
++ * for regular (non RT/DL) tasks.
+  * The kernel give the normal best effort behavior for "@expires+@delta",
+  * but may decide to fire the timer earlier, but no earlier than @expires.
+  *
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index bc00ab0118e6c..d1693c26958fc 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -1440,6 +1440,7 @@ void tick_cancel_sched_timer(int cpu)
+ {
+ 	struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
+ 	ktime_t idle_sleeptime, iowait_sleeptime;
++	unsigned long idle_calls, idle_sleeps;
+ 
+ # ifdef CONFIG_HIGH_RES_TIMERS
+ 	if (ts->sched_timer.base)
+@@ -1448,9 +1449,13 @@ void tick_cancel_sched_timer(int cpu)
+ 
+ 	idle_sleeptime = ts->idle_sleeptime;
+ 	iowait_sleeptime = ts->iowait_sleeptime;
++	idle_calls = ts->idle_calls;
++	idle_sleeps = ts->idle_sleeps;
+ 	memset(ts, 0, sizeof(*ts));
+ 	ts->idle_sleeptime = idle_sleeptime;
+ 	ts->iowait_sleeptime = iowait_sleeptime;
++	ts->idle_calls = idle_calls;
++	ts->idle_sleeps = idle_sleeps;
+ }
+ #endif
+ 
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 041b91c2ba10a..4a43b8846b49f 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1008,7 +1008,7 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ 		full = 0;
+ 	} else {
+ 		if (!cpumask_test_cpu(cpu, buffer->cpumask))
+-			return -EINVAL;
++			return EPOLLERR;
+ 
+ 		cpu_buffer = buffer->buffers[cpu];
+ 		work = &cpu_buffer->irq_work;
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 548f694fc8574..22e1e57118698 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -39,6 +39,7 @@
+ #include <linux/slab.h>
+ #include <linux/ctype.h>
+ #include <linux/init.h>
++#include <linux/kmemleak.h>
+ #include <linux/poll.h>
+ #include <linux/nmi.h>
+ #include <linux/fs.h>
+@@ -2239,7 +2240,7 @@ struct saved_cmdlines_buffer {
+ 	unsigned *map_cmdline_to_pid;
+ 	unsigned cmdline_num;
+ 	int cmdline_idx;
+-	char *saved_cmdlines;
++	char saved_cmdlines[];
+ };
+ static struct saved_cmdlines_buffer *savedcmd;
+ 
+@@ -2253,47 +2254,60 @@ static inline void set_cmdline(int idx, const char *cmdline)
+ 	strncpy(get_saved_cmdlines(idx), cmdline, TASK_COMM_LEN);
+ }
+ 
+-static int allocate_cmdlines_buffer(unsigned int val,
+-				    struct saved_cmdlines_buffer *s)
++static void free_saved_cmdlines_buffer(struct saved_cmdlines_buffer *s)
+ {
++	int order = get_order(sizeof(*s) + s->cmdline_num * TASK_COMM_LEN);
++
++	kfree(s->map_cmdline_to_pid);
++	kmemleak_free(s);
++	free_pages((unsigned long)s, order);
++}
++
++static struct saved_cmdlines_buffer *allocate_cmdlines_buffer(unsigned int val)
++{
++	struct saved_cmdlines_buffer *s;
++	struct page *page;
++	int orig_size, size;
++	int order;
++
++	/* Figure out how much is needed to hold the given number of cmdlines */
++	orig_size = sizeof(*s) + val * TASK_COMM_LEN;
++	order = get_order(orig_size);
++	size = 1 << (order + PAGE_SHIFT);
++	page = alloc_pages(GFP_KERNEL, order);
++	if (!page)
++		return NULL;
++
++	s = page_address(page);
++	kmemleak_alloc(s, size, 1, GFP_KERNEL);
++	memset(s, 0, sizeof(*s));
++
++	/* Round up to actual allocation */
++	val = (size - sizeof(*s)) / TASK_COMM_LEN;
++	s->cmdline_num = val;
++
+ 	s->map_cmdline_to_pid = kmalloc_array(val,
+ 					      sizeof(*s->map_cmdline_to_pid),
+ 					      GFP_KERNEL);
+-	if (!s->map_cmdline_to_pid)
+-		return -ENOMEM;
+-
+-	s->saved_cmdlines = kmalloc_array(TASK_COMM_LEN, val, GFP_KERNEL);
+-	if (!s->saved_cmdlines) {
+-		kfree(s->map_cmdline_to_pid);
+-		return -ENOMEM;
++	if (!s->map_cmdline_to_pid) {
++		free_saved_cmdlines_buffer(s);
++		return NULL;
+ 	}
+ 
+ 	s->cmdline_idx = 0;
+-	s->cmdline_num = val;
+ 	memset(&s->map_pid_to_cmdline, NO_CMDLINE_MAP,
+ 	       sizeof(s->map_pid_to_cmdline));
+ 	memset(s->map_cmdline_to_pid, NO_CMDLINE_MAP,
+ 	       val * sizeof(*s->map_cmdline_to_pid));
+ 
+-	return 0;
++	return s;
+ }
+ 
+ static int trace_create_savedcmd(void)
+ {
+-	int ret;
+-
+-	savedcmd = kmalloc(sizeof(*savedcmd), GFP_KERNEL);
+-	if (!savedcmd)
+-		return -ENOMEM;
++	savedcmd = allocate_cmdlines_buffer(SAVED_CMDLINES_DEFAULT);
+ 
+-	ret = allocate_cmdlines_buffer(SAVED_CMDLINES_DEFAULT, savedcmd);
+-	if (ret < 0) {
+-		kfree(savedcmd);
+-		savedcmd = NULL;
+-		return -ENOMEM;
+-	}
+-
+-	return 0;
++	return savedcmd ? 0 : -ENOMEM;
+ }
+ 
+ int is_tracing_stopped(void)
+@@ -5603,26 +5617,14 @@ tracing_saved_cmdlines_size_read(struct file *filp, char __user *ubuf,
+ 	return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
+ }
+ 
+-static void free_saved_cmdlines_buffer(struct saved_cmdlines_buffer *s)
+-{
+-	kfree(s->saved_cmdlines);
+-	kfree(s->map_cmdline_to_pid);
+-	kfree(s);
+-}
+-
+ static int tracing_resize_saved_cmdlines(unsigned int val)
+ {
+ 	struct saved_cmdlines_buffer *s, *savedcmd_temp;
+ 
+-	s = kmalloc(sizeof(*s), GFP_KERNEL);
++	s = allocate_cmdlines_buffer(val);
+ 	if (!s)
+ 		return -ENOMEM;
+ 
+-	if (allocate_cmdlines_buffer(val, s) < 0) {
+-		kfree(s);
+-		return -ENOMEM;
+-	}
+-
+ 	preempt_disable();
+ 	arch_spin_lock(&trace_cmdline_lock);
+ 	savedcmd_temp = savedcmd;
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index 4bc90965abb25..e4340958da2df 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -1140,8 +1140,10 @@ register_snapshot_trigger(char *glob, struct event_trigger_ops *ops,
+ 			  struct event_trigger_data *data,
+ 			  struct trace_event_file *file)
+ {
+-	if (tracing_alloc_snapshot_instance(file->tr) != 0)
+-		return 0;
++	int ret = tracing_alloc_snapshot_instance(file->tr);
++
++	if (ret < 0)
++		return ret;
+ 
+ 	return register_trigger(glob, ops, data, file);
+ }
+diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
+index 51a9d1185033b..d47641f9740bc 100644
+--- a/kernel/trace/tracing_map.c
++++ b/kernel/trace/tracing_map.c
+@@ -574,7 +574,12 @@ __tracing_map_insert(struct tracing_map *map, void *key, bool lookup_only)
+ 				}
+ 
+ 				memcpy(elt->key, key, map->key_size);
+-				entry->val = elt;
++				/*
++				 * Ensure the initialization is visible and
++				 * publish the elt.
++				 */
++				smp_wmb();
++				WRITE_ONCE(entry->val, elt);
+ 				atomic64_inc(&map->hits);
+ 
+ 				return entry->val;
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index 4dd9283f6fea0..b055741a5a4dd 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -612,9 +612,8 @@ static void debug_objects_fill_pool(void)
+ static void
+ __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack)
+ {
+-	enum debug_obj_state state;
++	struct debug_obj *obj, o;
+ 	struct debug_bucket *db;
+-	struct debug_obj *obj;
+ 	unsigned long flags;
+ 
+ 	debug_objects_fill_pool();
+@@ -635,24 +634,18 @@ __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack
+ 	case ODEBUG_STATE_INIT:
+ 	case ODEBUG_STATE_INACTIVE:
+ 		obj->state = ODEBUG_STATE_INIT;
+-		break;
+-
+-	case ODEBUG_STATE_ACTIVE:
+-		state = obj->state;
+-		raw_spin_unlock_irqrestore(&db->lock, flags);
+-		debug_print_object(obj, "init");
+-		debug_object_fixup(descr->fixup_init, addr, state);
+-		return;
+-
+-	case ODEBUG_STATE_DESTROYED:
+ 		raw_spin_unlock_irqrestore(&db->lock, flags);
+-		debug_print_object(obj, "init");
+ 		return;
+ 	default:
+ 		break;
+ 	}
+ 
++	o = *obj;
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
++	debug_print_object(&o, "init");
++
++	if (o.state == ODEBUG_STATE_ACTIVE)
++		debug_object_fixup(descr->fixup_init, addr, o.state);
+ }
+ 
+ /**
+@@ -693,11 +686,9 @@ EXPORT_SYMBOL_GPL(debug_object_init_on_stack);
+ int debug_object_activate(void *addr, const struct debug_obj_descr *descr)
+ {
+ 	struct debug_obj o = { .object = addr, .state = ODEBUG_STATE_NOTAVAILABLE, .descr = descr };
+-	enum debug_obj_state state;
+ 	struct debug_bucket *db;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+-	int ret;
+ 
+ 	if (!debug_objects_enabled)
+ 		return 0;
+@@ -709,49 +700,38 @@ int debug_object_activate(void *addr, const struct debug_obj_descr *descr)
+ 	raw_spin_lock_irqsave(&db->lock, flags);
+ 
+ 	obj = lookup_object_or_alloc(addr, db, descr, false, true);
+-	if (likely(!IS_ERR_OR_NULL(obj))) {
+-		bool print_object = false;
+-
++	if (unlikely(!obj)) {
++		raw_spin_unlock_irqrestore(&db->lock, flags);
++		debug_objects_oom();
++		return 0;
++	} else if (likely(!IS_ERR(obj))) {
+ 		switch (obj->state) {
+-		case ODEBUG_STATE_INIT:
+-		case ODEBUG_STATE_INACTIVE:
+-			obj->state = ODEBUG_STATE_ACTIVE;
+-			ret = 0;
+-			break;
+-
+ 		case ODEBUG_STATE_ACTIVE:
+-			state = obj->state;
+-			raw_spin_unlock_irqrestore(&db->lock, flags);
+-			debug_print_object(obj, "activate");
+-			ret = debug_object_fixup(descr->fixup_activate, addr, state);
+-			return ret ? 0 : -EINVAL;
+-
+ 		case ODEBUG_STATE_DESTROYED:
+-			print_object = true;
+-			ret = -EINVAL;
++			o = *obj;
+ 			break;
++		case ODEBUG_STATE_INIT:
++		case ODEBUG_STATE_INACTIVE:
++			obj->state = ODEBUG_STATE_ACTIVE;
++			fallthrough;
+ 		default:
+-			ret = 0;
+-			break;
++			raw_spin_unlock_irqrestore(&db->lock, flags);
++			return 0;
+ 		}
+-		raw_spin_unlock_irqrestore(&db->lock, flags);
+-		if (print_object)
+-			debug_print_object(obj, "activate");
+-		return ret;
+ 	}
+ 
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
++	debug_print_object(&o, "activate");
+ 
+-	/* If NULL the allocation has hit OOM */
+-	if (!obj) {
+-		debug_objects_oom();
+-		return 0;
++	switch (o.state) {
++	case ODEBUG_STATE_ACTIVE:
++	case ODEBUG_STATE_NOTAVAILABLE:
++		if (debug_object_fixup(descr->fixup_activate, addr, o.state))
++			return 0;
++		fallthrough;
++	default:
++		return -EINVAL;
+ 	}
+-
+-	/* Object is neither static nor tracked. It's not initialized */
+-	debug_print_object(&o, "activate");
+-	ret = debug_object_fixup(descr->fixup_activate, addr, ODEBUG_STATE_NOTAVAILABLE);
+-	return ret ? 0 : -EINVAL;
+ }
+ EXPORT_SYMBOL_GPL(debug_object_activate);
+ 
+@@ -762,10 +742,10 @@ EXPORT_SYMBOL_GPL(debug_object_activate);
+  */
+ void debug_object_deactivate(void *addr, const struct debug_obj_descr *descr)
+ {
++	struct debug_obj o = { .object = addr, .state = ODEBUG_STATE_NOTAVAILABLE, .descr = descr };
+ 	struct debug_bucket *db;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+-	bool print_object = false;
+ 
+ 	if (!debug_objects_enabled)
+ 		return;
+@@ -777,33 +757,24 @@ void debug_object_deactivate(void *addr, const struct debug_obj_descr *descr)
+ 	obj = lookup_object(addr, db);
+ 	if (obj) {
+ 		switch (obj->state) {
++		case ODEBUG_STATE_DESTROYED:
++			break;
+ 		case ODEBUG_STATE_INIT:
+ 		case ODEBUG_STATE_INACTIVE:
+ 		case ODEBUG_STATE_ACTIVE:
+-			if (!obj->astate)
+-				obj->state = ODEBUG_STATE_INACTIVE;
+-			else
+-				print_object = true;
+-			break;
+-
+-		case ODEBUG_STATE_DESTROYED:
+-			print_object = true;
+-			break;
++			if (obj->astate)
++				break;
++			obj->state = ODEBUG_STATE_INACTIVE;
++			fallthrough;
+ 		default:
+-			break;
++			raw_spin_unlock_irqrestore(&db->lock, flags);
++			return;
+ 		}
++		o = *obj;
+ 	}
+ 
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
+-	if (!obj) {
+-		struct debug_obj o = { .object = addr,
+-				       .state = ODEBUG_STATE_NOTAVAILABLE,
+-				       .descr = descr };
+-
+-		debug_print_object(&o, "deactivate");
+-	} else if (print_object) {
+-		debug_print_object(obj, "deactivate");
+-	}
++	debug_print_object(&o, "deactivate");
+ }
+ EXPORT_SYMBOL_GPL(debug_object_deactivate);
+ 
+@@ -814,11 +785,9 @@ EXPORT_SYMBOL_GPL(debug_object_deactivate);
+  */
+ void debug_object_destroy(void *addr, const struct debug_obj_descr *descr)
+ {
+-	enum debug_obj_state state;
++	struct debug_obj *obj, o;
+ 	struct debug_bucket *db;
+-	struct debug_obj *obj;
+ 	unsigned long flags;
+-	bool print_object = false;
+ 
+ 	if (!debug_objects_enabled)
+ 		return;
+@@ -828,32 +797,31 @@ void debug_object_destroy(void *addr, const struct debug_obj_descr *descr)
+ 	raw_spin_lock_irqsave(&db->lock, flags);
+ 
+ 	obj = lookup_object(addr, db);
+-	if (!obj)
+-		goto out_unlock;
++	if (!obj) {
++		raw_spin_unlock_irqrestore(&db->lock, flags);
++		return;
++	}
+ 
+ 	switch (obj->state) {
++	case ODEBUG_STATE_ACTIVE:
++	case ODEBUG_STATE_DESTROYED:
++		break;
+ 	case ODEBUG_STATE_NONE:
+ 	case ODEBUG_STATE_INIT:
+ 	case ODEBUG_STATE_INACTIVE:
+ 		obj->state = ODEBUG_STATE_DESTROYED;
+-		break;
+-	case ODEBUG_STATE_ACTIVE:
+-		state = obj->state;
++		fallthrough;
++	default:
+ 		raw_spin_unlock_irqrestore(&db->lock, flags);
+-		debug_print_object(obj, "destroy");
+-		debug_object_fixup(descr->fixup_destroy, addr, state);
+ 		return;
+-
+-	case ODEBUG_STATE_DESTROYED:
+-		print_object = true;
+-		break;
+-	default:
+-		break;
+ 	}
+-out_unlock:
++
++	o = *obj;
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
+-	if (print_object)
+-		debug_print_object(obj, "destroy");
++	debug_print_object(&o, "destroy");
++
++	if (o.state == ODEBUG_STATE_ACTIVE)
++		debug_object_fixup(descr->fixup_destroy, addr, o.state);
+ }
+ EXPORT_SYMBOL_GPL(debug_object_destroy);
+ 
+@@ -864,9 +832,8 @@ EXPORT_SYMBOL_GPL(debug_object_destroy);
+  */
+ void debug_object_free(void *addr, const struct debug_obj_descr *descr)
+ {
+-	enum debug_obj_state state;
++	struct debug_obj *obj, o;
+ 	struct debug_bucket *db;
+-	struct debug_obj *obj;
+ 	unsigned long flags;
+ 
+ 	if (!debug_objects_enabled)
+@@ -877,24 +844,26 @@ void debug_object_free(void *addr, const struct debug_obj_descr *descr)
+ 	raw_spin_lock_irqsave(&db->lock, flags);
+ 
+ 	obj = lookup_object(addr, db);
+-	if (!obj)
+-		goto out_unlock;
++	if (!obj) {
++		raw_spin_unlock_irqrestore(&db->lock, flags);
++		return;
++	}
+ 
+ 	switch (obj->state) {
+ 	case ODEBUG_STATE_ACTIVE:
+-		state = obj->state;
+-		raw_spin_unlock_irqrestore(&db->lock, flags);
+-		debug_print_object(obj, "free");
+-		debug_object_fixup(descr->fixup_free, addr, state);
+-		return;
++		break;
+ 	default:
+ 		hlist_del(&obj->node);
+ 		raw_spin_unlock_irqrestore(&db->lock, flags);
+ 		free_object(obj);
+ 		return;
+ 	}
+-out_unlock:
++
++	o = *obj;
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
++	debug_print_object(&o, "free");
++
++	debug_object_fixup(descr->fixup_free, addr, o.state);
+ }
+ EXPORT_SYMBOL_GPL(debug_object_free);
+ 
+@@ -946,10 +915,10 @@ void
+ debug_object_active_state(void *addr, const struct debug_obj_descr *descr,
+ 			  unsigned int expect, unsigned int next)
+ {
++	struct debug_obj o = { .object = addr, .state = ODEBUG_STATE_NOTAVAILABLE, .descr = descr };
+ 	struct debug_bucket *db;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+-	bool print_object = false;
+ 
+ 	if (!debug_objects_enabled)
+ 		return;
+@@ -962,28 +931,19 @@ debug_object_active_state(void *addr, const struct debug_obj_descr *descr,
+ 	if (obj) {
+ 		switch (obj->state) {
+ 		case ODEBUG_STATE_ACTIVE:
+-			if (obj->astate == expect)
+-				obj->astate = next;
+-			else
+-				print_object = true;
+-			break;
+-
++			if (obj->astate != expect)
++				break;
++			obj->astate = next;
++			raw_spin_unlock_irqrestore(&db->lock, flags);
++			return;
+ 		default:
+-			print_object = true;
+ 			break;
+ 		}
++		o = *obj;
+ 	}
+ 
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
+-	if (!obj) {
+-		struct debug_obj o = { .object = addr,
+-				       .state = ODEBUG_STATE_NOTAVAILABLE,
+-				       .descr = descr };
+-
+-		debug_print_object(&o, "active_state");
+-	} else if (print_object) {
+-		debug_print_object(obj, "active_state");
+-	}
++	debug_print_object(&o, "active_state");
+ }
+ EXPORT_SYMBOL_GPL(debug_object_active_state);
+ 
+@@ -991,12 +951,10 @@ EXPORT_SYMBOL_GPL(debug_object_active_state);
+ static void __debug_check_no_obj_freed(const void *address, unsigned long size)
+ {
+ 	unsigned long flags, oaddr, saddr, eaddr, paddr, chunks;
+-	const struct debug_obj_descr *descr;
+-	enum debug_obj_state state;
++	int cnt, objs_checked = 0;
++	struct debug_obj *obj, o;
+ 	struct debug_bucket *db;
+ 	struct hlist_node *tmp;
+-	struct debug_obj *obj;
+-	int cnt, objs_checked = 0;
+ 
+ 	saddr = (unsigned long) address;
+ 	eaddr = saddr + size;
+@@ -1018,12 +976,10 @@ static void __debug_check_no_obj_freed(const void *address, unsigned long size)
+ 
+ 			switch (obj->state) {
+ 			case ODEBUG_STATE_ACTIVE:
+-				descr = obj->descr;
+-				state = obj->state;
++				o = *obj;
+ 				raw_spin_unlock_irqrestore(&db->lock, flags);
+-				debug_print_object(obj, "free");
+-				debug_object_fixup(descr->fixup_free,
+-						   (void *) oaddr, state);
++				debug_print_object(&o, "free");
++				debug_object_fixup(o.descr->fixup_free, (void *)oaddr, o.state);
+ 				goto repeat;
+ 			default:
+ 				hlist_del(&obj->node);
+diff --git a/lib/mpi/ec.c b/lib/mpi/ec.c
+index c21470122dfc1..941ba0b0067ef 100644
+--- a/lib/mpi/ec.c
++++ b/lib/mpi/ec.c
+@@ -584,6 +584,9 @@ void mpi_ec_init(struct mpi_ec_ctx *ctx, enum gcry_mpi_ec_models model,
+ 	ctx->a = mpi_copy(a);
+ 	ctx->b = mpi_copy(b);
+ 
++	ctx->d = NULL;
++	ctx->t.two_inv_p = NULL;
++
+ 	ctx->t.p_barrett = use_barrett > 0 ? mpi_barrett_init(ctx->p, 0) : NULL;
+ 
+ 	mpi_ec_get_reset(ctx);
+diff --git a/mm/page-writeback.c b/mm/page-writeback.c
+index eb34d204d4ee7..e8d7d3c2bfcb8 100644
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -1524,7 +1524,7 @@ static inline void wb_dirty_limits(struct dirty_throttle_control *dtc)
+ 	 */
+ 	dtc->wb_thresh = __wb_calc_thresh(dtc);
+ 	dtc->wb_bg_thresh = dtc->thresh ?
+-		div_u64((u64)dtc->wb_thresh * dtc->bg_thresh, dtc->thresh) : 0;
++		div64_u64(dtc->wb_thresh * dtc->bg_thresh, dtc->thresh) : 0;
+ 
+ 	/*
+ 	 * In order to avoid the stacked BDI deadlock we need
+diff --git a/mm/sparse.c b/mm/sparse.c
+index 33406ea2ecc44..db0a7c53775b8 100644
+--- a/mm/sparse.c
++++ b/mm/sparse.c
+@@ -809,6 +809,13 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
+ 	if (empty) {
+ 		unsigned long section_nr = pfn_to_section_nr(pfn);
+ 
++		/*
++		 * Mark the section invalid so that valid_section()
++		 * return false. This prevents code from dereferencing
++		 * ms->usage array.
++		 */
++		ms->section_mem_map &= ~SECTION_HAS_MEM_MAP;
++
+ 		/*
+ 		 * When removing an early section, the usage map is kept (as the
+ 		 * usage maps of other sections fall into the same page). It
+@@ -817,16 +824,10 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
+ 		 * was allocated during boot.
+ 		 */
+ 		if (!PageReserved(virt_to_page(ms->usage))) {
+-			kfree(ms->usage);
+-			ms->usage = NULL;
++			kfree_rcu(ms->usage, rcu);
++			WRITE_ONCE(ms->usage, NULL);
+ 		}
+ 		memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+-		/*
+-		 * Mark the section invalid so that valid_section()
+-		 * return false. This prevents code from dereferencing
+-		 * ms->usage array.
+-		 */
+-		ms->section_mem_map &= ~SECTION_HAS_MEM_MAP;
+ 	}
+ 
+ 	/*
+diff --git a/mm/util.c b/mm/util.c
+index 25bfda774f6fd..7fd3c2bb3e4f5 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -686,6 +686,56 @@ static inline void *__page_rmapping(struct page *page)
+ 	return (void *)mapping;
+ }
+ 
++/**
++ * __vmalloc_array - allocate memory for a virtually contiguous array.
++ * @n: number of elements.
++ * @size: element size.
++ * @flags: the type of memory to allocate (see kmalloc).
++ */
++void *__vmalloc_array(size_t n, size_t size, gfp_t flags)
++{
++	size_t bytes;
++
++	if (unlikely(check_mul_overflow(n, size, &bytes)))
++		return NULL;
++	return __vmalloc(bytes, flags);
++}
++EXPORT_SYMBOL(__vmalloc_array);
++
++/**
++ * vmalloc_array - allocate memory for a virtually contiguous array.
++ * @n: number of elements.
++ * @size: element size.
++ */
++void *vmalloc_array(size_t n, size_t size)
++{
++	return __vmalloc_array(n, size, GFP_KERNEL);
++}
++EXPORT_SYMBOL(vmalloc_array);
++
++/**
++ * __vcalloc - allocate and zero memory for a virtually contiguous array.
++ * @n: number of elements.
++ * @size: element size.
++ * @flags: the type of memory to allocate (see kmalloc).
++ */
++void *__vcalloc(size_t n, size_t size, gfp_t flags)
++{
++	return __vmalloc_array(n, size, flags | __GFP_ZERO);
++}
++EXPORT_SYMBOL(__vcalloc);
++
++/**
++ * vcalloc - allocate and zero memory for a virtually contiguous array.
++ * @n: number of elements.
++ * @size: element size.
++ */
++void *vcalloc(size_t n, size_t size)
++{
++	return __vmalloc_array(n, size, GFP_KERNEL | __GFP_ZERO);
++}
++EXPORT_SYMBOL(vcalloc);
++
+ /* Neutral page->mapping pointer to address_space or anon_vma or other */
+ void *page_rmapping(struct page *page)
+ {
+diff --git a/net/8021q/vlan_netlink.c b/net/8021q/vlan_netlink.c
+index 0db85aeb119b8..99b2777752579 100644
+--- a/net/8021q/vlan_netlink.c
++++ b/net/8021q/vlan_netlink.c
+@@ -118,12 +118,16 @@ static int vlan_changelink(struct net_device *dev, struct nlattr *tb[],
+ 	}
+ 	if (data[IFLA_VLAN_INGRESS_QOS]) {
+ 		nla_for_each_nested(attr, data[IFLA_VLAN_INGRESS_QOS], rem) {
++			if (nla_type(attr) != IFLA_VLAN_QOS_MAPPING)
++				continue;
+ 			m = nla_data(attr);
+ 			vlan_dev_set_ingress_priority(dev, m->to, m->from);
+ 		}
+ 	}
+ 	if (data[IFLA_VLAN_EGRESS_QOS]) {
+ 		nla_for_each_nested(attr, data[IFLA_VLAN_EGRESS_QOS], rem) {
++			if (nla_type(attr) != IFLA_VLAN_QOS_MAPPING)
++				continue;
+ 			m = nla_data(attr);
+ 			err = vlan_dev_set_egress_priority(dev, m->from, m->to);
+ 			if (err)
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index cf78a48085eda..a752032e12fcf 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -6522,7 +6522,8 @@ static inline void l2cap_sig_channel(struct l2cap_conn *conn,
+ 		if (len > skb->len || !cmd->ident) {
+ 			BT_DBG("corrupted command");
+ 			l2cap_sig_send_rej(conn, cmd->ident);
+-			break;
++			skb_pull(skb, len > skb->len ? skb->len : len);
++			continue;
+ 		}
+ 
+ 		err = l2cap_bredr_sig_cmd(conn, cmd, len, skb->data);
+diff --git a/net/can/j1939/j1939-priv.h b/net/can/j1939/j1939-priv.h
+index cea712fb2a9e0..9ac2a10b18265 100644
+--- a/net/can/j1939/j1939-priv.h
++++ b/net/can/j1939/j1939-priv.h
+@@ -297,6 +297,7 @@ struct j1939_sock {
+ 
+ 	int ifindex;
+ 	struct j1939_addr addr;
++	spinlock_t filters_lock;
+ 	struct j1939_filter *filters;
+ 	int nfilters;
+ 	pgn_t pgn_rx_filter;
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 906a08d38c1c8..c216c60f572b5 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -262,12 +262,17 @@ static bool j1939_sk_match_dst(struct j1939_sock *jsk,
+ static bool j1939_sk_match_filter(struct j1939_sock *jsk,
+ 				  const struct j1939_sk_buff_cb *skcb)
+ {
+-	const struct j1939_filter *f = jsk->filters;
+-	int nfilter = jsk->nfilters;
++	const struct j1939_filter *f;
++	int nfilter;
++
++	spin_lock_bh(&jsk->filters_lock);
++
++	f = jsk->filters;
++	nfilter = jsk->nfilters;
+ 
+ 	if (!nfilter)
+ 		/* receive all when no filters are assigned */
+-		return true;
++		goto filter_match_found;
+ 
+ 	for (; nfilter; ++f, --nfilter) {
+ 		if ((skcb->addr.pgn & f->pgn_mask) != f->pgn)
+@@ -276,9 +281,15 @@ static bool j1939_sk_match_filter(struct j1939_sock *jsk,
+ 			continue;
+ 		if ((skcb->addr.src_name & f->name_mask) != f->name)
+ 			continue;
+-		return true;
++		goto filter_match_found;
+ 	}
++
++	spin_unlock_bh(&jsk->filters_lock);
+ 	return false;
++
++filter_match_found:
++	spin_unlock_bh(&jsk->filters_lock);
++	return true;
+ }
+ 
+ static bool j1939_sk_recv_match_one(struct j1939_sock *jsk,
+@@ -401,6 +412,7 @@ static int j1939_sk_init(struct sock *sk)
+ 	atomic_set(&jsk->skb_pending, 0);
+ 	spin_lock_init(&jsk->sk_session_queue_lock);
+ 	INIT_LIST_HEAD(&jsk->sk_session_queue);
++	spin_lock_init(&jsk->filters_lock);
+ 
+ 	/* j1939_sk_sock_destruct() depends on SOCK_RCU_FREE flag */
+ 	sock_set_flag(sk, SOCK_RCU_FREE);
+@@ -703,9 +715,11 @@ static int j1939_sk_setsockopt(struct socket *sock, int level, int optname,
+ 		}
+ 
+ 		lock_sock(&jsk->sk);
++		spin_lock_bh(&jsk->filters_lock);
+ 		ofilters = jsk->filters;
+ 		jsk->filters = filters;
+ 		jsk->nfilters = count;
++		spin_unlock_bh(&jsk->filters_lock);
+ 		release_sock(&jsk->sk);
+ 		kfree(ofilters);
+ 		return 0;
+diff --git a/net/core/request_sock.c b/net/core/request_sock.c
+index f35c2e9984062..63de5c635842b 100644
+--- a/net/core/request_sock.c
++++ b/net/core/request_sock.c
+@@ -33,9 +33,6 @@
+ 
+ void reqsk_queue_alloc(struct request_sock_queue *queue)
+ {
+-	spin_lock_init(&queue->rskq_lock);
+-
+-	spin_lock_init(&queue->fastopenq.lock);
+ 	queue->fastopenq.rskq_rst_head = NULL;
+ 	queue->fastopenq.rskq_rst_tail = NULL;
+ 	queue->fastopenq.qlen = 0;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 196278a137c01..50261f3aec82b 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3877,8 +3877,9 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
+ 		/* GSO partial only requires that we trim off any excess that
+ 		 * doesn't fit into an MSS sized block, so take care of that
+ 		 * now.
++		 * Cap len to not accidentally hit GSO_BY_FRAGS.
+ 		 */
+-		partial_segs = len / mss;
++		partial_segs = min(len, GSO_BY_FRAGS - 1U) / mss;
+ 		if (partial_segs > 1)
+ 			mss *= partial_segs;
+ 		else
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index 84e6ef4f35252..c5a4c5fb72934 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -291,7 +291,7 @@ static void send_hsr_supervision_frame(struct hsr_port *master,
+ 
+ 	skb = hsr_init_skb(master);
+ 	if (!skb) {
+-		WARN_ONCE(1, "HSR: Could not send supervision frame\n");
++		netdev_warn_once(master->dev, "HSR: Could not send supervision frame\n");
+ 		return;
+ 	}
+ 
+@@ -338,7 +338,7 @@ static void send_prp_supervision_frame(struct hsr_port *master,
+ 
+ 	skb = hsr_init_skb(master);
+ 	if (!skb) {
+-		WARN_ONCE(1, "PRP: Could not send supervision frame\n");
++		netdev_warn_once(master->dev, "PRP: Could not send supervision frame\n");
+ 		return;
+ 	}
+ 
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index acb4887351daf..5f1b334e64b32 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -327,6 +327,9 @@ static int inet_create(struct net *net, struct socket *sock, int protocol,
+ 	if (INET_PROTOSW_REUSE & answer_flags)
+ 		sk->sk_reuse = SK_CAN_REUSE;
+ 
++	if (INET_PROTOSW_ICSK & answer_flags)
++		inet_init_csk_locks(sk);
++
+ 	inet = inet_sk(sk);
+ 	inet->is_icsk = (INET_PROTOSW_ICSK & answer_flags) != 0;
+ 
+@@ -1597,10 +1600,12 @@ EXPORT_SYMBOL(inet_current_timestamp);
+ 
+ int inet_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
+ {
+-	if (sk->sk_family == AF_INET)
++	unsigned int family = READ_ONCE(sk->sk_family);
++
++	if (family == AF_INET)
+ 		return ip_recv_error(sk, msg, len, addr_len);
+ #if IS_ENABLED(CONFIG_IPV6)
+-	if (sk->sk_family == AF_INET6)
++	if (family == AF_INET6)
+ 		return pingv6_ops.ipv6_recv_error(sk, msg, len, addr_len);
+ #endif
+ 	return -EINVAL;
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 5f71a1c74e7e0..b15c9ad0095a2 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -536,6 +536,10 @@ struct sock *inet_csk_accept(struct sock *sk, int flags, int *err, bool kern)
+ 	}
+ 	if (req)
+ 		reqsk_put(req);
++
++	if (newsk)
++		inet_init_csk_locks(newsk);
++
+ 	return newsk;
+ out_err:
+ 	newsk = NULL;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index a99c374101fc5..12ee857d6cfe4 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1260,6 +1260,12 @@ static int ip_setup_cork(struct sock *sk, struct inet_cork *cork,
+ 	if (unlikely(!rt))
+ 		return -EFAULT;
+ 
++	cork->fragsize = ip_sk_use_pmtu(sk) ?
++			 dst_mtu(&rt->dst) : READ_ONCE(rt->dst.dev->mtu);
++
++	if (!inetdev_valid_mtu(cork->fragsize))
++		return -ENETUNREACH;
++
+ 	/*
+ 	 * setup for corking.
+ 	 */
+@@ -1276,12 +1282,6 @@ static int ip_setup_cork(struct sock *sk, struct inet_cork *cork,
+ 		cork->addr = ipc->addr;
+ 	}
+ 
+-	cork->fragsize = ip_sk_use_pmtu(sk) ?
+-			 dst_mtu(&rt->dst) : READ_ONCE(rt->dst.dev->mtu);
+-
+-	if (!inetdev_valid_mtu(cork->fragsize))
+-		return -ENETUNREACH;
+-
+ 	cork->gso_size = ipc->gso_size;
+ 
+ 	cork->dst = &rt->dst;
+diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c
+index da9a55c68e11e..ba1388ba6c6e5 100644
+--- a/net/ipv4/ip_tunnel_core.c
++++ b/net/ipv4/ip_tunnel_core.c
+@@ -332,7 +332,7 @@ static int iptunnel_pmtud_build_icmpv6(struct sk_buff *skb, int mtu)
+ 	};
+ 	skb_reset_network_header(skb);
+ 
+-	csum = csum_partial(icmp6h, len, 0);
++	csum = skb_checksum(skb, skb_transport_offset(skb), len, 0);
+ 	icmp6h->icmp6_cksum = csum_ipv6_magic(&nip6h->saddr, &nip6h->daddr, len,
+ 					      IPPROTO_ICMPV6, csum);
+ 
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 3dd9b76f40559..a5c15e2d193f6 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -726,6 +726,7 @@ void tcp_push(struct sock *sk, int flags, int mss_now,
+ 		if (!test_bit(TSQ_THROTTLED, &sk->sk_tsq_flags)) {
+ 			NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAUTOCORKING);
+ 			set_bit(TSQ_THROTTLED, &sk->sk_tsq_flags);
++			smp_mb__after_atomic();
+ 		}
+ 		/* It is possible TX completion already happened
+ 		 * before we set TSQ_THROTTLED.
+@@ -1777,6 +1778,36 @@ static skb_frag_t *skb_advance_to_frag(struct sk_buff *skb, u32 offset_skb,
+ 	return frag;
+ }
+ 
++static bool can_map_frag(const skb_frag_t *frag)
++{
++	struct page *page;
++
++	if (skb_frag_size(frag) != PAGE_SIZE || skb_frag_off(frag))
++		return false;
++
++	page = skb_frag_page(frag);
++
++	if (PageCompound(page) || page->mapping)
++		return false;
++
++	return true;
++}
++
++static int find_next_mappable_frag(const skb_frag_t *frag,
++				   int remaining_in_skb)
++{
++	int offset = 0;
++
++	if (likely(can_map_frag(frag)))
++		return 0;
++
++	while (offset < remaining_in_skb && !can_map_frag(frag)) {
++		offset += skb_frag_size(frag);
++		++frag;
++	}
++	return offset;
++}
++
+ static int tcp_copy_straggler_data(struct tcp_zerocopy_receive *zc,
+ 				   struct sk_buff *skb, u32 copylen,
+ 				   u32 *offset, u32 *seq)
+@@ -1902,6 +1933,8 @@ static int tcp_zerocopy_receive(struct sock *sk,
+ 	ret = 0;
+ 	curr_addr = address;
+ 	while (length + PAGE_SIZE <= zc->length) {
++		int mappable_offset;
++
+ 		if (zc->recv_skip_hint < PAGE_SIZE) {
+ 			u32 offset_frag;
+ 
+@@ -1929,15 +1962,11 @@ static int tcp_zerocopy_receive(struct sock *sk,
+ 			if (!frags || offset_frag)
+ 				break;
+ 		}
+-		if (skb_frag_size(frags) != PAGE_SIZE || skb_frag_off(frags)) {
+-			int remaining = zc->recv_skip_hint;
+ 
+-			while (remaining && (skb_frag_size(frags) != PAGE_SIZE ||
+-					     skb_frag_off(frags))) {
+-				remaining -= skb_frag_size(frags);
+-				frags++;
+-			}
+-			zc->recv_skip_hint -= remaining;
++		mappable_offset = find_next_mappable_frag(frags,
++							  zc->recv_skip_hint);
++		if (mappable_offset) {
++			zc->recv_skip_hint = mappable_offset;
+ 			break;
+ 		}
+ 		pages[pg_idx] = skb_frag_page(frags);
+diff --git a/net/ipv6/addrconf_core.c b/net/ipv6/addrconf_core.c
+index c70c192bc91b3..5e0e2b5ba34e4 100644
+--- a/net/ipv6/addrconf_core.c
++++ b/net/ipv6/addrconf_core.c
+@@ -213,19 +213,26 @@ const struct ipv6_stub *ipv6_stub __read_mostly = &(struct ipv6_stub) {
+ EXPORT_SYMBOL_GPL(ipv6_stub);
+ 
+ /* IPv6 Wildcard Address and Loopback Address defined by RFC2553 */
+-const struct in6_addr in6addr_loopback = IN6ADDR_LOOPBACK_INIT;
++const struct in6_addr in6addr_loopback __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_LOOPBACK_INIT;
+ EXPORT_SYMBOL(in6addr_loopback);
+-const struct in6_addr in6addr_any = IN6ADDR_ANY_INIT;
++const struct in6_addr in6addr_any __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_ANY_INIT;
+ EXPORT_SYMBOL(in6addr_any);
+-const struct in6_addr in6addr_linklocal_allnodes = IN6ADDR_LINKLOCAL_ALLNODES_INIT;
++const struct in6_addr in6addr_linklocal_allnodes __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_LINKLOCAL_ALLNODES_INIT;
+ EXPORT_SYMBOL(in6addr_linklocal_allnodes);
+-const struct in6_addr in6addr_linklocal_allrouters = IN6ADDR_LINKLOCAL_ALLROUTERS_INIT;
++const struct in6_addr in6addr_linklocal_allrouters __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_LINKLOCAL_ALLROUTERS_INIT;
+ EXPORT_SYMBOL(in6addr_linklocal_allrouters);
+-const struct in6_addr in6addr_interfacelocal_allnodes = IN6ADDR_INTERFACELOCAL_ALLNODES_INIT;
++const struct in6_addr in6addr_interfacelocal_allnodes __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_INTERFACELOCAL_ALLNODES_INIT;
+ EXPORT_SYMBOL(in6addr_interfacelocal_allnodes);
+-const struct in6_addr in6addr_interfacelocal_allrouters = IN6ADDR_INTERFACELOCAL_ALLROUTERS_INIT;
++const struct in6_addr in6addr_interfacelocal_allrouters __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_INTERFACELOCAL_ALLROUTERS_INIT;
+ EXPORT_SYMBOL(in6addr_interfacelocal_allrouters);
+-const struct in6_addr in6addr_sitelocal_allrouters = IN6ADDR_SITELOCAL_ALLROUTERS_INIT;
++const struct in6_addr in6addr_sitelocal_allrouters __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_SITELOCAL_ALLROUTERS_INIT;
+ EXPORT_SYMBOL(in6addr_sitelocal_allrouters);
+ 
+ static void snmp6_free_dev(struct inet6_dev *idev)
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index 4247997077bfb..329b3b36688aa 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -197,6 +197,9 @@ static int inet6_create(struct net *net, struct socket *sock, int protocol,
+ 	if (INET_PROTOSW_REUSE & answer_flags)
+ 		sk->sk_reuse = SK_CAN_REUSE;
+ 
++	if (INET_PROTOSW_ICSK & answer_flags)
++		inet_init_csk_locks(sk);
++
+ 	inet = inet_sk(sk);
+ 	inet->is_icsk = (INET_PROTOSW_ICSK & answer_flags) != 0;
+ 
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index edf4a842506f2..d1f8192384147 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -829,9 +829,8 @@ static int __ip6_tnl_rcv(struct ip6_tnl *tunnel, struct sk_buff *skb,
+ 						struct sk_buff *skb),
+ 			 bool log_ecn_err)
+ {
+-	struct pcpu_sw_netstats *tstats;
+-	const struct ipv6hdr *ipv6h = ipv6_hdr(skb);
+-	int err;
++	const struct ipv6hdr *ipv6h;
++	int nh, err;
+ 
+ 	if ((!(tpi->flags & TUNNEL_CSUM) &&
+ 	     (tunnel->parms.i_flags & TUNNEL_CSUM)) ||
+@@ -863,14 +862,29 @@ static int __ip6_tnl_rcv(struct ip6_tnl *tunnel, struct sk_buff *skb,
+ 			goto drop;
+ 		}
+ 
+-		ipv6h = ipv6_hdr(skb);
+ 		skb->protocol = eth_type_trans(skb, tunnel->dev);
+ 		skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
+ 	} else {
+ 		skb->dev = tunnel->dev;
+ 	}
+ 
++	/* Save offset of outer header relative to skb->head,
++	 * because we are going to reset the network header to the inner header
++	 * and might change skb->head.
++	 */
++	nh = skb_network_header(skb) - skb->head;
++
+ 	skb_reset_network_header(skb);
++
++	if (!pskb_inet_may_pull(skb)) {
++		DEV_STATS_INC(tunnel->dev, rx_length_errors);
++		DEV_STATS_INC(tunnel->dev, rx_errors);
++		goto drop;
++	}
++
++	/* Get the outer header. */
++	ipv6h = (struct ipv6hdr *)(skb->head + nh);
++
+ 	memset(skb->cb, 0, sizeof(struct inet6_skb_parm));
+ 
+ 	__skb_tunnel_rx(skb, tunnel->dev, tunnel->net);
+@@ -888,11 +902,7 @@ static int __ip6_tnl_rcv(struct ip6_tnl *tunnel, struct sk_buff *skb,
+ 		}
+ 	}
+ 
+-	tstats = this_cpu_ptr(tunnel->dev->tstats);
+-	u64_stats_update_begin(&tstats->syncp);
+-	tstats->rx_packets++;
+-	tstats->rx_bytes += skb->len;
+-	u64_stats_update_end(&tstats->syncp);
++	dev_sw_netstats_rx_add(tunnel->dev, skb->len);
+ 
+ 	skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(tunnel->dev)));
+ 
+diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
+index e14368ced21f8..7c73faa5336cd 100644
+--- a/net/iucv/af_iucv.c
++++ b/net/iucv/af_iucv.c
+@@ -2310,7 +2310,7 @@ static int __init afiucv_init(void)
+ {
+ 	int err;
+ 
+-	if (MACHINE_IS_VM) {
++	if (MACHINE_IS_VM && IS_ENABLED(CONFIG_IUCV)) {
+ 		cpcmd("QUERY USERID", iucv_userid, sizeof(iucv_userid), &err);
+ 		if (unlikely(err)) {
+ 			WARN_ON(err);
+@@ -2318,11 +2318,7 @@ static int __init afiucv_init(void)
+ 			goto out;
+ 		}
+ 
+-		pr_iucv = try_then_request_module(symbol_get(iucv_if), "iucv");
+-		if (!pr_iucv) {
+-			printk(KERN_WARNING "iucv_if lookup failed\n");
+-			memset(&iucv_userid, 0, sizeof(iucv_userid));
+-		}
++		pr_iucv = &iucv_if;
+ 	} else {
+ 		memset(&iucv_userid, 0, sizeof(iucv_userid));
+ 		pr_iucv = NULL;
+@@ -2356,17 +2352,13 @@ static int __init afiucv_init(void)
+ out_proto:
+ 	proto_unregister(&iucv_proto);
+ out:
+-	if (pr_iucv)
+-		symbol_put(iucv_if);
+ 	return err;
+ }
+ 
+ static void __exit afiucv_exit(void)
+ {
+-	if (pr_iucv) {
++	if (pr_iucv)
+ 		afiucv_iucv_exit();
+-		symbol_put(iucv_if);
+-	}
+ 
+ 	unregister_netdevice_notifier(&afiucv_netdev_notifier);
+ 	dev_remove_pack(&iucv_packet_type);
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 01e26698285a0..dae978badd26d 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -227,6 +227,8 @@ static int llc_ui_release(struct socket *sock)
+ 	if (llc->dev)
+ 		dev_put(llc->dev);
+ 	sock_put(sk);
++	sock_orphan(sk);
++	sock->sk = NULL;
+ 	llc_sk_free(sk);
+ out:
+ 	return 0;
+@@ -927,14 +929,15 @@ static int llc_ui_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+  */
+ static int llc_ui_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ {
++	DECLARE_SOCKADDR(struct sockaddr_llc *, addr, msg->msg_name);
+ 	struct sock *sk = sock->sk;
+ 	struct llc_sock *llc = llc_sk(sk);
+-	DECLARE_SOCKADDR(struct sockaddr_llc *, addr, msg->msg_name);
+ 	int flags = msg->msg_flags;
+ 	int noblock = flags & MSG_DONTWAIT;
++	int rc = -EINVAL, copied = 0, hdrlen, hh_len;
+ 	struct sk_buff *skb = NULL;
++	struct net_device *dev;
+ 	size_t size = 0;
+-	int rc = -EINVAL, copied = 0, hdrlen;
+ 
+ 	dprintk("%s: sending from %02X to %02X\n", __func__,
+ 		llc->laddr.lsap, llc->daddr.lsap);
+@@ -954,22 +957,29 @@ static int llc_ui_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 		if (rc)
+ 			goto out;
+ 	}
+-	hdrlen = llc->dev->hard_header_len + llc_ui_header_len(sk, addr);
++	dev = llc->dev;
++	hh_len = LL_RESERVED_SPACE(dev);
++	hdrlen = llc_ui_header_len(sk, addr);
+ 	size = hdrlen + len;
+-	if (size > llc->dev->mtu)
+-		size = llc->dev->mtu;
++	size = min_t(size_t, size, READ_ONCE(dev->mtu));
+ 	copied = size - hdrlen;
+ 	rc = -EINVAL;
+ 	if (copied < 0)
+ 		goto out;
+ 	release_sock(sk);
+-	skb = sock_alloc_send_skb(sk, size, noblock, &rc);
++	skb = sock_alloc_send_skb(sk, hh_len + size, noblock, &rc);
+ 	lock_sock(sk);
+ 	if (!skb)
+ 		goto out;
+-	skb->dev      = llc->dev;
++	if (sock_flag(sk, SOCK_ZAPPED) ||
++	    llc->dev != dev ||
++	    hdrlen != llc_ui_header_len(sk, addr) ||
++	    hh_len != LL_RESERVED_SPACE(dev) ||
++	    size > READ_ONCE(dev->mtu))
++		goto out;
++	skb->dev      = dev;
+ 	skb->protocol = llc_proto_type(addr->sllc_arphrd);
+-	skb_reserve(skb, hdrlen);
++	skb_reserve(skb, hh_len + hdrlen);
+ 	rc = memcpy_from_msg(skb_put(skb, copied), msg, copied);
+ 	if (rc)
+ 		goto out;
+diff --git a/net/llc/llc_core.c b/net/llc/llc_core.c
+index 64d4bef04e730..4900a27b51768 100644
+--- a/net/llc/llc_core.c
++++ b/net/llc/llc_core.c
+@@ -135,22 +135,15 @@ static struct packet_type llc_packet_type __read_mostly = {
+ 	.func = llc_rcv,
+ };
+ 
+-static struct packet_type llc_tr_packet_type __read_mostly = {
+-	.type = cpu_to_be16(ETH_P_TR_802_2),
+-	.func = llc_rcv,
+-};
+-
+ static int __init llc_init(void)
+ {
+ 	dev_add_pack(&llc_packet_type);
+-	dev_add_pack(&llc_tr_packet_type);
+ 	return 0;
+ }
+ 
+ static void __exit llc_exit(void)
+ {
+ 	dev_remove_pack(&llc_packet_type);
+-	dev_remove_pack(&llc_tr_packet_type);
+ }
+ 
+ module_init(llc_init);
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 788b6a3c14191..55abc06214c4d 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3721,6 +3721,7 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
+ 			goto begin;
+ 
+ 		skb = __skb_dequeue(&tx.skbs);
++		info = IEEE80211_SKB_CB(skb);
+ 
+ 		if (!skb_queue_empty(&tx.skbs)) {
+ 			spin_lock_bh(&fq->lock);
+@@ -3765,7 +3766,7 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
+ 	}
+ 
+ encap_out:
+-	IEEE80211_SKB_CB(skb)->control.vif = vif;
++	info->control.vif = vif;
+ 
+ 	if (vif &&
+ 	    wiphy_ext_feature_isset(local->hw.wiphy, NL80211_EXT_FEATURE_AQL)) {
+diff --git a/net/netfilter/ipset/ip_set_bitmap_gen.h b/net/netfilter/ipset/ip_set_bitmap_gen.h
+index 26ab0e9612d82..9523104a90da4 100644
+--- a/net/netfilter/ipset/ip_set_bitmap_gen.h
++++ b/net/netfilter/ipset/ip_set_bitmap_gen.h
+@@ -28,6 +28,7 @@
+ #define mtype_del		IPSET_TOKEN(MTYPE, _del)
+ #define mtype_list		IPSET_TOKEN(MTYPE, _list)
+ #define mtype_gc		IPSET_TOKEN(MTYPE, _gc)
++#define mtype_cancel_gc		IPSET_TOKEN(MTYPE, _cancel_gc)
+ #define mtype			MTYPE
+ 
+ #define get_ext(set, map, id)	((map)->extensions + ((set)->dsize * (id)))
+@@ -57,9 +58,6 @@ mtype_destroy(struct ip_set *set)
+ {
+ 	struct mtype *map = set->data;
+ 
+-	if (SET_WITH_TIMEOUT(set))
+-		del_timer_sync(&map->gc);
+-
+ 	if (set->dsize && set->extensions & IPSET_EXT_DESTROY)
+ 		mtype_ext_cleanup(set);
+ 	ip_set_free(map->members);
+@@ -288,6 +286,15 @@ mtype_gc(struct timer_list *t)
+ 	add_timer(&map->gc);
+ }
+ 
++static void
++mtype_cancel_gc(struct ip_set *set)
++{
++	struct mtype *map = set->data;
++
++	if (SET_WITH_TIMEOUT(set))
++		del_timer_sync(&map->gc);
++}
++
+ static const struct ip_set_type_variant mtype = {
+ 	.kadt	= mtype_kadt,
+ 	.uadt	= mtype_uadt,
+@@ -301,6 +308,7 @@ static const struct ip_set_type_variant mtype = {
+ 	.head	= mtype_head,
+ 	.list	= mtype_list,
+ 	.same_set = mtype_same_set,
++	.cancel_gc = mtype_cancel_gc,
+ };
+ 
+ #endif /* __IP_SET_BITMAP_IP_GEN_H */
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index 24f81826ed4a5..cc04c4d7956c5 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -1158,6 +1158,7 @@ static int ip_set_create(struct net *net, struct sock *ctnl,
+ 	return ret;
+ 
+ cleanup:
++	set->variant->cancel_gc(set);
+ 	set->variant->destroy(set);
+ put_out:
+ 	module_put(set->type->me);
+@@ -1186,6 +1187,14 @@ ip_set_destroy_set(struct ip_set *set)
+ 	kfree(set);
+ }
+ 
++static void
++ip_set_destroy_set_rcu(struct rcu_head *head)
++{
++	struct ip_set *set = container_of(head, struct ip_set, rcu);
++
++	ip_set_destroy_set(set);
++}
++
+ static int ip_set_destroy(struct net *net, struct sock *ctnl,
+ 			  struct sk_buff *skb, const struct nlmsghdr *nlh,
+ 			  const struct nlattr * const attr[],
+@@ -1199,8 +1208,6 @@ static int ip_set_destroy(struct net *net, struct sock *ctnl,
+ 	if (unlikely(protocol_min_failed(attr)))
+ 		return -IPSET_ERR_PROTOCOL;
+ 
+-	/* Must wait for flush to be really finished in list:set */
+-	rcu_barrier();
+ 
+ 	/* Commands are serialized and references are
+ 	 * protected by the ip_set_ref_lock.
+@@ -1212,8 +1219,10 @@ static int ip_set_destroy(struct net *net, struct sock *ctnl,
+ 	 * counter, so if it's already zero, we can proceed
+ 	 * without holding the lock.
+ 	 */
+-	read_lock_bh(&ip_set_ref_lock);
+ 	if (!attr[IPSET_ATTR_SETNAME]) {
++		/* Must wait for flush to be really finished in list:set */
++		rcu_barrier();
++		read_lock_bh(&ip_set_ref_lock);
+ 		for (i = 0; i < inst->ip_set_max; i++) {
+ 			s = ip_set(inst, i);
+ 			if (s && (s->ref || s->ref_netlink)) {
+@@ -1227,12 +1236,17 @@ static int ip_set_destroy(struct net *net, struct sock *ctnl,
+ 			s = ip_set(inst, i);
+ 			if (s) {
+ 				ip_set(inst, i) = NULL;
++				/* Must cancel garbage collectors */
++				s->variant->cancel_gc(s);
+ 				ip_set_destroy_set(s);
+ 			}
+ 		}
+ 		/* Modified by ip_set_destroy() only, which is serialized */
+ 		inst->is_destroyed = false;
+ 	} else {
++		u16 features = 0;
++
++		read_lock_bh(&ip_set_ref_lock);
+ 		s = find_set_and_id(inst, nla_data(attr[IPSET_ATTR_SETNAME]),
+ 				    &i);
+ 		if (!s) {
+@@ -1242,10 +1256,16 @@ static int ip_set_destroy(struct net *net, struct sock *ctnl,
+ 			ret = -IPSET_ERR_BUSY;
+ 			goto out;
+ 		}
++		features = s->type->features;
+ 		ip_set(inst, i) = NULL;
+ 		read_unlock_bh(&ip_set_ref_lock);
+-
+-		ip_set_destroy_set(s);
++		if (features & IPSET_TYPE_NAME) {
++			/* Must wait for flush to be really finished  */
++			rcu_barrier();
++		}
++		/* Must cancel garbage collectors */
++		s->variant->cancel_gc(s);
++		call_rcu(&s->rcu, ip_set_destroy_set_rcu);
+ 	}
+ 	return 0;
+ out:
+@@ -1404,9 +1424,6 @@ static int ip_set_swap(struct net *net, struct sock *ctnl, struct sk_buff *skb,
+ 	ip_set(inst, to_id) = from;
+ 	write_unlock_bh(&ip_set_ref_lock);
+ 
+-	/* Make sure all readers of the old set pointers are completed. */
+-	synchronize_rcu();
+-
+ 	return 0;
+ }
+ 
+@@ -2397,6 +2414,7 @@ ip_set_net_exit(struct net *net)
+ 		set = ip_set(inst, i);
+ 		if (set) {
+ 			ip_set(inst, i) = NULL;
++			set->variant->cancel_gc(set);
+ 			ip_set_destroy_set(set);
+ 		}
+ 	}
+@@ -2444,8 +2462,11 @@ ip_set_fini(void)
+ {
+ 	nf_unregister_sockopt(&so_set);
+ 	nfnetlink_subsys_unregister(&ip_set_netlink_subsys);
+-
+ 	unregister_pernet_subsys(&ip_set_net_ops);
++
++	/* Wait for call_rcu() in destroy */
++	rcu_barrier();
++
+ 	pr_debug("these are the famous last words\n");
+ }
+ 
+diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
+index b0670388da49a..093ec52140084 100644
+--- a/net/netfilter/ipset/ip_set_hash_gen.h
++++ b/net/netfilter/ipset/ip_set_hash_gen.h
+@@ -235,6 +235,7 @@ htable_size(u8 hbits)
+ #undef mtype_gc_do
+ #undef mtype_gc
+ #undef mtype_gc_init
++#undef mtype_cancel_gc
+ #undef mtype_variant
+ #undef mtype_data_match
+ 
+@@ -279,6 +280,7 @@ htable_size(u8 hbits)
+ #define mtype_gc_do		IPSET_TOKEN(MTYPE, _gc_do)
+ #define mtype_gc		IPSET_TOKEN(MTYPE, _gc)
+ #define mtype_gc_init		IPSET_TOKEN(MTYPE, _gc_init)
++#define mtype_cancel_gc		IPSET_TOKEN(MTYPE, _cancel_gc)
+ #define mtype_variant		IPSET_TOKEN(MTYPE, _variant)
+ #define mtype_data_match	IPSET_TOKEN(MTYPE, _data_match)
+ 
+@@ -444,7 +446,7 @@ mtype_ahash_destroy(struct ip_set *set, struct htable *t, bool ext_destroy)
+ 	u32 i;
+ 
+ 	for (i = 0; i < jhash_size(t->htable_bits); i++) {
+-		n = __ipset_dereference(hbucket(t, i));
++		n = (__force struct hbucket *)hbucket(t, i);
+ 		if (!n)
+ 			continue;
+ 		if (set->extensions & IPSET_EXT_DESTROY && ext_destroy)
+@@ -464,10 +466,7 @@ mtype_destroy(struct ip_set *set)
+ 	struct htype *h = set->data;
+ 	struct list_head *l, *lt;
+ 
+-	if (SET_WITH_TIMEOUT(set))
+-		cancel_delayed_work_sync(&h->gc.dwork);
+-
+-	mtype_ahash_destroy(set, ipset_dereference_nfnl(h->table), true);
++	mtype_ahash_destroy(set, (__force struct htable *)h->table, true);
+ 	list_for_each_safe(l, lt, &h->ad) {
+ 		list_del(l);
+ 		kfree(l);
+@@ -613,6 +612,15 @@ mtype_gc_init(struct htable_gc *gc)
+ 	queue_delayed_work(system_power_efficient_wq, &gc->dwork, HZ);
+ }
+ 
++static void
++mtype_cancel_gc(struct ip_set *set)
++{
++	struct htype *h = set->data;
++
++	if (SET_WITH_TIMEOUT(set))
++		cancel_delayed_work_sync(&h->gc.dwork);
++}
++
+ static int
+ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
+ 	  struct ip_set_ext *mext, u32 flags);
+@@ -1433,6 +1441,7 @@ static const struct ip_set_type_variant mtype_variant = {
+ 	.uref	= mtype_uref,
+ 	.resize	= mtype_resize,
+ 	.same_set = mtype_same_set,
++	.cancel_gc = mtype_cancel_gc,
+ 	.region_lock = true,
+ };
+ 
+diff --git a/net/netfilter/ipset/ip_set_list_set.c b/net/netfilter/ipset/ip_set_list_set.c
+index 5a67f79665742..6bc7019982b05 100644
+--- a/net/netfilter/ipset/ip_set_list_set.c
++++ b/net/netfilter/ipset/ip_set_list_set.c
+@@ -426,9 +426,6 @@ list_set_destroy(struct ip_set *set)
+ 	struct list_set *map = set->data;
+ 	struct set_elem *e, *n;
+ 
+-	if (SET_WITH_TIMEOUT(set))
+-		del_timer_sync(&map->gc);
+-
+ 	list_for_each_entry_safe(e, n, &map->members, list) {
+ 		list_del(&e->list);
+ 		ip_set_put_byindex(map->net, e->id);
+@@ -545,6 +542,15 @@ list_set_same_set(const struct ip_set *a, const struct ip_set *b)
+ 	       a->extensions == b->extensions;
+ }
+ 
++static void
++list_set_cancel_gc(struct ip_set *set)
++{
++	struct list_set *map = set->data;
++
++	if (SET_WITH_TIMEOUT(set))
++		del_timer_sync(&map->gc);
++}
++
+ static const struct ip_set_type_variant set_variant = {
+ 	.kadt	= list_set_kadt,
+ 	.uadt	= list_set_uadt,
+@@ -558,6 +564,7 @@ static const struct ip_set_type_variant set_variant = {
+ 	.head	= list_set_head,
+ 	.list	= list_set_list,
+ 	.same_set = list_set_same_set,
++	.cancel_gc = list_set_cancel_gc,
+ };
+ 
+ static void
+diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c
+index 6cb9f9474b055..28c6cb5cff0e3 100644
+--- a/net/netfilter/nf_log.c
++++ b/net/netfilter/nf_log.c
+@@ -203,11 +203,12 @@ void nf_logger_put(int pf, enum nf_log_type type)
+ 		return;
+ 	}
+ 
+-	BUG_ON(loggers[pf][type] == NULL);
+-
+ 	rcu_read_lock();
+ 	logger = rcu_dereference(loggers[pf][type]);
+-	module_put(logger->me);
++	if (!logger)
++		WARN_ON_ONCE(1);
++	else
++		module_put(logger->me);
+ 	rcu_read_unlock();
+ }
+ EXPORT_SYMBOL_GPL(nf_logger_put);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index fca8f9a360632..f586e8b3c6cfa 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -25,6 +25,7 @@
+ #include <net/sock.h>
+ 
+ #define NFT_MODULE_AUTOLOAD_LIMIT (MODULE_NAME_LEN - sizeof("nft-expr-255-"))
++#define NFT_SET_MAX_ANONLEN 16
+ 
+ unsigned int nf_tables_net_id __read_mostly;
+ 
+@@ -3930,6 +3931,9 @@ static int nf_tables_set_alloc_name(struct nft_ctx *ctx, struct nft_set *set,
+ 		if (p[1] != 'd' || strchr(p + 2, '%'))
+ 			return -EINVAL;
+ 
++		if (strnlen(name, NFT_SET_MAX_ANONLEN) >= NFT_SET_MAX_ANONLEN)
++			return -EINVAL;
++
+ 		inuse = (unsigned long *)get_zeroed_page(GFP_KERNEL);
+ 		if (inuse == NULL)
+ 			return -ENOMEM;
+@@ -9336,16 +9340,10 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 	data->verdict.code = ntohl(nla_get_be32(tb[NFTA_VERDICT_CODE]));
+ 
+ 	switch (data->verdict.code) {
+-	default:
+-		switch (data->verdict.code & NF_VERDICT_MASK) {
+-		case NF_ACCEPT:
+-		case NF_DROP:
+-		case NF_QUEUE:
+-			break;
+-		default:
+-			return -EINVAL;
+-		}
+-		fallthrough;
++	case NF_ACCEPT:
++	case NF_DROP:
++	case NF_QUEUE:
++		break;
+ 	case NFT_CONTINUE:
+ 	case NFT_BREAK:
+ 	case NFT_RETURN:
+@@ -9380,6 +9378,8 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 
+ 		data->verdict.chain = chain;
+ 		break;
++	default:
++		return -EINVAL;
+ 	}
+ 
+ 	desc->len = sizeof(data->verdict);
+diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c
+index 7b0b8fecb2205..9d250bd60bb8b 100644
+--- a/net/netfilter/nft_byteorder.c
++++ b/net/netfilter/nft_byteorder.c
+@@ -38,20 +38,21 @@ void nft_byteorder_eval(const struct nft_expr *expr,
+ 
+ 	switch (priv->size) {
+ 	case 8: {
++		u64 *dst64 = (void *)dst;
+ 		u64 src64;
+ 
+ 		switch (priv->op) {
+ 		case NFT_BYTEORDER_NTOH:
+ 			for (i = 0; i < priv->len / 8; i++) {
+ 				src64 = nft_reg_load64(&src[i]);
+-				nft_reg_store64(&dst[i], be64_to_cpu(src64));
++				nft_reg_store64(&dst64[i], be64_to_cpu(src64));
+ 			}
+ 			break;
+ 		case NFT_BYTEORDER_HTON:
+ 			for (i = 0; i < priv->len / 8; i++) {
+ 				src64 = (__force __u64)
+ 					cpu_to_be64(nft_reg_load64(&src[i]));
+-				nft_reg_store64(&dst[i], src64);
++				nft_reg_store64(&dst64[i], src64);
+ 			}
+ 			break;
+ 		}
+diff --git a/net/netfilter/nft_chain_filter.c b/net/netfilter/nft_chain_filter.c
+index 7a9aa57b195bf..a18582a4ecf34 100644
+--- a/net/netfilter/nft_chain_filter.c
++++ b/net/netfilter/nft_chain_filter.c
+@@ -358,9 +358,10 @@ static int nf_tables_netdev_event(struct notifier_block *this,
+ 				  unsigned long event, void *ptr)
+ {
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
++	struct nft_base_chain *basechain;
+ 	struct nftables_pernet *nft_net;
+-	struct nft_table *table;
+ 	struct nft_chain *chain, *nr;
++	struct nft_table *table;
+ 	struct nft_ctx ctx = {
+ 		.net	= dev_net(dev),
+ 	};
+@@ -372,7 +373,8 @@ static int nf_tables_netdev_event(struct notifier_block *this,
+ 	nft_net = net_generic(ctx.net, nf_tables_net_id);
+ 	mutex_lock(&nft_net->commit_mutex);
+ 	list_for_each_entry(table, &nft_net->tables, list) {
+-		if (table->family != NFPROTO_NETDEV)
++		if (table->family != NFPROTO_NETDEV &&
++		    table->family != NFPROTO_INET)
+ 			continue;
+ 
+ 		ctx.family = table->family;
+@@ -381,6 +383,11 @@ static int nf_tables_netdev_event(struct notifier_block *this,
+ 			if (!nft_is_base_chain(chain))
+ 				continue;
+ 
++			basechain = nft_base_chain(chain);
++			if (table->family == NFPROTO_INET &&
++			    basechain->ops.hooknum != NF_INET_INGRESS)
++				continue;
++
+ 			ctx.chain = chain;
+ 			nft_netdev_event(event, dev, &ctx);
+ 		}
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index b8dbd20a6a4c5..77c7362a7db8e 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -192,6 +192,7 @@ static const struct nla_policy nft_rule_compat_policy[NFTA_RULE_COMPAT_MAX + 1]
+ static int nft_parse_compat(const struct nlattr *attr, u16 *proto, bool *inv)
+ {
+ 	struct nlattr *tb[NFTA_RULE_COMPAT_MAX+1];
++	u32 l4proto;
+ 	u32 flags;
+ 	int err;
+ 
+@@ -204,12 +205,18 @@ static int nft_parse_compat(const struct nlattr *attr, u16 *proto, bool *inv)
+ 		return -EINVAL;
+ 
+ 	flags = ntohl(nla_get_be32(tb[NFTA_RULE_COMPAT_FLAGS]));
+-	if (flags & ~NFT_RULE_COMPAT_F_MASK)
++	if (flags & NFT_RULE_COMPAT_F_UNUSED ||
++	    flags & ~NFT_RULE_COMPAT_F_MASK)
+ 		return -EINVAL;
+ 	if (flags & NFT_RULE_COMPAT_F_INV)
+ 		*inv = true;
+ 
+-	*proto = ntohl(nla_get_be32(tb[NFTA_RULE_COMPAT_PROTO]));
++	l4proto = ntohl(nla_get_be32(tb[NFTA_RULE_COMPAT_PROTO]));
++	if (l4proto > U16_MAX)
++		return -EINVAL;
++
++	*proto = l4proto;
++
+ 	return 0;
+ }
+ 
+@@ -327,6 +334,12 @@ static int nft_target_validate(const struct nft_ctx *ctx,
+ 	unsigned int hook_mask = 0;
+ 	int ret;
+ 
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_BRIDGE &&
++	    ctx->family != NFPROTO_ARP)
++		return -EOPNOTSUPP;
++
+ 	if (nft_is_base_chain(ctx->chain)) {
+ 		const struct nft_base_chain *basechain =
+ 						nft_base_chain(ctx->chain);
+@@ -569,6 +582,12 @@ static int nft_match_validate(const struct nft_ctx *ctx,
+ 	unsigned int hook_mask = 0;
+ 	int ret;
+ 
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_BRIDGE &&
++	    ctx->family != NFPROTO_ARP)
++		return -EOPNOTSUPP;
++
+ 	if (nft_is_base_chain(ctx->chain)) {
+ 		const struct nft_base_chain *basechain =
+ 						nft_base_chain(ctx->chain);
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index 14093d86e6823..2b15dbbca98b3 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -482,6 +482,9 @@ static int nft_ct_get_init(const struct nft_ctx *ctx,
+ 		break;
+ #endif
+ 	case NFT_CT_ID:
++		if (tb[NFTA_CT_DIRECTION])
++			return -EINVAL;
++
+ 		len = sizeof(u32);
+ 		break;
+ 	default:
+@@ -1182,7 +1185,31 @@ static int nft_ct_expect_obj_init(const struct nft_ctx *ctx,
+ 	if (tb[NFTA_CT_EXPECT_L3PROTO])
+ 		priv->l3num = ntohs(nla_get_be16(tb[NFTA_CT_EXPECT_L3PROTO]));
+ 
++	switch (priv->l3num) {
++	case NFPROTO_IPV4:
++	case NFPROTO_IPV6:
++		if (priv->l3num != ctx->family)
++			return -EINVAL;
++
++		fallthrough;
++	case NFPROTO_INET:
++		break;
++	default:
++		return -EOPNOTSUPP;
++	}
++
+ 	priv->l4proto = nla_get_u8(tb[NFTA_CT_EXPECT_L4PROTO]);
++	switch (priv->l4proto) {
++	case IPPROTO_TCP:
++	case IPPROTO_UDP:
++	case IPPROTO_UDPLITE:
++	case IPPROTO_DCCP:
++	case IPPROTO_SCTP:
++		break;
++	default:
++		return -EOPNOTSUPP;
++	}
++
+ 	priv->dport = nla_get_be16(tb[NFTA_CT_EXPECT_DPORT]);
+ 	priv->timeout = nla_get_u32(tb[NFTA_CT_EXPECT_TIMEOUT]);
+ 	priv->size = nla_get_u8(tb[NFTA_CT_EXPECT_SIZE]);
+diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
+index a44340dd3ce64..c2a5d05f501f7 100644
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -150,6 +150,11 @@ static int nft_flow_offload_validate(const struct nft_ctx *ctx,
+ {
+ 	unsigned int hook_mask = (1 << NF_INET_FORWARD);
+ 
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	return nft_chain_validate_hooks(ctx->chain, hook_mask);
+ }
+ 
+diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c
+index 44d9b38e5f90c..cb5bb0e21b66f 100644
+--- a/net/netfilter/nft_meta.c
++++ b/net/netfilter/nft_meta.c
+@@ -63,7 +63,7 @@ nft_meta_get_eval_time(enum nft_meta_keys key,
+ {
+ 	switch (key) {
+ 	case NFT_META_TIME_NS:
+-		nft_reg_store64(dest, ktime_get_real_ns());
++		nft_reg_store64((u64 *)dest, ktime_get_real_ns());
+ 		break;
+ 	case NFT_META_TIME_DAY:
+ 		nft_reg_store8(dest, nft_meta_weekday());
+diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c
+index cd4eb4996aff3..2e1ee7d9d9c3c 100644
+--- a/net/netfilter/nft_nat.c
++++ b/net/netfilter/nft_nat.c
+@@ -142,6 +142,11 @@ static int nft_nat_validate(const struct nft_ctx *ctx,
+ 	struct nft_nat *priv = nft_expr_priv(expr);
+ 	int err;
+ 
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	err = nft_chain_validate_dependency(ctx->chain, NFT_CHAIN_T_NAT);
+ 	if (err < 0)
+ 		return err;
+diff --git a/net/netfilter/nft_rt.c b/net/netfilter/nft_rt.c
+index bcd01a63e38f1..f4a96164a5a11 100644
+--- a/net/netfilter/nft_rt.c
++++ b/net/netfilter/nft_rt.c
+@@ -166,6 +166,11 @@ static int nft_rt_validate(const struct nft_ctx *ctx, const struct nft_expr *exp
+ 	const struct nft_rt *priv = nft_expr_priv(expr);
+ 	unsigned int hooks;
+ 
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	switch (priv->key) {
+ 	case NFT_RT_NEXTHOP4:
+ 	case NFT_RT_NEXTHOP6:
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index bc30bd121ff2f..70a59a35d1761 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -342,9 +342,6 @@
+ #include "nft_set_pipapo_avx2.h"
+ #include "nft_set_pipapo.h"
+ 
+-/* Current working bitmap index, toggled between field matches */
+-static DEFINE_PER_CPU(bool, nft_pipapo_scratch_index);
+-
+ /**
+  * pipapo_refill() - For each set bit, set bits from selected mapping table item
+  * @map:	Bitmap to be scanned for set bits
+@@ -412,6 +409,7 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 		       const u32 *key, const struct nft_set_ext **ext)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
++	struct nft_pipapo_scratch *scratch;
+ 	unsigned long *res_map, *fill_map;
+ 	u8 genmask = nft_genmask_cur(net);
+ 	const u8 *rp = (const u8 *)key;
+@@ -422,15 +420,17 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 
+ 	local_bh_disable();
+ 
+-	map_index = raw_cpu_read(nft_pipapo_scratch_index);
+-
+ 	m = rcu_dereference(priv->match);
+ 
+ 	if (unlikely(!m || !*raw_cpu_ptr(m->scratch)))
+ 		goto out;
+ 
+-	res_map  = *raw_cpu_ptr(m->scratch) + (map_index ? m->bsize_max : 0);
+-	fill_map = *raw_cpu_ptr(m->scratch) + (map_index ? 0 : m->bsize_max);
++	scratch = *raw_cpu_ptr(m->scratch);
++
++	map_index = scratch->map_index;
++
++	res_map  = scratch->map + (map_index ? m->bsize_max : 0);
++	fill_map = scratch->map + (map_index ? 0 : m->bsize_max);
+ 
+ 	memset(res_map, 0xff, m->bsize_max * sizeof(*res_map));
+ 
+@@ -460,7 +460,7 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 		b = pipapo_refill(res_map, f->bsize, f->rules, fill_map, f->mt,
+ 				  last);
+ 		if (b < 0) {
+-			raw_cpu_write(nft_pipapo_scratch_index, map_index);
++			scratch->map_index = map_index;
+ 			local_bh_enable();
+ 
+ 			return false;
+@@ -477,7 +477,7 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 			 * current inactive bitmap is clean and can be reused as
+ 			 * *next* bitmap (not initial) for the next packet.
+ 			 */
+-			raw_cpu_write(nft_pipapo_scratch_index, map_index);
++			scratch->map_index = map_index;
+ 			local_bh_enable();
+ 
+ 			return true;
+@@ -1101,6 +1101,25 @@ static void pipapo_map(struct nft_pipapo_match *m,
+ 		f->mt[map[i].to + j].e = e;
+ }
+ 
++/**
++ * pipapo_free_scratch() - Free per-CPU map at original (not aligned) address
++ * @m:		Matching data
++ * @cpu:	CPU number
++ */
++static void pipapo_free_scratch(const struct nft_pipapo_match *m, unsigned int cpu)
++{
++	struct nft_pipapo_scratch *s;
++	void *mem;
++
++	s = *per_cpu_ptr(m->scratch, cpu);
++	if (!s)
++		return;
++
++	mem = s;
++	mem -= s->align_off;
++	kfree(mem);
++}
++
+ /**
+  * pipapo_realloc_scratch() - Reallocate scratch maps for partial match results
+  * @clone:	Copy of matching data with pending insertions and deletions
+@@ -1114,12 +1133,13 @@ static int pipapo_realloc_scratch(struct nft_pipapo_match *clone,
+ 	int i;
+ 
+ 	for_each_possible_cpu(i) {
+-		unsigned long *scratch;
++		struct nft_pipapo_scratch *scratch;
+ #ifdef NFT_PIPAPO_ALIGN
+-		unsigned long *scratch_aligned;
++		void *scratch_aligned;
++		u32 align_off;
+ #endif
+-
+-		scratch = kzalloc_node(bsize_max * sizeof(*scratch) * 2 +
++		scratch = kzalloc_node(struct_size(scratch, map,
++						   bsize_max * 2) +
+ 				       NFT_PIPAPO_ALIGN_HEADROOM,
+ 				       GFP_KERNEL, cpu_to_node(i));
+ 		if (!scratch) {
+@@ -1133,14 +1153,25 @@ static int pipapo_realloc_scratch(struct nft_pipapo_match *clone,
+ 			return -ENOMEM;
+ 		}
+ 
+-		kfree(*per_cpu_ptr(clone->scratch, i));
+-
+-		*per_cpu_ptr(clone->scratch, i) = scratch;
++		pipapo_free_scratch(clone, i);
+ 
+ #ifdef NFT_PIPAPO_ALIGN
+-		scratch_aligned = NFT_PIPAPO_LT_ALIGN(scratch);
+-		*per_cpu_ptr(clone->scratch_aligned, i) = scratch_aligned;
++		/* Align &scratch->map (not the struct itself): the extra
++		 * %NFT_PIPAPO_ALIGN_HEADROOM bytes passed to kzalloc_node()
++		 * above guarantee we can waste up to those bytes in order
++		 * to align the map field regardless of its offset within
++		 * the struct.
++		 */
++		BUILD_BUG_ON(offsetof(struct nft_pipapo_scratch, map) > NFT_PIPAPO_ALIGN_HEADROOM);
++
++		scratch_aligned = NFT_PIPAPO_LT_ALIGN(&scratch->map);
++		scratch_aligned -= offsetof(struct nft_pipapo_scratch, map);
++		align_off = scratch_aligned - (void *)scratch;
++
++		scratch = scratch_aligned;
++		scratch->align_off = align_off;
+ #endif
++		*per_cpu_ptr(clone->scratch, i) = scratch;
+ 	}
+ 
+ 	return 0;
+@@ -1294,11 +1325,6 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ 	if (!new->scratch)
+ 		goto out_scratch;
+ 
+-#ifdef NFT_PIPAPO_ALIGN
+-	new->scratch_aligned = alloc_percpu(*new->scratch_aligned);
+-	if (!new->scratch_aligned)
+-		goto out_scratch;
+-#endif
+ 	for_each_possible_cpu(i)
+ 		*per_cpu_ptr(new->scratch, i) = NULL;
+ 
+@@ -1350,10 +1376,7 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ 	}
+ out_scratch_realloc:
+ 	for_each_possible_cpu(i)
+-		kfree(*per_cpu_ptr(new->scratch, i));
+-#ifdef NFT_PIPAPO_ALIGN
+-	free_percpu(new->scratch_aligned);
+-#endif
++		pipapo_free_scratch(new, i);
+ out_scratch:
+ 	free_percpu(new->scratch);
+ 	kfree(new);
+@@ -1635,13 +1658,9 @@ static void pipapo_free_match(struct nft_pipapo_match *m)
+ 	int i;
+ 
+ 	for_each_possible_cpu(i)
+-		kfree(*per_cpu_ptr(m->scratch, i));
++		pipapo_free_scratch(m, i);
+ 
+-#ifdef NFT_PIPAPO_ALIGN
+-	free_percpu(m->scratch_aligned);
+-#endif
+ 	free_percpu(m->scratch);
+-
+ 	pipapo_free_fields(m);
+ 
+ 	kfree(m);
+@@ -2118,7 +2137,7 @@ static int nft_pipapo_init(const struct nft_set *set,
+ 	m->field_count = field_count;
+ 	m->bsize_max = 0;
+ 
+-	m->scratch = alloc_percpu(unsigned long *);
++	m->scratch = alloc_percpu(struct nft_pipapo_scratch *);
+ 	if (!m->scratch) {
+ 		err = -ENOMEM;
+ 		goto out_scratch;
+@@ -2126,16 +2145,6 @@ static int nft_pipapo_init(const struct nft_set *set,
+ 	for_each_possible_cpu(i)
+ 		*per_cpu_ptr(m->scratch, i) = NULL;
+ 
+-#ifdef NFT_PIPAPO_ALIGN
+-	m->scratch_aligned = alloc_percpu(unsigned long *);
+-	if (!m->scratch_aligned) {
+-		err = -ENOMEM;
+-		goto out_free;
+-	}
+-	for_each_possible_cpu(i)
+-		*per_cpu_ptr(m->scratch_aligned, i) = NULL;
+-#endif
+-
+ 	rcu_head_init(&m->rcu);
+ 
+ 	nft_pipapo_for_each_field(f, i, m) {
+@@ -2166,9 +2175,6 @@ static int nft_pipapo_init(const struct nft_set *set,
+ 	return 0;
+ 
+ out_free:
+-#ifdef NFT_PIPAPO_ALIGN
+-	free_percpu(m->scratch_aligned);
+-#endif
+ 	free_percpu(m->scratch);
+ out_scratch:
+ 	kfree(m);
+@@ -2222,11 +2228,8 @@ static void nft_pipapo_destroy(const struct nft_ctx *ctx,
+ 
+ 		nft_set_pipapo_match_destroy(ctx, set, m);
+ 
+-#ifdef NFT_PIPAPO_ALIGN
+-		free_percpu(m->scratch_aligned);
+-#endif
+ 		for_each_possible_cpu(cpu)
+-			kfree(*per_cpu_ptr(m->scratch, cpu));
++			pipapo_free_scratch(m, cpu);
+ 		free_percpu(m->scratch);
+ 		pipapo_free_fields(m);
+ 		kfree(m);
+@@ -2239,11 +2242,8 @@ static void nft_pipapo_destroy(const struct nft_ctx *ctx,
+ 		if (priv->dirty)
+ 			nft_set_pipapo_match_destroy(ctx, set, m);
+ 
+-#ifdef NFT_PIPAPO_ALIGN
+-		free_percpu(priv->clone->scratch_aligned);
+-#endif
+ 		for_each_possible_cpu(cpu)
+-			kfree(*per_cpu_ptr(priv->clone->scratch, cpu));
++			pipapo_free_scratch(priv->clone, cpu);
+ 		free_percpu(priv->clone->scratch);
+ 
+ 		pipapo_free_fields(priv->clone);
+diff --git a/net/netfilter/nft_set_pipapo.h b/net/netfilter/nft_set_pipapo.h
+index d84afb8fa79a1..2e709ae01924f 100644
+--- a/net/netfilter/nft_set_pipapo.h
++++ b/net/netfilter/nft_set_pipapo.h
+@@ -130,21 +130,29 @@ struct nft_pipapo_field {
+ 	union nft_pipapo_map_bucket *mt;
+ };
+ 
++/**
++ * struct nft_pipapo_scratch - percpu data used for lookup and matching
++ * @map_index:	Current working bitmap index, toggled between field matches
++ * @align_off:	Offset to get the originally allocated address
++ * @map:	store partial matching results during lookup
++ */
++struct nft_pipapo_scratch {
++	u8 map_index;
++	u32 align_off;
++	unsigned long map[];
++};
++
+ /**
+  * struct nft_pipapo_match - Data used for lookup and matching
+  * @field_count		Amount of fields in set
+  * @scratch:		Preallocated per-CPU maps for partial matching results
+- * @scratch_aligned:	Version of @scratch aligned to NFT_PIPAPO_ALIGN bytes
+  * @bsize_max:		Maximum lookup table bucket size of all fields, in longs
+  * @rcu			Matching data is swapped on commits
+  * @f:			Fields, with lookup and mapping tables
+  */
+ struct nft_pipapo_match {
+ 	int field_count;
+-#ifdef NFT_PIPAPO_ALIGN
+-	unsigned long * __percpu *scratch_aligned;
+-#endif
+-	unsigned long * __percpu *scratch;
++	struct nft_pipapo_scratch * __percpu *scratch;
+ 	size_t bsize_max;
+ 	struct rcu_head rcu;
+ 	struct nft_pipapo_field f[];
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index 10332178da8c5..60fb8bc0fdcc9 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -71,9 +71,6 @@
+ #define NFT_PIPAPO_AVX2_ZERO(reg)					\
+ 	asm volatile("vpxor %ymm" #reg ", %ymm" #reg ", %ymm" #reg)
+ 
+-/* Current working bitmap index, toggled between field matches */
+-static DEFINE_PER_CPU(bool, nft_pipapo_avx2_scratch_index);
+-
+ /**
+  * nft_pipapo_avx2_prepare() - Prepare before main algorithm body
+  *
+@@ -1123,11 +1120,12 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 			    const u32 *key, const struct nft_set_ext **ext)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+-	unsigned long *res, *fill, *scratch;
++	struct nft_pipapo_scratch *scratch;
+ 	u8 genmask = nft_genmask_cur(net);
+ 	const u8 *rp = (const u8 *)key;
+ 	struct nft_pipapo_match *m;
+ 	struct nft_pipapo_field *f;
++	unsigned long *res, *fill;
+ 	bool map_index;
+ 	int i, ret = 0;
+ 
+@@ -1139,15 +1137,16 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	/* This also protects access to all data related to scratch maps */
+ 	kernel_fpu_begin();
+ 
+-	scratch = *raw_cpu_ptr(m->scratch_aligned);
++	scratch = *raw_cpu_ptr(m->scratch);
+ 	if (unlikely(!scratch)) {
+ 		kernel_fpu_end();
+ 		return false;
+ 	}
+-	map_index = raw_cpu_read(nft_pipapo_avx2_scratch_index);
+ 
+-	res  = scratch + (map_index ? m->bsize_max : 0);
+-	fill = scratch + (map_index ? 0 : m->bsize_max);
++	map_index = scratch->map_index;
++
++	res  = scratch->map + (map_index ? m->bsize_max : 0);
++	fill = scratch->map + (map_index ? 0 : m->bsize_max);
+ 
+ 	/* Starting map doesn't need to be set for this implementation */
+ 
+@@ -1219,7 +1218,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 
+ out:
+ 	if (i % 2)
+-		raw_cpu_write(nft_pipapo_avx2_scratch_index, !map_index);
++		scratch->map_index = !map_index;
+ 	kernel_fpu_end();
+ 
+ 	return ret >= 0;
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 12d9d0d0c6022..18c0d163dc76c 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -237,7 +237,7 @@ static void nft_rbtree_gc_remove(struct net *net, struct nft_set *set,
+ 
+ static const struct nft_rbtree_elem *
+ nft_rbtree_gc_elem(const struct nft_set *__set, struct nft_rbtree *priv,
+-		   struct nft_rbtree_elem *rbe, u8 genmask)
++		   struct nft_rbtree_elem *rbe)
+ {
+ 	struct nft_set *set = (struct nft_set *)__set;
+ 	struct rb_node *prev = rb_prev(&rbe->node);
+@@ -256,7 +256,7 @@ nft_rbtree_gc_elem(const struct nft_set *__set, struct nft_rbtree *priv,
+ 	while (prev) {
+ 		rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
+ 		if (nft_rbtree_interval_end(rbe_prev) &&
+-		    nft_set_elem_active(&rbe_prev->ext, genmask))
++		    nft_set_elem_active(&rbe_prev->ext, NFT_GENMASK_ANY))
+ 			break;
+ 
+ 		prev = rb_prev(prev);
+@@ -367,7 +367,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 		    nft_set_elem_active(&rbe->ext, cur_genmask)) {
+ 			const struct nft_rbtree_elem *removed_end;
+ 
+-			removed_end = nft_rbtree_gc_elem(set, priv, rbe, genmask);
++			removed_end = nft_rbtree_gc_elem(set, priv, rbe);
+ 			if (IS_ERR(removed_end))
+ 				return PTR_ERR(removed_end);
+ 
+diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c
+index f6d517185d9c0..826e5f8c78f34 100644
+--- a/net/netfilter/nft_socket.c
++++ b/net/netfilter/nft_socket.c
+@@ -166,6 +166,11 @@ static int nft_socket_validate(const struct nft_ctx *ctx,
+ 			       const struct nft_expr *expr,
+ 			       const struct nft_data **data)
+ {
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	return nft_chain_validate_hooks(ctx->chain,
+ 					(1 << NF_INET_PRE_ROUTING) |
+ 					(1 << NF_INET_LOCAL_IN) |
+diff --git a/net/netfilter/nft_synproxy.c b/net/netfilter/nft_synproxy.c
+index 1133e06f3c40e..0806813d3a767 100644
+--- a/net/netfilter/nft_synproxy.c
++++ b/net/netfilter/nft_synproxy.c
+@@ -186,7 +186,6 @@ static int nft_synproxy_do_init(const struct nft_ctx *ctx,
+ 		break;
+ #endif
+ 	case NFPROTO_INET:
+-	case NFPROTO_BRIDGE:
+ 		err = nf_synproxy_ipv4_init(snet, ctx->net);
+ 		if (err)
+ 			goto nf_ct_failure;
+@@ -219,7 +218,6 @@ static void nft_synproxy_do_destroy(const struct nft_ctx *ctx)
+ 		break;
+ #endif
+ 	case NFPROTO_INET:
+-	case NFPROTO_BRIDGE:
+ 		nf_synproxy_ipv4_fini(snet, ctx->net);
+ 		nf_synproxy_ipv6_fini(snet, ctx->net);
+ 		break;
+@@ -253,6 +251,11 @@ static int nft_synproxy_validate(const struct nft_ctx *ctx,
+ 				 const struct nft_expr *expr,
+ 				 const struct nft_data **data)
+ {
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	return nft_chain_validate_hooks(ctx->chain, (1 << NF_INET_LOCAL_IN) |
+ 						    (1 << NF_INET_FORWARD));
+ }
+diff --git a/net/netfilter/nft_tproxy.c b/net/netfilter/nft_tproxy.c
+index f8d277e05ef4f..6b606e83cdb6c 100644
+--- a/net/netfilter/nft_tproxy.c
++++ b/net/netfilter/nft_tproxy.c
+@@ -293,6 +293,11 @@ static int nft_tproxy_validate(const struct nft_ctx *ctx,
+ 			       const struct nft_expr *expr,
+ 			       const struct nft_data **data)
+ {
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	return nft_chain_validate_hooks(ctx->chain, 1 << NF_INET_PRE_ROUTING);
+ }
+ 
+diff --git a/net/netfilter/nft_xfrm.c b/net/netfilter/nft_xfrm.c
+index cbbbc4ecad3ae..7f762fc428912 100644
+--- a/net/netfilter/nft_xfrm.c
++++ b/net/netfilter/nft_xfrm.c
+@@ -233,6 +233,11 @@ static int nft_xfrm_validate(const struct nft_ctx *ctx, const struct nft_expr *e
+ 	const struct nft_xfrm *priv = nft_expr_priv(expr);
+ 	unsigned int hooks;
+ 
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	switch (priv->dir) {
+ 	case XFRM_POLICY_IN:
+ 		hooks = (1 << NF_INET_FORWARD) |
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 901358a5b5931..359f07a53eccf 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -366,7 +366,7 @@ static void netlink_skb_destructor(struct sk_buff *skb)
+ 	if (is_vmalloc_addr(skb->head)) {
+ 		if (!skb->cloned ||
+ 		    !atomic_dec_return(&(skb_shinfo(skb)->dataref)))
+-			vfree(skb->head);
++			vfree_atomic(skb->head);
+ 
+ 		skb->head = NULL;
+ 	}
+diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
+index 4c931bd1c1743..5bfaf06f7be7f 100644
+--- a/net/nfc/nci/core.c
++++ b/net/nfc/nci/core.c
+@@ -1197,6 +1197,10 @@ void nci_free_device(struct nci_dev *ndev)
+ {
+ 	nfc_free_device(ndev->nfc_dev);
+ 	nci_hci_deallocate(ndev);
++
++	/* drop partial rx data packet if present */
++	if (ndev->rx_data_reassembly)
++		kfree_skb(ndev->rx_data_reassembly);
+ 	kfree(ndev);
+ }
+ EXPORT_SYMBOL(nci_free_device);
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 293a798e89f42..cff18a5bbf386 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -47,6 +47,7 @@ struct ovs_len_tbl {
+ 
+ #define OVS_ATTR_NESTED -1
+ #define OVS_ATTR_VARIABLE -2
++#define OVS_COPY_ACTIONS_MAX_DEPTH 16
+ 
+ static bool actions_may_change_flow(const struct nlattr *actions)
+ {
+@@ -2514,13 +2515,15 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 				  const struct sw_flow_key *key,
+ 				  struct sw_flow_actions **sfa,
+ 				  __be16 eth_type, __be16 vlan_tci,
+-				  u32 mpls_label_count, bool log);
++				  u32 mpls_label_count, bool log,
++				  u32 depth);
+ 
+ static int validate_and_copy_sample(struct net *net, const struct nlattr *attr,
+ 				    const struct sw_flow_key *key,
+ 				    struct sw_flow_actions **sfa,
+ 				    __be16 eth_type, __be16 vlan_tci,
+-				    u32 mpls_label_count, bool log, bool last)
++				    u32 mpls_label_count, bool log, bool last,
++				    u32 depth)
+ {
+ 	const struct nlattr *attrs[OVS_SAMPLE_ATTR_MAX + 1];
+ 	const struct nlattr *probability, *actions;
+@@ -2571,7 +2574,8 @@ static int validate_and_copy_sample(struct net *net, const struct nlattr *attr,
+ 		return err;
+ 
+ 	err = __ovs_nla_copy_actions(net, actions, key, sfa,
+-				     eth_type, vlan_tci, mpls_label_count, log);
++				     eth_type, vlan_tci, mpls_label_count, log,
++				     depth + 1);
+ 
+ 	if (err)
+ 		return err;
+@@ -2586,7 +2590,8 @@ static int validate_and_copy_dec_ttl(struct net *net,
+ 				     const struct sw_flow_key *key,
+ 				     struct sw_flow_actions **sfa,
+ 				     __be16 eth_type, __be16 vlan_tci,
+-				     u32 mpls_label_count, bool log)
++				     u32 mpls_label_count, bool log,
++				     u32 depth)
+ {
+ 	const struct nlattr *attrs[OVS_DEC_TTL_ATTR_MAX + 1];
+ 	int start, action_start, err, rem;
+@@ -2619,7 +2624,8 @@ static int validate_and_copy_dec_ttl(struct net *net,
+ 		return action_start;
+ 
+ 	err = __ovs_nla_copy_actions(net, actions, key, sfa, eth_type,
+-				     vlan_tci, mpls_label_count, log);
++				     vlan_tci, mpls_label_count, log,
++				     depth + 1);
+ 	if (err)
+ 		return err;
+ 
+@@ -2633,7 +2639,8 @@ static int validate_and_copy_clone(struct net *net,
+ 				   const struct sw_flow_key *key,
+ 				   struct sw_flow_actions **sfa,
+ 				   __be16 eth_type, __be16 vlan_tci,
+-				   u32 mpls_label_count, bool log, bool last)
++				   u32 mpls_label_count, bool log, bool last,
++				   u32 depth)
+ {
+ 	int start, err;
+ 	u32 exec;
+@@ -2653,7 +2660,8 @@ static int validate_and_copy_clone(struct net *net,
+ 		return err;
+ 
+ 	err = __ovs_nla_copy_actions(net, attr, key, sfa,
+-				     eth_type, vlan_tci, mpls_label_count, log);
++				     eth_type, vlan_tci, mpls_label_count, log,
++				     depth + 1);
+ 	if (err)
+ 		return err;
+ 
+@@ -3022,7 +3030,7 @@ static int validate_and_copy_check_pkt_len(struct net *net,
+ 					   struct sw_flow_actions **sfa,
+ 					   __be16 eth_type, __be16 vlan_tci,
+ 					   u32 mpls_label_count,
+-					   bool log, bool last)
++					   bool log, bool last, u32 depth)
+ {
+ 	const struct nlattr *acts_if_greater, *acts_if_lesser_eq;
+ 	struct nlattr *a[OVS_CHECK_PKT_LEN_ATTR_MAX + 1];
+@@ -3070,7 +3078,8 @@ static int validate_and_copy_check_pkt_len(struct net *net,
+ 		return nested_acts_start;
+ 
+ 	err = __ovs_nla_copy_actions(net, acts_if_lesser_eq, key, sfa,
+-				     eth_type, vlan_tci, mpls_label_count, log);
++				     eth_type, vlan_tci, mpls_label_count, log,
++				     depth + 1);
+ 
+ 	if (err)
+ 		return err;
+@@ -3083,7 +3092,8 @@ static int validate_and_copy_check_pkt_len(struct net *net,
+ 		return nested_acts_start;
+ 
+ 	err = __ovs_nla_copy_actions(net, acts_if_greater, key, sfa,
+-				     eth_type, vlan_tci, mpls_label_count, log);
++				     eth_type, vlan_tci, mpls_label_count, log,
++				     depth + 1);
+ 
+ 	if (err)
+ 		return err;
+@@ -3111,12 +3121,16 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 				  const struct sw_flow_key *key,
+ 				  struct sw_flow_actions **sfa,
+ 				  __be16 eth_type, __be16 vlan_tci,
+-				  u32 mpls_label_count, bool log)
++				  u32 mpls_label_count, bool log,
++				  u32 depth)
+ {
+ 	u8 mac_proto = ovs_key_mac_proto(key);
+ 	const struct nlattr *a;
+ 	int rem, err;
+ 
++	if (depth > OVS_COPY_ACTIONS_MAX_DEPTH)
++		return -EOVERFLOW;
++
+ 	nla_for_each_nested(a, attr, rem) {
+ 		/* Expected argument lengths, (u32)-1 for variable length. */
+ 		static const u32 action_lens[OVS_ACTION_ATTR_MAX + 1] = {
+@@ -3311,7 +3325,7 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 			err = validate_and_copy_sample(net, a, key, sfa,
+ 						       eth_type, vlan_tci,
+ 						       mpls_label_count,
+-						       log, last);
++						       log, last, depth);
+ 			if (err)
+ 				return err;
+ 			skip_copy = true;
+@@ -3382,7 +3396,7 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 			err = validate_and_copy_clone(net, a, key, sfa,
+ 						      eth_type, vlan_tci,
+ 						      mpls_label_count,
+-						      log, last);
++						      log, last, depth);
+ 			if (err)
+ 				return err;
+ 			skip_copy = true;
+@@ -3396,7 +3410,8 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 							      eth_type,
+ 							      vlan_tci,
+ 							      mpls_label_count,
+-							      log, last);
++							      log, last,
++							      depth);
+ 			if (err)
+ 				return err;
+ 			skip_copy = true;
+@@ -3406,7 +3421,8 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 		case OVS_ACTION_ATTR_DEC_TTL:
+ 			err = validate_and_copy_dec_ttl(net, a, key, sfa,
+ 							eth_type, vlan_tci,
+-							mpls_label_count, log);
++							mpls_label_count, log,
++							depth);
+ 			if (err)
+ 				return err;
+ 			skip_copy = true;
+@@ -3446,7 +3462,8 @@ int ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 
+ 	(*sfa)->orig_len = nla_len(attr);
+ 	err = __ovs_nla_copy_actions(net, attr, key, sfa, key->eth.type,
+-				     key->eth.vlan.tci, mpls_label_count, log);
++				     key->eth.vlan.tci, mpls_label_count, log,
++				     0);
+ 	if (err)
+ 		ovs_nla_free_flow_actions(*sfa);
+ 
+diff --git a/net/rds/af_rds.c b/net/rds/af_rds.c
+index b239120dd9ca6..0ec0ae1483492 100644
+--- a/net/rds/af_rds.c
++++ b/net/rds/af_rds.c
+@@ -419,7 +419,7 @@ static int rds_recv_track_latency(struct rds_sock *rs, sockptr_t optval,
+ 
+ 	rs->rs_rx_traces = trace.rx_traces;
+ 	for (i = 0; i < rs->rs_rx_traces; i++) {
+-		if (trace.rx_trace_pos[i] > RDS_MSG_RX_DGRAM_TRACE_MAX) {
++		if (trace.rx_trace_pos[i] >= RDS_MSG_RX_DGRAM_TRACE_MAX) {
+ 			rs->rs_rx_traces = 0;
+ 			return -EFAULT;
+ 		}
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index aff184145ffaf..9081e84295844 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -41,6 +41,14 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+ 
+ 	_enter("%d", conn->debug_id);
+ 
++	if (sp && sp->hdr.type == RXRPC_PACKET_TYPE_ACK) {
++		if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header),
++				  &pkt.ack, sizeof(pkt.ack)) < 0)
++			return;
++		if (pkt.ack.reason == RXRPC_ACK_PING_RESPONSE)
++			return;
++	}
++
+ 	chan = &conn->channels[channel];
+ 
+ 	/* If the last call got moved on whilst we were waiting to run, just
+diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c
+index 68508166bbc0b..af0e95ef992d0 100644
+--- a/net/rxrpc/conn_service.c
++++ b/net/rxrpc/conn_service.c
+@@ -31,7 +31,7 @@ struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *peer,
+ 	struct rxrpc_conn_proto k;
+ 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+ 	struct rb_node *p;
+-	unsigned int seq = 0;
++	unsigned int seq = 1;
+ 
+ 	k.epoch	= sp->hdr.epoch;
+ 	k.cid	= sp->hdr.cid & RXRPC_CIDMASK;
+@@ -41,6 +41,7 @@ struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *peer,
+ 		 * under just the RCU read lock, so we have to check for
+ 		 * changes.
+ 		 */
++		seq++; /* 2 on the 1st/lockless path, otherwise odd */
+ 		read_seqbegin_or_lock(&peer->service_conn_lock, &seq);
+ 
+ 		p = rcu_dereference_raw(peer->service_conns.rb_node);
+diff --git a/net/smc/smc_diag.c b/net/smc/smc_diag.c
+index f15fca59b4b26..7c921760dce78 100644
+--- a/net/smc/smc_diag.c
++++ b/net/smc/smc_diag.c
+@@ -177,7 +177,7 @@ static int __smc_diag_dump(struct sock *sk, struct sk_buff *skb,
+ 	}
+ 	if (smc->conn.lgr && smc->conn.lgr->is_smcd &&
+ 	    (req->diag_ext & (1 << (SMC_DIAG_DMBINFO - 1))) &&
+-	    !list_empty(&smc->conn.lgr->list)) {
++	    !list_empty(&smc->conn.lgr->list) && smc->conn.rmb_desc) {
+ 		struct smc_connection *conn = &smc->conn;
+ 		struct smcd_diag_dmbinfo dinfo;
+ 
+diff --git a/net/sunrpc/xprtmultipath.c b/net/sunrpc/xprtmultipath.c
+index 78c075a68c047..a11e80d178305 100644
+--- a/net/sunrpc/xprtmultipath.c
++++ b/net/sunrpc/xprtmultipath.c
+@@ -253,8 +253,9 @@ struct rpc_xprt *xprt_iter_current_entry(struct rpc_xprt_iter *xpi)
+ 	return xprt_switch_find_current_entry(head, xpi->xpi_cursor);
+ }
+ 
+-bool rpc_xprt_switch_has_addr(struct rpc_xprt_switch *xps,
+-			      const struct sockaddr *sap)
++static
++bool __rpc_xprt_switch_has_addr(struct rpc_xprt_switch *xps,
++				const struct sockaddr *sap)
+ {
+ 	struct list_head *head;
+ 	struct rpc_xprt *pos;
+@@ -273,6 +274,18 @@ bool rpc_xprt_switch_has_addr(struct rpc_xprt_switch *xps,
+ 	return false;
+ }
+ 
++bool rpc_xprt_switch_has_addr(struct rpc_xprt_switch *xps,
++			      const struct sockaddr *sap)
++{
++	bool res;
++
++	rcu_read_lock();
++	res = __rpc_xprt_switch_has_addr(xps, sap);
++	rcu_read_unlock();
++
++	return res;
++}
++
+ static
+ struct rpc_xprt *xprt_switch_find_next_entry(struct list_head *head,
+ 		const struct rpc_xprt *cur)
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index df6aba2246fa0..2511718b8f3f3 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -1072,6 +1072,12 @@ int tipc_nl_bearer_add(struct sk_buff *skb, struct genl_info *info)
+ 
+ #ifdef CONFIG_TIPC_MEDIA_UDP
+ 	if (attrs[TIPC_NLA_BEARER_UDP_OPTS]) {
++		if (b->media->type_id != TIPC_MEDIA_TYPE_UDP) {
++			rtnl_unlock();
++			NL_SET_ERR_MSG(info->extack, "UDP option is unsupported");
++			return -EINVAL;
++		}
++
+ 		err = tipc_udp_nl_bearer_add(b,
+ 					     attrs[TIPC_NLA_BEARER_UDP_OPTS]);
+ 		if (err) {
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 237488b1b58b6..b003d0597f4bd 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -1126,13 +1126,11 @@ static void unix_state_double_lock(struct sock *sk1, struct sock *sk2)
+ 		unix_state_lock(sk1);
+ 		return;
+ 	}
+-	if (sk1 < sk2) {
+-		unix_state_lock(sk1);
+-		unix_state_lock_nested(sk2);
+-	} else {
+-		unix_state_lock(sk2);
+-		unix_state_lock_nested(sk1);
+-	}
++	if (sk1 > sk2)
++		swap(sk1, sk2);
++
++	unix_state_lock(sk1);
++	unix_state_lock_nested(sk2, U_LOCK_SECOND);
+ }
+ 
+ static void unix_state_double_unlock(struct sock *sk1, struct sock *sk2)
+@@ -1352,7 +1350,7 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ 		goto out_unlock;
+ 	}
+ 
+-	unix_state_lock_nested(sk);
++	unix_state_lock_nested(sk, U_LOCK_SECOND);
+ 
+ 	if (sk->sk_state != st) {
+ 		unix_state_unlock(sk);
+diff --git a/net/unix/diag.c b/net/unix/diag.c
+index 951b33fa8f5cf..2975e7a061d0b 100644
+--- a/net/unix/diag.c
++++ b/net/unix/diag.c
+@@ -83,7 +83,7 @@ static int sk_diag_dump_icons(struct sock *sk, struct sk_buff *nlskb)
+ 			 * queue lock. With the other's queue locked it's
+ 			 * OK to lock the state.
+ 			 */
+-			unix_state_lock_nested(req);
++			unix_state_lock_nested(req, U_LOCK_DIAG);
+ 			peer = unix_sk(req)->peer;
+ 			buf[i++] = (peer ? sock_i_ino(peer) : 0);
+ 			unix_state_unlock(req);
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 1e6dfe204ff36..a6c289a61d30c 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1801,8 +1801,12 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 				list_add(&new->hidden_list,
+ 					 &hidden->hidden_list);
+ 				hidden->refcount++;
++
++				ies = (void *)rcu_access_pointer(new->pub.beacon_ies);
+ 				rcu_assign_pointer(new->pub.beacon_ies,
+ 						   hidden->pub.beacon_ies);
++				if (ies)
++					kfree_rcu(ies, rcu_head);
+ 			}
+ 		} else {
+ 			/*
+diff --git a/scripts/decode_stacktrace.sh b/scripts/decode_stacktrace.sh
+index 90398347e3664..3463f0f6c3f4f 100755
+--- a/scripts/decode_stacktrace.sh
++++ b/scripts/decode_stacktrace.sh
+@@ -9,6 +9,29 @@ if [[ $# < 1 ]]; then
+ 	exit 1
+ fi
+ 
++# Try to find a Rust demangler
++if type llvm-cxxfilt >/dev/null 2>&1 ; then
++	cppfilt=llvm-cxxfilt
++elif type c++filt >/dev/null 2>&1 ; then
++	cppfilt=c++filt
++	cppfilt_opts=-i
++fi
++
++UTIL_SUFFIX=
++if [[ -z ${LLVM:-} ]]; then
++	UTIL_PREFIX=${CROSS_COMPILE:-}
++else
++	UTIL_PREFIX=llvm-
++	if [[ ${LLVM} == */ ]]; then
++		UTIL_PREFIX=${LLVM}${UTIL_PREFIX}
++	elif [[ ${LLVM} == -* ]]; then
++		UTIL_SUFFIX=${LLVM}
++	fi
++fi
++
++READELF=${UTIL_PREFIX}readelf${UTIL_SUFFIX}
++ADDR2LINE=${UTIL_PREFIX}addr2line${UTIL_SUFFIX}
++
+ if [[ $1 == "-r" ]] ; then
+ 	vmlinux=""
+ 	basepath="auto"
+@@ -33,13 +56,18 @@ else
+ 	release=""
+ fi
+ 
+-declare -A cache
+-declare -A modcache
++declare aarray_support=true
++declare -A cache 2>/dev/null
++if [[ $? != 0 ]]; then
++	aarray_support=false
++else
++	declare -A modcache
++fi
+ 
+ find_module() {
+ 	if [[ "$modpath" != "" ]] ; then
+ 		for fn in $(find "$modpath" -name "${module//_/[-_]}.ko*") ; do
+-			if readelf -WS "$fn" | grep -qwF .debug_line ; then
++			if ${READELF} -WS "$fn" | grep -qwF .debug_line ; then
+ 				echo $fn
+ 				return
+ 			fi
+@@ -51,7 +79,7 @@ find_module() {
+ 	find_module && return
+ 
+ 	if [[ $release == "" ]] ; then
+-		release=$(gdb -ex 'print init_uts_ns.name.release' -ex 'quit' -quiet -batch "$vmlinux" | sed -n 's/\$1 = "\(.*\)".*/\1/p')
++		release=$(gdb -ex 'print init_uts_ns.name.release' -ex 'quit' -quiet -batch "$vmlinux" 2>/dev/null | sed -n 's/\$1 = "\(.*\)".*/\1/p')
+ 	fi
+ 
+ 	for dn in {/usr/lib/debug,}/lib/modules/$release ; do
+@@ -74,7 +102,7 @@ parse_symbol() {
+ 
+ 	if [[ $module == "" ]] ; then
+ 		local objfile=$vmlinux
+-	elif [[ "${modcache[$module]+isset}" == "isset" ]]; then
++	elif [[ $aarray_support == true && "${modcache[$module]+isset}" == "isset" ]]; then
+ 		local objfile=${modcache[$module]}
+ 	else
+ 		local objfile=$(find_module)
+@@ -82,7 +110,9 @@ parse_symbol() {
+ 			echo "WARNING! Modules path isn't set, but is needed to parse this symbol" >&2
+ 			return
+ 		fi
+-		modcache[$module]=$objfile
++		if [[ $aarray_support == true ]]; then
++			modcache[$module]=$objfile
++		fi
+ 	fi
+ 
+ 	# Remove the englobing parenthesis
+@@ -102,15 +132,17 @@ parse_symbol() {
+ 	# Use 'nm vmlinux' to figure out the base address of said symbol.
+ 	# It's actually faster to call it every time than to load it
+ 	# all into bash.
+-	if [[ "${cache[$module,$name]+isset}" == "isset" ]]; then
++	if [[ $aarray_support == true && "${cache[$module,$name]+isset}" == "isset" ]]; then
+ 		local base_addr=${cache[$module,$name]}
+ 	else
+-		local base_addr=$(nm "$objfile" | awk '$3 == "'$name'" && ($2 == "t" || $2 == "T") {print $1; exit}')
++		local base_addr=$(nm "$objfile" 2>/dev/null | awk '$3 == "'$name'" && ($2 == "t" || $2 == "T") {print $1; exit}')
+ 		if [[ $base_addr == "" ]] ; then
+ 			# address not found
+ 			return
+ 		fi
+-		cache[$module,$name]="$base_addr"
++		if [[ $aarray_support == true ]]; then
++			cache[$module,$name]="$base_addr"
++		fi
+ 	fi
+ 	# Let's start doing the math to get the exact address into the
+ 	# symbol. First, strip out the symbol total length.
+@@ -126,11 +158,13 @@ parse_symbol() {
+ 
+ 	# Pass it to addr2line to get filename and line number
+ 	# Could get more than one result
+-	if [[ "${cache[$module,$address]+isset}" == "isset" ]]; then
++	if [[ $aarray_support == true && "${cache[$module,$address]+isset}" == "isset" ]]; then
+ 		local code=${cache[$module,$address]}
+ 	else
+-		local code=$(${CROSS_COMPILE}addr2line -i -e "$objfile" "$address")
+-		cache[$module,$address]=$code
++		local code=$(${ADDR2LINE} -i -e "$objfile" "$address" 2>/dev/null)
++		if [[ $aarray_support == true ]]; then
++			cache[$module,$address]=$code
++		fi
+ 	fi
+ 
+ 	# addr2line doesn't return a proper error code if it fails, so
+@@ -146,6 +180,12 @@ parse_symbol() {
+ 	# In the case of inlines, move everything to same line
+ 	code=${code//$'\n'/' '}
+ 
++	# Demangle if the name looks like a Rust symbol and if
++	# we got a Rust demangler
++	if [[ $name =~ ^_R && $cppfilt != "" ]] ; then
++		name=$("$cppfilt" "$cppfilt_opts" "$name")
++	fi
++
+ 	# Replace old address with pretty line numbers
+ 	symbol="$segment$name ($code)"
+ }
+diff --git a/scripts/get_abi.pl b/scripts/get_abi.pl
+index 92d9aa6cc4f5d..db6098c42fa6b 100755
+--- a/scripts/get_abi.pl
++++ b/scripts/get_abi.pl
+@@ -75,7 +75,7 @@ sub parse_abi {
+ 	$name =~ s,.*/,,;
+ 
+ 	my $fn = $file;
+-	$fn =~ s,Documentation/ABI/,,;
++	$fn =~ s,.*Documentation/ABI/,,;
+ 
+ 	my $nametag = "File $fn";
+ 	$data{$nametag}->{what} = "File $name";
+diff --git a/scripts/kernel-doc b/scripts/kernel-doc
+index 19af6dd160e6b..7a04d4c053260 100755
+--- a/scripts/kernel-doc
++++ b/scripts/kernel-doc
+@@ -1232,7 +1232,8 @@ sub dump_struct($$) {
+ 	$members =~ s/DECLARE_KFIFO\s*\(([^,)]+),\s*([^,)]+),\s*([^,)]+)\)/$2 \*$1/gos;
+ 	# replace DECLARE_KFIFO_PTR
+ 	$members =~ s/DECLARE_KFIFO_PTR\s*\(([^,)]+),\s*([^,)]+)\)/$2 \*$1/gos;
+-
++	# replace DECLARE_FLEX_ARRAY
++	$members =~ s/(?:__)?DECLARE_FLEX_ARRAY\s*\($args,\s*$args\)/$1 $2\[\]/gos;
+ 	my $declaration = $members;
+ 
+ 	# Split nested struct/union elements as newer ones
+diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
+index acd07a70a2f4e..3a1ffd84eac28 100755
+--- a/scripts/link-vmlinux.sh
++++ b/scripts/link-vmlinux.sh
+@@ -171,8 +171,13 @@ gen_btf()
+ 	${OBJCOPY} --only-section=.BTF --set-section-flags .BTF=alloc,readonly \
+ 		--strip-all ${1} ${2} 2>/dev/null
+ 	# Change e_type to ET_REL so that it can be used to link final vmlinux.
+-	# Unlike GNU ld, lld does not allow an ET_EXEC input.
+-	printf '\1' | dd of=${2} conv=notrunc bs=1 seek=16 status=none
++	# GNU ld 2.35+ and lld do not allow an ET_EXEC input.
++	if [ -n "${CONFIG_CPU_BIG_ENDIAN}" ]; then
++		et_rel='\0\1'
++	else
++		et_rel='\1\0'
++	fi
++	printf "${et_rel}" | dd of=${2} conv=notrunc bs=1 seek=16 status=none
+ }
+ 
+ # Create ${2} .S file with all symbols from the ${1} object file
+diff --git a/scripts/mod/sumversion.c b/scripts/mod/sumversion.c
+index d587f40f11177..b6eda411be154 100644
+--- a/scripts/mod/sumversion.c
++++ b/scripts/mod/sumversion.c
+@@ -328,7 +328,12 @@ static int parse_source_files(const char *objfile, struct md4_ctx *md)
+ 
+ 	/* Sum all files in the same dir or subdirs. */
+ 	while ((line = get_line(&pos))) {
+-		char* p = line;
++		char* p;
++
++		/* trim the leading spaces away */
++		while (isspace(*line))
++			line++;
++		p = line;
+ 
+ 		if (strncmp(line, "source_", sizeof("source_")-1) == 0) {
+ 			p = strrchr(line, ' ');
+diff --git a/security/security.c b/security/security.c
+index f9157d5023c66..269c3965393f4 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -1498,6 +1498,24 @@ int security_file_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ }
+ EXPORT_SYMBOL_GPL(security_file_ioctl);
+ 
++/**
++ * security_file_ioctl_compat() - Check if an ioctl is allowed in compat mode
++ * @file: associated file
++ * @cmd: ioctl cmd
++ * @arg: ioctl arguments
++ *
++ * Compat version of security_file_ioctl() that correctly handles 32-bit
++ * processes running on 64-bit kernels.
++ *
++ * Return: Returns 0 if permission is granted.
++ */
++int security_file_ioctl_compat(struct file *file, unsigned int cmd,
++			       unsigned long arg)
++{
++	return call_int_hook(file_ioctl_compat, 0, file, cmd, arg);
++}
++EXPORT_SYMBOL_GPL(security_file_ioctl_compat);
++
+ static inline unsigned long mmap_prot(struct file *file, unsigned long prot)
+ {
+ 	/*
+@@ -2080,7 +2098,19 @@ EXPORT_SYMBOL(security_inode_setsecctx);
+ 
+ int security_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen)
+ {
+-	return call_int_hook(inode_getsecctx, -EOPNOTSUPP, inode, ctx, ctxlen);
++	struct security_hook_list *hp;
++	int rc;
++
++	/*
++	 * Only one module will provide a security context.
++	 */
++	hlist_for_each_entry(hp, &security_hook_heads.inode_getsecctx, list) {
++		rc = hp->hook.inode_getsecctx(inode, ctx, ctxlen);
++		if (rc != LSM_RET_DEFAULT(inode_getsecctx))
++			return rc;
++	}
++
++	return LSM_RET_DEFAULT(inode_getsecctx);
+ }
+ EXPORT_SYMBOL(security_inode_getsecctx);
+ 
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index f545321d96dc3..50d3ddfe15fd1 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -3662,6 +3662,33 @@ static int selinux_file_ioctl(struct file *file, unsigned int cmd,
+ 	return error;
+ }
+ 
++static int selinux_file_ioctl_compat(struct file *file, unsigned int cmd,
++			      unsigned long arg)
++{
++	/*
++	 * If we are in a 64-bit kernel running 32-bit userspace, we need to
++	 * make sure we don't compare 32-bit flags to 64-bit flags.
++	 */
++	switch (cmd) {
++	case FS_IOC32_GETFLAGS:
++		cmd = FS_IOC_GETFLAGS;
++		break;
++	case FS_IOC32_SETFLAGS:
++		cmd = FS_IOC_SETFLAGS;
++		break;
++	case FS_IOC32_GETVERSION:
++		cmd = FS_IOC_GETVERSION;
++		break;
++	case FS_IOC32_SETVERSION:
++		cmd = FS_IOC_SETVERSION;
++		break;
++	default:
++		break;
++	}
++
++	return selinux_file_ioctl(file, cmd, arg);
++}
++
+ static int default_noexec __ro_after_init;
+ 
+ static int file_map_prot_check(struct file *file, unsigned long prot, int shared)
+@@ -7049,6 +7076,7 @@ static struct security_hook_list selinux_hooks[] __lsm_ro_after_init = {
+ 	LSM_HOOK_INIT(file_permission, selinux_file_permission),
+ 	LSM_HOOK_INIT(file_alloc_security, selinux_file_alloc_security),
+ 	LSM_HOOK_INIT(file_ioctl, selinux_file_ioctl),
++	LSM_HOOK_INIT(file_ioctl_compat, selinux_file_ioctl_compat),
+ 	LSM_HOOK_INIT(mmap_file, selinux_mmap_file),
+ 	LSM_HOOK_INIT(mmap_addr, selinux_mmap_addr),
+ 	LSM_HOOK_INIT(file_mprotect, selinux_file_mprotect),
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 814518ad4402b..e1669759403a6 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -4767,6 +4767,7 @@ static struct security_hook_list smack_hooks[] __lsm_ro_after_init = {
+ 
+ 	LSM_HOOK_INIT(file_alloc_security, smack_file_alloc_security),
+ 	LSM_HOOK_INIT(file_ioctl, smack_file_ioctl),
++	LSM_HOOK_INIT(file_ioctl_compat, smack_file_ioctl),
+ 	LSM_HOOK_INIT(file_lock, smack_file_lock),
+ 	LSM_HOOK_INIT(file_fcntl, smack_file_fcntl),
+ 	LSM_HOOK_INIT(mmap_file, smack_mmap_file),
+diff --git a/security/tomoyo/tomoyo.c b/security/tomoyo/tomoyo.c
+index 1f3cd432d8308..a8dc3ae938f9c 100644
+--- a/security/tomoyo/tomoyo.c
++++ b/security/tomoyo/tomoyo.c
+@@ -548,6 +548,7 @@ static struct security_hook_list tomoyo_hooks[] __lsm_ro_after_init = {
+ 	LSM_HOOK_INIT(path_rename, tomoyo_path_rename),
+ 	LSM_HOOK_INIT(inode_getattr, tomoyo_inode_getattr),
+ 	LSM_HOOK_INIT(file_ioctl, tomoyo_file_ioctl),
++	LSM_HOOK_INIT(file_ioctl_compat, tomoyo_file_ioctl),
+ 	LSM_HOOK_INIT(path_chmod, tomoyo_path_chmod),
+ 	LSM_HOOK_INIT(path_chown, tomoyo_path_chown),
+ 	LSM_HOOK_INIT(path_chroot, tomoyo_path_chroot),
+diff --git a/sound/hda/hdac_stream.c b/sound/hda/hdac_stream.c
+index 5570722458caf..e510bf09967d4 100644
+--- a/sound/hda/hdac_stream.c
++++ b/sound/hda/hdac_stream.c
+@@ -605,17 +605,15 @@ void snd_hdac_stream_timecounter_init(struct hdac_stream *azx_dev,
+ 	struct hdac_stream *s;
+ 	bool inited = false;
+ 	u64 cycle_last = 0;
+-	int i = 0;
+ 
+ 	list_for_each_entry(s, &bus->stream_list, list) {
+-		if (streams & (1 << i)) {
++		if ((streams & (1 << s->index))) {
+ 			azx_timecounter_init(s, inited, cycle_last);
+ 			if (!inited) {
+ 				inited = true;
+ 				cycle_last = s->tc.cycle_last;
+ 			}
+ 		}
+-		i++;
+ 	}
+ 
+ 	snd_pcm_gettime(runtime, &runtime->trigger_tstamp);
+@@ -660,14 +658,13 @@ void snd_hdac_stream_sync(struct hdac_stream *azx_dev, bool start,
+ 			  unsigned int streams)
+ {
+ 	struct hdac_bus *bus = azx_dev->bus;
+-	int i, nwait, timeout;
++	int nwait, timeout;
+ 	struct hdac_stream *s;
+ 
+ 	for (timeout = 5000; timeout; timeout--) {
+ 		nwait = 0;
+-		i = 0;
+ 		list_for_each_entry(s, &bus->stream_list, list) {
+-			if (!(streams & (1 << i++)))
++			if (!(streams & (1 << s->index)))
+ 				continue;
+ 
+ 			if (start) {
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 48c78388c1d20..ea0a2b1d23a38 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -372,6 +372,16 @@ static const struct config_entry config_table[] = {
+ 		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ 		.device = 0x7e28,
+ 	},
++	/* ArrowLake-S */
++	{
++		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++		.device = PCI_DEVICE_ID_INTEL_HDA_ARL_S,
++	},
++	/* ArrowLake */
++	{
++		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++		.device = PCI_DEVICE_ID_INTEL_HDA_ARL,
++	},
+ #endif
+ 
+ /* Lunar Lake */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 12c6eb76fca31..a3c6a5eeba3a4 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2581,6 +2581,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+ 	{ PCI_DEVICE(0x8086, 0x4b58),
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++	/* Arrow Lake */
++	{ PCI_DEVICE_DATA(INTEL, HDA_ARL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE) },
+ 	/* Broxton-P(Apollolake) */
+ 	{ PCI_DEVICE(0x8086, 0x5a98),
+ 	  .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_BROXTON },
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index e35c470eb4814..5b37f5f14bc91 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -235,6 +235,7 @@ enum {
+ 	CXT_FIXUP_HP_ZBOOK_MUTE_LED,
+ 	CXT_FIXUP_HEADSET_MIC,
+ 	CXT_FIXUP_HP_MIC_NO_PRESENCE,
++	CXT_PINCFG_SWS_JS201D,
+ };
+ 
+ /* for hda_fixup_thinkpad_acpi() */
+@@ -732,6 +733,17 @@ static const struct hda_pintbl cxt_pincfg_lemote[] = {
+ 	{}
+ };
+ 
++/* SuoWoSi/South-holding JS201D with sn6140 */
++static const struct hda_pintbl cxt_pincfg_sws_js201d[] = {
++	{ 0x16, 0x03211040 }, /* hp out */
++	{ 0x17, 0x91170110 }, /* SPK/Class_D */
++	{ 0x18, 0x95a70130 }, /* Internal mic */
++	{ 0x19, 0x03a11020 }, /* Headset Mic */
++	{ 0x1a, 0x40f001f0 }, /* Not used */
++	{ 0x21, 0x40f001f0 }, /* Not used */
++	{}
++};
++
+ static const struct hda_fixup cxt_fixups[] = {
+ 	[CXT_PINCFG_LENOVO_X200] = {
+ 		.type = HDA_FIXUP_PINS,
+@@ -887,6 +899,10 @@ static const struct hda_fixup cxt_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = CXT_FIXUP_HEADSET_MIC,
+ 	},
++	[CXT_PINCFG_SWS_JS201D] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = cxt_pincfg_sws_js201d,
++	},
+ };
+ 
+ static const struct snd_pci_quirk cxt5045_fixups[] = {
+@@ -960,6 +976,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8458, "HP Z2 G4 mini premium", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
++	SND_PCI_QUIRK(0x14f1, 0x0265, "SWS JS201D", CXT_PINCFG_SWS_JS201D),
+ 	SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
+ 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
+ 	SND_PCI_QUIRK(0x17aa, 0x215e, "Lenovo T410", CXT_PINCFG_LENOVO_TP410),
+@@ -1000,6 +1017,7 @@ static const struct hda_model_fixup cxt5066_fixup_models[] = {
+ 	{ .id = CXT_FIXUP_HP_ZBOOK_MUTE_LED, .name = "hp-zbook-mute-led" },
+ 	{ .id = CXT_FIXUP_HP_MIC_NO_PRESENCE, .name = "hp-mic-fix" },
+ 	{ .id = CXT_PINCFG_LENOVO_NOTEBOOK, .name = "lenovo-20149" },
++	{ .id = CXT_PINCFG_SWS_JS201D, .name = "sws-js201d" },
+ 	{}
+ };
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 412fbe098e0c7..233449d982370 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8841,6 +8841,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x1247, "Acer vCopperbox", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS),
+ 	SND_PCI_QUIRK(0x1025, 0x1248, "Acer Veriton N4660G", ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1269, "Acer SWIFT SF314-54", ALC256_FIXUP_ACER_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x126a, "Acer Swift SF114-32", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+@@ -9025,6 +9026,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8786, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8787, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x87b7, "HP Laptop 14-fq0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e7, "HP ProBook 450 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+@@ -9331,6 +9333,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
++	SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+ 	SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+ 	SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10),
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index 7dc80183921ed..04457cbed5b4e 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -3276,6 +3276,7 @@ static void rt5645_jack_detect_work(struct work_struct *work)
+ 				    report, SND_JACK_HEADPHONE);
+ 		snd_soc_jack_report(rt5645->mic_jack,
+ 				    report, SND_JACK_MICROPHONE);
++		mutex_unlock(&rt5645->jd_mutex);
+ 		return;
+ 	case 4:
+ 		val = snd_soc_component_read(rt5645->component, RT5645_A_JD_CTRL1) & 0x0020;
+diff --git a/tools/lib/subcmd/help.c b/tools/lib/subcmd/help.c
+index bf02d62a3b2b5..42f57b640f119 100644
+--- a/tools/lib/subcmd/help.c
++++ b/tools/lib/subcmd/help.c
+@@ -50,11 +50,21 @@ void uniq(struct cmdnames *cmds)
+ 	if (!cmds->cnt)
+ 		return;
+ 
+-	for (i = j = 1; i < cmds->cnt; i++)
+-		if (strcmp(cmds->names[i]->name, cmds->names[i-1]->name))
+-			cmds->names[j++] = cmds->names[i];
+-
++	for (i = 1; i < cmds->cnt; i++) {
++		if (!strcmp(cmds->names[i]->name, cmds->names[i-1]->name))
++			zfree(&cmds->names[i - 1]);
++	}
++	for (i = 0, j = 0; i < cmds->cnt; i++) {
++		if (cmds->names[i]) {
++			if (i == j)
++				j++;
++			else
++				cmds->names[j++] = cmds->names[i];
++		}
++	}
+ 	cmds->cnt = j;
++	while (j < i)
++		cmds->names[j++] = NULL;
+ }
+ 
+ void exclude_cmds(struct cmdnames *cmds, struct cmdnames *excludes)
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
+index 28d22265b8253..cbdc2839904ef 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf.c
+@@ -4611,6 +4611,7 @@ static size_t get_pprint_mapv_size(enum pprint_mapv_kind_t mapv_kind)
+ #endif
+ 
+ 	assert(0);
++	return 0;
+ }
+ 
+ static void set_pprint_mapv(enum pprint_mapv_kind_t mapv_kind,
+diff --git a/tools/testing/selftests/bpf/progs/pyperf180.c b/tools/testing/selftests/bpf/progs/pyperf180.c
+index c39f559d3100e..42c4a8b62e360 100644
+--- a/tools/testing/selftests/bpf/progs/pyperf180.c
++++ b/tools/testing/selftests/bpf/progs/pyperf180.c
+@@ -1,4 +1,26 @@
+ // SPDX-License-Identifier: GPL-2.0
+ // Copyright (c) 2019 Facebook
+ #define STACK_MAX_LEN 180
++
++/* llvm upstream commit at clang18
++ *   https://github.com/llvm/llvm-project/commit/1a2e77cf9e11dbf56b5720c607313a566eebb16e
++ * changed inlining behavior and caused compilation failure as some branch
++ * target distance exceeded 16bit representation which is the maximum for
++ * cpu v1/v2/v3. Macro __BPF_CPU_VERSION__ is later implemented in clang18
++ * to specify which cpu version is used for compilation. So a smaller
++ * unroll_count can be set if __BPF_CPU_VERSION__ is less than 4, which
++ * reduced some branch target distances and resolved the compilation failure.
++ *
++ * To capture the case where a developer/ci uses clang18 but the corresponding
++ * repo checkpoint does not have __BPF_CPU_VERSION__, a smaller unroll_count
++ * will be set as well to prevent potential compilation failures.
++ */
++#ifdef __BPF_CPU_VERSION__
++#if __BPF_CPU_VERSION__ < 4
++#define UNROLL_COUNT 90
++#endif
++#elif __clang_major__ == 18
++#define UNROLL_COUNT 90
++#endif
++
+ #include "pyperf.h"
+diff --git a/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh b/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
+index 1b08e042cf942..185b02d2d4cd1 100755
+--- a/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
++++ b/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
+@@ -269,6 +269,7 @@ for port in 0 1; do
+ 	echo 1 > $NSIM_DEV_SYS/new_port
+     fi
+     NSIM_NETDEV=`get_netdev_name old_netdevs`
++    ifconfig $NSIM_NETDEV up
+ 
+     msg="new NIC device created"
+     exp0=( 0 0 0 0 )
+@@ -430,6 +431,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     overflow_table0 "overflow NIC table"
+@@ -487,6 +489,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     overflow_table0 "overflow NIC table"
+@@ -543,6 +546,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     overflow_table0 "destroy NIC"
+@@ -572,6 +576,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     msg="create VxLANs v6"
+@@ -632,6 +637,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     echo 110 > $NSIM_DEV_DFS/ports/$port/udp_ports_inject_error
+@@ -687,6 +693,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     msg="create VxLANs v6"
+@@ -746,6 +753,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     msg="create VxLANs v6"
+@@ -876,6 +884,7 @@ msg="re-add a port"
+ 
+ echo 2 > $NSIM_DEV_SYS/del_port
+ echo 2 > $NSIM_DEV_SYS/new_port
++NSIM_NETDEV=`get_netdev_name old_netdevs`
+ check_tables
+ 
+ msg="replace VxLAN in overflow table"
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 3253fdc780d62..9cd5cf800a5b5 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -1583,6 +1583,13 @@ check_command() {
+ 	return 0
+ }
+ 
++check_running() {
++	pid=${1}
++	cmd=${2}
++
++	[ "$(cat /proc/${pid}/cmdline 2>/dev/null | tr -d '\0')" = "{cmd}" ]
++}
++
+ test_cleanup_vxlanX_exception() {
+ 	outer="${1}"
+ 	encap="vxlan"
+@@ -1613,11 +1620,12 @@ test_cleanup_vxlanX_exception() {
+ 
+ 	${ns_a} ip link del dev veth_A-R1 &
+ 	iplink_pid=$!
+-	sleep 1
+-	if [ "$(cat /proc/${iplink_pid}/cmdline 2>/dev/null | tr -d '\0')" = "iplinkdeldevveth_A-R1" ]; then
+-		err "  can't delete veth device in a timely manner, PMTU dst likely leaked"
+-		return 1
+-	fi
++	for i in $(seq 1 20); do
++		check_running ${iplink_pid} "iplinkdeldevveth_A-R1" || return 0
++		sleep 0.1
++	done
++	err "  can't delete veth device in a timely manner, PMTU dst likely leaked"
++	return 1
+ }
+ 
+ test_cleanup_ipv6_exception() {
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 356fd5d1a4285..b7638c3c9eb7d 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1008,9 +1008,9 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
+  */
+ static int kvm_alloc_dirty_bitmap(struct kvm_memory_slot *memslot)
+ {
+-	unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
++	unsigned long dirty_bytes = kvm_dirty_bitmap_bytes(memslot);
+ 
+-	memslot->dirty_bitmap = kvzalloc(dirty_bytes, GFP_KERNEL_ACCOUNT);
++	memslot->dirty_bitmap = __vcalloc(2, dirty_bytes, GFP_KERNEL_ACCOUNT);
+ 	if (!memslot->dirty_bitmap)
+ 		return -ENOMEM;
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-02-23 12:45 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-02-23 12:45 UTC (permalink / raw
  To: gentoo-commits

commit:     91f1ab4ed6c4c8b21324486c19a8228c11f65451
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 23 12:44:47 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb 23 12:44:47 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=91f1ab4e

Temporary remove broken cpu opt patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                   |   4 -
 5010_enable-cpu-optimizations-universal.patch | 682 --------------------------
 2 files changed, 686 deletions(-)

diff --git a/0000_README b/0000_README
index 2aa6b81e..9f0368c8 100644
--- a/0000_README
+++ b/0000_README
@@ -922,7 +922,3 @@ Desc:   Add Gentoo Linux support config settings and defaults.
 Patch:  5000_shiftfs-ubuntu-20.04.patch
 From:   https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/focal
 Desc:   UID/GID shifting overlay filesystem for containers 
-
-Patch:  5010_enable-cpu-optimizations-universal.patch
-From:   https://github.com/graysky2/kernel_compiler_patch
-Desc:   Kernel >= 5.8 patch enables gcc = v9+ optimizations for additional CPUs.

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
deleted file mode 100644
index d437e1ad..00000000
--- a/5010_enable-cpu-optimizations-universal.patch
+++ /dev/null
@@ -1,682 +0,0 @@
-From 4af44fbc97bc51eb742f0d6555bde23cf580d4e3 Mon Sep 17 00:00:00 2001
-From: graysky <graysky@archlinux.us>
-Date: Sun, 6 Jun 2021 09:41:36 -0400
-Subject: [PATCH] more uarches for kernel 5.8-5.14
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-FEATURES
-This patch adds additional CPU options to the Linux kernel accessible under:
- Processor type and features  --->
-  Processor family --->
-
-With the release of gcc 11.1 and clang 12.0, several generic 64-bit levels are
-offered which are good for supported Intel or AMD CPUs:
-• x86-64-v2
-• x86-64-v3
-• x86-64-v4
-
-Users of glibc 2.33 and above can see which level is supported by current
-hardware by running:
-  /lib/ld-linux-x86-64.so.2 --help | grep supported
-
-Alternatively, compare the flags from /proc/cpuinfo to this list.[1]
-
-CPU-specific microarchitectures include:
-• AMD Improved K8-family
-• AMD K10-family
-• AMD Family 10h (Barcelona)
-• AMD Family 14h (Bobcat)
-• AMD Family 16h (Jaguar)
-• AMD Family 15h (Bulldozer)
-• AMD Family 15h (Piledriver)
-• AMD Family 15h (Steamroller)
-• AMD Family 15h (Excavator)
-• AMD Family 17h (Zen)
-• AMD Family 17h (Zen 2)
-• AMD Family 19h (Zen 3)†
-• Intel Silvermont low-power processors
-• Intel Goldmont low-power processors (Apollo Lake and Denverton)
-• Intel Goldmont Plus low-power processors (Gemini Lake)
-• Intel 1st Gen Core i3/i5/i7 (Nehalem)
-• Intel 1.5 Gen Core i3/i5/i7 (Westmere)
-• Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
-• Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
-• Intel 4th Gen Core i3/i5/i7 (Haswell)
-• Intel 5th Gen Core i3/i5/i7 (Broadwell)
-• Intel 6th Gen Core i3/i5/i7 (Skylake)
-• Intel 6th Gen Core i7/i9 (Skylake X)
-• Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
-• Intel 10th Gen Core i7/i9 (Ice Lake)
-• Intel Xeon (Cascade Lake)
-• Intel Xeon (Cooper Lake)*
-• Intel 3rd Gen 10nm++ i3/i5/i7/i9-family (Tiger Lake)*
-• Intel 3rd Gen 10nm++ Xeon (Sapphire Rapids)‡
-• Intel 11th Gen i3/i5/i7/i9-family (Rocket Lake)‡
-• Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)‡
-
-Notes: If not otherwise noted, gcc >=9.1 is required for support.
-       *Requires gcc >=10.1 or clang >=10.0
-       †Required gcc >=10.3 or clang >=12.0
-       ‡Required gcc >=11.1 or clang >=12.0
-
-It also offers to compile passing the 'native' option which, "selects the CPU
-to generate code for at compilation time by determining the processor type of
-the compiling machine. Using -march=native enables all instruction subsets
-supported by the local machine and will produce code optimized for the local
-machine under the constraints of the selected instruction set."[2]
-
-Users of Intel CPUs should select the 'Intel-Native' option and users of AMD
-CPUs should select the 'AMD-Native' option.
-
-MINOR NOTES RELATING TO INTEL ATOM PROCESSORS
-This patch also changes -march=atom to -march=bonnell in accordance with the
-gcc v4.9 changes. Upstream is using the deprecated -match=atom flags when I
-believe it should use the newer -march=bonnell flag for atom processors.[3]
-
-It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
-recommendation is to use the 'atom' option instead.
-
-BENEFITS
-Small but real speed increases are measurable using a make endpoint comparing
-a generic kernel to one built with one of the respective microarchs.
-
-See the following experimental evidence supporting this statement:
-https://github.com/graysky2/kernel_gcc_patch
-
-REQUIREMENTS
-linux version 5.8-5.14
-gcc version >=9.0 or clang version >=9.0
-
-ACKNOWLEDGMENTS
-This patch builds on the seminal work by Jeroen.[5]
-
-REFERENCES
-1.  https://gitlab.com/x86-psABIs/x86-64-ABI/-/commit/77566eb03bc6a326811cb7e9
-2.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
-3.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
-4.  https://github.com/graysky2/kernel_gcc_patch/issues/15
-5.  http://www.linuxforge.net/docs/linux/linux-gcc.php
-
-Signed-off-by: graysky <graysky@archlinux.us>
----
- arch/x86/Kconfig.cpu            | 332 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile               |  47 ++++-
- arch/x86/include/asm/vermagic.h |  66 +++++++
- 3 files changed, 428 insertions(+), 17 deletions(-)
-
-diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 814fe0d349b0..8acf6519d279 100644
---- a/arch/x86/Kconfig.cpu
-+++ b/arch/x86/Kconfig.cpu
-@@ -157,7 +157,7 @@ config MPENTIUM4
- 
- 
- config MK6
--	bool "K6/K6-II/K6-III"
-+	bool "AMD K6/K6-II/K6-III"
- 	depends on X86_32
- 	help
- 	  Select this for an AMD K6-family processor.  Enables use of
-@@ -165,7 +165,7 @@ config MK6
- 	  flags to GCC.
- 
- config MK7
--	bool "Athlon/Duron/K7"
-+	bool "AMD Athlon/Duron/K7"
- 	depends on X86_32
- 	help
- 	  Select this for an AMD Athlon K7-family processor.  Enables use of
-@@ -173,12 +173,98 @@ config MK7
- 	  flags to GCC.
- 
- config MK8
--	bool "Opteron/Athlon64/Hammer/K8"
-+	bool "AMD Opteron/Athlon64/Hammer/K8"
- 	help
- 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
- 	  Enables use of some extended instructions, and passes appropriate
- 	  optimization flags to GCC.
- 
-+config MK8SSE3
-+	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
-+	help
-+	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
-+	  Enables use of some extended instructions, and passes appropriate
-+	  optimization flags to GCC.
-+
-+config MK10
-+	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
-+	help
-+	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
-+	  Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
-+	  Enables use of some extended instructions, and passes appropriate
-+	  optimization flags to GCC.
-+
-+config MBARCELONA
-+	bool "AMD Barcelona"
-+	help
-+	  Select this for AMD Family 10h Barcelona processors.
-+
-+	  Enables -march=barcelona
-+
-+config MBOBCAT
-+	bool "AMD Bobcat"
-+	help
-+	  Select this for AMD Family 14h Bobcat processors.
-+
-+	  Enables -march=btver1
-+
-+config MJAGUAR
-+	bool "AMD Jaguar"
-+	help
-+	  Select this for AMD Family 16h Jaguar processors.
-+
-+	  Enables -march=btver2
-+
-+config MBULLDOZER
-+	bool "AMD Bulldozer"
-+	help
-+	  Select this for AMD Family 15h Bulldozer processors.
-+
-+	  Enables -march=bdver1
-+
-+config MPILEDRIVER
-+	bool "AMD Piledriver"
-+	help
-+	  Select this for AMD Family 15h Piledriver processors.
-+
-+	  Enables -march=bdver2
-+
-+config MSTEAMROLLER
-+	bool "AMD Steamroller"
-+	help
-+	  Select this for AMD Family 15h Steamroller processors.
-+
-+	  Enables -march=bdver3
-+
-+config MEXCAVATOR
-+	bool "AMD Excavator"
-+	help
-+	  Select this for AMD Family 15h Excavator processors.
-+
-+	  Enables -march=bdver4
-+
-+config MZEN
-+	bool "AMD Zen"
-+	help
-+	  Select this for AMD Family 17h Zen processors.
-+
-+	  Enables -march=znver1
-+
-+config MZEN2
-+	bool "AMD Zen 2"
-+	help
-+	  Select this for AMD Family 17h Zen 2 processors.
-+
-+	  Enables -march=znver2
-+
-+config MZEN3
-+	bool "AMD Zen 3"
-+	depends on (CC_IS_GCC && GCC_VERSION >= 100300) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+	help
-+	  Select this for AMD Family 19h Zen 3 processors.
-+
-+	  Enables -march=znver3
-+
- config MCRUSOE
- 	bool "Crusoe"
- 	depends on X86_32
-@@ -270,7 +356,7 @@ config MPSC
- 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
- 
- config MCORE2
--	bool "Core 2/newer Xeon"
-+	bool "Intel Core 2"
- 	help
- 
- 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -278,6 +364,8 @@ config MCORE2
- 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
- 	  (not a typo)
- 
-+	  Enables -march=core2
-+
- config MATOM
- 	bool "Intel Atom"
- 	help
-@@ -287,6 +375,182 @@ config MATOM
- 	  accordingly optimized code. Use a recent GCC with specific Atom
- 	  support in order to fully benefit from selecting this option.
- 
-+config MNEHALEM
-+	bool "Intel Nehalem"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 1st Gen Core processors in the Nehalem family.
-+
-+	  Enables -march=nehalem
-+
-+config MWESTMERE
-+	bool "Intel Westmere"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for the Intel Westmere formerly Nehalem-C family.
-+
-+	  Enables -march=westmere
-+
-+config MSILVERMONT
-+	bool "Intel Silvermont"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for the Intel Silvermont platform.
-+
-+	  Enables -march=silvermont
-+
-+config MGOLDMONT
-+	bool "Intel Goldmont"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
-+
-+	  Enables -march=goldmont
-+
-+config MGOLDMONTPLUS
-+	bool "Intel Goldmont Plus"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for the Intel Goldmont Plus platform including Gemini Lake.
-+
-+	  Enables -march=goldmont-plus
-+
-+config MSANDYBRIDGE
-+	bool "Intel Sandy Bridge"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
-+
-+	  Enables -march=sandybridge
-+
-+config MIVYBRIDGE
-+	bool "Intel Ivy Bridge"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
-+
-+	  Enables -march=ivybridge
-+
-+config MHASWELL
-+	bool "Intel Haswell"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 4th Gen Core processors in the Haswell family.
-+
-+	  Enables -march=haswell
-+
-+config MBROADWELL
-+	bool "Intel Broadwell"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 5th Gen Core processors in the Broadwell family.
-+
-+	  Enables -march=broadwell
-+
-+config MSKYLAKE
-+	bool "Intel Skylake"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 6th Gen Core processors in the Skylake family.
-+
-+	  Enables -march=skylake
-+
-+config MSKYLAKEX
-+	bool "Intel Skylake X"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 6th Gen Core processors in the Skylake X family.
-+
-+	  Enables -march=skylake-avx512
-+
-+config MCANNONLAKE
-+	bool "Intel Cannon Lake"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 8th Gen Core processors
-+
-+	  Enables -march=cannonlake
-+
-+config MICELAKE
-+	bool "Intel Ice Lake"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for 10th Gen Core processors in the Ice Lake family.
-+
-+	  Enables -march=icelake-client
-+
-+config MCASCADELAKE
-+	bool "Intel Cascade Lake"
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for Xeon processors in the Cascade Lake family.
-+
-+	  Enables -march=cascadelake
-+
-+config MCOOPERLAKE
-+	bool "Intel Cooper Lake"
-+	depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for Xeon processors in the Cooper Lake family.
-+
-+	  Enables -march=cooperlake
-+
-+config MTIGERLAKE
-+	bool "Intel Tiger Lake"
-+	depends on  (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for third-generation 10 nm process processors in the Tiger Lake family.
-+
-+	  Enables -march=tigerlake
-+
-+config MSAPPHIRERAPIDS
-+	bool "Intel Sapphire Rapids"
-+	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for third-generation 10 nm process processors in the Sapphire Rapids family.
-+
-+	  Enables -march=sapphirerapids
-+
-+config MROCKETLAKE
-+	bool "Intel Rocket Lake"
-+	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for eleventh-generation processors in the Rocket Lake family.
-+
-+	  Enables -march=rocketlake
-+
-+config MALDERLAKE
-+	bool "Intel Alder Lake"
-+	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+	select X86_P6_NOP
-+	help
-+
-+	  Select this for twelfth-generation processors in the Alder Lake family.
-+
-+	  Enables -march=alderlake
-+
- config GENERIC_CPU
- 	bool "Generic-x86-64"
- 	depends on X86_64
-@@ -294,6 +558,50 @@ config GENERIC_CPU
- 	  Generic x86-64 CPU.
- 	  Run equally well on all x86-64 CPUs.
- 
-+config GENERIC_CPU2
-+	bool "Generic-x86-64-v2"
-+	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+	depends on X86_64
-+	help
-+	  Generic x86-64 CPU.
-+	  Run equally well on all x86-64 CPUs with min support of x86-64-v2.
-+
-+config GENERIC_CPU3
-+	bool "Generic-x86-64-v3"
-+	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+	depends on X86_64
-+	help
-+	  Generic x86-64-v3 CPU with v3 instructions.
-+	  Run equally well on all x86-64 CPUs with min support of x86-64-v3.
-+
-+config GENERIC_CPU4
-+	bool "Generic-x86-64-v4"
-+	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+	depends on X86_64
-+	help
-+	  Generic x86-64 CPU with v4 instructions.
-+	  Run equally well on all x86-64 CPUs with min support of x86-64-v4.
-+
-+config MNATIVE_INTEL
-+	bool "Intel-Native optimizations autodetected by the compiler"
-+	help
-+
-+	  Clang 3.8, GCC 4.2 and above support -march=native, which automatically detects
-+	  the optimum settings to use based on your processor. Do NOT use this
-+	  for AMD CPUs.  Intel Only!
-+
-+	  Enables -march=native
-+
-+config MNATIVE_AMD
-+	bool "AMD-Native optimizations autodetected by the compiler"
-+	help
-+
-+	  Clang 3.8, GCC 4.2 and above support -march=native, which automatically detects
-+	  the optimum settings to use based on your processor. Do NOT use this
-+	  for Intel CPUs.  AMD Only!
-+
-+	  Enables -march=native
-+
- endchoice
- 
- config X86_GENERIC
-@@ -318,7 +626,7 @@ config X86_INTERNODE_CACHE_SHIFT
- config X86_L1_CACHE_SHIFT
- 	int
- 	default "7" if MPENTIUM4 || MPSC
--	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 || GENERIC_CPU3 || GENERIC_CPU4
- 	default "4" if MELAN || M486SX || M486 || MGEODEGX1
- 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
- 
-@@ -336,11 +644,11 @@ config X86_ALIGNMENT_16
- 
- config X86_INTEL_USERCOPY
- 	def_bool y
--	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL
- 
- config X86_USE_PPRO_CHECKSUM
- 	def_bool y
--	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
- 
- config X86_USE_3DNOW
- 	def_bool y
-@@ -360,26 +668,26 @@ config X86_USE_3DNOW
- config X86_P6_NOP
- 	def_bool y
- 	depends on X86_64
--	depends on (MCORE2 || MPENTIUM4 || MPSC)
-+	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL)
- 
- config X86_TSC
- 	def_bool y
--	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
-+	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
- 
- config X86_CMPXCHG64
- 	def_bool y
--	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
-+	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
- 
- # this should be set for all -march=.. options where the compiler
- # generates cmov.
- config X86_CMOV
- 	def_bool y
--	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
-+	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
- 
- config X86_MINIMUM_CPU_FAMILY
- 	int
- 	default "64" if X86_64
--	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8)
-+	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
- 	default "5" if X86_32 && X86_CMPXCHG64
- 	default "4"
- 
-diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 78faf9c7e3ae..ee0cd507af8b 100644
---- a/arch/x86/Makefile
-+++ b/arch/x86/Makefile
-@@ -114,11 +114,48 @@ else
-         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
-         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
--
--        cflags-$(CONFIG_MCORE2) += \
--                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
--	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
--		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3)
-+        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
-+        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
-+        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
-+        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
-+        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
-+        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
-+        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
-+        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
-+        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
-+        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
-+        cflags-$(CONFIG_MZEN3) += $(call cc-option,-march=znver3)
-+
-+        cflags-$(CONFIG_MNATIVE_INTEL) += $(call cc-option,-march=native)
-+        cflags-$(CONFIG_MNATIVE_AMD) += $(call cc-option,-march=native)
-+        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell)
-+        cflags-$(CONFIG_MCORE2) += $(call cc-option,-march=core2)
-+        cflags-$(CONFIG_MNEHALEM) += $(call cc-option,-march=nehalem)
-+        cflags-$(CONFIG_MWESTMERE) += $(call cc-option,-march=westmere)
-+        cflags-$(CONFIG_MSILVERMONT) += $(call cc-option,-march=silvermont)
-+        cflags-$(CONFIG_MGOLDMONT) += $(call cc-option,-march=goldmont)
-+        cflags-$(CONFIG_MGOLDMONTPLUS) += $(call cc-option,-march=goldmont-plus)
-+        cflags-$(CONFIG_MSANDYBRIDGE) += $(call cc-option,-march=sandybridge)
-+        cflags-$(CONFIG_MIVYBRIDGE) += $(call cc-option,-march=ivybridge)
-+        cflags-$(CONFIG_MHASWELL) += $(call cc-option,-march=haswell)
-+        cflags-$(CONFIG_MBROADWELL) += $(call cc-option,-march=broadwell)
-+        cflags-$(CONFIG_MSKYLAKE) += $(call cc-option,-march=skylake)
-+        cflags-$(CONFIG_MSKYLAKEX) += $(call cc-option,-march=skylake-avx512)
-+        cflags-$(CONFIG_MCANNONLAKE) += $(call cc-option,-march=cannonlake)
-+        cflags-$(CONFIG_MICELAKE) += $(call cc-option,-march=icelake-client)
-+        cflags-$(CONFIG_MCASCADELAKE) += $(call cc-option,-march=cascadelake)
-+        cflags-$(CONFIG_MCOOPERLAKE) += $(call cc-option,-march=cooperlake)
-+        cflags-$(CONFIG_MTIGERLAKE) += $(call cc-option,-march=tigerlake)
-+        cflags-$(CONFIG_MSAPPHIRERAPIDS) += $(call cc-option,-march=sapphirerapids)
-+        cflags-$(CONFIG_MROCKETLAKE) += $(call cc-option,-march=rocketlake)
-+        cflags-$(CONFIG_MALDERLAKE) += $(call cc-option,-march=alderlake)
-+        cflags-$(CONFIG_GENERIC_CPU2) += $(call cc-option,-march=x86-64-v2)
-+        cflags-$(CONFIG_GENERIC_CPU3) += $(call cc-option,-march=x86-64-v3)
-+        cflags-$(CONFIG_GENERIC_CPU4) += $(call cc-option,-march=x86-64-v4)
-         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
-         KBUILD_CFLAGS += $(cflags-y)
- 
-diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
-index 75884d2cdec3..4e6a08d4c7e5 100644
---- a/arch/x86/include/asm/vermagic.h
-+++ b/arch/x86/include/asm/vermagic.h
-@@ -17,6 +17,48 @@
- #define MODULE_PROC_FAMILY "586MMX "
- #elif defined CONFIG_MCORE2
- #define MODULE_PROC_FAMILY "CORE2 "
-+#elif defined CONFIG_MNATIVE_INTEL
-+#define MODULE_PROC_FAMILY "NATIVE_INTEL "
-+#elif defined CONFIG_MNATIVE_AMD
-+#define MODULE_PROC_FAMILY "NATIVE_AMD "
-+#elif defined CONFIG_MNEHALEM
-+#define MODULE_PROC_FAMILY "NEHALEM "
-+#elif defined CONFIG_MWESTMERE
-+#define MODULE_PROC_FAMILY "WESTMERE "
-+#elif defined CONFIG_MSILVERMONT
-+#define MODULE_PROC_FAMILY "SILVERMONT "
-+#elif defined CONFIG_MGOLDMONT
-+#define MODULE_PROC_FAMILY "GOLDMONT "
-+#elif defined CONFIG_MGOLDMONTPLUS
-+#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
-+#elif defined CONFIG_MSANDYBRIDGE
-+#define MODULE_PROC_FAMILY "SANDYBRIDGE "
-+#elif defined CONFIG_MIVYBRIDGE
-+#define MODULE_PROC_FAMILY "IVYBRIDGE "
-+#elif defined CONFIG_MHASWELL
-+#define MODULE_PROC_FAMILY "HASWELL "
-+#elif defined CONFIG_MBROADWELL
-+#define MODULE_PROC_FAMILY "BROADWELL "
-+#elif defined CONFIG_MSKYLAKE
-+#define MODULE_PROC_FAMILY "SKYLAKE "
-+#elif defined CONFIG_MSKYLAKEX
-+#define MODULE_PROC_FAMILY "SKYLAKEX "
-+#elif defined CONFIG_MCANNONLAKE
-+#define MODULE_PROC_FAMILY "CANNONLAKE "
-+#elif defined CONFIG_MICELAKE
-+#define MODULE_PROC_FAMILY "ICELAKE "
-+#elif defined CONFIG_MCASCADELAKE
-+#define MODULE_PROC_FAMILY "CASCADELAKE "
-+#elif defined CONFIG_MCOOPERLAKE
-+#define MODULE_PROC_FAMILY "COOPERLAKE "
-+#elif defined CONFIG_MTIGERLAKE
-+#define MODULE_PROC_FAMILY "TIGERLAKE "
-+#elif defined CONFIG_MSAPPHIRERAPIDS
-+#define MODULE_PROC_FAMILY "SAPPHIRERAPIDS "
-+#elif defined CONFIG_ROCKETLAKE
-+#define MODULE_PROC_FAMILY "ROCKETLAKE "
-+#elif defined CONFIG_MALDERLAKE
-+#define MODULE_PROC_FAMILY "ALDERLAKE "
- #elif defined CONFIG_MATOM
- #define MODULE_PROC_FAMILY "ATOM "
- #elif defined CONFIG_M686
-@@ -35,6 +77,30 @@
- #define MODULE_PROC_FAMILY "K7 "
- #elif defined CONFIG_MK8
- #define MODULE_PROC_FAMILY "K8 "
-+#elif defined CONFIG_MK8SSE3
-+#define MODULE_PROC_FAMILY "K8SSE3 "
-+#elif defined CONFIG_MK10
-+#define MODULE_PROC_FAMILY "K10 "
-+#elif defined CONFIG_MBARCELONA
-+#define MODULE_PROC_FAMILY "BARCELONA "
-+#elif defined CONFIG_MBOBCAT
-+#define MODULE_PROC_FAMILY "BOBCAT "
-+#elif defined CONFIG_MBULLDOZER
-+#define MODULE_PROC_FAMILY "BULLDOZER "
-+#elif defined CONFIG_MPILEDRIVER
-+#define MODULE_PROC_FAMILY "PILEDRIVER "
-+#elif defined CONFIG_MSTEAMROLLER
-+#define MODULE_PROC_FAMILY "STEAMROLLER "
-+#elif defined CONFIG_MJAGUAR
-+#define MODULE_PROC_FAMILY "JAGUAR "
-+#elif defined CONFIG_MEXCAVATOR
-+#define MODULE_PROC_FAMILY "EXCAVATOR "
-+#elif defined CONFIG_MZEN
-+#define MODULE_PROC_FAMILY "ZEN "
-+#elif defined CONFIG_MZEN2
-+#define MODULE_PROC_FAMILY "ZEN2 "
-+#elif defined CONFIG_MZEN3
-+#define MODULE_PROC_FAMILY "ZEN3 "
- #elif defined CONFIG_MELAN
- #define MODULE_PROC_FAMILY "ELAN "
- #elif defined CONFIG_MCRUSOE
--- 
-2.31.1
-


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-03-01 13:09 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-03-01 13:09 UTC (permalink / raw
  To: gentoo-commits

commit:     c8d7c31185ba6c4c2569107bc4858dda52ebacfa
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar  1 13:08:58 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar  1 13:08:58 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c8d7c311

Linux patch 5.10.211

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1210_linux-5.10.211.patch | 7770 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7774 insertions(+)

diff --git a/0000_README b/0000_README
index 9f0368c8..bcc82e6d 100644
--- a/0000_README
+++ b/0000_README
@@ -883,6 +883,10 @@ Patch:  1209_linux-5.10.210.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.210
 
+Patch:  1210_linux-5.10.211.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.211
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1210_linux-5.10.211.patch b/1210_linux-5.10.211.patch
new file mode 100644
index 00000000..bef3b1db
--- /dev/null
+++ b/1210_linux-5.10.211.patch
@@ -0,0 +1,7770 @@
+diff --git a/Makefile b/Makefile
+index 6e9ee164b9dfd..dc55a86e0f7df 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 210
++SUBLEVEL = 211
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/bcm47189-luxul-xap-1440.dts b/arch/arm/boot/dts/bcm47189-luxul-xap-1440.dts
+index 00e688b45d981..5901160919dcd 100644
+--- a/arch/arm/boot/dts/bcm47189-luxul-xap-1440.dts
++++ b/arch/arm/boot/dts/bcm47189-luxul-xap-1440.dts
+@@ -26,7 +26,6 @@ leds {
+ 		wlan {
+ 			label = "bcm53xx:blue:wlan";
+ 			gpios = <&chipcommon 10 GPIO_ACTIVE_LOW>;
+-			linux,default-trigger = "default-off";
+ 		};
+ 
+ 		system {
+diff --git a/arch/arm/boot/dts/bcm47189-luxul-xap-810.dts b/arch/arm/boot/dts/bcm47189-luxul-xap-810.dts
+index 78c80a5d3f4fa..8e7483272d47d 100644
+--- a/arch/arm/boot/dts/bcm47189-luxul-xap-810.dts
++++ b/arch/arm/boot/dts/bcm47189-luxul-xap-810.dts
+@@ -26,7 +26,6 @@ leds {
+ 		5ghz {
+ 			label = "bcm53xx:blue:5ghz";
+ 			gpios = <&chipcommon 11 GPIO_ACTIVE_HIGH>;
+-			linux,default-trigger = "default-off";
+ 		};
+ 
+ 		system {
+@@ -42,7 +41,6 @@ pcie0_leds {
+ 		2ghz {
+ 			label = "bcm53xx:blue:2ghz";
+ 			gpios = <&pcie0_chipcommon 3 GPIO_ACTIVE_HIGH>;
+-			linux,default-trigger = "default-off";
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/imx6sx.dtsi b/arch/arm/boot/dts/imx6sx.dtsi
+index 08332f70a8dc2..51491b7418e40 100644
+--- a/arch/arm/boot/dts/imx6sx.dtsi
++++ b/arch/arm/boot/dts/imx6sx.dtsi
+@@ -981,6 +981,8 @@ usdhc1: mmc@2190000 {
+ 					 <&clks IMX6SX_CLK_USDHC1>;
+ 				clock-names = "ipg", "ahb", "per";
+ 				bus-width = <4>;
++				fsl,tuning-start-tap = <20>;
++				fsl,tuning-step= <2>;
+ 				status = "disabled";
+ 			};
+ 
+@@ -993,6 +995,8 @@ usdhc2: mmc@2194000 {
+ 					 <&clks IMX6SX_CLK_USDHC2>;
+ 				clock-names = "ipg", "ahb", "per";
+ 				bus-width = <4>;
++				fsl,tuning-start-tap = <20>;
++				fsl,tuning-step= <2>;
+ 				status = "disabled";
+ 			};
+ 
+@@ -1005,6 +1009,8 @@ usdhc3: mmc@2198000 {
+ 					 <&clks IMX6SX_CLK_USDHC3>;
+ 				clock-names = "ipg", "ahb", "per";
+ 				bus-width = <4>;
++				fsl,tuning-start-tap = <20>;
++				fsl,tuning-step= <2>;
+ 				status = "disabled";
+ 			};
+ 
+diff --git a/arch/arm/mach-ep93xx/core.c b/arch/arm/mach-ep93xx/core.c
+index 6fb19a393fd2e..c06ae33dc53ec 100644
+--- a/arch/arm/mach-ep93xx/core.c
++++ b/arch/arm/mach-ep93xx/core.c
+@@ -337,6 +337,7 @@ static struct gpiod_lookup_table ep93xx_i2c_gpiod_table = {
+ 				GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN),
+ 		GPIO_LOOKUP_IDX("G", 0, NULL, 1,
+ 				GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN),
++		{ }
+ 	},
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/px30.dtsi b/arch/arm64/boot/dts/rockchip/px30.dtsi
+index 0d6761074b11a..f241e7c318bcd 100644
+--- a/arch/arm64/boot/dts/rockchip/px30.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30.dtsi
+@@ -577,6 +577,7 @@ spi0: spi@ff1d0000 {
+ 		clock-names = "spiclk", "apb_pclk";
+ 		dmas = <&dmac 12>, <&dmac 13>;
+ 		dma-names = "tx", "rx";
++		num-cs = <2>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&spi0_clk &spi0_csn &spi0_miso &spi0_mosi>;
+ 		#address-cells = <1>;
+@@ -592,6 +593,7 @@ spi1: spi@ff1d8000 {
+ 		clock-names = "spiclk", "apb_pclk";
+ 		dmas = <&dmac 14>, <&dmac 15>;
+ 		dma-names = "tx", "rx";
++		num-cs = <2>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&spi1_clk &spi1_csn0 &spi1_csn1 &spi1_miso &spi1_mosi>;
+ 		#address-cells = <1>;
+diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
+index 62f261b8eb62f..93c0365cdd7b7 100644
+--- a/arch/arm64/kvm/vgic/vgic-its.c
++++ b/arch/arm64/kvm/vgic/vgic-its.c
+@@ -462,6 +462,9 @@ static int its_sync_lpi_pending_table(struct kvm_vcpu *vcpu)
+ 		}
+ 
+ 		irq = vgic_get_irq(vcpu->kvm, NULL, intids[i]);
++		if (!irq)
++			continue;
++
+ 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+ 		irq->pending_latch = pendmask & (1U << bit_nr);
+ 		vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+@@ -1374,6 +1377,8 @@ static int vgic_its_cmd_handle_movall(struct kvm *kvm, struct vgic_its *its,
+ 
+ 	for (i = 0; i < irq_count; i++) {
+ 		irq = vgic_get_irq(kvm, NULL, intids[i]);
++		if (!irq)
++			continue;
+ 
+ 		update_affinity(irq, vcpu2);
+ 
+diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
+index 6e5bed50c3578..ca3374c6f3749 100644
+--- a/arch/powerpc/kernel/hw_breakpoint.c
++++ b/arch/powerpc/kernel/hw_breakpoint.c
+@@ -504,6 +504,11 @@ static bool is_larx_stcx_instr(int type)
+ 	return type == LARX || type == STCX;
+ }
+ 
++static bool is_octword_vsx_instr(int type, int size)
++{
++	return ((type == LOAD_VSX || type == STORE_VSX) && size == 32);
++}
++
+ /*
+  * We've failed in reliably handling the hw-breakpoint. Unregister
+  * it and throw a warning message to let the user know about it.
+@@ -554,6 +559,63 @@ static bool stepping_handler(struct pt_regs *regs, struct perf_event **bp,
+ 	return true;
+ }
+ 
++static void handle_p10dd1_spurious_exception(struct arch_hw_breakpoint **info,
++					     int *hit, unsigned long ea)
++{
++	int i;
++	unsigned long hw_end_addr;
++
++	/*
++	 * Handle spurious exception only when any bp_per_reg is set.
++	 * Otherwise this might be created by xmon and not actually a
++	 * spurious exception.
++	 */
++	for (i = 0; i < nr_wp_slots(); i++) {
++		if (!info[i])
++			continue;
++
++		hw_end_addr = ALIGN(info[i]->address + info[i]->len, HW_BREAKPOINT_SIZE);
++
++		/*
++		 * Ending address of DAWR range is less than starting
++		 * address of op.
++		 */
++		if ((hw_end_addr - 1) >= ea)
++			continue;
++
++		/*
++		 * Those addresses need to be in the same or in two
++		 * consecutive 512B blocks;
++		 */
++		if (((hw_end_addr - 1) >> 10) != (ea >> 10))
++			continue;
++
++		/*
++		 * 'op address + 64B' generates an address that has a
++		 * carry into bit 52 (crosses 2K boundary).
++		 */
++		if ((ea & 0x800) == ((ea + 64) & 0x800))
++			continue;
++
++		break;
++	}
++
++	if (i == nr_wp_slots())
++		return;
++
++	for (i = 0; i < nr_wp_slots(); i++) {
++		if (info[i]) {
++			hit[i] = 1;
++			info[i]->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
++		}
++	}
++}
++
++/*
++ * Handle a DABR or DAWR exception.
++ *
++ * Called in atomic context.
++ */
+ int hw_breakpoint_handler(struct die_args *args)
+ {
+ 	bool err = false;
+@@ -612,8 +674,14 @@ int hw_breakpoint_handler(struct die_args *args)
+ 		goto reset;
+ 
+ 	if (!nr_hit) {
+-		rc = NOTIFY_DONE;
+-		goto out;
++		/* Workaround for Power10 DD1 */
++		if (!IS_ENABLED(CONFIG_PPC_8xx) && mfspr(SPRN_PVR) == 0x800100 &&
++		    is_octword_vsx_instr(type, size)) {
++			handle_p10dd1_spurious_exception(info, hit, ea);
++		} else {
++			rc = NOTIFY_DONE;
++			goto out;
++		}
+ 	}
+ 
+ 	/*
+@@ -674,6 +742,8 @@ NOKPROBE_SYMBOL(hw_breakpoint_handler);
+ 
+ /*
+  * Handle single-step exceptions following a DABR hit.
++ *
++ * Called in atomic context.
+  */
+ static int single_step_dabr_instruction(struct die_args *args)
+ {
+@@ -731,6 +801,8 @@ NOKPROBE_SYMBOL(single_step_dabr_instruction);
+ 
+ /*
+  * Handle debug exception notifications.
++ *
++ * Called in atomic context.
+  */
+ int hw_breakpoint_exceptions_notify(
+ 		struct notifier_block *unused, unsigned long val, void *data)
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 74799439b2598..beecc36c30276 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -225,7 +225,7 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
+ /* combine single writes by using store-block insn */
+ void __iowrite64_copy(void __iomem *to, const void *from, size_t count)
+ {
+-       zpci_memcpy_toio(to, from, count);
++	zpci_memcpy_toio(to, from, count * 8);
+ }
+ 
+ static void __iomem *__ioremap(phys_addr_t addr, size_t size, pgprot_t prot)
+diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
+index dd5ea1bdf04c5..75efc4c6f0766 100644
+--- a/arch/x86/include/asm/cpu_entry_area.h
++++ b/arch/x86/include/asm/cpu_entry_area.h
+@@ -143,7 +143,7 @@ extern void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags);
+ 
+ extern struct cpu_entry_area *get_cpu_entry_area(int cpu);
+ 
+-static inline struct entry_stack *cpu_entry_stack(int cpu)
++static __always_inline struct entry_stack *cpu_entry_stack(int cpu)
+ {
+ 	return &get_cpu_entry_area(cpu)->entry_stack_page.stack;
+ }
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 7b4782249a925..7711ba5342a1a 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -207,6 +207,8 @@ extern void srso_alias_untrain_ret(void);
+ extern void entry_untrain_ret(void);
+ extern void entry_ibpb(void);
+ 
++extern void (*x86_return_thunk)(void);
++
+ #ifdef CONFIG_RETPOLINE
+ 
+ typedef u8 retpoline_thunk_t[RETPOLINE_THUNK_SIZE];
+diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
+index b7421780e4e92..c6015b4074614 100644
+--- a/arch/x86/include/asm/text-patching.h
++++ b/arch/x86/include/asm/text-patching.h
+@@ -96,24 +96,40 @@ union text_poke_insn {
+ };
+ 
+ static __always_inline
+-void *text_gen_insn(u8 opcode, const void *addr, const void *dest)
++void __text_gen_insn(void *buf, u8 opcode, const void *addr, const void *dest, int size)
+ {
+-	static union text_poke_insn insn; /* per instance */
+-	int size = text_opcode_size(opcode);
++	union text_poke_insn *insn = buf;
++
++	BUG_ON(size < text_opcode_size(opcode));
++
++	/*
++	 * Hide the addresses to avoid the compiler folding in constants when
++	 * referencing code, these can mess up annotations like
++	 * ANNOTATE_NOENDBR.
++	 */
++	OPTIMIZER_HIDE_VAR(insn);
++	OPTIMIZER_HIDE_VAR(addr);
++	OPTIMIZER_HIDE_VAR(dest);
+ 
+-	insn.opcode = opcode;
++	insn->opcode = opcode;
+ 
+ 	if (size > 1) {
+-		insn.disp = (long)dest - (long)(addr + size);
++		insn->disp = (long)dest - (long)(addr + size);
+ 		if (size == 2) {
+ 			/*
+-			 * Ensure that for JMP9 the displacement
++			 * Ensure that for JMP8 the displacement
+ 			 * actually fits the signed byte.
+ 			 */
+-			BUG_ON((insn.disp >> 31) != (insn.disp >> 7));
++			BUG_ON((insn->disp >> 31) != (insn->disp >> 7));
+ 		}
+ 	}
++}
+ 
++static __always_inline
++void *text_gen_insn(u8 opcode, const void *addr, const void *dest)
++{
++	static union text_poke_insn insn; /* per instance */
++	__text_gen_insn(&insn, opcode, addr, dest, text_opcode_size(opcode));
+ 	return &insn.text;
+ }
+ 
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index bf2561a5eb581..3616fd4ba3953 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -414,6 +414,103 @@ do {									\
+ 
+ #endif // CONFIG_CC_ASM_GOTO_OUTPUT
+ 
++#ifdef CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT
++#define __try_cmpxchg_user_asm(itype, ltype, _ptr, _pold, _new, label)	({ \
++	bool success;							\
++	__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold);		\
++	__typeof__(*(_ptr)) __old = *_old;				\
++	__typeof__(*(_ptr)) __new = (_new);				\
++	asm_volatile_goto("\n"						\
++		     "1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"\
++		     _ASM_EXTABLE_UA(1b, %l[label])			\
++		     : CC_OUT(z) (success),				\
++		       [ptr] "+m" (*_ptr),				\
++		       [old] "+a" (__old)				\
++		     : [new] ltype (__new)				\
++		     : "memory"						\
++		     : label);						\
++	if (unlikely(!success))						\
++		*_old = __old;						\
++	likely(success);					})
++
++#ifdef CONFIG_X86_32
++#define __try_cmpxchg64_user_asm(_ptr, _pold, _new, label)	({	\
++	bool success;							\
++	__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold);		\
++	__typeof__(*(_ptr)) __old = *_old;				\
++	__typeof__(*(_ptr)) __new = (_new);				\
++	asm_volatile_goto("\n"						\
++		     "1: " LOCK_PREFIX "cmpxchg8b %[ptr]\n"		\
++		     _ASM_EXTABLE_UA(1b, %l[label])			\
++		     : CC_OUT(z) (success),				\
++		       "+A" (__old),					\
++		       [ptr] "+m" (*_ptr)				\
++		     : "b" ((u32)__new),				\
++		       "c" ((u32)((u64)__new >> 32))			\
++		     : "memory"						\
++		     : label);						\
++	if (unlikely(!success))						\
++		*_old = __old;						\
++	likely(success);					})
++#endif // CONFIG_X86_32
++#else  // !CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT
++#define __try_cmpxchg_user_asm(itype, ltype, _ptr, _pold, _new, label)	({ \
++	int __err = 0;							\
++	bool success;							\
++	__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold);		\
++	__typeof__(*(_ptr)) __old = *_old;				\
++	__typeof__(*(_ptr)) __new = (_new);				\
++	asm volatile("\n"						\
++		     "1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"\
++		     CC_SET(z)						\
++		     "2:\n"						\
++		     _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG,	\
++					   %[errout])			\
++		     : CC_OUT(z) (success),				\
++		       [errout] "+r" (__err),				\
++		       [ptr] "+m" (*_ptr),				\
++		       [old] "+a" (__old)				\
++		     : [new] ltype (__new)				\
++		     : "memory");					\
++	if (unlikely(__err))						\
++		goto label;						\
++	if (unlikely(!success))						\
++		*_old = __old;						\
++	likely(success);					})
++
++#ifdef CONFIG_X86_32
++/*
++ * Unlike the normal CMPXCHG, hardcode ECX for both success/fail and error.
++ * There are only six GPRs available and four (EAX, EBX, ECX, and EDX) are
++ * hardcoded by CMPXCHG8B, leaving only ESI and EDI.  If the compiler uses
++ * both ESI and EDI for the memory operand, compilation will fail if the error
++ * is an input+output as there will be no register available for input.
++ */
++#define __try_cmpxchg64_user_asm(_ptr, _pold, _new, label)	({	\
++	int __result;							\
++	__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold);		\
++	__typeof__(*(_ptr)) __old = *_old;				\
++	__typeof__(*(_ptr)) __new = (_new);				\
++	asm volatile("\n"						\
++		     "1: " LOCK_PREFIX "cmpxchg8b %[ptr]\n"		\
++		     "mov $0, %%ecx\n\t"				\
++		     "setz %%cl\n"					\
++		     "2:\n"						\
++		     _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %%ecx) \
++		     : [result]"=c" (__result),				\
++		       "+A" (__old),					\
++		       [ptr] "+m" (*_ptr)				\
++		     : "b" ((u32)__new),				\
++		       "c" ((u32)((u64)__new >> 32))			\
++		     : "memory", "cc");					\
++	if (unlikely(__result < 0))					\
++		goto label;						\
++	if (unlikely(!__result))					\
++		*_old = __old;						\
++	likely(__result);					})
++#endif // CONFIG_X86_32
++#endif // CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT
++
+ /* FIXME: this hack is definitely wrong -AK */
+ struct __large_struct { unsigned long buf[100]; };
+ #define __m(x) (*(struct __large_struct __user *)(x))
+@@ -506,6 +603,51 @@ do {										\
+ } while (0)
+ #endif // CONFIG_CC_HAS_ASM_GOTO_OUTPUT
+ 
++extern void __try_cmpxchg_user_wrong_size(void);
++
++#ifndef CONFIG_X86_32
++#define __try_cmpxchg64_user_asm(_ptr, _oldp, _nval, _label)		\
++	__try_cmpxchg_user_asm("q", "r", (_ptr), (_oldp), (_nval), _label)
++#endif
++
++/*
++ * Force the pointer to u<size> to match the size expected by the asm helper.
++ * clang/LLVM compiles all cases and only discards the unused paths after
++ * processing errors, which breaks i386 if the pointer is an 8-byte value.
++ */
++#define unsafe_try_cmpxchg_user(_ptr, _oldp, _nval, _label) ({			\
++	bool __ret;								\
++	__chk_user_ptr(_ptr);							\
++	switch (sizeof(*(_ptr))) {						\
++	case 1:	__ret = __try_cmpxchg_user_asm("b", "q",			\
++					       (__force u8 *)(_ptr), (_oldp),	\
++					       (_nval), _label);		\
++		break;								\
++	case 2:	__ret = __try_cmpxchg_user_asm("w", "r",			\
++					       (__force u16 *)(_ptr), (_oldp),	\
++					       (_nval), _label);		\
++		break;								\
++	case 4:	__ret = __try_cmpxchg_user_asm("l", "r",			\
++					       (__force u32 *)(_ptr), (_oldp),	\
++					       (_nval), _label);		\
++		break;								\
++	case 8:	__ret = __try_cmpxchg64_user_asm((__force u64 *)(_ptr), (_oldp),\
++						 (_nval), _label);		\
++		break;								\
++	default: __try_cmpxchg_user_wrong_size();				\
++	}									\
++	__ret;						})
++
++/* "Returns" 0 on success, 1 on failure, -EFAULT if the access faults. */
++#define __try_cmpxchg_user(_ptr, _oldp, _nval, _label)	({		\
++	int __ret = -EFAULT;						\
++	__uaccess_begin_nospec();					\
++	__ret = !unsafe_try_cmpxchg_user(_ptr, _oldp, _nval, _label);	\
++_label:									\
++	__uaccess_end();						\
++	__ret;								\
++							})
++
+ /*
+  * We want the unsafe accessors to always be inlined and use
+  * the error labels - thus the macro games.
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index 9e0a3daa838c2..9ceef8515c031 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -676,6 +676,7 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end)
+ }
+ 
+ #ifdef CONFIG_RETHUNK
++
+ /*
+  * Rewrite the compiler generated return thunk tail-calls.
+  *
+@@ -691,14 +692,18 @@ static int patch_return(void *addr, struct insn *insn, u8 *bytes)
+ {
+ 	int i = 0;
+ 
+-	if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
+-		return -1;
++	if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
++		if (x86_return_thunk == __x86_return_thunk)
++			return -1;
+ 
+-	bytes[i++] = RET_INSN_OPCODE;
++		i = JMP32_INSN_SIZE;
++		__text_gen_insn(bytes, JMP32_INSN_OPCODE, addr, x86_return_thunk, i);
++	} else {
++		bytes[i++] = RET_INSN_OPCODE;
++	}
+ 
+ 	for (; i < insn->length;)
+ 		bytes[i++] = INT3_INSN_OPCODE;
+-
+ 	return i;
+ }
+ 
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 6d546f4426ac3..46447877b5941 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -367,10 +367,8 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ 		goto fail;
+ 
+ 	ip = trampoline + size;
+-
+-	/* The trampoline ends with ret(q) */
+ 	if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
+-		memcpy(ip, text_gen_insn(JMP32_INSN_OPCODE, ip, &__x86_return_thunk), JMP32_INSN_SIZE);
++		__text_gen_insn(ip, JMP32_INSN_OPCODE, ip, x86_return_thunk, JMP32_INSN_SIZE);
+ 	else
+ 		memcpy(ip, retq, sizeof(retq));
+ 
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index e21937680d1f2..5bea8d93883a2 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -55,28 +55,16 @@ void __init default_banner(void)
+ static const unsigned char ud2a[] = { 0x0f, 0x0b };
+ 
+ struct branch {
+-	unsigned char opcode;
+-	u32 delta;
++       unsigned char opcode;
++       u32 delta;
+ } __attribute__((packed));
+ 
+ static unsigned paravirt_patch_call(void *insn_buff, const void *target,
+ 				    unsigned long addr, unsigned len)
+ {
+-	const int call_len = 5;
+-	struct branch *b = insn_buff;
+-	unsigned long delta = (unsigned long)target - (addr+call_len);
+-
+-	if (len < call_len) {
+-		pr_warn("paravirt: Failed to patch indirect CALL at %ps\n", (void *)addr);
+-		/* Kernel might not be viable if patching fails, bail out: */
+-		BUG_ON(1);
+-	}
+-
+-	b->opcode = 0xe8; /* call */
+-	b->delta = delta;
+-	BUILD_BUG_ON(sizeof(*b) != call_len);
+-
+-	return call_len;
++	__text_gen_insn(insn_buff, CALL_INSN_OPCODE,
++			(void *)addr, target, CALL_INSN_SIZE);
++	return CALL_INSN_SIZE;
+ }
+ 
+ #ifdef CONFIG_PARAVIRT_XXL
+diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
+index 759b986b7f033..273e9b77b7302 100644
+--- a/arch/x86/kernel/static_call.c
++++ b/arch/x86/kernel/static_call.c
+@@ -41,7 +41,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type,
+ 
+ 	case RET:
+ 		if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
+-			code = text_gen_insn(JMP32_INSN_OPCODE, insn, &__x86_return_thunk);
++			code = text_gen_insn(JMP32_INSN_OPCODE, insn, x86_return_thunk);
+ 		else
+ 			code = &retinsn;
+ 		break;
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 8e3c3d8916dd2..d7d592c092983 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -405,7 +405,7 @@ static void emit_return(u8 **pprog, u8 *ip)
+ 	int cnt = 0;
+ 
+ 	if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
+-		emit_jump(&prog, &__x86_return_thunk, ip);
++		emit_jump(&prog, x86_return_thunk, ip);
+ 	} else {
+ 		EMIT1(0xC3);		/* ret */
+ 		if (IS_ENABLED(CONFIG_SLS))
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 4297a8d69dbf7..6f7f8e41404dc 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -49,6 +49,7 @@ enum {
+ enum board_ids {
+ 	/* board IDs by feature in alphabetical order */
+ 	board_ahci,
++	board_ahci_43bit_dma,
+ 	board_ahci_ign_iferr,
+ 	board_ahci_low_power,
+ 	board_ahci_no_debounce_delay,
+@@ -129,6 +130,13 @@ static const struct ata_port_info ahci_port_info[] = {
+ 		.udma_mask	= ATA_UDMA6,
+ 		.port_ops	= &ahci_ops,
+ 	},
++	[board_ahci_43bit_dma] = {
++		AHCI_HFLAGS	(AHCI_HFLAG_43BIT_ONLY),
++		.flags		= AHCI_FLAG_COMMON,
++		.pio_mask	= ATA_PIO4,
++		.udma_mask	= ATA_UDMA6,
++		.port_ops	= &ahci_ops,
++	},
+ 	[board_ahci_ign_iferr] = {
+ 		AHCI_HFLAGS	(AHCI_HFLAG_IGN_IRQ_IF_ERR),
+ 		.flags		= AHCI_FLAG_COMMON,
+@@ -594,11 +602,11 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 	{ PCI_VDEVICE(PROMISE, 0x3f20), board_ahci },	/* PDC42819 */
+ 	{ PCI_VDEVICE(PROMISE, 0x3781), board_ahci },   /* FastTrak TX8660 ahci-mode */
+ 
+-	/* Asmedia */
++	/* ASMedia */
+ 	{ PCI_VDEVICE(ASMEDIA, 0x0601), board_ahci },	/* ASM1060 */
+ 	{ PCI_VDEVICE(ASMEDIA, 0x0602), board_ahci },	/* ASM1060 */
+-	{ PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci },	/* ASM1061 */
+-	{ PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci },	/* ASM1062 */
++	{ PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci_43bit_dma },	/* ASM1061 */
++	{ PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci_43bit_dma },	/* ASM1061/1062 */
+ 	{ PCI_VDEVICE(ASMEDIA, 0x0621), board_ahci },   /* ASM1061R */
+ 	{ PCI_VDEVICE(ASMEDIA, 0x0622), board_ahci },   /* ASM1062R */
+ 
+@@ -654,6 +662,11 @@ MODULE_PARM_DESC(mobile_lpm_policy, "Default LPM policy for mobile chipsets");
+ static void ahci_pci_save_initial_config(struct pci_dev *pdev,
+ 					 struct ahci_host_priv *hpriv)
+ {
++	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && pdev->device == 0x1166) {
++		dev_info(&pdev->dev, "ASM1166 has only six ports\n");
++		hpriv->saved_port_map = 0x3f;
++	}
++
+ 	if (pdev->vendor == PCI_VENDOR_ID_JMICRON && pdev->device == 0x2361) {
+ 		dev_info(&pdev->dev, "JMB361 has only one port\n");
+ 		hpriv->force_port_map = 1;
+@@ -946,11 +959,20 @@ static int ahci_pci_device_resume(struct device *dev)
+ 
+ #endif /* CONFIG_PM */
+ 
+-static int ahci_configure_dma_masks(struct pci_dev *pdev, int using_dac)
++static int ahci_configure_dma_masks(struct pci_dev *pdev,
++				    struct ahci_host_priv *hpriv)
+ {
+-	const int dma_bits = using_dac ? 64 : 32;
++	int dma_bits;
+ 	int rc;
+ 
++	if (hpriv->cap & HOST_CAP_64) {
++		dma_bits = 64;
++		if (hpriv->flags & AHCI_HFLAG_43BIT_ONLY)
++			dma_bits = 43;
++	} else {
++		dma_bits = 32;
++	}
++
+ 	/*
+ 	 * If the device fixup already set the dma_mask to some non-standard
+ 	 * value, don't extend it here. This happens on STA2X11, for example.
+@@ -1928,7 +1950,7 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	ahci_gtf_filter_workaround(host);
+ 
+ 	/* initialize adapter */
+-	rc = ahci_configure_dma_masks(pdev, hpriv->cap & HOST_CAP_64);
++	rc = ahci_configure_dma_masks(pdev, hpriv);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h
+index 7cc6feb17e972..b8db2b0d74146 100644
+--- a/drivers/ata/ahci.h
++++ b/drivers/ata/ahci.h
+@@ -244,6 +244,7 @@ enum {
+ 	AHCI_HFLAG_IGN_NOTSUPP_POWER_ON	= BIT(27), /* ignore -EOPNOTSUPP
+ 						      from phy_power_on() */
+ 	AHCI_HFLAG_NO_SXS		= BIT(28), /* SXS not supported */
++	AHCI_HFLAG_43BIT_ONLY		= BIT(29), /* 43bit DMA addr limit */
+ 
+ 	/* ap->flags bits */
+ 
+diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
+index 3e881fdb06e0a..224450c90e459 100644
+--- a/drivers/block/ataflop.c
++++ b/drivers/block/ataflop.c
+@@ -456,10 +456,20 @@ static DEFINE_TIMER(fd_timer, check_change);
+ 	
+ static void fd_end_request_cur(blk_status_t err)
+ {
++	DPRINT(("fd_end_request_cur(), bytes %d of %d\n",
++		blk_rq_cur_bytes(fd_request),
++		blk_rq_bytes(fd_request)));
++
+ 	if (!blk_update_request(fd_request, err,
+ 				blk_rq_cur_bytes(fd_request))) {
++		DPRINT(("calling __blk_mq_end_request()\n"));
+ 		__blk_mq_end_request(fd_request, err);
+ 		fd_request = NULL;
++	} else {
++		/* requeue rest of request */
++		DPRINT(("calling blk_mq_requeue_request()\n"));
++		blk_mq_requeue_request(fd_request, true);
++		fd_request = NULL;
+ 	}
+ }
+ 
+@@ -653,9 +663,6 @@ static inline void copy_buffer(void *from, void *to)
+ 		*p2++ = *p1++;
+ }
+ 
+-  
+-  
+-
+ /* General Interrupt Handling */
+ 
+ static void (*FloppyIRQHandler)( int status ) = NULL;
+@@ -700,12 +707,21 @@ static void fd_error( void )
+ 	if (fd_request->error_count >= MAX_ERRORS) {
+ 		printk(KERN_ERR "fd%d: too many errors.\n", SelectedDrive );
+ 		fd_end_request_cur(BLK_STS_IOERR);
++		finish_fdc();
++		return;
+ 	}
+ 	else if (fd_request->error_count == RECALIBRATE_ERRORS) {
+ 		printk(KERN_WARNING "fd%d: recalibrating\n", SelectedDrive );
+ 		if (SelectedDrive != -1)
+ 			SUD.track = -1;
+ 	}
++	/* need to re-run request to recalibrate */
++	atari_disable_irq( IRQ_MFP_FDC );
++
++	setup_req_params( SelectedDrive );
++	do_fd_action( SelectedDrive );
++
++	atari_enable_irq( IRQ_MFP_FDC );
+ }
+ 
+ 
+@@ -740,6 +756,7 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
+ 	if (type) {
+ 		if (--type >= NUM_DISK_MINORS ||
+ 		    minor2disktype[type].drive_types > DriveType) {
++			finish_fdc();
+ 			ret = -EINVAL;
+ 			goto out;
+ 		}
+@@ -748,6 +765,7 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
+ 	}
+ 
+ 	if (!UDT || desc->track >= UDT->blocks/UDT->spt/2 || desc->head >= 2) {
++		finish_fdc();
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+@@ -788,6 +806,7 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
+ 
+ 	wait_for_completion(&format_wait);
+ 
++	finish_fdc();
+ 	ret = FormatError ? -EIO : 0;
+ out:
+ 	blk_mq_unquiesce_queue(q);
+@@ -822,6 +841,7 @@ static void do_fd_action( int drive )
+ 		    else {
+ 			/* all sectors finished */
+ 			fd_end_request_cur(BLK_STS_OK);
++			finish_fdc();
+ 			return;
+ 		    }
+ 		}
+@@ -1226,6 +1246,7 @@ static void fd_rwsec_done1(int status)
+ 	else {
+ 		/* all sectors finished */
+ 		fd_end_request_cur(BLK_STS_OK);
++		finish_fdc();
+ 	}
+ 	return;
+   
+@@ -1347,7 +1368,7 @@ static void fd_times_out(struct timer_list *unused)
+ 
+ static void finish_fdc( void )
+ {
+-	if (!NeedSeek) {
++	if (!NeedSeek || !stdma_is_locked_by(floppy_irq)) {
+ 		finish_fdc_done( 0 );
+ 	}
+ 	else {
+@@ -1382,7 +1403,8 @@ static void finish_fdc_done( int dummy )
+ 	start_motor_off_timer();
+ 
+ 	local_irq_save(flags);
+-	stdma_release();
++	if (stdma_is_locked_by(floppy_irq))
++		stdma_release();
+ 	local_irq_restore(flags);
+ 
+ 	DPRINT(("finish_fdc() finished\n"));
+@@ -1472,15 +1494,6 @@ static void setup_req_params( int drive )
+ 			ReqTrack, ReqSector, (unsigned long)ReqData ));
+ }
+ 
+-static void ataflop_commit_rqs(struct blk_mq_hw_ctx *hctx)
+-{
+-	spin_lock_irq(&ataflop_lock);
+-	atari_disable_irq(IRQ_MFP_FDC);
+-	finish_fdc();
+-	atari_enable_irq(IRQ_MFP_FDC);
+-	spin_unlock_irq(&ataflop_lock);
+-}
+-
+ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 				     const struct blk_mq_queue_data *bd)
+ {
+@@ -1488,6 +1501,10 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	int drive = floppy - unit;
+ 	int type = floppy->type;
+ 
++	DPRINT(("Queue request: drive %d type %d sectors %d of %d last %d\n",
++		drive, type, blk_rq_cur_sectors(bd->rq),
++		blk_rq_sectors(bd->rq), bd->last));
++
+ 	spin_lock_irq(&ataflop_lock);
+ 	if (fd_request) {
+ 		spin_unlock_irq(&ataflop_lock);
+@@ -1508,6 +1525,7 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		/* drive not connected */
+ 		printk(KERN_ERR "Unknown Device: fd%d\n", drive );
+ 		fd_end_request_cur(BLK_STS_IOERR);
++		stdma_release();
+ 		goto out;
+ 	}
+ 		
+@@ -1524,11 +1542,13 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		if (--type >= NUM_DISK_MINORS) {
+ 			printk(KERN_WARNING "fd%d: invalid disk format", drive );
+ 			fd_end_request_cur(BLK_STS_IOERR);
++			stdma_release();
+ 			goto out;
+ 		}
+ 		if (minor2disktype[type].drive_types > DriveType)  {
+ 			printk(KERN_WARNING "fd%d: unsupported disk format", drive );
+ 			fd_end_request_cur(BLK_STS_IOERR);
++			stdma_release();
+ 			goto out;
+ 		}
+ 		type = minor2disktype[type].index;
+@@ -1547,8 +1567,6 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	setup_req_params( drive );
+ 	do_fd_action( drive );
+ 
+-	if (bd->last)
+-		finish_fdc();
+ 	atari_enable_irq( IRQ_MFP_FDC );
+ 
+ out:
+@@ -1631,6 +1649,7 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode,
+ 		/* what if type > 0 here? Overwrite specified entry ? */
+ 		if (type) {
+ 		        /* refuse to re-set a predefined type for now */
++			finish_fdc();
+ 			return -EINVAL;
+ 		}
+ 
+@@ -1698,8 +1717,10 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode,
+ 
+ 		/* sanity check */
+ 		if (setprm.track != dtp->blocks/dtp->spt/2 ||
+-		    setprm.head != 2)
++		    setprm.head != 2) {
++			finish_fdc();
+ 			return -EINVAL;
++		}
+ 
+ 		UDT = dtp;
+ 		set_capacity(floppy->disk, UDT->blocks);
+@@ -1959,7 +1980,6 @@ static const struct block_device_operations floppy_fops = {
+ 
+ static const struct blk_mq_ops ataflop_mq_ops = {
+ 	.queue_rq = ataflop_queue_rq,
+-	.commit_rqs = ataflop_commit_rqs,
+ };
+ 
+ static struct kobject *floppy_find(dev_t dev, int *part, void *data)
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 9b54eec9b17eb..7eae3f3732336 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -952,14 +952,15 @@ static int virtblk_freeze(struct virtio_device *vdev)
+ {
+ 	struct virtio_blk *vblk = vdev->priv;
+ 
++	/* Ensure no requests in virtqueues before deleting vqs. */
++	blk_mq_freeze_queue(vblk->disk->queue);
++
+ 	/* Ensure we don't receive any more interrupts */
+ 	vdev->config->reset(vdev);
+ 
+ 	/* Make sure no work handler is accessing the device. */
+ 	flush_work(&vblk->config_work);
+ 
+-	blk_mq_quiesce_queue(vblk->disk->queue);
+-
+ 	vdev->config->del_vqs(vdev);
+ 	kfree(vblk->vqs);
+ 
+@@ -977,7 +978,7 @@ static int virtblk_restore(struct virtio_device *vdev)
+ 
+ 	virtio_device_ready(vdev);
+ 
+-	blk_mq_unquiesce_queue(vblk->disk->queue);
++	blk_mq_unfreeze_queue(vblk->disk->queue);
+ 	return 0;
+ }
+ #endif
+diff --git a/drivers/dma/fsl-qdma.c b/drivers/dma/fsl-qdma.c
+index 69385f32e2756..f383f219ed008 100644
+--- a/drivers/dma/fsl-qdma.c
++++ b/drivers/dma/fsl-qdma.c
+@@ -805,7 +805,7 @@ fsl_qdma_irq_init(struct platform_device *pdev,
+ 	int i;
+ 	int cpu;
+ 	int ret;
+-	char irq_name[20];
++	char irq_name[32];
+ 
+ 	fsl_qdma->error_irq =
+ 		platform_get_irq_byname(pdev, "qdma-error");
+diff --git a/drivers/dma/sh/shdma.h b/drivers/dma/sh/shdma.h
+index 9c121a4b33ad8..f97d80343aea4 100644
+--- a/drivers/dma/sh/shdma.h
++++ b/drivers/dma/sh/shdma.h
+@@ -25,7 +25,7 @@ struct sh_dmae_chan {
+ 	const struct sh_dmae_slave_config *config; /* Slave DMA configuration */
+ 	int xmit_shift;			/* log_2(bytes_per_xfer) */
+ 	void __iomem *base;
+-	char dev_id[16];		/* unique name per DMAC of channel */
++	char dev_id[32];		/* unique name per DMAC of channel */
+ 	int pm_error;
+ 	dma_addr_t slave_addr;
+ };
+diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
+index a1adc8d91fd8d..69292d4a0c441 100644
+--- a/drivers/dma/ti/edma.c
++++ b/drivers/dma/ti/edma.c
+@@ -2462,6 +2462,11 @@ static int edma_probe(struct platform_device *pdev)
+ 	if (irq > 0) {
+ 		irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_ccint",
+ 					  dev_name(dev));
++		if (!irq_name) {
++			ret = -ENOMEM;
++			goto err_disable_pm;
++		}
++
+ 		ret = devm_request_irq(dev, irq, dma_irq_handler, 0, irq_name,
+ 				       ecc);
+ 		if (ret) {
+@@ -2478,6 +2483,11 @@ static int edma_probe(struct platform_device *pdev)
+ 	if (irq > 0) {
+ 		irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_ccerrint",
+ 					  dev_name(dev));
++		if (!irq_name) {
++			ret = -ENOMEM;
++			goto err_disable_pm;
++		}
++
+ 		ret = devm_request_irq(dev, irq, dma_ccerr_handler, 0, irq_name,
+ 				       ecc);
+ 		if (ret) {
+diff --git a/drivers/firewire/core-card.c b/drivers/firewire/core-card.c
+index f3b3953cac834..be195ba834632 100644
+--- a/drivers/firewire/core-card.c
++++ b/drivers/firewire/core-card.c
+@@ -429,7 +429,23 @@ static void bm_work(struct work_struct *work)
+ 	 */
+ 	card->bm_generation = generation;
+ 
+-	if (root_device == NULL) {
++	if (card->gap_count == 0) {
++		/*
++		 * If self IDs have inconsistent gap counts, do a
++		 * bus reset ASAP. The config rom read might never
++		 * complete, so don't wait for it. However, still
++		 * send a PHY configuration packet prior to the
++		 * bus reset. The PHY configuration packet might
++		 * fail, but 1394-2008 8.4.5.2 explicitly permits
++		 * it in this case, so it should be safe to try.
++		 */
++		new_root_id = local_id;
++		/*
++		 * We must always send a bus reset if the gap count
++		 * is inconsistent, so bypass the 5-reset limit.
++		 */
++		card->bm_retries = 0;
++	} else if (root_device == NULL) {
+ 		/*
+ 		 * Either link_on is false, or we failed to read the
+ 		 * config rom.  In either case, pick another root.
+diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c
+index 3359ae2adf24b..9054c2852580d 100644
+--- a/drivers/firmware/efi/arm-runtime.c
++++ b/drivers/firmware/efi/arm-runtime.c
+@@ -107,7 +107,7 @@ static int __init arm_enable_runtime_services(void)
+ 		efi_memory_desc_t *md;
+ 
+ 		for_each_efi_memory_desc(md) {
+-			int md_size = md->num_pages << EFI_PAGE_SHIFT;
++			u64 md_size = md->num_pages << EFI_PAGE_SHIFT;
+ 			struct resource *res;
+ 
+ 			if (!(md->attribute & EFI_MEMORY_SP))
+diff --git a/drivers/firmware/efi/efi-init.c b/drivers/firmware/efi/efi-init.c
+index f55a92ff12c0f..86da3c7a5036a 100644
+--- a/drivers/firmware/efi/efi-init.c
++++ b/drivers/firmware/efi/efi-init.c
+@@ -141,15 +141,6 @@ static __init int is_usable_memory(efi_memory_desc_t *md)
+ 	case EFI_BOOT_SERVICES_DATA:
+ 	case EFI_CONVENTIONAL_MEMORY:
+ 	case EFI_PERSISTENT_MEMORY:
+-		/*
+-		 * Special purpose memory is 'soft reserved', which means it
+-		 * is set aside initially, but can be hotplugged back in or
+-		 * be assigned to the dax driver after boot.
+-		 */
+-		if (efi_soft_reserve_enabled() &&
+-		    (md->attribute & EFI_MEMORY_SP))
+-			return false;
+-
+ 		/*
+ 		 * According to the spec, these regions are no longer reserved
+ 		 * after calling ExitBootServices(). However, we can only use
+@@ -194,6 +185,16 @@ static __init void reserve_regions(void)
+ 		size = npages << PAGE_SHIFT;
+ 
+ 		if (is_memory(md)) {
++			/*
++			 * Special purpose memory is 'soft reserved', which
++			 * means it is set aside initially. Don't add a memblock
++			 * for it now so that it can be hotplugged back in or
++			 * be assigned to the dax driver after boot.
++			 */
++			if (efi_soft_reserve_enabled() &&
++			    (md->attribute & EFI_MEMORY_SP))
++				continue;
++
+ 			early_init_dt_add_memory_arch(paddr, size);
+ 
+ 			if (!is_usable_memory(md))
+diff --git a/drivers/firmware/efi/riscv-runtime.c b/drivers/firmware/efi/riscv-runtime.c
+index d28e715d2bcc8..6711e64eb0b16 100644
+--- a/drivers/firmware/efi/riscv-runtime.c
++++ b/drivers/firmware/efi/riscv-runtime.c
+@@ -85,7 +85,7 @@ static int __init riscv_enable_runtime_services(void)
+ 		efi_memory_desc_t *md;
+ 
+ 		for_each_efi_memory_desc(md) {
+-			int md_size = md->num_pages << EFI_PAGE_SHIFT;
++			u64 md_size = md->num_pages << EFI_PAGE_SHIFT;
+ 			struct resource *res;
+ 
+ 			if (!(md->attribute & EFI_MEMORY_SP))
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 54d6b4128721e..3578e3b3536e3 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1456,6 +1456,7 @@ static int dm_sw_fini(void *handle)
+ 
+ 	if (adev->dm.dmub_srv) {
+ 		dmub_srv_destroy(adev->dm.dmub_srv);
++		kfree(adev->dm.dmub_srv);
+ 		adev->dm.dmub_srv = NULL;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
+index 738e60139db90..6ce446cc88780 100644
+--- a/drivers/gpu/drm/drm_syncobj.c
++++ b/drivers/gpu/drm/drm_syncobj.c
+@@ -387,6 +387,15 @@ int drm_syncobj_find_fence(struct drm_file *file_private,
+ 	if (!syncobj)
+ 		return -ENOENT;
+ 
++	/* Waiting for userspace with locks help is illegal cause that can
++	 * trivial deadlock with page faults for example. Make lockdep complain
++	 * about it early on.
++	 */
++	if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
++		might_sleep();
++		lockdep_assert_none_held_once();
++	}
++
+ 	*fence = drm_syncobj_fence_get(syncobj);
+ 
+ 	if (*fence) {
+@@ -951,6 +960,10 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
+ 	uint64_t *points;
+ 	uint32_t signaled_count, i;
+ 
++	if (flags & (DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
++		     DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE))
++		lockdep_assert_none_held_once();
++
+ 	points = kmalloc_array(count, sizeof(*points), GFP_KERNEL);
+ 	if (points == NULL)
+ 		return -ENOMEM;
+@@ -1017,7 +1030,8 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
+ 	 * fallthough and try a 0 timeout wait!
+ 	 */
+ 
+-	if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
++	if (flags & (DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
++		     DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE)) {
+ 		for (i = 0; i < count; ++i)
+ 			drm_syncobj_fence_add_wait(syncobjs[i], &entries[i]);
+ 	}
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
+index 4b571cc6bc70f..6597def18627e 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
+@@ -154,11 +154,17 @@ shadow_fw_init(struct nvkm_bios *bios, const char *name)
+ 	return (void *)fw;
+ }
+ 
++static void
++shadow_fw_release(void *fw)
++{
++	release_firmware(fw);
++}
++
+ static const struct nvbios_source
+ shadow_fw = {
+ 	.name = "firmware",
+ 	.init = shadow_fw_init,
+-	.fini = (void(*)(void *))release_firmware,
++	.fini = shadow_fw_release,
+ 	.read = shadow_fw_read,
+ 	.rw = false,
+ };
+diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
+index d67d972d18aa2..cbe2f874b5e2f 100644
+--- a/drivers/hwmon/coretemp.c
++++ b/drivers/hwmon/coretemp.c
+@@ -40,7 +40,7 @@ MODULE_PARM_DESC(tjmax, "TjMax value in degrees Celsius");
+ 
+ #define PKG_SYSFS_ATTR_NO	1	/* Sysfs attribute for package temp */
+ #define BASE_SYSFS_ATTR_NO	2	/* Sysfs Base attr no for coretemp */
+-#define NUM_REAL_CORES		128	/* Number of Real cores per cpu */
++#define NUM_REAL_CORES		512	/* Number of Real cores per cpu */
+ #define CORETEMP_NAME_LENGTH	28	/* String Length of attrs */
+ #define MAX_CORE_ATTRS		4	/* Maximum no of basic attrs */
+ #define TOTAL_ATTRS		(MAX_CORE_ATTRS + 1)
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 2a973a1390a4a..a0d7777acb6d4 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -1711,7 +1711,7 @@ int bnxt_re_modify_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr,
+ 	switch (srq_attr_mask) {
+ 	case IB_SRQ_MAX_WR:
+ 		/* SRQ resize is not supported */
+-		break;
++		return -EINVAL;
+ 	case IB_SRQ_LIMIT:
+ 		/* Change the SRQ threshold */
+ 		if (srq_attr->srq_limit > srq->qplib_srq.max_wqe)
+@@ -1726,13 +1726,12 @@ int bnxt_re_modify_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr,
+ 		/* On success, update the shadow */
+ 		srq->srq_limit = srq_attr->srq_limit;
+ 		/* No need to Build and send response back to udata */
+-		break;
++		return 0;
+ 	default:
+ 		ibdev_err(&rdev->ibdev,
+ 			  "Unsupported srq_attr_mask 0x%x", srq_attr_mask);
+ 		return -EINVAL;
+ 	}
+-	return 0;
+ }
+ 
+ int bnxt_re_query_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr)
+diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c
+index 60eb3a64518f3..969004258692b 100644
+--- a/drivers/infiniband/hw/hfi1/pio.c
++++ b/drivers/infiniband/hw/hfi1/pio.c
+@@ -2131,7 +2131,7 @@ int init_credit_return(struct hfi1_devdata *dd)
+ 				   "Unable to allocate credit return DMA range for NUMA %d\n",
+ 				   i);
+ 			ret = -ENOMEM;
+-			goto done;
++			goto free_cr_base;
+ 		}
+ 	}
+ 	set_dev_node(&dd->pcidev->dev, dd->node);
+@@ -2139,6 +2139,10 @@ int init_credit_return(struct hfi1_devdata *dd)
+ 	ret = 0;
+ done:
+ 	return ret;
++
++free_cr_base:
++	free_credit_return(dd);
++	goto done;
+ }
+ 
+ void free_credit_return(struct hfi1_devdata *dd)
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index 2dc97de434a5e..68a8557e9a7c4 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -3200,7 +3200,7 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ {
+ 	int rval = 0;
+ 
+-	if ((unlikely(tx->num_desc + 1 == tx->desc_limit))) {
++	if ((unlikely(tx->num_desc == tx->desc_limit))) {
+ 		rval = _extend_sdma_tx_descs(dd, tx);
+ 		if (rval) {
+ 			__sdma_txclean(dd, tx);
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index 3543b9af10b7a..d382ac21159c2 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -1865,8 +1865,17 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
+ 		/* RQ - read access only (0) */
+ 		rc = qedr_init_user_queue(udata, dev, &qp->urq, ureq.rq_addr,
+ 					  ureq.rq_len, true, 0, alloc_and_init);
+-		if (rc)
++		if (rc) {
++			ib_umem_release(qp->usq.umem);
++			qp->usq.umem = NULL;
++			if (rdma_protocol_roce(&dev->ibdev, 1)) {
++				qedr_free_pbl(dev, &qp->usq.pbl_info,
++					      qp->usq.pbl_tbl);
++			} else {
++				kfree(qp->usq.pbl_tbl);
++			}
+ 			return rc;
++		}
+ 	}
+ 
+ 	memset(&in_params, 0, sizeof(in_params));
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 983f59c87b79f..41abf9cf11c67 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -79,12 +79,16 @@ module_param(srpt_srq_size, int, 0444);
+ MODULE_PARM_DESC(srpt_srq_size,
+ 		 "Shared receive queue (SRQ) size.");
+ 
++static int srpt_set_u64_x(const char *buffer, const struct kernel_param *kp)
++{
++	return kstrtou64(buffer, 16, (u64 *)kp->arg);
++}
+ static int srpt_get_u64_x(char *buffer, const struct kernel_param *kp)
+ {
+ 	return sprintf(buffer, "0x%016llx\n", *(u64 *)kp->arg);
+ }
+-module_param_call(srpt_service_guid, NULL, srpt_get_u64_x, &srpt_service_guid,
+-		  0444);
++module_param_call(srpt_service_guid, srpt_set_u64_x, srpt_get_u64_x,
++		  &srpt_service_guid, 0444);
+ MODULE_PARM_DESC(srpt_service_guid,
+ 		 "Using this value for ioc_guid, id_ext, and cm_listen_id instead of using the node_guid of the first HCA.");
+ 
+@@ -210,10 +214,12 @@ static const char *get_ch_state_name(enum rdma_ch_state s)
+ /**
+  * srpt_qp_event - QP event callback function
+  * @event: Description of the event that occurred.
+- * @ch: SRPT RDMA channel.
++ * @ptr: SRPT RDMA channel.
+  */
+-static void srpt_qp_event(struct ib_event *event, struct srpt_rdma_ch *ch)
++static void srpt_qp_event(struct ib_event *event, void *ptr)
+ {
++	struct srpt_rdma_ch *ch = ptr;
++
+ 	pr_debug("QP event %d on ch=%p sess_name=%s-%d state=%s\n",
+ 		 event->event, ch, ch->sess_name, ch->qp->qp_num,
+ 		 get_ch_state_name(ch->state));
+@@ -1803,8 +1809,7 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
+ 	ch->cq_size = ch->rq_size + sq_size;
+ 
+ 	qp_init->qp_context = (void *)ch;
+-	qp_init->event_handler
+-		= (void(*)(struct ib_event *, void*))srpt_qp_event;
++	qp_init->event_handler = srpt_qp_event;
+ 	qp_init->send_cq = ch->cq;
+ 	qp_init->recv_cq = ch->cq;
+ 	qp_init->sq_sig_type = IB_SIGNAL_REQ_WR;
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index cd21c92a6b2cd..6804970d8f51a 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -625,6 +625,14 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 		},
+ 		.driver_data = (void *)(SERIO_QUIRK_NOAUX)
+ 	},
++	{
++		/* Fujitsu Lifebook U728 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U728"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOAUX)
++	},
+ 	{
+ 		/* Gigabyte M912 */
+ 		.matches = {
+diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
+index fc25b900cef71..7888e3c08df40 100644
+--- a/drivers/irqchip/irq-mips-gic.c
++++ b/drivers/irqchip/irq-mips-gic.c
+@@ -398,6 +398,8 @@ static void gic_all_vpes_irq_cpu_online(void)
+ 		unsigned int intr = local_intrs[i];
+ 		struct gic_all_vpes_chip_data *cd;
+ 
++		if (!gic_local_irq_is_routable(intr))
++			continue;
+ 		cd = &gic_all_vpes_chip_data[intr];
+ 		write_gic_vl_map(mips_gic_vx_map_reg(intr), cd->map);
+ 		if (cd->mask)
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 5d772f322a245..5edcdcee91c23 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -2064,6 +2064,12 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
+ 	io->ctx.bio_out = clone;
+ 	io->ctx.iter_out = clone->bi_iter;
+ 
++	if (crypt_integrity_aead(cc)) {
++		bio_copy_data(clone, io->base_bio);
++		io->ctx.bio_in = clone;
++		io->ctx.iter_in = clone->bi_iter;
++	}
++
+ 	sector += bio_sectors(clone);
+ 
+ 	crypt_inc_pending(io);
+diff --git a/drivers/media/pci/ttpci/av7110_av.c b/drivers/media/pci/ttpci/av7110_av.c
+index ea9f7d0058a21..e201d5a56bc65 100644
+--- a/drivers/media/pci/ttpci/av7110_av.c
++++ b/drivers/media/pci/ttpci/av7110_av.c
+@@ -822,10 +822,10 @@ static int write_ts_to_decoder(struct av7110 *av7110, int type, const u8 *buf, s
+ 		av7110_ipack_flush(ipack);
+ 
+ 	if (buf[3] & ADAPT_FIELD) {
++		if (buf[4] > len - 1 - 4)
++			return 0;
+ 		len -= buf[4] + 1;
+ 		buf += buf[4] + 1;
+-		if (!len)
+-			return 0;
+ 	}
+ 
+ 	av7110_ipack_instant_repack(buf + 4, len - 4, ipack);
+diff --git a/drivers/mtd/nand/spi/macronix.c b/drivers/mtd/nand/spi/macronix.c
+index cd7a9cacc3fbf..8bd3f6bf9b103 100644
+--- a/drivers/mtd/nand/spi/macronix.c
++++ b/drivers/mtd/nand/spi/macronix.c
+@@ -119,6 +119,26 @@ static const struct spinand_info macronix_spinand_table[] = {
+ 					      &update_cache_variants),
+ 		     SPINAND_HAS_QE_BIT,
+ 		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
++	SPINAND_INFO("MX35LF2GE4AD",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x26),
++		     NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 1, 1, 1),
++		     NAND_ECCREQ(8, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     0,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
++	SPINAND_INFO("MX35LF4GE4AD",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x37),
++		     NAND_MEMORG(1, 4096, 128, 64, 2048, 40, 1, 1, 1),
++		     NAND_ECCREQ(8, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     0,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
+ 	SPINAND_INFO("MX31LF1GE4BC",
+ 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x1e),
+ 		     NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
+diff --git a/drivers/net/ethernet/microchip/lan743x_ethtool.c b/drivers/net/ethernet/microchip/lan743x_ethtool.c
+index dcde496da7fb4..c5de8f46cdd35 100644
+--- a/drivers/net/ethernet/microchip/lan743x_ethtool.c
++++ b/drivers/net/ethernet/microchip/lan743x_ethtool.c
+@@ -780,7 +780,9 @@ static void lan743x_ethtool_get_wol(struct net_device *netdev,
+ 
+ 	wol->supported = 0;
+ 	wol->wolopts = 0;
+-	phy_ethtool_get_wol(netdev->phydev, wol);
++
++	if (netdev->phydev)
++		phy_ethtool_get_wol(netdev->phydev, wol);
+ 
+ 	wol->supported |= WAKE_BCAST | WAKE_UCAST | WAKE_MCAST |
+ 		WAKE_MAGIC | WAKE_PHY | WAKE_ARP;
+@@ -809,9 +811,8 @@ static int lan743x_ethtool_set_wol(struct net_device *netdev,
+ 
+ 	device_set_wakeup_enable(&adapter->pdev->dev, (bool)wol->wolopts);
+ 
+-	phy_ethtool_set_wol(netdev->phydev, wol);
+-
+-	return 0;
++	return netdev->phydev ? phy_ethtool_set_wol(netdev->phydev, wol)
++			: -ENETDOWN;
+ }
+ #endif /* CONFIG_PM */
+ 
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index ed247cba22916..9534f58368ccb 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -1410,20 +1410,20 @@ static int __init gtp_init(void)
+ 	if (err < 0)
+ 		goto error_out;
+ 
+-	err = genl_register_family(&gtp_genl_family);
++	err = register_pernet_subsys(&gtp_net_ops);
+ 	if (err < 0)
+ 		goto unreg_rtnl_link;
+ 
+-	err = register_pernet_subsys(&gtp_net_ops);
++	err = genl_register_family(&gtp_genl_family);
+ 	if (err < 0)
+-		goto unreg_genl_family;
++		goto unreg_pernet_subsys;
+ 
+ 	pr_info("GTP module loaded (pdp ctx size %zd bytes)\n",
+ 		sizeof(struct pdp_ctx));
+ 	return 0;
+ 
+-unreg_genl_family:
+-	genl_unregister_family(&gtp_genl_family);
++unreg_pernet_subsys:
++	unregister_pernet_subsys(&gtp_net_ops);
+ unreg_rtnl_link:
+ 	rtnl_link_unregister(&gtp_link_ops);
+ error_out:
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index d2c6fdb702732..08008b0c0637c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -5155,8 +5155,7 @@ void iwl_mvm_sync_rx_queues_internal(struct iwl_mvm *mvm,
+ 
+ 	if (notif->sync) {
+ 		notif->cookie = mvm->queue_sync_cookie;
+-		atomic_set(&mvm->queue_sync_counter,
+-			   mvm->trans->num_rx_queues);
++		mvm->queue_sync_state = (1 << mvm->trans->num_rx_queues) - 1;
+ 	}
+ 
+ 	ret = iwl_mvm_notify_rx_queue(mvm, qmask, (u8 *)notif,
+@@ -5169,16 +5168,19 @@ void iwl_mvm_sync_rx_queues_internal(struct iwl_mvm *mvm,
+ 	if (notif->sync) {
+ 		lockdep_assert_held(&mvm->mutex);
+ 		ret = wait_event_timeout(mvm->rx_sync_waitq,
+-					 atomic_read(&mvm->queue_sync_counter) == 0 ||
++					 READ_ONCE(mvm->queue_sync_state) == 0 ||
+ 					 iwl_mvm_is_radio_killed(mvm),
+ 					 HZ);
+-		WARN_ON_ONCE(!ret && !iwl_mvm_is_radio_killed(mvm));
++		WARN_ONCE(!ret && !iwl_mvm_is_radio_killed(mvm),
++			  "queue sync: failed to sync, state is 0x%lx\n",
++			  mvm->queue_sync_state);
+ 	}
+ 
+ out:
+-	atomic_set(&mvm->queue_sync_counter, 0);
+-	if (notif->sync)
++	if (notif->sync) {
++		mvm->queue_sync_state = 0;
+ 		mvm->queue_sync_cookie++;
++	}
+ }
+ 
+ static void iwl_mvm_sync_rx_queues(struct ieee80211_hw *hw)
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+index 64f5a4cb3d3ac..8b779c3a92d43 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+@@ -842,7 +842,7 @@ struct iwl_mvm {
+ 	unsigned long status;
+ 
+ 	u32 queue_sync_cookie;
+-	atomic_t queue_sync_counter;
++	unsigned long queue_sync_state;
+ 	/*
+ 	 * for beacon filtering -
+ 	 * currently only one interface can be supported
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 5b173f21e87bf..3548eb57f1f30 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -725,7 +725,7 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
+ 
+ 	init_waitqueue_head(&mvm->rx_sync_waitq);
+ 
+-	atomic_set(&mvm->queue_sync_counter, 0);
++	mvm->queue_sync_state = 0;
+ 
+ 	SET_IEEE80211_DEV(mvm->hw, mvm->trans->dev);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index 86b3fb321dfdd..e2a39e8b98d07 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -853,9 +853,13 @@ void iwl_mvm_rx_queue_notif(struct iwl_mvm *mvm, struct napi_struct *napi,
+ 		WARN_ONCE(1, "Invalid identifier %d", internal_notif->type);
+ 	}
+ 
+-	if (internal_notif->sync &&
+-	    !atomic_dec_return(&mvm->queue_sync_counter))
+-		wake_up(&mvm->rx_sync_waitq);
++	if (internal_notif->sync) {
++		WARN_ONCE(!test_and_clear_bit(queue, &mvm->queue_sync_state),
++			  "queue sync: queue %d responded a second time!\n",
++			  queue);
++		if (READ_ONCE(mvm->queue_sync_state) == 0)
++			wake_up(&mvm->rx_sync_waitq);
++	}
+ }
+ 
+ static void iwl_mvm_oldsn_workaround(struct iwl_mvm *mvm,
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 906cab35afe7a..8e05239073ef2 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -220,11 +220,6 @@ static LIST_HEAD(nvme_fc_lport_list);
+ static DEFINE_IDA(nvme_fc_local_port_cnt);
+ static DEFINE_IDA(nvme_fc_ctrl_cnt);
+ 
+-static struct workqueue_struct *nvme_fc_wq;
+-
+-static bool nvme_fc_waiting_to_unload;
+-static DECLARE_COMPLETION(nvme_fc_unload_proceed);
+-
+ /*
+  * These items are short-term. They will eventually be moved into
+  * a generic FC class. See comments in module init.
+@@ -254,8 +249,6 @@ nvme_fc_free_lport(struct kref *ref)
+ 	/* remove from transport list */
+ 	spin_lock_irqsave(&nvme_fc_lock, flags);
+ 	list_del(&lport->port_list);
+-	if (nvme_fc_waiting_to_unload && list_empty(&nvme_fc_lport_list))
+-		complete(&nvme_fc_unload_proceed);
+ 	spin_unlock_irqrestore(&nvme_fc_lock, flags);
+ 
+ 	ida_simple_remove(&nvme_fc_local_port_cnt, lport->localport.port_num);
+@@ -3823,10 +3816,6 @@ static int __init nvme_fc_init_module(void)
+ {
+ 	int ret;
+ 
+-	nvme_fc_wq = alloc_workqueue("nvme_fc_wq", WQ_MEM_RECLAIM, 0);
+-	if (!nvme_fc_wq)
+-		return -ENOMEM;
+-
+ 	/*
+ 	 * NOTE:
+ 	 * It is expected that in the future the kernel will combine
+@@ -3844,7 +3833,7 @@ static int __init nvme_fc_init_module(void)
+ 	ret = class_register(&fc_class);
+ 	if (ret) {
+ 		pr_err("couldn't register class fc\n");
+-		goto out_destroy_wq;
++		return ret;
+ 	}
+ 
+ 	/*
+@@ -3868,8 +3857,6 @@ static int __init nvme_fc_init_module(void)
+ 	device_destroy(&fc_class, MKDEV(0, 0));
+ out_destroy_class:
+ 	class_unregister(&fc_class);
+-out_destroy_wq:
+-	destroy_workqueue(nvme_fc_wq);
+ 
+ 	return ret;
+ }
+@@ -3889,45 +3876,23 @@ nvme_fc_delete_controllers(struct nvme_fc_rport *rport)
+ 	spin_unlock(&rport->lock);
+ }
+ 
+-static void
+-nvme_fc_cleanup_for_unload(void)
++static void __exit nvme_fc_exit_module(void)
+ {
+ 	struct nvme_fc_lport *lport;
+ 	struct nvme_fc_rport *rport;
+-
+-	list_for_each_entry(lport, &nvme_fc_lport_list, port_list) {
+-		list_for_each_entry(rport, &lport->endp_list, endp_list) {
+-			nvme_fc_delete_controllers(rport);
+-		}
+-	}
+-}
+-
+-static void __exit nvme_fc_exit_module(void)
+-{
+ 	unsigned long flags;
+-	bool need_cleanup = false;
+ 
+ 	spin_lock_irqsave(&nvme_fc_lock, flags);
+-	nvme_fc_waiting_to_unload = true;
+-	if (!list_empty(&nvme_fc_lport_list)) {
+-		need_cleanup = true;
+-		nvme_fc_cleanup_for_unload();
+-	}
++	list_for_each_entry(lport, &nvme_fc_lport_list, port_list)
++		list_for_each_entry(rport, &lport->endp_list, endp_list)
++			nvme_fc_delete_controllers(rport);
+ 	spin_unlock_irqrestore(&nvme_fc_lock, flags);
+-	if (need_cleanup) {
+-		pr_info("%s: waiting for ctlr deletes\n", __func__);
+-		wait_for_completion(&nvme_fc_unload_proceed);
+-		pr_info("%s: ctrl deletes complete\n", __func__);
+-	}
++	flush_workqueue(nvme_delete_wq);
+ 
+ 	nvmf_unregister_transport(&nvme_fc_transport);
+ 
+-	ida_destroy(&nvme_fc_local_port_cnt);
+-	ida_destroy(&nvme_fc_ctrl_cnt);
+-
+ 	device_destroy(&fc_class, MKDEV(0, 0));
+ 	class_unregister(&fc_class);
+-	destroy_workqueue(nvme_fc_wq);
+ }
+ 
+ module_init(nvme_fc_init_module);
+diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
+index 46fc44ce86712..846fb41da6430 100644
+--- a/drivers/nvme/target/fc.c
++++ b/drivers/nvme/target/fc.c
+@@ -357,7 +357,7 @@ __nvmet_fc_finish_ls_req(struct nvmet_fc_ls_req_op *lsop)
+ 
+ 	if (!lsop->req_queued) {
+ 		spin_unlock_irqrestore(&tgtport->lock, flags);
+-		return;
++		goto out_puttgtport;
+ 	}
+ 
+ 	list_del(&lsop->lsreq_list);
+@@ -370,6 +370,7 @@ __nvmet_fc_finish_ls_req(struct nvmet_fc_ls_req_op *lsop)
+ 				  (lsreq->rqstlen + lsreq->rsplen),
+ 				  DMA_BIDIRECTIONAL);
+ 
++out_puttgtport:
+ 	nvmet_fc_tgtport_put(tgtport);
+ }
+ 
+@@ -1101,6 +1102,9 @@ nvmet_fc_alloc_target_assoc(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
+ 	int idx;
+ 	bool needrandom = true;
+ 
++	if (!tgtport->pe)
++		return NULL;
++
+ 	assoc = kzalloc(sizeof(*assoc), GFP_KERNEL);
+ 	if (!assoc)
+ 		return NULL;
+@@ -2528,8 +2532,9 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
+ 
+ 	fod->req.cmd = &fod->cmdiubuf.sqe;
+ 	fod->req.cqe = &fod->rspiubuf.cqe;
+-	if (tgtport->pe)
+-		fod->req.port = tgtport->pe->port;
++	if (!tgtport->pe)
++		goto transport_error;
++	fod->req.port = tgtport->pe->port;
+ 
+ 	/* clear any response payload */
+ 	memset(&fod->rspiubuf, 0, sizeof(fod->rspiubuf));
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index 80a208fb34f52..f2c5136bf2b82 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -358,7 +358,7 @@ fcloop_h2t_ls_req(struct nvme_fc_local_port *localport,
+ 	if (!rport->targetport) {
+ 		tls_req->status = -ECONNREFUSED;
+ 		spin_lock(&rport->lock);
+-		list_add_tail(&rport->ls_list, &tls_req->ls_list);
++		list_add_tail(&tls_req->ls_list, &rport->ls_list);
+ 		spin_unlock(&rport->lock);
+ 		schedule_work(&rport->ls_work);
+ 		return ret;
+@@ -391,7 +391,7 @@ fcloop_h2t_xmt_ls_rsp(struct nvmet_fc_target_port *targetport,
+ 	if (remoteport) {
+ 		rport = remoteport->private;
+ 		spin_lock(&rport->lock);
+-		list_add_tail(&rport->ls_list, &tls_req->ls_list);
++		list_add_tail(&tls_req->ls_list, &rport->ls_list);
+ 		spin_unlock(&rport->lock);
+ 		schedule_work(&rport->ls_work);
+ 	}
+@@ -446,7 +446,7 @@ fcloop_t2h_ls_req(struct nvmet_fc_target_port *targetport, void *hosthandle,
+ 	if (!tport->remoteport) {
+ 		tls_req->status = -ECONNREFUSED;
+ 		spin_lock(&tport->lock);
+-		list_add_tail(&tport->ls_list, &tls_req->ls_list);
++		list_add_tail(&tls_req->ls_list, &tport->ls_list);
+ 		spin_unlock(&tport->lock);
+ 		schedule_work(&tport->ls_work);
+ 		return ret;
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 116ae6fd35e2d..d70a2fa4ba45f 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1852,6 +1852,7 @@ static void __exit nvmet_tcp_exit(void)
+ 	flush_scheduled_work();
+ 
+ 	destroy_workqueue(nvmet_tcp_wq);
++	ida_destroy(&nvmet_tcp_queue_ida);
+ }
+ 
+ module_init(nvmet_tcp_init);
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index 3da69b26e6743..27377f2f9e84b 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -1409,7 +1409,7 @@ static irq_hw_number_t pci_msi_domain_calc_hwirq(struct msi_desc *desc)
+ 
+ 	return (irq_hw_number_t)desc->msi_attrib.entry_nr |
+ 		pci_dev_id(dev) << 11 |
+-		(pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 27;
++		((irq_hw_number_t)(pci_domain_nr(dev->bus) & 0xFFFFFFFF)) << 27;
+ }
+ 
+ static inline bool pci_msi_desc_is_multi_msi(struct msi_desc *desc)
+diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
+index a90c32d072da3..9c8a6722f115e 100644
+--- a/drivers/platform/x86/intel-vbtn.c
++++ b/drivers/platform/x86/intel-vbtn.c
+@@ -230,6 +230,12 @@ static const struct dmi_system_id dmi_switches_allow_list[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7352"),
+ 		},
+ 	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion 13 x360 PC"),
++		},
++	},
+ 	{} /* Array terminator */
+ };
+ 
+diff --git a/drivers/regulator/pwm-regulator.c b/drivers/regulator/pwm-regulator.c
+index 7629476d94aeb..f4d9d9455dea6 100644
+--- a/drivers/regulator/pwm-regulator.c
++++ b/drivers/regulator/pwm-regulator.c
+@@ -158,6 +158,9 @@ static int pwm_regulator_get_voltage(struct regulator_dev *rdev)
+ 	pwm_get_state(drvdata->pwm, &pstate);
+ 
+ 	voltage = pwm_get_relative_duty_cycle(&pstate, duty_unit);
++	if (voltage < min(max_uV_duty, min_uV_duty) ||
++	    voltage > max(max_uV_duty, min_uV_duty))
++		return -ENOTRECOVERABLE;
+ 
+ 	/*
+ 	 * The dutycycle for min_uV might be greater than the one for max_uV.
+diff --git a/drivers/s390/cio/device_ops.c b/drivers/s390/cio/device_ops.c
+index c533d1dadc6bb..a5dba3829769c 100644
+--- a/drivers/s390/cio/device_ops.c
++++ b/drivers/s390/cio/device_ops.c
+@@ -202,7 +202,8 @@ int ccw_device_start_timeout_key(struct ccw_device *cdev, struct ccw1 *cpa,
+ 		return -EINVAL;
+ 	if (cdev->private->state == DEV_STATE_NOT_OPER)
+ 		return -ENODEV;
+-	if (cdev->private->state == DEV_STATE_VERIFY) {
++	if (cdev->private->state == DEV_STATE_VERIFY ||
++	    cdev->private->flags.doverify) {
+ 		/* Remember to fake irb when finished. */
+ 		if (!cdev->private->flags.fake_irb) {
+ 			cdev->private->flags.fake_irb = FAKE_CMD_IRB;
+@@ -214,8 +215,7 @@ int ccw_device_start_timeout_key(struct ccw_device *cdev, struct ccw1 *cpa,
+ 	}
+ 	if (cdev->private->state != DEV_STATE_ONLINE ||
+ 	    ((sch->schib.scsw.cmd.stctl & SCSW_STCTL_PRIM_STATUS) &&
+-	     !(sch->schib.scsw.cmd.stctl & SCSW_STCTL_SEC_STATUS)) ||
+-	    cdev->private->flags.doverify)
++	     !(sch->schib.scsw.cmd.stctl & SCSW_STCTL_SEC_STATUS)))
+ 		return -EBUSY;
+ 	ret = cio_set_options (sch, flags);
+ 	if (ret)
+diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
+index 6524e1fe54d2e..f59c9002468cc 100644
+--- a/drivers/scsi/Kconfig
++++ b/drivers/scsi/Kconfig
+@@ -1289,7 +1289,7 @@ source "drivers/scsi/arm/Kconfig"
+ 
+ config JAZZ_ESP
+ 	bool "MIPS JAZZ FAS216 SCSI support"
+-	depends on MACH_JAZZ && SCSI
++	depends on MACH_JAZZ && SCSI=y
+ 	select SCSI_SPI_ATTRS
+ 	help
+ 	  This is the driver for the onboard SCSI host adapter of MIPS Magnum
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 983eeb0e3d07e..b4b87e5d8b291 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -1944,7 +1944,7 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+  *
+  * Returns the number of SGEs added to the SGL.
+  **/
+-static int
++static uint32_t
+ lpfc_bg_setup_sgl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ 		struct sli4_sge *sgl, int datasegcnt,
+ 		struct lpfc_io_buf *lpfc_cmd)
+@@ -1952,8 +1952,8 @@ lpfc_bg_setup_sgl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ 	struct scatterlist *sgde = NULL; /* s/g data entry */
+ 	struct sli4_sge_diseed *diseed = NULL;
+ 	dma_addr_t physaddr;
+-	int i = 0, num_sge = 0, status;
+-	uint32_t reftag;
++	int i = 0, status;
++	uint32_t reftag, num_sge = 0;
+ 	uint8_t txop, rxop;
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+ 	uint32_t rc;
+@@ -2124,7 +2124,7 @@ lpfc_bg_setup_sgl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+  *
+  * Returns the number of SGEs added to the SGL.
+  **/
+-static int
++static uint32_t
+ lpfc_bg_setup_sgl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ 		struct sli4_sge *sgl, int datacnt, int protcnt,
+ 		struct lpfc_io_buf *lpfc_cmd)
+@@ -2148,8 +2148,8 @@ lpfc_bg_setup_sgl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ 	uint32_t rc;
+ #endif
+ 	uint32_t checking = 1;
+-	uint32_t dma_offset = 0;
+-	int num_sge = 0, j = 2;
++	uint32_t dma_offset = 0, num_sge = 0;
++	int j = 2;
+ 	struct sli4_hybrid_sgl *sgl_xtra = NULL;
+ 
+ 	sgpe = scsi_prot_sglist(sc);
+diff --git a/drivers/soc/renesas/r8a77980-sysc.c b/drivers/soc/renesas/r8a77980-sysc.c
+index 39ca84a67daad..621e411fc9991 100644
+--- a/drivers/soc/renesas/r8a77980-sysc.c
++++ b/drivers/soc/renesas/r8a77980-sysc.c
+@@ -25,7 +25,8 @@ static const struct rcar_sysc_area r8a77980_areas[] __initconst = {
+ 	  PD_CPU_NOCR },
+ 	{ "ca53-cpu3",	0x200, 3, R8A77980_PD_CA53_CPU3, R8A77980_PD_CA53_SCU,
+ 	  PD_CPU_NOCR },
+-	{ "cr7",	0x240, 0, R8A77980_PD_CR7,	R8A77980_PD_ALWAYS_ON },
++	{ "cr7",	0x240, 0, R8A77980_PD_CR7,	R8A77980_PD_ALWAYS_ON,
++	  PD_CPU_NOCR },
+ 	{ "a3ir",	0x180, 0, R8A77980_PD_A3IR,	R8A77980_PD_ALWAYS_ON },
+ 	{ "a2ir0",	0x400, 0, R8A77980_PD_A2IR0,	R8A77980_PD_A3IR },
+ 	{ "a2ir1",	0x400, 1, R8A77980_PD_A2IR1,	R8A77980_PD_A3IR },
+diff --git a/drivers/spi/spi-hisi-sfc-v3xx.c b/drivers/spi/spi-hisi-sfc-v3xx.c
+index 4650b483a33d3..e0c3ad73c576d 100644
+--- a/drivers/spi/spi-hisi-sfc-v3xx.c
++++ b/drivers/spi/spi-hisi-sfc-v3xx.c
+@@ -365,6 +365,11 @@ static const struct spi_controller_mem_ops hisi_sfc_v3xx_mem_ops = {
+ static irqreturn_t hisi_sfc_v3xx_isr(int irq, void *data)
+ {
+ 	struct hisi_sfc_v3xx_host *host = data;
++	u32 reg;
++
++	reg = readl(host->regbase + HISI_SFC_V3XX_INT_STAT);
++	if (!reg)
++		return IRQ_NONE;
+ 
+ 	hisi_sfc_v3xx_disable_int(host);
+ 
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index 35d30378256f6..12fd02f92e37b 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -137,14 +137,14 @@ struct sh_msiof_spi_priv {
+ 
+ /* SIFCTR */
+ #define SIFCTR_TFWM_MASK	GENMASK(31, 29)	/* Transmit FIFO Watermark */
+-#define SIFCTR_TFWM_64		(0 << 29)	/*  Transfer Request when 64 empty stages */
+-#define SIFCTR_TFWM_32		(1 << 29)	/*  Transfer Request when 32 empty stages */
+-#define SIFCTR_TFWM_24		(2 << 29)	/*  Transfer Request when 24 empty stages */
+-#define SIFCTR_TFWM_16		(3 << 29)	/*  Transfer Request when 16 empty stages */
+-#define SIFCTR_TFWM_12		(4 << 29)	/*  Transfer Request when 12 empty stages */
+-#define SIFCTR_TFWM_8		(5 << 29)	/*  Transfer Request when 8 empty stages */
+-#define SIFCTR_TFWM_4		(6 << 29)	/*  Transfer Request when 4 empty stages */
+-#define SIFCTR_TFWM_1		(7 << 29)	/*  Transfer Request when 1 empty stage */
++#define SIFCTR_TFWM_64		(0UL << 29)	/*  Transfer Request when 64 empty stages */
++#define SIFCTR_TFWM_32		(1UL << 29)	/*  Transfer Request when 32 empty stages */
++#define SIFCTR_TFWM_24		(2UL << 29)	/*  Transfer Request when 24 empty stages */
++#define SIFCTR_TFWM_16		(3UL << 29)	/*  Transfer Request when 16 empty stages */
++#define SIFCTR_TFWM_12		(4UL << 29)	/*  Transfer Request when 12 empty stages */
++#define SIFCTR_TFWM_8		(5UL << 29)	/*  Transfer Request when 8 empty stages */
++#define SIFCTR_TFWM_4		(6UL << 29)	/*  Transfer Request when 4 empty stages */
++#define SIFCTR_TFWM_1		(7UL << 29)	/*  Transfer Request when 1 empty stage */
+ #define SIFCTR_TFUA_MASK	GENMASK(26, 20) /* Transmit FIFO Usable Area */
+ #define SIFCTR_TFUA_SHIFT	20
+ #define SIFCTR_TFUA(i)		((i) << SIFCTR_TFUA_SHIFT)
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index 9aeedcff7d02e..daa4d06ce2336 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -150,7 +150,6 @@ int transport_lookup_tmr_lun(struct se_cmd *se_cmd)
+ 	struct se_session *se_sess = se_cmd->se_sess;
+ 	struct se_node_acl *nacl = se_sess->se_node_acl;
+ 	struct se_tmr_req *se_tmr = se_cmd->se_tmr_req;
+-	unsigned long flags;
+ 
+ 	rcu_read_lock();
+ 	deve = target_nacl_find_deve(nacl, se_cmd->orig_fe_lun);
+@@ -181,10 +180,6 @@ int transport_lookup_tmr_lun(struct se_cmd *se_cmd)
+ 	se_cmd->se_dev = rcu_dereference_raw(se_lun->lun_se_dev);
+ 	se_tmr->tmr_dev = rcu_dereference_raw(se_lun->lun_se_dev);
+ 
+-	spin_lock_irqsave(&se_tmr->tmr_dev->se_tmr_lock, flags);
+-	list_add_tail(&se_tmr->tmr_list, &se_tmr->tmr_dev->dev_tmr_list);
+-	spin_unlock_irqrestore(&se_tmr->tmr_dev->se_tmr_lock, flags);
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL(transport_lookup_tmr_lun);
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 2e97937f005ff..8d294b658592c 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -3436,6 +3436,10 @@ int transport_generic_handle_tmr(
+ 	unsigned long flags;
+ 	bool aborted = false;
+ 
++	spin_lock_irqsave(&cmd->se_dev->se_tmr_lock, flags);
++	list_add_tail(&cmd->se_tmr_req->tmr_list, &cmd->se_dev->dev_tmr_list);
++	spin_unlock_irqrestore(&cmd->se_dev->se_tmr_lock, flags);
++
+ 	spin_lock_irqsave(&cmd->t_state_lock, flags);
+ 	if (cmd->transport_state & CMD_T_ABORTED) {
+ 		aborted = true;
+diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
+index cf0fb650a9247..4886cad0fde61 100644
+--- a/drivers/tty/hvc/hvc_xen.c
++++ b/drivers/tty/hvc/hvc_xen.c
+@@ -43,6 +43,7 @@ struct xencons_info {
+ 	int irq;
+ 	int vtermno;
+ 	grant_ref_t gntref;
++	spinlock_t ring_lock;
+ };
+ 
+ static LIST_HEAD(xenconsoles);
+@@ -89,12 +90,15 @@ static int __write_console(struct xencons_info *xencons,
+ 	XENCONS_RING_IDX cons, prod;
+ 	struct xencons_interface *intf = xencons->intf;
+ 	int sent = 0;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&xencons->ring_lock, flags);
+ 	cons = intf->out_cons;
+ 	prod = intf->out_prod;
+ 	mb();			/* update queue values before going on */
+ 
+ 	if ((prod - cons) > sizeof(intf->out)) {
++		spin_unlock_irqrestore(&xencons->ring_lock, flags);
+ 		pr_err_once("xencons: Illegal ring page indices");
+ 		return -EINVAL;
+ 	}
+@@ -104,6 +108,7 @@ static int __write_console(struct xencons_info *xencons,
+ 
+ 	wmb();			/* write ring before updating pointer */
+ 	intf->out_prod = prod;
++	spin_unlock_irqrestore(&xencons->ring_lock, flags);
+ 
+ 	if (sent)
+ 		notify_daemon(xencons);
+@@ -146,16 +151,19 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
+ 	int recv = 0;
+ 	struct xencons_info *xencons = vtermno_to_xencons(vtermno);
+ 	unsigned int eoiflag = 0;
++	unsigned long flags;
+ 
+ 	if (xencons == NULL)
+ 		return -EINVAL;
+ 	intf = xencons->intf;
+ 
++	spin_lock_irqsave(&xencons->ring_lock, flags);
+ 	cons = intf->in_cons;
+ 	prod = intf->in_prod;
+ 	mb();			/* get pointers before reading ring */
+ 
+ 	if ((prod - cons) > sizeof(intf->in)) {
++		spin_unlock_irqrestore(&xencons->ring_lock, flags);
+ 		pr_err_once("xencons: Illegal ring page indices");
+ 		return -EINVAL;
+ 	}
+@@ -179,10 +187,13 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
+ 		xencons->out_cons = intf->out_cons;
+ 		xencons->out_cons_same = 0;
+ 	}
++	if (!recv && xencons->out_cons_same++ > 1) {
++		eoiflag = XEN_EOI_FLAG_SPURIOUS;
++	}
++	spin_unlock_irqrestore(&xencons->ring_lock, flags);
++
+ 	if (recv) {
+ 		notify_daemon(xencons);
+-	} else if (xencons->out_cons_same++ > 1) {
+-		eoiflag = XEN_EOI_FLAG_SPURIOUS;
+ 	}
+ 
+ 	xen_irq_lateeoi(xencons->irq, eoiflag);
+@@ -239,6 +250,7 @@ static int xen_hvm_console_init(void)
+ 		info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL);
+ 		if (!info)
+ 			return -ENOMEM;
++		spin_lock_init(&info->ring_lock);
+ 	} else if (info->intf != NULL) {
+ 		/* already configured */
+ 		return 0;
+@@ -275,6 +287,7 @@ static int xen_hvm_console_init(void)
+ 
+ static int xencons_info_pv_init(struct xencons_info *info, int vtermno)
+ {
++	spin_lock_init(&info->ring_lock);
+ 	info->evtchn = xen_start_info->console.domU.evtchn;
+ 	/* GFN == MFN for PV guest */
+ 	info->intf = gfn_to_virt(xen_start_info->console.domU.mfn);
+@@ -325,6 +338,7 @@ static int xen_initial_domain_console_init(void)
+ 		info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL);
+ 		if (!info)
+ 			return -ENOMEM;
++		spin_lock_init(&info->ring_lock);
+ 	}
+ 
+ 	info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0, false);
+@@ -485,6 +499,7 @@ static int xencons_probe(struct xenbus_device *dev,
+ 	info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL);
+ 	if (!info)
+ 		return -ENOMEM;
++	spin_lock_init(&info->ring_lock);
+ 	dev_set_drvdata(&dev->dev, info);
+ 	info->xbdev = dev;
+ 	info->vtermno = xenbus_devid_to_vtermno(devid);
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index 8a1f0a636848b..eeea892248b5d 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -837,7 +837,11 @@ void cdns3_gadget_giveback(struct cdns3_endpoint *priv_ep,
+ 			return;
+ 	}
+ 
+-	if (request->complete) {
++	/*
++	 * zlp request is appended by driver, needn't call usb_gadget_giveback_request() to notify
++	 * gadget composite driver.
++	 */
++	if (request->complete && request->buf != priv_dev->zlp_buf) {
+ 		spin_unlock(&priv_dev->lock);
+ 		usb_gadget_giveback_request(&priv_ep->endpoint,
+ 					    request);
+@@ -2538,11 +2542,11 @@ static int cdns3_gadget_ep_disable(struct usb_ep *ep)
+ 
+ 	while (!list_empty(&priv_ep->wa2_descmiss_req_list)) {
+ 		priv_req = cdns3_next_priv_request(&priv_ep->wa2_descmiss_req_list);
++		list_del_init(&priv_req->list);
+ 
+ 		kfree(priv_req->request.buf);
+ 		cdns3_gadget_ep_free_request(&priv_ep->endpoint,
+ 					     &priv_req->request);
+-		list_del_init(&priv_req->list);
+ 		--priv_ep->wa2_counter;
+ 	}
+ 
+diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
+index d42cd1d036bdf..8fac7a67db76f 100644
+--- a/drivers/usb/gadget/function/f_ncm.c
++++ b/drivers/usb/gadget/function/f_ncm.c
+@@ -1349,7 +1349,15 @@ static int ncm_unwrap_ntb(struct gether *port,
+ 	     "Parsed NTB with %d frames\n", dgram_counter);
+ 
+ 	to_process -= block_len;
+-	if (to_process != 0) {
++
++	/*
++	 * Windows NCM driver avoids USB ZLPs by adding a 1-byte
++	 * zero pad as needed.
++	 */
++	if (to_process == 1 &&
++	    (*(unsigned char *)(ntb_ptr + block_len) == 0x00)) {
++		to_process--;
++	} else if (to_process > 0) {
+ 		ntb_ptr = (unsigned char *)(ntb_ptr + block_len);
+ 		goto parse_ntb;
+ 	}
+diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
+index 5cc20275335d1..e1dff4a44fd25 100644
+--- a/drivers/usb/roles/class.c
++++ b/drivers/usb/roles/class.c
+@@ -19,7 +19,9 @@ static struct class *role_class;
+ struct usb_role_switch {
+ 	struct device dev;
+ 	struct mutex lock; /* device lock*/
++	struct module *module; /* the module this device depends on */
+ 	enum usb_role role;
++	bool registered;
+ 
+ 	/* From descriptor */
+ 	struct device *usb2_port;
+@@ -46,6 +48,9 @@ int usb_role_switch_set_role(struct usb_role_switch *sw, enum usb_role role)
+ 	if (IS_ERR_OR_NULL(sw))
+ 		return 0;
+ 
++	if (!sw->registered)
++		return -EOPNOTSUPP;
++
+ 	mutex_lock(&sw->lock);
+ 
+ 	ret = sw->set(sw, role);
+@@ -71,7 +76,7 @@ enum usb_role usb_role_switch_get_role(struct usb_role_switch *sw)
+ {
+ 	enum usb_role role;
+ 
+-	if (IS_ERR_OR_NULL(sw))
++	if (IS_ERR_OR_NULL(sw) || !sw->registered)
+ 		return USB_ROLE_NONE;
+ 
+ 	mutex_lock(&sw->lock);
+@@ -133,7 +138,7 @@ struct usb_role_switch *usb_role_switch_get(struct device *dev)
+ 						  usb_role_switch_match);
+ 
+ 	if (!IS_ERR_OR_NULL(sw))
+-		WARN_ON(!try_module_get(sw->dev.parent->driver->owner));
++		WARN_ON(!try_module_get(sw->module));
+ 
+ 	return sw;
+ }
+@@ -155,7 +160,7 @@ struct usb_role_switch *fwnode_usb_role_switch_get(struct fwnode_handle *fwnode)
+ 		sw = fwnode_connection_find_match(fwnode, "usb-role-switch",
+ 						  NULL, usb_role_switch_match);
+ 	if (!IS_ERR_OR_NULL(sw))
+-		WARN_ON(!try_module_get(sw->dev.parent->driver->owner));
++		WARN_ON(!try_module_get(sw->module));
+ 
+ 	return sw;
+ }
+@@ -170,7 +175,7 @@ EXPORT_SYMBOL_GPL(fwnode_usb_role_switch_get);
+ void usb_role_switch_put(struct usb_role_switch *sw)
+ {
+ 	if (!IS_ERR_OR_NULL(sw)) {
+-		module_put(sw->dev.parent->driver->owner);
++		module_put(sw->module);
+ 		put_device(&sw->dev);
+ 	}
+ }
+@@ -187,15 +192,18 @@ struct usb_role_switch *
+ usb_role_switch_find_by_fwnode(const struct fwnode_handle *fwnode)
+ {
+ 	struct device *dev;
++	struct usb_role_switch *sw = NULL;
+ 
+ 	if (!fwnode)
+ 		return NULL;
+ 
+ 	dev = class_find_device_by_fwnode(role_class, fwnode);
+-	if (dev)
+-		WARN_ON(!try_module_get(dev->parent->driver->owner));
++	if (dev) {
++		sw = to_role_switch(dev);
++		WARN_ON(!try_module_get(sw->module));
++	}
+ 
+-	return dev ? to_role_switch(dev) : NULL;
++	return sw;
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_find_by_fwnode);
+ 
+@@ -328,6 +336,7 @@ usb_role_switch_register(struct device *parent,
+ 	sw->set = desc->set;
+ 	sw->get = desc->get;
+ 
++	sw->module = parent->driver->owner;
+ 	sw->dev.parent = parent;
+ 	sw->dev.fwnode = desc->fwnode;
+ 	sw->dev.class = role_class;
+@@ -342,6 +351,8 @@ usb_role_switch_register(struct device *parent,
+ 		return ERR_PTR(ret);
+ 	}
+ 
++	sw->registered = true;
++
+ 	/* TODO: Symlinks for the host port and the device controller. */
+ 
+ 	return sw;
+@@ -356,8 +367,10 @@ EXPORT_SYMBOL_GPL(usb_role_switch_register);
+  */
+ void usb_role_switch_unregister(struct usb_role_switch *sw)
+ {
+-	if (!IS_ERR_OR_NULL(sw))
++	if (!IS_ERR_OR_NULL(sw)) {
++		sw->registered = false;
+ 		device_unregister(&sw->dev);
++	}
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_unregister);
+ 
+diff --git a/drivers/video/fbdev/savage/savagefb_driver.c b/drivers/video/fbdev/savage/savagefb_driver.c
+index 0ac750cc5ea13..94ebd8af50cf7 100644
+--- a/drivers/video/fbdev/savage/savagefb_driver.c
++++ b/drivers/video/fbdev/savage/savagefb_driver.c
+@@ -868,6 +868,9 @@ static int savagefb_check_var(struct fb_var_screeninfo   *var,
+ 
+ 	DBG("savagefb_check_var");
+ 
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	var->transp.offset = 0;
+ 	var->transp.length = 0;
+ 	switch (var->bits_per_pixel) {
+diff --git a/drivers/video/fbdev/sis/sis_main.c b/drivers/video/fbdev/sis/sis_main.c
+index 03c736f6f3d08..e540cb0c51726 100644
+--- a/drivers/video/fbdev/sis/sis_main.c
++++ b/drivers/video/fbdev/sis/sis_main.c
+@@ -1474,6 +1474,8 @@ sisfb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ 
+ 	vtotal = var->upper_margin + var->lower_margin + var->vsync_len;
+ 
++	if (!var->pixclock)
++		return -EINVAL;
+ 	pixclock = var->pixclock;
+ 
+ 	if((var->vmode & FB_VMODE_MASK) == FB_VMODE_NONINTERLACED) {
+diff --git a/fs/afs/volume.c b/fs/afs/volume.c
+index f84194b791d3e..fb19c69284ab2 100644
+--- a/fs/afs/volume.c
++++ b/fs/afs/volume.c
+@@ -302,7 +302,7 @@ static int afs_update_volume_status(struct afs_volume *volume, struct key *key)
+ {
+ 	struct afs_server_list *new, *old, *discard;
+ 	struct afs_vldb_entry *vldb;
+-	char idbuf[16];
++	char idbuf[24];
+ 	int ret, idsz;
+ 
+ 	_enter("");
+@@ -310,7 +310,7 @@ static int afs_update_volume_status(struct afs_volume *volume, struct key *key)
+ 	/* We look up an ID by passing it as a decimal string in the
+ 	 * operation's name parameter.
+ 	 */
+-	idsz = sprintf(idbuf, "%llu", volume->vid);
++	idsz = snprintf(idbuf, sizeof(idbuf), "%llu", volume->vid);
+ 
+ 	vldb = afs_vl_lookup_vldb(volume->cell, key, idbuf, idsz);
+ 	if (IS_ERR(vldb)) {
+diff --git a/fs/aio.c b/fs/aio.c
+index 5934ea84b4993..900ed5207540e 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -569,6 +569,13 @@ void kiocb_set_cancel_fn(struct kiocb *iocb, kiocb_cancel_fn *cancel)
+ 	struct kioctx *ctx = req->ki_ctx;
+ 	unsigned long flags;
+ 
++	/*
++	 * kiocb didn't come from aio or is neither a read nor a write, hence
++	 * ignore it.
++	 */
++	if (!(iocb->ki_flags & IOCB_AIO_RW))
++		return;
++
+ 	if (WARN_ON_ONCE(!list_empty(&req->ki_list)))
+ 		return;
+ 
+@@ -1454,7 +1461,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+ 	req->ki_complete = aio_complete_rw;
+ 	req->private = NULL;
+ 	req->ki_pos = iocb->aio_offset;
+-	req->ki_flags = iocb_flags(req->ki_filp);
++	req->ki_flags = iocb_flags(req->ki_filp) | IOCB_AIO_RW;
+ 	if (iocb->aio_flags & IOCB_FLAG_RESFD)
+ 		req->ki_flags |= IOCB_EVENTFD;
+ 	req->ki_hint = ki_hint_validate(file_write_hint(req->ki_filp));
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 67831868ef0de..3ddb09f2b1685 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -2879,7 +2879,7 @@ struct btrfs_dir_item *
+ btrfs_lookup_dir_index_item(struct btrfs_trans_handle *trans,
+ 			    struct btrfs_root *root,
+ 			    struct btrfs_path *path, u64 dir,
+-			    u64 objectid, const char *name, int name_len,
++			    u64 index, const char *name, int name_len,
+ 			    int mod);
+ struct btrfs_dir_item *
+ btrfs_search_dir_index_item(struct btrfs_root *root,
+diff --git a/fs/btrfs/dir-item.c b/fs/btrfs/dir-item.c
+index 863367c2c6205..98c6faa8ce15b 100644
+--- a/fs/btrfs/dir-item.c
++++ b/fs/btrfs/dir-item.c
+@@ -171,10 +171,40 @@ int btrfs_insert_dir_item(struct btrfs_trans_handle *trans, const char *name,
+ 	return 0;
+ }
+ 
++static struct btrfs_dir_item *btrfs_lookup_match_dir(
++			struct btrfs_trans_handle *trans,
++			struct btrfs_root *root, struct btrfs_path *path,
++			struct btrfs_key *key, const char *name,
++			int name_len, int mod)
++{
++	const int ins_len = (mod < 0 ? -1 : 0);
++	const int cow = (mod != 0);
++	int ret;
++
++	ret = btrfs_search_slot(trans, root, key, path, ins_len, cow);
++	if (ret < 0)
++		return ERR_PTR(ret);
++	if (ret > 0)
++		return ERR_PTR(-ENOENT);
++
++	return btrfs_match_dir_item_name(root->fs_info, path, name, name_len);
++}
++
+ /*
+- * lookup a directory item based on name.  'dir' is the objectid
+- * we're searching in, and 'mod' tells us if you plan on deleting the
+- * item (use mod < 0) or changing the options (use mod > 0)
++ * Lookup for a directory item by name.
++ *
++ * @trans:	The transaction handle to use. Can be NULL if @mod is 0.
++ * @root:	The root of the target tree.
++ * @path:	Path to use for the search.
++ * @dir:	The inode number (objectid) of the directory.
++ * @name:	The name associated to the directory entry we are looking for.
++ * @name_len:	The length of the name.
++ * @mod:	Used to indicate if the tree search is meant for a read only
++ *		lookup, for a modification lookup or for a deletion lookup, so
++ *		its value should be 0, 1 or -1, respectively.
++ *
++ * Returns: NULL if the dir item does not exists, an error pointer if an error
++ * happened, or a pointer to a dir item if a dir item exists for the given name.
+  */
+ struct btrfs_dir_item *btrfs_lookup_dir_item(struct btrfs_trans_handle *trans,
+ 					     struct btrfs_root *root,
+@@ -182,23 +212,18 @@ struct btrfs_dir_item *btrfs_lookup_dir_item(struct btrfs_trans_handle *trans,
+ 					     const char *name, int name_len,
+ 					     int mod)
+ {
+-	int ret;
+ 	struct btrfs_key key;
+-	int ins_len = mod < 0 ? -1 : 0;
+-	int cow = mod != 0;
++	struct btrfs_dir_item *di;
+ 
+ 	key.objectid = dir;
+ 	key.type = BTRFS_DIR_ITEM_KEY;
+-
+ 	key.offset = btrfs_name_hash(name, name_len);
+ 
+-	ret = btrfs_search_slot(trans, root, &key, path, ins_len, cow);
+-	if (ret < 0)
+-		return ERR_PTR(ret);
+-	if (ret > 0)
++	di = btrfs_lookup_match_dir(trans, root, path, &key, name, name_len, mod);
++	if (IS_ERR(di) && PTR_ERR(di) == -ENOENT)
+ 		return NULL;
+ 
+-	return btrfs_match_dir_item_name(root->fs_info, path, name, name_len);
++	return di;
+ }
+ 
+ int btrfs_check_dir_item_collision(struct btrfs_root *root, u64 dir,
+@@ -212,7 +237,6 @@ int btrfs_check_dir_item_collision(struct btrfs_root *root, u64 dir,
+ 	int slot;
+ 	struct btrfs_path *path;
+ 
+-
+ 	path = btrfs_alloc_path();
+ 	if (!path)
+ 		return -ENOMEM;
+@@ -221,20 +245,20 @@ int btrfs_check_dir_item_collision(struct btrfs_root *root, u64 dir,
+ 	key.type = BTRFS_DIR_ITEM_KEY;
+ 	key.offset = btrfs_name_hash(name, name_len);
+ 
+-	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
+-
+-	/* return back any errors */
+-	if (ret < 0)
+-		goto out;
++	di = btrfs_lookup_match_dir(NULL, root, path, &key, name, name_len, 0);
++	if (IS_ERR(di)) {
++		ret = PTR_ERR(di);
++		/* Nothing found, we're safe */
++		if (ret == -ENOENT) {
++			ret = 0;
++			goto out;
++		}
+ 
+-	/* nothing found, we're safe */
+-	if (ret > 0) {
+-		ret = 0;
+-		goto out;
++		if (ret < 0)
++			goto out;
+ 	}
+ 
+ 	/* we found an item, look for our name in the item */
+-	di = btrfs_match_dir_item_name(root->fs_info, path, name, name_len);
+ 	if (di) {
+ 		/* our exact name was found */
+ 		ret = -EEXIST;
+@@ -261,35 +285,42 @@ int btrfs_check_dir_item_collision(struct btrfs_root *root, u64 dir,
+ }
+ 
+ /*
+- * lookup a directory item based on index.  'dir' is the objectid
+- * we're searching in, and 'mod' tells us if you plan on deleting the
+- * item (use mod < 0) or changing the options (use mod > 0)
++ * Lookup for a directory index item by name and index number.
++ *
++ * @trans:	The transaction handle to use. Can be NULL if @mod is 0.
++ * @root:	The root of the target tree.
++ * @path:	Path to use for the search.
++ * @dir:	The inode number (objectid) of the directory.
++ * @index:	The index number.
++ * @name:	The name associated to the directory entry we are looking for.
++ * @name_len:	The length of the name.
++ * @mod:	Used to indicate if the tree search is meant for a read only
++ *		lookup, for a modification lookup or for a deletion lookup, so
++ *		its value should be 0, 1 or -1, respectively.
+  *
+- * The name is used to make sure the index really points to the name you were
+- * looking for.
++ * Returns: NULL if the dir index item does not exists, an error pointer if an
++ * error happened, or a pointer to a dir item if the dir index item exists and
++ * matches the criteria (name and index number).
+  */
+ struct btrfs_dir_item *
+ btrfs_lookup_dir_index_item(struct btrfs_trans_handle *trans,
+ 			    struct btrfs_root *root,
+ 			    struct btrfs_path *path, u64 dir,
+-			    u64 objectid, const char *name, int name_len,
++			    u64 index, const char *name, int name_len,
+ 			    int mod)
+ {
+-	int ret;
++	struct btrfs_dir_item *di;
+ 	struct btrfs_key key;
+-	int ins_len = mod < 0 ? -1 : 0;
+-	int cow = mod != 0;
+ 
+ 	key.objectid = dir;
+ 	key.type = BTRFS_DIR_INDEX_KEY;
+-	key.offset = objectid;
++	key.offset = index;
+ 
+-	ret = btrfs_search_slot(trans, root, &key, path, ins_len, cow);
+-	if (ret < 0)
+-		return ERR_PTR(ret);
+-	if (ret > 0)
+-		return ERR_PTR(-ENOENT);
+-	return btrfs_match_dir_item_name(root->fs_info, path, name, name_len);
++	di = btrfs_lookup_match_dir(trans, root, path, &key, name, name_len, mod);
++	if (di == ERR_PTR(-ENOENT))
++		return NULL;
++
++	return di;
+ }
+ 
+ struct btrfs_dir_item *
+@@ -346,21 +377,18 @@ struct btrfs_dir_item *btrfs_lookup_xattr(struct btrfs_trans_handle *trans,
+ 					  const char *name, u16 name_len,
+ 					  int mod)
+ {
+-	int ret;
+ 	struct btrfs_key key;
+-	int ins_len = mod < 0 ? -1 : 0;
+-	int cow = mod != 0;
++	struct btrfs_dir_item *di;
+ 
+ 	key.objectid = dir;
+ 	key.type = BTRFS_XATTR_ITEM_KEY;
+ 	key.offset = btrfs_name_hash(name, name_len);
+-	ret = btrfs_search_slot(trans, root, &key, path, ins_len, cow);
+-	if (ret < 0)
+-		return ERR_PTR(ret);
+-	if (ret > 0)
++
++	di = btrfs_lookup_match_dir(trans, root, path, &key, name, name_len, mod);
++	if (IS_ERR(di) && PTR_ERR(di) == -ENOENT)
+ 		return NULL;
+ 
+-	return btrfs_match_dir_item_name(root->fs_info, path, name, name_len);
++	return di;
+ }
+ 
+ /*
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 250b6064876de..591caac2bf814 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -8968,8 +8968,6 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ 		/* force full log commit if subvolume involved. */
+ 		btrfs_set_log_full_commit(trans);
+ 	} else {
+-		btrfs_pin_log_trans(root);
+-		root_log_pinned = true;
+ 		ret = btrfs_insert_inode_ref(trans, dest,
+ 					     new_dentry->d_name.name,
+ 					     new_dentry->d_name.len,
+@@ -8986,8 +8984,6 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ 		/* force full log commit if subvolume involved. */
+ 		btrfs_set_log_full_commit(trans);
+ 	} else {
+-		btrfs_pin_log_trans(dest);
+-		dest_log_pinned = true;
+ 		ret = btrfs_insert_inode_ref(trans, root,
+ 					     old_dentry->d_name.name,
+ 					     old_dentry->d_name.len,
+@@ -9018,6 +9014,29 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ 				BTRFS_I(new_inode), 1);
+ 	}
+ 
++	/*
++	 * Now pin the logs of the roots. We do it to ensure that no other task
++	 * can sync the logs while we are in progress with the rename, because
++	 * that could result in an inconsistency in case any of the inodes that
++	 * are part of this rename operation were logged before.
++	 *
++	 * We pin the logs even if at this precise moment none of the inodes was
++	 * logged before. This is because right after we checked for that, some
++	 * other task fsyncing some other inode not involved with this rename
++	 * operation could log that one of our inodes exists.
++	 *
++	 * We don't need to pin the logs before the above calls to
++	 * btrfs_insert_inode_ref(), since those don't ever need to change a log.
++	 */
++	if (old_ino != BTRFS_FIRST_FREE_OBJECTID) {
++		btrfs_pin_log_trans(root);
++		root_log_pinned = true;
++	}
++	if (new_ino != BTRFS_FIRST_FREE_OBJECTID) {
++		btrfs_pin_log_trans(dest);
++		dest_log_pinned = true;
++	}
++
+ 	/* src is a subvolume */
+ 	if (old_ino == BTRFS_FIRST_FREE_OBJECTID) {
+ 		ret = btrfs_unlink_subvol(trans, old_dir, old_dentry);
+@@ -9267,8 +9286,6 @@ static int btrfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		/* force full log commit if subvolume involved. */
+ 		btrfs_set_log_full_commit(trans);
+ 	} else {
+-		btrfs_pin_log_trans(root);
+-		log_pinned = true;
+ 		ret = btrfs_insert_inode_ref(trans, dest,
+ 					     new_dentry->d_name.name,
+ 					     new_dentry->d_name.len,
+@@ -9292,6 +9309,25 @@ static int btrfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	if (unlikely(old_ino == BTRFS_FIRST_FREE_OBJECTID)) {
+ 		ret = btrfs_unlink_subvol(trans, old_dir, old_dentry);
+ 	} else {
++		/*
++		 * Now pin the log. We do it to ensure that no other task can
++		 * sync the log while we are in progress with the rename, as
++		 * that could result in an inconsistency in case any of the
++		 * inodes that are part of this rename operation were logged
++		 * before.
++		 *
++		 * We pin the log even if at this precise moment none of the
++		 * inodes was logged before. This is because right after we
++		 * checked for that, some other task fsyncing some other inode
++		 * not involved with this rename operation could log that one of
++		 * our inodes exists.
++		 *
++		 * We don't need to pin the logs before the above call to
++		 * btrfs_insert_inode_ref(), since that does not need to change
++		 * a log.
++		 */
++		btrfs_pin_log_trans(root);
++		log_pinned = true;
+ 		ret = __btrfs_unlink_inode(trans, root, BTRFS_I(old_dir),
+ 					BTRFS_I(d_inode(old_dentry)),
+ 					old_dentry->d_name.name,
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index c0eda3816f685..5b952f69bc1f6 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -1189,7 +1189,8 @@ static void extent_err(const struct extent_buffer *eb, int slot,
+ }
+ 
+ static int check_extent_item(struct extent_buffer *leaf,
+-			     struct btrfs_key *key, int slot)
++			     struct btrfs_key *key, int slot,
++			     struct btrfs_key *prev_key)
+ {
+ 	struct btrfs_fs_info *fs_info = leaf->fs_info;
+ 	struct btrfs_extent_item *ei;
+@@ -1400,6 +1401,26 @@ static int check_extent_item(struct extent_buffer *leaf,
+ 			   total_refs, inline_refs);
+ 		return -EUCLEAN;
+ 	}
++
++	if ((prev_key->type == BTRFS_EXTENT_ITEM_KEY) ||
++	    (prev_key->type == BTRFS_METADATA_ITEM_KEY)) {
++		u64 prev_end = prev_key->objectid;
++
++		if (prev_key->type == BTRFS_METADATA_ITEM_KEY)
++			prev_end += fs_info->nodesize;
++		else
++			prev_end += prev_key->offset;
++
++		if (unlikely(prev_end > key->objectid)) {
++			extent_err(leaf, slot,
++	"previous extent [%llu %u %llu] overlaps current extent [%llu %u %llu]",
++				   prev_key->objectid, prev_key->type,
++				   prev_key->offset, key->objectid, key->type,
++				   key->offset);
++			return -EUCLEAN;
++		}
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1568,7 +1589,7 @@ static int check_leaf_item(struct extent_buffer *leaf,
+ 		break;
+ 	case BTRFS_EXTENT_ITEM_KEY:
+ 	case BTRFS_METADATA_ITEM_KEY:
+-		ret = check_extent_item(leaf, key, slot);
++		ret = check_extent_item(leaf, key, slot, prev_key);
+ 		break;
+ 	case BTRFS_TREE_BLOCK_REF_KEY:
+ 	case BTRFS_SHARED_DATA_REF_KEY:
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 10a0913ffb492..34e9eb5010cda 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -912,8 +912,7 @@ static noinline int inode_in_dir(struct btrfs_root *root,
+ 	di = btrfs_lookup_dir_index_item(NULL, root, path, dirid,
+ 					 index, name, name_len, 0);
+ 	if (IS_ERR(di)) {
+-		if (PTR_ERR(di) != -ENOENT)
+-			ret = PTR_ERR(di);
++		ret = PTR_ERR(di);
+ 		goto out;
+ 	} else if (di) {
+ 		btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
+@@ -1149,8 +1148,7 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans,
+ 	di = btrfs_lookup_dir_index_item(trans, root, path, btrfs_ino(dir),
+ 					 ref_index, name, namelen, 0);
+ 	if (IS_ERR(di)) {
+-		if (PTR_ERR(di) != -ENOENT)
+-			return PTR_ERR(di);
++		return PTR_ERR(di);
+ 	} else if (di) {
+ 		ret = drop_one_dir_item(trans, root, path, dir, di);
+ 		if (ret)
+@@ -1976,9 +1974,6 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans,
+ 		goto out;
+ 	}
+ 
+-	if (dst_di == ERR_PTR(-ENOENT))
+-		dst_di = NULL;
+-
+ 	if (IS_ERR(dst_di)) {
+ 		ret = PTR_ERR(dst_di);
+ 		goto out;
+@@ -2286,7 +2281,7 @@ static noinline int check_item_in_log(struct btrfs_trans_handle *trans,
+ 						     dir_key->offset,
+ 						     name, name_len, 0);
+ 		}
+-		if (!log_di || log_di == ERR_PTR(-ENOENT)) {
++		if (!log_di) {
+ 			btrfs_dir_item_key_to_cpu(eb, di, &location);
+ 			btrfs_release_path(path);
+ 			btrfs_release_path(log_path);
+@@ -3495,8 +3490,7 @@ int btrfs_del_dir_entries_in_log(struct btrfs_trans_handle *trans,
+ 	if (err == -ENOSPC) {
+ 		btrfs_set_log_full_commit(trans);
+ 		err = 0;
+-	} else if (err < 0 && err != -ENOENT) {
+-		/* ENOENT can be returned if the entry hasn't been fsynced yet */
++	} else if (err < 0) {
+ 		btrfs_abort_transaction(trans, err);
+ 	}
+ 
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 84850a55c8b7e..b2a7238a34221 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -82,6 +82,7 @@ smb2_add_credits(struct TCP_Server_Info *server,
+ 		*val = 65000; /* Don't get near 64K credits, avoid srv bugs */
+ 		pr_warn_once("server overflowed SMB3 credits\n");
+ 	}
++	WARN_ON_ONCE(server->in_flight == 0);
+ 	server->in_flight--;
+ 	if (server->in_flight == 0 && (optype & CIFS_OP_MASK) != CIFS_NEG_OP)
+ 		rc = change_conf(server);
+@@ -818,10 +819,12 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon,
+ 	if (o_rsp->OplockLevel == SMB2_OPLOCK_LEVEL_LEASE) {
+ 		kref_get(&tcon->crfid.refcount);
+ 		tcon->crfid.has_lease = true;
+-		smb2_parse_contexts(server, o_rsp,
++		rc = smb2_parse_contexts(server, rsp_iov,
+ 				&oparms.fid->epoch,
+ 				    oparms.fid->lease_key, &oplock,
+ 				    NULL, NULL);
++		if (rc)
++			goto oshr_exit;
+ 	} else
+ 		goto oshr_exit;
+ 
+@@ -4892,6 +4895,7 @@ receive_encrypted_standard(struct TCP_Server_Info *server,
+ 	struct smb2_sync_hdr *shdr;
+ 	unsigned int pdu_length = server->pdu_size;
+ 	unsigned int buf_size;
++	unsigned int next_cmd;
+ 	struct mid_q_entry *mid_entry;
+ 	int next_is_large;
+ 	char *next_buffer = NULL;
+@@ -4920,14 +4924,15 @@ receive_encrypted_standard(struct TCP_Server_Info *server,
+ 	next_is_large = server->large_buf;
+ one_more:
+ 	shdr = (struct smb2_sync_hdr *)buf;
+-	if (shdr->NextCommand) {
++	next_cmd = le32_to_cpu(shdr->NextCommand);
++	if (next_cmd) {
++		if (WARN_ON_ONCE(next_cmd > pdu_length))
++			return -1;
+ 		if (next_is_large)
+ 			next_buffer = (char *)cifs_buf_get();
+ 		else
+ 			next_buffer = (char *)cifs_small_buf_get();
+-		memcpy(next_buffer,
+-		       buf + le32_to_cpu(shdr->NextCommand),
+-		       pdu_length - le32_to_cpu(shdr->NextCommand));
++		memcpy(next_buffer, buf + next_cmd, pdu_length - next_cmd);
+ 	}
+ 
+ 	mid_entry = smb2_find_mid(server, buf);
+@@ -4951,8 +4956,8 @@ receive_encrypted_standard(struct TCP_Server_Info *server,
+ 	else
+ 		ret = cifs_handle_standard(server, mid_entry);
+ 
+-	if (ret == 0 && shdr->NextCommand) {
+-		pdu_length -= le32_to_cpu(shdr->NextCommand);
++	if (ret == 0 && next_cmd) {
++		pdu_length -= next_cmd;
+ 		server->large_buf = next_is_large;
+ 		if (next_is_large)
+ 			server->bigbuf = buf = next_buffer;
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 4aec01841f0f2..aa3211d8cce3b 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1991,17 +1991,18 @@ parse_posix_ctxt(struct create_context *cc, struct smb2_file_all_info *info,
+ 		 posix->nlink, posix->mode, posix->reparse_tag);
+ }
+ 
+-void
+-smb2_parse_contexts(struct TCP_Server_Info *server,
+-		    struct smb2_create_rsp *rsp,
+-		    unsigned int *epoch, char *lease_key, __u8 *oplock,
+-		    struct smb2_file_all_info *buf,
+-		    struct create_posix_rsp *posix)
++int smb2_parse_contexts(struct TCP_Server_Info *server,
++			struct kvec *rsp_iov,
++			unsigned int *epoch,
++			char *lease_key, __u8 *oplock,
++			struct smb2_file_all_info *buf,
++			struct create_posix_rsp *posix)
+ {
+-	char *data_offset;
++	struct smb2_create_rsp *rsp = rsp_iov->iov_base;
+ 	struct create_context *cc;
+-	unsigned int next;
+-	unsigned int remaining;
++	size_t rem, off, len;
++	size_t doff, dlen;
++	size_t noff, nlen;
+ 	char *name;
+ 	static const char smb3_create_tag_posix[] = {
+ 		0x93, 0xAD, 0x25, 0x50, 0x9C,
+@@ -2010,45 +2011,63 @@ smb2_parse_contexts(struct TCP_Server_Info *server,
+ 	};
+ 
+ 	*oplock = 0;
+-	data_offset = (char *)rsp + le32_to_cpu(rsp->CreateContextsOffset);
+-	remaining = le32_to_cpu(rsp->CreateContextsLength);
+-	cc = (struct create_context *)data_offset;
++
++	off = le32_to_cpu(rsp->CreateContextsOffset);
++	rem = le32_to_cpu(rsp->CreateContextsLength);
++	if (check_add_overflow(off, rem, &len) || len > rsp_iov->iov_len)
++		return -EINVAL;
++	cc = (struct create_context *)((u8 *)rsp + off);
+ 
+ 	/* Initialize inode number to 0 in case no valid data in qfid context */
+ 	if (buf)
+ 		buf->IndexNumber = 0;
+ 
+-	while (remaining >= sizeof(struct create_context)) {
+-		name = le16_to_cpu(cc->NameOffset) + (char *)cc;
+-		if (le16_to_cpu(cc->NameLength) == 4 &&
+-		    strncmp(name, SMB2_CREATE_REQUEST_LEASE, 4) == 0)
+-			*oplock = server->ops->parse_lease_buf(cc, epoch,
+-							   lease_key);
+-		else if (buf && (le16_to_cpu(cc->NameLength) == 4) &&
+-		    strncmp(name, SMB2_CREATE_QUERY_ON_DISK_ID, 4) == 0)
+-			parse_query_id_ctxt(cc, buf);
+-		else if ((le16_to_cpu(cc->NameLength) == 16)) {
+-			if (posix &&
+-			    memcmp(name, smb3_create_tag_posix, 16) == 0)
++	while (rem >= sizeof(*cc)) {
++		doff = le16_to_cpu(cc->DataOffset);
++		dlen = le32_to_cpu(cc->DataLength);
++		if (check_add_overflow(doff, dlen, &len) || len > rem)
++			return -EINVAL;
++
++		noff = le16_to_cpu(cc->NameOffset);
++		nlen = le16_to_cpu(cc->NameLength);
++		if (noff + nlen > doff)
++			return -EINVAL;
++
++		name = (char *)cc + noff;
++		switch (nlen) {
++		case 4:
++			if (!strncmp(name, SMB2_CREATE_REQUEST_LEASE, 4)) {
++				*oplock = server->ops->parse_lease_buf(cc, epoch,
++								       lease_key);
++			} else if (buf &&
++				   !strncmp(name, SMB2_CREATE_QUERY_ON_DISK_ID, 4)) {
++				parse_query_id_ctxt(cc, buf);
++			}
++			break;
++		case 16:
++			if (posix && !memcmp(name, smb3_create_tag_posix, 16))
+ 				parse_posix_ctxt(cc, buf, posix);
++			break;
++		default:
++			cifs_dbg(FYI, "%s: unhandled context (nlen=%zu dlen=%zu)\n",
++				 __func__, nlen, dlen);
++			if (IS_ENABLED(CONFIG_CIFS_DEBUG2))
++				cifs_dump_mem("context data: ", cc, dlen);
++			break;
+ 		}
+-		/* else {
+-			cifs_dbg(FYI, "Context not matched with len %d\n",
+-				le16_to_cpu(cc->NameLength));
+-			cifs_dump_mem("Cctxt name: ", name, 4);
+-		} */
+-
+-		next = le32_to_cpu(cc->Next);
+-		if (!next)
++
++		off = le32_to_cpu(cc->Next);
++		if (!off)
+ 			break;
+-		remaining -= next;
+-		cc = (struct create_context *)((char *)cc + next);
++		if (check_sub_overflow(rem, off, &rem))
++			return -EINVAL;
++		cc = (struct create_context *)((u8 *)cc + off);
+ 	}
+ 
+ 	if (rsp->OplockLevel != SMB2_OPLOCK_LEVEL_LEASE)
+ 		*oplock = rsp->OplockLevel;
+ 
+-	return;
++	return 0;
+ }
+ 
+ static int
+@@ -2915,8 +2934,8 @@ SMB2_open(const unsigned int xid, struct cifs_open_parms *oparms, __le16 *path,
+ 	}
+ 
+ 
+-	smb2_parse_contexts(server, rsp, &oparms->fid->epoch,
+-			    oparms->fid->lease_key, oplock, buf, posix);
++	rc = smb2_parse_contexts(server, &rsp_iov, &oparms->fid->epoch,
++				 oparms->fid->lease_key, oplock, buf, posix);
+ creat_exit:
+ 	SMB2_open_free(&rqst);
+ 	free_rsp_buf(resp_buftype, rsp);
+diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h
+index ed2b4fb012a41..3184a5efcdba5 100644
+--- a/fs/cifs/smb2proto.h
++++ b/fs/cifs/smb2proto.h
+@@ -270,11 +270,13 @@ extern int smb3_validate_negotiate(const unsigned int, struct cifs_tcon *);
+ 
+ extern enum securityEnum smb2_select_sectype(struct TCP_Server_Info *,
+ 					enum securityEnum);
+-extern void smb2_parse_contexts(struct TCP_Server_Info *server,
+-				struct smb2_create_rsp *rsp,
+-				unsigned int *epoch, char *lease_key,
+-				__u8 *oplock, struct smb2_file_all_info *buf,
+-				struct create_posix_rsp *posix);
++int smb2_parse_contexts(struct TCP_Server_Info *server,
++			struct kvec *rsp_iov,
++			unsigned int *epoch,
++			char *lease_key, __u8 *oplock,
++			struct smb2_file_all_info *buf,
++			struct create_posix_rsp *posix);
++
+ extern int smb3_encryption_required(const struct cifs_tcon *tcon);
+ extern int smb2_validate_iov(unsigned int offset, unsigned int buffer_length,
+ 			     struct kvec *iov, unsigned int min_buf_size);
+diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
+index f921580b56cbc..36693924db182 100644
+--- a/fs/erofs/decompressor.c
++++ b/fs/erofs/decompressor.c
+@@ -24,7 +24,8 @@ struct z_erofs_decompressor {
+ 	 */
+ 	int (*prepare_destpages)(struct z_erofs_decompress_req *rq,
+ 				 struct list_head *pagepool);
+-	int (*decompress)(struct z_erofs_decompress_req *rq, u8 *out);
++	int (*decompress)(struct z_erofs_decompress_req *rq, u8 *out,
++			  u8 *obase);
+ 	char *name;
+ };
+ 
+@@ -114,10 +115,13 @@ static void *generic_copy_inplace_data(struct z_erofs_decompress_req *rq,
+ 	return tmp;
+ }
+ 
+-static int z_erofs_lz4_decompress(struct z_erofs_decompress_req *rq, u8 *out)
++static int z_erofs_lz4_decompress(struct z_erofs_decompress_req *rq, u8 *out,
++				  u8 *obase)
+ {
++	const uint nrpages_out = PAGE_ALIGN(rq->pageofs_out +
++					    rq->outputsize) >> PAGE_SHIFT;
+ 	unsigned int inputmargin, inlen;
+-	u8 *src;
++	u8 *src, *src2;
+ 	bool copied, support_0padding;
+ 	int ret;
+ 
+@@ -125,6 +129,7 @@ static int z_erofs_lz4_decompress(struct z_erofs_decompress_req *rq, u8 *out)
+ 		return -EOPNOTSUPP;
+ 
+ 	src = kmap_atomic(*rq->in);
++	src2 = src;
+ 	inputmargin = 0;
+ 	support_0padding = false;
+ 
+@@ -148,16 +153,15 @@ static int z_erofs_lz4_decompress(struct z_erofs_decompress_req *rq, u8 *out)
+ 	if (rq->inplace_io) {
+ 		const uint oend = (rq->pageofs_out +
+ 				   rq->outputsize) & ~PAGE_MASK;
+-		const uint nr = PAGE_ALIGN(rq->pageofs_out +
+-					   rq->outputsize) >> PAGE_SHIFT;
+-
+ 		if (rq->partial_decoding || !support_0padding ||
+-		    rq->out[nr - 1] != rq->in[0] ||
++		    rq->out[nrpages_out - 1] != rq->in[0] ||
+ 		    rq->inputsize - oend <
+ 		      LZ4_DECOMPRESS_INPLACE_MARGIN(inlen)) {
+ 			src = generic_copy_inplace_data(rq, src, inputmargin);
+ 			inputmargin = 0;
+ 			copied = true;
++		} else {
++			src = obase + ((nrpages_out - 1) << PAGE_SHIFT);
+ 		}
+ 	}
+ 
+@@ -187,7 +191,7 @@ static int z_erofs_lz4_decompress(struct z_erofs_decompress_req *rq, u8 *out)
+ 	if (copied)
+ 		erofs_put_pcpubuf(src);
+ 	else
+-		kunmap_atomic(src);
++		kunmap_atomic(src2);
+ 	return ret;
+ }
+ 
+@@ -257,7 +261,7 @@ static int z_erofs_decompress_generic(struct z_erofs_decompress_req *rq,
+ 			return PTR_ERR(dst);
+ 
+ 		rq->inplace_io = false;
+-		ret = alg->decompress(rq, dst);
++		ret = alg->decompress(rq, dst, NULL);
+ 		if (!ret)
+ 			copy_from_pcpubuf(rq->out, dst, rq->pageofs_out,
+ 					  rq->outputsize);
+@@ -291,7 +295,7 @@ static int z_erofs_decompress_generic(struct z_erofs_decompress_req *rq,
+ 	dst_maptype = 2;
+ 
+ dstmap_out:
+-	ret = alg->decompress(rq, dst + rq->pageofs_out);
++	ret = alg->decompress(rq, dst + rq->pageofs_out, dst);
+ 
+ 	if (!dst_maptype)
+ 		kunmap_atomic(dst);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 193b13630ac1e..68aa8760cb465 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -2222,7 +2222,7 @@ static int ext4_fill_es_cache_info(struct inode *inode,
+ 
+ 
+ /*
+- * ext4_ext_determine_hole - determine hole around given block
++ * ext4_ext_find_hole - find hole around given block according to the given path
+  * @inode:	inode we lookup in
+  * @path:	path in extent tree to @lblk
+  * @lblk:	pointer to logical block around which we want to determine hole
+@@ -2234,9 +2234,9 @@ static int ext4_fill_es_cache_info(struct inode *inode,
+  * The function returns the length of a hole starting at @lblk. We update @lblk
+  * to the beginning of the hole if we managed to find it.
+  */
+-static ext4_lblk_t ext4_ext_determine_hole(struct inode *inode,
+-					   struct ext4_ext_path *path,
+-					   ext4_lblk_t *lblk)
++static ext4_lblk_t ext4_ext_find_hole(struct inode *inode,
++				      struct ext4_ext_path *path,
++				      ext4_lblk_t *lblk)
+ {
+ 	int depth = ext_depth(inode);
+ 	struct ext4_extent *ex;
+@@ -2263,30 +2263,6 @@ static ext4_lblk_t ext4_ext_determine_hole(struct inode *inode,
+ 	return len;
+ }
+ 
+-/*
+- * ext4_ext_put_gap_in_cache:
+- * calculate boundaries of the gap that the requested block fits into
+- * and cache this gap
+- */
+-static void
+-ext4_ext_put_gap_in_cache(struct inode *inode, ext4_lblk_t hole_start,
+-			  ext4_lblk_t hole_len)
+-{
+-	struct extent_status es;
+-
+-	ext4_es_find_extent_range(inode, &ext4_es_is_delayed, hole_start,
+-				  hole_start + hole_len - 1, &es);
+-	if (es.es_len) {
+-		/* There's delayed extent containing lblock? */
+-		if (es.es_lblk <= hole_start)
+-			return;
+-		hole_len = min(es.es_lblk - hole_start, hole_len);
+-	}
+-	ext_debug(inode, " -> %u:%u\n", hole_start, hole_len);
+-	ext4_es_insert_extent(inode, hole_start, hole_len, ~0,
+-			      EXTENT_STATUS_HOLE);
+-}
+-
+ /*
+  * ext4_ext_rm_idx:
+  * removes index from the index block.
+@@ -4058,6 +4034,69 @@ static int get_implied_cluster_alloc(struct super_block *sb,
+ 	return 0;
+ }
+ 
++/*
++ * Determine hole length around the given logical block, first try to
++ * locate and expand the hole from the given @path, and then adjust it
++ * if it's partially or completely converted to delayed extents, insert
++ * it into the extent cache tree if it's indeed a hole, finally return
++ * the length of the determined extent.
++ */
++static ext4_lblk_t ext4_ext_determine_insert_hole(struct inode *inode,
++						  struct ext4_ext_path *path,
++						  ext4_lblk_t lblk)
++{
++	ext4_lblk_t hole_start, len;
++	struct extent_status es;
++
++	hole_start = lblk;
++	len = ext4_ext_find_hole(inode, path, &hole_start);
++again:
++	ext4_es_find_extent_range(inode, &ext4_es_is_delayed, hole_start,
++				  hole_start + len - 1, &es);
++	if (!es.es_len)
++		goto insert_hole;
++
++	/*
++	 * There's a delalloc extent in the hole, handle it if the delalloc
++	 * extent is in front of, behind and straddle the queried range.
++	 */
++	if (lblk >= es.es_lblk + es.es_len) {
++		/*
++		 * The delalloc extent is in front of the queried range,
++		 * find again from the queried start block.
++		 */
++		len -= lblk - hole_start;
++		hole_start = lblk;
++		goto again;
++	} else if (in_range(lblk, es.es_lblk, es.es_len)) {
++		/*
++		 * The delalloc extent containing lblk, it must have been
++		 * added after ext4_map_blocks() checked the extent status
++		 * tree, adjust the length to the delalloc extent's after
++		 * lblk.
++		 */
++		len = es.es_lblk + es.es_len - lblk;
++		return len;
++	} else {
++		/*
++		 * The delalloc extent is partially or completely behind
++		 * the queried range, update hole length until the
++		 * beginning of the delalloc extent.
++		 */
++		len = min(es.es_lblk - hole_start, len);
++	}
++
++insert_hole:
++	/* Put just found gap into cache to speed up subsequent requests */
++	ext_debug(inode, " -> %u:%u\n", hole_start, len);
++	ext4_es_insert_extent(inode, hole_start, len, ~0, EXTENT_STATUS_HOLE);
++
++	/* Update hole_len to reflect hole size after lblk */
++	if (hole_start != lblk)
++		len -= lblk - hole_start;
++
++	return len;
++}
+ 
+ /*
+  * Block allocation/map/preallocation routine for extents based files
+@@ -4175,22 +4214,12 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
+ 	 * we couldn't try to create block if create flag is zero
+ 	 */
+ 	if ((flags & EXT4_GET_BLOCKS_CREATE) == 0) {
+-		ext4_lblk_t hole_start, hole_len;
++		ext4_lblk_t len;
+ 
+-		hole_start = map->m_lblk;
+-		hole_len = ext4_ext_determine_hole(inode, path, &hole_start);
+-		/*
+-		 * put just found gap into cache to speed up
+-		 * subsequent requests
+-		 */
+-		ext4_ext_put_gap_in_cache(inode, hole_start, hole_len);
++		len = ext4_ext_determine_insert_hole(inode, path, map->m_lblk);
+ 
+-		/* Update hole_len to reflect hole size after map->m_lblk */
+-		if (hole_start != map->m_lblk)
+-			hole_len -= map->m_lblk - hole_start;
+ 		map->m_pblk = 0;
+-		map->m_len = min_t(unsigned int, map->m_len, hole_len);
+-
++		map->m_len = min_t(unsigned int, map->m_len, len);
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 9bec75847b856..d66ba6f6a8115 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -823,6 +823,24 @@ void ext4_mb_generate_buddy(struct super_block *sb,
+ 	atomic64_add(period, &sbi->s_mb_generation_time);
+ }
+ 
++static void mb_regenerate_buddy(struct ext4_buddy *e4b)
++{
++	int count;
++	int order = 1;
++	void *buddy;
++
++	while ((buddy = mb_find_buddy(e4b, order++, &count)))
++		ext4_set_bits(buddy, 0, count);
++
++	e4b->bd_info->bb_fragments = 0;
++	memset(e4b->bd_info->bb_counters, 0,
++		sizeof(*e4b->bd_info->bb_counters) *
++		(e4b->bd_sb->s_blocksize_bits + 2));
++
++	ext4_mb_generate_buddy(e4b->bd_sb, e4b->bd_buddy,
++		e4b->bd_bitmap, e4b->bd_group, e4b->bd_info);
++}
++
+ /* The buddy information is attached the buddy cache inode
+  * for convenience. The information regarding each group
+  * is loaded via ext4_mb_load_buddy. The information involve
+@@ -1505,6 +1523,8 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b,
+ 			ext4_mark_group_bitmap_corrupted(
+ 				sb, e4b->bd_group,
+ 				EXT4_GROUP_INFO_BBITMAP_CORRUPT);
++		} else {
++			mb_regenerate_buddy(e4b);
+ 		}
+ 		goto done;
+ 	}
+@@ -1854,6 +1874,9 @@ int ext4_mb_try_best_found(struct ext4_allocation_context *ac,
+ 		return err;
+ 
+ 	ext4_lock_group(ac->ac_sb, group);
++	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info)))
++		goto out;
++
+ 	max = mb_find_extent(e4b, ex.fe_start, ex.fe_len, &ex);
+ 
+ 	if (max > 0) {
+@@ -1861,6 +1884,7 @@ int ext4_mb_try_best_found(struct ext4_allocation_context *ac,
+ 		ext4_mb_use_best_found(ac, e4b);
+ 	}
+ 
++out:
+ 	ext4_unlock_group(ac->ac_sb, group);
+ 	ext4_mb_unload_buddy(e4b);
+ 
+@@ -1889,12 +1913,10 @@ int ext4_mb_find_by_goal(struct ext4_allocation_context *ac,
+ 	if (err)
+ 		return err;
+ 
+-	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info))) {
+-		ext4_mb_unload_buddy(e4b);
+-		return 0;
+-	}
+-
+ 	ext4_lock_group(ac->ac_sb, group);
++	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info)))
++		goto out;
++
+ 	max = mb_find_extent(e4b, ac->ac_g_ex.fe_start,
+ 			     ac->ac_g_ex.fe_len, &ex);
+ 	ex.fe_logical = 0xDEADFA11; /* debug value */
+@@ -1927,6 +1949,7 @@ int ext4_mb_find_by_goal(struct ext4_allocation_context *ac,
+ 		ac->ac_b_ex = ex;
+ 		ext4_mb_use_best_found(ac, e4b);
+ 	}
++out:
+ 	ext4_unlock_group(ac->ac_sb, group);
+ 	ext4_mb_unload_buddy(e4b);
+ 
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index 472932b9e6bca..7898983c9fba0 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -57,28 +57,6 @@ static inline void __buffer_unlink(struct journal_head *jh)
+ 	}
+ }
+ 
+-/*
+- * Move a buffer from the checkpoint list to the checkpoint io list
+- *
+- * Called with j_list_lock held
+- */
+-static inline void __buffer_relink_io(struct journal_head *jh)
+-{
+-	transaction_t *transaction = jh->b_cp_transaction;
+-
+-	__buffer_unlink_first(jh);
+-
+-	if (!transaction->t_checkpoint_io_list) {
+-		jh->b_cpnext = jh->b_cpprev = jh;
+-	} else {
+-		jh->b_cpnext = transaction->t_checkpoint_io_list;
+-		jh->b_cpprev = transaction->t_checkpoint_io_list->b_cpprev;
+-		jh->b_cpprev->b_cpnext = jh;
+-		jh->b_cpnext->b_cpprev = jh;
+-	}
+-	transaction->t_checkpoint_io_list = jh;
+-}
+-
+ /*
+  * Try to release a checkpointed buffer from its transaction.
+  * Returns 1 if we released it and 2 if we also released the
+@@ -91,8 +69,7 @@ static int __try_to_free_cp_buf(struct journal_head *jh)
+ 	int ret = 0;
+ 	struct buffer_head *bh = jh2bh(jh);
+ 
+-	if (jh->b_transaction == NULL && !buffer_locked(bh) &&
+-	    !buffer_dirty(bh) && !buffer_write_io_error(bh)) {
++	if (!jh->b_transaction && !buffer_locked(bh) && !buffer_dirty(bh)) {
+ 		JBUFFER_TRACE(jh, "remove from checkpoint list");
+ 		ret = __jbd2_journal_remove_checkpoint(jh) + 1;
+ 	}
+@@ -191,6 +168,7 @@ __flush_batch(journal_t *journal, int *batch_count)
+ 		struct buffer_head *bh = journal->j_chkpt_bhs[i];
+ 		BUFFER_TRACE(bh, "brelse");
+ 		__brelse(bh);
++		journal->j_chkpt_bhs[i] = NULL;
+ 	}
+ 	*batch_count = 0;
+ }
+@@ -228,7 +206,6 @@ int jbd2_log_do_checkpoint(journal_t *journal)
+ 	 * OK, we need to start writing disk blocks.  Take one transaction
+ 	 * and write it.
+ 	 */
+-	result = 0;
+ 	spin_lock(&journal->j_list_lock);
+ 	if (!journal->j_checkpoint_transactions)
+ 		goto out;
+@@ -251,15 +228,6 @@ int jbd2_log_do_checkpoint(journal_t *journal)
+ 		jh = transaction->t_checkpoint_list;
+ 		bh = jh2bh(jh);
+ 
+-		if (buffer_locked(bh)) {
+-			get_bh(bh);
+-			spin_unlock(&journal->j_list_lock);
+-			wait_on_buffer(bh);
+-			/* the journal_head may have gone by now */
+-			BUFFER_TRACE(bh, "brelse");
+-			__brelse(bh);
+-			goto retry;
+-		}
+ 		if (jh->b_transaction != NULL) {
+ 			transaction_t *t = jh->b_transaction;
+ 			tid_t tid = t->t_tid;
+@@ -294,32 +262,50 @@ int jbd2_log_do_checkpoint(journal_t *journal)
+ 			spin_lock(&journal->j_list_lock);
+ 			goto restart;
+ 		}
+-		if (!buffer_dirty(bh)) {
+-			if (unlikely(buffer_write_io_error(bh)) && !result)
+-				result = -EIO;
++		if (!trylock_buffer(bh)) {
++			/*
++			 * The buffer is locked, it may be writing back, or
++			 * flushing out in the last couple of cycles, or
++			 * re-adding into a new transaction, need to check
++			 * it again until it's unlocked.
++			 */
++			get_bh(bh);
++			spin_unlock(&journal->j_list_lock);
++			wait_on_buffer(bh);
++			/* the journal_head may have gone by now */
++			BUFFER_TRACE(bh, "brelse");
++			__brelse(bh);
++			goto retry;
++		} else if (!buffer_dirty(bh)) {
++			unlock_buffer(bh);
+ 			BUFFER_TRACE(bh, "remove from checkpoint");
+-			if (__jbd2_journal_remove_checkpoint(jh))
+-				/* The transaction was released; we're done */
++			/*
++			 * If the transaction was released or the checkpoint
++			 * list was empty, we're done.
++			 */
++			if (__jbd2_journal_remove_checkpoint(jh) ||
++			    !transaction->t_checkpoint_list)
+ 				goto out;
+-			continue;
++		} else {
++			unlock_buffer(bh);
++			/*
++			 * We are about to write the buffer, it could be
++			 * raced by some other transaction shrink or buffer
++			 * re-log logic once we release the j_list_lock,
++			 * leave it on the checkpoint list and check status
++			 * again to make sure it's clean.
++			 */
++			BUFFER_TRACE(bh, "queue");
++			get_bh(bh);
++			J_ASSERT_BH(bh, !buffer_jwrite(bh));
++			journal->j_chkpt_bhs[batch_count++] = bh;
++			transaction->t_chp_stats.cs_written++;
++			transaction->t_checkpoint_list = jh->b_cpnext;
+ 		}
+-		/*
+-		 * Important: we are about to write the buffer, and
+-		 * possibly block, while still holding the journal
+-		 * lock.  We cannot afford to let the transaction
+-		 * logic start messing around with this buffer before
+-		 * we write it to disk, as that would break
+-		 * recoverability.
+-		 */
+-		BUFFER_TRACE(bh, "queue");
+-		get_bh(bh);
+-		J_ASSERT_BH(bh, !buffer_jwrite(bh));
+-		journal->j_chkpt_bhs[batch_count++] = bh;
+-		__buffer_relink_io(jh);
+-		transaction->t_chp_stats.cs_written++;
++
+ 		if ((batch_count == JBD2_NR_BATCH) ||
+-		    need_resched() ||
+-		    spin_needbreak(&journal->j_list_lock))
++		    need_resched() || spin_needbreak(&journal->j_list_lock) ||
++		    jh2bh(transaction->t_checkpoint_list) == journal->j_chkpt_bhs[0])
+ 			goto unlock_and_flush;
+ 	}
+ 
+@@ -333,46 +319,9 @@ int jbd2_log_do_checkpoint(journal_t *journal)
+ 			goto restart;
+ 	}
+ 
+-	/*
+-	 * Now we issued all of the transaction's buffers, let's deal
+-	 * with the buffers that are out for I/O.
+-	 */
+-restart2:
+-	/* Did somebody clean up the transaction in the meanwhile? */
+-	if (journal->j_checkpoint_transactions != transaction ||
+-	    transaction->t_tid != this_tid)
+-		goto out;
+-
+-	while (transaction->t_checkpoint_io_list) {
+-		jh = transaction->t_checkpoint_io_list;
+-		bh = jh2bh(jh);
+-		if (buffer_locked(bh)) {
+-			get_bh(bh);
+-			spin_unlock(&journal->j_list_lock);
+-			wait_on_buffer(bh);
+-			/* the journal_head may have gone by now */
+-			BUFFER_TRACE(bh, "brelse");
+-			__brelse(bh);
+-			spin_lock(&journal->j_list_lock);
+-			goto restart2;
+-		}
+-		if (unlikely(buffer_write_io_error(bh)) && !result)
+-			result = -EIO;
+-
+-		/*
+-		 * Now in whatever state the buffer currently is, we
+-		 * know that it has been written out and so we can
+-		 * drop it from the list
+-		 */
+-		if (__jbd2_journal_remove_checkpoint(jh))
+-			break;
+-	}
+ out:
+ 	spin_unlock(&journal->j_list_lock);
+-	if (result < 0)
+-		jbd2_journal_abort(journal, result);
+-	else
+-		result = jbd2_cleanup_journal_tail(journal);
++	result = jbd2_cleanup_journal_tail(journal);
+ 
+ 	return (result < 0) ? result : 0;
+ }
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index b9522eee1257a..f9ecade9ea65b 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -319,16 +319,18 @@ static loff_t zonefs_check_zone_condition(struct inode *inode,
+ 	}
+ }
+ 
+-struct zonefs_ioerr_data {
+-	struct inode	*inode;
+-	bool		write;
+-};
+-
+ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
+ 			      void *data)
+ {
+-	struct zonefs_ioerr_data *err = data;
+-	struct inode *inode = err->inode;
++	struct blk_zone *z = data;
++
++	*z = *zone;
++	return 0;
++}
++
++static void zonefs_handle_io_error(struct inode *inode, struct blk_zone *zone,
++				   bool write)
++{
+ 	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+ 	struct super_block *sb = inode->i_sb;
+ 	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+@@ -344,8 +346,8 @@ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
+ 	isize = i_size_read(inode);
+ 	if (zone->cond != BLK_ZONE_COND_OFFLINE &&
+ 	    zone->cond != BLK_ZONE_COND_READONLY &&
+-	    !err->write && isize == data_size)
+-		return 0;
++	    !write && isize == data_size)
++		return;
+ 
+ 	/*
+ 	 * At this point, we detected either a bad zone or an inconsistency
+@@ -366,8 +368,9 @@ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
+ 	 * In all cases, warn about inode size inconsistency and handle the
+ 	 * IO error according to the zone condition and to the mount options.
+ 	 */
+-	if (zi->i_ztype == ZONEFS_ZTYPE_SEQ && isize != data_size)
+-		zonefs_warn(sb, "inode %lu: invalid size %lld (should be %lld)\n",
++	if (isize != data_size)
++		zonefs_warn(sb,
++			    "inode %lu: invalid size %lld (should be %lld)\n",
+ 			    inode->i_ino, isize, data_size);
+ 
+ 	/*
+@@ -427,8 +430,6 @@ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
+ 	zonefs_update_stats(inode, data_size);
+ 	zonefs_i_size_write(inode, data_size);
+ 	zi->i_wpoffset = data_size;
+-
+-	return 0;
+ }
+ 
+ /*
+@@ -442,23 +443,25 @@ static void __zonefs_io_error(struct inode *inode, bool write)
+ {
+ 	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+ 	struct super_block *sb = inode->i_sb;
+-	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+ 	unsigned int noio_flag;
+-	unsigned int nr_zones = 1;
+-	struct zonefs_ioerr_data err = {
+-		.inode = inode,
+-		.write = write,
+-	};
++	struct blk_zone zone;
+ 	int ret;
+ 
+ 	/*
+-	 * The only files that have more than one zone are conventional zone
+-	 * files with aggregated conventional zones, for which the inode zone
+-	 * size is always larger than the device zone size.
++	 * Conventional zone have no write pointer and cannot become read-only
++	 * or offline. So simply fake a report for a single or aggregated zone
++	 * and let zonefs_handle_io_error() correct the zone inode information
++	 * according to the mount options.
+ 	 */
+-	if (zi->i_zone_size > bdev_zone_sectors(sb->s_bdev))
+-		nr_zones = zi->i_zone_size >>
+-			(sbi->s_zone_sectors_shift + SECTOR_SHIFT);
++	if (zi->i_ztype != ZONEFS_ZTYPE_SEQ) {
++		zone.start = zi->i_zsector;
++		zone.len = zi->i_max_size >> SECTOR_SHIFT;
++		zone.wp = zone.start + zone.len;
++		zone.type = BLK_ZONE_TYPE_CONVENTIONAL;
++		zone.cond = BLK_ZONE_COND_NOT_WP;
++		zone.capacity = zone.len;
++		goto handle_io_error;
++	}
+ 
+ 	/*
+ 	 * Memory allocations in blkdev_report_zones() can trigger a memory
+@@ -469,12 +472,19 @@ static void __zonefs_io_error(struct inode *inode, bool write)
+ 	 * the GFP_NOIO context avoids both problems.
+ 	 */
+ 	noio_flag = memalloc_noio_save();
+-	ret = blkdev_report_zones(sb->s_bdev, zi->i_zsector, nr_zones,
+-				  zonefs_io_error_cb, &err);
+-	if (ret != nr_zones)
++	ret = blkdev_report_zones(sb->s_bdev, zi->i_zsector, 1,
++				  zonefs_io_error_cb, &zone);
++	memalloc_noio_restore(noio_flag);
++	if (ret != 1) {
+ 		zonefs_err(sb, "Get inode %lu zone information failed %d\n",
+ 			   inode->i_ino, ret);
+-	memalloc_noio_restore(noio_flag);
++		zonefs_warn(sb, "remounting filesystem read-only\n");
++		sb->s_flags |= SB_RDONLY;
++		return;
++	}
++
++handle_io_error:
++	zonefs_handle_io_error(inode, &zone, write);
+ }
+ 
+ static void zonefs_io_error(struct inode *inode, bool write)
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 82316863c71fd..6de70634e5471 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -316,6 +316,8 @@ enum rw_hint {
+ /* iocb->ki_waitq is valid */
+ #define IOCB_WAITQ		(1 << 19)
+ #define IOCB_NOIO		(1 << 20)
++/* kiocb is a read or write operation submitted by fs/aio.c. */
++#define IOCB_AIO_RW		(1 << 23)
+ 
+ struct kiocb {
+ 	struct file		*ki_filp;
+diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
+index 2c2586312b447..3eca9f91b9a56 100644
+--- a/include/linux/lockdep.h
++++ b/include/linux/lockdep.h
+@@ -321,6 +321,10 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
+ 		WARN_ON_ONCE(debug_locks && !lockdep_is_held(l));	\
+ 	} while (0)
+ 
++#define lockdep_assert_none_held_once()	do {				\
++		WARN_ON_ONCE(debug_locks && current->lockdep_depth);	\
++	} while (0)
++
+ #define lockdep_recursing(tsk)	((tsk)->lockdep_recursion)
+ 
+ #define lockdep_pin_lock(l)	lock_pin_lock(&(l)->dep_map)
+@@ -394,6 +398,7 @@ static inline void lockdep_unregister_key(struct lock_class_key *key)
+ #define lockdep_assert_held_write(l)	do { (void)(l); } while (0)
+ #define lockdep_assert_held_read(l)		do { (void)(l); } while (0)
+ #define lockdep_assert_held_once(l)		do { (void)(l); } while (0)
++#define lockdep_assert_none_held_once()	do { } while (0)
+ 
+ #define lockdep_recursing(tsk)			(0)
+ 
+diff --git a/include/linux/sched/task_stack.h b/include/linux/sched/task_stack.h
+index f24575942dabe..879a5c8f930b6 100644
+--- a/include/linux/sched/task_stack.h
++++ b/include/linux/sched/task_stack.h
+@@ -16,7 +16,7 @@
+  * try_get_task_stack() instead.  task_stack_page will return a pointer
+  * that could get freed out from under you.
+  */
+-static inline void *task_stack_page(const struct task_struct *task)
++static __always_inline void *task_stack_page(const struct task_struct *task)
+ {
+ 	return task->stack;
+ }
+diff --git a/include/linux/socket.h b/include/linux/socket.h
+index c3b35d18bcd30..daf51fef5a8d1 100644
+--- a/include/linux/socket.h
++++ b/include/linux/socket.h
+@@ -31,7 +31,10 @@ typedef __kernel_sa_family_t	sa_family_t;
+ 
+ struct sockaddr {
+ 	sa_family_t	sa_family;	/* address family, AF_xxx	*/
+-	char		sa_data[14];	/* 14 bytes of protocol address	*/
++	union {
++		char sa_data_min[14];		/* Minimum 14 bytes of protocol address	*/
++		DECLARE_FLEX_ARRAY(char, sa_data);
++	};
+ };
+ 
+ struct linger {
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 5c03dc6d0f792..d6e4b6f7d6ce0 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -2224,7 +2224,7 @@ struct tcp_ulp_ops {
+ 	/* cleanup ulp */
+ 	void (*release)(struct sock *sk);
+ 	/* diagnostic */
+-	int (*get_info)(const struct sock *sk, struct sk_buff *skb);
++	int (*get_info)(struct sock *sk, struct sk_buff *skb);
+ 	size_t (*get_info_size)(const struct sock *sk);
+ 	/* clone ulp */
+ 	void (*clone)(const struct request_sock *req, struct sock *newsk,
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index f690f901b6cc7..1289991c970e1 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -8,7 +8,7 @@
+ #include "pelt.h"
+ 
+ int sched_rr_timeslice = RR_TIMESLICE;
+-int sysctl_sched_rr_timeslice = (MSEC_PER_SEC / HZ) * RR_TIMESLICE;
++int sysctl_sched_rr_timeslice = (MSEC_PER_SEC * RR_TIMESLICE) / HZ;
+ /* More than 4 hours if BW_SHIFT equals 20. */
+ static const u64 max_rt_runtime = MAX_BW;
+ 
+@@ -2727,9 +2727,6 @@ static int sched_rt_global_constraints(void)
+ 
+ static int sched_rt_global_validate(void)
+ {
+-	if (sysctl_sched_rt_period <= 0)
+-		return -EINVAL;
+-
+ 	if ((sysctl_sched_rt_runtime != RUNTIME_INF) &&
+ 		((sysctl_sched_rt_runtime > sysctl_sched_rt_period) ||
+ 		 ((u64)sysctl_sched_rt_runtime *
+@@ -2760,7 +2757,7 @@ int sched_rt_handler(struct ctl_table *table, int write, void *buffer,
+ 	old_period = sysctl_sched_rt_period;
+ 	old_runtime = sysctl_sched_rt_runtime;
+ 
+-	ret = proc_dointvec(table, write, buffer, lenp, ppos);
++	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ 
+ 	if (!ret && write) {
+ 		ret = sched_rt_global_validate();
+@@ -2804,6 +2801,9 @@ int sched_rr_handler(struct ctl_table *table, int write, void *buffer,
+ 		sched_rr_timeslice =
+ 			sysctl_sched_rr_timeslice <= 0 ? RR_TIMESLICE :
+ 			msecs_to_jiffies(sysctl_sched_rr_timeslice);
++
++		if (sysctl_sched_rr_timeslice <= 0)
++			sysctl_sched_rr_timeslice = jiffies_to_msecs(RR_TIMESLICE);
+ 	}
+ 	mutex_unlock(&mutex);
+ 
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index 305f0eca163ed..0b0331346e4be 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -29,6 +29,9 @@
+ #include <linux/syscalls.h>
+ #include <linux/sysctl.h>
+ 
++/* Not exposed in headers: strictly internal use only. */
++#define SECCOMP_MODE_DEAD	(SECCOMP_MODE_FILTER + 1)
++
+ #ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
+ #include <asm/syscall.h>
+ #endif
+@@ -795,6 +798,7 @@ static void __secure_computing_strict(int this_syscall)
+ #ifdef SECCOMP_DEBUG
+ 	dump_stack();
+ #endif
++	current->seccomp.mode = SECCOMP_MODE_DEAD;
+ 	seccomp_log(this_syscall, SIGKILL, SECCOMP_RET_KILL_THREAD, true);
+ 	do_exit(SIGKILL);
+ }
+@@ -1023,6 +1027,7 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
+ 	case SECCOMP_RET_KILL_THREAD:
+ 	case SECCOMP_RET_KILL_PROCESS:
+ 	default:
++		current->seccomp.mode = SECCOMP_MODE_DEAD;
+ 		seccomp_log(this_syscall, SIGSYS, action, true);
+ 		/* Dump core only if this is the last remaining thread. */
+ 		if (action != SECCOMP_RET_KILL_THREAD ||
+@@ -1075,6 +1080,11 @@ int __secure_computing(const struct seccomp_data *sd)
+ 		return 0;
+ 	case SECCOMP_MODE_FILTER:
+ 		return __seccomp_filter(this_syscall, sd, false);
++	/* Surviving SECCOMP_RET_KILL_* must be proactively impossible. */
++	case SECCOMP_MODE_DEAD:
++		WARN_ON_ONCE(1);
++		do_exit(SIGKILL);
++		return -1;
+ 	default:
+ 		BUG();
+ 	}
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index a45f0dd10b9a3..99a19190196e0 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -1859,6 +1859,8 @@ static struct ctl_table kern_table[] = {
+ 		.maxlen		= sizeof(unsigned int),
+ 		.mode		= 0644,
+ 		.proc_handler	= sched_rt_handler,
++		.extra1		= SYSCTL_ONE,
++		.extra2		= SYSCTL_INT_MAX,
+ 	},
+ 	{
+ 		.procname	= "sched_rt_runtime_us",
+@@ -1866,6 +1868,8 @@ static struct ctl_table kern_table[] = {
+ 		.maxlen		= sizeof(int),
+ 		.mode		= 0644,
+ 		.proc_handler	= sched_rt_handler,
++		.extra1		= SYSCTL_NEG_ONE,
++		.extra2		= SYSCTL_INT_MAX,
+ 	},
+ 	{
+ 		.procname	= "sched_deadline_period_max_us",
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 078d95cd32c53..c28ff36f5b311 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -209,6 +209,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
+ 					      unsigned long dst_start,
+ 					      unsigned long src_start,
+ 					      unsigned long len,
++					      bool *mmap_changing,
+ 					      bool zeropage)
+ {
+ 	int vm_alloc_shared = dst_vma->vm_flags & VM_SHARED;
+@@ -329,6 +330,15 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
+ 				goto out;
+ 			}
+ 			mmap_read_lock(dst_mm);
++			/*
++			 * If memory mappings are changing because of non-cooperative
++			 * operation (e.g. mremap) running in parallel, bail out and
++			 * request the user to retry later
++			 */
++			if (mmap_changing && READ_ONCE(*mmap_changing)) {
++				err = -EAGAIN;
++				break;
++			}
+ 
+ 			dst_vma = NULL;
+ 			goto retry;
+@@ -410,6 +420,7 @@ extern ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
+ 				      unsigned long dst_start,
+ 				      unsigned long src_start,
+ 				      unsigned long len,
++				      bool *mmap_changing,
+ 				      bool zeropage);
+ #endif /* CONFIG_HUGETLB_PAGE */
+ 
+@@ -529,7 +540,8 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
+ 	 */
+ 	if (is_vm_hugetlb_page(dst_vma))
+ 		return  __mcopy_atomic_hugetlb(dst_mm, dst_vma, dst_start,
+-						src_start, len, zeropage);
++					       src_start, len, mmap_changing,
++					       zeropage);
+ 
+ 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
+ 		goto out_unlock;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index fc881d60a9dcc..0619d2253aa24 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -8787,7 +8787,7 @@ EXPORT_SYMBOL(dev_set_mac_address_user);
+ 
+ int dev_get_mac_address(struct sockaddr *sa, struct net *net, char *dev_name)
+ {
+-	size_t size = sizeof(sa->sa_data);
++	size_t size = sizeof(sa->sa_data_min);
+ 	struct net_device *dev;
+ 	int ret = 0;
+ 
+diff --git a/net/core/dev_ioctl.c b/net/core/dev_ioctl.c
+index 993420da29307..60e815a71909a 100644
+--- a/net/core/dev_ioctl.c
++++ b/net/core/dev_ioctl.c
+@@ -245,7 +245,7 @@ static int dev_ifsioc(struct net *net, struct ifreq *ifr, unsigned int cmd)
+ 		if (ifr->ifr_hwaddr.sa_family != dev->type)
+ 			return -EINVAL;
+ 		memcpy(dev->broadcast, ifr->ifr_hwaddr.sa_data,
+-		       min(sizeof(ifr->ifr_hwaddr.sa_data),
++		       min(sizeof(ifr->ifr_hwaddr.sa_data_min),
+ 			   (size_t)dev->addr_len));
+ 		call_netdevice_notifiers(NETDEV_CHANGEADDR, dev);
+ 		return 0;
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index afc97d65cf2d8..87fc86aade5c9 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -327,9 +327,12 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame)
+ 	node_real->addr_B_port = port_rcv->type;
+ 
+ 	spin_lock_bh(&hsr->list_lock);
+-	list_del_rcu(&node_curr->mac_list);
++	if (!node_curr->removed) {
++		list_del_rcu(&node_curr->mac_list);
++		node_curr->removed = true;
++		kfree_rcu(node_curr, rcu_head);
++	}
+ 	spin_unlock_bh(&hsr->list_lock);
+-	kfree_rcu(node_curr, rcu_head);
+ 
+ done:
+ 	/* PRP uses v0 header */
+@@ -506,9 +509,12 @@ void hsr_prune_nodes(struct timer_list *t)
+ 		if (time_is_before_jiffies(timestamp +
+ 				msecs_to_jiffies(HSR_NODE_FORGET_TIME))) {
+ 			hsr_nl_nodedown(hsr, node->macaddress_A);
+-			list_del_rcu(&node->mac_list);
+-			/* Note that we need to free this entry later: */
+-			kfree_rcu(node, rcu_head);
++			if (!node->removed) {
++				list_del_rcu(&node->mac_list);
++				node->removed = true;
++				/* Note that we need to free this entry later: */
++				kfree_rcu(node, rcu_head);
++			}
+ 		}
+ 	}
+ 	spin_unlock_bh(&hsr->list_lock);
+diff --git a/net/hsr/hsr_framereg.h b/net/hsr/hsr_framereg.h
+index 5a771cb3f0325..48990166e4c4e 100644
+--- a/net/hsr/hsr_framereg.h
++++ b/net/hsr/hsr_framereg.h
+@@ -82,6 +82,7 @@ struct hsr_node {
+ 	bool			san_a;
+ 	bool			san_b;
+ 	u16			seq_out[HSR_PT_PORTS];
++	bool			removed;
+ 	struct rcu_head		rcu_head;
+ };
+ 
+diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
+index 83a47998c4b18..8ae9bd6f91c19 100644
+--- a/net/ipv4/arp.c
++++ b/net/ipv4/arp.c
+@@ -1104,7 +1104,8 @@ static int arp_req_get(struct arpreq *r, struct net_device *dev)
+ 	if (neigh) {
+ 		if (!(neigh->nud_state & NUD_NOARP)) {
+ 			read_lock_bh(&neigh->lock);
+-			memcpy(r->arp_ha.sa_data, neigh->ha, dev->addr_len);
++			memcpy(r->arp_ha.sa_data, neigh->ha,
++			       min(dev->addr_len, (unsigned char)sizeof(r->arp_ha.sa_data_min)));
+ 			r->arp_flags = arp_state_to_flags(neigh);
+ 			read_unlock_bh(&neigh->lock);
+ 			r->arp_ha.sa_family = dev->type;
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index da1ca8081c035..9ac7d47d27b81 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -1798,6 +1798,21 @@ static int in_dev_dump_addr(struct in_device *in_dev, struct sk_buff *skb,
+ 	return err;
+ }
+ 
++/* Combine dev_addr_genid and dev_base_seq to detect changes.
++ */
++static u32 inet_base_seq(const struct net *net)
++{
++	u32 res = atomic_read(&net->ipv4.dev_addr_genid) +
++		  net->dev_base_seq;
++
++	/* Must not return 0 (see nl_dump_check_consistent()).
++	 * Chose a value far away from 0.
++	 */
++	if (!res)
++		res = 0x80000000;
++	return res;
++}
++
+ static int inet_dump_ifaddr(struct sk_buff *skb, struct netlink_callback *cb)
+ {
+ 	const struct nlmsghdr *nlh = cb->nlh;
+@@ -1849,8 +1864,7 @@ static int inet_dump_ifaddr(struct sk_buff *skb, struct netlink_callback *cb)
+ 		idx = 0;
+ 		head = &tgt_net->dev_index_head[h];
+ 		rcu_read_lock();
+-		cb->seq = atomic_read(&tgt_net->ipv4.dev_addr_genid) ^
+-			  tgt_net->dev_base_seq;
++		cb->seq = inet_base_seq(tgt_net);
+ 		hlist_for_each_entry_rcu(dev, head, index_hlist) {
+ 			if (idx < s_idx)
+ 				goto cont;
+@@ -2249,8 +2263,7 @@ static int inet_netconf_dump_devconf(struct sk_buff *skb,
+ 		idx = 0;
+ 		head = &net->dev_index_head[h];
+ 		rcu_read_lock();
+-		cb->seq = atomic_read(&net->ipv4.dev_addr_genid) ^
+-			  net->dev_base_seq;
++		cb->seq = inet_base_seq(net);
+ 		hlist_for_each_entry_rcu(dev, head, index_hlist) {
+ 			if (idx < s_idx)
+ 				goto cont;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 79787a1f5ab34..150c2f71ec89f 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -698,6 +698,22 @@ static int inet6_netconf_get_devconf(struct sk_buff *in_skb,
+ 	return err;
+ }
+ 
++/* Combine dev_addr_genid and dev_base_seq to detect changes.
++ */
++static u32 inet6_base_seq(const struct net *net)
++{
++	u32 res = atomic_read(&net->ipv6.dev_addr_genid) +
++		  net->dev_base_seq;
++
++	/* Must not return 0 (see nl_dump_check_consistent()).
++	 * Chose a value far away from 0.
++	 */
++	if (!res)
++		res = 0x80000000;
++	return res;
++}
++
++
+ static int inet6_netconf_dump_devconf(struct sk_buff *skb,
+ 				      struct netlink_callback *cb)
+ {
+@@ -731,8 +747,7 @@ static int inet6_netconf_dump_devconf(struct sk_buff *skb,
+ 		idx = 0;
+ 		head = &net->dev_index_head[h];
+ 		rcu_read_lock();
+-		cb->seq = atomic_read(&net->ipv6.dev_addr_genid) ^
+-			  net->dev_base_seq;
++		cb->seq = inet6_base_seq(net);
+ 		hlist_for_each_entry_rcu(dev, head, index_hlist) {
+ 			if (idx < s_idx)
+ 				goto cont;
+@@ -5288,7 +5303,7 @@ static int inet6_dump_addr(struct sk_buff *skb, struct netlink_callback *cb,
+ 	}
+ 
+ 	rcu_read_lock();
+-	cb->seq = atomic_read(&tgt_net->ipv6.dev_addr_genid) ^ tgt_net->dev_base_seq;
++	cb->seq = inet6_base_seq(tgt_net);
+ 	for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) {
+ 		idx = 0;
+ 		head = &tgt_net->dev_index_head[h];
+diff --git a/net/ipv6/seg6.c b/net/ipv6/seg6.c
+index 2278c0234c497..a8439fded12dc 100644
+--- a/net/ipv6/seg6.c
++++ b/net/ipv6/seg6.c
+@@ -451,22 +451,24 @@ int __init seg6_init(void)
+ {
+ 	int err;
+ 
+-	err = genl_register_family(&seg6_genl_family);
++	err = register_pernet_subsys(&ip6_segments_ops);
+ 	if (err)
+ 		goto out;
+ 
+-	err = register_pernet_subsys(&ip6_segments_ops);
++	err = genl_register_family(&seg6_genl_family);
+ 	if (err)
+-		goto out_unregister_genl;
++		goto out_unregister_pernet;
+ 
+ #ifdef CONFIG_IPV6_SEG6_LWTUNNEL
+ 	err = seg6_iptunnel_init();
+ 	if (err)
+-		goto out_unregister_pernet;
++		goto out_unregister_genl;
+ 
+ 	err = seg6_local_init();
+-	if (err)
+-		goto out_unregister_pernet;
++	if (err) {
++		seg6_iptunnel_exit();
++		goto out_unregister_genl;
++	}
+ #endif
+ 
+ #ifdef CONFIG_IPV6_SEG6_HMAC
+@@ -487,11 +489,11 @@ int __init seg6_init(void)
+ #endif
+ #endif
+ #ifdef CONFIG_IPV6_SEG6_LWTUNNEL
+-out_unregister_pernet:
+-	unregister_pernet_subsys(&ip6_segments_ops);
+-#endif
+ out_unregister_genl:
+ 	genl_unregister_family(&seg6_genl_family);
++#endif
++out_unregister_pernet:
++	unregister_pernet_subsys(&ip6_segments_ops);
+ 	goto out;
+ }
+ 
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index 9746c624a5503..eb3d81bcce6d2 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -628,7 +628,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 
+ back_from_confirm:
+ 	lock_sock(sk);
+-	ulen = len + skb_queue_empty(&sk->sk_write_queue) ? transhdrlen : 0;
++	ulen = len + (skb_queue_empty(&sk->sk_write_queue) ? transhdrlen : 0);
+ 	err = ip6_append_data(sk, ip_generic_getfrag, msg,
+ 			      ulen, transhdrlen, &ipc6,
+ 			      &fl6, (struct rt6_info *)dst,
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index 2e84360990f0c..44bd03c6b8473 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -700,6 +700,8 @@ static int sta_info_insert_finish(struct sta_info *sta) __acquires(RCU)
+ 	if (ieee80211_vif_is_mesh(&sdata->vif))
+ 		mesh_accept_plinks_update(sdata);
+ 
++	ieee80211_check_fast_xmit(sta);
++
+ 	return 0;
+  out_remove:
+ 	sta_info_hash_del(local, sta);
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 55abc06214c4d..0d6d12fc3c07e 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -2959,7 +2959,7 @@ void ieee80211_check_fast_xmit(struct sta_info *sta)
+ 	    sdata->vif.type == NL80211_IFTYPE_STATION)
+ 		goto out;
+ 
+-	if (!test_sta_flag(sta, WLAN_STA_AUTHORIZED))
++	if (!test_sta_flag(sta, WLAN_STA_AUTHORIZED) || !sta->uploaded)
+ 		goto out;
+ 
+ 	if (test_sta_flag(sta, WLAN_STA_PS_STA) ||
+diff --git a/net/mptcp/diag.c b/net/mptcp/diag.c
+index a536586742f28..e57c5f47f0351 100644
+--- a/net/mptcp/diag.c
++++ b/net/mptcp/diag.c
+@@ -13,17 +13,19 @@
+ #include <uapi/linux/mptcp.h>
+ #include "protocol.h"
+ 
+-static int subflow_get_info(const struct sock *sk, struct sk_buff *skb)
++static int subflow_get_info(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct mptcp_subflow_context *sf;
+ 	struct nlattr *start;
+ 	u32 flags = 0;
++	bool slow;
+ 	int err;
+ 
+ 	start = nla_nest_start_noflag(skb, INET_ULP_INFO_MPTCP);
+ 	if (!start)
+ 		return -EMSGSIZE;
+ 
++	slow = lock_sock_fast(sk);
+ 	rcu_read_lock();
+ 	sf = rcu_dereference(inet_csk(sk)->icsk_ulp_data);
+ 	if (!sf) {
+@@ -69,11 +71,13 @@ static int subflow_get_info(const struct sock *sk, struct sk_buff *skb)
+ 	}
+ 
+ 	rcu_read_unlock();
++	unlock_sock_fast(sk, slow);
+ 	nla_nest_end(skb, start);
+ 	return 0;
+ 
+ nla_failure:
+ 	rcu_read_unlock();
++	unlock_sock_fast(sk, slow);
+ 	nla_nest_cancel(skb, start);
+ 	return err;
+ }
+diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c
+index e7545bcca805e..6b2a215b27862 100644
+--- a/net/netfilter/nf_conntrack_proto_sctp.c
++++ b/net/netfilter/nf_conntrack_proto_sctp.c
+@@ -299,7 +299,7 @@ sctp_new(struct nf_conn *ct, const struct sk_buff *skb,
+ 			pr_debug("Setting vtag %x for secondary conntrack\n",
+ 				 sh->vtag);
+ 			ct->proto.sctp.vtag[IP_CT_DIR_ORIGINAL] = sh->vtag;
+-		} else {
++		} else if (sch->type == SCTP_CID_SHUTDOWN_ACK) {
+ 		/* If it is a shutdown ack OOTB packet, we expect a return
+ 		   shutdown complete, otherwise an ABORT Sec 8.4 (5) and (8) */
+ 			pr_debug("Setting vtag %x for new conn OOTB\n",
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index f586e8b3c6cfa..73b0a6925304c 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1132,6 +1132,7 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
+ 	return 0;
+ 
+ err_register_hooks:
++	ctx->table->flags |= NFT_TABLE_F_DORMANT;
+ 	nft_trans_destroy(trans);
+ 	return ret;
+ }
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index b292d58fdcc4c..6cc054dd53b6e 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1871,7 +1871,7 @@ static int packet_rcv_spkt(struct sk_buff *skb, struct net_device *dev,
+ 	 */
+ 
+ 	spkt->spkt_family = dev->type;
+-	strlcpy(spkt->spkt_device, dev->name, sizeof(spkt->spkt_device));
++	strscpy(spkt->spkt_device, dev->name, sizeof(spkt->spkt_device));
+ 	spkt->spkt_protocol = skb->protocol;
+ 
+ 	/*
+@@ -3252,7 +3252,7 @@ static int packet_bind_spkt(struct socket *sock, struct sockaddr *uaddr,
+ 			    int addr_len)
+ {
+ 	struct sock *sk = sock->sk;
+-	char name[sizeof(uaddr->sa_data) + 1];
++	char name[sizeof(uaddr->sa_data_min) + 1];
+ 
+ 	/*
+ 	 *	Check legality
+@@ -3263,8 +3263,8 @@ static int packet_bind_spkt(struct socket *sock, struct sockaddr *uaddr,
+ 	/* uaddr->sa_data comes from the userspace, it's not guaranteed to be
+ 	 * zero-terminated.
+ 	 */
+-	memcpy(name, uaddr->sa_data, sizeof(uaddr->sa_data));
+-	name[sizeof(uaddr->sa_data)] = 0;
++	memcpy(name, uaddr->sa_data, sizeof(uaddr->sa_data_min));
++	name[sizeof(uaddr->sa_data_min)] = 0;
+ 
+ 	return packet_do_bind(sk, name, 0, 0);
+ }
+@@ -3536,11 +3536,11 @@ static int packet_getname_spkt(struct socket *sock, struct sockaddr *uaddr,
+ 		return -EOPNOTSUPP;
+ 
+ 	uaddr->sa_family = AF_PACKET;
+-	memset(uaddr->sa_data, 0, sizeof(uaddr->sa_data));
++	memset(uaddr->sa_data, 0, sizeof(uaddr->sa_data_min));
+ 	rcu_read_lock();
+ 	dev = dev_get_by_index_rcu(sock_net(sk), READ_ONCE(pkt_sk(sk)->ifindex));
+ 	if (dev)
+-		strlcpy(uaddr->sa_data, dev->name, sizeof(uaddr->sa_data));
++		strscpy(uaddr->sa_data, dev->name, sizeof(uaddr->sa_data_min));
+ 	rcu_read_unlock();
+ 
+ 	return sizeof(*uaddr);
+diff --git a/net/sched/Kconfig b/net/sched/Kconfig
+index 2046c16b29f09..e5f7675e587b8 100644
+--- a/net/sched/Kconfig
++++ b/net/sched/Kconfig
+@@ -45,23 +45,6 @@ if NET_SCHED
+ 
+ comment "Queueing/Scheduling"
+ 
+-config NET_SCH_CBQ
+-	tristate "Class Based Queueing (CBQ)"
+-	help
+-	  Say Y here if you want to use the Class-Based Queueing (CBQ) packet
+-	  scheduling algorithm. This algorithm classifies the waiting packets
+-	  into a tree-like hierarchy of classes; the leaves of this tree are
+-	  in turn scheduled by separate algorithms.
+-
+-	  See the top of <file:net/sched/sch_cbq.c> for more details.
+-
+-	  CBQ is a commonly used scheduler, so if you're unsure, you should
+-	  say Y here. Then say Y to all the queueing algorithms below that you
+-	  want to use as leaf disciplines.
+-
+-	  To compile this code as a module, choose M here: the
+-	  module will be called sch_cbq.
+-
+ config NET_SCH_HTB
+ 	tristate "Hierarchical Token Bucket (HTB)"
+ 	help
+@@ -85,20 +68,6 @@ config NET_SCH_HFSC
+ 	  To compile this code as a module, choose M here: the
+ 	  module will be called sch_hfsc.
+ 
+-config NET_SCH_ATM
+-	tristate "ATM Virtual Circuits (ATM)"
+-	depends on ATM
+-	help
+-	  Say Y here if you want to use the ATM pseudo-scheduler.  This
+-	  provides a framework for invoking classifiers, which in turn
+-	  select classes of this queuing discipline.  Each class maps
+-	  the flow(s) it is handling to a given virtual circuit.
+-
+-	  See the top of <file:net/sched/sch_atm.c> for more details.
+-
+-	  To compile this code as a module, choose M here: the
+-	  module will be called sch_atm.
+-
+ config NET_SCH_PRIO
+ 	tristate "Multi Band Priority Queueing (PRIO)"
+ 	help
+@@ -217,17 +186,6 @@ config NET_SCH_GRED
+ 	  To compile this code as a module, choose M here: the
+ 	  module will be called sch_gred.
+ 
+-config NET_SCH_DSMARK
+-	tristate "Differentiated Services marker (DSMARK)"
+-	help
+-	  Say Y if you want to schedule packets according to the
+-	  Differentiated Services architecture proposed in RFC 2475.
+-	  Technical information on this method, with pointers to associated
+-	  RFCs, is available at <http://www.gta.ufrj.br/diffserv/>.
+-
+-	  To compile this code as a module, choose M here: the
+-	  module will be called sch_dsmark.
+-
+ config NET_SCH_NETEM
+ 	tristate "Network emulator (NETEM)"
+ 	help
+diff --git a/net/sched/Makefile b/net/sched/Makefile
+index df2bcd785f7d1..1b8d0fc6614c2 100644
+--- a/net/sched/Makefile
++++ b/net/sched/Makefile
+@@ -32,20 +32,17 @@ obj-$(CONFIG_NET_ACT_TUNNEL_KEY)+= act_tunnel_key.o
+ obj-$(CONFIG_NET_ACT_CT)	+= act_ct.o
+ obj-$(CONFIG_NET_ACT_GATE)	+= act_gate.o
+ obj-$(CONFIG_NET_SCH_FIFO)	+= sch_fifo.o
+-obj-$(CONFIG_NET_SCH_CBQ)	+= sch_cbq.o
+ obj-$(CONFIG_NET_SCH_HTB)	+= sch_htb.o
+ obj-$(CONFIG_NET_SCH_HFSC)	+= sch_hfsc.o
+ obj-$(CONFIG_NET_SCH_RED)	+= sch_red.o
+ obj-$(CONFIG_NET_SCH_GRED)	+= sch_gred.o
+ obj-$(CONFIG_NET_SCH_INGRESS)	+= sch_ingress.o
+-obj-$(CONFIG_NET_SCH_DSMARK)	+= sch_dsmark.o
+ obj-$(CONFIG_NET_SCH_SFB)	+= sch_sfb.o
+ obj-$(CONFIG_NET_SCH_SFQ)	+= sch_sfq.o
+ obj-$(CONFIG_NET_SCH_TBF)	+= sch_tbf.o
+ obj-$(CONFIG_NET_SCH_TEQL)	+= sch_teql.o
+ obj-$(CONFIG_NET_SCH_PRIO)	+= sch_prio.o
+ obj-$(CONFIG_NET_SCH_MULTIQ)	+= sch_multiq.o
+-obj-$(CONFIG_NET_SCH_ATM)	+= sch_atm.o
+ obj-$(CONFIG_NET_SCH_NETEM)	+= sch_netem.o
+ obj-$(CONFIG_NET_SCH_DRR)	+= sch_drr.o
+ obj-$(CONFIG_NET_SCH_PLUG)	+= sch_plug.o
+diff --git a/net/sched/sch_atm.c b/net/sched/sch_atm.c
+deleted file mode 100644
+index 95967ce1f370a..0000000000000
+--- a/net/sched/sch_atm.c
++++ /dev/null
+@@ -1,709 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/* net/sched/sch_atm.c - ATM VC selection "queueing discipline" */
+-
+-/* Written 1998-2000 by Werner Almesberger, EPFL ICA */
+-
+-#include <linux/module.h>
+-#include <linux/slab.h>
+-#include <linux/init.h>
+-#include <linux/interrupt.h>
+-#include <linux/string.h>
+-#include <linux/errno.h>
+-#include <linux/skbuff.h>
+-#include <linux/atmdev.h>
+-#include <linux/atmclip.h>
+-#include <linux/rtnetlink.h>
+-#include <linux/file.h>		/* for fput */
+-#include <net/netlink.h>
+-#include <net/pkt_sched.h>
+-#include <net/pkt_cls.h>
+-
+-/*
+- * The ATM queuing discipline provides a framework for invoking classifiers
+- * (aka "filters"), which in turn select classes of this queuing discipline.
+- * Each class maps the flow(s) it is handling to a given VC. Multiple classes
+- * may share the same VC.
+- *
+- * When creating a class, VCs are specified by passing the number of the open
+- * socket descriptor by which the calling process references the VC. The kernel
+- * keeps the VC open at least until all classes using it are removed.
+- *
+- * In this file, most functions are named atm_tc_* to avoid confusion with all
+- * the atm_* in net/atm. This naming convention differs from what's used in the
+- * rest of net/sched.
+- *
+- * Known bugs:
+- *  - sometimes messes up the IP stack
+- *  - any manipulations besides the few operations described in the README, are
+- *    untested and likely to crash the system
+- *  - should lock the flow while there is data in the queue (?)
+- */
+-
+-#define VCC2FLOW(vcc) ((struct atm_flow_data *) ((vcc)->user_back))
+-
+-struct atm_flow_data {
+-	struct Qdisc_class_common common;
+-	struct Qdisc		*q;	/* FIFO, TBF, etc. */
+-	struct tcf_proto __rcu	*filter_list;
+-	struct tcf_block	*block;
+-	struct atm_vcc		*vcc;	/* VCC; NULL if VCC is closed */
+-	void			(*old_pop)(struct atm_vcc *vcc,
+-					   struct sk_buff *skb); /* chaining */
+-	struct atm_qdisc_data	*parent;	/* parent qdisc */
+-	struct socket		*sock;		/* for closing */
+-	int			ref;		/* reference count */
+-	struct gnet_stats_basic_packed	bstats;
+-	struct gnet_stats_queue	qstats;
+-	struct list_head	list;
+-	struct atm_flow_data	*excess;	/* flow for excess traffic;
+-						   NULL to set CLP instead */
+-	int			hdr_len;
+-	unsigned char		hdr[];		/* header data; MUST BE LAST */
+-};
+-
+-struct atm_qdisc_data {
+-	struct atm_flow_data	link;		/* unclassified skbs go here */
+-	struct list_head	flows;		/* NB: "link" is also on this
+-						   list */
+-	struct tasklet_struct	task;		/* dequeue tasklet */
+-};
+-
+-/* ------------------------- Class/flow operations ------------------------- */
+-
+-static inline struct atm_flow_data *lookup_flow(struct Qdisc *sch, u32 classid)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct atm_flow_data *flow;
+-
+-	list_for_each_entry(flow, &p->flows, list) {
+-		if (flow->common.classid == classid)
+-			return flow;
+-	}
+-	return NULL;
+-}
+-
+-static int atm_tc_graft(struct Qdisc *sch, unsigned long arg,
+-			struct Qdisc *new, struct Qdisc **old,
+-			struct netlink_ext_ack *extack)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct atm_flow_data *flow = (struct atm_flow_data *)arg;
+-
+-	pr_debug("atm_tc_graft(sch %p,[qdisc %p],flow %p,new %p,old %p)\n",
+-		sch, p, flow, new, old);
+-	if (list_empty(&flow->list))
+-		return -EINVAL;
+-	if (!new)
+-		new = &noop_qdisc;
+-	*old = flow->q;
+-	flow->q = new;
+-	if (*old)
+-		qdisc_reset(*old);
+-	return 0;
+-}
+-
+-static struct Qdisc *atm_tc_leaf(struct Qdisc *sch, unsigned long cl)
+-{
+-	struct atm_flow_data *flow = (struct atm_flow_data *)cl;
+-
+-	pr_debug("atm_tc_leaf(sch %p,flow %p)\n", sch, flow);
+-	return flow ? flow->q : NULL;
+-}
+-
+-static unsigned long atm_tc_find(struct Qdisc *sch, u32 classid)
+-{
+-	struct atm_qdisc_data *p __maybe_unused = qdisc_priv(sch);
+-	struct atm_flow_data *flow;
+-
+-	pr_debug("%s(sch %p,[qdisc %p],classid %x)\n", __func__, sch, p, classid);
+-	flow = lookup_flow(sch, classid);
+-	pr_debug("%s: flow %p\n", __func__, flow);
+-	return (unsigned long)flow;
+-}
+-
+-static unsigned long atm_tc_bind_filter(struct Qdisc *sch,
+-					unsigned long parent, u32 classid)
+-{
+-	struct atm_qdisc_data *p __maybe_unused = qdisc_priv(sch);
+-	struct atm_flow_data *flow;
+-
+-	pr_debug("%s(sch %p,[qdisc %p],classid %x)\n", __func__, sch, p, classid);
+-	flow = lookup_flow(sch, classid);
+-	if (flow)
+-		flow->ref++;
+-	pr_debug("%s: flow %p\n", __func__, flow);
+-	return (unsigned long)flow;
+-}
+-
+-/*
+- * atm_tc_put handles all destructions, including the ones that are explicitly
+- * requested (atm_tc_destroy, etc.). The assumption here is that we never drop
+- * anything that still seems to be in use.
+- */
+-static void atm_tc_put(struct Qdisc *sch, unsigned long cl)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct atm_flow_data *flow = (struct atm_flow_data *)cl;
+-
+-	pr_debug("atm_tc_put(sch %p,[qdisc %p],flow %p)\n", sch, p, flow);
+-	if (--flow->ref)
+-		return;
+-	pr_debug("atm_tc_put: destroying\n");
+-	list_del_init(&flow->list);
+-	pr_debug("atm_tc_put: qdisc %p\n", flow->q);
+-	qdisc_put(flow->q);
+-	tcf_block_put(flow->block);
+-	if (flow->sock) {
+-		pr_debug("atm_tc_put: f_count %ld\n",
+-			file_count(flow->sock->file));
+-		flow->vcc->pop = flow->old_pop;
+-		sockfd_put(flow->sock);
+-	}
+-	if (flow->excess)
+-		atm_tc_put(sch, (unsigned long)flow->excess);
+-	if (flow != &p->link)
+-		kfree(flow);
+-	/*
+-	 * If flow == &p->link, the qdisc no longer works at this point and
+-	 * needs to be removed. (By the caller of atm_tc_put.)
+-	 */
+-}
+-
+-static void sch_atm_pop(struct atm_vcc *vcc, struct sk_buff *skb)
+-{
+-	struct atm_qdisc_data *p = VCC2FLOW(vcc)->parent;
+-
+-	pr_debug("sch_atm_pop(vcc %p,skb %p,[qdisc %p])\n", vcc, skb, p);
+-	VCC2FLOW(vcc)->old_pop(vcc, skb);
+-	tasklet_schedule(&p->task);
+-}
+-
+-static const u8 llc_oui_ip[] = {
+-	0xaa,			/* DSAP: non-ISO */
+-	0xaa,			/* SSAP: non-ISO */
+-	0x03,			/* Ctrl: Unnumbered Information Command PDU */
+-	0x00,			/* OUI: EtherType */
+-	0x00, 0x00,
+-	0x08, 0x00
+-};				/* Ethertype IP (0800) */
+-
+-static const struct nla_policy atm_policy[TCA_ATM_MAX + 1] = {
+-	[TCA_ATM_FD]		= { .type = NLA_U32 },
+-	[TCA_ATM_EXCESS]	= { .type = NLA_U32 },
+-};
+-
+-static int atm_tc_change(struct Qdisc *sch, u32 classid, u32 parent,
+-			 struct nlattr **tca, unsigned long *arg,
+-			 struct netlink_ext_ack *extack)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct atm_flow_data *flow = (struct atm_flow_data *)*arg;
+-	struct atm_flow_data *excess = NULL;
+-	struct nlattr *opt = tca[TCA_OPTIONS];
+-	struct nlattr *tb[TCA_ATM_MAX + 1];
+-	struct socket *sock;
+-	int fd, error, hdr_len;
+-	void *hdr;
+-
+-	pr_debug("atm_tc_change(sch %p,[qdisc %p],classid %x,parent %x,"
+-		"flow %p,opt %p)\n", sch, p, classid, parent, flow, opt);
+-	/*
+-	 * The concept of parents doesn't apply for this qdisc.
+-	 */
+-	if (parent && parent != TC_H_ROOT && parent != sch->handle)
+-		return -EINVAL;
+-	/*
+-	 * ATM classes cannot be changed. In order to change properties of the
+-	 * ATM connection, that socket needs to be modified directly (via the
+-	 * native ATM API. In order to send a flow to a different VC, the old
+-	 * class needs to be removed and a new one added. (This may be changed
+-	 * later.)
+-	 */
+-	if (flow)
+-		return -EBUSY;
+-	if (opt == NULL)
+-		return -EINVAL;
+-
+-	error = nla_parse_nested_deprecated(tb, TCA_ATM_MAX, opt, atm_policy,
+-					    NULL);
+-	if (error < 0)
+-		return error;
+-
+-	if (!tb[TCA_ATM_FD])
+-		return -EINVAL;
+-	fd = nla_get_u32(tb[TCA_ATM_FD]);
+-	pr_debug("atm_tc_change: fd %d\n", fd);
+-	if (tb[TCA_ATM_HDR]) {
+-		hdr_len = nla_len(tb[TCA_ATM_HDR]);
+-		hdr = nla_data(tb[TCA_ATM_HDR]);
+-	} else {
+-		hdr_len = RFC1483LLC_LEN;
+-		hdr = NULL;	/* default LLC/SNAP for IP */
+-	}
+-	if (!tb[TCA_ATM_EXCESS])
+-		excess = NULL;
+-	else {
+-		excess = (struct atm_flow_data *)
+-			atm_tc_find(sch, nla_get_u32(tb[TCA_ATM_EXCESS]));
+-		if (!excess)
+-			return -ENOENT;
+-	}
+-	pr_debug("atm_tc_change: type %d, payload %d, hdr_len %d\n",
+-		 opt->nla_type, nla_len(opt), hdr_len);
+-	sock = sockfd_lookup(fd, &error);
+-	if (!sock)
+-		return error;	/* f_count++ */
+-	pr_debug("atm_tc_change: f_count %ld\n", file_count(sock->file));
+-	if (sock->ops->family != PF_ATMSVC && sock->ops->family != PF_ATMPVC) {
+-		error = -EPROTOTYPE;
+-		goto err_out;
+-	}
+-	/* @@@ should check if the socket is really operational or we'll crash
+-	   on vcc->send */
+-	if (classid) {
+-		if (TC_H_MAJ(classid ^ sch->handle)) {
+-			pr_debug("atm_tc_change: classid mismatch\n");
+-			error = -EINVAL;
+-			goto err_out;
+-		}
+-	} else {
+-		int i;
+-		unsigned long cl;
+-
+-		for (i = 1; i < 0x8000; i++) {
+-			classid = TC_H_MAKE(sch->handle, 0x8000 | i);
+-			cl = atm_tc_find(sch, classid);
+-			if (!cl)
+-				break;
+-		}
+-	}
+-	pr_debug("atm_tc_change: new id %x\n", classid);
+-	flow = kzalloc(sizeof(struct atm_flow_data) + hdr_len, GFP_KERNEL);
+-	pr_debug("atm_tc_change: flow %p\n", flow);
+-	if (!flow) {
+-		error = -ENOBUFS;
+-		goto err_out;
+-	}
+-
+-	error = tcf_block_get(&flow->block, &flow->filter_list, sch,
+-			      extack);
+-	if (error) {
+-		kfree(flow);
+-		goto err_out;
+-	}
+-
+-	flow->q = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops, classid,
+-				    extack);
+-	if (!flow->q)
+-		flow->q = &noop_qdisc;
+-	pr_debug("atm_tc_change: qdisc %p\n", flow->q);
+-	flow->sock = sock;
+-	flow->vcc = ATM_SD(sock);	/* speedup */
+-	flow->vcc->user_back = flow;
+-	pr_debug("atm_tc_change: vcc %p\n", flow->vcc);
+-	flow->old_pop = flow->vcc->pop;
+-	flow->parent = p;
+-	flow->vcc->pop = sch_atm_pop;
+-	flow->common.classid = classid;
+-	flow->ref = 1;
+-	flow->excess = excess;
+-	list_add(&flow->list, &p->link.list);
+-	flow->hdr_len = hdr_len;
+-	if (hdr)
+-		memcpy(flow->hdr, hdr, hdr_len);
+-	else
+-		memcpy(flow->hdr, llc_oui_ip, sizeof(llc_oui_ip));
+-	*arg = (unsigned long)flow;
+-	return 0;
+-err_out:
+-	sockfd_put(sock);
+-	return error;
+-}
+-
+-static int atm_tc_delete(struct Qdisc *sch, unsigned long arg)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct atm_flow_data *flow = (struct atm_flow_data *)arg;
+-
+-	pr_debug("atm_tc_delete(sch %p,[qdisc %p],flow %p)\n", sch, p, flow);
+-	if (list_empty(&flow->list))
+-		return -EINVAL;
+-	if (rcu_access_pointer(flow->filter_list) || flow == &p->link)
+-		return -EBUSY;
+-	/*
+-	 * Reference count must be 2: one for "keepalive" (set at class
+-	 * creation), and one for the reference held when calling delete.
+-	 */
+-	if (flow->ref < 2) {
+-		pr_err("atm_tc_delete: flow->ref == %d\n", flow->ref);
+-		return -EINVAL;
+-	}
+-	if (flow->ref > 2)
+-		return -EBUSY;	/* catch references via excess, etc. */
+-	atm_tc_put(sch, arg);
+-	return 0;
+-}
+-
+-static void atm_tc_walk(struct Qdisc *sch, struct qdisc_walker *walker)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct atm_flow_data *flow;
+-
+-	pr_debug("atm_tc_walk(sch %p,[qdisc %p],walker %p)\n", sch, p, walker);
+-	if (walker->stop)
+-		return;
+-	list_for_each_entry(flow, &p->flows, list) {
+-		if (walker->count >= walker->skip &&
+-		    walker->fn(sch, (unsigned long)flow, walker) < 0) {
+-			walker->stop = 1;
+-			break;
+-		}
+-		walker->count++;
+-	}
+-}
+-
+-static struct tcf_block *atm_tc_tcf_block(struct Qdisc *sch, unsigned long cl,
+-					  struct netlink_ext_ack *extack)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct atm_flow_data *flow = (struct atm_flow_data *)cl;
+-
+-	pr_debug("atm_tc_find_tcf(sch %p,[qdisc %p],flow %p)\n", sch, p, flow);
+-	return flow ? flow->block : p->link.block;
+-}
+-
+-/* --------------------------- Qdisc operations ---------------------------- */
+-
+-static int atm_tc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+-			  struct sk_buff **to_free)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct atm_flow_data *flow;
+-	struct tcf_result res;
+-	int result;
+-	int ret = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
+-
+-	pr_debug("atm_tc_enqueue(skb %p,sch %p,[qdisc %p])\n", skb, sch, p);
+-	result = TC_ACT_OK;	/* be nice to gcc */
+-	flow = NULL;
+-	if (TC_H_MAJ(skb->priority) != sch->handle ||
+-	    !(flow = (struct atm_flow_data *)atm_tc_find(sch, skb->priority))) {
+-		struct tcf_proto *fl;
+-
+-		list_for_each_entry(flow, &p->flows, list) {
+-			fl = rcu_dereference_bh(flow->filter_list);
+-			if (fl) {
+-				result = tcf_classify(skb, fl, &res, true);
+-				if (result < 0)
+-					continue;
+-				if (result == TC_ACT_SHOT)
+-					goto done;
+-
+-				flow = (struct atm_flow_data *)res.class;
+-				if (!flow)
+-					flow = lookup_flow(sch, res.classid);
+-				goto drop;
+-			}
+-		}
+-		flow = NULL;
+-done:
+-		;
+-	}
+-	if (!flow) {
+-		flow = &p->link;
+-	} else {
+-		if (flow->vcc)
+-			ATM_SKB(skb)->atm_options = flow->vcc->atm_options;
+-		/*@@@ looks good ... but it's not supposed to work :-) */
+-#ifdef CONFIG_NET_CLS_ACT
+-		switch (result) {
+-		case TC_ACT_QUEUED:
+-		case TC_ACT_STOLEN:
+-		case TC_ACT_TRAP:
+-			__qdisc_drop(skb, to_free);
+-			return NET_XMIT_SUCCESS | __NET_XMIT_STOLEN;
+-		case TC_ACT_SHOT:
+-			__qdisc_drop(skb, to_free);
+-			goto drop;
+-		case TC_ACT_RECLASSIFY:
+-			if (flow->excess)
+-				flow = flow->excess;
+-			else
+-				ATM_SKB(skb)->atm_options |= ATM_ATMOPT_CLP;
+-			break;
+-		}
+-#endif
+-	}
+-
+-	ret = qdisc_enqueue(skb, flow->q, to_free);
+-	if (ret != NET_XMIT_SUCCESS) {
+-drop: __maybe_unused
+-		if (net_xmit_drop_count(ret)) {
+-			qdisc_qstats_drop(sch);
+-			if (flow)
+-				flow->qstats.drops++;
+-		}
+-		return ret;
+-	}
+-	/*
+-	 * Okay, this may seem weird. We pretend we've dropped the packet if
+-	 * it goes via ATM. The reason for this is that the outer qdisc
+-	 * expects to be able to q->dequeue the packet later on if we return
+-	 * success at this place. Also, sch->q.qdisc needs to reflect whether
+-	 * there is a packet egligible for dequeuing or not. Note that the
+-	 * statistics of the outer qdisc are necessarily wrong because of all
+-	 * this. There's currently no correct solution for this.
+-	 */
+-	if (flow == &p->link) {
+-		sch->q.qlen++;
+-		return NET_XMIT_SUCCESS;
+-	}
+-	tasklet_schedule(&p->task);
+-	return NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
+-}
+-
+-/*
+- * Dequeue packets and send them over ATM. Note that we quite deliberately
+- * avoid checking net_device's flow control here, simply because sch_atm
+- * uses its own channels, which have nothing to do with any CLIP/LANE/or
+- * non-ATM interfaces.
+- */
+-
+-static void sch_atm_dequeue(unsigned long data)
+-{
+-	struct Qdisc *sch = (struct Qdisc *)data;
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct atm_flow_data *flow;
+-	struct sk_buff *skb;
+-
+-	pr_debug("sch_atm_dequeue(sch %p,[qdisc %p])\n", sch, p);
+-	list_for_each_entry(flow, &p->flows, list) {
+-		if (flow == &p->link)
+-			continue;
+-		/*
+-		 * If traffic is properly shaped, this won't generate nasty
+-		 * little bursts. Otherwise, it may ... (but that's okay)
+-		 */
+-		while ((skb = flow->q->ops->peek(flow->q))) {
+-			if (!atm_may_send(flow->vcc, skb->truesize))
+-				break;
+-
+-			skb = qdisc_dequeue_peeked(flow->q);
+-			if (unlikely(!skb))
+-				break;
+-
+-			qdisc_bstats_update(sch, skb);
+-			bstats_update(&flow->bstats, skb);
+-			pr_debug("atm_tc_dequeue: sending on class %p\n", flow);
+-			/* remove any LL header somebody else has attached */
+-			skb_pull(skb, skb_network_offset(skb));
+-			if (skb_headroom(skb) < flow->hdr_len) {
+-				struct sk_buff *new;
+-
+-				new = skb_realloc_headroom(skb, flow->hdr_len);
+-				dev_kfree_skb(skb);
+-				if (!new)
+-					continue;
+-				skb = new;
+-			}
+-			pr_debug("sch_atm_dequeue: ip %p, data %p\n",
+-				 skb_network_header(skb), skb->data);
+-			ATM_SKB(skb)->vcc = flow->vcc;
+-			memcpy(skb_push(skb, flow->hdr_len), flow->hdr,
+-			       flow->hdr_len);
+-			refcount_add(skb->truesize,
+-				   &sk_atm(flow->vcc)->sk_wmem_alloc);
+-			/* atm.atm_options are already set by atm_tc_enqueue */
+-			flow->vcc->send(flow->vcc, skb);
+-		}
+-	}
+-}
+-
+-static struct sk_buff *atm_tc_dequeue(struct Qdisc *sch)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct sk_buff *skb;
+-
+-	pr_debug("atm_tc_dequeue(sch %p,[qdisc %p])\n", sch, p);
+-	tasklet_schedule(&p->task);
+-	skb = qdisc_dequeue_peeked(p->link.q);
+-	if (skb)
+-		sch->q.qlen--;
+-	return skb;
+-}
+-
+-static struct sk_buff *atm_tc_peek(struct Qdisc *sch)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-
+-	pr_debug("atm_tc_peek(sch %p,[qdisc %p])\n", sch, p);
+-
+-	return p->link.q->ops->peek(p->link.q);
+-}
+-
+-static int atm_tc_init(struct Qdisc *sch, struct nlattr *opt,
+-		       struct netlink_ext_ack *extack)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	int err;
+-
+-	pr_debug("atm_tc_init(sch %p,[qdisc %p],opt %p)\n", sch, p, opt);
+-	INIT_LIST_HEAD(&p->flows);
+-	INIT_LIST_HEAD(&p->link.list);
+-	list_add(&p->link.list, &p->flows);
+-	p->link.q = qdisc_create_dflt(sch->dev_queue,
+-				      &pfifo_qdisc_ops, sch->handle, extack);
+-	if (!p->link.q)
+-		p->link.q = &noop_qdisc;
+-	pr_debug("atm_tc_init: link (%p) qdisc %p\n", &p->link, p->link.q);
+-	p->link.vcc = NULL;
+-	p->link.sock = NULL;
+-	p->link.common.classid = sch->handle;
+-	p->link.ref = 1;
+-
+-	err = tcf_block_get(&p->link.block, &p->link.filter_list, sch,
+-			    extack);
+-	if (err)
+-		return err;
+-
+-	tasklet_init(&p->task, sch_atm_dequeue, (unsigned long)sch);
+-	return 0;
+-}
+-
+-static void atm_tc_reset(struct Qdisc *sch)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct atm_flow_data *flow;
+-
+-	pr_debug("atm_tc_reset(sch %p,[qdisc %p])\n", sch, p);
+-	list_for_each_entry(flow, &p->flows, list)
+-		qdisc_reset(flow->q);
+-}
+-
+-static void atm_tc_destroy(struct Qdisc *sch)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct atm_flow_data *flow, *tmp;
+-
+-	pr_debug("atm_tc_destroy(sch %p,[qdisc %p])\n", sch, p);
+-	list_for_each_entry(flow, &p->flows, list) {
+-		tcf_block_put(flow->block);
+-		flow->block = NULL;
+-	}
+-
+-	list_for_each_entry_safe(flow, tmp, &p->flows, list) {
+-		if (flow->ref > 1)
+-			pr_err("atm_destroy: %p->ref = %d\n", flow, flow->ref);
+-		atm_tc_put(sch, (unsigned long)flow);
+-	}
+-	tasklet_kill(&p->task);
+-}
+-
+-static int atm_tc_dump_class(struct Qdisc *sch, unsigned long cl,
+-			     struct sk_buff *skb, struct tcmsg *tcm)
+-{
+-	struct atm_qdisc_data *p = qdisc_priv(sch);
+-	struct atm_flow_data *flow = (struct atm_flow_data *)cl;
+-	struct nlattr *nest;
+-
+-	pr_debug("atm_tc_dump_class(sch %p,[qdisc %p],flow %p,skb %p,tcm %p)\n",
+-		sch, p, flow, skb, tcm);
+-	if (list_empty(&flow->list))
+-		return -EINVAL;
+-	tcm->tcm_handle = flow->common.classid;
+-	tcm->tcm_info = flow->q->handle;
+-
+-	nest = nla_nest_start_noflag(skb, TCA_OPTIONS);
+-	if (nest == NULL)
+-		goto nla_put_failure;
+-
+-	if (nla_put(skb, TCA_ATM_HDR, flow->hdr_len, flow->hdr))
+-		goto nla_put_failure;
+-	if (flow->vcc) {
+-		struct sockaddr_atmpvc pvc;
+-		int state;
+-
+-		memset(&pvc, 0, sizeof(pvc));
+-		pvc.sap_family = AF_ATMPVC;
+-		pvc.sap_addr.itf = flow->vcc->dev ? flow->vcc->dev->number : -1;
+-		pvc.sap_addr.vpi = flow->vcc->vpi;
+-		pvc.sap_addr.vci = flow->vcc->vci;
+-		if (nla_put(skb, TCA_ATM_ADDR, sizeof(pvc), &pvc))
+-			goto nla_put_failure;
+-		state = ATM_VF2VS(flow->vcc->flags);
+-		if (nla_put_u32(skb, TCA_ATM_STATE, state))
+-			goto nla_put_failure;
+-	}
+-	if (flow->excess) {
+-		if (nla_put_u32(skb, TCA_ATM_EXCESS, flow->common.classid))
+-			goto nla_put_failure;
+-	} else {
+-		if (nla_put_u32(skb, TCA_ATM_EXCESS, 0))
+-			goto nla_put_failure;
+-	}
+-	return nla_nest_end(skb, nest);
+-
+-nla_put_failure:
+-	nla_nest_cancel(skb, nest);
+-	return -1;
+-}
+-static int
+-atm_tc_dump_class_stats(struct Qdisc *sch, unsigned long arg,
+-			struct gnet_dump *d)
+-{
+-	struct atm_flow_data *flow = (struct atm_flow_data *)arg;
+-
+-	if (gnet_stats_copy_basic(qdisc_root_sleeping_running(sch),
+-				  d, NULL, &flow->bstats) < 0 ||
+-	    gnet_stats_copy_queue(d, NULL, &flow->qstats, flow->q->q.qlen) < 0)
+-		return -1;
+-
+-	return 0;
+-}
+-
+-static int atm_tc_dump(struct Qdisc *sch, struct sk_buff *skb)
+-{
+-	return 0;
+-}
+-
+-static const struct Qdisc_class_ops atm_class_ops = {
+-	.graft		= atm_tc_graft,
+-	.leaf		= atm_tc_leaf,
+-	.find		= atm_tc_find,
+-	.change		= atm_tc_change,
+-	.delete		= atm_tc_delete,
+-	.walk		= atm_tc_walk,
+-	.tcf_block	= atm_tc_tcf_block,
+-	.bind_tcf	= atm_tc_bind_filter,
+-	.unbind_tcf	= atm_tc_put,
+-	.dump		= atm_tc_dump_class,
+-	.dump_stats	= atm_tc_dump_class_stats,
+-};
+-
+-static struct Qdisc_ops atm_qdisc_ops __read_mostly = {
+-	.cl_ops		= &atm_class_ops,
+-	.id		= "atm",
+-	.priv_size	= sizeof(struct atm_qdisc_data),
+-	.enqueue	= atm_tc_enqueue,
+-	.dequeue	= atm_tc_dequeue,
+-	.peek		= atm_tc_peek,
+-	.init		= atm_tc_init,
+-	.reset		= atm_tc_reset,
+-	.destroy	= atm_tc_destroy,
+-	.dump		= atm_tc_dump,
+-	.owner		= THIS_MODULE,
+-};
+-
+-static int __init atm_init(void)
+-{
+-	return register_qdisc(&atm_qdisc_ops);
+-}
+-
+-static void __exit atm_exit(void)
+-{
+-	unregister_qdisc(&atm_qdisc_ops);
+-}
+-
+-module_init(atm_init)
+-module_exit(atm_exit)
+-MODULE_LICENSE("GPL");
+diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
+deleted file mode 100644
+index 3da5eb313c246..0000000000000
+--- a/net/sched/sch_cbq.c
++++ /dev/null
+@@ -1,1816 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-or-later
+-/*
+- * net/sched/sch_cbq.c	Class-Based Queueing discipline.
+- *
+- * Authors:	Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru>
+- */
+-
+-#include <linux/module.h>
+-#include <linux/slab.h>
+-#include <linux/types.h>
+-#include <linux/kernel.h>
+-#include <linux/string.h>
+-#include <linux/errno.h>
+-#include <linux/skbuff.h>
+-#include <net/netlink.h>
+-#include <net/pkt_sched.h>
+-#include <net/pkt_cls.h>
+-
+-
+-/*	Class-Based Queueing (CBQ) algorithm.
+-	=======================================
+-
+-	Sources: [1] Sally Floyd and Van Jacobson, "Link-sharing and Resource
+-		 Management Models for Packet Networks",
+-		 IEEE/ACM Transactions on Networking, Vol.3, No.4, 1995
+-
+-		 [2] Sally Floyd, "Notes on CBQ and Guaranteed Service", 1995
+-
+-		 [3] Sally Floyd, "Notes on Class-Based Queueing: Setting
+-		 Parameters", 1996
+-
+-		 [4] Sally Floyd and Michael Speer, "Experimental Results
+-		 for Class-Based Queueing", 1998, not published.
+-
+-	-----------------------------------------------------------------------
+-
+-	Algorithm skeleton was taken from NS simulator cbq.cc.
+-	If someone wants to check this code against the LBL version,
+-	he should take into account that ONLY the skeleton was borrowed,
+-	the implementation is different. Particularly:
+-
+-	--- The WRR algorithm is different. Our version looks more
+-	reasonable (I hope) and works when quanta are allowed to be
+-	less than MTU, which is always the case when real time classes
+-	have small rates. Note, that the statement of [3] is
+-	incomplete, delay may actually be estimated even if class
+-	per-round allotment is less than MTU. Namely, if per-round
+-	allotment is W*r_i, and r_1+...+r_k = r < 1
+-
+-	delay_i <= ([MTU/(W*r_i)]*W*r + W*r + k*MTU)/B
+-
+-	In the worst case we have IntServ estimate with D = W*r+k*MTU
+-	and C = MTU*r. The proof (if correct at all) is trivial.
+-
+-
+-	--- It seems that cbq-2.0 is not very accurate. At least, I cannot
+-	interpret some places, which look like wrong translations
+-	from NS. Anyone is advised to find these differences
+-	and explain to me, why I am wrong 8).
+-
+-	--- Linux has no EOI event, so that we cannot estimate true class
+-	idle time. Workaround is to consider the next dequeue event
+-	as sign that previous packet is finished. This is wrong because of
+-	internal device queueing, but on a permanently loaded link it is true.
+-	Moreover, combined with clock integrator, this scheme looks
+-	very close to an ideal solution.  */
+-
+-struct cbq_sched_data;
+-
+-
+-struct cbq_class {
+-	struct Qdisc_class_common common;
+-	struct cbq_class	*next_alive;	/* next class with backlog in this priority band */
+-
+-/* Parameters */
+-	unsigned char		priority;	/* class priority */
+-	unsigned char		priority2;	/* priority to be used after overlimit */
+-	unsigned char		ewma_log;	/* time constant for idle time calculation */
+-
+-	u32			defmap;
+-
+-	/* Link-sharing scheduler parameters */
+-	long			maxidle;	/* Class parameters: see below. */
+-	long			offtime;
+-	long			minidle;
+-	u32			avpkt;
+-	struct qdisc_rate_table	*R_tab;
+-
+-	/* General scheduler (WRR) parameters */
+-	long			allot;
+-	long			quantum;	/* Allotment per WRR round */
+-	long			weight;		/* Relative allotment: see below */
+-
+-	struct Qdisc		*qdisc;		/* Ptr to CBQ discipline */
+-	struct cbq_class	*split;		/* Ptr to split node */
+-	struct cbq_class	*share;		/* Ptr to LS parent in the class tree */
+-	struct cbq_class	*tparent;	/* Ptr to tree parent in the class tree */
+-	struct cbq_class	*borrow;	/* NULL if class is bandwidth limited;
+-						   parent otherwise */
+-	struct cbq_class	*sibling;	/* Sibling chain */
+-	struct cbq_class	*children;	/* Pointer to children chain */
+-
+-	struct Qdisc		*q;		/* Elementary queueing discipline */
+-
+-
+-/* Variables */
+-	unsigned char		cpriority;	/* Effective priority */
+-	unsigned char		delayed;
+-	unsigned char		level;		/* level of the class in hierarchy:
+-						   0 for leaf classes, and maximal
+-						   level of children + 1 for nodes.
+-						 */
+-
+-	psched_time_t		last;		/* Last end of service */
+-	psched_time_t		undertime;
+-	long			avgidle;
+-	long			deficit;	/* Saved deficit for WRR */
+-	psched_time_t		penalized;
+-	struct gnet_stats_basic_packed bstats;
+-	struct gnet_stats_queue qstats;
+-	struct net_rate_estimator __rcu *rate_est;
+-	struct tc_cbq_xstats	xstats;
+-
+-	struct tcf_proto __rcu	*filter_list;
+-	struct tcf_block	*block;
+-
+-	int			filters;
+-
+-	struct cbq_class	*defaults[TC_PRIO_MAX + 1];
+-};
+-
+-struct cbq_sched_data {
+-	struct Qdisc_class_hash	clhash;			/* Hash table of all classes */
+-	int			nclasses[TC_CBQ_MAXPRIO + 1];
+-	unsigned int		quanta[TC_CBQ_MAXPRIO + 1];
+-
+-	struct cbq_class	link;
+-
+-	unsigned int		activemask;
+-	struct cbq_class	*active[TC_CBQ_MAXPRIO + 1];	/* List of all classes
+-								   with backlog */
+-
+-#ifdef CONFIG_NET_CLS_ACT
+-	struct cbq_class	*rx_class;
+-#endif
+-	struct cbq_class	*tx_class;
+-	struct cbq_class	*tx_borrowed;
+-	int			tx_len;
+-	psched_time_t		now;		/* Cached timestamp */
+-	unsigned int		pmask;
+-
+-	struct hrtimer		delay_timer;
+-	struct qdisc_watchdog	watchdog;	/* Watchdog timer,
+-						   started when CBQ has
+-						   backlog, but cannot
+-						   transmit just now */
+-	psched_tdiff_t		wd_expires;
+-	int			toplevel;
+-	u32			hgenerator;
+-};
+-
+-
+-#define L2T(cl, len)	qdisc_l2t((cl)->R_tab, len)
+-
+-static inline struct cbq_class *
+-cbq_class_lookup(struct cbq_sched_data *q, u32 classid)
+-{
+-	struct Qdisc_class_common *clc;
+-
+-	clc = qdisc_class_find(&q->clhash, classid);
+-	if (clc == NULL)
+-		return NULL;
+-	return container_of(clc, struct cbq_class, common);
+-}
+-
+-#ifdef CONFIG_NET_CLS_ACT
+-
+-static struct cbq_class *
+-cbq_reclassify(struct sk_buff *skb, struct cbq_class *this)
+-{
+-	struct cbq_class *cl;
+-
+-	for (cl = this->tparent; cl; cl = cl->tparent) {
+-		struct cbq_class *new = cl->defaults[TC_PRIO_BESTEFFORT];
+-
+-		if (new != NULL && new != this)
+-			return new;
+-	}
+-	return NULL;
+-}
+-
+-#endif
+-
+-/* Classify packet. The procedure is pretty complicated, but
+- * it allows us to combine link sharing and priority scheduling
+- * transparently.
+- *
+- * Namely, you can put link sharing rules (f.e. route based) at root of CBQ,
+- * so that it resolves to split nodes. Then packets are classified
+- * by logical priority, or a more specific classifier may be attached
+- * to the split node.
+- */
+-
+-static struct cbq_class *
+-cbq_classify(struct sk_buff *skb, struct Qdisc *sch, int *qerr)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct cbq_class *head = &q->link;
+-	struct cbq_class **defmap;
+-	struct cbq_class *cl = NULL;
+-	u32 prio = skb->priority;
+-	struct tcf_proto *fl;
+-	struct tcf_result res;
+-
+-	/*
+-	 *  Step 1. If skb->priority points to one of our classes, use it.
+-	 */
+-	if (TC_H_MAJ(prio ^ sch->handle) == 0 &&
+-	    (cl = cbq_class_lookup(q, prio)) != NULL)
+-		return cl;
+-
+-	*qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
+-	for (;;) {
+-		int result = 0;
+-		defmap = head->defaults;
+-
+-		fl = rcu_dereference_bh(head->filter_list);
+-		/*
+-		 * Step 2+n. Apply classifier.
+-		 */
+-		result = tcf_classify(skb, fl, &res, true);
+-		if (!fl || result < 0)
+-			goto fallback;
+-		if (result == TC_ACT_SHOT)
+-			return NULL;
+-
+-		cl = (void *)res.class;
+-		if (!cl) {
+-			if (TC_H_MAJ(res.classid))
+-				cl = cbq_class_lookup(q, res.classid);
+-			else if ((cl = defmap[res.classid & TC_PRIO_MAX]) == NULL)
+-				cl = defmap[TC_PRIO_BESTEFFORT];
+-
+-			if (cl == NULL)
+-				goto fallback;
+-		}
+-		if (cl->level >= head->level)
+-			goto fallback;
+-#ifdef CONFIG_NET_CLS_ACT
+-		switch (result) {
+-		case TC_ACT_QUEUED:
+-		case TC_ACT_STOLEN:
+-		case TC_ACT_TRAP:
+-			*qerr = NET_XMIT_SUCCESS | __NET_XMIT_STOLEN;
+-			fallthrough;
+-		case TC_ACT_RECLASSIFY:
+-			return cbq_reclassify(skb, cl);
+-		}
+-#endif
+-		if (cl->level == 0)
+-			return cl;
+-
+-		/*
+-		 * Step 3+n. If classifier selected a link sharing class,
+-		 *	   apply agency specific classifier.
+-		 *	   Repeat this procdure until we hit a leaf node.
+-		 */
+-		head = cl;
+-	}
+-
+-fallback:
+-	cl = head;
+-
+-	/*
+-	 * Step 4. No success...
+-	 */
+-	if (TC_H_MAJ(prio) == 0 &&
+-	    !(cl = head->defaults[prio & TC_PRIO_MAX]) &&
+-	    !(cl = head->defaults[TC_PRIO_BESTEFFORT]))
+-		return head;
+-
+-	return cl;
+-}
+-
+-/*
+- * A packet has just been enqueued on the empty class.
+- * cbq_activate_class adds it to the tail of active class list
+- * of its priority band.
+- */
+-
+-static inline void cbq_activate_class(struct cbq_class *cl)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(cl->qdisc);
+-	int prio = cl->cpriority;
+-	struct cbq_class *cl_tail;
+-
+-	cl_tail = q->active[prio];
+-	q->active[prio] = cl;
+-
+-	if (cl_tail != NULL) {
+-		cl->next_alive = cl_tail->next_alive;
+-		cl_tail->next_alive = cl;
+-	} else {
+-		cl->next_alive = cl;
+-		q->activemask |= (1<<prio);
+-	}
+-}
+-
+-/*
+- * Unlink class from active chain.
+- * Note that this same procedure is done directly in cbq_dequeue*
+- * during round-robin procedure.
+- */
+-
+-static void cbq_deactivate_class(struct cbq_class *this)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(this->qdisc);
+-	int prio = this->cpriority;
+-	struct cbq_class *cl;
+-	struct cbq_class *cl_prev = q->active[prio];
+-
+-	do {
+-		cl = cl_prev->next_alive;
+-		if (cl == this) {
+-			cl_prev->next_alive = cl->next_alive;
+-			cl->next_alive = NULL;
+-
+-			if (cl == q->active[prio]) {
+-				q->active[prio] = cl_prev;
+-				if (cl == q->active[prio]) {
+-					q->active[prio] = NULL;
+-					q->activemask &= ~(1<<prio);
+-					return;
+-				}
+-			}
+-			return;
+-		}
+-	} while ((cl_prev = cl) != q->active[prio]);
+-}
+-
+-static void
+-cbq_mark_toplevel(struct cbq_sched_data *q, struct cbq_class *cl)
+-{
+-	int toplevel = q->toplevel;
+-
+-	if (toplevel > cl->level) {
+-		psched_time_t now = psched_get_time();
+-
+-		do {
+-			if (cl->undertime < now) {
+-				q->toplevel = cl->level;
+-				return;
+-			}
+-		} while ((cl = cl->borrow) != NULL && toplevel > cl->level);
+-	}
+-}
+-
+-static int
+-cbq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+-	    struct sk_buff **to_free)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	int ret;
+-	struct cbq_class *cl = cbq_classify(skb, sch, &ret);
+-
+-#ifdef CONFIG_NET_CLS_ACT
+-	q->rx_class = cl;
+-#endif
+-	if (cl == NULL) {
+-		if (ret & __NET_XMIT_BYPASS)
+-			qdisc_qstats_drop(sch);
+-		__qdisc_drop(skb, to_free);
+-		return ret;
+-	}
+-
+-	ret = qdisc_enqueue(skb, cl->q, to_free);
+-	if (ret == NET_XMIT_SUCCESS) {
+-		sch->q.qlen++;
+-		cbq_mark_toplevel(q, cl);
+-		if (!cl->next_alive)
+-			cbq_activate_class(cl);
+-		return ret;
+-	}
+-
+-	if (net_xmit_drop_count(ret)) {
+-		qdisc_qstats_drop(sch);
+-		cbq_mark_toplevel(q, cl);
+-		cl->qstats.drops++;
+-	}
+-	return ret;
+-}
+-
+-/* Overlimit action: penalize leaf class by adding offtime */
+-static void cbq_overlimit(struct cbq_class *cl)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(cl->qdisc);
+-	psched_tdiff_t delay = cl->undertime - q->now;
+-
+-	if (!cl->delayed) {
+-		delay += cl->offtime;
+-
+-		/*
+-		 * Class goes to sleep, so that it will have no
+-		 * chance to work avgidle. Let's forgive it 8)
+-		 *
+-		 * BTW cbq-2.0 has a crap in this
+-		 * place, apparently they forgot to shift it by cl->ewma_log.
+-		 */
+-		if (cl->avgidle < 0)
+-			delay -= (-cl->avgidle) - ((-cl->avgidle) >> cl->ewma_log);
+-		if (cl->avgidle < cl->minidle)
+-			cl->avgidle = cl->minidle;
+-		if (delay <= 0)
+-			delay = 1;
+-		cl->undertime = q->now + delay;
+-
+-		cl->xstats.overactions++;
+-		cl->delayed = 1;
+-	}
+-	if (q->wd_expires == 0 || q->wd_expires > delay)
+-		q->wd_expires = delay;
+-
+-	/* Dirty work! We must schedule wakeups based on
+-	 * real available rate, rather than leaf rate,
+-	 * which may be tiny (even zero).
+-	 */
+-	if (q->toplevel == TC_CBQ_MAXLEVEL) {
+-		struct cbq_class *b;
+-		psched_tdiff_t base_delay = q->wd_expires;
+-
+-		for (b = cl->borrow; b; b = b->borrow) {
+-			delay = b->undertime - q->now;
+-			if (delay < base_delay) {
+-				if (delay <= 0)
+-					delay = 1;
+-				base_delay = delay;
+-			}
+-		}
+-
+-		q->wd_expires = base_delay;
+-	}
+-}
+-
+-static psched_tdiff_t cbq_undelay_prio(struct cbq_sched_data *q, int prio,
+-				       psched_time_t now)
+-{
+-	struct cbq_class *cl;
+-	struct cbq_class *cl_prev = q->active[prio];
+-	psched_time_t sched = now;
+-
+-	if (cl_prev == NULL)
+-		return 0;
+-
+-	do {
+-		cl = cl_prev->next_alive;
+-		if (now - cl->penalized > 0) {
+-			cl_prev->next_alive = cl->next_alive;
+-			cl->next_alive = NULL;
+-			cl->cpriority = cl->priority;
+-			cl->delayed = 0;
+-			cbq_activate_class(cl);
+-
+-			if (cl == q->active[prio]) {
+-				q->active[prio] = cl_prev;
+-				if (cl == q->active[prio]) {
+-					q->active[prio] = NULL;
+-					return 0;
+-				}
+-			}
+-
+-			cl = cl_prev->next_alive;
+-		} else if (sched - cl->penalized > 0)
+-			sched = cl->penalized;
+-	} while ((cl_prev = cl) != q->active[prio]);
+-
+-	return sched - now;
+-}
+-
+-static enum hrtimer_restart cbq_undelay(struct hrtimer *timer)
+-{
+-	struct cbq_sched_data *q = container_of(timer, struct cbq_sched_data,
+-						delay_timer);
+-	struct Qdisc *sch = q->watchdog.qdisc;
+-	psched_time_t now;
+-	psched_tdiff_t delay = 0;
+-	unsigned int pmask;
+-
+-	now = psched_get_time();
+-
+-	pmask = q->pmask;
+-	q->pmask = 0;
+-
+-	while (pmask) {
+-		int prio = ffz(~pmask);
+-		psched_tdiff_t tmp;
+-
+-		pmask &= ~(1<<prio);
+-
+-		tmp = cbq_undelay_prio(q, prio, now);
+-		if (tmp > 0) {
+-			q->pmask |= 1<<prio;
+-			if (tmp < delay || delay == 0)
+-				delay = tmp;
+-		}
+-	}
+-
+-	if (delay) {
+-		ktime_t time;
+-
+-		time = 0;
+-		time = ktime_add_ns(time, PSCHED_TICKS2NS(now + delay));
+-		hrtimer_start(&q->delay_timer, time, HRTIMER_MODE_ABS_PINNED);
+-	}
+-
+-	__netif_schedule(qdisc_root(sch));
+-	return HRTIMER_NORESTART;
+-}
+-
+-/*
+- * It is mission critical procedure.
+- *
+- * We "regenerate" toplevel cutoff, if transmitting class
+- * has backlog and it is not regulated. It is not part of
+- * original CBQ description, but looks more reasonable.
+- * Probably, it is wrong. This question needs further investigation.
+- */
+-
+-static inline void
+-cbq_update_toplevel(struct cbq_sched_data *q, struct cbq_class *cl,
+-		    struct cbq_class *borrowed)
+-{
+-	if (cl && q->toplevel >= borrowed->level) {
+-		if (cl->q->q.qlen > 1) {
+-			do {
+-				if (borrowed->undertime == PSCHED_PASTPERFECT) {
+-					q->toplevel = borrowed->level;
+-					return;
+-				}
+-			} while ((borrowed = borrowed->borrow) != NULL);
+-		}
+-#if 0
+-	/* It is not necessary now. Uncommenting it
+-	   will save CPU cycles, but decrease fairness.
+-	 */
+-		q->toplevel = TC_CBQ_MAXLEVEL;
+-#endif
+-	}
+-}
+-
+-static void
+-cbq_update(struct cbq_sched_data *q)
+-{
+-	struct cbq_class *this = q->tx_class;
+-	struct cbq_class *cl = this;
+-	int len = q->tx_len;
+-	psched_time_t now;
+-
+-	q->tx_class = NULL;
+-	/* Time integrator. We calculate EOS time
+-	 * by adding expected packet transmission time.
+-	 */
+-	now = q->now + L2T(&q->link, len);
+-
+-	for ( ; cl; cl = cl->share) {
+-		long avgidle = cl->avgidle;
+-		long idle;
+-
+-		cl->bstats.packets++;
+-		cl->bstats.bytes += len;
+-
+-		/*
+-		 * (now - last) is total time between packet right edges.
+-		 * (last_pktlen/rate) is "virtual" busy time, so that
+-		 *
+-		 *	idle = (now - last) - last_pktlen/rate
+-		 */
+-
+-		idle = now - cl->last;
+-		if ((unsigned long)idle > 128*1024*1024) {
+-			avgidle = cl->maxidle;
+-		} else {
+-			idle -= L2T(cl, len);
+-
+-		/* true_avgidle := (1-W)*true_avgidle + W*idle,
+-		 * where W=2^{-ewma_log}. But cl->avgidle is scaled:
+-		 * cl->avgidle == true_avgidle/W,
+-		 * hence:
+-		 */
+-			avgidle += idle - (avgidle>>cl->ewma_log);
+-		}
+-
+-		if (avgidle <= 0) {
+-			/* Overlimit or at-limit */
+-
+-			if (avgidle < cl->minidle)
+-				avgidle = cl->minidle;
+-
+-			cl->avgidle = avgidle;
+-
+-			/* Calculate expected time, when this class
+-			 * will be allowed to send.
+-			 * It will occur, when:
+-			 * (1-W)*true_avgidle + W*delay = 0, i.e.
+-			 * idle = (1/W - 1)*(-true_avgidle)
+-			 * or
+-			 * idle = (1 - W)*(-cl->avgidle);
+-			 */
+-			idle = (-avgidle) - ((-avgidle) >> cl->ewma_log);
+-
+-			/*
+-			 * That is not all.
+-			 * To maintain the rate allocated to the class,
+-			 * we add to undertime virtual clock,
+-			 * necessary to complete transmitted packet.
+-			 * (len/phys_bandwidth has been already passed
+-			 * to the moment of cbq_update)
+-			 */
+-
+-			idle -= L2T(&q->link, len);
+-			idle += L2T(cl, len);
+-
+-			cl->undertime = now + idle;
+-		} else {
+-			/* Underlimit */
+-
+-			cl->undertime = PSCHED_PASTPERFECT;
+-			if (avgidle > cl->maxidle)
+-				cl->avgidle = cl->maxidle;
+-			else
+-				cl->avgidle = avgidle;
+-		}
+-		if ((s64)(now - cl->last) > 0)
+-			cl->last = now;
+-	}
+-
+-	cbq_update_toplevel(q, this, q->tx_borrowed);
+-}
+-
+-static inline struct cbq_class *
+-cbq_under_limit(struct cbq_class *cl)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(cl->qdisc);
+-	struct cbq_class *this_cl = cl;
+-
+-	if (cl->tparent == NULL)
+-		return cl;
+-
+-	if (cl->undertime == PSCHED_PASTPERFECT || q->now >= cl->undertime) {
+-		cl->delayed = 0;
+-		return cl;
+-	}
+-
+-	do {
+-		/* It is very suspicious place. Now overlimit
+-		 * action is generated for not bounded classes
+-		 * only if link is completely congested.
+-		 * Though it is in agree with ancestor-only paradigm,
+-		 * it looks very stupid. Particularly,
+-		 * it means that this chunk of code will either
+-		 * never be called or result in strong amplification
+-		 * of burstiness. Dangerous, silly, and, however,
+-		 * no another solution exists.
+-		 */
+-		cl = cl->borrow;
+-		if (!cl) {
+-			this_cl->qstats.overlimits++;
+-			cbq_overlimit(this_cl);
+-			return NULL;
+-		}
+-		if (cl->level > q->toplevel)
+-			return NULL;
+-	} while (cl->undertime != PSCHED_PASTPERFECT && q->now < cl->undertime);
+-
+-	cl->delayed = 0;
+-	return cl;
+-}
+-
+-static inline struct sk_buff *
+-cbq_dequeue_prio(struct Qdisc *sch, int prio)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct cbq_class *cl_tail, *cl_prev, *cl;
+-	struct sk_buff *skb;
+-	int deficit;
+-
+-	cl_tail = cl_prev = q->active[prio];
+-	cl = cl_prev->next_alive;
+-
+-	do {
+-		deficit = 0;
+-
+-		/* Start round */
+-		do {
+-			struct cbq_class *borrow = cl;
+-
+-			if (cl->q->q.qlen &&
+-			    (borrow = cbq_under_limit(cl)) == NULL)
+-				goto skip_class;
+-
+-			if (cl->deficit <= 0) {
+-				/* Class exhausted its allotment per
+-				 * this round. Switch to the next one.
+-				 */
+-				deficit = 1;
+-				cl->deficit += cl->quantum;
+-				goto next_class;
+-			}
+-
+-			skb = cl->q->dequeue(cl->q);
+-
+-			/* Class did not give us any skb :-(
+-			 * It could occur even if cl->q->q.qlen != 0
+-			 * f.e. if cl->q == "tbf"
+-			 */
+-			if (skb == NULL)
+-				goto skip_class;
+-
+-			cl->deficit -= qdisc_pkt_len(skb);
+-			q->tx_class = cl;
+-			q->tx_borrowed = borrow;
+-			if (borrow != cl) {
+-#ifndef CBQ_XSTATS_BORROWS_BYTES
+-				borrow->xstats.borrows++;
+-				cl->xstats.borrows++;
+-#else
+-				borrow->xstats.borrows += qdisc_pkt_len(skb);
+-				cl->xstats.borrows += qdisc_pkt_len(skb);
+-#endif
+-			}
+-			q->tx_len = qdisc_pkt_len(skb);
+-
+-			if (cl->deficit <= 0) {
+-				q->active[prio] = cl;
+-				cl = cl->next_alive;
+-				cl->deficit += cl->quantum;
+-			}
+-			return skb;
+-
+-skip_class:
+-			if (cl->q->q.qlen == 0 || prio != cl->cpriority) {
+-				/* Class is empty or penalized.
+-				 * Unlink it from active chain.
+-				 */
+-				cl_prev->next_alive = cl->next_alive;
+-				cl->next_alive = NULL;
+-
+-				/* Did cl_tail point to it? */
+-				if (cl == cl_tail) {
+-					/* Repair it! */
+-					cl_tail = cl_prev;
+-
+-					/* Was it the last class in this band? */
+-					if (cl == cl_tail) {
+-						/* Kill the band! */
+-						q->active[prio] = NULL;
+-						q->activemask &= ~(1<<prio);
+-						if (cl->q->q.qlen)
+-							cbq_activate_class(cl);
+-						return NULL;
+-					}
+-
+-					q->active[prio] = cl_tail;
+-				}
+-				if (cl->q->q.qlen)
+-					cbq_activate_class(cl);
+-
+-				cl = cl_prev;
+-			}
+-
+-next_class:
+-			cl_prev = cl;
+-			cl = cl->next_alive;
+-		} while (cl_prev != cl_tail);
+-	} while (deficit);
+-
+-	q->active[prio] = cl_prev;
+-
+-	return NULL;
+-}
+-
+-static inline struct sk_buff *
+-cbq_dequeue_1(struct Qdisc *sch)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct sk_buff *skb;
+-	unsigned int activemask;
+-
+-	activemask = q->activemask & 0xFF;
+-	while (activemask) {
+-		int prio = ffz(~activemask);
+-		activemask &= ~(1<<prio);
+-		skb = cbq_dequeue_prio(sch, prio);
+-		if (skb)
+-			return skb;
+-	}
+-	return NULL;
+-}
+-
+-static struct sk_buff *
+-cbq_dequeue(struct Qdisc *sch)
+-{
+-	struct sk_buff *skb;
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	psched_time_t now;
+-
+-	now = psched_get_time();
+-
+-	if (q->tx_class)
+-		cbq_update(q);
+-
+-	q->now = now;
+-
+-	for (;;) {
+-		q->wd_expires = 0;
+-
+-		skb = cbq_dequeue_1(sch);
+-		if (skb) {
+-			qdisc_bstats_update(sch, skb);
+-			sch->q.qlen--;
+-			return skb;
+-		}
+-
+-		/* All the classes are overlimit.
+-		 *
+-		 * It is possible, if:
+-		 *
+-		 * 1. Scheduler is empty.
+-		 * 2. Toplevel cutoff inhibited borrowing.
+-		 * 3. Root class is overlimit.
+-		 *
+-		 * Reset 2d and 3d conditions and retry.
+-		 *
+-		 * Note, that NS and cbq-2.0 are buggy, peeking
+-		 * an arbitrary class is appropriate for ancestor-only
+-		 * sharing, but not for toplevel algorithm.
+-		 *
+-		 * Our version is better, but slower, because it requires
+-		 * two passes, but it is unavoidable with top-level sharing.
+-		 */
+-
+-		if (q->toplevel == TC_CBQ_MAXLEVEL &&
+-		    q->link.undertime == PSCHED_PASTPERFECT)
+-			break;
+-
+-		q->toplevel = TC_CBQ_MAXLEVEL;
+-		q->link.undertime = PSCHED_PASTPERFECT;
+-	}
+-
+-	/* No packets in scheduler or nobody wants to give them to us :-(
+-	 * Sigh... start watchdog timer in the last case.
+-	 */
+-
+-	if (sch->q.qlen) {
+-		qdisc_qstats_overlimit(sch);
+-		if (q->wd_expires)
+-			qdisc_watchdog_schedule(&q->watchdog,
+-						now + q->wd_expires);
+-	}
+-	return NULL;
+-}
+-
+-/* CBQ class maintanance routines */
+-
+-static void cbq_adjust_levels(struct cbq_class *this)
+-{
+-	if (this == NULL)
+-		return;
+-
+-	do {
+-		int level = 0;
+-		struct cbq_class *cl;
+-
+-		cl = this->children;
+-		if (cl) {
+-			do {
+-				if (cl->level > level)
+-					level = cl->level;
+-			} while ((cl = cl->sibling) != this->children);
+-		}
+-		this->level = level + 1;
+-	} while ((this = this->tparent) != NULL);
+-}
+-
+-static void cbq_normalize_quanta(struct cbq_sched_data *q, int prio)
+-{
+-	struct cbq_class *cl;
+-	unsigned int h;
+-
+-	if (q->quanta[prio] == 0)
+-		return;
+-
+-	for (h = 0; h < q->clhash.hashsize; h++) {
+-		hlist_for_each_entry(cl, &q->clhash.hash[h], common.hnode) {
+-			/* BUGGGG... Beware! This expression suffer of
+-			 * arithmetic overflows!
+-			 */
+-			if (cl->priority == prio) {
+-				cl->quantum = (cl->weight*cl->allot*q->nclasses[prio])/
+-					q->quanta[prio];
+-			}
+-			if (cl->quantum <= 0 ||
+-			    cl->quantum > 32*qdisc_dev(cl->qdisc)->mtu) {
+-				pr_warn("CBQ: class %08x has bad quantum==%ld, repaired.\n",
+-					cl->common.classid, cl->quantum);
+-				cl->quantum = qdisc_dev(cl->qdisc)->mtu/2 + 1;
+-			}
+-		}
+-	}
+-}
+-
+-static void cbq_sync_defmap(struct cbq_class *cl)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(cl->qdisc);
+-	struct cbq_class *split = cl->split;
+-	unsigned int h;
+-	int i;
+-
+-	if (split == NULL)
+-		return;
+-
+-	for (i = 0; i <= TC_PRIO_MAX; i++) {
+-		if (split->defaults[i] == cl && !(cl->defmap & (1<<i)))
+-			split->defaults[i] = NULL;
+-	}
+-
+-	for (i = 0; i <= TC_PRIO_MAX; i++) {
+-		int level = split->level;
+-
+-		if (split->defaults[i])
+-			continue;
+-
+-		for (h = 0; h < q->clhash.hashsize; h++) {
+-			struct cbq_class *c;
+-
+-			hlist_for_each_entry(c, &q->clhash.hash[h],
+-					     common.hnode) {
+-				if (c->split == split && c->level < level &&
+-				    c->defmap & (1<<i)) {
+-					split->defaults[i] = c;
+-					level = c->level;
+-				}
+-			}
+-		}
+-	}
+-}
+-
+-static void cbq_change_defmap(struct cbq_class *cl, u32 splitid, u32 def, u32 mask)
+-{
+-	struct cbq_class *split = NULL;
+-
+-	if (splitid == 0) {
+-		split = cl->split;
+-		if (!split)
+-			return;
+-		splitid = split->common.classid;
+-	}
+-
+-	if (split == NULL || split->common.classid != splitid) {
+-		for (split = cl->tparent; split; split = split->tparent)
+-			if (split->common.classid == splitid)
+-				break;
+-	}
+-
+-	if (split == NULL)
+-		return;
+-
+-	if (cl->split != split) {
+-		cl->defmap = 0;
+-		cbq_sync_defmap(cl);
+-		cl->split = split;
+-		cl->defmap = def & mask;
+-	} else
+-		cl->defmap = (cl->defmap & ~mask) | (def & mask);
+-
+-	cbq_sync_defmap(cl);
+-}
+-
+-static void cbq_unlink_class(struct cbq_class *this)
+-{
+-	struct cbq_class *cl, **clp;
+-	struct cbq_sched_data *q = qdisc_priv(this->qdisc);
+-
+-	qdisc_class_hash_remove(&q->clhash, &this->common);
+-
+-	if (this->tparent) {
+-		clp = &this->sibling;
+-		cl = *clp;
+-		do {
+-			if (cl == this) {
+-				*clp = cl->sibling;
+-				break;
+-			}
+-			clp = &cl->sibling;
+-		} while ((cl = *clp) != this->sibling);
+-
+-		if (this->tparent->children == this) {
+-			this->tparent->children = this->sibling;
+-			if (this->sibling == this)
+-				this->tparent->children = NULL;
+-		}
+-	} else {
+-		WARN_ON(this->sibling != this);
+-	}
+-}
+-
+-static void cbq_link_class(struct cbq_class *this)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(this->qdisc);
+-	struct cbq_class *parent = this->tparent;
+-
+-	this->sibling = this;
+-	qdisc_class_hash_insert(&q->clhash, &this->common);
+-
+-	if (parent == NULL)
+-		return;
+-
+-	if (parent->children == NULL) {
+-		parent->children = this;
+-	} else {
+-		this->sibling = parent->children->sibling;
+-		parent->children->sibling = this;
+-	}
+-}
+-
+-static void
+-cbq_reset(struct Qdisc *sch)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct cbq_class *cl;
+-	int prio;
+-	unsigned int h;
+-
+-	q->activemask = 0;
+-	q->pmask = 0;
+-	q->tx_class = NULL;
+-	q->tx_borrowed = NULL;
+-	qdisc_watchdog_cancel(&q->watchdog);
+-	hrtimer_cancel(&q->delay_timer);
+-	q->toplevel = TC_CBQ_MAXLEVEL;
+-	q->now = psched_get_time();
+-
+-	for (prio = 0; prio <= TC_CBQ_MAXPRIO; prio++)
+-		q->active[prio] = NULL;
+-
+-	for (h = 0; h < q->clhash.hashsize; h++) {
+-		hlist_for_each_entry(cl, &q->clhash.hash[h], common.hnode) {
+-			qdisc_reset(cl->q);
+-
+-			cl->next_alive = NULL;
+-			cl->undertime = PSCHED_PASTPERFECT;
+-			cl->avgidle = cl->maxidle;
+-			cl->deficit = cl->quantum;
+-			cl->cpriority = cl->priority;
+-		}
+-	}
+-}
+-
+-
+-static int cbq_set_lss(struct cbq_class *cl, struct tc_cbq_lssopt *lss)
+-{
+-	if (lss->change & TCF_CBQ_LSS_FLAGS) {
+-		cl->share = (lss->flags & TCF_CBQ_LSS_ISOLATED) ? NULL : cl->tparent;
+-		cl->borrow = (lss->flags & TCF_CBQ_LSS_BOUNDED) ? NULL : cl->tparent;
+-	}
+-	if (lss->change & TCF_CBQ_LSS_EWMA)
+-		cl->ewma_log = lss->ewma_log;
+-	if (lss->change & TCF_CBQ_LSS_AVPKT)
+-		cl->avpkt = lss->avpkt;
+-	if (lss->change & TCF_CBQ_LSS_MINIDLE)
+-		cl->minidle = -(long)lss->minidle;
+-	if (lss->change & TCF_CBQ_LSS_MAXIDLE) {
+-		cl->maxidle = lss->maxidle;
+-		cl->avgidle = lss->maxidle;
+-	}
+-	if (lss->change & TCF_CBQ_LSS_OFFTIME)
+-		cl->offtime = lss->offtime;
+-	return 0;
+-}
+-
+-static void cbq_rmprio(struct cbq_sched_data *q, struct cbq_class *cl)
+-{
+-	q->nclasses[cl->priority]--;
+-	q->quanta[cl->priority] -= cl->weight;
+-	cbq_normalize_quanta(q, cl->priority);
+-}
+-
+-static void cbq_addprio(struct cbq_sched_data *q, struct cbq_class *cl)
+-{
+-	q->nclasses[cl->priority]++;
+-	q->quanta[cl->priority] += cl->weight;
+-	cbq_normalize_quanta(q, cl->priority);
+-}
+-
+-static int cbq_set_wrr(struct cbq_class *cl, struct tc_cbq_wrropt *wrr)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(cl->qdisc);
+-
+-	if (wrr->allot)
+-		cl->allot = wrr->allot;
+-	if (wrr->weight)
+-		cl->weight = wrr->weight;
+-	if (wrr->priority) {
+-		cl->priority = wrr->priority - 1;
+-		cl->cpriority = cl->priority;
+-		if (cl->priority >= cl->priority2)
+-			cl->priority2 = TC_CBQ_MAXPRIO - 1;
+-	}
+-
+-	cbq_addprio(q, cl);
+-	return 0;
+-}
+-
+-static int cbq_set_fopt(struct cbq_class *cl, struct tc_cbq_fopt *fopt)
+-{
+-	cbq_change_defmap(cl, fopt->split, fopt->defmap, fopt->defchange);
+-	return 0;
+-}
+-
+-static const struct nla_policy cbq_policy[TCA_CBQ_MAX + 1] = {
+-	[TCA_CBQ_LSSOPT]	= { .len = sizeof(struct tc_cbq_lssopt) },
+-	[TCA_CBQ_WRROPT]	= { .len = sizeof(struct tc_cbq_wrropt) },
+-	[TCA_CBQ_FOPT]		= { .len = sizeof(struct tc_cbq_fopt) },
+-	[TCA_CBQ_OVL_STRATEGY]	= { .len = sizeof(struct tc_cbq_ovl) },
+-	[TCA_CBQ_RATE]		= { .len = sizeof(struct tc_ratespec) },
+-	[TCA_CBQ_RTAB]		= { .type = NLA_BINARY, .len = TC_RTAB_SIZE },
+-	[TCA_CBQ_POLICE]	= { .len = sizeof(struct tc_cbq_police) },
+-};
+-
+-static int cbq_opt_parse(struct nlattr *tb[TCA_CBQ_MAX + 1],
+-			 struct nlattr *opt,
+-			 struct netlink_ext_ack *extack)
+-{
+-	int err;
+-
+-	if (!opt) {
+-		NL_SET_ERR_MSG(extack, "CBQ options are required for this operation");
+-		return -EINVAL;
+-	}
+-
+-	err = nla_parse_nested_deprecated(tb, TCA_CBQ_MAX, opt,
+-					  cbq_policy, extack);
+-	if (err < 0)
+-		return err;
+-
+-	if (tb[TCA_CBQ_WRROPT]) {
+-		const struct tc_cbq_wrropt *wrr = nla_data(tb[TCA_CBQ_WRROPT]);
+-
+-		if (wrr->priority > TC_CBQ_MAXPRIO) {
+-			NL_SET_ERR_MSG(extack, "priority is bigger than TC_CBQ_MAXPRIO");
+-			err = -EINVAL;
+-		}
+-	}
+-	return err;
+-}
+-
+-static int cbq_init(struct Qdisc *sch, struct nlattr *opt,
+-		    struct netlink_ext_ack *extack)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct nlattr *tb[TCA_CBQ_MAX + 1];
+-	struct tc_ratespec *r;
+-	int err;
+-
+-	qdisc_watchdog_init(&q->watchdog, sch);
+-	hrtimer_init(&q->delay_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
+-	q->delay_timer.function = cbq_undelay;
+-
+-	err = cbq_opt_parse(tb, opt, extack);
+-	if (err < 0)
+-		return err;
+-
+-	if (!tb[TCA_CBQ_RTAB] || !tb[TCA_CBQ_RATE]) {
+-		NL_SET_ERR_MSG(extack, "Rate specification missing or incomplete");
+-		return -EINVAL;
+-	}
+-
+-	r = nla_data(tb[TCA_CBQ_RATE]);
+-
+-	q->link.R_tab = qdisc_get_rtab(r, tb[TCA_CBQ_RTAB], extack);
+-	if (!q->link.R_tab)
+-		return -EINVAL;
+-
+-	err = tcf_block_get(&q->link.block, &q->link.filter_list, sch, extack);
+-	if (err)
+-		goto put_rtab;
+-
+-	err = qdisc_class_hash_init(&q->clhash);
+-	if (err < 0)
+-		goto put_block;
+-
+-	q->link.sibling = &q->link;
+-	q->link.common.classid = sch->handle;
+-	q->link.qdisc = sch;
+-	q->link.q = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops,
+-				      sch->handle, NULL);
+-	if (!q->link.q)
+-		q->link.q = &noop_qdisc;
+-	else
+-		qdisc_hash_add(q->link.q, true);
+-
+-	q->link.priority = TC_CBQ_MAXPRIO - 1;
+-	q->link.priority2 = TC_CBQ_MAXPRIO - 1;
+-	q->link.cpriority = TC_CBQ_MAXPRIO - 1;
+-	q->link.allot = psched_mtu(qdisc_dev(sch));
+-	q->link.quantum = q->link.allot;
+-	q->link.weight = q->link.R_tab->rate.rate;
+-
+-	q->link.ewma_log = TC_CBQ_DEF_EWMA;
+-	q->link.avpkt = q->link.allot/2;
+-	q->link.minidle = -0x7FFFFFFF;
+-
+-	q->toplevel = TC_CBQ_MAXLEVEL;
+-	q->now = psched_get_time();
+-
+-	cbq_link_class(&q->link);
+-
+-	if (tb[TCA_CBQ_LSSOPT])
+-		cbq_set_lss(&q->link, nla_data(tb[TCA_CBQ_LSSOPT]));
+-
+-	cbq_addprio(q, &q->link);
+-	return 0;
+-
+-put_block:
+-	tcf_block_put(q->link.block);
+-
+-put_rtab:
+-	qdisc_put_rtab(q->link.R_tab);
+-	return err;
+-}
+-
+-static int cbq_dump_rate(struct sk_buff *skb, struct cbq_class *cl)
+-{
+-	unsigned char *b = skb_tail_pointer(skb);
+-
+-	if (nla_put(skb, TCA_CBQ_RATE, sizeof(cl->R_tab->rate), &cl->R_tab->rate))
+-		goto nla_put_failure;
+-	return skb->len;
+-
+-nla_put_failure:
+-	nlmsg_trim(skb, b);
+-	return -1;
+-}
+-
+-static int cbq_dump_lss(struct sk_buff *skb, struct cbq_class *cl)
+-{
+-	unsigned char *b = skb_tail_pointer(skb);
+-	struct tc_cbq_lssopt opt;
+-
+-	opt.flags = 0;
+-	if (cl->borrow == NULL)
+-		opt.flags |= TCF_CBQ_LSS_BOUNDED;
+-	if (cl->share == NULL)
+-		opt.flags |= TCF_CBQ_LSS_ISOLATED;
+-	opt.ewma_log = cl->ewma_log;
+-	opt.level = cl->level;
+-	opt.avpkt = cl->avpkt;
+-	opt.maxidle = cl->maxidle;
+-	opt.minidle = (u32)(-cl->minidle);
+-	opt.offtime = cl->offtime;
+-	opt.change = ~0;
+-	if (nla_put(skb, TCA_CBQ_LSSOPT, sizeof(opt), &opt))
+-		goto nla_put_failure;
+-	return skb->len;
+-
+-nla_put_failure:
+-	nlmsg_trim(skb, b);
+-	return -1;
+-}
+-
+-static int cbq_dump_wrr(struct sk_buff *skb, struct cbq_class *cl)
+-{
+-	unsigned char *b = skb_tail_pointer(skb);
+-	struct tc_cbq_wrropt opt;
+-
+-	memset(&opt, 0, sizeof(opt));
+-	opt.flags = 0;
+-	opt.allot = cl->allot;
+-	opt.priority = cl->priority + 1;
+-	opt.cpriority = cl->cpriority + 1;
+-	opt.weight = cl->weight;
+-	if (nla_put(skb, TCA_CBQ_WRROPT, sizeof(opt), &opt))
+-		goto nla_put_failure;
+-	return skb->len;
+-
+-nla_put_failure:
+-	nlmsg_trim(skb, b);
+-	return -1;
+-}
+-
+-static int cbq_dump_fopt(struct sk_buff *skb, struct cbq_class *cl)
+-{
+-	unsigned char *b = skb_tail_pointer(skb);
+-	struct tc_cbq_fopt opt;
+-
+-	if (cl->split || cl->defmap) {
+-		opt.split = cl->split ? cl->split->common.classid : 0;
+-		opt.defmap = cl->defmap;
+-		opt.defchange = ~0;
+-		if (nla_put(skb, TCA_CBQ_FOPT, sizeof(opt), &opt))
+-			goto nla_put_failure;
+-	}
+-	return skb->len;
+-
+-nla_put_failure:
+-	nlmsg_trim(skb, b);
+-	return -1;
+-}
+-
+-static int cbq_dump_attr(struct sk_buff *skb, struct cbq_class *cl)
+-{
+-	if (cbq_dump_lss(skb, cl) < 0 ||
+-	    cbq_dump_rate(skb, cl) < 0 ||
+-	    cbq_dump_wrr(skb, cl) < 0 ||
+-	    cbq_dump_fopt(skb, cl) < 0)
+-		return -1;
+-	return 0;
+-}
+-
+-static int cbq_dump(struct Qdisc *sch, struct sk_buff *skb)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct nlattr *nest;
+-
+-	nest = nla_nest_start_noflag(skb, TCA_OPTIONS);
+-	if (nest == NULL)
+-		goto nla_put_failure;
+-	if (cbq_dump_attr(skb, &q->link) < 0)
+-		goto nla_put_failure;
+-	return nla_nest_end(skb, nest);
+-
+-nla_put_failure:
+-	nla_nest_cancel(skb, nest);
+-	return -1;
+-}
+-
+-static int
+-cbq_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-
+-	q->link.xstats.avgidle = q->link.avgidle;
+-	return gnet_stats_copy_app(d, &q->link.xstats, sizeof(q->link.xstats));
+-}
+-
+-static int
+-cbq_dump_class(struct Qdisc *sch, unsigned long arg,
+-	       struct sk_buff *skb, struct tcmsg *tcm)
+-{
+-	struct cbq_class *cl = (struct cbq_class *)arg;
+-	struct nlattr *nest;
+-
+-	if (cl->tparent)
+-		tcm->tcm_parent = cl->tparent->common.classid;
+-	else
+-		tcm->tcm_parent = TC_H_ROOT;
+-	tcm->tcm_handle = cl->common.classid;
+-	tcm->tcm_info = cl->q->handle;
+-
+-	nest = nla_nest_start_noflag(skb, TCA_OPTIONS);
+-	if (nest == NULL)
+-		goto nla_put_failure;
+-	if (cbq_dump_attr(skb, cl) < 0)
+-		goto nla_put_failure;
+-	return nla_nest_end(skb, nest);
+-
+-nla_put_failure:
+-	nla_nest_cancel(skb, nest);
+-	return -1;
+-}
+-
+-static int
+-cbq_dump_class_stats(struct Qdisc *sch, unsigned long arg,
+-	struct gnet_dump *d)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct cbq_class *cl = (struct cbq_class *)arg;
+-	__u32 qlen;
+-
+-	cl->xstats.avgidle = cl->avgidle;
+-	cl->xstats.undertime = 0;
+-	qdisc_qstats_qlen_backlog(cl->q, &qlen, &cl->qstats.backlog);
+-
+-	if (cl->undertime != PSCHED_PASTPERFECT)
+-		cl->xstats.undertime = cl->undertime - q->now;
+-
+-	if (gnet_stats_copy_basic(qdisc_root_sleeping_running(sch),
+-				  d, NULL, &cl->bstats) < 0 ||
+-	    gnet_stats_copy_rate_est(d, &cl->rate_est) < 0 ||
+-	    gnet_stats_copy_queue(d, NULL, &cl->qstats, qlen) < 0)
+-		return -1;
+-
+-	return gnet_stats_copy_app(d, &cl->xstats, sizeof(cl->xstats));
+-}
+-
+-static int cbq_graft(struct Qdisc *sch, unsigned long arg, struct Qdisc *new,
+-		     struct Qdisc **old, struct netlink_ext_ack *extack)
+-{
+-	struct cbq_class *cl = (struct cbq_class *)arg;
+-
+-	if (new == NULL) {
+-		new = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops,
+-					cl->common.classid, extack);
+-		if (new == NULL)
+-			return -ENOBUFS;
+-	}
+-
+-	*old = qdisc_replace(sch, new, &cl->q);
+-	return 0;
+-}
+-
+-static struct Qdisc *cbq_leaf(struct Qdisc *sch, unsigned long arg)
+-{
+-	struct cbq_class *cl = (struct cbq_class *)arg;
+-
+-	return cl->q;
+-}
+-
+-static void cbq_qlen_notify(struct Qdisc *sch, unsigned long arg)
+-{
+-	struct cbq_class *cl = (struct cbq_class *)arg;
+-
+-	cbq_deactivate_class(cl);
+-}
+-
+-static unsigned long cbq_find(struct Qdisc *sch, u32 classid)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-
+-	return (unsigned long)cbq_class_lookup(q, classid);
+-}
+-
+-static void cbq_destroy_class(struct Qdisc *sch, struct cbq_class *cl)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-
+-	WARN_ON(cl->filters);
+-
+-	tcf_block_put(cl->block);
+-	qdisc_put(cl->q);
+-	qdisc_put_rtab(cl->R_tab);
+-	gen_kill_estimator(&cl->rate_est);
+-	if (cl != &q->link)
+-		kfree(cl);
+-}
+-
+-static void cbq_destroy(struct Qdisc *sch)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct hlist_node *next;
+-	struct cbq_class *cl;
+-	unsigned int h;
+-
+-#ifdef CONFIG_NET_CLS_ACT
+-	q->rx_class = NULL;
+-#endif
+-	/*
+-	 * Filters must be destroyed first because we don't destroy the
+-	 * classes from root to leafs which means that filters can still
+-	 * be bound to classes which have been destroyed already. --TGR '04
+-	 */
+-	for (h = 0; h < q->clhash.hashsize; h++) {
+-		hlist_for_each_entry(cl, &q->clhash.hash[h], common.hnode) {
+-			tcf_block_put(cl->block);
+-			cl->block = NULL;
+-		}
+-	}
+-	for (h = 0; h < q->clhash.hashsize; h++) {
+-		hlist_for_each_entry_safe(cl, next, &q->clhash.hash[h],
+-					  common.hnode)
+-			cbq_destroy_class(sch, cl);
+-	}
+-	qdisc_class_hash_destroy(&q->clhash);
+-}
+-
+-static int
+-cbq_change_class(struct Qdisc *sch, u32 classid, u32 parentid, struct nlattr **tca,
+-		 unsigned long *arg, struct netlink_ext_ack *extack)
+-{
+-	int err;
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct cbq_class *cl = (struct cbq_class *)*arg;
+-	struct nlattr *opt = tca[TCA_OPTIONS];
+-	struct nlattr *tb[TCA_CBQ_MAX + 1];
+-	struct cbq_class *parent;
+-	struct qdisc_rate_table *rtab = NULL;
+-
+-	err = cbq_opt_parse(tb, opt, extack);
+-	if (err < 0)
+-		return err;
+-
+-	if (tb[TCA_CBQ_OVL_STRATEGY] || tb[TCA_CBQ_POLICE]) {
+-		NL_SET_ERR_MSG(extack, "Neither overlimit strategy nor policing attributes can be used for changing class params");
+-		return -EOPNOTSUPP;
+-	}
+-
+-	if (cl) {
+-		/* Check parent */
+-		if (parentid) {
+-			if (cl->tparent &&
+-			    cl->tparent->common.classid != parentid) {
+-				NL_SET_ERR_MSG(extack, "Invalid parent id");
+-				return -EINVAL;
+-			}
+-			if (!cl->tparent && parentid != TC_H_ROOT) {
+-				NL_SET_ERR_MSG(extack, "Parent must be root");
+-				return -EINVAL;
+-			}
+-		}
+-
+-		if (tb[TCA_CBQ_RATE]) {
+-			rtab = qdisc_get_rtab(nla_data(tb[TCA_CBQ_RATE]),
+-					      tb[TCA_CBQ_RTAB], extack);
+-			if (rtab == NULL)
+-				return -EINVAL;
+-		}
+-
+-		if (tca[TCA_RATE]) {
+-			err = gen_replace_estimator(&cl->bstats, NULL,
+-						    &cl->rate_est,
+-						    NULL,
+-						    qdisc_root_sleeping_running(sch),
+-						    tca[TCA_RATE]);
+-			if (err) {
+-				NL_SET_ERR_MSG(extack, "Failed to replace specified rate estimator");
+-				qdisc_put_rtab(rtab);
+-				return err;
+-			}
+-		}
+-
+-		/* Change class parameters */
+-		sch_tree_lock(sch);
+-
+-		if (cl->next_alive != NULL)
+-			cbq_deactivate_class(cl);
+-
+-		if (rtab) {
+-			qdisc_put_rtab(cl->R_tab);
+-			cl->R_tab = rtab;
+-		}
+-
+-		if (tb[TCA_CBQ_LSSOPT])
+-			cbq_set_lss(cl, nla_data(tb[TCA_CBQ_LSSOPT]));
+-
+-		if (tb[TCA_CBQ_WRROPT]) {
+-			cbq_rmprio(q, cl);
+-			cbq_set_wrr(cl, nla_data(tb[TCA_CBQ_WRROPT]));
+-		}
+-
+-		if (tb[TCA_CBQ_FOPT])
+-			cbq_set_fopt(cl, nla_data(tb[TCA_CBQ_FOPT]));
+-
+-		if (cl->q->q.qlen)
+-			cbq_activate_class(cl);
+-
+-		sch_tree_unlock(sch);
+-
+-		return 0;
+-	}
+-
+-	if (parentid == TC_H_ROOT)
+-		return -EINVAL;
+-
+-	if (!tb[TCA_CBQ_WRROPT] || !tb[TCA_CBQ_RATE] || !tb[TCA_CBQ_LSSOPT]) {
+-		NL_SET_ERR_MSG(extack, "One of the following attributes MUST be specified: WRR, rate or link sharing");
+-		return -EINVAL;
+-	}
+-
+-	rtab = qdisc_get_rtab(nla_data(tb[TCA_CBQ_RATE]), tb[TCA_CBQ_RTAB],
+-			      extack);
+-	if (rtab == NULL)
+-		return -EINVAL;
+-
+-	if (classid) {
+-		err = -EINVAL;
+-		if (TC_H_MAJ(classid ^ sch->handle) ||
+-		    cbq_class_lookup(q, classid)) {
+-			NL_SET_ERR_MSG(extack, "Specified class not found");
+-			goto failure;
+-		}
+-	} else {
+-		int i;
+-		classid = TC_H_MAKE(sch->handle, 0x8000);
+-
+-		for (i = 0; i < 0x8000; i++) {
+-			if (++q->hgenerator >= 0x8000)
+-				q->hgenerator = 1;
+-			if (cbq_class_lookup(q, classid|q->hgenerator) == NULL)
+-				break;
+-		}
+-		err = -ENOSR;
+-		if (i >= 0x8000) {
+-			NL_SET_ERR_MSG(extack, "Unable to generate classid");
+-			goto failure;
+-		}
+-		classid = classid|q->hgenerator;
+-	}
+-
+-	parent = &q->link;
+-	if (parentid) {
+-		parent = cbq_class_lookup(q, parentid);
+-		err = -EINVAL;
+-		if (!parent) {
+-			NL_SET_ERR_MSG(extack, "Failed to find parentid");
+-			goto failure;
+-		}
+-	}
+-
+-	err = -ENOBUFS;
+-	cl = kzalloc(sizeof(*cl), GFP_KERNEL);
+-	if (cl == NULL)
+-		goto failure;
+-
+-	err = tcf_block_get(&cl->block, &cl->filter_list, sch, extack);
+-	if (err) {
+-		kfree(cl);
+-		goto failure;
+-	}
+-
+-	if (tca[TCA_RATE]) {
+-		err = gen_new_estimator(&cl->bstats, NULL, &cl->rate_est,
+-					NULL,
+-					qdisc_root_sleeping_running(sch),
+-					tca[TCA_RATE]);
+-		if (err) {
+-			NL_SET_ERR_MSG(extack, "Couldn't create new estimator");
+-			tcf_block_put(cl->block);
+-			kfree(cl);
+-			goto failure;
+-		}
+-	}
+-
+-	cl->R_tab = rtab;
+-	rtab = NULL;
+-	cl->q = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops, classid,
+-				  NULL);
+-	if (!cl->q)
+-		cl->q = &noop_qdisc;
+-	else
+-		qdisc_hash_add(cl->q, true);
+-
+-	cl->common.classid = classid;
+-	cl->tparent = parent;
+-	cl->qdisc = sch;
+-	cl->allot = parent->allot;
+-	cl->quantum = cl->allot;
+-	cl->weight = cl->R_tab->rate.rate;
+-
+-	sch_tree_lock(sch);
+-	cbq_link_class(cl);
+-	cl->borrow = cl->tparent;
+-	if (cl->tparent != &q->link)
+-		cl->share = cl->tparent;
+-	cbq_adjust_levels(parent);
+-	cl->minidle = -0x7FFFFFFF;
+-	cbq_set_lss(cl, nla_data(tb[TCA_CBQ_LSSOPT]));
+-	cbq_set_wrr(cl, nla_data(tb[TCA_CBQ_WRROPT]));
+-	if (cl->ewma_log == 0)
+-		cl->ewma_log = q->link.ewma_log;
+-	if (cl->maxidle == 0)
+-		cl->maxidle = q->link.maxidle;
+-	if (cl->avpkt == 0)
+-		cl->avpkt = q->link.avpkt;
+-	if (tb[TCA_CBQ_FOPT])
+-		cbq_set_fopt(cl, nla_data(tb[TCA_CBQ_FOPT]));
+-	sch_tree_unlock(sch);
+-
+-	qdisc_class_hash_grow(sch, &q->clhash);
+-
+-	*arg = (unsigned long)cl;
+-	return 0;
+-
+-failure:
+-	qdisc_put_rtab(rtab);
+-	return err;
+-}
+-
+-static int cbq_delete(struct Qdisc *sch, unsigned long arg)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct cbq_class *cl = (struct cbq_class *)arg;
+-
+-	if (cl->filters || cl->children || cl == &q->link)
+-		return -EBUSY;
+-
+-	sch_tree_lock(sch);
+-
+-	qdisc_purge_queue(cl->q);
+-
+-	if (cl->next_alive)
+-		cbq_deactivate_class(cl);
+-
+-	if (q->tx_borrowed == cl)
+-		q->tx_borrowed = q->tx_class;
+-	if (q->tx_class == cl) {
+-		q->tx_class = NULL;
+-		q->tx_borrowed = NULL;
+-	}
+-#ifdef CONFIG_NET_CLS_ACT
+-	if (q->rx_class == cl)
+-		q->rx_class = NULL;
+-#endif
+-
+-	cbq_unlink_class(cl);
+-	cbq_adjust_levels(cl->tparent);
+-	cl->defmap = 0;
+-	cbq_sync_defmap(cl);
+-
+-	cbq_rmprio(q, cl);
+-	sch_tree_unlock(sch);
+-
+-	cbq_destroy_class(sch, cl);
+-	return 0;
+-}
+-
+-static struct tcf_block *cbq_tcf_block(struct Qdisc *sch, unsigned long arg,
+-				       struct netlink_ext_ack *extack)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct cbq_class *cl = (struct cbq_class *)arg;
+-
+-	if (cl == NULL)
+-		cl = &q->link;
+-
+-	return cl->block;
+-}
+-
+-static unsigned long cbq_bind_filter(struct Qdisc *sch, unsigned long parent,
+-				     u32 classid)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct cbq_class *p = (struct cbq_class *)parent;
+-	struct cbq_class *cl = cbq_class_lookup(q, classid);
+-
+-	if (cl) {
+-		if (p && p->level <= cl->level)
+-			return 0;
+-		cl->filters++;
+-		return (unsigned long)cl;
+-	}
+-	return 0;
+-}
+-
+-static void cbq_unbind_filter(struct Qdisc *sch, unsigned long arg)
+-{
+-	struct cbq_class *cl = (struct cbq_class *)arg;
+-
+-	cl->filters--;
+-}
+-
+-static void cbq_walk(struct Qdisc *sch, struct qdisc_walker *arg)
+-{
+-	struct cbq_sched_data *q = qdisc_priv(sch);
+-	struct cbq_class *cl;
+-	unsigned int h;
+-
+-	if (arg->stop)
+-		return;
+-
+-	for (h = 0; h < q->clhash.hashsize; h++) {
+-		hlist_for_each_entry(cl, &q->clhash.hash[h], common.hnode) {
+-			if (arg->count < arg->skip) {
+-				arg->count++;
+-				continue;
+-			}
+-			if (arg->fn(sch, (unsigned long)cl, arg) < 0) {
+-				arg->stop = 1;
+-				return;
+-			}
+-			arg->count++;
+-		}
+-	}
+-}
+-
+-static const struct Qdisc_class_ops cbq_class_ops = {
+-	.graft		=	cbq_graft,
+-	.leaf		=	cbq_leaf,
+-	.qlen_notify	=	cbq_qlen_notify,
+-	.find		=	cbq_find,
+-	.change		=	cbq_change_class,
+-	.delete		=	cbq_delete,
+-	.walk		=	cbq_walk,
+-	.tcf_block	=	cbq_tcf_block,
+-	.bind_tcf	=	cbq_bind_filter,
+-	.unbind_tcf	=	cbq_unbind_filter,
+-	.dump		=	cbq_dump_class,
+-	.dump_stats	=	cbq_dump_class_stats,
+-};
+-
+-static struct Qdisc_ops cbq_qdisc_ops __read_mostly = {
+-	.next		=	NULL,
+-	.cl_ops		=	&cbq_class_ops,
+-	.id		=	"cbq",
+-	.priv_size	=	sizeof(struct cbq_sched_data),
+-	.enqueue	=	cbq_enqueue,
+-	.dequeue	=	cbq_dequeue,
+-	.peek		=	qdisc_peek_dequeued,
+-	.init		=	cbq_init,
+-	.reset		=	cbq_reset,
+-	.destroy	=	cbq_destroy,
+-	.change		=	NULL,
+-	.dump		=	cbq_dump,
+-	.dump_stats	=	cbq_dump_stats,
+-	.owner		=	THIS_MODULE,
+-};
+-
+-static int __init cbq_module_init(void)
+-{
+-	return register_qdisc(&cbq_qdisc_ops);
+-}
+-static void __exit cbq_module_exit(void)
+-{
+-	unregister_qdisc(&cbq_qdisc_ops);
+-}
+-module_init(cbq_module_init)
+-module_exit(cbq_module_exit)
+-MODULE_LICENSE("GPL");
+diff --git a/net/sched/sch_dsmark.c b/net/sched/sch_dsmark.c
+deleted file mode 100644
+index a75bc7f80cd7e..0000000000000
+--- a/net/sched/sch_dsmark.c
++++ /dev/null
+@@ -1,521 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/* net/sched/sch_dsmark.c - Differentiated Services field marker */
+-
+-/* Written 1998-2000 by Werner Almesberger, EPFL ICA */
+-
+-
+-#include <linux/module.h>
+-#include <linux/init.h>
+-#include <linux/slab.h>
+-#include <linux/types.h>
+-#include <linux/string.h>
+-#include <linux/errno.h>
+-#include <linux/skbuff.h>
+-#include <linux/rtnetlink.h>
+-#include <linux/bitops.h>
+-#include <net/pkt_sched.h>
+-#include <net/pkt_cls.h>
+-#include <net/dsfield.h>
+-#include <net/inet_ecn.h>
+-#include <asm/byteorder.h>
+-
+-/*
+- * classid	class		marking
+- * -------	-----		-------
+- *   n/a	  0		n/a
+- *   x:0	  1		use entry [0]
+- *   ...	 ...		...
+- *   x:y y>0	 y+1		use entry [y]
+- *   ...	 ...		...
+- * x:indices-1	indices		use entry [indices-1]
+- *   ...	 ...		...
+- *   x:y	 y+1		use entry [y & (indices-1)]
+- *   ...	 ...		...
+- * 0xffff	0x10000		use entry [indices-1]
+- */
+-
+-
+-#define NO_DEFAULT_INDEX	(1 << 16)
+-
+-struct mask_value {
+-	u8			mask;
+-	u8			value;
+-};
+-
+-struct dsmark_qdisc_data {
+-	struct Qdisc		*q;
+-	struct tcf_proto __rcu	*filter_list;
+-	struct tcf_block	*block;
+-	struct mask_value	*mv;
+-	u16			indices;
+-	u8			set_tc_index;
+-	u32			default_index;	/* index range is 0...0xffff */
+-#define DSMARK_EMBEDDED_SZ	16
+-	struct mask_value	embedded[DSMARK_EMBEDDED_SZ];
+-};
+-
+-static inline int dsmark_valid_index(struct dsmark_qdisc_data *p, u16 index)
+-{
+-	return index <= p->indices && index > 0;
+-}
+-
+-/* ------------------------- Class/flow operations ------------------------- */
+-
+-static int dsmark_graft(struct Qdisc *sch, unsigned long arg,
+-			struct Qdisc *new, struct Qdisc **old,
+-			struct netlink_ext_ack *extack)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-
+-	pr_debug("%s(sch %p,[qdisc %p],new %p,old %p)\n",
+-		 __func__, sch, p, new, old);
+-
+-	if (new == NULL) {
+-		new = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops,
+-					sch->handle, NULL);
+-		if (new == NULL)
+-			new = &noop_qdisc;
+-	}
+-
+-	*old = qdisc_replace(sch, new, &p->q);
+-	return 0;
+-}
+-
+-static struct Qdisc *dsmark_leaf(struct Qdisc *sch, unsigned long arg)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-	return p->q;
+-}
+-
+-static unsigned long dsmark_find(struct Qdisc *sch, u32 classid)
+-{
+-	return TC_H_MIN(classid) + 1;
+-}
+-
+-static unsigned long dsmark_bind_filter(struct Qdisc *sch,
+-					unsigned long parent, u32 classid)
+-{
+-	pr_debug("%s(sch %p,[qdisc %p],classid %x)\n",
+-		 __func__, sch, qdisc_priv(sch), classid);
+-
+-	return dsmark_find(sch, classid);
+-}
+-
+-static void dsmark_unbind_filter(struct Qdisc *sch, unsigned long cl)
+-{
+-}
+-
+-static const struct nla_policy dsmark_policy[TCA_DSMARK_MAX + 1] = {
+-	[TCA_DSMARK_INDICES]		= { .type = NLA_U16 },
+-	[TCA_DSMARK_DEFAULT_INDEX]	= { .type = NLA_U16 },
+-	[TCA_DSMARK_SET_TC_INDEX]	= { .type = NLA_FLAG },
+-	[TCA_DSMARK_MASK]		= { .type = NLA_U8 },
+-	[TCA_DSMARK_VALUE]		= { .type = NLA_U8 },
+-};
+-
+-static int dsmark_change(struct Qdisc *sch, u32 classid, u32 parent,
+-			 struct nlattr **tca, unsigned long *arg,
+-			 struct netlink_ext_ack *extack)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-	struct nlattr *opt = tca[TCA_OPTIONS];
+-	struct nlattr *tb[TCA_DSMARK_MAX + 1];
+-	int err = -EINVAL;
+-
+-	pr_debug("%s(sch %p,[qdisc %p],classid %x,parent %x), arg 0x%lx\n",
+-		 __func__, sch, p, classid, parent, *arg);
+-
+-	if (!dsmark_valid_index(p, *arg)) {
+-		err = -ENOENT;
+-		goto errout;
+-	}
+-
+-	if (!opt)
+-		goto errout;
+-
+-	err = nla_parse_nested_deprecated(tb, TCA_DSMARK_MAX, opt,
+-					  dsmark_policy, NULL);
+-	if (err < 0)
+-		goto errout;
+-
+-	if (tb[TCA_DSMARK_VALUE])
+-		p->mv[*arg - 1].value = nla_get_u8(tb[TCA_DSMARK_VALUE]);
+-
+-	if (tb[TCA_DSMARK_MASK])
+-		p->mv[*arg - 1].mask = nla_get_u8(tb[TCA_DSMARK_MASK]);
+-
+-	err = 0;
+-
+-errout:
+-	return err;
+-}
+-
+-static int dsmark_delete(struct Qdisc *sch, unsigned long arg)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-
+-	if (!dsmark_valid_index(p, arg))
+-		return -EINVAL;
+-
+-	p->mv[arg - 1].mask = 0xff;
+-	p->mv[arg - 1].value = 0;
+-
+-	return 0;
+-}
+-
+-static void dsmark_walk(struct Qdisc *sch, struct qdisc_walker *walker)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-	int i;
+-
+-	pr_debug("%s(sch %p,[qdisc %p],walker %p)\n",
+-		 __func__, sch, p, walker);
+-
+-	if (walker->stop)
+-		return;
+-
+-	for (i = 0; i < p->indices; i++) {
+-		if (p->mv[i].mask == 0xff && !p->mv[i].value)
+-			goto ignore;
+-		if (walker->count >= walker->skip) {
+-			if (walker->fn(sch, i + 1, walker) < 0) {
+-				walker->stop = 1;
+-				break;
+-			}
+-		}
+-ignore:
+-		walker->count++;
+-	}
+-}
+-
+-static struct tcf_block *dsmark_tcf_block(struct Qdisc *sch, unsigned long cl,
+-					  struct netlink_ext_ack *extack)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-
+-	return p->block;
+-}
+-
+-/* --------------------------- Qdisc operations ---------------------------- */
+-
+-static int dsmark_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+-			  struct sk_buff **to_free)
+-{
+-	unsigned int len = qdisc_pkt_len(skb);
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-	int err;
+-
+-	pr_debug("%s(skb %p,sch %p,[qdisc %p])\n", __func__, skb, sch, p);
+-
+-	if (p->set_tc_index) {
+-		int wlen = skb_network_offset(skb);
+-
+-		switch (skb_protocol(skb, true)) {
+-		case htons(ETH_P_IP):
+-			wlen += sizeof(struct iphdr);
+-			if (!pskb_may_pull(skb, wlen) ||
+-			    skb_try_make_writable(skb, wlen))
+-				goto drop;
+-
+-			skb->tc_index = ipv4_get_dsfield(ip_hdr(skb))
+-				& ~INET_ECN_MASK;
+-			break;
+-
+-		case htons(ETH_P_IPV6):
+-			wlen += sizeof(struct ipv6hdr);
+-			if (!pskb_may_pull(skb, wlen) ||
+-			    skb_try_make_writable(skb, wlen))
+-				goto drop;
+-
+-			skb->tc_index = ipv6_get_dsfield(ipv6_hdr(skb))
+-				& ~INET_ECN_MASK;
+-			break;
+-		default:
+-			skb->tc_index = 0;
+-			break;
+-		}
+-	}
+-
+-	if (TC_H_MAJ(skb->priority) == sch->handle)
+-		skb->tc_index = TC_H_MIN(skb->priority);
+-	else {
+-		struct tcf_result res;
+-		struct tcf_proto *fl = rcu_dereference_bh(p->filter_list);
+-		int result = tcf_classify(skb, fl, &res, false);
+-
+-		pr_debug("result %d class 0x%04x\n", result, res.classid);
+-
+-		switch (result) {
+-#ifdef CONFIG_NET_CLS_ACT
+-		case TC_ACT_QUEUED:
+-		case TC_ACT_STOLEN:
+-		case TC_ACT_TRAP:
+-			__qdisc_drop(skb, to_free);
+-			return NET_XMIT_SUCCESS | __NET_XMIT_STOLEN;
+-
+-		case TC_ACT_SHOT:
+-			goto drop;
+-#endif
+-		case TC_ACT_OK:
+-			skb->tc_index = TC_H_MIN(res.classid);
+-			break;
+-
+-		default:
+-			if (p->default_index != NO_DEFAULT_INDEX)
+-				skb->tc_index = p->default_index;
+-			break;
+-		}
+-	}
+-
+-	err = qdisc_enqueue(skb, p->q, to_free);
+-	if (err != NET_XMIT_SUCCESS) {
+-		if (net_xmit_drop_count(err))
+-			qdisc_qstats_drop(sch);
+-		return err;
+-	}
+-
+-	sch->qstats.backlog += len;
+-	sch->q.qlen++;
+-
+-	return NET_XMIT_SUCCESS;
+-
+-drop:
+-	qdisc_drop(skb, sch, to_free);
+-	return NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
+-}
+-
+-static struct sk_buff *dsmark_dequeue(struct Qdisc *sch)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-	struct sk_buff *skb;
+-	u32 index;
+-
+-	pr_debug("%s(sch %p,[qdisc %p])\n", __func__, sch, p);
+-
+-	skb = qdisc_dequeue_peeked(p->q);
+-	if (skb == NULL)
+-		return NULL;
+-
+-	qdisc_bstats_update(sch, skb);
+-	qdisc_qstats_backlog_dec(sch, skb);
+-	sch->q.qlen--;
+-
+-	index = skb->tc_index & (p->indices - 1);
+-	pr_debug("index %d->%d\n", skb->tc_index, index);
+-
+-	switch (skb_protocol(skb, true)) {
+-	case htons(ETH_P_IP):
+-		ipv4_change_dsfield(ip_hdr(skb), p->mv[index].mask,
+-				    p->mv[index].value);
+-			break;
+-	case htons(ETH_P_IPV6):
+-		ipv6_change_dsfield(ipv6_hdr(skb), p->mv[index].mask,
+-				    p->mv[index].value);
+-			break;
+-	default:
+-		/*
+-		 * Only complain if a change was actually attempted.
+-		 * This way, we can send non-IP traffic through dsmark
+-		 * and don't need yet another qdisc as a bypass.
+-		 */
+-		if (p->mv[index].mask != 0xff || p->mv[index].value)
+-			pr_warn("%s: unsupported protocol %d\n",
+-				__func__, ntohs(skb_protocol(skb, true)));
+-		break;
+-	}
+-
+-	return skb;
+-}
+-
+-static struct sk_buff *dsmark_peek(struct Qdisc *sch)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-
+-	pr_debug("%s(sch %p,[qdisc %p])\n", __func__, sch, p);
+-
+-	return p->q->ops->peek(p->q);
+-}
+-
+-static int dsmark_init(struct Qdisc *sch, struct nlattr *opt,
+-		       struct netlink_ext_ack *extack)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-	struct nlattr *tb[TCA_DSMARK_MAX + 1];
+-	int err = -EINVAL;
+-	u32 default_index = NO_DEFAULT_INDEX;
+-	u16 indices;
+-	int i;
+-
+-	pr_debug("%s(sch %p,[qdisc %p],opt %p)\n", __func__, sch, p, opt);
+-
+-	if (!opt)
+-		goto errout;
+-
+-	err = tcf_block_get(&p->block, &p->filter_list, sch, extack);
+-	if (err)
+-		return err;
+-
+-	err = nla_parse_nested_deprecated(tb, TCA_DSMARK_MAX, opt,
+-					  dsmark_policy, NULL);
+-	if (err < 0)
+-		goto errout;
+-
+-	err = -EINVAL;
+-	if (!tb[TCA_DSMARK_INDICES])
+-		goto errout;
+-	indices = nla_get_u16(tb[TCA_DSMARK_INDICES]);
+-
+-	if (hweight32(indices) != 1)
+-		goto errout;
+-
+-	if (tb[TCA_DSMARK_DEFAULT_INDEX])
+-		default_index = nla_get_u16(tb[TCA_DSMARK_DEFAULT_INDEX]);
+-
+-	if (indices <= DSMARK_EMBEDDED_SZ)
+-		p->mv = p->embedded;
+-	else
+-		p->mv = kmalloc_array(indices, sizeof(*p->mv), GFP_KERNEL);
+-	if (!p->mv) {
+-		err = -ENOMEM;
+-		goto errout;
+-	}
+-	for (i = 0; i < indices; i++) {
+-		p->mv[i].mask = 0xff;
+-		p->mv[i].value = 0;
+-	}
+-	p->indices = indices;
+-	p->default_index = default_index;
+-	p->set_tc_index = nla_get_flag(tb[TCA_DSMARK_SET_TC_INDEX]);
+-
+-	p->q = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops, sch->handle,
+-				 NULL);
+-	if (p->q == NULL)
+-		p->q = &noop_qdisc;
+-	else
+-		qdisc_hash_add(p->q, true);
+-
+-	pr_debug("%s: qdisc %p\n", __func__, p->q);
+-
+-	err = 0;
+-errout:
+-	return err;
+-}
+-
+-static void dsmark_reset(struct Qdisc *sch)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-
+-	pr_debug("%s(sch %p,[qdisc %p])\n", __func__, sch, p);
+-	if (p->q)
+-		qdisc_reset(p->q);
+-}
+-
+-static void dsmark_destroy(struct Qdisc *sch)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-
+-	pr_debug("%s(sch %p,[qdisc %p])\n", __func__, sch, p);
+-
+-	tcf_block_put(p->block);
+-	qdisc_put(p->q);
+-	if (p->mv != p->embedded)
+-		kfree(p->mv);
+-}
+-
+-static int dsmark_dump_class(struct Qdisc *sch, unsigned long cl,
+-			     struct sk_buff *skb, struct tcmsg *tcm)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-	struct nlattr *opts = NULL;
+-
+-	pr_debug("%s(sch %p,[qdisc %p],class %ld\n", __func__, sch, p, cl);
+-
+-	if (!dsmark_valid_index(p, cl))
+-		return -EINVAL;
+-
+-	tcm->tcm_handle = TC_H_MAKE(TC_H_MAJ(sch->handle), cl - 1);
+-	tcm->tcm_info = p->q->handle;
+-
+-	opts = nla_nest_start_noflag(skb, TCA_OPTIONS);
+-	if (opts == NULL)
+-		goto nla_put_failure;
+-	if (nla_put_u8(skb, TCA_DSMARK_MASK, p->mv[cl - 1].mask) ||
+-	    nla_put_u8(skb, TCA_DSMARK_VALUE, p->mv[cl - 1].value))
+-		goto nla_put_failure;
+-
+-	return nla_nest_end(skb, opts);
+-
+-nla_put_failure:
+-	nla_nest_cancel(skb, opts);
+-	return -EMSGSIZE;
+-}
+-
+-static int dsmark_dump(struct Qdisc *sch, struct sk_buff *skb)
+-{
+-	struct dsmark_qdisc_data *p = qdisc_priv(sch);
+-	struct nlattr *opts = NULL;
+-
+-	opts = nla_nest_start_noflag(skb, TCA_OPTIONS);
+-	if (opts == NULL)
+-		goto nla_put_failure;
+-	if (nla_put_u16(skb, TCA_DSMARK_INDICES, p->indices))
+-		goto nla_put_failure;
+-
+-	if (p->default_index != NO_DEFAULT_INDEX &&
+-	    nla_put_u16(skb, TCA_DSMARK_DEFAULT_INDEX, p->default_index))
+-		goto nla_put_failure;
+-
+-	if (p->set_tc_index &&
+-	    nla_put_flag(skb, TCA_DSMARK_SET_TC_INDEX))
+-		goto nla_put_failure;
+-
+-	return nla_nest_end(skb, opts);
+-
+-nla_put_failure:
+-	nla_nest_cancel(skb, opts);
+-	return -EMSGSIZE;
+-}
+-
+-static const struct Qdisc_class_ops dsmark_class_ops = {
+-	.graft		=	dsmark_graft,
+-	.leaf		=	dsmark_leaf,
+-	.find		=	dsmark_find,
+-	.change		=	dsmark_change,
+-	.delete		=	dsmark_delete,
+-	.walk		=	dsmark_walk,
+-	.tcf_block	=	dsmark_tcf_block,
+-	.bind_tcf	=	dsmark_bind_filter,
+-	.unbind_tcf	=	dsmark_unbind_filter,
+-	.dump		=	dsmark_dump_class,
+-};
+-
+-static struct Qdisc_ops dsmark_qdisc_ops __read_mostly = {
+-	.next		=	NULL,
+-	.cl_ops		=	&dsmark_class_ops,
+-	.id		=	"dsmark",
+-	.priv_size	=	sizeof(struct dsmark_qdisc_data),
+-	.enqueue	=	dsmark_enqueue,
+-	.dequeue	=	dsmark_dequeue,
+-	.peek		=	dsmark_peek,
+-	.init		=	dsmark_init,
+-	.reset		=	dsmark_reset,
+-	.destroy	=	dsmark_destroy,
+-	.change		=	NULL,
+-	.dump		=	dsmark_dump,
+-	.owner		=	THIS_MODULE,
+-};
+-
+-static int __init dsmark_module_init(void)
+-{
+-	return register_qdisc(&dsmark_qdisc_ops);
+-}
+-
+-static void __exit dsmark_module_exit(void)
+-{
+-	unregister_qdisc(&dsmark_qdisc_ops);
+-}
+-
+-module_init(dsmark_module_init)
+-module_exit(dsmark_module_exit)
+-
+-MODULE_LICENSE("GPL");
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 7ee3c8b03a39e..2bbacd9b97e56 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -800,7 +800,7 @@ static void tls_update(struct sock *sk, struct proto *p,
+ 	}
+ }
+ 
+-static int tls_get_info(const struct sock *sk, struct sk_buff *skb)
++static int tls_get_info(struct sock *sk, struct sk_buff *skb)
+ {
+ 	u16 version, cipher_type;
+ 	struct tls_context *ctx;
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index dd980438f201f..46f1c19f7c60b 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1754,6 +1754,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
+ 	struct tls_prot_info *prot = &tls_ctx->prot_info;
+ 	struct sk_psock *psock;
++	int num_async, pending;
+ 	unsigned char control = 0;
+ 	ssize_t decrypted = 0;
+ 	struct strp_msg *rxm;
+@@ -1766,8 +1767,6 @@ int tls_sw_recvmsg(struct sock *sk,
+ 	bool is_kvec = iov_iter_is_kvec(&msg->msg_iter);
+ 	bool is_peek = flags & MSG_PEEK;
+ 	bool bpf_strp_enabled;
+-	int num_async = 0;
+-	int pending;
+ 
+ 	flags |= nonblock;
+ 
+@@ -1784,17 +1783,18 @@ int tls_sw_recvmsg(struct sock *sk,
+ 	if (err < 0) {
+ 		tls_err_abort(sk, err);
+ 		goto end;
+-	} else {
+-		copied = err;
+ 	}
+ 
+-	if (len <= copied)
+-		goto recv_end;
++	copied = err;
++	if (len <= copied || (copied && control != TLS_RECORD_TYPE_DATA))
++		goto end;
+ 
+ 	target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
+ 	len = len - copied;
+ 	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
+ 
++	decrypted = 0;
++	num_async = 0;
+ 	while (len && (decrypted + copied < target || ctx->recv_pkt)) {
+ 		bool retain_skb = false;
+ 		bool zc = false;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 0ac829c8f1888..279f4977e2eed 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3595,6 +3595,7 @@ static int nl80211_dump_interface(struct sk_buff *skb, struct netlink_callback *
+ 			if_idx++;
+ 		}
+ 
++		if_start = 0;
+ 		wp_idx++;
+ 	}
+  out:
+diff --git a/scripts/bpf_helpers_doc.py b/scripts/bpf_helpers_doc.py
+index 31484377b8b11..806240dda6090 100755
+--- a/scripts/bpf_helpers_doc.py
++++ b/scripts/bpf_helpers_doc.py
+@@ -284,7 +284,7 @@ eBPF programs can have an associated license, passed along with the bytecode
+ instructions to the kernel when the programs are loaded. The format for that
+ string is identical to the one in use for kernel modules (Dual licenses, such
+ as "Dual BSD/GPL", may be used). Some helper functions are only accessible to
+-programs that are compatible with the GNU Privacy License (GPL).
++programs that are compatible with the GNU General Public License (GNU GPL).
+ 
+ In order to use such helpers, the eBPF program must be loaded with the correct
+ license string passed (via **attr**) to the **bpf**\ () system call, and this
+diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
+index 97f83c63e7652..826829e3ff7a2 100644
+--- a/sound/soc/fsl/fsl_micfil.c
++++ b/sound/soc/fsl/fsl_micfil.c
+@@ -756,18 +756,23 @@ static int fsl_micfil_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(&pdev->dev);
+ 
++	/*
++	 * Register platform component before registering cpu dai for there
++	 * is not defer probe for platform component in snd_soc_add_pcm_runtime().
++	 */
++	ret = devm_snd_dmaengine_pcm_register(&pdev->dev, NULL, 0);
++	if (ret) {
++		dev_err(&pdev->dev, "failed to pcm register\n");
++		return ret;
++	}
++
+ 	ret = devm_snd_soc_register_component(&pdev->dev, &fsl_micfil_component,
+ 					      &fsl_micfil_dai, 1);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to register component %s\n",
+ 			fsl_micfil_component.name);
+-		return ret;
+ 	}
+ 
+-	ret = devm_snd_dmaengine_pcm_register(&pdev->dev, NULL, 0);
+-	if (ret)
+-		dev_err(&pdev->dev, "failed to pcm register\n");
+-
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/intel/boards/bytcht_es8316.c b/sound/soc/intel/boards/bytcht_es8316.c
+index 81269ed5a2aaa..03b9cdbd3170f 100644
+--- a/sound/soc/intel/boards/bytcht_es8316.c
++++ b/sound/soc/intel/boards/bytcht_es8316.c
+@@ -37,6 +37,7 @@ struct byt_cht_es8316_private {
+ 	struct clk *mclk;
+ 	struct snd_soc_jack jack;
+ 	struct gpio_desc *speaker_en_gpio;
++	struct device *codec_dev;
+ 	bool speaker_en;
+ };
+ 
+@@ -549,9 +550,10 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* get speaker enable GPIO */
+-	codec_dev = bus_find_device_by_name(&i2c_bus_type, NULL, codec_name);
++	codec_dev = acpi_get_first_physical_node(adev);
+ 	if (!codec_dev)
+ 		return -EPROBE_DEFER;
++	priv->codec_dev = get_device(codec_dev);
+ 
+ 	if (quirk & BYT_CHT_ES8316_JD_INVERTED)
+ 		props[cnt++] = PROPERTY_ENTRY_BOOL("everest,jack-detect-inverted");
+@@ -569,7 +571,6 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
+ 		gpiod_get_index(codec_dev, "speaker-enable", 0,
+ 				/* see comment in byt_cht_es8316_resume */
+ 				GPIOD_OUT_LOW | GPIOD_FLAGS_BIT_NONEXCLUSIVE);
+-	put_device(codec_dev);
+ 
+ 	if (IS_ERR(priv->speaker_en_gpio)) {
+ 		ret = PTR_ERR(priv->speaker_en_gpio);
+@@ -581,7 +582,7 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
+ 			dev_err(dev, "get speaker GPIO failed: %d\n", ret);
+ 			fallthrough;
+ 		case -EPROBE_DEFER:
+-			return ret;
++			goto err_put_codec;
+ 		}
+ 	}
+ 
+@@ -604,10 +605,14 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		gpiod_put(priv->speaker_en_gpio);
+ 		dev_err(dev, "snd_soc_register_card failed: %d\n", ret);
+-		return ret;
++		goto err_put_codec;
+ 	}
+ 	platform_set_drvdata(pdev, &byt_cht_es8316_card);
+ 	return 0;
++
++err_put_codec:
++	put_device(priv->codec_dev);
++	return ret;
+ }
+ 
+ static int snd_byt_cht_es8316_mc_remove(struct platform_device *pdev)
+@@ -616,6 +621,7 @@ static int snd_byt_cht_es8316_mc_remove(struct platform_device *pdev)
+ 	struct byt_cht_es8316_private *priv = snd_soc_card_get_drvdata(card);
+ 
+ 	gpiod_put(priv->speaker_en_gpio);
++	put_device(priv->codec_dev);
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 9a5ab96f917d3..f5b1b3b876980 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -86,6 +86,7 @@ enum {
+ struct byt_rt5640_private {
+ 	struct snd_soc_jack jack;
+ 	struct clk *mclk;
++	struct device *codec_dev;
+ };
+ static bool is_bytcr;
+ 
+@@ -941,15 +942,11 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+  * Note this MUST be called before snd_soc_register_card(), so that the props
+  * are in place before the codec component driver's probe function parses them.
+  */
+-static int byt_rt5640_add_codec_device_props(const char *i2c_dev_name)
++static int byt_rt5640_add_codec_device_props(struct device *i2c_dev,
++					     struct byt_rt5640_private *priv)
+ {
+ 	struct property_entry props[MAX_NO_PROPS] = {};
+-	struct device *i2c_dev;
+-	int ret, cnt = 0;
+-
+-	i2c_dev = bus_find_device_by_name(&i2c_bus_type, NULL, i2c_dev_name);
+-	if (!i2c_dev)
+-		return -EPROBE_DEFER;
++	int cnt = 0;
+ 
+ 	switch (BYT_RT5640_MAP(byt_rt5640_quirk)) {
+ 	case BYT_RT5640_DMIC1_MAP:
+@@ -989,10 +986,7 @@ static int byt_rt5640_add_codec_device_props(const char *i2c_dev_name)
+ 	if (byt_rt5640_quirk & BYT_RT5640_JD_NOT_INV)
+ 		props[cnt++] = PROPERTY_ENTRY_BOOL("realtek,jack-detect-not-inverted");
+ 
+-	ret = device_add_properties(i2c_dev, props);
+-	put_device(i2c_dev);
+-
+-	return ret;
++	return device_add_properties(i2c_dev, props);
+ }
+ 
+ static int byt_rt5640_init(struct snd_soc_pcm_runtime *runtime)
+@@ -1324,6 +1318,7 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
+ 	struct snd_soc_acpi_mach *mach;
+ 	const char *platform_name;
+ 	struct acpi_device *adev;
++	struct device *codec_dev;
+ 	int ret_val = 0;
+ 	int dai_index = 0;
+ 	int i, cfg_spk;
+@@ -1430,10 +1425,15 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
+ 		byt_rt5640_quirk = quirk_override;
+ 	}
+ 
++	codec_dev = acpi_get_first_physical_node(adev);
++	if (!codec_dev)
++		return -EPROBE_DEFER;
++	priv->codec_dev = get_device(codec_dev);
++
+ 	/* Must be called before register_card, also see declaration comment. */
+-	ret_val = byt_rt5640_add_codec_device_props(byt_rt5640_codec_name);
++	ret_val = byt_rt5640_add_codec_device_props(codec_dev, priv);
+ 	if (ret_val)
+-		return ret_val;
++		goto err;
+ 
+ 	log_quirks(&pdev->dev);
+ 
+@@ -1460,7 +1460,7 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
+ 			 * for all other errors, including -EPROBE_DEFER
+ 			 */
+ 			if (ret_val != -ENOENT)
+-				return ret_val;
++				goto err;
+ 			byt_rt5640_quirk &= ~BYT_RT5640_MCLK_EN;
+ 		}
+ 	}
+@@ -1493,17 +1493,30 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
+ 	ret_val = snd_soc_fixup_dai_links_platform_name(&byt_rt5640_card,
+ 							platform_name);
+ 	if (ret_val)
+-		return ret_val;
++		goto err;
+ 
+ 	ret_val = devm_snd_soc_register_card(&pdev->dev, &byt_rt5640_card);
+ 
+ 	if (ret_val) {
+ 		dev_err(&pdev->dev, "devm_snd_soc_register_card failed %d\n",
+ 			ret_val);
+-		return ret_val;
++		goto err;
+ 	}
+ 	platform_set_drvdata(pdev, &byt_rt5640_card);
+ 	return ret_val;
++
++err:
++	put_device(priv->codec_dev);
++	return ret_val;
++}
++
++static int snd_byt_rt5640_mc_remove(struct platform_device *pdev)
++{
++	struct snd_soc_card *card = platform_get_drvdata(pdev);
++	struct byt_rt5640_private *priv = snd_soc_card_get_drvdata(card);
++
++	put_device(priv->codec_dev);
++	return 0;
+ }
+ 
+ static struct platform_driver snd_byt_rt5640_mc_driver = {
+@@ -1514,6 +1527,7 @@ static struct platform_driver snd_byt_rt5640_mc_driver = {
+ #endif
+ 	},
+ 	.probe = snd_byt_rt5640_mc_probe,
++	.remove = snd_byt_rt5640_mc_remove,
+ };
+ 
+ module_platform_driver(snd_byt_rt5640_mc_driver);
+diff --git a/sound/soc/intel/boards/bytcr_rt5651.c b/sound/soc/intel/boards/bytcr_rt5651.c
+index bf8b87d45cb0a..a8289f74463e9 100644
+--- a/sound/soc/intel/boards/bytcr_rt5651.c
++++ b/sound/soc/intel/boards/bytcr_rt5651.c
+@@ -85,6 +85,7 @@ struct byt_rt5651_private {
+ 	struct gpio_desc *ext_amp_gpio;
+ 	struct gpio_desc *hp_detect;
+ 	struct snd_soc_jack jack;
++	struct device *codec_dev;
+ };
+ 
+ static const struct acpi_gpio_mapping *byt_rt5651_gpios;
+@@ -918,17 +919,17 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
+ 	if (adev) {
+ 		snprintf(byt_rt5651_codec_name, sizeof(byt_rt5651_codec_name),
+ 			 "i2c-%s", acpi_dev_name(adev));
+-		put_device(&adev->dev);
+ 		byt_rt5651_dais[dai_index].codecs->name = byt_rt5651_codec_name;
+ 	} else {
+ 		dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
+ 		return -ENODEV;
+ 	}
+ 
+-	codec_dev = bus_find_device_by_name(&i2c_bus_type, NULL,
+-					    byt_rt5651_codec_name);
++	codec_dev = acpi_get_first_physical_node(adev);
++	acpi_dev_put(adev);
+ 	if (!codec_dev)
+ 		return -EPROBE_DEFER;
++	priv->codec_dev = get_device(codec_dev);
+ 
+ 	/*
+ 	 * swap SSP0 if bytcr is detected
+@@ -997,10 +998,8 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
+ 
+ 	/* Must be called before register_card, also see declaration comment. */
+ 	ret_val = byt_rt5651_add_codec_device_props(codec_dev);
+-	if (ret_val) {
+-		put_device(codec_dev);
+-		return ret_val;
+-	}
++	if (ret_val)
++		goto err;
+ 
+ 	/* Cherry Trail devices use an external amplifier enable gpio */
+ 	if (soc_intel_is_cht() && !byt_rt5651_gpios)
+@@ -1024,8 +1023,7 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
+ 					ret_val);
+ 				fallthrough;
+ 			case -EPROBE_DEFER:
+-				put_device(codec_dev);
+-				return ret_val;
++				goto err;
+ 			}
+ 		}
+ 		priv->hp_detect = devm_fwnode_gpiod_get(&pdev->dev,
+@@ -1044,14 +1042,11 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
+ 					ret_val);
+ 				fallthrough;
+ 			case -EPROBE_DEFER:
+-				put_device(codec_dev);
+-				return ret_val;
++				goto err;
+ 			}
+ 		}
+ 	}
+ 
+-	put_device(codec_dev);
+-
+ 	log_quirks(&pdev->dev);
+ 
+ 	if ((byt_rt5651_quirk & BYT_RT5651_SSP2_AIF2) ||
+@@ -1075,7 +1070,7 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
+ 			 * for all other errors, including -EPROBE_DEFER
+ 			 */
+ 			if (ret_val != -ENOENT)
+-				return ret_val;
++				goto err;
+ 			byt_rt5651_quirk &= ~BYT_RT5651_MCLK_EN;
+ 		}
+ 	}
+@@ -1104,17 +1099,30 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
+ 	ret_val = snd_soc_fixup_dai_links_platform_name(&byt_rt5651_card,
+ 							platform_name);
+ 	if (ret_val)
+-		return ret_val;
++		goto err;
+ 
+ 	ret_val = devm_snd_soc_register_card(&pdev->dev, &byt_rt5651_card);
+ 
+ 	if (ret_val) {
+ 		dev_err(&pdev->dev, "devm_snd_soc_register_card failed %d\n",
+ 			ret_val);
+-		return ret_val;
++		goto err;
+ 	}
+ 	platform_set_drvdata(pdev, &byt_rt5651_card);
+ 	return ret_val;
++
++err:
++	put_device(priv->codec_dev);
++	return ret_val;
++}
++
++static int snd_byt_rt5651_mc_remove(struct platform_device *pdev)
++{
++	struct snd_soc_card *card = platform_get_drvdata(pdev);
++	struct byt_rt5651_private *priv = snd_soc_card_get_drvdata(card);
++
++	put_device(priv->codec_dev);
++	return 0;
+ }
+ 
+ static struct platform_driver snd_byt_rt5651_mc_driver = {
+@@ -1125,6 +1133,7 @@ static struct platform_driver snd_byt_rt5651_mc_driver = {
+ #endif
+ 	},
+ 	.probe = snd_byt_rt5651_mc_probe,
++	.remove = snd_byt_rt5651_mc_remove,
+ };
+ 
+ module_platform_driver(snd_byt_rt5651_mc_driver);
+diff --git a/sound/soc/sunxi/sun4i-spdif.c b/sound/soc/sunxi/sun4i-spdif.c
+index 228485fe07342..6dcad1aa25037 100644
+--- a/sound/soc/sunxi/sun4i-spdif.c
++++ b/sound/soc/sunxi/sun4i-spdif.c
+@@ -464,6 +464,11 @@ static const struct of_device_id sun4i_spdif_of_match[] = {
+ 		.compatible = "allwinner,sun50i-h6-spdif",
+ 		.data = &sun50i_h6_spdif_quirks,
+ 	},
++	{
++		.compatible = "allwinner,sun50i-h616-spdif",
++		/* Essentially the same as the H6, but without RX */
++		.data = &sun50i_h6_spdif_quirks,
++	},
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, sun4i_spdif_of_match);


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-03-06 18:09 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-03-06 18:09 UTC (permalink / raw
  To: gentoo-commits

commit:     daa429d9e46fa8d87a88d9388e912c537dbb4161
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar  6 18:09:12 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar  6 18:09:12 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=daa429d9

Linux patch 5.10.212

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1211_linux-5.10.212.patch | 1357 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1361 insertions(+)

diff --git a/0000_README b/0000_README
index bcc82e6d..ff3682f8 100644
--- a/0000_README
+++ b/0000_README
@@ -887,6 +887,10 @@ Patch:  1210_linux-5.10.211.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.211
 
+Patch:  1211_linux-5.10.212.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.212
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1211_linux-5.10.212.patch b/1211_linux-5.10.212.patch
new file mode 100644
index 00000000..89fddcdc
--- /dev/null
+++ b/1211_linux-5.10.212.patch
@@ -0,0 +1,1357 @@
+diff --git a/Makefile b/Makefile
+index dc55a86e0f7df..d7ec0be4cd791 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 211
++SUBLEVEL = 212
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index b16304fdf4489..5ab13570daa53 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -44,7 +44,7 @@
+  * Define vmemmap for pfn_to_page & page_to_pfn calls. Needed if kernel
+  * is configured with CONFIG_SPARSEMEM_VMEMMAP enabled.
+  */
+-#define vmemmap		((struct page *)VMEMMAP_START)
++#define vmemmap		((struct page *)VMEMMAP_START - (phys_ram_base >> PAGE_SHIFT))
+ 
+ #define PCI_IO_SIZE      SZ_16M
+ #define PCI_IO_END       VMEMMAP_START
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index c6ad53e38f653..a7a8c7731c1a4 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -178,6 +178,90 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
+ 	return false;
+ }
+ 
++#define MSR_IA32_TME_ACTIVATE		0x982
++
++/* Helpers to access TME_ACTIVATE MSR */
++#define TME_ACTIVATE_LOCKED(x)		(x & 0x1)
++#define TME_ACTIVATE_ENABLED(x)		(x & 0x2)
++
++#define TME_ACTIVATE_POLICY(x)		((x >> 4) & 0xf)	/* Bits 7:4 */
++#define TME_ACTIVATE_POLICY_AES_XTS_128	0
++
++#define TME_ACTIVATE_KEYID_BITS(x)	((x >> 32) & 0xf)	/* Bits 35:32 */
++
++#define TME_ACTIVATE_CRYPTO_ALGS(x)	((x >> 48) & 0xffff)	/* Bits 63:48 */
++#define TME_ACTIVATE_CRYPTO_AES_XTS_128	1
++
++/* Values for mktme_status (SW only construct) */
++#define MKTME_ENABLED			0
++#define MKTME_DISABLED			1
++#define MKTME_UNINITIALIZED		2
++static int mktme_status = MKTME_UNINITIALIZED;
++
++static void detect_tme_early(struct cpuinfo_x86 *c)
++{
++	u64 tme_activate, tme_policy, tme_crypto_algs;
++	int keyid_bits = 0, nr_keyids = 0;
++	static u64 tme_activate_cpu0 = 0;
++
++	rdmsrl(MSR_IA32_TME_ACTIVATE, tme_activate);
++
++	if (mktme_status != MKTME_UNINITIALIZED) {
++		if (tme_activate != tme_activate_cpu0) {
++			/* Broken BIOS? */
++			pr_err_once("x86/tme: configuration is inconsistent between CPUs\n");
++			pr_err_once("x86/tme: MKTME is not usable\n");
++			mktme_status = MKTME_DISABLED;
++
++			/* Proceed. We may need to exclude bits from x86_phys_bits. */
++		}
++	} else {
++		tme_activate_cpu0 = tme_activate;
++	}
++
++	if (!TME_ACTIVATE_LOCKED(tme_activate) || !TME_ACTIVATE_ENABLED(tme_activate)) {
++		pr_info_once("x86/tme: not enabled by BIOS\n");
++		mktme_status = MKTME_DISABLED;
++		return;
++	}
++
++	if (mktme_status != MKTME_UNINITIALIZED)
++		goto detect_keyid_bits;
++
++	pr_info("x86/tme: enabled by BIOS\n");
++
++	tme_policy = TME_ACTIVATE_POLICY(tme_activate);
++	if (tme_policy != TME_ACTIVATE_POLICY_AES_XTS_128)
++		pr_warn("x86/tme: Unknown policy is active: %#llx\n", tme_policy);
++
++	tme_crypto_algs = TME_ACTIVATE_CRYPTO_ALGS(tme_activate);
++	if (!(tme_crypto_algs & TME_ACTIVATE_CRYPTO_AES_XTS_128)) {
++		pr_err("x86/mktme: No known encryption algorithm is supported: %#llx\n",
++				tme_crypto_algs);
++		mktme_status = MKTME_DISABLED;
++	}
++detect_keyid_bits:
++	keyid_bits = TME_ACTIVATE_KEYID_BITS(tme_activate);
++	nr_keyids = (1UL << keyid_bits) - 1;
++	if (nr_keyids) {
++		pr_info_once("x86/mktme: enabled by BIOS\n");
++		pr_info_once("x86/mktme: %d KeyIDs available\n", nr_keyids);
++	} else {
++		pr_info_once("x86/mktme: disabled by BIOS\n");
++	}
++
++	if (mktme_status == MKTME_UNINITIALIZED) {
++		/* MKTME is usable */
++		mktme_status = MKTME_ENABLED;
++	}
++
++	/*
++	 * KeyID bits effectively lower the number of physical address
++	 * bits.  Update cpuinfo_x86::x86_phys_bits accordingly.
++	 */
++	c->x86_phys_bits -= keyid_bits;
++}
++
+ static void early_init_intel(struct cpuinfo_x86 *c)
+ {
+ 	u64 misc_enable;
+@@ -329,6 +413,13 @@ static void early_init_intel(struct cpuinfo_x86 *c)
+ 	 */
+ 	if (detect_extended_topology_early(c) < 0)
+ 		detect_ht_early(c);
++
++	/*
++	 * Adjust the number of physical bits early because it affects the
++	 * valid bits of the MTRR mask registers.
++	 */
++	if (cpu_has(c, X86_FEATURE_TME))
++		detect_tme_early(c);
+ }
+ 
+ static void bsp_init_intel(struct cpuinfo_x86 *c)
+@@ -489,90 +580,6 @@ static void srat_detect_node(struct cpuinfo_x86 *c)
+ #endif
+ }
+ 
+-#define MSR_IA32_TME_ACTIVATE		0x982
+-
+-/* Helpers to access TME_ACTIVATE MSR */
+-#define TME_ACTIVATE_LOCKED(x)		(x & 0x1)
+-#define TME_ACTIVATE_ENABLED(x)		(x & 0x2)
+-
+-#define TME_ACTIVATE_POLICY(x)		((x >> 4) & 0xf)	/* Bits 7:4 */
+-#define TME_ACTIVATE_POLICY_AES_XTS_128	0
+-
+-#define TME_ACTIVATE_KEYID_BITS(x)	((x >> 32) & 0xf)	/* Bits 35:32 */
+-
+-#define TME_ACTIVATE_CRYPTO_ALGS(x)	((x >> 48) & 0xffff)	/* Bits 63:48 */
+-#define TME_ACTIVATE_CRYPTO_AES_XTS_128	1
+-
+-/* Values for mktme_status (SW only construct) */
+-#define MKTME_ENABLED			0
+-#define MKTME_DISABLED			1
+-#define MKTME_UNINITIALIZED		2
+-static int mktme_status = MKTME_UNINITIALIZED;
+-
+-static void detect_tme(struct cpuinfo_x86 *c)
+-{
+-	u64 tme_activate, tme_policy, tme_crypto_algs;
+-	int keyid_bits = 0, nr_keyids = 0;
+-	static u64 tme_activate_cpu0 = 0;
+-
+-	rdmsrl(MSR_IA32_TME_ACTIVATE, tme_activate);
+-
+-	if (mktme_status != MKTME_UNINITIALIZED) {
+-		if (tme_activate != tme_activate_cpu0) {
+-			/* Broken BIOS? */
+-			pr_err_once("x86/tme: configuration is inconsistent between CPUs\n");
+-			pr_err_once("x86/tme: MKTME is not usable\n");
+-			mktme_status = MKTME_DISABLED;
+-
+-			/* Proceed. We may need to exclude bits from x86_phys_bits. */
+-		}
+-	} else {
+-		tme_activate_cpu0 = tme_activate;
+-	}
+-
+-	if (!TME_ACTIVATE_LOCKED(tme_activate) || !TME_ACTIVATE_ENABLED(tme_activate)) {
+-		pr_info_once("x86/tme: not enabled by BIOS\n");
+-		mktme_status = MKTME_DISABLED;
+-		return;
+-	}
+-
+-	if (mktme_status != MKTME_UNINITIALIZED)
+-		goto detect_keyid_bits;
+-
+-	pr_info("x86/tme: enabled by BIOS\n");
+-
+-	tme_policy = TME_ACTIVATE_POLICY(tme_activate);
+-	if (tme_policy != TME_ACTIVATE_POLICY_AES_XTS_128)
+-		pr_warn("x86/tme: Unknown policy is active: %#llx\n", tme_policy);
+-
+-	tme_crypto_algs = TME_ACTIVATE_CRYPTO_ALGS(tme_activate);
+-	if (!(tme_crypto_algs & TME_ACTIVATE_CRYPTO_AES_XTS_128)) {
+-		pr_err("x86/mktme: No known encryption algorithm is supported: %#llx\n",
+-				tme_crypto_algs);
+-		mktme_status = MKTME_DISABLED;
+-	}
+-detect_keyid_bits:
+-	keyid_bits = TME_ACTIVATE_KEYID_BITS(tme_activate);
+-	nr_keyids = (1UL << keyid_bits) - 1;
+-	if (nr_keyids) {
+-		pr_info_once("x86/mktme: enabled by BIOS\n");
+-		pr_info_once("x86/mktme: %d KeyIDs available\n", nr_keyids);
+-	} else {
+-		pr_info_once("x86/mktme: disabled by BIOS\n");
+-	}
+-
+-	if (mktme_status == MKTME_UNINITIALIZED) {
+-		/* MKTME is usable */
+-		mktme_status = MKTME_ENABLED;
+-	}
+-
+-	/*
+-	 * KeyID bits effectively lower the number of physical address
+-	 * bits.  Update cpuinfo_x86::x86_phys_bits accordingly.
+-	 */
+-	c->x86_phys_bits -= keyid_bits;
+-}
+-
+ static void init_cpuid_fault(struct cpuinfo_x86 *c)
+ {
+ 	u64 msr;
+@@ -708,9 +715,6 @@ static void init_intel(struct cpuinfo_x86 *c)
+ 
+ 	init_ia32_feat_ctl(c);
+ 
+-	if (cpu_has(c, X86_FEATURE_TME))
+-		detect_tme(c);
+-
+ 	init_intel_misc_features(c);
+ 
+ 	if (tsx_ctrl_state == TSX_CTRL_ENABLE)
+diff --git a/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c b/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
+index 2cfc36d141c07..c58fac5748359 100644
+--- a/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
++++ b/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
+@@ -101,7 +101,8 @@ static void virtio_crypto_dataq_akcipher_callback(struct virtio_crypto_request *
+ }
+ 
+ static int virtio_crypto_alg_akcipher_init_session(struct virtio_crypto_akcipher_ctx *ctx,
+-		struct virtio_crypto_ctrl_header *header, void *para,
++		struct virtio_crypto_ctrl_header *header,
++		struct virtio_crypto_akcipher_session_para *para,
+ 		const uint8_t *key, unsigned int keylen)
+ {
+ 	struct scatterlist outhdr_sg, key_sg, inhdr_sg, *sgs[3];
+@@ -125,7 +126,7 @@ static int virtio_crypto_alg_akcipher_init_session(struct virtio_crypto_akcipher
+ 
+ 	ctrl = &vc_ctrl_req->ctrl;
+ 	memcpy(&ctrl->header, header, sizeof(ctrl->header));
+-	memcpy(&ctrl->u, para, sizeof(ctrl->u));
++	memcpy(&ctrl->u.akcipher_create_session.para, para, sizeof(*para));
+ 	input = &vc_ctrl_req->input;
+ 	input->status = cpu_to_le32(VIRTIO_CRYPTO_ERR);
+ 
+diff --git a/drivers/dma/fsl-qdma.c b/drivers/dma/fsl-qdma.c
+index f383f219ed008..7082a5a6814a4 100644
+--- a/drivers/dma/fsl-qdma.c
++++ b/drivers/dma/fsl-qdma.c
+@@ -109,6 +109,7 @@
+ #define FSL_QDMA_CMD_WTHROTL_OFFSET	20
+ #define FSL_QDMA_CMD_DSEN_OFFSET	19
+ #define FSL_QDMA_CMD_LWC_OFFSET		16
++#define FSL_QDMA_CMD_PF			BIT(17)
+ 
+ /* Field definition for Descriptor status */
+ #define QDMA_CCDF_STATUS_RTE		BIT(5)
+@@ -384,7 +385,8 @@ static void fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp,
+ 	qdma_csgf_set_f(csgf_dest, len);
+ 	/* Descriptor Buffer */
+ 	cmd = cpu_to_le32(FSL_QDMA_CMD_RWTTYPE <<
+-			  FSL_QDMA_CMD_RWTTYPE_OFFSET);
++			  FSL_QDMA_CMD_RWTTYPE_OFFSET) |
++			  FSL_QDMA_CMD_PF;
+ 	sdf->data = QDMA_SDDF_CMD(cmd);
+ 
+ 	cmd = cpu_to_le32(FSL_QDMA_CMD_RWTTYPE <<
+@@ -1201,10 +1203,6 @@ static int fsl_qdma_probe(struct platform_device *pdev)
+ 	if (!fsl_qdma->queue)
+ 		return -ENOMEM;
+ 
+-	ret = fsl_qdma_irq_init(pdev, fsl_qdma);
+-	if (ret)
+-		return ret;
+-
+ 	fsl_qdma->irq_base = platform_get_irq_byname(pdev, "qdma-queue0");
+ 	if (fsl_qdma->irq_base < 0)
+ 		return fsl_qdma->irq_base;
+@@ -1243,16 +1241,19 @@ static int fsl_qdma_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, fsl_qdma);
+ 
+-	ret = dma_async_device_register(&fsl_qdma->dma_dev);
++	ret = fsl_qdma_reg_init(fsl_qdma);
+ 	if (ret) {
+-		dev_err(&pdev->dev,
+-			"Can't register NXP Layerscape qDMA engine.\n");
++		dev_err(&pdev->dev, "Can't Initialize the qDMA engine.\n");
+ 		return ret;
+ 	}
+ 
+-	ret = fsl_qdma_reg_init(fsl_qdma);
++	ret = fsl_qdma_irq_init(pdev, fsl_qdma);
++	if (ret)
++		return ret;
++
++	ret = dma_async_device_register(&fsl_qdma->dma_dev);
+ 	if (ret) {
+-		dev_err(&pdev->dev, "Can't Initialize the qDMA engine.\n");
++		dev_err(&pdev->dev, "Can't register NXP Layerscape qDMA engine.\n");
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/firmware/efi/capsule-loader.c b/drivers/firmware/efi/capsule-loader.c
+index 3e8d4b51a8140..97bafb5f70389 100644
+--- a/drivers/firmware/efi/capsule-loader.c
++++ b/drivers/firmware/efi/capsule-loader.c
+@@ -292,7 +292,7 @@ static int efi_capsule_open(struct inode *inode, struct file *file)
+ 		return -ENOMEM;
+ 	}
+ 
+-	cap_info->phys = kzalloc(sizeof(void *), GFP_KERNEL);
++	cap_info->phys = kzalloc(sizeof(phys_addr_t), GFP_KERNEL);
+ 	if (!cap_info->phys) {
+ 		kfree(cap_info->pages);
+ 		kfree(cap_info);
+diff --git a/drivers/gpio/gpio-74x164.c b/drivers/gpio/gpio-74x164.c
+index 05637d5851526..1b50470c4e381 100644
+--- a/drivers/gpio/gpio-74x164.c
++++ b/drivers/gpio/gpio-74x164.c
+@@ -127,8 +127,6 @@ static int gen_74x164_probe(struct spi_device *spi)
+ 	if (IS_ERR(chip->gpiod_oe))
+ 		return PTR_ERR(chip->gpiod_oe);
+ 
+-	gpiod_set_value_cansleep(chip->gpiod_oe, 1);
+-
+ 	spi_set_drvdata(spi, chip);
+ 
+ 	chip->gpio_chip.label = spi->modalias;
+@@ -153,6 +151,8 @@ static int gen_74x164_probe(struct spi_device *spi)
+ 		goto exit_destroy;
+ 	}
+ 
++	gpiod_set_value_cansleep(chip->gpiod_oe, 1);
++
+ 	ret = gpiochip_add_data(&chip->gpio_chip, chip);
+ 	if (!ret)
+ 		return 0;
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index d10f621085e2e..374bb9f432660 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -732,11 +732,11 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
+ 
+ 	ret = gpiochip_irqchip_init_valid_mask(gc);
+ 	if (ret)
+-		goto err_remove_acpi_chip;
++		goto err_free_hogs;
+ 
+ 	ret = gpiochip_irqchip_init_hw(gc);
+ 	if (ret)
+-		goto err_remove_acpi_chip;
++		goto err_remove_irqchip_mask;
+ 
+ 	ret = gpiochip_add_irqchip(gc, lock_key, request_key);
+ 	if (ret)
+@@ -761,13 +761,13 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
+ 	gpiochip_irqchip_remove(gc);
+ err_remove_irqchip_mask:
+ 	gpiochip_irqchip_free_valid_mask(gc);
+-err_remove_acpi_chip:
++err_free_hogs:
++	gpiochip_free_hogs(gc);
+ 	acpi_gpiochip_remove(gc);
++	gpiochip_remove_pin_ranges(gc);
+ err_remove_of_chip:
+-	gpiochip_free_hogs(gc);
+ 	of_gpiochip_remove(gc);
+ err_free_gpiochip_mask:
+-	gpiochip_remove_pin_ranges(gc);
+ 	gpiochip_free_valid_mask(gc);
+ err_remove_from_list:
+ 	spin_lock_irqsave(&gpio_lock, flags);
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index 87807ef010a96..2059cd226cbdc 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -997,10 +997,12 @@ static int mmc_select_bus_width(struct mmc_card *card)
+ 	static unsigned ext_csd_bits[] = {
+ 		EXT_CSD_BUS_WIDTH_8,
+ 		EXT_CSD_BUS_WIDTH_4,
++		EXT_CSD_BUS_WIDTH_1,
+ 	};
+ 	static unsigned bus_widths[] = {
+ 		MMC_BUS_WIDTH_8,
+ 		MMC_BUS_WIDTH_4,
++		MMC_BUS_WIDTH_1,
+ 	};
+ 	struct mmc_host *host = card->host;
+ 	unsigned idx, bus_width = 0;
+diff --git a/drivers/mmc/host/sdhci-xenon-phy.c b/drivers/mmc/host/sdhci-xenon-phy.c
+index 03ce57ef45858..53bf9870cd71b 100644
+--- a/drivers/mmc/host/sdhci-xenon-phy.c
++++ b/drivers/mmc/host/sdhci-xenon-phy.c
+@@ -11,6 +11,7 @@
+ #include <linux/slab.h>
+ #include <linux/delay.h>
+ #include <linux/ktime.h>
++#include <linux/iopoll.h>
+ #include <linux/of_address.h>
+ 
+ #include "sdhci-pltfm.h"
+@@ -109,6 +110,8 @@
+ #define XENON_EMMC_PHY_LOGIC_TIMING_ADJUST	(XENON_EMMC_PHY_REG_BASE + 0x18)
+ #define XENON_LOGIC_TIMING_VALUE		0x00AA8977
+ 
++#define XENON_MAX_PHY_TIMEOUT_LOOPS		100
++
+ /*
+  * List offset of PHY registers and some special register values
+  * in eMMC PHY 5.0 or eMMC PHY 5.1
+@@ -216,6 +219,19 @@ static int xenon_alloc_emmc_phy(struct sdhci_host *host)
+ 	return 0;
+ }
+ 
++static int xenon_check_stability_internal_clk(struct sdhci_host *host)
++{
++	u32 reg;
++	int err;
++
++	err = read_poll_timeout(sdhci_readw, reg, reg & SDHCI_CLOCK_INT_STABLE,
++				1100, 20000, false, host, SDHCI_CLOCK_CONTROL);
++	if (err)
++		dev_err(mmc_dev(host->mmc), "phy_init: Internal clock never stabilized.\n");
++
++	return err;
++}
++
+ /*
+  * eMMC 5.0/5.1 PHY init/re-init.
+  * eMMC PHY init should be executed after:
+@@ -232,6 +248,11 @@ static int xenon_emmc_phy_init(struct sdhci_host *host)
+ 	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
+ 	struct xenon_emmc_phy_regs *phy_regs = priv->emmc_phy_regs;
+ 
++	int ret = xenon_check_stability_internal_clk(host);
++
++	if (ret)
++		return ret;
++
+ 	reg = sdhci_readl(host, phy_regs->timing_adj);
+ 	reg |= XENON_PHY_INITIALIZAION;
+ 	sdhci_writel(host, reg, phy_regs->timing_adj);
+@@ -259,18 +280,27 @@ static int xenon_emmc_phy_init(struct sdhci_host *host)
+ 	/* get the wait time */
+ 	wait /= clock;
+ 	wait++;
+-	/* wait for host eMMC PHY init completes */
+-	udelay(wait);
+ 
+-	reg = sdhci_readl(host, phy_regs->timing_adj);
+-	reg &= XENON_PHY_INITIALIZAION;
+-	if (reg) {
++	/*
++	 * AC5X spec says bit must be polled until zero.
++	 * We see cases in which timeout can take longer
++	 * than the standard calculation on AC5X, which is
++	 * expected following the spec comment above.
++	 * According to the spec, we must wait as long as
++	 * it takes for that bit to toggle on AC5X.
++	 * Cap that with 100 delay loops so we won't get
++	 * stuck here forever:
++	 */
++
++	ret = read_poll_timeout(sdhci_readl, reg,
++				!(reg & XENON_PHY_INITIALIZAION),
++				wait, XENON_MAX_PHY_TIMEOUT_LOOPS * wait,
++				false, host, phy_regs->timing_adj);
++	if (ret)
+ 		dev_err(mmc_dev(host->mmc), "eMMC PHY init cannot complete after %d us\n",
+-			wait);
+-		return -ETIMEDOUT;
+-	}
++			wait * XENON_MAX_PHY_TIMEOUT_LOOPS);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ #define ARMADA_3700_SOC_PAD_1_8V	0x1
+diff --git a/drivers/mtd/nand/spi/gigadevice.c b/drivers/mtd/nand/spi/gigadevice.c
+index 33c67403c4aa1..56d1b56615f97 100644
+--- a/drivers/mtd/nand/spi/gigadevice.c
++++ b/drivers/mtd/nand/spi/gigadevice.c
+@@ -13,7 +13,10 @@
+ #define GD5FXGQ4XA_STATUS_ECC_1_7_BITFLIPS	(1 << 4)
+ #define GD5FXGQ4XA_STATUS_ECC_8_BITFLIPS	(3 << 4)
+ 
+-#define GD5FXGQ4UEXXG_REG_STATUS2		0xf0
++#define GD5FXGQ5XE_STATUS_ECC_1_4_BITFLIPS	(1 << 4)
++#define GD5FXGQ5XE_STATUS_ECC_4_BITFLIPS	(3 << 4)
++
++#define GD5FXGQXXEXXG_REG_STATUS2		0xf0
+ 
+ #define GD5FXGQ4UXFXXG_STATUS_ECC_MASK		(7 << 4)
+ #define GD5FXGQ4UXFXXG_STATUS_ECC_NO_BITFLIPS	(0 << 4)
+@@ -36,6 +39,14 @@ static SPINAND_OP_VARIANTS(read_cache_variants_f,
+ 		SPINAND_PAGE_READ_FROM_CACHE_OP_3A(true, 0, 1, NULL, 0),
+ 		SPINAND_PAGE_READ_FROM_CACHE_OP_3A(false, 0, 0, NULL, 0));
+ 
++static SPINAND_OP_VARIANTS(read_cache_variants_1gq5,
++		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_OP(true, 0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_OP(false, 0, 1, NULL, 0));
++
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ 		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+ 		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+@@ -102,7 +113,7 @@ static int gd5fxgq4xa_ecc_get_status(struct spinand_device *spinand,
+ 	return -EINVAL;
+ }
+ 
+-static int gd5fxgq4_variant2_ooblayout_ecc(struct mtd_info *mtd, int section,
++static int gd5fxgqx_variant2_ooblayout_ecc(struct mtd_info *mtd, int section,
+ 				       struct mtd_oob_region *region)
+ {
+ 	if (section)
+@@ -114,7 +125,7 @@ static int gd5fxgq4_variant2_ooblayout_ecc(struct mtd_info *mtd, int section,
+ 	return 0;
+ }
+ 
+-static int gd5fxgq4_variant2_ooblayout_free(struct mtd_info *mtd, int section,
++static int gd5fxgqx_variant2_ooblayout_free(struct mtd_info *mtd, int section,
+ 					struct mtd_oob_region *region)
+ {
+ 	if (section)
+@@ -127,9 +138,10 @@ static int gd5fxgq4_variant2_ooblayout_free(struct mtd_info *mtd, int section,
+ 	return 0;
+ }
+ 
+-static const struct mtd_ooblayout_ops gd5fxgq4_variant2_ooblayout = {
+-	.ecc = gd5fxgq4_variant2_ooblayout_ecc,
+-	.free = gd5fxgq4_variant2_ooblayout_free,
++/* Valid for Q4/Q5 and Q6 (untested) devices */
++static const struct mtd_ooblayout_ops gd5fxgqx_variant2_ooblayout = {
++	.ecc = gd5fxgqx_variant2_ooblayout_ecc,
++	.free = gd5fxgqx_variant2_ooblayout_free,
+ };
+ 
+ static int gd5fxgq4xc_ooblayout_256_ecc(struct mtd_info *mtd, int section,
+@@ -165,8 +177,8 @@ static int gd5fxgq4uexxg_ecc_get_status(struct spinand_device *spinand,
+ 					u8 status)
+ {
+ 	u8 status2;
+-	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(GD5FXGQ4UEXXG_REG_STATUS2,
+-						      &status2);
++	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(GD5FXGQXXEXXG_REG_STATUS2,
++						      spinand->scratchbuf);
+ 	int ret;
+ 
+ 	switch (status & STATUS_ECC_MASK) {
+@@ -187,6 +199,7 @@ static int gd5fxgq4uexxg_ecc_get_status(struct spinand_device *spinand,
+ 		 * report the maximum of 4 in this case
+ 		 */
+ 		/* bits sorted this way (3...0): ECCS1,ECCS0,ECCSE1,ECCSE0 */
++		status2 = *(spinand->scratchbuf);
+ 		return ((status & STATUS_ECC_MASK) >> 2) |
+ 			((status2 & STATUS_ECC_MASK) >> 4);
+ 
+@@ -203,6 +216,44 @@ static int gd5fxgq4uexxg_ecc_get_status(struct spinand_device *spinand,
+ 	return -EINVAL;
+ }
+ 
++static int gd5fxgq5xexxg_ecc_get_status(struct spinand_device *spinand,
++					u8 status)
++{
++	u8 status2;
++	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(GD5FXGQXXEXXG_REG_STATUS2,
++						      spinand->scratchbuf);
++	int ret;
++
++	switch (status & STATUS_ECC_MASK) {
++	case STATUS_ECC_NO_BITFLIPS:
++		return 0;
++
++	case GD5FXGQ5XE_STATUS_ECC_1_4_BITFLIPS:
++		/*
++		 * Read status2 register to determine a more fine grained
++		 * bit error status
++		 */
++		ret = spi_mem_exec_op(spinand->spimem, &op);
++		if (ret)
++			return ret;
++
++		/*
++		 * 1 ... 4 bits are flipped (and corrected)
++		 */
++		/* bits sorted this way (1...0): ECCSE1, ECCSE0 */
++		status2 = *(spinand->scratchbuf);
++		return ((status2 & STATUS_ECC_MASK) >> 4) + 1;
++
++	case STATUS_ECC_UNCOR_ERROR:
++		return -EBADMSG;
++
++	default:
++		break;
++	}
++
++	return -EINVAL;
++}
++
+ static int gd5fxgq4ufxxg_ecc_get_status(struct spinand_device *spinand,
+ 					u8 status)
+ {
+@@ -282,7 +333,7 @@ static const struct spinand_info gigadevice_spinand_table[] = {
+ 					      &write_cache_variants,
+ 					      &update_cache_variants),
+ 		     SPINAND_HAS_QE_BIT,
+-		     SPINAND_ECCINFO(&gd5fxgq4_variant2_ooblayout,
++		     SPINAND_ECCINFO(&gd5fxgqx_variant2_ooblayout,
+ 				     gd5fxgq4uexxg_ecc_get_status)),
+ 	SPINAND_INFO("GD5F1GQ4UFxxG",
+ 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE, 0xb1, 0x48),
+@@ -292,8 +343,18 @@ static const struct spinand_info gigadevice_spinand_table[] = {
+ 					      &write_cache_variants,
+ 					      &update_cache_variants),
+ 		     SPINAND_HAS_QE_BIT,
+-		     SPINAND_ECCINFO(&gd5fxgq4_variant2_ooblayout,
++		     SPINAND_ECCINFO(&gd5fxgqx_variant2_ooblayout,
+ 				     gd5fxgq4ufxxg_ecc_get_status)),
++	SPINAND_INFO("GD5F1GQ5UExxG",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x51),
++		     NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
++		     NAND_ECCREQ(4, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants_1gq5,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     SPINAND_HAS_QE_BIT,
++		     SPINAND_ECCINFO(&gd5fxgqx_variant2_ooblayout,
++				     gd5fxgq5xexxg_ecc_get_status)),
+ };
+ 
+ static const struct spinand_manufacturer_ops gigadevice_spinand_manuf_ops = {
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 9534f58368ccb..4e19760cddefe 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -1406,26 +1406,26 @@ static int __init gtp_init(void)
+ 
+ 	get_random_bytes(&gtp_h_initval, sizeof(gtp_h_initval));
+ 
+-	err = rtnl_link_register(&gtp_link_ops);
++	err = register_pernet_subsys(&gtp_net_ops);
+ 	if (err < 0)
+ 		goto error_out;
+ 
+-	err = register_pernet_subsys(&gtp_net_ops);
++	err = rtnl_link_register(&gtp_link_ops);
+ 	if (err < 0)
+-		goto unreg_rtnl_link;
++		goto unreg_pernet_subsys;
+ 
+ 	err = genl_register_family(&gtp_genl_family);
+ 	if (err < 0)
+-		goto unreg_pernet_subsys;
++		goto unreg_rtnl_link;
+ 
+ 	pr_info("GTP module loaded (pdp ctx size %zd bytes)\n",
+ 		sizeof(struct pdp_ctx));
+ 	return 0;
+ 
+-unreg_pernet_subsys:
+-	unregister_pernet_subsys(&gtp_net_ops);
+ unreg_rtnl_link:
+ 	rtnl_link_unregister(&gtp_link_ops);
++unreg_pernet_subsys:
++	unregister_pernet_subsys(&gtp_net_ops);
+ error_out:
+ 	pr_err("error loading GTP module loaded\n");
+ 	return err;
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 0b25b59f44033..bb0368272a1bb 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -665,6 +665,7 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
+ 				   tun->tfiles[tun->numqueues - 1]);
+ 		ntfile = rtnl_dereference(tun->tfiles[index]);
+ 		ntfile->queue_index = index;
++		ntfile->xdp_rxq.queue_index = index;
+ 		rcu_assign_pointer(tun->tfiles[tun->numqueues - 1],
+ 				   NULL);
+ 
+diff --git a/drivers/net/usb/dm9601.c b/drivers/net/usb/dm9601.c
+index 5aad26600b03e..9b7db5fd9e08f 100644
+--- a/drivers/net/usb/dm9601.c
++++ b/drivers/net/usb/dm9601.c
+@@ -231,7 +231,7 @@ static int dm9601_mdio_read(struct net_device *netdev, int phy_id, int loc)
+ 	err = dm_read_shared_word(dev, 1, loc, &res);
+ 	if (err < 0) {
+ 		netdev_err(dev->net, "MDIO read error: %d\n", err);
+-		return err;
++		return 0;
+ 	}
+ 
+ 	netdev_dbg(dev->net,
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 667984efeb3be..c5a666bb86ee4 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -2527,7 +2527,8 @@ static int lan78xx_reset(struct lan78xx_net *dev)
+ 	if (dev->chipid == ID_REV_CHIP_ID_7801_)
+ 		buf &= ~MAC_CR_GMII_EN_;
+ 
+-	if (dev->chipid == ID_REV_CHIP_ID_7800_) {
++	if (dev->chipid == ID_REV_CHIP_ID_7800_ ||
++	    dev->chipid == ID_REV_CHIP_ID_7850_) {
+ 		ret = lan78xx_read_raw_eeprom(dev, 0, 1, &sig);
+ 		if (!ret && sig != EEPROM_INDICATOR) {
+ 			/* Implies there is no external eeprom. Set mac speed */
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index eedff2ae28511..ebe959db1eeb9 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -50,7 +50,7 @@ static const struct property_entry chuwi_hi8_air_props[] = {
+ };
+ 
+ static const struct ts_dmi_data chuwi_hi8_air_data = {
+-	.acpi_name	= "MSSL1680:00",
++	.acpi_name	= "MSSL1680",
+ 	.properties	= chuwi_hi8_air_props,
+ };
+ 
+@@ -1648,7 +1648,7 @@ static void ts_dmi_add_props(struct i2c_client *client)
+ 	int error;
+ 
+ 	if (has_acpi_companion(dev) &&
+-	    !strncmp(ts_data->acpi_name, client->name, I2C_NAME_SIZE)) {
++	    strstarts(client->name, ts_data->acpi_name)) {
+ 		error = device_add_properties(dev, ts_data->properties);
+ 		if (error)
+ 			dev_err(dev, "failed to add properties: %d\n", error);
+diff --git a/drivers/power/supply/bq27xxx_battery_i2c.c b/drivers/power/supply/bq27xxx_battery_i2c.c
+index 0e32efb10ee78..6fbae8fc2e501 100644
+--- a/drivers/power/supply/bq27xxx_battery_i2c.c
++++ b/drivers/power/supply/bq27xxx_battery_i2c.c
+@@ -209,7 +209,9 @@ static int bq27xxx_battery_i2c_remove(struct i2c_client *client)
+ {
+ 	struct bq27xxx_device_info *di = i2c_get_clientdata(client);
+ 
+-	free_irq(client->irq, di);
++	if (client->irq)
++		free_irq(client->irq, di);
++
+ 	bq27xxx_battery_teardown(di);
+ 
+ 	mutex_lock(&battery_mutex);
+diff --git a/drivers/soc/qcom/rpmhpd.c b/drivers/soc/qcom/rpmhpd.c
+index 436ec79122ed2..d228e77175c7f 100644
+--- a/drivers/soc/qcom/rpmhpd.c
++++ b/drivers/soc/qcom/rpmhpd.c
+@@ -261,12 +261,15 @@ static int rpmhpd_aggregate_corner(struct rpmhpd *pd, unsigned int corner)
+ 	unsigned int active_corner, sleep_corner;
+ 	unsigned int this_active_corner = 0, this_sleep_corner = 0;
+ 	unsigned int peer_active_corner = 0, peer_sleep_corner = 0;
++	unsigned int peer_enabled_corner;
+ 
+ 	to_active_sleep(pd, corner, &this_active_corner, &this_sleep_corner);
+ 
+-	if (peer && peer->enabled)
+-		to_active_sleep(peer, peer->corner, &peer_active_corner,
++	if (peer && peer->enabled) {
++		peer_enabled_corner = max(peer->corner, peer->enable_corner);
++		to_active_sleep(peer, peer_enabled_corner, &peer_active_corner,
+ 				&peer_sleep_corner);
++	}
+ 
+ 	active_corner = max(this_active_corner, peer_active_corner);
+ 
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 0b927736ca728..88f0e719c6ac0 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -423,8 +423,10 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode,
+ 		    dire->u.name[0] == '.' &&
+ 		    ctx->actor != afs_lookup_filldir &&
+ 		    ctx->actor != afs_lookup_one_filldir &&
+-		    memcmp(dire->u.name, ".__afs", 6) == 0)
++		    memcmp(dire->u.name, ".__afs", 6) == 0) {
++			ctx->pos = blkoff + next * sizeof(union afs_xdr_dirent);
+ 			continue;
++		}
+ 
+ 		/* found the next entry */
+ 		if (!dir_emit(ctx, dire->u.name, nlen,
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index be6935d191970..9a6055659c1a6 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -566,6 +566,23 @@ static int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ 	return ret;
+ }
+ 
++static int btrfs_check_replace_dev_names(struct btrfs_ioctl_dev_replace_args *args)
++{
++	if (args->start.srcdevid == 0) {
++		if (memchr(args->start.srcdev_name, 0,
++			   sizeof(args->start.srcdev_name)) == NULL)
++			return -ENAMETOOLONG;
++	} else {
++		args->start.srcdev_name[0] = 0;
++	}
++
++	if (memchr(args->start.tgtdev_name, 0,
++		   sizeof(args->start.tgtdev_name)) == NULL)
++	    return -ENAMETOOLONG;
++
++	return 0;
++}
++
+ int btrfs_dev_replace_by_ioctl(struct btrfs_fs_info *fs_info,
+ 			    struct btrfs_ioctl_dev_replace_args *args)
+ {
+@@ -578,10 +595,9 @@ int btrfs_dev_replace_by_ioctl(struct btrfs_fs_info *fs_info,
+ 	default:
+ 		return -EINVAL;
+ 	}
+-
+-	if ((args->start.srcdevid == 0 && args->start.srcdev_name[0] == '\0') ||
+-	    args->start.tgtdev_name[0] == '\0')
+-		return -EINVAL;
++	ret = btrfs_check_replace_dev_names(args);
++	if (ret < 0)
++		return ret;
+ 
+ 	ret = btrfs_dev_replace_start(fs_info, args->start.tgtdev_name,
+ 					args->start.srcdevid,
+diff --git a/fs/cachefiles/bind.c b/fs/cachefiles/bind.c
+index dfb14dbddf51d..3b39552c23651 100644
+--- a/fs/cachefiles/bind.c
++++ b/fs/cachefiles/bind.c
+@@ -245,6 +245,8 @@ static int cachefiles_daemon_add_cache(struct cachefiles_cache *cache)
+ 	kmem_cache_free(cachefiles_object_jar, fsdef);
+ error_root_object:
+ 	cachefiles_end_secure(cache, saved_cred);
++	put_cred(cache->cache_cred);
++	cache->cache_cred = NULL;
+ 	pr_err("Failed to register: %d\n", ret);
+ 	return ret;
+ }
+@@ -265,6 +267,7 @@ void cachefiles_daemon_unbind(struct cachefiles_cache *cache)
+ 
+ 	dput(cache->graveyard);
+ 	mntput(cache->mnt);
++	put_cred(cache->cache_cred);
+ 
+ 	kfree(cache->rootdirname);
+ 	kfree(cache->secctx);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index d66ba6f6a8115..61988b7b5be77 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -1494,11 +1494,6 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b,
+ 	mb_check_buddy(e4b);
+ 	mb_free_blocks_double(inode, e4b, first, count);
+ 
+-	this_cpu_inc(discard_pa_seq);
+-	e4b->bd_info->bb_free += count;
+-	if (first < e4b->bd_info->bb_first_free)
+-		e4b->bd_info->bb_first_free = first;
+-
+ 	/* access memory sequentially: check left neighbour,
+ 	 * clear range and then check right neighbour
+ 	 */
+@@ -1512,23 +1507,31 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b,
+ 		struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 		ext4_fsblk_t blocknr;
+ 
++		/*
++		 * Fastcommit replay can free already freed blocks which
++		 * corrupts allocation info. Regenerate it.
++		 */
++		if (sbi->s_mount_state & EXT4_FC_REPLAY) {
++			mb_regenerate_buddy(e4b);
++			goto check;
++		}
++
+ 		blocknr = ext4_group_first_block_no(sb, e4b->bd_group);
+ 		blocknr += EXT4_C2B(sbi, block);
+-		if (!(sbi->s_mount_state & EXT4_FC_REPLAY)) {
+-			ext4_grp_locked_error(sb, e4b->bd_group,
+-					      inode ? inode->i_ino : 0,
+-					      blocknr,
+-					      "freeing already freed block (bit %u); block bitmap corrupt.",
+-					      block);
+-			ext4_mark_group_bitmap_corrupted(
+-				sb, e4b->bd_group,
++		ext4_grp_locked_error(sb, e4b->bd_group,
++				      inode ? inode->i_ino : 0, blocknr,
++				      "freeing already freed block (bit %u); block bitmap corrupt.",
++				      block);
++		ext4_mark_group_bitmap_corrupted(sb, e4b->bd_group,
+ 				EXT4_GROUP_INFO_BBITMAP_CORRUPT);
+-		} else {
+-			mb_regenerate_buddy(e4b);
+-		}
+-		goto done;
++		return;
+ 	}
+ 
++	this_cpu_inc(discard_pa_seq);
++	e4b->bd_info->bb_free += count;
++	if (first < e4b->bd_info->bb_first_free)
++		e4b->bd_info->bb_first_free = first;
++
+ 	/* let's maintain fragments counter */
+ 	if (left_is_free && right_is_free)
+ 		e4b->bd_info->bb_fragments--;
+@@ -1553,8 +1556,8 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b,
+ 	if (first <= last)
+ 		mb_buddy_mark_free(e4b, first >> 1, last >> 1);
+ 
+-done:
+ 	mb_set_largest_free_order(sb, e4b->bd_info);
++check:
+ 	mb_check_buddy(e4b);
+ }
+ 
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index 5181e6d4e18ca..a0edd4b8fa189 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -1234,6 +1234,7 @@ static int hugetlbfs_parse_param(struct fs_context *fc, struct fs_parameter *par
+ {
+ 	struct hugetlbfs_fs_context *ctx = fc->fs_private;
+ 	struct fs_parse_result result;
++	struct hstate *h;
+ 	char *rest;
+ 	unsigned long ps;
+ 	int opt;
+@@ -1278,11 +1279,12 @@ static int hugetlbfs_parse_param(struct fs_context *fc, struct fs_parameter *par
+ 
+ 	case Opt_pagesize:
+ 		ps = memparse(param->string, &rest);
+-		ctx->hstate = size_to_hstate(ps);
+-		if (!ctx->hstate) {
++		h = size_to_hstate(ps);
++		if (!h) {
+ 			pr_err("Unsupported page size %lu MB\n", ps >> 20);
+ 			return -EINVAL;
+ 		}
++		ctx->hstate = h;
+ 		return 0;
+ 
+ 	case Opt_min_size:
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index e33fe4b1c4e29..5f1fbf86e0ceb 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -2318,6 +2318,7 @@ static void hci_error_reset(struct work_struct *work)
+ {
+ 	struct hci_dev *hdev = container_of(work, struct hci_dev, error_reset);
+ 
++	hci_dev_hold(hdev);
+ 	BT_DBG("%s", hdev->name);
+ 
+ 	if (hdev->hw_error)
+@@ -2325,10 +2326,10 @@ static void hci_error_reset(struct work_struct *work)
+ 	else
+ 		bt_dev_err(hdev, "hardware error 0x%2.2x", hdev->hw_error_code);
+ 
+-	if (hci_dev_do_close(hdev))
+-		return;
++	if (!hci_dev_do_close(hdev))
++		hci_dev_do_open(hdev);
+ 
+-	hci_dev_do_open(hdev);
++	hci_dev_put(hdev);
+ }
+ 
+ void hci_uuids_clear(struct hci_dev *hdev)
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 4027c79786fd3..47f37080c0c55 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -4612,9 +4612,12 @@ static void hci_io_capa_request_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 	hci_dev_lock(hdev);
+ 
+ 	conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr);
+-	if (!conn || !hci_conn_ssp_enabled(conn))
++	if (!conn || !hci_dev_test_flag(hdev, HCI_SSP_ENABLED))
+ 		goto unlock;
+ 
++	/* Assume remote supports SSP since it has triggered this event */
++	set_bit(HCI_CONN_SSP_ENABLED, &conn->flags);
++
+ 	hci_conn_hold(conn);
+ 
+ 	if (!hci_dev_test_flag(hdev, HCI_MGMT))
+@@ -5922,6 +5925,10 @@ static void hci_le_remote_conn_param_req_evt(struct hci_dev *hdev,
+ 		return send_conn_param_neg_reply(hdev, handle,
+ 						 HCI_ERROR_UNKNOWN_CONN_ID);
+ 
++	if (max > hcon->le_conn_max_interval)
++		return send_conn_param_neg_reply(hdev, handle,
++						 HCI_ERROR_INVALID_LL_PARAMS);
++
+ 	if (hci_check_conn_params(min, max, latency, timeout))
+ 		return send_conn_param_neg_reply(hdev, handle,
+ 						 HCI_ERROR_INVALID_LL_PARAMS);
+@@ -6139,10 +6146,10 @@ static void hci_store_wake_reason(struct hci_dev *hdev, u8 event,
+ 	 * keep track of the bdaddr of the connection event that woke us up.
+ 	 */
+ 	if (event == HCI_EV_CONN_REQUEST) {
+-		bacpy(&hdev->wake_addr, &conn_complete->bdaddr);
++		bacpy(&hdev->wake_addr, &conn_request->bdaddr);
+ 		hdev->wake_addr_type = BDADDR_BREDR;
+ 	} else if (event == HCI_EV_CONN_COMPLETE) {
+-		bacpy(&hdev->wake_addr, &conn_request->bdaddr);
++		bacpy(&hdev->wake_addr, &conn_complete->bdaddr);
+ 		hdev->wake_addr_type = BDADDR_BREDR;
+ 	} else if (event == HCI_EV_LE_META) {
+ 		struct hci_ev_le_meta *le_ev = (void *)skb->data;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index a752032e12fcf..580b6d6b970d2 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -5609,7 +5609,13 @@ static inline int l2cap_conn_param_update_req(struct l2cap_conn *conn,
+ 
+ 	memset(&rsp, 0, sizeof(rsp));
+ 
+-	err = hci_check_conn_params(min, max, latency, to_multiplier);
++	if (max > hcon->le_conn_max_interval) {
++		BT_DBG("requested connection interval exceeds current bounds.");
++		err = -EINVAL;
++	} else {
++		err = hci_check_conn_params(min, max, latency, to_multiplier);
++	}
++
+ 	if (err)
+ 		rsp.result = cpu_to_le16(L2CAP_CONN_PARAM_REJECTED);
+ 	else
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 021dcfdae2835..8938320f7ba3b 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -4903,10 +4903,9 @@ static int rtnl_bridge_setlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	struct net *net = sock_net(skb->sk);
+ 	struct ifinfomsg *ifm;
+ 	struct net_device *dev;
+-	struct nlattr *br_spec, *attr = NULL;
++	struct nlattr *br_spec, *attr, *br_flags_attr = NULL;
+ 	int rem, err = -EOPNOTSUPP;
+ 	u16 flags = 0;
+-	bool have_flags = false;
+ 
+ 	if (nlmsg_len(nlh) < sizeof(*ifm))
+ 		return -EINVAL;
+@@ -4924,11 +4923,11 @@ static int rtnl_bridge_setlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
+ 	if (br_spec) {
+ 		nla_for_each_nested(attr, br_spec, rem) {
+-			if (nla_type(attr) == IFLA_BRIDGE_FLAGS && !have_flags) {
++			if (nla_type(attr) == IFLA_BRIDGE_FLAGS && !br_flags_attr) {
+ 				if (nla_len(attr) < sizeof(flags))
+ 					return -EINVAL;
+ 
+-				have_flags = true;
++				br_flags_attr = attr;
+ 				flags = nla_get_u16(attr);
+ 			}
+ 
+@@ -4972,8 +4971,8 @@ static int rtnl_bridge_setlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 		}
+ 	}
+ 
+-	if (have_flags)
+-		memcpy(nla_data(attr), &flags, sizeof(flags));
++	if (br_flags_attr)
++		memcpy(nla_data(br_flags_attr), &flags, sizeof(flags));
+ out:
+ 	return err;
+ }
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 99f70b990eb13..50f8231e9daec 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -540,6 +540,20 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
+ 	return 0;
+ }
+ 
++static void ip_tunnel_adj_headroom(struct net_device *dev, unsigned int headroom)
++{
++	/* we must cap headroom to some upperlimit, else pskb_expand_head
++	 * will overflow header offsets in skb_headers_offset_update().
++	 */
++	static const unsigned int max_allowed = 512;
++
++	if (headroom > max_allowed)
++		headroom = max_allowed;
++
++	if (headroom > READ_ONCE(dev->needed_headroom))
++		WRITE_ONCE(dev->needed_headroom, headroom);
++}
++
+ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 		       u8 proto, int tunnel_hlen)
+ {
+@@ -613,13 +627,13 @@ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 	}
+ 
+ 	headroom += LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len;
+-	if (headroom > READ_ONCE(dev->needed_headroom))
+-		WRITE_ONCE(dev->needed_headroom, headroom);
+-
+-	if (skb_cow_head(skb, READ_ONCE(dev->needed_headroom))) {
++	if (skb_cow_head(skb, headroom)) {
+ 		ip_rt_put(rt);
+ 		goto tx_dropped;
+ 	}
++
++	ip_tunnel_adj_headroom(dev, headroom);
++
+ 	iptunnel_xmit(NULL, rt, skb, fl4.saddr, fl4.daddr, proto, tos, ttl,
+ 		      df, !net_eq(tunnel->net, dev_net(dev)));
+ 	return;
+@@ -797,16 +811,16 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 
+ 	max_headroom = LL_RESERVED_SPACE(rt->dst.dev) + sizeof(struct iphdr)
+ 			+ rt->dst.header_len + ip_encap_hlen(&tunnel->encap);
+-	if (max_headroom > READ_ONCE(dev->needed_headroom))
+-		WRITE_ONCE(dev->needed_headroom, max_headroom);
+ 
+-	if (skb_cow_head(skb, READ_ONCE(dev->needed_headroom))) {
++	if (skb_cow_head(skb, max_headroom)) {
+ 		ip_rt_put(rt);
+ 		dev->stats.tx_dropped++;
+ 		kfree_skb(skb);
+ 		return;
+ 	}
+ 
++	ip_tunnel_adj_headroom(dev, max_headroom);
++
+ 	iptunnel_xmit(NULL, rt, skb, fl4.saddr, fl4.daddr, protocol, tos, ttl,
+ 		      df, !net_eq(tunnel->net, dev_net(dev)));
+ 	return;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 150c2f71ec89f..0429c1d50fc92 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -5436,9 +5436,10 @@ static int inet6_rtm_getaddr(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ 	}
+ 
+ 	addr = extract_addr(tb[IFA_ADDRESS], tb[IFA_LOCAL], &peer);
+-	if (!addr)
+-		return -EINVAL;
+-
++	if (!addr) {
++		err = -EINVAL;
++		goto errout;
++	}
+ 	ifm = nlmsg_data(nlh);
+ 	if (ifm->ifa_index)
+ 		dev = dev_get_by_index(tgt_net, ifm->ifa_index);
+diff --git a/net/mptcp/diag.c b/net/mptcp/diag.c
+index e57c5f47f0351..d7ca71c597545 100644
+--- a/net/mptcp/diag.c
++++ b/net/mptcp/diag.c
+@@ -21,6 +21,9 @@ static int subflow_get_info(struct sock *sk, struct sk_buff *skb)
+ 	bool slow;
+ 	int err;
+ 
++	if (inet_sk_state_load(sk) == TCP_LISTEN)
++		return 0;
++
+ 	start = nla_nest_start_noflag(skb, INET_ULP_INFO_MPTCP);
+ 	if (!start)
+ 		return -EMSGSIZE;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 72d944e6a641f..adbe6350f980b 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2052,8 +2052,50 @@ static struct ipv6_pinfo *mptcp_inet6_sk(const struct sock *sk)
+ 
+ 	return (struct ipv6_pinfo *)(((u8 *)sk) + offset);
+ }
++
++static void mptcp_copy_ip6_options(struct sock *newsk, const struct sock *sk)
++{
++	const struct ipv6_pinfo *np = inet6_sk(sk);
++	struct ipv6_txoptions *opt;
++	struct ipv6_pinfo *newnp;
++
++	newnp = inet6_sk(newsk);
++
++	rcu_read_lock();
++	opt = rcu_dereference(np->opt);
++	if (opt) {
++		opt = ipv6_dup_options(newsk, opt);
++		if (!opt)
++			net_warn_ratelimited("%s: Failed to copy ip6 options\n", __func__);
++	}
++	RCU_INIT_POINTER(newnp->opt, opt);
++	rcu_read_unlock();
++}
+ #endif
+ 
++static void mptcp_copy_ip_options(struct sock *newsk, const struct sock *sk)
++{
++	struct ip_options_rcu *inet_opt, *newopt = NULL;
++	const struct inet_sock *inet = inet_sk(sk);
++	struct inet_sock *newinet;
++
++	newinet = inet_sk(newsk);
++
++	rcu_read_lock();
++	inet_opt = rcu_dereference(inet->inet_opt);
++	if (inet_opt) {
++		newopt = sock_kmalloc(newsk, sizeof(*inet_opt) +
++				      inet_opt->opt.optlen, GFP_ATOMIC);
++		if (newopt)
++			memcpy(newopt, inet_opt, sizeof(*inet_opt) +
++			       inet_opt->opt.optlen);
++		else
++			net_warn_ratelimited("%s: Failed to copy ip options\n", __func__);
++	}
++	RCU_INIT_POINTER(newinet->inet_opt, newopt);
++	rcu_read_unlock();
++}
++
+ struct sock *mptcp_sk_clone(const struct sock *sk,
+ 			    const struct mptcp_options_received *mp_opt,
+ 			    struct request_sock *req)
+@@ -2073,6 +2115,13 @@ struct sock *mptcp_sk_clone(const struct sock *sk,
+ 
+ 	__mptcp_init_sock(nsk);
+ 
++#if IS_ENABLED(CONFIG_MPTCP_IPV6)
++	if (nsk->sk_family == AF_INET6)
++		mptcp_copy_ip6_options(nsk, sk);
++	else
++#endif
++		mptcp_copy_ip_options(nsk, sk);
++
+ 	msk = mptcp_sk(nsk);
+ 	msk->local_key = subflow_req->local_key;
+ 	msk->token = subflow_req->token;
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index 77c7362a7db8e..3e0a6e7930c6c 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -336,10 +336,20 @@ static int nft_target_validate(const struct nft_ctx *ctx,
+ 
+ 	if (ctx->family != NFPROTO_IPV4 &&
+ 	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET &&
+ 	    ctx->family != NFPROTO_BRIDGE &&
+ 	    ctx->family != NFPROTO_ARP)
+ 		return -EOPNOTSUPP;
+ 
++	ret = nft_chain_validate_hooks(ctx->chain,
++				       (1 << NF_INET_PRE_ROUTING) |
++				       (1 << NF_INET_LOCAL_IN) |
++				       (1 << NF_INET_FORWARD) |
++				       (1 << NF_INET_LOCAL_OUT) |
++				       (1 << NF_INET_POST_ROUTING));
++	if (ret)
++		return ret;
++
+ 	if (nft_is_base_chain(ctx->chain)) {
+ 		const struct nft_base_chain *basechain =
+ 						nft_base_chain(ctx->chain);
+@@ -584,10 +594,20 @@ static int nft_match_validate(const struct nft_ctx *ctx,
+ 
+ 	if (ctx->family != NFPROTO_IPV4 &&
+ 	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET &&
+ 	    ctx->family != NFPROTO_BRIDGE &&
+ 	    ctx->family != NFPROTO_ARP)
+ 		return -EOPNOTSUPP;
+ 
++	ret = nft_chain_validate_hooks(ctx->chain,
++				       (1 << NF_INET_PRE_ROUTING) |
++				       (1 << NF_INET_LOCAL_IN) |
++				       (1 << NF_INET_FORWARD) |
++				       (1 << NF_INET_LOCAL_OUT) |
++				       (1 << NF_INET_POST_ROUTING));
++	if (ret)
++		return ret;
++
+ 	if (nft_is_base_chain(ctx->chain)) {
+ 		const struct nft_base_chain *basechain =
+ 						nft_base_chain(ctx->chain);
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 359f07a53eccf..a2b14434d7aa0 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -157,7 +157,7 @@ static inline u32 netlink_group_mask(u32 group)
+ static struct sk_buff *netlink_to_full_skb(const struct sk_buff *skb,
+ 					   gfp_t gfp_mask)
+ {
+-	unsigned int len = skb_end_offset(skb);
++	unsigned int len = skb->len;
+ 	struct sk_buff *new;
+ 
+ 	new = alloc_skb(len, gfp_mask);
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 279f4977e2eed..933591f9704b8 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3772,6 +3772,8 @@ static int nl80211_set_interface(struct sk_buff *skb, struct genl_info *info)
+ 
+ 		if (ntype != NL80211_IFTYPE_MESH_POINT)
+ 			return -EINVAL;
++		if (otype != NL80211_IFTYPE_MESH_POINT)
++			return -EINVAL;
+ 		if (netif_running(dev))
+ 			return -EBUSY;
+ 
+diff --git a/security/tomoyo/common.c b/security/tomoyo/common.c
+index 4bee32bfe16d1..6235c3be832aa 100644
+--- a/security/tomoyo/common.c
++++ b/security/tomoyo/common.c
+@@ -2657,13 +2657,14 @@ ssize_t tomoyo_write_control(struct tomoyo_io_buffer *head,
+ {
+ 	int error = buffer_len;
+ 	size_t avail_len = buffer_len;
+-	char *cp0 = head->write_buf;
++	char *cp0;
+ 	int idx;
+ 
+ 	if (!head->write)
+ 		return -EINVAL;
+ 	if (mutex_lock_interruptible(&head->io_sem))
+ 		return -EINTR;
++	cp0 = head->write_buf;
+ 	head->read_user_buf_avail = 0;
+ 	idx = tomoyo_read_lock();
+ 	/* Read a line and dispatch it to the policy handler. */
+diff --git a/sound/core/Makefile b/sound/core/Makefile
+index d123587c0fd8f..bc04acf4a45ce 100644
+--- a/sound/core/Makefile
++++ b/sound/core/Makefile
+@@ -32,7 +32,6 @@ snd-pcm-dmaengine-objs := pcm_dmaengine.o
+ snd-rawmidi-objs  := rawmidi.o
+ snd-timer-objs    := timer.o
+ snd-hrtimer-objs  := hrtimer.o
+-snd-rtctimer-objs := rtctimer.o
+ snd-hwdep-objs    := hwdep.o
+ snd-seq-device-objs := seq_device.o
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-03-15 22:02 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-03-15 22:02 UTC (permalink / raw
  To: gentoo-commits

commit:     0c99b90ec911284b061c95028f6ddf3a7d85e4c1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 15 22:02:19 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar 15 22:02:19 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0c99b90e

Linux patch 5.10.213

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1212_linux-5.10.213.patch | 5313 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5317 insertions(+)

diff --git a/0000_README b/0000_README
index ff3682f8..653d7d47 100644
--- a/0000_README
+++ b/0000_README
@@ -891,6 +891,10 @@ Patch:  1211_linux-5.10.212.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.212
 
+Patch:  1212_linux-5.10.213.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.213
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1212_linux-5.10.213.patch b/1212_linux-5.10.213.patch
new file mode 100644
index 00000000..f00d62fb
--- /dev/null
+++ b/1212_linux-5.10.213.patch
@@ -0,0 +1,5313 @@
+diff --git a/Makefile b/Makefile
+index d7ec0be4cd791..b6af62d53d7a6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 212
++SUBLEVEL = 213
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/um/Kconfig b/arch/um/Kconfig
+index eb1c6880bde49..20264b47dcffc 100644
+--- a/arch/um/Kconfig
++++ b/arch/um/Kconfig
+@@ -92,6 +92,19 @@ config LD_SCRIPT_DYN
+ 	depends on !LD_SCRIPT_STATIC
+ 	select MODULE_REL_CRCS if MODVERSIONS
+ 
++config LD_SCRIPT_DYN_RPATH
++	bool "set rpath in the binary" if EXPERT
++	default y
++	depends on LD_SCRIPT_DYN
++	help
++	  Add /lib (and /lib64 for 64-bit) to the linux binary's rpath
++	  explicitly.
++
++	  You may need to turn this off if compiling for nix systems
++	  that have their libraries in random /nix directories and
++	  might otherwise unexpected use libraries from /lib or /lib64
++	  instead of the desired ones.
++
+ config HOSTFS
+ 	tristate "Host filesystem"
+ 	help
+diff --git a/arch/um/Makefile b/arch/um/Makefile
+index 56e5320da7624..4211e23a2f68f 100644
+--- a/arch/um/Makefile
++++ b/arch/um/Makefile
+@@ -118,7 +118,8 @@ archprepare:
+ 	$(Q)$(MAKE) $(build)=$(HOST_DIR)/um include/generated/user_constants.h
+ 
+ LINK-$(CONFIG_LD_SCRIPT_STATIC) += -static
+-LINK-$(CONFIG_LD_SCRIPT_DYN) += -Wl,-rpath,/lib $(call cc-option, -no-pie)
++LINK-$(CONFIG_LD_SCRIPT_DYN) += $(call cc-option, -no-pie)
++LINK-$(CONFIG_LD_SCRIPT_DYN_RPATH) += -Wl,-rpath,/lib
+ 
+ CFLAGS_NO_HARDENING := $(call cc-option, -fno-PIC,) $(call cc-option, -fno-pic,) \
+ 	-fno-stack-protector $(call cc-option, -fno-stack-protector-all)
+diff --git a/arch/x86/Makefile.um b/arch/x86/Makefile.um
+index 1db7913795f51..b3c1ae084180d 100644
+--- a/arch/x86/Makefile.um
++++ b/arch/x86/Makefile.um
+@@ -44,7 +44,7 @@ ELF_FORMAT := elf64-x86-64
+ 
+ # Not on all 64-bit distros /lib is a symlink to /lib64. PLD is an example.
+ 
+-LINK-$(CONFIG_LD_SCRIPT_DYN) += -Wl,-rpath,/lib64
++LINK-$(CONFIG_LD_SCRIPT_DYN_RPATH) += -Wl,-rpath,/lib64
+ LINK-y += -m64
+ 
+ endif
+diff --git a/drivers/base/regmap/internal.h b/drivers/base/regmap/internal.h
+index 0097696c31de2..2720d8d7bbfc9 100644
+--- a/drivers/base/regmap/internal.h
++++ b/drivers/base/regmap/internal.h
+@@ -104,6 +104,10 @@ struct regmap {
+ 	int (*reg_write)(void *context, unsigned int reg, unsigned int val);
+ 	int (*reg_update_bits)(void *context, unsigned int reg,
+ 			       unsigned int mask, unsigned int val);
++	/* Bulk read/write */
++	int (*read)(void *context, const void *reg_buf, size_t reg_size,
++		    void *val_buf, size_t val_size);
++	int (*write)(void *context, const void *data, size_t count);
+ 
+ 	bool defer_caching;
+ 
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 7bc603145bd98..2dfd6aa600450 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -800,12 +800,15 @@ struct regmap *__regmap_init(struct device *dev,
+ 		map->reg_stride_order = ilog2(map->reg_stride);
+ 	else
+ 		map->reg_stride_order = -1;
+-	map->use_single_read = config->use_single_read || !bus || !bus->read;
+-	map->use_single_write = config->use_single_write || !bus || !bus->write;
+-	map->can_multi_write = config->can_multi_write && bus && bus->write;
++	map->use_single_read = config->use_single_read || !(config->read || (bus && bus->read));
++	map->use_single_write = config->use_single_write || !(config->write || (bus && bus->write));
++	map->can_multi_write = config->can_multi_write && (config->write || (bus && bus->write));
+ 	if (bus) {
+ 		map->max_raw_read = bus->max_raw_read;
+ 		map->max_raw_write = bus->max_raw_write;
++	} else if (config->max_raw_read && config->max_raw_write) {
++		map->max_raw_read = config->max_raw_read;
++		map->max_raw_write = config->max_raw_write;
+ 	}
+ 	map->dev = dev;
+ 	map->bus = bus;
+@@ -839,9 +842,19 @@ struct regmap *__regmap_init(struct device *dev,
+ 		map->read_flag_mask = bus->read_flag_mask;
+ 	}
+ 
+-	if (!bus) {
++	if (config && config->read && config->write) {
++		map->reg_read  = _regmap_bus_read;
++
++		/* Bulk read/write */
++		map->read = config->read;
++		map->write = config->write;
++
++		reg_endian = REGMAP_ENDIAN_NATIVE;
++		val_endian = REGMAP_ENDIAN_NATIVE;
++	} else if (!bus) {
+ 		map->reg_read  = config->reg_read;
+ 		map->reg_write = config->reg_write;
++		map->reg_update_bits = config->reg_update_bits;
+ 
+ 		map->defer_caching = false;
+ 		goto skip_format_initialization;
+@@ -855,10 +868,13 @@ struct regmap *__regmap_init(struct device *dev,
+ 	} else {
+ 		map->reg_read  = _regmap_bus_read;
+ 		map->reg_update_bits = bus->reg_update_bits;
+-	}
++		/* Bulk read/write */
++		map->read = bus->read;
++		map->write = bus->write;
+ 
+-	reg_endian = regmap_get_reg_endian(bus, config);
+-	val_endian = regmap_get_val_endian(dev, bus, config);
++		reg_endian = regmap_get_reg_endian(bus, config);
++		val_endian = regmap_get_val_endian(dev, bus, config);
++	}
+ 
+ 	switch (config->reg_bits + map->reg_shift) {
+ 	case 2:
+@@ -1627,8 +1643,6 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+ 	size_t len;
+ 	int i;
+ 
+-	WARN_ON(!map->bus);
+-
+ 	/* Check for unwritable or noinc registers in range
+ 	 * before we start
+ 	 */
+@@ -1710,7 +1724,7 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+ 		val = work_val;
+ 	}
+ 
+-	if (map->async && map->bus->async_write) {
++	if (map->async && map->bus && map->bus->async_write) {
+ 		struct regmap_async *async;
+ 
+ 		trace_regmap_async_write_start(map, reg, val_len);
+@@ -1778,10 +1792,10 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+ 	 * write.
+ 	 */
+ 	if (val == work_val)
+-		ret = map->bus->write(map->bus_context, map->work_buf,
+-				      map->format.reg_bytes +
+-				      map->format.pad_bytes +
+-				      val_len);
++		ret = map->write(map->bus_context, map->work_buf,
++				 map->format.reg_bytes +
++				 map->format.pad_bytes +
++				 val_len);
+ 	else if (map->bus->gather_write)
+ 		ret = map->bus->gather_write(map->bus_context, map->work_buf,
+ 					     map->format.reg_bytes +
+@@ -1800,7 +1814,7 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+ 		memcpy(buf, map->work_buf, map->format.reg_bytes);
+ 		memcpy(buf + map->format.reg_bytes + map->format.pad_bytes,
+ 		       val, val_len);
+-		ret = map->bus->write(map->bus_context, buf, len);
++		ret = map->write(map->bus_context, buf, len);
+ 
+ 		kfree(buf);
+ 	} else if (ret != 0 && !map->cache_bypass && map->format.parse_val) {
+@@ -1857,7 +1871,7 @@ static int _regmap_bus_formatted_write(void *context, unsigned int reg,
+ 	struct regmap_range_node *range;
+ 	struct regmap *map = context;
+ 
+-	WARN_ON(!map->bus || !map->format.format_write);
++	WARN_ON(!map->format.format_write);
+ 
+ 	range = _regmap_range_lookup(map, reg);
+ 	if (range) {
+@@ -1870,8 +1884,7 @@ static int _regmap_bus_formatted_write(void *context, unsigned int reg,
+ 
+ 	trace_regmap_hw_write_start(map, reg, 1);
+ 
+-	ret = map->bus->write(map->bus_context, map->work_buf,
+-			      map->format.buf_size);
++	ret = map->write(map->bus_context, map->work_buf, map->format.buf_size);
+ 
+ 	trace_regmap_hw_write_done(map, reg, 1);
+ 
+@@ -1891,7 +1904,7 @@ static int _regmap_bus_raw_write(void *context, unsigned int reg,
+ {
+ 	struct regmap *map = context;
+ 
+-	WARN_ON(!map->bus || !map->format.format_val);
++	WARN_ON(!map->format.format_val);
+ 
+ 	map->format.format_val(map->work_buf + map->format.reg_bytes
+ 			       + map->format.pad_bytes, val, 0);
+@@ -1905,7 +1918,7 @@ static int _regmap_bus_raw_write(void *context, unsigned int reg,
+ 
+ static inline void *_regmap_map_get_context(struct regmap *map)
+ {
+-	return (map->bus) ? map : map->bus_context;
++	return (map->bus || (!map->bus && map->read)) ? map : map->bus_context;
+ }
+ 
+ int _regmap_write(struct regmap *map, unsigned int reg,
+@@ -2312,7 +2325,7 @@ static int _regmap_raw_multi_reg_write(struct regmap *map,
+ 	u8 = buf;
+ 	*u8 |= map->write_flag_mask;
+ 
+-	ret = map->bus->write(map->bus_context, buf, len);
++	ret = map->write(map->bus_context, buf, len);
+ 
+ 	kfree(buf);
+ 
+@@ -2618,9 +2631,7 @@ static int _regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
+ 	struct regmap_range_node *range;
+ 	int ret;
+ 
+-	WARN_ON(!map->bus);
+-
+-	if (!map->bus || !map->bus->read)
++	if (!map->read)
+ 		return -EINVAL;
+ 
+ 	range = _regmap_range_lookup(map, reg);
+@@ -2636,9 +2647,9 @@ static int _regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
+ 				      map->read_flag_mask);
+ 	trace_regmap_hw_read_start(map, reg, val_len / map->format.val_bytes);
+ 
+-	ret = map->bus->read(map->bus_context, map->work_buf,
+-			     map->format.reg_bytes + map->format.pad_bytes,
+-			     val, val_len);
++	ret = map->read(map->bus_context, map->work_buf,
++			map->format.reg_bytes + map->format.pad_bytes,
++			val, val_len);
+ 
+ 	trace_regmap_hw_read_done(map, reg, val_len / map->format.val_bytes);
+ 
+@@ -2749,8 +2760,6 @@ int regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
+ 	unsigned int v;
+ 	int ret, i;
+ 
+-	if (!map->bus)
+-		return -EINVAL;
+ 	if (val_len % map->format.val_bytes)
+ 		return -EINVAL;
+ 	if (!IS_ALIGNED(reg, map->reg_stride))
+@@ -2765,7 +2774,7 @@ int regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
+ 		size_t chunk_count, chunk_bytes;
+ 		size_t chunk_regs = val_count;
+ 
+-		if (!map->bus->read) {
++		if (!map->read) {
+ 			ret = -ENOTSUPP;
+ 			goto out;
+ 		}
+@@ -2825,7 +2834,7 @@ EXPORT_SYMBOL_GPL(regmap_raw_read);
+  * @val: Pointer to data buffer
+  * @val_len: Length of output buffer in bytes.
+  *
+- * The regmap API usually assumes that bulk bus read operations will read a
++ * The regmap API usually assumes that bulk read operations will read a
+  * range of registers. Some devices have certain registers for which a read
+  * operation read will read from an internal FIFO.
+  *
+@@ -2843,10 +2852,6 @@ int regmap_noinc_read(struct regmap *map, unsigned int reg,
+ 	size_t read_len;
+ 	int ret;
+ 
+-	if (!map->bus)
+-		return -EINVAL;
+-	if (!map->bus->read)
+-		return -ENOTSUPP;
+ 	if (val_len % map->format.val_bytes)
+ 		return -EINVAL;
+ 	if (!IS_ALIGNED(reg, map->reg_stride))
+@@ -2960,7 +2965,7 @@ int regmap_bulk_read(struct regmap *map, unsigned int reg, void *val,
+ 	if (val_count == 0)
+ 		return -EINVAL;
+ 
+-	if (map->bus && map->format.parse_inplace && (vol || map->cache_type == REGCACHE_NONE)) {
++	if (map->format.parse_inplace && (vol || map->cache_type == REGCACHE_NONE)) {
+ 		ret = regmap_raw_read(map, reg, val, val_bytes * val_count);
+ 		if (ret != 0)
+ 			return ret;
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index f064fa6ef181a..a59ab2f3d68e1 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -503,6 +503,70 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
+ }
+ EXPORT_SYMBOL_GPL(vmbus_establish_gpadl);
+ 
++/**
++ * request_arr_init - Allocates memory for the requestor array. Each slot
++ * keeps track of the next available slot in the array. Initially, each
++ * slot points to the next one (as in a Linked List). The last slot
++ * does not point to anything, so its value is U64_MAX by default.
++ * @size The size of the array
++ */
++static u64 *request_arr_init(u32 size)
++{
++	int i;
++	u64 *req_arr;
++
++	req_arr = kcalloc(size, sizeof(u64), GFP_KERNEL);
++	if (!req_arr)
++		return NULL;
++
++	for (i = 0; i < size - 1; i++)
++		req_arr[i] = i + 1;
++
++	/* Last slot (no more available slots) */
++	req_arr[i] = U64_MAX;
++
++	return req_arr;
++}
++
++/*
++ * vmbus_alloc_requestor - Initializes @rqstor's fields.
++ * Index 0 is the first free slot
++ * @size: Size of the requestor array
++ */
++static int vmbus_alloc_requestor(struct vmbus_requestor *rqstor, u32 size)
++{
++	u64 *rqst_arr;
++	unsigned long *bitmap;
++
++	rqst_arr = request_arr_init(size);
++	if (!rqst_arr)
++		return -ENOMEM;
++
++	bitmap = bitmap_zalloc(size, GFP_KERNEL);
++	if (!bitmap) {
++		kfree(rqst_arr);
++		return -ENOMEM;
++	}
++
++	rqstor->req_arr = rqst_arr;
++	rqstor->req_bitmap = bitmap;
++	rqstor->size = size;
++	rqstor->next_request_id = 0;
++	spin_lock_init(&rqstor->req_lock);
++
++	return 0;
++}
++
++/*
++ * vmbus_free_requestor - Frees memory allocated for @rqstor
++ * @rqstor: Pointer to the requestor struct
++ */
++static void vmbus_free_requestor(struct vmbus_requestor *rqstor)
++{
++	kfree(rqstor->req_arr);
++	bitmap_free(rqstor->req_bitmap);
++}
++
+ static int __vmbus_open(struct vmbus_channel *newchannel,
+ 		       void *userdata, u32 userdatalen,
+ 		       void (*onchannelcallback)(void *context), void *context)
+@@ -523,6 +587,12 @@ static int __vmbus_open(struct vmbus_channel *newchannel,
+ 	if (newchannel->state != CHANNEL_OPEN_STATE)
+ 		return -EINVAL;
+ 
++	/* Create and init requestor */
++	if (newchannel->rqstor_size) {
++		if (vmbus_alloc_requestor(&newchannel->requestor, newchannel->rqstor_size))
++			return -ENOMEM;
++	}
++
+ 	newchannel->state = CHANNEL_OPENING_STATE;
+ 	newchannel->onchannel_callback = onchannelcallback;
+ 	newchannel->channel_callback_context = context;
+@@ -626,6 +696,7 @@ static int __vmbus_open(struct vmbus_channel *newchannel,
+ error_clean_ring:
+ 	hv_ringbuffer_cleanup(&newchannel->outbound);
+ 	hv_ringbuffer_cleanup(&newchannel->inbound);
++	vmbus_free_requestor(&newchannel->requestor);
+ 	newchannel->state = CHANNEL_OPEN_STATE;
+ 	return err;
+ }
+@@ -808,6 +879,9 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
+ 		channel->ringbuffer_gpadlhandle = 0;
+ 	}
+ 
++	if (!ret)
++		vmbus_free_requestor(&channel->requestor);
++
+ 	return ret;
+ }
+ 
+@@ -888,7 +962,7 @@ int vmbus_sendpacket(struct vmbus_channel *channel, void *buffer,
+ 	/* in 8-bytes granularity */
+ 	desc.offset8 = sizeof(struct vmpacket_descriptor) >> 3;
+ 	desc.len8 = (u16)(packetlen_aligned >> 3);
+-	desc.trans_id = requestid;
++	desc.trans_id = VMBUS_RQST_ERROR; /* will be updated in hv_ringbuffer_write() */
+ 
+ 	bufferlist[0].iov_base = &desc;
+ 	bufferlist[0].iov_len = sizeof(struct vmpacket_descriptor);
+@@ -897,7 +971,7 @@ int vmbus_sendpacket(struct vmbus_channel *channel, void *buffer,
+ 	bufferlist[2].iov_base = &aligned_data;
+ 	bufferlist[2].iov_len = (packetlen_aligned - packetlen);
+ 
+-	return hv_ringbuffer_write(channel, bufferlist, num_vecs);
++	return hv_ringbuffer_write(channel, bufferlist, num_vecs, requestid);
+ }
+ EXPORT_SYMBOL(vmbus_sendpacket);
+ 
+@@ -939,7 +1013,7 @@ int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
+ 	desc.flags = VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED;
+ 	desc.dataoffset8 = descsize >> 3; /* in 8-bytes granularity */
+ 	desc.length8 = (u16)(packetlen_aligned >> 3);
+-	desc.transactionid = requestid;
++	desc.transactionid = VMBUS_RQST_ERROR; /* will be updated in hv_ringbuffer_write() */
+ 	desc.reserved = 0;
+ 	desc.rangecount = pagecount;
+ 
+@@ -956,7 +1030,7 @@ int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
+ 	bufferlist[2].iov_base = &aligned_data;
+ 	bufferlist[2].iov_len = (packetlen_aligned - packetlen);
+ 
+-	return hv_ringbuffer_write(channel, bufferlist, 3);
++	return hv_ringbuffer_write(channel, bufferlist, 3, requestid);
+ }
+ EXPORT_SYMBOL_GPL(vmbus_sendpacket_pagebuffer);
+ 
+@@ -983,7 +1057,7 @@ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
+ 	desc->flags = VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED;
+ 	desc->dataoffset8 = desc_size >> 3; /* in 8-bytes granularity */
+ 	desc->length8 = (u16)(packetlen_aligned >> 3);
+-	desc->transactionid = requestid;
++	desc->transactionid = VMBUS_RQST_ERROR; /* will be updated in hv_ringbuffer_write() */
+ 	desc->reserved = 0;
+ 	desc->rangecount = 1;
+ 
+@@ -994,7 +1068,7 @@ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
+ 	bufferlist[2].iov_base = &aligned_data;
+ 	bufferlist[2].iov_len = (packetlen_aligned - packetlen);
+ 
+-	return hv_ringbuffer_write(channel, bufferlist, 3);
++	return hv_ringbuffer_write(channel, bufferlist, 3, requestid);
+ }
+ EXPORT_SYMBOL_GPL(vmbus_sendpacket_mpb_desc);
+ 
+@@ -1042,3 +1116,91 @@ int vmbus_recvpacket_raw(struct vmbus_channel *channel, void *buffer,
+ 				  buffer_actual_len, requestid, true);
+ }
+ EXPORT_SYMBOL_GPL(vmbus_recvpacket_raw);
++
++/*
++ * vmbus_next_request_id - Returns a new request id. It is also
++ * the index at which the guest memory address is stored.
++ * Uses a spin lock to avoid race conditions.
++ * @rqstor: Pointer to the requestor struct
++ * @rqst_add: Guest memory address to be stored in the array
++ */
++u64 vmbus_next_request_id(struct vmbus_requestor *rqstor, u64 rqst_addr)
++{
++	unsigned long flags;
++	u64 current_id;
++	const struct vmbus_channel *channel =
++		container_of(rqstor, const struct vmbus_channel, requestor);
++
++	/* Check rqstor has been initialized */
++	if (!channel->rqstor_size)
++		return VMBUS_NO_RQSTOR;
++
++	spin_lock_irqsave(&rqstor->req_lock, flags);
++	current_id = rqstor->next_request_id;
++
++	/* Requestor array is full */
++	if (current_id >= rqstor->size) {
++		spin_unlock_irqrestore(&rqstor->req_lock, flags);
++		return VMBUS_RQST_ERROR;
++	}
++
++	rqstor->next_request_id = rqstor->req_arr[current_id];
++	rqstor->req_arr[current_id] = rqst_addr;
++
++	/* The already held spin lock provides atomicity */
++	bitmap_set(rqstor->req_bitmap, current_id, 1);
++
++	spin_unlock_irqrestore(&rqstor->req_lock, flags);
++
++	/*
++	 * Cannot return an ID of 0, which is reserved for an unsolicited
++	 * message from Hyper-V.
++	 */
++	return current_id + 1;
++}
++EXPORT_SYMBOL_GPL(vmbus_next_request_id);
++
++/*
++ * vmbus_request_addr - Returns the memory address stored at @trans_id
++ * in @rqstor. Uses a spin lock to avoid race conditions.
++ * @rqstor: Pointer to the requestor struct
++ * @trans_id: Request id sent back from Hyper-V. Becomes the requestor's
++ * next request id.
++ */
++u64 vmbus_request_addr(struct vmbus_requestor *rqstor, u64 trans_id)
++{
++	unsigned long flags;
++	u64 req_addr;
++	const struct vmbus_channel *channel =
++		container_of(rqstor, const struct vmbus_channel, requestor);
++
++	/* Check rqstor has been initialized */
++	if (!channel->rqstor_size)
++		return VMBUS_NO_RQSTOR;
++
++	/* Hyper-V can send an unsolicited message with ID of 0 */
++	if (!trans_id)
++		return trans_id;
++
++	spin_lock_irqsave(&rqstor->req_lock, flags);
++
++	/* Data corresponding to trans_id is stored at trans_id - 1 */
++	trans_id--;
++
++	/* Invalid trans_id */
++	if (trans_id >= rqstor->size || !test_bit(trans_id, rqstor->req_bitmap)) {
++		spin_unlock_irqrestore(&rqstor->req_lock, flags);
++		return VMBUS_RQST_ERROR;
++	}
++
++	req_addr = rqstor->req_arr[trans_id];
++	rqstor->req_arr[trans_id] = rqstor->next_request_id;
++	rqstor->next_request_id = trans_id;
++
++	/* The already held spin lock provides atomicity */
++	bitmap_clear(rqstor->req_bitmap, trans_id, 1);
++
++	spin_unlock_irqrestore(&rqstor->req_lock, flags);
++	return req_addr;
++}
++EXPORT_SYMBOL_GPL(vmbus_request_addr);
+diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
+index 7845fa5de79e9..601660bca5d47 100644
+--- a/drivers/hv/hyperv_vmbus.h
++++ b/drivers/hv/hyperv_vmbus.h
+@@ -180,7 +180,8 @@ int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info,
+ void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info);
+ 
+ int hv_ringbuffer_write(struct vmbus_channel *channel,
+-			const struct kvec *kv_list, u32 kv_count);
++			const struct kvec *kv_list, u32 kv_count,
++			u64 requestid);
+ 
+ int hv_ringbuffer_read(struct vmbus_channel *channel,
+ 		       void *buffer, u32 buflen, u32 *buffer_actual_len,
+diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
+index 7ed6fad3fa8ff..a49cc69c56af0 100644
+--- a/drivers/hv/ring_buffer.c
++++ b/drivers/hv/ring_buffer.c
+@@ -261,7 +261,8 @@ EXPORT_SYMBOL_GPL(hv_ringbuffer_spinlock_busy);
+ 
+ /* Write to the ring buffer. */
+ int hv_ringbuffer_write(struct vmbus_channel *channel,
+-			const struct kvec *kv_list, u32 kv_count)
++			const struct kvec *kv_list, u32 kv_count,
++			u64 requestid)
+ {
+ 	int i;
+ 	u32 bytes_avail_towrite;
+@@ -271,6 +272,8 @@ int hv_ringbuffer_write(struct vmbus_channel *channel,
+ 	u64 prev_indices;
+ 	unsigned long flags;
+ 	struct hv_ring_buffer_info *outring_info = &channel->outbound;
++	struct vmpacket_descriptor *desc = kv_list[0].iov_base;
++	u64 rqst_id = VMBUS_NO_RQSTOR;
+ 
+ 	if (channel->rescind)
+ 		return -ENODEV;
+@@ -313,6 +316,22 @@ int hv_ringbuffer_write(struct vmbus_channel *channel,
+ 						     kv_list[i].iov_len);
+ 	}
+ 
++	/*
++	 * Allocate the request ID after the data has been copied into the
++	 * ring buffer.  Once this request ID is allocated, the completion
++	 * path could find the data and free it.
++	 */
++
++	if (desc->flags == VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED) {
++		rqst_id = vmbus_next_request_id(&channel->requestor, requestid);
++		if (rqst_id == VMBUS_RQST_ERROR) {
++			spin_unlock_irqrestore(&outring_info->ring_lock, flags);
++			return -EAGAIN;
++		}
++	}
++	desc = hv_get_ring_buffer(outring_info) + old_write;
++	desc->trans_id = (rqst_id == VMBUS_NO_RQSTOR) ? requestid : rqst_id;
++
+ 	/* Set previous packet start */
+ 	prev_indices = hv_get_ring_bufferindices(outring_info);
+ 
+@@ -332,8 +351,13 @@ int hv_ringbuffer_write(struct vmbus_channel *channel,
+ 
+ 	hv_signal_on_write(old_write, channel);
+ 
+-	if (channel->rescind)
++	if (channel->rescind) {
++		if (rqst_id != VMBUS_NO_RQSTOR) {
++			/* Reclaim request ID to avoid leak of IDs */
++			vmbus_request_addr(&channel->requestor, rqst_id);
++		}
+ 		return -ENODEV;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
+index 4cceb9bab0361..e3201a621870a 100644
+--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
++++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
+@@ -43,6 +43,9 @@ struct sdmmc_lli_desc {
+ struct sdmmc_idma {
+ 	dma_addr_t sg_dma;
+ 	void *sg_cpu;
++	dma_addr_t bounce_dma_addr;
++	void *bounce_buf;
++	bool use_bounce_buffer;
+ };
+ 
+ struct sdmmc_dlyb {
+@@ -54,6 +57,8 @@ struct sdmmc_dlyb {
+ static int sdmmc_idma_validate_data(struct mmci_host *host,
+ 				    struct mmc_data *data)
+ {
++	struct sdmmc_idma *idma = host->dma_priv;
++	struct device *dev = mmc_dev(host->mmc);
+ 	struct scatterlist *sg;
+ 	int i;
+ 
+@@ -61,41 +66,69 @@ static int sdmmc_idma_validate_data(struct mmci_host *host,
+ 	 * idma has constraints on idmabase & idmasize for each element
+ 	 * excepted the last element which has no constraint on idmasize
+ 	 */
++	idma->use_bounce_buffer = false;
+ 	for_each_sg(data->sg, sg, data->sg_len - 1, i) {
+ 		if (!IS_ALIGNED(sg->offset, sizeof(u32)) ||
+ 		    !IS_ALIGNED(sg->length, SDMMC_IDMA_BURST)) {
+-			dev_err(mmc_dev(host->mmc),
++			dev_dbg(mmc_dev(host->mmc),
+ 				"unaligned scatterlist: ofst:%x length:%d\n",
+ 				data->sg->offset, data->sg->length);
+-			return -EINVAL;
++			goto use_bounce_buffer;
+ 		}
+ 	}
+ 
+ 	if (!IS_ALIGNED(sg->offset, sizeof(u32))) {
+-		dev_err(mmc_dev(host->mmc),
++		dev_dbg(mmc_dev(host->mmc),
+ 			"unaligned last scatterlist: ofst:%x length:%d\n",
+ 			data->sg->offset, data->sg->length);
+-		return -EINVAL;
++		goto use_bounce_buffer;
+ 	}
+ 
++	return 0;
++
++use_bounce_buffer:
++	if (!idma->bounce_buf) {
++		idma->bounce_buf = dmam_alloc_coherent(dev,
++						       host->mmc->max_req_size,
++						       &idma->bounce_dma_addr,
++						       GFP_KERNEL);
++		if (!idma->bounce_buf) {
++			dev_err(dev, "Unable to map allocate DMA bounce buffer.\n");
++			return -ENOMEM;
++		}
++	}
++
++	idma->use_bounce_buffer = true;
++
+ 	return 0;
+ }
+ 
+ static int _sdmmc_idma_prep_data(struct mmci_host *host,
+ 				 struct mmc_data *data)
+ {
+-	int n_elem;
++	struct sdmmc_idma *idma = host->dma_priv;
+ 
+-	n_elem = dma_map_sg(mmc_dev(host->mmc),
+-			    data->sg,
+-			    data->sg_len,
+-			    mmc_get_dma_dir(data));
++	if (idma->use_bounce_buffer) {
++		if (data->flags & MMC_DATA_WRITE) {
++			unsigned int xfer_bytes = data->blksz * data->blocks;
+ 
+-	if (!n_elem) {
+-		dev_err(mmc_dev(host->mmc), "dma_map_sg failed\n");
+-		return -EINVAL;
+-	}
++			sg_copy_to_buffer(data->sg, data->sg_len,
++					  idma->bounce_buf, xfer_bytes);
++			dma_wmb();
++		}
++	} else {
++		int n_elem;
+ 
++		n_elem = dma_map_sg(mmc_dev(host->mmc),
++				    data->sg,
++				    data->sg_len,
++				    mmc_get_dma_dir(data));
++
++		if (!n_elem) {
++			dev_err(mmc_dev(host->mmc), "dma_map_sg failed\n");
++			return -EINVAL;
++		}
++	}
+ 	return 0;
+ }
+ 
+@@ -112,8 +145,19 @@ static int sdmmc_idma_prep_data(struct mmci_host *host,
+ static void sdmmc_idma_unprep_data(struct mmci_host *host,
+ 				   struct mmc_data *data, int err)
+ {
+-	dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+-		     mmc_get_dma_dir(data));
++	struct sdmmc_idma *idma = host->dma_priv;
++
++	if (idma->use_bounce_buffer) {
++		if (data->flags & MMC_DATA_READ) {
++			unsigned int xfer_bytes = data->blksz * data->blocks;
++
++			sg_copy_from_buffer(data->sg, data->sg_len,
++					    idma->bounce_buf, xfer_bytes);
++		}
++	} else {
++		dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
++			     mmc_get_dma_dir(data));
++	}
+ }
+ 
+ static int sdmmc_idma_setup(struct mmci_host *host)
+@@ -137,6 +181,8 @@ static int sdmmc_idma_setup(struct mmci_host *host)
+ 		host->mmc->max_segs = SDMMC_LLI_BUF_LEN /
+ 			sizeof(struct sdmmc_lli_desc);
+ 		host->mmc->max_seg_size = host->variant->stm32_idmabsize_mask;
++
++		host->mmc->max_req_size = SZ_1M;
+ 	} else {
+ 		host->mmc->max_segs = 1;
+ 		host->mmc->max_seg_size = host->mmc->max_req_size;
+@@ -154,8 +200,18 @@ static int sdmmc_idma_start(struct mmci_host *host, unsigned int *datactrl)
+ 	struct scatterlist *sg;
+ 	int i;
+ 
+-	if (!host->variant->dma_lli || data->sg_len == 1) {
+-		writel_relaxed(sg_dma_address(data->sg),
++	host->dma_in_progress = true;
++
++	if (!host->variant->dma_lli || data->sg_len == 1 ||
++	    idma->use_bounce_buffer) {
++		u32 dma_addr;
++
++		if (idma->use_bounce_buffer)
++			dma_addr = idma->bounce_dma_addr;
++		else
++			dma_addr = sg_dma_address(data->sg);
++
++		writel_relaxed(dma_addr,
+ 			       host->base + MMCI_STM32_IDMABASE0R);
+ 		writel_relaxed(MMCI_STM32_IDMAEN,
+ 			       host->base + MMCI_STM32_IDMACTRLR);
+@@ -184,9 +240,30 @@ static int sdmmc_idma_start(struct mmci_host *host, unsigned int *datactrl)
+ 	return 0;
+ }
+ 
++static void sdmmc_idma_error(struct mmci_host *host)
++{
++	struct mmc_data *data = host->data;
++	struct sdmmc_idma *idma = host->dma_priv;
++
++	if (!dma_inprogress(host))
++		return;
++
++	writel_relaxed(0, host->base + MMCI_STM32_IDMACTRLR);
++	host->dma_in_progress = false;
++	data->host_cookie = 0;
++
++	if (!idma->use_bounce_buffer)
++		dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
++			     mmc_get_dma_dir(data));
++}
++
+ static void sdmmc_idma_finalize(struct mmci_host *host, struct mmc_data *data)
+ {
++	if (!dma_inprogress(host))
++		return;
++
+ 	writel_relaxed(0, host->base + MMCI_STM32_IDMACTRLR);
++	host->dma_in_progress = false;
+ 
+ 	if (!data->host_cookie)
+ 		sdmmc_idma_unprep_data(host, data, 0);
+@@ -512,6 +589,7 @@ static struct mmci_host_ops sdmmc_variant_ops = {
+ 	.dma_setup = sdmmc_idma_setup,
+ 	.dma_start = sdmmc_idma_start,
+ 	.dma_finalize = sdmmc_idma_finalize,
++	.dma_error = sdmmc_idma_error,
+ 	.set_clkreg = mmci_sdmmc_set_clkreg,
+ 	.set_pwrreg = mmci_sdmmc_set_pwrreg,
+ 	.busy_complete = sdmmc_busy_complete,
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 135acd74497f3..58c87d79c1261 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -12944,9 +12944,9 @@ int i40e_queue_pair_disable(struct i40e_vsi *vsi, int queue_pair)
+ 		return err;
+ 
+ 	i40e_queue_pair_disable_irq(vsi, queue_pair);
++	i40e_queue_pair_toggle_napi(vsi, queue_pair, false /* off */);
+ 	err = i40e_queue_pair_toggle_rings(vsi, queue_pair, false /* off */);
+ 	i40e_clean_rx_ring(vsi->rx_rings[queue_pair]);
+-	i40e_queue_pair_toggle_napi(vsi, queue_pair, false /* off */);
+ 	i40e_queue_pair_clean_rings(vsi, queue_pair);
+ 	i40e_queue_pair_reset_stats(vsi, queue_pair);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 02f72fbec1042..035bc90c81246 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -6546,6 +6546,8 @@ ice_bridge_setlink(struct net_device *dev, struct nlmsghdr *nlh,
+ 	pf_sw = pf->first_sw;
+ 	/* find the attribute in the netlink message */
+ 	br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
++	if (!br_spec)
++		return -EINVAL;
+ 
+ 	nla_for_each_nested(attr, br_spec, rem) {
+ 		__u16 mode;
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index b16cb2365d960..b7672200dc624 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -2949,8 +2949,8 @@ static void ixgbe_check_lsc(struct ixgbe_adapter *adapter)
+ static inline void ixgbe_irq_enable_queues(struct ixgbe_adapter *adapter,
+ 					   u64 qmask)
+ {
+-	u32 mask;
+ 	struct ixgbe_hw *hw = &adapter->hw;
++	u32 mask;
+ 
+ 	switch (hw->mac.type) {
+ 	case ixgbe_mac_82598EB:
+@@ -10394,6 +10394,44 @@ static void ixgbe_reset_rxr_stats(struct ixgbe_ring *rx_ring)
+ 	memset(&rx_ring->rx_stats, 0, sizeof(rx_ring->rx_stats));
+ }
+ 
++/**
++ * ixgbe_irq_disable_single - Disable single IRQ vector
++ * @adapter: adapter structure
++ * @ring: ring index
++ **/
++static void ixgbe_irq_disable_single(struct ixgbe_adapter *adapter, u32 ring)
++{
++	struct ixgbe_hw *hw = &adapter->hw;
++	u64 qmask = BIT_ULL(ring);
++	u32 mask;
++
++	switch (adapter->hw.mac.type) {
++	case ixgbe_mac_82598EB:
++		mask = qmask & IXGBE_EIMC_RTX_QUEUE;
++		IXGBE_WRITE_REG(&adapter->hw, IXGBE_EIMC, mask);
++		break;
++	case ixgbe_mac_82599EB:
++	case ixgbe_mac_X540:
++	case ixgbe_mac_X550:
++	case ixgbe_mac_X550EM_x:
++	case ixgbe_mac_x550em_a:
++		mask = (qmask & 0xFFFFFFFF);
++		if (mask)
++			IXGBE_WRITE_REG(hw, IXGBE_EIMS_EX(0), mask);
++		mask = (qmask >> 32);
++		if (mask)
++			IXGBE_WRITE_REG(hw, IXGBE_EIMS_EX(1), mask);
++		break;
++	default:
++		break;
++	}
++	IXGBE_WRITE_FLUSH(&adapter->hw);
++	if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED)
++		synchronize_irq(adapter->msix_entries[ring].vector);
++	else
++		synchronize_irq(adapter->pdev->irq);
++}
++
+ /**
+  * ixgbe_txrx_ring_disable - Disable Rx/Tx/XDP Tx rings
+  * @adapter: adapter structure
+@@ -10410,6 +10448,11 @@ void ixgbe_txrx_ring_disable(struct ixgbe_adapter *adapter, int ring)
+ 	tx_ring = adapter->tx_ring[ring];
+ 	xdp_ring = adapter->xdp_ring[ring];
+ 
++	ixgbe_irq_disable_single(adapter, ring);
++
++	/* Rx/Tx/XDP Tx share the same napi context. */
++	napi_disable(&rx_ring->q_vector->napi);
++
+ 	ixgbe_disable_txr(adapter, tx_ring);
+ 	if (xdp_ring)
+ 		ixgbe_disable_txr(adapter, xdp_ring);
+@@ -10418,9 +10461,6 @@ void ixgbe_txrx_ring_disable(struct ixgbe_adapter *adapter, int ring)
+ 	if (xdp_ring)
+ 		synchronize_rcu();
+ 
+-	/* Rx/Tx/XDP Tx share the same napi context. */
+-	napi_disable(&rx_ring->q_vector->napi);
+-
+ 	ixgbe_clean_tx_ring(tx_ring);
+ 	if (xdp_ring)
+ 		ixgbe_clean_tx_ring(xdp_ring);
+@@ -10448,9 +10488,6 @@ void ixgbe_txrx_ring_enable(struct ixgbe_adapter *adapter, int ring)
+ 	tx_ring = adapter->tx_ring[ring];
+ 	xdp_ring = adapter->xdp_ring[ring];
+ 
+-	/* Rx/Tx/XDP Tx share the same napi context. */
+-	napi_enable(&rx_ring->q_vector->napi);
+-
+ 	ixgbe_configure_tx_ring(adapter, tx_ring);
+ 	if (xdp_ring)
+ 		ixgbe_configure_tx_ring(adapter, xdp_ring);
+@@ -10459,6 +10496,11 @@ void ixgbe_txrx_ring_enable(struct ixgbe_adapter *adapter, int ring)
+ 	clear_bit(__IXGBE_TX_DISABLED, &tx_ring->state);
+ 	if (xdp_ring)
+ 		clear_bit(__IXGBE_TX_DISABLED, &xdp_ring->state);
++
++	/* Rx/Tx/XDP Tx share the same napi context. */
++	napi_enable(&rx_ring->q_vector->napi);
++	ixgbe_irq_enable_queues(adapter, BIT_ULL(ring));
++	IXGBE_WRITE_FLUSH(&adapter->hw);
+ }
+ 
+ /**
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 081939cb420b0..2bb9820c66641 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -218,7 +218,7 @@ static void geneve_rx(struct geneve_dev *geneve, struct geneve_sock *gs,
+ 	struct genevehdr *gnvh = geneve_hdr(skb);
+ 	struct metadata_dst *tun_dst = NULL;
+ 	unsigned int len;
+-	int err = 0;
++	int nh, err = 0;
+ 	void *oiph;
+ 
+ 	if (ip_tunnel_collect_metadata() || gs->collect_md) {
+@@ -262,9 +262,23 @@ static void geneve_rx(struct geneve_dev *geneve, struct geneve_sock *gs,
+ 		goto drop;
+ 	}
+ 
+-	oiph = skb_network_header(skb);
++	/* Save offset of outer header relative to skb->head,
++	 * because we are going to reset the network header to the inner header
++	 * and might change skb->head.
++	 */
++	nh = skb_network_header(skb) - skb->head;
++
+ 	skb_reset_network_header(skb);
+ 
++	if (!pskb_inet_may_pull(skb)) {
++		DEV_STATS_INC(geneve->dev, rx_length_errors);
++		DEV_STATS_INC(geneve->dev, rx_errors);
++		goto drop;
++	}
++
++	/* Get the outer header. */
++	oiph = skb->head + nh;
++
+ 	if (geneve_get_sk_family(gs) == AF_INET)
+ 		err = IP_ECN_decapsulate(oiph, skb);
+ #if IS_ENABLED(CONFIG_IPV6)
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index 367878493e704..15652d7951f9e 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -847,6 +847,19 @@ struct nvsp_message {
+ 
+ #define NETVSC_XDP_HDRM 256
+ 
++#define NETVSC_MIN_OUT_MSG_SIZE (sizeof(struct vmpacket_descriptor) + \
++				 sizeof(struct nvsp_message))
++#define NETVSC_MIN_IN_MSG_SIZE sizeof(struct vmpacket_descriptor)
++
++/* Estimated requestor size:
++ * out_ring_size/min_out_msg_size + in_ring_size/min_in_msg_size
++ */
++static inline u32 netvsc_rqstor_size(unsigned long ringbytes)
++{
++	return ringbytes / NETVSC_MIN_OUT_MSG_SIZE +
++		ringbytes / NETVSC_MIN_IN_MSG_SIZE;
++}
++
+ #define NETVSC_XFER_HEADER_SIZE(rng_cnt) \
+ 		(offsetof(struct vmtransfer_page_packet_header, ranges) + \
+ 		(rng_cnt) * sizeof(struct vmtransfer_page_range))
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index 3eae31c0f97a6..03333a4136bf4 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -37,6 +37,10 @@ void netvsc_switch_datapath(struct net_device *ndev, bool vf)
+ 	struct netvsc_device *nv_dev = rtnl_dereference(net_device_ctx->nvdev);
+ 	struct nvsp_message *init_pkt = &nv_dev->channel_init_pkt;
+ 
++	/* Block sending traffic to VF if it's about to be gone */
++	if (!vf)
++		net_device_ctx->data_path_is_vf = vf;
++
+ 	memset(init_pkt, 0, sizeof(struct nvsp_message));
+ 	init_pkt->hdr.msg_type = NVSP_MSG4_TYPE_SWITCH_DATA_PATH;
+ 	if (vf)
+@@ -51,7 +55,10 @@ void netvsc_switch_datapath(struct net_device *ndev, bool vf)
+ 	vmbus_sendpacket(dev->channel, init_pkt,
+ 			       sizeof(struct nvsp_message),
+ 			       (unsigned long)init_pkt,
+-			       VM_PKT_DATA_INBAND, 0);
++			       VM_PKT_DATA_INBAND,
++			       VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
++	wait_for_completion(&nv_dev->channel_init_wait);
++	net_device_ctx->data_path_is_vf = vf;
+ }
+ 
+ /* Worker to setup sub channels on initial setup
+@@ -163,7 +170,7 @@ static void netvsc_revoke_recv_buf(struct hv_device *device,
+ 		ret = vmbus_sendpacket(device->channel,
+ 				       revoke_packet,
+ 				       sizeof(struct nvsp_message),
+-				       (unsigned long)revoke_packet,
++				       VMBUS_RQST_ID_NO_RESPONSE,
+ 				       VM_PKT_DATA_INBAND, 0);
+ 		/* If the failure is because the channel is rescinded;
+ 		 * ignore the failure since we cannot send on a rescinded
+@@ -213,7 +220,7 @@ static void netvsc_revoke_send_buf(struct hv_device *device,
+ 		ret = vmbus_sendpacket(device->channel,
+ 				       revoke_packet,
+ 				       sizeof(struct nvsp_message),
+-				       (unsigned long)revoke_packet,
++				       VMBUS_RQST_ID_NO_RESPONSE,
+ 				       VM_PKT_DATA_INBAND, 0);
+ 
+ 		/* If the failure is because the channel is rescinded;
+@@ -557,7 +564,7 @@ static int negotiate_nvsp_ver(struct hv_device *device,
+ 
+ 	ret = vmbus_sendpacket(device->channel, init_packet,
+ 				sizeof(struct nvsp_message),
+-				(unsigned long)init_packet,
++				VMBUS_RQST_ID_NO_RESPONSE,
+ 				VM_PKT_DATA_INBAND, 0);
+ 
+ 	return ret;
+@@ -614,7 +621,7 @@ static int netvsc_connect_vsp(struct hv_device *device,
+ 	/* Send the init request */
+ 	ret = vmbus_sendpacket(device->channel, init_packet,
+ 				sizeof(struct nvsp_message),
+-				(unsigned long)init_packet,
++				VMBUS_RQST_ID_NO_RESPONSE,
+ 				VM_PKT_DATA_INBAND, 0);
+ 	if (ret != 0)
+ 		goto cleanup;
+@@ -698,10 +705,19 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
+ 				    const struct vmpacket_descriptor *desc,
+ 				    int budget)
+ {
+-	struct sk_buff *skb = (struct sk_buff *)(unsigned long)desc->trans_id;
+ 	struct net_device_context *ndev_ctx = netdev_priv(ndev);
++	struct sk_buff *skb;
+ 	u16 q_idx = 0;
+ 	int queue_sends;
++	u64 cmd_rqst;
++
++	cmd_rqst = vmbus_request_addr(&channel->requestor, (u64)desc->trans_id);
++	if (cmd_rqst == VMBUS_RQST_ERROR) {
++		netdev_err(ndev, "Incorrect transaction id\n");
++		return;
++	}
++
++	skb = (struct sk_buff *)(unsigned long)cmd_rqst;
+ 
+ 	/* Notify the layer above us */
+ 	if (likely(skb)) {
+@@ -748,8 +764,31 @@ static void netvsc_send_completion(struct net_device *ndev,
+ 				   const struct vmpacket_descriptor *desc,
+ 				   int budget)
+ {
+-	const struct nvsp_message *nvsp_packet = hv_pkt_data(desc);
++	const struct nvsp_message *nvsp_packet;
+ 	u32 msglen = hv_pkt_datalen(desc);
++	struct nvsp_message *pkt_rqst;
++	u64 cmd_rqst;
++
++	/* First check if this is a VMBUS completion without data payload */
++	if (!msglen) {
++		cmd_rqst = vmbus_request_addr(&incoming_channel->requestor,
++					      (u64)desc->trans_id);
++		if (cmd_rqst == VMBUS_RQST_ERROR) {
++			netdev_err(ndev, "Invalid transaction id\n");
++			return;
++		}
++
++		pkt_rqst = (struct nvsp_message *)(uintptr_t)cmd_rqst;
++		switch (pkt_rqst->hdr.msg_type) {
++		case NVSP_MSG4_TYPE_SWITCH_DATA_PATH:
++			complete(&net_device->channel_init_wait);
++			break;
++
++		default:
++			netdev_err(ndev, "Unexpected VMBUS completion!!\n");
++		}
++		return;
++	}
+ 
+ 	/* Ensure packet is big enough to read header fields */
+ 	if (msglen < sizeof(struct nvsp_message_header)) {
+@@ -757,6 +796,7 @@ static void netvsc_send_completion(struct net_device *ndev,
+ 		return;
+ 	}
+ 
++	nvsp_packet = hv_pkt_data(desc);
+ 	switch (nvsp_packet->hdr.msg_type) {
+ 	case NVSP_MSG_TYPE_INIT_COMPLETE:
+ 		if (msglen < sizeof(struct nvsp_message_header) +
+@@ -1530,6 +1570,7 @@ struct netvsc_device *netvsc_device_add(struct hv_device *device,
+ 		       netvsc_poll, NAPI_POLL_WEIGHT);
+ 
+ 	/* Open the channel */
++	device->channel->rqstor_size = netvsc_rqstor_size(netvsc_ring_bytes);
+ 	ret = vmbus_open(device->channel, netvsc_ring_bytes,
+ 			 netvsc_ring_bytes,  NULL, 0,
+ 			 netvsc_channel_cb, net_device->chan_table);
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 790bf750281ad..0fc0f9cb3f34b 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -44,6 +44,10 @@
+ #define LINKCHANGE_INT (2 * HZ)
+ #define VF_TAKEOVER_INT (HZ / 10)
+ 
++/* Macros to define the context of vf registration */
++#define VF_REG_IN_PROBE		1
++#define VF_REG_IN_NOTIFIER	2
++
+ static unsigned int ring_size __ro_after_init = 128;
+ module_param(ring_size, uint, 0444);
+ MODULE_PARM_DESC(ring_size, "Ring buffer size (# of pages)");
+@@ -2194,7 +2198,7 @@ static rx_handler_result_t netvsc_vf_handle_frame(struct sk_buff **pskb)
+ }
+ 
+ static int netvsc_vf_join(struct net_device *vf_netdev,
+-			  struct net_device *ndev)
++			  struct net_device *ndev, int context)
+ {
+ 	struct net_device_context *ndev_ctx = netdev_priv(ndev);
+ 	int ret;
+@@ -2217,7 +2221,11 @@ static int netvsc_vf_join(struct net_device *vf_netdev,
+ 		goto upper_link_failed;
+ 	}
+ 
+-	schedule_delayed_work(&ndev_ctx->vf_takeover, VF_TAKEOVER_INT);
++	/* If this registration is called from probe context vf_takeover
++	 * is taken care of later in probe itself.
++	 */
++	if (context == VF_REG_IN_NOTIFIER)
++		schedule_delayed_work(&ndev_ctx->vf_takeover, VF_TAKEOVER_INT);
+ 
+ 	call_netdevice_notifiers(NETDEV_JOIN, vf_netdev);
+ 
+@@ -2310,8 +2318,17 @@ static struct net_device *get_netvsc_byslot(const struct net_device *vf_netdev)
+ 		if (!ndev_ctx->vf_alloc)
+ 			continue;
+ 
+-		if (ndev_ctx->vf_serial == serial)
+-			return hv_get_drvdata(ndev_ctx->device_ctx);
++		if (ndev_ctx->vf_serial != serial)
++			continue;
++
++		ndev = hv_get_drvdata(ndev_ctx->device_ctx);
++		if (ndev->addr_len != vf_netdev->addr_len ||
++		    memcmp(ndev->perm_addr, vf_netdev->perm_addr,
++			   ndev->addr_len) != 0)
++			continue;
++
++		return ndev;
++
+ 	}
+ 
+ 	/* Fallback path to check synthetic vf with help of mac addr.
+@@ -2346,7 +2363,7 @@ static int netvsc_prepare_bonding(struct net_device *vf_netdev)
+ 	return NOTIFY_DONE;
+ }
+ 
+-static int netvsc_register_vf(struct net_device *vf_netdev)
++static int netvsc_register_vf(struct net_device *vf_netdev, int context)
+ {
+ 	struct net_device_context *net_device_ctx;
+ 	struct netvsc_device *netvsc_dev;
+@@ -2386,7 +2403,7 @@ static int netvsc_register_vf(struct net_device *vf_netdev)
+ 
+ 	netdev_info(ndev, "VF registering: %s\n", vf_netdev->name);
+ 
+-	if (netvsc_vf_join(vf_netdev, ndev) != 0)
++	if (netvsc_vf_join(vf_netdev, ndev, context) != 0)
+ 		return NOTIFY_DONE;
+ 
+ 	dev_hold(vf_netdev);
+@@ -2411,12 +2428,15 @@ static int netvsc_register_vf(struct net_device *vf_netdev)
+  * During hibernation, if a VF NIC driver (e.g. mlx5) preserves the network
+  * interface, there is only the CHANGE event and no UP or DOWN event.
+  */
+-static int netvsc_vf_changed(struct net_device *vf_netdev)
++static int netvsc_vf_changed(struct net_device *vf_netdev, unsigned long event)
+ {
+ 	struct net_device_context *net_device_ctx;
+ 	struct netvsc_device *netvsc_dev;
+ 	struct net_device *ndev;
+-	bool vf_is_up = netif_running(vf_netdev);
++	bool vf_is_up = false;
++
++	if (event != NETDEV_GOING_DOWN)
++		vf_is_up = netif_running(vf_netdev);
+ 
+ 	ndev = get_netvsc_byref(vf_netdev);
+ 	if (!ndev)
+@@ -2429,7 +2449,6 @@ static int netvsc_vf_changed(struct net_device *vf_netdev)
+ 
+ 	if (net_device_ctx->data_path_is_vf == vf_is_up)
+ 		return NOTIFY_OK;
+-	net_device_ctx->data_path_is_vf = vf_is_up;
+ 
+ 	if (vf_is_up && !net_device_ctx->vf_alloc) {
+ 		netdev_info(ndev, "Waiting for the VF association from host\n");
+@@ -2468,10 +2487,31 @@ static int netvsc_unregister_vf(struct net_device *vf_netdev)
+ 	return NOTIFY_OK;
+ }
+ 
++static int check_dev_is_matching_vf(struct net_device *event_ndev)
++{
++	/* Skip NetVSC interfaces */
++	if (event_ndev->netdev_ops == &device_ops)
++		return -ENODEV;
++
++	/* Avoid non-Ethernet type devices */
++	if (event_ndev->type != ARPHRD_ETHER)
++		return -ENODEV;
++
++	/* Avoid Vlan dev with same MAC registering as VF */
++	if (is_vlan_dev(event_ndev))
++		return -ENODEV;
++
++	/* Avoid Bonding master dev with same MAC registering as VF */
++	if (netif_is_bond_master(event_ndev))
++		return -ENODEV;
++
++	return 0;
++}
++
+ static int netvsc_probe(struct hv_device *dev,
+ 			const struct hv_vmbus_device_id *dev_id)
+ {
+-	struct net_device *net = NULL;
++	struct net_device *net = NULL, *vf_netdev;
+ 	struct net_device_context *net_device_ctx;
+ 	struct netvsc_device_info *device_info = NULL;
+ 	struct netvsc_device *nvdev;
+@@ -2579,6 +2619,30 @@ static int netvsc_probe(struct hv_device *dev,
+ 	}
+ 
+ 	list_add(&net_device_ctx->list, &netvsc_dev_list);
++
++	/* When the hv_netvsc driver is unloaded and reloaded, the
++	 * NET_DEVICE_REGISTER for the vf device is replayed before probe
++	 * is complete. This is because register_netdevice_notifier() gets
++	 * registered before vmbus_driver_register() so that callback func
++	 * is set before probe and we don't miss events like NETDEV_POST_INIT
++	 * So, in this section we try to register the matching vf device that
++	 * is present as a netdevice, knowing that its register call is not
++	 * processed in the netvsc_netdev_notifier(as probing is progress and
++	 * get_netvsc_byslot fails).
++	 */
++	for_each_netdev(dev_net(net), vf_netdev) {
++		ret = check_dev_is_matching_vf(vf_netdev);
++		if (ret != 0)
++			continue;
++
++		if (net != get_netvsc_byslot(vf_netdev))
++			continue;
++
++		netvsc_prepare_bonding(vf_netdev);
++		netvsc_register_vf(vf_netdev, VF_REG_IN_PROBE);
++		__netvsc_vf_setup(net, vf_netdev);
++		break;
++	}
+ 	rtnl_unlock();
+ 
+ 	netvsc_devinfo_put(device_info);
+@@ -2735,35 +2799,24 @@ static int netvsc_netdev_event(struct notifier_block *this,
+ 			       unsigned long event, void *ptr)
+ {
+ 	struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);
++	int ret = 0;
+ 
+-	/* Skip our own events */
+-	if (event_dev->netdev_ops == &device_ops)
+-		return NOTIFY_DONE;
+-
+-	/* Avoid non-Ethernet type devices */
+-	if (event_dev->type != ARPHRD_ETHER)
+-		return NOTIFY_DONE;
+-
+-	/* Avoid Vlan dev with same MAC registering as VF */
+-	if (is_vlan_dev(event_dev))
+-		return NOTIFY_DONE;
+-
+-	/* Avoid Bonding master dev with same MAC registering as VF */
+-	if ((event_dev->priv_flags & IFF_BONDING) &&
+-	    (event_dev->flags & IFF_MASTER))
++	ret = check_dev_is_matching_vf(event_dev);
++	if (ret != 0)
+ 		return NOTIFY_DONE;
+ 
+ 	switch (event) {
+ 	case NETDEV_POST_INIT:
+ 		return netvsc_prepare_bonding(event_dev);
+ 	case NETDEV_REGISTER:
+-		return netvsc_register_vf(event_dev);
++		return netvsc_register_vf(event_dev, VF_REG_IN_NOTIFIER);
+ 	case NETDEV_UNREGISTER:
+ 		return netvsc_unregister_vf(event_dev);
+ 	case NETDEV_UP:
+ 	case NETDEV_DOWN:
+ 	case NETDEV_CHANGE:
+-		return netvsc_vf_changed(event_dev);
++	case NETDEV_GOING_DOWN:
++		return netvsc_vf_changed(event_dev, event);
+ 	default:
+ 		return NOTIFY_DONE;
+ 	}
+diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
+index 90bc0008fa2fd..13f62950eeb9f 100644
+--- a/drivers/net/hyperv/rndis_filter.c
++++ b/drivers/net/hyperv/rndis_filter.c
+@@ -1170,6 +1170,7 @@ static void netvsc_sc_open(struct vmbus_channel *new_sc)
+ 	/* Set the channel before opening.*/
+ 	nvchan->channel = new_sc;
+ 
++	new_sc->rqstor_size = netvsc_rqstor_size(netvsc_ring_bytes);
+ 	ret = vmbus_open(new_sc, netvsc_ring_bytes,
+ 			 netvsc_ring_bytes, NULL, 0,
+ 			 netvsc_channel_cb, nvchan);
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index c5a666bb86ee4..96d3d0bd248bc 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -90,6 +90,12 @@
+ /* statistic update interval (mSec) */
+ #define STAT_UPDATE_TIMER		(1 * 1000)
+ 
++/* time to wait for MAC or FCT to stop (jiffies) */
++#define HW_DISABLE_TIMEOUT		(HZ / 10)
++
++/* time to wait between polling MAC or FCT state (ms) */
++#define HW_DISABLE_DELAY_MS		1
++
+ /* defines interrupts from interrupt EP */
+ #define MAX_INT_EP			(32)
+ #define INT_EP_INTEP			(31)
+@@ -384,8 +390,9 @@ struct lan78xx_net {
+ 	struct urb		*urb_intr;
+ 	struct usb_anchor	deferred;
+ 
++	struct mutex		dev_mutex; /* serialise open/stop wrt suspend/resume */
+ 	struct mutex		phy_mutex; /* for phy access */
+-	unsigned		pipe_in, pipe_out, pipe_intr;
++	unsigned int		pipe_in, pipe_out, pipe_intr;
+ 
+ 	u32			hard_mtu;	/* count any extra framing */
+ 	size_t			rx_urb_size;	/* size for rx urbs */
+@@ -395,7 +402,7 @@ struct lan78xx_net {
+ 	wait_queue_head_t	*wait;
+ 	unsigned char		suspend_count;
+ 
+-	unsigned		maxpacket;
++	unsigned int		maxpacket;
+ 	struct timer_list	delay;
+ 	struct timer_list	stat_monitor;
+ 
+@@ -479,6 +486,26 @@ static int lan78xx_write_reg(struct lan78xx_net *dev, u32 index, u32 data)
+ 	return ret;
+ }
+ 
++static int lan78xx_update_reg(struct lan78xx_net *dev, u32 reg, u32 mask,
++			      u32 data)
++{
++	int ret;
++	u32 buf;
++
++	ret = lan78xx_read_reg(dev, reg, &buf);
++	if (ret < 0)
++		return ret;
++
++	buf &= ~mask;
++	buf |= (mask & data);
++
++	ret = lan78xx_write_reg(dev, reg, buf);
++	if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
+ static int lan78xx_read_stats(struct lan78xx_net *dev,
+ 			      struct lan78xx_statstage *data)
+ {
+@@ -504,7 +531,7 @@ static int lan78xx_read_stats(struct lan78xx_net *dev,
+ 	if (likely(ret >= 0)) {
+ 		src = (u32 *)stats;
+ 		dst = (u32 *)data;
+-		for (i = 0; i < sizeof(*stats)/sizeof(u32); i++) {
++		for (i = 0; i < sizeof(*stats) / sizeof(u32); i++) {
+ 			le32_to_cpus(&src[i]);
+ 			dst[i] = src[i];
+ 		}
+@@ -518,10 +545,11 @@ static int lan78xx_read_stats(struct lan78xx_net *dev,
+ 	return ret;
+ }
+ 
+-#define check_counter_rollover(struct1, dev_stats, member) {	\
+-	if (struct1->member < dev_stats.saved.member)		\
+-		dev_stats.rollover_count.member++;		\
+-	}
++#define check_counter_rollover(struct1, dev_stats, member)		\
++	do {								\
++		if ((struct1)->member < (dev_stats).saved.member)	\
++			(dev_stats).rollover_count.member++;		\
++	} while (0)
+ 
+ static void lan78xx_check_stat_rollover(struct lan78xx_net *dev,
+ 					struct lan78xx_statstage *stats)
+@@ -847,9 +875,9 @@ static int lan78xx_read_raw_otp(struct lan78xx_net *dev, u32 offset,
+ 
+ 	for (i = 0; i < length; i++) {
+ 		lan78xx_write_reg(dev, OTP_ADDR1,
+-					((offset + i) >> 8) & OTP_ADDR1_15_11);
++				  ((offset + i) >> 8) & OTP_ADDR1_15_11);
+ 		lan78xx_write_reg(dev, OTP_ADDR2,
+-					((offset + i) & OTP_ADDR2_10_3));
++				  ((offset + i) & OTP_ADDR2_10_3));
+ 
+ 		lan78xx_write_reg(dev, OTP_FUNC_CMD, OTP_FUNC_CMD_READ_);
+ 		lan78xx_write_reg(dev, OTP_CMD_GO, OTP_CMD_GO_GO_);
+@@ -903,9 +931,9 @@ static int lan78xx_write_raw_otp(struct lan78xx_net *dev, u32 offset,
+ 
+ 	for (i = 0; i < length; i++) {
+ 		lan78xx_write_reg(dev, OTP_ADDR1,
+-					((offset + i) >> 8) & OTP_ADDR1_15_11);
++				  ((offset + i) >> 8) & OTP_ADDR1_15_11);
+ 		lan78xx_write_reg(dev, OTP_ADDR2,
+-					((offset + i) & OTP_ADDR2_10_3));
++				  ((offset + i) & OTP_ADDR2_10_3));
+ 		lan78xx_write_reg(dev, OTP_PRGM_DATA, data[i]);
+ 		lan78xx_write_reg(dev, OTP_TST_CMD, OTP_TST_CMD_PRGVRFY_);
+ 		lan78xx_write_reg(dev, OTP_CMD_GO, OTP_CMD_GO_GO_);
+@@ -962,7 +990,7 @@ static int lan78xx_dataport_wait_not_busy(struct lan78xx_net *dev)
+ 		usleep_range(40, 100);
+ 	}
+ 
+-	netdev_warn(dev->net, "lan78xx_dataport_wait_not_busy timed out");
++	netdev_warn(dev->net, "%s timed out", __func__);
+ 
+ 	return -EIO;
+ }
+@@ -975,7 +1003,7 @@ static int lan78xx_dataport_write(struct lan78xx_net *dev, u32 ram_select,
+ 	int i, ret;
+ 
+ 	if (usb_autopm_get_interface(dev->intf) < 0)
+-			return 0;
++		return 0;
+ 
+ 	mutex_lock(&pdata->dataport_mutex);
+ 
+@@ -1048,9 +1076,9 @@ static void lan78xx_deferred_multicast_write(struct work_struct *param)
+ 	for (i = 1; i < NUM_OF_MAF; i++) {
+ 		lan78xx_write_reg(dev, MAF_HI(i), 0);
+ 		lan78xx_write_reg(dev, MAF_LO(i),
+-					pdata->pfilter_table[i][1]);
++				  pdata->pfilter_table[i][1]);
+ 		lan78xx_write_reg(dev, MAF_HI(i),
+-					pdata->pfilter_table[i][0]);
++				  pdata->pfilter_table[i][0]);
+ 	}
+ 
+ 	lan78xx_write_reg(dev, RFE_CTL, pdata->rfe_ctl);
+@@ -1069,11 +1097,12 @@ static void lan78xx_set_multicast(struct net_device *netdev)
+ 			    RFE_CTL_DA_PERFECT_ | RFE_CTL_MCAST_HASH_);
+ 
+ 	for (i = 0; i < DP_SEL_VHF_HASH_LEN; i++)
+-			pdata->mchash_table[i] = 0;
++		pdata->mchash_table[i] = 0;
++
+ 	/* pfilter_table[0] has own HW address */
+ 	for (i = 1; i < NUM_OF_MAF; i++) {
+-			pdata->pfilter_table[i][0] =
+-			pdata->pfilter_table[i][1] = 0;
++		pdata->pfilter_table[i][0] = 0;
++		pdata->pfilter_table[i][1] = 0;
+ 	}
+ 
+ 	pdata->rfe_ctl |= RFE_CTL_BCAST_EN_;
+@@ -1163,7 +1192,7 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ 	/* clear LAN78xx interrupt status */
+ 	ret = lan78xx_write_reg(dev, INT_STS, INT_STS_PHY_INT_);
+ 	if (unlikely(ret < 0))
+-		return -EIO;
++		return ret;
+ 
+ 	mutex_lock(&phydev->lock);
+ 	phy_read_status(phydev);
+@@ -1176,11 +1205,11 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ 		/* reset MAC */
+ 		ret = lan78xx_read_reg(dev, MAC_CR, &buf);
+ 		if (unlikely(ret < 0))
+-			return -EIO;
++			return ret;
+ 		buf |= MAC_CR_RST_;
+ 		ret = lan78xx_write_reg(dev, MAC_CR, buf);
+ 		if (unlikely(ret < 0))
+-			return -EIO;
++			return ret;
+ 
+ 		del_timer(&dev->stat_monitor);
+ 	} else if (link && !dev->link_on) {
+@@ -1192,18 +1221,30 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ 			if (ecmd.base.speed == 1000) {
+ 				/* disable U2 */
+ 				ret = lan78xx_read_reg(dev, USB_CFG1, &buf);
++				if (ret < 0)
++					return ret;
+ 				buf &= ~USB_CFG1_DEV_U2_INIT_EN_;
+ 				ret = lan78xx_write_reg(dev, USB_CFG1, buf);
++				if (ret < 0)
++					return ret;
+ 				/* enable U1 */
+ 				ret = lan78xx_read_reg(dev, USB_CFG1, &buf);
++				if (ret < 0)
++					return ret;
+ 				buf |= USB_CFG1_DEV_U1_INIT_EN_;
+ 				ret = lan78xx_write_reg(dev, USB_CFG1, buf);
++				if (ret < 0)
++					return ret;
+ 			} else {
+ 				/* enable U1 & U2 */
+ 				ret = lan78xx_read_reg(dev, USB_CFG1, &buf);
++				if (ret < 0)
++					return ret;
+ 				buf |= USB_CFG1_DEV_U2_INIT_EN_;
+ 				buf |= USB_CFG1_DEV_U1_INIT_EN_;
+ 				ret = lan78xx_write_reg(dev, USB_CFG1, buf);
++				if (ret < 0)
++					return ret;
+ 			}
+ 		}
+ 
+@@ -1221,6 +1262,8 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ 
+ 		ret = lan78xx_update_flowcontrol(dev, ecmd.base.duplex, ladv,
+ 						 radv);
++		if (ret < 0)
++			return ret;
+ 
+ 		if (!timer_pending(&dev->stat_monitor)) {
+ 			dev->delta = 1;
+@@ -1231,7 +1274,7 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ 		tasklet_schedule(&dev->bh);
+ 	}
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ /* some work can't be done in tasklets, so we use keventd
+@@ -1267,9 +1310,10 @@ static void lan78xx_status(struct lan78xx_net *dev, struct urb *urb)
+ 			generic_handle_irq(dev->domain_data.phyirq);
+ 			local_irq_enable();
+ 		}
+-	} else
++	} else {
+ 		netdev_warn(dev->net,
+ 			    "unexpected interrupt: 0x%08x\n", intdata);
++	}
+ }
+ 
+ static int lan78xx_ethtool_get_eeprom_len(struct net_device *netdev)
+@@ -1358,7 +1402,7 @@ static void lan78xx_get_wol(struct net_device *netdev,
+ 	struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]);
+ 
+ 	if (usb_autopm_get_interface(dev->intf) < 0)
+-			return;
++		return;
+ 
+ 	ret = lan78xx_read_reg(dev, USB_CFG0, &buf);
+ 	if (unlikely(ret < 0)) {
+@@ -1980,7 +2024,7 @@ static int lan8835_fixup(struct phy_device *phydev)
+ 
+ 	/* RGMII MAC TXC Delay Enable */
+ 	lan78xx_write_reg(dev, MAC_RGMII_ID,
+-				MAC_RGMII_ID_TXC_DELAY_EN_);
++			  MAC_RGMII_ID_TXC_DELAY_EN_);
+ 
+ 	/* RGMII TX DLL Tune Adjust */
+ 	lan78xx_write_reg(dev, RGMII_TX_BYP_DLL, 0x3D00);
+@@ -2244,11 +2288,16 @@ static int lan78xx_change_mtu(struct net_device *netdev, int new_mtu)
+ 	int ll_mtu = new_mtu + netdev->hard_header_len;
+ 	int old_hard_mtu = dev->hard_mtu;
+ 	int old_rx_urb_size = dev->rx_urb_size;
++	int ret;
+ 
+ 	/* no second zero-length packet read wanted after mtu-sized packets */
+ 	if ((ll_mtu % dev->maxpacket) == 0)
+ 		return -EDOM;
+ 
++	ret = usb_autopm_get_interface(dev->intf);
++	if (ret < 0)
++		return ret;
++
+ 	lan78xx_set_rx_max_frame_length(dev, new_mtu + VLAN_ETH_HLEN);
+ 
+ 	netdev->mtu = new_mtu;
+@@ -2264,6 +2313,8 @@ static int lan78xx_change_mtu(struct net_device *netdev, int new_mtu)
+ 		}
+ 	}
+ 
++	usb_autopm_put_interface(dev->intf);
++
+ 	return 0;
+ }
+ 
+@@ -2420,26 +2471,186 @@ static void lan78xx_init_ltm(struct lan78xx_net *dev)
+ 	lan78xx_write_reg(dev, LTM_INACTIVE1, regs[5]);
+ }
+ 
++static int lan78xx_start_hw(struct lan78xx_net *dev, u32 reg, u32 hw_enable)
++{
++	return lan78xx_update_reg(dev, reg, hw_enable, hw_enable);
++}
++
++static int lan78xx_stop_hw(struct lan78xx_net *dev, u32 reg, u32 hw_enabled,
++			   u32 hw_disabled)
++{
++	unsigned long timeout;
++	bool stopped = true;
++	int ret;
++	u32 buf;
++
++	/* Stop the h/w block (if not already stopped) */
++
++	ret = lan78xx_read_reg(dev, reg, &buf);
++	if (ret < 0)
++		return ret;
++
++	if (buf & hw_enabled) {
++		buf &= ~hw_enabled;
++
++		ret = lan78xx_write_reg(dev, reg, buf);
++		if (ret < 0)
++			return ret;
++
++		stopped = false;
++		timeout = jiffies + HW_DISABLE_TIMEOUT;
++		do  {
++			ret = lan78xx_read_reg(dev, reg, &buf);
++			if (ret < 0)
++				return ret;
++
++			if (buf & hw_disabled)
++				stopped = true;
++			else
++				msleep(HW_DISABLE_DELAY_MS);
++		} while (!stopped && !time_after(jiffies, timeout));
++	}
++
++	ret = stopped ? 0 : -ETIME;
++
++	return ret;
++}
++
++static int lan78xx_flush_fifo(struct lan78xx_net *dev, u32 reg, u32 fifo_flush)
++{
++	return lan78xx_update_reg(dev, reg, fifo_flush, fifo_flush);
++}
++
++static int lan78xx_start_tx_path(struct lan78xx_net *dev)
++{
++	int ret;
++
++	netif_dbg(dev, drv, dev->net, "start tx path");
++
++	/* Start the MAC transmitter */
++
++	ret = lan78xx_start_hw(dev, MAC_TX, MAC_TX_TXEN_);
++	if (ret < 0)
++		return ret;
++
++	/* Start the Tx FIFO */
++
++	ret = lan78xx_start_hw(dev, FCT_TX_CTL, FCT_TX_CTL_EN_);
++	if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
++static int lan78xx_stop_tx_path(struct lan78xx_net *dev)
++{
++	int ret;
++
++	netif_dbg(dev, drv, dev->net, "stop tx path");
++
++	/* Stop the Tx FIFO */
++
++	ret = lan78xx_stop_hw(dev, FCT_TX_CTL, FCT_TX_CTL_EN_, FCT_TX_CTL_DIS_);
++	if (ret < 0)
++		return ret;
++
++	/* Stop the MAC transmitter */
++
++	ret = lan78xx_stop_hw(dev, MAC_TX, MAC_TX_TXEN_, MAC_TX_TXD_);
++	if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
++/* The caller must ensure the Tx path is stopped before calling
++ * lan78xx_flush_tx_fifo().
++ */
++static int lan78xx_flush_tx_fifo(struct lan78xx_net *dev)
++{
++	return lan78xx_flush_fifo(dev, FCT_TX_CTL, FCT_TX_CTL_RST_);
++}
++
++static int lan78xx_start_rx_path(struct lan78xx_net *dev)
++{
++	int ret;
++
++	netif_dbg(dev, drv, dev->net, "start rx path");
++
++	/* Start the Rx FIFO */
++
++	ret = lan78xx_start_hw(dev, FCT_RX_CTL, FCT_RX_CTL_EN_);
++	if (ret < 0)
++		return ret;
++
++	/* Start the MAC receiver*/
++
++	ret = lan78xx_start_hw(dev, MAC_RX, MAC_RX_RXEN_);
++	if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
++static int lan78xx_stop_rx_path(struct lan78xx_net *dev)
++{
++	int ret;
++
++	netif_dbg(dev, drv, dev->net, "stop rx path");
++
++	/* Stop the MAC receiver */
++
++	ret = lan78xx_stop_hw(dev, MAC_RX, MAC_RX_RXEN_, MAC_RX_RXD_);
++	if (ret < 0)
++		return ret;
++
++	/* Stop the Rx FIFO */
++
++	ret = lan78xx_stop_hw(dev, FCT_RX_CTL, FCT_RX_CTL_EN_, FCT_RX_CTL_DIS_);
++	if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
++/* The caller must ensure the Rx path is stopped before calling
++ * lan78xx_flush_rx_fifo().
++ */
++static int lan78xx_flush_rx_fifo(struct lan78xx_net *dev)
++{
++	return lan78xx_flush_fifo(dev, FCT_RX_CTL, FCT_RX_CTL_RST_);
++}
++
+ static int lan78xx_reset(struct lan78xx_net *dev)
+ {
+ 	struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]);
+-	u32 buf;
+-	int ret = 0;
+ 	unsigned long timeout;
++	int ret;
++	u32 buf;
+ 	u8 sig;
+ 
+ 	ret = lan78xx_read_reg(dev, HW_CFG, &buf);
++	if (ret < 0)
++		return ret;
++
+ 	buf |= HW_CFG_LRST_;
++
+ 	ret = lan78xx_write_reg(dev, HW_CFG, buf);
++	if (ret < 0)
++		return ret;
+ 
+ 	timeout = jiffies + HZ;
+ 	do {
+ 		mdelay(1);
+ 		ret = lan78xx_read_reg(dev, HW_CFG, &buf);
++		if (ret < 0)
++			return ret;
++
+ 		if (time_after(jiffies, timeout)) {
+ 			netdev_warn(dev->net,
+ 				    "timeout on completion of LiteReset");
+-			return -EIO;
++			ret = -ETIMEDOUT;
++			return ret;
+ 		}
+ 	} while (buf & HW_CFG_LRST_);
+ 
+@@ -2447,13 +2658,22 @@ static int lan78xx_reset(struct lan78xx_net *dev)
+ 
+ 	/* save DEVID for later usage */
+ 	ret = lan78xx_read_reg(dev, ID_REV, &buf);
++	if (ret < 0)
++		return ret;
++
+ 	dev->chipid = (buf & ID_REV_CHIP_ID_MASK_) >> 16;
+ 	dev->chiprev = buf & ID_REV_CHIP_REV_MASK_;
+ 
+ 	/* Respond to the IN token with a NAK */
+ 	ret = lan78xx_read_reg(dev, USB_CFG0, &buf);
++	if (ret < 0)
++		return ret;
++
+ 	buf |= USB_CFG_BIR_;
++
+ 	ret = lan78xx_write_reg(dev, USB_CFG0, buf);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* Init LTM */
+ 	lan78xx_init_ltm(dev);
+@@ -2476,53 +2696,105 @@ static int lan78xx_reset(struct lan78xx_net *dev)
+ 	}
+ 
+ 	ret = lan78xx_write_reg(dev, BURST_CAP, buf);
++	if (ret < 0)
++		return ret;
++
+ 	ret = lan78xx_write_reg(dev, BULK_IN_DLY, DEFAULT_BULK_IN_DELAY);
++	if (ret < 0)
++		return ret;
+ 
+ 	ret = lan78xx_read_reg(dev, HW_CFG, &buf);
++	if (ret < 0)
++		return ret;
++
+ 	buf |= HW_CFG_MEF_;
++
+ 	ret = lan78xx_write_reg(dev, HW_CFG, buf);
++	if (ret < 0)
++		return ret;
+ 
+ 	ret = lan78xx_read_reg(dev, USB_CFG0, &buf);
++	if (ret < 0)
++		return ret;
++
+ 	buf |= USB_CFG_BCE_;
++
+ 	ret = lan78xx_write_reg(dev, USB_CFG0, buf);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* set FIFO sizes */
+ 	buf = (MAX_RX_FIFO_SIZE - 512) / 512;
++
+ 	ret = lan78xx_write_reg(dev, FCT_RX_FIFO_END, buf);
++	if (ret < 0)
++		return ret;
+ 
+ 	buf = (MAX_TX_FIFO_SIZE - 512) / 512;
++
+ 	ret = lan78xx_write_reg(dev, FCT_TX_FIFO_END, buf);
++	if (ret < 0)
++		return ret;
+ 
+ 	ret = lan78xx_write_reg(dev, INT_STS, INT_STS_CLEAR_ALL_);
++	if (ret < 0)
++		return ret;
++
+ 	ret = lan78xx_write_reg(dev, FLOW, 0);
++	if (ret < 0)
++		return ret;
++
+ 	ret = lan78xx_write_reg(dev, FCT_FLOW, 0);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* Don't need rfe_ctl_lock during initialisation */
+ 	ret = lan78xx_read_reg(dev, RFE_CTL, &pdata->rfe_ctl);
++	if (ret < 0)
++		return ret;
++
+ 	pdata->rfe_ctl |= RFE_CTL_BCAST_EN_ | RFE_CTL_DA_PERFECT_;
++
+ 	ret = lan78xx_write_reg(dev, RFE_CTL, pdata->rfe_ctl);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* Enable or disable checksum offload engines */
+-	lan78xx_set_features(dev->net, dev->net->features);
++	ret = lan78xx_set_features(dev->net, dev->net->features);
++	if (ret < 0)
++		return ret;
+ 
+ 	lan78xx_set_multicast(dev->net);
+ 
+ 	/* reset PHY */
+ 	ret = lan78xx_read_reg(dev, PMT_CTL, &buf);
++	if (ret < 0)
++		return ret;
++
+ 	buf |= PMT_CTL_PHY_RST_;
++
+ 	ret = lan78xx_write_reg(dev, PMT_CTL, buf);
++	if (ret < 0)
++		return ret;
+ 
+ 	timeout = jiffies + HZ;
+ 	do {
+ 		mdelay(1);
+ 		ret = lan78xx_read_reg(dev, PMT_CTL, &buf);
++		if (ret < 0)
++			return ret;
++
+ 		if (time_after(jiffies, timeout)) {
+ 			netdev_warn(dev->net, "timeout waiting for PHY Reset");
+-			return -EIO;
++			ret = -ETIMEDOUT;
++			return ret;
+ 		}
+ 	} while ((buf & PMT_CTL_PHY_RST_) || !(buf & PMT_CTL_READY_));
+ 
+ 	ret = lan78xx_read_reg(dev, MAC_CR, &buf);
++	if (ret < 0)
++		return ret;
++
+ 	/* LAN7801 only has RGMII mode */
+ 	if (dev->chipid == ID_REV_CHIP_ID_7801_)
+ 		buf &= ~MAC_CR_GMII_EN_;
+@@ -2537,27 +2809,13 @@ static int lan78xx_reset(struct lan78xx_net *dev)
+ 		}
+ 	}
+ 	ret = lan78xx_write_reg(dev, MAC_CR, buf);
+-
+-	ret = lan78xx_read_reg(dev, MAC_TX, &buf);
+-	buf |= MAC_TX_TXEN_;
+-	ret = lan78xx_write_reg(dev, MAC_TX, buf);
+-
+-	ret = lan78xx_read_reg(dev, FCT_TX_CTL, &buf);
+-	buf |= FCT_TX_CTL_EN_;
+-	ret = lan78xx_write_reg(dev, FCT_TX_CTL, buf);
++	if (ret < 0)
++		return ret;
+ 
+ 	ret = lan78xx_set_rx_max_frame_length(dev,
+ 					      dev->net->mtu + VLAN_ETH_HLEN);
+ 
+-	ret = lan78xx_read_reg(dev, MAC_RX, &buf);
+-	buf |= MAC_RX_RXEN_;
+-	ret = lan78xx_write_reg(dev, MAC_RX, buf);
+-
+-	ret = lan78xx_read_reg(dev, FCT_RX_CTL, &buf);
+-	buf |= FCT_RX_CTL_EN_;
+-	ret = lan78xx_write_reg(dev, FCT_RX_CTL, buf);
+-
+-	return 0;
++	return ret;
+ }
+ 
+ static void lan78xx_init_stats(struct lan78xx_net *dev)
+@@ -2591,9 +2849,13 @@ static int lan78xx_open(struct net_device *net)
+ 	struct lan78xx_net *dev = netdev_priv(net);
+ 	int ret;
+ 
++	netif_dbg(dev, ifup, dev->net, "open device");
++
+ 	ret = usb_autopm_get_interface(dev->intf);
+ 	if (ret < 0)
+-		goto out;
++		return ret;
++
++	mutex_lock(&dev->dev_mutex);
+ 
+ 	phy_start(net->phydev);
+ 
+@@ -2609,6 +2871,20 @@ static int lan78xx_open(struct net_device *net)
+ 		}
+ 	}
+ 
++	ret = lan78xx_flush_rx_fifo(dev);
++	if (ret < 0)
++		goto done;
++	ret = lan78xx_flush_tx_fifo(dev);
++	if (ret < 0)
++		goto done;
++
++	ret = lan78xx_start_tx_path(dev);
++	if (ret < 0)
++		goto done;
++	ret = lan78xx_start_rx_path(dev);
++	if (ret < 0)
++		goto done;
++
+ 	lan78xx_init_stats(dev);
+ 
+ 	set_bit(EVENT_DEV_OPEN, &dev->flags);
+@@ -2619,9 +2895,11 @@ static int lan78xx_open(struct net_device *net)
+ 
+ 	lan78xx_defer_kevent(dev, EVENT_LINK_RESET);
+ done:
+-	usb_autopm_put_interface(dev->intf);
++	mutex_unlock(&dev->dev_mutex);
++
++	if (ret < 0)
++		usb_autopm_put_interface(dev->intf);
+ 
+-out:
+ 	return ret;
+ }
+ 
+@@ -2638,38 +2916,56 @@ static void lan78xx_terminate_urbs(struct lan78xx_net *dev)
+ 	temp = unlink_urbs(dev, &dev->txq) + unlink_urbs(dev, &dev->rxq);
+ 
+ 	/* maybe wait for deletions to finish. */
+-	while (!skb_queue_empty(&dev->rxq) &&
+-	       !skb_queue_empty(&dev->txq) &&
+-	       !skb_queue_empty(&dev->done)) {
++	while (!skb_queue_empty(&dev->rxq) ||
++	       !skb_queue_empty(&dev->txq)) {
+ 		schedule_timeout(msecs_to_jiffies(UNLINK_TIMEOUT_MS));
+ 		set_current_state(TASK_UNINTERRUPTIBLE);
+ 		netif_dbg(dev, ifdown, dev->net,
+-			  "waited for %d urb completions\n", temp);
++			  "waited for %d urb completions", temp);
+ 	}
+ 	set_current_state(TASK_RUNNING);
+ 	dev->wait = NULL;
+ 	remove_wait_queue(&unlink_wakeup, &wait);
++
++	while (!skb_queue_empty(&dev->done)) {
++		struct skb_data *entry;
++		struct sk_buff *skb;
++
++		skb = skb_dequeue(&dev->done);
++		entry = (struct skb_data *)(skb->cb);
++		usb_free_urb(entry->urb);
++		dev_kfree_skb(skb);
++	}
+ }
+ 
+ static int lan78xx_stop(struct net_device *net)
+ {
+ 	struct lan78xx_net *dev = netdev_priv(net);
+ 
++	netif_dbg(dev, ifup, dev->net, "stop device");
++
++	mutex_lock(&dev->dev_mutex);
++
+ 	if (timer_pending(&dev->stat_monitor))
+ 		del_timer_sync(&dev->stat_monitor);
+ 
+-	if (net->phydev)
+-		phy_stop(net->phydev);
+-
+ 	clear_bit(EVENT_DEV_OPEN, &dev->flags);
+ 	netif_stop_queue(net);
++	tasklet_kill(&dev->bh);
++
++	lan78xx_terminate_urbs(dev);
+ 
+ 	netif_info(dev, ifdown, dev->net,
+ 		   "stop stats: rx/tx %lu/%lu, errs %lu/%lu\n",
+ 		   net->stats.rx_packets, net->stats.tx_packets,
+ 		   net->stats.rx_errors, net->stats.tx_errors);
+ 
+-	lan78xx_terminate_urbs(dev);
++	/* ignore errors that occur stopping the Tx and Rx data paths */
++	lan78xx_stop_tx_path(dev);
++	lan78xx_stop_rx_path(dev);
++
++	if (net->phydev)
++		phy_stop(net->phydev);
+ 
+ 	usb_kill_urb(dev->urb_intr);
+ 
+@@ -2679,12 +2975,17 @@ static int lan78xx_stop(struct net_device *net)
+ 	 * can't flush_scheduled_work() until we drop rtnl (later),
+ 	 * else workers could deadlock; so make workers a NOP.
+ 	 */
+-	dev->flags = 0;
++	clear_bit(EVENT_TX_HALT, &dev->flags);
++	clear_bit(EVENT_RX_HALT, &dev->flags);
++	clear_bit(EVENT_LINK_RESET, &dev->flags);
++	clear_bit(EVENT_STAT_UPDATE, &dev->flags);
++
+ 	cancel_delayed_work_sync(&dev->wq);
+-	tasklet_kill(&dev->bh);
+ 
+ 	usb_autopm_put_interface(dev->intf);
+ 
++	mutex_unlock(&dev->dev_mutex);
++
+ 	return 0;
+ }
+ 
+@@ -2807,6 +3108,9 @@ lan78xx_start_xmit(struct sk_buff *skb, struct net_device *net)
+ 	struct lan78xx_net *dev = netdev_priv(net);
+ 	struct sk_buff *skb2 = NULL;
+ 
++	if (test_bit(EVENT_DEV_ASLEEP, &dev->flags))
++		schedule_delayed_work(&dev->wq, 0);
++
+ 	if (skb) {
+ 		skb_tx_timestamp(skb);
+ 		skb2 = lan78xx_tx_prep(dev, skb, GFP_ATOMIC);
+@@ -3334,9 +3638,10 @@ static void lan78xx_tx_bh(struct lan78xx_net *dev)
+ 		if (skb)
+ 			dev_kfree_skb_any(skb);
+ 		usb_free_urb(urb);
+-	} else
++	} else {
+ 		netif_dbg(dev, tx_queued, dev->net,
+ 			  "> tx, len %d, type 0x%x\n", length, skb->protocol);
++	}
+ }
+ 
+ static void lan78xx_rx_bh(struct lan78xx_net *dev)
+@@ -3412,18 +3717,17 @@ static void lan78xx_delayedwork(struct work_struct *work)
+ 
+ 	dev = container_of(work, struct lan78xx_net, wq.work);
+ 
++	if (usb_autopm_get_interface(dev->intf) < 0)
++		return;
++
+ 	if (test_bit(EVENT_TX_HALT, &dev->flags)) {
+ 		unlink_urbs(dev, &dev->txq);
+-		status = usb_autopm_get_interface(dev->intf);
+-		if (status < 0)
+-			goto fail_pipe;
++
+ 		status = usb_clear_halt(dev->udev, dev->pipe_out);
+-		usb_autopm_put_interface(dev->intf);
+ 		if (status < 0 &&
+ 		    status != -EPIPE &&
+ 		    status != -ESHUTDOWN) {
+ 			if (netif_msg_tx_err(dev))
+-fail_pipe:
+ 				netdev_err(dev->net,
+ 					   "can't clear tx halt, status %d\n",
+ 					   status);
+@@ -3433,18 +3737,14 @@ static void lan78xx_delayedwork(struct work_struct *work)
+ 				netif_wake_queue(dev->net);
+ 		}
+ 	}
++
+ 	if (test_bit(EVENT_RX_HALT, &dev->flags)) {
+ 		unlink_urbs(dev, &dev->rxq);
+-		status = usb_autopm_get_interface(dev->intf);
+-		if (status < 0)
+-				goto fail_halt;
+ 		status = usb_clear_halt(dev->udev, dev->pipe_in);
+-		usb_autopm_put_interface(dev->intf);
+ 		if (status < 0 &&
+ 		    status != -EPIPE &&
+ 		    status != -ESHUTDOWN) {
+ 			if (netif_msg_rx_err(dev))
+-fail_halt:
+ 				netdev_err(dev->net,
+ 					   "can't clear rx halt, status %d\n",
+ 					   status);
+@@ -3458,16 +3758,9 @@ static void lan78xx_delayedwork(struct work_struct *work)
+ 		int ret = 0;
+ 
+ 		clear_bit(EVENT_LINK_RESET, &dev->flags);
+-		status = usb_autopm_get_interface(dev->intf);
+-		if (status < 0)
+-			goto skip_reset;
+ 		if (lan78xx_link_reset(dev) < 0) {
+-			usb_autopm_put_interface(dev->intf);
+-skip_reset:
+ 			netdev_info(dev->net, "link reset failed (%d)\n",
+ 				    ret);
+-		} else {
+-			usb_autopm_put_interface(dev->intf);
+ 		}
+ 	}
+ 
+@@ -3481,6 +3774,8 @@ static void lan78xx_delayedwork(struct work_struct *work)
+ 
+ 		dev->delta = min((dev->delta * 2), 50);
+ 	}
++
++	usb_autopm_put_interface(dev->intf);
+ }
+ 
+ static void intr_complete(struct urb *urb)
+@@ -3610,8 +3905,8 @@ static int lan78xx_probe(struct usb_interface *intf,
+ 	struct net_device *netdev;
+ 	struct usb_device *udev;
+ 	int ret;
+-	unsigned maxp;
+-	unsigned period;
++	unsigned int maxp;
++	unsigned int period;
+ 	u8 *buf = NULL;
+ 
+ 	udev = interface_to_usbdev(intf);
+@@ -3640,6 +3935,7 @@ static int lan78xx_probe(struct usb_interface *intf,
+ 	skb_queue_head_init(&dev->rxq_pause);
+ 	skb_queue_head_init(&dev->txq_pend);
+ 	mutex_init(&dev->phy_mutex);
++	mutex_init(&dev->dev_mutex);
+ 
+ 	tasklet_init(&dev->bh, lan78xx_bh, (unsigned long)dev);
+ 	INIT_DELAYED_WORK(&dev->wq, lan78xx_delayedwork);
+@@ -3782,37 +4078,119 @@ static u16 lan78xx_wakeframe_crc16(const u8 *buf, int len)
+ 	return crc;
+ }
+ 
+-static int lan78xx_set_suspend(struct lan78xx_net *dev, u32 wol)
++static int lan78xx_set_auto_suspend(struct lan78xx_net *dev)
+ {
+ 	u32 buf;
+-	int mask_index;
+-	u16 crc;
+-	u32 temp_wucsr;
+-	u32 temp_pmt_ctl;
++	int ret;
++
++	ret = lan78xx_stop_tx_path(dev);
++	if (ret < 0)
++		return ret;
++
++	ret = lan78xx_stop_rx_path(dev);
++	if (ret < 0)
++		return ret;
++
++	/* auto suspend (selective suspend) */
++
++	ret = lan78xx_write_reg(dev, WUCSR, 0);
++	if (ret < 0)
++		return ret;
++	ret = lan78xx_write_reg(dev, WUCSR2, 0);
++	if (ret < 0)
++		return ret;
++	ret = lan78xx_write_reg(dev, WK_SRC, 0xFFF1FF1FUL);
++	if (ret < 0)
++		return ret;
++
++	/* set goodframe wakeup */
++
++	ret = lan78xx_read_reg(dev, WUCSR, &buf);
++	if (ret < 0)
++		return ret;
++
++	buf |= WUCSR_RFE_WAKE_EN_;
++	buf |= WUCSR_STORE_WAKE_;
++
++	ret = lan78xx_write_reg(dev, WUCSR, buf);
++	if (ret < 0)
++		return ret;
++
++	ret = lan78xx_read_reg(dev, PMT_CTL, &buf);
++	if (ret < 0)
++		return ret;
++
++	buf &= ~PMT_CTL_RES_CLR_WKP_EN_;
++	buf |= PMT_CTL_RES_CLR_WKP_STS_;
++	buf |= PMT_CTL_PHY_WAKE_EN_;
++	buf |= PMT_CTL_WOL_EN_;
++	buf &= ~PMT_CTL_SUS_MODE_MASK_;
++	buf |= PMT_CTL_SUS_MODE_3_;
++
++	ret = lan78xx_write_reg(dev, PMT_CTL, buf);
++	if (ret < 0)
++		return ret;
++
++	ret = lan78xx_read_reg(dev, PMT_CTL, &buf);
++	if (ret < 0)
++		return ret;
++
++	buf |= PMT_CTL_WUPS_MASK_;
++
++	ret = lan78xx_write_reg(dev, PMT_CTL, buf);
++	if (ret < 0)
++		return ret;
++
++	ret = lan78xx_start_rx_path(dev);
++
++	return ret;
++}
++
++static int lan78xx_set_suspend(struct lan78xx_net *dev, u32 wol)
++{
+ 	const u8 ipv4_multicast[3] = { 0x01, 0x00, 0x5E };
+ 	const u8 ipv6_multicast[3] = { 0x33, 0x33 };
+ 	const u8 arp_type[2] = { 0x08, 0x06 };
++	u32 temp_pmt_ctl;
++	int mask_index;
++	u32 temp_wucsr;
++	u32 buf;
++	u16 crc;
++	int ret;
+ 
+-	lan78xx_read_reg(dev, MAC_TX, &buf);
+-	buf &= ~MAC_TX_TXEN_;
+-	lan78xx_write_reg(dev, MAC_TX, buf);
+-	lan78xx_read_reg(dev, MAC_RX, &buf);
+-	buf &= ~MAC_RX_RXEN_;
+-	lan78xx_write_reg(dev, MAC_RX, buf);
++	ret = lan78xx_stop_tx_path(dev);
++	if (ret < 0)
++		return ret;
++	ret = lan78xx_stop_rx_path(dev);
++	if (ret < 0)
++		return ret;
+ 
+-	lan78xx_write_reg(dev, WUCSR, 0);
+-	lan78xx_write_reg(dev, WUCSR2, 0);
+-	lan78xx_write_reg(dev, WK_SRC, 0xFFF1FF1FUL);
++	ret = lan78xx_write_reg(dev, WUCSR, 0);
++	if (ret < 0)
++		return ret;
++	ret = lan78xx_write_reg(dev, WUCSR2, 0);
++	if (ret < 0)
++		return ret;
++	ret = lan78xx_write_reg(dev, WK_SRC, 0xFFF1FF1FUL);
++	if (ret < 0)
++		return ret;
+ 
+ 	temp_wucsr = 0;
+ 
+ 	temp_pmt_ctl = 0;
+-	lan78xx_read_reg(dev, PMT_CTL, &temp_pmt_ctl);
++
++	ret = lan78xx_read_reg(dev, PMT_CTL, &temp_pmt_ctl);
++	if (ret < 0)
++		return ret;
++
+ 	temp_pmt_ctl &= ~PMT_CTL_RES_CLR_WKP_EN_;
+ 	temp_pmt_ctl |= PMT_CTL_RES_CLR_WKP_STS_;
+ 
+-	for (mask_index = 0; mask_index < NUM_OF_WUF_CFG; mask_index++)
+-		lan78xx_write_reg(dev, WUF_CFG(mask_index), 0);
++	for (mask_index = 0; mask_index < NUM_OF_WUF_CFG; mask_index++) {
++		ret = lan78xx_write_reg(dev, WUF_CFG(mask_index), 0);
++		if (ret < 0)
++			return ret;
++	}
+ 
+ 	mask_index = 0;
+ 	if (wol & WAKE_PHY) {
+@@ -3841,30 +4219,52 @@ static int lan78xx_set_suspend(struct lan78xx_net *dev, u32 wol)
+ 
+ 		/* set WUF_CFG & WUF_MASK for IPv4 Multicast */
+ 		crc = lan78xx_wakeframe_crc16(ipv4_multicast, 3);
+-		lan78xx_write_reg(dev, WUF_CFG(mask_index),
++		ret = lan78xx_write_reg(dev, WUF_CFG(mask_index),
+ 					WUF_CFGX_EN_ |
+ 					WUF_CFGX_TYPE_MCAST_ |
+ 					(0 << WUF_CFGX_OFFSET_SHIFT_) |
+ 					(crc & WUF_CFGX_CRC16_MASK_));
++		if (ret < 0)
++			return ret;
++
++		ret = lan78xx_write_reg(dev, WUF_MASK0(mask_index), 7);
++		if (ret < 0)
++			return ret;
++		ret = lan78xx_write_reg(dev, WUF_MASK1(mask_index), 0);
++		if (ret < 0)
++			return ret;
++		ret = lan78xx_write_reg(dev, WUF_MASK2(mask_index), 0);
++		if (ret < 0)
++			return ret;
++		ret = lan78xx_write_reg(dev, WUF_MASK3(mask_index), 0);
++		if (ret < 0)
++			return ret;
+ 
+-		lan78xx_write_reg(dev, WUF_MASK0(mask_index), 7);
+-		lan78xx_write_reg(dev, WUF_MASK1(mask_index), 0);
+-		lan78xx_write_reg(dev, WUF_MASK2(mask_index), 0);
+-		lan78xx_write_reg(dev, WUF_MASK3(mask_index), 0);
+ 		mask_index++;
+ 
+ 		/* for IPv6 Multicast */
+ 		crc = lan78xx_wakeframe_crc16(ipv6_multicast, 2);
+-		lan78xx_write_reg(dev, WUF_CFG(mask_index),
++		ret = lan78xx_write_reg(dev, WUF_CFG(mask_index),
+ 					WUF_CFGX_EN_ |
+ 					WUF_CFGX_TYPE_MCAST_ |
+ 					(0 << WUF_CFGX_OFFSET_SHIFT_) |
+ 					(crc & WUF_CFGX_CRC16_MASK_));
++		if (ret < 0)
++			return ret;
++
++		ret = lan78xx_write_reg(dev, WUF_MASK0(mask_index), 3);
++		if (ret < 0)
++			return ret;
++		ret = lan78xx_write_reg(dev, WUF_MASK1(mask_index), 0);
++		if (ret < 0)
++			return ret;
++		ret = lan78xx_write_reg(dev, WUF_MASK2(mask_index), 0);
++		if (ret < 0)
++			return ret;
++		ret = lan78xx_write_reg(dev, WUF_MASK3(mask_index), 0);
++		if (ret < 0)
++			return ret;
+ 
+-		lan78xx_write_reg(dev, WUF_MASK0(mask_index), 3);
+-		lan78xx_write_reg(dev, WUF_MASK1(mask_index), 0);
+-		lan78xx_write_reg(dev, WUF_MASK2(mask_index), 0);
+-		lan78xx_write_reg(dev, WUF_MASK3(mask_index), 0);
+ 		mask_index++;
+ 
+ 		temp_pmt_ctl |= PMT_CTL_WOL_EN_;
+@@ -3885,16 +4285,27 @@ static int lan78xx_set_suspend(struct lan78xx_net *dev, u32 wol)
+ 		 * for packettype (offset 12,13) = ARP (0x0806)
+ 		 */
+ 		crc = lan78xx_wakeframe_crc16(arp_type, 2);
+-		lan78xx_write_reg(dev, WUF_CFG(mask_index),
++		ret = lan78xx_write_reg(dev, WUF_CFG(mask_index),
+ 					WUF_CFGX_EN_ |
+ 					WUF_CFGX_TYPE_ALL_ |
+ 					(0 << WUF_CFGX_OFFSET_SHIFT_) |
+ 					(crc & WUF_CFGX_CRC16_MASK_));
++		if (ret < 0)
++			return ret;
++
++		ret = lan78xx_write_reg(dev, WUF_MASK0(mask_index), 0x3000);
++		if (ret < 0)
++			return ret;
++		ret = lan78xx_write_reg(dev, WUF_MASK1(mask_index), 0);
++		if (ret < 0)
++			return ret;
++		ret = lan78xx_write_reg(dev, WUF_MASK2(mask_index), 0);
++		if (ret < 0)
++			return ret;
++		ret = lan78xx_write_reg(dev, WUF_MASK3(mask_index), 0);
++		if (ret < 0)
++			return ret;
+ 
+-		lan78xx_write_reg(dev, WUF_MASK0(mask_index), 0x3000);
+-		lan78xx_write_reg(dev, WUF_MASK1(mask_index), 0);
+-		lan78xx_write_reg(dev, WUF_MASK2(mask_index), 0);
+-		lan78xx_write_reg(dev, WUF_MASK3(mask_index), 0);
+ 		mask_index++;
+ 
+ 		temp_pmt_ctl |= PMT_CTL_WOL_EN_;
+@@ -3902,7 +4313,9 @@ static int lan78xx_set_suspend(struct lan78xx_net *dev, u32 wol)
+ 		temp_pmt_ctl |= PMT_CTL_SUS_MODE_0_;
+ 	}
+ 
+-	lan78xx_write_reg(dev, WUCSR, temp_wucsr);
++	ret = lan78xx_write_reg(dev, WUCSR, temp_wucsr);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* when multiple WOL bits are set */
+ 	if (hweight_long((unsigned long)wol) > 1) {
+@@ -3910,33 +4323,45 @@ static int lan78xx_set_suspend(struct lan78xx_net *dev, u32 wol)
+ 		temp_pmt_ctl &= ~PMT_CTL_SUS_MODE_MASK_;
+ 		temp_pmt_ctl |= PMT_CTL_SUS_MODE_0_;
+ 	}
+-	lan78xx_write_reg(dev, PMT_CTL, temp_pmt_ctl);
++	ret = lan78xx_write_reg(dev, PMT_CTL, temp_pmt_ctl);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* clear WUPS */
+-	lan78xx_read_reg(dev, PMT_CTL, &buf);
++	ret = lan78xx_read_reg(dev, PMT_CTL, &buf);
++	if (ret < 0)
++		return ret;
++
+ 	buf |= PMT_CTL_WUPS_MASK_;
+-	lan78xx_write_reg(dev, PMT_CTL, buf);
+ 
+-	lan78xx_read_reg(dev, MAC_RX, &buf);
+-	buf |= MAC_RX_RXEN_;
+-	lan78xx_write_reg(dev, MAC_RX, buf);
++	ret = lan78xx_write_reg(dev, PMT_CTL, buf);
++	if (ret < 0)
++		return ret;
+ 
+-	return 0;
++	ret = lan78xx_start_rx_path(dev);
++
++	return ret;
+ }
+ 
+ static int lan78xx_suspend(struct usb_interface *intf, pm_message_t message)
+ {
+ 	struct lan78xx_net *dev = usb_get_intfdata(intf);
+-	struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]);
+-	u32 buf;
++	bool dev_open;
+ 	int ret;
+ 
+-	if (!dev->suspend_count++) {
++	mutex_lock(&dev->dev_mutex);
++
++	netif_dbg(dev, ifdown, dev->net,
++		  "suspending: pm event %#x", message.event);
++
++	dev_open = test_bit(EVENT_DEV_OPEN, &dev->flags);
++
++	if (dev_open) {
+ 		spin_lock_irq(&dev->txq.lock);
+ 		/* don't autosuspend while transmitting */
+ 		if ((skb_queue_len(&dev->txq) ||
+ 		     skb_queue_len(&dev->txq_pend)) &&
+-			PMSG_IS_AUTO(message)) {
++		    PMSG_IS_AUTO(message)) {
+ 			spin_unlock_irq(&dev->txq.lock);
+ 			ret = -EBUSY;
+ 			goto out;
+@@ -3945,129 +4370,207 @@ static int lan78xx_suspend(struct usb_interface *intf, pm_message_t message)
+ 			spin_unlock_irq(&dev->txq.lock);
+ 		}
+ 
+-		/* stop TX & RX */
+-		ret = lan78xx_read_reg(dev, MAC_TX, &buf);
+-		buf &= ~MAC_TX_TXEN_;
+-		ret = lan78xx_write_reg(dev, MAC_TX, buf);
+-		ret = lan78xx_read_reg(dev, MAC_RX, &buf);
+-		buf &= ~MAC_RX_RXEN_;
+-		ret = lan78xx_write_reg(dev, MAC_RX, buf);
++		/* stop RX */
++		ret = lan78xx_stop_rx_path(dev);
++		if (ret < 0)
++			goto out;
++
++		ret = lan78xx_flush_rx_fifo(dev);
++		if (ret < 0)
++			goto out;
+ 
+-		/* empty out the rx and queues */
++		/* stop Tx */
++		ret = lan78xx_stop_tx_path(dev);
++		if (ret < 0)
++			goto out;
++
++		/* empty out the Rx and Tx queues */
+ 		netif_device_detach(dev->net);
+ 		lan78xx_terminate_urbs(dev);
+ 		usb_kill_urb(dev->urb_intr);
+ 
+ 		/* reattach */
+ 		netif_device_attach(dev->net);
+-	}
+ 
+-	if (test_bit(EVENT_DEV_ASLEEP, &dev->flags)) {
+ 		del_timer(&dev->stat_monitor);
+ 
+ 		if (PMSG_IS_AUTO(message)) {
+-			/* auto suspend (selective suspend) */
+-			ret = lan78xx_read_reg(dev, MAC_TX, &buf);
+-			buf &= ~MAC_TX_TXEN_;
+-			ret = lan78xx_write_reg(dev, MAC_TX, buf);
+-			ret = lan78xx_read_reg(dev, MAC_RX, &buf);
+-			buf &= ~MAC_RX_RXEN_;
+-			ret = lan78xx_write_reg(dev, MAC_RX, buf);
++			ret = lan78xx_set_auto_suspend(dev);
++			if (ret < 0)
++				goto out;
++		} else {
++			struct lan78xx_priv *pdata;
++
++			pdata = (struct lan78xx_priv *)(dev->data[0]);
++			netif_carrier_off(dev->net);
++			ret = lan78xx_set_suspend(dev, pdata->wol);
++			if (ret < 0)
++				goto out;
++		}
++	} else {
++		/* Interface is down; don't allow WOL and PHY
++		 * events to wake up the host
++		 */
++		u32 buf;
+ 
+-			ret = lan78xx_write_reg(dev, WUCSR, 0);
+-			ret = lan78xx_write_reg(dev, WUCSR2, 0);
+-			ret = lan78xx_write_reg(dev, WK_SRC, 0xFFF1FF1FUL);
++		set_bit(EVENT_DEV_ASLEEP, &dev->flags);
+ 
+-			/* set goodframe wakeup */
+-			ret = lan78xx_read_reg(dev, WUCSR, &buf);
++		ret = lan78xx_write_reg(dev, WUCSR, 0);
++		if (ret < 0)
++			goto out;
++		ret = lan78xx_write_reg(dev, WUCSR2, 0);
++		if (ret < 0)
++			goto out;
+ 
+-			buf |= WUCSR_RFE_WAKE_EN_;
+-			buf |= WUCSR_STORE_WAKE_;
++		ret = lan78xx_read_reg(dev, PMT_CTL, &buf);
++		if (ret < 0)
++			goto out;
+ 
+-			ret = lan78xx_write_reg(dev, WUCSR, buf);
++		buf &= ~PMT_CTL_RES_CLR_WKP_EN_;
++		buf |= PMT_CTL_RES_CLR_WKP_STS_;
++		buf &= ~PMT_CTL_SUS_MODE_MASK_;
++		buf |= PMT_CTL_SUS_MODE_3_;
+ 
+-			ret = lan78xx_read_reg(dev, PMT_CTL, &buf);
++		ret = lan78xx_write_reg(dev, PMT_CTL, buf);
++		if (ret < 0)
++			goto out;
+ 
+-			buf &= ~PMT_CTL_RES_CLR_WKP_EN_;
+-			buf |= PMT_CTL_RES_CLR_WKP_STS_;
++		ret = lan78xx_read_reg(dev, PMT_CTL, &buf);
++		if (ret < 0)
++			goto out;
+ 
+-			buf |= PMT_CTL_PHY_WAKE_EN_;
+-			buf |= PMT_CTL_WOL_EN_;
+-			buf &= ~PMT_CTL_SUS_MODE_MASK_;
+-			buf |= PMT_CTL_SUS_MODE_3_;
++		buf |= PMT_CTL_WUPS_MASK_;
+ 
+-			ret = lan78xx_write_reg(dev, PMT_CTL, buf);
++		ret = lan78xx_write_reg(dev, PMT_CTL, buf);
++		if (ret < 0)
++			goto out;
++	}
+ 
+-			ret = lan78xx_read_reg(dev, PMT_CTL, &buf);
++	ret = 0;
++out:
++	mutex_unlock(&dev->dev_mutex);
+ 
+-			buf |= PMT_CTL_WUPS_MASK_;
++	return ret;
++}
+ 
+-			ret = lan78xx_write_reg(dev, PMT_CTL, buf);
++static bool lan78xx_submit_deferred_urbs(struct lan78xx_net *dev)
++{
++	bool pipe_halted = false;
++	struct urb *urb;
+ 
+-			ret = lan78xx_read_reg(dev, MAC_RX, &buf);
+-			buf |= MAC_RX_RXEN_;
+-			ret = lan78xx_write_reg(dev, MAC_RX, buf);
++	while ((urb = usb_get_from_anchor(&dev->deferred))) {
++		struct sk_buff *skb = urb->context;
++		int ret;
++
++		if (!netif_device_present(dev->net) ||
++		    !netif_carrier_ok(dev->net) ||
++		    pipe_halted) {
++			usb_free_urb(urb);
++			dev_kfree_skb(skb);
++			continue;
++		}
++
++		ret = usb_submit_urb(urb, GFP_ATOMIC);
++
++		if (ret == 0) {
++			netif_trans_update(dev->net);
++			lan78xx_queue_skb(&dev->txq, skb, tx_start);
+ 		} else {
+-			lan78xx_set_suspend(dev, pdata->wol);
++			usb_free_urb(urb);
++			dev_kfree_skb(skb);
++
++			if (ret == -EPIPE) {
++				netif_stop_queue(dev->net);
++				pipe_halted = true;
++			} else if (ret == -ENODEV) {
++				netif_device_detach(dev->net);
++			}
+ 		}
+ 	}
+ 
+-	ret = 0;
+-out:
+-	return ret;
++	return pipe_halted;
+ }
+ 
+ static int lan78xx_resume(struct usb_interface *intf)
+ {
+ 	struct lan78xx_net *dev = usb_get_intfdata(intf);
+-	struct sk_buff *skb;
+-	struct urb *res;
++	bool dev_open;
+ 	int ret;
+-	u32 buf;
+ 
+-	if (!timer_pending(&dev->stat_monitor)) {
+-		dev->delta = 1;
+-		mod_timer(&dev->stat_monitor,
+-			  jiffies + STAT_UPDATE_TIMER);
+-	}
++	mutex_lock(&dev->dev_mutex);
+ 
+-	if (!--dev->suspend_count) {
+-		/* resume interrupt URBs */
+-		if (dev->urb_intr && test_bit(EVENT_DEV_OPEN, &dev->flags))
+-				usb_submit_urb(dev->urb_intr, GFP_NOIO);
++	netif_dbg(dev, ifup, dev->net, "resuming device");
++
++	dev_open = test_bit(EVENT_DEV_OPEN, &dev->flags);
++
++	if (dev_open) {
++		bool pipe_halted = false;
++
++		ret = lan78xx_flush_tx_fifo(dev);
++		if (ret < 0)
++			goto out;
++
++		if (dev->urb_intr) {
++			int ret = usb_submit_urb(dev->urb_intr, GFP_KERNEL);
+ 
+-		spin_lock_irq(&dev->txq.lock);
+-		while ((res = usb_get_from_anchor(&dev->deferred))) {
+-			skb = (struct sk_buff *)res->context;
+-			ret = usb_submit_urb(res, GFP_ATOMIC);
+ 			if (ret < 0) {
+-				dev_kfree_skb_any(skb);
+-				usb_free_urb(res);
+-				usb_autopm_put_interface_async(dev->intf);
+-			} else {
+-				netif_trans_update(dev->net);
+-				lan78xx_queue_skb(&dev->txq, skb, tx_start);
++				if (ret == -ENODEV)
++					netif_device_detach(dev->net);
++
++			netdev_warn(dev->net, "Failed to submit intr URB");
+ 			}
+ 		}
+ 
++		spin_lock_irq(&dev->txq.lock);
++
++		if (netif_device_present(dev->net)) {
++			pipe_halted = lan78xx_submit_deferred_urbs(dev);
++
++			if (pipe_halted)
++				lan78xx_defer_kevent(dev, EVENT_TX_HALT);
++		}
++
+ 		clear_bit(EVENT_DEV_ASLEEP, &dev->flags);
++
+ 		spin_unlock_irq(&dev->txq.lock);
+ 
+-		if (test_bit(EVENT_DEV_OPEN, &dev->flags)) {
+-			if (!(skb_queue_len(&dev->txq) >= dev->tx_qlen))
+-				netif_start_queue(dev->net);
+-			tasklet_schedule(&dev->bh);
++		if (!pipe_halted &&
++		    netif_device_present(dev->net) &&
++		    (skb_queue_len(&dev->txq) < dev->tx_qlen))
++			netif_start_queue(dev->net);
++
++		ret = lan78xx_start_tx_path(dev);
++		if (ret < 0)
++			goto out;
++
++		tasklet_schedule(&dev->bh);
++
++		if (!timer_pending(&dev->stat_monitor)) {
++			dev->delta = 1;
++			mod_timer(&dev->stat_monitor,
++				  jiffies + STAT_UPDATE_TIMER);
+ 		}
++
++	} else {
++		clear_bit(EVENT_DEV_ASLEEP, &dev->flags);
+ 	}
+ 
+ 	ret = lan78xx_write_reg(dev, WUCSR2, 0);
++	if (ret < 0)
++		goto out;
+ 	ret = lan78xx_write_reg(dev, WUCSR, 0);
++	if (ret < 0)
++		goto out;
+ 	ret = lan78xx_write_reg(dev, WK_SRC, 0xFFF1FF1FUL);
++	if (ret < 0)
++		goto out;
+ 
+ 	ret = lan78xx_write_reg(dev, WUCSR2, WUCSR2_NS_RCD_ |
+ 					     WUCSR2_ARP_RCD_ |
+ 					     WUCSR2_IPV6_TCPSYN_RCD_ |
+ 					     WUCSR2_IPV4_TCPSYN_RCD_);
++	if (ret < 0)
++		goto out;
+ 
+ 	ret = lan78xx_write_reg(dev, WUCSR, WUCSR_EEE_TX_WAKE_ |
+ 					    WUCSR_EEE_RX_WAKE_ |
+@@ -4076,23 +4579,32 @@ static int lan78xx_resume(struct usb_interface *intf)
+ 					    WUCSR_WUFR_ |
+ 					    WUCSR_MPR_ |
+ 					    WUCSR_BCST_FR_);
++	if (ret < 0)
++		goto out;
+ 
+-	ret = lan78xx_read_reg(dev, MAC_TX, &buf);
+-	buf |= MAC_TX_TXEN_;
+-	ret = lan78xx_write_reg(dev, MAC_TX, buf);
++	ret = 0;
++out:
++	mutex_unlock(&dev->dev_mutex);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int lan78xx_reset_resume(struct usb_interface *intf)
+ {
+ 	struct lan78xx_net *dev = usb_get_intfdata(intf);
++	int ret;
+ 
+-	lan78xx_reset(dev);
++	netif_dbg(dev, ifup, dev->net, "(reset) resuming device");
++
++	ret = lan78xx_reset(dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	phy_start(dev->net->phydev);
+ 
+-	return lan78xx_resume(intf);
++	ret = lan78xx_resume(intf);
++
++	return ret;
+ }
+ 
+ static const struct usb_device_id products[] = {
+diff --git a/drivers/tty/serial/Kconfig b/drivers/tty/serial/Kconfig
+index 28f22e58639c6..bd30ae9751bf5 100644
+--- a/drivers/tty/serial/Kconfig
++++ b/drivers/tty/serial/Kconfig
+@@ -343,6 +343,7 @@ config SERIAL_MAX310X
+ 	depends on SPI_MASTER
+ 	select SERIAL_CORE
+ 	select REGMAP_SPI if SPI_MASTER
++	select REGMAP_I2C if I2C
+ 	help
+ 	  This selects support for an advanced UART from Maxim (Dallas).
+ 	  Supported ICs are MAX3107, MAX3108, MAX3109, MAX14830.
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 14537878f9855..2f88eae8a55a1 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -14,9 +14,10 @@
+ #include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/gpio/driver.h>
++#include <linux/i2c.h>
+ #include <linux/module.h>
+-#include <linux/of.h>
+-#include <linux/of_device.h>
++#include <linux/mod_devicetable.h>
++#include <linux/property.h>
+ #include <linux/regmap.h>
+ #include <linux/serial_core.h>
+ #include <linux/serial.h>
+@@ -72,7 +73,8 @@
+ #define MAX310X_GLOBALCMD_REG		MAX310X_REG_1F /* Global Command (WO) */
+ 
+ /* Extended registers */
+-#define MAX310X_REVID_EXTREG		MAX310X_REG_05 /* Revision ID */
++#define MAX310X_SPI_REVID_EXTREG	MAX310X_REG_05 /* Revision ID */
++#define MAX310X_I2C_REVID_EXTREG	(0x25) /* Revision ID */
+ 
+ /* IRQ register bits */
+ #define MAX310X_IRQ_LSR_BIT		(1 << 0) /* LSR interrupt */
+@@ -235,6 +237,10 @@
+ #define MAX310x_REV_MASK		(0xf8)
+ #define MAX310X_WRITE_BIT		0x80
+ 
++/* Port startup definitions */
++#define MAX310X_PORT_STARTUP_WAIT_RETRIES	20 /* Number of retries */
++#define MAX310X_PORT_STARTUP_WAIT_DELAY_MS	10 /* Delay between retries */
++
+ /* Crystal-related definitions */
+ #define MAX310X_XTAL_WAIT_RETRIES	20 /* Number of retries */
+ #define MAX310X_XTAL_WAIT_DELAY_MS	10 /* Delay between retries */
+@@ -249,7 +255,17 @@
+ #define MAX14830_BRGCFG_CLKDIS_BIT	(1 << 6) /* Clock Disable */
+ #define MAX14830_REV_ID			(0xb0)
+ 
++struct max310x_if_cfg {
++	int (*extended_reg_enable)(struct device *dev, bool enable);
++
++	unsigned int rev_id_reg;
++};
++
+ struct max310x_devtype {
++	struct {
++		unsigned short min;
++		unsigned short max;
++	} slave_addr;
+ 	char	name[9];
+ 	int	nr;
+ 	u8	mode1;
+@@ -262,16 +278,16 @@ struct max310x_one {
+ 	struct work_struct	tx_work;
+ 	struct work_struct	md_work;
+ 	struct work_struct	rs_work;
++	struct regmap		*regmap;
+ 
+-	u8 wr_header;
+-	u8 rd_header;
+ 	u8 rx_buf[MAX310X_FIFO_SIZE];
+ };
+ #define to_max310x_port(_port) \
+ 	container_of(_port, struct max310x_one, port)
+ 
+ struct max310x_port {
+-	struct max310x_devtype	*devtype;
++	const struct max310x_devtype *devtype;
++	const struct max310x_if_cfg *if_cfg;
+ 	struct regmap		*regmap;
+ 	struct clk		*clk;
+ #ifdef CONFIG_GPIOLIB
+@@ -293,26 +309,26 @@ static DECLARE_BITMAP(max310x_lines, MAX310X_UART_NRMAX);
+ 
+ static u8 max310x_port_read(struct uart_port *port, u8 reg)
+ {
+-	struct max310x_port *s = dev_get_drvdata(port->dev);
++	struct max310x_one *one = to_max310x_port(port);
+ 	unsigned int val = 0;
+ 
+-	regmap_read(s->regmap, port->iobase + reg, &val);
++	regmap_read(one->regmap, reg, &val);
+ 
+ 	return val;
+ }
+ 
+ static void max310x_port_write(struct uart_port *port, u8 reg, u8 val)
+ {
+-	struct max310x_port *s = dev_get_drvdata(port->dev);
++	struct max310x_one *one = to_max310x_port(port);
+ 
+-	regmap_write(s->regmap, port->iobase + reg, val);
++	regmap_write(one->regmap, reg, val);
+ }
+ 
+ static void max310x_port_update(struct uart_port *port, u8 reg, u8 mask, u8 val)
+ {
+-	struct max310x_port *s = dev_get_drvdata(port->dev);
++	struct max310x_one *one = to_max310x_port(port);
+ 
+-	regmap_update_bits(s->regmap, port->iobase + reg, mask, val);
++	regmap_update_bits(one->regmap, reg, mask, val);
+ }
+ 
+ static int max3107_detect(struct device *dev)
+@@ -361,13 +377,12 @@ static int max3109_detect(struct device *dev)
+ 	unsigned int val = 0;
+ 	int ret;
+ 
+-	ret = regmap_write(s->regmap, MAX310X_GLOBALCMD_REG,
+-			   MAX310X_EXTREG_ENBL);
++	ret = s->if_cfg->extended_reg_enable(dev, true);
+ 	if (ret)
+ 		return ret;
+ 
+-	regmap_read(s->regmap, MAX310X_REVID_EXTREG, &val);
+-	regmap_write(s->regmap, MAX310X_GLOBALCMD_REG, MAX310X_EXTREG_DSBL);
++	regmap_read(s->regmap, s->if_cfg->rev_id_reg, &val);
++	s->if_cfg->extended_reg_enable(dev, false);
+ 	if (((val & MAX310x_REV_MASK) != MAX3109_REV_ID)) {
+ 		dev_err(dev,
+ 			"%s ID 0x%02x does not match\n", s->devtype->name, val);
+@@ -392,13 +407,12 @@ static int max14830_detect(struct device *dev)
+ 	unsigned int val = 0;
+ 	int ret;
+ 
+-	ret = regmap_write(s->regmap, MAX310X_GLOBALCMD_REG,
+-			   MAX310X_EXTREG_ENBL);
++	ret = s->if_cfg->extended_reg_enable(dev, true);
+ 	if (ret)
+ 		return ret;
+ 	
+-	regmap_read(s->regmap, MAX310X_REVID_EXTREG, &val);
+-	regmap_write(s->regmap, MAX310X_GLOBALCMD_REG, MAX310X_EXTREG_DSBL);
++	regmap_read(s->regmap, s->if_cfg->rev_id_reg, &val);
++	s->if_cfg->extended_reg_enable(dev, false);
+ 	if (((val & MAX310x_REV_MASK) != MAX14830_REV_ID)) {
+ 		dev_err(dev,
+ 			"%s ID 0x%02x does not match\n", s->devtype->name, val);
+@@ -423,6 +437,10 @@ static const struct max310x_devtype max3107_devtype = {
+ 	.mode1	= MAX310X_MODE1_AUTOSLEEP_BIT | MAX310X_MODE1_IRQSEL_BIT,
+ 	.detect	= max3107_detect,
+ 	.power	= max310x_power,
++	.slave_addr	= {
++		.min = 0x2c,
++		.max = 0x2f,
++	},
+ };
+ 
+ static const struct max310x_devtype max3108_devtype = {
+@@ -431,6 +449,10 @@ static const struct max310x_devtype max3108_devtype = {
+ 	.mode1	= MAX310X_MODE1_AUTOSLEEP_BIT,
+ 	.detect	= max3108_detect,
+ 	.power	= max310x_power,
++	.slave_addr	= {
++		.min = 0x60,
++		.max = 0x6f,
++	},
+ };
+ 
+ static const struct max310x_devtype max3109_devtype = {
+@@ -439,6 +461,10 @@ static const struct max310x_devtype max3109_devtype = {
+ 	.mode1	= MAX310X_MODE1_AUTOSLEEP_BIT,
+ 	.detect	= max3109_detect,
+ 	.power	= max310x_power,
++	.slave_addr	= {
++		.min = 0x60,
++		.max = 0x6f,
++	},
+ };
+ 
+ static const struct max310x_devtype max14830_devtype = {
+@@ -447,11 +473,15 @@ static const struct max310x_devtype max14830_devtype = {
+ 	.mode1	= MAX310X_MODE1_IRQSEL_BIT,
+ 	.detect	= max14830_detect,
+ 	.power	= max14830_power,
++	.slave_addr	= {
++		.min = 0x60,
++		.max = 0x6f,
++	},
+ };
+ 
+ static bool max310x_reg_writeable(struct device *dev, unsigned int reg)
+ {
+-	switch (reg & 0x1f) {
++	switch (reg) {
+ 	case MAX310X_IRQSTS_REG:
+ 	case MAX310X_LSR_IRQSTS_REG:
+ 	case MAX310X_SPCHR_IRQSTS_REG:
+@@ -468,7 +498,7 @@ static bool max310x_reg_writeable(struct device *dev, unsigned int reg)
+ 
+ static bool max310x_reg_volatile(struct device *dev, unsigned int reg)
+ {
+-	switch (reg & 0x1f) {
++	switch (reg) {
+ 	case MAX310X_RHR_REG:
+ 	case MAX310X_IRQSTS_REG:
+ 	case MAX310X_LSR_IRQSTS_REG:
+@@ -490,7 +520,7 @@ static bool max310x_reg_volatile(struct device *dev, unsigned int reg)
+ 
+ static bool max310x_reg_precious(struct device *dev, unsigned int reg)
+ {
+-	switch (reg & 0x1f) {
++	switch (reg) {
+ 	case MAX310X_RHR_REG:
+ 	case MAX310X_IRQSTS_REG:
+ 	case MAX310X_SPCHR_IRQSTS_REG:
+@@ -503,6 +533,11 @@ static bool max310x_reg_precious(struct device *dev, unsigned int reg)
+ 	return false;
+ }
+ 
++static bool max310x_reg_noinc(struct device *dev, unsigned int reg)
++{
++	return reg == MAX310X_RHR_REG;
++}
++
+ static int max310x_set_baud(struct uart_port *port, int baud)
+ {
+ 	unsigned int mode = 0, div = 0, frac = 0, c = 0, F = 0;
+@@ -556,7 +591,7 @@ static int max310x_update_best_err(unsigned long f, long *besterr)
+ 	return 1;
+ }
+ 
+-static int max310x_set_ref_clk(struct device *dev, struct max310x_port *s,
++static s32 max310x_set_ref_clk(struct device *dev, struct max310x_port *s,
+ 			       unsigned long freq, bool xtal)
+ {
+ 	unsigned int div, clksrc, pllcfg = 0;
+@@ -626,40 +661,25 @@ static int max310x_set_ref_clk(struct device *dev, struct max310x_port *s,
+ 		} while (!stable && (++try < MAX310X_XTAL_WAIT_RETRIES));
+ 
+ 		if (!stable)
+-			dev_warn(dev, "clock is not stable yet\n");
++			return dev_err_probe(dev, -EAGAIN,
++					     "clock is not stable\n");
+ 	}
+ 
+-	return (int)bestfreq;
++	return bestfreq;
+ }
+ 
+ static void max310x_batch_write(struct uart_port *port, u8 *txbuf, unsigned int len)
+ {
+ 	struct max310x_one *one = to_max310x_port(port);
+-	struct spi_transfer xfer[] = {
+-		{
+-			.tx_buf = &one->wr_header,
+-			.len = sizeof(one->wr_header),
+-		}, {
+-			.tx_buf = txbuf,
+-			.len = len,
+-		}
+-	};
+-	spi_sync_transfer(to_spi_device(port->dev), xfer, ARRAY_SIZE(xfer));
++
++	regmap_noinc_write(one->regmap, MAX310X_THR_REG, txbuf, len);
+ }
+ 
+ static void max310x_batch_read(struct uart_port *port, u8 *rxbuf, unsigned int len)
+ {
+ 	struct max310x_one *one = to_max310x_port(port);
+-	struct spi_transfer xfer[] = {
+-		{
+-			.tx_buf = &one->rd_header,
+-			.len = sizeof(one->rd_header),
+-		}, {
+-			.rx_buf = rxbuf,
+-			.len = len,
+-		}
+-	};
+-	spi_sync_transfer(to_spi_device(port->dev), xfer, ARRAY_SIZE(xfer));
++
++	regmap_noinc_read(one->regmap, MAX310X_RHR_REG, rxbuf, len);
+ }
+ 
+ static void max310x_handle_rx(struct uart_port *port, unsigned int rxlen)
+@@ -1261,16 +1281,18 @@ static int max310x_gpio_set_config(struct gpio_chip *chip, unsigned int offset,
+ }
+ #endif
+ 
+-static int max310x_probe(struct device *dev, struct max310x_devtype *devtype,
+-			 struct regmap *regmap, int irq)
++static int max310x_probe(struct device *dev, const struct max310x_devtype *devtype,
++			 const struct max310x_if_cfg *if_cfg,
++			 struct regmap *regmaps[], int irq)
+ {
+-	int i, ret, fmin, fmax, freq, uartclk;
+-	struct clk *clk_osc, *clk_xtal;
++	int i, ret, fmin, fmax, freq;
+ 	struct max310x_port *s;
+-	bool xtal = false;
++	s32 uartclk = 0;
++	bool xtal;
+ 
+-	if (IS_ERR(regmap))
+-		return PTR_ERR(regmap);
++	for (i = 0; i < devtype->nr; i++)
++		if (IS_ERR(regmaps[i]))
++			return PTR_ERR(regmaps[i]);
+ 
+ 	/* Alloc port structure */
+ 	s = devm_kzalloc(dev, struct_size(s, p, devtype->nr), GFP_KERNEL);
+@@ -1279,23 +1301,20 @@ static int max310x_probe(struct device *dev, struct max310x_devtype *devtype,
+ 		return -ENOMEM;
+ 	}
+ 
+-	clk_osc = devm_clk_get(dev, "osc");
+-	clk_xtal = devm_clk_get(dev, "xtal");
+-	if (!IS_ERR(clk_osc)) {
+-		s->clk = clk_osc;
+-		fmin = 500000;
+-		fmax = 35000000;
+-	} else if (!IS_ERR(clk_xtal)) {
+-		s->clk = clk_xtal;
+-		fmin = 1000000;
+-		fmax = 4000000;
+-		xtal = true;
+-	} else if (PTR_ERR(clk_osc) == -EPROBE_DEFER ||
+-		   PTR_ERR(clk_xtal) == -EPROBE_DEFER) {
+-		return -EPROBE_DEFER;
++	/* Always ask for fixed clock rate from a property. */
++	device_property_read_u32(dev, "clock-frequency", &uartclk);
++
++	s->clk = devm_clk_get_optional(dev, "osc");
++	if (IS_ERR(s->clk))
++		return PTR_ERR(s->clk);
++	if (s->clk) {
++		xtal = false;
+ 	} else {
+-		dev_err(dev, "Cannot get clock\n");
+-		return -EINVAL;
++		s->clk = devm_clk_get_optional(dev, "xtal");
++		if (IS_ERR(s->clk))
++			return PTR_ERR(s->clk);
++
++		xtal = true;
+ 	}
+ 
+ 	ret = clk_prepare_enable(s->clk);
+@@ -1303,14 +1322,31 @@ static int max310x_probe(struct device *dev, struct max310x_devtype *devtype,
+ 		return ret;
+ 
+ 	freq = clk_get_rate(s->clk);
++	if (freq == 0)
++		freq = uartclk;
++	if (freq == 0) {
++		dev_err(dev, "Cannot get clock rate\n");
++		ret = -EINVAL;
++		goto out_clk;
++	}
++
++	if (xtal) {
++		fmin = 1000000;
++		fmax = 4000000;
++	} else {
++		fmin = 500000;
++		fmax = 35000000;
++	}
++
+ 	/* Check frequency limits */
+ 	if (freq < fmin || freq > fmax) {
+ 		ret = -ERANGE;
+ 		goto out_clk;
+ 	}
+ 
+-	s->regmap = regmap;
++	s->regmap = regmaps[0];
+ 	s->devtype = devtype;
++	s->if_cfg = if_cfg;
+ 	dev_set_drvdata(dev, s);
+ 
+ 	/* Check device to ensure we are talking to what we expect */
+@@ -1319,25 +1355,38 @@ static int max310x_probe(struct device *dev, struct max310x_devtype *devtype,
+ 		goto out_clk;
+ 
+ 	for (i = 0; i < devtype->nr; i++) {
+-		unsigned int offs = i << 5;
++		bool started = false;
++		unsigned int try = 0, val = 0;
+ 
+ 		/* Reset port */
+-		regmap_write(s->regmap, MAX310X_MODE2_REG + offs,
++		regmap_write(regmaps[i], MAX310X_MODE2_REG,
+ 			     MAX310X_MODE2_RST_BIT);
+ 		/* Clear port reset */
+-		regmap_write(s->regmap, MAX310X_MODE2_REG + offs, 0);
++		regmap_write(regmaps[i], MAX310X_MODE2_REG, 0);
+ 
+ 		/* Wait for port startup */
+ 		do {
+-			regmap_read(s->regmap,
+-				    MAX310X_BRGDIVLSB_REG + offs, &ret);
+-		} while (ret != 0x01);
++			msleep(MAX310X_PORT_STARTUP_WAIT_DELAY_MS);
++			regmap_read(regmaps[i], MAX310X_BRGDIVLSB_REG, &val);
+ 
+-		regmap_write(s->regmap, MAX310X_MODE1_REG + offs,
+-			     devtype->mode1);
++			if (val == 0x01)
++				started = true;
++		} while (!started && (++try < MAX310X_PORT_STARTUP_WAIT_RETRIES));
++
++		if (!started) {
++			ret = dev_err_probe(dev, -EAGAIN, "port reset failed\n");
++			goto out_uart;
++		}
++
++		regmap_write(regmaps[i], MAX310X_MODE1_REG, devtype->mode1);
+ 	}
+ 
+ 	uartclk = max310x_set_ref_clk(dev, s, freq, xtal);
++	if (uartclk < 0) {
++		ret = uartclk;
++		goto out_uart;
++	}
++
+ 	dev_dbg(dev, "Reference clock set to %i Hz\n", uartclk);
+ 
+ 	for (i = 0; i < devtype->nr; i++) {
+@@ -1357,11 +1406,13 @@ static int max310x_probe(struct device *dev, struct max310x_devtype *devtype,
+ 		s->p[i].port.fifosize	= MAX310X_FIFO_SIZE;
+ 		s->p[i].port.flags	= UPF_FIXED_TYPE | UPF_LOW_LATENCY;
+ 		s->p[i].port.iotype	= UPIO_PORT;
+-		s->p[i].port.iobase	= i * 0x20;
++		s->p[i].port.iobase	= i;
+ 		s->p[i].port.membase	= (void __iomem *)~0;
+ 		s->p[i].port.uartclk	= uartclk;
+ 		s->p[i].port.rs485_config = max310x_rs485_config;
+ 		s->p[i].port.ops	= &max310x_ops;
++		s->p[i].regmap		= regmaps[i];
++
+ 		/* Disable all interrupts */
+ 		max310x_port_write(&s->p[i].port, MAX310X_IRQEN_REG, 0);
+ 		/* Clear IRQ status register */
+@@ -1372,10 +1423,6 @@ static int max310x_probe(struct device *dev, struct max310x_devtype *devtype,
+ 		INIT_WORK(&s->p[i].md_work, max310x_md_proc);
+ 		/* Initialize queue for changing RS485 mode */
+ 		INIT_WORK(&s->p[i].rs_work, max310x_rs_proc);
+-		/* Initialize SPI-transfer buffers */
+-		s->p[i].wr_header = (s->p[i].port.iobase + MAX310X_THR_REG) |
+-				    MAX310X_WRITE_BIT;
+-		s->p[i].rd_header = (s->p[i].port.iobase + MAX310X_RHR_REG);
+ 
+ 		/* Register port */
+ 		ret = uart_add_one_port(&max310x_uart, &s->p[i].port);
+@@ -1462,16 +1509,35 @@ static struct regmap_config regcfg = {
+ 	.val_bits = 8,
+ 	.write_flag_mask = MAX310X_WRITE_BIT,
+ 	.cache_type = REGCACHE_RBTREE,
++	.max_register = MAX310X_REG_1F,
+ 	.writeable_reg = max310x_reg_writeable,
+ 	.volatile_reg = max310x_reg_volatile,
+ 	.precious_reg = max310x_reg_precious,
++	.writeable_noinc_reg = max310x_reg_noinc,
++	.readable_noinc_reg = max310x_reg_noinc,
++	.max_raw_read = MAX310X_FIFO_SIZE,
++	.max_raw_write = MAX310X_FIFO_SIZE,
+ };
+ 
+ #ifdef CONFIG_SPI_MASTER
++static int max310x_spi_extended_reg_enable(struct device *dev, bool enable)
++{
++	struct max310x_port *s = dev_get_drvdata(dev);
++
++	return regmap_write(s->regmap, MAX310X_GLOBALCMD_REG,
++			    enable ? MAX310X_EXTREG_ENBL : MAX310X_EXTREG_DSBL);
++}
++
++static const struct max310x_if_cfg __maybe_unused max310x_spi_if_cfg = {
++	.extended_reg_enable = max310x_spi_extended_reg_enable,
++	.rev_id_reg = MAX310X_SPI_REVID_EXTREG,
++};
++
+ static int max310x_spi_probe(struct spi_device *spi)
+ {
+-	struct max310x_devtype *devtype;
+-	struct regmap *regmap;
++	const struct max310x_devtype *devtype;
++	struct regmap *regmaps[4];
++	unsigned int i;
+ 	int ret;
+ 
+ 	/* Setup SPI bus */
+@@ -1482,23 +1548,18 @@ static int max310x_spi_probe(struct spi_device *spi)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (spi->dev.of_node) {
+-		const struct of_device_id *of_id =
+-			of_match_device(max310x_dt_ids, &spi->dev);
+-		if (!of_id)
+-			return -ENODEV;
+-
+-		devtype = (struct max310x_devtype *)of_id->data;
+-	} else {
+-		const struct spi_device_id *id_entry = spi_get_device_id(spi);
++	devtype = device_get_match_data(&spi->dev);
++	if (!devtype)
++		devtype = (struct max310x_devtype *)spi_get_device_id(spi)->driver_data;
+ 
+-		devtype = (struct max310x_devtype *)id_entry->driver_data;
++	for (i = 0; i < devtype->nr; i++) {
++		u8 port_mask = i * 0x20;
++		regcfg.read_flag_mask = port_mask;
++		regcfg.write_flag_mask = port_mask | MAX310X_WRITE_BIT;
++		regmaps[i] = devm_regmap_init_spi(spi, &regcfg);
+ 	}
+ 
+-	regcfg.max_register = devtype->nr * 0x20 - 1;
+-	regmap = devm_regmap_init_spi(spi, &regcfg);
+-
+-	return max310x_probe(&spi->dev, devtype, regmap, spi->irq);
++	return max310x_probe(&spi->dev, devtype, &max310x_spi_if_cfg, regmaps, spi->irq);
+ }
+ 
+ static int max310x_spi_remove(struct spi_device *spi)
+@@ -1518,7 +1579,7 @@ MODULE_DEVICE_TABLE(spi, max310x_id_table);
+ static struct spi_driver max310x_spi_driver = {
+ 	.driver = {
+ 		.name		= MAX310X_NAME,
+-		.of_match_table	= of_match_ptr(max310x_dt_ids),
++		.of_match_table	= max310x_dt_ids,
+ 		.pm		= &max310x_pm_ops,
+ 	},
+ 	.probe		= max310x_spi_probe,
+@@ -1527,6 +1588,101 @@ static struct spi_driver max310x_spi_driver = {
+ };
+ #endif
+ 
++#ifdef CONFIG_I2C
++static int max310x_i2c_extended_reg_enable(struct device *dev, bool enable)
++{
++	return 0;
++}
++
++static struct regmap_config regcfg_i2c = {
++	.reg_bits = 8,
++	.val_bits = 8,
++	.cache_type = REGCACHE_RBTREE,
++	.writeable_reg = max310x_reg_writeable,
++	.volatile_reg = max310x_reg_volatile,
++	.precious_reg = max310x_reg_precious,
++	.max_register = MAX310X_I2C_REVID_EXTREG,
++	.writeable_noinc_reg = max310x_reg_noinc,
++	.readable_noinc_reg = max310x_reg_noinc,
++	.max_raw_read = MAX310X_FIFO_SIZE,
++	.max_raw_write = MAX310X_FIFO_SIZE,
++};
++
++static const struct max310x_if_cfg max310x_i2c_if_cfg = {
++	.extended_reg_enable = max310x_i2c_extended_reg_enable,
++	.rev_id_reg = MAX310X_I2C_REVID_EXTREG,
++};
++
++static unsigned short max310x_i2c_slave_addr(unsigned short addr,
++					     unsigned int nr)
++{
++	/*
++	 * For MAX14830 and MAX3109, the slave address depends on what the
++	 * A0 and A1 pins are tied to.
++	 * See Table I2C Address Map of the datasheet.
++	 * Based on that table, the following formulas were determined.
++	 * UART1 - UART0 = 0x10
++	 * UART2 - UART1 = 0x20 + 0x10
++	 * UART3 - UART2 = 0x10
++	 */
++
++	addr -= nr * 0x10;
++
++	if (nr >= 2)
++		addr -= 0x20;
++
++	return addr;
++}
++
++static int max310x_i2c_probe(struct i2c_client *client)
++{
++	const struct max310x_devtype *devtype =
++			device_get_match_data(&client->dev);
++	struct i2c_client *port_client;
++	struct regmap *regmaps[4];
++	unsigned int i;
++	u8 port_addr;
++
++	if (client->addr < devtype->slave_addr.min ||
++		client->addr > devtype->slave_addr.max)
++		return dev_err_probe(&client->dev, -EINVAL,
++				     "Slave addr 0x%x outside of range [0x%x, 0x%x]\n",
++				     client->addr, devtype->slave_addr.min,
++				     devtype->slave_addr.max);
++
++	regmaps[0] = devm_regmap_init_i2c(client, &regcfg_i2c);
++
++	for (i = 1; i < devtype->nr; i++) {
++		port_addr = max310x_i2c_slave_addr(client->addr, i);
++		port_client = devm_i2c_new_dummy_device(&client->dev,
++							client->adapter,
++							port_addr);
++
++		regmaps[i] = devm_regmap_init_i2c(port_client, &regcfg_i2c);
++	}
++
++	return max310x_probe(&client->dev, devtype, &max310x_i2c_if_cfg,
++			     regmaps, client->irq);
++}
++
++static int max310x_i2c_remove(struct i2c_client *client)
++{
++	max310x_remove(&client->dev);
++
++	return 0;
++}
++
++static struct i2c_driver max310x_i2c_driver = {
++	.driver = {
++		.name		= MAX310X_NAME,
++		.of_match_table	= max310x_dt_ids,
++		.pm		= &max310x_pm_ops,
++	},
++	.probe_new	= max310x_i2c_probe,
++	.remove		= max310x_i2c_remove,
++};
++#endif
++
+ static int __init max310x_uart_init(void)
+ {
+ 	int ret;
+@@ -1540,15 +1696,35 @@ static int __init max310x_uart_init(void)
+ #ifdef CONFIG_SPI_MASTER
+ 	ret = spi_register_driver(&max310x_spi_driver);
+ 	if (ret)
+-		uart_unregister_driver(&max310x_uart);
++		goto err_spi_register;
++#endif
++
++#ifdef CONFIG_I2C
++	ret = i2c_add_driver(&max310x_i2c_driver);
++	if (ret)
++		goto err_i2c_register;
++#endif
++
++	return 0;
++
++#ifdef CONFIG_I2C
++err_i2c_register:
++	spi_unregister_driver(&max310x_spi_driver);
+ #endif
+ 
++err_spi_register:
++	uart_unregister_driver(&max310x_uart);
++
+ 	return ret;
+ }
+ module_init(max310x_uart_init);
+ 
+ static void __exit max310x_uart_exit(void)
+ {
++#ifdef CONFIG_I2C
++	i2c_del_driver(&max310x_i2c_driver);
++#endif
++
+ #ifdef CONFIG_SPI_MASTER
+ 	spi_unregister_driver(&max310x_spi_driver);
+ #endif
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index eb70f07e3623a..4fa387e447f08 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -2059,16 +2059,13 @@ int xhci_is_vendor_info_code(struct xhci_hcd *xhci, unsigned int trb_comp_code)
+ 	return 0;
+ }
+ 
+-static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td,
+-	struct xhci_transfer_event *event, struct xhci_virt_ep *ep)
++static int finish_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
++		     struct xhci_ring *ep_ring, struct xhci_td *td,
++		     u32 trb_comp_code)
+ {
+ 	struct xhci_ep_ctx *ep_ctx;
+-	struct xhci_ring *ep_ring;
+-	u32 trb_comp_code;
+ 
+-	ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer));
+ 	ep_ctx = xhci_get_ep_ctx(xhci, ep->vdev->out_ctx, ep->ep_index);
+-	trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
+ 
+ 	if (trb_comp_code == COMP_STOPPED_LENGTH_INVALID ||
+ 			trb_comp_code == COMP_STOPPED ||
+@@ -2099,8 +2096,9 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 					     EP_HARD_RESET);
+ 	} else {
+ 		/* Update ring dequeue pointer */
+-		while (ep_ring->dequeue != td->last_trb)
+-			inc_deq(xhci, ep_ring);
++		ep_ring->dequeue = td->last_trb;
++		ep_ring->deq_seg = td->last_trb_seg;
++		ep_ring->num_trbs_free += td->num_trbs - 1;
+ 		inc_deq(xhci, ep_ring);
+ 	}
+ 
+@@ -2125,9 +2123,9 @@ static int sum_trb_lengths(struct xhci_hcd *xhci, struct xhci_ring *ring,
+ /*
+  * Process control tds, update urb status and actual_length.
+  */
+-static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td,
+-	union xhci_trb *ep_trb, struct xhci_transfer_event *event,
+-	struct xhci_virt_ep *ep)
++static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
++		struct xhci_ring *ep_ring,  struct xhci_td *td,
++			   union xhci_trb *ep_trb, struct xhci_transfer_event *event)
+ {
+ 	struct xhci_ep_ctx *ep_ctx;
+ 	u32 trb_comp_code;
+@@ -2215,15 +2213,15 @@ static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 		td->urb->actual_length = requested;
+ 
+ finish_td:
+-	return finish_td(xhci, td, event, ep);
++	return finish_td(xhci, ep, ep_ring, td, trb_comp_code);
+ }
+ 
+ /*
+  * Process isochronous tds, update urb packet status and actual_length.
+  */
+-static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+-	union xhci_trb *ep_trb, struct xhci_transfer_event *event,
+-	struct xhci_virt_ep *ep)
++static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
++		struct xhci_ring *ep_ring, struct xhci_td *td,
++		union xhci_trb *ep_trb, struct xhci_transfer_event *event)
+ {
+ 	struct urb_priv *urb_priv;
+ 	int idx;
+@@ -2246,6 +2244,9 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 	/* handle completion code */
+ 	switch (trb_comp_code) {
+ 	case COMP_SUCCESS:
++		/* Don't overwrite status if TD had an error, see xHCI 4.9.1 */
++		if (td->error_mid_td)
++			break;
+ 		if (remaining) {
+ 			frame->status = short_framestatus;
+ 			if (xhci->quirks & XHCI_TRUST_TX_LENGTH)
+@@ -2261,9 +2262,13 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 	case COMP_BANDWIDTH_OVERRUN_ERROR:
+ 		frame->status = -ECOMM;
+ 		break;
+-	case COMP_ISOCH_BUFFER_OVERRUN:
+ 	case COMP_BABBLE_DETECTED_ERROR:
++		sum_trbs_for_length = true;
++		fallthrough;
++	case COMP_ISOCH_BUFFER_OVERRUN:
+ 		frame->status = -EOVERFLOW;
++		if (ep_trb != td->last_trb)
++			td->error_mid_td = true;
+ 		break;
+ 	case COMP_INCOMPATIBLE_DEVICE_ERROR:
+ 	case COMP_STALL_ERROR:
+@@ -2271,8 +2276,9 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 		break;
+ 	case COMP_USB_TRANSACTION_ERROR:
+ 		frame->status = -EPROTO;
++		sum_trbs_for_length = true;
+ 		if (ep_trb != td->last_trb)
+-			return 0;
++			td->error_mid_td = true;
+ 		break;
+ 	case COMP_STOPPED:
+ 		sum_trbs_for_length = true;
+@@ -2292,6 +2298,9 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 		break;
+ 	}
+ 
++	if (td->urb_length_set)
++		goto finish_td;
++
+ 	if (sum_trbs_for_length)
+ 		frame->actual_length = sum_trb_lengths(xhci, ep->ring, ep_trb) +
+ 			ep_trb_len - remaining;
+@@ -2300,7 +2309,15 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 
+ 	td->urb->actual_length += frame->actual_length;
+ 
+-	return finish_td(xhci, td, event, ep);
++finish_td:
++	/* Don't give back TD yet if we encountered an error mid TD */
++	if (td->error_mid_td && ep_trb != td->last_trb) {
++		xhci_dbg(xhci, "Error mid isoc TD, wait for final completion event\n");
++		td->urb_length_set = true;
++		return 0;
++	}
++
++	return finish_td(xhci, ep, ep_ring, td, trb_comp_code);
+ }
+ 
+ static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+@@ -2321,8 +2338,9 @@ static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 	frame->actual_length = 0;
+ 
+ 	/* Update ring dequeue pointer */
+-	while (ep->ring->dequeue != td->last_trb)
+-		inc_deq(xhci, ep->ring);
++	ep->ring->dequeue = td->last_trb;
++	ep->ring->deq_seg = td->last_trb_seg;
++	ep->ring->num_trbs_free += td->num_trbs - 1;
+ 	inc_deq(xhci, ep->ring);
+ 
+ 	return xhci_td_cleanup(xhci, td, ep->ring, status);
+@@ -2331,17 +2349,15 @@ static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ /*
+  * Process bulk and interrupt tds, update urb status and actual_length.
+  */
+-static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td,
+-	union xhci_trb *ep_trb, struct xhci_transfer_event *event,
+-	struct xhci_virt_ep *ep)
++static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
++		struct xhci_ring *ep_ring, struct xhci_td *td,
++		union xhci_trb *ep_trb, struct xhci_transfer_event *event)
+ {
+ 	struct xhci_slot_ctx *slot_ctx;
+-	struct xhci_ring *ep_ring;
+ 	u32 trb_comp_code;
+ 	u32 remaining, requested, ep_trb_len;
+ 
+ 	slot_ctx = xhci_get_slot_ctx(xhci, ep->vdev->out_ctx);
+-	ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer));
+ 	trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
+ 	remaining = EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
+ 	ep_trb_len = TRB_LEN(le32_to_cpu(ep_trb->generic.field[2]));
+@@ -2401,7 +2417,8 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 			  remaining);
+ 		td->urb->actual_length = 0;
+ 	}
+-	return finish_td(xhci, td, event, ep);
++
++	return finish_td(xhci, ep, ep_ring, td, trb_comp_code);
+ }
+ 
+ /*
+@@ -2686,17 +2703,51 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ 		}
+ 
+ 		if (!ep_seg) {
+-			if (!ep->skip ||
+-			    !usb_endpoint_xfer_isoc(&td->urb->ep->desc)) {
+-				/* Some host controllers give a spurious
+-				 * successful event after a short transfer.
+-				 * Ignore it.
+-				 */
+-				if ((xhci->quirks & XHCI_SPURIOUS_SUCCESS) &&
+-						ep_ring->last_td_was_short) {
+-					ep_ring->last_td_was_short = false;
+-					goto cleanup;
++
++			if (ep->skip && usb_endpoint_xfer_isoc(&td->urb->ep->desc)) {
++				skip_isoc_td(xhci, td, ep, status);
++				goto cleanup;
++			}
++
++			/*
++			 * Some hosts give a spurious success event after a short
++			 * transfer. Ignore it.
++			 */
++			if ((xhci->quirks & XHCI_SPURIOUS_SUCCESS) &&
++			    ep_ring->last_td_was_short) {
++				ep_ring->last_td_was_short = false;
++				goto cleanup;
++			}
++
++			/*
++			 * xhci 4.10.2 states isoc endpoints should continue
++			 * processing the next TD if there was an error mid TD.
++			 * So host like NEC don't generate an event for the last
++			 * isoc TRB even if the IOC flag is set.
++			 * xhci 4.9.1 states that if there are errors in mult-TRB
++			 * TDs xHC should generate an error for that TRB, and if xHC
++			 * proceeds to the next TD it should genete an event for
++			 * any TRB with IOC flag on the way. Other host follow this.
++			 * So this event might be for the next TD.
++			 */
++			if (td->error_mid_td &&
++			    !list_is_last(&td->td_list, &ep_ring->td_list)) {
++				struct xhci_td *td_next = list_next_entry(td, td_list);
++
++				ep_seg = trb_in_td(xhci, td_next->start_seg, td_next->first_trb,
++						   td_next->last_trb, ep_trb_dma, false);
++				if (ep_seg) {
++					/* give back previous TD, start handling new */
++					xhci_dbg(xhci, "Missing TD completion event after mid TD error\n");
++					ep_ring->dequeue = td->last_trb;
++					ep_ring->deq_seg = td->last_trb_seg;
++					inc_deq(xhci, ep_ring);
++					xhci_td_cleanup(xhci, td, ep_ring, td->status);
++					td = td_next;
+ 				}
++			}
++
++			if (!ep_seg) {
+ 				/* HC is busted, give up! */
+ 				xhci_err(xhci,
+ 					"ERROR Transfer event TRB DMA ptr not "
+@@ -2708,9 +2759,6 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ 					  ep_trb_dma, true);
+ 				return -ESHUTDOWN;
+ 			}
+-
+-			skip_isoc_td(xhci, td, ep, status);
+-			goto cleanup;
+ 		}
+ 		if (trb_comp_code == COMP_SHORT_PACKET)
+ 			ep_ring->last_td_was_short = true;
+@@ -2752,11 +2800,11 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ 
+ 		/* update the urb's actual_length and give back to the core */
+ 		if (usb_endpoint_xfer_control(&td->urb->ep->desc))
+-			process_ctrl_td(xhci, td, ep_trb, event, ep);
++			process_ctrl_td(xhci, ep, ep_ring, td, ep_trb, event);
+ 		else if (usb_endpoint_xfer_isoc(&td->urb->ep->desc))
+-			process_isoc_td(xhci, td, ep_trb, event, ep);
++			process_isoc_td(xhci, ep, ep_ring, td, ep_trb, event);
+ 		else
+-			process_bulk_intr_td(xhci, td, ep_trb, event, ep);
++			process_bulk_intr_td(xhci, ep, ep_ring, td, ep_trb, event);
+ cleanup:
+ 		handling_skipped_tds = ep->skip &&
+ 			trb_comp_code != COMP_MISSED_SERVICE_ERROR &&
+@@ -3487,7 +3535,7 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 			field |= TRB_IOC;
+ 			more_trbs_coming = false;
+ 			td->last_trb = ring->enqueue;
+-
++			td->last_trb_seg = ring->enq_seg;
+ 			if (xhci_urb_suitable_for_idt(urb)) {
+ 				memcpy(&send_addr, urb->transfer_buffer,
+ 				       trb_buff_len);
+@@ -3513,7 +3561,7 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 				upper_32_bits(send_addr),
+ 				length_field,
+ 				field);
+-
++		td->num_trbs++;
+ 		addr += trb_buff_len;
+ 		sent_len = trb_buff_len;
+ 
+@@ -3537,8 +3585,10 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 				       ep_index, urb->stream_id,
+ 				       1, urb, 1, mem_flags);
+ 		urb_priv->td[1].last_trb = ring->enqueue;
++		urb_priv->td[1].last_trb_seg = ring->enq_seg;
+ 		field = TRB_TYPE(TRB_NORMAL) | ring->cycle_state | TRB_IOC;
+ 		queue_trb(xhci, ring, 0, 0, 0, TRB_INTR_TARGET(0), field);
++		urb_priv->td[1].num_trbs++;
+ 	}
+ 
+ 	check_trb_math(urb, enqd_len);
+@@ -3589,6 +3639,7 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 
+ 	urb_priv = urb->hcpriv;
+ 	td = &urb_priv->td[0];
++	td->num_trbs = num_trbs;
+ 
+ 	/*
+ 	 * Don't give the first TRB to the hardware (by toggling the cycle bit)
+@@ -3661,6 +3712,7 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 
+ 	/* Save the DMA address of the last TRB in the TD */
+ 	td->last_trb = ep_ring->enqueue;
++	td->last_trb_seg = ep_ring->enq_seg;
+ 
+ 	/* Queue status TRB - see Table 7 and sections 4.11.2.2 and 6.4.1.2.3 */
+ 	/* If the device sent data, the status stage is an OUT transfer */
+@@ -3905,7 +3957,7 @@ static int xhci_queue_isoc_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 			goto cleanup;
+ 		}
+ 		td = &urb_priv->td[i];
+-
++		td->num_trbs = trbs_per_td;
+ 		/* use SIA as default, if frame id is used overwrite it */
+ 		sia_frame_id = TRB_SIA;
+ 		if (!(urb->transfer_flags & URB_ISO_ASAP) &&
+@@ -3948,6 +4000,7 @@ static int xhci_queue_isoc_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 			} else {
+ 				more_trbs_coming = false;
+ 				td->last_trb = ep_ring->enqueue;
++				td->last_trb_seg = ep_ring->enq_seg;
+ 				field |= TRB_IOC;
+ 				if (trb_block_event_intr(xhci, num_tds, i))
+ 					field |= TRB_BEI;
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index bb3c362a194b2..5a8443f6ed703 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1550,9 +1550,12 @@ struct xhci_td {
+ 	struct xhci_segment	*start_seg;
+ 	union xhci_trb		*first_trb;
+ 	union xhci_trb		*last_trb;
++	struct xhci_segment	*last_trb_seg;
+ 	struct xhci_segment	*bounce_seg;
+ 	/* actual_length of the URB has already been set */
+ 	bool			urb_length_set;
++	bool			error_mid_td;
++	unsigned int		num_trbs;
+ };
+ 
+ /* xHCI command default timeout value */
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 68aa8760cb465..9e12592727914 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -3107,8 +3107,9 @@ static int ext4_zeroout_es(struct inode *inode, struct ext4_extent *ex)
+ 	if (ee_len == 0)
+ 		return 0;
+ 
+-	return ext4_es_insert_extent(inode, ee_block, ee_len, ee_pblock,
+-				     EXTENT_STATUS_WRITTEN);
++	ext4_es_insert_extent(inode, ee_block, ee_len, ee_pblock,
++			      EXTENT_STATUS_WRITTEN);
++	return 0;
+ }
+ 
+ /* FIXME!! we need to try to merge to left or right after zero-out  */
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index cccbdfd49a86b..f37e62546745b 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -846,12 +846,10 @@ static int __es_insert_extent(struct inode *inode, struct extent_status *newes,
+ /*
+  * ext4_es_insert_extent() adds information to an inode's extent
+  * status tree.
+- *
+- * Return 0 on success, error code on failure.
+  */
+-int ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk,
+-			  ext4_lblk_t len, ext4_fsblk_t pblk,
+-			  unsigned int status)
++void ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk,
++			   ext4_lblk_t len, ext4_fsblk_t pblk,
++			   unsigned int status)
+ {
+ 	struct extent_status newes;
+ 	ext4_lblk_t end = lblk + len - 1;
+@@ -863,13 +861,13 @@ int ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk,
+ 	bool revise_pending = false;
+ 
+ 	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
+-		return 0;
++		return;
+ 
+ 	es_debug("add [%u/%u) %llu %x to extent status tree of inode %lu\n",
+ 		 lblk, len, pblk, status, inode->i_ino);
+ 
+ 	if (!len)
+-		return 0;
++		return;
+ 
+ 	BUG_ON(end < lblk);
+ 
+@@ -938,7 +936,7 @@ int ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk,
+ 		goto retry;
+ 
+ 	ext4_es_print_tree(inode);
+-	return 0;
++	return;
+ }
+ 
+ /*
+diff --git a/fs/ext4/extents_status.h b/fs/ext4/extents_status.h
+index 4ec30a7982605..481ec4381bee6 100644
+--- a/fs/ext4/extents_status.h
++++ b/fs/ext4/extents_status.h
+@@ -127,9 +127,9 @@ extern int __init ext4_init_es(void);
+ extern void ext4_exit_es(void);
+ extern void ext4_es_init_tree(struct ext4_es_tree *tree);
+ 
+-extern int ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk,
+-				 ext4_lblk_t len, ext4_fsblk_t pblk,
+-				 unsigned int status);
++extern void ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk,
++				  ext4_lblk_t len, ext4_fsblk_t pblk,
++				  unsigned int status);
+ extern void ext4_es_cache_extent(struct inode *inode, ext4_lblk_t lblk,
+ 				 ext4_lblk_t len, ext4_fsblk_t pblk,
+ 				 unsigned int status);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 365c4d3a434ab..8b48ed351c4b9 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -589,10 +589,8 @@ int ext4_map_blocks(handle_t *handle, struct inode *inode,
+ 		    ext4_es_scan_range(inode, &ext4_es_is_delayed, map->m_lblk,
+ 				       map->m_lblk + map->m_len - 1))
+ 			status |= EXTENT_STATUS_DELAYED;
+-		ret = ext4_es_insert_extent(inode, map->m_lblk,
+-					    map->m_len, map->m_pblk, status);
+-		if (ret < 0)
+-			retval = ret;
++		ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
++				      map->m_pblk, status);
+ 	}
+ 	up_read((&EXT4_I(inode)->i_data_sem));
+ 
+@@ -701,12 +699,8 @@ int ext4_map_blocks(handle_t *handle, struct inode *inode,
+ 		    ext4_es_scan_range(inode, &ext4_es_is_delayed, map->m_lblk,
+ 				       map->m_lblk + map->m_len - 1))
+ 			status |= EXTENT_STATUS_DELAYED;
+-		ret = ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
+-					    map->m_pblk, status);
+-		if (ret < 0) {
+-			retval = ret;
+-			goto out_sem;
+-		}
++		ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
++				      map->m_pblk, status);
+ 	}
+ 
+ out_sem:
+@@ -1734,11 +1728,8 @@ static int ext4_da_map_blocks(struct inode *inode, sector_t iblock,
+ 
+ 	/* Lookup extent status tree firstly */
+ 	if (ext4_es_lookup_extent(inode, iblock, NULL, &es)) {
+-		if (ext4_es_is_hole(&es)) {
+-			retval = 0;
+-			down_read(&EXT4_I(inode)->i_data_sem);
++		if (ext4_es_is_hole(&es))
+ 			goto add_delayed;
+-		}
+ 
+ 		/*
+ 		 * Delayed extent could be allocated by fallocate.
+@@ -1780,27 +1771,11 @@ static int ext4_da_map_blocks(struct inode *inode, sector_t iblock,
+ 		retval = ext4_ext_map_blocks(NULL, inode, map, 0);
+ 	else
+ 		retval = ext4_ind_map_blocks(NULL, inode, map, 0);
+-
+-add_delayed:
+-	if (retval == 0) {
+-		int ret;
+-
+-		/*
+-		 * XXX: __block_prepare_write() unmaps passed block,
+-		 * is it OK?
+-		 */
+-
+-		ret = ext4_insert_delayed_block(inode, map->m_lblk);
+-		if (ret != 0) {
+-			retval = ret;
+-			goto out_unlock;
+-		}
+-
+-		map_bh(bh, inode->i_sb, invalid_block);
+-		set_buffer_new(bh);
+-		set_buffer_delay(bh);
+-	} else if (retval > 0) {
+-		int ret;
++	if (retval < 0) {
++		up_read(&EXT4_I(inode)->i_data_sem);
++		return retval;
++	}
++	if (retval > 0) {
+ 		unsigned int status;
+ 
+ 		if (unlikely(retval != map->m_len)) {
+@@ -1813,15 +1788,23 @@ static int ext4_da_map_blocks(struct inode *inode, sector_t iblock,
+ 
+ 		status = map->m_flags & EXT4_MAP_UNWRITTEN ?
+ 				EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
+-		ret = ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
+-					    map->m_pblk, status);
+-		if (ret != 0)
+-			retval = ret;
++		ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
++				      map->m_pblk, status);
++		up_read(&EXT4_I(inode)->i_data_sem);
++		return retval;
+ 	}
++	up_read(&EXT4_I(inode)->i_data_sem);
+ 
+-out_unlock:
+-	up_read((&EXT4_I(inode)->i_data_sem));
++add_delayed:
++	down_write(&EXT4_I(inode)->i_data_sem);
++	retval = ext4_insert_delayed_block(inode, map->m_lblk);
++	up_write(&EXT4_I(inode)->i_data_sem);
++	if (retval)
++		return retval;
+ 
++	map_bh(bh, inode->i_sb, invalid_block);
++	set_buffer_new(bh);
++	set_buffer_delay(bh);
+ 	return retval;
+ }
+ 
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index a0edd4b8fa189..bf3cda4989623 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -135,6 +135,7 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 	loff_t len, vma_len;
+ 	int ret;
+ 	struct hstate *h = hstate_file(file);
++	vm_flags_t vm_flags;
+ 
+ 	/*
+ 	 * vma address alignment (but not the pgoff alignment) has
+@@ -176,10 +177,20 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 	file_accessed(file);
+ 
+ 	ret = -ENOMEM;
+-	if (hugetlb_reserve_pages(inode,
++
++	vm_flags = vma->vm_flags;
++	/*
++	 * for SHM_HUGETLB, the pages are reserved in the shmget() call so skip
++	 * reserving here. Note: only for SHM hugetlbfs file, the inode
++	 * flag S_PRIVATE is set.
++	 */
++	if (inode->i_flags & S_PRIVATE)
++		vm_flags |= VM_NORESERVE;
++
++	if (!hugetlb_reserve_pages(inode,
+ 				vma->vm_pgoff >> huge_page_order(h),
+ 				len >> huge_page_shift(h), vma,
+-				vma->vm_flags))
++				vm_flags))
+ 		goto out;
+ 
+ 	ret = 0;
+@@ -1500,7 +1511,7 @@ struct file *hugetlb_file_setup(const char *name, size_t size,
+ 	inode->i_size = size;
+ 	clear_nlink(inode);
+ 
+-	if (hugetlb_reserve_pages(inode, 0,
++	if (!hugetlb_reserve_pages(inode, 0,
+ 			size >> huge_page_shift(hstate_inode(inode)), NULL,
+ 			acctflag))
+ 		file = ERR_PTR(-ENOMEM);
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index bc6ce4b202a80..cd56e53bd42e2 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -892,8 +892,7 @@ int sk_reuseport_attach_filter(struct sock_fprog *fprog, struct sock *sk);
+ int sk_reuseport_attach_bpf(u32 ufd, struct sock *sk);
+ void sk_reuseport_prog_free(struct bpf_prog *prog);
+ int sk_detach_filter(struct sock *sk);
+-int sk_get_filter(struct sock *sk, struct sock_filter __user *filter,
+-		  unsigned int len);
++int sk_get_filter(struct sock *sk, sockptr_t optval, unsigned int len);
+ 
+ bool sk_filter_charge(struct sock *sk, struct sk_filter *fp);
+ void sk_filter_uncharge(struct sock *sk, struct sk_filter *fp);
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 99b73fc4a8246..90c66b9458c31 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -140,7 +140,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm, pte_t *dst_pte,
+ 				unsigned long dst_addr,
+ 				unsigned long src_addr,
+ 				struct page **pagep);
+-int hugetlb_reserve_pages(struct inode *inode, long from, long to,
++bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
+ 						struct vm_area_struct *vma,
+ 						vm_flags_t vm_flags);
+ long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index eada4d8d65879..2aaf450c8d800 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -764,6 +764,23 @@ enum vmbus_device_type {
+ 	HV_UNKNOWN,
+ };
+ 
++/*
++ * Provides request ids for VMBus. Encapsulates guest memory
++ * addresses and stores the next available slot in req_arr
++ * to generate new ids in constant time.
++ */
++struct vmbus_requestor {
++	u64 *req_arr;
++	unsigned long *req_bitmap; /* is a given slot available? */
++	u32 size;
++	u64 next_request_id;
++	spinlock_t req_lock; /* provides atomicity */
++};
++
++#define VMBUS_NO_RQSTOR U64_MAX
++#define VMBUS_RQST_ERROR (U64_MAX - 1)
++#define VMBUS_RQST_ID_NO_RESPONSE (U64_MAX - 2)
++
+ struct vmbus_device {
+ 	u16  dev_type;
+ 	guid_t guid;
+@@ -988,8 +1005,14 @@ struct vmbus_channel {
+ 	u32 fuzz_testing_interrupt_delay;
+ 	u32 fuzz_testing_message_delay;
+ 
++	/* request/transaction ids for VMBus */
++	struct vmbus_requestor requestor;
++	u32 rqstor_size;
+ };
+ 
++u64 vmbus_next_request_id(struct vmbus_requestor *rqstor, u64 rqst_addr);
++u64 vmbus_request_addr(struct vmbus_requestor *rqstor, u64 trans_id);
++
+ static inline bool is_hvsock_channel(const struct vmbus_channel *c)
+ {
+ 	return !!(c->offermsg.offer.chn_flags &
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index 92a76ce0c382d..07abcd384975b 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -293,9 +293,9 @@ LSM_HOOK(int, 0, socket_getsockopt, struct socket *sock, int level, int optname)
+ LSM_HOOK(int, 0, socket_setsockopt, struct socket *sock, int level, int optname)
+ LSM_HOOK(int, 0, socket_shutdown, struct socket *sock, int how)
+ LSM_HOOK(int, 0, socket_sock_rcv_skb, struct sock *sk, struct sk_buff *skb)
+-LSM_HOOK(int, 0, socket_getpeersec_stream, struct socket *sock,
+-	 char __user *optval, int __user *optlen, unsigned len)
+-LSM_HOOK(int, 0, socket_getpeersec_dgram, struct socket *sock,
++LSM_HOOK(int, -ENOPROTOOPT, socket_getpeersec_stream, struct socket *sock,
++	 sockptr_t optval, sockptr_t optlen, unsigned int len)
++LSM_HOOK(int, -ENOPROTOOPT, socket_getpeersec_dgram, struct socket *sock,
+ 	 struct sk_buff *skb, u32 *secid)
+ LSM_HOOK(int, 0, sk_alloc_security, struct sock *sk, int family, gfp_t priority)
+ LSM_HOOK(void, LSM_RET_VOID, sk_free_security, struct sock *sk)
+diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
+index 64cdf4d7bfb30..bbf9c8c7bd9c5 100644
+--- a/include/linux/lsm_hooks.h
++++ b/include/linux/lsm_hooks.h
+@@ -926,8 +926,8 @@
+  *	SO_GETPEERSEC.  For tcp sockets this can be meaningful if the
+  *	socket is associated with an ipsec SA.
+  *	@sock is the local socket.
+- *	@optval userspace memory where the security state is to be copied.
+- *	@optlen userspace int where the module should copy the actual length
++ *	@optval memory where the security state is to be copied.
++ *	@optlen memory where the module should copy the actual length
+  *	of the security state.
+  *	@len as input is the maximum length to copy to userspace provided
+  *	by the caller.
+diff --git a/include/linux/regmap.h b/include/linux/regmap.h
+index e7834d98207f7..83a7485de78fb 100644
+--- a/include/linux/regmap.h
++++ b/include/linux/regmap.h
+@@ -289,6 +289,17 @@ typedef void (*regmap_unlock)(void *);
+  *		  read operation on a bus such as SPI, I2C, etc. Most of the
+  *		  devices do not need this.
+  * @reg_write:	  Same as above for writing.
++ * @reg_update_bits: Optional callback that if filled will be used to perform
++ *		     all the update_bits(rmw) operation. Should only be provided
++ *		     if the function require special handling with lock and reg
++ *		     handling and the operation cannot be represented as a simple
++ *		     update_bits operation on a bus such as SPI, I2C, etc.
++ * @read: Optional callback that if filled will be used to perform all the
++ *        bulk reads from the registers. Data is returned in the buffer used
++ *        to transmit data.
++ * @write: Same as above for writing.
++ * @max_raw_read: Max raw read size that can be used on the device.
++ * @max_raw_write: Max raw write size that can be used on the device.
+  * @fast_io:	  Register IO is fast. Use a spinlock instead of a mutex
+  *	     	  to perform locking. This field is ignored if custom lock/unlock
+  *	     	  functions are used (see fields lock/unlock of struct regmap_config).
+@@ -366,6 +377,14 @@ struct regmap_config {
+ 
+ 	int (*reg_read)(void *context, unsigned int reg, unsigned int *val);
+ 	int (*reg_write)(void *context, unsigned int reg, unsigned int val);
++	int (*reg_update_bits)(void *context, unsigned int reg,
++			       unsigned int mask, unsigned int val);
++	/* Bulk read/write */
++	int (*read)(void *context, const void *reg_buf, size_t reg_size,
++		    void *val_buf, size_t val_size);
++	int (*write)(void *context, const void *data, size_t count);
++	size_t max_raw_read;
++	size_t max_raw_write;
+ 
+ 	bool fast_io;
+ 
+diff --git a/include/linux/security.h b/include/linux/security.h
+index e388b1666bcfc..5b61aa19fac66 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -31,6 +31,7 @@
+ #include <linux/err.h>
+ #include <linux/string.h>
+ #include <linux/mm.h>
++#include <linux/sockptr.h>
+ 
+ struct linux_binprm;
+ struct cred;
+@@ -1366,8 +1367,8 @@ int security_socket_getsockopt(struct socket *sock, int level, int optname);
+ int security_socket_setsockopt(struct socket *sock, int level, int optname);
+ int security_socket_shutdown(struct socket *sock, int how);
+ int security_sock_rcv_skb(struct sock *sk, struct sk_buff *skb);
+-int security_socket_getpeersec_stream(struct socket *sock, char __user *optval,
+-				      int __user *optlen, unsigned len);
++int security_socket_getpeersec_stream(struct socket *sock, sockptr_t optval,
++				      sockptr_t optlen, unsigned int len);
+ int security_socket_getpeersec_dgram(struct socket *sock, struct sk_buff *skb, u32 *secid);
+ int security_sk_alloc(struct sock *sk, int family, gfp_t priority);
+ void security_sk_free(struct sock *sk);
+@@ -1501,8 +1502,10 @@ static inline int security_sock_rcv_skb(struct sock *sk,
+ 	return 0;
+ }
+ 
+-static inline int security_socket_getpeersec_stream(struct socket *sock, char __user *optval,
+-						    int __user *optlen, unsigned len)
++static inline int security_socket_getpeersec_stream(struct socket *sock,
++						    sockptr_t optval,
++						    sockptr_t optlen,
++						    unsigned int len)
+ {
+ 	return -ENOPROTOOPT;
+ }
+diff --git a/include/linux/sockptr.h b/include/linux/sockptr.h
+index ea193414298b7..38862819e77a1 100644
+--- a/include/linux/sockptr.h
++++ b/include/linux/sockptr.h
+@@ -64,6 +64,11 @@ static inline int copy_to_sockptr_offset(sockptr_t dst, size_t offset,
+ 	return 0;
+ }
+ 
++static inline int copy_to_sockptr(sockptr_t dst, const void *src, size_t size)
++{
++	return copy_to_sockptr_offset(dst, 0, src, size);
++}
++
+ static inline void *memdup_sockptr(sockptr_t src, size_t len)
+ {
+ 	void *p = kmalloc_track_caller(len, GFP_USER | __GFP_NOWARN);
+diff --git a/include/trace/events/qdisc.h b/include/trace/events/qdisc.h
+index 330d32d84485b..a50df41634c58 100644
+--- a/include/trace/events/qdisc.h
++++ b/include/trace/events/qdisc.h
+@@ -53,14 +53,14 @@ TRACE_EVENT(qdisc_reset,
+ 	TP_ARGS(q),
+ 
+ 	TP_STRUCT__entry(
+-		__string(	dev,		qdisc_dev(q)	)
+-		__string(	kind,		q->ops->id	)
+-		__field(	u32,		parent		)
+-		__field(	u32,		handle		)
++		__string(	dev,		qdisc_dev(q)->name	)
++		__string(	kind,		q->ops->id		)
++		__field(	u32,		parent			)
++		__field(	u32,		handle			)
+ 	),
+ 
+ 	TP_fast_assign(
+-		__assign_str(dev, qdisc_dev(q));
++		__assign_str(dev, qdisc_dev(q)->name);
+ 		__assign_str(kind, q->ops->id);
+ 		__entry->parent = q->parent;
+ 		__entry->handle = q->handle;
+@@ -78,14 +78,14 @@ TRACE_EVENT(qdisc_destroy,
+ 	TP_ARGS(q),
+ 
+ 	TP_STRUCT__entry(
+-		__string(	dev,		qdisc_dev(q)	)
+-		__string(	kind,		q->ops->id	)
+-		__field(	u32,		parent		)
+-		__field(	u32,		handle		)
++		__string(	dev,		qdisc_dev(q)->name	)
++		__string(	kind,		q->ops->id		)
++		__field(	u32,		parent			)
++		__field(	u32,		handle			)
+ 	),
+ 
+ 	TP_fast_assign(
+-		__assign_str(dev, qdisc_dev(q));
++		__assign_str(dev, qdisc_dev(q)->name);
+ 		__assign_str(kind, q->ops->id);
+ 		__entry->parent = q->parent;
+ 		__entry->handle = q->handle;
+diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
+index c61a23b564aa5..2dcc04b2f330e 100644
+--- a/kernel/bpf/cpumap.c
++++ b/kernel/bpf/cpumap.c
+@@ -229,7 +229,7 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
+ 				    void **frames, int n,
+ 				    struct xdp_cpumap_stats *stats)
+ {
+-	struct xdp_rxq_info rxq;
++	struct xdp_rxq_info rxq = {};
+ 	struct xdp_buff xdp;
+ 	int i, nframes = 0;
+ 
+diff --git a/kernel/sys.c b/kernel/sys.c
+index bff14910b9262..efc213ae4c5ad 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -1736,74 +1736,87 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
+ 	struct task_struct *t;
+ 	unsigned long flags;
+ 	u64 tgutime, tgstime, utime, stime;
+-	unsigned long maxrss = 0;
++	unsigned long maxrss;
++	struct mm_struct *mm;
++	struct signal_struct *sig = p->signal;
++	unsigned int seq = 0;
+ 
+-	memset((char *)r, 0, sizeof (*r));
++retry:
++	memset(r, 0, sizeof(*r));
+ 	utime = stime = 0;
++	maxrss = 0;
+ 
+ 	if (who == RUSAGE_THREAD) {
+ 		task_cputime_adjusted(current, &utime, &stime);
+ 		accumulate_thread_rusage(p, r);
+-		maxrss = p->signal->maxrss;
+-		goto out;
++		maxrss = sig->maxrss;
++		goto out_thread;
+ 	}
+ 
+-	if (!lock_task_sighand(p, &flags))
+-		return;
++	flags = read_seqbegin_or_lock_irqsave(&sig->stats_lock, &seq);
+ 
+ 	switch (who) {
+ 	case RUSAGE_BOTH:
+ 	case RUSAGE_CHILDREN:
+-		utime = p->signal->cutime;
+-		stime = p->signal->cstime;
+-		r->ru_nvcsw = p->signal->cnvcsw;
+-		r->ru_nivcsw = p->signal->cnivcsw;
+-		r->ru_minflt = p->signal->cmin_flt;
+-		r->ru_majflt = p->signal->cmaj_flt;
+-		r->ru_inblock = p->signal->cinblock;
+-		r->ru_oublock = p->signal->coublock;
+-		maxrss = p->signal->cmaxrss;
++		utime = sig->cutime;
++		stime = sig->cstime;
++		r->ru_nvcsw = sig->cnvcsw;
++		r->ru_nivcsw = sig->cnivcsw;
++		r->ru_minflt = sig->cmin_flt;
++		r->ru_majflt = sig->cmaj_flt;
++		r->ru_inblock = sig->cinblock;
++		r->ru_oublock = sig->coublock;
++		maxrss = sig->cmaxrss;
+ 
+ 		if (who == RUSAGE_CHILDREN)
+ 			break;
+ 		fallthrough;
+ 
+ 	case RUSAGE_SELF:
+-		thread_group_cputime_adjusted(p, &tgutime, &tgstime);
+-		utime += tgutime;
+-		stime += tgstime;
+-		r->ru_nvcsw += p->signal->nvcsw;
+-		r->ru_nivcsw += p->signal->nivcsw;
+-		r->ru_minflt += p->signal->min_flt;
+-		r->ru_majflt += p->signal->maj_flt;
+-		r->ru_inblock += p->signal->inblock;
+-		r->ru_oublock += p->signal->oublock;
+-		if (maxrss < p->signal->maxrss)
+-			maxrss = p->signal->maxrss;
+-		t = p;
+-		do {
++		r->ru_nvcsw += sig->nvcsw;
++		r->ru_nivcsw += sig->nivcsw;
++		r->ru_minflt += sig->min_flt;
++		r->ru_majflt += sig->maj_flt;
++		r->ru_inblock += sig->inblock;
++		r->ru_oublock += sig->oublock;
++		if (maxrss < sig->maxrss)
++			maxrss = sig->maxrss;
++
++		rcu_read_lock();
++		__for_each_thread(sig, t)
+ 			accumulate_thread_rusage(t, r);
+-		} while_each_thread(p, t);
++		rcu_read_unlock();
++
+ 		break;
+ 
+ 	default:
+ 		BUG();
+ 	}
+-	unlock_task_sighand(p, &flags);
+ 
+-out:
+-	r->ru_utime = ns_to_kernel_old_timeval(utime);
+-	r->ru_stime = ns_to_kernel_old_timeval(stime);
++	if (need_seqretry(&sig->stats_lock, seq)) {
++		seq = 1;
++		goto retry;
++	}
++	done_seqretry_irqrestore(&sig->stats_lock, seq, flags);
+ 
+-	if (who != RUSAGE_CHILDREN) {
+-		struct mm_struct *mm = get_task_mm(p);
++	if (who == RUSAGE_CHILDREN)
++		goto out_children;
+ 
+-		if (mm) {
+-			setmax_mm_hiwater_rss(&maxrss, mm);
+-			mmput(mm);
+-		}
++	thread_group_cputime_adjusted(p, &tgutime, &tgstime);
++	utime += tgutime;
++	stime += tgstime;
++
++out_thread:
++	mm = get_task_mm(p);
++	if (mm) {
++		setmax_mm_hiwater_rss(&maxrss, mm);
++		mmput(mm);
+ 	}
++
++out_children:
+ 	r->ru_maxrss = maxrss * (PAGE_SIZE / 1024); /* convert pages to KBs */
++	r->ru_utime = ns_to_kernel_old_timeval(utime);
++	r->ru_stime = ns_to_kernel_old_timeval(stime);
+ }
+ 
+ SYSCALL_DEFINE2(getrusage, int, who, struct rusage __user *, ru)
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 81949f6d29af5..02b7c8f9b0e87 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5108,12 +5108,13 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
+ 	return pages << h->order;
+ }
+ 
+-int hugetlb_reserve_pages(struct inode *inode,
++/* Return true if reservation was successful, false otherwise.  */
++bool hugetlb_reserve_pages(struct inode *inode,
+ 					long from, long to,
+ 					struct vm_area_struct *vma,
+ 					vm_flags_t vm_flags)
+ {
+-	long ret, chg, add = -1;
++	long chg, add = -1;
+ 	struct hstate *h = hstate_inode(inode);
+ 	struct hugepage_subpool *spool = subpool_inode(inode);
+ 	struct resv_map *resv_map;
+@@ -5123,7 +5124,7 @@ int hugetlb_reserve_pages(struct inode *inode,
+ 	/* This should never happen */
+ 	if (from > to) {
+ 		VM_WARN(1, "%s called with a negative range\n", __func__);
+-		return -EINVAL;
++		return false;
+ 	}
+ 
+ 	/*
+@@ -5132,7 +5133,7 @@ int hugetlb_reserve_pages(struct inode *inode,
+ 	 * without using reserves
+ 	 */
+ 	if (vm_flags & VM_NORESERVE)
+-		return 0;
++		return true;
+ 
+ 	/*
+ 	 * Shared mappings base their reservation on the number of pages that
+@@ -5154,7 +5155,7 @@ int hugetlb_reserve_pages(struct inode *inode,
+ 		/* Private mapping. */
+ 		resv_map = resv_map_alloc();
+ 		if (!resv_map)
+-			return -ENOMEM;
++			return false;
+ 
+ 		chg = to - from;
+ 
+@@ -5162,18 +5163,12 @@ int hugetlb_reserve_pages(struct inode *inode,
+ 		set_vma_resv_flags(vma, HPAGE_RESV_OWNER);
+ 	}
+ 
+-	if (chg < 0) {
+-		ret = chg;
++	if (chg < 0)
+ 		goto out_err;
+-	}
+-
+-	ret = hugetlb_cgroup_charge_cgroup_rsvd(
+-		hstate_index(h), chg * pages_per_huge_page(h), &h_cg);
+ 
+-	if (ret < 0) {
+-		ret = -ENOMEM;
++	if (hugetlb_cgroup_charge_cgroup_rsvd(hstate_index(h),
++				chg * pages_per_huge_page(h), &h_cg) < 0)
+ 		goto out_err;
+-	}
+ 
+ 	if (vma && !(vma->vm_flags & VM_MAYSHARE) && h_cg) {
+ 		/* For private mappings, the hugetlb_cgroup uncharge info hangs
+@@ -5188,19 +5183,15 @@ int hugetlb_reserve_pages(struct inode *inode,
+ 	 * reservations already in place (gbl_reserve).
+ 	 */
+ 	gbl_reserve = hugepage_subpool_get_pages(spool, chg);
+-	if (gbl_reserve < 0) {
+-		ret = -ENOSPC;
++	if (gbl_reserve < 0)
+ 		goto out_uncharge_cgroup;
+-	}
+ 
+ 	/*
+ 	 * Check enough hugepages are available for the reservation.
+ 	 * Hand the pages back to the subpool if there are not
+ 	 */
+-	ret = hugetlb_acct_memory(h, gbl_reserve);
+-	if (ret < 0) {
++	if (hugetlb_acct_memory(h, gbl_reserve) < 0)
+ 		goto out_put_pages;
+-	}
+ 
+ 	/*
+ 	 * Account for the reservations made. Shared mappings record regions
+@@ -5218,7 +5209,6 @@ int hugetlb_reserve_pages(struct inode *inode,
+ 
+ 		if (unlikely(add < 0)) {
+ 			hugetlb_acct_memory(h, -gbl_reserve);
+-			ret = add;
+ 			goto out_put_pages;
+ 		} else if (unlikely(chg > add)) {
+ 			/*
+@@ -5251,7 +5241,8 @@ int hugetlb_reserve_pages(struct inode *inode,
+ 			hugetlb_cgroup_put_rsvd_cgroup(h_cg);
+ 		}
+ 	}
+-	return 0;
++	return true;
++
+ out_put_pages:
+ 	/* put back original number of pages, chg */
+ 	(void)hugepage_subpool_put_pages(spool, chg);
+@@ -5267,7 +5258,7 @@ int hugetlb_reserve_pages(struct inode *inode,
+ 			region_abort(resv_map, from, to, regions_needed);
+ 	if (vma && is_vma_resv_set(vma, HPAGE_RESV_OWNER))
+ 		kref_put(&resv_map->refs, resv_map_release);
+-	return ret;
++	return false;
+ }
+ 
+ long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 6cfc8fb0562a2..49e4d1535cc82 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -9903,8 +9903,7 @@ int sk_detach_filter(struct sock *sk)
+ }
+ EXPORT_SYMBOL_GPL(sk_detach_filter);
+ 
+-int sk_get_filter(struct sock *sk, struct sock_filter __user *ubuf,
+-		  unsigned int len)
++int sk_get_filter(struct sock *sk, sockptr_t optval, unsigned int len)
+ {
+ 	struct sock_fprog_kern *fprog;
+ 	struct sk_filter *filter;
+@@ -9935,7 +9934,7 @@ int sk_get_filter(struct sock *sk, struct sock_filter __user *ubuf,
+ 		goto out;
+ 
+ 	ret = -EFAULT;
+-	if (copy_to_user(ubuf, fprog->filter, bpf_classic_proglen(fprog)))
++	if (copy_to_sockptr(optval, fprog->filter, bpf_classic_proglen(fprog)))
+ 		goto out;
+ 
+ 	/* Instead of bytes, the API requests to return the number
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 769e969cd1dc5..016c0b9e01b70 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -644,8 +644,8 @@ static int sock_setbindtodevice(struct sock *sk, sockptr_t optval, int optlen)
+ 	return ret;
+ }
+ 
+-static int sock_getbindtodevice(struct sock *sk, char __user *optval,
+-				int __user *optlen, int len)
++static int sock_getbindtodevice(struct sock *sk, sockptr_t optval,
++				sockptr_t optlen, int len)
+ {
+ 	int ret = -ENOPROTOOPT;
+ #ifdef CONFIG_NETDEVICES
+@@ -668,12 +668,12 @@ static int sock_getbindtodevice(struct sock *sk, char __user *optval,
+ 	len = strlen(devname) + 1;
+ 
+ 	ret = -EFAULT;
+-	if (copy_to_user(optval, devname, len))
++	if (copy_to_sockptr(optval, devname, len))
+ 		goto out;
+ 
+ zero:
+ 	ret = -EFAULT;
+-	if (put_user(len, optlen))
++	if (copy_to_sockptr(optlen, &len, sizeof(int)))
+ 		goto out;
+ 
+ 	ret = 0;
+@@ -1281,22 +1281,25 @@ static void cred_to_ucred(struct pid *pid, const struct cred *cred,
+ 	}
+ }
+ 
+-static int groups_to_user(gid_t __user *dst, const struct group_info *src)
++static int groups_to_user(sockptr_t dst, const struct group_info *src)
+ {
+ 	struct user_namespace *user_ns = current_user_ns();
+ 	int i;
+ 
+-	for (i = 0; i < src->ngroups; i++)
+-		if (put_user(from_kgid_munged(user_ns, src->gid[i]), dst + i))
++	for (i = 0; i < src->ngroups; i++) {
++		gid_t gid = from_kgid_munged(user_ns, src->gid[i]);
++
++		if (copy_to_sockptr_offset(dst, i * sizeof(gid), &gid, sizeof(gid)))
+ 			return -EFAULT;
++	}
+ 
+ 	return 0;
+ }
+ 
+-int sock_getsockopt(struct socket *sock, int level, int optname,
+-		    char __user *optval, int __user *optlen)
++static int sk_getsockopt(struct sock *sk, int level, int optname,
++			 sockptr_t optval, sockptr_t optlen)
+ {
+-	struct sock *sk = sock->sk;
++	struct socket *sock = sk->sk_socket;
+ 
+ 	union {
+ 		int val;
+@@ -1312,7 +1315,7 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 	int lv = sizeof(int);
+ 	int len;
+ 
+-	if (get_user(len, optlen))
++	if (copy_from_sockptr(&len, optlen, sizeof(int)))
+ 		return -EFAULT;
+ 	if (len < 0)
+ 		return -EINVAL;
+@@ -1445,7 +1448,7 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 		cred_to_ucred(sk->sk_peer_pid, sk->sk_peer_cred, &peercred);
+ 		spin_unlock(&sk->sk_peer_lock);
+ 
+-		if (copy_to_user(optval, &peercred, len))
++		if (copy_to_sockptr(optval, &peercred, len))
+ 			return -EFAULT;
+ 		goto lenout;
+ 	}
+@@ -1463,11 +1466,11 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 		if (len < n * sizeof(gid_t)) {
+ 			len = n * sizeof(gid_t);
+ 			put_cred(cred);
+-			return put_user(len, optlen) ? -EFAULT : -ERANGE;
++			return copy_to_sockptr(optlen, &len, sizeof(int)) ? -EFAULT : -ERANGE;
+ 		}
+ 		len = n * sizeof(gid_t);
+ 
+-		ret = groups_to_user((gid_t __user *)optval, cred->group_info);
++		ret = groups_to_user(optval, cred->group_info);
+ 		put_cred(cred);
+ 		if (ret)
+ 			return ret;
+@@ -1483,7 +1486,7 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 			return -ENOTCONN;
+ 		if (lv < len)
+ 			return -EINVAL;
+-		if (copy_to_user(optval, address, len))
++		if (copy_to_sockptr(optval, address, len))
+ 			return -EFAULT;
+ 		goto lenout;
+ 	}
+@@ -1500,7 +1503,8 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case SO_PEERSEC:
+-		return security_socket_getpeersec_stream(sock, optval, optlen, len);
++		return security_socket_getpeersec_stream(sock,
++							 optval, optlen, len);
+ 
+ 	case SO_MARK:
+ 		v.val = sk->sk_mark;
+@@ -1528,7 +1532,7 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 		return sock_getbindtodevice(sk, optval, optlen, len);
+ 
+ 	case SO_GET_FILTER:
+-		len = sk_get_filter(sk, (struct sock_filter __user *)optval, len);
++		len = sk_get_filter(sk, optval, len);
+ 		if (len < 0)
+ 			return len;
+ 
+@@ -1575,7 +1579,7 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 		sk_get_meminfo(sk, meminfo);
+ 
+ 		len = min_t(unsigned int, len, sizeof(meminfo));
+-		if (copy_to_user(optval, &meminfo, len))
++		if (copy_to_sockptr(optval, &meminfo, len))
+ 			return -EFAULT;
+ 
+ 		goto lenout;
+@@ -1625,14 +1629,22 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 
+ 	if (len > lv)
+ 		len = lv;
+-	if (copy_to_user(optval, &v, len))
++	if (copy_to_sockptr(optval, &v, len))
+ 		return -EFAULT;
+ lenout:
+-	if (put_user(len, optlen))
++	if (copy_to_sockptr(optlen, &len, sizeof(int)))
+ 		return -EFAULT;
+ 	return 0;
+ }
+ 
++int sock_getsockopt(struct socket *sock, int level, int optname,
++		    char __user *optval, int __user *optlen)
++{
++	return sk_getsockopt(sock->sk, level, optname,
++			     USER_SOCKPTR(optval),
++			     USER_SOCKPTR(optlen));
++}
++
+ /*
+  * Initialize an sk_lock.
+  *
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index b23e42efb3dff..2d53c362f309e 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -5235,19 +5235,7 @@ static int ip6_route_multipath_add(struct fib6_config *cfg,
+ 	err_nh = NULL;
+ 	list_for_each_entry(nh, &rt6_nh_list, next) {
+ 		err = __ip6_ins_rt(nh->fib6_info, info, extack);
+-		fib6_info_release(nh->fib6_info);
+-
+-		if (!err) {
+-			/* save reference to last route successfully inserted */
+-			rt_last = nh->fib6_info;
+-
+-			/* save reference to first route for notification */
+-			if (!rt_notif)
+-				rt_notif = nh->fib6_info;
+-		}
+ 
+-		/* nh->fib6_info is used or freed at this point, reset to NULL*/
+-		nh->fib6_info = NULL;
+ 		if (err) {
+ 			if (replace && nhn)
+ 				NL_SET_ERR_MSG_MOD(extack,
+@@ -5255,6 +5243,12 @@ static int ip6_route_multipath_add(struct fib6_config *cfg,
+ 			err_nh = nh;
+ 			goto add_errout;
+ 		}
++		/* save reference to last route successfully inserted */
++		rt_last = nh->fib6_info;
++
++		/* save reference to first route for notification */
++		if (!rt_notif)
++			rt_notif = nh->fib6_info;
+ 
+ 		/* Because each route is added like a single route we remove
+ 		 * these flags after the first nexthop: if there is a collision,
+@@ -5315,8 +5309,7 @@ static int ip6_route_multipath_add(struct fib6_config *cfg,
+ 
+ cleanup:
+ 	list_for_each_entry_safe(nh, nh_safe, &rt6_nh_list, next) {
+-		if (nh->fib6_info)
+-			fib6_info_release(nh->fib6_info);
++		fib6_info_release(nh->fib6_info);
+ 		list_del(&nh->next);
+ 		kfree(nh);
+ 	}
+diff --git a/net/netfilter/nf_conntrack_h323_asn1.c b/net/netfilter/nf_conntrack_h323_asn1.c
+index e697a824b0018..540d97715bd23 100644
+--- a/net/netfilter/nf_conntrack_h323_asn1.c
++++ b/net/netfilter/nf_conntrack_h323_asn1.c
+@@ -533,6 +533,8 @@ static int decode_seq(struct bitstr *bs, const struct field_t *f,
+ 	/* Get fields bitmap */
+ 	if (nf_h323_error_boundary(bs, 0, f->sz))
+ 		return H323_ERROR_BOUND;
++	if (f->sz > 32)
++		return H323_ERROR_RANGE;
+ 	bmp = get_bitmap(bs, f->sz);
+ 	if (base)
+ 		*(unsigned int *)base = bmp;
+@@ -589,6 +591,8 @@ static int decode_seq(struct bitstr *bs, const struct field_t *f,
+ 	bmp2_len = get_bits(bs, 7) + 1;
+ 	if (nf_h323_error_boundary(bs, 0, bmp2_len))
+ 		return H323_ERROR_BOUND;
++	if (bmp2_len > 32)
++		return H323_ERROR_RANGE;
+ 	bmp2 = get_bitmap(bs, bmp2_len);
+ 	bmp |= bmp2 >> f->sz;
+ 	if (base)
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index 2b15dbbca98b3..2a8dfa68f6e20 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -1188,14 +1188,13 @@ static int nft_ct_expect_obj_init(const struct nft_ctx *ctx,
+ 	switch (priv->l3num) {
+ 	case NFPROTO_IPV4:
+ 	case NFPROTO_IPV6:
+-		if (priv->l3num != ctx->family)
+-			return -EINVAL;
++		if (priv->l3num == ctx->family || ctx->family == NFPROTO_INET)
++			break;
+ 
+-		fallthrough;
+-	case NFPROTO_INET:
+-		break;
++		return -EINVAL;
++	case NFPROTO_INET: /* tuple.src.l3num supports NFPROTO_IPV4/6 only */
+ 	default:
+-		return -EOPNOTSUPP;
++		return -EAFNOSUPPORT;
+ 	}
+ 
+ 	priv->l4proto = nla_get_u8(tb[NFTA_CT_EXPECT_L4PROTO]);
+diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
+index 24747163122bb..37d0bf6cab456 100644
+--- a/net/netrom/af_netrom.c
++++ b/net/netrom/af_netrom.c
+@@ -453,16 +453,16 @@ static int nr_create(struct net *net, struct socket *sock, int protocol,
+ 	nr_init_timers(sk);
+ 
+ 	nr->t1     =
+-		msecs_to_jiffies(sysctl_netrom_transport_timeout);
++		msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_timeout));
+ 	nr->t2     =
+-		msecs_to_jiffies(sysctl_netrom_transport_acknowledge_delay);
++		msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_acknowledge_delay));
+ 	nr->n2     =
+-		msecs_to_jiffies(sysctl_netrom_transport_maximum_tries);
++		msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_maximum_tries));
+ 	nr->t4     =
+-		msecs_to_jiffies(sysctl_netrom_transport_busy_delay);
++		msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_busy_delay));
+ 	nr->idle   =
+-		msecs_to_jiffies(sysctl_netrom_transport_no_activity_timeout);
+-	nr->window = sysctl_netrom_transport_requested_window_size;
++		msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_no_activity_timeout));
++	nr->window = READ_ONCE(sysctl_netrom_transport_requested_window_size);
+ 
+ 	nr->bpqext = 1;
+ 	nr->state  = NR_STATE_0;
+@@ -954,7 +954,7 @@ int nr_rx_frame(struct sk_buff *skb, struct net_device *dev)
+ 		 * G8PZT's Xrouter which is sending packets with command type 7
+ 		 * as an extension of the protocol.
+ 		 */
+-		if (sysctl_netrom_reset_circuit &&
++		if (READ_ONCE(sysctl_netrom_reset_circuit) &&
+ 		    (frametype != NR_RESET || flags != 0))
+ 			nr_transmit_reset(skb, 1);
+ 
+diff --git a/net/netrom/nr_dev.c b/net/netrom/nr_dev.c
+index 29e418c8c6c30..4caee8754b794 100644
+--- a/net/netrom/nr_dev.c
++++ b/net/netrom/nr_dev.c
+@@ -81,7 +81,7 @@ static int nr_header(struct sk_buff *skb, struct net_device *dev,
+ 	buff[6] |= AX25_SSSID_SPARE;
+ 	buff    += AX25_ADDR_LEN;
+ 
+-	*buff++ = sysctl_netrom_network_ttl_initialiser;
++	*buff++ = READ_ONCE(sysctl_netrom_network_ttl_initialiser);
+ 
+ 	*buff++ = NR_PROTO_IP;
+ 	*buff++ = NR_PROTO_IP;
+diff --git a/net/netrom/nr_in.c b/net/netrom/nr_in.c
+index 69e58906c32b1..034f79d11ae11 100644
+--- a/net/netrom/nr_in.c
++++ b/net/netrom/nr_in.c
+@@ -97,7 +97,7 @@ static int nr_state1_machine(struct sock *sk, struct sk_buff *skb,
+ 		break;
+ 
+ 	case NR_RESET:
+-		if (sysctl_netrom_reset_circuit)
++		if (READ_ONCE(sysctl_netrom_reset_circuit))
+ 			nr_disconnect(sk, ECONNRESET);
+ 		break;
+ 
+@@ -128,7 +128,7 @@ static int nr_state2_machine(struct sock *sk, struct sk_buff *skb,
+ 		break;
+ 
+ 	case NR_RESET:
+-		if (sysctl_netrom_reset_circuit)
++		if (READ_ONCE(sysctl_netrom_reset_circuit))
+ 			nr_disconnect(sk, ECONNRESET);
+ 		break;
+ 
+@@ -263,7 +263,7 @@ static int nr_state3_machine(struct sock *sk, struct sk_buff *skb, int frametype
+ 		break;
+ 
+ 	case NR_RESET:
+-		if (sysctl_netrom_reset_circuit)
++		if (READ_ONCE(sysctl_netrom_reset_circuit))
+ 			nr_disconnect(sk, ECONNRESET);
+ 		break;
+ 
+diff --git a/net/netrom/nr_out.c b/net/netrom/nr_out.c
+index 44929657f5b71..5e531394a724b 100644
+--- a/net/netrom/nr_out.c
++++ b/net/netrom/nr_out.c
+@@ -204,7 +204,7 @@ void nr_transmit_buffer(struct sock *sk, struct sk_buff *skb)
+ 	dptr[6] |= AX25_SSSID_SPARE;
+ 	dptr += AX25_ADDR_LEN;
+ 
+-	*dptr++ = sysctl_netrom_network_ttl_initialiser;
++	*dptr++ = READ_ONCE(sysctl_netrom_network_ttl_initialiser);
+ 
+ 	if (!nr_route_frame(skb, NULL)) {
+ 		kfree_skb(skb);
+diff --git a/net/netrom/nr_route.c b/net/netrom/nr_route.c
+index 78da5eab252a0..895702337c92e 100644
+--- a/net/netrom/nr_route.c
++++ b/net/netrom/nr_route.c
+@@ -153,7 +153,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
+ 		nr_neigh->digipeat = NULL;
+ 		nr_neigh->ax25     = NULL;
+ 		nr_neigh->dev      = dev;
+-		nr_neigh->quality  = sysctl_netrom_default_path_quality;
++		nr_neigh->quality  = READ_ONCE(sysctl_netrom_default_path_quality);
+ 		nr_neigh->locked   = 0;
+ 		nr_neigh->count    = 0;
+ 		nr_neigh->number   = nr_neigh_no++;
+@@ -725,7 +725,7 @@ void nr_link_failed(ax25_cb *ax25, int reason)
+ 	nr_neigh->ax25 = NULL;
+ 	ax25_cb_put(ax25);
+ 
+-	if (++nr_neigh->failed < sysctl_netrom_link_fails_count) {
++	if (++nr_neigh->failed < READ_ONCE(sysctl_netrom_link_fails_count)) {
+ 		nr_neigh_put(nr_neigh);
+ 		return;
+ 	}
+@@ -763,7 +763,7 @@ int nr_route_frame(struct sk_buff *skb, ax25_cb *ax25)
+ 	if (ax25 != NULL) {
+ 		ret = nr_add_node(nr_src, "", &ax25->dest_addr, ax25->digipeat,
+ 				  ax25->ax25_dev->dev, 0,
+-				  sysctl_netrom_obsolescence_count_initialiser);
++				  READ_ONCE(sysctl_netrom_obsolescence_count_initialiser));
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -777,7 +777,7 @@ int nr_route_frame(struct sk_buff *skb, ax25_cb *ax25)
+ 		return ret;
+ 	}
+ 
+-	if (!sysctl_netrom_routing_control && ax25 != NULL)
++	if (!READ_ONCE(sysctl_netrom_routing_control) && ax25 != NULL)
+ 		return 0;
+ 
+ 	/* Its Time-To-Live has expired */
+diff --git a/net/netrom/nr_subr.c b/net/netrom/nr_subr.c
+index e2d2af924cff4..c3bbd5880850b 100644
+--- a/net/netrom/nr_subr.c
++++ b/net/netrom/nr_subr.c
+@@ -182,7 +182,8 @@ void nr_write_internal(struct sock *sk, int frametype)
+ 		*dptr++ = nr->my_id;
+ 		*dptr++ = frametype;
+ 		*dptr++ = nr->window;
+-		if (nr->bpqext) *dptr++ = sysctl_netrom_network_ttl_initialiser;
++		if (nr->bpqext)
++			*dptr++ = READ_ONCE(sysctl_netrom_network_ttl_initialiser);
+ 		break;
+ 
+ 	case NR_DISCREQ:
+@@ -236,7 +237,7 @@ void __nr_transmit_reply(struct sk_buff *skb, int mine, unsigned char cmdflags)
+ 	dptr[6] |= AX25_SSSID_SPARE;
+ 	dptr += AX25_ADDR_LEN;
+ 
+-	*dptr++ = sysctl_netrom_network_ttl_initialiser;
++	*dptr++ = READ_ONCE(sysctl_netrom_network_ttl_initialiser);
+ 
+ 	if (mine) {
+ 		*dptr++ = 0;
+diff --git a/net/rds/rdma.c b/net/rds/rdma.c
+index 6f1a50d50d06d..c29c7a59f2053 100644
+--- a/net/rds/rdma.c
++++ b/net/rds/rdma.c
+@@ -301,6 +301,9 @@ static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args,
+ 			kfree(sg);
+ 		}
+ 		ret = PTR_ERR(trans_private);
++		/* Trigger connection so that its ready for the next retry */
++		if (ret == -ENODEV)
++			rds_conn_connect_if_down(cp->cp_conn);
+ 		goto out;
+ 	}
+ 
+diff --git a/net/rds/send.c b/net/rds/send.c
+index 985d0b7713acc..65eeb82cb5de5 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -1314,12 +1314,8 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
+ 
+ 	/* Parse any control messages the user may have included. */
+ 	ret = rds_cmsg_send(rs, rm, msg, &allocated_mr, &vct);
+-	if (ret) {
+-		/* Trigger connection so that its ready for the next retry */
+-		if (ret ==  -EAGAIN)
+-			rds_conn_connect_if_down(conn);
++	if (ret)
+ 		goto out;
+-	}
+ 
+ 	if (rm->rdma.op_active && !conn->c_trans->xmit_rdma) {
+ 		printk_ratelimited(KERN_NOTICE "rdma_op %p conn xmit_rdma %p\n",
+diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
+index 585edcc6814d2..052f1b920e43f 100644
+--- a/security/apparmor/lsm.c
++++ b/security/apparmor/lsm.c
+@@ -1070,11 +1070,10 @@ static struct aa_label *sk_peer_label(struct sock *sk)
+  * Note: for tcp only valid if using ipsec or cipso on lan
+  */
+ static int apparmor_socket_getpeersec_stream(struct socket *sock,
+-					     char __user *optval,
+-					     int __user *optlen,
++					     sockptr_t optval, sockptr_t optlen,
+ 					     unsigned int len)
+ {
+-	char *name;
++	char *name = NULL;
+ 	int slen, error = 0;
+ 	struct aa_label *label;
+ 	struct aa_label *peer;
+@@ -1091,23 +1090,21 @@ static int apparmor_socket_getpeersec_stream(struct socket *sock,
+ 	/* don't include terminating \0 in slen, it breaks some apps */
+ 	if (slen < 0) {
+ 		error = -ENOMEM;
+-	} else {
+-		if (slen > len) {
+-			error = -ERANGE;
+-		} else if (copy_to_user(optval, name, slen)) {
+-			error = -EFAULT;
+-			goto out;
+-		}
+-		if (put_user(slen, optlen))
+-			error = -EFAULT;
+-out:
+-		kfree(name);
+-
++		goto done;
++	}
++	if (slen > len) {
++		error = -ERANGE;
++		goto done_len;
+ 	}
+ 
++	if (copy_to_sockptr(optval, name, slen))
++		error = -EFAULT;
++done_len:
++	if (copy_to_sockptr(optlen, &slen, sizeof(slen)))
++		error = -EFAULT;
+ done:
+ 	end_current_label_crit_section(label);
+-
++	kfree(name);
+ 	return error;
+ }
+ 
+diff --git a/security/security.c b/security/security.c
+index 269c3965393f4..0bbcb100ba8e9 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -2224,17 +2224,40 @@ int security_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ }
+ EXPORT_SYMBOL(security_sock_rcv_skb);
+ 
+-int security_socket_getpeersec_stream(struct socket *sock, char __user *optval,
+-				      int __user *optlen, unsigned len)
++int security_socket_getpeersec_stream(struct socket *sock, sockptr_t optval,
++				      sockptr_t optlen, unsigned int len)
+ {
+-	return call_int_hook(socket_getpeersec_stream, -ENOPROTOOPT, sock,
+-				optval, optlen, len);
++	struct security_hook_list *hp;
++	int rc;
++
++	/*
++	 * Only one module will provide a security context.
++	 */
++	hlist_for_each_entry(hp, &security_hook_heads.socket_getpeersec_stream,
++			     list) {
++		rc = hp->hook.socket_getpeersec_stream(sock, optval, optlen,
++						       len);
++		if (rc != LSM_RET_DEFAULT(socket_getpeersec_stream))
++			return rc;
++	}
++	return LSM_RET_DEFAULT(socket_getpeersec_stream);
+ }
+ 
+ int security_socket_getpeersec_dgram(struct socket *sock, struct sk_buff *skb, u32 *secid)
+ {
+-	return call_int_hook(socket_getpeersec_dgram, -ENOPROTOOPT, sock,
+-			     skb, secid);
++	struct security_hook_list *hp;
++	int rc;
++
++	/*
++	 * Only one module will provide a security context.
++	 */
++	hlist_for_each_entry(hp, &security_hook_heads.socket_getpeersec_dgram,
++			     list) {
++		rc = hp->hook.socket_getpeersec_dgram(sock, skb, secid);
++		if (rc != LSM_RET_DEFAULT(socket_getpeersec_dgram))
++			return rc;
++	}
++	return LSM_RET_DEFAULT(socket_getpeersec_dgram);
+ }
+ EXPORT_SYMBOL(security_socket_getpeersec_dgram);
+ 
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 50d3ddfe15fd1..46c00a68bb4bd 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -5110,11 +5110,12 @@ static int selinux_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ 	return err;
+ }
+ 
+-static int selinux_socket_getpeersec_stream(struct socket *sock, char __user *optval,
+-					    int __user *optlen, unsigned len)
++static int selinux_socket_getpeersec_stream(struct socket *sock,
++					    sockptr_t optval, sockptr_t optlen,
++					    unsigned int len)
+ {
+ 	int err = 0;
+-	char *scontext;
++	char *scontext = NULL;
+ 	u32 scontext_len;
+ 	struct sk_security_struct *sksec = sock->sk->sk_security;
+ 	u32 peer_sid = SECSID_NULL;
+@@ -5130,17 +5131,15 @@ static int selinux_socket_getpeersec_stream(struct socket *sock, char __user *op
+ 				      &scontext_len);
+ 	if (err)
+ 		return err;
+-
+ 	if (scontext_len > len) {
+ 		err = -ERANGE;
+ 		goto out_len;
+ 	}
+ 
+-	if (copy_to_user(optval, scontext, scontext_len))
++	if (copy_to_sockptr(optval, scontext, scontext_len))
+ 		err = -EFAULT;
+-
+ out_len:
+-	if (put_user(scontext_len, optlen))
++	if (copy_to_sockptr(optlen, &scontext_len, sizeof(scontext_len)))
+ 		err = -EFAULT;
+ 	kfree(scontext);
+ 	return err;
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index e1669759403a6..5388f143eecd8 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -4022,12 +4022,12 @@ static int smack_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+  * returns zero on success, an error code otherwise
+  */
+ static int smack_socket_getpeersec_stream(struct socket *sock,
+-					  char __user *optval,
+-					  int __user *optlen, unsigned len)
++					  sockptr_t optval, sockptr_t optlen,
++					  unsigned int len)
+ {
+ 	struct socket_smack *ssp;
+ 	char *rcp = "";
+-	int slen = 1;
++	u32 slen = 1;
+ 	int rc = 0;
+ 
+ 	ssp = sock->sk->sk_security;
+@@ -4035,15 +4035,16 @@ static int smack_socket_getpeersec_stream(struct socket *sock,
+ 		rcp = ssp->smk_packet->smk_known;
+ 		slen = strlen(rcp) + 1;
+ 	}
+-
+-	if (slen > len)
++	if (slen > len) {
+ 		rc = -ERANGE;
+-	else if (copy_to_user(optval, rcp, slen) != 0)
+-		rc = -EFAULT;
++		goto out_len;
++	}
+ 
+-	if (put_user(slen, optlen) != 0)
++	if (copy_to_sockptr(optval, rcp, slen))
++		rc = -EFAULT;
++out_len:
++	if (copy_to_sockptr(optlen, &slen, sizeof(slen)))
+ 		rc = -EFAULT;
+-
+ 	return rc;
+ }
+ 
+diff --git a/tools/testing/selftests/vm/charge_reserved_hugetlb.sh b/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
+index 7536ff2f890a1..d0107f8ae6213 100644
+--- a/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
++++ b/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ set -e
+diff --git a/tools/testing/selftests/vm/map_hugetlb.c b/tools/testing/selftests/vm/map_hugetlb.c
+index 312889edb84ab..c65c55b7a789f 100644
+--- a/tools/testing/selftests/vm/map_hugetlb.c
++++ b/tools/testing/selftests/vm/map_hugetlb.c
+@@ -15,6 +15,7 @@
+ #include <unistd.h>
+ #include <sys/mman.h>
+ #include <fcntl.h>
++#include "vm_util.h"
+ 
+ #define LENGTH (256UL*1024*1024)
+ #define PROTECTION (PROT_READ | PROT_WRITE)
+@@ -70,10 +71,16 @@ int main(int argc, char **argv)
+ {
+ 	void *addr;
+ 	int ret;
++	size_t hugepage_size;
+ 	size_t length = LENGTH;
+ 	int flags = FLAGS;
+ 	int shift = 0;
+ 
++	hugepage_size = default_huge_page_size();
++	/* munmap with fail if the length is not page aligned */
++	if (hugepage_size > length)
++		length = hugepage_size;
++
+ 	if (argc > 1)
+ 		length = atol(argv[1]) << 20;
+ 	if (argc > 2) {
+diff --git a/tools/testing/selftests/vm/write_hugetlb_memory.sh b/tools/testing/selftests/vm/write_hugetlb_memory.sh
+index 70a02301f4c27..3d2d2eb9d6fff 100644
+--- a/tools/testing/selftests/vm/write_hugetlb_memory.sh
++++ b/tools/testing/selftests/vm/write_hugetlb_memory.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ set -e


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-03-27 11:26 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-03-27 11:26 UTC (permalink / raw
  To: gentoo-commits

commit:     30c746016a370b6cf66587a7c7d27a55d8e4092b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 27 11:26:09 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 27 11:26:09 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=30c74601

Linux patch 5.10.214

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1213_linux-5.10.214.patch | 8493 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 8497 insertions(+)

diff --git a/0000_README b/0000_README
index 653d7d47..acd4a2d0 100644
--- a/0000_README
+++ b/0000_README
@@ -895,6 +895,10 @@ Patch:  1212_linux-5.10.213.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.213
 
+Patch:  1213_linux-5.10.214.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.214
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1213_linux-5.10.214.patch b/1213_linux-5.10.214.patch
new file mode 100644
index 00000000..1ea9eaa6
--- /dev/null
+++ b/1213_linux-5.10.214.patch
@@ -0,0 +1,8493 @@
+diff --git a/Makefile b/Makefile
+index b6af62d53d7a6..88d4de93aed79 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 213
++SUBLEVEL = 214
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/arm-realview-pb1176.dts b/arch/arm/boot/dts/arm-realview-pb1176.dts
+index f925782f85604..f0be83eebb09d 100644
+--- a/arch/arm/boot/dts/arm-realview-pb1176.dts
++++ b/arch/arm/boot/dts/arm-realview-pb1176.dts
+@@ -435,7 +435,7 @@ pb1176_serial3: serial@1010f000 {
+ 
+ 		/* Direct-mapped development chip ROM */
+ 		pb1176_rom@10200000 {
+-			compatible = "direct-mapped";
++			compatible = "mtd-rom";
+ 			reg = <0x10200000 0x4000>;
+ 			bank-width = <1>;
+ 		};
+diff --git a/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi b/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
+index ebc0892e37c7a..cbf5a76625e69 100644
+--- a/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
++++ b/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
+@@ -103,8 +103,6 @@ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+ 	phy-mode = "rgmii-id";
+-	phy-reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
+-	phy-reset-duration = <20>;
+ 	phy-supply = <&sw2_reg>;
+ 	status = "okay";
+ 
+@@ -117,17 +115,10 @@ mdio {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		phy_port2: phy@1 {
+-			reg = <1>;
+-		};
+-
+-		phy_port3: phy@2 {
+-			reg = <2>;
+-		};
+-
+ 		switch@10 {
+ 			compatible = "qca,qca8334";
+-			reg = <10>;
++			reg = <0x10>;
++			reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
+ 
+ 			switch_ports: ports {
+ 				#address-cells = <1>;
+@@ -148,15 +139,30 @@ fixed-link {
+ 				eth2: port@2 {
+ 					reg = <2>;
+ 					label = "eth2";
++					phy-mode = "internal";
+ 					phy-handle = <&phy_port2>;
+ 				};
+ 
+ 				eth1: port@3 {
+ 					reg = <3>;
+ 					label = "eth1";
++					phy-mode = "internal";
+ 					phy-handle = <&phy_port3>;
+ 				};
+ 			};
++
++			mdio {
++				#address-cells = <1>;
++				#size-cells = <0>;
++
++				phy_port2: ethernet-phy@1 {
++					reg = <1>;
++				};
++
++				phy_port3: ethernet-phy@2 {
++					reg = <2>;
++				};
++			};
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts b/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts
+index 4c6704e4c57ec..74d5732c412ba 100644
+--- a/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts
++++ b/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts
+@@ -62,6 +62,30 @@ reg_vdd_cpux: vdd-cpux-regulator {
+ 		states = <1100000 0>, <1300000 1>;
+ 	};
+ 
++	reg_vcc_dram: vcc-dram {
++		compatible = "regulator-fixed";
++		regulator-name = "vcc-dram";
++		regulator-min-microvolt = <1500000>;
++		regulator-max-microvolt = <1500000>;
++		regulator-always-on;
++		regulator-boot-on;
++		enable-active-high;
++		gpio = <&r_pio 0 9 GPIO_ACTIVE_HIGH>; /* PL9 */
++		vin-supply = <&reg_vcc5v0>;
++	};
++
++	reg_vcc1v2: vcc1v2 {
++		compatible = "regulator-fixed";
++		regulator-name = "vcc1v2";
++		regulator-min-microvolt = <1200000>;
++		regulator-max-microvolt = <1200000>;
++		regulator-always-on;
++		regulator-boot-on;
++		enable-active-high;
++		gpio = <&r_pio 0 8 GPIO_ACTIVE_HIGH>; /* PL8 */
++		vin-supply = <&reg_vcc5v0>;
++	};
++
+ 	wifi_pwrseq: wifi_pwrseq {
+ 		compatible = "mmc-pwrseq-simple";
+ 		reset-gpios = <&r_pio 0 7 GPIO_ACTIVE_LOW>; /* PL7 */
+diff --git a/arch/arm/crypto/sha256_glue.c b/arch/arm/crypto/sha256_glue.c
+index b8a4f79020cf8..e36b86778468e 100644
+--- a/arch/arm/crypto/sha256_glue.c
++++ b/arch/arm/crypto/sha256_glue.c
+@@ -24,8 +24,8 @@
+ 
+ #include "sha256_glue.h"
+ 
+-asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
+-					unsigned int num_blks);
++asmlinkage void sha256_block_data_order(struct sha256_state *state,
++					const u8 *data, int num_blks);
+ 
+ int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
+ 			     unsigned int len)
+@@ -33,23 +33,20 @@ int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
+ 	/* make sure casting to sha256_block_fn() is safe */
+ 	BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);
+ 
+-	return sha256_base_do_update(desc, data, len,
+-				(sha256_block_fn *)sha256_block_data_order);
++	return sha256_base_do_update(desc, data, len, sha256_block_data_order);
+ }
+ EXPORT_SYMBOL(crypto_sha256_arm_update);
+ 
+ static int crypto_sha256_arm_final(struct shash_desc *desc, u8 *out)
+ {
+-	sha256_base_do_finalize(desc,
+-				(sha256_block_fn *)sha256_block_data_order);
++	sha256_base_do_finalize(desc, sha256_block_data_order);
+ 	return sha256_base_finish(desc, out);
+ }
+ 
+ int crypto_sha256_arm_finup(struct shash_desc *desc, const u8 *data,
+ 			    unsigned int len, u8 *out)
+ {
+-	sha256_base_do_update(desc, data, len,
+-			      (sha256_block_fn *)sha256_block_data_order);
++	sha256_base_do_update(desc, data, len, sha256_block_data_order);
+ 	return crypto_sha256_arm_final(desc, out);
+ }
+ EXPORT_SYMBOL(crypto_sha256_arm_finup);
+diff --git a/arch/arm/crypto/sha512-glue.c b/arch/arm/crypto/sha512-glue.c
+index 8775aa42bbbe8..1a16b98ec1085 100644
+--- a/arch/arm/crypto/sha512-glue.c
++++ b/arch/arm/crypto/sha512-glue.c
+@@ -25,27 +25,25 @@ MODULE_ALIAS_CRYPTO("sha512");
+ MODULE_ALIAS_CRYPTO("sha384-arm");
+ MODULE_ALIAS_CRYPTO("sha512-arm");
+ 
+-asmlinkage void sha512_block_data_order(u64 *state, u8 const *src, int blocks);
++asmlinkage void sha512_block_data_order(struct sha512_state *state,
++					u8 const *src, int blocks);
+ 
+ int sha512_arm_update(struct shash_desc *desc, const u8 *data,
+ 		      unsigned int len)
+ {
+-	return sha512_base_do_update(desc, data, len,
+-		(sha512_block_fn *)sha512_block_data_order);
++	return sha512_base_do_update(desc, data, len, sha512_block_data_order);
+ }
+ 
+ static int sha512_arm_final(struct shash_desc *desc, u8 *out)
+ {
+-	sha512_base_do_finalize(desc,
+-		(sha512_block_fn *)sha512_block_data_order);
++	sha512_base_do_finalize(desc, sha512_block_data_order);
+ 	return sha512_base_finish(desc, out);
+ }
+ 
+ int sha512_arm_finup(struct shash_desc *desc, const u8 *data,
+ 		     unsigned int len, u8 *out)
+ {
+-	sha512_base_do_update(desc, data, len,
+-		(sha512_block_fn *)sha512_block_data_order);
++	sha512_base_do_update(desc, data, len, sha512_block_data_order);
+ 	return sha512_arm_final(desc, out);
+ }
+ 
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index 0f4bcd15d8580..086c3cc7d055c 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -414,14 +414,14 @@ xor11 {
+ 			crypto: crypto@90000 {
+ 				compatible = "inside-secure,safexcel-eip97ies";
+ 				reg = <0x90000 0x20000>;
+-				interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
+-					     <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
++				interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
+ 					     <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
+ 					     <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
+ 					     <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
+-					     <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>;
+-				interrupt-names = "mem", "ring0", "ring1",
+-						  "ring2", "ring3", "eip";
++					     <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
++					     <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
++				interrupt-names = "ring0", "ring1", "ring2",
++						  "ring3", "eip", "mem";
+ 				clocks = <&nb_periph_clk 15>;
+ 			};
+ 
+diff --git a/arch/arm64/boot/dts/marvell/armada-cp11x.dtsi b/arch/arm64/boot/dts/marvell/armada-cp11x.dtsi
+index 9dcf16beabf5d..da83bfdbe8432 100644
+--- a/arch/arm64/boot/dts/marvell/armada-cp11x.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-cp11x.dtsi
+@@ -477,14 +477,14 @@ CP11X_LABEL(sdhci0): sdhci@780000 {
+ 		CP11X_LABEL(crypto): crypto@800000 {
+ 			compatible = "inside-secure,safexcel-eip197b";
+ 			reg = <0x800000 0x200000>;
+-			interrupts = <87 IRQ_TYPE_LEVEL_HIGH>,
+-				<88 IRQ_TYPE_LEVEL_HIGH>,
++			interrupts = <88 IRQ_TYPE_LEVEL_HIGH>,
+ 				<89 IRQ_TYPE_LEVEL_HIGH>,
+ 				<90 IRQ_TYPE_LEVEL_HIGH>,
+ 				<91 IRQ_TYPE_LEVEL_HIGH>,
+-				<92 IRQ_TYPE_LEVEL_HIGH>;
+-			interrupt-names = "mem", "ring0", "ring1",
+-				"ring2", "ring3", "eip";
++				<92 IRQ_TYPE_LEVEL_HIGH>,
++				<87 IRQ_TYPE_LEVEL_HIGH>;
++			interrupt-names = "ring0", "ring1", "ring2", "ring3",
++					  "eip", "mem";
+ 			clock-names = "core", "reg";
+ 			clocks = <&CP11X_LABEL(clk) 1 26>,
+ 				 <&CP11X_LABEL(clk) 1 17>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+index 7e6cffdc5a551..778174a7d649b 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+@@ -71,6 +71,7 @@ red {
+ 
+ 	memory@40000000 {
+ 		reg = <0 0x40000000 0 0x40000000>;
++		device_type = "memory";
+ 	};
+ 
+ 	reg_1p8v: regulator-1p8v {
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
+index 993f033d0bf04..810575de66702 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
+@@ -57,6 +57,7 @@ wps {
+ 
+ 	memory@40000000 {
+ 		reg = <0 0x40000000 0 0x20000000>;
++		device_type = "memory";
+ 	};
+ 
+ 	reg_1p8v: regulator-1p8v {
+diff --git a/arch/mips/include/asm/ptrace.h b/arch/mips/include/asm/ptrace.h
+index 1e76774b36ddf..2849a9b65a055 100644
+--- a/arch/mips/include/asm/ptrace.h
++++ b/arch/mips/include/asm/ptrace.h
+@@ -60,6 +60,7 @@ static inline void instruction_pointer_set(struct pt_regs *regs,
+                                            unsigned long val)
+ {
+ 	regs->cp0_epc = val;
++	regs->cp0_cause &= ~CAUSEF_BD;
+ }
+ 
+ /* Query offset/name of register from its name/offset */
+diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
+index 63e3ecb9da812..8538425cc43e0 100644
+--- a/arch/parisc/kernel/ftrace.c
++++ b/arch/parisc/kernel/ftrace.c
+@@ -81,7 +81,7 @@ void notrace __hot ftrace_function_trampoline(unsigned long parent,
+ #endif
+ }
+ 
+-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
++#if defined(CONFIG_DYNAMIC_FTRACE) && defined(CONFIG_FUNCTION_GRAPH_TRACER)
+ int ftrace_enable_ftrace_graph_caller(void)
+ {
+ 	return 0;
+diff --git a/arch/powerpc/perf/hv-gpci.c b/arch/powerpc/perf/hv-gpci.c
+index 28b770bbc10b4..2a054de80e50b 100644
+--- a/arch/powerpc/perf/hv-gpci.c
++++ b/arch/powerpc/perf/hv-gpci.c
+@@ -164,6 +164,20 @@ static unsigned long single_gpci_request(u32 req, u32 starting_index,
+ 
+ 	ret = plpar_hcall_norets(H_GET_PERF_COUNTER_INFO,
+ 			virt_to_phys(arg), HGPCI_REQ_BUFFER_SIZE);
++
++	/*
++	 * ret value as 'H_PARAMETER' with detail_rc as 'GEN_BUF_TOO_SMALL',
++	 * specifies that the current buffer size cannot accommodate
++	 * all the information and a partial buffer returned.
++	 * Since in this function we are only accessing data for a given starting index,
++	 * we don't need to accommodate whole data and can get required count by
++	 * accessing first entry data.
++	 * Hence hcall fails only incase the ret value is other than H_SUCCESS or
++	 * H_PARAMETER with detail_rc value as GEN_BUF_TOO_SMALL(0x1B).
++	 */
++	if (ret == H_PARAMETER && be32_to_cpu(arg->params.detail_rc) == 0x1B)
++		ret = 0;
++
+ 	if (ret) {
+ 		pr_devel("hcall failed: 0x%lx\n", ret);
+ 		goto out;
+@@ -228,6 +242,7 @@ static int h_gpci_event_init(struct perf_event *event)
+ {
+ 	u64 count;
+ 	u8 length;
++	unsigned long ret;
+ 
+ 	/* Not our event */
+ 	if (event->attr.type != event->pmu->type)
+@@ -258,13 +273,23 @@ static int h_gpci_event_init(struct perf_event *event)
+ 	}
+ 
+ 	/* check if the request works... */
+-	if (single_gpci_request(event_get_request(event),
++	ret = single_gpci_request(event_get_request(event),
+ 				event_get_starting_index(event),
+ 				event_get_secondary_index(event),
+ 				event_get_counter_info_version(event),
+ 				event_get_offset(event),
+ 				length,
+-				&count)) {
++				&count);
++
++	/*
++	 * ret value as H_AUTHORITY implies that partition is not permitted to retrieve
++	 * performance information, and required to set
++	 * "Enable Performance Information Collection" option.
++	 */
++	if (ret == H_AUTHORITY)
++		return -EPERM;
++
++	if (ret) {
+ 		pr_devel("gpci hcall failed\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/arch/powerpc/platforms/embedded6xx/linkstation.c b/arch/powerpc/platforms/embedded6xx/linkstation.c
+index f514d5d28cd4f..3f3821eb4c36b 100644
+--- a/arch/powerpc/platforms/embedded6xx/linkstation.c
++++ b/arch/powerpc/platforms/embedded6xx/linkstation.c
+@@ -97,9 +97,6 @@ static void __init linkstation_init_IRQ(void)
+ 	mpic_init(mpic);
+ }
+ 
+-extern void avr_uart_configure(void);
+-extern void avr_uart_send(const char);
+-
+ static void __noreturn linkstation_restart(char *cmd)
+ {
+ 	local_irq_disable();
+diff --git a/arch/powerpc/platforms/embedded6xx/mpc10x.h b/arch/powerpc/platforms/embedded6xx/mpc10x.h
+index 5ad12023e5628..ebc258fa4858d 100644
+--- a/arch/powerpc/platforms/embedded6xx/mpc10x.h
++++ b/arch/powerpc/platforms/embedded6xx/mpc10x.h
+@@ -156,4 +156,7 @@ int mpc10x_disable_store_gathering(struct pci_controller *hose);
+ /* For MPC107 boards that use the built-in openpic */
+ void mpc10x_set_openpic(void);
+ 
++void avr_uart_configure(void);
++void avr_uart_send(const char c);
++
+ #endif	/* __PPC_KERNEL_MPC10X_H */
+diff --git a/arch/s390/kernel/vtime.c b/arch/s390/kernel/vtime.c
+index 579ec3a8c816f..bd65ff88c5baa 100644
+--- a/arch/s390/kernel/vtime.c
++++ b/arch/s390/kernel/vtime.c
+@@ -214,13 +214,13 @@ void vtime_flush(struct task_struct *tsk)
+ 		virt_timer_expire();
+ 
+ 	steal = S390_lowcore.steal_timer;
+-	avg_steal = S390_lowcore.avg_steal_timer / 2;
++	avg_steal = S390_lowcore.avg_steal_timer;
+ 	if ((s64) steal > 0) {
+ 		S390_lowcore.steal_timer = 0;
+ 		account_steal_time(cputime_to_nsecs(steal));
+ 		avg_steal += steal;
+ 	}
+-	S390_lowcore.avg_steal_timer = avg_steal;
++	S390_lowcore.avg_steal_timer = avg_steal / 2;
+ }
+ 
+ /*
+diff --git a/arch/sparc/kernel/leon_pci_grpci1.c b/arch/sparc/kernel/leon_pci_grpci1.c
+index e6935d0ac1ec9..c32590bdd3120 100644
+--- a/arch/sparc/kernel/leon_pci_grpci1.c
++++ b/arch/sparc/kernel/leon_pci_grpci1.c
+@@ -696,7 +696,7 @@ static int grpci1_of_probe(struct platform_device *ofdev)
+ 	return err;
+ }
+ 
+-static const struct of_device_id grpci1_of_match[] __initconst = {
++static const struct of_device_id grpci1_of_match[] = {
+ 	{
+ 	 .name = "GAISLER_PCIFBRG",
+ 	 },
+diff --git a/arch/sparc/kernel/leon_pci_grpci2.c b/arch/sparc/kernel/leon_pci_grpci2.c
+index ca22f93d90454..dd06abc61657f 100644
+--- a/arch/sparc/kernel/leon_pci_grpci2.c
++++ b/arch/sparc/kernel/leon_pci_grpci2.c
+@@ -887,7 +887,7 @@ static int grpci2_of_probe(struct platform_device *ofdev)
+ 	return err;
+ }
+ 
+-static const struct of_device_id grpci2_of_match[] __initconst = {
++static const struct of_device_id grpci2_of_match[] = {
+ 	{
+ 	 .name = "GAISLER_GRPCI2",
+ 	 },
+diff --git a/arch/x86/include/asm/vsyscall.h b/arch/x86/include/asm/vsyscall.h
+index ab60a71a8dcb9..472f0263dbc61 100644
+--- a/arch/x86/include/asm/vsyscall.h
++++ b/arch/x86/include/asm/vsyscall.h
+@@ -4,6 +4,7 @@
+ 
+ #include <linux/seqlock.h>
+ #include <uapi/asm/vsyscall.h>
++#include <asm/page_types.h>
+ 
+ #ifdef CONFIG_X86_VSYSCALL_EMULATION
+ extern void map_vsyscall(void);
+@@ -24,4 +25,13 @@ static inline bool emulate_vsyscall(unsigned long error_code,
+ }
+ #endif
+ 
++/*
++ * The (legacy) vsyscall page is the long page in the kernel portion
++ * of the address space that has user-accessible permissions.
++ */
++static inline bool is_vsyscall_vaddr(unsigned long vaddr)
++{
++	return unlikely((vaddr & PAGE_MASK) == VSYSCALL_ADDR);
++}
++
+ #endif /* _ASM_X86_VSYSCALL_H */
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 5bea8d93883a2..f0e4ad8595ca7 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -31,6 +31,7 @@
+ #include <asm/special_insns.h>
+ #include <asm/tlb.h>
+ #include <asm/io_bitmap.h>
++#include <asm/text-patching.h>
+ 
+ /*
+  * nop stub, which must not clobber anything *including the stack* to
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 9c1545c376e9b..cdb337cf92bae 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -781,15 +781,6 @@ show_signal_msg(struct pt_regs *regs, unsigned long error_code,
+ 	show_opcodes(regs, loglvl);
+ }
+ 
+-/*
+- * The (legacy) vsyscall page is the long page in the kernel portion
+- * of the address space that has user-accessible permissions.
+- */
+-static bool is_vsyscall_vaddr(unsigned long vaddr)
+-{
+-	return unlikely((vaddr & PAGE_MASK) == VSYSCALL_ADDR);
+-}
+-
+ static void
+ __bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
+ 		       unsigned long address, u32 pkey, int si_code)
+diff --git a/arch/x86/mm/maccess.c b/arch/x86/mm/maccess.c
+index 6993f026adec9..42115ac079cfe 100644
+--- a/arch/x86/mm/maccess.c
++++ b/arch/x86/mm/maccess.c
+@@ -3,6 +3,8 @@
+ #include <linux/uaccess.h>
+ #include <linux/kernel.h>
+ 
++#include <asm/vsyscall.h>
++
+ #ifdef CONFIG_X86_64
+ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
+ {
+@@ -15,6 +17,14 @@ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
+ 	if (vaddr < TASK_SIZE_MAX + PAGE_SIZE)
+ 		return false;
+ 
++	/*
++	 * Reading from the vsyscall page may cause an unhandled fault in
++	 * certain cases.  Though it is at an address above TASK_SIZE_MAX, it is
++	 * usually considered as a user space address.
++	 */
++	if (is_vsyscall_vaddr(vaddr))
++		return false;
++
+ 	/*
+ 	 * Allow everything during early boot before 'x86_virt_bits'
+ 	 * is initialized.  Needed for instruction decoding in early
+diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
+index 1c3a1962cade6..0043fd374a62f 100644
+--- a/arch/x86/tools/relocs.c
++++ b/arch/x86/tools/relocs.c
+@@ -596,6 +596,14 @@ static void print_absolute_relocs(void)
+ 		if (!(sec_applies->shdr.sh_flags & SHF_ALLOC)) {
+ 			continue;
+ 		}
++		/*
++		 * Do not perform relocations in .notes section; any
++		 * values there are meant for pre-boot consumption (e.g.
++		 * startup_xen).
++		 */
++		if (sec_applies->shdr.sh_type == SHT_NOTE) {
++			continue;
++		}
+ 		sh_symtab  = sec_symtab->symtab;
+ 		sym_strtab = sec_symtab->link->strtab;
+ 		for (j = 0; j < sec->shdr.sh_size/sizeof(Elf_Rel); j++) {
+diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
+index cdec892b28e2e..a641e0d452194 100644
+--- a/arch/x86/xen/smp.c
++++ b/arch/x86/xen/smp.c
+@@ -65,6 +65,8 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	char *resched_name, *callfunc_name, *debug_name;
+ 
+ 	resched_name = kasprintf(GFP_KERNEL, "resched%d", cpu);
++	if (!resched_name)
++		goto fail_mem;
+ 	per_cpu(xen_resched_irq, cpu).name = resched_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_RESCHEDULE_VECTOR,
+ 				    cpu,
+@@ -77,6 +79,8 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	per_cpu(xen_resched_irq, cpu).irq = rc;
+ 
+ 	callfunc_name = kasprintf(GFP_KERNEL, "callfunc%d", cpu);
++	if (!callfunc_name)
++		goto fail_mem;
+ 	per_cpu(xen_callfunc_irq, cpu).name = callfunc_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_VECTOR,
+ 				    cpu,
+@@ -90,6 +94,9 @@ int xen_smp_intr_init(unsigned int cpu)
+ 
+ 	if (!xen_fifo_events) {
+ 		debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
++		if (!debug_name)
++			goto fail_mem;
++
+ 		per_cpu(xen_debug_irq, cpu).name = debug_name;
+ 		rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu,
+ 					     xen_debug_interrupt,
+@@ -101,6 +108,9 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	}
+ 
+ 	callfunc_name = kasprintf(GFP_KERNEL, "callfuncsingle%d", cpu);
++	if (!callfunc_name)
++		goto fail_mem;
++
+ 	per_cpu(xen_callfuncsingle_irq, cpu).name = callfunc_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_SINGLE_VECTOR,
+ 				    cpu,
+@@ -114,6 +124,8 @@ int xen_smp_intr_init(unsigned int cpu)
+ 
+ 	return 0;
+ 
++ fail_mem:
++	rc = -ENOMEM;
+  fail:
+ 	xen_smp_intr_free(cpu);
+ 	return rc;
+diff --git a/block/ioctl.c b/block/ioctl.c
+index e7eed7dadb5cf..24f8042f12b60 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -405,6 +405,11 @@ static int blkdev_roset(struct block_device *bdev, fmode_t mode,
+ 		return ret;
+ 	if (get_user(n, (int __user *)arg))
+ 		return -EFAULT;
++	if (bdev->bd_disk->fops->set_read_only) {
++		ret = bdev->bd_disk->fops->set_read_only(bdev, n);
++		if (ret)
++			return ret;
++	}
+ 	set_device_ro(bdev, n);
+ 	return 0;
+ }
+diff --git a/block/opal_proto.h b/block/opal_proto.h
+index b486b3ec7dc41..a50191bddbc26 100644
+--- a/block/opal_proto.h
++++ b/block/opal_proto.h
+@@ -66,6 +66,7 @@ enum opal_response_token {
+ #define SHORT_ATOM_BYTE  0xBF
+ #define MEDIUM_ATOM_BYTE 0xDF
+ #define LONG_ATOM_BYTE   0xE3
++#define EMPTY_ATOM_BYTE  0xFF
+ 
+ #define OPAL_INVAL_PARAM 12
+ #define OPAL_MANUFACTURED_INACTIVE 0x08
+diff --git a/block/sed-opal.c b/block/sed-opal.c
+index 0ac5a4f3f2261..00e4d23ac49e7 100644
+--- a/block/sed-opal.c
++++ b/block/sed-opal.c
+@@ -895,16 +895,20 @@ static int response_parse(const u8 *buf, size_t length,
+ 			token_length = response_parse_medium(iter, pos);
+ 		else if (pos[0] <= LONG_ATOM_BYTE) /* long atom */
+ 			token_length = response_parse_long(iter, pos);
++		else if (pos[0] == EMPTY_ATOM_BYTE) /* empty atom */
++			token_length = 1;
+ 		else /* TOKEN */
+ 			token_length = response_parse_token(iter, pos);
+ 
+ 		if (token_length < 0)
+ 			return token_length;
+ 
++		if (pos[0] != EMPTY_ATOM_BYTE)
++			num_entries++;
++
+ 		pos += token_length;
+ 		total -= token_length;
+ 		iter++;
+-		num_entries++;
+ 	}
+ 
+ 	resp->num = num_entries;
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 59781e765e0e2..3deeabb273940 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -1427,6 +1427,8 @@ int acpi_processor_power_exit(struct acpi_processor *pr)
+ 		acpi_processor_registered--;
+ 		if (acpi_processor_registered == 0)
+ 			cpuidle_unregister_driver(&acpi_idle_driver);
++
++		kfree(dev);
+ 	}
+ 
+ 	pr->flags.power_setup_done = 0;
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 67a5ee2fedfd3..f17f48bc13bc0 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -321,18 +321,14 @@ static int acpi_scan_device_check(struct acpi_device *adev)
+ 		 * again).
+ 		 */
+ 		if (adev->handler) {
+-			dev_warn(&adev->dev, "Already enumerated\n");
+-			return -EALREADY;
++			dev_dbg(&adev->dev, "Already enumerated\n");
++			return 0;
+ 		}
+ 		error = acpi_bus_scan(adev->handle);
+ 		if (error) {
+ 			dev_warn(&adev->dev, "Namespace scan failure\n");
+ 			return error;
+ 		}
+-		if (!adev->handler) {
+-			dev_warn(&adev->dev, "Enumeration failure\n");
+-			error = -ENODEV;
+-		}
+ 	} else {
+ 		error = acpi_scan_device_not_present(adev);
+ 	}
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 2dfd6aa600450..a3c4086603a60 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1796,7 +1796,7 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+ 				 map->format.reg_bytes +
+ 				 map->format.pad_bytes +
+ 				 val_len);
+-	else if (map->bus->gather_write)
++	else if (map->bus && map->bus->gather_write)
+ 		ret = map->bus->gather_write(map->bus_context, map->work_buf,
+ 					     map->format.reg_bytes +
+ 					     map->format.pad_bytes,
+diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
+index 313f0b946fe2b..c805909c8e775 100644
+--- a/drivers/block/aoe/aoecmd.c
++++ b/drivers/block/aoe/aoecmd.c
+@@ -420,13 +420,16 @@ aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff_head *qu
+ 	rcu_read_lock();
+ 	for_each_netdev_rcu(&init_net, ifp) {
+ 		dev_hold(ifp);
+-		if (!is_aoe_netif(ifp))
+-			goto cont;
++		if (!is_aoe_netif(ifp)) {
++			dev_put(ifp);
++			continue;
++		}
+ 
+ 		skb = new_skb(sizeof *h + sizeof *ch);
+ 		if (skb == NULL) {
+ 			printk(KERN_INFO "aoe: skb alloc failure\n");
+-			goto cont;
++			dev_put(ifp);
++			continue;
+ 		}
+ 		skb_put(skb, sizeof *h + sizeof *ch);
+ 		skb->dev = ifp;
+@@ -441,9 +444,6 @@ aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff_head *qu
+ 		h->major = cpu_to_be16(aoemajor);
+ 		h->minor = aoeminor;
+ 		h->cmd = AOECMD_CFG;
+-
+-cont:
+-		dev_put(ifp);
+ 	}
+ 	rcu_read_unlock();
+ }
+diff --git a/drivers/block/aoe/aoenet.c b/drivers/block/aoe/aoenet.c
+index 63773a90581dd..1e66c7a188a12 100644
+--- a/drivers/block/aoe/aoenet.c
++++ b/drivers/block/aoe/aoenet.c
+@@ -64,6 +64,7 @@ tx(int id) __must_hold(&txlock)
+ 			pr_warn("aoe: packet could not be sent on %s.  %s\n",
+ 				ifp ? ifp->name : "netif",
+ 				"consider increasing tx_queue_len");
++		dev_put(ifp);
+ 		spin_lock_irq(&txlock);
+ 	}
+ 	return 0;
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index e0f805ca0e727..d6e3edb404748 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -2339,6 +2339,12 @@ static int nbd_genl_status(struct sk_buff *skb, struct genl_info *info)
+ 	}
+ 
+ 	dev_list = nla_nest_start_noflag(reply, NBD_ATTR_DEVICE_LIST);
++	if (!dev_list) {
++		nlmsg_free(reply);
++		ret = -EMSGSIZE;
++		goto out;
++	}
++
+ 	if (index == -1) {
+ 		ret = idr_for_each(&nbd_index_idr, &status_cb, reply);
+ 		if (ret) {
+diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
+index 0c262c2aeaf2f..01f2349dbfaed 100644
+--- a/drivers/bus/Kconfig
++++ b/drivers/bus/Kconfig
+@@ -176,11 +176,12 @@ config SUNXI_RSB
+ 
+ config TEGRA_ACONNECT
+ 	tristate "Tegra ACONNECT Bus Driver"
+-	depends on ARCH_TEGRA_210_SOC
++	depends on ARCH_TEGRA
+ 	depends on OF && PM
+ 	help
+ 	  Driver for the Tegra ACONNECT bus which is used to interface with
+-	  the devices inside the Audio Processing Engine (APE) for Tegra210.
++	  the devices inside the Audio Processing Engine (APE) for
++	  Tegra210 and later.
+ 
+ config TEGRA_GMI
+ 	tristate "Tegra Generic Memory Interface bus driver"
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 62572d59e7e38..aa2f1f8aa2994 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -425,6 +425,9 @@ static struct clk_core *clk_core_get(struct clk_core *core, u8 p_index)
+ 	if (IS_ERR(hw))
+ 		return ERR_CAST(hw);
+ 
++	if (!hw)
++		return NULL;
++
+ 	return hw->core;
+ }
+ 
+diff --git a/drivers/clk/hisilicon/clk-hi3519.c b/drivers/clk/hisilicon/clk-hi3519.c
+index ad0c7f350cf03..60d8a27a90824 100644
+--- a/drivers/clk/hisilicon/clk-hi3519.c
++++ b/drivers/clk/hisilicon/clk-hi3519.c
+@@ -130,7 +130,7 @@ static void hi3519_clk_unregister(struct platform_device *pdev)
+ 	of_clk_del_provider(pdev->dev.of_node);
+ 
+ 	hisi_clk_unregister_gate(hi3519_gate_clks,
+-				ARRAY_SIZE(hi3519_mux_clks),
++				ARRAY_SIZE(hi3519_gate_clks),
+ 				crg->clk_data);
+ 	hisi_clk_unregister_mux(hi3519_mux_clks,
+ 				ARRAY_SIZE(hi3519_mux_clks),
+diff --git a/drivers/clk/qcom/dispcc-sdm845.c b/drivers/clk/qcom/dispcc-sdm845.c
+index 5c932cd17b140..8cd8174ac9aa7 100644
+--- a/drivers/clk/qcom/dispcc-sdm845.c
++++ b/drivers/clk/qcom/dispcc-sdm845.c
+@@ -768,6 +768,8 @@ static struct clk_branch disp_cc_mdss_vsync_clk = {
+ 
+ static struct gdsc mdss_gdsc = {
+ 	.gdscr = 0x3000,
++	.en_few_wait_val = 0x6,
++	.en_rest_wait_val = 0x5,
+ 	.pd = {
+ 		.name = "mdss_gdsc",
+ 	},
+diff --git a/drivers/clk/qcom/reset.c b/drivers/clk/qcom/reset.c
+index e45e32804d2c7..d96c96a9089f4 100644
+--- a/drivers/clk/qcom/reset.c
++++ b/drivers/clk/qcom/reset.c
+@@ -22,8 +22,8 @@ static int qcom_reset(struct reset_controller_dev *rcdev, unsigned long id)
+ 	return 0;
+ }
+ 
+-static int
+-qcom_reset_assert(struct reset_controller_dev *rcdev, unsigned long id)
++static int qcom_reset_set_assert(struct reset_controller_dev *rcdev,
++				 unsigned long id, bool assert)
+ {
+ 	struct qcom_reset_controller *rst;
+ 	const struct qcom_reset_map *map;
+@@ -33,21 +33,22 @@ qcom_reset_assert(struct reset_controller_dev *rcdev, unsigned long id)
+ 	map = &rst->reset_map[id];
+ 	mask = map->bitmask ? map->bitmask : BIT(map->bit);
+ 
+-	return regmap_update_bits(rst->regmap, map->reg, mask, mask);
++	regmap_update_bits(rst->regmap, map->reg, mask, assert ? mask : 0);
++
++	/* Read back the register to ensure write completion, ignore the value */
++	regmap_read(rst->regmap, map->reg, &mask);
++
++	return 0;
+ }
+ 
+-static int
+-qcom_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id)
++static int qcom_reset_assert(struct reset_controller_dev *rcdev, unsigned long id)
+ {
+-	struct qcom_reset_controller *rst;
+-	const struct qcom_reset_map *map;
+-	u32 mask;
+-
+-	rst = to_qcom_reset_controller(rcdev);
+-	map = &rst->reset_map[id];
+-	mask = map->bitmask ? map->bitmask : BIT(map->bit);
++	return qcom_reset_set_assert(rcdev, id, true);
++}
+ 
+-	return regmap_update_bits(rst->regmap, map->reg, mask, 0);
++static int qcom_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id)
++{
++	return qcom_reset_set_assert(rcdev, id, false);
+ }
+ 
+ const struct reset_control_ops qcom_reset_ops = {
+diff --git a/drivers/cpufreq/brcmstb-avs-cpufreq.c b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+index f644c5e325fb2..38ec0fedb247f 100644
+--- a/drivers/cpufreq/brcmstb-avs-cpufreq.c
++++ b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+@@ -481,6 +481,8 @@ static bool brcm_avs_is_firmware_loaded(struct private_data *priv)
+ static unsigned int brcm_avs_cpufreq_get(unsigned int cpu)
+ {
+ 	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
++	if (!policy)
++		return 0;
+ 	struct private_data *priv = policy->driver_data;
+ 
+ 	cpufreq_cpu_put(policy);
+diff --git a/drivers/crypto/xilinx/zynqmp-aes-gcm.c b/drivers/crypto/xilinx/zynqmp-aes-gcm.c
+index bf1f421e05f25..74bd3eb63734d 100644
+--- a/drivers/crypto/xilinx/zynqmp-aes-gcm.c
++++ b/drivers/crypto/xilinx/zynqmp-aes-gcm.c
+@@ -231,7 +231,10 @@ static int zynqmp_handle_aes_req(struct crypto_engine *engine,
+ 		err = zynqmp_aes_aead_cipher(areq);
+ 	}
+ 
++	local_bh_disable();
+ 	crypto_finalize_aead_request(engine, areq, err);
++	local_bh_enable();
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
+index 7e1bd79fbee8f..02b98f979479a 100644
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -614,16 +614,16 @@ config TEGRA20_APB_DMA
+ 
+ config TEGRA210_ADMA
+ 	tristate "NVIDIA Tegra210 ADMA support"
+-	depends on (ARCH_TEGRA_210_SOC || COMPILE_TEST)
++	depends on (ARCH_TEGRA || COMPILE_TEST)
+ 	select DMA_ENGINE
+ 	select DMA_VIRTUAL_CHANNELS
+ 	help
+-	  Support for the NVIDIA Tegra210 ADMA controller driver. The
+-	  DMA controller has multiple DMA channels and is used to service
+-	  various audio clients in the Tegra210 audio processing engine
+-	  (APE). This DMA controller transfers data from memory to
+-	  peripheral and vice versa. It does not support memory to
+-	  memory data transfer.
++	  Support for the NVIDIA Tegra210/Tegra186/Tegra194/Tegra234 ADMA
++	  controller driver. The DMA controller has multiple DMA channels
++	  and is used to service various audio clients in the Tegra210
++	  audio processing engine (APE). This DMA controller transfers
++	  data from memory to peripheral and vice versa. It does not
++	  support memory to memory data transfer.
+ 
+ config TIMB_DMA
+ 	tristate "Timberdale FPGA DMA support"
+diff --git a/drivers/firewire/core-card.c b/drivers/firewire/core-card.c
+index be195ba834632..d446a72629414 100644
+--- a/drivers/firewire/core-card.c
++++ b/drivers/firewire/core-card.c
+@@ -500,7 +500,19 @@ static void bm_work(struct work_struct *work)
+ 		fw_notice(card, "phy config: new root=%x, gap_count=%d\n",
+ 			  new_root_id, gap_count);
+ 		fw_send_phy_config(card, new_root_id, generation, gap_count);
+-		reset_bus(card, true);
++		/*
++		 * Where possible, use a short bus reset to minimize
++		 * disruption to isochronous transfers. But in the event
++		 * of a gap count inconsistency, use a long bus reset.
++		 *
++		 * As noted in 1394a 8.4.6.2, nodes on a mixed 1394/1394a bus
++		 * may set different gap counts after a bus reset. On a mixed
++		 * 1394/1394a bus, a short bus reset can get doubled. Some
++		 * nodes may treat the double reset as one bus reset and others
++		 * may treat it as two, causing a gap count inconsistency
++		 * again. Using a long bus reset prevents this.
++		 */
++		reset_bus(card, card->gap_count != 0);
+ 		/* Will allocate broadcast channel after the reset. */
+ 		goto out;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/atom.c b/drivers/gpu/drm/amd/amdgpu/atom.c
+index 4cfc786699c7f..c1841fa873f56 100644
+--- a/drivers/gpu/drm/amd/amdgpu/atom.c
++++ b/drivers/gpu/drm/amd/amdgpu/atom.c
+@@ -310,7 +310,7 @@ static uint32_t atom_get_src_int(atom_exec_context *ctx, uint8_t attr,
+ 				DEBUG("IMM 0x%02X\n", val);
+ 			return val;
+ 		}
+-		return 0;
++		break;
+ 	case ATOM_ARG_PLL:
+ 		idx = U8(*ptr);
+ 		(*ptr)++;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index e43f82bcb231a..32dbd2a270887 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -1179,7 +1179,7 @@ static ssize_t dp_dsc_clock_en_read(struct file *f, char __user *buf,
+ 	const uint32_t rd_buf_size = 10;
+ 	struct pipe_ctx *pipe_ctx;
+ 	ssize_t result = 0;
+-	int i, r, str_len = 30;
++	int i, r, str_len = 10;
+ 
+ 	rd_buf = kcalloc(rd_buf_size, sizeof(char), GFP_KERNEL);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 1c669f115dd80..8cf6e307ae36e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -1669,6 +1669,9 @@ bool dcn10_set_output_transfer_func(struct dc *dc, struct pipe_ctx *pipe_ctx,
+ {
+ 	struct dpp *dpp = pipe_ctx->plane_res.dpp;
+ 
++	if (!stream)
++		return false;
++
+ 	if (dpp == NULL)
+ 		return false;
+ 
+@@ -1691,8 +1694,8 @@ bool dcn10_set_output_transfer_func(struct dc *dc, struct pipe_ctx *pipe_ctx,
+ 	} else
+ 		dpp->funcs->dpp_program_regamma_pwl(dpp, NULL, OPP_REGAMMA_BYPASS);
+ 
+-	if (stream != NULL && stream->ctx != NULL &&
+-			stream->out_transfer_func != NULL) {
++	if (stream->ctx &&
++	    stream->out_transfer_func) {
+ 		log_tf(stream->ctx,
+ 				stream->out_transfer_func,
+ 				dpp->regamma_params.hw_points_num);
+diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
+index 11223fe348dfe..894175f2ed492 100644
+--- a/drivers/gpu/drm/lima/lima_gem.c
++++ b/drivers/gpu/drm/lima/lima_gem.c
+@@ -74,29 +74,34 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
+ 	} else {
+ 		bo->base.sgt = kmalloc(sizeof(*bo->base.sgt), GFP_KERNEL);
+ 		if (!bo->base.sgt) {
+-			sg_free_table(&sgt);
+-			return -ENOMEM;
++			ret = -ENOMEM;
++			goto err_out0;
+ 		}
+ 	}
+ 
+ 	ret = dma_map_sgtable(dev, &sgt, DMA_BIDIRECTIONAL, 0);
+-	if (ret) {
+-		sg_free_table(&sgt);
+-		kfree(bo->base.sgt);
+-		bo->base.sgt = NULL;
+-		return ret;
+-	}
++	if (ret)
++		goto err_out1;
+ 
+ 	*bo->base.sgt = sgt;
+ 
+ 	if (vm) {
+ 		ret = lima_vm_map_bo(vm, bo, old_size >> PAGE_SHIFT);
+ 		if (ret)
+-			return ret;
++			goto err_out2;
+ 	}
+ 
+ 	bo->heap_size = new_size;
+ 	return 0;
++
++err_out2:
++	dma_unmap_sgtable(dev, &sgt, DMA_BIDIRECTIONAL, 0);
++err_out1:
++	kfree(bo->base.sgt);
++	bo->base.sgt = NULL;
++err_out0:
++	sg_free_table(&sgt);
++	return ret;
+ }
+ 
+ int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file,
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index 1eaf513166a1a..d08827803a32f 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -84,11 +84,13 @@ static void mtk_drm_crtc_finish_page_flip(struct mtk_drm_crtc *mtk_crtc)
+ 	struct drm_crtc *crtc = &mtk_crtc->base;
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&crtc->dev->event_lock, flags);
+-	drm_crtc_send_vblank_event(crtc, mtk_crtc->event);
+-	drm_crtc_vblank_put(crtc);
+-	mtk_crtc->event = NULL;
+-	spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
++	if (mtk_crtc->event) {
++		spin_lock_irqsave(&crtc->dev->event_lock, flags);
++		drm_crtc_send_vblank_event(crtc, mtk_crtc->event);
++		drm_crtc_vblank_put(crtc);
++		mtk_crtc->event = NULL;
++		spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
++	}
+ }
+ 
+ static void mtk_drm_finish_page_flip(struct mtk_drm_crtc *mtk_crtc)
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index a6e71b7b69b83..17d45f06cedf3 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -68,8 +68,8 @@
+ #define DSI_PS_WC			0x3fff
+ #define DSI_PS_SEL			(3 << 16)
+ #define PACKED_PS_16BIT_RGB565		(0 << 16)
+-#define LOOSELY_PS_18BIT_RGB666		(1 << 16)
+-#define PACKED_PS_18BIT_RGB666		(2 << 16)
++#define PACKED_PS_18BIT_RGB666		(1 << 16)
++#define LOOSELY_PS_24BIT_RGB666		(2 << 16)
+ #define PACKED_PS_24BIT_RGB888		(3 << 16)
+ 
+ #define DSI_VSA_NL		0x20
+@@ -365,10 +365,10 @@ static void mtk_dsi_ps_control_vact(struct mtk_dsi *dsi)
+ 		ps_bpp_mode |= PACKED_PS_24BIT_RGB888;
+ 		break;
+ 	case MIPI_DSI_FMT_RGB666:
+-		ps_bpp_mode |= PACKED_PS_18BIT_RGB666;
++		ps_bpp_mode |= LOOSELY_PS_24BIT_RGB666;
+ 		break;
+ 	case MIPI_DSI_FMT_RGB666_PACKED:
+-		ps_bpp_mode |= LOOSELY_PS_18BIT_RGB666;
++		ps_bpp_mode |= PACKED_PS_18BIT_RGB666;
+ 		break;
+ 	case MIPI_DSI_FMT_RGB565:
+ 		ps_bpp_mode |= PACKED_PS_16BIT_RGB565;
+@@ -419,7 +419,7 @@ static void mtk_dsi_ps_control(struct mtk_dsi *dsi)
+ 		dsi_tmp_buf_bpp = 3;
+ 		break;
+ 	case MIPI_DSI_FMT_RGB666:
+-		tmp_reg = LOOSELY_PS_18BIT_RGB666;
++		tmp_reg = LOOSELY_PS_24BIT_RGB666;
+ 		dsi_tmp_buf_bpp = 3;
+ 		break;
+ 	case MIPI_DSI_FMT_RGB666_PACKED:
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+index 805e059b50b71..33880f66625e6 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+@@ -265,12 +265,14 @@ static void dpu_encoder_phys_vid_setup_timing_engine(
+ 		mode.htotal >>= 1;
+ 		mode.hsync_start >>= 1;
+ 		mode.hsync_end >>= 1;
++		mode.hskew >>= 1;
+ 
+ 		DPU_DEBUG_VIDENC(phys_enc,
+-			"split_role %d, halve horizontal %d %d %d %d\n",
++			"split_role %d, halve horizontal %d %d %d %d %d\n",
+ 			phys_enc->split_role,
+ 			mode.hdisplay, mode.htotal,
+-			mode.hsync_start, mode.hsync_end);
++			mode.hsync_start, mode.hsync_end,
++			mode.hskew);
+ 	}
+ 
+ 	drm_mode_to_intf_timing_params(phys_enc, &mode, &timing_params);
+diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
+index 02feb0801fd30..50c7430f8cd33 100644
+--- a/drivers/gpu/drm/radeon/ni.c
++++ b/drivers/gpu/drm/radeon/ni.c
+@@ -826,7 +826,7 @@ int ni_init_microcode(struct radeon_device *rdev)
+ 			err = 0;
+ 		} else if (rdev->smc_fw->size != smc_req_size) {
+ 			pr_err("ni_mc: Bogus length %zu in firmware \"%s\"\n",
+-			       rdev->mc_fw->size, fw_name);
++			       rdev->smc_fw->size, fw_name);
+ 			err = -EINVAL;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/rockchip/inno_hdmi.c b/drivers/gpu/drm/rockchip/inno_hdmi.c
+index 78120da5e63aa..27540d308ccb9 100644
+--- a/drivers/gpu/drm/rockchip/inno_hdmi.c
++++ b/drivers/gpu/drm/rockchip/inno_hdmi.c
+@@ -402,7 +402,7 @@ static int inno_hdmi_config_video_timing(struct inno_hdmi *hdmi,
+ 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_HBLANK_L, value & 0xFF);
+ 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_HBLANK_H, (value >> 8) & 0xFF);
+ 
+-	value = mode->hsync_start - mode->hdisplay;
++	value = mode->htotal - mode->hsync_start;
+ 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_HDELAY_L, value & 0xFF);
+ 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_HDELAY_H, (value >> 8) & 0xFF);
+ 
+@@ -417,7 +417,7 @@ static int inno_hdmi_config_video_timing(struct inno_hdmi *hdmi,
+ 	value = mode->vtotal - mode->vdisplay;
+ 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_VBLANK, value & 0xFF);
+ 
+-	value = mode->vsync_start - mode->vdisplay;
++	value = mode->vtotal - mode->vsync_start;
+ 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_VDELAY, value & 0xFF);
+ 
+ 	value = mode->vsync_end - mode->vsync_start;
+diff --git a/drivers/gpu/drm/rockchip/rockchip_lvds.c b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+index e2487937c4e3d..96c13c182809e 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_lvds.c
++++ b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+@@ -572,8 +572,7 @@ static int rockchip_lvds_bind(struct device *dev, struct device *master,
+ 		ret = -EINVAL;
+ 		goto err_put_port;
+ 	} else if (ret) {
+-		DRM_DEV_ERROR(dev, "failed to find panel and bridge node\n");
+-		ret = -EPROBE_DEFER;
++		dev_err_probe(dev, ret, "failed to find panel and bridge node\n");
+ 		goto err_put_port;
+ 	}
+ 	if (lvds->panel)
+diff --git a/drivers/gpu/drm/tegra/dsi.c b/drivers/gpu/drm/tegra/dsi.c
+index de1333dc0d867..7bb26655cb3cc 100644
+--- a/drivers/gpu/drm/tegra/dsi.c
++++ b/drivers/gpu/drm/tegra/dsi.c
+@@ -1534,9 +1534,11 @@ static int tegra_dsi_ganged_probe(struct tegra_dsi *dsi)
+ 	np = of_parse_phandle(dsi->dev->of_node, "nvidia,ganged-mode", 0);
+ 	if (np) {
+ 		struct platform_device *gangster = of_find_device_by_node(np);
++		of_node_put(np);
++		if (!gangster)
++			return -EPROBE_DEFER;
+ 
+ 		dsi->slave = platform_get_drvdata(gangster);
+-		of_node_put(np);
+ 
+ 		if (!dsi->slave) {
+ 			put_device(&gangster->dev);
+@@ -1584,48 +1586,58 @@ static int tegra_dsi_probe(struct platform_device *pdev)
+ 
+ 	if (!pdev->dev.pm_domain) {
+ 		dsi->rst = devm_reset_control_get(&pdev->dev, "dsi");
+-		if (IS_ERR(dsi->rst))
+-			return PTR_ERR(dsi->rst);
++		if (IS_ERR(dsi->rst)) {
++			err = PTR_ERR(dsi->rst);
++			goto remove;
++		}
+ 	}
+ 
+ 	dsi->clk = devm_clk_get(&pdev->dev, NULL);
+ 	if (IS_ERR(dsi->clk)) {
+-		dev_err(&pdev->dev, "cannot get DSI clock\n");
+-		return PTR_ERR(dsi->clk);
++		err = dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk),
++				    "cannot get DSI clock\n");
++		goto remove;
+ 	}
+ 
+ 	dsi->clk_lp = devm_clk_get(&pdev->dev, "lp");
+ 	if (IS_ERR(dsi->clk_lp)) {
+-		dev_err(&pdev->dev, "cannot get low-power clock\n");
+-		return PTR_ERR(dsi->clk_lp);
++		err = dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk_lp),
++				    "cannot get low-power clock\n");
++		goto remove;
+ 	}
+ 
+ 	dsi->clk_parent = devm_clk_get(&pdev->dev, "parent");
+ 	if (IS_ERR(dsi->clk_parent)) {
+-		dev_err(&pdev->dev, "cannot get parent clock\n");
+-		return PTR_ERR(dsi->clk_parent);
++		err = dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk_parent),
++				    "cannot get parent clock\n");
++		goto remove;
+ 	}
+ 
+ 	dsi->vdd = devm_regulator_get(&pdev->dev, "avdd-dsi-csi");
+ 	if (IS_ERR(dsi->vdd)) {
+-		dev_err(&pdev->dev, "cannot get VDD supply\n");
+-		return PTR_ERR(dsi->vdd);
++		err = dev_err_probe(&pdev->dev, PTR_ERR(dsi->vdd),
++				    "cannot get VDD supply\n");
++		goto remove;
+ 	}
+ 
+ 	err = tegra_dsi_setup_clocks(dsi);
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "cannot setup clocks\n");
+-		return err;
++		goto remove;
+ 	}
+ 
+ 	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	dsi->regs = devm_ioremap_resource(&pdev->dev, regs);
+-	if (IS_ERR(dsi->regs))
+-		return PTR_ERR(dsi->regs);
++	if (IS_ERR(dsi->regs)) {
++		err = PTR_ERR(dsi->regs);
++		goto remove;
++	}
+ 
+ 	dsi->mipi = tegra_mipi_request(&pdev->dev, pdev->dev.of_node);
+-	if (IS_ERR(dsi->mipi))
+-		return PTR_ERR(dsi->mipi);
++	if (IS_ERR(dsi->mipi)) {
++		err = PTR_ERR(dsi->mipi);
++		goto remove;
++	}
+ 
+ 	dsi->host.ops = &tegra_dsi_host_ops;
+ 	dsi->host.dev = &pdev->dev;
+@@ -1653,9 +1665,12 @@ static int tegra_dsi_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ unregister:
++	pm_runtime_disable(&pdev->dev);
+ 	mipi_dsi_host_unregister(&dsi->host);
+ mipi_free:
+ 	tegra_mipi_free(dsi->mipi);
++remove:
++	tegra_output_remove(&dsi->output);
+ 	return err;
+ }
+ 
+diff --git a/drivers/gpu/drm/tegra/fb.c b/drivers/gpu/drm/tegra/fb.c
+index 01939c57fc74d..2040dbfed7e21 100644
+--- a/drivers/gpu/drm/tegra/fb.c
++++ b/drivers/gpu/drm/tegra/fb.c
+@@ -155,6 +155,7 @@ struct drm_framebuffer *tegra_fb_create(struct drm_device *drm,
+ 
+ 		if (gem->size < size) {
+ 			err = -EINVAL;
++			drm_gem_object_put(gem);
+ 			goto unreference;
+ 		}
+ 
+diff --git a/drivers/gpu/drm/tegra/output.c b/drivers/gpu/drm/tegra/output.c
+index 47d26b5d99456..7ccd010a821b7 100644
+--- a/drivers/gpu/drm/tegra/output.c
++++ b/drivers/gpu/drm/tegra/output.c
+@@ -139,8 +139,10 @@ int tegra_output_probe(struct tegra_output *output)
+ 						       GPIOD_IN,
+ 						       "HDMI hotplug detect");
+ 	if (IS_ERR(output->hpd_gpio)) {
+-		if (PTR_ERR(output->hpd_gpio) != -ENOENT)
+-			return PTR_ERR(output->hpd_gpio);
++		if (PTR_ERR(output->hpd_gpio) != -ENOENT) {
++			err = PTR_ERR(output->hpd_gpio);
++			goto put_i2c;
++		}
+ 
+ 		output->hpd_gpio = NULL;
+ 	}
+@@ -149,7 +151,7 @@ int tegra_output_probe(struct tegra_output *output)
+ 		err = gpiod_to_irq(output->hpd_gpio);
+ 		if (err < 0) {
+ 			dev_err(output->dev, "gpiod_to_irq(): %d\n", err);
+-			return err;
++			goto put_i2c;
+ 		}
+ 
+ 		output->hpd_irq = err;
+@@ -162,7 +164,7 @@ int tegra_output_probe(struct tegra_output *output)
+ 		if (err < 0) {
+ 			dev_err(output->dev, "failed to request IRQ#%u: %d\n",
+ 				output->hpd_irq, err);
+-			return err;
++			goto put_i2c;
+ 		}
+ 
+ 		output->connector.polled = DRM_CONNECTOR_POLL_HPD;
+@@ -176,6 +178,12 @@ int tegra_output_probe(struct tegra_output *output)
+ 	}
+ 
+ 	return 0;
++
++put_i2c:
++	if (output->ddc)
++		i2c_put_adapter(output->ddc);
++
++	return err;
+ }
+ 
+ void tegra_output_remove(struct tegra_output *output)
+diff --git a/drivers/gpu/drm/tidss/tidss_plane.c b/drivers/gpu/drm/tidss/tidss_plane.c
+index 43e72d0b2d84d..e2ebd5fdc1138 100644
+--- a/drivers/gpu/drm/tidss/tidss_plane.c
++++ b/drivers/gpu/drm/tidss/tidss_plane.c
+@@ -202,7 +202,7 @@ struct tidss_plane *tidss_plane_create(struct tidss_device *tidss,
+ 
+ 	drm_plane_helper_add(&tplane->plane, &tidss_plane_helper_funcs);
+ 
+-	drm_plane_create_zpos_property(&tplane->plane, hw_plane_id, 0,
++	drm_plane_create_zpos_property(&tplane->plane, tidss->num_planes, 0,
+ 				       num_planes - 1);
+ 
+ 	ret = drm_plane_create_color_properties(&tplane->plane,
+diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c
+index 249af8d26fe78..c9fdeffbe1a9e 100644
+--- a/drivers/hid/hid-lenovo.c
++++ b/drivers/hid/hid-lenovo.c
+@@ -53,10 +53,10 @@ struct lenovo_drvdata {
+ 	/* 0: Up
+ 	 * 1: Down (undecided)
+ 	 * 2: Scrolling
+-	 * 3: Patched firmware, disable workaround
+ 	 */
+ 	u8 middlebutton_state;
+ 	bool fn_lock;
++	bool middleclick_workaround_cptkbd;
+ };
+ 
+ #define map_key_clear(c) hid_map_usage_clear(hi, usage, bit, max, EV_KEY, (c))
+@@ -418,6 +418,36 @@ static ssize_t attr_sensitivity_store_cptkbd(struct device *dev,
+ 	return count;
+ }
+ 
++static ssize_t attr_middleclick_workaround_show_cptkbd(struct device *dev,
++		struct device_attribute *attr,
++		char *buf)
++{
++	struct hid_device *hdev = to_hid_device(dev);
++	struct lenovo_drvdata *cptkbd_data = hid_get_drvdata(hdev);
++
++	return snprintf(buf, PAGE_SIZE, "%u\n",
++		cptkbd_data->middleclick_workaround_cptkbd);
++}
++
++static ssize_t attr_middleclick_workaround_store_cptkbd(struct device *dev,
++		struct device_attribute *attr,
++		const char *buf,
++		size_t count)
++{
++	struct hid_device *hdev = to_hid_device(dev);
++	struct lenovo_drvdata *cptkbd_data = hid_get_drvdata(hdev);
++	int value;
++
++	if (kstrtoint(buf, 10, &value))
++		return -EINVAL;
++	if (value < 0 || value > 1)
++		return -EINVAL;
++
++	cptkbd_data->middleclick_workaround_cptkbd = !!value;
++
++	return count;
++}
++
+ 
+ static struct device_attribute dev_attr_fn_lock =
+ 	__ATTR(fn_lock, S_IWUSR | S_IRUGO,
+@@ -429,10 +459,16 @@ static struct device_attribute dev_attr_sensitivity_cptkbd =
+ 			attr_sensitivity_show_cptkbd,
+ 			attr_sensitivity_store_cptkbd);
+ 
++static struct device_attribute dev_attr_middleclick_workaround_cptkbd =
++	__ATTR(middleclick_workaround, S_IWUSR | S_IRUGO,
++			attr_middleclick_workaround_show_cptkbd,
++			attr_middleclick_workaround_store_cptkbd);
++
+ 
+ static struct attribute *lenovo_attributes_cptkbd[] = {
+ 	&dev_attr_fn_lock.attr,
+ 	&dev_attr_sensitivity_cptkbd.attr,
++	&dev_attr_middleclick_workaround_cptkbd.attr,
+ 	NULL
+ };
+ 
+@@ -483,23 +519,7 @@ static int lenovo_event_cptkbd(struct hid_device *hdev,
+ {
+ 	struct lenovo_drvdata *cptkbd_data = hid_get_drvdata(hdev);
+ 
+-	if (cptkbd_data->middlebutton_state != 3) {
+-		/* REL_X and REL_Y events during middle button pressed
+-		 * are only possible on patched, bug-free firmware
+-		 * so set middlebutton_state to 3
+-		 * to never apply workaround anymore
+-		 */
+-		if (hdev->product == USB_DEVICE_ID_LENOVO_CUSBKBD &&
+-				cptkbd_data->middlebutton_state == 1 &&
+-				usage->type == EV_REL &&
+-				(usage->code == REL_X || usage->code == REL_Y)) {
+-			cptkbd_data->middlebutton_state = 3;
+-			/* send middle button press which was hold before */
+-			input_event(field->hidinput->input,
+-				EV_KEY, BTN_MIDDLE, 1);
+-			input_sync(field->hidinput->input);
+-		}
+-
++	if (cptkbd_data->middleclick_workaround_cptkbd) {
+ 		/* "wheel" scroll events */
+ 		if (usage->type == EV_REL && (usage->code == REL_WHEEL ||
+ 				usage->code == REL_HWHEEL)) {
+@@ -976,6 +996,7 @@ static int lenovo_probe_cptkbd(struct hid_device *hdev)
+ 	cptkbd_data->middlebutton_state = 0;
+ 	cptkbd_data->fn_lock = true;
+ 	cptkbd_data->sensitivity = 0x05;
++	cptkbd_data->middleclick_workaround_cptkbd = true;
+ 	lenovo_features_set_cptkbd(hdev);
+ 
+ 	ret = sysfs_create_group(&hdev->dev.kobj, &lenovo_attr_group_cptkbd);
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 7d43d62df2409..8dcd636daf270 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2067,6 +2067,10 @@ static const struct hid_device_id mt_devices[] = {
+ 		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+ 			USB_VENDOR_ID_SYNAPTICS, 0xcd7e) },
+ 
++	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++			USB_VENDOR_ID_SYNAPTICS, 0xcddc) },
++
+ 	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
+ 		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+ 			USB_VENDOR_ID_SYNAPTICS, 0xce08) },
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index 3c29fd04b3016..94c3bad72cc59 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -1686,7 +1686,7 @@ static int assign_client_id(struct ib_client *client)
+ {
+ 	int ret;
+ 
+-	down_write(&clients_rwsem);
++	lockdep_assert_held(&clients_rwsem);
+ 	/*
+ 	 * The add/remove callbacks must be called in FIFO/LIFO order. To
+ 	 * achieve this we assign client_ids so they are sorted in
+@@ -1695,14 +1695,11 @@ static int assign_client_id(struct ib_client *client)
+ 	client->client_id = highest_client_id;
+ 	ret = xa_insert(&clients, client->client_id, client, GFP_KERNEL);
+ 	if (ret)
+-		goto out;
++		return ret;
+ 
+ 	highest_client_id++;
+ 	xa_set_mark(&clients, client->client_id, CLIENT_REGISTERED);
+-
+-out:
+-	up_write(&clients_rwsem);
+-	return ret;
++	return 0;
+ }
+ 
+ static void remove_client_id(struct ib_client *client)
+@@ -1732,25 +1729,35 @@ int ib_register_client(struct ib_client *client)
+ {
+ 	struct ib_device *device;
+ 	unsigned long index;
++	bool need_unreg = false;
+ 	int ret;
+ 
+ 	refcount_set(&client->uses, 1);
+ 	init_completion(&client->uses_zero);
++
++	/*
++	 * The devices_rwsem is held in write mode to ensure that a racing
++	 * ib_register_device() sees a consisent view of clients and devices.
++	 */
++	down_write(&devices_rwsem);
++	down_write(&clients_rwsem);
+ 	ret = assign_client_id(client);
+ 	if (ret)
+-		return ret;
++		goto out;
+ 
+-	down_read(&devices_rwsem);
++	need_unreg = true;
+ 	xa_for_each_marked (&devices, index, device, DEVICE_REGISTERED) {
+ 		ret = add_client_context(device, client);
+-		if (ret) {
+-			up_read(&devices_rwsem);
+-			ib_unregister_client(client);
+-			return ret;
+-		}
++		if (ret)
++			goto out;
+ 	}
+-	up_read(&devices_rwsem);
+-	return 0;
++	ret = 0;
++out:
++	up_write(&clients_rwsem);
++	up_write(&devices_rwsem);
++	if (need_unreg && ret)
++		ib_unregister_client(client);
++	return ret;
+ }
+ EXPORT_SYMBOL(ib_register_client);
+ 
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index a56ebdc15723c..f67ebd9f3cdd1 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -2780,7 +2780,7 @@ DECLARE_UVERBS_NAMED_METHOD(
+ 	MLX5_IB_METHOD_DEVX_OBJ_MODIFY,
+ 	UVERBS_ATTR_IDR(MLX5_IB_ATTR_DEVX_OBJ_MODIFY_HANDLE,
+ 			UVERBS_IDR_ANY_OBJECT,
+-			UVERBS_ACCESS_WRITE,
++			UVERBS_ACCESS_READ,
+ 			UA_MANDATORY),
+ 	UVERBS_ATTR_PTR_IN(
+ 		MLX5_IB_ATTR_DEVX_OBJ_MODIFY_CMD_IN,
+diff --git a/drivers/infiniband/hw/mlx5/wr.c b/drivers/infiniband/hw/mlx5/wr.c
+index d6038fb6c50c6..19fd440a6ce38 100644
+--- a/drivers/infiniband/hw/mlx5/wr.c
++++ b/drivers/infiniband/hw/mlx5/wr.c
+@@ -128,7 +128,7 @@ static void set_eth_seg(const struct ib_send_wr *wr, struct mlx5_ib_qp *qp,
+ 		 */
+ 		copysz = min_t(u64, *cur_edge - (void *)eseg->inline_hdr.start,
+ 			       left);
+-		memcpy(eseg->inline_hdr.start, pdata, copysz);
++		memcpy(eseg->inline_hdr.data, pdata, copysz);
+ 		stride = ALIGN(sizeof(struct mlx5_wqe_eth_seg) -
+ 			       sizeof(eseg->inline_hdr.start) + copysz, 16);
+ 		*size += stride / 16;
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 41abf9cf11c67..960f870a952a5 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -3205,7 +3205,6 @@ static int srpt_add_one(struct ib_device *device)
+ 
+ 	INIT_IB_EVENT_HANDLER(&sdev->event_handler, sdev->device,
+ 			      srpt_event_handler);
+-	ib_register_event_handler(&sdev->event_handler);
+ 
+ 	for (i = 1; i <= sdev->device->phys_port_cnt; i++) {
+ 		sport = &sdev->port[i - 1];
+@@ -3228,6 +3227,7 @@ static int srpt_add_one(struct ib_device *device)
+ 		}
+ 	}
+ 
++	ib_register_event_handler(&sdev->event_handler);
+ 	spin_lock(&srpt_dev_lock);
+ 	list_add_tail(&sdev->list, &srpt_dev_list);
+ 	spin_unlock(&srpt_dev_lock);
+@@ -3238,7 +3238,6 @@ static int srpt_add_one(struct ib_device *device)
+ 
+ err_port:
+ 	srpt_unregister_mad_agent(sdev, i);
+-	ib_unregister_event_handler(&sdev->event_handler);
+ err_cm:
+ 	if (sdev->cm_id)
+ 		ib_destroy_cm_id(sdev->cm_id);
+diff --git a/drivers/input/keyboard/gpio_keys_polled.c b/drivers/input/keyboard/gpio_keys_polled.c
+index c3937d2fc7446..a0f9978c68f55 100644
+--- a/drivers/input/keyboard/gpio_keys_polled.c
++++ b/drivers/input/keyboard/gpio_keys_polled.c
+@@ -319,12 +319,10 @@ static int gpio_keys_polled_probe(struct platform_device *pdev)
+ 
+ 			error = devm_gpio_request_one(dev, button->gpio,
+ 					flags, button->desc ? : DRV_NAME);
+-			if (error) {
+-				dev_err(dev,
+-					"unable to claim gpio %u, err=%d\n",
+-					button->gpio, error);
+-				return error;
+-			}
++			if (error)
++				return dev_err_probe(dev, error,
++						     "unable to claim gpio %u\n",
++						     button->gpio);
+ 
+ 			bdata->gpiod = gpio_to_desc(button->gpio);
+ 			if (!bdata->gpiod) {
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 603f625a74e54..91cc3a5643caf 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -1827,6 +1827,9 @@ static int __init iommu_init_pci(struct amd_iommu *iommu)
+ 	/* Prevent binding other PCI device drivers to IOMMU devices */
+ 	iommu->dev->match_driver = false;
+ 
++	/* ACPI _PRT won't have an IRQ for IOMMU */
++	iommu->dev->irq_managed = 1;
++
+ 	pci_read_config_dword(iommu->dev, cap_ptr + MMIO_CAP_HDR_OFFSET,
+ 			      &iommu->cap);
+ 
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index 9b24e8224379e..586b289cf468d 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -489,6 +489,9 @@ devtlb_invalidation_with_pasid(struct intel_iommu *iommu,
+ 	if (!info || !info->ats_enabled)
+ 		return;
+ 
++	if (pci_dev_is_disconnected(to_pci_dev(dev)))
++		return;
++
+ 	sid = info->bus << 8 | info->devfn;
+ 	qdep = info->ats_qdep;
+ 	pfsid = info->pfsid;
+diff --git a/drivers/leds/leds-aw2013.c b/drivers/leds/leds-aw2013.c
+index 80d937454aeef..f7d9795ce5e1f 100644
+--- a/drivers/leds/leds-aw2013.c
++++ b/drivers/leds/leds-aw2013.c
+@@ -397,6 +397,7 @@ static int aw2013_probe(struct i2c_client *client)
+ 	regulator_disable(chip->vcc_regulator);
+ 
+ error:
++	mutex_unlock(&chip->mutex);
+ 	mutex_destroy(&chip->mutex);
+ 	return ret;
+ }
+diff --git a/drivers/leds/leds-sgm3140.c b/drivers/leds/leds-sgm3140.c
+index f4f831570f11c..e72017b11098b 100644
+--- a/drivers/leds/leds-sgm3140.c
++++ b/drivers/leds/leds-sgm3140.c
+@@ -114,8 +114,11 @@ static int sgm3140_brightness_set(struct led_classdev *led_cdev,
+ 				"failed to enable regulator: %d\n", ret);
+ 			return ret;
+ 		}
++		gpiod_set_value_cansleep(priv->flash_gpio, 0);
+ 		gpiod_set_value_cansleep(priv->enable_gpio, 1);
+ 	} else {
++		del_timer_sync(&priv->powerdown_timer);
++		gpiod_set_value_cansleep(priv->flash_gpio, 0);
+ 		gpiod_set_value_cansleep(priv->enable_gpio, 0);
+ 		ret = regulator_disable(priv->vin_regulator);
+ 		if (ret) {
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 5edcdcee91c23..5deda6c6fa2e7 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -48,11 +48,11 @@
+ struct convert_context {
+ 	struct completion restart;
+ 	struct bio *bio_in;
+-	struct bio *bio_out;
+ 	struct bvec_iter iter_in;
++	struct bio *bio_out;
+ 	struct bvec_iter iter_out;
+-	u64 cc_sector;
+ 	atomic_t cc_pending;
++	u64 cc_sector;
+ 	union {
+ 		struct skcipher_request *req;
+ 		struct aead_request *req_aead;
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 140bdf2a6ee11..e523ecdf947f4 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -3329,14 +3329,14 @@ static int raid_map(struct dm_target *ti, struct bio *bio)
+ 	struct mddev *mddev = &rs->md;
+ 
+ 	/*
+-	 * If we're reshaping to add disk(s)), ti->len and
++	 * If we're reshaping to add disk(s), ti->len and
+ 	 * mddev->array_sectors will differ during the process
+ 	 * (ti->len > mddev->array_sectors), so we have to requeue
+ 	 * bios with addresses > mddev->array_sectors here or
+ 	 * there will occur accesses past EOD of the component
+ 	 * data images thus erroring the raid set.
+ 	 */
+-	if (unlikely(bio_end_sector(bio) > mddev->array_sectors))
++	if (unlikely(bio_has_data(bio) && bio_end_sector(bio) > mddev->array_sectors))
+ 		return DM_MAPIO_REQUEUE;
+ 
+ 	md_handle_request(mddev, bio);
+diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h
+index 78d1e51195ada..f61c89c79cf5b 100644
+--- a/drivers/md/dm-verity.h
++++ b/drivers/md/dm-verity.h
+@@ -74,11 +74,11 @@ struct dm_verity_io {
+ 	/* original value of bio->bi_end_io */
+ 	bio_end_io_t *orig_bi_end_io;
+ 
++	struct bvec_iter iter;
++
+ 	sector_t block;
+ 	unsigned n_blocks;
+ 
+-	struct bvec_iter iter;
+-
+ 	struct work_struct work;
+ 
+ 	/*
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 9029c1004b933..dc8498b4b5c13 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -2733,6 +2733,9 @@ static void __dm_internal_suspend(struct mapped_device *md, unsigned suspend_fla
+ 
+ static void __dm_internal_resume(struct mapped_device *md)
+ {
++	int r;
++	struct dm_table *map;
++
+ 	BUG_ON(!md->internal_suspend_count);
+ 
+ 	if (--md->internal_suspend_count)
+@@ -2741,12 +2744,23 @@ static void __dm_internal_resume(struct mapped_device *md)
+ 	if (dm_suspended_md(md))
+ 		goto done; /* resume from nested suspend */
+ 
+-	/*
+-	 * NOTE: existing callers don't need to call dm_table_resume_targets
+-	 * (which may fail -- so best to avoid it for now by passing NULL map)
+-	 */
+-	(void) __dm_resume(md, NULL);
+-
++	map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));
++	r = __dm_resume(md, map);
++	if (r) {
++		/*
++		 * If a preresume method of some target failed, we are in a
++		 * tricky situation. We can't return an error to the caller. We
++		 * can't fake success because then the "resume" and
++		 * "postsuspend" methods would not be paired correctly, and it
++		 * would break various targets, for example it would cause list
++		 * corruption in the "origin" target.
++		 *
++		 * So, we fake normal suspend here, to make sure that the
++		 * "resume" and "postsuspend" methods will be paired correctly.
++		 */
++		DMERR("Preresume method failed: %d", r);
++		set_bit(DMF_SUSPENDED, &md->flags);
++	}
+ done:
+ 	clear_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);
+ 	smp_mb__after_atomic();
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 03d2e31dda2f6..09c7f52156f3f 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -6243,7 +6243,15 @@ static void md_clean(struct mddev *mddev)
+ 	mddev->persistent = 0;
+ 	mddev->level = LEVEL_NONE;
+ 	mddev->clevel[0] = 0;
+-	mddev->flags = 0;
++	/*
++	 * Don't clear MD_CLOSING, or mddev can be opened again.
++	 * 'hold_active != 0' means mddev is still in the creation
++	 * process and will be used later.
++	 */
++	if (mddev->hold_active)
++		mddev->flags = 0;
++	else
++		mddev->flags &= BIT_ULL_MASK(MD_CLOSING);
+ 	mddev->sb_flags = 0;
+ 	mddev->ro = 0;
+ 	mddev->metadata_type[0] = 0;
+@@ -7536,7 +7544,6 @@ static inline bool md_ioctl_valid(unsigned int cmd)
+ {
+ 	switch (cmd) {
+ 	case ADD_NEW_DISK:
+-	case BLKROSET:
+ 	case GET_ARRAY_INFO:
+ 	case GET_BITMAP_FILE:
+ 	case GET_DISK_INFO:
+@@ -7563,8 +7570,6 @@ static int md_ioctl(struct block_device *bdev, fmode_t mode,
+ 	int err = 0;
+ 	void __user *argp = (void __user *)arg;
+ 	struct mddev *mddev = NULL;
+-	int ro;
+-	bool did_set_md_closing = false;
+ 
+ 	if (!md_ioctl_valid(cmd))
+ 		return -ENOTTY;
+@@ -7651,7 +7656,6 @@ static int md_ioctl(struct block_device *bdev, fmode_t mode,
+ 			err = -EBUSY;
+ 			goto out;
+ 		}
+-		did_set_md_closing = true;
+ 		mutex_unlock(&mddev->open_mutex);
+ 		sync_blockdev(bdev);
+ 	}
+@@ -7746,35 +7750,6 @@ static int md_ioctl(struct block_device *bdev, fmode_t mode,
+ 			goto unlock;
+ 		}
+ 		break;
+-
+-	case BLKROSET:
+-		if (get_user(ro, (int __user *)(arg))) {
+-			err = -EFAULT;
+-			goto unlock;
+-		}
+-		err = -EINVAL;
+-
+-		/* if the bdev is going readonly the value of mddev->ro
+-		 * does not matter, no writes are coming
+-		 */
+-		if (ro)
+-			goto unlock;
+-
+-		/* are we are already prepared for writes? */
+-		if (mddev->ro != 1)
+-			goto unlock;
+-
+-		/* transitioning to readauto need only happen for
+-		 * arrays that call md_write_start
+-		 */
+-		if (mddev->pers) {
+-			err = restart_array(mddev);
+-			if (err == 0) {
+-				mddev->ro = 2;
+-				set_disk_ro(mddev->gendisk, 0);
+-			}
+-		}
+-		goto unlock;
+ 	}
+ 
+ 	/*
+@@ -7844,7 +7819,7 @@ static int md_ioctl(struct block_device *bdev, fmode_t mode,
+ 		mddev->hold_active = 0;
+ 	mddev_unlock(mddev);
+ out:
+-	if(did_set_md_closing)
++	if (cmd == STOP_ARRAY_RO || (err && cmd == STOP_ARRAY))
+ 		clear_bit(MD_CLOSING, &mddev->flags);
+ 	return err;
+ }
+@@ -7868,6 +7843,36 @@ static int md_compat_ioctl(struct block_device *bdev, fmode_t mode,
+ }
+ #endif /* CONFIG_COMPAT */
+ 
++static int md_set_read_only(struct block_device *bdev, bool ro)
++{
++	struct mddev *mddev = bdev->bd_disk->private_data;
++	int err;
++
++	err = mddev_lock(mddev);
++	if (err)
++		return err;
++
++	if (!mddev->raid_disks && !mddev->external) {
++		err = -ENODEV;
++		goto out_unlock;
++	}
++
++	/*
++	 * Transitioning to read-auto need only happen for arrays that call
++	 * md_write_start and which are not ready for writes yet.
++	 */
++	if (!ro && mddev->ro == 1 && mddev->pers) {
++		err = restart_array(mddev);
++		if (err)
++			goto out_unlock;
++		mddev->ro = 2;
++	}
++
++out_unlock:
++	mddev_unlock(mddev);
++	return err;
++}
++
+ static int md_open(struct block_device *bdev, fmode_t mode)
+ {
+ 	/*
+@@ -7944,6 +7949,7 @@ const struct block_device_operations md_fops =
+ #endif
+ 	.getgeo		= md_getgeo,
+ 	.check_events	= md_check_events,
++	.set_read_only	= md_set_read_only,
+ };
+ 
+ static int md_thread(void *arg)
+diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+index 7607b516a7c43..68968bfa2edc1 100644
+--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
++++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+@@ -113,6 +113,7 @@ int tpg_alloc(struct tpg_data *tpg, unsigned max_w)
+ {
+ 	unsigned pat;
+ 	unsigned plane;
++	int ret = 0;
+ 
+ 	tpg->max_line_width = max_w;
+ 	for (pat = 0; pat < TPG_MAX_PAT_LINES; pat++) {
+@@ -121,14 +122,18 @@ int tpg_alloc(struct tpg_data *tpg, unsigned max_w)
+ 
+ 			tpg->lines[pat][plane] =
+ 				vzalloc(array3_size(max_w, 2, pixelsz));
+-			if (!tpg->lines[pat][plane])
+-				return -ENOMEM;
++			if (!tpg->lines[pat][plane]) {
++				ret = -ENOMEM;
++				goto free_lines;
++			}
+ 			if (plane == 0)
+ 				continue;
+ 			tpg->downsampled_lines[pat][plane] =
+ 				vzalloc(array3_size(max_w, 2, pixelsz));
+-			if (!tpg->downsampled_lines[pat][plane])
+-				return -ENOMEM;
++			if (!tpg->downsampled_lines[pat][plane]) {
++				ret = -ENOMEM;
++				goto free_lines;
++			}
+ 		}
+ 	}
+ 	for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
+@@ -136,18 +141,45 @@ int tpg_alloc(struct tpg_data *tpg, unsigned max_w)
+ 
+ 		tpg->contrast_line[plane] =
+ 			vzalloc(array_size(pixelsz, max_w));
+-		if (!tpg->contrast_line[plane])
+-			return -ENOMEM;
++		if (!tpg->contrast_line[plane]) {
++			ret = -ENOMEM;
++			goto free_contrast_line;
++		}
+ 		tpg->black_line[plane] =
+ 			vzalloc(array_size(pixelsz, max_w));
+-		if (!tpg->black_line[plane])
+-			return -ENOMEM;
++		if (!tpg->black_line[plane]) {
++			ret = -ENOMEM;
++			goto free_contrast_line;
++		}
+ 		tpg->random_line[plane] =
+ 			vzalloc(array3_size(max_w, 2, pixelsz));
+-		if (!tpg->random_line[plane])
+-			return -ENOMEM;
++		if (!tpg->random_line[plane]) {
++			ret = -ENOMEM;
++			goto free_contrast_line;
++		}
+ 	}
+ 	return 0;
++
++free_contrast_line:
++	for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
++		vfree(tpg->contrast_line[plane]);
++		vfree(tpg->black_line[plane]);
++		vfree(tpg->random_line[plane]);
++		tpg->contrast_line[plane] = NULL;
++		tpg->black_line[plane] = NULL;
++		tpg->random_line[plane] = NULL;
++	}
++free_lines:
++	for (pat = 0; pat < TPG_MAX_PAT_LINES; pat++)
++		for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
++			vfree(tpg->lines[pat][plane]);
++			tpg->lines[pat][plane] = NULL;
++			if (plane == 0)
++				continue;
++			vfree(tpg->downsampled_lines[pat][plane]);
++			tpg->downsampled_lines[pat][plane] = NULL;
++		}
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(tpg_alloc);
+ 
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index 3a83e8e092568..23a0c209744dc 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -504,6 +504,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 		dvbdevfops = kmemdup(template->fops, sizeof(*dvbdevfops), GFP_KERNEL);
+ 		if (!dvbdevfops) {
+ 			kfree(dvbdev);
++			*pdvbdev = NULL;
+ 			mutex_unlock(&dvbdev_register_lock);
+ 			return -ENOMEM;
+ 		}
+@@ -512,6 +513,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 		if (!new_node) {
+ 			kfree(dvbdevfops);
+ 			kfree(dvbdev);
++			*pdvbdev = NULL;
+ 			mutex_unlock(&dvbdev_register_lock);
+ 			return -ENOMEM;
+ 		}
+@@ -545,6 +547,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 		}
+ 		list_del (&dvbdev->list_head);
+ 		kfree(dvbdev);
++		*pdvbdev = NULL;
+ 		up_write(&minor_rwsem);
+ 		mutex_unlock(&dvbdev_register_lock);
+ 		return -EINVAL;
+@@ -567,6 +570,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 		dvb_media_device_free(dvbdev);
+ 		list_del (&dvbdev->list_head);
+ 		kfree(dvbdev);
++		*pdvbdev = NULL;
+ 		mutex_unlock(&dvbdev_register_lock);
+ 		return ret;
+ 	}
+@@ -585,6 +589,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 		dvb_media_device_free(dvbdev);
+ 		list_del (&dvbdev->list_head);
+ 		kfree(dvbdev);
++		*pdvbdev = NULL;
+ 		mutex_unlock(&dvbdev_register_lock);
+ 		return PTR_ERR(clsdev);
+ 	}
+diff --git a/drivers/media/dvb-frontends/stv0367.c b/drivers/media/dvb-frontends/stv0367.c
+index 0bfca1174e9e7..8cbae8235b174 100644
+--- a/drivers/media/dvb-frontends/stv0367.c
++++ b/drivers/media/dvb-frontends/stv0367.c
+@@ -118,50 +118,32 @@ static const s32 stv0367cab_RF_LookUp2[RF_LOOKUP_TABLE2_SIZE][RF_LOOKUP_TABLE2_S
+ 	}
+ };
+ 
+-static
+-int stv0367_writeregs(struct stv0367_state *state, u16 reg, u8 *data, int len)
++static noinline_for_stack
++int stv0367_writereg(struct stv0367_state *state, u16 reg, u8 data)
+ {
+-	u8 buf[MAX_XFER_SIZE];
++	u8 buf[3] = { MSB(reg), LSB(reg), data };
+ 	struct i2c_msg msg = {
+ 		.addr = state->config->demod_address,
+ 		.flags = 0,
+ 		.buf = buf,
+-		.len = len + 2
++		.len = 3,
+ 	};
+ 	int ret;
+ 
+-	if (2 + len > sizeof(buf)) {
+-		printk(KERN_WARNING
+-		       "%s: i2c wr reg=%04x: len=%d is too big!\n",
+-		       KBUILD_MODNAME, reg, len);
+-		return -EINVAL;
+-	}
+-
+-
+-	buf[0] = MSB(reg);
+-	buf[1] = LSB(reg);
+-	memcpy(buf + 2, data, len);
+-
+ 	if (i2cdebug)
+ 		printk(KERN_DEBUG "%s: [%02x] %02x: %02x\n", __func__,
+-			state->config->demod_address, reg, buf[2]);
++			state->config->demod_address, reg, data);
+ 
+ 	ret = i2c_transfer(state->i2c, &msg, 1);
+ 	if (ret != 1)
+ 		printk(KERN_ERR "%s: i2c write error! ([%02x] %02x: %02x)\n",
+-			__func__, state->config->demod_address, reg, buf[2]);
++			__func__, state->config->demod_address, reg, data);
+ 
+ 	return (ret != 1) ? -EREMOTEIO : 0;
+ }
+ 
+-static int stv0367_writereg(struct stv0367_state *state, u16 reg, u8 data)
+-{
+-	u8 tmp = data; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+-
+-	return stv0367_writeregs(state, reg, &tmp, 1);
+-}
+-
+-static u8 stv0367_readreg(struct stv0367_state *state, u16 reg)
++static noinline_for_stack
++u8 stv0367_readreg(struct stv0367_state *state, u16 reg)
+ {
+ 	u8 b0[] = { 0, 0 };
+ 	u8 b1[] = { 0 };
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index f21da11caf224..8bcb4b354c895 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -2108,9 +2108,6 @@ static int tc358743_probe(struct i2c_client *client)
+ 	state->mbus_fmt_code = MEDIA_BUS_FMT_RGB888_1X24;
+ 
+ 	sd->dev = &client->dev;
+-	err = v4l2_async_register_subdev(sd);
+-	if (err < 0)
+-		goto err_hdl;
+ 
+ 	mutex_init(&state->confctl_mutex);
+ 
+@@ -2168,6 +2165,10 @@ static int tc358743_probe(struct i2c_client *client)
+ 	if (err)
+ 		goto err_work_queues;
+ 
++	err = v4l2_async_register_subdev(sd);
++	if (err < 0)
++		goto err_work_queues;
++
+ 	v4l2_info(sd, "%s found @ 0x%x (%s)\n", client->name,
+ 		  client->addr << 1, client->adapter->name);
+ 
+diff --git a/drivers/media/pci/ttpci/budget-av.c b/drivers/media/pci/ttpci/budget-av.c
+index 3cb83005cf09b..519f85e0a397d 100644
+--- a/drivers/media/pci/ttpci/budget-av.c
++++ b/drivers/media/pci/ttpci/budget-av.c
+@@ -1462,7 +1462,8 @@ static int budget_av_attach(struct saa7146_dev *dev, struct saa7146_pci_extensio
+ 		budget_av->has_saa7113 = 1;
+ 		err = saa7146_vv_init(dev, &vv_data);
+ 		if (err != 0) {
+-			/* fixme: proper cleanup here */
++			ttpci_budget_deinit(&budget_av->budget);
++			kfree(budget_av);
+ 			ERR("cannot init vv subsystem\n");
+ 			return err;
+ 		}
+@@ -1471,9 +1472,10 @@ static int budget_av_attach(struct saa7146_dev *dev, struct saa7146_pci_extensio
+ 		vv_data.vid_ops.vidioc_s_input = vidioc_s_input;
+ 
+ 		if ((err = saa7146_register_device(&budget_av->vd, dev, "knc1", VFL_TYPE_VIDEO))) {
+-			/* fixme: proper cleanup here */
+-			ERR("cannot register capture v4l2 device\n");
+ 			saa7146_vv_release(dev);
++			ttpci_budget_deinit(&budget_av->budget);
++			kfree(budget_av);
++			ERR("cannot register capture v4l2 device\n");
+ 			return err;
+ 		}
+ 
+diff --git a/drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c b/drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c
+index b065ccd069140..378a1cba0144f 100644
+--- a/drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c
++++ b/drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c
+@@ -26,7 +26,7 @@ static void mtk_mdp_vpu_handle_init_ack(const struct mdp_ipi_comm_ack *msg)
+ 	vpu->inst_addr = msg->vpu_inst_addr;
+ }
+ 
+-static void mtk_mdp_vpu_ipi_handler(const void *data, unsigned int len,
++static void mtk_mdp_vpu_ipi_handler(void *data, unsigned int len,
+ 				    void *priv)
+ {
+ 	const struct mdp_ipi_comm_ack *msg = data;
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
+index cfc7ebed8fb7a..1ec29f1b163a1 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
+@@ -29,15 +29,7 @@ static int mtk_vcodec_vpu_set_ipi_register(struct mtk_vcodec_fw *fw, int id,
+ 					   mtk_vcodec_ipi_handler handler,
+ 					   const char *name, void *priv)
+ {
+-	/*
+-	 * The handler we receive takes a void * as its first argument. We
+-	 * cannot change this because it needs to be passed down to the rproc
+-	 * subsystem when SCP is used. VPU takes a const argument, which is
+-	 * more constrained, so the conversion below is safe.
+-	 */
+-	ipi_handler_t handler_const = (ipi_handler_t)handler;
+-
+-	return vpu_ipi_register(fw->pdev, id, handler_const, name, priv);
++	return vpu_ipi_register(fw->pdev, id, handler, name, priv);
+ }
+ 
+ static int mtk_vcodec_vpu_ipi_send(struct mtk_vcodec_fw *fw, int id, void *buf,
+diff --git a/drivers/media/platform/mtk-vpu/mtk_vpu.c b/drivers/media/platform/mtk-vpu/mtk_vpu.c
+index e7c4b0dd588a9..a2f61d97ffeb1 100644
+--- a/drivers/media/platform/mtk-vpu/mtk_vpu.c
++++ b/drivers/media/platform/mtk-vpu/mtk_vpu.c
+@@ -612,7 +612,7 @@ int vpu_load_firmware(struct platform_device *pdev)
+ }
+ EXPORT_SYMBOL_GPL(vpu_load_firmware);
+ 
+-static void vpu_init_ipi_handler(const void *data, unsigned int len, void *priv)
++static void vpu_init_ipi_handler(void *data, unsigned int len, void *priv)
+ {
+ 	struct mtk_vpu *vpu = priv;
+ 	const struct vpu_run *run = data;
+diff --git a/drivers/media/platform/mtk-vpu/mtk_vpu.h b/drivers/media/platform/mtk-vpu/mtk_vpu.h
+index ee7c552ce9289..d4453b4bcee92 100644
+--- a/drivers/media/platform/mtk-vpu/mtk_vpu.h
++++ b/drivers/media/platform/mtk-vpu/mtk_vpu.h
+@@ -15,7 +15,7 @@
+  * VPU interfaces with other blocks by share memory and interrupt.
+  **/
+ 
+-typedef void (*ipi_handler_t) (const void *data,
++typedef void (*ipi_handler_t) (void *data,
+ 			       unsigned int len,
+ 			       void *priv);
+ 
+diff --git a/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c b/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
+index 2c159483c56ba..f0d2bcbe20b0d 100644
+--- a/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
++++ b/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
+@@ -66,6 +66,7 @@ static void deinterlace_device_run(void *priv)
+ 	struct vb2_v4l2_buffer *src, *dst;
+ 	unsigned int hstep, vstep;
+ 	dma_addr_t addr;
++	int i;
+ 
+ 	src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+@@ -160,6 +161,26 @@ static void deinterlace_device_run(void *priv)
+ 	deinterlace_write(dev, DEINTERLACE_CH1_HORZ_FACT, hstep);
+ 	deinterlace_write(dev, DEINTERLACE_CH1_VERT_FACT, vstep);
+ 
++	/* neutral filter coefficients */
++	deinterlace_set_bits(dev, DEINTERLACE_FRM_CTRL,
++			     DEINTERLACE_FRM_CTRL_COEF_ACCESS);
++	readl_poll_timeout(dev->base + DEINTERLACE_STATUS, val,
++			   val & DEINTERLACE_STATUS_COEF_STATUS, 2, 40);
++
++	for (i = 0; i < 32; i++) {
++		deinterlace_write(dev, DEINTERLACE_CH0_HORZ_COEF0 + i * 4,
++				  DEINTERLACE_IDENTITY_COEF);
++		deinterlace_write(dev, DEINTERLACE_CH0_VERT_COEF + i * 4,
++				  DEINTERLACE_IDENTITY_COEF);
++		deinterlace_write(dev, DEINTERLACE_CH1_HORZ_COEF0 + i * 4,
++				  DEINTERLACE_IDENTITY_COEF);
++		deinterlace_write(dev, DEINTERLACE_CH1_VERT_COEF + i * 4,
++				  DEINTERLACE_IDENTITY_COEF);
++	}
++
++	deinterlace_clr_set_bits(dev, DEINTERLACE_FRM_CTRL,
++				 DEINTERLACE_FRM_CTRL_COEF_ACCESS, 0);
++
+ 	deinterlace_clr_set_bits(dev, DEINTERLACE_FIELD_CTRL,
+ 				 DEINTERLACE_FIELD_CTRL_FIELD_CNT_MSK,
+ 				 DEINTERLACE_FIELD_CTRL_FIELD_CNT(ctx->field));
+@@ -248,7 +269,6 @@ static irqreturn_t deinterlace_irq(int irq, void *data)
+ static void deinterlace_init(struct deinterlace_dev *dev)
+ {
+ 	u32 val;
+-	int i;
+ 
+ 	deinterlace_write(dev, DEINTERLACE_BYPASS,
+ 			  DEINTERLACE_BYPASS_CSC);
+@@ -284,27 +304,7 @@ static void deinterlace_init(struct deinterlace_dev *dev)
+ 
+ 	deinterlace_clr_set_bits(dev, DEINTERLACE_CHROMA_DIFF,
+ 				 DEINTERLACE_CHROMA_DIFF_TH_MSK,
+-				 DEINTERLACE_CHROMA_DIFF_TH(5));
+-
+-	/* neutral filter coefficients */
+-	deinterlace_set_bits(dev, DEINTERLACE_FRM_CTRL,
+-			     DEINTERLACE_FRM_CTRL_COEF_ACCESS);
+-	readl_poll_timeout(dev->base + DEINTERLACE_STATUS, val,
+-			   val & DEINTERLACE_STATUS_COEF_STATUS, 2, 40);
+-
+-	for (i = 0; i < 32; i++) {
+-		deinterlace_write(dev, DEINTERLACE_CH0_HORZ_COEF0 + i * 4,
+-				  DEINTERLACE_IDENTITY_COEF);
+-		deinterlace_write(dev, DEINTERLACE_CH0_VERT_COEF + i * 4,
+-				  DEINTERLACE_IDENTITY_COEF);
+-		deinterlace_write(dev, DEINTERLACE_CH1_HORZ_COEF0 + i * 4,
+-				  DEINTERLACE_IDENTITY_COEF);
+-		deinterlace_write(dev, DEINTERLACE_CH1_VERT_COEF + i * 4,
+-				  DEINTERLACE_IDENTITY_COEF);
+-	}
+-
+-	deinterlace_clr_set_bits(dev, DEINTERLACE_FRM_CTRL,
+-				 DEINTERLACE_FRM_CTRL_COEF_ACCESS, 0);
++				 DEINTERLACE_CHROMA_DIFF_TH(31));
+ }
+ 
+ static inline struct deinterlace_ctx *deinterlace_file2ctx(struct file *file)
+@@ -937,11 +937,18 @@ static int deinterlace_runtime_resume(struct device *device)
+ 		return ret;
+ 	}
+ 
++	ret = reset_control_deassert(dev->rstc);
++	if (ret) {
++		dev_err(dev->dev, "Failed to apply reset\n");
++
++		goto err_exclusive_rate;
++	}
++
+ 	ret = clk_prepare_enable(dev->bus_clk);
+ 	if (ret) {
+ 		dev_err(dev->dev, "Failed to enable bus clock\n");
+ 
+-		goto err_exclusive_rate;
++		goto err_rst;
+ 	}
+ 
+ 	ret = clk_prepare_enable(dev->mod_clk);
+@@ -958,23 +965,16 @@ static int deinterlace_runtime_resume(struct device *device)
+ 		goto err_mod_clk;
+ 	}
+ 
+-	ret = reset_control_deassert(dev->rstc);
+-	if (ret) {
+-		dev_err(dev->dev, "Failed to apply reset\n");
+-
+-		goto err_ram_clk;
+-	}
+-
+ 	deinterlace_init(dev);
+ 
+ 	return 0;
+ 
+-err_ram_clk:
+-	clk_disable_unprepare(dev->ram_clk);
+ err_mod_clk:
+ 	clk_disable_unprepare(dev->mod_clk);
+ err_bus_clk:
+ 	clk_disable_unprepare(dev->bus_clk);
++err_rst:
++	reset_control_assert(dev->rstc);
+ err_exclusive_rate:
+ 	clk_rate_exclusive_put(dev->mod_clk);
+ 
+@@ -985,11 +985,12 @@ static int deinterlace_runtime_suspend(struct device *device)
+ {
+ 	struct deinterlace_dev *dev = dev_get_drvdata(device);
+ 
+-	reset_control_assert(dev->rstc);
+-
+ 	clk_disable_unprepare(dev->ram_clk);
+ 	clk_disable_unprepare(dev->mod_clk);
+ 	clk_disable_unprepare(dev->bus_clk);
++
++	reset_control_assert(dev->rstc);
++
+ 	clk_rate_exclusive_put(dev->mod_clk);
+ 
+ 	return 0;
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index 26408a972b443..5deee83132c62 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -4049,6 +4049,10 @@ static int em28xx_usb_probe(struct usb_interface *intf,
+ 	 * topology will likely change after the load of the em28xx subdrivers.
+ 	 */
+ #ifdef CONFIG_MEDIA_CONTROLLER
++	/*
++	 * No need to check the return value, the device will still be
++	 * usable without media controller API.
++	 */
+ 	retval = media_device_register(dev->media_dev);
+ #endif
+ 
+diff --git a/drivers/media/usb/go7007/go7007-driver.c b/drivers/media/usb/go7007/go7007-driver.c
+index 6650eab913d81..3c66542ce284a 100644
+--- a/drivers/media/usb/go7007/go7007-driver.c
++++ b/drivers/media/usb/go7007/go7007-driver.c
+@@ -80,7 +80,7 @@ static int go7007_load_encoder(struct go7007 *go)
+ 	const struct firmware *fw_entry;
+ 	char fw_name[] = "go7007/go7007fw.bin";
+ 	void *bounce;
+-	int fw_len, rv = 0;
++	int fw_len;
+ 	u16 intr_val, intr_data;
+ 
+ 	if (go->boot_fw == NULL) {
+@@ -109,9 +109,11 @@ static int go7007_load_encoder(struct go7007 *go)
+ 	    go7007_read_interrupt(go, &intr_val, &intr_data) < 0 ||
+ 			(intr_val & ~0x1) != 0x5a5a) {
+ 		v4l2_err(go, "error transferring firmware\n");
+-		rv = -1;
++		kfree(go->boot_fw);
++		go->boot_fw = NULL;
++		return -1;
+ 	}
+-	return rv;
++	return 0;
+ }
+ 
+ MODULE_FIRMWARE("go7007/go7007fw.bin");
+diff --git a/drivers/media/usb/go7007/go7007-usb.c b/drivers/media/usb/go7007/go7007-usb.c
+index eeb85981e02b6..762c13e49bfa5 100644
+--- a/drivers/media/usb/go7007/go7007-usb.c
++++ b/drivers/media/usb/go7007/go7007-usb.c
+@@ -1201,7 +1201,9 @@ static int go7007_usb_probe(struct usb_interface *intf,
+ 				u16 channel;
+ 
+ 				/* read channel number from GPIO[1:0] */
+-				go7007_read_addr(go, 0x3c81, &channel);
++				if (go7007_read_addr(go, 0x3c81, &channel))
++					goto allocfail;
++
+ 				channel &= 0x3;
+ 				go->board_id = GO7007_BOARDID_ADLINK_MPG24;
+ 				usb->board = board = &board_adlink_mpg24;
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-context.c b/drivers/media/usb/pvrusb2/pvrusb2-context.c
+index 1764674de98bc..73c95ba2328a4 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-context.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-context.c
+@@ -90,8 +90,10 @@ static void pvr2_context_destroy(struct pvr2_context *mp)
+ }
+ 
+ 
+-static void pvr2_context_notify(struct pvr2_context *mp)
++static void pvr2_context_notify(void *ptr)
+ {
++	struct pvr2_context *mp = ptr;
++
+ 	pvr2_context_set_notify(mp,!0);
+ }
+ 
+@@ -106,9 +108,7 @@ static void pvr2_context_check(struct pvr2_context *mp)
+ 		pvr2_trace(PVR2_TRACE_CTXT,
+ 			   "pvr2_context %p (initialize)", mp);
+ 		/* Finish hardware initialization */
+-		if (pvr2_hdw_initialize(mp->hdw,
+-					(void (*)(void *))pvr2_context_notify,
+-					mp)) {
++		if (pvr2_hdw_initialize(mp->hdw, pvr2_context_notify, mp)) {
+ 			mp->video_stream.stream =
+ 				pvr2_hdw_get_video_stream(mp->hdw);
+ 			/* Trigger interface initialization.  By doing this
+@@ -267,9 +267,9 @@ static void pvr2_context_exit(struct pvr2_context *mp)
+ void pvr2_context_disconnect(struct pvr2_context *mp)
+ {
+ 	pvr2_hdw_disconnect(mp->hdw);
+-	mp->disconnect_flag = !0;
+ 	if (!pvr2_context_shutok())
+ 		pvr2_context_notify(mp);
++	mp->disconnect_flag = !0;
+ }
+ 
+ 
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-dvb.c b/drivers/media/usb/pvrusb2/pvrusb2-dvb.c
+index 6954584526a32..1b768e7466721 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-dvb.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-dvb.c
+@@ -88,8 +88,10 @@ static int pvr2_dvb_feed_thread(void *data)
+ 	return stat;
+ }
+ 
+-static void pvr2_dvb_notify(struct pvr2_dvb_adapter *adap)
++static void pvr2_dvb_notify(void *ptr)
+ {
++	struct pvr2_dvb_adapter *adap = ptr;
++
+ 	wake_up(&adap->buffer_wait_data);
+ }
+ 
+@@ -149,7 +151,7 @@ static int pvr2_dvb_stream_do_start(struct pvr2_dvb_adapter *adap)
+ 	}
+ 
+ 	pvr2_stream_set_callback(pvr->video_stream.stream,
+-				 (pvr2_stream_callback) pvr2_dvb_notify, adap);
++				 pvr2_dvb_notify, adap);
+ 
+ 	ret = pvr2_stream_set_buffer_count(stream, PVR2_DVB_BUFFER_COUNT);
+ 	if (ret < 0) return ret;
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-v4l2.c b/drivers/media/usb/pvrusb2/pvrusb2-v4l2.c
+index 9657c18833116..29f2e767f236f 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-v4l2.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-v4l2.c
+@@ -1037,8 +1037,10 @@ static int pvr2_v4l2_open(struct file *file)
+ }
+ 
+ 
+-static void pvr2_v4l2_notify(struct pvr2_v4l2_fh *fhp)
++static void pvr2_v4l2_notify(void *ptr)
+ {
++	struct pvr2_v4l2_fh *fhp = ptr;
++
+ 	wake_up(&fhp->wait_data);
+ }
+ 
+@@ -1071,7 +1073,7 @@ static int pvr2_v4l2_iosetup(struct pvr2_v4l2_fh *fh)
+ 
+ 	hdw = fh->channel.mc_head->hdw;
+ 	sp = fh->pdi->stream->stream;
+-	pvr2_stream_set_callback(sp,(pvr2_stream_callback)pvr2_v4l2_notify,fh);
++	pvr2_stream_set_callback(sp, pvr2_v4l2_notify, fh);
+ 	pvr2_hdw_set_stream_type(hdw,fh->pdi->config);
+ 	if ((ret = pvr2_hdw_set_streaming(hdw,!0)) < 0) return ret;
+ 	return pvr2_ioread_set_enabled(fh->rhp,!0);
+@@ -1202,11 +1204,6 @@ static void pvr2_v4l2_dev_init(struct pvr2_v4l2_dev *dip,
+ 		dip->minor_type = pvr2_v4l_type_video;
+ 		nr_ptr = video_nr;
+ 		caps |= V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_AUDIO;
+-		if (!dip->stream) {
+-			pr_err(KBUILD_MODNAME
+-				": Failed to set up pvrusb2 v4l video dev due to missing stream instance\n");
+-			return;
+-		}
+ 		break;
+ 	case VFL_TYPE_VBI:
+ 		dip->config = pvr2_config_vbi;
+diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
+index ad14d52141067..56d320b1a1ca7 100644
+--- a/drivers/media/v4l2-core/v4l2-mem2mem.c
++++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
+@@ -1053,11 +1053,17 @@ static int v4l2_m2m_register_entity(struct media_device *mdev,
+ 	entity->function = function;
+ 
+ 	ret = media_entity_pads_init(entity, num_pads, pads);
+-	if (ret)
++	if (ret) {
++		kfree(entity->name);
++		entity->name = NULL;
+ 		return ret;
++	}
+ 	ret = media_device_register_entity(mdev, entity);
+-	if (ret)
++	if (ret) {
++		kfree(entity->name);
++		entity->name = NULL;
+ 		return ret;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mfd/altera-sysmgr.c b/drivers/mfd/altera-sysmgr.c
+index 591b300d90953..59efe7d5dcaa9 100644
+--- a/drivers/mfd/altera-sysmgr.c
++++ b/drivers/mfd/altera-sysmgr.c
+@@ -110,7 +110,9 @@ struct regmap *altr_sysmgr_regmap_lookup_by_phandle(struct device_node *np,
+ 
+ 	dev = driver_find_device_by_of_node(&altr_sysmgr_driver.driver,
+ 					    (void *)sysmgr_np);
+-	of_node_put(sysmgr_np);
++	if (property)
++		of_node_put(sysmgr_np);
++
+ 	if (!dev)
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 
+diff --git a/drivers/mfd/syscon.c b/drivers/mfd/syscon.c
+index 60f74144a4f88..4d536a097e8cb 100644
+--- a/drivers/mfd/syscon.c
++++ b/drivers/mfd/syscon.c
+@@ -224,7 +224,9 @@ struct regmap *syscon_regmap_lookup_by_phandle(struct device_node *np,
+ 		return ERR_PTR(-ENODEV);
+ 
+ 	regmap = syscon_node_to_regmap(syscon_np);
+-	of_node_put(syscon_np);
++
++	if (property)
++		of_node_put(syscon_np);
+ 
+ 	return regmap;
+ }
+diff --git a/drivers/mmc/host/wmt-sdmmc.c b/drivers/mmc/host/wmt-sdmmc.c
+index 3933195488575..3fcc81e48ad66 100644
+--- a/drivers/mmc/host/wmt-sdmmc.c
++++ b/drivers/mmc/host/wmt-sdmmc.c
+@@ -889,7 +889,6 @@ static int wmt_mci_remove(struct platform_device *pdev)
+ {
+ 	struct mmc_host *mmc;
+ 	struct wmt_mci_priv *priv;
+-	struct resource *res;
+ 	u32 reg_tmp;
+ 
+ 	mmc = platform_get_drvdata(pdev);
+@@ -917,9 +916,6 @@ static int wmt_mci_remove(struct platform_device *pdev)
+ 	clk_disable_unprepare(priv->clk_sdmmc);
+ 	clk_put(priv->clk_sdmmc);
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	release_mem_region(res->start, resource_size(res));
+-
+ 	mmc_free_host(mmc);
+ 
+ 	dev_info(&pdev->dev, "WMT MCI device removed\n");
+diff --git a/drivers/mtd/maps/physmap-core.c b/drivers/mtd/maps/physmap-core.c
+index 9ab795f03c546..e5552093585e2 100644
+--- a/drivers/mtd/maps/physmap-core.c
++++ b/drivers/mtd/maps/physmap-core.c
+@@ -528,7 +528,7 @@ static int physmap_flash_probe(struct platform_device *dev)
+ 		if (!info->maps[i].phys)
+ 			info->maps[i].phys = res->start;
+ 
+-		info->win_order = get_bitmask_order(resource_size(res)) - 1;
++		info->win_order = fls64(resource_size(res)) - 1;
+ 		info->maps[i].size = BIT(info->win_order +
+ 					 (info->gpios ?
+ 					  info->gpios->ndescs : 0));
+diff --git a/drivers/mtd/nand/raw/lpc32xx_mlc.c b/drivers/mtd/nand/raw/lpc32xx_mlc.c
+index 9e728c7317956..db228460a9079 100644
+--- a/drivers/mtd/nand/raw/lpc32xx_mlc.c
++++ b/drivers/mtd/nand/raw/lpc32xx_mlc.c
+@@ -304,8 +304,9 @@ static int lpc32xx_nand_device_ready(struct nand_chip *nand_chip)
+ 	return 0;
+ }
+ 
+-static irqreturn_t lpc3xxx_nand_irq(int irq, struct lpc32xx_nand_host *host)
++static irqreturn_t lpc3xxx_nand_irq(int irq, void *data)
+ {
++	struct lpc32xx_nand_host *host = data;
+ 	uint8_t sr;
+ 
+ 	/* Clear interrupt flag by reading status */
+@@ -780,7 +781,7 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
+ 		goto release_dma_chan;
+ 	}
+ 
+-	if (request_irq(host->irq, (irq_handler_t)&lpc3xxx_nand_irq,
++	if (request_irq(host->irq, &lpc3xxx_nand_irq,
+ 			IRQF_TRIGGER_HIGH, DRV_NAME, host)) {
+ 		dev_err(&pdev->dev, "Error requesting NAND IRQ\n");
+ 		res = -ENXIO;
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 4056ca4255be7..0fbed76bb9697 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1581,11 +1581,11 @@ mt7530_setup(struct dsa_switch *ds)
+ 	 */
+ 	if (priv->mcm) {
+ 		reset_control_assert(priv->rstc);
+-		usleep_range(1000, 1100);
++		usleep_range(5000, 5100);
+ 		reset_control_deassert(priv->rstc);
+ 	} else {
+ 		gpiod_set_value_cansleep(priv->reset, 0);
+-		usleep_range(1000, 1100);
++		usleep_range(5000, 5100);
+ 		gpiod_set_value_cansleep(priv->reset, 1);
+ 	}
+ 
+@@ -1704,11 +1704,11 @@ mt7531_setup(struct dsa_switch *ds)
+ 	 */
+ 	if (priv->mcm) {
+ 		reset_control_assert(priv->rstc);
+-		usleep_range(1000, 1100);
++		usleep_range(5000, 5100);
+ 		reset_control_deassert(priv->rstc);
+ 	} else {
+ 		gpiod_set_value_cansleep(priv->reset, 0);
+-		usleep_range(1000, 1100);
++		usleep_range(5000, 5100);
+ 		gpiod_set_value_cansleep(priv->reset, 1);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index e13ae04d2f0fd..fa65971949fce 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -3057,22 +3057,6 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	return NETDEV_TX_OK;
+ }
+ 
+-static u16 ena_select_queue(struct net_device *dev, struct sk_buff *skb,
+-			    struct net_device *sb_dev)
+-{
+-	u16 qid;
+-	/* we suspect that this is good for in--kernel network services that
+-	 * want to loop incoming skb rx to tx in normal user generated traffic,
+-	 * most probably we will not get to this
+-	 */
+-	if (skb_rx_queue_recorded(skb))
+-		qid = skb_get_rx_queue(skb);
+-	else
+-		qid = netdev_pick_tx(dev, skb, NULL);
+-
+-	return qid;
+-}
+-
+ static void ena_config_host_info(struct ena_com_dev *ena_dev, struct pci_dev *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -3242,7 +3226,6 @@ static const struct net_device_ops ena_netdev_ops = {
+ 	.ndo_open		= ena_open,
+ 	.ndo_stop		= ena_close,
+ 	.ndo_start_xmit		= ena_start_xmit,
+-	.ndo_select_queue	= ena_select_queue,
+ 	.ndo_get_stats64	= ena_get_stats64,
+ 	.ndo_tx_timeout		= ena_tx_timeout,
+ 	.ndo_change_mtu		= ena_change_mtu,
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+index d8b1824c334d3..0bc1367fd6492 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+@@ -1002,9 +1002,6 @@ static inline void bnx2x_set_fw_mac_addr(__le16 *fw_hi, __le16 *fw_mid,
+ static inline void bnx2x_free_rx_mem_pool(struct bnx2x *bp,
+ 					  struct bnx2x_alloc_pool *pool)
+ {
+-	if (!pool->page)
+-		return;
+-
+ 	put_page(pool->page);
+ 
+ 	pool->page = NULL;
+@@ -1015,6 +1012,9 @@ static inline void bnx2x_free_rx_sge_range(struct bnx2x *bp,
+ {
+ 	int i;
+ 
++	if (!fp->page_pool.page)
++		return;
++
+ 	if (fp->mode == TPA_MODE_DISABLED)
+ 		return;
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index deba485ced1bd..c14c391a0cec6 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -2723,7 +2723,10 @@ static int hclge_mac_init(struct hclge_dev *hdev)
+ 	int ret;
+ 
+ 	hdev->support_sfp_query = true;
+-	hdev->hw.mac.duplex = HCLGE_MAC_FULL;
++
++	if (!test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
++		hdev->hw.mac.duplex = HCLGE_MAC_FULL;
++
+ 	ret = hclge_cfg_mac_speed_dup_hw(hdev, hdev->hw.mac.speed,
+ 					 hdev->hw.mac.duplex);
+ 	if (ret)
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 01176c86be125..0848613c3f45a 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -6762,77 +6762,75 @@ void igb_update_stats(struct igb_adapter *adapter)
+ 	}
+ }
+ 
++static void igb_perout(struct igb_adapter *adapter, int tsintr_tt)
++{
++	int pin = ptp_find_pin(adapter->ptp_clock, PTP_PF_PEROUT, tsintr_tt);
++	struct e1000_hw *hw = &adapter->hw;
++	struct timespec64 ts;
++	u32 tsauxc;
++
++	if (pin < 0 || pin >= IGB_N_PEROUT)
++		return;
++
++	spin_lock(&adapter->tmreg_lock);
++	ts = timespec64_add(adapter->perout[pin].start,
++			    adapter->perout[pin].period);
++	/* u32 conversion of tv_sec is safe until y2106 */
++	wr32((tsintr_tt == 1) ? E1000_TRGTTIML1 : E1000_TRGTTIML0, ts.tv_nsec);
++	wr32((tsintr_tt == 1) ? E1000_TRGTTIMH1 : E1000_TRGTTIMH0, (u32)ts.tv_sec);
++	tsauxc = rd32(E1000_TSAUXC);
++	tsauxc |= TSAUXC_EN_TT0;
++	wr32(E1000_TSAUXC, tsauxc);
++	adapter->perout[pin].start = ts;
++	spin_unlock(&adapter->tmreg_lock);
++}
++
++static void igb_extts(struct igb_adapter *adapter, int tsintr_tt)
++{
++	int pin = ptp_find_pin(adapter->ptp_clock, PTP_PF_EXTTS, tsintr_tt);
++	struct e1000_hw *hw = &adapter->hw;
++	struct ptp_clock_event event;
++	u32 sec, nsec;
++
++	if (pin < 0 || pin >= IGB_N_EXTTS)
++		return;
++
++	nsec = rd32((tsintr_tt == 1) ? E1000_AUXSTMPL1 : E1000_AUXSTMPL0);
++	sec  = rd32((tsintr_tt == 1) ? E1000_AUXSTMPH1 : E1000_AUXSTMPH0);
++	event.type = PTP_CLOCK_EXTTS;
++	event.index = tsintr_tt;
++	event.timestamp = sec * 1000000000ULL + nsec;
++	ptp_clock_event(adapter->ptp_clock, &event);
++}
++
+ static void igb_tsync_interrupt(struct igb_adapter *adapter)
+ {
+ 	struct e1000_hw *hw = &adapter->hw;
++	u32 tsicr = rd32(E1000_TSICR);
+ 	struct ptp_clock_event event;
+-	struct timespec64 ts;
+-	u32 ack = 0, tsauxc, sec, nsec, tsicr = rd32(E1000_TSICR);
+ 
+ 	if (tsicr & TSINTR_SYS_WRAP) {
+ 		event.type = PTP_CLOCK_PPS;
+ 		if (adapter->ptp_caps.pps)
+ 			ptp_clock_event(adapter->ptp_clock, &event);
+-		ack |= TSINTR_SYS_WRAP;
+ 	}
+ 
+ 	if (tsicr & E1000_TSICR_TXTS) {
+ 		/* retrieve hardware timestamp */
+ 		schedule_work(&adapter->ptp_tx_work);
+-		ack |= E1000_TSICR_TXTS;
+-	}
+-
+-	if (tsicr & TSINTR_TT0) {
+-		spin_lock(&adapter->tmreg_lock);
+-		ts = timespec64_add(adapter->perout[0].start,
+-				    adapter->perout[0].period);
+-		/* u32 conversion of tv_sec is safe until y2106 */
+-		wr32(E1000_TRGTTIML0, ts.tv_nsec);
+-		wr32(E1000_TRGTTIMH0, (u32)ts.tv_sec);
+-		tsauxc = rd32(E1000_TSAUXC);
+-		tsauxc |= TSAUXC_EN_TT0;
+-		wr32(E1000_TSAUXC, tsauxc);
+-		adapter->perout[0].start = ts;
+-		spin_unlock(&adapter->tmreg_lock);
+-		ack |= TSINTR_TT0;
+-	}
+-
+-	if (tsicr & TSINTR_TT1) {
+-		spin_lock(&adapter->tmreg_lock);
+-		ts = timespec64_add(adapter->perout[1].start,
+-				    adapter->perout[1].period);
+-		wr32(E1000_TRGTTIML1, ts.tv_nsec);
+-		wr32(E1000_TRGTTIMH1, (u32)ts.tv_sec);
+-		tsauxc = rd32(E1000_TSAUXC);
+-		tsauxc |= TSAUXC_EN_TT1;
+-		wr32(E1000_TSAUXC, tsauxc);
+-		adapter->perout[1].start = ts;
+-		spin_unlock(&adapter->tmreg_lock);
+-		ack |= TSINTR_TT1;
+-	}
+-
+-	if (tsicr & TSINTR_AUTT0) {
+-		nsec = rd32(E1000_AUXSTMPL0);
+-		sec  = rd32(E1000_AUXSTMPH0);
+-		event.type = PTP_CLOCK_EXTTS;
+-		event.index = 0;
+-		event.timestamp = sec * 1000000000ULL + nsec;
+-		ptp_clock_event(adapter->ptp_clock, &event);
+-		ack |= TSINTR_AUTT0;
+-	}
+-
+-	if (tsicr & TSINTR_AUTT1) {
+-		nsec = rd32(E1000_AUXSTMPL1);
+-		sec  = rd32(E1000_AUXSTMPH1);
+-		event.type = PTP_CLOCK_EXTTS;
+-		event.index = 1;
+-		event.timestamp = sec * 1000000000ULL + nsec;
+-		ptp_clock_event(adapter->ptp_clock, &event);
+-		ack |= TSINTR_AUTT1;
+-	}
+-
+-	/* acknowledge the interrupts */
+-	wr32(E1000_TSICR, ack);
++	}
++
++	if (tsicr & TSINTR_TT0)
++		igb_perout(adapter, 0);
++
++	if (tsicr & TSINTR_TT1)
++		igb_perout(adapter, 1);
++
++	if (tsicr & TSINTR_AUTT0)
++		igb_extts(adapter, 0);
++
++	if (tsicr & TSINTR_AUTT1)
++		igb_extts(adapter, 1);
+ }
+ 
+ static irqreturn_t igb_msix_other(int irq, void *data)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index c0a0a31272cc2..55dfe1a20bc99 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -714,7 +714,7 @@ static irqreturn_t cgx_fwi_event_handler(int irq, void *data)
+ 
+ 		/* Release thread waiting for completion  */
+ 		lmac->cmd_pend = false;
+-		wake_up_interruptible(&lmac->wq_cmd_cmplt);
++		wake_up(&lmac->wq_cmd_cmplt);
+ 		break;
+ 	case CGX_EVT_ASYNC:
+ 		if (cgx_event_is_linkevent(event))
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index acbc67074f59c..23b829f974de1 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -1969,10 +1969,9 @@ static void rvu_queue_work(struct mbox_wq_info *mw, int first,
+ 	}
+ }
+ 
+-static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
++static irqreturn_t rvu_mbox_pf_intr_handler(int irq, void *rvu_irq)
+ {
+ 	struct rvu *rvu = (struct rvu *)rvu_irq;
+-	int vfs = rvu->vfs;
+ 	u64 intr;
+ 
+ 	intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT);
+@@ -1986,6 +1985,18 @@ static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
+ 
+ 	rvu_queue_work(&rvu->afpf_wq_info, 0, rvu->hw->total_pfs, intr);
+ 
++	return IRQ_HANDLED;
++}
++
++static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
++{
++	struct rvu *rvu = (struct rvu *)rvu_irq;
++	int vfs = rvu->vfs;
++	u64 intr;
++
++	/* Sync with mbox memory region */
++	rmb();
++
+ 	/* Handle VF interrupts */
+ 	if (vfs > 64) {
+ 		intr = rvupf_read64(rvu, RVU_PF_VFPF_MBOX_INTX(1));
+@@ -2300,7 +2311,7 @@ static int rvu_register_interrupts(struct rvu *rvu)
+ 	/* Register mailbox interrupt handler */
+ 	sprintf(&rvu->irq_name[RVU_AF_INT_VEC_MBOX * NAME_SIZE], "RVUAF Mbox");
+ 	ret = request_irq(pci_irq_vector(rvu->pdev, RVU_AF_INT_VEC_MBOX),
+-			  rvu_mbox_intr_handler, 0,
++			  rvu_mbox_pf_intr_handler, 0,
+ 			  &rvu->irq_name[RVU_AF_INT_VEC_MBOX * NAME_SIZE], rvu);
+ 	if (ret) {
+ 		dev_err(rvu->dev,
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c b/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
+index 63907aeb3884e..3167f9675ae0f 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
+@@ -308,6 +308,11 @@ static void nfp_fl_lag_do_work(struct work_struct *work)
+ 
+ 		acti_netdevs = kmalloc_array(entry->slave_cnt,
+ 					     sizeof(*acti_netdevs), GFP_KERNEL);
++		if (!acti_netdevs) {
++			schedule_delayed_work(&lag->work,
++					      NFP_FL_LAG_DELAY);
++			continue;
++		}
+ 
+ 		/* Include sanity check in the loop. It may be that a bond has
+ 		 * changed between processing the last notification and the
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index 81412999445d8..14c5e082ccc8f 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -94,7 +94,8 @@
+ #define DP83822_WOL_INDICATION_SEL BIT(8)
+ #define DP83822_WOL_CLR_INDICATION BIT(11)
+ 
+-/* RSCR bits */
++/* RCSR bits */
++#define DP83822_RGMII_MODE_EN	BIT(9)
+ #define DP83822_RX_CLK_SHIFT	BIT(12)
+ #define DP83822_TX_CLK_SHIFT	BIT(11)
+ 
+@@ -358,7 +359,7 @@ static int dp83822_config_init(struct phy_device *phydev)
+ {
+ 	struct dp83822_private *dp83822 = phydev->priv;
+ 	struct device *dev = &phydev->mdio.dev;
+-	int rgmii_delay;
++	int rgmii_delay = 0;
+ 	s32 rx_int_delay;
+ 	s32 tx_int_delay;
+ 	int err = 0;
+@@ -368,24 +369,33 @@ static int dp83822_config_init(struct phy_device *phydev)
+ 		rx_int_delay = phy_get_internal_delay(phydev, dev, NULL, 0,
+ 						      true);
+ 
+-		if (rx_int_delay <= 0)
+-			rgmii_delay = 0;
+-		else
+-			rgmii_delay = DP83822_RX_CLK_SHIFT;
++		/* Set DP83822_RX_CLK_SHIFT to enable rx clk internal delay */
++		if (rx_int_delay > 0)
++			rgmii_delay |= DP83822_RX_CLK_SHIFT;
+ 
+ 		tx_int_delay = phy_get_internal_delay(phydev, dev, NULL, 0,
+ 						      false);
++
++		/* Set DP83822_TX_CLK_SHIFT to disable tx clk internal delay */
+ 		if (tx_int_delay <= 0)
+-			rgmii_delay &= ~DP83822_TX_CLK_SHIFT;
+-		else
+ 			rgmii_delay |= DP83822_TX_CLK_SHIFT;
+ 
+-		if (rgmii_delay) {
+-			err = phy_set_bits_mmd(phydev, DP83822_DEVADDR,
+-					       MII_DP83822_RCSR, rgmii_delay);
+-			if (err)
+-				return err;
+-		}
++		err = phy_modify_mmd(phydev, DP83822_DEVADDR, MII_DP83822_RCSR,
++				     DP83822_RX_CLK_SHIFT | DP83822_TX_CLK_SHIFT, rgmii_delay);
++		if (err)
++			return err;
++
++		err = phy_set_bits_mmd(phydev, DP83822_DEVADDR,
++				       MII_DP83822_RCSR, DP83822_RGMII_MODE_EN);
++
++		if (err)
++			return err;
++	} else {
++		err = phy_clear_bits_mmd(phydev, DP83822_DEVADDR,
++					 MII_DP83822_RCSR, DP83822_RGMII_MODE_EN);
++
++		if (err)
++			return err;
+ 	}
+ 
+ 	if (dp83822->fx_enabled) {
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 095d16ceafcf8..8654e05ddc415 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -2769,7 +2769,7 @@ s32 phy_get_internal_delay(struct phy_device *phydev, struct device *dev,
+ 	if (delay < 0)
+ 		return delay;
+ 
+-	if (delay && size == 0)
++	if (size == 0)
+ 		return delay;
+ 
+ 	if (delay < delay_values[0] || delay > delay_values[size - 1]) {
+diff --git a/drivers/net/usb/sr9800.c b/drivers/net/usb/sr9800.c
+index 681e0def6356b..a5332e99102a5 100644
+--- a/drivers/net/usb/sr9800.c
++++ b/drivers/net/usb/sr9800.c
+@@ -736,7 +736,9 @@ static int sr9800_bind(struct usbnet *dev, struct usb_interface *intf)
+ 
+ 	data->eeprom_len = SR9800_EEPROM_LEN;
+ 
+-	usbnet_get_endpoints(dev, intf);
++	ret = usbnet_get_endpoints(dev, intf);
++	if (ret)
++		goto out;
+ 
+ 	/* LED Setting Rule :
+ 	 * AABB:CCDD
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
+index d38b24339a1f9..ed274e9bdf3ce 100644
+--- a/drivers/net/wireguard/receive.c
++++ b/drivers/net/wireguard/receive.c
+@@ -258,7 +258,7 @@ static bool decrypt_packet(struct sk_buff *skb, struct noise_keypair *keypair)
+ 
+ 	if (unlikely(!READ_ONCE(keypair->receiving.is_valid) ||
+ 		  wg_birthdate_has_expired(keypair->receiving.birthdate, REJECT_AFTER_TIME) ||
+-		  keypair->receiving_counter.counter >= REJECT_AFTER_MESSAGES)) {
++		  READ_ONCE(keypair->receiving_counter.counter) >= REJECT_AFTER_MESSAGES)) {
+ 		WRITE_ONCE(keypair->receiving.is_valid, false);
+ 		return false;
+ 	}
+@@ -325,7 +325,7 @@ static bool counter_validate(struct noise_replay_counter *counter, u64 their_cou
+ 		for (i = 1; i <= top; ++i)
+ 			counter->backtrack[(i + index_current) &
+ 				((COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1)] = 0;
+-		counter->counter = their_counter;
++		WRITE_ONCE(counter->counter, their_counter);
+ 	}
+ 
+ 	index &= (COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1;
+@@ -470,7 +470,7 @@ int wg_packet_rx_poll(struct napi_struct *napi, int budget)
+ 			net_dbg_ratelimited("%s: Packet has invalid nonce %llu (max %llu)\n",
+ 					    peer->device->dev->name,
+ 					    PACKET_CB(skb)->nonce,
+-					    keypair->receiving_counter.counter);
++					    READ_ONCE(keypair->receiving_counter.counter));
+ 			goto next;
+ 		}
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index d0967bb1f3871..57ac80997319b 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -3381,13 +3381,10 @@ EXPORT_SYMBOL(ath10k_core_create);
+ 
+ void ath10k_core_destroy(struct ath10k *ar)
+ {
+-	flush_workqueue(ar->workqueue);
+ 	destroy_workqueue(ar->workqueue);
+ 
+-	flush_workqueue(ar->workqueue_aux);
+ 	destroy_workqueue(ar->workqueue_aux);
+ 
+-	flush_workqueue(ar->workqueue_tx_complete);
+ 	destroy_workqueue(ar->workqueue_tx_complete);
+ 
+ 	ath10k_debug_destroy(ar);
+diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
+index 0fe639710a8bb..9d1b0890f3105 100644
+--- a/drivers/net/wireless/ath/ath10k/sdio.c
++++ b/drivers/net/wireless/ath/ath10k/sdio.c
+@@ -2651,7 +2651,6 @@ static void ath10k_sdio_remove(struct sdio_func *func)
+ 
+ 	ath10k_core_destroy(ar);
+ 
+-	flush_workqueue(ar_sdio->workqueue);
+ 	destroy_workqueue(ar_sdio->workqueue);
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 4f2fbc610d798..0eeb74245372f 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -844,6 +844,10 @@ ath10k_wmi_tlv_op_pull_mgmt_tx_compl_ev(struct ath10k *ar, struct sk_buff *skb,
+ 	}
+ 
+ 	ev = tb[WMI_TLV_TAG_STRUCT_MGMT_TX_COMPL_EVENT];
++	if (!ev) {
++		kfree(tb);
++		return -EPROTO;
++	}
+ 
+ 	arg->desc_id = ev->desc_id;
+ 	arg->status = ev->status;
+diff --git a/drivers/net/wireless/ath/ath9k/htc.h b/drivers/net/wireless/ath/ath9k/htc.h
+index 237f4ec2cffd7..6c33e898b3000 100644
+--- a/drivers/net/wireless/ath/ath9k/htc.h
++++ b/drivers/net/wireless/ath/ath9k/htc.h
+@@ -306,7 +306,6 @@ struct ath9k_htc_tx {
+ 	DECLARE_BITMAP(tx_slot, MAX_TX_BUF_NUM);
+ 	struct timer_list cleanup_timer;
+ 	spinlock_t tx_lock;
+-	bool initialized;
+ };
+ 
+ struct ath9k_htc_tx_ctl {
+@@ -515,6 +514,7 @@ struct ath9k_htc_priv {
+ 	unsigned long ps_usecount;
+ 	bool ps_enabled;
+ 	bool ps_idle;
++	bool initialized;
+ 
+ #ifdef CONFIG_MAC80211_LEDS
+ 	enum led_brightness brightness;
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+index 96a3185a96d75..b014185373f34 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+@@ -966,6 +966,10 @@ int ath9k_htc_probe_device(struct htc_target *htc_handle, struct device *dev,
+ 
+ 	htc_handle->drv_priv = priv;
+ 
++	/* Allow ath9k_wmi_event_tasklet() to operate. */
++	smp_wmb();
++	priv->initialized = true;
++
+ 	return 0;
+ 
+ err_init:
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+index 5037142c5a822..95146ec754d5c 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+@@ -810,10 +810,6 @@ int ath9k_tx_init(struct ath9k_htc_priv *priv)
+ 	skb_queue_head_init(&priv->tx.data_vo_queue);
+ 	skb_queue_head_init(&priv->tx.tx_failed);
+ 
+-	/* Allow ath9k_wmi_event_tasklet(WMI_TXSTATUS_EVENTID) to operate. */
+-	smp_wmb();
+-	priv->tx.initialized = true;
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index 1476b42b52a91..805ad31edba2b 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -155,6 +155,12 @@ void ath9k_wmi_event_tasklet(struct tasklet_struct *t)
+ 		}
+ 		spin_unlock_irqrestore(&wmi->wmi_lock, flags);
+ 
++		/* Check if ath9k_htc_probe_device() completed. */
++		if (!data_race(priv->initialized)) {
++			kfree_skb(skb);
++			continue;
++		}
++
+ 		hdr = (struct wmi_cmd_hdr *) skb->data;
+ 		cmd_id = be16_to_cpu(hdr->command_id);
+ 		wmi_event = skb_pull(skb, sizeof(struct wmi_cmd_hdr));
+@@ -169,10 +175,6 @@ void ath9k_wmi_event_tasklet(struct tasklet_struct *t)
+ 					     &wmi->drv_priv->fatal_work);
+ 			break;
+ 		case WMI_TXSTATUS_EVENTID:
+-			/* Check if ath9k_tx_init() completed. */
+-			if (!data_race(priv->tx.initialized))
+-				break;
+-
+ 			spin_lock_bh(&priv->tx.tx_lock);
+ 			if (priv->tx.flags & ATH9K_HTC_OP_TX_DRAIN) {
+ 				spin_unlock_bh(&priv->tx.tx_lock);
+diff --git a/drivers/net/wireless/broadcom/b43/b43.h b/drivers/net/wireless/broadcom/b43/b43.h
+index 67b4bac048e58..c0d8fc0b22fb2 100644
+--- a/drivers/net/wireless/broadcom/b43/b43.h
++++ b/drivers/net/wireless/broadcom/b43/b43.h
+@@ -1082,6 +1082,22 @@ static inline bool b43_using_pio_transfers(struct b43_wldev *dev)
+ 	return dev->__using_pio_transfers;
+ }
+ 
++static inline void b43_wake_queue(struct b43_wldev *dev, int queue_prio)
++{
++	if (dev->qos_enabled)
++		ieee80211_wake_queue(dev->wl->hw, queue_prio);
++	else
++		ieee80211_wake_queue(dev->wl->hw, 0);
++}
++
++static inline void b43_stop_queue(struct b43_wldev *dev, int queue_prio)
++{
++	if (dev->qos_enabled)
++		ieee80211_stop_queue(dev->wl->hw, queue_prio);
++	else
++		ieee80211_stop_queue(dev->wl->hw, 0);
++}
++
+ /* Message printing */
+ __printf(2, 3) void b43info(struct b43_wl *wl, const char *fmt, ...);
+ __printf(2, 3) void b43err(struct b43_wl *wl, const char *fmt, ...);
+diff --git a/drivers/net/wireless/broadcom/b43/dma.c b/drivers/net/wireless/broadcom/b43/dma.c
+index 9a7c62bd5e431..cfaf2f9d67b22 100644
+--- a/drivers/net/wireless/broadcom/b43/dma.c
++++ b/drivers/net/wireless/broadcom/b43/dma.c
+@@ -1399,7 +1399,7 @@ int b43_dma_tx(struct b43_wldev *dev, struct sk_buff *skb)
+ 	    should_inject_overflow(ring)) {
+ 		/* This TX ring is full. */
+ 		unsigned int skb_mapping = skb_get_queue_mapping(skb);
+-		ieee80211_stop_queue(dev->wl->hw, skb_mapping);
++		b43_stop_queue(dev, skb_mapping);
+ 		dev->wl->tx_queue_stopped[skb_mapping] = true;
+ 		ring->stopped = true;
+ 		if (b43_debug(dev, B43_DBG_DMAVERBOSE)) {
+@@ -1570,7 +1570,7 @@ void b43_dma_handle_txstatus(struct b43_wldev *dev,
+ 	} else {
+ 		/* If the driver queue is running wake the corresponding
+ 		 * mac80211 queue. */
+-		ieee80211_wake_queue(dev->wl->hw, ring->queue_prio);
++		b43_wake_queue(dev, ring->queue_prio);
+ 		if (b43_debug(dev, B43_DBG_DMAVERBOSE)) {
+ 			b43dbg(dev->wl, "Woke up TX ring %d\n", ring->index);
+ 		}
+diff --git a/drivers/net/wireless/broadcom/b43/main.c b/drivers/net/wireless/broadcom/b43/main.c
+index f175dbaffc300..29f97ab9b72a6 100644
+--- a/drivers/net/wireless/broadcom/b43/main.c
++++ b/drivers/net/wireless/broadcom/b43/main.c
+@@ -2587,7 +2587,8 @@ static void b43_request_firmware(struct work_struct *work)
+ 
+ start_ieee80211:
+ 	wl->hw->queues = B43_QOS_QUEUE_NUM;
+-	if (!modparam_qos || dev->fw.opensource)
++	if (!modparam_qos || dev->fw.opensource ||
++	    dev->dev->chip_id == BCMA_CHIP_ID_BCM4331)
+ 		wl->hw->queues = 1;
+ 
+ 	err = ieee80211_register_hw(wl->hw);
+@@ -3603,7 +3604,7 @@ static void b43_tx_work(struct work_struct *work)
+ 				err = b43_dma_tx(dev, skb);
+ 			if (err == -ENOSPC) {
+ 				wl->tx_queue_stopped[queue_num] = true;
+-				ieee80211_stop_queue(wl->hw, queue_num);
++				b43_stop_queue(dev, queue_num);
+ 				skb_queue_head(&wl->tx_queue[queue_num], skb);
+ 				break;
+ 			}
+@@ -3627,6 +3628,7 @@ static void b43_op_tx(struct ieee80211_hw *hw,
+ 		      struct sk_buff *skb)
+ {
+ 	struct b43_wl *wl = hw_to_b43_wl(hw);
++	u16 skb_queue_mapping;
+ 
+ 	if (unlikely(skb->len < 2 + 2 + 6)) {
+ 		/* Too short, this can't be a valid frame. */
+@@ -3635,12 +3637,12 @@ static void b43_op_tx(struct ieee80211_hw *hw,
+ 	}
+ 	B43_WARN_ON(skb_shinfo(skb)->nr_frags);
+ 
+-	skb_queue_tail(&wl->tx_queue[skb->queue_mapping], skb);
+-	if (!wl->tx_queue_stopped[skb->queue_mapping]) {
++	skb_queue_mapping = skb_get_queue_mapping(skb);
++	skb_queue_tail(&wl->tx_queue[skb_queue_mapping], skb);
++	if (!wl->tx_queue_stopped[skb_queue_mapping])
+ 		ieee80211_queue_work(wl->hw, &wl->tx_work);
+-	} else {
+-		ieee80211_stop_queue(wl->hw, skb->queue_mapping);
+-	}
++	else
++		b43_stop_queue(wl->current_dev, skb_queue_mapping);
+ }
+ 
+ static void b43_qos_params_upload(struct b43_wldev *dev,
+diff --git a/drivers/net/wireless/broadcom/b43/pio.c b/drivers/net/wireless/broadcom/b43/pio.c
+index 8c28a9250cd19..cc19b589fa70d 100644
+--- a/drivers/net/wireless/broadcom/b43/pio.c
++++ b/drivers/net/wireless/broadcom/b43/pio.c
+@@ -525,7 +525,7 @@ int b43_pio_tx(struct b43_wldev *dev, struct sk_buff *skb)
+ 	if (total_len > (q->buffer_size - q->buffer_used)) {
+ 		/* Not enough memory on the queue. */
+ 		err = -EBUSY;
+-		ieee80211_stop_queue(dev->wl->hw, skb_get_queue_mapping(skb));
++		b43_stop_queue(dev, skb_get_queue_mapping(skb));
+ 		q->stopped = true;
+ 		goto out;
+ 	}
+@@ -552,7 +552,7 @@ int b43_pio_tx(struct b43_wldev *dev, struct sk_buff *skb)
+ 	if (((q->buffer_size - q->buffer_used) < roundup(2 + 2 + 6, 4)) ||
+ 	    (q->free_packet_slots == 0)) {
+ 		/* The queue is full. */
+-		ieee80211_stop_queue(dev->wl->hw, skb_get_queue_mapping(skb));
++		b43_stop_queue(dev, skb_get_queue_mapping(skb));
+ 		q->stopped = true;
+ 	}
+ 
+@@ -587,7 +587,7 @@ void b43_pio_handle_txstatus(struct b43_wldev *dev,
+ 	list_add(&pack->list, &q->packets_list);
+ 
+ 	if (q->stopped) {
+-		ieee80211_wake_queue(dev->wl->hw, q->queue_prio);
++		b43_wake_queue(dev, q->queue_prio);
+ 		q->stopped = false;
+ 	}
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c
+index ccc621b8ed9f2..4a1fe982a948e 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c
+@@ -383,8 +383,9 @@ struct shared_phy *wlc_phy_shared_attach(struct shared_phy_params *shp)
+ 	return sh;
+ }
+ 
+-static void wlc_phy_timercb_phycal(struct brcms_phy *pi)
++static void wlc_phy_timercb_phycal(void *ptr)
+ {
++	struct brcms_phy *pi = ptr;
+ 	uint delay = 5;
+ 
+ 	if (PHY_PERICAL_MPHASE_PENDING(pi)) {
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.c
+index a0de5db0cd646..b723817915365 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.c
+@@ -57,12 +57,11 @@ void wlc_phy_shim_detach(struct phy_shim_info *physhim)
+ }
+ 
+ struct wlapi_timer *wlapi_init_timer(struct phy_shim_info *physhim,
+-				     void (*fn)(struct brcms_phy *pi),
++				     void (*fn)(void *pi),
+ 				     void *arg, const char *name)
+ {
+ 	return (struct wlapi_timer *)
+-			brcms_init_timer(physhim->wl, (void (*)(void *))fn,
+-					 arg, name);
++			brcms_init_timer(physhim->wl, fn, arg, name);
+ }
+ 
+ void wlapi_free_timer(struct wlapi_timer *t)
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.h b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.h
+index dd8774717adee..27d0934e600ed 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.h
+@@ -131,7 +131,7 @@ void wlc_phy_shim_detach(struct phy_shim_info *physhim);
+ 
+ /* PHY to WL utility functions */
+ struct wlapi_timer *wlapi_init_timer(struct phy_shim_info *physhim,
+-				     void (*fn)(struct brcms_phy *pi),
++				     void (*fn)(void *pi),
+ 				     void *arg, const char *name);
+ void wlapi_free_timer(struct wlapi_timer *t);
+ void wlapi_add_timer(struct wlapi_timer *t, uint ms, int periodic);
+diff --git a/drivers/net/wireless/intel/iwlegacy/3945-mac.c b/drivers/net/wireless/intel/iwlegacy/3945-mac.c
+index ef0ac42a55a2a..55c00a07bc4d3 100644
+--- a/drivers/net/wireless/intel/iwlegacy/3945-mac.c
++++ b/drivers/net/wireless/intel/iwlegacy/3945-mac.c
+@@ -3831,7 +3831,6 @@ il3945_pci_remove(struct pci_dev *pdev)
+ 	il3945_unset_hw_params(il);
+ 
+ 	/*netif_stop_queue(dev); */
+-	flush_workqueue(il->workqueue);
+ 
+ 	/* ieee80211_unregister_hw calls il3945_mac_stop, which flushes
+ 	 * il->workqueue... so we can't take down the workqueue
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+index 12cf22d0e9949..2549902552e1d 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+@@ -6745,7 +6745,6 @@ il4965_pci_remove(struct pci_dev *pdev)
+ 	il_eeprom_free(il);
+ 
+ 	/*netif_stop_queue(dev); */
+-	flush_workqueue(il->workqueue);
+ 
+ 	/* ieee80211_unregister_hw calls il_mac_stop, which flushes
+ 	 * il->workqueue... so we can't take down the workqueue
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/main.c b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+index 461af58311561..6a19fc4c68604 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/main.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+@@ -1526,7 +1526,6 @@ static void iwl_op_mode_dvm_stop(struct iwl_op_mode *op_mode)
+ 	kfree(priv->nvm_data);
+ 
+ 	/*netif_stop_queue(dev); */
+-	flush_workqueue(priv->workqueue);
+ 
+ 	/* ieee80211_unregister_hw calls iwlagn_mac_stop, which flushes
+ 	 * priv->workqueue... so we can't take down the workqueue
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index 5e4faf9ce4bbe..fc35f8f84376c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -555,7 +555,7 @@ int iwl_sar_get_ewrd_table(struct iwl_fw_runtime *fwrt)
+ 	 * from index 1, so the maximum value allowed here is
+ 	 * ACPI_SAR_PROFILES_NUM - 1.
+ 	 */
+-	if (n_profiles <= 0 || n_profiles >= ACPI_SAR_PROFILE_NUM) {
++	if (n_profiles >= ACPI_SAR_PROFILE_NUM) {
+ 		ret = -EINVAL;
+ 		goto out_free;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index bcaec8a184cd6..299819d2d1904 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -155,6 +155,12 @@ static int iwl_dbg_tlv_alloc_debug_info(struct iwl_trans *trans,
+ 	if (le32_to_cpu(tlv->length) != sizeof(*debug_info))
+ 		return -EINVAL;
+ 
++	/* we use this as a string, ensure input was NUL terminated */
++	if (strnlen(debug_info->debug_cfg_name,
++		    sizeof(debug_info->debug_cfg_name)) ==
++			sizeof(debug_info->debug_cfg_name))
++		return -EINVAL;
++
+ 	IWL_DEBUG_FW(trans, "WRT: Loading debug cfg: %s\n",
+ 		     debug_info->debug_cfg_name);
+ 
+diff --git a/drivers/net/wireless/marvell/libertas/cmd.c b/drivers/net/wireless/marvell/libertas/cmd.c
+index a4d9dd73b2588..db9a852fa58a3 100644
+--- a/drivers/net/wireless/marvell/libertas/cmd.c
++++ b/drivers/net/wireless/marvell/libertas/cmd.c
+@@ -1133,7 +1133,7 @@ int lbs_allocate_cmd_buffer(struct lbs_private *priv)
+ 		if (!cmdarray[i].cmdbuf) {
+ 			lbs_deb_host("ALLOC_CMD_BUF: ptempvirtualaddr is NULL\n");
+ 			ret = -1;
+-			goto done;
++			goto free_cmd_array;
+ 		}
+ 	}
+ 
+@@ -1141,8 +1141,17 @@ int lbs_allocate_cmd_buffer(struct lbs_private *priv)
+ 		init_waitqueue_head(&cmdarray[i].cmdwait_q);
+ 		lbs_cleanup_and_insert_cmd(priv, &cmdarray[i]);
+ 	}
+-	ret = 0;
++	return 0;
+ 
++free_cmd_array:
++	for (i = 0; i < LBS_NUM_CMD_BUFFERS; i++) {
++		if (cmdarray[i].cmdbuf) {
++			kfree(cmdarray[i].cmdbuf);
++			cmdarray[i].cmdbuf = NULL;
++		}
++	}
++	kfree(priv->cmd_array);
++	priv->cmd_array = NULL;
+ done:
+ 	return ret;
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 2f5f1ff22a601..e1196c565a62f 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -3155,13 +3155,11 @@ int mwifiex_del_virtual_intf(struct wiphy *wiphy, struct wireless_dev *wdev)
+ 		unregister_netdevice(wdev->netdev);
+ 
+ 	if (priv->dfs_cac_workqueue) {
+-		flush_workqueue(priv->dfs_cac_workqueue);
+ 		destroy_workqueue(priv->dfs_cac_workqueue);
+ 		priv->dfs_cac_workqueue = NULL;
+ 	}
+ 
+ 	if (priv->dfs_chan_sw_workqueue) {
+-		flush_workqueue(priv->dfs_chan_sw_workqueue);
+ 		destroy_workqueue(priv->dfs_chan_sw_workqueue);
+ 		priv->dfs_chan_sw_workqueue = NULL;
+ 	}
+diff --git a/drivers/net/wireless/marvell/mwifiex/debugfs.c b/drivers/net/wireless/marvell/mwifiex/debugfs.c
+index 1e7dc724c6a94..d48a3e0b36060 100644
+--- a/drivers/net/wireless/marvell/mwifiex/debugfs.c
++++ b/drivers/net/wireless/marvell/mwifiex/debugfs.c
+@@ -976,9 +976,6 @@ mwifiex_dev_debugfs_init(struct mwifiex_private *priv)
+ 	priv->dfs_dev_dir = debugfs_create_dir(priv->netdev->name,
+ 					       mwifiex_dfs_dir);
+ 
+-	if (!priv->dfs_dev_dir)
+-		return;
+-
+ 	MWIFIEX_DFS_ADD_FILE(info);
+ 	MWIFIEX_DFS_ADD_FILE(debug);
+ 	MWIFIEX_DFS_ADD_FILE(getlog);
+diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c
+index 6283df5aaaf8b..b8b79fe50dbc2 100644
+--- a/drivers/net/wireless/marvell/mwifiex/main.c
++++ b/drivers/net/wireless/marvell/mwifiex/main.c
+@@ -498,13 +498,11 @@ static void mwifiex_free_adapter(struct mwifiex_adapter *adapter)
+ static void mwifiex_terminate_workqueue(struct mwifiex_adapter *adapter)
+ {
+ 	if (adapter->workqueue) {
+-		flush_workqueue(adapter->workqueue);
+ 		destroy_workqueue(adapter->workqueue);
+ 		adapter->workqueue = NULL;
+ 	}
+ 
+ 	if (adapter->rx_workqueue) {
+-		flush_workqueue(adapter->rx_workqueue);
+ 		destroy_workqueue(adapter->rx_workqueue);
+ 		adapter->rx_workqueue = NULL;
+ 	}
+diff --git a/drivers/net/wireless/microchip/wilc1000/cfg80211.c b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+index dd26f20861807..5d4f9e9a81e05 100644
+--- a/drivers/net/wireless/microchip/wilc1000/cfg80211.c
++++ b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+@@ -1562,7 +1562,6 @@ static int del_virtual_intf(struct wiphy *wiphy, struct wireless_dev *wdev)
+ 	unregister_netdevice(vif->ndev);
+ 	vif->monitor_flag = 0;
+ 
+-	wilc_set_operation_mode(vif, 0, 0, 0);
+ 	mutex_lock(&wl->vif_mutex);
+ 	list_del_rcu(&vif->list);
+ 	wl->vif_num--;
+diff --git a/drivers/net/wireless/microchip/wilc1000/hif.c b/drivers/net/wireless/microchip/wilc1000/hif.c
+index 884f45e627a72..457386f9de990 100644
+--- a/drivers/net/wireless/microchip/wilc1000/hif.c
++++ b/drivers/net/wireless/microchip/wilc1000/hif.c
+@@ -359,38 +359,49 @@ static void handle_connect_timeout(struct work_struct *work)
+ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 				struct cfg80211_crypto_settings *crypto)
+ {
+-	struct wilc_join_bss_param *param;
+-	struct ieee80211_p2p_noa_attr noa_attr;
+-	u8 rates_len = 0;
+-	const u8 *tim_elm, *ssid_elm, *rates_ie, *supp_rates_ie;
++	const u8 *ies_data, *tim_elm, *ssid_elm, *rates_ie, *supp_rates_ie;
+ 	const u8 *ht_ie, *wpa_ie, *wmm_ie, *rsn_ie;
++	struct ieee80211_p2p_noa_attr noa_attr;
++	const struct cfg80211_bss_ies *ies;
++	struct wilc_join_bss_param *param;
++	u8 rates_len = 0, ies_len;
+ 	int ret;
+-	const struct cfg80211_bss_ies *ies = rcu_dereference(bss->ies);
+ 
+ 	param = kzalloc(sizeof(*param), GFP_KERNEL);
+ 	if (!param)
+ 		return NULL;
+ 
++	rcu_read_lock();
++	ies = rcu_dereference(bss->ies);
++	ies_data = kmemdup(ies->data, ies->len, GFP_ATOMIC);
++	if (!ies_data) {
++		rcu_read_unlock();
++		kfree(param);
++		return NULL;
++	}
++	ies_len = ies->len;
++	rcu_read_unlock();
++
+ 	param->beacon_period = cpu_to_le16(bss->beacon_interval);
+ 	param->cap_info = cpu_to_le16(bss->capability);
+ 	param->bss_type = WILC_FW_BSS_TYPE_INFRA;
+ 	param->ch = ieee80211_frequency_to_channel(bss->channel->center_freq);
+ 	ether_addr_copy(param->bssid, bss->bssid);
+ 
+-	ssid_elm = cfg80211_find_ie(WLAN_EID_SSID, ies->data, ies->len);
++	ssid_elm = cfg80211_find_ie(WLAN_EID_SSID, ies_data, ies_len);
+ 	if (ssid_elm) {
+ 		if (ssid_elm[1] <= IEEE80211_MAX_SSID_LEN)
+ 			memcpy(param->ssid, ssid_elm + 2, ssid_elm[1]);
+ 	}
+ 
+-	tim_elm = cfg80211_find_ie(WLAN_EID_TIM, ies->data, ies->len);
++	tim_elm = cfg80211_find_ie(WLAN_EID_TIM, ies_data, ies_len);
+ 	if (tim_elm && tim_elm[1] >= 2)
+ 		param->dtim_period = tim_elm[3];
+ 
+ 	memset(param->p_suites, 0xFF, 3);
+ 	memset(param->akm_suites, 0xFF, 3);
+ 
+-	rates_ie = cfg80211_find_ie(WLAN_EID_SUPP_RATES, ies->data, ies->len);
++	rates_ie = cfg80211_find_ie(WLAN_EID_SUPP_RATES, ies_data, ies_len);
+ 	if (rates_ie) {
+ 		rates_len = rates_ie[1];
+ 		if (rates_len > WILC_MAX_RATES_SUPPORTED)
+@@ -401,7 +412,7 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 
+ 	if (rates_len < WILC_MAX_RATES_SUPPORTED) {
+ 		supp_rates_ie = cfg80211_find_ie(WLAN_EID_EXT_SUPP_RATES,
+-						 ies->data, ies->len);
++						 ies_data, ies_len);
+ 		if (supp_rates_ie) {
+ 			u8 ext_rates = supp_rates_ie[1];
+ 
+@@ -416,11 +427,11 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 		}
+ 	}
+ 
+-	ht_ie = cfg80211_find_ie(WLAN_EID_HT_CAPABILITY, ies->data, ies->len);
++	ht_ie = cfg80211_find_ie(WLAN_EID_HT_CAPABILITY, ies_data, ies_len);
+ 	if (ht_ie)
+ 		param->ht_capable = true;
+ 
+-	ret = cfg80211_get_p2p_attr(ies->data, ies->len,
++	ret = cfg80211_get_p2p_attr(ies_data, ies_len,
+ 				    IEEE80211_P2P_ATTR_ABSENCE_NOTICE,
+ 				    (u8 *)&noa_attr, sizeof(noa_attr));
+ 	if (ret > 0) {
+@@ -444,7 +455,7 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 	}
+ 	wmm_ie = cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
+ 					 WLAN_OUI_TYPE_MICROSOFT_WMM,
+-					 ies->data, ies->len);
++					 ies_data, ies_len);
+ 	if (wmm_ie) {
+ 		struct ieee80211_wmm_param_ie *ie;
+ 
+@@ -459,13 +470,13 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 
+ 	wpa_ie = cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
+ 					 WLAN_OUI_TYPE_MICROSOFT_WPA,
+-					 ies->data, ies->len);
++					 ies_data, ies_len);
+ 	if (wpa_ie) {
+ 		param->mode_802_11i = 1;
+ 		param->rsn_found = true;
+ 	}
+ 
+-	rsn_ie = cfg80211_find_ie(WLAN_EID_RSN, ies->data, ies->len);
++	rsn_ie = cfg80211_find_ie(WLAN_EID_RSN, ies_data, ies_len);
+ 	if (rsn_ie) {
+ 		int rsn_ie_len = sizeof(struct element) + rsn_ie[1];
+ 		int offset = 8;
+@@ -499,6 +510,7 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 			param->akm_suites[i] = crypto->akm_suites[i] & 0xFF;
+ 	}
+ 
++	kfree(ies_data);
+ 	return (void *)param;
+ }
+ 
+diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c
+index c508f429984ab..ab84b146aa272 100644
+--- a/drivers/net/wireless/microchip/wilc1000/netdev.c
++++ b/drivers/net/wireless/microchip/wilc1000/netdev.c
+@@ -821,8 +821,7 @@ static const struct net_device_ops wilc_netdev_ops = {
+ 
+ void wilc_netdev_cleanup(struct wilc *wilc)
+ {
+-	struct wilc_vif *vif;
+-	int srcu_idx, ifc_cnt = 0;
++	struct wilc_vif *vif, *vif_tmp;
+ 
+ 	if (!wilc)
+ 		return;
+@@ -832,33 +831,19 @@ void wilc_netdev_cleanup(struct wilc *wilc)
+ 		wilc->firmware = NULL;
+ 	}
+ 
+-	srcu_idx = srcu_read_lock(&wilc->srcu);
+-	list_for_each_entry_rcu(vif, &wilc->vif_list, list) {
++	list_for_each_entry_safe(vif, vif_tmp, &wilc->vif_list, list) {
++		mutex_lock(&wilc->vif_mutex);
++		list_del_rcu(&vif->list);
++		wilc->vif_num--;
++		mutex_unlock(&wilc->vif_mutex);
++		synchronize_srcu(&wilc->srcu);
+ 		if (vif->ndev)
+ 			unregister_netdev(vif->ndev);
+ 	}
+-	srcu_read_unlock(&wilc->srcu, srcu_idx);
+ 
+ 	wilc_wfi_deinit_mon_interface(wilc, false);
+-	flush_workqueue(wilc->hif_workqueue);
+ 	destroy_workqueue(wilc->hif_workqueue);
+ 
+-	while (ifc_cnt < WILC_NUM_CONCURRENT_IFC) {
+-		mutex_lock(&wilc->vif_mutex);
+-		if (wilc->vif_num <= 0) {
+-			mutex_unlock(&wilc->vif_mutex);
+-			break;
+-		}
+-		vif = wilc_get_wl_to_vif(wilc);
+-		if (!IS_ERR(vif))
+-			list_del_rcu(&vif->list);
+-
+-		wilc->vif_num--;
+-		mutex_unlock(&wilc->vif_mutex);
+-		synchronize_srcu(&wilc->srcu);
+-		ifc_cnt++;
+-	}
+-
+ 	wilc_wlan_cfg_deinit(wilc);
+ 	wlan_deinit_locks(wilc);
+ 	kfree(wilc->bus_data);
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/core.c b/drivers/net/wireless/quantenna/qtnfmac/core.c
+index bf6dbeb618423..d39c210da68e2 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/core.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/core.c
+@@ -816,13 +816,11 @@ void qtnf_core_detach(struct qtnf_bus *bus)
+ 	bus->fw_state = QTNF_FW_STATE_DETACHED;
+ 
+ 	if (bus->workqueue) {
+-		flush_workqueue(bus->workqueue);
+ 		destroy_workqueue(bus->workqueue);
+ 		bus->workqueue = NULL;
+ 	}
+ 
+ 	if (bus->hprio_workqueue) {
+-		flush_workqueue(bus->hprio_workqueue);
+ 		destroy_workqueue(bus->hprio_workqueue);
+ 		bus->hprio_workqueue = NULL;
+ 	}
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c b/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
+index 0f328ce47fee3..f65eb6e5b8d59 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
+@@ -387,7 +387,6 @@ static int qtnf_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	return 0;
+ 
+ error:
+-	flush_workqueue(pcie_priv->workqueue);
+ 	destroy_workqueue(pcie_priv->workqueue);
+ 	pci_set_drvdata(pdev, NULL);
+ 	return ret;
+@@ -416,7 +415,6 @@ static void qtnf_pcie_remove(struct pci_dev *dev)
+ 		qtnf_core_detach(bus);
+ 
+ 	netif_napi_del(&bus->mux_napi);
+-	flush_workqueue(priv->workqueue);
+ 	destroy_workqueue(priv->workqueue);
+ 	tasklet_kill(&priv->reclaim_tq);
+ 
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 3051fb358fdd5..9efc15e69ae82 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -6483,6 +6483,7 @@ static void rtl8xxxu_stop(struct ieee80211_hw *hw)
+ 	if (priv->usb_interrupts)
+ 		rtl8xxxu_write32(priv, REG_USB_HIMR, 0);
+ 
++	cancel_work_sync(&priv->c2hcmd_work);
+ 	cancel_delayed_work_sync(&priv->ra_watchdog);
+ 
+ 	rtl8xxxu_free_rx_resources(priv);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index 679ae786cf450..6d9f2a6233a21 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -1704,7 +1704,6 @@ static void rtl_pci_deinit(struct ieee80211_hw *hw)
+ 	tasklet_kill(&rtlpriv->works.irq_tasklet);
+ 	cancel_work_sync(&rtlpriv->works.lps_change_work);
+ 
+-	flush_workqueue(rtlpriv->works.rtl_wq);
+ 	destroy_workqueue(rtlpriv->works.rtl_wq);
+ }
+ 
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.c b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+index f9615f76f1734..d517f92b6180b 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+@@ -579,9 +579,9 @@ static void rtw8821c_false_alarm_statistics(struct rtw_dev *rtwdev)
+ 
+ 	dm_info->cck_fa_cnt = cck_fa_cnt;
+ 	dm_info->ofdm_fa_cnt = ofdm_fa_cnt;
++	dm_info->total_fa_cnt = ofdm_fa_cnt;
+ 	if (cck_enable)
+ 		dm_info->total_fa_cnt += cck_fa_cnt;
+-	dm_info->total_fa_cnt = ofdm_fa_cnt;
+ 
+ 	crc32_cnt = rtw_read32(rtwdev, REG_CRC_CCK);
+ 	dm_info->cck_ok_cnt = FIELD_GET(GENMASK(15, 0), crc32_cnt);
+diff --git a/drivers/net/wireless/rndis_wlan.c b/drivers/net/wireless/rndis_wlan.c
+index dc076d8448680..75c78e1c924b0 100644
+--- a/drivers/net/wireless/rndis_wlan.c
++++ b/drivers/net/wireless/rndis_wlan.c
+@@ -3497,7 +3497,6 @@ static int rndis_wlan_bind(struct usbnet *usbdev, struct usb_interface *intf)
+ 	cancel_delayed_work_sync(&priv->dev_poller_work);
+ 	cancel_delayed_work_sync(&priv->scan_work);
+ 	cancel_work_sync(&priv->work);
+-	flush_workqueue(priv->workqueue);
+ 	destroy_workqueue(priv->workqueue);
+ 
+ 	wiphy_free(wiphy);
+@@ -3514,7 +3513,6 @@ static void rndis_wlan_unbind(struct usbnet *usbdev, struct usb_interface *intf)
+ 	cancel_delayed_work_sync(&priv->dev_poller_work);
+ 	cancel_delayed_work_sync(&priv->scan_work);
+ 	cancel_work_sync(&priv->work);
+-	flush_workqueue(priv->workqueue);
+ 	destroy_workqueue(priv->workqueue);
+ 
+ 	rndis_unbind(usbdev, intf);
+diff --git a/drivers/net/wireless/st/cw1200/bh.c b/drivers/net/wireless/st/cw1200/bh.c
+index 02efe8483cba6..361fef6e1eeaa 100644
+--- a/drivers/net/wireless/st/cw1200/bh.c
++++ b/drivers/net/wireless/st/cw1200/bh.c
+@@ -88,8 +88,6 @@ void cw1200_unregister_bh(struct cw1200_common *priv)
+ 	atomic_add(1, &priv->bh_term);
+ 	wake_up(&priv->bh_wq);
+ 
+-	flush_workqueue(priv->bh_workqueue);
+-
+ 	destroy_workqueue(priv->bh_workqueue);
+ 	priv->bh_workqueue = NULL;
+ 
+diff --git a/drivers/opp/debugfs.c b/drivers/opp/debugfs.c
+index 60f4ff8e044d1..016dea5a412be 100644
+--- a/drivers/opp/debugfs.c
++++ b/drivers/opp/debugfs.c
+@@ -36,10 +36,12 @@ static ssize_t bw_name_read(struct file *fp, char __user *userbuf,
+ 			    size_t count, loff_t *ppos)
+ {
+ 	struct icc_path *path = fp->private_data;
++	const char *name = icc_get_name(path);
+ 	char buf[64];
+-	int i;
++	int i = 0;
+ 
+-	i = scnprintf(buf, sizeof(buf), "%.62s\n", icc_get_name(path));
++	if (name)
++		i = scnprintf(buf, sizeof(buf), "%.62s\n", name);
+ 
+ 	return simple_read_from_buffer(userbuf, count, ppos, buf, i);
+ }
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 72436000ff252..32fa07bfc448e 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -399,11 +399,6 @@ static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused)
+ 	return 0;
+ }
+ 
+-static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
+-{
+-	return dev->error_state == pci_channel_io_perm_failure;
+-}
+-
+ /* pci_dev priv_flags */
+ #define PCI_DEV_ADDED 0
+ #define PCI_DPC_RECOVERED 1
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index f21d64ae4ffcc..cf0d4ba2e157a 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -231,7 +231,7 @@ static void dpc_process_rp_pio_error(struct pci_dev *pdev)
+ 
+ 	for (i = 0; i < pdev->dpc_rp_log_size - 5; i++) {
+ 		pci_read_config_dword(pdev,
+-			cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG, &prefix);
++			cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG + i * 4, &prefix);
+ 		pci_err(pdev, "TLP Prefix Header: dw%d, %#010x\n", i, prefix);
+ 	}
+  clear_status:
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index b67aea8d8f197..646807a443e2d 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5364,6 +5364,7 @@ static void quirk_no_ext_tags(struct pci_dev *pdev)
+ 
+ 	pci_walk_bus(bridge->bus, pci_configure_extended_tags, NULL);
+ }
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_3WARE, 0x1004, quirk_no_ext_tags);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0132, quirk_no_ext_tags);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0140, quirk_no_ext_tags);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0141, quirk_no_ext_tags);
+diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
+index 5cea3ad290c54..9aa230c3208ab 100644
+--- a/drivers/pci/switch/switchtec.c
++++ b/drivers/pci/switch/switchtec.c
+@@ -1615,7 +1615,7 @@ static int switchtec_pci_probe(struct pci_dev *pdev,
+ 	rc = switchtec_init_isr(stdev);
+ 	if (rc) {
+ 		dev_err(&stdev->dev, "failed to init isr.\n");
+-		goto err_put;
++		goto err_exit_pci;
+ 	}
+ 
+ 	iowrite32(SWITCHTEC_EVENT_CLEAR |
+@@ -1636,6 +1636,8 @@ static int switchtec_pci_probe(struct pci_dev *pdev,
+ 
+ err_devadd:
+ 	stdev_kill(stdev);
++err_exit_pci:
++	switchtec_exit_pci(stdev);
+ err_put:
+ 	ida_simple_remove(&switchtec_minor_ida, MINOR(stdev->dev.devt));
+ 	put_device(&stdev->dev);
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mt8192.c b/drivers/pinctrl/mediatek/pinctrl-mt8192.c
+index 0c16b2c756bf3..f3020e3c8533b 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mt8192.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mt8192.c
+@@ -1346,7 +1346,6 @@ static const struct mtk_pin_reg_calc mt8192_reg_cals[PINCTRL_PIN_REG_MAX] = {
+ 	[PINCTRL_PIN_REG_DIR] = MTK_RANGE(mt8192_pin_dir_range),
+ 	[PINCTRL_PIN_REG_DI] = MTK_RANGE(mt8192_pin_di_range),
+ 	[PINCTRL_PIN_REG_DO] = MTK_RANGE(mt8192_pin_do_range),
+-	[PINCTRL_PIN_REG_SR] = MTK_RANGE(mt8192_pin_dir_range),
+ 	[PINCTRL_PIN_REG_SMT] = MTK_RANGE(mt8192_pin_smt_range),
+ 	[PINCTRL_PIN_REG_IES] = MTK_RANGE(mt8192_pin_ies_range),
+ 	[PINCTRL_PIN_REG_PU] = MTK_RANGE(mt8192_pin_pu_range),
+diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
+index d99548fb5ddef..6dbf6ed10ea49 100644
+--- a/drivers/remoteproc/Kconfig
++++ b/drivers/remoteproc/Kconfig
+@@ -249,7 +249,7 @@ config ST_SLIM_REMOTEPROC
+ 
+ config STM32_RPROC
+ 	tristate "STM32 remoteproc support"
+-	depends on ARCH_STM32
++	depends on ARCH_STM32 || COMPILE_TEST
+ 	depends on REMOTEPROC
+ 	select MAILBOX
+ 	help
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index cc55ff0128cf2..a933e345683c4 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -1539,6 +1539,32 @@ static int rproc_fw_boot(struct rproc *rproc, const struct firmware *fw)
+ 	return ret;
+ }
+ 
++static int rproc_set_rsc_table(struct rproc *rproc)
++{
++	struct resource_table *table_ptr;
++	struct device *dev = &rproc->dev;
++	size_t table_sz;
++	int ret;
++
++	table_ptr = rproc_get_loaded_rsc_table(rproc, &table_sz);
++	if (!table_ptr) {
++		/* Not having a resource table is acceptable */
++		return 0;
++	}
++
++	if (IS_ERR(table_ptr)) {
++		ret = PTR_ERR(table_ptr);
++		dev_err(dev, "can't load resource table: %d\n", ret);
++		return ret;
++	}
++
++	rproc->cached_table = NULL;
++	rproc->table_ptr = table_ptr;
++	rproc->table_sz = table_sz;
++
++	return 0;
++}
++
+ /*
+  * Attach to remote processor - similar to rproc_fw_boot() but without
+  * the steps that deal with the firmware image.
+@@ -1558,6 +1584,12 @@ static int rproc_actuate(struct rproc *rproc)
+ 		return ret;
+ 	}
+ 
++	ret = rproc_set_rsc_table(rproc);
++	if (ret) {
++		dev_err(dev, "can't load resource table: %d\n", ret);
++		goto disable_iommu;
++	}
++
+ 	/* reset max_notifyid */
+ 	rproc->max_notifyid = -1;
+ 
+diff --git a/drivers/remoteproc/remoteproc_internal.h b/drivers/remoteproc/remoteproc_internal.h
+index c34002888d2c3..4f73aac7e60d1 100644
+--- a/drivers/remoteproc/remoteproc_internal.h
++++ b/drivers/remoteproc/remoteproc_internal.h
+@@ -177,6 +177,16 @@ struct resource_table *rproc_find_loaded_rsc_table(struct rproc *rproc,
+ 	return NULL;
+ }
+ 
++static inline
++struct resource_table *rproc_get_loaded_rsc_table(struct rproc *rproc,
++						  size_t *size)
++{
++	if (rproc->ops->get_loaded_rsc_table)
++		return rproc->ops->get_loaded_rsc_table(rproc, size);
++
++	return NULL;
++}
++
+ static inline
+ bool rproc_u64_fit_in_size_t(u64 val)
+ {
+diff --git a/drivers/remoteproc/stm32_rproc.c b/drivers/remoteproc/stm32_rproc.c
+index df784fec124f6..3d411e18a8593 100644
+--- a/drivers/remoteproc/stm32_rproc.c
++++ b/drivers/remoteproc/stm32_rproc.c
+@@ -117,10 +117,10 @@ static int stm32_rproc_mem_alloc(struct rproc *rproc,
+ 	struct device *dev = rproc->dev.parent;
+ 	void *va;
+ 
+-	dev_dbg(dev, "map memory: %pa+%x\n", &mem->dma, mem->len);
+-	va = ioremap_wc(mem->dma, mem->len);
++	dev_dbg(dev, "map memory: %pad+%zx\n", &mem->dma, mem->len);
++	va = (__force void *)ioremap_wc(mem->dma, mem->len);
+ 	if (IS_ERR_OR_NULL(va)) {
+-		dev_err(dev, "Unable to map memory region: %pa+%x\n",
++		dev_err(dev, "Unable to map memory region: %pad+0x%zx\n",
+ 			&mem->dma, mem->len);
+ 		return -ENOMEM;
+ 	}
+@@ -135,7 +135,7 @@ static int stm32_rproc_mem_release(struct rproc *rproc,
+ 				   struct rproc_mem_entry *mem)
+ {
+ 	dev_dbg(rproc->dev.parent, "unmap memory: %pa\n", &mem->dma);
+-	iounmap(mem->va);
++	iounmap((__force __iomem void *)mem->va);
+ 
+ 	return 0;
+ }
+@@ -553,7 +553,74 @@ static void stm32_rproc_kick(struct rproc *rproc, int vqid)
+ 	}
+ }
+ 
+-static struct rproc_ops st_rproc_ops = {
++static int stm32_rproc_da_to_pa(struct rproc *rproc,
++				u64 da, phys_addr_t *pa)
++{
++	struct stm32_rproc *ddata = rproc->priv;
++	struct device *dev = rproc->dev.parent;
++	struct stm32_rproc_mem *p_mem;
++	unsigned int i;
++
++	for (i = 0; i < ddata->nb_rmems; i++) {
++		p_mem = &ddata->rmems[i];
++
++		if (da < p_mem->dev_addr ||
++		    da >= p_mem->dev_addr + p_mem->size)
++			continue;
++
++		*pa = da - p_mem->dev_addr + p_mem->bus_addr;
++		dev_dbg(dev, "da %llx to pa %pap\n", da, pa);
++
++		return 0;
++	}
++
++	dev_err(dev, "can't translate da %llx\n", da);
++
++	return -EINVAL;
++}
++
++static struct resource_table *
++stm32_rproc_get_loaded_rsc_table(struct rproc *rproc, size_t *table_sz)
++{
++	struct stm32_rproc *ddata = rproc->priv;
++	struct device *dev = rproc->dev.parent;
++	phys_addr_t rsc_pa;
++	u32 rsc_da;
++	int err;
++
++	/* The resource table has already been mapped, nothing to do */
++	if (ddata->rsc_va)
++		goto done;
++
++	err = regmap_read(ddata->rsctbl.map, ddata->rsctbl.reg, &rsc_da);
++	if (err) {
++		dev_err(dev, "failed to read rsc tbl addr\n");
++		return ERR_PTR(-EINVAL);
++	}
++
++	if (!rsc_da)
++		/* no rsc table */
++		return ERR_PTR(-ENOENT);
++
++	err = stm32_rproc_da_to_pa(rproc, rsc_da, &rsc_pa);
++	if (err)
++		return ERR_PTR(err);
++
++	ddata->rsc_va = devm_ioremap_wc(dev, rsc_pa, RSC_TBL_SIZE);
++	if (IS_ERR_OR_NULL(ddata->rsc_va)) {
++		dev_err(dev, "Unable to map memory region: %pa+%x\n",
++			&rsc_pa, RSC_TBL_SIZE);
++		ddata->rsc_va = NULL;
++		return ERR_PTR(-ENOMEM);
++	}
++
++done:
++	/* Assuming the resource table fits in 1kB is fair */
++	*table_sz = RSC_TBL_SIZE;
++	return (__force struct resource_table *)ddata->rsc_va;
++}
++
++static const struct rproc_ops st_rproc_ops = {
+ 	.start		= stm32_rproc_start,
+ 	.stop		= stm32_rproc_stop,
+ 	.attach		= stm32_rproc_attach,
+@@ -561,6 +628,7 @@ static struct rproc_ops st_rproc_ops = {
+ 	.load		= rproc_elf_load_segments,
+ 	.parse_fw	= stm32_rproc_parse_fw,
+ 	.find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table,
++	.get_loaded_rsc_table = stm32_rproc_get_loaded_rsc_table,
+ 	.sanity_check	= rproc_elf_sanity_check,
+ 	.get_boot_addr	= rproc_elf_get_boot_addr,
+ };
+@@ -704,75 +772,6 @@ static int stm32_rproc_get_m4_status(struct stm32_rproc *ddata,
+ 	return regmap_read(ddata->m4_state.map, ddata->m4_state.reg, state);
+ }
+ 
+-static int stm32_rproc_da_to_pa(struct platform_device *pdev,
+-				struct stm32_rproc *ddata,
+-				u64 da, phys_addr_t *pa)
+-{
+-	struct device *dev = &pdev->dev;
+-	struct stm32_rproc_mem *p_mem;
+-	unsigned int i;
+-
+-	for (i = 0; i < ddata->nb_rmems; i++) {
+-		p_mem = &ddata->rmems[i];
+-
+-		if (da < p_mem->dev_addr ||
+-		    da >= p_mem->dev_addr + p_mem->size)
+-			continue;
+-
+-		*pa = da - p_mem->dev_addr + p_mem->bus_addr;
+-		dev_dbg(dev, "da %llx to pa %#x\n", da, *pa);
+-
+-		return 0;
+-	}
+-
+-	dev_err(dev, "can't translate da %llx\n", da);
+-
+-	return -EINVAL;
+-}
+-
+-static int stm32_rproc_get_loaded_rsc_table(struct platform_device *pdev,
+-					    struct rproc *rproc,
+-					    struct stm32_rproc *ddata)
+-{
+-	struct device *dev = &pdev->dev;
+-	phys_addr_t rsc_pa;
+-	u32 rsc_da;
+-	int err;
+-
+-	err = regmap_read(ddata->rsctbl.map, ddata->rsctbl.reg, &rsc_da);
+-	if (err) {
+-		dev_err(dev, "failed to read rsc tbl addr\n");
+-		return err;
+-	}
+-
+-	if (!rsc_da)
+-		/* no rsc table */
+-		return 0;
+-
+-	err = stm32_rproc_da_to_pa(pdev, ddata, rsc_da, &rsc_pa);
+-	if (err)
+-		return err;
+-
+-	ddata->rsc_va = devm_ioremap_wc(dev, rsc_pa, RSC_TBL_SIZE);
+-	if (IS_ERR_OR_NULL(ddata->rsc_va)) {
+-		dev_err(dev, "Unable to map memory region: %pa+%zx\n",
+-			&rsc_pa, RSC_TBL_SIZE);
+-		ddata->rsc_va = NULL;
+-		return -ENOMEM;
+-	}
+-
+-	/*
+-	 * The resource table is already loaded in device memory, no need
+-	 * to work with a cached table.
+-	 */
+-	rproc->cached_table = NULL;
+-	/* Assuming the resource table fits in 1kB is fair */
+-	rproc->table_sz = RSC_TBL_SIZE;
+-	rproc->table_ptr = (struct resource_table *)ddata->rsc_va;
+-
+-	return 0;
+-}
+-
+ static int stm32_rproc_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -812,10 +811,6 @@ static int stm32_rproc_probe(struct platform_device *pdev)
+ 		ret = stm32_rproc_parse_memory_regions(rproc);
+ 		if (ret)
+ 			goto free_resources;
+-
+-		ret = stm32_rproc_get_loaded_rsc_table(pdev, rproc, ddata);
+-		if (ret)
+-			goto free_resources;
+ 	}
+ 
+ 	rproc->has_iommu = false;
+diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
+index 54cf5ec8f4019..8ddd334e049e1 100644
+--- a/drivers/rtc/Kconfig
++++ b/drivers/rtc/Kconfig
+@@ -1833,7 +1833,8 @@ config RTC_DRV_MT2712
+ 
+ config RTC_DRV_MT6397
+ 	tristate "MediaTek PMIC based RTC"
+-	depends on MFD_MT6397 || (COMPILE_TEST && IRQ_DOMAIN)
++	depends on MFD_MT6397 || COMPILE_TEST
++	select IRQ_DOMAIN
+ 	help
+ 	  This selects the MediaTek(R) RTC driver. RTC is part of MediaTek
+ 	  MT6397 PMIC. You should enable MT6397 PMIC MFD before select
+diff --git a/drivers/scsi/bfa/bfa.h b/drivers/scsi/bfa/bfa.h
+index 7bd2ba1ad4d11..f30fe324e6ecc 100644
+--- a/drivers/scsi/bfa/bfa.h
++++ b/drivers/scsi/bfa/bfa.h
+@@ -20,7 +20,6 @@
+ struct bfa_s;
+ 
+ typedef void (*bfa_isr_func_t) (struct bfa_s *bfa, struct bfi_msg_s *m);
+-typedef void (*bfa_cb_cbfn_status_t) (void *cbarg, bfa_status_t status);
+ 
+ /*
+  * Interrupt message handlers
+@@ -437,4 +436,12 @@ struct bfa_cb_pending_q_s {
+ 	(__qe)->data = (__data);				\
+ } while (0)
+ 
++#define bfa_pending_q_init_status(__qe, __cbfn, __cbarg, __data) do {	\
++	bfa_q_qe_init(&((__qe)->hcb_qe.qe));			\
++	(__qe)->hcb_qe.cbfn_status = (__cbfn);			\
++	(__qe)->hcb_qe.cbarg = (__cbarg);			\
++	(__qe)->hcb_qe.pre_rmv = BFA_TRUE;			\
++	(__qe)->data = (__data);				\
++} while (0)
++
+ #endif /* __BFA_H__ */
+diff --git a/drivers/scsi/bfa/bfa_core.c b/drivers/scsi/bfa/bfa_core.c
+index 6846ca8f7313c..3438d0b8ba062 100644
+--- a/drivers/scsi/bfa/bfa_core.c
++++ b/drivers/scsi/bfa/bfa_core.c
+@@ -1907,15 +1907,13 @@ bfa_comp_process(struct bfa_s *bfa, struct list_head *comp_q)
+ 	struct list_head		*qe;
+ 	struct list_head		*qen;
+ 	struct bfa_cb_qe_s	*hcb_qe;
+-	bfa_cb_cbfn_status_t	cbfn;
+ 
+ 	list_for_each_safe(qe, qen, comp_q) {
+ 		hcb_qe = (struct bfa_cb_qe_s *) qe;
+ 		if (hcb_qe->pre_rmv) {
+ 			/* qe is invalid after return, dequeue before cbfn() */
+ 			list_del(qe);
+-			cbfn = (bfa_cb_cbfn_status_t)(hcb_qe->cbfn);
+-			cbfn(hcb_qe->cbarg, hcb_qe->fw_status);
++			hcb_qe->cbfn_status(hcb_qe->cbarg, hcb_qe->fw_status);
+ 		} else
+ 			hcb_qe->cbfn(hcb_qe->cbarg, BFA_TRUE);
+ 	}
+diff --git a/drivers/scsi/bfa/bfa_ioc.h b/drivers/scsi/bfa/bfa_ioc.h
+index 933a1c3890ff5..5e568d6d7b261 100644
+--- a/drivers/scsi/bfa/bfa_ioc.h
++++ b/drivers/scsi/bfa/bfa_ioc.h
+@@ -361,14 +361,18 @@ struct bfa_reqq_wait_s {
+ 	void	*cbarg;
+ };
+ 
+-typedef void	(*bfa_cb_cbfn_t) (void *cbarg, bfa_boolean_t complete);
++typedef void (*bfa_cb_cbfn_t) (void *cbarg, bfa_boolean_t complete);
++typedef void (*bfa_cb_cbfn_status_t) (void *cbarg, bfa_status_t status);
+ 
+ /*
+  * Generic BFA callback element.
+  */
+ struct bfa_cb_qe_s {
+ 	struct list_head	qe;
+-	bfa_cb_cbfn_t	cbfn;
++	union {
++		bfa_cb_cbfn_status_t	cbfn_status;
++		bfa_cb_cbfn_t		cbfn;
++	};
+ 	bfa_boolean_t	once;
+ 	bfa_boolean_t	pre_rmv;	/* set for stack based qe(s) */
+ 	bfa_status_t	fw_status;	/* to access fw status in comp proc */
+diff --git a/drivers/scsi/bfa/bfad_bsg.c b/drivers/scsi/bfa/bfad_bsg.c
+index fc515424ca88d..eb589f9e8cfb5 100644
+--- a/drivers/scsi/bfa/bfad_bsg.c
++++ b/drivers/scsi/bfa/bfad_bsg.c
+@@ -2135,8 +2135,7 @@ bfad_iocmd_fcport_get_stats(struct bfad_s *bfad, void *cmd)
+ 	struct bfa_cb_pending_q_s cb_qe;
+ 
+ 	init_completion(&fcomp.comp);
+-	bfa_pending_q_init(&cb_qe, (bfa_cb_cbfn_t)bfad_hcb_comp,
+-			   &fcomp, &iocmd->stats);
++	bfa_pending_q_init_status(&cb_qe, bfad_hcb_comp, &fcomp, &iocmd->stats);
+ 	spin_lock_irqsave(&bfad->bfad_lock, flags);
+ 	iocmd->status = bfa_fcport_get_stats(&bfad->bfa, &cb_qe);
+ 	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+@@ -2159,7 +2158,7 @@ bfad_iocmd_fcport_reset_stats(struct bfad_s *bfad, void *cmd)
+ 	struct bfa_cb_pending_q_s cb_qe;
+ 
+ 	init_completion(&fcomp.comp);
+-	bfa_pending_q_init(&cb_qe, (bfa_cb_cbfn_t)bfad_hcb_comp, &fcomp, NULL);
++	bfa_pending_q_init_status(&cb_qe, bfad_hcb_comp, &fcomp, NULL);
+ 
+ 	spin_lock_irqsave(&bfad->bfad_lock, flags);
+ 	iocmd->status = bfa_fcport_clear_stats(&bfad->bfa, &cb_qe);
+@@ -2443,8 +2442,7 @@ bfad_iocmd_qos_get_stats(struct bfad_s *bfad, void *cmd)
+ 	struct bfa_fcport_s *fcport = BFA_FCPORT_MOD(&bfad->bfa);
+ 
+ 	init_completion(&fcomp.comp);
+-	bfa_pending_q_init(&cb_qe, (bfa_cb_cbfn_t)bfad_hcb_comp,
+-			   &fcomp, &iocmd->stats);
++	bfa_pending_q_init_status(&cb_qe, bfad_hcb_comp, &fcomp, &iocmd->stats);
+ 
+ 	spin_lock_irqsave(&bfad->bfad_lock, flags);
+ 	WARN_ON(!bfa_ioc_get_fcmode(&bfad->bfa.ioc));
+@@ -2474,8 +2472,7 @@ bfad_iocmd_qos_reset_stats(struct bfad_s *bfad, void *cmd)
+ 	struct bfa_fcport_s *fcport = BFA_FCPORT_MOD(&bfad->bfa);
+ 
+ 	init_completion(&fcomp.comp);
+-	bfa_pending_q_init(&cb_qe, (bfa_cb_cbfn_t)bfad_hcb_comp,
+-			   &fcomp, NULL);
++	bfa_pending_q_init_status(&cb_qe, bfad_hcb_comp, &fcomp, NULL);
+ 
+ 	spin_lock_irqsave(&bfad->bfad_lock, flags);
+ 	WARN_ON(!bfa_ioc_get_fcmode(&bfad->bfa.ioc));
+diff --git a/drivers/scsi/csiostor/csio_defs.h b/drivers/scsi/csiostor/csio_defs.h
+index c38017b4af982..e50e93e7fe5a1 100644
+--- a/drivers/scsi/csiostor/csio_defs.h
++++ b/drivers/scsi/csiostor/csio_defs.h
+@@ -73,7 +73,21 @@ csio_list_deleted(struct list_head *list)
+ #define csio_list_prev(elem)	(((struct list_head *)(elem))->prev)
+ 
+ /* State machine */
+-typedef void (*csio_sm_state_t)(void *, uint32_t);
++struct csio_lnode;
++
++/* State machine evets */
++enum csio_ln_ev {
++	CSIO_LNE_NONE = (uint32_t)0,
++	CSIO_LNE_LINKUP,
++	CSIO_LNE_FAB_INIT_DONE,
++	CSIO_LNE_LINK_DOWN,
++	CSIO_LNE_DOWN_LINK,
++	CSIO_LNE_LOGO,
++	CSIO_LNE_CLOSE,
++	CSIO_LNE_MAX_EVENT,
++};
++
++typedef void (*csio_sm_state_t)(struct csio_lnode *ln, enum csio_ln_ev evt);
+ 
+ struct csio_sm {
+ 	struct list_head	sm_list;
+@@ -83,7 +97,7 @@ struct csio_sm {
+ static inline void
+ csio_set_state(void *smp, void *state)
+ {
+-	((struct csio_sm *)smp)->sm_state = (csio_sm_state_t)state;
++	((struct csio_sm *)smp)->sm_state = state;
+ }
+ 
+ static inline void
+diff --git a/drivers/scsi/csiostor/csio_lnode.c b/drivers/scsi/csiostor/csio_lnode.c
+index d5ac938970232..5b3ffefae476d 100644
+--- a/drivers/scsi/csiostor/csio_lnode.c
++++ b/drivers/scsi/csiostor/csio_lnode.c
+@@ -1095,7 +1095,7 @@ csio_handle_link_down(struct csio_hw *hw, uint8_t portid, uint32_t fcfi,
+ int
+ csio_is_lnode_ready(struct csio_lnode *ln)
+ {
+-	return (csio_get_state(ln) == ((csio_sm_state_t)csio_lns_ready));
++	return (csio_get_state(ln) == csio_lns_ready);
+ }
+ 
+ /*****************************************************************************/
+@@ -1366,15 +1366,15 @@ csio_free_fcfinfo(struct kref *kref)
+ void
+ csio_lnode_state_to_str(struct csio_lnode *ln, int8_t *str)
+ {
+-	if (csio_get_state(ln) == ((csio_sm_state_t)csio_lns_uninit)) {
++	if (csio_get_state(ln) == csio_lns_uninit) {
+ 		strcpy(str, "UNINIT");
+ 		return;
+ 	}
+-	if (csio_get_state(ln) == ((csio_sm_state_t)csio_lns_ready)) {
++	if (csio_get_state(ln) == csio_lns_ready) {
+ 		strcpy(str, "READY");
+ 		return;
+ 	}
+-	if (csio_get_state(ln) == ((csio_sm_state_t)csio_lns_offline)) {
++	if (csio_get_state(ln) == csio_lns_offline) {
+ 		strcpy(str, "OFFLINE");
+ 		return;
+ 	}
+diff --git a/drivers/scsi/csiostor/csio_lnode.h b/drivers/scsi/csiostor/csio_lnode.h
+index 372a67d122d38..607698a0f0631 100644
+--- a/drivers/scsi/csiostor/csio_lnode.h
++++ b/drivers/scsi/csiostor/csio_lnode.h
+@@ -53,19 +53,6 @@
+ extern int csio_fcoe_rnodes;
+ extern int csio_fdmi_enable;
+ 
+-/* State machine evets */
+-enum csio_ln_ev {
+-	CSIO_LNE_NONE = (uint32_t)0,
+-	CSIO_LNE_LINKUP,
+-	CSIO_LNE_FAB_INIT_DONE,
+-	CSIO_LNE_LINK_DOWN,
+-	CSIO_LNE_DOWN_LINK,
+-	CSIO_LNE_LOGO,
+-	CSIO_LNE_CLOSE,
+-	CSIO_LNE_MAX_EVENT,
+-};
+-
+-
+ struct csio_fcf_info {
+ 	struct list_head	list;
+ 	uint8_t			priority;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 814ac25238058..105d781d0cacf 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -6357,7 +6357,9 @@ _base_wait_for_iocstate(struct MPT3SAS_ADAPTER *ioc, int timeout)
+ 		return -EFAULT;
+ 	}
+ 
+- issue_diag_reset:
++	return 0;
++
++issue_diag_reset:
+ 	rc = _base_diag_reset(ioc);
+ 	return rc;
+ }
+diff --git a/drivers/soc/fsl/dpio/dpio-service.c b/drivers/soc/fsl/dpio/dpio-service.c
+index 779c319a4b820..6cdd2c517ba68 100644
+--- a/drivers/soc/fsl/dpio/dpio-service.c
++++ b/drivers/soc/fsl/dpio/dpio-service.c
+@@ -485,7 +485,7 @@ int dpaa2_io_service_enqueue_multiple_desc_fq(struct dpaa2_io *d,
+ 	struct qbman_eq_desc *ed;
+ 	int i, ret;
+ 
+-	ed = kcalloc(sizeof(struct qbman_eq_desc), 32, GFP_KERNEL);
++	ed = kcalloc(32, sizeof(struct qbman_eq_desc), GFP_KERNEL);
+ 	if (!ed)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 92a09dfb99a8e..0bcf4a28132ad 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -566,17 +566,19 @@ static irqreturn_t mtk_spi_interrupt(int irq, void *dev_id)
+ 		mdata->xfer_len = min(MTK_SPI_MAX_FIFO_SIZE, len);
+ 		mtk_spi_setup_packet(master);
+ 
+-		cnt = mdata->xfer_len / 4;
+-		iowrite32_rep(mdata->base + SPI_TX_DATA_REG,
+-				trans->tx_buf + mdata->num_xfered, cnt);
++		if (trans->tx_buf) {
++			cnt = mdata->xfer_len / 4;
++			iowrite32_rep(mdata->base + SPI_TX_DATA_REG,
++					trans->tx_buf + mdata->num_xfered, cnt);
+ 
+-		remainder = mdata->xfer_len % 4;
+-		if (remainder > 0) {
+-			reg_val = 0;
+-			memcpy(&reg_val,
+-				trans->tx_buf + (cnt * 4) + mdata->num_xfered,
+-				remainder);
+-			writel(reg_val, mdata->base + SPI_TX_DATA_REG);
++			remainder = mdata->xfer_len % 4;
++			if (remainder > 0) {
++				reg_val = 0;
++				memcpy(&reg_val,
++					trans->tx_buf + (cnt * 4) + mdata->num_xfered,
++					remainder);
++				writel(reg_val, mdata->base + SPI_TX_DATA_REG);
++			}
+ 		}
+ 
+ 		mtk_spi_enable_transfer(master);
+diff --git a/drivers/staging/greybus/light.c b/drivers/staging/greybus/light.c
+index d2672b65c3f49..e59bb27236b9f 100644
+--- a/drivers/staging/greybus/light.c
++++ b/drivers/staging/greybus/light.c
+@@ -100,15 +100,15 @@ static struct led_classdev *get_channel_cdev(struct gb_channel *channel)
+ static struct gb_channel *get_channel_from_mode(struct gb_light *light,
+ 						u32 mode)
+ {
+-	struct gb_channel *channel = NULL;
++	struct gb_channel *channel;
+ 	int i;
+ 
+ 	for (i = 0; i < light->channels_count; i++) {
+ 		channel = &light->channels[i];
+-		if (channel && channel->mode == mode)
+-			break;
++		if (channel->mode == mode)
++			return channel;
+ 	}
+-	return channel;
++	return NULL;
+ }
+ 
+ static int __gb_lights_flash_intensity_set(struct gb_channel *channel,
+diff --git a/drivers/staging/media/imx/imx-media-csc-scaler.c b/drivers/staging/media/imx/imx-media-csc-scaler.c
+index 63a0204502a8b..939843b895440 100644
+--- a/drivers/staging/media/imx/imx-media-csc-scaler.c
++++ b/drivers/staging/media/imx/imx-media-csc-scaler.c
+@@ -803,6 +803,7 @@ static int ipu_csc_scaler_release(struct file *file)
+ 
+ 	dev_dbg(priv->dev, "Releasing instance %p\n", ctx);
+ 
++	v4l2_ctrl_handler_free(&ctx->ctrl_hdlr);
+ 	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
+ 	v4l2_fh_del(&ctx->fh);
+ 	v4l2_fh_exit(&ctx->fh);
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 6e33c74e569f0..7c28d2752a4cd 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -688,6 +688,7 @@ static void exar_pci_remove(struct pci_dev *pcidev)
+ 	for (i = 0; i < priv->nr; i++)
+ 		serial8250_unregister_port(priv->line[i]);
+ 
++	/* Ensure that every init quirk is properly torn down */
+ 	if (priv->board->exit)
+ 		priv->board->exit(pcidev);
+ }
+@@ -702,10 +703,6 @@ static int __maybe_unused exar_suspend(struct device *dev)
+ 		if (priv->line[i] >= 0)
+ 			serial8250_suspend_port(priv->line[i]);
+ 
+-	/* Ensure that every init quirk is properly torn down */
+-	if (priv->board->exit)
+-		priv->board->exit(pcidev);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 2f88eae8a55a1..5570fd3b84e15 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -1460,7 +1460,7 @@ static int max310x_probe(struct device *dev, const struct max310x_devtype *devty
+ 	if (!ret)
+ 		return 0;
+ 
+-	dev_err(dev, "Unable to reguest IRQ %i\n", irq);
++	dev_err(dev, "Unable to request IRQ %i\n", irq);
+ 
+ out_uart:
+ 	for (i = 0; i < devtype->nr; i++) {
+diff --git a/drivers/tty/serial/samsung_tty.c b/drivers/tty/serial/samsung_tty.c
+index fa5b1321d9b15..5388eb7fa0f47 100644
+--- a/drivers/tty/serial/samsung_tty.c
++++ b/drivers/tty/serial/samsung_tty.c
+@@ -922,11 +922,10 @@ static unsigned int s3c24xx_serial_tx_empty(struct uart_port *port)
+ 		if ((ufstat & info->tx_fifomask) != 0 ||
+ 		    (ufstat & info->tx_fifofull))
+ 			return 0;
+-
+-		return 1;
++		return TIOCSER_TEMT;
+ 	}
+ 
+-	return s3c24xx_serial_txempty_nofifo(port);
++	return s3c24xx_serial_txempty_nofifo(port) ? TIOCSER_TEMT : 0;
+ }
+ 
+ /* no modem control lines */
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 0252c0562dbc8..df645d127e401 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -2516,7 +2516,7 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ 		}
+ 		return;
+ 	case EScsiignore:
+-		if (c >= 20 && c <= 0x3f)
++		if (c >= 0x20 && c <= 0x3f)
+ 			return;
+ 		vc->vc_state = ESnormal;
+ 		return;
+diff --git a/drivers/usb/gadget/udc/net2272.c b/drivers/usb/gadget/udc/net2272.c
+index 23a735641c3df..8c56efe6abc49 100644
+--- a/drivers/usb/gadget/udc/net2272.c
++++ b/drivers/usb/gadget/udc/net2272.c
+@@ -2636,7 +2636,7 @@ net2272_plat_probe(struct platform_device *pdev)
+ 		goto err_req;
+ 	}
+ 
+-	ret = net2272_probe_fin(dev, IRQF_TRIGGER_LOW);
++	ret = net2272_probe_fin(dev, irqflags);
+ 	if (ret)
+ 		goto err_io;
+ 
+diff --git a/drivers/video/backlight/da9052_bl.c b/drivers/video/backlight/da9052_bl.c
+index 882359dd288c0..aa00379392a0f 100644
+--- a/drivers/video/backlight/da9052_bl.c
++++ b/drivers/video/backlight/da9052_bl.c
+@@ -117,6 +117,7 @@ static int da9052_backlight_probe(struct platform_device *pdev)
+ 	wleds->led_reg = platform_get_device_id(pdev)->driver_data;
+ 	wleds->state = DA9052_WLEDS_OFF;
+ 
++	memset(&props, 0, sizeof(struct backlight_properties));
+ 	props.type = BACKLIGHT_RAW;
+ 	props.max_brightness = DA9052_MAX_BRIGHTNESS;
+ 
+diff --git a/drivers/video/backlight/lm3630a_bl.c b/drivers/video/backlight/lm3630a_bl.c
+index 419b0334cf087..2134342c2c97d 100644
+--- a/drivers/video/backlight/lm3630a_bl.c
++++ b/drivers/video/backlight/lm3630a_bl.c
+@@ -229,7 +229,7 @@ static int lm3630a_bank_a_get_brightness(struct backlight_device *bl)
+ 		if (rval < 0)
+ 			goto out_i2c_err;
+ 		brightness |= rval;
+-		goto out;
++		return brightness;
+ 	}
+ 
+ 	/* disable sleep */
+@@ -240,11 +240,8 @@ static int lm3630a_bank_a_get_brightness(struct backlight_device *bl)
+ 	rval = lm3630a_read(pchip, REG_BRT_A);
+ 	if (rval < 0)
+ 		goto out_i2c_err;
+-	brightness = rval;
++	return rval;
+ 
+-out:
+-	bl->props.brightness = brightness;
+-	return bl->props.brightness;
+ out_i2c_err:
+ 	dev_err(pchip->dev, "i2c failed to access register\n");
+ 	return 0;
+@@ -306,7 +303,7 @@ static int lm3630a_bank_b_get_brightness(struct backlight_device *bl)
+ 		if (rval < 0)
+ 			goto out_i2c_err;
+ 		brightness |= rval;
+-		goto out;
++		return brightness;
+ 	}
+ 
+ 	/* disable sleep */
+@@ -317,11 +314,8 @@ static int lm3630a_bank_b_get_brightness(struct backlight_device *bl)
+ 	rval = lm3630a_read(pchip, REG_BRT_B);
+ 	if (rval < 0)
+ 		goto out_i2c_err;
+-	brightness = rval;
++	return rval;
+ 
+-out:
+-	bl->props.brightness = brightness;
+-	return bl->props.brightness;
+ out_i2c_err:
+ 	dev_err(pchip->dev, "i2c failed to access register\n");
+ 	return 0;
+@@ -339,6 +333,7 @@ static int lm3630a_backlight_register(struct lm3630a_chip *pchip)
+ 	struct backlight_properties props;
+ 	const char *label;
+ 
++	memset(&props, 0, sizeof(struct backlight_properties));
+ 	props.type = BACKLIGHT_RAW;
+ 	if (pdata->leda_ctrl != LM3630A_LEDA_DISABLE) {
+ 		props.brightness = pdata->leda_init_brt;
+diff --git a/drivers/video/backlight/lm3639_bl.c b/drivers/video/backlight/lm3639_bl.c
+index 48c04155a5f9d..bb617f4673e94 100644
+--- a/drivers/video/backlight/lm3639_bl.c
++++ b/drivers/video/backlight/lm3639_bl.c
+@@ -339,6 +339,7 @@ static int lm3639_probe(struct i2c_client *client,
+ 	}
+ 
+ 	/* backlight */
++	memset(&props, 0, sizeof(struct backlight_properties));
+ 	props.type = BACKLIGHT_RAW;
+ 	props.brightness = pdata->init_brt_led;
+ 	props.max_brightness = pdata->max_brt_led;
+diff --git a/drivers/video/backlight/lp8788_bl.c b/drivers/video/backlight/lp8788_bl.c
+index ba42f3fe0c739..d9b95dbd40d30 100644
+--- a/drivers/video/backlight/lp8788_bl.c
++++ b/drivers/video/backlight/lp8788_bl.c
+@@ -191,6 +191,7 @@ static int lp8788_backlight_register(struct lp8788_bl *bl)
+ 	int init_brt;
+ 	char *name;
+ 
++	memset(&props, 0, sizeof(struct backlight_properties));
+ 	props.type = BACKLIGHT_PLATFORM;
+ 	props.max_brightness = MAX_BRIGHTNESS;
+ 
+diff --git a/drivers/watchdog/stm32_iwdg.c b/drivers/watchdog/stm32_iwdg.c
+index 25188d6bbe152..16dd1aab7c676 100644
+--- a/drivers/watchdog/stm32_iwdg.c
++++ b/drivers/watchdog/stm32_iwdg.c
+@@ -21,6 +21,8 @@
+ #include <linux/platform_device.h>
+ #include <linux/watchdog.h>
+ 
++#define DEFAULT_TIMEOUT 10
++
+ /* IWDG registers */
+ #define IWDG_KR		0x00 /* Key register */
+ #define IWDG_PR		0x04 /* Prescaler Register */
+@@ -254,6 +256,7 @@ static int stm32_iwdg_probe(struct platform_device *pdev)
+ 	wdd->parent = dev;
+ 	wdd->info = &stm32_iwdg_info;
+ 	wdd->ops = &stm32_iwdg_ops;
++	wdd->timeout = DEFAULT_TIMEOUT;
+ 	wdd->min_timeout = DIV_ROUND_UP((RLR_MIN + 1) * PR_MIN, wdt->rate);
+ 	wdd->max_hw_heartbeat_ms = ((RLR_MAX + 1) * wdt->data->max_prescaler *
+ 				    1000) / wdt->rate;
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 88f0e719c6ac0..a59d6293a32b2 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -418,16 +418,6 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode,
+ 			continue;
+ 		}
+ 
+-		/* Don't expose silly rename entries to userspace. */
+-		if (nlen > 6 &&
+-		    dire->u.name[0] == '.' &&
+-		    ctx->actor != afs_lookup_filldir &&
+-		    ctx->actor != afs_lookup_one_filldir &&
+-		    memcmp(dire->u.name, ".__afs", 6) == 0) {
+-			ctx->pos = blkoff + next * sizeof(union afs_xdr_dirent);
+-			continue;
+-		}
+-
+ 		/* found the next entry */
+ 		if (!dir_emit(ctx, dire->u.name, nlen,
+ 			      ntohl(dire->u.vnode),
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 55818bd510fb0..40e805014f719 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -3663,7 +3663,13 @@ static int reserve_compress_blocks(struct dnode_of_data *dn, pgoff_t count)
+ 				goto next;
+ 			}
+ 
+-			if (__is_valid_data_blkaddr(blkaddr)) {
++			/*
++			 * compressed cluster was not released due to it
++			 * fails in release_compress_blocks(), so NEW_ADDR
++			 * is a possible case.
++			 */
++			if (blkaddr == NEW_ADDR ||
++				__is_valid_data_blkaddr(blkaddr)) {
+ 				compr_blocks++;
+ 				continue;
+ 			}
+@@ -3673,6 +3679,11 @@ static int reserve_compress_blocks(struct dnode_of_data *dn, pgoff_t count)
+ 		}
+ 
+ 		reserved = cluster_size - compr_blocks;
++
++		/* for the case all blocks in cluster were reserved */
++		if (reserved == 1)
++			goto next;
++
+ 		ret = inc_valid_block_count(sbi, dn->inode, &reserved);
+ 		if (ret)
+ 			return ret;
+diff --git a/fs/fhandle.c b/fs/fhandle.c
+index 01263ffbc4c08..9a5f153c8919e 100644
+--- a/fs/fhandle.c
++++ b/fs/fhandle.c
+@@ -37,7 +37,7 @@ static long do_sys_name_to_handle(struct path *path,
+ 	if (f_handle.handle_bytes > MAX_HANDLE_SZ)
+ 		return -EINVAL;
+ 
+-	handle = kmalloc(sizeof(struct file_handle) + f_handle.handle_bytes,
++	handle = kzalloc(sizeof(struct file_handle) + f_handle.handle_bytes,
+ 			 GFP_KERNEL);
+ 	if (!handle)
+ 		return -ENOMEM;
+diff --git a/fs/nfs/nfs42.h b/fs/nfs/nfs42.h
+index 0fe5aacbcfdf1..e7192d0eea3de 100644
+--- a/fs/nfs/nfs42.h
++++ b/fs/nfs/nfs42.h
+@@ -54,11 +54,14 @@ int nfs42_proc_removexattr(struct inode *inode, const char *name);
+  * They would be 7 bytes long in the eventual buffer ("user.x\0"), and
+  * 8 bytes long XDR-encoded.
+  *
+- * Include the trailing eof word as well.
++ * Include the trailing eof word as well and make the result a multiple
++ * of 4 bytes.
+  */
+ static inline u32 nfs42_listxattr_xdrsize(u32 buflen)
+ {
+-	return ((buflen / (XATTR_USER_PREFIX_LEN + 2)) * 8) + 4;
++	u32 size = 8 * buflen / (XATTR_USER_PREFIX_LEN + 2) + 4;
++
++	return (size + 3) & ~3;
+ }
+ #endif /* CONFIG_NFS_V4_2 */
+ #endif /* __LINUX_FS_NFS_NFS4_2_H */
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 7c3c96ed60853..8e546e6a56198 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -10370,29 +10370,33 @@ const struct nfs4_minor_version_ops *nfs_v4_minor_ops[] = {
+ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ {
+ 	ssize_t error, error2, error3;
++	size_t left = size;
+ 
+-	error = generic_listxattr(dentry, list, size);
++	error = generic_listxattr(dentry, list, left);
+ 	if (error < 0)
+ 		return error;
+ 	if (list) {
+ 		list += error;
+-		size -= error;
++		left -= error;
+ 	}
+ 
+-	error2 = nfs4_listxattr_nfs4_label(d_inode(dentry), list, size);
++	error2 = nfs4_listxattr_nfs4_label(d_inode(dentry), list, left);
+ 	if (error2 < 0)
+ 		return error2;
+ 
+ 	if (list) {
+ 		list += error2;
+-		size -= error2;
++		left -= error2;
+ 	}
+ 
+-	error3 = nfs4_listxattr_nfs4_user(d_inode(dentry), list, size);
++	error3 = nfs4_listxattr_nfs4_user(d_inode(dentry), list, left);
+ 	if (error3 < 0)
+ 		return error3;
+ 
+-	return error + error2 + error3;
++	error += error2 + error3;
++	if (size && error > size)
++		return -ERANGE;
++	return error;
+ }
+ 
+ static void nfs4_enable_swap(struct inode *inode)
+diff --git a/fs/nfs/nfsroot.c b/fs/nfs/nfsroot.c
+index fa148308822cc..c2cf4ff628811 100644
+--- a/fs/nfs/nfsroot.c
++++ b/fs/nfs/nfsroot.c
+@@ -175,10 +175,10 @@ static int __init root_nfs_cat(char *dest, const char *src,
+ 	size_t len = strlen(dest);
+ 
+ 	if (len && dest[len - 1] != ',')
+-		if (strlcat(dest, ",", destlen) > destlen)
++		if (strlcat(dest, ",", destlen) >= destlen)
+ 			return -1;
+ 
+-	if (strlcat(dest, src, destlen) > destlen)
++	if (strlcat(dest, src, destlen) >= destlen)
+ 		return -1;
+ 	return 0;
+ }
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index 4bb4b4b79827a..6a7b7d44753a3 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -401,15 +401,17 @@ int dquot_mark_dquot_dirty(struct dquot *dquot)
+ EXPORT_SYMBOL(dquot_mark_dquot_dirty);
+ 
+ /* Dirtify all the dquots - this can block when journalling */
+-static inline int mark_all_dquot_dirty(struct dquot * const *dquot)
++static inline int mark_all_dquot_dirty(struct dquot __rcu * const *dquots)
+ {
+ 	int ret, err, cnt;
++	struct dquot *dquot;
+ 
+ 	ret = err = 0;
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		if (dquot[cnt])
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (dquot)
+ 			/* Even in case of error we have to continue */
+-			ret = mark_dquot_dirty(dquot[cnt]);
++			ret = mark_dquot_dirty(dquot);
+ 		if (!err)
+ 			err = ret;
+ 	}
+@@ -1006,14 +1008,15 @@ struct dquot *dqget(struct super_block *sb, struct kqid qid)
+ }
+ EXPORT_SYMBOL(dqget);
+ 
+-static inline struct dquot **i_dquot(struct inode *inode)
++static inline struct dquot __rcu **i_dquot(struct inode *inode)
+ {
+-	return inode->i_sb->s_op->get_dquots(inode);
++	/* Force __rcu for now until filesystems are fixed */
++	return (struct dquot __rcu **)inode->i_sb->s_op->get_dquots(inode);
+ }
+ 
+ static int dqinit_needed(struct inode *inode, int type)
+ {
+-	struct dquot * const *dquots;
++	struct dquot __rcu * const *dquots;
+ 	int cnt;
+ 
+ 	if (IS_NOQUOTA(inode))
+@@ -1086,59 +1089,7 @@ static int add_dquot_ref(struct super_block *sb, int type)
+ 	return err;
+ }
+ 
+-/*
+- * Remove references to dquots from inode and add dquot to list for freeing
+- * if we have the last reference to dquot
+- */
+-static void remove_inode_dquot_ref(struct inode *inode, int type,
+-				   struct list_head *tofree_head)
+-{
+-	struct dquot **dquots = i_dquot(inode);
+-	struct dquot *dquot = dquots[type];
+-
+-	if (!dquot)
+-		return;
+-
+-	dquots[type] = NULL;
+-	if (list_empty(&dquot->dq_free)) {
+-		/*
+-		 * The inode still has reference to dquot so it can't be in the
+-		 * free list
+-		 */
+-		spin_lock(&dq_list_lock);
+-		list_add(&dquot->dq_free, tofree_head);
+-		spin_unlock(&dq_list_lock);
+-	} else {
+-		/*
+-		 * Dquot is already in a list to put so we won't drop the last
+-		 * reference here.
+-		 */
+-		dqput(dquot);
+-	}
+-}
+-
+-/*
+- * Free list of dquots
+- * Dquots are removed from inodes and no new references can be got so we are
+- * the only ones holding reference
+- */
+-static void put_dquot_list(struct list_head *tofree_head)
+-{
+-	struct list_head *act_head;
+-	struct dquot *dquot;
+-
+-	act_head = tofree_head->next;
+-	while (act_head != tofree_head) {
+-		dquot = list_entry(act_head, struct dquot, dq_free);
+-		act_head = act_head->next;
+-		/* Remove dquot from the list so we won't have problems... */
+-		list_del_init(&dquot->dq_free);
+-		dqput(dquot);
+-	}
+-}
+-
+-static void remove_dquot_ref(struct super_block *sb, int type,
+-		struct list_head *tofree_head)
++static void remove_dquot_ref(struct super_block *sb, int type)
+ {
+ 	struct inode *inode;
+ #ifdef CONFIG_QUOTA_DEBUG
+@@ -1155,11 +1106,18 @@ static void remove_dquot_ref(struct super_block *sb, int type,
+ 		 */
+ 		spin_lock(&dq_data_lock);
+ 		if (!IS_NOQUOTA(inode)) {
++			struct dquot __rcu **dquots = i_dquot(inode);
++			struct dquot *dquot = srcu_dereference_check(
++				dquots[type], &dquot_srcu,
++				lockdep_is_held(&dq_data_lock));
++
+ #ifdef CONFIG_QUOTA_DEBUG
+ 			if (unlikely(inode_get_rsv_space(inode) > 0))
+ 				reserved = 1;
+ #endif
+-			remove_inode_dquot_ref(inode, type, tofree_head);
++			rcu_assign_pointer(dquots[type], NULL);
++			if (dquot)
++				dqput(dquot);
+ 		}
+ 		spin_unlock(&dq_data_lock);
+ 	}
+@@ -1176,13 +1134,8 @@ static void remove_dquot_ref(struct super_block *sb, int type,
+ /* Gather all references from inodes and drop them */
+ static void drop_dquot_ref(struct super_block *sb, int type)
+ {
+-	LIST_HEAD(tofree_head);
+-
+-	if (sb->dq_op) {
+-		remove_dquot_ref(sb, type, &tofree_head);
+-		synchronize_srcu(&dquot_srcu);
+-		put_dquot_list(&tofree_head);
+-	}
++	if (sb->dq_op)
++		remove_dquot_ref(sb, type);
+ }
+ 
+ static inline
+@@ -1515,7 +1468,8 @@ static int inode_quota_active(const struct inode *inode)
+ static int __dquot_initialize(struct inode *inode, int type)
+ {
+ 	int cnt, init_needed = 0;
+-	struct dquot **dquots, *got[MAXQUOTAS] = {};
++	struct dquot __rcu **dquots;
++	struct dquot *got[MAXQUOTAS] = {};
+ 	struct super_block *sb = inode->i_sb;
+ 	qsize_t rsv;
+ 	int ret = 0;
+@@ -1590,7 +1544,7 @@ static int __dquot_initialize(struct inode *inode, int type)
+ 		if (!got[cnt])
+ 			continue;
+ 		if (!dquots[cnt]) {
+-			dquots[cnt] = got[cnt];
++			rcu_assign_pointer(dquots[cnt], got[cnt]);
+ 			got[cnt] = NULL;
+ 			/*
+ 			 * Make quota reservation system happy if someone
+@@ -1598,12 +1552,16 @@ static int __dquot_initialize(struct inode *inode, int type)
+ 			 */
+ 			rsv = inode_get_rsv_space(inode);
+ 			if (unlikely(rsv)) {
++				struct dquot *dquot = srcu_dereference_check(
++					dquots[cnt], &dquot_srcu,
++					lockdep_is_held(&dq_data_lock));
++
+ 				spin_lock(&inode->i_lock);
+ 				/* Get reservation again under proper lock */
+ 				rsv = __inode_get_rsv_space(inode);
+-				spin_lock(&dquots[cnt]->dq_dqb_lock);
+-				dquots[cnt]->dq_dqb.dqb_rsvspace += rsv;
+-				spin_unlock(&dquots[cnt]->dq_dqb_lock);
++				spin_lock(&dquot->dq_dqb_lock);
++				dquot->dq_dqb.dqb_rsvspace += rsv;
++				spin_unlock(&dquot->dq_dqb_lock);
+ 				spin_unlock(&inode->i_lock);
+ 			}
+ 		}
+@@ -1625,7 +1583,7 @@ EXPORT_SYMBOL(dquot_initialize);
+ 
+ bool dquot_initialize_needed(struct inode *inode)
+ {
+-	struct dquot **dquots;
++	struct dquot __rcu **dquots;
+ 	int i;
+ 
+ 	if (!inode_quota_active(inode))
+@@ -1650,13 +1608,14 @@ EXPORT_SYMBOL(dquot_initialize_needed);
+ static void __dquot_drop(struct inode *inode)
+ {
+ 	int cnt;
+-	struct dquot **dquots = i_dquot(inode);
++	struct dquot __rcu **dquots = i_dquot(inode);
+ 	struct dquot *put[MAXQUOTAS];
+ 
+ 	spin_lock(&dq_data_lock);
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		put[cnt] = dquots[cnt];
+-		dquots[cnt] = NULL;
++		put[cnt] = srcu_dereference_check(dquots[cnt], &dquot_srcu,
++					lockdep_is_held(&dq_data_lock));
++		rcu_assign_pointer(dquots[cnt], NULL);
+ 	}
+ 	spin_unlock(&dq_data_lock);
+ 	dqput_all(put);
+@@ -1664,7 +1623,7 @@ static void __dquot_drop(struct inode *inode)
+ 
+ void dquot_drop(struct inode *inode)
+ {
+-	struct dquot * const *dquots;
++	struct dquot __rcu * const *dquots;
+ 	int cnt;
+ 
+ 	if (IS_NOQUOTA(inode))
+@@ -1737,7 +1696,8 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
+ 	int cnt, ret = 0, index;
+ 	struct dquot_warn warn[MAXQUOTAS];
+ 	int reserve = flags & DQUOT_SPACE_RESERVE;
+-	struct dquot **dquots;
++	struct dquot __rcu **dquots;
++	struct dquot *dquot;
+ 
+ 	if (!inode_quota_active(inode)) {
+ 		if (reserve) {
+@@ -1757,27 +1717,26 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
+ 	index = srcu_read_lock(&dquot_srcu);
+ 	spin_lock(&inode->i_lock);
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		if (!dquots[cnt])
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (!dquot)
+ 			continue;
+ 		if (reserve) {
+-			ret = dquot_add_space(dquots[cnt], 0, number, flags,
+-					      &warn[cnt]);
++			ret = dquot_add_space(dquot, 0, number, flags, &warn[cnt]);
+ 		} else {
+-			ret = dquot_add_space(dquots[cnt], number, 0, flags,
+-					      &warn[cnt]);
++			ret = dquot_add_space(dquot, number, 0, flags, &warn[cnt]);
+ 		}
+ 		if (ret) {
+ 			/* Back out changes we already did */
+ 			for (cnt--; cnt >= 0; cnt--) {
+-				if (!dquots[cnt])
++				dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++				if (!dquot)
+ 					continue;
+-				spin_lock(&dquots[cnt]->dq_dqb_lock);
++				spin_lock(&dquot->dq_dqb_lock);
+ 				if (reserve)
+-					dquot_free_reserved_space(dquots[cnt],
+-								  number);
++					dquot_free_reserved_space(dquot, number);
+ 				else
+-					dquot_decr_space(dquots[cnt], number);
+-				spin_unlock(&dquots[cnt]->dq_dqb_lock);
++					dquot_decr_space(dquot, number);
++				spin_unlock(&dquot->dq_dqb_lock);
+ 			}
+ 			spin_unlock(&inode->i_lock);
+ 			goto out_flush_warn;
+@@ -1807,7 +1766,8 @@ int dquot_alloc_inode(struct inode *inode)
+ {
+ 	int cnt, ret = 0, index;
+ 	struct dquot_warn warn[MAXQUOTAS];
+-	struct dquot * const *dquots;
++	struct dquot __rcu * const *dquots;
++	struct dquot *dquot;
+ 
+ 	if (!inode_quota_active(inode))
+ 		return 0;
+@@ -1818,17 +1778,19 @@ int dquot_alloc_inode(struct inode *inode)
+ 	index = srcu_read_lock(&dquot_srcu);
+ 	spin_lock(&inode->i_lock);
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		if (!dquots[cnt])
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (!dquot)
+ 			continue;
+-		ret = dquot_add_inodes(dquots[cnt], 1, &warn[cnt]);
++		ret = dquot_add_inodes(dquot, 1, &warn[cnt]);
+ 		if (ret) {
+ 			for (cnt--; cnt >= 0; cnt--) {
+-				if (!dquots[cnt])
++				dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++				if (!dquot)
+ 					continue;
+ 				/* Back out changes we already did */
+-				spin_lock(&dquots[cnt]->dq_dqb_lock);
+-				dquot_decr_inodes(dquots[cnt], 1);
+-				spin_unlock(&dquots[cnt]->dq_dqb_lock);
++				spin_lock(&dquot->dq_dqb_lock);
++				dquot_decr_inodes(dquot, 1);
++				spin_unlock(&dquot->dq_dqb_lock);
+ 			}
+ 			goto warn_put_all;
+ 		}
+@@ -1849,7 +1811,8 @@ EXPORT_SYMBOL(dquot_alloc_inode);
+  */
+ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
+ {
+-	struct dquot **dquots;
++	struct dquot __rcu **dquots;
++	struct dquot *dquot;
+ 	int cnt, index;
+ 
+ 	if (!inode_quota_active(inode)) {
+@@ -1865,9 +1828,8 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
+ 	spin_lock(&inode->i_lock);
+ 	/* Claim reserved quotas to allocated quotas */
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		if (dquots[cnt]) {
+-			struct dquot *dquot = dquots[cnt];
+-
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (dquot) {
+ 			spin_lock(&dquot->dq_dqb_lock);
+ 			if (WARN_ON_ONCE(dquot->dq_dqb.dqb_rsvspace < number))
+ 				number = dquot->dq_dqb.dqb_rsvspace;
+@@ -1891,7 +1853,8 @@ EXPORT_SYMBOL(dquot_claim_space_nodirty);
+  */
+ void dquot_reclaim_space_nodirty(struct inode *inode, qsize_t number)
+ {
+-	struct dquot **dquots;
++	struct dquot __rcu **dquots;
++	struct dquot *dquot;
+ 	int cnt, index;
+ 
+ 	if (!inode_quota_active(inode)) {
+@@ -1907,9 +1870,8 @@ void dquot_reclaim_space_nodirty(struct inode *inode, qsize_t number)
+ 	spin_lock(&inode->i_lock);
+ 	/* Claim reserved quotas to allocated quotas */
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		if (dquots[cnt]) {
+-			struct dquot *dquot = dquots[cnt];
+-
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (dquot) {
+ 			spin_lock(&dquot->dq_dqb_lock);
+ 			if (WARN_ON_ONCE(dquot->dq_dqb.dqb_curspace < number))
+ 				number = dquot->dq_dqb.dqb_curspace;
+@@ -1935,7 +1897,8 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
+ {
+ 	unsigned int cnt;
+ 	struct dquot_warn warn[MAXQUOTAS];
+-	struct dquot **dquots;
++	struct dquot __rcu **dquots;
++	struct dquot *dquot;
+ 	int reserve = flags & DQUOT_SPACE_RESERVE, index;
+ 
+ 	if (!inode_quota_active(inode)) {
+@@ -1956,17 +1919,18 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
+ 		int wtype;
+ 
+ 		warn[cnt].w_type = QUOTA_NL_NOWARN;
+-		if (!dquots[cnt])
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (!dquot)
+ 			continue;
+-		spin_lock(&dquots[cnt]->dq_dqb_lock);
+-		wtype = info_bdq_free(dquots[cnt], number);
++		spin_lock(&dquot->dq_dqb_lock);
++		wtype = info_bdq_free(dquot, number);
+ 		if (wtype != QUOTA_NL_NOWARN)
+-			prepare_warning(&warn[cnt], dquots[cnt], wtype);
++			prepare_warning(&warn[cnt], dquot, wtype);
+ 		if (reserve)
+-			dquot_free_reserved_space(dquots[cnt], number);
++			dquot_free_reserved_space(dquot, number);
+ 		else
+-			dquot_decr_space(dquots[cnt], number);
+-		spin_unlock(&dquots[cnt]->dq_dqb_lock);
++			dquot_decr_space(dquot, number);
++		spin_unlock(&dquot->dq_dqb_lock);
+ 	}
+ 	if (reserve)
+ 		*inode_reserved_space(inode) -= number;
+@@ -1990,7 +1954,8 @@ void dquot_free_inode(struct inode *inode)
+ {
+ 	unsigned int cnt;
+ 	struct dquot_warn warn[MAXQUOTAS];
+-	struct dquot * const *dquots;
++	struct dquot __rcu * const *dquots;
++	struct dquot *dquot;
+ 	int index;
+ 
+ 	if (!inode_quota_active(inode))
+@@ -2001,16 +1966,16 @@ void dquot_free_inode(struct inode *inode)
+ 	spin_lock(&inode->i_lock);
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+ 		int wtype;
+-
+ 		warn[cnt].w_type = QUOTA_NL_NOWARN;
+-		if (!dquots[cnt])
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (!dquot)
+ 			continue;
+-		spin_lock(&dquots[cnt]->dq_dqb_lock);
+-		wtype = info_idq_free(dquots[cnt], 1);
++		spin_lock(&dquot->dq_dqb_lock);
++		wtype = info_idq_free(dquot, 1);
+ 		if (wtype != QUOTA_NL_NOWARN)
+-			prepare_warning(&warn[cnt], dquots[cnt], wtype);
+-		dquot_decr_inodes(dquots[cnt], 1);
+-		spin_unlock(&dquots[cnt]->dq_dqb_lock);
++			prepare_warning(&warn[cnt], dquot, wtype);
++		dquot_decr_inodes(dquot, 1);
++		spin_unlock(&dquot->dq_dqb_lock);
+ 	}
+ 	spin_unlock(&inode->i_lock);
+ 	mark_all_dquot_dirty(dquots);
+@@ -2036,8 +2001,9 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
+ 	qsize_t cur_space;
+ 	qsize_t rsv_space = 0;
+ 	qsize_t inode_usage = 1;
++	struct dquot __rcu **dquots;
+ 	struct dquot *transfer_from[MAXQUOTAS] = {};
+-	int cnt, ret = 0;
++	int cnt, index, ret = 0;
+ 	char is_valid[MAXQUOTAS] = {};
+ 	struct dquot_warn warn_to[MAXQUOTAS];
+ 	struct dquot_warn warn_from_inodes[MAXQUOTAS];
+@@ -2068,6 +2034,7 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
+ 	}
+ 	cur_space = __inode_get_bytes(inode);
+ 	rsv_space = __inode_get_rsv_space(inode);
++	dquots = i_dquot(inode);
+ 	/*
+ 	 * Build the transfer_from list, check limits, and update usage in
+ 	 * the target structures.
+@@ -2082,7 +2049,8 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
+ 		if (!sb_has_quota_active(inode->i_sb, cnt))
+ 			continue;
+ 		is_valid[cnt] = 1;
+-		transfer_from[cnt] = i_dquot(inode)[cnt];
++		transfer_from[cnt] = srcu_dereference_check(dquots[cnt],
++				&dquot_srcu, lockdep_is_held(&dq_data_lock));
+ 		ret = dquot_add_inodes(transfer_to[cnt], inode_usage,
+ 				       &warn_to[cnt]);
+ 		if (ret)
+@@ -2121,13 +2089,21 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
+ 						  rsv_space);
+ 			spin_unlock(&transfer_from[cnt]->dq_dqb_lock);
+ 		}
+-		i_dquot(inode)[cnt] = transfer_to[cnt];
++		rcu_assign_pointer(dquots[cnt], transfer_to[cnt]);
+ 	}
+ 	spin_unlock(&inode->i_lock);
+ 	spin_unlock(&dq_data_lock);
+ 
+-	mark_all_dquot_dirty(transfer_from);
+-	mark_all_dquot_dirty(transfer_to);
++	/*
++	 * These arrays are local and we hold dquot references so we don't need
++	 * the srcu protection but still take dquot_srcu to avoid warning in
++	 * mark_all_dquot_dirty().
++	 */
++	index = srcu_read_lock(&dquot_srcu);
++	mark_all_dquot_dirty((struct dquot __rcu **)transfer_from);
++	mark_all_dquot_dirty((struct dquot __rcu **)transfer_to);
++	srcu_read_unlock(&dquot_srcu, index);
++
+ 	flush_warnings(warn_to);
+ 	flush_warnings(warn_from_inodes);
+ 	flush_warnings(warn_from_space);
+diff --git a/fs/select.c b/fs/select.c
+index 5edffee1162c2..668a5200503ae 100644
+--- a/fs/select.c
++++ b/fs/select.c
+@@ -475,7 +475,7 @@ static inline void wait_key_set(poll_table *wait, unsigned long in,
+ 		wait->_key |= POLLOUT_SET;
+ }
+ 
+-static int do_select(int n, fd_set_bits *fds, struct timespec64 *end_time)
++static noinline_for_stack int do_select(int n, fd_set_bits *fds, struct timespec64 *end_time)
+ {
+ 	ktime_t expire, *to = NULL;
+ 	struct poll_wqueues table;
+diff --git a/include/drm/drm_fixed.h b/include/drm/drm_fixed.h
+index 553210c02ee0f..627efa56e59fb 100644
+--- a/include/drm/drm_fixed.h
++++ b/include/drm/drm_fixed.h
+@@ -88,7 +88,7 @@ static inline int drm_fixp2int(s64 a)
+ 
+ static inline int drm_fixp2int_ceil(s64 a)
+ {
+-	if (a > 0)
++	if (a >= 0)
+ 		return drm_fixp2int(a + DRM_FIXED_ALMOST_ONE);
+ 	else
+ 		return drm_fixp2int(a - DRM_FIXED_ALMOST_ONE);
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 98fdf5a31fd66..583824f111079 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -1883,6 +1883,7 @@ struct block_device_operations {
+ 	void (*unlock_native_capacity) (struct gendisk *);
+ 	int (*revalidate_disk) (struct gendisk *);
+ 	int (*getgeo)(struct block_device *, struct hd_geometry *);
++	int (*set_read_only)(struct block_device *bdev, bool ro);
+ 	/* this callback is with swap_lock and sometimes page table lock held */
+ 	void (*swap_slot_free_notify) (struct block_device *, unsigned long);
+ 	int (*report_zones)(struct gendisk *, sector_t sector,
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index bfdf40be5360a..a75faf437e750 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -175,9 +175,14 @@ struct bpf_map {
+ 	 */
+ 	atomic64_t refcnt ____cacheline_aligned;
+ 	atomic64_t usercnt;
+-	struct work_struct work;
++	/* rcu is used before freeing and work is only used during freeing */
++	union {
++		struct work_struct work;
++		struct rcu_head rcu;
++	};
+ 	struct mutex freeze_mutex;
+ 	atomic64_t writecnt;
++	bool free_after_mult_rcu_gp;
+ };
+ 
+ static inline bool map_value_has_spin_lock(const struct bpf_map *map)
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index cd56e53bd42e2..840b2a05c1b9f 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -480,24 +480,27 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
+ 	__BPF_MAP(n, __BPF_DECL_ARGS, __BPF_N, u64, __ur_1, u64, __ur_2,       \
+ 		  u64, __ur_3, u64, __ur_4, u64, __ur_5)
+ 
+-#define BPF_CALL_x(x, name, ...)					       \
++#define BPF_CALL_x(x, attr, name, ...)					       \
+ 	static __always_inline						       \
+ 	u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__));   \
+ 	typedef u64 (*btf_##name)(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__)); \
+-	u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__));	       \
+-	u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__))	       \
++	attr u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__));    \
++	attr u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__))     \
+ 	{								       \
+ 		return ((btf_##name)____##name)(__BPF_MAP(x,__BPF_CAST,__BPF_N,__VA_ARGS__));\
+ 	}								       \
+ 	static __always_inline						       \
+ 	u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__))
+ 
+-#define BPF_CALL_0(name, ...)	BPF_CALL_x(0, name, __VA_ARGS__)
+-#define BPF_CALL_1(name, ...)	BPF_CALL_x(1, name, __VA_ARGS__)
+-#define BPF_CALL_2(name, ...)	BPF_CALL_x(2, name, __VA_ARGS__)
+-#define BPF_CALL_3(name, ...)	BPF_CALL_x(3, name, __VA_ARGS__)
+-#define BPF_CALL_4(name, ...)	BPF_CALL_x(4, name, __VA_ARGS__)
+-#define BPF_CALL_5(name, ...)	BPF_CALL_x(5, name, __VA_ARGS__)
++#define __NOATTR
++#define BPF_CALL_0(name, ...)	BPF_CALL_x(0, __NOATTR, name, __VA_ARGS__)
++#define BPF_CALL_1(name, ...)	BPF_CALL_x(1, __NOATTR, name, __VA_ARGS__)
++#define BPF_CALL_2(name, ...)	BPF_CALL_x(2, __NOATTR, name, __VA_ARGS__)
++#define BPF_CALL_3(name, ...)	BPF_CALL_x(3, __NOATTR, name, __VA_ARGS__)
++#define BPF_CALL_4(name, ...)	BPF_CALL_x(4, __NOATTR, name, __VA_ARGS__)
++#define BPF_CALL_5(name, ...)	BPF_CALL_x(5, __NOATTR, name, __VA_ARGS__)
++
++#define NOTRACE_BPF_CALL_1(name, ...)	BPF_CALL_x(1, notrace, name, __VA_ARGS__)
+ 
+ #define bpf_ctx_range(TYPE, MEMBER)						\
+ 	offsetof(TYPE, MEMBER) ... offsetofend(TYPE, MEMBER) - 1
+diff --git a/include/linux/igmp.h b/include/linux/igmp.h
+index 64ce8cd1cfaf1..4adab8ada85af 100644
+--- a/include/linux/igmp.h
++++ b/include/linux/igmp.h
+@@ -121,9 +121,9 @@ extern int ip_mc_source(int add, int omode, struct sock *sk,
+ 		struct ip_mreq_source *mreqs, int ifindex);
+ extern int ip_mc_msfilter(struct sock *sk, struct ip_msfilter *msf,int ifindex);
+ extern int ip_mc_msfget(struct sock *sk, struct ip_msfilter *msf,
+-		struct ip_msfilter __user *optval, int __user *optlen);
++			sockptr_t optval, sockptr_t optlen);
+ extern int ip_mc_gsfget(struct sock *sk, struct group_filter *gsf,
+-			struct sockaddr_storage __user *p);
++			sockptr_t optval, size_t offset);
+ extern int ip_mc_sf_allow(struct sock *sk, __be32 local, __be32 rmt,
+ 			  int dif, int sdif);
+ extern void ip_mc_init_dev(struct in_device *);
+diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
+index 649a4d7c241bc..55d09f594cd14 100644
+--- a/include/linux/io_uring.h
++++ b/include/linux/io_uring.h
+@@ -6,9 +6,9 @@
+ #include <linux/xarray.h>
+ 
+ #if defined(CONFIG_IO_URING)
+-struct sock *io_uring_get_socket(struct file *file);
+ void __io_uring_cancel(bool cancel_all);
+ void __io_uring_free(struct task_struct *tsk);
++bool io_is_uring_fops(struct file *file);
+ 
+ static inline void io_uring_files_cancel(void)
+ {
+@@ -26,10 +26,6 @@ static inline void io_uring_free(struct task_struct *tsk)
+ 		__io_uring_free(tsk);
+ }
+ #else
+-static inline struct sock *io_uring_get_socket(struct file *file)
+-{
+-	return NULL;
+-}
+ static inline void io_uring_task_cancel(void)
+ {
+ }
+@@ -39,6 +35,10 @@ static inline void io_uring_files_cancel(void)
+ static inline void io_uring_free(struct task_struct *tsk)
+ {
+ }
++static inline bool io_is_uring_fops(struct file *file)
++{
++	return false;
++}
+ #endif
+ 
+ #endif
+diff --git a/include/linux/mlx5/qp.h b/include/linux/mlx5/qp.h
+index d75ef8aa8fac0..28d44061d6700 100644
+--- a/include/linux/mlx5/qp.h
++++ b/include/linux/mlx5/qp.h
+@@ -261,7 +261,10 @@ struct mlx5_wqe_eth_seg {
+ 	union {
+ 		struct {
+ 			__be16 sz;
+-			u8     start[2];
++			union {
++				u8     start[2];
++				DECLARE_FLEX_ARRAY(u8, data);
++			};
+ 		} inline_hdr;
+ 		struct {
+ 			__be16 type;
+diff --git a/include/linux/mroute.h b/include/linux/mroute.h
+index 6cbbfe94348ce..80b8400ab8b24 100644
+--- a/include/linux/mroute.h
++++ b/include/linux/mroute.h
+@@ -17,7 +17,7 @@ static inline int ip_mroute_opt(int opt)
+ }
+ 
+ int ip_mroute_setsockopt(struct sock *, int, sockptr_t, unsigned int);
+-int ip_mroute_getsockopt(struct sock *, int, char __user *, int __user *);
++int ip_mroute_getsockopt(struct sock *, int, sockptr_t, sockptr_t);
+ int ipmr_ioctl(struct sock *sk, int cmd, void __user *arg);
+ int ipmr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg);
+ int ip_mr_init(void);
+@@ -29,8 +29,8 @@ static inline int ip_mroute_setsockopt(struct sock *sock, int optname,
+ 	return -ENOPROTOOPT;
+ }
+ 
+-static inline int ip_mroute_getsockopt(struct sock *sock, int optname,
+-				       char __user *optval, int __user *optlen)
++static inline int ip_mroute_getsockopt(struct sock *sk, int optname,
++				       sockptr_t optval, sockptr_t optlen)
+ {
+ 	return -ENOPROTOOPT;
+ }
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 550e1cdb473fa..bf46453475e31 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -2191,6 +2191,11 @@ static inline struct pci_dev *pcie_find_root_port(struct pci_dev *dev)
+ 	return NULL;
+ }
+ 
++static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
++{
++	return dev->error_state == pci_channel_io_perm_failure;
++}
++
+ void pci_request_acs(void);
+ bool pci_acs_enabled(struct pci_dev *pdev, u16 acs_flags);
+ bool pci_acs_path_enabled(struct pci_dev *start,
+diff --git a/include/linux/poll.h b/include/linux/poll.h
+index 1cdc32b1f1b08..7e0fdcf905d2e 100644
+--- a/include/linux/poll.h
++++ b/include/linux/poll.h
+@@ -16,11 +16,7 @@
+ extern struct ctl_table epoll_table[]; /* for sysctl */
+ /* ~832 bytes of stack space used max in sys_select/sys_poll before allocating
+    additional memory. */
+-#ifdef __clang__
+-#define MAX_STACK_ALLOC 768
+-#else
+ #define MAX_STACK_ALLOC 832
+-#endif
+ #define FRONTEND_STACK_ALLOC	256
+ #define SELECT_STACK_ALLOC	FRONTEND_STACK_ALLOC
+ #define POLL_STACK_ALLOC	FRONTEND_STACK_ALLOC
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 8716a17063518..9db6710e6ee7b 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -201,6 +201,18 @@ static inline void exit_tasks_rcu_stop(void) { }
+ static inline void exit_tasks_rcu_finish(void) { }
+ #endif /* #else #ifdef CONFIG_TASKS_RCU_GENERIC */
+ 
++/**
++ * rcu_trace_implies_rcu_gp - does an RCU Tasks Trace grace period imply an RCU grace period?
++ *
++ * As an accident of implementation, an RCU Tasks Trace grace period also
++ * acts as an RCU grace period.  However, this could change at any time.
++ * Code relying on this accident must call this function to verify that
++ * this accident is still happening.
++ *
++ * You have been warned!
++ */
++static inline bool rcu_trace_implies_rcu_gp(void) { return true; }
++
+ /**
+  * cond_resched_tasks_rcu_qs - Report potential quiescent states to RCU
+  *
+@@ -214,6 +226,37 @@ do { \
+ 	cond_resched(); \
+ } while (0)
+ 
++/**
++ * rcu_softirq_qs_periodic - Report RCU and RCU-Tasks quiescent states
++ * @old_ts: jiffies at start of processing.
++ *
++ * This helper is for long-running softirq handlers, such as NAPI threads in
++ * networking. The caller should initialize the variable passed in as @old_ts
++ * at the beginning of the softirq handler. When invoked frequently, this macro
++ * will invoke rcu_softirq_qs() every 100 milliseconds thereafter, which will
++ * provide both RCU and RCU-Tasks quiescent states. Note that this macro
++ * modifies its old_ts argument.
++ *
++ * Because regions of code that have disabled softirq act as RCU read-side
++ * critical sections, this macro should be invoked with softirq (and
++ * preemption) enabled.
++ *
++ * The macro is not needed when CONFIG_PREEMPT_RT is defined. RT kernels would
++ * have more chance to invoke schedule() calls and provide necessary quiescent
++ * states. As a contrast, calling cond_resched() only won't achieve the same
++ * effect because cond_resched() does not provide RCU-Tasks quiescent states.
++ */
++#define rcu_softirq_qs_periodic(old_ts) \
++do { \
++	if (!IS_ENABLED(CONFIG_PREEMPT_RT) && \
++	    time_after(jiffies, (old_ts) + HZ / 10)) { \
++		preempt_disable(); \
++		rcu_softirq_qs(); \
++		preempt_enable(); \
++		(old_ts) = jiffies; \
++	} \
++} while (0)
++
+ /*
+  * Infrastructure to implement the synchronize_() primitives in
+  * TREE_RCU and rcu_barrier_() primitives in TINY_RCU.
+diff --git a/include/linux/remoteproc.h b/include/linux/remoteproc.h
+index 3fa3ba6498e87..2546758f13eb0 100644
+--- a/include/linux/remoteproc.h
++++ b/include/linux/remoteproc.h
+@@ -368,7 +368,9 @@ enum rsc_handling_status {
+  * RSC_HANDLED if resource was handled, RSC_IGNORED if not handled and a
+  * negative value on error
+  * @load_rsc_table:	load resource table from firmware image
+- * @find_loaded_rsc_table: find the loaded resouce table
++ * @find_loaded_rsc_table: find the loaded resource table from firmware image
++ * @get_loaded_rsc_table: get resource table installed in memory
++ *			  by external entity
+  * @load:		load firmware to memory, where the remote processor
+  *			expects to find it
+  * @sanity_check:	sanity check the fw image
+@@ -389,6 +391,8 @@ struct rproc_ops {
+ 			  int offset, int avail);
+ 	struct resource_table *(*find_loaded_rsc_table)(
+ 				struct rproc *rproc, const struct firmware *fw);
++	struct resource_table *(*get_loaded_rsc_table)(
++				struct rproc *rproc, size_t *size);
+ 	int (*load)(struct rproc *rproc, const struct firmware *fw);
+ 	int (*sanity_check)(struct rproc *rproc, const struct firmware *fw);
+ 	u64 (*get_boot_addr)(struct rproc *rproc, const struct firmware *fw);
+diff --git a/include/net/compat.h b/include/net/compat.h
+index 745db0d605b62..52bf5f0ee236b 100644
+--- a/include/net/compat.h
++++ b/include/net/compat.h
+@@ -81,13 +81,26 @@ struct compat_group_source_req {
+ } __packed;
+ 
+ struct compat_group_filter {
+-	__u32				 gf_interface;
+-	struct __kernel_sockaddr_storage gf_group
+-		__aligned(4);
+-	__u32				 gf_fmode;
+-	__u32				 gf_numsrc;
+-	struct __kernel_sockaddr_storage gf_slist[1]
+-		__aligned(4);
++	union {
++		struct {
++			__u32				 gf_interface_aux;
++			struct __kernel_sockaddr_storage gf_group_aux
++				__aligned(4);
++			__u32				 gf_fmode_aux;
++			__u32				 gf_numsrc_aux;
++			struct __kernel_sockaddr_storage gf_slist[1]
++				__aligned(4);
++		} __packed;
++		struct {
++			__u32				 gf_interface;
++			struct __kernel_sockaddr_storage gf_group
++				__aligned(4);
++			__u32				 gf_fmode;
++			__u32				 gf_numsrc;
++			struct __kernel_sockaddr_storage gf_slist_flex[]
++				__aligned(4);
++		} __packed;
++	};
+ } __packed;
+ 
+ #endif /* NET_COMPAT_H */
+diff --git a/include/uapi/linux/in.h b/include/uapi/linux/in.h
+index 3960bc3da6b30..c4702fff64d3a 100644
+--- a/include/uapi/linux/in.h
++++ b/include/uapi/linux/in.h
+@@ -190,11 +190,22 @@ struct ip_mreq_source {
+ };
+ 
+ struct ip_msfilter {
+-	__be32		imsf_multiaddr;
+-	__be32		imsf_interface;
+-	__u32		imsf_fmode;
+-	__u32		imsf_numsrc;
+-	__be32		imsf_slist[1];
++	union {
++		struct {
++			__be32		imsf_multiaddr_aux;
++			__be32		imsf_interface_aux;
++			__u32		imsf_fmode_aux;
++			__u32		imsf_numsrc_aux;
++			__be32		imsf_slist[1];
++		};
++		struct {
++			__be32		imsf_multiaddr;
++			__be32		imsf_interface;
++			__u32		imsf_fmode;
++			__u32		imsf_numsrc;
++			__be32		imsf_slist_flex[];
++		};
++	};
+ };
+ 
+ #define IP_MSFILTER_SIZE(numsrc) \
+@@ -213,11 +224,22 @@ struct group_source_req {
+ };
+ 
+ struct group_filter {
+-	__u32				 gf_interface;	/* interface index */
+-	struct __kernel_sockaddr_storage gf_group;	/* multicast address */
+-	__u32				 gf_fmode;	/* filter mode */
+-	__u32				 gf_numsrc;	/* number of sources */
+-	struct __kernel_sockaddr_storage gf_slist[1];	/* interface index */
++	union {
++		struct {
++			__u32				 gf_interface_aux; /* interface index */
++			struct __kernel_sockaddr_storage gf_group_aux;	   /* multicast address */
++			__u32				 gf_fmode_aux;	   /* filter mode */
++			__u32				 gf_numsrc_aux;	   /* number of sources */
++			struct __kernel_sockaddr_storage gf_slist[1];	   /* interface index */
++		};
++		struct {
++			__u32				 gf_interface;	  /* interface index */
++			struct __kernel_sockaddr_storage gf_group;	  /* multicast address */
++			__u32				 gf_fmode;	  /* filter mode */
++			__u32				 gf_numsrc;	  /* number of sources */
++			struct __kernel_sockaddr_storage gf_slist_flex[]; /* interface index */
++		};
++	};
+ };
+ 
+ #define GROUP_FILTER_SIZE(numsrc) \
+diff --git a/include/uapi/scsi/fc/fc_els.h b/include/uapi/scsi/fc/fc_els.h
+index 8c704e510e398..91d4be9872203 100644
+--- a/include/uapi/scsi/fc/fc_els.h
++++ b/include/uapi/scsi/fc/fc_els.h
+@@ -916,7 +916,9 @@ enum fc_els_clid_ic {
+ 	ELS_CLID_IC_LIP =	8,	/* receiving LIP */
+ };
+ 
+-
++/*
++ * Link Integrity event types
++ */
+ enum fc_fpin_li_event_types {
+ 	FPIN_LI_UNKNOWN =		0x0,
+ 	FPIN_LI_LINK_FAILURE =		0x1,
+@@ -943,6 +945,54 @@ enum fc_fpin_li_event_types {
+ 	{ FPIN_LI_DEVICE_SPEC,		"Device Specific" },		\
+ }
+ 
++/*
++ * Delivery event types
++ */
++enum fc_fpin_deli_event_types {
++	FPIN_DELI_UNKNOWN =		0x0,
++	FPIN_DELI_TIMEOUT =		0x1,
++	FPIN_DELI_UNABLE_TO_ROUTE =	0x2,
++	FPIN_DELI_DEVICE_SPEC =		0xF,
++};
++
++/*
++ * Initializer useful for decoding table.
++ * Please keep this in sync with the above definitions.
++ */
++#define FC_FPIN_DELI_EVT_TYPES_INIT {					\
++	{ FPIN_DELI_UNKNOWN,		"Unknown" },			\
++	{ FPIN_DELI_TIMEOUT,		"Timeout" },			\
++	{ FPIN_DELI_UNABLE_TO_ROUTE,	"Unable to Route" },		\
++	{ FPIN_DELI_DEVICE_SPEC,	"Device Specific" },		\
++}
++
++/*
++ * Congestion event types
++ */
++enum fc_fpin_congn_event_types {
++	FPIN_CONGN_CLEAR =		0x0,
++	FPIN_CONGN_LOST_CREDIT =	0x1,
++	FPIN_CONGN_CREDIT_STALL =	0x2,
++	FPIN_CONGN_OVERSUBSCRIPTION =	0x3,
++	FPIN_CONGN_DEVICE_SPEC =	0xF,
++};
++
++/*
++ * Initializer useful for decoding table.
++ * Please keep this in sync with the above definitions.
++ */
++#define FC_FPIN_CONGN_EVT_TYPES_INIT {					\
++	{ FPIN_CONGN_CLEAR,		"Clear" },			\
++	{ FPIN_CONGN_LOST_CREDIT,	"Lost Credit" },		\
++	{ FPIN_CONGN_CREDIT_STALL,	"Credit Stall" },		\
++	{ FPIN_CONGN_OVERSUBSCRIPTION,	"Oversubscription" },		\
++	{ FPIN_CONGN_DEVICE_SPEC,	"Device Specific" },		\
++}
++
++enum fc_fpin_congn_severity_types {
++	FPIN_CONGN_SEVERITY_WARNING =	0xF1,
++	FPIN_CONGN_SEVERITY_ERROR =	0xF7,
++};
+ 
+ /*
+  * Link Integrity Notification Descriptor
+@@ -974,6 +1024,68 @@ struct fc_fn_li_desc {
+ 					 */
+ };
+ 
++/*
++ * Delivery Notification Descriptor
++ */
++struct fc_fn_deli_desc {
++	__be32		desc_tag;	/* Descriptor Tag (0x00020002) */
++	__be32		desc_len;	/* Length of Descriptor (in bytes).
++					 * Size of descriptor excluding
++					 * desc_tag and desc_len fields.
++					 */
++	__be64		detecting_wwpn;	/* Port Name that detected event */
++	__be64		attached_wwpn;	/* Port Name of device attached to
++					 * detecting Port Name
++					 */
++	__be32		deli_reason_code;/* see enum fc_fpin_deli_event_types */
++};
++
++/*
++ * Peer Congestion Notification Descriptor
++ */
++struct fc_fn_peer_congn_desc {
++	__be32		desc_tag;	/* Descriptor Tag (0x00020003) */
++	__be32		desc_len;	/* Length of Descriptor (in bytes).
++					 * Size of descriptor excluding
++					 * desc_tag and desc_len fields.
++					 */
++	__be64		detecting_wwpn;	/* Port Name that detected event */
++	__be64		attached_wwpn;	/* Port Name of device attached to
++					 * detecting Port Name
++					 */
++	__be16		event_type;	/* see enum fc_fpin_congn_event_types */
++	__be16		event_modifier;	/* Implementation specific value
++					 * describing the event type
++					 */
++	__be32		event_period;	/* duration (ms) of the detected
++					 * congestion event
++					 */
++	__be32		pname_count;	/* number of portname_list elements */
++	__be64		pname_list[0];	/* list of N_Port_Names accessible
++					 * through the attached port
++					 */
++};
++
++/*
++ * Congestion Notification Descriptor
++ */
++struct fc_fn_congn_desc {
++	__be32		desc_tag;	/* Descriptor Tag (0x00020004) */
++	__be32		desc_len;	/* Length of Descriptor (in bytes).
++					 * Size of descriptor excluding
++					 * desc_tag and desc_len fields.
++					 */
++	__be16		event_type;	/* see enum fc_fpin_congn_event_types */
++	__be16		event_modifier;	/* Implementation specific value
++					 * describing the event type
++					 */
++	__be32		event_period;	/* duration (ms) of the detected
++					 * congestion event
++					 */
++	__u8		severity;	/* command */
++	__u8		resv[3];	/* reserved - must be zero */
++};
++
+ /*
+  * ELS_FPIN - Fabric Performance Impact Notification
+  */
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 936abc6ee450c..fc60396c90396 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -62,7 +62,6 @@
+ #include <linux/net.h>
+ #include <net/sock.h>
+ #include <net/af_unix.h>
+-#include <net/scm.h>
+ #include <linux/anon_inodes.h>
+ #include <linux/sched/mm.h>
+ #include <linux/uaccess.h>
+@@ -440,9 +439,6 @@ struct io_ring_ctx {
+ 
+ 	/* Keep this last, we don't need it for the fast path */
+ 	struct {
+-		#if defined(CONFIG_UNIX)
+-			struct socket		*ring_sock;
+-		#endif
+ 		/* hashed buffered write serialization */
+ 		struct io_wq_hash		*hash_map;
+ 
+@@ -1113,19 +1109,6 @@ static struct kmem_cache *req_cachep;
+ 
+ static const struct file_operations io_uring_fops;
+ 
+-struct sock *io_uring_get_socket(struct file *file)
+-{
+-#if defined(CONFIG_UNIX)
+-	if (file->f_op == &io_uring_fops) {
+-		struct io_ring_ctx *ctx = file->private_data;
+-
+-		return ctx->ring_sock->sk;
+-	}
+-#endif
+-	return NULL;
+-}
+-EXPORT_SYMBOL(io_uring_get_socket);
+-
+ static inline void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
+ {
+ 	if (!*locked) {
+@@ -7657,7 +7640,7 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 					  struct io_wait_queue *iowq,
+ 					  ktime_t *timeout)
+ {
+-	int io_wait, ret;
++	int ret;
+ 
+ 	/* make sure we run task_work before checking for signals */
+ 	ret = io_run_task_work_sig();
+@@ -7672,13 +7655,12 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 	 * can take into account that the task is waiting for IO - turns out
+ 	 * to be important for low QD IO.
+ 	 */
+-	io_wait = current->in_iowait;
+ 	if (current_pending_io())
+ 		current->in_iowait = 1;
+ 	ret = 1;
+ 	if (!schedule_hrtimeout(timeout, HRTIMER_MODE_ABS))
+ 		ret = -ETIME;
+-	current->in_iowait = io_wait;
++	current->in_iowait = 0;
+ 	return ret;
+ }
+ 
+@@ -7989,15 +7971,6 @@ static void io_free_file_tables(struct io_file_table *table)
+ 
+ static void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
+ {
+-#if defined(CONFIG_UNIX)
+-	if (ctx->ring_sock) {
+-		struct sock *sock = ctx->ring_sock->sk;
+-		struct sk_buff *skb;
+-
+-		while ((skb = skb_dequeue(&sock->sk_receive_queue)) != NULL)
+-			kfree_skb(skb);
+-	}
+-#else
+ 	int i;
+ 
+ 	for (i = 0; i < ctx->nr_user_files; i++) {
+@@ -8007,7 +7980,6 @@ static void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
+ 		if (file)
+ 			fput(file);
+ 	}
+-#endif
+ 	io_free_file_tables(&ctx->file_table);
+ 	io_rsrc_data_free(ctx->file_data);
+ 	ctx->file_data = NULL;
+@@ -8159,170 +8131,11 @@ static struct io_sq_data *io_get_sq_data(struct io_uring_params *p,
+ 	return sqd;
+ }
+ 
+-#if defined(CONFIG_UNIX)
+-/*
+- * Ensure the UNIX gc is aware of our file set, so we are certain that
+- * the io_uring can be safely unregistered on process exit, even if we have
+- * loops in the file referencing.
+- */
+-static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
+-{
+-	struct sock *sk = ctx->ring_sock->sk;
+-	struct scm_fp_list *fpl;
+-	struct sk_buff *skb;
+-	int i, nr_files;
+-
+-	fpl = kzalloc(sizeof(*fpl), GFP_KERNEL);
+-	if (!fpl)
+-		return -ENOMEM;
+-
+-	skb = alloc_skb(0, GFP_KERNEL);
+-	if (!skb) {
+-		kfree(fpl);
+-		return -ENOMEM;
+-	}
+-
+-	skb->sk = sk;
+-	skb->scm_io_uring = 1;
+-
+-	nr_files = 0;
+-	fpl->user = get_uid(current_user());
+-	for (i = 0; i < nr; i++) {
+-		struct file *file = io_file_from_index(ctx, i + offset);
+-
+-		if (!file)
+-			continue;
+-		fpl->fp[nr_files] = get_file(file);
+-		unix_inflight(fpl->user, fpl->fp[nr_files]);
+-		nr_files++;
+-	}
+-
+-	if (nr_files) {
+-		fpl->max = SCM_MAX_FD;
+-		fpl->count = nr_files;
+-		UNIXCB(skb).fp = fpl;
+-		skb->destructor = unix_destruct_scm;
+-		refcount_add(skb->truesize, &sk->sk_wmem_alloc);
+-		skb_queue_head(&sk->sk_receive_queue, skb);
+-
+-		for (i = 0; i < nr; i++) {
+-			struct file *file = io_file_from_index(ctx, i + offset);
+-
+-			if (file)
+-				fput(file);
+-		}
+-	} else {
+-		kfree_skb(skb);
+-		free_uid(fpl->user);
+-		kfree(fpl);
+-	}
+-
+-	return 0;
+-}
+-
+-/*
+- * If UNIX sockets are enabled, fd passing can cause a reference cycle which
+- * causes regular reference counting to break down. We rely on the UNIX
+- * garbage collection to take care of this problem for us.
+- */
+-static int io_sqe_files_scm(struct io_ring_ctx *ctx)
+-{
+-	unsigned left, total;
+-	int ret = 0;
+-
+-	total = 0;
+-	left = ctx->nr_user_files;
+-	while (left) {
+-		unsigned this_files = min_t(unsigned, left, SCM_MAX_FD);
+-
+-		ret = __io_sqe_files_scm(ctx, this_files, total);
+-		if (ret)
+-			break;
+-		left -= this_files;
+-		total += this_files;
+-	}
+-
+-	if (!ret)
+-		return 0;
+-
+-	while (total < ctx->nr_user_files) {
+-		struct file *file = io_file_from_index(ctx, total);
+-
+-		if (file)
+-			fput(file);
+-		total++;
+-	}
+-
+-	return ret;
+-}
+-#else
+-static int io_sqe_files_scm(struct io_ring_ctx *ctx)
+-{
+-	return 0;
+-}
+-#endif
+-
+ static void io_rsrc_file_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc)
+ {
+ 	struct file *file = prsrc->file;
+-#if defined(CONFIG_UNIX)
+-	struct sock *sock = ctx->ring_sock->sk;
+-	struct sk_buff_head list, *head = &sock->sk_receive_queue;
+-	struct sk_buff *skb;
+-	int i;
+-
+-	__skb_queue_head_init(&list);
+-
+-	/*
+-	 * Find the skb that holds this file in its SCM_RIGHTS. When found,
+-	 * remove this entry and rearrange the file array.
+-	 */
+-	skb = skb_dequeue(head);
+-	while (skb) {
+-		struct scm_fp_list *fp;
+ 
+-		fp = UNIXCB(skb).fp;
+-		for (i = 0; i < fp->count; i++) {
+-			int left;
+-
+-			if (fp->fp[i] != file)
+-				continue;
+-
+-			unix_notinflight(fp->user, fp->fp[i]);
+-			left = fp->count - 1 - i;
+-			if (left) {
+-				memmove(&fp->fp[i], &fp->fp[i + 1],
+-						left * sizeof(struct file *));
+-			}
+-			fp->count--;
+-			if (!fp->count) {
+-				kfree_skb(skb);
+-				skb = NULL;
+-			} else {
+-				__skb_queue_tail(&list, skb);
+-			}
+-			fput(file);
+-			file = NULL;
+-			break;
+-		}
+-
+-		if (!file)
+-			break;
+-
+-		__skb_queue_tail(&list, skb);
+-
+-		skb = skb_dequeue(head);
+-	}
+-
+-	if (skb_peek(&list)) {
+-		spin_lock_irq(&head->lock);
+-		while ((skb = __skb_dequeue(&list)) != NULL)
+-			__skb_queue_tail(head, skb);
+-		spin_unlock_irq(&head->lock);
+-	}
+-#else
+ 	fput(file);
+-#endif
+ }
+ 
+ static void __io_rsrc_put_work(struct io_rsrc_node *ref_node)
+@@ -8433,12 +8246,6 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ 		io_fixed_file_set(io_fixed_file_slot(&ctx->file_table, i), file);
+ 	}
+ 
+-	ret = io_sqe_files_scm(ctx);
+-	if (ret) {
+-		__io_sqe_files_unregister(ctx);
+-		return ret;
+-	}
+-
+ 	io_rsrc_node_switch(ctx, NULL);
+ 	return ret;
+ out_fput:
+@@ -9395,12 +9202,6 @@ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ 	WARN_ON_ONCE(!list_empty(&ctx->rsrc_ref_list));
+ 	WARN_ON_ONCE(!llist_empty(&ctx->rsrc_put_llist));
+ 
+-#if defined(CONFIG_UNIX)
+-	if (ctx->ring_sock) {
+-		ctx->ring_sock->file = NULL; /* so that iput() is called */
+-		sock_release(ctx->ring_sock);
+-	}
+-#endif
+ 	WARN_ON_ONCE(!list_empty(&ctx->ltimeout_list));
+ 
+ 	if (ctx->mm_account) {
+@@ -10275,6 +10076,11 @@ static const struct file_operations io_uring_fops = {
+ #endif
+ };
+ 
++bool io_is_uring_fops(struct file *file)
++{
++	return file->f_op == &io_uring_fops;
++}
++
+ static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
+ 				  struct io_uring_params *p)
+ {
+@@ -10337,32 +10143,12 @@ static int io_uring_install_fd(struct io_ring_ctx *ctx, struct file *file)
+ /*
+  * Allocate an anonymous fd, this is what constitutes the application
+  * visible backing of an io_uring instance. The application mmaps this
+- * fd to gain access to the SQ/CQ ring details. If UNIX sockets are enabled,
+- * we have to tie this fd to a socket for file garbage collection purposes.
++ * fd to gain access to the SQ/CQ ring details.
+  */
+ static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
+ {
+-	struct file *file;
+-#if defined(CONFIG_UNIX)
+-	int ret;
+-
+-	ret = sock_create_kern(&init_net, PF_UNIX, SOCK_RAW, IPPROTO_IP,
+-				&ctx->ring_sock);
+-	if (ret)
+-		return ERR_PTR(ret);
+-#endif
+-
+-	file = anon_inode_getfile("[io_uring]", &io_uring_fops, ctx,
+-					O_RDWR | O_CLOEXEC);
+-#if defined(CONFIG_UNIX)
+-	if (IS_ERR(file)) {
+-		sock_release(ctx->ring_sock);
+-		ctx->ring_sock = NULL;
+-	} else {
+-		ctx->ring_sock->file = file;
+-	}
+-#endif
+-	return file;
++	return anon_inode_getfile("[io_uring]", &io_uring_fops, ctx,
++				  O_RDWR | O_CLOEXEC);
+ }
+ 
+ static int io_uring_create(unsigned entries, struct io_uring_params *p,
+diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
+index 2dcc04b2f330e..9a4378df45998 100644
+--- a/kernel/bpf/cpumap.c
++++ b/kernel/bpf/cpumap.c
+@@ -299,6 +299,7 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
+ static int cpu_map_kthread_run(void *data)
+ {
+ 	struct bpf_cpu_map_entry *rcpu = data;
++	unsigned long last_qs = jiffies;
+ 
+ 	set_current_state(TASK_INTERRUPTIBLE);
+ 
+@@ -322,10 +323,12 @@ static int cpu_map_kthread_run(void *data)
+ 			if (__ptr_ring_empty(rcpu->queue)) {
+ 				schedule();
+ 				sched = 1;
++				last_qs = jiffies;
+ 			} else {
+ 				__set_current_state(TASK_RUNNING);
+ 			}
+ 		} else {
++			rcu_softirq_qs_periodic(last_qs);
+ 			sched = cond_resched();
+ 		}
+ 
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index 01149821ded91..07b5edb2c70f5 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -109,8 +109,6 @@ static inline struct hlist_head *dev_map_index_hash(struct bpf_dtab *dtab,
+ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
+ {
+ 	u32 valsize = attr->value_size;
+-	u64 cost = 0;
+-	int err;
+ 
+ 	/* check sanity of attributes. 2 value sizes supported:
+ 	 * 4 bytes: ifindex
+@@ -131,25 +129,18 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
+ 	bpf_map_init_from_attr(&dtab->map, attr);
+ 
+ 	if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
+-		dtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);
+-
+-		if (!dtab->n_buckets) /* Overflow check */
++		/* hash table size must be power of 2; roundup_pow_of_two() can
++		 * overflow into UB on 32-bit arches, so check that first
++		 */
++		if (dtab->map.max_entries > 1UL << 31)
+ 			return -EINVAL;
+-		cost += (u64) sizeof(struct hlist_head) * dtab->n_buckets;
+-	} else {
+-		cost += (u64) dtab->map.max_entries * sizeof(struct bpf_dtab_netdev *);
+-	}
+ 
+-	/* if map size is larger than memlock limit, reject it */
+-	err = bpf_map_charge_init(&dtab->map.memory, cost);
+-	if (err)
+-		return -EINVAL;
++		dtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);
+ 
+-	if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
+ 		dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets,
+ 							   dtab->map.numa_node);
+ 		if (!dtab->dev_index_head)
+-			goto free_charge;
++			return -ENOMEM;
+ 
+ 		spin_lock_init(&dtab->index_lock);
+ 	} else {
+@@ -157,14 +148,10 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
+ 						      sizeof(struct bpf_dtab_netdev *),
+ 						      dtab->map.numa_node);
+ 		if (!dtab->netdev_map)
+-			goto free_charge;
++			return -ENOMEM;
+ 	}
+ 
+ 	return 0;
+-
+-free_charge:
+-	bpf_map_charge_finish(&dtab->map.memory);
+-	return -ENOMEM;
+ }
+ 
+ static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index ec84973142725..72bc5f5752543 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -443,7 +443,13 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
+ 							  num_possible_cpus());
+ 	}
+ 
+-	/* hash table size must be power of 2 */
++	/* hash table size must be power of 2; roundup_pow_of_two() can overflow
++	 * into UB on 32-bit arches, so check that first
++	 */
++	err = -E2BIG;
++	if (htab->map.max_entries > 1UL << 31)
++		goto free_htab;
++
+ 	htab->n_buckets = roundup_pow_of_two(htab->map.max_entries);
+ 
+ 	htab->elem_size = sizeof(struct htab_elem) +
+@@ -453,10 +459,8 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
+ 	else
+ 		htab->elem_size += round_up(htab->map.value_size, 8);
+ 
+-	err = -E2BIG;
+-	/* prevent zero size kmalloc and check for u32 overflow */
+-	if (htab->n_buckets == 0 ||
+-	    htab->n_buckets > U32_MAX / sizeof(struct bucket))
++	/* check for u32 overflow */
++	if (htab->n_buckets > U32_MAX / sizeof(struct bucket))
+ 		goto free_htab;
+ 
+ 	cost = (u64) htab->n_buckets * sizeof(struct bucket) +
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index 0efe7c7bfe5e9..084ac7e429199 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -278,13 +278,18 @@ static inline void __bpf_spin_unlock(struct bpf_spin_lock *lock)
+ 
+ static DEFINE_PER_CPU(unsigned long, irqsave_flags);
+ 
+-notrace BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock)
++static inline void __bpf_spin_lock_irqsave(struct bpf_spin_lock *lock)
+ {
+ 	unsigned long flags;
+ 
+ 	local_irq_save(flags);
+ 	__bpf_spin_lock(lock);
+ 	__this_cpu_write(irqsave_flags, flags);
++}
++
++NOTRACE_BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock)
++{
++	__bpf_spin_lock_irqsave(lock);
+ 	return 0;
+ }
+ 
+@@ -295,13 +300,18 @@ const struct bpf_func_proto bpf_spin_lock_proto = {
+ 	.arg1_type	= ARG_PTR_TO_SPIN_LOCK,
+ };
+ 
+-notrace BPF_CALL_1(bpf_spin_unlock, struct bpf_spin_lock *, lock)
++static inline void __bpf_spin_unlock_irqrestore(struct bpf_spin_lock *lock)
+ {
+ 	unsigned long flags;
+ 
+ 	flags = __this_cpu_read(irqsave_flags);
+ 	__bpf_spin_unlock(lock);
+ 	local_irq_restore(flags);
++}
++
++NOTRACE_BPF_CALL_1(bpf_spin_unlock, struct bpf_spin_lock *, lock)
++{
++	__bpf_spin_unlock_irqrestore(lock);
+ 	return 0;
+ }
+ 
+@@ -322,9 +332,9 @@ void copy_map_value_locked(struct bpf_map *map, void *dst, void *src,
+ 	else
+ 		lock = dst + map->spin_lock_off;
+ 	preempt_disable();
+-	____bpf_spin_lock(lock);
++	__bpf_spin_lock_irqsave(lock);
+ 	copy_map_value(map, dst, src);
+-	____bpf_spin_unlock(lock);
++	__bpf_spin_unlock_irqrestore(lock);
+ 	preempt_enable();
+ }
+ 
+diff --git a/kernel/bpf/map_in_map.c b/kernel/bpf/map_in_map.c
+index 0cf4cb6858105..caa1a17cbae15 100644
+--- a/kernel/bpf/map_in_map.c
++++ b/kernel/bpf/map_in_map.c
+@@ -102,10 +102,15 @@ void *bpf_map_fd_get_ptr(struct bpf_map *map,
+ 
+ void bpf_map_fd_put_ptr(struct bpf_map *map, void *ptr, bool need_defer)
+ {
+-	/* ptr->ops->map_free() has to go through one
+-	 * rcu grace period by itself.
++	struct bpf_map *inner_map = ptr;
++
++	/* The inner map may still be used by both non-sleepable and sleepable
++	 * bpf program, so free it after one RCU grace period and one tasks
++	 * trace RCU grace period.
+ 	 */
+-	bpf_map_put(ptr);
++	if (need_defer)
++		WRITE_ONCE(inner_map->free_after_mult_rcu_gp, true);
++	bpf_map_put(inner_map);
+ }
+ 
+ u32 bpf_map_fd_sys_lookup_elem(void *ptr)
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index b8afea2ceeeb1..3ec76cb5f240d 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -115,11 +115,14 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
+ 	} else if (value_size / 8 > sysctl_perf_event_max_stack)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	/* hash table size must be power of 2 */
+-	n_buckets = roundup_pow_of_two(attr->max_entries);
+-	if (!n_buckets)
++	/* hash table size must be power of 2; roundup_pow_of_two() can overflow
++	 * into UB on 32-bit arches, so check that first
++	 */
++	if (attr->max_entries > 1UL << 31)
+ 		return ERR_PTR(-E2BIG);
+ 
++	n_buckets = roundup_pow_of_two(attr->max_entries);
++
+ 	cost = n_buckets * sizeof(struct stack_map_bucket *) + sizeof(*smap);
+ 	err = bpf_map_charge_init(&mem, cost + attr->max_entries *
+ 			   (sizeof(struct stack_map_bucket) + (u64)value_size));
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 16affa09db5c9..e1bee8cd34044 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -493,6 +493,25 @@ static void bpf_map_put_uref(struct bpf_map *map)
+ 	}
+ }
+ 
++static void bpf_map_free_in_work(struct bpf_map *map)
++{
++	INIT_WORK(&map->work, bpf_map_free_deferred);
++	schedule_work(&map->work);
++}
++
++static void bpf_map_free_rcu_gp(struct rcu_head *rcu)
++{
++	bpf_map_free_in_work(container_of(rcu, struct bpf_map, rcu));
++}
++
++static void bpf_map_free_mult_rcu_gp(struct rcu_head *rcu)
++{
++	if (rcu_trace_implies_rcu_gp())
++		bpf_map_free_rcu_gp(rcu);
++	else
++		call_rcu(rcu, bpf_map_free_rcu_gp);
++}
++
+ /* decrement map refcnt and schedule it for freeing via workqueue
+  * (unrelying map implementation ops->map_free() might sleep)
+  */
+@@ -502,8 +521,11 @@ static void __bpf_map_put(struct bpf_map *map, bool do_idr_lock)
+ 		/* bpf_map_free_id() must be called first */
+ 		bpf_map_free_id(map, do_idr_lock);
+ 		btf_put(map->btf);
+-		INIT_WORK(&map->work, bpf_map_free_deferred);
+-		schedule_work(&map->work);
++
++		if (READ_ONCE(map->free_after_mult_rcu_gp))
++			call_rcu_tasks_trace(&map->rcu, bpf_map_free_mult_rcu_gp);
++		else
++			bpf_map_free_in_work(map);
+ 	}
+ }
+ 
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index c5624ab0580c5..105fdc2bb004c 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -1015,6 +1015,8 @@ static void rcu_tasks_trace_postscan(struct list_head *hop)
+ 
+ 	// Wait for late-stage exiting tasks to finish exiting.
+ 	// These might have passed the call to exit_tasks_rcu_finish().
++
++	// If you remove the following line, update rcu_trace_implies_rcu_gp()!!!
+ 	synchronize_rcu();
+ 	// Any tasks that exit after this point will set ->trc_reader_checked.
+ }
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index d9b48f7a35e0d..629a07e6a0bfc 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -1167,13 +1167,15 @@ static int adjust_historical_crosststamp(struct system_time_snapshot *history,
+ }
+ 
+ /*
+- * cycle_between - true if test occurs chronologically between before and after
++ * timestamp_in_interval - true if ts is chronologically in [start, end]
++ *
++ * True if ts occurs chronologically at or after start, and before or at end.
+  */
+-static bool cycle_between(u64 before, u64 test, u64 after)
++static bool timestamp_in_interval(u64 start, u64 end, u64 ts)
+ {
+-	if (test > before && test < after)
++	if (ts >= start && ts <= end)
+ 		return true;
+-	if (test < before && before > after)
++	if (start > end && (ts >= start || ts <= end))
+ 		return true;
+ 	return false;
+ }
+@@ -1233,7 +1235,7 @@ int get_device_system_crosststamp(int (*get_time_fn)
+ 		 */
+ 		now = tk_clock_read(&tk->tkr_mono);
+ 		interval_start = tk->tkr_mono.cycle_last;
+-		if (!cycle_between(interval_start, cycles, now)) {
++		if (!timestamp_in_interval(interval_start, now, cycles)) {
+ 			clock_was_set_seq = tk->clock_was_set_seq;
+ 			cs_was_changed_seq = tk->cs_was_changed_seq;
+ 			cycles = interval_start;
+@@ -1246,10 +1248,8 @@ int get_device_system_crosststamp(int (*get_time_fn)
+ 				      tk_core.timekeeper.offs_real);
+ 		base_raw = tk->tkr_raw.base;
+ 
+-		nsec_real = timekeeping_cycles_to_ns(&tk->tkr_mono,
+-						     system_counterval.cycles);
+-		nsec_raw = timekeeping_cycles_to_ns(&tk->tkr_raw,
+-						    system_counterval.cycles);
++		nsec_real = timekeeping_cycles_to_ns(&tk->tkr_mono, cycles);
++		nsec_raw = timekeeping_cycles_to_ns(&tk->tkr_raw, cycles);
+ 	} while (read_seqcount_retry(&tk_core.seq, seq));
+ 
+ 	xtstamp->sys_realtime = ktime_add_ns(base_real, nsec_real);
+@@ -1264,13 +1264,13 @@ int get_device_system_crosststamp(int (*get_time_fn)
+ 		bool discontinuity;
+ 
+ 		/*
+-		 * Check that the counter value occurs after the provided
++		 * Check that the counter value is not before the provided
+ 		 * history reference and that the history doesn't cross a
+ 		 * clocksource change
+ 		 */
+ 		if (!history_begin ||
+-		    !cycle_between(history_begin->cycles,
+-				   system_counterval.cycles, cycles) ||
++		    !timestamp_in_interval(history_begin->cycles,
++					   cycles, system_counterval.cycles) ||
+ 		    history_begin->cs_was_changed_seq != cs_was_changed_seq)
+ 			return -EINVAL;
+ 		partial_history_cycles = cycles - system_counterval.cycles;
+diff --git a/lib/test_blackhole_dev.c b/lib/test_blackhole_dev.c
+index 4c40580a99a36..f247089d63c08 100644
+--- a/lib/test_blackhole_dev.c
++++ b/lib/test_blackhole_dev.c
+@@ -29,7 +29,6 @@ static int __init test_blackholedev_init(void)
+ {
+ 	struct ipv6hdr *ip6h;
+ 	struct sk_buff *skb;
+-	struct ethhdr *ethh;
+ 	struct udphdr *uh;
+ 	int data_len;
+ 	int ret;
+@@ -61,7 +60,7 @@ static int __init test_blackholedev_init(void)
+ 	ip6h->saddr = in6addr_loopback;
+ 	ip6h->daddr = in6addr_loopback;
+ 	/* Ether */
+-	ethh = (struct ethhdr *)skb_push(skb, sizeof(struct ethhdr));
++	skb_push(skb, sizeof(struct ethhdr));
+ 	skb_set_mac_header(skb, 0);
+ 
+ 	skb->protocol = htons(ETH_P_IPV6);
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 5f1fbf86e0ceb..b9cf5bc9364c1 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -2175,7 +2175,7 @@ int hci_get_dev_info(void __user *arg)
+ 	else
+ 		flags = hdev->flags;
+ 
+-	strcpy(di.name, hdev->name);
++	strscpy(di.name, hdev->name, sizeof(di.name));
+ 	di.bdaddr   = hdev->bdaddr;
+ 	di.type     = (hdev->bus & 0x0f) | ((hdev->dev_type & 0x03) << 4);
+ 	di.flags    = flags;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 47f37080c0c55..a0d9bc99f4e14 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -2979,8 +2979,6 @@ static void hci_remote_name_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 
+ 	BT_DBG("%s", hdev->name);
+ 
+-	hci_conn_check_pending(hdev);
+-
+ 	hci_dev_lock(hdev);
+ 
+ 	conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr);
+diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c
+index 8d6fce9005bdd..4f54c7df3a94f 100644
+--- a/net/bluetooth/rfcomm/core.c
++++ b/net/bluetooth/rfcomm/core.c
+@@ -1937,7 +1937,7 @@ static struct rfcomm_session *rfcomm_process_rx(struct rfcomm_session *s)
+ 	/* Get data directly from socket receive queue without copying it. */
+ 	while ((skb = skb_dequeue(&sk->sk_receive_queue))) {
+ 		skb_orphan(skb);
+-		if (!skb_linearize(skb)) {
++		if (!skb_linearize(skb) && sk->sk_state != BT_CLOSED) {
+ 			s = rfcomm_recv_frame(s, skb);
+ 			if (!s)
+ 				break;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 0619d2253aa24..0e2c433bebcd4 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2324,7 +2324,7 @@ void dev_queue_xmit_nit(struct sk_buff *skb, struct net_device *dev)
+ 	rcu_read_lock();
+ again:
+ 	list_for_each_entry_rcu(ptype, ptype_list, list) {
+-		if (ptype->ignore_outgoing)
++		if (READ_ONCE(ptype->ignore_outgoing))
+ 			continue;
+ 
+ 		/* Never send packets back to the socket
+diff --git a/net/core/scm.c b/net/core/scm.c
+index 3c7f160720d34..d09849cb60f08 100644
+--- a/net/core/scm.c
++++ b/net/core/scm.c
+@@ -105,7 +105,7 @@ static int scm_fp_copy(struct cmsghdr *cmsg, struct scm_fp_list **fplp)
+ 		if (fd < 0 || !(file = fget_raw(fd)))
+ 			return -EBADF;
+ 		/* don't allow io_uring files */
+-		if (io_uring_get_socket(file)) {
++		if (io_is_uring_fops(file)) {
+ 			fput(file);
+ 			return -EINVAL;
+ 		}
+diff --git a/net/core/sock_diag.c b/net/core/sock_diag.c
+index c9c45b935f990..bce65b519ee80 100644
+--- a/net/core/sock_diag.c
++++ b/net/core/sock_diag.c
+@@ -189,7 +189,7 @@ int sock_diag_register(const struct sock_diag_handler *hndl)
+ 	if (sock_diag_handlers[hndl->family])
+ 		err = -EBUSY;
+ 	else
+-		sock_diag_handlers[hndl->family] = hndl;
++		WRITE_ONCE(sock_diag_handlers[hndl->family], hndl);
+ 	mutex_unlock(&sock_diag_table_mutex);
+ 
+ 	return err;
+@@ -205,7 +205,7 @@ void sock_diag_unregister(const struct sock_diag_handler *hnld)
+ 
+ 	mutex_lock(&sock_diag_table_mutex);
+ 	BUG_ON(sock_diag_handlers[family] != hnld);
+-	sock_diag_handlers[family] = NULL;
++	WRITE_ONCE(sock_diag_handlers[family], NULL);
+ 	mutex_unlock(&sock_diag_table_mutex);
+ }
+ EXPORT_SYMBOL_GPL(sock_diag_unregister);
+@@ -223,7 +223,7 @@ static int __sock_diag_cmd(struct sk_buff *skb, struct nlmsghdr *nlh)
+ 		return -EINVAL;
+ 	req->sdiag_family = array_index_nospec(req->sdiag_family, AF_MAX);
+ 
+-	if (sock_diag_handlers[req->sdiag_family] == NULL)
++	if (READ_ONCE(sock_diag_handlers[req->sdiag_family]) == NULL)
+ 		sock_load_diag_module(req->sdiag_family, 0);
+ 
+ 	mutex_lock(&sock_diag_table_mutex);
+@@ -282,12 +282,12 @@ static int sock_diag_bind(struct net *net, int group)
+ 	switch (group) {
+ 	case SKNLGRP_INET_TCP_DESTROY:
+ 	case SKNLGRP_INET_UDP_DESTROY:
+-		if (!sock_diag_handlers[AF_INET])
++		if (!READ_ONCE(sock_diag_handlers[AF_INET]))
+ 			sock_load_diag_module(AF_INET, 0);
+ 		break;
+ 	case SKNLGRP_INET6_TCP_DESTROY:
+ 	case SKNLGRP_INET6_UDP_DESTROY:
+-		if (!sock_diag_handlers[AF_INET6])
++		if (!READ_ONCE(sock_diag_handlers[AF_INET6]))
+ 			sock_load_diag_module(AF_INET6, 0);
+ 		break;
+ 	}
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index 87fc86aade5c9..fc9fb3e5ae3e2 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -237,6 +237,10 @@ struct hsr_node *hsr_get_node(struct hsr_port *port, struct list_head *node_db,
+ 	 */
+ 	if (ethhdr->h_proto == htons(ETH_P_PRP) ||
+ 	    ethhdr->h_proto == htons(ETH_P_HSR)) {
++		/* Check if skb contains hsr_ethhdr */
++		if (skb->mac_len < sizeof(struct hsr_ethhdr))
++			return NULL;
++
+ 		/* Use the existing sequence_nr from the tag as starting point
+ 		 * for filtering duplicate frames.
+ 		 */
+diff --git a/net/hsr/hsr_main.c b/net/hsr/hsr_main.c
+index 2fd1976e5b1c3..bea7f935f38e9 100644
+--- a/net/hsr/hsr_main.c
++++ b/net/hsr/hsr_main.c
+@@ -137,14 +137,21 @@ static struct notifier_block hsr_nb = {
+ 
+ static int __init hsr_init(void)
+ {
+-	int res;
++	int err;
+ 
+ 	BUILD_BUG_ON(sizeof(struct hsr_tag) != HSR_HLEN);
+ 
+-	register_netdevice_notifier(&hsr_nb);
+-	res = hsr_netlink_init();
++	err = register_netdevice_notifier(&hsr_nb);
++	if (err)
++		return err;
++
++	err = hsr_netlink_init();
++	if (err) {
++		unregister_netdevice_notifier(&hsr_nb);
++		return err;
++	}
+ 
+-	return res;
++	return 0;
+ }
+ 
+ static void __exit hsr_exit(void)
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index cb55fede03c04..f0a313747b950 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -2493,8 +2493,8 @@ int ip_mc_msfilter(struct sock *sk, struct ip_msfilter *msf, int ifindex)
+ 			goto done;
+ 		}
+ 		newpsl->sl_max = newpsl->sl_count = msf->imsf_numsrc;
+-		memcpy(newpsl->sl_addr, msf->imsf_slist,
+-			msf->imsf_numsrc * sizeof(msf->imsf_slist[0]));
++		memcpy(newpsl->sl_addr, msf->imsf_slist_flex,
++		       flex_array_size(msf, imsf_slist_flex, msf->imsf_numsrc));
+ 		err = ip_mc_add_src(in_dev, &msf->imsf_multiaddr,
+ 			msf->imsf_fmode, newpsl->sl_count, newpsl->sl_addr, 0);
+ 		if (err) {
+@@ -2526,11 +2526,10 @@ int ip_mc_msfilter(struct sock *sk, struct ip_msfilter *msf, int ifindex)
+ 		err = ip_mc_leave_group(sk, &imr);
+ 	return err;
+ }
+-
+ int ip_mc_msfget(struct sock *sk, struct ip_msfilter *msf,
+-	struct ip_msfilter __user *optval, int __user *optlen)
++		 sockptr_t optval, sockptr_t optlen)
+ {
+-	int err, len, count, copycount;
++	int err, len, count, copycount, msf_size;
+ 	struct ip_mreqn	imr;
+ 	__be32 addr = msf->imsf_multiaddr;
+ 	struct ip_mc_socklist *pmc;
+@@ -2571,14 +2570,17 @@ int ip_mc_msfget(struct sock *sk, struct ip_msfilter *msf,
+ 		count = psl->sl_count;
+ 	}
+ 	copycount = count < msf->imsf_numsrc ? count : msf->imsf_numsrc;
+-	len = copycount * sizeof(psl->sl_addr[0]);
++	len = flex_array_size(psl, sl_addr, copycount);
+ 	msf->imsf_numsrc = count;
+-	if (put_user(IP_MSFILTER_SIZE(copycount), optlen) ||
+-	    copy_to_user(optval, msf, IP_MSFILTER_SIZE(0))) {
++	msf_size = IP_MSFILTER_SIZE(copycount);
++	if (copy_to_sockptr(optlen, &msf_size, sizeof(int)) ||
++	    copy_to_sockptr(optval, msf, IP_MSFILTER_SIZE(0))) {
+ 		return -EFAULT;
+ 	}
+ 	if (len &&
+-	    copy_to_user(&optval->imsf_slist[0], psl->sl_addr, len))
++	    copy_to_sockptr_offset(optval,
++				   offsetof(struct ip_msfilter, imsf_slist_flex),
++				   psl->sl_addr, len))
+ 		return -EFAULT;
+ 	return 0;
+ done:
+@@ -2586,7 +2588,7 @@ int ip_mc_msfget(struct sock *sk, struct ip_msfilter *msf,
+ }
+ 
+ int ip_mc_gsfget(struct sock *sk, struct group_filter *gsf,
+-	struct sockaddr_storage __user *p)
++		 sockptr_t optval, size_t ss_offset)
+ {
+ 	int i, count, copycount;
+ 	struct sockaddr_in *psin;
+@@ -2616,15 +2618,17 @@ int ip_mc_gsfget(struct sock *sk, struct group_filter *gsf,
+ 	count = psl ? psl->sl_count : 0;
+ 	copycount = count < gsf->gf_numsrc ? count : gsf->gf_numsrc;
+ 	gsf->gf_numsrc = count;
+-	for (i = 0; i < copycount; i++, p++) {
++	for (i = 0; i < copycount; i++) {
+ 		struct sockaddr_storage ss;
+ 
+ 		psin = (struct sockaddr_in *)&ss;
+ 		memset(&ss, 0, sizeof(ss));
+ 		psin->sin_family = AF_INET;
+ 		psin->sin_addr.s_addr = psl->sl_addr[i];
+-		if (copy_to_user(p, &ss, sizeof(ss)))
++		if (copy_to_sockptr_offset(optval, ss_offset,
++					   &ss, sizeof(ss)))
+ 			return -EFAULT;
++		ss_offset += sizeof(ss);
+ 	}
+ 	return 0;
+ }
+diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c
+index fa9f1de58df46..27a5a7d66d184 100644
+--- a/net/ipv4/inet_diag.c
++++ b/net/ipv4/inet_diag.c
+@@ -57,7 +57,7 @@ static const struct inet_diag_handler *inet_diag_lock_handler(int proto)
+ 		return ERR_PTR(-ENOENT);
+ 	}
+ 
+-	if (!inet_diag_table[proto])
++	if (!READ_ONCE(inet_diag_table[proto]))
+ 		sock_load_diag_module(AF_INET, proto);
+ 
+ 	mutex_lock(&inet_diag_table_mutex);
+@@ -1413,7 +1413,7 @@ int inet_diag_register(const struct inet_diag_handler *h)
+ 	mutex_lock(&inet_diag_table_mutex);
+ 	err = -EEXIST;
+ 	if (!inet_diag_table[type]) {
+-		inet_diag_table[type] = h;
++		WRITE_ONCE(inet_diag_table[type], h);
+ 		err = 0;
+ 	}
+ 	mutex_unlock(&inet_diag_table_mutex);
+@@ -1430,7 +1430,7 @@ void inet_diag_unregister(const struct inet_diag_handler *h)
+ 		return;
+ 
+ 	mutex_lock(&inet_diag_table_mutex);
+-	inet_diag_table[type] = NULL;
++	WRITE_ONCE(inet_diag_table[type], NULL);
+ 	mutex_unlock(&inet_diag_table_mutex);
+ }
+ EXPORT_SYMBOL_GPL(inet_diag_unregister);
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index 1b35afd326b8d..b300d0988d525 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -670,12 +670,11 @@ static int set_mcast_msfilter(struct sock *sk, int ifindex,
+ 			      struct sockaddr_storage *group,
+ 			      struct sockaddr_storage *list)
+ {
+-	int msize = IP_MSFILTER_SIZE(numsrc);
+ 	struct ip_msfilter *msf;
+ 	struct sockaddr_in *psin;
+ 	int err, i;
+ 
+-	msf = kmalloc(msize, GFP_KERNEL);
++	msf = kmalloc(IP_MSFILTER_SIZE(numsrc), GFP_KERNEL);
+ 	if (!msf)
+ 		return -ENOBUFS;
+ 
+@@ -691,7 +690,7 @@ static int set_mcast_msfilter(struct sock *sk, int ifindex,
+ 
+ 		if (psin->sin_family != AF_INET)
+ 			goto Eaddrnotavail;
+-		msf->imsf_slist[i] = psin->sin_addr.s_addr;
++		msf->imsf_slist_flex[i] = psin->sin_addr.s_addr;
+ 	}
+ 	err = ip_mc_msfilter(sk, msf, ifindex);
+ 	kfree(msf);
+@@ -798,7 +797,8 @@ static int ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval, int optlen)
+ 		goto out_free_gsf;
+ 
+ 	err = set_mcast_msfilter(sk, gsf->gf_interface, gsf->gf_numsrc,
+-				 gsf->gf_fmode, &gsf->gf_group, gsf->gf_slist);
++				 gsf->gf_fmode, &gsf->gf_group,
++				 gsf->gf_slist_flex);
+ out_free_gsf:
+ 	kfree(gsf);
+ 	return err;
+@@ -807,7 +807,7 @@ static int ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval, int optlen)
+ static int compat_ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ 		int optlen)
+ {
+-	const int size0 = offsetof(struct compat_group_filter, gf_slist);
++	const int size0 = offsetof(struct compat_group_filter, gf_slist_flex);
+ 	struct compat_group_filter *gf32;
+ 	unsigned int n;
+ 	void *p;
+@@ -821,7 +821,7 @@ static int compat_ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ 	p = kmalloc(optlen + 4, GFP_KERNEL);
+ 	if (!p)
+ 		return -ENOMEM;
+-	gf32 = p + 4; /* we want ->gf_group and ->gf_slist aligned */
++	gf32 = p + 4; /* we want ->gf_group and ->gf_slist_flex aligned */
+ 
+ 	err = -EFAULT;
+ 	if (copy_from_sockptr(gf32, optval, optlen))
+@@ -834,7 +834,7 @@ static int compat_ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ 		goto out_free_gsf;
+ 
+ 	err = -EINVAL;
+-	if (offsetof(struct compat_group_filter, gf_slist[n]) > optlen)
++	if (offsetof(struct compat_group_filter, gf_slist_flex[n]) > optlen)
+ 		goto out_free_gsf;
+ 
+ 	/* numsrc >= (4G-140)/128 overflow in 32 bits */
+@@ -842,7 +842,7 @@ static int compat_ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ 	if (n > READ_ONCE(sock_net(sk)->ipv4.sysctl_igmp_max_msf))
+ 		goto out_free_gsf;
+ 	err = set_mcast_msfilter(sk, gf32->gf_interface, n, gf32->gf_fmode,
+-				 &gf32->gf_group, gf32->gf_slist);
++				 &gf32->gf_group, gf32->gf_slist_flex);
+ out_free_gsf:
+ 	kfree(p);
+ 	return err;
+@@ -1460,37 +1460,37 @@ static bool getsockopt_needs_rtnl(int optname)
+ 	return false;
+ }
+ 
+-static int ip_get_mcast_msfilter(struct sock *sk, void __user *optval,
+-		int __user *optlen, int len)
++static int ip_get_mcast_msfilter(struct sock *sk, sockptr_t optval,
++				 sockptr_t optlen, int len)
+ {
+-	const int size0 = offsetof(struct group_filter, gf_slist);
+-	struct group_filter __user *p = optval;
++	const int size0 = offsetof(struct group_filter, gf_slist_flex);
+ 	struct group_filter gsf;
+-	int num;
++	int num, gsf_size;
+ 	int err;
+ 
+ 	if (len < size0)
+ 		return -EINVAL;
+-	if (copy_from_user(&gsf, p, size0))
++	if (copy_from_sockptr(&gsf, optval, size0))
+ 		return -EFAULT;
+ 
+ 	num = gsf.gf_numsrc;
+-	err = ip_mc_gsfget(sk, &gsf, p->gf_slist);
++	err = ip_mc_gsfget(sk, &gsf, optval,
++			   offsetof(struct group_filter, gf_slist_flex));
+ 	if (err)
+ 		return err;
+ 	if (gsf.gf_numsrc < num)
+ 		num = gsf.gf_numsrc;
+-	if (put_user(GROUP_FILTER_SIZE(num), optlen) ||
+-	    copy_to_user(p, &gsf, size0))
++	gsf_size = GROUP_FILTER_SIZE(num);
++	if (copy_to_sockptr(optlen, &gsf_size, sizeof(int)) ||
++	    copy_to_sockptr(optval, &gsf, size0))
+ 		return -EFAULT;
+ 	return 0;
+ }
+ 
+-static int compat_ip_get_mcast_msfilter(struct sock *sk, void __user *optval,
+-		int __user *optlen, int len)
++static int compat_ip_get_mcast_msfilter(struct sock *sk, sockptr_t optval,
++					sockptr_t optlen, int len)
+ {
+-	const int size0 = offsetof(struct compat_group_filter, gf_slist);
+-	struct compat_group_filter __user *p = optval;
++	const int size0 = offsetof(struct compat_group_filter, gf_slist_flex);
+ 	struct compat_group_filter gf32;
+ 	struct group_filter gf;
+ 	int num;
+@@ -1498,7 +1498,7 @@ static int compat_ip_get_mcast_msfilter(struct sock *sk, void __user *optval,
+ 
+ 	if (len < size0)
+ 		return -EINVAL;
+-	if (copy_from_user(&gf32, p, size0))
++	if (copy_from_sockptr(&gf32, optval, size0))
+ 		return -EFAULT;
+ 
+ 	gf.gf_interface = gf32.gf_interface;
+@@ -1506,21 +1506,24 @@ static int compat_ip_get_mcast_msfilter(struct sock *sk, void __user *optval,
+ 	num = gf.gf_numsrc = gf32.gf_numsrc;
+ 	gf.gf_group = gf32.gf_group;
+ 
+-	err = ip_mc_gsfget(sk, &gf, p->gf_slist);
++	err = ip_mc_gsfget(sk, &gf, optval,
++			   offsetof(struct compat_group_filter, gf_slist_flex));
+ 	if (err)
+ 		return err;
+ 	if (gf.gf_numsrc < num)
+ 		num = gf.gf_numsrc;
+ 	len = GROUP_FILTER_SIZE(num) - (sizeof(gf) - sizeof(gf32));
+-	if (put_user(len, optlen) ||
+-	    put_user(gf.gf_fmode, &p->gf_fmode) ||
+-	    put_user(gf.gf_numsrc, &p->gf_numsrc))
++	if (copy_to_sockptr(optlen, &len, sizeof(int)) ||
++	    copy_to_sockptr_offset(optval, offsetof(struct compat_group_filter, gf_fmode),
++				   &gf.gf_fmode, sizeof(gf.gf_fmode)) ||
++	    copy_to_sockptr_offset(optval, offsetof(struct compat_group_filter, gf_numsrc),
++				   &gf.gf_numsrc, sizeof(gf.gf_numsrc)))
+ 		return -EFAULT;
+ 	return 0;
+ }
+ 
+ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
+-			    char __user *optval, int __user *optlen)
++			    sockptr_t optval, sockptr_t optlen)
+ {
+ 	struct inet_sock *inet = inet_sk(sk);
+ 	bool needs_rtnl = getsockopt_needs_rtnl(optname);
+@@ -1533,7 +1536,7 @@ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
+ 	if (ip_mroute_opt(optname))
+ 		return ip_mroute_getsockopt(sk, optname, optval, optlen);
+ 
+-	if (get_user(len, optlen))
++	if (copy_from_sockptr(&len, optlen, sizeof(int)))
+ 		return -EFAULT;
+ 	if (len < 0)
+ 		return -EINVAL;
+@@ -1558,15 +1561,17 @@ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
+ 			       inet_opt->opt.optlen);
+ 		release_sock(sk);
+ 
+-		if (opt->optlen == 0)
+-			return put_user(0, optlen);
++		if (opt->optlen == 0) {
++			len = 0;
++			return copy_to_sockptr(optlen, &len, sizeof(int));
++		}
+ 
+ 		ip_options_undo(opt);
+ 
+ 		len = min_t(unsigned int, len, opt->optlen);
+-		if (put_user(len, optlen))
++		if (copy_to_sockptr(optlen, &len, sizeof(int)))
+ 			return -EFAULT;
+-		if (copy_to_user(optval, opt->__data, len))
++		if (copy_to_sockptr(optval, opt->__data, len))
+ 			return -EFAULT;
+ 		return 0;
+ 	}
+@@ -1657,9 +1662,9 @@ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
+ 		addr.s_addr = inet->mc_addr;
+ 		release_sock(sk);
+ 
+-		if (put_user(len, optlen))
++		if (copy_to_sockptr(optlen, &len, sizeof(int)))
+ 			return -EFAULT;
+-		if (copy_to_user(optval, &addr, len))
++		if (copy_to_sockptr(optval, &addr, len))
+ 			return -EFAULT;
+ 		return 0;
+ 	}
+@@ -1671,12 +1676,11 @@ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
+ 			err = -EINVAL;
+ 			goto out;
+ 		}
+-		if (copy_from_user(&msf, optval, IP_MSFILTER_SIZE(0))) {
++		if (copy_from_sockptr(&msf, optval, IP_MSFILTER_SIZE(0))) {
+ 			err = -EFAULT;
+ 			goto out;
+ 		}
+-		err = ip_mc_msfget(sk, &msf,
+-				   (struct ip_msfilter __user *)optval, optlen);
++		err = ip_mc_msfget(sk, &msf, optval, optlen);
+ 		goto out;
+ 	}
+ 	case MCAST_MSFILTER:
+@@ -1698,8 +1702,13 @@ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
+ 		if (sk->sk_type != SOCK_STREAM)
+ 			return -ENOPROTOOPT;
+ 
+-		msg.msg_control_is_user = true;
+-		msg.msg_control_user = optval;
++		if (optval.is_kernel) {
++			msg.msg_control_is_user = false;
++			msg.msg_control = optval.kernel;
++		} else {
++			msg.msg_control_is_user = true;
++			msg.msg_control_user = optval.user;
++		}
+ 		msg.msg_controllen = len;
+ 		msg.msg_flags = in_compat_syscall() ? MSG_CMSG_COMPAT : 0;
+ 
+@@ -1720,7 +1729,7 @@ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
+ 			put_cmsg(&msg, SOL_IP, IP_TOS, sizeof(tos), &tos);
+ 		}
+ 		len -= msg.msg_controllen;
+-		return put_user(len, optlen);
++		return copy_to_sockptr(optlen, &len, sizeof(int));
+ 	}
+ 	case IP_FREEBIND:
+ 		val = inet->freebind;
+@@ -1743,15 +1752,15 @@ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
+ 	if (len < sizeof(int) && len > 0 && val >= 0 && val <= 255) {
+ 		unsigned char ucval = (unsigned char)val;
+ 		len = 1;
+-		if (put_user(len, optlen))
++		if (copy_to_sockptr(optlen, &len, sizeof(int)))
+ 			return -EFAULT;
+-		if (copy_to_user(optval, &ucval, 1))
++		if (copy_to_sockptr(optval, &ucval, 1))
+ 			return -EFAULT;
+ 	} else {
+ 		len = min_t(unsigned int, sizeof(int), len);
+-		if (put_user(len, optlen))
++		if (copy_to_sockptr(optlen, &len, sizeof(int)))
+ 			return -EFAULT;
+-		if (copy_to_user(optval, &val, len))
++		if (copy_to_sockptr(optval, &val, len))
+ 			return -EFAULT;
+ 	}
+ 	return 0;
+@@ -1768,7 +1777,8 @@ int ip_getsockopt(struct sock *sk, int level,
+ {
+ 	int err;
+ 
+-	err = do_ip_getsockopt(sk, level, optname, optval, optlen);
++	err = do_ip_getsockopt(sk, level, optname,
++			       USER_SOCKPTR(optval), USER_SOCKPTR(optlen));
+ 
+ #if IS_ENABLED(CONFIG_BPFILTER_UMH)
+ 	if (optname >= BPFILTER_IPT_SO_GET_INFO &&
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 50f8231e9daec..0953d805cbbee 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -364,7 +364,7 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
+ 		  bool log_ecn_error)
+ {
+ 	const struct iphdr *iph = ip_hdr(skb);
+-	int err;
++	int nh, err;
+ 
+ #ifdef CONFIG_NET_IPGRE_BROADCAST
+ 	if (ipv4_is_multicast(iph->daddr)) {
+@@ -390,8 +390,21 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
+ 		tunnel->i_seqno = ntohl(tpi->seq) + 1;
+ 	}
+ 
++	/* Save offset of outer header relative to skb->head,
++	 * because we are going to reset the network header to the inner header
++	 * and might change skb->head.
++	 */
++	nh = skb_network_header(skb) - skb->head;
++
+ 	skb_set_network_header(skb, (tunnel->dev->type == ARPHRD_ETHER) ? ETH_HLEN : 0);
+ 
++	if (!pskb_inet_may_pull(skb)) {
++		DEV_STATS_INC(tunnel->dev, rx_length_errors);
++		DEV_STATS_INC(tunnel->dev, rx_errors);
++		goto drop;
++	}
++	iph = (struct iphdr *)(skb->head + nh);
++
+ 	err = IP_ECN_decapsulate(iph, skb);
+ 	if (unlikely(err)) {
+ 		if (log_ecn_error)
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index be1976536f1c0..db184cb826b95 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -1540,7 +1540,8 @@ int ip_mroute_setsockopt(struct sock *sk, int optname, sockptr_t optval,
+ }
+ 
+ /* Getsock opt support for the multicast routing system. */
+-int ip_mroute_getsockopt(struct sock *sk, int optname, char __user *optval, int __user *optlen)
++int ip_mroute_getsockopt(struct sock *sk, int optname, sockptr_t optval,
++			 sockptr_t optlen)
+ {
+ 	int olr;
+ 	int val;
+@@ -1571,14 +1572,16 @@ int ip_mroute_getsockopt(struct sock *sk, int optname, char __user *optval, int
+ 		return -ENOPROTOOPT;
+ 	}
+ 
+-	if (get_user(olr, optlen))
++	if (copy_from_sockptr(&olr, optlen, sizeof(int)))
+ 		return -EFAULT;
+-	olr = min_t(unsigned int, olr, sizeof(int));
+ 	if (olr < 0)
+ 		return -EINVAL;
+-	if (put_user(olr, optlen))
++
++	olr = min_t(unsigned int, olr, sizeof(int));
++
++	if (copy_to_sockptr(optlen, &olr, sizeof(int)))
+ 		return -EFAULT;
+-	if (copy_to_user(optval, &val, olr))
++	if (copy_to_sockptr(optval, &val, olr))
+ 		return -EFAULT;
+ 	return 0;
+ }
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index a5c15e2d193f6..2e874ec859715 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3742,11 +3742,11 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
+ 	if (get_user(len, optlen))
+ 		return -EFAULT;
+ 
+-	len = min_t(unsigned int, len, sizeof(int));
+-
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
++	len = min_t(unsigned int, len, sizeof(int));
++
+ 	switch (optname) {
+ 	case TCP_MAXSEG:
+ 		val = tp->mss_cache;
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 476f79f1563a8..b2541c7d7c87f 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2748,11 +2748,11 @@ int udp_lib_getsockopt(struct sock *sk, int level, int optname,
+ 	if (get_user(len, optlen))
+ 		return -EFAULT;
+ 
+-	len = min_t(unsigned int, len, sizeof(int));
+-
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
++	len = min_t(unsigned int, len, sizeof(int));
++
+ 	switch (optname) {
+ 	case UDP_CORK:
+ 		val = READ_ONCE(up->corkflag);
+diff --git a/net/ipv6/fib6_rules.c b/net/ipv6/fib6_rules.c
+index 3e4c87b29b115..55cd23b7a9357 100644
+--- a/net/ipv6/fib6_rules.c
++++ b/net/ipv6/fib6_rules.c
+@@ -446,6 +446,11 @@ static size_t fib6_rule_nlmsg_payload(struct fib_rule *rule)
+ 	       + nla_total_size(16); /* src */
+ }
+ 
++static void fib6_rule_flush_cache(struct fib_rules_ops *ops)
++{
++	rt_genid_bump_ipv6(ops->fro_net);
++}
++
+ static const struct fib_rules_ops __net_initconst fib6_rules_ops_template = {
+ 	.family			= AF_INET6,
+ 	.rule_size		= sizeof(struct fib6_rule),
+@@ -458,6 +463,7 @@ static const struct fib_rules_ops __net_initconst fib6_rules_ops_template = {
+ 	.compare		= fib6_rule_compare,
+ 	.fill			= fib6_rule_fill,
+ 	.nlmsg_payload		= fib6_rule_nlmsg_payload,
++	.flush_cache		= fib6_rule_flush_cache,
+ 	.nlgroup		= RTNLGRP_IPV6_RULE,
+ 	.policy			= fib6_rule_policy,
+ 	.owner			= THIS_MODULE,
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 7b4b457a8b87a..0ac527cd5d56d 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -225,7 +225,7 @@ static int ipv6_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ 	if (GROUP_FILTER_SIZE(gsf->gf_numsrc) > optlen)
+ 		goto out_free_gsf;
+ 
+-	ret = ip6_mc_msfilter(sk, gsf, gsf->gf_slist);
++	ret = ip6_mc_msfilter(sk, gsf, gsf->gf_slist_flex);
+ out_free_gsf:
+ 	kfree(gsf);
+ 	return ret;
+@@ -234,7 +234,7 @@ static int ipv6_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ static int compat_ipv6_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ 		int optlen)
+ {
+-	const int size0 = offsetof(struct compat_group_filter, gf_slist);
++	const int size0 = offsetof(struct compat_group_filter, gf_slist_flex);
+ 	struct compat_group_filter *gf32;
+ 	void *p;
+ 	int ret;
+@@ -249,7 +249,7 @@ static int compat_ipv6_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ 	if (!p)
+ 		return -ENOMEM;
+ 
+-	gf32 = p + 4; /* we want ->gf_group and ->gf_slist aligned */
++	gf32 = p + 4; /* we want ->gf_group and ->gf_slist_flex aligned */
+ 	ret = -EFAULT;
+ 	if (copy_from_sockptr(gf32, optval, optlen))
+ 		goto out_free_p;
+@@ -261,14 +261,14 @@ static int compat_ipv6_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+ 		goto out_free_p;
+ 
+ 	ret = -EINVAL;
+-	if (offsetof(struct compat_group_filter, gf_slist[n]) > optlen)
++	if (offsetof(struct compat_group_filter, gf_slist_flex[n]) > optlen)
+ 		goto out_free_p;
+ 
+ 	ret = ip6_mc_msfilter(sk, &(struct group_filter){
+ 			.gf_interface = gf32->gf_interface,
+ 			.gf_group = gf32->gf_group,
+ 			.gf_fmode = gf32->gf_fmode,
+-			.gf_numsrc = gf32->gf_numsrc}, gf32->gf_slist);
++			.gf_numsrc = gf32->gf_numsrc}, gf32->gf_slist_flex);
+ 
+ out_free_p:
+ 	kfree(p);
+@@ -1051,7 +1051,7 @@ static int ipv6_getsockopt_sticky(struct sock *sk, struct ipv6_txoptions *opt,
+ static int ipv6_get_msfilter(struct sock *sk, void __user *optval,
+ 		int __user *optlen, int len)
+ {
+-	const int size0 = offsetof(struct group_filter, gf_slist);
++	const int size0 = offsetof(struct group_filter, gf_slist_flex);
+ 	struct group_filter __user *p = optval;
+ 	struct group_filter gsf;
+ 	int num;
+@@ -1065,7 +1065,7 @@ static int ipv6_get_msfilter(struct sock *sk, void __user *optval,
+ 		return -EADDRNOTAVAIL;
+ 	num = gsf.gf_numsrc;
+ 	lock_sock(sk);
+-	err = ip6_mc_msfget(sk, &gsf, p->gf_slist);
++	err = ip6_mc_msfget(sk, &gsf, p->gf_slist_flex);
+ 	if (!err) {
+ 		if (num > gsf.gf_numsrc)
+ 			num = gsf.gf_numsrc;
+@@ -1080,7 +1080,7 @@ static int ipv6_get_msfilter(struct sock *sk, void __user *optval,
+ static int compat_ipv6_get_msfilter(struct sock *sk, void __user *optval,
+ 		int __user *optlen)
+ {
+-	const int size0 = offsetof(struct compat_group_filter, gf_slist);
++	const int size0 = offsetof(struct compat_group_filter, gf_slist_flex);
+ 	struct compat_group_filter __user *p = optval;
+ 	struct compat_group_filter gf32;
+ 	struct group_filter gf;
+@@ -1103,7 +1103,7 @@ static int compat_ipv6_get_msfilter(struct sock *sk, void __user *optval,
+ 		return -EADDRNOTAVAIL;
+ 
+ 	lock_sock(sk);
+-	err = ip6_mc_msfget(sk, &gf, p->gf_slist);
++	err = ip6_mc_msfget(sk, &gf, p->gf_slist_flex);
+ 	release_sock(sk);
+ 	if (err)
+ 		return err;
+diff --git a/net/iucv/iucv.c b/net/iucv/iucv.c
+index 6f84978a77265..ed0dbdbba4d94 100644
+--- a/net/iucv/iucv.c
++++ b/net/iucv/iucv.c
+@@ -156,7 +156,7 @@ static char iucv_error_pathid[16] = "INVALID PATHID";
+ static LIST_HEAD(iucv_handler_list);
+ 
+ /*
+- * iucv_path_table: an array of iucv_path structures.
++ * iucv_path_table: array of pointers to iucv_path structures.
+  */
+ static struct iucv_path **iucv_path_table;
+ static unsigned long iucv_max_pathid;
+@@ -542,7 +542,7 @@ static int iucv_enable(void)
+ 
+ 	get_online_cpus();
+ 	rc = -ENOMEM;
+-	alloc_size = iucv_max_pathid * sizeof(struct iucv_path);
++	alloc_size = iucv_max_pathid * sizeof(*iucv_path_table);
+ 	iucv_path_table = kzalloc(alloc_size, GFP_KERNEL);
+ 	if (!iucv_path_table)
+ 		goto out;
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 39b3c7fbf9f66..7420b4f19b45e 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -1275,10 +1275,11 @@ static int kcm_getsockopt(struct socket *sock, int level, int optname,
+ 	if (get_user(len, optlen))
+ 		return -EFAULT;
+ 
+-	len = min_t(unsigned int, len, sizeof(int));
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
++	len = min_t(unsigned int, len, sizeof(int));
++
+ 	switch (optname) {
+ 	case KCM_RECV_DISABLE:
+ 		val = kcm->rx_disabled;
+diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
+index 5ecc0f2009444..b1d89c850f686 100644
+--- a/net/l2tp/l2tp_ppp.c
++++ b/net/l2tp/l2tp_ppp.c
+@@ -1357,11 +1357,11 @@ static int pppol2tp_getsockopt(struct socket *sock, int level, int optname,
+ 	if (get_user(len, optlen))
+ 		return -EFAULT;
+ 
+-	len = min_t(unsigned int, len, sizeof(int));
+-
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
++	len = min_t(unsigned int, len, sizeof(int));
++
+ 	err = -ENOTCONN;
+ 	if (!sk->sk_user_data)
+ 		goto end;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 73b0a6925304c..8d4472b127e41 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1097,7 +1097,7 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
+ 	if (flags & ~NFT_TABLE_F_DORMANT)
+ 		return -EINVAL;
+ 
+-	if (flags == ctx->table->flags)
++	if (flags == (ctx->table->flags & NFT_TABLE_F_MASK))
+ 		return 0;
+ 
+ 	/* No dormant off/on/off/on games in single transaction */
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 70a59a35d1761..b9682e085fcef 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -2226,8 +2226,6 @@ static void nft_pipapo_destroy(const struct nft_ctx *ctx,
+ 	if (m) {
+ 		rcu_barrier();
+ 
+-		nft_set_pipapo_match_destroy(ctx, set, m);
+-
+ 		for_each_possible_cpu(cpu)
+ 			pipapo_free_scratch(m, cpu);
+ 		free_percpu(m->scratch);
+@@ -2239,8 +2237,7 @@ static void nft_pipapo_destroy(const struct nft_ctx *ctx,
+ 	if (priv->clone) {
+ 		m = priv->clone;
+ 
+-		if (priv->dirty)
+-			nft_set_pipapo_match_destroy(ctx, set, m);
++		nft_set_pipapo_match_destroy(ctx, set, m);
+ 
+ 		for_each_possible_cpu(cpu)
+ 			pipapo_free_scratch(priv->clone, cpu);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 6cc054dd53b6e..db5d16c5d5b11 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3951,7 +3951,7 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ 		if (val < 0 || val > 1)
+ 			return -EINVAL;
+ 
+-		po->prot_hook.ignore_outgoing = !!val;
++		WRITE_ONCE(po->prot_hook.ignore_outgoing, !!val);
+ 		return 0;
+ 	}
+ 	case PACKET_TX_HAS_OFF:
+@@ -4083,7 +4083,7 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
+ 		       0);
+ 		break;
+ 	case PACKET_IGNORE_OUTGOING:
+-		val = po->prot_hook.ignore_outgoing;
++		val = READ_ONCE(po->prot_hook.ignore_outgoing);
+ 		break;
+ 	case PACKET_ROLLOVER_STATS:
+ 		if (!po->rollover)
+diff --git a/net/rds/send.c b/net/rds/send.c
+index 65eeb82cb5de5..1923eaa91e939 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -103,13 +103,12 @@ EXPORT_SYMBOL_GPL(rds_send_path_reset);
+ 
+ static int acquire_in_xmit(struct rds_conn_path *cp)
+ {
+-	return test_and_set_bit(RDS_IN_XMIT, &cp->cp_flags) == 0;
++	return test_and_set_bit_lock(RDS_IN_XMIT, &cp->cp_flags) == 0;
+ }
+ 
+ static void release_in_xmit(struct rds_conn_path *cp)
+ {
+-	clear_bit(RDS_IN_XMIT, &cp->cp_flags);
+-	smp_mb__after_atomic();
++	clear_bit_unlock(RDS_IN_XMIT, &cp->cp_flags);
+ 	/*
+ 	 * We don't use wait_on_bit()/wake_up_bit() because our waking is in a
+ 	 * hot path and finding waiters is very rare.  We don't want to walk
+diff --git a/net/sunrpc/addr.c b/net/sunrpc/addr.c
+index d435bffc61999..97ff11973c493 100644
+--- a/net/sunrpc/addr.c
++++ b/net/sunrpc/addr.c
+@@ -284,10 +284,10 @@ char *rpc_sockaddr2uaddr(const struct sockaddr *sap, gfp_t gfp_flags)
+ 	}
+ 
+ 	if (snprintf(portbuf, sizeof(portbuf),
+-		     ".%u.%u", port >> 8, port & 0xff) > (int)sizeof(portbuf))
++		     ".%u.%u", port >> 8, port & 0xff) >= (int)sizeof(portbuf))
+ 		return NULL;
+ 
+-	if (strlcat(addrbuf, portbuf, sizeof(addrbuf)) > sizeof(addrbuf))
++	if (strlcat(addrbuf, portbuf, sizeof(addrbuf)) >= sizeof(addrbuf))
+ 		return NULL;
+ 
+ 	return kstrdup(addrbuf, gfp_flags);
+diff --git a/net/sunrpc/auth_gss/gss_rpc_xdr.c b/net/sunrpc/auth_gss/gss_rpc_xdr.c
+index 2ff7b7083ebab..e265b8d38aa14 100644
+--- a/net/sunrpc/auth_gss/gss_rpc_xdr.c
++++ b/net/sunrpc/auth_gss/gss_rpc_xdr.c
+@@ -250,8 +250,8 @@ static int gssx_dec_option_array(struct xdr_stream *xdr,
+ 
+ 	creds = kzalloc(sizeof(struct svc_cred), GFP_KERNEL);
+ 	if (!creds) {
+-		kfree(oa->data);
+-		return -ENOMEM;
++		err = -ENOMEM;
++		goto free_oa;
+ 	}
+ 
+ 	oa->data[0].option.data = CREDS_VALUE;
+@@ -265,29 +265,40 @@ static int gssx_dec_option_array(struct xdr_stream *xdr,
+ 
+ 		/* option buffer */
+ 		p = xdr_inline_decode(xdr, 4);
+-		if (unlikely(p == NULL))
+-			return -ENOSPC;
++		if (unlikely(p == NULL)) {
++			err = -ENOSPC;
++			goto free_creds;
++		}
+ 
+ 		length = be32_to_cpup(p);
+ 		p = xdr_inline_decode(xdr, length);
+-		if (unlikely(p == NULL))
+-			return -ENOSPC;
++		if (unlikely(p == NULL)) {
++			err = -ENOSPC;
++			goto free_creds;
++		}
+ 
+ 		if (length == sizeof(CREDS_VALUE) &&
+ 		    memcmp(p, CREDS_VALUE, sizeof(CREDS_VALUE)) == 0) {
+ 			/* We have creds here. parse them */
+ 			err = gssx_dec_linux_creds(xdr, creds);
+ 			if (err)
+-				return err;
++				goto free_creds;
+ 			oa->data[0].value.len = 1; /* presence */
+ 		} else {
+ 			/* consume uninteresting buffer */
+ 			err = gssx_dec_buffer(xdr, &dummy);
+ 			if (err)
+-				return err;
++				goto free_creds;
+ 		}
+ 	}
+ 	return 0;
++
++free_creds:
++	kfree(creds);
++free_oa:
++	kfree(oa->data);
++	oa->data = NULL;
++	return err;
+ }
+ 
+ static int gssx_dec_status(struct xdr_stream *xdr,
+diff --git a/net/unix/garbage.c b/net/unix/garbage.c
+index dc27635403932..9121a4d5436d5 100644
+--- a/net/unix/garbage.c
++++ b/net/unix/garbage.c
+@@ -198,7 +198,7 @@ void wait_for_unix_gc(void)
+ 	if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC &&
+ 	    !READ_ONCE(gc_in_progress))
+ 		unix_gc();
+-	wait_event(unix_gc_wait, gc_in_progress == false);
++	wait_event(unix_gc_wait, !READ_ONCE(gc_in_progress));
+ }
+ 
+ /* The external entry point: unix_gc() */
+diff --git a/net/unix/scm.c b/net/unix/scm.c
+index e8e2a00bb0f58..d1048b4c2baaf 100644
+--- a/net/unix/scm.c
++++ b/net/unix/scm.c
+@@ -34,10 +34,8 @@ struct sock *unix_get_socket(struct file *filp)
+ 		/* PF_UNIX ? */
+ 		if (s && sock->ops && sock->ops->family == PF_UNIX)
+ 			u_sock = s;
+-	} else {
+-		/* Could be an io_uring instance */
+-		u_sock = io_uring_get_socket(filp);
+ 	}
++
+ 	return u_sock;
+ }
+ EXPORT_SYMBOL(unix_get_socket);
+diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
+index 161dc194e6342..a7ecf2956cdd6 100644
+--- a/net/x25/af_x25.c
++++ b/net/x25/af_x25.c
+@@ -470,12 +470,12 @@ static int x25_getsockopt(struct socket *sock, int level, int optname,
+ 	if (get_user(len, optlen))
+ 		goto out;
+ 
+-	len = min_t(unsigned int, len, sizeof(int));
+-
+ 	rc = -EINVAL;
+ 	if (len < 0)
+ 		goto out;
+ 
++	len = min_t(unsigned int, len, sizeof(int));
++
+ 	rc = -EFAULT;
+ 	if (put_user(len, optlen))
+ 		goto out;
+diff --git a/scripts/clang-tools/gen_compile_commands.py b/scripts/clang-tools/gen_compile_commands.py
+index 8bf55bb4f515c..96e4865ee934d 100755
+--- a/scripts/clang-tools/gen_compile_commands.py
++++ b/scripts/clang-tools/gen_compile_commands.py
+@@ -176,7 +176,7 @@ def process_line(root_directory, command_prefix, file_path):
+     # escape the pound sign '#', either as '\#' or '$(pound)' (depending on the
+     # kernel version). The compile_commands.json file is not interepreted
+     # by Make, so this code replaces the escaped version with '#'.
+-    prefix = command_prefix.replace('\#', '#').replace('$(pound)', '#')
++    prefix = command_prefix.replace(r'\#', '#').replace('$(pound)', '#')
+ 
+     # Use os.path.abspath() to normalize the path resolving '.' and '..' .
+     abs_path = os.path.abspath(os.path.join(root_directory, file_path))
+diff --git a/scripts/kconfig/lexer.l b/scripts/kconfig/lexer.l
+index 240109f965aeb..72e5e9ac52bb4 100644
+--- a/scripts/kconfig/lexer.l
++++ b/scripts/kconfig/lexer.l
+@@ -305,8 +305,11 @@ static char *expand_token(const char *in, size_t n)
+ 	new_string();
+ 	append_string(in, n);
+ 
+-	/* get the whole line because we do not know the end of token. */
+-	while ((c = input()) != EOF) {
++	/*
++	 * get the whole line because we do not know the end of token.
++	 * input() returns 0 (not EOF!) when it reachs the end of file.
++	 */
++	while ((c = input()) != 0) {
+ 		if (c == '\n') {
+ 			unput(c);
+ 			break;
+diff --git a/sound/core/seq/seq_midi.c b/sound/core/seq/seq_midi.c
+index 6825940ea2cf8..a741d1ae6639a 100644
+--- a/sound/core/seq/seq_midi.c
++++ b/sound/core/seq/seq_midi.c
+@@ -111,6 +111,12 @@ static int dump_midi(struct snd_rawmidi_substream *substream, const char *buf, i
+ 	return 0;
+ }
+ 
++/* callback for snd_seq_dump_var_event(), bridging to dump_midi() */
++static int __dump_midi(void *ptr, void *buf, int count)
++{
++	return dump_midi(ptr, buf, count);
++}
++
+ static int event_process_midi(struct snd_seq_event *ev, int direct,
+ 			      void *private_data, int atomic, int hop)
+ {
+@@ -130,7 +136,7 @@ static int event_process_midi(struct snd_seq_event *ev, int direct,
+ 			pr_debug("ALSA: seq_midi: invalid sysex event flags = 0x%x\n", ev->flags);
+ 			return 0;
+ 		}
+-		snd_seq_dump_var_event(ev, (snd_seq_dump_func_t)dump_midi, substream);
++		snd_seq_dump_var_event(ev, __dump_midi, substream);
+ 		snd_midi_event_reset_decode(msynth->parser);
+ 	} else {
+ 		if (msynth->parser == NULL)
+diff --git a/sound/core/seq/seq_virmidi.c b/sound/core/seq/seq_virmidi.c
+index 77d7037d1476f..82396b8c885a5 100644
+--- a/sound/core/seq/seq_virmidi.c
++++ b/sound/core/seq/seq_virmidi.c
+@@ -62,6 +62,13 @@ static void snd_virmidi_init_event(struct snd_virmidi *vmidi,
+ /*
+  * decode input event and put to read buffer of each opened file
+  */
++
++/* callback for snd_seq_dump_var_event(), bridging to snd_rawmidi_receive() */
++static int dump_to_rawmidi(void *ptr, void *buf, int count)
++{
++	return snd_rawmidi_receive(ptr, buf, count);
++}
++
+ static int snd_virmidi_dev_receive_event(struct snd_virmidi_dev *rdev,
+ 					 struct snd_seq_event *ev,
+ 					 bool atomic)
+@@ -80,7 +87,7 @@ static int snd_virmidi_dev_receive_event(struct snd_virmidi_dev *rdev,
+ 		if (ev->type == SNDRV_SEQ_EVENT_SYSEX) {
+ 			if ((ev->flags & SNDRV_SEQ_EVENT_LENGTH_MASK) != SNDRV_SEQ_EVENT_LENGTH_VARIABLE)
+ 				continue;
+-			snd_seq_dump_var_event(ev, (snd_seq_dump_func_t)snd_rawmidi_receive, vmidi->substream);
++			snd_seq_dump_var_event(ev, dump_to_rawmidi, vmidi->substream);
+ 			snd_midi_event_reset_decode(vmidi->parser);
+ 		} else {
+ 			len = snd_midi_event_decode(vmidi->parser, msg, sizeof(msg), ev);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 233449d982370..038837481c27c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6612,6 +6612,60 @@ static void alc285_fixup_hp_spectre_x360(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc285_fixup_hp_envy_x360(struct hda_codec *codec,
++				      const struct hda_fixup *fix,
++				      int action)
++{
++	static const struct coef_fw coefs[] = {
++		WRITE_COEF(0x08, 0x6a0c), WRITE_COEF(0x0d, 0xa023),
++		WRITE_COEF(0x10, 0x0320), WRITE_COEF(0x1a, 0x8c03),
++		WRITE_COEF(0x25, 0x1800), WRITE_COEF(0x26, 0x003a),
++		WRITE_COEF(0x28, 0x1dfe), WRITE_COEF(0x29, 0xb014),
++		WRITE_COEF(0x2b, 0x1dfe), WRITE_COEF(0x37, 0xfe15),
++		WRITE_COEF(0x38, 0x7909), WRITE_COEF(0x45, 0xd489),
++		WRITE_COEF(0x46, 0x00f4), WRITE_COEF(0x4a, 0x21e0),
++		WRITE_COEF(0x66, 0x03f0), WRITE_COEF(0x67, 0x1000),
++		WRITE_COEF(0x6e, 0x1005), { }
++	};
++
++	static const struct hda_pintbl pincfgs[] = {
++		{ 0x12, 0xb7a60130 },  /* Internal microphone*/
++		{ 0x14, 0x90170150 },  /* B&O soundbar speakers */
++		{ 0x17, 0x90170153 },  /* Side speakers */
++		{ 0x19, 0x03a11040 },  /* Headset microphone */
++		{ }
++	};
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		snd_hda_apply_pincfgs(codec, pincfgs);
++
++		/* Fixes volume control problem for side speakers */
++		alc295_fixup_disable_dac3(codec, fix, action);
++
++		/* Fixes no sound from headset speaker */
++		snd_hda_codec_amp_stereo(codec, 0x21, HDA_OUTPUT, 0, -1, 0);
++
++		/* Auto-enable headset mic when plugged */
++		snd_hda_jack_set_gating_jack(codec, 0x19, 0x21);
++
++		/* Headset mic volume enhancement */
++		snd_hda_codec_set_pin_target(codec, 0x19, PIN_VREF50);
++		break;
++	case HDA_FIXUP_ACT_INIT:
++		alc_process_coef_fw(codec, coefs);
++		break;
++	case HDA_FIXUP_ACT_BUILD:
++		rename_ctl(codec, "Bass Speaker Playback Volume",
++			   "B&O-Tuned Playback Volume");
++		rename_ctl(codec, "Front Playback Switch",
++			   "B&O Soundbar Playback Switch");
++		rename_ctl(codec, "Bass Speaker Playback Switch",
++			   "Side Speaker Playback Switch");
++		break;
++	}
++}
++
+ /* for hda_fixup_thinkpad_acpi() */
+ #include "thinkpad_helper.c"
+ 
+@@ -6819,6 +6873,7 @@ enum {
+ 	ALC280_FIXUP_HP_9480M,
+ 	ALC245_FIXUP_HP_X360_AMP,
+ 	ALC285_FIXUP_HP_SPECTRE_X360_EB1,
++	ALC285_FIXUP_HP_ENVY_X360,
+ 	ALC288_FIXUP_DELL_HEADSET_MODE,
+ 	ALC288_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 	ALC288_FIXUP_DELL_XPS_13,
+@@ -8614,6 +8669,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_hp_spectre_x360_eb1
+ 	},
++	[ALC285_FIXUP_HP_ENVY_X360] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_hp_envy_x360,
++		.chained = true,
++		.chain_id = ALC285_FIXUP_HP_GPIO_AMP_INIT,
++	},
+ 	[ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_ideapad_s740_coef,
+@@ -9001,6 +9062,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x8537, "HP ProBook 440 G6", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x85de, "HP Envy x360 13-ar0xxx", ALC285_FIXUP_HP_ENVY_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+@@ -9517,6 +9579,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"},
+ 	{.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"},
+ 	{.id = ALC285_FIXUP_HP_SPECTRE_X360_EB1, .name = "alc285-hp-spectre-x360-eb1"},
++	{.id = ALC285_FIXUP_HP_ENVY_X360, .name = "alc285-hp-envy-x360"},
+ 	{.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"},
+ 	{.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"},
+ 	{.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"},
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index 04457cbed5b4e..5db63ef33f1a2 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -3772,6 +3772,16 @@ static const struct dmi_system_id dmi_platform_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+ 		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
+ 		  DMI_EXACT_MATCH(DMI_BOARD_VERSION, "Default string"),
++		  /*
++		   * Above strings are too generic, LattePanda BIOS versions for
++		   * all 4 hw revisions are:
++		   * DF-BI-7-S70CR100-*
++		   * DF-BI-7-S70CR110-*
++		   * DF-BI-7-S70CR200-*
++		   * LP-BS-7-S70CR700-*
++		   * Do a partial match for S70CR to avoid false positive matches.
++		   */
++		  DMI_MATCH(DMI_BIOS_VERSION, "S70CR"),
+ 		},
+ 		.driver_data = (void *)&lattepanda_board_platform_data,
+ 	},
+diff --git a/sound/soc/codecs/wm8962.c b/sound/soc/codecs/wm8962.c
+index 57aeded978c28..272932e200d87 100644
+--- a/sound/soc/codecs/wm8962.c
++++ b/sound/soc/codecs/wm8962.c
+@@ -2219,6 +2219,9 @@ SND_SOC_DAPM_PGA_E("HPOUT", SND_SOC_NOPM, 0, 0, NULL, 0, hp_event,
+ 
+ SND_SOC_DAPM_OUTPUT("HPOUTL"),
+ SND_SOC_DAPM_OUTPUT("HPOUTR"),
++
++SND_SOC_DAPM_PGA("SPKOUTL Output", WM8962_CLASS_D_CONTROL_1, 6, 0, NULL, 0),
++SND_SOC_DAPM_PGA("SPKOUTR Output", WM8962_CLASS_D_CONTROL_1, 7, 0, NULL, 0),
+ };
+ 
+ static const struct snd_soc_dapm_widget wm8962_dapm_spk_mono_widgets[] = {
+@@ -2226,7 +2229,6 @@ SND_SOC_DAPM_MIXER("Speaker Mixer", WM8962_MIXER_ENABLES, 1, 0,
+ 		   spkmixl, ARRAY_SIZE(spkmixl)),
+ SND_SOC_DAPM_MUX_E("Speaker PGA", WM8962_PWR_MGMT_2, 4, 0, &spkoutl_mux,
+ 		   out_pga_event, SND_SOC_DAPM_POST_PMU),
+-SND_SOC_DAPM_PGA("Speaker Output", WM8962_CLASS_D_CONTROL_1, 7, 0, NULL, 0),
+ SND_SOC_DAPM_OUTPUT("SPKOUT"),
+ };
+ 
+@@ -2241,9 +2243,6 @@ SND_SOC_DAPM_MUX_E("SPKOUTL PGA", WM8962_PWR_MGMT_2, 4, 0, &spkoutl_mux,
+ SND_SOC_DAPM_MUX_E("SPKOUTR PGA", WM8962_PWR_MGMT_2, 3, 0, &spkoutr_mux,
+ 		   out_pga_event, SND_SOC_DAPM_POST_PMU),
+ 
+-SND_SOC_DAPM_PGA("SPKOUTR Output", WM8962_CLASS_D_CONTROL_1, 7, 0, NULL, 0),
+-SND_SOC_DAPM_PGA("SPKOUTL Output", WM8962_CLASS_D_CONTROL_1, 6, 0, NULL, 0),
+-
+ SND_SOC_DAPM_OUTPUT("SPKOUTL"),
+ SND_SOC_DAPM_OUTPUT("SPKOUTR"),
+ };
+@@ -2353,12 +2352,18 @@ static const struct snd_soc_dapm_route wm8962_spk_mono_intercon[] = {
+ 	{ "Speaker PGA", "Mixer", "Speaker Mixer" },
+ 	{ "Speaker PGA", "DAC", "DACL" },
+ 
+-	{ "Speaker Output", NULL, "Speaker PGA" },
+-	{ "Speaker Output", NULL, "SYSCLK" },
+-	{ "Speaker Output", NULL, "TOCLK" },
+-	{ "Speaker Output", NULL, "TEMP_SPK" },
++	{ "SPKOUTL Output", NULL, "Speaker PGA" },
++	{ "SPKOUTL Output", NULL, "SYSCLK" },
++	{ "SPKOUTL Output", NULL, "TOCLK" },
++	{ "SPKOUTL Output", NULL, "TEMP_SPK" },
+ 
+-	{ "SPKOUT", NULL, "Speaker Output" },
++	{ "SPKOUTR Output", NULL, "Speaker PGA" },
++	{ "SPKOUTR Output", NULL, "SYSCLK" },
++	{ "SPKOUTR Output", NULL, "TOCLK" },
++	{ "SPKOUTR Output", NULL, "TEMP_SPK" },
++
++	{ "SPKOUT", NULL, "SPKOUTL Output" },
++	{ "SPKOUT", NULL, "SPKOUTR Output" },
+ };
+ 
+ static const struct snd_soc_dapm_route wm8962_spk_stereo_intercon[] = {
+@@ -2898,8 +2903,12 @@ static int wm8962_set_fll(struct snd_soc_component *component, int fll_id, int s
+ 	switch (fll_id) {
+ 	case WM8962_FLL_MCLK:
+ 	case WM8962_FLL_BCLK:
++		fll1 |= (fll_id - 1) << WM8962_FLL_REFCLK_SRC_SHIFT;
++		break;
+ 	case WM8962_FLL_OSC:
+ 		fll1 |= (fll_id - 1) << WM8962_FLL_REFCLK_SRC_SHIFT;
++		snd_soc_component_update_bits(component, WM8962_PLL2,
++					      WM8962_OSC_ENA, WM8962_OSC_ENA);
+ 		break;
+ 	case WM8962_FLL_INT:
+ 		snd_soc_component_update_bits(component, WM8962_FLL_CONTROL_1,
+@@ -2908,7 +2917,7 @@ static int wm8962_set_fll(struct snd_soc_component *component, int fll_id, int s
+ 				    WM8962_FLL_FRC_NCO, WM8962_FLL_FRC_NCO);
+ 		break;
+ 	default:
+-		dev_err(component->dev, "Unknown FLL source %d\n", ret);
++		dev_err(component->dev, "Unknown FLL source %d\n", source);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index f5b1b3b876980..1d049685e7075 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -529,6 +529,18 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{	/* Chuwi Vi8 dual-boot (CWI506) */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Insyde"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "i86"),
++			/* The above are too generic, also match BIOS info */
++			DMI_MATCH(DMI_BIOS_VERSION, "CHUWI2.D86JHBNR02"),
++		},
++		.driver_data = (void *)(BYTCR_INPUT_DEFAULTS |
++					BYT_RT5640_MONO_SPEAKER |
++					BYT_RT5640_SSP0_AIF1 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{
+ 		/* Chuwi Vi10 (CWI505) */
+ 		.matches = {
+diff --git a/sound/soc/meson/aiu.c b/sound/soc/meson/aiu.c
+index dc35ca79021c5..03bc3e5b6cab5 100644
+--- a/sound/soc/meson/aiu.c
++++ b/sound/soc/meson/aiu.c
+@@ -215,49 +215,27 @@ static const char * const aiu_spdif_ids[] = {
+ static int aiu_clk_get(struct device *dev)
+ {
+ 	struct aiu *aiu = dev_get_drvdata(dev);
++	struct clk *pclk;
+ 	int ret;
+ 
+-	aiu->pclk = devm_clk_get(dev, "pclk");
+-	if (IS_ERR(aiu->pclk)) {
+-		if (PTR_ERR(aiu->pclk) != -EPROBE_DEFER)
+-			dev_err(dev, "Can't get the aiu pclk\n");
+-		return PTR_ERR(aiu->pclk);
+-	}
++	pclk = devm_clk_get_enabled(dev, "pclk");
++	if (IS_ERR(pclk))
++		return dev_err_probe(dev, PTR_ERR(pclk), "Can't get the aiu pclk\n");
+ 
+ 	aiu->spdif_mclk = devm_clk_get(dev, "spdif_mclk");
+-	if (IS_ERR(aiu->spdif_mclk)) {
+-		if (PTR_ERR(aiu->spdif_mclk) != -EPROBE_DEFER)
+-			dev_err(dev, "Can't get the aiu spdif master clock\n");
+-		return PTR_ERR(aiu->spdif_mclk);
+-	}
++	if (IS_ERR(aiu->spdif_mclk))
++		return dev_err_probe(dev, PTR_ERR(aiu->spdif_mclk),
++				     "Can't get the aiu spdif master clock\n");
+ 
+ 	ret = aiu_clk_bulk_get(dev, aiu_i2s_ids, ARRAY_SIZE(aiu_i2s_ids),
+ 			       &aiu->i2s);
+-	if (ret) {
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "Can't get the i2s clocks\n");
+-		return ret;
+-	}
++	if (ret)
++		return dev_err_probe(dev, ret, "Can't get the i2s clocks\n");
+ 
+ 	ret = aiu_clk_bulk_get(dev, aiu_spdif_ids, ARRAY_SIZE(aiu_spdif_ids),
+ 			       &aiu->spdif);
+-	if (ret) {
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "Can't get the spdif clocks\n");
+-		return ret;
+-	}
+-
+-	ret = clk_prepare_enable(aiu->pclk);
+-	if (ret) {
+-		dev_err(dev, "peripheral clock enable failed\n");
+-		return ret;
+-	}
+-
+-	ret = devm_add_action_or_reset(dev,
+-				       (void(*)(void *))clk_disable_unprepare,
+-				       aiu->pclk);
+ 	if (ret)
+-		dev_err(dev, "failed to add reset action on pclk");
++		return dev_err_probe(dev, ret, "Can't get the spdif clocks\n");
+ 
+ 	return ret;
+ }
+@@ -281,11 +259,8 @@ static int aiu_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, aiu);
+ 
+ 	ret = device_reset(dev);
+-	if (ret) {
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "Failed to reset device\n");
+-		return ret;
+-	}
++	if (ret)
++		return dev_err_probe(dev, ret, "Failed to reset device\n");
+ 
+ 	regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(regs))
+diff --git a/sound/soc/meson/aiu.h b/sound/soc/meson/aiu.h
+index 87aa19ac4af3a..44f8c213d35a0 100644
+--- a/sound/soc/meson/aiu.h
++++ b/sound/soc/meson/aiu.h
+@@ -33,7 +33,6 @@ struct aiu_platform_data {
+ };
+ 
+ struct aiu {
+-	struct clk *pclk;
+ 	struct clk *spdif_mclk;
+ 	struct aiu_interface i2s;
+ 	struct aiu_interface spdif;
+diff --git a/sound/soc/meson/axg-fifo.c b/sound/soc/meson/axg-fifo.c
+index b2e867113226b..295c0fc30745e 100644
+--- a/sound/soc/meson/axg-fifo.c
++++ b/sound/soc/meson/axg-fifo.c
+@@ -350,20 +350,12 @@ int axg_fifo_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	fifo->pclk = devm_clk_get(dev, NULL);
+-	if (IS_ERR(fifo->pclk)) {
+-		if (PTR_ERR(fifo->pclk) != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get pclk: %ld\n",
+-				PTR_ERR(fifo->pclk));
+-		return PTR_ERR(fifo->pclk);
+-	}
++	if (IS_ERR(fifo->pclk))
++		return dev_err_probe(dev, PTR_ERR(fifo->pclk), "failed to get pclk\n");
+ 
+ 	fifo->arb = devm_reset_control_get_exclusive(dev, NULL);
+-	if (IS_ERR(fifo->arb)) {
+-		if (PTR_ERR(fifo->arb) != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get arb reset: %ld\n",
+-				PTR_ERR(fifo->arb));
+-		return PTR_ERR(fifo->arb);
+-	}
++	if (IS_ERR(fifo->arb))
++		return dev_err_probe(dev, PTR_ERR(fifo->arb), "failed to get arb reset\n");
+ 
+ 	fifo->irq = of_irq_get(dev->of_node, 0);
+ 	if (fifo->irq <= 0) {
+diff --git a/sound/soc/meson/axg-pdm.c b/sound/soc/meson/axg-pdm.c
+index bfd37d49a73ef..672e43a9729dc 100644
+--- a/sound/soc/meson/axg-pdm.c
++++ b/sound/soc/meson/axg-pdm.c
+@@ -586,7 +586,6 @@ static int axg_pdm_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct axg_pdm *priv;
+ 	void __iomem *regs;
+-	int ret;
+ 
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+@@ -611,28 +610,16 @@ static int axg_pdm_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	priv->pclk = devm_clk_get(dev, "pclk");
+-	if (IS_ERR(priv->pclk)) {
+-		ret = PTR_ERR(priv->pclk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get pclk: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(priv->pclk))
++		return dev_err_probe(dev, PTR_ERR(priv->pclk), "failed to get pclk\n");
+ 
+ 	priv->dclk = devm_clk_get(dev, "dclk");
+-	if (IS_ERR(priv->dclk)) {
+-		ret = PTR_ERR(priv->dclk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get dclk: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(priv->dclk))
++		return dev_err_probe(dev, PTR_ERR(priv->dclk), "failed to get dclk\n");
+ 
+ 	priv->sysclk = devm_clk_get(dev, "sysclk");
+-	if (IS_ERR(priv->sysclk)) {
+-		ret = PTR_ERR(priv->sysclk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get dclk: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(priv->sysclk))
++		return dev_err_probe(dev, PTR_ERR(priv->sysclk), "failed to get dclk\n");
+ 
+ 	return devm_snd_soc_register_component(dev, &axg_pdm_component_drv,
+ 					       &axg_pdm_dai_drv, 1);
+diff --git a/sound/soc/meson/axg-spdifin.c b/sound/soc/meson/axg-spdifin.c
+index 7aaded1fc376b..245189d2ee95f 100644
+--- a/sound/soc/meson/axg-spdifin.c
++++ b/sound/soc/meson/axg-spdifin.c
+@@ -439,7 +439,6 @@ static int axg_spdifin_probe(struct platform_device *pdev)
+ 	struct axg_spdifin *priv;
+ 	struct snd_soc_dai_driver *dai_drv;
+ 	void __iomem *regs;
+-	int ret;
+ 
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+@@ -464,20 +463,12 @@ static int axg_spdifin_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	priv->pclk = devm_clk_get(dev, "pclk");
+-	if (IS_ERR(priv->pclk)) {
+-		ret = PTR_ERR(priv->pclk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get pclk: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(priv->pclk))
++		return dev_err_probe(dev, PTR_ERR(priv->pclk), "failed to get pclk\n");
+ 
+ 	priv->refclk = devm_clk_get(dev, "refclk");
+-	if (IS_ERR(priv->refclk)) {
+-		ret = PTR_ERR(priv->refclk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get mclk: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(priv->refclk))
++		return dev_err_probe(dev, PTR_ERR(priv->refclk), "failed to get mclk\n");
+ 
+ 	dai_drv = axg_spdifin_get_dai_drv(dev, priv);
+ 	if (IS_ERR(dai_drv)) {
+diff --git a/sound/soc/meson/axg-spdifout.c b/sound/soc/meson/axg-spdifout.c
+index e769a5ee6e27e..3960d082e1436 100644
+--- a/sound/soc/meson/axg-spdifout.c
++++ b/sound/soc/meson/axg-spdifout.c
+@@ -403,7 +403,6 @@ static int axg_spdifout_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct axg_spdifout *priv;
+ 	void __iomem *regs;
+-	int ret;
+ 
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+@@ -422,20 +421,12 @@ static int axg_spdifout_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	priv->pclk = devm_clk_get(dev, "pclk");
+-	if (IS_ERR(priv->pclk)) {
+-		ret = PTR_ERR(priv->pclk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get pclk: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(priv->pclk))
++		return dev_err_probe(dev, PTR_ERR(priv->pclk), "failed to get pclk\n");
+ 
+ 	priv->mclk = devm_clk_get(dev, "mclk");
+-	if (IS_ERR(priv->mclk)) {
+-		ret = PTR_ERR(priv->mclk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get mclk: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(priv->mclk))
++		return dev_err_probe(dev, PTR_ERR(priv->mclk), "failed to get mclk\n");
+ 
+ 	return devm_snd_soc_register_component(dev, &axg_spdifout_component_drv,
+ 			axg_spdifout_dai_drv, ARRAY_SIZE(axg_spdifout_dai_drv));
+diff --git a/sound/soc/meson/axg-tdm-formatter.c b/sound/soc/meson/axg-tdm-formatter.c
+index 4834cfd163c03..63333a2b0a9c3 100644
+--- a/sound/soc/meson/axg-tdm-formatter.c
++++ b/sound/soc/meson/axg-tdm-formatter.c
+@@ -265,7 +265,6 @@ int axg_tdm_formatter_probe(struct platform_device *pdev)
+ 	const struct axg_tdm_formatter_driver *drv;
+ 	struct axg_tdm_formatter *formatter;
+ 	void __iomem *regs;
+-	int ret;
+ 
+ 	drv = of_device_get_match_data(dev);
+ 	if (!drv) {
+@@ -292,57 +291,34 @@ int axg_tdm_formatter_probe(struct platform_device *pdev)
+ 
+ 	/* Peripharal clock */
+ 	formatter->pclk = devm_clk_get(dev, "pclk");
+-	if (IS_ERR(formatter->pclk)) {
+-		ret = PTR_ERR(formatter->pclk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get pclk: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(formatter->pclk))
++		return dev_err_probe(dev, PTR_ERR(formatter->pclk), "failed to get pclk\n");
+ 
+ 	/* Formatter bit clock */
+ 	formatter->sclk = devm_clk_get(dev, "sclk");
+-	if (IS_ERR(formatter->sclk)) {
+-		ret = PTR_ERR(formatter->sclk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get sclk: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(formatter->sclk))
++		return dev_err_probe(dev, PTR_ERR(formatter->sclk), "failed to get sclk\n");
+ 
+ 	/* Formatter sample clock */
+ 	formatter->lrclk = devm_clk_get(dev, "lrclk");
+-	if (IS_ERR(formatter->lrclk)) {
+-		ret = PTR_ERR(formatter->lrclk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get lrclk: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(formatter->lrclk))
++		return dev_err_probe(dev, PTR_ERR(formatter->lrclk), "failed to get lrclk\n");
+ 
+ 	/* Formatter bit clock input multiplexer */
+ 	formatter->sclk_sel = devm_clk_get(dev, "sclk_sel");
+-	if (IS_ERR(formatter->sclk_sel)) {
+-		ret = PTR_ERR(formatter->sclk_sel);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get sclk_sel: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(formatter->sclk_sel))
++		return dev_err_probe(dev, PTR_ERR(formatter->sclk_sel), "failed to get sclk_sel\n");
+ 
+ 	/* Formatter sample clock input multiplexer */
+ 	formatter->lrclk_sel = devm_clk_get(dev, "lrclk_sel");
+-	if (IS_ERR(formatter->lrclk_sel)) {
+-		ret = PTR_ERR(formatter->lrclk_sel);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get lrclk_sel: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(formatter->lrclk_sel))
++		return dev_err_probe(dev, PTR_ERR(formatter->lrclk_sel),
++				     "failed to get lrclk_sel\n");
+ 
+ 	/* Formatter dedicated reset line */
+ 	formatter->reset = devm_reset_control_get_optional_exclusive(dev, NULL);
+-	if (IS_ERR(formatter->reset)) {
+-		ret = PTR_ERR(formatter->reset);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get reset: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(formatter->reset))
++		return dev_err_probe(dev, PTR_ERR(formatter->reset), "failed to get reset\n");
+ 
+ 	return devm_snd_soc_register_component(dev, drv->component_drv,
+ 					       NULL, 0);
+diff --git a/sound/soc/meson/axg-tdm-interface.c b/sound/soc/meson/axg-tdm-interface.c
+index 87cac440b3693..60d132ab1ab78 100644
+--- a/sound/soc/meson/axg-tdm-interface.c
++++ b/sound/soc/meson/axg-tdm-interface.c
+@@ -12,6 +12,9 @@
+ 
+ #include "axg-tdm.h"
+ 
++/* Maximum bit clock frequency according the datasheets */
++#define MAX_SCLK 100000000 /* Hz */
++
+ enum {
+ 	TDM_IFACE_PAD,
+ 	TDM_IFACE_LOOPBACK,
+@@ -155,19 +158,27 @@ static int axg_tdm_iface_startup(struct snd_pcm_substream *substream,
+ 		return -EINVAL;
+ 	}
+ 
+-	/* Apply component wide rate symmetry */
+ 	if (snd_soc_component_active(dai->component)) {
++		/* Apply component wide rate symmetry */
+ 		ret = snd_pcm_hw_constraint_single(substream->runtime,
+ 						   SNDRV_PCM_HW_PARAM_RATE,
+ 						   iface->rate);
+-		if (ret < 0) {
+-			dev_err(dai->dev,
+-				"can't set iface rate constraint\n");
+-			return ret;
+-		}
++
++	} else {
++		/* Limit rate according to the slot number and width */
++		unsigned int max_rate =
++			MAX_SCLK / (iface->slots * iface->slot_width);
++		ret = snd_pcm_hw_constraint_minmax(substream->runtime,
++						   SNDRV_PCM_HW_PARAM_RATE,
++						   0, max_rate);
+ 	}
+ 
+-	return 0;
++	if (ret < 0)
++		dev_err(dai->dev, "can't set iface rate constraint\n");
++	else
++		ret = 0;
++
++	return ret;
+ }
+ 
+ static int axg_tdm_iface_set_stream(struct snd_pcm_substream *substream,
+@@ -266,8 +277,8 @@ static int axg_tdm_iface_set_sclk(struct snd_soc_dai *dai,
+ 	srate = iface->slots * iface->slot_width * params_rate(params);
+ 
+ 	if (!iface->mclk_rate) {
+-		/* If no specific mclk is requested, default to bit clock * 4 */
+-		clk_set_rate(iface->mclk, 4 * srate);
++		/* If no specific mclk is requested, default to bit clock * 2 */
++		clk_set_rate(iface->mclk, 2 * srate);
+ 	} else {
+ 		/* Check if we can actually get the bit clock from mclk */
+ 		if (iface->mclk_rate % srate) {
+@@ -517,21 +528,13 @@ static int axg_tdm_iface_probe(struct platform_device *pdev)
+ 
+ 	/* Bit clock provided on the pad */
+ 	iface->sclk = devm_clk_get(dev, "sclk");
+-	if (IS_ERR(iface->sclk)) {
+-		ret = PTR_ERR(iface->sclk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get sclk: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(iface->sclk))
++		return dev_err_probe(dev, PTR_ERR(iface->sclk), "failed to get sclk\n");
+ 
+ 	/* Sample clock provided on the pad */
+ 	iface->lrclk = devm_clk_get(dev, "lrclk");
+-	if (IS_ERR(iface->lrclk)) {
+-		ret = PTR_ERR(iface->lrclk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get lrclk: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(iface->lrclk))
++		return dev_err_probe(dev, PTR_ERR(iface->lrclk), "failed to get lrclk\n");
+ 
+ 	/*
+ 	 * mclk maybe be missing when the cpu dai is in slave mode and
+@@ -542,13 +545,10 @@ static int axg_tdm_iface_probe(struct platform_device *pdev)
+ 	iface->mclk = devm_clk_get(dev, "mclk");
+ 	if (IS_ERR(iface->mclk)) {
+ 		ret = PTR_ERR(iface->mclk);
+-		if (ret == -ENOENT) {
++		if (ret == -ENOENT)
+ 			iface->mclk = NULL;
+-		} else {
+-			if (ret != -EPROBE_DEFER)
+-				dev_err(dev, "failed to get mclk: %d\n", ret);
+-			return ret;
+-		}
++		else
++			return dev_err_probe(dev, ret, "failed to get mclk\n");
+ 	}
+ 
+ 	return devm_snd_soc_register_component(dev,
+diff --git a/sound/soc/meson/meson-card-utils.c b/sound/soc/meson/meson-card-utils.c
+index 300ac8be46ef8..0e2691f011b7b 100644
+--- a/sound/soc/meson/meson-card-utils.c
++++ b/sound/soc/meson/meson-card-utils.c
+@@ -85,11 +85,9 @@ int meson_card_parse_dai(struct snd_soc_card *card,
+ 
+ 	ret = of_parse_phandle_with_args(node, "sound-dai",
+ 					 "#sound-dai-cells", 0, &args);
+-	if (ret) {
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(card->dev, "can't parse dai %d\n", ret);
+-		return ret;
+-	}
++	if (ret)
++		return dev_err_probe(card->dev, ret, "can't parse dai\n");
++
+ 	*dai_of_node = args.np;
+ 
+ 	return snd_soc_get_dai_name(&args, dai_name);
+diff --git a/sound/soc/meson/t9015.c b/sound/soc/meson/t9015.c
+index 56d2592c16d53..b085aa65c688c 100644
+--- a/sound/soc/meson/t9015.c
++++ b/sound/soc/meson/t9015.c
+@@ -48,7 +48,6 @@
+ #define POWER_CFG	0x10
+ 
+ struct t9015 {
+-	struct clk *pclk;
+ 	struct regulator *avdd;
+ };
+ 
+@@ -250,6 +249,7 @@ static int t9015_probe(struct platform_device *pdev)
+ 	struct t9015 *priv;
+ 	void __iomem *regs;
+ 	struct regmap *regmap;
++	struct clk *pclk;
+ 	int ret;
+ 
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+@@ -257,31 +257,13 @@ static int t9015_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 	platform_set_drvdata(pdev, priv);
+ 
+-	priv->pclk = devm_clk_get(dev, "pclk");
+-	if (IS_ERR(priv->pclk)) {
+-		if (PTR_ERR(priv->pclk) != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get core clock\n");
+-		return PTR_ERR(priv->pclk);
+-	}
++	pclk = devm_clk_get_enabled(dev, "pclk");
++	if (IS_ERR(pclk))
++		return dev_err_probe(dev, PTR_ERR(pclk), "failed to get core clock\n");
+ 
+ 	priv->avdd = devm_regulator_get(dev, "AVDD");
+-	if (IS_ERR(priv->avdd)) {
+-		if (PTR_ERR(priv->avdd) != -EPROBE_DEFER)
+-			dev_err(dev, "failed to AVDD\n");
+-		return PTR_ERR(priv->avdd);
+-	}
+-
+-	ret = clk_prepare_enable(priv->pclk);
+-	if (ret) {
+-		dev_err(dev, "core clock enable failed\n");
+-		return ret;
+-	}
+-
+-	ret = devm_add_action_or_reset(dev,
+-			(void(*)(void *))clk_disable_unprepare,
+-			priv->pclk);
+-	if (ret)
+-		return ret;
++	if (IS_ERR(priv->avdd))
++		return dev_err_probe(dev, PTR_ERR(priv->avdd), "failed to AVDD\n");
+ 
+ 	ret = device_reset(dev);
+ 	if (ret) {
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index c4f4585f9b851..f51e901a9689e 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -301,9 +301,12 @@ static struct snd_pcm_chmap_elem *convert_chmap(int channels, unsigned int bits,
+ 	c = 0;
+ 
+ 	if (bits) {
+-		for (; bits && *maps; maps++, bits >>= 1)
++		for (; bits && *maps; maps++, bits >>= 1) {
+ 			if (bits & 1)
+ 				chmap->map[c++] = *maps;
++			if (c == chmap->channels)
++				break;
++		}
+ 	} else {
+ 		/* If we're missing wChannelConfig, then guess something
+ 		    to make sure the channel map is not skipped entirely */
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index d2bcce627b320..d07996e7952f3 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -1946,7 +1946,7 @@ static int profile_open_perf_events(struct profiler_bpf *obj)
+ 	int map_fd;
+ 
+ 	profile_perf_events = calloc(
+-		sizeof(int), obj->rodata->num_cpu * obj->rodata->num_metric);
++		obj->rodata->num_cpu * obj->rodata->num_metric, sizeof(int));
+ 	if (!profile_perf_events) {
+ 		p_err("failed to allocate memory for perf_event array: %s",
+ 		      strerror(errno));
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index e5c938d538ee5..167cd8d3b7a21 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -1264,8 +1264,8 @@ static int
+ record__switch_output(struct record *rec, bool at_exit)
+ {
+ 	struct perf_data *data = &rec->data;
++	char *new_filename = NULL;
+ 	int fd, err;
+-	char *new_filename;
+ 
+ 	/* Same Size:      "2015122520103046"*/
+ 	char timestamp[] = "InvalidTimestamp";
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 1a1cbd16d76d4..d9a4c0202a8c3 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -2109,7 +2109,6 @@ int evsel__parse_sample(struct evsel *evsel, union perf_event *event,
+ 	data->period = evsel->core.attr.sample_period;
+ 	data->cpumode = event->header.misc & PERF_RECORD_MISC_CPUMODE_MASK;
+ 	data->misc    = event->header.misc;
+-	data->id = -1ULL;
+ 	data->data_src = PERF_MEM_DATA_SRC_NONE;
+ 
+ 	if (event->header.type != PERF_RECORD_SAMPLE) {
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index 4688e39de52af..971fd77bd3e61 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -271,7 +271,7 @@ static void print_metric_only(struct perf_stat_config *config,
+ 	if (color)
+ 		mlen += strlen(color) + sizeof(PERF_COLOR_RESET) - 1;
+ 
+-	color_snprintf(str, sizeof(str), color ?: "", fmt, val);
++	color_snprintf(str, sizeof(str), color ?: "", fmt ?: "", val);
+ 	fprintf(out, "%*s ", mlen, str);
+ }
+ 
+diff --git a/tools/perf/util/thread_map.c b/tools/perf/util/thread_map.c
+index c9bfe4696943b..cee7fc3b5bb0c 100644
+--- a/tools/perf/util/thread_map.c
++++ b/tools/perf/util/thread_map.c
+@@ -279,13 +279,13 @@ struct perf_thread_map *thread_map__new_by_tid_str(const char *tid_str)
+ 		threads->nr = ntasks;
+ 	}
+ out:
++	strlist__delete(slist);
+ 	if (threads)
+ 		refcount_set(&threads->refcnt, 1);
+ 	return threads;
+ 
+ out_free_threads:
+ 	zfree(&threads);
+-	strlist__delete(slist);
+ 	goto out;
+ }
+ 
+diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
+index 44a25a9f1f722..956ee3c01dd1a 100644
+--- a/tools/testing/selftests/net/tls.c
++++ b/tools/testing/selftests/net/tls.c
+@@ -653,12 +653,12 @@ TEST_F(tls, recv_partial)
+ 
+ 	memset(recv_mem, 0, sizeof(recv_mem));
+ 	EXPECT_EQ(send(self->fd, test_str, send_len, 0), send_len);
+-	EXPECT_NE(recv(self->cfd, recv_mem, strlen(test_str_first),
+-		       MSG_WAITALL), -1);
++	EXPECT_EQ(recv(self->cfd, recv_mem, strlen(test_str_first),
++		       MSG_WAITALL), strlen(test_str_first));
+ 	EXPECT_EQ(memcmp(test_str_first, recv_mem, strlen(test_str_first)), 0);
+ 	memset(recv_mem, 0, sizeof(recv_mem));
+-	EXPECT_NE(recv(self->cfd, recv_mem, strlen(test_str_second),
+-		       MSG_WAITALL), -1);
++	EXPECT_EQ(recv(self->cfd, recv_mem, strlen(test_str_second),
++		       MSG_WAITALL), strlen(test_str_second));
+ 	EXPECT_EQ(memcmp(test_str_second, recv_mem, strlen(test_str_second)),
+ 		  0);
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-04-13 13:09 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-04-13 13:09 UTC (permalink / raw
  To: gentoo-commits

commit:     e360e589e2ef0dee5c740b1153047c498309637e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 13 13:08:46 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Apr 13 13:08:46 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e360e589

Linux patch 5.10.215

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1214_linux-5.10.215.patch | 11851 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11855 insertions(+)

diff --git a/0000_README b/0000_README
index acd4a2d0..fe9c9853 100644
--- a/0000_README
+++ b/0000_README
@@ -899,6 +899,10 @@ Patch:  1213_linux-5.10.214.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.214
 
+Patch:  1214_linux-5.10.215.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.215
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1214_linux-5.10.215.patch b/1214_linux-5.10.215.patch
new file mode 100644
index 00000000..76e42d63
--- /dev/null
+++ b/1214_linux-5.10.215.patch
@@ -0,0 +1,11851 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index bfb4f4fada337..2a273bfebed05 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -507,6 +507,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/mds
+ 		/sys/devices/system/cpu/vulnerabilities/meltdown
+ 		/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
++		/sys/devices/system/cpu/vulnerabilities/reg_file_data_sampling
+ 		/sys/devices/system/cpu/vulnerabilities/retbleed
+ 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v1
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index 84742be223ff8..e020d1637e1c4 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -18,3 +18,4 @@ are configurable at compile, boot or run time.
+    processor_mmio_stale_data.rst
+    gather_data_sampling.rst
+    srso
++   reg-file-data-sampling
+diff --git a/Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst b/Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst
+new file mode 100644
+index 0000000000000..810424b4b7f6d
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst
+@@ -0,0 +1,104 @@
++==================================
++Register File Data Sampling (RFDS)
++==================================
++
++Register File Data Sampling (RFDS) is a microarchitectural vulnerability that
++only affects Intel Atom parts(also branded as E-cores). RFDS may allow
++a malicious actor to infer data values previously used in floating point
++registers, vector registers, or integer registers. RFDS does not provide the
++ability to choose which data is inferred. CVE-2023-28746 is assigned to RFDS.
++
++Affected Processors
++===================
++Below is the list of affected Intel processors [#f1]_:
++
++   ===================  ============
++   Common name          Family_Model
++   ===================  ============
++   ATOM_GOLDMONT           06_5CH
++   ATOM_GOLDMONT_D         06_5FH
++   ATOM_GOLDMONT_PLUS      06_7AH
++   ATOM_TREMONT_D          06_86H
++   ATOM_TREMONT            06_96H
++   ALDERLAKE               06_97H
++   ALDERLAKE_L             06_9AH
++   ATOM_TREMONT_L          06_9CH
++   RAPTORLAKE              06_B7H
++   RAPTORLAKE_P            06_BAH
++   ALDERLAKE_N             06_BEH
++   RAPTORLAKE_S            06_BFH
++   ===================  ============
++
++As an exception to this table, Intel Xeon E family parts ALDERLAKE(06_97H) and
++RAPTORLAKE(06_B7H) codenamed Catlow are not affected. They are reported as
++vulnerable in Linux because they share the same family/model with an affected
++part. Unlike their affected counterparts, they do not enumerate RFDS_CLEAR or
++CPUID.HYBRID. This information could be used to distinguish between the
++affected and unaffected parts, but it is deemed not worth adding complexity as
++the reporting is fixed automatically when these parts enumerate RFDS_NO.
++
++Mitigation
++==========
++Intel released a microcode update that enables software to clear sensitive
++information using the VERW instruction. Like MDS, RFDS deploys the same
++mitigation strategy to force the CPU to clear the affected buffers before an
++attacker can extract the secrets. This is achieved by using the otherwise
++unused and obsolete VERW instruction in combination with a microcode update.
++The microcode clears the affected CPU buffers when the VERW instruction is
++executed.
++
++Mitigation points
++-----------------
++VERW is executed by the kernel before returning to user space, and by KVM
++before VMentry. None of the affected cores support SMT, so VERW is not required
++at C-state transitions.
++
++New bits in IA32_ARCH_CAPABILITIES
++----------------------------------
++Newer processors and microcode update on existing affected processors added new
++bits to IA32_ARCH_CAPABILITIES MSR. These bits can be used to enumerate
++vulnerability and mitigation capability:
++
++- Bit 27 - RFDS_NO - When set, processor is not affected by RFDS.
++- Bit 28 - RFDS_CLEAR - When set, processor is affected by RFDS, and has the
++  microcode that clears the affected buffers on VERW execution.
++
++Mitigation control on the kernel command line
++---------------------------------------------
++The kernel command line allows to control RFDS mitigation at boot time with the
++parameter "reg_file_data_sampling=". The valid arguments are:
++
++  ==========  =================================================================
++  on          If the CPU is vulnerable, enable mitigation; CPU buffer clearing
++              on exit to userspace and before entering a VM.
++  off         Disables mitigation.
++  ==========  =================================================================
++
++Mitigation default is selected by CONFIG_MITIGATION_RFDS.
++
++Mitigation status information
++-----------------------------
++The Linux kernel provides a sysfs interface to enumerate the current
++vulnerability status of the system: whether the system is vulnerable, and
++which mitigations are active. The relevant sysfs file is:
++
++	/sys/devices/system/cpu/vulnerabilities/reg_file_data_sampling
++
++The possible values in this file are:
++
++  .. list-table::
++
++     * - 'Not affected'
++       - The processor is not vulnerable
++     * - 'Vulnerable'
++       - The processor is vulnerable, but no mitigation enabled
++     * - 'Vulnerable: No microcode'
++       - The processor is vulnerable but microcode is not updated.
++     * - 'Mitigation: Clear Register File'
++       - The processor is vulnerable and the CPU buffer clearing mitigation is
++	 enabled.
++
++References
++----------
++.. [#f1] Affected Processors
++   https://www.intel.com/content/www/us/en/developer/topic-technology/software-security-guidance/processors-affected-consolidated-product-cpu-model.html
+diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
+index 0fba3758d0da8..3056003512099 100644
+--- a/Documentation/admin-guide/hw-vuln/spectre.rst
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -484,11 +484,14 @@ Spectre variant 2
+ 
+    Systems which support enhanced IBRS (eIBRS) enable IBRS protection once at
+    boot, by setting the IBRS bit, and they're automatically protected against
+-   Spectre v2 variant attacks, including cross-thread branch target injections
+-   on SMT systems (STIBP). In other words, eIBRS enables STIBP too.
++   Spectre v2 variant attacks.
+ 
+-   Legacy IBRS systems clear the IBRS bit on exit to userspace and
+-   therefore explicitly enable STIBP for that
++   On Intel's enhanced IBRS systems, this includes cross-thread branch target
++   injections on SMT systems (STIBP). In other words, Intel eIBRS enables
++   STIBP, too.
++
++   AMD Automatic IBRS does not protect userspace, and Legacy IBRS systems clear
++   the IBRS bit on exit to userspace, therefore both explicitly enable STIBP.
+ 
+    The retpoline mitigation is turned on by default on vulnerable
+    CPUs. It can be forced on or off by the administrator
+@@ -622,9 +625,10 @@ kernel command line.
+                 retpoline,generic       Retpolines
+                 retpoline,lfence        LFENCE; indirect branch
+                 retpoline,amd           alias for retpoline,lfence
+-                eibrs                   enhanced IBRS
+-                eibrs,retpoline         enhanced IBRS + Retpolines
+-                eibrs,lfence            enhanced IBRS + LFENCE
++                eibrs                   Enhanced/Auto IBRS
++                eibrs,retpoline         Enhanced/Auto IBRS + Retpolines
++                eibrs,lfence            Enhanced/Auto IBRS + LFENCE
++                ibrs                    use IBRS to protect kernel
+ 
+ 		Not specifying this option is equivalent to
+ 		spectre_v2=auto.
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index f1f7c068cf65b..8e4882bb8cf85 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -993,6 +993,26 @@
+ 			The filter can be disabled or changed to another
+ 			driver later using sysfs.
+ 
++	reg_file_data_sampling=
++			[X86] Controls mitigation for Register File Data
++			Sampling (RFDS) vulnerability. RFDS is a CPU
++			vulnerability which may allow userspace to infer
++			kernel data values previously stored in floating point
++			registers, vector registers, or integer registers.
++			RFDS only affects Intel Atom processors.
++
++			on:	Turns ON the mitigation.
++			off:	Turns OFF the mitigation.
++
++			This parameter overrides the compile time default set
++			by CONFIG_MITIGATION_RFDS. Mitigation cannot be
++			disabled when other VERW based mitigations (like MDS)
++			are enabled. In order to disable RFDS mitigation all
++			VERW based mitigations need to be disabled.
++
++			For details see:
++			Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst
++
+ 	driver_async_probe=  [KNL]
+ 			List of driver names to be probed asynchronously.
+ 			Format: <driver_name1>,<driver_name2>...
+@@ -2919,6 +2939,7 @@
+ 					       nopti [X86,PPC]
+ 					       nospectre_v1 [X86,PPC]
+ 					       nospectre_v2 [X86,PPC,S390,ARM64]
++					       reg_file_data_sampling=off [X86]
+ 					       retbleed=off [X86]
+ 					       spec_store_bypass_disable=off [X86,PPC]
+ 					       spectre_v2_user=off [X86]
+@@ -5091,9 +5112,9 @@
+ 			retpoline,generic - Retpolines
+ 			retpoline,lfence  - LFENCE; indirect branch
+ 			retpoline,amd     - alias for retpoline,lfence
+-			eibrs		  - enhanced IBRS
+-			eibrs,retpoline   - enhanced IBRS + Retpolines
+-			eibrs,lfence      - enhanced IBRS + LFENCE
++			eibrs		  - Enhanced/Auto IBRS
++			eibrs,retpoline   - Enhanced/Auto IBRS + Retpolines
++			eibrs,lfence      - Enhanced/Auto IBRS + LFENCE
+ 			ibrs		  - use IBRS to protect kernel
+ 
+ 			Not specifying this option is equivalent to
+diff --git a/Documentation/block/queue-sysfs.rst b/Documentation/block/queue-sysfs.rst
+index 2638d3446b794..c8bf8bc3c03af 100644
+--- a/Documentation/block/queue-sysfs.rst
++++ b/Documentation/block/queue-sysfs.rst
+@@ -273,4 +273,11 @@ devices are described in the ZBC (Zoned Block Commands) and ZAC
+ do not support zone commands, they will be treated as regular block devices
+ and zoned will report "none".
+ 
++zone_write_granularity (RO)
++---------------------------
++This indicates the alignment constraint, in bytes, for write operations in
++sequential zones of zoned block devices (devices with a zoned attributed
++that reports "host-managed" or "host-aware"). This value is always 0 for
++regular block devices.
++
+ Jens Axboe <jens.axboe@oracle.com>, February 2009
+diff --git a/Documentation/x86/mds.rst b/Documentation/x86/mds.rst
+index 5d4330be200f9..e801df0bb3a81 100644
+--- a/Documentation/x86/mds.rst
++++ b/Documentation/x86/mds.rst
+@@ -95,6 +95,9 @@ The kernel provides a function to invoke the buffer clearing:
+ 
+     mds_clear_cpu_buffers()
+ 
++Also macro CLEAR_CPU_BUFFERS can be used in ASM late in exit-to-user path.
++Other than CFLAGS.ZF, this macro doesn't clobber any registers.
++
+ The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
+ (idle) transitions.
+ 
+@@ -138,17 +141,30 @@ Mitigation points
+ 
+    When transitioning from kernel to user space the CPU buffers are flushed
+    on affected CPUs when the mitigation is not disabled on the kernel
+-   command line. The migitation is enabled through the static key
+-   mds_user_clear.
+-
+-   The mitigation is invoked in prepare_exit_to_usermode() which covers
+-   all but one of the kernel to user space transitions.  The exception
+-   is when we return from a Non Maskable Interrupt (NMI), which is
+-   handled directly in do_nmi().
+-
+-   (The reason that NMI is special is that prepare_exit_to_usermode() can
+-    enable IRQs.  In NMI context, NMIs are blocked, and we don't want to
+-    enable IRQs with NMIs blocked.)
++   command line. The mitigation is enabled through the feature flag
++   X86_FEATURE_CLEAR_CPU_BUF.
++
++   The mitigation is invoked just before transitioning to userspace after
++   user registers are restored. This is done to minimize the window in
++   which kernel data could be accessed after VERW e.g. via an NMI after
++   VERW.
++
++   **Corner case not handled**
++   Interrupts returning to kernel don't clear CPUs buffers since the
++   exit-to-user path is expected to do that anyways. But, there could be
++   a case when an NMI is generated in kernel after the exit-to-user path
++   has cleared the buffers. This case is not handled and NMI returning to
++   kernel don't clear CPU buffers because:
++
++   1. It is rare to get an NMI after VERW, but before returning to userspace.
++   2. For an unprivileged user, there is no known way to make that NMI
++      less rare or target it.
++   3. It would take a large number of these precisely-timed NMIs to mount
++      an actual attack.  There's presumably not enough bandwidth.
++   4. The NMI in question occurs after a VERW, i.e. when user state is
++      restored and most interesting data is already scrubbed. Whats left
++      is only the data that NMI touches, and that may or may not be of
++      any interest.
+ 
+ 
+ 2. C-State transition
+diff --git a/Makefile b/Makefile
+index 88d4de93aed79..2af799d3ce78b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 214
++SUBLEVEL = 215
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/mmp2-brownstone.dts b/arch/arm/boot/dts/mmp2-brownstone.dts
+index 04f1ae1382e7a..bc64348b82185 100644
+--- a/arch/arm/boot/dts/mmp2-brownstone.dts
++++ b/arch/arm/boot/dts/mmp2-brownstone.dts
+@@ -28,7 +28,7 @@ &uart3 {
+ &twsi1 {
+ 	status = "okay";
+ 	pmic: max8925@3c {
+-		compatible = "maxium,max8925";
++		compatible = "maxim,max8925";
+ 		reg = <0x3c>;
+ 		interrupts = <1>;
+ 		interrupt-parent = <&intcmux4>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+index cb2c47f13a8a4..9ce8bfbf7ea21 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+@@ -810,7 +810,8 @@ bluetooth: bluetooth {
+ 		vddrf-supply = <&pp1300_l2c>;
+ 		vddch0-supply = <&pp3300_l10c>;
+ 		max-speed = <3200000>;
+-		clocks = <&rpmhcc RPMH_RF_CLK2>;
++
++		qcom,local-bd-address-broken;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index 72112fe05a5c4..10df6636a6b6c 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -732,11 +732,20 @@ hdmi: hdmi@ff3c0000 {
+ 		status = "disabled";
+ 
+ 		ports {
+-			hdmi_in: port {
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			hdmi_in: port@0 {
++				reg = <0>;
++
+ 				hdmi_in_vop: endpoint {
+ 					remote-endpoint = <&vop_out_hdmi>;
+ 				};
+ 			};
++
++			hdmi_out: port@1 {
++				reg = <1>;
++			};
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index 3180f576ed02e..e2515218ff734 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -1769,6 +1769,7 @@ simple-audio-card,codec {
+ 	hdmi: hdmi@ff940000 {
+ 		compatible = "rockchip,rk3399-dw-hdmi";
+ 		reg = <0x0 0xff940000 0x0 0x20000>;
++		reg-io-width = <4>;
+ 		interrupts = <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH 0>;
+ 		clocks = <&cru PCLK_HDMI_CTRL>,
+ 			 <&cru SCLK_HDMI_SFR>,
+@@ -1777,13 +1778,16 @@ hdmi: hdmi@ff940000 {
+ 			 <&cru PLL_VPLL>;
+ 		clock-names = "iahb", "isfr", "cec", "grf", "vpll";
+ 		power-domains = <&power RK3399_PD_HDCP>;
+-		reg-io-width = <4>;
+ 		rockchip,grf = <&grf>;
+ 		#sound-dai-cells = <0>;
+ 		status = "disabled";
+ 
+ 		ports {
+-			hdmi_in: port {
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			hdmi_in: port@0 {
++				reg = <0>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+ 
+@@ -1796,6 +1800,10 @@ hdmi_in_vopl: endpoint@1 {
+ 					remote-endpoint = <&vopl_out_hdmi>;
+ 				};
+ 			};
++
++			hdmi_out: port@1 {
++				reg = <1>;
++			};
+ 		};
+ 	};
+ 
+diff --git a/arch/hexagon/kernel/vmlinux.lds.S b/arch/hexagon/kernel/vmlinux.lds.S
+index 57465bff1fe49..df7f349c8d4f3 100644
+--- a/arch/hexagon/kernel/vmlinux.lds.S
++++ b/arch/hexagon/kernel/vmlinux.lds.S
+@@ -64,6 +64,7 @@ SECTIONS
+ 	STABS_DEBUG
+ 	DWARF_DEBUG
+ 	ELF_DETAILS
++	.hexagon.attributes 0 : { *(.hexagon.attributes) }
+ 
+ 	DISCARDS
+ }
+diff --git a/arch/parisc/include/asm/assembly.h b/arch/parisc/include/asm/assembly.h
+index a39250cb7dfcf..d3f23ed570c66 100644
+--- a/arch/parisc/include/asm/assembly.h
++++ b/arch/parisc/include/asm/assembly.h
+@@ -83,26 +83,28 @@
+ 	 * version takes two arguments: a src and destination register.
+ 	 * However, the source and destination registers can not be
+ 	 * the same register.
++	 *
++	 * We use add,l to avoid clobbering the C/B bits in the PSW.
+ 	 */
+ 
+ 	.macro  tophys  grvirt, grphys
+-	ldil    L%(__PAGE_OFFSET), \grphys
+-	sub     \grvirt, \grphys, \grphys
++	ldil    L%(-__PAGE_OFFSET), \grphys
++	addl    \grvirt, \grphys, \grphys
+ 	.endm
+-	
++
+ 	.macro  tovirt  grphys, grvirt
+ 	ldil    L%(__PAGE_OFFSET), \grvirt
+-	add     \grphys, \grvirt, \grvirt
++	addl    \grphys, \grvirt, \grvirt
+ 	.endm
+ 
+ 	.macro  tophys_r1  gr
+-	ldil    L%(__PAGE_OFFSET), %r1
+-	sub     \gr, %r1, \gr
++	ldil    L%(-__PAGE_OFFSET), %r1
++	addl    \gr, %r1, \gr
+ 	.endm
+-	
++
+ 	.macro  tovirt_r1  gr
+ 	ldil    L%(__PAGE_OFFSET), %r1
+-	add     \gr, %r1, \gr
++	addl    \gr, %r1, \gr
+ 	.endm
+ 
+ 	.macro delay value
+diff --git a/arch/parisc/include/asm/checksum.h b/arch/parisc/include/asm/checksum.h
+index 3c43baca7b397..2aceebcd695c8 100644
+--- a/arch/parisc/include/asm/checksum.h
++++ b/arch/parisc/include/asm/checksum.h
+@@ -40,7 +40,7 @@ static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
+ "	addc		%0, %5, %0\n"
+ "	addc		%0, %3, %0\n"
+ "1:	ldws,ma		4(%1), %3\n"
+-"	addib,<		0, %2, 1b\n"
++"	addib,>		-1, %2, 1b\n"
+ "	addc		%0, %3, %0\n"
+ "\n"
+ "	extru		%0, 31, 16, %4\n"
+@@ -126,6 +126,7 @@ static __inline__ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
+ 	** Try to keep 4 registers with "live" values ahead of the ALU.
+ 	*/
+ 
++"	depdi		0, 31, 32, %0\n"/* clear upper half of incoming checksum */
+ "	ldd,ma		8(%1), %4\n"	/* get 1st saddr word */
+ "	ldd,ma		8(%2), %5\n"	/* get 1st daddr word */
+ "	add		%4, %0, %0\n"
+@@ -137,8 +138,8 @@ static __inline__ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
+ "	add,dc		%3, %0, %0\n"  /* fold in proto+len | carry bit */
+ "	extrd,u		%0, 31, 32, %4\n"/* copy upper half down */
+ "	depdi		0, 31, 32, %0\n"/* clear upper half */
+-"	add		%4, %0, %0\n"	/* fold into 32-bits */
+-"	addc		0, %0, %0\n"	/* add carry */
++"	add,dc		%4, %0, %0\n"	/* fold into 32-bits, plus carry */
++"	addc		0, %0, %0\n"	/* add final carry */
+ 
+ #else
+ 
+@@ -163,7 +164,8 @@ static __inline__ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
+ "	ldw,ma		4(%2), %7\n"	/* 4th daddr */
+ "	addc		%6, %0, %0\n"
+ "	addc		%7, %0, %0\n"
+-"	addc		%3, %0, %0\n"	/* fold in proto+len, catch carry */
++"	addc		%3, %0, %0\n"	/* fold in proto+len */
++"	addc		0, %0, %0\n"	/* add carry */
+ 
+ #endif
+ 	: "=r" (sum), "=r" (saddr), "=r" (daddr), "=r" (len),
+diff --git a/arch/powerpc/include/asm/reg_fsl_emb.h b/arch/powerpc/include/asm/reg_fsl_emb.h
+index a21f529c43d96..8359c06d92d9f 100644
+--- a/arch/powerpc/include/asm/reg_fsl_emb.h
++++ b/arch/powerpc/include/asm/reg_fsl_emb.h
+@@ -12,9 +12,16 @@
+ #ifndef __ASSEMBLY__
+ /* Performance Monitor Registers */
+ #define mfpmr(rn)	({unsigned int rval; \
+-			asm volatile("mfpmr %0," __stringify(rn) \
++			asm volatile(".machine push; " \
++				     ".machine e300; " \
++				     "mfpmr %0," __stringify(rn) ";" \
++				     ".machine pop; " \
+ 				     : "=r" (rval)); rval;})
+-#define mtpmr(rn, v)	asm volatile("mtpmr " __stringify(rn) ",%0" : : "r" (v))
++#define mtpmr(rn, v)	asm volatile(".machine push; " \
++				     ".machine e300; " \
++				     "mtpmr " __stringify(rn) ",%0; " \
++				     ".machine pop; " \
++				     : : "r" (v))
+ #endif /* __ASSEMBLY__ */
+ 
+ /* Freescale Book E Performance Monitor APU Registers */
+diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
+index 321cab5c3ea02..bd5012aa94e3f 100644
+--- a/arch/powerpc/lib/Makefile
++++ b/arch/powerpc/lib/Makefile
+@@ -67,6 +67,6 @@ obj-$(CONFIG_PPC_LIB_RHEAP) += rheap.o
+ obj-$(CONFIG_FTR_FIXUP_SELFTEST) += feature-fixups-test.o
+ 
+ obj-$(CONFIG_ALTIVEC)	+= xor_vmx.o xor_vmx_glue.o
+-CFLAGS_xor_vmx.o += -maltivec $(call cc-option,-mabi=altivec)
++CFLAGS_xor_vmx.o += -mhard-float -maltivec $(call cc-option,-mabi=altivec)
+ 
+ obj-$(CONFIG_PPC64) += $(obj64-y)
+diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
+index 66af6abfe8af4..c351231b77923 100644
+--- a/arch/riscv/include/asm/uaccess.h
++++ b/arch/riscv/include/asm/uaccess.h
+@@ -468,7 +468,7 @@ unsigned long __must_check clear_user(void __user *to, unsigned long n)
+ 
+ #define __get_kernel_nofault(dst, src, type, err_label)			\
+ do {									\
+-	long __kr_err;							\
++	long __kr_err = 0;						\
+ 									\
+ 	__get_user_nocheck(*((type *)(dst)), (type *)(src), __kr_err);	\
+ 	if (unlikely(__kr_err))						\
+@@ -477,7 +477,7 @@ do {									\
+ 
+ #define __put_kernel_nofault(dst, src, type, err_label)			\
+ do {									\
+-	long __kr_err;							\
++	long __kr_err = 0;						\
+ 									\
+ 	__put_user_nocheck(*((type *)(src)), (type *)(dst), __kr_err);	\
+ 	if (unlikely(__kr_err))						\
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 88ecbcf097a36..127a8d295ae3a 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -1298,6 +1298,7 @@ ENDPROC(stack_overflow)
+ 
+ #endif
+ 	.section .rodata, "a"
++	.balign	8
+ #define SYSCALL(esame,emu)	.quad __s390x_ ## esame
+ 	.globl	sys_call_table
+ sys_call_table:
+diff --git a/arch/sparc/kernel/nmi.c b/arch/sparc/kernel/nmi.c
+index 060fff95a305c..fbf25e926f67c 100644
+--- a/arch/sparc/kernel/nmi.c
++++ b/arch/sparc/kernel/nmi.c
+@@ -274,7 +274,7 @@ static int __init setup_nmi_watchdog(char *str)
+ 	if (!strncmp(str, "panic", 5))
+ 		panic_on_timeout = 1;
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("nmi_watchdog=", setup_nmi_watchdog);
+ 
+diff --git a/arch/sparc/vdso/vma.c b/arch/sparc/vdso/vma.c
+index cc19e09b0fa1e..b073153c711ad 100644
+--- a/arch/sparc/vdso/vma.c
++++ b/arch/sparc/vdso/vma.c
+@@ -449,9 +449,8 @@ static __init int vdso_setup(char *s)
+ 	unsigned long val;
+ 
+ 	err = kstrtoul(s, 10, &val);
+-	if (err)
+-		return err;
+-	vdso_enabled = val;
+-	return 0;
++	if (!err)
++		vdso_enabled = val;
++	return 1;
+ }
+ __setup("vdso=", vdso_setup);
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 6dc670e363939..9b3fa05e46226 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -352,10 +352,6 @@ config X86_64_SMP
+ 	def_bool y
+ 	depends on X86_64 && SMP
+ 
+-config X86_32_LAZY_GS
+-	def_bool y
+-	depends on X86_32 && !STACKPROTECTOR
+-
+ config ARCH_SUPPORTS_UPROBES
+ 	def_bool y
+ 
+@@ -378,7 +374,8 @@ config CC_HAS_SANE_STACKPROTECTOR
+ 	default $(success,$(srctree)/scripts/gcc-x86_32-has-stack-protector.sh $(CC))
+ 	help
+ 	   We have to make sure stack protector is unconditionally disabled if
+-	   the compiler produces broken code.
++	   the compiler produces broken code or if it does not let us control
++	   the segment on 32-bit kernels.
+ 
+ menu "Processor type and features"
+ 
+@@ -2511,6 +2508,17 @@ config GDS_FORCE_MITIGATION
+ 
+ 	  If in doubt, say N.
+ 
++config MITIGATION_RFDS
++	bool "RFDS Mitigation"
++	depends on CPU_SUP_INTEL
++	default y
++	help
++	  Enable mitigation for Register File Data Sampling (RFDS) by default.
++	  RFDS is a hardware vulnerability which affects Intel Atom CPUs. It
++	  allows unprivileged speculative access to stale data previously
++	  stored in floating point, vector and integer registers.
++	  See also <file:Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst>
++
+ endif
+ 
+ config ARCH_HAS_ADD_PAGES
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 1f796050c6dde..8b9fa777f513b 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -87,6 +87,14 @@ ifeq ($(CONFIG_X86_32),y)
+ 
+         # temporary until string.h is fixed
+         KBUILD_CFLAGS += -ffreestanding
++
++	ifeq ($(CONFIG_STACKPROTECTOR),y)
++		ifeq ($(CONFIG_SMP),y)
++			KBUILD_CFLAGS += -mstack-protector-guard-reg=fs -mstack-protector-guard-symbol=__stack_chk_guard
++		else
++			KBUILD_CFLAGS += -mstack-protector-guard=global
++		endif
++	endif
+ else
+         BITS := 64
+         UTS_MACHINE := x86_64
+diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
+index bfb7bcb362bcf..09e99d13fc0b3 100644
+--- a/arch/x86/entry/entry.S
++++ b/arch/x86/entry/entry.S
+@@ -6,6 +6,9 @@
+ #include <linux/linkage.h>
+ #include <asm/export.h>
+ #include <asm/msr-index.h>
++#include <asm/unwind_hints.h>
++#include <asm/segment.h>
++#include <asm/cache.h>
+ 
+ .pushsection .noinstr.text, "ax"
+ 
+@@ -20,3 +23,23 @@ SYM_FUNC_END(entry_ibpb)
+ EXPORT_SYMBOL_GPL(entry_ibpb);
+ 
+ .popsection
++
++/*
++ * Define the VERW operand that is disguised as entry code so that
++ * it can be referenced with KPTI enabled. This ensure VERW can be
++ * used late in exit-to-user path after page tables are switched.
++ */
++.pushsection .entry.text, "ax"
++
++.align L1_CACHE_BYTES, 0xcc
++SYM_CODE_START_NOALIGN(mds_verw_sel)
++	UNWIND_HINT_EMPTY
++	ANNOTATE_NOENDBR
++	.word __KERNEL_DS
++.align L1_CACHE_BYTES, 0xcc
++SYM_CODE_END(mds_verw_sel);
++/* For KVM */
++EXPORT_SYMBOL_GPL(mds_verw_sel);
++
++.popsection
++
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index 70bd81b6c612e..97d422f31c77e 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -20,7 +20,7 @@
+  *	1C(%esp) - %ds
+  *	20(%esp) - %es
+  *	24(%esp) - %fs
+- *	28(%esp) - %gs		saved iff !CONFIG_X86_32_LAZY_GS
++ *	28(%esp) - unused -- was %gs on old stackprotector kernels
+  *	2C(%esp) - orig_eax
+  *	30(%esp) - %eip
+  *	34(%esp) - %cs
+@@ -56,14 +56,9 @@
+ /*
+  * User gs save/restore
+  *
+- * %gs is used for userland TLS and kernel only uses it for stack
+- * canary which is required to be at %gs:20 by gcc.  Read the comment
+- * at the top of stackprotector.h for more info.
+- *
+- * Local labels 98 and 99 are used.
++ * This is leftover junk from CONFIG_X86_32_LAZY_GS.  A subsequent patch
++ * will remove it entirely.
+  */
+-#ifdef CONFIG_X86_32_LAZY_GS
+-
+  /* unfortunately push/pop can't be no-op */
+ .macro PUSH_GS
+ 	pushl	$0
+@@ -86,49 +81,6 @@
+ .macro SET_KERNEL_GS reg
+ .endm
+ 
+-#else	/* CONFIG_X86_32_LAZY_GS */
+-
+-.macro PUSH_GS
+-	pushl	%gs
+-.endm
+-
+-.macro POP_GS pop=0
+-98:	popl	%gs
+-  .if \pop <> 0
+-	add	$\pop, %esp
+-  .endif
+-.endm
+-.macro POP_GS_EX
+-.pushsection .fixup, "ax"
+-99:	movl	$0, (%esp)
+-	jmp	98b
+-.popsection
+-	_ASM_EXTABLE(98b, 99b)
+-.endm
+-
+-.macro PTGS_TO_GS
+-98:	mov	PT_GS(%esp), %gs
+-.endm
+-.macro PTGS_TO_GS_EX
+-.pushsection .fixup, "ax"
+-99:	movl	$0, PT_GS(%esp)
+-	jmp	98b
+-.popsection
+-	_ASM_EXTABLE(98b, 99b)
+-.endm
+-
+-.macro GS_TO_REG reg
+-	movl	%gs, \reg
+-.endm
+-.macro REG_TO_PTGS reg
+-	movl	\reg, PT_GS(%esp)
+-.endm
+-.macro SET_KERNEL_GS reg
+-	movl	$(__KERNEL_STACK_CANARY), \reg
+-	movl	\reg, %gs
+-.endm
+-
+-#endif /* CONFIG_X86_32_LAZY_GS */
+ 
+ /* Unconditionally switch to user cr3 */
+ .macro SWITCH_TO_USER_CR3 scratch_reg:req
+@@ -779,7 +731,7 @@ SYM_CODE_START(__switch_to_asm)
+ 
+ #ifdef CONFIG_STACKPROTECTOR
+ 	movl	TASK_stack_canary(%edx), %ebx
+-	movl	%ebx, PER_CPU_VAR(stack_canary)+stack_canary_offset
++	movl	%ebx, PER_CPU_VAR(__stack_chk_guard)
+ #endif
+ 
+ 	/*
+@@ -997,6 +949,7 @@ SYM_FUNC_START(entry_SYSENTER_32)
+ 	BUG_IF_WRONG_CR3 no_user_check=1
+ 	popfl
+ 	popl	%eax
++	CLEAR_CPU_BUFFERS
+ 
+ 	/*
+ 	 * Return back to the vDSO, which will pop ecx and edx.
+@@ -1069,6 +1022,7 @@ restore_all_switch_stack:
+ 
+ 	/* Restore user state */
+ 	RESTORE_REGS pop=4			# skip orig_eax/error_code
++	CLEAR_CPU_BUFFERS
+ .Lirq_return:
+ 	/*
+ 	 * ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
+@@ -1267,6 +1221,7 @@ SYM_CODE_START(asm_exc_nmi)
+ 
+ 	/* Not on SYSENTER stack. */
+ 	call	exc_nmi
++	CLEAR_CPU_BUFFERS
+ 	jmp	.Lnmi_return
+ 
+ .Lnmi_from_sysenter_stack:
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 23212c53cef7f..1631a9a1566e3 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -615,6 +615,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
+ 	/* Restore RDI. */
+ 	popq	%rdi
+ 	SWAPGS
++	CLEAR_CPU_BUFFERS
+ 	INTERRUPT_RETURN
+ 
+ 
+@@ -721,6 +722,8 @@ native_irq_return_ldt:
+ 	 */
+ 	popq	%rax				/* Restore user RAX */
+ 
++	CLEAR_CPU_BUFFERS
++
+ 	/*
+ 	 * RSP now points to an ordinary IRET frame, except that the page
+ 	 * is read-only and RSP[31:16] are preloaded with the userspace
+@@ -1487,6 +1490,12 @@ nmi_restore:
+ 	std
+ 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
+ 
++	/*
++	 * Skip CLEAR_CPU_BUFFERS here, since it only helps in rare cases like
++	 * NMI in kernel after user state is restored. For an unprivileged user
++	 * these conditions are hard to meet.
++	 */
++
+ 	/*
+ 	 * iretq reads the "iret" frame and exits the NMI stack in a
+ 	 * single instruction.  We are returning to kernel mode, so this
+@@ -1504,6 +1513,7 @@ SYM_CODE_END(asm_exc_nmi)
+ SYM_CODE_START(ignore_sysret)
+ 	UNWIND_HINT_EMPTY
+ 	mov	$-ENOSYS, %eax
++	CLEAR_CPU_BUFFERS
+ 	sysretl
+ SYM_CODE_END(ignore_sysret)
+ #endif
+diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
+index 4d637a965efbe..7f09e7ad3c74d 100644
+--- a/arch/x86/entry/entry_64_compat.S
++++ b/arch/x86/entry/entry_64_compat.S
+@@ -319,6 +319,7 @@ sysret32_from_system_call:
+ 	xorl	%r9d, %r9d
+ 	xorl	%r10d, %r10d
+ 	swapgs
++	CLEAR_CPU_BUFFERS
+ 	sysretl
+ SYM_CODE_END(entry_SYSCALL_compat)
+ 
+diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h
+index 8f80de627c60a..5cdccea455544 100644
+--- a/arch/x86/include/asm/asm-prototypes.h
++++ b/arch/x86/include/asm/asm-prototypes.h
+@@ -12,6 +12,7 @@
+ #include <asm/special_insns.h>
+ #include <asm/preempt.h>
+ #include <asm/asm.h>
++#include <asm/nospec-branch.h>
+ 
+ #ifndef CONFIG_X86_CMPXCHG64
+ extern void cmpxchg8b_emu(void);
+diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
+index 0603c7423aca2..c01005d7a4ede 100644
+--- a/arch/x86/include/asm/asm.h
++++ b/arch/x86/include/asm/asm.h
+@@ -6,12 +6,14 @@
+ # define __ASM_FORM(x)	x
+ # define __ASM_FORM_RAW(x)     x
+ # define __ASM_FORM_COMMA(x) x,
++# define __ASM_REGPFX			%
+ #else
+ #include <linux/stringify.h>
+ 
+ # define __ASM_FORM(x)	" " __stringify(x) " "
+ # define __ASM_FORM_RAW(x)     __stringify(x)
+ # define __ASM_FORM_COMMA(x) " " __stringify(x) ","
++# define __ASM_REGPFX			%%
+ #endif
+ 
+ #ifndef __x86_64__
+@@ -48,6 +50,9 @@
+ #define _ASM_SI		__ASM_REG(si)
+ #define _ASM_DI		__ASM_REG(di)
+ 
++/* Adds a (%rip) suffix on 64 bits only; for immediate memory references */
++#define _ASM_RIP(x)	__ASM_SEL_RAW(x, x (__ASM_REGPFX rip))
++
+ #ifndef __x86_64__
+ /* 32 bit */
+ 
+diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
+index cc3f62f5d5515..955ca6b13e35f 100644
+--- a/arch/x86/include/asm/cpufeature.h
++++ b/arch/x86/include/asm/cpufeature.h
+@@ -33,6 +33,8 @@ enum cpuid_leafs
+ 	CPUID_7_EDX,
+ 	CPUID_8000_001F_EAX,
+ 	CPUID_8000_0021_EAX,
++	CPUID_LNX_5,
++	NR_CPUID_WORDS,
+ };
+ 
+ #ifdef CONFIG_X86_FEATURE_NAMES
+@@ -93,8 +95,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
+ 	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 18, feature_bit) ||	\
+ 	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 19, feature_bit) ||	\
+ 	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 20, feature_bit) ||	\
++	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 21, feature_bit) ||	\
+ 	   REQUIRED_MASK_CHECK					  ||	\
+-	   BUILD_BUG_ON_ZERO(NCAPINTS != 21))
++	   BUILD_BUG_ON_ZERO(NCAPINTS != 22))
+ 
+ #define DISABLED_MASK_BIT_SET(feature_bit)				\
+ 	 ( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK,  0, feature_bit) ||	\
+@@ -118,8 +121,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
+ 	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 18, feature_bit) ||	\
+ 	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 19, feature_bit) ||	\
+ 	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 20, feature_bit) ||	\
++	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 21, feature_bit) ||	\
+ 	   DISABLED_MASK_CHECK					  ||	\
+-	   BUILD_BUG_ON_ZERO(NCAPINTS != 21))
++	   BUILD_BUG_ON_ZERO(NCAPINTS != 22))
+ 
+ #define cpu_has(c, bit)							\
+ 	(__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 :	\
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 5a54c3685a066..e1bc2bad8cff8 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -13,7 +13,7 @@
+ /*
+  * Defines x86 CPU feature bits
+  */
+-#define NCAPINTS			21	   /* N 32-bit words worth of info */
++#define NCAPINTS			22	   /* N 32-bit words worth of info */
+ #define NBUGINTS			2	   /* N 32-bit bug flags */
+ 
+ /*
+@@ -300,6 +300,7 @@
+ #define X86_FEATURE_USE_IBPB_FW		(11*32+16) /* "" Use IBPB during runtime firmware calls */
+ #define X86_FEATURE_RSB_VMEXIT_LITE	(11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */
+ #define X86_FEATURE_MSR_TSX_CTRL	(11*32+18) /* "" MSR IA32_TSX_CTRL (Intel) implemented */
++#define X86_FEATURE_CLEAR_CPU_BUF	(11*32+19) /* "" Clear CPU buffers using VERW */
+ 
+ #define X86_FEATURE_SRSO		(11*32+24) /* "" AMD BTB untrain RETs */
+ #define X86_FEATURE_SRSO_ALIAS		(11*32+25) /* "" AMD BTB untrain RETs through aliasing */
+@@ -403,6 +404,7 @@
+ #define X86_FEATURE_SEV_ES		(19*32+ 3) /* AMD Secure Encrypted Virtualization - Encrypted State */
+ #define X86_FEATURE_SME_COHERENT	(19*32+10) /* "" AMD hardware-enforced cache coherency */
+ 
++#define X86_FEATURE_AUTOIBRS		(20*32+ 8) /* "" Automatic IBRS */
+ #define X86_FEATURE_SBPB		(20*32+27) /* "" Selective Branch Prediction Barrier */
+ #define X86_FEATURE_IBPB_BRTYPE		(20*32+28) /* "" MSR_PRED_CMD[IBPB] flushes all branch type predictions */
+ #define X86_FEATURE_SRSO_NO		(20*32+29) /* "" CPU is not affected by SRSO */
+@@ -452,4 +454,5 @@
+ /* BUG word 2 */
+ #define X86_BUG_SRSO			X86_BUG(1*32 + 0) /* AMD SRSO bug */
+ #define X86_BUG_DIV0			X86_BUG(1*32 + 1) /* AMD DIV0 speculation bug */
++#define X86_BUG_RFDS			X86_BUG(1*32 + 2) /* CPU is vulnerable to Register File Data Sampling */
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
+index 3c24378e67bea..e5f44a3e275c1 100644
+--- a/arch/x86/include/asm/disabled-features.h
++++ b/arch/x86/include/asm/disabled-features.h
+@@ -103,6 +103,7 @@
+ #define DISABLED_MASK18	0
+ #define DISABLED_MASK19	0
+ #define DISABLED_MASK20	0
+-#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)
++#define DISABLED_MASK21	0
++#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 22)
+ 
+ #endif /* _ASM_X86_DISABLED_FEATURES_H */
+diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
+index 5443851d3aa60..264ab414e9f63 100644
+--- a/arch/x86/include/asm/entry-common.h
++++ b/arch/x86/include/asm/entry-common.h
+@@ -77,7 +77,6 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
+ 
+ static __always_inline void arch_exit_to_user_mode(void)
+ {
+-	mds_user_clear_cpu_buffers();
+ 	amd_clear_divider();
+ }
+ #define arch_exit_to_user_mode arch_exit_to_user_mode
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index 8c86edefa1150..f40dea50dfbf3 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -134,6 +134,7 @@ static __always_inline unsigned long arch_local_irq_save(void)
+ #define INTERRUPT_RETURN	jmp native_iret
+ #define USERGS_SYSRET64				\
+ 	swapgs;					\
++	CLEAR_CPU_BUFFERS;			\
+ 	sysretq;
+ #define USERGS_SYSRET32				\
+ 	swapgs;					\
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 7d7a3cbb8e017..52a6d43ed2f94 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -30,6 +30,7 @@
+ #define _EFER_SVME		12 /* Enable virtualization */
+ #define _EFER_LMSLE		13 /* Long Mode Segment Limit Enable */
+ #define _EFER_FFXSR		14 /* Enable Fast FXSAVE/FXRSTOR */
++#define _EFER_AUTOIBRS		21 /* Enable Automatic IBRS */
+ 
+ #define EFER_SCE		(1<<_EFER_SCE)
+ #define EFER_LME		(1<<_EFER_LME)
+@@ -38,6 +39,7 @@
+ #define EFER_SVME		(1<<_EFER_SVME)
+ #define EFER_LMSLE		(1<<_EFER_LMSLE)
+ #define EFER_FFXSR		(1<<_EFER_FFXSR)
++#define EFER_AUTOIBRS		(1<<_EFER_AUTOIBRS)
+ 
+ /* Intel MSRs. Some also available on other CPUs */
+ 
+@@ -166,6 +168,14 @@
+ 						 * CPU is not vulnerable to Gather
+ 						 * Data Sampling (GDS).
+ 						 */
++#define ARCH_CAP_RFDS_NO		BIT(27)	/*
++						 * Not susceptible to Register
++						 * File Data Sampling.
++						 */
++#define ARCH_CAP_RFDS_CLEAR		BIT(28)	/*
++						 * VERW clears CPU Register
++						 * File.
++						 */
+ 
+ #define MSR_IA32_FLUSH_CMD		0x0000010b
+ #define L1D_FLUSH			BIT(0)	/*
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 7711ba5342a1a..87e1ff0640259 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -155,11 +155,20 @@
+ .Lskip_rsb_\@:
+ .endm
+ 
++/*
++ * The CALL to srso_alias_untrain_ret() must be patched in directly at
++ * the spot where untraining must be done, ie., srso_alias_untrain_ret()
++ * must be the target of a CALL instruction instead of indirectly
++ * jumping to a wrapper which then calls it. Therefore, this macro is
++ * called outside of __UNTRAIN_RET below, for the time being, before the
++ * kernel can support nested alternatives with arbitrary nesting.
++ */
++.macro CALL_UNTRAIN_RET
+ #ifdef CONFIG_CPU_UNRET_ENTRY
+-#define CALL_UNTRAIN_RET	"call entry_untrain_ret"
+-#else
+-#define CALL_UNTRAIN_RET	""
++	ALTERNATIVE_2 "", "call entry_untrain_ret", X86_FEATURE_UNRET, \
++		          "call srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
+ #endif
++.endm
+ 
+ /*
+  * Mitigate RETBleed for AMD/Hygon Zen uarch. Requires KERNEL CR3 because the
+@@ -176,12 +185,24 @@
+ #if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
+ 	defined(CONFIG_CPU_SRSO)
+ 	ANNOTATE_UNRET_END
+-	ALTERNATIVE_2 "",						\
+-		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
+-		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB
++	CALL_UNTRAIN_RET
++	ALTERNATIVE "", "call entry_ibpb", X86_FEATURE_ENTRY_IBPB
+ #endif
+ .endm
+ 
++/*
++ * Macro to execute VERW instruction that mitigate transient data sampling
++ * attacks such as MDS. On affected systems a microcode update overloaded VERW
++ * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
++ *
++ * Note: Only the memory operand variant of VERW clears the CPU buffers.
++ */
++.macro CLEAR_CPU_BUFFERS
++	ALTERNATIVE "jmp .Lskip_verw_\@", "", X86_FEATURE_CLEAR_CPU_BUF
++	verw _ASM_RIP(mds_verw_sel)
++.Lskip_verw_\@:
++.endm
++
+ #else /* __ASSEMBLY__ */
+ 
+ #define ANNOTATE_RETPOLINE_SAFE					\
+@@ -357,11 +378,12 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+ DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+ 
+-DECLARE_STATIC_KEY_FALSE(mds_user_clear);
+ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
+ 
+ DECLARE_STATIC_KEY_FALSE(mmio_stale_data_clear);
+ 
++extern u16 mds_verw_sel;
++
+ #include <asm/segment.h>
+ 
+ /**
+@@ -387,17 +409,6 @@ static __always_inline void mds_clear_cpu_buffers(void)
+ 	asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
+ }
+ 
+-/**
+- * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
+- *
+- * Clear CPU buffers if the corresponding static key is enabled
+- */
+-static __always_inline void mds_user_clear_cpu_buffers(void)
+-{
+-	if (static_branch_likely(&mds_user_clear))
+-		mds_clear_cpu_buffers();
+-}
+-
+ /**
+  * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
+  *
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index d7e017b0b4c3b..6dc3c5f0be076 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -441,6 +441,9 @@ struct fixed_percpu_data {
+ 	 * GCC hardcodes the stack canary as %gs:40.  Since the
+ 	 * irq_stack is the object at %gs:0, we reserve the bottom
+ 	 * 48 bytes of the irq stack for the canary.
++	 *
++	 * Once we are willing to require -mstack-protector-guard-symbol=
++	 * support for x86_64 stackprotector, we can get rid of this.
+ 	 */
+ 	char		gs_base[40];
+ 	unsigned long	stack_canary;
+@@ -461,17 +464,7 @@ extern asmlinkage void ignore_sysret(void);
+ void current_save_fsgs(void);
+ #else	/* X86_64 */
+ #ifdef CONFIG_STACKPROTECTOR
+-/*
+- * Make sure stack canary segment base is cached-aligned:
+- *   "For Intel Atom processors, avoid non zero segment base address
+- *    that is not aligned to cache line boundary at all cost."
+- * (Optim Ref Manual Assembly/Compiler Coding Rule 15.)
+- */
+-struct stack_canary {
+-	char __pad[20];		/* canary at %gs:20 */
+-	unsigned long canary;
+-};
+-DECLARE_PER_CPU_ALIGNED(struct stack_canary, stack_canary);
++DECLARE_PER_CPU(unsigned long, __stack_chk_guard);
+ #endif
+ /* Per CPU softirq stack pointer */
+ DECLARE_PER_CPU(struct irq_stack *, softirq_stack_ptr);
+diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
+index 409f661481e11..b94f615600d57 100644
+--- a/arch/x86/include/asm/ptrace.h
++++ b/arch/x86/include/asm/ptrace.h
+@@ -37,7 +37,10 @@ struct pt_regs {
+ 	unsigned short __esh;
+ 	unsigned short fs;
+ 	unsigned short __fsh;
+-	/* On interrupt, gs and __gsh store the vector number. */
++	/*
++	 * On interrupt, gs and __gsh store the vector number.  They never
++	 * store gs any more.
++	 */
+ 	unsigned short gs;
+ 	unsigned short __gsh;
+ 	/* On interrupt, this is the error code. */
+diff --git a/arch/x86/include/asm/required-features.h b/arch/x86/include/asm/required-features.h
+index 9bf60a8b9e9c2..1fbe53583e952 100644
+--- a/arch/x86/include/asm/required-features.h
++++ b/arch/x86/include/asm/required-features.h
+@@ -103,6 +103,7 @@
+ #define REQUIRED_MASK18	0
+ #define REQUIRED_MASK19	0
+ #define REQUIRED_MASK20	0
+-#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)
++#define REQUIRED_MASK21	0
++#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 22)
+ 
+ #endif /* _ASM_X86_REQUIRED_FEATURES_H */
+diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
+index 7fdd4facfce71..72044026eb3c2 100644
+--- a/arch/x86/include/asm/segment.h
++++ b/arch/x86/include/asm/segment.h
+@@ -95,7 +95,7 @@
+  *
+  *  26 - ESPFIX small SS
+  *  27 - per-cpu			[ offset to per-cpu data area ]
+- *  28 - stack_canary-20		[ for stack protector ]		<=== cacheline #8
++ *  28 - unused
+  *  29 - unused
+  *  30 - unused
+  *  31 - TSS for double fault handler
+@@ -118,7 +118,6 @@
+ 
+ #define GDT_ENTRY_ESPFIX_SS		26
+ #define GDT_ENTRY_PERCPU		27
+-#define GDT_ENTRY_STACK_CANARY		28
+ 
+ #define GDT_ENTRY_DOUBLEFAULT_TSS	31
+ 
+@@ -158,12 +157,6 @@
+ # define __KERNEL_PERCPU		0
+ #endif
+ 
+-#ifdef CONFIG_STACKPROTECTOR
+-# define __KERNEL_STACK_CANARY		(GDT_ENTRY_STACK_CANARY*8)
+-#else
+-# define __KERNEL_STACK_CANARY		0
+-#endif
+-
+ #else /* 64-bit: */
+ 
+ #include <asm/cache.h>
+@@ -364,22 +357,15 @@ static inline void __loadsegment_fs(unsigned short value)
+ 	asm("mov %%" #seg ",%0":"=r" (value) : : "memory")
+ 
+ /*
+- * x86-32 user GS accessors:
++ * x86-32 user GS accessors.  This is ugly and could do with some cleaning up.
+  */
+ #ifdef CONFIG_X86_32
+-# ifdef CONFIG_X86_32_LAZY_GS
+-#  define get_user_gs(regs)		(u16)({ unsigned long v; savesegment(gs, v); v; })
+-#  define set_user_gs(regs, v)		loadsegment(gs, (unsigned long)(v))
+-#  define task_user_gs(tsk)		((tsk)->thread.gs)
+-#  define lazy_save_gs(v)		savesegment(gs, (v))
+-#  define lazy_load_gs(v)		loadsegment(gs, (v))
+-# else	/* X86_32_LAZY_GS */
+-#  define get_user_gs(regs)		(u16)((regs)->gs)
+-#  define set_user_gs(regs, v)		do { (regs)->gs = (v); } while (0)
+-#  define task_user_gs(tsk)		(task_pt_regs(tsk)->gs)
+-#  define lazy_save_gs(v)		do { } while (0)
+-#  define lazy_load_gs(v)		do { } while (0)
+-# endif	/* X86_32_LAZY_GS */
++# define get_user_gs(regs)		(u16)({ unsigned long v; savesegment(gs, v); v; })
++# define set_user_gs(regs, v)		loadsegment(gs, (unsigned long)(v))
++# define task_user_gs(tsk)		((tsk)->thread.gs)
++# define lazy_save_gs(v)		savesegment(gs, (v))
++# define lazy_load_gs(v)		loadsegment(gs, (v))
++# define load_gs_index(v)		loadsegment(gs, (v))
+ #endif	/* X86_32 */
+ 
+ #endif /* !__ASSEMBLY__ */
+diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
+index 4e1757bf66a89..d65bfc293a48d 100644
+--- a/arch/x86/include/asm/setup.h
++++ b/arch/x86/include/asm/setup.h
+@@ -49,7 +49,6 @@ extern unsigned long saved_video_mode;
+ extern void reserve_standard_io_resources(void);
+ extern void i386_reserve_resources(void);
+ extern unsigned long __startup_64(unsigned long physaddr, struct boot_params *bp);
+-extern unsigned long __startup_secondary_64(void);
+ extern void startup_64_setup_env(unsigned long physbase);
+ extern void early_setup_idt(void);
+ extern void __init do_early_exception(struct pt_regs *regs, int trapnr);
+diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h
+index 7fb482f0f25b0..b6ffe58c70fab 100644
+--- a/arch/x86/include/asm/stackprotector.h
++++ b/arch/x86/include/asm/stackprotector.h
+@@ -5,30 +5,23 @@
+  * Stack protector works by putting predefined pattern at the start of
+  * the stack frame and verifying that it hasn't been overwritten when
+  * returning from the function.  The pattern is called stack canary
+- * and unfortunately gcc requires it to be at a fixed offset from %gs.
+- * On x86_64, the offset is 40 bytes and on x86_32 20 bytes.  x86_64
+- * and x86_32 use segment registers differently and thus handles this
+- * requirement differently.
++ * and unfortunately gcc historically required it to be at a fixed offset
++ * from the percpu segment base.  On x86_64, the offset is 40 bytes.
+  *
+- * On x86_64, %gs is shared by percpu area and stack canary.  All
+- * percpu symbols are zero based and %gs points to the base of percpu
+- * area.  The first occupant of the percpu area is always
+- * fixed_percpu_data which contains stack_canary at offset 40.  Userland
+- * %gs is always saved and restored on kernel entry and exit using
+- * swapgs, so stack protector doesn't add any complexity there.
++ * The same segment is shared by percpu area and stack canary.  On
++ * x86_64, percpu symbols are zero based and %gs (64-bit) points to the
++ * base of percpu area.  The first occupant of the percpu area is always
++ * fixed_percpu_data which contains stack_canary at the approproate
++ * offset.  On x86_32, the stack canary is just a regular percpu
++ * variable.
+  *
+- * On x86_32, it's slightly more complicated.  As in x86_64, %gs is
+- * used for userland TLS.  Unfortunately, some processors are much
+- * slower at loading segment registers with different value when
+- * entering and leaving the kernel, so the kernel uses %fs for percpu
+- * area and manages %gs lazily so that %gs is switched only when
+- * necessary, usually during task switch.
++ * Putting percpu data in %fs on 32-bit is a minor optimization compared to
++ * using %gs.  Since 32-bit userspace normally has %fs == 0, we are likely
++ * to load 0 into %fs on exit to usermode, whereas with percpu data in
++ * %gs, we are likely to load a non-null %gs on return to user mode.
+  *
+- * As gcc requires the stack canary at %gs:20, %gs can't be managed
+- * lazily if stack protector is enabled, so the kernel saves and
+- * restores userland %gs on kernel entry and exit.  This behavior is
+- * controlled by CONFIG_X86_32_LAZY_GS and accessors are defined in
+- * system.h to hide the details.
++ * Once we are willing to require GCC 8.1 or better for 64-bit stackprotector
++ * support, we can remove some of this complexity.
+  */
+ 
+ #ifndef _ASM_STACKPROTECTOR_H
+@@ -44,14 +37,6 @@
+ #include <linux/random.h>
+ #include <linux/sched.h>
+ 
+-/*
+- * 24 byte read-only segment initializer for stack canary.  Linker
+- * can't handle the address bit shifting.  Address will be set in
+- * head_32 for boot CPU and setup_per_cpu_areas() for others.
+- */
+-#define GDT_STACK_CANARY_INIT						\
+-	[GDT_ENTRY_STACK_CANARY] = GDT_ENTRY_INIT(0x4090, 0, 0x18),
+-
+ /*
+  * Initialize the stackprotector canary value.
+  *
+@@ -86,7 +71,7 @@ static __always_inline void boot_init_stack_canary(void)
+ #ifdef CONFIG_X86_64
+ 	this_cpu_write(fixed_percpu_data.stack_canary, canary);
+ #else
+-	this_cpu_write(stack_canary.canary, canary);
++	this_cpu_write(__stack_chk_guard, canary);
+ #endif
+ }
+ 
+@@ -95,48 +80,16 @@ static inline void cpu_init_stack_canary(int cpu, struct task_struct *idle)
+ #ifdef CONFIG_X86_64
+ 	per_cpu(fixed_percpu_data.stack_canary, cpu) = idle->stack_canary;
+ #else
+-	per_cpu(stack_canary.canary, cpu) = idle->stack_canary;
+-#endif
+-}
+-
+-static inline void setup_stack_canary_segment(int cpu)
+-{
+-#ifdef CONFIG_X86_32
+-	unsigned long canary = (unsigned long)&per_cpu(stack_canary, cpu);
+-	struct desc_struct *gdt_table = get_cpu_gdt_rw(cpu);
+-	struct desc_struct desc;
+-
+-	desc = gdt_table[GDT_ENTRY_STACK_CANARY];
+-	set_desc_base(&desc, canary);
+-	write_gdt_entry(gdt_table, GDT_ENTRY_STACK_CANARY, &desc, DESCTYPE_S);
+-#endif
+-}
+-
+-static inline void load_stack_canary_segment(void)
+-{
+-#ifdef CONFIG_X86_32
+-	asm("mov %0, %%gs" : : "r" (__KERNEL_STACK_CANARY) : "memory");
++	per_cpu(__stack_chk_guard, cpu) = idle->stack_canary;
+ #endif
+ }
+ 
+ #else	/* STACKPROTECTOR */
+ 
+-#define GDT_STACK_CANARY_INIT
+-
+ /* dummy boot_init_stack_canary() is defined in linux/stackprotector.h */
+ 
+-static inline void setup_stack_canary_segment(int cpu)
+-{ }
+-
+ static inline void cpu_init_stack_canary(int cpu, struct task_struct *idle)
+ { }
+ 
+-static inline void load_stack_canary_segment(void)
+-{
+-#ifdef CONFIG_X86_32
+-	asm volatile ("mov %0, %%gs" : : "r" (0));
+-#endif
+-}
+-
+ #endif	/* STACKPROTECTOR */
+ #endif	/* _ASM_STACKPROTECTOR_H */
+diff --git a/arch/x86/include/asm/suspend_32.h b/arch/x86/include/asm/suspend_32.h
+index 3b97aa9215430..d8416b3bf832e 100644
+--- a/arch/x86/include/asm/suspend_32.h
++++ b/arch/x86/include/asm/suspend_32.h
+@@ -12,13 +12,6 @@
+ 
+ /* image of the saved processor state */
+ struct saved_context {
+-	/*
+-	 * On x86_32, all segment registers, with the possible exception of
+-	 * gs, are saved at kernel entry in pt_regs.
+-	 */
+-#ifdef CONFIG_X86_32_LAZY_GS
+-	u16 gs;
+-#endif
+ 	unsigned long cr0, cr2, cr3, cr4;
+ 	u64 misc_enable;
+ 	struct saved_msrs saved_msrs;
+@@ -29,6 +22,11 @@ struct saved_context {
+ 	unsigned long tr;
+ 	unsigned long safety;
+ 	unsigned long return_address;
++	/*
++	 * On x86_32, all segment registers except gs are saved at kernel
++	 * entry in pt_regs.
++	 */
++	u16 gs;
+ 	bool misc_enable_saved;
+ } __attribute__((packed));
+ 
+diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
+index c06f3a961d647..fd5a2a53f41fe 100644
+--- a/arch/x86/kernel/Makefile
++++ b/arch/x86/kernel/Makefile
+@@ -49,7 +49,6 @@ endif
+ # non-deterministic coverage.
+ KCOV_INSTRUMENT		:= n
+ 
+-CFLAGS_head$(BITS).o	+= -fno-stack-protector
+ CFLAGS_cc_platform.o	+= -fno-stack-protector
+ 
+ CFLAGS_irq.o := -I $(srctree)/$(src)/../include/asm/trace
+diff --git a/arch/x86/kernel/asm-offsets_32.c b/arch/x86/kernel/asm-offsets_32.c
+index 6e043f295a605..2b411cd00a4e2 100644
+--- a/arch/x86/kernel/asm-offsets_32.c
++++ b/arch/x86/kernel/asm-offsets_32.c
+@@ -53,11 +53,6 @@ void foo(void)
+ 	       offsetof(struct cpu_entry_area, tss.x86_tss.sp1) -
+ 	       offsetofend(struct cpu_entry_area, entry_stack_page.stack));
+ 
+-#ifdef CONFIG_STACKPROTECTOR
+-	BLANK();
+-	OFFSET(stack_canary_offset, stack_canary, canary);
+-#endif
+-
+ 	BLANK();
+ 	DEFINE(EFI_svam, offsetof(efi_runtime_services_t, set_virtual_address_map));
+ }
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index f29c6bed9d657..3b02cb8b05338 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1049,11 +1049,11 @@ static bool cpu_has_zenbleed_microcode(void)
+ 	u32 good_rev = 0;
+ 
+ 	switch (boot_cpu_data.x86_model) {
+-	case 0x30 ... 0x3f: good_rev = 0x0830107a; break;
+-	case 0x60 ... 0x67: good_rev = 0x0860010b; break;
+-	case 0x68 ... 0x6f: good_rev = 0x08608105; break;
+-	case 0x70 ... 0x7f: good_rev = 0x08701032; break;
+-	case 0xa0 ... 0xaf: good_rev = 0x08a00008; break;
++	case 0x30 ... 0x3f: good_rev = 0x0830107b; break;
++	case 0x60 ... 0x67: good_rev = 0x0860010c; break;
++	case 0x68 ... 0x6f: good_rev = 0x08608107; break;
++	case 0x70 ... 0x7f: good_rev = 0x08701033; break;
++	case 0xa0 ... 0xaf: good_rev = 0x08a00009; break;
+ 
+ 	default:
+ 		return false;
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index d9fda0b6eb19e..d6e14190cf80d 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -109,9 +109,6 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+ /* Control unconditional IBPB in switch_mm() */
+ DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+ 
+-/* Control MDS CPU buffer clear before returning to user space */
+-DEFINE_STATIC_KEY_FALSE(mds_user_clear);
+-EXPORT_SYMBOL_GPL(mds_user_clear);
+ /* Control MDS CPU buffer clear before idling (halt, mwait) */
+ DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
+ EXPORT_SYMBOL_GPL(mds_idle_clear);
+@@ -249,7 +246,7 @@ static void __init mds_select_mitigation(void)
+ 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
+ 			mds_mitigation = MDS_MITIGATION_VMWERV;
+ 
+-		static_branch_enable(&mds_user_clear);
++		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+ 
+ 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
+ 		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
+@@ -353,7 +350,7 @@ static void __init taa_select_mitigation(void)
+ 	 * For guests that can't determine whether the correct microcode is
+ 	 * present on host, enable the mitigation for UCODE_NEEDED as well.
+ 	 */
+-	static_branch_enable(&mds_user_clear);
++	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+ 
+ 	if (taa_nosmt || cpu_mitigations_auto_nosmt())
+ 		cpu_smt_disable(false);
+@@ -421,7 +418,14 @@ static void __init mmio_select_mitigation(void)
+ 	 */
+ 	if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
+ 					      boot_cpu_has(X86_FEATURE_RTM)))
+-		static_branch_enable(&mds_user_clear);
++		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
++
++	/*
++	 * X86_FEATURE_CLEAR_CPU_BUF could be enabled by other VERW based
++	 * mitigations, disable KVM-only mitigation in that case.
++	 */
++	if (boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
++		static_branch_disable(&mmio_stale_data_clear);
+ 	else
+ 		static_branch_enable(&mmio_stale_data_clear);
+ 
+@@ -473,6 +477,57 @@ static int __init mmio_stale_data_parse_cmdline(char *str)
+ }
+ early_param("mmio_stale_data", mmio_stale_data_parse_cmdline);
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"Register File Data Sampling: " fmt
++
++enum rfds_mitigations {
++	RFDS_MITIGATION_OFF,
++	RFDS_MITIGATION_VERW,
++	RFDS_MITIGATION_UCODE_NEEDED,
++};
++
++/* Default mitigation for Register File Data Sampling */
++static enum rfds_mitigations rfds_mitigation __ro_after_init =
++	IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
++
++static const char * const rfds_strings[] = {
++	[RFDS_MITIGATION_OFF]			= "Vulnerable",
++	[RFDS_MITIGATION_VERW]			= "Mitigation: Clear Register File",
++	[RFDS_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
++};
++
++static void __init rfds_select_mitigation(void)
++{
++	if (!boot_cpu_has_bug(X86_BUG_RFDS) || cpu_mitigations_off()) {
++		rfds_mitigation = RFDS_MITIGATION_OFF;
++		return;
++	}
++	if (rfds_mitigation == RFDS_MITIGATION_OFF)
++		return;
++
++	if (x86_read_arch_cap_msr() & ARCH_CAP_RFDS_CLEAR)
++		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
++	else
++		rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
++}
++
++static __init int rfds_parse_cmdline(char *str)
++{
++	if (!str)
++		return -EINVAL;
++
++	if (!boot_cpu_has_bug(X86_BUG_RFDS))
++		return 0;
++
++	if (!strcmp(str, "off"))
++		rfds_mitigation = RFDS_MITIGATION_OFF;
++	else if (!strcmp(str, "on"))
++		rfds_mitigation = RFDS_MITIGATION_VERW;
++
++	return 0;
++}
++early_param("reg_file_data_sampling", rfds_parse_cmdline);
++
+ #undef pr_fmt
+ #define pr_fmt(fmt)     "" fmt
+ 
+@@ -481,12 +536,12 @@ static void __init md_clear_update_mitigation(void)
+ 	if (cpu_mitigations_off())
+ 		return;
+ 
+-	if (!static_key_enabled(&mds_user_clear))
++	if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
+ 		goto out;
+ 
+ 	/*
+-	 * mds_user_clear is now enabled. Update MDS, TAA and MMIO Stale Data
+-	 * mitigation, if necessary.
++	 * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
++	 * Stale Data mitigation, if necessary.
+ 	 */
+ 	if (mds_mitigation == MDS_MITIGATION_OFF &&
+ 	    boot_cpu_has_bug(X86_BUG_MDS)) {
+@@ -498,11 +553,19 @@ static void __init md_clear_update_mitigation(void)
+ 		taa_mitigation = TAA_MITIGATION_VERW;
+ 		taa_select_mitigation();
+ 	}
+-	if (mmio_mitigation == MMIO_MITIGATION_OFF &&
+-	    boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
++	/*
++	 * MMIO_MITIGATION_OFF is not checked here so that mmio_stale_data_clear
++	 * gets updated correctly as per X86_FEATURE_CLEAR_CPU_BUF state.
++	 */
++	if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
+ 		mmio_mitigation = MMIO_MITIGATION_VERW;
+ 		mmio_select_mitigation();
+ 	}
++	if (rfds_mitigation == RFDS_MITIGATION_OFF &&
++	    boot_cpu_has_bug(X86_BUG_RFDS)) {
++		rfds_mitigation = RFDS_MITIGATION_VERW;
++		rfds_select_mitigation();
++	}
+ out:
+ 	if (boot_cpu_has_bug(X86_BUG_MDS))
+ 		pr_info("MDS: %s\n", mds_strings[mds_mitigation]);
+@@ -512,6 +575,8 @@ static void __init md_clear_update_mitigation(void)
+ 		pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]);
+ 	else if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
+ 		pr_info("MMIO Stale Data: Unknown: No mitigations\n");
++	if (boot_cpu_has_bug(X86_BUG_RFDS))
++		pr_info("Register File Data Sampling: %s\n", rfds_strings[rfds_mitigation]);
+ }
+ 
+ static void __init md_clear_select_mitigation(void)
+@@ -519,11 +584,12 @@ static void __init md_clear_select_mitigation(void)
+ 	mds_select_mitigation();
+ 	taa_select_mitigation();
+ 	mmio_select_mitigation();
++	rfds_select_mitigation();
+ 
+ 	/*
+-	 * As MDS, TAA and MMIO Stale Data mitigations are inter-related, update
+-	 * and print their mitigation after MDS, TAA and MMIO Stale Data
+-	 * mitigation selection is done.
++	 * As these mitigations are inter-related and rely on VERW instruction
++	 * to clear the microarchitural buffers, update and print their status
++	 * after mitigation selection is done for each of these vulnerabilities.
+ 	 */
+ 	md_clear_update_mitigation();
+ }
+@@ -1251,19 +1317,21 @@ spectre_v2_user_select_mitigation(void)
+ 	}
+ 
+ 	/*
+-	 * If no STIBP, enhanced IBRS is enabled, or SMT impossible, STIBP
++	 * If no STIBP, Intel enhanced IBRS is enabled, or SMT impossible, STIBP
+ 	 * is not required.
+ 	 *
+-	 * Enhanced IBRS also protects against cross-thread branch target
++	 * Intel's Enhanced IBRS also protects against cross-thread branch target
+ 	 * injection in user-mode as the IBRS bit remains always set which
+ 	 * implicitly enables cross-thread protections.  However, in legacy IBRS
+ 	 * mode, the IBRS bit is set only on kernel entry and cleared on return
+-	 * to userspace. This disables the implicit cross-thread protection,
+-	 * so allow for STIBP to be selected in that case.
++	 * to userspace.  AMD Automatic IBRS also does not protect userspace.
++	 * These modes therefore disable the implicit cross-thread protection,
++	 * so allow for STIBP to be selected in those cases.
+ 	 */
+ 	if (!boot_cpu_has(X86_FEATURE_STIBP) ||
+ 	    !smt_possible ||
+-	    spectre_v2_in_eibrs_mode(spectre_v2_enabled))
++	    (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
++	     !boot_cpu_has(X86_FEATURE_AUTOIBRS)))
+ 		return;
+ 
+ 	/*
+@@ -1293,9 +1361,9 @@ static const char * const spectre_v2_strings[] = {
+ 	[SPECTRE_V2_NONE]			= "Vulnerable",
+ 	[SPECTRE_V2_RETPOLINE]			= "Mitigation: Retpolines",
+ 	[SPECTRE_V2_LFENCE]			= "Mitigation: LFENCE",
+-	[SPECTRE_V2_EIBRS]			= "Mitigation: Enhanced IBRS",
+-	[SPECTRE_V2_EIBRS_LFENCE]		= "Mitigation: Enhanced IBRS + LFENCE",
+-	[SPECTRE_V2_EIBRS_RETPOLINE]		= "Mitigation: Enhanced IBRS + Retpolines",
++	[SPECTRE_V2_EIBRS]			= "Mitigation: Enhanced / Automatic IBRS",
++	[SPECTRE_V2_EIBRS_LFENCE]		= "Mitigation: Enhanced / Automatic IBRS + LFENCE",
++	[SPECTRE_V2_EIBRS_RETPOLINE]		= "Mitigation: Enhanced / Automatic IBRS + Retpolines",
+ 	[SPECTRE_V2_IBRS]			= "Mitigation: IBRS",
+ };
+ 
+@@ -1364,7 +1432,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	     cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
+ 	     cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
+ 	    !boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
+-		pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n",
++		pr_err("%s selected but CPU doesn't have Enhanced or Automatic IBRS. Switching to AUTO select\n",
+ 		       mitigation_options[i].option);
+ 		return SPECTRE_V2_CMD_AUTO;
+ 	}
+@@ -1549,8 +1617,12 @@ static void __init spectre_v2_select_mitigation(void)
+ 		pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
+ 
+ 	if (spectre_v2_in_ibrs_mode(mode)) {
+-		x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+-		update_spec_ctrl(x86_spec_ctrl_base);
++		if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
++			msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
++		} else {
++			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
++			update_spec_ctrl(x86_spec_ctrl_base);
++		}
+ 	}
+ 
+ 	switch (mode) {
+@@ -1634,8 +1706,8 @@ static void __init spectre_v2_select_mitigation(void)
+ 	/*
+ 	 * Retpoline protects the kernel, but doesn't protect firmware.  IBRS
+ 	 * and Enhanced IBRS protect firmware too, so enable IBRS around
+-	 * firmware calls only when IBRS / Enhanced IBRS aren't otherwise
+-	 * enabled.
++	 * firmware calls only when IBRS / Enhanced / Automatic IBRS aren't
++	 * otherwise enabled.
+ 	 *
+ 	 * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
+ 	 * the user might select retpoline on the kernel command line and if
+@@ -2432,74 +2504,74 @@ static const char * const l1tf_vmx_states[] = {
+ static ssize_t l1tf_show_state(char *buf)
+ {
+ 	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_AUTO)
+-		return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG);
++		return sysfs_emit(buf, "%s\n", L1TF_DEFAULT_MSG);
+ 
+ 	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED ||
+ 	    (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER &&
+ 	     sched_smt_active())) {
+-		return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG,
+-			       l1tf_vmx_states[l1tf_vmx_mitigation]);
++		return sysfs_emit(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG,
++				  l1tf_vmx_states[l1tf_vmx_mitigation]);
+ 	}
+ 
+-	return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG,
+-		       l1tf_vmx_states[l1tf_vmx_mitigation],
+-		       sched_smt_active() ? "vulnerable" : "disabled");
++	return sysfs_emit(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG,
++			  l1tf_vmx_states[l1tf_vmx_mitigation],
++			  sched_smt_active() ? "vulnerable" : "disabled");
+ }
+ 
+ static ssize_t itlb_multihit_show_state(char *buf)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_MSR_IA32_FEAT_CTL) ||
+ 	    !boot_cpu_has(X86_FEATURE_VMX))
+-		return sprintf(buf, "KVM: Mitigation: VMX unsupported\n");
++		return sysfs_emit(buf, "KVM: Mitigation: VMX unsupported\n");
+ 	else if (!(cr4_read_shadow() & X86_CR4_VMXE))
+-		return sprintf(buf, "KVM: Mitigation: VMX disabled\n");
++		return sysfs_emit(buf, "KVM: Mitigation: VMX disabled\n");
+ 	else if (itlb_multihit_kvm_mitigation)
+-		return sprintf(buf, "KVM: Mitigation: Split huge pages\n");
++		return sysfs_emit(buf, "KVM: Mitigation: Split huge pages\n");
+ 	else
+-		return sprintf(buf, "KVM: Vulnerable\n");
++		return sysfs_emit(buf, "KVM: Vulnerable\n");
+ }
+ #else
+ static ssize_t l1tf_show_state(char *buf)
+ {
+-	return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG);
++	return sysfs_emit(buf, "%s\n", L1TF_DEFAULT_MSG);
+ }
+ 
+ static ssize_t itlb_multihit_show_state(char *buf)
+ {
+-	return sprintf(buf, "Processor vulnerable\n");
++	return sysfs_emit(buf, "Processor vulnerable\n");
+ }
+ #endif
+ 
+ static ssize_t mds_show_state(char *buf)
+ {
+ 	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+-		return sprintf(buf, "%s; SMT Host state unknown\n",
+-			       mds_strings[mds_mitigation]);
++		return sysfs_emit(buf, "%s; SMT Host state unknown\n",
++				  mds_strings[mds_mitigation]);
+ 	}
+ 
+ 	if (boot_cpu_has(X86_BUG_MSBDS_ONLY)) {
+-		return sprintf(buf, "%s; SMT %s\n", mds_strings[mds_mitigation],
+-			       (mds_mitigation == MDS_MITIGATION_OFF ? "vulnerable" :
+-			        sched_smt_active() ? "mitigated" : "disabled"));
++		return sysfs_emit(buf, "%s; SMT %s\n", mds_strings[mds_mitigation],
++				  (mds_mitigation == MDS_MITIGATION_OFF ? "vulnerable" :
++				   sched_smt_active() ? "mitigated" : "disabled"));
+ 	}
+ 
+-	return sprintf(buf, "%s; SMT %s\n", mds_strings[mds_mitigation],
+-		       sched_smt_active() ? "vulnerable" : "disabled");
++	return sysfs_emit(buf, "%s; SMT %s\n", mds_strings[mds_mitigation],
++			  sched_smt_active() ? "vulnerable" : "disabled");
+ }
+ 
+ static ssize_t tsx_async_abort_show_state(char *buf)
+ {
+ 	if ((taa_mitigation == TAA_MITIGATION_TSX_DISABLED) ||
+ 	    (taa_mitigation == TAA_MITIGATION_OFF))
+-		return sprintf(buf, "%s\n", taa_strings[taa_mitigation]);
++		return sysfs_emit(buf, "%s\n", taa_strings[taa_mitigation]);
+ 
+ 	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+-		return sprintf(buf, "%s; SMT Host state unknown\n",
+-			       taa_strings[taa_mitigation]);
++		return sysfs_emit(buf, "%s; SMT Host state unknown\n",
++				  taa_strings[taa_mitigation]);
+ 	}
+ 
+-	return sprintf(buf, "%s; SMT %s\n", taa_strings[taa_mitigation],
+-		       sched_smt_active() ? "vulnerable" : "disabled");
++	return sysfs_emit(buf, "%s; SMT %s\n", taa_strings[taa_mitigation],
++			  sched_smt_active() ? "vulnerable" : "disabled");
+ }
+ 
+ static ssize_t mmio_stale_data_show_state(char *buf)
+@@ -2519,9 +2591,15 @@ static ssize_t mmio_stale_data_show_state(char *buf)
+ 			  sched_smt_active() ? "vulnerable" : "disabled");
+ }
+ 
++static ssize_t rfds_show_state(char *buf)
++{
++	return sysfs_emit(buf, "%s\n", rfds_strings[rfds_mitigation]);
++}
++
+ static char *stibp_state(void)
+ {
+-	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
++	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
++	    !boot_cpu_has(X86_FEATURE_AUTOIBRS))
+ 		return "";
+ 
+ 	switch (spectre_v2_user_stibp) {
+@@ -2567,47 +2645,46 @@ static char *pbrsb_eibrs_state(void)
+ static ssize_t spectre_v2_show_state(char *buf)
+ {
+ 	if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
+-		return sprintf(buf, "Vulnerable: LFENCE\n");
++		return sysfs_emit(buf, "Vulnerable: LFENCE\n");
+ 
+ 	if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
+-		return sprintf(buf, "Vulnerable: eIBRS with unprivileged eBPF\n");
++		return sysfs_emit(buf, "Vulnerable: eIBRS with unprivileged eBPF\n");
+ 
+ 	if (sched_smt_active() && unprivileged_ebpf_enabled() &&
+ 	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
+-		return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
++		return sysfs_emit(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
+ 
+-	return sprintf(buf, "%s%s%s%s%s%s%s\n",
+-		       spectre_v2_strings[spectre_v2_enabled],
+-		       ibpb_state(),
+-		       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
+-		       stibp_state(),
+-		       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
+-		       pbrsb_eibrs_state(),
+-		       spectre_v2_module_string());
++	return sysfs_emit(buf, "%s%s%s%s%s%s%s\n",
++			  spectre_v2_strings[spectre_v2_enabled],
++			  ibpb_state(),
++			  boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
++			  stibp_state(),
++			  boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
++			  pbrsb_eibrs_state(),
++			  spectre_v2_module_string());
+ }
+ 
+ static ssize_t srbds_show_state(char *buf)
+ {
+-	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
++	return sysfs_emit(buf, "%s\n", srbds_strings[srbds_mitigation]);
+ }
+ 
+ static ssize_t retbleed_show_state(char *buf)
+ {
+ 	if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
+ 	    retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
+-	    if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
+-		boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
+-		    return sprintf(buf, "Vulnerable: untrained return thunk / IBPB on non-AMD based uarch\n");
++		if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
++		    boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
++			return sysfs_emit(buf, "Vulnerable: untrained return thunk / IBPB on non-AMD based uarch\n");
+ 
+-	    return sprintf(buf, "%s; SMT %s\n",
+-			   retbleed_strings[retbleed_mitigation],
+-			   !sched_smt_active() ? "disabled" :
+-			   spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
+-			   spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ?
+-			   "enabled with STIBP protection" : "vulnerable");
++		return sysfs_emit(buf, "%s; SMT %s\n", retbleed_strings[retbleed_mitigation],
++				  !sched_smt_active() ? "disabled" :
++				  spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
++				  spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ?
++				  "enabled with STIBP protection" : "vulnerable");
+ 	}
+ 
+-	return sprintf(buf, "%s\n", retbleed_strings[retbleed_mitigation]);
++	return sysfs_emit(buf, "%s\n", retbleed_strings[retbleed_mitigation]);
+ }
+ 
+ static ssize_t gds_show_state(char *buf)
+@@ -2629,26 +2706,26 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 			       char *buf, unsigned int bug)
+ {
+ 	if (!boot_cpu_has_bug(bug))
+-		return sprintf(buf, "Not affected\n");
++		return sysfs_emit(buf, "Not affected\n");
+ 
+ 	switch (bug) {
+ 	case X86_BUG_CPU_MELTDOWN:
+ 		if (boot_cpu_has(X86_FEATURE_PTI))
+-			return sprintf(buf, "Mitigation: PTI\n");
++			return sysfs_emit(buf, "Mitigation: PTI\n");
+ 
+ 		if (hypervisor_is_type(X86_HYPER_XEN_PV))
+-			return sprintf(buf, "Unknown (XEN PV detected, hypervisor mitigation required)\n");
++			return sysfs_emit(buf, "Unknown (XEN PV detected, hypervisor mitigation required)\n");
+ 
+ 		break;
+ 
+ 	case X86_BUG_SPECTRE_V1:
+-		return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]);
++		return sysfs_emit(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]);
+ 
+ 	case X86_BUG_SPECTRE_V2:
+ 		return spectre_v2_show_state(buf);
+ 
+ 	case X86_BUG_SPEC_STORE_BYPASS:
+-		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
++		return sysfs_emit(buf, "%s\n", ssb_strings[ssb_mode]);
+ 
+ 	case X86_BUG_L1TF:
+ 		if (boot_cpu_has(X86_FEATURE_L1TF_PTEINV))
+@@ -2680,11 +2757,14 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 	case X86_BUG_SRSO:
+ 		return srso_show_state(buf);
+ 
++	case X86_BUG_RFDS:
++		return rfds_show_state(buf);
++
+ 	default:
+ 		break;
+ 	}
+ 
+-	return sprintf(buf, "Vulnerable\n");
++	return sysfs_emit(buf, "Vulnerable\n");
+ }
+ 
+ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
+@@ -2754,4 +2834,9 @@ ssize_t cpu_show_spec_rstack_overflow(struct device *dev, struct device_attribut
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_SRSO);
+ }
++
++ssize_t cpu_show_reg_file_data_sampling(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_RFDS);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 4ecc6072e9a48..a496a9867f4b1 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -166,7 +166,6 @@ DEFINE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page) = { .gdt = {
+ 
+ 	[GDT_ENTRY_ESPFIX_SS]		= GDT_ENTRY_INIT(0xc092, 0, 0xfffff),
+ 	[GDT_ENTRY_PERCPU]		= GDT_ENTRY_INIT(0xc092, 0, 0xfffff),
+-	GDT_STACK_CANARY_INIT
+ #endif
+ } };
+ EXPORT_PER_CPU_SYMBOL_GPL(gdt_page);
+@@ -600,7 +599,6 @@ void load_percpu_segment(int cpu)
+ 	__loadsegment_simple(gs, 0);
+ 	wrmsrl(MSR_GS_BASE, cpu_kernelmode_gs_base(cpu));
+ #endif
+-	load_stack_canary_segment();
+ }
+ 
+ #ifdef CONFIG_X86_32
+@@ -1098,8 +1096,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ 	VULNWL_AMD(0x12,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+ 
+ 	/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
+-	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+-	VULNWL_HYGON(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),
++	VULNWL_HYGON(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),
+ 
+ 	/* Zhaoxin Family 7 */
+ 	VULNWL(CENTAUR,	7, X86_MODEL_ANY,	NO_SPECTRE_V2 | NO_SWAPGS | NO_MMIO),
+@@ -1134,6 +1132,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ #define SRSO		BIT(5)
+ /* CPU is affected by GDS */
+ #define GDS		BIT(6)
++/* CPU is affected by Register File Data Sampling */
++#define RFDS		BIT(7)
+ 
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
+@@ -1161,14 +1161,23 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(TIGERLAKE,	X86_STEPPING_ANY,		GDS),
+ 	VULNBL_INTEL_STEPPINGS(LAKEFIELD,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
+ 	VULNBL_INTEL_STEPPINGS(ROCKETLAKE,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS),
+-	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS),
+-	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D,	X86_STEPPING_ANY,		MMIO),
+-	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS),
++	VULNBL_INTEL_STEPPINGS(ALDERLAKE,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(ALDERLAKE_L,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(RAPTORLAKE,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(RAPTORLAKE_P,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(RAPTORLAKE_S,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(ALDERLAKE_N,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D,	X86_STEPPING_ANY,		MMIO | RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_GOLDMONT,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_GOLDMONT_D,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_GOLDMONT_PLUS, X86_STEPPING_ANY,		RFDS),
+ 
+ 	VULNBL_AMD(0x15, RETBLEED),
+ 	VULNBL_AMD(0x16, RETBLEED),
+ 	VULNBL_AMD(0x17, RETBLEED | SRSO),
+-	VULNBL_HYGON(0x18, RETBLEED),
++	VULNBL_HYGON(0x18, RETBLEED | SRSO),
+ 	VULNBL_AMD(0x19, SRSO),
+ 	{}
+ };
+@@ -1197,6 +1206,24 @@ static bool arch_cap_mmio_immune(u64 ia32_cap)
+ 		ia32_cap & ARCH_CAP_SBDR_SSDP_NO);
+ }
+ 
++static bool __init vulnerable_to_rfds(u64 ia32_cap)
++{
++	/* The "immunity" bit trumps everything else: */
++	if (ia32_cap & ARCH_CAP_RFDS_NO)
++		return false;
++
++	/*
++	 * VMMs set ARCH_CAP_RFDS_CLEAR for processors not in the blacklist to
++	 * indicate that mitigation is needed because guest is running on a
++	 * vulnerable hardware or may migrate to such hardware:
++	 */
++	if (ia32_cap & ARCH_CAP_RFDS_CLEAR)
++		return true;
++
++	/* Only consult the blacklist when there is no enumeration: */
++	return cpu_matches(cpu_vuln_blacklist, RFDS);
++}
++
+ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ {
+ 	u64 ia32_cap = x86_read_arch_cap_msr();
+@@ -1219,8 +1246,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
+ 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
+ 
+-	if (ia32_cap & ARCH_CAP_IBRS_ALL)
++	/*
++	 * AMD's AutoIBRS is equivalent to Intel's eIBRS - use the Intel feature
++	 * flag and protect from vendor-specific bugs via the whitelist.
++	 */
++	if ((ia32_cap & ARCH_CAP_IBRS_ALL) || cpu_has(c, X86_FEATURE_AUTOIBRS)) {
+ 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
++		if (!cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
++		    !(ia32_cap & ARCH_CAP_PBRSB_NO))
++			setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
++	}
+ 
+ 	if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
+ 	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
+@@ -1282,11 +1317,6 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 			setup_force_cpu_bug(X86_BUG_RETBLEED);
+ 	}
+ 
+-	if (cpu_has(c, X86_FEATURE_IBRS_ENHANCED) &&
+-	    !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
+-	    !(ia32_cap & ARCH_CAP_PBRSB_NO))
+-		setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
+-
+ 	/*
+ 	 * Check if CPU is vulnerable to GDS. If running in a virtual machine on
+ 	 * an affected processor, the VMM may have disabled the use of GATHER by
+@@ -1302,6 +1332,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 			setup_force_cpu_bug(X86_BUG_SRSO);
+ 	}
+ 
++	if (vulnerable_to_rfds(ia32_cap))
++		setup_force_cpu_bug(X86_BUG_RFDS);
++
+ 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ 		return;
+ 
+@@ -1937,7 +1970,8 @@ DEFINE_PER_CPU(unsigned long, cpu_current_top_of_stack) =
+ EXPORT_PER_CPU_SYMBOL(cpu_current_top_of_stack);
+ 
+ #ifdef CONFIG_STACKPROTECTOR
+-DEFINE_PER_CPU_ALIGNED(struct stack_canary, stack_canary);
++DEFINE_PER_CPU(unsigned long, __stack_chk_guard);
++EXPORT_PER_CPU_SYMBOL(__stack_chk_guard);
+ #endif
+ 
+ #endif	/* CONFIG_X86_64 */
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 18a6ed2afca03..97ab294290932 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -2389,12 +2389,14 @@ static ssize_t set_bank(struct device *s, struct device_attribute *attr,
+ 		return -EINVAL;
+ 
+ 	b = &per_cpu(mce_banks_array, s->id)[bank];
+-
+ 	if (!b->init)
+ 		return -ENODEV;
+ 
+ 	b->ctl = new;
++
++	mutex_lock(&mce_sysfs_mutex);
+ 	mce_restart();
++	mutex_unlock(&mce_sysfs_mutex);
+ 
+ 	return size;
+ }
+diff --git a/arch/x86/kernel/doublefault_32.c b/arch/x86/kernel/doublefault_32.c
+index 759d392cbe9f0..d1d49e3d536b8 100644
+--- a/arch/x86/kernel/doublefault_32.c
++++ b/arch/x86/kernel/doublefault_32.c
+@@ -100,9 +100,7 @@ DEFINE_PER_CPU_PAGE_ALIGNED(struct doublefault_stack, doublefault_stack) = {
+ 		.ss		= __KERNEL_DS,
+ 		.ds		= __USER_DS,
+ 		.fs		= __KERNEL_PERCPU,
+-#ifndef CONFIG_X86_32_LAZY_GS
+-		.gs		= __KERNEL_STACK_CANARY,
+-#endif
++		.gs		= 0,
+ 
+ 		.__cr3		= __pa_nodebug(swapper_pg_dir),
+ 	},
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index 8596b4dca9455..2988ffd099da4 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -302,15 +302,6 @@ unsigned long __head __startup_64(unsigned long physaddr,
+ 	return sme_get_me_mask();
+ }
+ 
+-unsigned long __startup_secondary_64(void)
+-{
+-	/*
+-	 * Return the SME encryption mask (if SME is active) to be used as a
+-	 * modifier for the initial pgdir entry programmed into CR3.
+-	 */
+-	return sme_get_me_mask();
+-}
+-
+ /* Wipe all early page tables except for the kernel symbol map */
+ static void __init reset_early_page_tables(void)
+ {
+diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
+index 3f1691b89231f..0359333f6bdee 100644
+--- a/arch/x86/kernel/head_32.S
++++ b/arch/x86/kernel/head_32.S
+@@ -319,8 +319,8 @@ SYM_FUNC_START(startup_32_smp)
+ 	movl $(__KERNEL_PERCPU), %eax
+ 	movl %eax,%fs			# set this cpu's percpu
+ 
+-	movl $(__KERNEL_STACK_CANARY),%eax
+-	movl %eax,%gs
++	xorl %eax,%eax
++	movl %eax,%gs			# clear possible garbage in %gs
+ 
+ 	xorl %eax,%eax			# Clear LDT
+ 	lldt %ax
+@@ -340,20 +340,6 @@ SYM_FUNC_END(startup_32_smp)
+  */
+ __INIT
+ setup_once:
+-#ifdef CONFIG_STACKPROTECTOR
+-	/*
+-	 * Configure the stack canary. The linker can't handle this by
+-	 * relocation.  Manually set base address in stack canary
+-	 * segment descriptor.
+-	 */
+-	movl $gdt_page,%eax
+-	movl $stack_canary,%ecx
+-	movw %cx, 8 * GDT_ENTRY_STACK_CANARY + 2(%eax)
+-	shrl $16, %ecx
+-	movb %cl, 8 * GDT_ENTRY_STACK_CANARY + 4(%eax)
+-	movb %ch, 8 * GDT_ENTRY_STACK_CANARY + 7(%eax)
+-#endif
+-
+ 	andl $0,setup_once_ref	/* Once is enough, thanks */
+ 	RET
+ 
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index 0424c2a6c15b8..713b1ac34639b 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -74,6 +74,22 @@ SYM_CODE_START_NOALIGN(startup_64)
+ 	leaq	(__end_init_task - SIZEOF_PTREGS)(%rip), %rsp
+ 
+ 	leaq	_text(%rip), %rdi
++
++	/*
++	 * initial_gs points to initial fixed_percpu_data struct with storage for
++	 * the stack protector canary. Global pointer fixups are needed at this
++	 * stage, so apply them as is done in fixup_pointer(), and initialize %gs
++	 * such that the canary can be accessed at %gs:40 for subsequent C calls.
++	 */
++	movl	$MSR_GS_BASE, %ecx
++	movq	initial_gs(%rip), %rax
++	movq	$_text, %rdx
++	subq	%rdx, %rax
++	addq	%rdi, %rax
++	movq	%rax, %rdx
++	shrq	$32,  %rdx
++	wrmsr
++
+ 	pushq	%rsi
+ 	call	startup_64_setup_env
+ 	popq	%rsi
+@@ -141,9 +157,11 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
+ 	 * Retrieve the modifier (SME encryption mask if SME is active) to be
+ 	 * added to the initial pgdir entry that will be programmed into CR3.
+ 	 */
+-	pushq	%rsi
+-	call	__startup_secondary_64
+-	popq	%rsi
++#ifdef CONFIG_AMD_MEM_ENCRYPT
++	movq	sme_me_mask, %rax
++#else
++	xorq	%rax, %rax
++#endif
+ 
+ 	/* Form the CR3 value being sure to include the CR3 modifier */
+ 	addq	$(init_top_pgt - __START_KERNEL_map), %rax
+diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
+index 2ef961cf4cfc5..f2e53b20df7e3 100644
+--- a/arch/x86/kernel/nmi.c
++++ b/arch/x86/kernel/nmi.c
+@@ -519,9 +519,6 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
+ 		write_cr2(this_cpu_read(nmi_cr2));
+ 	if (this_cpu_dec_return(nmi_state))
+ 		goto nmi_restart;
+-
+-	if (user_mode(regs))
+-		mds_user_clear_cpu_buffers();
+ }
+ 
+ #if defined(CONFIG_X86_64) && IS_ENABLED(CONFIG_KVM_INTEL)
+diff --git a/arch/x86/kernel/setup_percpu.c b/arch/x86/kernel/setup_percpu.c
+index fd945ce78554e..0941d2f44f2a2 100644
+--- a/arch/x86/kernel/setup_percpu.c
++++ b/arch/x86/kernel/setup_percpu.c
+@@ -224,7 +224,6 @@ void __init setup_per_cpu_areas(void)
+ 		per_cpu(this_cpu_off, cpu) = per_cpu_offset(cpu);
+ 		per_cpu(cpu_number, cpu) = cpu;
+ 		setup_percpu_segment(cpu);
+-		setup_stack_canary_segment(cpu);
+ 		/*
+ 		 * Copy data used in early init routines from the
+ 		 * initial arrays to the per cpu data areas.  These
+diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
+index 64a496a0687f6..3c883e0642424 100644
+--- a/arch/x86/kernel/tls.c
++++ b/arch/x86/kernel/tls.c
+@@ -164,17 +164,11 @@ int do_set_thread_area(struct task_struct *p, int idx,
+ 		savesegment(fs, sel);
+ 		if (sel == modified_sel)
+ 			loadsegment(fs, sel);
+-
+-		savesegment(gs, sel);
+-		if (sel == modified_sel)
+-			load_gs_index(sel);
+ #endif
+ 
+-#ifdef CONFIG_X86_32_LAZY_GS
+ 		savesegment(gs, sel);
+ 		if (sel == modified_sel)
+-			loadsegment(gs, sel);
+-#endif
++			load_gs_index(sel);
+ 	} else {
+ #ifdef CONFIG_X86_64
+ 		if (p->thread.fsindex == modified_sel)
+diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
+index 1ba9313d26b91..e25853c2eb0fc 100644
+--- a/arch/x86/kvm/cpuid.h
++++ b/arch/x86/kvm/cpuid.h
+@@ -76,10 +76,12 @@ static const struct cpuid_reg reverse_cpuid[] = {
+  */
+ static __always_inline void reverse_cpuid_check(unsigned int x86_leaf)
+ {
++	BUILD_BUG_ON(NR_CPUID_WORDS != NCAPINTS);
+ 	BUILD_BUG_ON(x86_leaf == CPUID_LNX_1);
+ 	BUILD_BUG_ON(x86_leaf == CPUID_LNX_2);
+ 	BUILD_BUG_ON(x86_leaf == CPUID_LNX_3);
+ 	BUILD_BUG_ON(x86_leaf == CPUID_LNX_4);
++	BUILD_BUG_ON(x86_leaf == CPUID_LNX_5);
+ 	BUILD_BUG_ON(x86_leaf >= ARRAY_SIZE(reverse_cpuid));
+ 	BUILD_BUG_ON(reverse_cpuid[x86_leaf].function == 0);
+ }
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index c2b34998c27df..52e14d6aa4965 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -1024,20 +1024,22 @@ int svm_register_enc_region(struct kvm *kvm,
+ 		goto e_free;
+ 	}
+ 
+-	region->uaddr = range->addr;
+-	region->size = range->size;
+-
+-	list_add_tail(&region->list, &sev->regions_list);
+-	mutex_unlock(&kvm->lock);
+-
+ 	/*
+ 	 * The guest may change the memory encryption attribute from C=0 -> C=1
+ 	 * or vice versa for this memory range. Lets make sure caches are
+ 	 * flushed to ensure that guest data gets written into memory with
+-	 * correct C-bit.
++	 * correct C-bit.  Note, this must be done before dropping kvm->lock,
++	 * as region and its array of pages can be freed by a different task
++	 * once kvm->lock is released.
+ 	 */
+ 	sev_clflush_pages(region->pages, region->npages);
+ 
++	region->uaddr = range->addr;
++	region->size = range->size;
++
++	list_add_tail(&region->list, &sev->regions_list);
++	mutex_unlock(&kvm->lock);
++
+ 	return ret;
+ 
+ e_free:
+diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
+index edc3f16cc1896..6a9bfdfbb6e59 100644
+--- a/arch/x86/kvm/vmx/run_flags.h
++++ b/arch/x86/kvm/vmx/run_flags.h
+@@ -2,7 +2,10 @@
+ #ifndef __KVM_X86_VMX_RUN_FLAGS_H
+ #define __KVM_X86_VMX_RUN_FLAGS_H
+ 
+-#define VMX_RUN_VMRESUME	(1 << 0)
+-#define VMX_RUN_SAVE_SPEC_CTRL	(1 << 1)
++#define VMX_RUN_VMRESUME_SHIFT		0
++#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT	1
++
++#define VMX_RUN_VMRESUME		BIT(VMX_RUN_VMRESUME_SHIFT)
++#define VMX_RUN_SAVE_SPEC_CTRL		BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
+ 
+ #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
+diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
+index 982138bebb70f..7a4b999d5701e 100644
+--- a/arch/x86/kvm/vmx/vmenter.S
++++ b/arch/x86/kvm/vmx/vmenter.S
+@@ -77,7 +77,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
+ 	mov (%_ASM_SP), %_ASM_AX
+ 
+ 	/* Check if vmlaunch or vmresume is needed */
+-	testb $VMX_RUN_VMRESUME, %bl
++	bt   $VMX_RUN_VMRESUME_SHIFT, %bx
+ 
+ 	/* Load guest registers.  Don't clobber flags. */
+ 	mov VCPU_RCX(%_ASM_AX), %_ASM_CX
+@@ -99,8 +99,11 @@ SYM_FUNC_START(__vmx_vcpu_run)
+ 	/* Load guest RAX.  This kills the @regs pointer! */
+ 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
+ 
+-	/* Check EFLAGS.ZF from 'testb' above */
+-	jz .Lvmlaunch
++	/* Clobbers EFLAGS.ZF */
++	CLEAR_CPU_BUFFERS
++
++	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
++	jnc .Lvmlaunch
+ 
+ 	/*
+ 	 * After a successful VMRESUME/VMLAUNCH, control flow "magically"
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 2445c61038954..3e9bb9ae836dd 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -397,7 +397,8 @@ static __always_inline void vmx_enable_fb_clear(struct vcpu_vmx *vmx)
+ 
+ static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+ {
+-	vmx->disable_fb_clear = vmx_fb_clear_ctrl_available;
++	vmx->disable_fb_clear = !cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) &&
++		vmx_fb_clear_ctrl_available;
+ 
+ 	/*
+ 	 * If guest will not execute VERW, there is no need to set FB_CLEAR_DIS
+@@ -6792,11 +6793,14 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+ 	guest_enter_irqoff();
+ 	lockdep_hardirqs_on(CALLER_ADDR0);
+ 
+-	/* L1D Flush includes CPU buffer clear to mitigate MDS */
++	/*
++	 * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
++	 * mitigation for MDS is done late in VMentry and is still
++	 * executed in spite of L1D Flush. This is because an extra VERW
++	 * should not matter much after the big hammer L1D Flush.
++	 */
+ 	if (static_branch_unlikely(&vmx_l1d_should_flush))
+ 		vmx_l1d_flush(vcpu);
+-	else if (static_branch_unlikely(&mds_user_clear))
+-		mds_clear_cpu_buffers();
+ 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
+ 		 kvm_arch_has_assigned_device(vcpu->kvm))
+ 		mds_clear_cpu_buffers();
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 6c2bf7cd7aec6..8e0b957c62193 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1389,7 +1389,8 @@ static unsigned int num_msr_based_features;
+ 	 ARCH_CAP_SKIP_VMENTRY_L1DFLUSH | ARCH_CAP_SSB_NO | ARCH_CAP_MDS_NO | \
+ 	 ARCH_CAP_PSCHANGE_MC_NO | ARCH_CAP_TSX_CTRL_MSR | ARCH_CAP_TAA_NO | \
+ 	 ARCH_CAP_SBDR_SSDP_NO | ARCH_CAP_FBSDP_NO | ARCH_CAP_PSDP_NO | \
+-	 ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO | ARCH_CAP_GDS_NO)
++	 ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO | ARCH_CAP_GDS_NO | \
++	 ARCH_CAP_RFDS_NO | ARCH_CAP_RFDS_CLEAR)
+ 
+ static u64 kvm_get_arch_capabilities(void)
+ {
+@@ -1426,6 +1427,8 @@ static u64 kvm_get_arch_capabilities(void)
+ 		data |= ARCH_CAP_SSB_NO;
+ 	if (!boot_cpu_has_bug(X86_BUG_MDS))
+ 		data |= ARCH_CAP_MDS_NO;
++	if (!boot_cpu_has_bug(X86_BUG_RFDS))
++		data |= ARCH_CAP_RFDS_NO;
+ 
+ 	if (!boot_cpu_has(X86_FEATURE_RTM)) {
+ 		/*
+diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
+index ffc8b7dcf1feb..6ed542e310adc 100644
+--- a/arch/x86/lib/insn-eval.c
++++ b/arch/x86/lib/insn-eval.c
+@@ -404,10 +404,6 @@ static short get_segment_selector(struct pt_regs *regs, int seg_reg_idx)
+ 	case INAT_SEG_REG_FS:
+ 		return (unsigned short)(regs->fs & 0xffff);
+ 	case INAT_SEG_REG_GS:
+-		/*
+-		 * GS may or may not be in regs as per CONFIG_X86_32_LAZY_GS.
+-		 * The macro below takes care of both cases.
+-		 */
+ 		return get_user_gs(regs);
+ 	case INAT_SEG_REG_IGNORE:
+ 	default:
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index 6f5321b36dbb1..ab9b047790dd0 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -108,6 +108,7 @@ SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+ 	ret
+ 	int3
+ SYM_FUNC_END(srso_alias_untrain_ret)
++__EXPORT_THUNK(srso_alias_untrain_ret)
+ #endif
+ 
+ SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
+@@ -249,9 +250,7 @@ SYM_CODE_START(srso_return_thunk)
+ SYM_CODE_END(srso_return_thunk)
+ 
+ SYM_FUNC_START(entry_untrain_ret)
+-	ALTERNATIVE_2 "jmp retbleed_untrain_ret", \
+-		      "jmp srso_untrain_ret", X86_FEATURE_SRSO, \
+-		      "jmp srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
++	ALTERNATIVE "jmp retbleed_untrain_ret", "jmp srso_untrain_ret", X86_FEATURE_SRSO
+ SYM_FUNC_END(entry_untrain_ret)
+ __EXPORT_THUNK(entry_untrain_ret)
+ 
+@@ -259,6 +258,7 @@ SYM_CODE_START(__x86_return_thunk)
+ 	UNWIND_HINT_FUNC
+ 	ANNOTATE_NOENDBR
+ 	ANNOTATE_UNRET_SAFE
++	ANNOTATE_NOENDBR
+ 	ret
+ 	int3
+ SYM_CODE_END(__x86_return_thunk)
+diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
+index f50cc210a9818..968d7005f4a72 100644
+--- a/arch/x86/mm/ident_map.c
++++ b/arch/x86/mm/ident_map.c
+@@ -26,31 +26,18 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
+ 	for (; addr < end; addr = next) {
+ 		pud_t *pud = pud_page + pud_index(addr);
+ 		pmd_t *pmd;
+-		bool use_gbpage;
+ 
+ 		next = (addr & PUD_MASK) + PUD_SIZE;
+ 		if (next > end)
+ 			next = end;
+ 
+-		/* if this is already a gbpage, this portion is already mapped */
+-		if (pud_large(*pud))
+-			continue;
+-
+-		/* Is using a gbpage allowed? */
+-		use_gbpage = info->direct_gbpages;
+-
+-		/* Don't use gbpage if it maps more than the requested region. */
+-		/* at the begining: */
+-		use_gbpage &= ((addr & ~PUD_MASK) == 0);
+-		/* ... or at the end: */
+-		use_gbpage &= ((next & ~PUD_MASK) == 0);
+-
+-		/* Never overwrite existing mappings */
+-		use_gbpage &= !pud_present(*pud);
+-
+-		if (use_gbpage) {
++		if (info->direct_gbpages) {
+ 			pud_t pudval;
+ 
++			if (pud_present(*pud))
++				continue;
++
++			addr &= PUD_MASK;
+ 			pudval = __pud((addr - info->offset) | info->page_flag);
+ 			set_pud(pud, pudval);
+ 			continue;
+diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
+index f9c53a7107407..adc76b4133ab5 100644
+--- a/arch/x86/mm/pat/memtype.c
++++ b/arch/x86/mm/pat/memtype.c
+@@ -56,6 +56,7 @@
+ 
+ #include "memtype.h"
+ #include "../mm_internal.h"
++#include "../../../mm/internal.h"	/* is_cow_mapping() */
+ 
+ #undef pr_fmt
+ #define pr_fmt(fmt) "" fmt
+@@ -987,6 +988,38 @@ static void free_pfn_range(u64 paddr, unsigned long size)
+ 		memtype_free(paddr, paddr + size);
+ }
+ 
++static int get_pat_info(struct vm_area_struct *vma, resource_size_t *paddr,
++		pgprot_t *pgprot)
++{
++	unsigned long prot;
++
++	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_PAT));
++
++	/*
++	 * We need the starting PFN and cachemode used for track_pfn_remap()
++	 * that covered the whole VMA. For most mappings, we can obtain that
++	 * information from the page tables. For COW mappings, we might now
++	 * suddenly have anon folios mapped and follow_phys() will fail.
++	 *
++	 * Fallback to using vma->vm_pgoff, see remap_pfn_range_notrack(), to
++	 * detect the PFN. If we need the cachemode as well, we're out of luck
++	 * for now and have to fail fork().
++	 */
++	if (!follow_phys(vma, vma->vm_start, 0, &prot, paddr)) {
++		if (pgprot)
++			*pgprot = __pgprot(prot);
++		return 0;
++	}
++	if (is_cow_mapping(vma->vm_flags)) {
++		if (pgprot)
++			return -EINVAL;
++		*paddr = (resource_size_t)vma->vm_pgoff << PAGE_SHIFT;
++		return 0;
++	}
++	WARN_ON_ONCE(1);
++	return -EINVAL;
++}
++
+ /*
+  * track_pfn_copy is called when vma that is covering the pfnmap gets
+  * copied through copy_page_range().
+@@ -997,20 +1030,13 @@ static void free_pfn_range(u64 paddr, unsigned long size)
+ int track_pfn_copy(struct vm_area_struct *vma)
+ {
+ 	resource_size_t paddr;
+-	unsigned long prot;
+ 	unsigned long vma_size = vma->vm_end - vma->vm_start;
+ 	pgprot_t pgprot;
+ 
+ 	if (vma->vm_flags & VM_PAT) {
+-		/*
+-		 * reserve the whole chunk covered by vma. We need the
+-		 * starting address and protection from pte.
+-		 */
+-		if (follow_phys(vma, vma->vm_start, 0, &prot, &paddr)) {
+-			WARN_ON_ONCE(1);
++		if (get_pat_info(vma, &paddr, &pgprot))
+ 			return -EINVAL;
+-		}
+-		pgprot = __pgprot(prot);
++		/* reserve the whole chunk covered by vma. */
+ 		return reserve_pfn_range(paddr, vma_size, &pgprot, 1);
+ 	}
+ 
+@@ -1085,7 +1111,6 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
+ 		 unsigned long size)
+ {
+ 	resource_size_t paddr;
+-	unsigned long prot;
+ 
+ 	if (vma && !(vma->vm_flags & VM_PAT))
+ 		return;
+@@ -1093,11 +1118,8 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
+ 	/* free the chunk starting from pfn or the whole chunk */
+ 	paddr = (resource_size_t)pfn << PAGE_SHIFT;
+ 	if (!paddr && !size) {
+-		if (follow_phys(vma, vma->vm_start, 0, &prot, &paddr)) {
+-			WARN_ON_ONCE(1);
++		if (get_pat_info(vma, &paddr, NULL))
+ 			return;
+-		}
+-
+ 		size = vma->vm_end - vma->vm_start;
+ 	}
+ 	free_pfn_range(paddr, size);
+diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S
+index 43b4d864817ec..afbf0bb252da5 100644
+--- a/arch/x86/platform/pvh/head.S
++++ b/arch/x86/platform/pvh/head.S
+@@ -45,10 +45,8 @@
+ 
+ #define PVH_GDT_ENTRY_CS	1
+ #define PVH_GDT_ENTRY_DS	2
+-#define PVH_GDT_ENTRY_CANARY	3
+ #define PVH_CS_SEL		(PVH_GDT_ENTRY_CS * 8)
+ #define PVH_DS_SEL		(PVH_GDT_ENTRY_DS * 8)
+-#define PVH_CANARY_SEL		(PVH_GDT_ENTRY_CANARY * 8)
+ 
+ SYM_CODE_START_LOCAL(pvh_start_xen)
+ 	cld
+@@ -109,17 +107,6 @@ SYM_CODE_START_LOCAL(pvh_start_xen)
+ 
+ #else /* CONFIG_X86_64 */
+ 
+-	/* Set base address in stack canary descriptor. */
+-	movl $_pa(gdt_start),%eax
+-	movl $_pa(canary),%ecx
+-	movw %cx, (PVH_GDT_ENTRY_CANARY * 8) + 2(%eax)
+-	shrl $16, %ecx
+-	movb %cl, (PVH_GDT_ENTRY_CANARY * 8) + 4(%eax)
+-	movb %ch, (PVH_GDT_ENTRY_CANARY * 8) + 7(%eax)
+-
+-	mov $PVH_CANARY_SEL,%eax
+-	mov %eax,%gs
+-
+ 	call mk_early_pgtbl_32
+ 
+ 	mov $_pa(initial_page_table), %eax
+@@ -163,7 +150,6 @@ SYM_DATA_START_LOCAL(gdt_start)
+ 	.quad GDT_ENTRY(0xc09a, 0, 0xfffff) /* PVH_CS_SEL */
+ #endif
+ 	.quad GDT_ENTRY(0xc092, 0, 0xfffff) /* PVH_DS_SEL */
+-	.quad GDT_ENTRY(0x4090, 0, 0x18)    /* PVH_CANARY_SEL */
+ SYM_DATA_END_LABEL(gdt_start, SYM_L_LOCAL, gdt_end)
+ 
+ 	.balign 16
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index 4e4e76ecd3ecd..84c7b2312ea9e 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -101,11 +101,8 @@ static void __save_processor_state(struct saved_context *ctxt)
+ 	/*
+ 	 * segment registers
+ 	 */
+-#ifdef CONFIG_X86_32_LAZY_GS
+ 	savesegment(gs, ctxt->gs);
+-#endif
+ #ifdef CONFIG_X86_64
+-	savesegment(gs, ctxt->gs);
+ 	savesegment(fs, ctxt->fs);
+ 	savesegment(ds, ctxt->ds);
+ 	savesegment(es, ctxt->es);
+@@ -234,7 +231,6 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
+ 	wrmsrl(MSR_GS_BASE, ctxt->kernelmode_gs_base);
+ #else
+ 	loadsegment(fs, __KERNEL_PERCPU);
+-	loadsegment(gs, __KERNEL_STACK_CANARY);
+ #endif
+ 
+ 	/* Restore the TSS, RO GDT, LDT, and usermode-relevant MSRs. */
+@@ -257,7 +253,7 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
+ 	 */
+ 	wrmsrl(MSR_FS_BASE, ctxt->fs_base);
+ 	wrmsrl(MSR_KERNEL_GS_BASE, ctxt->usermode_gs_base);
+-#elif defined(CONFIG_X86_32_LAZY_GS)
++#else
+ 	loadsegment(gs, ctxt->gs);
+ #endif
+ 
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 815030b7f6fa8..94804670caab8 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1193,7 +1193,6 @@ static void __init xen_setup_gdt(int cpu)
+ 	pv_ops.cpu.write_gdt_entry = xen_write_gdt_entry_boot;
+ 	pv_ops.cpu.load_gdt = xen_load_gdt_boot;
+ 
+-	setup_stack_canary_segment(cpu);
+ 	switch_to_new_gdt(cpu);
+ 
+ 	pv_ops.cpu.write_gdt_entry = xen_write_gdt_entry;
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index c3aa7f8ee3883..ebd373469c807 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -60,6 +60,7 @@ void blk_set_default_limits(struct queue_limits *lim)
+ 	lim->io_opt = 0;
+ 	lim->misaligned = 0;
+ 	lim->zoned = BLK_ZONED_NONE;
++	lim->zone_write_granularity = 0;
+ }
+ EXPORT_SYMBOL(blk_set_default_limits);
+ 
+@@ -353,6 +354,28 @@ void blk_queue_physical_block_size(struct request_queue *q, unsigned int size)
+ }
+ EXPORT_SYMBOL(blk_queue_physical_block_size);
+ 
++/**
++ * blk_queue_zone_write_granularity - set zone write granularity for the queue
++ * @q:  the request queue for the zoned device
++ * @size:  the zone write granularity size, in bytes
++ *
++ * Description:
++ *   This should be set to the lowest possible size allowing to write in
++ *   sequential zones of a zoned block device.
++ */
++void blk_queue_zone_write_granularity(struct request_queue *q,
++				      unsigned int size)
++{
++	if (WARN_ON_ONCE(!blk_queue_is_zoned(q)))
++		return;
++
++	q->limits.zone_write_granularity = size;
++
++	if (q->limits.zone_write_granularity < q->limits.logical_block_size)
++		q->limits.zone_write_granularity = q->limits.logical_block_size;
++}
++EXPORT_SYMBOL_GPL(blk_queue_zone_write_granularity);
++
+ /**
+  * blk_queue_alignment_offset - set physical block alignment offset
+  * @q:	the request queue for the device
+@@ -630,7 +653,13 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
+ 			t->discard_granularity;
+ 	}
+ 
++	t->zone_write_granularity = max(t->zone_write_granularity,
++					b->zone_write_granularity);
+ 	t->zoned = max(t->zoned, b->zoned);
++	if (!t->zoned) {
++		t->zone_write_granularity = 0;
++		t->max_zone_append_sectors = 0;
++	}
+ 	return ret;
+ }
+ EXPORT_SYMBOL(blk_stack_limits);
+@@ -846,6 +875,8 @@ EXPORT_SYMBOL_GPL(blk_queue_can_use_dma_map_merging);
+  */
+ void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
+ {
++	struct request_queue *q = disk->queue;
++
+ 	switch (model) {
+ 	case BLK_ZONED_HM:
+ 		/*
+@@ -874,7 +905,15 @@ void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
+ 		break;
+ 	}
+ 
+-	disk->queue->limits.zoned = model;
++	q->limits.zoned = model;
++	if (model != BLK_ZONED_NONE) {
++		/*
++		 * Set the zone write granularity to the device logical block
++		 * size by default. The driver can change this value if needed.
++		 */
++		blk_queue_zone_write_granularity(q,
++						queue_logical_block_size(q));
++	}
+ }
+ EXPORT_SYMBOL_GPL(blk_queue_set_zoned);
+ 
+diff --git a/block/blk-stat.c b/block/blk-stat.c
+index ae3dd1fb8e61d..6e602f9b966e4 100644
+--- a/block/blk-stat.c
++++ b/block/blk-stat.c
+@@ -28,7 +28,7 @@ void blk_rq_stat_init(struct blk_rq_stat *stat)
+ /* src is a per-cpu stat, mean isn't initialized */
+ void blk_rq_stat_sum(struct blk_rq_stat *dst, struct blk_rq_stat *src)
+ {
+-	if (!src->nr_samples)
++	if (dst->nr_samples + src->nr_samples <= dst->nr_samples)
+ 		return;
+ 
+ 	dst->min = min(dst->min, src->min);
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 9174137a913c4..ddf23bf3e0f1d 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -219,6 +219,12 @@ static ssize_t queue_write_zeroes_max_show(struct request_queue *q, char *page)
+ 		(unsigned long long)q->limits.max_write_zeroes_sectors << 9);
+ }
+ 
++static ssize_t queue_zone_write_granularity_show(struct request_queue *q,
++						 char *page)
++{
++	return queue_var_show(queue_zone_write_granularity(q), page);
++}
++
+ static ssize_t queue_zone_append_max_show(struct request_queue *q, char *page)
+ {
+ 	unsigned long long max_sectors = q->limits.max_zone_append_sectors;
+@@ -585,6 +591,7 @@ QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
+ QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
+ QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes");
+ QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes");
++QUEUE_RO_ENTRY(queue_zone_write_granularity, "zone_write_granularity");
+ 
+ QUEUE_RO_ENTRY(queue_zoned, "zoned");
+ QUEUE_RO_ENTRY(queue_nr_zones, "nr_zones");
+@@ -639,6 +646,7 @@ static struct attribute *queue_attrs[] = {
+ 	&queue_write_same_max_entry.attr,
+ 	&queue_write_zeroes_max_entry.attr,
+ 	&queue_zone_append_max_entry.attr,
++	&queue_zone_write_granularity_entry.attr,
+ 	&queue_nonrot_entry.attr,
+ 	&queue_zoned_entry.attr,
+ 	&queue_nr_zones_entry.attr,
+diff --git a/block/ioctl.c b/block/ioctl.c
+index 24f8042f12b60..bc97698e0e8a3 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -17,7 +17,7 @@ static int blkpg_do_ioctl(struct block_device *bdev,
+ 			  struct blkpg_partition __user *upart, int op)
+ {
+ 	struct blkpg_partition p;
+-	long long start, length;
++	sector_t start, length;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EACCES;
+@@ -32,6 +32,12 @@ static int blkpg_do_ioctl(struct block_device *bdev,
+ 	if (op == BLKPG_DEL_PARTITION)
+ 		return bdev_del_partition(bdev, p.pno);
+ 
++	if (p.start < 0 || p.length <= 0 || p.start + p.length < 0)
++		return -EINVAL;
++	/* Check that the partition is aligned to the block size */
++	if (!IS_ALIGNED(p.start | p.length, bdev_logical_block_size(bdev)))
++		return -EINVAL;
++
+ 	start = p.start >> SECTOR_SHIFT;
+ 	length = p.length >> SECTOR_SHIFT;
+ 
+@@ -46,9 +52,6 @@ static int blkpg_do_ioctl(struct block_device *bdev,
+ 
+ 	switch (op) {
+ 	case BLKPG_ADD_PARTITION:
+-		/* check if partition is aligned to blocksize */
+-		if (p.start & (bdev_logical_block_size(bdev) - 1))
+-			return -EINVAL;
+ 		return bdev_add_partition(bdev, p.pno, start, length);
+ 	case BLKPG_RESIZE_PARTITION:
+ 		return bdev_resize_partition(bdev, p.pno, start, length);
+diff --git a/drivers/accessibility/speakup/synth.c b/drivers/accessibility/speakup/synth.c
+index ac47dbac72075..82cfc5ec6bdf9 100644
+--- a/drivers/accessibility/speakup/synth.c
++++ b/drivers/accessibility/speakup/synth.c
+@@ -208,8 +208,10 @@ void spk_do_flush(void)
+ 	wake_up_process(speakup_task);
+ }
+ 
+-void synth_write(const char *buf, size_t count)
++void synth_write(const char *_buf, size_t count)
+ {
++	const unsigned char *buf = (const unsigned char *) _buf;
++
+ 	while (count--)
+ 		synth_buffer_add(*buf++);
+ 	synth_start();
+diff --git a/drivers/acpi/acpica/dbnames.c b/drivers/acpi/acpica/dbnames.c
+index b91155ea9c343..c9131259f717b 100644
+--- a/drivers/acpi/acpica/dbnames.c
++++ b/drivers/acpi/acpica/dbnames.c
+@@ -550,8 +550,12 @@ acpi_db_walk_for_fields(acpi_handle obj_handle,
+ 	ACPI_FREE(buffer.pointer);
+ 
+ 	buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER;
+-	acpi_evaluate_object(obj_handle, NULL, NULL, &buffer);
+-
++	status = acpi_evaluate_object(obj_handle, NULL, NULL, &buffer);
++	if (ACPI_FAILURE(status)) {
++		acpi_os_printf("Could Not evaluate object %p\n",
++			       obj_handle);
++		return (AE_OK);
++	}
+ 	/*
+ 	 * Since this is a field unit, surround the output in braces
+ 	 */
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 097a5b5f46ab0..e79c004ca0b24 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -385,18 +385,6 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "20GGA00L00"),
+ 		},
+ 	},
+-	/*
+-	 * ASUS B1400CEAE hangs on resume from suspend (see
+-	 * https://bugzilla.kernel.org/show_bug.cgi?id=215742).
+-	 */
+-	{
+-	.callback = init_default_s3,
+-	.ident = "ASUS B1400CEAE",
+-	.matches = {
+-		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-		DMI_MATCH(DMI_PRODUCT_NAME, "ASUS EXPERTBOOK B1400CEAE"),
+-		},
+-	},
+ 	{},
+ };
+ 
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 6f7f8e41404dc..55a07dbe1d8a6 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -662,11 +662,6 @@ MODULE_PARM_DESC(mobile_lpm_policy, "Default LPM policy for mobile chipsets");
+ static void ahci_pci_save_initial_config(struct pci_dev *pdev,
+ 					 struct ahci_host_priv *hpriv)
+ {
+-	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && pdev->device == 0x1166) {
+-		dev_info(&pdev->dev, "ASM1166 has only six ports\n");
+-		hpriv->saved_port_map = 0x3f;
+-	}
+-
+ 	if (pdev->vendor == PCI_VENDOR_ID_JMICRON && pdev->device == 0x2361) {
+ 		dev_info(&pdev->dev, "JMB361 has only one port\n");
+ 		hpriv->force_port_map = 1;
+diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c
+index 11be88f70690e..cede4b517646f 100644
+--- a/drivers/ata/sata_mv.c
++++ b/drivers/ata/sata_mv.c
+@@ -783,37 +783,6 @@ static const struct ata_port_info mv_port_info[] = {
+ 	},
+ };
+ 
+-static const struct pci_device_id mv_pci_tbl[] = {
+-	{ PCI_VDEVICE(MARVELL, 0x5040), chip_504x },
+-	{ PCI_VDEVICE(MARVELL, 0x5041), chip_504x },
+-	{ PCI_VDEVICE(MARVELL, 0x5080), chip_5080 },
+-	{ PCI_VDEVICE(MARVELL, 0x5081), chip_508x },
+-	/* RocketRAID 1720/174x have different identifiers */
+-	{ PCI_VDEVICE(TTI, 0x1720), chip_6042 },
+-	{ PCI_VDEVICE(TTI, 0x1740), chip_6042 },
+-	{ PCI_VDEVICE(TTI, 0x1742), chip_6042 },
+-
+-	{ PCI_VDEVICE(MARVELL, 0x6040), chip_604x },
+-	{ PCI_VDEVICE(MARVELL, 0x6041), chip_604x },
+-	{ PCI_VDEVICE(MARVELL, 0x6042), chip_6042 },
+-	{ PCI_VDEVICE(MARVELL, 0x6080), chip_608x },
+-	{ PCI_VDEVICE(MARVELL, 0x6081), chip_608x },
+-
+-	{ PCI_VDEVICE(ADAPTEC2, 0x0241), chip_604x },
+-
+-	/* Adaptec 1430SA */
+-	{ PCI_VDEVICE(ADAPTEC2, 0x0243), chip_7042 },
+-
+-	/* Marvell 7042 support */
+-	{ PCI_VDEVICE(MARVELL, 0x7042), chip_7042 },
+-
+-	/* Highpoint RocketRAID PCIe series */
+-	{ PCI_VDEVICE(TTI, 0x2300), chip_7042 },
+-	{ PCI_VDEVICE(TTI, 0x2310), chip_7042 },
+-
+-	{ }			/* terminate list */
+-};
+-
+ static const struct mv_hw_ops mv5xxx_ops = {
+ 	.phy_errata		= mv5_phy_errata,
+ 	.enable_leds		= mv5_enable_leds,
+@@ -4307,6 +4276,36 @@ static int mv_pci_init_one(struct pci_dev *pdev,
+ static int mv_pci_device_resume(struct pci_dev *pdev);
+ #endif
+ 
++static const struct pci_device_id mv_pci_tbl[] = {
++	{ PCI_VDEVICE(MARVELL, 0x5040), chip_504x },
++	{ PCI_VDEVICE(MARVELL, 0x5041), chip_504x },
++	{ PCI_VDEVICE(MARVELL, 0x5080), chip_5080 },
++	{ PCI_VDEVICE(MARVELL, 0x5081), chip_508x },
++	/* RocketRAID 1720/174x have different identifiers */
++	{ PCI_VDEVICE(TTI, 0x1720), chip_6042 },
++	{ PCI_VDEVICE(TTI, 0x1740), chip_6042 },
++	{ PCI_VDEVICE(TTI, 0x1742), chip_6042 },
++
++	{ PCI_VDEVICE(MARVELL, 0x6040), chip_604x },
++	{ PCI_VDEVICE(MARVELL, 0x6041), chip_604x },
++	{ PCI_VDEVICE(MARVELL, 0x6042), chip_6042 },
++	{ PCI_VDEVICE(MARVELL, 0x6080), chip_608x },
++	{ PCI_VDEVICE(MARVELL, 0x6081), chip_608x },
++
++	{ PCI_VDEVICE(ADAPTEC2, 0x0241), chip_604x },
++
++	/* Adaptec 1430SA */
++	{ PCI_VDEVICE(ADAPTEC2, 0x0243), chip_7042 },
++
++	/* Marvell 7042 support */
++	{ PCI_VDEVICE(MARVELL, 0x7042), chip_7042 },
++
++	/* Highpoint RocketRAID PCIe series */
++	{ PCI_VDEVICE(TTI, 0x2300), chip_7042 },
++	{ PCI_VDEVICE(TTI, 0x2310), chip_7042 },
++
++	{ }			/* terminate list */
++};
+ 
+ static struct pci_driver mv_pci_driver = {
+ 	.name			= DRV_NAME,
+@@ -4319,6 +4318,7 @@ static struct pci_driver mv_pci_driver = {
+ #endif
+ 
+ };
++MODULE_DEVICE_TABLE(pci, mv_pci_tbl);
+ 
+ /**
+  *      mv_print_info - Dump key info to kernel log for perusal.
+@@ -4491,7 +4491,6 @@ static void __exit mv_exit(void)
+ MODULE_AUTHOR("Brett Russ");
+ MODULE_DESCRIPTION("SCSI low-level driver for Marvell SATA controllers");
+ MODULE_LICENSE("GPL v2");
+-MODULE_DEVICE_TABLE(pci, mv_pci_tbl);
+ MODULE_VERSION(DRV_VERSION);
+ MODULE_ALIAS("platform:" DRV_NAME);
+ 
+diff --git a/drivers/ata/sata_sx4.c b/drivers/ata/sata_sx4.c
+index 4c01190a5e370..c95685f693a68 100644
+--- a/drivers/ata/sata_sx4.c
++++ b/drivers/ata/sata_sx4.c
+@@ -1004,8 +1004,7 @@ static void pdc20621_get_from_dimm(struct ata_host *host, void *psource,
+ 
+ 	offset -= (idx * window_size);
+ 	idx++;
+-	dist = ((long) (window_size - (offset + size))) >= 0 ? size :
+-		(long) (window_size - offset);
++	dist = min(size, window_size - offset);
+ 	memcpy_fromio(psource, dimm_mmio + offset / 4, dist);
+ 
+ 	psource += dist;
+@@ -1053,8 +1052,7 @@ static void pdc20621_put_to_dimm(struct ata_host *host, void *psource,
+ 	readl(mmio + PDC_DIMM_WINDOW_CTLR);
+ 	offset -= (idx * window_size);
+ 	idx++;
+-	dist = ((long)(s32)(window_size - (offset + size))) >= 0 ? size :
+-		(long) (window_size - offset);
++	dist = min(size, window_size - offset);
+ 	memcpy_toio(dimm_mmio + offset / 4, psource, dist);
+ 	writel(0x01, mmio + PDC_GENERAL_CTLR);
+ 	readl(mmio + PDC_GENERAL_CTLR);
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index d98cab88c38af..2c978941b488e 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -53,6 +53,7 @@ static unsigned int defer_sync_state_count = 1;
+ static unsigned int defer_fw_devlink_count;
+ static LIST_HEAD(deferred_fw_devlink);
+ static DEFINE_MUTEX(defer_fw_devlink_lock);
++static struct workqueue_struct *device_link_wq;
+ static bool fw_devlink_is_permissive(void);
+ 
+ #ifdef CONFIG_SRCU
+@@ -364,12 +365,26 @@ static void devlink_dev_release(struct device *dev)
+ 	/*
+ 	 * It may take a while to complete this work because of the SRCU
+ 	 * synchronization in device_link_release_fn() and if the consumer or
+-	 * supplier devices get deleted when it runs, so put it into the "long"
+-	 * workqueue.
++	 * supplier devices get deleted when it runs, so put it into the
++	 * dedicated workqueue.
+ 	 */
+-	queue_work(system_long_wq, &link->rm_work);
++	queue_work(device_link_wq, &link->rm_work);
+ }
+ 
++/**
++ * device_link_wait_removal - Wait for ongoing devlink removal jobs to terminate
++ */
++void device_link_wait_removal(void)
++{
++	/*
++	 * devlink removal jobs are queued in the dedicated work queue.
++	 * To be sure that all removal jobs are terminated, ensure that any
++	 * scheduled work has run to completion.
++	 */
++	flush_workqueue(device_link_wq);
++}
++EXPORT_SYMBOL_GPL(device_link_wait_removal);
++
+ static struct class devlink_class = {
+ 	.name = "devlink",
+ 	.owner = THIS_MODULE,
+@@ -3415,9 +3430,14 @@ int __init devices_init(void)
+ 	sysfs_dev_char_kobj = kobject_create_and_add("char", dev_kobj);
+ 	if (!sysfs_dev_char_kobj)
+ 		goto char_kobj_err;
++	device_link_wq = alloc_workqueue("device_link_wq", 0, 0);
++	if (!device_link_wq)
++		goto wq_err;
+ 
+ 	return 0;
+ 
++ wq_err:
++	kobject_put(sysfs_dev_char_kobj);
+  char_kobj_err:
+ 	kobject_put(sysfs_dev_block_kobj);
+  block_kobj_err:
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 2db1e0e8c1a7d..e3aed8333f097 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -591,6 +591,12 @@ ssize_t __weak cpu_show_spec_rstack_overflow(struct device *dev,
+ 	return sysfs_emit(buf, "Not affected\n");
+ }
+ 
++ssize_t __weak cpu_show_reg_file_data_sampling(struct device *dev,
++					       struct device_attribute *attr, char *buf)
++{
++	return sysfs_emit(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+@@ -604,6 +610,7 @@ static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL);
+ static DEVICE_ATTR(retbleed, 0444, cpu_show_retbleed, NULL);
+ static DEVICE_ATTR(gather_data_sampling, 0444, cpu_show_gds, NULL);
+ static DEVICE_ATTR(spec_rstack_overflow, 0444, cpu_show_spec_rstack_overflow, NULL);
++static DEVICE_ATTR(reg_file_data_sampling, 0444, cpu_show_reg_file_data_sampling, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+@@ -619,6 +626,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_retbleed.attr,
+ 	&dev_attr_gather_data_sampling.attr,
+ 	&dev_attr_spec_rstack_overflow.attr,
++	&dev_attr_reg_file_data_sampling.attr,
+ 	NULL
+ };
+ 
+diff --git a/drivers/base/power/wakeirq.c b/drivers/base/power/wakeirq.c
+index aea690c64e394..4f4310724fee5 100644
+--- a/drivers/base/power/wakeirq.c
++++ b/drivers/base/power/wakeirq.c
+@@ -365,8 +365,10 @@ void dev_pm_enable_wake_irq_complete(struct device *dev)
+ 		return;
+ 
+ 	if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED &&
+-	    wirq->status & WAKE_IRQ_DEDICATED_REVERSE)
++	    wirq->status & WAKE_IRQ_DEDICATED_REVERSE) {
+ 		enable_irq(wirq->irq);
++		wirq->status |= WAKE_IRQ_DEDICATED_ENABLED;
++	}
+ }
+ 
+ /**
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 88ce5f0ffc4ba..2538bdee31d8a 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -350,7 +350,7 @@ int btintel_read_version(struct hci_dev *hdev, struct intel_version *ver)
+ 		return PTR_ERR(skb);
+ 	}
+ 
+-	if (skb->len != sizeof(*ver)) {
++	if (!skb || skb->len != sizeof(*ver)) {
+ 		bt_dev_err(hdev, "Intel version event size mismatch");
+ 		kfree_skb(skb);
+ 		return -EILSEQ;
+diff --git a/drivers/clk/qcom/gcc-ipq6018.c b/drivers/clk/qcom/gcc-ipq6018.c
+index 4c5c7a8f41d08..b9844e41cf99d 100644
+--- a/drivers/clk/qcom/gcc-ipq6018.c
++++ b/drivers/clk/qcom/gcc-ipq6018.c
+@@ -1557,6 +1557,7 @@ static struct clk_regmap_div nss_ubi0_div_clk_src = {
+ 
+ static const struct freq_tbl ftbl_pcie_aux_clk_src[] = {
+ 	F(24000000, P_XO, 1, 0, 0),
++	{ }
+ };
+ 
+ static const struct clk_parent_data gcc_xo_gpll0_core_pi_sleep_clk[] = {
+@@ -1737,6 +1738,7 @@ static const struct freq_tbl ftbl_sdcc_ice_core_clk_src[] = {
+ 	F(160000000, P_GPLL0, 5, 0, 0),
+ 	F(216000000, P_GPLL6, 5, 0, 0),
+ 	F(308570000, P_GPLL6, 3.5, 0, 0),
++	{ }
+ };
+ 
+ static const struct clk_parent_data gcc_xo_gpll0_gpll6_gpll0_div2[] = {
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index 0393154fea2f9..649e75a41f7af 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -972,6 +972,7 @@ static struct clk_rcg2 pcie0_axi_clk_src = {
+ 
+ static const struct freq_tbl ftbl_pcie_aux_clk_src[] = {
+ 	F(19200000, P_XO, 1, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 pcie0_aux_clk_src = {
+@@ -1077,6 +1078,7 @@ static const struct freq_tbl ftbl_sdcc_ice_core_clk_src[] = {
+ 	F(19200000, P_XO, 1, 0, 0),
+ 	F(160000000, P_GPLL0, 5, 0, 0),
+ 	F(308570000, P_GPLL6, 3.5, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 sdcc1_ice_core_clk_src = {
+diff --git a/drivers/clk/qcom/gcc-sdm845.c b/drivers/clk/qcom/gcc-sdm845.c
+index 90f7febaf5288..a6b07aa8eb650 100644
+--- a/drivers/clk/qcom/gcc-sdm845.c
++++ b/drivers/clk/qcom/gcc-sdm845.c
+@@ -3646,3 +3646,4 @@ module_exit(gcc_sdm845_exit);
+ MODULE_DESCRIPTION("QTI GCC SDM845 Driver");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("platform:gcc-sdm845");
++MODULE_SOFTDEP("pre: rpmhpd");
+diff --git a/drivers/clk/qcom/mmcc-apq8084.c b/drivers/clk/qcom/mmcc-apq8084.c
+index fbfcf00067394..c2fd0e8f4bc09 100644
+--- a/drivers/clk/qcom/mmcc-apq8084.c
++++ b/drivers/clk/qcom/mmcc-apq8084.c
+@@ -333,6 +333,7 @@ static struct freq_tbl ftbl_mmss_axi_clk[] = {
+ 	F(333430000, P_MMPLL1, 3.5, 0, 0),
+ 	F(400000000, P_MMPLL0, 2, 0, 0),
+ 	F(466800000, P_MMPLL1, 2.5, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 mmss_axi_clk_src = {
+@@ -357,6 +358,7 @@ static struct freq_tbl ftbl_ocmemnoc_clk[] = {
+ 	F(150000000, P_GPLL0, 4, 0, 0),
+ 	F(228570000, P_MMPLL0, 3.5, 0, 0),
+ 	F(320000000, P_MMPLL0, 2.5, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 ocmemnoc_clk_src = {
+diff --git a/drivers/clk/qcom/mmcc-msm8974.c b/drivers/clk/qcom/mmcc-msm8974.c
+index 015426262d080..dfc377463a7af 100644
+--- a/drivers/clk/qcom/mmcc-msm8974.c
++++ b/drivers/clk/qcom/mmcc-msm8974.c
+@@ -283,6 +283,7 @@ static struct freq_tbl ftbl_mmss_axi_clk[] = {
+ 	F(291750000, P_MMPLL1, 4, 0, 0),
+ 	F(400000000, P_MMPLL0, 2, 0, 0),
+ 	F(466800000, P_MMPLL1, 2.5, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 mmss_axi_clk_src = {
+@@ -307,6 +308,7 @@ static struct freq_tbl ftbl_ocmemnoc_clk[] = {
+ 	F(150000000, P_GPLL0, 4, 0, 0),
+ 	F(291750000, P_MMPLL1, 4, 0, 0),
+ 	F(400000000, P_MMPLL0, 2, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 ocmemnoc_clk_src = {
+diff --git a/drivers/cpufreq/brcmstb-avs-cpufreq.c b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+index 38ec0fedb247f..552db816ed22c 100644
+--- a/drivers/cpufreq/brcmstb-avs-cpufreq.c
++++ b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+@@ -481,10 +481,11 @@ static bool brcm_avs_is_firmware_loaded(struct private_data *priv)
+ static unsigned int brcm_avs_cpufreq_get(unsigned int cpu)
+ {
+ 	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
++	struct private_data *priv;
++
+ 	if (!policy)
+ 		return 0;
+-	struct private_data *priv = policy->driver_data;
+-
++	priv = policy->driver_data;
+ 	cpufreq_cpu_put(policy);
+ 
+ 	return brcm_avs_get_frequency(priv->base);
+diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
+index e363ae04aac62..44cc596ca0a5e 100644
+--- a/drivers/cpufreq/cpufreq-dt.c
++++ b/drivers/cpufreq/cpufreq-dt.c
+@@ -251,7 +251,7 @@ static int dt_cpufreq_early_init(struct device *dev, int cpu)
+ 	if (!priv)
+ 		return -ENOMEM;
+ 
+-	if (!alloc_cpumask_var(&priv->cpus, GFP_KERNEL))
++	if (!zalloc_cpumask_var(&priv->cpus, GFP_KERNEL))
+ 		return -ENOMEM;
+ 
+ 	priv->cpu_dev = cpu_dev;
+diff --git a/drivers/crypto/qat/qat_common/adf_aer.c b/drivers/crypto/qat/qat_common/adf_aer.c
+index d2ae293d0df6a..dc0feedf52702 100644
+--- a/drivers/crypto/qat/qat_common/adf_aer.c
++++ b/drivers/crypto/qat/qat_common/adf_aer.c
+@@ -95,18 +95,28 @@ static void adf_device_reset_worker(struct work_struct *work)
+ 	if (adf_dev_init(accel_dev) || adf_dev_start(accel_dev)) {
+ 		/* The device hanged and we can't restart it so stop here */
+ 		dev_err(&GET_DEV(accel_dev), "Restart device failed\n");
+-		kfree(reset_data);
++		if (reset_data->mode == ADF_DEV_RESET_ASYNC ||
++		    completion_done(&reset_data->compl))
++			kfree(reset_data);
+ 		WARN(1, "QAT: device restart failed. Device is unusable\n");
+ 		return;
+ 	}
+ 	adf_dev_restarted_notify(accel_dev);
+ 	clear_bit(ADF_STATUS_RESTARTING, &accel_dev->status);
+ 
+-	/* The dev is back alive. Notify the caller if in sync mode */
+-	if (reset_data->mode == ADF_DEV_RESET_SYNC)
+-		complete(&reset_data->compl);
+-	else
++	/*
++	 * The dev is back alive. Notify the caller if in sync mode
++	 *
++	 * If device restart will take a more time than expected,
++	 * the schedule_reset() function can timeout and exit. This can be
++	 * detected by calling the completion_done() function. In this case
++	 * the reset_data structure needs to be freed here.
++	 */
++	if (reset_data->mode == ADF_DEV_RESET_ASYNC ||
++	    completion_done(&reset_data->compl))
+ 		kfree(reset_data);
++	else
++		complete(&reset_data->compl);
+ }
+ 
+ static int adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev,
+@@ -139,8 +149,9 @@ static int adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev,
+ 			dev_err(&GET_DEV(accel_dev),
+ 				"Reset device timeout expired\n");
+ 			ret = -EFAULT;
++		} else {
++			kfree(reset_data);
+ 		}
+-		kfree(reset_data);
+ 		return ret;
+ 	}
+ 	return 0;
+diff --git a/drivers/firmware/efi/vars.c b/drivers/firmware/efi/vars.c
+index cae590bd08f27..eaed1ddcc803b 100644
+--- a/drivers/firmware/efi/vars.c
++++ b/drivers/firmware/efi/vars.c
+@@ -415,7 +415,7 @@ int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *),
+ 		void *data, bool duplicates, struct list_head *head)
+ {
+ 	const struct efivar_operations *ops;
+-	unsigned long variable_name_size = 1024;
++	unsigned long variable_name_size = 512;
+ 	efi_char16_t *variable_name;
+ 	efi_status_t status;
+ 	efi_guid_t vendor_guid;
+@@ -438,12 +438,13 @@ int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *),
+ 	}
+ 
+ 	/*
+-	 * Per EFI spec, the maximum storage allocated for both
+-	 * the variable name and variable data is 1024 bytes.
++	 * A small set of old UEFI implementations reject sizes
++	 * above a certain threshold, the lowest seen in the wild
++	 * is 512.
+ 	 */
+ 
+ 	do {
+-		variable_name_size = 1024;
++		variable_name_size = 512;
+ 
+ 		status = ops->get_next_variable(&variable_name_size,
+ 						variable_name,
+@@ -491,9 +492,13 @@ int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *),
+ 			break;
+ 		case EFI_NOT_FOUND:
+ 			break;
++		case EFI_BUFFER_TOO_SMALL:
++			pr_warn("efivars: Variable name size exceeds maximum (%lu > 512)\n",
++				variable_name_size);
++			status = EFI_NOT_FOUND;
++			break;
+ 		default:
+-			printk(KERN_WARNING "efivars: get_next_variable: status=%lx\n",
+-				status);
++			pr_warn("efivars: get_next_variable: status=%lx\n", status);
+ 			status = EFI_NOT_FOUND;
+ 			break;
+ 		}
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 8cc51cec988a4..799a91a064a1b 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -959,8 +959,8 @@ static int kfd_ioctl_get_process_apertures_new(struct file *filp,
+ 	 * nodes, but not more than args->num_of_nodes as that is
+ 	 * the amount of memory allocated by user
+ 	 */
+-	pa = kzalloc((sizeof(struct kfd_process_device_apertures) *
+-				args->num_of_nodes), GFP_KERNEL);
++	pa = kcalloc(args->num_of_nodes, sizeof(struct kfd_process_device_apertures),
++		     GFP_KERNEL);
+ 	if (!pa)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+index 22c77e96f6a54..02d22f62c0031 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+@@ -641,10 +641,20 @@ void dcn30_set_avmute(struct pipe_ctx *pipe_ctx, bool enable)
+ 	if (pipe_ctx == NULL)
+ 		return;
+ 
+-	if (dc_is_hdmi_tmds_signal(pipe_ctx->stream->signal) && pipe_ctx->stream_res.stream_enc != NULL)
++	if (dc_is_hdmi_tmds_signal(pipe_ctx->stream->signal) && pipe_ctx->stream_res.stream_enc != NULL) {
+ 		pipe_ctx->stream_res.stream_enc->funcs->set_avmute(
+ 				pipe_ctx->stream_res.stream_enc,
+ 				enable);
++
++		/* Wait for two frame to make sure AV mute is sent out */
++		if (enable) {
++			pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VACTIVE);
++			pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VBLANK);
++			pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VACTIVE);
++			pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VBLANK);
++			pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VACTIVE);
++		}
++	}
+ }
+ 
+ void dcn30_update_info_frame(struct pipe_ctx *pipe_ctx)
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+index 972f2600f967f..05d96fa681354 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+@@ -405,6 +405,9 @@ enum mod_hdcp_status mod_hdcp_hdcp2_create_session(struct mod_hdcp *hdcp)
+ 	hdcp_cmd = (struct ta_hdcp_shared_memory *)psp->hdcp_context.hdcp_shared_buf;
+ 	memset(hdcp_cmd, 0, sizeof(struct ta_hdcp_shared_memory));
+ 
++	if (!display)
++		return MOD_HDCP_STATUS_DISPLAY_NOT_FOUND;
++
+ 	hdcp_cmd->in_msg.hdcp2_create_session_v2.display_handle = display->index;
+ 
+ 	if (hdcp->connection.link.adjust.hdcp2.force_type == MOD_HDCP_FORCE_TYPE_0)
+diff --git a/drivers/gpu/drm/amd/display/modules/inc/mod_stats.h b/drivers/gpu/drm/amd/display/modules/inc/mod_stats.h
+index 4220fd8fdd60c..54cd86060f4d6 100644
+--- a/drivers/gpu/drm/amd/display/modules/inc/mod_stats.h
++++ b/drivers/gpu/drm/amd/display/modules/inc/mod_stats.h
+@@ -57,10 +57,10 @@ void mod_stats_update_event(struct mod_stats *mod_stats,
+ 		unsigned int length);
+ 
+ void mod_stats_update_flip(struct mod_stats *mod_stats,
+-		unsigned long timestamp_in_ns);
++		unsigned long long timestamp_in_ns);
+ 
+ void mod_stats_update_vupdate(struct mod_stats *mod_stats,
+-		unsigned long timestamp_in_ns);
++		unsigned long long timestamp_in_ns);
+ 
+ void mod_stats_update_freesync(struct mod_stats *mod_stats,
+ 		unsigned int v_total_min,
+diff --git a/drivers/gpu/drm/drm_panel.c b/drivers/gpu/drm/drm_panel.c
+index f634371c717a8..7fd3de89ed079 100644
+--- a/drivers/gpu/drm/drm_panel.c
++++ b/drivers/gpu/drm/drm_panel.c
+@@ -207,19 +207,24 @@ EXPORT_SYMBOL(drm_panel_disable);
+  * The modes probed from the panel are automatically added to the connector
+  * that the panel is attached to.
+  *
+- * Return: The number of modes available from the panel on success or a
+- * negative error code on failure.
++ * Return: The number of modes available from the panel on success, or 0 on
++ * failure (no modes).
+  */
+ int drm_panel_get_modes(struct drm_panel *panel,
+ 			struct drm_connector *connector)
+ {
+ 	if (!panel)
+-		return -EINVAL;
++		return 0;
+ 
+-	if (panel->funcs && panel->funcs->get_modes)
+-		return panel->funcs->get_modes(panel, connector);
++	if (panel->funcs && panel->funcs->get_modes) {
++		int num;
+ 
+-	return -EOPNOTSUPP;
++		num = panel->funcs->get_modes(panel, connector);
++		if (num > 0)
++			return num;
++	}
++
++	return 0;
+ }
+ EXPORT_SYMBOL(drm_panel_get_modes);
+ 
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+index a9a3afaef9a1c..edf9387069cdc 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+@@ -511,7 +511,7 @@ static struct drm_driver etnaviv_drm_driver = {
+ 	.desc               = "etnaviv DRM",
+ 	.date               = "20151214",
+ 	.major              = 1,
+-	.minor              = 3,
++	.minor              = 4,
+ };
+ 
+ /*
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_hwdb.c b/drivers/gpu/drm/etnaviv/etnaviv_hwdb.c
+index 167971a09be79..e9b777ab3be7f 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_hwdb.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_hwdb.c
+@@ -73,6 +73,9 @@ static const struct etnaviv_chip_identity etnaviv_chip_identities[] = {
+ bool etnaviv_fill_identity_from_hwdb(struct etnaviv_gpu *gpu)
+ {
+ 	struct etnaviv_chip_identity *ident = &gpu->identity;
++	const u32 product_id = ident->product_id;
++	const u32 customer_id = ident->customer_id;
++	const u32 eco_id = ident->eco_id;
+ 	int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(etnaviv_chip_identities); i++) {
+@@ -86,6 +89,12 @@ bool etnaviv_fill_identity_from_hwdb(struct etnaviv_gpu *gpu)
+ 			 etnaviv_chip_identities[i].eco_id == ~0U)) {
+ 			memcpy(ident, &etnaviv_chip_identities[i],
+ 			       sizeof(*ident));
++
++			/* Restore some id values as ~0U aka 'don't care' might been used. */
++			ident->product_id = product_id;
++			ident->customer_id = customer_id;
++			ident->eco_id = eco_id;
++
+ 			return true;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_vidi.c b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+index e96436e11a36c..e1ffe8a28b649 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_vidi.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+@@ -315,14 +315,14 @@ static int vidi_get_modes(struct drm_connector *connector)
+ 	 */
+ 	if (!ctx->raw_edid) {
+ 		DRM_DEV_DEBUG_KMS(ctx->dev, "raw_edid is null.\n");
+-		return -EFAULT;
++		return 0;
+ 	}
+ 
+ 	edid_len = (1 + ctx->raw_edid->extensions) * EDID_LENGTH;
+ 	edid = kmemdup(ctx->raw_edid, edid_len, GFP_KERNEL);
+ 	if (!edid) {
+ 		DRM_DEV_DEBUG_KMS(ctx->dev, "failed to allocate edid\n");
+-		return -ENOMEM;
++		return 0;
+ 	}
+ 
+ 	drm_connector_update_edid_property(connector, edid);
+diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c b/drivers/gpu/drm/exynos/exynos_hdmi.c
+index 981bffacda243..576fcf1807164 100644
+--- a/drivers/gpu/drm/exynos/exynos_hdmi.c
++++ b/drivers/gpu/drm/exynos/exynos_hdmi.c
+@@ -878,11 +878,11 @@ static int hdmi_get_modes(struct drm_connector *connector)
+ 	int ret;
+ 
+ 	if (!hdata->ddc_adpt)
+-		return -ENODEV;
++		return 0;
+ 
+ 	edid = drm_get_edid(connector, hdata->ddc_adpt);
+ 	if (!edid)
+-		return -ENODEV;
++		return 0;
+ 
+ 	hdata->dvi_mode = !drm_detect_hdmi_monitor(edid);
+ 	DRM_DEV_DEBUG_KMS(hdata->dev, "%s : width[%d] x height[%d]\n",
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+index f7b2e07e22298..f9fdbd79c0f37 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
++++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+@@ -250,9 +250,6 @@ static int __engine_park(struct intel_wakeref *wf)
+ 	intel_engine_park_heartbeat(engine);
+ 	intel_breadcrumbs_park(engine->breadcrumbs);
+ 
+-	/* Must be reset upon idling, or we may miss the busy wakeup. */
+-	GEM_BUG_ON(engine->execlists.queue_priority_hint != INT_MIN);
+-
+ 	if (engine->park)
+ 		engine->park(engine);
+ 
+diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
+index ee9b33c3aff83..5a414d989dca5 100644
+--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
++++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
+@@ -5032,6 +5032,9 @@ static void execlists_park(struct intel_engine_cs *engine)
+ {
+ 	cancel_timer(&engine->execlists.timer);
+ 	cancel_timer(&engine->execlists.preempt);
++
++	/* Reset upon idling, or we may delay the busy wakeup. */
++	WRITE_ONCE(engine->execlists.queue_priority_hint, INT_MIN);
+ }
+ 
+ void intel_execlists_set_default_submission(struct intel_engine_cs *engine)
+diff --git a/drivers/gpu/drm/imx/parallel-display.c b/drivers/gpu/drm/imx/parallel-display.c
+index b61bfa84b6bbd..bcd6b9ee8eea6 100644
+--- a/drivers/gpu/drm/imx/parallel-display.c
++++ b/drivers/gpu/drm/imx/parallel-display.c
+@@ -65,14 +65,14 @@ static int imx_pd_connector_get_modes(struct drm_connector *connector)
+ 		int ret;
+ 
+ 		if (!mode)
+-			return -EINVAL;
++			return 0;
+ 
+ 		ret = of_get_drm_display_mode(np, &imxpd->mode,
+ 					      &imxpd->bus_flags,
+ 					      OF_USE_NATIVE_MODE);
+ 		if (ret) {
+ 			drm_mode_destroy(connector->dev, mode);
+-			return ret;
++			return 0;
+ 		}
+ 
+ 		drm_mode_copy(mode, &imxpd->mode);
+diff --git a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/ttm/ttm_memory.c
+index 89d50f38c0f2c..5af52012fc5c6 100644
+--- a/drivers/gpu/drm/ttm/ttm_memory.c
++++ b/drivers/gpu/drm/ttm/ttm_memory.c
+@@ -431,8 +431,10 @@ int ttm_mem_global_init(struct ttm_mem_global *glob)
+ 
+ 	si_meminfo(&si);
+ 
++	spin_lock(&glob->lock);
+ 	/* set it as 0 by default to keep original behavior of OOM */
+ 	glob->lower_mem_limit = 0;
++	spin_unlock(&glob->lock);
+ 
+ 	ret = ttm_mem_init_kernel_zone(glob, &si);
+ 	if (unlikely(ret != 0))
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 7e8620838de9c..6d01258349faa 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -197,7 +197,7 @@ static int vc4_hdmi_connector_get_modes(struct drm_connector *connector)
+ 	edid = drm_get_edid(connector, vc4_hdmi->ddc);
+ 	cec_s_phys_addr_from_edid(vc4_hdmi->cec_adap, edid);
+ 	if (!edid)
+-		return -ENODEV;
++		return 0;
+ 
+ 	vc4_encoder->hdmi_monitor = drm_detect_hdmi_monitor(edid);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c b/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c
+index f41550797970b..4da4bf3b7f0b3 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c
+@@ -713,7 +713,7 @@ static int vmw_binding_scrub_cb(struct vmw_ctx_bindinfo *bi, bool rebind)
+  * without checking which bindings actually need to be emitted
+  *
+  * @cbs: Pointer to the context's struct vmw_ctx_binding_state
+- * @bi: Pointer to where the binding info array is stored in @cbs
++ * @biv: Pointer to where the binding info array is stored in @cbs
+  * @max_num: Maximum number of entries in the @bi array.
+  *
+  * Scans the @bi array for bindings and builds a buffer of view id data.
+@@ -723,11 +723,9 @@ static int vmw_binding_scrub_cb(struct vmw_ctx_bindinfo *bi, bool rebind)
+  * contains the command data.
+  */
+ static void vmw_collect_view_ids(struct vmw_ctx_binding_state *cbs,
+-				 const struct vmw_ctx_bindinfo *bi,
++				 const struct vmw_ctx_bindinfo_view *biv,
+ 				 u32 max_num)
+ {
+-	const struct vmw_ctx_bindinfo_view *biv =
+-		container_of(bi, struct vmw_ctx_bindinfo_view, bi);
+ 	unsigned long i;
+ 
+ 	cbs->bind_cmd_count = 0;
+@@ -835,7 +833,7 @@ static int vmw_emit_set_sr(struct vmw_ctx_binding_state *cbs,
+  */
+ static int vmw_emit_set_rt(struct vmw_ctx_binding_state *cbs)
+ {
+-	const struct vmw_ctx_bindinfo *loc = &cbs->render_targets[0].bi;
++	const struct vmw_ctx_bindinfo_view *loc = &cbs->render_targets[0];
+ 	struct {
+ 		SVGA3dCmdHeader header;
+ 		SVGA3dCmdDXSetRenderTargets body;
+@@ -871,7 +869,7 @@ static int vmw_emit_set_rt(struct vmw_ctx_binding_state *cbs)
+  * without checking which bindings actually need to be emitted
+  *
+  * @cbs: Pointer to the context's struct vmw_ctx_binding_state
+- * @bi: Pointer to where the binding info array is stored in @cbs
++ * @biso: Pointer to where the binding info array is stored in @cbs
+  * @max_num: Maximum number of entries in the @bi array.
+  *
+  * Scans the @bi array for bindings and builds a buffer of SVGA3dSoTarget data.
+@@ -881,11 +879,9 @@ static int vmw_emit_set_rt(struct vmw_ctx_binding_state *cbs)
+  * contains the command data.
+  */
+ static void vmw_collect_so_targets(struct vmw_ctx_binding_state *cbs,
+-				   const struct vmw_ctx_bindinfo *bi,
++				   const struct vmw_ctx_bindinfo_so_target *biso,
+ 				   u32 max_num)
+ {
+-	const struct vmw_ctx_bindinfo_so_target *biso =
+-		container_of(bi, struct vmw_ctx_bindinfo_so_target, bi);
+ 	unsigned long i;
+ 	SVGA3dSoTarget *so_buffer = (SVGA3dSoTarget *) cbs->bind_cmd_buffer;
+ 
+@@ -916,7 +912,7 @@ static void vmw_collect_so_targets(struct vmw_ctx_binding_state *cbs,
+  */
+ static int vmw_emit_set_so_target(struct vmw_ctx_binding_state *cbs)
+ {
+-	const struct vmw_ctx_bindinfo *loc = &cbs->so_targets[0].bi;
++	const struct vmw_ctx_bindinfo_so_target *loc = &cbs->so_targets[0];
+ 	struct {
+ 		SVGA3dCmdHeader header;
+ 		SVGA3dCmdDXSetSOTargets body;
+@@ -1063,7 +1059,7 @@ static int vmw_emit_set_vb(struct vmw_ctx_binding_state *cbs)
+ 
+ static int vmw_emit_set_uav(struct vmw_ctx_binding_state *cbs)
+ {
+-	const struct vmw_ctx_bindinfo *loc = &cbs->ua_views[0].views[0].bi;
++	const struct vmw_ctx_bindinfo_view *loc = &cbs->ua_views[0].views[0];
+ 	struct {
+ 		SVGA3dCmdHeader header;
+ 		SVGA3dCmdDXSetUAViews body;
+@@ -1093,7 +1089,7 @@ static int vmw_emit_set_uav(struct vmw_ctx_binding_state *cbs)
+ 
+ static int vmw_emit_set_cs_uav(struct vmw_ctx_binding_state *cbs)
+ {
+-	const struct vmw_ctx_bindinfo *loc = &cbs->ua_views[1].views[0].bi;
++	const struct vmw_ctx_bindinfo_view *loc = &cbs->ua_views[1].views[0];
+ 	struct {
+ 		SVGA3dCmdHeader header;
+ 		SVGA3dCmdDXSetCSUAViews body;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
+index e8d66182cd7b5..ea2f2f937eb30 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
+@@ -459,9 +459,9 @@ int vmw_bo_cpu_blit(struct ttm_buffer_object *dst,
+ 	int ret = 0;
+ 
+ 	/* Buffer objects need to be either pinned or reserved: */
+-	if (!(dst->mem.placement & TTM_PL_FLAG_NO_EVICT))
++	if (!(dst->pin_count))
+ 		dma_resv_assert_held(dst->base.resv);
+-	if (!(src->mem.placement & TTM_PL_FLAG_NO_EVICT))
++	if (!(src->pin_count))
+ 		dma_resv_assert_held(src->base.resv);
+ 
+ 	if (!ttm_tt_is_populated(dst->ttm)) {
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+index 813f1b1480941..9a66ba2543263 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+@@ -106,7 +106,7 @@ int vmw_bo_pin_in_placement(struct vmw_private *dev_priv,
+ 	if (unlikely(ret != 0))
+ 		goto err;
+ 
+-	if (buf->pin_count > 0)
++	if (buf->base.pin_count > 0)
+ 		ret = ttm_bo_mem_compat(placement, &bo->mem,
+ 					&new_flags) == true ? 0 : -EINVAL;
+ 	else
+@@ -155,7 +155,7 @@ int vmw_bo_pin_in_vram_or_gmr(struct vmw_private *dev_priv,
+ 	if (unlikely(ret != 0))
+ 		goto err;
+ 
+-	if (buf->pin_count > 0) {
++	if (buf->base.pin_count > 0) {
+ 		ret = ttm_bo_mem_compat(&vmw_vram_gmr_placement, &bo->mem,
+ 					&new_flags) == true ? 0 : -EINVAL;
+ 		goto out_unreserve;
+@@ -246,12 +246,12 @@ int vmw_bo_pin_in_start_of_vram(struct vmw_private *dev_priv,
+ 	if (bo->mem.mem_type == TTM_PL_VRAM &&
+ 	    bo->mem.start < bo->num_pages &&
+ 	    bo->mem.start > 0 &&
+-	    buf->pin_count == 0) {
++	    buf->base.pin_count == 0) {
+ 		ctx.interruptible = false;
+ 		(void) ttm_bo_validate(bo, &vmw_sys_placement, &ctx);
+ 	}
+ 
+-	if (buf->pin_count > 0)
++	if (buf->base.pin_count > 0)
+ 		ret = ttm_bo_mem_compat(&placement, &bo->mem,
+ 					&new_flags) == true ? 0 : -EINVAL;
+ 	else
+@@ -343,23 +343,13 @@ void vmw_bo_pin_reserved(struct vmw_buffer_object *vbo, bool pin)
+ 
+ 	dma_resv_assert_held(bo->base.resv);
+ 
+-	if (pin) {
+-		if (vbo->pin_count++ > 0)
+-			return;
+-	} else {
+-		WARN_ON(vbo->pin_count <= 0);
+-		if (--vbo->pin_count > 0)
+-			return;
+-	}
++	if (pin == !!bo->pin_count)
++		return;
+ 
+ 	pl.fpfn = 0;
+ 	pl.lpfn = 0;
+ 	pl.mem_type = bo->mem.mem_type;
+ 	pl.flags = bo->mem.placement;
+-	if (pin)
+-		pl.flags |= TTM_PL_FLAG_NO_EVICT;
+-	else
+-		pl.flags &= ~TTM_PL_FLAG_NO_EVICT;
+ 
+ 	memset(&placement, 0, sizeof(placement));
+ 	placement.num_placement = 1;
+@@ -368,8 +358,12 @@ void vmw_bo_pin_reserved(struct vmw_buffer_object *vbo, bool pin)
+ 	ret = ttm_bo_validate(bo, &placement, &ctx);
+ 
+ 	BUG_ON(ret != 0 || bo->mem.mem_type != old_mem_type);
+-}
+ 
++	if (pin)
++		ttm_bo_pin(bo);
++	else
++		ttm_bo_unpin(bo);
++}
+ 
+ /**
+  * vmw_bo_map_and_cache - Map a buffer object and cache the map
+@@ -487,6 +481,49 @@ static void vmw_user_bo_destroy(struct ttm_buffer_object *bo)
+ 	ttm_prime_object_kfree(vmw_user_bo, prime);
+ }
+ 
++/**
++ * vmw_bo_create_kernel - Create a pinned BO for internal kernel use.
++ *
++ * @dev_priv: Pointer to the device private struct
++ * @size: size of the BO we need
++ * @placement: where to put it
++ * @p_bo: resulting BO
++ *
++ * Creates and pin a simple BO for in kernel use.
++ */
++int vmw_bo_create_kernel(struct vmw_private *dev_priv, unsigned long size,
++			 struct ttm_placement *placement,
++			 struct ttm_buffer_object **p_bo)
++{
++	unsigned npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
++	struct ttm_operation_ctx ctx = { false, false };
++	struct ttm_buffer_object *bo;
++	size_t acc_size;
++	int ret;
++
++	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
++	if (unlikely(!bo))
++		return -ENOMEM;
++
++	acc_size = ttm_round_pot(sizeof(*bo));
++	acc_size += ttm_round_pot(npages * sizeof(void *));
++	acc_size += ttm_round_pot(sizeof(struct ttm_tt));
++	ret = ttm_bo_init_reserved(&dev_priv->bdev, bo, size,
++				   ttm_bo_type_device, placement, 0,
++				   &ctx, acc_size, NULL, NULL, NULL);
++	if (unlikely(ret))
++		goto error_free;
++
++	ttm_bo_pin(bo);
++	ttm_bo_unreserve(bo);
++	*p_bo = bo;
++
++	return 0;
++
++error_free:
++	kfree(bo);
++	return ret;
++}
+ 
+ /**
+  * vmw_bo_init - Initialize a vmw buffer object
+@@ -496,6 +533,7 @@ static void vmw_user_bo_destroy(struct ttm_buffer_object *bo)
+  * @size: Buffer object size in bytes.
+  * @placement: Initial placement.
+  * @interruptible: Whether waits should be performed interruptible.
++ * @pin: If the BO should be created pinned at a fixed location.
+  * @bo_free: The buffer object destructor.
+  * Returns: Zero on success, negative error code on error.
+  *
+@@ -504,9 +542,10 @@ static void vmw_user_bo_destroy(struct ttm_buffer_object *bo)
+ int vmw_bo_init(struct vmw_private *dev_priv,
+ 		struct vmw_buffer_object *vmw_bo,
+ 		size_t size, struct ttm_placement *placement,
+-		bool interruptible,
++		bool interruptible, bool pin,
+ 		void (*bo_free)(struct ttm_buffer_object *bo))
+ {
++	struct ttm_operation_ctx ctx = { interruptible, false };
+ 	struct ttm_bo_device *bdev = &dev_priv->bdev;
+ 	size_t acc_size;
+ 	int ret;
+@@ -520,11 +559,16 @@ int vmw_bo_init(struct vmw_private *dev_priv,
+ 	vmw_bo->base.priority = 3;
+ 	vmw_bo->res_tree = RB_ROOT;
+ 
+-	ret = ttm_bo_init(bdev, &vmw_bo->base, size,
+-			  ttm_bo_type_device, placement,
+-			  0, interruptible, acc_size,
+-			  NULL, NULL, bo_free);
+-	return ret;
++	ret = ttm_bo_init_reserved(bdev, &vmw_bo->base, size,
++				   ttm_bo_type_device, placement,
++				   0, &ctx, acc_size, NULL, NULL, bo_free);
++	if (unlikely(ret))
++		return ret;
++
++	if (pin)
++		ttm_bo_pin(&vmw_bo->base);
++	ttm_bo_unreserve(&vmw_bo->base);
++	return 0;
+ }
+ 
+ 
+@@ -613,7 +657,7 @@ int vmw_user_bo_alloc(struct vmw_private *dev_priv,
+ 	ret = vmw_bo_init(dev_priv, &user_bo->vbo, size,
+ 			  (dev_priv->has_mob) ?
+ 			  &vmw_sys_placement :
+-			  &vmw_vram_sys_placement, true,
++			  &vmw_vram_sys_placement, true, false,
+ 			  &vmw_user_bo_destroy);
+ 	if (unlikely(ret != 0))
+ 		return ret;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
+index 3b41cf63110ad..87a39721e5bc0 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
+@@ -514,7 +514,7 @@ static void vmw_cmdbuf_work_func(struct work_struct *work)
+ 	struct vmw_cmdbuf_man *man =
+ 		container_of(work, struct vmw_cmdbuf_man, work);
+ 	struct vmw_cmdbuf_header *entry, *next;
+-	uint32_t dummy;
++	uint32_t dummy = 0;
+ 	bool send_fence = false;
+ 	struct list_head restart_head[SVGA_CB_CONTEXT_MAX];
+ 	int i;
+@@ -1245,9 +1245,9 @@ int vmw_cmdbuf_set_pool_size(struct vmw_cmdbuf_man *man,
+ 		    !dev_priv->has_mob)
+ 			return -ENOMEM;
+ 
+-		ret = ttm_bo_create(&dev_priv->bdev, size, ttm_bo_type_device,
+-				    &vmw_mob_ne_placement, 0, false,
+-				    &man->cmd_space);
++		ret = vmw_bo_create_kernel(dev_priv, size,
++					   &vmw_mob_placement,
++					   &man->cmd_space);
+ 		if (ret)
+ 			return ret;
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c
+index 44d858ce4ce7f..f212368c03129 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c
+@@ -168,8 +168,8 @@ void vmw_cmdbuf_res_revert(struct list_head *list)
+ 			vmw_cmdbuf_res_free(entry->man, entry);
+ 			break;
+ 		case VMW_CMDBUF_RES_DEL:
+-			ret = drm_ht_insert_item(&entry->man->resources,
+-						 &entry->hash);
++			ret = drm_ht_insert_item(&entry->man->resources, &entry->hash);
++			BUG_ON(ret);
+ 			list_del(&entry->head);
+ 			list_add_tail(&entry->head, &entry->man->list);
+ 			entry->state = VMW_CMDBUF_RES_COMMITTED;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c
+index 65e8e7a977246..984d8884357d9 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c
+@@ -410,8 +410,8 @@ static int vmw_cotable_resize(struct vmw_resource *res, size_t new_size)
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+-	ret = vmw_bo_init(dev_priv, buf, new_size, &vmw_mob_ne_placement,
+-			  true, vmw_bo_bo_free);
++	ret = vmw_bo_init(dev_priv, buf, new_size, &vmw_mob_placement,
++			  true, true, vmw_bo_bo_free);
+ 	if (ret) {
+ 		DRM_ERROR("Failed initializing new cotable MOB.\n");
+ 		return ret;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index 31e3e5c9f3622..bdb7a5e965601 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -372,7 +372,7 @@ static int vmw_dummy_query_bo_create(struct vmw_private *dev_priv)
+ 		return -ENOMEM;
+ 
+ 	ret = vmw_bo_init(dev_priv, vbo, PAGE_SIZE,
+-			  &vmw_sys_ne_placement, false,
++			  &vmw_sys_placement, false, true,
+ 			  &vmw_bo_bo_free);
+ 	if (unlikely(ret != 0))
+ 		return ret;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+index 0a79c57c7db64..fa285c20d6dac 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+@@ -99,7 +99,6 @@ struct vmw_fpriv {
+  * struct vmw_buffer_object - TTM buffer object with vmwgfx additions
+  * @base: The TTM buffer object
+  * @res_tree: RB tree of resources using this buffer object as a backing MOB
+- * @pin_count: pin depth
+  * @cpu_writers: Number of synccpu write grabs. Protected by reservation when
+  * increased. May be decreased without reservation.
+  * @dx_query_ctx: DX context if this buffer object is used as a DX query MOB
+@@ -110,7 +109,6 @@ struct vmw_fpriv {
+ struct vmw_buffer_object {
+ 	struct ttm_buffer_object base;
+ 	struct rb_root res_tree;
+-	s32 pin_count;
+ 	atomic_t cpu_writers;
+ 	/* Not ref-counted.  Protected by binding_mutex */
+ 	struct vmw_resource *dx_query_ctx;
+@@ -845,10 +843,14 @@ extern void vmw_bo_get_guest_ptr(const struct ttm_buffer_object *buf,
+ 				 SVGAGuestPtr *ptr);
+ extern void vmw_bo_pin_reserved(struct vmw_buffer_object *bo, bool pin);
+ extern void vmw_bo_bo_free(struct ttm_buffer_object *bo);
++extern int vmw_bo_create_kernel(struct vmw_private *dev_priv,
++				unsigned long size,
++				struct ttm_placement *placement,
++				struct ttm_buffer_object **p_bo);
+ extern int vmw_bo_init(struct vmw_private *dev_priv,
+ 		       struct vmw_buffer_object *vmw_bo,
+ 		       size_t size, struct ttm_placement *placement,
+-		       bool interruptible,
++		       bool interruptible, bool pin,
+ 		       void (*bo_free)(struct ttm_buffer_object *bo));
+ extern int vmw_user_bo_verify_access(struct ttm_buffer_object *bo,
+ 				     struct ttm_object_file *tfile);
+@@ -1005,16 +1007,13 @@ extern void vmw_validation_mem_init_ttm(struct vmw_private *dev_priv,
+ 
+ extern const size_t vmw_tt_size;
+ extern struct ttm_placement vmw_vram_placement;
+-extern struct ttm_placement vmw_vram_ne_placement;
+ extern struct ttm_placement vmw_vram_sys_placement;
+ extern struct ttm_placement vmw_vram_gmr_placement;
+ extern struct ttm_placement vmw_vram_gmr_ne_placement;
+ extern struct ttm_placement vmw_sys_placement;
+-extern struct ttm_placement vmw_sys_ne_placement;
+ extern struct ttm_placement vmw_evictable_placement;
+ extern struct ttm_placement vmw_srf_placement;
+ extern struct ttm_placement vmw_mob_placement;
+-extern struct ttm_placement vmw_mob_ne_placement;
+ extern struct ttm_placement vmw_nonfixed_placement;
+ extern struct ttm_bo_driver vmw_bo_driver;
+ extern const struct vmw_sg_table *
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index 00082c679170a..616f6cb622783 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -467,7 +467,7 @@ static int vmw_resource_context_res_add(struct vmw_private *dev_priv,
+ 	    vmw_res_type(ctx) == vmw_res_dx_context) {
+ 		for (i = 0; i < cotable_max; ++i) {
+ 			res = vmw_context_cotable(ctx, i);
+-			if (IS_ERR(res))
++			if (IS_ERR_OR_NULL(res))
+ 				continue;
+ 
+ 			ret = vmw_execbuf_res_noctx_val_add(sw_context, res,
+@@ -1272,6 +1272,8 @@ static int vmw_cmd_dx_define_query(struct vmw_private *dev_priv,
+ 		return -EINVAL;
+ 
+ 	cotable_res = vmw_context_cotable(ctx_node->ctx, SVGA_COTABLE_DXQUERY);
++	if (IS_ERR_OR_NULL(cotable_res))
++		return cotable_res ? PTR_ERR(cotable_res) : -EINVAL;
+ 	ret = vmw_cotable_notify(cotable_res, cmd->body.queryId);
+ 
+ 	return ret;
+@@ -2450,6 +2452,8 @@ static int vmw_cmd_dx_view_define(struct vmw_private *dev_priv,
+ 		return ret;
+ 
+ 	res = vmw_context_cotable(ctx_node->ctx, vmw_view_cotables[view_type]);
++	if (IS_ERR_OR_NULL(res))
++		return res ? PTR_ERR(res) : -EINVAL;
+ 	ret = vmw_cotable_notify(res, cmd->defined_id);
+ 	if (unlikely(ret != 0))
+ 		return ret;
+@@ -2535,6 +2539,8 @@ static int vmw_cmd_dx_so_define(struct vmw_private *dev_priv,
+ 
+ 	so_type = vmw_so_cmd_to_type(header->id);
+ 	res = vmw_context_cotable(ctx_node->ctx, vmw_so_cotables[so_type]);
++	if (IS_ERR_OR_NULL(res))
++		return res ? PTR_ERR(res) : -EINVAL;
+ 	cmd = container_of(header, typeof(*cmd), header);
+ 	ret = vmw_cotable_notify(res, cmd->defined_id);
+ 
+@@ -2653,6 +2659,8 @@ static int vmw_cmd_dx_define_shader(struct vmw_private *dev_priv,
+ 		return -EINVAL;
+ 
+ 	res = vmw_context_cotable(ctx_node->ctx, SVGA_COTABLE_DXSHADER);
++	if (IS_ERR_OR_NULL(res))
++		return res ? PTR_ERR(res) : -EINVAL;
+ 	ret = vmw_cotable_notify(res, cmd->body.shaderId);
+ 	if (ret)
+ 		return ret;
+@@ -2974,6 +2982,8 @@ static int vmw_cmd_dx_define_streamoutput(struct vmw_private *dev_priv,
+ 	}
+ 
+ 	res = vmw_context_cotable(ctx_node->ctx, SVGA_COTABLE_STREAMOUTPUT);
++	if (IS_ERR_OR_NULL(res))
++		return res ? PTR_ERR(res) : -EINVAL;
+ 	ret = vmw_cotable_notify(res, cmd->body.soid);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+index 97d9d2557447b..3923acc3ab1e5 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+@@ -406,7 +406,7 @@ static int vmw_fb_create_bo(struct vmw_private *vmw_priv,
+ 
+ 	ret = vmw_bo_init(vmw_priv, vmw_bo, size,
+ 			      &vmw_sys_placement,
+-			      false,
++			      false, false,
+ 			      &vmw_bo_bo_free);
+ 	if (unlikely(ret != 0))
+ 		goto err_unlock; /* init frees the buffer on failure */
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
+index 7f95ed6aa2241..fb0797b380dd2 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
+@@ -494,11 +494,13 @@ static void vmw_mob_pt_setup(struct vmw_mob *mob,
+ {
+ 	unsigned long num_pt_pages = 0;
+ 	struct ttm_buffer_object *bo = mob->pt_bo;
+-	struct vmw_piter save_pt_iter;
++	struct vmw_piter save_pt_iter = {0};
+ 	struct vmw_piter pt_iter;
+ 	const struct vmw_sg_table *vsgt;
+ 	int ret;
+ 
++	BUG_ON(num_data_pages == 0);
++
+ 	ret = ttm_bo_reserve(bo, false, true, NULL);
+ 	BUG_ON(ret != 0);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+index 15b5bde693242..751582f5ab0b1 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+@@ -154,6 +154,7 @@ static unsigned long vmw_port_hb_out(struct rpc_channel *channel,
+ 	/* HB port can't access encrypted memory. */
+ 	if (hb && !mem_encrypt_active()) {
+ 		unsigned long bp = channel->cookie_high;
++		u32 channel_id = (channel->channel_id << 16);
+ 
+ 		si = (uintptr_t) msg;
+ 		di = channel->cookie_low;
+@@ -161,7 +162,7 @@ static unsigned long vmw_port_hb_out(struct rpc_channel *channel,
+ 		VMW_PORT_HB_OUT(
+ 			(MESSAGE_STATUS_SUCCESS << 16) | VMW_PORT_CMD_HB_MSG,
+ 			msg_len, si, di,
+-			VMWARE_HYPERVISOR_HB | (channel->channel_id << 16) |
++			VMWARE_HYPERVISOR_HB | channel_id |
+ 			VMWARE_HYPERVISOR_OUT,
+ 			VMW_HYPERVISOR_MAGIC, bp,
+ 			eax, ebx, ecx, edx, si, di);
+@@ -209,6 +210,7 @@ static unsigned long vmw_port_hb_in(struct rpc_channel *channel, char *reply,
+ 	/* HB port can't access encrypted memory */
+ 	if (hb && !mem_encrypt_active()) {
+ 		unsigned long bp = channel->cookie_low;
++		u32 channel_id = (channel->channel_id << 16);
+ 
+ 		si = channel->cookie_high;
+ 		di = (uintptr_t) reply;
+@@ -216,7 +218,7 @@ static unsigned long vmw_port_hb_in(struct rpc_channel *channel, char *reply,
+ 		VMW_PORT_HB_IN(
+ 			(MESSAGE_STATUS_SUCCESS << 16) | VMW_PORT_CMD_HB_MSG,
+ 			reply_len, si, di,
+-			VMWARE_HYPERVISOR_HB | (channel->channel_id << 16),
++			VMWARE_HYPERVISOR_HB | channel_id,
+ 			VMW_HYPERVISOR_MAGIC, bp,
+ 			eax, ebx, ecx, edx, si, di);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+index c0f156078ddae..26f88d64879f1 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+@@ -114,6 +114,7 @@ static void vmw_resource_release(struct kref *kref)
+ 	    container_of(kref, struct vmw_resource, kref);
+ 	struct vmw_private *dev_priv = res->dev_priv;
+ 	int id;
++	int ret;
+ 	struct idr *idr = &dev_priv->res_idr[res->func->res_type];
+ 
+ 	spin_lock(&dev_priv->resource_lock);
+@@ -122,7 +123,8 @@ static void vmw_resource_release(struct kref *kref)
+ 	if (res->backup) {
+ 		struct ttm_buffer_object *bo = &res->backup->base;
+ 
+-		ttm_bo_reserve(bo, false, false, NULL);
++		ret = ttm_bo_reserve(bo, false, false, NULL);
++		BUG_ON(ret);
+ 		if (vmw_resource_mob_attached(res) &&
+ 		    res->func->unbind != NULL) {
+ 			struct ttm_validate_buffer val_buf;
+@@ -370,7 +372,7 @@ static int vmw_resource_buf_alloc(struct vmw_resource *res,
+ 
+ 	ret = vmw_bo_init(res->dev_priv, backup, res->backup_size,
+ 			      res->func->backup_placement,
+-			      interruptible,
++			      interruptible, false,
+ 			      &vmw_bo_bo_free);
+ 	if (unlikely(ret != 0))
+ 		goto out_no_bo;
+@@ -1001,8 +1003,10 @@ int vmw_resource_pin(struct vmw_resource *res, bool interruptible)
+ 		if (res->backup) {
+ 			vbo = res->backup;
+ 
+-			ttm_bo_reserve(&vbo->base, interruptible, false, NULL);
+-			if (!vbo->pin_count) {
++			ret = ttm_bo_reserve(&vbo->base, interruptible, false, NULL);
++			if (ret)
++				goto out_no_validate;
++			if (!vbo->base.pin_count) {
+ 				ret = ttm_bo_validate
+ 					(&vbo->base,
+ 					 res->func->backup_placement,
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
+index 2b6590344468d..9c8109efefbd6 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
+@@ -451,8 +451,8 @@ vmw_sou_primary_plane_prepare_fb(struct drm_plane *plane,
+ 	 */
+ 	vmw_overlay_pause_all(dev_priv);
+ 	ret = vmw_bo_init(dev_priv, vps->bo, size,
+-			      &vmw_vram_ne_placement,
+-			      false, &vmw_bo_bo_free);
++			      &vmw_vram_placement,
++			      false, true, &vmw_bo_bo_free);
+ 	vmw_overlay_resume_all(dev_priv);
+ 	if (ret) {
+ 		vps->bo = NULL; /* vmw_bo_init frees on error */
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
+index e139fdfd16356..f328aa5839a22 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
+@@ -978,8 +978,8 @@ int vmw_compat_shader_add(struct vmw_private *dev_priv,
+ 	if (unlikely(!buf))
+ 		return -ENOMEM;
+ 
+-	ret = vmw_bo_init(dev_priv, buf, size, &vmw_sys_ne_placement,
+-			      true, vmw_bo_bo_free);
++	ret = vmw_bo_init(dev_priv, buf, size, &vmw_sys_placement,
++			      true, true, vmw_bo_bo_free);
+ 	if (unlikely(ret != 0))
+ 		goto out;
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_so.c b/drivers/gpu/drm/vmwgfx/vmwgfx_so.c
+index 3f97b61dd5d83..9330f1a0f1743 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_so.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_so.c
+@@ -538,7 +538,8 @@ const SVGACOTableType vmw_so_cotables[] = {
+ 	[vmw_so_ds] = SVGA_COTABLE_DEPTHSTENCIL,
+ 	[vmw_so_rs] = SVGA_COTABLE_RASTERIZERSTATE,
+ 	[vmw_so_ss] = SVGA_COTABLE_SAMPLER,
+-	[vmw_so_so] = SVGA_COTABLE_STREAMOUTPUT
++	[vmw_so_so] = SVGA_COTABLE_STREAMOUTPUT,
++	[vmw_so_max]= SVGA_COTABLE_MAX
+ };
+ 
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+index 73116ec70ba59..89b3356ec27f0 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+@@ -37,13 +37,6 @@ static const struct ttm_place vram_placement_flags = {
+ 	.flags = TTM_PL_FLAG_CACHED
+ };
+ 
+-static const struct ttm_place vram_ne_placement_flags = {
+-	.fpfn = 0,
+-	.lpfn = 0,
+-	.mem_type = TTM_PL_VRAM,
+-	.flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_NO_EVICT
+-};
+-
+ static const struct ttm_place sys_placement_flags = {
+ 	.fpfn = 0,
+ 	.lpfn = 0,
+@@ -51,13 +44,6 @@ static const struct ttm_place sys_placement_flags = {
+ 	.flags = TTM_PL_FLAG_CACHED
+ };
+ 
+-static const struct ttm_place sys_ne_placement_flags = {
+-	.fpfn = 0,
+-	.lpfn = 0,
+-	.mem_type = TTM_PL_SYSTEM,
+-	.flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_NO_EVICT
+-};
+-
+ static const struct ttm_place gmr_placement_flags = {
+ 	.fpfn = 0,
+ 	.lpfn = 0,
+@@ -79,13 +65,6 @@ static const struct ttm_place mob_placement_flags = {
+ 	.flags = TTM_PL_FLAG_CACHED
+ };
+ 
+-static const struct ttm_place mob_ne_placement_flags = {
+-	.fpfn = 0,
+-	.lpfn = 0,
+-	.mem_type = VMW_PL_MOB,
+-	.flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_NO_EVICT
+-};
+-
+ struct ttm_placement vmw_vram_placement = {
+ 	.num_placement = 1,
+ 	.placement = &vram_placement_flags,
+@@ -158,13 +137,6 @@ struct ttm_placement vmw_vram_sys_placement = {
+ 	.busy_placement = &sys_placement_flags
+ };
+ 
+-struct ttm_placement vmw_vram_ne_placement = {
+-	.num_placement = 1,
+-	.placement = &vram_ne_placement_flags,
+-	.num_busy_placement = 1,
+-	.busy_placement = &vram_ne_placement_flags
+-};
+-
+ struct ttm_placement vmw_sys_placement = {
+ 	.num_placement = 1,
+ 	.placement = &sys_placement_flags,
+@@ -172,13 +144,6 @@ struct ttm_placement vmw_sys_placement = {
+ 	.busy_placement = &sys_placement_flags
+ };
+ 
+-struct ttm_placement vmw_sys_ne_placement = {
+-	.num_placement = 1,
+-	.placement = &sys_ne_placement_flags,
+-	.num_busy_placement = 1,
+-	.busy_placement = &sys_ne_placement_flags
+-};
+-
+ static const struct ttm_place evictable_placement_flags[] = {
+ 	{
+ 		.fpfn = 0,
+@@ -243,13 +208,6 @@ struct ttm_placement vmw_mob_placement = {
+ 	.busy_placement = &mob_placement_flags
+ };
+ 
+-struct ttm_placement vmw_mob_ne_placement = {
+-	.num_placement = 1,
+-	.num_busy_placement = 1,
+-	.placement = &mob_ne_placement_flags,
+-	.busy_placement = &mob_ne_placement_flags
+-};
+-
+ struct ttm_placement vmw_nonfixed_placement = {
+ 	.num_placement = 3,
+ 	.placement = nonfixed_placement_flags,
+@@ -817,11 +775,9 @@ int vmw_bo_create_and_populate(struct vmw_private *dev_priv,
+ 	struct ttm_buffer_object *bo;
+ 	int ret;
+ 
+-	ret = ttm_bo_create(&dev_priv->bdev, bo_size,
+-			    ttm_bo_type_device,
+-			    &vmw_sys_ne_placement,
+-			    0, false, &bo);
+-
++	ret = vmw_bo_create_kernel(dev_priv, bo_size,
++				   &vmw_sys_placement,
++				   &bo);
+ 	if (unlikely(ret != 0))
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c
+index e69bc373ae2e5..cc1cfc827bb9a 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c
+@@ -540,7 +540,7 @@ int vmw_validation_bo_validate_single(struct ttm_buffer_object *bo,
+ 	if (atomic_read(&vbo->cpu_writers))
+ 		return -EBUSY;
+ 
+-	if (vbo->pin_count > 0)
++	if (vbo->base.pin_count > 0)
+ 		return 0;
+ 
+ 	if (validate_as_mob)
+@@ -585,13 +585,13 @@ int vmw_validation_bo_validate(struct vmw_validation_context *ctx, bool intr)
+ 			container_of(entry->base.bo, typeof(*vbo), base);
+ 
+ 		if (entry->cpu_blit) {
+-			struct ttm_operation_ctx ctx = {
++			struct ttm_operation_ctx ttm_ctx = {
+ 				.interruptible = intr,
+ 				.no_wait_gpu = false
+ 			};
+ 
+ 			ret = ttm_bo_validate(entry->base.bo,
+-					      &vmw_nonfixed_placement, &ctx);
++					      &vmw_nonfixed_placement, &ttm_ctx);
+ 		} else {
+ 			ret = vmw_validation_bo_validate_single
+ 			(entry->base.bo, intr, entry->as_mob);
+diff --git a/drivers/hwmon/amc6821.c b/drivers/hwmon/amc6821.c
+index 6b1ce2242c618..60dfdb0f55231 100644
+--- a/drivers/hwmon/amc6821.c
++++ b/drivers/hwmon/amc6821.c
+@@ -934,10 +934,21 @@ static const struct i2c_device_id amc6821_id[] = {
+ 
+ MODULE_DEVICE_TABLE(i2c, amc6821_id);
+ 
++static const struct of_device_id __maybe_unused amc6821_of_match[] = {
++	{
++		.compatible = "ti,amc6821",
++		.data = (void *)amc6821,
++	},
++	{ }
++};
++
++MODULE_DEVICE_TABLE(of, amc6821_of_match);
++
+ static struct i2c_driver amc6821_driver = {
+ 	.class = I2C_CLASS_HWMON,
+ 	.driver = {
+ 		.name	= "amc6821",
++		.of_match_table = of_match_ptr(amc6821_of_match),
+ 	},
+ 	.probe_new = amc6821_probe,
+ 	.id_table = amc6821_id,
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index db1a25fbe2fa9..2a30b25c5e7e5 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -33,6 +33,7 @@ MODULE_AUTHOR("Sean Hefty");
+ MODULE_DESCRIPTION("InfiniBand CM");
+ MODULE_LICENSE("Dual BSD/GPL");
+ 
++#define CM_DESTROY_ID_WAIT_TIMEOUT 10000 /* msecs */
+ static const char * const ibcm_rej_reason_strs[] = {
+ 	[IB_CM_REJ_NO_QP]			= "no QP",
+ 	[IB_CM_REJ_NO_EEC]			= "no EEC",
+@@ -1056,10 +1057,20 @@ static void cm_reset_to_idle(struct cm_id_private *cm_id_priv)
+ 	}
+ }
+ 
++static noinline void cm_destroy_id_wait_timeout(struct ib_cm_id *cm_id)
++{
++	struct cm_id_private *cm_id_priv;
++
++	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
++	pr_err("%s: cm_id=%p timed out. state=%d refcnt=%d\n", __func__,
++	       cm_id, cm_id->state, refcount_read(&cm_id_priv->refcount));
++}
++
+ static void cm_destroy_id(struct ib_cm_id *cm_id, int err)
+ {
+ 	struct cm_id_private *cm_id_priv;
+ 	struct cm_work *work;
++	int ret;
+ 
+ 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
+ 	spin_lock_irq(&cm_id_priv->lock);
+@@ -1171,7 +1182,14 @@ static void cm_destroy_id(struct ib_cm_id *cm_id, int err)
+ 
+ 	xa_erase(&cm.local_id_table, cm_local_id(cm_id->local_id));
+ 	cm_deref_id(cm_id_priv);
+-	wait_for_completion(&cm_id_priv->comp);
++	do {
++		ret = wait_for_completion_timeout(&cm_id_priv->comp,
++						  msecs_to_jiffies(
++						  CM_DESTROY_ID_WAIT_TIMEOUT));
++		if (!ret) /* timeout happened */
++			cm_destroy_id_wait_timeout(cm_id);
++	} while (!ret);
++
+ 	while ((work = cm_dequeue_work(cm_id_priv)) != NULL)
+ 		cm_free_work(work);
+ 
+diff --git a/drivers/input/rmi4/rmi_driver.c b/drivers/input/rmi4/rmi_driver.c
+index 258d5fe3d395c..aa32371f04af6 100644
+--- a/drivers/input/rmi4/rmi_driver.c
++++ b/drivers/input/rmi4/rmi_driver.c
+@@ -1196,7 +1196,11 @@ static int rmi_driver_probe(struct device *dev)
+ 		}
+ 		rmi_driver_set_input_params(rmi_dev, data->input);
+ 		data->input->phys = devm_kasprintf(dev, GFP_KERNEL,
+-						"%s/input0", dev_name(dev));
++						   "%s/input0", dev_name(dev));
++		if (!data->input->phys) {
++			retval = -ENOMEM;
++			goto err;
++		}
+ 	}
+ 
+ 	retval = rmi_init_functions(data);
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 62cae34ca3b43..067be1d9f51cb 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -3937,7 +3937,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 		} else if (sscanf(opt_string, "sectors_per_bit:%llu%c", &llval, &dummy) == 1) {
+ 			log2_sectors_per_bitmap_bit = !llval ? 0 : __ilog2_u64(llval);
+ 		} else if (sscanf(opt_string, "bitmap_flush_interval:%u%c", &val, &dummy) == 1) {
+-			if (val >= (uint64_t)UINT_MAX * 1000 / HZ) {
++			if ((uint64_t)val >= (uint64_t)UINT_MAX * 1000 / HZ) {
+ 				r = -EINVAL;
+ 				ti->error = "Invalid bitmap_flush_interval argument";
+ 				goto bad;
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index e523ecdf947f4..99995b1804b32 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -4019,7 +4019,9 @@ static void raid_resume(struct dm_target *ti)
+ 		 * Take this opportunity to check whether any failed
+ 		 * devices are reachable again.
+ 		 */
++		mddev_lock_nointr(mddev);
+ 		attempt_restore_of_faulty_devices(rs);
++		mddev_unlock(mddev);
+ 	}
+ 
+ 	if (test_and_clear_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags)) {
+diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
+index 41735a25d50aa..de73fe79640fd 100644
+--- a/drivers/md/dm-snap.c
++++ b/drivers/md/dm-snap.c
+@@ -685,8 +685,10 @@ static void dm_exception_table_exit(struct dm_exception_table *et,
+ 	for (i = 0; i < size; i++) {
+ 		slot = et->table + i;
+ 
+-		hlist_bl_for_each_entry_safe(ex, pos, n, slot, hash_list)
++		hlist_bl_for_each_entry_safe(ex, pos, n, slot, hash_list) {
+ 			kmem_cache_free(mem, ex);
++			cond_resched();
++		}
+ 	}
+ 
+ 	vfree(et->table);
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 00995e60d46b1..9f114b9d8dc6b 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -36,6 +36,7 @@
+  */
+ 
+ #include <linux/blkdev.h>
++#include <linux/delay.h>
+ #include <linux/kthread.h>
+ #include <linux/raid/pq.h>
+ #include <linux/async_tx.h>
+@@ -6519,7 +6520,18 @@ static void raid5d(struct md_thread *thread)
+ 			spin_unlock_irq(&conf->device_lock);
+ 			md_check_recovery(mddev);
+ 			spin_lock_irq(&conf->device_lock);
++
++			/*
++			 * Waiting on MD_SB_CHANGE_PENDING below may deadlock
++			 * seeing md_check_recovery() is needed to clear
++			 * the flag when using mdmon.
++			 */
++			continue;
+ 		}
++
++		wait_event_lock_irq(mddev->sb_wait,
++			!test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags),
++			conf->device_lock);
+ 	}
+ 	pr_debug("%d stripes handled\n", handled);
+ 
+diff --git a/drivers/media/pci/sta2x11/sta2x11_vip.c b/drivers/media/pci/sta2x11/sta2x11_vip.c
+index 336df65c8af11..01ca940aecc2d 100644
+--- a/drivers/media/pci/sta2x11/sta2x11_vip.c
++++ b/drivers/media/pci/sta2x11/sta2x11_vip.c
+@@ -760,7 +760,7 @@ static const struct video_device video_dev_template = {
+ /**
+  * vip_irq - interrupt routine
+  * @irq: Number of interrupt ( not used, correct number is assumed )
+- * @vip: local data structure containing all information
++ * @data: local data structure containing all information
+  *
+  * check for both frame interrupts set ( top and bottom ).
+  * check FIFO overflow, but limit number of log messages after open.
+@@ -770,8 +770,9 @@ static const struct video_device video_dev_template = {
+  *
+  * IRQ_HANDLED, interrupt done.
+  */
+-static irqreturn_t vip_irq(int irq, struct sta2x11_vip *vip)
++static irqreturn_t vip_irq(int irq, void *data)
+ {
++	struct sta2x11_vip *vip = data;
+ 	unsigned int status;
+ 
+ 	status = reg_read(vip, DVP_ITS);
+@@ -1053,9 +1054,7 @@ static int sta2x11_vip_init_one(struct pci_dev *pdev,
+ 
+ 	spin_lock_init(&vip->slock);
+ 
+-	ret = request_irq(pdev->irq,
+-			  (irq_handler_t) vip_irq,
+-			  IRQF_SHARED, KBUILD_MODNAME, vip);
++	ret = request_irq(pdev->irq, vip_irq, IRQF_SHARED, KBUILD_MODNAME, vip);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "request_irq failed\n");
+ 		ret = -ENODEV;
+diff --git a/drivers/media/tuners/xc4000.c b/drivers/media/tuners/xc4000.c
+index ef9af052007cb..849df4d1c573c 100644
+--- a/drivers/media/tuners/xc4000.c
++++ b/drivers/media/tuners/xc4000.c
+@@ -1517,10 +1517,10 @@ static int xc4000_get_frequency(struct dvb_frontend *fe, u32 *freq)
+ {
+ 	struct xc4000_priv *priv = fe->tuner_priv;
+ 
++	mutex_lock(&priv->lock);
+ 	*freq = priv->freq_hz + priv->freq_offset;
+ 
+ 	if (debug) {
+-		mutex_lock(&priv->lock);
+ 		if ((priv->cur_fw.type
+ 		     & (BASE | FM | DTV6 | DTV7 | DTV78 | DTV8)) == BASE) {
+ 			u16	snr = 0;
+@@ -1531,8 +1531,8 @@ static int xc4000_get_frequency(struct dvb_frontend *fe, u32 *freq)
+ 				return 0;
+ 			}
+ 		}
+-		mutex_unlock(&priv->lock);
+ 	}
++	mutex_unlock(&priv->lock);
+ 
+ 	dprintk(1, "%s()\n", __func__);
+ 
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index eabbdf17b0c6d..28acedb10737d 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -112,6 +112,8 @@
+ #define MEI_DEV_ID_RPL_S      0x7A68  /* Raptor Lake Point S */
+ 
+ #define MEI_DEV_ID_MTL_M      0x7E70  /* Meteor Lake Point M */
++#define MEI_DEV_ID_ARL_S      0x7F68  /* Arrow Lake Point S */
++#define MEI_DEV_ID_ARL_H      0x7770  /* Arrow Lake Point H */
+ 
+ /*
+  * MEI HW Section
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index f2765d6b8c043..91586d8afcaa0 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -118,6 +118,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_CFG)},
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_MTL_M, MEI_ME_PCH15_CFG)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_ARL_S, MEI_ME_PCH15_CFG)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_ARL_H, MEI_ME_PCH15_CFG)},
+ 
+ 	/* required last entry */
+ 	{0, }
+diff --git a/drivers/misc/vmw_vmci/vmci_datagram.c b/drivers/misc/vmw_vmci/vmci_datagram.c
+index f50d22882476f..a0ad1f3a69f7e 100644
+--- a/drivers/misc/vmw_vmci/vmci_datagram.c
++++ b/drivers/misc/vmw_vmci/vmci_datagram.c
+@@ -234,7 +234,8 @@ static int dg_dispatch_as_host(u32 context_id, struct vmci_datagram *dg)
+ 
+ 			dg_info->in_dg_host_queue = true;
+ 			dg_info->entry = dst_entry;
+-			memcpy(&dg_info->msg, dg, dg_size);
++			dg_info->msg = *dg;
++			memcpy(&dg_info->msg_payload, dg + 1, dg->payload_size);
+ 
+ 			INIT_WORK(&dg_info->work, dg_delayed_dispatch);
+ 			schedule_work(&dg_info->work);
+@@ -377,7 +378,8 @@ int vmci_datagram_invoke_guest_handler(struct vmci_datagram *dg)
+ 
+ 		dg_info->in_dg_host_queue = false;
+ 		dg_info->entry = dst_entry;
+-		memcpy(&dg_info->msg, dg, VMCI_DG_SIZE(dg));
++		dg_info->msg = *dg;
++		memcpy(&dg_info->msg_payload, dg + 1, dg->payload_size);
+ 
+ 		INIT_WORK(&dg_info->work, dg_delayed_dispatch);
+ 		schedule_work(&dg_info->work);
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 2058f31a1bce6..71ecdb13477a5 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -359,7 +359,7 @@ static struct mmc_blk_ioc_data *mmc_blk_ioctl_copy_from_user(
+ 	struct mmc_blk_ioc_data *idata;
+ 	int err;
+ 
+-	idata = kmalloc(sizeof(*idata), GFP_KERNEL);
++	idata = kzalloc(sizeof(*idata), GFP_KERNEL);
+ 	if (!idata) {
+ 		err = -ENOMEM;
+ 		goto out;
+@@ -468,7 +468,7 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 	if (idata->flags & MMC_BLK_IOC_DROP)
+ 		return 0;
+ 
+-	if (idata->flags & MMC_BLK_IOC_SBC)
++	if (idata->flags & MMC_BLK_IOC_SBC && i > 0)
+ 		prev_idata = idatas[i - 1];
+ 
+ 	/*
+@@ -823,10 +823,11 @@ static const struct block_device_operations mmc_bdops = {
+ static int mmc_blk_part_switch_pre(struct mmc_card *card,
+ 				   unsigned int part_type)
+ {
+-	const unsigned int mask = EXT_CSD_PART_CONFIG_ACC_RPMB;
++	const unsigned int mask = EXT_CSD_PART_CONFIG_ACC_MASK;
++	const unsigned int rpmb = EXT_CSD_PART_CONFIG_ACC_RPMB;
+ 	int ret = 0;
+ 
+-	if ((part_type & mask) == mask) {
++	if ((part_type & mask) == rpmb) {
+ 		if (card->ext_csd.cmdq_en) {
+ 			ret = mmc_cmdq_disable(card);
+ 			if (ret)
+@@ -841,10 +842,11 @@ static int mmc_blk_part_switch_pre(struct mmc_card *card,
+ static int mmc_blk_part_switch_post(struct mmc_card *card,
+ 				    unsigned int part_type)
+ {
+-	const unsigned int mask = EXT_CSD_PART_CONFIG_ACC_RPMB;
++	const unsigned int mask = EXT_CSD_PART_CONFIG_ACC_MASK;
++	const unsigned int rpmb = EXT_CSD_PART_CONFIG_ACC_RPMB;
+ 	int ret = 0;
+ 
+-	if ((part_type & mask) == mask) {
++	if ((part_type & mask) == rpmb) {
+ 		mmc_retune_unpause(card->host);
+ 		if (card->reenable_cmdq && !card->ext_csd.cmdq_en)
+ 			ret = mmc_cmdq_enable(card);
+diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
+index abf36acb2641f..0482920a79681 100644
+--- a/drivers/mmc/host/tmio_mmc_core.c
++++ b/drivers/mmc/host/tmio_mmc_core.c
+@@ -216,6 +216,8 @@ static void tmio_mmc_reset_work(struct work_struct *work)
+ 	else
+ 		mrq->cmd->error = -ETIMEDOUT;
+ 
++	/* No new calls yet, but disallow concurrent tmio_mmc_done_work() */
++	host->mrq = ERR_PTR(-EBUSY);
+ 	host->cmd = NULL;
+ 	host->data = NULL;
+ 
+diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
+index 6bb0fca4a91d0..9a64190f30e9b 100644
+--- a/drivers/mtd/nand/raw/meson_nand.c
++++ b/drivers/mtd/nand/raw/meson_nand.c
+@@ -59,7 +59,7 @@
+ #define CMDRWGEN(cmd_dir, ran, bch, short_mode, page_size, pages)	\
+ 	(								\
+ 		(cmd_dir)			|			\
+-		((ran) << 19)			|			\
++		(ran)				|			\
+ 		((bch) << 14)			|			\
+ 		((short_mode) << 13)		|			\
+ 		(((page_size) & 0x7f) << 6)	|			\
+diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c
+index 6e95c4b1473e6..8081fc760d34f 100644
+--- a/drivers/mtd/ubi/fastmap.c
++++ b/drivers/mtd/ubi/fastmap.c
+@@ -86,9 +86,10 @@ size_t ubi_calc_fm_size(struct ubi_device *ubi)
+ 		sizeof(struct ubi_fm_scan_pool) +
+ 		sizeof(struct ubi_fm_scan_pool) +
+ 		(ubi->peb_count * sizeof(struct ubi_fm_ec)) +
+-		(sizeof(struct ubi_fm_eba) +
+-		(ubi->peb_count * sizeof(__be32))) +
+-		sizeof(struct ubi_fm_volhdr) * UBI_MAX_VOLUMES;
++		((sizeof(struct ubi_fm_eba) +
++		  sizeof(struct ubi_fm_volhdr)) *
++		 (UBI_MAX_VOLUMES + UBI_INT_VOL_COUNT)) +
++		(ubi->peb_count * sizeof(__be32));
+ 	return roundup(size, ubi->leb_size);
+ }
+ 
+diff --git a/drivers/mtd/ubi/vtbl.c b/drivers/mtd/ubi/vtbl.c
+index f700f0e4f2ec4..6e5489e233dd2 100644
+--- a/drivers/mtd/ubi/vtbl.c
++++ b/drivers/mtd/ubi/vtbl.c
+@@ -791,6 +791,12 @@ int ubi_read_volume_table(struct ubi_device *ubi, struct ubi_attach_info *ai)
+ 	 * The number of supported volumes is limited by the eraseblock size
+ 	 * and by the UBI_MAX_VOLUMES constant.
+ 	 */
++
++	if (ubi->leb_size < UBI_VTBL_RECORD_SIZE) {
++		ubi_err(ubi, "LEB size too small for a volume record");
++		return -EINVAL;
++	}
++
+ 	ubi->vtbl_slots = ubi->leb_size / UBI_VTBL_RECORD_SIZE;
+ 	if (ubi->vtbl_slots > UBI_MAX_VOLUMES)
+ 		ubi->vtbl_slots = UBI_MAX_VOLUMES;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_trace.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_trace.h
+index 5b0b71bd61200..e8e67321e9632 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_trace.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_trace.h
+@@ -24,7 +24,7 @@ TRACE_EVENT(hclge_pf_mbx_get,
+ 		__field(u8, code)
+ 		__field(u8, subcode)
+ 		__string(pciname, pci_name(hdev->pdev))
+-		__string(devname, &hdev->vport[0].nic.kinfo.netdev->name)
++		__string(devname, hdev->vport[0].nic.kinfo.netdev->name)
+ 		__array(u32, mbx_data, PF_GET_MBX_LEN)
+ 	),
+ 
+@@ -33,7 +33,7 @@ TRACE_EVENT(hclge_pf_mbx_get,
+ 		__entry->code = req->msg.code;
+ 		__entry->subcode = req->msg.subcode;
+ 		__assign_str(pciname, pci_name(hdev->pdev));
+-		__assign_str(devname, &hdev->vport[0].nic.kinfo.netdev->name);
++		__assign_str(devname, hdev->vport[0].nic.kinfo.netdev->name);
+ 		memcpy(__entry->mbx_data, req,
+ 		       sizeof(struct hclge_mbx_vf_to_pf_cmd));
+ 	),
+@@ -56,7 +56,7 @@ TRACE_EVENT(hclge_pf_mbx_send,
+ 		__field(u8, vfid)
+ 		__field(u16, code)
+ 		__string(pciname, pci_name(hdev->pdev))
+-		__string(devname, &hdev->vport[0].nic.kinfo.netdev->name)
++		__string(devname, hdev->vport[0].nic.kinfo.netdev->name)
+ 		__array(u32, mbx_data, PF_SEND_MBX_LEN)
+ 	),
+ 
+@@ -64,7 +64,7 @@ TRACE_EVENT(hclge_pf_mbx_send,
+ 		__entry->vfid = req->dest_vfid;
+ 		__entry->code = req->msg.code;
+ 		__assign_str(pciname, pci_name(hdev->pdev));
+-		__assign_str(devname, &hdev->vport[0].nic.kinfo.netdev->name);
++		__assign_str(devname, hdev->vport[0].nic.kinfo.netdev->name);
+ 		memcpy(__entry->mbx_data, req,
+ 		       sizeof(struct hclge_mbx_pf_to_vf_cmd));
+ 	),
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_trace.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_trace.h
+index e4bfb6191fef5..a208af567909f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_trace.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_trace.h
+@@ -23,7 +23,7 @@ TRACE_EVENT(hclge_vf_mbx_get,
+ 		__field(u8, vfid)
+ 		__field(u16, code)
+ 		__string(pciname, pci_name(hdev->pdev))
+-		__string(devname, &hdev->nic.kinfo.netdev->name)
++		__string(devname, hdev->nic.kinfo.netdev->name)
+ 		__array(u32, mbx_data, VF_GET_MBX_LEN)
+ 	),
+ 
+@@ -31,7 +31,7 @@ TRACE_EVENT(hclge_vf_mbx_get,
+ 		__entry->vfid = req->dest_vfid;
+ 		__entry->code = req->msg.code;
+ 		__assign_str(pciname, pci_name(hdev->pdev));
+-		__assign_str(devname, &hdev->nic.kinfo.netdev->name);
++		__assign_str(devname, hdev->nic.kinfo.netdev->name);
+ 		memcpy(__entry->mbx_data, req,
+ 		       sizeof(struct hclge_mbx_pf_to_vf_cmd));
+ 	),
+@@ -55,7 +55,7 @@ TRACE_EVENT(hclge_vf_mbx_send,
+ 		__field(u8, code)
+ 		__field(u8, subcode)
+ 		__string(pciname, pci_name(hdev->pdev))
+-		__string(devname, &hdev->nic.kinfo.netdev->name)
++		__string(devname, hdev->nic.kinfo.netdev->name)
+ 		__array(u32, mbx_data, VF_SEND_MBX_LEN)
+ 	),
+ 
+@@ -64,7 +64,7 @@ TRACE_EVENT(hclge_vf_mbx_send,
+ 		__entry->code = req->msg.code;
+ 		__entry->subcode = req->msg.subcode;
+ 		__assign_str(pciname, pci_name(hdev->pdev));
+-		__assign_str(devname, &hdev->nic.kinfo.netdev->name);
++		__assign_str(devname, hdev->nic.kinfo.netdev->name);
+ 		memcpy(__entry->mbx_data, req,
+ 		       sizeof(struct hclge_mbx_vf_to_pf_cmd));
+ 	),
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 58c87d79c1261..6ea2d94c3ddea 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -1230,8 +1230,11 @@ int i40e_count_filters(struct i40e_vsi *vsi)
+ 	int bkt;
+ 	int cnt = 0;
+ 
+-	hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist)
+-		++cnt;
++	hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist) {
++		if (f->state == I40E_FILTER_NEW ||
++		    f->state == I40E_FILTER_ACTIVE)
++			++cnt;
++	}
+ 
+ 	return cnt;
+ }
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index f79795cc91521..4f23243bbfbb6 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1573,8 +1573,8 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
+ {
+ 	struct i40e_hw *hw = &pf->hw;
+ 	struct i40e_vf *vf;
+-	int i, v;
+ 	u32 reg;
++	int i;
+ 
+ 	/* If we don't have any VFs, then there is nothing to reset */
+ 	if (!pf->num_alloc_vfs)
+@@ -1585,11 +1585,10 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
+ 		return false;
+ 
+ 	/* Begin reset on all VFs at once */
+-	for (v = 0; v < pf->num_alloc_vfs; v++) {
+-		vf = &pf->vf[v];
++	for (vf = &pf->vf[0]; vf < &pf->vf[pf->num_alloc_vfs]; ++vf) {
+ 		/* If VF is being reset no need to trigger reset again */
+ 		if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+-			i40e_trigger_vf_reset(&pf->vf[v], flr);
++			i40e_trigger_vf_reset(vf, flr);
+ 	}
+ 
+ 	/* HW requires some time to make sure it can flush the FIFO for a VF
+@@ -1598,14 +1597,13 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
+ 	 * the VFs using a simple iterator that increments once that VF has
+ 	 * finished resetting.
+ 	 */
+-	for (i = 0, v = 0; i < 10 && v < pf->num_alloc_vfs; i++) {
++	for (i = 0, vf = &pf->vf[0]; i < 10 && vf < &pf->vf[pf->num_alloc_vfs]; ++i) {
+ 		usleep_range(10000, 20000);
+ 
+ 		/* Check each VF in sequence, beginning with the VF to fail
+ 		 * the previous check.
+ 		 */
+-		while (v < pf->num_alloc_vfs) {
+-			vf = &pf->vf[v];
++		while (vf < &pf->vf[pf->num_alloc_vfs]) {
+ 			if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states)) {
+ 				reg = rd32(hw, I40E_VPGEN_VFRSTAT(vf->vf_id));
+ 				if (!(reg & I40E_VPGEN_VFRSTAT_VFRD_MASK))
+@@ -1615,7 +1613,7 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
+ 			/* If the current VF has finished resetting, move on
+ 			 * to the next VF in sequence.
+ 			 */
+-			v++;
++			++vf;
+ 		}
+ 	}
+ 
+@@ -1625,39 +1623,39 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
+ 	/* Display a warning if at least one VF didn't manage to reset in
+ 	 * time, but continue on with the operation.
+ 	 */
+-	if (v < pf->num_alloc_vfs)
++	if (vf < &pf->vf[pf->num_alloc_vfs])
+ 		dev_err(&pf->pdev->dev, "VF reset check timeout on VF %d\n",
+-			pf->vf[v].vf_id);
++			vf->vf_id);
+ 	usleep_range(10000, 20000);
+ 
+ 	/* Begin disabling all the rings associated with VFs, but do not wait
+ 	 * between each VF.
+ 	 */
+-	for (v = 0; v < pf->num_alloc_vfs; v++) {
++	for (vf = &pf->vf[0]; vf < &pf->vf[pf->num_alloc_vfs]; ++vf) {
+ 		/* On initial reset, we don't have any queues to disable */
+-		if (pf->vf[v].lan_vsi_idx == 0)
++		if (vf->lan_vsi_idx == 0)
+ 			continue;
+ 
+ 		/* If VF is reset in another thread just continue */
+ 		if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+ 			continue;
+ 
+-		i40e_vsi_stop_rings_no_wait(pf->vsi[pf->vf[v].lan_vsi_idx]);
++		i40e_vsi_stop_rings_no_wait(pf->vsi[vf->lan_vsi_idx]);
+ 	}
+ 
+ 	/* Now that we've notified HW to disable all of the VF rings, wait
+ 	 * until they finish.
+ 	 */
+-	for (v = 0; v < pf->num_alloc_vfs; v++) {
++	for (vf = &pf->vf[0]; vf < &pf->vf[pf->num_alloc_vfs]; ++vf) {
+ 		/* On initial reset, we don't have any queues to disable */
+-		if (pf->vf[v].lan_vsi_idx == 0)
++		if (vf->lan_vsi_idx == 0)
+ 			continue;
+ 
+ 		/* If VF is reset in another thread just continue */
+ 		if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+ 			continue;
+ 
+-		i40e_vsi_wait_queues_disabled(pf->vsi[pf->vf[v].lan_vsi_idx]);
++		i40e_vsi_wait_queues_disabled(pf->vsi[vf->lan_vsi_idx]);
+ 	}
+ 
+ 	/* Hw may need up to 50ms to finish disabling the RX queues. We
+@@ -1666,12 +1664,12 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
+ 	mdelay(50);
+ 
+ 	/* Finish the reset on each VF */
+-	for (v = 0; v < pf->num_alloc_vfs; v++) {
++	for (vf = &pf->vf[0]; vf < &pf->vf[pf->num_alloc_vfs]; ++vf) {
+ 		/* If VF is reset in another thread just continue */
+ 		if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+ 			continue;
+ 
+-		i40e_cleanup_reset_vf(&pf->vf[v]);
++		i40e_cleanup_reset_vf(vf);
+ 	}
+ 
+ 	i40e_flush(hw);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+index 319620856cba1..512da34e70a35 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+@@ -909,7 +909,13 @@ int ixgbe_ipsec_vf_add_sa(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+ 		goto err_out;
+ 	}
+ 
+-	xs = kzalloc(sizeof(*xs), GFP_KERNEL);
++	algo = xfrm_aead_get_byname(aes_gcm_name, IXGBE_IPSEC_AUTH_BITS, 1);
++	if (unlikely(!algo)) {
++		err = -ENOENT;
++		goto err_out;
++	}
++
++	xs = kzalloc(sizeof(*xs), GFP_ATOMIC);
+ 	if (unlikely(!xs)) {
+ 		err = -ENOMEM;
+ 		goto err_out;
+@@ -925,14 +931,8 @@ int ixgbe_ipsec_vf_add_sa(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+ 		memcpy(&xs->id.daddr.a4, sam->addr, sizeof(xs->id.daddr.a4));
+ 	xs->xso.dev = adapter->netdev;
+ 
+-	algo = xfrm_aead_get_byname(aes_gcm_name, IXGBE_IPSEC_AUTH_BITS, 1);
+-	if (unlikely(!algo)) {
+-		err = -ENOENT;
+-		goto err_xs;
+-	}
+-
+ 	aead_len = sizeof(*xs->aead) + IXGBE_IPSEC_KEY_BITS / 8;
+-	xs->aead = kzalloc(aead_len, GFP_KERNEL);
++	xs->aead = kzalloc(aead_len, GFP_ATOMIC);
+ 	if (unlikely(!xs->aead)) {
+ 		err = -ENOMEM;
+ 		goto err_xs;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 55dfe1a20bc99..7f82baf8e7403 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -403,6 +403,11 @@ int cgx_lmac_set_pause_frm(void *cgxd, int lmac_id,
+ 	if (!cgx || lmac_id >= cgx->lmac_count)
+ 		return -ENODEV;
+ 
++	cfg = cgx_read(cgx, lmac_id, CGXX_GMP_GMI_RXX_FRM_CTL);
++	cfg &= ~CGX_GMP_GMI_RXX_FRM_CTL_CTL_BCK;
++	cfg |= rx_pause ? CGX_GMP_GMI_RXX_FRM_CTL_CTL_BCK : 0x0;
++	cgx_write(cgx, lmac_id, CGXX_GMP_GMI_RXX_FRM_CTL, cfg);
++
+ 	cfg = cgx_read(cgx, lmac_id, CGXX_SMUX_RX_FRM_CTL);
+ 	cfg &= ~CGX_SMUX_RX_FRM_CTL_CTL_BCK;
+ 	cfg |= rx_pause ? CGX_SMUX_RX_FRM_CTL_CTL_BCK : 0x0;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index d6f7a2a58aee8..aada28868ac59 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -1598,7 +1598,7 @@ int otx2_open(struct net_device *netdev)
+ 	 * mcam entries are enabled to receive the packets. Hence disable the
+ 	 * packet I/O.
+ 	 */
+-	if (err == EIO)
++	if (err == -EIO)
+ 		goto err_disable_rxtx;
+ 	else if (err)
+ 		goto err_tx_stop_queues;
+diff --git a/drivers/net/ethernet/neterion/vxge/vxge-config.c b/drivers/net/ethernet/neterion/vxge/vxge-config.c
+index f5d48d7c4ce28..da48dd85770c0 100644
+--- a/drivers/net/ethernet/neterion/vxge/vxge-config.c
++++ b/drivers/net/ethernet/neterion/vxge/vxge-config.c
+@@ -1121,7 +1121,7 @@ static void __vxge_hw_blockpool_destroy(struct __vxge_hw_blockpool *blockpool)
+ 
+ 	list_for_each_safe(p, n, &blockpool->free_entry_list) {
+ 		list_del(&((struct __vxge_hw_blockpool_entry *)p)->item);
+-		kfree((void *)p);
++		kfree(p);
+ 	}
+ 
+ 	return;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 49c28134ac2cc..a37ca4b1e5665 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -2708,9 +2708,12 @@ static int ionic_lif_adminq_init(struct ionic_lif *lif)
+ 
+ 	napi_enable(&qcq->napi);
+ 
+-	if (qcq->flags & IONIC_QCQ_F_INTR)
++	if (qcq->flags & IONIC_QCQ_F_INTR) {
++		irq_set_affinity_hint(qcq->intr.vector,
++				      &qcq->intr.affinity_mask);
+ 		ionic_intr_mask(idev->intr_ctrl, qcq->intr.index,
+ 				IONIC_INTR_MASK_CLEAR);
++	}
+ 
+ 	qcq->flags |= IONIC_QCQ_F_INITED;
+ 
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index c72ff0fd38c42..c29d43c5f4504 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -5181,6 +5181,15 @@ static int r8169_mdio_register(struct rtl8169_private *tp)
+ 	struct mii_bus *new_bus;
+ 	int ret;
+ 
++	/* On some boards with this chip version the BIOS is buggy and misses
++	 * to reset the PHY page selector. This results in the PHY ID read
++	 * accessing registers on a different page, returning a more or
++	 * less random value. Fix this by resetting the page selector first.
++	 */
++	if (tp->mac_version == RTL_GIGA_MAC_VER_25 ||
++	    tp->mac_version == RTL_GIGA_MAC_VER_26)
++		r8169_mdio_write(tp, 0x1f, 0);
++
+ 	new_bus = devm_mdiobus_alloc(&pdev->dev);
+ 	if (!new_bus)
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 8a4dff0566f7d..b08478aabc6e6 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -911,12 +911,12 @@ static int ravb_poll(struct napi_struct *napi, int budget)
+ 	int q = napi - priv->napi;
+ 	int mask = BIT(q);
+ 	int quota = budget;
++	bool unmask;
+ 
+ 	/* Processing RX Descriptor Ring */
+ 	/* Clear RX interrupt */
+ 	ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0);
+-	if (ravb_rx(ndev, &quota, q))
+-		goto out;
++	unmask = !ravb_rx(ndev, &quota, q);
+ 
+ 	/* Processing RX Descriptor Ring */
+ 	spin_lock_irqsave(&priv->lock, flags);
+@@ -926,6 +926,9 @@ static int ravb_poll(struct napi_struct *napi, int budget)
+ 	netif_wake_subqueue(ndev, q);
+ 	spin_unlock_irqrestore(&priv->lock, flags);
+ 
++	if (!unmask)
++		goto out;
++
+ 	napi_complete(napi);
+ 
+ 	/* Re-enable RX/TX interrupts */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index cd11be005390b..5c6073d95f023 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -75,19 +75,41 @@ static void dwmac4_rx_queue_priority(struct mac_device_info *hw,
+ 				     u32 prio, u32 queue)
+ {
+ 	void __iomem *ioaddr = hw->pcsr;
+-	u32 base_register;
+-	u32 value;
++	u32 clear_mask = 0;
++	u32 ctrl2, ctrl3;
++	int i;
+ 
+-	base_register = (queue < 4) ? GMAC_RXQ_CTRL2 : GMAC_RXQ_CTRL3;
+-	if (queue >= 4)
+-		queue -= 4;
++	ctrl2 = readl(ioaddr + GMAC_RXQ_CTRL2);
++	ctrl3 = readl(ioaddr + GMAC_RXQ_CTRL3);
+ 
+-	value = readl(ioaddr + base_register);
++	/* The software must ensure that the same priority
++	 * is not mapped to multiple Rx queues
++	 */
++	for (i = 0; i < 4; i++)
++		clear_mask |= ((prio << GMAC_RXQCTRL_PSRQX_SHIFT(i)) &
++						GMAC_RXQCTRL_PSRQX_MASK(i));
+ 
+-	value &= ~GMAC_RXQCTRL_PSRQX_MASK(queue);
+-	value |= (prio << GMAC_RXQCTRL_PSRQX_SHIFT(queue)) &
++	ctrl2 &= ~clear_mask;
++	ctrl3 &= ~clear_mask;
++
++	/* First assign new priorities to a queue, then
++	 * clear them from others queues
++	 */
++	if (queue < 4) {
++		ctrl2 |= (prio << GMAC_RXQCTRL_PSRQX_SHIFT(queue)) &
+ 						GMAC_RXQCTRL_PSRQX_MASK(queue);
+-	writel(value, ioaddr + base_register);
++
++		writel(ctrl2, ioaddr + GMAC_RXQ_CTRL2);
++		writel(ctrl3, ioaddr + GMAC_RXQ_CTRL3);
++	} else {
++		queue -= 4;
++
++		ctrl3 |= (prio << GMAC_RXQCTRL_PSRQX_SHIFT(queue)) &
++						GMAC_RXQCTRL_PSRQX_MASK(queue);
++
++		writel(ctrl3, ioaddr + GMAC_RXQ_CTRL3);
++		writel(ctrl2, ioaddr + GMAC_RXQ_CTRL2);
++	}
+ }
+ 
+ static void dwmac4_tx_queue_priority(struct mac_device_info *hw,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+index 9a5dc5fde24ae..86f70ea9a520c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+@@ -97,17 +97,41 @@ static void dwxgmac2_rx_queue_prio(struct mac_device_info *hw, u32 prio,
+ 				   u32 queue)
+ {
+ 	void __iomem *ioaddr = hw->pcsr;
+-	u32 value, reg;
++	u32 clear_mask = 0;
++	u32 ctrl2, ctrl3;
++	int i;
+ 
+-	reg = (queue < 4) ? XGMAC_RXQ_CTRL2 : XGMAC_RXQ_CTRL3;
+-	if (queue >= 4)
++	ctrl2 = readl(ioaddr + XGMAC_RXQ_CTRL2);
++	ctrl3 = readl(ioaddr + XGMAC_RXQ_CTRL3);
++
++	/* The software must ensure that the same priority
++	 * is not mapped to multiple Rx queues
++	 */
++	for (i = 0; i < 4; i++)
++		clear_mask |= ((prio << XGMAC_PSRQ_SHIFT(i)) &
++						XGMAC_PSRQ(i));
++
++	ctrl2 &= ~clear_mask;
++	ctrl3 &= ~clear_mask;
++
++	/* First assign new priorities to a queue, then
++	 * clear them from others queues
++	 */
++	if (queue < 4) {
++		ctrl2 |= (prio << XGMAC_PSRQ_SHIFT(queue)) &
++						XGMAC_PSRQ(queue);
++
++		writel(ctrl2, ioaddr + XGMAC_RXQ_CTRL2);
++		writel(ctrl3, ioaddr + XGMAC_RXQ_CTRL3);
++	} else {
+ 		queue -= 4;
+ 
+-	value = readl(ioaddr + reg);
+-	value &= ~XGMAC_PSRQ(queue);
+-	value |= (prio << XGMAC_PSRQ_SHIFT(queue)) & XGMAC_PSRQ(queue);
++		ctrl3 |= (prio << XGMAC_PSRQ_SHIFT(queue)) &
++						XGMAC_PSRQ(queue);
+ 
+-	writel(value, ioaddr + reg);
++		writel(ctrl3, ioaddr + XGMAC_RXQ_CTRL3);
++		writel(ctrl2, ioaddr + XGMAC_RXQ_CTRL2);
++	}
+ }
+ 
+ static void dwxgmac2_tx_queue_prio(struct mac_device_info *hw, u32 prio,
+diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
+index da136abba1520..e50b59efe188b 100644
+--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
++++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
+@@ -1427,7 +1427,7 @@ static int temac_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* map device registers */
+-	lp->regs = devm_platform_ioremap_resource_byname(pdev, 0);
++	lp->regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(lp->regs)) {
+ 		dev_err(&pdev->dev, "could not map TEMAC registers\n");
+ 		return -ENOMEM;
+diff --git a/drivers/net/wireguard/netlink.c b/drivers/net/wireguard/netlink.c
+index f5bc279c9a8c2..9dc02fa51ed09 100644
+--- a/drivers/net/wireguard/netlink.c
++++ b/drivers/net/wireguard/netlink.c
+@@ -164,8 +164,8 @@ get_peer(struct wg_peer *peer, struct sk_buff *skb, struct dump_ctx *ctx)
+ 	if (!allowedips_node)
+ 		goto no_allowedips;
+ 	if (!ctx->allowedips_seq)
+-		ctx->allowedips_seq = peer->device->peer_allowedips.seq;
+-	else if (ctx->allowedips_seq != peer->device->peer_allowedips.seq)
++		ctx->allowedips_seq = ctx->wg->peer_allowedips.seq;
++	else if (ctx->allowedips_seq != ctx->wg->peer_allowedips.seq)
+ 		goto no_allowedips;
+ 
+ 	allowedips_nest = nla_nest_start(skb, WGPEER_A_ALLOWEDIPS);
+@@ -255,17 +255,17 @@ static int wg_get_device_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ 	if (!peers_nest)
+ 		goto out;
+ 	ret = 0;
+-	/* If the last cursor was removed via list_del_init in peer_remove, then
++	lockdep_assert_held(&wg->device_update_lock);
++	/* If the last cursor was removed in peer_remove or peer_remove_all, then
+ 	 * we just treat this the same as there being no more peers left. The
+ 	 * reason is that seq_nr should indicate to userspace that this isn't a
+ 	 * coherent dump anyway, so they'll try again.
+ 	 */
+ 	if (list_empty(&wg->peer_list) ||
+-	    (ctx->next_peer && list_empty(&ctx->next_peer->peer_list))) {
++	    (ctx->next_peer && ctx->next_peer->is_dead)) {
+ 		nla_nest_cancel(skb, peers_nest);
+ 		goto out;
+ 	}
+-	lockdep_assert_held(&wg->device_update_lock);
+ 	peer = list_prepare_entry(ctx->next_peer, &wg->peer_list, peer_list);
+ 	list_for_each_entry_continue(peer, &wg->peer_list, peer_list) {
+ 		if (get_peer(peer, skb, ctx)) {
+diff --git a/drivers/net/wireless/ath/ath9k/antenna.c b/drivers/net/wireless/ath/ath9k/antenna.c
+index 988222cea9dfe..acc84e6711b0e 100644
+--- a/drivers/net/wireless/ath/ath9k/antenna.c
++++ b/drivers/net/wireless/ath/ath9k/antenna.c
+@@ -643,7 +643,7 @@ static void ath_ant_try_scan(struct ath_ant_comb *antcomb,
+ 				conf->main_lna_conf = ATH_ANT_DIV_COMB_LNA1;
+ 				conf->alt_lna_conf = ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2;
+ 			} else if (antcomb->rssi_sub >
+-				   antcomb->rssi_lna1) {
++				   antcomb->rssi_lna2) {
+ 				/* set to A-B */
+ 				conf->main_lna_conf = ATH_ANT_DIV_COMB_LNA1;
+ 				conf->alt_lna_conf = ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index baf5f0afe802e..fbb5e29530e3d 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -790,8 +790,7 @@ s32 brcmf_notify_escan_complete(struct brcmf_cfg80211_info *cfg,
+ 	scan_request = cfg->scan_request;
+ 	cfg->scan_request = NULL;
+ 
+-	if (timer_pending(&cfg->escan_timeout))
+-		del_timer_sync(&cfg->escan_timeout);
++	timer_delete_sync(&cfg->escan_timeout);
+ 
+ 	if (fw_abort) {
+ 		/* Do a scan abort to stop the driver's scan engine */
+@@ -7674,6 +7673,7 @@ void brcmf_cfg80211_detach(struct brcmf_cfg80211_info *cfg)
+ 	brcmf_btcoex_detach(cfg);
+ 	wiphy_unregister(cfg->wiphy);
+ 	wl_deinit_priv(cfg);
++	cancel_work_sync(&cfg->escan_timeout_work);
+ 	brcmf_free_wiphy(cfg->wiphy);
+ 	kfree(cfg);
+ }
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 970a1b374a669..5242feda5471a 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3199,6 +3199,9 @@ static const struct pci_device_id nvme_id_table[] = {
+ 				NVME_QUIRK_BOGUS_NID, },
+ 	{ PCI_VDEVICE(REDHAT, 0x0010),	/* Qemu emulated controller */
+ 		.driver_data = NVME_QUIRK_BOGUS_NID, },
++	{ PCI_DEVICE(0x126f, 0x2262),	/* Silicon Motion generic */
++		.driver_data = NVME_QUIRK_NO_DEEPEST_PS |
++				NVME_QUIRK_BOGUS_NID, },
+ 	{ PCI_DEVICE(0x126f, 0x2263),	/* Silicon Motion unidentified */
+ 		.driver_data = NVME_QUIRK_NO_NS_DESC_LIST, },
+ 	{ PCI_DEVICE(0x1bb1, 0x0100),   /* Seagate Nytro Flash Storage */
+diff --git a/drivers/nvmem/meson-efuse.c b/drivers/nvmem/meson-efuse.c
+index d6b533497ce1a..ba2714bef8d0e 100644
+--- a/drivers/nvmem/meson-efuse.c
++++ b/drivers/nvmem/meson-efuse.c
+@@ -47,7 +47,6 @@ static int meson_efuse_probe(struct platform_device *pdev)
+ 	struct nvmem_config *econfig;
+ 	struct clk *clk;
+ 	unsigned int size;
+-	int ret;
+ 
+ 	sm_np = of_parse_phandle(pdev->dev.of_node, "secure-monitor", 0);
+ 	if (!sm_np) {
+@@ -60,27 +59,9 @@ static int meson_efuse_probe(struct platform_device *pdev)
+ 	if (!fw)
+ 		return -EPROBE_DEFER;
+ 
+-	clk = devm_clk_get(dev, NULL);
+-	if (IS_ERR(clk)) {
+-		ret = PTR_ERR(clk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get efuse gate");
+-		return ret;
+-	}
+-
+-	ret = clk_prepare_enable(clk);
+-	if (ret) {
+-		dev_err(dev, "failed to enable gate");
+-		return ret;
+-	}
+-
+-	ret = devm_add_action_or_reset(dev,
+-				       (void(*)(void *))clk_disable_unprepare,
+-				       clk);
+-	if (ret) {
+-		dev_err(dev, "failed to add disable callback");
+-		return ret;
+-	}
++	clk = devm_clk_get_enabled(dev, NULL);
++	if (IS_ERR(clk))
++		return dev_err_probe(dev, PTR_ERR(clk), "failed to get efuse gate");
+ 
+ 	if (meson_sm_call(fw, SM_EFUSE_USER_MAX, &size, 0, 0, 0, 0, 0) < 0) {
+ 		dev_err(dev, "failed to get max user");
+diff --git a/drivers/of/dynamic.c b/drivers/of/dynamic.c
+index b6a3ee65437b9..4d80167d39d47 100644
+--- a/drivers/of/dynamic.c
++++ b/drivers/of/dynamic.c
+@@ -9,6 +9,7 @@
+ 
+ #define pr_fmt(fmt)	"OF: " fmt
+ 
++#include <linux/device.h>
+ #include <linux/of.h>
+ #include <linux/spinlock.h>
+ #include <linux/slab.h>
+@@ -675,6 +676,17 @@ void of_changeset_destroy(struct of_changeset *ocs)
+ {
+ 	struct of_changeset_entry *ce, *cen;
+ 
++	/*
++	 * When a device is deleted, the device links to/from it are also queued
++	 * for deletion. Until these device links are freed, the devices
++	 * themselves aren't freed. If the device being deleted is due to an
++	 * overlay change, this device might be holding a reference to a device
++	 * node that will be freed. So, wait until all already pending device
++	 * links are deleted before freeing a device node. This ensures we don't
++	 * free any device node that has a non-zero reference count.
++	 */
++	device_link_wait_removal();
++
+ 	list_for_each_entry_safe_reverse(ce, cen, &ocs->entries, node)
+ 		__of_changeset_entry_destroy(ce);
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index 339318e790e21..8ed1df61f9c7c 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -662,8 +662,13 @@ int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
+ 		nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >>
+ 			PCI_REBAR_CTRL_NBAR_SHIFT;
+ 
++		/*
++		 * PCIe r6.0, sec 7.8.6.2 require us to support at least one
++		 * size in the range from 1 MB to 512 GB. Advertise support
++		 * for 1 MB BAR size only.
++		 */
+ 		for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
+-			dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
++			dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, BIT(4));
+ 	}
+ 
+ 	dw_pcie_setup(pci);
+diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
+index c22cc20db1a74..ae675c5a47151 100644
+--- a/drivers/pci/pci-driver.c
++++ b/drivers/pci/pci-driver.c
+@@ -444,16 +444,21 @@ static int pci_device_remove(struct device *dev)
+ 	struct pci_dev *pci_dev = to_pci_dev(dev);
+ 	struct pci_driver *drv = pci_dev->driver;
+ 
+-	if (drv) {
+-		if (drv->remove) {
+-			pm_runtime_get_sync(dev);
+-			drv->remove(pci_dev);
+-			pm_runtime_put_noidle(dev);
+-		}
+-		pcibios_free_irq(pci_dev);
+-		pci_dev->driver = NULL;
+-		pci_iov_remove(pci_dev);
++	if (drv->remove) {
++		pm_runtime_get_sync(dev);
++		/*
++		 * If the driver provides a .runtime_idle() callback and it has
++		 * started to run already, it may continue to run in parallel
++		 * with the code below, so wait until all of the runtime PM
++		 * activity has completed.
++		 */
++		pm_runtime_barrier(dev);
++		drv->remove(pci_dev);
++		pm_runtime_put_noidle(dev);
+ 	}
++	pcibios_free_irq(pci_dev);
++	pci_dev->driver = NULL;
++	pci_iov_remove(pci_dev);
+ 
+ 	/* Undo the runtime PM settings in local_pci_probe() */
+ 	pm_runtime_put_sync(dev);
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 1f8106ec70945..d1631109b1422 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -31,6 +31,7 @@
+ #include <linux/vmalloc.h>
+ #include <asm/dma.h>
+ #include <linux/aer.h>
++#include <linux/bitfield.h>
+ #include "pci.h"
+ 
+ DEFINE_MUTEX(pci_slot_mutex);
+@@ -4572,13 +4573,10 @@ EXPORT_SYMBOL(pci_wait_for_pending_transaction);
+  */
+ bool pcie_has_flr(struct pci_dev *dev)
+ {
+-	u32 cap;
+-
+ 	if (dev->dev_flags & PCI_DEV_FLAGS_NO_FLR_RESET)
+ 		return false;
+ 
+-	pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap);
+-	return cap & PCI_EXP_DEVCAP_FLR;
++	return FIELD_GET(PCI_EXP_DEVCAP_FLR, dev->devcap) == 1;
+ }
+ EXPORT_SYMBOL_GPL(pcie_has_flr);
+ 
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 32fa07bfc448e..da40f29036d65 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -442,6 +442,15 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
+ void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
+ #endif	/* CONFIG_PCIEAER */
+ 
++#ifdef CONFIG_PCIEPORTBUS
++/* Cached RCEC Endpoint Association */
++struct rcec_ea {
++	u8		nextbusn;
++	u8		lastbusn;
++	u32		bitmap;
++};
++#endif
++
+ #ifdef CONFIG_PCIE_DPC
+ void pci_save_dpc_state(struct pci_dev *dev);
+ void pci_restore_dpc_state(struct pci_dev *dev);
+@@ -456,6 +465,14 @@ static inline void pci_dpc_init(struct pci_dev *pdev) {}
+ static inline bool pci_dpc_recovered(struct pci_dev *pdev) { return false; }
+ #endif
+ 
++#ifdef CONFIG_PCIEPORTBUS
++void pci_rcec_init(struct pci_dev *dev);
++void pci_rcec_exit(struct pci_dev *dev);
++#else
++static inline void pci_rcec_init(struct pci_dev *dev) {}
++static inline void pci_rcec_exit(struct pci_dev *dev) {}
++#endif
++
+ #ifdef CONFIG_PCI_ATS
+ /* Address Translation Service */
+ void pci_ats_init(struct pci_dev *dev);
+diff --git a/drivers/pci/pcie/Makefile b/drivers/pci/pcie/Makefile
+index 9a7085668466f..b2980db88cc09 100644
+--- a/drivers/pci/pcie/Makefile
++++ b/drivers/pci/pcie/Makefile
+@@ -2,7 +2,7 @@
+ #
+ # Makefile for PCI Express features and port driver
+ 
+-pcieportdrv-y			:= portdrv_core.o portdrv_pci.o err.o
++pcieportdrv-y			:= portdrv_core.o portdrv_pci.o err.o rcec.o
+ 
+ obj-$(CONFIG_PCIEPORTBUS)	+= pcieportdrv.o
+ 
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index cf0d4ba2e157a..ab83f78f3eb1d 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -335,11 +335,16 @@ void pci_dpc_init(struct pci_dev *pdev)
+ 		return;
+ 
+ 	pdev->dpc_rp_extensions = true;
+-	pdev->dpc_rp_log_size = (cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8;
+-	if (pdev->dpc_rp_log_size < 4 || pdev->dpc_rp_log_size > 9) {
+-		pci_err(pdev, "RP PIO log size %u is invalid\n",
+-			pdev->dpc_rp_log_size);
+-		pdev->dpc_rp_log_size = 0;
++
++	/* Quirks may set dpc_rp_log_size if device or firmware is buggy */
++	if (!pdev->dpc_rp_log_size) {
++		pdev->dpc_rp_log_size =
++			(cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8;
++		if (pdev->dpc_rp_log_size < 4 || pdev->dpc_rp_log_size > 9) {
++			pci_err(pdev, "RP PIO log size %u is invalid\n",
++				pdev->dpc_rp_log_size);
++			pdev->dpc_rp_log_size = 0;
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
+index 984aa023c753f..bb4febcf45ba1 100644
+--- a/drivers/pci/pcie/err.c
++++ b/drivers/pci/pcie/err.c
+@@ -13,6 +13,7 @@
+ #define dev_fmt(fmt) "AER: " fmt
+ 
+ #include <linux/pci.h>
++#include <linux/pm_runtime.h>
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/errno.h>
+@@ -79,6 +80,18 @@ static int report_error_detected(struct pci_dev *dev,
+ 	return 0;
+ }
+ 
++static int pci_pm_runtime_get_sync(struct pci_dev *pdev, void *data)
++{
++	pm_runtime_get_sync(&pdev->dev);
++	return 0;
++}
++
++static int pci_pm_runtime_put(struct pci_dev *pdev, void *data)
++{
++	pm_runtime_put(&pdev->dev);
++	return 0;
++}
++
+ static int report_frozen_detected(struct pci_dev *dev, void *data)
+ {
+ 	return report_error_detected(dev, pci_channel_io_frozen, data);
+@@ -176,6 +189,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
+ 	int type = pci_pcie_type(dev);
+ 	struct pci_dev *bridge;
+ 	pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
++	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
+ 
+ 	/*
+ 	 * If the error was detected by a Root Port, Downstream Port, or
+@@ -193,6 +207,8 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
+ 	else
+ 		bridge = pci_upstream_bridge(dev);
+ 
++	pci_walk_bridge(bridge, pci_pm_runtime_get_sync, NULL);
++
+ 	pci_dbg(bridge, "broadcast error_detected message\n");
+ 	if (state == pci_channel_io_frozen) {
+ 		pci_walk_bridge(bridge, report_frozen_detected, &status);
+@@ -227,13 +243,26 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
+ 	pci_dbg(bridge, "broadcast resume message\n");
+ 	pci_walk_bridge(bridge, report_resume, &status);
+ 
+-	if (pcie_aer_is_native(bridge))
++	/*
++	 * If we have native control of AER, clear error status in the Root
++	 * Port or Downstream Port that signaled the error.  If the
++	 * platform retained control of AER, it is responsible for clearing
++	 * this status.  In that case, the signaling device may not even be
++	 * visible to the OS.
++	 */
++	if (host->native_aer || pcie_ports_native) {
+ 		pcie_clear_device_status(bridge);
+-	pci_aer_clear_nonfatal_status(bridge);
++		pci_aer_clear_nonfatal_status(bridge);
++	}
++
++	pci_walk_bridge(bridge, pci_pm_runtime_put, NULL);
++
+ 	pci_info(bridge, "device recovery successful\n");
+ 	return status;
+ 
+ failed:
++	pci_walk_bridge(bridge, pci_pm_runtime_put, NULL);
++
+ 	pci_uevent_ers(bridge, PCI_ERS_RESULT_DISCONNECT);
+ 
+ 	/* TODO: Should kernel panic here? */
+diff --git a/drivers/pci/pcie/rcec.c b/drivers/pci/pcie/rcec.c
+new file mode 100644
+index 0000000000000..038e9d706d5fd
+--- /dev/null
++++ b/drivers/pci/pcie/rcec.c
+@@ -0,0 +1,59 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Root Complex Event Collector Support
++ *
++ * Authors:
++ *  Sean V Kelley <sean.v.kelley@intel.com>
++ *  Qiuxu Zhuo <qiuxu.zhuo@intel.com>
++ *
++ * Copyright (C) 2020 Intel Corp.
++ */
++
++#include <linux/kernel.h>
++#include <linux/pci.h>
++#include <linux/pci_regs.h>
++
++#include "../pci.h"
++
++void pci_rcec_init(struct pci_dev *dev)
++{
++	struct rcec_ea *rcec_ea;
++	u32 rcec, hdr, busn;
++	u8 ver;
++
++	/* Only for Root Complex Event Collectors */
++	if (pci_pcie_type(dev) != PCI_EXP_TYPE_RC_EC)
++		return;
++
++	rcec = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_RCEC);
++	if (!rcec)
++		return;
++
++	rcec_ea = kzalloc(sizeof(*rcec_ea), GFP_KERNEL);
++	if (!rcec_ea)
++		return;
++
++	pci_read_config_dword(dev, rcec + PCI_RCEC_RCIEP_BITMAP,
++			      &rcec_ea->bitmap);
++
++	/* Check whether RCEC BUSN register is present */
++	pci_read_config_dword(dev, rcec, &hdr);
++	ver = PCI_EXT_CAP_VER(hdr);
++	if (ver >= PCI_RCEC_BUSN_REG_VER) {
++		pci_read_config_dword(dev, rcec + PCI_RCEC_BUSN, &busn);
++		rcec_ea->nextbusn = PCI_RCEC_BUSN_NEXT(busn);
++		rcec_ea->lastbusn = PCI_RCEC_BUSN_LAST(busn);
++	} else {
++		/* Avoid later ver check by setting nextbusn */
++		rcec_ea->nextbusn = 0xff;
++		rcec_ea->lastbusn = 0x00;
++	}
++
++	dev->rcec_ea = rcec_ea;
++}
++
++void pci_rcec_exit(struct pci_dev *dev)
++{
++	kfree(dev->rcec_ea);
++	dev->rcec_ea = NULL;
++}
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index ece90a23936d2..02a75f3b59208 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -19,6 +19,7 @@
+ #include <linux/hypervisor.h>
+ #include <linux/irqdomain.h>
+ #include <linux/pm_runtime.h>
++#include <linux/bitfield.h>
+ #include "pci.h"
+ 
+ #define CARDBUS_LATENCY_TIMER	176	/* secondary latency timer */
+@@ -1496,8 +1497,8 @@ void set_pcie_port_type(struct pci_dev *pdev)
+ 	pdev->pcie_cap = pos;
+ 	pci_read_config_word(pdev, pos + PCI_EXP_FLAGS, &reg16);
+ 	pdev->pcie_flags_reg = reg16;
+-	pci_read_config_word(pdev, pos + PCI_EXP_DEVCAP, &reg16);
+-	pdev->pcie_mpss = reg16 & PCI_EXP_DEVCAP_PAYLOAD;
++	pci_read_config_dword(pdev, pos + PCI_EXP_DEVCAP, &pdev->devcap);
++	pdev->pcie_mpss = FIELD_GET(PCI_EXP_DEVCAP_PAYLOAD, pdev->devcap);
+ 
+ 	parent = pci_upstream_bridge(pdev);
+ 	if (!parent)
+@@ -2216,6 +2217,7 @@ static void pci_configure_device(struct pci_dev *dev)
+ static void pci_release_capabilities(struct pci_dev *dev)
+ {
+ 	pci_aer_exit(dev);
++	pci_rcec_exit(dev);
+ 	pci_vpd_release(dev);
+ 	pci_iov_release(dev);
+ 	pci_free_cap_save_buffers(dev);
+@@ -2416,6 +2418,7 @@ static void pci_init_capabilities(struct pci_dev *dev)
+ 	pci_ptm_init(dev);		/* Precision Time Measurement */
+ 	pci_aer_init(dev);		/* Advanced Error Reporting */
+ 	pci_dpc_init(dev);		/* Downstream Port Containment */
++	pci_rcec_init(dev);		/* Root Complex Event Collector */
+ 
+ 	pcie_report_downtraining(dev);
+ 
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 646807a443e2d..60a469bdc7e3e 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -12,6 +12,7 @@
+  * file, where their drivers can use them.
+  */
+ 
++#include <linux/bitfield.h>
+ #include <linux/types.h>
+ #include <linux/kernel.h>
+ #include <linux/export.h>
+@@ -5852,3 +5853,102 @@ static void nvidia_ion_ahci_fixup(struct pci_dev *pdev)
+ 	pdev->dev_flags |= PCI_DEV_FLAGS_HAS_MSI_MASKING;
+ }
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0ab8, nvidia_ion_ahci_fixup);
++
++static void rom_bar_overlap_defect(struct pci_dev *dev)
++{
++	pci_info(dev, "working around ROM BAR overlap defect\n");
++	dev->rom_bar_overlap = 1;
++}
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1533, rom_bar_overlap_defect);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1536, rom_bar_overlap_defect);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1537, rom_bar_overlap_defect);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1538, rom_bar_overlap_defect);
++
++#ifdef CONFIG_PCIEASPM
++/*
++ * Several Intel DG2 graphics devices advertise that they can only tolerate
++ * 1us latency when transitioning from L1 to L0, which may prevent ASPM L1
++ * from being enabled.  But in fact these devices can tolerate unlimited
++ * latency.  Override their Device Capabilities value to allow ASPM L1 to
++ * be enabled.
++ */
++static void aspm_l1_acceptable_latency(struct pci_dev *dev)
++{
++	u32 l1_lat = FIELD_GET(PCI_EXP_DEVCAP_L1, dev->devcap);
++
++	if (l1_lat < 7) {
++		dev->devcap |= FIELD_PREP(PCI_EXP_DEVCAP_L1, 7);
++		pci_info(dev, "ASPM: overriding L1 acceptable latency from %#x to 0x7\n",
++			 l1_lat);
++	}
++}
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f80, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f81, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f82, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f83, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f84, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f85, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f86, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f87, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f88, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5690, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5691, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5692, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5693, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5694, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5695, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a0, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a1, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a2, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a3, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a4, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a5, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a6, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56b0, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56b1, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56c0, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56c1, aspm_l1_acceptable_latency);
++#endif
++
++#ifdef CONFIG_PCIE_DPC
++/*
++ * Intel Ice Lake, Tiger Lake and Alder Lake BIOS has a bug that clears
++ * the DPC RP PIO Log Size of the integrated Thunderbolt PCIe Root
++ * Ports.
++ */
++static void dpc_log_size(struct pci_dev *dev)
++{
++	u16 dpc, val;
++
++	dpc = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DPC);
++	if (!dpc)
++		return;
++
++	pci_read_config_word(dev, dpc + PCI_EXP_DPC_CAP, &val);
++	if (!(val & PCI_EXP_DPC_CAP_RP_EXT))
++		return;
++
++	if (!((val & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8)) {
++		pci_info(dev, "Overriding RP PIO Log Size to 4\n");
++		dev->dpc_rp_log_size = 4;
++	}
++}
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x461f, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x462f, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x463f, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x466e, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8a1d, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8a1f, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8a21, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8a23, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a23, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a25, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a27, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a29, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2b, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2d, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2f, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a31, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa73f, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa76e, dpc_log_size);
++#endif
+diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c
+index 875d50c16f19d..b492e67c3d871 100644
+--- a/drivers/pci/setup-res.c
++++ b/drivers/pci/setup-res.c
+@@ -75,12 +75,16 @@ static void pci_std_update_resource(struct pci_dev *dev, int resno)
+ 		 * as zero when disabled, so don't update ROM BARs unless
+ 		 * they're enabled.  See
+ 		 * https://lore.kernel.org/r/43147B3D.1030309@vc.cvut.cz/
++		 * But we must update ROM BAR for buggy devices where even a
++		 * disabled ROM can conflict with other BARs.
+ 		 */
+-		if (!(res->flags & IORESOURCE_ROM_ENABLE))
++		if (!(res->flags & IORESOURCE_ROM_ENABLE) &&
++		    !dev->rom_bar_overlap)
+ 			return;
+ 
+ 		reg = dev->rom_base_reg;
+-		new |= PCI_ROM_ADDRESS_ENABLE;
++		if (res->flags & IORESOURCE_ROM_ENABLE)
++			new |= PCI_ROM_ADDRESS_ENABLE;
+ 	} else
+ 		return;
+ 
+diff --git a/drivers/phy/tegra/xusb.c b/drivers/phy/tegra/xusb.c
+index 8f11b293c48d1..856397def89ac 100644
+--- a/drivers/phy/tegra/xusb.c
++++ b/drivers/phy/tegra/xusb.c
+@@ -1399,6 +1399,19 @@ int tegra_xusb_padctl_get_usb3_companion(struct tegra_xusb_padctl *padctl,
+ }
+ EXPORT_SYMBOL_GPL(tegra_xusb_padctl_get_usb3_companion);
+ 
++int tegra_xusb_padctl_get_port_number(struct phy *phy)
++{
++	struct tegra_xusb_lane *lane;
++
++	if (!phy)
++		return -ENODEV;
++
++	lane = phy_get_drvdata(phy);
++
++	return lane->index;
++}
++EXPORT_SYMBOL_GPL(tegra_xusb_padctl_get_port_number);
++
+ MODULE_AUTHOR("Thierry Reding <treding@nvidia.com>");
+ MODULE_DESCRIPTION("Tegra XUSB Pad Controller driver");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/pinctrl/renesas/core.c b/drivers/pinctrl/renesas/core.c
+index 54f1a7334027a..c390854483680 100644
+--- a/drivers/pinctrl/renesas/core.c
++++ b/drivers/pinctrl/renesas/core.c
+@@ -868,9 +868,11 @@ static void __init sh_pfc_check_cfg_reg(const char *drvname,
+ 		sh_pfc_err("reg 0x%x: var_field_width declares %u instead of %u bits\n",
+ 			   cfg_reg->reg, rw, cfg_reg->reg_width);
+ 
+-	if (n != cfg_reg->nr_enum_ids)
++	if (n != cfg_reg->nr_enum_ids) {
+ 		sh_pfc_err("reg 0x%x: enum_ids[] has %u instead of %u values\n",
+ 			   cfg_reg->reg, cfg_reg->nr_enum_ids, n);
++		n = cfg_reg->nr_enum_ids;
++	}
+ 
+ check_enum_ids:
+ 	sh_pfc_check_reg_enums(drvname, cfg_reg->reg, cfg_reg->enum_ids, n);
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index ebe959db1eeb9..fbaa618594628 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -1084,6 +1084,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BIOS_VERSION, "CHUWI.D86JLBNR"),
+ 		},
+ 	},
++	{
++		/* Chuwi Vi8 dual-boot (CWI506) */
++		.driver_data = (void *)&chuwi_vi8_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "i86"),
++			DMI_MATCH(DMI_BIOS_VERSION, "CHUWI2.D86JHBNR02"),
++		},
++	},
+ 	{
+ 		/* Chuwi Vi8 Plus (CWI519) */
+ 		.driver_data = (void *)&chuwi_vi8_plus_data,
+diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c
+index b518009715eeb..dde10a5c7d509 100644
+--- a/drivers/s390/crypto/zcrypt_api.c
++++ b/drivers/s390/crypto/zcrypt_api.c
+@@ -576,6 +576,7 @@ static inline struct zcrypt_queue *zcrypt_pick_queue(struct zcrypt_card *zc,
+ {
+ 	if (!zq || !try_module_get(zq->queue->ap_dev.drv->driver.owner))
+ 		return NULL;
++	zcrypt_card_get(zc);
+ 	zcrypt_queue_get(zq);
+ 	get_device(&zq->queue->ap_dev.device);
+ 	atomic_add(weight, &zc->load);
+@@ -595,6 +596,7 @@ static inline void zcrypt_drop_queue(struct zcrypt_card *zc,
+ 	atomic_sub(weight, &zq->load);
+ 	put_device(&zq->queue->ap_dev.device);
+ 	zcrypt_queue_put(zq);
++	zcrypt_card_put(zc);
+ 	module_put(mod);
+ }
+ 
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index 59eb6c2969860..297f2b412d074 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -334,12 +334,13 @@ static void scsi_host_dev_release(struct device *dev)
+ 
+ 	if (shost->shost_state == SHOST_CREATED) {
+ 		/*
+-		 * Free the shost_dev device name here if scsi_host_alloc()
+-		 * and scsi_host_put() have been called but neither
++		 * Free the shost_dev device name and remove the proc host dir
++		 * here if scsi_host_{alloc,put}() have been called but neither
+ 		 * scsi_host_add() nor scsi_host_remove() has been called.
+ 		 * This avoids that the memory allocated for the shost_dev
+-		 * name is leaked.
++		 * name as well as the proc dir structure are leaked.
+ 		 */
++		scsi_proc_hostdir_rm(shost->hostt);
+ 		kfree(dev_name(&shost->shost_dev));
+ 	}
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index 1e22364a31fcf..d6287c58d5045 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -784,8 +784,10 @@ lpfc_rcv_padisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ 				/* Save the ELS cmd */
+ 				elsiocb->drvrTimeout = cmd;
+ 
+-				lpfc_sli4_resume_rpi(ndlp,
+-					lpfc_mbx_cmpl_resume_rpi, elsiocb);
++				if (lpfc_sli4_resume_rpi(ndlp,
++						lpfc_mbx_cmpl_resume_rpi,
++						elsiocb))
++					kfree(elsiocb);
+ 				goto out;
+ 			}
+ 		}
+diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
+index deab8931ab48e..59dc379251223 100644
+--- a/drivers/scsi/lpfc/lpfc_nvmet.c
++++ b/drivers/scsi/lpfc/lpfc_nvmet.c
+@@ -1579,7 +1579,7 @@ lpfc_nvmet_setup_io_context(struct lpfc_hba *phba)
+ 		wqe = &nvmewqe->wqe;
+ 
+ 		/* Initialize WQE */
+-		memset(wqe, 0, sizeof(union lpfc_wqe));
++		memset(wqe, 0, sizeof(*wqe));
+ 
+ 		ctx_buf->iocbq->context1 = NULL;
+ 		spin_lock(&phba->sli4_hba.sgl_list_lock);
+diff --git a/drivers/scsi/myrb.c b/drivers/scsi/myrb.c
+index ad17c2beaacad..d4c3f00faf1ba 100644
+--- a/drivers/scsi/myrb.c
++++ b/drivers/scsi/myrb.c
+@@ -1803,9 +1803,9 @@ static ssize_t raid_state_show(struct device *dev,
+ 
+ 		name = myrb_devstate_name(ldev_info->state);
+ 		if (name)
+-			ret = snprintf(buf, 32, "%s\n", name);
++			ret = snprintf(buf, 64, "%s\n", name);
+ 		else
+-			ret = snprintf(buf, 32, "Invalid (%02X)\n",
++			ret = snprintf(buf, 64, "Invalid (%02X)\n",
+ 				       ldev_info->state);
+ 	} else {
+ 		struct myrb_pdev_state *pdev_info = sdev->hostdata;
+@@ -1824,9 +1824,9 @@ static ssize_t raid_state_show(struct device *dev,
+ 		else
+ 			name = myrb_devstate_name(pdev_info->state);
+ 		if (name)
+-			ret = snprintf(buf, 32, "%s\n", name);
++			ret = snprintf(buf, 64, "%s\n", name);
+ 		else
+-			ret = snprintf(buf, 32, "Invalid (%02X)\n",
++			ret = snprintf(buf, 64, "Invalid (%02X)\n",
+ 				       pdev_info->state);
+ 	}
+ 	return ret;
+@@ -1914,11 +1914,11 @@ static ssize_t raid_level_show(struct device *dev,
+ 
+ 		name = myrb_raidlevel_name(ldev_info->raid_level);
+ 		if (!name)
+-			return snprintf(buf, 32, "Invalid (%02X)\n",
++			return snprintf(buf, 64, "Invalid (%02X)\n",
+ 					ldev_info->state);
+-		return snprintf(buf, 32, "%s\n", name);
++		return snprintf(buf, 64, "%s\n", name);
+ 	}
+-	return snprintf(buf, 32, "Physical Drive\n");
++	return snprintf(buf, 64, "Physical Drive\n");
+ }
+ static DEVICE_ATTR_RO(raid_level);
+ 
+@@ -1931,15 +1931,15 @@ static ssize_t rebuild_show(struct device *dev,
+ 	unsigned char status;
+ 
+ 	if (sdev->channel < myrb_logical_channel(sdev->host))
+-		return snprintf(buf, 32, "physical device - not rebuilding\n");
++		return snprintf(buf, 64, "physical device - not rebuilding\n");
+ 
+ 	status = myrb_get_rbld_progress(cb, &rbld_buf);
+ 
+ 	if (rbld_buf.ldev_num != sdev->id ||
+ 	    status != MYRB_STATUS_SUCCESS)
+-		return snprintf(buf, 32, "not rebuilding\n");
++		return snprintf(buf, 64, "not rebuilding\n");
+ 
+-	return snprintf(buf, 32, "rebuilding block %u of %u\n",
++	return snprintf(buf, 64, "rebuilding block %u of %u\n",
+ 			rbld_buf.ldev_size - rbld_buf.blocks_left,
+ 			rbld_buf.ldev_size);
+ }
+diff --git a/drivers/scsi/myrs.c b/drivers/scsi/myrs.c
+index e6a6678967e52..857a73e856a14 100644
+--- a/drivers/scsi/myrs.c
++++ b/drivers/scsi/myrs.c
+@@ -950,9 +950,9 @@ static ssize_t raid_state_show(struct device *dev,
+ 
+ 		name = myrs_devstate_name(ldev_info->dev_state);
+ 		if (name)
+-			ret = snprintf(buf, 32, "%s\n", name);
++			ret = snprintf(buf, 64, "%s\n", name);
+ 		else
+-			ret = snprintf(buf, 32, "Invalid (%02X)\n",
++			ret = snprintf(buf, 64, "Invalid (%02X)\n",
+ 				       ldev_info->dev_state);
+ 	} else {
+ 		struct myrs_pdev_info *pdev_info;
+@@ -961,9 +961,9 @@ static ssize_t raid_state_show(struct device *dev,
+ 		pdev_info = sdev->hostdata;
+ 		name = myrs_devstate_name(pdev_info->dev_state);
+ 		if (name)
+-			ret = snprintf(buf, 32, "%s\n", name);
++			ret = snprintf(buf, 64, "%s\n", name);
+ 		else
+-			ret = snprintf(buf, 32, "Invalid (%02X)\n",
++			ret = snprintf(buf, 64, "Invalid (%02X)\n",
+ 				       pdev_info->dev_state);
+ 	}
+ 	return ret;
+@@ -1069,13 +1069,13 @@ static ssize_t raid_level_show(struct device *dev,
+ 		ldev_info = sdev->hostdata;
+ 		name = myrs_raid_level_name(ldev_info->raid_level);
+ 		if (!name)
+-			return snprintf(buf, 32, "Invalid (%02X)\n",
++			return snprintf(buf, 64, "Invalid (%02X)\n",
+ 					ldev_info->dev_state);
+ 
+ 	} else
+ 		name = myrs_raid_level_name(MYRS_RAID_PHYSICAL);
+ 
+-	return snprintf(buf, 32, "%s\n", name);
++	return snprintf(buf, 64, "%s\n", name);
+ }
+ static DEVICE_ATTR_RO(raid_level);
+ 
+@@ -1089,7 +1089,7 @@ static ssize_t rebuild_show(struct device *dev,
+ 	unsigned char status;
+ 
+ 	if (sdev->channel < cs->ctlr_info->physchan_present)
+-		return snprintf(buf, 32, "physical device - not rebuilding\n");
++		return snprintf(buf, 64, "physical device - not rebuilding\n");
+ 
+ 	ldev_info = sdev->hostdata;
+ 	ldev_num = ldev_info->ldev_num;
+@@ -1101,11 +1101,11 @@ static ssize_t rebuild_show(struct device *dev,
+ 		return -EIO;
+ 	}
+ 	if (ldev_info->rbld_active) {
+-		return snprintf(buf, 32, "rebuilding block %zu of %zu\n",
++		return snprintf(buf, 64, "rebuilding block %zu of %zu\n",
+ 				(size_t)ldev_info->rbld_lba,
+ 				(size_t)ldev_info->cfg_devsize);
+ 	} else
+-		return snprintf(buf, 32, "not rebuilding\n");
++		return snprintf(buf, 64, "not rebuilding\n");
+ }
+ 
+ static ssize_t rebuild_store(struct device *dev,
+@@ -1194,7 +1194,7 @@ static ssize_t consistency_check_show(struct device *dev,
+ 	unsigned char status;
+ 
+ 	if (sdev->channel < cs->ctlr_info->physchan_present)
+-		return snprintf(buf, 32, "physical device - not checking\n");
++		return snprintf(buf, 64, "physical device - not checking\n");
+ 
+ 	ldev_info = sdev->hostdata;
+ 	if (!ldev_info)
+@@ -1202,11 +1202,11 @@ static ssize_t consistency_check_show(struct device *dev,
+ 	ldev_num = ldev_info->ldev_num;
+ 	status = myrs_get_ldev_info(cs, ldev_num, ldev_info);
+ 	if (ldev_info->cc_active)
+-		return snprintf(buf, 32, "checking block %zu of %zu\n",
++		return snprintf(buf, 64, "checking block %zu of %zu\n",
+ 				(size_t)ldev_info->cc_lba,
+ 				(size_t)ldev_info->cfg_devsize);
+ 	else
+-		return snprintf(buf, 32, "not checking\n");
++		return snprintf(buf, 64, "not checking\n");
+ }
+ 
+ static ssize_t consistency_check_store(struct device *dev,
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index f919da0bbf75e..e23a93374eaf9 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -2689,7 +2689,13 @@ qla2x00_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 		return;
+ 
+ 	if (unlikely(pci_channel_offline(fcport->vha->hw->pdev))) {
+-		qla2x00_abort_all_cmds(fcport->vha, DID_NO_CONNECT << 16);
++		/* Will wait for wind down of adapter */
++		ql_dbg(ql_dbg_aer, fcport->vha, 0x900c,
++		    "%s pci offline detected (id %06x)\n", __func__,
++		    fcport->d_id.b24);
++		qla_pci_set_eeh_busy(fcport->vha);
++		qla2x00_eh_wait_for_pending_commands(fcport->vha, fcport->d_id.b24,
++		    0, WAIT_TARGET);
+ 		return;
+ 	}
+ }
+@@ -2711,7 +2717,11 @@ qla2x00_terminate_rport_io(struct fc_rport *rport)
+ 	vha = fcport->vha;
+ 
+ 	if (unlikely(pci_channel_offline(fcport->vha->hw->pdev))) {
+-		qla2x00_abort_all_cmds(fcport->vha, DID_NO_CONNECT << 16);
++		/* Will wait for wind down of adapter */
++		ql_dbg(ql_dbg_aer, fcport->vha, 0x900b,
++		    "%s pci offline detected (id %06x)\n", __func__,
++		    fcport->d_id.b24);
++		qla_pci_set_eeh_busy(vha);
+ 		qla2x00_eh_wait_for_pending_commands(fcport->vha, fcport->d_id.b24,
+ 			0, WAIT_TARGET);
+ 		return;
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 6645b69fc2a0f..b8628bceb3aeb 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -56,7 +56,7 @@ typedef struct {
+ #include "qla_nvme.h"
+ #define QLA2XXX_DRIVER_NAME	"qla2xxx"
+ #define QLA2XXX_APIDEV		"ql2xapidev"
+-#define QLA2XXX_MANUFACTURER	"QLogic Corporation"
++#define QLA2XXX_MANUFACTURER	"Marvell"
+ 
+ /*
+  * We have MAILBOX_REGISTER_COUNT sized arrays in a few places,
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index 20bbd69e35e51..d9ac17dbad789 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -1614,7 +1614,7 @@ qla2x00_hba_attributes(scsi_qla_host_t *vha, void *entries,
+ 	eiter->type = cpu_to_be16(FDMI_HBA_MANUFACTURER);
+ 	alen = scnprintf(
+ 		eiter->a.manufacturer, sizeof(eiter->a.manufacturer),
+-		"%s", "QLogic Corporation");
++		"%s", QLA2XXX_MANUFACTURER);
+ 	alen += FDMI_ATTR_ALIGNMENT(alen);
+ 	alen += FDMI_ATTR_TYPELEN(eiter);
+ 	eiter->len = cpu_to_be16(alen);
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index a8d2c06285c24..3d56f971cdc4d 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -2280,6 +2280,40 @@ qla83xx_nic_core_fw_load(scsi_qla_host_t *vha)
+ 	return rval;
+ }
+ 
++static void qla_enable_fce_trace(scsi_qla_host_t *vha)
++{
++	int rval;
++	struct qla_hw_data *ha = vha->hw;
++
++	if (ha->fce) {
++		ha->flags.fce_enabled = 1;
++		memset(ha->fce, 0, fce_calc_size(ha->fce_bufs));
++		rval = qla2x00_enable_fce_trace(vha,
++		    ha->fce_dma, ha->fce_bufs, ha->fce_mb, &ha->fce_bufs);
++
++		if (rval) {
++			ql_log(ql_log_warn, vha, 0x8033,
++			    "Unable to reinitialize FCE (%d).\n", rval);
++			ha->flags.fce_enabled = 0;
++		}
++	}
++}
++
++static void qla_enable_eft_trace(scsi_qla_host_t *vha)
++{
++	int rval;
++	struct qla_hw_data *ha = vha->hw;
++
++	if (ha->eft) {
++		memset(ha->eft, 0, EFT_SIZE);
++		rval = qla2x00_enable_eft_trace(vha, ha->eft_dma, EFT_NUM_BUFFERS);
++
++		if (rval) {
++			ql_log(ql_log_warn, vha, 0x8034,
++			    "Unable to reinitialize EFT (%d).\n", rval);
++		}
++	}
++}
+ /*
+ * qla2x00_initialize_adapter
+ *      Initialize board.
+@@ -3230,9 +3264,8 @@ qla24xx_chip_diag(scsi_qla_host_t *vha)
+ }
+ 
+ static void
+-qla2x00_init_fce_trace(scsi_qla_host_t *vha)
++qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
+ {
+-	int rval;
+ 	dma_addr_t tc_dma;
+ 	void *tc;
+ 	struct qla_hw_data *ha = vha->hw;
+@@ -3261,27 +3294,17 @@ qla2x00_init_fce_trace(scsi_qla_host_t *vha)
+ 		return;
+ 	}
+ 
+-	rval = qla2x00_enable_fce_trace(vha, tc_dma, FCE_NUM_BUFFERS,
+-					ha->fce_mb, &ha->fce_bufs);
+-	if (rval) {
+-		ql_log(ql_log_warn, vha, 0x00bf,
+-		       "Unable to initialize FCE (%d).\n", rval);
+-		dma_free_coherent(&ha->pdev->dev, FCE_SIZE, tc, tc_dma);
+-		return;
+-	}
+-
+ 	ql_dbg(ql_dbg_init, vha, 0x00c0,
+ 	       "Allocated (%d KB) for FCE...\n", FCE_SIZE / 1024);
+ 
+-	ha->flags.fce_enabled = 1;
+ 	ha->fce_dma = tc_dma;
+ 	ha->fce = tc;
++	ha->fce_bufs = FCE_NUM_BUFFERS;
+ }
+ 
+ static void
+-qla2x00_init_eft_trace(scsi_qla_host_t *vha)
++qla2x00_alloc_eft_trace(scsi_qla_host_t *vha)
+ {
+-	int rval;
+ 	dma_addr_t tc_dma;
+ 	void *tc;
+ 	struct qla_hw_data *ha = vha->hw;
+@@ -3306,14 +3329,6 @@ qla2x00_init_eft_trace(scsi_qla_host_t *vha)
+ 		return;
+ 	}
+ 
+-	rval = qla2x00_enable_eft_trace(vha, tc_dma, EFT_NUM_BUFFERS);
+-	if (rval) {
+-		ql_log(ql_log_warn, vha, 0x00c2,
+-		       "Unable to initialize EFT (%d).\n", rval);
+-		dma_free_coherent(&ha->pdev->dev, EFT_SIZE, tc, tc_dma);
+-		return;
+-	}
+-
+ 	ql_dbg(ql_dbg_init, vha, 0x00c3,
+ 	       "Allocated (%d KB) EFT ...\n", EFT_SIZE / 1024);
+ 
+@@ -3321,13 +3336,6 @@ qla2x00_init_eft_trace(scsi_qla_host_t *vha)
+ 	ha->eft = tc;
+ }
+ 
+-static void
+-qla2x00_alloc_offload_mem(scsi_qla_host_t *vha)
+-{
+-	qla2x00_init_fce_trace(vha);
+-	qla2x00_init_eft_trace(vha);
+-}
+-
+ void
+ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ {
+@@ -3382,10 +3390,10 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ 		if (ha->tgt.atio_ring)
+ 			mq_size += ha->tgt.atio_q_length * sizeof(request_t);
+ 
+-		qla2x00_init_fce_trace(vha);
++		qla2x00_alloc_fce_trace(vha);
+ 		if (ha->fce)
+ 			fce_size = sizeof(struct qla2xxx_fce_chain) + FCE_SIZE;
+-		qla2x00_init_eft_trace(vha);
++		qla2x00_alloc_eft_trace(vha);
+ 		if (ha->eft)
+ 			eft_size = EFT_SIZE;
+ 	}
+@@ -3784,7 +3792,6 @@ qla2x00_setup_chip(scsi_qla_host_t *vha)
+ 	struct qla_hw_data *ha = vha->hw;
+ 	struct device_reg_2xxx __iomem *reg = &ha->iobase->isp;
+ 	unsigned long flags;
+-	uint16_t fw_major_version;
+ 	int done_once = 0;
+ 
+ 	if (IS_P3P_TYPE(ha)) {
+@@ -3851,7 +3858,6 @@ qla2x00_setup_chip(scsi_qla_host_t *vha)
+ 					goto failed;
+ 
+ enable_82xx_npiv:
+-				fw_major_version = ha->fw_major_version;
+ 				if (IS_P3P_TYPE(ha))
+ 					qla82xx_check_md_needed(vha);
+ 				else
+@@ -3880,12 +3886,11 @@ qla2x00_setup_chip(scsi_qla_host_t *vha)
+ 				if (rval != QLA_SUCCESS)
+ 					goto failed;
+ 
+-				if (!fw_major_version && !(IS_P3P_TYPE(ha)))
+-					qla2x00_alloc_offload_mem(vha);
+-
+ 				if (ql2xallocfwdump && !(IS_P3P_TYPE(ha)))
+ 					qla2x00_alloc_fw_dump(vha);
+ 
++				qla_enable_fce_trace(vha);
++				qla_enable_eft_trace(vha);
+ 			} else {
+ 				goto failed;
+ 			}
+@@ -7012,7 +7017,6 @@ qla2x00_abort_isp_cleanup(scsi_qla_host_t *vha)
+ int
+ qla2x00_abort_isp(scsi_qla_host_t *vha)
+ {
+-	int rval;
+ 	uint8_t        status = 0;
+ 	struct qla_hw_data *ha = vha->hw;
+ 	struct scsi_qla_host *vp;
+@@ -7100,31 +7104,7 @@ qla2x00_abort_isp(scsi_qla_host_t *vha)
+ 
+ 			if (IS_QLA81XX(ha) || IS_QLA8031(ha))
+ 				qla2x00_get_fw_version(vha);
+-			if (ha->fce) {
+-				ha->flags.fce_enabled = 1;
+-				memset(ha->fce, 0,
+-				    fce_calc_size(ha->fce_bufs));
+-				rval = qla2x00_enable_fce_trace(vha,
+-				    ha->fce_dma, ha->fce_bufs, ha->fce_mb,
+-				    &ha->fce_bufs);
+-				if (rval) {
+-					ql_log(ql_log_warn, vha, 0x8033,
+-					    "Unable to reinitialize FCE "
+-					    "(%d).\n", rval);
+-					ha->flags.fce_enabled = 0;
+-				}
+-			}
+ 
+-			if (ha->eft) {
+-				memset(ha->eft, 0, EFT_SIZE);
+-				rval = qla2x00_enable_eft_trace(vha,
+-				    ha->eft_dma, EFT_NUM_BUFFERS);
+-				if (rval) {
+-					ql_log(ql_log_warn, vha, 0x8034,
+-					    "Unable to reinitialize EFT "
+-					    "(%d).\n", rval);
+-				}
+-			}
+ 		} else {	/* failed the ISP abort */
+ 			vha->flags.online = 1;
+ 			if (test_bit(ISP_ABORT_RETRY, &vha->dpc_flags)) {
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index fdb424501da5b..d2433f162815a 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -1038,6 +1038,16 @@ void qlt_free_session_done(struct work_struct *work)
+ 		    "%s: sess %p logout completed\n", __func__, sess);
+ 	}
+ 
++	/* check for any straggling io left behind */
++	if (!(sess->flags & FCF_FCP2_DEVICE) &&
++	    qla2x00_eh_wait_for_pending_commands(sess->vha, sess->d_id.b24, 0, WAIT_TARGET)) {
++		ql_log(ql_log_warn, vha, 0x3027,
++		    "IO not return. Resetting.\n");
++		set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
++		qla2xxx_wake_dpc(vha);
++		qla2x00_wait_for_chip_reset(vha);
++	}
++
+ 	if (sess->logo_ack_needed) {
+ 		sess->logo_ack_needed = 0;
+ 		qla24xx_async_notify_ack(vha, sess,
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 58f66176bcb28..f2dfd9853d343 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3026,8 +3026,13 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp)
+ 	}
+ 
+ 	if (sdkp->device->type == TYPE_ZBC) {
+-		/* Host-managed */
++		/*
++		 * Host-managed: Per ZBC and ZAC specifications, writes in
++		 * sequential write required zones of host-managed devices must
++		 * be aligned to the device physical block size.
++		 */
+ 		blk_queue_set_zoned(sdkp->disk, BLK_ZONED_HM);
++		blk_queue_zone_write_granularity(q, sdkp->physical_block_size);
+ 	} else {
+ 		sdkp->zoned = (buffer[8] >> 4) & 3;
+ 		if (sdkp->zoned == 1) {
+diff --git a/drivers/slimbus/core.c b/drivers/slimbus/core.c
+index 1d2bc181da050..69f6178f294c8 100644
+--- a/drivers/slimbus/core.c
++++ b/drivers/slimbus/core.c
+@@ -438,8 +438,8 @@ static int slim_device_alloc_laddr(struct slim_device *sbdev,
+ 		if (ret < 0)
+ 			goto err;
+ 	} else if (report_present) {
+-		ret = ida_simple_get(&ctrl->laddr_ida,
+-				     0, SLIM_LA_MANAGER - 1, GFP_KERNEL);
++		ret = ida_alloc_max(&ctrl->laddr_ida,
++				    SLIM_LA_MANAGER - 1, GFP_KERNEL);
+ 		if (ret < 0)
+ 			goto err;
+ 
+diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
+index feb97470699d9..7abc9b6a04ab6 100644
+--- a/drivers/soc/fsl/qbman/qman.c
++++ b/drivers/soc/fsl/qbman/qman.c
+@@ -991,7 +991,7 @@ struct qman_portal {
+ 	/* linked-list of CSCN handlers. */
+ 	struct list_head cgr_cbs;
+ 	/* list lock */
+-	spinlock_t cgr_lock;
++	raw_spinlock_t cgr_lock;
+ 	struct work_struct congestion_work;
+ 	struct work_struct mr_work;
+ 	char irqname[MAX_IRQNAME];
+@@ -1281,7 +1281,7 @@ static int qman_create_portal(struct qman_portal *portal,
+ 		/* if the given mask is NULL, assume all CGRs can be seen */
+ 		qman_cgrs_fill(&portal->cgrs[0]);
+ 	INIT_LIST_HEAD(&portal->cgr_cbs);
+-	spin_lock_init(&portal->cgr_lock);
++	raw_spin_lock_init(&portal->cgr_lock);
+ 	INIT_WORK(&portal->congestion_work, qm_congestion_task);
+ 	INIT_WORK(&portal->mr_work, qm_mr_process_task);
+ 	portal->bits = 0;
+@@ -1456,11 +1456,14 @@ static void qm_congestion_task(struct work_struct *work)
+ 	union qm_mc_result *mcr;
+ 	struct qman_cgr *cgr;
+ 
+-	spin_lock(&p->cgr_lock);
++	/*
++	 * FIXME: QM_MCR_TIMEOUT is 10ms, which is too long for a raw spinlock!
++	 */
++	raw_spin_lock_irq(&p->cgr_lock);
+ 	qm_mc_start(&p->p);
+ 	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+ 	if (!qm_mc_result_timeout(&p->p, &mcr)) {
+-		spin_unlock(&p->cgr_lock);
++		raw_spin_unlock_irq(&p->cgr_lock);
+ 		dev_crit(p->config->dev, "QUERYCONGESTION timeout\n");
+ 		qman_p_irqsource_add(p, QM_PIRQ_CSCI);
+ 		return;
+@@ -1476,7 +1479,7 @@ static void qm_congestion_task(struct work_struct *work)
+ 	list_for_each_entry(cgr, &p->cgr_cbs, node)
+ 		if (cgr->cb && qman_cgrs_get(&c, cgr->cgrid))
+ 			cgr->cb(p, cgr, qman_cgrs_get(&rr, cgr->cgrid));
+-	spin_unlock(&p->cgr_lock);
++	raw_spin_unlock_irq(&p->cgr_lock);
+ 	qman_p_irqsource_add(p, QM_PIRQ_CSCI);
+ }
+ 
+@@ -2440,7 +2443,7 @@ int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+ 	preempt_enable();
+ 
+ 	cgr->chan = p->config->channel;
+-	spin_lock(&p->cgr_lock);
++	raw_spin_lock_irq(&p->cgr_lock);
+ 
+ 	if (opts) {
+ 		struct qm_mcc_initcgr local_opts = *opts;
+@@ -2477,19 +2480,14 @@ int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+ 	    qman_cgrs_get(&p->cgrs[1], cgr->cgrid))
+ 		cgr->cb(p, cgr, 1);
+ out:
+-	spin_unlock(&p->cgr_lock);
++	raw_spin_unlock_irq(&p->cgr_lock);
+ 	put_affine_portal();
+ 	return ret;
+ }
+ EXPORT_SYMBOL(qman_create_cgr);
+ 
+-int qman_delete_cgr(struct qman_cgr *cgr)
++static struct qman_portal *qman_cgr_get_affine_portal(struct qman_cgr *cgr)
+ {
+-	unsigned long irqflags;
+-	struct qm_mcr_querycgr cgr_state;
+-	struct qm_mcc_initcgr local_opts;
+-	int ret = 0;
+-	struct qman_cgr *i;
+ 	struct qman_portal *p = get_affine_portal();
+ 
+ 	if (cgr->chan != p->config->channel) {
+@@ -2497,12 +2495,27 @@ int qman_delete_cgr(struct qman_cgr *cgr)
+ 		dev_err(p->config->dev, "CGR not owned by current portal");
+ 		dev_dbg(p->config->dev, " create 0x%x, delete 0x%x\n",
+ 			cgr->chan, p->config->channel);
+-
+-		ret = -EINVAL;
+-		goto put_portal;
++		put_affine_portal();
++		return NULL;
+ 	}
++
++	return p;
++}
++
++int qman_delete_cgr(struct qman_cgr *cgr)
++{
++	unsigned long irqflags;
++	struct qm_mcr_querycgr cgr_state;
++	struct qm_mcc_initcgr local_opts;
++	int ret = 0;
++	struct qman_cgr *i;
++	struct qman_portal *p = qman_cgr_get_affine_portal(cgr);
++
++	if (!p)
++		return -EINVAL;
++
+ 	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+-	spin_lock_irqsave(&p->cgr_lock, irqflags);
++	raw_spin_lock_irqsave(&p->cgr_lock, irqflags);
+ 	list_del(&cgr->node);
+ 	/*
+ 	 * If there are no other CGR objects for this CGRID in the list,
+@@ -2527,8 +2540,7 @@ int qman_delete_cgr(struct qman_cgr *cgr)
+ 		/* add back to the list */
+ 		list_add(&cgr->node, &p->cgr_cbs);
+ release_lock:
+-	spin_unlock_irqrestore(&p->cgr_lock, irqflags);
+-put_portal:
++	raw_spin_unlock_irqrestore(&p->cgr_lock, irqflags);
+ 	put_affine_portal();
+ 	return ret;
+ }
+@@ -2559,6 +2571,54 @@ void qman_delete_cgr_safe(struct qman_cgr *cgr)
+ }
+ EXPORT_SYMBOL(qman_delete_cgr_safe);
+ 
++static int qman_update_cgr(struct qman_cgr *cgr, struct qm_mcc_initcgr *opts)
++{
++	int ret;
++	unsigned long irqflags;
++	struct qman_portal *p = qman_cgr_get_affine_portal(cgr);
++
++	if (!p)
++		return -EINVAL;
++
++	raw_spin_lock_irqsave(&p->cgr_lock, irqflags);
++	ret = qm_modify_cgr(cgr, 0, opts);
++	raw_spin_unlock_irqrestore(&p->cgr_lock, irqflags);
++	put_affine_portal();
++	return ret;
++}
++
++struct update_cgr_params {
++	struct qman_cgr *cgr;
++	struct qm_mcc_initcgr *opts;
++	int ret;
++};
++
++static void qman_update_cgr_smp_call(void *p)
++{
++	struct update_cgr_params *params = p;
++
++	params->ret = qman_update_cgr(params->cgr, params->opts);
++}
++
++int qman_update_cgr_safe(struct qman_cgr *cgr, struct qm_mcc_initcgr *opts)
++{
++	struct update_cgr_params params = {
++		.cgr = cgr,
++		.opts = opts,
++	};
++
++	preempt_disable();
++	if (qman_cgr_cpus[cgr->cgrid] != smp_processor_id())
++		smp_call_function_single(qman_cgr_cpus[cgr->cgrid],
++					 qman_update_cgr_smp_call, &params,
++					 true);
++	else
++		params.ret = qman_update_cgr(cgr, opts);
++	preempt_enable();
++	return params.ret;
++}
++EXPORT_SYMBOL(qman_update_cgr_safe);
++
+ /* Cleanup FQs */
+ 
+ static int _qm_mr_consume_and_match_verb(struct qm_portal *p, int v)
+diff --git a/drivers/staging/comedi/drivers/comedi_test.c b/drivers/staging/comedi/drivers/comedi_test.c
+index cbc225eb19918..bea9a3adf08c8 100644
+--- a/drivers/staging/comedi/drivers/comedi_test.c
++++ b/drivers/staging/comedi/drivers/comedi_test.c
+@@ -87,6 +87,8 @@ struct waveform_private {
+ 	struct comedi_device *dev;	/* parent comedi device */
+ 	u64 ao_last_scan_time;		/* time of previous AO scan in usec */
+ 	unsigned int ao_scan_period;	/* AO scan period in usec */
++	bool ai_timer_enable:1;		/* should AI timer be running? */
++	bool ao_timer_enable:1;		/* should AO timer be running? */
+ 	unsigned short ao_loopbacks[N_CHANS];
+ };
+ 
+@@ -236,8 +238,12 @@ static void waveform_ai_timer(struct timer_list *t)
+ 			time_increment = devpriv->ai_convert_time - now;
+ 		else
+ 			time_increment = 1;
+-		mod_timer(&devpriv->ai_timer,
+-			  jiffies + usecs_to_jiffies(time_increment));
++		spin_lock(&dev->spinlock);
++		if (devpriv->ai_timer_enable) {
++			mod_timer(&devpriv->ai_timer,
++				  jiffies + usecs_to_jiffies(time_increment));
++		}
++		spin_unlock(&dev->spinlock);
+ 	}
+ 
+ overrun:
+@@ -393,9 +399,12 @@ static int waveform_ai_cmd(struct comedi_device *dev,
+ 	 * Seem to need an extra jiffy here, otherwise timer expires slightly
+ 	 * early!
+ 	 */
++	spin_lock_bh(&dev->spinlock);
++	devpriv->ai_timer_enable = true;
+ 	devpriv->ai_timer.expires =
+ 		jiffies + usecs_to_jiffies(devpriv->ai_convert_period) + 1;
+ 	add_timer(&devpriv->ai_timer);
++	spin_unlock_bh(&dev->spinlock);
+ 	return 0;
+ }
+ 
+@@ -404,6 +413,9 @@ static int waveform_ai_cancel(struct comedi_device *dev,
+ {
+ 	struct waveform_private *devpriv = dev->private;
+ 
++	spin_lock_bh(&dev->spinlock);
++	devpriv->ai_timer_enable = false;
++	spin_unlock_bh(&dev->spinlock);
+ 	if (in_softirq()) {
+ 		/* Assume we were called from the timer routine itself. */
+ 		del_timer(&devpriv->ai_timer);
+@@ -495,8 +507,12 @@ static void waveform_ao_timer(struct timer_list *t)
+ 		unsigned int time_inc = devpriv->ao_last_scan_time +
+ 					devpriv->ao_scan_period - now;
+ 
+-		mod_timer(&devpriv->ao_timer,
+-			  jiffies + usecs_to_jiffies(time_inc));
++		spin_lock(&dev->spinlock);
++		if (devpriv->ao_timer_enable) {
++			mod_timer(&devpriv->ao_timer,
++				  jiffies + usecs_to_jiffies(time_inc));
++		}
++		spin_unlock(&dev->spinlock);
+ 	}
+ 
+ underrun:
+@@ -517,9 +533,12 @@ static int waveform_ao_inttrig_start(struct comedi_device *dev,
+ 	async->inttrig = NULL;
+ 
+ 	devpriv->ao_last_scan_time = ktime_to_us(ktime_get());
++	spin_lock_bh(&dev->spinlock);
++	devpriv->ao_timer_enable = true;
+ 	devpriv->ao_timer.expires =
+ 		jiffies + usecs_to_jiffies(devpriv->ao_scan_period);
+ 	add_timer(&devpriv->ao_timer);
++	spin_unlock_bh(&dev->spinlock);
+ 
+ 	return 1;
+ }
+@@ -604,6 +623,9 @@ static int waveform_ao_cancel(struct comedi_device *dev,
+ 	struct waveform_private *devpriv = dev->private;
+ 
+ 	s->async->inttrig = NULL;
++	spin_lock_bh(&dev->spinlock);
++	devpriv->ao_timer_enable = false;
++	spin_unlock_bh(&dev->spinlock);
+ 	if (in_softirq()) {
+ 		/* Assume we were called from the timer routine itself. */
+ 		del_timer(&devpriv->ao_timer);
+diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c
+index 103f84466f6fc..371117b511e21 100644
+--- a/drivers/staging/media/ipu3/ipu3-v4l2.c
++++ b/drivers/staging/media/ipu3/ipu3-v4l2.c
+@@ -1063,6 +1063,11 @@ static int imgu_v4l2_subdev_register(struct imgu_device *imgu,
+ 	struct imgu_media_pipe *imgu_pipe = &imgu->imgu_pipe[pipe];
+ 
+ 	/* Initialize subdev media entity */
++	imgu_sd->subdev.entity.ops = &imgu_media_ops;
++	for (i = 0; i < IMGU_NODE_NUM; i++) {
++		imgu_sd->subdev_pads[i].flags = imgu_pipe->nodes[i].output ?
++			MEDIA_PAD_FL_SINK : MEDIA_PAD_FL_SOURCE;
++	}
+ 	r = media_entity_pads_init(&imgu_sd->subdev.entity, IMGU_NODE_NUM,
+ 				   imgu_sd->subdev_pads);
+ 	if (r) {
+@@ -1070,11 +1075,6 @@ static int imgu_v4l2_subdev_register(struct imgu_device *imgu,
+ 			"failed initialize subdev media entity (%d)\n", r);
+ 		return r;
+ 	}
+-	imgu_sd->subdev.entity.ops = &imgu_media_ops;
+-	for (i = 0; i < IMGU_NODE_NUM; i++) {
+-		imgu_sd->subdev_pads[i].flags = imgu_pipe->nodes[i].output ?
+-			MEDIA_PAD_FL_SINK : MEDIA_PAD_FL_SOURCE;
+-	}
+ 
+ 	/* Initialize subdev */
+ 	v4l2_subdev_init(&imgu_sd->subdev, &imgu_subdev_ops);
+@@ -1169,15 +1169,15 @@ static int imgu_v4l2_node_setup(struct imgu_device *imgu, unsigned int pipe,
+ 	}
+ 
+ 	/* Initialize media entities */
++	node->vdev_pad.flags = node->output ?
++		MEDIA_PAD_FL_SOURCE : MEDIA_PAD_FL_SINK;
++	vdev->entity.ops = NULL;
+ 	r = media_entity_pads_init(&vdev->entity, 1, &node->vdev_pad);
+ 	if (r) {
+ 		dev_err(dev, "failed initialize media entity (%d)\n", r);
+ 		mutex_destroy(&node->lock);
+ 		return r;
+ 	}
+-	node->vdev_pad.flags = node->output ?
+-		MEDIA_PAD_FL_SOURCE : MEDIA_PAD_FL_SINK;
+-	vdev->entity.ops = NULL;
+ 
+ 	/* Initialize vbq */
+ 	vbq->type = node->vdev_fmt.type;
+diff --git a/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c b/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
+index d697ea55a0da1..3306ec5bb291b 100644
+--- a/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
++++ b/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
+@@ -940,8 +940,9 @@ static int create_component(struct vchiq_mmal_instance *instance,
+ 	/* build component create message */
+ 	m.h.type = MMAL_MSG_TYPE_COMPONENT_CREATE;
+ 	m.u.component_create.client_component = component->client_component;
+-	strncpy(m.u.component_create.name, name,
+-		sizeof(m.u.component_create.name));
++	strscpy_pad(m.u.component_create.name, name,
++		    sizeof(m.u.component_create.name));
++	m.u.component_create.pid = 0;
+ 
+ 	ret = send_synchronous_mmal_msg(instance, &m,
+ 					sizeof(m.u.component_create),
+diff --git a/drivers/tee/optee/device.c b/drivers/tee/optee/device.c
+index 3cb39f02fae0d..26683697ca304 100644
+--- a/drivers/tee/optee/device.c
++++ b/drivers/tee/optee/device.c
+@@ -90,13 +90,14 @@ static int optee_register_device(const uuid_t *device_uuid, u32 func)
+ 	if (rc) {
+ 		pr_err("device registration failed, err: %d\n", rc);
+ 		put_device(&optee_device->dev);
++		return rc;
+ 	}
+ 
+ 	if (func == PTA_CMD_GET_DEVICES_SUPP)
+ 		device_create_file(&optee_device->dev,
+ 				   &dev_attr_need_supplicant);
+ 
+-	return rc;
++	return 0;
+ }
+ 
+ static int __optee_enumerate_devices(u32 func)
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index fa49529682cec..c20f69a4c5e9e 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -2661,6 +2661,9 @@ static int gsmld_open(struct tty_struct *tty)
+ {
+ 	struct gsm_mux *gsm;
+ 
++	if (!capable(CAP_NET_ADMIN))
++		return -EPERM;
++
+ 	if (tty->ops->write == NULL)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 8b49ac4856d2c..6098e87a34046 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1358,9 +1358,6 @@ static void autoconfig_irq(struct uart_8250_port *up)
+ 		inb_p(ICP);
+ 	}
+ 
+-	if (uart_console(port))
+-		console_lock();
+-
+ 	/* forget possible initially masked and pending IRQ */
+ 	probe_irq_off(probe_irq_on());
+ 	save_mcr = serial8250_in_MCR(up);
+@@ -1391,9 +1388,6 @@ static void autoconfig_irq(struct uart_8250_port *up)
+ 	if (port->flags & UPF_FOURPORT)
+ 		outb_p(save_ICP, ICP);
+ 
+-	if (uart_console(port))
+-		console_unlock();
+-
+ 	port->irq = (irq > 0) ? irq : 0;
+ }
+ 
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 227fb2d320465..b16ad6db1ef8e 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2178,9 +2178,12 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ 		       UARTCTRL);
+ 
+ 	lpuart32_serial_setbrg(sport, baud);
+-	lpuart32_write(&sport->port, modem, UARTMODIR);
+-	lpuart32_write(&sport->port, ctrl, UARTCTRL);
++	/* disable CTS before enabling UARTCTRL_TE to avoid pending idle preamble */
++	lpuart32_write(&sport->port, modem & ~UARTMODIR_TXCTSE, UARTMODIR);
+ 	/* restore control register */
++	lpuart32_write(&sport->port, ctrl, UARTCTRL);
++	/* re-enable the CTS if needed */
++	lpuart32_write(&sport->port, modem, UARTMODIR);
+ 
+ 	if (old && sport->lpuart_dma_rx_use) {
+ 		if (!lpuart_start_rx_dma(sport))
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 5570fd3b84e15..363b68555fe62 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -1636,13 +1636,16 @@ static unsigned short max310x_i2c_slave_addr(unsigned short addr,
+ 
+ static int max310x_i2c_probe(struct i2c_client *client)
+ {
+-	const struct max310x_devtype *devtype =
+-			device_get_match_data(&client->dev);
++	const struct max310x_devtype *devtype;
+ 	struct i2c_client *port_client;
+ 	struct regmap *regmaps[4];
+ 	unsigned int i;
+ 	u8 port_addr;
+ 
++	devtype = device_get_match_data(&client->dev);
++	if (!devtype)
++		return dev_err_probe(&client->dev, -ENODEV, "Failed to match device\n");
++
+ 	if (client->addr < devtype->slave_addr.min ||
+ 		client->addr > devtype->slave_addr.max)
+ 		return dev_err_probe(&client->dev, -EINVAL,
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index 31e0c5c3ddeac..29f05db0d49ba 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -376,9 +376,7 @@ static void sc16is7xx_fifo_read(struct uart_port *port, unsigned int rxlen)
+ 	const u8 line = sc16is7xx_line(port);
+ 	u8 addr = (SC16IS7XX_RHR_REG << SC16IS7XX_REG_SHIFT) | line;
+ 
+-	regcache_cache_bypass(s->regmap, true);
+-	regmap_raw_read(s->regmap, addr, s->buf, rxlen);
+-	regcache_cache_bypass(s->regmap, false);
++	regmap_noinc_read(s->regmap, addr, s->buf, rxlen);
+ }
+ 
+ static void sc16is7xx_fifo_write(struct uart_port *port, u8 to_send)
+@@ -394,9 +392,7 @@ static void sc16is7xx_fifo_write(struct uart_port *port, u8 to_send)
+ 	if (unlikely(!to_send))
+ 		return;
+ 
+-	regcache_cache_bypass(s->regmap, true);
+-	regmap_raw_write(s->regmap, addr, s->buf, to_send);
+-	regcache_cache_bypass(s->regmap, false);
++	regmap_noinc_write(s->regmap, addr, s->buf, to_send);
+ }
+ 
+ static void sc16is7xx_port_update(struct uart_port *port, u8 reg,
+@@ -489,6 +485,11 @@ static bool sc16is7xx_regmap_precious(struct device *dev, unsigned int reg)
+ 	return false;
+ }
+ 
++static bool sc16is7xx_regmap_noinc(struct device *dev, unsigned int reg)
++{
++	return reg == SC16IS7XX_RHR_REG;
++}
++
+ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
+ {
+ 	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+@@ -1439,6 +1440,8 @@ static struct regmap_config regcfg = {
+ 	.cache_type = REGCACHE_RBTREE,
+ 	.volatile_reg = sc16is7xx_regmap_volatile,
+ 	.precious_reg = sc16is7xx_regmap_precious,
++	.writeable_noinc_reg = sc16is7xx_regmap_noinc,
++	.readable_noinc_reg = sc16is7xx_regmap_noinc,
+ };
+ 
+ #ifdef CONFIG_SERIAL_SC16IS7XX_SPI
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 40fff38588d4f..10b8785b99827 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -2431,7 +2431,12 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
+ 			port->type = PORT_UNKNOWN;
+ 			flags |= UART_CONFIG_TYPE;
+ 		}
++		/* Synchronize with possible boot console. */
++		if (uart_console(port))
++			console_lock();
+ 		port->ops->config_port(port, flags);
++		if (uart_console(port))
++			console_unlock();
+ 	}
+ 
+ 	if (port->type != PORT_UNKNOWN) {
+@@ -2439,6 +2444,10 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
+ 
+ 		uart_report_port(drv, port);
+ 
++		/* Synchronize with possible boot console. */
++		if (uart_console(port))
++			console_lock();
++
+ 		/* Power up port for set_mctrl() */
+ 		uart_change_pm(state, UART_PM_STATE_ON);
+ 
+@@ -2455,6 +2464,9 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
+ 			port->rs485_config(port, &port->rs485);
+ 		spin_unlock_irqrestore(&port->lock, flags);
+ 
++		if (uart_console(port))
++			console_unlock();
++
+ 		/*
+ 		 * If this driver supports console, and it hasn't been
+ 		 * successfully registered yet, try to re-register it.
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index df645d127e401..a070f2e7d960f 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -398,7 +398,7 @@ static void vc_uniscr_delete(struct vc_data *vc, unsigned int nr)
+ 		char32_t *ln = uniscr->lines[vc->state.y];
+ 		unsigned int x = vc->state.x, cols = vc->vc_cols;
+ 
+-		memcpy(&ln[x], &ln[x + nr], (cols - x - nr) * sizeof(*ln));
++		memmove(&ln[x], &ln[x + nr], (cols - x - nr) * sizeof(*ln));
+ 		memset32(&ln[cols - nr], ' ', nr);
+ 	}
+ }
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 80332b6a1963e..58423b16022b4 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -471,6 +471,7 @@ static ssize_t wdm_write
+ static int service_outstanding_interrupt(struct wdm_device *desc)
+ {
+ 	int rv = 0;
++	int used;
+ 
+ 	/* submit read urb only if the device is waiting for it */
+ 	if (!desc->resp_count || !--desc->resp_count)
+@@ -485,7 +486,10 @@ static int service_outstanding_interrupt(struct wdm_device *desc)
+ 		goto out;
+ 	}
+ 
+-	set_bit(WDM_RESPONDING, &desc->flags);
++	used = test_and_set_bit(WDM_RESPONDING, &desc->flags);
++	if (used)
++		goto out;
++
+ 	spin_unlock_irq(&desc->iuspin);
+ 	rv = usb_submit_urb(desc->response, GFP_KERNEL);
+ 	spin_lock_irq(&desc->iuspin);
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 91b974aa59bff..eef78141ffcae 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -116,7 +116,6 @@ EXPORT_SYMBOL_GPL(ehci_cf_port_reset_rwsem);
+ #define HUB_DEBOUNCE_STEP	  25
+ #define HUB_DEBOUNCE_STABLE	 100
+ 
+-static void hub_release(struct kref *kref);
+ static int usb_reset_and_verify_device(struct usb_device *udev);
+ static int hub_port_disable(struct usb_hub *hub, int port1, int set_state);
+ static bool hub_port_warm_reset_required(struct usb_hub *hub, int port1,
+@@ -678,14 +677,14 @@ static void kick_hub_wq(struct usb_hub *hub)
+ 	 */
+ 	intf = to_usb_interface(hub->intfdev);
+ 	usb_autopm_get_interface_no_resume(intf);
+-	kref_get(&hub->kref);
++	hub_get(hub);
+ 
+ 	if (queue_work(hub_wq, &hub->events))
+ 		return;
+ 
+ 	/* the work has already been scheduled */
+ 	usb_autopm_put_interface_async(intf);
+-	kref_put(&hub->kref, hub_release);
++	hub_put(hub);
+ }
+ 
+ void usb_kick_hub_wq(struct usb_device *hdev)
+@@ -1053,7 +1052,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ 			goto init2;
+ 		goto init3;
+ 	}
+-	kref_get(&hub->kref);
++	hub_get(hub);
+ 
+ 	/* The superspeed hub except for root hub has to use Hub Depth
+ 	 * value as an offset into the route string to locate the bits
+@@ -1301,7 +1300,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ 		device_unlock(&hdev->dev);
+ 	}
+ 
+-	kref_put(&hub->kref, hub_release);
++	hub_put(hub);
+ }
+ 
+ /* Implement the continuations for the delays above */
+@@ -1717,6 +1716,16 @@ static void hub_release(struct kref *kref)
+ 	kfree(hub);
+ }
+ 
++void hub_get(struct usb_hub *hub)
++{
++	kref_get(&hub->kref);
++}
++
++void hub_put(struct usb_hub *hub)
++{
++	kref_put(&hub->kref, hub_release);
++}
++
+ static unsigned highspeed_hubs;
+ 
+ static void hub_disconnect(struct usb_interface *intf)
+@@ -1763,7 +1772,7 @@ static void hub_disconnect(struct usb_interface *intf)
+ 	if (hub->quirk_disable_autosuspend)
+ 		usb_autopm_put_interface(intf);
+ 
+-	kref_put(&hub->kref, hub_release);
++	hub_put(hub);
+ }
+ 
+ static bool hub_descriptor_is_sane(struct usb_host_interface *desc)
+@@ -5857,7 +5866,7 @@ static void hub_event(struct work_struct *work)
+ 
+ 	/* Balance the stuff in kick_hub_wq() and allow autosuspend */
+ 	usb_autopm_put_interface(intf);
+-	kref_put(&hub->kref, hub_release);
++	hub_put(hub);
+ 
+ 	kcov_remote_stop();
+ }
+diff --git a/drivers/usb/core/hub.h b/drivers/usb/core/hub.h
+index db4c7e2c5960d..dd049bc85f88c 100644
+--- a/drivers/usb/core/hub.h
++++ b/drivers/usb/core/hub.h
+@@ -117,6 +117,8 @@ extern void usb_hub_remove_port_device(struct usb_hub *hub,
+ extern int usb_hub_set_port_power(struct usb_device *hdev, struct usb_hub *hub,
+ 		int port1, bool set);
+ extern struct usb_hub *usb_hub_to_struct_hub(struct usb_device *hdev);
++extern void hub_get(struct usb_hub *hub);
++extern void hub_put(struct usb_hub *hub);
+ extern int hub_port_debounce(struct usb_hub *hub, int port1,
+ 		bool must_be_connected);
+ extern int usb_clear_port_feature(struct usb_device *hdev,
+diff --git a/drivers/usb/core/port.c b/drivers/usb/core/port.c
+index 235a7c6455036..336ecf6e19678 100644
+--- a/drivers/usb/core/port.c
++++ b/drivers/usb/core/port.c
+@@ -450,7 +450,7 @@ static int match_location(struct usb_device *peer_hdev, void *p)
+ 	struct usb_hub *peer_hub = usb_hub_to_struct_hub(peer_hdev);
+ 	struct usb_device *hdev = to_usb_device(port_dev->dev.parent->parent);
+ 
+-	if (!peer_hub)
++	if (!peer_hub || port_dev->connect_type == USB_PORT_NOT_USED)
+ 		return 0;
+ 
+ 	hcd = bus_to_hcd(hdev->bus);
+@@ -461,7 +461,8 @@ static int match_location(struct usb_device *peer_hdev, void *p)
+ 
+ 	for (port1 = 1; port1 <= peer_hdev->maxchild; port1++) {
+ 		peer = peer_hub->ports[port1 - 1];
+-		if (peer && peer->location == port_dev->location) {
++		if (peer && peer->connect_type != USB_PORT_NOT_USED &&
++		    peer->location == port_dev->location) {
+ 			link_peers_report(port_dev, peer);
+ 			return 1; /* done */
+ 		}
+diff --git a/drivers/usb/core/sysfs.c b/drivers/usb/core/sysfs.c
+index 35ce8b87e9396..366c095217859 100644
+--- a/drivers/usb/core/sysfs.c
++++ b/drivers/usb/core/sysfs.c
+@@ -1166,14 +1166,24 @@ static ssize_t interface_authorized_store(struct device *dev,
+ {
+ 	struct usb_interface *intf = to_usb_interface(dev);
+ 	bool val;
++	struct kernfs_node *kn;
+ 
+ 	if (strtobool(buf, &val) != 0)
+ 		return -EINVAL;
+ 
+-	if (val)
++	if (val) {
+ 		usb_authorize_interface(intf);
+-	else
+-		usb_deauthorize_interface(intf);
++	} else {
++		/*
++		 * Prevent deadlock if another process is concurrently
++		 * trying to unregister intf.
++		 */
++		kn = sysfs_break_active_protection(&dev->kobj, &attr->attr);
++		if (kn) {
++			usb_deauthorize_interface(intf);
++			sysfs_unbreak_active_protection(kn);
++		}
++	}
+ 
+ 	return count;
+ }
+diff --git a/drivers/usb/dwc2/core.h b/drivers/usb/dwc2/core.h
+index 03d16a08261d8..03cc331d3b931 100644
+--- a/drivers/usb/dwc2/core.h
++++ b/drivers/usb/dwc2/core.h
+@@ -748,8 +748,14 @@ struct dwc2_dregs_backup {
+  * struct dwc2_hregs_backup - Holds host registers state before
+  * entering partial power down
+  * @hcfg:		Backup of HCFG register
++ * @hflbaddr:		Backup of HFLBADDR register
+  * @haintmsk:		Backup of HAINTMSK register
++ * @hcchar:		Backup of HCCHAR register
++ * @hcsplt:		Backup of HCSPLT register
+  * @hcintmsk:		Backup of HCINTMSK register
++ * @hctsiz:		Backup of HCTSIZ register
++ * @hdma:		Backup of HCDMA register
++ * @hcdmab:		Backup of HCDMAB register
+  * @hprt0:		Backup of HPTR0 register
+  * @hfir:		Backup of HFIR register
+  * @hptxfsiz:		Backup of HPTXFSIZ register
+@@ -757,8 +763,14 @@ struct dwc2_dregs_backup {
+  */
+ struct dwc2_hregs_backup {
+ 	u32 hcfg;
++	u32 hflbaddr;
+ 	u32 haintmsk;
++	u32 hcchar[MAX_EPS_CHANNELS];
++	u32 hcsplt[MAX_EPS_CHANNELS];
+ 	u32 hcintmsk[MAX_EPS_CHANNELS];
++	u32 hctsiz[MAX_EPS_CHANNELS];
++	u32 hcidma[MAX_EPS_CHANNELS];
++	u32 hcidmab[MAX_EPS_CHANNELS];
+ 	u32 hprt0;
+ 	u32 hfir;
+ 	u32 hptxfsiz;
+@@ -1097,6 +1109,7 @@ struct dwc2_hsotg {
+ 	bool needs_byte_swap;
+ 
+ 	/* DWC OTG HW Release versions */
++#define DWC2_CORE_REV_4_30a	0x4f54430a
+ #define DWC2_CORE_REV_2_71a	0x4f54271a
+ #define DWC2_CORE_REV_2_72a     0x4f54272a
+ #define DWC2_CORE_REV_2_80a	0x4f54280a
+@@ -1335,6 +1348,7 @@ int dwc2_backup_global_registers(struct dwc2_hsotg *hsotg);
+ int dwc2_restore_global_registers(struct dwc2_hsotg *hsotg);
+ 
+ void dwc2_enable_acg(struct dwc2_hsotg *hsotg);
++void dwc2_wakeup_from_lpm_l1(struct dwc2_hsotg *hsotg, bool remotewakeup);
+ 
+ /* This function should be called on every hardware interrupt. */
+ irqreturn_t dwc2_handle_common_intr(int irq, void *dev);
+diff --git a/drivers/usb/dwc2/core_intr.c b/drivers/usb/dwc2/core_intr.c
+index e3f429f1575e9..8deb2ea214b07 100644
+--- a/drivers/usb/dwc2/core_intr.c
++++ b/drivers/usb/dwc2/core_intr.c
+@@ -344,10 +344,11 @@ static void dwc2_handle_session_req_intr(struct dwc2_hsotg *hsotg)
+  * @hsotg: Programming view of DWC_otg controller
+  *
+  */
+-static void dwc2_wakeup_from_lpm_l1(struct dwc2_hsotg *hsotg)
++void dwc2_wakeup_from_lpm_l1(struct dwc2_hsotg *hsotg, bool remotewakeup)
+ {
+ 	u32 glpmcfg;
+-	u32 i = 0;
++	u32 pcgctl;
++	u32 dctl;
+ 
+ 	if (hsotg->lx_state != DWC2_L1) {
+ 		dev_err(hsotg->dev, "Core isn't in DWC2_L1 state\n");
+@@ -356,37 +357,57 @@ static void dwc2_wakeup_from_lpm_l1(struct dwc2_hsotg *hsotg)
+ 
+ 	glpmcfg = dwc2_readl(hsotg, GLPMCFG);
+ 	if (dwc2_is_device_mode(hsotg)) {
+-		dev_dbg(hsotg->dev, "Exit from L1 state\n");
++		dev_dbg(hsotg->dev, "Exit from L1 state, remotewakeup=%d\n", remotewakeup);
+ 		glpmcfg &= ~GLPMCFG_ENBLSLPM;
+-		glpmcfg &= ~GLPMCFG_HIRD_THRES_EN;
++		glpmcfg &= ~GLPMCFG_HIRD_THRES_MASK;
+ 		dwc2_writel(hsotg, glpmcfg, GLPMCFG);
+ 
+-		do {
+-			glpmcfg = dwc2_readl(hsotg, GLPMCFG);
++		pcgctl = dwc2_readl(hsotg, PCGCTL);
++		pcgctl &= ~PCGCTL_ENBL_SLEEP_GATING;
++		dwc2_writel(hsotg, pcgctl, PCGCTL);
+ 
+-			if (!(glpmcfg & (GLPMCFG_COREL1RES_MASK |
+-					 GLPMCFG_L1RESUMEOK | GLPMCFG_SLPSTS)))
+-				break;
++		glpmcfg = dwc2_readl(hsotg, GLPMCFG);
++		if (glpmcfg & GLPMCFG_ENBESL) {
++			glpmcfg |= GLPMCFG_RSTRSLPSTS;
++			dwc2_writel(hsotg, glpmcfg, GLPMCFG);
++		}
++
++		if (remotewakeup) {
++			if (dwc2_hsotg_wait_bit_set(hsotg, GLPMCFG, GLPMCFG_L1RESUMEOK, 1000)) {
++				dev_warn(hsotg->dev, "%s: timeout GLPMCFG_L1RESUMEOK\n", __func__);
++				goto fail;
++				return;
++			}
++
++			dctl = dwc2_readl(hsotg, DCTL);
++			dctl |= DCTL_RMTWKUPSIG;
++			dwc2_writel(hsotg, dctl, DCTL);
+ 
+-			udelay(1);
+-		} while (++i < 200);
++			if (dwc2_hsotg_wait_bit_set(hsotg, GINTSTS, GINTSTS_WKUPINT, 1000)) {
++				dev_warn(hsotg->dev, "%s: timeout GINTSTS_WKUPINT\n", __func__);
++				goto fail;
++				return;
++			}
++		}
+ 
+-		if (i == 200) {
+-			dev_err(hsotg->dev, "Failed to exit L1 sleep state in 200us.\n");
++		glpmcfg = dwc2_readl(hsotg, GLPMCFG);
++		if (glpmcfg & GLPMCFG_COREL1RES_MASK || glpmcfg & GLPMCFG_SLPSTS ||
++		    glpmcfg & GLPMCFG_L1RESUMEOK) {
++			goto fail;
+ 			return;
+ 		}
+-		dwc2_gadget_init_lpm(hsotg);
++
++		/* Inform gadget to exit from L1 */
++		call_gadget(hsotg, resume);
++		/* Change to L0 state */
++		hsotg->lx_state = DWC2_L0;
++		hsotg->bus_suspended = false;
++fail:		dwc2_gadget_init_lpm(hsotg);
+ 	} else {
+ 		/* TODO */
+ 		dev_err(hsotg->dev, "Host side LPM is not supported.\n");
+ 		return;
+ 	}
+-
+-	/* Change to L0 state */
+-	hsotg->lx_state = DWC2_L0;
+-
+-	/* Inform gadget to exit from L1 */
+-	call_gadget(hsotg, resume);
+ }
+ 
+ /*
+@@ -407,7 +428,7 @@ static void dwc2_handle_wakeup_detected_intr(struct dwc2_hsotg *hsotg)
+ 	dev_dbg(hsotg->dev, "%s lxstate = %d\n", __func__, hsotg->lx_state);
+ 
+ 	if (hsotg->lx_state == DWC2_L1) {
+-		dwc2_wakeup_from_lpm_l1(hsotg);
++		dwc2_wakeup_from_lpm_l1(hsotg, false);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index da0df69cc2344..d8b83665581f5 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -1416,6 +1416,10 @@ static int dwc2_hsotg_ep_queue(struct usb_ep *ep, struct usb_request *req,
+ 		ep->name, req, req->length, req->buf, req->no_interrupt,
+ 		req->zero, req->short_not_ok);
+ 
++	if (hs->lx_state == DWC2_L1) {
++		dwc2_wakeup_from_lpm_l1(hs, true);
++	}
++
+ 	/* Prevent new request submission when controller is suspended */
+ 	if (hs->lx_state != DWC2_L0) {
+ 		dev_dbg(hs->dev, "%s: submit request only in active state\n",
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 14925fedb01aa..9c32a64bc8c20 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -2736,8 +2736,11 @@ enum dwc2_transaction_type dwc2_hcd_select_transactions(
+ 			hsotg->available_host_channels--;
+ 		}
+ 		qh = list_entry(qh_ptr, struct dwc2_qh, qh_list_entry);
+-		if (dwc2_assign_and_init_hc(hsotg, qh))
++		if (dwc2_assign_and_init_hc(hsotg, qh)) {
++			if (hsotg->params.uframe_sched)
++				hsotg->available_host_channels++;
+ 			break;
++		}
+ 
+ 		/*
+ 		 * Move the QH from the periodic ready schedule to the
+@@ -2770,8 +2773,11 @@ enum dwc2_transaction_type dwc2_hcd_select_transactions(
+ 			hsotg->available_host_channels--;
+ 		}
+ 
+-		if (dwc2_assign_and_init_hc(hsotg, qh))
++		if (dwc2_assign_and_init_hc(hsotg, qh)) {
++			if (hsotg->params.uframe_sched)
++				hsotg->available_host_channels++;
+ 			break;
++		}
+ 
+ 		/*
+ 		 * Move the QH from the non-periodic inactive schedule to the
+@@ -4125,6 +4131,8 @@ void dwc2_host_complete(struct dwc2_hsotg *hsotg, struct dwc2_qtd *qtd,
+ 			 urb->actual_length);
+ 
+ 	if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
++		if (!hsotg->params.dma_desc_enable)
++			urb->start_frame = qtd->qh->start_active_frame;
+ 		urb->error_count = dwc2_hcd_urb_get_error_count(qtd->urb);
+ 		for (i = 0; i < urb->number_of_packets; ++i) {
+ 			urb->iso_frame_desc[i].actual_length =
+@@ -5319,9 +5327,16 @@ int dwc2_backup_host_registers(struct dwc2_hsotg *hsotg)
+ 	/* Backup Host regs */
+ 	hr = &hsotg->hr_backup;
+ 	hr->hcfg = dwc2_readl(hsotg, HCFG);
++	hr->hflbaddr = dwc2_readl(hsotg, HFLBADDR);
+ 	hr->haintmsk = dwc2_readl(hsotg, HAINTMSK);
+-	for (i = 0; i < hsotg->params.host_channels; ++i)
++	for (i = 0; i < hsotg->params.host_channels; ++i) {
++		hr->hcchar[i] = dwc2_readl(hsotg, HCCHAR(i));
++		hr->hcsplt[i] = dwc2_readl(hsotg, HCSPLT(i));
+ 		hr->hcintmsk[i] = dwc2_readl(hsotg, HCINTMSK(i));
++		hr->hctsiz[i] = dwc2_readl(hsotg, HCTSIZ(i));
++		hr->hcidma[i] = dwc2_readl(hsotg, HCDMA(i));
++		hr->hcidmab[i] = dwc2_readl(hsotg, HCDMAB(i));
++	}
+ 
+ 	hr->hprt0 = dwc2_read_hprt0(hsotg);
+ 	hr->hfir = dwc2_readl(hsotg, HFIR);
+@@ -5355,10 +5370,17 @@ int dwc2_restore_host_registers(struct dwc2_hsotg *hsotg)
+ 	hr->valid = false;
+ 
+ 	dwc2_writel(hsotg, hr->hcfg, HCFG);
++	dwc2_writel(hsotg, hr->hflbaddr, HFLBADDR);
+ 	dwc2_writel(hsotg, hr->haintmsk, HAINTMSK);
+ 
+-	for (i = 0; i < hsotg->params.host_channels; ++i)
++	for (i = 0; i < hsotg->params.host_channels; ++i) {
++		dwc2_writel(hsotg, hr->hcchar[i], HCCHAR(i));
++		dwc2_writel(hsotg, hr->hcsplt[i], HCSPLT(i));
+ 		dwc2_writel(hsotg, hr->hcintmsk[i], HCINTMSK(i));
++		dwc2_writel(hsotg, hr->hctsiz[i], HCTSIZ(i));
++		dwc2_writel(hsotg, hr->hcidma[i], HCDMA(i));
++		dwc2_writel(hsotg, hr->hcidmab[i], HCDMAB(i));
++	}
+ 
+ 	dwc2_writel(hsotg, hr->hprt0, HPRT0);
+ 	dwc2_writel(hsotg, hr->hfir, HFIR);
+@@ -5523,10 +5545,12 @@ int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg, int rem_wakeup,
+ 	dwc2_writel(hsotg, hr->hcfg, HCFG);
+ 
+ 	/* De-assert Wakeup Logic */
+-	gpwrdn = dwc2_readl(hsotg, GPWRDN);
+-	gpwrdn &= ~GPWRDN_PMUACTV;
+-	dwc2_writel(hsotg, gpwrdn, GPWRDN);
+-	udelay(10);
++	if (!(rem_wakeup && hsotg->hw_params.snpsid >= DWC2_CORE_REV_4_30a)) {
++		gpwrdn = dwc2_readl(hsotg, GPWRDN);
++		gpwrdn &= ~GPWRDN_PMUACTV;
++		dwc2_writel(hsotg, gpwrdn, GPWRDN);
++		udelay(10);
++	}
+ 
+ 	hprt0 = hr->hprt0;
+ 	hprt0 |= HPRT0_PWR;
+@@ -5551,6 +5575,13 @@ int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg, int rem_wakeup,
+ 		hprt0 |= HPRT0_RES;
+ 		dwc2_writel(hsotg, hprt0, HPRT0);
+ 
++		/* De-assert Wakeup Logic */
++		if ((rem_wakeup && hsotg->hw_params.snpsid >= DWC2_CORE_REV_4_30a)) {
++			gpwrdn = dwc2_readl(hsotg, GPWRDN);
++			gpwrdn &= ~GPWRDN_PMUACTV;
++			dwc2_writel(hsotg, gpwrdn, GPWRDN);
++			udelay(10);
++		}
+ 		/* Wait for Resume time and then program HPRT again */
+ 		mdelay(100);
+ 		hprt0 &= ~HPRT0_RES;
+diff --git a/drivers/usb/dwc2/hcd_ddma.c b/drivers/usb/dwc2/hcd_ddma.c
+index a858b5f9c1d60..6a4aa71da103f 100644
+--- a/drivers/usb/dwc2/hcd_ddma.c
++++ b/drivers/usb/dwc2/hcd_ddma.c
+@@ -589,7 +589,7 @@ static void dwc2_init_isoc_dma_desc(struct dwc2_hsotg *hsotg,
+ 	idx = qh->td_last;
+ 	inc = qh->host_interval;
+ 	hsotg->frame_number = dwc2_hcd_get_frame_number(hsotg);
+-	cur_idx = dwc2_frame_list_idx(hsotg->frame_number);
++	cur_idx = idx;
+ 	next_idx = dwc2_desclist_idx_inc(qh->td_last, inc, qh->dev_speed);
+ 
+ 	/*
+@@ -896,6 +896,8 @@ static int dwc2_cmpl_host_isoc_dma_desc(struct dwc2_hsotg *hsotg,
+ {
+ 	struct dwc2_dma_desc *dma_desc;
+ 	struct dwc2_hcd_iso_packet_desc *frame_desc;
++	u16 frame_desc_idx;
++	struct urb *usb_urb = qtd->urb->priv;
+ 	u16 remain = 0;
+ 	int rc = 0;
+ 
+@@ -908,8 +910,11 @@ static int dwc2_cmpl_host_isoc_dma_desc(struct dwc2_hsotg *hsotg,
+ 				DMA_FROM_DEVICE);
+ 
+ 	dma_desc = &qh->desc_list[idx];
++	frame_desc_idx = (idx - qtd->isoc_td_first) & (usb_urb->number_of_packets - 1);
+ 
+-	frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index_last];
++	frame_desc = &qtd->urb->iso_descs[frame_desc_idx];
++	if (idx == qtd->isoc_td_first)
++		usb_urb->start_frame = dwc2_hcd_get_frame_number(hsotg);
+ 	dma_desc->buf = (u32)(qtd->urb->dma + frame_desc->offset);
+ 	if (chan->ep_is_in)
+ 		remain = (dma_desc->status & HOST_DMA_ISOC_NBYTES_MASK) >>
+@@ -930,7 +935,7 @@ static int dwc2_cmpl_host_isoc_dma_desc(struct dwc2_hsotg *hsotg,
+ 		frame_desc->status = 0;
+ 	}
+ 
+-	if (++qtd->isoc_frame_index == qtd->urb->packet_count) {
++	if (++qtd->isoc_frame_index == usb_urb->number_of_packets) {
+ 		/*
+ 		 * urb->status is not used for isoc transfers here. The
+ 		 * individual frame_desc status are used instead.
+@@ -1035,11 +1040,11 @@ static void dwc2_complete_isoc_xfer_ddma(struct dwc2_hsotg *hsotg,
+ 				return;
+ 			idx = dwc2_desclist_idx_inc(idx, qh->host_interval,
+ 						    chan->speed);
+-			if (!rc)
++			if (rc == 0)
+ 				continue;
+ 
+-			if (rc == DWC2_CMPL_DONE)
+-				break;
++			if (rc == DWC2_CMPL_DONE || rc == DWC2_CMPL_STOP)
++				goto stop_scan;
+ 
+ 			/* rc == DWC2_CMPL_STOP */
+ 
+diff --git a/drivers/usb/dwc2/hw.h b/drivers/usb/dwc2/hw.h
+index c3d6dde2aca45..6ad7ba0544329 100644
+--- a/drivers/usb/dwc2/hw.h
++++ b/drivers/usb/dwc2/hw.h
+@@ -727,7 +727,7 @@
+ #define TXSTS_QTOP_TOKEN_MASK		(0x3 << 25)
+ #define TXSTS_QTOP_TOKEN_SHIFT		25
+ #define TXSTS_QTOP_TERMINATE		BIT(24)
+-#define TXSTS_QSPCAVAIL_MASK		(0xff << 16)
++#define TXSTS_QSPCAVAIL_MASK		(0x7f << 16)
+ #define TXSTS_QSPCAVAIL_SHIFT		16
+ #define TXSTS_FSPCAVAIL_MASK		(0xffff << 0)
+ #define TXSTS_FSPCAVAIL_SHIFT		0
+diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
+index 8fac7a67db76f..752951b56ad3f 100644
+--- a/drivers/usb/gadget/function/f_ncm.c
++++ b/drivers/usb/gadget/function/f_ncm.c
+@@ -1357,7 +1357,7 @@ static int ncm_unwrap_ntb(struct gether *port,
+ 	if (to_process == 1 &&
+ 	    (*(unsigned char *)(ntb_ptr + block_len) == 0x00)) {
+ 		to_process--;
+-	} else if (to_process > 0) {
++	} else if ((to_process > 0) && (block_len != 0)) {
+ 		ntb_ptr = (unsigned char *)(ntb_ptr + block_len);
+ 		goto parse_ntb;
+ 	}
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 7b3c0787d5a45..e80aa717c8b56 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -273,7 +273,9 @@ int usb_ep_queue(struct usb_ep *ep,
+ {
+ 	int ret = 0;
+ 
+-	if (WARN_ON_ONCE(!ep->enabled && ep->address)) {
++	if (!ep->enabled && ep->address) {
++		pr_debug("USB gadget: queue request to disabled ep 0x%x (%s)\n",
++				 ep->address, ep->name);
+ 		ret = -ESHUTDOWN;
+ 		goto out;
+ 	}
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index c5f0fbb8ffe47..9d3e36c867e83 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -3480,8 +3480,8 @@ static void tegra_xudc_device_params_init(struct tegra_xudc *xudc)
+ 
+ static int tegra_xudc_phy_get(struct tegra_xudc *xudc)
+ {
+-	int err = 0, usb3;
+-	unsigned int i;
++	int err = 0, usb3_companion_port;
++	unsigned int i, j;
+ 
+ 	xudc->utmi_phy = devm_kcalloc(xudc->dev, xudc->soc->num_phys,
+ 					   sizeof(*xudc->utmi_phy), GFP_KERNEL);
+@@ -3508,10 +3508,8 @@ static int tegra_xudc_phy_get(struct tegra_xudc *xudc)
+ 		xudc->utmi_phy[i] = devm_phy_optional_get(xudc->dev, phy_name);
+ 		if (IS_ERR(xudc->utmi_phy[i])) {
+ 			err = PTR_ERR(xudc->utmi_phy[i]);
+-			if (err != -EPROBE_DEFER)
+-				dev_err(xudc->dev, "failed to get usb2-%d PHY: %d\n",
+-					i, err);
+-
++			dev_err_probe(xudc->dev, err,
++				"failed to get PHY for phy-name usb2-%d\n", i);
+ 			goto clean_up;
+ 		} else if (xudc->utmi_phy[i]) {
+ 			/* Get usb-phy, if utmi phy is available */
+@@ -3530,21 +3528,30 @@ static int tegra_xudc_phy_get(struct tegra_xudc *xudc)
+ 		}
+ 
+ 		/* Get USB3 phy */
+-		usb3 = tegra_xusb_padctl_get_usb3_companion(xudc->padctl, i);
+-		if (usb3 < 0)
++		usb3_companion_port = tegra_xusb_padctl_get_usb3_companion(xudc->padctl, i);
++		if (usb3_companion_port < 0)
+ 			continue;
+ 
+-		snprintf(phy_name, sizeof(phy_name), "usb3-%d", usb3);
+-		xudc->usb3_phy[i] = devm_phy_optional_get(xudc->dev, phy_name);
+-		if (IS_ERR(xudc->usb3_phy[i])) {
+-			err = PTR_ERR(xudc->usb3_phy[i]);
+-			if (err != -EPROBE_DEFER)
+-				dev_err(xudc->dev, "failed to get usb3-%d PHY: %d\n",
+-					usb3, err);
+-
+-			goto clean_up;
+-		} else if (xudc->usb3_phy[i])
+-			dev_dbg(xudc->dev, "usb3-%d PHY registered", usb3);
++		for (j = 0; j < xudc->soc->num_phys; j++) {
++			snprintf(phy_name, sizeof(phy_name), "usb3-%d", j);
++			xudc->usb3_phy[i] = devm_phy_optional_get(xudc->dev, phy_name);
++			if (IS_ERR(xudc->usb3_phy[i])) {
++				err = PTR_ERR(xudc->usb3_phy[i]);
++				dev_err_probe(xudc->dev, err,
++					"failed to get PHY for phy-name usb3-%d\n", j);
++				goto clean_up;
++			} else if (xudc->usb3_phy[i]) {
++				int usb2_port =
++					tegra_xusb_padctl_get_port_number(xudc->utmi_phy[i]);
++				int usb3_port =
++					tegra_xusb_padctl_get_port_number(xudc->usb3_phy[i]);
++				if (usb3_port == usb3_companion_port) {
++					dev_dbg(xudc->dev, "USB2 port %d is paired with USB3 port %d for device mode port %d\n",
++					 usb2_port, usb3_port, i);
++					break;
++				}
++			}
++		}
+ 	}
+ 
+ 	return err;
+@@ -3781,9 +3788,7 @@ static int tegra_xudc_probe(struct platform_device *pdev)
+ 
+ 	err = devm_clk_bulk_get(&pdev->dev, xudc->soc->num_clks, xudc->clks);
+ 	if (err) {
+-		if (err != -EPROBE_DEFER)
+-			dev_err(xudc->dev, "failed to request clocks: %d\n", err);
+-
++		dev_err_probe(xudc->dev, err, "failed to request clocks\n");
+ 		return err;
+ 	}
+ 
+@@ -3798,9 +3803,7 @@ static int tegra_xudc_probe(struct platform_device *pdev)
+ 	err = devm_regulator_bulk_get(&pdev->dev, xudc->soc->num_supplies,
+ 				      xudc->supplies);
+ 	if (err) {
+-		if (err != -EPROBE_DEFER)
+-			dev_err(xudc->dev, "failed to request regulators: %d\n", err);
+-
++		dev_err_probe(xudc->dev, err, "failed to request regulators\n");
+ 		return err;
+ 	}
+ 
+diff --git a/drivers/usb/host/sl811-hcd.c b/drivers/usb/host/sl811-hcd.c
+index 9465fce99c822..f803079a9f263 100644
+--- a/drivers/usb/host/sl811-hcd.c
++++ b/drivers/usb/host/sl811-hcd.c
+@@ -585,6 +585,7 @@ done(struct sl811 *sl811, struct sl811h_ep *ep, u8 bank)
+ 		finish_request(sl811, ep, urb, urbstat);
+ }
+ 
++#ifdef QUIRK2
+ static inline u8 checkdone(struct sl811 *sl811)
+ {
+ 	u8	ctl;
+@@ -616,6 +617,7 @@ static inline u8 checkdone(struct sl811 *sl811)
+ #endif
+ 	return irqstat;
+ }
++#endif
+ 
+ static irqreturn_t sl811h_irq(struct usb_hcd *hcd)
+ {
+diff --git a/drivers/usb/phy/phy-generic.c b/drivers/usb/phy/phy-generic.c
+index 34b9f81401871..661a229c105dd 100644
+--- a/drivers/usb/phy/phy-generic.c
++++ b/drivers/usb/phy/phy-generic.c
+@@ -268,13 +268,6 @@ int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_generic *nop)
+ 			return -EPROBE_DEFER;
+ 	}
+ 
+-	nop->vbus_draw = devm_regulator_get_exclusive(dev, "vbus");
+-	if (PTR_ERR(nop->vbus_draw) == -ENODEV)
+-		nop->vbus_draw = NULL;
+-	if (IS_ERR(nop->vbus_draw))
+-		return dev_err_probe(dev, PTR_ERR(nop->vbus_draw),
+-				     "could not get vbus regulator\n");
+-
+ 	nop->dev		= dev;
+ 	nop->phy.dev		= nop->dev;
+ 	nop->phy.label		= "nop-xceiv";
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index d161b64416a48..294f7f01656aa 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -60,6 +60,8 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x0471, 0x066A) }, /* AKTAKOM ACE-1001 cable */
+ 	{ USB_DEVICE(0x0489, 0xE000) }, /* Pirelli Broadband S.p.A, DP-L10 SIP/GSM Mobile */
+ 	{ USB_DEVICE(0x0489, 0xE003) }, /* Pirelli Broadband S.p.A, DP-L10 SIP/GSM Mobile */
++	{ USB_DEVICE(0x04BF, 0x1301) }, /* TDK Corporation NC0110013M - Network Controller */
++	{ USB_DEVICE(0x04BF, 0x1303) }, /* TDK Corporation MM0110113M - i3 Micro Module */
+ 	{ USB_DEVICE(0x0745, 0x1000) }, /* CipherLab USB CCD Barcode Scanner 1000 */
+ 	{ USB_DEVICE(0x0846, 0x1100) }, /* NetGear Managed Switch M4100 series, M5300 series, M7100 series */
+ 	{ USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
+@@ -148,6 +150,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x85EA) }, /* AC-Services IBUS-IF */
+ 	{ USB_DEVICE(0x10C4, 0x85EB) }, /* AC-Services CIS-IBUS */
+ 	{ USB_DEVICE(0x10C4, 0x85F8) }, /* Virtenio Preon32 */
++	{ USB_DEVICE(0x10C4, 0x863C) }, /* MGP Instruments PDS100 */
+ 	{ USB_DEVICE(0x10C4, 0x8664) }, /* AC-Services CAN-IF */
+ 	{ USB_DEVICE(0x10C4, 0x8665) }, /* AC-Services OBD-IF */
+ 	{ USB_DEVICE(0x10C4, 0x87ED) }, /* IMST USB-Stick for Smart Meter */
+@@ -181,6 +184,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0xF004) }, /* Elan Digital Systems USBcount50 */
+ 	{ USB_DEVICE(0x10C5, 0xEA61) }, /* Silicon Labs MobiData GPRS USB Modem */
+ 	{ USB_DEVICE(0x10CE, 0xEA6A) }, /* Silicon Labs MobiData GPRS USB Modem 100EU */
++	{ USB_DEVICE(0x11CA, 0x0212) }, /* Verifone USB to Printer (UART, CP2102) */
+ 	{ USB_DEVICE(0x12B8, 0xEC60) }, /* Link G4 ECU */
+ 	{ USB_DEVICE(0x12B8, 0xEC62) }, /* Link G4+ ECU */
+ 	{ USB_DEVICE(0x13AD, 0x9999) }, /* Baltech card reader */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 4d7f4a4ab69fb..66aa999efa6d5 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1055,6 +1055,8 @@ static const struct usb_device_id id_table_combined[] = {
+ 		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_FALCONIA_JTAG_UNBUF_PID),
+ 		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
++	/* GMC devices */
++	{ USB_DEVICE(GMC_VID, GMC_Z216C_PID) },
+ 	{ }					/* Terminating entry */
+ };
+ 
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 9a0f9fc991246..b2aec1106678a 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -1599,3 +1599,9 @@
+ #define UBLOX_VID			0x1546
+ #define UBLOX_C099F9P_ZED_PID		0x0502
+ #define UBLOX_C099F9P_ODIN_PID		0x0503
++
++/*
++ * GMC devices
++ */
++#define GMC_VID				0x1cd7
++#define GMC_Z216C_PID			0x0217 /* GMC Z216C Adapter IR-USB */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 43e8cb17b4c7a..fb1eba835e508 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -613,6 +613,11 @@ static void option_instat_callback(struct urb *urb);
+ /* Luat Air72*U series based on UNISOC UIS8910 uses UNISOC's vendor ID */
+ #define LUAT_PRODUCT_AIR720U			0x4e00
+ 
++/* MeiG Smart Technology products */
++#define MEIGSMART_VENDOR_ID			0x2dee
++/* MeiG Smart SLM320 based on UNISOC UIS8910 */
++#define MEIGSMART_PRODUCT_SLM320		0x4d41
++
+ /* Device flags */
+ 
+ /* Highest interface number which can be used with NCTRL() and RSVD() */
+@@ -2282,6 +2287,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) },
+ 	{ } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/drivers/usb/storage/isd200.c b/drivers/usb/storage/isd200.c
+index 3c76336e43bb2..1d22db59f7bdf 100644
+--- a/drivers/usb/storage/isd200.c
++++ b/drivers/usb/storage/isd200.c
+@@ -1105,7 +1105,7 @@ static void isd200_dump_driveid(struct us_data *us, u16 *id)
+ static int isd200_get_inquiry_data( struct us_data *us )
+ {
+ 	struct isd200_info *info = (struct isd200_info *)us->extra;
+-	int retStatus = ISD200_GOOD;
++	int retStatus;
+ 	u16 *id = info->id;
+ 
+ 	usb_stor_dbg(us, "Entering isd200_get_inquiry_data\n");
+@@ -1137,6 +1137,13 @@ static int isd200_get_inquiry_data( struct us_data *us )
+ 				isd200_fix_driveid(id);
+ 				isd200_dump_driveid(us, id);
+ 
++				/* Prevent division by 0 in isd200_scsi_to_ata() */
++				if (id[ATA_ID_HEADS] == 0 || id[ATA_ID_SECTORS] == 0) {
++					usb_stor_dbg(us, "   Invalid ATA Identify data\n");
++					retStatus = ISD200_ERROR;
++					goto Done;
++				}
++
+ 				memset(&info->InquiryData, 0, sizeof(info->InquiryData));
+ 
+ 				/* Standard IDE interface only supports disks */
+@@ -1202,6 +1209,7 @@ static int isd200_get_inquiry_data( struct us_data *us )
+ 		}
+ 	}
+ 
++ Done:
+ 	usb_stor_dbg(us, "Leaving isd200_get_inquiry_data %08X\n", retStatus);
+ 
+ 	return(retStatus);
+@@ -1481,22 +1489,27 @@ static int isd200_init_info(struct us_data *us)
+ 
+ static int isd200_Initialization(struct us_data *us)
+ {
++	int rc = 0;
++
+ 	usb_stor_dbg(us, "ISD200 Initialization...\n");
+ 
+ 	/* Initialize ISD200 info struct */
+ 
+-	if (isd200_init_info(us) == ISD200_ERROR) {
++	if (isd200_init_info(us) < 0) {
+ 		usb_stor_dbg(us, "ERROR Initializing ISD200 Info struct\n");
++		rc = -ENOMEM;
+ 	} else {
+ 		/* Get device specific data */
+ 
+-		if (isd200_get_inquiry_data(us) != ISD200_GOOD)
++		if (isd200_get_inquiry_data(us) != ISD200_GOOD) {
+ 			usb_stor_dbg(us, "ISD200 Initialization Failure\n");
+-		else
++			rc = -EINVAL;
++		} else {
+ 			usb_stor_dbg(us, "ISD200 Initialization complete\n");
++		}
+ 	}
+ 
+-	return 0;
++	return rc;
+ }
+ 
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index e34e46df80243..33c67adf7c67a 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -732,6 +732,7 @@ MODULE_DEVICE_TABLE(i2c, tcpci_id);
+ #ifdef CONFIG_OF
+ static const struct of_device_id tcpci_of_match[] = {
+ 	{ .compatible = "nxp,ptn5110", },
++	{ .compatible = "tcpci", },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, tcpci_of_match);
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index cd3689005c310..2ddc8936a8935 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -138,8 +138,12 @@ static int ucsi_exec_command(struct ucsi *ucsi, u64 cmd)
+ 	if (!(cci & UCSI_CCI_COMMAND_COMPLETE))
+ 		return -EIO;
+ 
+-	if (cci & UCSI_CCI_NOT_SUPPORTED)
++	if (cci & UCSI_CCI_NOT_SUPPORTED) {
++		if (ucsi_acknowledge_command(ucsi) < 0)
++			dev_err(ucsi->dev,
++				"ACK of unsupported command failed\n");
+ 		return -EOPNOTSUPP;
++	}
+ 
+ 	if (cci & UCSI_CCI_ERROR) {
+ 		if (cmd == UCSI_GET_ERROR_STATUS)
+@@ -848,13 +852,47 @@ static int ucsi_reset_connector(struct ucsi_connector *con, bool hard)
+ 
+ static int ucsi_reset_ppm(struct ucsi *ucsi)
+ {
+-	u64 command = UCSI_PPM_RESET;
++	u64 command;
+ 	unsigned long tmo;
+ 	u32 cci;
+ 	int ret;
+ 
+ 	mutex_lock(&ucsi->ppm_lock);
+ 
++	ret = ucsi->ops->read(ucsi, UCSI_CCI, &cci, sizeof(cci));
++	if (ret < 0)
++		goto out;
++
++	/*
++	 * If UCSI_CCI_RESET_COMPLETE is already set we must clear
++	 * the flag before we start another reset. Send a
++	 * UCSI_SET_NOTIFICATION_ENABLE command to achieve this.
++	 * Ignore a timeout and try the reset anyway if this fails.
++	 */
++	if (cci & UCSI_CCI_RESET_COMPLETE) {
++		command = UCSI_SET_NOTIFICATION_ENABLE;
++		ret = ucsi->ops->async_write(ucsi, UCSI_CONTROL, &command,
++					     sizeof(command));
++		if (ret < 0)
++			goto out;
++
++		tmo = jiffies + msecs_to_jiffies(UCSI_TIMEOUT_MS);
++		do {
++			ret = ucsi->ops->read(ucsi, UCSI_CCI,
++					      &cci, sizeof(cci));
++			if (ret < 0)
++				goto out;
++			if (cci & UCSI_CCI_COMMAND_COMPLETE)
++				break;
++			if (time_is_before_jiffies(tmo))
++				break;
++			msleep(20);
++		} while (1);
++
++		WARN_ON(cci & UCSI_CCI_RESET_COMPLETE);
++	}
++
++	command = UCSI_PPM_RESET;
+ 	ret = ucsi->ops->async_write(ucsi, UCSI_CONTROL, &command,
+ 				     sizeof(command));
+ 	if (ret < 0)
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index fce23ad16c6d0..41e1a64da82e8 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -219,12 +219,12 @@ struct ucsi_cable_property {
+ #define UCSI_CABLE_PROP_FLAG_VBUS_IN_CABLE	BIT(0)
+ #define UCSI_CABLE_PROP_FLAG_ACTIVE_CABLE	BIT(1)
+ #define UCSI_CABLE_PROP_FLAG_DIRECTIONALITY	BIT(2)
+-#define UCSI_CABLE_PROP_FLAG_PLUG_TYPE(_f_)	((_f_) & GENMASK(3, 0))
++#define UCSI_CABLE_PROP_FLAG_PLUG_TYPE(_f_)	(((_f_) & GENMASK(4, 3)) >> 3)
+ #define   UCSI_CABLE_PROPERTY_PLUG_TYPE_A	0
+ #define   UCSI_CABLE_PROPERTY_PLUG_TYPE_B	1
+ #define   UCSI_CABLE_PROPERTY_PLUG_TYPE_C	2
+ #define   UCSI_CABLE_PROPERTY_PLUG_OTHER	3
+-#define UCSI_CABLE_PROP_MODE_SUPPORT		BIT(5)
++#define UCSI_CABLE_PROP_FLAG_MODE_SUPPORT	BIT(5)
+ 	u8 latency;
+ } __packed;
+ 
+diff --git a/drivers/vfio/fsl-mc/vfio_fsl_mc_intr.c b/drivers/vfio/fsl-mc/vfio_fsl_mc_intr.c
+index 0d9f3002df7f5..86f770e6d0f8e 100644
+--- a/drivers/vfio/fsl-mc/vfio_fsl_mc_intr.c
++++ b/drivers/vfio/fsl-mc/vfio_fsl_mc_intr.c
+@@ -142,13 +142,14 @@ static int vfio_fsl_mc_set_irq_trigger(struct vfio_fsl_mc_device *vdev,
+ 	irq = &vdev->mc_irqs[index];
+ 
+ 	if (flags & VFIO_IRQ_SET_DATA_NONE) {
+-		vfio_fsl_mc_irq_handler(hwirq, irq);
++		if (irq->trigger)
++			eventfd_signal(irq->trigger, 1);
+ 
+ 	} else if (flags & VFIO_IRQ_SET_DATA_BOOL) {
+ 		u8 trigger = *(u8 *)data;
+ 
+-		if (trigger)
+-			vfio_fsl_mc_irq_handler(hwirq, irq);
++		if (trigger && irq->trigger)
++			eventfd_signal(irq->trigger, 1);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
+index 869dce5f134dd..5b0b7fab3ba19 100644
+--- a/drivers/vfio/pci/vfio_pci_intrs.c
++++ b/drivers/vfio/pci/vfio_pci_intrs.c
+@@ -29,15 +29,22 @@ static void vfio_send_intx_eventfd(void *opaque, void *unused)
+ {
+ 	struct vfio_pci_device *vdev = opaque;
+ 
+-	if (likely(is_intx(vdev) && !vdev->virq_disabled))
+-		eventfd_signal(vdev->ctx[0].trigger, 1);
++	if (likely(is_intx(vdev) && !vdev->virq_disabled)) {
++		struct eventfd_ctx *trigger;
++
++		trigger = READ_ONCE(vdev->ctx[0].trigger);
++		if (likely(trigger))
++			eventfd_signal(trigger, 1);
++	}
+ }
+ 
+-void vfio_pci_intx_mask(struct vfio_pci_device *vdev)
++static void __vfio_pci_intx_mask(struct vfio_pci_device *vdev)
+ {
+ 	struct pci_dev *pdev = vdev->pdev;
+ 	unsigned long flags;
+ 
++	lockdep_assert_held(&vdev->igate);
++
+ 	spin_lock_irqsave(&vdev->irqlock, flags);
+ 
+ 	/*
+@@ -65,6 +72,13 @@ void vfio_pci_intx_mask(struct vfio_pci_device *vdev)
+ 	spin_unlock_irqrestore(&vdev->irqlock, flags);
+ }
+ 
++void vfio_pci_intx_mask(struct vfio_pci_device *vdev)
++{
++	mutex_lock(&vdev->igate);
++	__vfio_pci_intx_mask(vdev);
++	mutex_unlock(&vdev->igate);
++}
++
+ /*
+  * If this is triggered by an eventfd, we can't call eventfd_signal
+  * or else we'll deadlock on the eventfd wait queue.  Return >0 when
+@@ -107,12 +121,21 @@ static int vfio_pci_intx_unmask_handler(void *opaque, void *unused)
+ 	return ret;
+ }
+ 
+-void vfio_pci_intx_unmask(struct vfio_pci_device *vdev)
++static void __vfio_pci_intx_unmask(struct vfio_pci_device *vdev)
+ {
++	lockdep_assert_held(&vdev->igate);
++
+ 	if (vfio_pci_intx_unmask_handler(vdev, NULL) > 0)
+ 		vfio_send_intx_eventfd(vdev, NULL);
+ }
+ 
++void vfio_pci_intx_unmask(struct vfio_pci_device *vdev)
++{
++	mutex_lock(&vdev->igate);
++	__vfio_pci_intx_unmask(vdev);
++	mutex_unlock(&vdev->igate);
++}
++
+ static irqreturn_t vfio_intx_handler(int irq, void *dev_id)
+ {
+ 	struct vfio_pci_device *vdev = dev_id;
+@@ -139,95 +162,104 @@ static irqreturn_t vfio_intx_handler(int irq, void *dev_id)
+ 	return ret;
+ }
+ 
+-static int vfio_intx_enable(struct vfio_pci_device *vdev)
++static int vfio_intx_enable(struct vfio_pci_device *vdev,
++			    struct eventfd_ctx *trigger)
+ {
++	struct pci_dev *pdev = vdev->pdev;
++	unsigned long irqflags;
++	char *name;
++	int ret;
++
+ 	if (!is_irq_none(vdev))
+ 		return -EINVAL;
+ 
+-	if (!vdev->pdev->irq)
++	if (!pdev->irq)
+ 		return -ENODEV;
+ 
++	name = kasprintf(GFP_KERNEL, "vfio-intx(%s)", pci_name(pdev));
++	if (!name)
++		return -ENOMEM;
++
+ 	vdev->ctx = kzalloc(sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);
+ 	if (!vdev->ctx)
+ 		return -ENOMEM;
+ 
+ 	vdev->num_ctx = 1;
+ 
++	vdev->ctx[0].name = name;
++	vdev->ctx[0].trigger = trigger;
++
+ 	/*
+-	 * If the virtual interrupt is masked, restore it.  Devices
+-	 * supporting DisINTx can be masked at the hardware level
+-	 * here, non-PCI-2.3 devices will have to wait until the
+-	 * interrupt is enabled.
++	 * Fill the initial masked state based on virq_disabled.  After
++	 * enable, changing the DisINTx bit in vconfig directly changes INTx
++	 * masking.  igate prevents races during setup, once running masked
++	 * is protected via irqlock.
++	 *
++	 * Devices supporting DisINTx also reflect the current mask state in
++	 * the physical DisINTx bit, which is not affected during IRQ setup.
++	 *
++	 * Devices without DisINTx support require an exclusive interrupt.
++	 * IRQ masking is performed at the IRQ chip.  Again, igate protects
++	 * against races during setup and IRQ handlers and irqfds are not
++	 * yet active, therefore masked is stable and can be used to
++	 * conditionally auto-enable the IRQ.
++	 *
++	 * irq_type must be stable while the IRQ handler is registered,
++	 * therefore it must be set before request_irq().
+ 	 */
+ 	vdev->ctx[0].masked = vdev->virq_disabled;
+-	if (vdev->pci_2_3)
+-		pci_intx(vdev->pdev, !vdev->ctx[0].masked);
++	if (vdev->pci_2_3) {
++		pci_intx(pdev, !vdev->ctx[0].masked);
++		irqflags = IRQF_SHARED;
++	} else {
++		irqflags = vdev->ctx[0].masked ? IRQF_NO_AUTOEN : 0;
++	}
+ 
+ 	vdev->irq_type = VFIO_PCI_INTX_IRQ_INDEX;
+ 
++	ret = request_irq(pdev->irq, vfio_intx_handler,
++			  irqflags, vdev->ctx[0].name, vdev);
++	if (ret) {
++		vdev->irq_type = VFIO_PCI_NUM_IRQS;
++		kfree(name);
++		vdev->num_ctx = 0;
++		kfree(vdev->ctx);
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+-static int vfio_intx_set_signal(struct vfio_pci_device *vdev, int fd)
++static int vfio_intx_set_signal(struct vfio_pci_device *vdev,
++				struct eventfd_ctx *trigger)
+ {
+ 	struct pci_dev *pdev = vdev->pdev;
+-	unsigned long irqflags = IRQF_SHARED;
+-	struct eventfd_ctx *trigger;
+-	unsigned long flags;
+-	int ret;
++	struct eventfd_ctx *old;
+ 
+-	if (vdev->ctx[0].trigger) {
+-		free_irq(pdev->irq, vdev);
+-		kfree(vdev->ctx[0].name);
+-		eventfd_ctx_put(vdev->ctx[0].trigger);
+-		vdev->ctx[0].trigger = NULL;
+-	}
+-
+-	if (fd < 0) /* Disable only */
+-		return 0;
+-
+-	vdev->ctx[0].name = kasprintf(GFP_KERNEL, "vfio-intx(%s)",
+-				      pci_name(pdev));
+-	if (!vdev->ctx[0].name)
+-		return -ENOMEM;
+-
+-	trigger = eventfd_ctx_fdget(fd);
+-	if (IS_ERR(trigger)) {
+-		kfree(vdev->ctx[0].name);
+-		return PTR_ERR(trigger);
+-	}
+-
+-	vdev->ctx[0].trigger = trigger;
++	old = vdev->ctx[0].trigger;
+ 
+-	if (!vdev->pci_2_3)
+-		irqflags = 0;
++	WRITE_ONCE(vdev->ctx[0].trigger, trigger);
+ 
+-	ret = request_irq(pdev->irq, vfio_intx_handler,
+-			  irqflags, vdev->ctx[0].name, vdev);
+-	if (ret) {
+-		vdev->ctx[0].trigger = NULL;
+-		kfree(vdev->ctx[0].name);
+-		eventfd_ctx_put(trigger);
+-		return ret;
++	/* Releasing an old ctx requires synchronizing in-flight users */
++	if (old) {
++		synchronize_irq(pdev->irq);
++		vfio_virqfd_flush_thread(&vdev->ctx[0].unmask);
++		eventfd_ctx_put(old);
+ 	}
+ 
+-	/*
+-	 * INTx disable will stick across the new irq setup,
+-	 * disable_irq won't.
+-	 */
+-	spin_lock_irqsave(&vdev->irqlock, flags);
+-	if (!vdev->pci_2_3 && vdev->ctx[0].masked)
+-		disable_irq_nosync(pdev->irq);
+-	spin_unlock_irqrestore(&vdev->irqlock, flags);
+-
+ 	return 0;
+ }
+ 
+ static void vfio_intx_disable(struct vfio_pci_device *vdev)
+ {
++	struct pci_dev *pdev = vdev->pdev;
++
+ 	vfio_virqfd_disable(&vdev->ctx[0].unmask);
+ 	vfio_virqfd_disable(&vdev->ctx[0].mask);
+-	vfio_intx_set_signal(vdev, -1);
++	free_irq(pdev->irq, vdev);
++	if (vdev->ctx[0].trigger)
++		eventfd_ctx_put(vdev->ctx[0].trigger);
++	kfree(vdev->ctx[0].name);
+ 	vdev->irq_type = VFIO_PCI_NUM_IRQS;
+ 	vdev->num_ctx = 0;
+ 	kfree(vdev->ctx);
+@@ -425,11 +457,11 @@ static int vfio_pci_set_intx_unmask(struct vfio_pci_device *vdev,
+ 		return -EINVAL;
+ 
+ 	if (flags & VFIO_IRQ_SET_DATA_NONE) {
+-		vfio_pci_intx_unmask(vdev);
++		__vfio_pci_intx_unmask(vdev);
+ 	} else if (flags & VFIO_IRQ_SET_DATA_BOOL) {
+ 		uint8_t unmask = *(uint8_t *)data;
+ 		if (unmask)
+-			vfio_pci_intx_unmask(vdev);
++			__vfio_pci_intx_unmask(vdev);
+ 	} else if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
+ 		int32_t fd = *(int32_t *)data;
+ 		if (fd >= 0)
+@@ -452,11 +484,11 @@ static int vfio_pci_set_intx_mask(struct vfio_pci_device *vdev,
+ 		return -EINVAL;
+ 
+ 	if (flags & VFIO_IRQ_SET_DATA_NONE) {
+-		vfio_pci_intx_mask(vdev);
++		__vfio_pci_intx_mask(vdev);
+ 	} else if (flags & VFIO_IRQ_SET_DATA_BOOL) {
+ 		uint8_t mask = *(uint8_t *)data;
+ 		if (mask)
+-			vfio_pci_intx_mask(vdev);
++			__vfio_pci_intx_mask(vdev);
+ 	} else if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
+ 		return -ENOTTY; /* XXX implement me */
+ 	}
+@@ -477,19 +509,23 @@ static int vfio_pci_set_intx_trigger(struct vfio_pci_device *vdev,
+ 		return -EINVAL;
+ 
+ 	if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
++		struct eventfd_ctx *trigger = NULL;
+ 		int32_t fd = *(int32_t *)data;
+ 		int ret;
+ 
+-		if (is_intx(vdev))
+-			return vfio_intx_set_signal(vdev, fd);
++		if (fd >= 0) {
++			trigger = eventfd_ctx_fdget(fd);
++			if (IS_ERR(trigger))
++				return PTR_ERR(trigger);
++		}
+ 
+-		ret = vfio_intx_enable(vdev);
+-		if (ret)
+-			return ret;
++		if (is_intx(vdev))
++			ret = vfio_intx_set_signal(vdev, trigger);
++		else
++			ret = vfio_intx_enable(vdev, trigger);
+ 
+-		ret = vfio_intx_set_signal(vdev, fd);
+-		if (ret)
+-			vfio_intx_disable(vdev);
++		if (ret && trigger)
++			eventfd_ctx_put(trigger);
+ 
+ 		return ret;
+ 	}
+diff --git a/drivers/vfio/platform/vfio_platform_irq.c b/drivers/vfio/platform/vfio_platform_irq.c
+index c5b09ec0a3c98..7f4341a8d7185 100644
+--- a/drivers/vfio/platform/vfio_platform_irq.c
++++ b/drivers/vfio/platform/vfio_platform_irq.c
+@@ -136,6 +136,16 @@ static int vfio_platform_set_irq_unmask(struct vfio_platform_device *vdev,
+ 	return 0;
+ }
+ 
++/*
++ * The trigger eventfd is guaranteed valid in the interrupt path
++ * and protected by the igate mutex when triggered via ioctl.
++ */
++static void vfio_send_eventfd(struct vfio_platform_irq *irq_ctx)
++{
++	if (likely(irq_ctx->trigger))
++		eventfd_signal(irq_ctx->trigger, 1);
++}
++
+ static irqreturn_t vfio_automasked_irq_handler(int irq, void *dev_id)
+ {
+ 	struct vfio_platform_irq *irq_ctx = dev_id;
+@@ -155,7 +165,7 @@ static irqreturn_t vfio_automasked_irq_handler(int irq, void *dev_id)
+ 	spin_unlock_irqrestore(&irq_ctx->lock, flags);
+ 
+ 	if (ret == IRQ_HANDLED)
+-		eventfd_signal(irq_ctx->trigger, 1);
++		vfio_send_eventfd(irq_ctx);
+ 
+ 	return ret;
+ }
+@@ -164,22 +174,19 @@ static irqreturn_t vfio_irq_handler(int irq, void *dev_id)
+ {
+ 	struct vfio_platform_irq *irq_ctx = dev_id;
+ 
+-	eventfd_signal(irq_ctx->trigger, 1);
++	vfio_send_eventfd(irq_ctx);
+ 
+ 	return IRQ_HANDLED;
+ }
+ 
+ static int vfio_set_trigger(struct vfio_platform_device *vdev, int index,
+-			    int fd, irq_handler_t handler)
++			    int fd)
+ {
+ 	struct vfio_platform_irq *irq = &vdev->irqs[index];
+ 	struct eventfd_ctx *trigger;
+-	int ret;
+ 
+ 	if (irq->trigger) {
+-		irq_clear_status_flags(irq->hwirq, IRQ_NOAUTOEN);
+-		free_irq(irq->hwirq, irq);
+-		kfree(irq->name);
++		disable_irq(irq->hwirq);
+ 		eventfd_ctx_put(irq->trigger);
+ 		irq->trigger = NULL;
+ 	}
+@@ -187,30 +194,20 @@ static int vfio_set_trigger(struct vfio_platform_device *vdev, int index,
+ 	if (fd < 0) /* Disable only */
+ 		return 0;
+ 
+-	irq->name = kasprintf(GFP_KERNEL, "vfio-irq[%d](%s)",
+-						irq->hwirq, vdev->name);
+-	if (!irq->name)
+-		return -ENOMEM;
+-
+ 	trigger = eventfd_ctx_fdget(fd);
+-	if (IS_ERR(trigger)) {
+-		kfree(irq->name);
++	if (IS_ERR(trigger))
+ 		return PTR_ERR(trigger);
+-	}
+ 
+ 	irq->trigger = trigger;
+ 
+-	irq_set_status_flags(irq->hwirq, IRQ_NOAUTOEN);
+-	ret = request_irq(irq->hwirq, handler, 0, irq->name, irq);
+-	if (ret) {
+-		kfree(irq->name);
+-		eventfd_ctx_put(trigger);
+-		irq->trigger = NULL;
+-		return ret;
+-	}
+-
+-	if (!irq->masked)
+-		enable_irq(irq->hwirq);
++	/*
++	 * irq->masked effectively provides nested disables within the overall
++	 * enable relative to trigger.  Specifically request_irq() is called
++	 * with NO_AUTOEN, therefore the IRQ is initially disabled.  The user
++	 * may only further disable the IRQ with a MASK operations because
++	 * irq->masked is initially false.
++	 */
++	enable_irq(irq->hwirq);
+ 
+ 	return 0;
+ }
+@@ -229,7 +226,7 @@ static int vfio_platform_set_irq_trigger(struct vfio_platform_device *vdev,
+ 		handler = vfio_irq_handler;
+ 
+ 	if (!count && (flags & VFIO_IRQ_SET_DATA_NONE))
+-		return vfio_set_trigger(vdev, index, -1, handler);
++		return vfio_set_trigger(vdev, index, -1);
+ 
+ 	if (start != 0 || count != 1)
+ 		return -EINVAL;
+@@ -237,7 +234,7 @@ static int vfio_platform_set_irq_trigger(struct vfio_platform_device *vdev,
+ 	if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
+ 		int32_t fd = *(int32_t *)data;
+ 
+-		return vfio_set_trigger(vdev, index, fd, handler);
++		return vfio_set_trigger(vdev, index, fd);
+ 	}
+ 
+ 	if (flags & VFIO_IRQ_SET_DATA_NONE) {
+@@ -261,6 +258,14 @@ int vfio_platform_set_irqs_ioctl(struct vfio_platform_device *vdev,
+ 		    unsigned start, unsigned count, uint32_t flags,
+ 		    void *data) = NULL;
+ 
++	/*
++	 * For compatibility, errors from request_irq() are local to the
++	 * SET_IRQS path and reflected in the name pointer.  This allows,
++	 * for example, polling mode fallback for an exclusive IRQ failure.
++	 */
++	if (IS_ERR(vdev->irqs[index].name))
++		return PTR_ERR(vdev->irqs[index].name);
++
+ 	switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
+ 	case VFIO_IRQ_SET_ACTION_MASK:
+ 		func = vfio_platform_set_irq_mask;
+@@ -281,7 +286,7 @@ int vfio_platform_set_irqs_ioctl(struct vfio_platform_device *vdev,
+ 
+ int vfio_platform_irq_init(struct vfio_platform_device *vdev)
+ {
+-	int cnt = 0, i;
++	int cnt = 0, i, ret = 0;
+ 
+ 	while (vdev->get_irq(vdev, cnt) >= 0)
+ 		cnt++;
+@@ -292,37 +297,70 @@ int vfio_platform_irq_init(struct vfio_platform_device *vdev)
+ 
+ 	for (i = 0; i < cnt; i++) {
+ 		int hwirq = vdev->get_irq(vdev, i);
++		irq_handler_t handler = vfio_irq_handler;
+ 
+-		if (hwirq < 0)
++		if (hwirq < 0) {
++			ret = -EINVAL;
+ 			goto err;
++		}
+ 
+ 		spin_lock_init(&vdev->irqs[i].lock);
+ 
+ 		vdev->irqs[i].flags = VFIO_IRQ_INFO_EVENTFD;
+ 
+-		if (irq_get_trigger_type(hwirq) & IRQ_TYPE_LEVEL_MASK)
++		if (irq_get_trigger_type(hwirq) & IRQ_TYPE_LEVEL_MASK) {
+ 			vdev->irqs[i].flags |= VFIO_IRQ_INFO_MASKABLE
+ 						| VFIO_IRQ_INFO_AUTOMASKED;
++			handler = vfio_automasked_irq_handler;
++		}
+ 
+ 		vdev->irqs[i].count = 1;
+ 		vdev->irqs[i].hwirq = hwirq;
+ 		vdev->irqs[i].masked = false;
++		vdev->irqs[i].name = kasprintf(GFP_KERNEL,
++					       "vfio-irq[%d](%s)", hwirq,
++					       vdev->name);
++		if (!vdev->irqs[i].name) {
++			ret = -ENOMEM;
++			goto err;
++		}
++
++		ret = request_irq(hwirq, handler, IRQF_NO_AUTOEN,
++				  vdev->irqs[i].name, &vdev->irqs[i]);
++		if (ret) {
++			kfree(vdev->irqs[i].name);
++			vdev->irqs[i].name = ERR_PTR(ret);
++		}
+ 	}
+ 
+ 	vdev->num_irqs = cnt;
+ 
+ 	return 0;
+ err:
++	for (--i; i >= 0; i--) {
++		if (!IS_ERR(vdev->irqs[i].name)) {
++			free_irq(vdev->irqs[i].hwirq, &vdev->irqs[i]);
++			kfree(vdev->irqs[i].name);
++		}
++	}
+ 	kfree(vdev->irqs);
+-	return -EINVAL;
++	return ret;
+ }
+ 
+ void vfio_platform_irq_cleanup(struct vfio_platform_device *vdev)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < vdev->num_irqs; i++)
+-		vfio_set_trigger(vdev, i, -1, NULL);
++	for (i = 0; i < vdev->num_irqs; i++) {
++		vfio_virqfd_disable(&vdev->irqs[i].mask);
++		vfio_virqfd_disable(&vdev->irqs[i].unmask);
++		if (!IS_ERR(vdev->irqs[i].name)) {
++			free_irq(vdev->irqs[i].hwirq, &vdev->irqs[i]);
++			if (vdev->irqs[i].trigger)
++				eventfd_ctx_put(vdev->irqs[i].trigger);
++			kfree(vdev->irqs[i].name);
++		}
++	}
+ 
+ 	vdev->num_irqs = 0;
+ 	kfree(vdev->irqs);
+diff --git a/drivers/vfio/virqfd.c b/drivers/vfio/virqfd.c
+index 997cb5d0a657c..1cff533017abf 100644
+--- a/drivers/vfio/virqfd.c
++++ b/drivers/vfio/virqfd.c
+@@ -101,6 +101,13 @@ static void virqfd_inject(struct work_struct *work)
+ 		virqfd->thread(virqfd->opaque, virqfd->data);
+ }
+ 
++static void virqfd_flush_inject(struct work_struct *work)
++{
++	struct virqfd *virqfd = container_of(work, struct virqfd, flush_inject);
++
++	flush_work(&virqfd->inject);
++}
++
+ int vfio_virqfd_enable(void *opaque,
+ 		       int (*handler)(void *, void *),
+ 		       void (*thread)(void *, void *),
+@@ -124,6 +131,7 @@ int vfio_virqfd_enable(void *opaque,
+ 
+ 	INIT_WORK(&virqfd->shutdown, virqfd_shutdown);
+ 	INIT_WORK(&virqfd->inject, virqfd_inject);
++	INIT_WORK(&virqfd->flush_inject, virqfd_flush_inject);
+ 
+ 	irqfd = fdget(fd);
+ 	if (!irqfd.file) {
+@@ -214,6 +222,19 @@ void vfio_virqfd_disable(struct virqfd **pvirqfd)
+ }
+ EXPORT_SYMBOL_GPL(vfio_virqfd_disable);
+ 
++void vfio_virqfd_flush_thread(struct virqfd **pvirqfd)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&virqfd_lock, flags);
++	if (*pvirqfd && (*pvirqfd)->thread)
++		queue_work(vfio_irqfd_cleanup_wq, &(*pvirqfd)->flush_inject);
++	spin_unlock_irqrestore(&virqfd_lock, flags);
++
++	flush_workqueue(vfio_irqfd_cleanup_wq);
++}
++EXPORT_SYMBOL_GPL(vfio_virqfd_flush_thread);
++
+ module_init(vfio_virqfd_init);
+ module_exit(vfio_virqfd_exit);
+ 
+diff --git a/drivers/video/fbdev/core/fbmon.c b/drivers/video/fbdev/core/fbmon.c
+index 1bf82dbc9e3cf..3c29a5eb43805 100644
+--- a/drivers/video/fbdev/core/fbmon.c
++++ b/drivers/video/fbdev/core/fbmon.c
+@@ -1311,7 +1311,7 @@ int fb_get_mode(int flags, u32 val, struct fb_var_screeninfo *var, struct fb_inf
+ int fb_videomode_from_videomode(const struct videomode *vm,
+ 				struct fb_videomode *fbmode)
+ {
+-	unsigned int htotal, vtotal;
++	unsigned int htotal, vtotal, total;
+ 
+ 	fbmode->xres = vm->hactive;
+ 	fbmode->left_margin = vm->hback_porch;
+@@ -1344,8 +1344,9 @@ int fb_videomode_from_videomode(const struct videomode *vm,
+ 	vtotal = vm->vactive + vm->vfront_porch + vm->vback_porch +
+ 		 vm->vsync_len;
+ 	/* prevent division by zero */
+-	if (htotal && vtotal) {
+-		fbmode->refresh = vm->pixelclock / (htotal * vtotal);
++	total = htotal * vtotal;
++	if (total) {
++		fbmode->refresh = vm->pixelclock / total;
+ 	/* a mode must have htotal and vtotal != 0 or it is invalid */
+ 	} else {
+ 		fbmode->refresh = 0;
+diff --git a/drivers/video/fbdev/via/accel.c b/drivers/video/fbdev/via/accel.c
+index 0a1bc7a4d7853..1e04026f08091 100644
+--- a/drivers/video/fbdev/via/accel.c
++++ b/drivers/video/fbdev/via/accel.c
+@@ -115,7 +115,7 @@ static int hw_bitblt_1(void __iomem *engine, u8 op, u32 width, u32 height,
+ 
+ 	if (op != VIA_BITBLT_FILL) {
+ 		tmp = src_mem ? 0 : src_addr;
+-		if (dst_addr & 0xE0000007) {
++		if (tmp & 0xE0000007) {
+ 			printk(KERN_WARNING "hw_bitblt_1: Unsupported source "
+ 				"address %X\n", tmp);
+ 			return -EINVAL;
+@@ -260,7 +260,7 @@ static int hw_bitblt_2(void __iomem *engine, u8 op, u32 width, u32 height,
+ 		writel(tmp, engine + 0x18);
+ 
+ 		tmp = src_mem ? 0 : src_addr;
+-		if (dst_addr & 0xE0000007) {
++		if (tmp & 0xE0000007) {
+ 			printk(KERN_WARNING "hw_bitblt_2: Unsupported source "
+ 				"address %X\n", tmp);
+ 			return -EINVAL;
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index 441bc057896f5..5df51095d8e1d 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -403,13 +403,19 @@ EXPORT_SYMBOL_GPL(unregister_virtio_device);
+ int virtio_device_freeze(struct virtio_device *dev)
+ {
+ 	struct virtio_driver *drv = drv_to_virtio(dev->dev.driver);
++	int ret;
+ 
+ 	virtio_config_disable(dev);
+ 
+ 	dev->failed = dev->config->get_status(dev) & VIRTIO_CONFIG_S_FAILED;
+ 
+-	if (drv && drv->freeze)
+-		return drv->freeze(dev);
++	if (drv && drv->freeze) {
++		ret = drv->freeze(dev);
++		if (ret) {
++			virtio_config_enable(dev);
++			return ret;
++		}
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 24e39984914fe..f608663bdfd5b 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -885,8 +885,8 @@ static void shutdown_pirq(struct irq_data *data)
+ 		return;
+ 
+ 	do_mask(info, EVT_MASK_REASON_EXPLICIT);
+-	xen_evtchn_close(evtchn);
+ 	xen_irq_info_cleanup(info);
++	xen_evtchn_close(evtchn);
+ }
+ 
+ static void enable_pirq(struct irq_data *data)
+@@ -929,8 +929,6 @@ static void __unbind_from_irq(unsigned int irq)
+ 	if (VALID_EVTCHN(evtchn)) {
+ 		unsigned int cpu = cpu_from_irq(irq);
+ 
+-		xen_evtchn_close(evtchn);
+-
+ 		switch (type_from_irq(irq)) {
+ 		case IRQT_VIRQ:
+ 			per_cpu(virq_to_irq, cpu)[virq_from_irq(irq)] = -1;
+@@ -943,6 +941,7 @@ static void __unbind_from_irq(unsigned int irq)
+ 		}
+ 
+ 		xen_irq_info_cleanup(info);
++		xen_evtchn_close(evtchn);
+ 	}
+ 
+ 	xen_free_irq(irq);
+diff --git a/fs/aio.c b/fs/aio.c
+index 900ed5207540e..93b6bbf01d715 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -565,8 +565,8 @@ static int aio_setup_ring(struct kioctx *ctx, unsigned int nr_events)
+ 
+ void kiocb_set_cancel_fn(struct kiocb *iocb, kiocb_cancel_fn *cancel)
+ {
+-	struct aio_kiocb *req = container_of(iocb, struct aio_kiocb, rw);
+-	struct kioctx *ctx = req->ki_ctx;
++	struct aio_kiocb *req;
++	struct kioctx *ctx;
+ 	unsigned long flags;
+ 
+ 	/*
+@@ -576,9 +576,13 @@ void kiocb_set_cancel_fn(struct kiocb *iocb, kiocb_cancel_fn *cancel)
+ 	if (!(iocb->ki_flags & IOCB_AIO_RW))
+ 		return;
+ 
++	req = container_of(iocb, struct aio_kiocb, rw);
++
+ 	if (WARN_ON_ONCE(!list_empty(&req->ki_list)))
+ 		return;
+ 
++	ctx = req->ki_ctx;
++
+ 	spin_lock_irqsave(&ctx->ctx_lock, flags);
+ 	list_add_tail(&req->ki_list, &ctx->active_reqs);
+ 	req->ki_cancel = cancel;
+diff --git a/fs/btrfs/export.c b/fs/btrfs/export.c
+index bfa2bf44529c2..d908afa1f313c 100644
+--- a/fs/btrfs/export.c
++++ b/fs/btrfs/export.c
+@@ -161,8 +161,15 @@ struct dentry *btrfs_get_parent(struct dentry *child)
+ 	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
+ 	if (ret < 0)
+ 		goto fail;
++	if (ret == 0) {
++		/*
++		 * Key with offset of -1 found, there would have to exist an
++		 * inode with such number or a root with such id.
++		 */
++		ret = -EUCLEAN;
++		goto fail;
++	}
+ 
+-	BUG_ON(ret == 0); /* Key with offset of -1 found */
+ 	if (path->slots[0] == 0) {
+ 		ret = -ENOENT;
+ 		goto fail;
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 049b837934e5d..ab8ed187746ea 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3148,7 +3148,7 @@ static int btrfs_ioctl_defrag(struct file *file, void __user *argp)
+ {
+ 	struct inode *inode = file_inode(file);
+ 	struct btrfs_root *root = BTRFS_I(inode)->root;
+-	struct btrfs_ioctl_defrag_range_args *range;
++	struct btrfs_ioctl_defrag_range_args range = {0};
+ 	int ret;
+ 
+ 	ret = mnt_want_write_file(file);
+@@ -3180,37 +3180,28 @@ static int btrfs_ioctl_defrag(struct file *file, void __user *argp)
+ 			goto out;
+ 		}
+ 
+-		range = kzalloc(sizeof(*range), GFP_KERNEL);
+-		if (!range) {
+-			ret = -ENOMEM;
+-			goto out;
+-		}
+-
+ 		if (argp) {
+-			if (copy_from_user(range, argp,
+-					   sizeof(*range))) {
++			if (copy_from_user(&range, argp, sizeof(range))) {
+ 				ret = -EFAULT;
+-				kfree(range);
+ 				goto out;
+ 			}
+-			if (range->flags & ~BTRFS_DEFRAG_RANGE_FLAGS_SUPP) {
++			if (range.flags & ~BTRFS_DEFRAG_RANGE_FLAGS_SUPP) {
+ 				ret = -EOPNOTSUPP;
+ 				goto out;
+ 			}
+ 			/* compression requires us to start the IO */
+-			if ((range->flags & BTRFS_DEFRAG_RANGE_COMPRESS)) {
+-				range->flags |= BTRFS_DEFRAG_RANGE_START_IO;
+-				range->extent_thresh = (u32)-1;
++			if ((range.flags & BTRFS_DEFRAG_RANGE_COMPRESS)) {
++				range.flags |= BTRFS_DEFRAG_RANGE_START_IO;
++				range.extent_thresh = (u32)-1;
+ 			}
+ 		} else {
+ 			/* the rest are all set to zero by kzalloc */
+-			range->len = (u64)-1;
++			range.len = (u64)-1;
+ 		}
+ 		ret = btrfs_defrag_file(file_inode(file), file,
+-					range, BTRFS_OLDEST_GENERATION, 0);
++					&range, BTRFS_OLDEST_GENERATION, 0);
+ 		if (ret > 0)
+ 			ret = 0;
+-		kfree(range);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 0b04adfd4a4a4..0519a3557697a 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -966,7 +966,15 @@ static int iterate_inode_ref(struct btrfs_root *root, struct btrfs_path *path,
+ 					ret = PTR_ERR(start);
+ 					goto out;
+ 				}
+-				BUG_ON(start < p->buf);
++				if (unlikely(start < p->buf)) {
++					btrfs_err(root->fs_info,
++			"send: path ref buffer underflow for key (%llu %u %llu)",
++						  found_key->objectid,
++						  found_key->type,
++						  found_key->offset);
++					ret = -EINVAL;
++					goto out;
++				}
+ 			}
+ 			p->start = start;
+ 		} else {
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index eaf5cd043dace..09c23626feba4 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1471,7 +1471,7 @@ static bool contains_pending_extent(struct btrfs_device *device, u64 *start,
+ 
+ 		if (in_range(physical_start, *start, len) ||
+ 		    in_range(*start, physical_start,
+-			     physical_end - physical_start)) {
++			     physical_end + 1 - physical_start)) {
+ 			*start = physical_end + 1;
+ 			return true;
+ 		}
+@@ -3178,7 +3178,17 @@ static int btrfs_relocate_sys_chunks(struct btrfs_fs_info *fs_info)
+ 			mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+ 			goto error;
+ 		}
+-		BUG_ON(ret == 0); /* Corruption */
++		if (ret == 0) {
++			/*
++			 * On the first search we would find chunk tree with
++			 * offset -1, which is not possible. On subsequent
++			 * loops this would find an existing item on an invalid
++			 * offset (one less than the previous one, wrong
++			 * alignment and size).
++			 */
++			ret = -EUCLEAN;
++			goto error;
++		}
+ 
+ 		ret = btrfs_previous_item(chunk_root, path, key.objectid,
+ 					  key.type);
+diff --git a/fs/exec.c b/fs/exec.c
+index 2006e245b8f30..ebe9011955b9b 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -888,6 +888,7 @@ int transfer_args_to_stack(struct linux_binprm *bprm,
+ 			goto out;
+ 	}
+ 
++	bprm->exec += *sp_location - MAX_ARG_PAGES * PAGE_SIZE;
+ 	*sp_location = sp;
+ 
+ out:
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 61988b7b5be77..c1d632725c2c0 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -2591,7 +2591,10 @@ static int ext4_mb_seq_groups_show(struct seq_file *seq, void *v)
+ 	for (i = 0; i <= 13; i++)
+ 		seq_printf(seq, " %-5u", i <= blocksize_bits + 1 ?
+ 				sg.info.bb_counters[i] : 0);
+-	seq_puts(seq, " ]\n");
++	seq_puts(seq, " ]");
++	if (EXT4_MB_GRP_BBITMAP_CORRUPT(&sg.info))
++		seq_puts(seq, " Block bitmap corrupted!");
++	seq_puts(seq, "\n");
+ 
+ 	return 0;
+ }
+@@ -4162,10 +4165,16 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 			.fe_len = ac->ac_g_ex.fe_len,
+ 		};
+ 		loff_t orig_goal_end = extent_logical_end(sbi, &ex);
++		loff_t o_ex_end = extent_logical_end(sbi, &ac->ac_o_ex);
+ 
+-		/* we can't allocate as much as normalizer wants.
+-		 * so, found space must get proper lstart
+-		 * to cover original request */
++		/*
++		 * We can't allocate as much as normalizer wants, so we try
++		 * to get proper lstart to cover the original request, except
++		 * when the goal doesn't cover the original request as below:
++		 *
++		 * orig_ex:2045/2055(10), isize:8417280 -> normalized:0/2048
++		 * best_ex:0/200(200) -> adjusted: 1848/2048(200)
++		 */
+ 		BUG_ON(ac->ac_g_ex.fe_logical > ac->ac_o_ex.fe_logical);
+ 		BUG_ON(ac->ac_g_ex.fe_len < ac->ac_o_ex.fe_len);
+ 
+@@ -4177,7 +4186,7 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 		 * 1. Check if best ex can be kept at end of goal and still
+ 		 *    cover original start
+ 		 * 2. Else, check if best ex can be kept at start of goal and
+-		 *    still cover original start
++		 *    still cover original end
+ 		 * 3. Else, keep the best ex at start of original request.
+ 		 */
+ 		ex.fe_len = ac->ac_b_ex.fe_len;
+@@ -4187,7 +4196,7 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 			goto adjust_bex;
+ 
+ 		ex.fe_logical = ac->ac_g_ex.fe_logical;
+-		if (ac->ac_o_ex.fe_logical < extent_logical_end(sbi, &ex))
++		if (o_ex_end <= extent_logical_end(sbi, &ex))
+ 			goto adjust_bex;
+ 
+ 		ex.fe_logical = ac->ac_o_ex.fe_logical;
+@@ -4195,7 +4204,6 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 		ac->ac_b_ex.fe_logical = ex.fe_logical;
+ 
+ 		BUG_ON(ac->ac_o_ex.fe_logical < ac->ac_b_ex.fe_logical);
+-		BUG_ON(ac->ac_o_ex.fe_len > ac->ac_b_ex.fe_len);
+ 		BUG_ON(extent_logical_end(sbi, &ex) > orig_goal_end);
+ 	}
+ 
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 06e0eaf2ea4e1..5d45ec29e61a6 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1545,7 +1545,8 @@ static int ext4_flex_group_add(struct super_block *sb,
+ 		int gdb_num = group / EXT4_DESC_PER_BLOCK(sb);
+ 		int gdb_num_end = ((group + flex_gd->count - 1) /
+ 				   EXT4_DESC_PER_BLOCK(sb));
+-		int meta_bg = ext4_has_feature_meta_bg(sb);
++		int meta_bg = ext4_has_feature_meta_bg(sb) &&
++			      gdb_num >= le32_to_cpu(es->s_first_meta_bg);
+ 		sector_t padding_blocks = meta_bg ? 0 : sbi->s_sbh->b_blocknr -
+ 					 ext4_group_first_block_no(sb, 0);
+ 		sector_t old_gdb = 0;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index e386d67cff9d1..0149d3c2cfd78 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -6205,6 +6205,10 @@ static int ext4_write_dquot(struct dquot *dquot)
+ 	if (IS_ERR(handle))
+ 		return PTR_ERR(handle);
+ 	ret = dquot_commit(dquot);
++	if (ret < 0)
++		ext4_error_err(dquot->dq_sb, -ret,
++			       "Failed to commit dquot type %d",
++			       dquot->dq_id.type);
+ 	err = ext4_journal_stop(handle);
+ 	if (!ret)
+ 		ret = err;
+@@ -6221,6 +6225,10 @@ static int ext4_acquire_dquot(struct dquot *dquot)
+ 	if (IS_ERR(handle))
+ 		return PTR_ERR(handle);
+ 	ret = dquot_acquire(dquot);
++	if (ret < 0)
++		ext4_error_err(dquot->dq_sb, -ret,
++			      "Failed to acquire dquot type %d",
++			      dquot->dq_id.type);
+ 	err = ext4_journal_stop(handle);
+ 	if (!ret)
+ 		ret = err;
+@@ -6240,6 +6248,10 @@ static int ext4_release_dquot(struct dquot *dquot)
+ 		return PTR_ERR(handle);
+ 	}
+ 	ret = dquot_release(dquot);
++	if (ret < 0)
++		ext4_error_err(dquot->dq_sb, -ret,
++			       "Failed to release dquot type %d",
++			       dquot->dq_id.type);
+ 	err = ext4_journal_stop(handle);
+ 	if (!ret)
+ 		ret = err;
+diff --git a/fs/fat/nfs.c b/fs/fat/nfs.c
+index af191371c3529..bab63eeaf9cbc 100644
+--- a/fs/fat/nfs.c
++++ b/fs/fat/nfs.c
+@@ -130,6 +130,12 @@ fat_encode_fh_nostale(struct inode *inode, __u32 *fh, int *lenp,
+ 		fid->parent_i_gen = parent->i_generation;
+ 		type = FILEID_FAT_WITH_PARENT;
+ 		*lenp = FAT_FID_SIZE_WITH_PARENT;
++	} else {
++		/*
++		 * We need to initialize this field because the fh is actually
++		 * 12 bytes long
++		 */
++		fid->parent_i_pos_hi = 0;
+ 	}
+ 
+ 	return type;
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index b0c701c007c68..d131f34cd3e13 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -451,6 +451,10 @@ int fuse_lookup_name(struct super_block *sb, u64 nodeid, const struct qstr *name
+ 		goto out_put_forget;
+ 	if (fuse_invalid_attr(&outarg->attr))
+ 		goto out_put_forget;
++	if (outarg->nodeid == FUSE_ROOT_ID && outarg->generation != 0) {
++		pr_warn_once("root generation should be zero\n");
++		outarg->generation = 0;
++	}
+ 
+ 	*inode = fuse_iget(sb, outarg->nodeid, outarg->generation,
+ 			   &outarg->attr, entry_attr_timeout(outarg),
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index ceaa6868386e6..33eb5fefc06b4 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -873,7 +873,6 @@ static inline bool fuse_stale_inode(const struct inode *inode, int generation,
+ 
+ static inline void fuse_make_bad(struct inode *inode)
+ {
+-	remove_inode_hash(inode);
+ 	set_bit(FUSE_I_BAD, &get_fuse_inode(inode)->state);
+ }
+ 
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 9ea175ff9c8e6..4a7ebccd359ee 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -352,8 +352,11 @@ struct inode *fuse_iget(struct super_block *sb, u64 nodeid,
+ 	} else if (fuse_stale_inode(inode, generation, attr)) {
+ 		/* nodeid was reused, any I/O on the old inode should fail */
+ 		fuse_make_bad(inode);
+-		iput(inode);
+-		goto retry;
++		if (inode != d_inode(sb->s_root)) {
++			remove_inode_hash(inode);
++			iput(inode);
++			goto retry;
++		}
+ 	}
+ done:
+ 	fi = get_fuse_inode(inode);
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index f62b5a5015668..4c763f573faf3 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -907,8 +907,22 @@ static int isofs_fill_super(struct super_block *s, void *data, int silent)
+ 	 * we then decide whether to use the Joliet descriptor.
+ 	 */
+ 	inode = isofs_iget(s, sbi->s_firstdatazone, 0);
+-	if (IS_ERR(inode))
+-		goto out_no_root;
++
++	/*
++	 * Fix for broken CDs with a corrupt root inode but a correct Joliet
++	 * root directory.
++	 */
++	if (IS_ERR(inode)) {
++		if (joliet_level && sbi->s_firstdatazone != first_data_zone) {
++			printk(KERN_NOTICE
++			       "ISOFS: root inode is unusable. "
++			       "Disabling Rock Ridge and switching to Joliet.");
++			sbi->s_rock = 0;
++			inode = NULL;
++		} else {
++			goto out_no_root;
++		}
++	}
+ 
+ 	/*
+ 	 * Fix for broken CDs with Rock Ridge and empty ISO root directory but
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 5d86ffa72ceab..499519f0f6ecd 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -678,10 +678,17 @@ static void nfs_direct_commit_schedule(struct nfs_direct_req *dreq)
+ 	LIST_HEAD(mds_list);
+ 
+ 	nfs_init_cinfo_from_dreq(&cinfo, dreq);
++	nfs_commit_begin(cinfo.mds);
+ 	nfs_scan_commit(dreq->inode, &mds_list, &cinfo);
+ 	res = nfs_generic_commit_list(dreq->inode, &mds_list, 0, &cinfo);
+-	if (res < 0) /* res == -ENOMEM */
+-		nfs_direct_write_reschedule(dreq);
++	if (res < 0) { /* res == -ENOMEM */
++		spin_lock(&dreq->lock);
++		if (dreq->flags == 0)
++			dreq->flags = NFS_ODIRECT_RESCHED_WRITES;
++		spin_unlock(&dreq->lock);
++	}
++	if (nfs_commit_end(cinfo.mds))
++		nfs_direct_write_complete(dreq);
+ }
+ 
+ static void nfs_direct_write_clear_reqs(struct nfs_direct_req *dreq)
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index d3cd099ffb6e1..4cf0606919794 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -1637,7 +1637,7 @@ static int wait_on_commit(struct nfs_mds_commit_info *cinfo)
+ 				       !atomic_read(&cinfo->rpcs_out));
+ }
+ 
+-static void nfs_commit_begin(struct nfs_mds_commit_info *cinfo)
++void nfs_commit_begin(struct nfs_mds_commit_info *cinfo)
+ {
+ 	atomic_inc(&cinfo->rpcs_out);
+ }
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index 65cd599cb2ab6..4905b7cd7bf33 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -724,7 +724,7 @@ static int nilfs_btree_lookup_contig(const struct nilfs_bmap *btree,
+ 		dat = nilfs_bmap_get_dat(btree);
+ 		ret = nilfs_dat_translate(dat, ptr, &blocknr);
+ 		if (ret < 0)
+-			goto out;
++			goto dat_error;
+ 		ptr = blocknr;
+ 	}
+ 	cnt = 1;
+@@ -743,7 +743,7 @@ static int nilfs_btree_lookup_contig(const struct nilfs_bmap *btree,
+ 			if (dat) {
+ 				ret = nilfs_dat_translate(dat, ptr2, &blocknr);
+ 				if (ret < 0)
+-					goto out;
++					goto dat_error;
+ 				ptr2 = blocknr;
+ 			}
+ 			if (ptr2 != ptr + cnt || ++cnt == maxblocks)
+@@ -782,6 +782,11 @@ static int nilfs_btree_lookup_contig(const struct nilfs_bmap *btree,
+  out:
+ 	nilfs_btree_free_path(path);
+ 	return ret;
++
++ dat_error:
++	if (ret == -ENOENT)
++		ret = -EINVAL;  /* Notify bmap layer of metadata corruption */
++	goto out;
+ }
+ 
+ static void nilfs_btree_promote_key(struct nilfs_bmap *btree,
+diff --git a/fs/nilfs2/direct.c b/fs/nilfs2/direct.c
+index f353101955e3b..7faf8c285d6c9 100644
+--- a/fs/nilfs2/direct.c
++++ b/fs/nilfs2/direct.c
+@@ -66,7 +66,7 @@ static int nilfs_direct_lookup_contig(const struct nilfs_bmap *direct,
+ 		dat = nilfs_bmap_get_dat(direct);
+ 		ret = nilfs_dat_translate(dat, ptr, &blocknr);
+ 		if (ret < 0)
+-			return ret;
++			goto dat_error;
+ 		ptr = blocknr;
+ 	}
+ 
+@@ -79,7 +79,7 @@ static int nilfs_direct_lookup_contig(const struct nilfs_bmap *direct,
+ 		if (dat) {
+ 			ret = nilfs_dat_translate(dat, ptr2, &blocknr);
+ 			if (ret < 0)
+-				return ret;
++				goto dat_error;
+ 			ptr2 = blocknr;
+ 		}
+ 		if (ptr2 != ptr + cnt)
+@@ -87,6 +87,11 @@ static int nilfs_direct_lookup_contig(const struct nilfs_bmap *direct,
+ 	}
+ 	*ptrp = ptr;
+ 	return cnt;
++
++ dat_error:
++	if (ret == -ENOENT)
++		ret = -EINVAL;  /* Notify bmap layer of metadata corruption */
++	return ret;
+ }
+ 
+ static __u64
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index d144e08a9003a..06f4deb550c9f 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -112,7 +112,7 @@ int nilfs_get_block(struct inode *inode, sector_t blkoff,
+ 					   "%s (ino=%lu): a race condition while inserting a data block at offset=%llu",
+ 					   __func__, inode->i_ino,
+ 					   (unsigned long long)blkoff);
+-				err = 0;
++				err = -EAGAIN;
+ 			}
+ 			nilfs_transaction_abort(inode->i_sb);
+ 			goto out;
+diff --git a/fs/pstore/zone.c b/fs/pstore/zone.c
+index b50fc33f2ab29..2426fb6794fd3 100644
+--- a/fs/pstore/zone.c
++++ b/fs/pstore/zone.c
+@@ -973,6 +973,8 @@ static ssize_t psz_kmsg_read(struct pstore_zone *zone,
+ 		char *buf = kasprintf(GFP_KERNEL, "%s: Total %d times\n",
+ 				      kmsg_dump_reason_str(record->reason),
+ 				      record->count);
++		if (!buf)
++			return -ENOMEM;
+ 		hlen = strlen(buf);
+ 		record->buf = krealloc(buf, hlen + size, GFP_KERNEL);
+ 		if (!record->buf) {
+diff --git a/fs/sysv/itree.c b/fs/sysv/itree.c
+index e3d1673b8ec97..ef9bcfeec21ad 100644
+--- a/fs/sysv/itree.c
++++ b/fs/sysv/itree.c
+@@ -82,9 +82,6 @@ static inline sysv_zone_t *block_end(struct buffer_head *bh)
+ 	return (sysv_zone_t*)((char*)bh->b_data + bh->b_size);
+ }
+ 
+-/*
+- * Requires read_lock(&pointers_lock) or write_lock(&pointers_lock)
+- */
+ static Indirect *get_branch(struct inode *inode,
+ 			    int depth,
+ 			    int offsets[],
+@@ -104,15 +101,18 @@ static Indirect *get_branch(struct inode *inode,
+ 		bh = sb_bread(sb, block);
+ 		if (!bh)
+ 			goto failure;
++		read_lock(&pointers_lock);
+ 		if (!verify_chain(chain, p))
+ 			goto changed;
+ 		add_chain(++p, bh, (sysv_zone_t*)bh->b_data + *++offsets);
++		read_unlock(&pointers_lock);
+ 		if (!p->key)
+ 			goto no_block;
+ 	}
+ 	return NULL;
+ 
+ changed:
++	read_unlock(&pointers_lock);
+ 	brelse(bh);
+ 	*err = -EAGAIN;
+ 	goto no_block;
+@@ -218,9 +218,7 @@ static int get_block(struct inode *inode, sector_t iblock, struct buffer_head *b
+ 		goto out;
+ 
+ reread:
+-	read_lock(&pointers_lock);
+ 	partial = get_branch(inode, depth, offsets, chain, &err);
+-	read_unlock(&pointers_lock);
+ 
+ 	/* Simplest case - block found, no allocation needed */
+ 	if (!partial) {
+@@ -290,9 +288,9 @@ static Indirect *find_shared(struct inode *inode,
+ 	*top = 0;
+ 	for (k = depth; k > 1 && !offsets[k-1]; k--)
+ 		;
++	partial = get_branch(inode, k, offsets, chain, &err);
+ 
+ 	write_lock(&pointers_lock);
+-	partial = get_branch(inode, k, offsets, chain, &err);
+ 	if (!partial)
+ 		partial = chain + k-1;
+ 	/*
+diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
+index 19fdcda045890..18df7a82517fa 100644
+--- a/fs/ubifs/file.c
++++ b/fs/ubifs/file.c
+@@ -262,9 +262,6 @@ static int write_begin_slow(struct address_space *mapping,
+ 				return err;
+ 			}
+ 		}
+-
+-		SetPageUptodate(page);
+-		ClearPageError(page);
+ 	}
+ 
+ 	if (PagePrivate(page))
+@@ -463,9 +460,6 @@ static int ubifs_write_begin(struct file *file, struct address_space *mapping,
+ 				return err;
+ 			}
+ 		}
+-
+-		SetPageUptodate(page);
+-		ClearPageError(page);
+ 	}
+ 
+ 	err = allocate_budget(c, page, ui, appending);
+@@ -475,10 +469,8 @@ static int ubifs_write_begin(struct file *file, struct address_space *mapping,
+ 		 * If we skipped reading the page because we were going to
+ 		 * write all of it, then it is not up to date.
+ 		 */
+-		if (skipped_read) {
++		if (skipped_read)
+ 			ClearPageChecked(page);
+-			ClearPageUptodate(page);
+-		}
+ 		/*
+ 		 * Budgeting failed which means it would have to force
+ 		 * write-back but didn't, because we set the @fast flag in the
+@@ -569,6 +561,9 @@ static int ubifs_write_end(struct file *file, struct address_space *mapping,
+ 		goto out;
+ 	}
+ 
++	if (len == PAGE_SIZE)
++		SetPageUptodate(page);
++
+ 	if (!PagePrivate(page)) {
+ 		attach_page_private(page, (void *)1);
+ 		atomic_long_inc(&c->dirty_pg_cnt);
+diff --git a/fs/vboxsf/super.c b/fs/vboxsf/super.c
+index c578e772cbd58..f11bcbac77278 100644
+--- a/fs/vboxsf/super.c
++++ b/fs/vboxsf/super.c
+@@ -151,7 +151,7 @@ static int vboxsf_fill_super(struct super_block *sb, struct fs_context *fc)
+ 		if (!sbi->nls) {
+ 			vbg_err("vboxsf: Count not load '%s' nls\n", nls_name);
+ 			err = -EINVAL;
+-			goto fail_free;
++			goto fail_destroy_idr;
+ 		}
+ 	}
+ 
+@@ -224,6 +224,7 @@ static int vboxsf_fill_super(struct super_block *sb, struct fs_context *fc)
+ 		ida_simple_remove(&vboxsf_bdi_ida, sbi->bdi_id);
+ 	if (sbi->nls)
+ 		unload_nls(sbi->nls);
++fail_destroy_idr:
+ 	idr_destroy(&sbi->ino_idr);
+ 	kfree(sbi);
+ 	return err;
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 583824f111079..a47e1aebaff24 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -345,6 +345,7 @@ struct queue_limits {
+ 	unsigned int		max_zone_append_sectors;
+ 	unsigned int		discard_granularity;
+ 	unsigned int		discard_alignment;
++	unsigned int		zone_write_granularity;
+ 
+ 	unsigned short		max_segments;
+ 	unsigned short		max_integrity_segments;
+@@ -1169,6 +1170,8 @@ extern void blk_queue_logical_block_size(struct request_queue *, unsigned int);
+ extern void blk_queue_max_zone_append_sectors(struct request_queue *q,
+ 		unsigned int max_zone_append_sectors);
+ extern void blk_queue_physical_block_size(struct request_queue *, unsigned int);
++void blk_queue_zone_write_granularity(struct request_queue *q,
++				      unsigned int size);
+ extern void blk_queue_alignment_offset(struct request_queue *q,
+ 				       unsigned int alignment);
+ void blk_queue_update_readahead(struct request_queue *q);
+@@ -1480,6 +1483,18 @@ static inline int bdev_io_opt(struct block_device *bdev)
+ 	return queue_io_opt(bdev_get_queue(bdev));
+ }
+ 
++static inline unsigned int
++queue_zone_write_granularity(const struct request_queue *q)
++{
++	return q->limits.zone_write_granularity;
++}
++
++static inline unsigned int
++bdev_zone_write_granularity(struct block_device *bdev)
++{
++	return queue_zone_write_granularity(bdev_get_queue(bdev));
++}
++
+ static inline int queue_alignment_offset(const struct request_queue *q)
+ {
+ 	if (q->limits.misaligned)
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 6d7ab016127c9..2099226d86238 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -74,6 +74,8 @@ extern ssize_t cpu_show_spec_rstack_overflow(struct device *dev,
+ 					     struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_gds(struct device *dev,
+ 			    struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_reg_file_data_sampling(struct device *dev,
++					       struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+diff --git a/include/linux/device.h b/include/linux/device.h
+index 6394c4b70a090..9c9ce573c737f 100644
+--- a/include/linux/device.h
++++ b/include/linux/device.h
+@@ -973,6 +973,7 @@ void device_link_del(struct device_link *link);
+ void device_link_remove(void *consumer, struct device *supplier);
+ void device_links_supplier_sync_state_pause(void);
+ void device_links_supplier_sync_state_resume(void);
++void device_link_wait_removal(void);
+ 
+ extern __printf(3, 4)
+ int dev_err_probe(const struct device *dev, int err, const char *fmt, ...);
+diff --git a/include/linux/gfp.h b/include/linux/gfp.h
+index c603237e006ce..973f4143f9f6d 100644
+--- a/include/linux/gfp.h
++++ b/include/linux/gfp.h
+@@ -623,6 +623,15 @@ static inline bool pm_suspended_storage(void)
+ }
+ #endif /* CONFIG_PM_SLEEP */
+ 
++/*
++ * Check if the gfp flags allow compaction - GFP_NOIO is a really
++ * tricky context because the migration might require IO.
++ */
++static inline bool gfp_compaction_allowed(gfp_t gfp_mask)
++{
++	return IS_ENABLED(CONFIG_COMPACTION) && (gfp_mask & __GFP_IO);
++}
++
+ #ifdef CONFIG_CONTIG_ALLOC
+ /* The below functions must be run on a range from a single zone. */
+ extern int alloc_contig_range(unsigned long start, unsigned long end,
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index 2aaf450c8d800..b606a203de88c 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -164,8 +164,28 @@ struct hv_ring_buffer {
+ 	u8 buffer[];
+ } __packed;
+ 
++
++/*
++ * If the requested ring buffer size is at least 8 times the size of the
++ * header, steal space from the ring buffer for the header. Otherwise, add
++ * space for the header so that is doesn't take too much of the ring buffer
++ * space.
++ *
++ * The factor of 8 is somewhat arbitrary. The goal is to prevent adding a
++ * relatively small header (4 Kbytes on x86) to a large-ish power-of-2 ring
++ * buffer size (such as 128 Kbytes) and so end up making a nearly twice as
++ * large allocation that will be almost half wasted. As a contrasting example,
++ * on ARM64 with 64 Kbyte page size, we don't want to take 64 Kbytes for the
++ * header from a 128 Kbyte allocation, leaving only 64 Kbytes for the ring.
++ * In this latter case, we must add 64 Kbytes for the header and not worry
++ * about what's wasted.
++ */
++#define VMBUS_HEADER_ADJ(payload_sz) \
++	((payload_sz) >=  8 * sizeof(struct hv_ring_buffer) ? \
++	0 : sizeof(struct hv_ring_buffer))
++
+ /* Calculate the proper size of a ringbuffer, it must be page-aligned */
+-#define VMBUS_RING_SIZE(payload_sz) PAGE_ALIGN(sizeof(struct hv_ring_buffer) + \
++#define VMBUS_RING_SIZE(payload_sz) PAGE_ALIGN(VMBUS_HEADER_ADJ(payload_sz) + \
+ 					       (payload_sz))
+ 
+ struct hv_ring_buffer_info {
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index e39342945a80b..7488864589a7a 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -553,6 +553,7 @@ extern int nfs_wb_page_cancel(struct inode *inode, struct page* page);
+ extern int  nfs_commit_inode(struct inode *, int);
+ extern struct nfs_commit_data *nfs_commitdata_alloc(void);
+ extern void nfs_commit_free(struct nfs_commit_data *data);
++void nfs_commit_begin(struct nfs_mds_commit_info *cinfo);
+ bool nfs_commit_end(struct nfs_mds_commit_info *cinfo);
+ 
+ static inline int
+diff --git a/include/linux/objtool.h b/include/linux/objtool.h
+index d17fa7f4001d7..0f13875acc5ed 100644
+--- a/include/linux/objtool.h
++++ b/include/linux/objtool.h
+@@ -141,6 +141,12 @@ struct unwind_hint {
+ 	.popsection
+ .endm
+ 
++.macro STACK_FRAME_NON_STANDARD func:req
++	.pushsection .discard.func_stack_frame_non_standard, "aw"
++		.long \func - .
++	.popsection
++.endm
++
+ #endif /* __ASSEMBLY__ */
+ 
+ #else /* !CONFIG_STACK_VALIDATION */
+@@ -158,6 +164,8 @@ struct unwind_hint {
+ .endm
+ .macro ANNOTATE_NOENDBR
+ .endm
++.macro STACK_FRAME_NON_STANDARD func:req
++.endm
+ #endif
+ 
+ #endif /* CONFIG_STACK_VALIDATION */
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index bf46453475e31..5b24a6fbfa0be 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -306,6 +306,7 @@ struct pcie_link_state;
+ struct pci_vpd;
+ struct pci_sriov;
+ struct pci_p2pdma;
++struct rcec_ea;
+ 
+ /* The pci_dev structure describes PCI devices */
+ struct pci_dev {
+@@ -329,6 +330,10 @@ struct pci_dev {
+ 	u16		aer_cap;	/* AER capability offset */
+ 	struct aer_stats *aer_stats;	/* AER stats for this device */
+ #endif
++#ifdef CONFIG_PCIEPORTBUS
++	struct rcec_ea	*rcec_ea;	/* RCEC cached endpoint association */
++#endif
++	u32		devcap;		/* PCIe Device Capabilities */
+ 	u8		pcie_cap;	/* PCIe capability offset */
+ 	u8		msi_cap;	/* MSI capability offset */
+ 	u8		msix_cap;	/* MSI-X capability offset */
+@@ -449,6 +454,7 @@ struct pci_dev {
+ 	unsigned int	link_active_reporting:1;/* Device capable of reporting link active */
+ 	unsigned int	no_vf_scan:1;		/* Don't scan for VFs after IOV enablement */
+ 	unsigned int	no_command_memory:1;	/* No PCI_COMMAND_MEMORY */
++	unsigned int	rom_bar_overlap:1;	/* ROM BAR disable broken */
+ 	pci_dev_flags_t dev_flags;
+ 	atomic_t	enable_cnt;	/* pci_enable_device has been called */
+ 
+diff --git a/include/linux/phy/tegra/xusb.h b/include/linux/phy/tegra/xusb.h
+index 71d956935405f..86ccd90e425c7 100644
+--- a/include/linux/phy/tegra/xusb.h
++++ b/include/linux/phy/tegra/xusb.h
+@@ -23,4 +23,6 @@ int tegra_xusb_padctl_set_vbus_override(struct tegra_xusb_padctl *padctl,
+ int tegra_phy_xusb_utmi_port_reset(struct phy *phy);
+ int tegra_xusb_padctl_get_usb3_companion(struct tegra_xusb_padctl *padctl,
+ 					 unsigned int port);
++int tegra_xusb_padctl_get_port_number(struct phy *phy);
++
+ #endif /* PHY_TEGRA_XUSB_H */
+diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h
+index 256dff36cf720..0527a4bc9a36f 100644
+--- a/include/linux/sunrpc/sched.h
++++ b/include/linux/sunrpc/sched.h
+@@ -197,7 +197,7 @@ struct rpc_wait_queue {
+ 	unsigned char		maxpriority;		/* maximum priority (0 if queue is not a priority queue) */
+ 	unsigned char		priority;		/* current priority */
+ 	unsigned char		nr;			/* # tasks remaining for cookie */
+-	unsigned short		qlen;			/* total # tasks waiting in queue */
++	unsigned int		qlen;			/* total # tasks waiting in queue */
+ 	struct rpc_timer	timer_list;
+ #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) || IS_ENABLED(CONFIG_TRACEPOINTS)
+ 	const char *		name;
+diff --git a/include/linux/timer.h b/include/linux/timer.h
+index d10bc7e73b41e..a3d04c4f1263a 100644
+--- a/include/linux/timer.h
++++ b/include/linux/timer.h
+@@ -183,12 +183,20 @@ extern int timer_reduce(struct timer_list *timer, unsigned long expires);
+ extern void add_timer(struct timer_list *timer);
+ 
+ extern int try_to_del_timer_sync(struct timer_list *timer);
++extern int timer_delete_sync(struct timer_list *timer);
+ 
+-#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
+-  extern int del_timer_sync(struct timer_list *timer);
+-#else
+-# define del_timer_sync(t)		del_timer(t)
+-#endif
++/**
++ * del_timer_sync - Delete a pending timer and wait for a running callback
++ * @timer:	The timer to be deleted
++ *
++ * See timer_delete_sync() for detailed explanation.
++ *
++ * Do not use in new code. Use timer_delete_sync() instead.
++ */
++static inline int del_timer_sync(struct timer_list *timer)
++{
++	return timer_delete_sync(timer);
++}
+ 
+ #define del_singleshot_timer_sync(t) del_timer_sync(t)
+ 
+diff --git a/include/linux/udp.h b/include/linux/udp.h
+index ae58ff3b6b5b8..a220880019d6b 100644
+--- a/include/linux/udp.h
++++ b/include/linux/udp.h
+@@ -131,6 +131,24 @@ static inline void udp_cmsg_recv(struct msghdr *msg, struct sock *sk,
+ 	}
+ }
+ 
++DECLARE_STATIC_KEY_FALSE(udp_encap_needed_key);
++#if IS_ENABLED(CONFIG_IPV6)
++DECLARE_STATIC_KEY_FALSE(udpv6_encap_needed_key);
++#endif
++
++static inline bool udp_encap_needed(void)
++{
++	if (static_branch_unlikely(&udp_encap_needed_key))
++		return true;
++
++#if IS_ENABLED(CONFIG_IPV6)
++	if (static_branch_unlikely(&udpv6_encap_needed_key))
++		return true;
++#endif
++
++	return false;
++}
++
+ static inline bool udp_unexpected_gso(struct sock *sk, struct sk_buff *skb)
+ {
+ 	if (!skb_is_gso(skb))
+@@ -142,6 +160,16 @@ static inline bool udp_unexpected_gso(struct sock *sk, struct sk_buff *skb)
+ 	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST && !udp_sk(sk)->accept_udp_fraglist)
+ 		return true;
+ 
++	/* GSO packets lacking the SKB_GSO_UDP_TUNNEL/_CSUM bits might still
++	 * land in a tunnel as the socket check in udp_gro_receive cannot be
++	 * foolproof.
++	 */
++	if (udp_encap_needed() &&
++	    READ_ONCE(udp_sk(sk)->encap_rcv) &&
++	    !(skb_shinfo(skb)->gso_type &
++	      (SKB_GSO_UDP_TUNNEL | SKB_GSO_UDP_TUNNEL_CSUM)))
++		return true;
++
+ 	return false;
+ }
+ 
+diff --git a/include/linux/vfio.h b/include/linux/vfio.h
+index f479c5d7f2c37..56d18912c41af 100644
+--- a/include/linux/vfio.h
++++ b/include/linux/vfio.h
+@@ -221,6 +221,7 @@ struct virqfd {
+ 	wait_queue_entry_t		wait;
+ 	poll_table		pt;
+ 	struct work_struct	shutdown;
++	struct work_struct	flush_inject;
+ 	struct virqfd		**pvirqfd;
+ };
+ 
+@@ -229,5 +230,6 @@ extern int vfio_virqfd_enable(void *opaque,
+ 			      void (*thread)(void *, void *),
+ 			      void *data, struct virqfd **pvirqfd, int fd);
+ extern void vfio_virqfd_disable(struct virqfd **pvirqfd);
++void vfio_virqfd_flush_thread(struct virqfd **pvirqfd);
+ 
+ #endif /* VFIO_H */
+diff --git a/include/net/cfg802154.h b/include/net/cfg802154.h
+index 6ed07844eb244..5290781abba3d 100644
+--- a/include/net/cfg802154.h
++++ b/include/net/cfg802154.h
+@@ -257,6 +257,7 @@ struct ieee802154_llsec_key {
+ 
+ struct ieee802154_llsec_key_entry {
+ 	struct list_head list;
++	struct rcu_head rcu;
+ 
+ 	struct ieee802154_llsec_key_id id;
+ 	struct ieee802154_llsec_key *key;
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index 568121fa0965c..f5967805c33fd 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -172,6 +172,7 @@ void inet_csk_init_xmit_timers(struct sock *sk,
+ 			       void (*delack_handler)(struct timer_list *),
+ 			       void (*keepalive_handler)(struct timer_list *));
+ void inet_csk_clear_xmit_timers(struct sock *sk);
++void inet_csk_clear_xmit_timers_sync(struct sock *sk);
+ 
+ static inline void inet_csk_schedule_ack(struct sock *sk)
+ {
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 87ee284ea9cb3..8bcc96bf291c3 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1681,6 +1681,13 @@ static inline void sock_owned_by_me(const struct sock *sk)
+ #endif
+ }
+ 
++static inline void sock_not_owned_by_me(const struct sock *sk)
++{
++#ifdef CONFIG_LOCKDEP
++	WARN_ON_ONCE(lockdep_sock_is_held(sk) && debug_locks);
++#endif
++}
++
+ static inline bool sock_owned_by_user(const struct sock *sk)
+ {
+ 	sock_owned_by_me(sk);
+diff --git a/include/soc/fsl/qman.h b/include/soc/fsl/qman.h
+index 9f484113cfda7..3cecbfdb0f8c2 100644
+--- a/include/soc/fsl/qman.h
++++ b/include/soc/fsl/qman.h
+@@ -1170,6 +1170,15 @@ int qman_delete_cgr(struct qman_cgr *cgr);
+  */
+ void qman_delete_cgr_safe(struct qman_cgr *cgr);
+ 
++/**
++ * qman_update_cgr_safe - Modifies a congestion group object from any CPU
++ * @cgr: the 'cgr' object to modify
++ * @opts: state of the CGR settings
++ *
++ * This will select the proper CPU and modify the CGR settings.
++ */
++int qman_update_cgr_safe(struct qman_cgr *cgr, struct qm_mcc_initcgr *opts);
++
+ /**
+  * qman_query_cgr_congested - Queries CGR's congestion status
+  * @cgr: the 'cgr' object to query
+diff --git a/include/uapi/linux/input-event-codes.h b/include/uapi/linux/input-event-codes.h
+index 7989d9483ea75..bed20a89c14c1 100644
+--- a/include/uapi/linux/input-event-codes.h
++++ b/include/uapi/linux/input-event-codes.h
+@@ -602,6 +602,7 @@
+ 
+ #define KEY_ALS_TOGGLE		0x230	/* Ambient light sensor */
+ #define KEY_ROTATE_LOCK_TOGGLE	0x231	/* Display rotation lock */
++#define KEY_REFRESH_RATE_TOGGLE	0x232	/* Display refresh rate toggle */
+ 
+ #define KEY_BUTTONCONFIG		0x240	/* AL Button Configuration */
+ #define KEY_TASKMANAGER		0x241	/* AL Task/Project Manager */
+diff --git a/init/initramfs.c b/init/initramfs.c
+index 55b74d7e52607..ff09460727237 100644
+--- a/init/initramfs.c
++++ b/init/initramfs.c
+@@ -589,7 +589,7 @@ static void __init populate_initrd_image(char *err)
+ 
+ 	printk(KERN_INFO "rootfs image is not initramfs (%s); looks like an initrd\n",
+ 			err);
+-	file = filp_open("/initrd.image", O_WRONLY | O_CREAT, 0700);
++	file = filp_open("/initrd.image", O_WRONLY|O_CREAT|O_LARGEFILE, 0700);
+ 	if (IS_ERR(file))
+ 		return;
+ 
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index fc60396c90396..93f9ecedc59f6 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -8247,7 +8247,7 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ 	}
+ 
+ 	io_rsrc_node_switch(ctx, NULL);
+-	return ret;
++	return 0;
+ out_fput:
+ 	for (i = 0; i < ctx->nr_user_files; i++) {
+ 		file = io_file_from_index(ctx, i);
+diff --git a/kernel/bounds.c b/kernel/bounds.c
+index 9795d75b09b23..a94e3769347ee 100644
+--- a/kernel/bounds.c
++++ b/kernel/bounds.c
+@@ -19,7 +19,7 @@ int main(void)
+ 	DEFINE(NR_PAGEFLAGS, __NR_PAGEFLAGS);
+ 	DEFINE(MAX_NR_ZONES, __MAX_NR_ZONES);
+ #ifdef CONFIG_SMP
+-	DEFINE(NR_CPUS_BITS, ilog2(CONFIG_NR_CPUS));
++	DEFINE(NR_CPUS_BITS, bits_per(CONFIG_NR_CPUS));
+ #endif
+ 	DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t));
+ 	/* End of constants */
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index fce2345f600f2..25f8a8716e88d 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3941,6 +3941,11 @@ static int check_stack_access_within_bounds(
+ 	err = check_stack_slot_within_bounds(min_off, state, type);
+ 	if (!err && max_off > 0)
+ 		err = -EINVAL; /* out of stack access into non-negative offsets */
++	if (!err && access_size < 0)
++		/* access_size should not be negative (or overflow an int); others checks
++		 * along the way should have prevented such an access.
++		 */
++		err = -EFAULT; /* invalid negative access size; integer overflow? */
+ 
+ 	if (err) {
+ 		if (tnum_is_const(reg->var_off)) {
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index bd569cf235699..e0b47bed86750 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6890,9 +6890,16 @@ static void perf_output_read_group(struct perf_output_handle *handle,
+ {
+ 	struct perf_event *leader = event->group_leader, *sub;
+ 	u64 read_format = event->attr.read_format;
++	unsigned long flags;
+ 	u64 values[6];
+ 	int n = 0;
+ 
++	/*
++	 * Disabling interrupts avoids all counter scheduling
++	 * (context switches, timer based rotation and IPIs).
++	 */
++	local_irq_save(flags);
++
+ 	values[n++] = 1 + leader->nr_siblings;
+ 
+ 	if (read_format & PERF_FORMAT_TOTAL_TIME_ENABLED)
+@@ -6928,6 +6935,8 @@ static void perf_output_read_group(struct perf_output_handle *handle,
+ 
+ 		__output_copy(handle, values, n * sizeof(u64));
+ 	}
++
++	local_irq_restore(flags);
+ }
+ 
+ #define PERF_FORMAT_TOTAL_TIMES (PERF_FORMAT_TOTAL_TIME_ENABLED|\
+diff --git a/kernel/panic.c b/kernel/panic.c
+index bc39e2b27d315..30d8da0d43d8f 100644
+--- a/kernel/panic.c
++++ b/kernel/panic.c
+@@ -427,6 +427,14 @@ void panic(const char *fmt, ...)
+ 
+ 	/* Do not scroll important messages printed above */
+ 	suppress_printk = 1;
++
++	/*
++	 * The final messages may not have been printed if in a context that
++	 * defers printing (such as NMI) and irq_work is not available.
++	 * Explicitly flush the kernel log buffer one last time.
++	 */
++	console_flush_on_panic(CONSOLE_FLUSH_PENDING);
++
+ 	local_irq_enable();
+ 	for (i = 0; ; i += PANIC_TIMER_STEP) {
+ 		touch_softlockup_watchdog();
+diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
+index 4aa4d5d3947f1..14e981c0588ad 100644
+--- a/kernel/power/suspend.c
++++ b/kernel/power/suspend.c
+@@ -187,6 +187,7 @@ static int __init mem_sleep_default_setup(char *str)
+ 		if (mem_sleep_labels[state] &&
+ 		    !strcmp(str, mem_sleep_labels[state])) {
+ 			mem_sleep_default = state;
++			mem_sleep_current = state;
+ 			break;
+ 		}
+ 
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 17a310dcb6d96..a8af93cbc2936 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -1866,6 +1866,12 @@ static int console_trylock_spinning(void)
+ 	 */
+ 	mutex_acquire(&console_lock_dep_map, 0, 1, _THIS_IP_);
+ 
++	/*
++	 * Update @console_may_schedule for trylock because the previous
++	 * owner may have been schedulable.
++	 */
++	console_may_schedule = 0;
++
+ 	return 1;
+ }
+ 
+@@ -2684,6 +2690,21 @@ static int __init keep_bootcon_setup(char *str)
+ 
+ early_param("keep_bootcon", keep_bootcon_setup);
+ 
++static int console_call_setup(struct console *newcon, char *options)
++{
++	int err;
++
++	if (!newcon->setup)
++		return 0;
++
++	/* Synchronize with possible boot console. */
++	console_lock();
++	err = newcon->setup(newcon, options);
++	console_unlock();
++
++	return err;
++}
++
+ /*
+  * This is called by register_console() to try to match
+  * the newly registered console with any of the ones selected
+@@ -2693,7 +2714,8 @@ early_param("keep_bootcon", keep_bootcon_setup);
+  * Care need to be taken with consoles that are statically
+  * enabled such as netconsole
+  */
+-static int try_enable_new_console(struct console *newcon, bool user_specified)
++static int try_enable_preferred_console(struct console *newcon,
++					bool user_specified)
+ {
+ 	struct console_cmdline *c;
+ 	int i, err;
+@@ -2718,8 +2740,8 @@ static int try_enable_new_console(struct console *newcon, bool user_specified)
+ 			if (_braille_register_console(newcon, c))
+ 				return 0;
+ 
+-			if (newcon->setup &&
+-			    (err = newcon->setup(newcon, c->options)) != 0)
++			err = console_call_setup(newcon, c->options);
++			if (err)
+ 				return err;
+ 		}
+ 		newcon->flags |= CON_ENABLED;
+@@ -2741,6 +2763,23 @@ static int try_enable_new_console(struct console *newcon, bool user_specified)
+ 	return -ENOENT;
+ }
+ 
++/* Try to enable the console unconditionally */
++static void try_enable_default_console(struct console *newcon)
++{
++	if (newcon->index < 0)
++		newcon->index = 0;
++
++	if (console_call_setup(newcon, NULL) != 0)
++		return;
++
++	newcon->flags |= CON_ENABLED;
++
++	if (newcon->device) {
++		newcon->flags |= CON_CONSDEV;
++		has_preferred_console = true;
++	}
++}
++
+ /*
+  * The console driver calls this routine during kernel initialization
+  * to register the console printing procedure with printk() and to
+@@ -2797,25 +2836,15 @@ void register_console(struct console *newcon)
+ 	 *	didn't select a console we take the first one
+ 	 *	that registers here.
+ 	 */
+-	if (!has_preferred_console) {
+-		if (newcon->index < 0)
+-			newcon->index = 0;
+-		if (newcon->setup == NULL ||
+-		    newcon->setup(newcon, NULL) == 0) {
+-			newcon->flags |= CON_ENABLED;
+-			if (newcon->device) {
+-				newcon->flags |= CON_CONSDEV;
+-				has_preferred_console = true;
+-			}
+-		}
+-	}
++	if (!has_preferred_console)
++		try_enable_default_console(newcon);
+ 
+ 	/* See if this console matches one we selected on the command line */
+-	err = try_enable_new_console(newcon, true);
++	err = try_enable_preferred_console(newcon, true);
+ 
+ 	/* If not, try to match against the platform default(s) */
+ 	if (err == -ENOENT)
+-		err = try_enable_new_console(newcon, false);
++		err = try_enable_preferred_console(newcon, false);
+ 
+ 	/* printk() messages are not printed to the Braille console. */
+ 	if (err || newcon->flags & CON_BRL)
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index e87e638c31bdf..c135cefa44ac0 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -1030,7 +1030,7 @@ __mod_timer(struct timer_list *timer, unsigned long expires, unsigned int option
+ 		/*
+ 		 * We are trying to schedule the timer on the new base.
+ 		 * However we can't change timer's base while it is running,
+-		 * otherwise del_timer_sync() can't detect that the timer's
++		 * otherwise timer_delete_sync() can't detect that the timer's
+ 		 * handler yet has not finished. This also guarantees that the
+ 		 * timer is serialized wrt itself.
+ 		 */
+@@ -1068,14 +1068,16 @@ __mod_timer(struct timer_list *timer, unsigned long expires, unsigned int option
+ }
+ 
+ /**
+- * mod_timer_pending - modify a pending timer's timeout
+- * @timer: the pending timer to be modified
+- * @expires: new timeout in jiffies
++ * mod_timer_pending - Modify a pending timer's timeout
++ * @timer:	The pending timer to be modified
++ * @expires:	New absolute timeout in jiffies
+  *
+- * mod_timer_pending() is the same for pending timers as mod_timer(),
+- * but will not re-activate and modify already deleted timers.
++ * mod_timer_pending() is the same for pending timers as mod_timer(), but
++ * will not activate inactive timers.
+  *
+- * It is useful for unserialized use of timers.
++ * Return:
++ * * %0 - The timer was inactive and not modified
++ * * %1 - The timer was active and requeued to expire at @expires
+  */
+ int mod_timer_pending(struct timer_list *timer, unsigned long expires)
+ {
+@@ -1084,24 +1086,27 @@ int mod_timer_pending(struct timer_list *timer, unsigned long expires)
+ EXPORT_SYMBOL(mod_timer_pending);
+ 
+ /**
+- * mod_timer - modify a timer's timeout
+- * @timer: the timer to be modified
+- * @expires: new timeout in jiffies
+- *
+- * mod_timer() is a more efficient way to update the expire field of an
+- * active timer (if the timer is inactive it will be activated)
++ * mod_timer - Modify a timer's timeout
++ * @timer:	The timer to be modified
++ * @expires:	New absolute timeout in jiffies
+  *
+  * mod_timer(timer, expires) is equivalent to:
+  *
+  *     del_timer(timer); timer->expires = expires; add_timer(timer);
+  *
++ * mod_timer() is more efficient than the above open coded sequence. In
++ * case that the timer is inactive, the del_timer() part is a NOP. The
++ * timer is in any case activated with the new expiry time @expires.
++ *
+  * Note that if there are multiple unserialized concurrent users of the
+  * same timer, then mod_timer() is the only safe way to modify the timeout,
+  * since add_timer() cannot modify an already running timer.
+  *
+- * The function returns whether it has modified a pending timer or not.
+- * (ie. mod_timer() of an inactive timer returns 0, mod_timer() of an
+- * active timer returns 1.)
++ * Return:
++ * * %0 - The timer was inactive and started
++ * * %1 - The timer was active and requeued to expire at @expires or
++ *	  the timer was active and not modified because @expires did
++ *	  not change the effective expiry time
+  */
+ int mod_timer(struct timer_list *timer, unsigned long expires)
+ {
+@@ -1112,11 +1117,18 @@ EXPORT_SYMBOL(mod_timer);
+ /**
+  * timer_reduce - Modify a timer's timeout if it would reduce the timeout
+  * @timer:	The timer to be modified
+- * @expires:	New timeout in jiffies
++ * @expires:	New absolute timeout in jiffies
+  *
+  * timer_reduce() is very similar to mod_timer(), except that it will only
+- * modify a running timer if that would reduce the expiration time (it will
+- * start a timer that isn't running).
++ * modify an enqueued timer if that would reduce the expiration time. If
++ * @timer is not enqueued it starts the timer.
++ *
++ * Return:
++ * * %0 - The timer was inactive and started
++ * * %1 - The timer was active and requeued to expire at @expires or
++ *	  the timer was active and not modified because @expires
++ *	  did not change the effective expiry time such that the
++ *	  timer would expire earlier than already scheduled
+  */
+ int timer_reduce(struct timer_list *timer, unsigned long expires)
+ {
+@@ -1125,18 +1137,21 @@ int timer_reduce(struct timer_list *timer, unsigned long expires)
+ EXPORT_SYMBOL(timer_reduce);
+ 
+ /**
+- * add_timer - start a timer
+- * @timer: the timer to be added
++ * add_timer - Start a timer
++ * @timer:	The timer to be started
+  *
+- * The kernel will do a ->function(@timer) callback from the
+- * timer interrupt at the ->expires point in the future. The
+- * current time is 'jiffies'.
++ * Start @timer to expire at @timer->expires in the future. @timer->expires
++ * is the absolute expiry time measured in 'jiffies'. When the timer expires
++ * timer->function(timer) will be invoked from soft interrupt context.
+  *
+- * The timer's ->expires, ->function fields must be set prior calling this
+- * function.
++ * The @timer->expires and @timer->function fields must be set prior
++ * to calling this function.
+  *
+- * Timers with an ->expires field in the past will be executed in the next
+- * timer tick.
++ * If @timer->expires is already in the past @timer will be queued to
++ * expire at the next timer tick.
++ *
++ * This can only operate on an inactive timer. Attempts to invoke this on
++ * an active timer are rejected with a warning.
+  */
+ void add_timer(struct timer_list *timer)
+ {
+@@ -1146,11 +1161,13 @@ void add_timer(struct timer_list *timer)
+ EXPORT_SYMBOL(add_timer);
+ 
+ /**
+- * add_timer_on - start a timer on a particular CPU
+- * @timer: the timer to be added
+- * @cpu: the CPU to start it on
++ * add_timer_on - Start a timer on a particular CPU
++ * @timer:	The timer to be started
++ * @cpu:	The CPU to start it on
++ *
++ * Same as add_timer() except that it starts the timer on the given CPU.
+  *
+- * This is not very scalable on SMP. Double adds are not possible.
++ * See add_timer() for further details.
+  */
+ void add_timer_on(struct timer_list *timer, int cpu)
+ {
+@@ -1185,15 +1202,18 @@ void add_timer_on(struct timer_list *timer, int cpu)
+ EXPORT_SYMBOL_GPL(add_timer_on);
+ 
+ /**
+- * del_timer - deactivate a timer.
+- * @timer: the timer to be deactivated
+- *
+- * del_timer() deactivates a timer - this works on both active and inactive
+- * timers.
+- *
+- * The function returns whether it has deactivated a pending timer or not.
+- * (ie. del_timer() of an inactive timer returns 0, del_timer() of an
+- * active timer returns 1.)
++ * del_timer - Deactivate a timer.
++ * @timer:	The timer to be deactivated
++ *
++ * The function only deactivates a pending timer, but contrary to
++ * timer_delete_sync() it does not take into account whether the timer's
++ * callback function is concurrently executed on a different CPU or not.
++ * It neither prevents rearming of the timer. If @timer can be rearmed
++ * concurrently then the return value of this function is meaningless.
++ *
++ * Return:
++ * * %0 - The timer was not pending
++ * * %1 - The timer was pending and deactivated
+  */
+ int del_timer(struct timer_list *timer)
+ {
+@@ -1215,10 +1235,19 @@ EXPORT_SYMBOL(del_timer);
+ 
+ /**
+  * try_to_del_timer_sync - Try to deactivate a timer
+- * @timer: timer to delete
++ * @timer:	Timer to deactivate
++ *
++ * This function tries to deactivate a timer. On success the timer is not
++ * queued and the timer callback function is not running on any CPU.
++ *
++ * This function does not guarantee that the timer cannot be rearmed right
++ * after dropping the base lock. That needs to be prevented by the calling
++ * code if necessary.
+  *
+- * This function tries to deactivate a timer. Upon successful (ret >= 0)
+- * exit the timer is not queued and the handler is not running on any CPU.
++ * Return:
++ * * %0  - The timer was not pending
++ * * %1  - The timer was pending and deactivated
++ * * %-1 - The timer callback function is running on a different CPU
+  */
+ int try_to_del_timer_sync(struct timer_list *timer)
+ {
+@@ -1312,25 +1341,20 @@ static inline void timer_sync_wait_running(struct timer_base *base) { }
+ static inline void del_timer_wait_running(struct timer_list *timer) { }
+ #endif
+ 
+-#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
+ /**
+- * del_timer_sync - deactivate a timer and wait for the handler to finish.
+- * @timer: the timer to be deactivated
+- *
+- * This function only differs from del_timer() on SMP: besides deactivating
+- * the timer it also makes sure the handler has finished executing on other
+- * CPUs.
++ * timer_delete_sync - Deactivate a timer and wait for the handler to finish.
++ * @timer:	The timer to be deactivated
+  *
+  * Synchronization rules: Callers must prevent restarting of the timer,
+  * otherwise this function is meaningless. It must not be called from
+  * interrupt contexts unless the timer is an irqsafe one. The caller must
+- * not hold locks which would prevent completion of the timer's
+- * handler. The timer's handler must not call add_timer_on(). Upon exit the
+- * timer is not queued and the handler is not running on any CPU.
++ * not hold locks which would prevent completion of the timer's callback
++ * function. The timer's handler must not call add_timer_on(). Upon exit
++ * the timer is not queued and the handler is not running on any CPU.
+  *
+- * Note: For !irqsafe timers, you must not hold locks that are held in
+- *   interrupt context while calling this function. Even if the lock has
+- *   nothing to do with the timer in question.  Here's why::
++ * For !irqsafe timers, the caller must not hold locks that are held in
++ * interrupt context. Even if the lock has nothing to do with the timer in
++ * question.  Here's why::
+  *
+  *    CPU0                             CPU1
+  *    ----                             ----
+@@ -1340,16 +1364,23 @@ static inline void del_timer_wait_running(struct timer_list *timer) { }
+  *    spin_lock_irq(somelock);
+  *                                     <IRQ>
+  *                                        spin_lock(somelock);
+- *    del_timer_sync(mytimer);
++ *    timer_delete_sync(mytimer);
+  *    while (base->running_timer == mytimer);
+  *
+- * Now del_timer_sync() will never return and never release somelock.
+- * The interrupt on the other CPU is waiting to grab somelock but
+- * it has interrupted the softirq that CPU0 is waiting to finish.
++ * Now timer_delete_sync() will never return and never release somelock.
++ * The interrupt on the other CPU is waiting to grab somelock but it has
++ * interrupted the softirq that CPU0 is waiting to finish.
++ *
++ * This function cannot guarantee that the timer is not rearmed again by
++ * some concurrent or preempting code, right after it dropped the base
++ * lock. If there is the possibility of a concurrent rearm then the return
++ * value of the function is meaningless.
+  *
+- * The function returns whether it has deactivated a pending timer or not.
++ * Return:
++ * * %0	- The timer was not pending
++ * * %1	- The timer was pending and deactivated
+  */
+-int del_timer_sync(struct timer_list *timer)
++int timer_delete_sync(struct timer_list *timer)
+ {
+ 	int ret;
+ 
+@@ -1382,8 +1413,7 @@ int del_timer_sync(struct timer_list *timer)
+ 
+ 	return ret;
+ }
+-EXPORT_SYMBOL(del_timer_sync);
+-#endif
++EXPORT_SYMBOL(timer_delete_sync);
+ 
+ static void call_timer_fn(struct timer_list *timer,
+ 			  void (*fn)(struct timer_list *),
+@@ -1405,8 +1435,8 @@ static void call_timer_fn(struct timer_list *timer,
+ #endif
+ 	/*
+ 	 * Couple the lock chain with the lock chain at
+-	 * del_timer_sync() by acquiring the lock_map around the fn()
+-	 * call here and in del_timer_sync().
++	 * timer_delete_sync() by acquiring the lock_map around the fn()
++	 * call here and in timer_delete_sync().
+ 	 */
+ 	lock_map_acquire(&lockdep_map);
+ 
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 4a43b8846b49f..2df8e13a29e57 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -415,7 +415,6 @@ struct rb_irq_work {
+ 	struct irq_work			work;
+ 	wait_queue_head_t		waiters;
+ 	wait_queue_head_t		full_waiters;
+-	long				wait_index;
+ 	bool				waiters_pending;
+ 	bool				full_waiters_pending;
+ 	bool				wakeup_full;
+@@ -832,8 +831,19 @@ static void rb_wake_up_waiters(struct irq_work *work)
+ 
+ 	wake_up_all(&rbwork->waiters);
+ 	if (rbwork->full_waiters_pending || rbwork->wakeup_full) {
++		/* Only cpu_buffer sets the above flags */
++		struct ring_buffer_per_cpu *cpu_buffer =
++			container_of(rbwork, struct ring_buffer_per_cpu, irq_work);
++
++		/* Called from interrupt context */
++		raw_spin_lock(&cpu_buffer->reader_lock);
+ 		rbwork->wakeup_full = false;
+ 		rbwork->full_waiters_pending = false;
++
++		/* Waking up all waiters, they will reset the shortest full */
++		cpu_buffer->shortest_full = 0;
++		raw_spin_unlock(&cpu_buffer->reader_lock);
++
+ 		wake_up_all(&rbwork->full_waiters);
+ 	}
+ }
+@@ -862,14 +872,41 @@ void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu)
+ 		rbwork = &cpu_buffer->irq_work;
+ 	}
+ 
+-	rbwork->wait_index++;
+-	/* make sure the waiters see the new index */
+-	smp_wmb();
+-
+ 	/* This can be called in any context */
+ 	irq_work_queue(&rbwork->work);
+ }
+ 
++static bool rb_watermark_hit(struct trace_buffer *buffer, int cpu, int full)
++{
++	struct ring_buffer_per_cpu *cpu_buffer;
++	bool ret = false;
++
++	/* Reads of all CPUs always waits for any data */
++	if (cpu == RING_BUFFER_ALL_CPUS)
++		return !ring_buffer_empty(buffer);
++
++	cpu_buffer = buffer->buffers[cpu];
++
++	if (!ring_buffer_empty_cpu(buffer, cpu)) {
++		unsigned long flags;
++		bool pagebusy;
++
++		if (!full)
++			return true;
++
++		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++		pagebusy = cpu_buffer->reader_page == cpu_buffer->commit_page;
++		ret = !pagebusy && full_hit(buffer, cpu, full);
++
++		if (!ret && (!cpu_buffer->shortest_full ||
++			     cpu_buffer->shortest_full > full)) {
++		    cpu_buffer->shortest_full = full;
++		}
++		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++	}
++	return ret;
++}
++
+ /**
+  * ring_buffer_wait - wait for input to the ring buffer
+  * @buffer: buffer to wait on
+@@ -885,7 +922,6 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	DEFINE_WAIT(wait);
+ 	struct rb_irq_work *work;
+-	long wait_index;
+ 	int ret = 0;
+ 
+ 	/*
+@@ -904,81 +940,54 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+ 		work = &cpu_buffer->irq_work;
+ 	}
+ 
+-	wait_index = READ_ONCE(work->wait_index);
+-
+-	while (true) {
+-		if (full)
+-			prepare_to_wait(&work->full_waiters, &wait, TASK_INTERRUPTIBLE);
+-		else
+-			prepare_to_wait(&work->waiters, &wait, TASK_INTERRUPTIBLE);
+-
+-		/*
+-		 * The events can happen in critical sections where
+-		 * checking a work queue can cause deadlocks.
+-		 * After adding a task to the queue, this flag is set
+-		 * only to notify events to try to wake up the queue
+-		 * using irq_work.
+-		 *
+-		 * We don't clear it even if the buffer is no longer
+-		 * empty. The flag only causes the next event to run
+-		 * irq_work to do the work queue wake up. The worse
+-		 * that can happen if we race with !trace_empty() is that
+-		 * an event will cause an irq_work to try to wake up
+-		 * an empty queue.
+-		 *
+-		 * There's no reason to protect this flag either, as
+-		 * the work queue and irq_work logic will do the necessary
+-		 * synchronization for the wake ups. The only thing
+-		 * that is necessary is that the wake up happens after
+-		 * a task has been queued. It's OK for spurious wake ups.
+-		 */
+-		if (full)
+-			work->full_waiters_pending = true;
+-		else
+-			work->waiters_pending = true;
+-
+-		if (signal_pending(current)) {
+-			ret = -EINTR;
+-			break;
+-		}
+-
+-		if (cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer))
+-			break;
+-
+-		if (cpu != RING_BUFFER_ALL_CPUS &&
+-		    !ring_buffer_empty_cpu(buffer, cpu)) {
+-			unsigned long flags;
+-			bool pagebusy;
+-			bool done;
+-
+-			if (!full)
+-				break;
+-
+-			raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+-			pagebusy = cpu_buffer->reader_page == cpu_buffer->commit_page;
+-			done = !pagebusy && full_hit(buffer, cpu, full);
++	if (full)
++		prepare_to_wait(&work->full_waiters, &wait, TASK_INTERRUPTIBLE);
++	else
++		prepare_to_wait(&work->waiters, &wait, TASK_INTERRUPTIBLE);
+ 
+-			if (!cpu_buffer->shortest_full ||
+-			    cpu_buffer->shortest_full > full)
+-				cpu_buffer->shortest_full = full;
+-			raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+-			if (done)
+-				break;
+-		}
++	/*
++	 * The events can happen in critical sections where
++	 * checking a work queue can cause deadlocks.
++	 * After adding a task to the queue, this flag is set
++	 * only to notify events to try to wake up the queue
++	 * using irq_work.
++	 *
++	 * We don't clear it even if the buffer is no longer
++	 * empty. The flag only causes the next event to run
++	 * irq_work to do the work queue wake up. The worse
++	 * that can happen if we race with !trace_empty() is that
++	 * an event will cause an irq_work to try to wake up
++	 * an empty queue.
++	 *
++	 * There's no reason to protect this flag either, as
++	 * the work queue and irq_work logic will do the necessary
++	 * synchronization for the wake ups. The only thing
++	 * that is necessary is that the wake up happens after
++	 * a task has been queued. It's OK for spurious wake ups.
++	 */
++	if (full)
++		work->full_waiters_pending = true;
++	else
++		work->waiters_pending = true;
+ 
+-		schedule();
++	if (rb_watermark_hit(buffer, cpu, full))
++		goto out;
+ 
+-		/* Make sure to see the new wait index */
+-		smp_rmb();
+-		if (wait_index != work->wait_index)
+-			break;
++	if (signal_pending(current)) {
++		ret = -EINTR;
++		goto out;
+ 	}
+ 
++	schedule();
++ out:
+ 	if (full)
+ 		finish_wait(&work->full_waiters, &wait);
+ 	else
+ 		finish_wait(&work->waiters, &wait);
+ 
++	if (!ret && !rb_watermark_hit(buffer, cpu, full) && signal_pending(current))
++		ret = -EINTR;
++
+ 	return ret;
+ }
+ 
+@@ -1001,30 +1010,51 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ 			  struct file *filp, poll_table *poll_table, int full)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+-	struct rb_irq_work *work;
++	struct rb_irq_work *rbwork;
+ 
+ 	if (cpu == RING_BUFFER_ALL_CPUS) {
+-		work = &buffer->irq_work;
++		rbwork = &buffer->irq_work;
+ 		full = 0;
+ 	} else {
+ 		if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ 			return EPOLLERR;
+ 
+ 		cpu_buffer = buffer->buffers[cpu];
+-		work = &cpu_buffer->irq_work;
++		rbwork = &cpu_buffer->irq_work;
+ 	}
+ 
+ 	if (full) {
+-		poll_wait(filp, &work->full_waiters, poll_table);
+-		work->full_waiters_pending = true;
++		unsigned long flags;
++
++		poll_wait(filp, &rbwork->full_waiters, poll_table);
++
++		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ 		if (!cpu_buffer->shortest_full ||
+ 		    cpu_buffer->shortest_full > full)
+ 			cpu_buffer->shortest_full = full;
+-	} else {
+-		poll_wait(filp, &work->waiters, poll_table);
+-		work->waiters_pending = true;
++		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++		if (full_hit(buffer, cpu, full))
++			return EPOLLIN | EPOLLRDNORM;
++		/*
++		 * Only allow full_waiters_pending update to be seen after
++		 * the shortest_full is set. If the writer sees the
++		 * full_waiters_pending flag set, it will compare the
++		 * amount in the ring buffer to shortest_full. If the amount
++		 * in the ring buffer is greater than the shortest_full
++		 * percent, it will call the irq_work handler to wake up
++		 * this list. The irq_handler will reset shortest_full
++		 * back to zero. That's done under the reader_lock, but
++		 * the below smp_mb() makes sure that the update to
++		 * full_waiters_pending doesn't leak up into the above.
++		 */
++		smp_mb();
++		rbwork->full_waiters_pending = true;
++		return 0;
+ 	}
+ 
++	poll_wait(filp, &rbwork->waiters, poll_table);
++	rbwork->waiters_pending = true;
++
+ 	/*
+ 	 * There's a tight race between setting the waiters_pending and
+ 	 * checking if the ring buffer is empty.  Once the waiters_pending bit
+@@ -1040,9 +1070,6 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ 	 */
+ 	smp_mb();
+ 
+-	if (full)
+-		return full_hit(buffer, cpu, full) ? EPOLLIN | EPOLLRDNORM : 0;
+-
+ 	if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) ||
+ 	    (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu)))
+ 		return EPOLLIN | EPOLLRDNORM;
+@@ -4184,7 +4211,7 @@ int ring_buffer_iter_empty(struct ring_buffer_iter *iter)
+ 	cpu_buffer = iter->cpu_buffer;
+ 	reader = cpu_buffer->reader_page;
+ 	head_page = cpu_buffer->head_page;
+-	commit_page = cpu_buffer->commit_page;
++	commit_page = READ_ONCE(cpu_buffer->commit_page);
+ 	commit_ts = commit_page->page->time_stamp;
+ 
+ 	/*
+diff --git a/mm/compaction.c b/mm/compaction.c
+index b58021666e1a3..77ca5e6f49800 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -2466,16 +2466,11 @@ enum compact_result try_to_compact_pages(gfp_t gfp_mask, unsigned int order,
+ 		unsigned int alloc_flags, const struct alloc_context *ac,
+ 		enum compact_priority prio, struct page **capture)
+ {
+-	int may_perform_io = gfp_mask & __GFP_IO;
+ 	struct zoneref *z;
+ 	struct zone *zone;
+ 	enum compact_result rc = COMPACT_SKIPPED;
+ 
+-	/*
+-	 * Check if the GFP flags allow compaction - GFP_NOIO is really
+-	 * tricky context because the migration might require IO
+-	 */
+-	if (!may_perform_io)
++	if (!gfp_compaction_allowed(gfp_mask))
+ 		return COMPACT_SKIPPED;
+ 
+ 	trace_mm_compaction_try_to_compact_pages(order, gfp_mask, prio);
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index f320ff02cc196..dba2936292cf1 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1075,7 +1075,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
+ 				unmap_success = false;
+ 			}
+ 		} else {
+-			unmap_success = try_to_unmap(p, ttu);
++			unmap_success = try_to_unmap(hpage, ttu);
+ 		}
+ 	}
+ 	if (!unmap_success)
+diff --git a/mm/memory.c b/mm/memory.c
+index 1d101aeae416a..2183003687cec 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -4934,6 +4934,10 @@ int follow_phys(struct vm_area_struct *vma,
+ 		goto out;
+ 	pte = *ptep;
+ 
++	/* Never return PFNs of anon folios in COW mappings. */
++	if (vm_normal_page(vma, address, pte))
++		goto unlock;
++
+ 	if ((flags & FOLL_WRITE) && !pte_write(pte))
+ 		goto unlock;
+ 
+diff --git a/mm/memtest.c b/mm/memtest.c
+index f53ace709ccd8..d407373f225b4 100644
+--- a/mm/memtest.c
++++ b/mm/memtest.c
+@@ -46,10 +46,10 @@ static void __init memtest(u64 pattern, phys_addr_t start_phys, phys_addr_t size
+ 	last_bad = 0;
+ 
+ 	for (p = start; p < end; p++)
+-		*p = pattern;
++		WRITE_ONCE(*p, pattern);
+ 
+ 	for (p = start; p < end; p++, start_phys_aligned += incr) {
+-		if (*p == pattern)
++		if (READ_ONCE(*p) == pattern)
+ 			continue;
+ 		if (start_phys_aligned == last_bad + incr) {
+ 			last_bad += incr;
+diff --git a/mm/migrate.c b/mm/migrate.c
+index fcb7eb6a6ecae..c0a8f3c9e256c 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -447,8 +447,12 @@ int migrate_page_move_mapping(struct address_space *mapping,
+ 	if (PageSwapBacked(page)) {
+ 		__SetPageSwapBacked(newpage);
+ 		if (PageSwapCache(page)) {
++			int i;
++
+ 			SetPageSwapCache(newpage);
+-			set_page_private(newpage, page_private(page));
++			for (i = 0; i < (1 << compound_order(page)); i++)
++				set_page_private(newpage + i,
++						 page_private(page + i));
+ 		}
+ 	} else {
+ 		VM_BUG_ON_PAGE(PageSwapCache(page), page);
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 124ab93246104..ed66601044be5 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -4644,6 +4644,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ 						struct alloc_context *ac)
+ {
+ 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
++	bool can_compact = gfp_compaction_allowed(gfp_mask);
+ 	const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
+ 	struct page *page = NULL;
+ 	unsigned int alloc_flags;
+@@ -4709,7 +4710,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ 	 * Don't try this for allocations that are allowed to ignore
+ 	 * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
+ 	 */
+-	if (can_direct_reclaim &&
++	if (can_direct_reclaim && can_compact &&
+ 			(costly_order ||
+ 			   (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
+ 			&& !gfp_pfmemalloc_allowed(gfp_mask)) {
+@@ -4806,9 +4807,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ 
+ 	/*
+ 	 * Do not retry costly high order allocations unless they are
+-	 * __GFP_RETRY_MAYFAIL
++	 * __GFP_RETRY_MAYFAIL and we can compact
+ 	 */
+-	if (costly_order && !(gfp_mask & __GFP_RETRY_MAYFAIL))
++	if (costly_order && (!can_compact ||
++			     !(gfp_mask & __GFP_RETRY_MAYFAIL)))
+ 		goto nopage;
+ 
+ 	if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
+@@ -4821,7 +4823,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ 	 * implementation of the compaction depends on the sufficient amount
+ 	 * of free memory (see __compaction_suitable)
+ 	 */
+-	if (did_some_progress > 0 &&
++	if (did_some_progress > 0 && can_compact &&
+ 			should_compact_retry(ac, order, alloc_flags,
+ 				compact_result, &compact_priority,
+ 				&compaction_retries))
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 86ade667a7af6..4ca1d04d8732f 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -1271,6 +1271,11 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
+ }
+ 
+ /*
++ * Note that when only holding the PTL, swapoff might succeed immediately
++ * after freeing a swap entry. Therefore, immediately after
++ * __swap_entry_free(), the swap info might become stale and should not
++ * be touched without a prior get_swap_device().
++ *
+  * Check whether swap entry is valid in the swap device.  If so,
+  * return pointer to swap_info_struct, and keep the swap entry valid
+  * via preventing the swap device from being swapoff, until
+@@ -1797,13 +1802,19 @@ int free_swap_and_cache(swp_entry_t entry)
+ 	if (non_swap_entry(entry))
+ 		return 1;
+ 
+-	p = _swap_info_get(entry);
++	p = get_swap_device(entry);
+ 	if (p) {
++		if (WARN_ON(data_race(!p->swap_map[swp_offset(entry)]))) {
++			put_swap_device(p);
++			return 0;
++		}
++
+ 		count = __swap_entry_free(p, entry);
+ 		if (count == SWAP_HAS_CACHE &&
+ 		    !swap_page_trans_huge_swapped(p, entry))
+ 			__try_to_reclaim_swap(p, swp_offset(entry),
+ 					      TTRS_UNMAPPED | TTRS_FULL);
++		put_swap_device(p);
+ 	}
+ 	return p != NULL;
+ }
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 51ccd80e70b6a..e2b8cee1dbc33 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2546,7 +2546,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
+ /* Use reclaim/compaction for costly allocs or under memory pressure */
+ static bool in_reclaim_compaction(struct scan_control *sc)
+ {
+-	if (IS_ENABLED(CONFIG_COMPACTION) && sc->order &&
++	if (gfp_compaction_allowed(sc->gfp_mask) && sc->order &&
+ 			(sc->order > PAGE_ALLOC_COSTLY_ORDER ||
+ 			 sc->priority < DEF_PRIORITY - 2))
+ 		return true;
+@@ -2873,6 +2873,9 @@ static inline bool compaction_ready(struct zone *zone, struct scan_control *sc)
+ 	unsigned long watermark;
+ 	enum compact_result suitable;
+ 
++	if (!gfp_compaction_allowed(sc->gfp_mask))
++		return false;
++
+ 	suitable = compaction_suitable(zone, sc->order, 0, sc->reclaim_idx);
+ 	if (suitable == COMPACT_SUCCESS)
+ 		/* Allocation should succeed already. Don't reclaim. */
+diff --git a/net/bluetooth/hci_debugfs.c b/net/bluetooth/hci_debugfs.c
+index d4efc4aa55af8..131bb56bf2afe 100644
+--- a/net/bluetooth/hci_debugfs.c
++++ b/net/bluetooth/hci_debugfs.c
+@@ -216,10 +216,12 @@ static int conn_info_min_age_set(void *data, u64 val)
+ {
+ 	struct hci_dev *hdev = data;
+ 
+-	if (val == 0 || val > hdev->conn_info_max_age)
++	hci_dev_lock(hdev);
++	if (val == 0 || val > hdev->conn_info_max_age) {
++		hci_dev_unlock(hdev);
+ 		return -EINVAL;
++	}
+ 
+-	hci_dev_lock(hdev);
+ 	hdev->conn_info_min_age = val;
+ 	hci_dev_unlock(hdev);
+ 
+@@ -244,10 +246,12 @@ static int conn_info_max_age_set(void *data, u64 val)
+ {
+ 	struct hci_dev *hdev = data;
+ 
+-	if (val == 0 || val < hdev->conn_info_min_age)
++	hci_dev_lock(hdev);
++	if (val == 0 || val < hdev->conn_info_min_age) {
++		hci_dev_unlock(hdev);
+ 		return -EINVAL;
++	}
+ 
+-	hci_dev_lock(hdev);
+ 	hdev->conn_info_max_age = val;
+ 	hci_dev_unlock(hdev);
+ 
+@@ -526,10 +530,12 @@ static int sniff_min_interval_set(void *data, u64 val)
+ {
+ 	struct hci_dev *hdev = data;
+ 
+-	if (val == 0 || val % 2 || val > hdev->sniff_max_interval)
++	hci_dev_lock(hdev);
++	if (val == 0 || val % 2 || val > hdev->sniff_max_interval) {
++		hci_dev_unlock(hdev);
+ 		return -EINVAL;
++	}
+ 
+-	hci_dev_lock(hdev);
+ 	hdev->sniff_min_interval = val;
+ 	hci_dev_unlock(hdev);
+ 
+@@ -554,10 +560,12 @@ static int sniff_max_interval_set(void *data, u64 val)
+ {
+ 	struct hci_dev *hdev = data;
+ 
+-	if (val == 0 || val % 2 || val < hdev->sniff_min_interval)
++	hci_dev_lock(hdev);
++	if (val == 0 || val % 2 || val < hdev->sniff_min_interval) {
++		hci_dev_unlock(hdev);
+ 		return -EINVAL;
++	}
+ 
+-	hci_dev_lock(hdev);
+ 	hdev->sniff_max_interval = val;
+ 	hci_dev_unlock(hdev);
+ 
+@@ -798,10 +806,12 @@ static int conn_min_interval_set(void *data, u64 val)
+ {
+ 	struct hci_dev *hdev = data;
+ 
+-	if (val < 0x0006 || val > 0x0c80 || val > hdev->le_conn_max_interval)
++	hci_dev_lock(hdev);
++	if (val < 0x0006 || val > 0x0c80 || val > hdev->le_conn_max_interval) {
++		hci_dev_unlock(hdev);
+ 		return -EINVAL;
++	}
+ 
+-	hci_dev_lock(hdev);
+ 	hdev->le_conn_min_interval = val;
+ 	hci_dev_unlock(hdev);
+ 
+@@ -826,10 +836,12 @@ static int conn_max_interval_set(void *data, u64 val)
+ {
+ 	struct hci_dev *hdev = data;
+ 
+-	if (val < 0x0006 || val > 0x0c80 || val < hdev->le_conn_min_interval)
++	hci_dev_lock(hdev);
++	if (val < 0x0006 || val > 0x0c80 || val < hdev->le_conn_min_interval) {
++		hci_dev_unlock(hdev);
+ 		return -EINVAL;
++	}
+ 
+-	hci_dev_lock(hdev);
+ 	hdev->le_conn_max_interval = val;
+ 	hci_dev_unlock(hdev);
+ 
+@@ -938,10 +950,12 @@ static int adv_min_interval_set(void *data, u64 val)
+ {
+ 	struct hci_dev *hdev = data;
+ 
+-	if (val < 0x0020 || val > 0x4000 || val > hdev->le_adv_max_interval)
++	hci_dev_lock(hdev);
++	if (val < 0x0020 || val > 0x4000 || val > hdev->le_adv_max_interval) {
++		hci_dev_unlock(hdev);
+ 		return -EINVAL;
++	}
+ 
+-	hci_dev_lock(hdev);
+ 	hdev->le_adv_min_interval = val;
+ 	hci_dev_unlock(hdev);
+ 
+@@ -966,10 +980,12 @@ static int adv_max_interval_set(void *data, u64 val)
+ {
+ 	struct hci_dev *hdev = data;
+ 
+-	if (val < 0x0020 || val > 0x4000 || val < hdev->le_adv_min_interval)
++	hci_dev_lock(hdev);
++	if (val < 0x0020 || val > 0x4000 || val < hdev->le_adv_min_interval) {
++		hci_dev_unlock(hdev);
+ 		return -EINVAL;
++	}
+ 
+-	hci_dev_lock(hdev);
+ 	hdev->le_adv_max_interval = val;
+ 	hci_dev_unlock(hdev);
+ 
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index a0d9bc99f4e14..58c0299587595 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -2636,6 +2636,31 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		if (test_bit(HCI_ENCRYPT, &hdev->flags))
+ 			set_bit(HCI_CONN_ENCRYPT, &conn->flags);
+ 
++		/* "Link key request" completed ahead of "connect request" completes */
++		if (ev->encr_mode == 1 && !test_bit(HCI_CONN_ENCRYPT, &conn->flags) &&
++		    ev->link_type == ACL_LINK) {
++			struct link_key *key;
++			struct hci_cp_read_enc_key_size cp;
++
++			key = hci_find_link_key(hdev, &ev->bdaddr);
++			if (key) {
++				set_bit(HCI_CONN_ENCRYPT, &conn->flags);
++
++				if (!(hdev->commands[20] & 0x10)) {
++					conn->enc_key_size = HCI_LINK_KEY_SIZE;
++				} else {
++					cp.handle = cpu_to_le16(conn->handle);
++					if (hci_send_cmd(hdev, HCI_OP_READ_ENC_KEY_SIZE,
++							 sizeof(cp), &cp)) {
++						bt_dev_err(hdev, "sending read key size failed");
++						conn->enc_key_size = HCI_LINK_KEY_SIZE;
++					}
++				}
++
++				hci_encrypt_cfm(conn, ev->status);
++			}
++		}
++
+ 		/* Get remote features */
+ 		if (conn->type == ACL_LINK) {
+ 			struct hci_cp_read_remote_features cp;
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index bab14186f9ad5..14a06d8b1a2d0 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -1070,6 +1070,8 @@ static int do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 	struct ebt_table_info *newinfo;
+ 	struct ebt_replace tmp;
+ 
++	if (len < sizeof(tmp))
++		return -EINVAL;
+ 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
+ 		return -EFAULT;
+ 
+@@ -1309,6 +1311,8 @@ static int update_counters(struct net *net, sockptr_t arg, unsigned int len)
+ {
+ 	struct ebt_replace hlp;
+ 
++	if (len < sizeof(hlp))
++		return -EINVAL;
+ 	if (copy_from_sockptr(&hlp, arg, sizeof(hlp)))
+ 		return -EFAULT;
+ 
+@@ -2238,6 +2242,8 @@ static int compat_update_counters(struct net *net, sockptr_t arg,
+ {
+ 	struct compat_ebt_replace hlp;
+ 
++	if (len < sizeof(hlp))
++		return -EINVAL;
+ 	if (copy_from_sockptr(&hlp, arg, sizeof(hlp)))
+ 		return -EFAULT;
+ 
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index f375ef1501490..52e395a189dff 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -422,6 +422,9 @@ static int __sock_map_delete(struct bpf_stab *stab, struct sock *sk_test,
+ 	struct sock *sk;
+ 	int err = 0;
+ 
++	if (irqs_disabled())
++		return -EOPNOTSUPP; /* locks here are hardirq-unsafe */
++
+ 	raw_spin_lock_bh(&stab->lock);
+ 	sk = *psk;
+ 	if (!sk_test || sk_test == sk)
+@@ -955,6 +958,9 @@ static int sock_hash_delete_elem(struct bpf_map *map, void *key)
+ 	struct bpf_shtab_elem *elem;
+ 	int ret = -ENOENT;
+ 
++	if (irqs_disabled())
++		return -EOPNOTSUPP; /* locks here are hardirq-unsafe */
++
+ 	hash = sock_hash_bucket_hash(key, key_size);
+ 	bucket = sock_hash_select_bucket(htab, hash);
+ 
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index b15c9ad0095a2..6ebe43b4d28f7 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -580,6 +580,20 @@ void inet_csk_clear_xmit_timers(struct sock *sk)
+ }
+ EXPORT_SYMBOL(inet_csk_clear_xmit_timers);
+ 
++void inet_csk_clear_xmit_timers_sync(struct sock *sk)
++{
++	struct inet_connection_sock *icsk = inet_csk(sk);
++
++	/* ongoing timer handlers need to acquire socket lock. */
++	sock_not_owned_by_me(sk);
++
++	icsk->icsk_pending = icsk->icsk_ack.pending = 0;
++
++	sk_stop_timer_sync(sk, &icsk->icsk_retransmit_timer);
++	sk_stop_timer_sync(sk, &icsk->icsk_delack_timer);
++	sk_stop_timer_sync(sk, &sk->sk_timer);
++}
++
+ void inet_csk_delete_keepalive_timer(struct sock *sk)
+ {
+ 	sk_stop_timer(sk, &sk->sk_timer);
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index a6ad0fe1387c0..0ac652fef06d4 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -278,8 +278,13 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ 					  tpi->flags | TUNNEL_NO_KEY,
+ 					  iph->saddr, iph->daddr, 0);
+ 	} else {
++		if (unlikely(!pskb_may_pull(skb,
++					    gre_hdr_len + sizeof(*ershdr))))
++			return PACKET_REJECT;
++
+ 		ershdr = (struct erspan_base_hdr *)(skb->data + gre_hdr_len);
+ 		ver = ershdr->ver;
++		iph = ip_hdr(skb);
+ 		tunnel = ip_tunnel_lookup(itn, skb->dev->ifindex,
+ 					  tpi->flags | TUNNEL_KEY,
+ 					  iph->saddr, iph->daddr, tpi->key);
+diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c
+index d6d45d820d79a..48c6aa3d91ae8 100644
+--- a/net/ipv4/netfilter/arp_tables.c
++++ b/net/ipv4/netfilter/arp_tables.c
+@@ -955,6 +955,8 @@ static int do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 	void *loc_cpu_entry;
+ 	struct arpt_entry *iter;
+ 
++	if (len < sizeof(tmp))
++		return -EINVAL;
+ 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
+ 		return -EFAULT;
+ 
+@@ -1253,6 +1255,8 @@ static int compat_do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 	void *loc_cpu_entry;
+ 	struct arpt_entry *iter;
+ 
++	if (len < sizeof(tmp))
++		return -EINVAL;
+ 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
+ 		return -EFAULT;
+ 
+diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
+index ec981618b7b22..b46d58b9f3fe4 100644
+--- a/net/ipv4/netfilter/ip_tables.c
++++ b/net/ipv4/netfilter/ip_tables.c
+@@ -1109,6 +1109,8 @@ do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 	void *loc_cpu_entry;
+ 	struct ipt_entry *iter;
+ 
++	if (len < sizeof(tmp))
++		return -EINVAL;
+ 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
+ 		return -EFAULT;
+ 
+@@ -1493,6 +1495,8 @@ compat_do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 	void *loc_cpu_entry;
+ 	struct ipt_entry *iter;
+ 
++	if (len < sizeof(tmp))
++		return -EINVAL;
+ 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
+ 		return -EFAULT;
+ 
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 2e874ec859715..ac6cb2dc60380 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2717,6 +2717,8 @@ void tcp_close(struct sock *sk, long timeout)
+ 	lock_sock(sk);
+ 	__tcp_close(sk, timeout);
+ 	release_sock(sk);
++	if (!sk->sk_net_refcnt)
++		inet_csk_clear_xmit_timers_sync(sk);
+ 	sock_put(sk);
+ }
+ EXPORT_SYMBOL(tcp_close);
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index b2541c7d7c87f..0b7e76e6f2028 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -602,6 +602,13 @@ static inline bool __udp_is_mcast_sock(struct net *net, struct sock *sk,
+ }
+ 
+ DEFINE_STATIC_KEY_FALSE(udp_encap_needed_key);
++EXPORT_SYMBOL(udp_encap_needed_key);
++
++#if IS_ENABLED(CONFIG_IPV6)
++DEFINE_STATIC_KEY_FALSE(udpv6_encap_needed_key);
++EXPORT_SYMBOL(udpv6_encap_needed_key);
++#endif
++
+ void udp_encap_enable(void)
+ {
+ 	static_branch_inc(&udp_encap_needed_key);
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index f4b8e56068e06..445d8bc30fdd1 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -512,6 +512,11 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
+ 	unsigned int off = skb_gro_offset(skb);
+ 	int flush = 1;
+ 
++	/* We can do L4 aggregation only if the packet can't land in a tunnel
++	 * otherwise we could corrupt the inner stream. Detecting such packets
++	 * cannot be foolproof and the aggregation might still happen in some
++	 * cases. Such packets should be caught in udp_unexpected_gso later.
++	 */
+ 	NAPI_GRO_CB(skb)->is_flist = 0;
+ 	if (skb->dev->features & NETIF_F_GRO_FRAGLIST)
+ 		NAPI_GRO_CB(skb)->is_flist = sk ? !udp_sk(sk)->gro_enabled: 1;
+@@ -668,13 +673,7 @@ INDIRECT_CALLABLE_SCOPE int udp4_gro_complete(struct sk_buff *skb, int nhoff)
+ 		skb_shinfo(skb)->gso_type |= (SKB_GSO_FRAGLIST|SKB_GSO_UDP_L4);
+ 		skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
+ 
+-		if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
+-			if (skb->csum_level < SKB_MAX_CSUM_LEVEL)
+-				skb->csum_level++;
+-		} else {
+-			skb->ip_summed = CHECKSUM_UNNECESSARY;
+-			skb->csum_level = 0;
+-		}
++		__skb_incr_checksum_unnecessary(skb);
+ 
+ 		return 0;
+ 	}
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 608205c632c8c..d70783283a417 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -643,19 +643,19 @@ static int inet6_dump_fib(struct sk_buff *skb, struct netlink_callback *cb)
+ 	if (!w) {
+ 		/* New dump:
+ 		 *
+-		 * 1. hook callback destructor.
+-		 */
+-		cb->args[3] = (long)cb->done;
+-		cb->done = fib6_dump_done;
+-
+-		/*
+-		 * 2. allocate and initialize walker.
++		 * 1. allocate and initialize walker.
+ 		 */
+ 		w = kzalloc(sizeof(*w), GFP_ATOMIC);
+ 		if (!w)
+ 			return -ENOMEM;
+ 		w->func = fib6_dump_node;
+ 		cb->args[2] = (long)w;
++
++		/* 2. hook callback destructor.
++		 */
++		cb->args[3] = (long)cb->done;
++		cb->done = fib6_dump_done;
++
+ 	}
+ 
+ 	arg.skb = skb;
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 2df1036330f80..13ac0ccdc8d79 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -533,6 +533,9 @@ static int ip6erspan_rcv(struct sk_buff *skb,
+ 	struct ip6_tnl *tunnel;
+ 	u8 ver;
+ 
++	if (unlikely(!pskb_may_pull(skb, sizeof(*ershdr))))
++		return PACKET_REJECT;
++
+ 	ipv6h = ipv6_hdr(skb);
+ 	ershdr = (struct erspan_base_hdr *)skb->data;
+ 	ver = ershdr->ver;
+diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
+index 99bb11d167127..d013395be05fc 100644
+--- a/net/ipv6/netfilter/ip6_tables.c
++++ b/net/ipv6/netfilter/ip6_tables.c
+@@ -1127,6 +1127,8 @@ do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 	void *loc_cpu_entry;
+ 	struct ip6t_entry *iter;
+ 
++	if (len < sizeof(tmp))
++		return -EINVAL;
+ 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
+ 		return -EFAULT;
+ 
+@@ -1503,6 +1505,8 @@ compat_do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 	void *loc_cpu_entry;
+ 	struct ip6t_entry *iter;
+ 
++	if (len < sizeof(tmp))
++		return -EINVAL;
+ 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
+ 		return -EFAULT;
+ 
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 5385037209a6b..b5d879f2501da 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -474,7 +474,7 @@ int udpv6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 	goto try_again;
+ }
+ 
+-DEFINE_STATIC_KEY_FALSE(udpv6_encap_needed_key);
++DECLARE_STATIC_KEY_FALSE(udpv6_encap_needed_key);
+ void udpv6_encap_enable(void)
+ {
+ 	static_branch_inc(&udpv6_encap_needed_key);
+diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c
+index ebee748f25b9e..7752e1e921f8f 100644
+--- a/net/ipv6/udp_offload.c
++++ b/net/ipv6/udp_offload.c
+@@ -169,13 +169,7 @@ INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int nhoff)
+ 		skb_shinfo(skb)->gso_type |= (SKB_GSO_FRAGLIST|SKB_GSO_UDP_L4);
+ 		skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
+ 
+-		if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
+-			if (skb->csum_level < SKB_MAX_CSUM_LEVEL)
+-				skb->csum_level++;
+-		} else {
+-			skb->ip_summed = CHECKSUM_UNNECESSARY;
+-			skb->csum_level = 0;
+-		}
++		__skb_incr_checksum_unnecessary(skb);
+ 
+ 		return 0;
+ 	}
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 45bb6f2755987..0c3da7771b48b 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -1811,15 +1811,14 @@ static int ieee80211_change_station(struct wiphy *wiphy,
+ 		}
+ 
+ 		if (sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
+-		    sta->sdata->u.vlan.sta) {
+-			ieee80211_clear_fast_rx(sta);
++		    sta->sdata->u.vlan.sta)
+ 			RCU_INIT_POINTER(sta->sdata->u.vlan.sta, NULL);
+-		}
+ 
+ 		if (test_sta_flag(sta, WLAN_STA_AUTHORIZED))
+ 			ieee80211_vif_dec_num_mcast(sta->sdata);
+ 
+ 		sta->sdata = vlansdata;
++		ieee80211_check_fast_rx(sta);
+ 		ieee80211_check_fast_xmit(sta);
+ 
+ 		if (test_sta_flag(sta, WLAN_STA_AUTHORIZED)) {
+diff --git a/net/mac802154/llsec.c b/net/mac802154/llsec.c
+index 55550ead2ced8..a4cc9d077c59c 100644
+--- a/net/mac802154/llsec.c
++++ b/net/mac802154/llsec.c
+@@ -265,19 +265,27 @@ int mac802154_llsec_key_add(struct mac802154_llsec *sec,
+ 	return -ENOMEM;
+ }
+ 
++static void mac802154_llsec_key_del_rcu(struct rcu_head *rcu)
++{
++	struct ieee802154_llsec_key_entry *pos;
++	struct mac802154_llsec_key *mkey;
++
++	pos = container_of(rcu, struct ieee802154_llsec_key_entry, rcu);
++	mkey = container_of(pos->key, struct mac802154_llsec_key, key);
++
++	llsec_key_put(mkey);
++	kfree_sensitive(pos);
++}
++
+ int mac802154_llsec_key_del(struct mac802154_llsec *sec,
+ 			    const struct ieee802154_llsec_key_id *key)
+ {
+ 	struct ieee802154_llsec_key_entry *pos;
+ 
+ 	list_for_each_entry(pos, &sec->table.keys, list) {
+-		struct mac802154_llsec_key *mkey;
+-
+-		mkey = container_of(pos->key, struct mac802154_llsec_key, key);
+-
+ 		if (llsec_key_id_equal(&pos->id, key)) {
+ 			list_del_rcu(&pos->list);
+-			llsec_key_put(mkey);
++			call_rcu(&pos->rcu, mac802154_llsec_key_del_rcu);
+ 			return 0;
+ 		}
+ 	}
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index adbe6350f980b..6be7e75922918 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2218,9 +2218,6 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err,
+ 
+ 		__MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPCAPABLEPASSIVEACK);
+ 		local_bh_enable();
+-	} else {
+-		MPTCP_INC_STATS(sock_net(sk),
+-				MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK);
+ 	}
+ 
+ out:
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 607519246bf28..276fe9f44df73 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -595,6 +595,9 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ 			if (fallback_is_fatal)
+ 				goto dispose_child;
+ 
++			if (fallback)
++				SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK);
++
+ 			subflow_drop_ctx(child);
+ 			goto out;
+ 		}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 8d4472b127e41..ab7f7e45b9846 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1084,6 +1084,24 @@ static void nf_tables_table_disable(struct net *net, struct nft_table *table)
+ #define __NFT_TABLE_F_UPDATE		(__NFT_TABLE_F_WAS_DORMANT | \
+ 					 __NFT_TABLE_F_WAS_AWAKEN)
+ 
++static bool nft_table_pending_update(const struct nft_ctx *ctx)
++{
++	struct nftables_pernet *nft_net = net_generic(ctx->net, nf_tables_net_id);
++	struct nft_trans *trans;
++
++	if (ctx->table->flags & __NFT_TABLE_F_UPDATE)
++		return true;
++
++	list_for_each_entry(trans, &nft_net->commit_list, list) {
++		if (trans->ctx.table == ctx->table &&
++		    trans->msg_type == NFT_MSG_DELCHAIN &&
++		    nft_is_base_chain(trans->ctx.chain))
++			return true;
++	}
++
++	return false;
++}
++
+ static int nf_tables_updtable(struct nft_ctx *ctx)
+ {
+ 	struct nft_trans *trans;
+@@ -1101,7 +1119,7 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
+ 		return 0;
+ 
+ 	/* No dormant off/on/off/on games in single transaction */
+-	if (ctx->table->flags & __NFT_TABLE_F_UPDATE)
++	if (nft_table_pending_update(ctx))
+ 		return -EINVAL;
+ 
+ 	trans = nft_trans_alloc(ctx, NFT_MSG_NEWTABLE,
+@@ -2225,6 +2243,9 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 		struct nft_stats __percpu *stats = NULL;
+ 		struct nft_chain_hook hook;
+ 
++		if (table->flags & __NFT_TABLE_F_UPDATE)
++			return -EINVAL;
++
+ 		if (flags & NFT_CHAIN_BINDING)
+ 			return -EOPNOTSUPP;
+ 
+@@ -4410,6 +4431,12 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 		if ((flags & (NFT_SET_EVAL | NFT_SET_OBJECT)) ==
+ 			     (NFT_SET_EVAL | NFT_SET_OBJECT))
+ 			return -EOPNOTSUPP;
++		if ((flags & (NFT_SET_ANONYMOUS | NFT_SET_TIMEOUT | NFT_SET_EVAL)) ==
++			     (NFT_SET_ANONYMOUS | NFT_SET_TIMEOUT))
++			return -EOPNOTSUPP;
++		if ((flags & (NFT_SET_CONSTANT | NFT_SET_TIMEOUT)) ==
++			     (NFT_SET_CONSTANT | NFT_SET_TIMEOUT))
++			return -EOPNOTSUPP;
+ 	}
+ 
+ 	dtype = 0;
+@@ -4451,6 +4478,9 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 		if (!(flags & NFT_SET_TIMEOUT))
+ 			return -EINVAL;
+ 
++		if (flags & NFT_SET_ANONYMOUS)
++			return -EOPNOTSUPP;
++
+ 		err = nf_msecs_to_jiffies64(nla[NFTA_SET_TIMEOUT], &timeout);
+ 		if (err)
+ 			return err;
+@@ -4459,6 +4489,10 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 	if (nla[NFTA_SET_GC_INTERVAL] != NULL) {
+ 		if (!(flags & NFT_SET_TIMEOUT))
+ 			return -EINVAL;
++
++		if (flags & NFT_SET_ANONYMOUS)
++			return -EOPNOTSUPP;
++
+ 		gc_int = ntohl(nla_get_be32(nla[NFTA_SET_GC_INTERVAL]));
+ 	}
+ 
+@@ -4754,6 +4788,7 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 
+ 	if (list_empty(&set->bindings) && nft_set_is_anonymous(set)) {
+ 		list_del_rcu(&set->list);
++		set->dead = 1;
+ 		if (event)
+ 			nf_tables_set_notify(ctx, set, NFT_MSG_DELSET,
+ 					     GFP_KERNEL);
+@@ -6862,11 +6897,12 @@ static int nft_flowtable_parse_hook(const struct nft_ctx *ctx,
+ 	return err;
+ }
+ 
++/* call under rcu_read_lock */
+ static const struct nf_flowtable_type *__nft_flowtable_type_get(u8 family)
+ {
+ 	const struct nf_flowtable_type *type;
+ 
+-	list_for_each_entry(type, &nf_tables_flowtables, list) {
++	list_for_each_entry_rcu(type, &nf_tables_flowtables, list) {
+ 		if (family == type->family)
+ 			return type;
+ 	}
+@@ -6878,9 +6914,13 @@ nft_flowtable_type_get(struct net *net, u8 family)
+ {
+ 	const struct nf_flowtable_type *type;
+ 
++	rcu_read_lock();
+ 	type = __nft_flowtable_type_get(family);
+-	if (type != NULL && try_module_get(type->owner))
++	if (type != NULL && try_module_get(type->owner)) {
++		rcu_read_unlock();
+ 		return type;
++	}
++	rcu_read_unlock();
+ 
+ 	lockdep_nfnl_nft_mutex_not_held();
+ #ifdef CONFIG_MODULES
+@@ -8778,10 +8818,11 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 	struct nft_trans *trans, *next;
+ 	LIST_HEAD(set_update_list);
+ 	struct nft_trans_elem *te;
++	int err = 0;
+ 
+ 	if (action == NFNL_ABORT_VALIDATE &&
+ 	    nf_tables_validate(net) < 0)
+-		return -EAGAIN;
++		err = -EAGAIN;
+ 
+ 	list_for_each_entry_safe_reverse(trans, next, &nft_net->commit_list,
+ 					 list) {
+@@ -8941,12 +8982,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 		nf_tables_abort_release(trans);
+ 	}
+ 
+-	if (action == NFNL_ABORT_AUTOLOAD)
+-		nf_tables_module_autoload(net);
+-	else
+-		nf_tables_module_autoload_cleanup(net);
+-
+-	return 0;
++	return err;
+ }
+ 
+ static int nf_tables_abort(struct net *net, struct sk_buff *skb,
+@@ -8960,6 +8996,16 @@ static int nf_tables_abort(struct net *net, struct sk_buff *skb,
+ 	ret = __nf_tables_abort(net, action);
+ 	nft_gc_seq_end(nft_net, gc_seq);
+ 
++	WARN_ON_ONCE(!list_empty(&nft_net->commit_list));
++
++	/* module autoload needs to happen after GC sequence update because it
++	 * temporarily releases and grabs mutex again.
++	 */
++	if (action == NFNL_ABORT_AUTOLOAD)
++		nf_tables_module_autoload(net);
++	else
++		nf_tables_module_autoload_cleanup(net);
++
+ 	mutex_unlock(&nft_net->commit_mutex);
+ 
+ 	return ret;
+@@ -9694,8 +9740,11 @@ static void __net_exit nf_tables_exit_net(struct net *net)
+ 
+ 	gc_seq = nft_gc_seq_begin(nft_net);
+ 
+-	if (!list_empty(&nft_net->commit_list))
+-		__nf_tables_abort(net, NFNL_ABORT_NONE);
++	WARN_ON_ONCE(!list_empty(&nft_net->commit_list));
++
++	if (!list_empty(&nft_net->module_list))
++		nf_tables_module_autoload_cleanup(net);
++
+ 	__nft_release_tables(net);
+ 
+ 	nft_gc_seq_end(nft_net, gc_seq);
+@@ -9779,6 +9828,7 @@ static void __exit nf_tables_module_exit(void)
+ 	unregister_netdevice_notifier(&nf_tables_flowtable_notifier);
+ 	nft_chain_filter_fini();
+ 	nft_chain_route_fini();
++	nf_tables_trans_destroy_flush_work();
+ 	unregister_pernet_subsys(&nf_tables_net_ops);
+ 	cancel_work_sync(&trans_gc_work);
+ 	cancel_work_sync(&trans_destroy_work);
+diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
+index 5bfaf06f7be7f..d8002065baaef 100644
+--- a/net/nfc/nci/core.c
++++ b/net/nfc/nci/core.c
+@@ -1502,6 +1502,11 @@ static void nci_rx_work(struct work_struct *work)
+ 		nfc_send_to_raw_sock(ndev->nfc_dev, skb,
+ 				     RAW_PAYLOAD_NCI, NFC_DIRECTION_RX);
+ 
++		if (!nci_plen(skb->data)) {
++			kfree_skb(skb);
++			break;
++		}
++
+ 		/* Process frame */
+ 		switch (nci_mt(skb->data)) {
+ 		case NCI_MT_RSP_PKT:
+diff --git a/net/rds/rdma.c b/net/rds/rdma.c
+index c29c7a59f2053..3df0affff6b0f 100644
+--- a/net/rds/rdma.c
++++ b/net/rds/rdma.c
+@@ -302,7 +302,7 @@ static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args,
+ 		}
+ 		ret = PTR_ERR(trans_private);
+ 		/* Trigger connection so that its ready for the next retry */
+-		if (ret == -ENODEV)
++		if (ret == -ENODEV && cp)
+ 			rds_conn_connect_if_down(cp->cp_conn);
+ 		goto out;
+ 	}
+diff --git a/net/sched/act_skbmod.c b/net/sched/act_skbmod.c
+index aa98dcac94b95..934765a2edebb 100644
+--- a/net/sched/act_skbmod.c
++++ b/net/sched/act_skbmod.c
+@@ -219,13 +219,13 @@ static int tcf_skbmod_dump(struct sk_buff *skb, struct tc_action *a,
+ 	struct tcf_skbmod *d = to_skbmod(a);
+ 	unsigned char *b = skb_tail_pointer(skb);
+ 	struct tcf_skbmod_params  *p;
+-	struct tc_skbmod opt = {
+-		.index   = d->tcf_index,
+-		.refcnt  = refcount_read(&d->tcf_refcnt) - ref,
+-		.bindcnt = atomic_read(&d->tcf_bindcnt) - bind,
+-	};
++	struct tc_skbmod opt;
+ 	struct tcf_t t;
+ 
++	memset(&opt, 0, sizeof(opt));
++	opt.index   = d->tcf_index;
++	opt.refcnt  = refcount_read(&d->tcf_refcnt) - ref,
++	opt.bindcnt = atomic_read(&d->tcf_bindcnt) - bind;
+ 	spin_lock_bh(&d->tcf_lock);
+ 	opt.action = d->tcf_action;
+ 	p = rcu_dereference_protected(d->skbmod_p,
+diff --git a/net/smc/smc_pnet.c b/net/smc/smc_pnet.c
+index 30bae60d626c6..ed9cfa11b589f 100644
+--- a/net/smc/smc_pnet.c
++++ b/net/smc/smc_pnet.c
+@@ -797,6 +797,16 @@ static void smc_pnet_create_pnetids_list(struct net *net)
+ 	u8 ndev_pnetid[SMC_MAX_PNETID_LEN];
+ 	struct net_device *dev;
+ 
++	/* Newly created netns do not have devices.
++	 * Do not even acquire rtnl.
++	 */
++	if (list_empty(&net->dev_base_head))
++		return;
++
++	/* Note: This might not be needed, because smc_pnet_netdev_event()
++	 * is also calling smc_pnet_add_base_pnetid() when handling
++	 * NETDEV_UP event.
++	 */
+ 	rtnl_lock();
+ 	for_each_netdev(net, dev)
+ 		smc_pnet_add_base_pnetid(net, dev, ndev_pnetid);
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 8fce2e93bb3b3..070946d093817 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -1753,6 +1753,9 @@ static int copy_to_user_tmpl(struct xfrm_policy *xp, struct sk_buff *skb)
+ 	if (xp->xfrm_nr == 0)
+ 		return 0;
+ 
++	if (xp->xfrm_nr > XFRM_MAX_DEPTH)
++		return -ENOBUFS;
++
+ 	for (i = 0; i < xp->xfrm_nr; i++) {
+ 		struct xfrm_user_tmpl *up = &vec[i];
+ 		struct xfrm_tmpl *kp = &xp->xfrm_vec[i];
+diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
+index fe327a4532ddb..eb01e8d07e280 100644
+--- a/scripts/Makefile.extrawarn
++++ b/scripts/Makefile.extrawarn
+@@ -53,6 +53,8 @@ KBUILD_CFLAGS += $(call cc-disable-warning, pointer-to-enum-cast)
+ KBUILD_CFLAGS += -Wno-tautological-constant-out-of-range-compare
+ KBUILD_CFLAGS += $(call cc-disable-warning, unaligned-access)
+ KBUILD_CFLAGS += $(call cc-disable-warning, cast-function-type-strict)
++KBUILD_CFLAGS += -Wno-enum-compare-conditional
++KBUILD_CFLAGS += -Wno-enum-enum-conversion
+ endif
+ 
+ endif
+diff --git a/scripts/dummy-tools/gcc b/scripts/dummy-tools/gcc
+index 485427f40dba8..91df612bf9374 100755
+--- a/scripts/dummy-tools/gcc
++++ b/scripts/dummy-tools/gcc
+@@ -76,7 +76,11 @@ fi
+ if arg_contain -S "$@"; then
+ 	# For scripts/gcc-x86-*-has-stack-protector.sh
+ 	if arg_contain -fstack-protector "$@"; then
+-		echo "%gs"
++		if arg_contain -mstack-protector-guard-reg=fs "$@"; then
++			echo "%fs"
++		else
++			echo "%gs"
++		fi
+ 		exit 0
+ 	fi
+ fi
+diff --git a/scripts/gcc-x86_32-has-stack-protector.sh b/scripts/gcc-x86_32-has-stack-protector.sh
+index f5c1194952540..825c75c5b7150 100755
+--- a/scripts/gcc-x86_32-has-stack-protector.sh
++++ b/scripts/gcc-x86_32-has-stack-protector.sh
+@@ -1,4 +1,8 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+ 
+-echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -m32 -O0 -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
++# This requires GCC 8.1 or better.  Specifically, we require
++# -mstack-protector-guard-reg, added by
++# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81708
++
++echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -m32 -O0 -fstack-protector -mstack-protector-guard-reg=fs -mstack-protector-guard-symbol=__stack_chk_guard - -o - 2> /dev/null | grep -q "%fs"
+diff --git a/scripts/kernel-doc b/scripts/kernel-doc
+index 7a04d4c053260..8e3257f1ea2c1 100755
+--- a/scripts/kernel-doc
++++ b/scripts/kernel-doc
+@@ -1233,7 +1233,7 @@ sub dump_struct($$) {
+ 	# replace DECLARE_KFIFO_PTR
+ 	$members =~ s/DECLARE_KFIFO_PTR\s*\(([^,)]+),\s*([^,)]+)\)/$2 \*$1/gos;
+ 	# replace DECLARE_FLEX_ARRAY
+-	$members =~ s/(?:__)?DECLARE_FLEX_ARRAY\s*\($args,\s*$args\)/$1 $2\[\]/gos;
++	$members =~ s/(?:__)?DECLARE_FLEX_ARRAY\s*\(([^,)]+),\s*([^,)]+)\)/$1 $2\[\]/gos;
+ 	my $declaration = $members;
+ 
+ 	# Split nested struct/union elements as newer ones
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 5388f143eecd8..750f6007bbbb0 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -1280,7 +1280,8 @@ static int smack_inode_setxattr(struct dentry *dentry, const char *name,
+ 		check_star = 1;
+ 	} else if (strcmp(name, XATTR_NAME_SMACKTRANSMUTE) == 0) {
+ 		check_priv = 1;
+-		if (size != TRANS_TRUE_SIZE ||
++		if (!S_ISDIR(d_backing_inode(dentry)->i_mode) ||
++		    size != TRANS_TRUE_SIZE ||
+ 		    strncmp(value, TRANS_TRUE, TRANS_TRUE_SIZE) != 0)
+ 			rc = -EINVAL;
+ 	} else
+@@ -2721,6 +2722,15 @@ static int smack_inode_setsecurity(struct inode *inode, const char *name,
+ 	if (value == NULL || size > SMK_LONGLABEL || size == 0)
+ 		return -EINVAL;
+ 
++	if (strcmp(name, XATTR_SMACK_TRANSMUTE) == 0) {
++		if (!S_ISDIR(inode->i_mode) || size != TRANS_TRUE_SIZE ||
++		    strncmp(value, TRANS_TRUE, TRANS_TRUE_SIZE) != 0)
++			return -EINVAL;
++
++		nsp->smk_flags |= SMK_INODE_TRANSMUTE;
++		return 0;
++	}
++
+ 	skp = smk_import_entry(value, size);
+ 	if (IS_ERR(skp))
+ 		return PTR_ERR(skp);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 038837481c27c..84df5582cde22 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9192,7 +9192,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x10ec, 0x124c, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+-	SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE),
++	SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_AMP),
+@@ -10692,8 +10692,7 @@ static void alc897_hp_automute_hook(struct hda_codec *codec,
+ 
+ 	snd_hda_gen_hp_automute(codec, jack);
+ 	vref = spec->gen.hp_jack_present ? (PIN_HP | AC_PINCTL_VREF_100) : PIN_HP;
+-	snd_hda_codec_write(codec, 0x1b, 0, AC_VERB_SET_PIN_WIDGET_CONTROL,
+-			    vref);
++	snd_hda_set_pin_ctl(codec, 0x1b, vref);
+ }
+ 
+ static void alc897_fixup_lenovo_headset_mic(struct hda_codec *codec,
+@@ -10702,6 +10701,10 @@ static void alc897_fixup_lenovo_headset_mic(struct hda_codec *codec,
+ 	struct alc_spec *spec = codec->spec;
+ 	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ 		spec->gen.hp_automute_hook = alc897_hp_automute_hook;
++		spec->no_shutup_pins = 1;
++	}
++	if (action == HDA_FIXUP_ACT_PROBE) {
++		snd_hda_set_pin_ctl_cache(codec, 0x1a, PIN_IN | AC_PINCTL_VREF_100);
+ 	}
+ }
+ 
+diff --git a/sound/sh/aica.c b/sound/sh/aica.c
+index 8fa68432d3c1e..2d59e3907fcfb 100644
+--- a/sound/sh/aica.c
++++ b/sound/sh/aica.c
+@@ -279,7 +279,8 @@ static void run_spu_dma(struct work_struct *work)
+ 		dreamcastcard->clicks++;
+ 		if (unlikely(dreamcastcard->clicks >= AICA_PERIOD_NUMBER))
+ 			dreamcastcard->clicks %= AICA_PERIOD_NUMBER;
+-		mod_timer(&dreamcastcard->timer, jiffies + 1);
++		if (snd_pcm_running(dreamcastcard->substream))
++			mod_timer(&dreamcastcard->timer, jiffies + 1);
+ 	}
+ }
+ 
+@@ -291,6 +292,8 @@ static void aica_period_elapsed(struct timer_list *t)
+ 	/*timer function - so cannot sleep */
+ 	int play_period;
+ 	struct snd_pcm_runtime *runtime;
++	if (!snd_pcm_running(substream))
++		return;
+ 	runtime = substream->runtime;
+ 	dreamcastcard = substream->pcm->private_data;
+ 	/* Have we played out an additional period? */
+@@ -351,12 +354,19 @@ static int snd_aicapcm_pcm_open(struct snd_pcm_substream
+ 	return 0;
+ }
+ 
++static int snd_aicapcm_pcm_sync_stop(struct snd_pcm_substream *substream)
++{
++	struct snd_card_aica *dreamcastcard = substream->pcm->private_data;
++
++	del_timer_sync(&dreamcastcard->timer);
++	cancel_work_sync(&dreamcastcard->spu_dma_work);
++	return 0;
++}
++
+ static int snd_aicapcm_pcm_close(struct snd_pcm_substream
+ 				 *substream)
+ {
+ 	struct snd_card_aica *dreamcastcard = substream->pcm->private_data;
+-	flush_work(&(dreamcastcard->spu_dma_work));
+-	del_timer(&dreamcastcard->timer);
+ 	dreamcastcard->substream = NULL;
+ 	kfree(dreamcastcard->channel);
+ 	spu_disable();
+@@ -402,6 +412,7 @@ static const struct snd_pcm_ops snd_aicapcm_playback_ops = {
+ 	.prepare = snd_aicapcm_pcm_prepare,
+ 	.trigger = snd_aicapcm_pcm_trigger,
+ 	.pointer = snd_aicapcm_pcm_pointer,
++	.sync_stop = snd_aicapcm_pcm_sync_stop,
+ };
+ 
+ /* TO DO: set up to handle more than one pcm instance */
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index daecd386d5ec8..a83cd8d8a9633 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -246,7 +246,7 @@ int snd_soc_get_volsw(struct snd_kcontrol *kcontrol,
+ 	int max = mc->max;
+ 	int min = mc->min;
+ 	int sign_bit = mc->sign_bit;
+-	unsigned int mask = (1 << fls(max)) - 1;
++	unsigned int mask = (1ULL << fls(max)) - 1;
+ 	unsigned int invert = mc->invert;
+ 	int val;
+ 	int ret;
+diff --git a/tools/iio/iio_utils.c b/tools/iio/iio_utils.c
+index 48360994c2a13..b8745873928c5 100644
+--- a/tools/iio/iio_utils.c
++++ b/tools/iio/iio_utils.c
+@@ -373,7 +373,7 @@ int build_channel_array(const char *device_dir,
+ 		goto error_close_dir;
+ 	}
+ 
+-	seekdir(dp, 0);
++	rewinddir(dp);
+ 	while (ent = readdir(dp), ent) {
+ 		if (strcmp(ent->d_name + strlen(ent->d_name) - strlen("_en"),
+ 			   "_en") == 0) {
+diff --git a/tools/include/linux/objtool.h b/tools/include/linux/objtool.h
+index d17fa7f4001d7..0f13875acc5ed 100644
+--- a/tools/include/linux/objtool.h
++++ b/tools/include/linux/objtool.h
+@@ -141,6 +141,12 @@ struct unwind_hint {
+ 	.popsection
+ .endm
+ 
++.macro STACK_FRAME_NON_STANDARD func:req
++	.pushsection .discard.func_stack_frame_non_standard, "aw"
++		.long \func - .
++	.popsection
++.endm
++
+ #endif /* __ASSEMBLY__ */
+ 
+ #else /* !CONFIG_STACK_VALIDATION */
+@@ -158,6 +164,8 @@ struct unwind_hint {
+ .endm
+ .macro ANNOTATE_NOENDBR
+ .endm
++.macro STACK_FRAME_NON_STANDARD func:req
++.endm
+ #endif
+ 
+ #endif /* CONFIG_STACK_VALIDATION */
+diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
+index f76b1a9d5a6e1..53cff32b2cb80 100644
+--- a/tools/lib/perf/evlist.c
++++ b/tools/lib/perf/evlist.c
+@@ -226,10 +226,10 @@ u64 perf_evlist__read_format(struct perf_evlist *evlist)
+ 
+ static void perf_evlist__id_hash(struct perf_evlist *evlist,
+ 				 struct perf_evsel *evsel,
+-				 int cpu, int thread, u64 id)
++				 int cpu_map_idx, int thread, u64 id)
+ {
+ 	int hash;
+-	struct perf_sample_id *sid = SID(evsel, cpu, thread);
++	struct perf_sample_id *sid = SID(evsel, cpu_map_idx, thread);
+ 
+ 	sid->id = id;
+ 	sid->evsel = evsel;
+@@ -239,21 +239,27 @@ static void perf_evlist__id_hash(struct perf_evlist *evlist,
+ 
+ void perf_evlist__id_add(struct perf_evlist *evlist,
+ 			 struct perf_evsel *evsel,
+-			 int cpu, int thread, u64 id)
++			 int cpu_map_idx, int thread, u64 id)
+ {
+-	perf_evlist__id_hash(evlist, evsel, cpu, thread, id);
++	if (!SID(evsel, cpu_map_idx, thread))
++		return;
++
++	perf_evlist__id_hash(evlist, evsel, cpu_map_idx, thread, id);
+ 	evsel->id[evsel->ids++] = id;
+ }
+ 
+ int perf_evlist__id_add_fd(struct perf_evlist *evlist,
+ 			   struct perf_evsel *evsel,
+-			   int cpu, int thread, int fd)
++			   int cpu_map_idx, int thread, int fd)
+ {
+ 	u64 read_data[4] = { 0, };
+ 	int id_idx = 1; /* The first entry is the counter value */
+ 	u64 id;
+ 	int ret;
+ 
++	if (!SID(evsel, cpu_map_idx, thread))
++		return -1;
++
+ 	ret = ioctl(fd, PERF_EVENT_IOC_ID, &id);
+ 	if (!ret)
+ 		goto add;
+@@ -282,7 +288,7 @@ int perf_evlist__id_add_fd(struct perf_evlist *evlist,
+ 	id = read_data[id_idx];
+ 
+ add:
+-	perf_evlist__id_add(evlist, evsel, cpu, thread, id);
++	perf_evlist__id_add(evlist, evsel, cpu_map_idx, thread, id);
+ 	return 0;
+ }
+ 
+diff --git a/tools/lib/perf/include/internal/evlist.h b/tools/lib/perf/include/internal/evlist.h
+index 2d0fa02b036f6..8999f2cc8ee44 100644
+--- a/tools/lib/perf/include/internal/evlist.h
++++ b/tools/lib/perf/include/internal/evlist.h
+@@ -118,10 +118,10 @@ u64 perf_evlist__read_format(struct perf_evlist *evlist);
+ 
+ void perf_evlist__id_add(struct perf_evlist *evlist,
+ 			 struct perf_evsel *evsel,
+-			 int cpu, int thread, u64 id);
++			 int cpu_map_idx, int thread, u64 id);
+ 
+ int perf_evlist__id_add_fd(struct perf_evlist *evlist,
+ 			   struct perf_evsel *evsel,
+-			   int cpu, int thread, int fd);
++			   int cpu_map_idx, int thread, int fd);
+ 
+ #endif /* __LIBPERF_INTERNAL_EVLIST_H */
+diff --git a/tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c b/tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c
+index ff6c6661f075f..1c80aa498d543 100644
+--- a/tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c
++++ b/tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c
+@@ -1152,6 +1152,7 @@ unsigned int get_pkg_num(int cpu)
+ 	retval = fscanf(fp, "%d\n", &pkg);
+ 	if (retval != 1)
+ 		errx(1, "%s: failed to parse", pathname);
++	fclose(fp);
+ 	return pkg;
+ }
+ 
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index ea26f2b0c1bc2..f72da30795dd6 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -773,6 +773,7 @@ sub set_value {
+     if ($lvalue =~ /^(TEST|BISECT|CONFIG_BISECT)_TYPE(\[.*\])?$/ &&
+ 	$prvalue !~ /^(config_|)bisect$/ &&
+ 	$prvalue !~ /^build$/ &&
++	$prvalue !~ /^make_warnings_file$/ &&
+ 	$buildonly) {
+ 
+ 	# Note if a test is something other than build, then we
+diff --git a/tools/testing/selftests/mqueue/setting b/tools/testing/selftests/mqueue/setting
+new file mode 100644
+index 0000000000000..a953c96aa16e1
+--- /dev/null
++++ b/tools/testing/selftests/mqueue/setting
+@@ -0,0 +1 @@
++timeout=180
+diff --git a/tools/testing/selftests/net/reuseaddr_conflict.c b/tools/testing/selftests/net/reuseaddr_conflict.c
+index 7c5b12664b03b..bfb07dc495186 100644
+--- a/tools/testing/selftests/net/reuseaddr_conflict.c
++++ b/tools/testing/selftests/net/reuseaddr_conflict.c
+@@ -109,6 +109,6 @@ int main(void)
+ 	fd1 = open_port(0, 1);
+ 	if (fd1 >= 0)
+ 		error(1, 0, "Was allowed to create an ipv4 reuseport on an already bound non-reuseport socket with no ipv6");
+-	fprintf(stderr, "Success");
++	fprintf(stderr, "Success\n");
+ 	return 0;
+ }
+diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
+index dd777688d14a9..952afb1bc83b4 100644
+--- a/virt/kvm/async_pf.c
++++ b/virt/kvm/async_pf.c
+@@ -88,7 +88,27 @@ static void async_pf_execute(struct work_struct *work)
+ 	rcuwait_wake_up(&vcpu->wait);
+ 
+ 	mmput(mm);
+-	kvm_put_kvm(vcpu->kvm);
++}
++
++static void kvm_flush_and_free_async_pf_work(struct kvm_async_pf *work)
++{
++	/*
++	 * The async #PF is "done", but KVM must wait for the work item itself,
++	 * i.e. async_pf_execute(), to run to completion.  If KVM is a module,
++	 * KVM must ensure *no* code owned by the KVM (the module) can be run
++	 * after the last call to module_put().  Note, flushing the work item
++	 * is always required when the item is taken off the completion queue.
++	 * E.g. even if the vCPU handles the item in the "normal" path, the VM
++	 * could be terminated before async_pf_execute() completes.
++	 *
++	 * Wake all events skip the queue and go straight done, i.e. don't
++	 * need to be flushed (but sanity check that the work wasn't queued).
++	 */
++	if (work->wakeup_all)
++		WARN_ON_ONCE(work->work.func);
++	else
++		flush_work(&work->work);
++	kmem_cache_free(async_pf_cache, work);
+ }
+ 
+ void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu)
+@@ -115,7 +135,6 @@ void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu)
+ #else
+ 		if (cancel_work_sync(&work->work)) {
+ 			mmput(work->mm);
+-			kvm_put_kvm(vcpu->kvm); /* == work->vcpu->kvm */
+ 			kmem_cache_free(async_pf_cache, work);
+ 		}
+ #endif
+@@ -127,7 +146,10 @@ void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu)
+ 			list_first_entry(&vcpu->async_pf.done,
+ 					 typeof(*work), link);
+ 		list_del(&work->link);
+-		kmem_cache_free(async_pf_cache, work);
++
++		spin_unlock(&vcpu->async_pf.lock);
++		kvm_flush_and_free_async_pf_work(work);
++		spin_lock(&vcpu->async_pf.lock);
+ 	}
+ 	spin_unlock(&vcpu->async_pf.lock);
+ 
+@@ -152,7 +174,7 @@ void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
+ 
+ 		list_del(&work->queue);
+ 		vcpu->async_pf.queued--;
+-		kmem_cache_free(async_pf_cache, work);
++		kvm_flush_and_free_async_pf_work(work);
+ 	}
+ }
+ 
+@@ -187,7 +209,6 @@ bool kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+ 	work->arch = *arch;
+ 	work->mm = current->mm;
+ 	mmget(work->mm);
+-	kvm_get_kvm(work->vcpu->kvm);
+ 
+ 	INIT_WORK(&work->work, async_pf_execute);
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-04-27 22:57 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-04-27 22:57 UTC (permalink / raw
  To: gentoo-commits

commit:     1c3510b7a3ca005e2008e6ff0a3aa63a906b9a50
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 27 22:56:53 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Apr 27 22:56:53 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1c3510b7

Add UBSAN_BOUNDS and UBSAN_SHIFT and dependencies

Bug: Add UBSAN_BOUNDS and UBSAN_SHIFT and dependencies

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 435a76ea..497932fe 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2022-01-30 08:12:05.041788304 -0500
-+++ b/distro/Kconfig	2022-01-30 15:28:10.030352980 -0500
-@@ -0,0 +1,285 @@
+--- /dev/null	2024-04-27 13:10:54.188000027 -0400
++++ b/distro/Kconfig	2024-04-27 18:54:09.734564235 -0400
+@@ -0,0 +1,289 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -148,6 +148,10 @@
 +	select TIMERFD
 +	select TMPFS_POSIX_ACL
 +	select TMPFS_XATTR
++	select UBSAN
++	select CC_HAS_UBSAN_BOUNDS_STRICT if !CC_HAS_UBSAN_ARRAY_BOUNDS
++	select UBSAN_BOUNDS
++	select UBSAN_SHIFT
 +
 +	select ANON_INODES
 +	select BLOCK


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-05-02 15:03 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-05-02 15:03 UTC (permalink / raw
  To: gentoo-commits

commit:     f92aa194c0694293fc2e9b74bfe7cd353735ce6b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May  2 15:03:22 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May  2 15:03:22 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f92aa194

Linux patch 5.10.216

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1215_linux-5.10.216.patch | 4009 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4013 insertions(+)

diff --git a/0000_README b/0000_README
index fe9c9853..0f28b50e 100644
--- a/0000_README
+++ b/0000_README
@@ -903,6 +903,10 @@ Patch:  1214_linux-5.10.215.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.215
 
+Patch:  1215_linux-5.10.216.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.216
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1215_linux-5.10.216.patch b/1215_linux-5.10.216.patch
new file mode 100644
index 00000000..6dc594cd
--- /dev/null
+++ b/1215_linux-5.10.216.patch
@@ -0,0 +1,4009 @@
+diff --git a/Documentation/ABI/testing/sysfs-class-devfreq b/Documentation/ABI/testing/sysfs-class-devfreq
+index b8ebff4b1c4ca..4514cf9fc7a15 100644
+--- a/Documentation/ABI/testing/sysfs-class-devfreq
++++ b/Documentation/ABI/testing/sysfs-class-devfreq
+@@ -66,6 +66,9 @@ Description:
+ 
+ 			echo 0 > /sys/class/devfreq/.../trans_stat
+ 
++		If the transition table is bigger than PAGE_SIZE, reading
++		this will return an -EFBIG error.
++
+ What:		/sys/class/devfreq/.../userspace/set_freq
+ Date:		September 2011
+ Contact:	MyungJoo Ham <myungjoo.ham@samsung.com>
+diff --git a/Makefile b/Makefile
+index 2af799d3ce78b..6fe6554ecfb8c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 215
++SUBLEVEL = 216
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 240277d5626c8..72e4cef062aca 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -9,6 +9,14 @@
+ #
+ source "arch/$(SRCARCH)/Kconfig"
+ 
++config ARCH_CONFIGURES_CPU_MITIGATIONS
++	bool
++
++if !ARCH_CONFIGURES_CPU_MITIGATIONS
++config CPU_MITIGATIONS
++	def_bool y
++endif
++
+ menu "General architecture-dependent options"
+ 
+ config CRASH_CORE
+diff --git a/arch/arc/boot/dts/hsdk.dts b/arch/arc/boot/dts/hsdk.dts
+index dcaa44e408ace..27f4194b376bb 100644
+--- a/arch/arc/boot/dts/hsdk.dts
++++ b/arch/arc/boot/dts/hsdk.dts
+@@ -205,7 +205,6 @@ dmac_cfg_clk: dmac-gpu-cfg-clk {
+ 		};
+ 
+ 		gmac: ethernet@8000 {
+-			#interrupt-cells = <1>;
+ 			compatible = "snps,dwmac";
+ 			reg = <0x8000 0x2000>;
+ 			interrupts = <10>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt2712-evb.dts b/arch/arm64/boot/dts/mediatek/mt2712-evb.dts
+index 9d20cabf4f699..99515c13da3cf 100644
+--- a/arch/arm64/boot/dts/mediatek/mt2712-evb.dts
++++ b/arch/arm64/boot/dts/mediatek/mt2712-evb.dts
+@@ -127,7 +127,7 @@ ethernet_phy0: ethernet-phy@5 {
+ };
+ 
+ &pio {
+-	eth_default: eth_default {
++	eth_default: eth-default-pins {
+ 		tx_pins {
+ 			pinmux = <MT2712_PIN_71_GBE_TXD3__FUNC_GBE_TXD3>,
+ 				 <MT2712_PIN_72_GBE_TXD2__FUNC_GBE_TXD2>,
+@@ -154,7 +154,7 @@ mdio_pins {
+ 		};
+ 	};
+ 
+-	eth_sleep: eth_sleep {
++	eth_sleep: eth-sleep-pins {
+ 		tx_pins {
+ 			pinmux = <MT2712_PIN_71_GBE_TXD3__FUNC_GPIO71>,
+ 				 <MT2712_PIN_72_GBE_TXD2__FUNC_GPIO72>,
+@@ -180,14 +180,14 @@ mdio_pins {
+ 		};
+ 	};
+ 
+-	usb0_id_pins_float: usb0_iddig {
++	usb0_id_pins_float: usb0-iddig-pins {
+ 		pins_iddig {
+ 			pinmux = <MT2712_PIN_12_IDDIG_P0__FUNC_IDDIG_A>;
+ 			bias-pull-up;
+ 		};
+ 	};
+ 
+-	usb1_id_pins_float: usb1_iddig {
++	usb1_id_pins_float: usb1-iddig-pins {
+ 		pins_iddig {
+ 			pinmux = <MT2712_PIN_14_IDDIG_P1__FUNC_IDDIG_B>;
+ 			bias-pull-up;
+diff --git a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
+index cc3d1c99517d1..f7ce2eba10f7a 100644
+--- a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
+@@ -249,10 +249,11 @@ topckgen: syscon@10000000 {
+ 		#clock-cells = <1>;
+ 	};
+ 
+-	infracfg: syscon@10001000 {
++	infracfg: clock-controller@10001000 {
+ 		compatible = "mediatek,mt2712-infracfg", "syscon";
+ 		reg = <0 0x10001000 0 0x1000>;
+ 		#clock-cells = <1>;
++		#reset-cells = <1>;
+ 	};
+ 
+ 	pericfg: syscon@10003000 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+index 884930a5849a2..4454115ad8a0d 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+@@ -244,7 +244,7 @@ scpsys: power-controller@10006000 {
+ 		clock-names = "hif_sel";
+ 	};
+ 
+-	cir: cir@10009000 {
++	cir: ir-receiver@10009000 {
+ 		compatible = "mediatek,mt7622-cir";
+ 		reg = <0 0x10009000 0 0x1000>;
+ 		interrupts = <GIC_SPI 175 IRQ_TYPE_LEVEL_LOW>;
+@@ -275,16 +275,14 @@ thermal_calibration: calib@198 {
+ 		};
+ 	};
+ 
+-	apmixedsys: apmixedsys@10209000 {
+-		compatible = "mediatek,mt7622-apmixedsys",
+-			     "syscon";
++	apmixedsys: clock-controller@10209000 {
++		compatible = "mediatek,mt7622-apmixedsys";
+ 		reg = <0 0x10209000 0 0x1000>;
+ 		#clock-cells = <1>;
+ 	};
+ 
+-	topckgen: topckgen@10210000 {
+-		compatible = "mediatek,mt7622-topckgen",
+-			     "syscon";
++	topckgen: clock-controller@10210000 {
++		compatible = "mediatek,mt7622-topckgen";
+ 		reg = <0 0x10210000 0 0x1000>;
+ 		#clock-cells = <1>;
+ 	};
+@@ -357,7 +355,7 @@ cci_control1: slave-if@4000 {
+ 		};
+ 
+ 		cci_control2: slave-if@5000 {
+-			compatible = "arm,cci-400-ctrl-if";
++			compatible = "arm,cci-400-ctrl-if", "syscon";
+ 			interface-type = "ace";
+ 			reg = <0x5000 0x1000>;
+ 		};
+@@ -507,7 +505,6 @@ thermal: thermal@1100b000 {
+ 			 <&pericfg CLK_PERI_AUXADC_PD>;
+ 		clock-names = "therm", "auxadc";
+ 		resets = <&pericfg MT7622_PERI_THERM_SW_RST>;
+-		reset-names = "therm";
+ 		mediatek,auxadc = <&auxadc>;
+ 		mediatek,apmixedsys = <&apmixedsys>;
+ 		nvmem-cells = <&thermal_calibration>;
+@@ -715,9 +712,8 @@ wmac: wmac@18000000 {
+ 		power-domains = <&scpsys MT7622_POWER_DOMAIN_WB>;
+ 	};
+ 
+-	ssusbsys: ssusbsys@1a000000 {
+-		compatible = "mediatek,mt7622-ssusbsys",
+-			     "syscon";
++	ssusbsys: clock-controller@1a000000 {
++		compatible = "mediatek,mt7622-ssusbsys";
+ 		reg = <0 0x1a000000 0 0x1000>;
+ 		#clock-cells = <1>;
+ 		#reset-cells = <1>;
+@@ -774,9 +770,8 @@ u2port1: usb-phy@1a0c5000 {
+ 		};
+ 	};
+ 
+-	pciesys: pciesys@1a100800 {
+-		compatible = "mediatek,mt7622-pciesys",
+-			     "syscon";
++	pciesys: clock-controller@1a100800 {
++		compatible = "mediatek,mt7622-pciesys";
+ 		reg = <0 0x1a100800 0 0x1000>;
+ 		#clock-cells = <1>;
+ 		#reset-cells = <1>;
+@@ -893,7 +888,13 @@ sata_port: sata-phy@1a243000 {
+ 		};
+ 	};
+ 
+-	ethsys: syscon@1b000000 {
++	hifsys: clock-controller@1af00000 {
++		compatible = "mediatek,mt7622-hifsys";
++		reg = <0 0x1af00000 0 0x70>;
++		#clock-cells = <1>;
++	};
++
++	ethsys: clock-controller@1b000000 {
+ 		compatible = "mediatek,mt7622-ethsys",
+ 			     "syscon";
+ 		reg = <0 0x1b000000 0 0x1000>;
+@@ -911,10 +912,28 @@ hsdma: dma-controller@1b007000 {
+ 		#dma-cells = <1>;
+ 	};
+ 
+-	eth: ethernet@1b100000 {
+-		compatible = "mediatek,mt7622-eth",
+-			     "mediatek,mt2701-eth",
++	pcie_mirror: pcie-mirror@10000400 {
++		compatible = "mediatek,mt7622-pcie-mirror",
+ 			     "syscon";
++		reg = <0 0x10000400 0 0x10>;
++	};
++
++	wed0: wed@1020a000 {
++		compatible = "mediatek,mt7622-wed",
++			     "syscon";
++		reg = <0 0x1020a000 0 0x1000>;
++		interrupts = <GIC_SPI 214 IRQ_TYPE_LEVEL_LOW>;
++	};
++
++	wed1: wed@1020b000 {
++		compatible = "mediatek,mt7622-wed",
++			     "syscon";
++		reg = <0 0x1020b000 0 0x1000>;
++		interrupts = <GIC_SPI 215 IRQ_TYPE_LEVEL_LOW>;
++	};
++
++	eth: ethernet@1b100000 {
++		compatible = "mediatek,mt7622-eth";
+ 		reg = <0 0x1b100000 0 0x20000>;
+ 		interrupts = <GIC_SPI 223 IRQ_TYPE_LEVEL_LOW>,
+ 			     <GIC_SPI 224 IRQ_TYPE_LEVEL_LOW>,
+@@ -937,6 +956,11 @@ eth: ethernet@1b100000 {
+ 		power-domains = <&scpsys MT7622_POWER_DOMAIN_ETHSYS>;
+ 		mediatek,ethsys = <&ethsys>;
+ 		mediatek,sgmiisys = <&sgmiisys>;
++		mediatek,cci-control = <&cci_control2>;
++		mediatek,wed = <&wed0>, <&wed1>;
++		mediatek,pcie-mirror = <&pcie_mirror>;
++		mediatek,hifsys = <&hifsys>;
++		dma-coherent;
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 		status = "disabled";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+index 4297c1db5a413..913ba25ea72f6 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+@@ -784,7 +784,6 @@ &pcie_phy {
+ };
+ 
+ &pcie0 {
+-	bus-scan-delay-ms = <1000>;
+ 	ep-gpios = <&gpio2 RK_PD4 GPIO_ACTIVE_HIGH>;
+ 	num-lanes = <4>;
+ 	pinctrl-names = "default";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index 95bc7a5f61dd5..0cf656824e230 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -430,16 +430,22 @@ &io_domains {
+ 	gpio1830-supply = <&vcc_1v8>;
+ };
+ 
+-&pmu_io_domains {
+-	status = "okay";
+-	pmu1830-supply = <&vcc_1v8>;
+-};
+-
+-&pwm2 {
+-	status = "okay";
++&pcie_clkreqn_cpm {
++	rockchip,pins =
++		<2 RK_PD2 RK_FUNC_GPIO &pcfg_pull_up>;
+ };
+ 
+ &pinctrl {
++	pinctrl-names = "default";
++	pinctrl-0 = <&q7_thermal_pin>;
++
++	gpios {
++		q7_thermal_pin: q7-thermal-pin {
++			rockchip,pins =
++				<0 RK_PA3 RK_FUNC_GPIO &pcfg_pull_up>;
++		};
++	};
++
+ 	i2c8 {
+ 		i2c8_xfer_a: i2c8-xfer {
+ 			rockchip,pins =
+@@ -470,6 +476,15 @@ vcc5v0_host_en: vcc5v0-host-en {
+ 	};
+ };
+ 
++&pmu_io_domains {
++	status = "okay";
++	pmu1830-supply = <&vcc_1v8>;
++};
++
++&pwm2 {
++	status = "okay";
++};
++
+ &sdhci {
+ 	/*
+ 	 * Signal integrity isn't great at 200MHz but 100MHz has proven stable
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index b28fabfc91bf7..70271db833831 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -445,6 +445,14 @@ config EFI
+ 	  allow the kernel to be booted as an EFI application. This
+ 	  is only useful on systems that have UEFI firmware.
+ 
++config CC_HAVE_STACKPROTECTOR_TLS
++	def_bool $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=tp -mstack-protector-guard-offset=0)
++
++config STACKPROTECTOR_PER_TASK
++	def_bool y
++	depends on !GCC_PLUGIN_RANDSTRUCT
++	depends on STACKPROTECTOR && CC_HAVE_STACKPROTECTOR_TLS
++
+ endmenu
+ 
+ config BUILTIN_DTB
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index daa679440000a..8572d23fba700 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -88,6 +88,16 @@ KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)
+ # architectures.  It's faster to have GCC emit only aligned accesses.
+ KBUILD_CFLAGS += $(call cc-option,-mstrict-align)
+ 
++ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK),y)
++prepare: stack_protector_prepare
++stack_protector_prepare: prepare0
++	$(eval KBUILD_CFLAGS += -mstack-protector-guard=tls		  \
++				-mstack-protector-guard-reg=tp		  \
++				-mstack-protector-guard-offset=$(shell	  \
++			awk '{if ($$2 == "TSK_STACK_CANARY") print $$3;}' \
++					include/generated/asm-offsets.h))
++endif
++
+ # arch specific predefines for sparse
+ CHECKFLAGS += -D__riscv -D__riscv_xlen=$(BITS)
+ 
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index 5ab13570daa53..982745572945e 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -456,8 +456,8 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
+ #define PAGE_SHARED		__pgprot(0)
+ #define PAGE_KERNEL		__pgprot(0)
+ #define swapper_pg_dir		NULL
+-#define TASK_SIZE		0xffffffffUL
+-#define VMALLOC_START		0
++#define TASK_SIZE		_AC(-1, UL)
++#define VMALLOC_START		_AC(0, UL)
+ #define VMALLOC_END		TASK_SIZE
+ 
+ static inline void __kernel_map_pages(struct page *page, int numpages, int enable) {}
+diff --git a/arch/riscv/include/asm/stackprotector.h b/arch/riscv/include/asm/stackprotector.h
+index 5962f8891f06f..09093af46565e 100644
+--- a/arch/riscv/include/asm/stackprotector.h
++++ b/arch/riscv/include/asm/stackprotector.h
+@@ -24,6 +24,7 @@ static __always_inline void boot_init_stack_canary(void)
+ 	canary &= CANARY_MASK;
+ 
+ 	current->stack_canary = canary;
+-	__stack_chk_guard = current->stack_canary;
++	if (!IS_ENABLED(CONFIG_STACKPROTECTOR_PER_TASK))
++		__stack_chk_guard = current->stack_canary;
+ }
+ #endif /* _ASM_RISCV_STACKPROTECTOR_H */
+diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
+index db203442c08f9..877ff65b4e136 100644
+--- a/arch/riscv/kernel/asm-offsets.c
++++ b/arch/riscv/kernel/asm-offsets.c
+@@ -66,6 +66,9 @@ void asm_offsets(void)
+ 	OFFSET(TASK_THREAD_F30, task_struct, thread.fstate.f[30]);
+ 	OFFSET(TASK_THREAD_F31, task_struct, thread.fstate.f[31]);
+ 	OFFSET(TASK_THREAD_FCSR, task_struct, thread.fstate.fcsr);
++#ifdef CONFIG_STACKPROTECTOR
++	OFFSET(TSK_STACK_CANARY, task_struct, stack_canary);
++#endif
+ 
+ 	DEFINE(PT_SIZE, sizeof(struct pt_regs));
+ 	OFFSET(PT_EPC, pt_regs, epc);
+diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
+index 7868050ff426d..9dac6bec316e4 100644
+--- a/arch/riscv/kernel/process.c
++++ b/arch/riscv/kernel/process.c
+@@ -22,9 +22,7 @@
+ #include <asm/switch_to.h>
+ #include <asm/thread_info.h>
+ 
+-register unsigned long gp_in_global __asm__("gp");
+-
+-#ifdef CONFIG_STACKPROTECTOR
++#if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK)
+ #include <linux/stackprotector.h>
+ unsigned long __stack_chk_guard __read_mostly;
+ EXPORT_SYMBOL(__stack_chk_guard);
+@@ -117,7 +115,6 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
+ 	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ 		/* Kernel thread */
+ 		memset(childregs, 0, sizeof(struct pt_regs));
+-		childregs->gp = gp_in_global;
+ 		/* Supervisor/Machine, irqs on: */
+ 		childregs->status = SR_PP | SR_PIE;
+ 
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 9b3fa05e46226..0c802ade80406 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -57,6 +57,7 @@ config X86
+ 	select ACPI_LEGACY_TABLES_LOOKUP	if ACPI
+ 	select ACPI_SYSTEM_POWER_STATES_SUPPORT	if ACPI
+ 	select ARCH_32BIT_OFF_T			if X86_32
++	select ARCH_CONFIGURES_CPU_MITIGATIONS
+ 	select ARCH_CLOCKSOURCE_INIT
+ 	select ARCH_HAS_ACPI_TABLE_UPGRADE	if ACPI
+ 	select ARCH_HAS_CPU_FINALIZE_INIT
+@@ -2408,17 +2409,17 @@ config CC_HAS_SLS
+ config CC_HAS_RETURN_THUNK
+ 	def_bool $(cc-option,-mfunction-return=thunk-extern)
+ 
+-menuconfig SPECULATION_MITIGATIONS
+-	bool "Mitigations for speculative execution vulnerabilities"
++menuconfig CPU_MITIGATIONS
++	bool "Mitigations for CPU vulnerabilities"
+ 	default y
+ 	help
+-	  Say Y here to enable options which enable mitigations for
+-	  speculative execution hardware vulnerabilities.
++	  Say Y here to enable options which enable mitigations for hardware
++	  vulnerabilities (usually related to speculative execution).
+ 
+ 	  If you say N, all mitigations will be disabled. You really
+ 	  should know what you are doing to say so.
+ 
+-if SPECULATION_MITIGATIONS
++if CPU_MITIGATIONS
+ 
+ config PAGE_TABLE_ISOLATION
+ 	bool "Remove the kernel mapping in user mode"
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 3b4412c83eec0..01064ba1a657e 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -12,6 +12,7 @@
+ #include <asm/mpspec.h>
+ #include <asm/msr.h>
+ #include <asm/hardirq.h>
++#include <asm/io.h>
+ 
+ #define ARCH_APICTIMER_STOPS_ON_C3	1
+ 
+@@ -111,7 +112,7 @@ static inline void native_apic_mem_write(u32 reg, u32 v)
+ 
+ static inline u32 native_apic_mem_read(u32 reg)
+ {
+-	return *((volatile u32 *)(APIC_BASE + reg));
++	return readl((void __iomem *)(APIC_BASE + reg));
+ }
+ 
+ extern void native_apic_wait_icr_idle(void);
+diff --git a/arch/x86/kernel/cpu/cpuid-deps.c b/arch/x86/kernel/cpu/cpuid-deps.c
+index d502241995a39..24fca3d56c7f3 100644
+--- a/arch/x86/kernel/cpu/cpuid-deps.c
++++ b/arch/x86/kernel/cpu/cpuid-deps.c
+@@ -44,7 +44,10 @@ static const struct cpuid_dep cpuid_deps[] = {
+ 	{ X86_FEATURE_F16C,			X86_FEATURE_XMM2,     },
+ 	{ X86_FEATURE_AES,			X86_FEATURE_XMM2      },
+ 	{ X86_FEATURE_SHA_NI,			X86_FEATURE_XMM2      },
++	{ X86_FEATURE_GFNI,			X86_FEATURE_XMM2      },
+ 	{ X86_FEATURE_FMA,			X86_FEATURE_AVX       },
++	{ X86_FEATURE_VAES,			X86_FEATURE_AVX       },
++	{ X86_FEATURE_VPCLMULQDQ,		X86_FEATURE_AVX       },
+ 	{ X86_FEATURE_AVX2,			X86_FEATURE_AVX,      },
+ 	{ X86_FEATURE_AVX512F,			X86_FEATURE_AVX,      },
+ 	{ X86_FEATURE_AVX512IFMA,		X86_FEATURE_AVX512F   },
+@@ -56,9 +59,6 @@ static const struct cpuid_dep cpuid_deps[] = {
+ 	{ X86_FEATURE_AVX512VL,			X86_FEATURE_AVX512F   },
+ 	{ X86_FEATURE_AVX512VBMI,		X86_FEATURE_AVX512F   },
+ 	{ X86_FEATURE_AVX512_VBMI2,		X86_FEATURE_AVX512VL  },
+-	{ X86_FEATURE_GFNI,			X86_FEATURE_AVX512VL  },
+-	{ X86_FEATURE_VAES,			X86_FEATURE_AVX512VL  },
+-	{ X86_FEATURE_VPCLMULQDQ,		X86_FEATURE_AVX512VL  },
+ 	{ X86_FEATURE_AVX512_VNNI,		X86_FEATURE_AVX512VL  },
+ 	{ X86_FEATURE_AVX512_BITALG,		X86_FEATURE_AVX512VL  },
+ 	{ X86_FEATURE_AVX512_4VNNIW,		X86_FEATURE_AVX512F   },
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index bb03bed14f740..5d422e725b267 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -258,7 +258,6 @@ static struct crypto_larval *__crypto_register_alg(struct crypto_alg *alg)
+ 		}
+ 
+ 		if (!strcmp(q->cra_driver_name, alg->cra_name) ||
+-		    !strcmp(q->cra_driver_name, alg->cra_driver_name) ||
+ 		    !strcmp(q->cra_name, alg->cra_driver_name))
+ 			goto err;
+ 	}
+diff --git a/drivers/accessibility/speakup/main.c b/drivers/accessibility/speakup/main.c
+index 63c5444f0f1ae..7598778cf37fb 100644
+--- a/drivers/accessibility/speakup/main.c
++++ b/drivers/accessibility/speakup/main.c
+@@ -576,7 +576,7 @@ static u_long get_word(struct vc_data *vc)
+ 	}
+ 	attr_ch = get_char(vc, (u_short *)tmp_pos, &spk_attr);
+ 	buf[cnt++] = attr_ch;
+-	while (tmpx < vc->vc_cols - 1) {
++	while (tmpx < vc->vc_cols - 1 && cnt < sizeof(buf) - 1) {
+ 		tmp_pos += 2;
+ 		tmpx++;
+ 		ch = get_char(vc, (u_short *)tmp_pos, &temp);
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 708c91215ec06..bcbaa4d6a0ff5 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -2042,8 +2042,10 @@ static size_t binder_get_object(struct binder_proc *proc,
+ 	size_t object_size = 0;
+ 
+ 	read_size = min_t(size_t, sizeof(*object), buffer->data_size - offset);
+-	if (offset > buffer->data_size || read_size < sizeof(*hdr))
++	if (offset > buffer->data_size || read_size < sizeof(*hdr) ||
++	    !IS_ALIGNED(offset, sizeof(u32)))
+ 		return 0;
++
+ 	if (u) {
+ 		if (copy_from_user(object, u + offset, read_size))
+ 			return 0;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index a1a8b282b99b1..b2836e8efefd2 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -418,6 +418,8 @@ static const struct usb_device_id blacklist_table[] = {
+ 	/* Realtek 8852BE Bluetooth devices */
+ 	{ USB_DEVICE(0x0cb8, 0xc559), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0bda, 0x4853), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0bda, 0x887b), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0bda, 0xb85b), .driver_info = BTUSB_REALTEK |
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index aa2f1f8aa2994..a0927c7f83d60 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -37,7 +37,11 @@ static HLIST_HEAD(clk_root_list);
+ static HLIST_HEAD(clk_orphan_list);
+ static LIST_HEAD(clk_notifier_list);
+ 
+-static struct hlist_head *all_lists[] = {
++/* List of registered clks that use runtime PM */
++static HLIST_HEAD(clk_rpm_list);
++static DEFINE_MUTEX(clk_rpm_list_lock);
++
++static const struct hlist_head *all_lists[] = {
+ 	&clk_root_list,
+ 	&clk_orphan_list,
+ 	NULL,
+@@ -59,6 +63,7 @@ struct clk_core {
+ 	struct clk_hw		*hw;
+ 	struct module		*owner;
+ 	struct device		*dev;
++	struct hlist_node	rpm_node;
+ 	struct device_node	*of_node;
+ 	struct clk_core		*parent;
+ 	struct clk_parent_map	*parents;
+@@ -129,6 +134,89 @@ static void clk_pm_runtime_put(struct clk_core *core)
+ 	pm_runtime_put_sync(core->dev);
+ }
+ 
++/**
++ * clk_pm_runtime_get_all() - Runtime "get" all clk provider devices
++ *
++ * Call clk_pm_runtime_get() on all runtime PM enabled clks in the clk tree so
++ * that disabling unused clks avoids a deadlock where a device is runtime PM
++ * resuming/suspending and the runtime PM callback is trying to grab the
++ * prepare_lock for something like clk_prepare_enable() while
++ * clk_disable_unused_subtree() holds the prepare_lock and is trying to runtime
++ * PM resume/suspend the device as well.
++ *
++ * Context: Acquires the 'clk_rpm_list_lock' and returns with the lock held on
++ * success. Otherwise the lock is released on failure.
++ *
++ * Return: 0 on success, negative errno otherwise.
++ */
++static int clk_pm_runtime_get_all(void)
++{
++	int ret;
++	struct clk_core *core, *failed;
++
++	/*
++	 * Grab the list lock to prevent any new clks from being registered
++	 * or unregistered until clk_pm_runtime_put_all().
++	 */
++	mutex_lock(&clk_rpm_list_lock);
++
++	/*
++	 * Runtime PM "get" all the devices that are needed for the clks
++	 * currently registered. Do this without holding the prepare_lock, to
++	 * avoid the deadlock.
++	 */
++	hlist_for_each_entry(core, &clk_rpm_list, rpm_node) {
++		ret = clk_pm_runtime_get(core);
++		if (ret) {
++			failed = core;
++			pr_err("clk: Failed to runtime PM get '%s' for clk '%s'\n",
++			       dev_name(failed->dev), failed->name);
++			goto err;
++		}
++	}
++
++	return 0;
++
++err:
++	hlist_for_each_entry(core, &clk_rpm_list, rpm_node) {
++		if (core == failed)
++			break;
++
++		clk_pm_runtime_put(core);
++	}
++	mutex_unlock(&clk_rpm_list_lock);
++
++	return ret;
++}
++
++/**
++ * clk_pm_runtime_put_all() - Runtime "put" all clk provider devices
++ *
++ * Put the runtime PM references taken in clk_pm_runtime_get_all() and release
++ * the 'clk_rpm_list_lock'.
++ */
++static void clk_pm_runtime_put_all(void)
++{
++	struct clk_core *core;
++
++	hlist_for_each_entry(core, &clk_rpm_list, rpm_node)
++		clk_pm_runtime_put(core);
++	mutex_unlock(&clk_rpm_list_lock);
++}
++
++static void clk_pm_runtime_init(struct clk_core *core)
++{
++	struct device *dev = core->dev;
++
++	if (dev && pm_runtime_enabled(dev)) {
++		core->rpm_enabled = true;
++
++		mutex_lock(&clk_rpm_list_lock);
++		hlist_add_head(&core->rpm_node, &clk_rpm_list);
++		mutex_unlock(&clk_rpm_list_lock);
++	}
++}
++
+ /***           locking             ***/
+ static void clk_prepare_lock(void)
+ {
+@@ -1231,9 +1319,6 @@ static void __init clk_unprepare_unused_subtree(struct clk_core *core)
+ 	if (core->flags & CLK_IGNORE_UNUSED)
+ 		return;
+ 
+-	if (clk_pm_runtime_get(core))
+-		return;
+-
+ 	if (clk_core_is_prepared(core)) {
+ 		trace_clk_unprepare(core);
+ 		if (core->ops->unprepare_unused)
+@@ -1242,8 +1327,6 @@ static void __init clk_unprepare_unused_subtree(struct clk_core *core)
+ 			core->ops->unprepare(core->hw);
+ 		trace_clk_unprepare_complete(core);
+ 	}
+-
+-	clk_pm_runtime_put(core);
+ }
+ 
+ static void __init clk_disable_unused_subtree(struct clk_core *core)
+@@ -1259,9 +1342,6 @@ static void __init clk_disable_unused_subtree(struct clk_core *core)
+ 	if (core->flags & CLK_OPS_PARENT_ENABLE)
+ 		clk_core_prepare_enable(core->parent);
+ 
+-	if (clk_pm_runtime_get(core))
+-		goto unprepare_out;
+-
+ 	flags = clk_enable_lock();
+ 
+ 	if (core->enable_count)
+@@ -1286,8 +1366,6 @@ static void __init clk_disable_unused_subtree(struct clk_core *core)
+ 
+ unlock_out:
+ 	clk_enable_unlock(flags);
+-	clk_pm_runtime_put(core);
+-unprepare_out:
+ 	if (core->flags & CLK_OPS_PARENT_ENABLE)
+ 		clk_core_disable_unprepare(core->parent);
+ }
+@@ -1303,12 +1381,22 @@ __setup("clk_ignore_unused", clk_ignore_unused_setup);
+ static int __init clk_disable_unused(void)
+ {
+ 	struct clk_core *core;
++	int ret;
+ 
+ 	if (clk_ignore_unused) {
+ 		pr_warn("clk: Not disabling unused clocks\n");
+ 		return 0;
+ 	}
+ 
++	pr_info("clk: Disabling unused clocks\n");
++
++	ret = clk_pm_runtime_get_all();
++	if (ret)
++		return ret;
++	/*
++	 * Grab the prepare lock to keep the clk topology stable while iterating
++	 * over clks.
++	 */
+ 	clk_prepare_lock();
+ 
+ 	hlist_for_each_entry(core, &clk_root_list, child_node)
+@@ -1325,6 +1413,8 @@ static int __init clk_disable_unused(void)
+ 
+ 	clk_prepare_unlock();
+ 
++	clk_pm_runtime_put_all();
++
+ 	return 0;
+ }
+ late_initcall_sync(clk_disable_unused);
+@@ -3630,9 +3720,6 @@ static int __clk_core_init(struct clk_core *core)
+ 	}
+ 
+ 	clk_core_reparent_orphans_nolock();
+-
+-
+-	kref_init(&core->ref);
+ out:
+ 	clk_pm_runtime_put(core);
+ unlock:
+@@ -3842,6 +3929,22 @@ static void clk_core_free_parent_map(struct clk_core *core)
+ 	kfree(core->parents);
+ }
+ 
++/* Free memory allocated for a struct clk_core */
++static void __clk_release(struct kref *ref)
++{
++	struct clk_core *core = container_of(ref, struct clk_core, ref);
++
++	if (core->rpm_enabled) {
++		mutex_lock(&clk_rpm_list_lock);
++		hlist_del(&core->rpm_node);
++		mutex_unlock(&clk_rpm_list_lock);
++	}
++
++	clk_core_free_parent_map(core);
++	kfree_const(core->name);
++	kfree(core);
++}
++
+ static struct clk *
+ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ {
+@@ -3862,6 +3965,8 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ 		goto fail_out;
+ 	}
+ 
++	kref_init(&core->ref);
++
+ 	core->name = kstrdup_const(init->name, GFP_KERNEL);
+ 	if (!core->name) {
+ 		ret = -ENOMEM;
+@@ -3874,9 +3979,8 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ 	}
+ 	core->ops = init->ops;
+ 
+-	if (dev && pm_runtime_enabled(dev))
+-		core->rpm_enabled = true;
+ 	core->dev = dev;
++	clk_pm_runtime_init(core);
+ 	core->of_node = np;
+ 	if (dev && dev->driver)
+ 		core->owner = dev->driver->owner;
+@@ -3916,12 +4020,10 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ 	hw->clk = NULL;
+ 
+ fail_create_clk:
+-	clk_core_free_parent_map(core);
+ fail_parents:
+ fail_ops:
+-	kfree_const(core->name);
+ fail_name:
+-	kfree(core);
++	kref_put(&core->ref, __clk_release);
+ fail_out:
+ 	return ERR_PTR(ret);
+ }
+@@ -4001,18 +4103,6 @@ int of_clk_hw_register(struct device_node *node, struct clk_hw *hw)
+ }
+ EXPORT_SYMBOL_GPL(of_clk_hw_register);
+ 
+-/* Free memory allocated for a clock. */
+-static void __clk_release(struct kref *ref)
+-{
+-	struct clk_core *core = container_of(ref, struct clk_core, ref);
+-
+-	lockdep_assert_held(&prepare_lock);
+-
+-	clk_core_free_parent_map(core);
+-	kfree_const(core->name);
+-	kfree(core);
+-}
+-
+ /*
+  * Empty clk_ops for unregistered clocks. These are used temporarily
+  * after clk_unregister() was called on a clock and until last clock
+@@ -4065,7 +4155,7 @@ static void clk_core_evict_parent_cache_subtree(struct clk_core *root,
+ /* Remove this clk from all parent caches */
+ static void clk_core_evict_parent_cache(struct clk_core *core)
+ {
+-	struct hlist_head **lists;
++	const struct hlist_head **lists;
+ 	struct clk_core *root;
+ 
+ 	lockdep_assert_held(&prepare_lock);
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 216594b861191..93df6cef4f5a9 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -1639,7 +1639,7 @@ static ssize_t trans_stat_show(struct device *dev,
+ 			       struct device_attribute *attr, char *buf)
+ {
+ 	struct devfreq *df = to_devfreq(dev);
+-	ssize_t len;
++	ssize_t len = 0;
+ 	int i, j;
+ 	unsigned int max_state;
+ 
+@@ -1648,7 +1648,7 @@ static ssize_t trans_stat_show(struct device *dev,
+ 	max_state = df->profile->max_state;
+ 
+ 	if (max_state == 0)
+-		return sprintf(buf, "Not Supported.\n");
++		return scnprintf(buf, PAGE_SIZE, "Not Supported.\n");
+ 
+ 	mutex_lock(&df->lock);
+ 	if (!df->stop_polling &&
+@@ -1658,33 +1658,54 @@ static ssize_t trans_stat_show(struct device *dev,
+ 	}
+ 	mutex_unlock(&df->lock);
+ 
+-	len = sprintf(buf, "     From  :   To\n");
+-	len += sprintf(buf + len, "           :");
+-	for (i = 0; i < max_state; i++)
+-		len += sprintf(buf + len, "%10lu",
+-				df->profile->freq_table[i]);
++	len += scnprintf(buf + len, PAGE_SIZE - len, "     From  :   To\n");
++	len += scnprintf(buf + len, PAGE_SIZE - len, "           :");
++	for (i = 0; i < max_state; i++) {
++		if (len >= PAGE_SIZE - 1)
++			break;
++		len += scnprintf(buf + len, PAGE_SIZE - len, "%10lu",
++				 df->profile->freq_table[i]);
++	}
++	if (len >= PAGE_SIZE - 1)
++		return PAGE_SIZE - 1;
+ 
+-	len += sprintf(buf + len, "   time(ms)\n");
++	len += scnprintf(buf + len, PAGE_SIZE - len, "   time(ms)\n");
+ 
+ 	for (i = 0; i < max_state; i++) {
++		if (len >= PAGE_SIZE - 1)
++			break;
+ 		if (df->profile->freq_table[i]
+ 					== df->previous_freq) {
+-			len += sprintf(buf + len, "*");
++			len += scnprintf(buf + len, PAGE_SIZE - len, "*");
+ 		} else {
+-			len += sprintf(buf + len, " ");
++			len += scnprintf(buf + len, PAGE_SIZE - len, " ");
++		}
++		if (len >= PAGE_SIZE - 1)
++			break;
++
++		len += scnprintf(buf + len, PAGE_SIZE - len, "%10lu:",
++				 df->profile->freq_table[i]);
++		for (j = 0; j < max_state; j++) {
++			if (len >= PAGE_SIZE - 1)
++				break;
++			len += scnprintf(buf + len, PAGE_SIZE - len, "%10u",
++					 df->stats.trans_table[(i * max_state) + j]);
+ 		}
+-		len += sprintf(buf + len, "%10lu:",
+-				df->profile->freq_table[i]);
+-		for (j = 0; j < max_state; j++)
+-			len += sprintf(buf + len, "%10u",
+-				df->stats.trans_table[(i * max_state) + j]);
++		if (len >= PAGE_SIZE - 1)
++			break;
++		len += scnprintf(buf + len, PAGE_SIZE - len, "%10llu\n", (u64)
++				 jiffies64_to_msecs(df->stats.time_in_state[i]));
++	}
++
++	if (len < PAGE_SIZE - 1)
++		len += scnprintf(buf + len, PAGE_SIZE - len, "Total transition : %u\n",
++				 df->stats.total_trans);
+ 
+-		len += sprintf(buf + len, "%10llu\n", (u64)
+-			jiffies64_to_msecs(df->stats.time_in_state[i]));
++	if (len >= PAGE_SIZE - 1) {
++		pr_warn_once("devfreq transition table exceeds PAGE_SIZE. Disabling\n");
++		return -EFBIG;
+ 	}
+ 
+-	len += sprintf(buf + len, "Total transition : %u\n",
+-					df->stats.total_trans);
+ 	return len;
+ }
+ 
+diff --git a/drivers/dma/idma64.c b/drivers/dma/idma64.c
+index f5a84c8463945..db506e1f7ef4e 100644
+--- a/drivers/dma/idma64.c
++++ b/drivers/dma/idma64.c
+@@ -167,6 +167,10 @@ static irqreturn_t idma64_irq(int irq, void *dev)
+ 	u32 status_err;
+ 	unsigned short i;
+ 
++	/* Since IRQ may be shared, check if DMA controller is powered on */
++	if (status == GENMASK(31, 0))
++		return IRQ_NONE;
++
+ 	dev_vdbg(idma64->dma.dev, "%s: status=%#x\n", __func__, status);
+ 
+ 	/* Check if we have any interrupt from the DMA controller */
+diff --git a/drivers/dma/owl-dma.c b/drivers/dma/owl-dma.c
+index 04202d75f4eed..695feb3443d80 100644
+--- a/drivers/dma/owl-dma.c
++++ b/drivers/dma/owl-dma.c
+@@ -249,7 +249,7 @@ static void pchan_update(struct owl_dma_pchan *pchan, u32 reg,
+ 	else
+ 		regval &= ~val;
+ 
+-	writel(val, pchan->base + reg);
++	writel(regval, pchan->base + reg);
+ }
+ 
+ static void pchan_writel(struct owl_dma_pchan *pchan, u32 reg, u32 data)
+@@ -273,7 +273,7 @@ static void dma_update(struct owl_dma *od, u32 reg, u32 val, bool state)
+ 	else
+ 		regval &= ~val;
+ 
+-	writel(val, od->base + reg);
++	writel(regval, od->base + reg);
+ }
+ 
+ static void dma_writel(struct owl_dma *od, u32 reg, u32 data)
+diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
+index 6c709803203ad..058c3a6ed6bbf 100644
+--- a/drivers/dma/xilinx/xilinx_dpdma.c
++++ b/drivers/dma/xilinx/xilinx_dpdma.c
+@@ -213,7 +213,8 @@ struct xilinx_dpdma_tx_desc {
+  * @running: true if the channel is running
+  * @first_frame: flag for the first frame of stream
+  * @video_group: flag if multi-channel operation is needed for video channels
+- * @lock: lock to access struct xilinx_dpdma_chan
++ * @lock: lock to access struct xilinx_dpdma_chan. Must be taken before
++ *        @vchan.lock, if both are to be held.
+  * @desc_pool: descriptor allocation pool
+  * @err_task: error IRQ bottom half handler
+  * @desc: References to descriptors being processed
+@@ -1101,12 +1102,14 @@ static void xilinx_dpdma_chan_vsync_irq(struct  xilinx_dpdma_chan *chan)
+ 	 * Complete the active descriptor, if any, promote the pending
+ 	 * descriptor to active, and queue the next transfer, if any.
+ 	 */
++	spin_lock(&chan->vchan.lock);
+ 	if (chan->desc.active)
+ 		vchan_cookie_complete(&chan->desc.active->vdesc);
+ 	chan->desc.active = pending;
+ 	chan->desc.pending = NULL;
+ 
+ 	xilinx_dpdma_chan_queue_transfer(chan);
++	spin_unlock(&chan->vchan.lock);
+ 
+ out:
+ 	spin_unlock_irqrestore(&chan->lock, flags);
+@@ -1264,10 +1267,12 @@ static void xilinx_dpdma_issue_pending(struct dma_chan *dchan)
+ 	struct xilinx_dpdma_chan *chan = to_xilinx_chan(dchan);
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&chan->vchan.lock, flags);
++	spin_lock_irqsave(&chan->lock, flags);
++	spin_lock(&chan->vchan.lock);
+ 	if (vchan_issue_pending(&chan->vchan))
+ 		xilinx_dpdma_chan_queue_transfer(chan);
+-	spin_unlock_irqrestore(&chan->vchan.lock, flags);
++	spin_unlock(&chan->vchan.lock);
++	spin_unlock_irqrestore(&chan->lock, flags);
+ }
+ 
+ static int xilinx_dpdma_config(struct dma_chan *dchan,
+@@ -1491,7 +1496,9 @@ static void xilinx_dpdma_chan_err_task(struct tasklet_struct *t)
+ 		    XILINX_DPDMA_EINTR_CHAN_ERR_MASK << chan->id);
+ 
+ 	spin_lock_irqsave(&chan->lock, flags);
++	spin_lock(&chan->vchan.lock);
+ 	xilinx_dpdma_chan_queue_transfer(chan);
++	spin_unlock(&chan->vchan.lock);
+ 	spin_unlock_irqrestore(&chan->lock, flags);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index 1b4c7ced8b92c..4a95a624fca7b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -1259,6 +1259,7 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
+ err_bo_create:
+ 	unreserve_mem_limit(adev, size, alloc_domain, !!sg);
+ err_reserve_limit:
++	amdgpu_sync_free(&(*mem)->sync);
+ 	mutex_destroy(&(*mem)->lock);
+ 	kfree(*mem);
+ err:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 8445bb7ae06ab..ad3de971a0878 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -2201,6 +2201,37 @@ static void amdgpu_vm_bo_insert_map(struct amdgpu_device *adev,
+ 	trace_amdgpu_vm_bo_map(bo_va, mapping);
+ }
+ 
++/* Validate operation parameters to prevent potential abuse */
++static int amdgpu_vm_verify_parameters(struct amdgpu_device *adev,
++					  struct amdgpu_bo *bo,
++					  uint64_t saddr,
++					  uint64_t offset,
++					  uint64_t size)
++{
++	uint64_t tmp, lpfn;
++
++	if (saddr & AMDGPU_GPU_PAGE_MASK
++	    || offset & AMDGPU_GPU_PAGE_MASK
++	    || size & AMDGPU_GPU_PAGE_MASK)
++		return -EINVAL;
++
++	if (check_add_overflow(saddr, size, &tmp)
++	    || check_add_overflow(offset, size, &tmp)
++	    || size == 0 /* which also leads to end < begin */)
++		return -EINVAL;
++
++	/* make sure object fit at this offset */
++	if (bo && offset + size > amdgpu_bo_size(bo))
++		return -EINVAL;
++
++	/* Ensure last pfn not exceed max_pfn */
++	lpfn = (saddr + size - 1) >> AMDGPU_GPU_PAGE_SHIFT;
++	if (lpfn >= adev->vm_manager.max_pfn)
++		return -EINVAL;
++
++	return 0;
++}
++
+ /**
+  * amdgpu_vm_bo_map - map bo inside a vm
+  *
+@@ -2227,21 +2258,14 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
+ 	struct amdgpu_bo *bo = bo_va->base.bo;
+ 	struct amdgpu_vm *vm = bo_va->base.vm;
+ 	uint64_t eaddr;
++	int r;
+ 
+-	/* validate the parameters */
+-	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK || size & ~PAGE_MASK)
+-		return -EINVAL;
+-	if (saddr + size <= saddr || offset + size <= offset)
+-		return -EINVAL;
+-
+-	/* make sure object fit at this offset */
+-	eaddr = saddr + size - 1;
+-	if ((bo && offset + size > amdgpu_bo_size(bo)) ||
+-	    (eaddr >= adev->vm_manager.max_pfn << AMDGPU_GPU_PAGE_SHIFT))
+-		return -EINVAL;
++	r = amdgpu_vm_verify_parameters(adev, bo, saddr, offset, size);
++	if (r)
++		return r;
+ 
+ 	saddr /= AMDGPU_GPU_PAGE_SIZE;
+-	eaddr /= AMDGPU_GPU_PAGE_SIZE;
++	eaddr = saddr + (size - 1) / AMDGPU_GPU_PAGE_SIZE;
+ 
+ 	tmp = amdgpu_vm_it_iter_first(&vm->va, saddr, eaddr);
+ 	if (tmp) {
+@@ -2294,17 +2318,9 @@ int amdgpu_vm_bo_replace_map(struct amdgpu_device *adev,
+ 	uint64_t eaddr;
+ 	int r;
+ 
+-	/* validate the parameters */
+-	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK || size & ~PAGE_MASK)
+-		return -EINVAL;
+-	if (saddr + size <= saddr || offset + size <= offset)
+-		return -EINVAL;
+-
+-	/* make sure object fit at this offset */
+-	eaddr = saddr + size - 1;
+-	if ((bo && offset + size > amdgpu_bo_size(bo)) ||
+-	    (eaddr >= adev->vm_manager.max_pfn << AMDGPU_GPU_PAGE_SHIFT))
+-		return -EINVAL;
++	r = amdgpu_vm_verify_parameters(adev, bo, saddr, offset, size);
++	if (r)
++		return r;
+ 
+ 	/* Allocate all the needed memory */
+ 	mapping = kmalloc(sizeof(*mapping), GFP_KERNEL);
+@@ -2318,7 +2334,7 @@ int amdgpu_vm_bo_replace_map(struct amdgpu_device *adev,
+ 	}
+ 
+ 	saddr /= AMDGPU_GPU_PAGE_SIZE;
+-	eaddr /= AMDGPU_GPU_PAGE_SIZE;
++	eaddr = saddr + (size - 1) / AMDGPU_GPU_PAGE_SIZE;
+ 
+ 	mapping->start = saddr;
+ 	mapping->last = eaddr;
+@@ -2405,10 +2421,14 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
+ 	struct amdgpu_bo_va_mapping *before, *after, *tmp, *next;
+ 	LIST_HEAD(removed);
+ 	uint64_t eaddr;
++	int r;
++
++	r = amdgpu_vm_verify_parameters(adev, NULL, saddr, 0, size);
++	if (r)
++		return r;
+ 
+-	eaddr = saddr + size - 1;
+ 	saddr /= AMDGPU_GPU_PAGE_SIZE;
+-	eaddr /= AMDGPU_GPU_PAGE_SIZE;
++	eaddr = saddr + (size - 1) / AMDGPU_GPU_PAGE_SIZE;
+ 
+ 	/* Allocate all the needed memory */
+ 	before = kzalloc(sizeof(*before), GFP_KERNEL);
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+index 1bd330d431479..ad36f337c8a88 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+@@ -390,17 +390,21 @@ static void sdma_v5_2_ring_emit_hdp_flush(struct amdgpu_ring *ring)
+ 	u32 ref_and_mask = 0;
+ 	const struct nbio_hdp_flush_reg *nbio_hf_reg = adev->nbio.hdp_flush_reg;
+ 
+-	ref_and_mask = nbio_hf_reg->ref_and_mask_sdma0 << ring->me;
+-
+-	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_POLL_REGMEM) |
+-			  SDMA_PKT_POLL_REGMEM_HEADER_HDP_FLUSH(1) |
+-			  SDMA_PKT_POLL_REGMEM_HEADER_FUNC(3)); /* == */
+-	amdgpu_ring_write(ring, (adev->nbio.funcs->get_hdp_flush_done_offset(adev)) << 2);
+-	amdgpu_ring_write(ring, (adev->nbio.funcs->get_hdp_flush_req_offset(adev)) << 2);
+-	amdgpu_ring_write(ring, ref_and_mask); /* reference */
+-	amdgpu_ring_write(ring, ref_and_mask); /* mask */
+-	amdgpu_ring_write(ring, SDMA_PKT_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) |
+-			  SDMA_PKT_POLL_REGMEM_DW5_INTERVAL(10)); /* retry count, poll interval */
++	if (ring->me > 1) {
++		amdgpu_asic_flush_hdp(adev, ring);
++	} else {
++		ref_and_mask = nbio_hf_reg->ref_and_mask_sdma0 << ring->me;
++
++		amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_POLL_REGMEM) |
++				  SDMA_PKT_POLL_REGMEM_HEADER_HDP_FLUSH(1) |
++				  SDMA_PKT_POLL_REGMEM_HEADER_FUNC(3)); /* == */
++		amdgpu_ring_write(ring, (adev->nbio.funcs->get_hdp_flush_done_offset(adev)) << 2);
++		amdgpu_ring_write(ring, (adev->nbio.funcs->get_hdp_flush_req_offset(adev)) << 2);
++		amdgpu_ring_write(ring, ref_and_mask); /* reference */
++		amdgpu_ring_write(ring, ref_and_mask); /* mask */
++		amdgpu_ring_write(ring, SDMA_PKT_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) |
++				  SDMA_PKT_POLL_REGMEM_DW5_INTERVAL(10)); /* retry count, poll interval */
++	}
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/drm_client_modeset.c b/drivers/gpu/drm/drm_client_modeset.c
+index d5fd418236246..7872a04e9a721 100644
+--- a/drivers/gpu/drm/drm_client_modeset.c
++++ b/drivers/gpu/drm/drm_client_modeset.c
+@@ -774,6 +774,7 @@ int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width,
+ 	unsigned int total_modes_count = 0;
+ 	struct drm_client_offset *offsets;
+ 	unsigned int connector_count = 0;
++	/* points to modes protected by mode_config.mutex */
+ 	struct drm_display_mode **modes;
+ 	struct drm_crtc **crtcs;
+ 	int i, ret = 0;
+@@ -842,7 +843,6 @@ int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width,
+ 		drm_client_pick_crtcs(client, connectors, connector_count,
+ 				      crtcs, modes, 0, width, height);
+ 	}
+-	mutex_unlock(&dev->mode_config.mutex);
+ 
+ 	drm_client_modeset_release(client);
+ 
+@@ -872,6 +872,7 @@ int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width,
+ 			modeset->y = offset->y;
+ 		}
+ 	}
++	mutex_unlock(&dev->mode_config.mutex);
+ 
+ 	mutex_unlock(&client->modeset_mutex);
+ out:
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bios.c b/drivers/gpu/drm/nouveau/nouveau_bios.c
+index d204ea8a5618e..5cdf0d8d4bc18 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bios.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bios.c
+@@ -23,6 +23,7 @@
+  */
+ 
+ #include "nouveau_drv.h"
++#include "nouveau_bios.h"
+ #include "nouveau_reg.h"
+ #include "dispnv04/hw.h"
+ #include "nouveau_encoder.h"
+@@ -1672,7 +1673,7 @@ apply_dcb_encoder_quirks(struct drm_device *dev, int idx, u32 *conn, u32 *conf)
+ 	 */
+ 	if (nv_match_device(dev, 0x0201, 0x1462, 0x8851)) {
+ 		if (*conn == 0xf2005014 && *conf == 0xffffffff) {
+-			fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 1, 1, 1);
++			fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 1, 1, DCB_OUTPUT_B);
+ 			return false;
+ 		}
+ 	}
+@@ -1758,26 +1759,26 @@ fabricate_dcb_encoder_table(struct drm_device *dev, struct nvbios *bios)
+ #ifdef __powerpc__
+ 	/* Apple iMac G4 NV17 */
+ 	if (of_machine_is_compatible("PowerMac4,5")) {
+-		fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 0, all_heads, 1);
+-		fabricate_dcb_output(dcb, DCB_OUTPUT_ANALOG, 1, all_heads, 2);
++		fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 0, all_heads, DCB_OUTPUT_B);
++		fabricate_dcb_output(dcb, DCB_OUTPUT_ANALOG, 1, all_heads, DCB_OUTPUT_C);
+ 		return;
+ 	}
+ #endif
+ 
+ 	/* Make up some sane defaults */
+ 	fabricate_dcb_output(dcb, DCB_OUTPUT_ANALOG,
+-			     bios->legacy.i2c_indices.crt, 1, 1);
++			     bios->legacy.i2c_indices.crt, 1, DCB_OUTPUT_B);
+ 
+ 	if (nv04_tv_identify(dev, bios->legacy.i2c_indices.tv) >= 0)
+ 		fabricate_dcb_output(dcb, DCB_OUTPUT_TV,
+ 				     bios->legacy.i2c_indices.tv,
+-				     all_heads, 0);
++				     all_heads, DCB_OUTPUT_A);
+ 
+ 	else if (bios->tmds.output0_script_ptr ||
+ 		 bios->tmds.output1_script_ptr)
+ 		fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS,
+ 				     bios->legacy.i2c_indices.panel,
+-				     all_heads, 1);
++				     all_heads, DCB_OUTPUT_B);
+ }
+ 
+ static int
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowof.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowof.c
+index 4bf486b571013..cb05f7f48a98b 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowof.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowof.c
+@@ -66,11 +66,16 @@ of_init(struct nvkm_bios *bios, const char *name)
+ 	return ERR_PTR(-EINVAL);
+ }
+ 
++static void of_fini(void *p)
++{
++	kfree(p);
++}
++
+ const struct nvbios_source
+ nvbios_of = {
+ 	.name = "OpenFirmware",
+ 	.init = of_init,
+-	.fini = (void(*)(void *))kfree,
++	.fini = of_fini,
+ 	.read = of_read,
+ 	.size = of_size,
+ 	.rw = false,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/nv50.c b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/nv50.c
+index 02c4eb28cef44..f3db07325dbd8 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/nv50.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/nv50.c
+@@ -221,8 +221,11 @@ nv50_instobj_acquire(struct nvkm_memory *memory)
+ 	void __iomem *map = NULL;
+ 
+ 	/* Already mapped? */
+-	if (refcount_inc_not_zero(&iobj->maps))
++	if (refcount_inc_not_zero(&iobj->maps)) {
++		/* read barrier match the wmb on refcount set */
++		smp_rmb();
+ 		return iobj->map;
++	}
+ 
+ 	/* Take the lock, and re-check that another thread hasn't
+ 	 * already mapped the object in the meantime.
+@@ -249,6 +252,8 @@ nv50_instobj_acquire(struct nvkm_memory *memory)
+ 			iobj->base.memory.ptrs = &nv50_instobj_fast;
+ 		else
+ 			iobj->base.memory.ptrs = &nv50_instobj_slow;
++		/* barrier to ensure the ptrs are written before refcount is set */
++		smp_wmb();
+ 		refcount_set(&iobj->maps, 1);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/panel/panel-visionox-rm69299.c b/drivers/gpu/drm/panel/panel-visionox-rm69299.c
+index eb43503ec97b3..6134432e4918d 100644
+--- a/drivers/gpu/drm/panel/panel-visionox-rm69299.c
++++ b/drivers/gpu/drm/panel/panel-visionox-rm69299.c
+@@ -261,8 +261,6 @@ static int visionox_rm69299_remove(struct mipi_dsi_device *dsi)
+ 	struct visionox_rm69299 *ctx = mipi_dsi_get_drvdata(dsi);
+ 
+ 	mipi_dsi_detach(ctx->dsi);
+-	mipi_dsi_device_unregister(ctx->dsi);
+-
+ 	drm_panel_remove(&ctx->panel);
+ 	return 0;
+ }
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 14811d42a5a91..4f8e8f392daba 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -56,7 +56,6 @@
+ /* flags */
+ #define I2C_HID_STARTED		0
+ #define I2C_HID_RESET_PENDING	1
+-#define I2C_HID_READ_PENDING	2
+ 
+ #define I2C_HID_PWR_ON		0x00
+ #define I2C_HID_PWR_SLEEP	0x01
+@@ -256,7 +255,6 @@ static int __i2c_hid_command(struct i2c_client *client,
+ 		msg[1].len = data_len;
+ 		msg[1].buf = buf_recv;
+ 		msg_num = 2;
+-		set_bit(I2C_HID_READ_PENDING, &ihid->flags);
+ 	}
+ 
+ 	if (wait)
+@@ -264,9 +262,6 @@ static int __i2c_hid_command(struct i2c_client *client,
+ 
+ 	ret = i2c_transfer(client->adapter, msg, msg_num);
+ 
+-	if (data_len > 0)
+-		clear_bit(I2C_HID_READ_PENDING, &ihid->flags);
+-
+ 	if (ret != msg_num)
+ 		return ret < 0 ? ret : -EIO;
+ 
+@@ -538,9 +533,6 @@ static irqreturn_t i2c_hid_irq(int irq, void *dev_id)
+ {
+ 	struct i2c_hid *ihid = dev_id;
+ 
+-	if (test_bit(I2C_HID_READ_PENDING, &ihid->flags))
+-		return IRQ_HANDLED;
+-
+ 	i2c_hid_get_input(ihid);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 34fecf97a355b..e8a89e18c640e 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -2013,13 +2013,18 @@ static int i2c_check_for_quirks(struct i2c_adapter *adap, struct i2c_msg *msgs,
+  * Returns negative errno, else the number of messages executed.
+  *
+  * Adapter lock must be held when calling this function. No debug logging
+- * takes place. adap->algo->master_xfer existence isn't checked.
++ * takes place.
+  */
+ int __i2c_transfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ {
+ 	unsigned long orig_jiffies;
+ 	int ret, try;
+ 
++	if (!adap->algo->master_xfer) {
++		dev_dbg(&adap->dev, "I2C level transfers not supported\n");
++		return -EOPNOTSUPP;
++	}
++
+ 	if (WARN_ON(!msgs || num < 1))
+ 		return -EINVAL;
+ 
+@@ -2086,11 +2091,6 @@ int i2c_transfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ {
+ 	int ret;
+ 
+-	if (!adap->algo->master_xfer) {
+-		dev_dbg(&adap->dev, "I2C level transfers not supported\n");
+-		return -EOPNOTSUPP;
+-	}
+-
+ 	/* REVISIT the fault reporting model here is weak:
+ 	 *
+ 	 *  - When we get an error after receiving N bytes from a slave,
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 2a30b25c5e7e5..26c66685a43dd 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -1057,23 +1057,26 @@ static void cm_reset_to_idle(struct cm_id_private *cm_id_priv)
+ 	}
+ }
+ 
+-static noinline void cm_destroy_id_wait_timeout(struct ib_cm_id *cm_id)
++static noinline void cm_destroy_id_wait_timeout(struct ib_cm_id *cm_id,
++						enum ib_cm_state old_state)
+ {
+ 	struct cm_id_private *cm_id_priv;
+ 
+ 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
+-	pr_err("%s: cm_id=%p timed out. state=%d refcnt=%d\n", __func__,
+-	       cm_id, cm_id->state, refcount_read(&cm_id_priv->refcount));
++	pr_err("%s: cm_id=%p timed out. state %d -> %d, refcnt=%d\n", __func__,
++	       cm_id, old_state, cm_id->state, refcount_read(&cm_id_priv->refcount));
+ }
+ 
+ static void cm_destroy_id(struct ib_cm_id *cm_id, int err)
+ {
+ 	struct cm_id_private *cm_id_priv;
++	enum ib_cm_state old_state;
+ 	struct cm_work *work;
+ 	int ret;
+ 
+ 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
+ 	spin_lock_irq(&cm_id_priv->lock);
++	old_state = cm_id->state;
+ retest:
+ 	switch (cm_id->state) {
+ 	case IB_CM_LISTEN:
+@@ -1187,7 +1190,7 @@ static void cm_destroy_id(struct ib_cm_id *cm_id, int err)
+ 						  msecs_to_jiffies(
+ 						  CM_DESTROY_ID_WAIT_TIMEOUT));
+ 		if (!ret) /* timeout happened */
+-			cm_destroy_id_wait_timeout(cm_id);
++			cm_destroy_id_wait_timeout(cm_id, old_state);
+ 	} while (!ret);
+ 
+ 	while ((work = cm_dequeue_work(cm_id_priv)) != NULL)
+diff --git a/drivers/infiniband/hw/mlx5/mad.c b/drivers/infiniband/hw/mlx5/mad.c
+index cca7a4a6bd82d..7f12a9b05c872 100644
+--- a/drivers/infiniband/hw/mlx5/mad.c
++++ b/drivers/infiniband/hw/mlx5/mad.c
+@@ -166,7 +166,8 @@ static int process_pma_cmd(struct mlx5_ib_dev *dev, u8 port_num,
+ 		mdev = dev->mdev;
+ 		mdev_port_num = 1;
+ 	}
+-	if (MLX5_CAP_GEN(dev->mdev, num_ports) == 1) {
++	if (MLX5_CAP_GEN(dev->mdev, num_ports) == 1 &&
++	    !mlx5_core_mp_enabled(mdev)) {
+ 		/* set local port to one for Function-Per-Port HCA. */
+ 		mdev = dev->mdev;
+ 		mdev_port_num = 1;
+diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
+index 95f0de0c8b49c..0505c81aa8d04 100644
+--- a/drivers/infiniband/sw/rxe/rxe.c
++++ b/drivers/infiniband/sw/rxe/rxe.c
+@@ -35,6 +35,8 @@ void rxe_dealloc(struct ib_device *ib_dev)
+ 
+ 	if (rxe->tfm)
+ 		crypto_free_shash(rxe->tfm);
++
++	mutex_destroy(&rxe->usdev_lock);
+ }
+ 
+ /* initialize rxe device parameters */
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index aabf56272b86d..02e3183a4c67e 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -33,7 +33,7 @@ int intel_svm_enable_prq(struct intel_iommu *iommu)
+ 	struct page *pages;
+ 	int irq, ret;
+ 
+-	pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, PRQ_ORDER);
++	pages = alloc_pages_node(iommu->node, GFP_KERNEL | __GFP_ZERO, PRQ_ORDER);
+ 	if (!pages) {
+ 		pr_warn("IOMMU: %s: Failed to allocate page request queue\n",
+ 			iommu->name);
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index c1f3cd82caf33..4e486cccc4cc6 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -4508,13 +4508,8 @@ static int its_vpe_irq_domain_alloc(struct irq_domain *domain, unsigned int virq
+ 		set_bit(i, bitmap);
+ 	}
+ 
+-	if (err) {
+-		if (i > 0)
+-			its_vpe_irq_domain_free(domain, virq, i);
+-
+-		its_lpi_free(bitmap, base, nr_ids);
+-		its_free_prop_table(vprop_page);
+-	}
++	if (err)
++		its_vpe_irq_domain_free(domain, virq, i);
+ 
+ 	return err;
+ }
+diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
+index c5663398c6b7d..28f5450e41306 100644
+--- a/drivers/mailbox/imx-mailbox.c
++++ b/drivers/mailbox/imx-mailbox.c
+@@ -331,8 +331,6 @@ static int imx_mu_startup(struct mbox_chan *chan)
+ 		break;
+ 	}
+ 
+-	priv->suspend = true;
+-
+ 	return 0;
+ }
+ 
+@@ -550,8 +548,6 @@ static int imx_mu_probe(struct platform_device *pdev)
+ 
+ 	clk_disable_unprepare(priv->clk);
+ 
+-	priv->suspend = false;
+-
+ 	return 0;
+ 
+ disable_runtime_pm:
+@@ -614,6 +610,8 @@ static int __maybe_unused imx_mu_suspend_noirq(struct device *dev)
+ 	if (!priv->clk)
+ 		priv->xcr = imx_mu_read(priv, priv->dcfg->xCR);
+ 
++	priv->suspend = true;
++
+ 	return 0;
+ }
+ 
+@@ -632,6 +630,8 @@ static int __maybe_unused imx_mu_resume_noirq(struct device *dev)
+ 	if (!imx_mu_read(priv, priv->dcfg->xCR) && !priv->clk)
+ 		imx_mu_write(priv, priv->xcr, priv->dcfg->xCR);
+ 
++	priv->suspend = false;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c
+index 97b479223fe52..2f7ab5df1c584 100644
+--- a/drivers/media/cec/core/cec-adap.c
++++ b/drivers/media/cec/core/cec-adap.c
+@@ -1116,20 +1116,6 @@ void cec_received_msg_ts(struct cec_adapter *adap,
+ 	if (valid_la && min_len) {
+ 		/* These messages have special length requirements */
+ 		switch (cmd) {
+-		case CEC_MSG_TIMER_STATUS:
+-			if (msg->msg[2] & 0x10) {
+-				switch (msg->msg[2] & 0xf) {
+-				case CEC_OP_PROG_INFO_NOT_ENOUGH_SPACE:
+-				case CEC_OP_PROG_INFO_MIGHT_NOT_BE_ENOUGH_SPACE:
+-					if (msg->len < 5)
+-						valid_la = false;
+-					break;
+-				}
+-			} else if ((msg->msg[2] & 0xf) == CEC_OP_PROG_ERROR_DUPLICATE) {
+-				if (msg->len < 5)
+-					valid_la = false;
+-			}
+-			break;
+ 		case CEC_MSG_RECORD_ON:
+ 			switch (msg->msg[2]) {
+ 			case CEC_OP_RECORD_SRC_OWN:
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 91586d8afcaa0..2809338a5c3ae 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -115,7 +115,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_P, MEI_ME_PCH15_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_N, MEI_ME_PCH15_CFG)},
+ 
+-	{MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_CFG)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_SPS_CFG)},
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_MTL_M, MEI_ME_PCH15_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ARL_S, MEI_ME_PCH15_CFG)},
+diff --git a/drivers/mtd/nand/raw/diskonchip.c b/drivers/mtd/nand/raw/diskonchip.c
+index 26b265e4384a1..9ee7daa6fa3c8 100644
+--- a/drivers/mtd/nand/raw/diskonchip.c
++++ b/drivers/mtd/nand/raw/diskonchip.c
+@@ -53,7 +53,7 @@ static unsigned long doc_locations[] __initdata = {
+ 	0xe8000, 0xea000, 0xec000, 0xee000,
+ #endif
+ #endif
+-	0xffffffff };
++};
+ 
+ static struct mtd_info *doclist = NULL;
+ 
+@@ -1552,7 +1552,7 @@ static int __init init_nanddoc(void)
+ 		if (ret < 0)
+ 			return ret;
+ 	} else {
+-		for (i = 0; (doc_locations[i] != 0xffffffff); i++) {
++		for (i = 0; i < ARRAY_SIZE(doc_locations); i++) {
+ 			doc_probe(doc_locations[i]);
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
+index d59ea5148c16c..60645ea7c0f80 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_com.c
++++ b/drivers/net/ethernet/amazon/ena/ena_com.c
+@@ -352,7 +352,7 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
+ 			ENA_COM_BOUNCE_BUFFER_CNTRL_CNT;
+ 		io_sq->bounce_buf_ctrl.next_to_use = 0;
+ 
+-		size = io_sq->bounce_buf_ctrl.buffer_size *
++		size = (size_t)io_sq->bounce_buf_ctrl.buffer_size *
+ 			io_sq->bounce_buf_ctrl.buffers_num;
+ 
+ 		dev_node = dev_to_node(ena_dev->dmadev);
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index fa65971949fce..9149c82c0a564 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -1105,8 +1105,11 @@ static void ena_unmap_tx_buff(struct ena_ring *tx_ring,
+ static void ena_free_tx_bufs(struct ena_ring *tx_ring)
+ {
+ 	bool print_once = true;
++	bool is_xdp_ring;
+ 	u32 i;
+ 
++	is_xdp_ring = ENA_IS_XDP_INDEX(tx_ring->adapter, tx_ring->qid);
++
+ 	for (i = 0; i < tx_ring->ring_size; i++) {
+ 		struct ena_tx_buffer *tx_info = &tx_ring->tx_buffer_info[i];
+ 
+@@ -1126,10 +1129,15 @@ static void ena_free_tx_bufs(struct ena_ring *tx_ring)
+ 
+ 		ena_unmap_tx_buff(tx_ring, tx_info);
+ 
+-		dev_kfree_skb_any(tx_info->skb);
++		if (is_xdp_ring)
++			xdp_return_frame(tx_info->xdpf);
++		else
++			dev_kfree_skb_any(tx_info->skb);
+ 	}
+-	netdev_tx_reset_queue(netdev_get_tx_queue(tx_ring->netdev,
+-						  tx_ring->qid));
++
++	if (!is_xdp_ring)
++		netdev_tx_reset_queue(netdev_get_tx_queue(tx_ring->netdev,
++							  tx_ring->qid));
+ }
+ 
+ static void ena_free_all_tx_bufs(struct ena_adapter *adapter)
+@@ -3672,10 +3680,11 @@ static void check_for_missing_completions(struct ena_adapter *adapter)
+ {
+ 	struct ena_ring *tx_ring;
+ 	struct ena_ring *rx_ring;
+-	int i, budget, rc;
++	int qid, budget, rc;
+ 	int io_queue_count;
+ 
+ 	io_queue_count = adapter->xdp_num_queues + adapter->num_io_queues;
++
+ 	/* Make sure the driver doesn't turn the device in other process */
+ 	smp_rmb();
+ 
+@@ -3688,27 +3697,29 @@ static void check_for_missing_completions(struct ena_adapter *adapter)
+ 	if (adapter->missing_tx_completion_to == ENA_HW_HINTS_NO_TIMEOUT)
+ 		return;
+ 
+-	budget = ENA_MONITORED_TX_QUEUES;
++	budget = min_t(u32, io_queue_count, ENA_MONITORED_TX_QUEUES);
+ 
+-	for (i = adapter->last_monitored_tx_qid; i < io_queue_count; i++) {
+-		tx_ring = &adapter->tx_ring[i];
+-		rx_ring = &adapter->rx_ring[i];
++	qid = adapter->last_monitored_tx_qid;
++
++	while (budget) {
++		qid = (qid + 1) % io_queue_count;
++
++		tx_ring = &adapter->tx_ring[qid];
++		rx_ring = &adapter->rx_ring[qid];
+ 
+ 		rc = check_missing_comp_in_tx_queue(adapter, tx_ring);
+ 		if (unlikely(rc))
+ 			return;
+ 
+-		rc =  !ENA_IS_XDP_INDEX(adapter, i) ?
++		rc =  !ENA_IS_XDP_INDEX(adapter, qid) ?
+ 			check_for_rx_interrupt_queue(adapter, rx_ring) : 0;
+ 		if (unlikely(rc))
+ 			return;
+ 
+ 		budget--;
+-		if (!budget)
+-			break;
+ 	}
+ 
+-	adapter->last_monitored_tx_qid = i % io_queue_count;
++	adapter->last_monitored_tx_qid = qid;
+ }
+ 
+ /* trigger napi schedule after 2 consecutive detections */
+diff --git a/drivers/net/ethernet/broadcom/b44.c b/drivers/net/ethernet/broadcom/b44.c
+index b455b60a5434b..7ad9a47156912 100644
+--- a/drivers/net/ethernet/broadcom/b44.c
++++ b/drivers/net/ethernet/broadcom/b44.c
+@@ -2027,12 +2027,14 @@ static int b44_set_pauseparam(struct net_device *dev,
+ 		bp->flags |= B44_FLAG_TX_PAUSE;
+ 	else
+ 		bp->flags &= ~B44_FLAG_TX_PAUSE;
+-	if (bp->flags & B44_FLAG_PAUSE_AUTO) {
+-		b44_halt(bp);
+-		b44_init_rings(bp);
+-		b44_init_hw(bp, B44_FULL_RESET);
+-	} else {
+-		__b44_set_flow_ctrl(bp, bp->flags);
++	if (netif_running(dev)) {
++		if (bp->flags & B44_FLAG_PAUSE_AUTO) {
++			b44_halt(bp);
++			b44_init_rings(bp);
++			b44_init_hw(bp, B44_FULL_RESET);
++		} else {
++			__b44_set_flow_ctrl(bp, bp->flags);
++		}
+ 	}
+ 	spin_unlock_irq(&bp->lock);
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 6ea2d94c3ddea..35a903f6df215 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -15488,8 +15488,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	val = (rd32(&pf->hw, I40E_PRTGL_SAH) &
+ 	       I40E_PRTGL_SAH_MFS_MASK) >> I40E_PRTGL_SAH_MFS_SHIFT;
+ 	if (val < MAX_FRAME_SIZE_DEFAULT)
+-		dev_warn(&pdev->dev, "MFS for port %x has been set below the default: %x\n",
+-			 pf->hw.port, val);
++		dev_warn(&pdev->dev, "MFS for port %x (%d) has been set below the default (%d)\n",
++			 pf->hw.port, val, MAX_FRAME_SIZE_DEFAULT);
+ 
+ 	/* Add a filter to drop all Flow control frames from any VSI from being
+ 	 * transmitted. By doing so we stop a malicious VF from sending out
+@@ -16023,7 +16023,7 @@ static int __init i40e_init_module(void)
+ 	 * since we need to be able to guarantee forward progress even under
+ 	 * memory pressure.
+ 	 */
+-	i40e_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, i40e_driver_name);
++	i40e_wq = alloc_workqueue("%s", 0, 0, i40e_driver_name);
+ 	if (!i40e_wq) {
+ 		pr_err("%s: Failed to create workqueue\n", i40e_driver_name);
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index b64801bc216bb..65259722a5728 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -2642,6 +2642,34 @@ static void iavf_del_all_cloud_filters(struct iavf_adapter *adapter)
+ 	spin_unlock_bh(&adapter->cloud_filter_list_lock);
+ }
+ 
++/**
++ * iavf_is_tc_config_same - Compare the mqprio TC config with the
++ * TC config already configured on this adapter.
++ * @adapter: board private structure
++ * @mqprio_qopt: TC config received from kernel.
++ *
++ * This function compares the TC config received from the kernel
++ * with the config already configured on the adapter.
++ *
++ * Return: True if configuration is same, false otherwise.
++ **/
++static bool iavf_is_tc_config_same(struct iavf_adapter *adapter,
++				   struct tc_mqprio_qopt *mqprio_qopt)
++{
++	struct virtchnl_channel_info *ch = &adapter->ch_config.ch_info[0];
++	int i;
++
++	if (adapter->num_tc != mqprio_qopt->num_tc)
++		return false;
++
++	for (i = 0; i < adapter->num_tc; i++) {
++		if (ch[i].count != mqprio_qopt->count[i] ||
++		    ch[i].offset != mqprio_qopt->offset[i])
++			return false;
++	}
++	return true;
++}
++
+ /**
+  * __iavf_setup_tc - configure multiple traffic classes
+  * @netdev: network interface device structure
+@@ -2698,7 +2726,7 @@ static int __iavf_setup_tc(struct net_device *netdev, void *type_data)
+ 		if (ret)
+ 			return ret;
+ 		/* Return if same TC config is requested */
+-		if (adapter->num_tc == num_tc)
++		if (iavf_is_tc_config_same(adapter, &mqprio_qopt->qopt))
+ 			return 0;
+ 		adapter->num_tc = num_tc;
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index e549b09c347a7..fb4b18be503c5 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -3146,18 +3146,18 @@ int rvu_nix_init(struct rvu *rvu)
+ 		 */
+ 		rvu_write64(rvu, blkaddr, NIX_AF_CFG,
+ 			    rvu_read64(rvu, blkaddr, NIX_AF_CFG) | 0x40ULL);
++	}
+ 
+-		/* Set chan/link to backpressure TL3 instead of TL2 */
+-		rvu_write64(rvu, blkaddr, NIX_AF_PSE_CHANNEL_LEVEL, 0x01);
++	/* Set chan/link to backpressure TL3 instead of TL2 */
++	rvu_write64(rvu, blkaddr, NIX_AF_PSE_CHANNEL_LEVEL, 0x01);
+ 
+-		/* Disable SQ manager's sticky mode operation (set TM6 = 0)
+-		 * This sticky mode is known to cause SQ stalls when multiple
+-		 * SQs are mapped to same SMQ and transmitting pkts at a time.
+-		 */
+-		cfg = rvu_read64(rvu, blkaddr, NIX_AF_SQM_DBG_CTL_STATUS);
+-		cfg &= ~BIT_ULL(15);
+-		rvu_write64(rvu, blkaddr, NIX_AF_SQM_DBG_CTL_STATUS, cfg);
+-	}
++	/* Disable SQ manager's sticky mode operation (set TM6 = 0)
++	 * This sticky mode is known to cause SQ stalls when multiple
++	 * SQs are mapped to same SMQ and transmitting pkts at a time.
++	 */
++	cfg = rvu_read64(rvu, blkaddr, NIX_AF_SQM_DBG_CTL_STATUS);
++	cfg &= ~BIT_ULL(15);
++	rvu_write64(rvu, blkaddr, NIX_AF_SQM_DBG_CTL_STATUS, cfg);
+ 
+ 	ltdefs = rvu->kpu.lt_def;
+ 	/* Calibrate X2P bus to check if CGX/LBK links are fine */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 39c17e9039157..0ba43c93abb26 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -114,15 +114,18 @@ static u8 alloc_token(struct mlx5_cmd *cmd)
+ 	return token;
+ }
+ 
+-static int cmd_alloc_index(struct mlx5_cmd *cmd)
++static int cmd_alloc_index(struct mlx5_cmd *cmd, struct mlx5_cmd_work_ent *ent)
+ {
+ 	unsigned long flags;
+ 	int ret;
+ 
+ 	spin_lock_irqsave(&cmd->alloc_lock, flags);
+ 	ret = find_first_bit(&cmd->bitmask, cmd->max_reg_cmds);
+-	if (ret < cmd->max_reg_cmds)
++	if (ret < cmd->max_reg_cmds) {
+ 		clear_bit(ret, &cmd->bitmask);
++		ent->idx = ret;
++		cmd->ent_arr[ent->idx] = ent;
++	}
+ 	spin_unlock_irqrestore(&cmd->alloc_lock, flags);
+ 
+ 	return ret < cmd->max_reg_cmds ? ret : -ENOMEM;
+@@ -912,7 +915,7 @@ static void cmd_work_handler(struct work_struct *work)
+ 	sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem;
+ 	down(sem);
+ 	if (!ent->page_queue) {
+-		alloc_ret = cmd_alloc_index(cmd);
++		alloc_ret = cmd_alloc_index(cmd, ent);
+ 		if (alloc_ret < 0) {
+ 			mlx5_core_err_rl(dev, "failed to allocate command entry\n");
+ 			if (ent->callback) {
+@@ -927,15 +930,14 @@ static void cmd_work_handler(struct work_struct *work)
+ 			up(sem);
+ 			return;
+ 		}
+-		ent->idx = alloc_ret;
+ 	} else {
+ 		ent->idx = cmd->max_reg_cmds;
+ 		spin_lock_irqsave(&cmd->alloc_lock, flags);
+ 		clear_bit(ent->idx, &cmd->bitmask);
++		cmd->ent_arr[ent->idx] = ent;
+ 		spin_unlock_irqrestore(&cmd->alloc_lock, flags);
+ 	}
+ 
+-	cmd->ent_arr[ent->idx] = ent;
+ 	lay = get_inst(cmd, ent->idx);
+ 	ent->lay = lay;
+ 	memset(lay, 0, sizeof(*lay));
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 4e8e3797aed08..074c9eb44ab73 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1675,8 +1675,9 @@ static struct mlx5_flow_handle *add_rule_fg(struct mlx5_flow_group *fg,
+ 	}
+ 	trace_mlx5_fs_set_fte(fte, false);
+ 
++	/* Link newly added rules into the tree. */
+ 	for (i = 0; i < handle->num_rules; i++) {
+-		if (refcount_read(&handle->rule[i]->node.refcount) == 1) {
++		if (!handle->rule[i]->node.parent) {
+ 			tree_add_node(&handle->rule[i]->node, &fte->node);
+ 			trace_mlx5_fs_add_rule(handle->rule[i]);
+ 		}
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index 1a86535c49685..f568ae250393f 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -697,7 +697,7 @@ static void mlxsw_emad_rx_listener_func(struct sk_buff *skb, u8 local_port,
+ 
+ static const struct mlxsw_listener mlxsw_emad_rx_listener =
+ 	MLXSW_RXL(mlxsw_emad_rx_listener_func, ETHEMAD, TRAP_TO_CPU, false,
+-		  EMAD, DISCARD);
++		  EMAD, FORWARD);
+ 
+ static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core)
+ {
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+index 483c8b75bebb8..46b1120a8151e 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+@@ -713,7 +713,9 @@ static void mlxsw_sp_acl_tcam_vregion_rehash_work(struct work_struct *work)
+ 			     rehash.dw.work);
+ 	int credits = MLXSW_SP_ACL_TCAM_VREGION_REHASH_CREDITS;
+ 
++	mutex_lock(&vregion->lock);
+ 	mlxsw_sp_acl_tcam_vregion_rehash(vregion->mlxsw_sp, vregion, &credits);
++	mutex_unlock(&vregion->lock);
+ 	if (credits < 0)
+ 		/* Rehash gone out of credits so it was interrupted.
+ 		 * Schedule the work as soon as possible to continue.
+@@ -723,6 +725,17 @@ static void mlxsw_sp_acl_tcam_vregion_rehash_work(struct work_struct *work)
+ 		mlxsw_sp_acl_tcam_vregion_rehash_work_schedule(vregion);
+ }
+ 
++static void
++mlxsw_sp_acl_tcam_rehash_ctx_vchunk_reset(struct mlxsw_sp_acl_tcam_rehash_ctx *ctx)
++{
++	/* The entry markers are relative to the current chunk and therefore
++	 * needs to be reset together with the chunk marker.
++	 */
++	ctx->current_vchunk = NULL;
++	ctx->start_ventry = NULL;
++	ctx->stop_ventry = NULL;
++}
++
+ static void
+ mlxsw_sp_acl_tcam_rehash_ctx_vchunk_changed(struct mlxsw_sp_acl_tcam_vchunk *vchunk)
+ {
+@@ -745,7 +758,7 @@ mlxsw_sp_acl_tcam_rehash_ctx_vregion_changed(struct mlxsw_sp_acl_tcam_vregion *v
+ 	 * the current chunk pointer to make sure all chunks
+ 	 * are properly migrated.
+ 	 */
+-	vregion->rehash.ctx.current_vchunk = NULL;
++	mlxsw_sp_acl_tcam_rehash_ctx_vchunk_reset(&vregion->rehash.ctx);
+ }
+ 
+ static struct mlxsw_sp_acl_tcam_vregion *
+@@ -818,10 +831,14 @@ mlxsw_sp_acl_tcam_vregion_destroy(struct mlxsw_sp *mlxsw_sp,
+ 	struct mlxsw_sp_acl_tcam *tcam = vregion->tcam;
+ 
+ 	if (vgroup->vregion_rehash_enabled && ops->region_rehash_hints_get) {
++		struct mlxsw_sp_acl_tcam_rehash_ctx *ctx = &vregion->rehash.ctx;
++
+ 		mutex_lock(&tcam->lock);
+ 		list_del(&vregion->tlist);
+ 		mutex_unlock(&tcam->lock);
+-		cancel_delayed_work_sync(&vregion->rehash.dw);
++		if (cancel_delayed_work_sync(&vregion->rehash.dw) &&
++		    ctx->hints_priv)
++			ops->region_rehash_hints_put(ctx->hints_priv);
+ 	}
+ 	mlxsw_sp_acl_tcam_vgroup_vregion_detach(mlxsw_sp, vregion);
+ 	if (vregion->region2)
+@@ -1187,8 +1204,14 @@ mlxsw_sp_acl_tcam_ventry_activity_get(struct mlxsw_sp *mlxsw_sp,
+ 				      struct mlxsw_sp_acl_tcam_ventry *ventry,
+ 				      bool *activity)
+ {
+-	return mlxsw_sp_acl_tcam_entry_activity_get(mlxsw_sp,
+-						    ventry->entry, activity);
++	struct mlxsw_sp_acl_tcam_vregion *vregion = ventry->vchunk->vregion;
++	int err;
++
++	mutex_lock(&vregion->lock);
++	err = mlxsw_sp_acl_tcam_entry_activity_get(mlxsw_sp, ventry->entry,
++						   activity);
++	mutex_unlock(&vregion->lock);
++	return err;
+ }
+ 
+ static int
+@@ -1222,6 +1245,8 @@ mlxsw_sp_acl_tcam_vchunk_migrate_start(struct mlxsw_sp *mlxsw_sp,
+ {
+ 	struct mlxsw_sp_acl_tcam_chunk *new_chunk;
+ 
++	WARN_ON(vchunk->chunk2);
++
+ 	new_chunk = mlxsw_sp_acl_tcam_chunk_create(mlxsw_sp, vchunk, region);
+ 	if (IS_ERR(new_chunk))
+ 		return PTR_ERR(new_chunk);
+@@ -1240,7 +1265,7 @@ mlxsw_sp_acl_tcam_vchunk_migrate_end(struct mlxsw_sp *mlxsw_sp,
+ {
+ 	mlxsw_sp_acl_tcam_chunk_destroy(mlxsw_sp, vchunk->chunk2);
+ 	vchunk->chunk2 = NULL;
+-	ctx->current_vchunk = NULL;
++	mlxsw_sp_acl_tcam_rehash_ctx_vchunk_reset(ctx);
+ }
+ 
+ static int
+@@ -1263,6 +1288,9 @@ mlxsw_sp_acl_tcam_vchunk_migrate_one(struct mlxsw_sp *mlxsw_sp,
+ 		return 0;
+ 	}
+ 
++	if (list_empty(&vchunk->ventry_list))
++		goto out;
++
+ 	/* If the migration got interrupted, we have the ventry to start from
+ 	 * stored in context.
+ 	 */
+@@ -1272,6 +1300,8 @@ mlxsw_sp_acl_tcam_vchunk_migrate_one(struct mlxsw_sp *mlxsw_sp,
+ 		ventry = list_first_entry(&vchunk->ventry_list,
+ 					  typeof(*ventry), list);
+ 
++	WARN_ON(ventry->vchunk != vchunk);
++
+ 	list_for_each_entry_from(ventry, &vchunk->ventry_list, list) {
+ 		/* During rollback, once we reach the ventry that failed
+ 		 * to migrate, we are done.
+@@ -1312,6 +1342,7 @@ mlxsw_sp_acl_tcam_vchunk_migrate_one(struct mlxsw_sp *mlxsw_sp,
+ 		}
+ 	}
+ 
++out:
+ 	mlxsw_sp_acl_tcam_vchunk_migrate_end(mlxsw_sp, vchunk, ctx);
+ 	return 0;
+ }
+@@ -1325,6 +1356,9 @@ mlxsw_sp_acl_tcam_vchunk_migrate_all(struct mlxsw_sp *mlxsw_sp,
+ 	struct mlxsw_sp_acl_tcam_vchunk *vchunk;
+ 	int err;
+ 
++	if (list_empty(&vregion->vchunk_list))
++		return 0;
++
+ 	/* If the migration got interrupted, we have the vchunk
+ 	 * we are working on stored in context.
+ 	 */
+@@ -1353,16 +1387,17 @@ mlxsw_sp_acl_tcam_vregion_migrate(struct mlxsw_sp *mlxsw_sp,
+ 	int err, err2;
+ 
+ 	trace_mlxsw_sp_acl_tcam_vregion_migrate(mlxsw_sp, vregion);
+-	mutex_lock(&vregion->lock);
+ 	err = mlxsw_sp_acl_tcam_vchunk_migrate_all(mlxsw_sp, vregion,
+ 						   ctx, credits);
+ 	if (err) {
++		if (ctx->this_is_rollback)
++			return err;
+ 		/* In case migration was not successful, we need to swap
+ 		 * so the original region pointer is assigned again
+ 		 * to vregion->region.
+ 		 */
+ 		swap(vregion->region, vregion->region2);
+-		ctx->current_vchunk = NULL;
++		mlxsw_sp_acl_tcam_rehash_ctx_vchunk_reset(ctx);
+ 		ctx->this_is_rollback = true;
+ 		err2 = mlxsw_sp_acl_tcam_vchunk_migrate_all(mlxsw_sp, vregion,
+ 							    ctx, credits);
+@@ -1373,7 +1408,6 @@ mlxsw_sp_acl_tcam_vregion_migrate(struct mlxsw_sp *mlxsw_sp,
+ 			/* Let the rollback to be continued later on. */
+ 		}
+ 	}
+-	mutex_unlock(&vregion->lock);
+ 	trace_mlxsw_sp_acl_tcam_vregion_migrate_end(mlxsw_sp, vregion);
+ 	return err;
+ }
+@@ -1422,6 +1456,7 @@ mlxsw_sp_acl_tcam_vregion_rehash_start(struct mlxsw_sp *mlxsw_sp,
+ 
+ 	ctx->hints_priv = hints_priv;
+ 	ctx->this_is_rollback = false;
++	mlxsw_sp_acl_tcam_rehash_ctx_vchunk_reset(ctx);
+ 
+ 	return 0;
+ 
+@@ -1474,7 +1509,8 @@ mlxsw_sp_acl_tcam_vregion_rehash(struct mlxsw_sp *mlxsw_sp,
+ 	err = mlxsw_sp_acl_tcam_vregion_migrate(mlxsw_sp, vregion,
+ 						ctx, credits);
+ 	if (err) {
+-		dev_err(mlxsw_sp->bus_info->dev, "Failed to migrate vregion\n");
++		dev_err_ratelimited(mlxsw_sp->bus_info->dev, "Failed to migrate vregion\n");
++		return;
+ 	}
+ 
+ 	if (*credits >= 0)
+diff --git a/drivers/net/ethernet/ti/am65-cpts.c b/drivers/net/ethernet/ti/am65-cpts.c
+index 5dc60ecabe561..e0fcdbadf2cba 100644
+--- a/drivers/net/ethernet/ti/am65-cpts.c
++++ b/drivers/net/ethernet/ti/am65-cpts.c
+@@ -649,6 +649,11 @@ static bool am65_cpts_match_tx_ts(struct am65_cpts *cpts,
+ 		struct am65_cpts_skb_cb_data *skb_cb =
+ 					(struct am65_cpts_skb_cb_data *)skb->cb;
+ 
++		if ((ptp_classify_raw(skb) & PTP_CLASS_V1) &&
++		    ((mtype_seqid & AM65_CPTS_EVENT_1_SEQUENCE_ID_MASK) ==
++		     (skb_cb->skb_mtype_seqid & AM65_CPTS_EVENT_1_SEQUENCE_ID_MASK)))
++			mtype_seqid = skb_cb->skb_mtype_seqid;
++
+ 		if (mtype_seqid == skb_cb->skb_mtype_seqid) {
+ 			u64 ns = event->timestamp;
+ 
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 2bb9820c66641..af35361a3dcee 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -907,7 +907,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	__be16 sport;
+ 	int err;
+ 
+-	if (!pskb_inet_may_pull(skb))
++	if (!skb_vlan_inet_prepare(skb))
+ 		return -EINVAL;
+ 
+ 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+@@ -1004,7 +1004,7 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	__be16 sport;
+ 	int err;
+ 
+-	if (!pskb_inet_may_pull(skb))
++	if (!skb_vlan_inet_prepare(skb))
+ 		return -EINVAL;
+ 
+ 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 4e19760cddefe..c8246363d3832 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -700,11 +700,12 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev,
+ static void gtp_dellink(struct net_device *dev, struct list_head *head)
+ {
+ 	struct gtp_dev *gtp = netdev_priv(dev);
++	struct hlist_node *next;
+ 	struct pdp_ctx *pctx;
+ 	int i;
+ 
+ 	for (i = 0; i < gtp->hash_size; i++)
+-		hlist_for_each_entry_rcu(pctx, &gtp->tid_hash[i], hlist_tid)
++		hlist_for_each_entry_safe(pctx, next, &gtp->tid_hash[i], hlist_tid)
+ 			pdp_context_delete(pctx);
+ 
+ 	list_del_rcu(&gtp->list);
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index bb0368272a1bb..77e63e7366e78 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -2141,14 +2141,16 @@ static ssize_t tun_put_user(struct tun_struct *tun,
+ 					    tun_is_little_endian(tun), true,
+ 					    vlan_hlen)) {
+ 			struct skb_shared_info *sinfo = skb_shinfo(skb);
+-			pr_err("unexpected GSO type: "
+-			       "0x%x, gso_size %d, hdr_len %d\n",
+-			       sinfo->gso_type, tun16_to_cpu(tun, gso.gso_size),
+-			       tun16_to_cpu(tun, gso.hdr_len));
+-			print_hex_dump(KERN_ERR, "tun: ",
+-				       DUMP_PREFIX_NONE,
+-				       16, 1, skb->head,
+-				       min((int)tun16_to_cpu(tun, gso.hdr_len), 64), true);
++
++			if (net_ratelimit()) {
++				netdev_err(tun->dev, "unexpected GSO type: 0x%x, gso_size %d, hdr_len %d\n",
++					   sinfo->gso_type, tun16_to_cpu(tun, gso.gso_size),
++					   tun16_to_cpu(tun, gso.hdr_len));
++				print_hex_dump(KERN_ERR, "tun: ",
++					       DUMP_PREFIX_NONE,
++					       16, 1, skb->head,
++					       min((int)tun16_to_cpu(tun, gso.hdr_len), 64), true);
++			}
+ 			WARN_ON_ONCE(1);
+ 			return -EINVAL;
+ 		}
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index 38cb863ccb911..da4a2427b005f 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -1558,21 +1558,16 @@ static int ax88179_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 			/* Skip IP alignment pseudo header */
+ 			skb_pull(skb, 2);
+ 
+-			skb->truesize = SKB_TRUESIZE(pkt_len_plus_padd);
+ 			ax88179_rx_checksum(skb, pkt_hdr);
+ 			return 1;
+ 		}
+ 
+-		ax_skb = skb_clone(skb, GFP_ATOMIC);
++		ax_skb = netdev_alloc_skb_ip_align(dev->net, pkt_len);
+ 		if (!ax_skb)
+ 			return 0;
+-		skb_trim(ax_skb, pkt_len);
++		skb_put(ax_skb, pkt_len);
++		memcpy(ax_skb->data, skb->data + 2, pkt_len);
+ 
+-		/* Skip IP alignment pseudo header */
+-		skb_pull(ax_skb, 2);
+-
+-		skb->truesize = pkt_len_plus_padd +
+-				SKB_DATA_ALIGN(sizeof(struct sk_buff));
+ 		ax88179_rx_checksum(ax_skb, pkt_hdr);
+ 		usbnet_skb_return(dev, ax_skb);
+ 
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index 3096769e718ed..b173497a3e0ca 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -1778,6 +1778,10 @@ static bool vxlan_set_mac(struct vxlan_dev *vxlan,
+ 	if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr))
+ 		return false;
+ 
++	/* Ignore packets from invalid src-address */
++	if (!is_valid_ether_addr(eth_hdr(skb)->h_source))
++		return false;
++
+ 	/* Get address from the outer IP header */
+ 	if (vxlan_get_sk_family(vs) == AF_INET) {
+ 		saddr.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+index b1335fe3b01a2..e2df8ddc1a56a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+@@ -106,6 +106,8 @@ int iwl_mvm_ftm_add_pasn_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ 	if (!pasn)
+ 		return -ENOBUFS;
+ 
++	iwl_mvm_ftm_remove_pasn_sta(mvm, addr);
++
+ 	pasn->cipher = iwl_mvm_cipher_to_location_cipher(cipher);
+ 
+ 	switch (pasn->cipher) {
+diff --git a/drivers/nfc/trf7970a.c b/drivers/nfc/trf7970a.c
+index c70f62fe321eb..081ec1105572e 100644
+--- a/drivers/nfc/trf7970a.c
++++ b/drivers/nfc/trf7970a.c
+@@ -424,7 +424,8 @@ struct trf7970a {
+ 	enum trf7970a_state		state;
+ 	struct device			*dev;
+ 	struct spi_device		*spi;
+-	struct regulator		*regulator;
++	struct regulator		*vin_regulator;
++	struct regulator		*vddio_regulator;
+ 	struct nfc_digital_dev		*ddev;
+ 	u32				quirks;
+ 	bool				is_initiator;
+@@ -1882,7 +1883,7 @@ static int trf7970a_power_up(struct trf7970a *trf)
+ 	if (trf->state != TRF7970A_ST_PWR_OFF)
+ 		return 0;
+ 
+-	ret = regulator_enable(trf->regulator);
++	ret = regulator_enable(trf->vin_regulator);
+ 	if (ret) {
+ 		dev_err(trf->dev, "%s - Can't enable VIN: %d\n", __func__, ret);
+ 		return ret;
+@@ -1925,7 +1926,7 @@ static int trf7970a_power_down(struct trf7970a *trf)
+ 	if (trf->en2_gpiod && !(trf->quirks & TRF7970A_QUIRK_EN2_MUST_STAY_LOW))
+ 		gpiod_set_value_cansleep(trf->en2_gpiod, 0);
+ 
+-	ret = regulator_disable(trf->regulator);
++	ret = regulator_disable(trf->vin_regulator);
+ 	if (ret)
+ 		dev_err(trf->dev, "%s - Can't disable VIN: %d\n", __func__,
+ 			ret);
+@@ -2064,37 +2065,37 @@ static int trf7970a_probe(struct spi_device *spi)
+ 	mutex_init(&trf->lock);
+ 	INIT_DELAYED_WORK(&trf->timeout_work, trf7970a_timeout_work_handler);
+ 
+-	trf->regulator = devm_regulator_get(&spi->dev, "vin");
+-	if (IS_ERR(trf->regulator)) {
+-		ret = PTR_ERR(trf->regulator);
++	trf->vin_regulator = devm_regulator_get(&spi->dev, "vin");
++	if (IS_ERR(trf->vin_regulator)) {
++		ret = PTR_ERR(trf->vin_regulator);
+ 		dev_err(trf->dev, "Can't get VIN regulator: %d\n", ret);
+ 		goto err_destroy_lock;
+ 	}
+ 
+-	ret = regulator_enable(trf->regulator);
++	ret = regulator_enable(trf->vin_regulator);
+ 	if (ret) {
+ 		dev_err(trf->dev, "Can't enable VIN: %d\n", ret);
+ 		goto err_destroy_lock;
+ 	}
+ 
+-	uvolts = regulator_get_voltage(trf->regulator);
++	uvolts = regulator_get_voltage(trf->vin_regulator);
+ 	if (uvolts > 4000000)
+ 		trf->chip_status_ctrl = TRF7970A_CHIP_STATUS_VRS5_3;
+ 
+-	trf->regulator = devm_regulator_get(&spi->dev, "vdd-io");
+-	if (IS_ERR(trf->regulator)) {
+-		ret = PTR_ERR(trf->regulator);
++	trf->vddio_regulator = devm_regulator_get(&spi->dev, "vdd-io");
++	if (IS_ERR(trf->vddio_regulator)) {
++		ret = PTR_ERR(trf->vddio_regulator);
+ 		dev_err(trf->dev, "Can't get VDD_IO regulator: %d\n", ret);
+-		goto err_destroy_lock;
++		goto err_disable_vin_regulator;
+ 	}
+ 
+-	ret = regulator_enable(trf->regulator);
++	ret = regulator_enable(trf->vddio_regulator);
+ 	if (ret) {
+ 		dev_err(trf->dev, "Can't enable VDD_IO: %d\n", ret);
+-		goto err_destroy_lock;
++		goto err_disable_vin_regulator;
+ 	}
+ 
+-	if (regulator_get_voltage(trf->regulator) == 1800000) {
++	if (regulator_get_voltage(trf->vddio_regulator) == 1800000) {
+ 		trf->io_ctrl = TRF7970A_REG_IO_CTRL_IO_LOW;
+ 		dev_dbg(trf->dev, "trf7970a config vdd_io to 1.8V\n");
+ 	}
+@@ -2107,7 +2108,7 @@ static int trf7970a_probe(struct spi_device *spi)
+ 	if (!trf->ddev) {
+ 		dev_err(trf->dev, "Can't allocate NFC digital device\n");
+ 		ret = -ENOMEM;
+-		goto err_disable_regulator;
++		goto err_disable_vddio_regulator;
+ 	}
+ 
+ 	nfc_digital_set_parent_dev(trf->ddev, trf->dev);
+@@ -2136,8 +2137,10 @@ static int trf7970a_probe(struct spi_device *spi)
+ 	trf7970a_shutdown(trf);
+ err_free_ddev:
+ 	nfc_digital_free_device(trf->ddev);
+-err_disable_regulator:
+-	regulator_disable(trf->regulator);
++err_disable_vddio_regulator:
++	regulator_disable(trf->vddio_regulator);
++err_disable_vin_regulator:
++	regulator_disable(trf->vin_regulator);
+ err_destroy_lock:
+ 	mutex_destroy(&trf->lock);
+ 	return ret;
+@@ -2156,7 +2159,8 @@ static int trf7970a_remove(struct spi_device *spi)
+ 	nfc_digital_unregister_device(trf->ddev);
+ 	nfc_digital_free_device(trf->ddev);
+ 
+-	regulator_disable(trf->regulator);
++	regulator_disable(trf->vddio_regulator);
++	regulator_disable(trf->vin_regulator);
+ 
+ 	mutex_destroy(&trf->lock);
+ 
+diff --git a/drivers/staging/comedi/drivers/vmk80xx.c b/drivers/staging/comedi/drivers/vmk80xx.c
+index ccc65cfc519f5..51b814e44783e 100644
+--- a/drivers/staging/comedi/drivers/vmk80xx.c
++++ b/drivers/staging/comedi/drivers/vmk80xx.c
+@@ -642,33 +642,22 @@ static int vmk80xx_find_usb_endpoints(struct comedi_device *dev)
+ 	struct vmk80xx_private *devpriv = dev->private;
+ 	struct usb_interface *intf = comedi_to_usb_interface(dev);
+ 	struct usb_host_interface *iface_desc = intf->cur_altsetting;
+-	struct usb_endpoint_descriptor *ep_desc;
+-	int i;
+-
+-	if (iface_desc->desc.bNumEndpoints != 2)
+-		return -ENODEV;
+-
+-	for (i = 0; i < iface_desc->desc.bNumEndpoints; i++) {
+-		ep_desc = &iface_desc->endpoint[i].desc;
+-
+-		if (usb_endpoint_is_int_in(ep_desc) ||
+-		    usb_endpoint_is_bulk_in(ep_desc)) {
+-			if (!devpriv->ep_rx)
+-				devpriv->ep_rx = ep_desc;
+-			continue;
+-		}
++	struct usb_endpoint_descriptor *ep_rx_desc, *ep_tx_desc;
++	int ret;
+ 
+-		if (usb_endpoint_is_int_out(ep_desc) ||
+-		    usb_endpoint_is_bulk_out(ep_desc)) {
+-			if (!devpriv->ep_tx)
+-				devpriv->ep_tx = ep_desc;
+-			continue;
+-		}
+-	}
++	if (devpriv->model == VMK8061_MODEL)
++		ret = usb_find_common_endpoints(iface_desc, &ep_rx_desc,
++						&ep_tx_desc, NULL, NULL);
++	else
++		ret = usb_find_common_endpoints(iface_desc, NULL, NULL,
++						&ep_rx_desc, &ep_tx_desc);
+ 
+-	if (!devpriv->ep_rx || !devpriv->ep_tx)
++	if (ret)
+ 		return -ENODEV;
+ 
++	devpriv->ep_rx = ep_rx_desc;
++	devpriv->ep_tx = ep_tx_desc;
++
+ 	if (!usb_endpoint_maxp(devpriv->ep_rx) || !usb_endpoint_maxp(devpriv->ep_tx))
+ 		return -EINVAL;
+ 
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index 22e5c4de345b5..b13a944eb4620 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -2402,22 +2402,29 @@ void tb_switch_unconfigure_link(struct tb_switch *sw)
+ {
+ 	struct tb_port *up, *down;
+ 
+-	if (sw->is_unplugged)
+-		return;
+ 	if (!tb_route(sw) || tb_switch_is_icm(sw))
+ 		return;
+ 
++	/*
++	 * Unconfigure downstream port so that wake-on-connect can be
++	 * configured after router unplug. No need to unconfigure upstream port
++	 * since its router is unplugged.
++	 */
+ 	up = tb_upstream_port(sw);
+-	if (tb_switch_is_usb4(up->sw))
+-		usb4_port_unconfigure(up);
+-	else
+-		tb_lc_unconfigure_port(up);
+-
+ 	down = up->remote;
+ 	if (tb_switch_is_usb4(down->sw))
+ 		usb4_port_unconfigure(down);
+ 	else
+ 		tb_lc_unconfigure_port(down);
++
++	if (sw->is_unplugged)
++		return;
++
++	up = tb_upstream_port(sw);
++	if (tb_switch_is_usb4(up->sw))
++		usb4_port_unconfigure(up);
++	else
++		tb_lc_unconfigure_port(up);
+ }
+ 
+ static int tb_switch_port_hotplug_enable(struct tb_switch *sw)
+@@ -2631,7 +2638,26 @@ static int tb_switch_set_wake(struct tb_switch *sw, unsigned int flags)
+ 	return tb_lc_set_wake(sw, flags);
+ }
+ 
+-int tb_switch_resume(struct tb_switch *sw)
++static void tb_switch_check_wakes(struct tb_switch *sw)
++{
++	if (device_may_wakeup(&sw->dev)) {
++		if (tb_switch_is_usb4(sw))
++			usb4_switch_check_wakes(sw);
++	}
++}
++
++/**
++ * tb_switch_resume() - Resume a switch after sleep
++ * @sw: Switch to resume
++ * @runtime: Is this resume from runtime suspend or system sleep
++ *
++ * Resumes and re-enumerates router (and all its children), if still plugged
++ * after suspend. Don't enumerate device router whose UID was changed during
++ * suspend. If this is resume from system sleep, notifies PM core about the
++ * wakes occurred during suspend. Disables all wakes, except USB4 wake of
++ * upstream port for USB4 routers that shall be always enabled.
++ */
++int tb_switch_resume(struct tb_switch *sw, bool runtime)
+ {
+ 	struct tb_port *port;
+ 	int err;
+@@ -2676,6 +2702,9 @@ int tb_switch_resume(struct tb_switch *sw)
+ 	if (err)
+ 		return err;
+ 
++	if (!runtime)
++		tb_switch_check_wakes(sw);
++
+ 	/* Disable wakes */
+ 	tb_switch_set_wake(sw, 0);
+ 
+@@ -2702,7 +2731,8 @@ int tb_switch_resume(struct tb_switch *sw)
+ 			 */
+ 			if (tb_port_unlock(port))
+ 				tb_port_warn(port, "failed to unlock port\n");
+-			if (port->remote && tb_switch_resume(port->remote->sw)) {
++			if (port->remote &&
++			    tb_switch_resume(port->remote->sw, runtime)) {
+ 				tb_port_warn(port,
+ 					     "lost during suspend, disconnecting\n");
+ 				tb_sw_set_unplugged(port->remote->sw);
+diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
+index a56ea540af00b..26a78847985ba 100644
+--- a/drivers/thunderbolt/tb.c
++++ b/drivers/thunderbolt/tb.c
+@@ -1385,7 +1385,7 @@ static int tb_resume_noirq(struct tb *tb)
+ 	/* remove any pci devices the firmware might have setup */
+ 	tb_switch_reset(tb->root_switch);
+ 
+-	tb_switch_resume(tb->root_switch);
++	tb_switch_resume(tb->root_switch, false);
+ 	tb_free_invalid_tunnels(tb);
+ 	tb_free_unplugged_children(tb->root_switch);
+ 	tb_restore_children(tb->root_switch);
+@@ -1488,7 +1488,7 @@ static int tb_runtime_resume(struct tb *tb)
+ 	struct tb_tunnel *tunnel, *n;
+ 
+ 	mutex_lock(&tb->lock);
+-	tb_switch_resume(tb->root_switch);
++	tb_switch_resume(tb->root_switch, true);
+ 	tb_free_invalid_tunnels(tb);
+ 	tb_restore_children(tb->root_switch);
+ 	list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list)
+diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
+index 266f3bf8ff5c6..f3df5655f7a1b 100644
+--- a/drivers/thunderbolt/tb.h
++++ b/drivers/thunderbolt/tb.h
+@@ -653,7 +653,7 @@ int tb_switch_configure(struct tb_switch *sw);
+ int tb_switch_add(struct tb_switch *sw);
+ void tb_switch_remove(struct tb_switch *sw);
+ void tb_switch_suspend(struct tb_switch *sw, bool runtime);
+-int tb_switch_resume(struct tb_switch *sw);
++int tb_switch_resume(struct tb_switch *sw, bool runtime);
+ int tb_switch_reset(struct tb_switch *sw);
+ void tb_sw_set_unplugged(struct tb_switch *sw);
+ struct tb_port *tb_switch_find_port(struct tb_switch *sw,
+@@ -957,6 +957,7 @@ static inline struct tb_retimer *tb_to_retimer(struct device *dev)
+ 	return NULL;
+ }
+ 
++void usb4_switch_check_wakes(struct tb_switch *sw);
+ int usb4_switch_setup(struct tb_switch *sw);
+ int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid);
+ int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
+diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
+index 5b45c45e7c5bf..78c895abb1e48 100644
+--- a/drivers/thunderbolt/usb4.c
++++ b/drivers/thunderbolt/usb4.c
+@@ -197,15 +197,18 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
+ 	return 0;
+ }
+ 
+-static void usb4_switch_check_wakes(struct tb_switch *sw)
++/**
++ * usb4_switch_check_wakes() - Check for wakes and notify PM core about them
++ * @sw: Router whose wakes to check
++ *
++ * Checks wakes occurred during suspend and notify the PM core about them.
++ */
++void usb4_switch_check_wakes(struct tb_switch *sw)
+ {
+ 	struct tb_port *port;
+ 	bool wakeup = false;
+ 	u32 val;
+ 
+-	if (!device_may_wakeup(&sw->dev))
+-		return;
+-
+ 	if (tb_route(sw)) {
+ 		if (tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_6, 1))
+ 			return;
+@@ -270,8 +273,6 @@ int usb4_switch_setup(struct tb_switch *sw)
+ 	u32 val = 0;
+ 	int ret;
+ 
+-	usb4_switch_check_wakes(sw);
+-
+ 	if (!tb_route(sw))
+ 		return 0;
+ 
+diff --git a/drivers/tty/serial/mxs-auart.c b/drivers/tty/serial/mxs-auart.c
+index b784323a6a7b0..be6c8b9f1606e 100644
+--- a/drivers/tty/serial/mxs-auart.c
++++ b/drivers/tty/serial/mxs-auart.c
+@@ -1122,11 +1122,13 @@ static void mxs_auart_set_ldisc(struct uart_port *port,
+ 
+ static irqreturn_t mxs_auart_irq_handle(int irq, void *context)
+ {
+-	u32 istat;
++	u32 istat, stat;
+ 	struct mxs_auart_port *s = context;
+ 	u32 mctrl_temp = s->mctrl_prev;
+-	u32 stat = mxs_read(s, REG_STAT);
+ 
++	uart_port_lock(&s->port);
++
++	stat = mxs_read(s, REG_STAT);
+ 	istat = mxs_read(s, REG_INTR);
+ 
+ 	/* ack irq */
+@@ -1162,6 +1164,8 @@ static irqreturn_t mxs_auart_irq_handle(int irq, void *context)
+ 		istat &= ~AUART_INTR_TXIS;
+ 	}
+ 
++	uart_port_unlock(&s->port);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/tty/serial/pmac_zilog.c b/drivers/tty/serial/pmac_zilog.c
+index d6aef8a1f0a48..1d0717fc3729e 100644
+--- a/drivers/tty/serial/pmac_zilog.c
++++ b/drivers/tty/serial/pmac_zilog.c
+@@ -217,7 +217,6 @@ static bool pmz_receive_chars(struct uart_pmac_port *uap)
+ {
+ 	struct tty_port *port;
+ 	unsigned char ch, r1, drop, flag;
+-	int loops = 0;
+ 
+ 	/* Sanity check, make sure the old bug is no longer happening */
+ 	if (uap->port.state == NULL) {
+@@ -298,24 +297,11 @@ static bool pmz_receive_chars(struct uart_pmac_port *uap)
+ 		if (r1 & Rx_OVR)
+ 			tty_insert_flip_char(port, 0, TTY_OVERRUN);
+ 	next_char:
+-		/* We can get stuck in an infinite loop getting char 0 when the
+-		 * line is in a wrong HW state, we break that here.
+-		 * When that happens, I disable the receive side of the driver.
+-		 * Note that what I've been experiencing is a real irq loop where
+-		 * I'm getting flooded regardless of the actual port speed.
+-		 * Something strange is going on with the HW
+-		 */
+-		if ((++loops) > 1000)
+-			goto flood;
+ 		ch = read_zsreg(uap, R0);
+ 		if (!(ch & Rx_CH_AV))
+ 			break;
+ 	}
+ 
+-	return true;
+- flood:
+-	pmz_interrupt_control(uap, 0);
+-	pmz_error("pmz: rx irq flood !\n");
+ 	return true;
+ }
+ 
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 58423b16022b4..80332b6a1963e 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -471,7 +471,6 @@ static ssize_t wdm_write
+ static int service_outstanding_interrupt(struct wdm_device *desc)
+ {
+ 	int rv = 0;
+-	int used;
+ 
+ 	/* submit read urb only if the device is waiting for it */
+ 	if (!desc->resp_count || !--desc->resp_count)
+@@ -486,10 +485,7 @@ static int service_outstanding_interrupt(struct wdm_device *desc)
+ 		goto out;
+ 	}
+ 
+-	used = test_and_set_bit(WDM_RESPONDING, &desc->flags);
+-	if (used)
+-		goto out;
+-
++	set_bit(WDM_RESPONDING, &desc->flags);
+ 	spin_unlock_irq(&desc->iuspin);
+ 	rv = usb_submit_urb(desc->response, GFP_KERNEL);
+ 	spin_lock_irq(&desc->iuspin);
+diff --git a/drivers/usb/core/port.c b/drivers/usb/core/port.c
+index 336ecf6e19678..86e8585a55122 100644
+--- a/drivers/usb/core/port.c
++++ b/drivers/usb/core/port.c
+@@ -295,8 +295,10 @@ static void usb_port_shutdown(struct device *dev)
+ {
+ 	struct usb_port *port_dev = to_usb_port(dev);
+ 
+-	if (port_dev->child)
++	if (port_dev->child) {
+ 		usb_disable_usb2_hardware_lpm(port_dev->child);
++		usb_unlocked_disable_lpm(port_dev->child);
++	}
+ }
+ 
+ static const struct dev_pm_ops usb_port_pm_ops = {
+diff --git a/drivers/usb/dwc2/hcd_ddma.c b/drivers/usb/dwc2/hcd_ddma.c
+index 6a4aa71da103f..d6fa02d851e49 100644
+--- a/drivers/usb/dwc2/hcd_ddma.c
++++ b/drivers/usb/dwc2/hcd_ddma.c
+@@ -897,13 +897,15 @@ static int dwc2_cmpl_host_isoc_dma_desc(struct dwc2_hsotg *hsotg,
+ 	struct dwc2_dma_desc *dma_desc;
+ 	struct dwc2_hcd_iso_packet_desc *frame_desc;
+ 	u16 frame_desc_idx;
+-	struct urb *usb_urb = qtd->urb->priv;
++	struct urb *usb_urb;
+ 	u16 remain = 0;
+ 	int rc = 0;
+ 
+ 	if (!qtd->urb)
+ 		return -EINVAL;
+ 
++	usb_urb = qtd->urb->priv;
++
+ 	dma_sync_single_for_cpu(hsotg->dev, qh->desc_list_dma + (idx *
+ 				sizeof(struct dwc2_dma_desc)),
+ 				sizeof(struct dwc2_dma_desc),
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index fb1eba835e508..400e3e10b435c 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -255,6 +255,10 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EM061K_LMS		0x0124
+ #define QUECTEL_PRODUCT_EC25			0x0125
+ #define QUECTEL_PRODUCT_EM060K_128		0x0128
++#define QUECTEL_PRODUCT_EM060K_129		0x0129
++#define QUECTEL_PRODUCT_EM060K_12a		0x012a
++#define QUECTEL_PRODUCT_EM060K_12b		0x012b
++#define QUECTEL_PRODUCT_EM060K_12c		0x012c
+ #define QUECTEL_PRODUCT_EG91			0x0191
+ #define QUECTEL_PRODUCT_EG95			0x0195
+ #define QUECTEL_PRODUCT_BG96			0x0296
+@@ -1218,6 +1222,18 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0x00, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_129, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_129, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_129, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12a, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12a, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12a, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12b, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12b, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12b, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12c, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12c, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12c, 0xff, 0xff, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0x00, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x40) },
+@@ -1360,6 +1376,12 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff),	/* Telit FE990 (ECM) */
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a0, 0xff),	/* Telit FN20C04 (rmnet) */
++	  .driver_info = RSVD(0) | NCTRL(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a4, 0xff),	/* Telit FN20C04 (rmnet) */
++	  .driver_info = RSVD(0) | NCTRL(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a9, 0xff),	/* Telit FN20C04 (rmnet) */
++	  .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+@@ -2052,6 +2074,10 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, 0x9803, 0xff),
+ 	  .driver_info = RSVD(4) },
++	{ USB_DEVICE(LONGCHEER_VENDOR_ID, 0x9b05),	/* Longsung U8300 */
++	  .driver_info = RSVD(4) | RSVD(5) },
++	{ USB_DEVICE(LONGCHEER_VENDOR_ID, 0x9b3c),	/* Longsung U9300 */
++	  .driver_info = RSVD(0) | RSVD(4) },
+ 	{ USB_DEVICE(LONGCHEER_VENDOR_ID, ZOOM_PRODUCT_4597) },
+ 	{ USB_DEVICE(LONGCHEER_VENDOR_ID, IBALL_3_5G_CONNECT) },
+ 	{ USB_DEVICE(HAIER_VENDOR_ID, HAIER_PRODUCT_CE100) },
+@@ -2272,15 +2298,29 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) },	/* Fibocom FG150 Diag */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) },		/* Fibocom FG150 AT */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0111, 0xff) },			/* Fibocom FM160 (MBIM mode) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0115, 0xff),			/* Fibocom FM135 (laptop MBIM) */
++	  .driver_info = RSVD(5) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) },			/* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a2, 0xff) },			/* Fibocom FM101-GL (laptop MBIM) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a3, 0xff) },			/* Fibocom FM101-GL (laptop MBIM) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a4, 0xff),			/* Fibocom FM101-GL (laptop MBIM) */
+ 	  .driver_info = RSVD(4) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a04, 0xff) },			/* Fibocom FM650-CN (ECM mode) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a05, 0xff) },			/* Fibocom FM650-CN (NCM mode) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a06, 0xff) },			/* Fibocom FM650-CN (RNDIS mode) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a07, 0xff) },			/* Fibocom FM650-CN (MBIM mode) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) },			/* LongSung M5710 */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) },			/* GosunCn GM500 RNDIS */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) },			/* GosunCn GM500 MBIM */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) },			/* GosunCn GM500 ECM/NCM */
++	{ USB_DEVICE(0x33f8, 0x0104),						/* Rolling RW101-GL (laptop RMNET) */
++	  .driver_info = RSVD(4) | RSVD(5) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x01a2, 0xff) },			/* Rolling RW101-GL (laptop MBIM) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x01a3, 0xff) },			/* Rolling RW101-GL (laptop MBIM) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x01a4, 0xff),			/* Rolling RW101-GL (laptop MBIM) */
++	  .driver_info = RSVD(4) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x0115, 0xff),			/* Rolling RW135-GL (laptop MBIM) */
++	  .driver_info = RSVD(5) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) },
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index ab67160f72841..8ed9c9b63eb16 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -2513,9 +2513,19 @@ bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq)
+ 	r = vhost_get_avail_idx(vq, &avail_idx);
+ 	if (unlikely(r))
+ 		return false;
++
+ 	vq->avail_idx = vhost16_to_cpu(vq, avail_idx);
++	if (vq->avail_idx != vq->last_avail_idx) {
++		/* Since we have updated avail_idx, the following
++		 * call to vhost_get_vq_desc() will read available
++		 * ring entries. Make sure that read happens after
++		 * the avail_idx read.
++		 */
++		smp_rmb();
++		return false;
++	}
+ 
+-	return vq->avail_idx == vq->last_avail_idx;
++	return true;
+ }
+ EXPORT_SYMBOL_GPL(vhost_vq_avail_empty);
+ 
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index d2f77b1242a13..f1731eeb86a7f 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -2315,20 +2315,14 @@ struct btrfs_data_container *init_data_container(u32 total_bytes)
+ 	size_t alloc_bytes;
+ 
+ 	alloc_bytes = max_t(size_t, total_bytes, sizeof(*data));
+-	data = kvmalloc(alloc_bytes, GFP_KERNEL);
++	data = kvzalloc(alloc_bytes, GFP_KERNEL);
+ 	if (!data)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	if (total_bytes >= sizeof(*data)) {
++	if (total_bytes >= sizeof(*data))
+ 		data->bytes_left = total_bytes - sizeof(*data);
+-		data->bytes_missing = 0;
+-	} else {
++	else
+ 		data->bytes_missing = sizeof(*data) - total_bytes;
+-		data->bytes_left = 0;
+-	}
+-
+-	data->elem_cnt = 0;
+-	data->elem_missed = 0;
+ 
+ 	return data;
+ }
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index bcffe7886530a..cdfc791b3c405 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -1135,6 +1135,9 @@ __btrfs_commit_inode_delayed_items(struct btrfs_trans_handle *trans,
+ 	if (ret)
+ 		return ret;
+ 
++	ret = btrfs_record_root_in_trans(trans, node->root);
++	if (ret)
++		return ret;
+ 	ret = btrfs_update_delayed_inode(trans, node->root, path, node);
+ 	return ret;
+ }
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 7f849310303b1..50669ff9346c6 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -4114,6 +4114,8 @@ void btrfs_qgroup_convert_reserved_meta(struct btrfs_root *root, int num_bytes)
+ 				      BTRFS_QGROUP_RSV_META_PREALLOC);
+ 	trace_qgroup_meta_convert(root, num_bytes);
+ 	qgroup_convert_meta(fs_info, root->root_key.objectid, num_bytes);
++	if (!sb_rdonly(fs_info->sb))
++		add_root_meta_rsv(root, num_bytes, BTRFS_QGROUP_RSV_META_PERTRANS);
+ }
+ 
+ /*
+diff --git a/fs/nilfs2/dir.c b/fs/nilfs2/dir.c
+index 81394e22d0a09..eb7de9e2a384e 100644
+--- a/fs/nilfs2/dir.c
++++ b/fs/nilfs2/dir.c
+@@ -243,7 +243,7 @@ nilfs_filetype_table[NILFS_FT_MAX] = {
+ 
+ #define S_SHIFT 12
+ static unsigned char
+-nilfs_type_by_mode[S_IFMT >> S_SHIFT] = {
++nilfs_type_by_mode[(S_IFMT >> S_SHIFT) + 1] = {
+ 	[S_IFREG >> S_SHIFT]	= NILFS_FT_REG_FILE,
+ 	[S_IFDIR >> S_SHIFT]	= NILFS_FT_DIR,
+ 	[S_IFCHR >> S_SHIFT]	= NILFS_FT_CHRDEV,
+diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
+index 96d0da65e0887..f51b2396830f4 100644
+--- a/fs/sysfs/file.c
++++ b/fs/sysfs/file.c
+@@ -429,6 +429,8 @@ struct kernfs_node *sysfs_break_active_protection(struct kobject *kobj,
+ 	kn = kernfs_find_and_get(kobj->sd, attr->name);
+ 	if (kn)
+ 		kernfs_break_active_protection(kn);
++	else
++		kobject_put(kobj);
+ 	return kn;
+ }
+ EXPORT_SYMBOL_GPL(sysfs_break_active_protection);
+diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
+index b060514bf25d2..bbd3aeb2f9c5b 100644
+--- a/include/linux/etherdevice.h
++++ b/include/linux/etherdevice.h
+@@ -542,6 +542,31 @@ static inline unsigned long compare_ether_header(const void *a, const void *b)
+ #endif
+ }
+ 
++/**
++ * eth_skb_pkt_type - Assign packet type if destination address does not match
++ * @skb: Assigned a packet type if address does not match @dev address
++ * @dev: Network device used to compare packet address against
++ *
++ * If the destination MAC address of the packet does not match the network
++ * device address, assign an appropriate packet type.
++ */
++static inline void eth_skb_pkt_type(struct sk_buff *skb,
++				    const struct net_device *dev)
++{
++	const struct ethhdr *eth = eth_hdr(skb);
++
++	if (unlikely(!ether_addr_equal_64bits(eth->h_dest, dev->dev_addr))) {
++		if (unlikely(is_multicast_ether_addr_64bits(eth->h_dest))) {
++			if (ether_addr_equal_64bits(eth->h_dest, dev->broadcast))
++				skb->pkt_type = PACKET_BROADCAST;
++			else
++				skb->pkt_type = PACKET_MULTICAST;
++		} else {
++			skb->pkt_type = PACKET_OTHERHOST;
++		}
++	}
++}
++
+ /**
+  * eth_skb_pad - Pad buffer to mininum number of octets for Ethernet frame
+  * @skb: Buffer to pad
+diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
+index 3ed4e8771b64e..a53adf4316a53 100644
+--- a/include/linux/irqflags.h
++++ b/include/linux/irqflags.h
+@@ -133,7 +133,7 @@ do {						\
+ # define lockdep_softirq_enter()		do { } while (0)
+ # define lockdep_softirq_exit()			do { } while (0)
+ # define lockdep_hrtimer_enter(__hrtimer)	false
+-# define lockdep_hrtimer_exit(__context)	do { } while (0)
++# define lockdep_hrtimer_exit(__context)	do { (void)(__context); } while (0)
+ # define lockdep_posixtimer_enter()		do { } while (0)
+ # define lockdep_posixtimer_exit()		do { } while (0)
+ # define lockdep_irq_work_enter(__work)		do { } while (0)
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index 46a21984c0b22..48b9cd8fdcf7c 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -260,6 +260,85 @@ struct uart_port {
+ 	void			*private_data;		/* generic platform data pointer */
+ };
+ 
++/**
++ * uart_port_lock - Lock the UART port
++ * @up:		Pointer to UART port structure
++ */
++static inline void uart_port_lock(struct uart_port *up)
++{
++	spin_lock(&up->lock);
++}
++
++/**
++ * uart_port_lock_irq - Lock the UART port and disable interrupts
++ * @up:		Pointer to UART port structure
++ */
++static inline void uart_port_lock_irq(struct uart_port *up)
++{
++	spin_lock_irq(&up->lock);
++}
++
++/**
++ * uart_port_lock_irqsave - Lock the UART port, save and disable interrupts
++ * @up:		Pointer to UART port structure
++ * @flags:	Pointer to interrupt flags storage
++ */
++static inline void uart_port_lock_irqsave(struct uart_port *up, unsigned long *flags)
++{
++	spin_lock_irqsave(&up->lock, *flags);
++}
++
++/**
++ * uart_port_trylock - Try to lock the UART port
++ * @up:		Pointer to UART port structure
++ *
++ * Returns: True if lock was acquired, false otherwise
++ */
++static inline bool uart_port_trylock(struct uart_port *up)
++{
++	return spin_trylock(&up->lock);
++}
++
++/**
++ * uart_port_trylock_irqsave - Try to lock the UART port, save and disable interrupts
++ * @up:		Pointer to UART port structure
++ * @flags:	Pointer to interrupt flags storage
++ *
++ * Returns: True if lock was acquired, false otherwise
++ */
++static inline bool uart_port_trylock_irqsave(struct uart_port *up, unsigned long *flags)
++{
++	return spin_trylock_irqsave(&up->lock, *flags);
++}
++
++/**
++ * uart_port_unlock - Unlock the UART port
++ * @up:		Pointer to UART port structure
++ */
++static inline void uart_port_unlock(struct uart_port *up)
++{
++	spin_unlock(&up->lock);
++}
++
++/**
++ * uart_port_unlock_irq - Unlock the UART port and re-enable interrupts
++ * @up:		Pointer to UART port structure
++ */
++static inline void uart_port_unlock_irq(struct uart_port *up)
++{
++	spin_unlock_irq(&up->lock);
++}
++
++/**
++ * uart_port_unlock_irqrestore - Unlock the UART port, restore interrupts
++ * @up:		Pointer to UART port structure
++ * @flags:	The saved interrupt flags for restore
++ */
++static inline void uart_port_unlock_irqrestore(struct uart_port *up, unsigned long flags)
++{
++	spin_unlock_irqrestore(&up->lock, flags);
++}
++
+ static inline int serial_port_in(struct uart_port *up, int offset)
+ {
+ 	return up->serial_in(up, offset);
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index f7ed0471d5a85..dbf0993153d35 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -577,7 +577,7 @@ struct trace_event_file {
+ 	}								\
+ 	early_initcall(trace_init_perf_perm_##name);
+ 
+-#define PERF_MAX_TRACE_SIZE	2048
++#define PERF_MAX_TRACE_SIZE	8192
+ 
+ #define MAX_FILTER_STR_VAL	256U	/* Should handle KSYM_SYMBOL_LEN */
+ 
+diff --git a/include/linux/u64_stats_sync.h b/include/linux/u64_stats_sync.h
+index e81856c0ba134..7c6f81b8971aa 100644
+--- a/include/linux/u64_stats_sync.h
++++ b/include/linux/u64_stats_sync.h
+@@ -116,7 +116,11 @@ static inline void u64_stats_inc(u64_stats_t *p)
+ #endif
+ 
+ #if BITS_PER_LONG == 32 && defined(CONFIG_SMP)
+-#define u64_stats_init(syncp)	seqcount_init(&(syncp)->seq)
++#define u64_stats_init(syncp)				\
++	do {						\
++		struct u64_stats_sync *__s = (syncp);	\
++		seqcount_init(&__s->seq);		\
++	} while (0)
+ #else
+ static inline void u64_stats_init(struct u64_stats_sync *syncp)
+ {
+diff --git a/include/net/addrconf.h b/include/net/addrconf.h
+index 4d0c4cf1d4c88..f666d3628d6aa 100644
+--- a/include/net/addrconf.h
++++ b/include/net/addrconf.h
+@@ -437,6 +437,10 @@ static inline void in6_ifa_hold(struct inet6_ifaddr *ifp)
+ 	refcount_inc(&ifp->refcnt);
+ }
+ 
++static inline bool in6_ifa_hold_safe(struct inet6_ifaddr *ifp)
++{
++	return refcount_inc_not_zero(&ifp->refcnt);
++}
+ 
+ /*
+  *	compute link-local solicited-node multicast address
+diff --git a/include/net/af_unix.h b/include/net/af_unix.h
+index a6b6ce8b918b7..349279c4d2672 100644
+--- a/include/net/af_unix.h
++++ b/include/net/af_unix.h
+@@ -56,7 +56,7 @@ struct unix_sock {
+ 	struct mutex		iolock, bindlock;
+ 	struct sock		*peer;
+ 	struct list_head	link;
+-	atomic_long_t		inflight;
++	unsigned long		inflight;
+ 	spinlock_t		lock;
+ 	unsigned long		gc_flags;
+ #define UNIX_GC_CANDIDATE	0
+@@ -77,6 +77,9 @@ enum unix_socket_lock_class {
+ 	U_LOCK_NORMAL,
+ 	U_LOCK_SECOND,	/* for double locking, see unix_state_double_lock(). */
+ 	U_LOCK_DIAG, /* used while dumping icons, see sk_diag_dump_icons(). */
++	U_LOCK_GC_LISTENER, /* used for listening socket while determining gc
++			     * candidates to close a small race window.
++			     */
+ };
+ 
+ static inline void unix_state_lock_nested(struct sock *sk,
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index 355835639ae58..7d2bd562da4be 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -487,6 +487,15 @@ static inline struct sk_buff *bt_skb_sendmmsg(struct sock *sk,
+ 	return skb;
+ }
+ 
++static inline int bt_copy_from_sockptr(void *dst, size_t dst_size,
++				       sockptr_t src, size_t src_size)
++{
++	if (dst_size > src_size)
++		return -EINVAL;
++
++	return copy_from_sockptr(dst, src, dst_size);
++}
++
+ int bt_to_errno(u16 code);
+ 
+ void hci_sock_set_flag(struct sock *sk, int nr);
+diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h
+index 58d8e6260aa13..1f016af0622bd 100644
+--- a/include/net/ip_tunnels.h
++++ b/include/net/ip_tunnels.h
+@@ -333,6 +333,39 @@ static inline bool pskb_inet_may_pull(struct sk_buff *skb)
+ 	return pskb_network_may_pull(skb, nhlen);
+ }
+ 
++/* Variant of pskb_inet_may_pull().
++ */
++static inline bool skb_vlan_inet_prepare(struct sk_buff *skb)
++{
++	int nhlen = 0, maclen = ETH_HLEN;
++	__be16 type = skb->protocol;
++
++	/* Essentially this is skb_protocol(skb, true)
++	 * And we get MAC len.
++	 */
++	if (eth_type_vlan(type))
++		type = __vlan_get_protocol(skb, type, &maclen);
++
++	switch (type) {
++#if IS_ENABLED(CONFIG_IPV6)
++	case htons(ETH_P_IPV6):
++		nhlen = sizeof(struct ipv6hdr);
++		break;
++#endif
++	case htons(ETH_P_IP):
++		nhlen = sizeof(struct iphdr);
++		break;
++	}
++	/* For ETH_P_IPV6/ETH_P_IP we make sure to pull
++	 * a base network header in skb->head.
++	 */
++	if (!pskb_may_pull(skb, maclen + nhlen))
++		return false;
++
++	skb_set_network_header(skb, maclen);
++	return true;
++}
++
+ static inline int ip_encap_hlen(struct ip_tunnel_encap *e)
+ {
+ 	const struct ip_tunnel_encap_ops *ops;
+diff --git a/init/main.c b/init/main.c
+index 298989b0d4e88..b1593bdaf3b97 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -627,6 +627,8 @@ static void __init setup_command_line(char *command_line)
+ 	if (!saved_command_line)
+ 		panic("%s: Failed to allocate %zu bytes\n", __func__, len + ilen);
+ 
++	len = xlen + strlen(command_line) + 1;
++
+ 	static_command_line = memblock_alloc(len, SMP_CACHE_BYTES);
+ 	if (!static_command_line)
+ 		panic("%s: Failed to allocate %zu bytes\n", __func__, len);
+diff --git a/kernel/bounds.c b/kernel/bounds.c
+index a94e3769347ee..a3e1d3dfad312 100644
+--- a/kernel/bounds.c
++++ b/kernel/bounds.c
+@@ -19,7 +19,7 @@ int main(void)
+ 	DEFINE(NR_PAGEFLAGS, __NR_PAGEFLAGS);
+ 	DEFINE(MAX_NR_ZONES, __MAX_NR_ZONES);
+ #ifdef CONFIG_SMP
+-	DEFINE(NR_CPUS_BITS, bits_per(CONFIG_NR_CPUS));
++	DEFINE(NR_CPUS_BITS, order_base_2(CONFIG_NR_CPUS));
+ #endif
+ 	DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t));
+ 	/* End of constants */
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index abf717c4f57c2..d84ba5a13d171 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2600,7 +2600,8 @@ enum cpu_mitigations {
+ };
+ 
+ static enum cpu_mitigations cpu_mitigations __ro_after_init =
+-	CPU_MITIGATIONS_AUTO;
++	IS_ENABLED(CONFIG_CPU_MITIGATIONS) ? CPU_MITIGATIONS_AUTO :
++					     CPU_MITIGATIONS_OFF;
+ 
+ static int __init mitigations_parse_cmdline(char *arg)
+ {
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 05d3e156a7d63..dba6541c0fc3c 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1647,10 +1647,17 @@ static int check_kprobe_address_safe(struct kprobe *p,
+ 	jump_label_lock();
+ 	preempt_disable();
+ 
+-	/* Ensure it is not in reserved area nor out of text */
+-	if (!(core_kernel_text((unsigned long) p->addr) ||
+-	    is_module_text_address((unsigned long) p->addr)) ||
+-	    in_gate_area_no_mm((unsigned long) p->addr) ||
++	/* Ensure the address is in a text area, and find a module if exists. */
++	*probed_mod = NULL;
++	if (!core_kernel_text((unsigned long) p->addr)) {
++		*probed_mod = __module_text_address((unsigned long) p->addr);
++		if (!(*probed_mod)) {
++			ret = -EINVAL;
++			goto out;
++		}
++	}
++	/* Ensure it is not in reserved area. */
++	if (in_gate_area_no_mm((unsigned long) p->addr) ||
+ 	    within_kprobe_blacklist((unsigned long) p->addr) ||
+ 	    jump_label_text_reserved(p->addr, p->addr) ||
+ 	    static_call_text_reserved(p->addr, p->addr) ||
+@@ -1660,8 +1667,7 @@ static int check_kprobe_address_safe(struct kprobe *p,
+ 		goto out;
+ 	}
+ 
+-	/* Check if are we probing a module */
+-	*probed_mod = __module_text_address((unsigned long) p->addr);
++	/* Get module refcount and reject __init functions for loaded modules. */
+ 	if (*probed_mod) {
+ 		/*
+ 		 * We must hold a refcount of the probed module while updating
+diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
+index 643e0b19920d2..eb81ad523a553 100644
+--- a/kernel/trace/trace_event_perf.c
++++ b/kernel/trace/trace_event_perf.c
+@@ -400,7 +400,8 @@ void *perf_trace_buf_alloc(int size, struct pt_regs **regs, int *rctxp)
+ 	BUILD_BUG_ON(PERF_MAX_TRACE_SIZE % sizeof(unsigned long));
+ 
+ 	if (WARN_ONCE(size > PERF_MAX_TRACE_SIZE,
+-		      "perf buffer not large enough"))
++		      "perf buffer not large enough, wanted %d, have %d",
++		      size, PERF_MAX_TRACE_SIZE))
+ 		return NULL;
+ 
+ 	*rctxp = rctx = perf_swevent_get_recursion_context();
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index e4340958da2df..4bc90965abb25 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -1140,10 +1140,8 @@ register_snapshot_trigger(char *glob, struct event_trigger_ops *ops,
+ 			  struct event_trigger_data *data,
+ 			  struct trace_event_file *file)
+ {
+-	int ret = tracing_alloc_snapshot_instance(file->tr);
+-
+-	if (ret < 0)
+-		return ret;
++	if (tracing_alloc_snapshot_instance(file->tr) != 0)
++		return 0;
+ 
+ 	return register_trigger(glob, ops, data, file);
+ }
+diff --git a/lib/stackdepot.c b/lib/stackdepot.c
+index 25bbac46605e9..308a7c8518e8f 100644
+--- a/lib/stackdepot.c
++++ b/lib/stackdepot.c
+@@ -271,10 +271,10 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
+ 		/*
+ 		 * Zero out zone modifiers, as we don't have specific zone
+ 		 * requirements. Keep the flags related to allocation in atomic
+-		 * contexts and I/O.
++		 * contexts, I/O, nolockdep.
+ 		 */
+ 		alloc_flags &= ~GFP_ZONEMASK;
+-		alloc_flags &= (GFP_ATOMIC | GFP_KERNEL);
++		alloc_flags &= (GFP_ATOMIC | GFP_KERNEL | __GFP_NOLOCKDEP);
+ 		alloc_flags |= __GFP_NOWARN;
+ 		page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER);
+ 		if (page)
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 9e8ebac9b7e7e..f5019f698105b 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -4188,7 +4188,7 @@ void batadv_tt_local_resize_to_mtu(struct net_device *soft_iface)
+ 
+ 	spin_lock_bh(&bat_priv->tt.commit_lock);
+ 
+-	while (true) {
++	while (timeout) {
+ 		table_size = batadv_tt_local_table_transmit_size(bat_priv);
+ 		if (packet_size_max >= table_size)
+ 			break;
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index a0f980e615052..7ce6db1ac558a 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -107,8 +107,10 @@ static void hci_req_sync_complete(struct hci_dev *hdev, u8 result, u16 opcode,
+ 	if (hdev->req_status == HCI_REQ_PEND) {
+ 		hdev->req_result = result;
+ 		hdev->req_status = HCI_REQ_DONE;
+-		if (skb)
++		if (skb) {
++			kfree_skb(hdev->req_skb);
+ 			hdev->req_skb = skb_get(skb);
++		}
+ 		wake_up_interruptible(&hdev->req_wait_q);
+ 	}
+ }
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 756523e5402a8..3a2be1b4a5743 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -456,7 +456,8 @@ static int l2cap_sock_getsockopt_old(struct socket *sock, int optname,
+ 	struct l2cap_chan *chan = l2cap_pi(sk)->chan;
+ 	struct l2cap_options opts;
+ 	struct l2cap_conninfo cinfo;
+-	int len, err = 0;
++	int err = 0;
++	size_t len;
+ 	u32 opt;
+ 
+ 	BT_DBG("sk %p", sk);
+@@ -503,7 +504,7 @@ static int l2cap_sock_getsockopt_old(struct socket *sock, int optname,
+ 
+ 		BT_DBG("mode 0x%2.2x", chan->mode);
+ 
+-		len = min_t(unsigned int, len, sizeof(opts));
++		len = min(len, sizeof(opts));
+ 		if (copy_to_user(optval, (char *) &opts, len))
+ 			err = -EFAULT;
+ 
+@@ -553,7 +554,7 @@ static int l2cap_sock_getsockopt_old(struct socket *sock, int optname,
+ 		cinfo.hci_handle = chan->conn->hcon->handle;
+ 		memcpy(cinfo.dev_class, chan->conn->hcon->dev_class, 3);
+ 
+-		len = min_t(unsigned int, len, sizeof(cinfo));
++		len = min(len, sizeof(cinfo));
+ 		if (copy_to_user(optval, (char *) &cinfo, len))
+ 			err = -EFAULT;
+ 
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 8244d3ae185bf..2115ca6d7e178 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -825,7 +825,7 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+ 			       sockptr_t optval, unsigned int optlen)
+ {
+ 	struct sock *sk = sock->sk;
+-	int len, err = 0;
++	int err = 0;
+ 	struct bt_voice voice;
+ 	u32 opt;
+ 
+@@ -841,10 +841,9 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+ 			break;
+ 		}
+ 
+-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
+-			err = -EFAULT;
++		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++		if (err)
+ 			break;
+-		}
+ 
+ 		if (opt)
+ 			set_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags);
+@@ -861,11 +860,10 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+ 
+ 		voice.setting = sco_pi(sk)->setting;
+ 
+-		len = min_t(unsigned int, sizeof(voice), optlen);
+-		if (copy_from_sockptr(&voice, optval, len)) {
+-			err = -EFAULT;
++		err = bt_copy_from_sockptr(&voice, sizeof(voice), optval,
++					   optlen);
++		if (err)
+ 			break;
+-		}
+ 
+ 		/* Explicitly check for these values */
+ 		if (voice.setting != BT_VOICE_TRANSPARENT &&
+@@ -878,10 +876,9 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case BT_PKT_STATUS:
+-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
+-			err = -EFAULT;
++		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++		if (err)
+ 			break;
+-		}
+ 
+ 		if (opt)
+ 			sco_pi(sk)->cmsg_mask |= SCO_CMSG_PKT_STATUS;
+@@ -904,7 +901,8 @@ static int sco_sock_getsockopt_old(struct socket *sock, int optname,
+ 	struct sock *sk = sock->sk;
+ 	struct sco_options opts;
+ 	struct sco_conninfo cinfo;
+-	int len, err = 0;
++	int err = 0;
++	size_t len;
+ 
+ 	BT_DBG("sk %p", sk);
+ 
+@@ -926,7 +924,7 @@ static int sco_sock_getsockopt_old(struct socket *sock, int optname,
+ 
+ 		BT_DBG("mtu %d", opts.mtu);
+ 
+-		len = min_t(unsigned int, len, sizeof(opts));
++		len = min(len, sizeof(opts));
+ 		if (copy_to_user(optval, (char *)&opts, len))
+ 			err = -EFAULT;
+ 
+@@ -944,7 +942,7 @@ static int sco_sock_getsockopt_old(struct socket *sock, int optname,
+ 		cinfo.hci_handle = sco_pi(sk)->conn->hcon->handle;
+ 		memcpy(cinfo.dev_class, sco_pi(sk)->conn->hcon->dev_class, 3);
+ 
+-		len = min_t(unsigned int, len, sizeof(cinfo));
++		len = min(len, sizeof(cinfo));
+ 		if (copy_to_user(optval, (char *)&cinfo, len))
+ 			err = -EFAULT;
+ 
+diff --git a/net/ethernet/eth.c b/net/ethernet/eth.c
+index dac65180c4eff..61cb40368723c 100644
+--- a/net/ethernet/eth.c
++++ b/net/ethernet/eth.c
+@@ -164,17 +164,7 @@ __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev)
+ 	eth = (struct ethhdr *)skb->data;
+ 	skb_pull_inline(skb, ETH_HLEN);
+ 
+-	if (unlikely(!ether_addr_equal_64bits(eth->h_dest,
+-					      dev->dev_addr))) {
+-		if (unlikely(is_multicast_ether_addr_64bits(eth->h_dest))) {
+-			if (ether_addr_equal_64bits(eth->h_dest, dev->broadcast))
+-				skb->pkt_type = PACKET_BROADCAST;
+-			else
+-				skb->pkt_type = PACKET_MULTICAST;
+-		} else {
+-			skb->pkt_type = PACKET_OTHERHOST;
+-		}
+-	}
++	eth_skb_pkt_type(skb, dev);
+ 
+ 	/*
+ 	 * Some variants of DSA tagging don't have an ethertype field
+diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
+index c411c87ae865f..85cb44bfa3bab 100644
+--- a/net/ipv4/inet_timewait_sock.c
++++ b/net/ipv4/inet_timewait_sock.c
+@@ -254,12 +254,12 @@ void __inet_twsk_schedule(struct inet_timewait_sock *tw, int timeo, bool rearm)
+ }
+ EXPORT_SYMBOL_GPL(__inet_twsk_schedule);
+ 
++/* Remove all non full sockets (TIME_WAIT and NEW_SYN_RECV) for dead netns */
+ void inet_twsk_purge(struct inet_hashinfo *hashinfo, int family)
+ {
+-	struct inet_timewait_sock *tw;
+-	struct sock *sk;
+ 	struct hlist_nulls_node *node;
+ 	unsigned int slot;
++	struct sock *sk;
+ 
+ 	for (slot = 0; slot <= hashinfo->ehash_mask; slot++) {
+ 		struct inet_ehash_bucket *head = &hashinfo->ehash[slot];
+@@ -268,25 +268,35 @@ void inet_twsk_purge(struct inet_hashinfo *hashinfo, int family)
+ 		rcu_read_lock();
+ restart:
+ 		sk_nulls_for_each_rcu(sk, node, &head->chain) {
+-			if (sk->sk_state != TCP_TIME_WAIT)
++			int state = inet_sk_state_load(sk);
++
++			if ((1 << state) & ~(TCPF_TIME_WAIT |
++					     TCPF_NEW_SYN_RECV))
+ 				continue;
+-			tw = inet_twsk(sk);
+-			if ((tw->tw_family != family) ||
+-				refcount_read(&twsk_net(tw)->count))
++
++			if (sk->sk_family != family ||
++			    refcount_read(&sock_net(sk)->count))
+ 				continue;
+ 
+-			if (unlikely(!refcount_inc_not_zero(&tw->tw_refcnt)))
++			if (unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))
+ 				continue;
+ 
+-			if (unlikely((tw->tw_family != family) ||
+-				     refcount_read(&twsk_net(tw)->count))) {
+-				inet_twsk_put(tw);
++			if (unlikely(sk->sk_family != family ||
++				     refcount_read(&sock_net(sk)->count))) {
++				sock_gen_put(sk);
+ 				goto restart;
+ 			}
+ 
+ 			rcu_read_unlock();
+ 			local_bh_disable();
+-			inet_twsk_deschedule_put(tw);
++			if (state == TCP_TIME_WAIT) {
++				inet_twsk_deschedule_put(inet_twsk(sk));
++			} else {
++				struct request_sock *req = inet_reqsk(sk);
++
++				inet_csk_reqsk_queue_drop_and_put(req->rsk_listener,
++								  req);
++			}
+ 			local_bh_enable();
+ 			goto restart_rcu;
+ 		}
+diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c
+index 48c6aa3d91ae8..5823e89b8a734 100644
+--- a/net/ipv4/netfilter/arp_tables.c
++++ b/net/ipv4/netfilter/arp_tables.c
+@@ -965,6 +965,8 @@ static int do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 		return -ENOMEM;
+ 	if (tmp.num_counters == 0)
+ 		return -EINVAL;
++	if ((u64)len < (u64)tmp.size + sizeof(tmp))
++		return -EINVAL;
+ 
+ 	tmp.name[sizeof(tmp.name)-1] = 0;
+ 
+@@ -1265,6 +1267,8 @@ static int compat_do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 		return -ENOMEM;
+ 	if (tmp.num_counters == 0)
+ 		return -EINVAL;
++	if ((u64)len < (u64)tmp.size + sizeof(tmp))
++		return -EINVAL;
+ 
+ 	tmp.name[sizeof(tmp.name)-1] = 0;
+ 
+diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
+index b46d58b9f3fe4..22e9ff592cd75 100644
+--- a/net/ipv4/netfilter/ip_tables.c
++++ b/net/ipv4/netfilter/ip_tables.c
+@@ -1119,6 +1119,8 @@ do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 		return -ENOMEM;
+ 	if (tmp.num_counters == 0)
+ 		return -EINVAL;
++	if ((u64)len < (u64)tmp.size + sizeof(tmp))
++		return -EINVAL;
+ 
+ 	tmp.name[sizeof(tmp.name)-1] = 0;
+ 
+@@ -1505,6 +1507,8 @@ compat_do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 		return -ENOMEM;
+ 	if (tmp.num_counters == 0)
+ 		return -EINVAL;
++	if ((u64)len < (u64)tmp.size + sizeof(tmp))
++		return -EINVAL;
+ 
+ 	tmp.name[sizeof(tmp.name)-1] = 0;
+ 
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index d360c7d70e8a2..cc409cc0789c8 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -955,13 +955,11 @@ void ip_rt_send_redirect(struct sk_buff *skb)
+ 		icmp_send(skb, ICMP_REDIRECT, ICMP_REDIR_HOST, gw);
+ 		peer->rate_last = jiffies;
+ 		++peer->n_redirects;
+-#ifdef CONFIG_IP_ROUTE_VERBOSE
+-		if (log_martians &&
++		if (IS_ENABLED(CONFIG_IP_ROUTE_VERBOSE) && log_martians &&
+ 		    peer->n_redirects == ip_rt_redirect_number)
+ 			net_warn_ratelimited("host %pI4/if%d ignores redirects for %pI4 to %pI4\n",
+ 					     &ip_hdr(skb)->saddr, inet_iif(skb),
+ 					     &ip_hdr(skb)->daddr, &gw);
+-#endif
+ 	}
+ out_put_peer:
+ 	inet_putpeer(peer);
+@@ -2090,6 +2088,9 @@ int ip_route_use_hint(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ 	int err = -EINVAL;
+ 	u32 tag = 0;
+ 
++	if (!in_dev)
++		return -EINVAL;
++
+ 	if (ipv4_is_multicast(saddr) || ipv4_is_lbcast(saddr))
+ 		goto martian_source;
+ 
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 0b7e76e6f2028..16ff3962b24db 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1125,16 +1125,17 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 
+ 	if (msg->msg_controllen) {
+ 		err = udp_cmsg_send(sk, msg, &ipc.gso_size);
+-		if (err > 0)
++		if (err > 0) {
+ 			err = ip_cmsg_send(sk, msg, &ipc,
+ 					   sk->sk_family == AF_INET6);
++			connected = 0;
++		}
+ 		if (unlikely(err < 0)) {
+ 			kfree(ipc.opt);
+ 			return err;
+ 		}
+ 		if (ipc.opt)
+ 			free = 1;
+-		connected = 0;
+ 	}
+ 	if (!ipc.opt) {
+ 		struct ip_options_rcu *inet_opt;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 0429c1d50fc92..8a6f4cdd5a486 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -2044,9 +2044,10 @@ struct inet6_ifaddr *ipv6_get_ifaddr(struct net *net, const struct in6_addr *add
+ 		if (ipv6_addr_equal(&ifp->addr, addr)) {
+ 			if (!dev || ifp->idev->dev == dev ||
+ 			    !(ifp->scope&(IFA_LINK|IFA_HOST) || strict)) {
+-				result = ifp;
+-				in6_ifa_hold(ifp);
+-				break;
++				if (in6_ifa_hold_safe(ifp)) {
++					result = ifp;
++					break;
++				}
+ 			}
+ 		}
+ 	}
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index d70783283a417..b79e571e5a863 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -1373,7 +1373,10 @@ int fib6_add(struct fib6_node *root, struct fib6_info *rt,
+ 	     struct nl_info *info, struct netlink_ext_ack *extack)
+ {
+ 	struct fib6_table *table = rt->fib6_table;
+-	struct fib6_node *fn, *pn = NULL;
++	struct fib6_node *fn;
++#ifdef CONFIG_IPV6_SUBTREES
++	struct fib6_node *pn = NULL;
++#endif
+ 	int err = -ENOMEM;
+ 	int allow_create = 1;
+ 	int replace_required = 0;
+@@ -1397,9 +1400,9 @@ int fib6_add(struct fib6_node *root, struct fib6_info *rt,
+ 		goto out;
+ 	}
+ 
++#ifdef CONFIG_IPV6_SUBTREES
+ 	pn = fn;
+ 
+-#ifdef CONFIG_IPV6_SUBTREES
+ 	if (rt->fib6_src.plen) {
+ 		struct fib6_node *sn;
+ 
+diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
+index d013395be05fc..df7cd3d285e4f 100644
+--- a/net/ipv6/netfilter/ip6_tables.c
++++ b/net/ipv6/netfilter/ip6_tables.c
+@@ -1137,6 +1137,8 @@ do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 		return -ENOMEM;
+ 	if (tmp.num_counters == 0)
+ 		return -EINVAL;
++	if ((u64)len < (u64)tmp.size + sizeof(tmp))
++		return -EINVAL;
+ 
+ 	tmp.name[sizeof(tmp.name)-1] = 0;
+ 
+@@ -1515,6 +1517,8 @@ compat_do_replace(struct net *net, sockptr_t arg, unsigned int len)
+ 		return -ENOMEM;
+ 	if (tmp.num_counters == 0)
+ 		return -EINVAL;
++	if ((u64)len < (u64)tmp.size + sizeof(tmp))
++		return -EINVAL;
+ 
+ 	tmp.name[sizeof(tmp.name)-1] = 0;
+ 
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index b5d879f2501da..8c9672e7a7dd6 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1453,9 +1453,11 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 		ipc6.opt = opt;
+ 
+ 		err = udp_cmsg_send(sk, msg, &ipc6.gso_size);
+-		if (err > 0)
++		if (err > 0) {
+ 			err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6,
+ 						    &ipc6);
++			connected = false;
++		}
+ 		if (err < 0) {
+ 			fl6_sock_release(flowlabel);
+ 			return err;
+@@ -1467,7 +1469,6 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 		}
+ 		if (!(opt->opt_nflen|opt->opt_flen))
+ 			opt = NULL;
+-		connected = false;
+ 	}
+ 	if (!opt) {
+ 		opt = txopt_get(np);
+diff --git a/net/netfilter/ipvs/ip_vs_proto_sctp.c b/net/netfilter/ipvs/ip_vs_proto_sctp.c
+index a0921adc31a9f..1e689c7141271 100644
+--- a/net/netfilter/ipvs/ip_vs_proto_sctp.c
++++ b/net/netfilter/ipvs/ip_vs_proto_sctp.c
+@@ -126,7 +126,8 @@ sctp_snat_handler(struct sk_buff *skb, struct ip_vs_protocol *pp,
+ 	if (sctph->source != cp->vport || payload_csum ||
+ 	    skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		sctph->source = cp->vport;
+-		sctp_nat_csum(skb, sctph, sctphoff);
++		if (!skb_is_gso(skb) || !skb_is_gso_sctp(skb))
++			sctp_nat_csum(skb, sctph, sctphoff);
+ 	} else {
+ 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 	}
+@@ -174,7 +175,8 @@ sctp_dnat_handler(struct sk_buff *skb, struct ip_vs_protocol *pp,
+ 	    (skb->ip_summed == CHECKSUM_PARTIAL &&
+ 	     !(skb_dst(skb)->dev->features & NETIF_F_SCTP_CRC))) {
+ 		sctph->dest = cp->dport;
+-		sctp_nat_csum(skb, sctph, sctphoff);
++		if (!skb_is_gso(skb) || !skb_is_gso_sctp(skb))
++			sctp_nat_csum(skb, sctph, sctphoff);
+ 	} else if (skb->ip_summed != CHECKSUM_PARTIAL) {
+ 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 	}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index ab7f7e45b9846..858d09b54eaa4 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2739,7 +2739,7 @@ static const struct nft_expr_type *__nft_expr_type_get(u8 family,
+ {
+ 	const struct nft_expr_type *type, *candidate = NULL;
+ 
+-	list_for_each_entry(type, &nf_tables_expressions, list) {
++	list_for_each_entry_rcu(type, &nf_tables_expressions, list) {
+ 		if (!nla_strcmp(nla, type->name)) {
+ 			if (!type->family && !candidate)
+ 				candidate = type;
+@@ -2771,9 +2771,13 @@ static const struct nft_expr_type *nft_expr_type_get(struct net *net,
+ 	if (nla == NULL)
+ 		return ERR_PTR(-EINVAL);
+ 
++	rcu_read_lock();
+ 	type = __nft_expr_type_get(family, nla);
+-	if (type != NULL && try_module_get(type->owner))
++	if (type != NULL && try_module_get(type->owner)) {
++		rcu_read_unlock();
+ 		return type;
++	}
++	rcu_read_unlock();
+ 
+ 	lockdep_nfnl_nft_mutex_not_held();
+ #ifdef CONFIG_MODULES
+diff --git a/net/netfilter/nft_chain_filter.c b/net/netfilter/nft_chain_filter.c
+index a18582a4ecf34..aad676402919b 100644
+--- a/net/netfilter/nft_chain_filter.c
++++ b/net/netfilter/nft_chain_filter.c
+@@ -339,7 +339,9 @@ static void nft_netdev_event(unsigned long event, struct net_device *dev,
+ 		return;
+ 
+ 	if (n > 1) {
+-		nf_unregister_net_hook(ctx->net, &found->ops);
++		if (!(ctx->chain->table->flags & NFT_TABLE_F_DORMANT))
++			nf_unregister_net_hook(ctx->net, &found->ops);
++
+ 		list_del_rcu(&found->list);
+ 		kfree_rcu(found, rcu);
+ 		return;
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index b9682e085fcef..5a8521abd8f5c 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1980,6 +1980,8 @@ static void nft_pipapo_remove(const struct net *net, const struct nft_set *set,
+ 		rules_fx = rules_f0;
+ 
+ 		nft_pipapo_for_each_field(f, i, m) {
++			bool last = i == m->field_count - 1;
++
+ 			if (!pipapo_match_field(f, start, rules_fx,
+ 						match_start, match_end))
+ 				break;
+@@ -1992,16 +1994,18 @@ static void nft_pipapo_remove(const struct net *net, const struct nft_set *set,
+ 
+ 			match_start += NFT_PIPAPO_GROUPS_PADDED_SIZE(f);
+ 			match_end += NFT_PIPAPO_GROUPS_PADDED_SIZE(f);
+-		}
+ 
+-		if (i == m->field_count) {
+-			priv->dirty = true;
+-			pipapo_drop(m, rulemap);
+-			return;
++			if (last && f->mt[rulemap[i].to].e == e) {
++				priv->dirty = true;
++				pipapo_drop(m, rulemap);
++				return;
++			}
+ 		}
+ 
+ 		first_rule += rules_f0;
+ 	}
++
++	WARN_ON_ONCE(1); /* elem_priv not found */
+ }
+ 
+ /**
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index 0f0f380e81a40..30f5e414018b1 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -1692,8 +1692,9 @@ int ovs_ct_copy_action(struct net *net, const struct nlattr *attr,
+ 	if (ct_info.timeout[0]) {
+ 		if (nf_ct_set_timeout(net, ct_info.ct, family, key->ip.proto,
+ 				      ct_info.timeout))
+-			pr_info_ratelimited("Failed to associated timeout "
+-					    "policy `%s'\n", ct_info.timeout);
++			OVS_NLERR(log,
++				  "Failed to associated timeout policy '%s'",
++				  ct_info.timeout);
+ 		else
+ 			ct_info.nf_ct_timeout = rcu_dereference(
+ 				nf_ct_timeout_find(ct_info.ct)->timeout);
+@@ -1901,9 +1902,9 @@ static void ovs_ct_limit_exit(struct net *net, struct ovs_net *ovs_net)
+ 	for (i = 0; i < CT_LIMIT_HASH_BUCKETS; ++i) {
+ 		struct hlist_head *head = &info->limits[i];
+ 		struct ovs_ct_limit *ct_limit;
++		struct hlist_node *next;
+ 
+-		hlist_for_each_entry_rcu(ct_limit, head, hlist_node,
+-					 lockdep_ovsl_is_held())
++		hlist_for_each_entry_safe(ct_limit, next, head, hlist_node)
+ 			kfree_rcu(ct_limit, rcu);
+ 	}
+ 	kfree(info->limits);
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index b003d0597f4bd..224b1fdc82279 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -817,11 +817,11 @@ static struct sock *unix_create1(struct net *net, struct socket *sock, int kern)
+ 	sk->sk_write_space	= unix_write_space;
+ 	sk->sk_max_ack_backlog	= net->unx.sysctl_max_dgram_qlen;
+ 	sk->sk_destruct		= unix_sock_destructor;
+-	u	  = unix_sk(sk);
++	u = unix_sk(sk);
++	u->inflight = 0;
+ 	u->path.dentry = NULL;
+ 	u->path.mnt = NULL;
+ 	spin_lock_init(&u->lock);
+-	atomic_long_set(&u->inflight, 0);
+ 	INIT_LIST_HEAD(&u->link);
+ 	mutex_init(&u->iolock); /* single task reading lock */
+ 	mutex_init(&u->bindlock); /* single task binding lock */
+diff --git a/net/unix/garbage.c b/net/unix/garbage.c
+index 9121a4d5436d5..133ba5be4b580 100644
+--- a/net/unix/garbage.c
++++ b/net/unix/garbage.c
+@@ -166,17 +166,18 @@ static void scan_children(struct sock *x, void (*func)(struct unix_sock *),
+ 
+ static void dec_inflight(struct unix_sock *usk)
+ {
+-	atomic_long_dec(&usk->inflight);
++	usk->inflight--;
+ }
+ 
+ static void inc_inflight(struct unix_sock *usk)
+ {
+-	atomic_long_inc(&usk->inflight);
++	usk->inflight++;
+ }
+ 
+ static void inc_inflight_move_tail(struct unix_sock *u)
+ {
+-	atomic_long_inc(&u->inflight);
++	u->inflight++;
++
+ 	/* If this still might be part of a cycle, move it to the end
+ 	 * of the list, so that it's checked even if it was already
+ 	 * passed over
+@@ -234,20 +235,34 @@ void unix_gc(void)
+ 	 * receive queues.  Other, non candidate sockets _can_ be
+ 	 * added to queue, so we must make sure only to touch
+ 	 * candidates.
++	 *
++	 * Embryos, though never candidates themselves, affect which
++	 * candidates are reachable by the garbage collector.  Before
++	 * being added to a listener's queue, an embryo may already
++	 * receive data carrying SCM_RIGHTS, potentially making the
++	 * passed socket a candidate that is not yet reachable by the
++	 * collector.  It becomes reachable once the embryo is
++	 * enqueued.  Therefore, we must ensure that no SCM-laden
++	 * embryo appears in a (candidate) listener's queue between
++	 * consecutive scan_children() calls.
+ 	 */
+ 	list_for_each_entry_safe(u, next, &gc_inflight_list, link) {
++		struct sock *sk = &u->sk;
+ 		long total_refs;
+-		long inflight_refs;
+ 
+-		total_refs = file_count(u->sk.sk_socket->file);
+-		inflight_refs = atomic_long_read(&u->inflight);
++		total_refs = file_count(sk->sk_socket->file);
+ 
+-		BUG_ON(inflight_refs < 1);
+-		BUG_ON(total_refs < inflight_refs);
+-		if (total_refs == inflight_refs) {
++		BUG_ON(!u->inflight);
++		BUG_ON(total_refs < u->inflight);
++		if (total_refs == u->inflight) {
+ 			list_move_tail(&u->link, &gc_candidates);
+ 			__set_bit(UNIX_GC_CANDIDATE, &u->gc_flags);
+ 			__set_bit(UNIX_GC_MAYBE_CYCLE, &u->gc_flags);
++
++			if (sk->sk_state == TCP_LISTEN) {
++				unix_state_lock_nested(sk, U_LOCK_GC_LISTENER);
++				unix_state_unlock(sk);
++			}
+ 		}
+ 	}
+ 
+@@ -271,7 +286,7 @@ void unix_gc(void)
+ 		/* Move cursor to after the current position. */
+ 		list_move(&cursor, &u->link);
+ 
+-		if (atomic_long_read(&u->inflight) > 0) {
++		if (u->inflight) {
+ 			list_move_tail(&u->link, &not_cycle_list);
+ 			__clear_bit(UNIX_GC_MAYBE_CYCLE, &u->gc_flags);
+ 			scan_children(&u->sk, inc_inflight_move_tail, NULL);
+diff --git a/net/unix/scm.c b/net/unix/scm.c
+index d1048b4c2baaf..4eff7da9f6f96 100644
+--- a/net/unix/scm.c
++++ b/net/unix/scm.c
+@@ -52,12 +52,13 @@ void unix_inflight(struct user_struct *user, struct file *fp)
+ 	if (s) {
+ 		struct unix_sock *u = unix_sk(s);
+ 
+-		if (atomic_long_inc_return(&u->inflight) == 1) {
++		if (!u->inflight) {
+ 			BUG_ON(!list_empty(&u->link));
+ 			list_add_tail(&u->link, &gc_inflight_list);
+ 		} else {
+ 			BUG_ON(list_empty(&u->link));
+ 		}
++		u->inflight++;
+ 		/* Paired with READ_ONCE() in wait_for_unix_gc() */
+ 		WRITE_ONCE(unix_tot_inflight, unix_tot_inflight + 1);
+ 	}
+@@ -74,10 +75,11 @@ void unix_notinflight(struct user_struct *user, struct file *fp)
+ 	if (s) {
+ 		struct unix_sock *u = unix_sk(s);
+ 
+-		BUG_ON(!atomic_long_read(&u->inflight));
++		BUG_ON(!u->inflight);
+ 		BUG_ON(list_empty(&u->link));
+ 
+-		if (atomic_long_dec_and_test(&u->inflight))
++		u->inflight--;
++		if (!u->inflight)
+ 			list_del_init(&u->link);
+ 		/* Paired with READ_ONCE() in wait_for_unix_gc() */
+ 		WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - 1);
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index d04f91f4d09df..562d69f17b4c0 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -895,6 +895,8 @@ static int xsk_setsockopt(struct socket *sock, int level, int optname,
+ 		struct xsk_queue **q;
+ 		int entries;
+ 
++		if (optlen < sizeof(entries))
++			return -EINVAL;
+ 		if (copy_from_sockptr(&entries, optval, sizeof(entries)))
+ 			return -EFAULT;
+ 
+diff --git a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+index b1ede62498667..b7c8f29c09a97 100644
+--- a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
++++ b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+@@ -18,7 +18,7 @@ echo 'sched:*' > set_event
+ 
+ yield
+ 
+-count=`cat trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
+ if [ $count -lt 3 ]; then
+     fail "at least fork, exec and exit events should be recorded"
+ fi
+@@ -29,7 +29,7 @@ echo 1 > events/sched/enable
+ 
+ yield
+ 
+-count=`cat trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
+ if [ $count -lt 3 ]; then
+     fail "at least fork, exec and exit events should be recorded"
+ fi
+@@ -40,7 +40,7 @@ echo 0 > events/sched/enable
+ 
+ yield
+ 
+-count=`cat trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
+ if [ $count -ne 0 ]; then
+     fail "any of scheduler events should not be recorded"
+ fi
+diff --git a/tools/testing/selftests/timers/posix_timers.c b/tools/testing/selftests/timers/posix_timers.c
+index 0ba500056e635..193a984f512c3 100644
+--- a/tools/testing/selftests/timers/posix_timers.c
++++ b/tools/testing/selftests/timers/posix_timers.c
+@@ -66,7 +66,7 @@ static int check_diff(struct timeval start, struct timeval end)
+ 	diff = end.tv_usec - start.tv_usec;
+ 	diff += (end.tv_sec - start.tv_sec) * USECS_PER_SEC;
+ 
+-	if (abs(diff - DELAY * USECS_PER_SEC) > USECS_PER_SEC / 2) {
++	if (llabs(diff - DELAY * USECS_PER_SEC) > USECS_PER_SEC / 2) {
+ 		printf("Diff too high: %lld..", diff);
+ 		return -1;
+ 	}


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-05-05 18:14 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-05-05 18:14 UTC (permalink / raw
  To: gentoo-commits

commit:     c623a513a581cbd0a45d7ea015ddb22c2e452137
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May  5 18:13:49 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May  5 18:13:49 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c623a513

Update to KSPP patch

Bug: https://bugs.gentoo.org/930733

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 96 ++++++++++++++++------------------------
 1 file changed, 38 insertions(+), 58 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 497932fe..87b8fa95 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,14 +1,14 @@
---- a/Kconfig	2021-06-04 19:03:33.646823432 -0400
-+++ b/Kconfig	2021-06-04 19:03:40.508892817 -0400
+--- a/Kconfig	2022-08-25 10:11:47.220973785 -0400
++++ b/Kconfig	2022-08-25 10:11:56.997682513 -0400
 @@ -30,3 +30,5 @@ source "lib/Kconfig"
  source "lib/Kconfig.debug"
  
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2024-04-27 13:10:54.188000027 -0400
-+++ b/distro/Kconfig	2024-04-27 18:54:09.734564235 -0400
-@@ -0,0 +1,289 @@
+--- /dev/null	2024-05-05 10:40:37.103999988 -0400
++++ b/distro/Kconfig	2024-05-05 13:37:37.699554927 -0400
+@@ -0,0 +1,310 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -148,10 +148,6 @@
 +	select TIMERFD
 +	select TMPFS_POSIX_ACL
 +	select TMPFS_XATTR
-+	select UBSAN
-+	select CC_HAS_UBSAN_BOUNDS_STRICT if !CC_HAS_UBSAN_ARRAY_BOUNDS
-+	select UBSAN_BOUNDS
-+	select UBSAN_SHIFT
 +
 +	select ANON_INODES
 +	select BLOCK
@@ -182,14 +178,14 @@
 +		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_COMMON and search for 
 +		GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for dependency information on your 
 +		specific architecture.
-+		Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
++		Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
 +		for X86_64
 +
 +if GENTOO_KERNEL_SELF_PROTECTION
 +config GENTOO_KERNEL_SELF_PROTECTION_COMMON
 +	bool "Enable Kernel Self Protection Project Recommendations"
 +
-+	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL && GCC_PLUGINS
++	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32_ABI && !MODIFY_LDT_SYSCALL && GCC_PLUGINS && !IOMMU_DEFAULT_DMA_LAZY && !IOMMU_DEFAULT_PASSTHROUGH && IOMMU_DEFAULT_DMA_STRICT && SECURITY && !ARCH_EPHEMERAL_INODES  && RANDSTRUCT_PERFORMANCE
 +
 +	select BUG
 +	select STRICT_KERNEL_RWX
@@ -203,7 +199,15 @@
 +	select DEBUG_NOTIFIERS
 +	select DEBUG_LIST
 +	select DEBUG_SG
++	select HARDENED_USERCOPY if HAVE_HARDENED_USERCOPY_ALLOCATOR=y
++	select KFENCE if HAVE_ARCH_KFENCE && (!SLAB || SLUB)
++	select PAGE_TABLE_CHECK if ARCH_SUPPORTS_PAGE_TABLE_CHECK=y && EXCLUSIVE_SYSTEM_RAM=y  
++	select PAGE_TABLE_CHECK_ENFORCED if PAGE_TABLE_CHECK=y
++	select RANDOMIZE_KSTACK_OFFSET_DEFAULT if HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET && (INIT_STACK_NONE || !CC_IS_CLANG || CLANG_VERSION>=140000)
++	select SECURITY_LANDLOCK
++	select SCHED_CORE if SCHED_SMT
 +	select BUG_ON_DATA_CORRUPTION
++	select RANDOM_KMALLOC_CACHE if SLUB_TINY=n
 +	select SCHED_STACK_END_CHECK
 +	select SECCOMP if HAVE_ARCH_SECCOMP
 +	select SECCOMP_FILTER if HAVE_ARCH_SECCOMP_FILTER
@@ -212,6 +216,10 @@
 +	select SLAB_FREELIST_HARDENED
 +	select SHUFFLE_PAGE_ALLOCATOR
 +	select SLUB_DEBUG
++	select UBSAN
++	select CC_HAS_UBSAN_BOUNDS_STRICT if !CC_HAS_UBSAN_ARRAY_BOUNDS
++	select UBSAN_BOUNDS
++	select UBSAN_SHIFT
 +	select PAGE_POISONING
 +	select PAGE_POISONING_NO_SANITY
 +	select PAGE_POISONING_ZERO
@@ -224,8 +232,9 @@
 +	select GCC_PLUGIN_LATENT_ENTROPY
 +	select GCC_PLUGIN_STRUCTLEAK
 +	select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL
-+	select GCC_PLUGIN_RANDSTRUCT
++	select GCC_PLUGIN_RANDSTRUCT 
 +	select GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
++	select ZERO_CALL_USED_REGS if CC_HAS_ZERO_CALL_USED_REGS
 +
 +	help
 +		Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for dependency 
@@ -238,12 +247,14 @@
 +	depends on !X86_MSR && X86_64 && GENTOO_KERNEL_SELF_PROTECTION
 +	default n
 +	
++	select GCC_PLUGIN_STACKLEAK
++	select X86_KERNEL_IBT if CC_HAS_IBT=y && HAVE_OBJTOOL=y && (!LD_IS_LLD=n || LLD_VERSION>=140000) 
++	select LEGACY_VSYSCALL_NONE
++ 	select PAGE_TABLE_ISOLATION
 +	select RANDOMIZE_BASE
 +	select RANDOMIZE_MEMORY
 +	select RELOCATABLE
-+	select LEGACY_VSYSCALL_NONE
-+ 	select PAGE_TABLE_ISOLATION
-+	select GCC_PLUGIN_STACKLEAK
++	select X86_USER_SHADOW_STACK if AS_WRUSS=Y
 +	select VMAP_STACK
 +
 +
@@ -253,11 +264,21 @@
 +	depends on ARM64
 +	default n
 +
-+	select RANDOMIZE_BASE
-+	select RELOCATABLE
++	select ARM64_BTI
++	select ARM64_E0PD
++	select ARM64_EPAN if ARM64_PAN=y
++	select ARM64_MTE if (ARM64_AS_HAS_MTE=y && ARM64_TAGGED_ADDR_ABI=y ) && ( AS_HAS_ARMV8_5=y ) && ( AS_HAS_LSE_ATOMICS=y ) && ( ARM64_PAN=y )
++	select ARM64_PTR_AUTH
++	select ARM64_PTR_AUTH_KERNEL if ( ARM64_PTR_AUTH=y ) && (( CC_HAS_SIGN_RETURN_ADDRESS=y || CC_HAS_BRANCH_PROT_PAC_RET=y ) && AS_HAS_ARMV8_3=y ) && ( LD_IS_LLD=y || LD_VERSION >= 23301 || ( CC_IS_GCC=y && GCC_VERSION < 90100 )) && (CC_IS_CLANG=n || AS_HAS_CFI_NEGATE_RA_STATE=y ) && ((FUNCTION_GRAPH_TRACER=n || DYNAMIC_FTRACE_WITH_ARGS=y ))
++	select ARM64_BTI_KERNEL if ( ARM64_BTI=y ) && ( ARM64_PTR_AUTH_KERNEL=y ) && ( CC_HAS_BRANCH_PROT_PAC_RET_BTI=y ) && (CC_IS_GCC=n || GCC_VERSION >= 100100 ) && (CC_IS_GCC=n ) && ((FUNCTION_GRAPH_TRACE=n || DYNAMIC_FTRACE_WITH_ARG=y ))
 +	select ARM64_SW_TTBR0_PAN
 +	select CONFIG_UNMAP_KERNEL_AT_EL0
 +	select GCC_PLUGIN_STACKLEAK
++	select KASAN_HW_TAGS if HAVE_ARCH_KASAN_HW_TAGS=y
++	select RANDOMIZE_BASE
++	select RELOCATABLE
++	select SHADOW_CALL_STACK if ARCH_SUPPORTS_SHADOW_CALL_STACK=y && (DYNAMIC_FTRACE_WITH_ARGS=y || DYNAMIC_FTRACE_WITH_REGS=y || FUNCTION_GRAPH_TRACER=n) && MMU=y 
++	select UNWIND_PATCH_PAC_INTO_SCS if (CC_IS_CLANG=y && CLANG_VERSION >= CONFIG_150000 ) && ( ARM64_PTR_AUTH_KERNEL=y && CC_HAS_BRANCH_PROT_PAC_RET=y ) && ( SHADOW_CALL_STACK=y )
 +	select VMAP_STACK
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_X86_32
@@ -298,47 +319,6 @@
 +		See the settings that become available for more details and fine-tuning.
 +
 +endmenu
-diff --git a/security/Kconfig b/security/Kconfig
-index 7561f6f99..01f0bf73f 100644
---- a/security/Kconfig
-+++ b/security/Kconfig
-@@ -166,6 +166,7 @@ config HARDENED_USERCOPY
- config HARDENED_USERCOPY_FALLBACK
- 	bool "Allow usercopy whitelist violations to fallback to object size"
- 	depends on HARDENED_USERCOPY
-+	depends on !GENTOO_KERNEL_SELF_PROTECTION
- 	default y
- 	help
- 	  This is a temporary option that allows missing usercopy whitelists
-@@ -181,6 +182,7 @@ config HARDENED_USERCOPY_PAGESPAN
- 	bool "Refuse to copy allocations that span multiple pages"
- 	depends on HARDENED_USERCOPY
- 	depends on EXPERT
-+	depends on !GENTOO_KERNEL_SELF_PROTECTION
- 	help
- 	  When a multi-page allocation is done without __GFP_COMP,
- 	  hardened usercopy will reject attempts to copy it. There are,
-diff --git a/security/selinux/Kconfig b/security/selinux/Kconfig
-index 9e921fc72..f29bc13fa 100644
---- a/security/selinux/Kconfig
-+++ b/security/selinux/Kconfig
-@@ -26,6 +26,7 @@ config SECURITY_SELINUX_BOOTPARAM
- config SECURITY_SELINUX_DISABLE
- 	bool "NSA SELinux runtime disable"
- 	depends on SECURITY_SELINUX
-+	depends on !GENTOO_KERNEL_SELF_PROTECTION
- 	select SECURITY_WRITABLE_HOOKS
- 	default n
- 	help
--- 
-2.31.1
-
-From bd3ff0b16792c18c0614c2b95e148943209f460a Mon Sep 17 00:00:00 2001
-From: Georgy Yakovlev <gyakovlev@gentoo.org>
-Date: Tue, 8 Jun 2021 13:59:57 -0700
-Subject: [PATCH 2/2] set DEFAULT_MMAP_MIN_ADDR by default
-
----
  mm/Kconfig | 2 ++
  1 file changed, 2 insertions(+)
 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-05-17 11:38 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-05-17 11:38 UTC (permalink / raw
  To: gentoo-commits

commit:     8e5aa14252f3bafbca53b783672fa207b6acb85d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri May 17 11:38:25 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri May 17 11:38:25 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8e5aa142

Linux 5.10.217

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1216_linux-5.10.217.patch | 3543 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3547 insertions(+)

diff --git a/0000_README b/0000_README
index 0f28b50e..ce7db8ae 100644
--- a/0000_README
+++ b/0000_README
@@ -907,6 +907,10 @@ Patch:  1215_linux-5.10.216.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.216
 
+Patch:  1216_linux-5.10.217.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.217
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1216_linux-5.10.217.patch b/1216_linux-5.10.217.patch
new file mode 100644
index 00000000..db21c16a
--- /dev/null
+++ b/1216_linux-5.10.217.patch
@@ -0,0 +1,3543 @@
+diff --git a/Makefile b/Makefile
+index 6fe6554ecfb8c..d9557382a0286 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 216
++SUBLEVEL = 217
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8998.dtsi b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+index ca8e7848769a6..5a7f9a806ce79 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+@@ -949,10 +949,10 @@ pcie0: pci@1c00000 {
+ 			interrupts = <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+ 			interrupt-map-mask = <0 0 0 0x7>;
+-			interrupt-map =	<0 0 0 1 &intc 0 135 IRQ_TYPE_LEVEL_HIGH>,
+-					<0 0 0 2 &intc 0 136 IRQ_TYPE_LEVEL_HIGH>,
+-					<0 0 0 3 &intc 0 138 IRQ_TYPE_LEVEL_HIGH>,
+-					<0 0 0 4 &intc 0 139 IRQ_TYPE_LEVEL_HIGH>;
++			interrupt-map =	<0 0 0 1 &intc 0 0 135 IRQ_TYPE_LEVEL_HIGH>,
++					<0 0 0 2 &intc 0 0 136 IRQ_TYPE_LEVEL_HIGH>,
++					<0 0 0 3 &intc 0 0 138 IRQ_TYPE_LEVEL_HIGH>,
++					<0 0 0 4 &intc 0 0 139 IRQ_TYPE_LEVEL_HIGH>;
+ 
+ 			clocks = <&gcc GCC_PCIE_0_PIPE_CLK>,
+ 				 <&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index e3c6b05869e7f..b00f6d8bc8bac 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -1824,10 +1824,10 @@ pcie0: pci@1c00000 {
+ 			interrupt-names = "msi";
+ 			#interrupt-cells = <1>;
+ 			interrupt-map-mask = <0 0 0 0x7>;
+-			interrupt-map = <0 0 0 1 &intc 0 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+-					<0 0 0 2 &intc 0 150 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+-					<0 0 0 3 &intc 0 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+-					<0 0 0 4 &intc 0 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
++			interrupt-map = <0 0 0 1 &intc 0 0 0 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
++					<0 0 0 2 &intc 0 0 0 150 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
++					<0 0 0 3 &intc 0 0 0 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
++					<0 0 0 4 &intc 0 0 0 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+ 
+ 			clocks = <&gcc GCC_PCIE_0_PIPE_CLK>,
+ 				 <&gcc GCC_PCIE_0_AUX_CLK>,
+@@ -1928,10 +1928,10 @@ pcie1: pci@1c08000 {
+ 			interrupt-names = "msi";
+ 			#interrupt-cells = <1>;
+ 			interrupt-map-mask = <0 0 0 0x7>;
+-			interrupt-map = <0 0 0 1 &intc 0 434 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+-					<0 0 0 2 &intc 0 435 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+-					<0 0 0 3 &intc 0 438 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+-					<0 0 0 4 &intc 0 439 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
++			interrupt-map = <0 0 0 1 &intc 0 0 0 434 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
++					<0 0 0 2 &intc 0 0 0 435 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
++					<0 0 0 3 &intc 0 0 0 438 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
++					<0 0 0 4 &intc 0 0 0 439 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+ 
+ 			clocks = <&gcc GCC_PCIE_1_PIPE_CLK>,
+ 				 <&gcc GCC_PCIE_1_AUX_CLK>,
+diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
+index 7740995de982e..e80b638b78271 100644
+--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
++++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
+@@ -284,16 +284,12 @@ int kvm_register_vgic_device(unsigned long type)
+ int vgic_v2_parse_attr(struct kvm_device *dev, struct kvm_device_attr *attr,
+ 		       struct vgic_reg_attr *reg_attr)
+ {
+-	int cpuid;
++	int cpuid = FIELD_GET(KVM_DEV_ARM_VGIC_CPUID_MASK, attr->attr);
+ 
+-	cpuid = (attr->attr & KVM_DEV_ARM_VGIC_CPUID_MASK) >>
+-		 KVM_DEV_ARM_VGIC_CPUID_SHIFT;
+-
+-	if (cpuid >= atomic_read(&dev->kvm->online_vcpus))
+-		return -EINVAL;
+-
+-	reg_attr->vcpu = kvm_get_vcpu(dev->kvm, cpuid);
+ 	reg_attr->addr = attr->attr & KVM_DEV_ARM_VGIC_OFFSET_MASK;
++	reg_attr->vcpu = kvm_get_vcpu_by_id(dev->kvm, cpuid);
++	if (!reg_attr->vcpu)
++		return -EINVAL;
+ 
+ 	return 0;
+ }
+diff --git a/arch/mips/include/asm/ptrace.h b/arch/mips/include/asm/ptrace.h
+index 2849a9b65a055..ae578860f7295 100644
+--- a/arch/mips/include/asm/ptrace.h
++++ b/arch/mips/include/asm/ptrace.h
+@@ -157,7 +157,7 @@ static inline long regs_return_value(struct pt_regs *regs)
+ #define instruction_pointer(regs) ((regs)->cp0_epc)
+ #define profile_pc(regs) instruction_pointer(regs)
+ 
+-extern asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall);
++extern asmlinkage long syscall_trace_enter(struct pt_regs *regs);
+ extern asmlinkage void syscall_trace_leave(struct pt_regs *regs);
+ 
+ extern void die(const char *, struct pt_regs *) __noreturn;
+diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
+index aebfda81120a1..6c5269d3aacba 100644
+--- a/arch/mips/kernel/asm-offsets.c
++++ b/arch/mips/kernel/asm-offsets.c
+@@ -100,6 +100,7 @@ void output_thread_info_defines(void)
+ 	OFFSET(TI_PRE_COUNT, thread_info, preempt_count);
+ 	OFFSET(TI_ADDR_LIMIT, thread_info, addr_limit);
+ 	OFFSET(TI_REGS, thread_info, regs);
++	OFFSET(TI_SYSCALL, thread_info, syscall);
+ 	DEFINE(_THREAD_SIZE, THREAD_SIZE);
+ 	DEFINE(_THREAD_MASK, THREAD_MASK);
+ 	DEFINE(_IRQ_STACK_SIZE, IRQ_STACK_SIZE);
+diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
+index db7c5be1d4a35..dd454b429ff73 100644
+--- a/arch/mips/kernel/ptrace.c
++++ b/arch/mips/kernel/ptrace.c
+@@ -1310,16 +1310,13 @@ long arch_ptrace(struct task_struct *child, long request,
+  * Notification of system call entry/exit
+  * - triggered by current->work.syscall_trace
+  */
+-asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall)
++asmlinkage long syscall_trace_enter(struct pt_regs *regs)
+ {
+ 	user_exit();
+ 
+-	current_thread_info()->syscall = syscall;
+-
+ 	if (test_thread_flag(TIF_SYSCALL_TRACE)) {
+ 		if (tracehook_report_syscall_entry(regs))
+ 			return -1;
+-		syscall = current_thread_info()->syscall;
+ 	}
+ 
+ #ifdef CONFIG_SECCOMP
+@@ -1328,7 +1325,7 @@ asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall)
+ 		struct seccomp_data sd;
+ 		unsigned long args[6];
+ 
+-		sd.nr = syscall;
++		sd.nr = current_thread_info()->syscall;
+ 		sd.arch = syscall_get_arch(current);
+ 		syscall_get_arguments(current, regs, args);
+ 		for (i = 0; i < 6; i++)
+@@ -1338,23 +1335,23 @@ asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall)
+ 		ret = __secure_computing(&sd);
+ 		if (ret == -1)
+ 			return ret;
+-		syscall = current_thread_info()->syscall;
+ 	}
+ #endif
+ 
+ 	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
+ 		trace_sys_enter(regs, regs->regs[2]);
+ 
+-	audit_syscall_entry(syscall, regs->regs[4], regs->regs[5],
++	audit_syscall_entry(current_thread_info()->syscall,
++			    regs->regs[4], regs->regs[5],
+ 			    regs->regs[6], regs->regs[7]);
+ 
+ 	/*
+ 	 * Negative syscall numbers are mistaken for rejected syscalls, but
+ 	 * won't have had the return value set appropriately, so we do so now.
+ 	 */
+-	if (syscall < 0)
++	if (current_thread_info()->syscall < 0)
+ 		syscall_set_return_value(current, regs, -ENOSYS, 0);
+-	return syscall;
++	return current_thread_info()->syscall;
+ }
+ 
+ /*
+diff --git a/arch/mips/kernel/scall32-o32.S b/arch/mips/kernel/scall32-o32.S
+index b449b68662a9a..80747719a35ca 100644
+--- a/arch/mips/kernel/scall32-o32.S
++++ b/arch/mips/kernel/scall32-o32.S
+@@ -80,6 +80,18 @@ loads_done:
+ 	PTR	load_a7, bad_stack_a7
+ 	.previous
+ 
++	/*
++	 * syscall number is in v0 unless we called syscall(__NR_###)
++	 * where the real syscall number is in a0
++	 */
++	subu	t2, v0,  __NR_O32_Linux
++	bnez	t2, 1f /* __NR_syscall at offset 0 */
++	LONG_S	a0, TI_SYSCALL($28)	# Save a0 as syscall number
++	b	2f
++1:
++	LONG_S	v0, TI_SYSCALL($28)	# Save v0 as syscall number
++2:
++
+ 	lw	t0, TI_FLAGS($28)	# syscall tracing enabled?
+ 	li	t1, _TIF_WORK_SYSCALL_ENTRY
+ 	and	t0, t1
+@@ -117,16 +129,7 @@ syscall_trace_entry:
+ 	SAVE_STATIC
+ 	move	a0, sp
+ 
+-	/*
+-	 * syscall number is in v0 unless we called syscall(__NR_###)
+-	 * where the real syscall number is in a0
+-	 */
+-	move	a1, v0
+-	subu	t2, v0,  __NR_O32_Linux
+-	bnez	t2, 1f /* __NR_syscall at offset 0 */
+-	lw	a1, PT_R4(sp)
+-
+-1:	jal	syscall_trace_enter
++	jal	syscall_trace_enter
+ 
+ 	bltz	v0, 1f			# seccomp failed? Skip syscall
+ 
+diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S
+index 35d8c86b160ea..a8679e34c95e3 100644
+--- a/arch/mips/kernel/scall64-n32.S
++++ b/arch/mips/kernel/scall64-n32.S
+@@ -44,6 +44,8 @@ NESTED(handle_sysn32, PT_SIZE, sp)
+ 
+ 	sd	a3, PT_R26(sp)		# save a3 for syscall restarting
+ 
++	LONG_S	v0, TI_SYSCALL($28)     # Store syscall number
++
+ 	li	t1, _TIF_WORK_SYSCALL_ENTRY
+ 	LONG_L	t0, TI_FLAGS($28)	# syscall tracing enabled?
+ 	and	t0, t1, t0
+@@ -72,7 +74,6 @@ syscall_common:
+ n32_syscall_trace_entry:
+ 	SAVE_STATIC
+ 	move	a0, sp
+-	move	a1, v0
+ 	jal	syscall_trace_enter
+ 
+ 	bltz	v0, 1f			# seccomp failed? Skip syscall
+diff --git a/arch/mips/kernel/scall64-n64.S b/arch/mips/kernel/scall64-n64.S
+index 23b2e2b1609cf..a3b5ab509b412 100644
+--- a/arch/mips/kernel/scall64-n64.S
++++ b/arch/mips/kernel/scall64-n64.S
+@@ -47,6 +47,8 @@ NESTED(handle_sys64, PT_SIZE, sp)
+ 
+ 	sd	a3, PT_R26(sp)		# save a3 for syscall restarting
+ 
++	LONG_S	v0, TI_SYSCALL($28)     # Store syscall number
++
+ 	li	t1, _TIF_WORK_SYSCALL_ENTRY
+ 	LONG_L	t0, TI_FLAGS($28)	# syscall tracing enabled?
+ 	and	t0, t1, t0
+@@ -83,7 +85,6 @@ n64_syscall_exit:
+ syscall_trace_entry:
+ 	SAVE_STATIC
+ 	move	a0, sp
+-	move	a1, v0
+ 	jal	syscall_trace_enter
+ 
+ 	bltz	v0, 1f			# seccomp failed? Skip syscall
+diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S
+index 50c9a57e0d3ad..6757368e9c940 100644
+--- a/arch/mips/kernel/scall64-o32.S
++++ b/arch/mips/kernel/scall64-o32.S
+@@ -79,6 +79,22 @@ loads_done:
+ 	PTR	load_a7, bad_stack_a7
+ 	.previous
+ 
++	/*
++	 * absolute syscall number is in v0 unless we called syscall(__NR_###)
++	 * where the real syscall number is in a0
++	 * note: NR_syscall is the first O32 syscall but the macro is
++	 * only defined when compiling with -mabi=32 (CONFIG_32BIT)
++	 * therefore __NR_O32_Linux is used (4000)
++	 */
++
++	subu	t2, v0,  __NR_O32_Linux
++	bnez	t2, 1f /* __NR_syscall at offset 0 */
++	LONG_S	a0, TI_SYSCALL($28)	# Save a0 as syscall number
++	b	2f
++1:
++	LONG_S	v0, TI_SYSCALL($28)	# Save v0 as syscall number
++2:
++
+ 	li	t1, _TIF_WORK_SYSCALL_ENTRY
+ 	LONG_L	t0, TI_FLAGS($28)	# syscall tracing enabled?
+ 	and	t0, t1, t0
+@@ -113,22 +129,7 @@ trace_a_syscall:
+ 	sd	a7, PT_R11(sp)		# For indirect syscalls
+ 
+ 	move	a0, sp
+-	/*
+-	 * absolute syscall number is in v0 unless we called syscall(__NR_###)
+-	 * where the real syscall number is in a0
+-	 * note: NR_syscall is the first O32 syscall but the macro is
+-	 * only defined when compiling with -mabi=32 (CONFIG_32BIT)
+-	 * therefore __NR_O32_Linux is used (4000)
+-	 */
+-	.set	push
+-	.set	reorder
+-	subu	t1, v0,  __NR_O32_Linux
+-	move	a1, v0
+-	bnez	t1, 1f /* __NR_syscall at offset 0 */
+-	ld	a1, PT_R4(sp) /* Arg1 for __NR_syscall case */
+-	.set	pop
+-
+-1:	jal	syscall_trace_enter
++	jal	syscall_trace_enter
+ 
+ 	bltz	v0, 1f			# seccomp failed? Skip syscall
+ 
+diff --git a/arch/s390/include/asm/dwarf.h b/arch/s390/include/asm/dwarf.h
+index 4f21ae561e4dd..390906b8e386e 100644
+--- a/arch/s390/include/asm/dwarf.h
++++ b/arch/s390/include/asm/dwarf.h
+@@ -9,6 +9,7 @@
+ #define CFI_DEF_CFA_OFFSET	.cfi_def_cfa_offset
+ #define CFI_ADJUST_CFA_OFFSET	.cfi_adjust_cfa_offset
+ #define CFI_RESTORE		.cfi_restore
++#define CFI_REL_OFFSET		.cfi_rel_offset
+ 
+ #ifdef CONFIG_AS_CFI_VAL_OFFSET
+ #define CFI_VAL_OFFSET		.cfi_val_offset
+diff --git a/arch/s390/kernel/vdso64/vdso_user_wrapper.S b/arch/s390/kernel/vdso64/vdso_user_wrapper.S
+index a775d7e528728..2183b8f64d574 100644
+--- a/arch/s390/kernel/vdso64/vdso_user_wrapper.S
++++ b/arch/s390/kernel/vdso64/vdso_user_wrapper.S
+@@ -23,8 +23,10 @@ __kernel_\func:
+ 	CFI_DEF_CFA_OFFSET (STACK_FRAME_OVERHEAD + WRAPPER_FRAME_SIZE)
+ 	CFI_VAL_OFFSET 15, -STACK_FRAME_OVERHEAD
+ 	stg	%r14,STACK_FRAME_OVERHEAD(%r15)
++	CFI_REL_OFFSET 14, STACK_FRAME_OVERHEAD
+ 	brasl	%r14,__s390_vdso_\func
+ 	lg	%r14,STACK_FRAME_OVERHEAD(%r15)
++	CFI_RESTORE 14
+ 	aghi	%r15,WRAPPER_FRAME_SIZE
+ 	CFI_DEF_CFA_OFFSET STACK_FRAME_OVERHEAD
+ 	CFI_RESTORE 15
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index b5a60fbb96644..ad4bae2465b19 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -2627,7 +2627,7 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
+ 		return 0;
+ 
+ 	start = pmd_val(*pmd) & HPAGE_MASK;
+-	end = start + HPAGE_SIZE - 1;
++	end = start + HPAGE_SIZE;
+ 	__storage_key_init_range(start, end);
+ 	set_bit(PG_arch_1, &page->flags);
+ 	cond_resched();
+diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
+index 3b5a4d25ca9b5..0ca46f5d9438f 100644
+--- a/arch/s390/mm/hugetlbpage.c
++++ b/arch/s390/mm/hugetlbpage.c
+@@ -146,7 +146,7 @@ static void clear_huge_pte_skeys(struct mm_struct *mm, unsigned long rste)
+ 	}
+ 
+ 	if (!test_and_set_bit(PG_arch_1, &page->flags))
+-		__storage_key_init_range(paddr, paddr + size - 1);
++		__storage_key_init_range(paddr, paddr + size);
+ }
+ 
+ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 63a8fb456b283..fe5b0c79e5411 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1290,7 +1290,7 @@ static bool iocg_kick_delay(struct ioc_gq *iocg, struct ioc_now *now)
+ {
+ 	struct ioc *ioc = iocg->ioc;
+ 	struct blkcg_gq *blkg = iocg_to_blkg(iocg);
+-	u64 tdelta, delay, new_delay;
++	u64 tdelta, delay, new_delay, shift;
+ 	s64 vover, vover_pct;
+ 	u32 hwa;
+ 
+@@ -1305,8 +1305,9 @@ static bool iocg_kick_delay(struct ioc_gq *iocg, struct ioc_now *now)
+ 
+ 	/* calculate the current delay in effect - 1/2 every second */
+ 	tdelta = now->now - iocg->delay_at;
+-	if (iocg->delay)
+-		delay = iocg->delay >> div64_u64(tdelta, USEC_PER_SEC);
++	shift = div64_u64(tdelta, USEC_PER_SEC);
++	if (iocg->delay && shift < BITS_PER_LONG)
++		delay = iocg->delay >> shift;
+ 	else
+ 		delay = 0;
+ 
+diff --git a/drivers/ata/sata_gemini.c b/drivers/ata/sata_gemini.c
+index 6fd54e968d10a..1564472fd5d50 100644
+--- a/drivers/ata/sata_gemini.c
++++ b/drivers/ata/sata_gemini.c
+@@ -201,7 +201,10 @@ int gemini_sata_start_bridge(struct sata_gemini *sg, unsigned int bridge)
+ 		pclk = sg->sata0_pclk;
+ 	else
+ 		pclk = sg->sata1_pclk;
+-	clk_enable(pclk);
++	ret = clk_enable(pclk);
++	if (ret)
++		return ret;
++
+ 	msleep(10);
+ 
+ 	/* Do not keep clocking a bridge that is not online */
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index a0927c7f83d60..7dc3b0cca252a 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -4186,7 +4186,8 @@ void clk_unregister(struct clk *clk)
+ 	if (ops == &clk_nodrv_ops) {
+ 		pr_err("%s: unregistered clock: %s\n", __func__,
+ 		       clk->core->name);
+-		goto unlock;
++		clk_prepare_unlock();
++		return;
+ 	}
+ 	/*
+ 	 * Assign empty clock ops for consumers that might still hold
+@@ -4220,11 +4221,10 @@ void clk_unregister(struct clk *clk)
+ 	if (clk->core->protect_count)
+ 		pr_warn("%s: unregistering protected clock: %s\n",
+ 					__func__, clk->core->name);
++	clk_prepare_unlock();
+ 
+ 	kref_put(&clk->core->ref, __clk_release);
+ 	free_clk(clk);
+-unlock:
+-	clk_prepare_unlock();
+ }
+ EXPORT_SYMBOL_GPL(clk_unregister);
+ 
+@@ -4387,13 +4387,11 @@ void __clk_put(struct clk *clk)
+ 	    clk->max_rate < clk->core->req_rate)
+ 		clk_core_set_rate_nolock(clk->core, clk->core->req_rate);
+ 
+-	owner = clk->core->owner;
+-	kref_put(&clk->core->ref, __clk_release);
+-
+ 	clk_prepare_unlock();
+ 
++	owner = clk->core->owner;
++	kref_put(&clk->core->ref, __clk_release);
+ 	module_put(owner);
+-
+ 	free_clk(clk);
+ }
+ 
+diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-h6.c b/drivers/clk/sunxi-ng/ccu-sun50i-h6.c
+index bff446b782907..2d4fc55718eb4 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun50i-h6.c
++++ b/drivers/clk/sunxi-ng/ccu-sun50i-h6.c
+@@ -1181,12 +1181,19 @@ static const u32 usb2_clk_regs[] = {
+ 	SUN50I_H6_USB3_CLK_REG,
+ };
+ 
++static struct ccu_mux_nb sun50i_h6_cpu_nb = {
++	.common		= &cpux_clk.common,
++	.cm		= &cpux_clk.mux,
++	.delay_us       = 1,
++	.bypass_index   = 0, /* index of 24 MHz oscillator */
++};
++
+ static int sun50i_h6_ccu_probe(struct platform_device *pdev)
+ {
+ 	struct resource *res;
+ 	void __iomem *reg;
++	int i, ret;
+ 	u32 val;
+-	int i;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	reg = devm_ioremap_resource(&pdev->dev, res);
+@@ -1240,7 +1247,15 @@ static int sun50i_h6_ccu_probe(struct platform_device *pdev)
+ 	val |= BIT(24);
+ 	writel(val, reg + SUN50I_H6_HDMI_CEC_CLK_REG);
+ 
+-	return sunxi_ccu_probe(pdev->dev.of_node, reg, &sun50i_h6_ccu_desc);
++	ret = sunxi_ccu_probe(pdev->dev.of_node, reg, &sun50i_h6_ccu_desc);
++	if (ret)
++		return ret;
++
++	/* Reparent CPU during PLL CPUX rate changes */
++	ccu_mux_notifier_register(pll_cpux_clk.common.hw.clk,
++				  &sun50i_h6_cpu_nb);
++
++	return 0;
+ }
+ 
+ static const struct of_device_id sun50i_h6_ccu_ids[] = {
+diff --git a/drivers/firewire/nosy.c b/drivers/firewire/nosy.c
+index 88ed971e32c0d..42d9f25efc5c3 100644
+--- a/drivers/firewire/nosy.c
++++ b/drivers/firewire/nosy.c
+@@ -148,10 +148,12 @@ packet_buffer_get(struct client *client, char __user *data, size_t user_length)
+ 	if (atomic_read(&buffer->size) == 0)
+ 		return -ENODEV;
+ 
+-	/* FIXME: Check length <= user_length. */
++	length = buffer->head->length;
++
++	if (length > user_length)
++		return 0;
+ 
+ 	end = buffer->data + buffer->capacity;
+-	length = buffer->head->length;
+ 
+ 	if (&buffer->head->data[length] < end) {
+ 		if (copy_to_user(data, buffer->head->data, length))
+diff --git a/drivers/firewire/ohci.c b/drivers/firewire/ohci.c
+index 45d19cc0aeac0..9b2471d12783c 100644
+--- a/drivers/firewire/ohci.c
++++ b/drivers/firewire/ohci.c
+@@ -2049,6 +2049,8 @@ static void bus_reset_work(struct work_struct *work)
+ 
+ 	ohci->generation = generation;
+ 	reg_write(ohci, OHCI1394_IntEventClear, OHCI1394_busReset);
++	if (param_debug & OHCI_PARAM_DEBUG_BUSRESETS)
++		reg_write(ohci, OHCI1394_IntMaskSet, OHCI1394_busReset);
+ 
+ 	if (ohci->quirks & QUIRK_RESET_PACKET)
+ 		ohci->request_generation = generation;
+@@ -2115,12 +2117,14 @@ static irqreturn_t irq_handler(int irq, void *data)
+ 		return IRQ_NONE;
+ 
+ 	/*
+-	 * busReset and postedWriteErr must not be cleared yet
++	 * busReset and postedWriteErr events must not be cleared yet
+ 	 * (OHCI 1.1 clauses 7.2.3.2 and 13.2.8.1)
+ 	 */
+ 	reg_write(ohci, OHCI1394_IntEventClear,
+ 		  event & ~(OHCI1394_busReset | OHCI1394_postedWriteErr));
+ 	log_irqs(ohci, event);
++	if (event & OHCI1394_busReset)
++		reg_write(ohci, OHCI1394_IntMaskClear, OHCI1394_busReset);
+ 
+ 	if (event & OHCI1394_selfIDComplete)
+ 		queue_work(selfid_workqueue, &ohci->bus_reset_work);
+diff --git a/drivers/gpio/gpio-crystalcove.c b/drivers/gpio/gpio-crystalcove.c
+index 2ba2257200865..0f0b4edaa4865 100644
+--- a/drivers/gpio/gpio-crystalcove.c
++++ b/drivers/gpio/gpio-crystalcove.c
+@@ -91,7 +91,7 @@ static inline int to_reg(int gpio, enum ctrl_register reg_type)
+ 		case 0x5e:
+ 			return GPIOPANELCTL;
+ 		default:
+-			return -EOPNOTSUPP;
++			return -ENOTSUPP;
+ 		}
+ 	}
+ 
+diff --git a/drivers/gpio/gpio-wcove.c b/drivers/gpio/gpio-wcove.c
+index b5fbba5a783af..e3755bc636267 100644
+--- a/drivers/gpio/gpio-wcove.c
++++ b/drivers/gpio/gpio-wcove.c
+@@ -102,7 +102,7 @@ static inline int to_reg(int gpio, enum ctrl_register reg_type)
+ 	unsigned int reg;
+ 
+ 	if (gpio >= WCOVE_GPIO_NUM)
+-		return -EOPNOTSUPP;
++		return -ENOTSUPP;
+ 
+ 	if (reg_type == CTRL_IN)
+ 		reg = GPIO_IN_CTRL_BASE + gpio;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_dp.c b/drivers/gpu/drm/nouveau/nouveau_dp.c
+index 447b7594b35ae..0107a21dc9f9b 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_dp.c
++++ b/drivers/gpu/drm/nouveau/nouveau_dp.c
+@@ -109,12 +109,15 @@ nouveau_dp_detect(struct nouveau_connector *nv_connector,
+ 	u8 *dpcd = nv_encoder->dp.dpcd;
+ 	int ret = NOUVEAU_DP_NONE;
+ 
+-	/* If we've already read the DPCD on an eDP device, we don't need to
+-	 * reread it as it won't change
++	/* eDP ports don't support hotplugging - so there's no point in probing eDP ports unless we
++	 * haven't probed them once before.
+ 	 */
+-	if (connector->connector_type == DRM_MODE_CONNECTOR_eDP &&
+-	    dpcd[DP_DPCD_REV] != 0)
+-		return NOUVEAU_DP_SST;
++	if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
++		if (connector->status == connector_status_connected)
++			return NOUVEAU_DP_SST;
++		else if (connector->status == connector_status_disconnected)
++			return NOUVEAU_DP_NONE;
++	}
+ 
+ 	mutex_lock(&nv_encoder->dp.hpd_irq_lock);
+ 	if (mstm) {
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+index 8bc41ec97d71a..6bacdb7583dfe 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+@@ -1066,7 +1066,7 @@ static int vmw_event_fence_action_create(struct drm_file *file_priv,
+ 	}
+ 
+ 	event->event.base.type = DRM_VMW_EVENT_FENCE_SIGNALED;
+-	event->event.base.length = sizeof(*event);
++	event->event.base.length = sizeof(event->event);
+ 	event->event.user_data = user_data;
+ 
+ 	ret = drm_event_reserve_init(dev, file_priv, &event->base, &event->event.base);
+diff --git a/drivers/gpu/host1x/bus.c b/drivers/gpu/host1x/bus.c
+index 6e3b49d0de66d..b113c8e0acd02 100644
+--- a/drivers/gpu/host1x/bus.c
++++ b/drivers/gpu/host1x/bus.c
+@@ -335,11 +335,6 @@ static int host1x_device_uevent(struct device *dev,
+ 	return 0;
+ }
+ 
+-static int host1x_dma_configure(struct device *dev)
+-{
+-	return of_dma_configure(dev, dev->of_node, true);
+-}
+-
+ static const struct dev_pm_ops host1x_device_pm_ops = {
+ 	.suspend = pm_generic_suspend,
+ 	.resume = pm_generic_resume,
+@@ -353,7 +348,6 @@ struct bus_type host1x_bus_type = {
+ 	.name = "host1x",
+ 	.match = host1x_device_match,
+ 	.uevent = host1x_device_uevent,
+-	.dma_configure = host1x_dma_configure,
+ 	.pm = &host1x_device_pm_ops,
+ };
+ 
+@@ -442,8 +436,6 @@ static int host1x_device_add(struct host1x *host1x,
+ 	device->dev.bus = &host1x_bus_type;
+ 	device->dev.parent = host1x->dev;
+ 
+-	of_dma_configure(&device->dev, host1x->dev->of_node, true);
+-
+ 	device->dev.dma_parms = &device->dma_parms;
+ 	dma_set_max_seg_size(&device->dev, UINT_MAX);
+ 
+diff --git a/drivers/hwmon/corsair-cpro.c b/drivers/hwmon/corsair-cpro.c
+index 591929ec217a6..05df31cab2e52 100644
+--- a/drivers/hwmon/corsair-cpro.c
++++ b/drivers/hwmon/corsair-cpro.c
+@@ -16,6 +16,7 @@
+ #include <linux/module.h>
+ #include <linux/mutex.h>
+ #include <linux/slab.h>
++#include <linux/spinlock.h>
+ #include <linux/types.h>
+ 
+ #define USB_VENDOR_ID_CORSAIR			0x1b1c
+@@ -77,8 +78,11 @@
+ struct ccp_device {
+ 	struct hid_device *hdev;
+ 	struct device *hwmon_dev;
++	/* For reinitializing the completion below */
++	spinlock_t wait_input_report_lock;
+ 	struct completion wait_input_report;
+ 	struct mutex mutex; /* whenever buffer is used, lock before send_usb_cmd */
++	u8 *cmd_buffer;
+ 	u8 *buffer;
+ 	int target[6];
+ 	DECLARE_BITMAP(temp_cnct, NUM_TEMP_SENSORS);
+@@ -111,15 +115,23 @@ static int send_usb_cmd(struct ccp_device *ccp, u8 command, u8 byte1, u8 byte2,
+ 	unsigned long t;
+ 	int ret;
+ 
+-	memset(ccp->buffer, 0x00, OUT_BUFFER_SIZE);
+-	ccp->buffer[0] = command;
+-	ccp->buffer[1] = byte1;
+-	ccp->buffer[2] = byte2;
+-	ccp->buffer[3] = byte3;
+-
++	memset(ccp->cmd_buffer, 0x00, OUT_BUFFER_SIZE);
++	ccp->cmd_buffer[0] = command;
++	ccp->cmd_buffer[1] = byte1;
++	ccp->cmd_buffer[2] = byte2;
++	ccp->cmd_buffer[3] = byte3;
++
++	/*
++	 * Disable raw event parsing for a moment to safely reinitialize the
++	 * completion. Reinit is done because hidraw could have triggered
++	 * the raw event parsing and marked the ccp->wait_input_report
++	 * completion as done.
++	 */
++	spin_lock_bh(&ccp->wait_input_report_lock);
+ 	reinit_completion(&ccp->wait_input_report);
++	spin_unlock_bh(&ccp->wait_input_report_lock);
+ 
+-	ret = hid_hw_output_report(ccp->hdev, ccp->buffer, OUT_BUFFER_SIZE);
++	ret = hid_hw_output_report(ccp->hdev, ccp->cmd_buffer, OUT_BUFFER_SIZE);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -135,11 +147,12 @@ static int ccp_raw_event(struct hid_device *hdev, struct hid_report *report, u8
+ 	struct ccp_device *ccp = hid_get_drvdata(hdev);
+ 
+ 	/* only copy buffer when requested */
+-	if (completion_done(&ccp->wait_input_report))
+-		return 0;
+-
+-	memcpy(ccp->buffer, data, min(IN_BUFFER_SIZE, size));
+-	complete(&ccp->wait_input_report);
++	spin_lock(&ccp->wait_input_report_lock);
++	if (!completion_done(&ccp->wait_input_report)) {
++		memcpy(ccp->buffer, data, min(IN_BUFFER_SIZE, size));
++		complete_all(&ccp->wait_input_report);
++	}
++	spin_unlock(&ccp->wait_input_report_lock);
+ 
+ 	return 0;
+ }
+@@ -491,7 +504,11 @@ static int ccp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	if (!ccp)
+ 		return -ENOMEM;
+ 
+-	ccp->buffer = devm_kmalloc(&hdev->dev, OUT_BUFFER_SIZE, GFP_KERNEL);
++	ccp->cmd_buffer = devm_kmalloc(&hdev->dev, OUT_BUFFER_SIZE, GFP_KERNEL);
++	if (!ccp->cmd_buffer)
++		return -ENOMEM;
++
++	ccp->buffer = devm_kmalloc(&hdev->dev, IN_BUFFER_SIZE, GFP_KERNEL);
+ 	if (!ccp->buffer)
+ 		return -ENOMEM;
+ 
+@@ -509,7 +526,9 @@ static int ccp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 
+ 	ccp->hdev = hdev;
+ 	hid_set_drvdata(hdev, ccp);
++
+ 	mutex_init(&ccp->mutex);
++	spin_lock_init(&ccp->wait_input_report_lock);
+ 	init_completion(&ccp->wait_input_report);
+ 
+ 	hid_device_io_start(hdev);
+diff --git a/drivers/hwmon/pmbus/ucd9000.c b/drivers/hwmon/pmbus/ucd9000.c
+index 9e26cc084a176..0046cfa44e6f3 100644
+--- a/drivers/hwmon/pmbus/ucd9000.c
++++ b/drivers/hwmon/pmbus/ucd9000.c
+@@ -80,11 +80,11 @@ struct ucd9000_debugfs_entry {
+  * It has been observed that the UCD90320 randomly fails register access when
+  * doing another access right on the back of a register write. To mitigate this
+  * make sure that there is a minimum delay between a write access and the
+- * following access. The 250us is based on experimental data. At a delay of
+- * 200us the issue seems to go away. Add a bit of extra margin to allow for
++ * following access. The 500 is based on experimental data. At a delay of
++ * 350us the issue seems to go away. Add a bit of extra margin to allow for
+  * system to system differences.
+  */
+-#define UCD90320_WAIT_DELAY_US 250
++#define UCD90320_WAIT_DELAY_US 500
+ 
+ static inline void ucd90320_wait(const struct ucd9000_data *data)
+ {
+diff --git a/drivers/iio/accel/mxc4005.c b/drivers/iio/accel/mxc4005.c
+index ecd9d8ad59288..e0d1491d52b01 100644
+--- a/drivers/iio/accel/mxc4005.c
++++ b/drivers/iio/accel/mxc4005.c
+@@ -27,9 +27,13 @@
+ #define MXC4005_REG_ZOUT_UPPER		0x07
+ #define MXC4005_REG_ZOUT_LOWER		0x08
+ 
++#define MXC4005_REG_INT_MASK0		0x0A
++
+ #define MXC4005_REG_INT_MASK1		0x0B
+ #define MXC4005_REG_INT_MASK1_BIT_DRDYE	0x01
+ 
++#define MXC4005_REG_INT_CLR0		0x00
++
+ #define MXC4005_REG_INT_CLR1		0x01
+ #define MXC4005_REG_INT_CLR1_BIT_DRDYC	0x01
+ 
+@@ -113,7 +117,9 @@ static bool mxc4005_is_readable_reg(struct device *dev, unsigned int reg)
+ static bool mxc4005_is_writeable_reg(struct device *dev, unsigned int reg)
+ {
+ 	switch (reg) {
++	case MXC4005_REG_INT_CLR0:
+ 	case MXC4005_REG_INT_CLR1:
++	case MXC4005_REG_INT_MASK0:
+ 	case MXC4005_REG_INT_MASK1:
+ 	case MXC4005_REG_CONTROL:
+ 		return true;
+@@ -334,17 +340,13 @@ static int mxc4005_set_trigger_state(struct iio_trigger *trig,
+ {
+ 	struct iio_dev *indio_dev = iio_trigger_get_drvdata(trig);
+ 	struct mxc4005_data *data = iio_priv(indio_dev);
++	unsigned int val;
+ 	int ret;
+ 
+ 	mutex_lock(&data->mutex);
+-	if (state) {
+-		ret = regmap_write(data->regmap, MXC4005_REG_INT_MASK1,
+-				   MXC4005_REG_INT_MASK1_BIT_DRDYE);
+-	} else {
+-		ret = regmap_write(data->regmap, MXC4005_REG_INT_MASK1,
+-				   ~MXC4005_REG_INT_MASK1_BIT_DRDYE);
+-	}
+ 
++	val = state ? MXC4005_REG_INT_MASK1_BIT_DRDYE : 0;
++	ret = regmap_write(data->regmap, MXC4005_REG_INT_MASK1, val);
+ 	if (ret < 0) {
+ 		mutex_unlock(&data->mutex);
+ 		dev_err(data->dev, "failed to update reg_int_mask1");
+@@ -386,6 +388,14 @@ static int mxc4005_chip_init(struct mxc4005_data *data)
+ 
+ 	dev_dbg(data->dev, "MXC4005 chip id %02x\n", reg);
+ 
++	ret = regmap_write(data->regmap, MXC4005_REG_INT_MASK0, 0);
++	if (ret < 0)
++		return dev_err_probe(data->dev, ret, "writing INT_MASK0\n");
++
++	ret = regmap_write(data->regmap, MXC4005_REG_INT_MASK1, 0);
++	if (ret < 0)
++		return dev_err_probe(data->dev, ret, "writing INT_MASK1\n");
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iio/imu/adis16475.c b/drivers/iio/imu/adis16475.c
+index aed1cf3bfa13c..88a673bc3eab5 100644
+--- a/drivers/iio/imu/adis16475.c
++++ b/drivers/iio/imu/adis16475.c
+@@ -1070,6 +1070,7 @@ static int adis16475_config_sync_mode(struct adis16475 *st)
+ 	struct device *dev = &st->adis.spi->dev;
+ 	const struct adis16475_sync *sync;
+ 	u32 sync_mode;
++	u16 val;
+ 
+ 	/* default to internal clk */
+ 	st->clk_freq = st->info->int_clk * 1000;
+@@ -1155,8 +1156,9 @@ static int adis16475_config_sync_mode(struct adis16475 *st)
+ 	 * I'm keeping this for simplicity and avoiding extra variables
+ 	 * in chip_info.
+ 	 */
++	val = ADIS16475_SYNC_MODE(sync->sync_mode);
+ 	ret = __adis_update_bits(&st->adis, ADIS16475_REG_MSG_CTRL,
+-				 ADIS16475_SYNC_MODE_MASK, sync->sync_mode);
++				 ADIS16475_SYNC_MODE_MASK, val);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 09c7f52156f3f..67ceab4573be4 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -2532,6 +2532,7 @@ static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
+  fail:
+ 	pr_warn("md: failed to register dev-%s for %s\n",
+ 		b, mdname(mddev));
++	mddev_destroy_serial_pool(mddev, rdev, false);
+ 	return err;
+ }
+ 
+diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
+index 305ffad131a29..02bea44369435 100644
+--- a/drivers/misc/eeprom/at24.c
++++ b/drivers/misc/eeprom/at24.c
+@@ -585,6 +585,31 @@ static unsigned int at24_get_offset_adj(u8 flags, unsigned int byte_len)
+ 	}
+ }
+ 
++static void at24_probe_temp_sensor(struct i2c_client *client)
++{
++	struct at24_data *at24 = i2c_get_clientdata(client);
++	struct i2c_board_info info = { .type = "jc42" };
++	int ret;
++	u8 val;
++
++	/*
++	 * Byte 2 has value 11 for DDR3, earlier versions don't
++	 * support the thermal sensor present flag
++	 */
++	ret = at24_read(at24, 2, &val, 1);
++	if (ret || val != 11)
++		return;
++
++	/* Byte 32, bit 7 is set if temp sensor is present */
++	ret = at24_read(at24, 32, &val, 1);
++	if (ret || !(val & BIT(7)))
++		return;
++
++	info.addr = 0x18 | (client->addr & 7);
++
++	i2c_new_client_device(client->adapter, &info);
++}
++
+ static int at24_probe(struct i2c_client *client)
+ {
+ 	struct regmap_config regmap_config = { };
+@@ -757,14 +782,6 @@ static int at24_probe(struct i2c_client *client)
+ 	pm_runtime_set_active(dev);
+ 	pm_runtime_enable(dev);
+ 
+-	at24->nvmem = devm_nvmem_register(dev, &nvmem_config);
+-	if (IS_ERR(at24->nvmem)) {
+-		pm_runtime_disable(dev);
+-		if (!pm_runtime_status_suspended(dev))
+-			regulator_disable(at24->vcc_reg);
+-		return PTR_ERR(at24->nvmem);
+-	}
+-
+ 	/*
+ 	 * Perform a one-byte test read to verify that the
+ 	 * chip is functional.
+@@ -777,6 +794,19 @@ static int at24_probe(struct i2c_client *client)
+ 		return -ENODEV;
+ 	}
+ 
++	at24->nvmem = devm_nvmem_register(dev, &nvmem_config);
++	if (IS_ERR(at24->nvmem)) {
++		pm_runtime_disable(dev);
++		if (!pm_runtime_status_suspended(dev))
++			regulator_disable(at24->vcc_reg);
++		return dev_err_probe(dev, PTR_ERR(at24->nvmem),
++				     "failed to register nvmem\n");
++	}
++
++	/* If this a SPD EEPROM, probe for DDR3 thermal sensor */
++	if (cdata == &at24_data_spd)
++		at24_probe_temp_sensor(client);
++
+ 	pm_runtime_idle(dev);
+ 
+ 	if (writable)
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 28acedb10737d..129e36ea0370e 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -115,6 +115,8 @@
+ #define MEI_DEV_ID_ARL_S      0x7F68  /* Arrow Lake Point S */
+ #define MEI_DEV_ID_ARL_H      0x7770  /* Arrow Lake Point H */
+ 
++#define MEI_DEV_ID_LNL_M      0xA870  /* Lunar Lake Point M */
++
+ /*
+  * MEI HW Section
+  */
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 2809338a5c3ae..188d847662ff7 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -121,6 +121,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ARL_S, MEI_ME_PCH15_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ARL_H, MEI_ME_PCH15_CFG)},
+ 
++	{MEI_PCI_DEVICE(MEI_DEV_ID_LNL_M, MEI_ME_PCH15_CFG)},
++
+ 	/* required last entry */
+ 	{0, }
+ };
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 53fbef9f4ce54..ac56bc175b51b 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -4650,7 +4650,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.prod_num = MV88E6XXX_PORT_SWITCH_ID_PROD_6141,
+ 		.family = MV88E6XXX_FAMILY_6341,
+ 		.name = "Marvell 88E6141",
+-		.num_databases = 4096,
++		.num_databases = 256,
+ 		.num_macs = 2048,
+ 		.num_ports = 6,
+ 		.num_internal_phys = 5,
+@@ -5056,7 +5056,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.prod_num = MV88E6XXX_PORT_SWITCH_ID_PROD_6341,
+ 		.family = MV88E6XXX_FAMILY_6341,
+ 		.name = "Marvell 88E6341",
+-		.num_databases = 4096,
++		.num_databases = 256,
+ 		.num_macs = 2048,
+ 		.num_internal_phys = 5,
+ 		.num_ports = 6,
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index ed0589a1a00d8..89dc69d1807e1 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -2,7 +2,7 @@
+ /*
+  * Broadcom GENET (Gigabit Ethernet) controller driver
+  *
+- * Copyright (c) 2014-2020 Broadcom
++ * Copyright (c) 2014-2024 Broadcom
+  */
+ 
+ #define pr_fmt(fmt)				"bcmgenet: " fmt
+@@ -3252,7 +3252,7 @@ static void bcmgenet_get_hw_addr(struct bcmgenet_priv *priv,
+ }
+ 
+ /* Returns a reusable dma control register value */
+-static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
++static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv, bool flush_rx)
+ {
+ 	unsigned int i;
+ 	u32 reg;
+@@ -3277,6 +3277,14 @@ static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
+ 	udelay(10);
+ 	bcmgenet_umac_writel(priv, 0, UMAC_TX_FLUSH);
+ 
++	if (flush_rx) {
++		reg = bcmgenet_rbuf_ctrl_get(priv);
++		bcmgenet_rbuf_ctrl_set(priv, reg | BIT(0));
++		udelay(10);
++		bcmgenet_rbuf_ctrl_set(priv, reg);
++		udelay(10);
++	}
++
+ 	return dma_ctrl;
+ }
+ 
+@@ -3298,7 +3306,9 @@ static void bcmgenet_netif_start(struct net_device *dev)
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 
+ 	/* Start the network engine */
++	netif_addr_lock_bh(dev);
+ 	bcmgenet_set_rx_mode(dev);
++	netif_addr_unlock_bh(dev);
+ 	bcmgenet_enable_rx_napi(priv);
+ 
+ 	umac_enable_set(priv, CMD_TX_EN | CMD_RX_EN, true);
+@@ -3340,8 +3350,8 @@ static int bcmgenet_open(struct net_device *dev)
+ 
+ 	bcmgenet_set_hw_addr(priv, dev->dev_addr);
+ 
+-	/* Disable RX/TX DMA and flush TX queues */
+-	dma_ctrl = bcmgenet_dma_disable(priv);
++	/* Disable RX/TX DMA and flush TX and RX queues */
++	dma_ctrl = bcmgenet_dma_disable(priv, true);
+ 
+ 	/* Reinitialize TDMA and RDMA and SW housekeeping */
+ 	ret = bcmgenet_init_dma(priv);
+@@ -4199,7 +4209,7 @@ static int bcmgenet_resume(struct device *d)
+ 			bcmgenet_hfb_create_rxnfc_filter(priv, rule);
+ 
+ 	/* Disable RX/TX DMA and flush TX queues */
+-	dma_ctrl = bcmgenet_dma_disable(priv);
++	dma_ctrl = bcmgenet_dma_disable(priv, false);
+ 
+ 	/* Reinitialize TDMA and RDMA and SW housekeeping */
+ 	ret = bcmgenet_init_dma(priv);
+diff --git a/drivers/net/ethernet/brocade/bna/bnad_debugfs.c b/drivers/net/ethernet/brocade/bna/bnad_debugfs.c
+index 04ad0f2b9677e..777f0d7e48192 100644
+--- a/drivers/net/ethernet/brocade/bna/bnad_debugfs.c
++++ b/drivers/net/ethernet/brocade/bna/bnad_debugfs.c
+@@ -312,7 +312,7 @@ bnad_debugfs_write_regrd(struct file *file, const char __user *buf,
+ 	void *kern_buf;
+ 
+ 	/* Copy the user space buf */
+-	kern_buf = memdup_user(buf, nbytes);
++	kern_buf = memdup_user_nul(buf, nbytes);
+ 	if (IS_ERR(kern_buf))
+ 		return PTR_ERR(kern_buf);
+ 
+@@ -372,7 +372,7 @@ bnad_debugfs_write_regwr(struct file *file, const char __user *buf,
+ 	void *kern_buf;
+ 
+ 	/* Copy the user space buf */
+-	kern_buf = memdup_user(buf, nbytes);
++	kern_buf = memdup_user_nul(buf, nbytes);
+ 	if (IS_ERR(kern_buf))
+ 		return PTR_ERR(kern_buf);
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+index ccb6bd002b20d..89917dde0e223 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+@@ -2678,12 +2678,12 @@ int cxgb4_selftest_lb_pkt(struct net_device *netdev)
+ 	lb->loopback = 1;
+ 
+ 	q = &adap->sge.ethtxq[pi->first_qset];
+-	__netif_tx_lock(q->txq, smp_processor_id());
++	__netif_tx_lock_bh(q->txq);
+ 
+ 	reclaim_completed_tx(adap, &q->q, -1, true);
+ 	credits = txq_avail(&q->q) - ndesc;
+ 	if (unlikely(credits < 0)) {
+-		__netif_tx_unlock(q->txq);
++		__netif_tx_unlock_bh(q->txq);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -2718,7 +2718,7 @@ int cxgb4_selftest_lb_pkt(struct net_device *netdev)
+ 	init_completion(&lb->completion);
+ 	txq_advance(&q->q, ndesc);
+ 	cxgb4_ring_tx_db(adap, &q->q, ndesc);
+-	__netif_tx_unlock(q->txq);
++	__netif_tx_unlock_bh(q->txq);
+ 
+ 	/* wait for the pkt to return */
+ 	ret = wait_for_completion_timeout(&lb->completion, 10 * HZ);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index c14c391a0cec6..5dbee850fef53 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -7005,8 +7005,7 @@ static void hclge_set_timer_task(struct hnae3_handle *handle, bool enable)
+ 		/* Set the DOWN flag here to disable link updating */
+ 		set_bit(HCLGE_STATE_DOWN, &hdev->state);
+ 
+-		/* flush memory to make sure DOWN is seen by service task */
+-		smp_mb__before_atomic();
++		smp_mb__after_atomic(); /* flush memory to make sure DOWN is seen by service task */
+ 		hclge_flush_link_update(hdev);
+ 	}
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 2bb0ce1761fb0..be41117ec1465 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2583,8 +2583,7 @@ static void hclgevf_set_timer_task(struct hnae3_handle *handle, bool enable)
+ 	} else {
+ 		set_bit(HCLGEVF_STATE_DOWN, &hdev->state);
+ 
+-		/* flush memory to make sure DOWN is seen by service task */
+-		smp_mb__before_atomic();
++		smp_mb__after_atomic(); /* flush memory to make sure DOWN is seen by service task */
+ 		hclgevf_flush_link_update(hdev);
+ 	}
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+index 5205796859f6c..d212bab3ddbae 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+@@ -420,12 +420,10 @@ static ssize_t rvu_dbg_qsize_write(struct file *filp,
+ 	u16 pcifunc;
+ 	int ret, lf;
+ 
+-	cmd_buf = memdup_user(buffer, count + 1);
++	cmd_buf = memdup_user_nul(buffer, count);
+ 	if (IS_ERR(cmd_buf))
+ 		return -ENOMEM;
+ 
+-	cmd_buf[count] = '\0';
+-
+ 	cmd_buf_tmp = strchr(cmd_buf, '\n');
+ 	if (cmd_buf_tmp) {
+ 		*cmd_buf_tmp = '\0';
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c
+index a2e4dfb5cb44e..5f4962d90022e 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c
+@@ -1877,8 +1877,8 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
+ 			    struct flow_cls_offload *f)
+ {
+ 	struct qede_arfs_fltr_node *n;
+-	int min_hlen, rc = -EINVAL;
+ 	struct qede_arfs_tuple t;
++	int min_hlen, rc;
+ 
+ 	__qede_lock(edev);
+ 
+@@ -1888,7 +1888,8 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
+ 	}
+ 
+ 	/* parse flower attribute and prepare filter */
+-	if (qede_parse_flow_attr(edev, proto, f->rule, &t))
++	rc = qede_parse_flow_attr(edev, proto, f->rule, &t);
++	if (rc)
+ 		goto unlock;
+ 
+ 	/* Validate profile mode and number of filters */
+@@ -1897,11 +1898,13 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
+ 		DP_NOTICE(edev,
+ 			  "Filter configuration invalidated, filter mode=0x%x, configured mode=0x%x, filter count=0x%x\n",
+ 			  t.mode, edev->arfs->mode, edev->arfs->filter_count);
++		rc = -EINVAL;
+ 		goto unlock;
+ 	}
+ 
+ 	/* parse tc actions and get the vf_id */
+-	if (qede_parse_actions(edev, &f->rule->action, f->common.extack))
++	rc = qede_parse_actions(edev, &f->rule->action, f->common.extack);
++	if (rc)
+ 		goto unlock;
+ 
+ 	if (qede_flow_find_fltr(edev, &t)) {
+@@ -2007,10 +2010,9 @@ static int qede_flow_spec_to_rule(struct qede_dev *edev,
+ 	if (IS_ERR(flow))
+ 		return PTR_ERR(flow);
+ 
+-	if (qede_parse_flow_attr(edev, proto, flow->rule, t)) {
+-		err = -EINVAL;
++	err = qede_parse_flow_attr(edev, proto, flow->rule, t);
++	if (err)
+ 		goto err_out;
+-	}
+ 
+ 	/* Make sure location is valid and filter isn't already set */
+ 	err = qede_flow_spec_validate(edev, &flow->rule->action, t,
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 3d342908f57a0..be2761d0bcd91 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1358,6 +1358,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x0489, 0xe0b5, 0)},	/* Foxconn T77W968 LTE with eSIM support*/
+ 	{QMI_FIXED_INTF(0x2692, 0x9025, 4)},    /* Cellient MPL200 (rebranded Qualcomm 05c6:9025) */
+ 	{QMI_QUIRK_SET_DTR(0x1546, 0x1342, 4)},	/* u-blox LARA-L6 */
++	{QMI_QUIRK_SET_DTR(0x33f8, 0x0104, 4)}, /* Rolling RW101 RMNET */
+ 
+ 	/* 4. Gobi 1000 devices */
+ 	{QMI_GOBI1K_DEVICE(0x05c6, 0x9212)},	/* Acer Gobi Modem Device */
+diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
+index c2ba4064ce5b2..3e05444537ed1 100644
+--- a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
++++ b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
+@@ -43,7 +43,7 @@
+ #define SCU614		0x614 /* Disable GPIO Internal Pull-Down #1 */
+ #define SCU618		0x618 /* Disable GPIO Internal Pull-Down #2 */
+ #define SCU61C		0x61c /* Disable GPIO Internal Pull-Down #3 */
+-#define SCU620		0x620 /* Disable GPIO Internal Pull-Down #4 */
++#define SCU630		0x630 /* Disable GPIO Internal Pull-Down #4 */
+ #define SCU634		0x634 /* Disable GPIO Internal Pull-Down #5 */
+ #define SCU638		0x638 /* Disable GPIO Internal Pull-Down #6 */
+ #define SCU694		0x694 /* Multi-function Pin Control #25 */
+@@ -2471,38 +2471,38 @@ static struct aspeed_pin_config aspeed_g6_configs[] = {
+ 	ASPEED_PULL_DOWN_PINCONF(D14, SCU61C, 0),
+ 
+ 	/* GPIOS7 */
+-	ASPEED_PULL_DOWN_PINCONF(T24, SCU620, 23),
++	ASPEED_PULL_DOWN_PINCONF(T24, SCU630, 23),
+ 	/* GPIOS6 */
+-	ASPEED_PULL_DOWN_PINCONF(P23, SCU620, 22),
++	ASPEED_PULL_DOWN_PINCONF(P23, SCU630, 22),
+ 	/* GPIOS5 */
+-	ASPEED_PULL_DOWN_PINCONF(P24, SCU620, 21),
++	ASPEED_PULL_DOWN_PINCONF(P24, SCU630, 21),
+ 	/* GPIOS4 */
+-	ASPEED_PULL_DOWN_PINCONF(R26, SCU620, 20),
++	ASPEED_PULL_DOWN_PINCONF(R26, SCU630, 20),
+ 	/* GPIOS3*/
+-	ASPEED_PULL_DOWN_PINCONF(R24, SCU620, 19),
++	ASPEED_PULL_DOWN_PINCONF(R24, SCU630, 19),
+ 	/* GPIOS2 */
+-	ASPEED_PULL_DOWN_PINCONF(T26, SCU620, 18),
++	ASPEED_PULL_DOWN_PINCONF(T26, SCU630, 18),
+ 	/* GPIOS1 */
+-	ASPEED_PULL_DOWN_PINCONF(T25, SCU620, 17),
++	ASPEED_PULL_DOWN_PINCONF(T25, SCU630, 17),
+ 	/* GPIOS0 */
+-	ASPEED_PULL_DOWN_PINCONF(R23, SCU620, 16),
++	ASPEED_PULL_DOWN_PINCONF(R23, SCU630, 16),
+ 
+ 	/* GPIOR7 */
+-	ASPEED_PULL_DOWN_PINCONF(U26, SCU620, 15),
++	ASPEED_PULL_DOWN_PINCONF(U26, SCU630, 15),
+ 	/* GPIOR6 */
+-	ASPEED_PULL_DOWN_PINCONF(W26, SCU620, 14),
++	ASPEED_PULL_DOWN_PINCONF(W26, SCU630, 14),
+ 	/* GPIOR5 */
+-	ASPEED_PULL_DOWN_PINCONF(T23, SCU620, 13),
++	ASPEED_PULL_DOWN_PINCONF(T23, SCU630, 13),
+ 	/* GPIOR4 */
+-	ASPEED_PULL_DOWN_PINCONF(U25, SCU620, 12),
++	ASPEED_PULL_DOWN_PINCONF(U25, SCU630, 12),
+ 	/* GPIOR3*/
+-	ASPEED_PULL_DOWN_PINCONF(V26, SCU620, 11),
++	ASPEED_PULL_DOWN_PINCONF(V26, SCU630, 11),
+ 	/* GPIOR2 */
+-	ASPEED_PULL_DOWN_PINCONF(V24, SCU620, 10),
++	ASPEED_PULL_DOWN_PINCONF(V24, SCU630, 10),
+ 	/* GPIOR1 */
+-	ASPEED_PULL_DOWN_PINCONF(U24, SCU620, 9),
++	ASPEED_PULL_DOWN_PINCONF(U24, SCU630, 9),
+ 	/* GPIOR0 */
+-	ASPEED_PULL_DOWN_PINCONF(V25, SCU620, 8),
++	ASPEED_PULL_DOWN_PINCONF(V25, SCU630, 8),
+ 
+ 	/* GPIOX7 */
+ 	ASPEED_PULL_DOWN_PINCONF(AB10, SCU634, 31),
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 4d52494369935..4d46260e6bff3 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -2075,13 +2075,7 @@ int pinctrl_enable(struct pinctrl_dev *pctldev)
+ 
+ 	error = pinctrl_claim_hogs(pctldev);
+ 	if (error) {
+-		dev_err(pctldev->dev, "could not claim hogs: %i\n",
+-			error);
+-		pinctrl_free_pindescs(pctldev, pctldev->desc->pins,
+-				      pctldev->desc->npins);
+-		mutex_destroy(&pctldev->mutex);
+-		kfree(pctldev);
+-
++		dev_err(pctldev->dev, "could not claim hogs: %i\n", error);
+ 		return error;
+ 	}
+ 
+diff --git a/drivers/pinctrl/devicetree.c b/drivers/pinctrl/devicetree.c
+index eac55fee5281c..0220228c50404 100644
+--- a/drivers/pinctrl/devicetree.c
++++ b/drivers/pinctrl/devicetree.c
+@@ -220,14 +220,16 @@ int pinctrl_dt_to_map(struct pinctrl *p, struct pinctrl_dev *pctldev)
+ 	for (state = 0; ; state++) {
+ 		/* Retrieve the pinctrl-* property */
+ 		propname = kasprintf(GFP_KERNEL, "pinctrl-%d", state);
+-		if (!propname)
+-			return -ENOMEM;
++		if (!propname) {
++			ret = -ENOMEM;
++			goto err;
++		}
+ 		prop = of_find_property(np, propname, &size);
+ 		kfree(propname);
+ 		if (!prop) {
+ 			if (state == 0) {
+-				of_node_put(np);
+-				return -ENODEV;
++				ret = -ENODEV;
++				goto err;
+ 			}
+ 			break;
+ 		}
+diff --git a/drivers/pinctrl/mediatek/pinctrl-paris.c b/drivers/pinctrl/mediatek/pinctrl-paris.c
+index e486d66e220b0..7f1acfe1a413d 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-paris.c
++++ b/drivers/pinctrl/mediatek/pinctrl-paris.c
+@@ -79,78 +79,76 @@ static int mtk_pinconf_get(struct pinctrl_dev *pctldev,
+ {
+ 	struct mtk_pinctrl *hw = pinctrl_dev_get_drvdata(pctldev);
+ 	u32 param = pinconf_to_config_param(*config);
+-	int pullup, err, reg, ret = 1;
++	int pullup, reg, err = -ENOTSUPP, ret = 1;
+ 	const struct mtk_pin_desc *desc;
+ 
+-	if (pin >= hw->soc->npins) {
+-		err = -EINVAL;
+-		goto out;
+-	}
++	if (pin >= hw->soc->npins)
++		return -EINVAL;
++
+ 	desc = (const struct mtk_pin_desc *)&hw->soc->pins[pin];
+ 
+ 	switch (param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+-		if (hw->soc->bias_get_combo) {
+-			err = hw->soc->bias_get_combo(hw, desc, &pullup, &ret);
+-			if (err)
+-				goto out;
+-			if (ret == MTK_PUPD_SET_R1R0_00)
+-				ret = MTK_DISABLE;
+-			if (param == PIN_CONFIG_BIAS_DISABLE) {
+-				if (ret != MTK_DISABLE)
+-					err = -EINVAL;
+-			} else if (param == PIN_CONFIG_BIAS_PULL_UP) {
+-				if (!pullup || ret == MTK_DISABLE)
+-					err = -EINVAL;
+-			} else if (param == PIN_CONFIG_BIAS_PULL_DOWN) {
+-				if (pullup || ret == MTK_DISABLE)
+-					err = -EINVAL;
+-			}
+-		} else {
+-			err = -ENOTSUPP;
++		if (!hw->soc->bias_get_combo)
++			break;
++		err = hw->soc->bias_get_combo(hw, desc, &pullup, &ret);
++		if (err)
++			break;
++		if (ret == MTK_PUPD_SET_R1R0_00)
++			ret = MTK_DISABLE;
++		if (param == PIN_CONFIG_BIAS_DISABLE) {
++			if (ret != MTK_DISABLE)
++				err = -EINVAL;
++		} else if (param == PIN_CONFIG_BIAS_PULL_UP) {
++			if (!pullup || ret == MTK_DISABLE)
++				err = -EINVAL;
++		} else if (param == PIN_CONFIG_BIAS_PULL_DOWN) {
++			if (pullup || ret == MTK_DISABLE)
++				err = -EINVAL;
+ 		}
+ 		break;
+ 	case PIN_CONFIG_SLEW_RATE:
+ 		err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_SR, &ret);
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+-	case PIN_CONFIG_OUTPUT_ENABLE:
++		err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_IES, &ret);
++		if (!ret)
++			err = -EINVAL;
++		break;
++	case PIN_CONFIG_OUTPUT:
+ 		err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_DIR, &ret);
+ 		if (err)
+-			goto out;
+-		/*     CONFIG     Current direction return value
+-		 * -------------  ----------------- ----------------------
+-		 * OUTPUT_ENABLE       output       1 (= HW value)
+-		 *                     input        0 (= HW value)
+-		 * INPUT_ENABLE        output       0 (= reverse HW value)
+-		 *                     input        1 (= reverse HW value)
+-		 */
+-		if (param == PIN_CONFIG_INPUT_ENABLE)
+-			ret = !ret;
++			break;
++
++		if (!ret) {
++			err = -EINVAL;
++			break;
++		}
+ 
++		err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_DO, &ret);
+ 		break;
+ 	case PIN_CONFIG_INPUT_SCHMITT_ENABLE:
+ 		err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_DIR, &ret);
+ 		if (err)
+-			goto out;
++			break;
+ 		/* return error when in output mode
+ 		 * because schmitt trigger only work in input mode
+ 		 */
+ 		if (ret) {
+ 			err = -EINVAL;
+-			goto out;
++			break;
+ 		}
+ 
+ 		err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_SMT, &ret);
+-
++		if (!ret)
++			err = -EINVAL;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_STRENGTH:
+-		if (hw->soc->drive_get)
+-			err = hw->soc->drive_get(hw, desc, &ret);
+-		else
+-			err = -ENOTSUPP;
++		if (!hw->soc->drive_get)
++			break;
++		err = hw->soc->drive_get(hw, desc, &ret);
+ 		break;
+ 	case MTK_PIN_CONFIG_TDSEL:
+ 	case MTK_PIN_CONFIG_RDSEL:
+@@ -160,23 +158,18 @@ static int mtk_pinconf_get(struct pinctrl_dev *pctldev,
+ 		break;
+ 	case MTK_PIN_CONFIG_PU_ADV:
+ 	case MTK_PIN_CONFIG_PD_ADV:
+-		if (hw->soc->adv_pull_get) {
+-			pullup = param == MTK_PIN_CONFIG_PU_ADV;
+-			err = hw->soc->adv_pull_get(hw, desc, pullup, &ret);
+-		} else
+-			err = -ENOTSUPP;
++		if (!hw->soc->adv_pull_get)
++			break;
++		pullup = param == MTK_PIN_CONFIG_PU_ADV;
++		err = hw->soc->adv_pull_get(hw, desc, pullup, &ret);
+ 		break;
+ 	case MTK_PIN_CONFIG_DRV_ADV:
+-		if (hw->soc->adv_drive_get)
+-			err = hw->soc->adv_drive_get(hw, desc, &ret);
+-		else
+-			err = -ENOTSUPP;
++		if (!hw->soc->adv_drive_get)
++			break;
++		err = hw->soc->adv_drive_get(hw, desc, &ret);
+ 		break;
+-	default:
+-		err = -ENOTSUPP;
+ 	}
+ 
+-out:
+ 	if (!err)
+ 		*config = pinconf_to_config_packed(param, ret);
+ 
+@@ -188,54 +181,33 @@ static int mtk_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ {
+ 	struct mtk_pinctrl *hw = pinctrl_dev_get_drvdata(pctldev);
+ 	const struct mtk_pin_desc *desc;
+-	int err = 0;
++	int err = -ENOTSUPP;
+ 	u32 reg;
+ 
+-	if (pin >= hw->soc->npins) {
+-		err = -EINVAL;
+-		goto err;
+-	}
++	if (pin >= hw->soc->npins)
++		return -EINVAL;
++
+ 	desc = (const struct mtk_pin_desc *)&hw->soc->pins[pin];
+ 
+ 	switch ((u32)param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		if (hw->soc->bias_set_combo)
+-			err = hw->soc->bias_set_combo(hw, desc, 0, MTK_DISABLE);
+-		else
+-			err = -ENOTSUPP;
++		if (!hw->soc->bias_set_combo)
++			break;
++		err = hw->soc->bias_set_combo(hw, desc, 0, MTK_DISABLE);
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+-		if (hw->soc->bias_set_combo)
+-			err = hw->soc->bias_set_combo(hw, desc, 1, arg);
+-		else
+-			err = -ENOTSUPP;
++		if (!hw->soc->bias_set_combo)
++			break;
++		err = hw->soc->bias_set_combo(hw, desc, 1, arg);
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+-		if (hw->soc->bias_set_combo)
+-			err = hw->soc->bias_set_combo(hw, desc, 0, arg);
+-		else
+-			err = -ENOTSUPP;
+-		break;
+-	case PIN_CONFIG_OUTPUT_ENABLE:
+-		err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_SMT,
+-				       MTK_DISABLE);
+-		/* Keep set direction to consider the case that a GPIO pin
+-		 *  does not have SMT control
+-		 */
+-		if (err != -ENOTSUPP)
+-			goto err;
+-
+-		err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_DIR,
+-				       MTK_OUTPUT);
++		if (!hw->soc->bias_set_combo)
++			break;
++		err = hw->soc->bias_set_combo(hw, desc, 0, arg);
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+ 		/* regard all non-zero value as enable */
+ 		err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_IES, !!arg);
+-		if (err)
+-			goto err;
+-
+-		err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_DIR,
+-				       MTK_INPUT);
+ 		break;
+ 	case PIN_CONFIG_SLEW_RATE:
+ 		/* regard all non-zero value as enable */
+@@ -245,7 +217,7 @@ static int mtk_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 		err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_DIR,
+ 				       MTK_OUTPUT);
+ 		if (err)
+-			goto err;
++			break;
+ 
+ 		err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_DO,
+ 				       arg);
+@@ -257,15 +229,14 @@ static int mtk_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 		 */
+ 		err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_DIR, !arg);
+ 		if (err)
+-			goto err;
++			break;
+ 
+ 		err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_SMT, !!arg);
+ 		break;
+ 	case PIN_CONFIG_DRIVE_STRENGTH:
+-		if (hw->soc->drive_set)
+-			err = hw->soc->drive_set(hw, desc, arg);
+-		else
+-			err = -ENOTSUPP;
++		if (!hw->soc->drive_set)
++			break;
++		err = hw->soc->drive_set(hw, desc, arg);
+ 		break;
+ 	case MTK_PIN_CONFIG_TDSEL:
+ 	case MTK_PIN_CONFIG_RDSEL:
+@@ -275,26 +246,19 @@ static int mtk_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 		break;
+ 	case MTK_PIN_CONFIG_PU_ADV:
+ 	case MTK_PIN_CONFIG_PD_ADV:
+-		if (hw->soc->adv_pull_set) {
+-			bool pullup;
+-
+-			pullup = param == MTK_PIN_CONFIG_PU_ADV;
+-			err = hw->soc->adv_pull_set(hw, desc, pullup,
+-						    arg);
+-		} else
+-			err = -ENOTSUPP;
++		if (!hw->soc->adv_pull_set)
++			break;
++		err = hw->soc->adv_pull_set(hw, desc,
++					    (param == MTK_PIN_CONFIG_PU_ADV),
++					    arg);
+ 		break;
+ 	case MTK_PIN_CONFIG_DRV_ADV:
+-		if (hw->soc->adv_drive_set)
+-			err = hw->soc->adv_drive_set(hw, desc, arg);
+-		else
+-			err = -ENOTSUPP;
++		if (!hw->soc->adv_drive_set)
++			break;
++		err = hw->soc->adv_drive_set(hw, desc, arg);
+ 		break;
+-	default:
+-		err = -ENOTSUPP;
+ 	}
+ 
+-err:
+ 	return err;
+ }
+ 
+diff --git a/drivers/pinctrl/meson/pinctrl-meson-a1.c b/drivers/pinctrl/meson/pinctrl-meson-a1.c
+index 8abf750eac7ee..5be02a4fbe775 100644
+--- a/drivers/pinctrl/meson/pinctrl-meson-a1.c
++++ b/drivers/pinctrl/meson/pinctrl-meson-a1.c
+@@ -250,7 +250,7 @@ static const unsigned int pdm_dclk_x_pins[]		= { GPIOX_10 };
+ static const unsigned int pdm_din2_a_pins[]		= { GPIOA_6 };
+ static const unsigned int pdm_din1_a_pins[]		= { GPIOA_7 };
+ static const unsigned int pdm_din0_a_pins[]		= { GPIOA_8 };
+-static const unsigned int pdm_dclk_pins[]		= { GPIOA_9 };
++static const unsigned int pdm_dclk_a_pins[]		= { GPIOA_9 };
+ 
+ /* gen_clk */
+ static const unsigned int gen_clk_x_pins[]		= { GPIOX_7 };
+@@ -591,7 +591,7 @@ static struct meson_pmx_group meson_a1_periphs_groups[] = {
+ 	GROUP(pdm_din2_a,		3),
+ 	GROUP(pdm_din1_a,		3),
+ 	GROUP(pdm_din0_a,		3),
+-	GROUP(pdm_dclk,			3),
++	GROUP(pdm_dclk_a,		3),
+ 	GROUP(pwm_c_a,			3),
+ 	GROUP(pwm_b_a,			3),
+ 
+@@ -755,7 +755,7 @@ static const char * const spi_a_groups[] = {
+ 
+ static const char * const pdm_groups[] = {
+ 	"pdm_din0_x", "pdm_din1_x", "pdm_din2_x", "pdm_dclk_x", "pdm_din2_a",
+-	"pdm_din1_a", "pdm_din0_a", "pdm_dclk",
++	"pdm_din1_a", "pdm_din0_a", "pdm_dclk_a",
+ };
+ 
+ static const char * const gen_clk_groups[] = {
+diff --git a/drivers/power/supply/rt9455_charger.c b/drivers/power/supply/rt9455_charger.c
+index 594bb3b8a4d1e..a84afccd509f1 100644
+--- a/drivers/power/supply/rt9455_charger.c
++++ b/drivers/power/supply/rt9455_charger.c
+@@ -193,6 +193,7 @@ static const int rt9455_voreg_values[] = {
+ 	4450000, 4450000, 4450000, 4450000, 4450000, 4450000, 4450000, 4450000
+ };
+ 
++#if IS_ENABLED(CONFIG_USB_PHY)
+ /*
+  * When the charger is in boost mode, REG02[7:2] represent boost output
+  * voltage.
+@@ -208,6 +209,7 @@ static const int rt9455_boost_voltage_values[] = {
+ 	5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000,
+ 	5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000,
+ };
++#endif
+ 
+ /* REG07[3:0] (VMREG) in uV */
+ static const int rt9455_vmreg_values[] = {
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 54330eb0d03b8..2d1a23b9eae3b 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1749,19 +1749,24 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ 		}
+ 	}
+ 
+-	if (err != -EEXIST)
++	if (err != -EEXIST) {
+ 		regulator->debugfs = debugfs_create_dir(supply_name, rdev->debugfs);
+-	if (IS_ERR(regulator->debugfs))
+-		rdev_dbg(rdev, "Failed to create debugfs directory\n");
++		if (IS_ERR(regulator->debugfs)) {
++			rdev_dbg(rdev, "Failed to create debugfs directory\n");
++			regulator->debugfs = NULL;
++		}
++	}
+ 
+-	debugfs_create_u32("uA_load", 0444, regulator->debugfs,
+-			   &regulator->uA_load);
+-	debugfs_create_u32("min_uV", 0444, regulator->debugfs,
+-			   &regulator->voltage[PM_SUSPEND_ON].min_uV);
+-	debugfs_create_u32("max_uV", 0444, regulator->debugfs,
+-			   &regulator->voltage[PM_SUSPEND_ON].max_uV);
+-	debugfs_create_file("constraint_flags", 0444, regulator->debugfs,
+-			    regulator, &constraint_flags_fops);
++	if (regulator->debugfs) {
++		debugfs_create_u32("uA_load", 0444, regulator->debugfs,
++				   &regulator->uA_load);
++		debugfs_create_u32("min_uV", 0444, regulator->debugfs,
++				   &regulator->voltage[PM_SUSPEND_ON].min_uV);
++		debugfs_create_u32("max_uV", 0444, regulator->debugfs,
++				   &regulator->voltage[PM_SUSPEND_ON].max_uV);
++		debugfs_create_file("constraint_flags", 0444, regulator->debugfs,
++				    regulator, &constraint_flags_fops);
++	}
+ 
+ 	/*
+ 	 * Check now if the regulator is an always on regulator - if
+diff --git a/drivers/regulator/mt6360-regulator.c b/drivers/regulator/mt6360-regulator.c
+index 15308ee29c13e..e30eb20bc8ea0 100644
+--- a/drivers/regulator/mt6360-regulator.c
++++ b/drivers/regulator/mt6360-regulator.c
+@@ -319,15 +319,15 @@ static unsigned int mt6360_regulator_of_map_mode(unsigned int hw_mode)
+ 	}
+ }
+ 
+-#define MT6360_REGULATOR_DESC(_name, _sname, ereg, emask, vreg,	vmask,	\
+-			      mreg, mmask, streg, stmask, vranges,	\
+-			      vcnts, offon_delay, irq_tbls)		\
++#define MT6360_REGULATOR_DESC(match, _name, _sname, ereg, emask, vreg,	\
++			      vmask, mreg, mmask, streg, stmask,	\
++			      vranges, vcnts, offon_delay, irq_tbls)	\
+ {									\
+ 	.desc = {							\
+ 		.name = #_name,						\
+ 		.supply_name = #_sname,					\
+ 		.id =  MT6360_REGULATOR_##_name,			\
+-		.of_match = of_match_ptr(#_name),			\
++		.of_match = of_match_ptr(match),			\
+ 		.regulators_node = of_match_ptr("regulator"),		\
+ 		.of_map_mode = mt6360_regulator_of_map_mode,		\
+ 		.owner = THIS_MODULE,					\
+@@ -351,21 +351,29 @@ static unsigned int mt6360_regulator_of_map_mode(unsigned int hw_mode)
+ }
+ 
+ static const struct mt6360_regulator_desc mt6360_regulator_descs[] =  {
+-	MT6360_REGULATOR_DESC(BUCK1, BUCK1_VIN, 0x117, 0x40, 0x110, 0xff, 0x117, 0x30, 0x117, 0x04,
++	MT6360_REGULATOR_DESC("buck1", BUCK1, BUCK1_VIN,
++			      0x117, 0x40, 0x110, 0xff, 0x117, 0x30, 0x117, 0x04,
+ 			      buck_vout_ranges, 256, 0, buck1_irq_tbls),
+-	MT6360_REGULATOR_DESC(BUCK2, BUCK2_VIN, 0x127, 0x40, 0x120, 0xff, 0x127, 0x30, 0x127, 0x04,
++	MT6360_REGULATOR_DESC("buck2", BUCK2, BUCK2_VIN,
++			      0x127, 0x40, 0x120, 0xff, 0x127, 0x30, 0x127, 0x04,
+ 			      buck_vout_ranges, 256, 0, buck2_irq_tbls),
+-	MT6360_REGULATOR_DESC(LDO6, LDO_VIN3, 0x137, 0x40, 0x13B, 0xff, 0x137, 0x30, 0x137, 0x04,
++	MT6360_REGULATOR_DESC("ldo6", LDO6, LDO_VIN3,
++			      0x137, 0x40, 0x13B, 0xff, 0x137, 0x30, 0x137, 0x04,
+ 			      ldo_vout_ranges1, 256, 0, ldo6_irq_tbls),
+-	MT6360_REGULATOR_DESC(LDO7, LDO_VIN3, 0x131, 0x40, 0x135, 0xff, 0x131, 0x30, 0x131, 0x04,
++	MT6360_REGULATOR_DESC("ldo7", LDO7, LDO_VIN3,
++			      0x131, 0x40, 0x135, 0xff, 0x131, 0x30, 0x131, 0x04,
+ 			      ldo_vout_ranges1, 256, 0, ldo7_irq_tbls),
+-	MT6360_REGULATOR_DESC(LDO1, LDO_VIN1, 0x217, 0x40, 0x21B, 0xff, 0x217, 0x30, 0x217, 0x04,
++	MT6360_REGULATOR_DESC("ldo1", LDO1, LDO_VIN1,
++			      0x217, 0x40, 0x21B, 0xff, 0x217, 0x30, 0x217, 0x04,
+ 			      ldo_vout_ranges2, 256, 0, ldo1_irq_tbls),
+-	MT6360_REGULATOR_DESC(LDO2, LDO_VIN1, 0x211, 0x40, 0x215, 0xff, 0x211, 0x30, 0x211, 0x04,
++	MT6360_REGULATOR_DESC("ldo2", LDO2, LDO_VIN1,
++			      0x211, 0x40, 0x215, 0xff, 0x211, 0x30, 0x211, 0x04,
+ 			      ldo_vout_ranges2, 256, 0, ldo2_irq_tbls),
+-	MT6360_REGULATOR_DESC(LDO3, LDO_VIN1, 0x205, 0x40, 0x209, 0xff, 0x205, 0x30, 0x205, 0x04,
++	MT6360_REGULATOR_DESC("ldo3", LDO3, LDO_VIN1,
++			      0x205, 0x40, 0x209, 0xff, 0x205, 0x30, 0x205, 0x04,
+ 			      ldo_vout_ranges2, 256, 100, ldo3_irq_tbls),
+-	MT6360_REGULATOR_DESC(LDO5, LDO_VIN2, 0x20B, 0x40, 0x20F, 0x7f, 0x20B, 0x30, 0x20B, 0x04,
++	MT6360_REGULATOR_DESC("ldo5", LDO5, LDO_VIN2,
++			      0x20B, 0x40, 0x20F, 0x7f, 0x20B, 0x30, 0x20B, 0x04,
+ 			      ldo_vout_ranges3, 128, 100, ldo5_irq_tbls),
+ };
+ 
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_tgt.c b/drivers/scsi/bnx2fc/bnx2fc_tgt.c
+index a3e2a38aabf2f..283df0a9da167 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_tgt.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_tgt.c
+@@ -833,7 +833,6 @@ static void bnx2fc_free_session_resc(struct bnx2fc_hba *hba,
+ 
+ 	BNX2FC_TGT_DBG(tgt, "Freeing up session resources\n");
+ 
+-	spin_lock_bh(&tgt->cq_lock);
+ 	ctx_base_ptr = tgt->ctx_base;
+ 	tgt->ctx_base = NULL;
+ 
+@@ -889,7 +888,6 @@ static void bnx2fc_free_session_resc(struct bnx2fc_hba *hba,
+ 				    tgt->sq, tgt->sq_dma);
+ 		tgt->sq = NULL;
+ 	}
+-	spin_unlock_bh(&tgt->cq_lock);
+ 
+ 	if (ctx_base_ptr)
+ 		iounmap(ctx_base_ptr);
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index cf69f831a7253..8f1b5b0ee8cd8 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -1065,7 +1065,6 @@ struct lpfc_hba {
+ 	unsigned long bit_flags;
+ #define	FABRIC_COMANDS_BLOCKED	0
+ 	atomic_t num_rsrc_err;
+-	atomic_t num_cmd_success;
+ 	unsigned long last_rsrc_error_time;
+ 	unsigned long last_ramp_down_time;
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index b4b87e5d8b291..2121534838747 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -246,11 +246,10 @@ lpfc_ramp_down_queue_handler(struct lpfc_hba *phba)
+ 	struct Scsi_Host  *shost;
+ 	struct scsi_device *sdev;
+ 	unsigned long new_queue_depth;
+-	unsigned long num_rsrc_err, num_cmd_success;
++	unsigned long num_rsrc_err;
+ 	int i;
+ 
+ 	num_rsrc_err = atomic_read(&phba->num_rsrc_err);
+-	num_cmd_success = atomic_read(&phba->num_cmd_success);
+ 
+ 	/*
+ 	 * The error and success command counters are global per
+@@ -265,20 +264,16 @@ lpfc_ramp_down_queue_handler(struct lpfc_hba *phba)
+ 		for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) {
+ 			shost = lpfc_shost_from_vport(vports[i]);
+ 			shost_for_each_device(sdev, shost) {
+-				new_queue_depth =
+-					sdev->queue_depth * num_rsrc_err /
+-					(num_rsrc_err + num_cmd_success);
+-				if (!new_queue_depth)
+-					new_queue_depth = sdev->queue_depth - 1;
++				if (num_rsrc_err >= sdev->queue_depth)
++					new_queue_depth = 1;
+ 				else
+ 					new_queue_depth = sdev->queue_depth -
+-								new_queue_depth;
++						num_rsrc_err;
+ 				scsi_change_queue_depth(sdev, new_queue_depth);
+ 			}
+ 		}
+ 	lpfc_destroy_vport_work_array(phba, vports);
+ 	atomic_set(&phba->num_rsrc_err, 0);
+-	atomic_set(&phba->num_cmd_success, 0);
+ }
+ 
+ /**
+diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
+index 56ae882fb7b39..4d2fbe1429b69 100644
+--- a/drivers/target/target_core_configfs.c
++++ b/drivers/target/target_core_configfs.c
+@@ -3532,6 +3532,8 @@ static int __init target_core_init_configfs(void)
+ {
+ 	struct configfs_subsystem *subsys = &target_core_fabrics;
+ 	struct t10_alua_lu_gp *lu_gp;
++	struct cred *kern_cred;
++	const struct cred *old_cred;
+ 	int ret;
+ 
+ 	pr_debug("TARGET_CORE[0]: Loading Generic Kernel Storage"
+@@ -3608,11 +3610,21 @@ static int __init target_core_init_configfs(void)
+ 	if (ret < 0)
+ 		goto out;
+ 
++	/* We use the kernel credentials to access the target directory */
++	kern_cred = prepare_kernel_cred(&init_task);
++	if (!kern_cred) {
++		ret = -ENOMEM;
++		goto out;
++	}
++	old_cred = override_creds(kern_cred);
+ 	target_init_dbroot();
++	revert_creds(old_cred);
++	put_cred(kern_cred);
+ 
+ 	return 0;
+ 
+ out:
++	target_xcopy_release_pt();
+ 	configfs_unregister_subsystem(subsys);
+ 	core_dev_release_virtual_lun0();
+ 	rd_module_exit();
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index eef78141ffcae..4ef05bafcf2bf 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -5053,9 +5053,10 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 	}
+ 	if (usb_endpoint_maxp(&udev->ep0.desc) == i) {
+ 		;	/* Initial ep0 maxpacket guess is right */
+-	} else if ((udev->speed == USB_SPEED_FULL ||
++	} else if (((udev->speed == USB_SPEED_FULL ||
+ 				udev->speed == USB_SPEED_HIGH) &&
+-			(i == 8 || i == 16 || i == 32 || i == 64)) {
++			(i == 8 || i == 16 || i == 32 || i == 64)) ||
++			(udev->speed >= USB_SPEED_SUPER && i > 0)) {
+ 		/* Initial guess is wrong; use the descriptor's value */
+ 		if (udev->speed == USB_SPEED_FULL)
+ 			dev_dbg(&udev->dev, "ep0 maxpacket = %d\n", i);
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 26f9928b972c6..7275bff857ae8 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -102,6 +102,27 @@ static int dwc3_get_dr_mode(struct dwc3 *dwc)
+ 	return 0;
+ }
+ 
++void dwc3_enable_susphy(struct dwc3 *dwc, bool enable)
++{
++	u32 reg;
++
++	reg = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0));
++	if (enable && !dwc->dis_u3_susphy_quirk)
++		reg |= DWC3_GUSB3PIPECTL_SUSPHY;
++	else
++		reg &= ~DWC3_GUSB3PIPECTL_SUSPHY;
++
++	dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), reg);
++
++	reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
++	if (enable && !dwc->dis_u2_susphy_quirk)
++		reg |= DWC3_GUSB2PHYCFG_SUSPHY;
++	else
++		reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
++
++	dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
++}
++
+ void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode)
+ {
+ 	u32 reg;
+@@ -590,11 +611,8 @@ static int dwc3_core_ulpi_init(struct dwc3 *dwc)
+  */
+ static int dwc3_phy_setup(struct dwc3 *dwc)
+ {
+-	unsigned int hw_mode;
+ 	u32 reg;
+ 
+-	hw_mode = DWC3_GHWPARAMS0_MODE(dwc->hwparams.hwparams0);
+-
+ 	reg = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0));
+ 
+ 	/*
+@@ -604,21 +622,16 @@ static int dwc3_phy_setup(struct dwc3 *dwc)
+ 	reg &= ~DWC3_GUSB3PIPECTL_UX_EXIT_PX;
+ 
+ 	/*
+-	 * Above 1.94a, it is recommended to set DWC3_GUSB3PIPECTL_SUSPHY
+-	 * to '0' during coreConsultant configuration. So default value
+-	 * will be '0' when the core is reset. Application needs to set it
+-	 * to '1' after the core initialization is completed.
++	 * Above DWC_usb3.0 1.94a, it is recommended to set
++	 * DWC3_GUSB3PIPECTL_SUSPHY to '0' during coreConsultant configuration.
++	 * So default value will be '0' when the core is reset. Application
++	 * needs to set it to '1' after the core initialization is completed.
++	 *
++	 * Similarly for DRD controllers, GUSB3PIPECTL.SUSPENDENABLE must be
++	 * cleared after power-on reset, and it can be set after core
++	 * initialization.
+ 	 */
+-	if (!DWC3_VER_IS_WITHIN(DWC3, ANY, 194A))
+-		reg |= DWC3_GUSB3PIPECTL_SUSPHY;
+-
+-	/*
+-	 * For DRD controllers, GUSB3PIPECTL.SUSPENDENABLE must be cleared after
+-	 * power-on reset, and it can be set after core initialization, which is
+-	 * after device soft-reset during initialization.
+-	 */
+-	if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD)
+-		reg &= ~DWC3_GUSB3PIPECTL_SUSPHY;
++	reg &= ~DWC3_GUSB3PIPECTL_SUSPHY;
+ 
+ 	if (dwc->u2ss_inp3_quirk)
+ 		reg |= DWC3_GUSB3PIPECTL_U2SSINP3OK;
+@@ -644,9 +657,6 @@ static int dwc3_phy_setup(struct dwc3 *dwc)
+ 	if (dwc->tx_de_emphasis_quirk)
+ 		reg |= DWC3_GUSB3PIPECTL_TX_DEEPH(dwc->tx_de_emphasis);
+ 
+-	if (dwc->dis_u3_susphy_quirk)
+-		reg &= ~DWC3_GUSB3PIPECTL_SUSPHY;
+-
+ 	if (dwc->dis_del_phy_power_chg_quirk)
+ 		reg &= ~DWC3_GUSB3PIPECTL_DEPOCHANGE;
+ 
+@@ -694,24 +704,15 @@ static int dwc3_phy_setup(struct dwc3 *dwc)
+ 	}
+ 
+ 	/*
+-	 * Above 1.94a, it is recommended to set DWC3_GUSB2PHYCFG_SUSPHY to
+-	 * '0' during coreConsultant configuration. So default value will
+-	 * be '0' when the core is reset. Application needs to set it to
+-	 * '1' after the core initialization is completed.
++	 * Above DWC_usb3.0 1.94a, it is recommended to set
++	 * DWC3_GUSB2PHYCFG_SUSPHY to '0' during coreConsultant configuration.
++	 * So default value will be '0' when the core is reset. Application
++	 * needs to set it to '1' after the core initialization is completed.
++	 *
++	 * Similarly for DRD controllers, GUSB2PHYCFG.SUSPHY must be cleared
++	 * after power-on reset, and it can be set after core initialization.
+ 	 */
+-	if (!DWC3_VER_IS_WITHIN(DWC3, ANY, 194A))
+-		reg |= DWC3_GUSB2PHYCFG_SUSPHY;
+-
+-	/*
+-	 * For DRD controllers, GUSB2PHYCFG.SUSPHY must be cleared after
+-	 * power-on reset, and it can be set after core initialization, which is
+-	 * after device soft-reset during initialization.
+-	 */
+-	if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD)
+-		reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
+-
+-	if (dwc->dis_u2_susphy_quirk)
+-		reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
++	reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
+ 
+ 	if (dwc->dis_enblslpm_quirk)
+ 		reg &= ~DWC3_GUSB2PHYCFG_ENBLSLPM;
+@@ -993,21 +994,6 @@ static int dwc3_core_init(struct dwc3 *dwc)
+ 	if (ret)
+ 		goto err1;
+ 
+-	if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD &&
+-	    !DWC3_VER_IS_WITHIN(DWC3, ANY, 194A)) {
+-		if (!dwc->dis_u3_susphy_quirk) {
+-			reg = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0));
+-			reg |= DWC3_GUSB3PIPECTL_SUSPHY;
+-			dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), reg);
+-		}
+-
+-		if (!dwc->dis_u2_susphy_quirk) {
+-			reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+-			reg |= DWC3_GUSB2PHYCFG_SUSPHY;
+-			dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
+-		}
+-	}
+-
+ 	dwc3_core_setup_global_control(dwc);
+ 	dwc3_core_num_eps(dwc);
+ 
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 291893d274297..1c8496fc732eb 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -1456,6 +1456,7 @@ int dwc3_event_buffers_setup(struct dwc3 *dwc);
+ void dwc3_event_buffers_cleanup(struct dwc3 *dwc);
+ 
+ int dwc3_core_soft_reset(struct dwc3 *dwc);
++void dwc3_enable_susphy(struct dwc3 *dwc, bool enable);
+ 
+ #if IS_ENABLED(CONFIG_USB_DWC3_HOST) || IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE)
+ int dwc3_host_init(struct dwc3 *dwc);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 565397c41910d..550eae39a63d3 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2380,6 +2380,7 @@ static int __dwc3_gadget_start(struct dwc3 *dwc)
+ 	dwc3_ep0_out_start(dwc);
+ 
+ 	dwc3_gadget_enable_irq(dwc);
++	dwc3_enable_susphy(dwc, true);
+ 
+ 	return 0;
+ 
+@@ -4046,6 +4047,7 @@ void dwc3_gadget_exit(struct dwc3 *dwc)
+ 	if (!dwc->gadget)
+ 		return;
+ 
++	dwc3_enable_susphy(dwc, false);
+ 	usb_del_gadget(dwc->gadget);
+ 	dwc3_gadget_free_endpoints(dwc);
+ 	usb_put_gadget(dwc->gadget);
+diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c
+index b06ab85f8187e..05718f6fa60da 100644
+--- a/drivers/usb/dwc3/host.c
++++ b/drivers/usb/dwc3/host.c
+@@ -9,9 +9,30 @@
+ 
+ #include <linux/acpi.h>
+ #include <linux/platform_device.h>
++#include <linux/usb.h>
++#include <linux/usb/hcd.h>
+ 
++#include "../host/xhci-plat.h"
+ #include "core.h"
+ 
++static void dwc3_xhci_plat_start(struct usb_hcd *hcd)
++{
++	struct platform_device *pdev;
++	struct dwc3 *dwc;
++
++	if (!usb_hcd_is_primary_hcd(hcd))
++		return;
++
++	pdev = to_platform_device(hcd->self.controller);
++	dwc = dev_get_drvdata(pdev->dev.parent);
++
++	dwc3_enable_susphy(dwc, true);
++}
++
++static const struct xhci_plat_priv dwc3_xhci_plat_quirk = {
++	.plat_start = dwc3_xhci_plat_start,
++};
++
+ static int dwc3_host_get_irq(struct dwc3 *dwc)
+ {
+ 	struct platform_device	*dwc3_pdev = to_platform_device(dwc->dev);
+@@ -115,6 +136,11 @@ int dwc3_host_init(struct dwc3 *dwc)
+ 		}
+ 	}
+ 
++	ret = platform_device_add_data(xhci, &dwc3_xhci_plat_quirk,
++				       sizeof(struct xhci_plat_priv));
++	if (ret)
++		goto err;
++
+ 	ret = platform_device_add(xhci);
+ 	if (ret) {
+ 		dev_err(dwc->dev, "failed to register xHCI device\n");
+@@ -129,6 +155,7 @@ int dwc3_host_init(struct dwc3 *dwc)
+ 
+ void dwc3_host_exit(struct dwc3 *dwc)
+ {
++	dwc3_enable_susphy(dwc, false);
+ 	platform_device_unregister(dwc->xhci);
+ 	dwc->xhci = NULL;
+ }
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index a980799900e71..a6ec6c8f32160 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -1925,7 +1925,7 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ 			buf[5] = 0x01;
+ 			switch (ctrl->bRequestType & USB_RECIP_MASK) {
+ 			case USB_RECIP_DEVICE:
+-				if (w_index != 0x4 || (w_value >> 8))
++				if (w_index != 0x4 || (w_value & 0xff))
+ 					break;
+ 				buf[6] = w_index;
+ 				/* Number of ext compat interfaces */
+@@ -1941,9 +1941,9 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ 				}
+ 				break;
+ 			case USB_RECIP_INTERFACE:
+-				if (w_index != 0x5 || (w_value >> 8))
++				if (w_index != 0x5 || (w_value & 0xff))
+ 					break;
+-				interface = w_value & 0xFF;
++				interface = w_value >> 8;
+ 				if (interface >= MAX_CONFIG_INTERFACES ||
+ 				    !os_desc_cfg->interface[interface])
+ 					break;
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index b17acab77fe26..ad7df99f09a4c 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -3403,7 +3403,7 @@ static int ffs_func_setup(struct usb_function *f,
+ 	__ffs_event_add(ffs, FUNCTIONFS_SETUP);
+ 	spin_unlock_irqrestore(&ffs->ev.waitq.lock, flags);
+ 
+-	return creq->wLength == 0 ? USB_GADGET_DELAYED_STATUS : 0;
++	return ffs->ev.setup.wLength == 0 ? USB_GADGET_DELAYED_STATUS : 0;
+ }
+ 
+ static bool ffs_func_req_match(struct usb_function *f,
+diff --git a/drivers/usb/host/ohci-hcd.c b/drivers/usb/host/ohci-hcd.c
+index 73e13e7c2b465..dd320c8633a4e 100644
+--- a/drivers/usb/host/ohci-hcd.c
++++ b/drivers/usb/host/ohci-hcd.c
+@@ -890,6 +890,7 @@ static irqreturn_t ohci_irq (struct usb_hcd *hcd)
+ 	/* Check for an all 1's result which is a typical consequence
+ 	 * of dead, unclocked, or unplugged (CardBus...) devices
+ 	 */
++again:
+ 	if (ints == ~(u32)0) {
+ 		ohci->rh_state = OHCI_RH_HALTED;
+ 		ohci_dbg (ohci, "device removed!\n");
+@@ -984,6 +985,13 @@ static irqreturn_t ohci_irq (struct usb_hcd *hcd)
+ 	}
+ 	spin_unlock(&ohci->lock);
+ 
++	/* repeat until all enabled interrupts are handled */
++	if (ohci->rh_state != OHCI_RH_HALTED) {
++		ints = ohci_readl(ohci, &regs->intrstatus);
++		if (ints && (ints & ohci_readl(ohci, &regs->intrenable)))
++			goto again;
++	}
++
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/usb/host/xhci-plat.h b/drivers/usb/host/xhci-plat.h
+index 561d0b7bce098..29f15298e315f 100644
+--- a/drivers/usb/host/xhci-plat.h
++++ b/drivers/usb/host/xhci-plat.h
+@@ -8,7 +8,9 @@
+ #ifndef _XHCI_PLAT_H
+ #define _XHCI_PLAT_H
+ 
+-#include "xhci.h"	/* for hcd_to_xhci() */
++struct device;
++struct platform_device;
++struct usb_hcd;
+ 
+ struct xhci_plat_priv {
+ 	const char *firmware_name;
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 2ddc8936a8935..abb29c1a70344 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -827,7 +827,7 @@ void ucsi_connector_change(struct ucsi *ucsi, u8 num)
+ 	struct ucsi_connector *con = &ucsi->connector[num - 1];
+ 
+ 	if (!(ucsi->ntfy & UCSI_ENABLE_NTFY_CONNECTOR_CHANGE)) {
+-		dev_dbg(ucsi->dev, "Bogus connector change event\n");
++		dev_dbg(ucsi->dev, "Early connector change event\n");
+ 		return;
+ 	}
+ 
+@@ -1191,6 +1191,7 @@ static int ucsi_init(struct ucsi *ucsi)
+ {
+ 	struct ucsi_connector *con;
+ 	u64 command, ntfy;
++	u32 cci;
+ 	int ret;
+ 	int i;
+ 
+@@ -1242,6 +1243,15 @@ static int ucsi_init(struct ucsi *ucsi)
+ 		goto err_unregister;
+ 
+ 	ucsi->ntfy = ntfy;
++
++	mutex_lock(&ucsi->ppm_lock);
++	ret = ucsi->ops->read(ucsi, UCSI_CCI, &cci, sizeof(cci));
++	mutex_unlock(&ucsi->ppm_lock);
++	if (ret)
++		return ret;
++	if (UCSI_CCI_CONNECTOR(cci))
++		ucsi_connector_change(ucsi, UCSI_CCI_CONNECTOR(cci));
++
+ 	return 0;
+ 
+ err_unregister:
+diff --git a/drivers/usb/usbip/usbip_common.h b/drivers/usb/usbip/usbip_common.h
+index a7e6ce96f62c7..02cd91cb3f831 100644
+--- a/drivers/usb/usbip/usbip_common.h
++++ b/drivers/usb/usbip/usbip_common.h
+@@ -18,6 +18,7 @@
+ #include <linux/usb.h>
+ #include <linux/wait.h>
+ #include <linux/sched/task.h>
++#include <linux/kcov.h>
+ #include <uapi/linux/usbip.h>
+ 
+ #undef pr_fmt
+diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
+index be5768949cb15..5d92eaeaebd91 100644
+--- a/fs/9p/vfs_file.c
++++ b/fs/9p/vfs_file.c
+@@ -685,6 +685,7 @@ const struct file_operations v9fs_file_operations = {
+ 	.splice_read = generic_file_splice_read,
+ 	.splice_write = iter_file_splice_write,
+ 	.fsync = v9fs_file_fsync,
++	.setlease = simple_nosetlease,
+ };
+ 
+ const struct file_operations v9fs_file_operations_dotl = {
+@@ -726,4 +727,5 @@ const struct file_operations v9fs_mmap_file_operations_dotl = {
+ 	.splice_read = generic_file_splice_read,
+ 	.splice_write = iter_file_splice_write,
+ 	.fsync = v9fs_file_fsync_dotl,
++	.setlease = simple_nosetlease,
+ };
+diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
+index 0791480bf922b..4ffb0750c79b9 100644
+--- a/fs/9p/vfs_inode.c
++++ b/fs/9p/vfs_inode.c
+@@ -86,7 +86,7 @@ static int p9mode2perm(struct v9fs_session_info *v9ses,
+ 	int res;
+ 	int mode = stat->mode;
+ 
+-	res = mode & S_IALLUGO;
++	res = mode & 0777; /* S_IRWXUGO */
+ 	if (v9fs_proto_dotu(v9ses)) {
+ 		if ((mode & P9_DMSETUID) == P9_DMSETUID)
+ 			res |= S_ISUID;
+@@ -177,6 +177,9 @@ int v9fs_uflags2omode(int uflags, int extended)
+ 		break;
+ 	}
+ 
++	if (uflags & O_TRUNC)
++		ret |= P9_OTRUNC;
++
+ 	if (extended) {
+ 		if (uflags & O_EXCL)
+ 			ret |= P9_OEXCL;
+diff --git a/fs/9p/vfs_super.c b/fs/9p/vfs_super.c
+index 9a21269b72347..69e7f88a21e7f 100644
+--- a/fs/9p/vfs_super.c
++++ b/fs/9p/vfs_super.c
+@@ -336,6 +336,7 @@ static const struct super_operations v9fs_super_ops = {
+ 	.alloc_inode = v9fs_alloc_inode,
+ 	.free_inode = v9fs_free_inode,
+ 	.statfs = simple_statfs,
++	.drop_inode = v9fs_drop_inode,
+ 	.evict_inode = v9fs_evict_inode,
+ 	.show_options = v9fs_show_options,
+ 	.umount_begin = v9fs_umount_begin,
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 591caac2bf814..1f99d7dced17a 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2070,7 +2070,7 @@ void btrfs_clear_delalloc_extent(struct inode *vfs_inode,
+ 		 */
+ 		if (*bits & EXTENT_CLEAR_META_RESV &&
+ 		    root != fs_info->tree_root)
+-			btrfs_delalloc_release_metadata(inode, len, false);
++			btrfs_delalloc_release_metadata(inode, len, true);
+ 
+ 		/* For sanity tests. */
+ 		if (btrfs_is_testing(fs_info))
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 0519a3557697a..a5ed01d49f069 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -7339,8 +7339,8 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 	sctx->waiting_dir_moves = RB_ROOT;
+ 	sctx->orphan_dirs = RB_ROOT;
+ 
+-	sctx->clone_roots = kvcalloc(sizeof(*sctx->clone_roots),
+-				     arg->clone_sources_count + 1,
++	sctx->clone_roots = kvcalloc(arg->clone_sources_count + 1,
++				     sizeof(*sctx->clone_roots),
+ 				     GFP_KERNEL);
+ 	if (!sctx->clone_roots) {
+ 		ret = -ENOMEM;
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index d23047b23005c..8cefe11c57dbc 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1343,6 +1343,7 @@ static noinline int commit_fs_roots(struct btrfs_trans_handle *trans)
+ 			radix_tree_tag_clear(&fs_info->fs_roots_radix,
+ 					(unsigned long)root->root_key.objectid,
+ 					BTRFS_ROOT_TRANS_TAG);
++			btrfs_qgroup_free_meta_all_pertrans(root);
+ 			spin_unlock(&fs_info->fs_roots_radix_lock);
+ 
+ 			btrfs_free_log(trans, root);
+@@ -1367,7 +1368,6 @@ static noinline int commit_fs_roots(struct btrfs_trans_handle *trans)
+ 			if (ret2)
+ 				return ret2;
+ 			spin_lock(&fs_info->fs_roots_radix_lock);
+-			btrfs_qgroup_free_meta_all_pertrans(root);
+ 		}
+ 	}
+ 	spin_unlock(&fs_info->fs_roots_radix_lock);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 09c23626feba4..51298d749824c 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1266,25 +1266,32 @@ static int open_fs_devices(struct btrfs_fs_devices *fs_devices,
+ 	struct btrfs_device *device;
+ 	struct btrfs_device *latest_dev = NULL;
+ 	struct btrfs_device *tmp_device;
++	int ret = 0;
+ 
+ 	flags |= FMODE_EXCL;
+ 
+ 	list_for_each_entry_safe(device, tmp_device, &fs_devices->devices,
+ 				 dev_list) {
+-		int ret;
++		int ret2;
+ 
+-		ret = btrfs_open_one_device(fs_devices, device, flags, holder);
+-		if (ret == 0 &&
++		ret2 = btrfs_open_one_device(fs_devices, device, flags, holder);
++		if (ret2 == 0 &&
+ 		    (!latest_dev || device->generation > latest_dev->generation)) {
+ 			latest_dev = device;
+-		} else if (ret == -ENODATA) {
++		} else if (ret2 == -ENODATA) {
+ 			fs_devices->num_devices--;
+ 			list_del(&device->dev_list);
+ 			btrfs_free_device(device);
+ 		}
++		if (ret == 0 && ret2 != 0)
++			ret = ret2;
+ 	}
+-	if (fs_devices->open_devices == 0)
++
++	if (fs_devices->open_devices == 0) {
++		if (ret)
++			return ret;
+ 		return -EINVAL;
++	}
+ 
+ 	fs_devices->opened = 1;
+ 	fs_devices->latest_bdev = latest_dev->bdev;
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index eaee95d2ad143..3d60ad9982c87 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -1758,7 +1758,8 @@ static int punch_hole(struct gfs2_inode *ip, u64 offset, u64 length)
+ 	struct buffer_head *dibh, *bh;
+ 	struct gfs2_holder rd_gh;
+ 	unsigned int bsize_shift = sdp->sd_sb.sb_bsize_shift;
+-	u64 lblock = (offset + (1 << bsize_shift) - 1) >> bsize_shift;
++	unsigned int bsize = 1 << bsize_shift;
++	u64 lblock = (offset + bsize - 1) >> bsize_shift;
+ 	__u16 start_list[GFS2_MAX_META_HEIGHT];
+ 	__u16 __end_list[GFS2_MAX_META_HEIGHT], *end_list = NULL;
+ 	unsigned int start_aligned, end_aligned;
+@@ -1769,7 +1770,7 @@ static int punch_hole(struct gfs2_inode *ip, u64 offset, u64 length)
+ 	u64 prev_bnr = 0;
+ 	__be64 *start, *end;
+ 
+-	if (offset >= maxsize) {
++	if (offset + bsize - 1 >= maxsize) {
+ 		/*
+ 		 * The starting point lies beyond the allocated meta-data;
+ 		 * there are no blocks do deallocate.
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 818ff8b1b99da..1437eb31dd034 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -73,7 +73,6 @@ const struct rpc_program nfs_program = {
+ 	.number			= NFS_PROGRAM,
+ 	.nrvers			= ARRAY_SIZE(nfs_version),
+ 	.version		= nfs_version,
+-	.stats			= &nfs_rpcstat,
+ 	.pipe_dir_name		= NFS_PIPE_DIRNAME,
+ };
+ 
+@@ -501,6 +500,7 @@ int nfs_create_rpc_client(struct nfs_client *clp,
+ 			  const struct nfs_client_initdata *cl_init,
+ 			  rpc_authflavor_t flavor)
+ {
++	struct nfs_net		*nn = net_generic(clp->cl_net, nfs_net_id);
+ 	struct rpc_clnt		*clnt = NULL;
+ 	struct rpc_create_args args = {
+ 		.net		= clp->cl_net,
+@@ -512,6 +512,7 @@ int nfs_create_rpc_client(struct nfs_client *clp,
+ 		.servername	= clp->cl_hostname,
+ 		.nodename	= cl_init->nodename,
+ 		.program	= &nfs_program,
++		.stats		= &nn->rpcstats,
+ 		.version	= clp->rpc_ops->version,
+ 		.authflavor	= flavor,
+ 		.cred		= cl_init->cred,
+@@ -1110,6 +1111,8 @@ void nfs_clients_init(struct net *net)
+ #endif
+ 	spin_lock_init(&nn->nfs_client_lock);
+ 	nn->boot_time = ktime_get_real();
++	memset(&nn->rpcstats, 0, sizeof(nn->rpcstats));
++	nn->rpcstats.program = &nfs_program;
+ 
+ 	nfs_netns_sysfs_setup(nn, net);
+ }
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 36f415278c042..0d06ec25e21e0 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -2225,12 +2225,21 @@ EXPORT_SYMBOL_GPL(nfs_net_id);
+ 
+ static int nfs_net_init(struct net *net)
+ {
++	struct nfs_net *nn = net_generic(net, nfs_net_id);
++
+ 	nfs_clients_init(net);
++
++	if (!rpc_proc_register(net, &nn->rpcstats)) {
++		nfs_clients_exit(net);
++		return -ENOMEM;
++	}
++
+ 	return nfs_fs_proc_net_init(net);
+ }
+ 
+ static void nfs_net_exit(struct net *net)
+ {
++	rpc_proc_unregister(net, "nfs");
+ 	nfs_fs_proc_net_exit(net);
+ 	nfs_clients_exit(net);
+ }
+@@ -2289,15 +2298,12 @@ static int __init init_nfs_fs(void)
+ 	if (err)
+ 		goto out1;
+ 
+-	rpc_proc_register(&init_net, &nfs_rpcstat);
+-
+ 	err = register_nfs_fs();
+ 	if (err)
+ 		goto out0;
+ 
+ 	return 0;
+ out0:
+-	rpc_proc_unregister(&init_net, "nfs");
+ 	nfs_destroy_directcache();
+ out1:
+ 	nfs_destroy_writepagecache();
+@@ -2330,7 +2336,6 @@ static void __exit exit_nfs_fs(void)
+ 	nfs_destroy_nfspagecache();
+ 	nfs_fscache_unregister();
+ 	unregister_pernet_subsys(&nfs_net_ops);
+-	rpc_proc_unregister(&init_net, "nfs");
+ 	unregister_nfs_fs();
+ 	nfs_fs_proc_exit();
+ 	nfsiod_stop();
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index a7e0970b5bfe1..9a72abfb46abc 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -435,8 +435,6 @@ int nfs_try_get_tree(struct fs_context *);
+ int nfs_get_tree_common(struct fs_context *);
+ void nfs_kill_super(struct super_block *);
+ 
+-extern struct rpc_stat nfs_rpcstat;
+-
+ extern int __init register_nfs_fs(void);
+ extern void __exit unregister_nfs_fs(void);
+ extern bool nfs_sb_active(struct super_block *sb);
+diff --git a/fs/nfs/netns.h b/fs/nfs/netns.h
+index c8374f74dce11..a68b21603ea9a 100644
+--- a/fs/nfs/netns.h
++++ b/fs/nfs/netns.h
+@@ -9,6 +9,7 @@
+ #include <linux/nfs4.h>
+ #include <net/net_namespace.h>
+ #include <net/netns/generic.h>
++#include <linux/sunrpc/stats.h>
+ 
+ struct bl_dev_msg {
+ 	int32_t status;
+@@ -34,6 +35,7 @@ struct nfs_net {
+ 	struct nfs_netns_client *nfs_client;
+ 	spinlock_t nfs_client_lock;
+ 	ktime_t boot_time;
++	struct rpc_stat rpcstats;
+ #ifdef CONFIG_PROC_FS
+ 	struct proc_dir_entry *proc_nfsfs;
+ #endif
+diff --git a/include/linux/kcov.h b/include/linux/kcov.h
+index a10e84707d820..b48128b717f1f 100644
+--- a/include/linux/kcov.h
++++ b/include/linux/kcov.h
+@@ -2,6 +2,7 @@
+ #ifndef _LINUX_KCOV_H
+ #define _LINUX_KCOV_H
+ 
++#include <linux/sched.h>
+ #include <uapi/linux/kcov.h>
+ 
+ struct task_struct;
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index aa015416c5693..3613c3f43b83e 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -14,7 +14,6 @@
+ #include <linux/pid.h>
+ #include <linux/sem.h>
+ #include <linux/shm.h>
+-#include <linux/kcov.h>
+ #include <linux/mutex.h>
+ #include <linux/plist.h>
+ #include <linux/hrtimer.h>
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index a210f19958621..31755d496b01d 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -2607,6 +2607,21 @@ static inline void skb_mac_header_rebuild(struct sk_buff *skb)
+ 	}
+ }
+ 
++/* Move the full mac header up to current network_header.
++ * Leaves skb->data pointing at offset skb->mac_len into the mac_header.
++ * Must be provided the complete mac header length.
++ */
++static inline void skb_mac_header_rebuild_full(struct sk_buff *skb, u32 full_mac_len)
++{
++	if (skb_mac_header_was_set(skb)) {
++		const unsigned char *old_mac = skb_mac_header(skb);
++
++		skb_set_mac_header(skb, -full_mac_len);
++		memmove(skb_mac_header(skb), old_mac, full_mac_len);
++		__skb_push(skb, full_mac_len - skb->mac_len);
++	}
++}
++
+ static inline int skb_checksum_start_offset(const struct sk_buff *skb)
+ {
+ 	return skb->csum_start - skb_headroom(skb);
+diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
+index 41ed614e69209..187e9f06cf64b 100644
+--- a/include/linux/sunrpc/clnt.h
++++ b/include/linux/sunrpc/clnt.h
+@@ -126,6 +126,7 @@ struct rpc_create_args {
+ 	const char		*servername;
+ 	const char		*nodename;
+ 	const struct rpc_program *program;
++	struct rpc_stat		*stats;
+ 	u32			prognumber;	/* overrides program->number */
+ 	u32			version;
+ 	rpc_authflavor_t	authflavor;
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 7865db2f827e6..6fbaf304648f6 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1028,6 +1028,9 @@ struct xfrm_offload {
+ #define CRYPTO_INVALID_PACKET_SYNTAX		64
+ #define CRYPTO_INVALID_PROTOCOL			128
+ 
++	/* Used to keep whole l2 header for transport mode GRO */
++	__u32			orig_mac_len;
++
+ 	__u8			proto;
+ };
+ 
+diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
+index 10a50c03074ee..685cf3e6771d3 100644
+--- a/lib/dynamic_debug.c
++++ b/lib/dynamic_debug.c
+@@ -260,7 +260,11 @@ static int ddebug_tokenize(char *buf, char *words[], int maxwords)
+ 		} else {
+ 			for (end = buf; *end && !isspace(*end); end++)
+ 				;
+-			BUG_ON(end == buf);
++			if (end == buf) {
++				pr_err("parse err after word:%d=%s\n", nwords,
++				       nwords ? words[nwords - 1] : "<none>");
++				return -EINVAL;
++			}
+ 		}
+ 
+ 		/* `buf' is start of word, `end' is one past its end */
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 580b6d6b970d2..da03ca6dd9221 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -435,6 +435,9 @@ static void l2cap_chan_timeout(struct work_struct *work)
+ 
+ 	BT_DBG("chan %p state %s", chan, state_to_string(chan->state));
+ 
++	if (!conn)
++		return;
++
+ 	mutex_lock(&conn->chan_lock);
+ 	/* __set_chan_timer() calls l2cap_chan_hold(chan) while scheduling
+ 	 * this work. No need to call l2cap_chan_hold(chan) here again.
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 2115ca6d7e178..ae788d3e0c53a 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -83,6 +83,10 @@ static void sco_sock_timeout(struct work_struct *work)
+ 	struct sock *sk;
+ 
+ 	sco_conn_lock(conn);
++	if (!conn->hcon) {
++		sco_conn_unlock(conn);
++		return;
++	}
+ 	sk = conn->sk;
+ 	if (sk)
+ 		sock_hold(sk);
+diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c
+index f2ef75c7ccc68..ada03d49e7c1a 100644
+--- a/net/bridge/br_forward.c
++++ b/net/bridge/br_forward.c
+@@ -245,6 +245,7 @@ static void maybe_deliver_addr(struct net_bridge_port *p, struct sk_buff *skb,
+ {
+ 	struct net_device *dev = BR_INPUT_SKB_CB(skb)->brdev;
+ 	const unsigned char *src = eth_hdr(skb)->h_source;
++	struct sk_buff *nskb;
+ 
+ 	if (!should_deliver(p, skb))
+ 		return;
+@@ -253,12 +254,16 @@ static void maybe_deliver_addr(struct net_bridge_port *p, struct sk_buff *skb,
+ 	if (skb->dev == p->dev && ether_addr_equal(src, addr))
+ 		return;
+ 
+-	skb = skb_copy(skb, GFP_ATOMIC);
+-	if (!skb) {
++	__skb_push(skb, ETH_HLEN);
++	nskb = pskb_copy(skb, GFP_ATOMIC);
++	__skb_pull(skb, ETH_HLEN);
++	if (!nskb) {
+ 		DEV_STATS_INC(dev, tx_dropped);
+ 		return;
+ 	}
+ 
++	skb = nskb;
++	__skb_pull(skb, ETH_HLEN);
+ 	if (!is_broadcast_ether_addr(addr))
+ 		memcpy(eth_hdr(skb)->h_dest, addr, ETH_ALEN);
+ 
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index e05dd4f3279a8..72cfe5248b764 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -86,12 +86,15 @@ u64 __net_gen_cookie(struct net *net)
+ 
+ static struct net_generic *net_alloc_generic(void)
+ {
++	unsigned int gen_ptrs = READ_ONCE(max_gen_ptrs);
++	unsigned int generic_size;
+ 	struct net_generic *ng;
+-	unsigned int generic_size = offsetof(struct net_generic, ptr[max_gen_ptrs]);
++
++	generic_size = offsetof(struct net_generic, ptr[gen_ptrs]);
+ 
+ 	ng = kzalloc(generic_size, GFP_KERNEL);
+ 	if (ng)
+-		ng->s.len = max_gen_ptrs;
++		ng->s.len = gen_ptrs;
+ 
+ 	return ng;
+ }
+@@ -1241,7 +1244,11 @@ static int register_pernet_operations(struct list_head *list,
+ 		if (error < 0)
+ 			return error;
+ 		*ops->id = error;
+-		max_gen_ptrs = max(max_gen_ptrs, *ops->id + 1);
++		/* This does not require READ_ONCE as writers already hold
++		 * pernet_ops_rwsem. But WRITE_ONCE is needed to protect
++		 * net_alloc_generic.
++		 */
++		WRITE_ONCE(max_gen_ptrs, max(max_gen_ptrs, *ops->id + 1));
+ 	}
+ 	error = __register_pernet_operations(list, ops);
+ 	if (error) {
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 8938320f7ba3b..2806b9ed63879 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2379,7 +2379,7 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
+ 
+ 		nla_for_each_nested(attr, tb[IFLA_VF_VLAN_LIST], rem) {
+ 			if (nla_type(attr) != IFLA_VF_VLAN_INFO ||
+-			    nla_len(attr) < NLA_HDRLEN) {
++			    nla_len(attr) < sizeof(struct ifla_vf_vlan_info)) {
+ 				return -EINVAL;
+ 			}
+ 			if (len >= MAX_VLAN_LIST_LEN)
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 50261f3aec82b..b0c2d6f018003 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -60,6 +60,7 @@
+ #include <linux/prefetch.h>
+ #include <linux/if_vlan.h>
+ #include <linux/mpls.h>
++#include <linux/kcov.h>
+ 
+ #include <net/protocol.h>
+ #include <net/dst.h>
+@@ -1516,11 +1517,17 @@ static inline int skb_alloc_rx_flag(const struct sk_buff *skb)
+ 
+ struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask)
+ {
+-	int headerlen = skb_headroom(skb);
+-	unsigned int size = skb_end_offset(skb) + skb->data_len;
+-	struct sk_buff *n = __alloc_skb(size, gfp_mask,
+-					skb_alloc_rx_flag(skb), NUMA_NO_NODE);
++	struct sk_buff *n;
++	unsigned int size;
++	int headerlen;
++
++	if (WARN_ON_ONCE(skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST))
++		return NULL;
+ 
++	headerlen = skb_headroom(skb);
++	size = skb_end_offset(skb) + skb->data_len;
++	n = __alloc_skb(size, gfp_mask,
++			skb_alloc_rx_flag(skb), NUMA_NO_NODE);
+ 	if (!n)
+ 		return NULL;
+ 
+@@ -1750,12 +1757,17 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb,
+ 	/*
+ 	 *	Allocate the copy buffer
+ 	 */
+-	struct sk_buff *n = __alloc_skb(newheadroom + skb->len + newtailroom,
+-					gfp_mask, skb_alloc_rx_flag(skb),
+-					NUMA_NO_NODE);
+-	int oldheadroom = skb_headroom(skb);
+ 	int head_copy_len, head_copy_off;
++	struct sk_buff *n;
++	int oldheadroom;
++
++	if (WARN_ON_ONCE(skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST))
++		return NULL;
+ 
++	oldheadroom = skb_headroom(skb);
++	n = __alloc_skb(newheadroom + skb->len + newtailroom,
++			gfp_mask, skb_alloc_rx_flag(skb),
++			NUMA_NO_NODE);
+ 	if (!n)
+ 		return NULL;
+ 
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 016c0b9e01b70..b4ecd0071e220 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -440,7 +440,7 @@ int __sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ 	unsigned long flags;
+ 	struct sk_buff_head *list = &sk->sk_receive_queue;
+ 
+-	if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf) {
++	if (atomic_read(&sk->sk_rmem_alloc) >= READ_ONCE(sk->sk_rcvbuf)) {
+ 		atomic_inc(&sk->sk_drops);
+ 		trace_sock_rcvqueue_full(sk, skb);
+ 		return -ENOMEM;
+@@ -492,7 +492,7 @@ int __sk_receive_skb(struct sock *sk, struct sk_buff *skb,
+ 
+ 	skb->dev = NULL;
+ 
+-	if (sk_rcvqueues_full(sk, sk->sk_rcvbuf)) {
++	if (sk_rcvqueues_full(sk, READ_ONCE(sk->sk_rcvbuf))) {
+ 		atomic_inc(&sk->sk_drops);
+ 		goto discard_and_relse;
+ 	}
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index ac6cb2dc60380..4ed0d303791a1 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2493,7 +2493,7 @@ void tcp_shutdown(struct sock *sk, int how)
+ 	/* If we've already sent a FIN, or it's a closed state, skip this. */
+ 	if ((1 << sk->sk_state) &
+ 	    (TCPF_ESTABLISHED | TCPF_SYN_SENT |
+-	     TCPF_SYN_RECV | TCPF_CLOSE_WAIT)) {
++	     TCPF_CLOSE_WAIT)) {
+ 		/* Clear out any half completed packets.  FIN if needed. */
+ 		if (tcp_close_state(sk))
+ 			tcp_send_fin(sk);
+@@ -2604,7 +2604,7 @@ void __tcp_close(struct sock *sk, long timeout)
+ 		 * machine. State transitions:
+ 		 *
+ 		 * TCP_ESTABLISHED -> TCP_FIN_WAIT1
+-		 * TCP_SYN_RECV	-> TCP_FIN_WAIT1 (forget it, it's impossible)
++		 * TCP_SYN_RECV	-> TCP_FIN_WAIT1 (it is difficult)
+ 		 * TCP_CLOSE_WAIT -> TCP_LAST_ACK
+ 		 *
+ 		 * are legal only when FIN has been sent (i.e. in window),
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 0f9fe5edad142..512f8dc051c61 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -6516,6 +6516,8 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
+ 
+ 		tcp_initialize_rcv_mss(sk);
+ 		tcp_fast_path_on(tp);
++		if (sk->sk_shutdown & SEND_SHUTDOWN)
++			tcp_shutdown(sk, SEND_SHUTDOWN);
+ 		break;
+ 
+ 	case TCP_FIN_WAIT1: {
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 7a94acbd9f142..85d8688933f3c 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -153,6 +153,12 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp)
+ 	if (tcptw->tw_ts_recent_stamp &&
+ 	    (!twp || (reuse && time_after32(ktime_get_seconds(),
+ 					    tcptw->tw_ts_recent_stamp)))) {
++		/* inet_twsk_hashdance() sets sk_refcnt after putting twsk
++		 * and releasing the bucket lock.
++		 */
++		if (unlikely(!refcount_inc_not_zero(&sktw->sk_refcnt)))
++			return 0;
++
+ 		/* In case of repair and re-using TIME-WAIT sockets we still
+ 		 * want to be sure that it is safe as above but honor the
+ 		 * sequence numbers and time stamps set as part of the repair
+@@ -173,7 +179,7 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp)
+ 			tp->rx_opt.ts_recent	   = tcptw->tw_ts_recent;
+ 			tp->rx_opt.ts_recent_stamp = tcptw->tw_ts_recent_stamp;
+ 		}
+-		sock_hold(sktw);
++
+ 		return 1;
+ 	}
+ 
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index f0df14782ee01..68f1633c477ae 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -3440,7 +3440,9 @@ void tcp_send_fin(struct sock *sk)
+ 			return;
+ 		}
+ 	} else {
+-		skb = alloc_skb_fclone(MAX_TCP_HEADER, sk->sk_allocation);
++		skb = alloc_skb_fclone(MAX_TCP_HEADER,
++				       sk_gfp_mask(sk, GFP_ATOMIC |
++						       __GFP_NOWARN));
+ 		if (unlikely(!skb))
+ 			return;
+ 
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 445d8bc30fdd1..a0b569d0085bc 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -431,6 +431,7 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head,
+ 	struct sk_buff *p;
+ 	unsigned int ulen;
+ 	int ret = 0;
++	int flush;
+ 
+ 	/* requires non zero csum, for symmetry with GSO */
+ 	if (!uh->check) {
+@@ -464,13 +465,22 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head,
+ 			return p;
+ 		}
+ 
++		flush = NAPI_GRO_CB(p)->flush;
++
++		if (NAPI_GRO_CB(p)->flush_id != 1 ||
++		    NAPI_GRO_CB(p)->count != 1 ||
++		    !NAPI_GRO_CB(p)->is_atomic)
++			flush |= NAPI_GRO_CB(p)->flush_id;
++		else
++			NAPI_GRO_CB(p)->is_atomic = false;
++
+ 		/* Terminate the flow on len mismatch or if it grow "too much".
+ 		 * Under small packet flood GRO count could elsewhere grow a lot
+ 		 * leading to excessive truesize values.
+ 		 * On len mismatch merge the first packet shorter than gso_size,
+ 		 * otherwise complete the GRO packet.
+ 		 */
+-		if (ulen > ntohs(uh2->len)) {
++		if (ulen > ntohs(uh2->len) || flush) {
+ 			pp = p;
+ 		} else {
+ 			if (NAPI_GRO_CB(skb)->is_flist) {
+diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c
+index eac206a290d05..1f50517289fd9 100644
+--- a/net/ipv4/xfrm4_input.c
++++ b/net/ipv4/xfrm4_input.c
+@@ -61,7 +61,11 @@ int xfrm4_transport_finish(struct sk_buff *skb, int async)
+ 	ip_send_check(iph);
+ 
+ 	if (xo && (xo->flags & XFRM_GRO)) {
+-		skb_mac_header_rebuild(skb);
++		/* The full l2 header needs to be preserved so that re-injecting the packet at l2
++		 * works correctly in the presence of vlan tags.
++		 */
++		skb_mac_header_rebuild_full(skb, xo->orig_mac_len);
++		skb_reset_network_header(skb);
+ 		skb_reset_transport_header(skb);
+ 		return 0;
+ 	}
+diff --git a/net/ipv6/fib6_rules.c b/net/ipv6/fib6_rules.c
+index 55cd23b7a9357..cf9a44fb8243d 100644
+--- a/net/ipv6/fib6_rules.c
++++ b/net/ipv6/fib6_rules.c
+@@ -232,8 +232,12 @@ static int __fib6_rule_action(struct fib_rule *rule, struct flowi *flp,
+ 	rt = pol_lookup_func(lookup,
+ 			     net, table, flp6, arg->lookup_data, flags);
+ 	if (rt != net->ipv6.ip6_null_entry) {
++		struct inet6_dev *idev = ip6_dst_idev(&rt->dst);
++
++		if (!idev)
++			goto again;
+ 		err = fib6_rule_saddr(net, rule, flags, flp6,
+-				      ip6_dst_idev(&rt->dst)->dev);
++				      idev->dev);
+ 
+ 		if (err == -EAGAIN)
+ 			goto again;
+diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c
+index 4907ab241d6be..7dbefbb338ca5 100644
+--- a/net/ipv6/xfrm6_input.c
++++ b/net/ipv6/xfrm6_input.c
+@@ -56,7 +56,11 @@ int xfrm6_transport_finish(struct sk_buff *skb, int async)
+ 	skb_postpush_rcsum(skb, skb_network_header(skb), nhlen);
+ 
+ 	if (xo && (xo->flags & XFRM_GRO)) {
+-		skb_mac_header_rebuild(skb);
++		/* The full l2 header needs to be preserved so that re-injecting the packet at l2
++		 * works correctly in the presence of vlan tags.
++		 */
++		skb_mac_header_rebuild_full(skb, xo->orig_mac_len);
++		skb_reset_network_header(skb);
+ 		skb_reset_transport_header(skb);
+ 		return 0;
+ 	}
+diff --git a/net/l2tp/l2tp_eth.c b/net/l2tp/l2tp_eth.c
+index 6cd97c75445c8..9a36e174984cf 100644
+--- a/net/l2tp/l2tp_eth.c
++++ b/net/l2tp/l2tp_eth.c
+@@ -136,6 +136,9 @@ static void l2tp_eth_dev_recv(struct l2tp_session *session, struct sk_buff *skb,
+ 	/* checksums verified by L2TP */
+ 	skb->ip_summed = CHECKSUM_NONE;
+ 
++	/* drop outer flow-hash */
++	skb_clear_hash(skb);
++
+ 	skb_dst_drop(skb);
+ 	nf_reset_ct(skb);
+ 
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index bd349ae9ee4b4..782ff56c5aff1 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -112,7 +112,7 @@ struct ieee80211_bss {
+ };
+ 
+ /**
+- * enum ieee80211_corrupt_data_flags - BSS data corruption flags
++ * enum ieee80211_bss_corrupt_data_flags - BSS data corruption flags
+  * @IEEE80211_BSS_CORRUPT_BEACON: last beacon frame received was corrupted
+  * @IEEE80211_BSS_CORRUPT_PROBE_RESP: last probe response received was corrupted
+  *
+@@ -125,7 +125,7 @@ enum ieee80211_bss_corrupt_data_flags {
+ };
+ 
+ /**
+- * enum ieee80211_valid_data_flags - BSS valid data flags
++ * enum ieee80211_bss_valid_data_flags - BSS valid data flags
+  * @IEEE80211_BSS_VALID_WMM: WMM/UAPSD data was gathered from non-corrupt IE
+  * @IEEE80211_BSS_VALID_RATES: Supported rates were gathered from non-corrupt IE
+  * @IEEE80211_BSS_VALID_ERP: ERP flag was gathered from non-corrupt IE
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 3a15ef8dd3228..06ce138eedf1b 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -15,6 +15,7 @@
+ #include <linux/if_arp.h>
+ #include <linux/netdevice.h>
+ #include <linux/rtnetlink.h>
++#include <linux/kcov.h>
+ #include <net/mac80211.h>
+ #include <net/ieee80211_radiotap.h>
+ #include "ieee80211_i.h"
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 97a63b940482d..65fea564c9c00 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -17,6 +17,7 @@
+ #include <linux/etherdevice.h>
+ #include <linux/rcupdate.h>
+ #include <linux/export.h>
++#include <linux/kcov.h>
+ #include <linux/bitops.h>
+ #include <net/mac80211.h>
+ #include <net/ieee80211_radiotap.h>
+diff --git a/net/nsh/nsh.c b/net/nsh/nsh.c
+index 0f23e5e8e03eb..3e0fc71d95a14 100644
+--- a/net/nsh/nsh.c
++++ b/net/nsh/nsh.c
+@@ -76,13 +76,15 @@ EXPORT_SYMBOL_GPL(nsh_pop);
+ static struct sk_buff *nsh_gso_segment(struct sk_buff *skb,
+ 				       netdev_features_t features)
+ {
++	unsigned int outer_hlen, mac_len, nsh_len;
+ 	struct sk_buff *segs = ERR_PTR(-EINVAL);
+ 	u16 mac_offset = skb->mac_header;
+-	unsigned int nsh_len, mac_len;
+-	__be16 proto;
++	__be16 outer_proto, proto;
+ 
+ 	skb_reset_network_header(skb);
+ 
++	outer_proto = skb->protocol;
++	outer_hlen = skb_mac_header_len(skb);
+ 	mac_len = skb->mac_len;
+ 
+ 	if (unlikely(!pskb_may_pull(skb, NSH_BASE_HDR_LEN)))
+@@ -112,10 +114,10 @@ static struct sk_buff *nsh_gso_segment(struct sk_buff *skb,
+ 	}
+ 
+ 	for (skb = segs; skb; skb = skb->next) {
+-		skb->protocol = htons(ETH_P_NSH);
+-		__skb_push(skb, nsh_len);
+-		skb->mac_header = mac_offset;
+-		skb->network_header = skb->mac_header + mac_len;
++		skb->protocol = outer_proto;
++		__skb_push(skb, nsh_len + outer_hlen);
++		skb_reset_mac_header(skb);
++		skb_set_network_header(skb, outer_hlen);
+ 		skb->mac_len = mac_len;
+ 	}
+ 
+diff --git a/net/phonet/pn_netlink.c b/net/phonet/pn_netlink.c
+index 59aebe2968907..dd4c7e9a634fb 100644
+--- a/net/phonet/pn_netlink.c
++++ b/net/phonet/pn_netlink.c
+@@ -193,7 +193,7 @@ void rtm_phonet_notify(int event, struct net_device *dev, u8 dst)
+ 	struct sk_buff *skb;
+ 	int err = -ENOBUFS;
+ 
+-	skb = nlmsg_new(NLMSG_ALIGN(sizeof(struct ifaddrmsg)) +
++	skb = nlmsg_new(NLMSG_ALIGN(sizeof(struct rtmsg)) +
+ 			nla_total_size(1) + nla_total_size(4), GFP_KERNEL);
+ 	if (skb == NULL)
+ 		goto errout;
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 360a3bcd91fe1..a5ce9b937c42e 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -395,7 +395,7 @@ static struct rpc_clnt * rpc_new_client(const struct rpc_create_args *args,
+ 	clnt->cl_maxproc  = version->nrprocs;
+ 	clnt->cl_prog     = args->prognumber ? : program->number;
+ 	clnt->cl_vers     = version->number;
+-	clnt->cl_stats    = program->stats;
++	clnt->cl_stats    = args->stats ? : program->stats;
+ 	clnt->cl_metrics  = rpc_alloc_iostats(clnt);
+ 	rpc_init_pipe_dir_head(&clnt->cl_pipedir_objects);
+ 	err = -ENOMEM;
+@@ -665,6 +665,7 @@ struct rpc_clnt *rpc_clone_client(struct rpc_clnt *clnt)
+ 		.version	= clnt->cl_vers,
+ 		.authflavor	= clnt->cl_auth->au_flavor,
+ 		.cred		= clnt->cl_cred,
++		.stats		= clnt->cl_stats,
+ 	};
+ 	return __rpc_clone_client(&args, clnt);
+ }
+@@ -687,6 +688,7 @@ rpc_clone_client_set_auth(struct rpc_clnt *clnt, rpc_authflavor_t flavor)
+ 		.version	= clnt->cl_vers,
+ 		.authflavor	= flavor,
+ 		.cred		= clnt->cl_cred,
++		.stats		= clnt->cl_stats,
+ 	};
+ 	return __rpc_clone_client(&args, clnt);
+ }
+@@ -967,6 +969,7 @@ struct rpc_clnt *rpc_bind_new_program(struct rpc_clnt *old,
+ 		.version	= vers,
+ 		.authflavor	= old->cl_auth->au_flavor,
+ 		.cred		= old->cl_cred,
++		.stats		= old->cl_stats,
+ 	};
+ 	struct rpc_clnt *clnt;
+ 	int err;
+diff --git a/net/tipc/msg.c b/net/tipc/msg.c
+index 91dcf648d32bb..1fcd676133eb1 100644
+--- a/net/tipc/msg.c
++++ b/net/tipc/msg.c
+@@ -148,9 +148,9 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
+ 	if (fragid == FIRST_FRAGMENT) {
+ 		if (unlikely(head))
+ 			goto err;
+-		*buf = NULL;
+ 		if (skb_has_frag_list(frag) && __skb_linearize(frag))
+ 			goto err;
++		*buf = NULL;
+ 		frag = skb_unshare(frag, GFP_ATOMIC);
+ 		if (unlikely(!frag))
+ 			goto err;
+@@ -162,6 +162,11 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
+ 	if (!head)
+ 		goto err;
+ 
++	/* Either the input skb ownership is transferred to headskb
++	 * or the input skb is freed, clear the reference to avoid
++	 * bad access on error path.
++	 */
++	*buf = NULL;
+ 	if (skb_try_coalesce(head, frag, &headstolen, &delta)) {
+ 		kfree_skb_partial(frag, headstolen);
+ 	} else {
+@@ -185,7 +190,6 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
+ 		*headbuf = NULL;
+ 		return 1;
+ 	}
+-	*buf = NULL;
+ 	return 0;
+ err:
+ 	kfree_skb(*buf);
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 933591f9704b8..846e40dc00bb6 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -12642,6 +12642,8 @@ static int nl80211_set_coalesce(struct sk_buff *skb, struct genl_info *info)
+ error:
+ 	for (i = 0; i < new_coalesce.n_rules; i++) {
+ 		tmp_rule = &new_coalesce.rules[i];
++		if (!tmp_rule)
++			continue;
+ 		for (j = 0; j < tmp_rule->n_patterns; j++)
+ 			kfree(tmp_rule->patterns[j].mask);
+ 		kfree(tmp_rule->patterns);
+diff --git a/net/wireless/trace.h b/net/wireless/trace.h
+index 6e218a0acd4e3..edc824c103e83 100644
+--- a/net/wireless/trace.h
++++ b/net/wireless/trace.h
+@@ -968,7 +968,7 @@ TRACE_EVENT(rdev_get_mpp,
+ TRACE_EVENT(rdev_dump_mpp,
+ 	TP_PROTO(struct wiphy *wiphy, struct net_device *netdev, int _idx,
+ 		 u8 *dst, u8 *mpp),
+-	TP_ARGS(wiphy, netdev, _idx, mpp, dst),
++	TP_ARGS(wiphy, netdev, _idx, dst, mpp),
+ 	TP_STRUCT__entry(
+ 		WIPHY_ENTRY
+ 		NETDEV_ENTRY
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index f3bccab983f05..0c3fa01ec67a7 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -399,11 +399,15 @@ static int xfrm_prepare_input(struct xfrm_state *x, struct sk_buff *skb)
+  */
+ static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ {
++	struct xfrm_offload *xo = xfrm_offload(skb);
+ 	int ihl = skb->data - skb_transport_header(skb);
+ 
+ 	if (skb->transport_header != skb->network_header) {
+ 		memmove(skb_transport_header(skb),
+ 			skb_network_header(skb), ihl);
++		if (xo)
++			xo->orig_mac_len =
++				skb_mac_header_was_set(skb) ? skb_mac_header_len(skb) : 0;
+ 		skb->network_header = skb->transport_header;
+ 	}
+ 	ip_hdr(skb)->tot_len = htons(skb->len + ihl);
+@@ -414,11 +418,15 @@ static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ static int xfrm6_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ {
+ #if IS_ENABLED(CONFIG_IPV6)
++	struct xfrm_offload *xo = xfrm_offload(skb);
+ 	int ihl = skb->data - skb_transport_header(skb);
+ 
+ 	if (skb->transport_header != skb->network_header) {
+ 		memmove(skb_transport_header(skb),
+ 			skb_network_header(skb), ihl);
++		if (xo)
++			xo->orig_mac_len =
++				skb_mac_header_was_set(skb) ? skb_mac_header_len(skb) : 0;
+ 		skb->network_header = skb->transport_header;
+ 	}
+ 	ipv6_hdr(skb)->payload_len = htons(skb->len + ihl -
+diff --git a/security/keys/key.c b/security/keys/key.c
+index 67ad0826e385c..e5111ce17e254 100644
+--- a/security/keys/key.c
++++ b/security/keys/key.c
+@@ -464,7 +464,8 @@ static int __key_instantiate_and_link(struct key *key,
+ 			if (authkey)
+ 				key_invalidate(authkey);
+ 
+-			key_set_expiry(key, prep->expiry);
++			if (prep->expiry != TIME64_MAX)
++				key_set_expiry(key, prep->expiry);
+ 		}
+ 	}
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 84df5582cde22..4af8094938059 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9066,6 +9066,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x86c1, "HP Laptop 15-da3001TU", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
+ 	SND_PCI_QUIRK(0x103c, 0x86e7, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ 	SND_PCI_QUIRK(0x103c, 0x86e8, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index 04a7070c78e28..a8b9eb6ce2ea8 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -517,7 +517,7 @@ config SND_SOC_AK5558
+ 	select REGMAP_I2C
+ 
+ config SND_SOC_ALC5623
+-       tristate "Realtek ALC5623 CODEC"
++	tristate "Realtek ALC5623 CODEC"
+ 	depends on I2C
+ 
+ config SND_SOC_ALC5632
+@@ -733,7 +733,7 @@ config SND_SOC_JZ4770_CODEC
+ 	  will be called snd-soc-jz4770-codec.
+ 
+ config SND_SOC_L3
+-       tristate
++	tristate
+ 
+ config SND_SOC_DA7210
+ 	tristate
+@@ -773,10 +773,10 @@ config SND_SOC_HDMI_CODEC
+ 	select HDMI
+ 
+ config SND_SOC_ES7134
+-       tristate "Everest Semi ES7134 CODEC"
++	tristate "Everest Semi ES7134 CODEC"
+ 
+ config SND_SOC_ES7241
+-       tristate "Everest Semi ES7241 CODEC"
++	tristate "Everest Semi ES7241 CODEC"
+ 
+ config SND_SOC_ES8316
+ 	tristate "Everest Semi ES8316 CODEC"
+@@ -974,10 +974,10 @@ config SND_SOC_PCM186X_SPI
+ 	select REGMAP_SPI
+ 
+ config SND_SOC_PCM3008
+-       tristate
++	tristate
+ 
+ config SND_SOC_PCM3060
+-       tristate
++	tristate
+ 
+ config SND_SOC_PCM3060_I2C
+ 	tristate "Texas Instruments PCM3060 CODEC - I2C"
+@@ -1440,7 +1440,7 @@ config SND_SOC_UDA1334
+ 	  rate) and mute.
+ 
+ config SND_SOC_UDA134X
+-       tristate
++	tristate
+ 
+ config SND_SOC_UDA1380
+ 	tristate
+@@ -1765,8 +1765,8 @@ config SND_SOC_MT6660
+ 	  Select M to build this as module.
+ 
+ config SND_SOC_NAU8540
+-       tristate "Nuvoton Technology Corporation NAU85L40 CODEC"
+-       depends on I2C
++	tristate "Nuvoton Technology Corporation NAU85L40 CODEC"
++	depends on I2C
+ 
+ config SND_SOC_NAU8810
+ 	tristate "Nuvoton Technology Corporation NAU88C10 CODEC"
+diff --git a/sound/soc/generic/Kconfig b/sound/soc/generic/Kconfig
+index a90c3b28bce5f..4cafcf0e2bbfd 100644
+--- a/sound/soc/generic/Kconfig
++++ b/sound/soc/generic/Kconfig
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ config SND_SIMPLE_CARD_UTILS
+-       tristate
++	tristate
+ 
+ config SND_SIMPLE_CARD
+ 	tristate "ASoC Simple sound card support"
+diff --git a/sound/soc/intel/boards/Kconfig b/sound/soc/intel/boards/Kconfig
+index c10c37803c670..dddb672a6d553 100644
+--- a/sound/soc/intel/boards/Kconfig
++++ b/sound/soc/intel/boards/Kconfig
+@@ -552,7 +552,7 @@ config SND_SOC_INTEL_SOUNDWIRE_SOF_MACH
+ 	select SND_SOC_RT715_SDCA_SDW
+ 	select SND_SOC_RT5682_SDW
+ 	select SND_SOC_DMIC
+-        help
++	help
+ 	  Add support for Intel SoundWire-based platforms connected to
+ 	  MAX98373, RT700, RT711, RT1308 and RT715
+ 	  If unsure select "N".
+diff --git a/sound/soc/meson/Kconfig b/sound/soc/meson/Kconfig
+index ce0cbdc69b2ec..6458d5dc4902f 100644
+--- a/sound/soc/meson/Kconfig
++++ b/sound/soc/meson/Kconfig
+@@ -98,7 +98,8 @@ config SND_MESON_AXG_PDM
+ 	  in the Amlogic AXG SoC family
+ 
+ config SND_MESON_CARD_UTILS
+-       tristate
++	tristate
++	select SND_DYNAMIC_MINORS
+ 
+ config SND_MESON_CODEC_GLUE
+ 	tristate
+diff --git a/sound/soc/pxa/Kconfig b/sound/soc/pxa/Kconfig
+index 0ac85eada75cb..9d40e8a206d10 100644
+--- a/sound/soc/pxa/Kconfig
++++ b/sound/soc/pxa/Kconfig
+@@ -221,13 +221,13 @@ config SND_PXA2XX_SOC_MIOA701
+ 	  MIO A701.
+ 
+ config SND_PXA2XX_SOC_IMOTE2
+-       tristate "SoC Audio support for IMote 2"
+-       depends on SND_PXA2XX_SOC && MACH_INTELMOTE2 && I2C
+-       select SND_PXA2XX_SOC_I2S
+-       select SND_SOC_WM8940
+-       help
+-	 Say Y if you want to add support for SoC audio on the
+-	 IMote 2.
++	tristate "SoC Audio support for IMote 2"
++	depends on SND_PXA2XX_SOC && MACH_INTELMOTE2 && I2C
++	select SND_PXA2XX_SOC_I2S
++	select SND_SOC_WM8940
++	help
++	  Say Y if you want to add support for SoC audio on the
++	  IMote 2.
+ 
+ config SND_MMP_SOC_BROWNSTONE
+ 	tristate "SoC Audio support for Marvell Brownstone"
+diff --git a/sound/soc/tegra/tegra186_dspk.c b/sound/soc/tegra/tegra186_dspk.c
+index 373189e5907b9..313797aef49ff 100644
+--- a/sound/soc/tegra/tegra186_dspk.c
++++ b/sound/soc/tegra/tegra186_dspk.c
+@@ -1,8 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0-only
++// SPDX-FileCopyrightText: Copyright (c) 2020-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ //
+ // tegra186_dspk.c - Tegra186 DSPK driver
+-//
+-// Copyright (c) 2020 NVIDIA CORPORATION. All rights reserved.
+ 
+ #include <linux/clk.h>
+ #include <linux/device.h>
+@@ -241,14 +240,14 @@ static int tegra186_dspk_hw_params(struct snd_pcm_substream *substream,
+ 		return -EINVAL;
+ 	}
+ 
+-	cif_conf.client_bits = TEGRA_ACIF_BITS_24;
+-
+ 	switch (params_format(params)) {
+ 	case SNDRV_PCM_FORMAT_S16_LE:
+ 		cif_conf.audio_bits = TEGRA_ACIF_BITS_16;
++		cif_conf.client_bits = TEGRA_ACIF_BITS_16;
+ 		break;
+ 	case SNDRV_PCM_FORMAT_S32_LE:
+ 		cif_conf.audio_bits = TEGRA_ACIF_BITS_32;
++		cif_conf.client_bits = TEGRA_ACIF_BITS_24;
+ 		break;
+ 	default:
+ 		dev_err(dev, "unsupported format!\n");
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index b67617b68e509..f4437015d43a7 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -202,7 +202,7 @@ int line6_send_raw_message_async(struct usb_line6 *line6, const char *buffer,
+ 	struct urb *urb;
+ 
+ 	/* create message: */
+-	msg = kmalloc(sizeof(struct message), GFP_ATOMIC);
++	msg = kzalloc(sizeof(struct message), GFP_ATOMIC);
+ 	if (msg == NULL)
+ 		return -ENOMEM;
+ 
+@@ -688,7 +688,7 @@ static int line6_init_cap_control(struct usb_line6 *line6)
+ 	int ret;
+ 
+ 	/* initialize USB buffers: */
+-	line6->buffer_listen = kmalloc(LINE6_BUFSIZE_LISTEN, GFP_KERNEL);
++	line6->buffer_listen = kzalloc(LINE6_BUFSIZE_LISTEN, GFP_KERNEL);
+ 	if (!line6->buffer_listen)
+ 		return -ENOMEM;
+ 
+@@ -697,7 +697,7 @@ static int line6_init_cap_control(struct usb_line6 *line6)
+ 		return -ENOMEM;
+ 
+ 	if (line6->properties->capabilities & LINE6_CAP_CONTROL_MIDI) {
+-		line6->buffer_message = kmalloc(LINE6_MIDI_MESSAGE_MAXLEN, GFP_KERNEL);
++		line6->buffer_message = kzalloc(LINE6_MIDI_MESSAGE_MAXLEN, GFP_KERNEL);
+ 		if (!line6->buffer_message)
+ 			return -ENOMEM;
+ 
+diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8
+index 71e3f3a68b9df..f8cc88c56ae8f 100644
+--- a/tools/power/x86/turbostat/turbostat.8
++++ b/tools/power/x86/turbostat/turbostat.8
+@@ -320,7 +320,7 @@ below the processor's base frequency.
+ 
+ Busy% = MPERF_delta/TSC_delta
+ 
+-Bzy_MHz = TSC_delta/APERF_delta/MPERF_delta/measurement_interval
++Bzy_MHz = TSC_delta*APERF_delta/MPERF_delta/measurement_interval
+ 
+ Note that these calculations depend on TSC_delta, so they
+ are not reliable during intervals when TSC_MHz is not running at the base frequency.
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 9d4a249cc98bb..11f1b6288e123 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -1685,9 +1685,10 @@ int sum_counters(struct thread_data *t, struct core_data *c,
+ 	average.packages.rapl_dram_perf_status += p->rapl_dram_perf_status;
+ 
+ 	for (i = 0, mp = sys.pp; mp; i++, mp = mp->next) {
+-		if (mp->format == FORMAT_RAW)
+-			continue;
+-		average.packages.counter[i] += p->counter[i];
++		if ((mp->format == FORMAT_RAW) && (topo.num_packages == 0))
++			average.packages.counter[i] = p->counter[i];
++		else
++			average.packages.counter[i] += p->counter[i];
+ 	}
+ 	return 0;
+ }
+diff --git a/tools/testing/selftests/timers/valid-adjtimex.c b/tools/testing/selftests/timers/valid-adjtimex.c
+index 48b9a803235a8..d13ebde203221 100644
+--- a/tools/testing/selftests/timers/valid-adjtimex.c
++++ b/tools/testing/selftests/timers/valid-adjtimex.c
+@@ -21,9 +21,6 @@
+  *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  *   GNU General Public License for more details.
+  */
+-
+-
+-
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <time.h>
+@@ -62,45 +59,47 @@ int clear_time_state(void)
+ #define NUM_FREQ_OUTOFRANGE 4
+ #define NUM_FREQ_INVALID 2
+ 
++#define SHIFTED_PPM (1 << 16)
++
+ long valid_freq[NUM_FREQ_VALID] = {
+-	-499<<16,
+-	-450<<16,
+-	-400<<16,
+-	-350<<16,
+-	-300<<16,
+-	-250<<16,
+-	-200<<16,
+-	-150<<16,
+-	-100<<16,
+-	-75<<16,
+-	-50<<16,
+-	-25<<16,
+-	-10<<16,
+-	-5<<16,
+-	-1<<16,
++	 -499 * SHIFTED_PPM,
++	 -450 * SHIFTED_PPM,
++	 -400 * SHIFTED_PPM,
++	 -350 * SHIFTED_PPM,
++	 -300 * SHIFTED_PPM,
++	 -250 * SHIFTED_PPM,
++	 -200 * SHIFTED_PPM,
++	 -150 * SHIFTED_PPM,
++	 -100 * SHIFTED_PPM,
++	  -75 * SHIFTED_PPM,
++	  -50 * SHIFTED_PPM,
++	  -25 * SHIFTED_PPM,
++	  -10 * SHIFTED_PPM,
++	   -5 * SHIFTED_PPM,
++	   -1 * SHIFTED_PPM,
+ 	-1000,
+-	1<<16,
+-	5<<16,
+-	10<<16,
+-	25<<16,
+-	50<<16,
+-	75<<16,
+-	100<<16,
+-	150<<16,
+-	200<<16,
+-	250<<16,
+-	300<<16,
+-	350<<16,
+-	400<<16,
+-	450<<16,
+-	499<<16,
++	    1 * SHIFTED_PPM,
++	    5 * SHIFTED_PPM,
++	   10 * SHIFTED_PPM,
++	   25 * SHIFTED_PPM,
++	   50 * SHIFTED_PPM,
++	   75 * SHIFTED_PPM,
++	  100 * SHIFTED_PPM,
++	  150 * SHIFTED_PPM,
++	  200 * SHIFTED_PPM,
++	  250 * SHIFTED_PPM,
++	  300 * SHIFTED_PPM,
++	  350 * SHIFTED_PPM,
++	  400 * SHIFTED_PPM,
++	  450 * SHIFTED_PPM,
++	  499 * SHIFTED_PPM,
+ };
+ 
+ long outofrange_freq[NUM_FREQ_OUTOFRANGE] = {
+-	-1000<<16,
+-	-550<<16,
+-	550<<16,
+-	1000<<16,
++	-1000 * SHIFTED_PPM,
++	 -550 * SHIFTED_PPM,
++	  550 * SHIFTED_PPM,
++	 1000 * SHIFTED_PPM,
+ };
+ 
+ #define LONG_MAX (~0UL>>1)


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-05-25 15:14 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-05-25 15:14 UTC (permalink / raw
  To: gentoo-commits

commit:     c60e6e4dd9d01c3e54b863430b79ffe2f6a0432b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat May 25 15:14:44 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat May 25 15:14:44 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c60e6e4d

Linux patch 5.10.218

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |   4 +
 1217_linux-5.10.218.patch | 805 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 809 insertions(+)

diff --git a/0000_README b/0000_README
index ce7db8ae..c4548db6 100644
--- a/0000_README
+++ b/0000_README
@@ -911,6 +911,10 @@ Patch:  1216_linux-5.10.217.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.217
 
+Patch:  1217_linux-5.10.218.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.218
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1217_linux-5.10.218.patch b/1217_linux-5.10.218.patch
new file mode 100644
index 00000000..83f6fb25
--- /dev/null
+++ b/1217_linux-5.10.218.patch
@@ -0,0 +1,805 @@
+diff --git a/Documentation/sphinx/kernel_include.py b/Documentation/sphinx/kernel_include.py
+index f523aa68a36b3..cf601bd058abe 100755
+--- a/Documentation/sphinx/kernel_include.py
++++ b/Documentation/sphinx/kernel_include.py
+@@ -94,7 +94,6 @@ class KernelInclude(Include):
+         # HINT: this is the only line I had to change / commented out:
+         #path = utils.relative_path(None, path)
+ 
+-        path = nodes.reprunicode(path)
+         encoding = self.options.get(
+             'encoding', self.state.document.settings.input_encoding)
+         e_handler=self.state.document.settings.input_encoding_error_handler
+diff --git a/Makefile b/Makefile
+index d9557382a0286..3bc56bf43c822 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 217
++SUBLEVEL = 218
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 1631a9a1566e3..bd785386d6291 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -46,14 +46,6 @@
+ .code64
+ .section .entry.text, "ax"
+ 
+-#ifdef CONFIG_PARAVIRT_XXL
+-SYM_CODE_START(native_usergs_sysret64)
+-	UNWIND_HINT_EMPTY
+-	swapgs
+-	sysretq
+-SYM_CODE_END(native_usergs_sysret64)
+-#endif /* CONFIG_PARAVIRT_XXL */
+-
+ /*
+  * 64-bit SYSCALL instruction entry. Up to 6 arguments in registers.
+  *
+@@ -128,7 +120,12 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
+ 	 * Try to use SYSRET instead of IRET if we're returning to
+ 	 * a completely clean 64-bit userspace context.  If we're not,
+ 	 * go to the slow exit path.
++	 * In the Xen PV case we must use iret anyway.
+ 	 */
++
++	ALTERNATIVE "", "jmp	swapgs_restore_regs_and_return_to_usermode", \
++		X86_FEATURE_XENPV
++
+ 	movq	RCX(%rsp), %rcx
+ 	movq	RIP(%rsp), %r11
+ 
+@@ -220,7 +217,9 @@ syscall_return_via_sysret:
+ 
+ 	popq	%rdi
+ 	popq	%rsp
+-	USERGS_SYSRET64
++	swapgs
++	CLEAR_CPU_BUFFERS
++	sysretq
+ SYM_CODE_END(entry_SYSCALL_64)
+ 
+ /*
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index f40dea50dfbf3..e585a4705b8dd 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -132,13 +132,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
+ #endif
+ 
+ #define INTERRUPT_RETURN	jmp native_iret
+-#define USERGS_SYSRET64				\
+-	swapgs;					\
+-	CLEAR_CPU_BUFFERS;			\
+-	sysretq;
+-#define USERGS_SYSRET32				\
+-	swapgs;					\
+-	sysretl
+ 
+ #else
+ #define INTERRUPT_RETURN		iret
+diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
+index 4a32b0d343762..3c89c1f648719 100644
+--- a/arch/x86/include/asm/paravirt.h
++++ b/arch/x86/include/asm/paravirt.h
+@@ -776,11 +776,6 @@ extern void default_banner(void);
+ 
+ #ifdef CONFIG_X86_64
+ #ifdef CONFIG_PARAVIRT_XXL
+-#define USERGS_SYSRET64							\
+-	PARA_SITE(PARA_PATCH(PV_CPU_usergs_sysret64),			\
+-		  ANNOTATE_RETPOLINE_SAFE;				\
+-		  jmp PARA_INDIRECT(pv_ops+PV_CPU_usergs_sysret64);)
+-
+ #ifdef CONFIG_DEBUG_ENTRY
+ #define SAVE_FLAGS(clobbers)                                        \
+ 	PARA_SITE(PARA_PATCH(PV_IRQ_save_fl),			    \
+diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
+index 903d71884fa25..55d8b7950e61f 100644
+--- a/arch/x86/include/asm/paravirt_types.h
++++ b/arch/x86/include/asm/paravirt_types.h
+@@ -157,14 +157,6 @@ struct pv_cpu_ops {
+ 
+ 	u64 (*read_pmc)(int counter);
+ 
+-	/*
+-	 * Switch to usermode gs and return to 64-bit usermode using
+-	 * sysret.  Only used in 64-bit kernels to return to 64-bit
+-	 * processes.  Usermode register state, including %rsp, must
+-	 * already be restored.
+-	 */
+-	void (*usergs_sysret64)(void);
+-
+ 	/* Normal iret.  Jump to this with the standard iret stack
+ 	   frame set up. */
+ 	void (*iret)(void);
+diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
+index 1354bc30614d7..b14533af76762 100644
+--- a/arch/x86/kernel/asm-offsets_64.c
++++ b/arch/x86/kernel/asm-offsets_64.c
+@@ -13,8 +13,6 @@ int main(void)
+ {
+ #ifdef CONFIG_PARAVIRT
+ #ifdef CONFIG_PARAVIRT_XXL
+-	OFFSET(PV_CPU_usergs_sysret64, paravirt_patch_template,
+-	       cpu.usergs_sysret64);
+ #ifdef CONFIG_DEBUG_ENTRY
+ 	OFFSET(PV_IRQ_save_fl, paravirt_patch_template, irq.save_fl);
+ #endif
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index f0e4ad8595ca7..9d91061b862c9 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -124,8 +124,7 @@ unsigned paravirt_patch_default(u8 type, void *insn_buff,
+ 	else if (opfunc == _paravirt_ident_64)
+ 		ret = paravirt_patch_ident_64(insn_buff, len);
+ 
+-	else if (type == PARAVIRT_PATCH(cpu.iret) ||
+-		 type == PARAVIRT_PATCH(cpu.usergs_sysret64))
++	else if (type == PARAVIRT_PATCH(cpu.iret))
+ 		/* If operation requires a jmp, then jmp */
+ 		ret = paravirt_patch_jmp(insn_buff, opfunc, addr, len);
+ #endif
+@@ -159,7 +158,6 @@ static u64 native_steal_clock(int cpu)
+ 
+ /* These are in entry.S */
+ extern void native_iret(void);
+-extern void native_usergs_sysret64(void);
+ 
+ static struct resource reserve_ioports = {
+ 	.start = 0,
+@@ -299,7 +297,6 @@ struct paravirt_patch_template pv_ops = {
+ 
+ 	.cpu.load_sp0		= native_load_sp0,
+ 
+-	.cpu.usergs_sysret64	= native_usergs_sysret64,
+ 	.cpu.iret		= native_iret,
+ 
+ #ifdef CONFIG_X86_IOPL_IOPERM
+diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
+index 7c518b08aa3c5..2fada2c347c98 100644
+--- a/arch/x86/kernel/paravirt_patch.c
++++ b/arch/x86/kernel/paravirt_patch.c
+@@ -27,7 +27,6 @@ struct patch_xxl {
+ 	const unsigned char	mmu_write_cr3[3];
+ 	const unsigned char	irq_restore_fl[2];
+ 	const unsigned char	cpu_wbinvd[2];
+-	const unsigned char	cpu_usergs_sysret64[6];
+ 	const unsigned char	mov64[3];
+ };
+ 
+@@ -40,8 +39,6 @@ static const struct patch_xxl patch_data_xxl = {
+ 	.mmu_write_cr3		= { 0x0f, 0x22, 0xdf },	// mov %rdi, %cr3
+ 	.irq_restore_fl		= { 0x57, 0x9d },	// push %rdi; popfq
+ 	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
+-	.cpu_usergs_sysret64	= { 0x0f, 0x01, 0xf8,
+-				    0x48, 0x0f, 0x07 },	// swapgs; sysretq
+ 	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
+ };
+ 
+@@ -83,7 +80,6 @@ unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
+ 	PATCH_CASE(mmu, read_cr3, xxl, insn_buff, len);
+ 	PATCH_CASE(mmu, write_cr3, xxl, insn_buff, len);
+ 
+-	PATCH_CASE(cpu, usergs_sysret64, xxl, insn_buff, len);
+ 	PATCH_CASE(cpu, wbinvd, xxl, insn_buff, len);
+ #endif
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 8e0b957c62193..bc295439360e5 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -8501,13 +8501,20 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu)
+ 
+ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
+ {
++	/*
++	 * Suppress the error code if the vCPU is in Real Mode, as Real Mode
++	 * exceptions don't report error codes.  The presence of an error code
++	 * is carried with the exception and only stripped when the exception
++	 * is injected as intercepted #PF VM-Exits for AMD's Paged Real Mode do
++	 * report an error code despite the CPU being in Real Mode.
++	 */
++	vcpu->arch.exception.has_error_code &= is_protmode(vcpu);
++
+ 	trace_kvm_inj_exception(vcpu->arch.exception.nr,
+ 				vcpu->arch.exception.has_error_code,
+ 				vcpu->arch.exception.error_code,
+ 				vcpu->arch.exception.injected);
+ 
+-	if (vcpu->arch.exception.error_code && !is_protmode(vcpu))
+-		vcpu->arch.exception.error_code = false;
+ 	kvm_x86_ops.queue_exception(vcpu);
+ }
+ 
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 94804670caab8..b1efc4b4f42ad 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1059,7 +1059,6 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
+ 	.read_pmc = xen_read_pmc,
+ 
+ 	.iret = xen_iret,
+-	.usergs_sysret64 = xen_sysret64,
+ 
+ 	.load_tr_desc = paravirt_nop,
+ 	.set_ldt = xen_set_ldt,
+diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
+index e3031afcb1039..3a33713cf449f 100644
+--- a/arch/x86/xen/xen-asm.S
++++ b/arch/x86/xen/xen-asm.S
+@@ -220,27 +220,6 @@ SYM_CODE_START(xen_iret)
+ 	jmp hypercall_iret
+ SYM_CODE_END(xen_iret)
+ 
+-SYM_CODE_START(xen_sysret64)
+-	UNWIND_HINT_EMPTY
+-	/*
+-	 * We're already on the usermode stack at this point, but
+-	 * still with the kernel gs, so we can easily switch back.
+-	 *
+-	 * tss.sp2 is scratch space.
+-	 */
+-	movq %rsp, PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
+-	movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
+-
+-	pushq $__USER_DS
+-	pushq PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
+-	pushq %r11
+-	pushq $__USER_CS
+-	pushq %rcx
+-
+-	pushq $VGCF_in_syscall
+-	jmp hypercall_iret
+-SYM_CODE_END(xen_sysret64)
+-
+ /*
+  * XEN pv doesn't use trampoline stack, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is
+  * also the kernel stack.  Reusing swapgs_restore_regs_and_return_to_usermode()
+diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
+index 8695809b88f08..98242430d07e7 100644
+--- a/arch/x86/xen/xen-ops.h
++++ b/arch/x86/xen/xen-ops.h
+@@ -138,8 +138,6 @@ __visible unsigned long xen_read_cr2_direct(void);
+ 
+ /* These are not functions, and cannot be called normally */
+ __visible void xen_iret(void);
+-__visible void xen_sysret32(void);
+-__visible void xen_sysret64(void);
+ 
+ extern int xen_panic_handler_init(void);
+ 
+diff --git a/drivers/firmware/arm_scmi/reset.c b/drivers/firmware/arm_scmi/reset.c
+index a981a22cfe891..b8388a3b9c064 100644
+--- a/drivers/firmware/arm_scmi/reset.c
++++ b/drivers/firmware/arm_scmi/reset.c
+@@ -149,8 +149,12 @@ static int scmi_domain_reset(const struct scmi_handle *handle, u32 domain,
+ 	struct scmi_xfer *t;
+ 	struct scmi_msg_reset_domain_reset *dom;
+ 	struct scmi_reset_info *pi = handle->reset_priv;
+-	struct reset_dom_info *rdom = pi->dom_info + domain;
++	struct reset_dom_info *rdom;
+ 
++	if (domain >= pi->num_domains)
++		return -EINVAL;
++
++	rdom = pi->dom_info + domain;
+ 	if (rdom->async_reset)
+ 		flags |= ASYNCHRONOUS_RESET;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index a8f1c4969fac7..e971d2b9e3c00 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -765,6 +765,9 @@ int amdgpu_ras_error_query(struct amdgpu_device *adev,
+ 	if (!obj)
+ 		return -EINVAL;
+ 
++	if (!info || info->head.block == AMDGPU_RAS_BLOCK_COUNT)
++		return -EINVAL;
++
+ 	switch (info->head.block) {
+ 	case AMDGPU_RAS_BLOCK__UMC:
+ 		if (adev->umc.funcs->query_ras_error_count)
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 89dc69d1807e1..2fc21aae1004e 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -2420,14 +2420,18 @@ static void umac_enable_set(struct bcmgenet_priv *priv, u32 mask, bool enable)
+ {
+ 	u32 reg;
+ 
++	spin_lock_bh(&priv->reg_lock);
+ 	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+-	if (reg & CMD_SW_RESET)
++	if (reg & CMD_SW_RESET) {
++		spin_unlock_bh(&priv->reg_lock);
+ 		return;
++	}
+ 	if (enable)
+ 		reg |= mask;
+ 	else
+ 		reg &= ~mask;
+ 	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
++	spin_unlock_bh(&priv->reg_lock);
+ 
+ 	/* UniMAC stops on a packet boundary, wait for a full-size packet
+ 	 * to be processed
+@@ -2443,8 +2447,10 @@ static void reset_umac(struct bcmgenet_priv *priv)
+ 	udelay(10);
+ 
+ 	/* issue soft reset and disable MAC while updating its registers */
++	spin_lock_bh(&priv->reg_lock);
+ 	bcmgenet_umac_writel(priv, CMD_SW_RESET, UMAC_CMD);
+ 	udelay(2);
++	spin_unlock_bh(&priv->reg_lock);
+ }
+ 
+ static void bcmgenet_intr_disable(struct bcmgenet_priv *priv)
+@@ -3572,16 +3578,19 @@ static void bcmgenet_set_rx_mode(struct net_device *dev)
+ 	 * 3. The number of filters needed exceeds the number filters
+ 	 *    supported by the hardware.
+ 	*/
++	spin_lock(&priv->reg_lock);
+ 	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+ 	if ((dev->flags & (IFF_PROMISC | IFF_ALLMULTI)) ||
+ 	    (nfilter > MAX_MDF_FILTER)) {
+ 		reg |= CMD_PROMISC;
+ 		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
++		spin_unlock(&priv->reg_lock);
+ 		bcmgenet_umac_writel(priv, 0, UMAC_MDF_CTRL);
+ 		return;
+ 	} else {
+ 		reg &= ~CMD_PROMISC;
+ 		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
++		spin_unlock(&priv->reg_lock);
+ 	}
+ 
+ 	/* update MDF filter */
+@@ -3975,6 +3984,7 @@ static int bcmgenet_probe(struct platform_device *pdev)
+ 		goto err;
+ 	}
+ 
++	spin_lock_init(&priv->reg_lock);
+ 	spin_lock_init(&priv->lock);
+ 
+ 	SET_NETDEV_DEV(dev, &pdev->dev);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+index c7853d5304b09..82f9fdf591036 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+@@ -627,6 +627,8 @@ struct bcmgenet_rxnfc_rule {
+ /* device context */
+ struct bcmgenet_priv {
+ 	void __iomem *base;
++	/* reg_lock: lock to serialize access to shared registers */
++	spinlock_t reg_lock;
+ 	enum bcmgenet_version version;
+ 	struct net_device *dev;
+ 
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+index 2c2a56d5a0a1a..35c12938cb348 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+@@ -134,6 +134,7 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv,
+ 	}
+ 
+ 	/* Can't suspend with WoL if MAC is still in reset */
++	spin_lock_bh(&priv->reg_lock);
+ 	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+ 	if (reg & CMD_SW_RESET)
+ 		reg &= ~CMD_SW_RESET;
+@@ -141,6 +142,7 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv,
+ 	/* disable RX */
+ 	reg &= ~CMD_RX_EN;
+ 	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
++	spin_unlock_bh(&priv->reg_lock);
+ 	mdelay(10);
+ 
+ 	if (priv->wolopts & (WAKE_MAGIC | WAKE_MAGICSECURE)) {
+@@ -186,6 +188,7 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv,
+ 	}
+ 
+ 	/* Enable CRC forward */
++	spin_lock_bh(&priv->reg_lock);
+ 	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+ 	priv->crc_fwd_en = 1;
+ 	reg |= CMD_CRC_FWD;
+@@ -193,6 +196,7 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv,
+ 	/* Receiver must be enabled for WOL MP detection */
+ 	reg |= CMD_RX_EN;
+ 	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
++	spin_unlock_bh(&priv->reg_lock);
+ 
+ 	reg = UMAC_IRQ_MPD_R;
+ 	if (hfb_enable)
+@@ -239,7 +243,9 @@ void bcmgenet_wol_power_up_cfg(struct bcmgenet_priv *priv,
+ 	}
+ 
+ 	/* Disable CRC Forward */
++	spin_lock_bh(&priv->reg_lock);
+ 	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+ 	reg &= ~CMD_CRC_FWD;
+ 	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
++	spin_unlock_bh(&priv->reg_lock);
+ }
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index becc717aad131..1e07f57ff3edd 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -91,6 +91,7 @@ void bcmgenet_mii_setup(struct net_device *dev)
+ 		reg |= RGMII_LINK;
+ 		bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
+ 
++		spin_lock_bh(&priv->reg_lock);
+ 		reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+ 		reg &= ~((CMD_SPEED_MASK << CMD_SPEED_SHIFT) |
+ 			       CMD_HD_EN |
+@@ -103,6 +104,7 @@ void bcmgenet_mii_setup(struct net_device *dev)
+ 			reg |= CMD_TX_EN | CMD_RX_EN;
+ 		}
+ 		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
++		spin_unlock_bh(&priv->reg_lock);
+ 
+ 		priv->eee.eee_active = phy_init_eee(phydev, 0) >= 0;
+ 		bcmgenet_eee_enable_set(dev,
+@@ -264,6 +266,7 @@ int bcmgenet_mii_config(struct net_device *dev, bool init)
+ 	 * block for the interface to work
+ 	 */
+ 	if (priv->ext_phy) {
++		mutex_lock(&phydev->lock);
+ 		reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL);
+ 		reg &= ~ID_MODE_DIS;
+ 		reg |= id_mode_dis;
+@@ -272,6 +275,7 @@ int bcmgenet_mii_config(struct net_device *dev, bool init)
+ 		else
+ 			reg |= RGMII_MODE_EN;
+ 		bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
++		mutex_unlock(&phydev->lock);
+ 	}
+ 
+ 	if (init)
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 4d46260e6bff3..ee99dc56c5448 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -205,6 +205,7 @@ static int pinctrl_register_one_pin(struct pinctrl_dev *pctldev,
+ 				    const struct pinctrl_pin_desc *pin)
+ {
+ 	struct pin_desc *pindesc;
++	int error;
+ 
+ 	pindesc = pin_desc_get(pctldev, pin->number);
+ 	if (pindesc) {
+@@ -226,18 +227,25 @@ static int pinctrl_register_one_pin(struct pinctrl_dev *pctldev,
+ 	} else {
+ 		pindesc->name = kasprintf(GFP_KERNEL, "PIN%u", pin->number);
+ 		if (!pindesc->name) {
+-			kfree(pindesc);
+-			return -ENOMEM;
++			error = -ENOMEM;
++			goto failed;
+ 		}
+ 		pindesc->dynamic_name = true;
+ 	}
+ 
+ 	pindesc->drv_data = pin->drv_data;
+ 
+-	radix_tree_insert(&pctldev->pin_desc_tree, pin->number, pindesc);
++	error = radix_tree_insert(&pctldev->pin_desc_tree, pin->number, pindesc);
++	if (error)
++		goto failed;
++
+ 	pr_debug("registered pin %d (%s) on %s\n",
+ 		 pin->number, pindesc->name, pctldev->desc->name);
+ 	return 0;
++
++failed:
++	kfree(pindesc);
++	return error;
+ }
+ 
+ static int pinctrl_register_pins(struct pinctrl_dev *pctldev,
+diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
+index 79b7db8580e05..d988511f8b326 100644
+--- a/drivers/tty/serial/kgdboc.c
++++ b/drivers/tty/serial/kgdboc.c
+@@ -19,6 +19,7 @@
+ #include <linux/console.h>
+ #include <linux/vt_kern.h>
+ #include <linux/input.h>
++#include <linux/irq_work.h>
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
+ #include <linux/serial_core.h>
+@@ -48,6 +49,25 @@ static struct kgdb_io		kgdboc_earlycon_io_ops;
+ static int                      (*earlycon_orig_exit)(struct console *con);
+ #endif /* IS_BUILTIN(CONFIG_KGDB_SERIAL_CONSOLE) */
+ 
++/*
++ * When we leave the debug trap handler we need to reset the keyboard status
++ * (since the original keyboard state gets partially clobbered by kdb use of
++ * the keyboard).
++ *
++ * The path to deliver the reset is somewhat circuitous.
++ *
++ * To deliver the reset we register an input handler, reset the keyboard and
++ * then deregister the input handler. However, to get this done right, we do
++ * have to carefully manage the calling context because we can only register
++ * input handlers from task context.
++ *
++ * In particular we need to trigger the action from the debug trap handler with
++ * all its NMI and/or NMI-like oddities. To solve this the kgdboc trap exit code
++ * (the "post_exception" callback) uses irq_work_queue(), which is NMI-safe, to
++ * schedule a callback from a hardirq context. From there we have to defer the
++ * work again, this time using schedule_work(), to get a callback using the
++ * system workqueue, which runs in task context.
++ */
+ #ifdef CONFIG_KDB_KEYBOARD
+ static int kgdboc_reset_connect(struct input_handler *handler,
+ 				struct input_dev *dev,
+@@ -99,10 +119,17 @@ static void kgdboc_restore_input_helper(struct work_struct *dummy)
+ 
+ static DECLARE_WORK(kgdboc_restore_input_work, kgdboc_restore_input_helper);
+ 
++static void kgdboc_queue_restore_input_helper(struct irq_work *unused)
++{
++	schedule_work(&kgdboc_restore_input_work);
++}
++
++static DEFINE_IRQ_WORK(kgdboc_restore_input_irq_work, kgdboc_queue_restore_input_helper);
++
+ static void kgdboc_restore_input(void)
+ {
+ 	if (likely(system_state == SYSTEM_RUNNING))
+-		schedule_work(&kgdboc_restore_input_work);
++		irq_work_queue(&kgdboc_restore_input_irq_work);
+ }
+ 
+ static int kgdboc_register_kbd(char **cptr)
+@@ -133,6 +160,7 @@ static void kgdboc_unregister_kbd(void)
+ 			i--;
+ 		}
+ 	}
++	irq_work_sync(&kgdboc_restore_input_irq_work);
+ 	flush_work(&kgdboc_restore_input_work);
+ }
+ #else /* ! CONFIG_KDB_KEYBOARD */
+diff --git a/drivers/usb/typec/ucsi/displayport.c b/drivers/usb/typec/ucsi/displayport.c
+index 261131c9e37c6..4446c4066679d 100644
+--- a/drivers/usb/typec/ucsi/displayport.c
++++ b/drivers/usb/typec/ucsi/displayport.c
+@@ -249,8 +249,6 @@ static void ucsi_displayport_work(struct work_struct *work)
+ 	struct ucsi_dp *dp = container_of(work, struct ucsi_dp, work);
+ 	int ret;
+ 
+-	mutex_lock(&dp->con->lock);
+-
+ 	ret = typec_altmode_vdm(dp->alt, dp->header,
+ 				dp->vdo_data, dp->vdo_size);
+ 	if (ret)
+@@ -259,8 +257,6 @@ static void ucsi_displayport_work(struct work_struct *work)
+ 	dp->vdo_data = NULL;
+ 	dp->vdo_size = 0;
+ 	dp->header = 0;
+-
+-	mutex_unlock(&dp->con->lock);
+ }
+ 
+ void ucsi_displayport_remove_partner(struct typec_altmode *alt)
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 51298d749824c..209eb85b6c270 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -3194,6 +3194,7 @@ static int btrfs_relocate_sys_chunks(struct btrfs_fs_info *fs_info)
+ 			 * alignment and size).
+ 			 */
+ 			ret = -EUCLEAN;
++			mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+ 			goto error;
+ 		}
+ 
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 6be7e75922918..36fa456f42ba9 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2645,6 +2645,8 @@ static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ 	if (subflow->request_mptcp && mptcp_token_new_connect(ssock->sk))
+ 		mptcp_subflow_early_fallback(msk, subflow);
+ 
++	WRITE_ONCE(msk->write_seq, subflow->idsn);
++
+ do_connect:
+ 	err = ssock->ops->connect(ssock, uaddr, addr_len, flags);
+ 	sock->state = ssock->state;
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index a2b14434d7aa0..ac3678d2d6d52 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1927,7 +1927,7 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 	struct sock *sk = sock->sk;
+ 	struct netlink_sock *nlk = nlk_sk(sk);
+ 	int noblock = flags & MSG_DONTWAIT;
+-	size_t copied;
++	size_t copied, max_recvmsg_len;
+ 	struct sk_buff *skb, *data_skb;
+ 	int err, ret;
+ 
+@@ -1960,9 +1960,10 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ #endif
+ 
+ 	/* Record the max length of recvmsg() calls for future allocations */
+-	nlk->max_recvmsg_len = max(nlk->max_recvmsg_len, len);
+-	nlk->max_recvmsg_len = min_t(size_t, nlk->max_recvmsg_len,
+-				     SKB_WITH_OVERHEAD(32768));
++	max_recvmsg_len = max(READ_ONCE(nlk->max_recvmsg_len), len);
++	max_recvmsg_len = min_t(size_t, max_recvmsg_len,
++				SKB_WITH_OVERHEAD(32768));
++	WRITE_ONCE(nlk->max_recvmsg_len, max_recvmsg_len);
+ 
+ 	copied = data_skb->len;
+ 	if (len < copied) {
+@@ -2211,6 +2212,7 @@ static int netlink_dump(struct sock *sk)
+ 	struct netlink_ext_ack extack = {};
+ 	struct netlink_callback *cb;
+ 	struct sk_buff *skb = NULL;
++	size_t max_recvmsg_len;
+ 	struct module *module;
+ 	int err = -ENOBUFS;
+ 	int alloc_min_size;
+@@ -2233,8 +2235,9 @@ static int netlink_dump(struct sock *sk)
+ 	cb = &nlk->cb;
+ 	alloc_min_size = max_t(int, cb->min_dump_alloc, NLMSG_GOODSIZE);
+ 
+-	if (alloc_min_size < nlk->max_recvmsg_len) {
+-		alloc_size = nlk->max_recvmsg_len;
++	max_recvmsg_len = READ_ONCE(nlk->max_recvmsg_len);
++	if (alloc_min_size < max_recvmsg_len) {
++		alloc_size = max_recvmsg_len;
+ 		skb = alloc_skb(alloc_size,
+ 				(GFP_KERNEL & ~__GFP_DIRECT_RECLAIM) |
+ 				__GFP_NOWARN | __GFP_NORETRY);
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index 1c403e8a8044c..4f5d44037081b 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -210,7 +210,7 @@ static struct ima_rule_entry *arch_policy_entry __ro_after_init;
+ static LIST_HEAD(ima_default_rules);
+ static LIST_HEAD(ima_policy_rules);
+ static LIST_HEAD(ima_temp_rules);
+-static struct list_head *ima_rules = &ima_default_rules;
++static struct list_head __rcu *ima_rules = (struct list_head __rcu *)(&ima_default_rules);
+ 
+ static int ima_policy __initdata;
+ 
+@@ -648,12 +648,14 @@ int ima_match_policy(struct inode *inode, const struct cred *cred, u32 secid,
+ {
+ 	struct ima_rule_entry *entry;
+ 	int action = 0, actmask = flags | (flags << 1);
++	struct list_head *ima_rules_tmp;
+ 
+ 	if (template_desc)
+ 		*template_desc = ima_template_desc_current();
+ 
+ 	rcu_read_lock();
+-	list_for_each_entry_rcu(entry, ima_rules, list) {
++	ima_rules_tmp = rcu_dereference(ima_rules);
++	list_for_each_entry_rcu(entry, ima_rules_tmp, list) {
+ 
+ 		if (!(entry->action & actmask))
+ 			continue;
+@@ -701,11 +703,15 @@ int ima_match_policy(struct inode *inode, const struct cred *cred, u32 secid,
+ void ima_update_policy_flag(void)
+ {
+ 	struct ima_rule_entry *entry;
++	struct list_head *ima_rules_tmp;
+ 
+-	list_for_each_entry(entry, ima_rules, list) {
++	rcu_read_lock();
++	ima_rules_tmp = rcu_dereference(ima_rules);
++	list_for_each_entry_rcu(entry, ima_rules_tmp, list) {
+ 		if (entry->action & IMA_DO_MASK)
+ 			ima_policy_flag |= entry->action;
+ 	}
++	rcu_read_unlock();
+ 
+ 	ima_appraise |= (build_ima_appraise | temp_ima_appraise);
+ 	if (!ima_appraise)
+@@ -898,10 +904,10 @@ void ima_update_policy(void)
+ 
+ 	list_splice_tail_init_rcu(&ima_temp_rules, policy, synchronize_rcu);
+ 
+-	if (ima_rules != policy) {
++	if (ima_rules != (struct list_head __rcu *)policy) {
+ 		ima_policy_flag = 0;
+-		ima_rules = policy;
+ 
++		rcu_assign_pointer(ima_rules, policy);
+ 		/*
+ 		 * IMA architecture specific policy rules are specified
+ 		 * as strings and converted to an array of ima_entry_rules
+@@ -989,7 +995,7 @@ static int ima_lsm_rule_init(struct ima_rule_entry *entry,
+ 		pr_warn("rule for LSM \'%s\' is undefined\n",
+ 			entry->lsm[lsm_rule].args_p);
+ 
+-		if (ima_rules == &ima_default_rules) {
++		if (ima_rules == (struct list_head __rcu *)(&ima_default_rules)) {
+ 			kfree(entry->lsm[lsm_rule].args_p);
+ 			entry->lsm[lsm_rule].args_p = NULL;
+ 			result = -EINVAL;
+@@ -1598,9 +1604,11 @@ void *ima_policy_start(struct seq_file *m, loff_t *pos)
+ {
+ 	loff_t l = *pos;
+ 	struct ima_rule_entry *entry;
++	struct list_head *ima_rules_tmp;
+ 
+ 	rcu_read_lock();
+-	list_for_each_entry_rcu(entry, ima_rules, list) {
++	ima_rules_tmp = rcu_dereference(ima_rules);
++	list_for_each_entry_rcu(entry, ima_rules_tmp, list) {
+ 		if (!l--) {
+ 			rcu_read_unlock();
+ 			return entry;
+@@ -1619,7 +1627,8 @@ void *ima_policy_next(struct seq_file *m, void *v, loff_t *pos)
+ 	rcu_read_unlock();
+ 	(*pos)++;
+ 
+-	return (&entry->list == ima_rules) ? NULL : entry;
++	return (&entry->list == &ima_default_rules ||
++		&entry->list == &ima_policy_rules) ? NULL : entry;
+ }
+ 
+ void ima_policy_stop(struct seq_file *m, void *v)
+@@ -1823,6 +1832,7 @@ bool ima_appraise_signature(enum kernel_read_file_id id)
+ 	struct ima_rule_entry *entry;
+ 	bool found = false;
+ 	enum ima_hooks func;
++	struct list_head *ima_rules_tmp;
+ 
+ 	if (id >= READING_MAX_ID)
+ 		return false;
+@@ -1834,7 +1844,8 @@ bool ima_appraise_signature(enum kernel_read_file_id id)
+ 	func = read_idmap[id] ?: FILE_CHECK;
+ 
+ 	rcu_read_lock();
+-	list_for_each_entry_rcu(entry, ima_rules, list) {
++	ima_rules_tmp = rcu_dereference(ima_rules);
++	list_for_each_entry_rcu(entry, ima_rules_tmp, list) {
+ 		if (entry->action != APPRAISE)
+ 			continue;
+ 
+diff --git a/tools/testing/selftests/vm/map_hugetlb.c b/tools/testing/selftests/vm/map_hugetlb.c
+index c65c55b7a789f..312889edb84ab 100644
+--- a/tools/testing/selftests/vm/map_hugetlb.c
++++ b/tools/testing/selftests/vm/map_hugetlb.c
+@@ -15,7 +15,6 @@
+ #include <unistd.h>
+ #include <sys/mman.h>
+ #include <fcntl.h>
+-#include "vm_util.h"
+ 
+ #define LENGTH (256UL*1024*1024)
+ #define PROTECTION (PROT_READ | PROT_WRITE)
+@@ -71,16 +70,10 @@ int main(int argc, char **argv)
+ {
+ 	void *addr;
+ 	int ret;
+-	size_t hugepage_size;
+ 	size_t length = LENGTH;
+ 	int flags = FLAGS;
+ 	int shift = 0;
+ 
+-	hugepage_size = default_huge_page_size();
+-	/* munmap with fail if the length is not page aligned */
+-	if (hugepage_size > length)
+-		length = hugepage_size;
+-
+ 	if (argc > 1)
+ 		length = atol(argv[1]) << 20;
+ 	if (argc > 2) {


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-06-16 14:35 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-06-16 14:35 UTC (permalink / raw
  To: gentoo-commits

commit:     cee0247e609508235301fc46d1b72a6933738884
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jun 16 14:35:26 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jun 16 14:35:26 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cee0247e

Linux 5.10.219

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1218_linux-5.10.219.patch | 11359 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11363 insertions(+)

diff --git a/0000_README b/0000_README
index c4548db6..893a81a9 100644
--- a/0000_README
+++ b/0000_README
@@ -915,6 +915,10 @@ Patch:  1217_linux-5.10.218.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.218
 
+Patch:  1218_linux-5.10.219.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.219
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1218_linux-5.10.219.patch b/1218_linux-5.10.219.patch
new file mode 100644
index 00000000..2dfcca68
--- /dev/null
+++ b/1218_linux-5.10.219.patch
@@ -0,0 +1,11359 @@
+diff --git a/Documentation/devicetree/bindings/sound/rt5645.txt b/Documentation/devicetree/bindings/sound/rt5645.txt
+index 41a62fd2ae1ff..c1fa379f5f3ea 100644
+--- a/Documentation/devicetree/bindings/sound/rt5645.txt
++++ b/Documentation/devicetree/bindings/sound/rt5645.txt
+@@ -20,6 +20,11 @@ Optional properties:
+   a GPIO spec for the external headphone detect pin. If jd-mode = 0,
+   we will get the JD status by getting the value of hp-detect-gpios.
+ 
++- cbj-sleeve-gpios:
++  a GPIO spec to control the external combo jack circuit to tie the sleeve/ring2
++  contacts to the ground or floating. It could avoid some electric noise from the
++  active speaker jacks.
++
+ - realtek,in2-differential
+   Boolean. Indicate MIC2 input are differential, rather than single-ended.
+ 
+@@ -68,6 +73,7 @@ codec: rt5650@1a {
+ 	compatible = "realtek,rt5650";
+ 	reg = <0x1a>;
+ 	hp-detect-gpios = <&gpio 19 0>;
++	cbj-sleeve-gpios = <&gpio 20 0>;
+ 	interrupt-parent = <&gpio>;
+ 	interrupts = <7 IRQ_TYPE_EDGE_FALLING>;
+ 	realtek,dmic-en = "true";
+diff --git a/Documentation/driver-api/fpga/fpga-bridge.rst b/Documentation/driver-api/fpga/fpga-bridge.rst
+index 198aadafd3e7d..8d650b4e2ce6d 100644
+--- a/Documentation/driver-api/fpga/fpga-bridge.rst
++++ b/Documentation/driver-api/fpga/fpga-bridge.rst
+@@ -4,11 +4,11 @@ FPGA Bridge
+ API to implement a new FPGA bridge
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ 
+-* struct fpga_bridge — The FPGA Bridge structure
+-* struct fpga_bridge_ops — Low level Bridge driver ops
+-* devm_fpga_bridge_create() — Allocate and init a bridge struct
+-* fpga_bridge_register() — Register a bridge
+-* fpga_bridge_unregister() — Unregister a bridge
++* struct fpga_bridge - The FPGA Bridge structure
++* struct fpga_bridge_ops - Low level Bridge driver ops
++* devm_fpga_bridge_create() - Allocate and init a bridge struct
++* fpga_bridge_register() - Register a bridge
++* fpga_bridge_unregister() - Unregister a bridge
+ 
+ .. kernel-doc:: include/linux/fpga/fpga-bridge.h
+    :functions: fpga_bridge
+diff --git a/Documentation/driver-api/fpga/fpga-mgr.rst b/Documentation/driver-api/fpga/fpga-mgr.rst
+index 917ee22db429d..4d926b452cb35 100644
+--- a/Documentation/driver-api/fpga/fpga-mgr.rst
++++ b/Documentation/driver-api/fpga/fpga-mgr.rst
+@@ -101,12 +101,12 @@ in state.
+ API for implementing a new FPGA Manager driver
+ ----------------------------------------------
+ 
+-* ``fpga_mgr_states`` —  Values for :c:expr:`fpga_manager->state`.
+-* struct fpga_manager —  the FPGA manager struct
+-* struct fpga_manager_ops —  Low level FPGA manager driver ops
+-* devm_fpga_mgr_create() —  Allocate and init a manager struct
+-* fpga_mgr_register() —  Register an FPGA manager
+-* fpga_mgr_unregister() —  Unregister an FPGA manager
++* ``fpga_mgr_states`` -  Values for :c:expr:`fpga_manager->state`.
++* struct fpga_manager -  the FPGA manager struct
++* struct fpga_manager_ops -  Low level FPGA manager driver ops
++* devm_fpga_mgr_create() -  Allocate and init a manager struct
++* fpga_mgr_register() -  Register an FPGA manager
++* fpga_mgr_unregister() -  Unregister an FPGA manager
+ 
+ .. kernel-doc:: include/linux/fpga/fpga-mgr.h
+    :functions: fpga_mgr_states
+diff --git a/Documentation/driver-api/fpga/fpga-programming.rst b/Documentation/driver-api/fpga/fpga-programming.rst
+index 002392dab04f7..fb4da4240e961 100644
+--- a/Documentation/driver-api/fpga/fpga-programming.rst
++++ b/Documentation/driver-api/fpga/fpga-programming.rst
+@@ -84,10 +84,10 @@ will generate that list.  Here's some sample code of what to do next::
+ API for programming an FPGA
+ ---------------------------
+ 
+-* fpga_region_program_fpga() —  Program an FPGA
+-* fpga_image_info() —  Specifies what FPGA image to program
+-* fpga_image_info_alloc() —  Allocate an FPGA image info struct
+-* fpga_image_info_free() —  Free an FPGA image info struct
++* fpga_region_program_fpga() -  Program an FPGA
++* fpga_image_info() -  Specifies what FPGA image to program
++* fpga_image_info_alloc() -  Allocate an FPGA image info struct
++* fpga_image_info_free() -  Free an FPGA image info struct
+ 
+ .. kernel-doc:: drivers/fpga/fpga-region.c
+    :functions: fpga_region_program_fpga
+diff --git a/Documentation/driver-api/fpga/fpga-region.rst b/Documentation/driver-api/fpga/fpga-region.rst
+index 363a8171ab0a5..2d03b5fb76575 100644
+--- a/Documentation/driver-api/fpga/fpga-region.rst
++++ b/Documentation/driver-api/fpga/fpga-region.rst
+@@ -45,19 +45,25 @@ An example of usage can be seen in the probe function of [#f2]_.
+ API to add a new FPGA region
+ ----------------------------
+ 
+-* struct fpga_region — The FPGA region struct
+-* devm_fpga_region_create() — Allocate and init a region struct
+-* fpga_region_register() —  Register an FPGA region
+-* fpga_region_unregister() —  Unregister an FPGA region
++* struct fpga_region - The FPGA region struct
++* struct fpga_region_info - Parameter structure for __fpga_region_register_full()
++* __fpga_region_register_full() -  Create and register an FPGA region using the
++  fpga_region_info structure to provide the full flexibility of options
++* __fpga_region_register() -  Create and register an FPGA region using standard
++  arguments
++* fpga_region_unregister() -  Unregister an FPGA region
++
++Helper macros ``fpga_region_register()`` and ``fpga_region_register_full()``
++automatically set the module that registers the FPGA region as the owner.
+ 
+ The FPGA region's probe function will need to get a reference to the FPGA
+ Manager it will be using to do the programming.  This usually would happen
+ during the region's probe function.
+ 
+-* fpga_mgr_get() — Get a reference to an FPGA manager, raise ref count
+-* of_fpga_mgr_get() —  Get a reference to an FPGA manager, raise ref count,
++* fpga_mgr_get() - Get a reference to an FPGA manager, raise ref count
++* of_fpga_mgr_get() -  Get a reference to an FPGA manager, raise ref count,
+   given a device node.
+-* fpga_mgr_put() — Put an FPGA manager
++* fpga_mgr_put() - Put an FPGA manager
+ 
+ The FPGA region will need to specify which bridges to control while programming
+ the FPGA.  The region driver can build a list of bridges during probe time
+@@ -66,20 +72,23 @@ the list of bridges to program just before programming
+ (:c:expr:`fpga_region->get_bridges`).  The FPGA bridge framework supplies the
+ following APIs to handle building or tearing down that list.
+ 
+-* fpga_bridge_get_to_list() — Get a ref of an FPGA bridge, add it to a
++* fpga_bridge_get_to_list() - Get a ref of an FPGA bridge, add it to a
+   list
+-* of_fpga_bridge_get_to_list() — Get a ref of an FPGA bridge, add it to a
++* of_fpga_bridge_get_to_list() - Get a ref of an FPGA bridge, add it to a
+   list, given a device node
+-* fpga_bridges_put() — Given a list of bridges, put them
++* fpga_bridges_put() - Given a list of bridges, put them
+ 
+ .. kernel-doc:: include/linux/fpga/fpga-region.h
+    :functions: fpga_region
+ 
++.. kernel-doc:: include/linux/fpga/fpga-region.h
++   :functions: fpga_region_info
++
+ .. kernel-doc:: drivers/fpga/fpga-region.c
+-   :functions: devm_fpga_region_create
++   :functions: __fpga_region_register_full
+ 
+ .. kernel-doc:: drivers/fpga/fpga-region.c
+-   :functions: fpga_region_register
++   :functions: __fpga_region_register
+ 
+ .. kernel-doc:: drivers/fpga/fpga-region.c
+    :functions: fpga_region_unregister
+diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
+index 8c0fbdd8ce6fb..de2bacc418fee 100644
+--- a/Documentation/filesystems/f2fs.rst
++++ b/Documentation/filesystems/f2fs.rst
+@@ -260,6 +260,14 @@ compress_extension=%s	 Support adding specified extension, so that f2fs can enab
+ 			 For other files, we can still enable compression via ioctl.
+ 			 Note that, there is one reserved special extension '*', it
+ 			 can be set to enable compression for all files.
++compress_chksum		 Support verifying chksum of raw data in compressed cluster.
++compress_mode=%s	 Control file compression mode. This supports "fs" and "user"
++			 modes. In "fs" mode (default), f2fs does automatic compression
++			 on the compression enabled files. In "user" mode, f2fs disables
++			 the automaic compression and gives the user discretion of
++			 choosing the target file and the timing. The user can do manual
++			 compression/decompression on the compression enabled files using
++			 ioctls.
+ inlinecrypt		 When possible, encrypt/decrypt the contents of encrypted
+ 			 files using the blk-crypto framework rather than
+ 			 filesystem-layer encryption. This allows the use of
+@@ -810,6 +818,34 @@ Compress metadata layout::
+ 	| data length | data chksum | reserved |      compressed data       |
+ 	+-------------+-------------+----------+----------------------------+
+ 
++Compression mode
++--------------------------
++
++f2fs supports "fs" and "user" compression modes with "compression_mode" mount option.
++With this option, f2fs provides a choice to select the way how to compress the
++compression enabled files (refer to "Compression implementation" section for how to
++enable compression on a regular inode).
++
++1) compress_mode=fs
++This is the default option. f2fs does automatic compression in the writeback of the
++compression enabled files.
++
++2) compress_mode=user
++This disables the automaic compression and gives the user discretion of choosing the
++target file and the timing. The user can do manual compression/decompression on the
++compression enabled files using F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
++ioctls like the below.
++
++To decompress a file,
++
++fd = open(filename, O_WRONLY, 0);
++ret = ioctl(fd, F2FS_IOC_DECOMPRESS_FILE);
++
++To compress a file,
++
++fd = open(filename, O_WRONLY, 0);
++ret = ioctl(fd, F2FS_IOC_COMPRESS_FILE);
++
+ NVMe Zoned Namespace devices
+ ----------------------------
+ 
+diff --git a/Makefile b/Makefile
+index 3bc56bf43c822..3b36b77589f2b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 218
++SUBLEVEL = 219
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/boot/dts/hisilicon/hi3798cv200.dtsi b/arch/arm64/boot/dts/hisilicon/hi3798cv200.dtsi
+index 12bc1d3ed4243..adc0a096ab4c4 100644
+--- a/arch/arm64/boot/dts/hisilicon/hi3798cv200.dtsi
++++ b/arch/arm64/boot/dts/hisilicon/hi3798cv200.dtsi
+@@ -58,7 +58,7 @@ cpu@3 {
+ 	gic: interrupt-controller@f1001000 {
+ 		compatible = "arm,gic-400";
+ 		reg = <0x0 0xf1001000 0x0 0x1000>,  /* GICD */
+-		      <0x0 0xf1002000 0x0 0x100>;   /* GICC */
++		      <0x0 0xf1002000 0x0 0x2000>;  /* GICC */
+ 		#address-cells = <0>;
+ 		#interrupt-cells = <3>;
+ 		interrupt-controller;
+diff --git a/arch/arm64/boot/dts/nvidia/tegra132-norrin.dts b/arch/arm64/boot/dts/nvidia/tegra132-norrin.dts
+index 6e5f8465669e3..a5ff8cfedf344 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra132-norrin.dts
++++ b/arch/arm64/boot/dts/nvidia/tegra132-norrin.dts
+@@ -9,8 +9,8 @@ / {
+ 	compatible = "nvidia,norrin", "nvidia,tegra132", "nvidia,tegra124";
+ 
+ 	aliases {
+-		rtc0 = "/i2c@7000d000/as3722@40";
+-		rtc1 = "/rtc@7000e000";
++		rtc0 = &as3722;
++		rtc1 = &tegra_rtc;
+ 		serial0 = &uarta;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/nvidia/tegra132.dtsi b/arch/arm64/boot/dts/nvidia/tegra132.dtsi
+index b14e9f3bfdbdc..2533d72fb2e56 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra132.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra132.dtsi
+@@ -573,7 +573,7 @@ spi@7000de00 {
+ 		status = "disabled";
+ 	};
+ 
+-	rtc@7000e000 {
++	tegra_rtc: rtc@7000e000 {
+ 		compatible = "nvidia,tegra124-rtc", "nvidia,tegra20-rtc";
+ 		reg = <0x0 0x7000e000 0x0 0x100>;
+ 		interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi b/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi
+index a80c578484ba3..b6d70d0073e7f 100644
+--- a/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi
+@@ -60,7 +60,7 @@ bluetooth {
+ 		vddrf-supply = <&vreg_l1_1p3>;
+ 		vddch0-supply = <&vdd_ch0_3p3>;
+ 
+-		local-bd-address = [ 02 00 00 00 5a ad ];
++		local-bd-address = [ 00 00 00 00 00 00 ];
+ 
+ 		max-speed = <3200000>;
+ 	};
+diff --git a/arch/arm64/include/asm/asm-bug.h b/arch/arm64/include/asm/asm-bug.h
+index 03f52f84a4f3f..bc2dcc8a00009 100644
+--- a/arch/arm64/include/asm/asm-bug.h
++++ b/arch/arm64/include/asm/asm-bug.h
+@@ -28,6 +28,7 @@
+ 	14470:	.long 14471f - 14470b;			\
+ _BUGVERBOSE_LOCATION(__FILE__, __LINE__)		\
+ 		.short flags; 				\
++		.align 2;				\
+ 		.popsection;				\
+ 	14471:
+ #else
+diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
+index dfb5218137ca9..3aad58626429b 100644
+--- a/arch/arm64/kvm/guest.c
++++ b/arch/arm64/kvm/guest.c
+@@ -234,6 +234,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 		case PSR_AA32_MODE_SVC:
+ 		case PSR_AA32_MODE_ABT:
+ 		case PSR_AA32_MODE_UND:
++		case PSR_AA32_MODE_SYS:
+ 			if (!vcpu_el1_is_32bit(vcpu))
+ 				return -EINVAL;
+ 			break;
+diff --git a/arch/m68k/kernel/entry.S b/arch/m68k/kernel/entry.S
+index 546bab6bfc273..d0ca4df435285 100644
+--- a/arch/m68k/kernel/entry.S
++++ b/arch/m68k/kernel/entry.S
+@@ -432,7 +432,9 @@ resume:
+ 	movec	%a0,%dfc
+ 
+ 	/* restore status register */
+-	movew	%a1@(TASK_THREAD+THREAD_SR),%sr
++	movew	%a1@(TASK_THREAD+THREAD_SR),%d0
++	oriw	#0x0700,%d0
++	movew	%d0,%sr
+ 
+ 	rts
+ 
+diff --git a/arch/m68k/mac/misc.c b/arch/m68k/mac/misc.c
+index 90f4e9ca1276b..d3b34dd7590de 100644
+--- a/arch/m68k/mac/misc.c
++++ b/arch/m68k/mac/misc.c
+@@ -452,30 +452,18 @@ void mac_poweroff(void)
+ 
+ void mac_reset(void)
+ {
+-	if (macintosh_config->adb_type == MAC_ADB_II &&
+-	    macintosh_config->ident != MAC_MODEL_SE30) {
+-		/* need ROMBASE in booter */
+-		/* indeed, plus need to MAP THE ROM !! */
+-
+-		if (mac_bi_data.rombase == 0)
+-			mac_bi_data.rombase = 0x40800000;
+-
+-		/* works on some */
+-		rom_reset = (void *) (mac_bi_data.rombase + 0xa);
+-
+-		local_irq_disable();
+-		rom_reset();
+ #ifdef CONFIG_ADB_CUDA
+-	} else if (macintosh_config->adb_type == MAC_ADB_EGRET ||
+-	           macintosh_config->adb_type == MAC_ADB_CUDA) {
++	if (macintosh_config->adb_type == MAC_ADB_EGRET ||
++	    macintosh_config->adb_type == MAC_ADB_CUDA) {
+ 		cuda_restart();
++	} else
+ #endif
+ #ifdef CONFIG_ADB_PMU
+-	} else if (macintosh_config->adb_type == MAC_ADB_PB2) {
++	if (macintosh_config->adb_type == MAC_ADB_PB2) {
+ 		pmu_restart();
++	} else
+ #endif
+-	} else if (CPU_IS_030) {
+-
++	if (CPU_IS_030) {
+ 		/* 030-specific reset routine.  The idea is general, but the
+ 		 * specific registers to reset are '030-specific.  Until I
+ 		 * have a non-030 machine, I can't test anything else.
+@@ -523,6 +511,18 @@ void mac_reset(void)
+ 		    "jmp %/a0@\n\t" /* jump to the reset vector */
+ 		    ".chip 68k"
+ 		    : : "r" (offset), "a" (rombase) : "a0");
++	} else {
++		/* need ROMBASE in booter */
++		/* indeed, plus need to MAP THE ROM !! */
++
++		if (mac_bi_data.rombase == 0)
++			mac_bi_data.rombase = 0x40800000;
++
++		/* works on some */
++		rom_reset = (void *)(mac_bi_data.rombase + 0xa);
++
++		local_irq_disable();
++		rom_reset();
+ 	}
+ 
+ 	/* should never get here */
+diff --git a/arch/microblaze/kernel/Makefile b/arch/microblaze/kernel/Makefile
+index dd71637437f4f..8b9d52b194cb4 100644
+--- a/arch/microblaze/kernel/Makefile
++++ b/arch/microblaze/kernel/Makefile
+@@ -7,7 +7,6 @@ ifdef CONFIG_FUNCTION_TRACER
+ # Do not trace early boot code and low level code
+ CFLAGS_REMOVE_timer.o = -pg
+ CFLAGS_REMOVE_intc.o = -pg
+-CFLAGS_REMOVE_early_printk.o = -pg
+ CFLAGS_REMOVE_ftrace.o = -pg
+ CFLAGS_REMOVE_process.o = -pg
+ endif
+diff --git a/arch/microblaze/kernel/cpu/cpuinfo-static.c b/arch/microblaze/kernel/cpu/cpuinfo-static.c
+index 85dbda4a08a81..03da36dc6d9c9 100644
+--- a/arch/microblaze/kernel/cpu/cpuinfo-static.c
++++ b/arch/microblaze/kernel/cpu/cpuinfo-static.c
+@@ -18,7 +18,7 @@ static const char family_string[] = CONFIG_XILINX_MICROBLAZE0_FAMILY;
+ static const char cpu_ver_string[] = CONFIG_XILINX_MICROBLAZE0_HW_VER;
+ 
+ #define err_printk(x) \
+-	early_printk("ERROR: Microblaze " x "-different for kernel and DTS\n");
++	pr_err("ERROR: Microblaze " x "-different for kernel and DTS\n");
+ 
+ void __init set_cpuinfo_static(struct cpuinfo *ci, struct device_node *cpu)
+ {
+diff --git a/arch/parisc/kernel/parisc_ksyms.c b/arch/parisc/kernel/parisc_ksyms.c
+index e8a6a751dfd8e..07418f11a964f 100644
+--- a/arch/parisc/kernel/parisc_ksyms.c
++++ b/arch/parisc/kernel/parisc_ksyms.c
+@@ -21,6 +21,7 @@ EXPORT_SYMBOL(memset);
+ #include <linux/atomic.h>
+ EXPORT_SYMBOL(__xchg8);
+ EXPORT_SYMBOL(__xchg32);
++EXPORT_SYMBOL(__cmpxchg_u8);
+ EXPORT_SYMBOL(__cmpxchg_u32);
+ EXPORT_SYMBOL(__cmpxchg_u64);
+ #ifdef CONFIG_SMP
+diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
+index 00c8cda1c9c31..1a60188f74ad8 100644
+--- a/arch/powerpc/include/asm/hvcall.h
++++ b/arch/powerpc/include/asm/hvcall.h
+@@ -494,7 +494,7 @@ struct hvcall_mpp_data {
+ 	unsigned long backing_mem;
+ };
+ 
+-int h_get_mpp(struct hvcall_mpp_data *);
++long h_get_mpp(struct hvcall_mpp_data *mpp_data);
+ 
+ struct hvcall_mpp_x_data {
+ 	unsigned long coalesced_bytes;
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index 4a3425fb19398..aed67f1a1bc56 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -1883,10 +1883,10 @@ void __trace_hcall_exit(long opcode, long retval, unsigned long *retbuf)
+  * h_get_mpp
+  * H_GET_MPP hcall returns info in 7 parms
+  */
+-int h_get_mpp(struct hvcall_mpp_data *mpp_data)
++long h_get_mpp(struct hvcall_mpp_data *mpp_data)
+ {
+-	int rc;
+-	unsigned long retbuf[PLPAR_HCALL9_BUFSIZE];
++	unsigned long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
++	long rc;
+ 
+ 	rc = plpar_hcall9(H_GET_MPP, retbuf);
+ 
+diff --git a/arch/powerpc/platforms/pseries/lparcfg.c b/arch/powerpc/platforms/pseries/lparcfg.c
+index a7d4e25ae82a1..e0c5fc9ab242e 100644
+--- a/arch/powerpc/platforms/pseries/lparcfg.c
++++ b/arch/powerpc/platforms/pseries/lparcfg.c
+@@ -112,8 +112,8 @@ struct hvcall_ppp_data {
+  */
+ static unsigned int h_get_ppp(struct hvcall_ppp_data *ppp_data)
+ {
+-	unsigned long rc;
+-	unsigned long retbuf[PLPAR_HCALL9_BUFSIZE];
++	unsigned long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
++	long rc;
+ 
+ 	rc = plpar_hcall9(H_GET_PPP, retbuf);
+ 
+@@ -192,7 +192,7 @@ static void parse_ppp_data(struct seq_file *m)
+ 	struct hvcall_ppp_data ppp_data;
+ 	struct device_node *root;
+ 	const __be32 *perf_level;
+-	int rc;
++	long rc;
+ 
+ 	rc = h_get_ppp(&ppp_data);
+ 	if (rc)
+diff --git a/arch/powerpc/sysdev/fsl_msi.c b/arch/powerpc/sysdev/fsl_msi.c
+index d276c5e964458..77aa7a13ab1db 100644
+--- a/arch/powerpc/sysdev/fsl_msi.c
++++ b/arch/powerpc/sysdev/fsl_msi.c
+@@ -573,10 +573,12 @@ static const struct fsl_msi_feature ipic_msi_feature = {
+ 	.msiir_offset = 0x38,
+ };
+ 
++#ifdef CONFIG_EPAPR_PARAVIRT
+ static const struct fsl_msi_feature vmpic_msi_feature = {
+ 	.fsl_pic_ip = FSL_PIC_IP_VMPIC,
+ 	.msiir_offset = 0,
+ };
++#endif
+ 
+ static const struct of_device_id fsl_of_msi_ids[] = {
+ 	{
+diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
+index c469e8848d659..939ceec83048f 100644
+--- a/arch/s390/kernel/ipl.c
++++ b/arch/s390/kernel/ipl.c
+@@ -832,8 +832,8 @@ static ssize_t reipl_nvme_scpdata_write(struct file *filp, struct kobject *kobj,
+ 		scpdata_len += padding;
+ 	}
+ 
+-	reipl_block_nvme->hdr.len = IPL_BP_FCP_LEN + scpdata_len;
+-	reipl_block_nvme->nvme.len = IPL_BP0_FCP_LEN + scpdata_len;
++	reipl_block_nvme->hdr.len = IPL_BP_NVME_LEN + scpdata_len;
++	reipl_block_nvme->nvme.len = IPL_BP0_NVME_LEN + scpdata_len;
+ 	reipl_block_nvme->nvme.scp_data_len = scpdata_len;
+ 
+ 	return count;
+@@ -1602,9 +1602,9 @@ static int __init dump_nvme_init(void)
+ 	}
+ 	dump_block_nvme->hdr.len = IPL_BP_NVME_LEN;
+ 	dump_block_nvme->hdr.version = IPL_PARM_BLOCK_VERSION;
+-	dump_block_nvme->fcp.len = IPL_BP0_NVME_LEN;
+-	dump_block_nvme->fcp.pbt = IPL_PBT_NVME;
+-	dump_block_nvme->fcp.opt = IPL_PB0_NVME_OPT_DUMP;
++	dump_block_nvme->nvme.len = IPL_BP0_NVME_LEN;
++	dump_block_nvme->nvme.pbt = IPL_PBT_NVME;
++	dump_block_nvme->nvme.opt = IPL_PB0_NVME_OPT_DUMP;
+ 	dump_capabilities |= DUMP_TYPE_NVME;
+ 	return 0;
+ }
+diff --git a/arch/sh/kernel/kprobes.c b/arch/sh/kernel/kprobes.c
+index 756100b01e846..8498013732198 100644
+--- a/arch/sh/kernel/kprobes.c
++++ b/arch/sh/kernel/kprobes.c
+@@ -44,17 +44,12 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
+ 	if (OPCODE_RTE(opcode))
+ 		return -EFAULT;	/* Bad breakpoint */
+ 
++	memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+ 	p->opcode = opcode;
+ 
+ 	return 0;
+ }
+ 
+-void __kprobes arch_copy_kprobe(struct kprobe *p)
+-{
+-	memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+-	p->opcode = *p->addr;
+-}
+-
+ void __kprobes arch_arm_kprobe(struct kprobe *p)
+ {
+ 	*p->addr = BREAKPOINT_INSTRUCTION;
+diff --git a/arch/sh/lib/checksum.S b/arch/sh/lib/checksum.S
+index 3e07074e00981..06fed5a21e8ba 100644
+--- a/arch/sh/lib/checksum.S
++++ b/arch/sh/lib/checksum.S
+@@ -33,7 +33,8 @@
+  */
+ 
+ /*	
+- * asmlinkage __wsum csum_partial(const void *buf, int len, __wsum sum);
++ * unsigned int csum_partial(const unsigned char *buf, int len,
++ *                           unsigned int sum);
+  */
+ 
+ .text
+@@ -45,31 +46,11 @@ ENTRY(csum_partial)
+ 	   * Fortunately, it is easy to convert 2-byte alignment to 4-byte
+ 	   * alignment for the unrolled loop.
+ 	   */
++	mov	r5, r1
+ 	mov	r4, r0
+-	tst	#3, r0		! Check alignment.
+-	bt/s	2f		! Jump if alignment is ok.
+-	 mov	r4, r7		! Keep a copy to check for alignment
++	tst	#2, r0		! Check alignment.
++	bt	2f		! Jump if alignment is ok.
+ 	!
+-	tst	#1, r0		! Check alignment.
+-	bt	21f		! Jump if alignment is boundary of 2bytes.
+-
+-	! buf is odd
+-	tst	r5, r5
+-	add	#-1, r5
+-	bt	9f
+-	mov.b	@r4+, r0
+-	extu.b	r0, r0
+-	addc	r0, r6		! t=0 from previous tst
+-	mov	r6, r0
+-	shll8	r6
+-	shlr16	r0
+-	shlr8	r0
+-	or	r0, r6
+-	mov	r4, r0
+-	tst	#2, r0
+-	bt	2f
+-21:
+-	! buf is 2 byte aligned (len could be 0)
+ 	add	#-2, r5		! Alignment uses up two bytes.
+ 	cmp/pz	r5		!
+ 	bt/s	1f		! Jump if we had at least two bytes.
+@@ -77,17 +58,16 @@ ENTRY(csum_partial)
+ 	bra	6f
+ 	 add	#2, r5		! r5 was < 2.  Deal with it.
+ 1:
++	mov	r5, r1		! Save new len for later use.
+ 	mov.w	@r4+, r0
+ 	extu.w	r0, r0
+ 	addc	r0, r6
+ 	bf	2f
+ 	add	#1, r6
+ 2:
+-	! buf is 4 byte aligned (len could be 0)
+-	mov	r5, r1
+ 	mov	#-5, r0
+-	shld	r0, r1
+-	tst	r1, r1
++	shld	r0, r5
++	tst	r5, r5
+ 	bt/s	4f		! if it's =0, go to 4f
+ 	 clrt
+ 	.align	2
+@@ -109,31 +89,30 @@ ENTRY(csum_partial)
+ 	addc	r0, r6
+ 	addc	r2, r6
+ 	movt	r0
+-	dt	r1
++	dt	r5
+ 	bf/s	3b
+ 	 cmp/eq	#1, r0
+-	! here, we know r1==0
+-	addc	r1, r6			! add carry to r6
++	! here, we know r5==0
++	addc	r5, r6			! add carry to r6
+ 4:
+-	mov	r5, r0
++	mov	r1, r0
+ 	and	#0x1c, r0
+ 	tst	r0, r0
+-	bt	6f
+-	! 4 bytes or more remaining
+-	mov	r0, r1
+-	shlr2	r1
++	bt/s	6f
++	 mov	r0, r5
++	shlr2	r5
+ 	mov	#0, r2
+ 5:
+ 	addc	r2, r6
+ 	mov.l	@r4+, r2
+ 	movt	r0
+-	dt	r1
++	dt	r5
+ 	bf/s	5b
+ 	 cmp/eq	#1, r0
+ 	addc	r2, r6
+-	addc	r1, r6		! r1==0 here, so it means add carry-bit
++	addc	r5, r6		! r5==0 here, so it means add carry-bit
+ 6:
+-	! 3 bytes or less remaining
++	mov	r1, r5
+ 	mov	#3, r0
+ 	and	r0, r5
+ 	tst	r5, r5
+@@ -159,16 +138,6 @@ ENTRY(csum_partial)
+ 	mov	#0, r0
+ 	addc	r0, r6
+ 9:
+-	! Check if the buffer was misaligned, if so realign sum
+-	mov	r7, r0
+-	tst	#1, r0
+-	bt	10f
+-	mov	r6, r0
+-	shll8	r6
+-	shlr16	r0
+-	shlr8	r0
+-	or	r0, r6
+-10:
+ 	rts
+ 	 mov	r6, r0
+ 
+diff --git a/arch/sparc/include/asm/smp_64.h b/arch/sparc/include/asm/smp_64.h
+index e75783b6abc42..16ab904616a0c 100644
+--- a/arch/sparc/include/asm/smp_64.h
++++ b/arch/sparc/include/asm/smp_64.h
+@@ -47,7 +47,6 @@ void arch_send_call_function_ipi_mask(const struct cpumask *mask);
+ int hard_smp_processor_id(void);
+ #define raw_smp_processor_id() (current_thread_info()->cpu)
+ 
+-void smp_fill_in_cpu_possible_map(void);
+ void smp_fill_in_sib_core_maps(void);
+ void cpu_play_dead(void);
+ 
+@@ -77,7 +76,6 @@ void __cpu_die(unsigned int cpu);
+ #define smp_fill_in_sib_core_maps() do { } while (0)
+ #define smp_fetch_global_regs() do { } while (0)
+ #define smp_fetch_global_pmu() do { } while (0)
+-#define smp_fill_in_cpu_possible_map() do { } while (0)
+ #define smp_init_cpu_poke() do { } while (0)
+ #define scheduler_poke() do { } while (0)
+ 
+diff --git a/arch/sparc/include/uapi/asm/termbits.h b/arch/sparc/include/uapi/asm/termbits.h
+index ce5ad5d0f1057..0614e179bcccc 100644
+--- a/arch/sparc/include/uapi/asm/termbits.h
++++ b/arch/sparc/include/uapi/asm/termbits.h
+@@ -13,16 +13,6 @@ typedef unsigned int    tcflag_t;
+ typedef unsigned long   tcflag_t;
+ #endif
+ 
+-#define NCC 8
+-struct termio {
+-	unsigned short c_iflag;		/* input mode flags */
+-	unsigned short c_oflag;		/* output mode flags */
+-	unsigned short c_cflag;		/* control mode flags */
+-	unsigned short c_lflag;		/* local mode flags */
+-	unsigned char c_line;		/* line discipline */
+-	unsigned char c_cc[NCC];	/* control characters */
+-};
+-
+ #define NCCS 17
+ struct termios {
+ 	tcflag_t c_iflag;		/* input mode flags */
+diff --git a/arch/sparc/include/uapi/asm/termios.h b/arch/sparc/include/uapi/asm/termios.h
+index ee86f4093d83e..cceb32260881e 100644
+--- a/arch/sparc/include/uapi/asm/termios.h
++++ b/arch/sparc/include/uapi/asm/termios.h
+@@ -40,5 +40,14 @@ struct winsize {
+ 	unsigned short ws_ypixel;
+ };
+ 
++#define NCC 8
++struct termio {
++	unsigned short c_iflag;		/* input mode flags */
++	unsigned short c_oflag;		/* output mode flags */
++	unsigned short c_cflag;		/* control mode flags */
++	unsigned short c_lflag;		/* local mode flags */
++	unsigned char c_line;		/* line discipline */
++	unsigned char c_cc[NCC];	/* control characters */
++};
+ 
+ #endif /* _UAPI_SPARC_TERMIOS_H */
+diff --git a/arch/sparc/kernel/prom_64.c b/arch/sparc/kernel/prom_64.c
+index f883a50fa3339..4eae633f71982 100644
+--- a/arch/sparc/kernel/prom_64.c
++++ b/arch/sparc/kernel/prom_64.c
+@@ -483,7 +483,9 @@ static void *record_one_cpu(struct device_node *dp, int cpuid, int arg)
+ 	ncpus_probed++;
+ #ifdef CONFIG_SMP
+ 	set_cpu_present(cpuid, true);
+-	set_cpu_possible(cpuid, true);
++
++	if (num_possible_cpus() < nr_cpu_ids)
++		set_cpu_possible(cpuid, true);
+ #endif
+ 	return NULL;
+ }
+diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
+index d87244197d5cb..d921b4790af92 100644
+--- a/arch/sparc/kernel/setup_64.c
++++ b/arch/sparc/kernel/setup_64.c
+@@ -688,7 +688,6 @@ void __init setup_arch(char **cmdline_p)
+ 
+ 	paging_init();
+ 	init_sparc64_elf_hwcap();
+-	smp_fill_in_cpu_possible_map();
+ 	/*
+ 	 * Once the OF device tree and MDESC have been setup and nr_cpus has
+ 	 * been parsed, we know the list of possible cpus.  Therefore we can
+diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
+index ae5faa1d989d2..748909095d9b6 100644
+--- a/arch/sparc/kernel/smp_64.c
++++ b/arch/sparc/kernel/smp_64.c
+@@ -1210,20 +1210,6 @@ void __init smp_setup_processor_id(void)
+ 		xcall_deliver_impl = hypervisor_xcall_deliver;
+ }
+ 
+-void __init smp_fill_in_cpu_possible_map(void)
+-{
+-	int possible_cpus = num_possible_cpus();
+-	int i;
+-
+-	if (possible_cpus > nr_cpu_ids)
+-		possible_cpus = nr_cpu_ids;
+-
+-	for (i = 0; i < possible_cpus; i++)
+-		set_cpu_possible(i, true);
+-	for (; i < NR_CPUS; i++)
+-		set_cpu_possible(i, false);
+-}
+-
+ void smp_fill_in_sib_core_maps(void)
+ {
+ 	unsigned int i;
+diff --git a/arch/um/drivers/line.c b/arch/um/drivers/line.c
+index 14ad9f495fe69..37e96ba0f5fb1 100644
+--- a/arch/um/drivers/line.c
++++ b/arch/um/drivers/line.c
+@@ -668,24 +668,26 @@ void register_winch_irq(int fd, int tty_fd, int pid, struct tty_port *port,
+ 		goto cleanup;
+ 	}
+ 
+-	*winch = ((struct winch) { .list  	= LIST_HEAD_INIT(winch->list),
+-				   .fd  	= fd,
++	*winch = ((struct winch) { .fd  	= fd,
+ 				   .tty_fd 	= tty_fd,
+ 				   .pid  	= pid,
+ 				   .port 	= port,
+ 				   .stack	= stack });
+ 
++	spin_lock(&winch_handler_lock);
++	list_add(&winch->list, &winch_handlers);
++	spin_unlock(&winch_handler_lock);
++
+ 	if (um_request_irq(WINCH_IRQ, fd, IRQ_READ, winch_interrupt,
+ 			   IRQF_SHARED, "winch", winch) < 0) {
+ 		printk(KERN_ERR "register_winch_irq - failed to register "
+ 		       "IRQ\n");
++		spin_lock(&winch_handler_lock);
++		list_del(&winch->list);
++		spin_unlock(&winch_handler_lock);
+ 		goto out_free;
+ 	}
+ 
+-	spin_lock(&winch_handler_lock);
+-	list_add(&winch->list, &winch_handlers);
+-	spin_unlock(&winch_handler_lock);
+-
+ 	return;
+ 
+  out_free:
+diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
+index b12c1b0d3e1d0..de28ce711687e 100644
+--- a/arch/um/drivers/ubd_kern.c
++++ b/arch/um/drivers/ubd_kern.c
+@@ -1158,7 +1158,7 @@ static int __init ubd_init(void)
+ 
+ 	if (irq_req_buffer == NULL) {
+ 		printk(KERN_ERR "Failed to initialize ubd buffering\n");
+-		return -1;
++		return -ENOMEM;
+ 	}
+ 	io_req_buffer = kmalloc_array(UBD_REQ_BUFFER_SIZE,
+ 				      sizeof(struct io_thread_req *),
+@@ -1169,7 +1169,7 @@ static int __init ubd_init(void)
+ 
+ 	if (io_req_buffer == NULL) {
+ 		printk(KERN_ERR "Failed to initialize ubd buffering\n");
+-		return -1;
++		return -ENOMEM;
+ 	}
+ 	platform_driver_register(&ubd_driver);
+ 	mutex_lock(&ubd_lock);
+diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
+index fc662f7cc2afb..c10432ef2d410 100644
+--- a/arch/um/drivers/vector_kern.c
++++ b/arch/um/drivers/vector_kern.c
+@@ -142,7 +142,7 @@ static bool get_bpf_flash(struct arglist *def)
+ 
+ 	if (allow != NULL) {
+ 		if (kstrtoul(allow, 10, &result) == 0)
+-			return (allow > 0);
++			return result > 0;
+ 	}
+ 	return false;
+ }
+diff --git a/arch/um/include/asm/mmu.h b/arch/um/include/asm/mmu.h
+index 5b072aba5b658..a7cb380c0b5c0 100644
+--- a/arch/um/include/asm/mmu.h
++++ b/arch/um/include/asm/mmu.h
+@@ -15,8 +15,6 @@ typedef struct mm_context {
+ 	struct page *stub_pages[2];
+ } mm_context_t;
+ 
+-extern void __switch_mm(struct mm_id * mm_idp);
+-
+ /* Avoid tangled inclusion with asm/ldt.h */
+ extern long init_new_ldt(struct mm_context *to_mm, struct mm_context *from_mm);
+ extern void free_ldt(struct mm_context *mm);
+diff --git a/arch/um/include/shared/skas/mm_id.h b/arch/um/include/shared/skas/mm_id.h
+index e82e203f5f419..92dbf727e3842 100644
+--- a/arch/um/include/shared/skas/mm_id.h
++++ b/arch/um/include/shared/skas/mm_id.h
+@@ -15,4 +15,6 @@ struct mm_id {
+ 	int kill;
+ };
+ 
++void __switch_mm(struct mm_id *mm_idp);
++
+ #endif
+diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug
+index 27b5e2bc6a016..e6546aa448de1 100644
+--- a/arch/x86/Kconfig.debug
++++ b/arch/x86/Kconfig.debug
+@@ -258,6 +258,7 @@ config UNWINDER_ORC
+ 
+ config UNWINDER_FRAME_POINTER
+ 	bool "Frame pointer unwinder"
++	select ARCH_WANT_FRAME_POINTERS
+ 	select FRAME_POINTER
+ 	help
+ 	  This option enables the frame pointer unwinder for unwinding kernel
+@@ -281,7 +282,3 @@ config UNWINDER_GUESS
+ 	  overhead.
+ 
+ endchoice
+-
+-config FRAME_POINTER
+-	depends on !UNWINDER_ORC && !UNWINDER_GUESS
+-	bool
+diff --git a/arch/x86/crypto/nh-avx2-x86_64.S b/arch/x86/crypto/nh-avx2-x86_64.S
+index 6a0b15e7196a8..54c0ee41209d5 100644
+--- a/arch/x86/crypto/nh-avx2-x86_64.S
++++ b/arch/x86/crypto/nh-avx2-x86_64.S
+@@ -153,5 +153,6 @@ SYM_FUNC_START(nh_avx2)
+ 	vpaddq		T1, T0, T0
+ 	vpaddq		T4, T0, T0
+ 	vmovdqu		T0, (HASH)
++	vzeroupper
+ 	RET
+ SYM_FUNC_END(nh_avx2)
+diff --git a/arch/x86/crypto/sha256-avx2-asm.S b/arch/x86/crypto/sha256-avx2-asm.S
+index 3439aaf4295d2..81c8053152bb9 100644
+--- a/arch/x86/crypto/sha256-avx2-asm.S
++++ b/arch/x86/crypto/sha256-avx2-asm.S
+@@ -711,6 +711,7 @@ done_hash:
+ 	popq	%r13
+ 	popq	%r12
+ 	popq	%rbx
++	vzeroupper
+ 	RET
+ SYM_FUNC_END(sha256_transform_rorx)
+ 
+diff --git a/arch/x86/entry/vsyscall/vsyscall_64.c b/arch/x86/entry/vsyscall/vsyscall_64.c
+index 44c33103a9554..f0b817eb6e8ba 100644
+--- a/arch/x86/entry/vsyscall/vsyscall_64.c
++++ b/arch/x86/entry/vsyscall/vsyscall_64.c
+@@ -98,11 +98,6 @@ static int addr_to_vsyscall_nr(unsigned long addr)
+ 
+ static bool write_ok_or_segv(unsigned long ptr, size_t size)
+ {
+-	/*
+-	 * XXX: if access_ok, get_user, and put_user handled
+-	 * sig_on_uaccess_err, this could go away.
+-	 */
+-
+ 	if (!access_ok((void __user *)ptr, size)) {
+ 		struct thread_struct *thread = &current->thread;
+ 
+@@ -120,10 +115,8 @@ static bool write_ok_or_segv(unsigned long ptr, size_t size)
+ bool emulate_vsyscall(unsigned long error_code,
+ 		      struct pt_regs *regs, unsigned long address)
+ {
+-	struct task_struct *tsk;
+ 	unsigned long caller;
+ 	int vsyscall_nr, syscall_nr, tmp;
+-	int prev_sig_on_uaccess_err;
+ 	long ret;
+ 	unsigned long orig_dx;
+ 
+@@ -172,8 +165,6 @@ bool emulate_vsyscall(unsigned long error_code,
+ 		goto sigsegv;
+ 	}
+ 
+-	tsk = current;
+-
+ 	/*
+ 	 * Check for access_ok violations and find the syscall nr.
+ 	 *
+@@ -233,12 +224,8 @@ bool emulate_vsyscall(unsigned long error_code,
+ 		goto do_ret;  /* skip requested */
+ 
+ 	/*
+-	 * With a real vsyscall, page faults cause SIGSEGV.  We want to
+-	 * preserve that behavior to make writing exploits harder.
++	 * With a real vsyscall, page faults cause SIGSEGV.
+ 	 */
+-	prev_sig_on_uaccess_err = current->thread.sig_on_uaccess_err;
+-	current->thread.sig_on_uaccess_err = 1;
+-
+ 	ret = -EFAULT;
+ 	switch (vsyscall_nr) {
+ 	case 0:
+@@ -261,23 +248,12 @@ bool emulate_vsyscall(unsigned long error_code,
+ 		break;
+ 	}
+ 
+-	current->thread.sig_on_uaccess_err = prev_sig_on_uaccess_err;
+-
+ check_fault:
+ 	if (ret == -EFAULT) {
+ 		/* Bad news -- userspace fed a bad pointer to a vsyscall. */
+ 		warn_bad_vsyscall(KERN_INFO, regs,
+ 				  "vsyscall fault (exploit attempt?)");
+-
+-		/*
+-		 * If we failed to generate a signal for any reason,
+-		 * generate one here.  (This should be impossible.)
+-		 */
+-		if (WARN_ON_ONCE(!sigismember(&tsk->pending.signal, SIGBUS) &&
+-				 !sigismember(&tsk->pending.signal, SIGSEGV)))
+-			goto sigsegv;
+-
+-		return true;  /* Don't emulate the ret. */
++		goto sigsegv;
+ 	}
+ 
+ 	regs->ax = ret;
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 6dc3c5f0be076..c682a14299e0e 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -528,7 +528,6 @@ struct thread_struct {
+ 	unsigned long		iopl_emul;
+ 
+ 	unsigned int		iopl_warn:1;
+-	unsigned int		sig_on_uaccess_err:1;
+ 
+ 	/* Floating point and extended processor state */
+ 	struct fpu		fpu;
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index bd557e9f5dd8e..167c9df27ae7a 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -920,7 +920,8 @@ static void __send_cleanup_vector(struct apic_chip_data *apicd)
+ 		hlist_add_head(&apicd->clist, per_cpu_ptr(&cleanup_list, cpu));
+ 		apic->send_IPI(cpu, IRQ_MOVE_CLEANUP_VECTOR);
+ 	} else {
+-		apicd->prev_vector = 0;
++		pr_warn("IRQ %u schedule cleanup for offline CPU %u\n", apicd->irq, cpu);
++		free_moved_vector(apicd);
+ 	}
+ 	raw_spin_unlock(&vector_lock);
+ }
+@@ -957,6 +958,7 @@ void irq_complete_move(struct irq_cfg *cfg)
+  */
+ void irq_force_complete_move(struct irq_desc *desc)
+ {
++	unsigned int cpu = smp_processor_id();
+ 	struct apic_chip_data *apicd;
+ 	struct irq_data *irqd;
+ 	unsigned int vector;
+@@ -981,10 +983,11 @@ void irq_force_complete_move(struct irq_desc *desc)
+ 		goto unlock;
+ 
+ 	/*
+-	 * If prev_vector is empty, no action required.
++	 * If prev_vector is empty or the descriptor is neither currently
++	 * nor previously on the outgoing CPU no action required.
+ 	 */
+ 	vector = apicd->prev_vector;
+-	if (!vector)
++	if (!vector || (apicd->cpu != cpu && apicd->prev_cpu != cpu))
+ 		goto unlock;
+ 
+ 	/*
+diff --git a/arch/x86/kernel/tsc_sync.c b/arch/x86/kernel/tsc_sync.c
+index 923660057f363..a6f357774697d 100644
+--- a/arch/x86/kernel/tsc_sync.c
++++ b/arch/x86/kernel/tsc_sync.c
+@@ -192,11 +192,9 @@ bool tsc_store_and_check_tsc_adjust(bool bootcpu)
+ 	cur->warned = false;
+ 
+ 	/*
+-	 * If a non-zero TSC value for socket 0 may be valid then the default
+-	 * adjusted value cannot assumed to be zero either.
++	 * The default adjust value cannot be assumed to be zero on any socket.
+ 	 */
+-	if (tsc_async_resets)
+-		cur->adjusted = bootval;
++	cur->adjusted = bootval;
+ 
+ 	/*
+ 	 * Check whether this CPU is the first in a package to come up. In
+diff --git a/arch/x86/lib/x86-opcode-map.txt b/arch/x86/lib/x86-opcode-map.txt
+index ec31f5b60323d..1c25c1072a84d 100644
+--- a/arch/x86/lib/x86-opcode-map.txt
++++ b/arch/x86/lib/x86-opcode-map.txt
+@@ -148,7 +148,7 @@ AVXcode:
+ 65: SEG=GS (Prefix)
+ 66: Operand-Size (Prefix)
+ 67: Address-Size (Prefix)
+-68: PUSH Iz (d64)
++68: PUSH Iz
+ 69: IMUL Gv,Ev,Iz
+ 6a: PUSH Ib (d64)
+ 6b: IMUL Gv,Ev,Ib
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index cdb337cf92bae..98a5924d98b72 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -649,33 +649,8 @@ no_context(struct pt_regs *regs, unsigned long error_code,
+ 	}
+ 
+ 	/* Are we prepared to handle this kernel fault? */
+-	if (fixup_exception(regs, X86_TRAP_PF, error_code, address)) {
+-		/*
+-		 * Any interrupt that takes a fault gets the fixup. This makes
+-		 * the below recursive fault logic only apply to a faults from
+-		 * task context.
+-		 */
+-		if (in_interrupt())
+-			return;
+-
+-		/*
+-		 * Per the above we're !in_interrupt(), aka. task context.
+-		 *
+-		 * In this case we need to make sure we're not recursively
+-		 * faulting through the emulate_vsyscall() logic.
+-		 */
+-		if (current->thread.sig_on_uaccess_err && signal) {
+-			set_signal_archinfo(address, error_code);
+-
+-			/* XXX: hwpoison faults will set the wrong code. */
+-			force_sig_fault(signal, si_code, (void __user *)address);
+-		}
+-
+-		/*
+-		 * Barring that, we can do the fixup and be happy.
+-		 */
++	if (fixup_exception(regs, X86_TRAP_PF, error_code, address))
+ 		return;
+-	}
+ 
+ #ifdef CONFIG_VMAP_STACK
+ 	/*
+diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
+index dc0b91c1db04b..7c7bfdf0d0c0f 100644
+--- a/arch/x86/purgatory/Makefile
++++ b/arch/x86/purgatory/Makefile
+@@ -37,7 +37,8 @@ KCOV_INSTRUMENT := n
+ # make up the standalone purgatory.ro
+ 
+ PURGATORY_CFLAGS_REMOVE := -mcmodel=kernel
+-PURGATORY_CFLAGS := -mcmodel=large -ffreestanding -fno-zero-initialized-in-bss -g0
++PURGATORY_CFLAGS := -mcmodel=small -ffreestanding -fno-zero-initialized-in-bss -g0
++PURGATORY_CFLAGS += -fpic -fvisibility=hidden
+ PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN) -DDISABLE_BRANCH_PROFILING
+ PURGATORY_CFLAGS += -fno-stack-protector
+ 
+diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
+index 0043fd374a62f..f9ec998b7e946 100644
+--- a/arch/x86/tools/relocs.c
++++ b/arch/x86/tools/relocs.c
+@@ -689,6 +689,15 @@ static void walk_relocs(int (*process)(struct section *sec, Elf_Rel *rel,
+ 		if (!(sec_applies->shdr.sh_flags & SHF_ALLOC)) {
+ 			continue;
+ 		}
++
++		/*
++		 * Do not perform relocations in .notes sections; any
++		 * values there are meant for pre-boot consumption (e.g.
++		 * startup_xen).
++		 */
++		if (sec_applies->shdr.sh_type == SHT_NOTE)
++			continue;
++
+ 		sh_symtab = sec_symtab->symtab;
+ 		sym_strtab = sec_symtab->link->strtab;
+ 		for (j = 0; j < sec->shdr.sh_size/sizeof(Elf_Rel); j++) {
+diff --git a/crypto/ecrdsa.c b/crypto/ecrdsa.c
+index f7ed430206720..0a970261b107d 100644
+--- a/crypto/ecrdsa.c
++++ b/crypto/ecrdsa.c
+@@ -294,4 +294,5 @@ module_exit(ecrdsa_mod_fini);
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Vitaly Chikunov <vt@altlinux.org>");
+ MODULE_DESCRIPTION("EC-RDSA generic algorithm");
++MODULE_ALIAS_CRYPTO("ecrdsa");
+ MODULE_ALIAS_CRYPTO("ecrdsa-generic");
+diff --git a/drivers/accessibility/speakup/main.c b/drivers/accessibility/speakup/main.c
+index 7598778cf37fb..60c059c74fb60 100644
+--- a/drivers/accessibility/speakup/main.c
++++ b/drivers/accessibility/speakup/main.c
+@@ -576,7 +576,7 @@ static u_long get_word(struct vc_data *vc)
+ 	}
+ 	attr_ch = get_char(vc, (u_short *)tmp_pos, &spk_attr);
+ 	buf[cnt++] = attr_ch;
+-	while (tmpx < vc->vc_cols - 1 && cnt < sizeof(buf) - 1) {
++	while (tmpx < vc->vc_cols - 1 && cnt < ARRAY_SIZE(buf) - 1) {
+ 		tmp_pos += 2;
+ 		tmpx++;
+ 		ch = get_char(vc, (u_short *)tmp_pos, &temp);
+diff --git a/drivers/acpi/acpica/Makefile b/drivers/acpi/acpica/Makefile
+index f919811156b1f..b6cf9c9bd6396 100644
+--- a/drivers/acpi/acpica/Makefile
++++ b/drivers/acpi/acpica/Makefile
+@@ -5,6 +5,7 @@
+ 
+ ccflags-y			:= -D_LINUX -DBUILDING_ACPICA
+ ccflags-$(CONFIG_ACPI_DEBUG)	+= -DACPI_DEBUG_OUTPUT
++CFLAGS_tbfind.o 		+= $(call cc-disable-warning, stringop-truncation)
+ 
+ # use acpi.o to put all files here into acpi.o modparam namespace
+ obj-y	+= acpi.o
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 42b1b06efda6f..aa92ec4fe7214 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -475,6 +475,18 @@ static const struct dmi_system_id asus_laptop[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "B2502CBA"),
+ 		},
+ 	},
++	{
++		/* TongFang GXxHRXx/TUXEDO InfinityBook Pro Gen9 AMD */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "GXxHRXx"),
++		},
++	},
++	{
++		/* TongFang GMxHGxx/TUXEDO Stellaris Slim Gen1 AMD */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "GMxHGxx"),
++		},
++	},
+ 	{ }
+ };
+ 
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index bcbaa4d6a0ff5..6631a65f632b1 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -476,7 +476,7 @@ struct binder_proc {
+ 	struct list_head todo;
+ 	struct binder_stats stats;
+ 	struct list_head delivered_death;
+-	int max_threads;
++	u32 max_threads;
+ 	int requested_threads;
+ 	int requested_threads_started;
+ 	int tmp_ref;
+@@ -5408,7 +5408,7 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 			goto err;
+ 		break;
+ 	case BINDER_SET_MAX_THREADS: {
+-		int max_threads;
++		u32 max_threads;
+ 
+ 		if (copy_from_user(&max_threads, ubuf,
+ 				   sizeof(max_threads))) {
+diff --git a/drivers/ata/pata_legacy.c b/drivers/ata/pata_legacy.c
+index 4405d255e3aa2..220739b309ea3 100644
+--- a/drivers/ata/pata_legacy.c
++++ b/drivers/ata/pata_legacy.c
+@@ -114,8 +114,6 @@ static int legacy_port[NR_HOST] = { 0x1f0, 0x170, 0x1e8, 0x168, 0x1e0, 0x160 };
+ static struct legacy_probe probe_list[NR_HOST];
+ static struct legacy_data legacy_data[NR_HOST];
+ static struct ata_host *legacy_host[NR_HOST];
+-static int nr_legacy_host;
+-
+ 
+ static int probe_all;		/* Set to check all ISA port ranges */
+ static int ht6560a;		/* HT 6560A on primary 1, second 2, both 3 */
+@@ -1239,9 +1237,11 @@ static __exit void legacy_exit(void)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < nr_legacy_host; i++) {
++	for (i = 0; i < NR_HOST; i++) {
+ 		struct legacy_data *ld = &legacy_data[i];
+-		ata_host_detach(legacy_host[i]);
++
++		if (legacy_host[i])
++			ata_host_detach(legacy_host[i]);
+ 		platform_device_unregister(ld->platform_dev);
+ 	}
+ }
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index 35b390a785dd4..862a9420df526 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -2032,10 +2032,13 @@ static void __exit null_exit(void)
+ 
+ 	if (g_queue_mode == NULL_Q_MQ && shared_tags)
+ 		blk_mq_free_tag_set(&tag_set);
++
++	mutex_destroy(&lock);
+ }
+ 
+ module_init(null_init);
+ module_exit(null_exit);
+ 
+ MODULE_AUTHOR("Jens Axboe <axboe@kernel.dk>");
++MODULE_DESCRIPTION("multi queue aware block test driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/char/ppdev.c b/drivers/char/ppdev.c
+index 38b46c7d17371..a97edbf7455a6 100644
+--- a/drivers/char/ppdev.c
++++ b/drivers/char/ppdev.c
+@@ -296,28 +296,35 @@ static int register_device(int minor, struct pp_struct *pp)
+ 	if (!port) {
+ 		pr_warn("%s: no associated port!\n", name);
+ 		rc = -ENXIO;
+-		goto err;
++		goto err_free_name;
++	}
++
++	index = ida_alloc(&ida_index, GFP_KERNEL);
++	if (index < 0) {
++		pr_warn("%s: failed to get index!\n", name);
++		rc = index;
++		goto err_put_port;
+ 	}
+ 
+-	index = ida_simple_get(&ida_index, 0, 0, GFP_KERNEL);
+ 	memset(&ppdev_cb, 0, sizeof(ppdev_cb));
+ 	ppdev_cb.irq_func = pp_irq;
+ 	ppdev_cb.flags = (pp->flags & PP_EXCL) ? PARPORT_FLAG_EXCL : 0;
+ 	ppdev_cb.private = pp;
+ 	pdev = parport_register_dev_model(port, name, &ppdev_cb, index);
+-	parport_put_port(port);
+ 
+ 	if (!pdev) {
+ 		pr_warn("%s: failed to register device!\n", name);
+ 		rc = -ENXIO;
+-		ida_simple_remove(&ida_index, index);
+-		goto err;
++		ida_free(&ida_index, index);
++		goto err_put_port;
+ 	}
+ 
+ 	pp->pdev = pdev;
+ 	pp->index = index;
+ 	dev_dbg(&pdev->dev, "registered pardevice\n");
+-err:
++err_put_port:
++	parport_put_port(port);
++err_free_name:
+ 	kfree(name);
+ 	return rc;
+ }
+@@ -750,7 +757,7 @@ static int pp_release(struct inode *inode, struct file *file)
+ 
+ 	if (pp->pdev) {
+ 		parport_unregister_device(pp->pdev);
+-		ida_simple_remove(&ida_index, pp->index);
++		ida_free(&ida_index, pp->index);
+ 		pp->pdev = NULL;
+ 		pr_debug(CHRDEV "%x: unregistered pardevice\n", minor);
+ 	}
+diff --git a/drivers/clk/qcom/mmcc-msm8998.c b/drivers/clk/qcom/mmcc-msm8998.c
+index a68764cfb7930..5e2e60c1c2283 100644
+--- a/drivers/clk/qcom/mmcc-msm8998.c
++++ b/drivers/clk/qcom/mmcc-msm8998.c
+@@ -2587,6 +2587,8 @@ static struct clk_hw *mmcc_msm8998_hws[] = {
+ 
+ static struct gdsc video_top_gdsc = {
+ 	.gdscr = 0x1024,
++	.cxcs = (unsigned int []){ 0x1028, 0x1034, 0x1038 },
++	.cxc_count = 3,
+ 	.pd = {
+ 		.name = "video_top",
+ 	},
+@@ -2595,20 +2597,26 @@ static struct gdsc video_top_gdsc = {
+ 
+ static struct gdsc video_subcore0_gdsc = {
+ 	.gdscr = 0x1040,
++	.cxcs = (unsigned int []){ 0x1048 },
++	.cxc_count = 1,
+ 	.pd = {
+ 		.name = "video_subcore0",
+ 	},
+ 	.parent = &video_top_gdsc.pd,
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = HW_CTRL,
+ };
+ 
+ static struct gdsc video_subcore1_gdsc = {
+ 	.gdscr = 0x1044,
++	.cxcs = (unsigned int []){ 0x104c },
++	.cxc_count = 1,
+ 	.pd = {
+ 		.name = "video_subcore1",
+ 	},
+ 	.parent = &video_top_gdsc.pd,
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = HW_CTRL,
+ };
+ 
+ static struct gdsc mdss_gdsc = {
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 5b4bca71f201d..162de402b1134 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1559,47 +1559,36 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
+ 	return 0;
+ }
+ 
+-static int cpufreq_offline(unsigned int cpu)
++static void __cpufreq_offline(unsigned int cpu, struct cpufreq_policy *policy)
+ {
+-	struct cpufreq_policy *policy;
+ 	int ret;
+ 
+-	pr_debug("%s: unregistering CPU %u\n", __func__, cpu);
+-
+-	policy = cpufreq_cpu_get_raw(cpu);
+-	if (!policy) {
+-		pr_debug("%s: No cpu_data found\n", __func__);
+-		return 0;
+-	}
+-
+-	down_write(&policy->rwsem);
+ 	if (has_target())
+ 		cpufreq_stop_governor(policy);
+ 
+ 	cpumask_clear_cpu(cpu, policy->cpus);
+ 
+-	if (policy_is_inactive(policy)) {
+-		if (has_target())
+-			strncpy(policy->last_governor, policy->governor->name,
+-				CPUFREQ_NAME_LEN);
+-		else
+-			policy->last_policy = policy->policy;
+-	} else if (cpu == policy->cpu) {
+-		/* Nominate new CPU */
+-		policy->cpu = cpumask_any(policy->cpus);
+-	}
+-
+-	/* Start governor again for active policy */
+ 	if (!policy_is_inactive(policy)) {
++		/* Nominate a new CPU if necessary. */
++		if (cpu == policy->cpu)
++			policy->cpu = cpumask_any(policy->cpus);
++
++		/* Start the governor again for the active policy. */
+ 		if (has_target()) {
+ 			ret = cpufreq_start_governor(policy);
+ 			if (ret)
+ 				pr_err("%s: Failed to start governor\n", __func__);
+ 		}
+ 
+-		goto unlock;
++		return;
+ 	}
+ 
++	if (has_target())
++		strncpy(policy->last_governor, policy->governor->name,
++			CPUFREQ_NAME_LEN);
++	else
++		policy->last_policy = policy->policy;
++
+ 	if (cpufreq_thermal_control_enabled(cpufreq_driver)) {
+ 		cpufreq_cooling_unregister(policy->cdev);
+ 		policy->cdev = NULL;
+@@ -1617,12 +1606,31 @@ static int cpufreq_offline(unsigned int cpu)
+ 	 */
+ 	if (cpufreq_driver->offline) {
+ 		cpufreq_driver->offline(policy);
+-	} else if (cpufreq_driver->exit) {
++		return;
++	}
++
++	if (cpufreq_driver->exit)
+ 		cpufreq_driver->exit(policy);
+-		policy->freq_table = NULL;
++
++	policy->freq_table = NULL;
++}
++
++static int cpufreq_offline(unsigned int cpu)
++{
++	struct cpufreq_policy *policy;
++
++	pr_debug("%s: unregistering CPU %u\n", __func__, cpu);
++
++	policy = cpufreq_cpu_get_raw(cpu);
++	if (!policy) {
++		pr_debug("%s: No cpu_data found\n", __func__);
++		return 0;
+ 	}
+ 
+-unlock:
++	down_write(&policy->rwsem);
++
++	__cpufreq_offline(cpu, policy);
++
+ 	up_write(&policy->rwsem);
+ 	return 0;
+ }
+@@ -1640,19 +1648,26 @@ static void cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
+ 	if (!policy)
+ 		return;
+ 
++	down_write(&policy->rwsem);
++
+ 	if (cpu_online(cpu))
+-		cpufreq_offline(cpu);
++		__cpufreq_offline(cpu, policy);
+ 
+ 	cpumask_clear_cpu(cpu, policy->real_cpus);
+ 	remove_cpu_dev_symlink(policy, dev);
+ 
+-	if (cpumask_empty(policy->real_cpus)) {
+-		/* We did light-weight exit earlier, do full tear down now */
+-		if (cpufreq_driver->offline)
+-			cpufreq_driver->exit(policy);
+-
+-		cpufreq_policy_free(policy);
++	if (!cpumask_empty(policy->real_cpus)) {
++		up_write(&policy->rwsem);
++		return;
+ 	}
++
++	/* We did light-weight exit earlier, do full tear down now */
++	if (cpufreq_driver->offline && cpufreq_driver->exit)
++		cpufreq_driver->exit(policy);
++
++	up_write(&policy->rwsem);
++
++	cpufreq_policy_free(policy);
+ }
+ 
+ /**
+diff --git a/drivers/crypto/bcm/spu2.c b/drivers/crypto/bcm/spu2.c
+index c860ffb0b4c38..670d439f204c5 100644
+--- a/drivers/crypto/bcm/spu2.c
++++ b/drivers/crypto/bcm/spu2.c
+@@ -495,7 +495,7 @@ static void spu2_dump_omd(u8 *omd, u16 hash_key_len, u16 ciph_key_len,
+ 	if (hash_iv_len) {
+ 		packet_log("  Hash IV Length %u bytes\n", hash_iv_len);
+ 		packet_dump("  hash IV: ", ptr, hash_iv_len);
+-		ptr += ciph_key_len;
++		ptr += hash_iv_len;
+ 	}
+ 
+ 	if (ciph_iv_len) {
+diff --git a/drivers/crypto/ccp/sp-platform.c b/drivers/crypto/ccp/sp-platform.c
+index 9dba52fbee997..121f9d0cb608e 100644
+--- a/drivers/crypto/ccp/sp-platform.c
++++ b/drivers/crypto/ccp/sp-platform.c
+@@ -39,44 +39,38 @@ static const struct sp_dev_vdata dev_vdata[] = {
+ 	},
+ };
+ 
+-#ifdef CONFIG_ACPI
+ static const struct acpi_device_id sp_acpi_match[] = {
+ 	{ "AMDI0C00", (kernel_ulong_t)&dev_vdata[0] },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(acpi, sp_acpi_match);
+-#endif
+ 
+-#ifdef CONFIG_OF
+ static const struct of_device_id sp_of_match[] = {
+ 	{ .compatible = "amd,ccp-seattle-v1a",
+ 	  .data = (const void *)&dev_vdata[0] },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(of, sp_of_match);
+-#endif
+ 
+ static struct sp_dev_vdata *sp_get_of_version(struct platform_device *pdev)
+ {
+-#ifdef CONFIG_OF
+ 	const struct of_device_id *match;
+ 
+ 	match = of_match_node(sp_of_match, pdev->dev.of_node);
+ 	if (match && match->data)
+ 		return (struct sp_dev_vdata *)match->data;
+-#endif
++
+ 	return NULL;
+ }
+ 
+ static struct sp_dev_vdata *sp_get_acpi_version(struct platform_device *pdev)
+ {
+-#ifdef CONFIG_ACPI
+ 	const struct acpi_device_id *match;
+ 
+ 	match = acpi_match_device(sp_acpi_match, &pdev->dev);
+ 	if (match && match->driver_data)
+ 		return (struct sp_dev_vdata *)match->driver_data;
+-#endif
++
+ 	return NULL;
+ }
+ 
+@@ -222,12 +216,8 @@ static int sp_platform_resume(struct platform_device *pdev)
+ static struct platform_driver sp_platform_driver = {
+ 	.driver = {
+ 		.name = "ccp",
+-#ifdef CONFIG_ACPI
+ 		.acpi_match_table = sp_acpi_match,
+-#endif
+-#ifdef CONFIG_OF
+ 		.of_match_table = sp_of_match,
+-#endif
+ 	},
+ 	.probe = sp_platform_probe,
+ 	.remove = sp_platform_remove,
+diff --git a/drivers/crypto/qat/qat_common/adf_aer.c b/drivers/crypto/qat/qat_common/adf_aer.c
+index dc0feedf52702..bf6e2116148b5 100644
+--- a/drivers/crypto/qat/qat_common/adf_aer.c
++++ b/drivers/crypto/qat/qat_common/adf_aer.c
+@@ -95,8 +95,7 @@ static void adf_device_reset_worker(struct work_struct *work)
+ 	if (adf_dev_init(accel_dev) || adf_dev_start(accel_dev)) {
+ 		/* The device hanged and we can't restart it so stop here */
+ 		dev_err(&GET_DEV(accel_dev), "Restart device failed\n");
+-		if (reset_data->mode == ADF_DEV_RESET_ASYNC ||
+-		    completion_done(&reset_data->compl))
++		if (reset_data->mode == ADF_DEV_RESET_ASYNC)
+ 			kfree(reset_data);
+ 		WARN(1, "QAT: device restart failed. Device is unusable\n");
+ 		return;
+@@ -104,16 +103,8 @@ static void adf_device_reset_worker(struct work_struct *work)
+ 	adf_dev_restarted_notify(accel_dev);
+ 	clear_bit(ADF_STATUS_RESTARTING, &accel_dev->status);
+ 
+-	/*
+-	 * The dev is back alive. Notify the caller if in sync mode
+-	 *
+-	 * If device restart will take a more time than expected,
+-	 * the schedule_reset() function can timeout and exit. This can be
+-	 * detected by calling the completion_done() function. In this case
+-	 * the reset_data structure needs to be freed here.
+-	 */
+-	if (reset_data->mode == ADF_DEV_RESET_ASYNC ||
+-	    completion_done(&reset_data->compl))
++	/* The dev is back alive. Notify the caller if in sync mode */
++	if (reset_data->mode == ADF_DEV_RESET_ASYNC)
+ 		kfree(reset_data);
+ 	else
+ 		complete(&reset_data->compl);
+@@ -148,10 +139,10 @@ static int adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev,
+ 		if (!timeout) {
+ 			dev_err(&GET_DEV(accel_dev),
+ 				"Reset device timeout expired\n");
++			cancel_work_sync(&reset_data->reset_work);
+ 			ret = -EFAULT;
+-		} else {
+-			kfree(reset_data);
+ 		}
++		kfree(reset_data);
+ 		return ret;
+ 	}
+ 	return 0;
+diff --git a/drivers/dma-buf/sync_debug.c b/drivers/dma-buf/sync_debug.c
+index 101394f16930f..237bce21d1e72 100644
+--- a/drivers/dma-buf/sync_debug.c
++++ b/drivers/dma-buf/sync_debug.c
+@@ -110,12 +110,12 @@ static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj)
+ 
+ 	seq_printf(s, "%s: %d\n", obj->name, obj->value);
+ 
+-	spin_lock_irq(&obj->lock);
++	spin_lock(&obj->lock); /* Caller already disabled IRQ. */
+ 	list_for_each(pos, &obj->pt_list) {
+ 		struct sync_pt *pt = container_of(pos, struct sync_pt, link);
+ 		sync_print_fence(s, &pt->base, false);
+ 	}
+-	spin_unlock_irq(&obj->lock);
++	spin_unlock(&obj->lock);
+ }
+ 
+ static void sync_print_sync_file(struct seq_file *s,
+diff --git a/drivers/dma/idma64.c b/drivers/dma/idma64.c
+index db506e1f7ef4e..0f065ba844c00 100644
+--- a/drivers/dma/idma64.c
++++ b/drivers/dma/idma64.c
+@@ -594,7 +594,9 @@ static int idma64_probe(struct idma64_chip *chip)
+ 
+ 	idma64->dma.dev = chip->sysdev;
+ 
+-	dma_set_max_seg_size(idma64->dma.dev, IDMA64C_CTLH_BLOCK_TS_MASK);
++	ret = dma_set_max_seg_size(idma64->dma.dev, IDMA64C_CTLH_BLOCK_TS_MASK);
++	if (ret)
++		return ret;
+ 
+ 	ret = dma_async_device_register(&idma64->dma);
+ 	if (ret)
+diff --git a/drivers/extcon/Kconfig b/drivers/extcon/Kconfig
+index aac507bff135c..09485803280ef 100644
+--- a/drivers/extcon/Kconfig
++++ b/drivers/extcon/Kconfig
+@@ -121,7 +121,8 @@ config EXTCON_MAX77843
+ 
+ config EXTCON_MAX8997
+ 	tristate "Maxim MAX8997 EXTCON Support"
+-	depends on MFD_MAX8997 && IRQ_DOMAIN
++	depends on MFD_MAX8997
++	select IRQ_DOMAIN
+ 	help
+ 	  If you say yes here you get support for the MUIC device of
+ 	  Maxim MAX8997 PMIC. The MAX8997 MUIC is a USB port accessory
+diff --git a/drivers/firmware/dmi-id.c b/drivers/firmware/dmi-id.c
+index 86d71b0212b1b..0cfc60cc267d4 100644
+--- a/drivers/firmware/dmi-id.c
++++ b/drivers/firmware/dmi-id.c
+@@ -164,9 +164,14 @@ static int dmi_dev_uevent(struct device *dev, struct kobj_uevent_env *env)
+ 	return 0;
+ }
+ 
++static void dmi_dev_release(struct device *dev)
++{
++	kfree(dev);
++}
++
+ static struct class dmi_class = {
+ 	.name = "dmi",
+-	.dev_release = (void(*)(struct device *)) kfree,
++	.dev_release = dmi_dev_release,
+ 	.dev_uevent = dmi_dev_uevent,
+ };
+ 
+diff --git a/drivers/firmware/raspberrypi.c b/drivers/firmware/raspberrypi.c
+index 45ff03da234a6..c8a292df6fea5 100644
+--- a/drivers/firmware/raspberrypi.c
++++ b/drivers/firmware/raspberrypi.c
+@@ -9,6 +9,7 @@
+ #include <linux/dma-mapping.h>
+ #include <linux/kref.h>
+ #include <linux/mailbox_client.h>
++#include <linux/mailbox_controller.h>
+ #include <linux/module.h>
+ #include <linux/of_platform.h>
+ #include <linux/platform_device.h>
+@@ -96,8 +97,8 @@ int rpi_firmware_property_list(struct rpi_firmware *fw,
+ 	if (size & 3)
+ 		return -EINVAL;
+ 
+-	buf = dma_alloc_coherent(fw->cl.dev, PAGE_ALIGN(size), &bus_addr,
+-				 GFP_ATOMIC);
++	buf = dma_alloc_coherent(fw->chan->mbox->dev, PAGE_ALIGN(size),
++				 &bus_addr, GFP_ATOMIC);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+@@ -125,7 +126,7 @@ int rpi_firmware_property_list(struct rpi_firmware *fw,
+ 		ret = -EINVAL;
+ 	}
+ 
+-	dma_free_coherent(fw->cl.dev, PAGE_ALIGN(size), buf, bus_addr);
++	dma_free_coherent(fw->chan->mbox->dev, PAGE_ALIGN(size), buf, bus_addr);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/fpga/dfl-fme-region.c b/drivers/fpga/dfl-fme-region.c
+index 1eeb42af10122..4aebde0a7f1c3 100644
+--- a/drivers/fpga/dfl-fme-region.c
++++ b/drivers/fpga/dfl-fme-region.c
+@@ -30,6 +30,7 @@ static int fme_region_get_bridges(struct fpga_region *region)
+ static int fme_region_probe(struct platform_device *pdev)
+ {
+ 	struct dfl_fme_region_pdata *pdata = dev_get_platdata(&pdev->dev);
++	struct fpga_region_info info = { 0 };
+ 	struct device *dev = &pdev->dev;
+ 	struct fpga_region *region;
+ 	struct fpga_manager *mgr;
+@@ -39,20 +40,18 @@ static int fme_region_probe(struct platform_device *pdev)
+ 	if (IS_ERR(mgr))
+ 		return -EPROBE_DEFER;
+ 
+-	region = devm_fpga_region_create(dev, mgr, fme_region_get_bridges);
+-	if (!region) {
+-		ret = -ENOMEM;
++	info.mgr = mgr;
++	info.compat_id = mgr->compat_id;
++	info.get_bridges = fme_region_get_bridges;
++	info.priv = pdata;
++	region = fpga_region_register_full(dev, &info);
++	if (IS_ERR(region)) {
++		ret = PTR_ERR(region);
+ 		goto eprobe_mgr_put;
+ 	}
+ 
+-	region->priv = pdata;
+-	region->compat_id = mgr->compat_id;
+ 	platform_set_drvdata(pdev, region);
+ 
+-	ret = fpga_region_register(region);
+-	if (ret)
+-		goto eprobe_mgr_put;
+-
+ 	dev_dbg(dev, "DFL FME FPGA Region probed\n");
+ 
+ 	return 0;
+diff --git a/drivers/fpga/dfl.c b/drivers/fpga/dfl.c
+index eb8a6e329af9b..ae220365fc04f 100644
+--- a/drivers/fpga/dfl.c
++++ b/drivers/fpga/dfl.c
+@@ -1400,19 +1400,15 @@ dfl_fpga_feature_devs_enumerate(struct dfl_fpga_enum_info *info)
+ 	if (!cdev)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	cdev->region = devm_fpga_region_create(info->dev, NULL, NULL);
+-	if (!cdev->region) {
+-		ret = -ENOMEM;
+-		goto free_cdev_exit;
+-	}
+-
+ 	cdev->parent = info->dev;
+ 	mutex_init(&cdev->lock);
+ 	INIT_LIST_HEAD(&cdev->port_dev_list);
+ 
+-	ret = fpga_region_register(cdev->region);
+-	if (ret)
++	cdev->region = fpga_region_register(info->dev, NULL, NULL);
++	if (IS_ERR(cdev->region)) {
++		ret = PTR_ERR(cdev->region);
+ 		goto free_cdev_exit;
++	}
+ 
+ 	/* create and init build info for enumeration */
+ 	binfo = devm_kzalloc(info->dev, sizeof(*binfo), GFP_KERNEL);
+diff --git a/drivers/fpga/fpga-region.c b/drivers/fpga/fpga-region.c
+index c3134b89c3fe5..d73daea579ac7 100644
+--- a/drivers/fpga/fpga-region.c
++++ b/drivers/fpga/fpga-region.c
+@@ -33,14 +33,14 @@ struct fpga_region *fpga_region_class_find(
+ EXPORT_SYMBOL_GPL(fpga_region_class_find);
+ 
+ /**
+- * fpga_region_get - get an exclusive reference to a fpga region
++ * fpga_region_get - get an exclusive reference to an fpga region
+  * @region: FPGA Region struct
+  *
+  * Caller should call fpga_region_put() when done with region.
+  *
+  * Return fpga_region struct if successful.
+  * Return -EBUSY if someone already has a reference to the region.
+- * Return -ENODEV if @np is not a FPGA Region.
++ * Return -ENODEV if @np is not an FPGA Region.
+  */
+ static struct fpga_region *fpga_region_get(struct fpga_region *region)
+ {
+@@ -52,7 +52,7 @@ static struct fpga_region *fpga_region_get(struct fpga_region *region)
+ 	}
+ 
+ 	get_device(dev);
+-	if (!try_module_get(dev->parent->driver->owner)) {
++	if (!try_module_get(region->ops_owner)) {
+ 		put_device(dev);
+ 		mutex_unlock(&region->mutex);
+ 		return ERR_PTR(-ENODEV);
+@@ -74,7 +74,7 @@ static void fpga_region_put(struct fpga_region *region)
+ 
+ 	dev_dbg(dev, "put\n");
+ 
+-	module_put(dev->parent->driver->owner);
++	module_put(region->ops_owner);
+ 	put_device(dev);
+ 	mutex_unlock(&region->mutex);
+ }
+@@ -180,48 +180,60 @@ static struct attribute *fpga_region_attrs[] = {
+ ATTRIBUTE_GROUPS(fpga_region);
+ 
+ /**
+- * fpga_region_create - alloc and init a struct fpga_region
+- * @dev: device parent
+- * @mgr: manager that programs this region
+- * @get_bridges: optional function to get bridges to a list
+- *
+- * The caller of this function is responsible for freeing the resulting region
+- * struct with fpga_region_free().  Using devm_fpga_region_create() instead is
+- * recommended.
++ * __fpga_region_register_full - create and register an FPGA Region device
++ * @parent: device parent
++ * @info: parameters for FPGA Region
++ * @owner: module containing the get_bridges function
+  *
+- * Return: struct fpga_region or NULL
++ * Return: struct fpga_region or ERR_PTR()
+  */
+-struct fpga_region
+-*fpga_region_create(struct device *dev,
+-		    struct fpga_manager *mgr,
+-		    int (*get_bridges)(struct fpga_region *))
++struct fpga_region *
++__fpga_region_register_full(struct device *parent, const struct fpga_region_info *info,
++			    struct module *owner)
+ {
+ 	struct fpga_region *region;
+ 	int id, ret = 0;
+ 
++	if (!info) {
++		dev_err(parent,
++			"Attempt to register without required info structure\n");
++		return ERR_PTR(-EINVAL);
++	}
++
+ 	region = kzalloc(sizeof(*region), GFP_KERNEL);
+ 	if (!region)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	id = ida_simple_get(&fpga_region_ida, 0, 0, GFP_KERNEL);
+-	if (id < 0)
++	if (id < 0) {
++		ret = id;
+ 		goto err_free;
++	}
++
++	region->mgr = info->mgr;
++	region->compat_id = info->compat_id;
++	region->priv = info->priv;
++	region->get_bridges = info->get_bridges;
++	region->ops_owner = owner;
+ 
+-	region->mgr = mgr;
+-	region->get_bridges = get_bridges;
+ 	mutex_init(&region->mutex);
+ 	INIT_LIST_HEAD(&region->bridge_list);
+ 
+-	device_initialize(&region->dev);
+ 	region->dev.class = fpga_region_class;
+-	region->dev.parent = dev;
+-	region->dev.of_node = dev->of_node;
++	region->dev.parent = parent;
++	region->dev.of_node = parent->of_node;
+ 	region->dev.id = id;
+ 
+ 	ret = dev_set_name(&region->dev, "region%d", id);
+ 	if (ret)
+ 		goto err_remove;
+ 
++	ret = device_register(&region->dev);
++	if (ret) {
++		put_device(&region->dev);
++		return ERR_PTR(ret);
++	}
++
+ 	return region;
+ 
+ err_remove:
+@@ -229,84 +241,41 @@ struct fpga_region
+ err_free:
+ 	kfree(region);
+ 
+-	return NULL;
+-}
+-EXPORT_SYMBOL_GPL(fpga_region_create);
+-
+-/**
+- * fpga_region_free - free a FPGA region created by fpga_region_create()
+- * @region: FPGA region
+- */
+-void fpga_region_free(struct fpga_region *region)
+-{
+-	ida_simple_remove(&fpga_region_ida, region->dev.id);
+-	kfree(region);
+-}
+-EXPORT_SYMBOL_GPL(fpga_region_free);
+-
+-static void devm_fpga_region_release(struct device *dev, void *res)
+-{
+-	struct fpga_region *region = *(struct fpga_region **)res;
+-
+-	fpga_region_free(region);
++	return ERR_PTR(ret);
+ }
++EXPORT_SYMBOL_GPL(__fpga_region_register_full);
+ 
+ /**
+- * devm_fpga_region_create - create and initialize a managed FPGA region struct
+- * @dev: device parent
++ * __fpga_region_register - create and register an FPGA Region device
++ * @parent: device parent
+  * @mgr: manager that programs this region
+  * @get_bridges: optional function to get bridges to a list
++ * @owner: module containing the get_bridges function
+  *
+- * This function is intended for use in a FPGA region driver's probe function.
+- * After the region driver creates the region struct with
+- * devm_fpga_region_create(), it should register it with fpga_region_register().
+- * The region driver's remove function should call fpga_region_unregister().
+- * The region struct allocated with this function will be freed automatically on
+- * driver detach.  This includes the case of a probe function returning error
+- * before calling fpga_region_register(), the struct will still get cleaned up.
++ * This simple version of the register function should be sufficient for most users.
++ * The fpga_region_register_full() function is available for users that need to
++ * pass additional, optional parameters.
+  *
+- * Return: struct fpga_region or NULL
++ * Return: struct fpga_region or ERR_PTR()
+  */
+-struct fpga_region
+-*devm_fpga_region_create(struct device *dev,
+-			 struct fpga_manager *mgr,
+-			 int (*get_bridges)(struct fpga_region *))
++struct fpga_region *
++__fpga_region_register(struct device *parent, struct fpga_manager *mgr,
++		       int (*get_bridges)(struct fpga_region *), struct module *owner)
+ {
+-	struct fpga_region **ptr, *region;
+-
+-	ptr = devres_alloc(devm_fpga_region_release, sizeof(*ptr), GFP_KERNEL);
+-	if (!ptr)
+-		return NULL;
++	struct fpga_region_info info = { 0 };
+ 
+-	region = fpga_region_create(dev, mgr, get_bridges);
+-	if (!region) {
+-		devres_free(ptr);
+-	} else {
+-		*ptr = region;
+-		devres_add(dev, ptr);
+-	}
++	info.mgr = mgr;
++	info.get_bridges = get_bridges;
+ 
+-	return region;
++	return __fpga_region_register_full(parent, &info, owner);
+ }
+-EXPORT_SYMBOL_GPL(devm_fpga_region_create);
++EXPORT_SYMBOL_GPL(__fpga_region_register);
+ 
+ /**
+- * fpga_region_register - register a FPGA region
++ * fpga_region_unregister - unregister an FPGA region
+  * @region: FPGA region
+  *
+- * Return: 0 or -errno
+- */
+-int fpga_region_register(struct fpga_region *region)
+-{
+-	return device_add(&region->dev);
+-}
+-EXPORT_SYMBOL_GPL(fpga_region_register);
+-
+-/**
+- * fpga_region_unregister - unregister a FPGA region
+- * @region: FPGA region
+- *
+- * This function is intended for use in a FPGA region driver's remove function.
++ * This function is intended for use in an FPGA region driver's remove function.
+  */
+ void fpga_region_unregister(struct fpga_region *region)
+ {
+@@ -316,6 +285,10 @@ EXPORT_SYMBOL_GPL(fpga_region_unregister);
+ 
+ static void fpga_region_dev_release(struct device *dev)
+ {
++	struct fpga_region *region = to_fpga_region(dev);
++
++	ida_simple_remove(&fpga_region_ida, region->dev.id);
++	kfree(region);
+ }
+ 
+ /**
+diff --git a/drivers/fpga/of-fpga-region.c b/drivers/fpga/of-fpga-region.c
+index e405309baadc1..466e083654ae1 100644
+--- a/drivers/fpga/of-fpga-region.c
++++ b/drivers/fpga/of-fpga-region.c
+@@ -405,16 +405,12 @@ static int of_fpga_region_probe(struct platform_device *pdev)
+ 	if (IS_ERR(mgr))
+ 		return -EPROBE_DEFER;
+ 
+-	region = devm_fpga_region_create(dev, mgr, of_fpga_region_get_bridges);
+-	if (!region) {
+-		ret = -ENOMEM;
++	region = fpga_region_register(dev, mgr, of_fpga_region_get_bridges);
++	if (IS_ERR(region)) {
++		ret = PTR_ERR(region);
+ 		goto eprobe_mgr_put;
+ 	}
+ 
+-	ret = fpga_region_register(region);
+-	if (ret)
+-		goto eprobe_mgr_put;
+-
+ 	of_platform_populate(np, fpga_region_of_match, NULL, &region->dev);
+ 	platform_set_drvdata(pdev, region);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index dbcaef3f35da9..c54834ed8751e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -2073,6 +2073,9 @@ static int sdma_v4_0_process_trap_irq(struct amdgpu_device *adev,
+ 
+ 	DRM_DEBUG("IH: SDMA trap\n");
+ 	instance = sdma_v4_0_irq_id_to_seq(entry->client_id);
++	if (instance < 0)
++		return instance;
++
+ 	switch (entry->ring_id) {
+ 	case 0:
+ 		amdgpu_fence_process(&adev->sdma.instance[instance].ring);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index d243e60c6eef7..534f2dec6356f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -766,6 +766,14 @@ struct kfd_process *kfd_create_process(struct file *filep)
+ 	if (process) {
+ 		pr_debug("Process already found\n");
+ 	} else {
++		/* If the process just called exec(3), it is possible that the
++		 * cleanup of the kfd_process (following the release of the mm
++		 * of the old process image) is still in the cleanup work queue.
++		 * Make sure to drain any job before trying to recreate any
++		 * resource for this process.
++		 */
++		flush_workqueue(kfd_process_wq);
++
+ 		process = create_process(thread);
+ 		if (IS_ERR(process))
+ 			goto out;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 3578e3b3536e3..29ef0ed44d5f4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2099,6 +2099,7 @@ static int dm_resume(void *handle)
+ 			dc_stream_release(dm_new_crtc_state->stream);
+ 			dm_new_crtc_state->stream = NULL;
+ 		}
++		dm_new_crtc_state->base.color_mgmt_changed = true;
+ 	}
+ 
+ 	for_each_new_plane_in_state(dm->cached_state, plane, new_plane_state, i) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
+index 7a00fe525dfba..bd9bc51983fec 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
+@@ -379,6 +379,11 @@ bool cm_helper_translate_curve_to_hw_format(
+ 				i += increment) {
+ 			if (j == hw_points - 1)
+ 				break;
++			if (i >= TRANSFER_FUNC_POINTS) {
++				DC_LOG_ERROR("Index out of bounds: i=%d, TRANSFER_FUNC_POINTS=%d\n",
++					     i, TRANSFER_FUNC_POINTS);
++				return false;
++			}
+ 			rgb_resulted[j].red = output_tf->tf_pts.red[i];
+ 			rgb_resulted[j].green = output_tf->tf_pts.green[i];
+ 			rgb_resulted[j].blue = output_tf->tf_pts.blue[i];
+diff --git a/drivers/gpu/drm/arm/malidp_mw.c b/drivers/gpu/drm/arm/malidp_mw.c
+index 7d0e7b031e447..fa5e77ee3af86 100644
+--- a/drivers/gpu/drm/arm/malidp_mw.c
++++ b/drivers/gpu/drm/arm/malidp_mw.c
+@@ -70,7 +70,10 @@ static void malidp_mw_connector_reset(struct drm_connector *connector)
+ 		__drm_atomic_helper_connector_destroy_state(connector->state);
+ 
+ 	kfree(connector->state);
+-	__drm_atomic_helper_connector_reset(connector, &mw_state->base);
++	connector->state = NULL;
++
++	if (mw_state)
++		__drm_atomic_helper_connector_reset(connector, &mw_state->base);
+ }
+ 
+ static enum drm_connector_status
+diff --git a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+index f56ff97c98990..ae99d04f00456 100644
+--- a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
++++ b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+@@ -1978,6 +1978,9 @@ static void cdns_mhdp_atomic_enable(struct drm_bridge *bridge,
+ 	mhdp_state = to_cdns_mhdp_bridge_state(new_state);
+ 
+ 	mhdp_state->current_mode = drm_mode_duplicate(bridge->dev, mode);
++	if (!mhdp_state->current_mode)
++		return;
++
+ 	drm_mode_set_name(mhdp_state->current_mode);
+ 
+ 	dev_dbg(mhdp->dev, "%s: Enabling mode %s\n", __func__, mode->name);
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611.c b/drivers/gpu/drm/bridge/lontium-lt9611.c
+index 660e05fa4a704..7f58ceda5b08a 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611.c
+@@ -766,10 +766,8 @@ static struct mipi_dsi_device *lt9611_attach_dsi(struct lt9611 *lt9611,
+ 	int ret;
+ 
+ 	host = of_find_mipi_dsi_host_by_node(dsi_node);
+-	if (!host) {
+-		dev_err(lt9611->dev, "failed to find dsi host\n");
+-		return ERR_PTR(-EPROBE_DEFER);
+-	}
++	if (!host)
++		return ERR_PTR(dev_err_probe(lt9611->dev, -EPROBE_DEFER, "failed to find dsi host\n"));
+ 
+ 	dsi = mipi_dsi_device_register_full(host, &info);
+ 	if (IS_ERR(dsi)) {
+diff --git a/drivers/gpu/drm/bridge/tc358775.c b/drivers/gpu/drm/bridge/tc358775.c
+index 2272adcc5b4ad..2e299cfe4e487 100644
+--- a/drivers/gpu/drm/bridge/tc358775.c
++++ b/drivers/gpu/drm/bridge/tc358775.c
+@@ -453,10 +453,6 @@ static void tc_bridge_enable(struct drm_bridge *bridge)
+ 	dev_dbg(tc->dev, "bus_formats %04x bpc %d\n",
+ 		connector->display_info.bus_formats[0],
+ 		tc->bpc);
+-	/*
+-	 * Default hardware register settings of tc358775 configured
+-	 * with MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA jeida-24 format
+-	 */
+ 	if (connector->display_info.bus_formats[0] ==
+ 		MEDIA_BUS_FMT_RGB888_1X7X4_SPWG) {
+ 		/* VESA-24 */
+@@ -467,14 +463,15 @@ static void tc_bridge_enable(struct drm_bridge *bridge)
+ 		d2l_write(tc->i2c, LV_MX1619, LV_MX(LVI_B6, LVI_B7, LVI_B1, LVI_B2));
+ 		d2l_write(tc->i2c, LV_MX2023, LV_MX(LVI_B3, LVI_B4, LVI_B5, LVI_L0));
+ 		d2l_write(tc->i2c, LV_MX2427, LV_MX(LVI_HS, LVI_VS, LVI_DE, LVI_R6));
+-	} else { /*  MEDIA_BUS_FMT_RGB666_1X7X3_SPWG - JEIDA-18 */
+-		d2l_write(tc->i2c, LV_MX0003, LV_MX(LVI_R0, LVI_R1, LVI_R2, LVI_R3));
+-		d2l_write(tc->i2c, LV_MX0407, LV_MX(LVI_R4, LVI_L0, LVI_R5, LVI_G0));
+-		d2l_write(tc->i2c, LV_MX0811, LV_MX(LVI_G1, LVI_G2, LVI_L0, LVI_L0));
+-		d2l_write(tc->i2c, LV_MX1215, LV_MX(LVI_G3, LVI_G4, LVI_G5, LVI_B0));
+-		d2l_write(tc->i2c, LV_MX1619, LV_MX(LVI_L0, LVI_L0, LVI_B1, LVI_B2));
+-		d2l_write(tc->i2c, LV_MX2023, LV_MX(LVI_B3, LVI_B4, LVI_B5, LVI_L0));
+-		d2l_write(tc->i2c, LV_MX2427, LV_MX(LVI_HS, LVI_VS, LVI_DE, LVI_L0));
++	} else {
++		/* JEIDA-18 and JEIDA-24 */
++		d2l_write(tc->i2c, LV_MX0003, LV_MX(LVI_R2, LVI_R3, LVI_R4, LVI_R5));
++		d2l_write(tc->i2c, LV_MX0407, LV_MX(LVI_R6, LVI_R1, LVI_R7, LVI_G2));
++		d2l_write(tc->i2c, LV_MX0811, LV_MX(LVI_G3, LVI_G4, LVI_G0, LVI_G1));
++		d2l_write(tc->i2c, LV_MX1215, LV_MX(LVI_G5, LVI_G6, LVI_G7, LVI_B2));
++		d2l_write(tc->i2c, LV_MX1619, LV_MX(LVI_B0, LVI_B1, LVI_B3, LVI_B4));
++		d2l_write(tc->i2c, LV_MX2023, LV_MX(LVI_B5, LVI_B6, LVI_B7, LVI_L0));
++		d2l_write(tc->i2c, LV_MX2427, LV_MX(LVI_HS, LVI_VS, LVI_DE, LVI_R0));
+ 	}
+ 
+ 	d2l_write(tc->i2c, VFUEN, VFUEN_EN);
+@@ -605,10 +602,8 @@ static int tc_bridge_attach(struct drm_bridge *bridge,
+ 						};
+ 
+ 	host = of_find_mipi_dsi_host_by_node(tc->host_node);
+-	if (!host) {
+-		dev_err(dev, "failed to find dsi host\n");
+-		return -EPROBE_DEFER;
+-	}
++	if (!host)
++		return dev_err_probe(dev, -EPROBE_DEFER, "failed to find dsi host\n");
+ 
+ 	dsi = mipi_dsi_device_register_full(host, &info);
+ 	if (IS_ERR(dsi)) {
+diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
+index 83918ac1f6086..1e9842aac4dc9 100644
+--- a/drivers/gpu/drm/drm_mipi_dsi.c
++++ b/drivers/gpu/drm/drm_mipi_dsi.c
+@@ -572,7 +572,7 @@ EXPORT_SYMBOL(mipi_dsi_set_maximum_return_packet_size);
+  *
+  * Return: 0 on success or a negative error code on failure.
+  */
+-ssize_t mipi_dsi_compression_mode(struct mipi_dsi_device *dsi, bool enable)
++int mipi_dsi_compression_mode(struct mipi_dsi_device *dsi, bool enable)
+ {
+ 	/* Note: Needs updating for non-default PPS or algorithm */
+ 	u8 tx[2] = { enable << 0, 0 };
+@@ -597,8 +597,8 @@ EXPORT_SYMBOL(mipi_dsi_compression_mode);
+  *
+  * Return: 0 on success or a negative error code on failure.
+  */
+-ssize_t mipi_dsi_picture_parameter_set(struct mipi_dsi_device *dsi,
+-				       const struct drm_dsc_picture_parameter_set *pps)
++int mipi_dsi_picture_parameter_set(struct mipi_dsi_device *dsi,
++				   const struct drm_dsc_picture_parameter_set *pps)
+ {
+ 	struct mipi_dsi_msg msg = {
+ 		.channel = dsi->channel,
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+index b20ea58907c2a..1dac9cd20d466 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+@@ -21,6 +21,9 @@ static struct mtk_drm_gem_obj *mtk_drm_gem_init(struct drm_device *dev,
+ 
+ 	size = round_up(size, PAGE_SIZE);
+ 
++	if (size == 0)
++		return ERR_PTR(-EINVAL);
++
+ 	mtk_gem_obj = kzalloc(sizeof(*mtk_gem_obj), GFP_KERNEL);
+ 	if (!mtk_gem_obj)
+ 		return ERR_PTR(-ENOMEM);
+diff --git a/drivers/gpu/drm/meson/meson_vclk.c b/drivers/gpu/drm/meson/meson_vclk.c
+index 0eb86943a3588..37127ba40aa97 100644
+--- a/drivers/gpu/drm/meson/meson_vclk.c
++++ b/drivers/gpu/drm/meson/meson_vclk.c
+@@ -790,13 +790,13 @@ meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq,
+ 				 FREQ_1000_1001(params[i].pixel_freq));
+ 		DRM_DEBUG_DRIVER("i = %d phy_freq = %d alt = %d\n",
+ 				 i, params[i].phy_freq,
+-				 FREQ_1000_1001(params[i].phy_freq/10)*10);
++				 FREQ_1000_1001(params[i].phy_freq/1000)*1000);
+ 		/* Match strict frequency */
+ 		if (phy_freq == params[i].phy_freq &&
+ 		    vclk_freq == params[i].vclk_freq)
+ 			return MODE_OK;
+ 		/* Match 1000/1001 variant */
+-		if (phy_freq == (FREQ_1000_1001(params[i].phy_freq/10)*10) &&
++		if (phy_freq == (FREQ_1000_1001(params[i].phy_freq/1000)*1000) &&
+ 		    vclk_freq == FREQ_1000_1001(params[i].vclk_freq))
+ 			return MODE_OK;
+ 	}
+@@ -1070,7 +1070,7 @@ void meson_vclk_setup(struct meson_drm *priv, unsigned int target,
+ 
+ 	for (freq = 0 ; params[freq].pixel_freq ; ++freq) {
+ 		if ((phy_freq == params[freq].phy_freq ||
+-		     phy_freq == FREQ_1000_1001(params[freq].phy_freq/10)*10) &&
++		     phy_freq == FREQ_1000_1001(params[freq].phy_freq/1000)*1000) &&
+ 		    (vclk_freq == params[freq].vclk_freq ||
+ 		     vclk_freq == FREQ_1000_1001(params[freq].vclk_freq))) {
+ 			if (vclk_freq != params[freq].vclk_freq)
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+index 8493d68ad8417..e1a97c10c0e41 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+@@ -448,9 +448,6 @@ static void dpu_encoder_phys_cmd_enable_helper(
+ 
+ 	_dpu_encoder_phys_cmd_pingpong_config(phys_enc);
+ 
+-	if (!dpu_encoder_phys_cmd_is_master(phys_enc))
+-		return;
+-
+ 	ctl = phys_enc->hw_ctl;
+ 	ctl->ops.get_bitmask_intf(ctl, &flush_mask, phys_enc->intf_idx);
+ 	ctl->ops.update_pending_flush(ctl, flush_mask);
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 51470020ba61d..7797aad592a19 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2235,6 +2235,9 @@ static const struct panel_desc innolux_g121x1_l03 = {
+ 		.unprepare = 200,
+ 		.disable = 400,
+ 	},
++	.bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG,
++	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
++	.connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+ 
+ /*
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 6d01258349faa..0bd49538cb6db 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1253,6 +1253,8 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi)
+ 		index = 1;
+ 
+ 	addr = of_get_address(dev->of_node, index, NULL, NULL);
++	if (!addr)
++		return -EINVAL;
+ 
+ 	vc4_hdmi->audio.dma_data.addr = be32_to_cpup(addr) + mai_data->offset;
+ 	vc4_hdmi->audio.dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+diff --git a/drivers/hid/intel-ish-hid/ipc/pci-ish.c b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+index c6d48a8648b70..4a16ede402ff5 100644
+--- a/drivers/hid/intel-ish-hid/ipc/pci-ish.c
++++ b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+@@ -164,6 +164,11 @@ static int ish_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	/* request and enable interrupt */
+ 	ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
++	if (ret < 0) {
++		dev_err(dev, "ISH: Failed to allocate IRQ vectors\n");
++		return ret;
++	}
++
+ 	if (!pdev->msi_enabled && !pdev->msix_enabled)
+ 		irq_flag = IRQF_SHARED;
+ 
+diff --git a/drivers/hwmon/shtc1.c b/drivers/hwmon/shtc1.c
+index 18546ebc8e9f7..0365643029aee 100644
+--- a/drivers/hwmon/shtc1.c
++++ b/drivers/hwmon/shtc1.c
+@@ -238,7 +238,7 @@ static int shtc1_probe(struct i2c_client *client)
+ 
+ 	if (np) {
+ 		data->setup.blocking_io = of_property_read_bool(np, "sensirion,blocking-io");
+-		data->setup.high_precision = !of_property_read_bool(np, "sensicon,low-precision");
++		data->setup.high_precision = !of_property_read_bool(np, "sensirion,low-precision");
+ 	} else {
+ 		if (client->dev.platform_data)
+ 			data->setup = *(struct shtc1_platform_data *)dev->platform_data;
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index e25438025b9f2..7f13687267fd3 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -289,6 +289,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7e24),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Meteor Lake-S CPU */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xae24),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{
+ 		/* Raptor Lake-S */
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7a26),
+diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
+index 2712e699ba08c..ae9ea3a1fa2aa 100644
+--- a/drivers/hwtracing/stm/core.c
++++ b/drivers/hwtracing/stm/core.c
+@@ -868,8 +868,11 @@ int stm_register_device(struct device *parent, struct stm_data *stm_data,
+ 		return -ENOMEM;
+ 
+ 	stm->major = register_chrdev(0, stm_data->name, &stm_fops);
+-	if (stm->major < 0)
+-		goto err_free;
++	if (stm->major < 0) {
++		err = stm->major;
++		vfree(stm);
++		return err;
++	}
+ 
+ 	device_initialize(&stm->dev);
+ 	stm->dev.devt = MKDEV(stm->major, 0);
+@@ -913,10 +916,8 @@ int stm_register_device(struct device *parent, struct stm_data *stm_data,
+ err_device:
+ 	unregister_chrdev(stm->major, stm_data->name);
+ 
+-	/* matches device_initialize() above */
++	/* calls stm_device_release() */
+ 	put_device(&stm->dev);
+-err_free:
+-	vfree(stm);
+ 
+ 	return err;
+ }
+diff --git a/drivers/iio/pressure/dps310.c b/drivers/iio/pressure/dps310.c
+index 1b6b9530f1662..7fdc7a0147f0e 100644
+--- a/drivers/iio/pressure/dps310.c
++++ b/drivers/iio/pressure/dps310.c
+@@ -730,7 +730,7 @@ static int dps310_read_pressure(struct dps310_data *data, int *val, int *val2,
+ 	}
+ }
+ 
+-static int dps310_calculate_temp(struct dps310_data *data)
++static int dps310_calculate_temp(struct dps310_data *data, int *val)
+ {
+ 	s64 c0;
+ 	s64 t;
+@@ -746,7 +746,9 @@ static int dps310_calculate_temp(struct dps310_data *data)
+ 	t = c0 + ((s64)data->temp_raw * (s64)data->c1);
+ 
+ 	/* Convert to milliCelsius and scale the temperature */
+-	return (int)div_s64(t * 1000LL, kt);
++	*val = (int)div_s64(t * 1000LL, kt);
++
++	return 0;
+ }
+ 
+ static int dps310_read_temp(struct dps310_data *data, int *val, int *val2,
+@@ -768,11 +770,10 @@ static int dps310_read_temp(struct dps310_data *data, int *val, int *val2,
+ 		if (rc)
+ 			return rc;
+ 
+-		rc = dps310_calculate_temp(data);
+-		if (rc < 0)
++		rc = dps310_calculate_temp(data, val);
++		if (rc)
+ 			return rc;
+ 
+-		*val = rc;
+ 		return IIO_VAL_INT;
+ 
+ 	case IIO_CHAN_INFO_OVERSAMPLING_RATIO:
+diff --git a/drivers/infiniband/hw/hns/hns_roce_alloc.c b/drivers/infiniband/hw/hns/hns_roce_alloc.c
+index 5b2baf89d1109..4bcaaa0524b12 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_alloc.c
++++ b/drivers/infiniband/hw/hns/hns_roce_alloc.c
+@@ -159,76 +159,96 @@ void hns_roce_bitmap_cleanup(struct hns_roce_bitmap *bitmap)
+ 
+ void hns_roce_buf_free(struct hns_roce_dev *hr_dev, struct hns_roce_buf *buf)
+ {
+-	struct device *dev = hr_dev->dev;
+-	u32 size = buf->size;
+-	int i;
++	struct hns_roce_buf_list *trunks;
++	u32 i;
+ 
+-	if (size == 0)
++	if (!buf)
+ 		return;
+ 
+-	buf->size = 0;
++	trunks = buf->trunk_list;
++	if (trunks) {
++		buf->trunk_list = NULL;
++		for (i = 0; i < buf->ntrunks; i++)
++			dma_free_coherent(hr_dev->dev, 1 << buf->trunk_shift,
++					  trunks[i].buf, trunks[i].map);
+ 
+-	if (hns_roce_buf_is_direct(buf)) {
+-		dma_free_coherent(dev, size, buf->direct.buf, buf->direct.map);
+-	} else {
+-		for (i = 0; i < buf->npages; ++i)
+-			if (buf->page_list[i].buf)
+-				dma_free_coherent(dev, 1 << buf->page_shift,
+-						  buf->page_list[i].buf,
+-						  buf->page_list[i].map);
+-		kfree(buf->page_list);
+-		buf->page_list = NULL;
++		kfree(trunks);
+ 	}
++
++	kfree(buf);
+ }
+ 
+-int hns_roce_buf_alloc(struct hns_roce_dev *hr_dev, u32 size, u32 max_direct,
+-		       struct hns_roce_buf *buf, u32 page_shift)
++/*
++ * Allocate the dma buffer for storing ROCEE table entries
++ *
++ * @size: required size
++ * @page_shift: the unit size in a continuous dma address range
++ * @flags: HNS_ROCE_BUF_ flags to control the allocation flow.
++ */
++struct hns_roce_buf *hns_roce_buf_alloc(struct hns_roce_dev *hr_dev, u32 size,
++					u32 page_shift, u32 flags)
+ {
+-	struct hns_roce_buf_list *buf_list;
+-	struct device *dev = hr_dev->dev;
+-	u32 page_size;
+-	int i;
++	u32 trunk_size, page_size, alloced_size;
++	struct hns_roce_buf_list *trunks;
++	struct hns_roce_buf *buf;
++	gfp_t gfp_flags;
++	u32 ntrunk, i;
+ 
+ 	/* The minimum shift of the page accessed by hw is HNS_HW_PAGE_SHIFT */
+-	buf->page_shift = max_t(int, HNS_HW_PAGE_SHIFT, page_shift);
++	if (WARN_ON(page_shift < HNS_HW_PAGE_SHIFT))
++		return ERR_PTR(-EINVAL);
++
++	gfp_flags = (flags & HNS_ROCE_BUF_NOSLEEP) ? GFP_ATOMIC : GFP_KERNEL;
++	buf = kzalloc(sizeof(*buf), gfp_flags);
++	if (!buf)
++		return ERR_PTR(-ENOMEM);
+ 
++	buf->page_shift = page_shift;
+ 	page_size = 1 << buf->page_shift;
+-	buf->npages = DIV_ROUND_UP(size, page_size);
+-
+-	/* required size is not bigger than one trunk size */
+-	if (size <= max_direct) {
+-		buf->page_list = NULL;
+-		buf->direct.buf = dma_alloc_coherent(dev, size,
+-						     &buf->direct.map,
+-						     GFP_KERNEL);
+-		if (!buf->direct.buf)
+-			return -ENOMEM;
++
++	/* Calc the trunk size and num by required size and page_shift */
++	if (flags & HNS_ROCE_BUF_DIRECT) {
++		buf->trunk_shift = ilog2(ALIGN(size, PAGE_SIZE));
++		ntrunk = 1;
+ 	} else {
+-		buf_list = kcalloc(buf->npages, sizeof(*buf_list), GFP_KERNEL);
+-		if (!buf_list)
+-			return -ENOMEM;
+-
+-		for (i = 0; i < buf->npages; i++) {
+-			buf_list[i].buf = dma_alloc_coherent(dev, page_size,
+-							     &buf_list[i].map,
+-							     GFP_KERNEL);
+-			if (!buf_list[i].buf)
+-				break;
+-		}
++		buf->trunk_shift = ilog2(ALIGN(page_size, PAGE_SIZE));
++		ntrunk = DIV_ROUND_UP(size, 1 << buf->trunk_shift);
++	}
+ 
+-		if (i != buf->npages && i > 0) {
+-			while (i-- > 0)
+-				dma_free_coherent(dev, page_size,
+-						  buf_list[i].buf,
+-						  buf_list[i].map);
+-			kfree(buf_list);
+-			return -ENOMEM;
+-		}
+-		buf->page_list = buf_list;
++	trunks = kcalloc(ntrunk, sizeof(*trunks), gfp_flags);
++	if (!trunks) {
++		kfree(buf);
++		return ERR_PTR(-ENOMEM);
+ 	}
+-	buf->size = size;
+ 
+-	return 0;
++	trunk_size = 1 << buf->trunk_shift;
++	alloced_size = 0;
++	for (i = 0; i < ntrunk; i++) {
++		trunks[i].buf = dma_alloc_coherent(hr_dev->dev, trunk_size,
++						   &trunks[i].map, gfp_flags);
++		if (!trunks[i].buf)
++			break;
++
++		alloced_size += trunk_size;
++	}
++
++	buf->ntrunks = i;
++
++	/* In nofail mode, it's only failed when the alloced size is 0 */
++	if ((flags & HNS_ROCE_BUF_NOFAIL) ? i == 0 : i != ntrunk) {
++		for (i = 0; i < buf->ntrunks; i++)
++			dma_free_coherent(hr_dev->dev, trunk_size,
++					  trunks[i].buf, trunks[i].map);
++
++		kfree(trunks);
++		kfree(buf);
++		return ERR_PTR(-ENOMEM);
++	}
++
++	buf->npages = DIV_ROUND_UP(alloced_size, page_size);
++	buf->trunk_list = trunks;
++
++	return buf;
+ }
+ 
+ int hns_roce_get_kmem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c
+index c493d7644b577..339e3fd98b0b4 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
++++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
+@@ -60,7 +60,7 @@ static int hns_roce_cmd_mbox_post_hw(struct hns_roce_dev *hr_dev, u64 in_param,
+ static int __hns_roce_cmd_mbox_poll(struct hns_roce_dev *hr_dev, u64 in_param,
+ 				    u64 out_param, unsigned long in_modifier,
+ 				    u8 op_modifier, u16 op,
+-				    unsigned long timeout)
++				    unsigned int timeout)
+ {
+ 	struct device *dev = hr_dev->dev;
+ 	int ret;
+@@ -78,7 +78,7 @@ static int __hns_roce_cmd_mbox_poll(struct hns_roce_dev *hr_dev, u64 in_param,
+ 
+ static int hns_roce_cmd_mbox_poll(struct hns_roce_dev *hr_dev, u64 in_param,
+ 				  u64 out_param, unsigned long in_modifier,
+-				  u8 op_modifier, u16 op, unsigned long timeout)
++				  u8 op_modifier, u16 op, unsigned int timeout)
+ {
+ 	int ret;
+ 
+@@ -108,7 +108,7 @@ void hns_roce_cmd_event(struct hns_roce_dev *hr_dev, u16 token, u8 status,
+ static int __hns_roce_cmd_mbox_wait(struct hns_roce_dev *hr_dev, u64 in_param,
+ 				    u64 out_param, unsigned long in_modifier,
+ 				    u8 op_modifier, u16 op,
+-				    unsigned long timeout)
++				    unsigned int timeout)
+ {
+ 	struct hns_roce_cmdq *cmd = &hr_dev->cmd;
+ 	struct hns_roce_cmd_context *context;
+@@ -159,7 +159,7 @@ static int __hns_roce_cmd_mbox_wait(struct hns_roce_dev *hr_dev, u64 in_param,
+ 
+ static int hns_roce_cmd_mbox_wait(struct hns_roce_dev *hr_dev, u64 in_param,
+ 				  u64 out_param, unsigned long in_modifier,
+-				  u8 op_modifier, u16 op, unsigned long timeout)
++				  u8 op_modifier, u16 op, unsigned int timeout)
+ {
+ 	int ret;
+ 
+@@ -173,7 +173,7 @@ static int hns_roce_cmd_mbox_wait(struct hns_roce_dev *hr_dev, u64 in_param,
+ 
+ int hns_roce_cmd_mbox(struct hns_roce_dev *hr_dev, u64 in_param, u64 out_param,
+ 		      unsigned long in_modifier, u8 op_modifier, u16 op,
+-		      unsigned long timeout)
++		      unsigned int timeout)
+ {
+ 	int ret;
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.h b/drivers/infiniband/hw/hns/hns_roce_cmd.h
+index 8e63b827f28cc..8025e7f657fa6 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cmd.h
++++ b/drivers/infiniband/hw/hns/hns_roce_cmd.h
+@@ -141,7 +141,7 @@ enum {
+ 
+ int hns_roce_cmd_mbox(struct hns_roce_dev *hr_dev, u64 in_param, u64 out_param,
+ 		      unsigned long in_modifier, u8 op_modifier, u16 op,
+-		      unsigned long timeout);
++		      unsigned int timeout);
+ 
+ struct hns_roce_cmd_mailbox *
+ hns_roce_alloc_cmd_mailbox(struct hns_roce_dev *hr_dev);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_common.h b/drivers/infiniband/hw/hns/hns_roce_common.h
+index f5669ff8cfebc..e57e812b1546d 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_common.h
++++ b/drivers/infiniband/hw/hns/hns_roce_common.h
+@@ -38,19 +38,19 @@
+ #define roce_raw_write(value, addr) \
+ 	__raw_writel((__force u32)cpu_to_le32(value), (addr))
+ 
+-#define roce_get_field(origin, mask, shift) \
+-	(((le32_to_cpu(origin)) & (mask)) >> (shift))
++#define roce_get_field(origin, mask, shift)                                    \
++	((le32_to_cpu(origin) & (mask)) >> (u32)(shift))
+ 
+ #define roce_get_bit(origin, shift) \
+ 	roce_get_field((origin), (1ul << (shift)), (shift))
+ 
+-#define roce_set_field(origin, mask, shift, val) \
+-	do { \
+-		(origin) &= ~cpu_to_le32(mask); \
+-		(origin) |= cpu_to_le32(((u32)(val) << (shift)) & (mask)); \
++#define roce_set_field(origin, mask, shift, val)                               \
++	do {                                                                   \
++		(origin) &= ~cpu_to_le32(mask);                                \
++		(origin) |= cpu_to_le32(((u32)(val) << (u32)(shift)) & (mask));     \
+ 	} while (0)
+ 
+-#define roce_set_bit(origin, shift, val) \
++#define roce_set_bit(origin, shift, val)                                       \
+ 	roce_set_field((origin), (1ul << (shift)), (shift), (val))
+ 
+ #define ROCEE_GLB_CFG_ROCEE_DB_SQ_MODE_S 3
+diff --git a/drivers/infiniband/hw/hns/hns_roce_db.c b/drivers/infiniband/hw/hns/hns_roce_db.c
+index bff6abdccfb0c..5cb7376ce9789 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_db.c
++++ b/drivers/infiniband/hw/hns/hns_roce_db.c
+@@ -95,8 +95,8 @@ static struct hns_roce_db_pgdir *hns_roce_alloc_db_pgdir(
+ static int hns_roce_alloc_db_from_pgdir(struct hns_roce_db_pgdir *pgdir,
+ 					struct hns_roce_db *db, int order)
+ {
+-	int o;
+-	int i;
++	unsigned long o;
++	unsigned long i;
+ 
+ 	for (o = order; o <= 1; ++o) {
+ 		i = find_first_bit(pgdir->bits[o], HNS_ROCE_DB_PER_PAGE >> o);
+@@ -154,8 +154,8 @@ int hns_roce_alloc_db(struct hns_roce_dev *hr_dev, struct hns_roce_db *db,
+ 
+ void hns_roce_free_db(struct hns_roce_dev *hr_dev, struct hns_roce_db *db)
+ {
+-	int o;
+-	int i;
++	unsigned long o;
++	unsigned long i;
+ 
+ 	mutex_lock(&hr_dev->pgdir_mutex);
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 09b5e4935c2ca..fe54e09eeccdd 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -115,11 +115,15 @@
+ #define HNS_ROCE_IDX_QUE_ENTRY_SZ		4
+ #define SRQ_DB_REG				0x230
+ 
++#define HNS_ROCE_QP_BANK_NUM 8
++
+ /* The chip implementation of the consumer index is calculated
+  * according to twice the actual EQ depth
+  */
+ #define EQ_DEPTH_COEFF				2
+ 
++#define CQ_BANKID_MASK GENMASK(1, 0)
++
+ enum {
+ 	SERV_TYPE_RC,
+ 	SERV_TYPE_UC,
+@@ -263,9 +267,6 @@ enum {
+ #define HNS_HW_PAGE_SHIFT			12
+ #define HNS_HW_PAGE_SIZE			(1 << HNS_HW_PAGE_SHIFT)
+ 
+-/* The minimum page count for hardware access page directly. */
+-#define HNS_HW_DIRECT_PAGE_COUNT 2
+-
+ struct hns_roce_uar {
+ 	u64		pfn;
+ 	unsigned long	index;
+@@ -316,7 +317,7 @@ struct hns_roce_hem_table {
+ };
+ 
+ struct hns_roce_buf_region {
+-	int offset; /* page offset */
++	u32 offset; /* page offset */
+ 	u32 count; /* page count */
+ 	int hopnum; /* addressing hop num */
+ };
+@@ -336,10 +337,10 @@ struct hns_roce_buf_attr {
+ 		size_t	size;  /* region size */
+ 		int	hopnum; /* multi-hop addressing hop num */
+ 	} region[HNS_ROCE_MAX_BT_REGION];
+-	int region_count; /* valid region count */
++	unsigned int region_count; /* valid region count */
+ 	unsigned int page_shift;  /* buffer page shift */
+ 	bool fixed_page; /* decide page shift is fixed-size or maximum size */
+-	int user_access; /* umem access flag */
++	unsigned int user_access; /* umem access flag */
+ 	bool mtt_only; /* only alloc buffer-required MTT memory */
+ };
+ 
+@@ -350,7 +351,7 @@ struct hns_roce_hem_cfg {
+ 	unsigned int	buf_pg_shift; /* buffer page shift */
+ 	unsigned int	buf_pg_count;  /* buffer page count */
+ 	struct hns_roce_buf_region region[HNS_ROCE_MAX_BT_REGION];
+-	int		region_count;
++	unsigned int	region_count;
+ };
+ 
+ /* memory translate region */
+@@ -398,7 +399,7 @@ struct hns_roce_wq {
+ 	u64		*wrid;     /* Work request ID */
+ 	spinlock_t	lock;
+ 	u32		wqe_cnt;  /* WQE num */
+-	int		max_gs;
++	u32		max_gs;
+ 	int		offset;
+ 	int		wqe_shift;	/* WQE size */
+ 	u32		head;
+@@ -417,11 +418,26 @@ struct hns_roce_buf_list {
+ 	dma_addr_t	map;
+ };
+ 
++/*
++ * %HNS_ROCE_BUF_DIRECT indicates that the all memory must be in a continuous
++ * dma address range.
++ *
++ * %HNS_ROCE_BUF_NOSLEEP indicates that the caller cannot sleep.
++ *
++ * %HNS_ROCE_BUF_NOFAIL allocation only failed when allocated size is zero, even
++ * the allocated size is smaller than the required size.
++ */
++enum {
++	HNS_ROCE_BUF_DIRECT = BIT(0),
++	HNS_ROCE_BUF_NOSLEEP = BIT(1),
++	HNS_ROCE_BUF_NOFAIL = BIT(2),
++};
++
+ struct hns_roce_buf {
+-	struct hns_roce_buf_list	direct;
+-	struct hns_roce_buf_list	*page_list;
++	struct hns_roce_buf_list	*trunk_list;
++	u32				ntrunks;
+ 	u32				npages;
+-	u32				size;
++	unsigned int			trunk_shift;
+ 	unsigned int			page_shift;
+ };
+ 
+@@ -449,8 +465,8 @@ struct hns_roce_db {
+ 	} u;
+ 	dma_addr_t	dma;
+ 	void		*virt_addr;
+-	int		index;
+-	int		order;
++	unsigned long	index;
++	unsigned long	order;
+ };
+ 
+ struct hns_roce_cq {
+@@ -498,8 +514,8 @@ struct hns_roce_srq {
+ 	u64		       *wrid;
+ 	struct hns_roce_idx_que idx_que;
+ 	spinlock_t		lock;
+-	int			head;
+-	int			tail;
++	u16			head;
++	u16			tail;
+ 	struct mutex		mutex;
+ 	void (*event)(struct hns_roce_srq *srq, enum hns_roce_event event);
+ };
+@@ -508,13 +524,22 @@ struct hns_roce_uar_table {
+ 	struct hns_roce_bitmap bitmap;
+ };
+ 
++struct hns_roce_bank {
++	struct ida ida;
++	u32 inuse; /* Number of IDs allocated */
++	u32 min; /* Lowest ID to allocate.  */
++	u32 max; /* Highest ID to allocate. */
++	u32 next; /* Next ID to allocate. */
++};
++
+ struct hns_roce_qp_table {
+-	struct hns_roce_bitmap		bitmap;
+ 	struct hns_roce_hem_table	qp_table;
+ 	struct hns_roce_hem_table	irrl_table;
+ 	struct hns_roce_hem_table	trrl_table;
+ 	struct hns_roce_hem_table	sccc_table;
+ 	struct mutex			scc_mutex;
++	struct hns_roce_bank bank[HNS_ROCE_QP_BANK_NUM];
++	struct mutex bank_mutex;
+ };
+ 
+ struct hns_roce_cq_table {
+@@ -728,11 +753,11 @@ struct hns_roce_eq {
+ 	int				type_flag; /* Aeq:1 ceq:0 */
+ 	int				eqn;
+ 	u32				entries;
+-	int				log_entries;
++	u32				log_entries;
+ 	int				eqe_size;
+ 	int				irq;
+ 	int				log_page_size;
+-	int				cons_index;
++	u32				cons_index;
+ 	struct hns_roce_buf_list	*buf_list;
+ 	int				over_ignore;
+ 	int				coalesce;
+@@ -740,7 +765,7 @@ struct hns_roce_eq {
+ 	int				hop_num;
+ 	struct hns_roce_mtr		mtr;
+ 	u16				eq_max_cnt;
+-	int				eq_period;
++	u32				eq_period;
+ 	int				shift;
+ 	int				event_type;
+ 	int				sub_type;
+@@ -763,8 +788,8 @@ struct hns_roce_caps {
+ 	u32		max_sq_inline;
+ 	u32		max_rq_sg;
+ 	u32		max_extend_sg;
+-	int		num_qps;
+-	int             reserved_qps;
++	u32		num_qps;
++	u32		reserved_qps;
+ 	int		num_qpc_timer;
+ 	int		num_cqc_timer;
+ 	int		num_srqs;
+@@ -776,7 +801,7 @@ struct hns_roce_caps {
+ 	u32		max_srq_desc_sz;
+ 	int		max_qp_init_rdma;
+ 	int		max_qp_dest_rdma;
+-	int		num_cqs;
++	u32		num_cqs;
+ 	u32		max_cqes;
+ 	u32		min_cqes;
+ 	u32		min_wqes;
+@@ -785,7 +810,7 @@ struct hns_roce_caps {
+ 	int		num_aeq_vectors;
+ 	int		num_comp_vectors;
+ 	int		num_other_vectors;
+-	int		num_mtpts;
++	u32		num_mtpts;
+ 	u32		num_mtt_segs;
+ 	u32		num_cqe_segs;
+ 	u32		num_srqwqe_segs;
+@@ -896,7 +921,7 @@ struct hns_roce_hw {
+ 	int (*post_mbox)(struct hns_roce_dev *hr_dev, u64 in_param,
+ 			 u64 out_param, u32 in_modifier, u8 op_modifier, u16 op,
+ 			 u16 token, int event);
+-	int (*chk_mbox)(struct hns_roce_dev *hr_dev, unsigned long timeout);
++	int (*chk_mbox)(struct hns_roce_dev *hr_dev, unsigned int timeout);
+ 	int (*rst_prc_mbox)(struct hns_roce_dev *hr_dev);
+ 	int (*set_gid)(struct hns_roce_dev *hr_dev, u8 port, int gid_index,
+ 		       const union ib_gid *gid, const struct ib_gid_attr *attr);
+@@ -1067,29 +1092,19 @@ static inline struct hns_roce_qp
+ 	return xa_load(&hr_dev->qp_table_xa, qpn & (hr_dev->caps.num_qps - 1));
+ }
+ 
+-static inline bool hns_roce_buf_is_direct(struct hns_roce_buf *buf)
++static inline void *hns_roce_buf_offset(struct hns_roce_buf *buf,
++					unsigned int offset)
+ {
+-	if (buf->page_list)
+-		return false;
+-
+-	return true;
++	return (char *)(buf->trunk_list[offset >> buf->trunk_shift].buf) +
++			(offset & ((1 << buf->trunk_shift) - 1));
+ }
+ 
+-static inline void *hns_roce_buf_offset(struct hns_roce_buf *buf, int offset)
++static inline dma_addr_t hns_roce_buf_page(struct hns_roce_buf *buf, u32 idx)
+ {
+-	if (hns_roce_buf_is_direct(buf))
+-		return (char *)(buf->direct.buf) + (offset & (buf->size - 1));
+-
+-	return (char *)(buf->page_list[offset >> buf->page_shift].buf) +
+-	       (offset & ((1 << buf->page_shift) - 1));
+-}
++	unsigned int offset = idx << buf->page_shift;
+ 
+-static inline dma_addr_t hns_roce_buf_page(struct hns_roce_buf *buf, int idx)
+-{
+-	if (hns_roce_buf_is_direct(buf))
+-		return buf->direct.map + ((dma_addr_t)idx << buf->page_shift);
+-	else
+-		return buf->page_list[idx].map;
++	return buf->trunk_list[offset >> buf->trunk_shift].map +
++			(offset & ((1 << buf->trunk_shift) - 1));
+ }
+ 
+ #define hr_hw_page_align(x)		ALIGN(x, 1 << HNS_HW_PAGE_SHIFT)
+@@ -1161,7 +1176,7 @@ int hns_roce_mtr_create(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ void hns_roce_mtr_destroy(struct hns_roce_dev *hr_dev,
+ 			  struct hns_roce_mtr *mtr);
+ int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+-		     dma_addr_t *pages, int page_cnt);
++		     dma_addr_t *pages, unsigned int page_cnt);
+ 
+ int hns_roce_init_pd_table(struct hns_roce_dev *hr_dev);
+ int hns_roce_init_mr_table(struct hns_roce_dev *hr_dev);
+@@ -1221,8 +1236,8 @@ int hns_roce_alloc_mw(struct ib_mw *mw, struct ib_udata *udata);
+ int hns_roce_dealloc_mw(struct ib_mw *ibmw);
+ 
+ void hns_roce_buf_free(struct hns_roce_dev *hr_dev, struct hns_roce_buf *buf);
+-int hns_roce_buf_alloc(struct hns_roce_dev *hr_dev, u32 size, u32 max_direct,
+-		       struct hns_roce_buf *buf, u32 page_shift);
++struct hns_roce_buf *hns_roce_buf_alloc(struct hns_roce_dev *hr_dev, u32 size,
++					u32 page_shift, u32 flags);
+ 
+ int hns_roce_get_kmem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
+ 			   int buf_cnt, int start, struct hns_roce_buf *buf);
+@@ -1244,10 +1259,10 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *ib_pd,
+ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 		       int attr_mask, struct ib_udata *udata);
+ void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp);
+-void *hns_roce_get_recv_wqe(struct hns_roce_qp *hr_qp, int n);
+-void *hns_roce_get_send_wqe(struct hns_roce_qp *hr_qp, int n);
+-void *hns_roce_get_extend_sge(struct hns_roce_qp *hr_qp, int n);
+-bool hns_roce_wq_overflow(struct hns_roce_wq *hr_wq, int nreq,
++void *hns_roce_get_recv_wqe(struct hns_roce_qp *hr_qp, unsigned int n);
++void *hns_roce_get_send_wqe(struct hns_roce_qp *hr_qp, unsigned int n);
++void *hns_roce_get_extend_sge(struct hns_roce_qp *hr_qp, unsigned int n);
++bool hns_roce_wq_overflow(struct hns_roce_wq *hr_wq, u32 nreq,
+ 			  struct ib_cq *ib_cq);
+ enum hns_roce_qp_state to_hns_roce_state(enum ib_qp_state state);
+ void hns_roce_lock_cqs(struct hns_roce_cq *send_cq,
+@@ -1277,7 +1292,7 @@ void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn);
+ void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type);
+ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type);
+ void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type);
+-int hns_get_gid_index(struct hns_roce_dev *hr_dev, u8 port, int gid_index);
++u8 hns_get_gid_index(struct hns_roce_dev *hr_dev, u8 port, int gid_index);
+ void hns_roce_handle_device_err(struct hns_roce_dev *hr_dev);
+ int hns_roce_init(struct hns_roce_dev *hr_dev);
+ void hns_roce_exit(struct hns_roce_dev *hr_dev);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.h b/drivers/infiniband/hw/hns/hns_roce_hem.h
+index b7617786b1005..5b2162a2b8cef 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.h
+@@ -60,16 +60,16 @@ enum {
+ 	 (sizeof(struct scatterlist) + sizeof(void *)))
+ 
+ #define check_whether_bt_num_3(type, hop_num) \
+-	(type < HEM_TYPE_MTT && hop_num == 2)
++	((type) < HEM_TYPE_MTT && (hop_num) == 2)
+ 
+ #define check_whether_bt_num_2(type, hop_num) \
+-	((type < HEM_TYPE_MTT && hop_num == 1) || \
+-	(type >= HEM_TYPE_MTT && hop_num == 2))
++	(((type) < HEM_TYPE_MTT && (hop_num) == 1) || \
++	((type) >= HEM_TYPE_MTT && (hop_num) == 2))
+ 
+ #define check_whether_bt_num_1(type, hop_num) \
+-	((type < HEM_TYPE_MTT && hop_num == HNS_ROCE_HOP_NUM_0) || \
+-	(type >= HEM_TYPE_MTT && hop_num == 1) || \
+-	(type >= HEM_TYPE_MTT && hop_num == HNS_ROCE_HOP_NUM_0))
++	(((type) < HEM_TYPE_MTT && (hop_num) == HNS_ROCE_HOP_NUM_0) || \
++	((type) >= HEM_TYPE_MTT && (hop_num) == 1) || \
++	((type) >= HEM_TYPE_MTT && (hop_num) == HNS_ROCE_HOP_NUM_0))
+ 
+ struct hns_roce_hem_chunk {
+ 	struct list_head	 list;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+index 6f9b024d4ff7c..ea8d7154d0e17 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+@@ -288,7 +288,7 @@ static int hns_roce_v1_post_send(struct ib_qp *ibqp,
+ 					ret = -EINVAL;
+ 					*bad_wr = wr;
+ 					dev_err(dev, "inline len(1-%d)=%d, illegal",
+-						ctrl->msg_length,
++						le32_to_cpu(ctrl->msg_length),
+ 						hr_dev->caps.max_sq_inline);
+ 					goto out;
+ 				}
+@@ -1715,7 +1715,7 @@ static int hns_roce_v1_post_mbox(struct hns_roce_dev *hr_dev, u64 in_param,
+ }
+ 
+ static int hns_roce_v1_chk_mbox(struct hns_roce_dev *hr_dev,
+-				unsigned long timeout)
++				unsigned int timeout)
+ {
+ 	u8 __iomem *hcr = hr_dev->reg_base + ROCEE_MB1_REG;
+ 	unsigned long end;
+@@ -3674,10 +3674,10 @@ static int hns_roce_v1_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ 	return 0;
+ }
+ 
+-static void set_eq_cons_index_v1(struct hns_roce_eq *eq, int req_not)
++static void set_eq_cons_index_v1(struct hns_roce_eq *eq, u32 req_not)
+ {
+ 	roce_raw_write((eq->cons_index & HNS_ROCE_V1_CONS_IDX_M) |
+-		      (req_not << eq->log_entries), eq->doorbell);
++		       (req_not << eq->log_entries), eq->doorbell);
+ }
+ 
+ static void hns_roce_v1_wq_catas_err_handle(struct hns_roce_dev *hr_dev,
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 518b38e9158d4..13aa8dd42f7d6 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -650,7 +650,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
+ 	unsigned int sge_idx;
+ 	unsigned int wqe_idx;
+ 	void *wqe = NULL;
+-	int nreq;
++	u32 nreq;
+ 	int ret;
+ 
+ 	spin_lock_irqsave(&qp->sq.lock, flags);
+@@ -828,7 +828,7 @@ static void *get_srq_wqe(struct hns_roce_srq *srq, int n)
+ 	return hns_roce_buf_offset(srq->buf_mtr.kmem, n << srq->wqe_shift);
+ }
+ 
+-static void *get_idx_buf(struct hns_roce_idx_que *idx_que, int n)
++static void *get_idx_buf(struct hns_roce_idx_que *idx_que, unsigned int n)
+ {
+ 	return hns_roce_buf_offset(idx_que->mtr.kmem,
+ 				   n << idx_que->entry_shift);
+@@ -869,12 +869,12 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,
+ 	struct hns_roce_v2_wqe_data_seg *dseg;
+ 	struct hns_roce_v2_db srq_db;
+ 	unsigned long flags;
++	unsigned int ind;
+ 	__le32 *srq_idx;
+ 	int ret = 0;
+ 	int wqe_idx;
+ 	void *wqe;
+ 	int nreq;
+-	int ind;
+ 	int i;
+ 
+ 	spin_lock_irqsave(&srq->lock, flags);
+@@ -1128,7 +1128,7 @@ static void hns_roce_cmq_init_regs(struct hns_roce_dev *hr_dev, bool ring_type)
+ 		roce_write(hr_dev, ROCEE_TX_CMQ_BASEADDR_H_REG,
+ 			   upper_32_bits(dma));
+ 		roce_write(hr_dev, ROCEE_TX_CMQ_DEPTH_REG,
+-			   ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S);
++			   (u32)ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S);
+ 		roce_write(hr_dev, ROCEE_TX_CMQ_HEAD_REG, 0);
+ 		roce_write(hr_dev, ROCEE_TX_CMQ_TAIL_REG, 0);
+ 	} else {
+@@ -1136,7 +1136,7 @@ static void hns_roce_cmq_init_regs(struct hns_roce_dev *hr_dev, bool ring_type)
+ 		roce_write(hr_dev, ROCEE_RX_CMQ_BASEADDR_H_REG,
+ 			   upper_32_bits(dma));
+ 		roce_write(hr_dev, ROCEE_RX_CMQ_DEPTH_REG,
+-			   ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S);
++			   (u32)ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S);
+ 		roce_write(hr_dev, ROCEE_RX_CMQ_HEAD_REG, 0);
+ 		roce_write(hr_dev, ROCEE_RX_CMQ_TAIL_REG, 0);
+ 	}
+@@ -1907,8 +1907,8 @@ static void set_default_caps(struct hns_roce_dev *hr_dev)
+ 	}
+ }
+ 
+-static void calc_pg_sz(int obj_num, int obj_size, int hop_num, int ctx_bt_num,
+-		       int *buf_page_size, int *bt_page_size, u32 hem_type)
++static void calc_pg_sz(u32 obj_num, u32 obj_size, u32 hop_num, u32 ctx_bt_num,
++		       u32 *buf_page_size, u32 *bt_page_size, u32 hem_type)
+ {
+ 	u64 obj_per_chunk;
+ 	u64 bt_chunk_size = PAGE_SIZE;
+@@ -2382,10 +2382,10 @@ static int hns_roce_init_link_table(struct hns_roce_dev *hr_dev,
+ 	u32 buf_chk_sz;
+ 	dma_addr_t t;
+ 	int func_num = 1;
+-	int pg_num_a;
+-	int pg_num_b;
+-	int pg_num;
+-	int size;
++	u32 pg_num_a;
++	u32 pg_num_b;
++	u32 pg_num;
++	u32 size;
+ 	int i;
+ 
+ 	switch (type) {
+@@ -2549,7 +2549,7 @@ static int hns_roce_query_mbox_status(struct hns_roce_dev *hr_dev)
+ 	struct hns_roce_cmq_desc desc;
+ 	struct hns_roce_mbox_status *mb_st =
+ 				       (struct hns_roce_mbox_status *)desc.data;
+-	enum hns_roce_cmd_return_status status;
++	int status;
+ 
+ 	hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_QUERY_MB_ST, true);
+ 
+@@ -2620,7 +2620,7 @@ static int hns_roce_v2_post_mbox(struct hns_roce_dev *hr_dev, u64 in_param,
+ }
+ 
+ static int hns_roce_v2_chk_mbox(struct hns_roce_dev *hr_dev,
+-				unsigned long timeout)
++				unsigned int timeout)
+ {
+ 	struct device *dev = hr_dev->dev;
+ 	unsigned long end;
+@@ -2970,7 +2970,7 @@ static void *get_cqe_v2(struct hns_roce_cq *hr_cq, int n)
+ 	return hns_roce_buf_offset(hr_cq->mtr.kmem, n * hr_cq->cqe_size);
+ }
+ 
+-static void *get_sw_cqe_v2(struct hns_roce_cq *hr_cq, int n)
++static void *get_sw_cqe_v2(struct hns_roce_cq *hr_cq, unsigned int n)
+ {
+ 	struct hns_roce_v2_cqe *cqe = get_cqe_v2(hr_cq, n & hr_cq->ib_cq.cqe);
+ 
+@@ -3278,8 +3278,9 @@ static void get_cqe_status(struct hns_roce_dev *hr_dev, struct hns_roce_qp *qp,
+ 		   wc->status == IB_WC_WR_FLUSH_ERR))
+ 		return;
+ 
+-	ibdev_err(&hr_dev->ib_dev, "error cqe status 0x%x:\n", cqe_status);
+-	print_hex_dump(KERN_ERR, "", DUMP_PREFIX_NONE, 16, 4, cqe,
++	ibdev_err_ratelimited(&hr_dev->ib_dev, "error cqe status 0x%x:\n",
++			      cqe_status);
++	print_hex_dump(KERN_DEBUG, "", DUMP_PREFIX_NONE, 16, 4, cqe,
+ 		       cq->cqe_size, false);
+ 
+ 	/*
+@@ -3314,7 +3315,7 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
+ 	int is_send;
+ 	u16 wqe_ctr;
+ 	u32 opcode;
+-	int qpn;
++	u32 qpn;
+ 	int ret;
+ 
+ 	/* Find cqe according to consumer index */
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index 90cbd15f64415..f62162771db51 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -53,7 +53,7 @@
+  *		GID[0][0], GID[1][0],.....GID[N - 1][0],
+  *		And so on
+  */
+-int hns_get_gid_index(struct hns_roce_dev *hr_dev, u8 port, int gid_index)
++u8 hns_get_gid_index(struct hns_roce_dev *hr_dev, u8 port, int gid_index)
+ {
+ 	return gid_index * hr_dev->caps.num_ports + port;
+ }
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index d5b3b10e0a807..c038ed7d94962 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -484,18 +484,18 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ 	struct ib_device *ibdev = &hr_dev->ib_dev;
+ 	struct hns_roce_mr *mr = to_hr_mr(ibmr);
+ 	struct hns_roce_mtr *mtr = &mr->pbl_mtr;
+-	int ret = 0;
++	int ret, sg_num = 0;
+ 
+ 	mr->npages = 0;
+ 	mr->page_list = kvcalloc(mr->pbl_mtr.hem_cfg.buf_pg_count,
+ 				 sizeof(dma_addr_t), GFP_KERNEL);
+ 	if (!mr->page_list)
+-		return ret;
++		return sg_num;
+ 
+-	ret = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, hns_roce_set_page);
+-	if (ret < 1) {
++	sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, hns_roce_set_page);
++	if (sg_num < 1) {
+ 		ibdev_err(ibdev, "failed to store sg pages %u %u, cnt = %d.\n",
+-			  mr->npages, mr->pbl_mtr.hem_cfg.buf_pg_count, ret);
++			  mr->npages, mr->pbl_mtr.hem_cfg.buf_pg_count, sg_num);
+ 		goto err_page_list;
+ 	}
+ 
+@@ -506,17 +506,16 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ 	ret = hns_roce_mtr_map(hr_dev, mtr, mr->page_list, mr->npages);
+ 	if (ret) {
+ 		ibdev_err(ibdev, "failed to map sg mtr, ret = %d.\n", ret);
+-		ret = 0;
++		sg_num = 0;
+ 	} else {
+-		mr->pbl_mtr.hem_cfg.buf_pg_shift = ilog2(ibmr->page_size);
+-		ret = mr->npages;
++		mr->pbl_mtr.hem_cfg.buf_pg_shift = (u32)ilog2(ibmr->page_size);
+ 	}
+ 
+ err_page_list:
+ 	kvfree(mr->page_list);
+ 	mr->page_list = NULL;
+ 
+-	return ret;
++	return sg_num;
+ }
+ 
+ static void hns_roce_mw_free(struct hns_roce_dev *hr_dev,
+@@ -694,15 +693,6 @@ static inline size_t mtr_bufs_size(struct hns_roce_buf_attr *attr)
+ 	return size;
+ }
+ 
+-static inline size_t mtr_kmem_direct_size(bool is_direct, size_t alloc_size,
+-					  unsigned int page_shift)
+-{
+-	if (is_direct)
+-		return ALIGN(alloc_size, 1 << page_shift);
+-	else
+-		return HNS_HW_DIRECT_PAGE_COUNT << page_shift;
+-}
+-
+ /*
+  * check the given pages in continuous address space
+  * Returns 0 on success, or the error page num.
+@@ -731,7 +721,6 @@ static void mtr_free_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr)
+ 	/* release kernel buffers */
+ 	if (mtr->kmem) {
+ 		hns_roce_buf_free(hr_dev, mtr->kmem);
+-		kfree(mtr->kmem);
+ 		mtr->kmem = NULL;
+ 	}
+ }
+@@ -743,13 +732,12 @@ static int mtr_alloc_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ 	struct ib_device *ibdev = &hr_dev->ib_dev;
+ 	unsigned int best_pg_shift;
+ 	int all_pg_count = 0;
+-	size_t direct_size;
+ 	size_t total_size;
+ 	int ret;
+ 
+ 	total_size = mtr_bufs_size(buf_attr);
+ 	if (total_size < 1) {
+-		ibdev_err(ibdev, "Failed to check mtr size\n");
++		ibdev_err(ibdev, "failed to check mtr size\n.");
+ 		return -EINVAL;
+ 	}
+ 
+@@ -761,7 +749,7 @@ static int mtr_alloc_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ 		mtr->umem = ib_umem_get(ibdev, user_addr, total_size,
+ 					buf_attr->user_access);
+ 		if (IS_ERR_OR_NULL(mtr->umem)) {
+-			ibdev_err(ibdev, "Failed to get umem, ret %ld\n",
++			ibdev_err(ibdev, "failed to get umem, ret = %ld.\n",
+ 				  PTR_ERR(mtr->umem));
+ 			return -ENOMEM;
+ 		}
+@@ -779,19 +767,16 @@ static int mtr_alloc_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ 		ret = 0;
+ 	} else {
+ 		mtr->umem = NULL;
+-		mtr->kmem = kzalloc(sizeof(*mtr->kmem), GFP_KERNEL);
+-		if (!mtr->kmem) {
+-			ibdev_err(ibdev, "Failed to alloc kmem\n");
+-			return -ENOMEM;
+-		}
+-		direct_size = mtr_kmem_direct_size(is_direct, total_size,
+-						   buf_attr->page_shift);
+-		ret = hns_roce_buf_alloc(hr_dev, total_size, direct_size,
+-					 mtr->kmem, buf_attr->page_shift);
+-		if (ret) {
+-			ibdev_err(ibdev, "Failed to alloc kmem, ret %d\n", ret);
+-			goto err_alloc_mem;
++		mtr->kmem =
++			hns_roce_buf_alloc(hr_dev, total_size,
++					   buf_attr->page_shift,
++					   is_direct ? HNS_ROCE_BUF_DIRECT : 0);
++		if (IS_ERR(mtr->kmem)) {
++			ibdev_err(ibdev, "failed to alloc kmem, ret = %ld.\n",
++				  PTR_ERR(mtr->kmem));
++			return PTR_ERR(mtr->kmem);
+ 		}
++
+ 		best_pg_shift = buf_attr->page_shift;
+ 		all_pg_count = mtr->kmem->npages;
+ 	}
+@@ -799,7 +784,8 @@ static int mtr_alloc_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ 	/* must bigger than minimum hardware page shift */
+ 	if (best_pg_shift < HNS_HW_PAGE_SHIFT || all_pg_count < 1) {
+ 		ret = -EINVAL;
+-		ibdev_err(ibdev, "Failed to check mtr page shift %d count %d\n",
++		ibdev_err(ibdev,
++			  "failed to check mtr, page shift = %u count = %d.\n",
+ 			  best_pg_shift, all_pg_count);
+ 		goto err_alloc_mem;
+ 	}
+@@ -840,12 +826,12 @@ static int mtr_get_pages(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ }
+ 
+ int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+-		     dma_addr_t *pages, int page_cnt)
++		     dma_addr_t *pages, unsigned int page_cnt)
+ {
+ 	struct ib_device *ibdev = &hr_dev->ib_dev;
+ 	struct hns_roce_buf_region *r;
++	unsigned int i;
+ 	int err;
+-	int i;
+ 
+ 	/*
+ 	 * Only use the first page address as root ba when hopnum is 0, this
+@@ -882,13 +868,12 @@ int hns_roce_mtr_find(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ 		      int offset, u64 *mtt_buf, int mtt_max, u64 *base_addr)
+ {
+ 	struct hns_roce_hem_cfg *cfg = &mtr->hem_cfg;
++	int mtt_count, left;
+ 	int start_index;
+-	int mtt_count;
+ 	int total = 0;
+ 	__le64 *mtts;
+-	int npage;
++	u32 npage;
+ 	u64 addr;
+-	int left;
+ 
+ 	if (!mtt_buf || mtt_max < 1)
+ 		goto done;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index d1c07f1f8fe98..1a6de9a9e57c1 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -154,9 +154,66 @@ static void hns_roce_ib_qp_event(struct hns_roce_qp *hr_qp,
+ 	}
+ }
+ 
+-static int alloc_qpn(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
++static u8 get_affinity_cq_bank(u8 qp_bank)
+ {
++	return (qp_bank >> 1) & CQ_BANKID_MASK;
++}
++
++static u8 get_least_load_bankid_for_qp(struct ib_qp_init_attr *init_attr,
++					struct hns_roce_bank *bank)
++{
++#define INVALID_LOAD_QPNUM 0xFFFFFFFF
++	struct ib_cq *scq = init_attr->send_cq;
++	u32 least_load = INVALID_LOAD_QPNUM;
++	unsigned long cqn = 0;
++	u8 bankid = 0;
++	u32 bankcnt;
++	u8 i;
++
++	if (scq)
++		cqn = to_hr_cq(scq)->cqn;
++
++	for (i = 0; i < HNS_ROCE_QP_BANK_NUM; i++) {
++		if (scq && (get_affinity_cq_bank(i) != (cqn & CQ_BANKID_MASK)))
++			continue;
++
++		bankcnt = bank[i].inuse;
++		if (bankcnt < least_load) {
++			least_load = bankcnt;
++			bankid = i;
++		}
++	}
++
++	return bankid;
++}
++
++static int alloc_qpn_with_bankid(struct hns_roce_bank *bank, u8 bankid,
++				 unsigned long *qpn)
++{
++	int id;
++
++	id = ida_alloc_range(&bank->ida, bank->next, bank->max, GFP_KERNEL);
++	if (id < 0) {
++		id = ida_alloc_range(&bank->ida, bank->min, bank->max,
++				     GFP_KERNEL);
++		if (id < 0)
++			return id;
++	}
++
++	/* the QPN should keep increasing until the max value is reached. */
++	bank->next = (id + 1) > bank->max ? bank->min : id + 1;
++
++	/* the lower 3 bits is bankid */
++	*qpn = (id << 3) | bankid;
++
++	return 0;
++}
++static int alloc_qpn(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
++		     struct ib_qp_init_attr *init_attr)
++{
++	struct hns_roce_qp_table *qp_table = &hr_dev->qp_table;
+ 	unsigned long num = 0;
++	u8 bankid;
+ 	int ret;
+ 
+ 	if (hr_qp->ibqp.qp_type == IB_QPT_GSI) {
+@@ -169,13 +226,21 @@ static int alloc_qpn(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
+ 
+ 		hr_qp->doorbell_qpn = 1;
+ 	} else {
+-		ret = hns_roce_bitmap_alloc_range(&hr_dev->qp_table.bitmap,
+-						  1, 1, &num);
++		mutex_lock(&qp_table->bank_mutex);
++		bankid = get_least_load_bankid_for_qp(init_attr, qp_table->bank);
++
++		ret = alloc_qpn_with_bankid(&qp_table->bank[bankid], bankid,
++					    &num);
+ 		if (ret) {
+-			ibdev_err(&hr_dev->ib_dev, "Failed to alloc bitmap\n");
+-			return -ENOMEM;
++			ibdev_err(&hr_dev->ib_dev,
++				  "failed to alloc QPN, ret = %d\n", ret);
++			mutex_unlock(&qp_table->bank_mutex);
++			return ret;
+ 		}
+ 
++		qp_table->bank[bankid].inuse++;
++		mutex_unlock(&qp_table->bank_mutex);
++
+ 		hr_qp->doorbell_qpn = (u32)num;
+ 	}
+ 
+@@ -340,9 +405,15 @@ static void free_qpc(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
+ 	hns_roce_table_put(hr_dev, &qp_table->irrl_table, hr_qp->qpn);
+ }
+ 
++static inline u8 get_qp_bankid(unsigned long qpn)
++{
++	/* The lower 3 bits of QPN are used to hash to different banks */
++	return (u8)(qpn & GENMASK(2, 0));
++}
++
+ static void free_qpn(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
+ {
+-	struct hns_roce_qp_table *qp_table = &hr_dev->qp_table;
++	u8 bankid;
+ 
+ 	if (hr_qp->ibqp.qp_type == IB_QPT_GSI)
+ 		return;
+@@ -350,7 +421,13 @@ static void free_qpn(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
+ 	if (hr_qp->qpn < hr_dev->caps.reserved_qps)
+ 		return;
+ 
+-	hns_roce_bitmap_free_range(&qp_table->bitmap, hr_qp->qpn, 1, BITMAP_RR);
++	bankid = get_qp_bankid(hr_qp->qpn);
++
++	ida_free(&hr_dev->qp_table.bank[bankid].ida, hr_qp->qpn >> 3);
++
++	mutex_lock(&hr_dev->qp_table.bank_mutex);
++	hr_dev->qp_table.bank[bankid].inuse--;
++	mutex_unlock(&hr_dev->qp_table.bank_mutex);
+ }
+ 
+ static int set_rq_size(struct hns_roce_dev *hr_dev, struct ib_qp_cap *cap,
+@@ -944,7 +1021,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ 		goto err_db;
+ 	}
+ 
+-	ret = alloc_qpn(hr_dev, hr_qp);
++	ret = alloc_qpn(hr_dev, hr_qp, init_attr);
+ 	if (ret) {
+ 		ibdev_err(ibdev, "failed to alloc QPN, ret = %d.\n", ret);
+ 		goto err_buf;
+@@ -1257,22 +1334,22 @@ static inline void *get_wqe(struct hns_roce_qp *hr_qp, int offset)
+ 	return hns_roce_buf_offset(hr_qp->mtr.kmem, offset);
+ }
+ 
+-void *hns_roce_get_recv_wqe(struct hns_roce_qp *hr_qp, int n)
++void *hns_roce_get_recv_wqe(struct hns_roce_qp *hr_qp, unsigned int n)
+ {
+ 	return get_wqe(hr_qp, hr_qp->rq.offset + (n << hr_qp->rq.wqe_shift));
+ }
+ 
+-void *hns_roce_get_send_wqe(struct hns_roce_qp *hr_qp, int n)
++void *hns_roce_get_send_wqe(struct hns_roce_qp *hr_qp, unsigned int n)
+ {
+ 	return get_wqe(hr_qp, hr_qp->sq.offset + (n << hr_qp->sq.wqe_shift));
+ }
+ 
+-void *hns_roce_get_extend_sge(struct hns_roce_qp *hr_qp, int n)
++void *hns_roce_get_extend_sge(struct hns_roce_qp *hr_qp, unsigned int n)
+ {
+ 	return get_wqe(hr_qp, hr_qp->sge.offset + (n << hr_qp->sge.sge_shift));
+ }
+ 
+-bool hns_roce_wq_overflow(struct hns_roce_wq *hr_wq, int nreq,
++bool hns_roce_wq_overflow(struct hns_roce_wq *hr_wq, u32 nreq,
+ 			  struct ib_cq *ib_cq)
+ {
+ 	struct hns_roce_cq *hr_cq;
+@@ -1293,22 +1370,25 @@ bool hns_roce_wq_overflow(struct hns_roce_wq *hr_wq, int nreq,
+ int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
+ {
+ 	struct hns_roce_qp_table *qp_table = &hr_dev->qp_table;
+-	int reserved_from_top = 0;
+-	int reserved_from_bot;
+-	int ret;
++	unsigned int reserved_from_bot;
++	unsigned int i;
+ 
+ 	mutex_init(&qp_table->scc_mutex);
++	mutex_init(&qp_table->bank_mutex);
+ 	xa_init(&hr_dev->qp_table_xa);
+ 
+ 	reserved_from_bot = hr_dev->caps.reserved_qps;
+ 
+-	ret = hns_roce_bitmap_init(&qp_table->bitmap, hr_dev->caps.num_qps,
+-				   hr_dev->caps.num_qps - 1, reserved_from_bot,
+-				   reserved_from_top);
+-	if (ret) {
+-		dev_err(hr_dev->dev, "qp bitmap init failed!error=%d\n",
+-			ret);
+-		return ret;
++	for (i = 0; i < reserved_from_bot; i++) {
++		hr_dev->qp_table.bank[get_qp_bankid(i)].inuse++;
++		hr_dev->qp_table.bank[get_qp_bankid(i)].min++;
++	}
++
++	for (i = 0; i < HNS_ROCE_QP_BANK_NUM; i++) {
++		ida_init(&hr_dev->qp_table.bank[i].ida);
++		hr_dev->qp_table.bank[i].max = hr_dev->caps.num_qps /
++					       HNS_ROCE_QP_BANK_NUM - 1;
++		hr_dev->qp_table.bank[i].next = hr_dev->qp_table.bank[i].min;
+ 	}
+ 
+ 	return 0;
+@@ -1316,5 +1396,8 @@ int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
+ 
+ void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev)
+ {
+-	hns_roce_bitmap_cleanup(&hr_dev->qp_table.bitmap);
++	int i;
++
++	for (i = 0; i < HNS_ROCE_QP_BANK_NUM; i++)
++		ida_destroy(&hr_dev->qp_table.bank[i].ida);
+ }
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_vlan.c b/drivers/infiniband/ulp/ipoib/ipoib_vlan.c
+index 4c50a87ed7cc2..15a7cda44676c 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_vlan.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_vlan.c
+@@ -185,8 +185,12 @@ int ipoib_vlan_add(struct net_device *pdev, unsigned short pkey)
+ 
+ 	ppriv = ipoib_priv(pdev);
+ 
+-	snprintf(intf_name, sizeof(intf_name), "%s.%04x",
+-		 ppriv->dev->name, pkey);
++	/* If you increase IFNAMSIZ, update snprintf below
++	 * to allow longer names.
++	 */
++	BUILD_BUG_ON(IFNAMSIZ != 16);
++	snprintf(intf_name, sizeof(intf_name), "%.10s.%04x", ppriv->dev->name,
++		 pkey);
+ 
+ 	ndev = ipoib_intf_alloc(ppriv->ca, ppriv->port, intf_name);
+ 	if (IS_ERR(ndev)) {
+diff --git a/drivers/input/misc/ims-pcu.c b/drivers/input/misc/ims-pcu.c
+index 08b9b5cdb943e..e5cb20e7f57b1 100644
+--- a/drivers/input/misc/ims-pcu.c
++++ b/drivers/input/misc/ims-pcu.c
+@@ -42,8 +42,8 @@ struct ims_pcu_backlight {
+ #define IMS_PCU_PART_NUMBER_LEN		15
+ #define IMS_PCU_SERIAL_NUMBER_LEN	8
+ #define IMS_PCU_DOM_LEN			8
+-#define IMS_PCU_FW_VERSION_LEN		(9 + 1)
+-#define IMS_PCU_BL_VERSION_LEN		(9 + 1)
++#define IMS_PCU_FW_VERSION_LEN		16
++#define IMS_PCU_BL_VERSION_LEN		16
+ #define IMS_PCU_BL_RESET_REASON_LEN	(2 + 1)
+ 
+ #define IMS_PCU_PCU_B_DEVICE_ID		5
+diff --git a/drivers/input/misc/pm8xxx-vibrator.c b/drivers/input/misc/pm8xxx-vibrator.c
+index 53ad25eaf1a28..8bfe5c7b1244c 100644
+--- a/drivers/input/misc/pm8xxx-vibrator.c
++++ b/drivers/input/misc/pm8xxx-vibrator.c
+@@ -14,7 +14,8 @@
+ 
+ #define VIB_MAX_LEVEL_mV	(3100)
+ #define VIB_MIN_LEVEL_mV	(1200)
+-#define VIB_MAX_LEVELS		(VIB_MAX_LEVEL_mV - VIB_MIN_LEVEL_mV)
++#define VIB_PER_STEP_mV		(100)
++#define VIB_MAX_LEVELS		(VIB_MAX_LEVEL_mV - VIB_MIN_LEVEL_mV + VIB_PER_STEP_mV)
+ 
+ #define MAX_FF_SPEED		0xff
+ 
+@@ -118,10 +119,10 @@ static void pm8xxx_work_handler(struct work_struct *work)
+ 		vib->active = true;
+ 		vib->level = ((VIB_MAX_LEVELS * vib->speed) / MAX_FF_SPEED) +
+ 						VIB_MIN_LEVEL_mV;
+-		vib->level /= 100;
++		vib->level /= VIB_PER_STEP_mV;
+ 	} else {
+ 		vib->active = false;
+-		vib->level = VIB_MIN_LEVEL_mV / 100;
++		vib->level = VIB_MIN_LEVEL_mV / VIB_PER_STEP_mV;
+ 	}
+ 
+ 	pm8xxx_vib_set(vib, vib->active);
+diff --git a/drivers/input/serio/ioc3kbd.c b/drivers/input/serio/ioc3kbd.c
+index d51bfe912db5b..676b0bda3d720 100644
+--- a/drivers/input/serio/ioc3kbd.c
++++ b/drivers/input/serio/ioc3kbd.c
+@@ -190,7 +190,7 @@ static int ioc3kbd_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static int ioc3kbd_remove(struct platform_device *pdev)
++static void ioc3kbd_remove(struct platform_device *pdev)
+ {
+ 	struct ioc3kbd_data *d = platform_get_drvdata(pdev);
+ 
+@@ -198,13 +198,18 @@ static int ioc3kbd_remove(struct platform_device *pdev)
+ 
+ 	serio_unregister_port(d->kbd);
+ 	serio_unregister_port(d->aux);
+-
+-	return 0;
+ }
+ 
++static const struct platform_device_id ioc3kbd_id_table[] = {
++	{ "ioc3-kbd", },
++	{ }
++};
++MODULE_DEVICE_TABLE(platform, ioc3kbd_id_table);
++
+ static struct platform_driver ioc3kbd_driver = {
+ 	.probe          = ioc3kbd_probe,
+-	.remove         = ioc3kbd_remove,
++	.remove_new     = ioc3kbd_remove,
++	.id_table	= ioc3kbd_id_table,
+ 	.driver = {
+ 		.name = "ioc3-kbd",
+ 	},
+diff --git a/drivers/irqchip/irq-alpine-msi.c b/drivers/irqchip/irq-alpine-msi.c
+index 1819bb1d27230..aedbc4befcdf0 100644
+--- a/drivers/irqchip/irq-alpine-msi.c
++++ b/drivers/irqchip/irq-alpine-msi.c
+@@ -165,7 +165,7 @@ static int alpine_msix_middle_domain_alloc(struct irq_domain *domain,
+ 	return 0;
+ 
+ err_sgi:
+-	irq_domain_free_irqs_parent(domain, virq, i - 1);
++	irq_domain_free_irqs_parent(domain, virq, i);
+ 	alpine_msix_free_sgi(priv, sgi, nr_irqs);
+ 	return err;
+ }
+diff --git a/drivers/irqchip/irq-loongson-pch-msi.c b/drivers/irqchip/irq-loongson-pch-msi.c
+index 32562b7e681b5..254a58fbb844a 100644
+--- a/drivers/irqchip/irq-loongson-pch-msi.c
++++ b/drivers/irqchip/irq-loongson-pch-msi.c
+@@ -132,7 +132,7 @@ static int pch_msi_middle_domain_alloc(struct irq_domain *domain,
+ 
+ err_hwirq:
+ 	pch_msi_free_hwirq(priv, hwirq, nr_irqs);
+-	irq_domain_free_irqs_parent(domain, virq, i - 1);
++	irq_domain_free_irqs_parent(domain, virq, i);
+ 
+ 	return err;
+ }
+diff --git a/drivers/macintosh/via-macii.c b/drivers/macintosh/via-macii.c
+index 060e03f2264bc..6adb34d1e381b 100644
+--- a/drivers/macintosh/via-macii.c
++++ b/drivers/macintosh/via-macii.c
+@@ -142,24 +142,19 @@ static int macii_probe(void)
+ /* Initialize the driver */
+ static int macii_init(void)
+ {
+-	unsigned long flags;
+ 	int err;
+ 
+-	local_irq_save(flags);
+-
+ 	err = macii_init_via();
+ 	if (err)
+-		goto out;
++		return err;
+ 
+ 	err = request_irq(IRQ_MAC_ADB, macii_interrupt, 0, "ADB",
+ 			  macii_interrupt);
+ 	if (err)
+-		goto out;
++		return err;
+ 
+ 	macii_state = idle;
+-out:
+-	local_irq_restore(flags);
+-	return err;
++	return 0;
+ }
+ 
+ /* initialize the hardware */
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index b28302836b2e9..91bc764a854c6 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -1355,7 +1355,7 @@ __acquires(bitmap->lock)
+ 	sector_t chunk = offset >> bitmap->chunkshift;
+ 	unsigned long page = chunk >> PAGE_COUNTER_SHIFT;
+ 	unsigned long pageoff = (chunk & PAGE_COUNTER_MASK) << COUNTER_BYTE_SHIFT;
+-	sector_t csize;
++	sector_t csize = ((sector_t)1) << bitmap->chunkshift;
+ 	int err;
+ 
+ 	if (page >= bitmap->pages) {
+@@ -1364,6 +1364,7 @@ __acquires(bitmap->lock)
+ 		 * End-of-device while looking for a whole page or
+ 		 * user set a huge number to sysfs bitmap_set_bits.
+ 		 */
++		*blocks = csize - (offset & (csize - 1));
+ 		return NULL;
+ 	}
+ 	err = md_bitmap_checkpage(bitmap, page, create, 0);
+@@ -1372,8 +1373,7 @@ __acquires(bitmap->lock)
+ 	    bitmap->bp[page].map == NULL)
+ 		csize = ((sector_t)1) << (bitmap->chunkshift +
+ 					  PAGE_COUNTER_SHIFT);
+-	else
+-		csize = ((sector_t)1) << bitmap->chunkshift;
++
+ 	*blocks = csize - (offset & (csize - 1));
+ 
+ 	if (err < 0)
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 9f114b9d8dc6b..66167c4c7bc9e 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -36,7 +36,6 @@
+  */
+ 
+ #include <linux/blkdev.h>
+-#include <linux/delay.h>
+ #include <linux/kthread.h>
+ #include <linux/raid/pq.h>
+ #include <linux/async_tx.h>
+@@ -6484,6 +6483,9 @@ static void raid5d(struct md_thread *thread)
+ 		int batch_size, released;
+ 		unsigned int offset;
+ 
++		if (test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags))
++			break;
++
+ 		released = release_stripe_list(conf, conf->temp_inactive_list);
+ 		if (released)
+ 			clear_bit(R5_DID_ALLOC, &conf->cache_state);
+@@ -6520,18 +6522,7 @@ static void raid5d(struct md_thread *thread)
+ 			spin_unlock_irq(&conf->device_lock);
+ 			md_check_recovery(mddev);
+ 			spin_lock_irq(&conf->device_lock);
+-
+-			/*
+-			 * Waiting on MD_SB_CHANGE_PENDING below may deadlock
+-			 * seeing md_check_recovery() is needed to clear
+-			 * the flag when using mdmon.
+-			 */
+-			continue;
+ 		}
+-
+-		wait_event_lock_irq(mddev->sb_wait,
+-			!test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags),
+-			conf->device_lock);
+ 	}
+ 	pr_debug("%d stripes handled\n", handled);
+ 
+diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c
+index 2f7ab5df1c584..73cc2b0fe1851 100644
+--- a/drivers/media/cec/core/cec-adap.c
++++ b/drivers/media/cec/core/cec-adap.c
+@@ -39,15 +39,6 @@ static void cec_fill_msg_report_features(struct cec_adapter *adap,
+  */
+ #define CEC_XFER_TIMEOUT_MS (5 * 400 + 100)
+ 
+-#define call_op(adap, op, arg...) \
+-	(adap->ops->op ? adap->ops->op(adap, ## arg) : 0)
+-
+-#define call_void_op(adap, op, arg...)			\
+-	do {						\
+-		if (adap->ops->op)			\
+-			adap->ops->op(adap, ## arg);	\
+-	} while (0)
+-
+ static int cec_log_addr2idx(const struct cec_adapter *adap, u8 log_addr)
+ {
+ 	int i;
+@@ -161,10 +152,10 @@ static void cec_queue_event(struct cec_adapter *adap,
+ 	u64 ts = ktime_get_ns();
+ 	struct cec_fh *fh;
+ 
+-	mutex_lock(&adap->devnode.lock);
++	mutex_lock(&adap->devnode.lock_fhs);
+ 	list_for_each_entry(fh, &adap->devnode.fhs, list)
+ 		cec_queue_event_fh(fh, ev, ts);
+-	mutex_unlock(&adap->devnode.lock);
++	mutex_unlock(&adap->devnode.lock_fhs);
+ }
+ 
+ /* Notify userspace that the CEC pin changed state at the given time. */
+@@ -178,11 +169,12 @@ void cec_queue_pin_cec_event(struct cec_adapter *adap, bool is_high,
+ 	};
+ 	struct cec_fh *fh;
+ 
+-	mutex_lock(&adap->devnode.lock);
+-	list_for_each_entry(fh, &adap->devnode.fhs, list)
++	mutex_lock(&adap->devnode.lock_fhs);
++	list_for_each_entry(fh, &adap->devnode.fhs, list) {
+ 		if (fh->mode_follower == CEC_MODE_MONITOR_PIN)
+ 			cec_queue_event_fh(fh, &ev, ktime_to_ns(ts));
+-	mutex_unlock(&adap->devnode.lock);
++	}
++	mutex_unlock(&adap->devnode.lock_fhs);
+ }
+ EXPORT_SYMBOL_GPL(cec_queue_pin_cec_event);
+ 
+@@ -195,10 +187,10 @@ void cec_queue_pin_hpd_event(struct cec_adapter *adap, bool is_high, ktime_t ts)
+ 	};
+ 	struct cec_fh *fh;
+ 
+-	mutex_lock(&adap->devnode.lock);
++	mutex_lock(&adap->devnode.lock_fhs);
+ 	list_for_each_entry(fh, &adap->devnode.fhs, list)
+ 		cec_queue_event_fh(fh, &ev, ktime_to_ns(ts));
+-	mutex_unlock(&adap->devnode.lock);
++	mutex_unlock(&adap->devnode.lock_fhs);
+ }
+ EXPORT_SYMBOL_GPL(cec_queue_pin_hpd_event);
+ 
+@@ -211,10 +203,10 @@ void cec_queue_pin_5v_event(struct cec_adapter *adap, bool is_high, ktime_t ts)
+ 	};
+ 	struct cec_fh *fh;
+ 
+-	mutex_lock(&adap->devnode.lock);
++	mutex_lock(&adap->devnode.lock_fhs);
+ 	list_for_each_entry(fh, &adap->devnode.fhs, list)
+ 		cec_queue_event_fh(fh, &ev, ktime_to_ns(ts));
+-	mutex_unlock(&adap->devnode.lock);
++	mutex_unlock(&adap->devnode.lock_fhs);
+ }
+ EXPORT_SYMBOL_GPL(cec_queue_pin_5v_event);
+ 
+@@ -286,12 +278,12 @@ static void cec_queue_msg_monitor(struct cec_adapter *adap,
+ 	u32 monitor_mode = valid_la ? CEC_MODE_MONITOR :
+ 				      CEC_MODE_MONITOR_ALL;
+ 
+-	mutex_lock(&adap->devnode.lock);
++	mutex_lock(&adap->devnode.lock_fhs);
+ 	list_for_each_entry(fh, &adap->devnode.fhs, list) {
+ 		if (fh->mode_follower >= monitor_mode)
+ 			cec_queue_msg_fh(fh, msg);
+ 	}
+-	mutex_unlock(&adap->devnode.lock);
++	mutex_unlock(&adap->devnode.lock_fhs);
+ }
+ 
+ /*
+@@ -302,12 +294,12 @@ static void cec_queue_msg_followers(struct cec_adapter *adap,
+ {
+ 	struct cec_fh *fh;
+ 
+-	mutex_lock(&adap->devnode.lock);
++	mutex_lock(&adap->devnode.lock_fhs);
+ 	list_for_each_entry(fh, &adap->devnode.fhs, list) {
+ 		if (fh->mode_follower == CEC_MODE_FOLLOWER)
+ 			cec_queue_msg_fh(fh, msg);
+ 	}
+-	mutex_unlock(&adap->devnode.lock);
++	mutex_unlock(&adap->devnode.lock_fhs);
+ }
+ 
+ /* Notify userspace of an adapter state change. */
+@@ -365,38 +357,48 @@ static void cec_data_completed(struct cec_data *data)
+ /*
+  * A pending CEC transmit needs to be cancelled, either because the CEC
+  * adapter is disabled or the transmit takes an impossibly long time to
+- * finish.
++ * finish, or the reply timed out.
+  *
+  * This function is called with adap->lock held.
+  */
+-static void cec_data_cancel(struct cec_data *data, u8 tx_status)
++static void cec_data_cancel(struct cec_data *data, u8 tx_status, u8 rx_status)
+ {
++	struct cec_adapter *adap = data->adap;
++
+ 	/*
+ 	 * It's either the current transmit, or it is a pending
+ 	 * transmit. Take the appropriate action to clear it.
+ 	 */
+-	if (data->adap->transmitting == data) {
+-		data->adap->transmitting = NULL;
++	if (adap->transmitting == data) {
++		adap->transmitting = NULL;
+ 	} else {
+ 		list_del_init(&data->list);
+ 		if (!(data->msg.tx_status & CEC_TX_STATUS_OK))
+-			if (!WARN_ON(!data->adap->transmit_queue_sz))
+-				data->adap->transmit_queue_sz--;
++			if (!WARN_ON(!adap->transmit_queue_sz))
++				adap->transmit_queue_sz--;
+ 	}
+ 
+ 	if (data->msg.tx_status & CEC_TX_STATUS_OK) {
+ 		data->msg.rx_ts = ktime_get_ns();
+-		data->msg.rx_status = CEC_RX_STATUS_ABORTED;
++		data->msg.rx_status = rx_status;
++		if (!data->blocking)
++			data->msg.tx_status = 0;
+ 	} else {
+ 		data->msg.tx_ts = ktime_get_ns();
+ 		data->msg.tx_status |= tx_status |
+ 				       CEC_TX_STATUS_MAX_RETRIES;
+ 		data->msg.tx_error_cnt++;
+ 		data->attempts = 0;
++		if (!data->blocking)
++			data->msg.rx_status = 0;
+ 	}
+ 
+ 	/* Queue transmitted message for monitoring purposes */
+-	cec_queue_msg_monitor(data->adap, &data->msg, 1);
++	cec_queue_msg_monitor(adap, &data->msg, 1);
++
++	if (!data->blocking && data->msg.sequence)
++		/* Allow drivers to react to a canceled transmit */
++		call_void_op(adap, adap_nb_transmit_canceled, &data->msg);
+ 
+ 	cec_data_completed(data);
+ }
+@@ -417,15 +419,15 @@ static void cec_flush(struct cec_adapter *adap)
+ 	while (!list_empty(&adap->transmit_queue)) {
+ 		data = list_first_entry(&adap->transmit_queue,
+ 					struct cec_data, list);
+-		cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
++		cec_data_cancel(data, CEC_TX_STATUS_ABORTED, 0);
+ 	}
+ 	if (adap->transmitting)
+-		cec_data_cancel(adap->transmitting, CEC_TX_STATUS_ABORTED);
++		adap->transmit_in_progress_aborted = true;
+ 
+ 	/* Cancel the pending timeout work. */
+ 	list_for_each_entry_safe(data, n, &adap->wait_queue, list) {
+ 		if (cancel_delayed_work(&data->work))
+-			cec_data_cancel(data, CEC_TX_STATUS_OK);
++			cec_data_cancel(data, CEC_TX_STATUS_OK, CEC_RX_STATUS_ABORTED);
+ 		/*
+ 		 * If cancel_delayed_work returned false, then
+ 		 * the cec_wait_timeout function is running,
+@@ -500,6 +502,15 @@ int cec_thread_func(void *_adap)
+ 			goto unlock;
+ 		}
+ 
++		if (adap->transmit_in_progress &&
++		    adap->transmit_in_progress_aborted) {
++			if (adap->transmitting)
++				cec_data_cancel(adap->transmitting,
++						CEC_TX_STATUS_ABORTED, 0);
++			adap->transmit_in_progress = false;
++			adap->transmit_in_progress_aborted = false;
++			goto unlock;
++		}
+ 		if (adap->transmit_in_progress && timeout) {
+ 			/*
+ 			 * If we timeout, then log that. Normally this does
+@@ -515,7 +526,7 @@ int cec_thread_func(void *_adap)
+ 					adap->transmitting->msg.msg);
+ 				/* Just give up on this. */
+ 				cec_data_cancel(adap->transmitting,
+-						CEC_TX_STATUS_TIMEOUT);
++						CEC_TX_STATUS_TIMEOUT, 0);
+ 			} else {
+ 				pr_warn("cec-%s: transmit timed out\n", adap->name);
+ 			}
+@@ -571,10 +582,11 @@ int cec_thread_func(void *_adap)
+ 		if (data->attempts == 0)
+ 			data->attempts = attempts;
+ 
++		adap->transmit_in_progress_aborted = false;
+ 		/* Tell the adapter to transmit, cancel on error */
+-		if (adap->ops->adap_transmit(adap, data->attempts,
+-					     signal_free_time, &data->msg))
+-			cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
++		if (call_op(adap, adap_transmit, data->attempts,
++			    signal_free_time, &data->msg))
++			cec_data_cancel(data, CEC_TX_STATUS_ABORTED, 0);
+ 		else
+ 			adap->transmit_in_progress = true;
+ 
+@@ -598,6 +610,8 @@ void cec_transmit_done_ts(struct cec_adapter *adap, u8 status,
+ 	struct cec_msg *msg;
+ 	unsigned int attempts_made = arb_lost_cnt + nack_cnt +
+ 				     low_drive_cnt + error_cnt;
++	bool done = status & (CEC_TX_STATUS_MAX_RETRIES | CEC_TX_STATUS_OK);
++	bool aborted = adap->transmit_in_progress_aborted;
+ 
+ 	dprintk(2, "%s: status 0x%02x\n", __func__, status);
+ 	if (attempts_made < 1)
+@@ -618,6 +632,7 @@ void cec_transmit_done_ts(struct cec_adapter *adap, u8 status,
+ 		goto wake_thread;
+ 	}
+ 	adap->transmit_in_progress = false;
++	adap->transmit_in_progress_aborted = false;
+ 
+ 	msg = &data->msg;
+ 
+@@ -638,8 +653,7 @@ void cec_transmit_done_ts(struct cec_adapter *adap, u8 status,
+ 	 * the hardware didn't signal that it retried itself (by setting
+ 	 * CEC_TX_STATUS_MAX_RETRIES), then we will retry ourselves.
+ 	 */
+-	if (data->attempts > attempts_made &&
+-	    !(status & (CEC_TX_STATUS_MAX_RETRIES | CEC_TX_STATUS_OK))) {
++	if (!aborted && data->attempts > attempts_made && !done) {
+ 		/* Retry this message */
+ 		data->attempts -= attempts_made;
+ 		if (msg->timeout)
+@@ -654,6 +668,8 @@ void cec_transmit_done_ts(struct cec_adapter *adap, u8 status,
+ 		goto wake_thread;
+ 	}
+ 
++	if (aborted && !done)
++		status |= CEC_TX_STATUS_ABORTED;
+ 	data->attempts = 0;
+ 
+ 	/* Always set CEC_TX_STATUS_MAX_RETRIES on error */
+@@ -732,9 +748,7 @@ static void cec_wait_timeout(struct work_struct *work)
+ 
+ 	/* Mark the message as timed out */
+ 	list_del_init(&data->list);
+-	data->msg.rx_ts = ktime_get_ns();
+-	data->msg.rx_status = CEC_RX_STATUS_TIMEOUT;
+-	cec_data_completed(data);
++	cec_data_cancel(data, CEC_TX_STATUS_OK, CEC_RX_STATUS_TIMEOUT);
+ unlock:
+ 	mutex_unlock(&adap->lock);
+ }
+@@ -750,6 +764,7 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
+ {
+ 	struct cec_data *data;
+ 	bool is_raw = msg_is_raw(msg);
++	int err;
+ 
+ 	if (adap->devnode.unregistered)
+ 		return -ENODEV;
+@@ -912,14 +927,20 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
+ 	 * Release the lock and wait, retake the lock afterwards.
+ 	 */
+ 	mutex_unlock(&adap->lock);
+-	wait_for_completion_killable(&data->c);
+-	if (!data->completed)
+-		cancel_delayed_work_sync(&data->work);
++	err = wait_for_completion_killable(&data->c);
++	cancel_delayed_work_sync(&data->work);
+ 	mutex_lock(&adap->lock);
+ 
++	if (err)
++		adap->transmit_in_progress_aborted = true;
++
+ 	/* Cancel the transmit if it was interrupted */
+-	if (!data->completed)
+-		cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
++	if (!data->completed) {
++		if (data->msg.tx_status & CEC_TX_STATUS_OK)
++			cec_data_cancel(data, CEC_TX_STATUS_OK, CEC_RX_STATUS_ABORTED);
++		else
++			cec_data_cancel(data, CEC_TX_STATUS_ABORTED, 0);
++	}
+ 
+ 	/* The transmit completed (possibly with an error) */
+ 	*msg = data->msg;
+@@ -1294,7 +1315,7 @@ static int cec_config_log_addr(struct cec_adapter *adap,
+ 	 * Message not acknowledged, so this logical
+ 	 * address is free to use.
+ 	 */
+-	err = adap->ops->adap_log_addr(adap, log_addr);
++	err = call_op(adap, adap_log_addr, log_addr);
+ 	if (err)
+ 		return err;
+ 
+@@ -1311,9 +1332,8 @@ static int cec_config_log_addr(struct cec_adapter *adap,
+  */
+ static void cec_adap_unconfigure(struct cec_adapter *adap)
+ {
+-	if (!adap->needs_hpd ||
+-	    adap->phys_addr != CEC_PHYS_ADDR_INVALID)
+-		WARN_ON(adap->ops->adap_log_addr(adap, CEC_LOG_ADDR_INVALID));
++	if (!adap->needs_hpd || adap->phys_addr != CEC_PHYS_ADDR_INVALID)
++		WARN_ON(call_op(adap, adap_log_addr, CEC_LOG_ADDR_INVALID));
+ 	adap->log_addrs.log_addr_mask = 0;
+ 	adap->is_configured = false;
+ 	cec_flush(adap);
+@@ -1521,9 +1541,12 @@ static int cec_config_thread_func(void *arg)
+  */
+ static void cec_claim_log_addrs(struct cec_adapter *adap, bool block)
+ {
+-	if (WARN_ON(adap->is_configuring || adap->is_configured))
++	if (WARN_ON(adap->is_claiming_log_addrs ||
++		    adap->is_configuring || adap->is_configured))
+ 		return;
+ 
++	adap->is_claiming_log_addrs = true;
++
+ 	init_completion(&adap->config_completion);
+ 
+ 	/* Ready to kick off the thread */
+@@ -1532,11 +1555,67 @@ static void cec_claim_log_addrs(struct cec_adapter *adap, bool block)
+ 					   "ceccfg-%s", adap->name);
+ 	if (IS_ERR(adap->kthread_config)) {
+ 		adap->kthread_config = NULL;
++		adap->is_configuring = false;
+ 	} else if (block) {
+ 		mutex_unlock(&adap->lock);
+ 		wait_for_completion(&adap->config_completion);
+ 		mutex_lock(&adap->lock);
+ 	}
++	adap->is_claiming_log_addrs = false;
++}
++
++/*
++ * Helper function to enable/disable the CEC adapter.
++ *
++ * This function is called with adap->lock held.
++ */
++static int cec_adap_enable(struct cec_adapter *adap)
++{
++	bool enable;
++	int ret = 0;
++
++	enable = adap->monitor_all_cnt || adap->monitor_pin_cnt ||
++		 adap->log_addrs.num_log_addrs;
++	if (adap->needs_hpd)
++		enable = enable && adap->phys_addr != CEC_PHYS_ADDR_INVALID;
++
++	if (enable == adap->is_enabled)
++		return 0;
++
++	/* serialize adap_enable */
++	mutex_lock(&adap->devnode.lock);
++	if (enable) {
++		adap->last_initiator = 0xff;
++		adap->transmit_in_progress = false;
++		ret = adap->ops->adap_enable(adap, true);
++		if (!ret) {
++			/*
++			 * Enable monitor-all/pin modes if needed. We warn, but
++			 * continue if this fails as this is not a critical error.
++			 */
++			if (adap->monitor_all_cnt)
++				WARN_ON(call_op(adap, adap_monitor_all_enable, true));
++			if (adap->monitor_pin_cnt)
++				WARN_ON(call_op(adap, adap_monitor_pin_enable, true));
++		}
++	} else {
++		/* Disable monitor-all/pin modes if needed (needs_hpd == 1) */
++		if (adap->monitor_all_cnt)
++			WARN_ON(call_op(adap, adap_monitor_all_enable, false));
++		if (adap->monitor_pin_cnt)
++			WARN_ON(call_op(adap, adap_monitor_pin_enable, false));
++		WARN_ON(adap->ops->adap_enable(adap, false));
++		adap->last_initiator = 0xff;
++		adap->transmit_in_progress = false;
++		adap->transmit_in_progress_aborted = false;
++		if (adap->transmitting)
++			cec_data_cancel(adap->transmitting, CEC_TX_STATUS_ABORTED, 0);
++	}
++	if (!ret)
++		adap->is_enabled = enable;
++	wake_up_interruptible(&adap->kthread_waitq);
++	mutex_unlock(&adap->devnode.lock);
++	return ret;
+ }
+ 
+ /* Set a new physical address and send an event notifying userspace of this.
+@@ -1545,52 +1624,30 @@ static void cec_claim_log_addrs(struct cec_adapter *adap, bool block)
+  */
+ void __cec_s_phys_addr(struct cec_adapter *adap, u16 phys_addr, bool block)
+ {
++	bool becomes_invalid = phys_addr == CEC_PHYS_ADDR_INVALID;
++	bool is_invalid = adap->phys_addr == CEC_PHYS_ADDR_INVALID;
++
+ 	if (phys_addr == adap->phys_addr)
+ 		return;
+-	if (phys_addr != CEC_PHYS_ADDR_INVALID && adap->devnode.unregistered)
++	if (!becomes_invalid && adap->devnode.unregistered)
+ 		return;
+ 
+ 	dprintk(1, "new physical address %x.%x.%x.%x\n",
+ 		cec_phys_addr_exp(phys_addr));
+-	if (phys_addr == CEC_PHYS_ADDR_INVALID ||
+-	    adap->phys_addr != CEC_PHYS_ADDR_INVALID) {
++	if (becomes_invalid || !is_invalid) {
+ 		adap->phys_addr = CEC_PHYS_ADDR_INVALID;
+ 		cec_post_state_event(adap);
+ 		cec_adap_unconfigure(adap);
+-		/* Disabling monitor all mode should always succeed */
+-		if (adap->monitor_all_cnt)
+-			WARN_ON(call_op(adap, adap_monitor_all_enable, false));
+-		mutex_lock(&adap->devnode.lock);
+-		if (adap->needs_hpd || list_empty(&adap->devnode.fhs)) {
+-			WARN_ON(adap->ops->adap_enable(adap, false));
+-			adap->transmit_in_progress = false;
+-			wake_up_interruptible(&adap->kthread_waitq);
+-		}
+-		mutex_unlock(&adap->devnode.lock);
+-		if (phys_addr == CEC_PHYS_ADDR_INVALID)
++		if (becomes_invalid) {
++			cec_adap_enable(adap);
+ 			return;
++		}
+ 	}
+ 
+-	mutex_lock(&adap->devnode.lock);
+-	adap->last_initiator = 0xff;
+-	adap->transmit_in_progress = false;
+-
+-	if ((adap->needs_hpd || list_empty(&adap->devnode.fhs)) &&
+-	    adap->ops->adap_enable(adap, true)) {
+-		mutex_unlock(&adap->devnode.lock);
+-		return;
+-	}
+-
+-	if (adap->monitor_all_cnt &&
+-	    call_op(adap, adap_monitor_all_enable, true)) {
+-		if (adap->needs_hpd || list_empty(&adap->devnode.fhs))
+-			WARN_ON(adap->ops->adap_enable(adap, false));
+-		mutex_unlock(&adap->devnode.lock);
+-		return;
+-	}
+-	mutex_unlock(&adap->devnode.lock);
+-
+ 	adap->phys_addr = phys_addr;
++	if (is_invalid)
++		cec_adap_enable(adap);
++
+ 	cec_post_state_event(adap);
+ 	if (adap->log_addrs.num_log_addrs)
+ 		cec_claim_log_addrs(adap, block);
+@@ -1647,12 +1704,15 @@ int __cec_s_log_addrs(struct cec_adapter *adap,
+ 		      struct cec_log_addrs *log_addrs, bool block)
+ {
+ 	u16 type_mask = 0;
++	int err;
+ 	int i;
+ 
+ 	if (adap->devnode.unregistered)
+ 		return -ENODEV;
+ 
+ 	if (!log_addrs || log_addrs->num_log_addrs == 0) {
++		if (!adap->is_configuring && !adap->is_configured)
++			return 0;
+ 		cec_adap_unconfigure(adap);
+ 		adap->log_addrs.num_log_addrs = 0;
+ 		for (i = 0; i < CEC_MAX_LOG_ADDRS; i++)
+@@ -1660,6 +1720,7 @@ int __cec_s_log_addrs(struct cec_adapter *adap,
+ 		adap->log_addrs.osd_name[0] = '\0';
+ 		adap->log_addrs.vendor_id = CEC_VENDOR_ID_NONE;
+ 		adap->log_addrs.cec_version = CEC_OP_CEC_VERSION_2_0;
++		cec_adap_enable(adap);
+ 		return 0;
+ 	}
+ 
+@@ -1795,9 +1856,10 @@ int __cec_s_log_addrs(struct cec_adapter *adap,
+ 
+ 	log_addrs->log_addr_mask = adap->log_addrs.log_addr_mask;
+ 	adap->log_addrs = *log_addrs;
+-	if (adap->phys_addr != CEC_PHYS_ADDR_INVALID)
++	err = cec_adap_enable(adap);
++	if (!err && adap->phys_addr != CEC_PHYS_ADDR_INVALID)
+ 		cec_claim_log_addrs(adap, block);
+-	return 0;
++	return err;
+ }
+ 
+ int cec_s_log_addrs(struct cec_adapter *adap,
+@@ -1899,11 +1961,10 @@ static int cec_receive_notify(struct cec_adapter *adap, struct cec_msg *msg,
+ 	    msg->msg[1] != CEC_MSG_CDC_MESSAGE)
+ 		return 0;
+ 
+-	if (adap->ops->received) {
+-		/* Allow drivers to process the message first */
+-		if (adap->ops->received(adap, msg) != -ENOMSG)
+-			return 0;
+-	}
++	/* Allow drivers to process the message first */
++	if (adap->ops->received && !adap->devnode.unregistered &&
++	    adap->ops->received(adap, msg) != -ENOMSG)
++		return 0;
+ 
+ 	/*
+ 	 * REPORT_PHYSICAL_ADDR, CEC_MSG_USER_CONTROL_PRESSED and
+@@ -2096,20 +2157,25 @@ static int cec_receive_notify(struct cec_adapter *adap, struct cec_msg *msg,
+  */
+ int cec_monitor_all_cnt_inc(struct cec_adapter *adap)
+ {
+-	int ret = 0;
++	int ret;
++
++	if (adap->monitor_all_cnt++)
++		return 0;
+ 
+-	if (adap->monitor_all_cnt == 0)
+-		ret = call_op(adap, adap_monitor_all_enable, 1);
+-	if (ret == 0)
+-		adap->monitor_all_cnt++;
++	ret = cec_adap_enable(adap);
++	if (ret)
++		adap->monitor_all_cnt--;
+ 	return ret;
+ }
+ 
+ void cec_monitor_all_cnt_dec(struct cec_adapter *adap)
+ {
+-	adap->monitor_all_cnt--;
+-	if (adap->monitor_all_cnt == 0)
+-		WARN_ON(call_op(adap, adap_monitor_all_enable, 0));
++	if (WARN_ON(!adap->monitor_all_cnt))
++		return;
++	if (--adap->monitor_all_cnt)
++		return;
++	WARN_ON(call_op(adap, adap_monitor_all_enable, false));
++	cec_adap_enable(adap);
+ }
+ 
+ /*
+@@ -2119,20 +2185,25 @@ void cec_monitor_all_cnt_dec(struct cec_adapter *adap)
+  */
+ int cec_monitor_pin_cnt_inc(struct cec_adapter *adap)
+ {
+-	int ret = 0;
++	int ret;
++
++	if (adap->monitor_pin_cnt++)
++		return 0;
+ 
+-	if (adap->monitor_pin_cnt == 0)
+-		ret = call_op(adap, adap_monitor_pin_enable, 1);
+-	if (ret == 0)
+-		adap->monitor_pin_cnt++;
++	ret = cec_adap_enable(adap);
++	if (ret)
++		adap->monitor_pin_cnt--;
+ 	return ret;
+ }
+ 
+ void cec_monitor_pin_cnt_dec(struct cec_adapter *adap)
+ {
+-	adap->monitor_pin_cnt--;
+-	if (adap->monitor_pin_cnt == 0)
+-		WARN_ON(call_op(adap, adap_monitor_pin_enable, 0));
++	if (WARN_ON(!adap->monitor_pin_cnt))
++		return;
++	if (--adap->monitor_pin_cnt)
++		return;
++	WARN_ON(call_op(adap, adap_monitor_pin_enable, false));
++	cec_adap_enable(adap);
+ }
+ 
+ #ifdef CONFIG_DEBUG_FS
+@@ -2146,6 +2217,7 @@ int cec_adap_status(struct seq_file *file, void *priv)
+ 	struct cec_data *data;
+ 
+ 	mutex_lock(&adap->lock);
++	seq_printf(file, "enabled: %d\n", adap->is_enabled);
+ 	seq_printf(file, "configured: %d\n", adap->is_configured);
+ 	seq_printf(file, "configuring: %d\n", adap->is_configuring);
+ 	seq_printf(file, "phys_addr: %x.%x.%x.%x\n",
+@@ -2160,6 +2232,9 @@ int cec_adap_status(struct seq_file *file, void *priv)
+ 	if (adap->monitor_all_cnt)
+ 		seq_printf(file, "file handles in Monitor All mode: %u\n",
+ 			   adap->monitor_all_cnt);
++	if (adap->monitor_pin_cnt)
++		seq_printf(file, "file handles in Monitor Pin mode: %u\n",
++			   adap->monitor_pin_cnt);
+ 	if (adap->tx_timeouts) {
+ 		seq_printf(file, "transmit timeouts: %u\n",
+ 			   adap->tx_timeouts);
+diff --git a/drivers/media/cec/core/cec-api.c b/drivers/media/cec/core/cec-api.c
+index f922a2196b2b7..8bdf58abdf965 100644
+--- a/drivers/media/cec/core/cec-api.c
++++ b/drivers/media/cec/core/cec-api.c
+@@ -178,7 +178,7 @@ static long cec_adap_s_log_addrs(struct cec_adapter *adap, struct cec_fh *fh,
+ 			   CEC_LOG_ADDRS_FL_ALLOW_RC_PASSTHRU |
+ 			   CEC_LOG_ADDRS_FL_CDC_ONLY;
+ 	mutex_lock(&adap->lock);
+-	if (!adap->is_configuring &&
++	if (!adap->is_claiming_log_addrs && !adap->is_configuring &&
+ 	    (!log_addrs.num_log_addrs || !adap->is_configured) &&
+ 	    !cec_is_busy(adap, fh)) {
+ 		err = __cec_s_log_addrs(adap, &log_addrs, block);
+@@ -586,17 +586,6 @@ static int cec_open(struct inode *inode, struct file *filp)
+ 		return err;
+ 	}
+ 
+-	mutex_lock(&devnode->lock);
+-	if (list_empty(&devnode->fhs) &&
+-	    !adap->needs_hpd &&
+-	    adap->phys_addr == CEC_PHYS_ADDR_INVALID) {
+-		err = adap->ops->adap_enable(adap, true);
+-		if (err) {
+-			mutex_unlock(&devnode->lock);
+-			kfree(fh);
+-			return err;
+-		}
+-	}
+ 	filp->private_data = fh;
+ 
+ 	/* Queue up initial state events */
+@@ -606,7 +595,8 @@ static int cec_open(struct inode *inode, struct file *filp)
+ 		adap->conn_info.type != CEC_CONNECTOR_TYPE_NO_CONNECTOR;
+ 	cec_queue_event_fh(fh, &ev, 0);
+ #ifdef CONFIG_CEC_PIN
+-	if (adap->pin && adap->pin->ops->read_hpd) {
++	if (adap->pin && adap->pin->ops->read_hpd &&
++	    !adap->devnode.unregistered) {
+ 		err = adap->pin->ops->read_hpd(adap);
+ 		if (err >= 0) {
+ 			ev.event = err ? CEC_EVENT_PIN_HPD_HIGH :
+@@ -614,7 +604,8 @@ static int cec_open(struct inode *inode, struct file *filp)
+ 			cec_queue_event_fh(fh, &ev, 0);
+ 		}
+ 	}
+-	if (adap->pin && adap->pin->ops->read_5v) {
++	if (adap->pin && adap->pin->ops->read_5v &&
++	    !adap->devnode.unregistered) {
+ 		err = adap->pin->ops->read_5v(adap);
+ 		if (err >= 0) {
+ 			ev.event = err ? CEC_EVENT_PIN_5V_HIGH :
+@@ -624,7 +615,10 @@ static int cec_open(struct inode *inode, struct file *filp)
+ 	}
+ #endif
+ 
++	mutex_lock(&devnode->lock);
++	mutex_lock(&devnode->lock_fhs);
+ 	list_add(&fh->list, &devnode->fhs);
++	mutex_unlock(&devnode->lock_fhs);
+ 	mutex_unlock(&devnode->lock);
+ 
+ 	return 0;
+@@ -654,11 +648,9 @@ static int cec_release(struct inode *inode, struct file *filp)
+ 	mutex_unlock(&adap->lock);
+ 
+ 	mutex_lock(&devnode->lock);
++	mutex_lock(&devnode->lock_fhs);
+ 	list_del(&fh->list);
+-	if (cec_is_registered(adap) && list_empty(&devnode->fhs) &&
+-	    !adap->needs_hpd && adap->phys_addr == CEC_PHYS_ADDR_INVALID) {
+-		WARN_ON(adap->ops->adap_enable(adap, false));
+-	}
++	mutex_unlock(&devnode->lock_fhs);
+ 	mutex_unlock(&devnode->lock);
+ 
+ 	/* Unhook pending transmits from this filehandle. */
+@@ -672,6 +664,8 @@ static int cec_release(struct inode *inode, struct file *filp)
+ 		list_del(&data->xfer_list);
+ 	}
+ 	mutex_unlock(&adap->lock);
++
++	mutex_lock(&fh->lock);
+ 	while (!list_empty(&fh->msgs)) {
+ 		struct cec_msg_entry *entry =
+ 			list_first_entry(&fh->msgs, struct cec_msg_entry, list);
+@@ -689,6 +683,7 @@ static int cec_release(struct inode *inode, struct file *filp)
+ 			kfree(entry);
+ 		}
+ 	}
++	mutex_unlock(&fh->lock);
+ 	kfree(fh);
+ 
+ 	cec_put_device(devnode);
+diff --git a/drivers/media/cec/core/cec-core.c b/drivers/media/cec/core/cec-core.c
+index ece236291f358..0a3345e1a0f3a 100644
+--- a/drivers/media/cec/core/cec-core.c
++++ b/drivers/media/cec/core/cec-core.c
+@@ -167,8 +167,10 @@ static void cec_devnode_unregister(struct cec_adapter *adap)
+ 		return;
+ 	}
+ 
++	mutex_lock(&devnode->lock_fhs);
+ 	list_for_each_entry(fh, &devnode->fhs, list)
+ 		wake_up_interruptible(&fh->wait);
++	mutex_unlock(&devnode->lock_fhs);
+ 
+ 	devnode->registered = false;
+ 	devnode->unregistered = true;
+@@ -202,7 +204,7 @@ static ssize_t cec_error_inj_write(struct file *file,
+ 		line = strsep(&p, "\n");
+ 		if (!*line || *line == '#')
+ 			continue;
+-		if (!adap->ops->error_inj_parse_line(adap, line)) {
++		if (!call_op(adap, error_inj_parse_line, line)) {
+ 			kfree(buf);
+ 			return -EINVAL;
+ 		}
+@@ -215,7 +217,7 @@ static int cec_error_inj_show(struct seq_file *sf, void *unused)
+ {
+ 	struct cec_adapter *adap = sf->private;
+ 
+-	return adap->ops->error_inj_show(adap, sf);
++	return call_op(adap, error_inj_show, sf);
+ }
+ 
+ static int cec_error_inj_open(struct inode *inode, struct file *file)
+@@ -272,6 +274,7 @@ struct cec_adapter *cec_allocate_adapter(const struct cec_adap_ops *ops,
+ 
+ 	/* adap->devnode initialization */
+ 	INIT_LIST_HEAD(&adap->devnode.fhs);
++	mutex_init(&adap->devnode.lock_fhs);
+ 	mutex_init(&adap->devnode.lock);
+ 
+ 	adap->kthread = kthread_run(cec_thread_func, adap, "cec-%s", name);
+diff --git a/drivers/media/cec/core/cec-pin-priv.h b/drivers/media/cec/core/cec-pin-priv.h
+index f423db8855d9e..5577c62e4131f 100644
+--- a/drivers/media/cec/core/cec-pin-priv.h
++++ b/drivers/media/cec/core/cec-pin-priv.h
+@@ -12,6 +12,17 @@
+ #include <linux/atomic.h>
+ #include <media/cec-pin.h>
+ 
++#define call_pin_op(pin, op, arg...)					\
++	((pin && pin->ops->op && !pin->adap->devnode.unregistered) ?	\
++	 pin->ops->op(pin->adap, ## arg) : 0)
++
++#define call_void_pin_op(pin, op, arg...)				\
++	do {								\
++		if (pin && pin->ops->op &&				\
++		    !pin->adap->devnode.unregistered)			\
++			pin->ops->op(pin->adap, ## arg);		\
++	} while (0)
++
+ enum cec_pin_state {
+ 	/* CEC is off */
+ 	CEC_ST_OFF,
+diff --git a/drivers/media/cec/core/cec-pin.c b/drivers/media/cec/core/cec-pin.c
+index f8452a1f9fc6c..447fdada20e98 100644
+--- a/drivers/media/cec/core/cec-pin.c
++++ b/drivers/media/cec/core/cec-pin.c
+@@ -135,7 +135,7 @@ static void cec_pin_update(struct cec_pin *pin, bool v, bool force)
+ 
+ static bool cec_pin_read(struct cec_pin *pin)
+ {
+-	bool v = pin->ops->read(pin->adap);
++	bool v = call_pin_op(pin, read);
+ 
+ 	cec_pin_update(pin, v, false);
+ 	return v;
+@@ -143,13 +143,13 @@ static bool cec_pin_read(struct cec_pin *pin)
+ 
+ static void cec_pin_low(struct cec_pin *pin)
+ {
+-	pin->ops->low(pin->adap);
++	call_void_pin_op(pin, low);
+ 	cec_pin_update(pin, false, false);
+ }
+ 
+ static bool cec_pin_high(struct cec_pin *pin)
+ {
+-	pin->ops->high(pin->adap);
++	call_void_pin_op(pin, high);
+ 	return cec_pin_read(pin);
+ }
+ 
+@@ -1086,7 +1086,7 @@ static int cec_pin_thread_func(void *_adap)
+ 				    CEC_PIN_IRQ_UNCHANGED)) {
+ 		case CEC_PIN_IRQ_DISABLE:
+ 			if (irq_enabled) {
+-				pin->ops->disable_irq(adap);
++				call_void_pin_op(pin, disable_irq);
+ 				irq_enabled = false;
+ 			}
+ 			cec_pin_high(pin);
+@@ -1097,7 +1097,7 @@ static int cec_pin_thread_func(void *_adap)
+ 		case CEC_PIN_IRQ_ENABLE:
+ 			if (irq_enabled)
+ 				break;
+-			pin->enable_irq_failed = !pin->ops->enable_irq(adap);
++			pin->enable_irq_failed = !call_pin_op(pin, enable_irq);
+ 			if (pin->enable_irq_failed) {
+ 				cec_pin_to_idle(pin);
+ 				hrtimer_start(&pin->timer, ns_to_ktime(0),
+@@ -1112,8 +1112,8 @@ static int cec_pin_thread_func(void *_adap)
+ 		if (kthread_should_stop())
+ 			break;
+ 	}
+-	if (pin->ops->disable_irq && irq_enabled)
+-		pin->ops->disable_irq(adap);
++	if (irq_enabled)
++		call_void_pin_op(pin, disable_irq);
+ 	hrtimer_cancel(&pin->timer);
+ 	cec_pin_read(pin);
+ 	cec_pin_to_idle(pin);
+@@ -1208,7 +1208,7 @@ static void cec_pin_adap_status(struct cec_adapter *adap,
+ 	seq_printf(file, "state: %s\n", states[pin->state].name);
+ 	seq_printf(file, "tx_bit: %d\n", pin->tx_bit);
+ 	seq_printf(file, "rx_bit: %d\n", pin->rx_bit);
+-	seq_printf(file, "cec pin: %d\n", pin->ops->read(adap));
++	seq_printf(file, "cec pin: %d\n", call_pin_op(pin, read));
+ 	seq_printf(file, "cec pin events dropped: %u\n",
+ 		   pin->work_pin_events_dropped_cnt);
+ 	seq_printf(file, "irq failed: %d\n", pin->enable_irq_failed);
+@@ -1261,8 +1261,7 @@ static void cec_pin_adap_status(struct cec_adapter *adap,
+ 	pin->rx_data_bit_too_long_cnt = 0;
+ 	pin->rx_low_drive_cnt = 0;
+ 	pin->tx_low_drive_cnt = 0;
+-	if (pin->ops->status)
+-		pin->ops->status(adap, file);
++	call_void_pin_op(pin, status, file);
+ }
+ 
+ static int cec_pin_adap_monitor_all_enable(struct cec_adapter *adap,
+@@ -1278,7 +1277,7 @@ static void cec_pin_adap_free(struct cec_adapter *adap)
+ {
+ 	struct cec_pin *pin = adap->pin;
+ 
+-	if (pin->ops->free)
++	if (pin && pin->ops->free)
+ 		pin->ops->free(adap);
+ 	adap->pin = NULL;
+ 	kfree(pin);
+@@ -1288,7 +1287,7 @@ static int cec_pin_received(struct cec_adapter *adap, struct cec_msg *msg)
+ {
+ 	struct cec_pin *pin = adap->pin;
+ 
+-	if (pin->ops->received)
++	if (pin->ops->received && !adap->devnode.unregistered)
+ 		return pin->ops->received(adap, msg);
+ 	return -ENOMSG;
+ }
+diff --git a/drivers/media/cec/core/cec-priv.h b/drivers/media/cec/core/cec-priv.h
+index 9bbd05053d420..b78df931aa74b 100644
+--- a/drivers/media/cec/core/cec-priv.h
++++ b/drivers/media/cec/core/cec-priv.h
+@@ -17,6 +17,16 @@
+ 			pr_info("cec-%s: " fmt, adap->name, ## arg);	\
+ 	} while (0)
+ 
++#define call_op(adap, op, arg...)					\
++	((adap->ops->op && !adap->devnode.unregistered) ?		\
++	 adap->ops->op(adap, ## arg) : 0)
++
++#define call_void_op(adap, op, arg...)					\
++	do {								\
++		if (adap->ops->op && !adap->devnode.unregistered)	\
++			adap->ops->op(adap, ## arg);			\
++	} while (0)
++
+ /* devnode to cec_adapter */
+ #define to_cec_adapter(node) container_of(node, struct cec_adapter, devnode)
+ 
+diff --git a/drivers/media/dvb-frontends/lgdt3306a.c b/drivers/media/dvb-frontends/lgdt3306a.c
+index 47fb22180d5b4..d638cc88aa770 100644
+--- a/drivers/media/dvb-frontends/lgdt3306a.c
++++ b/drivers/media/dvb-frontends/lgdt3306a.c
+@@ -2213,6 +2213,11 @@ static int lgdt3306a_probe(struct i2c_client *client,
+ 	struct dvb_frontend *fe;
+ 	int ret;
+ 
++	if (!client->dev.platform_data) {
++		dev_err(&client->dev, "platform data is mandatory\n");
++		return -EINVAL;
++	}
++
+ 	config = kmemdup(client->dev.platform_data,
+ 			 sizeof(struct lgdt3306a_config), GFP_KERNEL);
+ 	if (config == NULL) {
+diff --git a/drivers/media/dvb-frontends/mxl5xx.c b/drivers/media/dvb-frontends/mxl5xx.c
+index 0b00a23436ed2..aaf9a173596a2 100644
+--- a/drivers/media/dvb-frontends/mxl5xx.c
++++ b/drivers/media/dvb-frontends/mxl5xx.c
+@@ -1390,57 +1390,57 @@ static int config_ts(struct mxl *state, enum MXL_HYDRA_DEMOD_ID_E demod_id,
+ 	u32 nco_count_min = 0;
+ 	u32 clk_type = 0;
+ 
+-	struct MXL_REG_FIELD_T xpt_sync_polarity[MXL_HYDRA_DEMOD_MAX] = {
++	static const struct MXL_REG_FIELD_T xpt_sync_polarity[MXL_HYDRA_DEMOD_MAX] = {
+ 		{0x90700010, 8, 1}, {0x90700010, 9, 1},
+ 		{0x90700010, 10, 1}, {0x90700010, 11, 1},
+ 		{0x90700010, 12, 1}, {0x90700010, 13, 1},
+ 		{0x90700010, 14, 1}, {0x90700010, 15, 1} };
+-	struct MXL_REG_FIELD_T xpt_clock_polarity[MXL_HYDRA_DEMOD_MAX] = {
++	static const struct MXL_REG_FIELD_T xpt_clock_polarity[MXL_HYDRA_DEMOD_MAX] = {
+ 		{0x90700010, 16, 1}, {0x90700010, 17, 1},
+ 		{0x90700010, 18, 1}, {0x90700010, 19, 1},
+ 		{0x90700010, 20, 1}, {0x90700010, 21, 1},
+ 		{0x90700010, 22, 1}, {0x90700010, 23, 1} };
+-	struct MXL_REG_FIELD_T xpt_valid_polarity[MXL_HYDRA_DEMOD_MAX] = {
++	static const struct MXL_REG_FIELD_T xpt_valid_polarity[MXL_HYDRA_DEMOD_MAX] = {
+ 		{0x90700014, 0, 1}, {0x90700014, 1, 1},
+ 		{0x90700014, 2, 1}, {0x90700014, 3, 1},
+ 		{0x90700014, 4, 1}, {0x90700014, 5, 1},
+ 		{0x90700014, 6, 1}, {0x90700014, 7, 1} };
+-	struct MXL_REG_FIELD_T xpt_ts_clock_phase[MXL_HYDRA_DEMOD_MAX] = {
++	static const struct MXL_REG_FIELD_T xpt_ts_clock_phase[MXL_HYDRA_DEMOD_MAX] = {
+ 		{0x90700018, 0, 3}, {0x90700018, 4, 3},
+ 		{0x90700018, 8, 3}, {0x90700018, 12, 3},
+ 		{0x90700018, 16, 3}, {0x90700018, 20, 3},
+ 		{0x90700018, 24, 3}, {0x90700018, 28, 3} };
+-	struct MXL_REG_FIELD_T xpt_lsb_first[MXL_HYDRA_DEMOD_MAX] = {
++	static const struct MXL_REG_FIELD_T xpt_lsb_first[MXL_HYDRA_DEMOD_MAX] = {
+ 		{0x9070000C, 16, 1}, {0x9070000C, 17, 1},
+ 		{0x9070000C, 18, 1}, {0x9070000C, 19, 1},
+ 		{0x9070000C, 20, 1}, {0x9070000C, 21, 1},
+ 		{0x9070000C, 22, 1}, {0x9070000C, 23, 1} };
+-	struct MXL_REG_FIELD_T xpt_sync_byte[MXL_HYDRA_DEMOD_MAX] = {
++	static const struct MXL_REG_FIELD_T xpt_sync_byte[MXL_HYDRA_DEMOD_MAX] = {
+ 		{0x90700010, 0, 1}, {0x90700010, 1, 1},
+ 		{0x90700010, 2, 1}, {0x90700010, 3, 1},
+ 		{0x90700010, 4, 1}, {0x90700010, 5, 1},
+ 		{0x90700010, 6, 1}, {0x90700010, 7, 1} };
+-	struct MXL_REG_FIELD_T xpt_enable_output[MXL_HYDRA_DEMOD_MAX] = {
++	static const struct MXL_REG_FIELD_T xpt_enable_output[MXL_HYDRA_DEMOD_MAX] = {
+ 		{0x9070000C, 0, 1}, {0x9070000C, 1, 1},
+ 		{0x9070000C, 2, 1}, {0x9070000C, 3, 1},
+ 		{0x9070000C, 4, 1}, {0x9070000C, 5, 1},
+ 		{0x9070000C, 6, 1}, {0x9070000C, 7, 1} };
+-	struct MXL_REG_FIELD_T xpt_err_replace_sync[MXL_HYDRA_DEMOD_MAX] = {
++	static const struct MXL_REG_FIELD_T xpt_err_replace_sync[MXL_HYDRA_DEMOD_MAX] = {
+ 		{0x9070000C, 24, 1}, {0x9070000C, 25, 1},
+ 		{0x9070000C, 26, 1}, {0x9070000C, 27, 1},
+ 		{0x9070000C, 28, 1}, {0x9070000C, 29, 1},
+ 		{0x9070000C, 30, 1}, {0x9070000C, 31, 1} };
+-	struct MXL_REG_FIELD_T xpt_err_replace_valid[MXL_HYDRA_DEMOD_MAX] = {
++	static const struct MXL_REG_FIELD_T xpt_err_replace_valid[MXL_HYDRA_DEMOD_MAX] = {
+ 		{0x90700014, 8, 1}, {0x90700014, 9, 1},
+ 		{0x90700014, 10, 1}, {0x90700014, 11, 1},
+ 		{0x90700014, 12, 1}, {0x90700014, 13, 1},
+ 		{0x90700014, 14, 1}, {0x90700014, 15, 1} };
+-	struct MXL_REG_FIELD_T xpt_continuous_clock[MXL_HYDRA_DEMOD_MAX] = {
++	static const struct MXL_REG_FIELD_T xpt_continuous_clock[MXL_HYDRA_DEMOD_MAX] = {
+ 		{0x907001D4, 0, 1}, {0x907001D4, 1, 1},
+ 		{0x907001D4, 2, 1}, {0x907001D4, 3, 1},
+ 		{0x907001D4, 4, 1}, {0x907001D4, 5, 1},
+ 		{0x907001D4, 6, 1}, {0x907001D4, 7, 1} };
+-	struct MXL_REG_FIELD_T xpt_nco_clock_rate[MXL_HYDRA_DEMOD_MAX] = {
++	static const struct MXL_REG_FIELD_T xpt_nco_clock_rate[MXL_HYDRA_DEMOD_MAX] = {
+ 		{0x90700044, 16, 80}, {0x90700044, 16, 81},
+ 		{0x90700044, 16, 82}, {0x90700044, 16, 83},
+ 		{0x90700044, 16, 84}, {0x90700044, 16, 85},
+diff --git a/drivers/media/mc/mc-devnode.c b/drivers/media/mc/mc-devnode.c
+index f11382afe23bf..f249199dc616b 100644
+--- a/drivers/media/mc/mc-devnode.c
++++ b/drivers/media/mc/mc-devnode.c
+@@ -246,15 +246,14 @@ int __must_check media_devnode_register(struct media_device *mdev,
+ 	kobject_set_name(&devnode->cdev.kobj, "media%d", devnode->minor);
+ 
+ 	/* Part 3: Add the media and char device */
++	set_bit(MEDIA_FLAG_REGISTERED, &devnode->flags);
+ 	ret = cdev_device_add(&devnode->cdev, &devnode->dev);
+ 	if (ret < 0) {
++		clear_bit(MEDIA_FLAG_REGISTERED, &devnode->flags);
+ 		pr_err("%s: cdev_device_add failed\n", __func__);
+ 		goto cdev_add_error;
+ 	}
+ 
+-	/* Part 4: Activate this minor. The char device can now be used. */
+-	set_bit(MEDIA_FLAG_REGISTERED, &devnode->flags);
+-
+ 	return 0;
+ 
+ cdev_add_error:
+diff --git a/drivers/media/pci/ngene/ngene-core.c b/drivers/media/pci/ngene/ngene-core.c
+index e1a8c611d01b4..97cac9770802b 100644
+--- a/drivers/media/pci/ngene/ngene-core.c
++++ b/drivers/media/pci/ngene/ngene-core.c
+@@ -1488,7 +1488,9 @@ static int init_channel(struct ngene_channel *chan)
+ 	}
+ 
+ 	if (dev->ci.en && (io & NGENE_IO_TSOUT)) {
+-		dvb_ca_en50221_init(adapter, dev->ci.en, 0, 1);
++		ret = dvb_ca_en50221_init(adapter, dev->ci.en, 0, 1);
++		if (ret != 0)
++			goto err;
+ 		set_transfer(chan, 1);
+ 		chan->dev->channel[2].DataFormatFlags = DF_SWAP32;
+ 		set_transfer(&chan->dev->channel[2], 1);
+diff --git a/drivers/media/radio/radio-shark2.c b/drivers/media/radio/radio-shark2.c
+index f1c5c0a6a335c..e3e6aa87fe081 100644
+--- a/drivers/media/radio/radio-shark2.c
++++ b/drivers/media/radio/radio-shark2.c
+@@ -62,7 +62,7 @@ struct shark_device {
+ #ifdef SHARK_USE_LEDS
+ 	struct work_struct led_work;
+ 	struct led_classdev leds[NO_LEDS];
+-	char led_names[NO_LEDS][32];
++	char led_names[NO_LEDS][64];
+ 	atomic_t brightness[NO_LEDS];
+ 	unsigned long brightness_new;
+ #endif
+diff --git a/drivers/media/usb/b2c2/flexcop-usb.c b/drivers/media/usb/b2c2/flexcop-usb.c
+index 2299d5cca8ffb..6ded5a6181aa2 100644
+--- a/drivers/media/usb/b2c2/flexcop-usb.c
++++ b/drivers/media/usb/b2c2/flexcop-usb.c
+@@ -502,17 +502,21 @@ static int flexcop_usb_transfer_init(struct flexcop_usb *fc_usb)
+ 
+ static int flexcop_usb_init(struct flexcop_usb *fc_usb)
+ {
+-	/* use the alternate setting with the larges buffer */
+-	int ret = usb_set_interface(fc_usb->udev, 0, 1);
++	struct usb_host_interface *alt;
++	int ret;
+ 
++	/* use the alternate setting with the largest buffer */
++	ret = usb_set_interface(fc_usb->udev, 0, 1);
+ 	if (ret) {
+ 		err("set interface failed.");
+ 		return ret;
+ 	}
+ 
+-	if (fc_usb->uintf->cur_altsetting->desc.bNumEndpoints < 1)
++	alt = fc_usb->uintf->cur_altsetting;
++
++	if (alt->desc.bNumEndpoints < 2)
+ 		return -ENODEV;
+-	if (!usb_endpoint_is_isoc_in(&fc_usb->uintf->cur_altsetting->endpoint[0].desc))
++	if (!usb_endpoint_is_isoc_in(&alt->endpoint[0].desc))
+ 		return -ENODEV;
+ 
+ 	switch (fc_usb->udev->speed) {
+diff --git a/drivers/media/usb/stk1160/stk1160-video.c b/drivers/media/usb/stk1160/stk1160-video.c
+index 4cf540d1b2501..2a5a90311e0cc 100644
+--- a/drivers/media/usb/stk1160/stk1160-video.c
++++ b/drivers/media/usb/stk1160/stk1160-video.c
+@@ -99,7 +99,7 @@ void stk1160_buffer_done(struct stk1160 *dev)
+ static inline
+ void stk1160_copy_video(struct stk1160 *dev, u8 *src, int len)
+ {
+-	int linesdone, lineoff, lencopy;
++	int linesdone, lineoff, lencopy, offset;
+ 	int bytesperline = dev->width * 2;
+ 	struct stk1160_buffer *buf = dev->isoc_ctl.buf;
+ 	u8 *dst = buf->mem;
+@@ -139,8 +139,13 @@ void stk1160_copy_video(struct stk1160 *dev, u8 *src, int len)
+ 	 * Check if we have enough space left in the buffer.
+ 	 * In that case, we force loop exit after copy.
+ 	 */
+-	if (lencopy > buf->bytesused - buf->length) {
+-		lencopy = buf->bytesused - buf->length;
++	offset = dst - (u8 *)buf->mem;
++	if (offset > buf->length) {
++		dev_warn_ratelimited(dev->dev, "out of bounds offset\n");
++		return;
++	}
++	if (lencopy > buf->length - offset) {
++		lencopy = buf->length - offset;
+ 		remain = lencopy;
+ 	}
+ 
+@@ -182,8 +187,13 @@ void stk1160_copy_video(struct stk1160 *dev, u8 *src, int len)
+ 		 * Check if we have enough space left in the buffer.
+ 		 * In that case, we force loop exit after copy.
+ 		 */
+-		if (lencopy > buf->bytesused - buf->length) {
+-			lencopy = buf->bytesused - buf->length;
++		offset = dst - (u8 *)buf->mem;
++		if (offset > buf->length) {
++			dev_warn_ratelimited(dev->dev, "offset out of bounds\n");
++			return;
++		}
++		if (lencopy > buf->length - offset) {
++			lencopy = buf->length - offset;
+ 			remain = lencopy;
+ 		}
+ 
+diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
+index a593ea0598b55..4c31aa0a941e7 100644
+--- a/drivers/media/v4l2-core/v4l2-dev.c
++++ b/drivers/media/v4l2-core/v4l2-dev.c
+@@ -1030,8 +1030,10 @@ int __video_register_device(struct video_device *vdev,
+ 	vdev->dev.devt = MKDEV(VIDEO_MAJOR, vdev->minor);
+ 	vdev->dev.parent = vdev->dev_parent;
+ 	dev_set_name(&vdev->dev, "%s%d", name_base, vdev->num);
++	mutex_lock(&videodev_lock);
+ 	ret = device_register(&vdev->dev);
+ 	if (ret < 0) {
++		mutex_unlock(&videodev_lock);
+ 		pr_err("%s: device_register failed\n", __func__);
+ 		goto cleanup;
+ 	}
+@@ -1051,6 +1053,7 @@ int __video_register_device(struct video_device *vdev,
+ 
+ 	/* Part 6: Activate this minor. The char device can now be used. */
+ 	set_bit(V4L2_FL_REGISTERED, &vdev->flags);
++	mutex_unlock(&videodev_lock);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index b949a4468bf58..7ba1343ca5c1e 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -114,13 +114,12 @@ void mmc_retune_enable(struct mmc_host *host)
+ 
+ /*
+  * Pause re-tuning for a small set of operations.  The pause begins after the
+- * next command and after first doing re-tuning.
++ * next command.
+  */
+ void mmc_retune_pause(struct mmc_host *host)
+ {
+ 	if (!host->retune_paused) {
+ 		host->retune_paused = 1;
+-		mmc_retune_needed(host);
+ 		mmc_retune_hold(host);
+ 	}
+ }
+diff --git a/drivers/mmc/core/slot-gpio.c b/drivers/mmc/core/slot-gpio.c
+index 681653d097ef5..04c510b751b26 100644
+--- a/drivers/mmc/core/slot-gpio.c
++++ b/drivers/mmc/core/slot-gpio.c
+@@ -202,6 +202,26 @@ int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id,
+ }
+ EXPORT_SYMBOL(mmc_gpiod_request_cd);
+ 
++/**
++ * mmc_gpiod_set_cd_config - set config for card-detection GPIO
++ * @host: mmc host
++ * @config: Generic pinconf config (from pinconf_to_config_packed())
++ *
++ * This can be used by mmc host drivers to fixup a card-detection GPIO's config
++ * (e.g. set PIN_CONFIG_BIAS_PULL_UP) after acquiring the GPIO descriptor
++ * through mmc_gpiod_request_cd().
++ *
++ * Returns:
++ * 0 on success, or a negative errno value on error.
++ */
++int mmc_gpiod_set_cd_config(struct mmc_host *host, unsigned long config)
++{
++	struct mmc_gpio *ctx = host->slot.handler_priv;
++
++	return gpiod_set_config(ctx->cd_gpio, config);
++}
++EXPORT_SYMBOL(mmc_gpiod_set_cd_config);
++
+ bool mmc_can_gpio_cd(struct mmc_host *host)
+ {
+ 	struct mmc_gpio *ctx = host->slot.handler_priv;
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index 2a28101777c6f..ba1272db851f7 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -81,6 +81,7 @@ struct sdhci_acpi_host {
+ enum {
+ 	DMI_QUIRK_RESET_SD_SIGNAL_VOLT_ON_SUSP			= BIT(0),
+ 	DMI_QUIRK_SD_NO_WRITE_PROTECT				= BIT(1),
++	DMI_QUIRK_SD_CD_ACTIVE_HIGH				= BIT(2),
+ };
+ 
+ static inline void *sdhci_acpi_priv(struct sdhci_acpi_host *c)
+@@ -761,7 +762,20 @@ static const struct acpi_device_id sdhci_acpi_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(acpi, sdhci_acpi_ids);
+ 
++/* Please keep this list sorted alphabetically */
+ static const struct dmi_system_id sdhci_acpi_quirks[] = {
++	{
++		/*
++		 * The Acer Aspire Switch 10 (SW5-012) microSD slot always
++		 * reports the card being write-protected even though microSD
++		 * cards do not have a write-protect switch at all.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire SW5-012"),
++		},
++		.driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT,
++	},
+ 	{
+ 		/*
+ 		 * The Lenovo Miix 320-10ICR has a bug in the _PS0 method of
+@@ -778,15 +792,23 @@ static const struct dmi_system_id sdhci_acpi_quirks[] = {
+ 	},
+ 	{
+ 		/*
+-		 * The Acer Aspire Switch 10 (SW5-012) microSD slot always
+-		 * reports the card being write-protected even though microSD
+-		 * cards do not have a write-protect switch at all.
++		 * Lenovo Yoga Tablet 2 Pro 1380F/L (13" Android version) this
++		 * has broken WP reporting and an inverted CD signal.
++		 * Note this has more or less the same BIOS as the Lenovo Yoga
++		 * Tablet 2 830F/L or 1050F/L (8" and 10" Android), but unlike
++		 * the 830 / 1050 models which share the same mainboard this
++		 * model has a different mainboard and the inverted CD and
++		 * broken WP are unique to this board.
+ 		 */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire SW5-012"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corp."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "VALLEYVIEW C0 PLATFORM"),
++			DMI_MATCH(DMI_BOARD_NAME, "BYT-T FFD8"),
++			/* Full match so as to NOT match the 830/1050 BIOS */
++			DMI_MATCH(DMI_BIOS_VERSION, "BLADE_21.X64.0005.R00.1504101516"),
+ 		},
+-		.driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT,
++		.driver_data = (void *)(DMI_QUIRK_SD_NO_WRITE_PROTECT |
++					DMI_QUIRK_SD_CD_ACTIVE_HIGH),
+ 	},
+ 	{
+ 		/*
+@@ -799,6 +821,17 @@ static const struct dmi_system_id sdhci_acpi_quirks[] = {
+ 		},
+ 		.driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT,
+ 	},
++	{
++		/*
++		 * The Toshiba WT10-A's microSD slot always reports the card being
++		 * write-protected.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TOSHIBA WT10-A"),
++		},
++		.driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT,
++	},
+ 	{} /* Terminating entry */
+ };
+ 
+@@ -913,6 +946,9 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
+ 	if (sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD)) {
+ 		bool v = sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD_OVERRIDE_LEVEL);
+ 
++		if (quirks & DMI_QUIRK_SD_CD_ACTIVE_HIGH)
++			host->mmc->caps2 |= MMC_CAP2_CD_ACTIVE_HIGH;
++
+ 		err = mmc_gpiod_request_cd(host->mmc, NULL, 0, v, 0);
+ 		if (err) {
+ 			if (err == -EPROBE_DEFER)
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index eb52e0c5a0202..9d74ee989cb72 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -140,19 +140,26 @@ static const struct timing_data td[] = {
+ 
+ struct sdhci_am654_data {
+ 	struct regmap *base;
+-	bool legacy_otapdly;
+ 	int otap_del_sel[ARRAY_SIZE(td)];
+ 	int itap_del_sel[ARRAY_SIZE(td)];
++	u32 itap_del_ena[ARRAY_SIZE(td)];
+ 	int clkbuf_sel;
+ 	int trm_icp;
+ 	int drv_strength;
+ 	int strb_sel;
+ 	u32 flags;
+ 	u32 quirks;
++	bool dll_enable;
+ 
+ #define SDHCI_AM654_QUIRK_FORCE_CDTEST BIT(0)
+ };
+ 
++struct window {
++	u8 start;
++	u8 end;
++	u8 length;
++};
++
+ struct sdhci_am654_driver_data {
+ 	const struct sdhci_pltfm_data *pdata;
+ 	u32 flags;
+@@ -232,11 +239,13 @@ static void sdhci_am654_setup_dll(struct sdhci_host *host, unsigned int clock)
+ }
+ 
+ static void sdhci_am654_write_itapdly(struct sdhci_am654_data *sdhci_am654,
+-				      u32 itapdly)
++				      u32 itapdly, u32 enable)
+ {
+ 	/* Set ITAPCHGWIN before writing to ITAPDLY */
+ 	regmap_update_bits(sdhci_am654->base, PHY_CTRL4, ITAPCHGWIN_MASK,
+ 			   1 << ITAPCHGWIN_SHIFT);
++	regmap_update_bits(sdhci_am654->base, PHY_CTRL4, ITAPDLYENA_MASK,
++			   enable << ITAPDLYENA_SHIFT);
+ 	regmap_update_bits(sdhci_am654->base, PHY_CTRL4, ITAPDLYSEL_MASK,
+ 			   itapdly << ITAPDLYSEL_SHIFT);
+ 	regmap_update_bits(sdhci_am654->base, PHY_CTRL4, ITAPCHGWIN_MASK, 0);
+@@ -253,8 +262,8 @@ static void sdhci_am654_setup_delay_chain(struct sdhci_am654_data *sdhci_am654,
+ 	mask = SELDLYTXCLK_MASK | SELDLYRXCLK_MASK;
+ 	regmap_update_bits(sdhci_am654->base, PHY_CTRL5, mask, val);
+ 
+-	sdhci_am654_write_itapdly(sdhci_am654,
+-				  sdhci_am654->itap_del_sel[timing]);
++	sdhci_am654_write_itapdly(sdhci_am654, sdhci_am654->itap_del_sel[timing],
++				  sdhci_am654->itap_del_ena[timing]);
+ }
+ 
+ static void sdhci_am654_set_clock(struct sdhci_host *host, unsigned int clock)
+@@ -263,7 +272,6 @@ static void sdhci_am654_set_clock(struct sdhci_host *host, unsigned int clock)
+ 	struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host);
+ 	unsigned char timing = host->mmc->ios.timing;
+ 	u32 otap_del_sel;
+-	u32 otap_del_ena;
+ 	u32 mask, val;
+ 
+ 	regmap_update_bits(sdhci_am654->base, PHY_CTRL1, ENDLL_MASK, 0);
+@@ -271,15 +279,10 @@ static void sdhci_am654_set_clock(struct sdhci_host *host, unsigned int clock)
+ 	sdhci_set_clock(host, clock);
+ 
+ 	/* Setup DLL Output TAP delay */
+-	if (sdhci_am654->legacy_otapdly)
+-		otap_del_sel = sdhci_am654->otap_del_sel[0];
+-	else
+-		otap_del_sel = sdhci_am654->otap_del_sel[timing];
+-
+-	otap_del_ena = (timing > MMC_TIMING_UHS_SDR25) ? 1 : 0;
++	otap_del_sel = sdhci_am654->otap_del_sel[timing];
+ 
+ 	mask = OTAPDLYENA_MASK | OTAPDLYSEL_MASK;
+-	val = (otap_del_ena << OTAPDLYENA_SHIFT) |
++	val = (0x1 << OTAPDLYENA_SHIFT) |
+ 	      (otap_del_sel << OTAPDLYSEL_SHIFT);
+ 
+ 	/* Write to STRBSEL for HS400 speed mode */
+@@ -294,10 +297,21 @@ static void sdhci_am654_set_clock(struct sdhci_host *host, unsigned int clock)
+ 
+ 	regmap_update_bits(sdhci_am654->base, PHY_CTRL4, mask, val);
+ 
+-	if (timing > MMC_TIMING_UHS_SDR25 && clock >= CLOCK_TOO_SLOW_HZ)
++	if (timing > MMC_TIMING_UHS_SDR25 && clock >= CLOCK_TOO_SLOW_HZ) {
+ 		sdhci_am654_setup_dll(host, clock);
+-	else
++		sdhci_am654->dll_enable = true;
++
++		if (timing == MMC_TIMING_MMC_HS400) {
++			sdhci_am654->itap_del_ena[timing] = 0x1;
++			sdhci_am654->itap_del_sel[timing] = sdhci_am654->itap_del_sel[timing - 1];
++		}
++
++		sdhci_am654_write_itapdly(sdhci_am654, sdhci_am654->itap_del_sel[timing],
++					  sdhci_am654->itap_del_ena[timing]);
++	} else {
+ 		sdhci_am654_setup_delay_chain(sdhci_am654, timing);
++		sdhci_am654->dll_enable = false;
++	}
+ 
+ 	regmap_update_bits(sdhci_am654->base, PHY_CTRL5, CLKBUFSEL_MASK,
+ 			   sdhci_am654->clkbuf_sel);
+@@ -310,19 +324,29 @@ static void sdhci_j721e_4bit_set_clock(struct sdhci_host *host,
+ 	struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host);
+ 	unsigned char timing = host->mmc->ios.timing;
+ 	u32 otap_del_sel;
++	u32 itap_del_ena;
++	u32 itap_del_sel;
+ 	u32 mask, val;
+ 
+ 	/* Setup DLL Output TAP delay */
+-	if (sdhci_am654->legacy_otapdly)
+-		otap_del_sel = sdhci_am654->otap_del_sel[0];
+-	else
+-		otap_del_sel = sdhci_am654->otap_del_sel[timing];
++	otap_del_sel = sdhci_am654->otap_del_sel[timing];
+ 
+ 	mask = OTAPDLYENA_MASK | OTAPDLYSEL_MASK;
+ 	val = (0x1 << OTAPDLYENA_SHIFT) |
+ 	      (otap_del_sel << OTAPDLYSEL_SHIFT);
+-	regmap_update_bits(sdhci_am654->base, PHY_CTRL4, mask, val);
+ 
++	/* Setup Input TAP delay */
++	itap_del_ena = sdhci_am654->itap_del_ena[timing];
++	itap_del_sel = sdhci_am654->itap_del_sel[timing];
++
++	mask |= ITAPDLYENA_MASK | ITAPDLYSEL_MASK;
++	val |= (itap_del_ena << ITAPDLYENA_SHIFT) |
++	       (itap_del_sel << ITAPDLYSEL_SHIFT);
++
++	regmap_update_bits(sdhci_am654->base, PHY_CTRL4, ITAPCHGWIN_MASK,
++			   1 << ITAPCHGWIN_SHIFT);
++	regmap_update_bits(sdhci_am654->base, PHY_CTRL4, mask, val);
++	regmap_update_bits(sdhci_am654->base, PHY_CTRL4, ITAPCHGWIN_MASK, 0);
+ 	regmap_update_bits(sdhci_am654->base, PHY_CTRL5, CLKBUFSEL_MASK,
+ 			   sdhci_am654->clkbuf_sel);
+ 
+@@ -415,40 +439,105 @@ static u32 sdhci_am654_cqhci_irq(struct sdhci_host *host, u32 intmask)
+ 	return 0;
+ }
+ 
+-#define ITAP_MAX	32
++#define ITAPDLY_LENGTH 32
++#define ITAPDLY_LAST_INDEX (ITAPDLY_LENGTH - 1)
++
++static u32 sdhci_am654_calculate_itap(struct sdhci_host *host, struct window
++			  *fail_window, u8 num_fails, bool circular_buffer)
++{
++	u8 itap = 0, start_fail = 0, end_fail = 0, pass_length = 0;
++	u8 first_fail_start = 0, last_fail_end = 0;
++	struct device *dev = mmc_dev(host->mmc);
++	struct window pass_window = {0, 0, 0};
++	int prev_fail_end = -1;
++	u8 i;
++
++	if (!num_fails)
++		return ITAPDLY_LAST_INDEX >> 1;
++
++	if (fail_window->length == ITAPDLY_LENGTH) {
++		dev_err(dev, "No passing ITAPDLY, return 0\n");
++		return 0;
++	}
++
++	first_fail_start = fail_window->start;
++	last_fail_end = fail_window[num_fails - 1].end;
++
++	for (i = 0; i < num_fails; i++) {
++		start_fail = fail_window[i].start;
++		end_fail = fail_window[i].end;
++		pass_length = start_fail - (prev_fail_end + 1);
++
++		if (pass_length > pass_window.length) {
++			pass_window.start = prev_fail_end + 1;
++			pass_window.length = pass_length;
++		}
++		prev_fail_end = end_fail;
++	}
++
++	if (!circular_buffer)
++		pass_length = ITAPDLY_LAST_INDEX - last_fail_end;
++	else
++		pass_length = ITAPDLY_LAST_INDEX - last_fail_end + first_fail_start;
++
++	if (pass_length > pass_window.length) {
++		pass_window.start = last_fail_end + 1;
++		pass_window.length = pass_length;
++	}
++
++	if (!circular_buffer)
++		itap = pass_window.start + (pass_window.length >> 1);
++	else
++		itap = (pass_window.start + (pass_window.length >> 1)) % ITAPDLY_LENGTH;
++
++	return (itap > ITAPDLY_LAST_INDEX) ? ITAPDLY_LAST_INDEX >> 1 : itap;
++}
++
+ static int sdhci_am654_platform_execute_tuning(struct sdhci_host *host,
+ 					       u32 opcode)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host);
+-	int cur_val, prev_val = 1, fail_len = 0, pass_window = 0, pass_len;
+-	u32 itap;
++	unsigned char timing = host->mmc->ios.timing;
++	struct window fail_window[ITAPDLY_LENGTH];
++	u8 curr_pass, itap;
++	u8 fail_index = 0;
++	u8 prev_pass = 1;
++
++	memset(fail_window, 0, sizeof(fail_window));
+ 
+ 	/* Enable ITAPDLY */
+-	regmap_update_bits(sdhci_am654->base, PHY_CTRL4, ITAPDLYENA_MASK,
+-			   1 << ITAPDLYENA_SHIFT);
++	sdhci_am654->itap_del_ena[timing] = 0x1;
++
++	for (itap = 0; itap < ITAPDLY_LENGTH; itap++) {
++		sdhci_am654_write_itapdly(sdhci_am654, itap, sdhci_am654->itap_del_ena[timing]);
+ 
+-	for (itap = 0; itap < ITAP_MAX; itap++) {
+-		sdhci_am654_write_itapdly(sdhci_am654, itap);
++		curr_pass = !mmc_send_tuning(host->mmc, opcode, NULL);
+ 
+-		cur_val = !mmc_send_tuning(host->mmc, opcode, NULL);
+-		if (cur_val && !prev_val)
+-			pass_window = itap;
++		if (!curr_pass && prev_pass)
++			fail_window[fail_index].start = itap;
++
++		if (!curr_pass) {
++			fail_window[fail_index].end = itap;
++			fail_window[fail_index].length++;
++		}
+ 
+-		if (!cur_val)
+-			fail_len++;
++		if (curr_pass && !prev_pass)
++			fail_index++;
+ 
+-		prev_val = cur_val;
++		prev_pass = curr_pass;
+ 	}
+-	/*
+-	 * Having determined the length of the failing window and start of
+-	 * the passing window calculate the length of the passing window and
+-	 * set the final value halfway through it considering the range as a
+-	 * circular buffer
+-	 */
+-	pass_len = ITAP_MAX - fail_len;
+-	itap = (pass_window + (pass_len >> 1)) % ITAP_MAX;
+-	sdhci_am654_write_itapdly(sdhci_am654, itap);
++
++	if (fail_window[fail_index].length != 0)
++		fail_index++;
++
++	itap = sdhci_am654_calculate_itap(host, fail_window, fail_index,
++					  sdhci_am654->dll_enable);
++
++	sdhci_am654_write_itapdly(sdhci_am654, itap, sdhci_am654->itap_del_ena[timing]);
++
++	/* Save ITAPDLY */
++	sdhci_am654->itap_del_sel[timing] = itap;
+ 
+ 	return 0;
+ }
+@@ -579,32 +668,15 @@ static int sdhci_am654_get_otap_delay(struct sdhci_host *host,
+ 	int i;
+ 	int ret;
+ 
+-	ret = device_property_read_u32(dev, td[MMC_TIMING_LEGACY].otap_binding,
+-				 &sdhci_am654->otap_del_sel[MMC_TIMING_LEGACY]);
+-	if (ret) {
+-		/*
+-		 * ti,otap-del-sel-legacy is mandatory, look for old binding
+-		 * if not found.
+-		 */
+-		ret = device_property_read_u32(dev, "ti,otap-del-sel",
+-					       &sdhci_am654->otap_del_sel[0]);
+-		if (ret) {
+-			dev_err(dev, "Couldn't find otap-del-sel\n");
+-
+-			return ret;
+-		}
+-
+-		dev_info(dev, "Using legacy binding ti,otap-del-sel\n");
+-		sdhci_am654->legacy_otapdly = true;
+-
+-		return 0;
+-	}
+-
+ 	for (i = MMC_TIMING_LEGACY; i <= MMC_TIMING_MMC_HS400; i++) {
+ 
+ 		ret = device_property_read_u32(dev, td[i].otap_binding,
+ 					       &sdhci_am654->otap_del_sel[i]);
+ 		if (ret) {
++			if (i == MMC_TIMING_LEGACY) {
++				dev_err(dev, "Couldn't find mandatory ti,otap-del-sel-legacy\n");
++				return ret;
++			}
+ 			dev_dbg(dev, "Couldn't find %s\n",
+ 				td[i].otap_binding);
+ 			/*
+@@ -617,9 +689,12 @@ static int sdhci_am654_get_otap_delay(struct sdhci_host *host,
+ 				host->mmc->caps2 &= ~td[i].capability;
+ 		}
+ 
+-		if (td[i].itap_binding)
+-			device_property_read_u32(dev, td[i].itap_binding,
+-						 &sdhci_am654->itap_del_sel[i]);
++		if (td[i].itap_binding) {
++			ret = device_property_read_u32(dev, td[i].itap_binding,
++						       &sdhci_am654->itap_del_sel[i]);
++			if (!ret)
++				sdhci_am654->itap_del_ena[i] = 0x1;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/mtd/nand/raw/nand_hynix.c b/drivers/mtd/nand/raw/nand_hynix.c
+index a9f50c9af1097..856b3d6eceb73 100644
+--- a/drivers/mtd/nand/raw/nand_hynix.c
++++ b/drivers/mtd/nand/raw/nand_hynix.c
+@@ -402,7 +402,7 @@ static int hynix_nand_rr_init(struct nand_chip *chip)
+ 	if (ret)
+ 		pr_warn("failed to initialize read-retry infrastructure");
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static void hynix_nand_extract_oobsize(struct nand_chip *chip,
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index 548d8095c0a79..b695f3f233286 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -1117,18 +1117,30 @@ static int enic_set_vf_port(struct net_device *netdev, int vf,
+ 	pp->request = nla_get_u8(port[IFLA_PORT_REQUEST]);
+ 
+ 	if (port[IFLA_PORT_PROFILE]) {
++		if (nla_len(port[IFLA_PORT_PROFILE]) != PORT_PROFILE_MAX) {
++			memcpy(pp, &prev_pp, sizeof(*pp));
++			return -EINVAL;
++		}
+ 		pp->set |= ENIC_SET_NAME;
+ 		memcpy(pp->name, nla_data(port[IFLA_PORT_PROFILE]),
+ 			PORT_PROFILE_MAX);
+ 	}
+ 
+ 	if (port[IFLA_PORT_INSTANCE_UUID]) {
++		if (nla_len(port[IFLA_PORT_INSTANCE_UUID]) != PORT_UUID_MAX) {
++			memcpy(pp, &prev_pp, sizeof(*pp));
++			return -EINVAL;
++		}
+ 		pp->set |= ENIC_SET_INSTANCE;
+ 		memcpy(pp->instance_uuid,
+ 			nla_data(port[IFLA_PORT_INSTANCE_UUID]), PORT_UUID_MAX);
+ 	}
+ 
+ 	if (port[IFLA_PORT_HOST_UUID]) {
++		if (nla_len(port[IFLA_PORT_HOST_UUID]) != PORT_UUID_MAX) {
++			memcpy(pp, &prev_pp, sizeof(*pp));
++			return -EINVAL;
++		}
+ 		pp->set |= ENIC_SET_HOST;
+ 		memcpy(pp->host_uuid,
+ 			nla_data(port[IFLA_PORT_HOST_UUID]), PORT_UUID_MAX);
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index c78587ddb32fd..fa46854fd697c 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -1109,10 +1109,13 @@ static void gmac_tx_irq_enable(struct net_device *netdev,
+ {
+ 	struct gemini_ethernet_port *port = netdev_priv(netdev);
+ 	struct gemini_ethernet *geth = port->geth;
++	unsigned long flags;
+ 	u32 val, mask;
+ 
+ 	netdev_dbg(netdev, "%s device %d\n", __func__, netdev->dev_id);
+ 
++	spin_lock_irqsave(&geth->irq_lock, flags);
++
+ 	mask = GMAC0_IRQ0_TXQ0_INTS << (6 * netdev->dev_id + txq);
+ 
+ 	if (en)
+@@ -1121,6 +1124,8 @@ static void gmac_tx_irq_enable(struct net_device *netdev,
+ 	val = readl(geth->base + GLOBAL_INTERRUPT_ENABLE_0_REG);
+ 	val = en ? val | mask : val & ~mask;
+ 	writel(val, geth->base + GLOBAL_INTERRUPT_ENABLE_0_REG);
++
++	spin_unlock_irqrestore(&geth->irq_lock, flags);
+ }
+ 
+ static void gmac_tx_irq(struct net_device *netdev, unsigned int txq_num)
+@@ -1427,15 +1432,19 @@ static unsigned int gmac_rx(struct net_device *netdev, unsigned int budget)
+ 	union gmac_rxdesc_3 word3;
+ 	struct page *page = NULL;
+ 	unsigned int page_offs;
++	unsigned long flags;
+ 	unsigned short r, w;
+ 	union dma_rwptr rw;
+ 	dma_addr_t mapping;
+ 	int frag_nr = 0;
+ 
++	spin_lock_irqsave(&geth->irq_lock, flags);
+ 	rw.bits32 = readl(ptr_reg);
+ 	/* Reset interrupt as all packages until here are taken into account */
+ 	writel(DEFAULT_Q0_INT_BIT << netdev->dev_id,
+ 	       geth->base + GLOBAL_INTERRUPT_STATUS_1_REG);
++	spin_unlock_irqrestore(&geth->irq_lock, flags);
++
+ 	r = rw.bits.rptr;
+ 	w = rw.bits.wptr;
+ 
+@@ -1738,10 +1747,9 @@ static irqreturn_t gmac_irq(int irq, void *data)
+ 		gmac_update_hw_stats(netdev);
+ 
+ 	if (val & (GMAC0_RX_OVERRUN_INT_BIT << (netdev->dev_id * 8))) {
++		spin_lock(&geth->irq_lock);
+ 		writel(GMAC0_RXDERR_INT_BIT << (netdev->dev_id * 8),
+ 		       geth->base + GLOBAL_INTERRUPT_STATUS_4_REG);
+-
+-		spin_lock(&geth->irq_lock);
+ 		u64_stats_update_begin(&port->ir_stats_syncp);
+ 		++port->stats.rx_fifo_errors;
+ 		u64_stats_update_end(&port->ir_stats_syncp);
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index fe29769cb1589..adb76db66031f 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3443,6 +3443,14 @@ static int fec_enet_init(struct net_device *ndev)
+ 	return ret;
+ }
+ 
++static void fec_enet_deinit(struct net_device *ndev)
++{
++	struct fec_enet_private *fep = netdev_priv(ndev);
++
++	netif_napi_del(&fep->napi);
++	fec_enet_free_queue(ndev);
++}
++
+ #ifdef CONFIG_OF
+ static int fec_reset_phy(struct platform_device *pdev)
+ {
+@@ -3813,6 +3821,7 @@ fec_probe(struct platform_device *pdev)
+ 	fec_enet_mii_remove(fep);
+ failed_mii_init:
+ failed_irq:
++	fec_enet_deinit(ndev);
+ failed_init:
+ 	fec_ptp_stop(pdev);
+ failed_reset:
+@@ -3874,6 +3883,7 @@ fec_drv_remove(struct platform_device *pdev)
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 
++	fec_enet_deinit(ndev);
+ 	free_netdev(ndev);
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index c5ae673005908..780fbb3e1ed06 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -103,14 +103,13 @@ static int fec_ptp_enable_pps(struct fec_enet_private *fep, uint enable)
+ 	u64 ns;
+ 	val = 0;
+ 
+-	if (fep->pps_enable == enable)
+-		return 0;
+-
+-	fep->pps_channel = DEFAULT_PPS_CHANNEL;
+-	fep->reload_period = PPS_OUPUT_RELOAD_PERIOD;
+-
+ 	spin_lock_irqsave(&fep->tmreg_lock, flags);
+ 
++	if (fep->pps_enable == enable) {
++		spin_unlock_irqrestore(&fep->tmreg_lock, flags);
++		return 0;
++	}
++
+ 	if (enable) {
+ 		/* clear capture or output compare interrupt status if have.
+ 		 */
+@@ -441,6 +440,9 @@ static int fec_ptp_enable(struct ptp_clock_info *ptp,
+ 	int ret = 0;
+ 
+ 	if (rq->type == PTP_CLK_REQ_PPS) {
++		fep->pps_channel = DEFAULT_PPS_CHANNEL;
++		fep->reload_period = PPS_OUPUT_RELOAD_PERIOD;
++
+ 		ret = fec_ptp_enable_pps(fep, on);
+ 
+ 		return ret;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 0ba43c93abb26..42dc76f85c62c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -1523,6 +1523,9 @@ static int cmd_comp_notifier(struct notifier_block *nb,
+ 	dev = container_of(cmd, struct mlx5_core_dev, cmd);
+ 	eqe = data;
+ 
++	if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
++		return NOTIFY_DONE;
++
+ 	mlx5_cmd_comp_handler(dev, be32_to_cpu(eqe->data.cmd.vector), false);
+ 
+ 	return NOTIFY_OK;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 5673a4113253b..f1834853872da 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3695,7 +3695,7 @@ mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
+ 		mlx5e_fold_sw_stats64(priv, stats);
+ 	}
+ 
+-	stats->rx_dropped = priv->stats.qcnt.rx_out_of_buffer;
++	stats->rx_missed_errors = priv->stats.qcnt.rx_out_of_buffer;
+ 
+ 	stats->rx_length_errors =
+ 		PPORT_802_3_GET(pstats, a_in_range_length_errors) +
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
+index 41bc31e3f9356..03ede9425889f 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
+@@ -1234,7 +1234,6 @@ static void qed_slowpath_task(struct work_struct *work)
+ static int qed_slowpath_wq_start(struct qed_dev *cdev)
+ {
+ 	struct qed_hwfn *hwfn;
+-	char name[NAME_SIZE];
+ 	int i;
+ 
+ 	if (IS_VF(cdev))
+@@ -1243,11 +1242,11 @@ static int qed_slowpath_wq_start(struct qed_dev *cdev)
+ 	for_each_hwfn(cdev, i) {
+ 		hwfn = &cdev->hwfns[i];
+ 
+-		snprintf(name, NAME_SIZE, "slowpath-%02x:%02x.%02x",
+-			 cdev->pdev->bus->number,
+-			 PCI_SLOT(cdev->pdev->devfn), hwfn->abs_pf_id);
++		hwfn->slowpath_wq = alloc_workqueue("slowpath-%02x:%02x.%02x",
++					 0, 0, cdev->pdev->bus->number,
++					 PCI_SLOT(cdev->pdev->devfn),
++					 hwfn->abs_pf_id);
+ 
+-		hwfn->slowpath_wq = alloc_workqueue(name, 0, 0);
+ 		if (!hwfn->slowpath_wq) {
+ 			DP_NOTICE(hwfn, "Cannot create slowpath workqueue\n");
+ 			return -ENOMEM;
+diff --git a/drivers/net/ethernet/smsc/smc91x.h b/drivers/net/ethernet/smsc/smc91x.h
+index 387539a8094bf..95e9204ce8276 100644
+--- a/drivers/net/ethernet/smsc/smc91x.h
++++ b/drivers/net/ethernet/smsc/smc91x.h
+@@ -175,8 +175,8 @@ static inline void mcf_outsw(void *a, unsigned char *p, int l)
+ 		writew(*wp++, a);
+ }
+ 
+-#define SMC_inw(a, r)		_swapw(readw((a) + (r)))
+-#define SMC_outw(lp, v, a, r)	writew(_swapw(v), (a) + (r))
++#define SMC_inw(a, r)		ioread16be((a) + (r))
++#define SMC_outw(lp, v, a, r)	iowrite16be(v, (a) + (r))
+ #define SMC_insw(a, r, p, l)	mcf_insw(a + r, p, l)
+ #define SMC_outsw(a, r, p, l)	mcf_outsw(a + r, p, l)
+ 
+diff --git a/drivers/net/ethernet/sun/sungem.c b/drivers/net/ethernet/sun/sungem.c
+index 58f142ee78a30..4d6a9f02c7388 100644
+--- a/drivers/net/ethernet/sun/sungem.c
++++ b/drivers/net/ethernet/sun/sungem.c
+@@ -949,17 +949,6 @@ static irqreturn_t gem_interrupt(int irq, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
+-#ifdef CONFIG_NET_POLL_CONTROLLER
+-static void gem_poll_controller(struct net_device *dev)
+-{
+-	struct gem *gp = netdev_priv(dev);
+-
+-	disable_irq(gp->pdev->irq);
+-	gem_interrupt(gp->pdev->irq, dev);
+-	enable_irq(gp->pdev->irq);
+-}
+-#endif
+-
+ static void gem_tx_timeout(struct net_device *dev, unsigned int txqueue)
+ {
+ 	struct gem *gp = netdev_priv(dev);
+@@ -2836,9 +2825,6 @@ static const struct net_device_ops gem_netdev_ops = {
+ 	.ndo_change_mtu		= gem_change_mtu,
+ 	.ndo_validate_addr	= eth_validate_addr,
+ 	.ndo_set_mac_address    = gem_set_mac_address,
+-#ifdef CONFIG_NET_POLL_CONTROLLER
+-	.ndo_poll_controller    = gem_poll_controller,
+-#endif
+ };
+ 
+ static int gem_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index bfea28bd45027..d04b1450875b6 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -440,7 +440,7 @@ static noinline_for_stack int ipvlan_process_v4_outbound(struct sk_buff *skb)
+ 
+ 	memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+ 
+-	err = ip_local_out(net, skb->sk, skb);
++	err = ip_local_out(net, NULL, skb);
+ 	if (unlikely(net_xmit_eval(err)))
+ 		DEV_STATS_INC(dev, tx_errors);
+ 	else
+@@ -495,7 +495,7 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
+ 
+ 	memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
+ 
+-	err = ip6_local_out(dev_net(dev), skb->sk, skb);
++	err = ip6_local_out(dev_net(dev), NULL, skb);
+ 	if (unlikely(net_xmit_eval(err)))
+ 		DEV_STATS_INC(dev, tx_errors);
+ 	else
+diff --git a/drivers/net/usb/aqc111.c b/drivers/net/usb/aqc111.c
+index 4ea02116be182..895d4f5166f99 100644
+--- a/drivers/net/usb/aqc111.c
++++ b/drivers/net/usb/aqc111.c
+@@ -1141,17 +1141,15 @@ static int aqc111_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 			continue;
+ 		}
+ 
+-		/* Clone SKB */
+-		new_skb = skb_clone(skb, GFP_ATOMIC);
++		new_skb = netdev_alloc_skb_ip_align(dev->net, pkt_len);
+ 
+ 		if (!new_skb)
+ 			goto err;
+ 
+-		new_skb->len = pkt_len;
++		skb_put(new_skb, pkt_len);
++		memcpy(new_skb->data, skb->data, pkt_len);
+ 		skb_pull(new_skb, AQ_RX_HW_PAD);
+-		skb_set_tail_pointer(new_skb, new_skb->len);
+ 
+-		new_skb->truesize = SKB_TRUESIZE(new_skb->len);
+ 		if (aqc111_data->rx_checksum)
+ 			aqc111_rx_checksum(new_skb, pkt_desc);
+ 
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index be2761d0bcd91..4dd1a9fb4c8a0 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1301,6 +1301,9 @@ static const struct usb_device_id products[] = {
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)},	/* Telit LN920 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)},	/* Telit FN990 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1080, 2)}, /* Telit FE990 */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x10a0, 0)}, /* Telit FN920C04 */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x10a4, 0)}, /* Telit FN920C04 */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x10a9, 0)}, /* Telit FN920C04 */
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1100, 3)},	/* Telit ME910 */
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1101, 3)},	/* Telit ME910 dual modem */
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},	/* Telit LE920 */
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index 569be01700aa1..30e5f6910e6fd 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -845,7 +845,7 @@ static int smsc95xx_start_rx_path(struct usbnet *dev, int in_pm)
+ static int smsc95xx_reset(struct usbnet *dev)
+ {
+ 	struct smsc95xx_priv *pdata = dev->driver_priv;
+-	u32 read_buf, write_buf, burst_cap;
++	u32 read_buf, burst_cap;
+ 	int ret = 0, timeout;
+ 
+ 	netif_dbg(dev, ifup, dev->net, "entering smsc95xx_reset\n");
+@@ -987,10 +987,13 @@ static int smsc95xx_reset(struct usbnet *dev)
+ 		return ret;
+ 	netif_dbg(dev, ifup, dev->net, "ID_REV = 0x%08x\n", read_buf);
+ 
++	ret = smsc95xx_read_reg(dev, LED_GPIO_CFG, &read_buf);
++	if (ret < 0)
++		return ret;
+ 	/* Configure GPIO pins as LED outputs */
+-	write_buf = LED_GPIO_CFG_SPD_LED | LED_GPIO_CFG_LNK_LED |
+-		LED_GPIO_CFG_FDX_LED;
+-	ret = smsc95xx_write_reg(dev, LED_GPIO_CFG, write_buf);
++	read_buf |= LED_GPIO_CFG_SPD_LED | LED_GPIO_CFG_LNK_LED |
++		    LED_GPIO_CFG_FDX_LED;
++	ret = smsc95xx_write_reg(dev, LED_GPIO_CFG, read_buf);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1801,9 +1804,11 @@ static int smsc95xx_reset_resume(struct usb_interface *intf)
+ 
+ static void smsc95xx_rx_csum_offload(struct sk_buff *skb)
+ {
+-	skb->csum = *(u16 *)(skb_tail_pointer(skb) - 2);
++	u16 *csum_ptr = (u16 *)(skb_tail_pointer(skb) - 2);
++
++	skb->csum = (__force __wsum)get_unaligned(csum_ptr);
+ 	skb->ip_summed = CHECKSUM_COMPLETE;
+-	skb_trim(skb, skb->len - 2);
++	skb_trim(skb, skb->len - 2); /* remove csum */
+ }
+ 
+ static int smsc95xx_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+@@ -1861,25 +1866,22 @@ static int smsc95xx_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 				if (dev->net->features & NETIF_F_RXCSUM)
+ 					smsc95xx_rx_csum_offload(skb);
+ 				skb_trim(skb, skb->len - 4); /* remove fcs */
+-				skb->truesize = size + sizeof(struct sk_buff);
+ 
+ 				return 1;
+ 			}
+ 
+-			ax_skb = skb_clone(skb, GFP_ATOMIC);
++			ax_skb = netdev_alloc_skb_ip_align(dev->net, size);
+ 			if (unlikely(!ax_skb)) {
+ 				netdev_warn(dev->net, "Error allocating skb\n");
+ 				return 0;
+ 			}
+ 
+-			ax_skb->len = size;
+-			ax_skb->data = packet;
+-			skb_set_tail_pointer(ax_skb, size);
++			skb_put(ax_skb, size);
++			memcpy(ax_skb->data, packet, size);
+ 
+ 			if (dev->net->features & NETIF_F_RXCSUM)
+ 				smsc95xx_rx_csum_offload(ax_skb);
+ 			skb_trim(ax_skb, ax_skb->len - 4); /* remove fcs */
+-			ax_skb->truesize = size + sizeof(struct sk_buff);
+ 
+ 			usbnet_skb_return(dev, ax_skb);
+ 		}
+diff --git a/drivers/net/usb/sr9700.c b/drivers/net/usb/sr9700.c
+index 811c8751308c6..3fac642bec772 100644
+--- a/drivers/net/usb/sr9700.c
++++ b/drivers/net/usb/sr9700.c
+@@ -418,19 +418,15 @@ static int sr9700_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 			skb_pull(skb, 3);
+ 			skb->len = len;
+ 			skb_set_tail_pointer(skb, len);
+-			skb->truesize = len + sizeof(struct sk_buff);
+ 			return 2;
+ 		}
+ 
+-		/* skb_clone is used for address align */
+-		sr_skb = skb_clone(skb, GFP_ATOMIC);
++		sr_skb = netdev_alloc_skb_ip_align(dev->net, len);
+ 		if (!sr_skb)
+ 			return 0;
+ 
+-		sr_skb->len = len;
+-		sr_skb->data = skb->data + 3;
+-		skb_set_tail_pointer(sr_skb, len);
+-		sr_skb->truesize = len + sizeof(struct sk_buff);
++		skb_put(sr_skb, len);
++		memcpy(sr_skb->data, skb->data + 3, len);
+ 		usbnet_skb_return(dev, sr_skb);
+ 
+ 		skb_pull(skb, len + SR_RX_OVERHEAD);
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index b173497a3e0ca..3096769e718ed 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -1778,10 +1778,6 @@ static bool vxlan_set_mac(struct vxlan_dev *vxlan,
+ 	if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr))
+ 		return false;
+ 
+-	/* Ignore packets from invalid src-address */
+-	if (!is_valid_ether_addr(eth_hdr(skb)->h_source))
+-		return false;
+-
+ 	/* Get address from the outer IP header */
+ 	if (vxlan_get_sk_family(vs) == AF_INET) {
+ 		saddr.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
+diff --git a/drivers/net/wireless/ath/ar5523/ar5523.c b/drivers/net/wireless/ath/ar5523/ar5523.c
+index efe38b2c1df73..71c2bf8817dc2 100644
+--- a/drivers/net/wireless/ath/ar5523/ar5523.c
++++ b/drivers/net/wireless/ath/ar5523/ar5523.c
+@@ -1590,6 +1590,20 @@ static int ar5523_probe(struct usb_interface *intf,
+ 	struct ar5523 *ar;
+ 	int error = -ENOMEM;
+ 
++	static const u8 bulk_ep_addr[] = {
++		AR5523_CMD_TX_PIPE | USB_DIR_OUT,
++		AR5523_DATA_TX_PIPE | USB_DIR_OUT,
++		AR5523_CMD_RX_PIPE | USB_DIR_IN,
++		AR5523_DATA_RX_PIPE | USB_DIR_IN,
++		0};
++
++	if (!usb_check_bulk_endpoints(intf, bulk_ep_addr)) {
++		dev_err(&dev->dev,
++			"Could not find all expected endpoints\n");
++		error = -ENODEV;
++		goto out;
++	}
++
+ 	/*
+ 	 * Load firmware if the device requires it.  This will return
+ 	 * -ENXIO on success and we'll get called back afer the usb
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index 57ac80997319b..d03a36c45f9f3 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -625,6 +625,9 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.max_spatial_stream = 4,
+ 		.fw = {
+ 			.dir = WCN3990_HW_1_0_FW_DIR,
++			.board = WCN3990_HW_1_0_BOARD_DATA_FILE,
++			.board_size = WCN3990_BOARD_DATA_SZ,
++			.board_ext_size = WCN3990_BOARD_EXT_DATA_SZ,
+ 		},
+ 		.sw_decrypt_mcast_mgmt = true,
+ 		.hw_ops = &wcn3990_ops,
+diff --git a/drivers/net/wireless/ath/ath10k/debugfs_sta.c b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
+index 367539f2c3700..f7912c72cba34 100644
+--- a/drivers/net/wireless/ath/ath10k/debugfs_sta.c
++++ b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
+@@ -438,7 +438,7 @@ ath10k_dbg_sta_write_peer_debug_trigger(struct file *file,
+ 	}
+ out:
+ 	mutex_unlock(&ar->conf_mutex);
+-	return count;
++	return ret ?: count;
+ }
+ 
+ static const struct file_operations fops_peer_debug_trigger = {
+diff --git a/drivers/net/wireless/ath/ath10k/hw.h b/drivers/net/wireless/ath/ath10k/hw.h
+index d3ef83ad577da..c4df0e4e161b6 100644
+--- a/drivers/net/wireless/ath/ath10k/hw.h
++++ b/drivers/net/wireless/ath/ath10k/hw.h
+@@ -132,6 +132,7 @@ enum qca9377_chip_id_rev {
+ /* WCN3990 1.0 definitions */
+ #define WCN3990_HW_1_0_DEV_VERSION	ATH10K_HW_WCN3990
+ #define WCN3990_HW_1_0_FW_DIR		ATH10K_FW_DIR "/WCN3990/hw1.0"
++#define WCN3990_HW_1_0_BOARD_DATA_FILE "board.bin"
+ 
+ #define ATH10K_FW_FILE_BASE		"firmware"
+ #define ATH10K_FW_API_MAX		6
+diff --git a/drivers/net/wireless/ath/ath10k/targaddrs.h b/drivers/net/wireless/ath/ath10k/targaddrs.h
+index ec556bb88d658..ba37e6c7ced08 100644
+--- a/drivers/net/wireless/ath/ath10k/targaddrs.h
++++ b/drivers/net/wireless/ath/ath10k/targaddrs.h
+@@ -491,4 +491,7 @@ struct host_interest {
+ #define QCA4019_BOARD_DATA_SZ	  12064
+ #define QCA4019_BOARD_EXT_DATA_SZ 0
+ 
++#define WCN3990_BOARD_DATA_SZ	  26328
++#define WCN3990_BOARD_EXT_DATA_SZ 0
++
+ #endif /* __TARGADDRS_H__ */
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index 85fe855ece097..9cfd35dc87ba3 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -1762,12 +1762,32 @@ void ath10k_wmi_put_wmi_channel(struct ath10k *ar, struct wmi_channel *ch,
+ 
+ int ath10k_wmi_wait_for_service_ready(struct ath10k *ar)
+ {
+-	unsigned long time_left;
++	unsigned long time_left, i;
+ 
+ 	time_left = wait_for_completion_timeout(&ar->wmi.service_ready,
+ 						WMI_SERVICE_READY_TIMEOUT_HZ);
+-	if (!time_left)
+-		return -ETIMEDOUT;
++	if (!time_left) {
++		/* Sometimes the PCI HIF doesn't receive interrupt
++		 * for the service ready message even if the buffer
++		 * was completed. PCIe sniffer shows that it's
++		 * because the corresponding CE ring doesn't fires
++		 * it. Workaround here by polling CE rings once.
++		 */
++		ath10k_warn(ar, "failed to receive service ready completion, polling..\n");
++
++		for (i = 0; i < CE_COUNT; i++)
++			ath10k_hif_send_complete_check(ar, i, 1);
++
++		time_left = wait_for_completion_timeout(&ar->wmi.service_ready,
++							WMI_SERVICE_READY_TIMEOUT_HZ);
++		if (!time_left) {
++			ath10k_warn(ar, "polling timed out\n");
++			return -ETIMEDOUT;
++		}
++
++		ath10k_warn(ar, "service ready completion received, continuing normally\n");
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/carl9170/usb.c b/drivers/net/wireless/ath/carl9170/usb.c
+index e4eb666c6eea4..a5265997b5767 100644
+--- a/drivers/net/wireless/ath/carl9170/usb.c
++++ b/drivers/net/wireless/ath/carl9170/usb.c
+@@ -1069,6 +1069,38 @@ static int carl9170_usb_probe(struct usb_interface *intf,
+ 			ar->usb_ep_cmd_is_bulk = true;
+ 	}
+ 
++	/* Verify that all expected endpoints are present */
++	if (ar->usb_ep_cmd_is_bulk) {
++		u8 bulk_ep_addr[] = {
++			AR9170_USB_EP_RX | USB_DIR_IN,
++			AR9170_USB_EP_TX | USB_DIR_OUT,
++			AR9170_USB_EP_CMD | USB_DIR_OUT,
++			0};
++		u8 int_ep_addr[] = {
++			AR9170_USB_EP_IRQ | USB_DIR_IN,
++			0};
++		if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
++		    !usb_check_int_endpoints(intf, int_ep_addr))
++			err = -ENODEV;
++	} else {
++		u8 bulk_ep_addr[] = {
++			AR9170_USB_EP_RX | USB_DIR_IN,
++			AR9170_USB_EP_TX | USB_DIR_OUT,
++			0};
++		u8 int_ep_addr[] = {
++			AR9170_USB_EP_IRQ | USB_DIR_IN,
++			AR9170_USB_EP_CMD | USB_DIR_OUT,
++			0};
++		if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
++		    !usb_check_int_endpoints(intf, int_ep_addr))
++			err = -ENODEV;
++	}
++
++	if (err) {
++		carl9170_free(ar);
++		return err;
++	}
++
+ 	usb_set_intfdata(intf, ar);
+ 	SET_IEEE80211_DEV(ar->hw, &intf->dev);
+ 
+diff --git a/drivers/net/wireless/marvell/mwl8k.c b/drivers/net/wireless/marvell/mwl8k.c
+index dc91ac8cbd48b..dd72e9f8b4079 100644
+--- a/drivers/net/wireless/marvell/mwl8k.c
++++ b/drivers/net/wireless/marvell/mwl8k.c
+@@ -2714,7 +2714,7 @@ __mwl8k_cmd_mac_multicast_adr(struct ieee80211_hw *hw, int allmulti,
+ 		cmd->action |= cpu_to_le16(MWL8K_ENABLE_RX_MULTICAST);
+ 		cmd->numaddr = cpu_to_le16(mc_count);
+ 		netdev_hw_addr_list_for_each(ha, mc_list) {
+-			memcpy(cmd->addr[i], ha->addr, ETH_ALEN);
++			memcpy(cmd->addr[i++], ha->addr, ETH_ALEN);
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 9efc15e69ae82..5b27c22e7e581 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -28,6 +28,7 @@
+ #include <linux/wireless.h>
+ #include <linux/firmware.h>
+ #include <linux/moduleparam.h>
++#include <linux/bitfield.h>
+ #include <net/mac80211.h>
+ #include "rtl8xxxu.h"
+ #include "rtl8xxxu_regs.h"
+@@ -1389,13 +1390,13 @@ rtl8xxxu_gen1_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
+ 	u8 cck[RTL8723A_MAX_RF_PATHS], ofdm[RTL8723A_MAX_RF_PATHS];
+ 	u8 ofdmbase[RTL8723A_MAX_RF_PATHS], mcsbase[RTL8723A_MAX_RF_PATHS];
+ 	u32 val32, ofdm_a, ofdm_b, mcs_a, mcs_b;
+-	u8 val8;
++	u8 val8, base;
+ 	int group, i;
+ 
+ 	group = rtl8xxxu_gen1_channel_to_group(channel);
+ 
+-	cck[0] = priv->cck_tx_power_index_A[group] - 1;
+-	cck[1] = priv->cck_tx_power_index_B[group] - 1;
++	cck[0] = priv->cck_tx_power_index_A[group];
++	cck[1] = priv->cck_tx_power_index_B[group];
+ 
+ 	if (priv->hi_pa) {
+ 		if (cck[0] > 0x20)
+@@ -1406,10 +1407,6 @@ rtl8xxxu_gen1_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
+ 
+ 	ofdm[0] = priv->ht40_1s_tx_power_index_A[group];
+ 	ofdm[1] = priv->ht40_1s_tx_power_index_B[group];
+-	if (ofdm[0])
+-		ofdm[0] -= 1;
+-	if (ofdm[1])
+-		ofdm[1] -= 1;
+ 
+ 	ofdmbase[0] = ofdm[0] +	priv->ofdm_tx_power_index_diff[group].a;
+ 	ofdmbase[1] = ofdm[1] +	priv->ofdm_tx_power_index_diff[group].b;
+@@ -1498,20 +1495,19 @@ rtl8xxxu_gen1_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
+ 
+ 	rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS15_MCS12,
+ 			 mcs_a + power_base->reg_0e1c);
++	val8 = u32_get_bits(mcs_a + power_base->reg_0e1c, 0xff000000);
+ 	for (i = 0; i < 3; i++) {
+-		if (i != 2)
+-			val8 = (mcsbase[0] > 8) ? (mcsbase[0] - 8) : 0;
+-		else
+-			val8 = (mcsbase[0] > 6) ? (mcsbase[0] - 6) : 0;
++		base = i != 2 ? 8 : 6;
++		val8 = max_t(int, val8 - base, 0);
+ 		rtl8xxxu_write8(priv, REG_OFDM0_XC_TX_IQ_IMBALANCE + i, val8);
+ 	}
++
+ 	rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS15_MCS12,
+ 			 mcs_b + power_base->reg_0868);
++	val8 = u32_get_bits(mcs_b + power_base->reg_0868, 0xff000000);
+ 	for (i = 0; i < 3; i++) {
+-		if (i != 2)
+-			val8 = (mcsbase[1] > 8) ? (mcsbase[1] - 8) : 0;
+-		else
+-			val8 = (mcsbase[1] > 6) ? (mcsbase[1] - 6) : 0;
++		base = i != 2 ? 8 : 6;
++		val8 = max_t(int, val8 - base, 0);
+ 		rtl8xxxu_write8(priv, REG_OFDM0_XD_TX_IQ_IMBALANCE + i, val8);
+ 	}
+ }
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c
+index 8944712274b59..93cba14f44018 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c
+@@ -35,7 +35,7 @@ static long _rtl92de_translate_todbm(struct ieee80211_hw *hw,
+ 
+ static void _rtl92de_query_rxphystatus(struct ieee80211_hw *hw,
+ 				       struct rtl_stats *pstats,
+-				       struct rx_desc_92d *pdesc,
++				       __le32 *pdesc,
+ 				       struct rx_fwinfo_92d *p_drvinfo,
+ 				       bool packet_match_bssid,
+ 				       bool packet_toself,
+@@ -49,8 +49,10 @@ static void _rtl92de_query_rxphystatus(struct ieee80211_hw *hw,
+ 	u8 i, max_spatial_stream;
+ 	u32 rssi, total_rssi = 0;
+ 	bool is_cck_rate;
++	u8 rxmcs;
+ 
+-	is_cck_rate = RX_HAL_IS_CCK_RATE(pdesc->rxmcs);
++	rxmcs = get_rx_desc_rxmcs(pdesc);
++	is_cck_rate = rxmcs <= DESC_RATE11M;
+ 	pstats->packet_matchbssid = packet_match_bssid;
+ 	pstats->packet_toself = packet_toself;
+ 	pstats->packet_beacon = packet_beacon;
+@@ -158,8 +160,8 @@ static void _rtl92de_query_rxphystatus(struct ieee80211_hw *hw,
+ 		pstats->rx_pwdb_all = pwdb_all;
+ 		pstats->rxpower = rx_pwr_all;
+ 		pstats->recvsignalpower = rx_pwr_all;
+-		if (pdesc->rxht && pdesc->rxmcs >= DESC_RATEMCS8 &&
+-		    pdesc->rxmcs <= DESC_RATEMCS15)
++		if (get_rx_desc_rxht(pdesc) && rxmcs >= DESC_RATEMCS8 &&
++		    rxmcs <= DESC_RATEMCS15)
+ 			max_spatial_stream = 2;
+ 		else
+ 			max_spatial_stream = 1;
+@@ -365,7 +367,7 @@ static void _rtl92de_process_phyinfo(struct ieee80211_hw *hw,
+ static void _rtl92de_translate_rx_signal_stuff(struct ieee80211_hw *hw,
+ 					       struct sk_buff *skb,
+ 					       struct rtl_stats *pstats,
+-					       struct rx_desc_92d *pdesc,
++					       __le32 *pdesc,
+ 					       struct rx_fwinfo_92d *p_drvinfo)
+ {
+ 	struct rtl_mac *mac = rtl_mac(rtl_priv(hw));
+@@ -414,7 +416,8 @@ bool rtl92de_rx_query_desc(struct ieee80211_hw *hw,	struct rtl_stats *stats,
+ 	stats->icv = (u16)get_rx_desc_icv(pdesc);
+ 	stats->crc = (u16)get_rx_desc_crc32(pdesc);
+ 	stats->hwerror = (stats->crc | stats->icv);
+-	stats->decrypted = !get_rx_desc_swdec(pdesc);
++	stats->decrypted = !get_rx_desc_swdec(pdesc) &&
++			   get_rx_desc_enc_type(pdesc) != RX_DESC_ENC_NONE;
+ 	stats->rate = (u8)get_rx_desc_rxmcs(pdesc);
+ 	stats->shortpreamble = (u16)get_rx_desc_splcp(pdesc);
+ 	stats->isampdu = (bool)(get_rx_desc_paggr(pdesc) == 1);
+@@ -427,8 +430,6 @@ bool rtl92de_rx_query_desc(struct ieee80211_hw *hw,	struct rtl_stats *stats,
+ 	rx_status->band = hw->conf.chandef.chan->band;
+ 	if (get_rx_desc_crc32(pdesc))
+ 		rx_status->flag |= RX_FLAG_FAILED_FCS_CRC;
+-	if (!get_rx_desc_swdec(pdesc))
+-		rx_status->flag |= RX_FLAG_DECRYPTED;
+ 	if (get_rx_desc_bw(pdesc))
+ 		rx_status->bw = RATE_INFO_BW_40;
+ 	if (get_rx_desc_rxht(pdesc))
+@@ -442,9 +443,7 @@ bool rtl92de_rx_query_desc(struct ieee80211_hw *hw,	struct rtl_stats *stats,
+ 	if (phystatus) {
+ 		p_drvinfo = (struct rx_fwinfo_92d *)(skb->data +
+ 						     stats->rx_bufshift);
+-		_rtl92de_translate_rx_signal_stuff(hw,
+-						   skb, stats,
+-						   (struct rx_desc_92d *)pdesc,
++		_rtl92de_translate_rx_signal_stuff(hw, skb, stats, pdesc,
+ 						   p_drvinfo);
+ 	}
+ 	/*rx_status->qual = stats->signal; */
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h
+index d01578875cd5f..eb3f768140b5b 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h
+@@ -14,6 +14,15 @@
+ #define USB_HWDESC_HEADER_LEN			32
+ #define CRCLENGTH				4
+ 
++enum rtl92d_rx_desc_enc {
++	RX_DESC_ENC_NONE	= 0,
++	RX_DESC_ENC_WEP40	= 1,
++	RX_DESC_ENC_TKIP_WO_MIC	= 2,
++	RX_DESC_ENC_TKIP_MIC	= 3,
++	RX_DESC_ENC_AES		= 4,
++	RX_DESC_ENC_WEP104	= 5,
++};
++
+ /* macros to read/write various fields in RX or TX descriptors */
+ 
+ static inline void set_tx_desc_pkt_size(__le32 *__pdesc, u32 __val)
+@@ -246,6 +255,11 @@ static inline u32 get_rx_desc_drv_info_size(__le32 *__pdesc)
+ 	return le32_get_bits(*__pdesc, GENMASK(19, 16));
+ }
+ 
++static inline u32 get_rx_desc_enc_type(__le32 *__pdesc)
++{
++	return le32_get_bits(*__pdesc, GENMASK(22, 20));
++}
++
+ static inline u32 get_rx_desc_shift(__le32 *__pdesc)
+ {
+ 	return le32_get_bits(*__pdesc, GENMASK(25, 24));
+@@ -380,10 +394,17 @@ struct rx_fwinfo_92d {
+ 	u8 csi_target[2];
+ 	u8 sigevm;
+ 	u8 max_ex_pwr;
++#ifdef __LITTLE_ENDIAN
+ 	u8 ex_intf_flag:1;
+ 	u8 sgi_en:1;
+ 	u8 rxsc:2;
+ 	u8 reserve:4;
++#else
++	u8 reserve:4;
++	u8 rxsc:2;
++	u8 sgi_en:1;
++	u8 ex_intf_flag:1;
++#endif
+ } __packed;
+ 
+ struct tx_desc_92d {
+@@ -488,64 +509,6 @@ struct tx_desc_92d {
+ 	u32 reserve_pass_pcie_mm_limit[4];
+ } __packed;
+ 
+-struct rx_desc_92d {
+-	u32 length:14;
+-	u32 crc32:1;
+-	u32 icverror:1;
+-	u32 drv_infosize:4;
+-	u32 security:3;
+-	u32 qos:1;
+-	u32 shift:2;
+-	u32 phystatus:1;
+-	u32 swdec:1;
+-	u32 lastseg:1;
+-	u32 firstseg:1;
+-	u32 eor:1;
+-	u32 own:1;
+-
+-	u32 macid:5;
+-	u32 tid:4;
+-	u32 hwrsvd:5;
+-	u32 paggr:1;
+-	u32 faggr:1;
+-	u32 a1_fit:4;
+-	u32 a2_fit:4;
+-	u32 pam:1;
+-	u32 pwr:1;
+-	u32 moredata:1;
+-	u32 morefrag:1;
+-	u32 type:2;
+-	u32 mc:1;
+-	u32 bc:1;
+-
+-	u32 seq:12;
+-	u32 frag:4;
+-	u32 nextpktlen:14;
+-	u32 nextind:1;
+-	u32 rsvd:1;
+-
+-	u32 rxmcs:6;
+-	u32 rxht:1;
+-	u32 amsdu:1;
+-	u32 splcp:1;
+-	u32 bandwidth:1;
+-	u32 htc:1;
+-	u32 tcpchk_rpt:1;
+-	u32 ipcchk_rpt:1;
+-	u32 tcpchk_valid:1;
+-	u32 hwpcerr:1;
+-	u32 hwpcind:1;
+-	u32 iv0:16;
+-
+-	u32 iv1;
+-
+-	u32 tsfl;
+-
+-	u32 bufferaddress;
+-	u32 bufferaddress64;
+-
+-} __packed;
+-
+ void rtl92de_tx_fill_desc(struct ieee80211_hw *hw,
+ 			  struct ieee80211_hdr *hdr, u8 *pdesc,
+ 			  u8 *pbd_desc_tx, struct ieee80211_tx_info *info,
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 379d6818a0635..9f59f93b70e26 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -168,7 +168,8 @@ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
+ 		if (nvme_path_is_disabled(ns))
+ 			continue;
+ 
+-		if (READ_ONCE(head->subsys->iopolicy) == NVME_IOPOLICY_NUMA)
++		if (ns->ctrl->numa_node != NUMA_NO_NODE &&
++		    READ_ONCE(head->subsys->iopolicy) == NVME_IOPOLICY_NUMA)
+ 			distance = node_distance(node, ns->ctrl->numa_node);
+ 		else
+ 			distance = LOCAL_DISTANCE;
+diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
+index 9aed5cc710960..f2d11fc047524 100644
+--- a/drivers/nvme/target/configfs.c
++++ b/drivers/nvme/target/configfs.c
+@@ -532,10 +532,18 @@ static ssize_t nvmet_ns_enable_store(struct config_item *item,
+ 	if (strtobool(page, &enable))
+ 		return -EINVAL;
+ 
++	/*
++	 * take a global nvmet_config_sem because the disable routine has a
++	 * window where it releases the subsys-lock, giving a chance to
++	 * a parallel enable to concurrently execute causing the disable to
++	 * have a misaccounting of the ns percpu_ref.
++	 */
++	down_write(&nvmet_config_sem);
+ 	if (enable)
+ 		ret = nvmet_ns_enable(ns);
+ 	else
+ 		nvmet_ns_disable(ns);
++	up_write(&nvmet_config_sem);
+ 
+ 	return ret ? ret : count;
+ }
+diff --git a/drivers/pci/pcie/edr.c b/drivers/pci/pcie/edr.c
+index 87734e4c3c204..35210007602c5 100644
+--- a/drivers/pci/pcie/edr.c
++++ b/drivers/pci/pcie/edr.c
+@@ -32,10 +32,10 @@ static int acpi_enable_dpc(struct pci_dev *pdev)
+ 	int status = 0;
+ 
+ 	/*
+-	 * Behavior when calling unsupported _DSM functions is undefined,
+-	 * so check whether EDR_PORT_DPC_ENABLE_DSM is supported.
++	 * Per PCI Firmware r3.3, sec 4.6.12, EDR_PORT_DPC_ENABLE_DSM is
++	 * optional. Return success if it's not implemented.
+ 	 */
+-	if (!acpi_check_dsm(adev->handle, &pci_acpi_dsm_guid, 5,
++	if (!acpi_check_dsm(adev->handle, &pci_acpi_dsm_guid, 6,
+ 			    1ULL << EDR_PORT_DPC_ENABLE_DSM))
+ 		return 0;
+ 
+@@ -46,12 +46,7 @@ static int acpi_enable_dpc(struct pci_dev *pdev)
+ 	argv4.package.count = 1;
+ 	argv4.package.elements = &req;
+ 
+-	/*
+-	 * Per Downstream Port Containment Related Enhancements ECN to PCI
+-	 * Firmware Specification r3.2, sec 4.6.12, EDR_PORT_DPC_ENABLE_DSM is
+-	 * optional.  Return success if it's not implemented.
+-	 */
+-	obj = acpi_evaluate_dsm(adev->handle, &pci_acpi_dsm_guid, 5,
++	obj = acpi_evaluate_dsm(adev->handle, &pci_acpi_dsm_guid, 6,
+ 				EDR_PORT_DPC_ENABLE_DSM, &argv4);
+ 	if (!obj)
+ 		return 0;
+@@ -85,8 +80,9 @@ static struct pci_dev *acpi_dpc_port_get(struct pci_dev *pdev)
+ 	u16 port;
+ 
+ 	/*
+-	 * Behavior when calling unsupported _DSM functions is undefined,
+-	 * so check whether EDR_PORT_DPC_ENABLE_DSM is supported.
++	 * If EDR_PORT_LOCATE_DSM is not implemented under the target of
++	 * EDR, the target is the port that experienced the containment
++	 * event (PCI Firmware r3.3, sec 4.6.13).
+ 	 */
+ 	if (!acpi_check_dsm(adev->handle, &pci_acpi_dsm_guid, 5,
+ 			    1ULL << EDR_PORT_LOCATE_DSM))
+@@ -103,6 +99,16 @@ static struct pci_dev *acpi_dpc_port_get(struct pci_dev *pdev)
+ 		return NULL;
+ 	}
+ 
++	/*
++	 * Bit 31 represents the success/failure of the operation. If bit
++	 * 31 is set, the operation failed.
++	 */
++	if (obj->integer.value & BIT(31)) {
++		ACPI_FREE(obj);
++		pci_err(pdev, "Locate Port _DSM failed\n");
++		return NULL;
++	}
++
+ 	/*
+ 	 * Firmware returns DPC port BDF details in following format:
+ 	 *	15:8 = bus
+diff --git a/drivers/regulator/bd71828-regulator.c b/drivers/regulator/bd71828-regulator.c
+index 85c0b90009639..fc08f6f61655e 100644
+--- a/drivers/regulator/bd71828-regulator.c
++++ b/drivers/regulator/bd71828-regulator.c
+@@ -234,14 +234,11 @@ static const struct bd71828_regulator_data bd71828_rdata[] = {
+ 			.suspend_reg = BD71828_REG_BUCK1_SUSP_VOLT,
+ 			.suspend_mask = BD71828_MASK_BUCK1267_VOLT,
+ 			.suspend_on_mask = BD71828_MASK_SUSP_EN,
+-			.lpsr_on_mask = BD71828_MASK_LPSR_EN,
+ 			/*
+ 			 * LPSR voltage is same as SUSPEND voltage. Allow
+-			 * setting it so that regulator can be set enabled at
+-			 * LPSR state
++			 * only enabling/disabling regulator for LPSR state
+ 			 */
+-			.lpsr_reg = BD71828_REG_BUCK1_SUSP_VOLT,
+-			.lpsr_mask = BD71828_MASK_BUCK1267_VOLT,
++			.lpsr_on_mask = BD71828_MASK_LPSR_EN,
+ 		},
+ 		.reg_inits = buck1_inits,
+ 		.reg_init_amnt = ARRAY_SIZE(buck1_inits),
+@@ -312,13 +309,7 @@ static const struct bd71828_regulator_data bd71828_rdata[] = {
+ 				     ROHM_DVS_LEVEL_SUSPEND |
+ 				     ROHM_DVS_LEVEL_LPSR,
+ 			.run_reg = BD71828_REG_BUCK3_VOLT,
+-			.idle_reg = BD71828_REG_BUCK3_VOLT,
+-			.suspend_reg = BD71828_REG_BUCK3_VOLT,
+-			.lpsr_reg = BD71828_REG_BUCK3_VOLT,
+ 			.run_mask = BD71828_MASK_BUCK3_VOLT,
+-			.idle_mask = BD71828_MASK_BUCK3_VOLT,
+-			.suspend_mask = BD71828_MASK_BUCK3_VOLT,
+-			.lpsr_mask = BD71828_MASK_BUCK3_VOLT,
+ 			.idle_on_mask = BD71828_MASK_IDLE_EN,
+ 			.suspend_on_mask = BD71828_MASK_SUSP_EN,
+ 			.lpsr_on_mask = BD71828_MASK_LPSR_EN,
+@@ -353,13 +344,7 @@ static const struct bd71828_regulator_data bd71828_rdata[] = {
+ 				     ROHM_DVS_LEVEL_SUSPEND |
+ 				     ROHM_DVS_LEVEL_LPSR,
+ 			.run_reg = BD71828_REG_BUCK4_VOLT,
+-			.idle_reg = BD71828_REG_BUCK4_VOLT,
+-			.suspend_reg = BD71828_REG_BUCK4_VOLT,
+-			.lpsr_reg = BD71828_REG_BUCK4_VOLT,
+ 			.run_mask = BD71828_MASK_BUCK4_VOLT,
+-			.idle_mask = BD71828_MASK_BUCK4_VOLT,
+-			.suspend_mask = BD71828_MASK_BUCK4_VOLT,
+-			.lpsr_mask = BD71828_MASK_BUCK4_VOLT,
+ 			.idle_on_mask = BD71828_MASK_IDLE_EN,
+ 			.suspend_on_mask = BD71828_MASK_SUSP_EN,
+ 			.lpsr_on_mask = BD71828_MASK_LPSR_EN,
+@@ -394,13 +379,7 @@ static const struct bd71828_regulator_data bd71828_rdata[] = {
+ 				     ROHM_DVS_LEVEL_SUSPEND |
+ 				     ROHM_DVS_LEVEL_LPSR,
+ 			.run_reg = BD71828_REG_BUCK5_VOLT,
+-			.idle_reg = BD71828_REG_BUCK5_VOLT,
+-			.suspend_reg = BD71828_REG_BUCK5_VOLT,
+-			.lpsr_reg = BD71828_REG_BUCK5_VOLT,
+ 			.run_mask = BD71828_MASK_BUCK5_VOLT,
+-			.idle_mask = BD71828_MASK_BUCK5_VOLT,
+-			.suspend_mask = BD71828_MASK_BUCK5_VOLT,
+-			.lpsr_mask = BD71828_MASK_BUCK5_VOLT,
+ 			.idle_on_mask = BD71828_MASK_IDLE_EN,
+ 			.suspend_on_mask = BD71828_MASK_SUSP_EN,
+ 			.lpsr_on_mask = BD71828_MASK_LPSR_EN,
+@@ -509,13 +488,7 @@ static const struct bd71828_regulator_data bd71828_rdata[] = {
+ 				     ROHM_DVS_LEVEL_SUSPEND |
+ 				     ROHM_DVS_LEVEL_LPSR,
+ 			.run_reg = BD71828_REG_LDO1_VOLT,
+-			.idle_reg = BD71828_REG_LDO1_VOLT,
+-			.suspend_reg = BD71828_REG_LDO1_VOLT,
+-			.lpsr_reg = BD71828_REG_LDO1_VOLT,
+ 			.run_mask = BD71828_MASK_LDO_VOLT,
+-			.idle_mask = BD71828_MASK_LDO_VOLT,
+-			.suspend_mask = BD71828_MASK_LDO_VOLT,
+-			.lpsr_mask = BD71828_MASK_LDO_VOLT,
+ 			.idle_on_mask = BD71828_MASK_IDLE_EN,
+ 			.suspend_on_mask = BD71828_MASK_SUSP_EN,
+ 			.lpsr_on_mask = BD71828_MASK_LPSR_EN,
+@@ -549,13 +522,7 @@ static const struct bd71828_regulator_data bd71828_rdata[] = {
+ 				     ROHM_DVS_LEVEL_SUSPEND |
+ 				     ROHM_DVS_LEVEL_LPSR,
+ 			.run_reg = BD71828_REG_LDO2_VOLT,
+-			.idle_reg = BD71828_REG_LDO2_VOLT,
+-			.suspend_reg = BD71828_REG_LDO2_VOLT,
+-			.lpsr_reg = BD71828_REG_LDO2_VOLT,
+ 			.run_mask = BD71828_MASK_LDO_VOLT,
+-			.idle_mask = BD71828_MASK_LDO_VOLT,
+-			.suspend_mask = BD71828_MASK_LDO_VOLT,
+-			.lpsr_mask = BD71828_MASK_LDO_VOLT,
+ 			.idle_on_mask = BD71828_MASK_IDLE_EN,
+ 			.suspend_on_mask = BD71828_MASK_SUSP_EN,
+ 			.lpsr_on_mask = BD71828_MASK_LPSR_EN,
+@@ -589,13 +556,7 @@ static const struct bd71828_regulator_data bd71828_rdata[] = {
+ 				     ROHM_DVS_LEVEL_SUSPEND |
+ 				     ROHM_DVS_LEVEL_LPSR,
+ 			.run_reg = BD71828_REG_LDO3_VOLT,
+-			.idle_reg = BD71828_REG_LDO3_VOLT,
+-			.suspend_reg = BD71828_REG_LDO3_VOLT,
+-			.lpsr_reg = BD71828_REG_LDO3_VOLT,
+ 			.run_mask = BD71828_MASK_LDO_VOLT,
+-			.idle_mask = BD71828_MASK_LDO_VOLT,
+-			.suspend_mask = BD71828_MASK_LDO_VOLT,
+-			.lpsr_mask = BD71828_MASK_LDO_VOLT,
+ 			.idle_on_mask = BD71828_MASK_IDLE_EN,
+ 			.suspend_on_mask = BD71828_MASK_SUSP_EN,
+ 			.lpsr_on_mask = BD71828_MASK_LPSR_EN,
+@@ -630,13 +591,7 @@ static const struct bd71828_regulator_data bd71828_rdata[] = {
+ 				     ROHM_DVS_LEVEL_SUSPEND |
+ 				     ROHM_DVS_LEVEL_LPSR,
+ 			.run_reg = BD71828_REG_LDO4_VOLT,
+-			.idle_reg = BD71828_REG_LDO4_VOLT,
+-			.suspend_reg = BD71828_REG_LDO4_VOLT,
+-			.lpsr_reg = BD71828_REG_LDO4_VOLT,
+ 			.run_mask = BD71828_MASK_LDO_VOLT,
+-			.idle_mask = BD71828_MASK_LDO_VOLT,
+-			.suspend_mask = BD71828_MASK_LDO_VOLT,
+-			.lpsr_mask = BD71828_MASK_LDO_VOLT,
+ 			.idle_on_mask = BD71828_MASK_IDLE_EN,
+ 			.suspend_on_mask = BD71828_MASK_SUSP_EN,
+ 			.lpsr_on_mask = BD71828_MASK_LPSR_EN,
+@@ -671,13 +626,7 @@ static const struct bd71828_regulator_data bd71828_rdata[] = {
+ 				     ROHM_DVS_LEVEL_SUSPEND |
+ 				     ROHM_DVS_LEVEL_LPSR,
+ 			.run_reg = BD71828_REG_LDO5_VOLT,
+-			.idle_reg = BD71828_REG_LDO5_VOLT,
+-			.suspend_reg = BD71828_REG_LDO5_VOLT,
+-			.lpsr_reg = BD71828_REG_LDO5_VOLT,
+ 			.run_mask = BD71828_MASK_LDO_VOLT,
+-			.idle_mask = BD71828_MASK_LDO_VOLT,
+-			.suspend_mask = BD71828_MASK_LDO_VOLT,
+-			.lpsr_mask = BD71828_MASK_LDO_VOLT,
+ 			.idle_on_mask = BD71828_MASK_IDLE_EN,
+ 			.suspend_on_mask = BD71828_MASK_SUSP_EN,
+ 			.lpsr_on_mask = BD71828_MASK_LPSR_EN,
+@@ -736,9 +685,6 @@ static const struct bd71828_regulator_data bd71828_rdata[] = {
+ 			.suspend_reg = BD71828_REG_LDO7_VOLT,
+ 			.lpsr_reg = BD71828_REG_LDO7_VOLT,
+ 			.run_mask = BD71828_MASK_LDO_VOLT,
+-			.idle_mask = BD71828_MASK_LDO_VOLT,
+-			.suspend_mask = BD71828_MASK_LDO_VOLT,
+-			.lpsr_mask = BD71828_MASK_LDO_VOLT,
+ 			.idle_on_mask = BD71828_MASK_IDLE_EN,
+ 			.suspend_on_mask = BD71828_MASK_SUSP_EN,
+ 			.lpsr_on_mask = BD71828_MASK_LPSR_EN,
+diff --git a/drivers/regulator/vqmmc-ipq4019-regulator.c b/drivers/regulator/vqmmc-ipq4019-regulator.c
+index 6d5ae25d08d1e..e2a28788d8a22 100644
+--- a/drivers/regulator/vqmmc-ipq4019-regulator.c
++++ b/drivers/regulator/vqmmc-ipq4019-regulator.c
+@@ -86,6 +86,7 @@ static const struct of_device_id regulator_ipq4019_of_match[] = {
+ 	{ .compatible = "qcom,vqmmc-ipq4019-regulator", },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, regulator_ipq4019_of_match);
+ 
+ static struct platform_driver ipq4019_regulator_driver = {
+ 	.probe = ipq4019_regulator_probe,
+diff --git a/drivers/s390/cio/trace.h b/drivers/s390/cio/trace.h
+index 4803139bce149..4716798b1368a 100644
+--- a/drivers/s390/cio/trace.h
++++ b/drivers/s390/cio/trace.h
+@@ -50,7 +50,7 @@ DECLARE_EVENT_CLASS(s390_class_schib,
+ 		__entry->devno = schib->pmcw.dev;
+ 		__entry->schib = *schib;
+ 		__entry->pmcw_ena = schib->pmcw.ena;
+-		__entry->pmcw_st = schib->pmcw.ena;
++		__entry->pmcw_st = schib->pmcw.st;
+ 		__entry->pmcw_dnv = schib->pmcw.dnv;
+ 		__entry->pmcw_dev = schib->pmcw.dev;
+ 		__entry->pmcw_lpm = schib->pmcw.lpm;
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index c00a288a4eca2..13e56a23e41ec 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -859,7 +859,7 @@ static int hex2bitmap(const char *str, unsigned long *bitmap, int bits)
+  */
+ static int modify_bitmap(const char *str, unsigned long *bitmap, int bits)
+ {
+-	int a, i, z;
++	unsigned long a, i, z;
+ 	char *np, sign;
+ 
+ 	/* bits needs to be a multiple of 8 */
+diff --git a/drivers/scsi/bfa/bfad_debugfs.c b/drivers/scsi/bfa/bfad_debugfs.c
+index fd1b378a263a0..d3c7d4423c514 100644
+--- a/drivers/scsi/bfa/bfad_debugfs.c
++++ b/drivers/scsi/bfa/bfad_debugfs.c
+@@ -250,7 +250,7 @@ bfad_debugfs_write_regrd(struct file *file, const char __user *buf,
+ 	unsigned long flags;
+ 	void *kern_buf;
+ 
+-	kern_buf = memdup_user(buf, nbytes);
++	kern_buf = memdup_user_nul(buf, nbytes);
+ 	if (IS_ERR(kern_buf))
+ 		return PTR_ERR(kern_buf);
+ 
+@@ -317,7 +317,7 @@ bfad_debugfs_write_regwr(struct file *file, const char __user *buf,
+ 	unsigned long flags;
+ 	void *kern_buf;
+ 
+-	kern_buf = memdup_user(buf, nbytes);
++	kern_buf = memdup_user_nul(buf, nbytes);
+ 	if (IS_ERR(kern_buf))
+ 		return PTR_ERR(kern_buf);
+ 
+diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
+index a44a098dbb9c7..4e7ec83c07cf6 100644
+--- a/drivers/scsi/hpsa.c
++++ b/drivers/scsi/hpsa.c
+@@ -5834,7 +5834,7 @@ static int hpsa_scsi_host_alloc(struct ctlr_info *h)
+ {
+ 	struct Scsi_Host *sh;
+ 
+-	sh = scsi_host_alloc(&hpsa_driver_template, sizeof(struct ctlr_info));
++	sh = scsi_host_alloc(&hpsa_driver_template, sizeof(struct ctlr_info *));
+ 	if (sh == NULL) {
+ 		dev_err(&h->pdev->dev, "scsi_host_alloc failed\n");
+ 		return -ENOMEM;
+diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
+index 8444a4287ac1c..8681e0cdfa5e9 100644
+--- a/drivers/scsi/libsas/sas_expander.c
++++ b/drivers/scsi/libsas/sas_expander.c
+@@ -256,8 +256,7 @@ static void sas_set_ex_phy(struct domain_device *dev, int phy_id, void *rsp)
+ 	/* help some expanders that fail to zero sas_address in the 'no
+ 	 * device' case
+ 	 */
+-	if (phy->attached_dev_type == SAS_PHY_UNUSED ||
+-	    phy->linkrate < SAS_LINK_RATE_1_5_GBPS)
++	if (phy->attached_dev_type == SAS_PHY_UNUSED)
+ 		memset(phy->attached_sas_addr, 0, SAS_ADDR_SIZE);
+ 	else
+ 		memcpy(phy->attached_sas_addr, dr->attached_sas_addr, SAS_ADDR_SIZE);
+diff --git a/drivers/scsi/qedf/qedf_debugfs.c b/drivers/scsi/qedf/qedf_debugfs.c
+index 451fd236bfd05..96174353e3898 100644
+--- a/drivers/scsi/qedf/qedf_debugfs.c
++++ b/drivers/scsi/qedf/qedf_debugfs.c
+@@ -170,7 +170,7 @@ qedf_dbg_debug_cmd_write(struct file *filp, const char __user *buffer,
+ 	if (!count || *ppos)
+ 		return 0;
+ 
+-	kern_buf = memdup_user(buffer, count);
++	kern_buf = memdup_user_nul(buffer, count);
+ 	if (IS_ERR(kern_buf))
+ 		return PTR_ERR(kern_buf);
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 3d56f971cdc4d..8d54f60998029 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -4644,7 +4644,7 @@ qla2x00_set_model_info(scsi_qla_host_t *vha, uint8_t *model, size_t len,
+ 		if (use_tbl &&
+ 		    ha->pdev->subsystem_vendor == PCI_VENDOR_ID_QLOGIC &&
+ 		    index < QLA_MODEL_NAMES)
+-			strlcpy(ha->model_desc,
++			strscpy(ha->model_desc,
+ 			    qla2x00_model_name[index * 2 + 1],
+ 			    sizeof(ha->model_desc));
+ 	} else {
+@@ -4652,14 +4652,14 @@ qla2x00_set_model_info(scsi_qla_host_t *vha, uint8_t *model, size_t len,
+ 		if (use_tbl &&
+ 		    ha->pdev->subsystem_vendor == PCI_VENDOR_ID_QLOGIC &&
+ 		    index < QLA_MODEL_NAMES) {
+-			strlcpy(ha->model_number,
++			strscpy(ha->model_number,
+ 				qla2x00_model_name[index * 2],
+ 				sizeof(ha->model_number));
+-			strlcpy(ha->model_desc,
++			strscpy(ha->model_desc,
+ 			    qla2x00_model_name[index * 2 + 1],
+ 			    sizeof(ha->model_desc));
+ 		} else {
+-			strlcpy(ha->model_number, def,
++			strscpy(ha->model_number, def,
+ 				sizeof(ha->model_number));
+ 		}
+ 	}
+diff --git a/drivers/scsi/qla2xxx/qla_mr.c b/drivers/scsi/qla2xxx/qla_mr.c
+index 7178646ee0f06..cc8994a7c9942 100644
+--- a/drivers/scsi/qla2xxx/qla_mr.c
++++ b/drivers/scsi/qla2xxx/qla_mr.c
+@@ -691,7 +691,7 @@ qlafx00_pci_info_str(struct scsi_qla_host *vha, char *str, size_t str_len)
+ 	struct qla_hw_data *ha = vha->hw;
+ 
+ 	if (pci_is_pcie(ha->pdev))
+-		strlcpy(str, "PCIe iSA", str_len);
++		strscpy(str, "PCIe iSA", str_len);
+ 	return str;
+ }
+ 
+@@ -1849,21 +1849,21 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
+ 			phost_info = &preg_hsi->hsi;
+ 			memset(preg_hsi, 0, sizeof(struct register_host_info));
+ 			phost_info->os_type = OS_TYPE_LINUX;
+-			strlcpy(phost_info->sysname, p_sysid->sysname,
++			strscpy(phost_info->sysname, p_sysid->sysname,
+ 				sizeof(phost_info->sysname));
+-			strlcpy(phost_info->nodename, p_sysid->nodename,
++			strscpy(phost_info->nodename, p_sysid->nodename,
+ 				sizeof(phost_info->nodename));
+ 			if (!strcmp(phost_info->nodename, "(none)"))
+ 				ha->mr.host_info_resend = true;
+-			strlcpy(phost_info->release, p_sysid->release,
++			strscpy(phost_info->release, p_sysid->release,
+ 				sizeof(phost_info->release));
+-			strlcpy(phost_info->version, p_sysid->version,
++			strscpy(phost_info->version, p_sysid->version,
+ 				sizeof(phost_info->version));
+-			strlcpy(phost_info->machine, p_sysid->machine,
++			strscpy(phost_info->machine, p_sysid->machine,
+ 				sizeof(phost_info->machine));
+-			strlcpy(phost_info->domainname, p_sysid->domainname,
++			strscpy(phost_info->domainname, p_sysid->domainname,
+ 				sizeof(phost_info->domainname));
+-			strlcpy(phost_info->hostdriver, QLA2XXX_VERSION,
++			strscpy(phost_info->hostdriver, QLA2XXX_VERSION,
+ 				sizeof(phost_info->hostdriver));
+ 			preg_hsi->utc = (uint64_t)ktime_get_real_seconds();
+ 			ql_dbg(ql_dbg_init, vha, 0x0149,
+@@ -1909,9 +1909,9 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
+ 	if (fx_type == FXDISC_GET_CONFIG_INFO) {
+ 		struct config_info_data *pinfo =
+ 		    (struct config_info_data *) fdisc->u.fxiocb.rsp_addr;
+-		strlcpy(vha->hw->model_number, pinfo->model_num,
++		strscpy(vha->hw->model_number, pinfo->model_num,
+ 			ARRAY_SIZE(vha->hw->model_number));
+-		strlcpy(vha->hw->model_desc, pinfo->model_description,
++		strscpy(vha->hw->model_desc, pinfo->model_description,
+ 			ARRAY_SIZE(vha->hw->model_desc));
+ 		memcpy(&vha->hw->mr.symbolic_name, pinfo->symbolic_name,
+ 		    sizeof(vha->hw->mr.symbolic_name));
+diff --git a/drivers/scsi/ufs/cdns-pltfrm.c b/drivers/scsi/ufs/cdns-pltfrm.c
+index da065a259f6e4..408ba52671dec 100644
+--- a/drivers/scsi/ufs/cdns-pltfrm.c
++++ b/drivers/scsi/ufs/cdns-pltfrm.c
+@@ -135,7 +135,7 @@ static int cdns_ufs_set_hclkdiv(struct ufs_hba *hba)
+ 	 * Make sure the register was updated,
+ 	 * UniPro layer will not work with an incorrect value.
+ 	 */
+-	mb();
++	ufshcd_readl(hba, CDNS_UFS_REG_HCLKDIV);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
+index 08331ecbe91fb..2c3972d9ba52d 100644
+--- a/drivers/scsi/ufs/ufs-qcom.c
++++ b/drivers/scsi/ufs/ufs-qcom.c
+@@ -242,8 +242,9 @@ static void ufs_qcom_select_unipro_mode(struct ufs_qcom_host *host)
+ 	ufshcd_rmwl(host->hba, QUNIPRO_SEL,
+ 		   ufs_qcom_cap_qunipro(host) ? QUNIPRO_SEL : 0,
+ 		   REG_UFS_CFG1);
+-	/* make sure above configuration is applied before we return */
+-	mb();
++
++	if (host->hw_ver.major >= 0x05)
++		ufshcd_rmwl(host->hba, QUNIPRO_G4_SEL, 0, REG_UFS_CFG0);
+ }
+ 
+ /*
+@@ -352,7 +353,7 @@ static void ufs_qcom_enable_hw_clk_gating(struct ufs_hba *hba)
+ 		REG_UFS_CFG2);
+ 
+ 	/* Ensure that HW clock gating is enabled before next operations */
+-	mb();
++	ufshcd_readl(hba, REG_UFS_CFG2);
+ }
+ 
+ static int ufs_qcom_hce_enable_notify(struct ufs_hba *hba,
+@@ -449,7 +450,7 @@ static int ufs_qcom_cfg_timers(struct ufs_hba *hba, u32 gear,
+ 		 * make sure above write gets applied before we return from
+ 		 * this function.
+ 		 */
+-		mb();
++		ufshcd_readl(hba, REG_UFS_SYS1CLK_1US);
+ 	}
+ 
+ 	if (ufs_qcom_cap_qunipro(host))
+@@ -515,9 +516,9 @@ static int ufs_qcom_cfg_timers(struct ufs_hba *hba, u32 gear,
+ 		mb();
+ 	}
+ 
+-	if (update_link_startup_timer) {
++	if (update_link_startup_timer && host->hw_ver.major != 0x5) {
+ 		ufshcd_writel(hba, ((core_clk_rate / MSEC_PER_SEC) * 100),
+-			      REG_UFS_PA_LINK_STARTUP_TIMER);
++			      REG_UFS_CFG0);
+ 		/*
+ 		 * make sure that this configuration is applied before
+ 		 * we return
+@@ -578,6 +579,17 @@ static int ufs_qcom_link_startup_notify(struct ufs_hba *hba,
+ 	return err;
+ }
+ 
++static void ufs_qcom_device_reset_ctrl(struct ufs_hba *hba, bool asserted)
++{
++	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
++
++	/* reset gpio is optional */
++	if (!host->device_reset)
++		return;
++
++	gpiod_set_value_cansleep(host->device_reset, asserted);
++}
++
+ static int ufs_qcom_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
+ {
+ 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+@@ -592,6 +604,9 @@ static int ufs_qcom_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
+ 		ufs_qcom_disable_lane_clks(host);
+ 		phy_power_off(phy);
+ 
++		/* reset the connected UFS device during power down */
++		ufs_qcom_device_reset_ctrl(hba, true);
++
+ 	} else if (!ufs_qcom_is_link_active(hba)) {
+ 		ufs_qcom_disable_lane_clks(host);
+ 	}
+@@ -1441,10 +1456,10 @@ static int ufs_qcom_device_reset(struct ufs_hba *hba)
+ 	 * The UFS device shall detect reset pulses of 1us, sleep for 10us to
+ 	 * be on the safe side.
+ 	 */
+-	gpiod_set_value_cansleep(host->device_reset, 1);
++	ufs_qcom_device_reset_ctrl(hba, true);
+ 	usleep_range(10, 15);
+ 
+-	gpiod_set_value_cansleep(host->device_reset, 0);
++	ufs_qcom_device_reset_ctrl(hba, false);
+ 	usleep_range(10, 15);
+ 
+ 	return 0;
+diff --git a/drivers/scsi/ufs/ufs-qcom.h b/drivers/scsi/ufs/ufs-qcom.h
+index 3f4922743b3e3..742f752d01d61 100644
+--- a/drivers/scsi/ufs/ufs-qcom.h
++++ b/drivers/scsi/ufs/ufs-qcom.h
+@@ -46,8 +46,10 @@ enum {
+ 	REG_UFS_TX_SYMBOL_CLK_NS_US         = 0xC4,
+ 	REG_UFS_LOCAL_PORT_ID_REG           = 0xC8,
+ 	REG_UFS_PA_ERR_CODE                 = 0xCC,
+-	REG_UFS_RETRY_TIMER_REG             = 0xD0,
+-	REG_UFS_PA_LINK_STARTUP_TIMER       = 0xD8,
++	/* On older UFS revisions, this register is called "RETRY_TIMER_REG" */
++	REG_UFS_PARAM0                      = 0xD0,
++	/* On older UFS revisions, this register is called "REG_UFS_PA_LINK_STARTUP_TIMER" */
++	REG_UFS_CFG0                        = 0xD8,
+ 	REG_UFS_CFG1                        = 0xDC,
+ 	REG_UFS_CFG2                        = 0xE0,
+ 	REG_UFS_HW_VERSION                  = 0xE4,
+@@ -85,6 +87,9 @@ enum {
+ #define UFS_CNTLR_2_x_x_VEN_REGS_OFFSET(x)	(0x000 + x)
+ #define UFS_CNTLR_3_x_x_VEN_REGS_OFFSET(x)	(0x400 + x)
+ 
++/* bit definitions for REG_UFS_CFG0 register */
++#define QUNIPRO_G4_SEL		BIT(5)
++
+ /* bit definitions for REG_UFS_CFG1 register */
+ #define QUNIPRO_SEL		0x1
+ #define UTP_DBG_RAMS_EN		0x20000
+@@ -156,10 +161,10 @@ static inline void ufs_qcom_assert_reset(struct ufs_hba *hba)
+ 			1 << OFFSET_UFS_PHY_SOFT_RESET, REG_UFS_CFG1);
+ 
+ 	/*
+-	 * Make sure assertion of ufs phy reset is written to
+-	 * register before returning
++	 * Dummy read to ensure the write takes effect before doing any sort
++	 * of delay
+ 	 */
+-	mb();
++	ufshcd_readl(hba, REG_UFS_CFG1);
+ }
+ 
+ static inline void ufs_qcom_deassert_reset(struct ufs_hba *hba)
+@@ -168,10 +173,10 @@ static inline void ufs_qcom_deassert_reset(struct ufs_hba *hba)
+ 			0 << OFFSET_UFS_PHY_SOFT_RESET, REG_UFS_CFG1);
+ 
+ 	/*
+-	 * Make sure de-assertion of ufs phy reset is written to
+-	 * register before returning
++	 * Dummy read to ensure the write takes effect before doing any sort
++	 * of delay
+ 	 */
+-	mb();
++	ufshcd_readl(hba, REG_UFS_CFG1);
+ }
+ 
+ /* Host controller hardware version: major.minor.step */
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index a432aebd14be6..0b0230b46f28e 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -3819,7 +3819,7 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ 		 * Make sure UIC command completion interrupt is disabled before
+ 		 * issuing UIC command.
+ 		 */
+-		wmb();
++		ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
+ 		reenable_intr = true;
+ 	}
+ 	ret = __ufshcd_send_uic_cmd(hba, cmd, false);
+@@ -9193,7 +9193,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ 	 * Make sure that UFS interrupts are disabled and any pending interrupt
+ 	 * status is cleared before registering UFS interrupt handler.
+ 	 */
+-	mb();
++	ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
+ 
+ 	/* IRQ registration */
+ 	err = devm_request_irq(dev, irq, ufshcd_intr, IRQF_SHARED, UFSHCD, hba);
+diff --git a/drivers/soc/mediatek/mtk-cmdq-helper.c b/drivers/soc/mediatek/mtk-cmdq-helper.c
+index 505651b0d715f..c0b2f9397ea8a 100644
+--- a/drivers/soc/mediatek/mtk-cmdq-helper.c
++++ b/drivers/soc/mediatek/mtk-cmdq-helper.c
+@@ -13,7 +13,8 @@
+ #define CMDQ_POLL_ENABLE_MASK	BIT(0)
+ #define CMDQ_EOC_IRQ_EN		BIT(0)
+ #define CMDQ_REG_TYPE		1
+-#define CMDQ_JUMP_RELATIVE	1
++#define CMDQ_JUMP_RELATIVE	0
++#define CMDQ_JUMP_ABSOLUTE	1
+ 
+ struct cmdq_instruction {
+ 	union {
+@@ -414,7 +415,7 @@ int cmdq_pkt_jump(struct cmdq_pkt *pkt, dma_addr_t addr)
+ 	struct cmdq_instruction inst = {};
+ 
+ 	inst.op = CMDQ_CODE_JUMP;
+-	inst.offset = CMDQ_JUMP_RELATIVE;
++	inst.offset = CMDQ_JUMP_ABSOLUTE;
+ 	inst.value = addr >>
+ 		cmdq_get_shift_pa(((struct cmdq_client *)pkt->cl)->chan);
+ 	return cmdq_pkt_append_command(pkt, inst);
+diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
+index 18e7d158fcca4..d3e9cd3faadfd 100644
+--- a/drivers/soundwire/cadence_master.c
++++ b/drivers/soundwire/cadence_master.c
+@@ -1698,7 +1698,7 @@ struct sdw_cdns_pdi *sdw_cdns_alloc_pdi(struct sdw_cdns *cdns,
+ 
+ 	/* check if we found a PDI, else find in bi-directional */
+ 	if (!pdi)
+-		pdi = cdns_find_pdi(cdns, 2, stream->num_bd, stream->bd,
++		pdi = cdns_find_pdi(cdns, 0, stream->num_bd, stream->bd,
+ 				    dai_id);
+ 
+ 	if (pdi) {
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 9ec37cf10c010..f97b822cca19d 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -931,7 +931,7 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
+ 		mask |= STM32H7_SPI_SR_TXP | STM32H7_SPI_SR_RXP;
+ 
+ 	if (!(sr & mask)) {
+-		dev_warn(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
++		dev_vdbg(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
+ 			 sr, ier);
+ 		spin_unlock_irqrestore(&spi->lock, flags);
+ 		return IRQ_NONE;
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 857a1399850c3..e84494eed1c11 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -970,6 +970,7 @@ static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
+ 	else
+ 		rx_dev = ctlr->dev.parent;
+ 
++	ret = -ENOMSG;
+ 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+ 		if (!ctlr->can_dma(ctlr, msg->spi, xfer))
+ 			continue;
+@@ -993,6 +994,9 @@ static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
+ 			}
+ 		}
+ 	}
++	/* No transfer has been mapped, bail out with success */
++	if (ret)
++		return 0;
+ 
+ 	ctlr->cur_msg_mapped = true;
+ 
+diff --git a/drivers/staging/greybus/arche-apb-ctrl.c b/drivers/staging/greybus/arche-apb-ctrl.c
+index bbf3ba744fc44..c7383c6c6094d 100644
+--- a/drivers/staging/greybus/arche-apb-ctrl.c
++++ b/drivers/staging/greybus/arche-apb-ctrl.c
+@@ -468,6 +468,7 @@ static const struct of_device_id arche_apb_ctrl_of_match[] = {
+ 	{ .compatible = "usbffff,2", },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, arche_apb_ctrl_of_match);
+ 
+ static struct platform_driver arche_apb_ctrl_device_driver = {
+ 	.probe		= arche_apb_ctrl_probe,
+diff --git a/drivers/staging/greybus/arche-platform.c b/drivers/staging/greybus/arche-platform.c
+index eebf0deb39f50..02a700f720e6d 100644
+--- a/drivers/staging/greybus/arche-platform.c
++++ b/drivers/staging/greybus/arche-platform.c
+@@ -622,14 +622,7 @@ static const struct of_device_id arche_platform_of_match[] = {
+ 	{ .compatible = "google,arche-platform", },
+ 	{ },
+ };
+-
+-static const struct of_device_id arche_combined_id[] = {
+-	/* Use PID/VID of SVC device */
+-	{ .compatible = "google,arche-platform", },
+-	{ .compatible = "usbffff,2", },
+-	{ },
+-};
+-MODULE_DEVICE_TABLE(of, arche_combined_id);
++MODULE_DEVICE_TABLE(of, arche_platform_of_match);
+ 
+ static struct platform_driver arche_platform_device_driver = {
+ 	.probe		= arche_platform_probe,
+diff --git a/drivers/staging/greybus/light.c b/drivers/staging/greybus/light.c
+index e59bb27236b9f..7352d7deb8ba0 100644
+--- a/drivers/staging/greybus/light.c
++++ b/drivers/staging/greybus/light.c
+@@ -147,6 +147,9 @@ static int __gb_lights_flash_brightness_set(struct gb_channel *channel)
+ 		channel = get_channel_from_mode(channel->light,
+ 						GB_CHANNEL_MODE_TORCH);
+ 
++	if (!channel)
++		return -EINVAL;
++
+ 	/* For not flash we need to convert brightness to intensity */
+ 	intensity = channel->intensity_uA.min +
+ 			(channel->intensity_uA.step * channel->led->brightness);
+@@ -550,7 +553,10 @@ static int gb_lights_light_v4l2_register(struct gb_light *light)
+ 	}
+ 
+ 	channel_flash = get_channel_from_mode(light, GB_CHANNEL_MODE_FLASH);
+-	WARN_ON(!channel_flash);
++	if (!channel_flash) {
++		dev_err(dev, "failed to get flash channel from mode\n");
++		return -EINVAL;
++	}
+ 
+ 	fled = &channel_flash->fled;
+ 
+diff --git a/drivers/staging/media/atomisp/pci/sh_css.c b/drivers/staging/media/atomisp/pci/sh_css.c
+index 54a18921fbd15..cb03545203602 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css.c
++++ b/drivers/staging/media/atomisp/pci/sh_css.c
+@@ -5477,6 +5477,7 @@ static int load_video_binaries(struct ia_css_pipe *pipe)
+ 		mycs->yuv_scaler_binary = kzalloc(cas_scaler_descr.num_stage *
+ 						  sizeof(struct ia_css_binary), GFP_KERNEL);
+ 		if (!mycs->yuv_scaler_binary) {
++			mycs->num_yuv_scaler = 0;
+ 			err = -ENOMEM;
+ 			return err;
+ 		}
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index c20f69a4c5e9e..0a367fa23c277 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -2082,8 +2082,12 @@ static void gsm0_receive(struct gsm_mux *gsm, unsigned char c)
+ 		break;
+ 	case GSM_DATA:		/* Data */
+ 		gsm->buf[gsm->count++] = c;
+-		if (gsm->count == gsm->len)
++		if (gsm->count >= MAX_MRU) {
++			gsm->bad_size++;
++			gsm->state = GSM_SEARCH;
++		} else if (gsm->count >= gsm->len) {
+ 			gsm->state = GSM_FCS;
++		}
+ 		break;
+ 	case GSM_FCS:		/* FCS follows the packet */
+ 		gsm->received_fcs = c;
+@@ -2176,7 +2180,7 @@ static void gsm1_receive(struct gsm_mux *gsm, unsigned char c)
+ 		gsm->state = GSM_DATA;
+ 		break;
+ 	case GSM_DATA:		/* Data */
+-		if (gsm->count > gsm->mru) {	/* Allow one for the FCS */
++		if (gsm->count > gsm->mru || gsm->count > MAX_MRU) {	/* Allow one for the FCS */
+ 			gsm->state = GSM_OVERRUN;
+ 			gsm->bad_size++;
+ 		} else
+diff --git a/drivers/tty/serial/max3100.c b/drivers/tty/serial/max3100.c
+index 371569a0fd00a..17b6f4a872d6a 100644
+--- a/drivers/tty/serial/max3100.c
++++ b/drivers/tty/serial/max3100.c
+@@ -45,6 +45,9 @@
+ #include <linux/freezer.h>
+ #include <linux/tty.h>
+ #include <linux/tty_flip.h>
++#include <linux/types.h>
++
++#include <asm/unaligned.h>
+ 
+ #include <linux/serial_max3100.h>
+ 
+@@ -191,7 +194,7 @@ static void max3100_timeout(struct timer_list *t)
+ static int max3100_sr(struct max3100_port *s, u16 tx, u16 *rx)
+ {
+ 	struct spi_message message;
+-	u16 etx, erx;
++	__be16 etx, erx;
+ 	int status;
+ 	struct spi_transfer tran = {
+ 		.tx_buf = &etx,
+@@ -213,7 +216,7 @@ static int max3100_sr(struct max3100_port *s, u16 tx, u16 *rx)
+ 	return 0;
+ }
+ 
+-static int max3100_handlerx(struct max3100_port *s, u16 rx)
++static int max3100_handlerx_unlocked(struct max3100_port *s, u16 rx)
+ {
+ 	unsigned int ch, flg, status = 0;
+ 	int ret = 0, cts;
+@@ -253,6 +256,17 @@ static int max3100_handlerx(struct max3100_port *s, u16 rx)
+ 	return ret;
+ }
+ 
++static int max3100_handlerx(struct max3100_port *s, u16 rx)
++{
++	unsigned long flags;
++	int ret;
++
++	uart_port_lock_irqsave(&s->port, &flags);
++	ret = max3100_handlerx_unlocked(s, rx);
++	uart_port_unlock_irqrestore(&s->port, flags);
++	return ret;
++}
++
+ static void max3100_work(struct work_struct *w)
+ {
+ 	struct max3100_port *s = container_of(w, struct max3100_port, work);
+@@ -743,13 +757,14 @@ static int max3100_probe(struct spi_device *spi)
+ 	mutex_lock(&max3100s_lock);
+ 
+ 	if (!uart_driver_registered) {
+-		uart_driver_registered = 1;
+ 		retval = uart_register_driver(&max3100_uart_driver);
+ 		if (retval) {
+ 			printk(KERN_ERR "Couldn't register max3100 uart driver\n");
+ 			mutex_unlock(&max3100s_lock);
+ 			return retval;
+ 		}
++
++		uart_driver_registered = 1;
+ 	}
+ 
+ 	for (i = 0; i < MAX_MAX3100; i++)
+@@ -835,6 +850,7 @@ static int max3100_remove(struct spi_device *spi)
+ 		}
+ 	pr_debug("removing max3100 driver\n");
+ 	uart_unregister_driver(&max3100_uart_driver);
++	uart_driver_registered = 0;
+ 
+ 	mutex_unlock(&max3100s_lock);
+ 	return 0;
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index 29f05db0d49ba..d751f8ce5cf6d 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -18,6 +18,7 @@
+ #include <linux/module.h>
+ #include <linux/property.h>
+ #include <linux/regmap.h>
++#include <linux/sched.h>
+ #include <linux/serial_core.h>
+ #include <linux/serial.h>
+ #include <linux/tty.h>
+@@ -25,7 +26,6 @@
+ #include <linux/spi/spi.h>
+ #include <linux/uaccess.h>
+ #include <linux/units.h>
+-#include <uapi/linux/sched/types.h>
+ 
+ #define SC16IS7XX_NAME			"sc16is7xx"
+ #define SC16IS7XX_MAX_DEVS		8
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index a7c28543c6f72..71cf9a7329f91 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -1257,9 +1257,14 @@ static void sci_dma_rx_chan_invalidate(struct sci_port *s)
+ static void sci_dma_rx_release(struct sci_port *s)
+ {
+ 	struct dma_chan *chan = s->chan_rx_saved;
++	struct uart_port *port = &s->port;
++	unsigned long flags;
+ 
++	uart_port_lock_irqsave(port, &flags);
+ 	s->chan_rx_saved = NULL;
+ 	sci_dma_rx_chan_invalidate(s);
++	uart_port_unlock_irqrestore(port, flags);
++
+ 	dmaengine_terminate_sync(chan);
+ 	dma_free_coherent(chan->device->dev, s->buf_len_rx * 2, s->rx_buf[0],
+ 			  sg_dma_address(&s->sg_rx[0]));
+diff --git a/drivers/usb/gadget/function/u_audio.c b/drivers/usb/gadget/function/u_audio.c
+index 6c8b8f5b7e0f5..a387ff2c8b730 100644
+--- a/drivers/usb/gadget/function/u_audio.c
++++ b/drivers/usb/gadget/function/u_audio.c
+@@ -611,6 +611,8 @@ void g_audio_cleanup(struct g_audio *g_audio)
+ 		return;
+ 
+ 	uac = g_audio->uac;
++	g_audio->uac = NULL;
++
+ 	card = uac->card;
+ 	if (card)
+ 		snd_card_free_when_closed(card);
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index dd59584630979..9ad3e51578691 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -2013,8 +2013,8 @@ config FB_COBALT
+ 	depends on FB && MIPS_COBALT
+ 
+ config FB_SH7760
+-	bool "SH7760/SH7763/SH7720/SH7721 LCDC support"
+-	depends on FB=y && (CPU_SUBTYPE_SH7760 || CPU_SUBTYPE_SH7763 \
++	tristate "SH7760/SH7763/SH7720/SH7721 LCDC support"
++	depends on FB && (CPU_SUBTYPE_SH7760 || CPU_SUBTYPE_SH7763 \
+ 		|| CPU_SUBTYPE_SH7720 || CPU_SUBTYPE_SH7721)
+ 	select FB_CFB_FILLRECT
+ 	select FB_CFB_COPYAREA
+diff --git a/drivers/video/fbdev/savage/savagefb_driver.c b/drivers/video/fbdev/savage/savagefb_driver.c
+index 94ebd8af50cf7..224d7c8146a94 100644
+--- a/drivers/video/fbdev/savage/savagefb_driver.c
++++ b/drivers/video/fbdev/savage/savagefb_driver.c
+@@ -2271,7 +2271,10 @@ static int savagefb_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 	if (info->var.xres_virtual > 0x1000)
+ 		info->var.xres_virtual = 0x1000;
+ #endif
+-	savagefb_check_var(&info->var, info);
++	err = savagefb_check_var(&info->var, info);
++	if (err)
++		goto failed;
++
+ 	savagefb_set_fix(info);
+ 
+ 	/*
+diff --git a/drivers/video/fbdev/sh_mobile_lcdcfb.c b/drivers/video/fbdev/sh_mobile_lcdcfb.c
+index c1043420dbd3e..21d13d16f973c 100644
+--- a/drivers/video/fbdev/sh_mobile_lcdcfb.c
++++ b/drivers/video/fbdev/sh_mobile_lcdcfb.c
+@@ -1580,7 +1580,7 @@ sh_mobile_lcdc_overlay_fb_init(struct sh_mobile_lcdc_overlay *ovl)
+ 	 */
+ 	info->fix = sh_mobile_lcdc_overlay_fix;
+ 	snprintf(info->fix.id, sizeof(info->fix.id),
+-		 "SH Mobile LCDC Overlay %u", ovl->index);
++		 "SHMobile ovl %u", ovl->index);
+ 	info->fix.smem_start = ovl->dma_handle;
+ 	info->fix.smem_len = ovl->fb_size;
+ 	info->fix.line_length = ovl->pitch;
+diff --git a/drivers/video/fbdev/sis/init301.c b/drivers/video/fbdev/sis/init301.c
+index a8fb41f1a2580..09329072004f4 100644
+--- a/drivers/video/fbdev/sis/init301.c
++++ b/drivers/video/fbdev/sis/init301.c
+@@ -172,7 +172,7 @@ static const unsigned char SiS_HiTVGroup3_2[] = {
+ };
+ 
+ /* 301C / 302ELV extended Part2 TV registers (4 tap scaler) */
+-
++#ifdef CONFIG_FB_SIS_315
+ static const unsigned char SiS_Part2CLVX_1[] = {
+     0x00,0x00,
+     0x00,0x20,0x00,0x00,0x7F,0x20,0x02,0x7F,0x7D,0x20,0x04,0x7F,0x7D,0x1F,0x06,0x7E,
+@@ -245,7 +245,6 @@ static const unsigned char SiS_Part2CLVX_6[] = {   /* 1080i */
+     0xFF,0xFF,
+ };
+ 
+-#ifdef CONFIG_FB_SIS_315
+ /* 661 et al LCD data structure (2.03.00) */
+ static const unsigned char SiS_LCDStruct661[] = {
+     /* 1024x768 */
+diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
+index 1e890ef176873..a6f375417fd54 100644
+--- a/drivers/virtio/virtio_pci_common.c
++++ b/drivers/virtio/virtio_pci_common.c
+@@ -339,8 +339,10 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, unsigned nvqs,
+ 				  vring_interrupt, 0,
+ 				  vp_dev->msix_names[msix_vec],
+ 				  vqs[i]);
+-		if (err)
++		if (err) {
++			vp_del_vq(vqs[i]);
+ 			goto error_find;
++		}
+ 	}
+ 	return 0;
+ 
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index daa00f3c5a6af..7f2ca611a3f8e 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -52,6 +52,8 @@
+ 
+ #define DWDST			BIT(1)
+ 
++#define MAX_HW_ERROR		250
++
+ static int heartbeat = DEFAULT_HEARTBEAT;
+ 
+ /*
+@@ -90,7 +92,7 @@ static int rti_wdt_start(struct watchdog_device *wdd)
+ 	 * to be 50% or less than that; we obviouly want to configure the open
+ 	 * window as large as possible so we select the 50% option.
+ 	 */
+-	wdd->min_hw_heartbeat_ms = 500 * wdd->timeout;
++	wdd->min_hw_heartbeat_ms = 520 * wdd->timeout + MAX_HW_ERROR;
+ 
+ 	/* Generate NMI when wdt expires */
+ 	writel_relaxed(RTIWWDRX_NMI, wdt->base + RTIWWDRXCTRL);
+@@ -124,31 +126,33 @@ static int rti_wdt_setup_hw_hb(struct watchdog_device *wdd, u32 wsize)
+ 	 * be petted during the open window; not too early or not too late.
+ 	 * The HW configuration options only allow for the open window size
+ 	 * to be 50% or less than that.
++	 * To avoid any glitches, we accommodate 2% + max hardware error
++	 * safety margin.
+ 	 */
+ 	switch (wsize) {
+ 	case RTIWWDSIZE_50P:
+-		/* 50% open window => 50% min heartbeat */
+-		wdd->min_hw_heartbeat_ms = 500 * heartbeat;
++		/* 50% open window => 52% min heartbeat */
++		wdd->min_hw_heartbeat_ms = 520 * heartbeat + MAX_HW_ERROR;
+ 		break;
+ 
+ 	case RTIWWDSIZE_25P:
+-		/* 25% open window => 75% min heartbeat */
+-		wdd->min_hw_heartbeat_ms = 750 * heartbeat;
++		/* 25% open window => 77% min heartbeat */
++		wdd->min_hw_heartbeat_ms = 770 * heartbeat + MAX_HW_ERROR;
+ 		break;
+ 
+ 	case RTIWWDSIZE_12P5:
+-		/* 12.5% open window => 87.5% min heartbeat */
+-		wdd->min_hw_heartbeat_ms = 875 * heartbeat;
++		/* 12.5% open window => 89.5% min heartbeat */
++		wdd->min_hw_heartbeat_ms = 895 * heartbeat + MAX_HW_ERROR;
+ 		break;
+ 
+ 	case RTIWWDSIZE_6P25:
+-		/* 6.5% open window => 93.5% min heartbeat */
+-		wdd->min_hw_heartbeat_ms = 935 * heartbeat;
++		/* 6.5% open window => 95.5% min heartbeat */
++		wdd->min_hw_heartbeat_ms = 955 * heartbeat + MAX_HW_ERROR;
+ 		break;
+ 
+ 	case RTIWWDSIZE_3P125:
+-		/* 3.125% open window => 96.9% min heartbeat */
+-		wdd->min_hw_heartbeat_ms = 969 * heartbeat;
++		/* 3.125% open window => 98.9% min heartbeat */
++		wdd->min_hw_heartbeat_ms = 989 * heartbeat + MAX_HW_ERROR;
+ 		break;
+ 
+ 	default:
+@@ -222,14 +226,6 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	/*
+-	 * If watchdog is running at 32k clock, it is not accurate.
+-	 * Adjust frequency down in this case so that we don't pet
+-	 * the watchdog too often.
+-	 */
+-	if (wdt->freq < 32768)
+-		wdt->freq = wdt->freq * 9 / 10;
+-
+ 	pm_runtime_enable(dev);
+ 	ret = pm_runtime_get_sync(dev);
+ 	if (ret < 0) {
+diff --git a/fs/afs/mntpt.c b/fs/afs/mntpt.c
+index bbb2c210d139d..fa8a6543142d5 100644
+--- a/fs/afs/mntpt.c
++++ b/fs/afs/mntpt.c
+@@ -146,6 +146,11 @@ static int afs_mntpt_set_params(struct fs_context *fc, struct dentry *mntpt)
+ 		put_page(page);
+ 		if (ret < 0)
+ 			return ret;
++
++		/* Don't cross a backup volume mountpoint from a backup volume */
++		if (src_as->volume && src_as->volume->type == AFSVL_BACKVOL &&
++		    ctx->type == AFSVL_BACKVOL)
++			return -ENODEV;
+ 	}
+ 
+ 	return 0;
+diff --git a/fs/ecryptfs/keystore.c b/fs/ecryptfs/keystore.c
+index f6a17d259db74..afe51b4be1e73 100644
+--- a/fs/ecryptfs/keystore.c
++++ b/fs/ecryptfs/keystore.c
+@@ -300,9 +300,11 @@ write_tag_66_packet(char *signature, u8 cipher_code,
+ 	 *         | Key Identifier Size      | 1 or 2 bytes |
+ 	 *         | Key Identifier           | arbitrary    |
+ 	 *         | File Encryption Key Size | 1 or 2 bytes |
++	 *         | Cipher Code              | 1 byte       |
+ 	 *         | File Encryption Key      | arbitrary    |
++	 *         | Checksum                 | 2 bytes      |
+ 	 */
+-	data_len = (5 + ECRYPTFS_SIG_SIZE_HEX + crypt_stat->key_size);
++	data_len = (8 + ECRYPTFS_SIG_SIZE_HEX + crypt_stat->key_size);
+ 	*packet = kmalloc(data_len, GFP_KERNEL);
+ 	message = *packet;
+ 	if (!message) {
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index c1d632725c2c0..87378d08a414b 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -5083,8 +5083,73 @@ static bool ext4_mb_discard_preallocations_should_retry(struct super_block *sb,
+ 	return ret;
+ }
+ 
+-static ext4_fsblk_t ext4_mb_new_blocks_simple(handle_t *handle,
+-				struct ext4_allocation_request *ar, int *errp);
++/*
++ * Simple allocator for Ext4 fast commit replay path. It searches for blocks
++ * linearly starting at the goal block and also excludes the blocks which
++ * are going to be in use after fast commit replay.
++ */
++static ext4_fsblk_t
++ext4_mb_new_blocks_simple(struct ext4_allocation_request *ar, int *errp)
++{
++	struct buffer_head *bitmap_bh;
++	struct super_block *sb = ar->inode->i_sb;
++	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	ext4_group_t group, nr;
++	ext4_grpblk_t blkoff;
++	ext4_grpblk_t max = EXT4_CLUSTERS_PER_GROUP(sb);
++	ext4_grpblk_t i = 0;
++	ext4_fsblk_t goal, block;
++	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
++
++	goal = ar->goal;
++	if (goal < le32_to_cpu(es->s_first_data_block) ||
++			goal >= ext4_blocks_count(es))
++		goal = le32_to_cpu(es->s_first_data_block);
++
++	ar->len = 0;
++	ext4_get_group_no_and_offset(sb, goal, &group, &blkoff);
++	for (nr = ext4_get_groups_count(sb); nr > 0; nr--) {
++		bitmap_bh = ext4_read_block_bitmap(sb, group);
++		if (IS_ERR(bitmap_bh)) {
++			*errp = PTR_ERR(bitmap_bh);
++			pr_warn("Failed to read block bitmap\n");
++			return 0;
++		}
++
++		while (1) {
++			i = mb_find_next_zero_bit(bitmap_bh->b_data, max,
++						blkoff);
++			if (i >= max)
++				break;
++			if (ext4_fc_replay_check_excluded(sb,
++				ext4_group_first_block_no(sb, group) +
++				EXT4_C2B(sbi, i))) {
++				blkoff = i + 1;
++			} else
++				break;
++		}
++		brelse(bitmap_bh);
++		if (i < max)
++			break;
++
++		if (++group >= ext4_get_groups_count(sb))
++			group = 0;
++
++		blkoff = 0;
++	}
++
++	if (i >= max) {
++		*errp = -ENOSPC;
++		return 0;
++	}
++
++	block = ext4_group_first_block_no(sb, group) + EXT4_C2B(sbi, i);
++	ext4_mb_mark_bb(sb, block, 1, 1);
++	ar->len = 1;
++
++	*errp = 0;
++	return block;
++}
+ 
+ /*
+  * Main entry point into mballoc to allocate blocks
+@@ -5109,7 +5174,7 @@ ext4_fsblk_t ext4_mb_new_blocks(handle_t *handle,
+ 
+ 	trace_ext4_request_blocks(ar);
+ 	if (sbi->s_mount_state & EXT4_FC_REPLAY)
+-		return ext4_mb_new_blocks_simple(handle, ar, errp);
++		return ext4_mb_new_blocks_simple(ar, errp);
+ 
+ 	/* Allow to use superuser reservation for quota file */
+ 	if (ext4_is_quota_file(ar->inode))
+@@ -5339,69 +5404,6 @@ ext4_mb_free_metadata(handle_t *handle, struct ext4_buddy *e4b,
+ 	return 0;
+ }
+ 
+-/*
+- * Simple allocator for Ext4 fast commit replay path. It searches for blocks
+- * linearly starting at the goal block and also excludes the blocks which
+- * are going to be in use after fast commit replay.
+- */
+-static ext4_fsblk_t ext4_mb_new_blocks_simple(handle_t *handle,
+-				struct ext4_allocation_request *ar, int *errp)
+-{
+-	struct buffer_head *bitmap_bh;
+-	struct super_block *sb = ar->inode->i_sb;
+-	ext4_group_t group;
+-	ext4_grpblk_t blkoff;
+-	ext4_grpblk_t max = EXT4_CLUSTERS_PER_GROUP(sb);
+-	ext4_grpblk_t i = 0;
+-	ext4_fsblk_t goal, block;
+-	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
+-
+-	goal = ar->goal;
+-	if (goal < le32_to_cpu(es->s_first_data_block) ||
+-			goal >= ext4_blocks_count(es))
+-		goal = le32_to_cpu(es->s_first_data_block);
+-
+-	ar->len = 0;
+-	ext4_get_group_no_and_offset(sb, goal, &group, &blkoff);
+-	for (; group < ext4_get_groups_count(sb); group++) {
+-		bitmap_bh = ext4_read_block_bitmap(sb, group);
+-		if (IS_ERR(bitmap_bh)) {
+-			*errp = PTR_ERR(bitmap_bh);
+-			pr_warn("Failed to read block bitmap\n");
+-			return 0;
+-		}
+-
+-		ext4_get_group_no_and_offset(sb,
+-			max(ext4_group_first_block_no(sb, group), goal),
+-			NULL, &blkoff);
+-		while (1) {
+-			i = mb_find_next_zero_bit(bitmap_bh->b_data, max,
+-						blkoff);
+-			if (i >= max)
+-				break;
+-			if (ext4_fc_replay_check_excluded(sb,
+-				ext4_group_first_block_no(sb, group) + i)) {
+-				blkoff = i + 1;
+-			} else
+-				break;
+-		}
+-		brelse(bitmap_bh);
+-		if (i < max)
+-			break;
+-	}
+-
+-	if (group >= ext4_get_groups_count(sb) || i >= max) {
+-		*errp = -ENOSPC;
+-		return 0;
+-	}
+-
+-	block = ext4_group_first_block_no(sb, group) + i;
+-	ext4_mb_mark_bb(sb, block, 1, 1);
+-	ar->len = 1;
+-
+-	return block;
+-}
+-
+ static void ext4_free_blocks_simple(struct inode *inode, ext4_fsblk_t block,
+ 					unsigned long count)
+ {
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index a50eb4c61ecc6..af0801593abb8 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2748,7 +2748,7 @@ static int ext4_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
+ 	inode = ext4_new_inode_start_handle(dir, mode,
+ 					    NULL, 0, NULL,
+ 					    EXT4_HT_DIR,
+-			EXT4_MAXQUOTAS_INIT_BLOCKS(dir->i_sb) +
++			EXT4_MAXQUOTAS_TRANS_BLOCKS(dir->i_sb) +
+ 			  4 + EXT4_XATTR_TRANS_BLOCKS);
+ 	handle = ext4_journal_current_handle();
+ 	err = PTR_ERR(inode);
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 5bf858fc271fc..2dcb7d2bb85e8 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -3065,8 +3065,10 @@ ext4_xattr_block_cache_find(struct inode *inode,
+ 
+ 		bh = ext4_sb_bread(inode->i_sb, ce->e_value, REQ_PRIO);
+ 		if (IS_ERR(bh)) {
+-			if (PTR_ERR(bh) == -ENOMEM)
++			if (PTR_ERR(bh) == -ENOMEM) {
++				mb_cache_entry_put(ea_block_cache, ce);
+ 				return NULL;
++			}
+ 			bh = NULL;
+ 			EXT4_ERROR_INODE(inode, "block %lu read error",
+ 					 (unsigned long)ce->e_value);
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 8ca549cc975e4..f4ab9b313e4f5 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -776,7 +776,7 @@ static void write_orphan_inodes(struct f2fs_sb_info *sbi, block_t start_blk)
+ 	 */
+ 	head = &im->ino_list;
+ 
+-	/* loop for each orphan inode entry and write them in Jornal block */
++	/* loop for each orphan inode entry and write them in journal block */
+ 	list_for_each_entry(orphan, head, list) {
+ 		if (!page) {
+ 			page = f2fs_grab_meta_page(sbi, start_blk++);
+@@ -1109,7 +1109,7 @@ int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type,
+ 	} else {
+ 		/*
+ 		 * We should submit bio, since it exists several
+-		 * wribacking dentry pages in the freeing inode.
++		 * writebacking dentry pages in the freeing inode.
+ 		 */
+ 		f2fs_submit_merged_write(sbi, DATA);
+ 		cond_resched();
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index a94e102d15866..8dccb59d072b7 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -589,6 +589,7 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
+ 				f2fs_cops[fi->i_compress_algorithm];
+ 	unsigned int max_len, new_nr_cpages;
+ 	struct page **new_cpages;
++	u32 chksum = 0;
+ 	int i, ret;
+ 
+ 	trace_f2fs_compress_pages_start(cc->inode, cc->cluster_idx,
+@@ -642,6 +643,11 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
+ 
+ 	cc->cbuf->clen = cpu_to_le32(cc->clen);
+ 
++	if (fi->i_compress_flag & 1 << COMPRESS_CHKSUM)
++		chksum = f2fs_crc32(F2FS_I_SB(cc->inode),
++					cc->cbuf->cdata, cc->clen);
++	cc->cbuf->chksum = cpu_to_le32(chksum);
++
+ 	for (i = 0; i < COMPRESS_DATA_RESERVED_SIZE; i++)
+ 		cc->cbuf->reserved[i] = cpu_to_le32(0);
+ 
+@@ -777,6 +783,22 @@ void f2fs_decompress_pages(struct bio *bio, struct page *page, bool verity)
+ 
+ 	ret = cops->decompress_pages(dic);
+ 
++	if (!ret && (fi->i_compress_flag & 1 << COMPRESS_CHKSUM)) {
++		u32 provided = le32_to_cpu(dic->cbuf->chksum);
++		u32 calculated = f2fs_crc32(sbi, dic->cbuf->cdata, dic->clen);
++
++		if (provided != calculated) {
++			if (!is_inode_flag_set(dic->inode, FI_COMPRESS_CORRUPT)) {
++				set_inode_flag(dic->inode, FI_COMPRESS_CORRUPT);
++				printk_ratelimited(
++					"%sF2FS-fs (%s): checksum invalid, nid = %lu, %x vs %x",
++					KERN_INFO, sbi->sb->s_id, dic->inode->i_ino,
++					provided, calculated);
++			}
++			set_sbi_flag(sbi, SBI_NEED_FSCK);
++		}
++	}
++
+ out_vunmap_cbuf:
+ 	vm_unmap_ram(dic->cbuf, dic->nr_cpages);
+ out_vunmap_rbuf:
+@@ -843,14 +865,17 @@ static bool __cluster_may_compress(struct compress_ctx *cc)
+ 	return true;
+ }
+ 
+-static int __f2fs_cluster_blocks(struct compress_ctx *cc, bool compr)
++static int __f2fs_cluster_blocks(struct inode *inode,
++				unsigned int cluster_idx, bool compr)
+ {
+ 	struct dnode_of_data dn;
++	unsigned int cluster_size = F2FS_I(inode)->i_cluster_size;
++	unsigned int start_idx = cluster_idx <<
++				F2FS_I(inode)->i_log_cluster_size;
+ 	int ret;
+ 
+-	set_new_dnode(&dn, cc->inode, NULL, NULL, 0);
+-	ret = f2fs_get_dnode_of_data(&dn, start_idx_of_cluster(cc),
+-							LOOKUP_NODE);
++	set_new_dnode(&dn, inode, NULL, NULL, 0);
++	ret = f2fs_get_dnode_of_data(&dn, start_idx, LOOKUP_NODE);
+ 	if (ret) {
+ 		if (ret == -ENOENT)
+ 			ret = 0;
+@@ -861,7 +886,7 @@ static int __f2fs_cluster_blocks(struct compress_ctx *cc, bool compr)
+ 		int i;
+ 
+ 		ret = 1;
+-		for (i = 1; i < cc->cluster_size; i++) {
++		for (i = 1; i < cluster_size; i++) {
+ 			block_t blkaddr;
+ 
+ 			blkaddr = data_blkaddr(dn.inode,
+@@ -874,6 +899,10 @@ static int __f2fs_cluster_blocks(struct compress_ctx *cc, bool compr)
+ 					ret++;
+ 			}
+ 		}
++
++		f2fs_bug_on(F2FS_I_SB(inode),
++			!compr && ret != cluster_size &&
++			!is_inode_flag_set(inode, FI_COMPRESS_RELEASED));
+ 	}
+ fail:
+ 	f2fs_put_dnode(&dn);
+@@ -883,30 +912,20 @@ static int __f2fs_cluster_blocks(struct compress_ctx *cc, bool compr)
+ /* return # of compressed blocks in compressed cluster */
+ static int f2fs_compressed_blocks(struct compress_ctx *cc)
+ {
+-	return __f2fs_cluster_blocks(cc, true);
++	return __f2fs_cluster_blocks(cc->inode, cc->cluster_idx, true);
+ }
+ 
+ /* return # of valid blocks in compressed cluster */
+-static int f2fs_cluster_blocks(struct compress_ctx *cc)
+-{
+-	return __f2fs_cluster_blocks(cc, false);
+-}
+-
+ int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index)
+ {
+-	struct compress_ctx cc = {
+-		.inode = inode,
+-		.log_cluster_size = F2FS_I(inode)->i_log_cluster_size,
+-		.cluster_size = F2FS_I(inode)->i_cluster_size,
+-		.cluster_idx = index >> F2FS_I(inode)->i_log_cluster_size,
+-	};
+-
+-	return f2fs_cluster_blocks(&cc);
++	return __f2fs_cluster_blocks(inode,
++		index >> F2FS_I(inode)->i_log_cluster_size,
++		false);
+ }
+ 
+ static bool cluster_may_compress(struct compress_ctx *cc)
+ {
+-	if (!f2fs_compressed_file(cc->inode))
++	if (!f2fs_need_compress_data(cc->inode))
+ 		return false;
+ 	if (f2fs_is_atomic_file(cc->inode))
+ 		return false;
+@@ -944,21 +963,16 @@ static int prepare_compress_overwrite(struct compress_ctx *cc,
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode);
+ 	struct address_space *mapping = cc->inode->i_mapping;
+ 	struct page *page;
+-	struct dnode_of_data dn;
+ 	sector_t last_block_in_bio;
+ 	unsigned fgp_flag = FGP_LOCK | FGP_WRITE | FGP_CREAT;
+ 	pgoff_t start_idx = start_idx_of_cluster(cc);
+ 	int i, ret;
+-	bool prealloc;
+ 
+ retry:
+-	ret = f2fs_cluster_blocks(cc);
++	ret = f2fs_is_compressed_cluster(cc->inode, start_idx);
+ 	if (ret <= 0)
+ 		return ret;
+ 
+-	/* compressed case */
+-	prealloc = (ret < cc->cluster_size);
+-
+ 	ret = f2fs_init_compress_ctx(cc);
+ 	if (ret)
+ 		return ret;
+@@ -1016,25 +1030,6 @@ static int prepare_compress_overwrite(struct compress_ctx *cc,
+ 		}
+ 	}
+ 
+-	if (prealloc) {
+-		f2fs_do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, true);
+-
+-		set_new_dnode(&dn, cc->inode, NULL, NULL, 0);
+-
+-		for (i = cc->cluster_size - 1; i > 0; i--) {
+-			ret = f2fs_get_block(&dn, start_idx + i);
+-			if (ret) {
+-				i = cc->cluster_size;
+-				break;
+-			}
+-
+-			if (dn.data_blkaddr != NEW_ADDR)
+-				break;
+-		}
+-
+-		f2fs_do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, false);
+-	}
+-
+ 	if (likely(!ret)) {
+ 		*fsdata = cc->rpages;
+ 		*pagep = cc->rpages[offset_in_cluster(cc, index)];
+@@ -1165,6 +1160,12 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
+ 	loff_t psize;
+ 	int i, err;
+ 
++	/* we should bypass data pages to proceed the kworker jobs */
++	if (unlikely(f2fs_cp_error(sbi))) {
++		mapping_set_error(cc->rpages[0]->mapping, -EIO);
++		goto out_free;
++	}
++
+ 	if (IS_NOQUOTA(inode)) {
+ 		/*
+ 		 * We need to wait for node_write to avoid block allocation during
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index e0533cffbb076..1b764f70b70ed 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2431,7 +2431,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
+ 
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+ 		if (f2fs_compressed_file(inode)) {
+-			/* there are remained comressed pages, submit them */
++			/* there are remained compressed pages, submit them */
+ 			if (!f2fs_cluster_can_merge_page(&cc, page->index)) {
+ 				ret = f2fs_read_multi_pages(&cc, &bio,
+ 							max_nr_pages,
+@@ -2807,7 +2807,7 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
+ 
+ 	trace_f2fs_writepage(page, DATA);
+ 
+-	/* we should bypass data pages to proceed the kworkder jobs */
++	/* we should bypass data pages to proceed the kworker jobs */
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+ 		mapping_set_error(page->mapping, -EIO);
+ 		/*
+@@ -2933,7 +2933,7 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
+ redirty_out:
+ 	redirty_page_for_writepage(wbc, page);
+ 	/*
+-	 * pageout() in MM traslates EAGAIN, so calls handle_write_error()
++	 * pageout() in MM translates EAGAIN, so calls handle_write_error()
+ 	 * -> mapping_set_error() -> set_bit(AS_EIO, ...).
+ 	 * file_write_and_wait_range() will see EIO error, which is critical
+ 	 * to return value of fsync() followed by atomic_write failure to user.
+@@ -2967,7 +2967,7 @@ static int f2fs_write_data_page(struct page *page,
+ }
+ 
+ /*
+- * This function was copied from write_cche_pages from mm/page-writeback.c.
++ * This function was copied from write_cache_pages from mm/page-writeback.c.
+  * The major change is making write step of cold data page separately from
+  * warm/hot data page.
+  */
+@@ -3222,7 +3222,7 @@ static inline bool __should_serialize_io(struct inode *inode,
+ 	if (IS_NOQUOTA(inode))
+ 		return false;
+ 
+-	if (f2fs_compressed_file(inode))
++	if (f2fs_need_compress_data(inode))
+ 		return true;
+ 	if (wbc->sync_mode != WB_SYNC_ALL)
+ 		return true;
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index ad0b83a412268..01f93fae56297 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -112,7 +112,7 @@ struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
+  * @prev_ex: extent before ofs
+  * @next_ex: extent after ofs
+  * @insert_p: insert point for new extent at ofs
+- * in order to simpfy the insertion after.
++ * in order to simplify the insertion after.
+  * tree must stay unchanged between lookup and insertion.
+  */
+ struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root_cached *root,
+@@ -572,7 +572,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	if (!en)
+ 		en = next_en;
+ 
+-	/* 2. invlidate all extent nodes in range [fofs, fofs + len - 1] */
++	/* 2. invalidate all extent nodes in range [fofs, fofs + len - 1] */
+ 	while (en && en->ei.fofs < end) {
+ 		unsigned int org_end;
+ 		int parts = 0;	/* # of parts current extent split into */
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 83ebc860508b0..4380df9b2d70a 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -147,8 +147,10 @@ struct f2fs_mount_info {
+ 
+ 	/* For compression */
+ 	unsigned char compress_algorithm;	/* algorithm type */
+-	unsigned compress_log_size;		/* cluster log size */
++	unsigned char compress_log_size;	/* cluster log size */
++	bool compress_chksum;			/* compressed data chksum */
+ 	unsigned char compress_ext_cnt;		/* extension count */
++	int compress_mode;			/* compression mode */
+ 	unsigned char extensions[COMPRESS_EXT_NUM][F2FS_EXTENSION_LEN];	/* extensions */
+ };
+ 
+@@ -678,7 +680,10 @@ enum {
+ 	FI_ATOMIC_REVOKE_REQUEST, /* request to drop atomic data */
+ 	FI_VERITY_IN_PROGRESS,	/* building fs-verity Merkle tree */
+ 	FI_COMPRESSED_FILE,	/* indicate file's data can be compressed */
++	FI_COMPRESS_CORRUPT,	/* indicate compressed cluster is corrupted */
+ 	FI_MMAP_FILE,		/* indicate file was mmapped */
++	FI_ENABLE_COMPRESS,	/* enable compression in "user" compression mode */
++	FI_COMPRESS_RELEASED,	/* compressed blocks were released */
+ 	FI_MAX,			/* max flag, never be used */
+ };
+ 
+@@ -736,6 +741,7 @@ struct f2fs_inode_info {
+ 	atomic_t i_compr_blocks;		/* # of compressed blocks */
+ 	unsigned char i_compress_algorithm;	/* algorithm type */
+ 	unsigned char i_log_cluster_size;	/* log of cluster size */
++	unsigned short i_compress_flag;		/* compress flag */
+ 	unsigned int i_cluster_size;		/* cluster size */
+ };
+ 
+@@ -1252,6 +1258,18 @@ enum fsync_mode {
+ 	FSYNC_MODE_NOBARRIER,	/* fsync behaves nobarrier based on posix */
+ };
+ 
++enum {
++	COMPR_MODE_FS,		/*
++				 * automatically compress compression
++				 * enabled files
++				 */
++	COMPR_MODE_USER,	/*
++				 * automatical compression is disabled.
++				 * user can control the file compression
++				 * using ioctls
++				 */
++};
++
+ /*
+  * this value is set in page as a private data which indicate that
+  * the page is atomically written, and it is in inmem_pages list.
+@@ -1281,9 +1299,15 @@ enum compress_algorithm_type {
+ 	COMPRESS_MAX,
+ };
+ 
+-#define COMPRESS_DATA_RESERVED_SIZE		5
++enum compress_flag {
++	COMPRESS_CHKSUM,
++	COMPRESS_MAX_FLAG,
++};
++
++#define COMPRESS_DATA_RESERVED_SIZE		4
+ struct compress_data {
+ 	__le32 clen;			/* compressed data size */
++	__le32 chksum;			/* compressed data chksum */
+ 	__le32 reserved[COMPRESS_DATA_RESERVED_SIZE];	/* reserved */
+ 	u8 cdata[];			/* compressed data */
+ };
+@@ -2627,6 +2651,7 @@ static inline void __mark_inode_dirty_flag(struct inode *inode,
+ 	case FI_DATA_EXIST:
+ 	case FI_INLINE_DOTS:
+ 	case FI_PIN_FILE:
++	case FI_COMPRESS_RELEASED:
+ 		f2fs_mark_inode_dirty_sync(inode, true);
+ 	}
+ }
+@@ -2748,6 +2773,8 @@ static inline void get_inline_info(struct inode *inode, struct f2fs_inode *ri)
+ 		set_bit(FI_EXTRA_ATTR, fi->flags);
+ 	if (ri->i_inline & F2FS_PIN_FILE)
+ 		set_bit(FI_PIN_FILE, fi->flags);
++	if (ri->i_inline & F2FS_COMPRESS_RELEASED)
++		set_bit(FI_COMPRESS_RELEASED, fi->flags);
+ }
+ 
+ static inline void set_raw_inline(struct inode *inode, struct f2fs_inode *ri)
+@@ -2768,6 +2795,8 @@ static inline void set_raw_inline(struct inode *inode, struct f2fs_inode *ri)
+ 		ri->i_inline |= F2FS_EXTRA_ATTR;
+ 	if (is_inode_flag_set(inode, FI_PIN_FILE))
+ 		ri->i_inline |= F2FS_PIN_FILE;
++	if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED))
++		ri->i_inline |= F2FS_COMPRESS_RELEASED;
+ }
+ 
+ static inline int f2fs_has_extra_attr(struct inode *inode)
+@@ -2786,6 +2815,22 @@ static inline int f2fs_compressed_file(struct inode *inode)
+ 		is_inode_flag_set(inode, FI_COMPRESSED_FILE);
+ }
+ 
++static inline bool f2fs_need_compress_data(struct inode *inode)
++{
++	int compress_mode = F2FS_OPTION(F2FS_I_SB(inode)).compress_mode;
++
++	if (!f2fs_compressed_file(inode))
++		return false;
++
++	if (compress_mode == COMPR_MODE_FS)
++		return true;
++	else if (compress_mode == COMPR_MODE_USER &&
++			is_inode_flag_set(inode, FI_ENABLE_COMPRESS))
++		return true;
++
++	return false;
++}
++
+ static inline unsigned int addrs_per_inode(struct inode *inode)
+ {
+ 	unsigned int addrs = CUR_ADDRS_PER_INODE(inode) -
+@@ -3925,6 +3970,9 @@ static inline void set_compress_context(struct inode *inode)
+ 			F2FS_OPTION(sbi).compress_algorithm;
+ 	F2FS_I(inode)->i_log_cluster_size =
+ 			F2FS_OPTION(sbi).compress_log_size;
++	F2FS_I(inode)->i_compress_flag =
++			F2FS_OPTION(sbi).compress_chksum ?
++				1 << COMPRESS_CHKSUM : 0;
+ 	F2FS_I(inode)->i_cluster_size =
+ 			1 << F2FS_I(inode)->i_log_cluster_size;
+ 	F2FS_I(inode)->i_flags |= F2FS_COMPR_FL;
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 40e805014f719..50514962771a1 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -63,6 +63,9 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
+ 	if (unlikely(IS_IMMUTABLE(inode)))
+ 		return VM_FAULT_SIGBUS;
+ 
++	if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED))
++		return VM_FAULT_SIGBUS;
++
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+ 		err = -EIO;
+ 		goto err;
+@@ -81,10 +84,6 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
+ 			err = ret;
+ 			goto err;
+ 		} else if (ret) {
+-			if (ret < F2FS_I(inode)->i_cluster_size) {
+-				err = -EAGAIN;
+-				goto err;
+-			}
+ 			need_alloc = false;
+ 		}
+ 	}
+@@ -298,6 +297,18 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
+ 				f2fs_exist_written_data(sbi, ino, UPDATE_INO))
+ 			goto flush_out;
+ 		goto out;
++	} else {
++		/*
++		 * for OPU case, during fsync(), node can be persisted before
++		 * data when lower device doesn't support write barrier, result
++		 * in data corruption after SPO.
++		 * So for strict fsync mode, force to use atomic write semantics
++		 * to keep write order in between data/node and last node to
++		 * avoid potential data corruption.
++		 */
++		if (F2FS_OPTION(sbi).fsync_mode ==
++				FSYNC_MODE_STRICT && !atomic)
++			atomic = true;
+ 	}
+ go_write:
+ 	/*
+@@ -880,9 +891,14 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr)
+ 				  ATTR_GID | ATTR_TIMES_SET))))
+ 		return -EPERM;
+ 
+-	if ((attr->ia_valid & ATTR_SIZE) &&
+-		!f2fs_is_compress_backend_ready(inode))
+-		return -EOPNOTSUPP;
++	if ((attr->ia_valid & ATTR_SIZE)) {
++		if (!f2fs_is_compress_backend_ready(inode))
++			return -EOPNOTSUPP;
++		if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED) &&
++			!IS_ALIGNED(attr->ia_size,
++			F2FS_BLK_TO_BYTES(F2FS_I(inode)->i_cluster_size)))
++			return -EINVAL;
++	}
+ 
+ 	err = setattr_prepare(dentry, attr);
+ 	if (err)
+@@ -1253,6 +1269,8 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
+ 				f2fs_put_page(psrc, 1);
+ 				return PTR_ERR(pdst);
+ 			}
++			f2fs_wait_on_page_writeback(pdst, DATA, true, true);
++
+ 			f2fs_copy_page(psrc, pdst);
+ 			set_page_dirty(pdst);
+ 			f2fs_put_page(pdst, 1);
+@@ -1732,11 +1750,6 @@ static long f2fs_fallocate(struct file *file, int mode,
+ 		(mode & (FALLOC_FL_COLLAPSE_RANGE | FALLOC_FL_INSERT_RANGE)))
+ 		return -EOPNOTSUPP;
+ 
+-	if (f2fs_compressed_file(inode) &&
+-		(mode & (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_COLLAPSE_RANGE |
+-			FALLOC_FL_ZERO_RANGE | FALLOC_FL_INSERT_RANGE)))
+-		return -EOPNOTSUPP;
+-
+ 	if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE |
+ 			FALLOC_FL_COLLAPSE_RANGE | FALLOC_FL_ZERO_RANGE |
+ 			FALLOC_FL_INSERT_RANGE))
+@@ -1744,6 +1757,17 @@ static long f2fs_fallocate(struct file *file, int mode,
+ 
+ 	inode_lock(inode);
+ 
++	/*
++	 * Pinned file should not support partial truncation since the block
++	 * can be used by applications.
++	 */
++	if ((f2fs_compressed_file(inode) || f2fs_is_pinned_file(inode)) &&
++		(mode & (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_COLLAPSE_RANGE |
++			FALLOC_FL_ZERO_RANGE | FALLOC_FL_INSERT_RANGE))) {
++		ret = -EOPNOTSUPP;
++		goto out;
++	}
++
+ 	ret = file_modified(file);
+ 	if (ret)
+ 		goto out;
+@@ -1779,7 +1803,7 @@ static long f2fs_fallocate(struct file *file, int mode,
+ static int f2fs_release_file(struct inode *inode, struct file *filp)
+ {
+ 	/*
+-	 * f2fs_relase_file is called at every close calls. So we should
++	 * f2fs_release_file is called at every close calls. So we should
+ 	 * not drop any inmemory pages by close called by other process.
+ 	 */
+ 	if (!(filp->f_mode & FMODE_WRITE) ||
+@@ -2816,7 +2840,8 @@ static int f2fs_move_file_range(struct file *file_in, loff_t pos_in,
+ 			goto out;
+ 	}
+ 
+-	if (f2fs_compressed_file(src) || f2fs_compressed_file(dst)) {
++	if (f2fs_compressed_file(src) || f2fs_compressed_file(dst) ||
++		f2fs_is_pinned_file(src) || f2fs_is_pinned_file(dst)) {
+ 		ret = -EOPNOTSUPP;
+ 		goto out_unlock;
+ 	}
+@@ -3532,9 +3557,6 @@ static int f2fs_release_compress_blocks(struct file *filp, unsigned long arg)
+ 	if (!f2fs_sb_has_compression(F2FS_I_SB(inode)))
+ 		return -EOPNOTSUPP;
+ 
+-	if (!f2fs_compressed_file(inode))
+-		return -EINVAL;
+-
+ 	if (f2fs_readonly(sbi->sb))
+ 		return -EROFS;
+ 
+@@ -3553,7 +3575,8 @@ static int f2fs_release_compress_blocks(struct file *filp, unsigned long arg)
+ 		goto out;
+ 	}
+ 
+-	if (IS_IMMUTABLE(inode)) {
++	if (!f2fs_compressed_file(inode) ||
++		is_inode_flag_set(inode, FI_COMPRESS_RELEASED)) {
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+@@ -3562,8 +3585,7 @@ static int f2fs_release_compress_blocks(struct file *filp, unsigned long arg)
+ 	if (ret)
+ 		goto out;
+ 
+-	F2FS_I(inode)->i_flags |= F2FS_IMMUTABLE_FL;
+-	f2fs_set_inode_flags(inode);
++	set_inode_flag(inode, FI_COMPRESS_RELEASED);
+ 	inode->i_ctime = current_time(inode);
+ 	f2fs_mark_inode_dirty_sync(inode, true);
+ 
+@@ -3579,9 +3601,12 @@ static int f2fs_release_compress_blocks(struct file *filp, unsigned long arg)
+ 		struct dnode_of_data dn;
+ 		pgoff_t end_offset, count;
+ 
++		f2fs_lock_op(sbi);
++
+ 		set_new_dnode(&dn, inode, NULL, NULL, 0);
+ 		ret = f2fs_get_dnode_of_data(&dn, page_idx, LOOKUP_NODE);
+ 		if (ret) {
++			f2fs_unlock_op(sbi);
+ 			if (ret == -ENOENT) {
+ 				page_idx = f2fs_get_next_page_offset(&dn,
+ 								page_idx);
+@@ -3599,6 +3624,8 @@ static int f2fs_release_compress_blocks(struct file *filp, unsigned long arg)
+ 
+ 		f2fs_put_dnode(&dn);
+ 
++		f2fs_unlock_op(sbi);
++
+ 		if (ret < 0)
+ 			break;
+ 
+@@ -3712,9 +3739,6 @@ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
+ 	if (!f2fs_sb_has_compression(F2FS_I_SB(inode)))
+ 		return -EOPNOTSUPP;
+ 
+-	if (!f2fs_compressed_file(inode))
+-		return -EINVAL;
+-
+ 	if (f2fs_readonly(sbi->sb))
+ 		return -EROFS;
+ 
+@@ -3729,7 +3753,8 @@ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
+ 
+ 	inode_lock(inode);
+ 
+-	if (!IS_IMMUTABLE(inode)) {
++	if (!f2fs_compressed_file(inode) ||
++		!is_inode_flag_set(inode, FI_COMPRESS_RELEASED)) {
+ 		ret = -EINVAL;
+ 		goto unlock_inode;
+ 	}
+@@ -3743,9 +3768,12 @@ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
+ 		struct dnode_of_data dn;
+ 		pgoff_t end_offset, count;
+ 
++		f2fs_lock_op(sbi);
++
+ 		set_new_dnode(&dn, inode, NULL, NULL, 0);
+ 		ret = f2fs_get_dnode_of_data(&dn, page_idx, LOOKUP_NODE);
+ 		if (ret) {
++			f2fs_unlock_op(sbi);
+ 			if (ret == -ENOENT) {
+ 				page_idx = f2fs_get_next_page_offset(&dn,
+ 								page_idx);
+@@ -3763,6 +3791,8 @@ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
+ 
+ 		f2fs_put_dnode(&dn);
+ 
++		f2fs_unlock_op(sbi);
++
+ 		if (ret < 0)
+ 			break;
+ 
+@@ -3774,8 +3804,7 @@ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
+ 	up_write(&F2FS_I(inode)->i_mmap_sem);
+ 
+ 	if (ret >= 0) {
+-		F2FS_I(inode)->i_flags &= ~F2FS_IMMUTABLE_FL;
+-		f2fs_set_inode_flags(inode);
++		clear_inode_flag(inode, FI_COMPRESS_RELEASED);
+ 		inode->i_ctime = current_time(inode);
+ 		f2fs_mark_inode_dirty_sync(inode, true);
+ 	}
+@@ -4132,6 +4161,11 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 		goto unlock;
+ 	}
+ 
++	if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED)) {
++		ret = -EPERM;
++		goto unlock;
++	}
++
+ 	ret = generic_write_checks(iocb, from);
+ 	if (ret > 0) {
+ 		bool preallocated = false;
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 87752550f78c8..724760353bcd5 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -326,6 +326,12 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
+ 		}
+ 	}
+ 
++	if (fi->i_xattr_nid && f2fs_check_nid_range(sbi, fi->i_xattr_nid)) {
++		f2fs_warn(sbi, "%s: inode (ino=%lx) has corrupted i_xattr_nid: %u, run fsck to fix.",
++			  __func__, inode->i_ino, fi->i_xattr_nid);
++		return false;
++	}
++
+ 	return true;
+ }
+ 
+@@ -455,6 +461,7 @@ static int do_read_inode(struct inode *inode)
+ 					le64_to_cpu(ri->i_compr_blocks));
+ 			fi->i_compress_algorithm = ri->i_compress_algorithm;
+ 			fi->i_log_cluster_size = ri->i_log_cluster_size;
++			fi->i_compress_flag = le16_to_cpu(ri->i_compress_flag);
+ 			fi->i_cluster_size = 1 << fi->i_log_cluster_size;
+ 			set_inode_flag(inode, FI_COMPRESSED_FILE);
+ 		}
+@@ -633,6 +640,8 @@ void f2fs_update_inode(struct inode *inode, struct page *node_page)
+ 					&F2FS_I(inode)->i_compr_blocks));
+ 			ri->i_compress_algorithm =
+ 				F2FS_I(inode)->i_compress_algorithm;
++			ri->i_compress_flag =
++				cpu_to_le16(F2FS_I(inode)->i_compress_flag);
+ 			ri->i_log_cluster_size =
+ 				F2FS_I(inode)->i_log_cluster_size;
+ 		}
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 99e4ec48d2a46..56d23bc254353 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -937,7 +937,7 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 
+ 	/*
+ 	 * If new_inode is null, the below renaming flow will
+-	 * add a link in old_dir which can conver inline_dir.
++	 * add a link in old_dir which can convert inline_dir.
+ 	 * After then, if we failed to get the entry due to other
+ 	 * reasons like ENOMEM, we had to remove the new entry.
+ 	 * Instead of adding such the error handling routine, let's
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 02cb1c806c3ed..348ad1d6199ff 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1242,6 +1242,7 @@ struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs)
+ 	}
+ 	if (unlikely(new_ni.blk_addr != NULL_ADDR)) {
+ 		err = -EFSCORRUPTED;
++		dec_valid_node_count(sbi, dn->inode, !ofs);
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 		goto fail;
+ 	}
+@@ -1267,7 +1268,6 @@ struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs)
+ 	if (ofs == 0)
+ 		inc_valid_inode_count(sbi);
+ 	return page;
+-
+ fail:
+ 	clear_node_page_dirty(page);
+ 	f2fs_put_page(page, 1);
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index a27a934292715..134a78179e6ff 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -3296,7 +3296,7 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
+ 			else
+ 				return CURSEG_COLD_DATA;
+ 		}
+-		if (file_is_cold(inode) || f2fs_compressed_file(inode))
++		if (file_is_cold(inode) || f2fs_need_compress_data(inode))
+ 			return CURSEG_COLD_DATA;
+ 		if (file_is_hot(inode) ||
+ 				is_inode_flag_set(inode, FI_HOT_DATA) ||
+@@ -3688,7 +3688,7 @@ void f2fs_wait_on_page_writeback(struct page *page,
+ 
+ 		/* submit cached LFS IO */
+ 		f2fs_submit_merged_write_cond(sbi, NULL, page, 0, type);
+-		/* sbumit cached IPU IO */
++		/* submit cached IPU IO */
+ 		f2fs_submit_merged_ipu_write(sbi, NULL, page);
+ 		if (ordered) {
+ 			wait_on_page_writeback(page);
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 9a74d60f61dba..1281b59da6a2a 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -146,6 +146,8 @@ enum {
+ 	Opt_compress_algorithm,
+ 	Opt_compress_log_size,
+ 	Opt_compress_extension,
++	Opt_compress_chksum,
++	Opt_compress_mode,
+ 	Opt_atgc,
+ 	Opt_err,
+ };
+@@ -214,6 +216,8 @@ static match_table_t f2fs_tokens = {
+ 	{Opt_compress_algorithm, "compress_algorithm=%s"},
+ 	{Opt_compress_log_size, "compress_log_size=%u"},
+ 	{Opt_compress_extension, "compress_extension=%s"},
++	{Opt_compress_chksum, "compress_chksum"},
++	{Opt_compress_mode, "compress_mode=%s"},
+ 	{Opt_atgc, "atgc"},
+ 	{Opt_err, NULL},
+ };
+@@ -974,10 +978,29 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
+ 			F2FS_OPTION(sbi).compress_ext_cnt++;
+ 			kfree(name);
+ 			break;
++		case Opt_compress_chksum:
++			F2FS_OPTION(sbi).compress_chksum = true;
++			break;
++		case Opt_compress_mode:
++			name = match_strdup(&args[0]);
++			if (!name)
++				return -ENOMEM;
++			if (!strcmp(name, "fs")) {
++				F2FS_OPTION(sbi).compress_mode = COMPR_MODE_FS;
++			} else if (!strcmp(name, "user")) {
++				F2FS_OPTION(sbi).compress_mode = COMPR_MODE_USER;
++			} else {
++				kfree(name);
++				return -EINVAL;
++			}
++			kfree(name);
++			break;
+ #else
+ 		case Opt_compress_algorithm:
+ 		case Opt_compress_log_size:
+ 		case Opt_compress_extension:
++		case Opt_compress_chksum:
++		case Opt_compress_mode:
+ 			f2fs_info(sbi, "compression options not supported");
+ 			break;
+ #endif
+@@ -1562,6 +1585,14 @@ static inline void f2fs_show_compress_options(struct seq_file *seq,
+ 		seq_printf(seq, ",compress_extension=%s",
+ 			F2FS_OPTION(sbi).extensions[i]);
+ 	}
++
++	if (F2FS_OPTION(sbi).compress_chksum)
++		seq_puts(seq, ",compress_chksum");
++
++	if (F2FS_OPTION(sbi).compress_mode == COMPR_MODE_FS)
++		seq_printf(seq, ",compress_mode=%s", "fs");
++	else if (F2FS_OPTION(sbi).compress_mode == COMPR_MODE_USER)
++		seq_printf(seq, ",compress_mode=%s", "user");
+ }
+ 
+ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
+@@ -1711,6 +1742,7 @@ static void default_options(struct f2fs_sb_info *sbi)
+ 	F2FS_OPTION(sbi).compress_algorithm = COMPRESS_LZ4;
+ 	F2FS_OPTION(sbi).compress_log_size = MIN_COMPRESS_LOG_SIZE;
+ 	F2FS_OPTION(sbi).compress_ext_cnt = 0;
++	F2FS_OPTION(sbi).compress_mode = COMPR_MODE_FS;
+ 	F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_ON;
+ 
+ 	sbi->sb->s_flags &= ~SB_INLINECRYPT;
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index dd052101e2266..b0f01a8e37766 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -691,11 +691,13 @@ __acquires(&gl->gl_lockref.lock)
+ 	}
+ 
+ 	if (sdp->sd_lockstruct.ls_ops->lm_lock)	{
++		struct lm_lockstruct *ls = &sdp->sd_lockstruct;
++
+ 		/* lock_dlm */
+ 		ret = sdp->sd_lockstruct.ls_ops->lm_lock(gl, target, lck_flags);
+ 		if (ret == -EINVAL && gl->gl_target == LM_ST_UNLOCKED &&
+ 		    target == LM_ST_UNLOCKED &&
+-		    test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags)) {
++		    test_bit(DFL_UNMOUNT, &ls->ls_recover_flags)) {
+ 			finish_xmote(gl, target);
+ 			gfs2_glock_queue_work(gl, 0);
+ 		} else if (ret) {
+diff --git a/fs/gfs2/util.c b/fs/gfs2/util.c
+index 3ece99e6490c2..d11152dedb803 100644
+--- a/fs/gfs2/util.c
++++ b/fs/gfs2/util.c
+@@ -348,7 +348,6 @@ int gfs2_withdraw(struct gfs2_sbd *sdp)
+ 			fs_err(sdp, "telling LM to unmount\n");
+ 			lm->lm_unmount(sdp);
+ 		}
+-		set_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags);
+ 		fs_err(sdp, "File system withdrawn\n");
+ 		dump_stack();
+ 		clear_bit(SDF_WITHDRAW_IN_PROG, &sdp->sd_flags);
+diff --git a/fs/jffs2/xattr.c b/fs/jffs2/xattr.c
+index acb4492f5970c..5a31220f96f5f 100644
+--- a/fs/jffs2/xattr.c
++++ b/fs/jffs2/xattr.c
+@@ -1111,6 +1111,9 @@ int do_jffs2_setxattr(struct inode *inode, int xprefix, const char *xname,
+ 		return rc;
+ 
+ 	request = PAD(sizeof(struct jffs2_raw_xattr) + strlen(xname) + 1 + size);
++	if (request > c->sector_size - c->cleanmarker_size)
++		return -ERANGE;
++
+ 	rc = jffs2_reserve_space(c, request, &length,
+ 				 ALLOC_NORMAL, JFFS2_SUMMARY_XATTR_SIZE);
+ 	if (rc) {
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 9a72abfb46abc..566f1b11f62f7 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -660,9 +660,9 @@ unsigned long nfs_block_bits(unsigned long bsize, unsigned char *nrbitsp)
+ 	if ((bsize & (bsize - 1)) || nrbitsp) {
+ 		unsigned char	nrbits;
+ 
+-		for (nrbits = 31; nrbits && !(bsize & (1 << nrbits)); nrbits--)
++		for (nrbits = 31; nrbits && !(bsize & (1UL << nrbits)); nrbits--)
+ 			;
+-		bsize = 1 << nrbits;
++		bsize = 1UL << nrbits;
+ 		if (nrbitsp)
+ 			*nrbitsp = nrbits;
+ 	}
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 8e546e6a56198..1ff3f9efbe519 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -5320,7 +5320,7 @@ static bool nfs4_read_plus_not_supported(struct rpc_task *task,
+ 	struct rpc_message *msg = &task->tk_msg;
+ 
+ 	if (msg->rpc_proc == &nfs4_procedures[NFSPROC4_CLNT_READ_PLUS] &&
+-	    server->caps & NFS_CAP_READ_PLUS && task->tk_status == -ENOTSUPP) {
++	    task->tk_status == -ENOTSUPP) {
+ 		server->caps &= ~NFS_CAP_READ_PLUS;
+ 		msg->rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_READ];
+ 		rpc_restart_call_prepare(task);
+diff --git a/fs/nilfs2/ioctl.c b/fs/nilfs2/ioctl.c
+index 01235fac5971f..668c1b28a940e 100644
+--- a/fs/nilfs2/ioctl.c
++++ b/fs/nilfs2/ioctl.c
+@@ -59,7 +59,7 @@ static int nilfs_ioctl_wrap_copy(struct the_nilfs *nilfs,
+ 	if (argv->v_nmembs == 0)
+ 		return 0;
+ 
+-	if (argv->v_size > PAGE_SIZE)
++	if ((size_t)argv->v_size > PAGE_SIZE)
+ 		return -EINVAL;
+ 
+ 	/*
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index be0ca35b8aa4b..2eeae67ad2983 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -2164,8 +2164,10 @@ static void nilfs_segctor_start_timer(struct nilfs_sc_info *sci)
+ {
+ 	spin_lock(&sci->sc_state_lock);
+ 	if (!(sci->sc_state & NILFS_SEGCTOR_COMMIT)) {
+-		sci->sc_timer.expires = jiffies + sci->sc_interval;
+-		add_timer(&sci->sc_timer);
++		if (sci->sc_task) {
++			sci->sc_timer.expires = jiffies + sci->sc_interval;
++			add_timer(&sci->sc_timer);
++		}
+ 		sci->sc_state |= NILFS_SEGCTOR_COMMIT;
+ 	}
+ 	spin_unlock(&sci->sc_state_lock);
+@@ -2212,19 +2214,36 @@ static int nilfs_segctor_sync(struct nilfs_sc_info *sci)
+ 	struct nilfs_segctor_wait_request wait_req;
+ 	int err = 0;
+ 
+-	spin_lock(&sci->sc_state_lock);
+ 	init_wait(&wait_req.wq);
+ 	wait_req.err = 0;
+ 	atomic_set(&wait_req.done, 0);
++	init_waitqueue_entry(&wait_req.wq, current);
++
++	/*
++	 * To prevent a race issue where completion notifications from the
++	 * log writer thread are missed, increment the request sequence count
++	 * "sc_seq_request" and insert a wait queue entry using the current
++	 * sequence number into the "sc_wait_request" queue at the same time
++	 * within the lock section of "sc_state_lock".
++	 */
++	spin_lock(&sci->sc_state_lock);
+ 	wait_req.seq = ++sci->sc_seq_request;
++	add_wait_queue(&sci->sc_wait_request, &wait_req.wq);
+ 	spin_unlock(&sci->sc_state_lock);
+ 
+-	init_waitqueue_entry(&wait_req.wq, current);
+-	add_wait_queue(&sci->sc_wait_request, &wait_req.wq);
+-	set_current_state(TASK_INTERRUPTIBLE);
+ 	wake_up(&sci->sc_wait_daemon);
+ 
+ 	for (;;) {
++		set_current_state(TASK_INTERRUPTIBLE);
++
++		/*
++		 * Synchronize only while the log writer thread is alive.
++		 * Leave flushing out after the log writer thread exits to
++		 * the cleanup work in nilfs_segctor_destroy().
++		 */
++		if (!sci->sc_task)
++			break;
++
+ 		if (atomic_read(&wait_req.done)) {
+ 			err = wait_req.err;
+ 			break;
+@@ -2240,7 +2259,7 @@ static int nilfs_segctor_sync(struct nilfs_sc_info *sci)
+ 	return err;
+ }
+ 
+-static void nilfs_segctor_wakeup(struct nilfs_sc_info *sci, int err)
++static void nilfs_segctor_wakeup(struct nilfs_sc_info *sci, int err, bool force)
+ {
+ 	struct nilfs_segctor_wait_request *wrq, *n;
+ 	unsigned long flags;
+@@ -2248,7 +2267,7 @@ static void nilfs_segctor_wakeup(struct nilfs_sc_info *sci, int err)
+ 	spin_lock_irqsave(&sci->sc_wait_request.lock, flags);
+ 	list_for_each_entry_safe(wrq, n, &sci->sc_wait_request.head, wq.entry) {
+ 		if (!atomic_read(&wrq->done) &&
+-		    nilfs_cnt32_ge(sci->sc_seq_done, wrq->seq)) {
++		    (force || nilfs_cnt32_ge(sci->sc_seq_done, wrq->seq))) {
+ 			wrq->err = err;
+ 			atomic_set(&wrq->done, 1);
+ 		}
+@@ -2368,10 +2387,21 @@ int nilfs_construct_dsync_segment(struct super_block *sb, struct inode *inode,
+  */
+ static void nilfs_segctor_accept(struct nilfs_sc_info *sci)
+ {
++	bool thread_is_alive;
++
+ 	spin_lock(&sci->sc_state_lock);
+ 	sci->sc_seq_accepted = sci->sc_seq_request;
++	thread_is_alive = (bool)sci->sc_task;
+ 	spin_unlock(&sci->sc_state_lock);
+-	del_timer_sync(&sci->sc_timer);
++
++	/*
++	 * This function does not race with the log writer thread's
++	 * termination.  Therefore, deleting sc_timer, which should not be
++	 * done after the log writer thread exits, can be done safely outside
++	 * the area protected by sc_state_lock.
++	 */
++	if (thread_is_alive)
++		del_timer_sync(&sci->sc_timer);
+ }
+ 
+ /**
+@@ -2388,7 +2418,7 @@ static void nilfs_segctor_notify(struct nilfs_sc_info *sci, int mode, int err)
+ 	if (mode == SC_LSEG_SR) {
+ 		sci->sc_state &= ~NILFS_SEGCTOR_COMMIT;
+ 		sci->sc_seq_done = sci->sc_seq_accepted;
+-		nilfs_segctor_wakeup(sci, err);
++		nilfs_segctor_wakeup(sci, err, false);
+ 		sci->sc_flush_request = 0;
+ 	} else {
+ 		if (mode == SC_FLUSH_FILE)
+@@ -2397,7 +2427,7 @@ static void nilfs_segctor_notify(struct nilfs_sc_info *sci, int mode, int err)
+ 			sci->sc_flush_request &= ~FLUSH_DAT_BIT;
+ 
+ 		/* re-enable timer if checkpoint creation was not done */
+-		if ((sci->sc_state & NILFS_SEGCTOR_COMMIT) &&
++		if ((sci->sc_state & NILFS_SEGCTOR_COMMIT) && sci->sc_task &&
+ 		    time_before(jiffies, sci->sc_timer.expires))
+ 			add_timer(&sci->sc_timer);
+ 	}
+@@ -2587,6 +2617,7 @@ static int nilfs_segctor_thread(void *arg)
+ 	int timeout = 0;
+ 
+ 	sci->sc_timer_task = current;
++	timer_setup(&sci->sc_timer, nilfs_construction_timeout, 0);
+ 
+ 	/* start sync. */
+ 	sci->sc_task = current;
+@@ -2653,6 +2684,7 @@ static int nilfs_segctor_thread(void *arg)
+  end_thread:
+ 	/* end sync. */
+ 	sci->sc_task = NULL;
++	del_timer_sync(&sci->sc_timer);
+ 	wake_up(&sci->sc_wait_task); /* for nilfs_segctor_kill_thread() */
+ 	spin_unlock(&sci->sc_state_lock);
+ 	return 0;
+@@ -2716,7 +2748,6 @@ static struct nilfs_sc_info *nilfs_segctor_new(struct super_block *sb,
+ 	INIT_LIST_HEAD(&sci->sc_gc_inodes);
+ 	INIT_LIST_HEAD(&sci->sc_iput_queue);
+ 	INIT_WORK(&sci->sc_iput_work, nilfs_iput_work_func);
+-	timer_setup(&sci->sc_timer, nilfs_construction_timeout, 0);
+ 
+ 	sci->sc_interval = HZ * NILFS_SC_DEFAULT_TIMEOUT;
+ 	sci->sc_mjcp_freq = HZ * NILFS_SC_DEFAULT_SR_FREQ;
+@@ -2770,6 +2801,13 @@ static void nilfs_segctor_destroy(struct nilfs_sc_info *sci)
+ 		|| sci->sc_seq_request != sci->sc_seq_done);
+ 	spin_unlock(&sci->sc_state_lock);
+ 
++	/*
++	 * Forcibly wake up tasks waiting in nilfs_segctor_sync(), which can
++	 * be called from delayed iput() via nilfs_evict_inode() and can race
++	 * with the above log writer thread termination.
++	 */
++	nilfs_segctor_wakeup(sci, 0, true);
++
+ 	if (flush_work(&sci->sc_iput_work))
+ 		flag = true;
+ 
+@@ -2795,7 +2833,6 @@ static void nilfs_segctor_destroy(struct nilfs_sc_info *sci)
+ 
+ 	down_write(&nilfs->ns_segctor_sem);
+ 
+-	del_timer_sync(&sci->sc_timer);
+ 	kfree(sci);
+ }
+ 
+diff --git a/fs/openpromfs/inode.c b/fs/openpromfs/inode.c
+index 40c8c2e32fa3e..1e22344be5e54 100644
+--- a/fs/openpromfs/inode.c
++++ b/fs/openpromfs/inode.c
+@@ -362,10 +362,10 @@ static struct inode *openprom_iget(struct super_block *sb, ino_t ino)
+ 	return inode;
+ }
+ 
+-static int openprom_remount(struct super_block *sb, int *flags, char *data)
++static int openpromfs_reconfigure(struct fs_context *fc)
+ {
+-	sync_filesystem(sb);
+-	*flags |= SB_NOATIME;
++	sync_filesystem(fc->root->d_sb);
++	fc->sb_flags |= SB_NOATIME;
+ 	return 0;
+ }
+ 
+@@ -373,7 +373,6 @@ static const struct super_operations openprom_sops = {
+ 	.alloc_inode	= openprom_alloc_inode,
+ 	.free_inode	= openprom_free_inode,
+ 	.statfs		= simple_statfs,
+-	.remount_fs	= openprom_remount,
+ };
+ 
+ static int openprom_fill_super(struct super_block *s, struct fs_context *fc)
+@@ -417,6 +416,7 @@ static int openpromfs_get_tree(struct fs_context *fc)
+ 
+ static const struct fs_context_operations openpromfs_context_ops = {
+ 	.get_tree	= openpromfs_get_tree,
++	.reconfigure	= openpromfs_reconfigure,
+ };
+ 
+ static int openpromfs_init_fs_context(struct fs_context *fc)
+diff --git a/include/drm/drm_mipi_dsi.h b/include/drm/drm_mipi_dsi.h
+index 3c0d1495c062d..0dbed65c0ca5a 100644
+--- a/include/drm/drm_mipi_dsi.h
++++ b/include/drm/drm_mipi_dsi.h
+@@ -231,9 +231,9 @@ int mipi_dsi_shutdown_peripheral(struct mipi_dsi_device *dsi);
+ int mipi_dsi_turn_on_peripheral(struct mipi_dsi_device *dsi);
+ int mipi_dsi_set_maximum_return_packet_size(struct mipi_dsi_device *dsi,
+ 					    u16 value);
+-ssize_t mipi_dsi_compression_mode(struct mipi_dsi_device *dsi, bool enable);
+-ssize_t mipi_dsi_picture_parameter_set(struct mipi_dsi_device *dsi,
+-				       const struct drm_dsc_picture_parameter_set *pps);
++int mipi_dsi_compression_mode(struct mipi_dsi_device *dsi, bool enable);
++int mipi_dsi_picture_parameter_set(struct mipi_dsi_device *dsi,
++				   const struct drm_dsc_picture_parameter_set *pps);
+ 
+ ssize_t mipi_dsi_generic_write(struct mipi_dsi_device *dsi, const void *payload,
+ 			       size_t size);
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index a5dbb57a687fb..9a6082c572199 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -229,6 +229,7 @@ struct f2fs_extent {
+ #define F2FS_INLINE_DOTS	0x10	/* file having implicit dot dentries */
+ #define F2FS_EXTRA_ATTR		0x20	/* file having extra attribute */
+ #define F2FS_PIN_FILE		0x40	/* file should not be gced */
++#define F2FS_COMPRESS_RELEASED	0x80	/* file released compressed blocks */
+ 
+ struct f2fs_inode {
+ 	__le16 i_mode;			/* file mode */
+@@ -273,7 +274,7 @@ struct f2fs_inode {
+ 			__le64 i_compr_blocks;	/* # of compressed blocks */
+ 			__u8 i_compress_algorithm;	/* compress algorithm */
+ 			__u8 i_log_cluster_size;	/* log of cluster size */
+-			__le16 i_padding;		/* padding */
++			__le16 i_compress_flag;		/* compress flag */
+ 			__le32 i_extra_end[0];	/* for attribute size calculation */
+ 		} __packed;
+ 		__le32 i_addr[DEF_ADDRS_PER_INODE];	/* Pointers to data blocks */
+diff --git a/include/linux/fpga/fpga-region.h b/include/linux/fpga/fpga-region.h
+index 27cb706275dba..1c446c2ce2f9c 100644
+--- a/include/linux/fpga/fpga-region.h
++++ b/include/linux/fpga/fpga-region.h
+@@ -7,6 +7,27 @@
+ #include <linux/fpga/fpga-mgr.h>
+ #include <linux/fpga/fpga-bridge.h>
+ 
++struct fpga_region;
++
++/**
++ * struct fpga_region_info - collection of parameters an FPGA Region
++ * @mgr: fpga region manager
++ * @compat_id: FPGA region id for compatibility check.
++ * @priv: fpga region private data
++ * @get_bridges: optional function to get bridges to a list
++ *
++ * fpga_region_info contains parameters for the register_full function.
++ * These are separated into an info structure because they some are optional
++ * others could be added to in the future. The info structure facilitates
++ * maintaining a stable API.
++ */
++struct fpga_region_info {
++	struct fpga_manager *mgr;
++	struct fpga_compat_id *compat_id;
++	void *priv;
++	int (*get_bridges)(struct fpga_region *region);
++};
++
+ /**
+  * struct fpga_region - FPGA Region structure
+  * @dev: FPGA Region device
+@@ -15,6 +36,7 @@
+  * @mgr: FPGA manager
+  * @info: FPGA image info
+  * @compat_id: FPGA region id for compatibility check.
++ * @ops_owner: module containing the get_bridges function
+  * @priv: private data
+  * @get_bridges: optional function to get bridges to a list
+  */
+@@ -25,6 +47,7 @@ struct fpga_region {
+ 	struct fpga_manager *mgr;
+ 	struct fpga_image_info *info;
+ 	struct fpga_compat_id *compat_id;
++	struct module *ops_owner;
+ 	void *priv;
+ 	int (*get_bridges)(struct fpga_region *region);
+ };
+@@ -37,15 +60,17 @@ struct fpga_region *fpga_region_class_find(
+ 
+ int fpga_region_program_fpga(struct fpga_region *region);
+ 
+-struct fpga_region
+-*fpga_region_create(struct device *dev, struct fpga_manager *mgr,
+-		    int (*get_bridges)(struct fpga_region *));
+-void fpga_region_free(struct fpga_region *region);
+-int fpga_region_register(struct fpga_region *region);
+-void fpga_region_unregister(struct fpga_region *region);
++#define fpga_region_register_full(parent, info) \
++	__fpga_region_register_full(parent, info, THIS_MODULE)
++struct fpga_region *
++__fpga_region_register_full(struct device *parent, const struct fpga_region_info *info,
++			    struct module *owner);
+ 
+-struct fpga_region
+-*devm_fpga_region_create(struct device *dev, struct fpga_manager *mgr,
+-			int (*get_bridges)(struct fpga_region *));
++#define fpga_region_register(parent, mgr, get_bridges) \
++	__fpga_region_register(parent, mgr, get_bridges, THIS_MODULE)
++struct fpga_region *
++__fpga_region_register(struct device *parent, struct fpga_manager *mgr,
++		       int (*get_bridges)(struct fpga_region *), struct module *owner);
++void fpga_region_unregister(struct fpga_region *region);
+ 
+ #endif /* _FPGA_REGION_H */
+diff --git a/include/linux/mmc/slot-gpio.h b/include/linux/mmc/slot-gpio.h
+index 4ae2f2908f993..d4a1567c94d0d 100644
+--- a/include/linux/mmc/slot-gpio.h
++++ b/include/linux/mmc/slot-gpio.h
+@@ -20,6 +20,7 @@ int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id,
+ 			 unsigned int debounce);
+ int mmc_gpiod_request_ro(struct mmc_host *host, const char *con_id,
+ 			 unsigned int idx, unsigned int debounce);
++int mmc_gpiod_set_cd_config(struct mmc_host *host, unsigned long config);
+ void mmc_gpio_set_cd_isr(struct mmc_host *host,
+ 			 irqreturn_t (*isr)(int irq, void *dev_id));
+ int mmc_gpio_set_cd_wake(struct mmc_host *host, bool on);
+diff --git a/include/linux/moduleparam.h b/include/linux/moduleparam.h
+index 6388eb9734a51..f25a1c4843903 100644
+--- a/include/linux/moduleparam.h
++++ b/include/linux/moduleparam.h
+@@ -431,6 +431,8 @@ extern int param_get_int(char *buffer, const struct kernel_param *kp);
+ extern const struct kernel_param_ops param_ops_uint;
+ extern int param_set_uint(const char *val, const struct kernel_param *kp);
+ extern int param_get_uint(char *buffer, const struct kernel_param *kp);
++int param_set_uint_minmax(const char *val, const struct kernel_param *kp,
++		unsigned int min, unsigned int max);
+ #define param_check_uint(name, p) __param_check(name, p, unsigned int)
+ 
+ extern const struct kernel_param_ops param_ops_long;
+diff --git a/include/media/cec.h b/include/media/cec.h
+index cd35ae6b7560f..38eb9334d854f 100644
+--- a/include/media/cec.h
++++ b/include/media/cec.h
+@@ -26,13 +26,17 @@
+  * @dev:	cec device
+  * @cdev:	cec character device
+  * @minor:	device node minor number
++ * @lock:	lock to serialize open/release and registration
+  * @registered:	the device was correctly registered
+  * @unregistered: the device was unregistered
+- * @fhs_lock:	lock to control access to the filehandle list
++ * @lock_fhs:	lock to control access to @fhs
+  * @fhs:	the list of open filehandles (cec_fh)
+  *
+  * This structure represents a cec-related device node.
+  *
++ * To add or remove filehandles from @fhs the @lock must be taken first,
++ * followed by @lock_fhs. It is safe to access @fhs if either lock is held.
++ *
+  * The @parent is a physical device. It must be set by core or device drivers
+  * before registering the node.
+  */
+@@ -43,10 +47,13 @@ struct cec_devnode {
+ 
+ 	/* device info */
+ 	int minor;
++	/* serialize open/release and registration */
++	struct mutex lock;
+ 	bool registered;
+ 	bool unregistered;
++	/* protect access to fhs */
++	struct mutex lock_fhs;
+ 	struct list_head fhs;
+-	struct mutex lock;
+ };
+ 
+ struct cec_adapter;
+@@ -113,14 +120,16 @@ struct cec_adap_ops {
+ 	int (*adap_log_addr)(struct cec_adapter *adap, u8 logical_addr);
+ 	int (*adap_transmit)(struct cec_adapter *adap, u8 attempts,
+ 			     u32 signal_free_time, struct cec_msg *msg);
++	void (*adap_nb_transmit_canceled)(struct cec_adapter *adap,
++					  const struct cec_msg *msg);
+ 	void (*adap_status)(struct cec_adapter *adap, struct seq_file *file);
+ 	void (*adap_free)(struct cec_adapter *adap);
+ 
+-	/* Error injection callbacks */
++	/* Error injection callbacks, called without adap->lock held */
+ 	int (*error_inj_show)(struct cec_adapter *adap, struct seq_file *sf);
+ 	bool (*error_inj_parse_line)(struct cec_adapter *adap, char *line);
+ 
+-	/* High-level CEC message callback */
++	/* High-level CEC message callback, called without adap->lock held */
+ 	int (*received)(struct cec_adapter *adap, struct cec_msg *msg);
+ };
+ 
+@@ -156,6 +165,11 @@ struct cec_adap_ops {
+  * @wait_queue:		queue of transmits waiting for a reply
+  * @transmitting:	CEC messages currently being transmitted
+  * @transmit_in_progress: true if a transmit is in progress
++ * @transmit_in_progress_aborted: true if a transmit is in progress is to be
++ *			aborted. This happens if the logical address is
++ *			invalidated while the transmit is ongoing. In that
++ *			case the transmit will finish, but will not retransmit
++ *			and be marked as ABORTED.
+  * @kthread_config:	kthread used to configure a CEC adapter
+  * @config_completion:	used to signal completion of the config kthread
+  * @kthread:		main CEC processing thread
+@@ -168,6 +182,7 @@ struct cec_adap_ops {
+  * @needs_hpd:		if true, then the HDMI HotPlug Detect pin must be high
+  *	in order to transmit or receive CEC messages. This is usually a HW
+  *	limitation.
++ * @is_enabled:		the CEC adapter is enabled
+  * @is_configuring:	the CEC adapter is configuring (i.e. claiming LAs)
+  * @is_configured:	the CEC adapter is configured (i.e. has claimed LAs)
+  * @cec_pin_is_high:	if true then the CEC pin is high. Only used with the
+@@ -210,6 +225,7 @@ struct cec_adapter {
+ 	struct list_head wait_queue;
+ 	struct cec_data *transmitting;
+ 	bool transmit_in_progress;
++	bool transmit_in_progress_aborted;
+ 
+ 	struct task_struct *kthread_config;
+ 	struct completion config_completion;
+@@ -224,6 +240,8 @@ struct cec_adapter {
+ 
+ 	u16 phys_addr;
+ 	bool needs_hpd;
++	bool is_enabled;
++	bool is_claiming_log_addrs;
+ 	bool is_configuring;
+ 	bool is_configured;
+ 	bool cec_pin_is_high;
+diff --git a/include/media/v4l2-h264.h b/include/media/v4l2-h264.h
+index f08ba181263d1..1cc89d2e693a3 100644
+--- a/include/media/v4l2-h264.h
++++ b/include/media/v4l2-h264.h
+@@ -66,11 +66,11 @@ v4l2_h264_build_b_ref_lists(const struct v4l2_h264_reflist_builder *builder,
+ 			    u8 *b0_reflist, u8 *b1_reflist);
+ 
+ /**
+- * v4l2_h264_build_b_ref_lists() - Build the P reference list
++ * v4l2_h264_build_p_ref_list() - Build the P reference list
+  *
+  * @builder: reference list builder context
+- * @p_reflist: 16-bytes array used to store the P reference list. Each entry
+- *	       is an index in the DPB
++ * @reflist: 16-bytes array used to store the P reference list. Each entry
++ *	     is an index in the DPB
+  *
+  * This functions builds the P reference lists. This procedure is describe in
+  * section '8.2.4 Decoding process for reference picture lists construction'
+diff --git a/include/media/v4l2-jpeg.h b/include/media/v4l2-jpeg.h
+index ddba2a56c3214..3a3344a976782 100644
+--- a/include/media/v4l2-jpeg.h
++++ b/include/media/v4l2-jpeg.h
+@@ -91,7 +91,9 @@ struct v4l2_jpeg_scan_header {
+  * struct v4l2_jpeg_header - parsed JPEG header
+  * @sof: pointer to frame header and size
+  * @sos: pointer to scan header and size
++ * @num_dht: number of entries in @dht
+  * @dht: pointers to huffman tables and sizes
++ * @num_dqt: number of entries in @dqt
+  * @dqt: pointers to quantization tables and sizes
+  * @frame: parsed frame header
+  * @scan: pointer to parsed scan header, optional
+diff --git a/include/net/dst_ops.h b/include/net/dst_ops.h
+index 632086b2f644a..3ae2fda295073 100644
+--- a/include/net/dst_ops.h
++++ b/include/net/dst_ops.h
+@@ -24,7 +24,7 @@ struct dst_ops {
+ 	void			(*destroy)(struct dst_entry *);
+ 	void			(*ifdown)(struct dst_entry *,
+ 					  struct net_device *dev, int how);
+-	struct dst_entry *	(*negative_advice)(struct dst_entry *);
++	void			(*negative_advice)(struct sock *sk, struct dst_entry *);
+ 	void			(*link_failure)(struct sk_buff *);
+ 	void			(*update_pmtu)(struct dst_entry *dst, struct sock *sk,
+ 					       struct sk_buff *skb, u32 mtu,
+diff --git a/include/net/inet6_hashtables.h b/include/net/inet6_hashtables.h
+index 56f1286583d3c..f89320b6fee31 100644
+--- a/include/net/inet6_hashtables.h
++++ b/include/net/inet6_hashtables.h
+@@ -48,6 +48,22 @@ struct sock *__inet6_lookup_established(struct net *net,
+ 					const u16 hnum, const int dif,
+ 					const int sdif);
+ 
++typedef u32 (inet6_ehashfn_t)(const struct net *net,
++			       const struct in6_addr *laddr, const u16 lport,
++			       const struct in6_addr *faddr, const __be16 fport);
++
++inet6_ehashfn_t inet6_ehashfn;
++
++INDIRECT_CALLABLE_DECLARE(inet6_ehashfn_t udp6_ehashfn);
++
++struct sock *inet6_lookup_reuseport(struct net *net, struct sock *sk,
++				    struct sk_buff *skb, int doff,
++				    const struct in6_addr *saddr,
++				    __be16 sport,
++				    const struct in6_addr *daddr,
++				    unsigned short hnum,
++				    inet6_ehashfn_t *ehashfn);
++
+ struct sock *inet6_lookup_listener(struct net *net,
+ 				   struct inet_hashinfo *hashinfo,
+ 				   struct sk_buff *skb, int doff,
+diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h
+index c9e387d174c63..7778422cfac55 100644
+--- a/include/net/inet_hashtables.h
++++ b/include/net/inet_hashtables.h
+@@ -313,6 +313,20 @@ struct sock *__inet_lookup_established(struct net *net,
+ 				       const __be32 daddr, const u16 hnum,
+ 				       const int dif, const int sdif);
+ 
++typedef u32 (inet_ehashfn_t)(const struct net *net,
++			      const __be32 laddr, const __u16 lport,
++			      const __be32 faddr, const __be16 fport);
++
++inet_ehashfn_t inet_ehashfn;
++
++INDIRECT_CALLABLE_DECLARE(inet_ehashfn_t udp_ehashfn);
++
++struct sock *inet_lookup_reuseport(struct net *net, struct sock *sk,
++				   struct sk_buff *skb, int doff,
++				   __be32 saddr, __be16 sport,
++				   __be32 daddr, unsigned short hnum,
++				   inet_ehashfn_t *ehashfn);
++
+ static inline struct sock *
+ 	inet_lookup_established(struct net *net, struct inet_hashinfo *hashinfo,
+ 				const __be32 saddr, const __be16 sport,
+@@ -382,10 +396,6 @@ static inline struct sock *__inet_lookup_skb(struct inet_hashinfo *hashinfo,
+ 			     refcounted);
+ }
+ 
+-u32 inet6_ehashfn(const struct net *net,
+-		  const struct in6_addr *laddr, const u16 lport,
+-		  const struct in6_addr *faddr, const __be16 fport);
+-
+ static inline void sk_daddr_set(struct sock *sk, __be32 addr)
+ {
+ 	sk->sk_daddr = addr; /* alias of inet_daddr */
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 2da11d8c0f45e..ab8d84775ca87 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1174,6 +1174,7 @@ void nft_obj_notify(struct net *net, const struct nft_table *table,
+  *	@type: stateful object numeric type
+  *	@owner: module owner
+  *	@maxattr: maximum netlink attribute
++ *	@family: address family for AF-specific object types
+  *	@policy: netlink attribute policy
+  */
+ struct nft_object_type {
+@@ -1183,6 +1184,7 @@ struct nft_object_type {
+ 	struct list_head		list;
+ 	u32				type;
+ 	unsigned int                    maxattr;
++	u8				family;
+ 	struct module			*owner;
+ 	const struct nla_policy		*policy;
+ };
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 8bcc96bf291c3..0be6819849878 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2012,17 +2012,10 @@ sk_dst_get(struct sock *sk)
+ 
+ static inline void __dst_negative_advice(struct sock *sk)
+ {
+-	struct dst_entry *ndst, *dst = __sk_dst_get(sk);
++	struct dst_entry *dst = __sk_dst_get(sk);
+ 
+-	if (dst && dst->ops->negative_advice) {
+-		ndst = dst->ops->negative_advice(dst);
+-
+-		if (ndst != dst) {
+-			rcu_assign_pointer(sk->sk_dst_cache, ndst);
+-			sk_tx_queue_clear(sk);
+-			WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
+-		}
+-	}
++	if (dst && dst->ops->negative_advice)
++		dst->ops->negative_advice(sk, dst);
+ }
+ 
+ static inline void dst_negative_advice(struct sock *sk)
+diff --git a/include/sound/soc-acpi.h b/include/sound/soc-acpi.h
+index b16a844d16ef9..9a43c44dcbbba 100644
+--- a/include/sound/soc-acpi.h
++++ b/include/sound/soc-acpi.h
+@@ -171,4 +171,10 @@ struct snd_soc_acpi_codecs {
+ 	u8 codecs[SND_SOC_ACPI_MAX_CODECS][ACPI_ID_LEN];
+ };
+ 
++static inline bool snd_soc_acpi_sof_parent(struct device *dev)
++{
++	return dev->parent && dev->parent->driver && dev->parent->driver->name &&
++		!strcmp(dev->parent->driver->name, "sof-audio-acpi");
++}
++
+ #endif
+diff --git a/include/trace/events/asoc.h b/include/trace/events/asoc.h
+index 40c300fe704da..f62d5b7024261 100644
+--- a/include/trace/events/asoc.h
++++ b/include/trace/events/asoc.h
+@@ -11,6 +11,8 @@
+ #define DAPM_DIRECT "(direct)"
+ #define DAPM_ARROW(dir) (((dir) == SND_SOC_DAPM_DIR_OUT) ? "->" : "<-")
+ 
++TRACE_DEFINE_ENUM(SND_SOC_DAPM_DIR_OUT);
++
+ struct snd_soc_jack;
+ struct snd_soc_card;
+ struct snd_soc_dapm_widget;
+diff --git a/include/uapi/linux/cec.h b/include/uapi/linux/cec.h
+index 7d1a06c524696..dc8879d179fdf 100644
+--- a/include/uapi/linux/cec.h
++++ b/include/uapi/linux/cec.h
+@@ -396,6 +396,7 @@ struct cec_drm_connector_info {
+  * associated with the CEC adapter.
+  * @type: connector type (if any)
+  * @drm: drm connector info
++ * @raw: array to pad the union
+  */
+ struct cec_connector_info {
+ 	__u32 type;
+@@ -453,7 +454,7 @@ struct cec_event_lost_msgs {
+  * struct cec_event - CEC event structure
+  * @ts: the timestamp of when the event was sent.
+  * @event: the event.
+- * array.
++ * @flags: event flags.
+  * @state_change: the event payload for CEC_EVENT_STATE_CHANGE.
+  * @lost_msgs: the event payload for CEC_EVENT_LOST_MSGS.
+  * @raw: array to pad the union.
+diff --git a/include/uapi/linux/v4l2-subdev.h b/include/uapi/linux/v4l2-subdev.h
+index a38454d9e0f54..658106f5b5dc9 100644
+--- a/include/uapi/linux/v4l2-subdev.h
++++ b/include/uapi/linux/v4l2-subdev.h
+@@ -44,6 +44,7 @@ enum v4l2_subdev_format_whence {
+  * @which: format type (from enum v4l2_subdev_format_whence)
+  * @pad: pad number, as reported by the media API
+  * @format: media bus format (format code and frame size)
++ * @reserved: drivers and applications must zero this array
+  */
+ struct v4l2_subdev_format {
+ 	__u32 which;
+@@ -57,6 +58,7 @@ struct v4l2_subdev_format {
+  * @which: format type (from enum v4l2_subdev_format_whence)
+  * @pad: pad number, as reported by the media API
+  * @rect: pad crop rectangle boundaries
++ * @reserved: drivers and applications must zero this array
+  */
+ struct v4l2_subdev_crop {
+ 	__u32 which;
+@@ -78,6 +80,7 @@ struct v4l2_subdev_crop {
+  * @code: format code (MEDIA_BUS_FMT_ definitions)
+  * @which: format type (from enum v4l2_subdev_format_whence)
+  * @flags: flags set by the driver, (V4L2_SUBDEV_MBUS_CODE_*)
++ * @reserved: drivers and applications must zero this array
+  */
+ struct v4l2_subdev_mbus_code_enum {
+ 	__u32 pad;
+@@ -90,10 +93,15 @@ struct v4l2_subdev_mbus_code_enum {
+ 
+ /**
+  * struct v4l2_subdev_frame_size_enum - Media bus format enumeration
+- * @pad: pad number, as reported by the media API
+  * @index: format index during enumeration
++ * @pad: pad number, as reported by the media API
+  * @code: format code (MEDIA_BUS_FMT_ definitions)
++ * @min_width: minimum frame width, in pixels
++ * @max_width: maximum frame width, in pixels
++ * @min_height: minimum frame height, in pixels
++ * @max_height: maximum frame height, in pixels
+  * @which: format type (from enum v4l2_subdev_format_whence)
++ * @reserved: drivers and applications must zero this array
+  */
+ struct v4l2_subdev_frame_size_enum {
+ 	__u32 index;
+@@ -111,6 +119,7 @@ struct v4l2_subdev_frame_size_enum {
+  * struct v4l2_subdev_frame_interval - Pad-level frame rate
+  * @pad: pad number, as reported by the media API
+  * @interval: frame interval in seconds
++ * @reserved: drivers and applications must zero this array
+  */
+ struct v4l2_subdev_frame_interval {
+ 	__u32 pad;
+@@ -127,6 +136,7 @@ struct v4l2_subdev_frame_interval {
+  * @height: frame height in pixels
+  * @interval: frame interval in seconds
+  * @which: format type (from enum v4l2_subdev_format_whence)
++ * @reserved: drivers and applications must zero this array
+  */
+ struct v4l2_subdev_frame_interval_enum {
+ 	__u32 index;
+diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
+index 55b8c4b824797..1bbd81f031fe0 100644
+--- a/include/uapi/linux/videodev2.h
++++ b/include/uapi/linux/videodev2.h
+@@ -976,8 +976,10 @@ struct v4l2_requestbuffers {
+  *			pointing to this plane
+  * @fd:			when memory is V4L2_MEMORY_DMABUF, a userspace file
+  *			descriptor associated with this plane
++ * @m:			union of @mem_offset, @userptr and @fd
+  * @data_offset:	offset in the plane to the start of data; usually 0,
+  *			unless there is a header in front of the data
++ * @reserved:		drivers and applications must zero this array
+  *
+  * Multi-planar buffers consist of one or more planes, e.g. an YCbCr buffer
+  * with two planes can have one plane for Y, and another for interleaved CbCr
+@@ -1019,10 +1021,14 @@ struct v4l2_plane {
+  *		a userspace file descriptor associated with this buffer
+  * @planes:	for multiplanar buffers; userspace pointer to the array of plane
+  *		info structs for this buffer
++ * @m:		union of @offset, @userptr, @planes and @fd
+  * @length:	size in bytes of the buffer (NOT its payload) for single-plane
+  *		buffers (when type != *_MPLANE); number of elements in the
+  *		planes array for multi-plane buffers
++ * @reserved2:	drivers and applications must zero this field
+  * @request_fd: fd of the request that this buffer should use
++ * @reserved:	for backwards compatibility with applications that do not know
++ *		about @request_fd
+  *
+  * Contains data exchanged by application and driver using one of the Streaming
+  * I/O methods.
+@@ -1060,7 +1066,7 @@ struct v4l2_buffer {
+ #ifndef __KERNEL__
+ /**
+  * v4l2_timeval_to_ns - Convert timeval to nanoseconds
+- * @ts:		pointer to the timeval variable to be converted
++ * @tv:		pointer to the timeval variable to be converted
+  *
+  * Returns the scalar nanosecond representation of the timeval
+  * parameter.
+@@ -1121,6 +1127,7 @@ static inline __u64 v4l2_timeval_to_ns(const struct timeval *tv)
+  * @flags:	flags for newly created file, currently only O_CLOEXEC is
+  *		supported, refer to manual of open syscall for more details
+  * @fd:		file descriptor associated with DMABUF (set by driver)
++ * @reserved:	drivers and applications must zero this array
+  *
+  * Contains data used for exporting a video buffer as DMABUF file descriptor.
+  * The buffer is identified by a 'cookie' returned by VIDIOC_QUERYBUF
+@@ -2215,6 +2222,7 @@ struct v4l2_mpeg_vbi_fmt_ivtv {
+  *			this plane will be used
+  * @bytesperline:	distance in bytes between the leftmost pixels in two
+  *			adjacent lines
++ * @reserved:		drivers and applications must zero this array
+  */
+ struct v4l2_plane_pix_format {
+ 	__u32		sizeimage;
+@@ -2233,8 +2241,10 @@ struct v4l2_plane_pix_format {
+  * @num_planes:		number of planes for this format
+  * @flags:		format flags (V4L2_PIX_FMT_FLAG_*)
+  * @ycbcr_enc:		enum v4l2_ycbcr_encoding, Y'CbCr encoding
++ * @hsv_enc:		enum v4l2_hsv_encoding, HSV encoding
+  * @quantization:	enum v4l2_quantization, colorspace quantization
+  * @xfer_func:		enum v4l2_xfer_func, colorspace transfer function
++ * @reserved:		drivers and applications must zero this array
+  */
+ struct v4l2_pix_format_mplane {
+ 	__u32				width;
+@@ -2259,6 +2269,7 @@ struct v4l2_pix_format_mplane {
+  * struct v4l2_sdr_format - SDR format definition
+  * @pixelformat:	little endian four character code (fourcc)
+  * @buffersize:		maximum size in bytes required for data
++ * @reserved:		drivers and applications must zero this array
+  */
+ struct v4l2_sdr_format {
+ 	__u32				pixelformat;
+@@ -2285,6 +2296,8 @@ struct v4l2_meta_format {
+  * @vbi:	raw VBI capture or output parameters
+  * @sliced:	sliced VBI capture or output parameters
+  * @raw_data:	placeholder for future extensions and custom formats
++ * @fmt:	union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
++ *		and @raw_data
+  */
+ struct v4l2_format {
+ 	__u32	 type;
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 93f9ecedc59f6..47bc8fe2b9452 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -6474,6 +6474,8 @@ static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ 	switch (req->opcode) {
+ 	case IORING_OP_NOP:
++		if (READ_ONCE(sqe->rw_flags))
++			return -EINVAL;
+ 		return 0;
+ 	case IORING_OP_READV:
+ 	case IORING_OP_READ_FIXED:
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 25f8a8716e88d..ad115ccc2fe0e 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -4890,7 +4890,8 @@ static bool may_update_sockmap(struct bpf_verifier_env *env, int func_id)
+ 	enum bpf_attach_type eatype = env->prog->expected_attach_type;
+ 	enum bpf_prog_type type = resolve_prog_type(env->prog);
+ 
+-	if (func_id != BPF_FUNC_map_update_elem)
++	if (func_id != BPF_FUNC_map_update_elem &&
++	    func_id != BPF_FUNC_map_delete_elem)
+ 		return false;
+ 
+ 	/* It's not possible to get access to a locked struct sock in these
+@@ -4901,6 +4902,11 @@ static bool may_update_sockmap(struct bpf_verifier_env *env, int func_id)
+ 		if (eatype == BPF_TRACE_ITER)
+ 			return true;
+ 		break;
++	case BPF_PROG_TYPE_SOCK_OPS:
++		/* map_update allowed only via dedicated helpers with event type checks */
++		if (func_id == BPF_FUNC_map_delete_elem)
++			return true;
++		break;
+ 	case BPF_PROG_TYPE_SOCKET_FILTER:
+ 	case BPF_PROG_TYPE_SCHED_CLS:
+ 	case BPF_PROG_TYPE_SCHED_ACT:
+@@ -4988,7 +4994,6 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
+ 	case BPF_MAP_TYPE_SOCKMAP:
+ 		if (func_id != BPF_FUNC_sk_redirect_map &&
+ 		    func_id != BPF_FUNC_sock_map_update &&
+-		    func_id != BPF_FUNC_map_delete_elem &&
+ 		    func_id != BPF_FUNC_msg_redirect_map &&
+ 		    func_id != BPF_FUNC_sk_select_reuseport &&
+ 		    func_id != BPF_FUNC_map_lookup_elem &&
+@@ -4998,7 +5003,6 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
+ 	case BPF_MAP_TYPE_SOCKHASH:
+ 		if (func_id != BPF_FUNC_sk_redirect_hash &&
+ 		    func_id != BPF_FUNC_sock_hash_update &&
+-		    func_id != BPF_FUNC_map_delete_elem &&
+ 		    func_id != BPF_FUNC_msg_redirect_hash &&
+ 		    func_id != BPF_FUNC_sk_select_reuseport &&
+ 		    func_id != BPF_FUNC_map_lookup_elem &&
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 195f9cccab20b..9f2a93c829a91 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1901,7 +1901,7 @@ bool current_cpuset_is_being_rebound(void)
+ static int update_relax_domain_level(struct cpuset *cs, s64 val)
+ {
+ #ifdef CONFIG_SMP
+-	if (val < -1 || val >= sched_domain_level_max)
++	if (val < -1 || val > sched_domain_level_max + 1)
+ 		return -EINVAL;
+ #endif
+ 
+diff --git a/kernel/debug/kdb/kdb_io.c b/kernel/debug/kdb/kdb_io.c
+index 6735ac36b7187..a3b4b55d2e2e1 100644
+--- a/kernel/debug/kdb/kdb_io.c
++++ b/kernel/debug/kdb/kdb_io.c
+@@ -172,6 +172,33 @@ char kdb_getchar(void)
+ 	unreachable();
+ }
+ 
++/**
++ * kdb_position_cursor() - Place cursor in the correct horizontal position
++ * @prompt: Nil-terminated string containing the prompt string
++ * @buffer: Nil-terminated string containing the entire command line
++ * @cp: Cursor position, pointer the character in buffer where the cursor
++ *      should be positioned.
++ *
++ * The cursor is positioned by sending a carriage-return and then printing
++ * the content of the line until we reach the correct cursor position.
++ *
++ * There is some additional fine detail here.
++ *
++ * Firstly, even though kdb_printf() will correctly format zero-width fields
++ * we want the second call to kdb_printf() to be conditional. That keeps things
++ * a little cleaner when LOGGING=1.
++ *
++ * Secondly, we can't combine everything into one call to kdb_printf() since
++ * that renders into a fixed length buffer and the combined print could result
++ * in unwanted truncation.
++ */
++static void kdb_position_cursor(char *prompt, char *buffer, char *cp)
++{
++	kdb_printf("\r%s", kdb_prompt_str);
++	if (cp > buffer)
++		kdb_printf("%.*s", (int)(cp - buffer), buffer);
++}
++
+ /*
+  * kdb_read
+  *
+@@ -200,7 +227,6 @@ static char *kdb_read(char *buffer, size_t bufsize)
+ 						 * and null byte */
+ 	char *lastchar;
+ 	char *p_tmp;
+-	char tmp;
+ 	static char tmpbuffer[CMD_BUFLEN];
+ 	int len = strlen(buffer);
+ 	int len_tmp;
+@@ -237,12 +263,8 @@ static char *kdb_read(char *buffer, size_t bufsize)
+ 			}
+ 			*(--lastchar) = '\0';
+ 			--cp;
+-			kdb_printf("\b%s \r", cp);
+-			tmp = *cp;
+-			*cp = '\0';
+-			kdb_printf(kdb_prompt_str);
+-			kdb_printf("%s", buffer);
+-			*cp = tmp;
++			kdb_printf("\b%s ", cp);
++			kdb_position_cursor(kdb_prompt_str, buffer, cp);
+ 		}
+ 		break;
+ 	case 13: /* enter */
+@@ -259,19 +281,14 @@ static char *kdb_read(char *buffer, size_t bufsize)
+ 			memcpy(tmpbuffer, cp+1, lastchar - cp - 1);
+ 			memcpy(cp, tmpbuffer, lastchar - cp - 1);
+ 			*(--lastchar) = '\0';
+-			kdb_printf("%s \r", cp);
+-			tmp = *cp;
+-			*cp = '\0';
+-			kdb_printf(kdb_prompt_str);
+-			kdb_printf("%s", buffer);
+-			*cp = tmp;
++			kdb_printf("%s ", cp);
++			kdb_position_cursor(kdb_prompt_str, buffer, cp);
+ 		}
+ 		break;
+ 	case 1: /* Home */
+ 		if (cp > buffer) {
+-			kdb_printf("\r");
+-			kdb_printf(kdb_prompt_str);
+ 			cp = buffer;
++			kdb_position_cursor(kdb_prompt_str, buffer, cp);
+ 		}
+ 		break;
+ 	case 5: /* End */
+@@ -287,11 +304,10 @@ static char *kdb_read(char *buffer, size_t bufsize)
+ 		}
+ 		break;
+ 	case 14: /* Down */
+-		memset(tmpbuffer, ' ',
+-		       strlen(kdb_prompt_str) + (lastchar-buffer));
+-		*(tmpbuffer+strlen(kdb_prompt_str) +
+-		  (lastchar-buffer)) = '\0';
+-		kdb_printf("\r%s\r", tmpbuffer);
++	case 16: /* Up */
++		kdb_printf("\r%*c\r",
++			   (int)(strlen(kdb_prompt_str) + (lastchar - buffer)),
++			   ' ');
+ 		*lastchar = (char)key;
+ 		*(lastchar+1) = '\0';
+ 		return lastchar;
+@@ -301,15 +317,6 @@ static char *kdb_read(char *buffer, size_t bufsize)
+ 			++cp;
+ 		}
+ 		break;
+-	case 16: /* Up */
+-		memset(tmpbuffer, ' ',
+-		       strlen(kdb_prompt_str) + (lastchar-buffer));
+-		*(tmpbuffer+strlen(kdb_prompt_str) +
+-		  (lastchar-buffer)) = '\0';
+-		kdb_printf("\r%s\r", tmpbuffer);
+-		*lastchar = (char)key;
+-		*(lastchar+1) = '\0';
+-		return lastchar;
+ 	case 9: /* Tab */
+ 		if (tab < 2)
+ 			++tab;
+@@ -353,15 +360,25 @@ static char *kdb_read(char *buffer, size_t bufsize)
+ 			kdb_printf("\n");
+ 			kdb_printf(kdb_prompt_str);
+ 			kdb_printf("%s", buffer);
++			if (cp != lastchar)
++				kdb_position_cursor(kdb_prompt_str, buffer, cp);
+ 		} else if (tab != 2 && count > 0) {
+-			len_tmp = strlen(p_tmp);
+-			strncpy(p_tmp+len_tmp, cp, lastchar-cp+1);
+-			len_tmp = strlen(p_tmp);
+-			strncpy(cp, p_tmp+len, len_tmp-len + 1);
+-			len = len_tmp - len;
+-			kdb_printf("%s", cp);
+-			cp += len;
+-			lastchar += len;
++			/* How many new characters do we want from tmpbuffer? */
++			len_tmp = strlen(p_tmp) - len;
++			if (lastchar + len_tmp >= bufend)
++				len_tmp = bufend - lastchar;
++
++			if (len_tmp) {
++				/* + 1 ensures the '\0' is memmove'd */
++				memmove(cp+len_tmp, cp, (lastchar-cp) + 1);
++				memcpy(cp, p_tmp+len, len_tmp);
++				kdb_printf("%s", cp);
++				cp += len_tmp;
++				lastchar += len_tmp;
++				if (cp != lastchar)
++					kdb_position_cursor(kdb_prompt_str,
++							    buffer, cp);
++			}
+ 		}
+ 		kdb_nextline = 1; /* reset output line number */
+ 		break;
+@@ -372,13 +389,9 @@ static char *kdb_read(char *buffer, size_t bufsize)
+ 				memcpy(cp+1, tmpbuffer, lastchar - cp);
+ 				*++lastchar = '\0';
+ 				*cp = key;
+-				kdb_printf("%s\r", cp);
++				kdb_printf("%s", cp);
+ 				++cp;
+-				tmp = *cp;
+-				*cp = '\0';
+-				kdb_printf(kdb_prompt_str);
+-				kdb_printf("%s", buffer);
+-				*cp = tmp;
++				kdb_position_cursor(kdb_prompt_str, buffer, cp);
+ 			} else {
+ 				*++lastchar = '\0';
+ 				*cp++ = key;
+diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
+index 02236b13b3599..40c6512bc1632 100644
+--- a/kernel/irq/cpuhotplug.c
++++ b/kernel/irq/cpuhotplug.c
+@@ -69,6 +69,14 @@ static bool migrate_one_irq(struct irq_desc *desc)
+ 		return false;
+ 	}
+ 
++	/*
++	 * Complete an eventually pending irq move cleanup. If this
++	 * interrupt was moved in hard irq context, then the vectors need
++	 * to be cleaned up. It can't wait until this interrupt actually
++	 * happens and this CPU was involved.
++	 */
++	irq_force_complete_move(desc);
++
+ 	/*
+ 	 * No move required, if:
+ 	 * - Interrupt is per cpu
+@@ -87,14 +95,6 @@ static bool migrate_one_irq(struct irq_desc *desc)
+ 		return false;
+ 	}
+ 
+-	/*
+-	 * Complete an eventually pending irq move cleanup. If this
+-	 * interrupt was moved in hard irq context, then the vectors need
+-	 * to be cleaned up. It can't wait until this interrupt actually
+-	 * happens and this CPU was involved.
+-	 */
+-	irq_force_complete_move(desc);
+-
+ 	/*
+ 	 * If there is a setaffinity pending, then try to reuse the pending
+ 	 * mask, so the last change of the affinity does not get lost. If
+diff --git a/kernel/params.c b/kernel/params.c
+index 164d79330849a..eb00abef7076a 100644
+--- a/kernel/params.c
++++ b/kernel/params.c
+@@ -243,6 +243,24 @@ STANDARD_PARAM_DEF(ulong,	unsigned long,		"%lu",		kstrtoul);
+ STANDARD_PARAM_DEF(ullong,	unsigned long long,	"%llu",		kstrtoull);
+ STANDARD_PARAM_DEF(hexint,	unsigned int,		"%#08x", 	kstrtouint);
+ 
++int param_set_uint_minmax(const char *val, const struct kernel_param *kp,
++		unsigned int min, unsigned int max)
++{
++	unsigned int num;
++	int ret;
++
++	if (!val)
++		return -EINVAL;
++	ret = kstrtouint(val, 0, &num);
++	if (ret)
++		return ret;
++	if (num < min || num > max)
++		return -EINVAL;
++	*((unsigned int *)kp->arg) = num;
++	return 0;
++}
++EXPORT_SYMBOL_GPL(param_set_uint_minmax);
++
+ int param_set_charp(const char *val, const struct kernel_param *kp)
+ {
+ 	if (strlen(val) > 1024) {
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index ff2c6d3ba6c79..6eb0996ef859f 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -1217,7 +1217,7 @@ static void set_domain_attribute(struct sched_domain *sd,
+ 	} else
+ 		request = attr->relax_domain_level;
+ 
+-	if (sd->level > request) {
++	if (sd->level >= request) {
+ 		/* Turn off idle balance on this domain: */
+ 		sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
+ 	}
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 2df8e13a29e57..9a2c8727b033d 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1498,6 +1498,11 @@ static int rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
+  *
+  * As a safety measure we check to make sure the data pages have not
+  * been corrupted.
++ *
++ * Callers of this function need to guarantee that the list of pages doesn't get
++ * modified during the check. In particular, if it's possible that the function
++ * is invoked with concurrent readers which can swap in a new reader page then
++ * the caller should take cpu_buffer->reader_lock.
+  */
+ static int rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
+ {
+@@ -2222,8 +2227,12 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 		 */
+ 		synchronize_rcu();
+ 		for_each_buffer_cpu(buffer, cpu) {
++			unsigned long flags;
++
+ 			cpu_buffer = buffer->buffers[cpu];
++			raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ 			rb_check_pages(cpu_buffer);
++			raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ 		}
+ 		atomic_dec(&buffer->record_disabled);
+ 	}
+diff --git a/net/9p/client.c b/net/9p/client.c
+index cd85a4b6448b4..0fa324e8b2451 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -235,6 +235,8 @@ static int p9_fcall_init(struct p9_client *c, struct p9_fcall *fc,
+ 	if (!fc->sdata)
+ 		return -ENOMEM;
+ 	fc->capacity = alloc_msize;
++	fc->id = 0;
++	fc->tag = P9_NOTAG;
+ 	return 0;
+ }
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 0e2c433bebcd4..5e91496fd3a36 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -10217,8 +10217,9 @@ static void netdev_wait_allrefs(struct net_device *dev)
+ 			rebroadcast_time = jiffies;
+ 		}
+ 
++		rcu_barrier();
++
+ 		if (!wait) {
+-			rcu_barrier();
+ 			wait = WAIT_REFS_MIN_MSECS;
+ 		} else {
+ 			msleep(wait);
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index ad050f8476b8e..56deddeac1b0e 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -28,9 +28,9 @@
+ #include <net/tcp.h>
+ #include <net/sock_reuseport.h>
+ 
+-static u32 inet_ehashfn(const struct net *net, const __be32 laddr,
+-			const __u16 lport, const __be32 faddr,
+-			const __be16 fport)
++u32 inet_ehashfn(const struct net *net, const __be32 laddr,
++		 const __u16 lport, const __be32 faddr,
++		 const __be16 fport)
+ {
+ 	static u32 inet_ehash_secret __read_mostly;
+ 
+@@ -39,6 +39,7 @@ static u32 inet_ehashfn(const struct net *net, const __be32 laddr,
+ 	return __inet_ehashfn(laddr, lport, faddr, fport,
+ 			      inet_ehash_secret + net_hash_mix(net));
+ }
++EXPORT_SYMBOL_GPL(inet_ehashfn);
+ 
+ /* This function handles inet_sock, but also timewait and request sockets
+  * for IPv4/IPv6.
+@@ -252,20 +253,25 @@ static inline int compute_score(struct sock *sk, struct net *net,
+ 	return score;
+ }
+ 
+-static inline struct sock *lookup_reuseport(struct net *net, struct sock *sk,
+-					    struct sk_buff *skb, int doff,
+-					    __be32 saddr, __be16 sport,
+-					    __be32 daddr, unsigned short hnum)
++INDIRECT_CALLABLE_DECLARE(inet_ehashfn_t udp_ehashfn);
++
++struct sock *inet_lookup_reuseport(struct net *net, struct sock *sk,
++				   struct sk_buff *skb, int doff,
++				   __be32 saddr, __be16 sport,
++				   __be32 daddr, unsigned short hnum,
++				   inet_ehashfn_t *ehashfn)
+ {
+ 	struct sock *reuse_sk = NULL;
+ 	u32 phash;
+ 
+ 	if (sk->sk_reuseport) {
+-		phash = inet_ehashfn(net, daddr, hnum, saddr, sport);
++		phash = INDIRECT_CALL_2(ehashfn, udp_ehashfn, inet_ehashfn,
++					net, daddr, hnum, saddr, sport);
+ 		reuse_sk = reuseport_select_sock(sk, phash, skb, doff);
+ 	}
+ 	return reuse_sk;
+ }
++EXPORT_SYMBOL_GPL(inet_lookup_reuseport);
+ 
+ /*
+  * Here are some nice properties to exploit here. The BSD API
+@@ -290,8 +296,8 @@ static struct sock *inet_lhash2_lookup(struct net *net,
+ 		sk = (struct sock *)icsk;
+ 		score = compute_score(sk, net, hnum, daddr, dif, sdif);
+ 		if (score > hiscore) {
+-			result = lookup_reuseport(net, sk, skb, doff,
+-						  saddr, sport, daddr, hnum);
++			result = inet_lookup_reuseport(net, sk, skb, doff,
++						       saddr, sport, daddr, hnum, inet_ehashfn);
+ 			if (result)
+ 				return result;
+ 
+@@ -320,7 +326,8 @@ static inline struct sock *inet_lookup_run_bpf(struct net *net,
+ 	if (no_reuseport || IS_ERR_OR_NULL(sk))
+ 		return sk;
+ 
+-	reuse_sk = lookup_reuseport(net, sk, skb, doff, saddr, sport, daddr, hnum);
++	reuse_sk = inet_lookup_reuseport(net, sk, skb, doff, saddr, sport, daddr, hnum,
++					 inet_ehashfn);
+ 	if (reuse_sk)
+ 		sk = reuse_sk;
+ 	return sk;
+diff --git a/net/ipv4/netfilter/nf_tproxy_ipv4.c b/net/ipv4/netfilter/nf_tproxy_ipv4.c
+index 61cb2341f50fe..7c1a0cd9f4359 100644
+--- a/net/ipv4/netfilter/nf_tproxy_ipv4.c
++++ b/net/ipv4/netfilter/nf_tproxy_ipv4.c
+@@ -58,6 +58,8 @@ __be32 nf_tproxy_laddr4(struct sk_buff *skb, __be32 user_laddr, __be32 daddr)
+ 
+ 	laddr = 0;
+ 	indev = __in_dev_get_rcu(skb->dev);
++	if (!indev)
++		return daddr;
+ 
+ 	in_dev_for_each_ifa_rcu(ifa, indev) {
+ 		if (ifa->ifa_flags & IFA_F_SECONDARY)
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index cc409cc0789c8..b3b49d8b386d8 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -137,7 +137,8 @@ static int ip_rt_gc_timeout __read_mostly	= RT_GC_TIMEOUT;
+ static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie);
+ static unsigned int	 ipv4_default_advmss(const struct dst_entry *dst);
+ static unsigned int	 ipv4_mtu(const struct dst_entry *dst);
+-static struct dst_entry *ipv4_negative_advice(struct dst_entry *dst);
++static void		ipv4_negative_advice(struct sock *sk,
++					     struct dst_entry *dst);
+ static void		 ipv4_link_failure(struct sk_buff *skb);
+ static void		 ip_rt_update_pmtu(struct dst_entry *dst, struct sock *sk,
+ 					   struct sk_buff *skb, u32 mtu,
+@@ -866,22 +867,15 @@ static void ip_do_redirect(struct dst_entry *dst, struct sock *sk, struct sk_buf
+ 	__ip_do_redirect(rt, skb, &fl4, true);
+ }
+ 
+-static struct dst_entry *ipv4_negative_advice(struct dst_entry *dst)
++static void ipv4_negative_advice(struct sock *sk,
++				 struct dst_entry *dst)
+ {
+ 	struct rtable *rt = (struct rtable *)dst;
+-	struct dst_entry *ret = dst;
+ 
+-	if (rt) {
+-		if (dst->obsolete > 0) {
+-			ip_rt_put(rt);
+-			ret = NULL;
+-		} else if ((rt->rt_flags & RTCF_REDIRECTED) ||
+-			   rt->dst.expires) {
+-			ip_rt_put(rt);
+-			ret = NULL;
+-		}
+-	}
+-	return ret;
++	if ((dst->obsolete > 0) ||
++	    (rt->rt_flags & RTCF_REDIRECTED) ||
++	    rt->dst.expires)
++		sk_dst_reset(sk);
+ }
+ 
+ /*
+diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
+index 79f705450c162..be2c97e907ae2 100644
+--- a/net/ipv4/tcp_dctcp.c
++++ b/net/ipv4/tcp_dctcp.c
+@@ -55,7 +55,18 @@ struct dctcp {
+ };
+ 
+ static unsigned int dctcp_shift_g __read_mostly = 4; /* g = 1/2^4 */
+-module_param(dctcp_shift_g, uint, 0644);
++
++static int dctcp_shift_g_set(const char *val, const struct kernel_param *kp)
++{
++	return param_set_uint_minmax(val, kp, 0, 10);
++}
++
++static const struct kernel_param_ops dctcp_shift_g_ops = {
++	.set = dctcp_shift_g_set,
++	.get = param_get_uint,
++};
++
++module_param_cb(dctcp_shift_g, &dctcp_shift_g_ops, &dctcp_shift_g, 0644);
+ MODULE_PARM_DESC(dctcp_shift_g, "parameter g for updating dctcp_alpha");
+ 
+ static unsigned int dctcp_alpha_on_init __read_mostly = DCTCP_MAX_ALPHA;
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 85d8688933f3c..0e7179a19e224 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1787,7 +1787,7 @@ int tcp_v4_early_demux(struct sk_buff *skb)
+ 
+ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ {
+-	u32 limit, tail_gso_size, tail_gso_segs;
++	u32 tail_gso_size, tail_gso_segs;
+ 	struct skb_shared_info *shinfo;
+ 	const struct tcphdr *th;
+ 	struct tcphdr *thtail;
+@@ -1796,6 +1796,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ 	bool fragstolen;
+ 	u32 gso_segs;
+ 	u32 gso_size;
++	u64 limit;
+ 	int delta;
+ 
+ 	/* In case all data was pulled from skb frags (in __pskb_pull_tail()),
+@@ -1891,7 +1892,13 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ 	__skb_push(skb, hdrlen);
+ 
+ no_coalesce:
+-	limit = (u32)READ_ONCE(sk->sk_rcvbuf) + (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
++	/* sk->sk_backlog.len is reset only at the end of __release_sock().
++	 * Both sk->sk_backlog.len and sk->sk_rmem_alloc could reach
++	 * sk_rcvbuf in normal conditions.
++	 */
++	limit = ((u64)READ_ONCE(sk->sk_rcvbuf)) << 1;
++
++	limit += ((u32)READ_ONCE(sk->sk_sndbuf)) >> 1;
+ 
+ 	/* Only socket owner can try to collapse/prune rx queues
+ 	 * to reduce memory overhead, so add a little headroom here.
+@@ -1899,6 +1906,8 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ 	 */
+ 	limit += 64 * 1024;
+ 
++	limit = min_t(u64, limit, UINT_MAX);
++
+ 	if (unlikely(sk_add_backlog(sk, skb, limit))) {
+ 		bh_unlock_sock(sk);
+ 		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPBACKLOGDROP);
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 16ff3962b24db..da9015efb45e4 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -398,9 +398,9 @@ static int compute_score(struct sock *sk, struct net *net,
+ 	return score;
+ }
+ 
+-static u32 udp_ehashfn(const struct net *net, const __be32 laddr,
+-		       const __u16 lport, const __be32 faddr,
+-		       const __be16 fport)
++INDIRECT_CALLABLE_SCOPE
++u32 udp_ehashfn(const struct net *net, const __be32 laddr, const __u16 lport,
++		const __be32 faddr, const __be16 fport)
+ {
+ 	static u32 udp_ehash_secret __read_mostly;
+ 
+@@ -410,22 +410,6 @@ static u32 udp_ehashfn(const struct net *net, const __be32 laddr,
+ 			      udp_ehash_secret + net_hash_mix(net));
+ }
+ 
+-static struct sock *lookup_reuseport(struct net *net, struct sock *sk,
+-				     struct sk_buff *skb,
+-				     __be32 saddr, __be16 sport,
+-				     __be32 daddr, unsigned short hnum)
+-{
+-	struct sock *reuse_sk = NULL;
+-	u32 hash;
+-
+-	if (sk->sk_reuseport && sk->sk_state != TCP_ESTABLISHED) {
+-		hash = udp_ehashfn(net, daddr, hnum, saddr, sport);
+-		reuse_sk = reuseport_select_sock(sk, hash, skb,
+-						 sizeof(struct udphdr));
+-	}
+-	return reuse_sk;
+-}
+-
+ /* called with rcu_read_lock() */
+ static struct sock *udp4_lib_lookup2(struct net *net,
+ 				     __be32 saddr, __be16 sport,
+@@ -436,15 +420,28 @@ static struct sock *udp4_lib_lookup2(struct net *net,
+ {
+ 	struct sock *sk, *result;
+ 	int score, badness;
++	bool need_rescore;
+ 
+ 	result = NULL;
+ 	badness = 0;
+ 	udp_portaddr_for_each_entry_rcu(sk, &hslot2->head) {
+-		score = compute_score(sk, net, saddr, sport,
+-				      daddr, hnum, dif, sdif);
++		need_rescore = false;
++rescore:
++		score = compute_score(need_rescore ? result : sk, net, saddr,
++				      sport, daddr, hnum, dif, sdif);
+ 		if (score > badness) {
+ 			badness = score;
+-			result = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum);
++
++			if (need_rescore)
++				continue;
++
++			if (sk->sk_state == TCP_ESTABLISHED) {
++				result = sk;
++				continue;
++			}
++
++			result = inet_lookup_reuseport(net, sk, skb, sizeof(struct udphdr),
++						       saddr, sport, daddr, hnum, udp_ehashfn);
+ 			if (!result) {
+ 				result = sk;
+ 				continue;
+@@ -458,9 +455,14 @@ static struct sock *udp4_lib_lookup2(struct net *net,
+ 			if (IS_ERR(result))
+ 				continue;
+ 
+-			badness = compute_score(result, net, saddr, sport,
+-						daddr, hnum, dif, sdif);
+-
++			/* compute_score is too long of a function to be
++			 * inlined, and calling it again here yields
++			 * measureable overhead for some
++			 * workloads. Work around it by jumping
++			 * backwards to rescore 'result'.
++			 */
++			need_rescore = true;
++			goto rescore;
+ 		}
+ 	}
+ 	return result;
+@@ -483,7 +485,8 @@ static struct sock *udp4_lookup_run_bpf(struct net *net,
+ 	if (no_reuseport || IS_ERR_OR_NULL(sk))
+ 		return sk;
+ 
+-	reuse_sk = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum);
++	reuse_sk = inet_lookup_reuseport(net, sk, skb, sizeof(struct udphdr),
++					 saddr, sport, daddr, hnum, udp_ehashfn);
+ 	if (reuse_sk)
+ 		sk = reuse_sk;
+ 	return sk;
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index b4a5e01e12016..21b4fc835487b 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -41,6 +41,7 @@ u32 inet6_ehashfn(const struct net *net,
+ 	return __inet6_ehashfn(lhash, lport, fhash, fport,
+ 			       inet6_ehash_secret + net_hash_mix(net));
+ }
++EXPORT_SYMBOL_GPL(inet6_ehashfn);
+ 
+ /*
+  * Sockets in TCP_CLOSE state are _always_ taken out of the hash, so
+@@ -113,22 +114,27 @@ static inline int compute_score(struct sock *sk, struct net *net,
+ 	return score;
+ }
+ 
+-static inline struct sock *lookup_reuseport(struct net *net, struct sock *sk,
+-					    struct sk_buff *skb, int doff,
+-					    const struct in6_addr *saddr,
+-					    __be16 sport,
+-					    const struct in6_addr *daddr,
+-					    unsigned short hnum)
++INDIRECT_CALLABLE_DECLARE(inet6_ehashfn_t udp6_ehashfn);
++
++struct sock *inet6_lookup_reuseport(struct net *net, struct sock *sk,
++				    struct sk_buff *skb, int doff,
++				    const struct in6_addr *saddr,
++				    __be16 sport,
++				    const struct in6_addr *daddr,
++				    unsigned short hnum,
++				    inet6_ehashfn_t *ehashfn)
+ {
+ 	struct sock *reuse_sk = NULL;
+ 	u32 phash;
+ 
+ 	if (sk->sk_reuseport) {
+-		phash = inet6_ehashfn(net, daddr, hnum, saddr, sport);
++		phash = INDIRECT_CALL_INET(ehashfn, udp6_ehashfn, inet6_ehashfn,
++					   net, daddr, hnum, saddr, sport);
+ 		reuse_sk = reuseport_select_sock(sk, phash, skb, doff);
+ 	}
+ 	return reuse_sk;
+ }
++EXPORT_SYMBOL_GPL(inet6_lookup_reuseport);
+ 
+ /* called with rcu_read_lock() */
+ static struct sock *inet6_lhash2_lookup(struct net *net,
+@@ -146,8 +152,8 @@ static struct sock *inet6_lhash2_lookup(struct net *net,
+ 		sk = (struct sock *)icsk;
+ 		score = compute_score(sk, net, hnum, daddr, dif, sdif);
+ 		if (score > hiscore) {
+-			result = lookup_reuseport(net, sk, skb, doff,
+-						  saddr, sport, daddr, hnum);
++			result = inet6_lookup_reuseport(net, sk, skb, doff,
++							saddr, sport, daddr, hnum, inet6_ehashfn);
+ 			if (result)
+ 				return result;
+ 
+@@ -178,7 +184,8 @@ static inline struct sock *inet6_lookup_run_bpf(struct net *net,
+ 	if (no_reuseport || IS_ERR_OR_NULL(sk))
+ 		return sk;
+ 
+-	reuse_sk = lookup_reuseport(net, sk, skb, doff, saddr, sport, daddr, hnum);
++	reuse_sk = inet6_lookup_reuseport(net, sk, skb, doff,
++					  saddr, sport, daddr, hnum, inet6_ehashfn);
+ 	if (reuse_sk)
+ 		sk = reuse_sk;
+ 	return sk;
+diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c
+index 28e44782c94d1..6993675171556 100644
+--- a/net/ipv6/reassembly.c
++++ b/net/ipv6/reassembly.c
+@@ -363,7 +363,7 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
+ 	 * the source of the fragment, with the Pointer field set to zero.
+ 	 */
+ 	nexthdr = hdr->nexthdr;
+-	if (ipv6frag_thdr_truncated(skb, skb_transport_offset(skb), &nexthdr)) {
++	if (ipv6frag_thdr_truncated(skb, skb_network_offset(skb) + sizeof(struct ipv6hdr), &nexthdr)) {
+ 		__IP6_INC_STATS(net, __in6_dev_get_safely(skb->dev),
+ 				IPSTATS_MIB_INHDRERRORS);
+ 		icmpv6_param_prob(skb, ICMPV6_HDR_INCOMP, 0);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 2d53c362f309e..88f96241ca971 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -85,7 +85,8 @@ enum rt6_nud_state {
+ static struct dst_entry	*ip6_dst_check(struct dst_entry *dst, u32 cookie);
+ static unsigned int	 ip6_default_advmss(const struct dst_entry *dst);
+ static unsigned int	 ip6_mtu(const struct dst_entry *dst);
+-static struct dst_entry *ip6_negative_advice(struct dst_entry *);
++static void		ip6_negative_advice(struct sock *sk,
++					    struct dst_entry *dst);
+ static void		ip6_dst_destroy(struct dst_entry *);
+ static void		ip6_dst_ifdown(struct dst_entry *,
+ 				       struct net_device *dev, int how);
+@@ -2635,24 +2636,24 @@ static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie)
+ 	return dst_ret;
+ }
+ 
+-static struct dst_entry *ip6_negative_advice(struct dst_entry *dst)
++static void ip6_negative_advice(struct sock *sk,
++				struct dst_entry *dst)
+ {
+ 	struct rt6_info *rt = (struct rt6_info *) dst;
+ 
+-	if (rt) {
+-		if (rt->rt6i_flags & RTF_CACHE) {
+-			rcu_read_lock();
+-			if (rt6_check_expired(rt)) {
+-				rt6_remove_exception_rt(rt);
+-				dst = NULL;
+-			}
+-			rcu_read_unlock();
+-		} else {
+-			dst_release(dst);
+-			dst = NULL;
++	if (rt->rt6i_flags & RTF_CACHE) {
++		rcu_read_lock();
++		if (rt6_check_expired(rt)) {
++			/* counteract the dst_release() in sk_dst_reset() */
++			dst_hold(dst);
++			sk_dst_reset(sk);
++
++			rt6_remove_exception_rt(rt);
+ 		}
++		rcu_read_unlock();
++		return;
+ 	}
+-	return dst;
++	sk_dst_reset(sk);
+ }
+ 
+ static void ip6_link_failure(struct sk_buff *skb)
+@@ -4345,7 +4346,7 @@ static void rtmsg_to_fib6_config(struct net *net,
+ 		.fc_table = l3mdev_fib_table_by_index(net, rtmsg->rtmsg_ifindex) ?
+ 			 : RT6_TABLE_MAIN,
+ 		.fc_ifindex = rtmsg->rtmsg_ifindex,
+-		.fc_metric = rtmsg->rtmsg_metric ? : IP6_RT_PRIO_USER,
++		.fc_metric = rtmsg->rtmsg_metric,
+ 		.fc_expires = rtmsg->rtmsg_info,
+ 		.fc_dst_len = rtmsg->rtmsg_dst_len,
+ 		.fc_src_len = rtmsg->rtmsg_src_len,
+@@ -4375,6 +4376,9 @@ int ipv6_route_ioctl(struct net *net, unsigned int cmd, struct in6_rtmsg *rtmsg)
+ 	rtnl_lock();
+ 	switch (cmd) {
+ 	case SIOCADDRT:
++		/* Only do the default setting of fc_metric in route adding */
++		if (cfg.fc_metric == 0)
++			cfg.fc_metric = IP6_RT_PRIO_USER;
+ 		err = ip6_route_add(&cfg, GFP_KERNEL, NULL);
+ 		break;
+ 	case SIOCDELRT:
+diff --git a/net/ipv6/seg6.c b/net/ipv6/seg6.c
+index a8439fded12dc..77221f27262aa 100644
+--- a/net/ipv6/seg6.c
++++ b/net/ipv6/seg6.c
+@@ -490,6 +490,8 @@ int __init seg6_init(void)
+ #endif
+ #ifdef CONFIG_IPV6_SEG6_LWTUNNEL
+ out_unregister_genl:
++#endif
++#if IS_ENABLED(CONFIG_IPV6_SEG6_LWTUNNEL) || IS_ENABLED(CONFIG_IPV6_SEG6_HMAC)
+ 	genl_unregister_family(&seg6_genl_family);
+ #endif
+ out_unregister_pernet:
+@@ -503,8 +505,9 @@ void seg6_exit(void)
+ 	seg6_hmac_exit();
+ #endif
+ #ifdef CONFIG_IPV6_SEG6_LWTUNNEL
++	seg6_local_exit();
+ 	seg6_iptunnel_exit();
+ #endif
+-	unregister_pernet_subsys(&ip6_segments_ops);
+ 	genl_unregister_family(&seg6_genl_family);
++	unregister_pernet_subsys(&ip6_segments_ops);
+ }
+diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c
+index 552bce1fdfb94..2e2b94ae63552 100644
+--- a/net/ipv6/seg6_hmac.c
++++ b/net/ipv6/seg6_hmac.c
+@@ -355,6 +355,7 @@ static int seg6_hmac_init_algo(void)
+ 	struct crypto_shash *tfm;
+ 	struct shash_desc *shash;
+ 	int i, alg_count, cpu;
++	int ret = -ENOMEM;
+ 
+ 	alg_count = ARRAY_SIZE(hmac_algos);
+ 
+@@ -365,12 +366,14 @@ static int seg6_hmac_init_algo(void)
+ 		algo = &hmac_algos[i];
+ 		algo->tfms = alloc_percpu(struct crypto_shash *);
+ 		if (!algo->tfms)
+-			return -ENOMEM;
++			goto error_out;
+ 
+ 		for_each_possible_cpu(cpu) {
+ 			tfm = crypto_alloc_shash(algo->name, 0, 0);
+-			if (IS_ERR(tfm))
+-				return PTR_ERR(tfm);
++			if (IS_ERR(tfm)) {
++				ret = PTR_ERR(tfm);
++				goto error_out;
++			}
+ 			p_tfm = per_cpu_ptr(algo->tfms, cpu);
+ 			*p_tfm = tfm;
+ 		}
+@@ -382,18 +385,22 @@ static int seg6_hmac_init_algo(void)
+ 
+ 		algo->shashs = alloc_percpu(struct shash_desc *);
+ 		if (!algo->shashs)
+-			return -ENOMEM;
++			goto error_out;
+ 
+ 		for_each_possible_cpu(cpu) {
+ 			shash = kzalloc_node(shsize, GFP_KERNEL,
+ 					     cpu_to_node(cpu));
+ 			if (!shash)
+-				return -ENOMEM;
++				goto error_out;
+ 			*per_cpu_ptr(algo->shashs, cpu) = shash;
+ 		}
+ 	}
+ 
+ 	return 0;
++
++error_out:
++	seg6_hmac_exit();
++	return ret;
+ }
+ 
+ int __init seg6_hmac_init(void)
+@@ -413,22 +420,29 @@ int __net_init seg6_hmac_net_init(struct net *net)
+ void seg6_hmac_exit(void)
+ {
+ 	struct seg6_hmac_algo *algo = NULL;
++	struct crypto_shash *tfm;
++	struct shash_desc *shash;
+ 	int i, alg_count, cpu;
+ 
+ 	alg_count = ARRAY_SIZE(hmac_algos);
+ 	for (i = 0; i < alg_count; i++) {
+ 		algo = &hmac_algos[i];
+-		for_each_possible_cpu(cpu) {
+-			struct crypto_shash *tfm;
+-			struct shash_desc *shash;
+ 
+-			shash = *per_cpu_ptr(algo->shashs, cpu);
+-			kfree(shash);
+-			tfm = *per_cpu_ptr(algo->tfms, cpu);
+-			crypto_free_shash(tfm);
++		if (algo->shashs) {
++			for_each_possible_cpu(cpu) {
++				shash = *per_cpu_ptr(algo->shashs, cpu);
++				kfree(shash);
++			}
++			free_percpu(algo->shashs);
++		}
++
++		if (algo->tfms) {
++			for_each_possible_cpu(cpu) {
++				tfm = *per_cpu_ptr(algo->tfms, cpu);
++				crypto_free_shash(tfm);
++			}
++			free_percpu(algo->tfms);
+ 		}
+-		free_percpu(algo->tfms);
+-		free_percpu(algo->shashs);
+ 	}
+ }
+ EXPORT_SYMBOL(seg6_hmac_exit);
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 8c9672e7a7dd6..203a6d64d7e99 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -67,11 +67,12 @@ int udpv6_init_sock(struct sock *sk)
+ 	return 0;
+ }
+ 
+-static u32 udp6_ehashfn(const struct net *net,
+-			const struct in6_addr *laddr,
+-			const u16 lport,
+-			const struct in6_addr *faddr,
+-			const __be16 fport)
++INDIRECT_CALLABLE_SCOPE
++u32 udp6_ehashfn(const struct net *net,
++		 const struct in6_addr *laddr,
++		 const u16 lport,
++		 const struct in6_addr *faddr,
++		 const __be16 fport)
+ {
+ 	static u32 udp6_ehash_secret __read_mostly;
+ 	static u32 udp_ipv6_hash_secret __read_mostly;
+@@ -155,24 +156,6 @@ static int compute_score(struct sock *sk, struct net *net,
+ 	return score;
+ }
+ 
+-static struct sock *lookup_reuseport(struct net *net, struct sock *sk,
+-				     struct sk_buff *skb,
+-				     const struct in6_addr *saddr,
+-				     __be16 sport,
+-				     const struct in6_addr *daddr,
+-				     unsigned int hnum)
+-{
+-	struct sock *reuse_sk = NULL;
+-	u32 hash;
+-
+-	if (sk->sk_reuseport && sk->sk_state != TCP_ESTABLISHED) {
+-		hash = udp6_ehashfn(net, daddr, hnum, saddr, sport);
+-		reuse_sk = reuseport_select_sock(sk, hash, skb,
+-						 sizeof(struct udphdr));
+-	}
+-	return reuse_sk;
+-}
+-
+ /* called with rcu_read_lock() */
+ static struct sock *udp6_lib_lookup2(struct net *net,
+ 		const struct in6_addr *saddr, __be16 sport,
+@@ -182,15 +165,28 @@ static struct sock *udp6_lib_lookup2(struct net *net,
+ {
+ 	struct sock *sk, *result;
+ 	int score, badness;
++	bool need_rescore;
+ 
+ 	result = NULL;
+ 	badness = -1;
+ 	udp_portaddr_for_each_entry_rcu(sk, &hslot2->head) {
+-		score = compute_score(sk, net, saddr, sport,
+-				      daddr, hnum, dif, sdif);
++		need_rescore = false;
++rescore:
++		score = compute_score(need_rescore ? result : sk, net, saddr,
++				      sport, daddr, hnum, dif, sdif);
+ 		if (score > badness) {
+ 			badness = score;
+-			result = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum);
++
++			if (need_rescore)
++				continue;
++
++			if (sk->sk_state == TCP_ESTABLISHED) {
++				result = sk;
++				continue;
++			}
++
++			result = inet6_lookup_reuseport(net, sk, skb, sizeof(struct udphdr),
++							saddr, sport, daddr, hnum, udp6_ehashfn);
+ 			if (!result) {
+ 				result = sk;
+ 				continue;
+@@ -204,8 +200,14 @@ static struct sock *udp6_lib_lookup2(struct net *net,
+ 			if (IS_ERR(result))
+ 				continue;
+ 
+-			badness = compute_score(sk, net, saddr, sport,
+-						daddr, hnum, dif, sdif);
++			/* compute_score is too long of a function to be
++			 * inlined, and calling it again here yields
++			 * measureable overhead for some
++			 * workloads. Work around it by jumping
++			 * backwards to rescore 'result'.
++			 */
++			need_rescore = true;
++			goto rescore;
+ 		}
+ 	}
+ 	return result;
+@@ -230,7 +232,8 @@ static inline struct sock *udp6_lookup_run_bpf(struct net *net,
+ 	if (no_reuseport || IS_ERR_OR_NULL(sk))
+ 		return sk;
+ 
+-	reuse_sk = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum);
++	reuse_sk = inet6_lookup_reuseport(net, sk, skb, sizeof(struct udphdr),
++					  saddr, sport, daddr, hnum, udp6_ehashfn);
+ 	if (reuse_sk)
+ 		sk = reuse_sk;
+ 	return sk;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 858d09b54eaa4..f3cb5c9202760 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -6234,11 +6234,15 @@ static int nft_object_dump(struct sk_buff *skb, unsigned int attr,
+ 	return -1;
+ }
+ 
+-static const struct nft_object_type *__nft_obj_type_get(u32 objtype)
++static const struct nft_object_type *__nft_obj_type_get(u32 objtype, u8 family)
+ {
+ 	const struct nft_object_type *type;
+ 
+-	list_for_each_entry(type, &nf_tables_objects, list) {
++	list_for_each_entry_rcu(type, &nf_tables_objects, list) {
++		if (type->family != NFPROTO_UNSPEC &&
++		    type->family != family)
++			continue;
++
+ 		if (objtype == type->type)
+ 			return type;
+ 	}
+@@ -6246,13 +6250,17 @@ static const struct nft_object_type *__nft_obj_type_get(u32 objtype)
+ }
+ 
+ static const struct nft_object_type *
+-nft_obj_type_get(struct net *net, u32 objtype)
++nft_obj_type_get(struct net *net, u32 objtype, u8 family)
+ {
+ 	const struct nft_object_type *type;
+ 
+-	type = __nft_obj_type_get(objtype);
+-	if (type != NULL && try_module_get(type->owner))
++	rcu_read_lock();
++	type = __nft_obj_type_get(objtype, family);
++	if (type != NULL && try_module_get(type->owner)) {
++		rcu_read_unlock();
+ 		return type;
++	}
++	rcu_read_unlock();
+ 
+ 	lockdep_nfnl_nft_mutex_not_held();
+ #ifdef CONFIG_MODULES
+@@ -6343,7 +6351,7 @@ static int nf_tables_newobj(struct net *net, struct sock *nlsk,
+ 		if (nlh->nlmsg_flags & NLM_F_REPLACE)
+ 			return -EOPNOTSUPP;
+ 
+-		type = __nft_obj_type_get(objtype);
++		type = __nft_obj_type_get(objtype, family);
+ 		nft_ctx_init(&ctx, net, skb, nlh, family, table, NULL, nla);
+ 
+ 		return nf_tables_updobj(&ctx, type, nla[NFTA_OBJ_DATA], obj);
+@@ -6354,7 +6362,7 @@ static int nf_tables_newobj(struct net *net, struct sock *nlsk,
+ 	if (!nft_use_inc(&table->use))
+ 		return -EMFILE;
+ 
+-	type = nft_obj_type_get(net, objtype);
++	type = nft_obj_type_get(net, objtype, family);
+ 	if (IS_ERR(type)) {
+ 		err = PTR_ERR(type);
+ 		goto err_type;
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index 9d87606c76ff4..dc6af1919deaf 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -167,7 +167,9 @@ instance_destroy_rcu(struct rcu_head *head)
+ 	struct nfqnl_instance *inst = container_of(head, struct nfqnl_instance,
+ 						   rcu);
+ 
++	rcu_read_lock();
+ 	nfqnl_flush(inst, NULL, 0);
++	rcu_read_unlock();
+ 	kfree(inst);
+ 	module_put(THIS_MODULE);
+ }
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index 56f6c05362ae8..fa64b1b8ae918 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -44,36 +44,27 @@ nft_payload_copy_vlan(u32 *d, const struct sk_buff *skb, u8 offset, u8 len)
+ 	int mac_off = skb_mac_header(skb) - skb->data;
+ 	u8 *vlanh, *dst_u8 = (u8 *) d;
+ 	struct vlan_ethhdr veth;
+-	u8 vlan_hlen = 0;
+-
+-	if ((skb->protocol == htons(ETH_P_8021AD) ||
+-	     skb->protocol == htons(ETH_P_8021Q)) &&
+-	    offset >= VLAN_ETH_HLEN && offset < VLAN_ETH_HLEN + VLAN_HLEN)
+-		vlan_hlen += VLAN_HLEN;
+ 
+ 	vlanh = (u8 *) &veth;
+-	if (offset < VLAN_ETH_HLEN + vlan_hlen) {
++	if (offset < VLAN_ETH_HLEN) {
+ 		u8 ethlen = len;
+ 
+-		if (vlan_hlen &&
+-		    skb_copy_bits(skb, mac_off, &veth, VLAN_ETH_HLEN) < 0)
+-			return false;
+-		else if (!nft_payload_rebuild_vlan_hdr(skb, mac_off, &veth))
++		if (!nft_payload_rebuild_vlan_hdr(skb, mac_off, &veth))
+ 			return false;
+ 
+-		if (offset + len > VLAN_ETH_HLEN + vlan_hlen)
+-			ethlen -= offset + len - VLAN_ETH_HLEN - vlan_hlen;
++		if (offset + len > VLAN_ETH_HLEN)
++			ethlen -= offset + len - VLAN_ETH_HLEN;
+ 
+-		memcpy(dst_u8, vlanh + offset - vlan_hlen, ethlen);
++		memcpy(dst_u8, vlanh + offset, ethlen);
+ 
+ 		len -= ethlen;
+ 		if (len == 0)
+ 			return true;
+ 
+ 		dst_u8 += ethlen;
+-		offset = ETH_HLEN + vlan_hlen;
++		offset = ETH_HLEN;
+ 	} else {
+-		offset -= VLAN_HLEN + vlan_hlen;
++		offset -= VLAN_HLEN;
+ 	}
+ 
+ 	return skb_copy_bits(skb, offset + mac_off, dst_u8, len) == 0;
+diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c
+index 2ee50996da8cc..c8822fa8196d9 100644
+--- a/net/netfilter/nft_tunnel.c
++++ b/net/netfilter/nft_tunnel.c
+@@ -684,6 +684,7 @@ static const struct nft_object_ops nft_tunnel_obj_ops = {
+ 
+ static struct nft_object_type nft_tunnel_obj_type __read_mostly = {
+ 	.type		= NFT_OBJECT_TUNNEL,
++	.family		= NFPROTO_NETDEV,
+ 	.ops		= &nft_tunnel_obj_ops,
+ 	.maxattr	= NFTA_TUNNEL_KEY_MAX,
+ 	.policy		= nft_tunnel_key_policy,
+diff --git a/net/netrom/nr_route.c b/net/netrom/nr_route.c
+index 895702337c92e..9269b5e69b9a5 100644
+--- a/net/netrom/nr_route.c
++++ b/net/netrom/nr_route.c
+@@ -284,22 +284,14 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
+ 	return 0;
+ }
+ 
+-static inline void __nr_remove_node(struct nr_node *nr_node)
++static void nr_remove_node_locked(struct nr_node *nr_node)
+ {
++	lockdep_assert_held(&nr_node_list_lock);
++
+ 	hlist_del_init(&nr_node->node_node);
+ 	nr_node_put(nr_node);
+ }
+ 
+-#define nr_remove_node_locked(__node) \
+-	__nr_remove_node(__node)
+-
+-static void nr_remove_node(struct nr_node *nr_node)
+-{
+-	spin_lock_bh(&nr_node_list_lock);
+-	__nr_remove_node(nr_node);
+-	spin_unlock_bh(&nr_node_list_lock);
+-}
+-
+ static inline void __nr_remove_neigh(struct nr_neigh *nr_neigh)
+ {
+ 	hlist_del_init(&nr_neigh->neigh_node);
+@@ -338,6 +330,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
+ 		return -EINVAL;
+ 	}
+ 
++	spin_lock_bh(&nr_node_list_lock);
+ 	nr_node_lock(nr_node);
+ 	for (i = 0; i < nr_node->count; i++) {
+ 		if (nr_node->routes[i].neighbour == nr_neigh) {
+@@ -351,7 +344,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
+ 			nr_node->count--;
+ 
+ 			if (nr_node->count == 0) {
+-				nr_remove_node(nr_node);
++				nr_remove_node_locked(nr_node);
+ 			} else {
+ 				switch (i) {
+ 				case 0:
+@@ -365,12 +358,14 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
+ 				nr_node_put(nr_node);
+ 			}
+ 			nr_node_unlock(nr_node);
++			spin_unlock_bh(&nr_node_list_lock);
+ 
+ 			return 0;
+ 		}
+ 	}
+ 	nr_neigh_put(nr_neigh);
+ 	nr_node_unlock(nr_node);
++	spin_unlock_bh(&nr_node_list_lock);
+ 	nr_node_put(nr_node);
+ 
+ 	return -EINVAL;
+diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
+index d8002065baaef..3182b4228cfa4 100644
+--- a/net/nfc/nci/core.c
++++ b/net/nfc/nci/core.c
+@@ -1452,6 +1452,19 @@ int nci_core_ntf_packet(struct nci_dev *ndev, __u16 opcode,
+ 				 ndev->ops->n_core_ops);
+ }
+ 
++static bool nci_valid_size(struct sk_buff *skb)
++{
++	unsigned int hdr_size = NCI_CTRL_HDR_SIZE;
++	BUILD_BUG_ON(NCI_CTRL_HDR_SIZE != NCI_DATA_HDR_SIZE);
++
++	if (skb->len < hdr_size ||
++	    !nci_plen(skb->data) ||
++	    skb->len < hdr_size + nci_plen(skb->data)) {
++		return false;
++	}
++	return true;
++}
++
+ /* ---- NCI TX Data worker thread ---- */
+ 
+ static void nci_tx_work(struct work_struct *work)
+@@ -1502,9 +1515,9 @@ static void nci_rx_work(struct work_struct *work)
+ 		nfc_send_to_raw_sock(ndev->nfc_dev, skb,
+ 				     RAW_PAYLOAD_NCI, NFC_DIRECTION_RX);
+ 
+-		if (!nci_plen(skb->data)) {
++		if (!nci_valid_size(skb)) {
+ 			kfree_skb(skb);
+-			break;
++			continue;
+ 		}
+ 
+ 		/* Process frame */
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 80fee9d118eec..4095456f413df 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -923,6 +923,12 @@ static void do_output(struct datapath *dp, struct sk_buff *skb, int out_port,
+ 				pskb_trim(skb, ovs_mac_header_len(key));
+ 		}
+ 
++		/* Need to set the pkt_type to involve the routing layer.  The
++		 * packet movement through the OVS datapath doesn't generally
++		 * use routing, but this is needed for tunnel cases.
++		 */
++		skb->pkt_type = PACKET_OUTGOING;
++
+ 		if (likely(!mru ||
+ 		           (skb->len <= mru + vport->dev->hard_header_len))) {
+ 			ovs_vport_send(vport, skb, ovs_key_mac_proto(key));
+diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
+index c9ba61413c98b..9bad601c7fe82 100644
+--- a/net/openvswitch/flow.c
++++ b/net/openvswitch/flow.c
+@@ -412,7 +412,6 @@ static int parse_icmpv6(struct sk_buff *skb, struct sw_flow_key *key,
+ 	 */
+ 	key->tp.src = htons(icmp->icmp6_type);
+ 	key->tp.dst = htons(icmp->icmp6_code);
+-	memset(&key->ipv6.nd, 0, sizeof(key->ipv6.nd));
+ 
+ 	if (icmp->icmp6_code == 0 &&
+ 	    (icmp->icmp6_type == NDISC_NEIGHBOUR_SOLICITATION ||
+@@ -421,6 +420,8 @@ static int parse_icmpv6(struct sk_buff *skb, struct sw_flow_key *key,
+ 		struct nd_msg *nd;
+ 		int offset;
+ 
++		memset(&key->ipv6.nd, 0, sizeof(key->ipv6.nd));
++
+ 		/* In order to process neighbor discovery options, we need the
+ 		 * entire packet.
+ 		 */
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index db5d16c5d5b11..8e52f09493053 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2487,8 +2487,7 @@ static void tpacket_destruct_skb(struct sk_buff *skb)
+ 		ts = __packet_set_timestamp(po, ph, skb);
+ 		__packet_set_status(po, ph, TP_STATUS_AVAILABLE | ts);
+ 
+-		if (!packet_read_pending(&po->tx_ring))
+-			complete(&po->skb_completion);
++		complete(&po->skb_completion);
+ 	}
+ 
+ 	sock_wfree(skb);
+diff --git a/net/qrtr/af_qrtr.c b/net/qrtr/af_qrtr.c
+index 71c2295d4a573..29c0886eb9efe 100644
+--- a/net/qrtr/af_qrtr.c
++++ b/net/qrtr/af_qrtr.c
+@@ -1279,13 +1279,19 @@ static int __init qrtr_proto_init(void)
+ 		return rc;
+ 
+ 	rc = sock_register(&qrtr_family);
+-	if (rc) {
+-		proto_unregister(&qrtr_proto);
+-		return rc;
+-	}
++	if (rc)
++		goto err_proto;
+ 
+-	qrtr_ns_init();
++	rc = qrtr_ns_init();
++	if (rc)
++		goto err_sock;
+ 
++	return 0;
++
++err_sock:
++	sock_unregister(qrtr_family.family);
++err_proto:
++	proto_unregister(&qrtr_proto);
+ 	return rc;
+ }
+ postcore_initcall(qrtr_proto_init);
+diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c
+index c92dd960bfefa..1da34d54092be 100644
+--- a/net/qrtr/ns.c
++++ b/net/qrtr/ns.c
+@@ -771,7 +771,7 @@ static void qrtr_ns_data_ready(struct sock *sk)
+ 	queue_work(qrtr_ns.workqueue, &qrtr_ns.work);
+ }
+ 
+-void qrtr_ns_init(void)
++int qrtr_ns_init(void)
+ {
+ 	struct sockaddr_qrtr sq;
+ 	int ret;
+@@ -782,7 +782,7 @@ void qrtr_ns_init(void)
+ 	ret = sock_create_kern(&init_net, AF_QIPCRTR, SOCK_DGRAM,
+ 			       PF_QIPCRTR, &qrtr_ns.sock);
+ 	if (ret < 0)
+-		return;
++		return ret;
+ 
+ 	ret = kernel_getsockname(qrtr_ns.sock, (struct sockaddr *)&sq);
+ 	if (ret < 0) {
+@@ -815,12 +815,31 @@ void qrtr_ns_init(void)
+ 	if (ret < 0)
+ 		goto err_wq;
+ 
+-	return;
++	/* As the qrtr ns socket owner and creator is the same module, we have
++	 * to decrease the qrtr module reference count to guarantee that it
++	 * remains zero after the ns socket is created, otherwise, executing
++	 * "rmmod" command is unable to make the qrtr module deleted after the
++	 *  qrtr module is inserted successfully.
++	 *
++	 * However, the reference count is increased twice in
++	 * sock_create_kern(): one is to increase the reference count of owner
++	 * of qrtr socket's proto_ops struct; another is to increment the
++	 * reference count of owner of qrtr proto struct. Therefore, we must
++	 * decrement the module reference count twice to ensure that it keeps
++	 * zero after server's listening socket is created. Of course, we
++	 * must bump the module reference count twice as well before the socket
++	 * is closed.
++	 */
++	module_put(qrtr_ns.sock->ops->owner);
++	module_put(qrtr_ns.sock->sk->sk_prot_creator->owner);
++
++	return 0;
+ 
+ err_wq:
+ 	destroy_workqueue(qrtr_ns.workqueue);
+ err_sock:
+ 	sock_release(qrtr_ns.sock);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(qrtr_ns_init);
+ 
+@@ -828,6 +847,15 @@ void qrtr_ns_remove(void)
+ {
+ 	cancel_work_sync(&qrtr_ns.work);
+ 	destroy_workqueue(qrtr_ns.workqueue);
++
++	/* sock_release() expects the two references that were put during
++	 * qrtr_ns_init(). This function is only called during module remove,
++	 * so try_stop_module() has already set the refcnt to 0. Use
++	 * __module_get() instead of try_module_get() to successfully take two
++	 * references.
++	 */
++	__module_get(qrtr_ns.sock->ops->owner);
++	__module_get(qrtr_ns.sock->sk->sk_prot_creator->owner);
+ 	sock_release(qrtr_ns.sock);
+ }
+ EXPORT_SYMBOL_GPL(qrtr_ns_remove);
+diff --git a/net/qrtr/qrtr.h b/net/qrtr/qrtr.h
+index dc2b67f179271..3f2d28696062a 100644
+--- a/net/qrtr/qrtr.h
++++ b/net/qrtr/qrtr.h
+@@ -29,7 +29,7 @@ void qrtr_endpoint_unregister(struct qrtr_endpoint *ep);
+ 
+ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len);
+ 
+-void qrtr_ns_init(void);
++int qrtr_ns_init(void);
+ 
+ void qrtr_ns_remove(void);
+ 
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index 406ff7f8b156e..784c8b24f1640 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -1126,17 +1126,11 @@ gss_read_verf(struct rpc_gss_wire_cred *gc,
+ 
+ static void gss_free_in_token_pages(struct gssp_in_token *in_token)
+ {
+-	u32 inlen;
+ 	int i;
+ 
+ 	i = 0;
+-	inlen = in_token->page_len;
+-	while (inlen) {
+-		if (in_token->pages[i])
+-			put_page(in_token->pages[i]);
+-		inlen -= inlen > PAGE_SIZE ? PAGE_SIZE : inlen;
+-	}
+-
++	while (in_token->pages[i])
++		put_page(in_token->pages[i++]);
+ 	kfree(in_token->pages);
+ 	in_token->pages = NULL;
+ }
+@@ -1162,7 +1156,7 @@ static int gss_read_proxy_verf(struct svc_rqst *rqstp,
+ 	}
+ 
+ 	pages = DIV_ROUND_UP(inlen, PAGE_SIZE);
+-	in_token->pages = kcalloc(pages, sizeof(struct page *), GFP_KERNEL);
++	in_token->pages = kcalloc(pages + 1, sizeof(struct page *), GFP_KERNEL);
+ 	if (!in_token->pages) {
+ 		kfree(in_handle->data);
+ 		return SVC_DENIED;
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index a5ce9b937c42e..196a3b11d1509 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -970,6 +970,7 @@ struct rpc_clnt *rpc_bind_new_program(struct rpc_clnt *old,
+ 		.authflavor	= old->cl_auth->au_flavor,
+ 		.cred		= old->cl_cred,
+ 		.stats		= old->cl_stats,
++		.timeout	= old->cl_timeout,
+ 	};
+ 	struct rpc_clnt *clnt;
+ 	int err;
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index 495ebe7fad6dd..cfe8b911ca013 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -1248,8 +1248,6 @@ svc_generic_init_request(struct svc_rqst *rqstp,
+ 	if (rqstp->rq_proc >= versp->vs_nproc)
+ 		goto err_bad_proc;
+ 	rqstp->rq_procinfo = procp = &versp->vs_proc[rqstp->rq_proc];
+-	if (!procp)
+-		goto err_bad_proc;
+ 
+ 	/* Initialize storage for argp and resp */
+ 	memset(rqstp->rq_argp, 0, procp->pc_argsize);
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index d015576f3081a..9e9df38b29f74 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -268,7 +268,11 @@ rpcrdma_cm_event_handler(struct rdma_cm_id *id, struct rdma_cm_event *event)
+ 	case RDMA_CM_EVENT_DEVICE_REMOVAL:
+ 		pr_info("rpcrdma: removing device %s for %pISpc\n",
+ 			ep->re_id->device->name, sap);
+-		fallthrough;
++		switch (xchg(&ep->re_connect_status, -ENODEV)) {
++		case 0: goto wake_connect_worker;
++		case 1: goto disconnected;
++		}
++		return 0;
+ 	case RDMA_CM_EVENT_ADDR_CHANGE:
+ 		ep->re_connect_status = -ENODEV;
+ 		goto disconnected;
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index ae5b5380f0f03..0666f981618a2 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -3166,24 +3166,6 @@ void cleanup_socket_xprt(void)
+ 	xprt_unregister_transport(&xs_bc_tcp_transport);
+ }
+ 
+-static int param_set_uint_minmax(const char *val,
+-		const struct kernel_param *kp,
+-		unsigned int min, unsigned int max)
+-{
+-	unsigned int num;
+-	int ret;
+-
+-	if (!val)
+-		return -EINVAL;
+-	ret = kstrtouint(val, 0, &num);
+-	if (ret)
+-		return ret;
+-	if (num < min || num > max)
+-		return -EINVAL;
+-	*((unsigned int *)kp->arg) = num;
+-	return 0;
+-}
+-
+ static int param_set_portnr(const char *val, const struct kernel_param *kp)
+ {
+ 	return param_set_uint_minmax(val, kp,
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 2bbacd9b97e56..ebf856cf821da 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -633,9 +633,17 @@ struct tls_context *tls_ctx_create(struct sock *sk)
+ 		return NULL;
+ 
+ 	mutex_init(&ctx->tx_lock);
+-	rcu_assign_pointer(icsk->icsk_ulp_data, ctx);
+ 	ctx->sk_proto = READ_ONCE(sk->sk_prot);
+ 	ctx->sk = sk;
++	/* Release semantic of rcu_assign_pointer() ensures that
++	 * ctx->sk_proto is visible before changing sk->sk_prot in
++	 * update_sk_prot(), and prevents reading uninitialized value in
++	 * tls_{getsockopt, setsockopt}. Note that we do not need a
++	 * read barrier in tls_{getsockopt,setsockopt} as there is an
++	 * address dependency between sk->sk_proto->{getsockopt,setsockopt}
++	 * and ctx->sk_proto.
++	 */
++	rcu_assign_pointer(icsk->icsk_ulp_data, ctx);
+ 	return ctx;
+ }
+ 
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 224b1fdc82279..3ab726a668e8a 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -1918,7 +1918,7 @@ static int unix_stream_sendmsg(struct socket *sock, struct msghdr *msg,
+ 			goto out_err;
+ 	}
+ 
+-	if (sk->sk_shutdown & SEND_SHUTDOWN)
++	if (READ_ONCE(sk->sk_shutdown) & SEND_SHUTDOWN)
+ 		goto pipe_err;
+ 
+ 	while (sent < len) {
+diff --git a/net/wireless/trace.h b/net/wireless/trace.h
+index edc824c103e83..06e81d1efc921 100644
+--- a/net/wireless/trace.h
++++ b/net/wireless/trace.h
+@@ -1660,7 +1660,7 @@ TRACE_EVENT(rdev_return_void_tx_rx,
+ 
+ DECLARE_EVENT_CLASS(tx_rx_evt,
+ 	TP_PROTO(struct wiphy *wiphy, u32 tx, u32 rx),
+-	TP_ARGS(wiphy, rx, tx),
++	TP_ARGS(wiphy, tx, rx),
+ 	TP_STRUCT__entry(
+ 		WIPHY_ENTRY
+ 		__field(u32, tx)
+@@ -1677,7 +1677,7 @@ DECLARE_EVENT_CLASS(tx_rx_evt,
+ 
+ DEFINE_EVENT(tx_rx_evt, rdev_set_antenna,
+ 	TP_PROTO(struct wiphy *wiphy, u32 tx, u32 rx),
+-	TP_ARGS(wiphy, rx, tx)
++	TP_ARGS(wiphy, tx, rx)
+ );
+ 
+ DECLARE_EVENT_CLASS(wiphy_netdev_id_evt,
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 664d55957feb5..fadb309b25b40 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3807,15 +3807,10 @@ static void xfrm_link_failure(struct sk_buff *skb)
+ 	/* Impossible. Such dst must be popped before reaches point of failure. */
+ }
+ 
+-static struct dst_entry *xfrm_negative_advice(struct dst_entry *dst)
++static void xfrm_negative_advice(struct sock *sk, struct dst_entry *dst)
+ {
+-	if (dst) {
+-		if (dst->obsolete) {
+-			dst_release(dst);
+-			dst = NULL;
+-		}
+-	}
+-	return dst;
++	if (dst->obsolete)
++		sk_dst_reset(sk);
+ }
+ 
+ static void xfrm_init_pmtu(struct xfrm_dst **bundle, int nr)
+diff --git a/scripts/kconfig/symbol.c b/scripts/kconfig/symbol.c
+index a2056fa80de2b..ff4c5d314b4d7 100644
+--- a/scripts/kconfig/symbol.c
++++ b/scripts/kconfig/symbol.c
+@@ -13,18 +13,21 @@
+ 
+ struct symbol symbol_yes = {
+ 	.name = "y",
++	.type = S_TRISTATE,
+ 	.curr = { "y", yes },
+ 	.flags = SYMBOL_CONST|SYMBOL_VALID,
+ };
+ 
+ struct symbol symbol_mod = {
+ 	.name = "m",
++	.type = S_TRISTATE,
+ 	.curr = { "m", mod },
+ 	.flags = SYMBOL_CONST|SYMBOL_VALID,
+ };
+ 
+ struct symbol symbol_no = {
+ 	.name = "n",
++	.type = S_TRISTATE,
+ 	.curr = { "n", no },
+ 	.flags = SYMBOL_CONST|SYMBOL_VALID,
+ };
+@@ -776,8 +779,7 @@ const char *sym_get_string_value(struct symbol *sym)
+ 		case no:
+ 			return "n";
+ 		case mod:
+-			sym_calc_value(modules_sym);
+-			return (modules_sym->curr.tri == no) ? "n" : "m";
++			return "m";
+ 		case yes:
+ 			return "y";
+ 		}
+diff --git a/sound/core/init.c b/sound/core/init.c
+index 9f5270c90a10a..b6dd43005c272 100644
+--- a/sound/core/init.c
++++ b/sound/core/init.c
+@@ -206,8 +206,8 @@ int snd_card_new(struct device *parent, int idx, const char *xid,
+ 	card->number = idx;
+ #ifdef MODULE
+ 	WARN_ON(!module);
+-	card->module = module;
+ #endif
++	card->module = module;
+ 	INIT_LIST_HEAD(&card->devices);
+ 	init_rwsem(&card->controls_rwsem);
+ 	rwlock_init(&card->ctl_files_rwlock);
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index 764d2b19344e3..708c9a46eefe3 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -553,6 +553,16 @@ static int snd_timer_start1(struct snd_timer_instance *timeri,
+ 		goto unlock;
+ 	}
+ 
++	/* check the actual time for the start tick;
++	 * bail out as error if it's way too low (< 100us)
++	 */
++	if (start) {
++		if ((u64)snd_timer_hw_resolution(timer) * ticks < 100000) {
++			result = -EINVAL;
++			goto unlock;
++		}
++	}
++
+ 	if (start)
+ 		timeri->ticks = timeri->cticks = ticks;
+ 	else if (!timeri->cticks)
+diff --git a/sound/soc/codecs/da7219-aad.c b/sound/soc/codecs/da7219-aad.c
+index b6030709b6b6d..2fb26dd84bc72 100644
+--- a/sound/soc/codecs/da7219-aad.c
++++ b/sound/soc/codecs/da7219-aad.c
+@@ -629,8 +629,10 @@ static struct da7219_aad_pdata *da7219_aad_fw_to_pdata(struct device *dev)
+ 		return NULL;
+ 
+ 	aad_pdata = devm_kzalloc(dev, sizeof(*aad_pdata), GFP_KERNEL);
+-	if (!aad_pdata)
++	if (!aad_pdata) {
++		fwnode_handle_put(aad_np);
+ 		return NULL;
++	}
+ 
+ 	aad_pdata->irq = i2c->irq;
+ 
+@@ -705,6 +707,8 @@ static struct da7219_aad_pdata *da7219_aad_fw_to_pdata(struct device *dev)
+ 	else
+ 		aad_pdata->adc_1bit_rpt = DA7219_AAD_ADC_1BIT_RPT_1;
+ 
++	fwnode_handle_put(aad_np);
++
+ 	return aad_pdata;
+ }
+ 
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index 5db63ef33f1a2..ac403827a2290 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -414,6 +414,7 @@ struct rt5645_priv {
+ 	struct regmap *regmap;
+ 	struct i2c_client *i2c;
+ 	struct gpio_desc *gpiod_hp_det;
++	struct gpio_desc *gpiod_cbj_sleeve;
+ 	struct snd_soc_jack *hp_jack;
+ 	struct snd_soc_jack *mic_jack;
+ 	struct snd_soc_jack *btn_jack;
+@@ -3169,6 +3170,9 @@ static int rt5645_jack_detect(struct snd_soc_component *component, int jack_inse
+ 		regmap_update_bits(rt5645->regmap, RT5645_IN1_CTRL2,
+ 			RT5645_CBJ_MN_JD, 0);
+ 
++		if (rt5645->gpiod_cbj_sleeve)
++			gpiod_set_value(rt5645->gpiod_cbj_sleeve, 1);
++
+ 		msleep(600);
+ 		regmap_read(rt5645->regmap, RT5645_IN1_CTRL3, &val);
+ 		val &= 0x7;
+@@ -3185,6 +3189,8 @@ static int rt5645_jack_detect(struct snd_soc_component *component, int jack_inse
+ 			snd_soc_dapm_disable_pin(dapm, "Mic Det Power");
+ 			snd_soc_dapm_sync(dapm);
+ 			rt5645->jack_type = SND_JACK_HEADPHONE;
++			if (rt5645->gpiod_cbj_sleeve)
++				gpiod_set_value(rt5645->gpiod_cbj_sleeve, 0);
+ 		}
+ 		if (rt5645->pdata.level_trigger_irq)
+ 			regmap_update_bits(rt5645->regmap, RT5645_IRQ_CTRL2,
+@@ -3210,6 +3216,9 @@ static int rt5645_jack_detect(struct snd_soc_component *component, int jack_inse
+ 		if (rt5645->pdata.level_trigger_irq)
+ 			regmap_update_bits(rt5645->regmap, RT5645_IRQ_CTRL2,
+ 				RT5645_JD_1_1_MASK, RT5645_JD_1_1_INV);
++
++		if (rt5645->gpiod_cbj_sleeve)
++			gpiod_set_value(rt5645->gpiod_cbj_sleeve, 0);
+ 	}
+ 
+ 	return rt5645->jack_type;
+@@ -3882,6 +3891,16 @@ static int rt5645_i2c_probe(struct i2c_client *i2c,
+ 			return ret;
+ 	}
+ 
++	rt5645->gpiod_cbj_sleeve = devm_gpiod_get_optional(&i2c->dev, "cbj-sleeve",
++							   GPIOD_OUT_LOW);
++
++	if (IS_ERR(rt5645->gpiod_cbj_sleeve)) {
++		ret = PTR_ERR(rt5645->gpiod_cbj_sleeve);
++		dev_info(&i2c->dev, "failed to initialize gpiod, ret=%d\n", ret);
++		if (ret != -ENOENT)
++			return ret;
++	}
++
+ 	for (i = 0; i < ARRAY_SIZE(rt5645->supplies); i++)
+ 		rt5645->supplies[i].supply = rt5645_supply_names[i];
+ 
+@@ -4125,6 +4144,9 @@ static int rt5645_i2c_remove(struct i2c_client *i2c)
+ 	cancel_delayed_work_sync(&rt5645->jack_detect_work);
+ 	cancel_delayed_work_sync(&rt5645->rcclock_work);
+ 
++	if (rt5645->gpiod_cbj_sleeve)
++		gpiod_set_value(rt5645->gpiod_cbj_sleeve, 0);
++
+ 	regulator_bulk_disable(ARRAY_SIZE(rt5645->supplies), rt5645->supplies);
+ 
+ 	return 0;
+@@ -4142,6 +4164,9 @@ static void rt5645_i2c_shutdown(struct i2c_client *i2c)
+ 		0);
+ 	msleep(20);
+ 	regmap_write(rt5645->regmap, RT5645_RESET, 0);
++
++	if (rt5645->gpiod_cbj_sleeve)
++		gpiod_set_value(rt5645->gpiod_cbj_sleeve, 0);
+ }
+ 
+ static struct i2c_driver rt5645_i2c_driver = {
+diff --git a/sound/soc/codecs/rt715-sdw.c b/sound/soc/codecs/rt715-sdw.c
+index 361a90ae594cd..c0f09c3bcb6e3 100644
+--- a/sound/soc/codecs/rt715-sdw.c
++++ b/sound/soc/codecs/rt715-sdw.c
+@@ -110,6 +110,7 @@ static bool rt715_readable_register(struct device *dev, unsigned int reg)
+ 	case 0x839d:
+ 	case 0x83a7:
+ 	case 0x83a9:
++	case 0x752001:
+ 	case 0x752039:
+ 		return true;
+ 	default:
+diff --git a/sound/soc/codecs/tas2552.c b/sound/soc/codecs/tas2552.c
+index bd00c35116cd5..f045876c12f17 100644
+--- a/sound/soc/codecs/tas2552.c
++++ b/sound/soc/codecs/tas2552.c
+@@ -2,7 +2,8 @@
+ /*
+  * tas2552.c - ALSA SoC Texas Instruments TAS2552 Mono Audio Amplifier
+  *
+- * Copyright (C) 2014 Texas Instruments Incorporated -  https://www.ti.com
++ * Copyright (C) 2014 - 2024 Texas Instruments Incorporated -
++ *	https://www.ti.com
+  *
+  * Author: Dan Murphy <dmurphy@ti.com>
+  */
+@@ -119,12 +120,14 @@ static const struct snd_soc_dapm_widget tas2552_dapm_widgets[] =
+ 			 &tas2552_input_mux_control),
+ 
+ 	SND_SOC_DAPM_AIF_IN("DAC IN", "DAC Playback", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_OUT("ASI OUT", "DAC Capture", 0, SND_SOC_NOPM, 0, 0),
+ 	SND_SOC_DAPM_DAC("DAC", NULL, SND_SOC_NOPM, 0, 0),
+ 	SND_SOC_DAPM_OUT_DRV("ClassD", TAS2552_CFG_2, 7, 0, NULL, 0),
+ 	SND_SOC_DAPM_SUPPLY("PLL", TAS2552_CFG_2, 3, 0, NULL, 0),
+ 	SND_SOC_DAPM_POST("Post Event", tas2552_post_event),
+ 
+-	SND_SOC_DAPM_OUTPUT("OUT")
++	SND_SOC_DAPM_OUTPUT("OUT"),
++	SND_SOC_DAPM_INPUT("DMIC")
+ };
+ 
+ static const struct snd_soc_dapm_route tas2552_audio_map[] = {
+@@ -134,6 +137,7 @@ static const struct snd_soc_dapm_route tas2552_audio_map[] = {
+ 	{"ClassD", NULL, "Input selection"},
+ 	{"OUT", NULL, "ClassD"},
+ 	{"ClassD", NULL, "PLL"},
++	{"ASI OUT", NULL, "DMIC"}
+ };
+ 
+ #ifdef CONFIG_PM
+@@ -538,6 +542,13 @@ static struct snd_soc_dai_driver tas2552_dai[] = {
+ 			.rates = SNDRV_PCM_RATE_8000_192000,
+ 			.formats = TAS2552_FORMATS,
+ 		},
++		.capture = {
++			.stream_name = "Capture",
++			.channels_min = 2,
++			.channels_max = 2,
++			.rates = SNDRV_PCM_RATE_8000_192000,
++			.formats = TAS2552_FORMATS,
++		},
+ 		.ops = &tas2552_speaker_dai_ops,
+ 	},
+ };
+diff --git a/sound/soc/intel/boards/bxt_da7219_max98357a.c b/sound/soc/intel/boards/bxt_da7219_max98357a.c
+index 0c0a717823c40..1a24c44db6dda 100644
+--- a/sound/soc/intel/boards/bxt_da7219_max98357a.c
++++ b/sound/soc/intel/boards/bxt_da7219_max98357a.c
+@@ -750,6 +750,7 @@ static struct snd_soc_card broxton_audio_card = {
+ 	.dapm_routes = audio_map,
+ 	.num_dapm_routes = ARRAY_SIZE(audio_map),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = bxt_card_late_probe,
+ };
+ 
+diff --git a/sound/soc/intel/boards/bxt_rt298.c b/sound/soc/intel/boards/bxt_rt298.c
+index 0f3157dfa8384..13c41338003f1 100644
+--- a/sound/soc/intel/boards/bxt_rt298.c
++++ b/sound/soc/intel/boards/bxt_rt298.c
+@@ -575,6 +575,7 @@ static struct snd_soc_card broxton_rt298 = {
+ 	.dapm_routes = broxton_rt298_map,
+ 	.num_dapm_routes = ARRAY_SIZE(broxton_rt298_map),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = bxt_card_late_probe,
+ 
+ };
+diff --git a/sound/soc/intel/boards/glk_rt5682_max98357a.c b/sound/soc/intel/boards/glk_rt5682_max98357a.c
+index 62cca511522ea..c1b789ac6d500 100644
+--- a/sound/soc/intel/boards/glk_rt5682_max98357a.c
++++ b/sound/soc/intel/boards/glk_rt5682_max98357a.c
+@@ -603,6 +603,8 @@ static int geminilake_audio_probe(struct platform_device *pdev)
+ 	card = &glk_audio_card_rt5682_m98357a;
+ 	card->dev = &pdev->dev;
+ 	snd_soc_card_set_drvdata(card, ctx);
++	if (!snd_soc_acpi_sof_parent(&pdev->dev))
++		card->disable_route_checks = true;
+ 
+ 	/* override plaform name, if required */
+ 	mach = pdev->dev.platform_data;
+diff --git a/sound/soc/intel/boards/kbl_da7219_max98357a.c b/sound/soc/intel/boards/kbl_da7219_max98357a.c
+index 36f1f49e0b76b..4ecef661f8834 100644
+--- a/sound/soc/intel/boards/kbl_da7219_max98357a.c
++++ b/sound/soc/intel/boards/kbl_da7219_max98357a.c
+@@ -571,6 +571,7 @@ static struct snd_soc_card kabylake_audio_card_da7219_m98357a = {
+ 	.dapm_routes = kabylake_map,
+ 	.num_dapm_routes = ARRAY_SIZE(kabylake_map),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = kabylake_card_late_probe,
+ };
+ 
+diff --git a/sound/soc/intel/boards/kbl_da7219_max98927.c b/sound/soc/intel/boards/kbl_da7219_max98927.c
+index 884741aa48335..80c343fe7f658 100644
+--- a/sound/soc/intel/boards/kbl_da7219_max98927.c
++++ b/sound/soc/intel/boards/kbl_da7219_max98927.c
+@@ -1016,6 +1016,7 @@ static struct snd_soc_card kbl_audio_card_da7219_m98927 = {
+ 	.codec_conf = max98927_codec_conf,
+ 	.num_configs = ARRAY_SIZE(max98927_codec_conf),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = kabylake_card_late_probe,
+ };
+ 
+@@ -1034,6 +1035,7 @@ static struct snd_soc_card kbl_audio_card_max98927 = {
+ 	.codec_conf = max98927_codec_conf,
+ 	.num_configs = ARRAY_SIZE(max98927_codec_conf),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = kabylake_card_late_probe,
+ };
+ 
+@@ -1051,6 +1053,7 @@ static struct snd_soc_card kbl_audio_card_da7219_m98373 = {
+ 	.codec_conf = max98373_codec_conf,
+ 	.num_configs = ARRAY_SIZE(max98373_codec_conf),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = kabylake_card_late_probe,
+ };
+ 
+@@ -1068,6 +1071,7 @@ static struct snd_soc_card kbl_audio_card_max98373 = {
+ 	.codec_conf = max98373_codec_conf,
+ 	.num_configs = ARRAY_SIZE(max98373_codec_conf),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = kabylake_card_late_probe,
+ };
+ 
+diff --git a/sound/soc/intel/boards/kbl_rt5660.c b/sound/soc/intel/boards/kbl_rt5660.c
+index 3a9f91b58e113..2bffd2b6b3618 100644
+--- a/sound/soc/intel/boards/kbl_rt5660.c
++++ b/sound/soc/intel/boards/kbl_rt5660.c
+@@ -519,6 +519,7 @@ static struct snd_soc_card kabylake_audio_card_rt5660 = {
+ 	.dapm_routes = kabylake_rt5660_map,
+ 	.num_dapm_routes = ARRAY_SIZE(kabylake_rt5660_map),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = kabylake_card_late_probe,
+ };
+ 
+diff --git a/sound/soc/intel/boards/kbl_rt5663_max98927.c b/sound/soc/intel/boards/kbl_rt5663_max98927.c
+index 9a4b3d0973f65..be76c71eca679 100644
+--- a/sound/soc/intel/boards/kbl_rt5663_max98927.c
++++ b/sound/soc/intel/boards/kbl_rt5663_max98927.c
+@@ -948,6 +948,7 @@ static struct snd_soc_card kabylake_audio_card_rt5663_m98927 = {
+ 	.codec_conf = max98927_codec_conf,
+ 	.num_configs = ARRAY_SIZE(max98927_codec_conf),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = kabylake_card_late_probe,
+ };
+ 
+@@ -964,6 +965,7 @@ static struct snd_soc_card kabylake_audio_card_rt5663 = {
+ 	.dapm_routes = kabylake_5663_map,
+ 	.num_dapm_routes = ARRAY_SIZE(kabylake_5663_map),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = kabylake_card_late_probe,
+ };
+ 
+diff --git a/sound/soc/intel/boards/kbl_rt5663_rt5514_max98927.c b/sound/soc/intel/boards/kbl_rt5663_rt5514_max98927.c
+index f95546c184aae..291f56f79cd36 100644
+--- a/sound/soc/intel/boards/kbl_rt5663_rt5514_max98927.c
++++ b/sound/soc/intel/boards/kbl_rt5663_rt5514_max98927.c
+@@ -779,6 +779,7 @@ static struct snd_soc_card kabylake_audio_card = {
+ 	.codec_conf = max98927_codec_conf,
+ 	.num_configs = ARRAY_SIZE(max98927_codec_conf),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = kabylake_card_late_probe,
+ };
+ 
+diff --git a/sound/soc/intel/boards/skl_hda_dsp_generic.c b/sound/soc/intel/boards/skl_hda_dsp_generic.c
+index bc50eda297ab7..c2ff59e9a8cf7 100644
+--- a/sound/soc/intel/boards/skl_hda_dsp_generic.c
++++ b/sound/soc/intel/boards/skl_hda_dsp_generic.c
+@@ -229,6 +229,8 @@ static int skl_hda_audio_probe(struct platform_device *pdev)
+ 	ctx->common_hdmi_codec_drv = mach->mach_params.common_hdmi_codec_drv;
+ 
+ 	hda_soc_card.dev = &pdev->dev;
++	if (!snd_soc_acpi_sof_parent(&pdev->dev))
++		hda_soc_card.disable_route_checks = true;
+ 
+ 	if (mach->mach_params.dmic_num > 0) {
+ 		snprintf(hda_soc_components, sizeof(hda_soc_components),
+diff --git a/sound/soc/intel/boards/skl_nau88l25_max98357a.c b/sound/soc/intel/boards/skl_nau88l25_max98357a.c
+index 55802900069a2..d976636f5a495 100644
+--- a/sound/soc/intel/boards/skl_nau88l25_max98357a.c
++++ b/sound/soc/intel/boards/skl_nau88l25_max98357a.c
+@@ -643,6 +643,7 @@ static struct snd_soc_card skylake_audio_card = {
+ 	.dapm_routes = skylake_map,
+ 	.num_dapm_routes = ARRAY_SIZE(skylake_map),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = skylake_card_late_probe,
+ };
+ 
+diff --git a/sound/soc/intel/boards/skl_rt286.c b/sound/soc/intel/boards/skl_rt286.c
+index 5a0c64a831465..4ff3e5fb9b7d0 100644
+--- a/sound/soc/intel/boards/skl_rt286.c
++++ b/sound/soc/intel/boards/skl_rt286.c
+@@ -524,6 +524,7 @@ static struct snd_soc_card skylake_rt286 = {
+ 	.dapm_routes = skylake_rt286_map,
+ 	.num_dapm_routes = ARRAY_SIZE(skylake_rt286_map),
+ 	.fully_routed = true,
++	.disable_route_checks = true,
+ 	.late_probe = skylake_card_late_probe,
+ };
+ 
+diff --git a/tools/arch/x86/lib/x86-opcode-map.txt b/tools/arch/x86/lib/x86-opcode-map.txt
+index ec31f5b60323d..1c25c1072a84d 100644
+--- a/tools/arch/x86/lib/x86-opcode-map.txt
++++ b/tools/arch/x86/lib/x86-opcode-map.txt
+@@ -148,7 +148,7 @@ AVXcode:
+ 65: SEG=GS (Prefix)
+ 66: Operand-Size (Prefix)
+ 67: Address-Size (Prefix)
+-68: PUSH Iz (d64)
++68: PUSH Iz
+ 69: IMUL Gv,Ev,Iz
+ 6a: PUSH Ib (d64)
+ 6b: IMUL Gv,Ev,Ib
+diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
+index f32c059fbfb4f..8b2a2576fed66 100644
+--- a/tools/bpf/resolve_btfids/main.c
++++ b/tools/bpf/resolve_btfids/main.c
+@@ -637,7 +637,7 @@ static int sets_patch(struct object *obj)
+ 
+ static int symbols_patch(struct object *obj)
+ {
+-	int err;
++	off_t err;
+ 
+ 	if (__symbols_patch(obj, &obj->structs)  ||
+ 	    __symbols_patch(obj, &obj->unions)   ||
+diff --git a/tools/lib/subcmd/parse-options.c b/tools/lib/subcmd/parse-options.c
+index 39ebf6192016d..e799d35cba434 100644
+--- a/tools/lib/subcmd/parse-options.c
++++ b/tools/lib/subcmd/parse-options.c
+@@ -633,11 +633,10 @@ int parse_options_subcommand(int argc, const char **argv, const struct option *o
+ 			const char *const subcommands[], const char *usagestr[], int flags)
+ {
+ 	struct parse_opt_ctx_t ctx;
++	char *buf = NULL;
+ 
+ 	/* build usage string if it's not provided */
+ 	if (subcommands && !usagestr[0]) {
+-		char *buf = NULL;
+-
+ 		astrcatf(&buf, "%s %s [<options>] {", subcmd_config.exec_name, argv[0]);
+ 
+ 		for (int i = 0; subcommands[i]; i++) {
+@@ -679,7 +678,10 @@ int parse_options_subcommand(int argc, const char **argv, const struct option *o
+ 			astrcatf(&error_buf, "unknown switch `%c'", *ctx.opt);
+ 		usage_with_options(usagestr, options);
+ 	}
+-
++	if (buf) {
++		usagestr[0] = NULL;
++		free(buf);
++	}
+ 	return parse_options_end(&ctx);
+ }
+ 
+diff --git a/tools/testing/selftests/bpf/test_sockmap.c b/tools/testing/selftests/bpf/test_sockmap.c
+index 427ca00a32177..85d57633c8b65 100644
+--- a/tools/testing/selftests/bpf/test_sockmap.c
++++ b/tools/testing/selftests/bpf/test_sockmap.c
+@@ -2014,9 +2014,9 @@ int main(int argc, char **argv)
+ 		free(options.whitelist);
+ 	if (options.blacklist)
+ 		free(options.blacklist);
++	close(cg_fd);
+ 	if (cg_created)
+ 		cleanup_cgroup_environment();
+-	close(cg_fd);
+ 	return err;
+ }
+ 
+diff --git a/tools/testing/selftests/filesystems/binderfs/Makefile b/tools/testing/selftests/filesystems/binderfs/Makefile
+index 8af25ae960498..24d8910c7ab58 100644
+--- a/tools/testing/selftests/filesystems/binderfs/Makefile
++++ b/tools/testing/selftests/filesystems/binderfs/Makefile
+@@ -3,6 +3,4 @@
+ CFLAGS += -I../../../../../usr/include/ -pthread
+ TEST_GEN_PROGS := binderfs_test
+ 
+-binderfs_test: binderfs_test.c ../../kselftest.h ../../kselftest_harness.h
+-
+ include ../../lib.mk
+diff --git a/tools/testing/selftests/kcmp/kcmp_test.c b/tools/testing/selftests/kcmp/kcmp_test.c
+index 6ea7b9f37a411..d7a8e321bb16b 100644
+--- a/tools/testing/selftests/kcmp/kcmp_test.c
++++ b/tools/testing/selftests/kcmp/kcmp_test.c
+@@ -88,7 +88,10 @@ int main(int argc, char **argv)
+ 		int pid2 = getpid();
+ 		int ret;
+ 
+-		fd2 = open(kpath, O_RDWR, 0644);
++		ksft_print_header();
++		ksft_set_plan(3);
++
++		fd2 = open(kpath, O_RDWR);
+ 		if (fd2 < 0) {
+ 			perror("Can't open file");
+ 			ksft_exit_fail();
+@@ -152,7 +155,6 @@ int main(int argc, char **argv)
+ 			ksft_inc_pass_cnt();
+ 		}
+ 
+-		ksft_print_cnts();
+ 
+ 		if (ret)
+ 			ksft_exit_fail();
+@@ -162,5 +164,5 @@ int main(int argc, char **argv)
+ 
+ 	waitpid(pid2, &status, P_ALL);
+ 
+-	return ksft_exit_pass();
++	return 0;
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-06-21 14:08 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-06-21 14:08 UTC (permalink / raw
  To: gentoo-commits

commit:     2513a1c96d3b8ccb88724b6b60f950810bea91b1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jun 21 14:08:28 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jun 21 14:08:28 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2513a1c9

Linux patch 5.10.220

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1219_linux-5.10.220.patch | 38118 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 38122 insertions(+)

diff --git a/0000_README b/0000_README
index 893a81a9..7461b294 100644
--- a/0000_README
+++ b/0000_README
@@ -919,6 +919,10 @@ Patch:  1218_linux-5.10.219.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.219
 
+Patch:  1219_linux-5.10.220.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.220
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1219_linux-5.10.220.patch b/1219_linux-5.10.220.patch
new file mode 100644
index 00000000..9b2312a5
--- /dev/null
+++ b/1219_linux-5.10.220.patch
@@ -0,0 +1,38118 @@
+diff --git a/Documentation/filesystems/files.rst b/Documentation/filesystems/files.rst
+index cbf8e57376bf6..bcf84459917f5 100644
+--- a/Documentation/filesystems/files.rst
++++ b/Documentation/filesystems/files.rst
+@@ -62,7 +62,7 @@ the fdtable structure -
+    be held.
+ 
+ 4. To look up the file structure given an fd, a reader
+-   must use either fcheck() or fcheck_files() APIs. These
++   must use either lookup_fd_rcu() or files_lookup_fd_rcu() APIs. These
+    take care of barrier requirements due to lock-free lookup.
+ 
+    An example::
+@@ -70,7 +70,7 @@ the fdtable structure -
+ 	struct file *file;
+ 
+ 	rcu_read_lock();
+-	file = fcheck(fd);
++	file = lookup_fd_rcu(fd);
+ 	if (file) {
+ 		...
+ 	}
+@@ -84,7 +84,7 @@ the fdtable structure -
+    on ->f_count::
+ 
+ 	rcu_read_lock();
+-	file = fcheck_files(files, fd);
++	file = files_lookup_fd_rcu(files, fd);
+ 	if (file) {
+ 		if (atomic_long_inc_not_zero(&file->f_count))
+ 			*fput_needed = 1;
+@@ -104,7 +104,7 @@ the fdtable structure -
+    lock-free, they must be installed using rcu_assign_pointer()
+    API. If they are looked up lock-free, rcu_dereference()
+    must be used. However it is advisable to use files_fdtable()
+-   and fcheck()/fcheck_files() which take care of these issues.
++   and lookup_fd_rcu()/files_lookup_fd_rcu() which take care of these issues.
+ 
+ 7. While updating, the fdtable pointer must be looked up while
+    holding files->file_lock. If ->file_lock is dropped, then
+diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
+index fbd695d66905f..23a0d24168bc5 100644
+--- a/Documentation/filesystems/locking.rst
++++ b/Documentation/filesystems/locking.rst
+@@ -433,17 +433,21 @@ prototypes::
+ 	void (*lm_break)(struct file_lock *); /* break_lease callback */
+ 	int (*lm_change)(struct file_lock **, int);
+ 	bool (*lm_breaker_owns_lease)(struct file_lock *);
++        bool (*lm_lock_expirable)(struct file_lock *);
++        void (*lm_expire_lock)(void);
+ 
+ locking rules:
+ 
+ ======================	=============	=================	=========
+-ops			inode->i_lock	blocked_lock_lock	may block
++ops			   flc_lock  	blocked_lock_lock	may block
+ ======================	=============	=================	=========
+-lm_notify:		yes		yes			no
++lm_notify:		no      	yes			no
+ lm_grant:		no		no			no
+ lm_break:		yes		no			no
+ lm_change		yes		no			no
+-lm_breaker_owns_lease:	no		no			no
++lm_breaker_owns_lease:	yes     	no			no
++lm_lock_expirable	yes		no			no
++lm_expire_lock		no		no			yes
+ ======================	=============	=================	=========
+ 
+ buffer_head
+diff --git a/Documentation/filesystems/nfs/exporting.rst b/Documentation/filesystems/nfs/exporting.rst
+index 33d588a01ace1..6f59a364f84cd 100644
+--- a/Documentation/filesystems/nfs/exporting.rst
++++ b/Documentation/filesystems/nfs/exporting.rst
+@@ -154,6 +154,11 @@ struct which has the following members:
+     to find potential names, and matches inode numbers to find the correct
+     match.
+ 
++  flags
++    Some filesystems may need to be handled differently than others. The
++    export_operations struct also includes a flags field that allows the
++    filesystem to communicate such information to nfsd. See the Export
++    Operations Flags section below for more explanation.
+ 
+ A filehandle fragment consists of an array of 1 or more 4byte words,
+ together with a one byte "type".
+@@ -163,3 +168,76 @@ generated by encode_fh, in which case it will have been padded with
+ nuls.  Rather, the encode_fh routine should choose a "type" which
+ indicates the decode_fh how much of the filehandle is valid, and how
+ it should be interpreted.
++
++Export Operations Flags
++-----------------------
++In addition to the operation vector pointers, struct export_operations also
++contains a "flags" field that allows the filesystem to communicate to nfsd
++that it may want to do things differently when dealing with it. The
++following flags are defined:
++
++  EXPORT_OP_NOWCC - disable NFSv3 WCC attributes on this filesystem
++    RFC 1813 recommends that servers always send weak cache consistency
++    (WCC) data to the client after each operation. The server should
++    atomically collect attributes about the inode, do an operation on it,
++    and then collect the attributes afterward. This allows the client to
++    skip issuing GETATTRs in some situations but means that the server
++    is calling vfs_getattr for almost all RPCs. On some filesystems
++    (particularly those that are clustered or networked) this is expensive
++    and atomicity is difficult to guarantee. This flag indicates to nfsd
++    that it should skip providing WCC attributes to the client in NFSv3
++    replies when doing operations on this filesystem. Consider enabling
++    this on filesystems that have an expensive ->getattr inode operation,
++    or when atomicity between pre and post operation attribute collection
++    is impossible to guarantee.
++
++  EXPORT_OP_NOSUBTREECHK - disallow subtree checking on this fs
++    Many NFS operations deal with filehandles, which the server must then
++    vet to ensure that they live inside of an exported tree. When the
++    export consists of an entire filesystem, this is trivial. nfsd can just
++    ensure that the filehandle live on the filesystem. When only part of a
++    filesystem is exported however, then nfsd must walk the ancestors of the
++    inode to ensure that it's within an exported subtree. This is an
++    expensive operation and not all filesystems can support it properly.
++    This flag exempts the filesystem from subtree checking and causes
++    exportfs to get back an error if it tries to enable subtree checking
++    on it.
++
++  EXPORT_OP_CLOSE_BEFORE_UNLINK - always close cached files before unlinking
++    On some exportable filesystems (such as NFS) unlinking a file that
++    is still open can cause a fair bit of extra work. For instance,
++    the NFS client will do a "sillyrename" to ensure that the file
++    sticks around while it's still open. When reexporting, that open
++    file is held by nfsd so we usually end up doing a sillyrename, and
++    then immediately deleting the sillyrenamed file just afterward when
++    the link count actually goes to zero. Sometimes this delete can race
++    with other operations (for instance an rmdir of the parent directory).
++    This flag causes nfsd to close any open files for this inode _before_
++    calling into the vfs to do an unlink or a rename that would replace
++    an existing file.
++
++  EXPORT_OP_REMOTE_FS - Backing storage for this filesystem is remote
++    PF_LOCAL_THROTTLE exists for loopback NFSD, where a thread needs to
++    write to one bdi (the final bdi) in order to free up writes queued
++    to another bdi (the client bdi). Such threads get a private balance
++    of dirty pages so that dirty pages for the client bdi do not imact
++    the daemon writing to the final bdi. For filesystems whose durable
++    storage is not local (such as exported NFS filesystems), this
++    constraint has negative consequences. EXPORT_OP_REMOTE_FS enables
++    an export to disable writeback throttling.
++
++  EXPORT_OP_NOATOMIC_ATTR - Filesystem does not update attributes atomically
++    EXPORT_OP_NOATOMIC_ATTR indicates that the exported filesystem
++    cannot provide the semantics required by the "atomic" boolean in
++    NFSv4's change_info4. This boolean indicates to a client whether the
++    returned before and after change attributes were obtained atomically
++    with the respect to the requested metadata operation (UNLINK,
++    OPEN/CREATE, MKDIR, etc).
++
++  EXPORT_OP_FLUSH_ON_CLOSE - Filesystem flushes file data on close(2)
++    On most filesystems, inodes can remain under writeback after the
++    file is closed. NFSD relies on client activity or local flusher
++    threads to handle writeback. Certain filesystems, such as NFS, flush
++    all of an inode's dirty data on last close. Exports that behave this
++    way should set EXPORT_OP_FLUSH_ON_CLOSE so that NFSD knows to skip
++    waiting for writeback when closing such files.
+diff --git a/Makefile b/Makefile
+index 3b36b77589f2b..9304408d8ace2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 219
++SUBLEVEL = 220
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/powerpc/platforms/cell/spufs/coredump.c b/arch/powerpc/platforms/cell/spufs/coredump.c
+index 026c181a98c5d..60b5583e9eafc 100644
+--- a/arch/powerpc/platforms/cell/spufs/coredump.c
++++ b/arch/powerpc/platforms/cell/spufs/coredump.c
+@@ -74,7 +74,7 @@ static struct spu_context *coredump_next_context(int *fd)
+ 	*fd = n - 1;
+ 
+ 	rcu_read_lock();
+-	file = fcheck(*fd);
++	file = lookup_fd_rcu(*fd);
+ 	ctx = SPUFS_I(file_inode(file))->i_ctx;
+ 	get_spu_context(ctx);
+ 	rcu_read_unlock();
+diff --git a/crypto/algboss.c b/crypto/algboss.c
+index 5ebccbd6b74ed..b87f907bb1428 100644
+--- a/crypto/algboss.c
++++ b/crypto/algboss.c
+@@ -74,7 +74,7 @@ static int cryptomgr_probe(void *data)
+ 	complete_all(&param->larval->completion);
+ 	crypto_alg_put(&param->larval->alg);
+ 	kfree(param);
+-	module_put_and_exit(0);
++	module_put_and_kthread_exit(0);
+ }
+ 
+ static int cryptomgr_schedule_probe(struct crypto_larval *larval)
+@@ -209,7 +209,7 @@ static int cryptomgr_test(void *data)
+ 	crypto_alg_tested(param->driver, err);
+ 
+ 	kfree(param);
+-	module_put_and_exit(0);
++	module_put_and_kthread_exit(0);
+ }
+ 
+ static int cryptomgr_schedule_test(struct crypto_alg *alg)
+diff --git a/fs/Kconfig b/fs/Kconfig
+index da524c4d7b7e0..11b60d160f88f 100644
+--- a/fs/Kconfig
++++ b/fs/Kconfig
+@@ -320,7 +320,7 @@ config LOCKD
+ 
+ config LOCKD_V4
+ 	bool
+-	depends on NFSD_V3 || NFS_V3
++	depends on NFSD || NFS_V3
+ 	depends on FILE_LOCKING
+ 	default y
+ 
+@@ -333,6 +333,10 @@ config NFS_COMMON
+ 	depends on NFSD || NFS_FS || LOCKD
+ 	default y
+ 
++config NFS_V4_2_SSC_HELPER
++	bool
++	default y if NFS_V4_2
++
+ source "net/sunrpc/Kconfig"
+ source "fs/ceph/Kconfig"
+ source "fs/cifs/Kconfig"
+diff --git a/fs/autofs/dev-ioctl.c b/fs/autofs/dev-ioctl.c
+index 322b7dfb4ea01..5bf781ea6d676 100644
+--- a/fs/autofs/dev-ioctl.c
++++ b/fs/autofs/dev-ioctl.c
+@@ -4,9 +4,10 @@
+  * Copyright 2008 Ian Kent <raven@themaw.net>
+  */
+ 
++#include <linux/module.h>
+ #include <linux/miscdevice.h>
+ #include <linux/compat.h>
+-#include <linux/syscalls.h>
++#include <linux/fdtable.h>
+ #include <linux/magic.h>
+ #include <linux/nospec.h>
+ 
+@@ -289,7 +290,7 @@ static int autofs_dev_ioctl_closemount(struct file *fp,
+ 				       struct autofs_sb_info *sbi,
+ 				       struct autofs_dev_ioctl *param)
+ {
+-	return ksys_close(param->ioctlfd);
++	return close_fd(param->ioctlfd);
+ }
+ 
+ /*
+diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
+index ecc8ecbbfa5ac..7b987de0babe8 100644
+--- a/fs/cachefiles/namei.c
++++ b/fs/cachefiles/namei.c
+@@ -412,9 +412,14 @@ static int cachefiles_bury_object(struct cachefiles_cache *cache,
+ 	if (ret < 0) {
+ 		cachefiles_io_error(cache, "Rename security error %d", ret);
+ 	} else {
++		struct renamedata rd = {
++			.old_dir	= d_inode(dir),
++			.old_dentry	= rep,
++			.new_dir	= d_inode(cache->graveyard),
++			.new_dentry	= grave,
++		};
+ 		trace_cachefiles_rename(object, rep, grave, why);
+-		ret = vfs_rename(d_inode(dir), rep,
+-				 d_inode(cache->graveyard), grave, NULL, 0);
++		ret = vfs_rename(&rd);
+ 		if (ret != 0 && ret != -ENOMEM)
+ 			cachefiles_io_error(cache,
+ 					    "Rename failed with error %d", ret);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 164b985407160..a3c0e6a4e4847 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -1242,7 +1242,7 @@ cifs_demultiplex_thread(void *p)
+ 	}
+ 
+ 	memalloc_noreclaim_restore(noreclaim_flag);
+-	module_put_and_exit(0);
++	module_put_and_kthread_exit(0);
+ }
+ 
+ /* extract the host portion of the UNC string */
+diff --git a/fs/coredump.c b/fs/coredump.c
+index 9d91e831ed0b2..7b085975ea163 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -590,7 +590,6 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ 	int ispipe;
+ 	size_t *argv = NULL;
+ 	int argc = 0;
+-	struct files_struct *displaced;
+ 	/* require nonrelative corefile path and be extra careful */
+ 	bool need_suid_safe = false;
+ 	bool core_dumped = false;
+@@ -797,11 +796,9 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ 	}
+ 
+ 	/* get us an unshared descriptor table; almost always a no-op */
+-	retval = unshare_files(&displaced);
++	retval = unshare_files();
+ 	if (retval)
+ 		goto close_fail;
+-	if (displaced)
+-		put_files_struct(displaced);
+ 	if (!dump_interrupted()) {
+ 		/*
+ 		 * umh disabled with CONFIG_STATIC_USERMODEHELPER_PATH="" would
+diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
+index c867a0d62f360..1dbe0c3ff38ea 100644
+--- a/fs/ecryptfs/inode.c
++++ b/fs/ecryptfs/inode.c
+@@ -598,6 +598,7 @@ ecryptfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	struct dentry *lower_new_dir_dentry;
+ 	struct dentry *trap;
+ 	struct inode *target_inode;
++	struct renamedata rd = {};
+ 
+ 	if (flags)
+ 		return -EINVAL;
+@@ -627,9 +628,12 @@ ecryptfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		rc = -ENOTEMPTY;
+ 		goto out_lock;
+ 	}
+-	rc = vfs_rename(d_inode(lower_old_dir_dentry), lower_old_dentry,
+-			d_inode(lower_new_dir_dentry), lower_new_dentry,
+-			NULL, 0);
++
++	rd.old_dir	= d_inode(lower_old_dir_dentry);
++	rd.old_dentry	= lower_old_dentry;
++	rd.new_dir	= d_inode(lower_new_dir_dentry);
++	rd.new_dentry	= lower_new_dentry;
++	rc = vfs_rename(&rd);
+ 	if (rc)
+ 		goto out_lock;
+ 	if (target_inode)
+diff --git a/fs/exec.c b/fs/exec.c
+index ebe9011955b9b..d5c8f085235bc 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1264,6 +1264,11 @@ int begin_new_exec(struct linux_binprm * bprm)
+ 	if (retval)
+ 		goto out;
+ 
++	/* Ensure the files table is not shared. */
++	retval = unshare_files();
++	if (retval)
++		goto out;
++
+ 	/*
+ 	 * Must be called _before_ exec_mmap() as bprm->mm is
+ 	 * not visibile until then. This also enables the update
+@@ -1789,7 +1794,6 @@ static int bprm_execve(struct linux_binprm *bprm,
+ 		       int fd, struct filename *filename, int flags)
+ {
+ 	struct file *file;
+-	struct files_struct *displaced;
+ 	int retval;
+ 
+ 	/*
+@@ -1797,13 +1801,9 @@ static int bprm_execve(struct linux_binprm *bprm,
+ 	 */
+ 	io_uring_task_cancel();
+ 
+-	retval = unshare_files(&displaced);
+-	if (retval)
+-		return retval;
+-
+ 	retval = prepare_bprm_creds(bprm);
+ 	if (retval)
+-		goto out_files;
++		return retval;
+ 
+ 	check_unsafe_exec(bprm);
+ 	current->in_execve = 1;
+@@ -1818,11 +1818,14 @@ static int bprm_execve(struct linux_binprm *bprm,
+ 	bprm->file = file;
+ 	/*
+ 	 * Record that a name derived from an O_CLOEXEC fd will be
+-	 * inaccessible after exec. Relies on having exclusive access to
+-	 * current->files (due to unshare_files above).
++	 * inaccessible after exec.  This allows the code in exec to
++	 * choose to fail when the executable is not mmaped into the
++	 * interpreter and an open file descriptor is not passed to
++	 * the interpreter.  This makes for a better user experience
++	 * than having the interpreter start and then immediately fail
++	 * when it finds the executable is inaccessible.
+ 	 */
+-	if (bprm->fdpath &&
+-	    close_on_exec(fd, rcu_dereference_raw(current->files->fdt)))
++	if (bprm->fdpath && get_close_on_exec(fd))
+ 		bprm->interp_flags |= BINPRM_FLAGS_PATH_INACCESSIBLE;
+ 
+ 	/* Set the unchanging part of bprm->cred */
+@@ -1840,8 +1843,6 @@ static int bprm_execve(struct linux_binprm *bprm,
+ 	rseq_execve(current);
+ 	acct_update_integrals(current);
+ 	task_numa_free(current, false);
+-	if (displaced)
+-		put_files_struct(displaced);
+ 	return retval;
+ 
+ out:
+@@ -1858,10 +1859,6 @@ static int bprm_execve(struct linux_binprm *bprm,
+ 	current->fs->in_exec = 0;
+ 	current->in_execve = 0;
+ 
+-out_files:
+-	if (displaced)
+-		reset_files_struct(displaced);
+-
+ 	return retval;
+ }
+ 
+diff --git a/fs/exportfs/expfs.c b/fs/exportfs/expfs.c
+index 2dd55b172d57f..8c28bd1c9ed94 100644
+--- a/fs/exportfs/expfs.c
++++ b/fs/exportfs/expfs.c
+@@ -18,7 +18,7 @@
+ #include <linux/sched.h>
+ #include <linux/cred.h>
+ 
+-#define dprintk(fmt, args...) do{}while(0)
++#define dprintk(fmt, args...) pr_debug(fmt, ##args)
+ 
+ 
+ static int get_name(const struct path *path, char *name, struct dentry *child);
+@@ -132,8 +132,8 @@ static struct dentry *reconnect_one(struct vfsmount *mnt,
+ 	inode_unlock(dentry->d_inode);
+ 
+ 	if (IS_ERR(parent)) {
+-		dprintk("%s: get_parent of %ld failed, err %d\n",
+-			__func__, dentry->d_inode->i_ino, PTR_ERR(parent));
++		dprintk("get_parent of %lu failed, err %ld\n",
++			dentry->d_inode->i_ino, PTR_ERR(parent));
+ 		return parent;
+ 	}
+ 
+@@ -147,7 +147,7 @@ static struct dentry *reconnect_one(struct vfsmount *mnt,
+ 	dprintk("%s: found name: %s\n", __func__, nbuf);
+ 	tmp = lookup_one_len_unlocked(nbuf, parent, strlen(nbuf));
+ 	if (IS_ERR(tmp)) {
+-		dprintk("%s: lookup failed: %d\n", __func__, PTR_ERR(tmp));
++		dprintk("lookup failed: %ld\n", PTR_ERR(tmp));
+ 		err = PTR_ERR(tmp);
+ 		goto out_err;
+ 	}
+@@ -417,9 +417,11 @@ int exportfs_encode_fh(struct dentry *dentry, struct fid *fid, int *max_len,
+ }
+ EXPORT_SYMBOL_GPL(exportfs_encode_fh);
+ 
+-struct dentry *exportfs_decode_fh(struct vfsmount *mnt, struct fid *fid,
+-		int fh_len, int fileid_type,
+-		int (*acceptable)(void *, struct dentry *), void *context)
++struct dentry *
++exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
++		       int fileid_type,
++		       int (*acceptable)(void *, struct dentry *),
++		       void *context)
+ {
+ 	const struct export_operations *nop = mnt->mnt_sb->s_export_op;
+ 	struct dentry *result, *alias;
+@@ -432,10 +434,8 @@ struct dentry *exportfs_decode_fh(struct vfsmount *mnt, struct fid *fid,
+ 	if (!nop || !nop->fh_to_dentry)
+ 		return ERR_PTR(-ESTALE);
+ 	result = nop->fh_to_dentry(mnt->mnt_sb, fid, fh_len, fileid_type);
+-	if (PTR_ERR(result) == -ENOMEM)
+-		return ERR_CAST(result);
+ 	if (IS_ERR_OR_NULL(result))
+-		return ERR_PTR(-ESTALE);
++		return result;
+ 
+ 	/*
+ 	 * If no acceptance criteria was specified by caller, a disconnected
+@@ -561,10 +561,26 @@ struct dentry *exportfs_decode_fh(struct vfsmount *mnt, struct fid *fid,
+ 
+  err_result:
+ 	dput(result);
+-	if (err != -ENOMEM)
+-		err = -ESTALE;
+ 	return ERR_PTR(err);
+ }
++EXPORT_SYMBOL_GPL(exportfs_decode_fh_raw);
++
++struct dentry *exportfs_decode_fh(struct vfsmount *mnt, struct fid *fid,
++				  int fh_len, int fileid_type,
++				  int (*acceptable)(void *, struct dentry *),
++				  void *context)
++{
++	struct dentry *ret;
++
++	ret = exportfs_decode_fh_raw(mnt, fid, fh_len, fileid_type,
++				     acceptable, context);
++	if (IS_ERR_OR_NULL(ret)) {
++		if (ret == ERR_PTR(-ENOMEM))
++			return ret;
++		return ERR_PTR(-ESTALE);
++	}
++	return ret;
++}
+ EXPORT_SYMBOL_GPL(exportfs_decode_fh);
+ 
+ MODULE_LICENSE("GPL");
+diff --git a/fs/file.c b/fs/file.c
+index d6bc73960e4ac..fdb84a64724b7 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -175,7 +175,7 @@ static int expand_fdtable(struct files_struct *files, unsigned int nr)
+ 	spin_unlock(&files->file_lock);
+ 	new_fdt = alloc_fdtable(nr);
+ 
+-	/* make sure all __fd_install() have seen resize_in_progress
++	/* make sure all fd_install() have seen resize_in_progress
+ 	 * or have finished their rcu_read_lock_sched() section.
+ 	 */
+ 	if (atomic_read(&files->count) > 1)
+@@ -198,7 +198,7 @@ static int expand_fdtable(struct files_struct *files, unsigned int nr)
+ 	rcu_assign_pointer(files->fdt, new_fdt);
+ 	if (cur_fdt != &files->fdtab)
+ 		call_rcu(&cur_fdt->rcu, free_fdtable_rcu);
+-	/* coupled with smp_rmb() in __fd_install() */
++	/* coupled with smp_rmb() in fd_install() */
+ 	smp_wmb();
+ 	return 1;
+ }
+@@ -466,18 +466,6 @@ void put_files_struct(struct files_struct *files)
+ 	}
+ }
+ 
+-void reset_files_struct(struct files_struct *files)
+-{
+-	struct task_struct *tsk = current;
+-	struct files_struct *old;
+-
+-	old = tsk->files;
+-	task_lock(tsk);
+-	tsk->files = files;
+-	task_unlock(tsk);
+-	put_files_struct(old);
+-}
+-
+ void exit_files(struct task_struct *tsk)
+ {
+ 	struct files_struct * files = tsk->files;
+@@ -521,9 +509,9 @@ static unsigned int find_next_fd(struct fdtable *fdt, unsigned int start)
+ /*
+  * allocate a file descriptor, mark it busy.
+  */
+-int __alloc_fd(struct files_struct *files,
+-	       unsigned start, unsigned end, unsigned flags)
++static int alloc_fd(unsigned start, unsigned end, unsigned flags)
+ {
++	struct files_struct *files = current->files;
+ 	unsigned int fd;
+ 	int error;
+ 	struct fdtable *fdt;
+@@ -579,14 +567,9 @@ int __alloc_fd(struct files_struct *files,
+ 	return error;
+ }
+ 
+-static int alloc_fd(unsigned start, unsigned flags)
+-{
+-	return __alloc_fd(current->files, start, rlimit(RLIMIT_NOFILE), flags);
+-}
+-
+ int __get_unused_fd_flags(unsigned flags, unsigned long nofile)
+ {
+-	return __alloc_fd(current->files, 0, nofile, flags);
++	return alloc_fd(0, nofile, flags);
+ }
+ 
+ int get_unused_fd_flags(unsigned flags)
+@@ -625,17 +608,13 @@ EXPORT_SYMBOL(put_unused_fd);
+  * It should never happen - if we allow dup2() do it, _really_ bad things
+  * will follow.
+  *
+- * NOTE: __fd_install() variant is really, really low-level; don't
+- * use it unless you are forced to by truly lousy API shoved down
+- * your throat.  'files' *MUST* be either current->files or obtained
+- * by get_files_struct(current) done by whoever had given it to you,
+- * or really bad things will happen.  Normally you want to use
+- * fd_install() instead.
++ * This consumes the "file" refcount, so callers should treat it
++ * as if they had called fput(file).
+  */
+ 
+-void __fd_install(struct files_struct *files, unsigned int fd,
+-		struct file *file)
++void fd_install(unsigned int fd, struct file *file)
+ {
++	struct files_struct *files = current->files;
+ 	struct fdtable *fdt;
+ 
+ 	rcu_read_lock_sched();
+@@ -657,15 +636,6 @@ void __fd_install(struct files_struct *files, unsigned int fd,
+ 	rcu_read_unlock_sched();
+ }
+ 
+-/*
+- * This consumes the "file" refcount, so callers should treat it
+- * as if they had called fput(file).
+- */
+-void fd_install(unsigned int fd, struct file *file)
+-{
+-	__fd_install(current->files, fd, file);
+-}
+-
+ EXPORT_SYMBOL(fd_install);
+ 
+ static struct file *pick_file(struct files_struct *files, unsigned fd)
+@@ -689,11 +659,9 @@ static struct file *pick_file(struct files_struct *files, unsigned fd)
+ 	return file;
+ }
+ 
+-/*
+- * The same warnings as for __alloc_fd()/__fd_install() apply here...
+- */
+-int __close_fd(struct files_struct *files, unsigned fd)
++int close_fd(unsigned fd)
+ {
++	struct files_struct *files = current->files;
+ 	struct file *file;
+ 
+ 	file = pick_file(files, fd);
+@@ -702,7 +670,7 @@ int __close_fd(struct files_struct *files, unsigned fd)
+ 
+ 	return filp_close(file, files);
+ }
+-EXPORT_SYMBOL(__close_fd); /* for ksys_close() */
++EXPORT_SYMBOL(close_fd); /* for ksys_close() */
+ 
+ /**
+  * __close_range() - Close all file descriptors in a given range.
+@@ -861,68 +829,28 @@ void do_close_on_exec(struct files_struct *files)
+ 	spin_unlock(&files->file_lock);
+ }
+ 
+-static inline struct file *__fget_files_rcu(struct files_struct *files,
+-	unsigned int fd, fmode_t mask, unsigned int refs)
+-{
+-	for (;;) {
+-		struct file *file;
+-		struct fdtable *fdt = rcu_dereference_raw(files->fdt);
+-		struct file __rcu **fdentry;
+-
+-		if (unlikely(fd >= fdt->max_fds))
+-			return NULL;
+-
+-		fdentry = fdt->fd + array_index_nospec(fd, fdt->max_fds);
+-		file = rcu_dereference_raw(*fdentry);
+-		if (unlikely(!file))
+-			return NULL;
+-
+-		if (unlikely(file->f_mode & mask))
+-			return NULL;
+-
+-		/*
+-		 * Ok, we have a file pointer. However, because we do
+-		 * this all locklessly under RCU, we may be racing with
+-		 * that file being closed.
+-		 *
+-		 * Such a race can take two forms:
+-		 *
+-		 *  (a) the file ref already went down to zero,
+-		 *      and get_file_rcu_many() fails. Just try
+-		 *      again:
+-		 */
+-		if (unlikely(!get_file_rcu_many(file, refs)))
+-			continue;
+-
+-		/*
+-		 *  (b) the file table entry has changed under us.
+-		 *       Note that we don't need to re-check the 'fdt->fd'
+-		 *       pointer having changed, because it always goes
+-		 *       hand-in-hand with 'fdt'.
+-		 *
+-		 * If so, we need to put our refs and try again.
+-		 */
+-		if (unlikely(rcu_dereference_raw(files->fdt) != fdt) ||
+-		    unlikely(rcu_dereference_raw(*fdentry) != file)) {
+-			fput_many(file, refs);
+-			continue;
+-		}
+-
+-		/*
+-		 * Ok, we have a ref to the file, and checked that it
+-		 * still exists.
+-		 */
+-		return file;
+-	}
+-}
+-
+ static struct file *__fget_files(struct files_struct *files, unsigned int fd,
+ 				 fmode_t mask, unsigned int refs)
+ {
+ 	struct file *file;
+ 
+ 	rcu_read_lock();
+-	file = __fget_files_rcu(files, fd, mask, refs);
++loop:
++	file = files_lookup_fd_rcu(files, fd);
++	if (file) {
++		/* File object ref couldn't be taken.
++		 * dup2() atomicity guarantee is the reason
++		 * we loop to catch the new file (or NULL pointer)
++		 */
++		if (file->f_mode & mask)
++			file = NULL;
++		else if (!get_file_rcu_many(file, refs))
++			goto loop;
++		else if (files_lookup_fd_raw(files, fd) != file) {
++			fput_many(file, refs);
++			goto loop;
++		}
++	}
+ 	rcu_read_unlock();
+ 
+ 	return file;
+@@ -963,6 +891,42 @@ struct file *fget_task(struct task_struct *task, unsigned int fd)
+ 	return file;
+ }
+ 
++struct file *task_lookup_fd_rcu(struct task_struct *task, unsigned int fd)
++{
++	/* Must be called with rcu_read_lock held */
++	struct files_struct *files;
++	struct file *file = NULL;
++
++	task_lock(task);
++	files = task->files;
++	if (files)
++		file = files_lookup_fd_rcu(files, fd);
++	task_unlock(task);
++
++	return file;
++}
++
++struct file *task_lookup_next_fd_rcu(struct task_struct *task, unsigned int *ret_fd)
++{
++	/* Must be called with rcu_read_lock held */
++	struct files_struct *files;
++	unsigned int fd = *ret_fd;
++	struct file *file = NULL;
++
++	task_lock(task);
++	files = task->files;
++	if (files) {
++		for (; fd < files_fdtable(files)->max_fds; fd++) {
++			file = files_lookup_fd_rcu(files, fd);
++			if (file)
++				break;
++		}
++	}
++	task_unlock(task);
++	*ret_fd = fd;
++	return file;
++}
++
+ /*
+  * Lightweight file lookup - no refcnt increment if fd table isn't shared.
+  *
+@@ -985,7 +949,7 @@ static unsigned long __fget_light(unsigned int fd, fmode_t mask)
+ 	struct file *file;
+ 
+ 	if (atomic_read(&files->count) == 1) {
+-		file = __fcheck_files(files, fd);
++		file = files_lookup_fd_raw(files, fd);
+ 		if (!file || unlikely(file->f_mode & mask))
+ 			return 0;
+ 		return (unsigned long)file;
+@@ -1121,7 +1085,7 @@ int replace_fd(unsigned fd, struct file *file, unsigned flags)
+ 	struct files_struct *files = current->files;
+ 
+ 	if (!file)
+-		return __close_fd(files, fd);
++		return close_fd(fd);
+ 
+ 	if (fd >= rlimit(RLIMIT_NOFILE))
+ 		return -EBADF;
+@@ -1210,7 +1174,7 @@ static int ksys_dup3(unsigned int oldfd, unsigned int newfd, int flags)
+ 
+ 	spin_lock(&files->file_lock);
+ 	err = expand_files(files, newfd);
+-	file = fcheck(oldfd);
++	file = files_lookup_fd_locked(files, oldfd);
+ 	if (unlikely(!file))
+ 		goto Ebadf;
+ 	if (unlikely(err < 0)) {
+@@ -1239,7 +1203,7 @@ SYSCALL_DEFINE2(dup2, unsigned int, oldfd, unsigned int, newfd)
+ 		int retval = oldfd;
+ 
+ 		rcu_read_lock();
+-		if (!fcheck_files(files, oldfd))
++		if (!files_lookup_fd_rcu(files, oldfd))
+ 			retval = -EBADF;
+ 		rcu_read_unlock();
+ 		return retval;
+@@ -1264,10 +1228,11 @@ SYSCALL_DEFINE1(dup, unsigned int, fildes)
+ 
+ int f_dupfd(unsigned int from, struct file *file, unsigned flags)
+ {
++	unsigned long nofile = rlimit(RLIMIT_NOFILE);
+ 	int err;
+-	if (from >= rlimit(RLIMIT_NOFILE))
++	if (from >= nofile)
+ 		return -EINVAL;
+-	err = alloc_fd(from, flags);
++	err = alloc_fd(from, nofile, flags);
+ 	if (err >= 0) {
+ 		get_file(file);
+ 		fd_install(err, file);
+diff --git a/fs/init.c b/fs/init.c
+index e9c320a48cf15..02723bea84990 100644
+--- a/fs/init.c
++++ b/fs/init.c
+@@ -49,7 +49,7 @@ int __init init_chdir(const char *filename)
+ 	error = kern_path(filename, LOOKUP_FOLLOW | LOOKUP_DIRECTORY, &path);
+ 	if (error)
+ 		return error;
+-	error = inode_permission(path.dentry->d_inode, MAY_EXEC | MAY_CHDIR);
++	error = path_permission(&path, MAY_EXEC | MAY_CHDIR);
+ 	if (!error)
+ 		set_fs_pwd(current->fs, &path);
+ 	path_put(&path);
+@@ -64,7 +64,7 @@ int __init init_chroot(const char *filename)
+ 	error = kern_path(filename, LOOKUP_FOLLOW | LOOKUP_DIRECTORY, &path);
+ 	if (error)
+ 		return error;
+-	error = inode_permission(path.dentry->d_inode, MAY_EXEC | MAY_CHDIR);
++	error = path_permission(&path, MAY_EXEC | MAY_CHDIR);
+ 	if (error)
+ 		goto dput_and_out;
+ 	error = -EPERM;
+@@ -118,7 +118,7 @@ int __init init_eaccess(const char *filename)
+ 	error = kern_path(filename, LOOKUP_FOLLOW, &path);
+ 	if (error)
+ 		return error;
+-	error = inode_permission(d_inode(path.dentry), MAY_ACCESS);
++	error = path_permission(&path, MAY_ACCESS);
+ 	path_put(&path);
+ 	return error;
+ }
+diff --git a/fs/lockd/clnt4xdr.c b/fs/lockd/clnt4xdr.c
+index 7df6324ccb8ab..8161667c976f8 100644
+--- a/fs/lockd/clnt4xdr.c
++++ b/fs/lockd/clnt4xdr.c
+@@ -261,7 +261,6 @@ static int decode_nlm4_holder(struct xdr_stream *xdr, struct nlm_res *result)
+ 	u32 exclusive;
+ 	int error;
+ 	__be32 *p;
+-	s32 end;
+ 
+ 	memset(lock, 0, sizeof(*lock));
+ 	locks_init_lock(fl);
+@@ -285,13 +284,7 @@ static int decode_nlm4_holder(struct xdr_stream *xdr, struct nlm_res *result)
+ 	fl->fl_type  = exclusive != 0 ? F_WRLCK : F_RDLCK;
+ 	p = xdr_decode_hyper(p, &l_offset);
+ 	xdr_decode_hyper(p, &l_len);
+-	end = l_offset + l_len - 1;
+-
+-	fl->fl_start = (loff_t)l_offset;
+-	if (l_len == 0 || end < 0)
+-		fl->fl_end = OFFSET_MAX;
+-	else
+-		fl->fl_end = (loff_t)end;
++	nlm4svc_set_file_lock_range(fl, l_offset, l_len);
+ 	error = 0;
+ out:
+ 	return error;
+diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
+index b11f2afa84f1f..99fffc9cb9585 100644
+--- a/fs/lockd/clntproc.c
++++ b/fs/lockd/clntproc.c
+@@ -794,9 +794,6 @@ static void nlmclnt_cancel_callback(struct rpc_task *task, void *data)
+ 		goto retry_cancel;
+ 	}
+ 
+-	dprintk("lockd: cancel status %u (task %u)\n",
+-			status, task->tk_pid);
+-
+ 	switch (status) {
+ 	case NLM_LCK_GRANTED:
+ 	case NLM_LCK_DENIED_GRACE_PERIOD:
+diff --git a/fs/lockd/host.c b/fs/lockd/host.c
+index 771c289f6df7f..cdc8e12cdac44 100644
+--- a/fs/lockd/host.c
++++ b/fs/lockd/host.c
+@@ -163,8 +163,8 @@ static struct nlm_host *nlm_alloc_host(struct nlm_lookup_host_info *ni,
+ 	host->h_nsmhandle  = nsm;
+ 	host->h_addrbuf    = nsm->sm_addrbuf;
+ 	host->net	   = ni->net;
+-	host->h_cred	   = get_cred(ni->cred),
+-	strlcpy(host->nodename, utsname()->nodename, sizeof(host->nodename));
++	host->h_cred	   = get_cred(ni->cred);
++	strscpy(host->nodename, utsname()->nodename, sizeof(host->nodename));
+ 
+ out:
+ 	return host;
+diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
+index 1a639e34847dd..5579e67da17db 100644
+--- a/fs/lockd/svc.c
++++ b/fs/lockd/svc.c
+@@ -54,13 +54,9 @@ EXPORT_SYMBOL_GPL(nlmsvc_ops);
+ 
+ static DEFINE_MUTEX(nlmsvc_mutex);
+ static unsigned int		nlmsvc_users;
+-static struct task_struct	*nlmsvc_task;
+-static struct svc_rqst		*nlmsvc_rqst;
++static struct svc_serv		*nlmsvc_serv;
+ unsigned long			nlmsvc_timeout;
+ 
+-static atomic_t nlm_ntf_refcnt = ATOMIC_INIT(0);
+-static DECLARE_WAIT_QUEUE_HEAD(nlm_ntf_wq);
+-
+ unsigned int lockd_net_id;
+ 
+ /*
+@@ -184,6 +180,10 @@ lockd(void *vrqstp)
+ 	nlm_shutdown_hosts();
+ 	cancel_delayed_work_sync(&ln->grace_period_end);
+ 	locks_end_grace(&ln->lockd_manager);
++
++	dprintk("lockd_down: service stopped\n");
++
++	svc_exit_thread(rqstp);
+ 	return 0;
+ }
+ 
+@@ -196,8 +196,8 @@ static int create_lockd_listener(struct svc_serv *serv, const char *name,
+ 
+ 	xprt = svc_find_xprt(serv, name, net, family, 0);
+ 	if (xprt == NULL)
+-		return svc_create_xprt(serv, name, net, family, port,
+-						SVC_SOCK_DEFAULTS, cred);
++		return svc_xprt_create(serv, name, net, family, port,
++				       SVC_SOCK_DEFAULTS, cred);
+ 	svc_xprt_put(xprt);
+ 	return 0;
+ }
+@@ -247,7 +247,8 @@ static int make_socks(struct svc_serv *serv, struct net *net,
+ 	if (warned++ == 0)
+ 		printk(KERN_WARNING
+ 			"lockd_up: makesock failed, error=%d\n", err);
+-	svc_shutdown_net(serv, net);
++	svc_xprt_destroy_all(serv, net);
++	svc_rpcb_cleanup(serv, net);
+ 	return err;
+ }
+ 
+@@ -285,13 +286,12 @@ static void lockd_down_net(struct svc_serv *serv, struct net *net)
+ 			nlm_shutdown_hosts_net(net);
+ 			cancel_delayed_work_sync(&ln->grace_period_end);
+ 			locks_end_grace(&ln->lockd_manager);
+-			svc_shutdown_net(serv, net);
+-			dprintk("%s: per-net data destroyed; net=%x\n",
+-				__func__, net->ns.inum);
++			svc_xprt_destroy_all(serv, net);
++			svc_rpcb_cleanup(serv, net);
+ 		}
+ 	} else {
+-		pr_err("%s: no users! task=%p, net=%x\n",
+-			__func__, nlmsvc_task, net->ns.inum);
++		pr_err("%s: no users! net=%x\n",
++			__func__, net->ns.inum);
+ 		BUG();
+ 	}
+ }
+@@ -302,20 +302,16 @@ static int lockd_inetaddr_event(struct notifier_block *this,
+ 	struct in_ifaddr *ifa = (struct in_ifaddr *)ptr;
+ 	struct sockaddr_in sin;
+ 
+-	if ((event != NETDEV_DOWN) ||
+-	    !atomic_inc_not_zero(&nlm_ntf_refcnt))
++	if (event != NETDEV_DOWN)
+ 		goto out;
+ 
+-	if (nlmsvc_rqst) {
++	if (nlmsvc_serv) {
+ 		dprintk("lockd_inetaddr_event: removed %pI4\n",
+ 			&ifa->ifa_local);
+ 		sin.sin_family = AF_INET;
+ 		sin.sin_addr.s_addr = ifa->ifa_local;
+-		svc_age_temp_xprts_now(nlmsvc_rqst->rq_server,
+-			(struct sockaddr *)&sin);
++		svc_age_temp_xprts_now(nlmsvc_serv, (struct sockaddr *)&sin);
+ 	}
+-	atomic_dec(&nlm_ntf_refcnt);
+-	wake_up(&nlm_ntf_wq);
+ 
+ out:
+ 	return NOTIFY_DONE;
+@@ -332,21 +328,17 @@ static int lockd_inet6addr_event(struct notifier_block *this,
+ 	struct inet6_ifaddr *ifa = (struct inet6_ifaddr *)ptr;
+ 	struct sockaddr_in6 sin6;
+ 
+-	if ((event != NETDEV_DOWN) ||
+-	    !atomic_inc_not_zero(&nlm_ntf_refcnt))
++	if (event != NETDEV_DOWN)
+ 		goto out;
+ 
+-	if (nlmsvc_rqst) {
++	if (nlmsvc_serv) {
+ 		dprintk("lockd_inet6addr_event: removed %pI6\n", &ifa->addr);
+ 		sin6.sin6_family = AF_INET6;
+ 		sin6.sin6_addr = ifa->addr;
+ 		if (ipv6_addr_type(&sin6.sin6_addr) & IPV6_ADDR_LINKLOCAL)
+ 			sin6.sin6_scope_id = ifa->idev->dev->ifindex;
+-		svc_age_temp_xprts_now(nlmsvc_rqst->rq_server,
+-			(struct sockaddr *)&sin6);
++		svc_age_temp_xprts_now(nlmsvc_serv, (struct sockaddr *)&sin6);
+ 	}
+-	atomic_dec(&nlm_ntf_refcnt);
+-	wake_up(&nlm_ntf_wq);
+ 
+ out:
+ 	return NOTIFY_DONE;
+@@ -357,86 +349,14 @@ static struct notifier_block lockd_inet6addr_notifier = {
+ };
+ #endif
+ 
+-static void lockd_unregister_notifiers(void)
+-{
+-	unregister_inetaddr_notifier(&lockd_inetaddr_notifier);
+-#if IS_ENABLED(CONFIG_IPV6)
+-	unregister_inet6addr_notifier(&lockd_inet6addr_notifier);
+-#endif
+-	wait_event(nlm_ntf_wq, atomic_read(&nlm_ntf_refcnt) == 0);
+-}
+-
+-static void lockd_svc_exit_thread(void)
+-{
+-	atomic_dec(&nlm_ntf_refcnt);
+-	lockd_unregister_notifiers();
+-	svc_exit_thread(nlmsvc_rqst);
+-}
+-
+-static int lockd_start_svc(struct svc_serv *serv)
++static int lockd_get(void)
+ {
++	struct svc_serv *serv;
+ 	int error;
+ 
+-	if (nlmsvc_rqst)
++	if (nlmsvc_serv) {
++		nlmsvc_users++;
+ 		return 0;
+-
+-	/*
+-	 * Create the kernel thread and wait for it to start.
+-	 */
+-	nlmsvc_rqst = svc_prepare_thread(serv, &serv->sv_pools[0], NUMA_NO_NODE);
+-	if (IS_ERR(nlmsvc_rqst)) {
+-		error = PTR_ERR(nlmsvc_rqst);
+-		printk(KERN_WARNING
+-			"lockd_up: svc_rqst allocation failed, error=%d\n",
+-			error);
+-		lockd_unregister_notifiers();
+-		goto out_rqst;
+-	}
+-
+-	atomic_inc(&nlm_ntf_refcnt);
+-	svc_sock_update_bufs(serv);
+-	serv->sv_maxconn = nlm_max_connections;
+-
+-	nlmsvc_task = kthread_create(lockd, nlmsvc_rqst, "%s", serv->sv_name);
+-	if (IS_ERR(nlmsvc_task)) {
+-		error = PTR_ERR(nlmsvc_task);
+-		printk(KERN_WARNING
+-			"lockd_up: kthread_run failed, error=%d\n", error);
+-		goto out_task;
+-	}
+-	nlmsvc_rqst->rq_task = nlmsvc_task;
+-	wake_up_process(nlmsvc_task);
+-
+-	dprintk("lockd_up: service started\n");
+-	return 0;
+-
+-out_task:
+-	lockd_svc_exit_thread();
+-	nlmsvc_task = NULL;
+-out_rqst:
+-	nlmsvc_rqst = NULL;
+-	return error;
+-}
+-
+-static const struct svc_serv_ops lockd_sv_ops = {
+-	.svo_shutdown		= svc_rpcb_cleanup,
+-	.svo_enqueue_xprt	= svc_xprt_do_enqueue,
+-};
+-
+-static struct svc_serv *lockd_create_svc(void)
+-{
+-	struct svc_serv *serv;
+-
+-	/*
+-	 * Check whether we're already up and running.
+-	 */
+-	if (nlmsvc_rqst) {
+-		/*
+-		 * Note: increase service usage, because later in case of error
+-		 * svc_destroy() will be called.
+-		 */
+-		svc_get(nlmsvc_rqst->rq_server);
+-		return nlmsvc_rqst->rq_server;
+ 	}
+ 
+ 	/*
+@@ -451,17 +371,44 @@ static struct svc_serv *lockd_create_svc(void)
+ 		nlm_timeout = LOCKD_DFLT_TIMEO;
+ 	nlmsvc_timeout = nlm_timeout * HZ;
+ 
+-	serv = svc_create(&nlmsvc_program, LOCKD_BUFSIZE, &lockd_sv_ops);
++	serv = svc_create(&nlmsvc_program, LOCKD_BUFSIZE, lockd);
+ 	if (!serv) {
+ 		printk(KERN_WARNING "lockd_up: create service failed\n");
+-		return ERR_PTR(-ENOMEM);
++		return -ENOMEM;
+ 	}
++
++	serv->sv_maxconn = nlm_max_connections;
++	error = svc_set_num_threads(serv, NULL, 1);
++	/* The thread now holds the only reference */
++	svc_put(serv);
++	if (error < 0)
++		return error;
++
++	nlmsvc_serv = serv;
+ 	register_inetaddr_notifier(&lockd_inetaddr_notifier);
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	register_inet6addr_notifier(&lockd_inet6addr_notifier);
+ #endif
+ 	dprintk("lockd_up: service created\n");
+-	return serv;
++	nlmsvc_users++;
++	return 0;
++}
++
++static void lockd_put(void)
++{
++	if (WARN(nlmsvc_users <= 0, "lockd_down: no users!\n"))
++		return;
++	if (--nlmsvc_users)
++		return;
++
++	unregister_inetaddr_notifier(&lockd_inetaddr_notifier);
++#if IS_ENABLED(CONFIG_IPV6)
++	unregister_inet6addr_notifier(&lockd_inet6addr_notifier);
++#endif
++
++	svc_set_num_threads(nlmsvc_serv, NULL, 0);
++	nlmsvc_serv = NULL;
++	dprintk("lockd_down: service destroyed\n");
+ }
+ 
+ /*
+@@ -469,36 +416,21 @@ static struct svc_serv *lockd_create_svc(void)
+  */
+ int lockd_up(struct net *net, const struct cred *cred)
+ {
+-	struct svc_serv *serv;
+ 	int error;
+ 
+ 	mutex_lock(&nlmsvc_mutex);
+ 
+-	serv = lockd_create_svc();
+-	if (IS_ERR(serv)) {
+-		error = PTR_ERR(serv);
+-		goto err_create;
+-	}
++	error = lockd_get();
++	if (error)
++		goto err;
+ 
+-	error = lockd_up_net(serv, net, cred);
++	error = lockd_up_net(nlmsvc_serv, net, cred);
+ 	if (error < 0) {
+-		lockd_unregister_notifiers();
+-		goto err_put;
++		lockd_put();
++		goto err;
+ 	}
+ 
+-	error = lockd_start_svc(serv);
+-	if (error < 0) {
+-		lockd_down_net(serv, net);
+-		goto err_put;
+-	}
+-	nlmsvc_users++;
+-	/*
+-	 * Note: svc_serv structures have an initial use count of 1,
+-	 * so we exit through here on both success and failure.
+-	 */
+-err_put:
+-	svc_destroy(serv);
+-err_create:
++err:
+ 	mutex_unlock(&nlmsvc_mutex);
+ 	return error;
+ }
+@@ -511,27 +443,8 @@ void
+ lockd_down(struct net *net)
+ {
+ 	mutex_lock(&nlmsvc_mutex);
+-	lockd_down_net(nlmsvc_rqst->rq_server, net);
+-	if (nlmsvc_users) {
+-		if (--nlmsvc_users)
+-			goto out;
+-	} else {
+-		printk(KERN_ERR "lockd_down: no users! task=%p\n",
+-			nlmsvc_task);
+-		BUG();
+-	}
+-
+-	if (!nlmsvc_task) {
+-		printk(KERN_ERR "lockd_down: no lockd running.\n");
+-		BUG();
+-	}
+-	kthread_stop(nlmsvc_task);
+-	dprintk("lockd_down: service stopped\n");
+-	lockd_svc_exit_thread();
+-	dprintk("lockd_down: service destroyed\n");
+-	nlmsvc_task = NULL;
+-	nlmsvc_rqst = NULL;
+-out:
++	lockd_down_net(nlmsvc_serv, net);
++	lockd_put();
+ 	mutex_unlock(&nlmsvc_mutex);
+ }
+ EXPORT_SYMBOL_GPL(lockd_down);
+@@ -584,7 +497,7 @@ static struct ctl_table nlm_sysctls[] = {
+ 		.data		= &nsm_use_hostnames,
+ 		.maxlen		= sizeof(int),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dobool,
+ 	},
+ 	{
+ 		.procname	= "nsm_local_state",
+@@ -649,6 +562,7 @@ static int lockd_authenticate(struct svc_rqst *rqstp)
+ 	switch (rqstp->rq_authop->flavour) {
+ 		case RPC_AUTH_NULL:
+ 		case RPC_AUTH_UNIX:
++			rqstp->rq_auth_stat = rpc_auth_ok;
+ 			if (rqstp->rq_proc == 0)
+ 				return SVC_OK;
+ 			if (is_callback(rqstp->rq_proc)) {
+@@ -659,6 +573,7 @@ static int lockd_authenticate(struct svc_rqst *rqstp)
+ 			}
+ 			return svc_set_client(rqstp);
+ 	}
++	rqstp->rq_auth_stat = rpc_autherr_badcred;
+ 	return SVC_DENIED;
+ }
+ 
+@@ -766,6 +681,44 @@ static void __exit exit_nlm(void)
+ module_init(init_nlm);
+ module_exit(exit_nlm);
+ 
++/**
++ * nlmsvc_dispatch - Process an NLM Request
++ * @rqstp: incoming request
++ * @statp: pointer to location of accept_stat field in RPC Reply buffer
++ *
++ * Return values:
++ *  %0: Processing complete; do not send a Reply
++ *  %1: Processing complete; send Reply in rqstp->rq_res
++ */
++static int nlmsvc_dispatch(struct svc_rqst *rqstp, __be32 *statp)
++{
++	const struct svc_procedure *procp = rqstp->rq_procinfo;
++
++	svcxdr_init_decode(rqstp);
++	if (!procp->pc_decode(rqstp, &rqstp->rq_arg_stream))
++		goto out_decode_err;
++
++	*statp = procp->pc_func(rqstp);
++	if (*statp == rpc_drop_reply)
++		return 0;
++	if (*statp != rpc_success)
++		return 1;
++
++	svcxdr_init_encode(rqstp);
++	if (!procp->pc_encode(rqstp, &rqstp->rq_res_stream))
++		goto out_encode_err;
++
++	return 1;
++
++out_decode_err:
++	*statp = rpc_garbage_args;
++	return 1;
++
++out_encode_err:
++	*statp = rpc_system_err;
++	return 1;
++}
++
+ /*
+  * Define NLM program and procedures
+  */
+@@ -775,6 +728,7 @@ static const struct svc_version	nlmsvc_version1 = {
+ 	.vs_nproc	= 17,
+ 	.vs_proc	= nlmsvc_procedures,
+ 	.vs_count	= nlmsvc_version1_count,
++	.vs_dispatch	= nlmsvc_dispatch,
+ 	.vs_xdrsize	= NLMSVC_XDRSIZE,
+ };
+ static unsigned int nlmsvc_version3_count[24];
+@@ -783,6 +737,7 @@ static const struct svc_version	nlmsvc_version3 = {
+ 	.vs_nproc	= 24,
+ 	.vs_proc	= nlmsvc_procedures,
+ 	.vs_count	= nlmsvc_version3_count,
++	.vs_dispatch	= nlmsvc_dispatch,
+ 	.vs_xdrsize	= NLMSVC_XDRSIZE,
+ };
+ #ifdef CONFIG_LOCKD_V4
+@@ -792,6 +747,7 @@ static const struct svc_version	nlmsvc_version4 = {
+ 	.vs_nproc	= 24,
+ 	.vs_proc	= nlmsvc_procedures4,
+ 	.vs_count	= nlmsvc_version4_count,
++	.vs_dispatch	= nlmsvc_dispatch,
+ 	.vs_xdrsize	= NLMSVC_XDRSIZE,
+ };
+ #endif
+diff --git a/fs/lockd/svc4proc.c b/fs/lockd/svc4proc.c
+index fa41dda399259..b72023a6b4c16 100644
+--- a/fs/lockd/svc4proc.c
++++ b/fs/lockd/svc4proc.c
+@@ -32,6 +32,10 @@ nlm4svc_retrieve_args(struct svc_rqst *rqstp, struct nlm_args *argp,
+ 	if (!nlmsvc_ops)
+ 		return nlm_lck_denied_nolocks;
+ 
++	if (lock->lock_start > OFFSET_MAX ||
++	    (lock->lock_len && ((lock->lock_len - 1) > (OFFSET_MAX - lock->lock_start))))
++		return nlm4_fbig;
++
+ 	/* Obtain host handle */
+ 	if (!(host = nlmsvc_lookup_host(rqstp, lock->caller, lock->len))
+ 	 || (argp->monitor && nsm_monitor(host) < 0))
+@@ -40,13 +44,21 @@ nlm4svc_retrieve_args(struct svc_rqst *rqstp, struct nlm_args *argp,
+ 
+ 	/* Obtain file pointer. Not used by FREE_ALL call. */
+ 	if (filp != NULL) {
+-		if ((error = nlm_lookup_file(rqstp, &file, &lock->fh)) != 0)
++		int mode = lock_to_openmode(&lock->fl);
++
++		error = nlm_lookup_file(rqstp, &file, lock);
++		if (error)
+ 			goto no_locks;
+ 		*filp = file;
+ 
+ 		/* Set up the missing parts of the file_lock structure */
+-		lock->fl.fl_file  = file->f_file;
++		lock->fl.fl_flags = FL_POSIX;
++		lock->fl.fl_file  = file->f_file[mode];
+ 		lock->fl.fl_pid = current->tgid;
++		lock->fl.fl_start = (loff_t)lock->lock_start;
++		lock->fl.fl_end = lock->lock_len ?
++				   (loff_t)(lock->lock_start + lock->lock_len - 1) :
++				   OFFSET_MAX;
+ 		lock->fl.fl_lmops = &nlmsvc_lock_operations;
+ 		nlmsvc_locks_init_private(&lock->fl, host, (pid_t)lock->svid);
+ 		if (!lock->fl.fl_owner) {
+@@ -84,6 +96,7 @@ __nlm4svc_proc_test(struct svc_rqst *rqstp, struct nlm_res *resp)
+ 	struct nlm_args *argp = rqstp->rq_argp;
+ 	struct nlm_host	*host;
+ 	struct nlm_file	*file;
++	struct nlm_lockowner *test_owner;
+ 	__be32 rc = rpc_success;
+ 
+ 	dprintk("lockd: TEST4        called\n");
+@@ -93,6 +106,7 @@ __nlm4svc_proc_test(struct svc_rqst *rqstp, struct nlm_res *resp)
+ 	if ((resp->status = nlm4svc_retrieve_args(rqstp, argp, &host, &file)))
+ 		return resp->status == nlm_drop_reply ? rpc_drop_reply :rpc_success;
+ 
++	test_owner = argp->lock.fl.fl_owner;
+ 	/* Now check for conflicting locks */
+ 	resp->status = nlmsvc_testlock(rqstp, file, host, &argp->lock, &resp->lock, &resp->cookie);
+ 	if (resp->status == nlm_drop_reply)
+@@ -100,7 +114,7 @@ __nlm4svc_proc_test(struct svc_rqst *rqstp, struct nlm_res *resp)
+ 	else
+ 		dprintk("lockd: TEST4        status %d\n", ntohl(resp->status));
+ 
+-	nlmsvc_release_lockowner(&argp->lock);
++	nlmsvc_put_lockowner(test_owner);
+ 	nlmsvc_release_host(host);
+ 	nlm_release_file(file);
+ 	return rc;
+@@ -266,8 +280,6 @@ nlm4svc_proc_granted(struct svc_rqst *rqstp)
+  */
+ static void nlm4svc_callback_exit(struct rpc_task *task, void *data)
+ {
+-	dprintk("lockd: %5u callback returned %d\n", task->tk_pid,
+-			-task->tk_status);
+ }
+ 
+ static void nlm4svc_callback_release(void *data)
+@@ -510,191 +522,239 @@ const struct svc_procedure nlmsvc_procedures4[24] = {
+ 		.pc_decode = nlm4svc_decode_void,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_void),
++		.pc_argzero = sizeof(struct nlm_void),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "NULL",
+ 	},
+ 	[NLMPROC_TEST] = {
+ 		.pc_func = nlm4svc_proc_test,
+ 		.pc_decode = nlm4svc_decode_testargs,
+ 		.pc_encode = nlm4svc_encode_testres,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St+2+No+Rg,
++		.pc_name = "TEST",
+ 	},
+ 	[NLMPROC_LOCK] = {
+ 		.pc_func = nlm4svc_proc_lock,
+ 		.pc_decode = nlm4svc_decode_lockargs,
+ 		.pc_encode = nlm4svc_encode_res,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St,
++		.pc_name = "LOCK",
+ 	},
+ 	[NLMPROC_CANCEL] = {
+ 		.pc_func = nlm4svc_proc_cancel,
+ 		.pc_decode = nlm4svc_decode_cancargs,
+ 		.pc_encode = nlm4svc_encode_res,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St,
++		.pc_name = "CANCEL",
+ 	},
+ 	[NLMPROC_UNLOCK] = {
+ 		.pc_func = nlm4svc_proc_unlock,
+ 		.pc_decode = nlm4svc_decode_unlockargs,
+ 		.pc_encode = nlm4svc_encode_res,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St,
++		.pc_name = "UNLOCK",
+ 	},
+ 	[NLMPROC_GRANTED] = {
+ 		.pc_func = nlm4svc_proc_granted,
+ 		.pc_decode = nlm4svc_decode_testargs,
+ 		.pc_encode = nlm4svc_encode_res,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St,
++		.pc_name = "GRANTED",
+ 	},
+ 	[NLMPROC_TEST_MSG] = {
+ 		.pc_func = nlm4svc_proc_test_msg,
+ 		.pc_decode = nlm4svc_decode_testargs,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "TEST_MSG",
+ 	},
+ 	[NLMPROC_LOCK_MSG] = {
+ 		.pc_func = nlm4svc_proc_lock_msg,
+ 		.pc_decode = nlm4svc_decode_lockargs,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "LOCK_MSG",
+ 	},
+ 	[NLMPROC_CANCEL_MSG] = {
+ 		.pc_func = nlm4svc_proc_cancel_msg,
+ 		.pc_decode = nlm4svc_decode_cancargs,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "CANCEL_MSG",
+ 	},
+ 	[NLMPROC_UNLOCK_MSG] = {
+ 		.pc_func = nlm4svc_proc_unlock_msg,
+ 		.pc_decode = nlm4svc_decode_unlockargs,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "UNLOCK_MSG",
+ 	},
+ 	[NLMPROC_GRANTED_MSG] = {
+ 		.pc_func = nlm4svc_proc_granted_msg,
+ 		.pc_decode = nlm4svc_decode_testargs,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "GRANTED_MSG",
+ 	},
+ 	[NLMPROC_TEST_RES] = {
+ 		.pc_func = nlm4svc_proc_null,
+ 		.pc_decode = nlm4svc_decode_void,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_res),
++		.pc_argzero = sizeof(struct nlm_res),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "TEST_RES",
+ 	},
+ 	[NLMPROC_LOCK_RES] = {
+ 		.pc_func = nlm4svc_proc_null,
+ 		.pc_decode = nlm4svc_decode_void,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_res),
++		.pc_argzero = sizeof(struct nlm_res),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "LOCK_RES",
+ 	},
+ 	[NLMPROC_CANCEL_RES] = {
+ 		.pc_func = nlm4svc_proc_null,
+ 		.pc_decode = nlm4svc_decode_void,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_res),
++		.pc_argzero = sizeof(struct nlm_res),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "CANCEL_RES",
+ 	},
+ 	[NLMPROC_UNLOCK_RES] = {
+ 		.pc_func = nlm4svc_proc_null,
+ 		.pc_decode = nlm4svc_decode_void,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_res),
++		.pc_argzero = sizeof(struct nlm_res),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "UNLOCK_RES",
+ 	},
+ 	[NLMPROC_GRANTED_RES] = {
+ 		.pc_func = nlm4svc_proc_granted_res,
+ 		.pc_decode = nlm4svc_decode_res,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_res),
++		.pc_argzero = sizeof(struct nlm_res),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "GRANTED_RES",
+ 	},
+ 	[NLMPROC_NSM_NOTIFY] = {
+ 		.pc_func = nlm4svc_proc_sm_notify,
+ 		.pc_decode = nlm4svc_decode_reboot,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_reboot),
++		.pc_argzero = sizeof(struct nlm_reboot),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "SM_NOTIFY",
+ 	},
+ 	[17] = {
+ 		.pc_func = nlm4svc_proc_unused,
+ 		.pc_decode = nlm4svc_decode_void,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_void),
++		.pc_argzero = sizeof(struct nlm_void),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = 0,
++		.pc_name = "UNUSED",
+ 	},
+ 	[18] = {
+ 		.pc_func = nlm4svc_proc_unused,
+ 		.pc_decode = nlm4svc_decode_void,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_void),
++		.pc_argzero = sizeof(struct nlm_void),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = 0,
++		.pc_name = "UNUSED",
+ 	},
+ 	[19] = {
+ 		.pc_func = nlm4svc_proc_unused,
+ 		.pc_decode = nlm4svc_decode_void,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_void),
++		.pc_argzero = sizeof(struct nlm_void),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = 0,
++		.pc_name = "UNUSED",
+ 	},
+ 	[NLMPROC_SHARE] = {
+ 		.pc_func = nlm4svc_proc_share,
+ 		.pc_decode = nlm4svc_decode_shareargs,
+ 		.pc_encode = nlm4svc_encode_shareres,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St+1,
++		.pc_name = "SHARE",
+ 	},
+ 	[NLMPROC_UNSHARE] = {
+ 		.pc_func = nlm4svc_proc_unshare,
+ 		.pc_decode = nlm4svc_decode_shareargs,
+ 		.pc_encode = nlm4svc_encode_shareres,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St+1,
++		.pc_name = "UNSHARE",
+ 	},
+ 	[NLMPROC_NM_LOCK] = {
+ 		.pc_func = nlm4svc_proc_nm_lock,
+ 		.pc_decode = nlm4svc_decode_lockargs,
+ 		.pc_encode = nlm4svc_encode_res,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St,
++		.pc_name = "NM_LOCK",
+ 	},
+ 	[NLMPROC_FREE_ALL] = {
+ 		.pc_func = nlm4svc_proc_free_all,
+ 		.pc_decode = nlm4svc_decode_notify,
+ 		.pc_encode = nlm4svc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "FREE_ALL",
+ 	},
+ };
+diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
+index 273a81971ed57..4e30f3c509701 100644
+--- a/fs/lockd/svclock.c
++++ b/fs/lockd/svclock.c
+@@ -31,6 +31,7 @@
+ #include <linux/lockd/nlm.h>
+ #include <linux/lockd/lockd.h>
+ #include <linux/kthread.h>
++#include <linux/exportfs.h>
+ 
+ #define NLMDBG_FACILITY		NLMDBG_SVCLOCK
+ 
+@@ -339,7 +340,7 @@ nlmsvc_get_lockowner(struct nlm_lockowner *lockowner)
+ 	return lockowner;
+ }
+ 
+-static void nlmsvc_put_lockowner(struct nlm_lockowner *lockowner)
++void nlmsvc_put_lockowner(struct nlm_lockowner *lockowner)
+ {
+ 	if (!refcount_dec_and_lock(&lockowner->count, &lockowner->host->h_lock))
+ 		return;
+@@ -469,18 +470,27 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 	    struct nlm_host *host, struct nlm_lock *lock, int wait,
+ 	    struct nlm_cookie *cookie, int reclaim)
+ {
++#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
++	struct inode		*inode = nlmsvc_file_inode(file);
++#endif
+ 	struct nlm_block	*block = NULL;
+ 	int			error;
++	int			mode;
++	int			async_block = 0;
+ 	__be32			ret;
+ 
+ 	dprintk("lockd: nlmsvc_lock(%s/%ld, ty=%d, pi=%d, %Ld-%Ld, bl=%d)\n",
+-				locks_inode(file->f_file)->i_sb->s_id,
+-				locks_inode(file->f_file)->i_ino,
++				inode->i_sb->s_id, inode->i_ino,
+ 				lock->fl.fl_type, lock->fl.fl_pid,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end,
+ 				wait);
+ 
++	if (nlmsvc_file_file(file)->f_op->lock) {
++		async_block = wait;
++		wait = 0;
++	}
++
+ 	/* Lock file against concurrent access */
+ 	mutex_lock(&file->f_mutex);
+ 	/* Get existing block (in case client is busy-waiting)
+@@ -524,7 +534,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 
+ 	if (!wait)
+ 		lock->fl.fl_flags &= ~FL_SLEEP;
+-	error = vfs_lock_file(file->f_file, F_SETLK, &lock->fl, NULL);
++	mode = lock_to_openmode(&lock->fl);
++	error = vfs_lock_file(file->f_file[mode], F_SETLK, &lock->fl, NULL);
+ 	lock->fl.fl_flags &= ~FL_SLEEP;
+ 
+ 	dprintk("lockd: vfs_lock_file returned %d\n", error);
+@@ -540,7 +551,7 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 			 */
+ 			if (wait)
+ 				break;
+-			ret = nlm_lck_denied;
++			ret = async_block ? nlm_lck_blocked : nlm_lck_denied;
+ 			goto out;
+ 		case FILE_LOCK_DEFERRED:
+ 			if (wait)
+@@ -577,12 +588,12 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 		struct nlm_lock *conflock, struct nlm_cookie *cookie)
+ {
+ 	int			error;
++	int			mode;
+ 	__be32			ret;
+-	struct nlm_lockowner	*test_owner;
+ 
+ 	dprintk("lockd: nlmsvc_testlock(%s/%ld, ty=%d, %Ld-%Ld)\n",
+-				locks_inode(file->f_file)->i_sb->s_id,
+-				locks_inode(file->f_file)->i_ino,
++				nlmsvc_file_inode(file)->i_sb->s_id,
++				nlmsvc_file_inode(file)->i_ino,
+ 				lock->fl.fl_type,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end);
+@@ -592,10 +603,8 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 		goto out;
+ 	}
+ 
+-	/* If there's a conflicting lock, remember to clean up the test lock */
+-	test_owner = (struct nlm_lockowner *)lock->fl.fl_owner;
+-
+-	error = vfs_test_lock(file->f_file, &lock->fl);
++	mode = lock_to_openmode(&lock->fl);
++	error = vfs_test_lock(file->f_file[mode], &lock->fl);
+ 	if (error) {
+ 		/* We can't currently deal with deferred test requests */
+ 		if (error == FILE_LOCK_DEFERRED)
+@@ -622,10 +631,6 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 	conflock->fl.fl_end = lock->fl.fl_end;
+ 	locks_release_private(&lock->fl);
+ 
+-	/* Clean up the test lock */
+-	lock->fl.fl_owner = NULL;
+-	nlmsvc_put_lockowner(test_owner);
+-
+ 	ret = nlm_lck_denied;
+ out:
+ 	return ret;
+@@ -641,11 +646,11 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file,
+ __be32
+ nlmsvc_unlock(struct net *net, struct nlm_file *file, struct nlm_lock *lock)
+ {
+-	int	error;
++	int	error = 0;
+ 
+ 	dprintk("lockd: nlmsvc_unlock(%s/%ld, pi=%d, %Ld-%Ld)\n",
+-				locks_inode(file->f_file)->i_sb->s_id,
+-				locks_inode(file->f_file)->i_ino,
++				nlmsvc_file_inode(file)->i_sb->s_id,
++				nlmsvc_file_inode(file)->i_ino,
+ 				lock->fl.fl_pid,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end);
+@@ -654,7 +659,14 @@ nlmsvc_unlock(struct net *net, struct nlm_file *file, struct nlm_lock *lock)
+ 	nlmsvc_cancel_blocked(net, file, lock);
+ 
+ 	lock->fl.fl_type = F_UNLCK;
+-	error = vfs_lock_file(file->f_file, F_SETLK, &lock->fl, NULL);
++	lock->fl.fl_file = file->f_file[O_RDONLY];
++	if (lock->fl.fl_file)
++		error = vfs_lock_file(lock->fl.fl_file, F_SETLK,
++					&lock->fl, NULL);
++	lock->fl.fl_file = file->f_file[O_WRONLY];
++	if (lock->fl.fl_file)
++		error |= vfs_lock_file(lock->fl.fl_file, F_SETLK,
++					&lock->fl, NULL);
+ 
+ 	return (error < 0)? nlm_lck_denied_nolocks : nlm_granted;
+ }
+@@ -671,10 +683,11 @@ nlmsvc_cancel_blocked(struct net *net, struct nlm_file *file, struct nlm_lock *l
+ {
+ 	struct nlm_block	*block;
+ 	int status = 0;
++	int mode;
+ 
+ 	dprintk("lockd: nlmsvc_cancel(%s/%ld, pi=%d, %Ld-%Ld)\n",
+-				locks_inode(file->f_file)->i_sb->s_id,
+-				locks_inode(file->f_file)->i_ino,
++				nlmsvc_file_inode(file)->i_sb->s_id,
++				nlmsvc_file_inode(file)->i_ino,
+ 				lock->fl.fl_pid,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end);
+@@ -686,8 +699,10 @@ nlmsvc_cancel_blocked(struct net *net, struct nlm_file *file, struct nlm_lock *l
+ 	block = nlmsvc_lookup_block(file, lock);
+ 	mutex_unlock(&file->f_mutex);
+ 	if (block != NULL) {
+-		vfs_cancel_lock(block->b_file->f_file,
+-				&block->b_call->a_args.lock.fl);
++		struct file_lock *fl = &block->b_call->a_args.lock.fl;
++
++		mode = lock_to_openmode(fl);
++		vfs_cancel_lock(block->b_file->f_file[mode], fl);
+ 		status = nlmsvc_unlink_block(block);
+ 		nlmsvc_release_block(block);
+ 	}
+@@ -803,6 +818,7 @@ nlmsvc_grant_blocked(struct nlm_block *block)
+ {
+ 	struct nlm_file		*file = block->b_file;
+ 	struct nlm_lock		*lock = &block->b_call->a_args.lock;
++	int			mode;
+ 	int			error;
+ 	loff_t			fl_start, fl_end;
+ 
+@@ -828,7 +844,8 @@ nlmsvc_grant_blocked(struct nlm_block *block)
+ 	lock->fl.fl_flags |= FL_SLEEP;
+ 	fl_start = lock->fl.fl_start;
+ 	fl_end = lock->fl.fl_end;
+-	error = vfs_lock_file(file->f_file, F_SETLK, &lock->fl, NULL);
++	mode = lock_to_openmode(&lock->fl);
++	error = vfs_lock_file(file->f_file[mode], F_SETLK, &lock->fl, NULL);
+ 	lock->fl.fl_flags &= ~FL_SLEEP;
+ 	lock->fl.fl_start = fl_start;
+ 	lock->fl.fl_end = fl_end;
+diff --git a/fs/lockd/svcproc.c b/fs/lockd/svcproc.c
+index 50855f2c1f4b8..32784f508c810 100644
+--- a/fs/lockd/svcproc.c
++++ b/fs/lockd/svcproc.c
+@@ -55,6 +55,7 @@ nlmsvc_retrieve_args(struct svc_rqst *rqstp, struct nlm_args *argp,
+ 	struct nlm_host		*host = NULL;
+ 	struct nlm_file		*file = NULL;
+ 	struct nlm_lock		*lock = &argp->lock;
++	int			mode;
+ 	__be32			error = 0;
+ 
+ 	/* nfsd callbacks must have been installed for this procedure */
+@@ -69,13 +70,15 @@ nlmsvc_retrieve_args(struct svc_rqst *rqstp, struct nlm_args *argp,
+ 
+ 	/* Obtain file pointer. Not used by FREE_ALL call. */
+ 	if (filp != NULL) {
+-		error = cast_status(nlm_lookup_file(rqstp, &file, &lock->fh));
++		error = cast_status(nlm_lookup_file(rqstp, &file, lock));
+ 		if (error != 0)
+ 			goto no_locks;
+ 		*filp = file;
+ 
+ 		/* Set up the missing parts of the file_lock structure */
+-		lock->fl.fl_file  = file->f_file;
++		mode = lock_to_openmode(&lock->fl);
++		lock->fl.fl_flags = FL_POSIX;
++		lock->fl.fl_file  = file->f_file[mode];
+ 		lock->fl.fl_pid = current->tgid;
+ 		lock->fl.fl_lmops = &nlmsvc_lock_operations;
+ 		nlmsvc_locks_init_private(&lock->fl, host, (pid_t)lock->svid);
+@@ -114,6 +117,7 @@ __nlmsvc_proc_test(struct svc_rqst *rqstp, struct nlm_res *resp)
+ 	struct nlm_args *argp = rqstp->rq_argp;
+ 	struct nlm_host	*host;
+ 	struct nlm_file	*file;
++	struct nlm_lockowner *test_owner;
+ 	__be32 rc = rpc_success;
+ 
+ 	dprintk("lockd: TEST          called\n");
+@@ -123,6 +127,8 @@ __nlmsvc_proc_test(struct svc_rqst *rqstp, struct nlm_res *resp)
+ 	if ((resp->status = nlmsvc_retrieve_args(rqstp, argp, &host, &file)))
+ 		return resp->status == nlm_drop_reply ? rpc_drop_reply :rpc_success;
+ 
++	test_owner = argp->lock.fl.fl_owner;
++
+ 	/* Now check for conflicting locks */
+ 	resp->status = cast_status(nlmsvc_testlock(rqstp, file, host, &argp->lock, &resp->lock, &resp->cookie));
+ 	if (resp->status == nlm_drop_reply)
+@@ -131,7 +137,7 @@ __nlmsvc_proc_test(struct svc_rqst *rqstp, struct nlm_res *resp)
+ 		dprintk("lockd: TEST          status %d vers %d\n",
+ 			ntohl(resp->status), rqstp->rq_vers);
+ 
+-	nlmsvc_release_lockowner(&argp->lock);
++	nlmsvc_put_lockowner(test_owner);
+ 	nlmsvc_release_host(host);
+ 	nlm_release_file(file);
+ 	return rc;
+@@ -299,8 +305,6 @@ nlmsvc_proc_granted(struct svc_rqst *rqstp)
+  */
+ static void nlmsvc_callback_exit(struct rpc_task *task, void *data)
+ {
+-	dprintk("lockd: %5u callback returned %d\n", task->tk_pid,
+-			-task->tk_status);
+ }
+ 
+ void nlmsvc_release_call(struct nlm_rqst *call)
+@@ -552,191 +556,239 @@ const struct svc_procedure nlmsvc_procedures[24] = {
+ 		.pc_decode = nlmsvc_decode_void,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_void),
++		.pc_argzero = sizeof(struct nlm_void),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "NULL",
+ 	},
+ 	[NLMPROC_TEST] = {
+ 		.pc_func = nlmsvc_proc_test,
+ 		.pc_decode = nlmsvc_decode_testargs,
+ 		.pc_encode = nlmsvc_encode_testres,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St+2+No+Rg,
++		.pc_name = "TEST",
+ 	},
+ 	[NLMPROC_LOCK] = {
+ 		.pc_func = nlmsvc_proc_lock,
+ 		.pc_decode = nlmsvc_decode_lockargs,
+ 		.pc_encode = nlmsvc_encode_res,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St,
++		.pc_name = "LOCK",
+ 	},
+ 	[NLMPROC_CANCEL] = {
+ 		.pc_func = nlmsvc_proc_cancel,
+ 		.pc_decode = nlmsvc_decode_cancargs,
+ 		.pc_encode = nlmsvc_encode_res,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St,
++		.pc_name = "CANCEL",
+ 	},
+ 	[NLMPROC_UNLOCK] = {
+ 		.pc_func = nlmsvc_proc_unlock,
+ 		.pc_decode = nlmsvc_decode_unlockargs,
+ 		.pc_encode = nlmsvc_encode_res,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St,
++		.pc_name = "UNLOCK",
+ 	},
+ 	[NLMPROC_GRANTED] = {
+ 		.pc_func = nlmsvc_proc_granted,
+ 		.pc_decode = nlmsvc_decode_testargs,
+ 		.pc_encode = nlmsvc_encode_res,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St,
++		.pc_name = "GRANTED",
+ 	},
+ 	[NLMPROC_TEST_MSG] = {
+ 		.pc_func = nlmsvc_proc_test_msg,
+ 		.pc_decode = nlmsvc_decode_testargs,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "TEST_MSG",
+ 	},
+ 	[NLMPROC_LOCK_MSG] = {
+ 		.pc_func = nlmsvc_proc_lock_msg,
+ 		.pc_decode = nlmsvc_decode_lockargs,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "LOCK_MSG",
+ 	},
+ 	[NLMPROC_CANCEL_MSG] = {
+ 		.pc_func = nlmsvc_proc_cancel_msg,
+ 		.pc_decode = nlmsvc_decode_cancargs,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "CANCEL_MSG",
+ 	},
+ 	[NLMPROC_UNLOCK_MSG] = {
+ 		.pc_func = nlmsvc_proc_unlock_msg,
+ 		.pc_decode = nlmsvc_decode_unlockargs,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "UNLOCK_MSG",
+ 	},
+ 	[NLMPROC_GRANTED_MSG] = {
+ 		.pc_func = nlmsvc_proc_granted_msg,
+ 		.pc_decode = nlmsvc_decode_testargs,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "GRANTED_MSG",
+ 	},
+ 	[NLMPROC_TEST_RES] = {
+ 		.pc_func = nlmsvc_proc_null,
+ 		.pc_decode = nlmsvc_decode_void,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_res),
++		.pc_argzero = sizeof(struct nlm_res),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "TEST_RES",
+ 	},
+ 	[NLMPROC_LOCK_RES] = {
+ 		.pc_func = nlmsvc_proc_null,
+ 		.pc_decode = nlmsvc_decode_void,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_res),
++		.pc_argzero = sizeof(struct nlm_res),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "LOCK_RES",
+ 	},
+ 	[NLMPROC_CANCEL_RES] = {
+ 		.pc_func = nlmsvc_proc_null,
+ 		.pc_decode = nlmsvc_decode_void,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_res),
++		.pc_argzero = sizeof(struct nlm_res),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "CANCEL_RES",
+ 	},
+ 	[NLMPROC_UNLOCK_RES] = {
+ 		.pc_func = nlmsvc_proc_null,
+ 		.pc_decode = nlmsvc_decode_void,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_res),
++		.pc_argzero = sizeof(struct nlm_res),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "UNLOCK_RES",
+ 	},
+ 	[NLMPROC_GRANTED_RES] = {
+ 		.pc_func = nlmsvc_proc_granted_res,
+ 		.pc_decode = nlmsvc_decode_res,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_res),
++		.pc_argzero = sizeof(struct nlm_res),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "GRANTED_RES",
+ 	},
+ 	[NLMPROC_NSM_NOTIFY] = {
+ 		.pc_func = nlmsvc_proc_sm_notify,
+ 		.pc_decode = nlmsvc_decode_reboot,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_reboot),
++		.pc_argzero = sizeof(struct nlm_reboot),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "SM_NOTIFY",
+ 	},
+ 	[17] = {
+ 		.pc_func = nlmsvc_proc_unused,
+ 		.pc_decode = nlmsvc_decode_void,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_void),
++		.pc_argzero = sizeof(struct nlm_void),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "UNUSED",
+ 	},
+ 	[18] = {
+ 		.pc_func = nlmsvc_proc_unused,
+ 		.pc_decode = nlmsvc_decode_void,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_void),
++		.pc_argzero = sizeof(struct nlm_void),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "UNUSED",
+ 	},
+ 	[19] = {
+ 		.pc_func = nlmsvc_proc_unused,
+ 		.pc_decode = nlmsvc_decode_void,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_void),
++		.pc_argzero = sizeof(struct nlm_void),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = St,
++		.pc_name = "UNUSED",
+ 	},
+ 	[NLMPROC_SHARE] = {
+ 		.pc_func = nlmsvc_proc_share,
+ 		.pc_decode = nlmsvc_decode_shareargs,
+ 		.pc_encode = nlmsvc_encode_shareres,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St+1,
++		.pc_name = "SHARE",
+ 	},
+ 	[NLMPROC_UNSHARE] = {
+ 		.pc_func = nlmsvc_proc_unshare,
+ 		.pc_decode = nlmsvc_decode_shareargs,
+ 		.pc_encode = nlmsvc_encode_shareres,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St+1,
++		.pc_name = "UNSHARE",
+ 	},
+ 	[NLMPROC_NM_LOCK] = {
+ 		.pc_func = nlmsvc_proc_nm_lock,
+ 		.pc_decode = nlmsvc_decode_lockargs,
+ 		.pc_encode = nlmsvc_encode_res,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_res),
+ 		.pc_xdrressize = Ck+St,
++		.pc_name = "NM_LOCK",
+ 	},
+ 	[NLMPROC_FREE_ALL] = {
+ 		.pc_func = nlmsvc_proc_free_all,
+ 		.pc_decode = nlmsvc_decode_notify,
+ 		.pc_encode = nlmsvc_encode_void,
+ 		.pc_argsize = sizeof(struct nlm_args),
++		.pc_argzero = sizeof(struct nlm_args),
+ 		.pc_ressize = sizeof(struct nlm_void),
+ 		.pc_xdrressize = 0,
++		.pc_name = "FREE_ALL",
+ 	},
+ };
+diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
+index 028fc152da22f..e3b6229e7ae5c 100644
+--- a/fs/lockd/svcsubs.c
++++ b/fs/lockd/svcsubs.c
+@@ -45,7 +45,7 @@ static inline void nlm_debug_print_fh(char *msg, struct nfs_fh *f)
+ 
+ static inline void nlm_debug_print_file(char *msg, struct nlm_file *file)
+ {
+-	struct inode *inode = locks_inode(file->f_file);
++	struct inode *inode = nlmsvc_file_inode(file);
+ 
+ 	dprintk("lockd: %s %s/%ld\n",
+ 		msg, inode->i_sb->s_id, inode->i_ino);
+@@ -71,56 +71,75 @@ static inline unsigned int file_hash(struct nfs_fh *f)
+ 	return tmp & (FILE_NRHASH - 1);
+ }
+ 
++int lock_to_openmode(struct file_lock *lock)
++{
++	return (lock->fl_type == F_WRLCK) ? O_WRONLY : O_RDONLY;
++}
++
++/*
++ * Open the file. Note that if we're reexporting, for example,
++ * this could block the lockd thread for a while.
++ *
++ * We have to make sure we have the right credential to open
++ * the file.
++ */
++static __be32 nlm_do_fopen(struct svc_rqst *rqstp,
++			   struct nlm_file *file, int mode)
++{
++	struct file **fp = &file->f_file[mode];
++	__be32	nfserr;
++
++	if (*fp)
++		return 0;
++	nfserr = nlmsvc_ops->fopen(rqstp, &file->f_handle, fp, mode);
++	if (nfserr)
++		dprintk("lockd: open failed (error %d)\n", nfserr);
++	return nfserr;
++}
++
+ /*
+  * Lookup file info. If it doesn't exist, create a file info struct
+  * and open a (VFS) file for the given inode.
+- *
+- * FIXME:
+- * Note that we open the file O_RDONLY even when creating write locks.
+- * This is not quite right, but for now, we assume the client performs
+- * the proper R/W checking.
+  */
+ __be32
+ nlm_lookup_file(struct svc_rqst *rqstp, struct nlm_file **result,
+-					struct nfs_fh *f)
++					struct nlm_lock *lock)
+ {
+ 	struct nlm_file	*file;
+ 	unsigned int	hash;
+ 	__be32		nfserr;
++	int		mode;
+ 
+-	nlm_debug_print_fh("nlm_lookup_file", f);
++	nlm_debug_print_fh("nlm_lookup_file", &lock->fh);
+ 
+-	hash = file_hash(f);
++	hash = file_hash(&lock->fh);
++	mode = lock_to_openmode(&lock->fl);
+ 
+ 	/* Lock file table */
+ 	mutex_lock(&nlm_file_mutex);
+ 
+ 	hlist_for_each_entry(file, &nlm_files[hash], f_list)
+-		if (!nfs_compare_fh(&file->f_handle, f))
++		if (!nfs_compare_fh(&file->f_handle, &lock->fh)) {
++			mutex_lock(&file->f_mutex);
++			nfserr = nlm_do_fopen(rqstp, file, mode);
++			mutex_unlock(&file->f_mutex);
+ 			goto found;
+-
+-	nlm_debug_print_fh("creating file for", f);
++		}
++	nlm_debug_print_fh("creating file for", &lock->fh);
+ 
+ 	nfserr = nlm_lck_denied_nolocks;
+ 	file = kzalloc(sizeof(*file), GFP_KERNEL);
+ 	if (!file)
+-		goto out_unlock;
++		goto out_free;
+ 
+-	memcpy(&file->f_handle, f, sizeof(struct nfs_fh));
++	memcpy(&file->f_handle, &lock->fh, sizeof(struct nfs_fh));
+ 	mutex_init(&file->f_mutex);
+ 	INIT_HLIST_NODE(&file->f_list);
+ 	INIT_LIST_HEAD(&file->f_blocks);
+ 
+-	/* Open the file. Note that this must not sleep for too long, else
+-	 * we would lock up lockd:-) So no NFS re-exports, folks.
+-	 *
+-	 * We have to make sure we have the right credential to open
+-	 * the file.
+-	 */
+-	if ((nfserr = nlmsvc_ops->fopen(rqstp, f, &file->f_file)) != 0) {
+-		dprintk("lockd: open failed (error %d)\n", nfserr);
+-		goto out_free;
+-	}
++	nfserr = nlm_do_fopen(rqstp, file, mode);
++	if (nfserr)
++		goto out_unlock;
+ 
+ 	hlist_add_head(&file->f_list, &nlm_files[hash]);
+ 
+@@ -128,7 +147,6 @@ nlm_lookup_file(struct svc_rqst *rqstp, struct nlm_file **result,
+ 	dprintk("lockd: found file %p (count %d)\n", file, file->f_count);
+ 	*result = file;
+ 	file->f_count++;
+-	nfserr = 0;
+ 
+ out_unlock:
+ 	mutex_unlock(&nlm_file_mutex);
+@@ -148,13 +166,40 @@ nlm_delete_file(struct nlm_file *file)
+ 	nlm_debug_print_file("closing file", file);
+ 	if (!hlist_unhashed(&file->f_list)) {
+ 		hlist_del(&file->f_list);
+-		nlmsvc_ops->fclose(file->f_file);
++		if (file->f_file[O_RDONLY])
++			nlmsvc_ops->fclose(file->f_file[O_RDONLY]);
++		if (file->f_file[O_WRONLY])
++			nlmsvc_ops->fclose(file->f_file[O_WRONLY]);
+ 		kfree(file);
+ 	} else {
+ 		printk(KERN_WARNING "lockd: attempt to release unknown file!\n");
+ 	}
+ }
+ 
++static int nlm_unlock_files(struct nlm_file *file, const struct file_lock *fl)
++{
++	struct file_lock lock;
++
++	locks_init_lock(&lock);
++	lock.fl_type  = F_UNLCK;
++	lock.fl_start = 0;
++	lock.fl_end   = OFFSET_MAX;
++	lock.fl_owner = fl->fl_owner;
++	lock.fl_pid   = fl->fl_pid;
++	lock.fl_flags = FL_POSIX;
++
++	lock.fl_file = file->f_file[O_RDONLY];
++	if (lock.fl_file && vfs_lock_file(lock.fl_file, F_SETLK, &lock, NULL))
++		goto out_err;
++	lock.fl_file = file->f_file[O_WRONLY];
++	if (lock.fl_file && vfs_lock_file(lock.fl_file, F_SETLK, &lock, NULL))
++		goto out_err;
++	return 0;
++out_err:
++	pr_warn("lockd: unlock failure in %s:%d\n", __FILE__, __LINE__);
++	return 1;
++}
++
+ /*
+  * Loop over all locks on the given file and perform the specified
+  * action.
+@@ -165,7 +210,7 @@ nlm_traverse_locks(struct nlm_host *host, struct nlm_file *file,
+ {
+ 	struct inode	 *inode = nlmsvc_file_inode(file);
+ 	struct file_lock *fl;
+-	struct file_lock_context *flctx = inode->i_flctx;
++	struct file_lock_context *flctx = locks_inode_context(inode);
+ 	struct nlm_host	 *lockhost;
+ 
+ 	if (!flctx || list_empty_careful(&flctx->flc_posix))
+@@ -182,17 +227,10 @@ nlm_traverse_locks(struct nlm_host *host, struct nlm_file *file,
+ 
+ 		lockhost = ((struct nlm_lockowner *)fl->fl_owner)->host;
+ 		if (match(lockhost, host)) {
+-			struct file_lock lock = *fl;
+ 
+ 			spin_unlock(&flctx->flc_lock);
+-			lock.fl_type  = F_UNLCK;
+-			lock.fl_start = 0;
+-			lock.fl_end   = OFFSET_MAX;
+-			if (vfs_lock_file(file->f_file, F_SETLK, &lock, NULL) < 0) {
+-				printk("lockd: unlock failure in %s:%d\n",
+-						__FILE__, __LINE__);
++			if (nlm_unlock_files(file, fl))
+ 				return 1;
+-			}
+ 			goto again;
+ 		}
+ 	}
+@@ -227,7 +265,7 @@ nlm_file_inuse(struct nlm_file *file)
+ {
+ 	struct inode	 *inode = nlmsvc_file_inode(file);
+ 	struct file_lock *fl;
+-	struct file_lock_context *flctx = inode->i_flctx;
++	struct file_lock_context *flctx = locks_inode_context(inode);
+ 
+ 	if (file->f_count || !list_empty(&file->f_blocks) || file->f_shares)
+ 		return 1;
+@@ -246,6 +284,14 @@ nlm_file_inuse(struct nlm_file *file)
+ 	return 0;
+ }
+ 
++static void nlm_close_files(struct nlm_file *file)
++{
++	if (file->f_file[O_RDONLY])
++		nlmsvc_ops->fclose(file->f_file[O_RDONLY]);
++	if (file->f_file[O_WRONLY])
++		nlmsvc_ops->fclose(file->f_file[O_WRONLY]);
++}
++
+ /*
+  * Loop over all files in the file table.
+  */
+@@ -276,7 +322,7 @@ nlm_traverse_files(void *data, nlm_host_match_fn_t match,
+ 			if (list_empty(&file->f_blocks) && !file->f_locks
+ 			 && !file->f_shares && !file->f_count) {
+ 				hlist_del(&file->f_list);
+-				nlmsvc_ops->fclose(file->f_file);
++				nlm_close_files(file);
+ 				kfree(file);
+ 			}
+ 		}
+@@ -410,12 +456,13 @@ nlmsvc_invalidate_all(void)
+ 	nlm_traverse_files(NULL, nlmsvc_is_client, NULL);
+ }
+ 
++
+ static int
+ nlmsvc_match_sb(void *datap, struct nlm_file *file)
+ {
+ 	struct super_block *sb = datap;
+ 
+-	return sb == locks_inode(file->f_file)->i_sb;
++	return sb == nlmsvc_file_inode(file)->i_sb;
+ }
+ 
+ /**
+diff --git a/fs/lockd/svcxdr.h b/fs/lockd/svcxdr.h
+new file mode 100644
+index 0000000000000..4f1a451da5ba2
+--- /dev/null
++++ b/fs/lockd/svcxdr.h
+@@ -0,0 +1,142 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Encode/decode NLM basic data types
++ *
++ * Basic NLMv3 XDR data types are not defined in an IETF standards
++ * document.  X/Open has a description of these data types that
++ * is useful.  See Chapter 10 of "Protocols for Interworking:
++ * XNFS, Version 3W".
++ *
++ * Basic NLMv4 XDR data types are defined in Appendix II.1.4 of
++ * RFC 1813: "NFS Version 3 Protocol Specification".
++ *
++ * Author: Chuck Lever <chuck.lever@oracle.com>
++ *
++ * Copyright (c) 2020, Oracle and/or its affiliates.
++ */
++
++#ifndef _LOCKD_SVCXDR_H_
++#define _LOCKD_SVCXDR_H_
++
++static inline bool
++svcxdr_decode_stats(struct xdr_stream *xdr, __be32 *status)
++{
++	__be32 *p;
++
++	p = xdr_inline_decode(xdr, XDR_UNIT);
++	if (!p)
++		return false;
++	*status = *p;
++
++	return true;
++}
++
++static inline bool
++svcxdr_encode_stats(struct xdr_stream *xdr, __be32 status)
++{
++	__be32 *p;
++
++	p = xdr_reserve_space(xdr, XDR_UNIT);
++	if (!p)
++		return false;
++	*p = status;
++
++	return true;
++}
++
++static inline bool
++svcxdr_decode_string(struct xdr_stream *xdr, char **data, unsigned int *data_len)
++{
++	__be32 *p;
++	u32 len;
++
++	if (xdr_stream_decode_u32(xdr, &len) < 0)
++		return false;
++	if (len > NLM_MAXSTRLEN)
++		return false;
++	p = xdr_inline_decode(xdr, len);
++	if (!p)
++		return false;
++	*data_len = len;
++	*data = (char *)p;
++
++	return true;
++}
++
++/*
++ * NLM cookies are defined by specification to be a variable-length
++ * XDR opaque no longer than 1024 bytes. However, this implementation
++ * limits their length to 32 bytes, and treats zero-length cookies
++ * specially.
++ */
++static inline bool
++svcxdr_decode_cookie(struct xdr_stream *xdr, struct nlm_cookie *cookie)
++{
++	__be32 *p;
++	u32 len;
++
++	if (xdr_stream_decode_u32(xdr, &len) < 0)
++		return false;
++	if (len > NLM_MAXCOOKIELEN)
++		return false;
++	if (!len)
++		goto out_hpux;
++
++	p = xdr_inline_decode(xdr, len);
++	if (!p)
++		return false;
++	cookie->len = len;
++	memcpy(cookie->data, p, len);
++
++	return true;
++
++	/* apparently HPUX can return empty cookies */
++out_hpux:
++	cookie->len = 4;
++	memset(cookie->data, 0, 4);
++	return true;
++}
++
++static inline bool
++svcxdr_encode_cookie(struct xdr_stream *xdr, const struct nlm_cookie *cookie)
++{
++	__be32 *p;
++
++	if (xdr_stream_encode_u32(xdr, cookie->len) < 0)
++		return false;
++	p = xdr_reserve_space(xdr, cookie->len);
++	if (!p)
++		return false;
++	memcpy(p, cookie->data, cookie->len);
++
++	return true;
++}
++
++static inline bool
++svcxdr_decode_owner(struct xdr_stream *xdr, struct xdr_netobj *obj)
++{
++	__be32 *p;
++	u32 len;
++
++	if (xdr_stream_decode_u32(xdr, &len) < 0)
++		return false;
++	if (len > XDR_MAX_NETOBJ)
++		return false;
++	p = xdr_inline_decode(xdr, len);
++	if (!p)
++		return false;
++	obj->len = len;
++	obj->data = (u8 *)p;
++
++	return true;
++}
++
++static inline bool
++svcxdr_encode_owner(struct xdr_stream *xdr, const struct xdr_netobj *obj)
++{
++	if (obj->len > XDR_MAX_NETOBJ)
++		return false;
++	return xdr_stream_encode_opaque(xdr, obj->data, obj->len) > 0;
++}
++
++#endif /* _LOCKD_SVCXDR_H_ */
+diff --git a/fs/lockd/xdr.c b/fs/lockd/xdr.c
+index 982629f7b120a..2fb5748dae0c8 100644
+--- a/fs/lockd/xdr.c
++++ b/fs/lockd/xdr.c
+@@ -19,7 +19,7 @@
+ 
+ #include <uapi/linux/nfs2.h>
+ 
+-#define NLMDBG_FACILITY		NLMDBG_XDR
++#include "svcxdr.h"
+ 
+ 
+ static inline loff_t
+@@ -42,311 +42,313 @@ loff_t_to_s32(loff_t offset)
+ }
+ 
+ /*
+- * XDR functions for basic NLM types
++ * NLM file handles are defined by specification to be a variable-length
++ * XDR opaque no longer than 1024 bytes. However, this implementation
++ * constrains their length to exactly the length of an NFSv2 file
++ * handle.
+  */
+-static __be32 *nlm_decode_cookie(__be32 *p, struct nlm_cookie *c)
++static bool
++svcxdr_decode_fhandle(struct xdr_stream *xdr, struct nfs_fh *fh)
+ {
+-	unsigned int	len;
+-
+-	len = ntohl(*p++);
+-	
+-	if(len==0)
+-	{
+-		c->len=4;
+-		memset(c->data, 0, 4);	/* hockeypux brain damage */
+-	}
+-	else if(len<=NLM_MAXCOOKIELEN)
+-	{
+-		c->len=len;
+-		memcpy(c->data, p, len);
+-		p+=XDR_QUADLEN(len);
+-	}
+-	else 
+-	{
+-		dprintk("lockd: bad cookie size %d (only cookies under "
+-			"%d bytes are supported.)\n",
+-				len, NLM_MAXCOOKIELEN);
+-		return NULL;
+-	}
+-	return p;
+-}
+-
+-static inline __be32 *
+-nlm_encode_cookie(__be32 *p, struct nlm_cookie *c)
+-{
+-	*p++ = htonl(c->len);
+-	memcpy(p, c->data, c->len);
+-	p+=XDR_QUADLEN(c->len);
+-	return p;
+-}
+-
+-static __be32 *
+-nlm_decode_fh(__be32 *p, struct nfs_fh *f)
+-{
+-	unsigned int	len;
+-
+-	if ((len = ntohl(*p++)) != NFS2_FHSIZE) {
+-		dprintk("lockd: bad fhandle size %d (should be %d)\n",
+-			len, NFS2_FHSIZE);
+-		return NULL;
+-	}
+-	f->size = NFS2_FHSIZE;
+-	memset(f->data, 0, sizeof(f->data));
+-	memcpy(f->data, p, NFS2_FHSIZE);
+-	return p + XDR_QUADLEN(NFS2_FHSIZE);
+-}
+-
+-/*
+- * Encode and decode owner handle
+- */
+-static inline __be32 *
+-nlm_decode_oh(__be32 *p, struct xdr_netobj *oh)
+-{
+-	return xdr_decode_netobj(p, oh);
+-}
+-
+-static inline __be32 *
+-nlm_encode_oh(__be32 *p, struct xdr_netobj *oh)
+-{
+-	return xdr_encode_netobj(p, oh);
++	__be32 *p;
++	u32 len;
++
++	if (xdr_stream_decode_u32(xdr, &len) < 0)
++		return false;
++	if (len != NFS2_FHSIZE)
++		return false;
++
++	p = xdr_inline_decode(xdr, len);
++	if (!p)
++		return false;
++	fh->size = NFS2_FHSIZE;
++	memcpy(fh->data, p, len);
++	memset(fh->data + NFS2_FHSIZE, 0, sizeof(fh->data) - NFS2_FHSIZE);
++
++	return true;
+ }
+ 
+-static __be32 *
+-nlm_decode_lock(__be32 *p, struct nlm_lock *lock)
++static bool
++svcxdr_decode_lock(struct xdr_stream *xdr, struct nlm_lock *lock)
+ {
+-	struct file_lock	*fl = &lock->fl;
+-	s32			start, len, end;
+-
+-	if (!(p = xdr_decode_string_inplace(p, &lock->caller,
+-					    &lock->len,
+-					    NLM_MAXSTRLEN))
+-	 || !(p = nlm_decode_fh(p, &lock->fh))
+-	 || !(p = nlm_decode_oh(p, &lock->oh)))
+-		return NULL;
+-	lock->svid  = ntohl(*p++);
++	struct file_lock *fl = &lock->fl;
++	s32 start, len, end;
++
++	if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
++		return false;
++	if (!svcxdr_decode_fhandle(xdr, &lock->fh))
++		return false;
++	if (!svcxdr_decode_owner(xdr, &lock->oh))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &lock->svid) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &start) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &len) < 0)
++		return false;
+ 
+ 	locks_init_lock(fl);
+ 	fl->fl_flags = FL_POSIX;
+-	fl->fl_type  = F_RDLCK;		/* as good as anything else */
+-	start = ntohl(*p++);
+-	len = ntohl(*p++);
++	fl->fl_type  = F_RDLCK;
+ 	end = start + len - 1;
+-
+ 	fl->fl_start = s32_to_loff_t(start);
+-
+ 	if (len == 0 || end < 0)
+ 		fl->fl_end = OFFSET_MAX;
+ 	else
+ 		fl->fl_end = s32_to_loff_t(end);
+-	return p;
++
++	return true;
+ }
+ 
+-/*
+- * Encode result of a TEST/TEST_MSG call
+- */
+-static __be32 *
+-nlm_encode_testres(__be32 *p, struct nlm_res *resp)
++static bool
++svcxdr_encode_holder(struct xdr_stream *xdr, const struct nlm_lock *lock)
+ {
+-	s32		start, len;
+-
+-	if (!(p = nlm_encode_cookie(p, &resp->cookie)))
+-		return NULL;
+-	*p++ = resp->status;
+-
+-	if (resp->status == nlm_lck_denied) {
+-		struct file_lock	*fl = &resp->lock.fl;
+-
+-		*p++ = (fl->fl_type == F_RDLCK)? xdr_zero : xdr_one;
+-		*p++ = htonl(resp->lock.svid);
+-
+-		/* Encode owner handle. */
+-		if (!(p = xdr_encode_netobj(p, &resp->lock.oh)))
+-			return NULL;
++	const struct file_lock *fl = &lock->fl;
++	s32 start, len;
++
++	/* exclusive */
++	if (xdr_stream_encode_bool(xdr, fl->fl_type != F_RDLCK) < 0)
++		return false;
++	if (xdr_stream_encode_u32(xdr, lock->svid) < 0)
++		return false;
++	if (!svcxdr_encode_owner(xdr, &lock->oh))
++		return false;
++	start = loff_t_to_s32(fl->fl_start);
++	if (fl->fl_end == OFFSET_MAX)
++		len = 0;
++	else
++		len = loff_t_to_s32(fl->fl_end - fl->fl_start + 1);
++	if (xdr_stream_encode_u32(xdr, start) < 0)
++		return false;
++	if (xdr_stream_encode_u32(xdr, len) < 0)
++		return false;
+ 
+-		start = loff_t_to_s32(fl->fl_start);
+-		if (fl->fl_end == OFFSET_MAX)
+-			len = 0;
+-		else
+-			len = loff_t_to_s32(fl->fl_end - fl->fl_start + 1);
++	return true;
++}
+ 
+-		*p++ = htonl(start);
+-		*p++ = htonl(len);
++static bool
++svcxdr_encode_testrply(struct xdr_stream *xdr, const struct nlm_res *resp)
++{
++	if (!svcxdr_encode_stats(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nlm_lck_denied:
++		if (!svcxdr_encode_holder(xdr, &resp->lock))
++			return false;
+ 	}
+ 
+-	return p;
++	return true;
+ }
+ 
+ 
+ /*
+- * First, the server side XDR functions
++ * Decode Call arguments
+  */
+-int
+-nlmsvc_decode_testargs(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	struct nlm_args *argp = rqstp->rq_argp;
+-	u32	exclusive;
+-
+-	if (!(p = nlm_decode_cookie(p, &argp->cookie)))
+-		return 0;
+-
+-	exclusive = ntohl(*p++);
+-	if (!(p = nlm_decode_lock(p, &argp->lock)))
+-		return 0;
+-	if (exclusive)
+-		argp->lock.fl.fl_type = F_WRLCK;
+ 
+-	return xdr_argsize_check(rqstp, p);
++bool
++nlmsvc_decode_void(struct svc_rqst *rqstp, struct xdr_stream *xdr)
++{
++	return true;
+ }
+ 
+-int
+-nlmsvc_encode_testres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlmsvc_decode_testargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	struct nlm_res *resp = rqstp->rq_resp;
++	struct nlm_args *argp = rqstp->rq_argp;
++	u32 exclusive;
++
++	if (!svcxdr_decode_cookie(xdr, &argp->cookie))
++		return false;
++	if (xdr_stream_decode_bool(xdr, &exclusive) < 0)
++		return false;
++	if (!svcxdr_decode_lock(xdr, &argp->lock))
++		return false;
++	if (exclusive)
++		argp->lock.fl.fl_type = F_WRLCK;
+ 
+-	if (!(p = nlm_encode_testres(p, resp)))
+-		return 0;
+-	return xdr_ressize_check(rqstp, p);
++	return true;
+ }
+ 
+-int
+-nlmsvc_decode_lockargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlmsvc_decode_lockargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nlm_args *argp = rqstp->rq_argp;
+-	u32	exclusive;
+-
+-	if (!(p = nlm_decode_cookie(p, &argp->cookie)))
+-		return 0;
+-	argp->block  = ntohl(*p++);
+-	exclusive    = ntohl(*p++);
+-	if (!(p = nlm_decode_lock(p, &argp->lock)))
+-		return 0;
++	u32 exclusive;
++
++	if (!svcxdr_decode_cookie(xdr, &argp->cookie))
++		return false;
++	if (xdr_stream_decode_bool(xdr, &argp->block) < 0)
++		return false;
++	if (xdr_stream_decode_bool(xdr, &exclusive) < 0)
++		return false;
++	if (!svcxdr_decode_lock(xdr, &argp->lock))
++		return false;
+ 	if (exclusive)
+ 		argp->lock.fl.fl_type = F_WRLCK;
+-	argp->reclaim = ntohl(*p++);
+-	argp->state   = ntohl(*p++);
++	if (xdr_stream_decode_bool(xdr, &argp->reclaim) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
++		return false;
+ 	argp->monitor = 1;		/* monitor client by default */
+ 
+-	return xdr_argsize_check(rqstp, p);
++	return true;
+ }
+ 
+-int
+-nlmsvc_decode_cancargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlmsvc_decode_cancargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nlm_args *argp = rqstp->rq_argp;
+-	u32	exclusive;
+-
+-	if (!(p = nlm_decode_cookie(p, &argp->cookie)))
+-		return 0;
+-	argp->block = ntohl(*p++);
+-	exclusive = ntohl(*p++);
+-	if (!(p = nlm_decode_lock(p, &argp->lock)))
+-		return 0;
++	u32 exclusive;
++
++	if (!svcxdr_decode_cookie(xdr, &argp->cookie))
++		return false;
++	if (xdr_stream_decode_bool(xdr, &argp->block) < 0)
++		return false;
++	if (xdr_stream_decode_bool(xdr, &exclusive) < 0)
++		return false;
++	if (!svcxdr_decode_lock(xdr, &argp->lock))
++		return false;
+ 	if (exclusive)
+ 		argp->lock.fl.fl_type = F_WRLCK;
+-	return xdr_argsize_check(rqstp, p);
++
++	return true;
+ }
+ 
+-int
+-nlmsvc_decode_unlockargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlmsvc_decode_unlockargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nlm_args *argp = rqstp->rq_argp;
+ 
+-	if (!(p = nlm_decode_cookie(p, &argp->cookie))
+-	 || !(p = nlm_decode_lock(p, &argp->lock)))
+-		return 0;
++	if (!svcxdr_decode_cookie(xdr, &argp->cookie))
++		return false;
++	if (!svcxdr_decode_lock(xdr, &argp->lock))
++		return false;
+ 	argp->lock.fl.fl_type = F_UNLCK;
+-	return xdr_argsize_check(rqstp, p);
++
++	return true;
+ }
+ 
+-int
+-nlmsvc_decode_shareargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlmsvc_decode_res(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	struct nlm_args *argp = rqstp->rq_argp;
+-	struct nlm_lock	*lock = &argp->lock;
++	struct nlm_res *resp = rqstp->rq_argp;
+ 
+-	memset(lock, 0, sizeof(*lock));
+-	locks_init_lock(&lock->fl);
+-	lock->svid = ~(u32) 0;
+-
+-	if (!(p = nlm_decode_cookie(p, &argp->cookie))
+-	 || !(p = xdr_decode_string_inplace(p, &lock->caller,
+-					    &lock->len, NLM_MAXSTRLEN))
+-	 || !(p = nlm_decode_fh(p, &lock->fh))
+-	 || !(p = nlm_decode_oh(p, &lock->oh)))
+-		return 0;
+-	argp->fsm_mode = ntohl(*p++);
+-	argp->fsm_access = ntohl(*p++);
+-	return xdr_argsize_check(rqstp, p);
++	if (!svcxdr_decode_cookie(xdr, &resp->cookie))
++		return false;
++	if (!svcxdr_decode_stats(xdr, &resp->status))
++		return false;
++
++	return true;
+ }
+ 
+-int
+-nlmsvc_encode_shareres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlmsvc_decode_reboot(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	struct nlm_res *resp = rqstp->rq_resp;
++	struct nlm_reboot *argp = rqstp->rq_argp;
++	__be32 *p;
++	u32 len;
++
++	if (xdr_stream_decode_u32(xdr, &len) < 0)
++		return false;
++	if (len > SM_MAXSTRLEN)
++		return false;
++	p = xdr_inline_decode(xdr, len);
++	if (!p)
++		return false;
++	argp->len = len;
++	argp->mon = (char *)p;
++	if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
++		return false;
++	p = xdr_inline_decode(xdr, SM_PRIV_SIZE);
++	if (!p)
++		return false;
++	memcpy(&argp->priv.data, p, sizeof(argp->priv.data));
+ 
+-	if (!(p = nlm_encode_cookie(p, &resp->cookie)))
+-		return 0;
+-	*p++ = resp->status;
+-	*p++ = xdr_zero;		/* sequence argument */
+-	return xdr_ressize_check(rqstp, p);
++	return true;
+ }
+ 
+-int
+-nlmsvc_encode_res(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlmsvc_decode_shareargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	struct nlm_res *resp = rqstp->rq_resp;
++	struct nlm_args *argp = rqstp->rq_argp;
++	struct nlm_lock	*lock = &argp->lock;
+ 
+-	if (!(p = nlm_encode_cookie(p, &resp->cookie)))
+-		return 0;
+-	*p++ = resp->status;
+-	return xdr_ressize_check(rqstp, p);
++	memset(lock, 0, sizeof(*lock));
++	locks_init_lock(&lock->fl);
++	lock->svid = ~(u32)0;
++
++	if (!svcxdr_decode_cookie(xdr, &argp->cookie))
++		return false;
++	if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
++		return false;
++	if (!svcxdr_decode_fhandle(xdr, &lock->fh))
++		return false;
++	if (!svcxdr_decode_owner(xdr, &lock->oh))
++		return false;
++	/* XXX: Range checks are missing in the original code */
++	if (xdr_stream_decode_u32(xdr, &argp->fsm_mode) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &argp->fsm_access) < 0)
++		return false;
++
++	return true;
+ }
+ 
+-int
+-nlmsvc_decode_notify(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlmsvc_decode_notify(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nlm_args *argp = rqstp->rq_argp;
+ 	struct nlm_lock	*lock = &argp->lock;
+ 
+-	if (!(p = xdr_decode_string_inplace(p, &lock->caller,
+-					    &lock->len, NLM_MAXSTRLEN)))
+-		return 0;
+-	argp->state = ntohl(*p++);
+-	return xdr_argsize_check(rqstp, p);
++	if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
++		return false;
++
++	return true;
+ }
+ 
+-int
+-nlmsvc_decode_reboot(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	struct nlm_reboot *argp = rqstp->rq_argp;
+ 
+-	if (!(p = xdr_decode_string_inplace(p, &argp->mon, &argp->len, SM_MAXSTRLEN)))
+-		return 0;
+-	argp->state = ntohl(*p++);
+-	memcpy(&argp->priv.data, p, sizeof(argp->priv.data));
+-	p += XDR_QUADLEN(SM_PRIV_SIZE);
+-	return xdr_argsize_check(rqstp, p);
++/*
++ * Encode Reply results
++ */
++
++bool
++nlmsvc_encode_void(struct svc_rqst *rqstp, struct xdr_stream *xdr)
++{
++	return true;
+ }
+ 
+-int
+-nlmsvc_decode_res(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlmsvc_encode_testres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	struct nlm_res *resp = rqstp->rq_argp;
++	struct nlm_res *resp = rqstp->rq_resp;
+ 
+-	if (!(p = nlm_decode_cookie(p, &resp->cookie)))
+-		return 0;
+-	resp->status = *p++;
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_encode_cookie(xdr, &resp->cookie) &&
++		svcxdr_encode_testrply(xdr, resp);
+ }
+ 
+-int
+-nlmsvc_decode_void(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlmsvc_encode_res(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	return xdr_argsize_check(rqstp, p);
++	struct nlm_res *resp = rqstp->rq_resp;
++
++	return svcxdr_encode_cookie(xdr, &resp->cookie) &&
++		svcxdr_encode_stats(xdr, resp->status);
+ }
+ 
+-int
+-nlmsvc_encode_void(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlmsvc_encode_shareres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	return xdr_ressize_check(rqstp, p);
++	struct nlm_res *resp = rqstp->rq_resp;
++
++	if (!svcxdr_encode_cookie(xdr, &resp->cookie))
++		return false;
++	if (!svcxdr_encode_stats(xdr, resp->status))
++		return false;
++	/* sequence */
++	if (xdr_stream_encode_u32(xdr, 0) < 0)
++		return false;
++
++	return true;
+ }
+diff --git a/fs/lockd/xdr4.c b/fs/lockd/xdr4.c
+index 5fa9f48a9dba7..5fcbf30cd2759 100644
+--- a/fs/lockd/xdr4.c
++++ b/fs/lockd/xdr4.c
+@@ -18,14 +18,7 @@
+ #include <linux/sunrpc/stats.h>
+ #include <linux/lockd/lockd.h>
+ 
+-#define NLMDBG_FACILITY		NLMDBG_XDR
+-
+-static inline loff_t
+-s64_to_loff_t(__s64 offset)
+-{
+-	return (loff_t)offset;
+-}
+-
++#include "svcxdr.h"
+ 
+ static inline s64
+ loff_t_to_s64(loff_t offset)
+@@ -40,310 +33,317 @@ loff_t_to_s64(loff_t offset)
+ 	return res;
+ }
+ 
+-/*
+- * XDR functions for basic NLM types
+- */
+-static __be32 *
+-nlm4_decode_cookie(__be32 *p, struct nlm_cookie *c)
+-{
+-	unsigned int	len;
+-
+-	len = ntohl(*p++);
+-	
+-	if(len==0)
+-	{
+-		c->len=4;
+-		memset(c->data, 0, 4);	/* hockeypux brain damage */
+-	}
+-	else if(len<=NLM_MAXCOOKIELEN)
+-	{
+-		c->len=len;
+-		memcpy(c->data, p, len);
+-		p+=XDR_QUADLEN(len);
+-	}
+-	else 
+-	{
+-		dprintk("lockd: bad cookie size %d (only cookies under "
+-			"%d bytes are supported.)\n",
+-				len, NLM_MAXCOOKIELEN);
+-		return NULL;
+-	}
+-	return p;
+-}
+-
+-static __be32 *
+-nlm4_encode_cookie(__be32 *p, struct nlm_cookie *c)
++void nlm4svc_set_file_lock_range(struct file_lock *fl, u64 off, u64 len)
+ {
+-	*p++ = htonl(c->len);
+-	memcpy(p, c->data, c->len);
+-	p+=XDR_QUADLEN(c->len);
+-	return p;
+-}
++	s64 end = off + len - 1;
+ 
+-static __be32 *
+-nlm4_decode_fh(__be32 *p, struct nfs_fh *f)
+-{
+-	memset(f->data, 0, sizeof(f->data));
+-	f->size = ntohl(*p++);
+-	if (f->size > NFS_MAXFHSIZE) {
+-		dprintk("lockd: bad fhandle size %d (should be <=%d)\n",
+-			f->size, NFS_MAXFHSIZE);
+-		return NULL;
+-	}
+-      	memcpy(f->data, p, f->size);
+-	return p + XDR_QUADLEN(f->size);
++	fl->fl_start = off;
++	if (len == 0 || end < 0)
++		fl->fl_end = OFFSET_MAX;
++	else
++		fl->fl_end = end;
+ }
+ 
+ /*
+- * Encode and decode owner handle
++ * NLM file handles are defined by specification to be a variable-length
++ * XDR opaque no longer than 1024 bytes. However, this implementation
++ * limits their length to the size of an NFSv3 file handle.
+  */
+-static __be32 *
+-nlm4_decode_oh(__be32 *p, struct xdr_netobj *oh)
++static bool
++svcxdr_decode_fhandle(struct xdr_stream *xdr, struct nfs_fh *fh)
+ {
+-	return xdr_decode_netobj(p, oh);
++	__be32 *p;
++	u32 len;
++
++	if (xdr_stream_decode_u32(xdr, &len) < 0)
++		return false;
++	if (len > NFS_MAXFHSIZE)
++		return false;
++
++	p = xdr_inline_decode(xdr, len);
++	if (!p)
++		return false;
++	fh->size = len;
++	memcpy(fh->data, p, len);
++	memset(fh->data + len, 0, sizeof(fh->data) - len);
++
++	return true;
+ }
+ 
+-static __be32 *
+-nlm4_decode_lock(__be32 *p, struct nlm_lock *lock)
++static bool
++svcxdr_decode_lock(struct xdr_stream *xdr, struct nlm_lock *lock)
+ {
+-	struct file_lock	*fl = &lock->fl;
+-	__u64			len, start;
+-	__s64			end;
+-
+-	if (!(p = xdr_decode_string_inplace(p, &lock->caller,
+-					    &lock->len, NLM_MAXSTRLEN))
+-	 || !(p = nlm4_decode_fh(p, &lock->fh))
+-	 || !(p = nlm4_decode_oh(p, &lock->oh)))
+-		return NULL;
+-	lock->svid  = ntohl(*p++);
++	struct file_lock *fl = &lock->fl;
++
++	if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
++		return false;
++	if (!svcxdr_decode_fhandle(xdr, &lock->fh))
++		return false;
++	if (!svcxdr_decode_owner(xdr, &lock->oh))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &lock->svid) < 0)
++		return false;
++	if (xdr_stream_decode_u64(xdr, &lock->lock_start) < 0)
++		return false;
++	if (xdr_stream_decode_u64(xdr, &lock->lock_len) < 0)
++		return false;
+ 
+ 	locks_init_lock(fl);
+ 	fl->fl_flags = FL_POSIX;
+-	fl->fl_type  = F_RDLCK;		/* as good as anything else */
+-	p = xdr_decode_hyper(p, &start);
+-	p = xdr_decode_hyper(p, &len);
+-	end = start + len - 1;
+-
+-	fl->fl_start = s64_to_loff_t(start);
++	fl->fl_type  = F_RDLCK;
++	nlm4svc_set_file_lock_range(fl, lock->lock_start, lock->lock_len);
++	return true;
++}
+ 
+-	if (len == 0 || end < 0)
+-		fl->fl_end = OFFSET_MAX;
++static bool
++svcxdr_encode_holder(struct xdr_stream *xdr, const struct nlm_lock *lock)
++{
++	const struct file_lock *fl = &lock->fl;
++	s64 start, len;
++
++	/* exclusive */
++	if (xdr_stream_encode_bool(xdr, fl->fl_type != F_RDLCK) < 0)
++		return false;
++	if (xdr_stream_encode_u32(xdr, lock->svid) < 0)
++		return false;
++	if (!svcxdr_encode_owner(xdr, &lock->oh))
++		return false;
++	start = loff_t_to_s64(fl->fl_start);
++	if (fl->fl_end == OFFSET_MAX)
++		len = 0;
+ 	else
+-		fl->fl_end = s64_to_loff_t(end);
+-	return p;
++		len = loff_t_to_s64(fl->fl_end - fl->fl_start + 1);
++	if (xdr_stream_encode_u64(xdr, start) < 0)
++		return false;
++	if (xdr_stream_encode_u64(xdr, len) < 0)
++		return false;
++
++	return true;
+ }
+ 
+-/*
+- * Encode result of a TEST/TEST_MSG call
+- */
+-static __be32 *
+-nlm4_encode_testres(__be32 *p, struct nlm_res *resp)
++static bool
++svcxdr_encode_testrply(struct xdr_stream *xdr, const struct nlm_res *resp)
+ {
+-	s64		start, len;
+-
+-	dprintk("xdr: before encode_testres (p %p resp %p)\n", p, resp);
+-	if (!(p = nlm4_encode_cookie(p, &resp->cookie)))
+-		return NULL;
+-	*p++ = resp->status;
+-
+-	if (resp->status == nlm_lck_denied) {
+-		struct file_lock	*fl = &resp->lock.fl;
+-
+-		*p++ = (fl->fl_type == F_RDLCK)? xdr_zero : xdr_one;
+-		*p++ = htonl(resp->lock.svid);
+-
+-		/* Encode owner handle. */
+-		if (!(p = xdr_encode_netobj(p, &resp->lock.oh)))
+-			return NULL;
+-
+-		start = loff_t_to_s64(fl->fl_start);
+-		if (fl->fl_end == OFFSET_MAX)
+-			len = 0;
+-		else
+-			len = loff_t_to_s64(fl->fl_end - fl->fl_start + 1);
+-		
+-		p = xdr_encode_hyper(p, start);
+-		p = xdr_encode_hyper(p, len);
+-		dprintk("xdr: encode_testres (status %u pid %d type %d start %Ld end %Ld)\n",
+-			resp->status, (int)resp->lock.svid, fl->fl_type,
+-			(long long)fl->fl_start,  (long long)fl->fl_end);
++	if (!svcxdr_encode_stats(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nlm_lck_denied:
++		if (!svcxdr_encode_holder(xdr, &resp->lock))
++			return false;
+ 	}
+ 
+-	dprintk("xdr: after encode_testres (p %p resp %p)\n", p, resp);
+-	return p;
++	return true;
+ }
+ 
+ 
+ /*
+- * First, the server side XDR functions
++ * Decode Call arguments
+  */
+-int
+-nlm4svc_decode_testargs(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	struct nlm_args *argp = rqstp->rq_argp;
+-	u32	exclusive;
+-
+-	if (!(p = nlm4_decode_cookie(p, &argp->cookie)))
+-		return 0;
+-
+-	exclusive = ntohl(*p++);
+-	if (!(p = nlm4_decode_lock(p, &argp->lock)))
+-		return 0;
+-	if (exclusive)
+-		argp->lock.fl.fl_type = F_WRLCK;
+ 
+-	return xdr_argsize_check(rqstp, p);
++bool
++nlm4svc_decode_void(struct svc_rqst *rqstp, struct xdr_stream *xdr)
++{
++	return true;
+ }
+ 
+-int
+-nlm4svc_encode_testres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlm4svc_decode_testargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	struct nlm_res *resp = rqstp->rq_resp;
++	struct nlm_args *argp = rqstp->rq_argp;
++	u32 exclusive;
++
++	if (!svcxdr_decode_cookie(xdr, &argp->cookie))
++		return false;
++	if (xdr_stream_decode_bool(xdr, &exclusive) < 0)
++		return false;
++	if (!svcxdr_decode_lock(xdr, &argp->lock))
++		return false;
++	if (exclusive)
++		argp->lock.fl.fl_type = F_WRLCK;
+ 
+-	if (!(p = nlm4_encode_testres(p, resp)))
+-		return 0;
+-	return xdr_ressize_check(rqstp, p);
++	return true;
+ }
+ 
+-int
+-nlm4svc_decode_lockargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlm4svc_decode_lockargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nlm_args *argp = rqstp->rq_argp;
+-	u32	exclusive;
+-
+-	if (!(p = nlm4_decode_cookie(p, &argp->cookie)))
+-		return 0;
+-	argp->block  = ntohl(*p++);
+-	exclusive    = ntohl(*p++);
+-	if (!(p = nlm4_decode_lock(p, &argp->lock)))
+-		return 0;
++	u32 exclusive;
++
++	if (!svcxdr_decode_cookie(xdr, &argp->cookie))
++		return false;
++	if (xdr_stream_decode_bool(xdr, &argp->block) < 0)
++		return false;
++	if (xdr_stream_decode_bool(xdr, &exclusive) < 0)
++		return false;
++	if (!svcxdr_decode_lock(xdr, &argp->lock))
++		return false;
+ 	if (exclusive)
+ 		argp->lock.fl.fl_type = F_WRLCK;
+-	argp->reclaim = ntohl(*p++);
+-	argp->state   = ntohl(*p++);
++	if (xdr_stream_decode_bool(xdr, &argp->reclaim) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
++		return false;
+ 	argp->monitor = 1;		/* monitor client by default */
+ 
+-	return xdr_argsize_check(rqstp, p);
++	return true;
+ }
+ 
+-int
+-nlm4svc_decode_cancargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlm4svc_decode_cancargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nlm_args *argp = rqstp->rq_argp;
+-	u32	exclusive;
+-
+-	if (!(p = nlm4_decode_cookie(p, &argp->cookie)))
+-		return 0;
+-	argp->block = ntohl(*p++);
+-	exclusive = ntohl(*p++);
+-	if (!(p = nlm4_decode_lock(p, &argp->lock)))
+-		return 0;
++	u32 exclusive;
++
++	if (!svcxdr_decode_cookie(xdr, &argp->cookie))
++		return false;
++	if (xdr_stream_decode_bool(xdr, &argp->block) < 0)
++		return false;
++	if (xdr_stream_decode_bool(xdr, &exclusive) < 0)
++		return false;
++	if (!svcxdr_decode_lock(xdr, &argp->lock))
++		return false;
+ 	if (exclusive)
+ 		argp->lock.fl.fl_type = F_WRLCK;
+-	return xdr_argsize_check(rqstp, p);
++
++	return true;
+ }
+ 
+-int
+-nlm4svc_decode_unlockargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlm4svc_decode_unlockargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nlm_args *argp = rqstp->rq_argp;
+ 
+-	if (!(p = nlm4_decode_cookie(p, &argp->cookie))
+-	 || !(p = nlm4_decode_lock(p, &argp->lock)))
+-		return 0;
++	if (!svcxdr_decode_cookie(xdr, &argp->cookie))
++		return false;
++	if (!svcxdr_decode_lock(xdr, &argp->lock))
++		return false;
+ 	argp->lock.fl.fl_type = F_UNLCK;
+-	return xdr_argsize_check(rqstp, p);
++
++	return true;
+ }
+ 
+-int
+-nlm4svc_decode_shareargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlm4svc_decode_res(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	struct nlm_args *argp = rqstp->rq_argp;
+-	struct nlm_lock	*lock = &argp->lock;
++	struct nlm_res *resp = rqstp->rq_argp;
+ 
+-	memset(lock, 0, sizeof(*lock));
+-	locks_init_lock(&lock->fl);
+-	lock->svid = ~(u32) 0;
+-
+-	if (!(p = nlm4_decode_cookie(p, &argp->cookie))
+-	 || !(p = xdr_decode_string_inplace(p, &lock->caller,
+-					    &lock->len, NLM_MAXSTRLEN))
+-	 || !(p = nlm4_decode_fh(p, &lock->fh))
+-	 || !(p = nlm4_decode_oh(p, &lock->oh)))
+-		return 0;
+-	argp->fsm_mode = ntohl(*p++);
+-	argp->fsm_access = ntohl(*p++);
+-	return xdr_argsize_check(rqstp, p);
++	if (!svcxdr_decode_cookie(xdr, &resp->cookie))
++		return false;
++	if (!svcxdr_decode_stats(xdr, &resp->status))
++		return false;
++
++	return true;
+ }
+ 
+-int
+-nlm4svc_encode_shareres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlm4svc_decode_reboot(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	struct nlm_res *resp = rqstp->rq_resp;
++	struct nlm_reboot *argp = rqstp->rq_argp;
++	__be32 *p;
++	u32 len;
++
++	if (xdr_stream_decode_u32(xdr, &len) < 0)
++		return false;
++	if (len > SM_MAXSTRLEN)
++		return false;
++	p = xdr_inline_decode(xdr, len);
++	if (!p)
++		return false;
++	argp->len = len;
++	argp->mon = (char *)p;
++	if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
++		return false;
++	p = xdr_inline_decode(xdr, SM_PRIV_SIZE);
++	if (!p)
++		return false;
++	memcpy(&argp->priv.data, p, sizeof(argp->priv.data));
+ 
+-	if (!(p = nlm4_encode_cookie(p, &resp->cookie)))
+-		return 0;
+-	*p++ = resp->status;
+-	*p++ = xdr_zero;		/* sequence argument */
+-	return xdr_ressize_check(rqstp, p);
++	return true;
+ }
+ 
+-int
+-nlm4svc_encode_res(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlm4svc_decode_shareargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	struct nlm_res *resp = rqstp->rq_resp;
++	struct nlm_args *argp = rqstp->rq_argp;
++	struct nlm_lock	*lock = &argp->lock;
+ 
+-	if (!(p = nlm4_encode_cookie(p, &resp->cookie)))
+-		return 0;
+-	*p++ = resp->status;
+-	return xdr_ressize_check(rqstp, p);
++	memset(lock, 0, sizeof(*lock));
++	locks_init_lock(&lock->fl);
++	lock->svid = ~(u32)0;
++
++	if (!svcxdr_decode_cookie(xdr, &argp->cookie))
++		return false;
++	if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
++		return false;
++	if (!svcxdr_decode_fhandle(xdr, &lock->fh))
++		return false;
++	if (!svcxdr_decode_owner(xdr, &lock->oh))
++		return false;
++	/* XXX: Range checks are missing in the original code */
++	if (xdr_stream_decode_u32(xdr, &argp->fsm_mode) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &argp->fsm_access) < 0)
++		return false;
++
++	return true;
+ }
+ 
+-int
+-nlm4svc_decode_notify(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlm4svc_decode_notify(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nlm_args *argp = rqstp->rq_argp;
+ 	struct nlm_lock	*lock = &argp->lock;
+ 
+-	if (!(p = xdr_decode_string_inplace(p, &lock->caller,
+-					    &lock->len, NLM_MAXSTRLEN)))
+-		return 0;
+-	argp->state = ntohl(*p++);
+-	return xdr_argsize_check(rqstp, p);
++	if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
++		return false;
++
++	return true;
+ }
+ 
+-int
+-nlm4svc_decode_reboot(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	struct nlm_reboot *argp = rqstp->rq_argp;
+ 
+-	if (!(p = xdr_decode_string_inplace(p, &argp->mon, &argp->len, SM_MAXSTRLEN)))
+-		return 0;
+-	argp->state = ntohl(*p++);
+-	memcpy(&argp->priv.data, p, sizeof(argp->priv.data));
+-	p += XDR_QUADLEN(SM_PRIV_SIZE);
+-	return xdr_argsize_check(rqstp, p);
++/*
++ * Encode Reply results
++ */
++
++bool
++nlm4svc_encode_void(struct svc_rqst *rqstp, struct xdr_stream *xdr)
++{
++	return true;
+ }
+ 
+-int
+-nlm4svc_decode_res(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlm4svc_encode_testres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	struct nlm_res *resp = rqstp->rq_argp;
++	struct nlm_res *resp = rqstp->rq_resp;
+ 
+-	if (!(p = nlm4_decode_cookie(p, &resp->cookie)))
+-		return 0;
+-	resp->status = *p++;
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_encode_cookie(xdr, &resp->cookie) &&
++		svcxdr_encode_testrply(xdr, resp);
+ }
+ 
+-int
+-nlm4svc_decode_void(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlm4svc_encode_res(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	return xdr_argsize_check(rqstp, p);
++	struct nlm_res *resp = rqstp->rq_resp;
++
++	return svcxdr_encode_cookie(xdr, &resp->cookie) &&
++		svcxdr_encode_stats(xdr, resp->status);
+ }
+ 
+-int
+-nlm4svc_encode_void(struct svc_rqst *rqstp, __be32 *p)
++bool
++nlm4svc_encode_shareres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	return xdr_ressize_check(rqstp, p);
++	struct nlm_res *resp = rqstp->rq_resp;
++
++	if (!svcxdr_encode_cookie(xdr, &resp->cookie))
++		return false;
++	if (!svcxdr_encode_stats(xdr, resp->status))
++		return false;
++	/* sequence */
++	if (xdr_stream_encode_u32(xdr, 0) < 0)
++		return false;
++
++	return true;
+ }
+diff --git a/fs/locks.c b/fs/locks.c
+index cbb5701ce9f37..b0753c8871fb2 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -251,7 +251,7 @@ locks_get_lock_context(struct inode *inode, int type)
+ 	struct file_lock_context *ctx;
+ 
+ 	/* paired with cmpxchg() below */
+-	ctx = smp_load_acquire(&inode->i_flctx);
++	ctx = locks_inode_context(inode);
+ 	if (likely(ctx) || type == F_UNLCK)
+ 		goto out;
+ 
+@@ -270,7 +270,7 @@ locks_get_lock_context(struct inode *inode, int type)
+ 	 */
+ 	if (cmpxchg(&inode->i_flctx, NULL, ctx)) {
+ 		kmem_cache_free(flctx_cache, ctx);
+-		ctx = smp_load_acquire(&inode->i_flctx);
++		ctx = locks_inode_context(inode);
+ 	}
+ out:
+ 	trace_locks_get_lock_context(inode, type, ctx);
+@@ -323,7 +323,7 @@ locks_check_ctx_file_list(struct file *filp, struct list_head *list,
+ void
+ locks_free_lock_context(struct inode *inode)
+ {
+-	struct file_lock_context *ctx = inode->i_flctx;
++	struct file_lock_context *ctx = locks_inode_context(inode);
+ 
+ 	if (unlikely(ctx)) {
+ 		locks_check_ctx_lists(inode);
+@@ -376,6 +376,34 @@ void locks_release_private(struct file_lock *fl)
+ }
+ EXPORT_SYMBOL_GPL(locks_release_private);
+ 
++/**
++ * locks_owner_has_blockers - Check for blocking lock requests
++ * @flctx: file lock context
++ * @owner: lock owner
++ *
++ * Return values:
++ *   %true: @owner has at least one blocker
++ *   %false: @owner has no blockers
++ */
++bool locks_owner_has_blockers(struct file_lock_context *flctx,
++		fl_owner_t owner)
++{
++	struct file_lock *fl;
++
++	spin_lock(&flctx->flc_lock);
++	list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
++		if (fl->fl_owner != owner)
++			continue;
++		if (!list_empty(&fl->fl_blocked_requests)) {
++			spin_unlock(&flctx->flc_lock);
++			return true;
++		}
++	}
++	spin_unlock(&flctx->flc_lock);
++	return false;
++}
++EXPORT_SYMBOL_GPL(locks_owner_has_blockers);
++
+ /* Free a lock which is not in use. */
+ void locks_free_lock(struct file_lock *fl)
+ {
+@@ -954,19 +982,32 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
+ 	struct file_lock *cfl;
+ 	struct file_lock_context *ctx;
+ 	struct inode *inode = locks_inode(filp);
++	void *owner;
++	void (*func)(void);
+ 
+-	ctx = smp_load_acquire(&inode->i_flctx);
++	ctx = locks_inode_context(inode);
+ 	if (!ctx || list_empty_careful(&ctx->flc_posix)) {
+ 		fl->fl_type = F_UNLCK;
+ 		return;
+ 	}
+ 
++retry:
+ 	spin_lock(&ctx->flc_lock);
+ 	list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
+-		if (posix_locks_conflict(fl, cfl)) {
+-			locks_copy_conflock(fl, cfl);
+-			goto out;
++		if (!posix_locks_conflict(fl, cfl))
++			continue;
++		if (cfl->fl_lmops && cfl->fl_lmops->lm_lock_expirable
++			&& (*cfl->fl_lmops->lm_lock_expirable)(cfl)) {
++			owner = cfl->fl_lmops->lm_mod_owner;
++			func = cfl->fl_lmops->lm_expire_lock;
++			__module_get(owner);
++			spin_unlock(&ctx->flc_lock);
++			(*func)();
++			module_put(owner);
++			goto retry;
+ 		}
++		locks_copy_conflock(fl, cfl);
++		goto out;
+ 	}
+ 	fl->fl_type = F_UNLCK;
+ out:
+@@ -1140,6 +1181,8 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
+ 	int error;
+ 	bool added = false;
+ 	LIST_HEAD(dispose);
++	void *owner;
++	void (*func)(void);
+ 
+ 	ctx = locks_get_lock_context(inode, request->fl_type);
+ 	if (!ctx)
+@@ -1158,6 +1201,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
+ 		new_fl2 = locks_alloc_lock();
+ 	}
+ 
++retry:
+ 	percpu_down_read(&file_rwsem);
+ 	spin_lock(&ctx->flc_lock);
+ 	/*
+@@ -1169,6 +1213,17 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
+ 		list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
+ 			if (!posix_locks_conflict(request, fl))
+ 				continue;
++			if (fl->fl_lmops && fl->fl_lmops->lm_lock_expirable
++				&& (*fl->fl_lmops->lm_lock_expirable)(fl)) {
++				owner = fl->fl_lmops->lm_mod_owner;
++				func = fl->fl_lmops->lm_expire_lock;
++				__module_get(owner);
++				spin_unlock(&ctx->flc_lock);
++				percpu_up_read(&file_rwsem);
++				(*func)();
++				module_put(owner);
++				goto retry;
++			}
+ 			if (conflock)
+ 				locks_copy_conflock(conflock, fl);
+ 			error = -EAGAIN;
+@@ -1619,7 +1674,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
+ 	new_fl->fl_flags = type;
+ 
+ 	/* typically we will check that ctx is non-NULL before calling */
+-	ctx = smp_load_acquire(&inode->i_flctx);
++	ctx = locks_inode_context(inode);
+ 	if (!ctx) {
+ 		WARN_ON_ONCE(1);
+ 		goto free_lock;
+@@ -1724,7 +1779,7 @@ void lease_get_mtime(struct inode *inode, struct timespec64 *time)
+ 	struct file_lock_context *ctx;
+ 	struct file_lock *fl;
+ 
+-	ctx = smp_load_acquire(&inode->i_flctx);
++	ctx = locks_inode_context(inode);
+ 	if (ctx && !list_empty_careful(&ctx->flc_lease)) {
+ 		spin_lock(&ctx->flc_lock);
+ 		fl = list_first_entry_or_null(&ctx->flc_lease,
+@@ -1770,7 +1825,7 @@ int fcntl_getlease(struct file *filp)
+ 	int type = F_UNLCK;
+ 	LIST_HEAD(dispose);
+ 
+-	ctx = smp_load_acquire(&inode->i_flctx);
++	ctx = locks_inode_context(inode);
+ 	if (ctx && !list_empty_careful(&ctx->flc_lease)) {
+ 		percpu_down_read(&file_rwsem);
+ 		spin_lock(&ctx->flc_lock);
+@@ -1808,6 +1863,9 @@ check_conflicting_open(struct file *filp, const long arg, int flags)
+ 
+ 	if (flags & FL_LAYOUT)
+ 		return 0;
++	if (flags & FL_DELEG)
++		/* We leave these checks to the caller */
++		return 0;
+ 
+ 	if (arg == F_RDLCK)
+ 		return inode_is_open_for_write(inode) ? -EAGAIN : 0;
+@@ -1956,7 +2014,7 @@ static int generic_delete_lease(struct file *filp, void *owner)
+ 	struct file_lock_context *ctx;
+ 	LIST_HEAD(dispose);
+ 
+-	ctx = smp_load_acquire(&inode->i_flctx);
++	ctx = locks_inode_context(inode);
+ 	if (!ctx) {
+ 		trace_generic_delete_lease(inode, NULL);
+ 		return error;
+@@ -2536,14 +2594,15 @@ int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
+ 	 */
+ 	if (!error && file_lock->fl_type != F_UNLCK &&
+ 	    !(file_lock->fl_flags & FL_OFDLCK)) {
++		struct files_struct *files = current->files;
+ 		/*
+ 		 * We need that spin_lock here - it prevents reordering between
+ 		 * update of i_flctx->flc_posix and check for it done in
+ 		 * close(). rcu_read_lock() wouldn't do.
+ 		 */
+-		spin_lock(&current->files->file_lock);
+-		f = fcheck(fd);
+-		spin_unlock(&current->files->file_lock);
++		spin_lock(&files->file_lock);
++		f = files_lookup_fd_locked(files, fd);
++		spin_unlock(&files->file_lock);
+ 		if (f != filp) {
+ 			file_lock->fl_type = F_UNLCK;
+ 			error = do_lock_file_wait(filp, cmd, file_lock);
+@@ -2667,14 +2726,15 @@ int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
+ 	 */
+ 	if (!error && file_lock->fl_type != F_UNLCK &&
+ 	    !(file_lock->fl_flags & FL_OFDLCK)) {
++		struct files_struct *files = current->files;
+ 		/*
+ 		 * We need that spin_lock here - it prevents reordering between
+ 		 * update of i_flctx->flc_posix and check for it done in
+ 		 * close(). rcu_read_lock() wouldn't do.
+ 		 */
+-		spin_lock(&current->files->file_lock);
+-		f = fcheck(fd);
+-		spin_unlock(&current->files->file_lock);
++		spin_lock(&files->file_lock);
++		f = files_lookup_fd_locked(files, fd);
++		spin_unlock(&files->file_lock);
+ 		if (f != filp) {
+ 			file_lock->fl_type = F_UNLCK;
+ 			error = do_lock_file_wait(filp, cmd, file_lock);
+@@ -2705,7 +2765,7 @@ void locks_remove_posix(struct file *filp, fl_owner_t owner)
+ 	 * posix_lock_file().  Another process could be setting a lock on this
+ 	 * file at the same time, but we wouldn't remove that lock anyway.
+ 	 */
+-	ctx =  smp_load_acquire(&inode->i_flctx);
++	ctx = locks_inode_context(inode);
+ 	if (!ctx || list_empty(&ctx->flc_posix))
+ 		return;
+ 
+@@ -2778,7 +2838,7 @@ void locks_remove_file(struct file *filp)
+ {
+ 	struct file_lock_context *ctx;
+ 
+-	ctx = smp_load_acquire(&locks_inode(filp)->i_flctx);
++	ctx = locks_inode_context(locks_inode(filp));
+ 	if (!ctx)
+ 		return;
+ 
+@@ -2825,7 +2885,7 @@ bool vfs_inode_has_locks(struct inode *inode)
+ 	struct file_lock_context *ctx;
+ 	bool ret;
+ 
+-	ctx = smp_load_acquire(&inode->i_flctx);
++	ctx = locks_inode_context(inode);
+ 	if (!ctx)
+ 		return false;
+ 
+@@ -2970,7 +3030,7 @@ void show_fd_locks(struct seq_file *f,
+ 	struct file_lock_context *ctx;
+ 	int id = 0;
+ 
+-	ctx = smp_load_acquire(&inode->i_flctx);
++	ctx = locks_inode_context(inode);
+ 	if (!ctx)
+ 		return;
+ 
+diff --git a/fs/namei.c b/fs/namei.c
+index cb37d7c477e0b..72521a614514b 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -4277,11 +4277,14 @@ SYSCALL_DEFINE2(link, const char __user *, oldname, const char __user *, newname
+  *	   ->i_mutex on parents, which works but leads to some truly excessive
+  *	   locking].
+  */
+-int vfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+-	       struct inode *new_dir, struct dentry *new_dentry,
+-	       struct inode **delegated_inode, unsigned int flags)
++int vfs_rename(struct renamedata *rd)
+ {
+ 	int error;
++	struct inode *old_dir = rd->old_dir, *new_dir = rd->new_dir;
++	struct dentry *old_dentry = rd->old_dentry;
++	struct dentry *new_dentry = rd->new_dentry;
++	struct inode **delegated_inode = rd->delegated_inode;
++	unsigned int flags = rd->flags;
+ 	bool is_dir = d_is_dir(old_dentry);
+ 	struct inode *source = old_dentry->d_inode;
+ 	struct inode *target = new_dentry->d_inode;
+@@ -4429,6 +4432,7 @@ EXPORT_SYMBOL(vfs_rename);
+ int do_renameat2(int olddfd, struct filename *from, int newdfd,
+ 		 struct filename *to, unsigned int flags)
+ {
++	struct renamedata rd;
+ 	struct dentry *old_dentry, *new_dentry;
+ 	struct dentry *trap;
+ 	struct path old_path, new_path;
+@@ -4532,9 +4536,14 @@ int do_renameat2(int olddfd, struct filename *from, int newdfd,
+ 				     &new_path, new_dentry, flags);
+ 	if (error)
+ 		goto exit5;
+-	error = vfs_rename(old_path.dentry->d_inode, old_dentry,
+-			   new_path.dentry->d_inode, new_dentry,
+-			   &delegated_inode, flags);
++
++	rd.old_dir	   = old_path.dentry->d_inode;
++	rd.old_dentry	   = old_dentry;
++	rd.new_dir	   = new_path.dentry->d_inode;
++	rd.new_dentry	   = new_dentry;
++	rd.delegated_inode = &delegated_inode;
++	rd.flags	   = flags;
++	error = vfs_rename(&rd);
+ exit5:
+ 	dput(new_dentry);
+ exit4:
+diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
+index 73000aa2d220b..a9e563145e0c2 100644
+--- a/fs/nfs/blocklayout/blocklayout.c
++++ b/fs/nfs/blocklayout/blocklayout.c
+@@ -699,7 +699,7 @@ bl_alloc_lseg(struct pnfs_layout_hdr *lo, struct nfs4_layoutget_res *lgr,
+ 
+ 	xdr_init_decode_pages(&xdr, &buf,
+ 			lgr->layoutp->pages, lgr->layoutp->len);
+-	xdr_set_scratch_buffer(&xdr, page_address(scratch), PAGE_SIZE);
++	xdr_set_scratch_page(&xdr, scratch);
+ 
+ 	status = -EIO;
+ 	p = xdr_inline_decode(&xdr, 4);
+diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c
+index 6e3a14fdff9c8..16412d6636e86 100644
+--- a/fs/nfs/blocklayout/dev.c
++++ b/fs/nfs/blocklayout/dev.c
+@@ -510,7 +510,7 @@ bl_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev,
+ 		goto out;
+ 
+ 	xdr_init_decode_pages(&xdr, &buf, pdev->pages, pdev->pglen);
+-	xdr_set_scratch_buffer(&xdr, page_address(scratch), PAGE_SIZE);
++	xdr_set_scratch_page(&xdr, scratch);
+ 
+ 	p = xdr_inline_decode(&xdr, sizeof(__be32));
+ 	if (!p)
+diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
+index 7817ad94a6bae..8fe143cad4a2b 100644
+--- a/fs/nfs/callback.c
++++ b/fs/nfs/callback.c
+@@ -17,7 +17,6 @@
+ #include <linux/errno.h>
+ #include <linux/mutex.h>
+ #include <linux/freezer.h>
+-#include <linux/kthread.h>
+ #include <linux/sunrpc/svcauth_gss.h>
+ #include <linux/sunrpc/bc_xprt.h>
+ 
+@@ -45,18 +44,18 @@ static int nfs4_callback_up_net(struct svc_serv *serv, struct net *net)
+ 	int ret;
+ 	struct nfs_net *nn = net_generic(net, nfs_net_id);
+ 
+-	ret = svc_create_xprt(serv, "tcp", net, PF_INET,
+-				nfs_callback_set_tcpport, SVC_SOCK_ANONYMOUS,
+-				cred);
++	ret = svc_xprt_create(serv, "tcp", net, PF_INET,
++			      nfs_callback_set_tcpport, SVC_SOCK_ANONYMOUS,
++			      cred);
+ 	if (ret <= 0)
+ 		goto out_err;
+ 	nn->nfs_callback_tcpport = ret;
+ 	dprintk("NFS: Callback listener port = %u (af %u, net %x)\n",
+ 		nn->nfs_callback_tcpport, PF_INET, net->ns.inum);
+ 
+-	ret = svc_create_xprt(serv, "tcp", net, PF_INET6,
+-				nfs_callback_set_tcpport, SVC_SOCK_ANONYMOUS,
+-				cred);
++	ret = svc_xprt_create(serv, "tcp", net, PF_INET6,
++			      nfs_callback_set_tcpport, SVC_SOCK_ANONYMOUS,
++			      cred);
+ 	if (ret > 0) {
+ 		nn->nfs_callback_tcpport6 = ret;
+ 		dprintk("NFS: Callback listener port = %u (af %u, net %x)\n",
+@@ -81,9 +80,6 @@ nfs4_callback_svc(void *vrqstp)
+ 	set_freezable();
+ 
+ 	while (!kthread_freezable_should_stop(NULL)) {
+-
+-		if (signal_pending(current))
+-			flush_signals(current);
+ 		/*
+ 		 * Listen for a request on the socket
+ 		 */
+@@ -92,8 +88,8 @@ nfs4_callback_svc(void *vrqstp)
+ 			continue;
+ 		svc_process(rqstp);
+ 	}
++
+ 	svc_exit_thread(rqstp);
+-	module_put_and_exit(0);
+ 	return 0;
+ }
+ 
+@@ -113,11 +109,7 @@ nfs41_callback_svc(void *vrqstp)
+ 	set_freezable();
+ 
+ 	while (!kthread_freezable_should_stop(NULL)) {
+-
+-		if (signal_pending(current))
+-			flush_signals(current);
+-
+-		prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_INTERRUPTIBLE);
++		prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_IDLE);
+ 		spin_lock_bh(&serv->sv_cb_lock);
+ 		if (!list_empty(&serv->sv_cb_list)) {
+ 			req = list_first_entry(&serv->sv_cb_list,
+@@ -132,12 +124,12 @@ nfs41_callback_svc(void *vrqstp)
+ 		} else {
+ 			spin_unlock_bh(&serv->sv_cb_lock);
+ 			if (!kthread_should_stop())
+-				schedule();
++				freezable_schedule();
+ 			finish_wait(&serv->sv_cb_waitq, &wq);
+ 		}
+ 	}
++
+ 	svc_exit_thread(rqstp);
+-	module_put_and_exit(0);
+ 	return 0;
+ }
+ 
+@@ -169,12 +161,12 @@ static int nfs_callback_start_svc(int minorversion, struct rpc_xprt *xprt,
+ 	if (nrservs < NFS4_MIN_NR_CALLBACK_THREADS)
+ 		nrservs = NFS4_MIN_NR_CALLBACK_THREADS;
+ 
+-	if (serv->sv_nrthreads-1 == nrservs)
++	if (serv->sv_nrthreads == nrservs)
+ 		return 0;
+ 
+-	ret = serv->sv_ops->svo_setup(serv, NULL, nrservs);
++	ret = svc_set_num_threads(serv, NULL, nrservs);
+ 	if (ret) {
+-		serv->sv_ops->svo_setup(serv, NULL, 0);
++		svc_set_num_threads(serv, NULL, 0);
+ 		return ret;
+ 	}
+ 	dprintk("nfs_callback_up: service started\n");
+@@ -189,7 +181,7 @@ static void nfs_callback_down_net(u32 minorversion, struct svc_serv *serv, struc
+ 		return;
+ 
+ 	dprintk("NFS: destroy per-net callback data; net=%x\n", net->ns.inum);
+-	svc_shutdown_net(serv, net);
++	svc_xprt_destroy_all(serv, net);
+ }
+ 
+ static int nfs_callback_up_net(int minorversion, struct svc_serv *serv,
+@@ -232,59 +224,17 @@ static int nfs_callback_up_net(int minorversion, struct svc_serv *serv,
+ 	return ret;
+ }
+ 
+-static const struct svc_serv_ops nfs40_cb_sv_ops = {
+-	.svo_function		= nfs4_callback_svc,
+-	.svo_enqueue_xprt	= svc_xprt_do_enqueue,
+-	.svo_setup		= svc_set_num_threads_sync,
+-	.svo_module		= THIS_MODULE,
+-};
+-#if defined(CONFIG_NFS_V4_1)
+-static const struct svc_serv_ops nfs41_cb_sv_ops = {
+-	.svo_function		= nfs41_callback_svc,
+-	.svo_enqueue_xprt	= svc_xprt_do_enqueue,
+-	.svo_setup		= svc_set_num_threads_sync,
+-	.svo_module		= THIS_MODULE,
+-};
+-
+-static const struct svc_serv_ops *nfs4_cb_sv_ops[] = {
+-	[0] = &nfs40_cb_sv_ops,
+-	[1] = &nfs41_cb_sv_ops,
+-};
+-#else
+-static const struct svc_serv_ops *nfs4_cb_sv_ops[] = {
+-	[0] = &nfs40_cb_sv_ops,
+-	[1] = NULL,
+-};
+-#endif
+-
+ static struct svc_serv *nfs_callback_create_svc(int minorversion)
+ {
+ 	struct nfs_callback_data *cb_info = &nfs_callback_info[minorversion];
+-	const struct svc_serv_ops *sv_ops;
++	int (*threadfn)(void *data);
+ 	struct svc_serv *serv;
+ 
+ 	/*
+ 	 * Check whether we're already up and running.
+ 	 */
+-	if (cb_info->serv) {
+-		/*
+-		 * Note: increase service usage, because later in case of error
+-		 * svc_destroy() will be called.
+-		 */
+-		svc_get(cb_info->serv);
+-		return cb_info->serv;
+-	}
+-
+-	switch (minorversion) {
+-	case 0:
+-		sv_ops = nfs4_cb_sv_ops[0];
+-		break;
+-	default:
+-		sv_ops = nfs4_cb_sv_ops[1];
+-	}
+-
+-	if (sv_ops == NULL)
+-		return ERR_PTR(-ENOTSUPP);
++	if (cb_info->serv)
++		return svc_get(cb_info->serv);
+ 
+ 	/*
+ 	 * Sanity check: if there's no task,
+@@ -294,7 +244,16 @@ static struct svc_serv *nfs_callback_create_svc(int minorversion)
+ 		printk(KERN_WARNING "nfs_callback_create_svc: no kthread, %d users??\n",
+ 			cb_info->users);
+ 
+-	serv = svc_create_pooled(&nfs4_callback_program, NFS4_CALLBACK_BUFSIZE, sv_ops);
++	threadfn = nfs4_callback_svc;
++#if defined(CONFIG_NFS_V4_1)
++	if (minorversion)
++		threadfn = nfs41_callback_svc;
++#else
++	if (minorversion)
++		return ERR_PTR(-ENOTSUPP);
++#endif
++	serv = svc_create(&nfs4_callback_program, NFS4_CALLBACK_BUFSIZE,
++			  threadfn);
+ 	if (!serv) {
+ 		printk(KERN_ERR "nfs_callback_create_svc: create service failed\n");
+ 		return ERR_PTR(-ENOMEM);
+@@ -335,16 +294,10 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt)
+ 		goto err_start;
+ 
+ 	cb_info->users++;
+-	/*
+-	 * svc_create creates the svc_serv with sv_nrthreads == 1, and then
+-	 * svc_prepare_thread increments that. So we need to call svc_destroy
+-	 * on both success and failure so that the refcount is 1 when the
+-	 * thread exits.
+-	 */
+ err_net:
+ 	if (!cb_info->users)
+ 		cb_info->serv = NULL;
+-	svc_destroy(serv);
++	svc_put(serv);
+ err_create:
+ 	mutex_unlock(&nfs_callback_mutex);
+ 	return ret;
+@@ -369,8 +322,8 @@ void nfs_callback_down(int minorversion, struct net *net)
+ 	cb_info->users--;
+ 	if (cb_info->users == 0) {
+ 		svc_get(serv);
+-		serv->sv_ops->svo_setup(serv, NULL, 0);
+-		svc_destroy(serv);
++		svc_set_num_threads(serv, NULL, 0);
++		svc_put(serv);
+ 		dprintk("nfs_callback_down: service destroyed\n");
+ 		cb_info->serv = NULL;
+ 	}
+@@ -429,6 +382,8 @@ check_gss_callback_principal(struct nfs_client *clp, struct svc_rqst *rqstp)
+  */
+ static int nfs_callback_authenticate(struct svc_rqst *rqstp)
+ {
++	rqstp->rq_auth_stat = rpc_autherr_badcred;
++
+ 	switch (rqstp->rq_authop->flavour) {
+ 	case RPC_AUTH_NULL:
+ 		if (rqstp->rq_proc != CB_NULL)
+@@ -439,6 +394,8 @@ static int nfs_callback_authenticate(struct svc_rqst *rqstp)
+ 		 if (svc_is_backchannel(rqstp))
+ 			return SVC_DENIED;
+ 	}
++
++	rqstp->rq_auth_stat = rpc_auth_ok;
+ 	return SVC_OK;
+ }
+ 
+diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
+index ca8a4aa351dc9..db69fc267c9a0 100644
+--- a/fs/nfs/callback_xdr.c
++++ b/fs/nfs/callback_xdr.c
+@@ -63,14 +63,13 @@ static __be32 nfs4_callback_null(struct svc_rqst *rqstp)
+ 	return htonl(NFS4_OK);
+ }
+ 
+-static int nfs4_decode_void(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	return xdr_argsize_check(rqstp, p);
+-}
+-
+-static int nfs4_encode_void(struct svc_rqst *rqstp, __be32 *p)
++/*
++ * svc_process_common() looks for an XDR encoder to know when
++ * not to drop a Reply.
++ */
++static bool nfs4_encode_void(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	return xdr_ressize_check(rqstp, p);
++	return true;
+ }
+ 
+ static __be32 decode_string(struct xdr_stream *xdr, unsigned int *len,
+@@ -984,7 +983,17 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp)
+ 
+ out_invalidcred:
+ 	pr_warn_ratelimited("NFS: NFSv4 callback contains invalid cred\n");
+-	return svc_return_autherr(rqstp, rpc_autherr_badcred);
++	rqstp->rq_auth_stat = rpc_autherr_badcred;
++	return rpc_success;
++}
++
++static int
++nfs_callback_dispatch(struct svc_rqst *rqstp, __be32 *statp)
++{
++	const struct svc_procedure *procp = rqstp->rq_procinfo;
++
++	*statp = procp->pc_func(rqstp);
++	return 1;
+ }
+ 
+ /*
+@@ -1053,16 +1062,18 @@ static struct callback_op callback_ops[] = {
+ static const struct svc_procedure nfs4_callback_procedures1[] = {
+ 	[CB_NULL] = {
+ 		.pc_func = nfs4_callback_null,
+-		.pc_decode = nfs4_decode_void,
+ 		.pc_encode = nfs4_encode_void,
+ 		.pc_xdrressize = 1,
++		.pc_name = "NULL",
+ 	},
+ 	[CB_COMPOUND] = {
+ 		.pc_func = nfs4_callback_compound,
+ 		.pc_encode = nfs4_encode_void,
+ 		.pc_argsize = 256,
++		.pc_argzero = 256,
+ 		.pc_ressize = 256,
+ 		.pc_xdrressize = NFS4_CALLBACK_BUFSIZE,
++		.pc_name = "COMPOUND",
+ 	}
+ };
+ 
+@@ -1073,7 +1084,7 @@ const struct svc_version nfs4_callback_version1 = {
+ 	.vs_proc = nfs4_callback_procedures1,
+ 	.vs_count = nfs4_callback_count1,
+ 	.vs_xdrsize = NFS4_CALLBACK_XDRSIZE,
+-	.vs_dispatch = NULL,
++	.vs_dispatch = nfs_callback_dispatch,
+ 	.vs_hidden = true,
+ 	.vs_need_cong_ctrl = true,
+ };
+@@ -1085,7 +1096,7 @@ const struct svc_version nfs4_callback_version4 = {
+ 	.vs_proc = nfs4_callback_procedures1,
+ 	.vs_count = nfs4_callback_count4,
+ 	.vs_xdrsize = NFS4_CALLBACK_XDRSIZE,
+-	.vs_dispatch = NULL,
++	.vs_dispatch = nfs_callback_dispatch,
+ 	.vs_hidden = true,
+ 	.vs_need_cong_ctrl = true,
+ };
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 9f88ca7b20015..935029632d5f6 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -576,7 +576,7 @@ int nfs_readdir_page_filler(nfs_readdir_descriptor_t *desc, struct nfs_entry *en
+ 		goto out_nopages;
+ 
+ 	xdr_init_decode_pages(&stream, &buf, xdr_pages, buflen);
+-	xdr_set_scratch_buffer(&stream, page_address(scratch), PAGE_SIZE);
++	xdr_set_scratch_page(&stream, scratch);
+ 
+ 	do {
+ 		if (entry->label)
+diff --git a/fs/nfs/export.c b/fs/nfs/export.c
+index 3430d6891e89f..993be63ab3015 100644
+--- a/fs/nfs/export.c
++++ b/fs/nfs/export.c
+@@ -167,8 +167,25 @@ nfs_get_parent(struct dentry *dentry)
+ 	return parent;
+ }
+ 
++static u64 nfs_fetch_iversion(struct inode *inode)
++{
++	struct nfs_server *server = NFS_SERVER(inode);
++
++	if (nfs_check_cache_invalid(inode, NFS_INO_INVALID_CHANGE |
++						   NFS_INO_REVAL_PAGECACHE))
++		__nfs_revalidate_inode(server, inode);
++	return inode_peek_iversion_raw(inode);
++}
++
+ const struct export_operations nfs_export_ops = {
+ 	.encode_fh = nfs_encode_fh,
+ 	.fh_to_dentry = nfs_fh_to_dentry,
+ 	.get_parent = nfs_get_parent,
++	.fetch_iversion = nfs_fetch_iversion,
++	.flags = EXPORT_OP_NOWCC		|
++		 EXPORT_OP_NOSUBTREECHK		|
++		 EXPORT_OP_CLOSE_BEFORE_UNLINK	|
++		 EXPORT_OP_REMOTE_FS		|
++		 EXPORT_OP_NOATOMIC_ATTR	|
++		 EXPORT_OP_FLUSH_ON_CLOSE,
+ };
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index 7be1a7f7fcb2a..d35aae47b062b 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -798,6 +798,9 @@ int nfs_lock(struct file *filp, int cmd, struct file_lock *fl)
+ 
+ 	nfs_inc_stats(inode, NFSIOS_VFSLOCK);
+ 
++	if (fl->fl_flags & FL_RECLAIM)
++		return -ENOGRACE;
++
+ 	/* No mandatory locks over NFS */
+ 	if (__mandatory_lock(inode) && fl->fl_type != F_UNLCK)
+ 		goto out_err;
+diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
+index deecfb50dd7e3..2ed8b6885b091 100644
+--- a/fs/nfs/filelayout/filelayout.c
++++ b/fs/nfs/filelayout/filelayout.c
+@@ -293,8 +293,6 @@ static void filelayout_read_call_done(struct rpc_task *task, void *data)
+ {
+ 	struct nfs_pgio_header *hdr = data;
+ 
+-	dprintk("--> %s task->tk_status %d\n", __func__, task->tk_status);
+-
+ 	if (test_bit(NFS_IOHDR_REDO, &hdr->flags) &&
+ 	    task->tk_status == 0) {
+ 		nfs41_sequence_done(task, &hdr->res.seq_res);
+@@ -666,7 +664,7 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo,
+ 		return -ENOMEM;
+ 
+ 	xdr_init_decode_pages(&stream, &buf, lgr->layoutp->pages, lgr->layoutp->len);
+-	xdr_set_scratch_buffer(&stream, page_address(scratch), PAGE_SIZE);
++	xdr_set_scratch_page(&stream, scratch);
+ 
+ 	/* 20 = ufl_util (4), first_stripe_index (4), pattern_offset (8),
+ 	 * num_fh (4) */
+diff --git a/fs/nfs/filelayout/filelayoutdev.c b/fs/nfs/filelayout/filelayoutdev.c
+index d913e818858f3..86c3f7e69ec42 100644
+--- a/fs/nfs/filelayout/filelayoutdev.c
++++ b/fs/nfs/filelayout/filelayoutdev.c
+@@ -82,7 +82,7 @@ nfs4_fl_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev,
+ 		goto out_err;
+ 
+ 	xdr_init_decode_pages(&stream, &buf, pdev->pages, pdev->pglen);
+-	xdr_set_scratch_buffer(&stream, page_address(scratch), PAGE_SIZE);
++	xdr_set_scratch_page(&stream, scratch);
+ 
+ 	/* Get the stripe count (number of stripe index) */
+ 	p = xdr_inline_decode(&stream, 4);
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index e4f2820ba5a59..a263bfec4244d 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -378,7 +378,7 @@ ff_layout_alloc_lseg(struct pnfs_layout_hdr *lh,
+ 
+ 	xdr_init_decode_pages(&stream, &buf, lgr->layoutp->pages,
+ 			      lgr->layoutp->len);
+-	xdr_set_scratch_buffer(&stream, page_address(scratch), PAGE_SIZE);
++	xdr_set_scratch_page(&stream, scratch);
+ 
+ 	/* stripe unit and mirror_array_cnt */
+ 	rc = -EIO;
+@@ -1419,8 +1419,6 @@ static void ff_layout_read_call_done(struct rpc_task *task, void *data)
+ {
+ 	struct nfs_pgio_header *hdr = data;
+ 
+-	dprintk("--> %s task->tk_status %d\n", __func__, task->tk_status);
+-
+ 	if (test_bit(NFS_IOHDR_REDO, &hdr->flags) &&
+ 	    task->tk_status == 0) {
+ 		nfs4_sequence_done(task, &hdr->res.seq_res);
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index 1f12297109b41..bfa7202ca7be1 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -69,7 +69,7 @@ nfs4_ff_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev,
+ 	INIT_LIST_HEAD(&dsaddrs);
+ 
+ 	xdr_init_decode_pages(&stream, &buf, pdev->pages, pdev->pglen);
+-	xdr_set_scratch_buffer(&stream, page_address(scratch), PAGE_SIZE);
++	xdr_set_scratch_page(&stream, scratch);
+ 
+ 	/* multipath count */
+ 	p = xdr_inline_decode(&stream, 4);
+diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
+index f2248d9d4db51..df5bee2f505c4 100644
+--- a/fs/nfs/nfs42xdr.c
++++ b/fs/nfs/nfs42xdr.c
+@@ -1536,7 +1536,7 @@ static int nfs4_xdr_dec_listxattrs(struct rpc_rqst *rqstp,
+ 	struct compound_hdr hdr;
+ 	int status;
+ 
+-	xdr_set_scratch_buffer(xdr, page_address(res->scratch), PAGE_SIZE);
++	xdr_set_scratch_page(xdr, res->scratch);
+ 
+ 	status = decode_compound_hdr(xdr, &hdr);
+ 	if (status)
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index afb617a4a7e42..d8fc5d72a161c 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -2757,7 +2757,7 @@ static int nfs4_run_state_manager(void *ptr)
+ 		goto again;
+ 
+ 	nfs_put_client(clp);
+-	module_put_and_exit(0);
++	module_put_and_kthread_exit(0);
+ 	return 0;
+ }
+ 
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index f1e599553f2be..4e5c6cb770ad5 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -6404,10 +6404,8 @@ nfs4_xdr_dec_getacl(struct rpc_rqst *rqstp, struct xdr_stream *xdr,
+ 	struct compound_hdr hdr;
+ 	int status;
+ 
+-	if (res->acl_scratch != NULL) {
+-		void *p = page_address(res->acl_scratch);
+-		xdr_set_scratch_buffer(xdr, p, PAGE_SIZE);
+-	}
++	if (res->acl_scratch != NULL)
++		xdr_set_scratch_page(xdr, res->acl_scratch);
+ 	status = decode_compound_hdr(xdr, &hdr);
+ 	if (status)
+ 		goto out;
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 17fef6eb490c5..d79a3b6cb0701 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -870,9 +870,6 @@ static void nfs_pgio_result(struct rpc_task *task, void *calldata)
+ 	struct nfs_pgio_header *hdr = calldata;
+ 	struct inode *inode = hdr->inode;
+ 
+-	dprintk("NFS: %s: %5u, (status %d)\n", __func__,
+-		task->tk_pid, task->tk_status);
+-
+ 	if (hdr->rw_ops->rw_done(task, hdr, inode) != 0)
+ 		return;
+ 	if (task->tk_status < 0)
+diff --git a/fs/nfs/super.c b/fs/nfs/super.c
+index b3fcc27b95648..1ffce90760606 100644
+--- a/fs/nfs/super.c
++++ b/fs/nfs/super.c
+@@ -86,9 +86,11 @@ const struct super_operations nfs_sops = {
+ };
+ EXPORT_SYMBOL_GPL(nfs_sops);
+ 
++#ifdef CONFIG_NFS_V4_2
+ static const struct nfs_ssc_client_ops nfs_ssc_clnt_ops_tbl = {
+ 	.sco_sb_deactive = nfs_sb_deactive,
+ };
++#endif
+ 
+ #if IS_ENABLED(CONFIG_NFS_V4)
+ static int __init register_nfs4_fs(void)
+@@ -111,6 +113,7 @@ static void unregister_nfs4_fs(void)
+ }
+ #endif
+ 
++#ifdef CONFIG_NFS_V4_2
+ static void nfs_ssc_register_ops(void)
+ {
+ 	nfs_ssc_register(&nfs_ssc_clnt_ops_tbl);
+@@ -120,6 +123,7 @@ static void nfs_ssc_unregister_ops(void)
+ {
+ 	nfs_ssc_unregister(&nfs_ssc_clnt_ops_tbl);
+ }
++#endif /* CONFIG_NFS_V4_2 */
+ 
+ static struct shrinker acl_shrinker = {
+ 	.count_objects	= nfs_access_cache_count,
+@@ -148,7 +152,9 @@ int __init register_nfs_fs(void)
+ 	ret = register_shrinker(&acl_shrinker);
+ 	if (ret < 0)
+ 		goto error_3;
++#ifdef CONFIG_NFS_V4_2
+ 	nfs_ssc_register_ops();
++#endif
+ 	return 0;
+ error_3:
+ 	nfs_unregister_sysctl();
+@@ -168,7 +174,9 @@ void __exit unregister_nfs_fs(void)
+ 	unregister_shrinker(&acl_shrinker);
+ 	nfs_unregister_sysctl();
+ 	unregister_nfs4_fs();
++#ifdef CONFIG_NFS_V4_2
+ 	nfs_ssc_unregister_ops();
++#endif
+ 	unregister_filesystem(&nfs_fs_type);
+ }
+ 
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 4cf0606919794..2bde35921f2b2 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -1809,9 +1809,6 @@ static void nfs_commit_done(struct rpc_task *task, void *calldata)
+ {
+ 	struct nfs_commit_data	*data = calldata;
+ 
+-        dprintk("NFS: %5u nfs_commit_done (status %d)\n",
+-                                task->tk_pid, task->tk_status);
+-
+ 	/* Call the NFS version-specific code */
+ 	NFS_PROTO(data->inode)->commit_done(task, data);
+ 	trace_nfs_commit_done(task, data);
+diff --git a/fs/nfs_common/Makefile b/fs/nfs_common/Makefile
+index fa82f5aaa6d95..119c75ab9fd08 100644
+--- a/fs/nfs_common/Makefile
++++ b/fs/nfs_common/Makefile
+@@ -7,4 +7,4 @@ obj-$(CONFIG_NFS_ACL_SUPPORT) += nfs_acl.o
+ nfs_acl-objs := nfsacl.o
+ 
+ obj-$(CONFIG_GRACE_PERIOD) += grace.o
+-obj-$(CONFIG_GRACE_PERIOD) += nfs_ssc.o
++obj-$(CONFIG_NFS_V4_2_SSC_HELPER) += nfs_ssc.o
+diff --git a/fs/nfs_common/nfs_ssc.c b/fs/nfs_common/nfs_ssc.c
+index f43bbb3739134..7c1509e968c81 100644
+--- a/fs/nfs_common/nfs_ssc.c
++++ b/fs/nfs_common/nfs_ssc.c
+@@ -1,7 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * fs/nfs_common/nfs_ssc_comm.c
+- *
+  * Helper for knfsd's SSC to access ops in NFS client modules
+  *
+  * Author: Dai Ngo <dai.ngo@oracle.com>
+diff --git a/fs/nfs_common/nfsacl.c b/fs/nfs_common/nfsacl.c
+index d056ad2fdefd6..5a5bd85d08f8c 100644
+--- a/fs/nfs_common/nfsacl.c
++++ b/fs/nfs_common/nfsacl.c
+@@ -136,6 +136,77 @@ int nfsacl_encode(struct xdr_buf *buf, unsigned int base, struct inode *inode,
+ }
+ EXPORT_SYMBOL_GPL(nfsacl_encode);
+ 
++/**
++ * nfs_stream_encode_acl - Encode an NFSv3 ACL
++ *
++ * @xdr: an xdr_stream positioned to receive an encoded ACL
++ * @inode: inode of file whose ACL this is
++ * @acl: posix_acl to encode
++ * @encode_entries: whether to encode ACEs as well
++ * @typeflag: ACL type: NFS_ACL_DEFAULT or zero
++ *
++ * Return values:
++ *   %false: The ACL could not be encoded
++ *   %true: @xdr is advanced to the next available position
++ */
++bool nfs_stream_encode_acl(struct xdr_stream *xdr, struct inode *inode,
++			   struct posix_acl *acl, int encode_entries,
++			   int typeflag)
++{
++	const size_t elem_size = XDR_UNIT * 3;
++	u32 entries = (acl && acl->a_count) ? max_t(int, acl->a_count, 4) : 0;
++	struct nfsacl_encode_desc nfsacl_desc = {
++		.desc = {
++			.elem_size = elem_size,
++			.array_len = encode_entries ? entries : 0,
++			.xcode = xdr_nfsace_encode,
++		},
++		.acl = acl,
++		.typeflag = typeflag,
++		.uid = inode->i_uid,
++		.gid = inode->i_gid,
++	};
++	struct nfsacl_simple_acl aclbuf;
++	unsigned int base;
++	int err;
++
++	if (entries > NFS_ACL_MAX_ENTRIES)
++		return false;
++	if (xdr_stream_encode_u32(xdr, entries) < 0)
++		return false;
++
++	if (encode_entries && acl && acl->a_count == 3) {
++		struct posix_acl *acl2 = &aclbuf.acl;
++
++		/* Avoid the use of posix_acl_alloc().  nfsacl_encode() is
++		 * invoked in contexts where a memory allocation failure is
++		 * fatal.  Fortunately this fake ACL is small enough to
++		 * construct on the stack. */
++		posix_acl_init(acl2, 4);
++
++		/* Insert entries in canonical order: other orders seem
++		 to confuse Solaris VxFS. */
++		acl2->a_entries[0] = acl->a_entries[0];  /* ACL_USER_OBJ */
++		acl2->a_entries[1] = acl->a_entries[1];  /* ACL_GROUP_OBJ */
++		acl2->a_entries[2] = acl->a_entries[1];  /* ACL_MASK */
++		acl2->a_entries[2].e_tag = ACL_MASK;
++		acl2->a_entries[3] = acl->a_entries[2];  /* ACL_OTHER */
++		nfsacl_desc.acl = acl2;
++	}
++
++	base = xdr_stream_pos(xdr);
++	if (!xdr_reserve_space(xdr, XDR_UNIT +
++			       elem_size * nfsacl_desc.desc.array_len))
++		return false;
++	err = xdr_encode_array2(xdr->buf, base, &nfsacl_desc.desc);
++	if (err)
++		return false;
++
++	return true;
++}
++EXPORT_SYMBOL_GPL(nfs_stream_encode_acl);
++
++
+ struct nfsacl_decode_desc {
+ 	struct xdr_array2_desc desc;
+ 	unsigned int count;
+@@ -295,3 +366,55 @@ int nfsacl_decode(struct xdr_buf *buf, unsigned int base, unsigned int *aclcnt,
+ 		   nfsacl_desc.desc.array_len;
+ }
+ EXPORT_SYMBOL_GPL(nfsacl_decode);
++
++/**
++ * nfs_stream_decode_acl - Decode an NFSv3 ACL
++ *
++ * @xdr: an xdr_stream positioned at an encoded ACL
++ * @aclcnt: OUT: count of ACEs in decoded posix_acl
++ * @pacl: OUT: a dynamically-allocated buffer containing the decoded posix_acl
++ *
++ * Return values:
++ *   %false: The encoded ACL is not valid
++ *   %true: @pacl contains a decoded ACL, and @xdr is advanced
++ *
++ * On a successful return, caller must release *pacl using posix_acl_release().
++ */
++bool nfs_stream_decode_acl(struct xdr_stream *xdr, unsigned int *aclcnt,
++			   struct posix_acl **pacl)
++{
++	const size_t elem_size = XDR_UNIT * 3;
++	struct nfsacl_decode_desc nfsacl_desc = {
++		.desc = {
++			.elem_size = elem_size,
++			.xcode = pacl ? xdr_nfsace_decode : NULL,
++		},
++	};
++	unsigned int base;
++	u32 entries;
++
++	if (xdr_stream_decode_u32(xdr, &entries) < 0)
++		return false;
++	if (entries > NFS_ACL_MAX_ENTRIES)
++		return false;
++
++	base = xdr_stream_pos(xdr);
++	if (!xdr_inline_decode(xdr, XDR_UNIT + elem_size * entries))
++		return false;
++	nfsacl_desc.desc.array_maxlen = entries;
++	if (xdr_decode_array2(xdr->buf, base, &nfsacl_desc.desc))
++		return false;
++
++	if (pacl) {
++		if (entries != nfsacl_desc.desc.array_len ||
++		    posix_acl_from_nfsacl(nfsacl_desc.acl) != 0) {
++			posix_acl_release(nfsacl_desc.acl);
++			return false;
++		}
++		*pacl = nfsacl_desc.acl;
++	}
++	if (aclcnt)
++		*aclcnt = entries;
++	return true;
++}
++EXPORT_SYMBOL_GPL(nfs_stream_decode_acl);
+diff --git a/fs/nfsd/Kconfig b/fs/nfsd/Kconfig
+index 248f1459c0399..6d2d498a59573 100644
+--- a/fs/nfsd/Kconfig
++++ b/fs/nfsd/Kconfig
+@@ -8,6 +8,7 @@ config NFSD
+ 	select SUNRPC
+ 	select EXPORTFS
+ 	select NFS_ACL_SUPPORT if NFSD_V2_ACL
++	select NFS_ACL_SUPPORT if NFSD_V3_ACL
+ 	depends on MULTIUSER
+ 	help
+ 	  Choose Y here if you want to allow other computers to access
+@@ -26,28 +27,29 @@ config NFSD
+ 
+ 	  Below you can choose which versions of the NFS protocol are
+ 	  available to clients mounting the NFS server on this system.
+-	  Support for NFS version 2 (RFC 1094) is always available when
++	  Support for NFS version 3 (RFC 1813) is always available when
+ 	  CONFIG_NFSD is selected.
+ 
+ 	  If unsure, say N.
+ 
+-config NFSD_V2_ACL
+-	bool
+-	depends on NFSD
+-
+-config NFSD_V3
+-	bool "NFS server support for NFS version 3"
++config NFSD_V2
++	bool "NFS server support for NFS version 2 (DEPRECATED)"
+ 	depends on NFSD
++	default n
+ 	help
+-	  This option enables support in your system's NFS server for
+-	  version 3 of the NFS protocol (RFC 1813).
++	  NFSv2 (RFC 1094) was the first publicly-released version of NFS.
++	  Unless you are hosting ancient (1990's era) NFS clients, you don't
++	  need this.
+ 
+-	  If unsure, say Y.
++	  If unsure, say N.
++
++config NFSD_V2_ACL
++	bool "NFS server support for the NFSv2 ACL protocol extension"
++	depends on NFSD_V2
+ 
+ config NFSD_V3_ACL
+ 	bool "NFS server support for the NFSv3 ACL protocol extension"
+-	depends on NFSD_V3
+-	select NFSD_V2_ACL
++	depends on NFSD
+ 	help
+ 	  Solaris NFS servers support an auxiliary NFSv3 ACL protocol that
+ 	  never became an official part of the NFS version 3 protocol.
+@@ -70,13 +72,13 @@ config NFSD_V3_ACL
+ config NFSD_V4
+ 	bool "NFS server support for NFS version 4"
+ 	depends on NFSD && PROC_FS
+-	select NFSD_V3
+ 	select FS_POSIX_ACL
+ 	select SUNRPC_GSS
+ 	select CRYPTO
+ 	select CRYPTO_MD5
+ 	select CRYPTO_SHA256
+ 	select GRACE_PERIOD
++	select NFS_V4_2_SSC_HELPER if NFS_V4_2
+ 	help
+ 	  This option enables support in your system's NFS server for
+ 	  version 4 of the NFS protocol (RFC 3530).
+@@ -98,7 +100,7 @@ config NFSD_BLOCKLAYOUT
+ 	help
+ 	  This option enables support for the exporting pNFS block layouts
+ 	  in the kernel's NFS server. The pNFS block layout enables NFS
+-	  clients to directly perform I/O to block devices accesible to both
++	  clients to directly perform I/O to block devices accessible to both
+ 	  the server and the clients.  See RFC 5663 for more details.
+ 
+ 	  If unsure, say N.
+@@ -112,7 +114,7 @@ config NFSD_SCSILAYOUT
+ 	help
+ 	  This option enables support for the exporting pNFS SCSI layouts
+ 	  in the kernel's NFS server. The pNFS SCSI layout enables NFS
+-	  clients to directly perform I/O to SCSI devices accesible to both
++	  clients to directly perform I/O to SCSI devices accessible to both
+ 	  the server and the clients.  See draft-ietf-nfsv4-scsi-layout for
+ 	  more details.
+ 
+@@ -126,7 +128,7 @@ config NFSD_FLEXFILELAYOUT
+ 	  This option enables support for the exporting pNFS Flex File
+ 	  layouts in the kernel's NFS server. The pNFS Flex File  layout
+ 	  enables NFS clients to directly perform I/O to NFSv3 devices
+-	  accesible to both the server and the clients.  See
++	  accessible to both the server and the clients.  See
+ 	  draft-ietf-nfsv4-flex-files for more details.
+ 
+ 	  Warning, this server implements the bare minimum functionality
+@@ -137,7 +139,7 @@ config NFSD_FLEXFILELAYOUT
+ 
+ config NFSD_V4_2_INTER_SSC
+ 	bool "NFSv4.2 inter server to server COPY"
+-	depends on NFSD_V4 && NFS_V4_1 && NFS_V4_2
++	depends on NFSD_V4 && NFS_V4_2
+ 	help
+ 	  This option enables support for NFSv4.2 inter server to
+ 	  server copy where the destination server calls the NFSv4.2
+diff --git a/fs/nfsd/Makefile b/fs/nfsd/Makefile
+index 3f0983e93a998..6fffc8f03f740 100644
+--- a/fs/nfsd/Makefile
++++ b/fs/nfsd/Makefile
+@@ -10,11 +10,11 @@ obj-$(CONFIG_NFSD)	+= nfsd.o
+ # this one should be compiled first, as the tracing macros can easily blow up
+ nfsd-y			+= trace.o
+ 
+-nfsd-y 			+= nfssvc.o nfsctl.o nfsproc.o nfsfh.o vfs.o \
+-			   export.o auth.o lockd.o nfscache.o nfsxdr.o \
+-			   stats.o filecache.o
++nfsd-y 			+= nfssvc.o nfsctl.o nfsfh.o vfs.o \
++			   export.o auth.o lockd.o nfscache.o \
++			   stats.o filecache.o nfs3proc.o nfs3xdr.o
++nfsd-$(CONFIG_NFSD_V2) += nfsproc.o nfsxdr.o
+ nfsd-$(CONFIG_NFSD_V2_ACL) += nfs2acl.o
+-nfsd-$(CONFIG_NFSD_V3)	+= nfs3proc.o nfs3xdr.o
+ nfsd-$(CONFIG_NFSD_V3_ACL) += nfs3acl.o
+ nfsd-$(CONFIG_NFSD_V4)	+= nfs4proc.o nfs4xdr.o nfs4state.o nfs4idmap.o \
+ 			   nfs4acl.o nfs4callback.o nfs4recover.o
+diff --git a/fs/nfsd/acl.h b/fs/nfsd/acl.h
+index ba14d2f4b64f4..4b7324458a94e 100644
+--- a/fs/nfsd/acl.h
++++ b/fs/nfsd/acl.h
+@@ -38,6 +38,8 @@
+ struct nfs4_acl;
+ struct svc_fh;
+ struct svc_rqst;
++struct nfsd_attrs;
++enum nfs_ftype4;
+ 
+ int nfs4_acl_bytes(int entries);
+ int nfs4_acl_get_whotype(char *, u32);
+@@ -45,7 +47,7 @@ __be32 nfs4_acl_write_who(struct xdr_stream *xdr, int who);
+ 
+ int nfsd4_get_nfs4_acl(struct svc_rqst *rqstp, struct dentry *dentry,
+ 		struct nfs4_acl **acl);
+-__be32 nfsd4_set_nfs4_acl(struct svc_rqst *rqstp, struct svc_fh *fhp,
+-		struct nfs4_acl *acl);
++__be32 nfsd4_acl_to_attr(enum nfs_ftype4 type, struct nfs4_acl *acl,
++			 struct nfsd_attrs *attr);
+ 
+ #endif /* LINUX_NFS4_ACL_H */
+diff --git a/fs/nfsd/blocklayout.c b/fs/nfsd/blocklayout.c
+index a07c39c94bbd0..d91a686d2f313 100644
+--- a/fs/nfsd/blocklayout.c
++++ b/fs/nfsd/blocklayout.c
+@@ -16,6 +16,7 @@
+ #include "blocklayoutxdr.h"
+ #include "pnfs.h"
+ #include "filecache.h"
++#include "vfs.h"
+ 
+ #define NFSDDBG_FACILITY	NFSDDBG_PNFS
+ 
+diff --git a/fs/nfsd/blocklayoutxdr.c b/fs/nfsd/blocklayoutxdr.c
+index 2455dc8be18a8..1ed2f691ebb90 100644
+--- a/fs/nfsd/blocklayoutxdr.c
++++ b/fs/nfsd/blocklayoutxdr.c
+@@ -9,6 +9,7 @@
+ 
+ #include "nfsd.h"
+ #include "blocklayoutxdr.h"
++#include "vfs.h"
+ 
+ #define NFSDDBG_FACILITY	NFSDDBG_PNFS
+ 
+diff --git a/fs/nfsd/cache.h b/fs/nfsd/cache.h
+index 65c331f75e9c7..f21259ead64bb 100644
+--- a/fs/nfsd/cache.h
++++ b/fs/nfsd/cache.h
+@@ -84,6 +84,6 @@ int	nfsd_reply_cache_init(struct nfsd_net *);
+ void	nfsd_reply_cache_shutdown(struct nfsd_net *);
+ int	nfsd_cache_lookup(struct svc_rqst *);
+ void	nfsd_cache_update(struct svc_rqst *, int, __be32 *);
+-int	nfsd_reply_cache_stats_open(struct inode *, struct file *);
++int	nfsd_reply_cache_stats_show(struct seq_file *m, void *v);
+ 
+ #endif /* NFSCACHE_H */
+diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
+index 21e404e7cb68c..7c863f2c21e0c 100644
+--- a/fs/nfsd/export.c
++++ b/fs/nfsd/export.c
+@@ -331,12 +331,29 @@ static void nfsd4_fslocs_free(struct nfsd4_fs_locations *fsloc)
+ 	fsloc->locations = NULL;
+ }
+ 
++static int export_stats_init(struct export_stats *stats)
++{
++	stats->start_time = ktime_get_seconds();
++	return nfsd_percpu_counters_init(stats->counter, EXP_STATS_COUNTERS_NUM);
++}
++
++static void export_stats_reset(struct export_stats *stats)
++{
++	nfsd_percpu_counters_reset(stats->counter, EXP_STATS_COUNTERS_NUM);
++}
++
++static void export_stats_destroy(struct export_stats *stats)
++{
++	nfsd_percpu_counters_destroy(stats->counter, EXP_STATS_COUNTERS_NUM);
++}
++
+ static void svc_export_put(struct kref *ref)
+ {
+ 	struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
+ 	path_put(&exp->ex_path);
+ 	auth_domain_put(exp->ex_client);
+ 	nfsd4_fslocs_free(&exp->ex_fslocs);
++	export_stats_destroy(&exp->ex_stats);
+ 	kfree(exp->ex_uuid);
+ 	kfree_rcu(exp, ex_rcu);
+ }
+@@ -408,6 +425,12 @@ static int check_export(struct inode *inode, int *flags, unsigned char *uuid)
+ 		return -EINVAL;
+ 	}
+ 
++	if (inode->i_sb->s_export_op->flags & EXPORT_OP_NOSUBTREECHK &&
++	    !(*flags & NFSEXP_NOSUBTREECHECK)) {
++		dprintk("%s: %s does not support subtree checking!\n",
++			__func__, inode->i_sb->s_type->name);
++		return -EINVAL;
++	}
+ 	return 0;
+ 
+ }
+@@ -686,22 +709,47 @@ static void exp_flags(struct seq_file *m, int flag, int fsid,
+ 		kuid_t anonu, kgid_t anong, struct nfsd4_fs_locations *fslocs);
+ static void show_secinfo(struct seq_file *m, struct svc_export *exp);
+ 
++static int is_export_stats_file(struct seq_file *m)
++{
++	/*
++	 * The export_stats file uses the same ops as the exports file.
++	 * We use the file's name to determine the reported info per export.
++	 * There is no rename in nsfdfs, so d_name.name is stable.
++	 */
++	return !strcmp(m->file->f_path.dentry->d_name.name, "export_stats");
++}
++
+ static int svc_export_show(struct seq_file *m,
+ 			   struct cache_detail *cd,
+ 			   struct cache_head *h)
+ {
+-	struct svc_export *exp ;
++	struct svc_export *exp;
++	bool export_stats = is_export_stats_file(m);
+ 
+-	if (h ==NULL) {
+-		seq_puts(m, "#path domain(flags)\n");
++	if (h == NULL) {
++		if (export_stats)
++			seq_puts(m, "#path domain start-time\n#\tstats\n");
++		else
++			seq_puts(m, "#path domain(flags)\n");
+ 		return 0;
+ 	}
+ 	exp = container_of(h, struct svc_export, h);
+ 	seq_path(m, &exp->ex_path, " \t\n\\");
+ 	seq_putc(m, '\t');
+ 	seq_escape(m, exp->ex_client->name, " \t\n\\");
++	if (export_stats) {
++		seq_printf(m, "\t%lld\n", exp->ex_stats.start_time);
++		seq_printf(m, "\tfh_stale: %lld\n",
++			   percpu_counter_sum_positive(&exp->ex_stats.counter[EXP_STATS_FH_STALE]));
++		seq_printf(m, "\tio_read: %lld\n",
++			   percpu_counter_sum_positive(&exp->ex_stats.counter[EXP_STATS_IO_READ]));
++		seq_printf(m, "\tio_write: %lld\n",
++			   percpu_counter_sum_positive(&exp->ex_stats.counter[EXP_STATS_IO_WRITE]));
++		seq_putc(m, '\n');
++		return 0;
++	}
+ 	seq_putc(m, '(');
+-	if (test_bit(CACHE_VALID, &h->flags) && 
++	if (test_bit(CACHE_VALID, &h->flags) &&
+ 	    !test_bit(CACHE_NEGATIVE, &h->flags)) {
+ 		exp_flags(m, exp->ex_flags, exp->ex_fsid,
+ 			  exp->ex_anon_uid, exp->ex_anon_gid, &exp->ex_fslocs);
+@@ -742,6 +790,7 @@ static void svc_export_init(struct cache_head *cnew, struct cache_head *citem)
+ 	new->ex_layout_types = 0;
+ 	new->ex_uuid = NULL;
+ 	new->cd = item->cd;
++	export_stats_reset(&new->ex_stats);
+ }
+ 
+ static void export_update(struct cache_head *cnew, struct cache_head *citem)
+@@ -774,10 +823,15 @@ static void export_update(struct cache_head *cnew, struct cache_head *citem)
+ static struct cache_head *svc_export_alloc(void)
+ {
+ 	struct svc_export *i = kmalloc(sizeof(*i), GFP_KERNEL);
+-	if (i)
+-		return &i->h;
+-	else
++	if (!i)
++		return NULL;
++
++	if (export_stats_init(&i->ex_stats)) {
++		kfree(i);
+ 		return NULL;
++	}
++
++	return &i->h;
+ }
+ 
+ static const struct cache_detail svc_export_cache_template = {
+@@ -1239,10 +1293,14 @@ static int e_show(struct seq_file *m, void *p)
+ 	struct cache_head *cp = p;
+ 	struct svc_export *exp = container_of(cp, struct svc_export, h);
+ 	struct cache_detail *cd = m->private;
++	bool export_stats = is_export_stats_file(m);
+ 
+ 	if (p == SEQ_START_TOKEN) {
+ 		seq_puts(m, "# Version 1.1\n");
+-		seq_puts(m, "# Path Client(Flags) # IPs\n");
++		if (export_stats)
++			seq_puts(m, "# Path Client Start-time\n#\tStats\n");
++		else
++			seq_puts(m, "# Path Client(Flags) # IPs\n");
+ 		return 0;
+ 	}
+ 
+diff --git a/fs/nfsd/export.h b/fs/nfsd/export.h
+index e7daa1f246f08..d03f7f6a8642d 100644
+--- a/fs/nfsd/export.h
++++ b/fs/nfsd/export.h
+@@ -6,6 +6,7 @@
+ #define NFSD_EXPORT_H
+ 
+ #include <linux/sunrpc/cache.h>
++#include <linux/percpu_counter.h>
+ #include <uapi/linux/nfsd/export.h>
+ #include <linux/nfs4.h>
+ 
+@@ -46,6 +47,19 @@ struct exp_flavor_info {
+ 	u32	flags;
+ };
+ 
++/* Per-export stats */
++enum {
++	EXP_STATS_FH_STALE,
++	EXP_STATS_IO_READ,
++	EXP_STATS_IO_WRITE,
++	EXP_STATS_COUNTERS_NUM
++};
++
++struct export_stats {
++	time64_t		start_time;
++	struct percpu_counter	counter[EXP_STATS_COUNTERS_NUM];
++};
++
+ struct svc_export {
+ 	struct cache_head	h;
+ 	struct auth_domain *	ex_client;
+@@ -62,6 +76,7 @@ struct svc_export {
+ 	struct nfsd4_deviceid_map *ex_devid_map;
+ 	struct cache_detail	*cd;
+ 	struct rcu_head		ex_rcu;
++	struct export_stats	ex_stats;
+ };
+ 
+ /* an "export key" (expkey) maps a filehandlefragement to an
+@@ -100,7 +115,6 @@ struct svc_export *	rqst_find_fsidzero_export(struct svc_rqst *);
+ int			exp_rootfh(struct net *, struct auth_domain *,
+ 					char *path, struct knfsd_fh *, int maxsize);
+ __be32			exp_pseudoroot(struct svc_rqst *, struct svc_fh *);
+-__be32			nfserrno(int errno);
+ 
+ static inline void exp_put(struct svc_export *exp)
+ {
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index e30e1ddc1aceb..615ea8324911e 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -1,7 +1,32 @@
++// SPDX-License-Identifier: GPL-2.0
+ /*
+- * Open file cache.
++ * The NFSD open file cache.
+  *
+  * (c) 2015 - Jeff Layton <jeff.layton@primarydata.com>
++ *
++ * An nfsd_file object is a per-file collection of open state that binds
++ * together:
++ *   - a struct file *
++ *   - a user credential
++ *   - a network namespace
++ *   - a read-ahead context
++ *   - monitoring for writeback errors
++ *
++ * nfsd_file objects are reference-counted. Consumers acquire a new
++ * object via the nfsd_file_acquire API. They manage their interest in
++ * the acquired object, and hence the object's reference count, via
++ * nfsd_file_get and nfsd_file_put. There are two varieties of nfsd_file
++ * object:
++ *
++ *  * non-garbage-collected: When a consumer wants to precisely control
++ *    the lifetime of a file's open state, it acquires a non-garbage-
++ *    collected nfsd_file. The final nfsd_file_put releases the open
++ *    state immediately.
++ *
++ *  * garbage-collected: When a consumer does not control the lifetime
++ *    of open state, it acquires a garbage-collected nfsd_file. The
++ *    final nfsd_file_put allows the open state to linger for a period
++ *    during which it may be re-used.
+  */
+ 
+ #include <linux/hash.h>
+@@ -12,6 +37,7 @@
+ #include <linux/fsnotify_backend.h>
+ #include <linux/fsnotify.h>
+ #include <linux/seq_file.h>
++#include <linux/rhashtable.h>
+ 
+ #include "vfs.h"
+ #include "nfsd.h"
+@@ -20,63 +46,75 @@
+ #include "filecache.h"
+ #include "trace.h"
+ 
+-#define NFSDDBG_FACILITY	NFSDDBG_FH
+-
+-/* FIXME: dynamically size this for the machine somehow? */
+-#define NFSD_FILE_HASH_BITS                   12
+-#define NFSD_FILE_HASH_SIZE                  (1 << NFSD_FILE_HASH_BITS)
+ #define NFSD_LAUNDRETTE_DELAY		     (2 * HZ)
+ 
+-#define NFSD_FILE_SHUTDOWN		     (1)
+-#define NFSD_FILE_LRU_THRESHOLD		     (4096UL)
+-#define NFSD_FILE_LRU_LIMIT		     (NFSD_FILE_LRU_THRESHOLD << 2)
++#define NFSD_FILE_CACHE_UP		     (0)
+ 
+ /* We only care about NFSD_MAY_READ/WRITE for this cache */
+ #define NFSD_FILE_MAY_MASK	(NFSD_MAY_READ|NFSD_MAY_WRITE)
+ 
+-struct nfsd_fcache_bucket {
+-	struct hlist_head	nfb_head;
+-	spinlock_t		nfb_lock;
+-	unsigned int		nfb_count;
+-	unsigned int		nfb_maxcount;
+-};
+-
+ static DEFINE_PER_CPU(unsigned long, nfsd_file_cache_hits);
++static DEFINE_PER_CPU(unsigned long, nfsd_file_acquisitions);
++static DEFINE_PER_CPU(unsigned long, nfsd_file_releases);
++static DEFINE_PER_CPU(unsigned long, nfsd_file_total_age);
++static DEFINE_PER_CPU(unsigned long, nfsd_file_evictions);
+ 
+ struct nfsd_fcache_disposal {
+-	struct list_head list;
+ 	struct work_struct work;
+-	struct net *net;
+ 	spinlock_t lock;
+ 	struct list_head freeme;
+-	struct rcu_head rcu;
+ };
+ 
+ static struct workqueue_struct *nfsd_filecache_wq __read_mostly;
+ 
+ static struct kmem_cache		*nfsd_file_slab;
+ static struct kmem_cache		*nfsd_file_mark_slab;
+-static struct nfsd_fcache_bucket	*nfsd_file_hashtbl;
+ static struct list_lru			nfsd_file_lru;
+-static long				nfsd_file_lru_flags;
++static unsigned long			nfsd_file_flags;
+ static struct fsnotify_group		*nfsd_file_fsnotify_group;
+-static atomic_long_t			nfsd_filecache_count;
+ static struct delayed_work		nfsd_filecache_laundrette;
+-static DEFINE_SPINLOCK(laundrette_lock);
+-static LIST_HEAD(laundrettes);
++static struct rhltable			nfsd_file_rhltable
++						____cacheline_aligned_in_smp;
++
++static bool
++nfsd_match_cred(const struct cred *c1, const struct cred *c2)
++{
++	int i;
++
++	if (!uid_eq(c1->fsuid, c2->fsuid))
++		return false;
++	if (!gid_eq(c1->fsgid, c2->fsgid))
++		return false;
++	if (c1->group_info == NULL || c2->group_info == NULL)
++		return c1->group_info == c2->group_info;
++	if (c1->group_info->ngroups != c2->group_info->ngroups)
++		return false;
++	for (i = 0; i < c1->group_info->ngroups; i++) {
++		if (!gid_eq(c1->group_info->gid[i], c2->group_info->gid[i]))
++			return false;
++	}
++	return true;
++}
+ 
+-static void nfsd_file_gc(void);
++static const struct rhashtable_params nfsd_file_rhash_params = {
++	.key_len		= sizeof_field(struct nfsd_file, nf_inode),
++	.key_offset		= offsetof(struct nfsd_file, nf_inode),
++	.head_offset		= offsetof(struct nfsd_file, nf_rlist),
++
++	/*
++	 * Start with a single page hash table to reduce resizing churn
++	 * on light workloads.
++	 */
++	.min_size		= 256,
++	.automatic_shrinking	= true,
++};
+ 
+ static void
+ nfsd_file_schedule_laundrette(void)
+ {
+-	long count = atomic_long_read(&nfsd_filecache_count);
+-
+-	if (count == 0 || test_bit(NFSD_FILE_SHUTDOWN, &nfsd_file_lru_flags))
+-		return;
+-
+-	queue_delayed_work(system_wq, &nfsd_filecache_laundrette,
+-			NFSD_LAUNDRETTE_DELAY);
++	if (test_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags))
++		queue_delayed_work(system_wq, &nfsd_filecache_laundrette,
++				   NFSD_LAUNDRETTE_DELAY);
+ }
+ 
+ static void
+@@ -115,22 +153,21 @@ nfsd_file_mark_put(struct nfsd_file_mark *nfm)
+ }
+ 
+ static struct nfsd_file_mark *
+-nfsd_file_mark_find_or_create(struct nfsd_file *nf)
++nfsd_file_mark_find_or_create(struct nfsd_file *nf, struct inode *inode)
+ {
+ 	int			err;
+ 	struct fsnotify_mark	*mark;
+ 	struct nfsd_file_mark	*nfm = NULL, *new;
+-	struct inode *inode = nf->nf_inode;
+ 
+ 	do {
+-		mutex_lock(&nfsd_file_fsnotify_group->mark_mutex);
++		fsnotify_group_lock(nfsd_file_fsnotify_group);
+ 		mark = fsnotify_find_mark(&inode->i_fsnotify_marks,
+-				nfsd_file_fsnotify_group);
++					  nfsd_file_fsnotify_group);
+ 		if (mark) {
+ 			nfm = nfsd_file_mark_get(container_of(mark,
+ 						 struct nfsd_file_mark,
+ 						 nfm_mark));
+-			mutex_unlock(&nfsd_file_fsnotify_group->mark_mutex);
++			fsnotify_group_unlock(nfsd_file_fsnotify_group);
+ 			if (nfm) {
+ 				fsnotify_put_mark(mark);
+ 				break;
+@@ -138,8 +175,9 @@ nfsd_file_mark_find_or_create(struct nfsd_file *nf)
+ 			/* Avoid soft lockup race with nfsd_file_mark_put() */
+ 			fsnotify_destroy_mark(mark, nfsd_file_fsnotify_group);
+ 			fsnotify_put_mark(mark);
+-		} else
+-			mutex_unlock(&nfsd_file_fsnotify_group->mark_mutex);
++		} else {
++			fsnotify_group_unlock(nfsd_file_fsnotify_group);
++		}
+ 
+ 		/* allocate a new nfm */
+ 		new = kmem_cache_alloc(nfsd_file_mark_slab, GFP_KERNEL);
+@@ -170,244 +208,233 @@ nfsd_file_mark_find_or_create(struct nfsd_file *nf)
+ }
+ 
+ static struct nfsd_file *
+-nfsd_file_alloc(struct inode *inode, unsigned int may, unsigned int hashval,
+-		struct net *net)
++nfsd_file_alloc(struct net *net, struct inode *inode, unsigned char need,
++		bool want_gc)
+ {
+ 	struct nfsd_file *nf;
+ 
+ 	nf = kmem_cache_alloc(nfsd_file_slab, GFP_KERNEL);
+-	if (nf) {
+-		INIT_HLIST_NODE(&nf->nf_node);
+-		INIT_LIST_HEAD(&nf->nf_lru);
+-		nf->nf_file = NULL;
+-		nf->nf_cred = get_current_cred();
+-		nf->nf_net = net;
+-		nf->nf_flags = 0;
+-		nf->nf_inode = inode;
+-		nf->nf_hashval = hashval;
+-		refcount_set(&nf->nf_ref, 1);
+-		nf->nf_may = may & NFSD_FILE_MAY_MASK;
+-		if (may & NFSD_MAY_NOT_BREAK_LEASE) {
+-			if (may & NFSD_MAY_WRITE)
+-				__set_bit(NFSD_FILE_BREAK_WRITE, &nf->nf_flags);
+-			if (may & NFSD_MAY_READ)
+-				__set_bit(NFSD_FILE_BREAK_READ, &nf->nf_flags);
+-		}
+-		nf->nf_mark = NULL;
+-		trace_nfsd_file_alloc(nf);
+-	}
+-	return nf;
+-}
+-
+-static bool
+-nfsd_file_free(struct nfsd_file *nf)
+-{
+-	bool flush = false;
+-
+-	trace_nfsd_file_put_final(nf);
+-	if (nf->nf_mark)
+-		nfsd_file_mark_put(nf->nf_mark);
+-	if (nf->nf_file) {
+-		get_file(nf->nf_file);
+-		filp_close(nf->nf_file, NULL);
+-		fput(nf->nf_file);
+-		flush = true;
+-	}
+-	call_rcu(&nf->nf_rcu, nfsd_file_slab_free);
+-	return flush;
+-}
+-
+-static bool
+-nfsd_file_check_writeback(struct nfsd_file *nf)
+-{
+-	struct file *file = nf->nf_file;
+-	struct address_space *mapping;
++	if (unlikely(!nf))
++		return NULL;
+ 
+-	if (!file || !(file->f_mode & FMODE_WRITE))
+-		return false;
+-	mapping = file->f_mapping;
+-	return mapping_tagged(mapping, PAGECACHE_TAG_DIRTY) ||
+-		mapping_tagged(mapping, PAGECACHE_TAG_WRITEBACK);
++	INIT_LIST_HEAD(&nf->nf_lru);
++	nf->nf_birthtime = ktime_get();
++	nf->nf_file = NULL;
++	nf->nf_cred = get_current_cred();
++	nf->nf_net = net;
++	nf->nf_flags = want_gc ?
++		BIT(NFSD_FILE_HASHED) | BIT(NFSD_FILE_PENDING) | BIT(NFSD_FILE_GC) :
++		BIT(NFSD_FILE_HASHED) | BIT(NFSD_FILE_PENDING);
++	nf->nf_inode = inode;
++	refcount_set(&nf->nf_ref, 1);
++	nf->nf_may = need;
++	nf->nf_mark = NULL;
++	return nf;
+ }
+ 
+-static int
++/**
++ * nfsd_file_check_write_error - check for writeback errors on a file
++ * @nf: nfsd_file to check for writeback errors
++ *
++ * Check whether a nfsd_file has an unseen error. Reset the write
++ * verifier if so.
++ */
++static void
+ nfsd_file_check_write_error(struct nfsd_file *nf)
+ {
+ 	struct file *file = nf->nf_file;
+ 
+-	if (!file || !(file->f_mode & FMODE_WRITE))
+-		return 0;
+-	return filemap_check_wb_err(file->f_mapping, READ_ONCE(file->f_wb_err));
++	if ((file->f_mode & FMODE_WRITE) &&
++	    filemap_check_wb_err(file->f_mapping, READ_ONCE(file->f_wb_err)))
++		nfsd_reset_write_verifier(net_generic(nf->nf_net, nfsd_net_id));
+ }
+ 
+ static void
+-nfsd_file_do_unhash(struct nfsd_file *nf)
++nfsd_file_hash_remove(struct nfsd_file *nf)
+ {
+-	lockdep_assert_held(&nfsd_file_hashtbl[nf->nf_hashval].nfb_lock);
+-
+ 	trace_nfsd_file_unhash(nf);
+-
+-	if (nfsd_file_check_write_error(nf))
+-		nfsd_reset_boot_verifier(net_generic(nf->nf_net, nfsd_net_id));
+-	--nfsd_file_hashtbl[nf->nf_hashval].nfb_count;
+-	hlist_del_rcu(&nf->nf_node);
+-	atomic_long_dec(&nfsd_filecache_count);
++	rhltable_remove(&nfsd_file_rhltable, &nf->nf_rlist,
++			nfsd_file_rhash_params);
+ }
+ 
+ static bool
+ nfsd_file_unhash(struct nfsd_file *nf)
+ {
+ 	if (test_and_clear_bit(NFSD_FILE_HASHED, &nf->nf_flags)) {
+-		nfsd_file_do_unhash(nf);
+-		if (!list_empty(&nf->nf_lru))
+-			list_lru_del(&nfsd_file_lru, &nf->nf_lru);
++		nfsd_file_hash_remove(nf);
+ 		return true;
+ 	}
+ 	return false;
+ }
+ 
+-/*
+- * Return true if the file was unhashed.
+- */
+-static bool
+-nfsd_file_unhash_and_release_locked(struct nfsd_file *nf, struct list_head *dispose)
++static void
++nfsd_file_free(struct nfsd_file *nf)
+ {
+-	lockdep_assert_held(&nfsd_file_hashtbl[nf->nf_hashval].nfb_lock);
+-
+-	trace_nfsd_file_unhash_and_release_locked(nf);
+-	if (!nfsd_file_unhash(nf))
+-		return false;
+-	/* keep final reference for nfsd_file_lru_dispose */
+-	if (refcount_dec_not_one(&nf->nf_ref))
+-		return true;
++	s64 age = ktime_to_ms(ktime_sub(ktime_get(), nf->nf_birthtime));
+ 
+-	list_add(&nf->nf_lru, dispose);
+-	return true;
+-}
++	trace_nfsd_file_free(nf);
+ 
+-static void
+-nfsd_file_put_noref(struct nfsd_file *nf)
+-{
+-	trace_nfsd_file_put(nf);
++	this_cpu_inc(nfsd_file_releases);
++	this_cpu_add(nfsd_file_total_age, age);
+ 
+-	if (refcount_dec_and_test(&nf->nf_ref)) {
+-		WARN_ON(test_bit(NFSD_FILE_HASHED, &nf->nf_flags));
+-		nfsd_file_free(nf);
++	nfsd_file_unhash(nf);
++	if (nf->nf_mark)
++		nfsd_file_mark_put(nf->nf_mark);
++	if (nf->nf_file) {
++		nfsd_file_check_write_error(nf);
++		filp_close(nf->nf_file, NULL);
+ 	}
+-}
+ 
+-void
+-nfsd_file_put(struct nfsd_file *nf)
+-{
+-	bool is_hashed;
+-
+-	set_bit(NFSD_FILE_REFERENCED, &nf->nf_flags);
+-	if (refcount_read(&nf->nf_ref) > 2 || !nf->nf_file) {
+-		nfsd_file_put_noref(nf);
++	/*
++	 * If this item is still linked via nf_lru, that's a bug.
++	 * WARN and leak it to preserve system stability.
++	 */
++	if (WARN_ON_ONCE(!list_empty(&nf->nf_lru)))
+ 		return;
+-	}
+ 
+-	filemap_flush(nf->nf_file->f_mapping);
+-	is_hashed = test_bit(NFSD_FILE_HASHED, &nf->nf_flags) != 0;
+-	nfsd_file_put_noref(nf);
+-	if (is_hashed)
+-		nfsd_file_schedule_laundrette();
+-	if (atomic_long_read(&nfsd_filecache_count) >= NFSD_FILE_LRU_LIMIT)
+-		nfsd_file_gc();
++	call_rcu(&nf->nf_rcu, nfsd_file_slab_free);
+ }
+ 
+-struct nfsd_file *
+-nfsd_file_get(struct nfsd_file *nf)
++static bool
++nfsd_file_check_writeback(struct nfsd_file *nf)
+ {
+-	if (likely(refcount_inc_not_zero(&nf->nf_ref)))
+-		return nf;
+-	return NULL;
++	struct file *file = nf->nf_file;
++	struct address_space *mapping;
++
++	/* File not open for write? */
++	if (!(file->f_mode & FMODE_WRITE))
++		return false;
++
++	/*
++	 * Some filesystems (e.g. NFS) flush all dirty data on close.
++	 * On others, there is no need to wait for writeback.
++	 */
++	if (!(file_inode(file)->i_sb->s_export_op->flags & EXPORT_OP_FLUSH_ON_CLOSE))
++		return false;
++
++	mapping = file->f_mapping;
++	return mapping_tagged(mapping, PAGECACHE_TAG_DIRTY) ||
++		mapping_tagged(mapping, PAGECACHE_TAG_WRITEBACK);
+ }
+ 
+-static void
+-nfsd_file_dispose_list(struct list_head *dispose)
+-{
+-	struct nfsd_file *nf;
+ 
+-	while(!list_empty(dispose)) {
+-		nf = list_first_entry(dispose, struct nfsd_file, nf_lru);
+-		list_del(&nf->nf_lru);
+-		nfsd_file_put_noref(nf);
++static bool nfsd_file_lru_add(struct nfsd_file *nf)
++{
++	set_bit(NFSD_FILE_REFERENCED, &nf->nf_flags);
++	if (list_lru_add(&nfsd_file_lru, &nf->nf_lru)) {
++		trace_nfsd_file_lru_add(nf);
++		return true;
+ 	}
++	return false;
+ }
+ 
+-static void
+-nfsd_file_dispose_list_sync(struct list_head *dispose)
++static bool nfsd_file_lru_remove(struct nfsd_file *nf)
+ {
+-	bool flush = false;
+-	struct nfsd_file *nf;
+-
+-	while(!list_empty(dispose)) {
+-		nf = list_first_entry(dispose, struct nfsd_file, nf_lru);
+-		list_del(&nf->nf_lru);
+-		if (!refcount_dec_and_test(&nf->nf_ref))
+-			continue;
+-		if (nfsd_file_free(nf))
+-			flush = true;
++	if (list_lru_del(&nfsd_file_lru, &nf->nf_lru)) {
++		trace_nfsd_file_lru_del(nf);
++		return true;
+ 	}
+-	if (flush)
+-		flush_delayed_fput();
++	return false;
+ }
+ 
+-static void
+-nfsd_file_list_remove_disposal(struct list_head *dst,
+-		struct nfsd_fcache_disposal *l)
++struct nfsd_file *
++nfsd_file_get(struct nfsd_file *nf)
+ {
+-	spin_lock(&l->lock);
+-	list_splice_init(&l->freeme, dst);
+-	spin_unlock(&l->lock);
++	if (nf && refcount_inc_not_zero(&nf->nf_ref))
++		return nf;
++	return NULL;
+ }
+ 
+-static void
+-nfsd_file_list_add_disposal(struct list_head *files, struct net *net)
++/**
++ * nfsd_file_put - put the reference to a nfsd_file
++ * @nf: nfsd_file of which to put the reference
++ *
++ * Put a reference to a nfsd_file. In the non-GC case, we just put the
++ * reference immediately. In the GC case, if the reference would be
++ * the last one, the put it on the LRU instead to be cleaned up later.
++ */
++void
++nfsd_file_put(struct nfsd_file *nf)
+ {
+-	struct nfsd_fcache_disposal *l;
++	might_sleep();
++	trace_nfsd_file_put(nf);
+ 
+-	rcu_read_lock();
+-	list_for_each_entry_rcu(l, &laundrettes, list) {
+-		if (l->net == net) {
+-			spin_lock(&l->lock);
+-			list_splice_tail_init(files, &l->freeme);
+-			spin_unlock(&l->lock);
+-			queue_work(nfsd_filecache_wq, &l->work);
+-			break;
++	if (test_bit(NFSD_FILE_GC, &nf->nf_flags) &&
++	    test_bit(NFSD_FILE_HASHED, &nf->nf_flags)) {
++		/*
++		 * If this is the last reference (nf_ref == 1), then try to
++		 * transfer it to the LRU.
++		 */
++		if (refcount_dec_not_one(&nf->nf_ref))
++			return;
++
++		/* Try to add it to the LRU.  If that fails, decrement. */
++		if (nfsd_file_lru_add(nf)) {
++			/* If it's still hashed, we're done */
++			if (test_bit(NFSD_FILE_HASHED, &nf->nf_flags)) {
++				nfsd_file_schedule_laundrette();
++				return;
++			}
++
++			/*
++			 * We're racing with unhashing, so try to remove it from
++			 * the LRU. If removal fails, then someone else already
++			 * has our reference.
++			 */
++			if (!nfsd_file_lru_remove(nf))
++				return;
+ 		}
+ 	}
+-	rcu_read_unlock();
++	if (refcount_dec_and_test(&nf->nf_ref))
++		nfsd_file_free(nf);
+ }
+ 
+ static void
+-nfsd_file_list_add_pernet(struct list_head *dst, struct list_head *src,
+-		struct net *net)
++nfsd_file_dispose_list(struct list_head *dispose)
+ {
+-	struct nfsd_file *nf, *tmp;
++	struct nfsd_file *nf;
+ 
+-	list_for_each_entry_safe(nf, tmp, src, nf_lru) {
+-		if (nf->nf_net == net)
+-			list_move_tail(&nf->nf_lru, dst);
++	while (!list_empty(dispose)) {
++		nf = list_first_entry(dispose, struct nfsd_file, nf_lru);
++		list_del_init(&nf->nf_lru);
++		nfsd_file_free(nf);
+ 	}
+ }
+ 
++/**
++ * nfsd_file_dispose_list_delayed - move list of dead files to net's freeme list
++ * @dispose: list of nfsd_files to be disposed
++ *
++ * Transfers each file to the "freeme" list for its nfsd_net, to eventually
++ * be disposed of by the per-net garbage collector.
++ */
+ static void
+ nfsd_file_dispose_list_delayed(struct list_head *dispose)
+ {
+-	LIST_HEAD(list);
+-	struct nfsd_file *nf;
+-
+ 	while(!list_empty(dispose)) {
+-		nf = list_first_entry(dispose, struct nfsd_file, nf_lru);
+-		nfsd_file_list_add_pernet(&list, dispose, nf->nf_net);
+-		nfsd_file_list_add_disposal(&list, nf->nf_net);
++		struct nfsd_file *nf = list_first_entry(dispose,
++						struct nfsd_file, nf_lru);
++		struct nfsd_net *nn = net_generic(nf->nf_net, nfsd_net_id);
++		struct nfsd_fcache_disposal *l = nn->fcache_disposal;
++
++		spin_lock(&l->lock);
++		list_move_tail(&nf->nf_lru, &l->freeme);
++		spin_unlock(&l->lock);
++		queue_work(nfsd_filecache_wq, &l->work);
+ 	}
+ }
+ 
+-/*
+- * Note this can deadlock with nfsd_file_cache_purge.
++/**
++ * nfsd_file_lru_cb - Examine an entry on the LRU list
++ * @item: LRU entry to examine
++ * @lru: controlling LRU
++ * @lock: LRU list lock (unused)
++ * @arg: dispose list
++ *
++ * Return values:
++ *   %LRU_REMOVED: @item was removed from the LRU
++ *   %LRU_ROTATE: @item is to be moved to the LRU tail
++ *   %LRU_SKIP: @item cannot be evicted
+  */
+ static enum lru_status
+ nfsd_file_lru_cb(struct list_head *item, struct list_lru_one *lru,
+@@ -418,72 +445,60 @@ nfsd_file_lru_cb(struct list_head *item, struct list_lru_one *lru,
+ 	struct list_head *head = arg;
+ 	struct nfsd_file *nf = list_entry(item, struct nfsd_file, nf_lru);
+ 
+-	/*
+-	 * Do a lockless refcount check. The hashtable holds one reference, so
+-	 * we look to see if anything else has a reference, or if any have
+-	 * been put since the shrinker last ran. Those don't get unhashed and
+-	 * released.
+-	 *
+-	 * Note that in the put path, we set the flag and then decrement the
+-	 * counter. Here we check the counter and then test and clear the flag.
+-	 * That order is deliberate to ensure that we can do this locklessly.
+-	 */
+-	if (refcount_read(&nf->nf_ref) > 1)
+-		goto out_skip;
++	/* We should only be dealing with GC entries here */
++	WARN_ON_ONCE(!test_bit(NFSD_FILE_GC, &nf->nf_flags));
+ 
+ 	/*
+ 	 * Don't throw out files that are still undergoing I/O or
+ 	 * that have uncleared errors pending.
+ 	 */
+-	if (nfsd_file_check_writeback(nf))
+-		goto out_skip;
++	if (nfsd_file_check_writeback(nf)) {
++		trace_nfsd_file_gc_writeback(nf);
++		return LRU_SKIP;
++	}
+ 
+-	if (test_and_clear_bit(NFSD_FILE_REFERENCED, &nf->nf_flags))
+-		goto out_skip;
++	/* If it was recently added to the list, skip it */
++	if (test_and_clear_bit(NFSD_FILE_REFERENCED, &nf->nf_flags)) {
++		trace_nfsd_file_gc_referenced(nf);
++		return LRU_ROTATE;
++	}
+ 
+-	if (!test_and_clear_bit(NFSD_FILE_HASHED, &nf->nf_flags))
+-		goto out_skip;
++	/*
++	 * Put the reference held on behalf of the LRU. If it wasn't the last
++	 * one, then just remove it from the LRU and ignore it.
++	 */
++	if (!refcount_dec_and_test(&nf->nf_ref)) {
++		trace_nfsd_file_gc_in_use(nf);
++		list_lru_isolate(lru, &nf->nf_lru);
++		return LRU_REMOVED;
++	}
+ 
++	/* Refcount went to zero. Unhash it and queue it to the dispose list */
++	nfsd_file_unhash(nf);
+ 	list_lru_isolate_move(lru, &nf->nf_lru, head);
++	this_cpu_inc(nfsd_file_evictions);
++	trace_nfsd_file_gc_disposed(nf);
+ 	return LRU_REMOVED;
+-out_skip:
+-	return LRU_SKIP;
+-}
+-
+-static unsigned long
+-nfsd_file_lru_walk_list(struct shrink_control *sc)
+-{
+-	LIST_HEAD(head);
+-	struct nfsd_file *nf;
+-	unsigned long ret;
+-
+-	if (sc)
+-		ret = list_lru_shrink_walk(&nfsd_file_lru, sc,
+-				nfsd_file_lru_cb, &head);
+-	else
+-		ret = list_lru_walk(&nfsd_file_lru,
+-				nfsd_file_lru_cb,
+-				&head, LONG_MAX);
+-	list_for_each_entry(nf, &head, nf_lru) {
+-		spin_lock(&nfsd_file_hashtbl[nf->nf_hashval].nfb_lock);
+-		nfsd_file_do_unhash(nf);
+-		spin_unlock(&nfsd_file_hashtbl[nf->nf_hashval].nfb_lock);
+-	}
+-	nfsd_file_dispose_list_delayed(&head);
+-	return ret;
+ }
+ 
+ static void
+ nfsd_file_gc(void)
+ {
+-	nfsd_file_lru_walk_list(NULL);
++	LIST_HEAD(dispose);
++	unsigned long ret;
++
++	ret = list_lru_walk(&nfsd_file_lru, nfsd_file_lru_cb,
++			    &dispose, list_lru_count(&nfsd_file_lru));
++	trace_nfsd_file_gc_removed(ret, list_lru_count(&nfsd_file_lru));
++	nfsd_file_dispose_list_delayed(&dispose);
+ }
+ 
+ static void
+ nfsd_file_gc_worker(struct work_struct *work)
+ {
+ 	nfsd_file_gc();
+-	nfsd_file_schedule_laundrette();
++	if (list_lru_count(&nfsd_file_lru))
++		nfsd_file_schedule_laundrette();
+ }
+ 
+ static unsigned long
+@@ -495,7 +510,14 @@ nfsd_file_lru_count(struct shrinker *s, struct shrink_control *sc)
+ static unsigned long
+ nfsd_file_lru_scan(struct shrinker *s, struct shrink_control *sc)
+ {
+-	return nfsd_file_lru_walk_list(sc);
++	LIST_HEAD(dispose);
++	unsigned long ret;
++
++	ret = list_lru_shrink_walk(&nfsd_file_lru, sc,
++				   nfsd_file_lru_cb, &dispose);
++	trace_nfsd_file_shrinker_removed(ret, list_lru_count(&nfsd_file_lru));
++	nfsd_file_dispose_list_delayed(&dispose);
++	return ret;
+ }
+ 
+ static struct shrinker	nfsd_file_shrinker = {
+@@ -504,70 +526,123 @@ static struct shrinker	nfsd_file_shrinker = {
+ 	.seeks = 1,
+ };
+ 
++/**
++ * nfsd_file_cond_queue - conditionally unhash and queue a nfsd_file
++ * @nf: nfsd_file to attempt to queue
++ * @dispose: private list to queue successfully-put objects
++ *
++ * Unhash an nfsd_file, try to get a reference to it, and then put that
++ * reference. If it's the last reference, queue it to the dispose list.
++ */
++static void
++nfsd_file_cond_queue(struct nfsd_file *nf, struct list_head *dispose)
++	__must_hold(RCU)
++{
++	int decrement = 1;
++
++	/* If we raced with someone else unhashing, ignore it */
++	if (!nfsd_file_unhash(nf))
++		return;
++
++	/* If we can't get a reference, ignore it */
++	if (!nfsd_file_get(nf))
++		return;
++
++	/* Extra decrement if we remove from the LRU */
++	if (nfsd_file_lru_remove(nf))
++		++decrement;
++
++	/* If refcount goes to 0, then put on the dispose list */
++	if (refcount_sub_and_test(decrement, &nf->nf_ref)) {
++		list_add(&nf->nf_lru, dispose);
++		trace_nfsd_file_closing(nf);
++	}
++}
++
++/**
++ * nfsd_file_queue_for_close: try to close out any open nfsd_files for an inode
++ * @inode:   inode on which to close out nfsd_files
++ * @dispose: list on which to gather nfsd_files to close out
++ *
++ * An nfsd_file represents a struct file being held open on behalf of nfsd.
++ * An open file however can block other activity (such as leases), or cause
++ * undesirable behavior (e.g. spurious silly-renames when reexporting NFS).
++ *
++ * This function is intended to find open nfsd_files when this sort of
++ * conflicting access occurs and then attempt to close those files out.
++ *
++ * Populates the dispose list with entries that have already had their
++ * refcounts go to zero. The actual free of an nfsd_file can be expensive,
++ * so we leave it up to the caller whether it wants to wait or not.
++ */
+ static void
+-__nfsd_file_close_inode(struct inode *inode, unsigned int hashval,
+-			struct list_head *dispose)
++nfsd_file_queue_for_close(struct inode *inode, struct list_head *dispose)
+ {
+-	struct nfsd_file	*nf;
+-	struct hlist_node	*tmp;
++	struct rhlist_head *tmp, *list;
++	struct nfsd_file *nf;
+ 
+-	spin_lock(&nfsd_file_hashtbl[hashval].nfb_lock);
+-	hlist_for_each_entry_safe(nf, tmp, &nfsd_file_hashtbl[hashval].nfb_head, nf_node) {
+-		if (inode == nf->nf_inode)
+-			nfsd_file_unhash_and_release_locked(nf, dispose);
++	rcu_read_lock();
++	list = rhltable_lookup(&nfsd_file_rhltable, &inode,
++			       nfsd_file_rhash_params);
++	rhl_for_each_entry_rcu(nf, tmp, list, nf_rlist) {
++		if (!test_bit(NFSD_FILE_GC, &nf->nf_flags))
++			continue;
++		nfsd_file_cond_queue(nf, dispose);
+ 	}
+-	spin_unlock(&nfsd_file_hashtbl[hashval].nfb_lock);
++	rcu_read_unlock();
+ }
+ 
+ /**
+- * nfsd_file_close_inode_sync - attempt to forcibly close a nfsd_file
++ * nfsd_file_close_inode - attempt a delayed close of a nfsd_file
+  * @inode: inode of the file to attempt to remove
+  *
+- * Walk the whole hash bucket, looking for any files that correspond to "inode".
+- * If any do, then unhash them and put the hashtable reference to them and
+- * destroy any that had their last reference put. Also ensure that any of the
+- * fputs also have their final __fput done as well.
++ * Close out any open nfsd_files that can be reaped for @inode. The
++ * actual freeing is deferred to the dispose_list_delayed infrastructure.
++ *
++ * This is used by the fsnotify callbacks and setlease notifier.
+  */
+-void
+-nfsd_file_close_inode_sync(struct inode *inode)
++static void
++nfsd_file_close_inode(struct inode *inode)
+ {
+-	unsigned int		hashval = (unsigned int)hash_long(inode->i_ino,
+-						NFSD_FILE_HASH_BITS);
+ 	LIST_HEAD(dispose);
+ 
+-	__nfsd_file_close_inode(inode, hashval, &dispose);
+-	trace_nfsd_file_close_inode_sync(inode, hashval, !list_empty(&dispose));
+-	nfsd_file_dispose_list_sync(&dispose);
++	nfsd_file_queue_for_close(inode, &dispose);
++	nfsd_file_dispose_list_delayed(&dispose);
+ }
+ 
+ /**
+  * nfsd_file_close_inode_sync - attempt to forcibly close a nfsd_file
+  * @inode: inode of the file to attempt to remove
+  *
+- * Walk the whole hash bucket, looking for any files that correspond to "inode".
+- * If any do, then unhash them and put the hashtable reference to them and
+- * destroy any that had their last reference put.
++ * Close out any open nfsd_files that can be reaped for @inode. The
++ * nfsd_files are closed out synchronously.
++ *
++ * This is called from nfsd_rename and nfsd_unlink to avoid silly-renames
++ * when reexporting NFS.
+  */
+-static void
+-nfsd_file_close_inode(struct inode *inode)
++void
++nfsd_file_close_inode_sync(struct inode *inode)
+ {
+-	unsigned int		hashval = (unsigned int)hash_long(inode->i_ino,
+-						NFSD_FILE_HASH_BITS);
++	struct nfsd_file *nf;
+ 	LIST_HEAD(dispose);
+ 
+-	__nfsd_file_close_inode(inode, hashval, &dispose);
+-	trace_nfsd_file_close_inode(inode, hashval, !list_empty(&dispose));
+-	nfsd_file_dispose_list_delayed(&dispose);
++	trace_nfsd_file_close(inode);
++
++	nfsd_file_queue_for_close(inode, &dispose);
++	while (!list_empty(&dispose)) {
++		nf = list_first_entry(&dispose, struct nfsd_file, nf_lru);
++		list_del_init(&nf->nf_lru);
++		nfsd_file_free(nf);
++	}
++	flush_delayed_fput();
+ }
+ 
+ /**
+  * nfsd_file_delayed_close - close unused nfsd_files
+  * @work: dummy
+  *
+- * Walk the LRU list and close any entries that have not been used since
+- * the last scan.
+- *
+- * Note this can deadlock with nfsd_file_cache_purge.
++ * Scrape the freeme list for this nfsd_net, and then dispose of them
++ * all.
+  */
+ static void
+ nfsd_file_delayed_close(struct work_struct *work)
+@@ -576,7 +651,10 @@ nfsd_file_delayed_close(struct work_struct *work)
+ 	struct nfsd_fcache_disposal *l = container_of(work,
+ 			struct nfsd_fcache_disposal, work);
+ 
+-	nfsd_file_list_remove_disposal(&head, l);
++	spin_lock(&l->lock);
++	list_splice_init(&l->freeme, &head);
++	spin_unlock(&l->lock);
++
+ 	nfsd_file_dispose_list(&head);
+ }
+ 
+@@ -588,7 +666,7 @@ nfsd_file_lease_notifier_call(struct notifier_block *nb, unsigned long arg,
+ 
+ 	/* Only close files for F_SETLEASE leases */
+ 	if (fl->fl_flags & FL_LEASE)
+-		nfsd_file_close_inode_sync(file_inode(fl->fl_file));
++		nfsd_file_close_inode(file_inode(fl->fl_file));
+ 	return 0;
+ }
+ 
+@@ -601,6 +679,9 @@ nfsd_file_fsnotify_handle_event(struct fsnotify_mark *mark, u32 mask,
+ 				struct inode *inode, struct inode *dir,
+ 				const struct qstr *name, u32 cookie)
+ {
++	if (WARN_ON_ONCE(!inode))
++		return 0;
++
+ 	trace_nfsd_file_fsnotify_handle_event(inode, mask);
+ 
+ 	/* Should be no marks on non-regular files */
+@@ -628,25 +709,21 @@ static const struct fsnotify_ops nfsd_file_fsnotify_ops = {
+ int
+ nfsd_file_cache_init(void)
+ {
+-	int		ret = -ENOMEM;
+-	unsigned int	i;
+-
+-	clear_bit(NFSD_FILE_SHUTDOWN, &nfsd_file_lru_flags);
++	int ret;
+ 
+-	if (nfsd_file_hashtbl)
++	lockdep_assert_held(&nfsd_mutex);
++	if (test_and_set_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags) == 1)
+ 		return 0;
+ 
++	ret = rhltable_init(&nfsd_file_rhltable, &nfsd_file_rhash_params);
++	if (ret)
++		return ret;
++
++	ret = -ENOMEM;
+ 	nfsd_filecache_wq = alloc_workqueue("nfsd_filecache", 0, 0);
+ 	if (!nfsd_filecache_wq)
+ 		goto out;
+ 
+-	nfsd_file_hashtbl = kvcalloc(NFSD_FILE_HASH_SIZE,
+-				sizeof(*nfsd_file_hashtbl), GFP_KERNEL);
+-	if (!nfsd_file_hashtbl) {
+-		pr_err("nfsd: unable to allocate nfsd_file_hashtbl\n");
+-		goto out_err;
+-	}
+-
+ 	nfsd_file_slab = kmem_cache_create("nfsd_file",
+ 				sizeof(struct nfsd_file), 0, 0, NULL);
+ 	if (!nfsd_file_slab) {
+@@ -680,19 +757,16 @@ nfsd_file_cache_init(void)
+ 		goto out_shrinker;
+ 	}
+ 
+-	nfsd_file_fsnotify_group = fsnotify_alloc_group(&nfsd_file_fsnotify_ops);
++	nfsd_file_fsnotify_group = fsnotify_alloc_group(&nfsd_file_fsnotify_ops,
++							FSNOTIFY_GROUP_NOFS);
+ 	if (IS_ERR(nfsd_file_fsnotify_group)) {
+ 		pr_err("nfsd: unable to create fsnotify group: %ld\n",
+ 			PTR_ERR(nfsd_file_fsnotify_group));
++		ret = PTR_ERR(nfsd_file_fsnotify_group);
+ 		nfsd_file_fsnotify_group = NULL;
+ 		goto out_notifier;
+ 	}
+ 
+-	for (i = 0; i < NFSD_FILE_HASH_SIZE; i++) {
+-		INIT_HLIST_HEAD(&nfsd_file_hashtbl[i].nfb_head);
+-		spin_lock_init(&nfsd_file_hashtbl[i].nfb_lock);
+-	}
+-
+ 	INIT_DELAYED_WORK(&nfsd_filecache_laundrette, nfsd_file_gc_worker);
+ out:
+ 	return ret;
+@@ -707,50 +781,47 @@ nfsd_file_cache_init(void)
+ 	nfsd_file_slab = NULL;
+ 	kmem_cache_destroy(nfsd_file_mark_slab);
+ 	nfsd_file_mark_slab = NULL;
+-	kvfree(nfsd_file_hashtbl);
+-	nfsd_file_hashtbl = NULL;
+ 	destroy_workqueue(nfsd_filecache_wq);
+ 	nfsd_filecache_wq = NULL;
++	rhltable_destroy(&nfsd_file_rhltable);
+ 	goto out;
+ }
+ 
+-/*
+- * Note this can deadlock with nfsd_file_lru_cb.
++/**
++ * __nfsd_file_cache_purge: clean out the cache for shutdown
++ * @net: net-namespace to shut down the cache (may be NULL)
++ *
++ * Walk the nfsd_file cache and close out any that match @net. If @net is NULL,
++ * then close out everything. Called when an nfsd instance is being shut down,
++ * and when the exports table is flushed.
+  */
+-void
+-nfsd_file_cache_purge(struct net *net)
++static void
++__nfsd_file_cache_purge(struct net *net)
+ {
+-	unsigned int		i;
+-	struct nfsd_file	*nf;
+-	struct hlist_node	*next;
++	struct rhashtable_iter iter;
++	struct nfsd_file *nf;
+ 	LIST_HEAD(dispose);
+-	bool del;
+ 
+-	if (!nfsd_file_hashtbl)
+-		return;
++	rhltable_walk_enter(&nfsd_file_rhltable, &iter);
++	do {
++		rhashtable_walk_start(&iter);
+ 
+-	for (i = 0; i < NFSD_FILE_HASH_SIZE; i++) {
+-		struct nfsd_fcache_bucket *nfb = &nfsd_file_hashtbl[i];
++		nf = rhashtable_walk_next(&iter);
++		while (!IS_ERR_OR_NULL(nf)) {
++			if (!net || nf->nf_net == net)
++				nfsd_file_cond_queue(nf, &dispose);
++			nf = rhashtable_walk_next(&iter);
++		}
+ 
+-		spin_lock(&nfb->nfb_lock);
+-		hlist_for_each_entry_safe(nf, next, &nfb->nfb_head, nf_node) {
+-			if (net && nf->nf_net != net)
+-				continue;
+-			del = nfsd_file_unhash_and_release_locked(nf, &dispose);
++		rhashtable_walk_stop(&iter);
++	} while (nf == ERR_PTR(-EAGAIN));
++	rhashtable_walk_exit(&iter);
+ 
+-			/*
+-			 * Deadlock detected! Something marked this entry as
+-			 * unhased, but hasn't removed it from the hash list.
+-			 */
+-			WARN_ON_ONCE(!del);
+-		}
+-		spin_unlock(&nfb->nfb_lock);
+-		nfsd_file_dispose_list(&dispose);
+-	}
++	nfsd_file_dispose_list(&dispose);
+ }
+ 
+ static struct nfsd_fcache_disposal *
+-nfsd_alloc_fcache_disposal(struct net *net)
++nfsd_alloc_fcache_disposal(void)
+ {
+ 	struct nfsd_fcache_disposal *l;
+ 
+@@ -758,7 +829,6 @@ nfsd_alloc_fcache_disposal(struct net *net)
+ 	if (!l)
+ 		return NULL;
+ 	INIT_WORK(&l->work, nfsd_file_delayed_close);
+-	l->net = net;
+ 	spin_lock_init(&l->lock);
+ 	INIT_LIST_HEAD(&l->freeme);
+ 	return l;
+@@ -767,61 +837,40 @@ nfsd_alloc_fcache_disposal(struct net *net)
+ static void
+ nfsd_free_fcache_disposal(struct nfsd_fcache_disposal *l)
+ {
+-	rcu_assign_pointer(l->net, NULL);
+ 	cancel_work_sync(&l->work);
+ 	nfsd_file_dispose_list(&l->freeme);
+-	kfree_rcu(l, rcu);
++	kfree(l);
+ }
+ 
+ static void
+-nfsd_add_fcache_disposal(struct nfsd_fcache_disposal *l)
+-{
+-	spin_lock(&laundrette_lock);
+-	list_add_tail_rcu(&l->list, &laundrettes);
+-	spin_unlock(&laundrette_lock);
+-}
+-
+-static void
+-nfsd_del_fcache_disposal(struct nfsd_fcache_disposal *l)
+-{
+-	spin_lock(&laundrette_lock);
+-	list_del_rcu(&l->list);
+-	spin_unlock(&laundrette_lock);
+-}
+-
+-static int
+-nfsd_alloc_fcache_disposal_net(struct net *net)
++nfsd_free_fcache_disposal_net(struct net *net)
+ {
+-	struct nfsd_fcache_disposal *l;
++	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
++	struct nfsd_fcache_disposal *l = nn->fcache_disposal;
+ 
+-	l = nfsd_alloc_fcache_disposal(net);
+-	if (!l)
+-		return -ENOMEM;
+-	nfsd_add_fcache_disposal(l);
+-	return 0;
++	nfsd_free_fcache_disposal(l);
+ }
+ 
+-static void
+-nfsd_free_fcache_disposal_net(struct net *net)
++int
++nfsd_file_cache_start_net(struct net *net)
+ {
+-	struct nfsd_fcache_disposal *l;
++	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 
+-	rcu_read_lock();
+-	list_for_each_entry_rcu(l, &laundrettes, list) {
+-		if (l->net != net)
+-			continue;
+-		nfsd_del_fcache_disposal(l);
+-		rcu_read_unlock();
+-		nfsd_free_fcache_disposal(l);
+-		return;
+-	}
+-	rcu_read_unlock();
++	nn->fcache_disposal = nfsd_alloc_fcache_disposal();
++	return nn->fcache_disposal ? 0 : -ENOMEM;
+ }
+ 
+-int
+-nfsd_file_cache_start_net(struct net *net)
++/**
++ * nfsd_file_cache_purge - Remove all cache items associated with @net
++ * @net: target net namespace
++ *
++ */
++void
++nfsd_file_cache_purge(struct net *net)
+ {
+-	return nfsd_alloc_fcache_disposal_net(net);
++	lockdep_assert_held(&nfsd_mutex);
++	if (test_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags) == 1)
++		__nfsd_file_cache_purge(net);
+ }
+ 
+ void
+@@ -834,7 +883,11 @@ nfsd_file_cache_shutdown_net(struct net *net)
+ void
+ nfsd_file_cache_shutdown(void)
+ {
+-	set_bit(NFSD_FILE_SHUTDOWN, &nfsd_file_lru_flags);
++	int i;
++
++	lockdep_assert_held(&nfsd_mutex);
++	if (test_and_clear_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags) == 0)
++		return;
+ 
+ 	lease_unregister_notifier(&nfsd_file_lease_notifier);
+ 	unregister_shrinker(&nfsd_file_shrinker);
+@@ -843,7 +896,7 @@ nfsd_file_cache_shutdown(void)
+ 	 * calling nfsd_file_cache_purge
+ 	 */
+ 	cancel_delayed_work_sync(&nfsd_filecache_laundrette);
+-	nfsd_file_cache_purge(NULL);
++	__nfsd_file_cache_purge(NULL);
+ 	list_lru_destroy(&nfsd_file_lru);
+ 	rcu_barrier();
+ 	fsnotify_put_group(nfsd_file_fsnotify_group);
+@@ -853,240 +906,332 @@ nfsd_file_cache_shutdown(void)
+ 	fsnotify_wait_marks_destroyed();
+ 	kmem_cache_destroy(nfsd_file_mark_slab);
+ 	nfsd_file_mark_slab = NULL;
+-	kvfree(nfsd_file_hashtbl);
+-	nfsd_file_hashtbl = NULL;
+ 	destroy_workqueue(nfsd_filecache_wq);
+ 	nfsd_filecache_wq = NULL;
+-}
+-
+-static bool
+-nfsd_match_cred(const struct cred *c1, const struct cred *c2)
+-{
+-	int i;
+-
+-	if (!uid_eq(c1->fsuid, c2->fsuid))
+-		return false;
+-	if (!gid_eq(c1->fsgid, c2->fsgid))
+-		return false;
+-	if (c1->group_info == NULL || c2->group_info == NULL)
+-		return c1->group_info == c2->group_info;
+-	if (c1->group_info->ngroups != c2->group_info->ngroups)
+-		return false;
+-	for (i = 0; i < c1->group_info->ngroups; i++) {
+-		if (!gid_eq(c1->group_info->gid[i], c2->group_info->gid[i]))
+-			return false;
++	rhltable_destroy(&nfsd_file_rhltable);
++
++	for_each_possible_cpu(i) {
++		per_cpu(nfsd_file_cache_hits, i) = 0;
++		per_cpu(nfsd_file_acquisitions, i) = 0;
++		per_cpu(nfsd_file_releases, i) = 0;
++		per_cpu(nfsd_file_total_age, i) = 0;
++		per_cpu(nfsd_file_evictions, i) = 0;
+ 	}
+-	return true;
+ }
+ 
+ static struct nfsd_file *
+-nfsd_file_find_locked(struct inode *inode, unsigned int may_flags,
+-			unsigned int hashval, struct net *net)
++nfsd_file_lookup_locked(const struct net *net, const struct cred *cred,
++			struct inode *inode, unsigned char need,
++			bool want_gc)
+ {
++	struct rhlist_head *tmp, *list;
+ 	struct nfsd_file *nf;
+-	unsigned char need = may_flags & NFSD_FILE_MAY_MASK;
+ 
+-	hlist_for_each_entry_rcu(nf, &nfsd_file_hashtbl[hashval].nfb_head,
+-				 nf_node, lockdep_is_held(&nfsd_file_hashtbl[hashval].nfb_lock)) {
++	list = rhltable_lookup(&nfsd_file_rhltable, &inode,
++			       nfsd_file_rhash_params);
++	rhl_for_each_entry_rcu(nf, tmp, list, nf_rlist) {
+ 		if (nf->nf_may != need)
+ 			continue;
+-		if (nf->nf_inode != inode)
+-			continue;
+ 		if (nf->nf_net != net)
+ 			continue;
+-		if (!nfsd_match_cred(nf->nf_cred, current_cred()))
++		if (!nfsd_match_cred(nf->nf_cred, cred))
+ 			continue;
+-		if (!test_bit(NFSD_FILE_HASHED, &nf->nf_flags))
++		if (test_bit(NFSD_FILE_GC, &nf->nf_flags) != want_gc)
+ 			continue;
+-		if (nfsd_file_get(nf) != NULL)
+-			return nf;
++		if (test_bit(NFSD_FILE_HASHED, &nf->nf_flags) == 0)
++			continue;
++
++		if (!nfsd_file_get(nf))
++			continue;
++		return nf;
+ 	}
+ 	return NULL;
+ }
+ 
+ /**
+- * nfsd_file_is_cached - are there any cached open files for this fh?
+- * @inode: inode of the file to check
++ * nfsd_file_is_cached - are there any cached open files for this inode?
++ * @inode: inode to check
++ *
++ * The lookup matches inodes in all net namespaces and is atomic wrt
++ * nfsd_file_acquire().
+  *
+- * Scan the hashtable for open files that match this fh. Returns true if there
+- * are any, and false if not.
++ * Return values:
++ *   %true: filecache contains at least one file matching this inode
++ *   %false: filecache contains no files matching this inode
+  */
+ bool
+ nfsd_file_is_cached(struct inode *inode)
+ {
+-	bool			ret = false;
+-	struct nfsd_file	*nf;
+-	unsigned int		hashval;
+-
+-        hashval = (unsigned int)hash_long(inode->i_ino, NFSD_FILE_HASH_BITS);
++	struct rhlist_head *tmp, *list;
++	struct nfsd_file *nf;
++	bool ret = false;
+ 
+ 	rcu_read_lock();
+-	hlist_for_each_entry_rcu(nf, &nfsd_file_hashtbl[hashval].nfb_head,
+-				 nf_node) {
+-		if (inode == nf->nf_inode) {
++	list = rhltable_lookup(&nfsd_file_rhltable, &inode,
++			       nfsd_file_rhash_params);
++	rhl_for_each_entry_rcu(nf, tmp, list, nf_rlist)
++		if (test_bit(NFSD_FILE_GC, &nf->nf_flags)) {
+ 			ret = true;
+ 			break;
+ 		}
+-	}
+ 	rcu_read_unlock();
+-	trace_nfsd_file_is_cached(inode, hashval, (int)ret);
++
++	trace_nfsd_file_is_cached(inode, (int)ret);
+ 	return ret;
+ }
+ 
+-__be32
+-nfsd_file_acquire(struct svc_rqst *rqstp, struct svc_fh *fhp,
+-		  unsigned int may_flags, struct nfsd_file **pnf)
++static __be32
++nfsd_file_do_acquire(struct svc_rqst *rqstp, struct svc_fh *fhp,
++		     unsigned int may_flags, struct file *file,
++		     struct nfsd_file **pnf, bool want_gc)
+ {
+-	__be32	status;
++	unsigned char need = may_flags & NFSD_FILE_MAY_MASK;
+ 	struct net *net = SVC_NET(rqstp);
+-	struct nfsd_file *nf, *new;
++	struct nfsd_file *new, *nf;
++	const struct cred *cred;
++	bool open_retry = true;
+ 	struct inode *inode;
+-	unsigned int hashval;
+-	bool retry = true;
++	__be32 status;
++	int ret;
+ 
+-	/* FIXME: skip this if fh_dentry is already set? */
+ 	status = fh_verify(rqstp, fhp, S_IFREG,
+ 				may_flags|NFSD_MAY_OWNER_OVERRIDE);
+ 	if (status != nfs_ok)
+ 		return status;
+-
+ 	inode = d_inode(fhp->fh_dentry);
+-	hashval = (unsigned int)hash_long(inode->i_ino, NFSD_FILE_HASH_BITS);
++	cred = get_current_cred();
++
+ retry:
+ 	rcu_read_lock();
+-	nf = nfsd_file_find_locked(inode, may_flags, hashval, net);
++	nf = nfsd_file_lookup_locked(net, cred, inode, need, want_gc);
+ 	rcu_read_unlock();
+-	if (nf)
++
++	if (nf) {
++		/*
++		 * If the nf is on the LRU then it holds an extra reference
++		 * that must be put if it's removed. It had better not be
++		 * the last one however, since we should hold another.
++		 */
++		if (nfsd_file_lru_remove(nf))
++			WARN_ON_ONCE(refcount_dec_and_test(&nf->nf_ref));
+ 		goto wait_for_construction;
++	}
+ 
+-	new = nfsd_file_alloc(inode, may_flags, hashval, net);
++	new = nfsd_file_alloc(net, inode, need, want_gc);
+ 	if (!new) {
+-		trace_nfsd_file_acquire(rqstp, hashval, inode, may_flags,
+-					NULL, nfserr_jukebox);
+-		return nfserr_jukebox;
++		status = nfserr_jukebox;
++		goto out;
+ 	}
+ 
+-	spin_lock(&nfsd_file_hashtbl[hashval].nfb_lock);
+-	nf = nfsd_file_find_locked(inode, may_flags, hashval, net);
+-	if (nf == NULL)
++	rcu_read_lock();
++	spin_lock(&inode->i_lock);
++	nf = nfsd_file_lookup_locked(net, cred, inode, need, want_gc);
++	if (unlikely(nf)) {
++		spin_unlock(&inode->i_lock);
++		rcu_read_unlock();
++		nfsd_file_slab_free(&new->nf_rcu);
++		goto wait_for_construction;
++	}
++	nf = new;
++	ret = rhltable_insert(&nfsd_file_rhltable, &nf->nf_rlist,
++			      nfsd_file_rhash_params);
++	spin_unlock(&inode->i_lock);
++	rcu_read_unlock();
++	if (likely(ret == 0))
+ 		goto open_file;
+-	spin_unlock(&nfsd_file_hashtbl[hashval].nfb_lock);
+-	nfsd_file_slab_free(&new->nf_rcu);
++
++	if (ret == -EEXIST)
++		goto retry;
++	trace_nfsd_file_insert_err(rqstp, inode, may_flags, ret);
++	status = nfserr_jukebox;
++	goto construction_err;
+ 
+ wait_for_construction:
+ 	wait_on_bit(&nf->nf_flags, NFSD_FILE_PENDING, TASK_UNINTERRUPTIBLE);
+ 
+ 	/* Did construction of this file fail? */
+ 	if (!test_bit(NFSD_FILE_HASHED, &nf->nf_flags)) {
+-		if (!retry) {
++		trace_nfsd_file_cons_err(rqstp, inode, may_flags, nf);
++		if (!open_retry) {
+ 			status = nfserr_jukebox;
+-			goto out;
++			goto construction_err;
+ 		}
+-		retry = false;
+-		nfsd_file_put_noref(nf);
++		open_retry = false;
+ 		goto retry;
+ 	}
+-
+ 	this_cpu_inc(nfsd_file_cache_hits);
+ 
+-	if (!(may_flags & NFSD_MAY_NOT_BREAK_LEASE)) {
+-		bool write = (may_flags & NFSD_MAY_WRITE);
+-
+-		if (test_bit(NFSD_FILE_BREAK_READ, &nf->nf_flags) ||
+-		    (test_bit(NFSD_FILE_BREAK_WRITE, &nf->nf_flags) && write)) {
+-			status = nfserrno(nfsd_open_break_lease(
+-					file_inode(nf->nf_file), may_flags));
+-			if (status == nfs_ok) {
+-				clear_bit(NFSD_FILE_BREAK_READ, &nf->nf_flags);
+-				if (write)
+-					clear_bit(NFSD_FILE_BREAK_WRITE,
+-						  &nf->nf_flags);
+-			}
+-		}
++	status = nfserrno(nfsd_open_break_lease(file_inode(nf->nf_file), may_flags));
++	if (status != nfs_ok) {
++		nfsd_file_put(nf);
++		nf = NULL;
+ 	}
++
+ out:
+ 	if (status == nfs_ok) {
++		this_cpu_inc(nfsd_file_acquisitions);
++		nfsd_file_check_write_error(nf);
+ 		*pnf = nf;
+-	} else {
+-		nfsd_file_put(nf);
+-		nf = NULL;
+ 	}
+-
+-	trace_nfsd_file_acquire(rqstp, hashval, inode, may_flags, nf, status);
++	put_cred(cred);
++	trace_nfsd_file_acquire(rqstp, inode, may_flags, nf, status);
+ 	return status;
++
+ open_file:
+-	nf = new;
+-	/* Take reference for the hashtable */
+-	refcount_inc(&nf->nf_ref);
+-	__set_bit(NFSD_FILE_HASHED, &nf->nf_flags);
+-	__set_bit(NFSD_FILE_PENDING, &nf->nf_flags);
+-	list_lru_add(&nfsd_file_lru, &nf->nf_lru);
+-	hlist_add_head_rcu(&nf->nf_node, &nfsd_file_hashtbl[hashval].nfb_head);
+-	++nfsd_file_hashtbl[hashval].nfb_count;
+-	nfsd_file_hashtbl[hashval].nfb_maxcount = max(nfsd_file_hashtbl[hashval].nfb_maxcount,
+-			nfsd_file_hashtbl[hashval].nfb_count);
+-	spin_unlock(&nfsd_file_hashtbl[hashval].nfb_lock);
+-	if (atomic_long_inc_return(&nfsd_filecache_count) >= NFSD_FILE_LRU_THRESHOLD)
+-		nfsd_file_gc();
+-
+-	nf->nf_mark = nfsd_file_mark_find_or_create(nf);
+-	if (nf->nf_mark)
+-		status = nfsd_open_verified(rqstp, fhp, S_IFREG,
+-				may_flags, &nf->nf_file);
+-	else
++	trace_nfsd_file_alloc(nf);
++	nf->nf_mark = nfsd_file_mark_find_or_create(nf, inode);
++	if (nf->nf_mark) {
++		if (file) {
++			get_file(file);
++			nf->nf_file = file;
++			status = nfs_ok;
++			trace_nfsd_file_opened(nf, status);
++		} else {
++			status = nfsd_open_verified(rqstp, fhp, may_flags,
++						    &nf->nf_file);
++			trace_nfsd_file_open(nf, status);
++		}
++	} else
+ 		status = nfserr_jukebox;
+ 	/*
+ 	 * If construction failed, or we raced with a call to unlink()
+ 	 * then unhash.
+ 	 */
+-	if (status != nfs_ok || inode->i_nlink == 0) {
+-		bool do_free;
+-		spin_lock(&nfsd_file_hashtbl[hashval].nfb_lock);
+-		do_free = nfsd_file_unhash(nf);
+-		spin_unlock(&nfsd_file_hashtbl[hashval].nfb_lock);
+-		if (do_free)
+-			nfsd_file_put_noref(nf);
+-	}
+-	clear_bit_unlock(NFSD_FILE_PENDING, &nf->nf_flags);
+-	smp_mb__after_atomic();
+-	wake_up_bit(&nf->nf_flags, NFSD_FILE_PENDING);
++	if (status != nfs_ok || inode->i_nlink == 0)
++		nfsd_file_unhash(nf);
++	clear_and_wake_up_bit(NFSD_FILE_PENDING, &nf->nf_flags);
++	if (status == nfs_ok)
++		goto out;
++
++construction_err:
++	if (refcount_dec_and_test(&nf->nf_ref))
++		nfsd_file_free(nf);
++	nf = NULL;
+ 	goto out;
+ }
+ 
++/**
++ * nfsd_file_acquire_gc - Get a struct nfsd_file with an open file
++ * @rqstp: the RPC transaction being executed
++ * @fhp: the NFS filehandle of the file to be opened
++ * @may_flags: NFSD_MAY_ settings for the file
++ * @pnf: OUT: new or found "struct nfsd_file" object
++ *
++ * The nfsd_file object returned by this API is reference-counted
++ * and garbage-collected. The object is retained for a few
++ * seconds after the final nfsd_file_put() in case the caller
++ * wants to re-use it.
++ *
++ * Return values:
++ *   %nfs_ok - @pnf points to an nfsd_file with its reference
++ *   count boosted.
++ *
++ * On error, an nfsstat value in network byte order is returned.
++ */
++__be32
++nfsd_file_acquire_gc(struct svc_rqst *rqstp, struct svc_fh *fhp,
++		     unsigned int may_flags, struct nfsd_file **pnf)
++{
++	return nfsd_file_do_acquire(rqstp, fhp, may_flags, NULL, pnf, true);
++}
++
++/**
++ * nfsd_file_acquire - Get a struct nfsd_file with an open file
++ * @rqstp: the RPC transaction being executed
++ * @fhp: the NFS filehandle of the file to be opened
++ * @may_flags: NFSD_MAY_ settings for the file
++ * @pnf: OUT: new or found "struct nfsd_file" object
++ *
++ * The nfsd_file_object returned by this API is reference-counted
++ * but not garbage-collected. The object is unhashed after the
++ * final nfsd_file_put().
++ *
++ * Return values:
++ *   %nfs_ok - @pnf points to an nfsd_file with its reference
++ *   count boosted.
++ *
++ * On error, an nfsstat value in network byte order is returned.
++ */
++__be32
++nfsd_file_acquire(struct svc_rqst *rqstp, struct svc_fh *fhp,
++		  unsigned int may_flags, struct nfsd_file **pnf)
++{
++	return nfsd_file_do_acquire(rqstp, fhp, may_flags, NULL, pnf, false);
++}
++
++/**
++ * nfsd_file_acquire_opened - Get a struct nfsd_file using existing open file
++ * @rqstp: the RPC transaction being executed
++ * @fhp: the NFS filehandle of the file just created
++ * @may_flags: NFSD_MAY_ settings for the file
++ * @file: cached, already-open file (may be NULL)
++ * @pnf: OUT: new or found "struct nfsd_file" object
++ *
++ * Acquire a nfsd_file object that is not GC'ed. If one doesn't already exist,
++ * and @file is non-NULL, use it to instantiate a new nfsd_file instead of
++ * opening a new one.
++ *
++ * Return values:
++ *   %nfs_ok - @pnf points to an nfsd_file with its reference
++ *   count boosted.
++ *
++ * On error, an nfsstat value in network byte order is returned.
++ */
++__be32
++nfsd_file_acquire_opened(struct svc_rqst *rqstp, struct svc_fh *fhp,
++			 unsigned int may_flags, struct file *file,
++			 struct nfsd_file **pnf)
++{
++	return nfsd_file_do_acquire(rqstp, fhp, may_flags, file, pnf, false);
++}
++
+ /*
+  * Note that fields may be added, removed or reordered in the future. Programs
+  * scraping this file for info should test the labels to ensure they're
+  * getting the correct field.
+  */
+-static int nfsd_file_cache_stats_show(struct seq_file *m, void *v)
++int nfsd_file_cache_stats_show(struct seq_file *m, void *v)
+ {
+-	unsigned int i, count = 0, longest = 0;
+-	unsigned long hits = 0;
++	unsigned long releases = 0, evictions = 0;
++	unsigned long hits = 0, acquisitions = 0;
++	unsigned int i, count = 0, buckets = 0;
++	unsigned long lru = 0, total_age = 0;
+ 
+-	/*
+-	 * No need for spinlocks here since we're not terribly interested in
+-	 * accuracy. We do take the nfsd_mutex simply to ensure that we
+-	 * don't end up racing with server shutdown
+-	 */
++	/* Serialize with server shutdown */
+ 	mutex_lock(&nfsd_mutex);
+-	if (nfsd_file_hashtbl) {
+-		for (i = 0; i < NFSD_FILE_HASH_SIZE; i++) {
+-			count += nfsd_file_hashtbl[i].nfb_count;
+-			longest = max(longest, nfsd_file_hashtbl[i].nfb_count);
+-		}
++	if (test_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags) == 1) {
++		struct bucket_table *tbl;
++		struct rhashtable *ht;
++
++		lru = list_lru_count(&nfsd_file_lru);
++
++		rcu_read_lock();
++		ht = &nfsd_file_rhltable.ht;
++		count = atomic_read(&ht->nelems);
++		tbl = rht_dereference_rcu(ht->tbl, ht);
++		buckets = tbl->size;
++		rcu_read_unlock();
+ 	}
+ 	mutex_unlock(&nfsd_mutex);
+ 
+-	for_each_possible_cpu(i)
++	for_each_possible_cpu(i) {
+ 		hits += per_cpu(nfsd_file_cache_hits, i);
++		acquisitions += per_cpu(nfsd_file_acquisitions, i);
++		releases += per_cpu(nfsd_file_releases, i);
++		total_age += per_cpu(nfsd_file_total_age, i);
++		evictions += per_cpu(nfsd_file_evictions, i);
++	}
+ 
+-	seq_printf(m, "total entries: %u\n", count);
+-	seq_printf(m, "longest chain: %u\n", longest);
++	seq_printf(m, "total inodes:  %u\n", count);
++	seq_printf(m, "hash buckets:  %u\n", buckets);
++	seq_printf(m, "lru entries:   %lu\n", lru);
+ 	seq_printf(m, "cache hits:    %lu\n", hits);
++	seq_printf(m, "acquisitions:  %lu\n", acquisitions);
++	seq_printf(m, "releases:      %lu\n", releases);
++	seq_printf(m, "evictions:     %lu\n", evictions);
++	if (releases)
++		seq_printf(m, "mean age (ms): %ld\n", total_age / releases);
++	else
++		seq_printf(m, "mean age (ms): -\n");
+ 	return 0;
+ }
+-
+-int nfsd_file_cache_stats_open(struct inode *inode, struct file *file)
+-{
+-	return single_open(file, nfsd_file_cache_stats_show, NULL);
+-}
+diff --git a/fs/nfsd/filecache.h b/fs/nfsd/filecache.h
+index 435ceab27897a..e54165a3224f0 100644
+--- a/fs/nfsd/filecache.h
++++ b/fs/nfsd/filecache.h
+@@ -29,23 +29,23 @@ struct nfsd_file_mark {
+  * never be dereferenced, only used for comparison.
+  */
+ struct nfsd_file {
+-	struct hlist_node	nf_node;
+-	struct list_head	nf_lru;
+-	struct rcu_head		nf_rcu;
++	struct rhlist_head	nf_rlist;
++	void			*nf_inode;
+ 	struct file		*nf_file;
+ 	const struct cred	*nf_cred;
+ 	struct net		*nf_net;
+ #define NFSD_FILE_HASHED	(0)
+ #define NFSD_FILE_PENDING	(1)
+-#define NFSD_FILE_BREAK_READ	(2)
+-#define NFSD_FILE_BREAK_WRITE	(3)
+-#define NFSD_FILE_REFERENCED	(4)
++#define NFSD_FILE_REFERENCED	(2)
++#define NFSD_FILE_GC		(3)
+ 	unsigned long		nf_flags;
+-	struct inode		*nf_inode;
+-	unsigned int		nf_hashval;
+ 	refcount_t		nf_ref;
+ 	unsigned char		nf_may;
++
+ 	struct nfsd_file_mark	*nf_mark;
++	struct list_head	nf_lru;
++	struct rcu_head		nf_rcu;
++	ktime_t			nf_birthtime;
+ };
+ 
+ int nfsd_file_cache_init(void);
+@@ -57,7 +57,12 @@ void nfsd_file_put(struct nfsd_file *nf);
+ struct nfsd_file *nfsd_file_get(struct nfsd_file *nf);
+ void nfsd_file_close_inode_sync(struct inode *inode);
+ bool nfsd_file_is_cached(struct inode *inode);
++__be32 nfsd_file_acquire_gc(struct svc_rqst *rqstp, struct svc_fh *fhp,
++		  unsigned int may_flags, struct nfsd_file **nfp);
+ __be32 nfsd_file_acquire(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 		  unsigned int may_flags, struct nfsd_file **nfp);
+-int	nfsd_file_cache_stats_open(struct inode *, struct file *);
++__be32 nfsd_file_acquire_opened(struct svc_rqst *rqstp, struct svc_fh *fhp,
++		  unsigned int may_flags, struct file *file,
++		  struct nfsd_file **nfp);
++int nfsd_file_cache_stats_show(struct seq_file *m, void *v);
+ #endif /* _FS_NFSD_FILECACHE_H */
+diff --git a/fs/nfsd/flexfilelayout.c b/fs/nfsd/flexfilelayout.c
+index db7ef07ae50c9..fabc21ed68cea 100644
+--- a/fs/nfsd/flexfilelayout.c
++++ b/fs/nfsd/flexfilelayout.c
+@@ -15,6 +15,7 @@
+ 
+ #include "flexfilelayoutxdr.h"
+ #include "pnfs.h"
++#include "vfs.h"
+ 
+ #define NFSDDBG_FACILITY	NFSDDBG_PNFS
+ 
+@@ -61,7 +62,7 @@ nfsd4_ff_proc_layoutget(struct inode *inode, const struct svc_fh *fhp,
+ 		goto out_error;
+ 
+ 	fl->fh.size = fhp->fh_handle.fh_size;
+-	memcpy(fl->fh.data, &fhp->fh_handle.fh_base, fl->fh.size);
++	memcpy(fl->fh.data, &fhp->fh_handle.fh_raw, fl->fh.size);
+ 
+ 	/* Give whole file layout segments */
+ 	seg->offset = 0;
+diff --git a/fs/nfsd/lockd.c b/fs/nfsd/lockd.c
+index 3f5b3d7b62b71..46a7f9b813e52 100644
+--- a/fs/nfsd/lockd.c
++++ b/fs/nfsd/lockd.c
+@@ -25,18 +25,22 @@
+  * Note: we hold the dentry use count while the file is open.
+  */
+ static __be32
+-nlm_fopen(struct svc_rqst *rqstp, struct nfs_fh *f, struct file **filp)
++nlm_fopen(struct svc_rqst *rqstp, struct nfs_fh *f, struct file **filp,
++		int mode)
+ {
+ 	__be32		nfserr;
++	int		access;
+ 	struct svc_fh	fh;
+ 
+ 	/* must initialize before using! but maxsize doesn't matter */
+ 	fh_init(&fh,0);
+ 	fh.fh_handle.fh_size = f->size;
+-	memcpy((char*)&fh.fh_handle.fh_base, f->data, f->size);
++	memcpy(&fh.fh_handle.fh_raw, f->data, f->size);
+ 	fh.fh_export = NULL;
+ 
+-	nfserr = nfsd_open(rqstp, &fh, S_IFREG, NFSD_MAY_LOCK, filp);
++	access = (mode == O_WRONLY) ? NFSD_MAY_WRITE : NFSD_MAY_READ;
++	access |= NFSD_MAY_LOCK;
++	nfserr = nfsd_open(rqstp, &fh, S_IFREG, access, filp);
+ 	fh_put(&fh);
+  	/* We return nlm error codes as nlm doesn't know
+ 	 * about nfsd, but nfsd does know about nlm..
+diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
+index 02d3d2f0e6168..51a4b7885cae2 100644
+--- a/fs/nfsd/netns.h
++++ b/fs/nfsd/netns.h
+@@ -10,6 +10,8 @@
+ 
+ #include <net/net_namespace.h>
+ #include <net/netns/generic.h>
++#include <linux/percpu_counter.h>
++#include <linux/siphash.h>
+ 
+ /* Hash tables for nfs4_clientid state */
+ #define CLIENT_HASH_BITS                 4
+@@ -21,6 +23,14 @@
+ struct cld_net;
+ struct nfsd4_client_tracking_ops;
+ 
++enum {
++	/* cache misses due only to checksum comparison failures */
++	NFSD_NET_PAYLOAD_MISSES,
++	/* amount of memory (in bytes) currently consumed by the DRC */
++	NFSD_NET_DRC_MEM_USAGE,
++	NFSD_NET_COUNTERS_NUM
++};
++
+ /*
+  * Represents a nfsd "container". With respect to nfsv4 state tracking, the
+  * fields of interest are the *_id_hashtbls and the *_name_tree. These track
+@@ -99,9 +109,8 @@ struct nfsd_net {
+ 	bool nfsd_net_up;
+ 	bool lockd_up;
+ 
+-	/* Time of server startup */
+-	struct timespec64 nfssvc_boot;
+-	seqlock_t boot_lock;
++	seqlock_t writeverf_lock;
++	unsigned char writeverf[8];
+ 
+ 	/*
+ 	 * Max number of connections this nfsd container will allow. Defaults
+@@ -114,12 +123,13 @@ struct nfsd_net {
+ 	u32 clverifier_counter;
+ 
+ 	struct svc_serv *nfsd_serv;
+-
+-	wait_queue_head_t ntf_wq;
+-	atomic_t ntf_refcnt;
+-
+-	/* Allow umount to wait for nfsd state cleanup */
+-	struct completion nfsd_shutdown_complete;
++	/* When a listening socket is added to nfsd, keep_active is set
++	 * and this justifies a reference on nfsd_serv.  This stops
++	 * nfsd_serv from being freed.  When the number of threads is
++	 * set, keep_active is cleared and the reference is dropped.  So
++	 * when the last thread exits, the service will be destroyed.
++	 */
++	int keep_active;
+ 
+ 	/*
+ 	 * clientid and stateid data for construction of net unique COPY
+@@ -149,20 +159,16 @@ struct nfsd_net {
+ 
+ 	/*
+ 	 * Stats and other tracking of on the duplicate reply cache.
+-	 * These fields and the "rc" fields in nfsdstats are modified
+-	 * with only the per-bucket cache lock, which isn't really safe
+-	 * and should be fixed if we want the statistics to be
+-	 * completely accurate.
++	 * The longest_chain* fields are modified with only the per-bucket
++	 * cache lock, which isn't really safe and should be fixed if we want
++	 * these statistics to be completely accurate.
+ 	 */
+ 
+ 	/* total number of entries */
+ 	atomic_t                 num_drc_entries;
+ 
+-	/* cache misses due only to checksum comparison failures */
+-	unsigned int             payload_misses;
+-
+-	/* amount of memory (in bytes) currently consumed by the DRC */
+-	unsigned int             drc_mem_usage;
++	/* Per-netns stats counters */
++	struct percpu_counter    counter[NFSD_NET_COUNTERS_NUM];
+ 
+ 	/* longest hash chain seen */
+ 	unsigned int             longest_chain;
+@@ -171,8 +177,25 @@ struct nfsd_net {
+ 	unsigned int             longest_chain_cachesize;
+ 
+ 	struct shrinker		nfsd_reply_cache_shrinker;
++
++	/* tracking server-to-server copy mounts */
++	spinlock_t              nfsd_ssc_lock;
++	struct list_head        nfsd_ssc_mount_list;
++	wait_queue_head_t       nfsd_ssc_waitq;
++
+ 	/* utsname taken from the process that starts the server */
+ 	char			nfsd_name[UNX_MAXNODENAME+1];
++
++	struct nfsd_fcache_disposal *fcache_disposal;
++
++	siphash_key_t		siphash_key;
++
++	atomic_t		nfs4_client_count;
++	int			nfs4_max_clients;
++
++	atomic_t		nfsd_courtesy_clients;
++	struct shrinker		nfsd_client_shrinker;
++	struct work_struct	nfsd_shrinker_work;
+ };
+ 
+ /* Simple check to find out if a given net was properly initialized */
+@@ -182,6 +205,6 @@ extern void nfsd_netns_free_versions(struct nfsd_net *nn);
+ 
+ extern unsigned int nfsd_net_id;
+ 
+-void nfsd_copy_boot_verifier(__be32 verf[2], struct nfsd_net *nn);
+-void nfsd_reset_boot_verifier(struct nfsd_net *nn);
++void nfsd_copy_write_verifier(__be32 verf[2], struct nfsd_net *nn);
++void nfsd_reset_write_verifier(struct nfsd_net *nn);
+ #endif /* __NFSD_NETNS_H__ */
+diff --git a/fs/nfsd/nfs2acl.c b/fs/nfsd/nfs2acl.c
+index 6a900f770dd23..9adf672dedbdd 100644
+--- a/fs/nfsd/nfs2acl.c
++++ b/fs/nfsd/nfs2acl.c
+@@ -111,7 +111,7 @@ static __be32 nfsacld_proc_setacl(struct svc_rqst *rqstp)
+ 	if (error)
+ 		goto out_errno;
+ 
+-	fh_lock(fh);
++	inode_lock(inode);
+ 
+ 	error = set_posix_acl(inode, ACL_TYPE_ACCESS, argp->acl_access);
+ 	if (error)
+@@ -120,7 +120,7 @@ static __be32 nfsacld_proc_setacl(struct svc_rqst *rqstp)
+ 	if (error)
+ 		goto out_drop_lock;
+ 
+-	fh_unlock(fh);
++	inode_unlock(inode);
+ 
+ 	fh_drop_write(fh);
+ 
+@@ -134,7 +134,7 @@ static __be32 nfsacld_proc_setacl(struct svc_rqst *rqstp)
+ 	return rpc_success;
+ 
+ out_drop_lock:
+-	fh_unlock(fh);
++	inode_unlock(inode);
+ 	fh_drop_write(fh);
+ out_errno:
+ 	resp->status = nfserrno(error);
+@@ -185,161 +185,106 @@ static __be32 nfsacld_proc_access(struct svc_rqst *rqstp)
+ /*
+  * XDR decode functions
+  */
+-static int nfsaclsvc_decode_voidarg(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	return 1;
+-}
+ 
+-static int nfsaclsvc_decode_getaclargs(struct svc_rqst *rqstp, __be32 *p)
++static bool
++nfsaclsvc_decode_getaclargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_getaclargs *argp = rqstp->rq_argp;
+ 
+-	p = nfs2svc_decode_fh(p, &argp->fh);
+-	if (!p)
+-		return 0;
+-	argp->mask = ntohl(*p); p++;
++	if (!svcxdr_decode_fhandle(xdr, &argp->fh))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &argp->mask) < 0)
++		return false;
+ 
+-	return xdr_argsize_check(rqstp, p);
++	return true;
+ }
+ 
+-
+-static int nfsaclsvc_decode_setaclargs(struct svc_rqst *rqstp, __be32 *p)
++static bool
++nfsaclsvc_decode_setaclargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_setaclargs *argp = rqstp->rq_argp;
+-	struct kvec *head = rqstp->rq_arg.head;
+-	unsigned int base;
+-	int n;
+-
+-	p = nfs2svc_decode_fh(p, &argp->fh);
+-	if (!p)
+-		return 0;
+-	argp->mask = ntohl(*p++);
+-	if (argp->mask & ~NFS_ACL_MASK ||
+-	    !xdr_argsize_check(rqstp, p))
+-		return 0;
+-
+-	base = (char *)p - (char *)head->iov_base;
+-	n = nfsacl_decode(&rqstp->rq_arg, base, NULL,
+-			  (argp->mask & NFS_ACL) ?
+-			  &argp->acl_access : NULL);
+-	if (n > 0)
+-		n = nfsacl_decode(&rqstp->rq_arg, base + n, NULL,
+-				  (argp->mask & NFS_DFACL) ?
+-				  &argp->acl_default : NULL);
+-	return (n > 0);
+-}
+ 
+-static int nfsaclsvc_decode_fhandleargs(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	struct nfsd_fhandle *argp = rqstp->rq_argp;
+-
+-	p = nfs2svc_decode_fh(p, &argp->fh);
+-	if (!p)
+-		return 0;
+-	return xdr_argsize_check(rqstp, p);
++	if (!svcxdr_decode_fhandle(xdr, &argp->fh))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &argp->mask) < 0)
++		return false;
++	if (argp->mask & ~NFS_ACL_MASK)
++		return false;
++	if (!nfs_stream_decode_acl(xdr, NULL, (argp->mask & NFS_ACL) ?
++				   &argp->acl_access : NULL))
++		return false;
++	if (!nfs_stream_decode_acl(xdr, NULL, (argp->mask & NFS_DFACL) ?
++				   &argp->acl_default : NULL))
++		return false;
++
++	return true;
+ }
+ 
+-static int nfsaclsvc_decode_accessargs(struct svc_rqst *rqstp, __be32 *p)
++static bool
++nfsaclsvc_decode_accessargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	struct nfsd3_accessargs *argp = rqstp->rq_argp;
++	struct nfsd3_accessargs *args = rqstp->rq_argp;
+ 
+-	p = nfs2svc_decode_fh(p, &argp->fh);
+-	if (!p)
+-		return 0;
+-	argp->access = ntohl(*p++);
++	if (!svcxdr_decode_fhandle(xdr, &args->fh))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->access) < 0)
++		return false;
+ 
+-	return xdr_argsize_check(rqstp, p);
++	return true;
+ }
+ 
+ /*
+  * XDR encode functions
+  */
+ 
+-/*
+- * There must be an encoding function for void results so svc_process
+- * will work properly.
+- */
+-static int nfsaclsvc_encode_voidres(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	return xdr_ressize_check(rqstp, p);
+-}
+-
+ /* GETACL */
+-static int nfsaclsvc_encode_getaclres(struct svc_rqst *rqstp, __be32 *p)
++static bool
++nfsaclsvc_encode_getaclres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_getaclres *resp = rqstp->rq_resp;
+ 	struct dentry *dentry = resp->fh.fh_dentry;
+ 	struct inode *inode;
+-	struct kvec *head = rqstp->rq_res.head;
+-	unsigned int base;
+-	int n;
+-	int w;
+ 
+-	*p++ = resp->status;
+-	if (resp->status != nfs_ok)
+-		return xdr_ressize_check(rqstp, p);
++	if (!svcxdr_encode_stat(xdr, resp->status))
++		return false;
+ 
+-	/*
+-	 * Since this is version 2, the check for nfserr in
+-	 * nfsd_dispatch actually ensures the following cannot happen.
+-	 * However, it seems fragile to depend on that.
+-	 */
+ 	if (dentry == NULL || d_really_is_negative(dentry))
+-		return 0;
++		return true;
+ 	inode = d_inode(dentry);
+ 
+-	p = nfs2svc_encode_fattr(rqstp, p, &resp->fh, &resp->stat);
+-	*p++ = htonl(resp->mask);
+-	if (!xdr_ressize_check(rqstp, p))
+-		return 0;
+-	base = (char *)p - (char *)head->iov_base;
+-
+-	rqstp->rq_res.page_len = w = nfsacl_size(
+-		(resp->mask & NFS_ACL)   ? resp->acl_access  : NULL,
+-		(resp->mask & NFS_DFACL) ? resp->acl_default : NULL);
+-	while (w > 0) {
+-		if (!*(rqstp->rq_next_page++))
+-			return 0;
+-		w -= PAGE_SIZE;
+-	}
++	if (!svcxdr_encode_fattr(rqstp, xdr, &resp->fh, &resp->stat))
++		return false;
++	if (xdr_stream_encode_u32(xdr, resp->mask) < 0)
++		return false;
+ 
+-	n = nfsacl_encode(&rqstp->rq_res, base, inode,
+-			  resp->acl_access,
+-			  resp->mask & NFS_ACL, 0);
+-	if (n > 0)
+-		n = nfsacl_encode(&rqstp->rq_res, base + n, inode,
+-				  resp->acl_default,
+-				  resp->mask & NFS_DFACL,
+-				  NFS_ACL_DEFAULT);
+-	return (n > 0);
+-}
++	if (!nfs_stream_encode_acl(xdr, inode, resp->acl_access,
++				   resp->mask & NFS_ACL, 0))
++		return false;
++	if (!nfs_stream_encode_acl(xdr, inode, resp->acl_default,
++				   resp->mask & NFS_DFACL, NFS_ACL_DEFAULT))
++		return false;
+ 
+-static int nfsaclsvc_encode_attrstatres(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	struct nfsd_attrstat *resp = rqstp->rq_resp;
+-
+-	*p++ = resp->status;
+-	if (resp->status != nfs_ok)
+-		goto out;
+-
+-	p = nfs2svc_encode_fattr(rqstp, p, &resp->fh, &resp->stat);
+-out:
+-	return xdr_ressize_check(rqstp, p);
++	return true;
+ }
+ 
+ /* ACCESS */
+-static int nfsaclsvc_encode_accessres(struct svc_rqst *rqstp, __be32 *p)
++static bool
++nfsaclsvc_encode_accessres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_accessres *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	if (resp->status != nfs_ok)
+-		goto out;
++	if (!svcxdr_encode_stat(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_fattr(rqstp, xdr, &resp->fh, &resp->stat))
++			return false;
++		if (xdr_stream_encode_u32(xdr, resp->access) < 0)
++			return false;
++		break;
++	}
+ 
+-	p = nfs2svc_encode_fattr(rqstp, p, &resp->fh, &resp->stat);
+-	*p++ = htonl(resp->access);
+-out:
+-	return xdr_ressize_check(rqstp, p);
++	return true;
+ }
+ 
+ /*
+@@ -354,13 +299,6 @@ static void nfsaclsvc_release_getacl(struct svc_rqst *rqstp)
+ 	posix_acl_release(resp->acl_default);
+ }
+ 
+-static void nfsaclsvc_release_attrstat(struct svc_rqst *rqstp)
+-{
+-	struct nfsd_attrstat *resp = rqstp->rq_resp;
+-
+-	fh_put(&resp->fh);
+-}
+-
+ static void nfsaclsvc_release_access(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd3_accessres *resp = rqstp->rq_resp;
+@@ -378,12 +316,14 @@ struct nfsd3_voidargs { int dummy; };
+ static const struct svc_procedure nfsd_acl_procedures2[5] = {
+ 	[ACLPROC2_NULL] = {
+ 		.pc_func = nfsacld_proc_null,
+-		.pc_decode = nfsaclsvc_decode_voidarg,
+-		.pc_encode = nfsaclsvc_encode_voidres,
+-		.pc_argsize = sizeof(struct nfsd3_voidargs),
+-		.pc_ressize = sizeof(struct nfsd3_voidargs),
++		.pc_decode = nfssvc_decode_voidarg,
++		.pc_encode = nfssvc_encode_voidres,
++		.pc_argsize = sizeof(struct nfsd_voidargs),
++		.pc_argzero = sizeof(struct nfsd_voidargs),
++		.pc_ressize = sizeof(struct nfsd_voidres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST,
++		.pc_name = "NULL",
+ 	},
+ 	[ACLPROC2_GETACL] = {
+ 		.pc_func = nfsacld_proc_getacl,
+@@ -391,29 +331,35 @@ static const struct svc_procedure nfsd_acl_procedures2[5] = {
+ 		.pc_encode = nfsaclsvc_encode_getaclres,
+ 		.pc_release = nfsaclsvc_release_getacl,
+ 		.pc_argsize = sizeof(struct nfsd3_getaclargs),
++		.pc_argzero = sizeof(struct nfsd3_getaclargs),
+ 		.pc_ressize = sizeof(struct nfsd3_getaclres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+1+2*(1+ACL),
++		.pc_name = "GETACL",
+ 	},
+ 	[ACLPROC2_SETACL] = {
+ 		.pc_func = nfsacld_proc_setacl,
+ 		.pc_decode = nfsaclsvc_decode_setaclargs,
+-		.pc_encode = nfsaclsvc_encode_attrstatres,
+-		.pc_release = nfsaclsvc_release_attrstat,
++		.pc_encode = nfssvc_encode_attrstatres,
++		.pc_release = nfssvc_release_attrstat,
+ 		.pc_argsize = sizeof(struct nfsd3_setaclargs),
++		.pc_argzero = sizeof(struct nfsd3_setaclargs),
+ 		.pc_ressize = sizeof(struct nfsd_attrstat),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+AT,
++		.pc_name = "SETACL",
+ 	},
+ 	[ACLPROC2_GETATTR] = {
+ 		.pc_func = nfsacld_proc_getattr,
+-		.pc_decode = nfsaclsvc_decode_fhandleargs,
+-		.pc_encode = nfsaclsvc_encode_attrstatres,
+-		.pc_release = nfsaclsvc_release_attrstat,
++		.pc_decode = nfssvc_decode_fhandleargs,
++		.pc_encode = nfssvc_encode_attrstatres,
++		.pc_release = nfssvc_release_attrstat,
+ 		.pc_argsize = sizeof(struct nfsd_fhandle),
++		.pc_argzero = sizeof(struct nfsd_fhandle),
+ 		.pc_ressize = sizeof(struct nfsd_attrstat),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+AT,
++		.pc_name = "GETATTR",
+ 	},
+ 	[ACLPROC2_ACCESS] = {
+ 		.pc_func = nfsacld_proc_access,
+@@ -421,9 +367,11 @@ static const struct svc_procedure nfsd_acl_procedures2[5] = {
+ 		.pc_encode = nfsaclsvc_encode_accessres,
+ 		.pc_release = nfsaclsvc_release_access,
+ 		.pc_argsize = sizeof(struct nfsd3_accessargs),
++		.pc_argzero = sizeof(struct nfsd3_accessargs),
+ 		.pc_ressize = sizeof(struct nfsd3_accessres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+AT+1,
++		.pc_name = "SETATTR",
+ 	},
+ };
+ 
+diff --git a/fs/nfsd/nfs3acl.c b/fs/nfsd/nfs3acl.c
+index 34a394e50e1d1..161f831b3a1b7 100644
+--- a/fs/nfsd/nfs3acl.c
++++ b/fs/nfsd/nfs3acl.c
+@@ -101,7 +101,7 @@ static __be32 nfsd3_proc_setacl(struct svc_rqst *rqstp)
+ 	if (error)
+ 		goto out_errno;
+ 
+-	fh_lock(fh);
++	inode_lock(inode);
+ 
+ 	error = set_posix_acl(inode, ACL_TYPE_ACCESS, argp->acl_access);
+ 	if (error)
+@@ -109,7 +109,7 @@ static __be32 nfsd3_proc_setacl(struct svc_rqst *rqstp)
+ 	error = set_posix_acl(inode, ACL_TYPE_DEFAULT, argp->acl_default);
+ 
+ out_drop_lock:
+-	fh_unlock(fh);
++	inode_unlock(inode);
+ 	fh_drop_write(fh);
+ out_errno:
+ 	resp->status = nfserrno(error);
+@@ -124,43 +124,39 @@ static __be32 nfsd3_proc_setacl(struct svc_rqst *rqstp)
+ /*
+  * XDR decode functions
+  */
+-static int nfs3svc_decode_getaclargs(struct svc_rqst *rqstp, __be32 *p)
++
++static bool
++nfs3svc_decode_getaclargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_getaclargs *args = rqstp->rq_argp;
+ 
+-	p = nfs3svc_decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	args->mask = ntohl(*p); p++;
++	if (!svcxdr_decode_nfs_fh3(xdr, &args->fh))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->mask) < 0)
++		return false;
+ 
+-	return xdr_argsize_check(rqstp, p);
++	return true;
+ }
+ 
+-
+-static int nfs3svc_decode_setaclargs(struct svc_rqst *rqstp, __be32 *p)
++static bool
++nfs3svc_decode_setaclargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+-	struct nfsd3_setaclargs *args = rqstp->rq_argp;
+-	struct kvec *head = rqstp->rq_arg.head;
+-	unsigned int base;
+-	int n;
+-
+-	p = nfs3svc_decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	args->mask = ntohl(*p++);
+-	if (args->mask & ~NFS_ACL_MASK ||
+-	    !xdr_argsize_check(rqstp, p))
+-		return 0;
+-
+-	base = (char *)p - (char *)head->iov_base;
+-	n = nfsacl_decode(&rqstp->rq_arg, base, NULL,
+-			  (args->mask & NFS_ACL) ?
+-			  &args->acl_access : NULL);
+-	if (n > 0)
+-		n = nfsacl_decode(&rqstp->rq_arg, base + n, NULL,
+-				  (args->mask & NFS_DFACL) ?
+-				  &args->acl_default : NULL);
+-	return (n > 0);
++	struct nfsd3_setaclargs *argp = rqstp->rq_argp;
++
++	if (!svcxdr_decode_nfs_fh3(xdr, &argp->fh))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &argp->mask) < 0)
++		return false;
++	if (argp->mask & ~NFS_ACL_MASK)
++		return false;
++	if (!nfs_stream_decode_acl(xdr, NULL, (argp->mask & NFS_ACL) ?
++				   &argp->acl_access : NULL))
++		return false;
++	if (!nfs_stream_decode_acl(xdr, NULL, (argp->mask & NFS_DFACL) ?
++				   &argp->acl_default : NULL))
++		return false;
++
++	return true;
+ }
+ 
+ /*
+@@ -168,59 +164,47 @@ static int nfs3svc_decode_setaclargs(struct svc_rqst *rqstp, __be32 *p)
+  */
+ 
+ /* GETACL */
+-static int nfs3svc_encode_getaclres(struct svc_rqst *rqstp, __be32 *p)
++static bool
++nfs3svc_encode_getaclres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_getaclres *resp = rqstp->rq_resp;
+ 	struct dentry *dentry = resp->fh.fh_dentry;
++	struct inode *inode;
+ 
+-	*p++ = resp->status;
+-	p = nfs3svc_encode_post_op_attr(rqstp, p, &resp->fh);
+-	if (resp->status == 0 && dentry && d_really_is_positive(dentry)) {
+-		struct inode *inode = d_inode(dentry);
+-		struct kvec *head = rqstp->rq_res.head;
+-		unsigned int base;
+-		int n;
+-		int w;
+-
+-		*p++ = htonl(resp->mask);
+-		if (!xdr_ressize_check(rqstp, p))
+-			return 0;
+-		base = (char *)p - (char *)head->iov_base;
+-
+-		rqstp->rq_res.page_len = w = nfsacl_size(
+-			(resp->mask & NFS_ACL)   ? resp->acl_access  : NULL,
+-			(resp->mask & NFS_DFACL) ? resp->acl_default : NULL);
+-		while (w > 0) {
+-			if (!*(rqstp->rq_next_page++))
+-				return 0;
+-			w -= PAGE_SIZE;
+-		}
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		inode = d_inode(dentry);
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
++			return false;
++		if (xdr_stream_encode_u32(xdr, resp->mask) < 0)
++			return false;
++
++		if (!nfs_stream_encode_acl(xdr, inode, resp->acl_access,
++					   resp->mask & NFS_ACL, 0))
++			return false;
++		if (!nfs_stream_encode_acl(xdr, inode, resp->acl_default,
++					   resp->mask & NFS_DFACL,
++					   NFS_ACL_DEFAULT))
++			return false;
++		break;
++	default:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
++			return false;
++	}
+ 
+-		n = nfsacl_encode(&rqstp->rq_res, base, inode,
+-				  resp->acl_access,
+-				  resp->mask & NFS_ACL, 0);
+-		if (n > 0)
+-			n = nfsacl_encode(&rqstp->rq_res, base + n, inode,
+-					  resp->acl_default,
+-					  resp->mask & NFS_DFACL,
+-					  NFS_ACL_DEFAULT);
+-		if (n <= 0)
+-			return 0;
+-	} else
+-		if (!xdr_ressize_check(rqstp, p))
+-			return 0;
+-
+-	return 1;
++	return true;
+ }
+ 
+ /* SETACL */
+-static int nfs3svc_encode_setaclres(struct svc_rqst *rqstp, __be32 *p)
++static bool
++nfs3svc_encode_setaclres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_attrstat *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	p = nfs3svc_encode_post_op_attr(rqstp, p, &resp->fh);
+-	return xdr_ressize_check(rqstp, p);
++	return svcxdr_encode_nfsstat3(xdr, resp->status) &&
++		svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh);
+ }
+ 
+ /*
+@@ -245,12 +229,14 @@ struct nfsd3_voidargs { int dummy; };
+ static const struct svc_procedure nfsd_acl_procedures3[3] = {
+ 	[ACLPROC3_NULL] = {
+ 		.pc_func = nfsd3_proc_null,
+-		.pc_decode = nfs3svc_decode_voidarg,
+-		.pc_encode = nfs3svc_encode_voidres,
+-		.pc_argsize = sizeof(struct nfsd3_voidargs),
+-		.pc_ressize = sizeof(struct nfsd3_voidargs),
++		.pc_decode = nfssvc_decode_voidarg,
++		.pc_encode = nfssvc_encode_voidres,
++		.pc_argsize = sizeof(struct nfsd_voidargs),
++		.pc_argzero = sizeof(struct nfsd_voidargs),
++		.pc_ressize = sizeof(struct nfsd_voidres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST,
++		.pc_name = "NULL",
+ 	},
+ 	[ACLPROC3_GETACL] = {
+ 		.pc_func = nfsd3_proc_getacl,
+@@ -258,9 +244,11 @@ static const struct svc_procedure nfsd_acl_procedures3[3] = {
+ 		.pc_encode = nfs3svc_encode_getaclres,
+ 		.pc_release = nfs3svc_release_getacl,
+ 		.pc_argsize = sizeof(struct nfsd3_getaclargs),
++		.pc_argzero = sizeof(struct nfsd3_getaclargs),
+ 		.pc_ressize = sizeof(struct nfsd3_getaclres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+1+2*(1+ACL),
++		.pc_name = "GETACL",
+ 	},
+ 	[ACLPROC3_SETACL] = {
+ 		.pc_func = nfsd3_proc_setacl,
+@@ -268,9 +256,11 @@ static const struct svc_procedure nfsd_acl_procedures3[3] = {
+ 		.pc_encode = nfs3svc_encode_setaclres,
+ 		.pc_release = nfs3svc_release_fhandle,
+ 		.pc_argsize = sizeof(struct nfsd3_setaclargs),
++		.pc_argzero = sizeof(struct nfsd3_setaclargs),
+ 		.pc_ressize = sizeof(struct nfsd3_attrstat),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+pAT,
++		.pc_name = "SETACL",
+ 	},
+ };
+ 
+diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
+index 981a4e4c9a3cf..19cf583096d9c 100644
+--- a/fs/nfsd/nfs3proc.c
++++ b/fs/nfsd/nfs3proc.c
+@@ -8,10 +8,12 @@
+ #include <linux/fs.h>
+ #include <linux/ext2_fs.h>
+ #include <linux/magic.h>
++#include <linux/namei.h>
+ 
+ #include "cache.h"
+ #include "xdr3.h"
+ #include "vfs.h"
++#include "filecache.h"
+ 
+ #define NFSDDBG_FACILITY		NFSDDBG_PROC
+ 
+@@ -66,12 +68,15 @@ nfsd3_proc_setattr(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd3_sattrargs *argp = rqstp->rq_argp;
+ 	struct nfsd3_attrstat *resp = rqstp->rq_resp;
++	struct nfsd_attrs attrs = {
++		.na_iattr	= &argp->attrs,
++	};
+ 
+ 	dprintk("nfsd: SETATTR(3)  %s\n",
+ 				SVCFH_fmt(&argp->fh));
+ 
+ 	fh_copy(&resp->fh, &argp->fh);
+-	resp->status = nfsd_setattr(rqstp, &resp->fh, &argp->attrs,
++	resp->status = nfsd_setattr(rqstp, &resp->fh, &attrs,
+ 				    argp->check_guard, argp->guardtime);
+ 	return rpc_success;
+ }
+@@ -124,7 +129,7 @@ nfsd3_proc_access(struct svc_rqst *rqstp)
+ static __be32
+ nfsd3_proc_readlink(struct svc_rqst *rqstp)
+ {
+-	struct nfsd3_readlinkargs *argp = rqstp->rq_argp;
++	struct nfsd_fhandle *argp = rqstp->rq_argp;
+ 	struct nfsd3_readlinkres *resp = rqstp->rq_resp;
+ 
+ 	dprintk("nfsd: READLINK(3) %s\n", SVCFH_fmt(&argp->fh));
+@@ -132,7 +137,9 @@ nfsd3_proc_readlink(struct svc_rqst *rqstp)
+ 	/* Read the symlink. */
+ 	fh_copy(&resp->fh, &argp->fh);
+ 	resp->len = NFS3_MAXPATHLEN;
+-	resp->status = nfsd_readlink(rqstp, &resp->fh, argp->buffer, &resp->len);
++	resp->pages = rqstp->rq_next_page++;
++	resp->status = nfsd_readlink(rqstp, &resp->fh,
++				     page_address(*resp->pages), &resp->len);
+ 	return rpc_success;
+ }
+ 
+@@ -144,25 +151,43 @@ nfsd3_proc_read(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd3_readargs *argp = rqstp->rq_argp;
+ 	struct nfsd3_readres *resp = rqstp->rq_resp;
+-	u32	max_blocksize = svc_max_payload(rqstp);
+-	unsigned long cnt = min(argp->count, max_blocksize);
++	unsigned int len;
++	int v;
+ 
+ 	dprintk("nfsd: READ(3) %s %lu bytes at %Lu\n",
+ 				SVCFH_fmt(&argp->fh),
+ 				(unsigned long) argp->count,
+ 				(unsigned long long) argp->offset);
+ 
++	argp->count = min_t(u32, argp->count, svc_max_payload(rqstp));
++	argp->count = min_t(u32, argp->count, rqstp->rq_res.buflen);
++	if (argp->offset > (u64)OFFSET_MAX)
++		argp->offset = (u64)OFFSET_MAX;
++	if (argp->offset + argp->count > (u64)OFFSET_MAX)
++		argp->count = (u64)OFFSET_MAX - argp->offset;
++
++	v = 0;
++	len = argp->count;
++	resp->pages = rqstp->rq_next_page;
++	while (len > 0) {
++		struct page *page = *(rqstp->rq_next_page++);
++
++		rqstp->rq_vec[v].iov_base = page_address(page);
++		rqstp->rq_vec[v].iov_len = min_t(unsigned int, len, PAGE_SIZE);
++		len -= rqstp->rq_vec[v].iov_len;
++		v++;
++	}
++
+ 	/* Obtain buffer pointer for payload.
+ 	 * 1 (status) + 22 (post_op_attr) + 1 (count) + 1 (eof)
+ 	 * + 1 (xdr opaque byte count) = 26
+ 	 */
+-	resp->count = cnt;
++	resp->count = argp->count;
+ 	svc_reserve_auth(rqstp, ((1 + NFS3_POST_OP_ATTR_WORDS + 3)<<2) + resp->count +4);
+ 
+ 	fh_copy(&resp->fh, &argp->fh);
+ 	resp->status = nfsd_read(rqstp, &resp->fh, argp->offset,
+-				 rqstp->rq_vec, argp->vlen, &resp->count,
+-				 &resp->eof);
++				 rqstp->rq_vec, v, &resp->count, &resp->eof);
+ 	return rpc_success;
+ }
+ 
+@@ -190,32 +215,147 @@ nfsd3_proc_write(struct svc_rqst *rqstp)
+ 
+ 	fh_copy(&resp->fh, &argp->fh);
+ 	resp->committed = argp->stable;
+-	nvecs = svc_fill_write_vector(rqstp, rqstp->rq_arg.pages,
+-				      &argp->first, cnt);
+-	if (!nvecs) {
+-		resp->status = nfserr_io;
+-		goto out;
+-	}
++	nvecs = svc_fill_write_vector(rqstp, &argp->payload);
++
+ 	resp->status = nfsd_write(rqstp, &resp->fh, argp->offset,
+ 				  rqstp->rq_vec, nvecs, &cnt,
+ 				  resp->committed, resp->verf);
+ 	resp->count = cnt;
+-out:
+ 	return rpc_success;
+ }
+ 
+ /*
+- * With NFSv3, CREATE processing is a lot easier than with NFSv2.
+- * At least in theory; we'll see how it fares in practice when the
+- * first reports about SunOS compatibility problems start to pour in...
++ * Implement NFSv3's unchecked, guarded, and exclusive CREATE
++ * semantics for regular files. Except for the created file,
++ * this operation is stateless on the server.
++ *
++ * Upon return, caller must release @fhp and @resfhp.
+  */
++static __be32
++nfsd3_create_file(struct svc_rqst *rqstp, struct svc_fh *fhp,
++		  struct svc_fh *resfhp, struct nfsd3_createargs *argp)
++{
++	struct iattr *iap = &argp->attrs;
++	struct dentry *parent, *child;
++	struct nfsd_attrs attrs = {
++		.na_iattr	= iap,
++	};
++	__u32 v_mtime, v_atime;
++	struct inode *inode;
++	__be32 status;
++	int host_err;
++
++	if (isdotent(argp->name, argp->len))
++		return nfserr_exist;
++	if (!(iap->ia_valid & ATTR_MODE))
++		iap->ia_mode = 0;
++
++	status = fh_verify(rqstp, fhp, S_IFDIR, NFSD_MAY_EXEC);
++	if (status != nfs_ok)
++		return status;
++
++	parent = fhp->fh_dentry;
++	inode = d_inode(parent);
++
++	host_err = fh_want_write(fhp);
++	if (host_err)
++		return nfserrno(host_err);
++
++	inode_lock_nested(inode, I_MUTEX_PARENT);
++
++	child = lookup_one_len(argp->name, parent, argp->len);
++	if (IS_ERR(child)) {
++		status = nfserrno(PTR_ERR(child));
++		goto out;
++	}
++
++	if (d_really_is_negative(child)) {
++		status = fh_verify(rqstp, fhp, S_IFDIR, NFSD_MAY_CREATE);
++		if (status != nfs_ok)
++			goto out;
++	}
++
++	status = fh_compose(resfhp, fhp->fh_export, child, fhp);
++	if (status != nfs_ok)
++		goto out;
++
++	v_mtime = 0;
++	v_atime = 0;
++	if (argp->createmode == NFS3_CREATE_EXCLUSIVE) {
++		u32 *verifier = (u32 *)argp->verf;
++
++		/*
++		 * Solaris 7 gets confused (bugid 4218508) if these have
++		 * the high bit set, as do xfs filesystems without the
++		 * "bigtime" feature. So just clear the high bits.
++		 */
++		v_mtime = verifier[0] & 0x7fffffff;
++		v_atime = verifier[1] & 0x7fffffff;
++	}
++
++	if (d_really_is_positive(child)) {
++		status = nfs_ok;
++
++		switch (argp->createmode) {
++		case NFS3_CREATE_UNCHECKED:
++			if (!d_is_reg(child))
++				break;
++			iap->ia_valid &= ATTR_SIZE;
++			goto set_attr;
++		case NFS3_CREATE_GUARDED:
++			status = nfserr_exist;
++			break;
++		case NFS3_CREATE_EXCLUSIVE:
++			if (d_inode(child)->i_mtime.tv_sec == v_mtime &&
++			    d_inode(child)->i_atime.tv_sec == v_atime &&
++			    d_inode(child)->i_size == 0) {
++				break;
++			}
++			status = nfserr_exist;
++		}
++		goto out;
++	}
++
++	if (!IS_POSIXACL(inode))
++		iap->ia_mode &= ~current_umask();
++
++	fh_fill_pre_attrs(fhp);
++	host_err = vfs_create(inode, child, iap->ia_mode, true);
++	if (host_err < 0) {
++		status = nfserrno(host_err);
++		goto out;
++	}
++	fh_fill_post_attrs(fhp);
++
++	/* A newly created file already has a file size of zero. */
++	if ((iap->ia_valid & ATTR_SIZE) && (iap->ia_size == 0))
++		iap->ia_valid &= ~ATTR_SIZE;
++	if (argp->createmode == NFS3_CREATE_EXCLUSIVE) {
++		iap->ia_valid = ATTR_MTIME | ATTR_ATIME |
++				ATTR_MTIME_SET | ATTR_ATIME_SET;
++		iap->ia_mtime.tv_sec = v_mtime;
++		iap->ia_atime.tv_sec = v_atime;
++		iap->ia_mtime.tv_nsec = 0;
++		iap->ia_atime.tv_nsec = 0;
++	}
++
++set_attr:
++	status = nfsd_create_setattr(rqstp, fhp, resfhp, &attrs);
++
++out:
++	inode_unlock(inode);
++	if (child && !IS_ERR(child))
++		dput(child);
++	fh_drop_write(fhp);
++	return status;
++}
++
+ static __be32
+ nfsd3_proc_create(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd3_createargs *argp = rqstp->rq_argp;
+ 	struct nfsd3_diropres *resp = rqstp->rq_resp;
+-	svc_fh		*dirfhp, *newfhp = NULL;
+-	struct iattr	*attr;
++	svc_fh *dirfhp, *newfhp;
+ 
+ 	dprintk("nfsd: CREATE(3)   %s %.*s\n",
+ 				SVCFH_fmt(&argp->fh),
+@@ -224,21 +364,8 @@ nfsd3_proc_create(struct svc_rqst *rqstp)
+ 
+ 	dirfhp = fh_copy(&resp->dirfh, &argp->fh);
+ 	newfhp = fh_init(&resp->fh, NFS3_FHSIZE);
+-	attr   = &argp->attrs;
+-
+-	/* Unfudge the mode bits */
+-	attr->ia_mode &= ~S_IFMT;
+-	if (!(attr->ia_valid & ATTR_MODE)) { 
+-		attr->ia_valid |= ATTR_MODE;
+-		attr->ia_mode = S_IFREG;
+-	} else {
+-		attr->ia_mode = (attr->ia_mode & ~S_IFMT) | S_IFREG;
+-	}
+ 
+-	/* Now create the file and set attributes */
+-	resp->status = do_nfsd_create(rqstp, dirfhp, argp->name, argp->len,
+-				      attr, newfhp, argp->createmode,
+-				      (u32 *)argp->verf, NULL, NULL);
++	resp->status = nfsd3_create_file(rqstp, dirfhp, newfhp, argp);
+ 	return rpc_success;
+ }
+ 
+@@ -250,6 +377,9 @@ nfsd3_proc_mkdir(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd3_createargs *argp = rqstp->rq_argp;
+ 	struct nfsd3_diropres *resp = rqstp->rq_resp;
++	struct nfsd_attrs attrs = {
++		.na_iattr	= &argp->attrs,
++	};
+ 
+ 	dprintk("nfsd: MKDIR(3)    %s %.*s\n",
+ 				SVCFH_fmt(&argp->fh),
+@@ -260,8 +390,7 @@ nfsd3_proc_mkdir(struct svc_rqst *rqstp)
+ 	fh_copy(&resp->dirfh, &argp->fh);
+ 	fh_init(&resp->fh, NFS3_FHSIZE);
+ 	resp->status = nfsd_create(rqstp, &resp->dirfh, argp->name, argp->len,
+-				   &argp->attrs, S_IFDIR, 0, &resp->fh);
+-	fh_unlock(&resp->dirfh);
++				   &attrs, S_IFDIR, 0, &resp->fh);
+ 	return rpc_success;
+ }
+ 
+@@ -270,6 +399,9 @@ nfsd3_proc_symlink(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd3_symlinkargs *argp = rqstp->rq_argp;
+ 	struct nfsd3_diropres *resp = rqstp->rq_resp;
++	struct nfsd_attrs attrs = {
++		.na_iattr	= &argp->attrs,
++	};
+ 
+ 	if (argp->tlen == 0) {
+ 		resp->status = nfserr_inval;
+@@ -296,7 +428,7 @@ nfsd3_proc_symlink(struct svc_rqst *rqstp)
+ 	fh_copy(&resp->dirfh, &argp->ffh);
+ 	fh_init(&resp->fh, NFS3_FHSIZE);
+ 	resp->status = nfsd_symlink(rqstp, &resp->dirfh, argp->fname,
+-				    argp->flen, argp->tname, &resp->fh);
++				    argp->flen, argp->tname, &attrs, &resp->fh);
+ 	kfree(argp->tname);
+ out:
+ 	return rpc_success;
+@@ -310,6 +442,9 @@ nfsd3_proc_mknod(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd3_mknodargs *argp = rqstp->rq_argp;
+ 	struct nfsd3_diropres  *resp = rqstp->rq_resp;
++	struct nfsd_attrs attrs = {
++		.na_iattr	= &argp->attrs,
++	};
+ 	int type;
+ 	dev_t	rdev = 0;
+ 
+@@ -335,8 +470,7 @@ nfsd3_proc_mknod(struct svc_rqst *rqstp)
+ 
+ 	type = nfs3_ftypes[argp->ftype];
+ 	resp->status = nfsd_create(rqstp, &resp->dirfh, argp->name, argp->len,
+-				   &argp->attrs, type, rdev, &resp->fh);
+-	fh_unlock(&resp->dirfh);
++				   &attrs, type, rdev, &resp->fh);
+ out:
+ 	return rpc_success;
+ }
+@@ -359,7 +493,6 @@ nfsd3_proc_remove(struct svc_rqst *rqstp)
+ 	fh_copy(&resp->fh, &argp->fh);
+ 	resp->status = nfsd_unlink(rqstp, &resp->fh, -S_IFDIR,
+ 				   argp->name, argp->len);
+-	fh_unlock(&resp->fh);
+ 	return rpc_success;
+ }
+ 
+@@ -380,7 +513,6 @@ nfsd3_proc_rmdir(struct svc_rqst *rqstp)
+ 	fh_copy(&resp->fh, &argp->fh);
+ 	resp->status = nfsd_unlink(rqstp, &resp->fh, S_IFDIR,
+ 				   argp->name, argp->len);
+-	fh_unlock(&resp->fh);
+ 	return rpc_success;
+ }
+ 
+@@ -426,6 +558,26 @@ nfsd3_proc_link(struct svc_rqst *rqstp)
+ 	return rpc_success;
+ }
+ 
++static void nfsd3_init_dirlist_pages(struct svc_rqst *rqstp,
++				     struct nfsd3_readdirres *resp,
++				     u32 count)
++{
++	struct xdr_buf *buf = &resp->dirlist;
++	struct xdr_stream *xdr = &resp->xdr;
++	unsigned int sendbuf = min_t(unsigned int, rqstp->rq_res.buflen,
++				     svc_max_payload(rqstp));
++
++	memset(buf, 0, sizeof(*buf));
++
++	/* Reserve room for the NULL ptr & eof flag (-2 words) */
++	buf->buflen = clamp(count, (u32)(XDR_UNIT * 2), sendbuf);
++	buf->buflen -= XDR_UNIT * 2;
++	buf->pages = rqstp->rq_next_page;
++	rqstp->rq_next_page += (buf->buflen + PAGE_SIZE - 1) >> PAGE_SHIFT;
++
++	xdr_init_encode_pages(xdr, buf, buf->pages,  NULL);
++}
++
+ /*
+  * Read a portion of a directory.
+  */
+@@ -434,53 +586,26 @@ nfsd3_proc_readdir(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd3_readdirargs *argp = rqstp->rq_argp;
+ 	struct nfsd3_readdirres  *resp = rqstp->rq_resp;
+-	int		count = 0;
+-	struct page	**p;
+-	caddr_t		page_addr = NULL;
++	loff_t		offset;
+ 
+ 	dprintk("nfsd: READDIR(3)  %s %d bytes at %d\n",
+ 				SVCFH_fmt(&argp->fh),
+ 				argp->count, (u32) argp->cookie);
+ 
+-	/* Make sure we've room for the NULL ptr & eof flag, and shrink to
+-	 * client read size */
+-	count = (argp->count >> 2) - 2;
++	nfsd3_init_dirlist_pages(rqstp, resp, argp->count);
+ 
+-	/* Read directory and encode entries on the fly */
+ 	fh_copy(&resp->fh, &argp->fh);
+-
+-	resp->buflen = count;
+ 	resp->common.err = nfs_ok;
+-	resp->buffer = argp->buffer;
++	resp->cookie_offset = 0;
+ 	resp->rqstp = rqstp;
+-	resp->status = nfsd_readdir(rqstp, &resp->fh, (loff_t *)&argp->cookie,
+-				    &resp->common, nfs3svc_encode_entry);
++	offset = argp->cookie;
++	resp->status = nfsd_readdir(rqstp, &resp->fh, &offset,
++				    &resp->common, nfs3svc_encode_entry3);
+ 	memcpy(resp->verf, argp->verf, 8);
+-	count = 0;
+-	for (p = rqstp->rq_respages + 1; p < rqstp->rq_next_page; p++) {
+-		page_addr = page_address(*p);
++	nfs3svc_encode_cookie3(resp, offset);
+ 
+-		if (((caddr_t)resp->buffer >= page_addr) &&
+-		    ((caddr_t)resp->buffer < page_addr + PAGE_SIZE)) {
+-			count += (caddr_t)resp->buffer - page_addr;
+-			break;
+-		}
+-		count += PAGE_SIZE;
+-	}
+-	resp->count = count >> 2;
+-	if (resp->offset) {
+-		loff_t offset = argp->cookie;
+-
+-		if (unlikely(resp->offset1)) {
+-			/* we ended up with offset on a page boundary */
+-			*resp->offset = htonl(offset >> 32);
+-			*resp->offset1 = htonl(offset & 0xffffffff);
+-			resp->offset1 = NULL;
+-		} else {
+-			xdr_encode_hyper(resp->offset, offset);
+-		}
+-		resp->offset = NULL;
+-	}
++	/* Recycle only pages that were part of the reply */
++	rqstp->rq_next_page = resp->xdr.page_ptr + 1;
+ 
+ 	return rpc_success;
+ }
+@@ -494,25 +619,17 @@ nfsd3_proc_readdirplus(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd3_readdirargs *argp = rqstp->rq_argp;
+ 	struct nfsd3_readdirres  *resp = rqstp->rq_resp;
+-	int	count = 0;
+ 	loff_t	offset;
+-	struct page **p;
+-	caddr_t	page_addr = NULL;
+ 
+ 	dprintk("nfsd: READDIR+(3) %s %d bytes at %d\n",
+ 				SVCFH_fmt(&argp->fh),
+ 				argp->count, (u32) argp->cookie);
+ 
+-	/* Convert byte count to number of words (i.e. >> 2),
+-	 * and reserve room for the NULL ptr & eof flag (-2 words) */
+-	resp->count = (argp->count >> 2) - 2;
++	nfsd3_init_dirlist_pages(rqstp, resp, argp->count);
+ 
+-	/* Read directory and encode entries on the fly */
+ 	fh_copy(&resp->fh, &argp->fh);
+-
+ 	resp->common.err = nfs_ok;
+-	resp->buffer = argp->buffer;
+-	resp->buflen = resp->count;
++	resp->cookie_offset = 0;
+ 	resp->rqstp = rqstp;
+ 	offset = argp->cookie;
+ 
+@@ -526,30 +643,12 @@ nfsd3_proc_readdirplus(struct svc_rqst *rqstp)
+ 	}
+ 
+ 	resp->status = nfsd_readdir(rqstp, &resp->fh, &offset,
+-				    &resp->common, nfs3svc_encode_entry_plus);
++				    &resp->common, nfs3svc_encode_entryplus3);
+ 	memcpy(resp->verf, argp->verf, 8);
+-	for (p = rqstp->rq_respages + 1; p < rqstp->rq_next_page; p++) {
+-		page_addr = page_address(*p);
++	nfs3svc_encode_cookie3(resp, offset);
+ 
+-		if (((caddr_t)resp->buffer >= page_addr) &&
+-		    ((caddr_t)resp->buffer < page_addr + PAGE_SIZE)) {
+-			count += (caddr_t)resp->buffer - page_addr;
+-			break;
+-		}
+-		count += PAGE_SIZE;
+-	}
+-	resp->count = count >> 2;
+-	if (resp->offset) {
+-		if (unlikely(resp->offset1)) {
+-			/* we ended up with offset on a page boundary */
+-			*resp->offset = htonl(offset >> 32);
+-			*resp->offset1 = htonl(offset & 0xffffffff);
+-			resp->offset1 = NULL;
+-		} else {
+-			xdr_encode_hyper(resp->offset, offset);
+-		}
+-		resp->offset = NULL;
+-	}
++	/* Recycle only pages that were part of the reply */
++	rqstp->rq_next_page = resp->xdr.page_ptr + 1;
+ 
+ out:
+ 	return rpc_success;
+@@ -665,20 +764,21 @@ nfsd3_proc_commit(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd3_commitargs *argp = rqstp->rq_argp;
+ 	struct nfsd3_commitres *resp = rqstp->rq_resp;
++	struct nfsd_file *nf;
+ 
+ 	dprintk("nfsd: COMMIT(3)   %s %u@%Lu\n",
+ 				SVCFH_fmt(&argp->fh),
+ 				argp->count,
+ 				(unsigned long long) argp->offset);
+ 
+-	if (argp->offset > NFS_OFFSET_MAX) {
+-		resp->status = nfserr_inval;
+-		goto out;
+-	}
+-
+ 	fh_copy(&resp->fh, &argp->fh);
+-	resp->status = nfsd_commit(rqstp, &resp->fh, argp->offset,
++	resp->status = nfsd_file_acquire_gc(rqstp, &resp->fh, NFSD_MAY_WRITE |
++					    NFSD_MAY_NOT_BREAK_LEASE, &nf);
++	if (resp->status)
++		goto out;
++	resp->status = nfsd_commit(rqstp, &resp->fh, nf, argp->offset,
+ 				   argp->count, resp->verf);
++	nfsd_file_put(nf);
+ out:
+ 	return rpc_success;
+ }
+@@ -688,18 +788,14 @@ nfsd3_proc_commit(struct svc_rqst *rqstp)
+  * NFSv3 Server procedures.
+  * Only the results of non-idempotent operations are cached.
+  */
+-#define nfs3svc_decode_fhandleargs	nfs3svc_decode_fhandle
+ #define nfs3svc_encode_attrstatres	nfs3svc_encode_attrstat
+ #define nfs3svc_encode_wccstatres	nfs3svc_encode_wccstat
+ #define nfsd3_mkdirargs			nfsd3_createargs
+ #define nfsd3_readdirplusargs		nfsd3_readdirargs
+ #define nfsd3_fhandleargs		nfsd_fhandle
+-#define nfsd3_fhandleres		nfsd3_attrstat
+ #define nfsd3_attrstatres		nfsd3_attrstat
+ #define nfsd3_wccstatres		nfsd3_attrstat
+ #define nfsd3_createres			nfsd3_diropres
+-#define nfsd3_voidres			nfsd3_voidargs
+-struct nfsd3_voidargs { int dummy; };
+ 
+ #define ST 1		/* status*/
+ #define FH 17		/* filehandle with length */
+@@ -710,22 +806,26 @@ struct nfsd3_voidargs { int dummy; };
+ static const struct svc_procedure nfsd_procedures3[22] = {
+ 	[NFS3PROC_NULL] = {
+ 		.pc_func = nfsd3_proc_null,
+-		.pc_decode = nfs3svc_decode_voidarg,
+-		.pc_encode = nfs3svc_encode_voidres,
+-		.pc_argsize = sizeof(struct nfsd3_voidargs),
+-		.pc_ressize = sizeof(struct nfsd3_voidres),
++		.pc_decode = nfssvc_decode_voidarg,
++		.pc_encode = nfssvc_encode_voidres,
++		.pc_argsize = sizeof(struct nfsd_voidargs),
++		.pc_argzero = sizeof(struct nfsd_voidargs),
++		.pc_ressize = sizeof(struct nfsd_voidres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST,
++		.pc_name = "NULL",
+ 	},
+ 	[NFS3PROC_GETATTR] = {
+ 		.pc_func = nfsd3_proc_getattr,
+ 		.pc_decode = nfs3svc_decode_fhandleargs,
+-		.pc_encode = nfs3svc_encode_attrstatres,
++		.pc_encode = nfs3svc_encode_getattrres,
+ 		.pc_release = nfs3svc_release_fhandle,
+-		.pc_argsize = sizeof(struct nfsd3_fhandleargs),
++		.pc_argsize = sizeof(struct nfsd_fhandle),
++		.pc_argzero = sizeof(struct nfsd_fhandle),
+ 		.pc_ressize = sizeof(struct nfsd3_attrstatres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+AT,
++		.pc_name = "GETATTR",
+ 	},
+ 	[NFS3PROC_SETATTR] = {
+ 		.pc_func = nfsd3_proc_setattr,
+@@ -733,19 +833,23 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_wccstatres,
+ 		.pc_release = nfs3svc_release_fhandle,
+ 		.pc_argsize = sizeof(struct nfsd3_sattrargs),
++		.pc_argzero = sizeof(struct nfsd3_sattrargs),
+ 		.pc_ressize = sizeof(struct nfsd3_wccstatres),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+WC,
++		.pc_name = "SETATTR",
+ 	},
+ 	[NFS3PROC_LOOKUP] = {
+ 		.pc_func = nfsd3_proc_lookup,
+ 		.pc_decode = nfs3svc_decode_diropargs,
+-		.pc_encode = nfs3svc_encode_diropres,
++		.pc_encode = nfs3svc_encode_lookupres,
+ 		.pc_release = nfs3svc_release_fhandle2,
+ 		.pc_argsize = sizeof(struct nfsd3_diropargs),
++		.pc_argzero = sizeof(struct nfsd3_diropargs),
+ 		.pc_ressize = sizeof(struct nfsd3_diropres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+FH+pAT+pAT,
++		.pc_name = "LOOKUP",
+ 	},
+ 	[NFS3PROC_ACCESS] = {
+ 		.pc_func = nfsd3_proc_access,
+@@ -753,19 +857,23 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_accessres,
+ 		.pc_release = nfs3svc_release_fhandle,
+ 		.pc_argsize = sizeof(struct nfsd3_accessargs),
++		.pc_argzero = sizeof(struct nfsd3_accessargs),
+ 		.pc_ressize = sizeof(struct nfsd3_accessres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+pAT+1,
++		.pc_name = "ACCESS",
+ 	},
+ 	[NFS3PROC_READLINK] = {
+ 		.pc_func = nfsd3_proc_readlink,
+-		.pc_decode = nfs3svc_decode_readlinkargs,
++		.pc_decode = nfs3svc_decode_fhandleargs,
+ 		.pc_encode = nfs3svc_encode_readlinkres,
+ 		.pc_release = nfs3svc_release_fhandle,
+-		.pc_argsize = sizeof(struct nfsd3_readlinkargs),
++		.pc_argsize = sizeof(struct nfsd_fhandle),
++		.pc_argzero = sizeof(struct nfsd_fhandle),
+ 		.pc_ressize = sizeof(struct nfsd3_readlinkres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+pAT+1+NFS3_MAXPATHLEN/4,
++		.pc_name = "READLINK",
+ 	},
+ 	[NFS3PROC_READ] = {
+ 		.pc_func = nfsd3_proc_read,
+@@ -773,9 +881,11 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_readres,
+ 		.pc_release = nfs3svc_release_fhandle,
+ 		.pc_argsize = sizeof(struct nfsd3_readargs),
++		.pc_argzero = sizeof(struct nfsd3_readargs),
+ 		.pc_ressize = sizeof(struct nfsd3_readres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+pAT+4+NFSSVC_MAXBLKSIZE/4,
++		.pc_name = "READ",
+ 	},
+ 	[NFS3PROC_WRITE] = {
+ 		.pc_func = nfsd3_proc_write,
+@@ -783,9 +893,11 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_writeres,
+ 		.pc_release = nfs3svc_release_fhandle,
+ 		.pc_argsize = sizeof(struct nfsd3_writeargs),
++		.pc_argzero = sizeof(struct nfsd3_writeargs),
+ 		.pc_ressize = sizeof(struct nfsd3_writeres),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+WC+4,
++		.pc_name = "WRITE",
+ 	},
+ 	[NFS3PROC_CREATE] = {
+ 		.pc_func = nfsd3_proc_create,
+@@ -793,9 +905,11 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_createres,
+ 		.pc_release = nfs3svc_release_fhandle2,
+ 		.pc_argsize = sizeof(struct nfsd3_createargs),
++		.pc_argzero = sizeof(struct nfsd3_createargs),
+ 		.pc_ressize = sizeof(struct nfsd3_createres),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+(1+FH+pAT)+WC,
++		.pc_name = "CREATE",
+ 	},
+ 	[NFS3PROC_MKDIR] = {
+ 		.pc_func = nfsd3_proc_mkdir,
+@@ -803,9 +917,11 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_createres,
+ 		.pc_release = nfs3svc_release_fhandle2,
+ 		.pc_argsize = sizeof(struct nfsd3_mkdirargs),
++		.pc_argzero = sizeof(struct nfsd3_mkdirargs),
+ 		.pc_ressize = sizeof(struct nfsd3_createres),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+(1+FH+pAT)+WC,
++		.pc_name = "MKDIR",
+ 	},
+ 	[NFS3PROC_SYMLINK] = {
+ 		.pc_func = nfsd3_proc_symlink,
+@@ -813,9 +929,11 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_createres,
+ 		.pc_release = nfs3svc_release_fhandle2,
+ 		.pc_argsize = sizeof(struct nfsd3_symlinkargs),
++		.pc_argzero = sizeof(struct nfsd3_symlinkargs),
+ 		.pc_ressize = sizeof(struct nfsd3_createres),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+(1+FH+pAT)+WC,
++		.pc_name = "SYMLINK",
+ 	},
+ 	[NFS3PROC_MKNOD] = {
+ 		.pc_func = nfsd3_proc_mknod,
+@@ -823,9 +941,11 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_createres,
+ 		.pc_release = nfs3svc_release_fhandle2,
+ 		.pc_argsize = sizeof(struct nfsd3_mknodargs),
++		.pc_argzero = sizeof(struct nfsd3_mknodargs),
+ 		.pc_ressize = sizeof(struct nfsd3_createres),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+(1+FH+pAT)+WC,
++		.pc_name = "MKNOD",
+ 	},
+ 	[NFS3PROC_REMOVE] = {
+ 		.pc_func = nfsd3_proc_remove,
+@@ -833,9 +953,11 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_wccstatres,
+ 		.pc_release = nfs3svc_release_fhandle,
+ 		.pc_argsize = sizeof(struct nfsd3_diropargs),
++		.pc_argzero = sizeof(struct nfsd3_diropargs),
+ 		.pc_ressize = sizeof(struct nfsd3_wccstatres),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+WC,
++		.pc_name = "REMOVE",
+ 	},
+ 	[NFS3PROC_RMDIR] = {
+ 		.pc_func = nfsd3_proc_rmdir,
+@@ -843,9 +965,11 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_wccstatres,
+ 		.pc_release = nfs3svc_release_fhandle,
+ 		.pc_argsize = sizeof(struct nfsd3_diropargs),
++		.pc_argzero = sizeof(struct nfsd3_diropargs),
+ 		.pc_ressize = sizeof(struct nfsd3_wccstatres),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+WC,
++		.pc_name = "RMDIR",
+ 	},
+ 	[NFS3PROC_RENAME] = {
+ 		.pc_func = nfsd3_proc_rename,
+@@ -853,9 +977,11 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_renameres,
+ 		.pc_release = nfs3svc_release_fhandle2,
+ 		.pc_argsize = sizeof(struct nfsd3_renameargs),
++		.pc_argzero = sizeof(struct nfsd3_renameargs),
+ 		.pc_ressize = sizeof(struct nfsd3_renameres),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+WC+WC,
++		.pc_name = "RENAME",
+ 	},
+ 	[NFS3PROC_LINK] = {
+ 		.pc_func = nfsd3_proc_link,
+@@ -863,9 +989,11 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_linkres,
+ 		.pc_release = nfs3svc_release_fhandle2,
+ 		.pc_argsize = sizeof(struct nfsd3_linkargs),
++		.pc_argzero = sizeof(struct nfsd3_linkargs),
+ 		.pc_ressize = sizeof(struct nfsd3_linkres),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+pAT+WC,
++		.pc_name = "LINK",
+ 	},
+ 	[NFS3PROC_READDIR] = {
+ 		.pc_func = nfsd3_proc_readdir,
+@@ -873,8 +1001,10 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_readdirres,
+ 		.pc_release = nfs3svc_release_fhandle,
+ 		.pc_argsize = sizeof(struct nfsd3_readdirargs),
++		.pc_argzero = sizeof(struct nfsd3_readdirargs),
+ 		.pc_ressize = sizeof(struct nfsd3_readdirres),
+ 		.pc_cachetype = RC_NOCACHE,
++		.pc_name = "READDIR",
+ 	},
+ 	[NFS3PROC_READDIRPLUS] = {
+ 		.pc_func = nfsd3_proc_readdirplus,
+@@ -882,35 +1012,43 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_readdirres,
+ 		.pc_release = nfs3svc_release_fhandle,
+ 		.pc_argsize = sizeof(struct nfsd3_readdirplusargs),
++		.pc_argzero = sizeof(struct nfsd3_readdirplusargs),
+ 		.pc_ressize = sizeof(struct nfsd3_readdirres),
+ 		.pc_cachetype = RC_NOCACHE,
++		.pc_name = "READDIRPLUS",
+ 	},
+ 	[NFS3PROC_FSSTAT] = {
+ 		.pc_func = nfsd3_proc_fsstat,
+ 		.pc_decode = nfs3svc_decode_fhandleargs,
+ 		.pc_encode = nfs3svc_encode_fsstatres,
+ 		.pc_argsize = sizeof(struct nfsd3_fhandleargs),
++		.pc_argzero = sizeof(struct nfsd3_fhandleargs),
+ 		.pc_ressize = sizeof(struct nfsd3_fsstatres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+pAT+2*6+1,
++		.pc_name = "FSSTAT",
+ 	},
+ 	[NFS3PROC_FSINFO] = {
+ 		.pc_func = nfsd3_proc_fsinfo,
+ 		.pc_decode = nfs3svc_decode_fhandleargs,
+ 		.pc_encode = nfs3svc_encode_fsinfores,
+ 		.pc_argsize = sizeof(struct nfsd3_fhandleargs),
++		.pc_argzero = sizeof(struct nfsd3_fhandleargs),
+ 		.pc_ressize = sizeof(struct nfsd3_fsinfores),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+pAT+12,
++		.pc_name = "FSINFO",
+ 	},
+ 	[NFS3PROC_PATHCONF] = {
+ 		.pc_func = nfsd3_proc_pathconf,
+ 		.pc_decode = nfs3svc_decode_fhandleargs,
+ 		.pc_encode = nfs3svc_encode_pathconfres,
+ 		.pc_argsize = sizeof(struct nfsd3_fhandleargs),
++		.pc_argzero = sizeof(struct nfsd3_fhandleargs),
+ 		.pc_ressize = sizeof(struct nfsd3_pathconfres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+pAT+6,
++		.pc_name = "PATHCONF",
+ 	},
+ 	[NFS3PROC_COMMIT] = {
+ 		.pc_func = nfsd3_proc_commit,
+@@ -918,9 +1056,11 @@ static const struct svc_procedure nfsd_procedures3[22] = {
+ 		.pc_encode = nfs3svc_encode_commitres,
+ 		.pc_release = nfs3svc_release_fhandle,
+ 		.pc_argsize = sizeof(struct nfsd3_commitargs),
++		.pc_argzero = sizeof(struct nfsd3_commitargs),
+ 		.pc_ressize = sizeof(struct nfsd3_commitres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+WC+2,
++		.pc_name = "COMMIT",
+ 	},
+ };
+ 
+diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
+index 716566da400e1..3308dd671ef0b 100644
+--- a/fs/nfsd/nfs3xdr.c
++++ b/fs/nfsd/nfs3xdr.c
+@@ -14,13 +14,26 @@
+ #include "netns.h"
+ #include "vfs.h"
+ 
+-#define NFSDDBG_FACILITY		NFSDDBG_XDR
++/*
++ * Force construction of an empty post-op attr
++ */
++static const struct svc_fh nfs3svc_null_fh = {
++	.fh_no_wcc	= true,
++};
+ 
++/*
++ * time_delta. {1, 0} means the server is accurate only
++ * to the nearest second.
++ */
++static const struct timespec64 nfs3svc_time_delta = {
++	.tv_sec		= 1,
++	.tv_nsec	= 0,
++};
+ 
+ /*
+  * Mapping of S_IF* types to NFS file types
+  */
+-static u32	nfs3_ftypes[] = {
++static const u32 nfs3_ftypes[] = {
+ 	NF3NON,  NF3FIFO, NF3CHR, NF3BAD,
+ 	NF3DIR,  NF3BAD,  NF3BLK, NF3BAD,
+ 	NF3REG,  NF3BAD,  NF3LNK, NF3BAD,
+@@ -29,824 +42,938 @@ static u32	nfs3_ftypes[] = {
+ 
+ 
+ /*
+- * XDR functions for basic NFS types
++ * Basic NFSv3 data types (RFC 1813 Sections 2.5 and 2.6)
+  */
++
+ static __be32 *
+-encode_time3(__be32 *p, struct timespec64 *time)
++encode_nfstime3(__be32 *p, const struct timespec64 *time)
+ {
+-	*p++ = htonl((u32) time->tv_sec); *p++ = htonl(time->tv_nsec);
++	*p++ = cpu_to_be32((u32)time->tv_sec);
++	*p++ = cpu_to_be32(time->tv_nsec);
++
+ 	return p;
+ }
+ 
+-static __be32 *
+-decode_time3(__be32 *p, struct timespec64 *time)
++static bool
++svcxdr_decode_nfstime3(struct xdr_stream *xdr, struct timespec64 *timep)
+ {
+-	time->tv_sec = ntohl(*p++);
+-	time->tv_nsec = ntohl(*p++);
+-	return p;
++	__be32 *p;
++
++	p = xdr_inline_decode(xdr, XDR_UNIT * 2);
++	if (!p)
++		return false;
++	timep->tv_sec = be32_to_cpup(p++);
++	timep->tv_nsec = be32_to_cpup(p);
++
++	return true;
+ }
+ 
+-static __be32 *
+-decode_fh(__be32 *p, struct svc_fh *fhp)
++/**
++ * svcxdr_decode_nfs_fh3 - Decode an NFSv3 file handle
++ * @xdr: XDR stream positioned at an undecoded NFSv3 FH
++ * @fhp: OUT: filled-in server file handle
++ *
++ * Return values:
++ *  %false: The encoded file handle was not valid
++ *  %true: @fhp has been initialized
++ */
++bool
++svcxdr_decode_nfs_fh3(struct xdr_stream *xdr, struct svc_fh *fhp)
+ {
+-	unsigned int size;
++	__be32 *p;
++	u32 size;
++
++	if (xdr_stream_decode_u32(xdr, &size) < 0)
++		return false;
++	if (size == 0 || size > NFS3_FHSIZE)
++		return false;
++	p = xdr_inline_decode(xdr, size);
++	if (!p)
++		return false;
+ 	fh_init(fhp, NFS3_FHSIZE);
+-	size = ntohl(*p++);
+-	if (size > NFS3_FHSIZE)
+-		return NULL;
+-
+-	memcpy(&fhp->fh_handle.fh_base, p, size);
+ 	fhp->fh_handle.fh_size = size;
+-	return p + XDR_QUADLEN(size);
++	memcpy(&fhp->fh_handle.fh_raw, p, size);
++
++	return true;
+ }
+ 
+-/* Helper function for NFSv3 ACL code */
+-__be32 *nfs3svc_decode_fh(__be32 *p, struct svc_fh *fhp)
++/**
++ * svcxdr_encode_nfsstat3 - Encode an NFSv3 status code
++ * @xdr: XDR stream
++ * @status: status value to encode
++ *
++ * Return values:
++ *   %false: Send buffer space was exhausted
++ *   %true: Success
++ */
++bool
++svcxdr_encode_nfsstat3(struct xdr_stream *xdr, __be32 status)
+ {
+-	return decode_fh(p, fhp);
++	__be32 *p;
++
++	p = xdr_reserve_space(xdr, sizeof(status));
++	if (!p)
++		return false;
++	*p = status;
++
++	return true;
+ }
+ 
+-static __be32 *
+-encode_fh(__be32 *p, struct svc_fh *fhp)
++static bool
++svcxdr_encode_nfs_fh3(struct xdr_stream *xdr, const struct svc_fh *fhp)
++{
++	u32 size = fhp->fh_handle.fh_size;
++	__be32 *p;
++
++	p = xdr_reserve_space(xdr, XDR_UNIT + size);
++	if (!p)
++		return false;
++	*p++ = cpu_to_be32(size);
++	if (size)
++		p[XDR_QUADLEN(size) - 1] = 0;
++	memcpy(p, &fhp->fh_handle.fh_raw, size);
++
++	return true;
++}
++
++static bool
++svcxdr_encode_post_op_fh3(struct xdr_stream *xdr, const struct svc_fh *fhp)
+ {
+-	unsigned int size = fhp->fh_handle.fh_size;
+-	*p++ = htonl(size);
+-	if (size) p[XDR_QUADLEN(size)-1]=0;
+-	memcpy(p, &fhp->fh_handle.fh_base, size);
+-	return p + XDR_QUADLEN(size);
++	if (xdr_stream_encode_item_present(xdr) < 0)
++		return false;
++	if (!svcxdr_encode_nfs_fh3(xdr, fhp))
++		return false;
++
++	return true;
+ }
+ 
+-/*
+- * Decode a file name and make sure that the path contains
+- * no slashes or null bytes.
+- */
+-static __be32 *
+-decode_filename(__be32 *p, char **namp, unsigned int *lenp)
++static bool
++svcxdr_encode_cookieverf3(struct xdr_stream *xdr, const __be32 *verf)
++{
++	__be32 *p;
++
++	p = xdr_reserve_space(xdr, NFS3_COOKIEVERFSIZE);
++	if (!p)
++		return false;
++	memcpy(p, verf, NFS3_COOKIEVERFSIZE);
++
++	return true;
++}
++
++static bool
++svcxdr_encode_writeverf3(struct xdr_stream *xdr, const __be32 *verf)
++{
++	__be32 *p;
++
++	p = xdr_reserve_space(xdr, NFS3_WRITEVERFSIZE);
++	if (!p)
++		return false;
++	memcpy(p, verf, NFS3_WRITEVERFSIZE);
++
++	return true;
++}
++
++static bool
++svcxdr_decode_filename3(struct xdr_stream *xdr, char **name, unsigned int *len)
+ {
+-	char		*name;
+-	unsigned int	i;
+-
+-	if ((p = xdr_decode_string_inplace(p, namp, lenp, NFS3_MAXNAMLEN)) != NULL) {
+-		for (i = 0, name = *namp; i < *lenp; i++, name++) {
+-			if (*name == '\0' || *name == '/')
+-				return NULL;
+-		}
++	u32 size, i;
++	__be32 *p;
++	char *c;
++
++	if (xdr_stream_decode_u32(xdr, &size) < 0)
++		return false;
++	if (size == 0 || size > NFS3_MAXNAMLEN)
++		return false;
++	p = xdr_inline_decode(xdr, size);
++	if (!p)
++		return false;
++
++	*len = size;
++	*name = (char *)p;
++	for (i = 0, c = *name; i < size; i++, c++) {
++		if (*c == '\0' || *c == '/')
++			return false;
+ 	}
+ 
+-	return p;
++	return true;
+ }
+ 
+-static __be32 *
+-decode_sattr3(__be32 *p, struct iattr *iap, struct user_namespace *userns)
++static bool
++svcxdr_decode_diropargs3(struct xdr_stream *xdr, struct svc_fh *fhp,
++			 char **name, unsigned int *len)
++{
++	return svcxdr_decode_nfs_fh3(xdr, fhp) &&
++		svcxdr_decode_filename3(xdr, name, len);
++}
++
++static bool
++svcxdr_decode_sattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
++		     struct iattr *iap)
+ {
+-	u32	tmp;
++	u32 set_it;
+ 
+ 	iap->ia_valid = 0;
+ 
+-	if (*p++) {
++	if (xdr_stream_decode_bool(xdr, &set_it) < 0)
++		return false;
++	if (set_it) {
++		u32 mode;
++
++		if (xdr_stream_decode_u32(xdr, &mode) < 0)
++			return false;
+ 		iap->ia_valid |= ATTR_MODE;
+-		iap->ia_mode = ntohl(*p++);
++		iap->ia_mode = mode;
+ 	}
+-	if (*p++) {
+-		iap->ia_uid = make_kuid(userns, ntohl(*p++));
++	if (xdr_stream_decode_bool(xdr, &set_it) < 0)
++		return false;
++	if (set_it) {
++		u32 uid;
++
++		if (xdr_stream_decode_u32(xdr, &uid) < 0)
++			return false;
++		iap->ia_uid = make_kuid(nfsd_user_namespace(rqstp), uid);
+ 		if (uid_valid(iap->ia_uid))
+ 			iap->ia_valid |= ATTR_UID;
+ 	}
+-	if (*p++) {
+-		iap->ia_gid = make_kgid(userns, ntohl(*p++));
++	if (xdr_stream_decode_bool(xdr, &set_it) < 0)
++		return false;
++	if (set_it) {
++		u32 gid;
++
++		if (xdr_stream_decode_u32(xdr, &gid) < 0)
++			return false;
++		iap->ia_gid = make_kgid(nfsd_user_namespace(rqstp), gid);
+ 		if (gid_valid(iap->ia_gid))
+ 			iap->ia_valid |= ATTR_GID;
+ 	}
+-	if (*p++) {
+-		u64	newsize;
++	if (xdr_stream_decode_bool(xdr, &set_it) < 0)
++		return false;
++	if (set_it) {
++		u64 newsize;
+ 
++		if (xdr_stream_decode_u64(xdr, &newsize) < 0)
++			return false;
+ 		iap->ia_valid |= ATTR_SIZE;
+-		p = xdr_decode_hyper(p, &newsize);
+-		iap->ia_size = min_t(u64, newsize, NFS_OFFSET_MAX);
++		iap->ia_size = newsize;
+ 	}
+-	if ((tmp = ntohl(*p++)) == 1) {	/* set to server time */
++	if (xdr_stream_decode_u32(xdr, &set_it) < 0)
++		return false;
++	switch (set_it) {
++	case DONT_CHANGE:
++		break;
++	case SET_TO_SERVER_TIME:
+ 		iap->ia_valid |= ATTR_ATIME;
+-	} else if (tmp == 2) {		/* set to client time */
++		break;
++	case SET_TO_CLIENT_TIME:
++		if (!svcxdr_decode_nfstime3(xdr, &iap->ia_atime))
++			return false;
+ 		iap->ia_valid |= ATTR_ATIME | ATTR_ATIME_SET;
+-		iap->ia_atime.tv_sec = ntohl(*p++);
+-		iap->ia_atime.tv_nsec = ntohl(*p++);
++		break;
++	default:
++		return false;
+ 	}
+-	if ((tmp = ntohl(*p++)) == 1) {	/* set to server time */
++	if (xdr_stream_decode_u32(xdr, &set_it) < 0)
++		return false;
++	switch (set_it) {
++	case DONT_CHANGE:
++		break;
++	case SET_TO_SERVER_TIME:
+ 		iap->ia_valid |= ATTR_MTIME;
+-	} else if (tmp == 2) {		/* set to client time */
++		break;
++	case SET_TO_CLIENT_TIME:
++		if (!svcxdr_decode_nfstime3(xdr, &iap->ia_mtime))
++			return false;
+ 		iap->ia_valid |= ATTR_MTIME | ATTR_MTIME_SET;
+-		iap->ia_mtime.tv_sec = ntohl(*p++);
+-		iap->ia_mtime.tv_nsec = ntohl(*p++);
++		break;
++	default:
++		return false;
+ 	}
+-	return p;
++
++	return true;
+ }
+ 
+-static __be32 *encode_fsid(__be32 *p, struct svc_fh *fhp)
++static bool
++svcxdr_decode_sattrguard3(struct xdr_stream *xdr, struct nfsd3_sattrargs *args)
+ {
+-	u64 f;
+-	switch(fsid_source(fhp)) {
+-	default:
+-	case FSIDSOURCE_DEV:
+-		p = xdr_encode_hyper(p, (u64)huge_encode_dev
+-				     (fhp->fh_dentry->d_sb->s_dev));
+-		break;
+-	case FSIDSOURCE_FSID:
+-		p = xdr_encode_hyper(p, (u64) fhp->fh_export->ex_fsid);
+-		break;
+-	case FSIDSOURCE_UUID:
+-		f = ((u64*)fhp->fh_export->ex_uuid)[0];
+-		f ^= ((u64*)fhp->fh_export->ex_uuid)[1];
+-		p = xdr_encode_hyper(p, f);
+-		break;
+-	}
+-	return p;
++	__be32 *p;
++	u32 check;
++
++	if (xdr_stream_decode_bool(xdr, &check) < 0)
++		return false;
++	if (check) {
++		p = xdr_inline_decode(xdr, XDR_UNIT * 2);
++		if (!p)
++			return false;
++		args->check_guard = 1;
++		args->guardtime = be32_to_cpup(p);
++	} else
++		args->check_guard = 0;
++
++	return true;
+ }
+ 
+-static __be32 *
+-encode_fattr3(struct svc_rqst *rqstp, __be32 *p, struct svc_fh *fhp,
+-	      struct kstat *stat)
++static bool
++svcxdr_decode_specdata3(struct xdr_stream *xdr, struct nfsd3_mknodargs *args)
+ {
+-	struct user_namespace *userns = nfsd_user_namespace(rqstp);
+-	*p++ = htonl(nfs3_ftypes[(stat->mode & S_IFMT) >> 12]);
+-	*p++ = htonl((u32) (stat->mode & S_IALLUGO));
+-	*p++ = htonl((u32) stat->nlink);
+-	*p++ = htonl((u32) from_kuid_munged(userns, stat->uid));
+-	*p++ = htonl((u32) from_kgid_munged(userns, stat->gid));
+-	if (S_ISLNK(stat->mode) && stat->size > NFS3_MAXPATHLEN) {
+-		p = xdr_encode_hyper(p, (u64) NFS3_MAXPATHLEN);
+-	} else {
+-		p = xdr_encode_hyper(p, (u64) stat->size);
+-	}
+-	p = xdr_encode_hyper(p, ((u64)stat->blocks) << 9);
+-	*p++ = htonl((u32) MAJOR(stat->rdev));
+-	*p++ = htonl((u32) MINOR(stat->rdev));
+-	p = encode_fsid(p, fhp);
+-	p = xdr_encode_hyper(p, stat->ino);
+-	p = encode_time3(p, &stat->atime);
+-	p = encode_time3(p, &stat->mtime);
+-	p = encode_time3(p, &stat->ctime);
++	__be32 *p;
+ 
+-	return p;
++	p = xdr_inline_decode(xdr, XDR_UNIT * 2);
++	if (!p)
++		return false;
++	args->major = be32_to_cpup(p++);
++	args->minor = be32_to_cpup(p);
++
++	return true;
+ }
+ 
+-static __be32 *
+-encode_saved_post_attr(struct svc_rqst *rqstp, __be32 *p, struct svc_fh *fhp)
++static bool
++svcxdr_decode_devicedata3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
++			  struct nfsd3_mknodargs *args)
+ {
+-	/* Attributes to follow */
+-	*p++ = xdr_one;
+-	return encode_fattr3(rqstp, p, fhp, &fhp->fh_post_attr);
++	return svcxdr_decode_sattr3(rqstp, xdr, &args->attrs) &&
++		svcxdr_decode_specdata3(xdr, args);
+ }
+ 
+-/*
+- * Encode post-operation attributes.
+- * The inode may be NULL if the call failed because of a stale file
+- * handle. In this case, no attributes are returned.
+- */
+-static __be32 *
+-encode_post_op_attr(struct svc_rqst *rqstp, __be32 *p, struct svc_fh *fhp)
++static bool
++svcxdr_encode_fattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
++		     const struct svc_fh *fhp, const struct kstat *stat)
+ {
+-	struct dentry *dentry = fhp->fh_dentry;
+-	if (dentry && d_really_is_positive(dentry)) {
+-	        __be32 err;
+-		struct kstat stat;
+-
+-		err = fh_getattr(fhp, &stat);
+-		if (!err) {
+-			*p++ = xdr_one;		/* attributes follow */
+-			lease_get_mtime(d_inode(dentry), &stat.mtime);
+-			return encode_fattr3(rqstp, p, fhp, &stat);
+-		}
++	struct user_namespace *userns = nfsd_user_namespace(rqstp);
++	__be32 *p;
++	u64 fsid;
++
++	p = xdr_reserve_space(xdr, XDR_UNIT * 21);
++	if (!p)
++		return false;
++
++	*p++ = cpu_to_be32(nfs3_ftypes[(stat->mode & S_IFMT) >> 12]);
++	*p++ = cpu_to_be32((u32)(stat->mode & S_IALLUGO));
++	*p++ = cpu_to_be32((u32)stat->nlink);
++	*p++ = cpu_to_be32((u32)from_kuid_munged(userns, stat->uid));
++	*p++ = cpu_to_be32((u32)from_kgid_munged(userns, stat->gid));
++	if (S_ISLNK(stat->mode) && stat->size > NFS3_MAXPATHLEN)
++		p = xdr_encode_hyper(p, (u64)NFS3_MAXPATHLEN);
++	else
++		p = xdr_encode_hyper(p, (u64)stat->size);
++
++	/* used */
++	p = xdr_encode_hyper(p, ((u64)stat->blocks) << 9);
++
++	/* rdev */
++	*p++ = cpu_to_be32((u32)MAJOR(stat->rdev));
++	*p++ = cpu_to_be32((u32)MINOR(stat->rdev));
++
++	switch(fsid_source(fhp)) {
++	case FSIDSOURCE_FSID:
++		fsid = (u64)fhp->fh_export->ex_fsid;
++		break;
++	case FSIDSOURCE_UUID:
++		fsid = ((u64 *)fhp->fh_export->ex_uuid)[0];
++		fsid ^= ((u64 *)fhp->fh_export->ex_uuid)[1];
++		break;
++	default:
++		fsid = (u64)huge_encode_dev(fhp->fh_dentry->d_sb->s_dev);
+ 	}
+-	*p++ = xdr_zero;
+-	return p;
++	p = xdr_encode_hyper(p, fsid);
++
++	/* fileid */
++	p = xdr_encode_hyper(p, stat->ino);
++
++	p = encode_nfstime3(p, &stat->atime);
++	p = encode_nfstime3(p, &stat->mtime);
++	encode_nfstime3(p, &stat->ctime);
++
++	return true;
+ }
+ 
+-/* Helper for NFSv3 ACLs */
+-__be32 *
+-nfs3svc_encode_post_op_attr(struct svc_rqst *rqstp, __be32 *p, struct svc_fh *fhp)
++static bool
++svcxdr_encode_wcc_attr(struct xdr_stream *xdr, const struct svc_fh *fhp)
+ {
+-	return encode_post_op_attr(rqstp, p, fhp);
++	__be32 *p;
++
++	p = xdr_reserve_space(xdr, XDR_UNIT * 6);
++	if (!p)
++		return false;
++	p = xdr_encode_hyper(p, (u64)fhp->fh_pre_size);
++	p = encode_nfstime3(p, &fhp->fh_pre_mtime);
++	encode_nfstime3(p, &fhp->fh_pre_ctime);
++
++	return true;
+ }
+ 
+-/*
+- * Enocde weak cache consistency data
+- */
+-static __be32 *
+-encode_wcc_data(struct svc_rqst *rqstp, __be32 *p, struct svc_fh *fhp)
++static bool
++svcxdr_encode_pre_op_attr(struct xdr_stream *xdr, const struct svc_fh *fhp)
+ {
+-	struct dentry	*dentry = fhp->fh_dentry;
+-
+-	if (dentry && d_really_is_positive(dentry) && fhp->fh_post_saved) {
+-		if (fhp->fh_pre_saved) {
+-			*p++ = xdr_one;
+-			p = xdr_encode_hyper(p, (u64) fhp->fh_pre_size);
+-			p = encode_time3(p, &fhp->fh_pre_mtime);
+-			p = encode_time3(p, &fhp->fh_pre_ctime);
+-		} else {
+-			*p++ = xdr_zero;
+-		}
+-		return encode_saved_post_attr(rqstp, p, fhp);
++	if (!fhp->fh_pre_saved) {
++		if (xdr_stream_encode_item_absent(xdr) < 0)
++			return false;
++		return true;
+ 	}
+-	/* no pre- or post-attrs */
+-	*p++ = xdr_zero;
+-	return encode_post_op_attr(rqstp, p, fhp);
++
++	if (xdr_stream_encode_item_present(xdr) < 0)
++		return false;
++	return svcxdr_encode_wcc_attr(xdr, fhp);
+ }
+ 
+-/*
+- * Fill in the pre_op attr for the wcc data
++/**
++ * svcxdr_encode_post_op_attr - Encode NFSv3 post-op attributes
++ * @rqstp: Context of a completed RPC transaction
++ * @xdr: XDR stream
++ * @fhp: File handle to encode
++ *
++ * Return values:
++ *   %false: Send buffer space was exhausted
++ *   %true: Success
+  */
+-void fill_pre_wcc(struct svc_fh *fhp)
++bool
++svcxdr_encode_post_op_attr(struct svc_rqst *rqstp, struct xdr_stream *xdr,
++			   const struct svc_fh *fhp)
+ {
+-	struct inode    *inode;
+-	struct kstat	stat;
+-	__be32 err;
++	struct dentry *dentry = fhp->fh_dentry;
++	struct kstat stat;
+ 
+-	if (fhp->fh_pre_saved)
+-		return;
++	/*
++	 * The inode may be NULL if the call failed because of a
++	 * stale file handle. In this case, no attributes are
++	 * returned.
++	 */
++	if (fhp->fh_no_wcc || !dentry || !d_really_is_positive(dentry))
++		goto no_post_op_attrs;
++	if (fh_getattr(fhp, &stat) != nfs_ok)
++		goto no_post_op_attrs;
+ 
+-	inode = d_inode(fhp->fh_dentry);
+-	err = fh_getattr(fhp, &stat);
+-	if (err) {
+-		/* Grab the times from inode anyway */
+-		stat.mtime = inode->i_mtime;
+-		stat.ctime = inode->i_ctime;
+-		stat.size  = inode->i_size;
+-	}
++	if (xdr_stream_encode_item_present(xdr) < 0)
++		return false;
++	lease_get_mtime(d_inode(dentry), &stat.mtime);
++	if (!svcxdr_encode_fattr3(rqstp, xdr, fhp, &stat))
++		return false;
+ 
+-	fhp->fh_pre_mtime = stat.mtime;
+-	fhp->fh_pre_ctime = stat.ctime;
+-	fhp->fh_pre_size  = stat.size;
+-	fhp->fh_pre_change = nfsd4_change_attribute(&stat, inode);
+-	fhp->fh_pre_saved = true;
++	return true;
++
++no_post_op_attrs:
++	return xdr_stream_encode_item_absent(xdr) > 0;
+ }
+ 
+ /*
+- * Fill in the post_op attr for the wcc data
++ * Encode weak cache consistency data
+  */
+-void fill_post_wcc(struct svc_fh *fhp)
++static bool
++svcxdr_encode_wcc_data(struct svc_rqst *rqstp, struct xdr_stream *xdr,
++		       const struct svc_fh *fhp)
+ {
+-	__be32 err;
+-
+-	if (fhp->fh_post_saved)
+-		printk("nfsd: inode locked twice during operation.\n");
+-
+-	err = fh_getattr(fhp, &fhp->fh_post_attr);
+-	fhp->fh_post_change = nfsd4_change_attribute(&fhp->fh_post_attr,
+-						     d_inode(fhp->fh_dentry));
+-	if (err) {
+-		fhp->fh_post_saved = false;
+-		/* Grab the ctime anyway - set_change_info might use it */
+-		fhp->fh_post_attr.ctime = d_inode(fhp->fh_dentry)->i_ctime;
+-	} else
+-		fhp->fh_post_saved = true;
++	struct dentry *dentry = fhp->fh_dentry;
++
++	if (!dentry || !d_really_is_positive(dentry) || !fhp->fh_post_saved)
++		goto neither;
++
++	/* before */
++	if (!svcxdr_encode_pre_op_attr(xdr, fhp))
++		return false;
++
++	/* after */
++	if (xdr_stream_encode_item_present(xdr) < 0)
++		return false;
++	if (!svcxdr_encode_fattr3(rqstp, xdr, fhp, &fhp->fh_post_attr))
++		return false;
++
++	return true;
++
++neither:
++	if (xdr_stream_encode_item_absent(xdr) < 0)
++		return false;
++	if (!svcxdr_encode_post_op_attr(rqstp, xdr, fhp))
++		return false;
++
++	return true;
+ }
+ 
+ /*
+  * XDR decode functions
+  */
+-int
+-nfs3svc_decode_voidarg(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	return 1;
+-}
+ 
+-int
+-nfs3svc_decode_fhandle(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_fhandleargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_fhandle *args = rqstp->rq_argp;
+ 
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_decode_nfs_fh3(xdr, &args->fh);
+ }
+ 
+-int
+-nfs3svc_decode_sattrargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_sattrargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_sattrargs *args = rqstp->rq_argp;
+ 
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	p = decode_sattr3(p, &args->attrs, nfsd_user_namespace(rqstp));
+-
+-	if ((args->check_guard = ntohl(*p++)) != 0) { 
+-		struct timespec64 time;
+-		p = decode_time3(p, &time);
+-		args->guardtime = time.tv_sec;
+-	}
+-
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_decode_nfs_fh3(xdr, &args->fh) &&
++		svcxdr_decode_sattr3(rqstp, xdr, &args->attrs) &&
++		svcxdr_decode_sattrguard3(xdr, args);
+ }
+ 
+-int
+-nfs3svc_decode_diropargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_diropargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_diropargs *args = rqstp->rq_argp;
+ 
+-	if (!(p = decode_fh(p, &args->fh))
+-	 || !(p = decode_filename(p, &args->name, &args->len)))
+-		return 0;
+-
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_decode_diropargs3(xdr, &args->fh, &args->name, &args->len);
+ }
+ 
+-int
+-nfs3svc_decode_accessargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_accessargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_accessargs *args = rqstp->rq_argp;
+ 
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	args->access = ntohl(*p++);
++	if (!svcxdr_decode_nfs_fh3(xdr, &args->fh))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->access) < 0)
++		return false;
+ 
+-	return xdr_argsize_check(rqstp, p);
++	return true;
+ }
+ 
+-int
+-nfs3svc_decode_readargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_readargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_readargs *args = rqstp->rq_argp;
+-	unsigned int len;
+-	int v;
+-	u32 max_blocksize = svc_max_payload(rqstp);
+ 
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	p = xdr_decode_hyper(p, &args->offset);
++	if (!svcxdr_decode_nfs_fh3(xdr, &args->fh))
++		return false;
++	if (xdr_stream_decode_u64(xdr, &args->offset) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->count) < 0)
++		return false;
+ 
+-	args->count = ntohl(*p++);
+-	len = min(args->count, max_blocksize);
+-
+-	/* set up the kvec */
+-	v=0;
+-	while (len > 0) {
+-		struct page *p = *(rqstp->rq_next_page++);
+-
+-		rqstp->rq_vec[v].iov_base = page_address(p);
+-		rqstp->rq_vec[v].iov_len = min_t(unsigned int, len, PAGE_SIZE);
+-		len -= rqstp->rq_vec[v].iov_len;
+-		v++;
+-	}
+-	args->vlen = v;
+-	return xdr_argsize_check(rqstp, p);
++	return true;
+ }
+ 
+-int
+-nfs3svc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_writeargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_writeargs *args = rqstp->rq_argp;
+-	unsigned int len, hdr, dlen;
+ 	u32 max_blocksize = svc_max_payload(rqstp);
+-	struct kvec *head = rqstp->rq_arg.head;
+-	struct kvec *tail = rqstp->rq_arg.tail;
+ 
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	p = xdr_decode_hyper(p, &args->offset);
+-
+-	args->count = ntohl(*p++);
+-	args->stable = ntohl(*p++);
+-	len = args->len = ntohl(*p++);
+-	if ((void *)p > head->iov_base + head->iov_len)
+-		return 0;
+-	/*
+-	 * The count must equal the amount of data passed.
+-	 */
+-	if (args->count != args->len)
+-		return 0;
++	if (!svcxdr_decode_nfs_fh3(xdr, &args->fh))
++		return false;
++	if (xdr_stream_decode_u64(xdr, &args->offset) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->count) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->stable) < 0)
++		return false;
+ 
+-	/*
+-	 * Check to make sure that we got the right number of
+-	 * bytes.
+-	 */
+-	hdr = (void*)p - head->iov_base;
+-	dlen = head->iov_len + rqstp->rq_arg.page_len + tail->iov_len - hdr;
+-	/*
+-	 * Round the length of the data which was specified up to
+-	 * the next multiple of XDR units and then compare that
+-	 * against the length which was actually received.
+-	 * Note that when RPCSEC/GSS (for example) is used, the
+-	 * data buffer can be padded so dlen might be larger
+-	 * than required.  It must never be smaller.
+-	 */
+-	if (dlen < XDR_QUADLEN(len)*4)
+-		return 0;
++	/* opaque data */
++	if (xdr_stream_decode_u32(xdr, &args->len) < 0)
++		return false;
+ 
++	/* request sanity */
++	if (args->count != args->len)
++		return false;
+ 	if (args->count > max_blocksize) {
+ 		args->count = max_blocksize;
+-		len = args->len = max_blocksize;
++		args->len = max_blocksize;
+ 	}
+ 
+-	args->first.iov_base = (void *)p;
+-	args->first.iov_len = head->iov_len - hdr;
+-	return 1;
++	return xdr_stream_subsegment(xdr, &args->payload, args->count);
+ }
+ 
+-int
+-nfs3svc_decode_createargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_createargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_createargs *args = rqstp->rq_argp;
+ 
+-	if (!(p = decode_fh(p, &args->fh))
+-	 || !(p = decode_filename(p, &args->name, &args->len)))
+-		return 0;
+-
+-	switch (args->createmode = ntohl(*p++)) {
++	if (!svcxdr_decode_diropargs3(xdr, &args->fh, &args->name, &args->len))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->createmode) < 0)
++		return false;
++	switch (args->createmode) {
+ 	case NFS3_CREATE_UNCHECKED:
+ 	case NFS3_CREATE_GUARDED:
+-		p = decode_sattr3(p, &args->attrs, nfsd_user_namespace(rqstp));
+-		break;
++		return svcxdr_decode_sattr3(rqstp, xdr, &args->attrs);
+ 	case NFS3_CREATE_EXCLUSIVE:
+-		args->verf = p;
+-		p += 2;
++		args->verf = xdr_inline_decode(xdr, NFS3_CREATEVERFSIZE);
++		if (!args->verf)
++			return false;
+ 		break;
+ 	default:
+-		return 0;
++		return false;
+ 	}
+-
+-	return xdr_argsize_check(rqstp, p);
++	return true;
+ }
+ 
+-int
+-nfs3svc_decode_mkdirargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_mkdirargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_createargs *args = rqstp->rq_argp;
+ 
+-	if (!(p = decode_fh(p, &args->fh)) ||
+-	    !(p = decode_filename(p, &args->name, &args->len)))
+-		return 0;
+-	p = decode_sattr3(p, &args->attrs, nfsd_user_namespace(rqstp));
+-
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_decode_diropargs3(xdr, &args->fh,
++					&args->name, &args->len) &&
++		svcxdr_decode_sattr3(rqstp, xdr, &args->attrs);
+ }
+ 
+-int
+-nfs3svc_decode_symlinkargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_symlinkargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_symlinkargs *args = rqstp->rq_argp;
+-	char *base = (char *)p;
+-	size_t dlen;
+-
+-	if (!(p = decode_fh(p, &args->ffh)) ||
+-	    !(p = decode_filename(p, &args->fname, &args->flen)))
+-		return 0;
+-	p = decode_sattr3(p, &args->attrs, nfsd_user_namespace(rqstp));
+-
+-	args->tlen = ntohl(*p++);
+-
+-	args->first.iov_base = p;
+-	args->first.iov_len = rqstp->rq_arg.head[0].iov_len;
+-	args->first.iov_len -= (char *)p - base;
++	struct kvec *head = rqstp->rq_arg.head;
+ 
+-	dlen = args->first.iov_len + rqstp->rq_arg.page_len +
+-	       rqstp->rq_arg.tail[0].iov_len;
+-	if (dlen < XDR_QUADLEN(args->tlen) << 2)
+-		return 0;
+-	return 1;
++	if (!svcxdr_decode_diropargs3(xdr, &args->ffh, &args->fname, &args->flen))
++		return false;
++	if (!svcxdr_decode_sattr3(rqstp, xdr, &args->attrs))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->tlen) < 0)
++		return false;
++
++	/* symlink_data */
++	args->first.iov_len = head->iov_len - xdr_stream_pos(xdr);
++	args->first.iov_base = xdr_inline_decode(xdr, args->tlen);
++	return args->first.iov_base != NULL;
+ }
+ 
+-int
+-nfs3svc_decode_mknodargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_mknodargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_mknodargs *args = rqstp->rq_argp;
+ 
+-	if (!(p = decode_fh(p, &args->fh))
+-	 || !(p = decode_filename(p, &args->name, &args->len)))
+-		return 0;
+-
+-	args->ftype = ntohl(*p++);
+-
+-	if (args->ftype == NF3BLK  || args->ftype == NF3CHR
+-	 || args->ftype == NF3SOCK || args->ftype == NF3FIFO)
+-		p = decode_sattr3(p, &args->attrs, nfsd_user_namespace(rqstp));
+-
+-	if (args->ftype == NF3BLK || args->ftype == NF3CHR) {
+-		args->major = ntohl(*p++);
+-		args->minor = ntohl(*p++);
++	if (!svcxdr_decode_diropargs3(xdr, &args->fh, &args->name, &args->len))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->ftype) < 0)
++		return false;
++	switch (args->ftype) {
++	case NF3CHR:
++	case NF3BLK:
++		return svcxdr_decode_devicedata3(rqstp, xdr, args);
++	case NF3SOCK:
++	case NF3FIFO:
++		return svcxdr_decode_sattr3(rqstp, xdr, &args->attrs);
++	case NF3REG:
++	case NF3DIR:
++	case NF3LNK:
++		/* Valid XDR but illegal file types */
++		break;
++	default:
++		return false;
+ 	}
+ 
+-	return xdr_argsize_check(rqstp, p);
++	return true;
+ }
+ 
+-int
+-nfs3svc_decode_renameargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_renameargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_renameargs *args = rqstp->rq_argp;
+ 
+-	if (!(p = decode_fh(p, &args->ffh))
+-	 || !(p = decode_filename(p, &args->fname, &args->flen))
+-	 || !(p = decode_fh(p, &args->tfh))
+-	 || !(p = decode_filename(p, &args->tname, &args->tlen)))
+-		return 0;
+-
+-	return xdr_argsize_check(rqstp, p);
+-}
+-
+-int
+-nfs3svc_decode_readlinkargs(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	struct nfsd3_readlinkargs *args = rqstp->rq_argp;
+-
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	args->buffer = page_address(*(rqstp->rq_next_page++));
+-
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_decode_diropargs3(xdr, &args->ffh,
++					&args->fname, &args->flen) &&
++		svcxdr_decode_diropargs3(xdr, &args->tfh,
++					 &args->tname, &args->tlen);
+ }
+ 
+-int
+-nfs3svc_decode_linkargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_linkargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_linkargs *args = rqstp->rq_argp;
+ 
+-	if (!(p = decode_fh(p, &args->ffh))
+-	 || !(p = decode_fh(p, &args->tfh))
+-	 || !(p = decode_filename(p, &args->tname, &args->tlen)))
+-		return 0;
+-
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_decode_nfs_fh3(xdr, &args->ffh) &&
++		svcxdr_decode_diropargs3(xdr, &args->tfh,
++					 &args->tname, &args->tlen);
+ }
+ 
+-int
+-nfs3svc_decode_readdirargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_readdirargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_readdirargs *args = rqstp->rq_argp;
+-	int len;
+-	u32 max_blocksize = svc_max_payload(rqstp);
+ 
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	p = xdr_decode_hyper(p, &args->cookie);
+-	args->verf   = p; p += 2;
+-	args->dircount = ~0;
+-	args->count  = ntohl(*p++);
+-	len = args->count  = min_t(u32, args->count, max_blocksize);
+-
+-	while (len > 0) {
+-		struct page *p = *(rqstp->rq_next_page++);
+-		if (!args->buffer)
+-			args->buffer = page_address(p);
+-		len -= PAGE_SIZE;
+-	}
+-
+-	return xdr_argsize_check(rqstp, p);
++	if (!svcxdr_decode_nfs_fh3(xdr, &args->fh))
++		return false;
++	if (xdr_stream_decode_u64(xdr, &args->cookie) < 0)
++		return false;
++	args->verf = xdr_inline_decode(xdr, NFS3_COOKIEVERFSIZE);
++	if (!args->verf)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->count) < 0)
++		return false;
++
++	return true;
+ }
+ 
+-int
+-nfs3svc_decode_readdirplusargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_readdirplusargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_readdirargs *args = rqstp->rq_argp;
+-	int len;
+-	u32 max_blocksize = svc_max_payload(rqstp);
+-
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	p = xdr_decode_hyper(p, &args->cookie);
+-	args->verf     = p; p += 2;
+-	args->dircount = ntohl(*p++);
+-	args->count    = ntohl(*p++);
+-
+-	len = args->count = min(args->count, max_blocksize);
+-	while (len > 0) {
+-		struct page *p = *(rqstp->rq_next_page++);
+-		if (!args->buffer)
+-			args->buffer = page_address(p);
+-		len -= PAGE_SIZE;
+-	}
+-
+-	return xdr_argsize_check(rqstp, p);
++	u32 dircount;
++
++	if (!svcxdr_decode_nfs_fh3(xdr, &args->fh))
++		return false;
++	if (xdr_stream_decode_u64(xdr, &args->cookie) < 0)
++		return false;
++	args->verf = xdr_inline_decode(xdr, NFS3_COOKIEVERFSIZE);
++	if (!args->verf)
++		return false;
++	/* dircount is ignored */
++	if (xdr_stream_decode_u32(xdr, &dircount) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->count) < 0)
++		return false;
++
++	return true;
+ }
+ 
+-int
+-nfs3svc_decode_commitargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_decode_commitargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_commitargs *args = rqstp->rq_argp;
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	p = xdr_decode_hyper(p, &args->offset);
+-	args->count = ntohl(*p++);
+ 
+-	return xdr_argsize_check(rqstp, p);
++	if (!svcxdr_decode_nfs_fh3(xdr, &args->fh))
++		return false;
++	if (xdr_stream_decode_u64(xdr, &args->offset) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->count) < 0)
++		return false;
++
++	return true;
+ }
+ 
+ /*
+  * XDR encode functions
+  */
+ 
+-int
+-nfs3svc_encode_voidres(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	return xdr_ressize_check(rqstp, p);
+-}
+-
+ /* GETATTR */
+-int
+-nfs3svc_encode_attrstat(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_getattrres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_attrstat *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	if (resp->status == 0) {
+-		lease_get_mtime(d_inode(resp->fh.fh_dentry),
+-				&resp->stat.mtime);
+-		p = encode_fattr3(rqstp, p, &resp->fh, &resp->stat);
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		lease_get_mtime(d_inode(resp->fh.fh_dentry), &resp->stat.mtime);
++		if (!svcxdr_encode_fattr3(rqstp, xdr, &resp->fh, &resp->stat))
++			return false;
++		break;
+ 	}
+-	return xdr_ressize_check(rqstp, p);
++
++	return true;
+ }
+ 
+ /* SETATTR, REMOVE, RMDIR */
+-int
+-nfs3svc_encode_wccstat(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_wccstat(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_attrstat *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	p = encode_wcc_data(rqstp, p, &resp->fh);
+-	return xdr_ressize_check(rqstp, p);
++	return svcxdr_encode_nfsstat3(xdr, resp->status) &&
++		svcxdr_encode_wcc_data(rqstp, xdr, &resp->fh);
+ }
+ 
+ /* LOOKUP */
+-int
+-nfs3svc_encode_diropres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_lookupres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_diropres *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	if (resp->status == 0) {
+-		p = encode_fh(p, &resp->fh);
+-		p = encode_post_op_attr(rqstp, p, &resp->fh);
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_nfs_fh3(xdr, &resp->fh))
++			return false;
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
++			return false;
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->dirfh))
++			return false;
++		break;
++	default:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->dirfh))
++			return false;
+ 	}
+-	p = encode_post_op_attr(rqstp, p, &resp->dirfh);
+-	return xdr_ressize_check(rqstp, p);
++
++	return true;
+ }
+ 
+ /* ACCESS */
+-int
+-nfs3svc_encode_accessres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_accessres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_accessres *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	p = encode_post_op_attr(rqstp, p, &resp->fh);
+-	if (resp->status == 0)
+-		*p++ = htonl(resp->access);
+-	return xdr_ressize_check(rqstp, p);
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
++			return false;
++		if (xdr_stream_encode_u32(xdr, resp->access) < 0)
++			return false;
++		break;
++	default:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
++			return false;
++	}
++
++	return true;
+ }
+ 
+ /* READLINK */
+-int
+-nfs3svc_encode_readlinkres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_readlinkres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_readlinkres *resp = rqstp->rq_resp;
++	struct kvec *head = rqstp->rq_res.head;
++
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
++			return false;
++		if (xdr_stream_encode_u32(xdr, resp->len) < 0)
++			return false;
++		xdr_write_pages(xdr, resp->pages, 0, resp->len);
++		if (svc_encode_result_payload(rqstp, head->iov_len, resp->len) < 0)
++			return false;
++		break;
++	default:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
++			return false;
++	}
+ 
+-	*p++ = resp->status;
+-	p = encode_post_op_attr(rqstp, p, &resp->fh);
+-	if (resp->status == 0) {
+-		*p++ = htonl(resp->len);
+-		xdr_ressize_check(rqstp, p);
+-		rqstp->rq_res.page_len = resp->len;
+-		if (resp->len & 3) {
+-			/* need to pad the tail */
+-			rqstp->rq_res.tail[0].iov_base = p;
+-			*p = 0;
+-			rqstp->rq_res.tail[0].iov_len = 4 - (resp->len&3);
+-		}
+-		return 1;
+-	} else
+-		return xdr_ressize_check(rqstp, p);
++	return true;
+ }
+ 
+ /* READ */
+-int
+-nfs3svc_encode_readres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_readres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_readres *resp = rqstp->rq_resp;
++	struct kvec *head = rqstp->rq_res.head;
++
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
++			return false;
++		if (xdr_stream_encode_u32(xdr, resp->count) < 0)
++			return false;
++		if (xdr_stream_encode_bool(xdr, resp->eof) < 0)
++			return false;
++		if (xdr_stream_encode_u32(xdr, resp->count) < 0)
++			return false;
++		xdr_write_pages(xdr, resp->pages, rqstp->rq_res.page_base,
++				resp->count);
++		if (svc_encode_result_payload(rqstp, head->iov_len, resp->count) < 0)
++			return false;
++		break;
++	default:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
++			return false;
++	}
+ 
+-	*p++ = resp->status;
+-	p = encode_post_op_attr(rqstp, p, &resp->fh);
+-	if (resp->status == 0) {
+-		*p++ = htonl(resp->count);
+-		*p++ = htonl(resp->eof);
+-		*p++ = htonl(resp->count);	/* xdr opaque count */
+-		xdr_ressize_check(rqstp, p);
+-		/* now update rqstp->rq_res to reflect data as well */
+-		rqstp->rq_res.page_len = resp->count;
+-		if (resp->count & 3) {
+-			/* need to pad the tail */
+-			rqstp->rq_res.tail[0].iov_base = p;
+-			*p = 0;
+-			rqstp->rq_res.tail[0].iov_len = 4 - (resp->count & 3);
+-		}
+-		return 1;
+-	} else
+-		return xdr_ressize_check(rqstp, p);
++	return true;
+ }
+ 
+ /* WRITE */
+-int
+-nfs3svc_encode_writeres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_writeres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_writeres *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	p = encode_wcc_data(rqstp, p, &resp->fh);
+-	if (resp->status == 0) {
+-		*p++ = htonl(resp->count);
+-		*p++ = htonl(resp->committed);
+-		*p++ = resp->verf[0];
+-		*p++ = resp->verf[1];
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_wcc_data(rqstp, xdr, &resp->fh))
++			return false;
++		if (xdr_stream_encode_u32(xdr, resp->count) < 0)
++			return false;
++		if (xdr_stream_encode_u32(xdr, resp->committed) < 0)
++			return false;
++		if (!svcxdr_encode_writeverf3(xdr, resp->verf))
++			return false;
++		break;
++	default:
++		if (!svcxdr_encode_wcc_data(rqstp, xdr, &resp->fh))
++			return false;
+ 	}
+-	return xdr_ressize_check(rqstp, p);
++
++	return true;
+ }
+ 
+ /* CREATE, MKDIR, SYMLINK, MKNOD */
+-int
+-nfs3svc_encode_createres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_createres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_diropres *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	if (resp->status == 0) {
+-		*p++ = xdr_one;
+-		p = encode_fh(p, &resp->fh);
+-		p = encode_post_op_attr(rqstp, p, &resp->fh);
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_post_op_fh3(xdr, &resp->fh))
++			return false;
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
++			return false;
++		if (!svcxdr_encode_wcc_data(rqstp, xdr, &resp->dirfh))
++			return false;
++		break;
++	default:
++		if (!svcxdr_encode_wcc_data(rqstp, xdr, &resp->dirfh))
++			return false;
+ 	}
+-	p = encode_wcc_data(rqstp, p, &resp->dirfh);
+-	return xdr_ressize_check(rqstp, p);
++
++	return true;
+ }
+ 
+ /* RENAME */
+-int
+-nfs3svc_encode_renameres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_renameres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_renameres *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	p = encode_wcc_data(rqstp, p, &resp->ffh);
+-	p = encode_wcc_data(rqstp, p, &resp->tfh);
+-	return xdr_ressize_check(rqstp, p);
++	return svcxdr_encode_nfsstat3(xdr, resp->status) &&
++		svcxdr_encode_wcc_data(rqstp, xdr, &resp->ffh) &&
++		svcxdr_encode_wcc_data(rqstp, xdr, &resp->tfh);
+ }
+ 
+ /* LINK */
+-int
+-nfs3svc_encode_linkres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_linkres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_linkres *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	p = encode_post_op_attr(rqstp, p, &resp->fh);
+-	p = encode_wcc_data(rqstp, p, &resp->tfh);
+-	return xdr_ressize_check(rqstp, p);
++	return svcxdr_encode_nfsstat3(xdr, resp->status) &&
++		svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh) &&
++		svcxdr_encode_wcc_data(rqstp, xdr, &resp->tfh);
+ }
+ 
+ /* READDIR */
+-int
+-nfs3svc_encode_readdirres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_readdirres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_readdirres *resp = rqstp->rq_resp;
++	struct xdr_buf *dirlist = &resp->dirlist;
++
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
++			return false;
++		if (!svcxdr_encode_cookieverf3(xdr, resp->verf))
++			return false;
++		xdr_write_pages(xdr, dirlist->pages, 0, dirlist->len);
++		/* no more entries */
++		if (xdr_stream_encode_item_absent(xdr) < 0)
++			return false;
++		if (xdr_stream_encode_bool(xdr, resp->common.err == nfserr_eof) < 0)
++			return false;
++		break;
++	default:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
++			return false;
++	}
+ 
+-	*p++ = resp->status;
+-	p = encode_post_op_attr(rqstp, p, &resp->fh);
+-
+-	if (resp->status == 0) {
+-		/* stupid readdir cookie */
+-		memcpy(p, resp->verf, 8); p += 2;
+-		xdr_ressize_check(rqstp, p);
+-		if (rqstp->rq_res.head[0].iov_len + (2<<2) > PAGE_SIZE)
+-			return 1; /*No room for trailer */
+-		rqstp->rq_res.page_len = (resp->count) << 2;
+-
+-		/* add the 'tail' to the end of the 'head' page - page 0. */
+-		rqstp->rq_res.tail[0].iov_base = p;
+-		*p++ = 0;		/* no more entries */
+-		*p++ = htonl(resp->common.err == nfserr_eof);
+-		rqstp->rq_res.tail[0].iov_len = 2<<2;
+-		return 1;
+-	} else
+-		return xdr_ressize_check(rqstp, p);
+-}
+-
+-static __be32 *
+-encode_entry_baggage(struct nfsd3_readdirres *cd, __be32 *p, const char *name,
+-	     int namlen, u64 ino)
+-{
+-	*p++ = xdr_one;				 /* mark entry present */
+-	p    = xdr_encode_hyper(p, ino);	 /* file id */
+-	p    = xdr_encode_array(p, name, namlen);/* name length & name */
+-
+-	cd->offset = p;				/* remember pointer */
+-	p = xdr_encode_hyper(p, NFS_OFFSET_MAX);/* offset of next entry */
+-
+-	return p;
++	return true;
+ }
+ 
+ static __be32
+@@ -887,267 +1014,323 @@ compose_entry_fh(struct nfsd3_readdirres *cd, struct svc_fh *fhp,
+ 	return rv;
+ }
+ 
+-static __be32 *encode_entryplus_baggage(struct nfsd3_readdirres *cd, __be32 *p, const char *name, int namlen, u64 ino)
+-{
+-	struct svc_fh	*fh = &cd->scratch;
+-	__be32 err;
+-
+-	fh_init(fh, NFS3_FHSIZE);
+-	err = compose_entry_fh(cd, fh, name, namlen, ino);
+-	if (err) {
+-		*p++ = 0;
+-		*p++ = 0;
+-		goto out;
+-	}
+-	p = encode_post_op_attr(cd->rqstp, p, fh);
+-	*p++ = xdr_one;			/* yes, a file handle follows */
+-	p = encode_fh(p, fh);
+-out:
+-	fh_put(fh);
+-	return p;
+-}
+-
+-/*
+- * Encode a directory entry. This one works for both normal readdir
+- * and readdirplus.
+- * The normal readdir reply requires 2 (fileid) + 1 (stringlen)
+- * + string + 2 (cookie) + 1 (next) words, i.e. 6 + strlen.
+- * 
+- * The readdirplus baggage is 1+21 words for post_op_attr, plus the
+- * file handle.
++/**
++ * nfs3svc_encode_cookie3 - Encode a directory offset cookie
++ * @resp: readdir result context
++ * @offset: offset cookie to encode
++ *
++ * The buffer space for the offset cookie has already been reserved
++ * by svcxdr_encode_entry3_common().
+  */
+-
+-#define NFS3_ENTRY_BAGGAGE	(2 + 1 + 2 + 1)
+-#define NFS3_ENTRYPLUS_BAGGAGE	(1 + 21 + 1 + (NFS3_FHSIZE >> 2))
+-static int
+-encode_entry(struct readdir_cd *ccd, const char *name, int namlen,
+-	     loff_t offset, u64 ino, unsigned int d_type, int plus)
++void nfs3svc_encode_cookie3(struct nfsd3_readdirres *resp, u64 offset)
+ {
+-	struct nfsd3_readdirres *cd = container_of(ccd, struct nfsd3_readdirres,
+-		       					common);
+-	__be32		*p = cd->buffer;
+-	caddr_t		curr_page_addr = NULL;
+-	struct page **	page;
+-	int		slen;		/* string (name) length */
+-	int		elen;		/* estimated entry length in words */
+-	int		num_entry_words = 0;	/* actual number of words */
+-
+-	if (cd->offset) {
+-		u64 offset64 = offset;
+-
+-		if (unlikely(cd->offset1)) {
+-			/* we ended up with offset on a page boundary */
+-			*cd->offset = htonl(offset64 >> 32);
+-			*cd->offset1 = htonl(offset64 & 0xffffffff);
+-			cd->offset1 = NULL;
+-		} else {
+-			xdr_encode_hyper(cd->offset, offset64);
+-		}
+-		cd->offset = NULL;
+-	}
+-
+-	/*
+-	dprintk("encode_entry(%.*s @%ld%s)\n",
+-		namlen, name, (long) offset, plus? " plus" : "");
+-	 */
+-
+-	/* truncate filename if too long */
+-	namlen = min(namlen, NFS3_MAXNAMLEN);
++	__be64 cookie = cpu_to_be64(offset);
+ 
+-	slen = XDR_QUADLEN(namlen);
+-	elen = slen + NFS3_ENTRY_BAGGAGE
+-		+ (plus? NFS3_ENTRYPLUS_BAGGAGE : 0);
++	if (!resp->cookie_offset)
++		return;
++	write_bytes_to_xdr_buf(&resp->dirlist, resp->cookie_offset, &cookie,
++			       sizeof(cookie));
++	resp->cookie_offset = 0;
++}
+ 
+-	if (cd->buflen < elen) {
+-		cd->common.err = nfserr_toosmall;
+-		return -EINVAL;
+-	}
++static bool
++svcxdr_encode_entry3_common(struct nfsd3_readdirres *resp, const char *name,
++			    int namlen, loff_t offset, u64 ino)
++{
++	struct xdr_buf *dirlist = &resp->dirlist;
++	struct xdr_stream *xdr = &resp->xdr;
++
++	if (xdr_stream_encode_item_present(xdr) < 0)
++		return false;
++	/* fileid */
++	if (xdr_stream_encode_u64(xdr, ino) < 0)
++		return false;
++	/* name */
++	if (xdr_stream_encode_opaque(xdr, name, min(namlen, NFS3_MAXNAMLEN)) < 0)
++		return false;
++	/* cookie */
++	resp->cookie_offset = dirlist->len;
++	if (xdr_stream_encode_u64(xdr, OFFSET_MAX) < 0)
++		return false;
++
++	return true;
++}
+ 
+-	/* determine which page in rq_respages[] we are currently filling */
+-	for (page = cd->rqstp->rq_respages + 1;
+-				page < cd->rqstp->rq_next_page; page++) {
+-		curr_page_addr = page_address(*page);
++/**
++ * nfs3svc_encode_entry3 - encode one NFSv3 READDIR entry
++ * @data: directory context
++ * @name: name of the object to be encoded
++ * @namlen: length of that name, in bytes
++ * @offset: the offset of the previous entry
++ * @ino: the fileid of this entry
++ * @d_type: unused
++ *
++ * Return values:
++ *   %0: Entry was successfully encoded.
++ *   %-EINVAL: An encoding problem occured, secondary status code in resp->common.err
++ *
++ * On exit, the following fields are updated:
++ *   - resp->xdr
++ *   - resp->common.err
++ *   - resp->cookie_offset
++ */
++int nfs3svc_encode_entry3(void *data, const char *name, int namlen,
++			  loff_t offset, u64 ino, unsigned int d_type)
++{
++	struct readdir_cd *ccd = data;
++	struct nfsd3_readdirres *resp = container_of(ccd,
++						     struct nfsd3_readdirres,
++						     common);
++	unsigned int starting_length = resp->dirlist.len;
+ 
+-		if (((caddr_t)cd->buffer >= curr_page_addr) &&
+-		    ((caddr_t)cd->buffer <  curr_page_addr + PAGE_SIZE))
+-			break;
+-	}
++	/* The offset cookie for the previous entry */
++	nfs3svc_encode_cookie3(resp, offset);
+ 
+-	if ((caddr_t)(cd->buffer + elen) < (curr_page_addr + PAGE_SIZE)) {
+-		/* encode entry in current page */
++	if (!svcxdr_encode_entry3_common(resp, name, namlen, offset, ino))
++		goto out_toosmall;
+ 
+-		p = encode_entry_baggage(cd, p, name, namlen, ino);
++	xdr_commit_encode(&resp->xdr);
++	resp->common.err = nfs_ok;
++	return 0;
+ 
+-		if (plus)
+-			p = encode_entryplus_baggage(cd, p, name, namlen, ino);
+-		num_entry_words = p - cd->buffer;
+-	} else if (*(page+1) != NULL) {
+-		/* temporarily encode entry into next page, then move back to
+-		 * current and next page in rq_respages[] */
+-		__be32 *p1, *tmp;
+-		int len1, len2;
++out_toosmall:
++	resp->cookie_offset = 0;
++	resp->common.err = nfserr_toosmall;
++	resp->dirlist.len = starting_length;
++	return -EINVAL;
++}
+ 
+-		/* grab next page for temporary storage of entry */
+-		p1 = tmp = page_address(*(page+1));
++static bool
++svcxdr_encode_entry3_plus(struct nfsd3_readdirres *resp, const char *name,
++			  int namlen, u64 ino)
++{
++	struct xdr_stream *xdr = &resp->xdr;
++	struct svc_fh *fhp = &resp->scratch;
++	bool result;
+ 
+-		p1 = encode_entry_baggage(cd, p1, name, namlen, ino);
++	result = false;
++	fh_init(fhp, NFS3_FHSIZE);
++	if (compose_entry_fh(resp, fhp, name, namlen, ino) != nfs_ok)
++		goto out_noattrs;
+ 
+-		if (plus)
+-			p1 = encode_entryplus_baggage(cd, p1, name, namlen, ino);
++	if (!svcxdr_encode_post_op_attr(resp->rqstp, xdr, fhp))
++		goto out;
++	if (!svcxdr_encode_post_op_fh3(xdr, fhp))
++		goto out;
++	result = true;
+ 
+-		/* determine entry word length and lengths to go in pages */
+-		num_entry_words = p1 - tmp;
+-		len1 = curr_page_addr + PAGE_SIZE - (caddr_t)cd->buffer;
+-		if ((num_entry_words << 2) < len1) {
+-			/* the actual number of words in the entry is less
+-			 * than elen and can still fit in the current page
+-			 */
+-			memmove(p, tmp, num_entry_words << 2);
+-			p += num_entry_words;
+-
+-			/* update offset */
+-			cd->offset = cd->buffer + (cd->offset - tmp);
+-		} else {
+-			unsigned int offset_r = (cd->offset - tmp) << 2;
+-
+-			/* update pointer to offset location.
+-			 * This is a 64bit quantity, so we need to
+-			 * deal with 3 cases:
+-			 *  -	entirely in first page
+-			 *  -	entirely in second page
+-			 *  -	4 bytes in each page
+-			 */
+-			if (offset_r + 8 <= len1) {
+-				cd->offset = p + (cd->offset - tmp);
+-			} else if (offset_r >= len1) {
+-				cd->offset -= len1 >> 2;
+-			} else {
+-				/* sitting on the fence */
+-				BUG_ON(offset_r != len1 - 4);
+-				cd->offset = p + (cd->offset - tmp);
+-				cd->offset1 = tmp;
+-			}
+-
+-			len2 = (num_entry_words << 2) - len1;
+-
+-			/* move from temp page to current and next pages */
+-			memmove(p, tmp, len1);
+-			memmove(tmp, (caddr_t)tmp+len1, len2);
+-
+-			p = tmp + (len2 >> 2);
+-		}
+-	}
+-	else {
+-		cd->common.err = nfserr_toosmall;
+-		return -EINVAL;
+-	}
++out:
++	fh_put(fhp);
++	return result;
++
++out_noattrs:
++	if (xdr_stream_encode_item_absent(xdr) < 0)
++		return false;
++	if (xdr_stream_encode_item_absent(xdr) < 0)
++		return false;
++	return true;
++}
+ 
+-	cd->buflen -= num_entry_words;
+-	cd->buffer = p;
+-	cd->common.err = nfs_ok;
++/**
++ * nfs3svc_encode_entryplus3 - encode one NFSv3 READDIRPLUS entry
++ * @data: directory context
++ * @name: name of the object to be encoded
++ * @namlen: length of that name, in bytes
++ * @offset: the offset of the previous entry
++ * @ino: the fileid of this entry
++ * @d_type: unused
++ *
++ * Return values:
++ *   %0: Entry was successfully encoded.
++ *   %-EINVAL: An encoding problem occured, secondary status code in resp->common.err
++ *
++ * On exit, the following fields are updated:
++ *   - resp->xdr
++ *   - resp->common.err
++ *   - resp->cookie_offset
++ */
++int nfs3svc_encode_entryplus3(void *data, const char *name, int namlen,
++			      loff_t offset, u64 ino, unsigned int d_type)
++{
++	struct readdir_cd *ccd = data;
++	struct nfsd3_readdirres *resp = container_of(ccd,
++						     struct nfsd3_readdirres,
++						     common);
++	unsigned int starting_length = resp->dirlist.len;
++
++	/* The offset cookie for the previous entry */
++	nfs3svc_encode_cookie3(resp, offset);
++
++	if (!svcxdr_encode_entry3_common(resp, name, namlen, offset, ino))
++		goto out_toosmall;
++	if (!svcxdr_encode_entry3_plus(resp, name, namlen, ino))
++		goto out_toosmall;
++
++	xdr_commit_encode(&resp->xdr);
++	resp->common.err = nfs_ok;
+ 	return 0;
+ 
++out_toosmall:
++	resp->cookie_offset = 0;
++	resp->common.err = nfserr_toosmall;
++	resp->dirlist.len = starting_length;
++	return -EINVAL;
+ }
+ 
+-int
+-nfs3svc_encode_entry(void *cd, const char *name,
+-		     int namlen, loff_t offset, u64 ino, unsigned int d_type)
++static bool
++svcxdr_encode_fsstat3resok(struct xdr_stream *xdr,
++			   const struct nfsd3_fsstatres *resp)
+ {
+-	return encode_entry(cd, name, namlen, offset, ino, d_type, 0);
+-}
++	const struct kstatfs *s = &resp->stats;
++	u64 bs = s->f_bsize;
++	__be32 *p;
+ 
+-int
+-nfs3svc_encode_entry_plus(void *cd, const char *name,
+-			  int namlen, loff_t offset, u64 ino,
+-			  unsigned int d_type)
+-{
+-	return encode_entry(cd, name, namlen, offset, ino, d_type, 1);
++	p = xdr_reserve_space(xdr, XDR_UNIT * 13);
++	if (!p)
++		return false;
++	p = xdr_encode_hyper(p, bs * s->f_blocks);	/* total bytes */
++	p = xdr_encode_hyper(p, bs * s->f_bfree);	/* free bytes */
++	p = xdr_encode_hyper(p, bs * s->f_bavail);	/* user available bytes */
++	p = xdr_encode_hyper(p, s->f_files);		/* total inodes */
++	p = xdr_encode_hyper(p, s->f_ffree);		/* free inodes */
++	p = xdr_encode_hyper(p, s->f_ffree);		/* user available inodes */
++	*p = cpu_to_be32(resp->invarsec);		/* mean unchanged time */
++
++	return true;
+ }
+ 
+ /* FSSTAT */
+-int
+-nfs3svc_encode_fsstatres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_fsstatres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_fsstatres *resp = rqstp->rq_resp;
+-	struct kstatfs	*s = &resp->stats;
+-	u64		bs = s->f_bsize;
+-
+-	*p++ = resp->status;
+-	*p++ = xdr_zero;	/* no post_op_attr */
+-
+-	if (resp->status == 0) {
+-		p = xdr_encode_hyper(p, bs * s->f_blocks);	/* total bytes */
+-		p = xdr_encode_hyper(p, bs * s->f_bfree);	/* free bytes */
+-		p = xdr_encode_hyper(p, bs * s->f_bavail);	/* user available bytes */
+-		p = xdr_encode_hyper(p, s->f_files);	/* total inodes */
+-		p = xdr_encode_hyper(p, s->f_ffree);	/* free inodes */
+-		p = xdr_encode_hyper(p, s->f_ffree);	/* user available inodes */
+-		*p++ = htonl(resp->invarsec);	/* mean unchanged time */
++
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &nfs3svc_null_fh))
++			return false;
++		if (!svcxdr_encode_fsstat3resok(xdr, resp))
++			return false;
++		break;
++	default:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &nfs3svc_null_fh))
++			return false;
+ 	}
+-	return xdr_ressize_check(rqstp, p);
++
++	return true;
++}
++
++static bool
++svcxdr_encode_fsinfo3resok(struct xdr_stream *xdr,
++			   const struct nfsd3_fsinfores *resp)
++{
++	__be32 *p;
++
++	p = xdr_reserve_space(xdr, XDR_UNIT * 12);
++	if (!p)
++		return false;
++	*p++ = cpu_to_be32(resp->f_rtmax);
++	*p++ = cpu_to_be32(resp->f_rtpref);
++	*p++ = cpu_to_be32(resp->f_rtmult);
++	*p++ = cpu_to_be32(resp->f_wtmax);
++	*p++ = cpu_to_be32(resp->f_wtpref);
++	*p++ = cpu_to_be32(resp->f_wtmult);
++	*p++ = cpu_to_be32(resp->f_dtpref);
++	p = xdr_encode_hyper(p, resp->f_maxfilesize);
++	p = encode_nfstime3(p, &nfs3svc_time_delta);
++	*p = cpu_to_be32(resp->f_properties);
++
++	return true;
+ }
+ 
+ /* FSINFO */
+-int
+-nfs3svc_encode_fsinfores(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_fsinfores(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_fsinfores *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	*p++ = xdr_zero;	/* no post_op_attr */
+-
+-	if (resp->status == 0) {
+-		*p++ = htonl(resp->f_rtmax);
+-		*p++ = htonl(resp->f_rtpref);
+-		*p++ = htonl(resp->f_rtmult);
+-		*p++ = htonl(resp->f_wtmax);
+-		*p++ = htonl(resp->f_wtpref);
+-		*p++ = htonl(resp->f_wtmult);
+-		*p++ = htonl(resp->f_dtpref);
+-		p = xdr_encode_hyper(p, resp->f_maxfilesize);
+-		*p++ = xdr_one;
+-		*p++ = xdr_zero;
+-		*p++ = htonl(resp->f_properties);
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &nfs3svc_null_fh))
++			return false;
++		if (!svcxdr_encode_fsinfo3resok(xdr, resp))
++			return false;
++		break;
++	default:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &nfs3svc_null_fh))
++			return false;
+ 	}
+ 
+-	return xdr_ressize_check(rqstp, p);
++	return true;
++}
++
++static bool
++svcxdr_encode_pathconf3resok(struct xdr_stream *xdr,
++			     const struct nfsd3_pathconfres *resp)
++{
++	__be32 *p;
++
++	p = xdr_reserve_space(xdr, XDR_UNIT * 6);
++	if (!p)
++		return false;
++	*p++ = cpu_to_be32(resp->p_link_max);
++	*p++ = cpu_to_be32(resp->p_name_max);
++	p = xdr_encode_bool(p, resp->p_no_trunc);
++	p = xdr_encode_bool(p, resp->p_chown_restricted);
++	p = xdr_encode_bool(p, resp->p_case_insensitive);
++	xdr_encode_bool(p, resp->p_case_preserving);
++
++	return true;
+ }
+ 
+ /* PATHCONF */
+-int
+-nfs3svc_encode_pathconfres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_pathconfres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_pathconfres *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	*p++ = xdr_zero;	/* no post_op_attr */
+-
+-	if (resp->status == 0) {
+-		*p++ = htonl(resp->p_link_max);
+-		*p++ = htonl(resp->p_name_max);
+-		*p++ = htonl(resp->p_no_trunc);
+-		*p++ = htonl(resp->p_chown_restricted);
+-		*p++ = htonl(resp->p_case_insensitive);
+-		*p++ = htonl(resp->p_case_preserving);
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &nfs3svc_null_fh))
++			return false;
++		if (!svcxdr_encode_pathconf3resok(xdr, resp))
++			return false;
++		break;
++	default:
++		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &nfs3svc_null_fh))
++			return false;
+ 	}
+ 
+-	return xdr_ressize_check(rqstp, p);
++	return true;
+ }
+ 
+ /* COMMIT */
+-int
+-nfs3svc_encode_commitres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs3svc_encode_commitres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd3_commitres *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	p = encode_wcc_data(rqstp, p, &resp->fh);
+-	/* Write verifier */
+-	if (resp->status == 0) {
+-		*p++ = resp->verf[0];
+-		*p++ = resp->verf[1];
++	if (!svcxdr_encode_nfsstat3(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_wcc_data(rqstp, xdr, &resp->fh))
++			return false;
++		if (!svcxdr_encode_writeverf3(xdr, resp->verf))
++			return false;
++		break;
++	default:
++		if (!svcxdr_encode_wcc_data(rqstp, xdr, &resp->fh))
++			return false;
+ 	}
+-	return xdr_ressize_check(rqstp, p);
++
++	return true;
+ }
+ 
+ /*
+diff --git a/fs/nfsd/nfs4acl.c b/fs/nfsd/nfs4acl.c
+index 71292a0d6f092..bb8e2f6d7d03c 100644
+--- a/fs/nfsd/nfs4acl.c
++++ b/fs/nfsd/nfs4acl.c
+@@ -751,57 +751,26 @@ static int nfs4_acl_nfsv4_to_posix(struct nfs4_acl *acl,
+ 	return ret;
+ }
+ 
+-__be32
+-nfsd4_set_nfs4_acl(struct svc_rqst *rqstp, struct svc_fh *fhp,
+-		struct nfs4_acl *acl)
++__be32 nfsd4_acl_to_attr(enum nfs_ftype4 type, struct nfs4_acl *acl,
++			 struct nfsd_attrs *attr)
+ {
+-	__be32 error;
+ 	int host_error;
+-	struct dentry *dentry;
+-	struct inode *inode;
+-	struct posix_acl *pacl = NULL, *dpacl = NULL;
+ 	unsigned int flags = 0;
+ 
+-	/* Get inode */
+-	error = fh_verify(rqstp, fhp, 0, NFSD_MAY_SATTR);
+-	if (error)
+-		return error;
+-
+-	dentry = fhp->fh_dentry;
+-	inode = d_inode(dentry);
++	if (!acl)
++		return nfs_ok;
+ 
+-	if (S_ISDIR(inode->i_mode))
++	if (type == NF4DIR)
+ 		flags = NFS4_ACL_DIR;
+ 
+-	host_error = nfs4_acl_nfsv4_to_posix(acl, &pacl, &dpacl, flags);
++	host_error = nfs4_acl_nfsv4_to_posix(acl, &attr->na_pacl,
++					     &attr->na_dpacl, flags);
+ 	if (host_error == -EINVAL)
+ 		return nfserr_attrnotsupp;
+-	if (host_error < 0)
+-		goto out_nfserr;
+-
+-	fh_lock(fhp);
+-
+-	host_error = set_posix_acl(inode, ACL_TYPE_ACCESS, pacl);
+-	if (host_error < 0)
+-		goto out_drop_lock;
+-
+-	if (S_ISDIR(inode->i_mode)) {
+-		host_error = set_posix_acl(inode, ACL_TYPE_DEFAULT, dpacl);
+-	}
+-
+-out_drop_lock:
+-	fh_unlock(fhp);
+-
+-	posix_acl_release(pacl);
+-	posix_acl_release(dpacl);
+-out_nfserr:
+-	if (host_error == -EOPNOTSUPP)
+-		return nfserr_attrnotsupp;
+ 	else
+ 		return nfserrno(host_error);
+ }
+ 
+-
+ static short
+ ace2type(struct nfs4_ace *ace)
+ {
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index f5b7ad0847f20..4eae2c5af2edf 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -76,6 +76,17 @@ static __be32 *xdr_encode_empty_array(__be32 *p)
+  * 1 Protocol"
+  */
+ 
++static void encode_uint32(struct xdr_stream *xdr, u32 n)
++{
++	WARN_ON_ONCE(xdr_stream_encode_u32(xdr, n) < 0);
++}
++
++static void encode_bitmap4(struct xdr_stream *xdr, const __u32 *bitmap,
++			   size_t len)
++{
++	WARN_ON_ONCE(xdr_stream_encode_uint32_array(xdr, bitmap, len) < 0);
++}
++
+ /*
+  *	nfs_cb_opnum4
+  *
+@@ -121,7 +132,7 @@ static void encode_nfs_fh4(struct xdr_stream *xdr, const struct knfsd_fh *fh)
+ 
+ 	BUG_ON(length > NFS4_FHSIZE);
+ 	p = xdr_reserve_space(xdr, 4 + length);
+-	xdr_encode_opaque(p, &fh->fh_base, length);
++	xdr_encode_opaque(p, &fh->fh_raw, length);
+ }
+ 
+ /*
+@@ -328,6 +339,24 @@ static void encode_cb_recall4args(struct xdr_stream *xdr,
+ 	hdr->nops++;
+ }
+ 
++/*
++ * CB_RECALLANY4args
++ *
++ *	struct CB_RECALLANY4args {
++ *		uint32_t	craa_objects_to_keep;
++ *		bitmap4		craa_type_mask;
++ *	};
++ */
++static void
++encode_cb_recallany4args(struct xdr_stream *xdr,
++	struct nfs4_cb_compound_hdr *hdr, struct nfsd4_cb_recall_any *ra)
++{
++	encode_nfs_cb_opnum4(xdr, OP_CB_RECALL_ANY);
++	encode_uint32(xdr, ra->ra_keep);
++	encode_bitmap4(xdr, ra->ra_bmval, ARRAY_SIZE(ra->ra_bmval));
++	hdr->nops++;
++}
++
+ /*
+  * CB_SEQUENCE4args
+  *
+@@ -482,6 +511,26 @@ static void nfs4_xdr_enc_cb_recall(struct rpc_rqst *req, struct xdr_stream *xdr,
+ 	encode_cb_nops(&hdr);
+ }
+ 
++/*
++ * 20.6. Operation 8: CB_RECALL_ANY - Keep Any N Recallable Objects
++ */
++static void
++nfs4_xdr_enc_cb_recall_any(struct rpc_rqst *req,
++		struct xdr_stream *xdr, const void *data)
++{
++	const struct nfsd4_callback *cb = data;
++	struct nfsd4_cb_recall_any *ra;
++	struct nfs4_cb_compound_hdr hdr = {
++		.ident = cb->cb_clp->cl_cb_ident,
++		.minorversion = cb->cb_clp->cl_minorversion,
++	};
++
++	ra = container_of(cb, struct nfsd4_cb_recall_any, ra_cb);
++	encode_cb_compound4args(xdr, &hdr);
++	encode_cb_sequence4args(xdr, cb, &hdr);
++	encode_cb_recallany4args(xdr, &hdr, ra);
++	encode_cb_nops(&hdr);
++}
+ 
+ /*
+  * NFSv4.0 and NFSv4.1 XDR decode functions
+@@ -520,6 +569,28 @@ static int nfs4_xdr_dec_cb_recall(struct rpc_rqst *rqstp,
+ 	return decode_cb_op_status(xdr, OP_CB_RECALL, &cb->cb_status);
+ }
+ 
++/*
++ * 20.6. Operation 8: CB_RECALL_ANY - Keep Any N Recallable Objects
++ */
++static int
++nfs4_xdr_dec_cb_recall_any(struct rpc_rqst *rqstp,
++				  struct xdr_stream *xdr,
++				  void *data)
++{
++	struct nfsd4_callback *cb = data;
++	struct nfs4_cb_compound_hdr hdr;
++	int status;
++
++	status = decode_cb_compound4res(xdr, &hdr);
++	if (unlikely(status))
++		return status;
++	status = decode_cb_sequence4res(xdr, cb);
++	if (unlikely(status || cb->cb_seq_status))
++		return status;
++	status =  decode_cb_op_status(xdr, OP_CB_RECALL_ANY, &cb->cb_status);
++	return status;
++}
++
+ #ifdef CONFIG_NFSD_PNFS
+ /*
+  * CB_LAYOUTRECALL4args
+@@ -679,7 +750,7 @@ static int nfs4_xdr_dec_cb_notify_lock(struct rpc_rqst *rqstp,
+  *	case NFS4_OK:
+  *		write_response4	coa_resok4;
+  *	default:
+- *	length4		coa_bytes_copied;
++ *		length4		coa_bytes_copied;
+  * };
+  * struct CB_OFFLOAD4args {
+  *	nfs_fh4		coa_fh;
+@@ -688,21 +759,22 @@ static int nfs4_xdr_dec_cb_notify_lock(struct rpc_rqst *rqstp,
+  * };
+  */
+ static void encode_offload_info4(struct xdr_stream *xdr,
+-				 __be32 nfserr,
+-				 const struct nfsd4_copy *cp)
++				 const struct nfsd4_cb_offload *cbo)
+ {
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 4);
+-	*p++ = nfserr;
+-	if (!nfserr) {
++	*p = cbo->co_nfserr;
++	switch (cbo->co_nfserr) {
++	case nfs_ok:
+ 		p = xdr_reserve_space(xdr, 4 + 8 + 4 + NFS4_VERIFIER_SIZE);
+ 		p = xdr_encode_empty_array(p);
+-		p = xdr_encode_hyper(p, cp->cp_res.wr_bytes_written);
+-		*p++ = cpu_to_be32(cp->cp_res.wr_stable_how);
+-		p = xdr_encode_opaque_fixed(p, cp->cp_res.wr_verifier.data,
++		p = xdr_encode_hyper(p, cbo->co_res.wr_bytes_written);
++		*p++ = cpu_to_be32(cbo->co_res.wr_stable_how);
++		p = xdr_encode_opaque_fixed(p, cbo->co_res.wr_verifier.data,
+ 					    NFS4_VERIFIER_SIZE);
+-	} else {
++		break;
++	default:
+ 		p = xdr_reserve_space(xdr, 8);
+ 		/* We always return success if bytes were written */
+ 		p = xdr_encode_hyper(p, 0);
+@@ -710,18 +782,16 @@ static void encode_offload_info4(struct xdr_stream *xdr,
+ }
+ 
+ static void encode_cb_offload4args(struct xdr_stream *xdr,
+-				   __be32 nfserr,
+-				   const struct knfsd_fh *fh,
+-				   const struct nfsd4_copy *cp,
++				   const struct nfsd4_cb_offload *cbo,
+ 				   struct nfs4_cb_compound_hdr *hdr)
+ {
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 4);
+-	*p++ = cpu_to_be32(OP_CB_OFFLOAD);
+-	encode_nfs_fh4(xdr, fh);
+-	encode_stateid4(xdr, &cp->cp_res.cb_stateid);
+-	encode_offload_info4(xdr, nfserr, cp);
++	*p = cpu_to_be32(OP_CB_OFFLOAD);
++	encode_nfs_fh4(xdr, &cbo->co_fh);
++	encode_stateid4(xdr, &cbo->co_res.cb_stateid);
++	encode_offload_info4(xdr, cbo);
+ 
+ 	hdr->nops++;
+ }
+@@ -731,8 +801,8 @@ static void nfs4_xdr_enc_cb_offload(struct rpc_rqst *req,
+ 				    const void *data)
+ {
+ 	const struct nfsd4_callback *cb = data;
+-	const struct nfsd4_copy *cp =
+-		container_of(cb, struct nfsd4_copy, cp_cb);
++	const struct nfsd4_cb_offload *cbo =
++		container_of(cb, struct nfsd4_cb_offload, co_cb);
+ 	struct nfs4_cb_compound_hdr hdr = {
+ 		.ident = 0,
+ 		.minorversion = cb->cb_clp->cl_minorversion,
+@@ -740,7 +810,7 @@ static void nfs4_xdr_enc_cb_offload(struct rpc_rqst *req,
+ 
+ 	encode_cb_compound4args(xdr, &hdr);
+ 	encode_cb_sequence4args(xdr, cb, &hdr);
+-	encode_cb_offload4args(xdr, cp->nfserr, &cp->fh, cp, &hdr);
++	encode_cb_offload4args(xdr, cbo, &hdr);
+ 	encode_cb_nops(&hdr);
+ }
+ 
+@@ -784,6 +854,7 @@ static const struct rpc_procinfo nfs4_cb_procedures[] = {
+ #endif
+ 	PROC(CB_NOTIFY_LOCK,	COMPOUND,	cb_notify_lock,	cb_notify_lock),
+ 	PROC(CB_OFFLOAD,	COMPOUND,	cb_offload,	cb_offload),
++	PROC(CB_RECALL_ANY,	COMPOUND,	cb_recall_any,	cb_recall_any),
+ };
+ 
+ static unsigned int nfs4_cb_counts[ARRAY_SIZE(nfs4_cb_procedures)];
+@@ -941,37 +1012,43 @@ static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *c
+ 		clp->cl_cb_conn.cb_xprt = conn->cb_xprt;
+ 	clp->cl_cb_client = client;
+ 	clp->cl_cb_cred = cred;
+-	trace_nfsd_cb_setup(clp);
++	rcu_read_lock();
++	trace_nfsd_cb_setup(clp, rpc_peeraddr2str(client, RPC_DISPLAY_NETID),
++			    args.authflavor);
++	rcu_read_unlock();
+ 	return 0;
+ }
+ 
++static void nfsd4_mark_cb_state(struct nfs4_client *clp, int newstate)
++{
++	if (clp->cl_cb_state != newstate) {
++		clp->cl_cb_state = newstate;
++		trace_nfsd_cb_state(clp);
++	}
++}
++
+ static void nfsd4_mark_cb_down(struct nfs4_client *clp, int reason)
+ {
+ 	if (test_bit(NFSD4_CLIENT_CB_UPDATE, &clp->cl_flags))
+ 		return;
+-	clp->cl_cb_state = NFSD4_CB_DOWN;
+-	trace_nfsd_cb_state(clp);
++	nfsd4_mark_cb_state(clp, NFSD4_CB_DOWN);
+ }
+ 
+ static void nfsd4_mark_cb_fault(struct nfs4_client *clp, int reason)
+ {
+ 	if (test_bit(NFSD4_CLIENT_CB_UPDATE, &clp->cl_flags))
+ 		return;
+-	clp->cl_cb_state = NFSD4_CB_FAULT;
+-	trace_nfsd_cb_state(clp);
++	nfsd4_mark_cb_state(clp, NFSD4_CB_FAULT);
+ }
+ 
+ static void nfsd4_cb_probe_done(struct rpc_task *task, void *calldata)
+ {
+ 	struct nfs4_client *clp = container_of(calldata, struct nfs4_client, cl_cb_null);
+ 
+-	trace_nfsd_cb_done(clp, task->tk_status);
+ 	if (task->tk_status)
+ 		nfsd4_mark_cb_down(clp, task->tk_status);
+-	else {
+-		clp->cl_cb_state = NFSD4_CB_UP;
+-		trace_nfsd_cb_state(clp);
+-	}
++	else
++		nfsd4_mark_cb_state(clp, NFSD4_CB_UP);
+ }
+ 
+ static void nfsd4_cb_probe_release(void *calldata)
+@@ -995,8 +1072,8 @@ static const struct rpc_call_ops nfsd4_cb_probe_ops = {
+  */
+ void nfsd4_probe_callback(struct nfs4_client *clp)
+ {
+-	clp->cl_cb_state = NFSD4_CB_UNKNOWN;
+-	trace_nfsd_cb_state(clp);
++	trace_nfsd_cb_probe(clp);
++	nfsd4_mark_cb_state(clp, NFSD4_CB_UNKNOWN);
+ 	set_bit(NFSD4_CLIENT_CB_UPDATE, &clp->cl_flags);
+ 	nfsd4_run_cb(&clp->cl_cb_null);
+ }
+@@ -1009,11 +1086,10 @@ void nfsd4_probe_callback_sync(struct nfs4_client *clp)
+ 
+ void nfsd4_change_callback(struct nfs4_client *clp, struct nfs4_cb_conn *conn)
+ {
+-	clp->cl_cb_state = NFSD4_CB_UNKNOWN;
++	nfsd4_mark_cb_state(clp, NFSD4_CB_UNKNOWN);
+ 	spin_lock(&clp->cl_lock);
+ 	memcpy(&clp->cl_cb_conn, conn, sizeof(struct nfs4_cb_conn));
+ 	spin_unlock(&clp->cl_lock);
+-	trace_nfsd_cb_state(clp);
+ }
+ 
+ /*
+@@ -1170,8 +1246,6 @@ static void nfsd4_cb_done(struct rpc_task *task, void *calldata)
+ 	struct nfsd4_callback *cb = calldata;
+ 	struct nfs4_client *clp = cb->cb_clp;
+ 
+-	trace_nfsd_cb_done(clp, task->tk_status);
+-
+ 	if (!nfsd4_cb_sequence_done(task, cb))
+ 		return;
+ 
+@@ -1231,6 +1305,9 @@ void nfsd4_destroy_callback_queue(void)
+ /* must be called under the state lock */
+ void nfsd4_shutdown_callback(struct nfs4_client *clp)
+ {
++	if (clp->cl_cb_state != NFSD4_CB_UNKNOWN)
++		trace_nfsd_cb_shutdown(clp);
++
+ 	set_bit(NFSD4_CLIENT_CB_KILL, &clp->cl_flags);
+ 	/*
+ 	 * Note this won't actually result in a null callback;
+@@ -1276,7 +1353,6 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb)
+ 	 * kill the old client:
+ 	 */
+ 	if (clp->cl_cb_client) {
+-		trace_nfsd_cb_shutdown(clp);
+ 		rpc_shutdown_client(clp->cl_cb_client);
+ 		clp->cl_cb_client = NULL;
+ 		put_cred(clp->cl_cb_cred);
+@@ -1322,8 +1398,6 @@ nfsd4_run_cb_work(struct work_struct *work)
+ 	struct rpc_clnt *clnt;
+ 	int flags;
+ 
+-	trace_nfsd_cb_work(clp, cb->cb_msg.rpc_proc->p_name);
+-
+ 	if (cb->cb_need_restart) {
+ 		cb->cb_need_restart = false;
+ 	} else {
+@@ -1345,7 +1419,7 @@ nfsd4_run_cb_work(struct work_struct *work)
+ 	 * Don't send probe messages for 4.1 or later.
+ 	 */
+ 	if (!cb->cb_ops && clp->cl_minorversion) {
+-		clp->cl_cb_state = NFSD4_CB_UP;
++		nfsd4_mark_cb_state(clp, NFSD4_CB_UP);
+ 		nfsd41_destroy_cb(cb);
+ 		return;
+ 	}
+@@ -1371,11 +1445,21 @@ void nfsd4_init_cb(struct nfsd4_callback *cb, struct nfs4_client *clp,
+ 	cb->cb_holds_slot = false;
+ }
+ 
+-void nfsd4_run_cb(struct nfsd4_callback *cb)
++/**
++ * nfsd4_run_cb - queue up a callback job to run
++ * @cb: callback to queue
++ *
++ * Kick off a callback to do its thing. Returns false if it was already
++ * on a queue, true otherwise.
++ */
++bool nfsd4_run_cb(struct nfsd4_callback *cb)
+ {
+ 	struct nfs4_client *clp = cb->cb_clp;
++	bool queued;
+ 
+ 	nfsd41_cb_inflight_begin(clp);
+-	if (!nfsd4_queue_cb(cb))
++	queued = nfsd4_queue_cb(cb);
++	if (!queued)
+ 		nfsd41_cb_inflight_end(clp);
++	return queued;
+ }
+diff --git a/fs/nfsd/nfs4idmap.c b/fs/nfsd/nfs4idmap.c
+index f92161ce1f97d..5e9809aff37eb 100644
+--- a/fs/nfsd/nfs4idmap.c
++++ b/fs/nfsd/nfs4idmap.c
+@@ -41,6 +41,7 @@
+ #include "idmap.h"
+ #include "nfsd.h"
+ #include "netns.h"
++#include "vfs.h"
+ 
+ /*
+  * Turn off idmapping when using AUTH_SYS.
+@@ -82,8 +83,8 @@ ent_init(struct cache_head *cnew, struct cache_head *citm)
+ 	new->id = itm->id;
+ 	new->type = itm->type;
+ 
+-	strlcpy(new->name, itm->name, sizeof(new->name));
+-	strlcpy(new->authname, itm->authname, sizeof(new->authname));
++	strscpy(new->name, itm->name, sizeof(new->name));
++	strscpy(new->authname, itm->authname, sizeof(new->authname));
+ }
+ 
+ static void
+@@ -548,7 +549,7 @@ idmap_name_to_id(struct svc_rqst *rqstp, int type, const char *name, u32 namelen
+ 		return nfserr_badowner;
+ 	memcpy(key.name, name, namelen);
+ 	key.name[namelen] = '\0';
+-	strlcpy(key.authname, rqst_authname(rqstp), sizeof(key.authname));
++	strscpy(key.authname, rqst_authname(rqstp), sizeof(key.authname));
+ 	ret = idmap_lookup(rqstp, nametoid_lookup, &key, nn->nametoid_cache, &item);
+ 	if (ret == -ENOENT)
+ 		return nfserr_badowner;
+@@ -584,7 +585,7 @@ static __be32 idmap_id_to_name(struct xdr_stream *xdr,
+ 	int ret;
+ 	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 
+-	strlcpy(key.authname, rqst_authname(rqstp), sizeof(key.authname));
++	strscpy(key.authname, rqst_authname(rqstp), sizeof(key.authname));
+ 	ret = idmap_lookup(rqstp, idtoname_lookup, &key, nn->idtoname_cache, &item);
+ 	if (ret == -ENOENT)
+ 		return encode_ascii_id(xdr, id);
+diff --git a/fs/nfsd/nfs4layouts.c b/fs/nfsd/nfs4layouts.c
+index 2673019d30ecd..e4e23b2a3e655 100644
+--- a/fs/nfsd/nfs4layouts.c
++++ b/fs/nfsd/nfs4layouts.c
+@@ -421,7 +421,7 @@ nfsd4_insert_layout(struct nfsd4_layoutget *lgp, struct nfs4_layout_stateid *ls)
+ 	new = kmem_cache_alloc(nfs4_layout_cache, GFP_KERNEL);
+ 	if (!new)
+ 		return nfserr_jukebox;
+-	memcpy(&new->lo_seg, seg, sizeof(lp->lo_seg));
++	memcpy(&new->lo_seg, seg, sizeof(new->lo_seg));
+ 	new->lo_state = ls;
+ 
+ 	spin_lock(&fp->fi_lock);
+@@ -657,7 +657,7 @@ nfsd4_cb_layout_done(struct nfsd4_callback *cb, struct rpc_task *task)
+ 	ktime_t now, cutoff;
+ 	const struct nfsd4_layout_ops *ops;
+ 
+-
++	trace_nfsd_cb_layout_done(&ls->ls_stid.sc_stateid, task);
+ 	switch (task->tk_status) {
+ 	case 0:
+ 	case -NFS4ERR_DELAY:
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index e84996c3867c7..2c0de247083a9 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -37,6 +37,9 @@
+ #include <linux/falloc.h>
+ #include <linux/slab.h>
+ #include <linux/kthread.h>
++#include <linux/namei.h>
++#include <linux/freezer.h>
++
+ #include <linux/sunrpc/addr.h>
+ #include <linux/nfs_ssc.h>
+ 
+@@ -50,34 +53,16 @@
+ #include "pnfs.h"
+ #include "trace.h"
+ 
+-#ifdef CONFIG_NFSD_V4_SECURITY_LABEL
+-#include <linux/security.h>
+-
+-static inline void
+-nfsd4_security_inode_setsecctx(struct svc_fh *resfh, struct xdr_netobj *label, u32 *bmval)
+-{
+-	struct inode *inode = d_inode(resfh->fh_dentry);
+-	int status;
+-
+-	inode_lock(inode);
+-	status = security_inode_setsecctx(resfh->fh_dentry,
+-		label->data, label->len);
+-	inode_unlock(inode);
+-
+-	if (status)
+-		/*
+-		 * XXX: We should really fail the whole open, but we may
+-		 * already have created a new file, so it may be too
+-		 * late.  For now this seems the least of evils:
+-		 */
+-		bmval[2] &= ~FATTR4_WORD2_SECURITY_LABEL;
++static bool inter_copy_offload_enable;
++module_param(inter_copy_offload_enable, bool, 0644);
++MODULE_PARM_DESC(inter_copy_offload_enable,
++		 "Enable inter server to server copy offload. Default: false");
+ 
+-	return;
+-}
+-#else
+-static inline void
+-nfsd4_security_inode_setsecctx(struct svc_fh *resfh, struct xdr_netobj *label, u32 *bmval)
+-{ }
++#ifdef CONFIG_NFSD_V4_2_INTER_SSC
++static int nfsd4_ssc_umount_timeout = 900000;		/* default to 15 mins */
++module_param(nfsd4_ssc_umount_timeout, int, 0644);
++MODULE_PARM_DESC(nfsd4_ssc_umount_timeout,
++		"idle msecs before unmount export from source server");
+ #endif
+ 
+ #define NFSDDBG_FACILITY		NFSDDBG_PROC
+@@ -144,26 +129,6 @@ is_create_with_attrs(struct nfsd4_open *open)
+ 		    || open->op_createmode == NFS4_CREATE_EXCLUSIVE4_1);
+ }
+ 
+-/*
+- * if error occurs when setting the acl, just clear the acl bit
+- * in the returned attr bitmap.
+- */
+-static void
+-do_set_nfs4_acl(struct svc_rqst *rqstp, struct svc_fh *fhp,
+-		struct nfs4_acl *acl, u32 *bmval)
+-{
+-	__be32 status;
+-
+-	status = nfsd4_set_nfs4_acl(rqstp, fhp, acl);
+-	if (status)
+-		/*
+-		 * We should probably fail the whole open at this point,
+-		 * but we've already created the file, so it's too late;
+-		 * So this seems the least of evils:
+-		 */
+-		bmval[0] &= ~FATTR4_WORD0_ACL;
+-}
+-
+ static inline void
+ fh_dup2(struct svc_fh *dst, struct svc_fh *src)
+ {
+@@ -177,7 +142,6 @@ fh_dup2(struct svc_fh *dst, struct svc_fh *src)
+ static __be32
+ do_open_permission(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfsd4_open *open, int accmode)
+ {
+-	__be32 status;
+ 
+ 	if (open->op_truncate &&
+ 		!(open->op_share_access & NFS4_SHARE_ACCESS_WRITE))
+@@ -192,9 +156,7 @@ do_open_permission(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfs
+ 	if (open->op_share_deny & NFS4_SHARE_DENY_READ)
+ 		accmode |= NFSD_MAY_WRITE;
+ 
+-	status = fh_verify(rqstp, current_fh, S_IFREG, accmode);
+-
+-	return status;
++	return fh_verify(rqstp, current_fh, S_IFREG, accmode);
+ }
+ 
+ static __be32 nfsd_check_obj_isreg(struct svc_fh *fh)
+@@ -223,6 +185,202 @@ static void nfsd4_set_open_owner_reply_cache(struct nfsd4_compound_state *cstate
+ 			&resfh->fh_handle);
+ }
+ 
++static inline bool nfsd4_create_is_exclusive(int createmode)
++{
++	return createmode == NFS4_CREATE_EXCLUSIVE ||
++		createmode == NFS4_CREATE_EXCLUSIVE4_1;
++}
++
++static __be32
++nfsd4_vfs_create(struct svc_fh *fhp, struct dentry *child,
++		 struct nfsd4_open *open)
++{
++	struct file *filp;
++	struct path path;
++	int oflags;
++
++	oflags = O_CREAT | O_LARGEFILE;
++	switch (open->op_share_access & NFS4_SHARE_ACCESS_BOTH) {
++	case NFS4_SHARE_ACCESS_WRITE:
++		oflags |= O_WRONLY;
++		break;
++	case NFS4_SHARE_ACCESS_BOTH:
++		oflags |= O_RDWR;
++		break;
++	default:
++		oflags |= O_RDONLY;
++	}
++
++	path.mnt = fhp->fh_export->ex_path.mnt;
++	path.dentry = child;
++	filp = dentry_create(&path, oflags, open->op_iattr.ia_mode,
++			     current_cred());
++	if (IS_ERR(filp))
++		return nfserrno(PTR_ERR(filp));
++
++	open->op_filp = filp;
++	return nfs_ok;
++}
++
++/*
++ * Implement NFSv4's unchecked, guarded, and exclusive create
++ * semantics for regular files. Open state for this new file is
++ * subsequently fabricated in nfsd4_process_open2().
++ *
++ * Upon return, caller must release @fhp and @resfhp.
++ */
++static __be32
++nfsd4_create_file(struct svc_rqst *rqstp, struct svc_fh *fhp,
++		  struct svc_fh *resfhp, struct nfsd4_open *open)
++{
++	struct iattr *iap = &open->op_iattr;
++	struct nfsd_attrs attrs = {
++		.na_iattr	= iap,
++		.na_seclabel	= &open->op_label,
++	};
++	struct dentry *parent, *child;
++	__u32 v_mtime, v_atime;
++	struct inode *inode;
++	__be32 status;
++	int host_err;
++
++	if (isdotent(open->op_fname, open->op_fnamelen))
++		return nfserr_exist;
++	if (!(iap->ia_valid & ATTR_MODE))
++		iap->ia_mode = 0;
++
++	status = fh_verify(rqstp, fhp, S_IFDIR, NFSD_MAY_EXEC);
++	if (status != nfs_ok)
++		return status;
++	parent = fhp->fh_dentry;
++	inode = d_inode(parent);
++
++	host_err = fh_want_write(fhp);
++	if (host_err)
++		return nfserrno(host_err);
++
++	if (is_create_with_attrs(open))
++		nfsd4_acl_to_attr(NF4REG, open->op_acl, &attrs);
++
++	inode_lock_nested(inode, I_MUTEX_PARENT);
++
++	child = lookup_one_len(open->op_fname, parent, open->op_fnamelen);
++	if (IS_ERR(child)) {
++		status = nfserrno(PTR_ERR(child));
++		goto out;
++	}
++
++	if (d_really_is_negative(child)) {
++		status = fh_verify(rqstp, fhp, S_IFDIR, NFSD_MAY_CREATE);
++		if (status != nfs_ok)
++			goto out;
++	}
++
++	status = fh_compose(resfhp, fhp->fh_export, child, fhp);
++	if (status != nfs_ok)
++		goto out;
++
++	v_mtime = 0;
++	v_atime = 0;
++	if (nfsd4_create_is_exclusive(open->op_createmode)) {
++		u32 *verifier = (u32 *)open->op_verf.data;
++
++		/*
++		 * Solaris 7 gets confused (bugid 4218508) if these have
++		 * the high bit set, as do xfs filesystems without the
++		 * "bigtime" feature. So just clear the high bits. If this
++		 * is ever changed to use different attrs for storing the
++		 * verifier, then do_open_lookup() will also need to be
++		 * fixed accordingly.
++		 */
++		v_mtime = verifier[0] & 0x7fffffff;
++		v_atime = verifier[1] & 0x7fffffff;
++	}
++
++	if (d_really_is_positive(child)) {
++		status = nfs_ok;
++
++		/* NFSv4 protocol requires change attributes even though
++		 * no change happened.
++		 */
++		fh_fill_both_attrs(fhp);
++
++		switch (open->op_createmode) {
++		case NFS4_CREATE_UNCHECKED:
++			if (!d_is_reg(child))
++				break;
++
++			/*
++			 * In NFSv4, we don't want to truncate the file
++			 * now. This would be wrong if the OPEN fails for
++			 * some other reason. Furthermore, if the size is
++			 * nonzero, we should ignore it according to spec!
++			 */
++			open->op_truncate = (iap->ia_valid & ATTR_SIZE) &&
++						!iap->ia_size;
++			break;
++		case NFS4_CREATE_GUARDED:
++			status = nfserr_exist;
++			break;
++		case NFS4_CREATE_EXCLUSIVE:
++			if (d_inode(child)->i_mtime.tv_sec == v_mtime &&
++			    d_inode(child)->i_atime.tv_sec == v_atime &&
++			    d_inode(child)->i_size == 0) {
++				open->op_created = true;
++				break;		/* subtle */
++			}
++			status = nfserr_exist;
++			break;
++		case NFS4_CREATE_EXCLUSIVE4_1:
++			if (d_inode(child)->i_mtime.tv_sec == v_mtime &&
++			    d_inode(child)->i_atime.tv_sec == v_atime &&
++			    d_inode(child)->i_size == 0) {
++				open->op_created = true;
++				goto set_attr;	/* subtle */
++			}
++			status = nfserr_exist;
++		}
++		goto out;
++	}
++
++	if (!IS_POSIXACL(inode))
++		iap->ia_mode &= ~current_umask();
++
++	fh_fill_pre_attrs(fhp);
++	status = nfsd4_vfs_create(fhp, child, open);
++	if (status != nfs_ok)
++		goto out;
++	open->op_created = true;
++	fh_fill_post_attrs(fhp);
++
++	/* A newly created file already has a file size of zero. */
++	if ((iap->ia_valid & ATTR_SIZE) && (iap->ia_size == 0))
++		iap->ia_valid &= ~ATTR_SIZE;
++	if (nfsd4_create_is_exclusive(open->op_createmode)) {
++		iap->ia_valid = ATTR_MTIME | ATTR_ATIME |
++				ATTR_MTIME_SET|ATTR_ATIME_SET;
++		iap->ia_mtime.tv_sec = v_mtime;
++		iap->ia_atime.tv_sec = v_atime;
++		iap->ia_mtime.tv_nsec = 0;
++		iap->ia_atime.tv_nsec = 0;
++	}
++
++set_attr:
++	status = nfsd_create_setattr(rqstp, fhp, resfhp, &attrs);
++
++	if (attrs.na_labelerr)
++		open->op_bmval[2] &= ~FATTR4_WORD2_SECURITY_LABEL;
++	if (attrs.na_aclerr)
++		open->op_bmval[0] &= ~FATTR4_WORD0_ACL;
++out:
++	inode_unlock(inode);
++	nfsd_attrs_free(&attrs);
++	if (child && !IS_ERR(child))
++		dput(child);
++	fh_drop_write(fhp);
++	return status;
++}
++
+ static __be32
+ do_open_lookup(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate, struct nfsd4_open *open, struct svc_fh **resfh)
+ {
+@@ -252,47 +410,33 @@ do_open_lookup(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate, stru
+ 		 * yes          | yes    | GUARDED4        | GUARDED4
+ 		 */
+ 
+-		/*
+-		 * Note: create modes (UNCHECKED,GUARDED...) are the same
+-		 * in NFSv4 as in v3 except EXCLUSIVE4_1.
+-		 */
+ 		current->fs->umask = open->op_umask;
+-		status = do_nfsd_create(rqstp, current_fh, open->op_fname.data,
+-					open->op_fname.len, &open->op_iattr,
+-					*resfh, open->op_createmode,
+-					(u32 *)open->op_verf.data,
+-					&open->op_truncate, &open->op_created);
++		status = nfsd4_create_file(rqstp, current_fh, *resfh, open);
+ 		current->fs->umask = 0;
+ 
+-		if (!status && open->op_label.len)
+-			nfsd4_security_inode_setsecctx(*resfh, &open->op_label, open->op_bmval);
+-
+ 		/*
+ 		 * Following rfc 3530 14.2.16, and rfc 5661 18.16.4
+ 		 * use the returned bitmask to indicate which attributes
+ 		 * we used to store the verifier:
+ 		 */
+-		if (nfsd_create_is_exclusive(open->op_createmode) && status == 0)
++		if (nfsd4_create_is_exclusive(open->op_createmode) && status == 0)
+ 			open->op_bmval[1] |= (FATTR4_WORD1_TIME_ACCESS |
+ 						FATTR4_WORD1_TIME_MODIFY);
+-	} else
+-		/*
+-		 * Note this may exit with the parent still locked.
+-		 * We will hold the lock until nfsd4_open's final
+-		 * lookup, to prevent renames or unlinks until we've had
+-		 * a chance to an acquire a delegation if appropriate.
+-		 */
++	} else {
+ 		status = nfsd_lookup(rqstp, current_fh,
+-				     open->op_fname.data, open->op_fname.len, *resfh);
++				     open->op_fname, open->op_fnamelen, *resfh);
++		if (!status)
++			/* NFSv4 protocol requires change attributes even though
++			 * no change happened.
++			 */
++			fh_fill_both_attrs(current_fh);
++	}
+ 	if (status)
+ 		goto out;
+ 	status = nfsd_check_obj_isreg(*resfh);
+ 	if (status)
+ 		goto out;
+ 
+-	if (is_create_with_attrs(open) && open->op_acl != NULL)
+-		do_set_nfs4_acl(rqstp, *resfh, open->op_acl, open->op_bmval);
+-
+ 	nfsd4_set_open_owner_reply_cache(cstate, open, *resfh);
+ 	accmode = NFSD_MAY_NOP;
+ 	if (open->op_created ||
+@@ -308,7 +452,6 @@ static __be32
+ do_open_fhandle(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate, struct nfsd4_open *open)
+ {
+ 	struct svc_fh *current_fh = &cstate->current_fh;
+-	__be32 status;
+ 	int accmode = 0;
+ 
+ 	/* We don't know the target directory, and therefore can not
+@@ -333,9 +476,7 @@ do_open_fhandle(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate, str
+ 	if (open->op_claim_type == NFS4_OPEN_CLAIM_DELEG_CUR_FH)
+ 		accmode = NFSD_MAY_OWNER_OVERRIDE;
+ 
+-	status = do_open_permission(rqstp, current_fh, open, accmode);
+-
+-	return status;
++	return do_open_permission(rqstp, current_fh, open, accmode);
+ }
+ 
+ static void
+@@ -360,9 +501,12 @@ nfsd4_open(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	bool reclaim = false;
+ 
+ 	dprintk("NFSD: nfsd4_open filename %.*s op_openowner %p\n",
+-		(int)open->op_fname.len, open->op_fname.data,
++		(int)open->op_fnamelen, open->op_fname,
+ 		open->op_openowner);
+ 
++	open->op_filp = NULL;
++	open->op_rqstp = rqstp;
++
+ 	/* This check required by spec. */
+ 	if (open->op_create && open->op_claim_type != NFS4_OPEN_CLAIM_NULL)
+ 		return nfserr_inval;
+@@ -373,8 +517,7 @@ nfsd4_open(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	 * Before RECLAIM_COMPLETE done, server should deny new lock
+ 	 */
+ 	if (nfsd4_has_session(cstate) &&
+-	    !test_bit(NFSD4_CLIENT_RECLAIM_COMPLETE,
+-		      &cstate->session->se_client->cl_flags) &&
++	    !test_bit(NFSD4_CLIENT_RECLAIM_COMPLETE, &cstate->clp->cl_flags) &&
+ 	    open->op_claim_type != NFS4_OPEN_CLAIM_PREVIOUS)
+ 		return nfserr_grace;
+ 
+@@ -416,51 +559,46 @@ nfsd4_open(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		goto out;
+ 
+ 	switch (open->op_claim_type) {
+-		case NFS4_OPEN_CLAIM_DELEGATE_CUR:
+-		case NFS4_OPEN_CLAIM_NULL:
+-			status = do_open_lookup(rqstp, cstate, open, &resfh);
+-			if (status)
+-				goto out;
+-			break;
+-		case NFS4_OPEN_CLAIM_PREVIOUS:
+-			status = nfs4_check_open_reclaim(&open->op_clientid,
+-							 cstate, nn);
+-			if (status)
+-				goto out;
+-			open->op_openowner->oo_flags |= NFS4_OO_CONFIRMED;
+-			reclaim = true;
+-			fallthrough;
+-		case NFS4_OPEN_CLAIM_FH:
+-		case NFS4_OPEN_CLAIM_DELEG_CUR_FH:
+-			status = do_open_fhandle(rqstp, cstate, open);
+-			if (status)
+-				goto out;
+-			resfh = &cstate->current_fh;
+-			break;
+-		case NFS4_OPEN_CLAIM_DELEG_PREV_FH:
+-             	case NFS4_OPEN_CLAIM_DELEGATE_PREV:
+-			dprintk("NFSD: unsupported OPEN claim type %d\n",
+-				open->op_claim_type);
+-			status = nfserr_notsupp;
++	case NFS4_OPEN_CLAIM_DELEGATE_CUR:
++	case NFS4_OPEN_CLAIM_NULL:
++		status = do_open_lookup(rqstp, cstate, open, &resfh);
++		if (status)
++			goto out;
++		break;
++	case NFS4_OPEN_CLAIM_PREVIOUS:
++		status = nfs4_check_open_reclaim(cstate->clp);
++		if (status)
+ 			goto out;
+-		default:
+-			dprintk("NFSD: Invalid OPEN claim type %d\n",
+-				open->op_claim_type);
+-			status = nfserr_inval;
++		open->op_openowner->oo_flags |= NFS4_OO_CONFIRMED;
++		reclaim = true;
++		fallthrough;
++	case NFS4_OPEN_CLAIM_FH:
++	case NFS4_OPEN_CLAIM_DELEG_CUR_FH:
++		status = do_open_fhandle(rqstp, cstate, open);
++		if (status)
+ 			goto out;
++		resfh = &cstate->current_fh;
++		break;
++	case NFS4_OPEN_CLAIM_DELEG_PREV_FH:
++	case NFS4_OPEN_CLAIM_DELEGATE_PREV:
++		status = nfserr_notsupp;
++		goto out;
++	default:
++		status = nfserr_inval;
++		goto out;
+ 	}
+-	/*
+-	 * nfsd4_process_open2() does the actual opening of the file.  If
+-	 * successful, it (1) truncates the file if open->op_truncate was
+-	 * set, (2) sets open->op_stateid, (3) sets open->op_delegation.
+-	 */
++
+ 	status = nfsd4_process_open2(rqstp, resfh, open);
+-	WARN(status && open->op_created,
+-	     "nfsd4_process_open2 failed to open newly-created file! status=%u\n",
+-	     be32_to_cpu(status));
++	if (status && open->op_created)
++		pr_warn("nfsd4_process_open2 failed to open newly-created file: status=%u\n",
++			be32_to_cpu(status));
+ 	if (reclaim && !status)
+ 		nn->somebody_reclaimed = true;
+ out:
++	if (open->op_filp) {
++		fput(open->op_filp);
++		open->op_filp = NULL;
++	}
+ 	if (resfh && resfh != &cstate->current_fh) {
+ 		fh_dup2(&cstate->current_fh, resfh);
+ 		fh_put(resfh);
+@@ -509,7 +647,7 @@ nfsd4_putfh(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 
+ 	fh_put(&cstate->current_fh);
+ 	cstate->current_fh.fh_handle.fh_size = putfh->pf_fhlen;
+-	memcpy(&cstate->current_fh.fh_handle.fh_base, putfh->pf_fhval,
++	memcpy(&cstate->current_fh.fh_handle.fh_raw, putfh->pf_fhval,
+ 	       putfh->pf_fhlen);
+ 	ret = fh_verify(rqstp, &cstate->current_fh, 0, NFSD_MAY_BYPASS_GSS);
+ #ifdef CONFIG_NFSD_V4_2_INTER_SSC
+@@ -525,11 +663,9 @@ static __be32
+ nfsd4_putrootfh(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		union nfsd4_op_u *u)
+ {
+-	__be32 status;
+-
+ 	fh_put(&cstate->current_fh);
+-	status = exp_pseudoroot(rqstp, &cstate->current_fh);
+-	return status;
++
++	return exp_pseudoroot(rqstp, &cstate->current_fh);
+ }
+ 
+ static __be32
+@@ -588,7 +724,7 @@ static void gen_boot_verifier(nfs4_verifier *verifier, struct net *net)
+ 
+ 	BUILD_BUG_ON(2*sizeof(*verf) != sizeof(verifier->data));
+ 
+-	nfsd_copy_boot_verifier(verf, net_generic(net, nfsd_net_id));
++	nfsd_copy_write_verifier(verf, net_generic(net, nfsd_net_id));
+ }
+ 
+ static __be32
+@@ -596,10 +732,19 @@ nfsd4_commit(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	     union nfsd4_op_u *u)
+ {
+ 	struct nfsd4_commit *commit = &u->commit;
++	struct nfsd_file *nf;
++	__be32 status;
++
++	status = nfsd_file_acquire(rqstp, &cstate->current_fh, NFSD_MAY_WRITE |
++				   NFSD_MAY_NOT_BREAK_LEASE, &nf);
++	if (status != nfs_ok)
++		return status;
+ 
+-	return nfsd_commit(rqstp, &cstate->current_fh, commit->co_offset,
++	status = nfsd_commit(rqstp, &cstate->current_fh, nf, commit->co_offset,
+ 			     commit->co_count,
+ 			     (__be32 *)commit->co_verf.data);
++	nfsd_file_put(nf);
++	return status;
+ }
+ 
+ static __be32
+@@ -607,6 +752,10 @@ nfsd4_create(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	     union nfsd4_op_u *u)
+ {
+ 	struct nfsd4_create *create = &u->create;
++	struct nfsd_attrs attrs = {
++		.na_iattr	= &create->cr_iattr,
++		.na_seclabel	= &create->cr_label,
++	};
+ 	struct svc_fh resfh;
+ 	__be32 status;
+ 	dev_t rdev;
+@@ -622,12 +771,13 @@ nfsd4_create(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	if (status)
+ 		return status;
+ 
++	status = nfsd4_acl_to_attr(create->cr_type, create->cr_acl, &attrs);
+ 	current->fs->umask = create->cr_umask;
+ 	switch (create->cr_type) {
+ 	case NF4LNK:
+ 		status = nfsd_symlink(rqstp, &cstate->current_fh,
+ 				      create->cr_name, create->cr_namelen,
+-				      create->cr_data, &resfh);
++				      create->cr_data, &attrs, &resfh);
+ 		break;
+ 
+ 	case NF4BLK:
+@@ -638,7 +788,7 @@ nfsd4_create(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 			goto out_umask;
+ 		status = nfsd_create(rqstp, &cstate->current_fh,
+ 				     create->cr_name, create->cr_namelen,
+-				     &create->cr_iattr, S_IFBLK, rdev, &resfh);
++				     &attrs, S_IFBLK, rdev, &resfh);
+ 		break;
+ 
+ 	case NF4CHR:
+@@ -649,26 +799,26 @@ nfsd4_create(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 			goto out_umask;
+ 		status = nfsd_create(rqstp, &cstate->current_fh,
+ 				     create->cr_name, create->cr_namelen,
+-				     &create->cr_iattr,S_IFCHR, rdev, &resfh);
++				     &attrs, S_IFCHR, rdev, &resfh);
+ 		break;
+ 
+ 	case NF4SOCK:
+ 		status = nfsd_create(rqstp, &cstate->current_fh,
+ 				     create->cr_name, create->cr_namelen,
+-				     &create->cr_iattr, S_IFSOCK, 0, &resfh);
++				     &attrs, S_IFSOCK, 0, &resfh);
+ 		break;
+ 
+ 	case NF4FIFO:
+ 		status = nfsd_create(rqstp, &cstate->current_fh,
+ 				     create->cr_name, create->cr_namelen,
+-				     &create->cr_iattr, S_IFIFO, 0, &resfh);
++				     &attrs, S_IFIFO, 0, &resfh);
+ 		break;
+ 
+ 	case NF4DIR:
+ 		create->cr_iattr.ia_valid &= ~ATTR_SIZE;
+ 		status = nfsd_create(rqstp, &cstate->current_fh,
+ 				     create->cr_name, create->cr_namelen,
+-				     &create->cr_iattr, S_IFDIR, 0, &resfh);
++				     &attrs, S_IFDIR, 0, &resfh);
+ 		break;
+ 
+ 	default:
+@@ -678,20 +828,17 @@ nfsd4_create(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	if (status)
+ 		goto out;
+ 
+-	if (create->cr_label.len)
+-		nfsd4_security_inode_setsecctx(&resfh, &create->cr_label, create->cr_bmval);
+-
+-	if (create->cr_acl != NULL)
+-		do_set_nfs4_acl(rqstp, &resfh, create->cr_acl,
+-				create->cr_bmval);
+-
+-	fh_unlock(&cstate->current_fh);
++	if (attrs.na_labelerr)
++		create->cr_bmval[2] &= ~FATTR4_WORD2_SECURITY_LABEL;
++	if (attrs.na_aclerr)
++		create->cr_bmval[0] &= ~FATTR4_WORD0_ACL;
+ 	set_change_info(&create->cr_cinfo, &cstate->current_fh);
+ 	fh_dup2(&cstate->current_fh, &resfh);
+ out:
+ 	fh_put(&resfh);
+ out_umask:
+ 	current->fs->umask = 0;
++	nfsd_attrs_free(&attrs);
+ 	return status;
+ }
+ 
+@@ -772,12 +919,16 @@ nfsd4_read(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	__be32 status;
+ 
+ 	read->rd_nf = NULL;
+-	if (read->rd_offset >= OFFSET_MAX)
+-		return nfserr_inval;
+ 
+ 	trace_nfsd_read_start(rqstp, &cstate->current_fh,
+ 			      read->rd_offset, read->rd_length);
+ 
++	read->rd_length = min_t(u32, read->rd_length, svc_max_payload(rqstp));
++	if (read->rd_offset > (u64)OFFSET_MAX)
++		read->rd_offset = (u64)OFFSET_MAX;
++	if (read->rd_offset + read->rd_length > (u64)OFFSET_MAX)
++		read->rd_length = (u64)OFFSET_MAX - read->rd_offset;
++
+ 	/*
+ 	 * If we do a zero copy read, then a client will see read data
+ 	 * that reflects the state of the file *after* performing the
+@@ -793,12 +944,7 @@ nfsd4_read(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	status = nfs4_preprocess_stateid_op(rqstp, cstate, &cstate->current_fh,
+ 					&read->rd_stateid, RD_STATE,
+ 					&read->rd_nf, NULL);
+-	if (status) {
+-		dprintk("NFSD: nfsd4_read: couldn't process stateid!\n");
+-		goto out;
+-	}
+-	status = nfs_ok;
+-out:
++
+ 	read->rd_rqstp = rqstp;
+ 	read->rd_fhp = &cstate->current_fh;
+ 	return status;
+@@ -860,10 +1006,8 @@ nfsd4_remove(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		return nfserr_grace;
+ 	status = nfsd_unlink(rqstp, &cstate->current_fh, 0,
+ 			     remove->rm_name, remove->rm_namelen);
+-	if (!status) {
+-		fh_unlock(&cstate->current_fh);
++	if (!status)
+ 		set_change_info(&remove->rm_cinfo, &cstate->current_fh);
+-	}
+ 	return status;
+ }
+ 
+@@ -903,7 +1047,6 @@ nfsd4_secinfo(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 				    &exp, &dentry);
+ 	if (err)
+ 		return err;
+-	fh_unlock(&cstate->current_fh);
+ 	if (d_really_is_negative(dentry)) {
+ 		exp_put(exp);
+ 		err = nfserr_noent;
+@@ -958,17 +1101,21 @@ nfsd4_setattr(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	      union nfsd4_op_u *u)
+ {
+ 	struct nfsd4_setattr *setattr = &u->setattr;
++	struct nfsd_attrs attrs = {
++		.na_iattr	= &setattr->sa_iattr,
++		.na_seclabel	= &setattr->sa_label,
++	};
++	struct inode *inode;
+ 	__be32 status = nfs_ok;
++	bool save_no_wcc;
+ 	int err;
+ 
+ 	if (setattr->sa_iattr.ia_valid & ATTR_SIZE) {
+ 		status = nfs4_preprocess_stateid_op(rqstp, cstate,
+ 				&cstate->current_fh, &setattr->sa_stateid,
+ 				WR_STATE, NULL, NULL);
+-		if (status) {
+-			dprintk("NFSD: nfsd4_setattr: couldn't process stateid!\n");
++		if (status)
+ 			return status;
+-		}
+ 	}
+ 	err = fh_want_write(&cstate->current_fh);
+ 	if (err)
+@@ -980,19 +1127,23 @@ nfsd4_setattr(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	if (status)
+ 		goto out;
+ 
+-	if (setattr->sa_acl != NULL)
+-		status = nfsd4_set_nfs4_acl(rqstp, &cstate->current_fh,
+-					    setattr->sa_acl);
+-	if (status)
+-		goto out;
+-	if (setattr->sa_label.len)
+-		status = nfsd4_set_nfs4_label(rqstp, &cstate->current_fh,
+-				&setattr->sa_label);
++	inode = cstate->current_fh.fh_dentry->d_inode;
++	status = nfsd4_acl_to_attr(S_ISDIR(inode->i_mode) ? NF4DIR : NF4REG,
++				   setattr->sa_acl, &attrs);
++
+ 	if (status)
+ 		goto out;
+-	status = nfsd_setattr(rqstp, &cstate->current_fh, &setattr->sa_iattr,
++	save_no_wcc = cstate->current_fh.fh_no_wcc;
++	cstate->current_fh.fh_no_wcc = true;
++	status = nfsd_setattr(rqstp, &cstate->current_fh, &attrs,
+ 				0, (time64_t)0);
++	cstate->current_fh.fh_no_wcc = save_no_wcc;
++	if (!status)
++		status = nfserrno(attrs.na_labelerr);
++	if (!status)
++		status = nfserrno(attrs.na_aclerr);
+ out:
++	nfsd_attrs_free(&attrs);
+ 	fh_drop_write(&cstate->current_fh);
+ 	return status;
+ }
+@@ -1017,15 +1168,12 @@ nfsd4_write(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 			       write->wr_offset, cnt);
+ 	status = nfs4_preprocess_stateid_op(rqstp, cstate, &cstate->current_fh,
+ 						stateid, WR_STATE, &nf, NULL);
+-	if (status) {
+-		dprintk("NFSD: nfsd4_write: couldn't process stateid!\n");
++	if (status)
+ 		return status;
+-	}
+ 
+ 	write->wr_how_written = write->wr_stable_how;
+ 
+-	nvecs = svc_fill_write_vector(rqstp, write->wr_pagelist,
+-				      &write->wr_head, write->wr_buflen);
++	nvecs = svc_fill_write_vector(rqstp, &write->wr_payload);
+ 	WARN_ON_ONCE(nvecs > ARRAY_SIZE(rqstp->rq_vec));
+ 
+ 	status = nfsd_vfs_write(rqstp, &cstate->current_fh, nf,
+@@ -1052,17 +1200,13 @@ nfsd4_verify_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 
+ 	status = nfs4_preprocess_stateid_op(rqstp, cstate, &cstate->save_fh,
+ 					    src_stateid, RD_STATE, src, NULL);
+-	if (status) {
+-		dprintk("NFSD: %s: couldn't process src stateid!\n", __func__);
++	if (status)
+ 		goto out;
+-	}
+ 
+ 	status = nfs4_preprocess_stateid_op(rqstp, cstate, &cstate->current_fh,
+ 					    dst_stateid, WR_STATE, dst, NULL);
+-	if (status) {
+-		dprintk("NFSD: %s: couldn't process dst stateid!\n", __func__);
++	if (status)
+ 		goto out_put_src;
+-	}
+ 
+ 	/* fix up for NFS-specific error code */
+ 	if (!S_ISREG(file_inode((*src)->nf_file)->i_mode) ||
+@@ -1095,7 +1239,7 @@ nfsd4_clone(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	if (status)
+ 		goto out;
+ 
+-	status = nfsd4_clone_file_range(src, clone->cl_src_pos,
++	status = nfsd4_clone_file_range(rqstp, src, clone->cl_src_pos,
+ 			dst, clone->cl_dst_pos, clone->cl_count,
+ 			EX_ISSYNC(cstate->current_fh.fh_export));
+ 
+@@ -1105,30 +1249,17 @@ nfsd4_clone(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	return status;
+ }
+ 
+-void nfs4_put_copy(struct nfsd4_copy *copy)
++static void nfs4_put_copy(struct nfsd4_copy *copy)
+ {
+ 	if (!refcount_dec_and_test(&copy->refcount))
+ 		return;
++	kfree(copy->cp_src);
+ 	kfree(copy);
+ }
+ 
+-static bool
+-check_and_set_stop_copy(struct nfsd4_copy *copy)
+-{
+-	bool value;
+-
+-	spin_lock(&copy->cp_clp->async_lock);
+-	value = copy->stopped;
+-	if (!copy->stopped)
+-		copy->stopped = true;
+-	spin_unlock(&copy->cp_clp->async_lock);
+-	return value;
+-}
+-
+ static void nfsd4_stop_copy(struct nfsd4_copy *copy)
+ {
+-	/* only 1 thread should stop the copy */
+-	if (!check_and_set_stop_copy(copy))
++	if (!test_and_set_bit(NFSD4_COPY_F_STOPPED, &copy->cp_flags))
+ 		kthread_stop(copy->copy_task);
+ 	nfs4_put_copy(copy);
+ }
+@@ -1165,12 +1296,88 @@ extern void nfs_sb_deactive(struct super_block *sb);
+ 
+ #define NFSD42_INTERSSC_MOUNTOPS "vers=4.2,addr=%s,sec=sys"
+ 
++/*
++ * setup a work entry in the ssc delayed unmount list.
++ */
++static __be32 nfsd4_ssc_setup_dul(struct nfsd_net *nn, char *ipaddr,
++				  struct nfsd4_ssc_umount_item **nsui)
++{
++	struct nfsd4_ssc_umount_item *ni = NULL;
++	struct nfsd4_ssc_umount_item *work = NULL;
++	struct nfsd4_ssc_umount_item *tmp;
++	DEFINE_WAIT(wait);
++	__be32 status = 0;
++
++	*nsui = NULL;
++	work = kzalloc(sizeof(*work), GFP_KERNEL);
++try_again:
++	spin_lock(&nn->nfsd_ssc_lock);
++	list_for_each_entry_safe(ni, tmp, &nn->nfsd_ssc_mount_list, nsui_list) {
++		if (strncmp(ni->nsui_ipaddr, ipaddr, sizeof(ni->nsui_ipaddr)))
++			continue;
++		/* found a match */
++		if (ni->nsui_busy) {
++			/*  wait - and try again */
++			prepare_to_wait(&nn->nfsd_ssc_waitq, &wait, TASK_IDLE);
++			spin_unlock(&nn->nfsd_ssc_lock);
++
++			/* allow 20secs for mount/unmount for now - revisit */
++			if (kthread_should_stop() ||
++					(freezable_schedule_timeout(20*HZ) == 0)) {
++				finish_wait(&nn->nfsd_ssc_waitq, &wait);
++				kfree(work);
++				return nfserr_eagain;
++			}
++			finish_wait(&nn->nfsd_ssc_waitq, &wait);
++			goto try_again;
++		}
++		*nsui = ni;
++		refcount_inc(&ni->nsui_refcnt);
++		spin_unlock(&nn->nfsd_ssc_lock);
++		kfree(work);
++
++		/* return vfsmount in (*nsui)->nsui_vfsmount */
++		return 0;
++	}
++	if (work) {
++		strscpy(work->nsui_ipaddr, ipaddr, sizeof(work->nsui_ipaddr) - 1);
++		refcount_set(&work->nsui_refcnt, 2);
++		work->nsui_busy = true;
++		list_add_tail(&work->nsui_list, &nn->nfsd_ssc_mount_list);
++		*nsui = work;
++	} else
++		status = nfserr_resource;
++	spin_unlock(&nn->nfsd_ssc_lock);
++	return status;
++}
++
++static void nfsd4_ssc_update_dul(struct nfsd_net *nn,
++				 struct nfsd4_ssc_umount_item *nsui,
++				 struct vfsmount *ss_mnt)
++{
++	spin_lock(&nn->nfsd_ssc_lock);
++	nsui->nsui_vfsmount = ss_mnt;
++	nsui->nsui_busy = false;
++	wake_up_all(&nn->nfsd_ssc_waitq);
++	spin_unlock(&nn->nfsd_ssc_lock);
++}
++
++static void nfsd4_ssc_cancel_dul(struct nfsd_net *nn,
++				 struct nfsd4_ssc_umount_item *nsui)
++{
++	spin_lock(&nn->nfsd_ssc_lock);
++	list_del(&nsui->nsui_list);
++	wake_up_all(&nn->nfsd_ssc_waitq);
++	spin_unlock(&nn->nfsd_ssc_lock);
++	kfree(nsui);
++}
++
+ /*
+  * Support one copy source server for now.
+  */
+ static __be32
+ nfsd4_interssc_connect(struct nl4_server *nss, struct svc_rqst *rqstp,
+-		       struct vfsmount **mount)
++		       struct nfsd4_ssc_umount_item **nsui)
+ {
+ 	struct file_system_type *type;
+ 	struct vfsmount *ss_mnt;
+@@ -1181,12 +1388,14 @@ nfsd4_interssc_connect(struct nl4_server *nss, struct svc_rqst *rqstp,
+ 	char *ipaddr, *dev_name, *raw_data;
+ 	int len, raw_len;
+ 	__be32 status = nfserr_inval;
++	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 
+ 	naddr = &nss->u.nl4_addr;
+ 	tmp_addrlen = rpc_uaddr2sockaddr(SVC_NET(rqstp), naddr->addr,
+ 					 naddr->addr_len,
+ 					 (struct sockaddr *)&tmp_addr,
+ 					 sizeof(tmp_addr));
++	*nsui = NULL;
+ 	if (tmp_addrlen == 0)
+ 		goto out_err;
+ 
+@@ -1229,14 +1438,23 @@ nfsd4_interssc_connect(struct nl4_server *nss, struct svc_rqst *rqstp,
+ 		goto out_free_rawdata;
+ 	snprintf(dev_name, len + 5, "%s%s%s:/", startsep, ipaddr, endsep);
+ 
++	status = nfsd4_ssc_setup_dul(nn, ipaddr, nsui);
++	if (status)
++		goto out_free_devname;
++	if ((*nsui)->nsui_vfsmount)
++		goto out_done;
++
+ 	/* Use an 'internal' mount: SB_KERNMOUNT -> MNT_INTERNAL */
+ 	ss_mnt = vfs_kern_mount(type, SB_KERNMOUNT, dev_name, raw_data);
+ 	module_put(type->owner);
+-	if (IS_ERR(ss_mnt))
++	if (IS_ERR(ss_mnt)) {
++		status = nfserr_nodev;
++		nfsd4_ssc_cancel_dul(nn, *nsui);
+ 		goto out_free_devname;
+-
++	}
++	nfsd4_ssc_update_dul(nn, *nsui, ss_mnt);
++out_done:
+ 	status = 0;
+-	*mount = ss_mnt;
+ 
+ out_free_devname:
+ 	kfree(dev_name);
+@@ -1260,7 +1478,7 @@ nfsd4_interssc_connect(struct nl4_server *nss, struct svc_rqst *rqstp,
+ static __be32
+ nfsd4_setup_inter_ssc(struct svc_rqst *rqstp,
+ 		      struct nfsd4_compound_state *cstate,
+-		      struct nfsd4_copy *copy, struct vfsmount **mount)
++		      struct nfsd4_copy *copy)
+ {
+ 	struct svc_fh *s_fh = NULL;
+ 	stateid_t *s_stid = &copy->cp_src_stateid;
+@@ -1273,14 +1491,14 @@ nfsd4_setup_inter_ssc(struct svc_rqst *rqstp,
+ 	if (status)
+ 		goto out;
+ 
+-	status = nfsd4_interssc_connect(&copy->cp_src, rqstp, mount);
++	status = nfsd4_interssc_connect(copy->cp_src, rqstp, &copy->ss_nsui);
+ 	if (status)
+ 		goto out;
+ 
+ 	s_fh = &cstate->save_fh;
+ 
+ 	copy->c_fh.size = s_fh->fh_handle.fh_size;
+-	memcpy(copy->c_fh.data, &s_fh->fh_handle.fh_base, copy->c_fh.size);
++	memcpy(copy->c_fh.data, &s_fh->fh_handle.fh_raw, copy->c_fh.size);
+ 	copy->stateid.seqid = cpu_to_be32(s_stid->si_generation);
+ 	memcpy(copy->stateid.other, (void *)&s_stid->si_opaque,
+ 	       sizeof(stateid_opaque_t));
+@@ -1291,13 +1509,26 @@ nfsd4_setup_inter_ssc(struct svc_rqst *rqstp,
+ }
+ 
+ static void
+-nfsd4_cleanup_inter_ssc(struct vfsmount *ss_mnt, struct nfsd_file *src,
++nfsd4_cleanup_inter_ssc(struct nfsd4_ssc_umount_item *nsui, struct file *filp,
+ 			struct nfsd_file *dst)
+ {
+-	nfs42_ssc_close(src->nf_file);
+-	fput(src->nf_file);
+-	nfsd_file_put(dst);
+-	mntput(ss_mnt);
++	struct nfsd_net *nn = net_generic(dst->nf_net, nfsd_net_id);
++	long timeout = msecs_to_jiffies(nfsd4_ssc_umount_timeout);
++
++	nfs42_ssc_close(filp);
++	fput(filp);
++
++	spin_lock(&nn->nfsd_ssc_lock);
++	list_del(&nsui->nsui_list);
++	/*
++	 * vfsmount can be shared by multiple exports,
++	 * decrement refcnt. If the count drops to 1 it
++	 * will be unmounted when nsui_expire expires.
++	 */
++	refcount_dec(&nsui->nsui_refcnt);
++	nsui->nsui_expire = jiffies + timeout;
++	list_add_tail(&nsui->nsui_list, &nn->nfsd_ssc_mount_list);
++	spin_unlock(&nn->nfsd_ssc_lock);
+ }
+ 
+ #else /* CONFIG_NFSD_V4_2_INTER_SSC */
+@@ -1305,15 +1536,13 @@ nfsd4_cleanup_inter_ssc(struct vfsmount *ss_mnt, struct nfsd_file *src,
+ static __be32
+ nfsd4_setup_inter_ssc(struct svc_rqst *rqstp,
+ 		      struct nfsd4_compound_state *cstate,
+-		      struct nfsd4_copy *copy,
+-		      struct vfsmount **mount)
++		      struct nfsd4_copy *copy)
+ {
+-	*mount = NULL;
+ 	return nfserr_inval;
+ }
+ 
+ static void
+-nfsd4_cleanup_inter_ssc(struct vfsmount *ss_mnt, struct nfsd_file *src,
++nfsd4_cleanup_inter_ssc(struct nfsd4_ssc_umount_item *nsui, struct file *filp,
+ 			struct nfsd_file *dst)
+ {
+ }
+@@ -1336,23 +1565,21 @@ nfsd4_setup_intra_ssc(struct svc_rqst *rqstp,
+ 				 &copy->nf_dst);
+ }
+ 
+-static void
+-nfsd4_cleanup_intra_ssc(struct nfsd_file *src, struct nfsd_file *dst)
+-{
+-	nfsd_file_put(src);
+-	nfsd_file_put(dst);
+-}
+-
+ static void nfsd4_cb_offload_release(struct nfsd4_callback *cb)
+ {
+-	struct nfsd4_copy *copy = container_of(cb, struct nfsd4_copy, cp_cb);
++	struct nfsd4_cb_offload *cbo =
++		container_of(cb, struct nfsd4_cb_offload, co_cb);
+ 
+-	nfs4_put_copy(copy);
++	kfree(cbo);
+ }
+ 
+ static int nfsd4_cb_offload_done(struct nfsd4_callback *cb,
+ 				 struct rpc_task *task)
+ {
++	struct nfsd4_cb_offload *cbo =
++		container_of(cb, struct nfsd4_cb_offload, co_cb);
++
++	trace_nfsd_cb_offload_done(&cbo->co_res.cb_stateid, task);
+ 	return 1;
+ }
+ 
+@@ -1363,20 +1590,28 @@ static const struct nfsd4_callback_ops nfsd4_cb_offload_ops = {
+ 
+ static void nfsd4_init_copy_res(struct nfsd4_copy *copy, bool sync)
+ {
+-	copy->cp_res.wr_stable_how = NFS_UNSTABLE;
+-	copy->cp_synchronous = sync;
++	copy->cp_res.wr_stable_how =
++		test_bit(NFSD4_COPY_F_COMMITTED, &copy->cp_flags) ?
++			NFS_FILE_SYNC : NFS_UNSTABLE;
++	nfsd4_copy_set_sync(copy, sync);
+ 	gen_boot_verifier(&copy->cp_res.wr_verifier, copy->cp_clp->net);
+ }
+ 
+-static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy)
++static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy,
++				     struct file *dst,
++				     struct file *src)
+ {
+-	struct file *dst = copy->nf_dst->nf_file;
+-	struct file *src = copy->nf_src->nf_file;
++	errseq_t since;
+ 	ssize_t bytes_copied = 0;
+-	size_t bytes_total = copy->cp_count;
++	u64 bytes_total = copy->cp_count;
+ 	u64 src_pos = copy->cp_src_pos;
+ 	u64 dst_pos = copy->cp_dst_pos;
++	int status;
++	loff_t end;
+ 
++	/* See RFC 7862 p.67: */
++	if (bytes_total == 0)
++		bytes_total = ULLONG_MAX;
+ 	do {
+ 		if (kthread_should_stop())
+ 			break;
+@@ -1388,16 +1623,29 @@ static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy)
+ 		copy->cp_res.wr_bytes_written += bytes_copied;
+ 		src_pos += bytes_copied;
+ 		dst_pos += bytes_copied;
+-	} while (bytes_total > 0 && !copy->cp_synchronous);
++	} while (bytes_total > 0 && nfsd4_copy_is_async(copy));
++	/* for a non-zero asynchronous copy do a commit of data */
++	if (nfsd4_copy_is_async(copy) && copy->cp_res.wr_bytes_written > 0) {
++		since = READ_ONCE(dst->f_wb_err);
++		end = copy->cp_dst_pos + copy->cp_res.wr_bytes_written - 1;
++		status = vfs_fsync_range(dst, copy->cp_dst_pos, end, 0);
++		if (!status)
++			status = filemap_check_wb_err(dst->f_mapping, since);
++		if (!status)
++			set_bit(NFSD4_COPY_F_COMMITTED, &copy->cp_flags);
++	}
+ 	return bytes_copied;
+ }
+ 
+-static __be32 nfsd4_do_copy(struct nfsd4_copy *copy, bool sync)
++static __be32 nfsd4_do_copy(struct nfsd4_copy *copy,
++			    struct file *src, struct file *dst,
++			    bool sync)
+ {
+ 	__be32 status;
+ 	ssize_t bytes;
+ 
+-	bytes = _nfsd_copy_file_range(copy);
++	bytes = _nfsd_copy_file_range(copy, dst, src);
++
+ 	/* for async copy, we ignore the error, client can always retry
+ 	 * to get the error
+ 	 */
+@@ -1407,13 +1655,6 @@ static __be32 nfsd4_do_copy(struct nfsd4_copy *copy, bool sync)
+ 		nfsd4_init_copy_res(copy, sync);
+ 		status = nfs_ok;
+ 	}
+-
+-	if (!copy->cp_intra) /* Inter server SSC */
+-		nfsd4_cleanup_inter_ssc(copy->ss_mnt, copy->nf_src,
+-					copy->nf_dst);
+-	else
+-		nfsd4_cleanup_intra_ssc(copy->nf_src, copy->nf_dst);
+-
+ 	return status;
+ }
+ 
+@@ -1422,71 +1663,100 @@ static void dup_copy_fields(struct nfsd4_copy *src, struct nfsd4_copy *dst)
+ 	dst->cp_src_pos = src->cp_src_pos;
+ 	dst->cp_dst_pos = src->cp_dst_pos;
+ 	dst->cp_count = src->cp_count;
+-	dst->cp_synchronous = src->cp_synchronous;
++	dst->cp_flags = src->cp_flags;
+ 	memcpy(&dst->cp_res, &src->cp_res, sizeof(src->cp_res));
+ 	memcpy(&dst->fh, &src->fh, sizeof(src->fh));
+ 	dst->cp_clp = src->cp_clp;
+ 	dst->nf_dst = nfsd_file_get(src->nf_dst);
+-	dst->cp_intra = src->cp_intra;
+-	if (src->cp_intra) /* for inter, file_src doesn't exist yet */
++	/* for inter, nf_src doesn't exist yet */
++	if (!nfsd4_ssc_is_inter(src))
+ 		dst->nf_src = nfsd_file_get(src->nf_src);
+ 
+ 	memcpy(&dst->cp_stateid, &src->cp_stateid, sizeof(src->cp_stateid));
+-	memcpy(&dst->cp_src, &src->cp_src, sizeof(struct nl4_server));
++	memcpy(dst->cp_src, src->cp_src, sizeof(struct nl4_server));
+ 	memcpy(&dst->stateid, &src->stateid, sizeof(src->stateid));
+ 	memcpy(&dst->c_fh, &src->c_fh, sizeof(src->c_fh));
+-	dst->ss_mnt = src->ss_mnt;
++	dst->ss_nsui = src->ss_nsui;
++}
++
++static void release_copy_files(struct nfsd4_copy *copy)
++{
++	if (copy->nf_src)
++		nfsd_file_put(copy->nf_src);
++	if (copy->nf_dst)
++		nfsd_file_put(copy->nf_dst);
+ }
+ 
+ static void cleanup_async_copy(struct nfsd4_copy *copy)
+ {
+ 	nfs4_free_copy_state(copy);
+-	nfsd_file_put(copy->nf_dst);
+-	if (copy->cp_intra)
+-		nfsd_file_put(copy->nf_src);
+-	spin_lock(&copy->cp_clp->async_lock);
+-	list_del(&copy->copies);
+-	spin_unlock(&copy->cp_clp->async_lock);
++	release_copy_files(copy);
++	if (copy->cp_clp) {
++		spin_lock(&copy->cp_clp->async_lock);
++		if (!list_empty(&copy->copies))
++			list_del_init(&copy->copies);
++		spin_unlock(&copy->cp_clp->async_lock);
++	}
+ 	nfs4_put_copy(copy);
+ }
+ 
++static void nfsd4_send_cb_offload(struct nfsd4_copy *copy, __be32 nfserr)
++{
++	struct nfsd4_cb_offload *cbo;
++
++	cbo = kzalloc(sizeof(*cbo), GFP_KERNEL);
++	if (!cbo)
++		return;
++
++	memcpy(&cbo->co_res, &copy->cp_res, sizeof(copy->cp_res));
++	memcpy(&cbo->co_fh, &copy->fh, sizeof(copy->fh));
++	cbo->co_nfserr = nfserr;
++
++	nfsd4_init_cb(&cbo->co_cb, copy->cp_clp, &nfsd4_cb_offload_ops,
++		      NFSPROC4_CLNT_CB_OFFLOAD);
++	trace_nfsd_cb_offload(copy->cp_clp, &cbo->co_res.cb_stateid,
++			      &cbo->co_fh, copy->cp_count, nfserr);
++	nfsd4_run_cb(&cbo->co_cb);
++}
++
++/**
++ * nfsd4_do_async_copy - kthread function for background server-side COPY
++ * @data: arguments for COPY operation
++ *
++ * Return values:
++ *   %0: Copy operation is done.
++ */
+ static int nfsd4_do_async_copy(void *data)
+ {
+ 	struct nfsd4_copy *copy = (struct nfsd4_copy *)data;
+-	struct nfsd4_copy *cb_copy;
++	__be32 nfserr;
+ 
+-	if (!copy->cp_intra) { /* Inter server SSC */
+-		copy->nf_src = kzalloc(sizeof(struct nfsd_file), GFP_KERNEL);
+-		if (!copy->nf_src) {
+-			copy->nfserr = nfserr_serverfault;
+-			/* ss_mnt will be unmounted by the laundromat */
+-			goto do_callback;
+-		}
+-		copy->nf_src->nf_file = nfs42_ssc_open(copy->ss_mnt, &copy->c_fh,
+-					      &copy->stateid);
+-		if (IS_ERR(copy->nf_src->nf_file)) {
+-			copy->nfserr = nfserr_offload_denied;
++	if (nfsd4_ssc_is_inter(copy)) {
++		struct file *filp;
++
++		filp = nfs42_ssc_open(copy->ss_nsui->nsui_vfsmount,
++				      &copy->c_fh, &copy->stateid);
++		if (IS_ERR(filp)) {
++			switch (PTR_ERR(filp)) {
++			case -EBADF:
++				nfserr = nfserr_wrong_type;
++				break;
++			default:
++				nfserr = nfserr_offload_denied;
++			}
+ 			/* ss_mnt will be unmounted by the laundromat */
+ 			goto do_callback;
+ 		}
++		nfserr = nfsd4_do_copy(copy, filp, copy->nf_dst->nf_file,
++				       false);
++		nfsd4_cleanup_inter_ssc(copy->ss_nsui, filp, copy->nf_dst);
++	} else {
++		nfserr = nfsd4_do_copy(copy, copy->nf_src->nf_file,
++				       copy->nf_dst->nf_file, false);
+ 	}
+ 
+-	copy->nfserr = nfsd4_do_copy(copy, 0);
+ do_callback:
+-	cb_copy = kzalloc(sizeof(struct nfsd4_copy), GFP_KERNEL);
+-	if (!cb_copy)
+-		goto out;
+-	refcount_set(&cb_copy->refcount, 1);
+-	memcpy(&cb_copy->cp_res, &copy->cp_res, sizeof(copy->cp_res));
+-	cb_copy->cp_clp = copy->cp_clp;
+-	cb_copy->nfserr = copy->nfserr;
+-	memcpy(&cb_copy->fh, &copy->fh, sizeof(copy->fh));
+-	nfsd4_init_cb(&cb_copy->cp_cb, cb_copy->cp_clp,
+-			&nfsd4_cb_offload_ops, NFSPROC4_CLNT_CB_OFFLOAD);
+-	nfsd4_run_cb(&cb_copy->cp_cb);
+-out:
+-	if (!copy->cp_intra)
+-		kfree(copy->nf_src);
++	nfsd4_send_cb_offload(copy, nfserr);
+ 	cleanup_async_copy(copy);
+ 	return 0;
+ }
+@@ -1499,13 +1769,12 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	__be32 status;
+ 	struct nfsd4_copy *async_copy = NULL;
+ 
+-	if (!copy->cp_intra) { /* Inter server SSC */
+-		if (!inter_copy_offload_enable || copy->cp_synchronous) {
++	if (nfsd4_ssc_is_inter(copy)) {
++		if (!inter_copy_offload_enable || nfsd4_copy_is_sync(copy)) {
+ 			status = nfserr_notsupp;
+ 			goto out;
+ 		}
+-		status = nfsd4_setup_inter_ssc(rqstp, cstate, copy,
+-				&copy->ss_mnt);
++		status = nfsd4_setup_inter_ssc(rqstp, cstate, copy);
+ 		if (status)
+ 			return nfserr_offload_denied;
+ 	} else {
+@@ -1517,17 +1786,21 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	copy->cp_clp = cstate->clp;
+ 	memcpy(&copy->fh, &cstate->current_fh.fh_handle,
+ 		sizeof(struct knfsd_fh));
+-	if (!copy->cp_synchronous) {
++	if (nfsd4_copy_is_async(copy)) {
+ 		struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 
+ 		status = nfserrno(-ENOMEM);
+ 		async_copy = kzalloc(sizeof(struct nfsd4_copy), GFP_KERNEL);
+ 		if (!async_copy)
+ 			goto out_err;
++		INIT_LIST_HEAD(&async_copy->copies);
++		refcount_set(&async_copy->refcount, 1);
++		async_copy->cp_src = kmalloc(sizeof(*async_copy->cp_src), GFP_KERNEL);
++		if (!async_copy->cp_src)
++			goto out_err;
+ 		if (!nfs4_init_copy_state(nn, copy))
+ 			goto out_err;
+-		refcount_set(&async_copy->refcount, 1);
+-		memcpy(&copy->cp_res.cb_stateid, &copy->cp_stateid.stid,
++		memcpy(&copy->cp_res.cb_stateid, &copy->cp_stateid.cs_stid,
+ 			sizeof(copy->cp_res.cb_stateid));
+ 		dup_copy_fields(copy, async_copy);
+ 		async_copy->copy_task = kthread_create(nfsd4_do_async_copy,
+@@ -1541,18 +1814,24 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		wake_up_process(async_copy->copy_task);
+ 		status = nfs_ok;
+ 	} else {
+-		status = nfsd4_do_copy(copy, 1);
++		status = nfsd4_do_copy(copy, copy->nf_src->nf_file,
++				       copy->nf_dst->nf_file, true);
+ 	}
+ out:
++	release_copy_files(copy);
+ 	return status;
+ out_err:
++	if (nfsd4_ssc_is_inter(copy)) {
++		/*
++		 * Source's vfsmount of inter-copy will be unmounted
++		 * by the laundromat. Use copy instead of async_copy
++		 * since async_copy->ss_nsui might not be set yet.
++		 */
++		refcount_dec(&copy->ss_nsui->nsui_refcnt);
++	}
+ 	if (async_copy)
+ 		cleanup_async_copy(async_copy);
+ 	status = nfserrno(-ENOMEM);
+-	/*
+-	 * source's vfsmount of inter-copy will be unmounted
+-	 * by the laundromat
+-	 */
+ 	goto out;
+ }
+ 
+@@ -1563,7 +1842,7 @@ find_async_copy(struct nfs4_client *clp, stateid_t *stateid)
+ 
+ 	spin_lock(&clp->async_lock);
+ 	list_for_each_entry(copy, &clp->async_copies, copies) {
+-		if (memcmp(&copy->cp_stateid.stid, stateid, NFS4_STATEID_SIZE))
++		if (memcmp(&copy->cp_stateid.cs_stid, stateid, NFS4_STATEID_SIZE))
+ 			continue;
+ 		refcount_inc(&copy->refcount);
+ 		spin_unlock(&clp->async_lock);
+@@ -1617,16 +1896,16 @@ nfsd4_copy_notify(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	cps = nfs4_alloc_init_cpntf_state(nn, stid);
+ 	if (!cps)
+ 		goto out;
+-	memcpy(&cn->cpn_cnr_stateid, &cps->cp_stateid.stid, sizeof(stateid_t));
++	memcpy(&cn->cpn_cnr_stateid, &cps->cp_stateid.cs_stid, sizeof(stateid_t));
+ 	memcpy(&cps->cp_p_stateid, &stid->sc_stateid, sizeof(stateid_t));
+ 	memcpy(&cps->cp_p_clid, &clp->cl_clientid, sizeof(clientid_t));
+ 
+ 	/* For now, only return one server address in cpn_src, the
+ 	 * address used by the client to connect to this server.
+ 	 */
+-	cn->cpn_src.nl4_type = NL4_NETADDR;
++	cn->cpn_src->nl4_type = NL4_NETADDR;
+ 	status = nfsd4_set_netaddr((struct sockaddr *)&rqstp->rq_daddr,
+-				 &cn->cpn_src.u.nl4_addr);
++				 &cn->cpn_src->u.nl4_addr);
+ 	WARN_ON_ONCE(status);
+ 	if (status) {
+ 		nfs4_put_cpntf_state(nn, cps);
+@@ -1647,10 +1926,8 @@ nfsd4_fallocate(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	status = nfs4_preprocess_stateid_op(rqstp, cstate, &cstate->current_fh,
+ 					    &fallocate->falloc_stateid,
+ 					    WR_STATE, &nf, NULL);
+-	if (status != nfs_ok) {
+-		dprintk("NFSD: nfsd4_fallocate: couldn't process stateid!\n");
++	if (status != nfs_ok)
+ 		return status;
+-	}
+ 
+ 	status = nfsd4_vfs_fallocate(rqstp, &cstate->current_fh, nf->nf_file,
+ 				     fallocate->falloc_offset,
+@@ -1706,10 +1983,8 @@ nfsd4_seek(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	status = nfs4_preprocess_stateid_op(rqstp, cstate, &cstate->current_fh,
+ 					    &seek->seek_stateid,
+ 					    RD_STATE, &nf, NULL);
+-	if (status) {
+-		dprintk("NFSD: nfsd4_seek: couldn't process stateid!\n");
++	if (status)
+ 		return status;
+-	}
+ 
+ 	switch (seek->seek_whence) {
+ 	case NFS4_CONTENT_DATA:
+@@ -1877,7 +2152,7 @@ nfsd4_getdeviceinfo(struct svc_rqst *rqstp,
+ 	nfserr = nfs_ok;
+ 	if (gdp->gd_maxcount != 0) {
+ 		nfserr = ops->proc_getdeviceinfo(exp->ex_path.mnt->mnt_sb,
+-				rqstp, cstate->session->se_client, gdp);
++				rqstp, cstate->clp, gdp);
+ 	}
+ 
+ 	gdp->gd_notify_types &= ops->notify_types;
+@@ -2163,7 +2438,7 @@ nfsd4_proc_null(struct svc_rqst *rqstp)
+ static inline void nfsd4_increment_op_stats(u32 opnum)
+ {
+ 	if (opnum >= FIRST_NFS4_OP && opnum <= LAST_NFS4_OP)
+-		nfsdstats.nfs4_opcount[opnum]++;
++		percpu_counter_inc(&nfsdstats.counter[NFSD_STATS_NFS4_OP(opnum)]);
+ }
+ 
+ static const struct nfsd4_operation nfsd4_ops[];
+@@ -2253,25 +2528,6 @@ static bool need_wrongsec_check(struct svc_rqst *rqstp)
+ 	return !(nextd->op_flags & OP_HANDLES_WRONGSEC);
+ }
+ 
+-static void svcxdr_init_encode(struct svc_rqst *rqstp,
+-			       struct nfsd4_compoundres *resp)
+-{
+-	struct xdr_stream *xdr = &resp->xdr;
+-	struct xdr_buf *buf = &rqstp->rq_res;
+-	struct kvec *head = buf->head;
+-
+-	xdr->buf = buf;
+-	xdr->iov = head;
+-	xdr->p   = head->iov_base + head->iov_len;
+-	xdr->end = head->iov_base + PAGE_SIZE - rqstp->rq_auth_slack;
+-	/* Tail and page_len should be zero at this point: */
+-	buf->len = buf->head[0].iov_len;
+-	xdr->scratch.iov_len = 0;
+-	xdr->page_ptr = buf->pages - 1;
+-	buf->buflen = PAGE_SIZE * (1 + rqstp->rq_page_end - buf->pages)
+-		- rqstp->rq_auth_slack;
+-}
+-
+ #ifdef CONFIG_NFSD_V4_2_INTER_SSC
+ static void
+ check_if_stalefh_allowed(struct nfsd4_compoundargs *args)
+@@ -2299,7 +2555,7 @@ check_if_stalefh_allowed(struct nfsd4_compoundargs *args)
+ 				return;
+ 			}
+ 			putfh = (struct nfsd4_putfh *)&saved_op->u;
+-			if (!copy->cp_intra)
++			if (nfsd4_ssc_is_inter(copy))
+ 				putfh->no_verify = true;
+ 		}
+ 	}
+@@ -2326,10 +2582,14 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+ 	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 	__be32		status;
+ 
+-	svcxdr_init_encode(rqstp, resp);
+-	resp->tagp = resp->xdr.p;
++	resp->xdr = &rqstp->rq_res_stream;
++	resp->statusp = resp->xdr->p;
++
++	/* reserve space for: NFS status code */
++	xdr_reserve_space(resp->xdr, XDR_UNIT);
++
+ 	/* reserve space for: taglen, tag, and opcnt */
+-	xdr_reserve_space(&resp->xdr, 8 + args->taglen);
++	xdr_reserve_space(resp->xdr, XDR_UNIT * 2 + args->taglen);
+ 	resp->taglen = args->taglen;
+ 	resp->tag = args->tag;
+ 	resp->rqstp = rqstp;
+@@ -2348,9 +2608,6 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+ 	status = nfserr_minor_vers_mismatch;
+ 	if (nfsd_minorversion(nn, args->minorversion, NFSD_TEST) <= 0)
+ 		goto out;
+-	status = nfserr_resource;
+-	if (args->opcnt > NFSD_MAX_OPS_PER_COMPOUND)
+-		goto out;
+ 
+ 	status = nfs41_check_op_ordering(args);
+ 	if (status) {
+@@ -2363,10 +2620,20 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+ 
+ 	rqstp->rq_lease_breaker = (void **)&cstate->clp;
+ 
+-	trace_nfsd_compound(rqstp, args->opcnt);
++	trace_nfsd_compound(rqstp, args->client_opcnt);
+ 	while (!status && resp->opcnt < args->opcnt) {
+ 		op = &args->ops[resp->opcnt++];
+ 
++		if (unlikely(resp->opcnt == NFSD_MAX_OPS_PER_COMPOUND)) {
++			/* If there are still more operations to process,
++			 * stop here and report NFS4ERR_RESOURCE. */
++			if (cstate->minorversion == 0 &&
++			    args->client_opcnt > resp->opcnt) {
++				op->status = nfserr_resource;
++				goto encode_op;
++			}
++		}
++
+ 		/*
+ 		 * The XDR decode routines may have pre-set op->status;
+ 		 * for example, if there is a miscellaneous XDR error
+@@ -2390,13 +2657,13 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+ 			goto encode_op;
+ 		}
+ 
+-		fh_clear_wcc(current_fh);
++		fh_clear_pre_post_attrs(current_fh);
+ 
+ 		/* If op is non-idempotent */
+ 		if (op->opdesc->op_flags & OP_MODIFIES_SOMETHING) {
+ 			/*
+ 			 * Don't execute this op if we couldn't encode a
+-			 * succesful reply:
++			 * successful reply:
+ 			 */
+ 			u32 plen = op->opdesc->op_rsize_bop(rqstp, op);
+ 			/*
+@@ -2435,15 +2702,15 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+ encode_op:
+ 		if (op->status == nfserr_replay_me) {
+ 			op->replay = &cstate->replay_owner->so_replay;
+-			nfsd4_encode_replay(&resp->xdr, op);
++			nfsd4_encode_replay(resp->xdr, op);
+ 			status = op->status = op->replay->rp_status;
+ 		} else {
+ 			nfsd4_encode_operation(resp, op);
+ 			status = op->status;
+ 		}
+ 
+-		trace_nfsd_compound_status(args->opcnt, resp->opcnt, status,
+-					   nfsd4_op_name(op->opnum));
++		trace_nfsd_compound_status(args->client_opcnt, resp->opcnt,
++					   status, nfsd4_op_name(op->opnum));
+ 
+ 		nfsd4_cstate_clear_replay(cstate);
+ 		nfsd4_increment_op_stats(op->opnum);
+@@ -2477,28 +2744,49 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+ 
+ #define op_encode_channel_attrs_maxsz	(6 + 1 + 1)
+ 
+-static inline u32 nfsd4_only_status_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++/*
++ * The _rsize() helpers are invoked by the NFSv4 COMPOUND decoder, which
++ * is called before sunrpc sets rq_res.buflen. Thus we have to compute
++ * the maximum payload size here, based on transport limits and the size
++ * of the remaining space in the rq_pages array.
++ */
++static u32 nfsd4_max_payload(const struct svc_rqst *rqstp)
++{
++	u32 buflen;
++
++	buflen = (rqstp->rq_page_end - rqstp->rq_next_page) * PAGE_SIZE;
++	buflen -= rqstp->rq_auth_slack;
++	buflen -= rqstp->rq_res.head[0].iov_len;
++	return min_t(u32, buflen, svc_max_payload(rqstp));
++}
++
++static u32 nfsd4_only_status_rsize(const struct svc_rqst *rqstp,
++				   const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_status_stateid_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_status_stateid_rsize(const struct svc_rqst *rqstp,
++				      const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + op_encode_stateid_maxsz)* sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_access_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_access_rsize(const struct svc_rqst *rqstp,
++			      const struct nfsd4_op *op)
+ {
+ 	/* ac_supported, ac_resp_access */
+ 	return (op_encode_hdr_size + 2)* sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_commit_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_commit_rsize(const struct svc_rqst *rqstp,
++			      const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + op_encode_verifier_maxsz) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_create_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_create_rsize(const struct svc_rqst *rqstp,
++			      const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + op_encode_change_info_maxsz
+ 		+ nfs4_fattr_bitmap_maxsz) * sizeof(__be32);
+@@ -2509,17 +2797,17 @@ static inline u32 nfsd4_create_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op
+  * the op prematurely if the estimate is too large.  We may turn off splice
+  * reads unnecessarily.
+  */
+-static inline u32 nfsd4_getattr_rsize(struct svc_rqst *rqstp,
+-				      struct nfsd4_op *op)
++static u32 nfsd4_getattr_rsize(const struct svc_rqst *rqstp,
++			       const struct nfsd4_op *op)
+ {
+-	u32 *bmap = op->u.getattr.ga_bmval;
++	const u32 *bmap = op->u.getattr.ga_bmval;
+ 	u32 bmap0 = bmap[0], bmap1 = bmap[1], bmap2 = bmap[2];
+ 	u32 ret = 0;
+ 
+ 	if (bmap0 & FATTR4_WORD0_ACL)
+-		return svc_max_payload(rqstp);
++		return nfsd4_max_payload(rqstp);
+ 	if (bmap0 & FATTR4_WORD0_FS_LOCATIONS)
+-		return svc_max_payload(rqstp);
++		return nfsd4_max_payload(rqstp);
+ 
+ 	if (bmap1 & FATTR4_WORD1_OWNER) {
+ 		ret += IDMAP_NAMESZ + 4;
+@@ -2547,24 +2835,28 @@ static inline u32 nfsd4_getattr_rsize(struct svc_rqst *rqstp,
+ 	return ret;
+ }
+ 
+-static inline u32 nfsd4_getfh_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_getfh_rsize(const struct svc_rqst *rqstp,
++			     const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + 1) * sizeof(__be32) + NFS4_FHSIZE;
+ }
+ 
+-static inline u32 nfsd4_link_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_link_rsize(const struct svc_rqst *rqstp,
++			    const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + op_encode_change_info_maxsz)
+ 		* sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_lock_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_lock_rsize(const struct svc_rqst *rqstp,
++			    const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + op_encode_lock_denied_maxsz)
+ 		* sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_open_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_open_rsize(const struct svc_rqst *rqstp,
++			    const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + op_encode_stateid_maxsz
+ 		+ op_encode_change_info_maxsz + 1
+@@ -2572,20 +2864,18 @@ static inline u32 nfsd4_open_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
+ 		+ op_encode_delegation_maxsz) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_read_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_read_rsize(const struct svc_rqst *rqstp,
++			    const struct nfsd4_op *op)
+ {
+-	u32 maxcount = 0, rlen = 0;
+-
+-	maxcount = svc_max_payload(rqstp);
+-	rlen = min(op->u.read.rd_length, maxcount);
++	u32 rlen = min(op->u.read.rd_length, nfsd4_max_payload(rqstp));
+ 
+ 	return (op_encode_hdr_size + 2 + XDR_QUADLEN(rlen)) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_read_plus_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_read_plus_rsize(const struct svc_rqst *rqstp,
++				 const struct nfsd4_op *op)
+ {
+-	u32 maxcount = svc_max_payload(rqstp);
+-	u32 rlen = min(op->u.read.rd_length, maxcount);
++	u32 rlen = min(op->u.read.rd_length, nfsd4_max_payload(rqstp));
+ 	/*
+ 	 * If we detect that the file changed during hole encoding, then we
+ 	 * recover by encoding the remaining reply as data. This means we need
+@@ -2596,70 +2886,77 @@ static inline u32 nfsd4_read_plus_rsize(struct svc_rqst *rqstp, struct nfsd4_op
+ 	return (op_encode_hdr_size + 2 + seg_len + XDR_QUADLEN(rlen)) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_readdir_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_readdir_rsize(const struct svc_rqst *rqstp,
++			       const struct nfsd4_op *op)
+ {
+-	u32 maxcount = 0, rlen = 0;
+-
+-	maxcount = svc_max_payload(rqstp);
+-	rlen = min(op->u.readdir.rd_maxcount, maxcount);
++	u32 rlen = min(op->u.readdir.rd_maxcount, nfsd4_max_payload(rqstp));
+ 
+ 	return (op_encode_hdr_size + op_encode_verifier_maxsz +
+ 		XDR_QUADLEN(rlen)) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_readlink_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_readlink_rsize(const struct svc_rqst *rqstp,
++				const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + 1) * sizeof(__be32) + PAGE_SIZE;
+ }
+ 
+-static inline u32 nfsd4_remove_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_remove_rsize(const struct svc_rqst *rqstp,
++			      const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + op_encode_change_info_maxsz)
+ 		* sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_rename_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_rename_rsize(const struct svc_rqst *rqstp,
++			      const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + op_encode_change_info_maxsz
+ 		+ op_encode_change_info_maxsz) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_sequence_rsize(struct svc_rqst *rqstp,
+-				       struct nfsd4_op *op)
++static u32 nfsd4_sequence_rsize(const struct svc_rqst *rqstp,
++				const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size
+ 		+ XDR_QUADLEN(NFS4_MAX_SESSIONID_LEN) + 5) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_test_stateid_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_test_stateid_rsize(const struct svc_rqst *rqstp,
++				    const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + 1 + op->u.test_stateid.ts_num_ids)
+ 		* sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_setattr_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_setattr_rsize(const struct svc_rqst *rqstp,
++			       const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + nfs4_fattr_bitmap_maxsz) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_secinfo_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_secinfo_rsize(const struct svc_rqst *rqstp,
++			       const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + RPC_AUTH_MAXFLAVOR *
+ 		(4 + XDR_QUADLEN(GSS_OID_MAX_LEN))) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_setclientid_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_setclientid_rsize(const struct svc_rqst *rqstp,
++				   const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + 2 + XDR_QUADLEN(NFS4_VERIFIER_SIZE)) *
+ 								sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_write_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_write_rsize(const struct svc_rqst *rqstp,
++			     const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + 2 + op_encode_verifier_maxsz) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_exchange_id_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_exchange_id_rsize(const struct svc_rqst *rqstp,
++				   const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + 2 + 1 + /* eir_clientid, eir_sequenceid */\
+ 		1 + 1 + /* eir_flags, spr_how */\
+@@ -2673,14 +2970,16 @@ static inline u32 nfsd4_exchange_id_rsize(struct svc_rqst *rqstp, struct nfsd4_o
+ 		0 /* ignored eir_server_impl_id contents */) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_bind_conn_to_session_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_bind_conn_to_session_rsize(const struct svc_rqst *rqstp,
++					    const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + \
+ 		XDR_QUADLEN(NFS4_MAX_SESSIONID_LEN) + /* bctsr_sessid */\
+ 		2 /* bctsr_dir, use_conn_in_rdma_mode */) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_create_session_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_create_session_rsize(const struct svc_rqst *rqstp,
++				      const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + \
+ 		XDR_QUADLEN(NFS4_MAX_SESSIONID_LEN) + /* sessionid */\
+@@ -2689,7 +2988,8 @@ static inline u32 nfsd4_create_session_rsize(struct svc_rqst *rqstp, struct nfsd
+ 		op_encode_channel_attrs_maxsz) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_copy_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_copy_rsize(const struct svc_rqst *rqstp,
++			    const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size +
+ 		1 /* wr_callback */ +
+@@ -2701,16 +3001,16 @@ static inline u32 nfsd4_copy_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
+ 		1 /* cr_synchronous */) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_offload_status_rsize(struct svc_rqst *rqstp,
+-					     struct nfsd4_op *op)
++static u32 nfsd4_offload_status_rsize(const struct svc_rqst *rqstp,
++				      const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size +
+ 		2 /* osr_count */ +
+ 		1 /* osr_complete<1> optional 0 for now */) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_copy_notify_rsize(struct svc_rqst *rqstp,
+-					struct nfsd4_op *op)
++static u32 nfsd4_copy_notify_rsize(const struct svc_rqst *rqstp,
++				   const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size +
+ 		3 /* cnr_lease_time */ +
+@@ -2725,12 +3025,10 @@ static inline u32 nfsd4_copy_notify_rsize(struct svc_rqst *rqstp,
+ }
+ 
+ #ifdef CONFIG_NFSD_PNFS
+-static inline u32 nfsd4_getdeviceinfo_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_getdeviceinfo_rsize(const struct svc_rqst *rqstp,
++				     const struct nfsd4_op *op)
+ {
+-	u32 maxcount = 0, rlen = 0;
+-
+-	maxcount = svc_max_payload(rqstp);
+-	rlen = min(op->u.getdeviceinfo.gd_maxcount, maxcount);
++	u32 rlen = min(op->u.getdeviceinfo.gd_maxcount, nfsd4_max_payload(rqstp));
+ 
+ 	return (op_encode_hdr_size +
+ 		1 /* gd_layout_type*/ +
+@@ -2743,7 +3041,8 @@ static inline u32 nfsd4_getdeviceinfo_rsize(struct svc_rqst *rqstp, struct nfsd4
+  * so we need to define an arbitrary upper bound here.
+  */
+ #define MAX_LAYOUT_SIZE		128
+-static inline u32 nfsd4_layoutget_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_layoutget_rsize(const struct svc_rqst *rqstp,
++				 const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size +
+ 		1 /* logr_return_on_close */ +
+@@ -2752,14 +3051,16 @@ static inline u32 nfsd4_layoutget_rsize(struct svc_rqst *rqstp, struct nfsd4_op
+ 		MAX_LAYOUT_SIZE) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_layoutcommit_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_layoutcommit_rsize(const struct svc_rqst *rqstp,
++				    const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size +
+ 		1 /* locr_newsize */ +
+ 		2 /* ns_size */) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_layoutreturn_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_layoutreturn_rsize(const struct svc_rqst *rqstp,
++				    const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size +
+ 		1 /* lrs_stateid */ +
+@@ -2768,41 +3069,36 @@ static inline u32 nfsd4_layoutreturn_rsize(struct svc_rqst *rqstp, struct nfsd4_
+ #endif /* CONFIG_NFSD_PNFS */
+ 
+ 
+-static inline u32 nfsd4_seek_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op)
++static u32 nfsd4_seek_rsize(const struct svc_rqst *rqstp,
++			    const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + 3) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_getxattr_rsize(struct svc_rqst *rqstp,
+-				       struct nfsd4_op *op)
++static u32 nfsd4_getxattr_rsize(const struct svc_rqst *rqstp,
++				const struct nfsd4_op *op)
+ {
+-	u32 maxcount, rlen;
+-
+-	maxcount = svc_max_payload(rqstp);
+-	rlen = min_t(u32, XATTR_SIZE_MAX, maxcount);
++	u32 rlen = min_t(u32, XATTR_SIZE_MAX, nfsd4_max_payload(rqstp));
+ 
+ 	return (op_encode_hdr_size + 1 + XDR_QUADLEN(rlen)) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_setxattr_rsize(struct svc_rqst *rqstp,
+-				       struct nfsd4_op *op)
++static u32 nfsd4_setxattr_rsize(const struct svc_rqst *rqstp,
++				const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + op_encode_change_info_maxsz)
+ 		* sizeof(__be32);
+ }
+-static inline u32 nfsd4_listxattrs_rsize(struct svc_rqst *rqstp,
+-					 struct nfsd4_op *op)
++static u32 nfsd4_listxattrs_rsize(const struct svc_rqst *rqstp,
++				  const struct nfsd4_op *op)
+ {
+-	u32 maxcount, rlen;
+-
+-	maxcount = svc_max_payload(rqstp);
+-	rlen = min(op->u.listxattrs.lsxa_maxcount, maxcount);
++	u32 rlen = min(op->u.listxattrs.lsxa_maxcount, nfsd4_max_payload(rqstp));
+ 
+ 	return (op_encode_hdr_size + 4 + XDR_QUADLEN(rlen)) * sizeof(__be32);
+ }
+ 
+-static inline u32 nfsd4_removexattr_rsize(struct svc_rqst *rqstp,
+-					  struct nfsd4_op *op)
++static u32 nfsd4_removexattr_rsize(const struct svc_rqst *rqstp,
++				   const struct nfsd4_op *op)
+ {
+ 	return (op_encode_hdr_size + op_encode_change_info_maxsz)
+ 		* sizeof(__be32);
+@@ -3235,7 +3531,7 @@ bool nfsd4_spo_must_allow(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd4_compoundres *resp = rqstp->rq_resp;
+ 	struct nfsd4_compoundargs *argp = rqstp->rq_argp;
+-	struct nfsd4_op *this = &argp->ops[resp->opcnt - 1];
++	struct nfsd4_op *this;
+ 	struct nfsd4_compound_state *cstate = &resp->cstate;
+ 	struct nfs4_op_map *allow = &cstate->clp->cl_spo_must_allow;
+ 	u32 opiter;
+@@ -3272,7 +3568,7 @@ int nfsd4_max_reply(struct svc_rqst *rqstp, struct nfsd4_op *op)
+ void warn_on_nonidempotent_op(struct nfsd4_op *op)
+ {
+ 	if (OPDESC(op)->op_flags & OP_MODIFIES_SOMETHING) {
+-		pr_err("unable to encode reply to nonidempotent op %d (%s)\n",
++		pr_err("unable to encode reply to nonidempotent op %u (%s)\n",
+ 			op->opnum, nfsd4_op_name(op->opnum));
+ 		WARN_ON_ONCE(1);
+ 	}
+@@ -3285,28 +3581,29 @@ static const char *nfsd4_op_name(unsigned opnum)
+ 	return "unknown_operation";
+ }
+ 
+-#define nfsd4_voidres			nfsd4_voidargs
+-struct nfsd4_voidargs { int dummy; };
+-
+ static const struct svc_procedure nfsd_procedures4[2] = {
+ 	[NFSPROC4_NULL] = {
+ 		.pc_func = nfsd4_proc_null,
+-		.pc_decode = nfs4svc_decode_voidarg,
+-		.pc_encode = nfs4svc_encode_voidres,
+-		.pc_argsize = sizeof(struct nfsd4_voidargs),
+-		.pc_ressize = sizeof(struct nfsd4_voidres),
++		.pc_decode = nfssvc_decode_voidarg,
++		.pc_encode = nfssvc_encode_voidres,
++		.pc_argsize = sizeof(struct nfsd_voidargs),
++		.pc_argzero = sizeof(struct nfsd_voidargs),
++		.pc_ressize = sizeof(struct nfsd_voidres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = 1,
++		.pc_name = "NULL",
+ 	},
+ 	[NFSPROC4_COMPOUND] = {
+ 		.pc_func = nfsd4_proc_compound,
+ 		.pc_decode = nfs4svc_decode_compoundargs,
+ 		.pc_encode = nfs4svc_encode_compoundres,
+ 		.pc_argsize = sizeof(struct nfsd4_compoundargs),
++		.pc_argzero = offsetof(struct nfsd4_compoundargs, iops),
+ 		.pc_ressize = sizeof(struct nfsd4_compoundres),
+ 		.pc_release = nfsd4_release_compoundargs,
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = NFSD_BUFSIZE/4,
++		.pc_name = "COMPOUND",
+ 	},
+ };
+ 
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index 83c4e68839537..189c622dde61c 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -626,7 +626,7 @@ nfsd4_legacy_tracking_init(struct net *net)
+ 	status = nfsd4_load_reboot_recovery_data(net);
+ 	if (status)
+ 		goto err;
+-	printk("NFSD: Using legacy client tracking operations.\n");
++	pr_info("NFSD: Using legacy client tracking operations.\n");
+ 	return 0;
+ 
+ err:
+@@ -807,17 +807,17 @@ __cld_pipe_inprogress_downcall(const struct cld_msg_v2 __user *cmsg,
+ 			if (get_user(namelen, &ci->cc_name.cn_len))
+ 				return -EFAULT;
+ 			name.data = memdup_user(&ci->cc_name.cn_id, namelen);
+-			if (IS_ERR_OR_NULL(name.data))
+-				return -EFAULT;
++			if (IS_ERR(name.data))
++				return PTR_ERR(name.data);
+ 			name.len = namelen;
+ 			get_user(princhashlen, &ci->cc_princhash.cp_len);
+ 			if (princhashlen > 0) {
+ 				princhash.data = memdup_user(
+ 						&ci->cc_princhash.cp_data,
+ 						princhashlen);
+-				if (IS_ERR_OR_NULL(princhash.data)) {
++				if (IS_ERR(princhash.data)) {
+ 					kfree(name.data);
+-					return -EFAULT;
++					return PTR_ERR(princhash.data);
+ 				}
+ 				princhash.len = princhashlen;
+ 			} else
+@@ -829,8 +829,8 @@ __cld_pipe_inprogress_downcall(const struct cld_msg_v2 __user *cmsg,
+ 			if (get_user(namelen, &cnm->cn_len))
+ 				return -EFAULT;
+ 			name.data = memdup_user(&cnm->cn_id, namelen);
+-			if (IS_ERR_OR_NULL(name.data))
+-				return -EFAULT;
++			if (IS_ERR(name.data))
++				return PTR_ERR(name.data);
+ 			name.len = namelen;
+ 		}
+ 		if (name.len > 5 && memcmp(name.data, "hash:", 5) == 0) {
+@@ -1030,7 +1030,7 @@ nfsd4_init_cld_pipe(struct net *net)
+ 
+ 	status = __nfsd4_init_cld_pipe(net);
+ 	if (!status)
+-		printk("NFSD: Using old nfsdcld client tracking operations.\n");
++		pr_info("NFSD: Using old nfsdcld client tracking operations.\n");
+ 	return status;
+ }
+ 
+@@ -1607,7 +1607,7 @@ nfsd4_cld_tracking_init(struct net *net)
+ 		nfs4_release_reclaim(nn);
+ 		goto err_remove;
+ 	} else
+-		printk("NFSD: Using nfsdcld client tracking operations.\n");
++		pr_info("NFSD: Using nfsdcld client tracking operations.\n");
+ 	return 0;
+ 
+ err_remove:
+@@ -1866,7 +1866,7 @@ nfsd4_umh_cltrack_init(struct net *net)
+ 	ret = nfsd4_umh_cltrack_upcall("init", NULL, grace_start, NULL);
+ 	kfree(grace_start);
+ 	if (!ret)
+-		printk("NFSD: Using UMH upcall client tracking operations.\n");
++		pr_info("NFSD: Using UMH upcall client tracking operations.\n");
+ 	return ret;
+ }
+ 
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index d402ca0b535f0..228560f3fd0e0 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -43,6 +43,10 @@
+ #include <linux/sunrpc/addr.h>
+ #include <linux/jhash.h>
+ #include <linux/string_helpers.h>
++#include <linux/fsnotify.h>
++#include <linux/rhashtable.h>
++#include <linux/nfs_ssc.h>
++
+ #include "xdr4.h"
+ #include "xdr4cb.h"
+ #include "vfs.h"
+@@ -82,6 +86,7 @@ static bool check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
+ static void nfs4_free_ol_stateid(struct nfs4_stid *stid);
+ void nfsd4_end_grace(struct nfsd_net *nn);
+ static void _free_cpntf_state_locked(struct nfsd_net *nn, struct nfs4_cpntf_state *cps);
++static void nfsd4_file_hash_remove(struct nfs4_file *fi);
+ 
+ /* Locking: */
+ 
+@@ -123,6 +128,23 @@ static void free_session(struct nfsd4_session *);
+ static const struct nfsd4_callback_ops nfsd4_cb_recall_ops;
+ static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops;
+ 
++static struct workqueue_struct *laundry_wq;
++
++int nfsd4_create_laundry_wq(void)
++{
++	int rc = 0;
++
++	laundry_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, "nfsd4");
++	if (laundry_wq == NULL)
++		rc = -ENOMEM;
++	return rc;
++}
++
++void nfsd4_destroy_laundry_wq(void)
++{
++	destroy_workqueue(laundry_wq);
++}
++
+ static bool is_session_dead(struct nfsd4_session *ses)
+ {
+ 	return ses->se_flags & NFS4_SESSION_DEAD;
+@@ -141,6 +163,13 @@ static bool is_client_expired(struct nfs4_client *clp)
+ 	return clp->cl_time == 0;
+ }
+ 
++static void nfsd4_dec_courtesy_client_count(struct nfsd_net *nn,
++					struct nfs4_client *clp)
++{
++	if (clp->cl_state != NFSD4_ACTIVE)
++		atomic_add_unless(&nn->nfsd_courtesy_clients, -1, 0);
++}
++
+ static __be32 get_client_locked(struct nfs4_client *clp)
+ {
+ 	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+@@ -150,6 +179,8 @@ static __be32 get_client_locked(struct nfs4_client *clp)
+ 	if (is_client_expired(clp))
+ 		return nfserr_expired;
+ 	atomic_inc(&clp->cl_rpc_users);
++	nfsd4_dec_courtesy_client_count(nn, clp);
++	clp->cl_state = NFSD4_ACTIVE;
+ 	return nfs_ok;
+ }
+ 
+@@ -170,6 +201,8 @@ renew_client_locked(struct nfs4_client *clp)
+ 
+ 	list_move_tail(&clp->cl_lru, &nn->client_lru);
+ 	clp->cl_time = ktime_get_boottime_seconds();
++	nfsd4_dec_courtesy_client_count(nn, clp);
++	clp->cl_state = NFSD4_ACTIVE;
+ }
+ 
+ static void put_client_renew_locked(struct nfs4_client *clp)
+@@ -244,6 +277,7 @@ find_blocked_lock(struct nfs4_lockowner *lo, struct knfsd_fh *fh,
+ 	list_for_each_entry(cur, &lo->lo_blocked, nbl_list) {
+ 		if (fh_match(fh, &cur->nbl_fh)) {
+ 			list_del_init(&cur->nbl_list);
++			WARN_ON(list_empty(&cur->nbl_lru));
+ 			list_del_init(&cur->nbl_lru);
+ 			found = cur;
+ 			break;
+@@ -269,6 +303,7 @@ find_or_allocate_block(struct nfs4_lockowner *lo, struct knfsd_fh *fh,
+ 			INIT_LIST_HEAD(&nbl->nbl_lru);
+ 			fh_copy_shallow(&nbl->nbl_fh, fh);
+ 			locks_init_lock(&nbl->nbl_lock);
++			kref_init(&nbl->nbl_kref);
+ 			nfsd4_init_cb(&nbl->nbl_cb, lo->lo_owner.so_client,
+ 					&nfsd4_cb_notify_lock_ops,
+ 					NFSPROC4_CLNT_CB_NOTIFY_LOCK);
+@@ -278,13 +313,22 @@ find_or_allocate_block(struct nfs4_lockowner *lo, struct knfsd_fh *fh,
+ }
+ 
+ static void
+-free_blocked_lock(struct nfsd4_blocked_lock *nbl)
++free_nbl(struct kref *kref)
+ {
+-	locks_delete_block(&nbl->nbl_lock);
++	struct nfsd4_blocked_lock *nbl;
++
++	nbl = container_of(kref, struct nfsd4_blocked_lock, nbl_kref);
+ 	locks_release_private(&nbl->nbl_lock);
+ 	kfree(nbl);
+ }
+ 
++static void
++free_blocked_lock(struct nfsd4_blocked_lock *nbl)
++{
++	locks_delete_block(&nbl->nbl_lock);
++	kref_put(&nbl->nbl_kref, free_nbl);
++}
++
+ static void
+ remove_blocked_locks(struct nfs4_lockowner *lo)
+ {
+@@ -300,6 +344,7 @@ remove_blocked_locks(struct nfs4_lockowner *lo)
+ 					struct nfsd4_blocked_lock,
+ 					nbl_list);
+ 		list_del_init(&nbl->nbl_list);
++		WARN_ON(list_empty(&nbl->nbl_lru));
+ 		list_move(&nbl->nbl_lru, &reaplist);
+ 	}
+ 	spin_unlock(&nn->blocked_locks_lock);
+@@ -324,6 +369,8 @@ nfsd4_cb_notify_lock_prepare(struct nfsd4_callback *cb)
+ static int
+ nfsd4_cb_notify_lock_done(struct nfsd4_callback *cb, struct rpc_task *task)
+ {
++	trace_nfsd_cb_notify_lock_done(&zero_stateid, task);
++
+ 	/*
+ 	 * Since this is just an optimization, we don't try very hard if it
+ 	 * turns out not to succeed. We'll requeue it on NFS4ERR_DELAY, and
+@@ -353,6 +400,130 @@ static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops = {
+ 	.release	= nfsd4_cb_notify_lock_release,
+ };
+ 
++/*
++ * We store the NONE, READ, WRITE, and BOTH bits separately in the
++ * st_{access,deny}_bmap field of the stateid, in order to track not
++ * only what share bits are currently in force, but also what
++ * combinations of share bits previous opens have used.  This allows us
++ * to enforce the recommendation in
++ * https://datatracker.ietf.org/doc/html/rfc7530#section-16.19.4 that
++ * the server return an error if the client attempt to downgrade to a
++ * combination of share bits not explicable by closing some of its
++ * previous opens.
++ *
++ * This enforcement is arguably incomplete, since we don't keep
++ * track of access/deny bit combinations; so, e.g., we allow:
++ *
++ *	OPEN allow read, deny write
++ *	OPEN allow both, deny none
++ *	DOWNGRADE allow read, deny none
++ *
++ * which we should reject.
++ *
++ * But you could also argue that our current code is already overkill,
++ * since it only exists to return NFS4ERR_INVAL on incorrect client
++ * behavior.
++ */
++static unsigned int
++bmap_to_share_mode(unsigned long bmap)
++{
++	int i;
++	unsigned int access = 0;
++
++	for (i = 1; i < 4; i++) {
++		if (test_bit(i, &bmap))
++			access |= i;
++	}
++	return access;
++}
++
++/* set share access for a given stateid */
++static inline void
++set_access(u32 access, struct nfs4_ol_stateid *stp)
++{
++	unsigned char mask = 1 << access;
++
++	WARN_ON_ONCE(access > NFS4_SHARE_ACCESS_BOTH);
++	stp->st_access_bmap |= mask;
++}
++
++/* clear share access for a given stateid */
++static inline void
++clear_access(u32 access, struct nfs4_ol_stateid *stp)
++{
++	unsigned char mask = 1 << access;
++
++	WARN_ON_ONCE(access > NFS4_SHARE_ACCESS_BOTH);
++	stp->st_access_bmap &= ~mask;
++}
++
++/* test whether a given stateid has access */
++static inline bool
++test_access(u32 access, struct nfs4_ol_stateid *stp)
++{
++	unsigned char mask = 1 << access;
++
++	return (bool)(stp->st_access_bmap & mask);
++}
++
++/* set share deny for a given stateid */
++static inline void
++set_deny(u32 deny, struct nfs4_ol_stateid *stp)
++{
++	unsigned char mask = 1 << deny;
++
++	WARN_ON_ONCE(deny > NFS4_SHARE_DENY_BOTH);
++	stp->st_deny_bmap |= mask;
++}
++
++/* clear share deny for a given stateid */
++static inline void
++clear_deny(u32 deny, struct nfs4_ol_stateid *stp)
++{
++	unsigned char mask = 1 << deny;
++
++	WARN_ON_ONCE(deny > NFS4_SHARE_DENY_BOTH);
++	stp->st_deny_bmap &= ~mask;
++}
++
++/* test whether a given stateid is denying specific access */
++static inline bool
++test_deny(u32 deny, struct nfs4_ol_stateid *stp)
++{
++	unsigned char mask = 1 << deny;
++
++	return (bool)(stp->st_deny_bmap & mask);
++}
++
++static int nfs4_access_to_omode(u32 access)
++{
++	switch (access & NFS4_SHARE_ACCESS_BOTH) {
++	case NFS4_SHARE_ACCESS_READ:
++		return O_RDONLY;
++	case NFS4_SHARE_ACCESS_WRITE:
++		return O_WRONLY;
++	case NFS4_SHARE_ACCESS_BOTH:
++		return O_RDWR;
++	}
++	WARN_ON_ONCE(1);
++	return O_RDONLY;
++}
++
++static inline int
++access_permit_read(struct nfs4_ol_stateid *stp)
++{
++	return test_access(NFS4_SHARE_ACCESS_READ, stp) ||
++		test_access(NFS4_SHARE_ACCESS_BOTH, stp) ||
++		test_access(NFS4_SHARE_ACCESS_WRITE, stp);
++}
++
++static inline int
++access_permit_write(struct nfs4_ol_stateid *stp)
++{
++	return test_access(NFS4_SHARE_ACCESS_WRITE, stp) ||
++		test_access(NFS4_SHARE_ACCESS_BOTH, stp);
++}
++
+ static inline struct nfs4_stateowner *
+ nfs4_get_stateowner(struct nfs4_stateowner *sop)
+ {
+@@ -420,11 +591,8 @@ static void nfsd4_free_file_rcu(struct rcu_head *rcu)
+ void
+ put_nfs4_file(struct nfs4_file *fi)
+ {
+-	might_lock(&state_lock);
+-
+-	if (refcount_dec_and_lock(&fi->fi_ref, &state_lock)) {
+-		hlist_del_rcu(&fi->fi_hash);
+-		spin_unlock(&state_lock);
++	if (refcount_dec_and_test(&fi->fi_ref)) {
++		nfsd4_file_hash_remove(fi);
+ 		WARN_ON_ONCE(!list_empty(&fi->fi_clnt_odstate));
+ 		WARN_ON_ONCE(!list_empty(&fi->fi_delegations));
+ 		call_rcu(&fi->fi_rcu, nfsd4_free_file_rcu);
+@@ -434,9 +602,7 @@ put_nfs4_file(struct nfs4_file *fi)
+ static struct nfsd_file *
+ __nfs4_get_fd(struct nfs4_file *f, int oflag)
+ {
+-	if (f->fi_fds[oflag])
+-		return nfsd_file_get(f->fi_fds[oflag]);
+-	return NULL;
++	return nfsd_file_get(f->fi_fds[oflag]);
+ }
+ 
+ static struct nfsd_file *
+@@ -549,21 +715,71 @@ static unsigned int ownerstr_hashval(struct xdr_netobj *ownername)
+ 	return ret & OWNER_HASH_MASK;
+ }
+ 
+-/* hash table for nfs4_file */
+-#define FILE_HASH_BITS                   8
+-#define FILE_HASH_SIZE                  (1 << FILE_HASH_BITS)
++static struct rhltable nfs4_file_rhltable ____cacheline_aligned_in_smp;
+ 
+-static unsigned int nfsd_fh_hashval(struct knfsd_fh *fh)
+-{
+-	return jhash2(fh->fh_base.fh_pad, XDR_QUADLEN(fh->fh_size), 0);
+-}
++static const struct rhashtable_params nfs4_file_rhash_params = {
++	.key_len		= sizeof_field(struct nfs4_file, fi_inode),
++	.key_offset		= offsetof(struct nfs4_file, fi_inode),
++	.head_offset		= offsetof(struct nfs4_file, fi_rlist),
+ 
+-static unsigned int file_hashval(struct knfsd_fh *fh)
++	/*
++	 * Start with a single page hash table to reduce resizing churn
++	 * on light workloads.
++	 */
++	.min_size		= 256,
++	.automatic_shrinking	= true,
++};
++
++/*
++ * Check if courtesy clients have conflicting access and resolve it if possible
++ *
++ * access:  is op_share_access if share_access is true.
++ *	    Check if access mode, op_share_access, would conflict with
++ *	    the current deny mode of the file 'fp'.
++ * access:  is op_share_deny if share_access is false.
++ *	    Check if the deny mode, op_share_deny, would conflict with
++ *	    current access of the file 'fp'.
++ * stp:     skip checking this entry.
++ * new_stp: normal open, not open upgrade.
++ *
++ * Function returns:
++ *	false - access/deny mode conflict with normal client.
++ *	true  - no conflict or conflict with courtesy client(s) is resolved.
++ */
++static bool
++nfs4_resolve_deny_conflicts_locked(struct nfs4_file *fp, bool new_stp,
++		struct nfs4_ol_stateid *stp, u32 access, bool share_access)
+ {
+-	return nfsd_fh_hashval(fh) & (FILE_HASH_SIZE - 1);
+-}
++	struct nfs4_ol_stateid *st;
++	bool resolvable = true;
++	unsigned char bmap;
++	struct nfsd_net *nn;
++	struct nfs4_client *clp;
+ 
+-static struct hlist_head file_hashtbl[FILE_HASH_SIZE];
++	lockdep_assert_held(&fp->fi_lock);
++	list_for_each_entry(st, &fp->fi_stateids, st_perfile) {
++		/* ignore lock stateid */
++		if (st->st_openstp)
++			continue;
++		if (st == stp && new_stp)
++			continue;
++		/* check file access against deny mode or vice versa */
++		bmap = share_access ? st->st_deny_bmap : st->st_access_bmap;
++		if (!(access & bmap_to_share_mode(bmap)))
++			continue;
++		clp = st->st_stid.sc_client;
++		if (try_to_expire_client(clp))
++			continue;
++		resolvable = false;
++		break;
++	}
++	if (resolvable) {
++		clp = stp->st_stid.sc_client;
++		nn = net_generic(clp->net, nfsd_net_id);
++		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
++	}
++	return resolvable;
++}
+ 
+ static void
+ __nfs4_file_get_access(struct nfs4_file *fp, u32 access)
+@@ -768,23 +984,23 @@ struct nfs4_stid *nfs4_alloc_stid(struct nfs4_client *cl, struct kmem_cache *sla
+  * Create a unique stateid_t to represent each COPY.
+  */
+ static int nfs4_init_cp_state(struct nfsd_net *nn, copy_stateid_t *stid,
+-			      unsigned char sc_type)
++			      unsigned char cs_type)
+ {
+ 	int new_id;
+ 
+-	stid->stid.si_opaque.so_clid.cl_boot = (u32)nn->boot_time;
+-	stid->stid.si_opaque.so_clid.cl_id = nn->s2s_cp_cl_id;
+-	stid->sc_type = sc_type;
++	stid->cs_stid.si_opaque.so_clid.cl_boot = (u32)nn->boot_time;
++	stid->cs_stid.si_opaque.so_clid.cl_id = nn->s2s_cp_cl_id;
+ 
+ 	idr_preload(GFP_KERNEL);
+ 	spin_lock(&nn->s2s_cp_lock);
+ 	new_id = idr_alloc_cyclic(&nn->s2s_cp_stateids, stid, 0, 0, GFP_NOWAIT);
+-	stid->stid.si_opaque.so_id = new_id;
+-	stid->stid.si_generation = 1;
++	stid->cs_stid.si_opaque.so_id = new_id;
++	stid->cs_stid.si_generation = 1;
+ 	spin_unlock(&nn->s2s_cp_lock);
+ 	idr_preload_end();
+ 	if (new_id < 0)
+ 		return 0;
++	stid->cs_type = cs_type;
+ 	return 1;
+ }
+ 
+@@ -802,7 +1018,7 @@ struct nfs4_cpntf_state *nfs4_alloc_init_cpntf_state(struct nfsd_net *nn,
+ 	if (!cps)
+ 		return NULL;
+ 	cps->cpntf_time = ktime_get_boottime_seconds();
+-	refcount_set(&cps->cp_stateid.sc_count, 1);
++	refcount_set(&cps->cp_stateid.cs_count, 1);
+ 	if (!nfs4_init_cp_state(nn, &cps->cp_stateid, NFS4_COPYNOTIFY_STID))
+ 		goto out_free;
+ 	spin_lock(&nn->s2s_cp_lock);
+@@ -818,11 +1034,12 @@ void nfs4_free_copy_state(struct nfsd4_copy *copy)
+ {
+ 	struct nfsd_net *nn;
+ 
+-	WARN_ON_ONCE(copy->cp_stateid.sc_type != NFS4_COPY_STID);
++	if (copy->cp_stateid.cs_type != NFS4_COPY_STID)
++		return;
+ 	nn = net_generic(copy->cp_clp->net, nfsd_net_id);
+ 	spin_lock(&nn->s2s_cp_lock);
+ 	idr_remove(&nn->s2s_cp_stateids,
+-		   copy->cp_stateid.stid.si_opaque.so_id);
++		   copy->cp_stateid.cs_stid.si_opaque.so_id);
+ 	spin_unlock(&nn->s2s_cp_lock);
+ }
+ 
+@@ -854,7 +1071,12 @@ static struct nfs4_ol_stateid * nfs4_alloc_open_stateid(struct nfs4_client *clp)
+ 
+ static void nfs4_free_deleg(struct nfs4_stid *stid)
+ {
+-	WARN_ON(!list_empty(&stid->sc_cp_list));
++	struct nfs4_delegation *dp = delegstateid(stid);
++
++	WARN_ON_ONCE(!list_empty(&stid->sc_cp_list));
++	WARN_ON_ONCE(!list_empty(&dp->dl_perfile));
++	WARN_ON_ONCE(!list_empty(&dp->dl_perclnt));
++	WARN_ON_ONCE(!list_empty(&dp->dl_recall_lru));
+ 	kmem_cache_free(deleg_slab, stid);
+ 	atomic_long_dec(&num_delegations);
+ }
+@@ -904,7 +1126,7 @@ static int delegation_blocked(struct knfsd_fh *fh)
+ 		}
+ 		spin_unlock(&blocked_delegations_lock);
+ 	}
+-	hash = jhash(&fh->fh_base, fh->fh_size, 0);
++	hash = jhash(&fh->fh_raw, fh->fh_size, 0);
+ 	if (test_bit(hash&255, bd->set[0]) &&
+ 	    test_bit((hash>>8)&255, bd->set[0]) &&
+ 	    test_bit((hash>>16)&255, bd->set[0]))
+@@ -923,7 +1145,7 @@ static void block_delegations(struct knfsd_fh *fh)
+ 	u32 hash;
+ 	struct bloom_pair *bd = &blocked_delegations;
+ 
+-	hash = jhash(&fh->fh_base, fh->fh_size, 0);
++	hash = jhash(&fh->fh_raw, fh->fh_size, 0);
+ 
+ 	spin_lock(&blocked_delegations_lock);
+ 	__set_bit(hash&255, bd->set[bd->new]);
+@@ -937,7 +1159,6 @@ static void block_delegations(struct knfsd_fh *fh)
+ 
+ static struct nfs4_delegation *
+ alloc_init_deleg(struct nfs4_client *clp, struct nfs4_file *fp,
+-		 struct svc_fh *current_fh,
+ 		 struct nfs4_clnt_odstate *odstate)
+ {
+ 	struct nfs4_delegation *dp;
+@@ -947,7 +1168,7 @@ alloc_init_deleg(struct nfs4_client *clp, struct nfs4_file *fp,
+ 	n = atomic_long_inc_return(&num_delegations);
+ 	if (n < 0 || n > max_delegations)
+ 		goto out_dec;
+-	if (delegation_blocked(&current_fh->fh_handle))
++	if (delegation_blocked(&fp->fi_fhandle))
+ 		goto out_dec;
+ 	dp = delegstateid(nfs4_alloc_stid(clp, deleg_slab, nfs4_free_deleg));
+ 	if (dp == NULL)
+@@ -966,6 +1187,7 @@ alloc_init_deleg(struct nfs4_client *clp, struct nfs4_file *fp,
+ 	get_clnt_odstate(odstate);
+ 	dp->dl_type = NFS4_OPEN_DELEGATE_READ;
+ 	dp->dl_retries = 1;
++	dp->dl_recalled = false;
+ 	nfsd4_init_cb(&dp->dl_recall, dp->dl_stid.sc_client,
+ 		      &nfsd4_cb_recall_ops, NFSPROC4_CLNT_CB_RECALL);
+ 	get_nfs4_file(fp);
+@@ -1144,6 +1366,8 @@ static void revoke_delegation(struct nfs4_delegation *dp)
+ 
+ 	WARN_ON(!list_empty(&dp->dl_recall_lru));
+ 
++	trace_nfsd_stid_revoke(&dp->dl_stid);
++
+ 	if (clp->cl_minorversion) {
+ 		spin_lock(&clp->cl_lock);
+ 		dp->dl_stid.sc_type = NFS4_REVOKED_DELEG_STID;
+@@ -1169,175 +1393,73 @@ static unsigned int clientstr_hashval(struct xdr_netobj name)
+ }
+ 
+ /*
+- * We store the NONE, READ, WRITE, and BOTH bits separately in the
+- * st_{access,deny}_bmap field of the stateid, in order to track not
+- * only what share bits are currently in force, but also what
+- * combinations of share bits previous opens have used.  This allows us
+- * to enforce the recommendation of rfc 3530 14.2.19 that the server
+- * return an error if the client attempt to downgrade to a combination
+- * of share bits not explicable by closing some of its previous opens.
+- *
+- * XXX: This enforcement is actually incomplete, since we don't keep
+- * track of access/deny bit combinations; so, e.g., we allow:
+- *
+- *	OPEN allow read, deny write
+- *	OPEN allow both, deny none
+- *	DOWNGRADE allow read, deny none
+- *
+- * which we should reject.
++ * A stateid that had a deny mode associated with it is being released
++ * or downgraded. Recalculate the deny mode on the file.
+  */
+-static unsigned int
+-bmap_to_share_mode(unsigned long bmap) {
++static void
++recalculate_deny_mode(struct nfs4_file *fp)
++{
++	struct nfs4_ol_stateid *stp;
++
++	spin_lock(&fp->fi_lock);
++	fp->fi_share_deny = 0;
++	list_for_each_entry(stp, &fp->fi_stateids, st_perfile)
++		fp->fi_share_deny |= bmap_to_share_mode(stp->st_deny_bmap);
++	spin_unlock(&fp->fi_lock);
++}
++
++static void
++reset_union_bmap_deny(u32 deny, struct nfs4_ol_stateid *stp)
++{
+ 	int i;
+-	unsigned int access = 0;
++	bool change = false;
+ 
+ 	for (i = 1; i < 4; i++) {
+-		if (test_bit(i, &bmap))
+-			access |= i;
++		if ((i & deny) != i) {
++			change = true;
++			clear_deny(i, stp);
++		}
+ 	}
+-	return access;
++
++	/* Recalculate per-file deny mode if there was a change */
++	if (change)
++		recalculate_deny_mode(stp->st_stid.sc_file);
+ }
+ 
+-/* set share access for a given stateid */
+-static inline void
+-set_access(u32 access, struct nfs4_ol_stateid *stp)
++/* release all access and file references for a given stateid */
++static void
++release_all_access(struct nfs4_ol_stateid *stp)
+ {
+-	unsigned char mask = 1 << access;
++	int i;
++	struct nfs4_file *fp = stp->st_stid.sc_file;
+ 
+-	WARN_ON_ONCE(access > NFS4_SHARE_ACCESS_BOTH);
+-	stp->st_access_bmap |= mask;
++	if (fp && stp->st_deny_bmap != 0)
++		recalculate_deny_mode(fp);
++
++	for (i = 1; i < 4; i++) {
++		if (test_access(i, stp))
++			nfs4_file_put_access(stp->st_stid.sc_file, i);
++		clear_access(i, stp);
++	}
+ }
+ 
+-/* clear share access for a given stateid */
+-static inline void
+-clear_access(u32 access, struct nfs4_ol_stateid *stp)
++static inline void nfs4_free_stateowner(struct nfs4_stateowner *sop)
+ {
+-	unsigned char mask = 1 << access;
+-
+-	WARN_ON_ONCE(access > NFS4_SHARE_ACCESS_BOTH);
+-	stp->st_access_bmap &= ~mask;
++	kfree(sop->so_owner.data);
++	sop->so_ops->so_free(sop);
+ }
+ 
+-/* test whether a given stateid has access */
+-static inline bool
+-test_access(u32 access, struct nfs4_ol_stateid *stp)
++static void nfs4_put_stateowner(struct nfs4_stateowner *sop)
+ {
+-	unsigned char mask = 1 << access;
++	struct nfs4_client *clp = sop->so_client;
+ 
+-	return (bool)(stp->st_access_bmap & mask);
+-}
+-
+-/* set share deny for a given stateid */
+-static inline void
+-set_deny(u32 deny, struct nfs4_ol_stateid *stp)
+-{
+-	unsigned char mask = 1 << deny;
+-
+-	WARN_ON_ONCE(deny > NFS4_SHARE_DENY_BOTH);
+-	stp->st_deny_bmap |= mask;
+-}
+-
+-/* clear share deny for a given stateid */
+-static inline void
+-clear_deny(u32 deny, struct nfs4_ol_stateid *stp)
+-{
+-	unsigned char mask = 1 << deny;
+-
+-	WARN_ON_ONCE(deny > NFS4_SHARE_DENY_BOTH);
+-	stp->st_deny_bmap &= ~mask;
+-}
+-
+-/* test whether a given stateid is denying specific access */
+-static inline bool
+-test_deny(u32 deny, struct nfs4_ol_stateid *stp)
+-{
+-	unsigned char mask = 1 << deny;
+-
+-	return (bool)(stp->st_deny_bmap & mask);
+-}
+-
+-static int nfs4_access_to_omode(u32 access)
+-{
+-	switch (access & NFS4_SHARE_ACCESS_BOTH) {
+-	case NFS4_SHARE_ACCESS_READ:
+-		return O_RDONLY;
+-	case NFS4_SHARE_ACCESS_WRITE:
+-		return O_WRONLY;
+-	case NFS4_SHARE_ACCESS_BOTH:
+-		return O_RDWR;
+-	}
+-	WARN_ON_ONCE(1);
+-	return O_RDONLY;
+-}
+-
+-/*
+- * A stateid that had a deny mode associated with it is being released
+- * or downgraded. Recalculate the deny mode on the file.
+- */
+-static void
+-recalculate_deny_mode(struct nfs4_file *fp)
+-{
+-	struct nfs4_ol_stateid *stp;
+-
+-	spin_lock(&fp->fi_lock);
+-	fp->fi_share_deny = 0;
+-	list_for_each_entry(stp, &fp->fi_stateids, st_perfile)
+-		fp->fi_share_deny |= bmap_to_share_mode(stp->st_deny_bmap);
+-	spin_unlock(&fp->fi_lock);
+-}
+-
+-static void
+-reset_union_bmap_deny(u32 deny, struct nfs4_ol_stateid *stp)
+-{
+-	int i;
+-	bool change = false;
+-
+-	for (i = 1; i < 4; i++) {
+-		if ((i & deny) != i) {
+-			change = true;
+-			clear_deny(i, stp);
+-		}
+-	}
+-
+-	/* Recalculate per-file deny mode if there was a change */
+-	if (change)
+-		recalculate_deny_mode(stp->st_stid.sc_file);
+-}
+-
+-/* release all access and file references for a given stateid */
+-static void
+-release_all_access(struct nfs4_ol_stateid *stp)
+-{
+-	int i;
+-	struct nfs4_file *fp = stp->st_stid.sc_file;
+-
+-	if (fp && stp->st_deny_bmap != 0)
+-		recalculate_deny_mode(fp);
+-
+-	for (i = 1; i < 4; i++) {
+-		if (test_access(i, stp))
+-			nfs4_file_put_access(stp->st_stid.sc_file, i);
+-		clear_access(i, stp);
+-	}
+-}
+-
+-static inline void nfs4_free_stateowner(struct nfs4_stateowner *sop)
+-{
+-	kfree(sop->so_owner.data);
+-	sop->so_ops->so_free(sop);
+-}
+-
+-static void nfs4_put_stateowner(struct nfs4_stateowner *sop)
+-{
+-	struct nfs4_client *clp = sop->so_client;
+-
+-	might_lock(&clp->cl_lock);
+-
+-	if (!atomic_dec_and_lock(&sop->so_count, &clp->cl_lock))
+-		return;
+-	sop->so_ops->so_unhash(sop);
+-	spin_unlock(&clp->cl_lock);
+-	nfs4_free_stateowner(sop);
++	might_lock(&clp->cl_lock);
++
++	if (!atomic_dec_and_lock(&sop->so_count, &clp->cl_lock))
++		return;
++	sop->so_ops->so_unhash(sop);
++	spin_unlock(&clp->cl_lock);
++	nfs4_free_stateowner(sop);
+ }
+ 
+ static bool
+@@ -1710,13 +1832,12 @@ static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
+ 	int numslots = fattrs->maxreqs;
+ 	int slotsize = slot_bytes(fattrs);
+ 	struct nfsd4_session *new;
+-	int mem, i;
++	int i;
+ 
+-	BUILD_BUG_ON(NFSD_MAX_SLOTS_PER_SESSION * sizeof(struct nfsd4_slot *)
+-			+ sizeof(struct nfsd4_session) > PAGE_SIZE);
+-	mem = numslots * sizeof(struct nfsd4_slot *);
++	BUILD_BUG_ON(struct_size(new, se_slots, NFSD_MAX_SLOTS_PER_SESSION)
++		     > PAGE_SIZE);
+ 
+-	new = kzalloc(sizeof(*new) + mem, GFP_KERNEL);
++	new = kzalloc(struct_size(new, se_slots, numslots), GFP_KERNEL);
+ 	if (!new)
+ 		return NULL;
+ 	/* allocate each struct nfsd4_slot and data cache in one piece */
+@@ -1748,6 +1869,8 @@ static void nfsd4_conn_lost(struct svc_xpt_user *u)
+ 	struct nfsd4_conn *c = container_of(u, struct nfsd4_conn, cn_xpt_user);
+ 	struct nfs4_client *clp = c->cn_session->se_client;
+ 
++	trace_nfsd_cb_lost(clp);
++
+ 	spin_lock(&clp->cl_lock);
+ 	if (!list_empty(&c->cn_persession)) {
+ 		list_del(&c->cn_persession);
+@@ -1959,11 +2082,16 @@ STALE_CLIENTID(clientid_t *clid, struct nfsd_net *nn)
+  * This type of memory management is somewhat inefficient, but we use it
+  * anyway since SETCLIENTID is not a common operation.
+  */
+-static struct nfs4_client *alloc_client(struct xdr_netobj name)
++static struct nfs4_client *alloc_client(struct xdr_netobj name,
++				struct nfsd_net *nn)
+ {
+ 	struct nfs4_client *clp;
+ 	int i;
+ 
++	if (atomic_read(&nn->nfs4_client_count) >= nn->nfs4_max_clients) {
++		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
++		return NULL;
++	}
+ 	clp = kmem_cache_zalloc(client_slab, GFP_KERNEL);
+ 	if (clp == NULL)
+ 		return NULL;
+@@ -1981,6 +2109,9 @@ static struct nfs4_client *alloc_client(struct xdr_netobj name)
+ 	idr_init(&clp->cl_stateids);
+ 	atomic_set(&clp->cl_rpc_users, 0);
+ 	clp->cl_cb_state = NFSD4_CB_UNKNOWN;
++	clp->cl_state = NFSD4_ACTIVE;
++	atomic_inc(&nn->nfs4_client_count);
++	atomic_set(&clp->cl_delegs_in_recall, 0);
+ 	INIT_LIST_HEAD(&clp->cl_idhash);
+ 	INIT_LIST_HEAD(&clp->cl_openowners);
+ 	INIT_LIST_HEAD(&clp->cl_delegations);
+@@ -2012,6 +2143,7 @@ static void __free_client(struct kref *k)
+ 	kfree(clp->cl_nii_domain.data);
+ 	kfree(clp->cl_nii_name.data);
+ 	idr_destroy(&clp->cl_stateids);
++	kfree(clp->cl_ra);
+ 	kmem_cache_free(client_slab, clp);
+ }
+ 
+@@ -2087,6 +2219,7 @@ static __be32 mark_client_expired_locked(struct nfs4_client *clp)
+ static void
+ __destroy_client(struct nfs4_client *clp)
+ {
++	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+ 	int i;
+ 	struct nfs4_openowner *oo;
+ 	struct nfs4_delegation *dp;
+@@ -2130,6 +2263,8 @@ __destroy_client(struct nfs4_client *clp)
+ 	nfsd4_shutdown_callback(clp);
+ 	if (clp->cl_cb_conn.cb_xprt)
+ 		svc_xprt_put(clp->cl_cb_conn.cb_xprt);
++	atomic_add_unless(&nn->nfs4_client_count, -1, 0);
++	nfsd4_dec_courtesy_client_count(nn, clp);
+ 	free_client(clp);
+ 	wake_up_all(&expiry_wq);
+ }
+@@ -2358,9 +2493,24 @@ static void seq_quote_mem(struct seq_file *m, char *data, int len)
+ 	seq_printf(m, "\"");
+ }
+ 
++static const char *cb_state2str(int state)
++{
++	switch (state) {
++	case NFSD4_CB_UP:
++		return "UP";
++	case NFSD4_CB_UNKNOWN:
++		return "UNKNOWN";
++	case NFSD4_CB_DOWN:
++		return "DOWN";
++	case NFSD4_CB_FAULT:
++		return "FAULT";
++	}
++	return "UNDEFINED";
++}
++
+ static int client_info_show(struct seq_file *m, void *v)
+ {
+-	struct inode *inode = m->private;
++	struct inode *inode = file_inode(m->file);
+ 	struct nfs4_client *clp;
+ 	u64 clid;
+ 
+@@ -2370,6 +2520,17 @@ static int client_info_show(struct seq_file *m, void *v)
+ 	memcpy(&clid, &clp->cl_clientid, sizeof(clid));
+ 	seq_printf(m, "clientid: 0x%llx\n", clid);
+ 	seq_printf(m, "address: \"%pISpc\"\n", (struct sockaddr *)&clp->cl_addr);
++
++	if (clp->cl_state == NFSD4_COURTESY)
++		seq_puts(m, "status: courtesy\n");
++	else if (clp->cl_state == NFSD4_EXPIRABLE)
++		seq_puts(m, "status: expirable\n");
++	else if (test_bit(NFSD4_CLIENT_CONFIRMED, &clp->cl_flags))
++		seq_puts(m, "status: confirmed\n");
++	else
++		seq_puts(m, "status: unconfirmed\n");
++	seq_printf(m, "seconds from last renew: %lld\n",
++		ktime_get_boottime_seconds() - clp->cl_time);
+ 	seq_printf(m, "name: ");
+ 	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
+ 	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
+@@ -2382,22 +2543,14 @@ static int client_info_show(struct seq_file *m, void *v)
+ 		seq_printf(m, "\nImplementation time: [%lld, %ld]\n",
+ 			clp->cl_nii_time.tv_sec, clp->cl_nii_time.tv_nsec);
+ 	}
++	seq_printf(m, "callback state: %s\n", cb_state2str(clp->cl_cb_state));
++	seq_printf(m, "callback address: %pISpc\n", &clp->cl_cb_conn.cb_addr);
+ 	drop_client(clp);
+ 
+ 	return 0;
+ }
+ 
+-static int client_info_open(struct inode *inode, struct file *file)
+-{
+-	return single_open(file, client_info_show, inode);
+-}
+-
+-static const struct file_operations client_info_fops = {
+-	.open		= client_info_open,
+-	.read		= seq_read,
+-	.llseek		= seq_lseek,
+-	.release	= single_release,
+-};
++DEFINE_SHOW_ATTRIBUTE(client_info);
+ 
+ static void *states_start(struct seq_file *s, loff_t *pos)
+ 	__acquires(&clp->cl_lock)
+@@ -2440,7 +2593,7 @@ static void nfs4_show_fname(struct seq_file *s, struct nfsd_file *f)
+ 
+ static void nfs4_show_superblock(struct seq_file *s, struct nfsd_file *f)
+ {
+-	struct inode *inode = f->nf_inode;
++	struct inode *inode = file_inode(f->nf_file);
+ 
+ 	seq_printf(s, "superblock: \"%02x:%02x:%ld\"",
+ 					MAJOR(inode->i_sb->s_dev),
+@@ -2668,6 +2821,8 @@ static void force_expire_client(struct nfs4_client *clp)
+ 	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+ 	bool already_expired;
+ 
++	trace_nfsd_clid_admin_expired(&clp->cl_clientid);
++
+ 	spin_lock(&nn->client_lock);
+ 	clp->cl_time = 0;
+ 	spin_unlock(&nn->client_lock);
+@@ -2716,6 +2871,36 @@ static const struct tree_descr client_files[] = {
+ 	[3] = {""},
+ };
+ 
++static int
++nfsd4_cb_recall_any_done(struct nfsd4_callback *cb,
++				struct rpc_task *task)
++{
++	switch (task->tk_status) {
++	case -NFS4ERR_DELAY:
++		rpc_delay(task, 2 * HZ);
++		return 0;
++	default:
++		return 1;
++	}
++}
++
++static void
++nfsd4_cb_recall_any_release(struct nfsd4_callback *cb)
++{
++	struct nfs4_client *clp = cb->cb_clp;
++	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
++
++	spin_lock(&nn->client_lock);
++	clear_bit(NFSD4_CLIENT_CB_RECALL_ANY, &clp->cl_flags);
++	put_client_renew_locked(clp);
++	spin_unlock(&nn->client_lock);
++}
++
++static const struct nfsd4_callback_ops nfsd4_cb_recall_any_ops = {
++	.done		= nfsd4_cb_recall_any_done,
++	.release	= nfsd4_cb_recall_any_release,
++};
++
+ static struct nfs4_client *create_client(struct xdr_netobj name,
+ 		struct svc_rqst *rqstp, nfs4_verifier *verf)
+ {
+@@ -2724,8 +2909,9 @@ static struct nfs4_client *create_client(struct xdr_netobj name,
+ 	int ret;
+ 	struct net *net = SVC_NET(rqstp);
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
++	struct dentry *dentries[ARRAY_SIZE(client_files)];
+ 
+-	clp = alloc_client(name);
++	clp = alloc_client(name, nn);
+ 	if (clp == NULL)
+ 		return NULL;
+ 
+@@ -2743,13 +2929,23 @@ static struct nfs4_client *create_client(struct xdr_netobj name,
+ 	memcpy(&clp->cl_addr, sa, sizeof(struct sockaddr_storage));
+ 	clp->cl_cb_session = NULL;
+ 	clp->net = net;
+-	clp->cl_nfsd_dentry = nfsd_client_mkdir(nn, &clp->cl_nfsdfs,
+-			clp->cl_clientid.cl_id - nn->clientid_base,
+-			client_files);
++	clp->cl_nfsd_dentry = nfsd_client_mkdir(
++		nn, &clp->cl_nfsdfs,
++		clp->cl_clientid.cl_id - nn->clientid_base,
++		client_files, dentries);
++	clp->cl_nfsd_info_dentry = dentries[0];
+ 	if (!clp->cl_nfsd_dentry) {
+ 		free_client(clp);
+ 		return NULL;
+ 	}
++	clp->cl_ra = kzalloc(sizeof(*clp->cl_ra), GFP_KERNEL);
++	if (!clp->cl_ra) {
++		free_client(clp);
++		return NULL;
++	}
++	clp->cl_ra_time = 0;
++	nfsd4_init_cb(&clp->cl_ra->ra_cb, clp, &nfsd4_cb_recall_any_ops,
++			NFSPROC4_CLNT_CB_RECALL_ANY);
+ 	return clp;
+ }
+ 
+@@ -2816,11 +3012,11 @@ move_to_confirmed(struct nfs4_client *clp)
+ 
+ 	lockdep_assert_held(&nn->client_lock);
+ 
+-	dprintk("NFSD: move_to_confirm nfs4_client %p\n", clp);
+ 	list_move(&clp->cl_idhash, &nn->conf_id_hashtbl[idhashval]);
+ 	rb_erase(&clp->cl_namenode, &nn->unconf_name_tree);
+ 	add_clp_to_name_tree(clp, &nn->conf_name_tree);
+ 	set_bit(NFSD4_CLIENT_CONFIRMED, &clp->cl_flags);
++	trace_nfsd_clid_confirmed(&clp->cl_clientid);
+ 	renew_client_locked(clp);
+ }
+ 
+@@ -2925,7 +3121,7 @@ gen_callback(struct nfs4_client *clp, struct nfsd4_setclientid *se, struct svc_r
+ static void
+ nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
+ {
+-	struct xdr_buf *buf = resp->xdr.buf;
++	struct xdr_buf *buf = resp->xdr->buf;
+ 	struct nfsd4_slot *slot = resp->cstate.slot;
+ 	unsigned int base;
+ 
+@@ -2995,7 +3191,7 @@ nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
+ 			 struct nfsd4_sequence *seq)
+ {
+ 	struct nfsd4_slot *slot = resp->cstate.slot;
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 	__be32 status;
+ 
+@@ -3089,7 +3285,7 @@ nfsd4_exchange_id(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 
+ 	rpc_ntop(sa, addr_str, sizeof(addr_str));
+ 	dprintk("%s rqstp=%p exid=%p clname.len=%u clname.data=%p "
+-		"ip_addr=%s flags %x, spa_how %d\n",
++		"ip_addr=%s flags %x, spa_how %u\n",
+ 		__func__, rqstp, exid, exid->clname.len, exid->clname.data,
+ 		addr_str, exid->flags, exid->spa_how);
+ 
+@@ -3136,6 +3332,7 @@ nfsd4_exchange_id(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 			goto out_nolock;
+ 		}
+ 		new->cl_mach_cred = true;
++		break;
+ 	case SP4_NONE:
+ 		break;
+ 	default:				/* checked by xdr code */
+@@ -3172,20 +3369,24 @@ nfsd4_exchange_id(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 			}
+ 			/* case 6 */
+ 			exid->flags |= EXCHGID4_FLAG_CONFIRMED_R;
++			trace_nfsd_clid_confirmed_r(conf);
+ 			goto out_copy;
+ 		}
+ 		if (!creds_match) { /* case 3 */
+ 			if (client_has_state(conf)) {
+ 				status = nfserr_clid_inuse;
++				trace_nfsd_clid_cred_mismatch(conf, rqstp);
+ 				goto out;
+ 			}
+ 			goto out_new;
+ 		}
+ 		if (verfs_match) { /* case 2 */
+ 			conf->cl_exchange_flags |= EXCHGID4_FLAG_CONFIRMED_R;
++			trace_nfsd_clid_confirmed_r(conf);
+ 			goto out_copy;
+ 		}
+ 		/* case 5, client reboot */
++		trace_nfsd_clid_verf_mismatch(conf, rqstp, &verf);
+ 		conf = NULL;
+ 		goto out_new;
+ 	}
+@@ -3195,16 +3396,19 @@ nfsd4_exchange_id(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		goto out;
+ 	}
+ 
+-	unconf  = find_unconfirmed_client_by_name(&exid->clname, nn);
++	unconf = find_unconfirmed_client_by_name(&exid->clname, nn);
+ 	if (unconf) /* case 4, possible retry or client restart */
+ 		unhash_client_locked(unconf);
+ 
+-	/* case 1 (normal case) */
++	/* case 1, new owner ID */
++	trace_nfsd_clid_fresh(new);
++
+ out_new:
+ 	if (conf) {
+ 		status = mark_client_expired_locked(conf);
+ 		if (status)
+ 			goto out;
++		trace_nfsd_clid_replaced(&conf->cl_clientid);
+ 	}
+ 	new->cl_minorversion = cstate->minorversion;
+ 	new->cl_spo_must_allow.u.words[0] = exid->spo_must_allow[0];
+@@ -3228,8 +3432,10 @@ nfsd4_exchange_id(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ out_nolock:
+ 	if (new)
+ 		expire_client(new);
+-	if (unconf)
++	if (unconf) {
++		trace_nfsd_clid_expire_unconf(&unconf->cl_clientid);
+ 		expire_client(unconf);
++	}
+ 	return status;
+ }
+ 
+@@ -3421,9 +3627,10 @@ nfsd4_create_session(struct svc_rqst *rqstp,
+ 			goto out_free_conn;
+ 		}
+ 	} else if (unconf) {
++		status = nfserr_clid_inuse;
+ 		if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
+ 		    !rpc_cmp_addr(sa, (struct sockaddr *) &unconf->cl_addr)) {
+-			status = nfserr_clid_inuse;
++			trace_nfsd_clid_cred_mismatch(unconf, rqstp);
+ 			goto out_free_conn;
+ 		}
+ 		status = nfserr_wrong_cred;
+@@ -3443,6 +3650,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
+ 				old = NULL;
+ 				goto out_free_conn;
+ 			}
++			trace_nfsd_clid_replaced(&old->cl_clientid);
+ 		}
+ 		move_to_confirmed(unconf);
+ 		conf = unconf;
+@@ -3467,6 +3675,8 @@ nfsd4_create_session(struct svc_rqst *rqstp,
+ 	/* cache solo and embedded create sessions under the client_lock */
+ 	nfsd4_cache_create_session(cr_ses, cs_slot, status);
+ 	spin_unlock(&nn->client_lock);
++	if (conf == unconf)
++		fsnotify_dentry(conf->cl_nfsd_info_dentry, FS_MODIFY);
+ 	/* init connection and backchannel */
+ 	nfsd4_init_conn(rqstp, conn, new);
+ 	nfsd4_put_session(new);
+@@ -3740,7 +3950,7 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ {
+ 	struct nfsd4_sequence *seq = &u->sequence;
+ 	struct nfsd4_compoundres *resp = rqstp->rq_resp;
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct xdr_stream *xdr = resp->xdr;
+ 	struct nfsd4_session *session;
+ 	struct nfs4_client *clp;
+ 	struct nfsd4_slot *slot;
+@@ -3910,6 +4120,7 @@ nfsd4_destroy_clientid(struct svc_rqst *rqstp,
+ 		status = nfserr_wrong_cred;
+ 		goto out;
+ 	}
++	trace_nfsd_clid_destroyed(&clp->cl_clientid);
+ 	unhash_client_locked(clp);
+ out:
+ 	spin_unlock(&nn->client_lock);
+@@ -3923,6 +4134,7 @@ nfsd4_reclaim_complete(struct svc_rqst *rqstp,
+ 		struct nfsd4_compound_state *cstate, union nfsd4_op_u *u)
+ {
+ 	struct nfsd4_reclaim_complete *rc = &u->reclaim_complete;
++	struct nfs4_client *clp = cstate->clp;
+ 	__be32 status = 0;
+ 
+ 	if (rc->rca_one_fs) {
+@@ -3936,12 +4148,11 @@ nfsd4_reclaim_complete(struct svc_rqst *rqstp,
+ 	}
+ 
+ 	status = nfserr_complete_already;
+-	if (test_and_set_bit(NFSD4_CLIENT_RECLAIM_COMPLETE,
+-			     &cstate->session->se_client->cl_flags))
++	if (test_and_set_bit(NFSD4_CLIENT_RECLAIM_COMPLETE, &clp->cl_flags))
+ 		goto out;
+ 
+ 	status = nfserr_stale_clientid;
+-	if (is_client_expired(cstate->session->se_client))
++	if (is_client_expired(clp))
+ 		/*
+ 		 * The following error isn't really legal.
+ 		 * But we only get here if the client just explicitly
+@@ -3952,8 +4163,9 @@ nfsd4_reclaim_complete(struct svc_rqst *rqstp,
+ 		goto out;
+ 
+ 	status = nfs_ok;
+-	nfsd4_client_record_create(cstate->session->se_client);
+-	inc_reclaim_complete(cstate->session->se_client);
++	trace_nfsd_clid_reclaim_complete(&clp->cl_clientid);
++	nfsd4_client_record_create(clp);
++	inc_reclaim_complete(clp);
+ out:
+ 	return status;
+ }
+@@ -3973,27 +4185,29 @@ nfsd4_setclientid(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	new = create_client(clname, rqstp, &clverifier);
+ 	if (new == NULL)
+ 		return nfserr_jukebox;
+-	/* Cases below refer to rfc 3530 section 14.2.33: */
+ 	spin_lock(&nn->client_lock);
+ 	conf = find_confirmed_client_by_name(&clname, nn);
+ 	if (conf && client_has_state(conf)) {
+-		/* case 0: */
+ 		status = nfserr_clid_inuse;
+ 		if (clp_used_exchangeid(conf))
+ 			goto out;
+ 		if (!same_creds(&conf->cl_cred, &rqstp->rq_cred)) {
+-			trace_nfsd_clid_inuse_err(conf);
++			trace_nfsd_clid_cred_mismatch(conf, rqstp);
+ 			goto out;
+ 		}
+ 	}
+ 	unconf = find_unconfirmed_client_by_name(&clname, nn);
+ 	if (unconf)
+ 		unhash_client_locked(unconf);
+-	/* We need to handle only case 1: probable callback update */
+-	if (conf && same_verf(&conf->cl_verifier, &clverifier)) {
+-		copy_clid(new, conf);
+-		gen_confirm(new, nn);
+-	}
++	if (conf) {
++		if (same_verf(&conf->cl_verifier, &clverifier)) {
++			copy_clid(new, conf);
++			gen_confirm(new, nn);
++		} else
++			trace_nfsd_clid_verf_mismatch(conf, rqstp,
++						      &clverifier);
++	} else
++		trace_nfsd_clid_fresh(new);
+ 	new->cl_minorversion = 0;
+ 	gen_callback(new, setclid, rqstp);
+ 	add_to_unconfirmed(new);
+@@ -4006,12 +4220,13 @@ nfsd4_setclientid(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	spin_unlock(&nn->client_lock);
+ 	if (new)
+ 		free_client(new);
+-	if (unconf)
++	if (unconf) {
++		trace_nfsd_clid_expire_unconf(&unconf->cl_clientid);
+ 		expire_client(unconf);
++	}
+ 	return status;
+ }
+ 
+-
+ __be32
+ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
+ 			struct nfsd4_compound_state *cstate,
+@@ -4040,25 +4255,27 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
+ 	 * Nevertheless, RFC 7530 recommends INUSE for this case:
+ 	 */
+ 	status = nfserr_clid_inuse;
+-	if (unconf && !same_creds(&unconf->cl_cred, &rqstp->rq_cred))
++	if (unconf && !same_creds(&unconf->cl_cred, &rqstp->rq_cred)) {
++		trace_nfsd_clid_cred_mismatch(unconf, rqstp);
+ 		goto out;
+-	if (conf && !same_creds(&conf->cl_cred, &rqstp->rq_cred))
++	}
++	if (conf && !same_creds(&conf->cl_cred, &rqstp->rq_cred)) {
++		trace_nfsd_clid_cred_mismatch(conf, rqstp);
+ 		goto out;
+-	/* cases below refer to rfc 3530 section 14.2.34: */
++	}
+ 	if (!unconf || !same_verf(&confirm, &unconf->cl_confirm)) {
+ 		if (conf && same_verf(&confirm, &conf->cl_confirm)) {
+-			/* case 2: probable retransmit */
+ 			status = nfs_ok;
+-		} else /* case 4: client hasn't noticed we rebooted yet? */
++		} else
+ 			status = nfserr_stale_clientid;
+ 		goto out;
+ 	}
+ 	status = nfs_ok;
+-	if (conf) { /* case 1: callback update */
++	if (conf) {
+ 		old = unconf;
+ 		unhash_client_locked(old);
+ 		nfsd4_change_callback(conf, &unconf->cl_cb_conn);
+-	} else { /* case 3: normal case; new or rebooted client */
++	} else {
+ 		old = find_confirmed_client_by_name(&unconf->cl_name, nn);
+ 		if (old) {
+ 			status = nfserr_clid_inuse;
+@@ -4073,12 +4290,15 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
+ 				old = NULL;
+ 				goto out;
+ 			}
++			trace_nfsd_clid_replaced(&old->cl_clientid);
+ 		}
+ 		move_to_confirmed(unconf);
+ 		conf = unconf;
+ 	}
+ 	get_client_locked(conf);
+ 	spin_unlock(&nn->client_lock);
++	if (conf == unconf)
++		fsnotify_dentry(conf->cl_nfsd_info_dentry, FS_MODIFY);
+ 	nfsd4_probe_callback(conf);
+ 	spin_lock(&nn->client_lock);
+ 	put_client_renew_locked(conf);
+@@ -4095,27 +4315,26 @@ static struct nfs4_file *nfsd4_alloc_file(void)
+ }
+ 
+ /* OPEN Share state helper functions */
+-static void nfsd4_init_file(struct knfsd_fh *fh, unsigned int hashval,
+-				struct nfs4_file *fp)
+-{
+-	lockdep_assert_held(&state_lock);
+ 
++static void nfsd4_file_init(const struct svc_fh *fh, struct nfs4_file *fp)
++{
+ 	refcount_set(&fp->fi_ref, 1);
+ 	spin_lock_init(&fp->fi_lock);
+ 	INIT_LIST_HEAD(&fp->fi_stateids);
+ 	INIT_LIST_HEAD(&fp->fi_delegations);
+ 	INIT_LIST_HEAD(&fp->fi_clnt_odstate);
+-	fh_copy_shallow(&fp->fi_fhandle, fh);
++	fh_copy_shallow(&fp->fi_fhandle, &fh->fh_handle);
+ 	fp->fi_deleg_file = NULL;
+ 	fp->fi_had_conflict = false;
+ 	fp->fi_share_deny = 0;
+ 	memset(fp->fi_fds, 0, sizeof(fp->fi_fds));
+ 	memset(fp->fi_access, 0, sizeof(fp->fi_access));
++	fp->fi_aliased = false;
++	fp->fi_inode = d_inode(fh->fh_dentry);
+ #ifdef CONFIG_NFSD_PNFS
+ 	INIT_LIST_HEAD(&fp->fi_lo_states);
+ 	atomic_set(&fp->fi_lo_recalls, 0);
+ #endif
+-	hlist_add_head_rcu(&fp->fi_hash, &file_hashtbl[hashval]);
+ }
+ 
+ void
+@@ -4179,6 +4398,51 @@ nfsd4_init_slabs(void)
+ 	return -ENOMEM;
+ }
+ 
++static unsigned long
++nfsd4_state_shrinker_count(struct shrinker *shrink, struct shrink_control *sc)
++{
++	int count;
++	struct nfsd_net *nn = container_of(shrink,
++			struct nfsd_net, nfsd_client_shrinker);
++
++	count = atomic_read(&nn->nfsd_courtesy_clients);
++	if (!count)
++		count = atomic_long_read(&num_delegations);
++	if (count)
++		queue_work(laundry_wq, &nn->nfsd_shrinker_work);
++	return (unsigned long)count;
++}
++
++static unsigned long
++nfsd4_state_shrinker_scan(struct shrinker *shrink, struct shrink_control *sc)
++{
++	return SHRINK_STOP;
++}
++
++void
++nfsd4_init_leases_net(struct nfsd_net *nn)
++{
++	struct sysinfo si;
++	u64 max_clients;
++
++	nn->nfsd4_lease = 90;	/* default lease time */
++	nn->nfsd4_grace = 90;
++	nn->somebody_reclaimed = false;
++	nn->track_reclaim_completes = false;
++	nn->clverifier_counter = prandom_u32();
++	nn->clientid_base = prandom_u32();
++	nn->clientid_counter = nn->clientid_base + 1;
++	nn->s2s_cp_cl_id = nn->clientid_counter++;
++
++	atomic_set(&nn->nfs4_client_count, 0);
++	si_meminfo(&si);
++	max_clients = (u64)si.totalram * si.mem_unit / (1024 * 1024 * 1024);
++	max_clients *= NFS4_CLIENTS_PER_GB;
++	nn->nfs4_max_clients = max_t(int, max_clients, NFS4_CLIENTS_PER_GB);
++
++	atomic_set(&nn->nfsd_courtesy_clients, 0);
++}
++
+ static void init_nfs4_replay(struct nfs4_replay *rp)
+ {
+ 	rp->rp_status = nfserr_serverfault;
+@@ -4447,55 +4711,80 @@ move_to_close_lru(struct nfs4_ol_stateid *s, struct net *net)
+ 		nfs4_put_stid(&last->st_stid);
+ }
+ 
+-/* search file_hashtbl[] for file */
+-static struct nfs4_file *
+-find_file_locked(struct knfsd_fh *fh, unsigned int hashval)
++static noinline_for_stack struct nfs4_file *
++nfsd4_file_hash_lookup(const struct svc_fh *fhp)
+ {
+-	struct nfs4_file *fp;
++	struct inode *inode = d_inode(fhp->fh_dentry);
++	struct rhlist_head *tmp, *list;
++	struct nfs4_file *fi;
+ 
+-	hlist_for_each_entry_rcu(fp, &file_hashtbl[hashval], fi_hash,
+-				lockdep_is_held(&state_lock)) {
+-		if (fh_match(&fp->fi_fhandle, fh)) {
+-			if (refcount_inc_not_zero(&fp->fi_ref))
+-				return fp;
++	rcu_read_lock();
++	list = rhltable_lookup(&nfs4_file_rhltable, &inode,
++			       nfs4_file_rhash_params);
++	rhl_for_each_entry_rcu(fi, tmp, list, fi_rlist) {
++		if (fh_match(&fi->fi_fhandle, &fhp->fh_handle)) {
++			if (refcount_inc_not_zero(&fi->fi_ref)) {
++				rcu_read_unlock();
++				return fi;
++			}
+ 		}
+ 	}
++	rcu_read_unlock();
+ 	return NULL;
+ }
+ 
+-struct nfs4_file *
+-find_file(struct knfsd_fh *fh)
+-{
+-	struct nfs4_file *fp;
+-	unsigned int hashval = file_hashval(fh);
++/*
++ * On hash insertion, identify entries with the same inode but
++ * distinct filehandles. They will all be on the list returned
++ * by rhltable_lookup().
++ *
++ * inode->i_lock prevents racing insertions from adding an entry
++ * for the same inode/fhp pair twice.
++ */
++static noinline_for_stack struct nfs4_file *
++nfsd4_file_hash_insert(struct nfs4_file *new, const struct svc_fh *fhp)
++{
++	struct inode *inode = d_inode(fhp->fh_dentry);
++	struct rhlist_head *tmp, *list;
++	struct nfs4_file *ret = NULL;
++	bool alias_found = false;
++	struct nfs4_file *fi;
++	int err;
+ 
+ 	rcu_read_lock();
+-	fp = find_file_locked(fh, hashval);
+-	rcu_read_unlock();
+-	return fp;
+-}
++	spin_lock(&inode->i_lock);
+ 
+-static struct nfs4_file *
+-find_or_add_file(struct nfs4_file *new, struct knfsd_fh *fh)
+-{
+-	struct nfs4_file *fp;
+-	unsigned int hashval = file_hashval(fh);
++	list = rhltable_lookup(&nfs4_file_rhltable, &inode,
++			       nfs4_file_rhash_params);
++	rhl_for_each_entry_rcu(fi, tmp, list, fi_rlist) {
++		if (fh_match(&fi->fi_fhandle, &fhp->fh_handle)) {
++			if (refcount_inc_not_zero(&fi->fi_ref))
++				ret = fi;
++		} else
++			fi->fi_aliased = alias_found = true;
++	}
++	if (ret)
++		goto out_unlock;
+ 
+-	rcu_read_lock();
+-	fp = find_file_locked(fh, hashval);
+-	rcu_read_unlock();
+-	if (fp)
+-		return fp;
++	nfsd4_file_init(fhp, new);
++	err = rhltable_insert(&nfs4_file_rhltable, &new->fi_rlist,
++			      nfs4_file_rhash_params);
++	if (err)
++		goto out_unlock;
+ 
+-	spin_lock(&state_lock);
+-	fp = find_file_locked(fh, hashval);
+-	if (likely(fp == NULL)) {
+-		nfsd4_init_file(fh, hashval, new);
+-		fp = new;
+-	}
+-	spin_unlock(&state_lock);
++	new->fi_aliased = alias_found;
++	ret = new;
+ 
+-	return fp;
++out_unlock:
++	spin_unlock(&inode->i_lock);
++	rcu_read_unlock();
++	return ret;
++}
++
++static noinline_for_stack void nfsd4_file_hash_remove(struct nfs4_file *fi)
++{
++	rhltable_remove(&nfs4_file_rhltable, &fi->fi_rlist,
++			nfs4_file_rhash_params);
+ }
+ 
+ /*
+@@ -4508,9 +4797,10 @@ nfs4_share_conflict(struct svc_fh *current_fh, unsigned int deny_type)
+ 	struct nfs4_file *fp;
+ 	__be32 ret = nfs_ok;
+ 
+-	fp = find_file(&current_fh->fh_handle);
++	fp = nfsd4_file_hash_lookup(current_fh);
+ 	if (!fp)
+ 		return ret;
++
+ 	/* Check for conflicting share reservations */
+ 	spin_lock(&fp->fi_lock);
+ 	if (fp->fi_share_deny & deny_type)
+@@ -4520,6 +4810,35 @@ nfs4_share_conflict(struct svc_fh *current_fh, unsigned int deny_type)
+ 	return ret;
+ }
+ 
++static bool nfsd4_deleg_present(const struct inode *inode)
++{
++	struct file_lock_context *ctx = locks_inode_context(inode);
++
++	return ctx && !list_empty_careful(&ctx->flc_lease);
++}
++
++/**
++ * nfsd_wait_for_delegreturn - wait for delegations to be returned
++ * @rqstp: the RPC transaction being executed
++ * @inode: in-core inode of the file being waited for
++ *
++ * The timeout prevents deadlock if all nfsd threads happen to be
++ * tied up waiting for returning delegations.
++ *
++ * Return values:
++ *   %true: delegation was returned
++ *   %false: timed out waiting for delegreturn
++ */
++bool nfsd_wait_for_delegreturn(struct svc_rqst *rqstp, struct inode *inode)
++{
++	long __maybe_unused timeo;
++
++	timeo = wait_var_event_timeout(inode, !nfsd4_deleg_present(inode),
++				       NFSD_DELEGRETURN_TIMEOUT);
++	trace_nfsd_delegret_wakeup(rqstp, inode, timeo);
++	return timeo > 0;
++}
++
+ static void nfsd4_cb_recall_prepare(struct nfsd4_callback *cb)
+ {
+ 	struct nfs4_delegation *dp = cb_to_delegation(cb);
+@@ -4548,6 +4867,8 @@ static int nfsd4_cb_recall_done(struct nfsd4_callback *cb,
+ {
+ 	struct nfs4_delegation *dp = cb_to_delegation(cb);
+ 
++	trace_nfsd_cb_recall_done(&dp->dl_stid.sc_stateid, task);
++
+ 	if (dp->dl_stid.sc_type == NFS4_CLOSED_DELEG_STID ||
+ 	    dp->dl_stid.sc_type == NFS4_REVOKED_DELEG_STID)
+ 	        return 1;
+@@ -4593,22 +4914,30 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
+ 	 * We're assuming the state code never drops its reference
+ 	 * without first removing the lease.  Since we're in this lease
+ 	 * callback (and since the lease code is serialized by the
+-	 * i_lock) we know the server hasn't removed the lease yet, and
++	 * flc_lock) we know the server hasn't removed the lease yet, and
+ 	 * we know it's safe to take a reference.
+ 	 */
+ 	refcount_inc(&dp->dl_stid.sc_count);
+-	nfsd4_run_cb(&dp->dl_recall);
++	WARN_ON_ONCE(!nfsd4_run_cb(&dp->dl_recall));
+ }
+ 
+-/* Called from break_lease() with i_lock held. */
++/* Called from break_lease() with flc_lock held. */
+ static bool
+ nfsd_break_deleg_cb(struct file_lock *fl)
+ {
+-	bool ret = false;
+ 	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
+ 	struct nfs4_file *fp = dp->dl_stid.sc_file;
++	struct nfs4_client *clp = dp->dl_stid.sc_client;
++	struct nfsd_net *nn;
++
++	trace_nfsd_cb_recall(&dp->dl_stid);
+ 
+-	trace_nfsd_deleg_break(&dp->dl_stid.sc_stateid);
++	dp->dl_recalled = true;
++	atomic_inc(&clp->cl_delegs_in_recall);
++	if (try_to_expire_client(clp)) {
++		nn = net_generic(clp->net, nfsd_net_id);
++		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
++	}
+ 
+ 	/*
+ 	 * We don't want the locks code to timeout the lease for us;
+@@ -4617,11 +4946,9 @@ nfsd_break_deleg_cb(struct file_lock *fl)
+ 	 */
+ 	fl->fl_break_time = 0;
+ 
+-	spin_lock(&fp->fi_lock);
+ 	fp->fi_had_conflict = true;
+ 	nfsd_break_one_deleg(dp);
+-	spin_unlock(&fp->fi_lock);
+-	return ret;
++	return false;
+ }
+ 
+ /**
+@@ -4652,9 +4979,14 @@ static int
+ nfsd_change_deleg_cb(struct file_lock *onlist, int arg,
+ 		     struct list_head *dispose)
+ {
+-	if (arg & F_UNLCK)
++	struct nfs4_delegation *dp = (struct nfs4_delegation *)onlist->fl_owner;
++	struct nfs4_client *clp = dp->dl_stid.sc_client;
++
++	if (arg & F_UNLCK) {
++		if (dp->dl_recalled)
++			atomic_dec(&clp->cl_delegs_in_recall);
+ 		return lease_modify(onlist, arg, dispose);
+-	else
++	} else
+ 		return -EAGAIN;
+ }
+ 
+@@ -4675,40 +5007,37 @@ static __be32 nfsd4_check_seqid(struct nfsd4_compound_state *cstate, struct nfs4
+ 	return nfserr_bad_seqid;
+ }
+ 
+-static __be32 lookup_clientid(clientid_t *clid,
+-		struct nfsd4_compound_state *cstate,
+-		struct nfsd_net *nn,
+-		bool sessions)
++static struct nfs4_client *lookup_clientid(clientid_t *clid, bool sessions,
++						struct nfsd_net *nn)
+ {
+ 	struct nfs4_client *found;
+ 
++	spin_lock(&nn->client_lock);
++	found = find_confirmed_client(clid, sessions, nn);
++	if (found)
++		atomic_inc(&found->cl_rpc_users);
++	spin_unlock(&nn->client_lock);
++	return found;
++}
++
++static __be32 set_client(clientid_t *clid,
++		struct nfsd4_compound_state *cstate,
++		struct nfsd_net *nn)
++{
+ 	if (cstate->clp) {
+-		found = cstate->clp;
+-		if (!same_clid(&found->cl_clientid, clid))
++		if (!same_clid(&cstate->clp->cl_clientid, clid))
+ 			return nfserr_stale_clientid;
+ 		return nfs_ok;
+ 	}
+-
+ 	if (STALE_CLIENTID(clid, nn))
+ 		return nfserr_stale_clientid;
+-
+ 	/*
+-	 * For v4.1+ we get the client in the SEQUENCE op. If we don't have one
+-	 * cached already then we know this is for is for v4.0 and "sessions"
+-	 * will be false.
++	 * We're in the 4.0 case (otherwise the SEQUENCE op would have
++	 * set cstate->clp), so session = false:
+ 	 */
+-	WARN_ON_ONCE(cstate->session);
+-	spin_lock(&nn->client_lock);
+-	found = find_confirmed_client(clid, sessions, nn);
+-	if (!found) {
+-		spin_unlock(&nn->client_lock);
++	cstate->clp = lookup_clientid(clid, false, nn);
++	if (!cstate->clp)
+ 		return nfserr_expired;
+-	}
+-	atomic_inc(&found->cl_rpc_users);
+-	spin_unlock(&nn->client_lock);
+-
+-	/* Cache the nfs4_client in cstate! */
+-	cstate->clp = found;
+ 	return nfs_ok;
+ }
+ 
+@@ -4722,8 +5051,6 @@ nfsd4_process_open1(struct nfsd4_compound_state *cstate,
+ 	struct nfs4_openowner *oo = NULL;
+ 	__be32 status;
+ 
+-	if (STALE_CLIENTID(&open->op_clientid, nn))
+-		return nfserr_stale_clientid;
+ 	/*
+ 	 * In case we need it later, after we've already created the
+ 	 * file and don't want to risk a further failure:
+@@ -4732,7 +5059,7 @@ nfsd4_process_open1(struct nfsd4_compound_state *cstate,
+ 	if (open->op_file == NULL)
+ 		return nfserr_jukebox;
+ 
+-	status = lookup_clientid(clientid, cstate, nn, false);
++	status = set_client(clientid, cstate, nn);
+ 	if (status)
+ 		return status;
+ 	clp = cstate->clp;
+@@ -4856,16 +5183,19 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh,
+ 		.ia_valid = ATTR_SIZE,
+ 		.ia_size = 0,
+ 	};
++	struct nfsd_attrs attrs = {
++		.na_iattr	= &iattr,
++	};
+ 	if (!open->op_truncate)
+ 		return 0;
+ 	if (!(open->op_share_access & NFS4_SHARE_ACCESS_WRITE))
+ 		return nfserr_inval;
+-	return nfsd_setattr(rqstp, fh, &iattr, 0, (time64_t)0);
++	return nfsd_setattr(rqstp, fh, &attrs, 0, (time64_t)0);
+ }
+ 
+ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
+ 		struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp,
+-		struct nfsd4_open *open)
++		struct nfsd4_open *open, bool new_stp)
+ {
+ 	struct nfsd_file *nf = NULL;
+ 	__be32 status;
+@@ -4881,6 +5211,13 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
+ 	 */
+ 	status = nfs4_file_check_deny(fp, open->op_share_deny);
+ 	if (status != nfs_ok) {
++		if (status != nfserr_share_denied) {
++			spin_unlock(&fp->fi_lock);
++			goto out;
++		}
++		if (nfs4_resolve_deny_conflicts_locked(fp, new_stp,
++				stp, open->op_share_deny, false))
++			status = nfserr_jukebox;
+ 		spin_unlock(&fp->fi_lock);
+ 		goto out;
+ 	}
+@@ -4888,6 +5225,13 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
+ 	/* set access to the file */
+ 	status = nfs4_file_get_access(fp, open->op_share_access);
+ 	if (status != nfs_ok) {
++		if (status != nfserr_share_denied) {
++			spin_unlock(&fp->fi_lock);
++			goto out;
++		}
++		if (nfs4_resolve_deny_conflicts_locked(fp, new_stp,
++				stp, open->op_share_access, true))
++			status = nfserr_jukebox;
+ 		spin_unlock(&fp->fi_lock);
+ 		goto out;
+ 	}
+@@ -4903,9 +5247,12 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
+ 
+ 	if (!fp->fi_fds[oflag]) {
+ 		spin_unlock(&fp->fi_lock);
+-		status = nfsd_file_acquire(rqstp, cur_fh, access, &nf);
+-		if (status)
++
++		status = nfsd_file_acquire_opened(rqstp, cur_fh, access,
++						  open->op_filp, &nf);
++		if (status != nfs_ok)
+ 			goto out_put_access;
++
+ 		spin_lock(&fp->fi_lock);
+ 		if (!fp->fi_fds[oflag]) {
+ 			fp->fi_fds[oflag] = nf;
+@@ -4934,21 +5281,30 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
+ }
+ 
+ static __be32
+-nfs4_upgrade_open(struct svc_rqst *rqstp, struct nfs4_file *fp, struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp, struct nfsd4_open *open)
++nfs4_upgrade_open(struct svc_rqst *rqstp, struct nfs4_file *fp,
++		struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp,
++		struct nfsd4_open *open)
+ {
+ 	__be32 status;
+ 	unsigned char old_deny_bmap = stp->st_deny_bmap;
+ 
+ 	if (!test_access(open->op_share_access, stp))
+-		return nfs4_get_vfs_file(rqstp, fp, cur_fh, stp, open);
++		return nfs4_get_vfs_file(rqstp, fp, cur_fh, stp, open, false);
+ 
+ 	/* test and set deny mode */
+ 	spin_lock(&fp->fi_lock);
+ 	status = nfs4_file_check_deny(fp, open->op_share_deny);
+-	if (status == nfs_ok) {
++	switch (status) {
++	case nfs_ok:
+ 		set_deny(open->op_share_deny, stp);
+ 		fp->fi_share_deny |=
+-				(open->op_share_deny & NFS4_SHARE_DENY_BOTH);
++			(open->op_share_deny & NFS4_SHARE_DENY_BOTH);
++		break;
++	case nfserr_share_denied:
++		if (nfs4_resolve_deny_conflicts_locked(fp, false,
++				stp, open->op_share_deny, false))
++			status = nfserr_jukebox;
++		break;
+ 	}
+ 	spin_unlock(&fp->fi_lock);
+ 
+@@ -4992,11 +5348,118 @@ static struct file_lock *nfs4_alloc_init_lease(struct nfs4_delegation *dp,
+ 	return fl;
+ }
+ 
++static int nfsd4_check_conflicting_opens(struct nfs4_client *clp,
++					 struct nfs4_file *fp)
++{
++	struct nfs4_ol_stateid *st;
++	struct file *f = fp->fi_deleg_file->nf_file;
++	struct inode *ino = locks_inode(f);
++	int writes;
++
++	writes = atomic_read(&ino->i_writecount);
++	if (!writes)
++		return 0;
++	/*
++	 * There could be multiple filehandles (hence multiple
++	 * nfs4_files) referencing this file, but that's not too
++	 * common; let's just give up in that case rather than
++	 * trying to go look up all the clients using that other
++	 * nfs4_file as well:
++	 */
++	if (fp->fi_aliased)
++		return -EAGAIN;
++	/*
++	 * If there's a close in progress, make sure that we see it
++	 * clear any fi_fds[] entries before we see it decrement
++	 * i_writecount:
++	 */
++	smp_mb__after_atomic();
++
++	if (fp->fi_fds[O_WRONLY])
++		writes--;
++	if (fp->fi_fds[O_RDWR])
++		writes--;
++	if (writes > 0)
++		return -EAGAIN; /* There may be non-NFSv4 writers */
++	/*
++	 * It's possible there are non-NFSv4 write opens in progress,
++	 * but if they haven't incremented i_writecount yet then they
++	 * also haven't called break lease yet; so, they'll break this
++	 * lease soon enough.  So, all that's left to check for is NFSv4
++	 * opens:
++	 */
++	spin_lock(&fp->fi_lock);
++	list_for_each_entry(st, &fp->fi_stateids, st_perfile) {
++		if (st->st_openstp == NULL /* it's an open */ &&
++		    access_permit_write(st) &&
++		    st->st_stid.sc_client != clp) {
++			spin_unlock(&fp->fi_lock);
++			return -EAGAIN;
++		}
++	}
++	spin_unlock(&fp->fi_lock);
++	/*
++	 * There's a small chance that we could be racing with another
++	 * NFSv4 open.  However, any open that hasn't added itself to
++	 * the fi_stateids list also hasn't called break_lease yet; so,
++	 * they'll break this lease soon enough.
++	 */
++	return 0;
++}
++
++/*
++ * It's possible that between opening the dentry and setting the delegation,
++ * that it has been renamed or unlinked. Redo the lookup to verify that this
++ * hasn't happened.
++ */
++static int
++nfsd4_verify_deleg_dentry(struct nfsd4_open *open, struct nfs4_file *fp,
++			  struct svc_fh *parent)
++{
++	struct svc_export *exp;
++	struct dentry *child;
++	__be32 err;
++
++	err = nfsd_lookup_dentry(open->op_rqstp, parent,
++				 open->op_fname, open->op_fnamelen,
++				 &exp, &child);
++
++	if (err)
++		return -EAGAIN;
++
++	exp_put(exp);
++	dput(child);
++	if (child != file_dentry(fp->fi_deleg_file->nf_file))
++		return -EAGAIN;
++
++	return 0;
++}
++
++/*
++ * We avoid breaking delegations held by a client due to its own activity, but
++ * clearing setuid/setgid bits on a write is an implicit activity and the client
++ * may not notice and continue using the old mode. Avoid giving out a delegation
++ * on setuid/setgid files when the client is requesting an open for write.
++ */
++static int
++nfsd4_verify_setuid_write(struct nfsd4_open *open, struct nfsd_file *nf)
++{
++	struct inode *inode = file_inode(nf->nf_file);
++
++	if ((open->op_share_access & NFS4_SHARE_ACCESS_WRITE) &&
++	    (inode->i_mode & (S_ISUID|S_ISGID)))
++		return -EAGAIN;
++	return 0;
++}
++
+ static struct nfs4_delegation *
+-nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+-		    struct nfs4_file *fp, struct nfs4_clnt_odstate *odstate)
++nfs4_set_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
++		    struct svc_fh *parent)
+ {
+ 	int status = 0;
++	struct nfs4_client *clp = stp->st_stid.sc_client;
++	struct nfs4_file *fp = stp->st_stid.sc_file;
++	struct nfs4_clnt_odstate *odstate = stp->st_clnt_odstate;
+ 	struct nfs4_delegation *dp;
+ 	struct nfsd_file *nf;
+ 	struct file_lock *fl;
+@@ -5011,14 +5474,19 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ 
+ 	nf = find_readable_file(fp);
+ 	if (!nf) {
+-		/* We should always have a readable file here */
+-		WARN_ON_ONCE(1);
+-		return ERR_PTR(-EBADF);
++		/*
++		 * We probably could attempt another open and get a read
++		 * delegation, but for now, don't bother until the
++		 * client actually sends us one.
++		 */
++		return ERR_PTR(-EAGAIN);
+ 	}
+ 	spin_lock(&state_lock);
+ 	spin_lock(&fp->fi_lock);
+ 	if (nfs4_delegation_exists(clp, fp))
+ 		status = -EAGAIN;
++	else if (nfsd4_verify_setuid_write(open, nf))
++		status = -EAGAIN;
+ 	else if (!fp->fi_deleg_file) {
+ 		fp->fi_deleg_file = nf;
+ 		/* increment early to prevent fi_deleg_file from being
+@@ -5035,7 +5503,7 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ 		return ERR_PTR(status);
+ 
+ 	status = -ENOMEM;
+-	dp = alloc_init_deleg(clp, fp, fh, odstate);
++	dp = alloc_init_deleg(clp, fp, odstate);
+ 	if (!dp)
+ 		goto out_delegees;
+ 
+@@ -5049,12 +5517,31 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ 	if (status)
+ 		goto out_clnt_odstate;
+ 
++	if (parent) {
++		status = nfsd4_verify_deleg_dentry(open, fp, parent);
++		if (status)
++			goto out_unlock;
++	}
++
++	status = nfsd4_check_conflicting_opens(clp, fp);
++	if (status)
++		goto out_unlock;
++
++	/*
++	 * Now that the deleg is set, check again to ensure that nothing
++	 * raced in and changed the mode while we weren't lookng.
++	 */
++	status = nfsd4_verify_setuid_write(open, fp->fi_deleg_file);
++	if (status)
++		goto out_unlock;
++
++	status = -EAGAIN;
++	if (fp->fi_had_conflict)
++		goto out_unlock;
++
+ 	spin_lock(&state_lock);
+ 	spin_lock(&fp->fi_lock);
+-	if (fp->fi_had_conflict)
+-		status = -EAGAIN;
+-	else
+-		status = hash_delegation_locked(dp, fp);
++	status = hash_delegation_locked(dp, fp);
+ 	spin_unlock(&fp->fi_lock);
+ 	spin_unlock(&state_lock);
+ 
+@@ -5100,12 +5587,13 @@ static void nfsd4_open_deleg_none_ext(struct nfsd4_open *open, int status)
+  * proper support for them.
+  */
+ static void
+-nfs4_open_delegation(struct svc_fh *fh, struct nfsd4_open *open,
+-			struct nfs4_ol_stateid *stp)
++nfs4_open_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
++		     struct svc_fh *currentfh)
+ {
+ 	struct nfs4_delegation *dp;
+ 	struct nfs4_openowner *oo = openowner(stp->st_stateowner);
+ 	struct nfs4_client *clp = stp->st_stid.sc_client;
++	struct svc_fh *parent = NULL;
+ 	int cb_up;
+ 	int status = 0;
+ 
+@@ -5119,6 +5607,8 @@ nfs4_open_delegation(struct svc_fh *fh, struct nfsd4_open *open,
+ 				goto out_no_deleg;
+ 			break;
+ 		case NFS4_OPEN_CLAIM_NULL:
++			parent = currentfh;
++			fallthrough;
+ 		case NFS4_OPEN_CLAIM_FH:
+ 			/*
+ 			 * Let's not give out any delegations till everyone's
+@@ -5129,22 +5619,11 @@ nfs4_open_delegation(struct svc_fh *fh, struct nfsd4_open *open,
+ 				goto out_no_deleg;
+ 			if (!cb_up || !(oo->oo_flags & NFS4_OO_CONFIRMED))
+ 				goto out_no_deleg;
+-			/*
+-			 * Also, if the file was opened for write or
+-			 * create, there's a good chance the client's
+-			 * about to write to it, resulting in an
+-			 * immediate recall (since we don't support
+-			 * write delegations):
+-			 */
+-			if (open->op_share_access & NFS4_SHARE_ACCESS_WRITE)
+-				goto out_no_deleg;
+-			if (open->op_create == NFS4_OPEN_CREATE)
+-				goto out_no_deleg;
+ 			break;
+ 		default:
+ 			goto out_no_deleg;
+ 	}
+-	dp = nfs4_set_delegation(clp, fh, stp->st_stid.sc_file, stp->st_clnt_odstate);
++	dp = nfs4_set_delegation(open, stp, parent);
+ 	if (IS_ERR(dp))
+ 		goto out_no_deleg;
+ 
+@@ -5186,6 +5665,18 @@ static void nfsd4_deleg_xgrade_none_ext(struct nfsd4_open *open,
+ 	 */
+ }
+ 
++/**
++ * nfsd4_process_open2 - finish open processing
++ * @rqstp: the RPC transaction being executed
++ * @current_fh: NFSv4 COMPOUND's current filehandle
++ * @open: OPEN arguments
++ *
++ * If successful, (1) truncate the file if open->op_truncate was
++ * set, (2) set open->op_stateid, (3) set open->op_delegation.
++ *
++ * Returns %nfs_ok on success; otherwise an nfs4stat value in
++ * network byte order is returned.
++ */
+ __be32
+ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfsd4_open *open)
+ {
+@@ -5202,7 +5693,9 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
+ 	 * and check for delegations in the process of being recalled.
+ 	 * If not found, create the nfs4_file struct
+ 	 */
+-	fp = find_or_add_file(open->op_file, &current_fh->fh_handle);
++	fp = nfsd4_file_hash_insert(open->op_file, current_fh);
++	if (unlikely(!fp))
++		return nfserr_jukebox;
+ 	if (fp != open->op_file) {
+ 		status = nfs4_check_deleg(cl, open, &dp);
+ 		if (status)
+@@ -5235,7 +5728,7 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
+ 			goto out;
+ 		}
+ 	} else {
+-		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
++		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open, true);
+ 		if (status) {
+ 			stp->st_stid.sc_type = NFS4_CLOSED_STID;
+ 			release_open_stateid(stp);
+@@ -5264,7 +5757,7 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
+ 	* Attempt to hand out a delegation. No error return, because the
+ 	* OPEN succeeds even if we fail.
+ 	*/
+-	nfs4_open_delegation(current_fh, open, stp);
++	nfs4_open_delegation(open, stp, &resp->cstate.current_fh);
+ nodeleg:
+ 	status = nfs_ok;
+ 	trace_nfsd_open(&stp->st_stid.sc_stateid);
+@@ -5322,137 +5815,313 @@ nfsd4_renew(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 
+ 	trace_nfsd_clid_renew(clid);
+-	status = lookup_clientid(clid, cstate, nn, false);
++	status = set_client(clid, cstate, nn);
+ 	if (status)
+-		goto out;
++		return status;
+ 	clp = cstate->clp;
+-	status = nfserr_cb_path_down;
+ 	if (!list_empty(&clp->cl_delegations)
+ 			&& clp->cl_cb_state != NFSD4_CB_UP)
+-		goto out;
+-	status = nfs_ok;
+-out:
+-	return status;
++		return nfserr_cb_path_down;
++	return nfs_ok;
++}
++
++void
++nfsd4_end_grace(struct nfsd_net *nn)
++{
++	/* do nothing if grace period already ended */
++	if (nn->grace_ended)
++		return;
++
++	trace_nfsd_grace_complete(nn);
++	nn->grace_ended = true;
++	/*
++	 * If the server goes down again right now, an NFSv4
++	 * client will still be allowed to reclaim after it comes back up,
++	 * even if it hasn't yet had a chance to reclaim state this time.
++	 *
++	 */
++	nfsd4_record_grace_done(nn);
++	/*
++	 * At this point, NFSv4 clients can still reclaim.  But if the
++	 * server crashes, any that have not yet reclaimed will be out
++	 * of luck on the next boot.
++	 *
++	 * (NFSv4.1+ clients are considered to have reclaimed once they
++	 * call RECLAIM_COMPLETE.  NFSv4.0 clients are considered to
++	 * have reclaimed after their first OPEN.)
++	 */
++	locks_end_grace(&nn->nfsd4_manager);
++	/*
++	 * At this point, and once lockd and/or any other containers
++	 * exit their grace period, further reclaims will fail and
++	 * regular locking can resume.
++	 */
++}
++
++/*
++ * If we've waited a lease period but there are still clients trying to
++ * reclaim, wait a little longer to give them a chance to finish.
++ */
++static bool clients_still_reclaiming(struct nfsd_net *nn)
++{
++	time64_t double_grace_period_end = nn->boot_time +
++					   2 * nn->nfsd4_lease;
++
++	if (nn->track_reclaim_completes &&
++			atomic_read(&nn->nr_reclaim_complete) ==
++			nn->reclaim_str_hashtbl_size)
++		return false;
++	if (!nn->somebody_reclaimed)
++		return false;
++	nn->somebody_reclaimed = false;
++	/*
++	 * If we've given them *two* lease times to reclaim, and they're
++	 * still not done, give up:
++	 */
++	if (ktime_get_boottime_seconds() > double_grace_period_end)
++		return false;
++	return true;
++}
++
++struct laundry_time {
++	time64_t cutoff;
++	time64_t new_timeo;
++};
++
++static bool state_expired(struct laundry_time *lt, time64_t last_refresh)
++{
++	time64_t time_remaining;
++
++	if (last_refresh < lt->cutoff)
++		return true;
++	time_remaining = last_refresh - lt->cutoff;
++	lt->new_timeo = min(lt->new_timeo, time_remaining);
++	return false;
++}
++
++#ifdef CONFIG_NFSD_V4_2_INTER_SSC
++void nfsd4_ssc_init_umount_work(struct nfsd_net *nn)
++{
++	spin_lock_init(&nn->nfsd_ssc_lock);
++	INIT_LIST_HEAD(&nn->nfsd_ssc_mount_list);
++	init_waitqueue_head(&nn->nfsd_ssc_waitq);
++}
++EXPORT_SYMBOL_GPL(nfsd4_ssc_init_umount_work);
++
++/*
++ * This is called when nfsd is being shutdown, after all inter_ssc
++ * cleanup were done, to destroy the ssc delayed unmount list.
++ */
++static void nfsd4_ssc_shutdown_umount(struct nfsd_net *nn)
++{
++	struct nfsd4_ssc_umount_item *ni = NULL;
++	struct nfsd4_ssc_umount_item *tmp;
++
++	spin_lock(&nn->nfsd_ssc_lock);
++	list_for_each_entry_safe(ni, tmp, &nn->nfsd_ssc_mount_list, nsui_list) {
++		list_del(&ni->nsui_list);
++		spin_unlock(&nn->nfsd_ssc_lock);
++		mntput(ni->nsui_vfsmount);
++		kfree(ni);
++		spin_lock(&nn->nfsd_ssc_lock);
++	}
++	spin_unlock(&nn->nfsd_ssc_lock);
++}
++
++static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
++{
++	bool do_wakeup = false;
++	struct nfsd4_ssc_umount_item *ni = NULL;
++	struct nfsd4_ssc_umount_item *tmp;
++
++	spin_lock(&nn->nfsd_ssc_lock);
++	list_for_each_entry_safe(ni, tmp, &nn->nfsd_ssc_mount_list, nsui_list) {
++		if (time_after(jiffies, ni->nsui_expire)) {
++			if (refcount_read(&ni->nsui_refcnt) > 1)
++				continue;
++
++			/* mark being unmount */
++			ni->nsui_busy = true;
++			spin_unlock(&nn->nfsd_ssc_lock);
++			mntput(ni->nsui_vfsmount);
++			spin_lock(&nn->nfsd_ssc_lock);
++
++			/* waiters need to start from begin of list */
++			list_del(&ni->nsui_list);
++			kfree(ni);
++
++			/* wakeup ssc_connect waiters */
++			do_wakeup = true;
++			continue;
++		}
++		break;
++	}
++	if (do_wakeup)
++		wake_up_all(&nn->nfsd_ssc_waitq);
++	spin_unlock(&nn->nfsd_ssc_lock);
++}
++#endif
++
++/* Check if any lock belonging to this lockowner has any blockers */
++static bool
++nfs4_lockowner_has_blockers(struct nfs4_lockowner *lo)
++{
++	struct file_lock_context *ctx;
++	struct nfs4_ol_stateid *stp;
++	struct nfs4_file *nf;
++
++	list_for_each_entry(stp, &lo->lo_owner.so_stateids, st_perstateowner) {
++		nf = stp->st_stid.sc_file;
++		ctx = locks_inode_context(nf->fi_inode);
++		if (!ctx)
++			continue;
++		if (locks_owner_has_blockers(ctx, lo))
++			return true;
++	}
++	return false;
++}
++
++static bool
++nfs4_anylock_blockers(struct nfs4_client *clp)
++{
++	int i;
++	struct nfs4_stateowner *so;
++	struct nfs4_lockowner *lo;
++
++	if (atomic_read(&clp->cl_delegs_in_recall))
++		return true;
++	spin_lock(&clp->cl_lock);
++	for (i = 0; i < OWNER_HASH_SIZE; i++) {
++		list_for_each_entry(so, &clp->cl_ownerstr_hashtbl[i],
++				so_strhash) {
++			if (so->so_is_open_owner)
++				continue;
++			lo = lockowner(so);
++			if (nfs4_lockowner_has_blockers(lo)) {
++				spin_unlock(&clp->cl_lock);
++				return true;
++			}
++		}
++	}
++	spin_unlock(&clp->cl_lock);
++	return false;
+ }
+ 
+-void
+-nfsd4_end_grace(struct nfsd_net *nn)
++static void
++nfs4_get_client_reaplist(struct nfsd_net *nn, struct list_head *reaplist,
++				struct laundry_time *lt)
+ {
+-	/* do nothing if grace period already ended */
+-	if (nn->grace_ended)
+-		return;
++	unsigned int maxreap, reapcnt = 0;
++	struct list_head *pos, *next;
++	struct nfs4_client *clp;
+ 
+-	trace_nfsd_grace_complete(nn);
+-	nn->grace_ended = true;
+-	/*
+-	 * If the server goes down again right now, an NFSv4
+-	 * client will still be allowed to reclaim after it comes back up,
+-	 * even if it hasn't yet had a chance to reclaim state this time.
+-	 *
+-	 */
+-	nfsd4_record_grace_done(nn);
+-	/*
+-	 * At this point, NFSv4 clients can still reclaim.  But if the
+-	 * server crashes, any that have not yet reclaimed will be out
+-	 * of luck on the next boot.
+-	 *
+-	 * (NFSv4.1+ clients are considered to have reclaimed once they
+-	 * call RECLAIM_COMPLETE.  NFSv4.0 clients are considered to
+-	 * have reclaimed after their first OPEN.)
+-	 */
+-	locks_end_grace(&nn->nfsd4_manager);
+-	/*
+-	 * At this point, and once lockd and/or any other containers
+-	 * exit their grace period, further reclaims will fail and
+-	 * regular locking can resume.
+-	 */
++	maxreap = (atomic_read(&nn->nfs4_client_count) >= nn->nfs4_max_clients) ?
++			NFSD_CLIENT_MAX_TRIM_PER_RUN : 0;
++	INIT_LIST_HEAD(reaplist);
++	spin_lock(&nn->client_lock);
++	list_for_each_safe(pos, next, &nn->client_lru) {
++		clp = list_entry(pos, struct nfs4_client, cl_lru);
++		if (clp->cl_state == NFSD4_EXPIRABLE)
++			goto exp_client;
++		if (!state_expired(lt, clp->cl_time))
++			break;
++		if (!atomic_read(&clp->cl_rpc_users)) {
++			if (clp->cl_state == NFSD4_ACTIVE)
++				atomic_inc(&nn->nfsd_courtesy_clients);
++			clp->cl_state = NFSD4_COURTESY;
++		}
++		if (!client_has_state(clp))
++			goto exp_client;
++		if (!nfs4_anylock_blockers(clp))
++			if (reapcnt >= maxreap)
++				continue;
++exp_client:
++		if (!mark_client_expired_locked(clp)) {
++			list_add(&clp->cl_lru, reaplist);
++			reapcnt++;
++		}
++	}
++	spin_unlock(&nn->client_lock);
+ }
+ 
+-/*
+- * If we've waited a lease period but there are still clients trying to
+- * reclaim, wait a little longer to give them a chance to finish.
+- */
+-static bool clients_still_reclaiming(struct nfsd_net *nn)
++static void
++nfs4_get_courtesy_client_reaplist(struct nfsd_net *nn,
++				struct list_head *reaplist)
+ {
+-	time64_t double_grace_period_end = nn->boot_time +
+-					   2 * nn->nfsd4_lease;
++	unsigned int maxreap = 0, reapcnt = 0;
++	struct list_head *pos, *next;
++	struct nfs4_client *clp;
+ 
+-	if (nn->track_reclaim_completes &&
+-			atomic_read(&nn->nr_reclaim_complete) ==
+-			nn->reclaim_str_hashtbl_size)
+-		return false;
+-	if (!nn->somebody_reclaimed)
+-		return false;
+-	nn->somebody_reclaimed = false;
+-	/*
+-	 * If we've given them *two* lease times to reclaim, and they're
+-	 * still not done, give up:
+-	 */
+-	if (ktime_get_boottime_seconds() > double_grace_period_end)
+-		return false;
+-	return true;
++	maxreap = NFSD_CLIENT_MAX_TRIM_PER_RUN;
++	INIT_LIST_HEAD(reaplist);
++
++	spin_lock(&nn->client_lock);
++	list_for_each_safe(pos, next, &nn->client_lru) {
++		clp = list_entry(pos, struct nfs4_client, cl_lru);
++		if (clp->cl_state == NFSD4_ACTIVE)
++			break;
++		if (reapcnt >= maxreap)
++			break;
++		if (!mark_client_expired_locked(clp)) {
++			list_add(&clp->cl_lru, reaplist);
++			reapcnt++;
++		}
++	}
++	spin_unlock(&nn->client_lock);
++}
++
++static void
++nfs4_process_client_reaplist(struct list_head *reaplist)
++{
++	struct list_head *pos, *next;
++	struct nfs4_client *clp;
++
++	list_for_each_safe(pos, next, reaplist) {
++		clp = list_entry(pos, struct nfs4_client, cl_lru);
++		trace_nfsd_clid_purged(&clp->cl_clientid);
++		list_del_init(&clp->cl_lru);
++		expire_client(clp);
++	}
+ }
+ 
+ static time64_t
+ nfs4_laundromat(struct nfsd_net *nn)
+ {
+-	struct nfs4_client *clp;
+ 	struct nfs4_openowner *oo;
+ 	struct nfs4_delegation *dp;
+ 	struct nfs4_ol_stateid *stp;
+ 	struct nfsd4_blocked_lock *nbl;
+ 	struct list_head *pos, *next, reaplist;
+-	time64_t cutoff = ktime_get_boottime_seconds() - nn->nfsd4_lease;
+-	time64_t t, new_timeo = nn->nfsd4_lease;
++	struct laundry_time lt = {
++		.cutoff = ktime_get_boottime_seconds() - nn->nfsd4_lease,
++		.new_timeo = nn->nfsd4_lease
++	};
+ 	struct nfs4_cpntf_state *cps;
+ 	copy_stateid_t *cps_t;
+ 	int i;
+ 
+ 	if (clients_still_reclaiming(nn)) {
+-		new_timeo = 0;
++		lt.new_timeo = 0;
+ 		goto out;
+ 	}
+ 	nfsd4_end_grace(nn);
+-	INIT_LIST_HEAD(&reaplist);
+ 
+ 	spin_lock(&nn->s2s_cp_lock);
+ 	idr_for_each_entry(&nn->s2s_cp_stateids, cps_t, i) {
+ 		cps = container_of(cps_t, struct nfs4_cpntf_state, cp_stateid);
+-		if (cps->cp_stateid.sc_type == NFS4_COPYNOTIFY_STID &&
+-				cps->cpntf_time < cutoff)
++		if (cps->cp_stateid.cs_type == NFS4_COPYNOTIFY_STID &&
++				state_expired(&lt, cps->cpntf_time))
+ 			_free_cpntf_state_locked(nn, cps);
+ 	}
+ 	spin_unlock(&nn->s2s_cp_lock);
++	nfs4_get_client_reaplist(nn, &reaplist, &lt);
++	nfs4_process_client_reaplist(&reaplist);
+ 
+-	spin_lock(&nn->client_lock);
+-	list_for_each_safe(pos, next, &nn->client_lru) {
+-		clp = list_entry(pos, struct nfs4_client, cl_lru);
+-		if (clp->cl_time > cutoff) {
+-			t = clp->cl_time - cutoff;
+-			new_timeo = min(new_timeo, t);
+-			break;
+-		}
+-		if (mark_client_expired_locked(clp)) {
+-			trace_nfsd_clid_expired(&clp->cl_clientid);
+-			continue;
+-		}
+-		list_add(&clp->cl_lru, &reaplist);
+-	}
+-	spin_unlock(&nn->client_lock);
+-	list_for_each_safe(pos, next, &reaplist) {
+-		clp = list_entry(pos, struct nfs4_client, cl_lru);
+-		trace_nfsd_clid_purged(&clp->cl_clientid);
+-		list_del_init(&clp->cl_lru);
+-		expire_client(clp);
+-	}
+ 	spin_lock(&state_lock);
+ 	list_for_each_safe(pos, next, &nn->del_recall_lru) {
+ 		dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
+-		if (dp->dl_time > cutoff) {
+-			t = dp->dl_time - cutoff;
+-			new_timeo = min(new_timeo, t);
++		if (!state_expired(&lt, dp->dl_time))
+ 			break;
+-		}
+ 		WARN_ON(!unhash_delegation_locked(dp));
+ 		list_add(&dp->dl_recall_lru, &reaplist);
+ 	}
+@@ -5468,11 +6137,8 @@ nfs4_laundromat(struct nfsd_net *nn)
+ 	while (!list_empty(&nn->close_lru)) {
+ 		oo = list_first_entry(&nn->close_lru, struct nfs4_openowner,
+ 					oo_close_lru);
+-		if (oo->oo_time > cutoff) {
+-			t = oo->oo_time - cutoff;
+-			new_timeo = min(new_timeo, t);
++		if (!state_expired(&lt, oo->oo_time))
+ 			break;
+-		}
+ 		list_del_init(&oo->oo_close_lru);
+ 		stp = oo->oo_last_closed_stid;
+ 		oo->oo_last_closed_stid = NULL;
+@@ -5498,11 +6164,8 @@ nfs4_laundromat(struct nfsd_net *nn)
+ 	while (!list_empty(&nn->blocked_locks_lru)) {
+ 		nbl = list_first_entry(&nn->blocked_locks_lru,
+ 					struct nfsd4_blocked_lock, nbl_lru);
+-		if (nbl->nbl_time > cutoff) {
+-			t = nbl->nbl_time - cutoff;
+-			new_timeo = min(new_timeo, t);
++		if (!state_expired(&lt, nbl->nbl_time))
+ 			break;
+-		}
+ 		list_move(&nbl->nbl_lru, &reaplist);
+ 		list_del_init(&nbl->nbl_list);
+ 	}
+@@ -5514,12 +6177,14 @@ nfs4_laundromat(struct nfsd_net *nn)
+ 		list_del_init(&nbl->nbl_lru);
+ 		free_blocked_lock(nbl);
+ 	}
++#ifdef CONFIG_NFSD_V4_2_INTER_SSC
++	/* service the server-to-server copy delayed unmount list */
++	nfsd4_ssc_expire_umount(nn);
++#endif
+ out:
+-	new_timeo = max_t(time64_t, new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
+-	return new_timeo;
++	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
+ }
+ 
+-static struct workqueue_struct *laundry_wq;
+ static void laundromat_main(struct work_struct *);
+ 
+ static void
+@@ -5534,26 +6199,68 @@ laundromat_main(struct work_struct *laundry)
+ 	queue_delayed_work(laundry_wq, &nn->laundromat_work, t*HZ);
+ }
+ 
+-static inline __be32 nfs4_check_fh(struct svc_fh *fhp, struct nfs4_stid *stp)
++static void
++courtesy_client_reaper(struct nfsd_net *nn)
+ {
+-	if (!fh_match(&fhp->fh_handle, &stp->sc_file->fi_fhandle))
+-		return nfserr_bad_stateid;
+-	return nfs_ok;
++	struct list_head reaplist;
++
++	nfs4_get_courtesy_client_reaplist(nn, &reaplist);
++	nfs4_process_client_reaplist(&reaplist);
+ }
+ 
+-static inline int
+-access_permit_read(struct nfs4_ol_stateid *stp)
++static void
++deleg_reaper(struct nfsd_net *nn)
+ {
+-	return test_access(NFS4_SHARE_ACCESS_READ, stp) ||
+-		test_access(NFS4_SHARE_ACCESS_BOTH, stp) ||
+-		test_access(NFS4_SHARE_ACCESS_WRITE, stp);
++	struct list_head *pos, *next;
++	struct nfs4_client *clp;
++	struct list_head cblist;
++
++	INIT_LIST_HEAD(&cblist);
++	spin_lock(&nn->client_lock);
++	list_for_each_safe(pos, next, &nn->client_lru) {
++		clp = list_entry(pos, struct nfs4_client, cl_lru);
++		if (clp->cl_state != NFSD4_ACTIVE ||
++			list_empty(&clp->cl_delegations) ||
++			atomic_read(&clp->cl_delegs_in_recall) ||
++			test_bit(NFSD4_CLIENT_CB_RECALL_ANY, &clp->cl_flags) ||
++			(ktime_get_boottime_seconds() -
++				clp->cl_ra_time < 5)) {
++			continue;
++		}
++		list_add(&clp->cl_ra_cblist, &cblist);
++
++		/* release in nfsd4_cb_recall_any_release */
++		atomic_inc(&clp->cl_rpc_users);
++		set_bit(NFSD4_CLIENT_CB_RECALL_ANY, &clp->cl_flags);
++		clp->cl_ra_time = ktime_get_boottime_seconds();
++	}
++	spin_unlock(&nn->client_lock);
++
++	while (!list_empty(&cblist)) {
++		clp = list_first_entry(&cblist, struct nfs4_client,
++					cl_ra_cblist);
++		list_del_init(&clp->cl_ra_cblist);
++		clp->cl_ra->ra_keep = 0;
++		clp->cl_ra->ra_bmval[0] = BIT(RCA4_TYPE_MASK_RDATA_DLG);
++		nfsd4_run_cb(&clp->cl_ra->ra_cb);
++	}
+ }
+ 
+-static inline int
+-access_permit_write(struct nfs4_ol_stateid *stp)
++static void
++nfsd4_state_shrinker_worker(struct work_struct *work)
+ {
+-	return test_access(NFS4_SHARE_ACCESS_WRITE, stp) ||
+-		test_access(NFS4_SHARE_ACCESS_BOTH, stp);
++	struct nfsd_net *nn = container_of(work, struct nfsd_net,
++				nfsd_shrinker_work);
++
++	courtesy_client_reaper(nn);
++	deleg_reaper(nn);
++}
++
++static inline __be32 nfs4_check_fh(struct svc_fh *fhp, struct nfs4_stid *stp)
++{
++	if (!fh_match(&fhp->fh_handle, &stp->sc_file->fi_fhandle))
++		return nfserr_bad_stateid;
++	return nfs_ok;
+ }
+ 
+ static
+@@ -5692,6 +6399,7 @@ nfsd4_lookup_stateid(struct nfsd4_compound_state *cstate,
+ 		     struct nfs4_stid **s, struct nfsd_net *nn)
+ {
+ 	__be32 status;
++	struct nfs4_stid *stid;
+ 	bool return_revoked = false;
+ 
+ 	/*
+@@ -5706,8 +6414,7 @@ nfsd4_lookup_stateid(struct nfsd4_compound_state *cstate,
+ 	if (ZERO_STATEID(stateid) || ONE_STATEID(stateid) ||
+ 		CLOSE_STATEID(stateid))
+ 		return nfserr_bad_stateid;
+-	status = lookup_clientid(&stateid->si_opaque.so_clid, cstate, nn,
+-				 false);
++	status = set_client(&stateid->si_opaque.so_clid, cstate, nn);
+ 	if (status == nfserr_stale_clientid) {
+ 		if (cstate->session)
+ 			return nfserr_bad_stateid;
+@@ -5715,15 +6422,16 @@ nfsd4_lookup_stateid(struct nfsd4_compound_state *cstate,
+ 	}
+ 	if (status)
+ 		return status;
+-	*s = find_stateid_by_type(cstate->clp, stateid, typemask);
+-	if (!*s)
++	stid = find_stateid_by_type(cstate->clp, stateid, typemask);
++	if (!stid)
+ 		return nfserr_bad_stateid;
+-	if (((*s)->sc_type == NFS4_REVOKED_DELEG_STID) && !return_revoked) {
+-		nfs4_put_stid(*s);
++	if ((stid->sc_type == NFS4_REVOKED_DELEG_STID) && !return_revoked) {
++		nfs4_put_stid(stid);
+ 		if (cstate->minorversion)
+ 			return nfserr_deleg_revoked;
+ 		return nfserr_bad_stateid;
+ 	}
++	*s = stid;
+ 	return nfs_ok;
+ }
+ 
+@@ -5788,12 +6496,12 @@ nfs4_check_file(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfs4_stid *s,
+ static void
+ _free_cpntf_state_locked(struct nfsd_net *nn, struct nfs4_cpntf_state *cps)
+ {
+-	WARN_ON_ONCE(cps->cp_stateid.sc_type != NFS4_COPYNOTIFY_STID);
+-	if (!refcount_dec_and_test(&cps->cp_stateid.sc_count))
++	WARN_ON_ONCE(cps->cp_stateid.cs_type != NFS4_COPYNOTIFY_STID);
++	if (!refcount_dec_and_test(&cps->cp_stateid.cs_count))
+ 		return;
+ 	list_del(&cps->cp_list);
+ 	idr_remove(&nn->s2s_cp_stateids,
+-		   cps->cp_stateid.stid.si_opaque.so_id);
++		   cps->cp_stateid.cs_stid.si_opaque.so_id);
+ 	kfree(cps);
+ }
+ /*
+@@ -5815,12 +6523,12 @@ __be32 manage_cpntf_state(struct nfsd_net *nn, stateid_t *st,
+ 	if (cps_t) {
+ 		state = container_of(cps_t, struct nfs4_cpntf_state,
+ 				     cp_stateid);
+-		if (state->cp_stateid.sc_type != NFS4_COPYNOTIFY_STID) {
++		if (state->cp_stateid.cs_type != NFS4_COPYNOTIFY_STID) {
+ 			state = NULL;
+ 			goto unlock;
+ 		}
+ 		if (!clp)
+-			refcount_inc(&state->cp_stateid.sc_count);
++			refcount_inc(&state->cp_stateid.cs_count);
+ 		else
+ 			_free_cpntf_state_locked(nn, state);
+ 	}
+@@ -5838,21 +6546,27 @@ static __be32 find_cpntf_state(struct nfsd_net *nn, stateid_t *st,
+ {
+ 	__be32 status;
+ 	struct nfs4_cpntf_state *cps = NULL;
+-	struct nfsd4_compound_state cstate;
++	struct nfs4_client *found;
+ 
+ 	status = manage_cpntf_state(nn, st, NULL, &cps);
+ 	if (status)
+ 		return status;
+ 
+ 	cps->cpntf_time = ktime_get_boottime_seconds();
+-	memset(&cstate, 0, sizeof(cstate));
+-	status = lookup_clientid(&cps->cp_p_clid, &cstate, nn, true);
+-	if (status)
++
++	status = nfserr_expired;
++	found = lookup_clientid(&cps->cp_p_clid, true, nn);
++	if (!found)
+ 		goto out;
+-	status = nfsd4_lookup_stateid(&cstate, &cps->cp_p_stateid,
+-				NFS4_DELEG_STID|NFS4_OPEN_STID|NFS4_LOCK_STID,
+-				stid, nn);
+-	put_client_renew(cstate.clp);
++
++	*stid = find_stateid_by_type(found, &cps->cp_p_stateid,
++			NFS4_DELEG_STID|NFS4_OPEN_STID|NFS4_LOCK_STID);
++	if (*stid)
++		status = nfs_ok;
++	else
++		status = nfserr_bad_stateid;
++
++	put_client_renew(found);
+ out:
+ 	nfs4_put_cpntf_state(nn, cps);
+ 	return status;
+@@ -5887,7 +6601,11 @@ nfs4_preprocess_stateid_op(struct svc_rqst *rqstp,
+ 		return nfserr_grace;
+ 
+ 	if (ZERO_STATEID(stateid) || ONE_STATEID(stateid)) {
+-		status = check_special_stateids(net, fhp, stateid, flags);
++		if (cstid)
++			status = nfserr_bad_stateid;
++		else
++			status = check_special_stateids(net, fhp, stateid,
++									flags);
+ 		goto done;
+ 	}
+ 
+@@ -5941,7 +6659,7 @@ nfsd4_test_stateid(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ {
+ 	struct nfsd4_test_stateid *test_stateid = &u->test_stateid;
+ 	struct nfsd4_test_stateid_id *stateid;
+-	struct nfs4_client *cl = cstate->session->se_client;
++	struct nfs4_client *cl = cstate->clp;
+ 
+ 	list_for_each_entry(stateid, &test_stateid->ts_stateid_list, ts_id_list)
+ 		stateid->ts_id_status =
+@@ -5987,7 +6705,7 @@ nfsd4_free_stateid(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	stateid_t *stateid = &free_stateid->fr_stateid;
+ 	struct nfs4_stid *s;
+ 	struct nfs4_delegation *dp;
+-	struct nfs4_client *cl = cstate->session->se_client;
++	struct nfs4_client *cl = cstate->clp;
+ 	__be32 ret = nfserr_bad_stateid;
+ 
+ 	spin_lock(&cl->cl_lock);
+@@ -6316,6 +7034,8 @@ nfsd4_delegreturn(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	if (status)
+ 		goto put_stateid;
+ 
++	trace_nfsd_deleg_return(stateid);
++	wake_up_var(d_inode(cstate->current_fh.fh_dentry));
+ 	destroy_delegation(dp);
+ put_stateid:
+ 	nfs4_put_stid(&dp->dl_stid);
+@@ -6323,15 +7043,6 @@ nfsd4_delegreturn(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	return status;
+ }
+ 
+-static inline u64
+-end_offset(u64 start, u64 len)
+-{
+-	u64 end;
+-
+-	end = start + len;
+-	return end >= start ? end: NFS4_MAX_UINT64;
+-}
+-
+ /* last octet in a range */
+ static inline u64
+ last_byte_offset(u64 start, u64 len)
+@@ -6361,7 +7072,7 @@ nfs4_transform_lock_offset(struct file_lock *lock)
+ }
+ 
+ static fl_owner_t
+-nfsd4_fl_get_owner(fl_owner_t owner)
++nfsd4_lm_get_owner(fl_owner_t owner)
+ {
+ 	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)owner;
+ 
+@@ -6370,7 +7081,7 @@ nfsd4_fl_get_owner(fl_owner_t owner)
+ }
+ 
+ static void
+-nfsd4_fl_put_owner(fl_owner_t owner)
++nfsd4_lm_put_owner(fl_owner_t owner)
+ {
+ 	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)owner;
+ 
+@@ -6378,6 +7089,29 @@ nfsd4_fl_put_owner(fl_owner_t owner)
+ 		nfs4_put_stateowner(&lo->lo_owner);
+ }
+ 
++/* return pointer to struct nfs4_client if client is expirable */
++static bool
++nfsd4_lm_lock_expirable(struct file_lock *cfl)
++{
++	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)cfl->fl_owner;
++	struct nfs4_client *clp = lo->lo_owner.so_client;
++	struct nfsd_net *nn;
++
++	if (try_to_expire_client(clp)) {
++		nn = net_generic(clp->net, nfsd_net_id);
++		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
++		return true;
++	}
++	return false;
++}
++
++/* schedule laundromat to run immediately and wait for it to complete */
++static void
++nfsd4_lm_expire_lock(void)
++{
++	flush_workqueue(laundry_wq);
++}
++
+ static void
+ nfsd4_lm_notify(struct file_lock *fl)
+ {
+@@ -6397,14 +7131,19 @@ nfsd4_lm_notify(struct file_lock *fl)
+ 	}
+ 	spin_unlock(&nn->blocked_locks_lock);
+ 
+-	if (queue)
++	if (queue) {
++		trace_nfsd_cb_notify_lock(lo, nbl);
+ 		nfsd4_run_cb(&nbl->nbl_cb);
++	}
+ }
+ 
+ static const struct lock_manager_operations nfsd_posix_mng_ops  = {
++	.lm_mod_owner = THIS_MODULE,
+ 	.lm_notify = nfsd4_lm_notify,
+-	.lm_get_owner = nfsd4_fl_get_owner,
+-	.lm_put_owner = nfsd4_fl_put_owner,
++	.lm_get_owner = nfsd4_lm_get_owner,
++	.lm_put_owner = nfsd4_lm_put_owner,
++	.lm_lock_expirable = nfsd4_lm_lock_expirable,
++	.lm_expire_lock = nfsd4_lm_expire_lock,
+ };
+ 
+ static inline void
+@@ -6719,13 +7458,9 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		if (nfsd4_has_session(cstate))
+ 			/* See rfc 5661 18.10.3: given clientid is ignored: */
+ 			memcpy(&lock->lk_new_clientid,
+-				&cstate->session->se_client->cl_clientid,
++				&cstate->clp->cl_clientid,
+ 				sizeof(clientid_t));
+ 
+-		status = nfserr_stale_clientid;
+-		if (STALE_CLIENTID(&lock->lk_new_clientid, nn))
+-			goto out;
+-
+ 		/* validate and update open stateid and open seqid */
+ 		status = nfs4_preprocess_confirmed_seqid_op(cstate,
+ 				        lock->lk_new_open_seqid,
+@@ -6763,6 +7498,9 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	if (!locks_in_grace(net) && lock->lk_reclaim)
+ 		goto out;
+ 
++	if (lock->lk_reclaim)
++		fl_flags |= FL_RECLAIM;
++
+ 	fp = lock_stp->st_stid.sc_file;
+ 	switch (lock->lk_type) {
+ 		case NFS4_READW_LT:
+@@ -6799,6 +7537,16 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		goto out;
+ 	}
+ 
++	/*
++	 * Most filesystems with their own ->lock operations will block
++	 * the nfsd thread waiting to acquire the lock.  That leads to
++	 * deadlocks (we don't want every nfsd thread tied up waiting
++	 * for file locks), so don't attempt blocking lock notifications
++	 * on those filesystems:
++	 */
++	if (nf->nf_file->f_op->lock)
++		fl_flags &= ~FL_SLEEP;
++
+ 	nbl = find_or_allocate_block(lock_sop, &fp->fi_fhandle, nn);
+ 	if (!nbl) {
+ 		dprintk("NFSD: %s: unable to allocate block!\n", __func__);
+@@ -6829,6 +7577,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		spin_lock(&nn->blocked_locks_lock);
+ 		list_add_tail(&nbl->nbl_list, &lock_sop->lo_blocked);
+ 		list_add_tail(&nbl->nbl_lru, &nn->blocked_locks_lru);
++		kref_get(&nbl->nbl_kref);
+ 		spin_unlock(&nn->blocked_locks_lock);
+ 	}
+ 
+@@ -6841,6 +7590,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 			nn->somebody_reclaimed = true;
+ 		break;
+ 	case FILE_LOCK_DEFERRED:
++		kref_put(&nbl->nbl_kref, free_nbl);
+ 		nbl = NULL;
+ 		fallthrough;
+ 	case -EAGAIN:		/* conflock holds conflicting lock */
+@@ -6861,8 +7611,13 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		/* dequeue it if we queued it before */
+ 		if (fl_flags & FL_SLEEP) {
+ 			spin_lock(&nn->blocked_locks_lock);
+-			list_del_init(&nbl->nbl_list);
+-			list_del_init(&nbl->nbl_lru);
++			if (!list_empty(&nbl->nbl_list) &&
++			    !list_empty(&nbl->nbl_lru)) {
++				list_del_init(&nbl->nbl_list);
++				list_del_init(&nbl->nbl_lru);
++				kref_put(&nbl->nbl_kref, free_nbl);
++			}
++			/* nbl can use one of lists to be linked to reaplist */
+ 			spin_unlock(&nn->blocked_locks_lock);
+ 		}
+ 		free_blocked_lock(nbl);
+@@ -6903,21 +7658,22 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ static __be32 nfsd_test_lock(struct svc_rqst *rqstp, struct svc_fh *fhp, struct file_lock *lock)
+ {
+ 	struct nfsd_file *nf;
++	struct inode *inode;
+ 	__be32 err;
+ 
+ 	err = nfsd_file_acquire(rqstp, fhp, NFSD_MAY_READ, &nf);
+ 	if (err)
+ 		return err;
+-	fh_lock(fhp); /* to block new leases till after test_lock: */
+-	err = nfserrno(nfsd_open_break_lease(fhp->fh_dentry->d_inode,
+-							NFSD_MAY_READ));
++	inode = fhp->fh_dentry->d_inode;
++	inode_lock(inode); /* to block new leases till after test_lock: */
++	err = nfserrno(nfsd_open_break_lease(inode, NFSD_MAY_READ));
+ 	if (err)
+ 		goto out;
+ 	lock->fl_file = nf->nf_file;
+ 	err = nfserrno(vfs_test_lock(nf->nf_file, lock));
+ 	lock->fl_file = NULL;
+ out:
+-	fh_unlock(fhp);
++	inode_unlock(inode);
+ 	nfsd_file_put(nf);
+ 	return err;
+ }
+@@ -6942,8 +7698,7 @@ nfsd4_lockt(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		 return nfserr_inval;
+ 
+ 	if (!nfsd4_has_session(cstate)) {
+-		status = lookup_clientid(&lockt->lt_clientid, cstate, nn,
+-					 false);
++		status = set_client(&lockt->lt_clientid, cstate, nn);
+ 		if (status)
+ 			goto out;
+ 	}
+@@ -7080,18 +7835,20 @@ check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
+ {
+ 	struct file_lock *fl;
+ 	int status = false;
+-	struct nfsd_file *nf = find_any_file(fp);
++	struct nfsd_file *nf;
+ 	struct inode *inode;
+ 	struct file_lock_context *flctx;
+ 
++	spin_lock(&fp->fi_lock);
++	nf = find_any_file_locked(fp);
+ 	if (!nf) {
+ 		/* Any valid lock stateid should have some sort of access */
+ 		WARN_ON_ONCE(1);
+-		return status;
++		goto out;
+ 	}
+ 
+ 	inode = locks_inode(nf->nf_file);
+-	flctx = inode->i_flctx;
++	flctx = locks_inode_context(inode);
+ 
+ 	if (flctx && !list_empty_careful(&flctx->flc_posix)) {
+ 		spin_lock(&flctx->flc_lock);
+@@ -7103,57 +7860,62 @@ check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
+ 		}
+ 		spin_unlock(&flctx->flc_lock);
+ 	}
+-	nfsd_file_put(nf);
++out:
++	spin_unlock(&fp->fi_lock);
+ 	return status;
+ }
+ 
++/**
++ * nfsd4_release_lockowner - process NFSv4.0 RELEASE_LOCKOWNER operations
++ * @rqstp: RPC transaction
++ * @cstate: NFSv4 COMPOUND state
++ * @u: RELEASE_LOCKOWNER arguments
++ *
++ * Check if theree are any locks still held and if not - free the lockowner
++ * and any lock state that is owned.
++ *
++ * Return values:
++ *   %nfs_ok: lockowner released or not found
++ *   %nfserr_locks_held: lockowner still in use
++ *   %nfserr_stale_clientid: clientid no longer active
++ *   %nfserr_expired: clientid not recognized
++ */
+ __be32
+ nfsd4_release_lockowner(struct svc_rqst *rqstp,
+ 			struct nfsd4_compound_state *cstate,
+ 			union nfsd4_op_u *u)
+ {
+ 	struct nfsd4_release_lockowner *rlockowner = &u->release_lockowner;
++	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 	clientid_t *clid = &rlockowner->rl_clientid;
+-	struct nfs4_stateowner *sop;
+-	struct nfs4_lockowner *lo = NULL;
+ 	struct nfs4_ol_stateid *stp;
+-	struct xdr_netobj *owner = &rlockowner->rl_owner;
+-	unsigned int hashval = ownerstr_hashval(owner);
+-	__be32 status;
+-	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
++	struct nfs4_lockowner *lo;
+ 	struct nfs4_client *clp;
+-	LIST_HEAD (reaplist);
++	LIST_HEAD(reaplist);
++	__be32 status;
+ 
+ 	dprintk("nfsd4_release_lockowner clientid: (%08x/%08x):\n",
+ 		clid->cl_boot, clid->cl_id);
+ 
+-	status = lookup_clientid(clid, cstate, nn, false);
++	status = set_client(clid, cstate, nn);
+ 	if (status)
+ 		return status;
+-
+ 	clp = cstate->clp;
+-	/* Find the matching lock stateowner */
+-	spin_lock(&clp->cl_lock);
+-	list_for_each_entry(sop, &clp->cl_ownerstr_hashtbl[hashval],
+-			    so_strhash) {
+ 
+-		if (sop->so_is_open_owner || !same_owner_str(sop, owner))
+-			continue;
++	spin_lock(&clp->cl_lock);
++	lo = find_lockowner_str_locked(clp, &rlockowner->rl_owner);
++	if (!lo) {
++		spin_unlock(&clp->cl_lock);
++		return nfs_ok;
++	}
+ 
+-		if (atomic_read(&sop->so_count) != 1) {
++	list_for_each_entry(stp, &lo->lo_owner.so_stateids, st_perstateowner) {
++		if (check_for_locks(stp->st_stid.sc_file, lo)) {
+ 			spin_unlock(&clp->cl_lock);
++			nfs4_put_stateowner(&lo->lo_owner);
+ 			return nfserr_locks_held;
+ 		}
+-
+-		lo = lockowner(sop);
+-		nfs4_get_stateowner(sop);
+-		break;
+-	}
+-	if (!lo) {
+-		spin_unlock(&clp->cl_lock);
+-		return status;
+ 	}
+-
+ 	unhash_lockowner_locked(lo);
+ 	while (!list_empty(&lo->lo_owner.so_stateids)) {
+ 		stp = list_first_entry(&lo->lo_owner.so_stateids,
+@@ -7163,11 +7925,11 @@ nfsd4_release_lockowner(struct svc_rqst *rqstp,
+ 		put_ol_stateid_locked(stp, &reaplist);
+ 	}
+ 	spin_unlock(&clp->cl_lock);
++
+ 	free_ol_stateid_reaplist(&reaplist);
+ 	remove_blocked_locks(lo);
+ 	nfs4_put_stateowner(&lo->lo_owner);
+-
+-	return status;
++	return nfs_ok;
+ }
+ 
+ static inline struct nfs4_client_reclaim *
+@@ -7256,25 +8018,13 @@ nfsd4_find_reclaim_client(struct xdr_netobj name, struct nfsd_net *nn)
+ 	return NULL;
+ }
+ 
+-/*
+-* Called from OPEN. Look for clientid in reclaim list.
+-*/
+ __be32
+-nfs4_check_open_reclaim(clientid_t *clid,
+-		struct nfsd4_compound_state *cstate,
+-		struct nfsd_net *nn)
++nfs4_check_open_reclaim(struct nfs4_client *clp)
+ {
+-	__be32 status;
+-
+-	/* find clientid in conf_id_hashtbl */
+-	status = lookup_clientid(clid, cstate, nn, false);
+-	if (status)
+-		return nfserr_reclaim_bad;
+-
+-	if (test_bit(NFSD4_CLIENT_RECLAIM_COMPLETE, &cstate->clp->cl_flags))
++	if (test_bit(NFSD4_CLIENT_RECLAIM_COMPLETE, &clp->cl_flags))
+ 		return nfserr_no_grace;
+ 
+-	if (nfsd4_client_record_check(cstate->clp))
++	if (nfsd4_client_record_check(clp))
+ 		return nfserr_reclaim_bad;
+ 
+ 	return nfs_ok;
+@@ -7345,10 +8095,20 @@ static int nfs4_state_create_net(struct net *net)
+ 	INIT_LIST_HEAD(&nn->blocked_locks_lru);
+ 
+ 	INIT_DELAYED_WORK(&nn->laundromat_work, laundromat_main);
++	INIT_WORK(&nn->nfsd_shrinker_work, nfsd4_state_shrinker_worker);
+ 	get_net(net);
+ 
++	nn->nfsd_client_shrinker.scan_objects = nfsd4_state_shrinker_scan;
++	nn->nfsd_client_shrinker.count_objects = nfsd4_state_shrinker_count;
++	nn->nfsd_client_shrinker.seeks = DEFAULT_SEEKS;
++
++	if (register_shrinker(&nn->nfsd_client_shrinker))
++		goto err_shrinker;
+ 	return 0;
+ 
++err_shrinker:
++	put_net(net);
++	kfree(nn->sessionid_hashtbl);
+ err_sessionid:
+ 	kfree(nn->unconf_id_hashtbl);
+ err_unconf_id:
+@@ -7420,22 +8180,18 @@ nfs4_state_start(void)
+ {
+ 	int ret;
+ 
+-	laundry_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, "nfsd4");
+-	if (laundry_wq == NULL) {
+-		ret = -ENOMEM;
+-		goto out;
+-	}
+-	ret = nfsd4_create_callback_queue();
++	ret = rhltable_init(&nfs4_file_rhltable, &nfs4_file_rhash_params);
+ 	if (ret)
+-		goto out_free_laundry;
++		return ret;
++
++	ret = nfsd4_create_callback_queue();
++	if (ret) {
++		rhltable_destroy(&nfs4_file_rhltable);
++		return ret;
++	}
+ 
+ 	set_max_delegations();
+ 	return 0;
+-
+-out_free_laundry:
+-	destroy_workqueue(laundry_wq);
+-out:
+-	return ret;
+ }
+ 
+ void
+@@ -7445,6 +8201,8 @@ nfs4_state_shutdown_net(struct net *net)
+ 	struct list_head *pos, *next, reaplist;
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 
++	unregister_shrinker(&nn->nfsd_client_shrinker);
++	cancel_work(&nn->nfsd_shrinker_work);
+ 	cancel_delayed_work_sync(&nn->laundromat_work);
+ 	locks_end_grace(&nn->nfsd4_manager);
+ 
+@@ -7464,13 +8222,16 @@ nfs4_state_shutdown_net(struct net *net)
+ 
+ 	nfsd4_client_tracking_exit(net);
+ 	nfs4_state_destroy_net(net);
++#ifdef CONFIG_NFSD_V4_2_INTER_SSC
++	nfsd4_ssc_shutdown_umount(nn);
++#endif
+ }
+ 
+ void
+ nfs4_state_shutdown(void)
+ {
+-	destroy_workqueue(laundry_wq);
+ 	nfsd4_destroy_callback_queue();
++	rhltable_destroy(&nfs4_file_rhltable);
+ }
+ 
+ static void
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index dbfa24cf33906..5a68c62864925 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -42,6 +42,8 @@
+ #include <linux/sunrpc/svcauth_gss.h>
+ #include <linux/sunrpc/addr.h>
+ #include <linux/xattr.h>
++#include <linux/vmalloc.h>
++
+ #include <uapi/linux/xattr.h>
+ 
+ #include "idmap.h"
+@@ -54,6 +56,8 @@
+ #include "pnfs.h"
+ #include "filecache.h"
+ 
++#include "trace.h"
++
+ #ifdef CONFIG_NFSD_V4_SECURITY_LABEL
+ #include <linux/security.h>
+ #endif
+@@ -90,6 +94,8 @@ check_filename(char *str, int len)
+ 
+ 	if (len == 0)
+ 		return nfserr_inval;
++	if (len > NFS4_MAXNAMLEN)
++		return nfserr_nametoolong;
+ 	if (isdotent(str, len))
+ 		return nfserr_badname;
+ 	for (i = 0; i < len; i++)
+@@ -98,122 +104,6 @@ check_filename(char *str, int len)
+ 	return 0;
+ }
+ 
+-#define DECODE_HEAD				\
+-	__be32 *p;				\
+-	__be32 status
+-#define DECODE_TAIL				\
+-	status = 0;				\
+-out:						\
+-	return status;				\
+-xdr_error:					\
+-	dprintk("NFSD: xdr error (%s:%d)\n",	\
+-			__FILE__, __LINE__);	\
+-	status = nfserr_bad_xdr;		\
+-	goto out
+-
+-#define READMEM(x,nbytes) do {			\
+-	x = (char *)p;				\
+-	p += XDR_QUADLEN(nbytes);		\
+-} while (0)
+-#define SAVEMEM(x,nbytes) do {			\
+-	if (!(x = (p==argp->tmp || p == argp->tmpp) ? \
+- 		savemem(argp, p, nbytes) :	\
+- 		(char *)p)) {			\
+-		dprintk("NFSD: xdr error (%s:%d)\n", \
+-				__FILE__, __LINE__); \
+-		goto xdr_error;			\
+-		}				\
+-	p += XDR_QUADLEN(nbytes);		\
+-} while (0)
+-#define COPYMEM(x,nbytes) do {			\
+-	memcpy((x), p, nbytes);			\
+-	p += XDR_QUADLEN(nbytes);		\
+-} while (0)
+-
+-/* READ_BUF, read_buf(): nbytes must be <= PAGE_SIZE */
+-#define READ_BUF(nbytes)  do {			\
+-	if (nbytes <= (u32)((char *)argp->end - (char *)argp->p)) {	\
+-		p = argp->p;			\
+-		argp->p += XDR_QUADLEN(nbytes);	\
+-	} else if (!(p = read_buf(argp, nbytes))) { \
+-		dprintk("NFSD: xdr error (%s:%d)\n", \
+-				__FILE__, __LINE__); \
+-		goto xdr_error;			\
+-	}					\
+-} while (0)
+-
+-static void next_decode_page(struct nfsd4_compoundargs *argp)
+-{
+-	argp->p = page_address(argp->pagelist[0]);
+-	argp->pagelist++;
+-	if (argp->pagelen < PAGE_SIZE) {
+-		argp->end = argp->p + XDR_QUADLEN(argp->pagelen);
+-		argp->pagelen = 0;
+-	} else {
+-		argp->end = argp->p + (PAGE_SIZE>>2);
+-		argp->pagelen -= PAGE_SIZE;
+-	}
+-}
+-
+-static __be32 *read_buf(struct nfsd4_compoundargs *argp, u32 nbytes)
+-{
+-	/* We want more bytes than seem to be available.
+-	 * Maybe we need a new page, maybe we have just run out
+-	 */
+-	unsigned int avail = (char *)argp->end - (char *)argp->p;
+-	__be32 *p;
+-
+-	if (argp->pagelen == 0) {
+-		struct kvec *vec = &argp->rqstp->rq_arg.tail[0];
+-
+-		if (!argp->tail) {
+-			argp->tail = true;
+-			avail = vec->iov_len;
+-			argp->p = vec->iov_base;
+-			argp->end = vec->iov_base + avail;
+-		}
+-
+-		if (avail < nbytes)
+-			return NULL;
+-
+-		p = argp->p;
+-		argp->p += XDR_QUADLEN(nbytes);
+-		return p;
+-	}
+-
+-	if (avail + argp->pagelen < nbytes)
+-		return NULL;
+-	if (avail + PAGE_SIZE < nbytes) /* need more than a page !! */
+-		return NULL;
+-	/* ok, we can do it with the current plus the next page */
+-	if (nbytes <= sizeof(argp->tmp))
+-		p = argp->tmp;
+-	else {
+-		kfree(argp->tmpp);
+-		p = argp->tmpp = kmalloc(nbytes, GFP_KERNEL);
+-		if (!p)
+-			return NULL;
+-		
+-	}
+-	/*
+-	 * The following memcpy is safe because read_buf is always
+-	 * called with nbytes > avail, and the two cases above both
+-	 * guarantee p points to at least nbytes bytes.
+-	 */
+-	memcpy(p, argp->p, avail);
+-	next_decode_page(argp);
+-	memcpy(((char*)p)+avail, argp->p, (nbytes - avail));
+-	argp->p += XDR_QUADLEN(nbytes - avail);
+-	return p;
+-}
+-
+-static unsigned int compoundargs_bytes_left(struct nfsd4_compoundargs *argp)
+-{
+-	unsigned int this = (char *)argp->end - (char *)argp->p;
+-
+-	return this + argp->pagelen;
+-}
+-
+ static int zero_clientid(clientid_t *clid)
+ {
+ 	return (clid->cl_boot == 0) && (clid->cl_id == 0);
+@@ -259,118 +149,246 @@ svcxdr_dupstr(struct nfsd4_compoundargs *argp, void *buf, u32 len)
+ 	return p;
+ }
+ 
+-static __be32
+-svcxdr_construct_vector(struct nfsd4_compoundargs *argp, struct kvec *head,
+-			struct page ***pagelist, u32 buflen)
++static void *
++svcxdr_savemem(struct nfsd4_compoundargs *argp, __be32 *p, u32 len)
+ {
+-	int avail;
+-	int len;
+-	int pages;
++	__be32 *tmp;
+ 
+-	/* Sorry .. no magic macros for this.. *
+-	 * READ_BUF(write->wr_buflen);
+-	 * SAVEMEM(write->wr_buf, write->wr_buflen);
++	/*
++	 * The location of the decoded data item is stable,
++	 * so @p is OK to use. This is the common case.
+ 	 */
+-	avail = (char *)argp->end - (char *)argp->p;
+-	if (avail + argp->pagelen < buflen) {
+-		dprintk("NFSD: xdr error (%s:%d)\n",
+-			       __FILE__, __LINE__);
++	if (p != argp->xdr->scratch.iov_base)
++		return p;
++
++	tmp = svcxdr_tmpalloc(argp, len);
++	if (!tmp)
++		return NULL;
++	memcpy(tmp, p, len);
++	return tmp;
++}
++
++/*
++ * NFSv4 basic data type decoders
++ */
++
++/*
++ * This helper handles variable-length opaques which belong to protocol
++ * elements that this implementation does not support.
++ */
++static __be32
++nfsd4_decode_ignored_string(struct nfsd4_compoundargs *argp, u32 maxlen)
++{
++	u32 len;
++
++	if (xdr_stream_decode_u32(argp->xdr, &len) < 0)
++		return nfserr_bad_xdr;
++	if (maxlen && len > maxlen)
++		return nfserr_bad_xdr;
++	if (!xdr_inline_decode(argp->xdr, len))
+ 		return nfserr_bad_xdr;
+-	}
+-	head->iov_base = argp->p;
+-	head->iov_len = avail;
+-	*pagelist = argp->pagelist;
+ 
+-	len = XDR_QUADLEN(buflen) << 2;
+-	if (len >= avail) {
+-		len -= avail;
++	return nfs_ok;
++}
+ 
+-		pages = len >> PAGE_SHIFT;
+-		argp->pagelist += pages;
+-		argp->pagelen -= pages * PAGE_SIZE;
+-		len -= pages * PAGE_SIZE;
++static __be32
++nfsd4_decode_opaque(struct nfsd4_compoundargs *argp, struct xdr_netobj *o)
++{
++	__be32 *p;
++	u32 len;
+ 
+-		next_decode_page(argp);
+-	}
+-	argp->p += XDR_QUADLEN(len);
++	if (xdr_stream_decode_u32(argp->xdr, &len) < 0)
++		return nfserr_bad_xdr;
++	if (len == 0 || len > NFS4_OPAQUE_LIMIT)
++		return nfserr_bad_xdr;
++	p = xdr_inline_decode(argp->xdr, len);
++	if (!p)
++		return nfserr_bad_xdr;
++	o->data = svcxdr_savemem(argp, p, len);
++	if (!o->data)
++		return nfserr_jukebox;
++	o->len = len;
+ 
+-	return 0;
++	return nfs_ok;
+ }
+ 
+-/**
+- * savemem - duplicate a chunk of memory for later processing
+- * @argp: NFSv4 compound argument structure to be freed with
+- * @p: pointer to be duplicated
+- * @nbytes: length to be duplicated
+- *
+- * Returns a pointer to a copy of @nbytes bytes of memory at @p
+- * that are preserved until processing of the NFSv4 compound
+- * operation described by @argp finishes.
+- */
+-static char *savemem(struct nfsd4_compoundargs *argp, __be32 *p, int nbytes)
++static __be32
++nfsd4_decode_component4(struct nfsd4_compoundargs *argp, char **namp, u32 *lenp)
+ {
+-	void *ret;
++	__be32 *p, status;
+ 
+-	ret = svcxdr_tmpalloc(argp, nbytes);
+-	if (!ret)
+-		return NULL;
+-	memcpy(ret, p, nbytes);
+-	return ret;
++	if (xdr_stream_decode_u32(argp->xdr, lenp) < 0)
++		return nfserr_bad_xdr;
++	p = xdr_inline_decode(argp->xdr, *lenp);
++	if (!p)
++		return nfserr_bad_xdr;
++	status = check_filename((char *)p, *lenp);
++	if (status)
++		return status;
++	*namp = svcxdr_savemem(argp, p, *lenp);
++	if (!*namp)
++		return nfserr_jukebox;
++
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_time(struct nfsd4_compoundargs *argp, struct timespec64 *tv)
++nfsd4_decode_nfstime4(struct nfsd4_compoundargs *argp, struct timespec64 *tv)
+ {
+-	DECODE_HEAD;
++	__be32 *p;
+ 
+-	READ_BUF(12);
++	p = xdr_inline_decode(argp->xdr, XDR_UNIT * 3);
++	if (!p)
++		return nfserr_bad_xdr;
+ 	p = xdr_decode_hyper(p, &tv->tv_sec);
+ 	tv->tv_nsec = be32_to_cpup(p++);
+ 	if (tv->tv_nsec >= (u32)1000000000)
+ 		return nfserr_inval;
++	return nfs_ok;
++}
++
++static __be32
++nfsd4_decode_verifier4(struct nfsd4_compoundargs *argp, nfs4_verifier *verf)
++{
++	__be32 *p;
++
++	p = xdr_inline_decode(argp->xdr, NFS4_VERIFIER_SIZE);
++	if (!p)
++		return nfserr_bad_xdr;
++	memcpy(verf->data, p, sizeof(verf->data));
++	return nfs_ok;
++}
++
++/**
++ * nfsd4_decode_bitmap4 - Decode an NFSv4 bitmap4
++ * @argp: NFSv4 compound argument structure
++ * @bmval: pointer to an array of u32's to decode into
++ * @bmlen: size of the @bmval array
++ *
++ * The server needs to return nfs_ok rather than nfserr_bad_xdr when
++ * encountering bitmaps containing bits it does not recognize. This
++ * includes bits in bitmap words past WORDn, where WORDn is the last
++ * bitmap WORD the implementation currently supports. Thus we are
++ * careful here to simply ignore bits in bitmap words that this
++ * implementation has yet to support explicitly.
++ *
++ * Return values:
++ *   %nfs_ok: @bmval populated successfully
++ *   %nfserr_bad_xdr: the encoded bitmap was invalid
++ */
++static __be32
++nfsd4_decode_bitmap4(struct nfsd4_compoundargs *argp, u32 *bmval, u32 bmlen)
++{
++	ssize_t status;
+ 
+-	DECODE_TAIL;
++	status = xdr_stream_decode_uint32_array(argp->xdr, bmval, bmlen);
++	return status == -EBADMSG ? nfserr_bad_xdr : nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_bitmap(struct nfsd4_compoundargs *argp, u32 *bmval)
++nfsd4_decode_nfsace4(struct nfsd4_compoundargs *argp, struct nfs4_ace *ace)
++{
++	__be32 *p, status;
++	u32 length;
++
++	if (xdr_stream_decode_u32(argp->xdr, &ace->type) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &ace->flag) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &ace->access_mask) < 0)
++		return nfserr_bad_xdr;
++
++	if (xdr_stream_decode_u32(argp->xdr, &length) < 0)
++		return nfserr_bad_xdr;
++	p = xdr_inline_decode(argp->xdr, length);
++	if (!p)
++		return nfserr_bad_xdr;
++	ace->whotype = nfs4_acl_get_whotype((char *)p, length);
++	if (ace->whotype != NFS4_ACL_WHO_NAMED)
++		status = nfs_ok;
++	else if (ace->flag & NFS4_ACE_IDENTIFIER_GROUP)
++		status = nfsd_map_name_to_gid(argp->rqstp,
++				(char *)p, length, &ace->who_gid);
++	else
++		status = nfsd_map_name_to_uid(argp->rqstp,
++				(char *)p, length, &ace->who_uid);
++
++	return status;
++}
++
++/* A counted array of nfsace4's */
++static noinline __be32
++nfsd4_decode_acl(struct nfsd4_compoundargs *argp, struct nfs4_acl **acl)
+ {
+-	u32 bmlen;
+-	DECODE_HEAD;
++	struct nfs4_ace *ace;
++	__be32 status;
++	u32 count;
++
++	if (xdr_stream_decode_u32(argp->xdr, &count) < 0)
++		return nfserr_bad_xdr;
++
++	if (count > xdr_stream_remaining(argp->xdr) / 20)
++		/*
++		 * Even with 4-byte names there wouldn't be
++		 * space for that many aces; something fishy is
++		 * going on:
++		 */
++		return nfserr_fbig;
++
++	*acl = svcxdr_tmpalloc(argp, nfs4_acl_bytes(count));
++	if (*acl == NULL)
++		return nfserr_jukebox;
+ 
+-	bmval[0] = 0;
+-	bmval[1] = 0;
+-	bmval[2] = 0;
++	(*acl)->naces = count;
++	for (ace = (*acl)->aces; ace < (*acl)->aces + count; ace++) {
++		status = nfsd4_decode_nfsace4(argp, ace);
++		if (status)
++			return status;
++	}
++
++	return nfs_ok;
++}
++
++static noinline __be32
++nfsd4_decode_security_label(struct nfsd4_compoundargs *argp,
++			    struct xdr_netobj *label)
++{
++	u32 lfs, pi, length;
++	__be32 *p;
+ 
+-	READ_BUF(4);
+-	bmlen = be32_to_cpup(p++);
+-	if (bmlen > 1000)
+-		goto xdr_error;
++	if (xdr_stream_decode_u32(argp->xdr, &lfs) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &pi) < 0)
++		return nfserr_bad_xdr;
+ 
+-	READ_BUF(bmlen << 2);
+-	if (bmlen > 0)
+-		bmval[0] = be32_to_cpup(p++);
+-	if (bmlen > 1)
+-		bmval[1] = be32_to_cpup(p++);
+-	if (bmlen > 2)
+-		bmval[2] = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &length) < 0)
++		return nfserr_bad_xdr;
++	if (length > NFS4_MAXLABELLEN)
++		return nfserr_badlabel;
++	p = xdr_inline_decode(argp->xdr, length);
++	if (!p)
++		return nfserr_bad_xdr;
++	label->len = length;
++	label->data = svcxdr_dupstr(argp, p, length);
++	if (!label->data)
++		return nfserr_jukebox;
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_fattr(struct nfsd4_compoundargs *argp, u32 *bmval,
+-		   struct iattr *iattr, struct nfs4_acl **acl,
+-		   struct xdr_netobj *label, int *umask)
++nfsd4_decode_fattr4(struct nfsd4_compoundargs *argp, u32 *bmval, u32 bmlen,
++		    struct iattr *iattr, struct nfs4_acl **acl,
++		    struct xdr_netobj *label, int *umask)
+ {
+-	int expected_len, len = 0;
+-	u32 dummy32;
+-	char *buf;
++	unsigned int starting_pos;
++	u32 attrlist4_count;
++	__be32 *p, status;
+ 
+-	DECODE_HEAD;
+ 	iattr->ia_valid = 0;
+-	if ((status = nfsd4_decode_bitmap(argp, bmval)))
+-		return status;
++	status = nfsd4_decode_bitmap4(argp, bmval, bmlen);
++	if (status)
++		return nfserr_bad_xdr;
+ 
+ 	if (bmval[0] & ~NFSD_WRITEABLE_ATTRS_WORD0
+ 	    || bmval[1] & ~NFSD_WRITEABLE_ATTRS_WORD1
+@@ -380,96 +398,69 @@ nfsd4_decode_fattr(struct nfsd4_compoundargs *argp, u32 *bmval,
+ 		return nfserr_attrnotsupp;
+ 	}
+ 
+-	READ_BUF(4);
+-	expected_len = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &attrlist4_count) < 0)
++		return nfserr_bad_xdr;
++	starting_pos = xdr_stream_pos(argp->xdr);
+ 
+ 	if (bmval[0] & FATTR4_WORD0_SIZE) {
+-		READ_BUF(8);
+-		len += 8;
+-		p = xdr_decode_hyper(p, &iattr->ia_size);
++		u64 size;
++
++		if (xdr_stream_decode_u64(argp->xdr, &size) < 0)
++			return nfserr_bad_xdr;
++		iattr->ia_size = size;
+ 		iattr->ia_valid |= ATTR_SIZE;
+ 	}
+ 	if (bmval[0] & FATTR4_WORD0_ACL) {
+-		u32 nace;
+-		struct nfs4_ace *ace;
+-
+-		READ_BUF(4); len += 4;
+-		nace = be32_to_cpup(p++);
+-
+-		if (nace > compoundargs_bytes_left(argp)/20)
+-			/*
+-			 * Even with 4-byte names there wouldn't be
+-			 * space for that many aces; something fishy is
+-			 * going on:
+-			 */
+-			return nfserr_fbig;
+-
+-		*acl = svcxdr_tmpalloc(argp, nfs4_acl_bytes(nace));
+-		if (*acl == NULL)
+-			return nfserr_jukebox;
+-
+-		(*acl)->naces = nace;
+-		for (ace = (*acl)->aces; ace < (*acl)->aces + nace; ace++) {
+-			READ_BUF(16); len += 16;
+-			ace->type = be32_to_cpup(p++);
+-			ace->flag = be32_to_cpup(p++);
+-			ace->access_mask = be32_to_cpup(p++);
+-			dummy32 = be32_to_cpup(p++);
+-			READ_BUF(dummy32);
+-			len += XDR_QUADLEN(dummy32) << 2;
+-			READMEM(buf, dummy32);
+-			ace->whotype = nfs4_acl_get_whotype(buf, dummy32);
+-			status = nfs_ok;
+-			if (ace->whotype != NFS4_ACL_WHO_NAMED)
+-				;
+-			else if (ace->flag & NFS4_ACE_IDENTIFIER_GROUP)
+-				status = nfsd_map_name_to_gid(argp->rqstp,
+-						buf, dummy32, &ace->who_gid);
+-			else
+-				status = nfsd_map_name_to_uid(argp->rqstp,
+-						buf, dummy32, &ace->who_uid);
+-			if (status)
+-				return status;
+-		}
++		status = nfsd4_decode_acl(argp, acl);
++		if (status)
++			return status;
+ 	} else
+ 		*acl = NULL;
+ 	if (bmval[1] & FATTR4_WORD1_MODE) {
+-		READ_BUF(4);
+-		len += 4;
+-		iattr->ia_mode = be32_to_cpup(p++);
++		u32 mode;
++
++		if (xdr_stream_decode_u32(argp->xdr, &mode) < 0)
++			return nfserr_bad_xdr;
++		iattr->ia_mode = mode;
+ 		iattr->ia_mode &= (S_IFMT | S_IALLUGO);
+ 		iattr->ia_valid |= ATTR_MODE;
+ 	}
+ 	if (bmval[1] & FATTR4_WORD1_OWNER) {
+-		READ_BUF(4);
+-		len += 4;
+-		dummy32 = be32_to_cpup(p++);
+-		READ_BUF(dummy32);
+-		len += (XDR_QUADLEN(dummy32) << 2);
+-		READMEM(buf, dummy32);
+-		if ((status = nfsd_map_name_to_uid(argp->rqstp, buf, dummy32, &iattr->ia_uid)))
++		u32 length;
++
++		if (xdr_stream_decode_u32(argp->xdr, &length) < 0)
++			return nfserr_bad_xdr;
++		p = xdr_inline_decode(argp->xdr, length);
++		if (!p)
++			return nfserr_bad_xdr;
++		status = nfsd_map_name_to_uid(argp->rqstp, (char *)p, length,
++					      &iattr->ia_uid);
++		if (status)
+ 			return status;
+ 		iattr->ia_valid |= ATTR_UID;
+ 	}
+ 	if (bmval[1] & FATTR4_WORD1_OWNER_GROUP) {
+-		READ_BUF(4);
+-		len += 4;
+-		dummy32 = be32_to_cpup(p++);
+-		READ_BUF(dummy32);
+-		len += (XDR_QUADLEN(dummy32) << 2);
+-		READMEM(buf, dummy32);
+-		if ((status = nfsd_map_name_to_gid(argp->rqstp, buf, dummy32, &iattr->ia_gid)))
++		u32 length;
++
++		if (xdr_stream_decode_u32(argp->xdr, &length) < 0)
++			return nfserr_bad_xdr;
++		p = xdr_inline_decode(argp->xdr, length);
++		if (!p)
++			return nfserr_bad_xdr;
++		status = nfsd_map_name_to_gid(argp->rqstp, (char *)p, length,
++					      &iattr->ia_gid);
++		if (status)
+ 			return status;
+ 		iattr->ia_valid |= ATTR_GID;
+ 	}
+ 	if (bmval[1] & FATTR4_WORD1_TIME_ACCESS_SET) {
+-		READ_BUF(4);
+-		len += 4;
+-		dummy32 = be32_to_cpup(p++);
+-		switch (dummy32) {
++		u32 set_it;
++
++		if (xdr_stream_decode_u32(argp->xdr, &set_it) < 0)
++			return nfserr_bad_xdr;
++		switch (set_it) {
+ 		case NFS4_SET_TO_CLIENT_TIME:
+-			len += 12;
+-			status = nfsd4_decode_time(argp, &iattr->ia_atime);
++			status = nfsd4_decode_nfstime4(argp, &iattr->ia_atime);
+ 			if (status)
+ 				return status;
+ 			iattr->ia_valid |= (ATTR_ATIME | ATTR_ATIME_SET);
+@@ -478,17 +469,26 @@ nfsd4_decode_fattr(struct nfsd4_compoundargs *argp, u32 *bmval,
+ 			iattr->ia_valid |= ATTR_ATIME;
+ 			break;
+ 		default:
+-			goto xdr_error;
++			return nfserr_bad_xdr;
+ 		}
+ 	}
++	if (bmval[1] & FATTR4_WORD1_TIME_CREATE) {
++		struct timespec64 ts;
++
++		/* No Linux filesystem supports setting this attribute. */
++		bmval[1] &= ~FATTR4_WORD1_TIME_CREATE;
++		status = nfsd4_decode_nfstime4(argp, &ts);
++		if (status)
++			return status;
++	}
+ 	if (bmval[1] & FATTR4_WORD1_TIME_MODIFY_SET) {
+-		READ_BUF(4);
+-		len += 4;
+-		dummy32 = be32_to_cpup(p++);
+-		switch (dummy32) {
++		u32 set_it;
++
++		if (xdr_stream_decode_u32(argp->xdr, &set_it) < 0)
++			return nfserr_bad_xdr;
++		switch (set_it) {
+ 		case NFS4_SET_TO_CLIENT_TIME:
+-			len += 12;
+-			status = nfsd4_decode_time(argp, &iattr->ia_mtime);
++			status = nfsd4_decode_nfstime4(argp, &iattr->ia_mtime);
+ 			if (status)
+ 				return status;
+ 			iattr->ia_valid |= (ATTR_MTIME | ATTR_MTIME_SET);
+@@ -497,222 +497,335 @@ nfsd4_decode_fattr(struct nfsd4_compoundargs *argp, u32 *bmval,
+ 			iattr->ia_valid |= ATTR_MTIME;
+ 			break;
+ 		default:
+-			goto xdr_error;
++			return nfserr_bad_xdr;
+ 		}
+ 	}
+-
+ 	label->len = 0;
+ 	if (IS_ENABLED(CONFIG_NFSD_V4_SECURITY_LABEL) &&
+ 	    bmval[2] & FATTR4_WORD2_SECURITY_LABEL) {
+-		READ_BUF(4);
+-		len += 4;
+-		dummy32 = be32_to_cpup(p++); /* lfs: we don't use it */
+-		READ_BUF(4);
+-		len += 4;
+-		dummy32 = be32_to_cpup(p++); /* pi: we don't use it either */
+-		READ_BUF(4);
+-		len += 4;
+-		dummy32 = be32_to_cpup(p++);
+-		READ_BUF(dummy32);
+-		if (dummy32 > NFS4_MAXLABELLEN)
+-			return nfserr_badlabel;
+-		len += (XDR_QUADLEN(dummy32) << 2);
+-		READMEM(buf, dummy32);
+-		label->len = dummy32;
+-		label->data = svcxdr_dupstr(argp, buf, dummy32);
+-		if (!label->data)
+-			return nfserr_jukebox;
++		status = nfsd4_decode_security_label(argp, label);
++		if (status)
++			return status;
+ 	}
+ 	if (bmval[2] & FATTR4_WORD2_MODE_UMASK) {
++		u32 mode, mask;
++
+ 		if (!umask)
+-			goto xdr_error;
+-		READ_BUF(8);
+-		len += 8;
+-		dummy32 = be32_to_cpup(p++);
+-		iattr->ia_mode = dummy32 & (S_IFMT | S_IALLUGO);
+-		dummy32 = be32_to_cpup(p++);
+-		*umask = dummy32 & S_IRWXUGO;
++			return nfserr_bad_xdr;
++		if (xdr_stream_decode_u32(argp->xdr, &mode) < 0)
++			return nfserr_bad_xdr;
++		iattr->ia_mode = mode & (S_IFMT | S_IALLUGO);
++		if (xdr_stream_decode_u32(argp->xdr, &mask) < 0)
++			return nfserr_bad_xdr;
++		*umask = mask & S_IRWXUGO;
+ 		iattr->ia_valid |= ATTR_MODE;
+ 	}
+-	if (len != expected_len)
+-		goto xdr_error;
+ 
+-	DECODE_TAIL;
++	/* request sanity: did attrlist4 contain the expected number of words? */
++	if (attrlist4_count != xdr_stream_pos(argp->xdr) - starting_pos)
++		return nfserr_bad_xdr;
++
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_stateid(struct nfsd4_compoundargs *argp, stateid_t *sid)
++nfsd4_decode_stateid4(struct nfsd4_compoundargs *argp, stateid_t *sid)
+ {
+-	DECODE_HEAD;
++	__be32 *p;
+ 
+-	READ_BUF(sizeof(stateid_t));
++	p = xdr_inline_decode(argp->xdr, NFS4_STATEID_SIZE);
++	if (!p)
++		return nfserr_bad_xdr;
+ 	sid->si_generation = be32_to_cpup(p++);
+-	COPYMEM(&sid->si_opaque, sizeof(stateid_opaque_t));
+-
+-	DECODE_TAIL;
++	memcpy(&sid->si_opaque, p, sizeof(sid->si_opaque));
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_access(struct nfsd4_compoundargs *argp, struct nfsd4_access *access)
++nfsd4_decode_clientid4(struct nfsd4_compoundargs *argp, clientid_t *clientid)
+ {
+-	DECODE_HEAD;
+-
+-	READ_BUF(4);
+-	access->ac_req_access = be32_to_cpup(p++);
++	__be32 *p;
+ 
+-	DECODE_TAIL;
++	p = xdr_inline_decode(argp->xdr, sizeof(__be64));
++	if (!p)
++		return nfserr_bad_xdr;
++	memcpy(clientid, p, sizeof(*clientid));
++	return nfs_ok;
+ }
+ 
+-static __be32 nfsd4_decode_cb_sec(struct nfsd4_compoundargs *argp, struct nfsd4_cb_sec *cbs)
++static __be32
++nfsd4_decode_state_owner4(struct nfsd4_compoundargs *argp,
++			  clientid_t *clientid, struct xdr_netobj *owner)
+ {
+-	DECODE_HEAD;
+-	struct user_namespace *userns = nfsd_user_namespace(argp->rqstp);
+-	u32 dummy, uid, gid;
+-	char *machine_name;
+-	int i;
+-	int nr_secflavs;
++	__be32 status;
+ 
+-	/* callback_sec_params4 */
+-	READ_BUF(4);
+-	nr_secflavs = be32_to_cpup(p++);
+-	if (nr_secflavs)
+-		cbs->flavor = (u32)(-1);
+-	else
+-		/* Is this legal? Be generous, take it to mean AUTH_NONE: */
+-		cbs->flavor = 0;
+-	for (i = 0; i < nr_secflavs; ++i) {
+-		READ_BUF(4);
+-		dummy = be32_to_cpup(p++);
+-		switch (dummy) {
+-		case RPC_AUTH_NULL:
+-			/* Nothing to read */
+-			if (cbs->flavor == (u32)(-1))
+-				cbs->flavor = RPC_AUTH_NULL;
+-			break;
+-		case RPC_AUTH_UNIX:
+-			READ_BUF(8);
+-			/* stamp */
+-			dummy = be32_to_cpup(p++);
+-
+-			/* machine name */
+-			dummy = be32_to_cpup(p++);
+-			READ_BUF(dummy);
+-			SAVEMEM(machine_name, dummy);
+-
+-			/* uid, gid */
+-			READ_BUF(8);
+-			uid = be32_to_cpup(p++);
+-			gid = be32_to_cpup(p++);
+-
+-			/* more gids */
+-			READ_BUF(4);
+-			dummy = be32_to_cpup(p++);
+-			READ_BUF(dummy * 4);
+-			if (cbs->flavor == (u32)(-1)) {
+-				kuid_t kuid = make_kuid(userns, uid);
+-				kgid_t kgid = make_kgid(userns, gid);
+-				if (uid_valid(kuid) && gid_valid(kgid)) {
+-					cbs->uid = kuid;
+-					cbs->gid = kgid;
+-					cbs->flavor = RPC_AUTH_UNIX;
+-				} else {
+-					dprintk("RPC_AUTH_UNIX with invalid"
+-						"uid or gid ignoring!\n");
+-				}
+-			}
+-			break;
+-		case RPC_AUTH_GSS:
+-			dprintk("RPC_AUTH_GSS callback secflavor "
+-				"not supported!\n");
+-			READ_BUF(8);
+-			/* gcbp_service */
+-			dummy = be32_to_cpup(p++);
+-			/* gcbp_handle_from_server */
+-			dummy = be32_to_cpup(p++);
+-			READ_BUF(dummy);
+-			p += XDR_QUADLEN(dummy);
+-			/* gcbp_handle_from_client */
+-			READ_BUF(4);
+-			dummy = be32_to_cpup(p++);
+-			READ_BUF(dummy);
+-			break;
+-		default:
+-			dprintk("Illegal callback secflavor\n");
+-			return nfserr_inval;
+-		}
+-	}
+-	DECODE_TAIL;
++	status = nfsd4_decode_clientid4(argp, clientid);
++	if (status)
++		return status;
++	return nfsd4_decode_opaque(argp, owner);
+ }
+ 
+-static __be32 nfsd4_decode_backchannel_ctl(struct nfsd4_compoundargs *argp, struct nfsd4_backchannel_ctl *bc)
++#ifdef CONFIG_NFSD_PNFS
++static __be32
++nfsd4_decode_deviceid4(struct nfsd4_compoundargs *argp,
++		       struct nfsd4_deviceid *devid)
+ {
+-	DECODE_HEAD;
+-
+-	READ_BUF(4);
+-	bc->bc_cb_program = be32_to_cpup(p++);
+-	nfsd4_decode_cb_sec(argp, &bc->bc_cb_sec);
++	__be32 *p;
+ 
+-	DECODE_TAIL;
++	p = xdr_inline_decode(argp->xdr, NFS4_DEVICEID4_SIZE);
++	if (!p)
++		return nfserr_bad_xdr;
++	memcpy(devid, p, sizeof(*devid));
++	return nfs_ok;
+ }
+ 
+-static __be32 nfsd4_decode_bind_conn_to_session(struct nfsd4_compoundargs *argp, struct nfsd4_bind_conn_to_session *bcts)
++static __be32
++nfsd4_decode_layoutupdate4(struct nfsd4_compoundargs *argp,
++			   struct nfsd4_layoutcommit *lcp)
+ {
+-	DECODE_HEAD;
++	if (xdr_stream_decode_u32(argp->xdr, &lcp->lc_layout_type) < 0)
++		return nfserr_bad_xdr;
++	if (lcp->lc_layout_type < LAYOUT_NFSV4_1_FILES)
++		return nfserr_bad_xdr;
++	if (lcp->lc_layout_type >= LAYOUT_TYPE_MAX)
++		return nfserr_bad_xdr;
++
++	if (xdr_stream_decode_u32(argp->xdr, &lcp->lc_up_len) < 0)
++		return nfserr_bad_xdr;
++	if (lcp->lc_up_len > 0) {
++		lcp->lc_up_layout = xdr_inline_decode(argp->xdr, lcp->lc_up_len);
++		if (!lcp->lc_up_layout)
++			return nfserr_bad_xdr;
++	}
+ 
+-	READ_BUF(NFS4_MAX_SESSIONID_LEN + 8);
+-	COPYMEM(bcts->sessionid.data, NFS4_MAX_SESSIONID_LEN);
+-	bcts->dir = be32_to_cpup(p++);
+-	/* XXX: skipping ctsa_use_conn_in_rdma_mode.  Perhaps Tom Tucker
+-	 * could help us figure out we should be using it. */
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_close(struct nfsd4_compoundargs *argp, struct nfsd4_close *close)
++nfsd4_decode_layoutreturn4(struct nfsd4_compoundargs *argp,
++			   struct nfsd4_layoutreturn *lrp)
+ {
+-	DECODE_HEAD;
++	__be32 status;
+ 
+-	READ_BUF(4);
+-	close->cl_seqid = be32_to_cpup(p++);
+-	return nfsd4_decode_stateid(argp, &close->cl_stateid);
++	if (xdr_stream_decode_u32(argp->xdr, &lrp->lr_return_type) < 0)
++		return nfserr_bad_xdr;
++	switch (lrp->lr_return_type) {
++	case RETURN_FILE:
++		if (xdr_stream_decode_u64(argp->xdr, &lrp->lr_seg.offset) < 0)
++			return nfserr_bad_xdr;
++		if (xdr_stream_decode_u64(argp->xdr, &lrp->lr_seg.length) < 0)
++			return nfserr_bad_xdr;
++		status = nfsd4_decode_stateid4(argp, &lrp->lr_sid);
++		if (status)
++			return status;
++		if (xdr_stream_decode_u32(argp->xdr, &lrp->lrf_body_len) < 0)
++			return nfserr_bad_xdr;
++		if (lrp->lrf_body_len > 0) {
++			lrp->lrf_body = xdr_inline_decode(argp->xdr, lrp->lrf_body_len);
++			if (!lrp->lrf_body)
++				return nfserr_bad_xdr;
++		}
++		break;
++	case RETURN_FSID:
++	case RETURN_ALL:
++		lrp->lr_seg.offset = 0;
++		lrp->lr_seg.length = NFS4_MAX_UINT64;
++		break;
++	default:
++		return nfserr_bad_xdr;
++	}
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
++#endif /* CONFIG_NFSD_PNFS */
+ 
+ static __be32
+-nfsd4_decode_commit(struct nfsd4_compoundargs *argp, struct nfsd4_commit *commit)
++nfsd4_decode_sessionid4(struct nfsd4_compoundargs *argp,
++			struct nfs4_sessionid *sessionid)
+ {
+-	DECODE_HEAD;
+-
+-	READ_BUF(12);
+-	p = xdr_decode_hyper(p, &commit->co_offset);
+-	commit->co_count = be32_to_cpup(p++);
++	__be32 *p;
+ 
+-	DECODE_TAIL;
++	p = xdr_inline_decode(argp->xdr, NFS4_MAX_SESSIONID_LEN);
++	if (!p)
++		return nfserr_bad_xdr;
++	memcpy(sessionid->data, p, sizeof(sessionid->data));
++	return nfs_ok;
+ }
+ 
++/* Defined in Appendix A of RFC 5531 */
+ static __be32
+-nfsd4_decode_create(struct nfsd4_compoundargs *argp, struct nfsd4_create *create)
++nfsd4_decode_authsys_parms(struct nfsd4_compoundargs *argp,
++			   struct nfsd4_cb_sec *cbs)
+ {
+-	DECODE_HEAD;
++	u32 stamp, gidcount, uid, gid;
++	__be32 *p, status;
+ 
+-	READ_BUF(4);
+-	create->cr_type = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &stamp) < 0)
++		return nfserr_bad_xdr;
++	/* machine name */
++	status = nfsd4_decode_ignored_string(argp, 255);
++	if (status)
++		return status;
++	if (xdr_stream_decode_u32(argp->xdr, &uid) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &gid) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &gidcount) < 0)
++		return nfserr_bad_xdr;
++	if (gidcount > 16)
++		return nfserr_bad_xdr;
++	p = xdr_inline_decode(argp->xdr, gidcount << 2);
++	if (!p)
++		return nfserr_bad_xdr;
++	if (cbs->flavor == (u32)(-1)) {
++		struct user_namespace *userns = nfsd_user_namespace(argp->rqstp);
++
++		kuid_t kuid = make_kuid(userns, uid);
++		kgid_t kgid = make_kgid(userns, gid);
++		if (uid_valid(kuid) && gid_valid(kgid)) {
++			cbs->uid = kuid;
++			cbs->gid = kgid;
++			cbs->flavor = RPC_AUTH_UNIX;
++		} else {
++			dprintk("RPC_AUTH_UNIX with invalid uid or gid, ignoring!\n");
++		}
++	}
++
++	return nfs_ok;
++}
++
++static __be32
++nfsd4_decode_gss_cb_handles4(struct nfsd4_compoundargs *argp,
++			     struct nfsd4_cb_sec *cbs)
++{
++	__be32 status;
++	u32 service;
++
++	dprintk("RPC_AUTH_GSS callback secflavor not supported!\n");
++
++	if (xdr_stream_decode_u32(argp->xdr, &service) < 0)
++		return nfserr_bad_xdr;
++	if (service < RPC_GSS_SVC_NONE || service > RPC_GSS_SVC_PRIVACY)
++		return nfserr_bad_xdr;
++	/* gcbp_handle_from_server */
++	status = nfsd4_decode_ignored_string(argp, 0);
++	if (status)
++		return status;
++	/* gcbp_handle_from_client */
++	status = nfsd4_decode_ignored_string(argp, 0);
++	if (status)
++		return status;
++
++	return nfs_ok;
++}
++
++/* a counted array of callback_sec_parms4 items */
++static __be32
++nfsd4_decode_cb_sec(struct nfsd4_compoundargs *argp, struct nfsd4_cb_sec *cbs)
++{
++	u32 i, secflavor, nr_secflavs;
++	__be32 status;
++
++	/* callback_sec_params4 */
++	if (xdr_stream_decode_u32(argp->xdr, &nr_secflavs) < 0)
++		return nfserr_bad_xdr;
++	if (nr_secflavs)
++		cbs->flavor = (u32)(-1);
++	else
++		/* Is this legal? Be generous, take it to mean AUTH_NONE: */
++		cbs->flavor = 0;
++
++	for (i = 0; i < nr_secflavs; ++i) {
++		if (xdr_stream_decode_u32(argp->xdr, &secflavor) < 0)
++			return nfserr_bad_xdr;
++		switch (secflavor) {
++		case RPC_AUTH_NULL:
++			/* void */
++			if (cbs->flavor == (u32)(-1))
++				cbs->flavor = RPC_AUTH_NULL;
++			break;
++		case RPC_AUTH_UNIX:
++			status = nfsd4_decode_authsys_parms(argp, cbs);
++			if (status)
++				return status;
++			break;
++		case RPC_AUTH_GSS:
++			status = nfsd4_decode_gss_cb_handles4(argp, cbs);
++			if (status)
++				return status;
++			break;
++		default:
++			return nfserr_inval;
++		}
++	}
++
++	return nfs_ok;
++}
++
++
++/*
++ * NFSv4 operation argument decoders
++ */
++
++static __be32
++nfsd4_decode_access(struct nfsd4_compoundargs *argp,
++		    union nfsd4_op_u *u)
++{
++	struct nfsd4_access *access = &u->access;
++	if (xdr_stream_decode_u32(argp->xdr, &access->ac_req_access) < 0)
++		return nfserr_bad_xdr;
++	return nfs_ok;
++}
++
++static __be32
++nfsd4_decode_close(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
++{
++	struct nfsd4_close *close = &u->close;
++	if (xdr_stream_decode_u32(argp->xdr, &close->cl_seqid) < 0)
++		return nfserr_bad_xdr;
++	return nfsd4_decode_stateid4(argp, &close->cl_stateid);
++}
++
++
++static __be32
++nfsd4_decode_commit(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
++{
++	struct nfsd4_commit *commit = &u->commit;
++	if (xdr_stream_decode_u64(argp->xdr, &commit->co_offset) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &commit->co_count) < 0)
++		return nfserr_bad_xdr;
++	memset(&commit->co_verf, 0, sizeof(commit->co_verf));
++	return nfs_ok;
++}
++
++static __be32
++nfsd4_decode_create(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
++{
++	struct nfsd4_create *create = &u->create;
++	__be32 *p, status;
++
++	memset(create, 0, sizeof(*create));
++	if (xdr_stream_decode_u32(argp->xdr, &create->cr_type) < 0)
++		return nfserr_bad_xdr;
+ 	switch (create->cr_type) {
+ 	case NF4LNK:
+-		READ_BUF(4);
+-		create->cr_datalen = be32_to_cpup(p++);
+-		READ_BUF(create->cr_datalen);
++		if (xdr_stream_decode_u32(argp->xdr, &create->cr_datalen) < 0)
++			return nfserr_bad_xdr;
++		p = xdr_inline_decode(argp->xdr, create->cr_datalen);
++		if (!p)
++			return nfserr_bad_xdr;
+ 		create->cr_data = svcxdr_dupstr(argp, p, create->cr_datalen);
+ 		if (!create->cr_data)
+ 			return nfserr_jukebox;
+ 		break;
+ 	case NF4BLK:
+ 	case NF4CHR:
+-		READ_BUF(8);
+-		create->cr_specdata1 = be32_to_cpup(p++);
+-		create->cr_specdata2 = be32_to_cpup(p++);
++		if (xdr_stream_decode_u32(argp->xdr, &create->cr_specdata1) < 0)
++			return nfserr_bad_xdr;
++		if (xdr_stream_decode_u32(argp->xdr, &create->cr_specdata2) < 0)
++			return nfserr_bad_xdr;
+ 		break;
+ 	case NF4SOCK:
+ 	case NF4FIFO:
+@@ -720,151 +833,221 @@ nfsd4_decode_create(struct nfsd4_compoundargs *argp, struct nfsd4_create *create
+ 	default:
+ 		break;
+ 	}
+-
+-	READ_BUF(4);
+-	create->cr_namelen = be32_to_cpup(p++);
+-	READ_BUF(create->cr_namelen);
+-	SAVEMEM(create->cr_name, create->cr_namelen);
+-	if ((status = check_filename(create->cr_name, create->cr_namelen)))
++	status = nfsd4_decode_component4(argp, &create->cr_name,
++					 &create->cr_namelen);
++	if (status)
+ 		return status;
+-
+-	status = nfsd4_decode_fattr(argp, create->cr_bmval, &create->cr_iattr,
+-				    &create->cr_acl, &create->cr_label,
+-				    &create->cr_umask);
++	status = nfsd4_decode_fattr4(argp, create->cr_bmval,
++				    ARRAY_SIZE(create->cr_bmval),
++				    &create->cr_iattr, &create->cr_acl,
++				    &create->cr_label, &create->cr_umask);
+ 	if (status)
+-		goto out;
++		return status;
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static inline __be32
+-nfsd4_decode_delegreturn(struct nfsd4_compoundargs *argp, struct nfsd4_delegreturn *dr)
++nfsd4_decode_delegreturn(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	return nfsd4_decode_stateid(argp, &dr->dr_stateid);
++	struct nfsd4_delegreturn *dr = &u->delegreturn;
++	return nfsd4_decode_stateid4(argp, &dr->dr_stateid);
+ }
+ 
+ static inline __be32
+-nfsd4_decode_getattr(struct nfsd4_compoundargs *argp, struct nfsd4_getattr *getattr)
++nfsd4_decode_getattr(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	return nfsd4_decode_bitmap(argp, getattr->ga_bmval);
++	struct nfsd4_getattr *getattr = &u->getattr;
++	memset(getattr, 0, sizeof(*getattr));
++	return nfsd4_decode_bitmap4(argp, getattr->ga_bmval,
++				    ARRAY_SIZE(getattr->ga_bmval));
+ }
+ 
+ static __be32
+-nfsd4_decode_link(struct nfsd4_compoundargs *argp, struct nfsd4_link *link)
++nfsd4_decode_link(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_link *link = &u->link;
++	memset(link, 0, sizeof(*link));
++	return nfsd4_decode_component4(argp, &link->li_name, &link->li_namelen);
++}
+ 
+-	READ_BUF(4);
+-	link->li_namelen = be32_to_cpup(p++);
+-	READ_BUF(link->li_namelen);
+-	SAVEMEM(link->li_name, link->li_namelen);
+-	if ((status = check_filename(link->li_name, link->li_namelen)))
+-		return status;
++static __be32
++nfsd4_decode_open_to_lock_owner4(struct nfsd4_compoundargs *argp,
++				 struct nfsd4_lock *lock)
++{
++	__be32 status;
+ 
+-	DECODE_TAIL;
++	if (xdr_stream_decode_u32(argp->xdr, &lock->lk_new_open_seqid) < 0)
++		return nfserr_bad_xdr;
++	status = nfsd4_decode_stateid4(argp, &lock->lk_new_open_stateid);
++	if (status)
++		return status;
++	if (xdr_stream_decode_u32(argp->xdr, &lock->lk_new_lock_seqid) < 0)
++		return nfserr_bad_xdr;
++	return nfsd4_decode_state_owner4(argp, &lock->lk_new_clientid,
++					 &lock->lk_new_owner);
+ }
+ 
+ static __be32
+-nfsd4_decode_lock(struct nfsd4_compoundargs *argp, struct nfsd4_lock *lock)
++nfsd4_decode_exist_lock_owner4(struct nfsd4_compoundargs *argp,
++			       struct nfsd4_lock *lock)
+ {
+-	DECODE_HEAD;
++	__be32 status;
+ 
+-	/*
+-	* type, reclaim(boolean), offset, length, new_lock_owner(boolean)
+-	*/
+-	READ_BUF(28);
+-	lock->lk_type = be32_to_cpup(p++);
+-	if ((lock->lk_type < NFS4_READ_LT) || (lock->lk_type > NFS4_WRITEW_LT))
+-		goto xdr_error;
+-	lock->lk_reclaim = be32_to_cpup(p++);
+-	p = xdr_decode_hyper(p, &lock->lk_offset);
+-	p = xdr_decode_hyper(p, &lock->lk_length);
+-	lock->lk_is_new = be32_to_cpup(p++);
+-
+-	if (lock->lk_is_new) {
+-		READ_BUF(4);
+-		lock->lk_new_open_seqid = be32_to_cpup(p++);
+-		status = nfsd4_decode_stateid(argp, &lock->lk_new_open_stateid);
+-		if (status)
+-			return status;
+-		READ_BUF(8 + sizeof(clientid_t));
+-		lock->lk_new_lock_seqid = be32_to_cpup(p++);
+-		COPYMEM(&lock->lk_new_clientid, sizeof(clientid_t));
+-		lock->lk_new_owner.len = be32_to_cpup(p++);
+-		READ_BUF(lock->lk_new_owner.len);
+-		READMEM(lock->lk_new_owner.data, lock->lk_new_owner.len);
+-	} else {
+-		status = nfsd4_decode_stateid(argp, &lock->lk_old_lock_stateid);
+-		if (status)
+-			return status;
+-		READ_BUF(4);
+-		lock->lk_old_lock_seqid = be32_to_cpup(p++);
+-	}
++	status = nfsd4_decode_stateid4(argp, &lock->lk_old_lock_stateid);
++	if (status)
++		return status;
++	if (xdr_stream_decode_u32(argp->xdr, &lock->lk_old_lock_seqid) < 0)
++		return nfserr_bad_xdr;
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
++}
++
++static __be32
++nfsd4_decode_locker4(struct nfsd4_compoundargs *argp, struct nfsd4_lock *lock)
++{
++	if (xdr_stream_decode_bool(argp->xdr, &lock->lk_is_new) < 0)
++		return nfserr_bad_xdr;
++	if (lock->lk_is_new)
++		return nfsd4_decode_open_to_lock_owner4(argp, lock);
++	return nfsd4_decode_exist_lock_owner4(argp, lock);
+ }
+ 
+ static __be32
+-nfsd4_decode_lockt(struct nfsd4_compoundargs *argp, struct nfsd4_lockt *lockt)
++nfsd4_decode_lock(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
+-		        
+-	READ_BUF(32);
+-	lockt->lt_type = be32_to_cpup(p++);
+-	if((lockt->lt_type < NFS4_READ_LT) || (lockt->lt_type > NFS4_WRITEW_LT))
+-		goto xdr_error;
+-	p = xdr_decode_hyper(p, &lockt->lt_offset);
+-	p = xdr_decode_hyper(p, &lockt->lt_length);
+-	COPYMEM(&lockt->lt_clientid, 8);
+-	lockt->lt_owner.len = be32_to_cpup(p++);
+-	READ_BUF(lockt->lt_owner.len);
+-	READMEM(lockt->lt_owner.data, lockt->lt_owner.len);
++	struct nfsd4_lock *lock = &u->lock;
++	memset(lock, 0, sizeof(*lock));
++	if (xdr_stream_decode_u32(argp->xdr, &lock->lk_type) < 0)
++		return nfserr_bad_xdr;
++	if ((lock->lk_type < NFS4_READ_LT) || (lock->lk_type > NFS4_WRITEW_LT))
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_bool(argp->xdr, &lock->lk_reclaim) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &lock->lk_offset) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &lock->lk_length) < 0)
++		return nfserr_bad_xdr;
++	return nfsd4_decode_locker4(argp, lock);
++}
+ 
+-	DECODE_TAIL;
++static __be32
++nfsd4_decode_lockt(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
++{
++	struct nfsd4_lockt *lockt = &u->lockt;
++	memset(lockt, 0, sizeof(*lockt));
++	if (xdr_stream_decode_u32(argp->xdr, &lockt->lt_type) < 0)
++		return nfserr_bad_xdr;
++	if ((lockt->lt_type < NFS4_READ_LT) || (lockt->lt_type > NFS4_WRITEW_LT))
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &lockt->lt_offset) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &lockt->lt_length) < 0)
++		return nfserr_bad_xdr;
++	return nfsd4_decode_state_owner4(argp, &lockt->lt_clientid,
++					 &lockt->lt_owner);
+ }
+ 
+ static __be32
+-nfsd4_decode_locku(struct nfsd4_compoundargs *argp, struct nfsd4_locku *locku)
++nfsd4_decode_locku(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_locku *locku = &u->locku;
++	__be32 status;
+ 
+-	READ_BUF(8);
+-	locku->lu_type = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &locku->lu_type) < 0)
++		return nfserr_bad_xdr;
+ 	if ((locku->lu_type < NFS4_READ_LT) || (locku->lu_type > NFS4_WRITEW_LT))
+-		goto xdr_error;
+-	locku->lu_seqid = be32_to_cpup(p++);
+-	status = nfsd4_decode_stateid(argp, &locku->lu_stateid);
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &locku->lu_seqid) < 0)
++		return nfserr_bad_xdr;
++	status = nfsd4_decode_stateid4(argp, &locku->lu_stateid);
+ 	if (status)
+ 		return status;
+-	READ_BUF(16);
+-	p = xdr_decode_hyper(p, &locku->lu_offset);
+-	p = xdr_decode_hyper(p, &locku->lu_length);
++	if (xdr_stream_decode_u64(argp->xdr, &locku->lu_offset) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &locku->lu_length) < 0)
++		return nfserr_bad_xdr;
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_lookup(struct nfsd4_compoundargs *argp, struct nfsd4_lookup *lookup)
++nfsd4_decode_lookup(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_lookup *lookup = &u->lookup;
++	return nfsd4_decode_component4(argp, &lookup->lo_name, &lookup->lo_len);
++}
+ 
+-	READ_BUF(4);
+-	lookup->lo_len = be32_to_cpup(p++);
+-	READ_BUF(lookup->lo_len);
+-	SAVEMEM(lookup->lo_name, lookup->lo_len);
+-	if ((status = check_filename(lookup->lo_name, lookup->lo_len)))
+-		return status;
++static __be32
++nfsd4_decode_createhow4(struct nfsd4_compoundargs *argp, struct nfsd4_open *open)
++{
++	__be32 status;
++
++	if (xdr_stream_decode_u32(argp->xdr, &open->op_createmode) < 0)
++		return nfserr_bad_xdr;
++	switch (open->op_createmode) {
++	case NFS4_CREATE_UNCHECKED:
++	case NFS4_CREATE_GUARDED:
++		status = nfsd4_decode_fattr4(argp, open->op_bmval,
++					     ARRAY_SIZE(open->op_bmval),
++					     &open->op_iattr, &open->op_acl,
++					     &open->op_label, &open->op_umask);
++		if (status)
++			return status;
++		break;
++	case NFS4_CREATE_EXCLUSIVE:
++		status = nfsd4_decode_verifier4(argp, &open->op_verf);
++		if (status)
++			return status;
++		break;
++	case NFS4_CREATE_EXCLUSIVE4_1:
++		if (argp->minorversion < 1)
++			return nfserr_bad_xdr;
++		status = nfsd4_decode_verifier4(argp, &open->op_verf);
++		if (status)
++			return status;
++		status = nfsd4_decode_fattr4(argp, open->op_bmval,
++					     ARRAY_SIZE(open->op_bmval),
++					     &open->op_iattr, &open->op_acl,
++					     &open->op_label, &open->op_umask);
++		if (status)
++			return status;
++		break;
++	default:
++		return nfserr_bad_xdr;
++	}
++
++	return nfs_ok;
++}
++
++static __be32
++nfsd4_decode_openflag4(struct nfsd4_compoundargs *argp, struct nfsd4_open *open)
++{
++	__be32 status;
++
++	if (xdr_stream_decode_u32(argp->xdr, &open->op_create) < 0)
++		return nfserr_bad_xdr;
++	switch (open->op_create) {
++	case NFS4_OPEN_NOCREATE:
++		break;
++	case NFS4_OPEN_CREATE:
++		status = nfsd4_decode_createhow4(argp, open);
++		if (status)
++			return status;
++		break;
++	default:
++		return nfserr_bad_xdr;
++	}
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32 nfsd4_decode_share_access(struct nfsd4_compoundargs *argp, u32 *share_access, u32 *deleg_want, u32 *deleg_when)
+ {
+-	__be32 *p;
+ 	u32 w;
+ 
+-	READ_BUF(4);
+-	w = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &w) < 0)
++		return nfserr_bad_xdr;
+ 	*share_access = w & NFS4_SHARE_ACCESS_MASK;
+ 	*deleg_want = w & NFS4_SHARE_WANT_MASK;
+ 	if (deleg_when)
+@@ -907,930 +1090,935 @@ static __be32 nfsd4_decode_share_access(struct nfsd4_compoundargs *argp, u32 *sh
+ 	      NFS4_SHARE_PUSH_DELEG_WHEN_UNCONTENDED):
+ 		return nfs_ok;
+ 	}
+-xdr_error:
+ 	return nfserr_bad_xdr;
+ }
+ 
+ static __be32 nfsd4_decode_share_deny(struct nfsd4_compoundargs *argp, u32 *x)
+ {
+-	__be32 *p;
+-
+-	READ_BUF(4);
+-	*x = be32_to_cpup(p++);
+-	/* Note: unlinke access bits, deny bits may be zero. */
+-	if (*x & ~NFS4_SHARE_DENY_BOTH)
++	if (xdr_stream_decode_u32(argp->xdr, x) < 0)
+ 		return nfserr_bad_xdr;
+-	return nfs_ok;
+-xdr_error:
+-	return nfserr_bad_xdr;
+-}
+-
+-static __be32 nfsd4_decode_opaque(struct nfsd4_compoundargs *argp, struct xdr_netobj *o)
+-{
+-	__be32 *p;
+-
+-	READ_BUF(4);
+-	o->len = be32_to_cpup(p++);
+-
+-	if (o->len == 0 || o->len > NFS4_OPAQUE_LIMIT)
++	/* Note: unlike access bits, deny bits may be zero. */
++	if (*x & ~NFS4_SHARE_DENY_BOTH)
+ 		return nfserr_bad_xdr;
+ 
+-	READ_BUF(o->len);
+-	SAVEMEM(o->data, o->len);
+ 	return nfs_ok;
+-xdr_error:
+-	return nfserr_bad_xdr;
+ }
+ 
+ static __be32
+-nfsd4_decode_open(struct nfsd4_compoundargs *argp, struct nfsd4_open *open)
++nfsd4_decode_open_claim4(struct nfsd4_compoundargs *argp,
++			 struct nfsd4_open *open)
+ {
+-	DECODE_HEAD;
+-	u32 dummy;
+-
+-	memset(open->op_bmval, 0, sizeof(open->op_bmval));
+-	open->op_iattr.ia_valid = 0;
+-	open->op_openowner = NULL;
+-
+-	open->op_xdr_error = 0;
+-	/* seqid, share_access, share_deny, clientid, ownerlen */
+-	READ_BUF(4);
+-	open->op_seqid = be32_to_cpup(p++);
+-	/* decode, yet ignore deleg_when until supported */
+-	status = nfsd4_decode_share_access(argp, &open->op_share_access,
+-					   &open->op_deleg_want, &dummy);
+-	if (status)
+-		goto xdr_error;
+-	status = nfsd4_decode_share_deny(argp, &open->op_share_deny);
+-	if (status)
+-		goto xdr_error;
+-	READ_BUF(sizeof(clientid_t));
+-	COPYMEM(&open->op_clientid, sizeof(clientid_t));
+-	status = nfsd4_decode_opaque(argp, &open->op_owner);
+-	if (status)
+-		goto xdr_error;
+-	READ_BUF(4);
+-	open->op_create = be32_to_cpup(p++);
+-	switch (open->op_create) {
+-	case NFS4_OPEN_NOCREATE:
+-		break;
+-	case NFS4_OPEN_CREATE:
+-		READ_BUF(4);
+-		open->op_createmode = be32_to_cpup(p++);
+-		switch (open->op_createmode) {
+-		case NFS4_CREATE_UNCHECKED:
+-		case NFS4_CREATE_GUARDED:
+-			status = nfsd4_decode_fattr(argp, open->op_bmval,
+-				&open->op_iattr, &open->op_acl, &open->op_label,
+-				&open->op_umask);
+-			if (status)
+-				goto out;
+-			break;
+-		case NFS4_CREATE_EXCLUSIVE:
+-			READ_BUF(NFS4_VERIFIER_SIZE);
+-			COPYMEM(open->op_verf.data, NFS4_VERIFIER_SIZE);
+-			break;
+-		case NFS4_CREATE_EXCLUSIVE4_1:
+-			if (argp->minorversion < 1)
+-				goto xdr_error;
+-			READ_BUF(NFS4_VERIFIER_SIZE);
+-			COPYMEM(open->op_verf.data, NFS4_VERIFIER_SIZE);
+-			status = nfsd4_decode_fattr(argp, open->op_bmval,
+-				&open->op_iattr, &open->op_acl, &open->op_label,
+-				&open->op_umask);
+-			if (status)
+-				goto out;
+-			break;
+-		default:
+-			goto xdr_error;
+-		}
+-		break;
+-	default:
+-		goto xdr_error;
+-	}
++	__be32 status;
+ 
+-	/* open_claim */
+-	READ_BUF(4);
+-	open->op_claim_type = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &open->op_claim_type) < 0)
++		return nfserr_bad_xdr;
+ 	switch (open->op_claim_type) {
+ 	case NFS4_OPEN_CLAIM_NULL:
+ 	case NFS4_OPEN_CLAIM_DELEGATE_PREV:
+-		READ_BUF(4);
+-		open->op_fname.len = be32_to_cpup(p++);
+-		READ_BUF(open->op_fname.len);
+-		SAVEMEM(open->op_fname.data, open->op_fname.len);
+-		if ((status = check_filename(open->op_fname.data, open->op_fname.len)))
++		status = nfsd4_decode_component4(argp, &open->op_fname,
++						 &open->op_fnamelen);
++		if (status)
+ 			return status;
+ 		break;
+ 	case NFS4_OPEN_CLAIM_PREVIOUS:
+-		READ_BUF(4);
+-		open->op_delegate_type = be32_to_cpup(p++);
++		if (xdr_stream_decode_u32(argp->xdr, &open->op_delegate_type) < 0)
++			return nfserr_bad_xdr;
+ 		break;
+ 	case NFS4_OPEN_CLAIM_DELEGATE_CUR:
+-		status = nfsd4_decode_stateid(argp, &open->op_delegate_stateid);
++		status = nfsd4_decode_stateid4(argp, &open->op_delegate_stateid);
+ 		if (status)
+ 			return status;
+-		READ_BUF(4);
+-		open->op_fname.len = be32_to_cpup(p++);
+-		READ_BUF(open->op_fname.len);
+-		SAVEMEM(open->op_fname.data, open->op_fname.len);
+-		if ((status = check_filename(open->op_fname.data, open->op_fname.len)))
++		status = nfsd4_decode_component4(argp, &open->op_fname,
++						 &open->op_fnamelen);
++		if (status)
+ 			return status;
+ 		break;
+ 	case NFS4_OPEN_CLAIM_FH:
+ 	case NFS4_OPEN_CLAIM_DELEG_PREV_FH:
+ 		if (argp->minorversion < 1)
+-			goto xdr_error;
++			return nfserr_bad_xdr;
+ 		/* void */
+ 		break;
+ 	case NFS4_OPEN_CLAIM_DELEG_CUR_FH:
+ 		if (argp->minorversion < 1)
+-			goto xdr_error;
+-		status = nfsd4_decode_stateid(argp, &open->op_delegate_stateid);
++			return nfserr_bad_xdr;
++		status = nfsd4_decode_stateid4(argp, &open->op_delegate_stateid);
+ 		if (status)
+ 			return status;
+ 		break;
+ 	default:
+-		goto xdr_error;
++		return nfserr_bad_xdr;
+ 	}
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_open_confirm(struct nfsd4_compoundargs *argp, struct nfsd4_open_confirm *open_conf)
++nfsd4_decode_open(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_open *open = &u->open;
++	__be32 status;
++	u32 dummy;
+ 
+-	if (argp->minorversion >= 1)
+-		return nfserr_notsupp;
++	memset(open, 0, sizeof(*open));
+ 
+-	status = nfsd4_decode_stateid(argp, &open_conf->oc_req_stateid);
++	if (xdr_stream_decode_u32(argp->xdr, &open->op_seqid) < 0)
++		return nfserr_bad_xdr;
++	/* deleg_want is ignored */
++	status = nfsd4_decode_share_access(argp, &open->op_share_access,
++					   &open->op_deleg_want, &dummy);
+ 	if (status)
+ 		return status;
+-	READ_BUF(4);
+-	open_conf->oc_seqid = be32_to_cpup(p++);
+-
+-	DECODE_TAIL;
+-}
+-
+-static __be32
+-nfsd4_decode_open_downgrade(struct nfsd4_compoundargs *argp, struct nfsd4_open_downgrade *open_down)
+-{
+-	DECODE_HEAD;
+-		    
+-	status = nfsd4_decode_stateid(argp, &open_down->od_stateid);
++	status = nfsd4_decode_share_deny(argp, &open->op_share_deny);
+ 	if (status)
+ 		return status;
+-	READ_BUF(4);
+-	open_down->od_seqid = be32_to_cpup(p++);
+-	status = nfsd4_decode_share_access(argp, &open_down->od_share_access,
+-					   &open_down->od_deleg_want, NULL);
++	status = nfsd4_decode_state_owner4(argp, &open->op_clientid,
++					   &open->op_owner);
+ 	if (status)
+ 		return status;
+-	status = nfsd4_decode_share_deny(argp, &open_down->od_share_deny);
++	status = nfsd4_decode_openflag4(argp, open);
+ 	if (status)
+ 		return status;
+-	DECODE_TAIL;
++	return nfsd4_decode_open_claim4(argp, open);
+ }
+ 
+ static __be32
+-nfsd4_decode_putfh(struct nfsd4_compoundargs *argp, struct nfsd4_putfh *putfh)
++nfsd4_decode_open_confirm(struct nfsd4_compoundargs *argp,
++			  union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
+-
+-	READ_BUF(4);
+-	putfh->pf_fhlen = be32_to_cpup(p++);
+-	if (putfh->pf_fhlen > NFS4_FHSIZE)
+-		goto xdr_error;
+-	READ_BUF(putfh->pf_fhlen);
+-	SAVEMEM(putfh->pf_fhval, putfh->pf_fhlen);
++	struct nfsd4_open_confirm *open_conf = &u->open_confirm;
++	__be32 status;
+ 
+-	DECODE_TAIL;
+-}
++	if (argp->minorversion >= 1)
++		return nfserr_notsupp;
+ 
+-static __be32
+-nfsd4_decode_putpubfh(struct nfsd4_compoundargs *argp, void *p)
+-{
+-	if (argp->minorversion == 0)
+-		return nfs_ok;
+-	return nfserr_notsupp;
++	status = nfsd4_decode_stateid4(argp, &open_conf->oc_req_stateid);
++	if (status)
++		return status;
++	if (xdr_stream_decode_u32(argp->xdr, &open_conf->oc_seqid) < 0)
++		return nfserr_bad_xdr;
++
++	memset(&open_conf->oc_resp_stateid, 0,
++	       sizeof(open_conf->oc_resp_stateid));
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_read(struct nfsd4_compoundargs *argp, struct nfsd4_read *read)
++nfsd4_decode_open_downgrade(struct nfsd4_compoundargs *argp,
++			    union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_open_downgrade *open_down = &u->open_downgrade;
++	__be32 status;
+ 
+-	status = nfsd4_decode_stateid(argp, &read->rd_stateid);
++	memset(open_down, 0, sizeof(*open_down));
++	status = nfsd4_decode_stateid4(argp, &open_down->od_stateid);
+ 	if (status)
+ 		return status;
+-	READ_BUF(12);
+-	p = xdr_decode_hyper(p, &read->rd_offset);
+-	read->rd_length = be32_to_cpup(p++);
+-
+-	DECODE_TAIL;
++	if (xdr_stream_decode_u32(argp->xdr, &open_down->od_seqid) < 0)
++		return nfserr_bad_xdr;
++	/* deleg_want is ignored */
++	status = nfsd4_decode_share_access(argp, &open_down->od_share_access,
++					   &open_down->od_deleg_want, NULL);
++	if (status)
++		return status;
++	return nfsd4_decode_share_deny(argp, &open_down->od_share_deny);
+ }
+ 
+ static __be32
+-nfsd4_decode_readdir(struct nfsd4_compoundargs *argp, struct nfsd4_readdir *readdir)
++nfsd4_decode_putfh(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_putfh *putfh = &u->putfh;
++	__be32 *p;
+ 
+-	READ_BUF(24);
+-	p = xdr_decode_hyper(p, &readdir->rd_cookie);
+-	COPYMEM(readdir->rd_verf.data, sizeof(readdir->rd_verf.data));
+-	readdir->rd_dircount = be32_to_cpup(p++);
+-	readdir->rd_maxcount = be32_to_cpup(p++);
+-	if ((status = nfsd4_decode_bitmap(argp, readdir->rd_bmval)))
+-		goto out;
++	if (xdr_stream_decode_u32(argp->xdr, &putfh->pf_fhlen) < 0)
++		return nfserr_bad_xdr;
++	if (putfh->pf_fhlen > NFS4_FHSIZE)
++		return nfserr_bad_xdr;
++	p = xdr_inline_decode(argp->xdr, putfh->pf_fhlen);
++	if (!p)
++		return nfserr_bad_xdr;
++	putfh->pf_fhval = svcxdr_savemem(argp, p, putfh->pf_fhlen);
++	if (!putfh->pf_fhval)
++		return nfserr_jukebox;
+ 
+-	DECODE_TAIL;
++	putfh->no_verify = false;
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_remove(struct nfsd4_compoundargs *argp, struct nfsd4_remove *remove)
++nfsd4_decode_putpubfh(struct nfsd4_compoundargs *argp, union nfsd4_op_u *p)
+ {
+-	DECODE_HEAD;
+-
+-	READ_BUF(4);
+-	remove->rm_namelen = be32_to_cpup(p++);
+-	READ_BUF(remove->rm_namelen);
+-	SAVEMEM(remove->rm_name, remove->rm_namelen);
+-	if ((status = check_filename(remove->rm_name, remove->rm_namelen)))
+-		return status;
+-
+-	DECODE_TAIL;
++	if (argp->minorversion == 0)
++		return nfs_ok;
++	return nfserr_notsupp;
+ }
+ 
+ static __be32
+-nfsd4_decode_rename(struct nfsd4_compoundargs *argp, struct nfsd4_rename *rename)
++nfsd4_decode_read(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_read *read = &u->read;
++	__be32 status;
+ 
+-	READ_BUF(4);
+-	rename->rn_snamelen = be32_to_cpup(p++);
+-	READ_BUF(rename->rn_snamelen);
+-	SAVEMEM(rename->rn_sname, rename->rn_snamelen);
+-	READ_BUF(4);
+-	rename->rn_tnamelen = be32_to_cpup(p++);
+-	READ_BUF(rename->rn_tnamelen);
+-	SAVEMEM(rename->rn_tname, rename->rn_tnamelen);
+-	if ((status = check_filename(rename->rn_sname, rename->rn_snamelen)))
+-		return status;
+-	if ((status = check_filename(rename->rn_tname, rename->rn_tnamelen)))
++	memset(read, 0, sizeof(*read));
++	status = nfsd4_decode_stateid4(argp, &read->rd_stateid);
++	if (status)
+ 		return status;
++	if (xdr_stream_decode_u64(argp->xdr, &read->rd_offset) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &read->rd_length) < 0)
++		return nfserr_bad_xdr;
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_renew(struct nfsd4_compoundargs *argp, clientid_t *clientid)
++nfsd4_decode_readdir(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_readdir *readdir = &u->readdir;
++	__be32 status;
+ 
+-	if (argp->minorversion >= 1)
+-		return nfserr_notsupp;
++	memset(readdir, 0, sizeof(*readdir));
++	if (xdr_stream_decode_u64(argp->xdr, &readdir->rd_cookie) < 0)
++		return nfserr_bad_xdr;
++	status = nfsd4_decode_verifier4(argp, &readdir->rd_verf);
++	if (status)
++		return status;
++	if (xdr_stream_decode_u32(argp->xdr, &readdir->rd_dircount) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &readdir->rd_maxcount) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_uint32_array(argp->xdr, readdir->rd_bmval,
++					   ARRAY_SIZE(readdir->rd_bmval)) < 0)
++		return nfserr_bad_xdr;
+ 
+-	READ_BUF(sizeof(clientid_t));
+-	COPYMEM(clientid, sizeof(clientid_t));
++	return nfs_ok;
++}
+ 
+-	DECODE_TAIL;
++static __be32
++nfsd4_decode_remove(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
++{
++	struct nfsd4_remove *remove = &u->remove;
++	memset(&remove->rm_cinfo, 0, sizeof(remove->rm_cinfo));
++	return nfsd4_decode_component4(argp, &remove->rm_name, &remove->rm_namelen);
+ }
+ 
+ static __be32
+-nfsd4_decode_secinfo(struct nfsd4_compoundargs *argp,
+-		     struct nfsd4_secinfo *secinfo)
++nfsd4_decode_rename(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_rename *rename = &u->rename;
++	__be32 status;
+ 
+-	READ_BUF(4);
+-	secinfo->si_namelen = be32_to_cpup(p++);
+-	READ_BUF(secinfo->si_namelen);
+-	SAVEMEM(secinfo->si_name, secinfo->si_namelen);
+-	status = check_filename(secinfo->si_name, secinfo->si_namelen);
++	memset(rename, 0, sizeof(*rename));
++	status = nfsd4_decode_component4(argp, &rename->rn_sname, &rename->rn_snamelen);
+ 	if (status)
+ 		return status;
+-	DECODE_TAIL;
++	return nfsd4_decode_component4(argp, &rename->rn_tname, &rename->rn_tnamelen);
+ }
+ 
+ static __be32
+-nfsd4_decode_secinfo_no_name(struct nfsd4_compoundargs *argp,
+-		     struct nfsd4_secinfo_no_name *sin)
++nfsd4_decode_renew(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	clientid_t *clientid = &u->renew;
++	return nfsd4_decode_clientid4(argp, clientid);
++}
+ 
+-	READ_BUF(4);
+-	sin->sin_style = be32_to_cpup(p++);
+-	DECODE_TAIL;
++static __be32
++nfsd4_decode_secinfo(struct nfsd4_compoundargs *argp,
++		     union nfsd4_op_u *u)
++{
++	struct nfsd4_secinfo *secinfo = &u->secinfo;
++	secinfo->si_exp = NULL;
++	return nfsd4_decode_component4(argp, &secinfo->si_name, &secinfo->si_namelen);
+ }
+ 
+ static __be32
+-nfsd4_decode_setattr(struct nfsd4_compoundargs *argp, struct nfsd4_setattr *setattr)
++nfsd4_decode_setattr(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
++	struct nfsd4_setattr *setattr = &u->setattr;
+ 	__be32 status;
+ 
+-	status = nfsd4_decode_stateid(argp, &setattr->sa_stateid);
++	memset(setattr, 0, sizeof(*setattr));
++	status = nfsd4_decode_stateid4(argp, &setattr->sa_stateid);
+ 	if (status)
+ 		return status;
+-	return nfsd4_decode_fattr(argp, setattr->sa_bmval, &setattr->sa_iattr,
+-				  &setattr->sa_acl, &setattr->sa_label, NULL);
++	return nfsd4_decode_fattr4(argp, setattr->sa_bmval,
++				   ARRAY_SIZE(setattr->sa_bmval),
++				   &setattr->sa_iattr, &setattr->sa_acl,
++				   &setattr->sa_label, NULL);
+ }
+ 
+ static __be32
+-nfsd4_decode_setclientid(struct nfsd4_compoundargs *argp, struct nfsd4_setclientid *setclientid)
++nfsd4_decode_setclientid(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_setclientid *setclientid = &u->setclientid;
++	__be32 *p, status;
++
++	memset(setclientid, 0, sizeof(*setclientid));
+ 
+ 	if (argp->minorversion >= 1)
+ 		return nfserr_notsupp;
+ 
+-	READ_BUF(NFS4_VERIFIER_SIZE);
+-	COPYMEM(setclientid->se_verf.data, NFS4_VERIFIER_SIZE);
+-
++	status = nfsd4_decode_verifier4(argp, &setclientid->se_verf);
++	if (status)
++		return status;
+ 	status = nfsd4_decode_opaque(argp, &setclientid->se_name);
+ 	if (status)
++		return status;
++	if (xdr_stream_decode_u32(argp->xdr, &setclientid->se_callback_prog) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &setclientid->se_callback_netid_len) < 0)
++		return nfserr_bad_xdr;
++	p = xdr_inline_decode(argp->xdr, setclientid->se_callback_netid_len);
++	if (!p)
+ 		return nfserr_bad_xdr;
+-	READ_BUF(8);
+-	setclientid->se_callback_prog = be32_to_cpup(p++);
+-	setclientid->se_callback_netid_len = be32_to_cpup(p++);
+-	READ_BUF(setclientid->se_callback_netid_len);
+-	SAVEMEM(setclientid->se_callback_netid_val, setclientid->se_callback_netid_len);
+-	READ_BUF(4);
+-	setclientid->se_callback_addr_len = be32_to_cpup(p++);
++	setclientid->se_callback_netid_val = svcxdr_savemem(argp, p,
++						setclientid->se_callback_netid_len);
++	if (!setclientid->se_callback_netid_val)
++		return nfserr_jukebox;
+ 
+-	READ_BUF(setclientid->se_callback_addr_len);
+-	SAVEMEM(setclientid->se_callback_addr_val, setclientid->se_callback_addr_len);
+-	READ_BUF(4);
+-	setclientid->se_callback_ident = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &setclientid->se_callback_addr_len) < 0)
++		return nfserr_bad_xdr;
++	p = xdr_inline_decode(argp->xdr, setclientid->se_callback_addr_len);
++	if (!p)
++		return nfserr_bad_xdr;
++	setclientid->se_callback_addr_val = svcxdr_savemem(argp, p,
++						setclientid->se_callback_addr_len);
++	if (!setclientid->se_callback_addr_val)
++		return nfserr_jukebox;
++	if (xdr_stream_decode_u32(argp->xdr, &setclientid->se_callback_ident) < 0)
++		return nfserr_bad_xdr;
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_setclientid_confirm(struct nfsd4_compoundargs *argp, struct nfsd4_setclientid_confirm *scd_c)
++nfsd4_decode_setclientid_confirm(struct nfsd4_compoundargs *argp,
++				 union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_setclientid_confirm *scd_c = &u->setclientid_confirm;
++	__be32 status;
+ 
+ 	if (argp->minorversion >= 1)
+ 		return nfserr_notsupp;
+ 
+-	READ_BUF(8 + NFS4_VERIFIER_SIZE);
+-	COPYMEM(&scd_c->sc_clientid, 8);
+-	COPYMEM(&scd_c->sc_confirm, NFS4_VERIFIER_SIZE);
+-
+-	DECODE_TAIL;
++	status = nfsd4_decode_clientid4(argp, &scd_c->sc_clientid);
++	if (status)
++		return status;
++	return nfsd4_decode_verifier4(argp, &scd_c->sc_confirm);
+ }
+ 
+ /* Also used for NVERIFY */
+ static __be32
+-nfsd4_decode_verify(struct nfsd4_compoundargs *argp, struct nfsd4_verify *verify)
++nfsd4_decode_verify(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_verify *verify = &u->verify;
++	__be32 *p, status;
+ 
+-	if ((status = nfsd4_decode_bitmap(argp, verify->ve_bmval)))
+-		goto out;
++	memset(verify, 0, sizeof(*verify));
++
++	status = nfsd4_decode_bitmap4(argp, verify->ve_bmval,
++				      ARRAY_SIZE(verify->ve_bmval));
++	if (status)
++		return status;
+ 
+ 	/* For convenience's sake, we compare raw xdr'd attributes in
+ 	 * nfsd4_proc_verify */
+ 
+-	READ_BUF(4);
+-	verify->ve_attrlen = be32_to_cpup(p++);
+-	READ_BUF(verify->ve_attrlen);
+-	SAVEMEM(verify->ve_attrval, verify->ve_attrlen);
++	if (xdr_stream_decode_u32(argp->xdr, &verify->ve_attrlen) < 0)
++		return nfserr_bad_xdr;
++	p = xdr_inline_decode(argp->xdr, verify->ve_attrlen);
++	if (!p)
++		return nfserr_bad_xdr;
++	verify->ve_attrval = svcxdr_savemem(argp, p, verify->ve_attrlen);
++	if (!verify->ve_attrval)
++		return nfserr_jukebox;
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_write(struct nfsd4_compoundargs *argp, struct nfsd4_write *write)
++nfsd4_decode_write(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_write *write = &u->write;
++	__be32 status;
+ 
+-	status = nfsd4_decode_stateid(argp, &write->wr_stateid);
++	status = nfsd4_decode_stateid4(argp, &write->wr_stateid);
+ 	if (status)
+ 		return status;
+-	READ_BUF(16);
+-	p = xdr_decode_hyper(p, &write->wr_offset);
+-	write->wr_stable_how = be32_to_cpup(p++);
++	if (xdr_stream_decode_u64(argp->xdr, &write->wr_offset) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &write->wr_stable_how) < 0)
++		return nfserr_bad_xdr;
+ 	if (write->wr_stable_how > NFS_FILE_SYNC)
+-		goto xdr_error;
+-	write->wr_buflen = be32_to_cpup(p++);
+-
+-	status = svcxdr_construct_vector(argp, &write->wr_head,
+-					 &write->wr_pagelist, write->wr_buflen);
+-	if (status)
+-		return status;
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &write->wr_buflen) < 0)
++		return nfserr_bad_xdr;
++	if (!xdr_stream_subsegment(argp->xdr, &write->wr_payload, write->wr_buflen))
++		return nfserr_bad_xdr;
+ 
+-	DECODE_TAIL;
++	write->wr_bytes_written = 0;
++	write->wr_how_written = 0;
++	memset(&write->wr_verifier, 0, sizeof(write->wr_verifier));
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_release_lockowner(struct nfsd4_compoundargs *argp, struct nfsd4_release_lockowner *rlockowner)
++nfsd4_decode_release_lockowner(struct nfsd4_compoundargs *argp,
++			       union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_release_lockowner *rlockowner = &u->release_lockowner;
++	__be32 status;
+ 
+ 	if (argp->minorversion >= 1)
+ 		return nfserr_notsupp;
+ 
+-	READ_BUF(12);
+-	COPYMEM(&rlockowner->rl_clientid, sizeof(clientid_t));
+-	rlockowner->rl_owner.len = be32_to_cpup(p++);
+-	READ_BUF(rlockowner->rl_owner.len);
+-	READMEM(rlockowner->rl_owner.data, rlockowner->rl_owner.len);
++	status = nfsd4_decode_state_owner4(argp, &rlockowner->rl_clientid,
++					   &rlockowner->rl_owner);
++	if (status)
++		return status;
+ 
+ 	if (argp->minorversion && !zero_clientid(&rlockowner->rl_clientid))
+ 		return nfserr_inval;
+-	DECODE_TAIL;
++
++	return nfs_ok;
++}
++
++static __be32 nfsd4_decode_backchannel_ctl(struct nfsd4_compoundargs *argp,
++					   union nfsd4_op_u *u)
++{
++	struct nfsd4_backchannel_ctl *bc = &u->backchannel_ctl;
++	memset(bc, 0, sizeof(*bc));
++	if (xdr_stream_decode_u32(argp->xdr, &bc->bc_cb_program) < 0)
++		return nfserr_bad_xdr;
++	return nfsd4_decode_cb_sec(argp, &bc->bc_cb_sec);
++}
++
++static __be32 nfsd4_decode_bind_conn_to_session(struct nfsd4_compoundargs *argp,
++						union nfsd4_op_u *u)
++{
++	struct nfsd4_bind_conn_to_session *bcts = &u->bind_conn_to_session;
++	u32 use_conn_in_rdma_mode;
++	__be32 status;
++
++	memset(bcts, 0, sizeof(*bcts));
++	status = nfsd4_decode_sessionid4(argp, &bcts->sessionid);
++	if (status)
++		return status;
++	if (xdr_stream_decode_u32(argp->xdr, &bcts->dir) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &use_conn_in_rdma_mode) < 0)
++		return nfserr_bad_xdr;
++
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_exchange_id(struct nfsd4_compoundargs *argp,
+-			 struct nfsd4_exchange_id *exid)
++nfsd4_decode_state_protect_ops(struct nfsd4_compoundargs *argp,
++			       struct nfsd4_exchange_id *exid)
+ {
+-	int dummy, tmp;
+-	DECODE_HEAD;
++	__be32 status;
+ 
+-	READ_BUF(NFS4_VERIFIER_SIZE);
+-	COPYMEM(exid->verifier.data, NFS4_VERIFIER_SIZE);
++	status = nfsd4_decode_bitmap4(argp, exid->spo_must_enforce,
++				      ARRAY_SIZE(exid->spo_must_enforce));
++	if (status)
++		return nfserr_bad_xdr;
++	status = nfsd4_decode_bitmap4(argp, exid->spo_must_allow,
++				      ARRAY_SIZE(exid->spo_must_allow));
++	if (status)
++		return nfserr_bad_xdr;
+ 
+-	status = nfsd4_decode_opaque(argp, &exid->clname);
++	return nfs_ok;
++}
++
++/*
++ * This implementation currently does not support SP4_SSV.
++ * This decoder simply skips over these arguments.
++ */
++static noinline __be32
++nfsd4_decode_ssv_sp_parms(struct nfsd4_compoundargs *argp,
++			  struct nfsd4_exchange_id *exid)
++{
++	u32 count, window, num_gss_handles;
++	__be32 status;
++
++	/* ssp_ops */
++	status = nfsd4_decode_state_protect_ops(argp, exid);
+ 	if (status)
++		return status;
++
++	/* ssp_hash_algs<> */
++	if (xdr_stream_decode_u32(argp->xdr, &count) < 0)
+ 		return nfserr_bad_xdr;
++	while (count--) {
++		status = nfsd4_decode_ignored_string(argp, 0);
++		if (status)
++			return status;
++	}
+ 
+-	READ_BUF(4);
+-	exid->flags = be32_to_cpup(p++);
++	/* ssp_encr_algs<> */
++	if (xdr_stream_decode_u32(argp->xdr, &count) < 0)
++		return nfserr_bad_xdr;
++	while (count--) {
++		status = nfsd4_decode_ignored_string(argp, 0);
++		if (status)
++			return status;
++	}
++
++	if (xdr_stream_decode_u32(argp->xdr, &window) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &num_gss_handles) < 0)
++		return nfserr_bad_xdr;
++
++	return nfs_ok;
++}
++
++static __be32
++nfsd4_decode_state_protect4_a(struct nfsd4_compoundargs *argp,
++			      struct nfsd4_exchange_id *exid)
++{
++	__be32 status;
+ 
+-	/* Ignore state_protect4_a */
+-	READ_BUF(4);
+-	exid->spa_how = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &exid->spa_how) < 0)
++		return nfserr_bad_xdr;
+ 	switch (exid->spa_how) {
+ 	case SP4_NONE:
+ 		break;
+ 	case SP4_MACH_CRED:
+-		/* spo_must_enforce */
+-		status = nfsd4_decode_bitmap(argp,
+-					exid->spo_must_enforce);
+-		if (status)
+-			goto out;
+-		/* spo_must_allow */
+-		status = nfsd4_decode_bitmap(argp, exid->spo_must_allow);
++		status = nfsd4_decode_state_protect_ops(argp, exid);
+ 		if (status)
+-			goto out;
++			return status;
+ 		break;
+ 	case SP4_SSV:
+-		/* ssp_ops */
+-		READ_BUF(4);
+-		dummy = be32_to_cpup(p++);
+-		READ_BUF(dummy * 4);
+-		p += dummy;
+-
+-		READ_BUF(4);
+-		dummy = be32_to_cpup(p++);
+-		READ_BUF(dummy * 4);
+-		p += dummy;
+-
+-		/* ssp_hash_algs<> */
+-		READ_BUF(4);
+-		tmp = be32_to_cpup(p++);
+-		while (tmp--) {
+-			READ_BUF(4);
+-			dummy = be32_to_cpup(p++);
+-			READ_BUF(dummy);
+-			p += XDR_QUADLEN(dummy);
+-		}
+-
+-		/* ssp_encr_algs<> */
+-		READ_BUF(4);
+-		tmp = be32_to_cpup(p++);
+-		while (tmp--) {
+-			READ_BUF(4);
+-			dummy = be32_to_cpup(p++);
+-			READ_BUF(dummy);
+-			p += XDR_QUADLEN(dummy);
+-		}
+-
+-		/* ignore ssp_window and ssp_num_gss_handles: */
+-		READ_BUF(8);
++		status = nfsd4_decode_ssv_sp_parms(argp, exid);
++		if (status)
++			return status;
+ 		break;
+ 	default:
+-		goto xdr_error;
++		return nfserr_bad_xdr;
+ 	}
+ 
+-	READ_BUF(4);    /* nfs_impl_id4 array length */
+-	dummy = be32_to_cpup(p++);
++	return nfs_ok;
++}
+ 
+-	if (dummy > 1)
+-		goto xdr_error;
++static __be32
++nfsd4_decode_nfs_impl_id4(struct nfsd4_compoundargs *argp,
++			  struct nfsd4_exchange_id *exid)
++{
++	__be32 status;
++	u32 count;
+ 
+-	if (dummy == 1) {
++	if (xdr_stream_decode_u32(argp->xdr, &count) < 0)
++		return nfserr_bad_xdr;
++	switch (count) {
++	case 0:
++		break;
++	case 1:
++		/* Note that RFC 8881 places no length limit on
++		 * nii_domain, but this implementation permits no
++		 * more than NFS4_OPAQUE_LIMIT bytes */
+ 		status = nfsd4_decode_opaque(argp, &exid->nii_domain);
+ 		if (status)
+-			goto xdr_error;
+-
+-		/* nii_name */
++			return status;
++		/* Note that RFC 8881 places no length limit on
++		 * nii_name, but this implementation permits no
++		 * more than NFS4_OPAQUE_LIMIT bytes */
+ 		status = nfsd4_decode_opaque(argp, &exid->nii_name);
+ 		if (status)
+-			goto xdr_error;
+-
+-		/* nii_date */
+-		status = nfsd4_decode_time(argp, &exid->nii_time);
++			return status;
++		status = nfsd4_decode_nfstime4(argp, &exid->nii_time);
+ 		if (status)
+-			goto xdr_error;
++			return status;
++		break;
++	default:
++		return nfserr_bad_xdr;
+ 	}
+-	DECODE_TAIL;
+-}
+ 
+-static __be32
+-nfsd4_decode_create_session(struct nfsd4_compoundargs *argp,
+-			    struct nfsd4_create_session *sess)
+-{
+-	DECODE_HEAD;
+-
+-	READ_BUF(16);
+-	COPYMEM(&sess->clientid, 8);
+-	sess->seqid = be32_to_cpup(p++);
+-	sess->flags = be32_to_cpup(p++);
+-
+-	/* Fore channel attrs */
+-	READ_BUF(28);
+-	p++; /* headerpadsz is always 0 */
+-	sess->fore_channel.maxreq_sz = be32_to_cpup(p++);
+-	sess->fore_channel.maxresp_sz = be32_to_cpup(p++);
+-	sess->fore_channel.maxresp_cached = be32_to_cpup(p++);
+-	sess->fore_channel.maxops = be32_to_cpup(p++);
+-	sess->fore_channel.maxreqs = be32_to_cpup(p++);
+-	sess->fore_channel.nr_rdma_attrs = be32_to_cpup(p++);
+-	if (sess->fore_channel.nr_rdma_attrs == 1) {
+-		READ_BUF(4);
+-		sess->fore_channel.rdma_attrs = be32_to_cpup(p++);
+-	} else if (sess->fore_channel.nr_rdma_attrs > 1) {
+-		dprintk("Too many fore channel attr bitmaps!\n");
+-		goto xdr_error;
+-	}
+-
+-	/* Back channel attrs */
+-	READ_BUF(28);
+-	p++; /* headerpadsz is always 0 */
+-	sess->back_channel.maxreq_sz = be32_to_cpup(p++);
+-	sess->back_channel.maxresp_sz = be32_to_cpup(p++);
+-	sess->back_channel.maxresp_cached = be32_to_cpup(p++);
+-	sess->back_channel.maxops = be32_to_cpup(p++);
+-	sess->back_channel.maxreqs = be32_to_cpup(p++);
+-	sess->back_channel.nr_rdma_attrs = be32_to_cpup(p++);
+-	if (sess->back_channel.nr_rdma_attrs == 1) {
+-		READ_BUF(4);
+-		sess->back_channel.rdma_attrs = be32_to_cpup(p++);
+-	} else if (sess->back_channel.nr_rdma_attrs > 1) {
+-		dprintk("Too many back channel attr bitmaps!\n");
+-		goto xdr_error;
+-	}
+-
+-	READ_BUF(4);
+-	sess->callback_prog = be32_to_cpup(p++);
+-	nfsd4_decode_cb_sec(argp, &sess->cb_sec);
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_destroy_session(struct nfsd4_compoundargs *argp,
+-			     struct nfsd4_destroy_session *destroy_session)
++nfsd4_decode_exchange_id(struct nfsd4_compoundargs *argp,
++			 union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
+-	READ_BUF(NFS4_MAX_SESSIONID_LEN);
+-	COPYMEM(destroy_session->sessionid.data, NFS4_MAX_SESSIONID_LEN);
++	struct nfsd4_exchange_id *exid = &u->exchange_id;
++	__be32 status;
+ 
+-	DECODE_TAIL;
++	memset(exid, 0, sizeof(*exid));
++	status = nfsd4_decode_verifier4(argp, &exid->verifier);
++	if (status)
++		return status;
++	status = nfsd4_decode_opaque(argp, &exid->clname);
++	if (status)
++		return status;
++	if (xdr_stream_decode_u32(argp->xdr, &exid->flags) < 0)
++		return nfserr_bad_xdr;
++	status = nfsd4_decode_state_protect4_a(argp, exid);
++	if (status)
++		return status;
++	return nfsd4_decode_nfs_impl_id4(argp, exid);
+ }
+ 
+ static __be32
+-nfsd4_decode_free_stateid(struct nfsd4_compoundargs *argp,
+-			  struct nfsd4_free_stateid *free_stateid)
++nfsd4_decode_channel_attrs4(struct nfsd4_compoundargs *argp,
++			    struct nfsd4_channel_attrs *ca)
+ {
+-	DECODE_HEAD;
+-
+-	READ_BUF(sizeof(stateid_t));
+-	free_stateid->fr_stateid.si_generation = be32_to_cpup(p++);
+-	COPYMEM(&free_stateid->fr_stateid.si_opaque, sizeof(stateid_opaque_t));
+-
+-	DECODE_TAIL;
+-}
++	__be32 *p;
+ 
+-static __be32
+-nfsd4_decode_sequence(struct nfsd4_compoundargs *argp,
+-		      struct nfsd4_sequence *seq)
+-{
+-	DECODE_HEAD;
++	p = xdr_inline_decode(argp->xdr, XDR_UNIT * 7);
++	if (!p)
++		return nfserr_bad_xdr;
+ 
+-	READ_BUF(NFS4_MAX_SESSIONID_LEN + 16);
+-	COPYMEM(seq->sessionid.data, NFS4_MAX_SESSIONID_LEN);
+-	seq->seqid = be32_to_cpup(p++);
+-	seq->slotid = be32_to_cpup(p++);
+-	seq->maxslots = be32_to_cpup(p++);
+-	seq->cachethis = be32_to_cpup(p++);
++	/* headerpadsz is ignored */
++	p++;
++	ca->maxreq_sz = be32_to_cpup(p++);
++	ca->maxresp_sz = be32_to_cpup(p++);
++	ca->maxresp_cached = be32_to_cpup(p++);
++	ca->maxops = be32_to_cpup(p++);
++	ca->maxreqs = be32_to_cpup(p++);
++	ca->nr_rdma_attrs = be32_to_cpup(p);
++	switch (ca->nr_rdma_attrs) {
++	case 0:
++		break;
++	case 1:
++		if (xdr_stream_decode_u32(argp->xdr, &ca->rdma_attrs) < 0)
++			return nfserr_bad_xdr;
++		break;
++	default:
++		return nfserr_bad_xdr;
++	}
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_test_stateid(struct nfsd4_compoundargs *argp, struct nfsd4_test_stateid *test_stateid)
++nfsd4_decode_create_session(struct nfsd4_compoundargs *argp,
++			    union nfsd4_op_u *u)
+ {
+-	int i;
+-	__be32 *p, status;
+-	struct nfsd4_test_stateid_id *stateid;
+-
+-	READ_BUF(4);
+-	test_stateid->ts_num_ids = ntohl(*p++);
+-
+-	INIT_LIST_HEAD(&test_stateid->ts_stateid_list);
+-
+-	for (i = 0; i < test_stateid->ts_num_ids; i++) {
+-		stateid = svcxdr_tmpalloc(argp, sizeof(*stateid));
+-		if (!stateid) {
+-			status = nfserrno(-ENOMEM);
+-			goto out;
+-		}
+-
+-		INIT_LIST_HEAD(&stateid->ts_id_list);
+-		list_add_tail(&stateid->ts_id_list, &test_stateid->ts_stateid_list);
+-
+-		status = nfsd4_decode_stateid(argp, &stateid->ts_id_stateid);
+-		if (status)
+-			goto out;
+-	}
++	struct nfsd4_create_session *sess = &u->create_session;
++	__be32 status;
+ 
+-	status = 0;
+-out:
+-	return status;
+-xdr_error:
+-	dprintk("NFSD: xdr error (%s:%d)\n", __FILE__, __LINE__);
+-	status = nfserr_bad_xdr;
+-	goto out;
++	memset(sess, 0, sizeof(*sess));
++	status = nfsd4_decode_clientid4(argp, &sess->clientid);
++	if (status)
++		return status;
++	if (xdr_stream_decode_u32(argp->xdr, &sess->seqid) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &sess->flags) < 0)
++		return nfserr_bad_xdr;
++	status = nfsd4_decode_channel_attrs4(argp, &sess->fore_channel);
++	if (status)
++		return status;
++	status = nfsd4_decode_channel_attrs4(argp, &sess->back_channel);
++	if (status)
++		return status;
++	if (xdr_stream_decode_u32(argp->xdr, &sess->callback_prog) < 0)
++		return nfserr_bad_xdr;
++	return nfsd4_decode_cb_sec(argp, &sess->cb_sec);
+ }
+ 
+-static __be32 nfsd4_decode_destroy_clientid(struct nfsd4_compoundargs *argp, struct nfsd4_destroy_clientid *dc)
++static __be32
++nfsd4_decode_destroy_session(struct nfsd4_compoundargs *argp,
++			     union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
+-
+-	READ_BUF(8);
+-	COPYMEM(&dc->clientid, 8);
+-
+-	DECODE_TAIL;
++	struct nfsd4_destroy_session *destroy_session = &u->destroy_session;
++	return nfsd4_decode_sessionid4(argp, &destroy_session->sessionid);
+ }
+ 
+-static __be32 nfsd4_decode_reclaim_complete(struct nfsd4_compoundargs *argp, struct nfsd4_reclaim_complete *rc)
++static __be32
++nfsd4_decode_free_stateid(struct nfsd4_compoundargs *argp,
++			  union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
+-
+-	READ_BUF(4);
+-	rc->rca_one_fs = be32_to_cpup(p++);
+-
+-	DECODE_TAIL;
++	struct nfsd4_free_stateid *free_stateid = &u->free_stateid;
++	return nfsd4_decode_stateid4(argp, &free_stateid->fr_stateid);
+ }
+ 
+ #ifdef CONFIG_NFSD_PNFS
+ static __be32
+ nfsd4_decode_getdeviceinfo(struct nfsd4_compoundargs *argp,
+-		struct nfsd4_getdeviceinfo *gdev)
+-{
+-	DECODE_HEAD;
+-	u32 num, i;
+-
+-	READ_BUF(sizeof(struct nfsd4_deviceid) + 3 * 4);
+-	COPYMEM(&gdev->gd_devid, sizeof(struct nfsd4_deviceid));
+-	gdev->gd_layout_type = be32_to_cpup(p++);
+-	gdev->gd_maxcount = be32_to_cpup(p++);
+-	num = be32_to_cpup(p++);
+-	if (num) {
+-		if (num > 1000)
+-			goto xdr_error;
+-		READ_BUF(4 * num);
+-		gdev->gd_notify_types = be32_to_cpup(p++);
+-		for (i = 1; i < num; i++) {
+-			if (be32_to_cpup(p++)) {
+-				status = nfserr_inval;
+-				goto out;
+-			}
+-		}
+-	}
+-	DECODE_TAIL;
+-}
+-
+-static __be32
+-nfsd4_decode_layoutget(struct nfsd4_compoundargs *argp,
+-		struct nfsd4_layoutget *lgp)
++		union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
+-
+-	READ_BUF(36);
+-	lgp->lg_signal = be32_to_cpup(p++);
+-	lgp->lg_layout_type = be32_to_cpup(p++);
+-	lgp->lg_seg.iomode = be32_to_cpup(p++);
+-	p = xdr_decode_hyper(p, &lgp->lg_seg.offset);
+-	p = xdr_decode_hyper(p, &lgp->lg_seg.length);
+-	p = xdr_decode_hyper(p, &lgp->lg_minlength);
++	struct nfsd4_getdeviceinfo *gdev = &u->getdeviceinfo;
++	__be32 status;
+ 
+-	status = nfsd4_decode_stateid(argp, &lgp->lg_sid);
++	memset(gdev, 0, sizeof(*gdev));
++	status = nfsd4_decode_deviceid4(argp, &gdev->gd_devid);
+ 	if (status)
+ 		return status;
++	if (xdr_stream_decode_u32(argp->xdr, &gdev->gd_layout_type) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &gdev->gd_maxcount) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_uint32_array(argp->xdr,
++					   &gdev->gd_notify_types, 1) < 0)
++		return nfserr_bad_xdr;
+ 
+-	READ_BUF(4);
+-	lgp->lg_maxcount = be32_to_cpup(p++);
+-
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32
+ nfsd4_decode_layoutcommit(struct nfsd4_compoundargs *argp,
+-		struct nfsd4_layoutcommit *lcp)
++			  union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
+-	u32 timechange;
+-
+-	READ_BUF(20);
+-	p = xdr_decode_hyper(p, &lcp->lc_seg.offset);
+-	p = xdr_decode_hyper(p, &lcp->lc_seg.length);
+-	lcp->lc_reclaim = be32_to_cpup(p++);
++	struct nfsd4_layoutcommit *lcp = &u->layoutcommit;
++	__be32 *p, status;
+ 
+-	status = nfsd4_decode_stateid(argp, &lcp->lc_sid);
++	memset(lcp, 0, sizeof(*lcp));
++	if (xdr_stream_decode_u64(argp->xdr, &lcp->lc_seg.offset) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &lcp->lc_seg.length) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_bool(argp->xdr, &lcp->lc_reclaim) < 0)
++		return nfserr_bad_xdr;
++	status = nfsd4_decode_stateid4(argp, &lcp->lc_sid);
+ 	if (status)
+ 		return status;
+-
+-	READ_BUF(4);
+-	lcp->lc_newoffset = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &lcp->lc_newoffset) < 0)
++		return nfserr_bad_xdr;
+ 	if (lcp->lc_newoffset) {
+-		READ_BUF(8);
+-		p = xdr_decode_hyper(p, &lcp->lc_last_wr);
++		if (xdr_stream_decode_u64(argp->xdr, &lcp->lc_last_wr) < 0)
++			return nfserr_bad_xdr;
+ 	} else
+ 		lcp->lc_last_wr = 0;
+-	READ_BUF(4);
+-	timechange = be32_to_cpup(p++);
+-	if (timechange) {
+-		status = nfsd4_decode_time(argp, &lcp->lc_mtime);
++	p = xdr_inline_decode(argp->xdr, XDR_UNIT);
++	if (!p)
++		return nfserr_bad_xdr;
++	if (xdr_item_is_present(p)) {
++		status = nfsd4_decode_nfstime4(argp, &lcp->lc_mtime);
+ 		if (status)
+ 			return status;
+ 	} else {
+ 		lcp->lc_mtime.tv_nsec = UTIME_NOW;
+ 	}
+-	READ_BUF(8);
+-	lcp->lc_layout_type = be32_to_cpup(p++);
++	return nfsd4_decode_layoutupdate4(argp, lcp);
++}
+ 
+-	/*
+-	 * Save the layout update in XDR format and let the layout driver deal
+-	 * with it later.
+-	 */
+-	lcp->lc_up_len = be32_to_cpup(p++);
+-	if (lcp->lc_up_len > 0) {
+-		READ_BUF(lcp->lc_up_len);
+-		READMEM(lcp->lc_up_layout, lcp->lc_up_len);
+-	}
++static __be32
++nfsd4_decode_layoutget(struct nfsd4_compoundargs *argp,
++		union nfsd4_op_u *u)
++{
++	struct nfsd4_layoutget *lgp = &u->layoutget;
++	__be32 status;
+ 
+-	DECODE_TAIL;
++	memset(lgp, 0, sizeof(*lgp));
++	if (xdr_stream_decode_u32(argp->xdr, &lgp->lg_signal) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &lgp->lg_layout_type) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &lgp->lg_seg.iomode) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &lgp->lg_seg.offset) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &lgp->lg_seg.length) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &lgp->lg_minlength) < 0)
++		return nfserr_bad_xdr;
++	status = nfsd4_decode_stateid4(argp, &lgp->lg_sid);
++	if (status)
++		return status;
++	if (xdr_stream_decode_u32(argp->xdr, &lgp->lg_maxcount) < 0)
++		return nfserr_bad_xdr;
++
++	return nfs_ok;
+ }
+ 
+ static __be32
+ nfsd4_decode_layoutreturn(struct nfsd4_compoundargs *argp,
+-		struct nfsd4_layoutreturn *lrp)
++		union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_layoutreturn *lrp = &u->layoutreturn;
++	memset(lrp, 0, sizeof(*lrp));
++	if (xdr_stream_decode_bool(argp->xdr, &lrp->lr_reclaim) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &lrp->lr_layout_type) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &lrp->lr_seg.iomode) < 0)
++		return nfserr_bad_xdr;
++	return nfsd4_decode_layoutreturn4(argp, lrp);
++}
++#endif /* CONFIG_NFSD_PNFS */
+ 
+-	READ_BUF(16);
+-	lrp->lr_reclaim = be32_to_cpup(p++);
+-	lrp->lr_layout_type = be32_to_cpup(p++);
+-	lrp->lr_seg.iomode = be32_to_cpup(p++);
+-	lrp->lr_return_type = be32_to_cpup(p++);
+-	if (lrp->lr_return_type == RETURN_FILE) {
+-		READ_BUF(16);
+-		p = xdr_decode_hyper(p, &lrp->lr_seg.offset);
+-		p = xdr_decode_hyper(p, &lrp->lr_seg.length);
++static __be32 nfsd4_decode_secinfo_no_name(struct nfsd4_compoundargs *argp,
++					   union nfsd4_op_u *u)
++{
++	struct nfsd4_secinfo_no_name *sin = &u->secinfo_no_name;
++	if (xdr_stream_decode_u32(argp->xdr, &sin->sin_style) < 0)
++		return nfserr_bad_xdr;
+ 
+-		status = nfsd4_decode_stateid(argp, &lrp->lr_sid);
+-		if (status)
+-			return status;
++	sin->sin_exp = NULL;
++	return nfs_ok;
++}
+ 
+-		READ_BUF(4);
+-		lrp->lrf_body_len = be32_to_cpup(p++);
+-		if (lrp->lrf_body_len > 0) {
+-			READ_BUF(lrp->lrf_body_len);
+-			READMEM(lrp->lrf_body, lrp->lrf_body_len);
+-		}
+-	} else {
+-		lrp->lr_seg.offset = 0;
+-		lrp->lr_seg.length = NFS4_MAX_UINT64;
++static __be32
++nfsd4_decode_sequence(struct nfsd4_compoundargs *argp,
++		      union nfsd4_op_u *u)
++{
++	struct nfsd4_sequence *seq = &u->sequence;
++	__be32 *p, status;
++
++	status = nfsd4_decode_sessionid4(argp, &seq->sessionid);
++	if (status)
++		return status;
++	p = xdr_inline_decode(argp->xdr, XDR_UNIT * 4);
++	if (!p)
++		return nfserr_bad_xdr;
++	seq->seqid = be32_to_cpup(p++);
++	seq->slotid = be32_to_cpup(p++);
++	seq->maxslots = be32_to_cpup(p++);
++	seq->cachethis = be32_to_cpup(p);
++
++	seq->status_flags = 0;
++	return nfs_ok;
++}
++
++static __be32
++nfsd4_decode_test_stateid(struct nfsd4_compoundargs *argp,
++			  union nfsd4_op_u *u)
++{
++	struct nfsd4_test_stateid *test_stateid = &u->test_stateid;
++	struct nfsd4_test_stateid_id *stateid;
++	__be32 status;
++	u32 i;
++
++	memset(test_stateid, 0, sizeof(*test_stateid));
++	if (xdr_stream_decode_u32(argp->xdr, &test_stateid->ts_num_ids) < 0)
++		return nfserr_bad_xdr;
++
++	INIT_LIST_HEAD(&test_stateid->ts_stateid_list);
++	for (i = 0; i < test_stateid->ts_num_ids; i++) {
++		stateid = svcxdr_tmpalloc(argp, sizeof(*stateid));
++		if (!stateid)
++			return nfserr_jukebox;
++		INIT_LIST_HEAD(&stateid->ts_id_list);
++		list_add_tail(&stateid->ts_id_list, &test_stateid->ts_stateid_list);
++		status = nfsd4_decode_stateid4(argp, &stateid->ts_id_stateid);
++		if (status)
++			return status;
+ 	}
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+-#endif /* CONFIG_NFSD_PNFS */
+ 
+-static __be32
+-nfsd4_decode_fallocate(struct nfsd4_compoundargs *argp,
+-		       struct nfsd4_fallocate *fallocate)
++static __be32 nfsd4_decode_destroy_clientid(struct nfsd4_compoundargs *argp,
++					    union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
+-
+-	status = nfsd4_decode_stateid(argp, &fallocate->falloc_stateid);
+-	if (status)
+-		return status;
+-
+-	READ_BUF(16);
+-	p = xdr_decode_hyper(p, &fallocate->falloc_offset);
+-	xdr_decode_hyper(p, &fallocate->falloc_length);
++	struct nfsd4_destroy_clientid *dc = &u->destroy_clientid;
++	return nfsd4_decode_clientid4(argp, &dc->clientid);
++}
+ 
+-	DECODE_TAIL;
++static __be32 nfsd4_decode_reclaim_complete(struct nfsd4_compoundargs *argp,
++					    union nfsd4_op_u *u)
++{
++	struct nfsd4_reclaim_complete *rc = &u->reclaim_complete;
++	if (xdr_stream_decode_bool(argp->xdr, &rc->rca_one_fs) < 0)
++		return nfserr_bad_xdr;
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_clone(struct nfsd4_compoundargs *argp, struct nfsd4_clone *clone)
++nfsd4_decode_fallocate(struct nfsd4_compoundargs *argp,
++		       union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_fallocate *fallocate = &u->allocate;
++	__be32 status;
+ 
+-	status = nfsd4_decode_stateid(argp, &clone->cl_src_stateid);
+-	if (status)
+-		return status;
+-	status = nfsd4_decode_stateid(argp, &clone->cl_dst_stateid);
++	status = nfsd4_decode_stateid4(argp, &fallocate->falloc_stateid);
+ 	if (status)
+ 		return status;
++	if (xdr_stream_decode_u64(argp->xdr, &fallocate->falloc_offset) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &fallocate->falloc_length) < 0)
++		return nfserr_bad_xdr;
+ 
+-	READ_BUF(8 + 8 + 8);
+-	p = xdr_decode_hyper(p, &clone->cl_src_pos);
+-	p = xdr_decode_hyper(p, &clone->cl_dst_pos);
+-	p = xdr_decode_hyper(p, &clone->cl_count);
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32 nfsd4_decode_nl4_server(struct nfsd4_compoundargs *argp,
+ 				      struct nl4_server *ns)
+ {
+-	DECODE_HEAD;
+ 	struct nfs42_netaddr *naddr;
++	__be32 *p;
+ 
+-	READ_BUF(4);
+-	ns->nl4_type = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &ns->nl4_type) < 0)
++		return nfserr_bad_xdr;
+ 
+ 	/* currently support for 1 inter-server source server */
+ 	switch (ns->nl4_type) {
+ 	case NL4_NETADDR:
+ 		naddr = &ns->u.nl4_addr;
+ 
+-		READ_BUF(4);
+-		naddr->netid_len = be32_to_cpup(p++);
++		if (xdr_stream_decode_u32(argp->xdr, &naddr->netid_len) < 0)
++			return nfserr_bad_xdr;
+ 		if (naddr->netid_len > RPCBIND_MAXNETIDLEN)
+-			goto xdr_error;
++			return nfserr_bad_xdr;
+ 
+-		READ_BUF(naddr->netid_len + 4); /* 4 for uaddr len */
+-		COPYMEM(naddr->netid, naddr->netid_len);
++		p = xdr_inline_decode(argp->xdr, naddr->netid_len);
++		if (!p)
++			return nfserr_bad_xdr;
++		memcpy(naddr->netid, p, naddr->netid_len);
+ 
+-		naddr->addr_len = be32_to_cpup(p++);
++		if (xdr_stream_decode_u32(argp->xdr, &naddr->addr_len) < 0)
++			return nfserr_bad_xdr;
+ 		if (naddr->addr_len > RPCBIND_MAXUADDRLEN)
+-			goto xdr_error;
++			return nfserr_bad_xdr;
+ 
+-		READ_BUF(naddr->addr_len);
+-		COPYMEM(naddr->addr, naddr->addr_len);
++		p = xdr_inline_decode(argp->xdr, naddr->addr_len);
++		if (!p)
++			return nfserr_bad_xdr;
++		memcpy(naddr->addr, p, naddr->addr_len);
+ 		break;
+ 	default:
+-		goto xdr_error;
++		return nfserr_bad_xdr;
+ 	}
+-	DECODE_TAIL;
++
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_copy(struct nfsd4_compoundargs *argp, struct nfsd4_copy *copy)
++nfsd4_decode_copy(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_copy *copy = &u->copy;
++	u32 consecutive, i, count, sync;
+ 	struct nl4_server *ns_dummy;
+-	int i, count;
++	__be32 status;
+ 
+-	status = nfsd4_decode_stateid(argp, &copy->cp_src_stateid);
++	memset(copy, 0, sizeof(*copy));
++	status = nfsd4_decode_stateid4(argp, &copy->cp_src_stateid);
+ 	if (status)
+ 		return status;
+-	status = nfsd4_decode_stateid(argp, &copy->cp_dst_stateid);
++	status = nfsd4_decode_stateid4(argp, &copy->cp_dst_stateid);
+ 	if (status)
+ 		return status;
++	if (xdr_stream_decode_u64(argp->xdr, &copy->cp_src_pos) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &copy->cp_dst_pos) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &copy->cp_count) < 0)
++		return nfserr_bad_xdr;
++	/* ca_consecutive: we always do consecutive copies */
++	if (xdr_stream_decode_u32(argp->xdr, &consecutive) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_bool(argp->xdr, &sync) < 0)
++		return nfserr_bad_xdr;
++	nfsd4_copy_set_sync(copy, sync);
+ 
+-	READ_BUF(8 + 8 + 8 + 4 + 4 + 4);
+-	p = xdr_decode_hyper(p, &copy->cp_src_pos);
+-	p = xdr_decode_hyper(p, &copy->cp_dst_pos);
+-	p = xdr_decode_hyper(p, &copy->cp_count);
+-	p++; /* ca_consecutive: we always do consecutive copies */
+-	copy->cp_synchronous = be32_to_cpup(p++);
+-
+-	count = be32_to_cpup(p++);
+-
+-	copy->cp_intra = false;
++	if (xdr_stream_decode_u32(argp->xdr, &count) < 0)
++		return nfserr_bad_xdr;
++	copy->cp_src = svcxdr_tmpalloc(argp, sizeof(*copy->cp_src));
++	if (copy->cp_src == NULL)
++		return nfserr_jukebox;
+ 	if (count == 0) { /* intra-server copy */
+-		copy->cp_intra = true;
+-		goto intra;
++		__set_bit(NFSD4_COPY_F_INTRA, &copy->cp_flags);
++		return nfs_ok;
+ 	}
+ 
+-	/* decode all the supplied server addresses but use first */
+-	status = nfsd4_decode_nl4_server(argp, &copy->cp_src);
++	/* decode all the supplied server addresses but use only the first */
++	status = nfsd4_decode_nl4_server(argp, copy->cp_src);
+ 	if (status)
+ 		return status;
+ 
+ 	ns_dummy = kmalloc(sizeof(struct nl4_server), GFP_KERNEL);
+ 	if (ns_dummy == NULL)
+-		return nfserrno(-ENOMEM);
++		return nfserr_jukebox;
+ 	for (i = 0; i < count - 1; i++) {
+ 		status = nfsd4_decode_nl4_server(argp, ns_dummy);
+ 		if (status) {
+@@ -1839,44 +2027,80 @@ nfsd4_decode_copy(struct nfsd4_compoundargs *argp, struct nfsd4_copy *copy)
+ 		}
+ 	}
+ 	kfree(ns_dummy);
+-intra:
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
++}
++
++static __be32
++nfsd4_decode_copy_notify(struct nfsd4_compoundargs *argp,
++			 union nfsd4_op_u *u)
++{
++	struct nfsd4_copy_notify *cn = &u->copy_notify;
++	__be32 status;
++
++	memset(cn, 0, sizeof(*cn));
++	cn->cpn_src = svcxdr_tmpalloc(argp, sizeof(*cn->cpn_src));
++	if (cn->cpn_src == NULL)
++		return nfserr_jukebox;
++	cn->cpn_dst = svcxdr_tmpalloc(argp, sizeof(*cn->cpn_dst));
++	if (cn->cpn_dst == NULL)
++		return nfserr_jukebox;
++
++	status = nfsd4_decode_stateid4(argp, &cn->cpn_src_stateid);
++	if (status)
++		return status;
++	return nfsd4_decode_nl4_server(argp, cn->cpn_dst);
+ }
+ 
+ static __be32
+ nfsd4_decode_offload_status(struct nfsd4_compoundargs *argp,
+-			    struct nfsd4_offload_status *os)
++			    union nfsd4_op_u *u)
+ {
+-	return nfsd4_decode_stateid(argp, &os->stateid);
++	struct nfsd4_offload_status *os = &u->offload_status;
++	os->count = 0;
++	os->status = 0;
++	return nfsd4_decode_stateid4(argp, &os->stateid);
+ }
+ 
+ static __be32
+-nfsd4_decode_copy_notify(struct nfsd4_compoundargs *argp,
+-			 struct nfsd4_copy_notify *cn)
++nfsd4_decode_seek(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
++	struct nfsd4_seek *seek = &u->seek;
+ 	__be32 status;
+ 
+-	status = nfsd4_decode_stateid(argp, &cn->cpn_src_stateid);
++	status = nfsd4_decode_stateid4(argp, &seek->seek_stateid);
+ 	if (status)
+ 		return status;
+-	return nfsd4_decode_nl4_server(argp, &cn->cpn_dst);
++	if (xdr_stream_decode_u64(argp->xdr, &seek->seek_offset) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u32(argp->xdr, &seek->seek_whence) < 0)
++		return nfserr_bad_xdr;
++
++	seek->seek_eof = 0;
++	seek->seek_pos = 0;
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_seek(struct nfsd4_compoundargs *argp, struct nfsd4_seek *seek)
++nfsd4_decode_clone(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_clone *clone = &u->clone;
++	__be32 status;
+ 
+-	status = nfsd4_decode_stateid(argp, &seek->seek_stateid);
++	status = nfsd4_decode_stateid4(argp, &clone->cl_src_stateid);
+ 	if (status)
+ 		return status;
++	status = nfsd4_decode_stateid4(argp, &clone->cl_dst_stateid);
++	if (status)
++		return status;
++	if (xdr_stream_decode_u64(argp->xdr, &clone->cl_src_pos) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &clone->cl_dst_pos) < 0)
++		return nfserr_bad_xdr;
++	if (xdr_stream_decode_u64(argp->xdr, &clone->cl_count) < 0)
++		return nfserr_bad_xdr;
+ 
+-	READ_BUF(8 + 4);
+-	p = xdr_decode_hyper(p, &seek->seek_offset);
+-	seek->seek_whence = be32_to_cpup(p);
+-
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ /*
+@@ -1889,13 +2113,14 @@ nfsd4_decode_seek(struct nfsd4_compoundargs *argp, struct nfsd4_seek *seek)
+  */
+ 
+ /*
+- * Decode data into buffer. Uses head and pages constructed by
+- * svcxdr_construct_vector.
++ * Decode data into buffer.
+  */
+ static __be32
+-nfsd4_vbuf_from_vector(struct nfsd4_compoundargs *argp, struct kvec *head,
+-		       struct page **pages, char **bufp, u32 buflen)
++nfsd4_vbuf_from_vector(struct nfsd4_compoundargs *argp, struct xdr_buf *xdr,
++		       char **bufp, u32 buflen)
+ {
++	struct page **pages = xdr->pages;
++	struct kvec *head = xdr->head;
+ 	char *tmp, *dp;
+ 	u32 len;
+ 
+@@ -1938,25 +2163,22 @@ nfsd4_vbuf_from_vector(struct nfsd4_compoundargs *argp, struct kvec *head,
+ static __be32
+ nfsd4_decode_xattr_name(struct nfsd4_compoundargs *argp, char **namep)
+ {
+-	DECODE_HEAD;
+ 	char *name, *sp, *dp;
+ 	u32 namelen, cnt;
++	__be32 *p;
+ 
+-	READ_BUF(4);
+-	namelen = be32_to_cpup(p++);
+-
++	if (xdr_stream_decode_u32(argp->xdr, &namelen) < 0)
++		return nfserr_bad_xdr;
+ 	if (namelen > (XATTR_NAME_MAX - XATTR_USER_PREFIX_LEN))
+ 		return nfserr_nametoolong;
+-
+ 	if (namelen == 0)
+-		goto xdr_error;
+-
+-	READ_BUF(namelen);
+-
++		return nfserr_bad_xdr;
++	p = xdr_inline_decode(argp->xdr, namelen);
++	if (!p)
++		return nfserr_bad_xdr;
+ 	name = svcxdr_tmpalloc(argp, namelen + XATTR_USER_PREFIX_LEN + 1);
+ 	if (!name)
+ 		return nfserr_jukebox;
+-
+ 	memcpy(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN);
+ 
+ 	/*
+@@ -1969,14 +2191,14 @@ nfsd4_decode_xattr_name(struct nfsd4_compoundargs *argp, char **namep)
+ 
+ 	while (cnt-- > 0) {
+ 		if (*sp == '\0')
+-			goto xdr_error;
++			return nfserr_bad_xdr;
+ 		*dp++ = *sp++;
+ 	}
+ 	*dp = '\0';
+ 
+ 	*namep = name;
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ /*
+@@ -1987,11 +2209,13 @@ nfsd4_decode_xattr_name(struct nfsd4_compoundargs *argp, char **namep)
+  */
+ static __be32
+ nfsd4_decode_getxattr(struct nfsd4_compoundargs *argp,
+-		      struct nfsd4_getxattr *getxattr)
++		      union nfsd4_op_u *u)
+ {
++	struct nfsd4_getxattr *getxattr = &u->getxattr;
+ 	__be32 status;
+ 	u32 maxcount;
+ 
++	memset(getxattr, 0, sizeof(*getxattr));
+ 	status = nfsd4_decode_xattr_name(argp, &getxattr->getxa_name);
+ 	if (status)
+ 		return status;
+@@ -2000,21 +2224,21 @@ nfsd4_decode_getxattr(struct nfsd4_compoundargs *argp,
+ 	maxcount = min_t(u32, XATTR_SIZE_MAX, maxcount);
+ 
+ 	getxattr->getxa_len = maxcount;
+-
+-	return status;
++	return nfs_ok;
+ }
+ 
+ static __be32
+ nfsd4_decode_setxattr(struct nfsd4_compoundargs *argp,
+-		      struct nfsd4_setxattr *setxattr)
++		      union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_setxattr *setxattr = &u->setxattr;
+ 	u32 flags, maxcount, size;
+-	struct kvec head;
+-	struct page **pagelist;
++	__be32 status;
++
++	memset(setxattr, 0, sizeof(*setxattr));
+ 
+-	READ_BUF(4);
+-	flags = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &flags) < 0)
++		return nfserr_bad_xdr;
+ 
+ 	if (flags > SETXATTR4_REPLACE)
+ 		return nfserr_inval;
+@@ -2027,33 +2251,35 @@ nfsd4_decode_setxattr(struct nfsd4_compoundargs *argp,
+ 	maxcount = svc_max_payload(argp->rqstp);
+ 	maxcount = min_t(u32, XATTR_SIZE_MAX, maxcount);
+ 
+-	READ_BUF(4);
+-	size = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &size) < 0)
++		return nfserr_bad_xdr;
+ 	if (size > maxcount)
+ 		return nfserr_xattr2big;
+ 
+ 	setxattr->setxa_len = size;
+ 	if (size > 0) {
+-		status = svcxdr_construct_vector(argp, &head, &pagelist, size);
+-		if (status)
+-			return status;
++		struct xdr_buf payload;
+ 
+-		status = nfsd4_vbuf_from_vector(argp, &head, pagelist,
+-		    &setxattr->setxa_buf, size);
++		if (!xdr_stream_subsegment(argp->xdr, &payload, size))
++			return nfserr_bad_xdr;
++		status = nfsd4_vbuf_from_vector(argp, &payload,
++						&setxattr->setxa_buf, size);
+ 	}
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32
+ nfsd4_decode_listxattrs(struct nfsd4_compoundargs *argp,
+-			struct nfsd4_listxattrs *listxattrs)
++			union nfsd4_op_u *u)
+ {
+-	DECODE_HEAD;
++	struct nfsd4_listxattrs *listxattrs = &u->listxattrs;
+ 	u32 maxcount;
+ 
+-	READ_BUF(12);
+-	p = xdr_decode_hyper(p, &listxattrs->lsxa_cookie);
++	memset(listxattrs, 0, sizeof(*listxattrs));
++
++	if (xdr_stream_decode_u64(argp->xdr, &listxattrs->lsxa_cookie) < 0)
++		return nfserr_bad_xdr;
+ 
+ 	/*
+ 	 * If the cookie  is too large to have even one user.x attribute
+@@ -2063,7 +2289,8 @@ nfsd4_decode_listxattrs(struct nfsd4_compoundargs *argp,
+ 	    (XATTR_LIST_MAX / (XATTR_USER_PREFIX_LEN + 2)))
+ 		return nfserr_badcookie;
+ 
+-	maxcount = be32_to_cpup(p++);
++	if (xdr_stream_decode_u32(argp->xdr, &maxcount) < 0)
++		return nfserr_bad_xdr;
+ 	if (maxcount < 8)
+ 		/* Always need at least 2 words (length and one character) */
+ 		return nfserr_inval;
+@@ -2071,117 +2298,119 @@ nfsd4_decode_listxattrs(struct nfsd4_compoundargs *argp,
+ 	maxcount = min(maxcount, svc_max_payload(argp->rqstp));
+ 	listxattrs->lsxa_maxcount = maxcount;
+ 
+-	DECODE_TAIL;
++	return nfs_ok;
+ }
+ 
+ static __be32
+ nfsd4_decode_removexattr(struct nfsd4_compoundargs *argp,
+-			 struct nfsd4_removexattr *removexattr)
++			 union nfsd4_op_u *u)
+ {
++	struct nfsd4_removexattr *removexattr = &u->removexattr;
++	memset(removexattr, 0, sizeof(*removexattr));
+ 	return nfsd4_decode_xattr_name(argp, &removexattr->rmxa_name);
+ }
+ 
+ static __be32
+-nfsd4_decode_noop(struct nfsd4_compoundargs *argp, void *p)
++nfsd4_decode_noop(struct nfsd4_compoundargs *argp, union nfsd4_op_u *p)
+ {
+ 	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_decode_notsupp(struct nfsd4_compoundargs *argp, void *p)
++nfsd4_decode_notsupp(struct nfsd4_compoundargs *argp, union nfsd4_op_u *p)
+ {
+ 	return nfserr_notsupp;
+ }
+ 
+-typedef __be32(*nfsd4_dec)(struct nfsd4_compoundargs *argp, void *);
++typedef __be32(*nfsd4_dec)(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u);
+ 
+ static const nfsd4_dec nfsd4_dec_ops[] = {
+-	[OP_ACCESS]		= (nfsd4_dec)nfsd4_decode_access,
+-	[OP_CLOSE]		= (nfsd4_dec)nfsd4_decode_close,
+-	[OP_COMMIT]		= (nfsd4_dec)nfsd4_decode_commit,
+-	[OP_CREATE]		= (nfsd4_dec)nfsd4_decode_create,
+-	[OP_DELEGPURGE]		= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_DELEGRETURN]	= (nfsd4_dec)nfsd4_decode_delegreturn,
+-	[OP_GETATTR]		= (nfsd4_dec)nfsd4_decode_getattr,
+-	[OP_GETFH]		= (nfsd4_dec)nfsd4_decode_noop,
+-	[OP_LINK]		= (nfsd4_dec)nfsd4_decode_link,
+-	[OP_LOCK]		= (nfsd4_dec)nfsd4_decode_lock,
+-	[OP_LOCKT]		= (nfsd4_dec)nfsd4_decode_lockt,
+-	[OP_LOCKU]		= (nfsd4_dec)nfsd4_decode_locku,
+-	[OP_LOOKUP]		= (nfsd4_dec)nfsd4_decode_lookup,
+-	[OP_LOOKUPP]		= (nfsd4_dec)nfsd4_decode_noop,
+-	[OP_NVERIFY]		= (nfsd4_dec)nfsd4_decode_verify,
+-	[OP_OPEN]		= (nfsd4_dec)nfsd4_decode_open,
+-	[OP_OPENATTR]		= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_OPEN_CONFIRM]	= (nfsd4_dec)nfsd4_decode_open_confirm,
+-	[OP_OPEN_DOWNGRADE]	= (nfsd4_dec)nfsd4_decode_open_downgrade,
+-	[OP_PUTFH]		= (nfsd4_dec)nfsd4_decode_putfh,
+-	[OP_PUTPUBFH]		= (nfsd4_dec)nfsd4_decode_putpubfh,
+-	[OP_PUTROOTFH]		= (nfsd4_dec)nfsd4_decode_noop,
+-	[OP_READ]		= (nfsd4_dec)nfsd4_decode_read,
+-	[OP_READDIR]		= (nfsd4_dec)nfsd4_decode_readdir,
+-	[OP_READLINK]		= (nfsd4_dec)nfsd4_decode_noop,
+-	[OP_REMOVE]		= (nfsd4_dec)nfsd4_decode_remove,
+-	[OP_RENAME]		= (nfsd4_dec)nfsd4_decode_rename,
+-	[OP_RENEW]		= (nfsd4_dec)nfsd4_decode_renew,
+-	[OP_RESTOREFH]		= (nfsd4_dec)nfsd4_decode_noop,
+-	[OP_SAVEFH]		= (nfsd4_dec)nfsd4_decode_noop,
+-	[OP_SECINFO]		= (nfsd4_dec)nfsd4_decode_secinfo,
+-	[OP_SETATTR]		= (nfsd4_dec)nfsd4_decode_setattr,
+-	[OP_SETCLIENTID]	= (nfsd4_dec)nfsd4_decode_setclientid,
+-	[OP_SETCLIENTID_CONFIRM] = (nfsd4_dec)nfsd4_decode_setclientid_confirm,
+-	[OP_VERIFY]		= (nfsd4_dec)nfsd4_decode_verify,
+-	[OP_WRITE]		= (nfsd4_dec)nfsd4_decode_write,
+-	[OP_RELEASE_LOCKOWNER]	= (nfsd4_dec)nfsd4_decode_release_lockowner,
++	[OP_ACCESS]		= nfsd4_decode_access,
++	[OP_CLOSE]		= nfsd4_decode_close,
++	[OP_COMMIT]		= nfsd4_decode_commit,
++	[OP_CREATE]		= nfsd4_decode_create,
++	[OP_DELEGPURGE]		= nfsd4_decode_notsupp,
++	[OP_DELEGRETURN]	= nfsd4_decode_delegreturn,
++	[OP_GETATTR]		= nfsd4_decode_getattr,
++	[OP_GETFH]		= nfsd4_decode_noop,
++	[OP_LINK]		= nfsd4_decode_link,
++	[OP_LOCK]		= nfsd4_decode_lock,
++	[OP_LOCKT]		= nfsd4_decode_lockt,
++	[OP_LOCKU]		= nfsd4_decode_locku,
++	[OP_LOOKUP]		= nfsd4_decode_lookup,
++	[OP_LOOKUPP]		= nfsd4_decode_noop,
++	[OP_NVERIFY]		= nfsd4_decode_verify,
++	[OP_OPEN]		= nfsd4_decode_open,
++	[OP_OPENATTR]		= nfsd4_decode_notsupp,
++	[OP_OPEN_CONFIRM]	= nfsd4_decode_open_confirm,
++	[OP_OPEN_DOWNGRADE]	= nfsd4_decode_open_downgrade,
++	[OP_PUTFH]		= nfsd4_decode_putfh,
++	[OP_PUTPUBFH]		= nfsd4_decode_putpubfh,
++	[OP_PUTROOTFH]		= nfsd4_decode_noop,
++	[OP_READ]		= nfsd4_decode_read,
++	[OP_READDIR]		= nfsd4_decode_readdir,
++	[OP_READLINK]		= nfsd4_decode_noop,
++	[OP_REMOVE]		= nfsd4_decode_remove,
++	[OP_RENAME]		= nfsd4_decode_rename,
++	[OP_RENEW]		= nfsd4_decode_renew,
++	[OP_RESTOREFH]		= nfsd4_decode_noop,
++	[OP_SAVEFH]		= nfsd4_decode_noop,
++	[OP_SECINFO]		= nfsd4_decode_secinfo,
++	[OP_SETATTR]		= nfsd4_decode_setattr,
++	[OP_SETCLIENTID]	= nfsd4_decode_setclientid,
++	[OP_SETCLIENTID_CONFIRM] = nfsd4_decode_setclientid_confirm,
++	[OP_VERIFY]		= nfsd4_decode_verify,
++	[OP_WRITE]		= nfsd4_decode_write,
++	[OP_RELEASE_LOCKOWNER]	= nfsd4_decode_release_lockowner,
+ 
+ 	/* new operations for NFSv4.1 */
+-	[OP_BACKCHANNEL_CTL]	= (nfsd4_dec)nfsd4_decode_backchannel_ctl,
+-	[OP_BIND_CONN_TO_SESSION]= (nfsd4_dec)nfsd4_decode_bind_conn_to_session,
+-	[OP_EXCHANGE_ID]	= (nfsd4_dec)nfsd4_decode_exchange_id,
+-	[OP_CREATE_SESSION]	= (nfsd4_dec)nfsd4_decode_create_session,
+-	[OP_DESTROY_SESSION]	= (nfsd4_dec)nfsd4_decode_destroy_session,
+-	[OP_FREE_STATEID]	= (nfsd4_dec)nfsd4_decode_free_stateid,
+-	[OP_GET_DIR_DELEGATION]	= (nfsd4_dec)nfsd4_decode_notsupp,
++	[OP_BACKCHANNEL_CTL]	= nfsd4_decode_backchannel_ctl,
++	[OP_BIND_CONN_TO_SESSION] = nfsd4_decode_bind_conn_to_session,
++	[OP_EXCHANGE_ID]	= nfsd4_decode_exchange_id,
++	[OP_CREATE_SESSION]	= nfsd4_decode_create_session,
++	[OP_DESTROY_SESSION]	= nfsd4_decode_destroy_session,
++	[OP_FREE_STATEID]	= nfsd4_decode_free_stateid,
++	[OP_GET_DIR_DELEGATION]	= nfsd4_decode_notsupp,
+ #ifdef CONFIG_NFSD_PNFS
+-	[OP_GETDEVICEINFO]	= (nfsd4_dec)nfsd4_decode_getdeviceinfo,
+-	[OP_GETDEVICELIST]	= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_LAYOUTCOMMIT]	= (nfsd4_dec)nfsd4_decode_layoutcommit,
+-	[OP_LAYOUTGET]		= (nfsd4_dec)nfsd4_decode_layoutget,
+-	[OP_LAYOUTRETURN]	= (nfsd4_dec)nfsd4_decode_layoutreturn,
++	[OP_GETDEVICEINFO]	= nfsd4_decode_getdeviceinfo,
++	[OP_GETDEVICELIST]	= nfsd4_decode_notsupp,
++	[OP_LAYOUTCOMMIT]	= nfsd4_decode_layoutcommit,
++	[OP_LAYOUTGET]		= nfsd4_decode_layoutget,
++	[OP_LAYOUTRETURN]	= nfsd4_decode_layoutreturn,
+ #else
+-	[OP_GETDEVICEINFO]	= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_GETDEVICELIST]	= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_LAYOUTCOMMIT]	= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_LAYOUTGET]		= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_LAYOUTRETURN]	= (nfsd4_dec)nfsd4_decode_notsupp,
++	[OP_GETDEVICEINFO]	= nfsd4_decode_notsupp,
++	[OP_GETDEVICELIST]	= nfsd4_decode_notsupp,
++	[OP_LAYOUTCOMMIT]	= nfsd4_decode_notsupp,
++	[OP_LAYOUTGET]		= nfsd4_decode_notsupp,
++	[OP_LAYOUTRETURN]	= nfsd4_decode_notsupp,
+ #endif
+-	[OP_SECINFO_NO_NAME]	= (nfsd4_dec)nfsd4_decode_secinfo_no_name,
+-	[OP_SEQUENCE]		= (nfsd4_dec)nfsd4_decode_sequence,
+-	[OP_SET_SSV]		= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_TEST_STATEID]	= (nfsd4_dec)nfsd4_decode_test_stateid,
+-	[OP_WANT_DELEGATION]	= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_DESTROY_CLIENTID]	= (nfsd4_dec)nfsd4_decode_destroy_clientid,
+-	[OP_RECLAIM_COMPLETE]	= (nfsd4_dec)nfsd4_decode_reclaim_complete,
++	[OP_SECINFO_NO_NAME]	= nfsd4_decode_secinfo_no_name,
++	[OP_SEQUENCE]		= nfsd4_decode_sequence,
++	[OP_SET_SSV]		= nfsd4_decode_notsupp,
++	[OP_TEST_STATEID]	= nfsd4_decode_test_stateid,
++	[OP_WANT_DELEGATION]	= nfsd4_decode_notsupp,
++	[OP_DESTROY_CLIENTID]	= nfsd4_decode_destroy_clientid,
++	[OP_RECLAIM_COMPLETE]	= nfsd4_decode_reclaim_complete,
+ 
+ 	/* new operations for NFSv4.2 */
+-	[OP_ALLOCATE]		= (nfsd4_dec)nfsd4_decode_fallocate,
+-	[OP_COPY]		= (nfsd4_dec)nfsd4_decode_copy,
+-	[OP_COPY_NOTIFY]	= (nfsd4_dec)nfsd4_decode_copy_notify,
+-	[OP_DEALLOCATE]		= (nfsd4_dec)nfsd4_decode_fallocate,
+-	[OP_IO_ADVISE]		= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_LAYOUTERROR]	= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_LAYOUTSTATS]	= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_OFFLOAD_CANCEL]	= (nfsd4_dec)nfsd4_decode_offload_status,
+-	[OP_OFFLOAD_STATUS]	= (nfsd4_dec)nfsd4_decode_offload_status,
+-	[OP_READ_PLUS]		= (nfsd4_dec)nfsd4_decode_read,
+-	[OP_SEEK]		= (nfsd4_dec)nfsd4_decode_seek,
+-	[OP_WRITE_SAME]		= (nfsd4_dec)nfsd4_decode_notsupp,
+-	[OP_CLONE]		= (nfsd4_dec)nfsd4_decode_clone,
++	[OP_ALLOCATE]		= nfsd4_decode_fallocate,
++	[OP_COPY]		= nfsd4_decode_copy,
++	[OP_COPY_NOTIFY]	= nfsd4_decode_copy_notify,
++	[OP_DEALLOCATE]		= nfsd4_decode_fallocate,
++	[OP_IO_ADVISE]		= nfsd4_decode_notsupp,
++	[OP_LAYOUTERROR]	= nfsd4_decode_notsupp,
++	[OP_LAYOUTSTATS]	= nfsd4_decode_notsupp,
++	[OP_OFFLOAD_CANCEL]	= nfsd4_decode_offload_status,
++	[OP_OFFLOAD_STATUS]	= nfsd4_decode_offload_status,
++	[OP_READ_PLUS]		= nfsd4_decode_read,
++	[OP_SEEK]		= nfsd4_decode_seek,
++	[OP_WRITE_SAME]		= nfsd4_decode_notsupp,
++	[OP_CLONE]		= nfsd4_decode_clone,
+ 	/* RFC 8276 extended atributes operations */
+-	[OP_GETXATTR]		= (nfsd4_dec)nfsd4_decode_getxattr,
+-	[OP_SETXATTR]		= (nfsd4_dec)nfsd4_decode_setxattr,
+-	[OP_LISTXATTRS]		= (nfsd4_dec)nfsd4_decode_listxattrs,
+-	[OP_REMOVEXATTR]	= (nfsd4_dec)nfsd4_decode_removexattr,
++	[OP_GETXATTR]		= nfsd4_decode_getxattr,
++	[OP_SETXATTR]		= nfsd4_decode_setxattr,
++	[OP_LISTXATTRS]		= nfsd4_decode_listxattrs,
++	[OP_REMOVEXATTR]	= nfsd4_decode_removexattr,
+ };
+ 
+ static inline bool
+@@ -2198,43 +2427,46 @@ nfsd4_opnum_in_range(struct nfsd4_compoundargs *argp, struct nfsd4_op *op)
+ 	return true;
+ }
+ 
+-static __be32
++static bool
+ nfsd4_decode_compound(struct nfsd4_compoundargs *argp)
+ {
+-	DECODE_HEAD;
+ 	struct nfsd4_op *op;
+ 	bool cachethis = false;
+ 	int auth_slack= argp->rqstp->rq_auth_slack;
+ 	int max_reply = auth_slack + 8; /* opcnt, status */
+ 	int readcount = 0;
+ 	int readbytes = 0;
++	__be32 *p;
+ 	int i;
+ 
+-	READ_BUF(4);
+-	argp->taglen = be32_to_cpup(p++);
+-	READ_BUF(argp->taglen);
+-	SAVEMEM(argp->tag, argp->taglen);
+-	READ_BUF(8);
+-	argp->minorversion = be32_to_cpup(p++);
+-	argp->opcnt = be32_to_cpup(p++);
+-	max_reply += 4 + (XDR_QUADLEN(argp->taglen) << 2);
+-
+-	if (argp->taglen > NFSD4_MAX_TAGLEN)
+-		goto xdr_error;
+-	/*
+-	 * NFS4ERR_RESOURCE is a more helpful error than GARBAGE_ARGS
+-	 * here, so we return success at the xdr level so that
+-	 * nfsd4_proc can handle this is an NFS-level error.
+-	 */
+-	if (argp->opcnt > NFSD_MAX_OPS_PER_COMPOUND)
+-		return 0;
++	if (xdr_stream_decode_u32(argp->xdr, &argp->taglen) < 0)
++		return false;
++	max_reply += XDR_UNIT;
++	argp->tag = NULL;
++	if (unlikely(argp->taglen)) {
++		if (argp->taglen > NFSD4_MAX_TAGLEN)
++			return false;
++		p = xdr_inline_decode(argp->xdr, argp->taglen);
++		if (!p)
++			return false;
++		argp->tag = svcxdr_savemem(argp, p, argp->taglen);
++		if (!argp->tag)
++			return false;
++		max_reply += xdr_align_size(argp->taglen);
++	}
++
++	if (xdr_stream_decode_u32(argp->xdr, &argp->minorversion) < 0)
++		return false;
++	if (xdr_stream_decode_u32(argp->xdr, &argp->client_opcnt) < 0)
++		return false;
++	argp->opcnt = min_t(u32, argp->client_opcnt,
++			    NFSD_MAX_OPS_PER_COMPOUND);
+ 
+ 	if (argp->opcnt > ARRAY_SIZE(argp->iops)) {
+-		argp->ops = kzalloc(argp->opcnt * sizeof(*argp->ops), GFP_KERNEL);
++		argp->ops = vcalloc(argp->opcnt, sizeof(*argp->ops));
+ 		if (!argp->ops) {
+ 			argp->ops = argp->iops;
+-			dprintk("nfsd: couldn't allocate room for COMPOUND\n");
+-			goto xdr_error;
++			return false;
+ 		}
+ 	}
+ 
+@@ -2244,17 +2476,23 @@ nfsd4_decode_compound(struct nfsd4_compoundargs *argp)
+ 	for (i = 0; i < argp->opcnt; i++) {
+ 		op = &argp->ops[i];
+ 		op->replay = NULL;
++		op->opdesc = NULL;
+ 
+-		READ_BUF(4);
+-		op->opnum = be32_to_cpup(p++);
+-
+-		if (nfsd4_opnum_in_range(argp, op))
++		if (xdr_stream_decode_u32(argp->xdr, &op->opnum) < 0)
++			return false;
++		if (nfsd4_opnum_in_range(argp, op)) {
++			op->opdesc = OPDESC(op);
+ 			op->status = nfsd4_dec_ops[op->opnum](argp, &op->u);
+-		else {
++			if (op->status != nfs_ok)
++				trace_nfsd_compound_decode_err(argp->rqstp,
++							       argp->opcnt, i,
++							       op->opnum,
++							       op->status);
++		} else {
+ 			op->opnum = OP_ILLEGAL;
+ 			op->status = nfserr_op_illegal;
+ 		}
+-		op->opdesc = OPDESC(op);
++
+ 		/*
+ 		 * We'll try to cache the result in the DRC if any one
+ 		 * op in the compound wants to be cached:
+@@ -2289,7 +2527,7 @@ nfsd4_decode_compound(struct nfsd4_compoundargs *argp)
+ 	if (readcount > 1 || max_reply > PAGE_SIZE - auth_slack)
+ 		clear_bit(RQ_SPLICE_OK, &argp->rqstp->rq_flags);
+ 
+-	DECODE_TAIL;
++	return true;
+ }
+ 
+ static __be32 *encode_change(__be32 *p, struct kstat *stat, struct inode *inode,
+@@ -2298,15 +2536,25 @@ static __be32 *encode_change(__be32 *p, struct kstat *stat, struct inode *inode,
+ 	if (exp->ex_flags & NFSEXP_V4ROOT) {
+ 		*p++ = cpu_to_be32(convert_to_wallclock(exp->cd->flush_time));
+ 		*p++ = 0;
+-	} else if (IS_I_VERSION(inode)) {
++	} else
+ 		p = xdr_encode_hyper(p, nfsd4_change_attribute(stat, inode));
+-	} else {
+-		*p++ = cpu_to_be32(stat->ctime.tv_sec);
+-		*p++ = cpu_to_be32(stat->ctime.tv_nsec);
+-	}
+ 	return p;
+ }
+ 
++static __be32 nfsd4_encode_nfstime4(struct xdr_stream *xdr,
++				    struct timespec64 *tv)
++{
++	__be32 *p;
++
++	p = xdr_reserve_space(xdr, XDR_UNIT * 3);
++	if (!p)
++		return nfserr_resource;
++
++	p = xdr_encode_hyper(p, (s64)tv->tv_sec);
++	*p = cpu_to_be32(tv->tv_nsec);
++	return nfs_ok;
++}
++
+ /*
+  * ctime (in NFSv4, time_metadata) is not writeable, and the client
+  * doesn't really care what resolution could theoretically be stored by
+@@ -2335,15 +2583,8 @@ static __be32 *encode_time_delta(__be32 *p, struct inode *inode)
+ static __be32 *encode_cinfo(__be32 *p, struct nfsd4_change_info *c)
+ {
+ 	*p++ = cpu_to_be32(c->atomic);
+-	if (c->change_supported) {
+-		p = xdr_encode_hyper(p, c->before_change);
+-		p = xdr_encode_hyper(p, c->after_change);
+-	} else {
+-		*p++ = cpu_to_be32(c->before_ctime_sec);
+-		*p++ = cpu_to_be32(c->before_ctime_nsec);
+-		*p++ = cpu_to_be32(c->after_ctime_sec);
+-		*p++ = cpu_to_be32(c->after_ctime_nsec);
+-	}
++	p = xdr_encode_hyper(p, c->before_change);
++	p = xdr_encode_hyper(p, c->after_change);
+ 	return p;
+ }
+ 
+@@ -2558,7 +2799,7 @@ static u32 nfs4_file_type(umode_t mode)
+ 	case S_IFREG:	return NF4REG;
+ 	case S_IFSOCK:	return NF4SOCK;
+ 	default:	return NF4BAD;
+-	};
++	}
+ }
+ 
+ static inline __be32
+@@ -2642,9 +2883,10 @@ static __be32 fattr_handle_absent_fs(u32 *bmval0, u32 *bmval1, u32 *bmval2, u32
+ }
+ 
+ 
+-static int get_parent_attributes(struct svc_export *exp, struct kstat *stat)
++static int nfsd4_get_mounted_on_ino(struct svc_export *exp, u64 *pino)
+ {
+ 	struct path path = exp->ex_path;
++	struct kstat stat;
+ 	int err;
+ 
+ 	path_get(&path);
+@@ -2652,8 +2894,10 @@ static int get_parent_attributes(struct svc_export *exp, struct kstat *stat)
+ 		if (path.dentry != path.mnt->mnt_root)
+ 			break;
+ 	}
+-	err = vfs_getattr(&path, stat, STATX_BASIC_STATS, AT_STATX_SYNC_AS_STAT);
++	err = vfs_getattr(&path, &stat, STATX_INO, AT_STATX_SYNC_AS_STAT);
+ 	path_put(&path);
++	if (!err)
++		*pino = stat.ino;
+ 	return err;
+ }
+ 
+@@ -2706,10 +2950,9 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
+ 	struct kstat stat;
+ 	struct svc_fh *tempfh = NULL;
+ 	struct kstatfs statfs;
+-	__be32 *p;
++	__be32 *p, *attrlen_p;
+ 	int starting_len = xdr->buf->len;
+ 	int attrlen_offset;
+-	__be32 attrlen;
+ 	u32 dummy;
+ 	u64 dummy64;
+ 	u32 rdattr_err = 0;
+@@ -2741,6 +2984,9 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
+ 	err = vfs_getattr(&path, &stat, STATX_BASIC_STATS, AT_STATX_SYNC_AS_STAT);
+ 	if (err)
+ 		goto out_nfserr;
++	if (!(stat.result_mask & STATX_BTIME))
++		/* underlying FS does not offer btime so we can't share it */
++		bmval1 &= ~FATTR4_WORD1_TIME_CREATE;
+ 	if ((bmval0 & (FATTR4_WORD0_FILES_AVAIL | FATTR4_WORD0_FILES_FREE |
+ 			FATTR4_WORD0_FILES_TOTAL | FATTR4_WORD0_MAXNAME)) ||
+ 	    (bmval1 & (FATTR4_WORD1_SPACE_AVAIL | FATTR4_WORD1_SPACE_FREE |
+@@ -2794,10 +3040,9 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
+ 		goto out;
+ 
+ 	attrlen_offset = xdr->buf->len;
+-	p = xdr_reserve_space(xdr, 4);
+-	if (!p)
++	attrlen_p = xdr_reserve_space(xdr, XDR_UNIT);
++	if (!attrlen_p)
+ 		goto out_resource;
+-	p++;                /* to be backfilled later */
+ 
+ 	if (bmval0 & FATTR4_WORD0_SUPPORTED_ATTRS) {
+ 		u32 supp[3];
+@@ -2983,7 +3228,7 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
+ 		p = xdr_reserve_space(xdr, fhp->fh_handle.fh_size + 4);
+ 		if (!p)
+ 			goto out_resource;
+-		p = xdr_encode_opaque(p, &fhp->fh_handle.fh_base,
++		p = xdr_encode_opaque(p, &fhp->fh_handle.fh_raw,
+ 					fhp->fh_handle.fh_size);
+ 	}
+ 	if (bmval0 & FATTR4_WORD0_FILEID) {
+@@ -3115,11 +3360,14 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
+ 		p = xdr_encode_hyper(p, dummy64);
+ 	}
+ 	if (bmval1 & FATTR4_WORD1_TIME_ACCESS) {
+-		p = xdr_reserve_space(xdr, 12);
+-		if (!p)
+-			goto out_resource;
+-		p = xdr_encode_hyper(p, (s64)stat.atime.tv_sec);
+-		*p++ = cpu_to_be32(stat.atime.tv_nsec);
++		status = nfsd4_encode_nfstime4(xdr, &stat.atime);
++		if (status)
++			goto out;
++	}
++	if (bmval1 & FATTR4_WORD1_TIME_CREATE) {
++		status = nfsd4_encode_nfstime4(xdr, &stat.btime);
++		if (status)
++			goto out;
+ 	}
+ 	if (bmval1 & FATTR4_WORD1_TIME_DELTA) {
+ 		p = xdr_reserve_space(xdr, 12);
+@@ -3128,36 +3376,31 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
+ 		p = encode_time_delta(p, d_inode(dentry));
+ 	}
+ 	if (bmval1 & FATTR4_WORD1_TIME_METADATA) {
+-		p = xdr_reserve_space(xdr, 12);
+-		if (!p)
+-			goto out_resource;
+-		p = xdr_encode_hyper(p, (s64)stat.ctime.tv_sec);
+-		*p++ = cpu_to_be32(stat.ctime.tv_nsec);
++		status = nfsd4_encode_nfstime4(xdr, &stat.ctime);
++		if (status)
++			goto out;
+ 	}
+ 	if (bmval1 & FATTR4_WORD1_TIME_MODIFY) {
+-		p = xdr_reserve_space(xdr, 12);
+-		if (!p)
+-			goto out_resource;
+-		p = xdr_encode_hyper(p, (s64)stat.mtime.tv_sec);
+-		*p++ = cpu_to_be32(stat.mtime.tv_nsec);
++		status = nfsd4_encode_nfstime4(xdr, &stat.mtime);
++		if (status)
++			goto out;
+ 	}
+ 	if (bmval1 & FATTR4_WORD1_MOUNTED_ON_FILEID) {
+-		struct kstat parent_stat;
+ 		u64 ino = stat.ino;
+ 
+ 		p = xdr_reserve_space(xdr, 8);
+ 		if (!p)
+                 	goto out_resource;
+ 		/*
+-		 * Get parent's attributes if not ignoring crossmount
+-		 * and this is the root of a cross-mounted filesystem.
++		 * Get ino of mountpoint in parent filesystem, if not ignoring
++		 * crossmount and this is the root of a cross-mounted
++		 * filesystem.
+ 		 */
+ 		if (ignore_crossmnt == 0 &&
+ 		    dentry == exp->ex_path.mnt->mnt_root) {
+-			err = get_parent_attributes(exp, &parent_stat);
++			err = nfsd4_get_mounted_on_ino(exp, &ino);
+ 			if (err)
+ 				goto out_nfserr;
+-			ino = parent_stat.ino;
+ 		}
+ 		p = xdr_encode_hyper(p, ino);
+ 	}
+@@ -3194,16 +3437,6 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
+ 			goto out;
+ 	}
+ 
+-	if (bmval2 & FATTR4_WORD2_CHANGE_ATTR_TYPE) {
+-		p = xdr_reserve_space(xdr, 4);
+-		if (!p)
+-			goto out_resource;
+-		if (IS_I_VERSION(d_inode(dentry)))
+-			*p++ = cpu_to_be32(NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR);
+-		else
+-			*p++ = cpu_to_be32(NFS4_CHANGE_TYPE_IS_TIME_METADATA);
+-	}
+-
+ #ifdef CONFIG_NFSD_V4_SECURITY_LABEL
+ 	if (bmval2 & FATTR4_WORD2_SECURITY_LABEL) {
+ 		status = nfsd4_encode_security_label(xdr, rqstp, context,
+@@ -3222,8 +3455,7 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
+ 		*p++ = cpu_to_be32(err == 0);
+ 	}
+ 
+-	attrlen = htonl(xdr->buf->len - attrlen_offset - 4);
+-	write_bytes_to_xdr_buf(xdr->buf, attrlen_offset, &attrlen, 4);
++	*attrlen_p = cpu_to_be32(xdr->buf->len - attrlen_offset - XDR_UNIT);
+ 	status = nfs_ok;
+ 
+ out:
+@@ -3392,7 +3624,7 @@ nfsd4_encode_dirent(void *ccdv, const char *name, int namlen,
+ 	p = xdr_reserve_space(xdr, 3*4 + namlen);
+ 	if (!p)
+ 		goto fail;
+-	p = xdr_encode_hyper(p, NFS_OFFSET_MAX);    /* offset of next entry */
++	p = xdr_encode_hyper(p, OFFSET_MAX);        /* offset of next entry */
+ 	p = xdr_encode_array(p, name, namlen);      /* name length & name */
+ 
+ 	nfserr = nfsd4_encode_dirent_fattr(xdr, cd, name, namlen);
+@@ -3476,9 +3708,11 @@ nfsd4_encode_stateid(struct xdr_stream *xdr, stateid_t *sid)
+ }
+ 
+ static __be32
+-nfsd4_encode_access(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_access *access)
++nfsd4_encode_access(struct nfsd4_compoundres *resp, __be32 nfserr,
++		    union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_access *access = &u->access;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 8);
+@@ -3489,9 +3723,11 @@ nfsd4_encode_access(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_
+ 	return 0;
+ }
+ 
+-static __be32 nfsd4_encode_bind_conn_to_session(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_bind_conn_to_session *bcts)
++static __be32 nfsd4_encode_bind_conn_to_session(struct nfsd4_compoundres *resp, __be32 nfserr,
++						union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_bind_conn_to_session *bcts = &u->bind_conn_to_session;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, NFS4_MAX_SESSIONID_LEN + 8);
+@@ -3506,18 +3742,22 @@ static __be32 nfsd4_encode_bind_conn_to_session(struct nfsd4_compoundres *resp,
+ }
+ 
+ static __be32
+-nfsd4_encode_close(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_close *close)
++nfsd4_encode_close(struct nfsd4_compoundres *resp, __be32 nfserr,
++		   union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_close *close = &u->close;
++	struct xdr_stream *xdr = resp->xdr;
+ 
+ 	return nfsd4_encode_stateid(xdr, &close->cl_stateid);
+ }
+ 
+ 
+ static __be32
+-nfsd4_encode_commit(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_commit *commit)
++nfsd4_encode_commit(struct nfsd4_compoundres *resp, __be32 nfserr,
++		    union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_commit *commit = &u->commit;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, NFS4_VERIFIER_SIZE);
+@@ -3529,9 +3769,11 @@ nfsd4_encode_commit(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_
+ }
+ 
+ static __be32
+-nfsd4_encode_create(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_create *create)
++nfsd4_encode_create(struct nfsd4_compoundres *resp, __be32 nfserr,
++		    union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_create *create = &u->create;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 20);
+@@ -3543,19 +3785,23 @@ nfsd4_encode_create(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_
+ }
+ 
+ static __be32
+-nfsd4_encode_getattr(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_getattr *getattr)
++nfsd4_encode_getattr(struct nfsd4_compoundres *resp, __be32 nfserr,
++		     union nfsd4_op_u *u)
+ {
++	struct nfsd4_getattr *getattr = &u->getattr;
+ 	struct svc_fh *fhp = getattr->ga_fhp;
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct xdr_stream *xdr = resp->xdr;
+ 
+ 	return nfsd4_encode_fattr(xdr, fhp, fhp->fh_export, fhp->fh_dentry,
+ 				    getattr->ga_bmval, resp->rqstp, 0);
+ }
+ 
+ static __be32
+-nfsd4_encode_getfh(struct nfsd4_compoundres *resp, __be32 nfserr, struct svc_fh **fhpp)
++nfsd4_encode_getfh(struct nfsd4_compoundres *resp, __be32 nfserr,
++		   union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct svc_fh **fhpp = &u->getfh;
++	struct xdr_stream *xdr = resp->xdr;
+ 	struct svc_fh *fhp = *fhpp;
+ 	unsigned int len;
+ 	__be32 *p;
+@@ -3564,7 +3810,7 @@ nfsd4_encode_getfh(struct nfsd4_compoundres *resp, __be32 nfserr, struct svc_fh
+ 	p = xdr_reserve_space(xdr, len + 4);
+ 	if (!p)
+ 		return nfserr_resource;
+-	p = xdr_encode_opaque(p, &fhp->fh_handle.fh_base, len);
++	p = xdr_encode_opaque(p, &fhp->fh_handle.fh_raw, len);
+ 	return 0;
+ }
+ 
+@@ -3608,9 +3854,11 @@ nfsd4_encode_lock_denied(struct xdr_stream *xdr, struct nfsd4_lock_denied *ld)
+ }
+ 
+ static __be32
+-nfsd4_encode_lock(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_lock *lock)
++nfsd4_encode_lock(struct nfsd4_compoundres *resp, __be32 nfserr,
++		  union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_lock *lock = &u->lock;
++	struct xdr_stream *xdr = resp->xdr;
+ 
+ 	if (!nfserr)
+ 		nfserr = nfsd4_encode_stateid(xdr, &lock->lk_resp_stateid);
+@@ -3621,9 +3869,11 @@ nfsd4_encode_lock(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_lo
+ }
+ 
+ static __be32
+-nfsd4_encode_lockt(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_lockt *lockt)
++nfsd4_encode_lockt(struct nfsd4_compoundres *resp, __be32 nfserr,
++		   union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_lockt *lockt = &u->lockt;
++	struct xdr_stream *xdr = resp->xdr;
+ 
+ 	if (nfserr == nfserr_denied)
+ 		nfsd4_encode_lock_denied(xdr, &lockt->lt_denied);
+@@ -3631,18 +3881,22 @@ nfsd4_encode_lockt(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_l
+ }
+ 
+ static __be32
+-nfsd4_encode_locku(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_locku *locku)
++nfsd4_encode_locku(struct nfsd4_compoundres *resp, __be32 nfserr,
++		   union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_locku *locku = &u->locku;
++	struct xdr_stream *xdr = resp->xdr;
+ 
+ 	return nfsd4_encode_stateid(xdr, &locku->lu_stateid);
+ }
+ 
+ 
+ static __be32
+-nfsd4_encode_link(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_link *link)
++nfsd4_encode_link(struct nfsd4_compoundres *resp, __be32 nfserr,
++		  union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_link *link = &u->link;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 20);
+@@ -3654,9 +3908,11 @@ nfsd4_encode_link(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_li
+ 
+ 
+ static __be32
+-nfsd4_encode_open(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_open *open)
++nfsd4_encode_open(struct nfsd4_compoundres *resp, __be32 nfserr,
++		  union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_open *open = &u->open;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	nfserr = nfsd4_encode_stateid(xdr, &open->op_stateid);
+@@ -3748,17 +4004,21 @@ nfsd4_encode_open(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_op
+ }
+ 
+ static __be32
+-nfsd4_encode_open_confirm(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_open_confirm *oc)
++nfsd4_encode_open_confirm(struct nfsd4_compoundres *resp, __be32 nfserr,
++			  union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_open_confirm *oc = &u->open_confirm;
++	struct xdr_stream *xdr = resp->xdr;
+ 
+ 	return nfsd4_encode_stateid(xdr, &oc->oc_resp_stateid);
+ }
+ 
+ static __be32
+-nfsd4_encode_open_downgrade(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_open_downgrade *od)
++nfsd4_encode_open_downgrade(struct nfsd4_compoundres *resp, __be32 nfserr,
++			    union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_open_downgrade *od = &u->open_downgrade;
++	struct xdr_stream *xdr = resp->xdr;
+ 
+ 	return nfsd4_encode_stateid(xdr, &od->od_stateid);
+ }
+@@ -3768,33 +4028,28 @@ static __be32 nfsd4_encode_splice_read(
+ 				struct nfsd4_read *read,
+ 				struct file *file, unsigned long maxcount)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct xdr_stream *xdr = resp->xdr;
+ 	struct xdr_buf *buf = xdr->buf;
+-	u32 eof;
+-	int space_left;
++	int status, space_left;
+ 	__be32 nfserr;
+-	__be32 *p = xdr->p - 2;
+ 
+ 	/* Make sure there will be room for padding if needed */
+ 	if (xdr->end - xdr->p < 1)
+ 		return nfserr_resource;
+ 
+ 	nfserr = nfsd_splice_read(read->rd_rqstp, read->rd_fhp,
+-				  file, read->rd_offset, &maxcount, &eof);
++				  file, read->rd_offset, &maxcount,
++				  &read->rd_eof);
+ 	read->rd_length = maxcount;
+-	if (nfserr) {
+-		/*
+-		 * nfsd_splice_actor may have already messed with the
+-		 * page length; reset it so as not to confuse
+-		 * xdr_truncate_encode:
+-		 */
+-		buf->page_len = 0;
+-		return nfserr;
++	if (nfserr)
++		goto out_err;
++	status = svc_encode_result_payload(read->rd_rqstp,
++					   buf->head[0].iov_len, maxcount);
++	if (status) {
++		nfserr = nfserrno(status);
++		goto out_err;
+ 	}
+ 
+-	*(p++) = htonl(eof);
+-	*(p++) = htonl(maxcount);
+-
+ 	buf->page_len = maxcount;
+ 	buf->len += maxcount;
+ 	xdr->page_ptr += (buf->page_base + maxcount + PAGE_SIZE - 1)
+@@ -3820,18 +4075,25 @@ static __be32 nfsd4_encode_splice_read(
+ 	xdr->end = (__be32 *)((void *)xdr->end + space_left);
+ 
+ 	return 0;
++
++out_err:
++	/*
++	 * nfsd_splice_actor may have already messed with the
++	 * page length; reset it so as not to confuse
++	 * xdr_truncate_encode in our caller.
++	 */
++	buf->page_len = 0;
++	return nfserr;
+ }
+ 
+ static __be32 nfsd4_encode_readv(struct nfsd4_compoundres *resp,
+ 				 struct nfsd4_read *read,
+ 				 struct file *file, unsigned long maxcount)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
+-	u32 eof;
+-	int starting_len = xdr->buf->len - 8;
++	struct xdr_stream *xdr = resp->xdr;
++	unsigned int starting_len = xdr->buf->len;
++	__be32 zero = xdr_zero;
+ 	__be32 nfserr;
+-	__be32 tmp;
+-	int pad;
+ 
+ 	read->rd_vlen = xdr_reserve_space_vec(xdr, resp->rqstp->rq_vec, maxcount);
+ 	if (read->rd_vlen < 0)
+@@ -3839,33 +4101,27 @@ static __be32 nfsd4_encode_readv(struct nfsd4_compoundres *resp,
+ 
+ 	nfserr = nfsd_readv(resp->rqstp, read->rd_fhp, file, read->rd_offset,
+ 			    resp->rqstp->rq_vec, read->rd_vlen, &maxcount,
+-			    &eof);
++			    &read->rd_eof);
+ 	read->rd_length = maxcount;
+ 	if (nfserr)
+ 		return nfserr;
+-	if (svc_encode_read_payload(resp->rqstp, starting_len + 8, maxcount))
++	if (svc_encode_result_payload(resp->rqstp, starting_len, maxcount))
+ 		return nfserr_io;
+-	xdr_truncate_encode(xdr, starting_len + 8 + xdr_align_size(maxcount));
+-
+-	tmp = htonl(eof);
+-	write_bytes_to_xdr_buf(xdr->buf, starting_len    , &tmp, 4);
+-	tmp = htonl(maxcount);
+-	write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4);
+-
+-	tmp = xdr_zero;
+-	pad = (maxcount&3) ? 4 - (maxcount&3) : 0;
+-	write_bytes_to_xdr_buf(xdr->buf, starting_len + 8 + maxcount,
+-								&tmp, pad);
+-	return 0;
++	xdr_truncate_encode(xdr, starting_len + xdr_align_size(maxcount));
+ 
++	write_bytes_to_xdr_buf(xdr->buf, starting_len + maxcount, &zero,
++			       xdr_pad_size(maxcount));
++	return nfs_ok;
+ }
+ 
+ static __be32
+ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
+-		  struct nfsd4_read *read)
++		  union nfsd4_op_u *u)
+ {
++	struct nfsd4_read *read = &u->read;
++	bool splice_ok = test_bit(RQ_SPLICE_OK, &resp->rqstp->rq_flags);
+ 	unsigned long maxcount;
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct xdr_stream *xdr = resp->xdr;
+ 	struct file *file;
+ 	int starting_len = xdr->buf->len;
+ 	__be32 *p;
+@@ -3876,45 +4132,44 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ 	p = xdr_reserve_space(xdr, 8); /* eof flag and byte count */
+ 	if (!p) {
+-		WARN_ON_ONCE(test_bit(RQ_SPLICE_OK, &resp->rqstp->rq_flags));
++		WARN_ON_ONCE(splice_ok);
+ 		return nfserr_resource;
+ 	}
+-	if (resp->xdr.buf->page_len &&
+-	    test_bit(RQ_SPLICE_OK, &resp->rqstp->rq_flags)) {
++	if (resp->xdr->buf->page_len && splice_ok) {
+ 		WARN_ON_ONCE(1);
+ 		return nfserr_serverfault;
+ 	}
+ 	xdr_commit_encode(xdr);
+ 
+-	maxcount = svc_max_payload(resp->rqstp);
+-	maxcount = min_t(unsigned long, maxcount,
++	maxcount = min_t(unsigned long, read->rd_length,
+ 			 (xdr->buf->buflen - xdr->buf->len));
+-	maxcount = min_t(unsigned long, maxcount, read->rd_length);
+ 
+-	if (file->f_op->splice_read &&
+-	    test_bit(RQ_SPLICE_OK, &resp->rqstp->rq_flags))
++	if (file->f_op->splice_read && splice_ok)
+ 		nfserr = nfsd4_encode_splice_read(resp, read, file, maxcount);
+ 	else
+ 		nfserr = nfsd4_encode_readv(resp, read, file, maxcount);
+-
+-	if (nfserr)
++	if (nfserr) {
+ 		xdr_truncate_encode(xdr, starting_len);
++		return nfserr;
++	}
+ 
+-	return nfserr;
++	p = xdr_encode_bool(p, read->rd_eof);
++	*p = cpu_to_be32(read->rd_length);
++	return nfs_ok;
+ }
+ 
+ static __be32
+-nfsd4_encode_readlink(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_readlink *readlink)
++nfsd4_encode_readlink(struct nfsd4_compoundres *resp, __be32 nfserr,
++		      union nfsd4_op_u *u)
+ {
+-	int maxcount;
+-	__be32 wire_count;
+-	int zero = 0;
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_readlink *readlink = &u->readlink;
++	__be32 *p, *maxcount_p, zero = xdr_zero;
++	struct xdr_stream *xdr = resp->xdr;
+ 	int length_offset = xdr->buf->len;
+-	__be32 *p;
++	int maxcount, status;
+ 
+-	p = xdr_reserve_space(xdr, 4);
+-	if (!p)
++	maxcount_p = xdr_reserve_space(xdr, XDR_UNIT);
++	if (!maxcount_p)
+ 		return nfserr_resource;
+ 	maxcount = PAGE_SIZE;
+ 
+@@ -3931,28 +4186,35 @@ nfsd4_encode_readlink(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd
+ 						(char *)p, &maxcount);
+ 	if (nfserr == nfserr_isdir)
+ 		nfserr = nfserr_inval;
+-	if (nfserr) {
+-		xdr_truncate_encode(xdr, length_offset);
+-		return nfserr;
+-	}
++	if (nfserr)
++		goto out_err;
++	status = svc_encode_result_payload(readlink->rl_rqstp, length_offset,
++					   maxcount);
++	if (status) {
++		nfserr = nfserrno(status);
++		goto out_err;
++	}
++	*maxcount_p = cpu_to_be32(maxcount);
++	xdr_truncate_encode(xdr, length_offset + 4 + xdr_align_size(maxcount));
++	write_bytes_to_xdr_buf(xdr->buf, length_offset + 4 + maxcount, &zero,
++			       xdr_pad_size(maxcount));
++	return nfs_ok;
+ 
+-	wire_count = htonl(maxcount);
+-	write_bytes_to_xdr_buf(xdr->buf, length_offset, &wire_count, 4);
+-	xdr_truncate_encode(xdr, length_offset + 4 + ALIGN(maxcount, 4));
+-	if (maxcount & 3)
+-		write_bytes_to_xdr_buf(xdr->buf, length_offset + 4 + maxcount,
+-						&zero, 4 - (maxcount&3));
+-	return 0;
++out_err:
++	xdr_truncate_encode(xdr, length_offset);
++	return nfserr;
+ }
+ 
+ static __be32
+-nfsd4_encode_readdir(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_readdir *readdir)
++nfsd4_encode_readdir(struct nfsd4_compoundres *resp, __be32 nfserr,
++		     union nfsd4_op_u *u)
+ {
++	struct nfsd4_readdir *readdir = &u->readdir;
+ 	int maxcount;
+ 	int bytes_left;
+ 	loff_t offset;
+ 	__be64 wire_offset;
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct xdr_stream *xdr = resp->xdr;
+ 	int starting_len = xdr->buf->len;
+ 	__be32 *p;
+ 
+@@ -3963,8 +4225,8 @@ nfsd4_encode_readdir(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4
+ 	/* XXX: Following NFSv3, we ignore the READDIR verifier for now. */
+ 	*p++ = cpu_to_be32(0);
+ 	*p++ = cpu_to_be32(0);
+-	resp->xdr.buf->head[0].iov_len = ((char *)resp->xdr.p)
+-				- (char *)resp->xdr.buf->head[0].iov_base;
++	xdr->buf->head[0].iov_len = (char *)xdr->p -
++				    (char *)xdr->buf->head[0].iov_base;
+ 
+ 	/*
+ 	 * Number of bytes left for directory entries allowing for the
+@@ -4037,9 +4299,11 @@ nfsd4_encode_readdir(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4
+ }
+ 
+ static __be32
+-nfsd4_encode_remove(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_remove *remove)
++nfsd4_encode_remove(struct nfsd4_compoundres *resp, __be32 nfserr,
++		    union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_remove *remove = &u->remove;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 20);
+@@ -4050,9 +4314,11 @@ nfsd4_encode_remove(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_
+ }
+ 
+ static __be32
+-nfsd4_encode_rename(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_rename *rename)
++nfsd4_encode_rename(struct nfsd4_compoundres *resp, __be32 nfserr,
++		    union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_rename *rename = &u->rename;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 40);
+@@ -4133,18 +4399,20 @@ nfsd4_do_encode_secinfo(struct xdr_stream *xdr, struct svc_export *exp)
+ 
+ static __be32
+ nfsd4_encode_secinfo(struct nfsd4_compoundres *resp, __be32 nfserr,
+-		     struct nfsd4_secinfo *secinfo)
++		     union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_secinfo *secinfo = &u->secinfo;
++	struct xdr_stream *xdr = resp->xdr;
+ 
+ 	return nfsd4_do_encode_secinfo(xdr, secinfo->si_exp);
+ }
+ 
+ static __be32
+ nfsd4_encode_secinfo_no_name(struct nfsd4_compoundres *resp, __be32 nfserr,
+-		     struct nfsd4_secinfo_no_name *secinfo)
++		     union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_secinfo_no_name *secinfo = &u->secinfo_no_name;
++	struct xdr_stream *xdr = resp->xdr;
+ 
+ 	return nfsd4_do_encode_secinfo(xdr, secinfo->sin_exp);
+ }
+@@ -4154,9 +4422,11 @@ nfsd4_encode_secinfo_no_name(struct nfsd4_compoundres *resp, __be32 nfserr,
+  * regardless of the error status.
+  */
+ static __be32
+-nfsd4_encode_setattr(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_setattr *setattr)
++nfsd4_encode_setattr(struct nfsd4_compoundres *resp, __be32 nfserr,
++		     union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_setattr *setattr = &u->setattr;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 16);
+@@ -4178,9 +4448,11 @@ nfsd4_encode_setattr(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4
+ }
+ 
+ static __be32
+-nfsd4_encode_setclientid(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_setclientid *scd)
++nfsd4_encode_setclientid(struct nfsd4_compoundres *resp, __be32 nfserr,
++			 union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_setclientid *scd = &u->setclientid;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	if (!nfserr) {
+@@ -4202,9 +4474,11 @@ nfsd4_encode_setclientid(struct nfsd4_compoundres *resp, __be32 nfserr, struct n
+ }
+ 
+ static __be32
+-nfsd4_encode_write(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_write *write)
++nfsd4_encode_write(struct nfsd4_compoundres *resp, __be32 nfserr,
++		   union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_write *write = &u->write;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 16);
+@@ -4219,9 +4493,10 @@ nfsd4_encode_write(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_w
+ 
+ static __be32
+ nfsd4_encode_exchange_id(struct nfsd4_compoundres *resp, __be32 nfserr,
+-			 struct nfsd4_exchange_id *exid)
++			 union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_exchange_id *exid = &u->exchange_id;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 	char *major_id;
+ 	char *server_scope;
+@@ -4297,9 +4572,10 @@ nfsd4_encode_exchange_id(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ static __be32
+ nfsd4_encode_create_session(struct nfsd4_compoundres *resp, __be32 nfserr,
+-			    struct nfsd4_create_session *sess)
++			    union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_create_session *sess = &u->create_session;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 24);
+@@ -4350,9 +4626,10 @@ nfsd4_encode_create_session(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ static __be32
+ nfsd4_encode_sequence(struct nfsd4_compoundres *resp, __be32 nfserr,
+-		      struct nfsd4_sequence *seq)
++		      union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_sequence *seq = &u->sequence;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, NFS4_MAX_SESSIONID_LEN + 20);
+@@ -4373,9 +4650,10 @@ nfsd4_encode_sequence(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ static __be32
+ nfsd4_encode_test_stateid(struct nfsd4_compoundres *resp, __be32 nfserr,
+-			  struct nfsd4_test_stateid *test_stateid)
++			  union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_test_stateid *test_stateid = &u->test_stateid;
++	struct xdr_stream *xdr = resp->xdr;
+ 	struct nfsd4_test_stateid_id *stateid, *next;
+ 	__be32 *p;
+ 
+@@ -4394,9 +4672,10 @@ nfsd4_encode_test_stateid(struct nfsd4_compoundres *resp, __be32 nfserr,
+ #ifdef CONFIG_NFSD_PNFS
+ static __be32
+ nfsd4_encode_getdeviceinfo(struct nfsd4_compoundres *resp, __be32 nfserr,
+-		struct nfsd4_getdeviceinfo *gdev)
++		union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_getdeviceinfo *gdev = &u->getdeviceinfo;
++	struct xdr_stream *xdr = resp->xdr;
+ 	const struct nfsd4_layout_ops *ops;
+ 	u32 starting_len = xdr->buf->len, needed_len;
+ 	__be32 *p;
+@@ -4447,9 +4726,10 @@ nfsd4_encode_getdeviceinfo(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ static __be32
+ nfsd4_encode_layoutget(struct nfsd4_compoundres *resp, __be32 nfserr,
+-		struct nfsd4_layoutget *lgp)
++		union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_layoutget *lgp = &u->layoutget;
++	struct xdr_stream *xdr = resp->xdr;
+ 	const struct nfsd4_layout_ops *ops;
+ 	__be32 *p;
+ 
+@@ -4474,9 +4754,10 @@ nfsd4_encode_layoutget(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ static __be32
+ nfsd4_encode_layoutcommit(struct nfsd4_compoundres *resp, __be32 nfserr,
+-			  struct nfsd4_layoutcommit *lcp)
++			  union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_layoutcommit *lcp = &u->layoutcommit;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 4);
+@@ -4495,9 +4776,10 @@ nfsd4_encode_layoutcommit(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ static __be32
+ nfsd4_encode_layoutreturn(struct nfsd4_compoundres *resp, __be32 nfserr,
+-		struct nfsd4_layoutreturn *lrp)
++		union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_layoutreturn *lrp = &u->layoutreturn;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 4);
+@@ -4515,7 +4797,7 @@ nfsd42_encode_write_res(struct nfsd4_compoundres *resp,
+ 		struct nfsd42_write_res *write, bool sync)
+ {
+ 	__be32 *p;
+-	p = xdr_reserve_space(&resp->xdr, 4);
++	p = xdr_reserve_space(resp->xdr, 4);
+ 	if (!p)
+ 		return nfserr_resource;
+ 
+@@ -4524,11 +4806,11 @@ nfsd42_encode_write_res(struct nfsd4_compoundres *resp,
+ 	else {
+ 		__be32 nfserr;
+ 		*p++ = cpu_to_be32(1);
+-		nfserr = nfsd4_encode_stateid(&resp->xdr, &write->cb_stateid);
++		nfserr = nfsd4_encode_stateid(resp->xdr, &write->cb_stateid);
+ 		if (nfserr)
+ 			return nfserr;
+ 	}
+-	p = xdr_reserve_space(&resp->xdr, 8 + 4 + NFS4_VERIFIER_SIZE);
++	p = xdr_reserve_space(resp->xdr, 8 + 4 + NFS4_VERIFIER_SIZE);
+ 	if (!p)
+ 		return nfserr_resource;
+ 
+@@ -4542,7 +4824,7 @@ nfsd42_encode_write_res(struct nfsd4_compoundres *resp,
+ static __be32
+ nfsd42_encode_nl4_server(struct nfsd4_compoundres *resp, struct nl4_server *ns)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct xdr_stream *xdr = resp->xdr;
+ 	struct nfs42_netaddr *addr;
+ 	__be32 *p;
+ 
+@@ -4581,26 +4863,28 @@ nfsd42_encode_nl4_server(struct nfsd4_compoundres *resp, struct nl4_server *ns)
+ 
+ static __be32
+ nfsd4_encode_copy(struct nfsd4_compoundres *resp, __be32 nfserr,
+-		  struct nfsd4_copy *copy)
++		  union nfsd4_op_u *u)
+ {
++	struct nfsd4_copy *copy = &u->copy;
+ 	__be32 *p;
+ 
+ 	nfserr = nfsd42_encode_write_res(resp, &copy->cp_res,
+-			copy->cp_synchronous);
++					 nfsd4_copy_is_sync(copy));
+ 	if (nfserr)
+ 		return nfserr;
+ 
+-	p = xdr_reserve_space(&resp->xdr, 4 + 4);
++	p = xdr_reserve_space(resp->xdr, 4 + 4);
+ 	*p++ = xdr_one; /* cr_consecutive */
+-	*p++ = cpu_to_be32(copy->cp_synchronous);
++	*p = nfsd4_copy_is_sync(copy) ? xdr_one : xdr_zero;
+ 	return 0;
+ }
+ 
+ static __be32
+ nfsd4_encode_offload_status(struct nfsd4_compoundres *resp, __be32 nfserr,
+-			    struct nfsd4_offload_status *os)
++			    union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_offload_status *os = &u->offload_status;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 8 + 4);
+@@ -4613,159 +4897,84 @@ nfsd4_encode_offload_status(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ static __be32
+ nfsd4_encode_read_plus_data(struct nfsd4_compoundres *resp,
+-			    struct nfsd4_read *read,
+-			    unsigned long *maxcount, u32 *eof,
+-			    loff_t *pos)
++			    struct nfsd4_read *read)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	bool splice_ok = test_bit(RQ_SPLICE_OK, &resp->rqstp->rq_flags);
+ 	struct file *file = read->rd_nf->nf_file;
+-	int starting_len = xdr->buf->len;
+-	loff_t hole_pos;
+-	__be32 nfserr;
+-	__be32 *p, tmp;
+-	__be64 tmp64;
+-
+-	hole_pos = pos ? *pos : vfs_llseek(file, read->rd_offset, SEEK_HOLE);
+-	if (hole_pos > read->rd_offset)
+-		*maxcount = min_t(unsigned long, *maxcount, hole_pos - read->rd_offset);
+-	*maxcount = min_t(unsigned long, *maxcount, (xdr->buf->buflen - xdr->buf->len));
++	struct xdr_stream *xdr = resp->xdr;
++	unsigned long maxcount;
++	__be32 nfserr, *p;
+ 
+ 	/* Content type, offset, byte count */
+ 	p = xdr_reserve_space(xdr, 4 + 8 + 4);
+ 	if (!p)
+-		return nfserr_resource;
++		return nfserr_io;
++	if (resp->xdr->buf->page_len && splice_ok) {
++		WARN_ON_ONCE(splice_ok);
++		return nfserr_serverfault;
++	}
+ 
+-	read->rd_vlen = xdr_reserve_space_vec(xdr, resp->rqstp->rq_vec, *maxcount);
+-	if (read->rd_vlen < 0)
+-		return nfserr_resource;
++	maxcount = min_t(unsigned long, read->rd_length,
++			 (xdr->buf->buflen - xdr->buf->len));
+ 
+-	nfserr = nfsd_readv(resp->rqstp, read->rd_fhp, file, read->rd_offset,
+-			    resp->rqstp->rq_vec, read->rd_vlen, maxcount, eof);
++	if (file->f_op->splice_read && splice_ok)
++		nfserr = nfsd4_encode_splice_read(resp, read, file, maxcount);
++	else
++		nfserr = nfsd4_encode_readv(resp, read, file, maxcount);
+ 	if (nfserr)
+ 		return nfserr;
+-	xdr_truncate_encode(xdr, starting_len + 16 + xdr_align_size(*maxcount));
+-
+-	tmp = htonl(NFS4_CONTENT_DATA);
+-	write_bytes_to_xdr_buf(xdr->buf, starting_len,      &tmp,   4);
+-	tmp64 = cpu_to_be64(read->rd_offset);
+-	write_bytes_to_xdr_buf(xdr->buf, starting_len + 4,  &tmp64, 8);
+-	tmp = htonl(*maxcount);
+-	write_bytes_to_xdr_buf(xdr->buf, starting_len + 12, &tmp,   4);
+-
+-	tmp = xdr_zero;
+-	write_bytes_to_xdr_buf(xdr->buf, starting_len + 16 + *maxcount, &tmp,
+-			       xdr_pad_size(*maxcount));
+-	return nfs_ok;
+-}
+-
+-static __be32
+-nfsd4_encode_read_plus_hole(struct nfsd4_compoundres *resp,
+-			    struct nfsd4_read *read,
+-			    unsigned long *maxcount, u32 *eof)
+-{
+-	struct file *file = read->rd_nf->nf_file;
+-	loff_t data_pos = vfs_llseek(file, read->rd_offset, SEEK_DATA);
+-	loff_t f_size = i_size_read(file_inode(file));
+-	unsigned long count;
+-	__be32 *p;
+-
+-	if (data_pos == -ENXIO)
+-		data_pos = f_size;
+-	else if (data_pos <= read->rd_offset || (data_pos < f_size && data_pos % PAGE_SIZE))
+-		return nfsd4_encode_read_plus_data(resp, read, maxcount, eof, &f_size);
+-	count = data_pos - read->rd_offset;
+-
+-	/* Content type, offset, byte count */
+-	p = xdr_reserve_space(&resp->xdr, 4 + 8 + 8);
+-	if (!p)
+-		return nfserr_resource;
+ 
+-	*p++ = htonl(NFS4_CONTENT_HOLE);
+-	 p   = xdr_encode_hyper(p, read->rd_offset);
+-	 p   = xdr_encode_hyper(p, count);
++	*p++ = cpu_to_be32(NFS4_CONTENT_DATA);
++	p = xdr_encode_hyper(p, read->rd_offset);
++	*p = cpu_to_be32(read->rd_length);
+ 
+-	*eof = (read->rd_offset + count) >= f_size;
+-	*maxcount = min_t(unsigned long, count, *maxcount);
+ 	return nfs_ok;
+ }
+ 
+ static __be32
+ nfsd4_encode_read_plus(struct nfsd4_compoundres *resp, __be32 nfserr,
+-		       struct nfsd4_read *read)
++		       union nfsd4_op_u *u)
+ {
+-	unsigned long maxcount, count;
+-	struct xdr_stream *xdr = &resp->xdr;
+-	struct file *file;
++	struct nfsd4_read *read = &u->read;
++	struct file *file = read->rd_nf->nf_file;
++	struct xdr_stream *xdr = resp->xdr;
+ 	int starting_len = xdr->buf->len;
+-	int last_segment = xdr->buf->len;
+-	int segments = 0;
+-	__be32 *p, tmp;
+-	bool is_data;
+-	loff_t pos;
+-	u32 eof;
++	u32 segments = 0;
++	__be32 *p;
+ 
+ 	if (nfserr)
+ 		return nfserr;
+-	file = read->rd_nf->nf_file;
+ 
+ 	/* eof flag, segment count */
+ 	p = xdr_reserve_space(xdr, 4 + 4);
+ 	if (!p)
+-		return nfserr_resource;
++		return nfserr_io;
+ 	xdr_commit_encode(xdr);
+ 
+-	maxcount = svc_max_payload(resp->rqstp);
+-	maxcount = min_t(unsigned long, maxcount,
+-			 (xdr->buf->buflen - xdr->buf->len));
+-	maxcount = min_t(unsigned long, maxcount, read->rd_length);
+-	count    = maxcount;
+-
+-	eof = read->rd_offset >= i_size_read(file_inode(file));
+-	if (eof)
++	read->rd_eof = read->rd_offset >= i_size_read(file_inode(file));
++	if (read->rd_eof)
+ 		goto out;
+ 
+-	pos = vfs_llseek(file, read->rd_offset, SEEK_HOLE);
+-	is_data = pos > read->rd_offset;
+-
+-	while (count > 0 && !eof) {
+-		maxcount = count;
+-		if (is_data)
+-			nfserr = nfsd4_encode_read_plus_data(resp, read, &maxcount, &eof,
+-						segments == 0 ? &pos : NULL);
+-		else
+-			nfserr = nfsd4_encode_read_plus_hole(resp, read, &maxcount, &eof);
+-		if (nfserr)
+-			goto out;
+-		count -= maxcount;
+-		read->rd_offset += maxcount;
+-		is_data = !is_data;
+-		last_segment = xdr->buf->len;
+-		segments++;
+-	}
+-
+-out:
+-	if (nfserr && segments == 0)
++	nfserr = nfsd4_encode_read_plus_data(resp, read);
++	if (nfserr) {
+ 		xdr_truncate_encode(xdr, starting_len);
+-	else {
+-		if (nfserr) {
+-			xdr_truncate_encode(xdr, last_segment);
+-			nfserr = nfs_ok;
+-			eof = 0;
+-		}
+-		tmp = htonl(eof);
+-		write_bytes_to_xdr_buf(xdr->buf, starting_len,     &tmp, 4);
+-		tmp = htonl(segments);
+-		write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4);
++		return nfserr;
+ 	}
+ 
++	segments++;
++
++out:
++	p = xdr_encode_bool(p, read->rd_eof);
++	*p = cpu_to_be32(segments);
+ 	return nfserr;
+ }
+ 
+ static __be32
+ nfsd4_encode_copy_notify(struct nfsd4_compoundres *resp, __be32 nfserr,
+-			 struct nfsd4_copy_notify *cn)
++			 union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_copy_notify *cn = &u->copy_notify;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	if (nfserr)
+@@ -4792,16 +5001,18 @@ nfsd4_encode_copy_notify(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ 	*p++ = cpu_to_be32(1);
+ 
+-	return nfsd42_encode_nl4_server(resp, &cn->cpn_src);
++	nfserr = nfsd42_encode_nl4_server(resp, cn->cpn_src);
++	return nfserr;
+ }
+ 
+ static __be32
+ nfsd4_encode_seek(struct nfsd4_compoundres *resp, __be32 nfserr,
+-		  struct nfsd4_seek *seek)
++		  union nfsd4_op_u *u)
+ {
++	struct nfsd4_seek *seek = &u->seek;
+ 	__be32 *p;
+ 
+-	p = xdr_reserve_space(&resp->xdr, 4 + 8);
++	p = xdr_reserve_space(resp->xdr, 4 + 8);
+ 	*p++ = cpu_to_be32(seek->seek_eof);
+ 	p = xdr_encode_hyper(p, seek->seek_pos);
+ 
+@@ -4809,7 +5020,8 @@ nfsd4_encode_seek(struct nfsd4_compoundres *resp, __be32 nfserr,
+ }
+ 
+ static __be32
+-nfsd4_encode_noop(struct nfsd4_compoundres *resp, __be32 nfserr, void *p)
++nfsd4_encode_noop(struct nfsd4_compoundres *resp, __be32 nfserr,
++		  union nfsd4_op_u *p)
+ {
+ 	return nfserr;
+ }
+@@ -4860,9 +5072,10 @@ nfsd4_vbuf_to_stream(struct xdr_stream *xdr, char *buf, u32 buflen)
+ 
+ static __be32
+ nfsd4_encode_getxattr(struct nfsd4_compoundres *resp, __be32 nfserr,
+-		      struct nfsd4_getxattr *getxattr)
++		      union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_getxattr *getxattr = &u->getxattr;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p, err;
+ 
+ 	p = xdr_reserve_space(xdr, 4);
+@@ -4884,9 +5097,10 @@ nfsd4_encode_getxattr(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ static __be32
+ nfsd4_encode_setxattr(struct nfsd4_compoundres *resp, __be32 nfserr,
+-		      struct nfsd4_setxattr *setxattr)
++		      union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_setxattr *setxattr = &u->setxattr;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 20);
+@@ -4925,9 +5139,10 @@ nfsd4_listxattr_validate_cookie(struct nfsd4_listxattrs *listxattrs,
+ 
+ static __be32
+ nfsd4_encode_listxattrs(struct nfsd4_compoundres *resp, __be32 nfserr,
+-			struct nfsd4_listxattrs *listxattrs)
++			union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_listxattrs *listxattrs = &u->listxattrs;
++	struct xdr_stream *xdr = resp->xdr;
+ 	u32 cookie_offset, count_offset, eof;
+ 	u32 left, xdrleft, slen, count;
+ 	u32 xdrlen, offset;
+@@ -5036,9 +5251,10 @@ nfsd4_encode_listxattrs(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ static __be32
+ nfsd4_encode_removexattr(struct nfsd4_compoundres *resp, __be32 nfserr,
+-			 struct nfsd4_removexattr *removexattr)
++			 union nfsd4_op_u *u)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct nfsd4_removexattr *removexattr = &u->removexattr;
++	struct xdr_stream *xdr = resp->xdr;
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 20);
+@@ -5049,7 +5265,7 @@ nfsd4_encode_removexattr(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 	return 0;
+ }
+ 
+-typedef __be32(* nfsd4_enc)(struct nfsd4_compoundres *, __be32, void *);
++typedef __be32(*nfsd4_enc)(struct nfsd4_compoundres *, __be32, union nfsd4_op_u *u);
+ 
+ /*
+  * Note: nfsd4_enc_ops vector is shared for v4.0 and v4.1
+@@ -5057,93 +5273,93 @@ typedef __be32(* nfsd4_enc)(struct nfsd4_compoundres *, __be32, void *);
+  * done in the decoding phase.
+  */
+ static const nfsd4_enc nfsd4_enc_ops[] = {
+-	[OP_ACCESS]		= (nfsd4_enc)nfsd4_encode_access,
+-	[OP_CLOSE]		= (nfsd4_enc)nfsd4_encode_close,
+-	[OP_COMMIT]		= (nfsd4_enc)nfsd4_encode_commit,
+-	[OP_CREATE]		= (nfsd4_enc)nfsd4_encode_create,
+-	[OP_DELEGPURGE]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_DELEGRETURN]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_GETATTR]		= (nfsd4_enc)nfsd4_encode_getattr,
+-	[OP_GETFH]		= (nfsd4_enc)nfsd4_encode_getfh,
+-	[OP_LINK]		= (nfsd4_enc)nfsd4_encode_link,
+-	[OP_LOCK]		= (nfsd4_enc)nfsd4_encode_lock,
+-	[OP_LOCKT]		= (nfsd4_enc)nfsd4_encode_lockt,
+-	[OP_LOCKU]		= (nfsd4_enc)nfsd4_encode_locku,
+-	[OP_LOOKUP]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_LOOKUPP]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_NVERIFY]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_OPEN]		= (nfsd4_enc)nfsd4_encode_open,
+-	[OP_OPENATTR]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_OPEN_CONFIRM]	= (nfsd4_enc)nfsd4_encode_open_confirm,
+-	[OP_OPEN_DOWNGRADE]	= (nfsd4_enc)nfsd4_encode_open_downgrade,
+-	[OP_PUTFH]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_PUTPUBFH]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_PUTROOTFH]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_READ]		= (nfsd4_enc)nfsd4_encode_read,
+-	[OP_READDIR]		= (nfsd4_enc)nfsd4_encode_readdir,
+-	[OP_READLINK]		= (nfsd4_enc)nfsd4_encode_readlink,
+-	[OP_REMOVE]		= (nfsd4_enc)nfsd4_encode_remove,
+-	[OP_RENAME]		= (nfsd4_enc)nfsd4_encode_rename,
+-	[OP_RENEW]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_RESTOREFH]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_SAVEFH]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_SECINFO]		= (nfsd4_enc)nfsd4_encode_secinfo,
+-	[OP_SETATTR]		= (nfsd4_enc)nfsd4_encode_setattr,
+-	[OP_SETCLIENTID]	= (nfsd4_enc)nfsd4_encode_setclientid,
+-	[OP_SETCLIENTID_CONFIRM] = (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_VERIFY]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_WRITE]		= (nfsd4_enc)nfsd4_encode_write,
+-	[OP_RELEASE_LOCKOWNER]	= (nfsd4_enc)nfsd4_encode_noop,
++	[OP_ACCESS]		= nfsd4_encode_access,
++	[OP_CLOSE]		= nfsd4_encode_close,
++	[OP_COMMIT]		= nfsd4_encode_commit,
++	[OP_CREATE]		= nfsd4_encode_create,
++	[OP_DELEGPURGE]		= nfsd4_encode_noop,
++	[OP_DELEGRETURN]	= nfsd4_encode_noop,
++	[OP_GETATTR]		= nfsd4_encode_getattr,
++	[OP_GETFH]		= nfsd4_encode_getfh,
++	[OP_LINK]		= nfsd4_encode_link,
++	[OP_LOCK]		= nfsd4_encode_lock,
++	[OP_LOCKT]		= nfsd4_encode_lockt,
++	[OP_LOCKU]		= nfsd4_encode_locku,
++	[OP_LOOKUP]		= nfsd4_encode_noop,
++	[OP_LOOKUPP]		= nfsd4_encode_noop,
++	[OP_NVERIFY]		= nfsd4_encode_noop,
++	[OP_OPEN]		= nfsd4_encode_open,
++	[OP_OPENATTR]		= nfsd4_encode_noop,
++	[OP_OPEN_CONFIRM]	= nfsd4_encode_open_confirm,
++	[OP_OPEN_DOWNGRADE]	= nfsd4_encode_open_downgrade,
++	[OP_PUTFH]		= nfsd4_encode_noop,
++	[OP_PUTPUBFH]		= nfsd4_encode_noop,
++	[OP_PUTROOTFH]		= nfsd4_encode_noop,
++	[OP_READ]		= nfsd4_encode_read,
++	[OP_READDIR]		= nfsd4_encode_readdir,
++	[OP_READLINK]		= nfsd4_encode_readlink,
++	[OP_REMOVE]		= nfsd4_encode_remove,
++	[OP_RENAME]		= nfsd4_encode_rename,
++	[OP_RENEW]		= nfsd4_encode_noop,
++	[OP_RESTOREFH]		= nfsd4_encode_noop,
++	[OP_SAVEFH]		= nfsd4_encode_noop,
++	[OP_SECINFO]		= nfsd4_encode_secinfo,
++	[OP_SETATTR]		= nfsd4_encode_setattr,
++	[OP_SETCLIENTID]	= nfsd4_encode_setclientid,
++	[OP_SETCLIENTID_CONFIRM] = nfsd4_encode_noop,
++	[OP_VERIFY]		= nfsd4_encode_noop,
++	[OP_WRITE]		= nfsd4_encode_write,
++	[OP_RELEASE_LOCKOWNER]	= nfsd4_encode_noop,
+ 
+ 	/* NFSv4.1 operations */
+-	[OP_BACKCHANNEL_CTL]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_BIND_CONN_TO_SESSION] = (nfsd4_enc)nfsd4_encode_bind_conn_to_session,
+-	[OP_EXCHANGE_ID]	= (nfsd4_enc)nfsd4_encode_exchange_id,
+-	[OP_CREATE_SESSION]	= (nfsd4_enc)nfsd4_encode_create_session,
+-	[OP_DESTROY_SESSION]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_FREE_STATEID]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_GET_DIR_DELEGATION]	= (nfsd4_enc)nfsd4_encode_noop,
++	[OP_BACKCHANNEL_CTL]	= nfsd4_encode_noop,
++	[OP_BIND_CONN_TO_SESSION] = nfsd4_encode_bind_conn_to_session,
++	[OP_EXCHANGE_ID]	= nfsd4_encode_exchange_id,
++	[OP_CREATE_SESSION]	= nfsd4_encode_create_session,
++	[OP_DESTROY_SESSION]	= nfsd4_encode_noop,
++	[OP_FREE_STATEID]	= nfsd4_encode_noop,
++	[OP_GET_DIR_DELEGATION]	= nfsd4_encode_noop,
+ #ifdef CONFIG_NFSD_PNFS
+-	[OP_GETDEVICEINFO]	= (nfsd4_enc)nfsd4_encode_getdeviceinfo,
+-	[OP_GETDEVICELIST]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_LAYOUTCOMMIT]	= (nfsd4_enc)nfsd4_encode_layoutcommit,
+-	[OP_LAYOUTGET]		= (nfsd4_enc)nfsd4_encode_layoutget,
+-	[OP_LAYOUTRETURN]	= (nfsd4_enc)nfsd4_encode_layoutreturn,
++	[OP_GETDEVICEINFO]	= nfsd4_encode_getdeviceinfo,
++	[OP_GETDEVICELIST]	= nfsd4_encode_noop,
++	[OP_LAYOUTCOMMIT]	= nfsd4_encode_layoutcommit,
++	[OP_LAYOUTGET]		= nfsd4_encode_layoutget,
++	[OP_LAYOUTRETURN]	= nfsd4_encode_layoutreturn,
+ #else
+-	[OP_GETDEVICEINFO]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_GETDEVICELIST]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_LAYOUTCOMMIT]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_LAYOUTGET]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_LAYOUTRETURN]	= (nfsd4_enc)nfsd4_encode_noop,
++	[OP_GETDEVICEINFO]	= nfsd4_encode_noop,
++	[OP_GETDEVICELIST]	= nfsd4_encode_noop,
++	[OP_LAYOUTCOMMIT]	= nfsd4_encode_noop,
++	[OP_LAYOUTGET]		= nfsd4_encode_noop,
++	[OP_LAYOUTRETURN]	= nfsd4_encode_noop,
+ #endif
+-	[OP_SECINFO_NO_NAME]	= (nfsd4_enc)nfsd4_encode_secinfo_no_name,
+-	[OP_SEQUENCE]		= (nfsd4_enc)nfsd4_encode_sequence,
+-	[OP_SET_SSV]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_TEST_STATEID]	= (nfsd4_enc)nfsd4_encode_test_stateid,
+-	[OP_WANT_DELEGATION]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_DESTROY_CLIENTID]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_RECLAIM_COMPLETE]	= (nfsd4_enc)nfsd4_encode_noop,
++	[OP_SECINFO_NO_NAME]	= nfsd4_encode_secinfo_no_name,
++	[OP_SEQUENCE]		= nfsd4_encode_sequence,
++	[OP_SET_SSV]		= nfsd4_encode_noop,
++	[OP_TEST_STATEID]	= nfsd4_encode_test_stateid,
++	[OP_WANT_DELEGATION]	= nfsd4_encode_noop,
++	[OP_DESTROY_CLIENTID]	= nfsd4_encode_noop,
++	[OP_RECLAIM_COMPLETE]	= nfsd4_encode_noop,
+ 
+ 	/* NFSv4.2 operations */
+-	[OP_ALLOCATE]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_COPY]		= (nfsd4_enc)nfsd4_encode_copy,
+-	[OP_COPY_NOTIFY]	= (nfsd4_enc)nfsd4_encode_copy_notify,
+-	[OP_DEALLOCATE]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_IO_ADVISE]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_LAYOUTERROR]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_LAYOUTSTATS]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_OFFLOAD_CANCEL]	= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_OFFLOAD_STATUS]	= (nfsd4_enc)nfsd4_encode_offload_status,
+-	[OP_READ_PLUS]		= (nfsd4_enc)nfsd4_encode_read_plus,
+-	[OP_SEEK]		= (nfsd4_enc)nfsd4_encode_seek,
+-	[OP_WRITE_SAME]		= (nfsd4_enc)nfsd4_encode_noop,
+-	[OP_CLONE]		= (nfsd4_enc)nfsd4_encode_noop,
++	[OP_ALLOCATE]		= nfsd4_encode_noop,
++	[OP_COPY]		= nfsd4_encode_copy,
++	[OP_COPY_NOTIFY]	= nfsd4_encode_copy_notify,
++	[OP_DEALLOCATE]		= nfsd4_encode_noop,
++	[OP_IO_ADVISE]		= nfsd4_encode_noop,
++	[OP_LAYOUTERROR]	= nfsd4_encode_noop,
++	[OP_LAYOUTSTATS]	= nfsd4_encode_noop,
++	[OP_OFFLOAD_CANCEL]	= nfsd4_encode_noop,
++	[OP_OFFLOAD_STATUS]	= nfsd4_encode_offload_status,
++	[OP_READ_PLUS]		= nfsd4_encode_read_plus,
++	[OP_SEEK]		= nfsd4_encode_seek,
++	[OP_WRITE_SAME]		= nfsd4_encode_noop,
++	[OP_CLONE]		= nfsd4_encode_noop,
+ 
+ 	/* RFC 8276 extended atributes operations */
+-	[OP_GETXATTR]		= (nfsd4_enc)nfsd4_encode_getxattr,
+-	[OP_SETXATTR]		= (nfsd4_enc)nfsd4_encode_setxattr,
+-	[OP_LISTXATTRS]		= (nfsd4_enc)nfsd4_encode_listxattrs,
+-	[OP_REMOVEXATTR]	= (nfsd4_enc)nfsd4_encode_removexattr,
++	[OP_GETXATTR]		= nfsd4_encode_getxattr,
++	[OP_SETXATTR]		= nfsd4_encode_setxattr,
++	[OP_LISTXATTRS]		= nfsd4_encode_listxattrs,
++	[OP_REMOVEXATTR]	= nfsd4_encode_removexattr,
+ };
+ 
+ /*
+@@ -5178,7 +5394,7 @@ __be32 nfsd4_check_resp_size(struct nfsd4_compoundres *resp, u32 respsize)
+ void
+ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
+ {
+-	struct xdr_stream *xdr = &resp->xdr;
++	struct xdr_stream *xdr = resp->xdr;
+ 	struct nfs4_stateowner *so = resp->cstate.replay_owner;
+ 	struct svc_rqst *rqstp = resp->rqstp;
+ 	const struct nfsd4_operation *opdesc = op->opdesc;
+@@ -5187,10 +5403,8 @@ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
+ 	__be32 *p;
+ 
+ 	p = xdr_reserve_space(xdr, 8);
+-	if (!p) {
+-		WARN_ON_ONCE(1);
+-		return;
+-	}
++	if (!p)
++		goto release;
+ 	*p++ = cpu_to_be32(op->opnum);
+ 	post_err_offset = xdr->buf->len;
+ 
+@@ -5199,12 +5413,12 @@ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
+ 	if (op->status && opdesc &&
+ 			!(opdesc->op_flags & OP_NONTRIVIAL_ERROR_ENCODE))
+ 		goto status;
+-	BUG_ON(op->opnum < 0 || op->opnum >= ARRAY_SIZE(nfsd4_enc_ops) ||
++	BUG_ON(op->opnum >= ARRAY_SIZE(nfsd4_enc_ops) ||
+ 	       !nfsd4_enc_ops[op->opnum]);
+ 	encoder = nfsd4_enc_ops[op->opnum];
+ 	op->status = encoder(resp, op->status, &op->u);
+-	if (opdesc && opdesc->op_release)
+-		opdesc->op_release(&op->u);
++	if (op->status)
++		trace_nfsd_compound_encode_err(rqstp, op->opnum, op->status);
+ 	xdr_commit_encode(xdr);
+ 
+ 	/* nfsd4_check_resp_size guarantees enough room for error status */
+@@ -5244,8 +5458,10 @@ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
+ 						so->so_replay.rp_buf, len);
+ 	}
+ status:
+-	/* Note that op->status is already in network byte order: */
+-	write_bytes_to_xdr_buf(xdr->buf, post_err_offset - 4, &op->status, 4);
++	*p = op->status;
++release:
++	if (opdesc && opdesc->op_release)
++		opdesc->op_release(&op->u);
+ }
+ 
+ /* 
+@@ -5271,22 +5487,14 @@ nfsd4_encode_replay(struct xdr_stream *xdr, struct nfsd4_op *op)
+ 	p = xdr_encode_opaque_fixed(p, rp->rp_buf, rp->rp_buflen);
+ }
+ 
+-int
+-nfs4svc_encode_voidres(struct svc_rqst *rqstp, __be32 *p)
+-{
+-        return xdr_ressize_check(rqstp, p);
+-}
+-
+ void nfsd4_release_compoundargs(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd4_compoundargs *args = rqstp->rq_argp;
+ 
+ 	if (args->ops != args->iops) {
+-		kfree(args->ops);
++		vfree(args->ops);
+ 		args->ops = args->iops;
+ 	}
+-	kfree(args->tmpp);
+-	args->tmpp = NULL;
+ 	while (args->to_free) {
+ 		struct svcxdr_tmpbuf *tb = args->to_free;
+ 		args->to_free = tb->next;
+@@ -5294,57 +5502,44 @@ void nfsd4_release_compoundargs(struct svc_rqst *rqstp)
+ 	}
+ }
+ 
+-int
+-nfs4svc_decode_voidarg(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	return 1;
+-}
+-
+-int
+-nfs4svc_decode_compoundargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs4svc_decode_compoundargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd4_compoundargs *args = rqstp->rq_argp;
+ 
+-	if (rqstp->rq_arg.head[0].iov_len % 4) {
+-		/* client is nuts */
+-		dprintk("%s: compound not properly padded! (peeraddr=%pISc xid=0x%x)",
+-			__func__, svc_addr(rqstp), be32_to_cpu(rqstp->rq_xid));
+-		return 0;
+-	}
+-	args->p = p;
+-	args->end = rqstp->rq_arg.head[0].iov_base + rqstp->rq_arg.head[0].iov_len;
+-	args->pagelist = rqstp->rq_arg.pages;
+-	args->pagelen = rqstp->rq_arg.page_len;
+-	args->tail = false;
+-	args->tmpp = NULL;
++	/* svcxdr_tmp_alloc */
+ 	args->to_free = NULL;
++
++	args->xdr = xdr;
+ 	args->ops = args->iops;
+ 	args->rqstp = rqstp;
+ 
+-	return !nfsd4_decode_compound(args);
++	return nfsd4_decode_compound(args);
+ }
+ 
+-int
+-nfs4svc_encode_compoundres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfs4svc_encode_compoundres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd4_compoundres *resp = rqstp->rq_resp;
+-	struct xdr_buf *buf = resp->xdr.buf;
++	__be32 *p;
+ 
+-	WARN_ON_ONCE(buf->len != buf->head[0].iov_len + buf->page_len +
+-				 buf->tail[0].iov_len);
++	/*
++	 * Send buffer space for the following items is reserved
++	 * at the top of nfsd4_proc_compound().
++	 */
++	p = resp->statusp;
+ 
+-	*p = resp->cstate.status;
++	*p++ = resp->cstate.status;
+ 
+-	rqstp->rq_next_page = resp->xdr.page_ptr + 1;
++	rqstp->rq_next_page = xdr->page_ptr + 1;
+ 
+-	p = resp->tagp;
+ 	*p++ = htonl(resp->taglen);
+ 	memcpy(p, resp->tag, resp->taglen);
+ 	p += XDR_QUADLEN(resp->taglen);
+ 	*p++ = htonl(resp->opcnt);
+ 
+ 	nfsd4_sequence_done(resp);
+-	return 1;
++	return true;
+ }
+ 
+ /*
+diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
+index 80c90fc231a53..2b5417e06d80d 100644
+--- a/fs/nfsd/nfscache.c
++++ b/fs/nfsd/nfscache.c
+@@ -84,12 +84,6 @@ nfsd_hashsize(unsigned int limit)
+ 	return roundup_pow_of_two(limit / TARGET_BUCKET_SIZE);
+ }
+ 
+-static u32
+-nfsd_cache_hash(__be32 xid, struct nfsd_net *nn)
+-{
+-	return hash_32(be32_to_cpu(xid), nn->maskbits);
+-}
+-
+ static struct svc_cacherep *
+ nfsd_reply_cache_alloc(struct svc_rqst *rqstp, __wsum csum,
+ 			struct nfsd_net *nn)
+@@ -121,14 +115,14 @@ nfsd_reply_cache_free_locked(struct nfsd_drc_bucket *b, struct svc_cacherep *rp,
+ 				struct nfsd_net *nn)
+ {
+ 	if (rp->c_type == RC_REPLBUFF && rp->c_replvec.iov_base) {
+-		nn->drc_mem_usage -= rp->c_replvec.iov_len;
++		nfsd_stats_drc_mem_usage_sub(nn, rp->c_replvec.iov_len);
+ 		kfree(rp->c_replvec.iov_base);
+ 	}
+ 	if (rp->c_state != RC_UNUSED) {
+ 		rb_erase(&rp->c_node, &b->rb_head);
+ 		list_del(&rp->c_lru);
+ 		atomic_dec(&nn->num_drc_entries);
+-		nn->drc_mem_usage -= sizeof(*rp);
++		nfsd_stats_drc_mem_usage_sub(nn, sizeof(*rp));
+ 	}
+ 	kmem_cache_free(drc_slab, rp);
+ }
+@@ -154,6 +148,16 @@ void nfsd_drc_slab_free(void)
+ 	kmem_cache_destroy(drc_slab);
+ }
+ 
++static int nfsd_reply_cache_stats_init(struct nfsd_net *nn)
++{
++	return nfsd_percpu_counters_init(nn->counter, NFSD_NET_COUNTERS_NUM);
++}
++
++static void nfsd_reply_cache_stats_destroy(struct nfsd_net *nn)
++{
++	nfsd_percpu_counters_destroy(nn->counter, NFSD_NET_COUNTERS_NUM);
++}
++
+ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ {
+ 	unsigned int hashsize;
+@@ -165,12 +169,16 @@ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ 	hashsize = nfsd_hashsize(nn->max_drc_entries);
+ 	nn->maskbits = ilog2(hashsize);
+ 
++	status = nfsd_reply_cache_stats_init(nn);
++	if (status)
++		goto out_nomem;
++
+ 	nn->nfsd_reply_cache_shrinker.scan_objects = nfsd_reply_cache_scan;
+ 	nn->nfsd_reply_cache_shrinker.count_objects = nfsd_reply_cache_count;
+ 	nn->nfsd_reply_cache_shrinker.seeks = 1;
+ 	status = register_shrinker(&nn->nfsd_reply_cache_shrinker);
+ 	if (status)
+-		goto out_nomem;
++		goto out_stats_destroy;
+ 
+ 	nn->drc_hashtbl = kvzalloc(array_size(hashsize,
+ 				sizeof(*nn->drc_hashtbl)), GFP_KERNEL);
+@@ -186,6 +194,8 @@ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ 	return 0;
+ out_shrinker:
+ 	unregister_shrinker(&nn->nfsd_reply_cache_shrinker);
++out_stats_destroy:
++	nfsd_reply_cache_stats_destroy(nn);
+ out_nomem:
+ 	printk(KERN_ERR "nfsd: failed to allocate reply cache\n");
+ 	return -ENOMEM;
+@@ -206,6 +216,7 @@ void nfsd_reply_cache_shutdown(struct nfsd_net *nn)
+ 									rp, nn);
+ 		}
+ 	}
++	nfsd_reply_cache_stats_destroy(nn);
+ 
+ 	kvfree(nn->drc_hashtbl);
+ 	nn->drc_hashtbl = NULL;
+@@ -224,8 +235,16 @@ lru_put_end(struct nfsd_drc_bucket *b, struct svc_cacherep *rp)
+ 	list_move_tail(&rp->c_lru, &b->lru_head);
+ }
+ 
+-static long
+-prune_bucket(struct nfsd_drc_bucket *b, struct nfsd_net *nn)
++static noinline struct nfsd_drc_bucket *
++nfsd_cache_bucket_find(__be32 xid, struct nfsd_net *nn)
++{
++	unsigned int hash = hash_32((__force u32)xid, nn->maskbits);
++
++	return &nn->drc_hashtbl[hash];
++}
++
++static long prune_bucket(struct nfsd_drc_bucket *b, struct nfsd_net *nn,
++			 unsigned int max)
+ {
+ 	struct svc_cacherep *rp, *tmp;
+ 	long freed = 0;
+@@ -241,11 +260,17 @@ prune_bucket(struct nfsd_drc_bucket *b, struct nfsd_net *nn)
+ 		    time_before(jiffies, rp->c_timestamp + RC_EXPIRE))
+ 			break;
+ 		nfsd_reply_cache_free_locked(b, rp, nn);
+-		freed++;
++		if (max && freed++ > max)
++			break;
+ 	}
+ 	return freed;
+ }
+ 
++static long nfsd_prune_bucket(struct nfsd_drc_bucket *b, struct nfsd_net *nn)
++{
++	return prune_bucket(b, nn, 3);
++}
++
+ /*
+  * Walk the LRU list and prune off entries that are older than RC_EXPIRE.
+  * Also prune the oldest ones when the total exceeds the max number of entries.
+@@ -262,7 +287,7 @@ prune_cache_entries(struct nfsd_net *nn)
+ 		if (list_empty(&b->lru_head))
+ 			continue;
+ 		spin_lock(&b->cache_lock);
+-		freed += prune_bucket(b, nn);
++		freed += prune_bucket(b, nn, 0);
+ 		spin_unlock(&b->cache_lock);
+ 	}
+ 	return freed;
+@@ -324,7 +349,7 @@ nfsd_cache_key_cmp(const struct svc_cacherep *key,
+ {
+ 	if (key->c_key.k_xid == rp->c_key.k_xid &&
+ 	    key->c_key.k_csum != rp->c_key.k_csum) {
+-		++nn->payload_misses;
++		nfsd_stats_payload_misses_inc(nn);
+ 		trace_nfsd_drc_mismatch(nn, key, rp);
+ 	}
+ 
+@@ -396,18 +421,16 @@ nfsd_cache_insert(struct nfsd_drc_bucket *b, struct svc_cacherep *key,
+  */
+ int nfsd_cache_lookup(struct svc_rqst *rqstp)
+ {
+-	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
++	struct nfsd_net		*nn;
+ 	struct svc_cacherep	*rp, *found;
+-	__be32			xid = rqstp->rq_xid;
+ 	__wsum			csum;
+-	u32 hash = nfsd_cache_hash(xid, nn);
+-	struct nfsd_drc_bucket *b = &nn->drc_hashtbl[hash];
++	struct nfsd_drc_bucket	*b;
+ 	int type = rqstp->rq_cachetype;
+ 	int rtn = RC_DOIT;
+ 
+ 	rqstp->rq_cacherep = NULL;
+ 	if (type == RC_NOCACHE) {
+-		nfsdstats.rcnocache++;
++		nfsd_stats_rc_nocache_inc();
+ 		goto out;
+ 	}
+ 
+@@ -417,27 +440,25 @@ int nfsd_cache_lookup(struct svc_rqst *rqstp)
+ 	 * Since the common case is a cache miss followed by an insert,
+ 	 * preallocate an entry.
+ 	 */
++	nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 	rp = nfsd_reply_cache_alloc(rqstp, csum, nn);
+ 	if (!rp)
+ 		goto out;
+ 
++	b = nfsd_cache_bucket_find(rqstp->rq_xid, nn);
+ 	spin_lock(&b->cache_lock);
+ 	found = nfsd_cache_insert(b, rp, nn);
+-	if (found != rp) {
+-		nfsd_reply_cache_free_locked(NULL, rp, nn);
+-		rp = found;
++	if (found != rp)
+ 		goto found_entry;
+-	}
+ 
+-	nfsdstats.rcmisses++;
++	nfsd_stats_rc_misses_inc();
+ 	rqstp->rq_cacherep = rp;
+ 	rp->c_state = RC_INPROG;
+ 
+ 	atomic_inc(&nn->num_drc_entries);
+-	nn->drc_mem_usage += sizeof(*rp);
++	nfsd_stats_drc_mem_usage_add(nn, sizeof(*rp));
+ 
+-	/* go ahead and prune the cache */
+-	prune_bucket(b, nn);
++	nfsd_prune_bucket(b, nn);
+ 
+ out_unlock:
+ 	spin_unlock(&b->cache_lock);
+@@ -446,8 +467,10 @@ int nfsd_cache_lookup(struct svc_rqst *rqstp)
+ 
+ found_entry:
+ 	/* We found a matching entry which is either in progress or done. */
+-	nfsdstats.rchits++;
++	nfsd_reply_cache_free_locked(NULL, rp, nn);
++	nfsd_stats_rc_hits_inc();
+ 	rtn = RC_DROPIT;
++	rp = found;
+ 
+ 	/* Request being processed */
+ 	if (rp->c_state == RC_INPROG)
+@@ -506,7 +529,6 @@ void nfsd_cache_update(struct svc_rqst *rqstp, int cachetype, __be32 *statp)
+ 	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 	struct svc_cacherep *rp = rqstp->rq_cacherep;
+ 	struct kvec	*resv = &rqstp->rq_res.head[0], *cachv;
+-	u32		hash;
+ 	struct nfsd_drc_bucket *b;
+ 	int		len;
+ 	size_t		bufsize = 0;
+@@ -514,8 +536,7 @@ void nfsd_cache_update(struct svc_rqst *rqstp, int cachetype, __be32 *statp)
+ 	if (!rp)
+ 		return;
+ 
+-	hash = nfsd_cache_hash(rp->c_key.k_xid, nn);
+-	b = &nn->drc_hashtbl[hash];
++	b = nfsd_cache_bucket_find(rp->c_key.k_xid, nn);
+ 
+ 	len = resv->iov_len - ((char*)statp - (char*)resv->iov_base);
+ 	len >>= 2;
+@@ -548,7 +569,7 @@ void nfsd_cache_update(struct svc_rqst *rqstp, int cachetype, __be32 *statp)
+ 		return;
+ 	}
+ 	spin_lock(&b->cache_lock);
+-	nn->drc_mem_usage += bufsize;
++	nfsd_stats_drc_mem_usage_add(nn, bufsize);
+ 	lru_put_end(b, rp);
+ 	rp->c_secure = test_bit(RQ_SECURE, &rqstp->rq_flags);
+ 	rp->c_type = cachetype;
+@@ -582,28 +603,26 @@ nfsd_cache_append(struct svc_rqst *rqstp, struct kvec *data)
+  * scraping this file for info should test the labels to ensure they're
+  * getting the correct field.
+  */
+-static int nfsd_reply_cache_stats_show(struct seq_file *m, void *v)
++int nfsd_reply_cache_stats_show(struct seq_file *m, void *v)
+ {
+-	struct nfsd_net *nn = m->private;
++	struct nfsd_net *nn = net_generic(file_inode(m->file)->i_sb->s_fs_info,
++					  nfsd_net_id);
+ 
+ 	seq_printf(m, "max entries:           %u\n", nn->max_drc_entries);
+ 	seq_printf(m, "num entries:           %u\n",
+-			atomic_read(&nn->num_drc_entries));
++		   atomic_read(&nn->num_drc_entries));
+ 	seq_printf(m, "hash buckets:          %u\n", 1 << nn->maskbits);
+-	seq_printf(m, "mem usage:             %u\n", nn->drc_mem_usage);
+-	seq_printf(m, "cache hits:            %u\n", nfsdstats.rchits);
+-	seq_printf(m, "cache misses:          %u\n", nfsdstats.rcmisses);
+-	seq_printf(m, "not cached:            %u\n", nfsdstats.rcnocache);
+-	seq_printf(m, "payload misses:        %u\n", nn->payload_misses);
++	seq_printf(m, "mem usage:             %lld\n",
++		   percpu_counter_sum_positive(&nn->counter[NFSD_NET_DRC_MEM_USAGE]));
++	seq_printf(m, "cache hits:            %lld\n",
++		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_RC_HITS]));
++	seq_printf(m, "cache misses:          %lld\n",
++		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_RC_MISSES]));
++	seq_printf(m, "not cached:            %lld\n",
++		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_RC_NOCACHE]));
++	seq_printf(m, "payload misses:        %lld\n",
++		   percpu_counter_sum_positive(&nn->counter[NFSD_NET_PAYLOAD_MISSES]));
+ 	seq_printf(m, "longest chain len:     %u\n", nn->longest_chain);
+ 	seq_printf(m, "cachesize at longest:  %u\n", nn->longest_chain_cachesize);
+ 	return 0;
+ }
+-
+-int nfsd_reply_cache_stats_open(struct inode *inode, struct file *file)
+-{
+-	struct nfsd_net *nn = net_generic(file_inode(file)->i_sb->s_fs_info,
+-								nfsd_net_id);
+-
+-	return single_open(file, nfsd_reply_cache_stats_show, nn);
+-}
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index c4b11560ac1b6..f77f00c931723 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -25,6 +25,7 @@
+ #include "state.h"
+ #include "netns.h"
+ #include "pnfs.h"
++#include "filecache.h"
+ 
+ /*
+  *	We have a single directory with several nodes in it.
+@@ -32,6 +33,7 @@
+ enum {
+ 	NFSD_Root = 1,
+ 	NFSD_List,
++	NFSD_Export_Stats,
+ 	NFSD_Export_features,
+ 	NFSD_Fh,
+ 	NFSD_FO_UnlockIP,
+@@ -44,6 +46,7 @@ enum {
+ 	NFSD_Ports,
+ 	NFSD_MaxBlkSize,
+ 	NFSD_MaxConnections,
++	NFSD_Filecache,
+ 	NFSD_SupportedEnctypes,
+ 	/*
+ 	 * The below MUST come last.  Otherwise we leave a hole in nfsd_files[]
+@@ -182,17 +185,7 @@ static int export_features_show(struct seq_file *m, void *v)
+ 	return 0;
+ }
+ 
+-static int export_features_open(struct inode *inode, struct file *file)
+-{
+-	return single_open(file, export_features_show, NULL);
+-}
+-
+-static const struct file_operations export_features_operations = {
+-	.open		= export_features_open,
+-	.read		= seq_read,
+-	.llseek		= seq_lseek,
+-	.release	= single_release,
+-};
++DEFINE_SHOW_ATTRIBUTE(export_features);
+ 
+ #if defined(CONFIG_SUNRPC_GSS) || defined(CONFIG_SUNRPC_GSS_MODULE)
+ static int supported_enctypes_show(struct seq_file *m, void *v)
+@@ -201,17 +194,7 @@ static int supported_enctypes_show(struct seq_file *m, void *v)
+ 	return 0;
+ }
+ 
+-static int supported_enctypes_open(struct inode *inode, struct file *file)
+-{
+-	return single_open(file, supported_enctypes_show, NULL);
+-}
+-
+-static const struct file_operations supported_enctypes_ops = {
+-	.open		= supported_enctypes_open,
+-	.read		= seq_read,
+-	.llseek		= seq_lseek,
+-	.release	= single_release,
+-};
++DEFINE_SHOW_ATTRIBUTE(supported_enctypes);
+ #endif /* CONFIG_SUNRPC_GSS or CONFIG_SUNRPC_GSS_MODULE */
+ 
+ static const struct file_operations pool_stats_operations = {
+@@ -221,12 +204,9 @@ static const struct file_operations pool_stats_operations = {
+ 	.release	= nfsd_pool_stats_release,
+ };
+ 
+-static const struct file_operations reply_cache_stats_operations = {
+-	.open		= nfsd_reply_cache_stats_open,
+-	.read		= seq_read,
+-	.llseek		= seq_lseek,
+-	.release	= single_release,
+-};
++DEFINE_SHOW_ATTRIBUTE(nfsd_reply_cache_stats);
++
++DEFINE_SHOW_ATTRIBUTE(nfsd_file_cache_stats);
+ 
+ /*----------------------------------------------------------------------------*/
+ /*
+@@ -394,12 +374,12 @@ static ssize_t write_filehandle(struct file *file, char *buf, size_t size)
+ 	auth_domain_put(dom);
+ 	if (len)
+ 		return len;
+-	
++
+ 	mesg = buf;
+ 	len = SIMPLE_TRANSACTION_LIMIT;
+-	qword_addhex(&mesg, &len, (char*)&fh.fh_base, fh.fh_size);
++	qword_addhex(&mesg, &len, fh.fh_raw, fh.fh_size);
+ 	mesg[-1] = '\n';
+-	return mesg - buf;	
++	return mesg - buf;
+ }
+ 
+ /*
+@@ -601,7 +581,9 @@ static ssize_t __write_versions(struct file *file, char *buf, size_t size)
+ 
+ 			cmd = sign == '-' ? NFSD_CLEAR : NFSD_SET;
+ 			switch(num) {
++#ifdef CONFIG_NFSD_V2
+ 			case 2:
++#endif
+ 			case 3:
+ 				nfsd_vers(nn, num, cmd);
+ 				break;
+@@ -621,7 +603,9 @@ static ssize_t __write_versions(struct file *file, char *buf, size_t size)
+ 				}
+ 				break;
+ 			default:
+-				return -EINVAL;
++				/* Ignore requests to disable non-existent versions */
++				if (cmd == NFSD_SET)
++					return -EINVAL;
+ 			}
+ 			vers += len + 1;
+ 		} while ((len = qword_get(&mesg, vers, size)) > 0);
+@@ -632,7 +616,6 @@ static ssize_t __write_versions(struct file *file, char *buf, size_t size)
+ 	}
+ 
+ 	/* Now write current state into reply buffer */
+-	len = 0;
+ 	sep = "";
+ 	remaining = SIMPLE_TRANSACTION_LIMIT;
+ 	for (num=2 ; num <= 4 ; num++) {
+@@ -726,28 +709,25 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
+ 	char *mesg = buf;
+ 	int fd, err;
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
++	struct svc_serv *serv;
+ 
+ 	err = get_int(&mesg, &fd);
+ 	if (err != 0 || fd < 0)
+ 		return -EINVAL;
+ 
+-	if (svc_alien_sock(net, fd)) {
+-		printk(KERN_ERR "%s: socket net is different to NFSd's one\n", __func__);
+-		return -EINVAL;
+-	}
+-
+ 	err = nfsd_create_serv(net);
+ 	if (err != 0)
+ 		return err;
+ 
+-	err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
+-	if (err < 0) {
+-		nfsd_destroy(net);
+-		return err;
+-	}
++	serv = nn->nfsd_serv;
++	err = svc_addsock(serv, net, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
++
++	if (err < 0 && !serv->sv_nrthreads && !nn->keep_active)
++		nfsd_last_thread(net);
++	else if (err >= 0 && !serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
++		svc_get(serv);
+ 
+-	/* Decrease the count, but don't shut down the service */
+-	nn->nfsd_serv->sv_nrthreads--;
++	svc_put(serv);
+ 	return err;
+ }
+ 
+@@ -761,6 +741,7 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
+ 	struct svc_xprt *xprt;
+ 	int port, err;
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
++	struct svc_serv *serv;
+ 
+ 	if (sscanf(buf, "%15s %5u", transport, &port) != 2)
+ 		return -EINVAL;
+@@ -772,30 +753,33 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
+ 	if (err != 0)
+ 		return err;
+ 
+-	err = svc_create_xprt(nn->nfsd_serv, transport, net,
+-				PF_INET, port, SVC_SOCK_ANONYMOUS, cred);
++	serv = nn->nfsd_serv;
++	err = svc_xprt_create(serv, transport, net,
++			      PF_INET, port, SVC_SOCK_ANONYMOUS, cred);
+ 	if (err < 0)
+ 		goto out_err;
+ 
+-	err = svc_create_xprt(nn->nfsd_serv, transport, net,
+-				PF_INET6, port, SVC_SOCK_ANONYMOUS, cred);
++	err = svc_xprt_create(serv, transport, net,
++			      PF_INET6, port, SVC_SOCK_ANONYMOUS, cred);
+ 	if (err < 0 && err != -EAFNOSUPPORT)
+ 		goto out_close;
+ 
+-	/* Decrease the count, but don't shut down the service */
+-	nn->nfsd_serv->sv_nrthreads--;
++	if (!serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
++		svc_get(serv);
++
++	svc_put(serv);
+ 	return 0;
+ out_close:
+-	xprt = svc_find_xprt(nn->nfsd_serv, transport, net, PF_INET, port);
++	xprt = svc_find_xprt(serv, transport, net, PF_INET, port);
+ 	if (xprt != NULL) {
+-		svc_close_xprt(xprt);
++		svc_xprt_close(xprt);
+ 		svc_xprt_put(xprt);
+ 	}
+ out_err:
+-	if (!list_empty(&nn->nfsd_serv->sv_permsocks))
+-		nn->nfsd_serv->sv_nrthreads--;
+-	 else
+-		nfsd_destroy(net);
++	if (!serv->sv_nrthreads && !nn->keep_active)
++		nfsd_last_thread(net);
++
++	svc_put(serv);
+ 	return err;
+ }
+ 
+@@ -1168,6 +1152,7 @@ static struct inode *nfsd_get_inode(struct super_block *sb, umode_t mode)
+ 		inode->i_fop = &simple_dir_operations;
+ 		inode->i_op = &simple_dir_inode_operations;
+ 		inc_nlink(inode);
++		break;
+ 	default:
+ 		break;
+ 	}
+@@ -1269,7 +1254,8 @@ static void nfsdfs_remove_files(struct dentry *root)
+ /* XXX: cut'n'paste from simple_fill_super; figure out if we could share
+  * code instead. */
+ static  int nfsdfs_create_files(struct dentry *root,
+-					const struct tree_descr *files)
++				const struct tree_descr *files,
++				struct dentry **fdentries)
+ {
+ 	struct inode *dir = d_inode(root);
+ 	struct inode *inode;
+@@ -1278,8 +1264,6 @@ static  int nfsdfs_create_files(struct dentry *root,
+ 
+ 	inode_lock(dir);
+ 	for (i = 0; files->name && files->name[0]; i++, files++) {
+-		if (!files->name)
+-			continue;
+ 		dentry = d_alloc_name(root, files->name);
+ 		if (!dentry)
+ 			goto out;
+@@ -1293,6 +1277,8 @@ static  int nfsdfs_create_files(struct dentry *root,
+ 		inode->i_private = __get_nfsdfs_client(dir);
+ 		d_add(dentry, inode);
+ 		fsnotify_create(dir, dentry);
++		if (fdentries)
++			fdentries[i] = dentry;
+ 	}
+ 	inode_unlock(dir);
+ 	return 0;
+@@ -1304,8 +1290,9 @@ static  int nfsdfs_create_files(struct dentry *root,
+ 
+ /* on success, returns positive number unique to that client. */
+ struct dentry *nfsd_client_mkdir(struct nfsd_net *nn,
+-		struct nfsdfs_client *ncl, u32 id,
+-		const struct tree_descr *files)
++				 struct nfsdfs_client *ncl, u32 id,
++				 const struct tree_descr *files,
++				 struct dentry **fdentries)
+ {
+ 	struct dentry *dentry;
+ 	char name[11];
+@@ -1316,7 +1303,7 @@ struct dentry *nfsd_client_mkdir(struct nfsd_net *nn,
+ 	dentry = nfsd_mkdir(nn->nfsd_client_dir, ncl, name);
+ 	if (IS_ERR(dentry)) /* XXX: tossing errors? */
+ 		return NULL;
+-	ret = nfsdfs_create_files(dentry, files);
++	ret = nfsdfs_create_files(dentry, files, fdentries);
+ 	if (ret) {
+ 		nfsd_client_rmdir(dentry);
+ 		return NULL;
+@@ -1352,8 +1339,10 @@ static int nfsd_fill_super(struct super_block *sb, struct fs_context *fc)
+ 
+ 	static const struct tree_descr nfsd_files[] = {
+ 		[NFSD_List] = {"exports", &exports_nfsd_operations, S_IRUGO},
++		/* Per-export io stats use same ops as exports file */
++		[NFSD_Export_Stats] = {"export_stats", &exports_nfsd_operations, S_IRUGO},
+ 		[NFSD_Export_features] = {"export_features",
+-					&export_features_operations, S_IRUGO},
++					&export_features_fops, S_IRUGO},
+ 		[NFSD_FO_UnlockIP] = {"unlock_ip",
+ 					&transaction_ops, S_IWUSR|S_IRUSR},
+ 		[NFSD_FO_UnlockFS] = {"unlock_filesystem",
+@@ -1362,13 +1351,16 @@ static int nfsd_fill_super(struct super_block *sb, struct fs_context *fc)
+ 		[NFSD_Threads] = {"threads", &transaction_ops, S_IWUSR|S_IRUSR},
+ 		[NFSD_Pool_Threads] = {"pool_threads", &transaction_ops, S_IWUSR|S_IRUSR},
+ 		[NFSD_Pool_Stats] = {"pool_stats", &pool_stats_operations, S_IRUGO},
+-		[NFSD_Reply_Cache_Stats] = {"reply_cache_stats", &reply_cache_stats_operations, S_IRUGO},
++		[NFSD_Reply_Cache_Stats] = {"reply_cache_stats",
++					&nfsd_reply_cache_stats_fops, S_IRUGO},
+ 		[NFSD_Versions] = {"versions", &transaction_ops, S_IWUSR|S_IRUSR},
+ 		[NFSD_Ports] = {"portlist", &transaction_ops, S_IWUSR|S_IRUGO},
+ 		[NFSD_MaxBlkSize] = {"max_block_size", &transaction_ops, S_IWUSR|S_IRUGO},
+ 		[NFSD_MaxConnections] = {"max_connections", &transaction_ops, S_IWUSR|S_IRUGO},
++		[NFSD_Filecache] = {"filecache", &nfsd_file_cache_stats_fops, S_IRUGO},
+ #if defined(CONFIG_SUNRPC_GSS) || defined(CONFIG_SUNRPC_GSS_MODULE)
+-		[NFSD_SupportedEnctypes] = {"supported_krb5_enctypes", &supported_enctypes_ops, S_IRUGO},
++		[NFSD_SupportedEnctypes] = {"supported_krb5_enctypes",
++					&supported_enctypes_fops, S_IRUGO},
+ #endif /* CONFIG_SUNRPC_GSS or CONFIG_SUNRPC_GSS_MODULE */
+ #ifdef CONFIG_NFSD_V4
+ 		[NFSD_Leasetime] = {"nfsv4leasetime", &transaction_ops, S_IWUSR|S_IRUSR},
+@@ -1468,25 +1460,16 @@ static __net_init int nfsd_init_net(struct net *net)
+ 		goto out_idmap_error;
+ 	nn->nfsd_versions = NULL;
+ 	nn->nfsd4_minorversions = NULL;
++	nfsd4_init_leases_net(nn);
+ 	retval = nfsd_reply_cache_init(nn);
+ 	if (retval)
+-		goto out_drc_error;
+-	nn->nfsd4_lease = 90;	/* default lease time */
+-	nn->nfsd4_grace = 90;
+-	nn->somebody_reclaimed = false;
+-	nn->track_reclaim_completes = false;
+-	nn->clverifier_counter = prandom_u32();
+-	nn->clientid_base = prandom_u32();
+-	nn->clientid_counter = nn->clientid_base + 1;
+-	nn->s2s_cp_cl_id = nn->clientid_counter++;
+-
+-	atomic_set(&nn->ntf_refcnt, 0);
+-	init_waitqueue_head(&nn->ntf_wq);
+-	seqlock_init(&nn->boot_lock);
++		goto out_cache_error;
++	get_random_bytes(&nn->siphash_key, sizeof(nn->siphash_key));
++	seqlock_init(&nn->writeverf_lock);
+ 
+ 	return 0;
+ 
+-out_drc_error:
++out_cache_error:
+ 	nfsd_idmap_shutdown(net);
+ out_idmap_error:
+ 	nfsd_export_shutdown(net);
+@@ -1514,7 +1497,6 @@ static struct pernet_operations nfsd_net_ops = {
+ static int __init init_nfsd(void)
+ {
+ 	int retval;
+-	printk(KERN_INFO "Installing knfsd (copyright (C) 1996 okir@monad.swb.de).\n");
+ 
+ 	retval = nfsd4_init_slabs();
+ 	if (retval)
+@@ -1522,7 +1504,9 @@ static int __init init_nfsd(void)
+ 	retval = nfsd4_init_pnfs();
+ 	if (retval)
+ 		goto out_free_slabs;
+-	nfsd_stat_init();	/* Statistics */
++	retval = nfsd_stat_init();	/* Statistics */
++	if (retval)
++		goto out_free_pnfs;
+ 	retval = nfsd_drc_slab_create();
+ 	if (retval)
+ 		goto out_free_stat;
+@@ -1530,20 +1514,25 @@ static int __init init_nfsd(void)
+ 	retval = create_proc_exports_entry();
+ 	if (retval)
+ 		goto out_free_lockd;
+-	retval = register_filesystem(&nfsd_fs_type);
+-	if (retval)
+-		goto out_free_exports;
+ 	retval = register_pernet_subsys(&nfsd_net_ops);
+ 	if (retval < 0)
+-		goto out_free_filesystem;
++		goto out_free_exports;
+ 	retval = register_cld_notifier();
++	if (retval)
++		goto out_free_subsys;
++	retval = nfsd4_create_laundry_wq();
++	if (retval)
++		goto out_free_cld;
++	retval = register_filesystem(&nfsd_fs_type);
+ 	if (retval)
+ 		goto out_free_all;
+ 	return 0;
+ out_free_all:
++	nfsd4_destroy_laundry_wq();
++out_free_cld:
++	unregister_cld_notifier();
++out_free_subsys:
+ 	unregister_pernet_subsys(&nfsd_net_ops);
+-out_free_filesystem:
+-	unregister_filesystem(&nfsd_fs_type);
+ out_free_exports:
+ 	remove_proc_entry("fs/nfs/exports", NULL);
+ 	remove_proc_entry("fs/nfs", NULL);
+@@ -1552,6 +1541,7 @@ static int __init init_nfsd(void)
+ 	nfsd_drc_slab_free();
+ out_free_stat:
+ 	nfsd_stat_shutdown();
++out_free_pnfs:
+ 	nfsd4_exit_pnfs();
+ out_free_slabs:
+ 	nfsd4_free_slabs();
+@@ -1560,6 +1550,8 @@ static int __init init_nfsd(void)
+ 
+ static void __exit exit_nfsd(void)
+ {
++	unregister_filesystem(&nfsd_fs_type);
++	nfsd4_destroy_laundry_wq();
+ 	unregister_cld_notifier();
+ 	unregister_pernet_subsys(&nfsd_net_ops);
+ 	nfsd_drc_slab_free();
+@@ -1569,7 +1561,6 @@ static void __exit exit_nfsd(void)
+ 	nfsd_lockd_shutdown();
+ 	nfsd4_free_slabs();
+ 	nfsd4_exit_pnfs();
+-	unregister_filesystem(&nfsd_fs_type);
+ }
+ 
+ MODULE_AUTHOR("Olaf Kirch <okir@monad.swb.de>");
+diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
+index 4362d295ed340..013bfa24ced21 100644
+--- a/fs/nfsd/nfsd.h
++++ b/fs/nfsd/nfsd.h
+@@ -24,8 +24,8 @@
+ #include <uapi/linux/nfsd/debug.h>
+ 
+ #include "netns.h"
+-#include "stats.h"
+ #include "export.h"
++#include "stats.h"
+ 
+ #undef ifdebug
+ #ifdef CONFIG_SUNRPC_DEBUG
+@@ -64,8 +64,7 @@ struct readdir_cd {
+ 
+ 
+ extern struct svc_program	nfsd_program;
+-extern const struct svc_version	nfsd_version2, nfsd_version3,
+-				nfsd_version4;
++extern const struct svc_version	nfsd_version2, nfsd_version3, nfsd_version4;
+ extern struct mutex		nfsd_mutex;
+ extern spinlock_t		nfsd_drc_lock;
+ extern unsigned long		nfsd_drc_max_mem;
+@@ -73,6 +72,16 @@ extern unsigned long		nfsd_drc_mem_used;
+ 
+ extern const struct seq_operations nfs_exports_op;
+ 
++/*
++ * Common void argument and result helpers
++ */
++struct nfsd_voidargs { };
++struct nfsd_voidres { };
++bool		nfssvc_decode_voidarg(struct svc_rqst *rqstp,
++				      struct xdr_stream *xdr);
++bool		nfssvc_encode_voidres(struct svc_rqst *rqstp,
++				      struct xdr_stream *xdr);
++
+ /*
+  * Function prototypes.
+  */
+@@ -87,8 +96,6 @@ int		nfsd_pool_stats_open(struct inode *, struct file *);
+ int		nfsd_pool_stats_release(struct inode *, struct file *);
+ void		nfsd_shutdown_threads(struct net *net);
+ 
+-void		nfsd_destroy(struct net *net);
+-
+ bool		i_am_nfsd(void);
+ 
+ struct nfsdfs_client {
+@@ -98,7 +105,9 @@ struct nfsdfs_client {
+ 
+ struct nfsdfs_client *get_nfsdfs_client(struct inode *);
+ struct dentry *nfsd_client_mkdir(struct nfsd_net *nn,
+-		struct nfsdfs_client *ncl, u32 id, const struct tree_descr *);
++				 struct nfsdfs_client *ncl, u32 id,
++				 const struct tree_descr *,
++				 struct dentry **fdentries);
+ void nfsd_client_rmdir(struct dentry *dentry);
+ 
+ 
+@@ -122,6 +131,7 @@ int nfsd_vers(struct nfsd_net *nn, int vers, enum vers_op change);
+ int nfsd_minorversion(struct nfsd_net *nn, u32 minorversion, enum vers_op change);
+ void nfsd_reset_versions(struct nfsd_net *nn);
+ int nfsd_create_serv(struct net *net);
++void nfsd_last_thread(struct net *net);
+ 
+ extern int nfsd_max_blksize;
+ 
+@@ -150,6 +160,9 @@ void nfs4_state_shutdown_net(struct net *net);
+ int nfs4_reset_recoverydir(char *recdir);
+ char * nfs4_recoverydir(void);
+ bool nfsd4_spo_must_allow(struct svc_rqst *rqstp);
++int nfsd4_create_laundry_wq(void);
++void nfsd4_destroy_laundry_wq(void);
++bool nfsd_wait_for_delegreturn(struct svc_rqst *rqstp, struct inode *inode);
+ #else
+ static inline int nfsd4_init_slabs(void) { return 0; }
+ static inline void nfsd4_free_slabs(void) { }
+@@ -163,6 +176,13 @@ static inline bool nfsd4_spo_must_allow(struct svc_rqst *rqstp)
+ {
+ 	return false;
+ }
++static inline int nfsd4_create_laundry_wq(void) { return 0; };
++static inline void nfsd4_destroy_laundry_wq(void) {};
++static inline bool nfsd_wait_for_delegreturn(struct svc_rqst *rqstp,
++					      struct inode *inode)
++{
++	return false;
++}
+ #endif
+ 
+ /*
+@@ -324,6 +344,10 @@ void		nfsd_lockd_shutdown(void);
+ #define COMPOUND_ERR_SLACK_SPACE	16     /* OP_SETATTR */
+ 
+ #define NFSD_LAUNDROMAT_MINTIMEOUT      1   /* seconds */
++#define	NFSD_COURTESY_CLIENT_TIMEOUT	(24 * 60 * 60)	/* seconds */
++#define	NFSD_CLIENT_MAX_TRIM_PER_RUN	128
++#define	NFS4_CLIENTS_PER_GB		1024
++#define NFSD_DELEGRETURN_TIMEOUT	(HZ / 34)	/* 30ms */
+ 
+ /*
+  * The following attributes are currently not supported by the NFSv4 server:
+@@ -352,7 +376,7 @@ void		nfsd_lockd_shutdown(void);
+  | FATTR4_WORD1_OWNER	        | FATTR4_WORD1_OWNER_GROUP  | FATTR4_WORD1_RAWDEV           \
+  | FATTR4_WORD1_SPACE_AVAIL     | FATTR4_WORD1_SPACE_FREE   | FATTR4_WORD1_SPACE_TOTAL      \
+  | FATTR4_WORD1_SPACE_USED      | FATTR4_WORD1_TIME_ACCESS  | FATTR4_WORD1_TIME_ACCESS_SET  \
+- | FATTR4_WORD1_TIME_DELTA   | FATTR4_WORD1_TIME_METADATA    \
++ | FATTR4_WORD1_TIME_DELTA      | FATTR4_WORD1_TIME_METADATA   | FATTR4_WORD1_TIME_CREATE      \
+  | FATTR4_WORD1_TIME_MODIFY     | FATTR4_WORD1_TIME_MODIFY_SET | FATTR4_WORD1_MOUNTED_ON_FILEID)
+ 
+ #define NFSD4_SUPPORTED_ATTRS_WORD2 0
+@@ -386,7 +410,6 @@ void		nfsd_lockd_shutdown(void);
+ 
+ #define NFSD4_2_SUPPORTED_ATTRS_WORD2 \
+ 	(NFSD4_1_SUPPORTED_ATTRS_WORD2 | \
+-	FATTR4_WORD2_CHANGE_ATTR_TYPE | \
+ 	FATTR4_WORD2_MODE_UMASK | \
+ 	NFSD4_2_SECURITY_ATTRS | \
+ 	FATTR4_WORD2_XATTR_SUPPORT)
+@@ -449,7 +472,8 @@ static inline bool nfsd_attrs_supported(u32 minorversion, const u32 *bmval)
+ 	(FATTR4_WORD0_SIZE | FATTR4_WORD0_ACL)
+ #define NFSD_WRITEABLE_ATTRS_WORD1 \
+ 	(FATTR4_WORD1_MODE | FATTR4_WORD1_OWNER | FATTR4_WORD1_OWNER_GROUP \
+-	| FATTR4_WORD1_TIME_ACCESS_SET | FATTR4_WORD1_TIME_MODIFY_SET)
++	| FATTR4_WORD1_TIME_ACCESS_SET | FATTR4_WORD1_TIME_CREATE \
++	| FATTR4_WORD1_TIME_MODIFY_SET)
+ #ifdef CONFIG_NFSD_V4_SECURITY_LABEL
+ #define MAYBE_FATTR4_WORD2_SECURITY_LABEL \
+ 	FATTR4_WORD2_SECURITY_LABEL
+@@ -475,12 +499,20 @@ static inline bool nfsd_attrs_supported(u32 minorversion, const u32 *bmval)
+ extern int nfsd4_is_junction(struct dentry *dentry);
+ extern int register_cld_notifier(void);
+ extern void unregister_cld_notifier(void);
++#ifdef CONFIG_NFSD_V4_2_INTER_SSC
++extern void nfsd4_ssc_init_umount_work(struct nfsd_net *nn);
++#endif
++
++extern void nfsd4_init_leases_net(struct nfsd_net *nn);
++
+ #else /* CONFIG_NFSD_V4 */
+ static inline int nfsd4_is_junction(struct dentry *dentry)
+ {
+ 	return 0;
+ }
+ 
++static inline void nfsd4_init_leases_net(struct nfsd_net *nn) { };
++
+ #define register_cld_notifier() 0
+ #define unregister_cld_notifier() do { } while(0)
+ 
+diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
+index c81dbbad87920..db8d62632a5be 100644
+--- a/fs/nfsd/nfsfh.c
++++ b/fs/nfsd/nfsfh.c
+@@ -153,11 +153,12 @@ static inline __be32 check_pseudo_root(struct svc_rqst *rqstp,
+ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
+ {
+ 	struct knfsd_fh	*fh = &fhp->fh_handle;
+-	struct fid *fid = NULL, sfid;
++	struct fid *fid = NULL;
+ 	struct svc_export *exp;
+ 	struct dentry *dentry;
+ 	int fileid_type;
+ 	int data_left = fh->fh_size/4;
++	int len;
+ 	__be32 error;
+ 
+ 	error = nfserr_stale;
+@@ -166,48 +167,35 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
+ 	if (rqstp->rq_vers == 4 && fh->fh_size == 0)
+ 		return nfserr_nofilehandle;
+ 
+-	if (fh->fh_version == 1) {
+-		int len;
+-
+-		if (--data_left < 0)
+-			return error;
+-		if (fh->fh_auth_type != 0)
+-			return error;
+-		len = key_len(fh->fh_fsid_type) / 4;
+-		if (len == 0)
+-			return error;
+-		if  (fh->fh_fsid_type == FSID_MAJOR_MINOR) {
+-			/* deprecated, convert to type 3 */
+-			len = key_len(FSID_ENCODE_DEV)/4;
+-			fh->fh_fsid_type = FSID_ENCODE_DEV;
+-			/*
+-			 * struct knfsd_fh uses host-endian fields, which are
+-			 * sometimes used to hold net-endian values. This
+-			 * confuses sparse, so we must use __force here to
+-			 * keep it from complaining.
+-			 */
+-			fh->fh_fsid[0] = new_encode_dev(MKDEV(ntohl((__force __be32)fh->fh_fsid[0]),
+-							ntohl((__force __be32)fh->fh_fsid[1])));
+-			fh->fh_fsid[1] = fh->fh_fsid[2];
+-		}
+-		data_left -= len;
+-		if (data_left < 0)
+-			return error;
+-		exp = rqst_exp_find(rqstp, fh->fh_fsid_type, fh->fh_fsid);
+-		fid = (struct fid *)(fh->fh_fsid + len);
+-	} else {
+-		__u32 tfh[2];
+-		dev_t xdev;
+-		ino_t xino;
+-
+-		if (fh->fh_size != NFS_FHSIZE)
+-			return error;
+-		/* assume old filehandle format */
+-		xdev = old_decode_dev(fh->ofh_xdev);
+-		xino = u32_to_ino_t(fh->ofh_xino);
+-		mk_fsid(FSID_DEV, tfh, xdev, xino, 0, NULL);
+-		exp = rqst_exp_find(rqstp, FSID_DEV, tfh);
++	if (fh->fh_version != 1)
++		return error;
++
++	if (--data_left < 0)
++		return error;
++	if (fh->fh_auth_type != 0)
++		return error;
++	len = key_len(fh->fh_fsid_type) / 4;
++	if (len == 0)
++		return error;
++	if (fh->fh_fsid_type == FSID_MAJOR_MINOR) {
++		/* deprecated, convert to type 3 */
++		len = key_len(FSID_ENCODE_DEV)/4;
++		fh->fh_fsid_type = FSID_ENCODE_DEV;
++		/*
++		 * struct knfsd_fh uses host-endian fields, which are
++		 * sometimes used to hold net-endian values. This
++		 * confuses sparse, so we must use __force here to
++		 * keep it from complaining.
++		 */
++		fh->fh_fsid[0] = new_encode_dev(MKDEV(ntohl((__force __be32)fh->fh_fsid[0]),
++						      ntohl((__force __be32)fh->fh_fsid[1])));
++		fh->fh_fsid[1] = fh->fh_fsid[2];
+ 	}
++	data_left -= len;
++	if (data_left < 0)
++		return error;
++	exp = rqst_exp_find(rqstp, fh->fh_fsid_type, fh->fh_fsid);
++	fid = (struct fid *)(fh->fh_fsid + len);
+ 
+ 	error = nfserr_stale;
+ 	if (IS_ERR(exp)) {
+@@ -252,28 +240,25 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
+ 	if (rqstp->rq_vers > 2)
+ 		error = nfserr_badhandle;
+ 
+-	if (fh->fh_version != 1) {
+-		sfid.i32.ino = fh->ofh_ino;
+-		sfid.i32.gen = fh->ofh_generation;
+-		sfid.i32.parent_ino = fh->ofh_dirino;
+-		fid = &sfid;
+-		data_left = 3;
+-		if (fh->ofh_dirino == 0)
+-			fileid_type = FILEID_INO32_GEN;
+-		else
+-			fileid_type = FILEID_INO32_GEN_PARENT;
+-	} else
+-		fileid_type = fh->fh_fileid_type;
++	fileid_type = fh->fh_fileid_type;
+ 
+ 	if (fileid_type == FILEID_ROOT)
+ 		dentry = dget(exp->ex_path.dentry);
+ 	else {
+-		dentry = exportfs_decode_fh(exp->ex_path.mnt, fid,
+-				data_left, fileid_type,
+-				nfsd_acceptable, exp);
+-		if (IS_ERR_OR_NULL(dentry))
++		dentry = exportfs_decode_fh_raw(exp->ex_path.mnt, fid,
++						data_left, fileid_type,
++						nfsd_acceptable, exp);
++		if (IS_ERR_OR_NULL(dentry)) {
+ 			trace_nfsd_set_fh_dentry_badhandle(rqstp, fhp,
+ 					dentry ?  PTR_ERR(dentry) : -ESTALE);
++			switch (PTR_ERR(dentry)) {
++			case -ENOMEM:
++			case -ETIMEDOUT:
++				break;
++			default:
++				dentry = ERR_PTR(-ESTALE);
++			}
++		}
+ 	}
+ 	if (dentry == NULL)
+ 		goto out;
+@@ -291,6 +276,20 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
+ 
+ 	fhp->fh_dentry = dentry;
+ 	fhp->fh_export = exp;
++
++	switch (rqstp->rq_vers) {
++	case 4:
++		if (dentry->d_sb->s_export_op->flags & EXPORT_OP_NOATOMIC_ATTR)
++			fhp->fh_no_atomic_attr = true;
++		break;
++	case 3:
++		if (dentry->d_sb->s_export_op->flags & EXPORT_OP_NOWCC)
++			fhp->fh_no_wcc = true;
++		break;
++	case 2:
++		fhp->fh_no_wcc = true;
++	}
++
+ 	return 0;
+ out:
+ 	exp_put(exp);
+@@ -327,7 +326,7 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
+ __be32
+ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type, int access)
+ {
+-	struct svc_export *exp;
++	struct svc_export *exp = NULL;
+ 	struct dentry	*dentry;
+ 	__be32		error;
+ 
+@@ -400,7 +399,7 @@ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type, int access)
+ 	}
+ out:
+ 	if (error == nfserr_stale)
+-		nfsdstats.fh_stale++;
++		nfsd_stats_fh_stale_inc(exp);
+ 	return error;
+ }
+ 
+@@ -429,20 +428,6 @@ static void _fh_update(struct svc_fh *fhp, struct svc_export *exp,
+ 	}
+ }
+ 
+-/*
+- * for composing old style file handles
+- */
+-static inline void _fh_update_old(struct dentry *dentry,
+-				  struct svc_export *exp,
+-				  struct knfsd_fh *fh)
+-{
+-	fh->ofh_ino = ino_t_to_u32(d_inode(dentry)->i_ino);
+-	fh->ofh_generation = d_inode(dentry)->i_generation;
+-	if (d_is_dir(dentry) ||
+-	    (exp->ex_flags & NFSEXP_NOSUBTREECHECK))
+-		fh->ofh_dirino = 0;
+-}
+-
+ static bool is_root_export(struct svc_export *exp)
+ {
+ 	return exp->ex_path.dentry == exp->ex_path.dentry->d_sb->s_root;
+@@ -539,9 +524,6 @@ fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry,
+ 	/* ref_fh is a reference file handle.
+ 	 * if it is non-null and for the same filesystem, then we should compose
+ 	 * a filehandle which is of the same version, where possible.
+-	 * Currently, that means that if ref_fh->fh_handle.fh_version == 0xca
+-	 * Then create a 32byte filehandle using nfs_fhbase_old
+-	 *
+ 	 */
+ 
+ 	struct inode * inode = d_inode(dentry);
+@@ -559,10 +541,13 @@ fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry,
+ 	 */
+ 	set_version_and_fsid_type(fhp, exp, ref_fh);
+ 
++	/* If we have a ref_fh, then copy the fh_no_wcc setting from it. */
++	fhp->fh_no_wcc = ref_fh ? ref_fh->fh_no_wcc : false;
++
+ 	if (ref_fh == fhp)
+ 		fh_put(ref_fh);
+ 
+-	if (fhp->fh_locked || fhp->fh_dentry) {
++	if (fhp->fh_dentry) {
+ 		printk(KERN_ERR "fh_compose: fh %pd2 not initialized!\n",
+ 		       dentry);
+ 	}
+@@ -574,35 +559,21 @@ fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry,
+ 	fhp->fh_dentry = dget(dentry); /* our internal copy */
+ 	fhp->fh_export = exp_get(exp);
+ 
+-	if (fhp->fh_handle.fh_version == 0xca) {
+-		/* old style filehandle please */
+-		memset(&fhp->fh_handle.fh_base, 0, NFS_FHSIZE);
+-		fhp->fh_handle.fh_size = NFS_FHSIZE;
+-		fhp->fh_handle.ofh_dcookie = 0xfeebbaca;
+-		fhp->fh_handle.ofh_dev =  old_encode_dev(ex_dev);
+-		fhp->fh_handle.ofh_xdev = fhp->fh_handle.ofh_dev;
+-		fhp->fh_handle.ofh_xino =
+-			ino_t_to_u32(d_inode(exp->ex_path.dentry)->i_ino);
+-		fhp->fh_handle.ofh_dirino = ino_t_to_u32(parent_ino(dentry));
+-		if (inode)
+-			_fh_update_old(dentry, exp, &fhp->fh_handle);
+-	} else {
+-		fhp->fh_handle.fh_size =
+-			key_len(fhp->fh_handle.fh_fsid_type) + 4;
+-		fhp->fh_handle.fh_auth_type = 0;
+-
+-		mk_fsid(fhp->fh_handle.fh_fsid_type,
+-			fhp->fh_handle.fh_fsid,
+-			ex_dev,
+-			d_inode(exp->ex_path.dentry)->i_ino,
+-			exp->ex_fsid, exp->ex_uuid);
+-
+-		if (inode)
+-			_fh_update(fhp, exp, dentry);
+-		if (fhp->fh_handle.fh_fileid_type == FILEID_INVALID) {
+-			fh_put(fhp);
+-			return nfserr_opnotsupp;
+-		}
++	fhp->fh_handle.fh_size =
++		key_len(fhp->fh_handle.fh_fsid_type) + 4;
++	fhp->fh_handle.fh_auth_type = 0;
++
++	mk_fsid(fhp->fh_handle.fh_fsid_type,
++		fhp->fh_handle.fh_fsid,
++		ex_dev,
++		d_inode(exp->ex_path.dentry)->i_ino,
++		exp->ex_fsid, exp->ex_uuid);
++
++	if (inode)
++		_fh_update(fhp, exp, dentry);
++	if (fhp->fh_handle.fh_fileid_type == FILEID_INVALID) {
++		fh_put(fhp);
++		return nfserr_opnotsupp;
+ 	}
+ 
+ 	return 0;
+@@ -623,16 +594,12 @@ fh_update(struct svc_fh *fhp)
+ 	dentry = fhp->fh_dentry;
+ 	if (d_really_is_negative(dentry))
+ 		goto out_negative;
+-	if (fhp->fh_handle.fh_version != 1) {
+-		_fh_update_old(dentry, fhp->fh_export, &fhp->fh_handle);
+-	} else {
+-		if (fhp->fh_handle.fh_fileid_type != FILEID_ROOT)
+-			return 0;
++	if (fhp->fh_handle.fh_fileid_type != FILEID_ROOT)
++		return 0;
+ 
+-		_fh_update(fhp, fhp->fh_export, dentry);
+-		if (fhp->fh_handle.fh_fileid_type == FILEID_INVALID)
+-			return nfserr_opnotsupp;
+-	}
++	_fh_update(fhp, fhp->fh_export, dentry);
++	if (fhp->fh_handle.fh_fileid_type == FILEID_INVALID)
++		return nfserr_opnotsupp;
+ 	return 0;
+ out_bad:
+ 	printk(KERN_ERR "fh_update: fh not verified!\n");
+@@ -643,6 +610,85 @@ fh_update(struct svc_fh *fhp)
+ 	return nfserr_serverfault;
+ }
+ 
++/**
++ * fh_fill_pre_attrs - Fill in pre-op attributes
++ * @fhp: file handle to be updated
++ *
++ */
++void fh_fill_pre_attrs(struct svc_fh *fhp)
++{
++	bool v4 = (fhp->fh_maxsize == NFS4_FHSIZE);
++	struct inode *inode;
++	struct kstat stat;
++	__be32 err;
++
++	if (fhp->fh_no_wcc || fhp->fh_pre_saved)
++		return;
++
++	inode = d_inode(fhp->fh_dentry);
++	err = fh_getattr(fhp, &stat);
++	if (err) {
++		/* Grab the times from inode anyway */
++		stat.mtime = inode->i_mtime;
++		stat.ctime = inode->i_ctime;
++		stat.size  = inode->i_size;
++	}
++	if (v4)
++		fhp->fh_pre_change = nfsd4_change_attribute(&stat, inode);
++
++	fhp->fh_pre_mtime = stat.mtime;
++	fhp->fh_pre_ctime = stat.ctime;
++	fhp->fh_pre_size  = stat.size;
++	fhp->fh_pre_saved = true;
++}
++
++/**
++ * fh_fill_post_attrs - Fill in post-op attributes
++ * @fhp: file handle to be updated
++ *
++ */
++void fh_fill_post_attrs(struct svc_fh *fhp)
++{
++	bool v4 = (fhp->fh_maxsize == NFS4_FHSIZE);
++	struct inode *inode = d_inode(fhp->fh_dentry);
++	__be32 err;
++
++	if (fhp->fh_no_wcc)
++		return;
++
++	if (fhp->fh_post_saved)
++		printk("nfsd: inode locked twice during operation.\n");
++
++	err = fh_getattr(fhp, &fhp->fh_post_attr);
++	if (err) {
++		fhp->fh_post_saved = false;
++		fhp->fh_post_attr.ctime = inode->i_ctime;
++	} else
++		fhp->fh_post_saved = true;
++	if (v4)
++		fhp->fh_post_change =
++			nfsd4_change_attribute(&fhp->fh_post_attr, inode);
++}
++
++/**
++ * fh_fill_both_attrs - Fill pre-op and post-op attributes
++ * @fhp: file handle to be updated
++ *
++ * This is used when the directory wasn't changed, but wcc attributes
++ * are needed anyway.
++ */
++void fh_fill_both_attrs(struct svc_fh *fhp)
++{
++	fh_fill_post_attrs(fhp);
++	if (!fhp->fh_post_saved)
++		return;
++	fhp->fh_pre_change = fhp->fh_post_change;
++	fhp->fh_pre_mtime = fhp->fh_post_attr.mtime;
++	fhp->fh_pre_ctime = fhp->fh_post_attr.ctime;
++	fhp->fh_pre_size = fhp->fh_post_attr.size;
++	fhp->fh_pre_saved = true;
++}
++
+ /*
+  * Release a file handle.
+  */
+@@ -652,16 +698,16 @@ fh_put(struct svc_fh *fhp)
+ 	struct dentry * dentry = fhp->fh_dentry;
+ 	struct svc_export * exp = fhp->fh_export;
+ 	if (dentry) {
+-		fh_unlock(fhp);
+ 		fhp->fh_dentry = NULL;
+ 		dput(dentry);
+-		fh_clear_wcc(fhp);
++		fh_clear_pre_post_attrs(fhp);
+ 	}
+ 	fh_drop_write(fhp);
+ 	if (exp) {
+ 		exp_put(exp);
+ 		fhp->fh_export = NULL;
+ 	}
++	fhp->fh_no_wcc = false;
+ 	return;
+ }
+ 
+@@ -671,20 +717,15 @@ fh_put(struct svc_fh *fhp)
+ char * SVCFH_fmt(struct svc_fh *fhp)
+ {
+ 	struct knfsd_fh *fh = &fhp->fh_handle;
++	static char buf[2+1+1+64*3+1];
+ 
+-	static char buf[80];
+-	sprintf(buf, "%d: %08x %08x %08x %08x %08x %08x",
+-		fh->fh_size,
+-		fh->fh_base.fh_pad[0],
+-		fh->fh_base.fh_pad[1],
+-		fh->fh_base.fh_pad[2],
+-		fh->fh_base.fh_pad[3],
+-		fh->fh_base.fh_pad[4],
+-		fh->fh_base.fh_pad[5]);
++	if (fh->fh_size < 0 || fh->fh_size> 64)
++		return "bad-fh";
++	sprintf(buf, "%d: %*ph", fh->fh_size, fh->fh_size, fh->fh_raw);
+ 	return buf;
+ }
+ 
+-enum fsid_source fsid_source(struct svc_fh *fhp)
++enum fsid_source fsid_source(const struct svc_fh *fhp)
+ {
+ 	if (fhp->fh_handle.fh_version != 1)
+ 		return FSIDSOURCE_DEV;
+diff --git a/fs/nfsd/nfsfh.h b/fs/nfsd/nfsfh.h
+index 56cfbc3615618..513e028b0bbee 100644
+--- a/fs/nfsd/nfsfh.h
++++ b/fs/nfsd/nfsfh.h
+@@ -10,8 +10,56 @@
+ 
+ #include <linux/crc32.h>
+ #include <linux/sunrpc/svc.h>
+-#include <uapi/linux/nfsd/nfsfh.h>
+ #include <linux/iversion.h>
++#include <linux/exportfs.h>
++#include <linux/nfs4.h>
++
++/*
++ * The file handle starts with a sequence of four-byte words.
++ * The first word contains a version number (1) and three descriptor bytes
++ * that tell how the remaining 3 variable length fields should be handled.
++ * These three bytes are auth_type, fsid_type and fileid_type.
++ *
++ * All four-byte values are in host-byte-order.
++ *
++ * The auth_type field is deprecated and must be set to 0.
++ *
++ * The fsid_type identifies how the filesystem (or export point) is
++ *    encoded.
++ *  Current values:
++ *     0  - 4 byte device id (ms-2-bytes major, ls-2-bytes minor), 4byte inode number
++ *        NOTE: we cannot use the kdev_t device id value, because kdev_t.h
++ *              says we mustn't.  We must break it up and reassemble.
++ *     1  - 4 byte user specified identifier
++ *     2  - 4 byte major, 4 byte minor, 4 byte inode number - DEPRECATED
++ *     3  - 4 byte device id, encoded for user-space, 4 byte inode number
++ *     4  - 4 byte inode number and 4 byte uuid
++ *     5  - 8 byte uuid
++ *     6  - 16 byte uuid
++ *     7  - 8 byte inode number and 16 byte uuid
++ *
++ * The fileid_type identifies how the file within the filesystem is encoded.
++ *   The values for this field are filesystem specific, exccept that
++ *   filesystems must not use the values '0' or '0xff'. 'See enum fid_type'
++ *   in include/linux/exportfs.h for currently registered values.
++ */
++
++struct knfsd_fh {
++	unsigned int	fh_size;	/*
++					 * Points to the current size while
++					 * building a new file handle.
++					 */
++	union {
++		char			fh_raw[NFS4_FHSIZE];
++		struct {
++			u8		fh_version;	/* == 1 */
++			u8		fh_auth_type;	/* deprecated */
++			u8		fh_fsid_type;
++			u8		fh_fileid_type;
++			u32		fh_fsid[]; /* flexible-array member */
++		};
++	};
++};
+ 
+ static inline __u32 ino_t_to_u32(ino_t ino)
+ {
+@@ -33,14 +81,18 @@ typedef struct svc_fh {
+ 	struct dentry *		fh_dentry;	/* validated dentry */
+ 	struct svc_export *	fh_export;	/* export pointer */
+ 
+-	bool			fh_locked;	/* inode locked by us */
+ 	bool			fh_want_write;	/* remount protection taken */
++	bool			fh_no_wcc;	/* no wcc data needed */
++	bool			fh_no_atomic_attr;
++						/*
++						 * wcc data is not atomic with
++						 * operation
++						 */
+ 	int			fh_flags;	/* FH flags */
+-#ifdef CONFIG_NFSD_V3
+ 	bool			fh_post_saved;	/* post-op attrs saved */
+ 	bool			fh_pre_saved;	/* pre-op attrs saved */
+ 
+-	/* Pre-op attributes saved during fh_lock */
++	/* Pre-op attributes saved when inode is locked */
+ 	__u64			fh_pre_size;	/* size before operation */
+ 	struct timespec64	fh_pre_mtime;	/* mtime before oper */
+ 	struct timespec64	fh_pre_ctime;	/* ctime before oper */
+@@ -50,11 +102,9 @@ typedef struct svc_fh {
+ 	 */
+ 	u64			fh_pre_change;
+ 
+-	/* Post-op attributes saved in fh_unlock */
++	/* Post-op attributes saved in fh_fill_post_attrs() */
+ 	struct kstat		fh_post_attr;	/* full attrs after operation */
+ 	u64			fh_post_change; /* nfsv4 change; see above */
+-#endif /* CONFIG_NFSD_V3 */
+-
+ } svc_fh;
+ #define NFSD4_FH_FOREIGN (1<<0)
+ #define SET_FH_FLAG(c, f) ((c)->fh_flags |= (f))
+@@ -76,7 +126,7 @@ enum fsid_source {
+ 	FSIDSOURCE_FSID,
+ 	FSIDSOURCE_UUID,
+ };
+-extern enum fsid_source fsid_source(struct svc_fh *fhp);
++extern enum fsid_source fsid_source(const struct svc_fh *fhp);
+ 
+ 
+ /*
+@@ -170,19 +220,19 @@ __be32	fh_update(struct svc_fh *);
+ void	fh_put(struct svc_fh *);
+ 
+ static __inline__ struct svc_fh *
+-fh_copy(struct svc_fh *dst, struct svc_fh *src)
++fh_copy(struct svc_fh *dst, const struct svc_fh *src)
+ {
+-	WARN_ON(src->fh_dentry || src->fh_locked);
+-			
++	WARN_ON(src->fh_dentry);
++
+ 	*dst = *src;
+ 	return dst;
+ }
+ 
+ static inline void
+-fh_copy_shallow(struct knfsd_fh *dst, struct knfsd_fh *src)
++fh_copy_shallow(struct knfsd_fh *dst, const struct knfsd_fh *src)
+ {
+ 	dst->fh_size = src->fh_size;
+-	memcpy(&dst->fh_base, &src->fh_base, src->fh_size);
++	memcpy(&dst->fh_raw, &src->fh_raw, src->fh_size);
+ }
+ 
+ static __inline__ struct svc_fh *
+@@ -193,16 +243,18 @@ fh_init(struct svc_fh *fhp, int maxsize)
+ 	return fhp;
+ }
+ 
+-static inline bool fh_match(struct knfsd_fh *fh1, struct knfsd_fh *fh2)
++static inline bool fh_match(const struct knfsd_fh *fh1,
++			    const struct knfsd_fh *fh2)
+ {
+ 	if (fh1->fh_size != fh2->fh_size)
+ 		return false;
+-	if (memcmp(fh1->fh_base.fh_pad, fh2->fh_base.fh_pad, fh1->fh_size) != 0)
++	if (memcmp(fh1->fh_raw, fh2->fh_raw, fh1->fh_size) != 0)
+ 		return false;
+ 	return true;
+ }
+ 
+-static inline bool fh_fsid_match(struct knfsd_fh *fh1, struct knfsd_fh *fh2)
++static inline bool fh_fsid_match(const struct knfsd_fh *fh1,
++				 const struct knfsd_fh *fh2)
+ {
+ 	if (fh1->fh_fsid_type != fh2->fh_fsid_type)
+ 		return false;
+@@ -219,27 +271,23 @@ static inline bool fh_fsid_match(struct knfsd_fh *fh1, struct knfsd_fh *fh2)
+  * returns a crc32 hash for the filehandle that is compatible with
+  * the one displayed by "wireshark".
+  */
+-
+-static inline u32
+-knfsd_fh_hash(struct knfsd_fh *fh)
++static inline u32 knfsd_fh_hash(const struct knfsd_fh *fh)
+ {
+-	return ~crc32_le(0xFFFFFFFF, (unsigned char *)&fh->fh_base, fh->fh_size);
++	return ~crc32_le(0xFFFFFFFF, fh->fh_raw, fh->fh_size);
+ }
+ #else
+-static inline u32
+-knfsd_fh_hash(struct knfsd_fh *fh)
++static inline u32 knfsd_fh_hash(const struct knfsd_fh *fh)
+ {
+ 	return 0;
+ }
+ #endif
+ 
+-#ifdef CONFIG_NFSD_V3
+-/*
+- * The wcc data stored in current_fh should be cleared
+- * between compound ops.
++/**
++ * fh_clear_pre_post_attrs - Reset pre/post attributes
++ * @fhp: file handle to be updated
++ *
+  */
+-static inline void
+-fh_clear_wcc(struct svc_fh *fhp)
++static inline void fh_clear_pre_post_attrs(struct svc_fh *fhp)
+ {
+ 	fhp->fh_post_saved = false;
+ 	fhp->fh_pre_saved = false;
+@@ -259,68 +307,21 @@ fh_clear_wcc(struct svc_fh *fhp)
+ static inline u64 nfsd4_change_attribute(struct kstat *stat,
+ 					 struct inode *inode)
+ {
+-	u64 chattr;
+-
+-	chattr =  stat->ctime.tv_sec;
+-	chattr <<= 30;
+-	chattr += stat->ctime.tv_nsec;
+-	chattr += inode_query_iversion(inode);
+-	return chattr;
+-}
+-
+-extern void fill_pre_wcc(struct svc_fh *fhp);
+-extern void fill_post_wcc(struct svc_fh *fhp);
+-#else
+-#define fh_clear_wcc(ignored)
+-#define fill_pre_wcc(ignored)
+-#define fill_post_wcc(notused)
+-#endif /* CONFIG_NFSD_V3 */
+-
+-
+-/*
+- * Lock a file handle/inode
+- * NOTE: both fh_lock and fh_unlock are done "by hand" in
+- * vfs.c:nfsd_rename as it needs to grab 2 i_mutex's at once
+- * so, any changes here should be reflected there.
+- */
+-
+-static inline void
+-fh_lock_nested(struct svc_fh *fhp, unsigned int subclass)
+-{
+-	struct dentry	*dentry = fhp->fh_dentry;
+-	struct inode	*inode;
+-
+-	BUG_ON(!dentry);
+-
+-	if (fhp->fh_locked) {
+-		printk(KERN_WARNING "fh_lock: %pd2 already locked!\n",
+-			dentry);
+-		return;
+-	}
+-
+-	inode = d_inode(dentry);
+-	inode_lock_nested(inode, subclass);
+-	fill_pre_wcc(fhp);
+-	fhp->fh_locked = true;
+-}
+-
+-static inline void
+-fh_lock(struct svc_fh *fhp)
+-{
+-	fh_lock_nested(fhp, I_MUTEX_NORMAL);
+-}
+-
+-/*
+- * Unlock a file handle/inode
+- */
+-static inline void
+-fh_unlock(struct svc_fh *fhp)
+-{
+-	if (fhp->fh_locked) {
+-		fill_post_wcc(fhp);
+-		inode_unlock(d_inode(fhp->fh_dentry));
+-		fhp->fh_locked = false;
+-	}
++	if (inode->i_sb->s_export_op->fetch_iversion)
++		return inode->i_sb->s_export_op->fetch_iversion(inode);
++	else if (IS_I_VERSION(inode)) {
++		u64 chattr;
++
++		chattr =  stat->ctime.tv_sec;
++		chattr <<= 30;
++		chattr += stat->ctime.tv_nsec;
++		chattr += inode_query_iversion(inode);
++		return chattr;
++	} else
++		return time_to_chattr(&stat->ctime);
+ }
+ 
++extern void fh_fill_pre_attrs(struct svc_fh *fhp);
++extern void fh_fill_post_attrs(struct svc_fh *fhp);
++extern void fh_fill_both_attrs(struct svc_fh *fhp);
+ #endif /* _LINUX_NFSD_NFSFH_H */
+diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
+index bbd01e8397f6e..96426dea7d412 100644
+--- a/fs/nfsd/nfsproc.c
++++ b/fs/nfsd/nfsproc.c
+@@ -51,6 +51,9 @@ nfsd_proc_setattr(struct svc_rqst *rqstp)
+ 	struct nfsd_sattrargs *argp = rqstp->rq_argp;
+ 	struct nfsd_attrstat *resp = rqstp->rq_resp;
+ 	struct iattr *iap = &argp->attrs;
++	struct nfsd_attrs attrs = {
++		.na_iattr	= iap,
++	};
+ 	struct svc_fh *fhp;
+ 
+ 	dprintk("nfsd: SETATTR  %s, valid=%x, size=%ld\n",
+@@ -100,7 +103,7 @@ nfsd_proc_setattr(struct svc_rqst *rqstp)
+ 		}
+ 	}
+ 
+-	resp->status = nfsd_setattr(rqstp, fhp, iap, 0, (time64_t)0);
++	resp->status = nfsd_setattr(rqstp, fhp, &attrs, 0, (time64_t)0);
+ 	if (resp->status != nfs_ok)
+ 		goto out;
+ 
+@@ -149,14 +152,16 @@ nfsd_proc_lookup(struct svc_rqst *rqstp)
+ static __be32
+ nfsd_proc_readlink(struct svc_rqst *rqstp)
+ {
+-	struct nfsd_readlinkargs *argp = rqstp->rq_argp;
++	struct nfsd_fhandle *argp = rqstp->rq_argp;
+ 	struct nfsd_readlinkres *resp = rqstp->rq_resp;
+ 
+ 	dprintk("nfsd: READLINK %s\n", SVCFH_fmt(&argp->fh));
+ 
+ 	/* Read the symlink. */
+ 	resp->len = NFS_MAXPATHLEN;
+-	resp->status = nfsd_readlink(rqstp, &argp->fh, argp->buffer, &resp->len);
++	resp->page = *(rqstp->rq_next_page++);
++	resp->status = nfsd_readlink(rqstp, &argp->fh,
++				     page_address(resp->page), &resp->len);
+ 
+ 	fh_put(&argp->fh);
+ 	return rpc_success;
+@@ -171,36 +176,42 @@ nfsd_proc_read(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd_readargs *argp = rqstp->rq_argp;
+ 	struct nfsd_readres *resp = rqstp->rq_resp;
++	unsigned int len;
+ 	u32 eof;
++	int v;
+ 
+ 	dprintk("nfsd: READ    %s %d bytes at %d\n",
+ 		SVCFH_fmt(&argp->fh),
+ 		argp->count, argp->offset);
+ 
++	argp->count = min_t(u32, argp->count, NFSSVC_MAXBLKSIZE_V2);
++	argp->count = min_t(u32, argp->count, rqstp->rq_res.buflen);
++
++	v = 0;
++	len = argp->count;
++	resp->pages = rqstp->rq_next_page;
++	while (len > 0) {
++		struct page *page = *(rqstp->rq_next_page++);
++
++		rqstp->rq_vec[v].iov_base = page_address(page);
++		rqstp->rq_vec[v].iov_len = min_t(unsigned int, len, PAGE_SIZE);
++		len -= rqstp->rq_vec[v].iov_len;
++		v++;
++	}
++
+ 	/* Obtain buffer pointer for payload. 19 is 1 word for
+ 	 * status, 17 words for fattr, and 1 word for the byte count.
+ 	 */
+-
+-	if (NFSSVC_MAXBLKSIZE_V2 < argp->count) {
+-		char buf[RPC_MAX_ADDRBUFLEN];
+-		printk(KERN_NOTICE
+-			"oversized read request from %s (%d bytes)\n",
+-				svc_print_addr(rqstp, buf, sizeof(buf)),
+-				argp->count);
+-		argp->count = NFSSVC_MAXBLKSIZE_V2;
+-	}
+ 	svc_reserve_auth(rqstp, (19<<2) + argp->count + 4);
+ 
+ 	resp->count = argp->count;
+-	resp->status = nfsd_read(rqstp, fh_copy(&resp->fh, &argp->fh),
+-				 argp->offset,
+-				 rqstp->rq_vec, argp->vlen,
+-				 &resp->count,
+-				 &eof);
++	fh_copy(&resp->fh, &argp->fh);
++	resp->status = nfsd_read(rqstp, &resp->fh, argp->offset,
++				 rqstp->rq_vec, v, &resp->count, &eof);
+ 	if (resp->status == nfs_ok)
+ 		resp->status = fh_getattr(&resp->fh, &resp->stat);
+ 	else if (resp->status == nfserr_jukebox)
+-		return rpc_drop_reply;
++		set_bit(RQ_DROPME, &rqstp->rq_flags);
+ 	return rpc_success;
+ }
+ 
+@@ -227,12 +238,7 @@ nfsd_proc_write(struct svc_rqst *rqstp)
+ 		SVCFH_fmt(&argp->fh),
+ 		argp->len, argp->offset);
+ 
+-	nvecs = svc_fill_write_vector(rqstp, rqstp->rq_arg.pages,
+-				      &argp->first, cnt);
+-	if (!nvecs) {
+-		resp->status = nfserr_io;
+-		goto out;
+-	}
++	nvecs = svc_fill_write_vector(rqstp, &argp->payload);
+ 
+ 	resp->status = nfsd_write(rqstp, fh_copy(&resp->fh, &argp->fh),
+ 				  argp->offset, rqstp->rq_vec, nvecs,
+@@ -240,8 +246,7 @@ nfsd_proc_write(struct svc_rqst *rqstp)
+ 	if (resp->status == nfs_ok)
+ 		resp->status = fh_getattr(&resp->fh, &resp->stat);
+ 	else if (resp->status == nfserr_jukebox)
+-		return rpc_drop_reply;
+-out:
++		set_bit(RQ_DROPME, &rqstp->rq_flags);
+ 	return rpc_success;
+ }
+ 
+@@ -259,6 +264,9 @@ nfsd_proc_create(struct svc_rqst *rqstp)
+ 	svc_fh		*dirfhp = &argp->fh;
+ 	svc_fh		*newfhp = &resp->fh;
+ 	struct iattr	*attr = &argp->attrs;
++	struct nfsd_attrs attrs = {
++		.na_iattr	= attr,
++	};
+ 	struct inode	*inode;
+ 	struct dentry	*dchild;
+ 	int		type, mode;
+@@ -284,7 +292,7 @@ nfsd_proc_create(struct svc_rqst *rqstp)
+ 		goto done;
+ 	}
+ 
+-	fh_lock_nested(dirfhp, I_MUTEX_PARENT);
++	inode_lock_nested(dirfhp->fh_dentry->d_inode, I_MUTEX_PARENT);
+ 	dchild = lookup_one_len(argp->name, dirfhp->fh_dentry, argp->len);
+ 	if (IS_ERR(dchild)) {
+ 		resp->status = nfserrno(PTR_ERR(dchild));
+@@ -383,9 +391,8 @@ nfsd_proc_create(struct svc_rqst *rqstp)
+ 	resp->status = nfs_ok;
+ 	if (!inode) {
+ 		/* File doesn't exist. Create it and set attrs */
+-		resp->status = nfsd_create_locked(rqstp, dirfhp, argp->name,
+-						  argp->len, attr, type, rdev,
+-						  newfhp);
++		resp->status = nfsd_create_locked(rqstp, dirfhp, &attrs, type,
++						  rdev, newfhp);
+ 	} else if (type == S_IFREG) {
+ 		dprintk("nfsd:   existing %s, valid=%x, size=%ld\n",
+ 			argp->name, attr->ia_valid, (long) attr->ia_size);
+@@ -395,13 +402,12 @@ nfsd_proc_create(struct svc_rqst *rqstp)
+ 		 */
+ 		attr->ia_valid &= ATTR_SIZE;
+ 		if (attr->ia_valid)
+-			resp->status = nfsd_setattr(rqstp, newfhp, attr, 0,
++			resp->status = nfsd_setattr(rqstp, newfhp, &attrs, 0,
+ 						    (time64_t)0);
+ 	}
+ 
+ out_unlock:
+-	/* We don't really need to unlock, as fh_put does it. */
+-	fh_unlock(dirfhp);
++	inode_unlock(dirfhp->fh_dentry->d_inode);
+ 	fh_drop_write(dirfhp);
+ done:
+ 	fh_put(dirfhp);
+@@ -471,6 +477,9 @@ nfsd_proc_symlink(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd_symlinkargs *argp = rqstp->rq_argp;
+ 	struct nfsd_stat *resp = rqstp->rq_resp;
++	struct nfsd_attrs attrs = {
++		.na_iattr	= &argp->attrs,
++	};
+ 	struct svc_fh	newfh;
+ 
+ 	if (argp->tlen > NFS_MAXPATHLEN) {
+@@ -492,7 +501,7 @@ nfsd_proc_symlink(struct svc_rqst *rqstp)
+ 
+ 	fh_init(&newfh, NFS_FHSIZE);
+ 	resp->status = nfsd_symlink(rqstp, &argp->ffh, argp->fname, argp->flen,
+-				    argp->tname, &newfh);
++				    argp->tname, &attrs, &newfh);
+ 
+ 	kfree(argp->tname);
+ 	fh_put(&argp->ffh);
+@@ -510,6 +519,9 @@ nfsd_proc_mkdir(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd_createargs *argp = rqstp->rq_argp;
+ 	struct nfsd_diropres *resp = rqstp->rq_resp;
++	struct nfsd_attrs attrs = {
++		.na_iattr	= &argp->attrs,
++	};
+ 
+ 	dprintk("nfsd: MKDIR    %s %.*s\n", SVCFH_fmt(&argp->fh), argp->len, argp->name);
+ 
+@@ -521,7 +533,7 @@ nfsd_proc_mkdir(struct svc_rqst *rqstp)
+ 	argp->attrs.ia_valid &= ~ATTR_SIZE;
+ 	fh_init(&resp->fh, NFS_FHSIZE);
+ 	resp->status = nfsd_create(rqstp, &argp->fh, argp->name, argp->len,
+-				   &argp->attrs, S_IFDIR, 0, &resp->fh);
++				   &attrs, S_IFDIR, 0, &resp->fh);
+ 	fh_put(&argp->fh);
+ 	if (resp->status != nfs_ok)
+ 		goto out;
+@@ -548,6 +560,24 @@ nfsd_proc_rmdir(struct svc_rqst *rqstp)
+ 	return rpc_success;
+ }
+ 
++static void nfsd_init_dirlist_pages(struct svc_rqst *rqstp,
++				    struct nfsd_readdirres *resp,
++				    u32 count)
++{
++	struct xdr_buf *buf = &resp->dirlist;
++	struct xdr_stream *xdr = &resp->xdr;
++
++	memset(buf, 0, sizeof(*buf));
++
++	/* Reserve room for the NULL ptr & eof flag (-2 words) */
++	buf->buflen = clamp(count, (u32)(XDR_UNIT * 2), (u32)PAGE_SIZE);
++	buf->buflen -= XDR_UNIT * 2;
++	buf->pages = rqstp->rq_next_page;
++	rqstp->rq_next_page++;
++
++	xdr_init_encode_pages(xdr, buf, buf->pages,  NULL);
++}
++
+ /*
+  * Read a portion of a directory.
+  */
+@@ -556,33 +586,20 @@ nfsd_proc_readdir(struct svc_rqst *rqstp)
+ {
+ 	struct nfsd_readdirargs *argp = rqstp->rq_argp;
+ 	struct nfsd_readdirres *resp = rqstp->rq_resp;
+-	int		count;
+ 	loff_t		offset;
+ 
+ 	dprintk("nfsd: READDIR  %s %d bytes at %d\n",
+ 		SVCFH_fmt(&argp->fh),		
+ 		argp->count, argp->cookie);
+ 
+-	/* Shrink to the client read size */
+-	count = (argp->count >> 2) - 2;
++	nfsd_init_dirlist_pages(rqstp, resp, argp->count);
+ 
+-	/* Make sure we've room for the NULL ptr & eof flag */
+-	count -= 2;
+-	if (count < 0)
+-		count = 0;
+-
+-	resp->buffer = argp->buffer;
+-	resp->offset = NULL;
+-	resp->buflen = count;
+ 	resp->common.err = nfs_ok;
+-	/* Read directory and encode entries on the fly */
++	resp->cookie_offset = 0;
+ 	offset = argp->cookie;
+ 	resp->status = nfsd_readdir(rqstp, &argp->fh, &offset,
+ 				    &resp->common, nfssvc_encode_entry);
+-
+-	resp->count = resp->buffer - argp->buffer;
+-	if (resp->offset)
+-		*resp->offset = htonl(offset);
++	nfssvc_encode_nfscookie(resp, offset);
+ 
+ 	fh_put(&argp->fh);
+ 	return rpc_success;
+@@ -609,7 +626,6 @@ nfsd_proc_statfs(struct svc_rqst *rqstp)
+  * NFSv2 Server procedures.
+  * Only the results of non-idempotent operations are cached.
+  */
+-struct nfsd_void { int dummy; };
+ 
+ #define ST 1		/* status */
+ #define FH 8		/* filehandle */
+@@ -618,41 +634,49 @@ struct nfsd_void { int dummy; };
+ static const struct svc_procedure nfsd_procedures2[18] = {
+ 	[NFSPROC_NULL] = {
+ 		.pc_func = nfsd_proc_null,
+-		.pc_decode = nfssvc_decode_void,
+-		.pc_encode = nfssvc_encode_void,
+-		.pc_argsize = sizeof(struct nfsd_void),
+-		.pc_ressize = sizeof(struct nfsd_void),
++		.pc_decode = nfssvc_decode_voidarg,
++		.pc_encode = nfssvc_encode_voidres,
++		.pc_argsize = sizeof(struct nfsd_voidargs),
++		.pc_argzero = sizeof(struct nfsd_voidargs),
++		.pc_ressize = sizeof(struct nfsd_voidres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = 0,
++		.pc_name = "NULL",
+ 	},
+ 	[NFSPROC_GETATTR] = {
+ 		.pc_func = nfsd_proc_getattr,
+-		.pc_decode = nfssvc_decode_fhandle,
+-		.pc_encode = nfssvc_encode_attrstat,
++		.pc_decode = nfssvc_decode_fhandleargs,
++		.pc_encode = nfssvc_encode_attrstatres,
+ 		.pc_release = nfssvc_release_attrstat,
+ 		.pc_argsize = sizeof(struct nfsd_fhandle),
++		.pc_argzero = sizeof(struct nfsd_fhandle),
+ 		.pc_ressize = sizeof(struct nfsd_attrstat),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+AT,
++		.pc_name = "GETATTR",
+ 	},
+ 	[NFSPROC_SETATTR] = {
+ 		.pc_func = nfsd_proc_setattr,
+ 		.pc_decode = nfssvc_decode_sattrargs,
+-		.pc_encode = nfssvc_encode_attrstat,
++		.pc_encode = nfssvc_encode_attrstatres,
+ 		.pc_release = nfssvc_release_attrstat,
+ 		.pc_argsize = sizeof(struct nfsd_sattrargs),
++		.pc_argzero = sizeof(struct nfsd_sattrargs),
+ 		.pc_ressize = sizeof(struct nfsd_attrstat),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+AT,
++		.pc_name = "SETATTR",
+ 	},
+ 	[NFSPROC_ROOT] = {
+ 		.pc_func = nfsd_proc_root,
+-		.pc_decode = nfssvc_decode_void,
+-		.pc_encode = nfssvc_encode_void,
+-		.pc_argsize = sizeof(struct nfsd_void),
+-		.pc_ressize = sizeof(struct nfsd_void),
++		.pc_decode = nfssvc_decode_voidarg,
++		.pc_encode = nfssvc_encode_voidres,
++		.pc_argsize = sizeof(struct nfsd_voidargs),
++		.pc_argzero = sizeof(struct nfsd_voidargs),
++		.pc_ressize = sizeof(struct nfsd_voidres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = 0,
++		.pc_name = "ROOT",
+ 	},
+ 	[NFSPROC_LOOKUP] = {
+ 		.pc_func = nfsd_proc_lookup,
+@@ -660,18 +684,22 @@ static const struct svc_procedure nfsd_procedures2[18] = {
+ 		.pc_encode = nfssvc_encode_diropres,
+ 		.pc_release = nfssvc_release_diropres,
+ 		.pc_argsize = sizeof(struct nfsd_diropargs),
++		.pc_argzero = sizeof(struct nfsd_diropargs),
+ 		.pc_ressize = sizeof(struct nfsd_diropres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+FH+AT,
++		.pc_name = "LOOKUP",
+ 	},
+ 	[NFSPROC_READLINK] = {
+ 		.pc_func = nfsd_proc_readlink,
+-		.pc_decode = nfssvc_decode_readlinkargs,
++		.pc_decode = nfssvc_decode_fhandleargs,
+ 		.pc_encode = nfssvc_encode_readlinkres,
+-		.pc_argsize = sizeof(struct nfsd_readlinkargs),
++		.pc_argsize = sizeof(struct nfsd_fhandle),
++		.pc_argzero = sizeof(struct nfsd_fhandle),
+ 		.pc_ressize = sizeof(struct nfsd_readlinkres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+1+NFS_MAXPATHLEN/4,
++		.pc_name = "READLINK",
+ 	},
+ 	[NFSPROC_READ] = {
+ 		.pc_func = nfsd_proc_read,
+@@ -679,28 +707,34 @@ static const struct svc_procedure nfsd_procedures2[18] = {
+ 		.pc_encode = nfssvc_encode_readres,
+ 		.pc_release = nfssvc_release_readres,
+ 		.pc_argsize = sizeof(struct nfsd_readargs),
++		.pc_argzero = sizeof(struct nfsd_readargs),
+ 		.pc_ressize = sizeof(struct nfsd_readres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+AT+1+NFSSVC_MAXBLKSIZE_V2/4,
++		.pc_name = "READ",
+ 	},
+ 	[NFSPROC_WRITECACHE] = {
+ 		.pc_func = nfsd_proc_writecache,
+-		.pc_decode = nfssvc_decode_void,
+-		.pc_encode = nfssvc_encode_void,
+-		.pc_argsize = sizeof(struct nfsd_void),
+-		.pc_ressize = sizeof(struct nfsd_void),
++		.pc_decode = nfssvc_decode_voidarg,
++		.pc_encode = nfssvc_encode_voidres,
++		.pc_argsize = sizeof(struct nfsd_voidargs),
++		.pc_argzero = sizeof(struct nfsd_voidargs),
++		.pc_ressize = sizeof(struct nfsd_voidres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = 0,
++		.pc_name = "WRITECACHE",
+ 	},
+ 	[NFSPROC_WRITE] = {
+ 		.pc_func = nfsd_proc_write,
+ 		.pc_decode = nfssvc_decode_writeargs,
+-		.pc_encode = nfssvc_encode_attrstat,
++		.pc_encode = nfssvc_encode_attrstatres,
+ 		.pc_release = nfssvc_release_attrstat,
+ 		.pc_argsize = sizeof(struct nfsd_writeargs),
++		.pc_argzero = sizeof(struct nfsd_writeargs),
+ 		.pc_ressize = sizeof(struct nfsd_attrstat),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+AT,
++		.pc_name = "WRITE",
+ 	},
+ 	[NFSPROC_CREATE] = {
+ 		.pc_func = nfsd_proc_create,
+@@ -708,45 +742,55 @@ static const struct svc_procedure nfsd_procedures2[18] = {
+ 		.pc_encode = nfssvc_encode_diropres,
+ 		.pc_release = nfssvc_release_diropres,
+ 		.pc_argsize = sizeof(struct nfsd_createargs),
++		.pc_argzero = sizeof(struct nfsd_createargs),
+ 		.pc_ressize = sizeof(struct nfsd_diropres),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+FH+AT,
++		.pc_name = "CREATE",
+ 	},
+ 	[NFSPROC_REMOVE] = {
+ 		.pc_func = nfsd_proc_remove,
+ 		.pc_decode = nfssvc_decode_diropargs,
+-		.pc_encode = nfssvc_encode_stat,
++		.pc_encode = nfssvc_encode_statres,
+ 		.pc_argsize = sizeof(struct nfsd_diropargs),
++		.pc_argzero = sizeof(struct nfsd_diropargs),
+ 		.pc_ressize = sizeof(struct nfsd_stat),
+ 		.pc_cachetype = RC_REPLSTAT,
+ 		.pc_xdrressize = ST,
++		.pc_name = "REMOVE",
+ 	},
+ 	[NFSPROC_RENAME] = {
+ 		.pc_func = nfsd_proc_rename,
+ 		.pc_decode = nfssvc_decode_renameargs,
+-		.pc_encode = nfssvc_encode_stat,
++		.pc_encode = nfssvc_encode_statres,
+ 		.pc_argsize = sizeof(struct nfsd_renameargs),
++		.pc_argzero = sizeof(struct nfsd_renameargs),
+ 		.pc_ressize = sizeof(struct nfsd_stat),
+ 		.pc_cachetype = RC_REPLSTAT,
+ 		.pc_xdrressize = ST,
++		.pc_name = "RENAME",
+ 	},
+ 	[NFSPROC_LINK] = {
+ 		.pc_func = nfsd_proc_link,
+ 		.pc_decode = nfssvc_decode_linkargs,
+-		.pc_encode = nfssvc_encode_stat,
++		.pc_encode = nfssvc_encode_statres,
+ 		.pc_argsize = sizeof(struct nfsd_linkargs),
++		.pc_argzero = sizeof(struct nfsd_linkargs),
+ 		.pc_ressize = sizeof(struct nfsd_stat),
+ 		.pc_cachetype = RC_REPLSTAT,
+ 		.pc_xdrressize = ST,
++		.pc_name = "LINK",
+ 	},
+ 	[NFSPROC_SYMLINK] = {
+ 		.pc_func = nfsd_proc_symlink,
+ 		.pc_decode = nfssvc_decode_symlinkargs,
+-		.pc_encode = nfssvc_encode_stat,
++		.pc_encode = nfssvc_encode_statres,
+ 		.pc_argsize = sizeof(struct nfsd_symlinkargs),
++		.pc_argzero = sizeof(struct nfsd_symlinkargs),
+ 		.pc_ressize = sizeof(struct nfsd_stat),
+ 		.pc_cachetype = RC_REPLSTAT,
+ 		.pc_xdrressize = ST,
++		.pc_name = "SYMLINK",
+ 	},
+ 	[NFSPROC_MKDIR] = {
+ 		.pc_func = nfsd_proc_mkdir,
+@@ -754,35 +798,43 @@ static const struct svc_procedure nfsd_procedures2[18] = {
+ 		.pc_encode = nfssvc_encode_diropres,
+ 		.pc_release = nfssvc_release_diropres,
+ 		.pc_argsize = sizeof(struct nfsd_createargs),
++		.pc_argzero = sizeof(struct nfsd_createargs),
+ 		.pc_ressize = sizeof(struct nfsd_diropres),
+ 		.pc_cachetype = RC_REPLBUFF,
+ 		.pc_xdrressize = ST+FH+AT,
++		.pc_name = "MKDIR",
+ 	},
+ 	[NFSPROC_RMDIR] = {
+ 		.pc_func = nfsd_proc_rmdir,
+ 		.pc_decode = nfssvc_decode_diropargs,
+-		.pc_encode = nfssvc_encode_stat,
++		.pc_encode = nfssvc_encode_statres,
+ 		.pc_argsize = sizeof(struct nfsd_diropargs),
++		.pc_argzero = sizeof(struct nfsd_diropargs),
+ 		.pc_ressize = sizeof(struct nfsd_stat),
+ 		.pc_cachetype = RC_REPLSTAT,
+ 		.pc_xdrressize = ST,
++		.pc_name = "RMDIR",
+ 	},
+ 	[NFSPROC_READDIR] = {
+ 		.pc_func = nfsd_proc_readdir,
+ 		.pc_decode = nfssvc_decode_readdirargs,
+ 		.pc_encode = nfssvc_encode_readdirres,
+ 		.pc_argsize = sizeof(struct nfsd_readdirargs),
++		.pc_argzero = sizeof(struct nfsd_readdirargs),
+ 		.pc_ressize = sizeof(struct nfsd_readdirres),
+ 		.pc_cachetype = RC_NOCACHE,
++		.pc_name = "READDIR",
+ 	},
+ 	[NFSPROC_STATFS] = {
+ 		.pc_func = nfsd_proc_statfs,
+-		.pc_decode = nfssvc_decode_fhandle,
++		.pc_decode = nfssvc_decode_fhandleargs,
+ 		.pc_encode = nfssvc_encode_statfsres,
+ 		.pc_argsize = sizeof(struct nfsd_fhandle),
++		.pc_argzero = sizeof(struct nfsd_fhandle),
+ 		.pc_ressize = sizeof(struct nfsd_statfsres),
+ 		.pc_cachetype = RC_NOCACHE,
+ 		.pc_xdrressize = ST+5,
++		.pc_name = "STATFS",
+ 	},
+ };
+ 
+@@ -796,61 +848,3 @@ const struct svc_version nfsd_version2 = {
+ 	.vs_dispatch	= nfsd_dispatch,
+ 	.vs_xdrsize	= NFS2_SVC_XDRSIZE,
+ };
+-
+-/*
+- * Map errnos to NFS errnos.
+- */
+-__be32
+-nfserrno (int errno)
+-{
+-	static struct {
+-		__be32	nfserr;
+-		int	syserr;
+-	} nfs_errtbl[] = {
+-		{ nfs_ok, 0 },
+-		{ nfserr_perm, -EPERM },
+-		{ nfserr_noent, -ENOENT },
+-		{ nfserr_io, -EIO },
+-		{ nfserr_nxio, -ENXIO },
+-		{ nfserr_fbig, -E2BIG },
+-		{ nfserr_acces, -EACCES },
+-		{ nfserr_exist, -EEXIST },
+-		{ nfserr_xdev, -EXDEV },
+-		{ nfserr_mlink, -EMLINK },
+-		{ nfserr_nodev, -ENODEV },
+-		{ nfserr_notdir, -ENOTDIR },
+-		{ nfserr_isdir, -EISDIR },
+-		{ nfserr_inval, -EINVAL },
+-		{ nfserr_fbig, -EFBIG },
+-		{ nfserr_nospc, -ENOSPC },
+-		{ nfserr_rofs, -EROFS },
+-		{ nfserr_mlink, -EMLINK },
+-		{ nfserr_nametoolong, -ENAMETOOLONG },
+-		{ nfserr_notempty, -ENOTEMPTY },
+-#ifdef EDQUOT
+-		{ nfserr_dquot, -EDQUOT },
+-#endif
+-		{ nfserr_stale, -ESTALE },
+-		{ nfserr_jukebox, -ETIMEDOUT },
+-		{ nfserr_jukebox, -ERESTARTSYS },
+-		{ nfserr_jukebox, -EAGAIN },
+-		{ nfserr_jukebox, -EWOULDBLOCK },
+-		{ nfserr_jukebox, -ENOMEM },
+-		{ nfserr_io, -ETXTBSY },
+-		{ nfserr_notsupp, -EOPNOTSUPP },
+-		{ nfserr_toosmall, -ETOOSMALL },
+-		{ nfserr_serverfault, -ESERVERFAULT },
+-		{ nfserr_serverfault, -ENFILE },
+-		{ nfserr_io, -EUCLEAN },
+-		{ nfserr_perm, -ENOKEY },
+-	};
+-	int	i;
+-
+-	for (i = 0; i < ARRAY_SIZE(nfs_errtbl); i++) {
+-		if (nfs_errtbl[i].syserr == errno)
+-			return nfs_errtbl[i].nfserr;
+-	}
+-	WARN_ONCE(1, "nfsd: non-standard errno: %d\n", errno);
+-	return nfserr_io;
+-}
+-
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index 2e61a565cdbd8..3d4fd40c987bd 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -12,6 +12,7 @@
+ #include <linux/module.h>
+ #include <linux/fs_struct.h>
+ #include <linux/swap.h>
++#include <linux/siphash.h>
+ 
+ #include <linux/sunrpc/stats.h>
+ #include <linux/sunrpc/svcsock.h>
+@@ -29,13 +30,9 @@
+ #include "netns.h"
+ #include "filecache.h"
+ 
+-#define NFSDDBG_FACILITY	NFSDDBG_SVC
++#include "trace.h"
+ 
+-bool inter_copy_offload_enable;
+-EXPORT_SYMBOL_GPL(inter_copy_offload_enable);
+-module_param(inter_copy_offload_enable, bool, 0644);
+-MODULE_PARM_DESC(inter_copy_offload_enable,
+-		 "Enable inter server to server copy offload. Default: false");
++#define NFSDDBG_FACILITY	NFSDDBG_SVC
+ 
+ extern struct svc_program	nfsd_program;
+ static int			nfsd(void *vrqstp);
+@@ -59,18 +56,17 @@ static __be32			nfsd_init_request(struct svc_rqst *,
+ 						struct svc_process_info *);
+ 
+ /*
+- * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and the members
+- * of the svc_serv struct. In particular, ->sv_nrthreads but also to some
+- * extent ->sv_temp_socks and ->sv_permsocks. It also protects nfsdstats.th_cnt
++ * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and some members
++ * of the svc_serv struct such as ->sv_temp_socks and ->sv_permsocks.
+  *
+  * If (out side the lock) nn->nfsd_serv is non-NULL, then it must point to a
+- * properly initialised 'struct svc_serv' with ->sv_nrthreads > 0. That number
+- * of nfsd threads must exist and each must listed in ->sp_all_threads in each
+- * entry of ->sv_pools[].
++ * properly initialised 'struct svc_serv' with ->sv_nrthreads > 0 (unless
++ * nn->keep_active is set).  That number of nfsd threads must
++ * exist and each must be listed in ->sp_all_threads in some entry of
++ * ->sv_pools[].
+  *
+- * Transitions of the thread count between zero and non-zero are of particular
+- * interest since the svc_serv needs to be created and initialized at that
+- * point, or freed.
++ * Each active thread holds a counted reference on nn->nfsd_serv, as does
++ * the nn->keep_active flag and various transient calls to svc_get().
+  *
+  * Finally, the nfsd_mutex also protects some of the global variables that are
+  * accessed when nfsd starts and that are settable via the write_* routines in
+@@ -88,15 +84,19 @@ DEFINE_MUTEX(nfsd_mutex);
+  * version 4.1 DRC caches.
+  * nfsd_drc_pages_used tracks the current version 4.1 DRC memory usage.
+  */
+-spinlock_t	nfsd_drc_lock;
++DEFINE_SPINLOCK(nfsd_drc_lock);
+ unsigned long	nfsd_drc_max_mem;
+ unsigned long	nfsd_drc_mem_used;
+ 
+ #if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
+ static struct svc_stat	nfsd_acl_svcstats;
+ static const struct svc_version *nfsd_acl_version[] = {
++# if defined(CONFIG_NFSD_V2_ACL)
+ 	[2] = &nfsd_acl_version2,
++# endif
++# if defined(CONFIG_NFSD_V3_ACL)
+ 	[3] = &nfsd_acl_version3,
++# endif
+ };
+ 
+ #define NFSD_ACL_MINVERS            2
+@@ -120,10 +120,10 @@ static struct svc_stat	nfsd_acl_svcstats = {
+ #endif /* defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL) */
+ 
+ static const struct svc_version *nfsd_version[] = {
++#if defined(CONFIG_NFSD_V2)
+ 	[2] = &nfsd_version2,
+-#if defined(CONFIG_NFSD_V3)
+-	[3] = &nfsd_version3,
+ #endif
++	[3] = &nfsd_version3,
+ #if defined(CONFIG_NFSD_V4)
+ 	[4] = &nfsd_version4,
+ #endif
+@@ -297,13 +297,13 @@ static int nfsd_init_socks(struct net *net, const struct cred *cred)
+ 	if (!list_empty(&nn->nfsd_serv->sv_permsocks))
+ 		return 0;
+ 
+-	error = svc_create_xprt(nn->nfsd_serv, "udp", net, PF_INET, NFS_PORT,
+-					SVC_SOCK_DEFAULTS, cred);
++	error = svc_xprt_create(nn->nfsd_serv, "udp", net, PF_INET, NFS_PORT,
++				SVC_SOCK_DEFAULTS, cred);
+ 	if (error < 0)
+ 		return error;
+ 
+-	error = svc_create_xprt(nn->nfsd_serv, "tcp", net, PF_INET, NFS_PORT,
+-					SVC_SOCK_DEFAULTS, cred);
++	error = svc_xprt_create(nn->nfsd_serv, "tcp", net, PF_INET, NFS_PORT,
++				SVC_SOCK_DEFAULTS, cred);
+ 	if (error < 0)
+ 		return error;
+ 
+@@ -312,7 +312,7 @@ static int nfsd_init_socks(struct net *net, const struct cred *cred)
+ 
+ static int nfsd_users = 0;
+ 
+-static int nfsd_startup_generic(int nrservs)
++static int nfsd_startup_generic(void)
+ {
+ 	int ret;
+ 
+@@ -349,36 +349,60 @@ static bool nfsd_needs_lockd(struct nfsd_net *nn)
+ 	return nfsd_vers(nn, 2, NFSD_TEST) || nfsd_vers(nn, 3, NFSD_TEST);
+ }
+ 
+-void nfsd_copy_boot_verifier(__be32 verf[2], struct nfsd_net *nn)
++/**
++ * nfsd_copy_write_verifier - Atomically copy a write verifier
++ * @verf: buffer in which to receive the verifier cookie
++ * @nn: NFS net namespace
++ *
++ * This function provides a wait-free mechanism for copying the
++ * namespace's write verifier without tearing it.
++ */
++void nfsd_copy_write_verifier(__be32 verf[2], struct nfsd_net *nn)
+ {
+ 	int seq = 0;
+ 
+ 	do {
+-		read_seqbegin_or_lock(&nn->boot_lock, &seq);
+-		/*
+-		 * This is opaque to client, so no need to byte-swap. Use
+-		 * __force to keep sparse happy. y2038 time_t overflow is
+-		 * irrelevant in this usage
+-		 */
+-		verf[0] = (__force __be32)nn->nfssvc_boot.tv_sec;
+-		verf[1] = (__force __be32)nn->nfssvc_boot.tv_nsec;
+-	} while (need_seqretry(&nn->boot_lock, seq));
+-	done_seqretry(&nn->boot_lock, seq);
++		read_seqbegin_or_lock(&nn->writeverf_lock, &seq);
++		memcpy(verf, nn->writeverf, sizeof(nn->writeverf));
++	} while (need_seqretry(&nn->writeverf_lock, seq));
++	done_seqretry(&nn->writeverf_lock, seq);
+ }
+ 
+-static void nfsd_reset_boot_verifier_locked(struct nfsd_net *nn)
++static void nfsd_reset_write_verifier_locked(struct nfsd_net *nn)
+ {
+-	ktime_get_real_ts64(&nn->nfssvc_boot);
++	struct timespec64 now;
++	u64 verf;
++
++	/*
++	 * Because the time value is hashed, y2038 time_t overflow
++	 * is irrelevant in this usage.
++	 */
++	ktime_get_raw_ts64(&now);
++	verf = siphash_2u64(now.tv_sec, now.tv_nsec, &nn->siphash_key);
++	memcpy(nn->writeverf, &verf, sizeof(nn->writeverf));
+ }
+ 
+-void nfsd_reset_boot_verifier(struct nfsd_net *nn)
++/**
++ * nfsd_reset_write_verifier - Generate a new write verifier
++ * @nn: NFS net namespace
++ *
++ * This function updates the ->writeverf field of @nn. This field
++ * contains an opaque cookie that, according to Section 18.32.3 of
++ * RFC 8881, "the client can use to determine whether a server has
++ * changed instance state (e.g., server restart) between a call to
++ * WRITE and a subsequent call to either WRITE or COMMIT.  This
++ * cookie MUST be unchanged during a single instance of the NFSv4.1
++ * server and MUST be unique between instances of the NFSv4.1
++ * server."
++ */
++void nfsd_reset_write_verifier(struct nfsd_net *nn)
+ {
+-	write_seqlock(&nn->boot_lock);
+-	nfsd_reset_boot_verifier_locked(nn);
+-	write_sequnlock(&nn->boot_lock);
++	write_seqlock(&nn->writeverf_lock);
++	nfsd_reset_write_verifier_locked(nn);
++	write_sequnlock(&nn->writeverf_lock);
+ }
+ 
+-static int nfsd_startup_net(int nrservs, struct net *net, const struct cred *cred)
++static int nfsd_startup_net(struct net *net, const struct cred *cred)
+ {
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 	int ret;
+@@ -386,7 +410,7 @@ static int nfsd_startup_net(int nrservs, struct net *net, const struct cred *cre
+ 	if (nn->nfsd_net_up)
+ 		return 0;
+ 
+-	ret = nfsd_startup_generic(nrservs);
++	ret = nfsd_startup_generic();
+ 	if (ret)
+ 		return ret;
+ 	ret = nfsd_init_socks(net, cred);
+@@ -407,6 +431,9 @@ static int nfsd_startup_net(int nrservs, struct net *net, const struct cred *cre
+ 	if (ret)
+ 		goto out_filecache;
+ 
++#ifdef CONFIG_NFSD_V4_2_INTER_SSC
++	nfsd4_ssc_init_umount_work(nn);
++#endif
+ 	nn->nfsd_net_up = true;
+ 	return 0;
+ 
+@@ -436,6 +463,7 @@ static void nfsd_shutdown_net(struct net *net)
+ 	nfsd_shutdown_generic();
+ }
+ 
++static DEFINE_SPINLOCK(nfsd_notifier_lock);
+ static int nfsd_inetaddr_event(struct notifier_block *this, unsigned long event,
+ 	void *ptr)
+ {
+@@ -445,18 +473,17 @@ static int nfsd_inetaddr_event(struct notifier_block *this, unsigned long event,
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 	struct sockaddr_in sin;
+ 
+-	if ((event != NETDEV_DOWN) ||
+-	    !atomic_inc_not_zero(&nn->ntf_refcnt))
++	if (event != NETDEV_DOWN || !nn->nfsd_serv)
+ 		goto out;
+ 
++	spin_lock(&nfsd_notifier_lock);
+ 	if (nn->nfsd_serv) {
+ 		dprintk("nfsd_inetaddr_event: removed %pI4\n", &ifa->ifa_local);
+ 		sin.sin_family = AF_INET;
+ 		sin.sin_addr.s_addr = ifa->ifa_local;
+ 		svc_age_temp_xprts_now(nn->nfsd_serv, (struct sockaddr *)&sin);
+ 	}
+-	atomic_dec(&nn->ntf_refcnt);
+-	wake_up(&nn->ntf_wq);
++	spin_unlock(&nfsd_notifier_lock);
+ 
+ out:
+ 	return NOTIFY_DONE;
+@@ -476,10 +503,10 @@ static int nfsd_inet6addr_event(struct notifier_block *this,
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 	struct sockaddr_in6 sin6;
+ 
+-	if ((event != NETDEV_DOWN) ||
+-	    !atomic_inc_not_zero(&nn->ntf_refcnt))
++	if (event != NETDEV_DOWN || !nn->nfsd_serv)
+ 		goto out;
+ 
++	spin_lock(&nfsd_notifier_lock);
+ 	if (nn->nfsd_serv) {
+ 		dprintk("nfsd_inet6addr_event: removed %pI6\n", &ifa->addr);
+ 		sin6.sin6_family = AF_INET6;
+@@ -488,8 +515,8 @@ static int nfsd_inet6addr_event(struct notifier_block *this,
+ 			sin6.sin6_scope_id = ifa->idev->dev->ifindex;
+ 		svc_age_temp_xprts_now(nn->nfsd_serv, (struct sockaddr *)&sin6);
+ 	}
+-	atomic_dec(&nn->ntf_refcnt);
+-	wake_up(&nn->ntf_wq);
++	spin_unlock(&nfsd_notifier_lock);
++
+ out:
+ 	return NOTIFY_DONE;
+ }
+@@ -502,11 +529,15 @@ static struct notifier_block nfsd_inet6addr_notifier = {
+ /* Only used under nfsd_mutex, so this atomic may be overkill: */
+ static atomic_t nfsd_notifier_refcount = ATOMIC_INIT(0);
+ 
+-static void nfsd_last_thread(struct svc_serv *serv, struct net *net)
++void nfsd_last_thread(struct net *net)
+ {
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
++	struct svc_serv *serv = nn->nfsd_serv;
++
++	spin_lock(&nfsd_notifier_lock);
++	nn->nfsd_serv = NULL;
++	spin_unlock(&nfsd_notifier_lock);
+ 
+-	atomic_dec(&nn->ntf_refcnt);
+ 	/* check if the notifier still has clients */
+ 	if (atomic_dec_return(&nfsd_notifier_refcount) == 0) {
+ 		unregister_inetaddr_notifier(&nfsd_inetaddr_notifier);
+@@ -514,7 +545,8 @@ static void nfsd_last_thread(struct svc_serv *serv, struct net *net)
+ 		unregister_inet6addr_notifier(&nfsd_inet6addr_notifier);
+ #endif
+ 	}
+-	wait_event(nn->ntf_wq, atomic_read(&nn->ntf_refcnt) == 0);
++
++	svc_xprt_destroy_all(serv, net);
+ 
+ 	/*
+ 	 * write_ports can create the server without actually starting
+@@ -567,7 +599,6 @@ static void set_max_drc(void)
+ 	nfsd_drc_max_mem = (nr_free_buffer_pages()
+ 					>> NFSD_DRC_SIZE_SHIFT) * PAGE_SIZE;
+ 	nfsd_drc_mem_used = 0;
+-	spin_lock_init(&nfsd_drc_lock);
+ 	dprintk("%s nfsd_drc_max_mem %lu \n", __func__, nfsd_drc_max_mem);
+ }
+ 
+@@ -592,24 +623,6 @@ static int nfsd_get_default_max_blksize(void)
+ 	return ret;
+ }
+ 
+-static const struct svc_serv_ops nfsd_thread_sv_ops = {
+-	.svo_shutdown		= nfsd_last_thread,
+-	.svo_function		= nfsd,
+-	.svo_enqueue_xprt	= svc_xprt_do_enqueue,
+-	.svo_setup		= svc_set_num_threads,
+-	.svo_module		= THIS_MODULE,
+-};
+-
+-static void nfsd_complete_shutdown(struct net *net)
+-{
+-	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+-
+-	WARN_ON(!mutex_is_locked(&nfsd_mutex));
+-
+-	nn->nfsd_serv = NULL;
+-	complete(&nn->nfsd_shutdown_complete);
+-}
+-
+ void nfsd_shutdown_threads(struct net *net)
+ {
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+@@ -624,11 +637,10 @@ void nfsd_shutdown_threads(struct net *net)
+ 
+ 	svc_get(serv);
+ 	/* Kill outstanding nfsd threads */
+-	serv->sv_ops->svo_setup(serv, NULL, 0);
+-	nfsd_destroy(net);
++	svc_set_num_threads(serv, NULL, 0);
++	nfsd_last_thread(net);
++	svc_put(serv);
+ 	mutex_unlock(&nfsd_mutex);
+-	/* Wait for shutdown of nfsd_serv to complete */
+-	wait_for_completion(&nn->nfsd_shutdown_complete);
+ }
+ 
+ bool i_am_nfsd(void)
+@@ -640,6 +652,7 @@ int nfsd_create_serv(struct net *net)
+ {
+ 	int error;
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
++	struct svc_serv *serv;
+ 
+ 	WARN_ON(!mutex_is_locked(&nfsd_mutex));
+ 	if (nn->nfsd_serv) {
+@@ -649,19 +662,19 @@ int nfsd_create_serv(struct net *net)
+ 	if (nfsd_max_blksize == 0)
+ 		nfsd_max_blksize = nfsd_get_default_max_blksize();
+ 	nfsd_reset_versions(nn);
+-	nn->nfsd_serv = svc_create_pooled(&nfsd_program, nfsd_max_blksize,
+-						&nfsd_thread_sv_ops);
+-	if (nn->nfsd_serv == NULL)
++	serv = svc_create_pooled(&nfsd_program, nfsd_max_blksize, nfsd);
++	if (serv == NULL)
+ 		return -ENOMEM;
+-	init_completion(&nn->nfsd_shutdown_complete);
+ 
+-	nn->nfsd_serv->sv_maxconn = nn->max_connections;
+-	error = svc_bind(nn->nfsd_serv, net);
++	serv->sv_maxconn = nn->max_connections;
++	error = svc_bind(serv, net);
+ 	if (error < 0) {
+-		svc_destroy(nn->nfsd_serv);
+-		nfsd_complete_shutdown(net);
++		svc_put(serv);
+ 		return error;
+ 	}
++	spin_lock(&nfsd_notifier_lock);
++	nn->nfsd_serv = serv;
++	spin_unlock(&nfsd_notifier_lock);
+ 
+ 	set_max_drc();
+ 	/* check if the notifier is already set */
+@@ -671,8 +684,7 @@ int nfsd_create_serv(struct net *net)
+ 		register_inet6addr_notifier(&nfsd_inet6addr_notifier);
+ #endif
+ 	}
+-	atomic_inc(&nn->ntf_refcnt);
+-	nfsd_reset_boot_verifier(nn);
++	nfsd_reset_write_verifier(nn);
+ 	return 0;
+ }
+ 
+@@ -699,18 +711,6 @@ int nfsd_get_nrthreads(int n, int *nthreads, struct net *net)
+ 	return 0;
+ }
+ 
+-void nfsd_destroy(struct net *net)
+-{
+-	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+-	int destroy = (nn->nfsd_serv->sv_nrthreads == 1);
+-
+-	if (destroy)
+-		svc_shutdown_net(nn->nfsd_serv, net);
+-	svc_destroy(nn->nfsd_serv);
+-	if (destroy)
+-		nfsd_complete_shutdown(net);
+-}
+-
+ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
+ {
+ 	int i = 0;
+@@ -735,7 +735,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
+ 	if (tot > NFSD_MAXSERVS) {
+ 		/* total too large: scale down requested numbers */
+ 		for (i = 0; i < n && tot > 0; i++) {
+-		    	int new = nthreads[i] * NFSD_MAXSERVS / tot;
++			int new = nthreads[i] * NFSD_MAXSERVS / tot;
+ 			tot -= (nthreads[i] - new);
+ 			nthreads[i] = new;
+ 		}
+@@ -755,12 +755,13 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
+ 	/* apply the new numbers */
+ 	svc_get(nn->nfsd_serv);
+ 	for (i = 0; i < n; i++) {
+-		err = nn->nfsd_serv->sv_ops->svo_setup(nn->nfsd_serv,
+-				&nn->nfsd_serv->sv_pools[i], nthreads[i]);
++		err = svc_set_num_threads(nn->nfsd_serv,
++					  &nn->nfsd_serv->sv_pools[i],
++					  nthreads[i]);
+ 		if (err)
+ 			break;
+ 	}
+-	nfsd_destroy(net);
++	svc_put(nn->nfsd_serv);
+ 	return err;
+ }
+ 
+@@ -775,6 +776,7 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
+ 	int	error;
+ 	bool	nfsd_up_before;
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
++	struct svc_serv *serv;
+ 
+ 	mutex_lock(&nfsd_mutex);
+ 	dprintk("nfsd: creating service\n");
+@@ -786,7 +788,7 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
+ 	if (nrservs == 0 && nn->nfsd_serv == NULL)
+ 		goto out;
+ 
+-	strlcpy(nn->nfsd_name, utsname()->nodename,
++	strscpy(nn->nfsd_name, utsname()->nodename,
+ 		sizeof(nn->nfsd_name));
+ 
+ 	error = nfsd_create_serv(net);
+@@ -794,24 +796,25 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
+ 		goto out;
+ 
+ 	nfsd_up_before = nn->nfsd_net_up;
++	serv = nn->nfsd_serv;
+ 
+-	error = nfsd_startup_net(nrservs, net, cred);
++	error = nfsd_startup_net(net, cred);
+ 	if (error)
+-		goto out_destroy;
+-	error = nn->nfsd_serv->sv_ops->svo_setup(nn->nfsd_serv,
+-			NULL, nrservs);
++		goto out_put;
++	error = svc_set_num_threads(serv, NULL, nrservs);
+ 	if (error)
+ 		goto out_shutdown;
+-	/* We are holding a reference to nn->nfsd_serv which
+-	 * we don't want to count in the return value,
+-	 * so subtract 1
+-	 */
+-	error = nn->nfsd_serv->sv_nrthreads - 1;
++	error = serv->sv_nrthreads;
++	if (error == 0)
++		nfsd_last_thread(net);
+ out_shutdown:
+ 	if (error < 0 && !nfsd_up_before)
+ 		nfsd_shutdown_net(net);
+-out_destroy:
+-	nfsd_destroy(net);		/* Release server */
++out_put:
++	/* Threads now hold service active */
++	if (xchg(&nn->keep_active, 0))
++		svc_put(serv);
++	svc_put(serv);
+ out:
+ 	mutex_unlock(&nfsd_mutex);
+ 	return error;
+@@ -925,9 +928,6 @@ nfsd(void *vrqstp)
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 	int err;
+ 
+-	/* Lock module and set up kernel thread */
+-	mutex_lock(&nfsd_mutex);
+-
+ 	/* At this point, the thread shares current->fs
+ 	 * with the init process. We need to create files with the
+ 	 * umask as defined by the client instead of init's umask. */
+@@ -938,17 +938,7 @@ nfsd(void *vrqstp)
+ 
+ 	current->fs->umask = 0;
+ 
+-	/*
+-	 * thread is spawned with all signals set to SIG_IGN, re-enable
+-	 * the ones that will bring down the thread
+-	 */
+-	allow_signal(SIGKILL);
+-	allow_signal(SIGHUP);
+-	allow_signal(SIGINT);
+-	allow_signal(SIGQUIT);
+-
+-	nfsdstats.th_cnt++;
+-	mutex_unlock(&nfsd_mutex);
++	atomic_inc(&nfsdstats.th_cnt);
+ 
+ 	set_freezable();
+ 
+@@ -972,57 +962,14 @@ nfsd(void *vrqstp)
+ 		validate_process_creds();
+ 	}
+ 
+-	/* Clear signals before calling svc_exit_thread() */
+-	flush_signals(current);
+-
+-	mutex_lock(&nfsd_mutex);
+-	nfsdstats.th_cnt --;
++	atomic_dec(&nfsdstats.th_cnt);
+ 
+ out:
+-	rqstp->rq_server = NULL;
+-
+ 	/* Release the thread */
+ 	svc_exit_thread(rqstp);
+-
+-	nfsd_destroy(net);
+-
+-	/* Release module */
+-	mutex_unlock(&nfsd_mutex);
+-	module_put_and_exit(0);
+ 	return 0;
+ }
+ 
+-/*
+- * A write procedure can have a large argument, and a read procedure can
+- * have a large reply, but no NFSv2 or NFSv3 procedure has argument and
+- * reply that can both be larger than a page.  The xdr code has taken
+- * advantage of this assumption to be a sloppy about bounds checking in
+- * some cases.  Pending a rewrite of the NFSv2/v3 xdr code to fix that
+- * problem, we enforce these assumptions here:
+- */
+-static bool nfs_request_too_big(struct svc_rqst *rqstp,
+-				const struct svc_procedure *proc)
+-{
+-	/*
+-	 * The ACL code has more careful bounds-checking and is not
+-	 * susceptible to this problem:
+-	 */
+-	if (rqstp->rq_prog != NFS_PROGRAM)
+-		return false;
+-	/*
+-	 * Ditto NFSv4 (which can in theory have argument and reply both
+-	 * more than a page):
+-	 */
+-	if (rqstp->rq_vers >= 4)
+-		return false;
+-	/* The reply will be small, we're OK: */
+-	if (proc->pc_xdrressize > 0 &&
+-	    proc->pc_xdrressize < XDR_QUADLEN(PAGE_SIZE))
+-		return false;
+-
+-	return rqstp->rq_arg.len > PAGE_SIZE;
+-}
+-
+ /**
+  * nfsd_dispatch - Process an NFS or NFSACL Request
+  * @rqstp: incoming request
+@@ -1037,22 +984,15 @@ static bool nfs_request_too_big(struct svc_rqst *rqstp,
+ int nfsd_dispatch(struct svc_rqst *rqstp, __be32 *statp)
+ {
+ 	const struct svc_procedure *proc = rqstp->rq_procinfo;
+-	struct kvec *argv = &rqstp->rq_arg.head[0];
+-	struct kvec *resv = &rqstp->rq_res.head[0];
+-	__be32 *p;
+-
+-	dprintk("nfsd_dispatch: vers %d proc %d\n",
+-				rqstp->rq_vers, rqstp->rq_proc);
+-
+-	if (nfs_request_too_big(rqstp, proc))
+-		goto out_too_large;
+ 
+ 	/*
+ 	 * Give the xdr decoder a chance to change this if it wants
+ 	 * (necessary in the NFSv4.0 compound case)
+ 	 */
+ 	rqstp->rq_cachetype = proc->pc_cachetype;
+-	if (!proc->pc_decode(rqstp, argv->iov_base))
++
++	svcxdr_init_decode(rqstp);
++	if (!proc->pc_decode(rqstp, &rqstp->rq_arg_stream))
+ 		goto out_decode_err;
+ 
+ 	switch (nfsd_cache_lookup(rqstp)) {
+@@ -1068,43 +1008,64 @@ int nfsd_dispatch(struct svc_rqst *rqstp, __be32 *statp)
+ 	 * Need to grab the location to store the status, as
+ 	 * NFSv4 does some encoding while processing
+ 	 */
+-	p = resv->iov_base + resv->iov_len;
+-	resv->iov_len += sizeof(__be32);
++	svcxdr_init_encode(rqstp);
+ 
+ 	*statp = proc->pc_func(rqstp);
+-	if (*statp == rpc_drop_reply || test_bit(RQ_DROPME, &rqstp->rq_flags))
++	if (test_bit(RQ_DROPME, &rqstp->rq_flags))
+ 		goto out_update_drop;
+ 
+-	if (!proc->pc_encode(rqstp, p))
++	if (!proc->pc_encode(rqstp, &rqstp->rq_res_stream))
+ 		goto out_encode_err;
+ 
+ 	nfsd_cache_update(rqstp, rqstp->rq_cachetype, statp + 1);
+ out_cached_reply:
+ 	return 1;
+ 
+-out_too_large:
+-	dprintk("nfsd: NFSv%d argument too large\n", rqstp->rq_vers);
+-	*statp = rpc_garbage_args;
+-	return 1;
+-
+ out_decode_err:
+-	dprintk("nfsd: failed to decode arguments!\n");
++	trace_nfsd_garbage_args_err(rqstp);
+ 	*statp = rpc_garbage_args;
+ 	return 1;
+ 
+ out_update_drop:
+-	dprintk("nfsd: Dropping request; may be revisited later\n");
+ 	nfsd_cache_update(rqstp, RC_NOCACHE, NULL);
+ out_dropit:
+ 	return 0;
+ 
+ out_encode_err:
+-	dprintk("nfsd: failed to encode result!\n");
++	trace_nfsd_cant_encode_err(rqstp);
+ 	nfsd_cache_update(rqstp, RC_NOCACHE, NULL);
+ 	*statp = rpc_system_err;
+ 	return 1;
+ }
+ 
++/**
++ * nfssvc_decode_voidarg - Decode void arguments
++ * @rqstp: Server RPC transaction context
++ * @xdr: XDR stream positioned at arguments to decode
++ *
++ * Return values:
++ *   %false: Arguments were not valid
++ *   %true: Decoding was successful
++ */
++bool nfssvc_decode_voidarg(struct svc_rqst *rqstp, struct xdr_stream *xdr)
++{
++	return true;
++}
++
++/**
++ * nfssvc_encode_voidres - Encode void results
++ * @rqstp: Server RPC transaction context
++ * @xdr: XDR stream into which to encode results
++ *
++ * Return values:
++ *   %false: Local error while encoding
++ *   %true: Encoding was successful
++ */
++bool nfssvc_encode_voidres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
++{
++	return true;
++}
++
+ int nfsd_pool_stats_open(struct inode *inode, struct file *file)
+ {
+ 	int ret;
+@@ -1115,7 +1076,6 @@ int nfsd_pool_stats_open(struct inode *inode, struct file *file)
+ 		mutex_unlock(&nfsd_mutex);
+ 		return -ENODEV;
+ 	}
+-	/* bump up the psudo refcount while traversing */
+ 	svc_get(nn->nfsd_serv);
+ 	ret = svc_pool_stats_open(nn->nfsd_serv, file);
+ 	mutex_unlock(&nfsd_mutex);
+@@ -1124,12 +1084,12 @@ int nfsd_pool_stats_open(struct inode *inode, struct file *file)
+ 
+ int nfsd_pool_stats_release(struct inode *inode, struct file *file)
+ {
++	struct seq_file *seq = file->private_data;
++	struct svc_serv *serv = seq->private;
+ 	int ret = seq_release(inode, file);
+-	struct net *net = inode->i_sb->s_fs_info;
+ 
+ 	mutex_lock(&nfsd_mutex);
+-	/* this function really, really should have been called svc_put() */
+-	nfsd_destroy(net);
++	svc_put(serv);
+ 	mutex_unlock(&nfsd_mutex);
+ 	return ret;
+ }
+diff --git a/fs/nfsd/nfsxdr.c b/fs/nfsd/nfsxdr.c
+index 8a288c8fcd57c..caf6355b18fa9 100644
+--- a/fs/nfsd/nfsxdr.c
++++ b/fs/nfsd/nfsxdr.c
+@@ -9,12 +9,10 @@
+ #include "xdr.h"
+ #include "auth.h"
+ 
+-#define NFSDDBG_FACILITY		NFSDDBG_XDR
+-
+ /*
+  * Mapping of S_IF* types to NFS file types
+  */
+-static u32	nfs_ftypes[] = {
++static const u32 nfs_ftypes[] = {
+ 	NFNON,  NFCHR,  NFCHR, NFBAD,
+ 	NFDIR,  NFBAD,  NFBLK, NFBAD,
+ 	NFREG,  NFBAD,  NFLNK, NFBAD,
+@@ -23,93 +21,168 @@ static u32	nfs_ftypes[] = {
+ 
+ 
+ /*
+- * XDR functions for basic NFS types
++ * Basic NFSv2 data types (RFC 1094 Section 2.3)
+  */
+-static __be32 *
+-decode_fh(__be32 *p, struct svc_fh *fhp)
++
++/**
++ * svcxdr_encode_stat - Encode an NFSv2 status code
++ * @xdr: XDR stream
++ * @status: status value to encode
++ *
++ * Return values:
++ *   %false: Send buffer space was exhausted
++ *   %true: Success
++ */
++bool
++svcxdr_encode_stat(struct xdr_stream *xdr, __be32 status)
+ {
++	__be32 *p;
++
++	p = xdr_reserve_space(xdr, sizeof(status));
++	if (!p)
++		return false;
++	*p = status;
++
++	return true;
++}
++
++/**
++ * svcxdr_decode_fhandle - Decode an NFSv2 file handle
++ * @xdr: XDR stream positioned at an encoded NFSv2 FH
++ * @fhp: OUT: filled-in server file handle
++ *
++ * Return values:
++ *  %false: The encoded file handle was not valid
++ *  %true: @fhp has been initialized
++ */
++bool
++svcxdr_decode_fhandle(struct xdr_stream *xdr, struct svc_fh *fhp)
++{
++	__be32 *p;
++
++	p = xdr_inline_decode(xdr, NFS_FHSIZE);
++	if (!p)
++		return false;
+ 	fh_init(fhp, NFS_FHSIZE);
+-	memcpy(&fhp->fh_handle.fh_base, p, NFS_FHSIZE);
++	memcpy(&fhp->fh_handle.fh_raw, p, NFS_FHSIZE);
+ 	fhp->fh_handle.fh_size = NFS_FHSIZE;
+ 
+-	/* FIXME: Look up export pointer here and verify
+-	 * Sun Secure RPC if requested */
+-	return p + (NFS_FHSIZE >> 2);
++	return true;
+ }
+ 
+-/* Helper function for NFSv2 ACL code */
+-__be32 *nfs2svc_decode_fh(__be32 *p, struct svc_fh *fhp)
++static bool
++svcxdr_encode_fhandle(struct xdr_stream *xdr, const struct svc_fh *fhp)
+ {
+-	return decode_fh(p, fhp);
++	__be32 *p;
++
++	p = xdr_reserve_space(xdr, NFS_FHSIZE);
++	if (!p)
++		return false;
++	memcpy(p, &fhp->fh_handle.fh_raw, NFS_FHSIZE);
++
++	return true;
+ }
+ 
+ static __be32 *
+-encode_fh(__be32 *p, struct svc_fh *fhp)
++encode_timeval(__be32 *p, const struct timespec64 *time)
+ {
+-	memcpy(p, &fhp->fh_handle.fh_base, NFS_FHSIZE);
+-	return p + (NFS_FHSIZE>> 2);
++	*p++ = cpu_to_be32((u32)time->tv_sec);
++	if (time->tv_nsec)
++		*p++ = cpu_to_be32(time->tv_nsec / NSEC_PER_USEC);
++	else
++		*p++ = xdr_zero;
++	return p;
+ }
+ 
+-/*
+- * Decode a file name and make sure that the path contains
+- * no slashes or null bytes.
+- */
+-static __be32 *
+-decode_filename(__be32 *p, char **namp, unsigned int *lenp)
++static bool
++svcxdr_decode_filename(struct xdr_stream *xdr, char **name, unsigned int *len)
+ {
+-	char		*name;
+-	unsigned int	i;
+-
+-	if ((p = xdr_decode_string_inplace(p, namp, lenp, NFS_MAXNAMLEN)) != NULL) {
+-		for (i = 0, name = *namp; i < *lenp; i++, name++) {
+-			if (*name == '\0' || *name == '/')
+-				return NULL;
+-		}
+-	}
++	u32 size, i;
++	__be32 *p;
++	char *c;
++
++	if (xdr_stream_decode_u32(xdr, &size) < 0)
++		return false;
++	if (size == 0 || size > NFS_MAXNAMLEN)
++		return false;
++	p = xdr_inline_decode(xdr, size);
++	if (!p)
++		return false;
+ 
+-	return p;
++	*len = size;
++	*name = (char *)p;
++	for (i = 0, c = *name; i < size; i++, c++)
++		if (*c == '\0' || *c == '/')
++			return false;
++
++	return true;
+ }
+ 
+-static __be32 *
+-decode_sattr(__be32 *p, struct iattr *iap, struct user_namespace *userns)
++static bool
++svcxdr_decode_diropargs(struct xdr_stream *xdr, struct svc_fh *fhp,
++			char **name, unsigned int *len)
++{
++	return svcxdr_decode_fhandle(xdr, fhp) &&
++		svcxdr_decode_filename(xdr, name, len);
++}
++
++static bool
++svcxdr_decode_sattr(struct svc_rqst *rqstp, struct xdr_stream *xdr,
++		    struct iattr *iap)
+ {
+-	u32	tmp, tmp1;
++	u32 tmp1, tmp2;
++	__be32 *p;
++
++	p = xdr_inline_decode(xdr, XDR_UNIT * 8);
++	if (!p)
++		return false;
+ 
+ 	iap->ia_valid = 0;
+ 
+-	/* Sun client bug compatibility check: some sun clients seem to
+-	 * put 0xffff in the mode field when they mean 0xffffffff.
+-	 * Quoting the 4.4BSD nfs server code: Nah nah nah nah na nah.
++	/*
++	 * Some Sun clients put 0xffff in the mode field when they
++	 * mean 0xffffffff.
+ 	 */
+-	if ((tmp = ntohl(*p++)) != (u32)-1 && tmp != 0xffff) {
++	tmp1 = be32_to_cpup(p++);
++	if (tmp1 != (u32)-1 && tmp1 != 0xffff) {
+ 		iap->ia_valid |= ATTR_MODE;
+-		iap->ia_mode = tmp;
++		iap->ia_mode = tmp1;
+ 	}
+-	if ((tmp = ntohl(*p++)) != (u32)-1) {
+-		iap->ia_uid = make_kuid(userns, tmp);
++
++	tmp1 = be32_to_cpup(p++);
++	if (tmp1 != (u32)-1) {
++		iap->ia_uid = make_kuid(nfsd_user_namespace(rqstp), tmp1);
+ 		if (uid_valid(iap->ia_uid))
+ 			iap->ia_valid |= ATTR_UID;
+ 	}
+-	if ((tmp = ntohl(*p++)) != (u32)-1) {
+-		iap->ia_gid = make_kgid(userns, tmp);
++
++	tmp1 = be32_to_cpup(p++);
++	if (tmp1 != (u32)-1) {
++		iap->ia_gid = make_kgid(nfsd_user_namespace(rqstp), tmp1);
+ 		if (gid_valid(iap->ia_gid))
+ 			iap->ia_valid |= ATTR_GID;
+ 	}
+-	if ((tmp = ntohl(*p++)) != (u32)-1) {
++
++	tmp1 = be32_to_cpup(p++);
++	if (tmp1 != (u32)-1) {
+ 		iap->ia_valid |= ATTR_SIZE;
+-		iap->ia_size = tmp;
++		iap->ia_size = tmp1;
+ 	}
+-	tmp  = ntohl(*p++); tmp1 = ntohl(*p++);
+-	if (tmp != (u32)-1 && tmp1 != (u32)-1) {
++
++	tmp1 = be32_to_cpup(p++);
++	tmp2 = be32_to_cpup(p++);
++	if (tmp1 != (u32)-1 && tmp2 != (u32)-1) {
+ 		iap->ia_valid |= ATTR_ATIME | ATTR_ATIME_SET;
+-		iap->ia_atime.tv_sec = tmp;
+-		iap->ia_atime.tv_nsec = tmp1 * 1000; 
++		iap->ia_atime.tv_sec = tmp1;
++		iap->ia_atime.tv_nsec = tmp2 * NSEC_PER_USEC;
+ 	}
+-	tmp  = ntohl(*p++); tmp1 = ntohl(*p++);
+-	if (tmp != (u32)-1 && tmp1 != (u32)-1) {
++
++	tmp1 = be32_to_cpup(p++);
++	tmp2 = be32_to_cpup(p++);
++	if (tmp1 != (u32)-1 && tmp2 != (u32)-1) {
+ 		iap->ia_valid |= ATTR_MTIME | ATTR_MTIME_SET;
+-		iap->ia_mtime.tv_sec = tmp;
+-		iap->ia_mtime.tv_nsec = tmp1 * 1000; 
++		iap->ia_mtime.tv_sec = tmp1;
++		iap->ia_mtime.tv_nsec = tmp2 * NSEC_PER_USEC;
+ 		/*
+ 		 * Passing the invalid value useconds=1000000 for mtime
+ 		 * is a Sun convention for "set both mtime and atime to
+@@ -119,476 +192,447 @@ decode_sattr(__be32 *p, struct iattr *iap, struct user_namespace *userns)
+ 		 * sattr in section 6.1 of "NFS Illustrated" by
+ 		 * Brent Callaghan, Addison-Wesley, ISBN 0-201-32750-5
+ 		 */
+-		if (tmp1 == 1000000)
++		if (tmp2 == 1000000)
+ 			iap->ia_valid &= ~(ATTR_ATIME_SET|ATTR_MTIME_SET);
+ 	}
+-	return p;
++
++	return true;
+ }
+ 
+-static __be32 *
+-encode_fattr(struct svc_rqst *rqstp, __be32 *p, struct svc_fh *fhp,
+-	     struct kstat *stat)
++/**
++ * svcxdr_encode_fattr - Encode NFSv2 file attributes
++ * @rqstp: Context of a completed RPC transaction
++ * @xdr: XDR stream
++ * @fhp: File handle to encode
++ * @stat: Attributes to encode
++ *
++ * Return values:
++ *   %false: Send buffer space was exhausted
++ *   %true: Success
++ */
++bool
++svcxdr_encode_fattr(struct svc_rqst *rqstp, struct xdr_stream *xdr,
++		    const struct svc_fh *fhp, const struct kstat *stat)
+ {
+ 	struct user_namespace *userns = nfsd_user_namespace(rqstp);
+-	struct dentry	*dentry = fhp->fh_dentry;
+-	int type;
++	struct dentry *dentry = fhp->fh_dentry;
++	int type = stat->mode & S_IFMT;
+ 	struct timespec64 time;
+-	u32 f;
++	__be32 *p;
++	u32 fsid;
+ 
+-	type = (stat->mode & S_IFMT);
++	p = xdr_reserve_space(xdr, XDR_UNIT * 17);
++	if (!p)
++		return false;
+ 
+-	*p++ = htonl(nfs_ftypes[type >> 12]);
+-	*p++ = htonl((u32) stat->mode);
+-	*p++ = htonl((u32) stat->nlink);
+-	*p++ = htonl((u32) from_kuid_munged(userns, stat->uid));
+-	*p++ = htonl((u32) from_kgid_munged(userns, stat->gid));
++	*p++ = cpu_to_be32(nfs_ftypes[type >> 12]);
++	*p++ = cpu_to_be32((u32)stat->mode);
++	*p++ = cpu_to_be32((u32)stat->nlink);
++	*p++ = cpu_to_be32((u32)from_kuid_munged(userns, stat->uid));
++	*p++ = cpu_to_be32((u32)from_kgid_munged(userns, stat->gid));
+ 
+-	if (S_ISLNK(type) && stat->size > NFS_MAXPATHLEN) {
+-		*p++ = htonl(NFS_MAXPATHLEN);
+-	} else {
+-		*p++ = htonl((u32) stat->size);
+-	}
+-	*p++ = htonl((u32) stat->blksize);
++	if (S_ISLNK(type) && stat->size > NFS_MAXPATHLEN)
++		*p++ = cpu_to_be32(NFS_MAXPATHLEN);
++	else
++		*p++ = cpu_to_be32((u32) stat->size);
++	*p++ = cpu_to_be32((u32) stat->blksize);
+ 	if (S_ISCHR(type) || S_ISBLK(type))
+-		*p++ = htonl(new_encode_dev(stat->rdev));
++		*p++ = cpu_to_be32(new_encode_dev(stat->rdev));
+ 	else
+-		*p++ = htonl(0xffffffff);
+-	*p++ = htonl((u32) stat->blocks);
++		*p++ = cpu_to_be32(0xffffffff);
++	*p++ = cpu_to_be32((u32)stat->blocks);
++
+ 	switch (fsid_source(fhp)) {
+-	default:
+-	case FSIDSOURCE_DEV:
+-		*p++ = htonl(new_encode_dev(stat->dev));
+-		break;
+ 	case FSIDSOURCE_FSID:
+-		*p++ = htonl((u32) fhp->fh_export->ex_fsid);
++		fsid = (u32)fhp->fh_export->ex_fsid;
+ 		break;
+ 	case FSIDSOURCE_UUID:
+-		f = ((u32*)fhp->fh_export->ex_uuid)[0];
+-		f ^= ((u32*)fhp->fh_export->ex_uuid)[1];
+-		f ^= ((u32*)fhp->fh_export->ex_uuid)[2];
+-		f ^= ((u32*)fhp->fh_export->ex_uuid)[3];
+-		*p++ = htonl(f);
++		fsid = ((u32 *)fhp->fh_export->ex_uuid)[0];
++		fsid ^= ((u32 *)fhp->fh_export->ex_uuid)[1];
++		fsid ^= ((u32 *)fhp->fh_export->ex_uuid)[2];
++		fsid ^= ((u32 *)fhp->fh_export->ex_uuid)[3];
++		break;
++	default:
++		fsid = new_encode_dev(stat->dev);
+ 		break;
+ 	}
+-	*p++ = htonl((u32) stat->ino);
+-	*p++ = htonl((u32) stat->atime.tv_sec);
+-	*p++ = htonl(stat->atime.tv_nsec ? stat->atime.tv_nsec / 1000 : 0);
++	*p++ = cpu_to_be32(fsid);
++
++	*p++ = cpu_to_be32((u32)stat->ino);
++	p = encode_timeval(p, &stat->atime);
+ 	time = stat->mtime;
+-	lease_get_mtime(d_inode(dentry), &time); 
+-	*p++ = htonl((u32) time.tv_sec);
+-	*p++ = htonl(time.tv_nsec ? time.tv_nsec / 1000 : 0); 
+-	*p++ = htonl((u32) stat->ctime.tv_sec);
+-	*p++ = htonl(stat->ctime.tv_nsec ? stat->ctime.tv_nsec / 1000 : 0);
++	lease_get_mtime(d_inode(dentry), &time);
++	p = encode_timeval(p, &time);
++	encode_timeval(p, &stat->ctime);
+ 
+-	return p;
+-}
+-
+-/* Helper function for NFSv2 ACL code */
+-__be32 *nfs2svc_encode_fattr(struct svc_rqst *rqstp, __be32 *p, struct svc_fh *fhp, struct kstat *stat)
+-{
+-	return encode_fattr(rqstp, p, fhp, stat);
++	return true;
+ }
+ 
+ /*
+  * XDR decode functions
+  */
+-int
+-nfssvc_decode_void(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	return xdr_argsize_check(rqstp, p);
+-}
+ 
+-int
+-nfssvc_decode_fhandle(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_decode_fhandleargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_fhandle *args = rqstp->rq_argp;
+ 
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_decode_fhandle(xdr, &args->fh);
+ }
+ 
+-int
+-nfssvc_decode_sattrargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_decode_sattrargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_sattrargs *args = rqstp->rq_argp;
+ 
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	p = decode_sattr(p, &args->attrs, nfsd_user_namespace(rqstp));
+-
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_decode_fhandle(xdr, &args->fh) &&
++		svcxdr_decode_sattr(rqstp, xdr, &args->attrs);
+ }
+ 
+-int
+-nfssvc_decode_diropargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_decode_diropargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_diropargs *args = rqstp->rq_argp;
+ 
+-	if (!(p = decode_fh(p, &args->fh))
+-	 || !(p = decode_filename(p, &args->name, &args->len)))
+-		return 0;
+-
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_decode_diropargs(xdr, &args->fh, &args->name, &args->len);
+ }
+ 
+-int
+-nfssvc_decode_readargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_decode_readargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_readargs *args = rqstp->rq_argp;
+-	unsigned int len;
+-	int v;
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-
+-	args->offset    = ntohl(*p++);
+-	len = args->count     = ntohl(*p++);
+-	p++; /* totalcount - unused */
+-
+-	len = min_t(unsigned int, len, NFSSVC_MAXBLKSIZE_V2);
+-
+-	/* set up somewhere to store response.
+-	 * We take pages, put them on reslist and include in iovec
+-	 */
+-	v=0;
+-	while (len > 0) {
+-		struct page *p = *(rqstp->rq_next_page++);
+-
+-		rqstp->rq_vec[v].iov_base = page_address(p);
+-		rqstp->rq_vec[v].iov_len = min_t(unsigned int, len, PAGE_SIZE);
+-		len -= rqstp->rq_vec[v].iov_len;
+-		v++;
+-	}
+-	args->vlen = v;
+-	return xdr_argsize_check(rqstp, p);
++	u32 totalcount;
++
++	if (!svcxdr_decode_fhandle(xdr, &args->fh))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->offset) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->count) < 0)
++		return false;
++	/* totalcount is ignored */
++	if (xdr_stream_decode_u32(xdr, &totalcount) < 0)
++		return false;
++
++	return true;
+ }
+ 
+-int
+-nfssvc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_decode_writeargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_writeargs *args = rqstp->rq_argp;
+-	unsigned int len, hdr, dlen;
+-	struct kvec *head = rqstp->rq_arg.head;
+-
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-
+-	p++;				/* beginoffset */
+-	args->offset = ntohl(*p++);	/* offset */
+-	p++;				/* totalcount */
+-	len = args->len = ntohl(*p++);
+-	/*
+-	 * The protocol specifies a maximum of 8192 bytes.
+-	 */
+-	if (len > NFSSVC_MAXBLKSIZE_V2)
+-		return 0;
+-
+-	/*
+-	 * Check to make sure that we got the right number of
+-	 * bytes.
+-	 */
+-	hdr = (void*)p - head->iov_base;
+-	if (hdr > head->iov_len)
+-		return 0;
+-	dlen = head->iov_len + rqstp->rq_arg.page_len - hdr;
+-
+-	/*
+-	 * Round the length of the data which was specified up to
+-	 * the next multiple of XDR units and then compare that
+-	 * against the length which was actually received.
+-	 * Note that when RPCSEC/GSS (for example) is used, the
+-	 * data buffer can be padded so dlen might be larger
+-	 * than required.  It must never be smaller.
+-	 */
+-	if (dlen < XDR_QUADLEN(len)*4)
+-		return 0;
+-
+-	args->first.iov_base = (void *)p;
+-	args->first.iov_len = head->iov_len - hdr;
+-	return 1;
++	u32 beginoffset, totalcount;
++
++	if (!svcxdr_decode_fhandle(xdr, &args->fh))
++		return false;
++	/* beginoffset is ignored */
++	if (xdr_stream_decode_u32(xdr, &beginoffset) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->offset) < 0)
++		return false;
++	/* totalcount is ignored */
++	if (xdr_stream_decode_u32(xdr, &totalcount) < 0)
++		return false;
++
++	/* opaque data */
++	if (xdr_stream_decode_u32(xdr, &args->len) < 0)
++		return false;
++	if (args->len > NFSSVC_MAXBLKSIZE_V2)
++		return false;
++
++	return xdr_stream_subsegment(xdr, &args->payload, args->len);
+ }
+ 
+-int
+-nfssvc_decode_createargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_decode_createargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_createargs *args = rqstp->rq_argp;
+ 
+-	if (   !(p = decode_fh(p, &args->fh))
+-	    || !(p = decode_filename(p, &args->name, &args->len)))
+-		return 0;
+-	p = decode_sattr(p, &args->attrs, nfsd_user_namespace(rqstp));
+-
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_decode_diropargs(xdr, &args->fh,
++				       &args->name, &args->len) &&
++		svcxdr_decode_sattr(rqstp, xdr, &args->attrs);
+ }
+ 
+-int
+-nfssvc_decode_renameargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_decode_renameargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_renameargs *args = rqstp->rq_argp;
+ 
+-	if (!(p = decode_fh(p, &args->ffh))
+-	 || !(p = decode_filename(p, &args->fname, &args->flen))
+-	 || !(p = decode_fh(p, &args->tfh))
+-	 || !(p = decode_filename(p, &args->tname, &args->tlen)))
+-		return 0;
+-
+-	return xdr_argsize_check(rqstp, p);
+-}
+-
+-int
+-nfssvc_decode_readlinkargs(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	struct nfsd_readlinkargs *args = rqstp->rq_argp;
+-
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	args->buffer = page_address(*(rqstp->rq_next_page++));
+-
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_decode_diropargs(xdr, &args->ffh,
++				       &args->fname, &args->flen) &&
++		svcxdr_decode_diropargs(xdr, &args->tfh,
++					&args->tname, &args->tlen);
+ }
+ 
+-int
+-nfssvc_decode_linkargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_decode_linkargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_linkargs *args = rqstp->rq_argp;
+ 
+-	if (!(p = decode_fh(p, &args->ffh))
+-	 || !(p = decode_fh(p, &args->tfh))
+-	 || !(p = decode_filename(p, &args->tname, &args->tlen)))
+-		return 0;
+-
+-	return xdr_argsize_check(rqstp, p);
++	return svcxdr_decode_fhandle(xdr, &args->ffh) &&
++		svcxdr_decode_diropargs(xdr, &args->tfh,
++					&args->tname, &args->tlen);
+ }
+ 
+-int
+-nfssvc_decode_symlinkargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_decode_symlinkargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_symlinkargs *args = rqstp->rq_argp;
+-	char *base = (char *)p;
+-	size_t xdrlen;
+-
+-	if (   !(p = decode_fh(p, &args->ffh))
+-	    || !(p = decode_filename(p, &args->fname, &args->flen)))
+-		return 0;
++	struct kvec *head = rqstp->rq_arg.head;
+ 
+-	args->tlen = ntohl(*p++);
++	if (!svcxdr_decode_diropargs(xdr, &args->ffh, &args->fname, &args->flen))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->tlen) < 0)
++		return false;
+ 	if (args->tlen == 0)
+-		return 0;
+-
+-	args->first.iov_base = p;
+-	args->first.iov_len = rqstp->rq_arg.head[0].iov_len;
+-	args->first.iov_len -= (char *)p - base;
++		return false;
+ 
+-	/* This request is never larger than a page. Therefore,
+-	 * transport will deliver either:
+-	 * 1. pathname in the pagelist -> sattr is in the tail.
+-	 * 2. everything in the head buffer -> sattr is in the head.
+-	 */
+-	if (rqstp->rq_arg.page_len) {
+-		if (args->tlen != rqstp->rq_arg.page_len)
+-			return 0;
+-		p = rqstp->rq_arg.tail[0].iov_base;
+-	} else {
+-		xdrlen = XDR_QUADLEN(args->tlen);
+-		if (xdrlen > args->first.iov_len - (8 * sizeof(__be32)))
+-			return 0;
+-		p += xdrlen;
+-	}
+-	decode_sattr(p, &args->attrs, nfsd_user_namespace(rqstp));
+-
+-	return 1;
++	args->first.iov_len = head->iov_len - xdr_stream_pos(xdr);
++	args->first.iov_base = xdr_inline_decode(xdr, args->tlen);
++	if (!args->first.iov_base)
++		return false;
++	return svcxdr_decode_sattr(rqstp, xdr, &args->attrs);
+ }
+ 
+-int
+-nfssvc_decode_readdirargs(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_decode_readdirargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_readdirargs *args = rqstp->rq_argp;
+ 
+-	p = decode_fh(p, &args->fh);
+-	if (!p)
+-		return 0;
+-	args->cookie = ntohl(*p++);
+-	args->count  = ntohl(*p++);
+-	args->count  = min_t(u32, args->count, PAGE_SIZE);
+-	args->buffer = page_address(*(rqstp->rq_next_page++));
++	if (!svcxdr_decode_fhandle(xdr, &args->fh))
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->cookie) < 0)
++		return false;
++	if (xdr_stream_decode_u32(xdr, &args->count) < 0)
++		return false;
+ 
+-	return xdr_argsize_check(rqstp, p);
++	return true;
+ }
+ 
+ /*
+  * XDR encode functions
+  */
+-int
+-nfssvc_encode_void(struct svc_rqst *rqstp, __be32 *p)
+-{
+-	return xdr_ressize_check(rqstp, p);
+-}
+ 
+-int
+-nfssvc_encode_stat(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_encode_statres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_stat *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	return xdr_ressize_check(rqstp, p);
++	return svcxdr_encode_stat(xdr, resp->status);
+ }
+ 
+-int
+-nfssvc_encode_attrstat(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_encode_attrstatres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_attrstat *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	if (resp->status != nfs_ok)
+-		goto out;
+-	p = encode_fattr(rqstp, p, &resp->fh, &resp->stat);
+-out:
+-	return xdr_ressize_check(rqstp, p);
++	if (!svcxdr_encode_stat(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_fattr(rqstp, xdr, &resp->fh, &resp->stat))
++			return false;
++		break;
++	}
++
++	return true;
+ }
+ 
+-int
+-nfssvc_encode_diropres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_encode_diropres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_diropres *resp = rqstp->rq_resp;
+ 
+-	*p++ = resp->status;
+-	if (resp->status != nfs_ok)
+-		goto out;
+-	p = encode_fh(p, &resp->fh);
+-	p = encode_fattr(rqstp, p, &resp->fh, &resp->stat);
+-out:
+-	return xdr_ressize_check(rqstp, p);
++	if (!svcxdr_encode_stat(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_fhandle(xdr, &resp->fh))
++			return false;
++		if (!svcxdr_encode_fattr(rqstp, xdr, &resp->fh, &resp->stat))
++			return false;
++		break;
++	}
++
++	return true;
+ }
+ 
+-int
+-nfssvc_encode_readlinkres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_encode_readlinkres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_readlinkres *resp = rqstp->rq_resp;
+-
+-	*p++ = resp->status;
+-	if (resp->status != nfs_ok)
+-		return xdr_ressize_check(rqstp, p);
+-
+-	*p++ = htonl(resp->len);
+-	xdr_ressize_check(rqstp, p);
+-	rqstp->rq_res.page_len = resp->len;
+-	if (resp->len & 3) {
+-		/* need to pad the tail */
+-		rqstp->rq_res.tail[0].iov_base = p;
+-		*p = 0;
+-		rqstp->rq_res.tail[0].iov_len = 4 - (resp->len&3);
++	struct kvec *head = rqstp->rq_res.head;
++
++	if (!svcxdr_encode_stat(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (xdr_stream_encode_u32(xdr, resp->len) < 0)
++			return false;
++		xdr_write_pages(xdr, &resp->page, 0, resp->len);
++		if (svc_encode_result_payload(rqstp, head->iov_len, resp->len) < 0)
++			return false;
++		break;
+ 	}
+-	return 1;
++
++	return true;
+ }
+ 
+-int
+-nfssvc_encode_readres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_encode_readres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_readres *resp = rqstp->rq_resp;
+-
+-	*p++ = resp->status;
+-	if (resp->status != nfs_ok)
+-		return xdr_ressize_check(rqstp, p);
+-
+-	p = encode_fattr(rqstp, p, &resp->fh, &resp->stat);
+-	*p++ = htonl(resp->count);
+-	xdr_ressize_check(rqstp, p);
+-
+-	/* now update rqstp->rq_res to reflect data as well */
+-	rqstp->rq_res.page_len = resp->count;
+-	if (resp->count & 3) {
+-		/* need to pad the tail */
+-		rqstp->rq_res.tail[0].iov_base = p;
+-		*p = 0;
+-		rqstp->rq_res.tail[0].iov_len = 4 - (resp->count&3);
++	struct kvec *head = rqstp->rq_res.head;
++
++	if (!svcxdr_encode_stat(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		if (!svcxdr_encode_fattr(rqstp, xdr, &resp->fh, &resp->stat))
++			return false;
++		if (xdr_stream_encode_u32(xdr, resp->count) < 0)
++			return false;
++		xdr_write_pages(xdr, resp->pages, rqstp->rq_res.page_base,
++				resp->count);
++		if (svc_encode_result_payload(rqstp, head->iov_len, resp->count) < 0)
++			return false;
++		break;
+ 	}
+-	return 1;
++
++	return true;
+ }
+ 
+-int
+-nfssvc_encode_readdirres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_encode_readdirres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_readdirres *resp = rqstp->rq_resp;
++	struct xdr_buf *dirlist = &resp->dirlist;
++
++	if (!svcxdr_encode_stat(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		xdr_write_pages(xdr, dirlist->pages, 0, dirlist->len);
++		/* no more entries */
++		if (xdr_stream_encode_item_absent(xdr) < 0)
++			return false;
++		if (xdr_stream_encode_bool(xdr, resp->common.err == nfserr_eof) < 0)
++			return false;
++		break;
++	}
+ 
+-	*p++ = resp->status;
+-	if (resp->status != nfs_ok)
+-		return xdr_ressize_check(rqstp, p);
+-
+-	xdr_ressize_check(rqstp, p);
+-	p = resp->buffer;
+-	*p++ = 0;			/* no more entries */
+-	*p++ = htonl((resp->common.err == nfserr_eof));
+-	rqstp->rq_res.page_len = (((unsigned long)p-1) & ~PAGE_MASK)+1;
+-
+-	return 1;
++	return true;
+ }
+ 
+-int
+-nfssvc_encode_statfsres(struct svc_rqst *rqstp, __be32 *p)
++bool
++nfssvc_encode_statfsres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+ {
+ 	struct nfsd_statfsres *resp = rqstp->rq_resp;
+ 	struct kstatfs	*stat = &resp->stats;
++	__be32 *p;
++
++	if (!svcxdr_encode_stat(xdr, resp->status))
++		return false;
++	switch (resp->status) {
++	case nfs_ok:
++		p = xdr_reserve_space(xdr, XDR_UNIT * 5);
++		if (!p)
++			return false;
++		*p++ = cpu_to_be32(NFSSVC_MAXBLKSIZE_V2);
++		*p++ = cpu_to_be32(stat->f_bsize);
++		*p++ = cpu_to_be32(stat->f_blocks);
++		*p++ = cpu_to_be32(stat->f_bfree);
++		*p = cpu_to_be32(stat->f_bavail);
++		break;
++	}
+ 
+-	*p++ = resp->status;
+-	if (resp->status != nfs_ok)
+-		return xdr_ressize_check(rqstp, p);
++	return true;
++}
+ 
+-	*p++ = htonl(NFSSVC_MAXBLKSIZE_V2);	/* max transfer size */
+-	*p++ = htonl(stat->f_bsize);
+-	*p++ = htonl(stat->f_blocks);
+-	*p++ = htonl(stat->f_bfree);
+-	*p++ = htonl(stat->f_bavail);
+-	return xdr_ressize_check(rqstp, p);
++/**
++ * nfssvc_encode_nfscookie - Encode a directory offset cookie
++ * @resp: readdir result context
++ * @offset: offset cookie to encode
++ *
++ * The buffer space for the offset cookie has already been reserved
++ * by svcxdr_encode_entry_common().
++ */
++void nfssvc_encode_nfscookie(struct nfsd_readdirres *resp, u32 offset)
++{
++	__be32 cookie = cpu_to_be32(offset);
++
++	if (!resp->cookie_offset)
++		return;
++
++	write_bytes_to_xdr_buf(&resp->dirlist, resp->cookie_offset, &cookie,
++			       sizeof(cookie));
++	resp->cookie_offset = 0;
+ }
+ 
+-int
+-nfssvc_encode_entry(void *ccdv, const char *name,
+-		    int namlen, loff_t offset, u64 ino, unsigned int d_type)
++static bool
++svcxdr_encode_entry_common(struct nfsd_readdirres *resp, const char *name,
++			   int namlen, loff_t offset, u64 ino)
+ {
+-	struct readdir_cd *ccd = ccdv;
+-	struct nfsd_readdirres *cd = container_of(ccd, struct nfsd_readdirres, common);
+-	__be32	*p = cd->buffer;
+-	int	buflen, slen;
++	struct xdr_buf *dirlist = &resp->dirlist;
++	struct xdr_stream *xdr = &resp->xdr;
++
++	if (xdr_stream_encode_item_present(xdr) < 0)
++		return false;
++	/* fileid */
++	if (xdr_stream_encode_u32(xdr, (u32)ino) < 0)
++		return false;
++	/* name */
++	if (xdr_stream_encode_opaque(xdr, name, min(namlen, NFS2_MAXNAMLEN)) < 0)
++		return false;
++	/* cookie */
++	resp->cookie_offset = dirlist->len;
++	if (xdr_stream_encode_u32(xdr, ~0U) < 0)
++		return false;
++
++	return true;
++}
+ 
+-	/*
+-	dprintk("nfsd: entry(%.*s off %ld ino %ld)\n",
+-			namlen, name, offset, ino);
+-	 */
++/**
++ * nfssvc_encode_entry - encode one NFSv2 READDIR entry
++ * @data: directory context
++ * @name: name of the object to be encoded
++ * @namlen: length of that name, in bytes
++ * @offset: the offset of the previous entry
++ * @ino: the fileid of this entry
++ * @d_type: unused
++ *
++ * Return values:
++ *   %0: Entry was successfully encoded.
++ *   %-EINVAL: An encoding problem occured, secondary status code in resp->common.err
++ *
++ * On exit, the following fields are updated:
++ *   - resp->xdr
++ *   - resp->common.err
++ *   - resp->cookie_offset
++ */
++int nfssvc_encode_entry(void *data, const char *name, int namlen,
++			loff_t offset, u64 ino, unsigned int d_type)
++{
++	struct readdir_cd *ccd = data;
++	struct nfsd_readdirres *resp = container_of(ccd,
++						    struct nfsd_readdirres,
++						    common);
++	unsigned int starting_length = resp->dirlist.len;
+ 
+-	if (offset > ~((u32) 0)) {
+-		cd->common.err = nfserr_fbig;
+-		return -EINVAL;
+-	}
+-	if (cd->offset)
+-		*cd->offset = htonl(offset);
++	/* The offset cookie for the previous entry */
++	nfssvc_encode_nfscookie(resp, offset);
+ 
+-	/* truncate filename */
+-	namlen = min(namlen, NFS2_MAXNAMLEN);
+-	slen = XDR_QUADLEN(namlen);
++	if (!svcxdr_encode_entry_common(resp, name, namlen, offset, ino))
++		goto out_toosmall;
+ 
+-	if ((buflen = cd->buflen - slen - 4) < 0) {
+-		cd->common.err = nfserr_toosmall;
+-		return -EINVAL;
+-	}
+-	if (ino > ~((u32) 0)) {
+-		cd->common.err = nfserr_fbig;
+-		return -EINVAL;
+-	}
+-	*p++ = xdr_one;				/* mark entry present */
+-	*p++ = htonl((u32) ino);		/* file id */
+-	p    = xdr_encode_array(p, name, namlen);/* name length & name */
+-	cd->offset = p;			/* remember pointer */
+-	*p++ = htonl(~0U);		/* offset of next entry */
+-
+-	cd->buflen = buflen;
+-	cd->buffer = p;
+-	cd->common.err = nfs_ok;
++	xdr_commit_encode(&resp->xdr);
++	resp->common.err = nfs_ok;
+ 	return 0;
++
++out_toosmall:
++	resp->cookie_offset = 0;
++	resp->common.err = nfserr_toosmall;
++	resp->dirlist.len = starting_length;
++	return -EINVAL;
+ }
+ 
+ /*
+diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
+index 9eae11a9d21ca..e94634d305912 100644
+--- a/fs/nfsd/state.h
++++ b/fs/nfsd/state.h
+@@ -57,11 +57,11 @@ typedef struct {
+ } stateid_t;
+ 
+ typedef struct {
+-	stateid_t		stid;
++	stateid_t		cs_stid;
+ #define NFS4_COPY_STID 1
+ #define NFS4_COPYNOTIFY_STID 2
+-	unsigned char		sc_type;
+-	refcount_t		sc_count;
++	unsigned char		cs_type;
++	refcount_t		cs_count;
+ } copy_stateid_t;
+ 
+ struct nfsd4_callback {
+@@ -149,6 +149,7 @@ struct nfs4_delegation {
+ /* For recall: */
+ 	int			dl_retries;
+ 	struct nfsd4_callback	dl_recall;
++	bool			dl_recalled;
+ };
+ 
+ #define cb_to_delegation(cb) \
+@@ -174,7 +175,7 @@ static inline struct nfs4_delegation *delegstateid(struct nfs4_stid *s)
+ /* Maximum number of slots per session. 160 is useful for long haul TCP */
+ #define NFSD_MAX_SLOTS_PER_SESSION     160
+ /* Maximum number of operations per session compound */
+-#define NFSD_MAX_OPS_PER_COMPOUND	16
++#define NFSD_MAX_OPS_PER_COMPOUND	50
+ /* Maximum  session per slot cache size */
+ #define NFSD_SLOT_CACHE_SIZE		2048
+ /* Maximum number of NFSD_SLOT_CACHE_SIZE slots per session */
+@@ -282,6 +283,28 @@ struct nfsd4_sessionid {
+ 
+ #define HEXDIR_LEN     33 /* hex version of 16 byte md5 of cl_name plus '\0' */
+ 
++/*
++ *       State                Meaning                  Where set
++ * --------------------------------------------------------------------------
++ * | NFSD4_ACTIVE      | Confirmed, active    | Default                     |
++ * |------------------- ----------------------------------------------------|
++ * | NFSD4_COURTESY    | Courtesy state.      | nfs4_get_client_reaplist    |
++ * |                   | Lease/lock/share     |                             |
++ * |                   | reservation conflict |                             |
++ * |                   | can cause Courtesy   |                             |
++ * |                   | client to be expired |                             |
++ * |------------------------------------------------------------------------|
++ * | NFSD4_EXPIRABLE   | Courtesy client to be| nfs4_laundromat             |
++ * |                   | expired by Laundromat| try_to_expire_client        |
++ * |                   | due to conflict      |                             |
++ * |------------------------------------------------------------------------|
++ */
++enum {
++	NFSD4_ACTIVE = 0,
++	NFSD4_COURTESY,
++	NFSD4_EXPIRABLE,
++};
++
+ /*
+  * struct nfs4_client - one per client.  Clientids live here.
+  *
+@@ -345,6 +368,7 @@ struct nfs4_client {
+ #define NFSD4_CLIENT_UPCALL_LOCK	(5)	/* upcall serialization */
+ #define NFSD4_CLIENT_CB_FLAG_MASK	(1 << NFSD4_CLIENT_CB_UPDATE | \
+ 					 1 << NFSD4_CLIENT_CB_KILL)
++#define NFSD4_CLIENT_CB_RECALL_ANY	(6)
+ 	unsigned long		cl_flags;
+ 	const struct cred	*cl_cb_cred;
+ 	struct rpc_clnt		*cl_cb_client;
+@@ -371,6 +395,10 @@ struct nfs4_client {
+ 
+ 	/* debugging info directory under nfsd/clients/ : */
+ 	struct dentry		*cl_nfsd_dentry;
++	/* 'info' file within that directory. Ref is not counted,
++	 * but will remain valid iff cl_nfsd_dentry != NULL
++	 */
++	struct dentry		*cl_nfsd_info_dentry;
+ 
+ 	/* for nfs41 callbacks */
+ 	/* We currently support a single back channel with a single slot */
+@@ -381,6 +409,13 @@ struct nfs4_client {
+ 	struct list_head	async_copies;	/* list of async copies */
+ 	spinlock_t		async_lock;	/* lock for async copies */
+ 	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
++
++	unsigned int		cl_state;
++	atomic_t		cl_delegs_in_recall;
++
++	struct nfsd4_cb_recall_any	*cl_ra;
++	time64_t		cl_ra_time;
++	struct list_head	cl_ra_cblist;
+ };
+ 
+ /* struct nfs4_client_reset
+@@ -506,14 +541,13 @@ struct nfs4_clnt_odstate {
+  * inode can have multiple filehandles associated with it, so there is
+  * (potentially) a many to one relationship between this struct and struct
+  * inode.
+- *
+- * These are hashed by filehandle in the file_hashtbl, which is protected by
+- * the global state_lock spinlock.
+  */
+ struct nfs4_file {
+ 	refcount_t		fi_ref;
++	struct inode *		fi_inode;
++	bool			fi_aliased;
+ 	spinlock_t		fi_lock;
+-	struct hlist_node       fi_hash;	/* hash on fi_fhandle */
++	struct rhlist_head	fi_rlist;
+ 	struct list_head        fi_stateids;
+ 	union {
+ 		struct list_head	fi_delegations;
+@@ -562,6 +596,10 @@ struct nfs4_ol_stateid {
+ 	struct list_head		st_locks;
+ 	struct nfs4_stateowner		*st_stateowner;
+ 	struct nfs4_clnt_odstate	*st_clnt_odstate;
++/*
++ * These bitmasks use 3 separate bits for READ, ALLOW, and BOTH; see the
++ * comment above bmap_to_share_mode() for explanation:
++ */
+ 	unsigned char			st_access_bmap;
+ 	unsigned char			st_deny_bmap;
+ 	struct nfs4_ol_stateid		*st_openstp;
+@@ -603,6 +641,7 @@ enum nfsd4_cb_op {
+ 	NFSPROC4_CLNT_CB_OFFLOAD,
+ 	NFSPROC4_CLNT_CB_SEQUENCE,
+ 	NFSPROC4_CLNT_CB_NOTIFY_LOCK,
++	NFSPROC4_CLNT_CB_RECALL_ANY,
+ };
+ 
+ /* Returns true iff a is later than b: */
+@@ -623,6 +662,7 @@ struct nfsd4_blocked_lock {
+ 	struct file_lock	nbl_lock;
+ 	struct knfsd_fh		nbl_fh;
+ 	struct nfsd4_callback	nbl_cb;
++	struct kref		nbl_kref;
+ };
+ 
+ struct nfsd4_compound_state;
+@@ -649,26 +689,22 @@ void nfs4_remove_reclaim_record(struct nfs4_client_reclaim *, struct nfsd_net *)
+ extern void nfs4_release_reclaim(struct nfsd_net *);
+ extern struct nfs4_client_reclaim *nfsd4_find_reclaim_client(struct xdr_netobj name,
+ 							struct nfsd_net *nn);
+-extern __be32 nfs4_check_open_reclaim(clientid_t *clid,
+-		struct nfsd4_compound_state *cstate, struct nfsd_net *nn);
++extern __be32 nfs4_check_open_reclaim(struct nfs4_client *);
+ extern void nfsd4_probe_callback(struct nfs4_client *clp);
+ extern void nfsd4_probe_callback_sync(struct nfs4_client *clp);
+ extern void nfsd4_change_callback(struct nfs4_client *clp, struct nfs4_cb_conn *);
+ extern void nfsd4_init_cb(struct nfsd4_callback *cb, struct nfs4_client *clp,
+ 		const struct nfsd4_callback_ops *ops, enum nfsd4_cb_op op);
+-extern void nfsd4_run_cb(struct nfsd4_callback *cb);
++extern bool nfsd4_run_cb(struct nfsd4_callback *cb);
+ extern int nfsd4_create_callback_queue(void);
+ extern void nfsd4_destroy_callback_queue(void);
+ extern void nfsd4_shutdown_callback(struct nfs4_client *);
+ extern void nfsd4_shutdown_copy(struct nfs4_client *clp);
+-extern void nfsd4_prepare_cb_recall(struct nfs4_delegation *dp);
+ extern struct nfs4_client_reclaim *nfs4_client_to_reclaim(struct xdr_netobj name,
+ 				struct xdr_netobj princhash, struct nfsd_net *nn);
+ extern bool nfs4_has_reclaimed_state(struct xdr_netobj name, struct nfsd_net *nn);
+ 
+-struct nfs4_file *find_file(struct knfsd_fh *fh);
+ void put_nfs4_file(struct nfs4_file *fi);
+-extern void nfs4_put_copy(struct nfsd4_copy *copy);
+ extern struct nfsd4_copy *
+ find_async_copy(struct nfs4_client *clp, stateid_t *staetid);
+ extern void nfs4_put_cpntf_state(struct nfsd_net *nn,
+@@ -693,4 +729,9 @@ extern void nfsd4_client_record_remove(struct nfs4_client *clp);
+ extern int nfsd4_client_record_check(struct nfs4_client *clp);
+ extern void nfsd4_record_grace_done(struct nfsd_net *nn);
+ 
++static inline bool try_to_expire_client(struct nfs4_client *clp)
++{
++	cmpxchg(&clp->cl_state, NFSD4_COURTESY, NFSD4_EXPIRABLE);
++	return clp->cl_state == NFSD4_EXPIRABLE;
++}
+ #endif   /* NFSD4_STATE_H */
+diff --git a/fs/nfsd/stats.c b/fs/nfsd/stats.c
+index b1bc582b0493e..777e24e5da33b 100644
+--- a/fs/nfsd/stats.c
++++ b/fs/nfsd/stats.c
+@@ -7,16 +7,14 @@
+  * Format:
+  *	rc <hits> <misses> <nocache>
+  *			Statistsics for the reply cache
+- *	fh <stale> <total-lookups> <anonlookups> <dir-not-in-dcache> <nondir-not-in-dcache>
++ *	fh <stale> <deprecated filehandle cache stats>
+  *			statistics for filehandle lookup
+  *	io <bytes-read> <bytes-written>
+  *			statistics for IO throughput
+- *	th <threads> <fullcnt> <10%-20%> <20%-30%> ... <90%-100%> <100%> 
+- *			time (seconds) when nfsd thread usage above thresholds
+- *			and number of times that all threads were in use
+- *	ra cache-size  <10%  <20%  <30% ... <100% not-found
+- *			number of times that read-ahead entry was found that deep in
+- *			the cache.
++ *	th <threads> <deprecated thread usage histogram stats>
++ *			number of threads
++ *	ra <deprecated ra-cache stats>
++ *
+  *	plus generic RPC stats (see net/sunrpc/stats.c)
+  *
+  * Copyright (C) 1995, 1996, 1997 Olaf Kirch <okir@monad.swb.de>
+@@ -34,35 +32,28 @@ struct svc_stat		nfsd_svcstats = {
+ 	.program	= &nfsd_program,
+ };
+ 
+-static int nfsd_proc_show(struct seq_file *seq, void *v)
++static int nfsd_show(struct seq_file *seq, void *v)
+ {
+ 	int i;
+ 
+-	seq_printf(seq, "rc %u %u %u\nfh %u %u %u %u %u\nio %u %u\n",
+-		      nfsdstats.rchits,
+-		      nfsdstats.rcmisses,
+-		      nfsdstats.rcnocache,
+-		      nfsdstats.fh_stale,
+-		      nfsdstats.fh_lookup,
+-		      nfsdstats.fh_anon,
+-		      nfsdstats.fh_nocache_dir,
+-		      nfsdstats.fh_nocache_nondir,
+-		      nfsdstats.io_read,
+-		      nfsdstats.io_write);
++	seq_printf(seq, "rc %lld %lld %lld\nfh %lld 0 0 0 0\nio %lld %lld\n",
++		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_RC_HITS]),
++		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_RC_MISSES]),
++		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_RC_NOCACHE]),
++		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_FH_STALE]),
++		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_IO_READ]),
++		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_IO_WRITE]));
++
+ 	/* thread usage: */
+-	seq_printf(seq, "th %u %u", nfsdstats.th_cnt, nfsdstats.th_fullcnt);
+-	for (i=0; i<10; i++) {
+-		unsigned int jifs = nfsdstats.th_usage[i];
+-		unsigned int sec = jifs / HZ, msec = (jifs % HZ)*1000/HZ;
+-		seq_printf(seq, " %u.%03u", sec, msec);
+-	}
++	seq_printf(seq, "th %u 0", atomic_read(&nfsdstats.th_cnt));
++
++	/* deprecated thread usage histogram stats */
++	for (i = 0; i < 10; i++)
++		seq_puts(seq, " 0.000");
++
++	/* deprecated ra-cache stats */
++	seq_puts(seq, "\nra 0 0 0 0 0 0 0 0 0 0 0 0\n");
+ 
+-	/* newline and ra-cache */
+-	seq_printf(seq, "\nra %u", nfsdstats.ra_size);
+-	for (i=0; i<11; i++)
+-		seq_printf(seq, " %u", nfsdstats.ra_depth[i]);
+-	seq_putc(seq, '\n');
+-	
+ 	/* show my rpc info */
+ 	svc_seq_show(seq, &nfsd_svcstats);
+ 
+@@ -70,8 +61,10 @@ static int nfsd_proc_show(struct seq_file *seq, void *v)
+ 	/* Show count for individual nfsv4 operations */
+ 	/* Writing operation numbers 0 1 2 also for maintaining uniformity */
+ 	seq_printf(seq,"proc4ops %u", LAST_NFS4_OP + 1);
+-	for (i = 0; i <= LAST_NFS4_OP; i++)
+-		seq_printf(seq, " %u", nfsdstats.nfs4_opcount[i]);
++	for (i = 0; i <= LAST_NFS4_OP; i++) {
++		seq_printf(seq, " %lld",
++			   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_NFS4_OP(i)]));
++	}
+ 
+ 	seq_putc(seq, '\n');
+ #endif
+@@ -79,26 +72,65 @@ static int nfsd_proc_show(struct seq_file *seq, void *v)
+ 	return 0;
+ }
+ 
+-static int nfsd_proc_open(struct inode *inode, struct file *file)
++DEFINE_PROC_SHOW_ATTRIBUTE(nfsd);
++
++int nfsd_percpu_counters_init(struct percpu_counter counters[], int num)
+ {
+-	return single_open(file, nfsd_proc_show, NULL);
++	int i, err = 0;
++
++	for (i = 0; !err && i < num; i++)
++		err = percpu_counter_init(&counters[i], 0, GFP_KERNEL);
++
++	if (!err)
++		return 0;
++
++	for (; i > 0; i--)
++		percpu_counter_destroy(&counters[i-1]);
++
++	return err;
+ }
+ 
+-static const struct proc_ops nfsd_proc_ops = {
+-	.proc_open	= nfsd_proc_open,
+-	.proc_read	= seq_read,
+-	.proc_lseek	= seq_lseek,
+-	.proc_release	= single_release,
+-};
++void nfsd_percpu_counters_reset(struct percpu_counter counters[], int num)
++{
++	int i;
++
++	for (i = 0; i < num; i++)
++		percpu_counter_set(&counters[i], 0);
++}
++
++void nfsd_percpu_counters_destroy(struct percpu_counter counters[], int num)
++{
++	int i;
++
++	for (i = 0; i < num; i++)
++		percpu_counter_destroy(&counters[i]);
++}
++
++static int nfsd_stat_counters_init(void)
++{
++	return nfsd_percpu_counters_init(nfsdstats.counter, NFSD_STATS_COUNTERS_NUM);
++}
++
++static void nfsd_stat_counters_destroy(void)
++{
++	nfsd_percpu_counters_destroy(nfsdstats.counter, NFSD_STATS_COUNTERS_NUM);
++}
+ 
+-void
+-nfsd_stat_init(void)
++int nfsd_stat_init(void)
+ {
++	int err;
++
++	err = nfsd_stat_counters_init();
++	if (err)
++		return err;
++
+ 	svc_proc_register(&init_net, &nfsd_svcstats, &nfsd_proc_ops);
++
++	return 0;
+ }
+ 
+-void
+-nfsd_stat_shutdown(void)
++void nfsd_stat_shutdown(void)
+ {
++	nfsd_stat_counters_destroy();
+ 	svc_proc_unregister(&init_net, "nfsd");
+ }
+diff --git a/fs/nfsd/stats.h b/fs/nfsd/stats.h
+index b23fdac698201..9b43dc3d99913 100644
+--- a/fs/nfsd/stats.h
++++ b/fs/nfsd/stats.h
+@@ -8,37 +8,89 @@
+ #define _NFSD_STATS_H
+ 
+ #include <uapi/linux/nfsd/stats.h>
++#include <linux/percpu_counter.h>
+ 
+ 
+-struct nfsd_stats {
+-	unsigned int	rchits;		/* repcache hits */
+-	unsigned int	rcmisses;	/* repcache hits */
+-	unsigned int	rcnocache;	/* uncached reqs */
+-	unsigned int	fh_stale;	/* FH stale error */
+-	unsigned int	fh_lookup;	/* dentry cached */
+-	unsigned int	fh_anon;	/* anon file dentry returned */
+-	unsigned int	fh_nocache_dir;	/* filehandle not found in dcache */
+-	unsigned int	fh_nocache_nondir;	/* filehandle not found in dcache */
+-	unsigned int	io_read;	/* bytes returned to read requests */
+-	unsigned int	io_write;	/* bytes passed in write requests */
+-	unsigned int	th_cnt;		/* number of available threads */
+-	unsigned int	th_usage[10];	/* number of ticks during which n perdeciles
+-					 * of available threads were in use */
+-	unsigned int	th_fullcnt;	/* number of times last free thread was used */
+-	unsigned int	ra_size;	/* size of ra cache */
+-	unsigned int	ra_depth[11];	/* number of times ra entry was found that deep
+-					 * in the cache (10percentiles). [10] = not found */
++enum {
++	NFSD_STATS_RC_HITS,		/* repcache hits */
++	NFSD_STATS_RC_MISSES,		/* repcache misses */
++	NFSD_STATS_RC_NOCACHE,		/* uncached reqs */
++	NFSD_STATS_FH_STALE,		/* FH stale error */
++	NFSD_STATS_IO_READ,		/* bytes returned to read requests */
++	NFSD_STATS_IO_WRITE,		/* bytes passed in write requests */
+ #ifdef CONFIG_NFSD_V4
+-	unsigned int	nfs4_opcount[LAST_NFS4_OP + 1];	/* count of individual nfsv4 operations */
++	NFSD_STATS_FIRST_NFS4_OP,	/* count of individual nfsv4 operations */
++	NFSD_STATS_LAST_NFS4_OP = NFSD_STATS_FIRST_NFS4_OP + LAST_NFS4_OP,
++#define NFSD_STATS_NFS4_OP(op)	(NFSD_STATS_FIRST_NFS4_OP + (op))
+ #endif
+-
++	NFSD_STATS_COUNTERS_NUM
+ };
+ 
++struct nfsd_stats {
++	struct percpu_counter	counter[NFSD_STATS_COUNTERS_NUM];
++
++	atomic_t	th_cnt;		/* number of available threads */
++};
+ 
+ extern struct nfsd_stats	nfsdstats;
++
+ extern struct svc_stat		nfsd_svcstats;
+ 
+-void	nfsd_stat_init(void);
+-void	nfsd_stat_shutdown(void);
++int nfsd_percpu_counters_init(struct percpu_counter counters[], int num);
++void nfsd_percpu_counters_reset(struct percpu_counter counters[], int num);
++void nfsd_percpu_counters_destroy(struct percpu_counter counters[], int num);
++int nfsd_stat_init(void);
++void nfsd_stat_shutdown(void);
++
++static inline void nfsd_stats_rc_hits_inc(void)
++{
++	percpu_counter_inc(&nfsdstats.counter[NFSD_STATS_RC_HITS]);
++}
++
++static inline void nfsd_stats_rc_misses_inc(void)
++{
++	percpu_counter_inc(&nfsdstats.counter[NFSD_STATS_RC_MISSES]);
++}
++
++static inline void nfsd_stats_rc_nocache_inc(void)
++{
++	percpu_counter_inc(&nfsdstats.counter[NFSD_STATS_RC_NOCACHE]);
++}
++
++static inline void nfsd_stats_fh_stale_inc(struct svc_export *exp)
++{
++	percpu_counter_inc(&nfsdstats.counter[NFSD_STATS_FH_STALE]);
++	if (exp)
++		percpu_counter_inc(&exp->ex_stats.counter[EXP_STATS_FH_STALE]);
++}
++
++static inline void nfsd_stats_io_read_add(struct svc_export *exp, s64 amount)
++{
++	percpu_counter_add(&nfsdstats.counter[NFSD_STATS_IO_READ], amount);
++	if (exp)
++		percpu_counter_add(&exp->ex_stats.counter[EXP_STATS_IO_READ], amount);
++}
++
++static inline void nfsd_stats_io_write_add(struct svc_export *exp, s64 amount)
++{
++	percpu_counter_add(&nfsdstats.counter[NFSD_STATS_IO_WRITE], amount);
++	if (exp)
++		percpu_counter_add(&exp->ex_stats.counter[EXP_STATS_IO_WRITE], amount);
++}
++
++static inline void nfsd_stats_payload_misses_inc(struct nfsd_net *nn)
++{
++	percpu_counter_inc(&nn->counter[NFSD_NET_PAYLOAD_MISSES]);
++}
++
++static inline void nfsd_stats_drc_mem_usage_add(struct nfsd_net *nn, s64 amount)
++{
++	percpu_counter_add(&nn->counter[NFSD_NET_DRC_MEM_USAGE], amount);
++}
++
++static inline void nfsd_stats_drc_mem_usage_sub(struct nfsd_net *nn, s64 amount)
++{
++	percpu_counter_sub(&nn->counter[NFSD_NET_DRC_MEM_USAGE], amount);
++}
+ 
+ #endif /* _NFSD_STATS_H */
+diff --git a/fs/nfsd/trace.c b/fs/nfsd/trace.c
+index 90967466a1e56..f008b95ceec2e 100644
+--- a/fs/nfsd/trace.c
++++ b/fs/nfsd/trace.c
+@@ -1,3 +1,4 @@
++// SPDX-License-Identifier: GPL-2.0
+ 
+ #define CREATE_TRACE_POINTS
+ #include "trace.h"
+diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h
+index a952f4a9b2a68..445d00f00eab7 100644
+--- a/fs/nfsd/trace.h
++++ b/fs/nfsd/trace.h
+@@ -12,6 +12,86 @@
+ #include "export.h"
+ #include "nfsfh.h"
+ 
++#define NFSD_TRACE_PROC_ARG_FIELDS \
++		__field(unsigned int, netns_ino) \
++		__field(u32, xid) \
++		__array(unsigned char, server, sizeof(struct sockaddr_in6)) \
++		__array(unsigned char, client, sizeof(struct sockaddr_in6))
++
++#define NFSD_TRACE_PROC_ARG_ASSIGNMENTS \
++		do { \
++			__entry->netns_ino = SVC_NET(rqstp)->ns.inum; \
++			__entry->xid = be32_to_cpu(rqstp->rq_xid); \
++			memcpy(__entry->server, &rqstp->rq_xprt->xpt_local, \
++			       rqstp->rq_xprt->xpt_locallen); \
++			memcpy(__entry->client, &rqstp->rq_xprt->xpt_remote, \
++			       rqstp->rq_xprt->xpt_remotelen); \
++		} while (0);
++
++#define NFSD_TRACE_PROC_RES_FIELDS \
++		__field(unsigned int, netns_ino) \
++		__field(u32, xid) \
++		__field(unsigned long, status) \
++		__array(unsigned char, server, sizeof(struct sockaddr_in6)) \
++		__array(unsigned char, client, sizeof(struct sockaddr_in6))
++
++#define NFSD_TRACE_PROC_RES_ASSIGNMENTS(error) \
++		do { \
++			__entry->netns_ino = SVC_NET(rqstp)->ns.inum; \
++			__entry->xid = be32_to_cpu(rqstp->rq_xid); \
++			__entry->status = be32_to_cpu(error); \
++			memcpy(__entry->server, &rqstp->rq_xprt->xpt_local, \
++			       rqstp->rq_xprt->xpt_locallen); \
++			memcpy(__entry->client, &rqstp->rq_xprt->xpt_remote, \
++			       rqstp->rq_xprt->xpt_remotelen); \
++		} while (0);
++
++DECLARE_EVENT_CLASS(nfsd_xdr_err_class,
++	TP_PROTO(
++		const struct svc_rqst *rqstp
++	),
++	TP_ARGS(rqstp),
++	TP_STRUCT__entry(
++		NFSD_TRACE_PROC_ARG_FIELDS
++
++		__field(u32, vers)
++		__field(u32, proc)
++	),
++	TP_fast_assign(
++		NFSD_TRACE_PROC_ARG_ASSIGNMENTS
++
++		__entry->vers = rqstp->rq_vers;
++		__entry->proc = rqstp->rq_proc;
++	),
++	TP_printk("xid=0x%08x vers=%u proc=%u",
++		__entry->xid, __entry->vers, __entry->proc
++	)
++);
++
++#define DEFINE_NFSD_XDR_ERR_EVENT(name) \
++DEFINE_EVENT(nfsd_xdr_err_class, nfsd_##name##_err, \
++	TP_PROTO(const struct svc_rqst *rqstp), \
++	TP_ARGS(rqstp))
++
++DEFINE_NFSD_XDR_ERR_EVENT(garbage_args);
++DEFINE_NFSD_XDR_ERR_EVENT(cant_encode);
++
++#define show_nfsd_may_flags(x)						\
++	__print_flags(x, "|",						\
++		{ NFSD_MAY_EXEC,		"EXEC" },		\
++		{ NFSD_MAY_WRITE,		"WRITE" },		\
++		{ NFSD_MAY_READ,		"READ" },		\
++		{ NFSD_MAY_SATTR,		"SATTR" },		\
++		{ NFSD_MAY_TRUNC,		"TRUNC" },		\
++		{ NFSD_MAY_LOCK,		"LOCK" },		\
++		{ NFSD_MAY_OWNER_OVERRIDE,	"OWNER_OVERRIDE" },	\
++		{ NFSD_MAY_LOCAL_ACCESS,	"LOCAL_ACCESS" },	\
++		{ NFSD_MAY_BYPASS_GSS_ON_ROOT,	"BYPASS_GSS_ON_ROOT" },	\
++		{ NFSD_MAY_NOT_BREAK_LEASE,	"NOT_BREAK_LEASE" },	\
++		{ NFSD_MAY_BYPASS_GSS,		"BYPASS_GSS" },		\
++		{ NFSD_MAY_READ_IF_EXEC,	"READ_IF_EXEC" },	\
++		{ NFSD_MAY_64BIT_COOKIE,	"64BIT_COOKIE" })
++
+ TRACE_EVENT(nfsd_compound,
+ 	TP_PROTO(const struct svc_rqst *rqst,
+ 		 u32 args_opcnt),
+@@ -51,6 +131,56 @@ TRACE_EVENT(nfsd_compound_status,
+ 		__get_str(name), __entry->status)
+ )
+ 
++TRACE_EVENT(nfsd_compound_decode_err,
++	TP_PROTO(
++		const struct svc_rqst *rqstp,
++		u32 args_opcnt,
++		u32 resp_opcnt,
++		u32 opnum,
++		__be32 status
++	),
++	TP_ARGS(rqstp, args_opcnt, resp_opcnt, opnum, status),
++	TP_STRUCT__entry(
++		NFSD_TRACE_PROC_RES_FIELDS
++
++		__field(u32, args_opcnt)
++		__field(u32, resp_opcnt)
++		__field(u32, opnum)
++	),
++	TP_fast_assign(
++		NFSD_TRACE_PROC_RES_ASSIGNMENTS(status)
++
++		__entry->args_opcnt = args_opcnt;
++		__entry->resp_opcnt = resp_opcnt;
++		__entry->opnum = opnum;
++	),
++	TP_printk("op=%u/%u opnum=%u status=%lu",
++		__entry->resp_opcnt, __entry->args_opcnt,
++		__entry->opnum, __entry->status)
++);
++
++TRACE_EVENT(nfsd_compound_encode_err,
++	TP_PROTO(
++		const struct svc_rqst *rqstp,
++		u32 opnum,
++		__be32 status
++	),
++	TP_ARGS(rqstp, opnum, status),
++	TP_STRUCT__entry(
++		NFSD_TRACE_PROC_RES_FIELDS
++
++		__field(u32, opnum)
++	),
++	TP_fast_assign(
++		NFSD_TRACE_PROC_RES_ASSIGNMENTS(status)
++
++		__entry->opnum = opnum;
++	),
++	TP_printk("opnum=%u status=%lu",
++		__entry->opnum, __entry->status)
++);
++
++
+ DECLARE_EVENT_CLASS(nfsd_fh_err_class,
+ 	TP_PROTO(struct svc_rqst *rqstp,
+ 		 struct svc_fh	*fhp,
+@@ -247,10 +377,106 @@ DEFINE_EVENT(nfsd_err_class, nfsd_##name,	\
+ DEFINE_NFSD_ERR_EVENT(read_err);
+ DEFINE_NFSD_ERR_EVENT(write_err);
+ 
++TRACE_EVENT(nfsd_dirent,
++	TP_PROTO(struct svc_fh *fhp,
++		 u64 ino,
++		 const char *name,
++		 int namlen),
++	TP_ARGS(fhp, ino, name, namlen),
++	TP_STRUCT__entry(
++		__field(u32, fh_hash)
++		__field(u64, ino)
++		__field(int, len)
++		__dynamic_array(unsigned char, name, namlen)
++	),
++	TP_fast_assign(
++		__entry->fh_hash = fhp ? knfsd_fh_hash(&fhp->fh_handle) : 0;
++		__entry->ino = ino;
++		__entry->len = namlen;
++		memcpy(__get_str(name), name, namlen);
++	),
++	TP_printk("fh_hash=0x%08x ino=%llu name=%.*s",
++		__entry->fh_hash, __entry->ino,
++		__entry->len, __get_str(name))
++)
++
++DECLARE_EVENT_CLASS(nfsd_copy_err_class,
++	TP_PROTO(struct svc_rqst *rqstp,
++		 struct svc_fh	*src_fhp,
++		 loff_t		src_offset,
++		 struct svc_fh	*dst_fhp,
++		 loff_t		dst_offset,
++		 u64		count,
++		 int		status),
++	TP_ARGS(rqstp, src_fhp, src_offset, dst_fhp, dst_offset, count, status),
++	TP_STRUCT__entry(
++		__field(u32, xid)
++		__field(u32, src_fh_hash)
++		__field(loff_t, src_offset)
++		__field(u32, dst_fh_hash)
++		__field(loff_t, dst_offset)
++		__field(u64, count)
++		__field(int, status)
++	),
++	TP_fast_assign(
++		__entry->xid = be32_to_cpu(rqstp->rq_xid);
++		__entry->src_fh_hash = knfsd_fh_hash(&src_fhp->fh_handle);
++		__entry->src_offset = src_offset;
++		__entry->dst_fh_hash = knfsd_fh_hash(&dst_fhp->fh_handle);
++		__entry->dst_offset = dst_offset;
++		__entry->count = count;
++		__entry->status = status;
++	),
++	TP_printk("xid=0x%08x src_fh_hash=0x%08x src_offset=%lld "
++			"dst_fh_hash=0x%08x dst_offset=%lld "
++			"count=%llu status=%d",
++		  __entry->xid, __entry->src_fh_hash, __entry->src_offset,
++		  __entry->dst_fh_hash, __entry->dst_offset,
++		  (unsigned long long)__entry->count,
++		  __entry->status)
++)
++
++#define DEFINE_NFSD_COPY_ERR_EVENT(name)		\
++DEFINE_EVENT(nfsd_copy_err_class, nfsd_##name,		\
++	TP_PROTO(struct svc_rqst	*rqstp,		\
++		 struct svc_fh		*src_fhp,	\
++		 loff_t			src_offset,	\
++		 struct svc_fh		*dst_fhp,	\
++		 loff_t			dst_offset,	\
++		 u64			count,		\
++		 int			status),	\
++	TP_ARGS(rqstp, src_fhp, src_offset, dst_fhp, dst_offset, \
++		count, status))
++
++DEFINE_NFSD_COPY_ERR_EVENT(clone_file_range_err);
++
+ #include "state.h"
+ #include "filecache.h"
+ #include "vfs.h"
+ 
++TRACE_EVENT(nfsd_delegret_wakeup,
++	TP_PROTO(
++		const struct svc_rqst *rqstp,
++		const struct inode *inode,
++		long timeo
++	),
++	TP_ARGS(rqstp, inode, timeo),
++	TP_STRUCT__entry(
++		__field(u32, xid)
++		__field(const void *, inode)
++		__field(long, timeo)
++	),
++	TP_fast_assign(
++		__entry->xid = be32_to_cpu(rqstp->rq_xid);
++		__entry->inode = inode;
++		__entry->timeo = timeo;
++	),
++	TP_printk("xid=0x%08x inode=%p%s",
++		  __entry->xid, __entry->inode,
++		  __entry->timeo == 0 ? " (timed out)" : ""
++	)
++);
++
+ DECLARE_EVENT_CLASS(nfsd_stateid_class,
+ 	TP_PROTO(stateid_t *stp),
+ 	TP_ARGS(stp),
+@@ -291,7 +517,7 @@ DEFINE_STATEID_EVENT(layout_recall_release);
+ 
+ DEFINE_STATEID_EVENT(open);
+ DEFINE_STATEID_EVENT(deleg_read);
+-DEFINE_STATEID_EVENT(deleg_break);
++DEFINE_STATEID_EVENT(deleg_return);
+ DEFINE_STATEID_EVENT(deleg_recall);
+ 
+ DECLARE_EVENT_CLASS(nfsd_stateseqid_class,
+@@ -324,6 +550,61 @@ DEFINE_EVENT(nfsd_stateseqid_class, nfsd_##name, \
+ DEFINE_STATESEQID_EVENT(preprocess);
+ DEFINE_STATESEQID_EVENT(open_confirm);
+ 
++TRACE_DEFINE_ENUM(NFS4_OPEN_STID);
++TRACE_DEFINE_ENUM(NFS4_LOCK_STID);
++TRACE_DEFINE_ENUM(NFS4_DELEG_STID);
++TRACE_DEFINE_ENUM(NFS4_CLOSED_STID);
++TRACE_DEFINE_ENUM(NFS4_REVOKED_DELEG_STID);
++TRACE_DEFINE_ENUM(NFS4_CLOSED_DELEG_STID);
++TRACE_DEFINE_ENUM(NFS4_LAYOUT_STID);
++
++#define show_stid_type(x)						\
++	__print_flags(x, "|",						\
++		{ NFS4_OPEN_STID,		"OPEN" },		\
++		{ NFS4_LOCK_STID,		"LOCK" },		\
++		{ NFS4_DELEG_STID,		"DELEG" },		\
++		{ NFS4_CLOSED_STID,		"CLOSED" },		\
++		{ NFS4_REVOKED_DELEG_STID,	"REVOKED" },		\
++		{ NFS4_CLOSED_DELEG_STID,	"CLOSED_DELEG" },	\
++		{ NFS4_LAYOUT_STID,		"LAYOUT" })
++
++DECLARE_EVENT_CLASS(nfsd_stid_class,
++	TP_PROTO(
++		const struct nfs4_stid *stid
++	),
++	TP_ARGS(stid),
++	TP_STRUCT__entry(
++		__field(unsigned long, sc_type)
++		__field(int, sc_count)
++		__field(u32, cl_boot)
++		__field(u32, cl_id)
++		__field(u32, si_id)
++		__field(u32, si_generation)
++	),
++	TP_fast_assign(
++		const stateid_t *stp = &stid->sc_stateid;
++
++		__entry->sc_type = stid->sc_type;
++		__entry->sc_count = refcount_read(&stid->sc_count);
++		__entry->cl_boot = stp->si_opaque.so_clid.cl_boot;
++		__entry->cl_id = stp->si_opaque.so_clid.cl_id;
++		__entry->si_id = stp->si_opaque.so_id;
++		__entry->si_generation = stp->si_generation;
++	),
++	TP_printk("client %08x:%08x stateid %08x:%08x ref=%d type=%s",
++		__entry->cl_boot, __entry->cl_id,
++		__entry->si_id, __entry->si_generation,
++		__entry->sc_count, show_stid_type(__entry->sc_type)
++	)
++);
++
++#define DEFINE_STID_EVENT(name)					\
++DEFINE_EVENT(nfsd_stid_class, nfsd_stid_##name,			\
++	TP_PROTO(const struct nfs4_stid *stid),			\
++	TP_ARGS(stid))
++
++DEFINE_STID_EVENT(revoke);
++
+ DECLARE_EVENT_CLASS(nfsd_clientid_class,
+ 	TP_PROTO(const clientid_t *clid),
+ 	TP_ARGS(clid),
+@@ -343,7 +624,12 @@ DEFINE_EVENT(nfsd_clientid_class, nfsd_clid_##name, \
+ 	TP_PROTO(const clientid_t *clid), \
+ 	TP_ARGS(clid))
+ 
+-DEFINE_CLIENTID_EVENT(expired);
++DEFINE_CLIENTID_EVENT(expire_unconf);
++DEFINE_CLIENTID_EVENT(reclaim_complete);
++DEFINE_CLIENTID_EVENT(confirmed);
++DEFINE_CLIENTID_EVENT(destroyed);
++DEFINE_CLIENTID_EVENT(admin_expired);
++DEFINE_CLIENTID_EVENT(replaced);
+ DEFINE_CLIENTID_EVENT(purged);
+ DEFINE_CLIENTID_EVENT(renew);
+ DEFINE_CLIENTID_EVENT(stale);
+@@ -368,56 +654,145 @@ DEFINE_EVENT(nfsd_net_class, nfsd_##name, \
+ DEFINE_NET_EVENT(grace_start);
+ DEFINE_NET_EVENT(grace_complete);
+ 
+-TRACE_EVENT(nfsd_clid_inuse_err,
++TRACE_EVENT(nfsd_writeverf_reset,
++	TP_PROTO(
++		const struct nfsd_net *nn,
++		const struct svc_rqst *rqstp,
++		int error
++	),
++	TP_ARGS(nn, rqstp, error),
++	TP_STRUCT__entry(
++		__field(unsigned long long, boot_time)
++		__field(u32, xid)
++		__field(int, error)
++		__array(unsigned char, verifier, NFS4_VERIFIER_SIZE)
++	),
++	TP_fast_assign(
++		__entry->boot_time = nn->boot_time;
++		__entry->xid = be32_to_cpu(rqstp->rq_xid);
++		__entry->error = error;
++
++		/* avoid seqlock inside TP_fast_assign */
++		memcpy(__entry->verifier, nn->writeverf,
++		       NFS4_VERIFIER_SIZE);
++	),
++	TP_printk("boot_time=%16llx xid=0x%08x error=%d new verifier=0x%s",
++		__entry->boot_time, __entry->xid, __entry->error,
++		__print_hex_str(__entry->verifier, NFS4_VERIFIER_SIZE)
++	)
++);
++
++TRACE_EVENT(nfsd_clid_cred_mismatch,
++	TP_PROTO(
++		const struct nfs4_client *clp,
++		const struct svc_rqst *rqstp
++	),
++	TP_ARGS(clp, rqstp),
++	TP_STRUCT__entry(
++		__field(u32, cl_boot)
++		__field(u32, cl_id)
++		__field(unsigned long, cl_flavor)
++		__field(unsigned long, new_flavor)
++		__array(unsigned char, addr, sizeof(struct sockaddr_in6))
++	),
++	TP_fast_assign(
++		__entry->cl_boot = clp->cl_clientid.cl_boot;
++		__entry->cl_id = clp->cl_clientid.cl_id;
++		__entry->cl_flavor = clp->cl_cred.cr_flavor;
++		__entry->new_flavor = rqstp->rq_cred.cr_flavor;
++		memcpy(__entry->addr, &rqstp->rq_xprt->xpt_remote,
++			sizeof(struct sockaddr_in6));
++	),
++	TP_printk("client %08x:%08x flavor=%s, conflict=%s from addr=%pISpc",
++		__entry->cl_boot, __entry->cl_id,
++		show_nfsd_authflavor(__entry->cl_flavor),
++		show_nfsd_authflavor(__entry->new_flavor), __entry->addr
++	)
++)
++
++TRACE_EVENT(nfsd_clid_verf_mismatch,
++	TP_PROTO(
++		const struct nfs4_client *clp,
++		const struct svc_rqst *rqstp,
++		const nfs4_verifier *verf
++	),
++	TP_ARGS(clp, rqstp, verf),
++	TP_STRUCT__entry(
++		__field(u32, cl_boot)
++		__field(u32, cl_id)
++		__array(unsigned char, cl_verifier, NFS4_VERIFIER_SIZE)
++		__array(unsigned char, new_verifier, NFS4_VERIFIER_SIZE)
++		__array(unsigned char, addr, sizeof(struct sockaddr_in6))
++	),
++	TP_fast_assign(
++		__entry->cl_boot = clp->cl_clientid.cl_boot;
++		__entry->cl_id = clp->cl_clientid.cl_id;
++		memcpy(__entry->cl_verifier, (void *)&clp->cl_verifier,
++		       NFS4_VERIFIER_SIZE);
++		memcpy(__entry->new_verifier, (void *)verf,
++		       NFS4_VERIFIER_SIZE);
++		memcpy(__entry->addr, &rqstp->rq_xprt->xpt_remote,
++			sizeof(struct sockaddr_in6));
++	),
++	TP_printk("client %08x:%08x verf=0x%s, updated=0x%s from addr=%pISpc",
++		__entry->cl_boot, __entry->cl_id,
++		__print_hex_str(__entry->cl_verifier, NFS4_VERIFIER_SIZE),
++		__print_hex_str(__entry->new_verifier, NFS4_VERIFIER_SIZE),
++		__entry->addr
++	)
++);
++
++DECLARE_EVENT_CLASS(nfsd_clid_class,
+ 	TP_PROTO(const struct nfs4_client *clp),
+ 	TP_ARGS(clp),
+ 	TP_STRUCT__entry(
+ 		__field(u32, cl_boot)
+ 		__field(u32, cl_id)
+ 		__array(unsigned char, addr, sizeof(struct sockaddr_in6))
+-		__field(unsigned int, namelen)
+-		__dynamic_array(unsigned char, name, clp->cl_name.len)
++		__field(unsigned long, flavor)
++		__array(unsigned char, verifier, NFS4_VERIFIER_SIZE)
++		__dynamic_array(char, name, clp->cl_name.len + 1)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->cl_boot = clp->cl_clientid.cl_boot;
+ 		__entry->cl_id = clp->cl_clientid.cl_id;
+ 		memcpy(__entry->addr, &clp->cl_addr,
+ 			sizeof(struct sockaddr_in6));
+-		__entry->namelen = clp->cl_name.len;
+-		memcpy(__get_dynamic_array(name), clp->cl_name.data,
+-			clp->cl_name.len);
++		__entry->flavor = clp->cl_cred.cr_flavor;
++		memcpy(__entry->verifier, (void *)&clp->cl_verifier,
++		       NFS4_VERIFIER_SIZE);
++		memcpy(__get_str(name), clp->cl_name.data, clp->cl_name.len);
++		__get_str(name)[clp->cl_name.len] = '\0';
+ 	),
+-	TP_printk("nfs4_clientid %.*s already in use by %pISpc, client %08x:%08x",
+-		__entry->namelen, __get_str(name), __entry->addr,
++	TP_printk("addr=%pISpc name='%s' verifier=0x%s flavor=%s client=%08x:%08x",
++		__entry->addr, __get_str(name),
++		__print_hex_str(__entry->verifier, NFS4_VERIFIER_SIZE),
++		show_nfsd_authflavor(__entry->flavor),
+ 		__entry->cl_boot, __entry->cl_id)
+-)
++);
+ 
+-TRACE_DEFINE_ENUM(NFSD_FILE_HASHED);
+-TRACE_DEFINE_ENUM(NFSD_FILE_PENDING);
+-TRACE_DEFINE_ENUM(NFSD_FILE_BREAK_READ);
+-TRACE_DEFINE_ENUM(NFSD_FILE_BREAK_WRITE);
+-TRACE_DEFINE_ENUM(NFSD_FILE_REFERENCED);
++#define DEFINE_CLID_EVENT(name) \
++DEFINE_EVENT(nfsd_clid_class, nfsd_clid_##name, \
++	TP_PROTO(const struct nfs4_client *clp), \
++	TP_ARGS(clp))
++
++DEFINE_CLID_EVENT(fresh);
++DEFINE_CLID_EVENT(confirmed_r);
+ 
++/*
++ * from fs/nfsd/filecache.h
++ */
+ #define show_nf_flags(val)						\
+ 	__print_flags(val, "|",						\
+ 		{ 1 << NFSD_FILE_HASHED,	"HASHED" },		\
+ 		{ 1 << NFSD_FILE_PENDING,	"PENDING" },		\
+-		{ 1 << NFSD_FILE_BREAK_READ,	"BREAK_READ" },		\
+-		{ 1 << NFSD_FILE_BREAK_WRITE,	"BREAK_WRITE" },	\
+-		{ 1 << NFSD_FILE_REFERENCED,	"REFERENCED"})
+-
+-/* FIXME: This should probably be fleshed out in the future. */
+-#define show_nf_may(val)						\
+-	__print_flags(val, "|",						\
+-		{ NFSD_MAY_READ,		"READ" },		\
+-		{ NFSD_MAY_WRITE,		"WRITE" },		\
+-		{ NFSD_MAY_NOT_BREAK_LEASE,	"NOT_BREAK_LEASE" })
++		{ 1 << NFSD_FILE_REFERENCED,	"REFERENCED" },		\
++		{ 1 << NFSD_FILE_GC,		"GC" })
+ 
+ DECLARE_EVENT_CLASS(nfsd_file_class,
+ 	TP_PROTO(struct nfsd_file *nf),
+ 	TP_ARGS(nf),
+ 	TP_STRUCT__entry(
+-		__field(unsigned int, nf_hashval)
+ 		__field(void *, nf_inode)
+ 		__field(int, nf_ref)
+ 		__field(unsigned long, nf_flags)
+@@ -425,19 +800,17 @@ DECLARE_EVENT_CLASS(nfsd_file_class,
+ 		__field(struct file *, nf_file)
+ 	),
+ 	TP_fast_assign(
+-		__entry->nf_hashval = nf->nf_hashval;
+ 		__entry->nf_inode = nf->nf_inode;
+ 		__entry->nf_ref = refcount_read(&nf->nf_ref);
+ 		__entry->nf_flags = nf->nf_flags;
+ 		__entry->nf_may = nf->nf_may;
+ 		__entry->nf_file = nf->nf_file;
+ 	),
+-	TP_printk("hash=0x%x inode=0x%p ref=%d flags=%s may=%s file=%p",
+-		__entry->nf_hashval,
++	TP_printk("inode=%p ref=%d flags=%s may=%s nf_file=%p",
+ 		__entry->nf_inode,
+ 		__entry->nf_ref,
+ 		show_nf_flags(__entry->nf_flags),
+-		show_nf_may(__entry->nf_may),
++		show_nfsd_may_flags(__entry->nf_may),
+ 		__entry->nf_file)
+ )
+ 
+@@ -446,34 +819,60 @@ DEFINE_EVENT(nfsd_file_class, name, \
+ 	TP_PROTO(struct nfsd_file *nf), \
+ 	TP_ARGS(nf))
+ 
+-DEFINE_NFSD_FILE_EVENT(nfsd_file_alloc);
+-DEFINE_NFSD_FILE_EVENT(nfsd_file_put_final);
++DEFINE_NFSD_FILE_EVENT(nfsd_file_free);
+ DEFINE_NFSD_FILE_EVENT(nfsd_file_unhash);
+ DEFINE_NFSD_FILE_EVENT(nfsd_file_put);
+-DEFINE_NFSD_FILE_EVENT(nfsd_file_unhash_and_release_locked);
++DEFINE_NFSD_FILE_EVENT(nfsd_file_closing);
++DEFINE_NFSD_FILE_EVENT(nfsd_file_unhash_and_queue);
++
++TRACE_EVENT(nfsd_file_alloc,
++	TP_PROTO(
++		const struct nfsd_file *nf
++	),
++	TP_ARGS(nf),
++	TP_STRUCT__entry(
++		__field(const void *, nf_inode)
++		__field(unsigned long, nf_flags)
++		__field(unsigned long, nf_may)
++		__field(unsigned int, nf_ref)
++	),
++	TP_fast_assign(
++		__entry->nf_inode = nf->nf_inode;
++		__entry->nf_flags = nf->nf_flags;
++		__entry->nf_ref = refcount_read(&nf->nf_ref);
++		__entry->nf_may = nf->nf_may;
++	),
++	TP_printk("inode=%p ref=%u flags=%s may=%s",
++		__entry->nf_inode, __entry->nf_ref,
++		show_nf_flags(__entry->nf_flags),
++		show_nfsd_may_flags(__entry->nf_may)
++	)
++);
+ 
+ TRACE_EVENT(nfsd_file_acquire,
+-	TP_PROTO(struct svc_rqst *rqstp, unsigned int hash,
+-		 struct inode *inode, unsigned int may_flags,
+-		 struct nfsd_file *nf, __be32 status),
++	TP_PROTO(
++		const struct svc_rqst *rqstp,
++		const struct inode *inode,
++		unsigned int may_flags,
++		const struct nfsd_file *nf,
++		__be32 status
++	),
+ 
+-	TP_ARGS(rqstp, hash, inode, may_flags, nf, status),
++	TP_ARGS(rqstp, inode, may_flags, nf, status),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(u32, xid)
+-		__field(unsigned int, hash)
+-		__field(void *, inode)
+-		__field(unsigned int, may_flags)
+-		__field(int, nf_ref)
++		__field(const void *, inode)
++		__field(unsigned long, may_flags)
++		__field(unsigned int, nf_ref)
+ 		__field(unsigned long, nf_flags)
+-		__field(unsigned char, nf_may)
+-		__field(struct file *, nf_file)
++		__field(unsigned long, nf_may)
++		__field(const void *, nf_file)
+ 		__field(u32, status)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->xid = be32_to_cpu(rqstp->rq_xid);
+-		__entry->hash = hash;
+ 		__entry->inode = inode;
+ 		__entry->may_flags = may_flags;
+ 		__entry->nf_ref = nf ? refcount_read(&nf->nf_ref) : 0;
+@@ -483,39 +882,131 @@ TRACE_EVENT(nfsd_file_acquire,
+ 		__entry->status = be32_to_cpu(status);
+ 	),
+ 
+-	TP_printk("xid=0x%x hash=0x%x inode=0x%p may_flags=%s ref=%d nf_flags=%s nf_may=%s nf_file=0x%p status=%u",
+-			__entry->xid, __entry->hash, __entry->inode,
+-			show_nf_may(__entry->may_flags), __entry->nf_ref,
+-			show_nf_flags(__entry->nf_flags),
+-			show_nf_may(__entry->nf_may), __entry->nf_file,
+-			__entry->status)
++	TP_printk("xid=0x%x inode=%p may_flags=%s ref=%u nf_flags=%s nf_may=%s nf_file=%p status=%u",
++			__entry->xid, __entry->inode,
++			show_nfsd_may_flags(__entry->may_flags),
++			__entry->nf_ref, show_nf_flags(__entry->nf_flags),
++			show_nfsd_may_flags(__entry->nf_may),
++			__entry->nf_file, __entry->status
++	)
+ );
+ 
+-DECLARE_EVENT_CLASS(nfsd_file_search_class,
+-	TP_PROTO(struct inode *inode, unsigned int hash, int found),
+-	TP_ARGS(inode, hash, found),
++TRACE_EVENT(nfsd_file_insert_err,
++	TP_PROTO(
++		const struct svc_rqst *rqstp,
++		const struct inode *inode,
++		unsigned int may_flags,
++		long error
++	),
++	TP_ARGS(rqstp, inode, may_flags, error),
+ 	TP_STRUCT__entry(
+-		__field(struct inode *, inode)
+-		__field(unsigned int, hash)
+-		__field(int, found)
++		__field(u32, xid)
++		__field(const void *, inode)
++		__field(unsigned long, may_flags)
++		__field(long, error)
+ 	),
+ 	TP_fast_assign(
++		__entry->xid = be32_to_cpu(rqstp->rq_xid);
+ 		__entry->inode = inode;
+-		__entry->hash = hash;
+-		__entry->found = found;
++		__entry->may_flags = may_flags;
++		__entry->error = error;
+ 	),
+-	TP_printk("hash=0x%x inode=0x%p found=%d", __entry->hash,
+-			__entry->inode, __entry->found)
++	TP_printk("xid=0x%x inode=%p may_flags=%s error=%ld",
++		__entry->xid, __entry->inode,
++		show_nfsd_may_flags(__entry->may_flags),
++		__entry->error
++	)
+ );
+ 
+-#define DEFINE_NFSD_FILE_SEARCH_EVENT(name)				\
+-DEFINE_EVENT(nfsd_file_search_class, name,				\
+-	TP_PROTO(struct inode *inode, unsigned int hash, int found),	\
+-	TP_ARGS(inode, hash, found))
++TRACE_EVENT(nfsd_file_cons_err,
++	TP_PROTO(
++		const struct svc_rqst *rqstp,
++		const struct inode *inode,
++		unsigned int may_flags,
++		const struct nfsd_file *nf
++	),
++	TP_ARGS(rqstp, inode, may_flags, nf),
++	TP_STRUCT__entry(
++		__field(u32, xid)
++		__field(const void *, inode)
++		__field(unsigned long, may_flags)
++		__field(unsigned int, nf_ref)
++		__field(unsigned long, nf_flags)
++		__field(unsigned long, nf_may)
++		__field(const void *, nf_file)
++	),
++	TP_fast_assign(
++		__entry->xid = be32_to_cpu(rqstp->rq_xid);
++		__entry->inode = inode;
++		__entry->may_flags = may_flags;
++		__entry->nf_ref = refcount_read(&nf->nf_ref);
++		__entry->nf_flags = nf->nf_flags;
++		__entry->nf_may = nf->nf_may;
++		__entry->nf_file = nf->nf_file;
++	),
++	TP_printk("xid=0x%x inode=%p may_flags=%s ref=%u nf_flags=%s nf_may=%s nf_file=%p",
++		__entry->xid, __entry->inode,
++		show_nfsd_may_flags(__entry->may_flags), __entry->nf_ref,
++		show_nf_flags(__entry->nf_flags),
++		show_nfsd_may_flags(__entry->nf_may), __entry->nf_file
++	)
++);
++
++DECLARE_EVENT_CLASS(nfsd_file_open_class,
++	TP_PROTO(const struct nfsd_file *nf, __be32 status),
++	TP_ARGS(nf, status),
++	TP_STRUCT__entry(
++		__field(void *, nf_inode)	/* cannot be dereferenced */
++		__field(int, nf_ref)
++		__field(unsigned long, nf_flags)
++		__field(unsigned long, nf_may)
++		__field(void *, nf_file)	/* cannot be dereferenced */
++	),
++	TP_fast_assign(
++		__entry->nf_inode = nf->nf_inode;
++		__entry->nf_ref = refcount_read(&nf->nf_ref);
++		__entry->nf_flags = nf->nf_flags;
++		__entry->nf_may = nf->nf_may;
++		__entry->nf_file = nf->nf_file;
++	),
++	TP_printk("inode=%p ref=%d flags=%s may=%s file=%p",
++		__entry->nf_inode,
++		__entry->nf_ref,
++		show_nf_flags(__entry->nf_flags),
++		show_nfsd_may_flags(__entry->nf_may),
++		__entry->nf_file)
++)
++
++#define DEFINE_NFSD_FILE_OPEN_EVENT(name)					\
++DEFINE_EVENT(nfsd_file_open_class, name,					\
++	TP_PROTO(							\
++		const struct nfsd_file *nf,				\
++		__be32 status						\
++	),								\
++	TP_ARGS(nf, status))
+ 
+-DEFINE_NFSD_FILE_SEARCH_EVENT(nfsd_file_close_inode_sync);
+-DEFINE_NFSD_FILE_SEARCH_EVENT(nfsd_file_close_inode);
+-DEFINE_NFSD_FILE_SEARCH_EVENT(nfsd_file_is_cached);
++DEFINE_NFSD_FILE_OPEN_EVENT(nfsd_file_open);
++DEFINE_NFSD_FILE_OPEN_EVENT(nfsd_file_opened);
++
++TRACE_EVENT(nfsd_file_is_cached,
++	TP_PROTO(
++		const struct inode *inode,
++		int found
++	),
++	TP_ARGS(inode, found),
++	TP_STRUCT__entry(
++		__field(const struct inode *, inode)
++		__field(int, found)
++	),
++	TP_fast_assign(
++		__entry->inode = inode;
++		__entry->found = found;
++	),
++	TP_printk("inode=%p is %scached",
++		__entry->inode,
++		__entry->found ? "" : "not "
++	)
++);
+ 
+ TRACE_EVENT(nfsd_file_fsnotify_handle_event,
+ 	TP_PROTO(struct inode *inode, u32 mask),
+@@ -532,10 +1023,95 @@ TRACE_EVENT(nfsd_file_fsnotify_handle_event,
+ 		__entry->mode = inode->i_mode;
+ 		__entry->mask = mask;
+ 	),
+-	TP_printk("inode=0x%p nlink=%u mode=0%ho mask=0x%x", __entry->inode,
++	TP_printk("inode=%p nlink=%u mode=0%ho mask=0x%x", __entry->inode,
+ 			__entry->nlink, __entry->mode, __entry->mask)
+ );
+ 
++DECLARE_EVENT_CLASS(nfsd_file_gc_class,
++	TP_PROTO(
++		const struct nfsd_file *nf
++	),
++	TP_ARGS(nf),
++	TP_STRUCT__entry(
++		__field(void *, nf_inode)
++		__field(void *, nf_file)
++		__field(int, nf_ref)
++		__field(unsigned long, nf_flags)
++	),
++	TP_fast_assign(
++		__entry->nf_inode = nf->nf_inode;
++		__entry->nf_file = nf->nf_file;
++		__entry->nf_ref = refcount_read(&nf->nf_ref);
++		__entry->nf_flags = nf->nf_flags;
++	),
++	TP_printk("inode=%p ref=%d nf_flags=%s nf_file=%p",
++		__entry->nf_inode, __entry->nf_ref,
++		show_nf_flags(__entry->nf_flags),
++		__entry->nf_file
++	)
++);
++
++#define DEFINE_NFSD_FILE_GC_EVENT(name)					\
++DEFINE_EVENT(nfsd_file_gc_class, name,					\
++	TP_PROTO(							\
++		const struct nfsd_file *nf				\
++	),								\
++	TP_ARGS(nf))
++
++DEFINE_NFSD_FILE_GC_EVENT(nfsd_file_lru_add);
++DEFINE_NFSD_FILE_GC_EVENT(nfsd_file_lru_add_disposed);
++DEFINE_NFSD_FILE_GC_EVENT(nfsd_file_lru_del);
++DEFINE_NFSD_FILE_GC_EVENT(nfsd_file_lru_del_disposed);
++DEFINE_NFSD_FILE_GC_EVENT(nfsd_file_gc_in_use);
++DEFINE_NFSD_FILE_GC_EVENT(nfsd_file_gc_writeback);
++DEFINE_NFSD_FILE_GC_EVENT(nfsd_file_gc_referenced);
++DEFINE_NFSD_FILE_GC_EVENT(nfsd_file_gc_disposed);
++
++DECLARE_EVENT_CLASS(nfsd_file_lruwalk_class,
++	TP_PROTO(
++		unsigned long removed,
++		unsigned long remaining
++	),
++	TP_ARGS(removed, remaining),
++	TP_STRUCT__entry(
++		__field(unsigned long, removed)
++		__field(unsigned long, remaining)
++	),
++	TP_fast_assign(
++		__entry->removed = removed;
++		__entry->remaining = remaining;
++	),
++	TP_printk("%lu entries removed, %lu remaining",
++		__entry->removed, __entry->remaining)
++);
++
++#define DEFINE_NFSD_FILE_LRUWALK_EVENT(name)				\
++DEFINE_EVENT(nfsd_file_lruwalk_class, name,				\
++	TP_PROTO(							\
++		unsigned long removed,					\
++		unsigned long remaining					\
++	),								\
++	TP_ARGS(removed, remaining))
++
++DEFINE_NFSD_FILE_LRUWALK_EVENT(nfsd_file_gc_removed);
++DEFINE_NFSD_FILE_LRUWALK_EVENT(nfsd_file_shrinker_removed);
++
++TRACE_EVENT(nfsd_file_close,
++	TP_PROTO(
++		const struct inode *inode
++	),
++	TP_ARGS(inode),
++	TP_STRUCT__entry(
++		__field(const void *, inode)
++	),
++	TP_fast_assign(
++		__entry->inode = inode;
++	),
++	TP_printk("inode=%p",
++		__entry->inode
++	)
++);
++
+ #include "cache.h"
+ 
+ TRACE_DEFINE_ENUM(RC_DROPIT);
+@@ -616,9 +1192,9 @@ TRACE_EVENT(nfsd_cb_args,
+ 		memcpy(__entry->addr, &conn->cb_addr,
+ 			sizeof(struct sockaddr_in6));
+ 	),
+-	TP_printk("client %08x:%08x callback addr=%pISpc prog=%u ident=%u",
+-		__entry->cl_boot, __entry->cl_id,
+-		__entry->addr, __entry->prog, __entry->ident)
++	TP_printk("addr=%pISpc client %08x:%08x prog=%u ident=%u",
++		__entry->addr, __entry->cl_boot, __entry->cl_id,
++		__entry->prog, __entry->ident)
+ );
+ 
+ TRACE_EVENT(nfsd_cb_nodelegs,
+@@ -635,11 +1211,6 @@ TRACE_EVENT(nfsd_cb_nodelegs,
+ 	TP_printk("client %08x:%08x", __entry->cl_boot, __entry->cl_id)
+ )
+ 
+-TRACE_DEFINE_ENUM(NFSD4_CB_UP);
+-TRACE_DEFINE_ENUM(NFSD4_CB_UNKNOWN);
+-TRACE_DEFINE_ENUM(NFSD4_CB_DOWN);
+-TRACE_DEFINE_ENUM(NFSD4_CB_FAULT);
+-
+ #define show_cb_state(val)						\
+ 	__print_symbolic(val,						\
+ 		{ NFSD4_CB_UP,		"UP" },				\
+@@ -673,10 +1244,53 @@ DEFINE_EVENT(nfsd_cb_class, nfsd_cb_##name,		\
+ 	TP_PROTO(const struct nfs4_client *clp),	\
+ 	TP_ARGS(clp))
+ 
+-DEFINE_NFSD_CB_EVENT(setup);
+ DEFINE_NFSD_CB_EVENT(state);
++DEFINE_NFSD_CB_EVENT(probe);
++DEFINE_NFSD_CB_EVENT(lost);
+ DEFINE_NFSD_CB_EVENT(shutdown);
+ 
++TRACE_DEFINE_ENUM(RPC_AUTH_NULL);
++TRACE_DEFINE_ENUM(RPC_AUTH_UNIX);
++TRACE_DEFINE_ENUM(RPC_AUTH_GSS);
++TRACE_DEFINE_ENUM(RPC_AUTH_GSS_KRB5);
++TRACE_DEFINE_ENUM(RPC_AUTH_GSS_KRB5I);
++TRACE_DEFINE_ENUM(RPC_AUTH_GSS_KRB5P);
++
++#define show_nfsd_authflavor(val)					\
++	__print_symbolic(val,						\
++		{ RPC_AUTH_NULL,		"none" },		\
++		{ RPC_AUTH_UNIX,		"sys" },		\
++		{ RPC_AUTH_GSS,			"gss" },		\
++		{ RPC_AUTH_GSS_KRB5,		"krb5" },		\
++		{ RPC_AUTH_GSS_KRB5I,		"krb5i" },		\
++		{ RPC_AUTH_GSS_KRB5P,		"krb5p" })
++
++TRACE_EVENT(nfsd_cb_setup,
++	TP_PROTO(const struct nfs4_client *clp,
++		 const char *netid,
++		 rpc_authflavor_t authflavor
++	),
++	TP_ARGS(clp, netid, authflavor),
++	TP_STRUCT__entry(
++		__field(u32, cl_boot)
++		__field(u32, cl_id)
++		__field(unsigned long, authflavor)
++		__array(unsigned char, addr, sizeof(struct sockaddr_in6))
++		__array(unsigned char, netid, 8)
++	),
++	TP_fast_assign(
++		__entry->cl_boot = clp->cl_clientid.cl_boot;
++		__entry->cl_id = clp->cl_clientid.cl_id;
++		strlcpy(__entry->netid, netid, sizeof(__entry->netid));
++		__entry->authflavor = authflavor;
++		memcpy(__entry->addr, &clp->cl_cb_conn.cb_addr,
++			sizeof(struct sockaddr_in6));
++	),
++	TP_printk("addr=%pISpc client %08x:%08x proto=%s flavor=%s",
++		__entry->addr, __entry->cl_boot, __entry->cl_id,
++		__entry->netid, show_nfsd_authflavor(__entry->authflavor))
++);
++
+ TRACE_EVENT(nfsd_cb_setup_err,
+ 	TP_PROTO(
+ 		const struct nfs4_client *clp,
+@@ -700,54 +1314,138 @@ TRACE_EVENT(nfsd_cb_setup_err,
+ 		__entry->addr, __entry->cl_boot, __entry->cl_id, __entry->error)
+ );
+ 
+-TRACE_EVENT(nfsd_cb_work,
++TRACE_EVENT(nfsd_cb_recall,
+ 	TP_PROTO(
+-		const struct nfs4_client *clp,
+-		const char *procedure
++		const struct nfs4_stid *stid
++	),
++	TP_ARGS(stid),
++	TP_STRUCT__entry(
++		__field(u32, cl_boot)
++		__field(u32, cl_id)
++		__field(u32, si_id)
++		__field(u32, si_generation)
++		__array(unsigned char, addr, sizeof(struct sockaddr_in6))
+ 	),
+-	TP_ARGS(clp, procedure),
++	TP_fast_assign(
++		const stateid_t *stp = &stid->sc_stateid;
++		const struct nfs4_client *clp = stid->sc_client;
++
++		__entry->cl_boot = stp->si_opaque.so_clid.cl_boot;
++		__entry->cl_id = stp->si_opaque.so_clid.cl_id;
++		__entry->si_id = stp->si_opaque.so_id;
++		__entry->si_generation = stp->si_generation;
++		if (clp)
++			memcpy(__entry->addr, &clp->cl_cb_conn.cb_addr,
++				sizeof(struct sockaddr_in6));
++		else
++			memset(__entry->addr, 0, sizeof(struct sockaddr_in6));
++	),
++	TP_printk("addr=%pISpc client %08x:%08x stateid %08x:%08x",
++		__entry->addr, __entry->cl_boot, __entry->cl_id,
++		__entry->si_id, __entry->si_generation)
++);
++
++TRACE_EVENT(nfsd_cb_notify_lock,
++	TP_PROTO(
++		const struct nfs4_lockowner *lo,
++		const struct nfsd4_blocked_lock *nbl
++	),
++	TP_ARGS(lo, nbl),
+ 	TP_STRUCT__entry(
+ 		__field(u32, cl_boot)
+ 		__field(u32, cl_id)
+-		__string(procedure, procedure)
++		__field(u32, fh_hash)
+ 		__array(unsigned char, addr, sizeof(struct sockaddr_in6))
+ 	),
+ 	TP_fast_assign(
++		const struct nfs4_client *clp = lo->lo_owner.so_client;
++
+ 		__entry->cl_boot = clp->cl_clientid.cl_boot;
+ 		__entry->cl_id = clp->cl_clientid.cl_id;
+-		__assign_str(procedure, procedure)
++		__entry->fh_hash = knfsd_fh_hash(&nbl->nbl_fh);
+ 		memcpy(__entry->addr, &clp->cl_cb_conn.cb_addr,
+ 			sizeof(struct sockaddr_in6));
+ 	),
+-	TP_printk("addr=%pISpc client %08x:%08x procedure=%s",
++	TP_printk("addr=%pISpc client %08x:%08x fh_hash=0x%08x",
+ 		__entry->addr, __entry->cl_boot, __entry->cl_id,
+-		__get_str(procedure))
++		__entry->fh_hash)
+ );
+ 
+-TRACE_EVENT(nfsd_cb_done,
++TRACE_EVENT(nfsd_cb_offload,
+ 	TP_PROTO(
+ 		const struct nfs4_client *clp,
+-		int status
++		const stateid_t *stp,
++		const struct knfsd_fh *fh,
++		u64 count,
++		__be32 status
+ 	),
+-	TP_ARGS(clp, status),
++	TP_ARGS(clp, stp, fh, count, status),
+ 	TP_STRUCT__entry(
+ 		__field(u32, cl_boot)
+ 		__field(u32, cl_id)
++		__field(u32, si_id)
++		__field(u32, si_generation)
++		__field(u32, fh_hash)
+ 		__field(int, status)
++		__field(u64, count)
+ 		__array(unsigned char, addr, sizeof(struct sockaddr_in6))
+ 	),
+ 	TP_fast_assign(
+-		__entry->cl_boot = clp->cl_clientid.cl_boot;
+-		__entry->cl_id = clp->cl_clientid.cl_id;
+-		__entry->status = status;
++		__entry->cl_boot = stp->si_opaque.so_clid.cl_boot;
++		__entry->cl_id = stp->si_opaque.so_clid.cl_id;
++		__entry->si_id = stp->si_opaque.so_id;
++		__entry->si_generation = stp->si_generation;
++		__entry->fh_hash = knfsd_fh_hash(fh);
++		__entry->status = be32_to_cpu(status);
++		__entry->count = count;
+ 		memcpy(__entry->addr, &clp->cl_cb_conn.cb_addr,
+ 			sizeof(struct sockaddr_in6));
+ 	),
+-	TP_printk("addr=%pISpc client %08x:%08x status=%d",
++	TP_printk("addr=%pISpc client %08x:%08x stateid %08x:%08x fh_hash=0x%08x count=%llu status=%d",
+ 		__entry->addr, __entry->cl_boot, __entry->cl_id,
+-		__entry->status)
++		__entry->si_id, __entry->si_generation,
++		__entry->fh_hash, __entry->count, __entry->status)
++);
++
++DECLARE_EVENT_CLASS(nfsd_cb_done_class,
++	TP_PROTO(
++		const stateid_t *stp,
++		const struct rpc_task *task
++	),
++	TP_ARGS(stp, task),
++	TP_STRUCT__entry(
++		__field(u32, cl_boot)
++		__field(u32, cl_id)
++		__field(u32, si_id)
++		__field(u32, si_generation)
++		__field(int, status)
++	),
++	TP_fast_assign(
++		__entry->cl_boot = stp->si_opaque.so_clid.cl_boot;
++		__entry->cl_id = stp->si_opaque.so_clid.cl_id;
++		__entry->si_id = stp->si_opaque.so_id;
++		__entry->si_generation = stp->si_generation;
++		__entry->status = task->tk_status;
++	),
++	TP_printk("client %08x:%08x stateid %08x:%08x status=%d",
++		__entry->cl_boot, __entry->cl_id, __entry->si_id,
++		__entry->si_generation, __entry->status
++	)
+ );
+ 
++#define DEFINE_NFSD_CB_DONE_EVENT(name)			\
++DEFINE_EVENT(nfsd_cb_done_class, name,			\
++	TP_PROTO(					\
++		const stateid_t *stp,			\
++		const struct rpc_task *task		\
++	),						\
++	TP_ARGS(stp, task))
++
++DEFINE_NFSD_CB_DONE_EVENT(nfsd_cb_recall_done);
++DEFINE_NFSD_CB_DONE_EVENT(nfsd_cb_notify_lock_done);
++DEFINE_NFSD_CB_DONE_EVENT(nfsd_cb_layout_done);
++DEFINE_NFSD_CB_DONE_EVENT(nfsd_cb_offload_done);
++
+ #endif /* _NFSD_TRACE_H */
+ 
+ #undef TRACE_INCLUDE_PATH
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 31edb883afd0d..0ea05ddff0d08 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -32,14 +32,13 @@
+ #include <linux/writeback.h>
+ #include <linux/security.h>
+ 
+-#ifdef CONFIG_NFSD_V3
+ #include "xdr3.h"
+-#endif /* CONFIG_NFSD_V3 */
+ 
+ #ifdef CONFIG_NFSD_V4
+ #include "../internal.h"
+ #include "acl.h"
+ #include "idmap.h"
++#include "xdr4.h"
+ #endif /* CONFIG_NFSD_V4 */
+ 
+ #include "nfsd.h"
+@@ -49,6 +48,69 @@
+ 
+ #define NFSDDBG_FACILITY		NFSDDBG_FILEOP
+ 
++/**
++ * nfserrno - Map Linux errnos to NFS errnos
++ * @errno: POSIX(-ish) error code to be mapped
++ *
++ * Returns the appropriate (net-endian) nfserr_* (or nfs_ok if errno is 0). If
++ * it's an error we don't expect, log it once and return nfserr_io.
++ */
++__be32
++nfserrno (int errno)
++{
++	static struct {
++		__be32	nfserr;
++		int	syserr;
++	} nfs_errtbl[] = {
++		{ nfs_ok, 0 },
++		{ nfserr_perm, -EPERM },
++		{ nfserr_noent, -ENOENT },
++		{ nfserr_io, -EIO },
++		{ nfserr_nxio, -ENXIO },
++		{ nfserr_fbig, -E2BIG },
++		{ nfserr_stale, -EBADF },
++		{ nfserr_acces, -EACCES },
++		{ nfserr_exist, -EEXIST },
++		{ nfserr_xdev, -EXDEV },
++		{ nfserr_mlink, -EMLINK },
++		{ nfserr_nodev, -ENODEV },
++		{ nfserr_notdir, -ENOTDIR },
++		{ nfserr_isdir, -EISDIR },
++		{ nfserr_inval, -EINVAL },
++		{ nfserr_fbig, -EFBIG },
++		{ nfserr_nospc, -ENOSPC },
++		{ nfserr_rofs, -EROFS },
++		{ nfserr_mlink, -EMLINK },
++		{ nfserr_nametoolong, -ENAMETOOLONG },
++		{ nfserr_notempty, -ENOTEMPTY },
++		{ nfserr_dquot, -EDQUOT },
++		{ nfserr_stale, -ESTALE },
++		{ nfserr_jukebox, -ETIMEDOUT },
++		{ nfserr_jukebox, -ERESTARTSYS },
++		{ nfserr_jukebox, -EAGAIN },
++		{ nfserr_jukebox, -EWOULDBLOCK },
++		{ nfserr_jukebox, -ENOMEM },
++		{ nfserr_io, -ETXTBSY },
++		{ nfserr_notsupp, -EOPNOTSUPP },
++		{ nfserr_toosmall, -ETOOSMALL },
++		{ nfserr_serverfault, -ESERVERFAULT },
++		{ nfserr_serverfault, -ENFILE },
++		{ nfserr_io, -EREMOTEIO },
++		{ nfserr_stale, -EOPENSTALE },
++		{ nfserr_io, -EUCLEAN },
++		{ nfserr_perm, -ENOKEY },
++		{ nfserr_no_grace, -ENOGRACE},
++	};
++	int	i;
++
++	for (i = 0; i < ARRAY_SIZE(nfs_errtbl); i++) {
++		if (nfs_errtbl[i].syserr == errno)
++			return nfs_errtbl[i].nfserr;
++	}
++	WARN_ONCE(1, "nfsd: non-standard errno: %d\n", errno);
++	return nfserr_io;
++}
++
+ /* 
+  * Called from nfsd_lookup and encode_dirent. Check if we have crossed 
+  * a mount point.
+@@ -199,27 +261,13 @@ nfsd_lookup_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 				goto out_nfserr;
+ 		}
+ 	} else {
+-		/*
+-		 * In the nfsd4_open() case, this may be held across
+-		 * subsequent open and delegation acquisition which may
+-		 * need to take the child's i_mutex:
+-		 */
+-		fh_lock_nested(fhp, I_MUTEX_PARENT);
+-		dentry = lookup_one_len(name, dparent, len);
++		dentry = lookup_one_len_unlocked(name, dparent, len);
+ 		host_err = PTR_ERR(dentry);
+ 		if (IS_ERR(dentry))
+ 			goto out_nfserr;
+ 		if (nfsd_mountpoint(dentry, exp)) {
+-			/*
+-			 * We don't need the i_mutex after all.  It's
+-			 * still possible we could open this (regular
+-			 * files can be mountpoints too), but the
+-			 * i_mutex is just there to prevent renames of
+-			 * something that we might be about to delegate,
+-			 * and a mountpoint won't be renamed:
+-			 */
+-			fh_unlock(fhp);
+-			if ((host_err = nfsd_cross_mnt(rqstp, &dentry, &exp))) {
++			host_err = nfsd_cross_mnt(rqstp, &dentry, &exp);
++			if (host_err) {
+ 				dput(dentry);
+ 				goto out_nfserr;
+ 			}
+@@ -234,7 +282,15 @@ nfsd_lookup_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 	return nfserrno(host_err);
+ }
+ 
+-/*
++/**
++ * nfsd_lookup - look up a single path component for nfsd
++ *
++ * @rqstp:   the request context
++ * @fhp:     the file handle of the directory
++ * @name:    the component name, or %NULL to look up parent
++ * @len:     length of name to examine
++ * @resfh:   pointer to pre-initialised filehandle to hold result.
++ *
+  * Look up one component of a pathname.
+  * N.B. After this call _both_ fhp and resfh need an fh_put
+  *
+@@ -244,11 +300,11 @@ nfsd_lookup_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp,
+  * returned. Otherwise the covered directory is returned.
+  * NOTE: this mountpoint crossing is not supported properly by all
+  *   clients and is explicitly disallowed for NFSv3
+- *      NeilBrown <neilb@cse.unsw.edu.au>
++ *
+  */
+ __be32
+ nfsd_lookup(struct svc_rqst *rqstp, struct svc_fh *fhp, const char *name,
+-				unsigned int len, struct svc_fh *resfh)
++	    unsigned int len, struct svc_fh *resfh)
+ {
+ 	struct svc_export	*exp;
+ 	struct dentry		*dentry;
+@@ -306,6 +362,10 @@ commit_metadata(struct svc_fh *fhp)
+ static void
+ nfsd_sanitize_attrs(struct inode *inode, struct iattr *iap)
+ {
++	/* Ignore mode updates on symlinks */
++	if (S_ISLNK(inode->i_mode))
++		iap->ia_valid &= ~ATTR_MODE;
++
+ 	/* sanitize the mode change */
+ 	if (iap->ia_valid & ATTR_MODE) {
+ 		iap->ia_mode &= S_IALLUGO;
+@@ -359,21 +419,77 @@ nfsd_get_write_access(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 	return nfserrno(host_err);
+ }
+ 
+-/*
+- * Set various file attributes.  After this call fhp needs an fh_put.
++static int __nfsd_setattr(struct dentry *dentry, struct iattr *iap)
++{
++	int host_err;
++
++	if (iap->ia_valid & ATTR_SIZE) {
++		/*
++		 * RFC5661, Section 18.30.4:
++		 *   Changing the size of a file with SETATTR indirectly
++		 *   changes the time_modify and change attributes.
++		 *
++		 * (and similar for the older RFCs)
++		 */
++		struct iattr size_attr = {
++			.ia_valid	= ATTR_SIZE | ATTR_CTIME | ATTR_MTIME,
++			.ia_size	= iap->ia_size,
++		};
++
++		if (iap->ia_size < 0)
++			return -EFBIG;
++
++		host_err = notify_change(dentry, &size_attr, NULL);
++		if (host_err)
++			return host_err;
++		iap->ia_valid &= ~ATTR_SIZE;
++
++		/*
++		 * Avoid the additional setattr call below if the only other
++		 * attribute that the client sends is the mtime, as we update
++		 * it as part of the size change above.
++		 */
++		if ((iap->ia_valid & ~ATTR_MTIME) == 0)
++			return 0;
++	}
++
++	if (!iap->ia_valid)
++		return 0;
++
++	iap->ia_valid |= ATTR_CTIME;
++	return notify_change(dentry, iap, NULL);
++}
++
++/**
++ * nfsd_setattr - Set various file attributes.
++ * @rqstp: controlling RPC transaction
++ * @fhp: filehandle of target
++ * @attr: attributes to set
++ * @check_guard: set to 1 if guardtime is a valid timestamp
++ * @guardtime: do not act if ctime.tv_sec does not match this timestamp
++ *
++ * This call may adjust the contents of @attr (in particular, this
++ * call may change the bits in the na_iattr.ia_valid field).
++ *
++ * Returns nfs_ok on success, otherwise an NFS status code is
++ * returned. Caller must release @fhp by calling fh_put in either
++ * case.
+  */
+ __be32
+-nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap,
++nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp,
++	     struct nfsd_attrs *attr,
+ 	     int check_guard, time64_t guardtime)
+ {
+ 	struct dentry	*dentry;
+ 	struct inode	*inode;
++	struct iattr	*iap = attr->na_iattr;
+ 	int		accmode = NFSD_MAY_SATTR;
+ 	umode_t		ftype = 0;
+ 	__be32		err;
+-	int		host_err;
++	int		host_err = 0;
+ 	bool		get_write_count;
+ 	bool		size_change = (iap->ia_valid & ATTR_SIZE);
++	int		retries;
+ 
+ 	if (iap->ia_valid & ATTR_SIZE) {
+ 		accmode |= NFSD_MAY_WRITE|NFSD_MAY_OWNER_OVERRIDE;
+@@ -409,13 +525,6 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap,
+ 	dentry = fhp->fh_dentry;
+ 	inode = d_inode(dentry);
+ 
+-	/* Ignore any mode updates on symlinks */
+-	if (S_ISLNK(inode->i_mode))
+-		iap->ia_valid &= ~ATTR_MODE;
+-
+-	if (!iap->ia_valid)
+-		return 0;
+-
+ 	nfsd_sanitize_attrs(inode, iap);
+ 
+ 	if (check_guard && guardtime != inode->i_ctime.tv_sec)
+@@ -434,45 +543,41 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap,
+ 			return err;
+ 	}
+ 
+-	fh_lock(fhp);
+-	if (size_change) {
+-		/*
+-		 * RFC5661, Section 18.30.4:
+-		 *   Changing the size of a file with SETATTR indirectly
+-		 *   changes the time_modify and change attributes.
+-		 *
+-		 * (and similar for the older RFCs)
+-		 */
+-		struct iattr size_attr = {
+-			.ia_valid	= ATTR_SIZE | ATTR_CTIME | ATTR_MTIME,
+-			.ia_size	= iap->ia_size,
+-		};
+-
+-		host_err = notify_change(dentry, &size_attr, NULL);
+-		if (host_err)
+-			goto out_unlock;
+-		iap->ia_valid &= ~ATTR_SIZE;
++	inode_lock(inode);
++	fh_fill_pre_attrs(fhp);
++	for (retries = 1;;) {
++		struct iattr attrs;
+ 
+ 		/*
+-		 * Avoid the additional setattr call below if the only other
+-		 * attribute that the client sends is the mtime, as we update
+-		 * it as part of the size change above.
++		 * notify_change() can alter its iattr argument, making
++		 * @iap unsuitable for submission multiple times. Make a
++		 * copy for every loop iteration.
+ 		 */
+-		if ((iap->ia_valid & ~ATTR_MTIME) == 0)
+-			goto out_unlock;
++		attrs = *iap;
++		host_err = __nfsd_setattr(dentry, &attrs);
++		if (host_err != -EAGAIN || !retries--)
++			break;
++		if (!nfsd_wait_for_delegreturn(rqstp, inode))
++			break;
+ 	}
+-
+-	iap->ia_valid |= ATTR_CTIME;
+-	host_err = notify_change(dentry, iap, NULL);
+-
+-out_unlock:
+-	fh_unlock(fhp);
++	if (attr->na_seclabel && attr->na_seclabel->len)
++		attr->na_labelerr = security_inode_setsecctx(dentry,
++			attr->na_seclabel->data, attr->na_seclabel->len);
++	if (IS_ENABLED(CONFIG_FS_POSIX_ACL) && attr->na_pacl)
++		attr->na_aclerr = set_posix_acl(inode, ACL_TYPE_ACCESS,
++						attr->na_pacl);
++	if (IS_ENABLED(CONFIG_FS_POSIX_ACL) &&
++	    !attr->na_aclerr && attr->na_dpacl && S_ISDIR(inode->i_mode))
++		attr->na_aclerr = set_posix_acl(inode, ACL_TYPE_DEFAULT,
++						attr->na_dpacl);
++	fh_fill_post_attrs(fhp);
++	inode_unlock(inode);
+ 	if (size_change)
+ 		put_write_access(inode);
+ out:
+ 	if (!host_err)
+ 		host_err = commit_metadata(fhp);
+-	return nfserrno(host_err);
++	return err != 0 ? err : nfserrno(host_err);
+ }
+ 
+ #if defined(CONFIG_NFSD_V4)
+@@ -503,35 +608,16 @@ int nfsd4_is_junction(struct dentry *dentry)
+ 		return 0;
+ 	return 1;
+ }
+-#ifdef CONFIG_NFSD_V4_SECURITY_LABEL
+-__be32 nfsd4_set_nfs4_label(struct svc_rqst *rqstp, struct svc_fh *fhp,
+-		struct xdr_netobj *label)
+-{
+-	__be32 error;
+-	int host_error;
+-	struct dentry *dentry;
+ 
+-	error = fh_verify(rqstp, fhp, 0 /* S_IFREG */, NFSD_MAY_SATTR);
+-	if (error)
+-		return error;
+-
+-	dentry = fhp->fh_dentry;
+-
+-	inode_lock(d_inode(dentry));
+-	host_error = security_inode_setsecctx(dentry, label->data, label->len);
+-	inode_unlock(d_inode(dentry));
+-	return nfserrno(host_error);
+-}
+-#else
+-__be32 nfsd4_set_nfs4_label(struct svc_rqst *rqstp, struct svc_fh *fhp,
+-		struct xdr_netobj *label)
++static struct nfsd4_compound_state *nfsd4_get_cstate(struct svc_rqst *rqstp)
+ {
+-	return nfserr_notsupp;
++	return &((struct nfsd4_compoundres *)rqstp->rq_resp)->cstate;
+ }
+-#endif
+ 
+-__be32 nfsd4_clone_file_range(struct nfsd_file *nf_src, u64 src_pos,
+-		struct nfsd_file *nf_dst, u64 dst_pos, u64 count, bool sync)
++__be32 nfsd4_clone_file_range(struct svc_rqst *rqstp,
++		struct nfsd_file *nf_src, u64 src_pos,
++		struct nfsd_file *nf_dst, u64 dst_pos,
++		u64 count, bool sync)
+ {
+ 	struct file *src = nf_src->nf_file;
+ 	struct file *dst = nf_dst->nf_file;
+@@ -558,8 +644,17 @@ __be32 nfsd4_clone_file_range(struct nfsd_file *nf_src, u64 src_pos,
+ 		if (!status)
+ 			status = commit_inode_metadata(file_inode(src));
+ 		if (status < 0) {
+-			nfsd_reset_boot_verifier(net_generic(nf_dst->nf_net,
+-						 nfsd_net_id));
++			struct nfsd_net *nn = net_generic(nf_dst->nf_net,
++							  nfsd_net_id);
++
++			trace_nfsd_clone_file_range_err(rqstp,
++					&nfsd4_get_cstate(rqstp)->save_fh,
++					src_pos,
++					&nfsd4_get_cstate(rqstp)->current_fh,
++					dst_pos,
++					count, status);
++			nfsd_reset_write_verifier(nn);
++			trace_nfsd_writeverf_reset(nn, rqstp, status);
+ 			ret = nfserrno(status);
+ 		}
+ 	}
+@@ -606,7 +701,6 @@ __be32 nfsd4_vfs_fallocate(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ }
+ #endif /* defined(CONFIG_NFSD_V4) */
+ 
+-#ifdef CONFIG_NFSD_V3
+ /*
+  * Check server access rights to a file system object
+  */
+@@ -718,7 +812,6 @@ nfsd_access(struct svc_rqst *rqstp, struct svc_fh *fhp, u32 *access, u32 *suppor
+  out:
+ 	return error;
+ }
+-#endif /* CONFIG_NFSD_V3 */
+ 
+ int nfsd_open_break_lease(struct inode *inode, int access)
+ {
+@@ -751,9 +844,6 @@ __nfsd_open(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type,
+ 	path.dentry = fhp->fh_dentry;
+ 	inode = d_inode(path.dentry);
+ 
+-	/* Disallow write access to files with the append-only bit set
+-	 * or any access when mandatory locking enabled
+-	 */
+ 	err = nfserr_perm;
+ 	if (IS_APPEND(inode) && (may_flags & NFSD_MAY_WRITE))
+ 		goto out;
+@@ -808,6 +898,7 @@ nfsd_open(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type,
+ 		int may_flags, struct file **filp)
+ {
+ 	__be32 err;
++	bool retried = false;
+ 
+ 	validate_process_creds();
+ 	/*
+@@ -823,21 +914,37 @@ nfsd_open(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type,
+ 	 */
+ 	if (type == S_IFREG)
+ 		may_flags |= NFSD_MAY_OWNER_OVERRIDE;
++retry:
+ 	err = fh_verify(rqstp, fhp, type, may_flags);
+-	if (!err)
++	if (!err) {
+ 		err = __nfsd_open(rqstp, fhp, type, may_flags, filp);
++		if (err == nfserr_stale && !retried) {
++			retried = true;
++			fh_put(fhp);
++			goto retry;
++		}
++	}
+ 	validate_process_creds();
+ 	return err;
+ }
+ 
++/**
++ * nfsd_open_verified - Open a regular file for the filecache
++ * @rqstp: RPC request
++ * @fhp: NFS filehandle of the file to open
++ * @may_flags: internal permission flags
++ * @filp: OUT: open "struct file *"
++ *
++ * Returns an nfsstat value in network byte order.
++ */
+ __be32
+-nfsd_open_verified(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type,
+-		int may_flags, struct file **filp)
++nfsd_open_verified(struct svc_rqst *rqstp, struct svc_fh *fhp, int may_flags,
++		   struct file **filp)
+ {
+ 	__be32 err;
+ 
+ 	validate_process_creds();
+-	err = __nfsd_open(rqstp, fhp, type, may_flags, filp);
++	err = __nfsd_open(rqstp, fhp, S_IFREG, may_flags, filp);
+ 	validate_process_creds();
+ 	return err;
+ }
+@@ -852,28 +959,24 @@ nfsd_splice_actor(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
+ 		  struct splice_desc *sd)
+ {
+ 	struct svc_rqst *rqstp = sd->u.data;
+-	struct page **pp = rqstp->rq_next_page;
+-	struct page *page = buf->page;
+-	size_t size;
+-
+-	size = sd->len;
+-
+-	if (rqstp->rq_res.page_len == 0) {
+-		get_page(page);
+-		put_page(*rqstp->rq_next_page);
+-		*(rqstp->rq_next_page++) = page;
+-		rqstp->rq_res.page_base = buf->offset;
+-		rqstp->rq_res.page_len = size;
+-	} else if (page != pp[-1]) {
+-		get_page(page);
+-		if (*rqstp->rq_next_page)
+-			put_page(*rqstp->rq_next_page);
+-		*(rqstp->rq_next_page++) = page;
+-		rqstp->rq_res.page_len += size;
+-	} else
+-		rqstp->rq_res.page_len += size;
++	struct page *page = buf->page;	// may be a compound one
++	unsigned offset = buf->offset;
++	struct page *last_page;
+ 
+-	return size;
++	last_page = page + (offset + sd->len - 1) / PAGE_SIZE;
++	for (page += offset / PAGE_SIZE; page <= last_page; page++) {
++		/*
++		 * Skip page replacement when extending the contents
++		 * of the current page.
++		 */
++		if (page == *(rqstp->rq_next_page - 1))
++			continue;
++		svc_rqst_replace_page(rqstp, page);
++	}
++	if (rqstp->rq_res.page_len == 0)	// first call
++		rqstp->rq_res.page_base = offset % PAGE_SIZE;
++	rqstp->rq_res.page_len += sd->len;
++	return sd->len;
+ }
+ 
+ static int nfsd_direct_splice_actor(struct pipe_inode_info *pipe,
+@@ -897,7 +1000,7 @@ static __be32 nfsd_finish_read(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 			       unsigned long *count, u32 *eof, ssize_t host_err)
+ {
+ 	if (host_err >= 0) {
+-		nfsdstats.io_read += host_err;
++		nfsd_stats_io_read_add(fhp->fh_export, host_err);
+ 		*eof = nfsd_eof_on_read(file, offset, host_err, *count);
+ 		*count = host_err;
+ 		fsnotify_access(file);
+@@ -985,7 +1088,9 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
+ 				unsigned long *cnt, int stable,
+ 				__be32 *verf)
+ {
++	struct nfsd_net		*nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 	struct file		*file = nf->nf_file;
++	struct super_block	*sb = file_inode(file)->i_sb;
+ 	struct svc_export	*exp;
+ 	struct iov_iter		iter;
+ 	errseq_t		since;
+@@ -993,12 +1098,18 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
+ 	int			host_err;
+ 	int			use_wgather;
+ 	loff_t			pos = offset;
++	unsigned long		exp_op_flags = 0;
+ 	unsigned int		pflags = current->flags;
+ 	rwf_t			flags = 0;
++	bool			restore_flags = false;
+ 
+ 	trace_nfsd_write_opened(rqstp, fhp, offset, *cnt);
+ 
+-	if (test_bit(RQ_LOCAL, &rqstp->rq_flags))
++	if (sb->s_export_op)
++		exp_op_flags = sb->s_export_op->flags;
++
++	if (test_bit(RQ_LOCAL, &rqstp->rq_flags) &&
++	    !(exp_op_flags & EXPORT_OP_REMOTE_FS)) {
+ 		/*
+ 		 * We want throttling in balance_dirty_pages()
+ 		 * and shrink_inactive_list() to only consider
+@@ -1007,6 +1118,8 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
+ 		 * the client's dirty pages or its congested queue.
+ 		 */
+ 		current->flags |= PF_LOCAL_THROTTLE;
++		restore_flags = true;
++	}
+ 
+ 	exp = fhp->fh_export;
+ 	use_wgather = (rqstp->rq_vers == 2) && EX_WGATHER(exp);
+@@ -1019,29 +1132,18 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
+ 
+ 	iov_iter_kvec(&iter, WRITE, vec, vlen, *cnt);
+ 	since = READ_ONCE(file->f_wb_err);
+-	if (flags & RWF_SYNC) {
+-		if (verf)
+-			nfsd_copy_boot_verifier(verf,
+-					net_generic(SVC_NET(rqstp),
+-					nfsd_net_id));
+-		host_err = vfs_iter_write(file, &iter, &pos, flags);
+-		if (host_err < 0)
+-			nfsd_reset_boot_verifier(net_generic(SVC_NET(rqstp),
+-						 nfsd_net_id));
+-	} else {
+-		if (verf)
+-			nfsd_copy_boot_verifier(verf,
+-					net_generic(SVC_NET(rqstp),
+-					nfsd_net_id));
+-		host_err = vfs_iter_write(file, &iter, &pos, flags);
+-	}
++	if (verf)
++		nfsd_copy_write_verifier(verf, nn);
++	file_start_write(file);
++	host_err = vfs_iter_write(file, &iter, &pos, flags);
++	file_end_write(file);
+ 	if (host_err < 0) {
+-		nfsd_reset_boot_verifier(net_generic(SVC_NET(rqstp),
+-					 nfsd_net_id));
++		nfsd_reset_write_verifier(nn);
++		trace_nfsd_writeverf_reset(nn, rqstp, host_err);
+ 		goto out_nfserr;
+ 	}
+ 	*cnt = host_err;
+-	nfsdstats.io_write += *cnt;
++	nfsd_stats_io_write_add(exp, *cnt);
+ 	fsnotify_modify(file);
+ 	host_err = filemap_check_wb_err(file->f_mapping, since);
+ 	if (host_err < 0)
+@@ -1049,9 +1151,10 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
+ 
+ 	if (stable && use_wgather) {
+ 		host_err = wait_for_concurrent_writes(file);
+-		if (host_err < 0)
+-			nfsd_reset_boot_verifier(net_generic(SVC_NET(rqstp),
+-						 nfsd_net_id));
++		if (host_err < 0) {
++			nfsd_reset_write_verifier(nn);
++			trace_nfsd_writeverf_reset(nn, rqstp, host_err);
++		}
+ 	}
+ 
+ out_nfserr:
+@@ -1062,7 +1165,7 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
+ 		trace_nfsd_write_err(rqstp, fhp, offset, host_err);
+ 		nfserr = nfserrno(host_err);
+ 	}
+-	if (test_bit(RQ_LOCAL, &rqstp->rq_flags))
++	if (restore_flags)
+ 		current_restore_flags(pflags, PF_LOCAL_THROTTLE);
+ 	return nfserr;
+ }
+@@ -1081,7 +1184,7 @@ __be32 nfsd_read(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 	__be32 err;
+ 
+ 	trace_nfsd_read_start(rqstp, fhp, offset, *count);
+-	err = nfsd_file_acquire(rqstp, fhp, NFSD_MAY_READ, &nf);
++	err = nfsd_file_acquire_gc(rqstp, fhp, NFSD_MAY_READ, &nf);
+ 	if (err)
+ 		return err;
+ 
+@@ -1113,7 +1216,7 @@ nfsd_write(struct svc_rqst *rqstp, struct svc_fh *fhp, loff_t offset,
+ 
+ 	trace_nfsd_write_start(rqstp, fhp, offset, *cnt);
+ 
+-	err = nfsd_file_acquire(rqstp, fhp, NFSD_MAY_WRITE, &nf);
++	err = nfsd_file_acquire_gc(rqstp, fhp, NFSD_MAY_WRITE, &nf);
+ 	if (err)
+ 		goto out;
+ 
+@@ -1125,45 +1228,59 @@ nfsd_write(struct svc_rqst *rqstp, struct svc_fh *fhp, loff_t offset,
+ 	return err;
+ }
+ 
+-#ifdef CONFIG_NFSD_V3
+-/*
+- * Commit all pending writes to stable storage.
++/**
++ * nfsd_commit - Commit pending writes to stable storage
++ * @rqstp: RPC request being processed
++ * @fhp: NFS filehandle
++ * @nf: target file
++ * @offset: raw offset from beginning of file
++ * @count: raw count of bytes to sync
++ * @verf: filled in with the server's current write verifier
+  *
+- * Note: we only guarantee that data that lies within the range specified
+- * by the 'offset' and 'count' parameters will be synced.
++ * Note: we guarantee that data that lies within the range specified
++ * by the 'offset' and 'count' parameters will be synced. The server
++ * is permitted to sync data that lies outside this range at the
++ * same time.
+  *
+  * Unfortunately we cannot lock the file to make sure we return full WCC
+  * data to the client, as locking happens lower down in the filesystem.
++ *
++ * Return values:
++ *   An nfsstat value in network byte order.
+  */
+ __be32
+-nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp,
+-               loff_t offset, unsigned long count, __be32 *verf)
++nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
++	    u64 offset, u32 count, __be32 *verf)
+ {
+-	struct nfsd_file	*nf;
+-	loff_t			end = LLONG_MAX;
+-	__be32			err = nfserr_inval;
++	__be32			err = nfs_ok;
++	u64			maxbytes;
++	loff_t			start, end;
++	struct nfsd_net		*nn;
+ 
+-	if (offset < 0)
+-		goto out;
+-	if (count != 0) {
+-		end = offset + (loff_t)count - 1;
+-		if (end < offset)
+-			goto out;
++	/*
++	 * Convert the client-provided (offset, count) range to a
++	 * (start, end) range. If the client-provided range falls
++	 * outside the maximum file size of the underlying FS,
++	 * clamp the sync range appropriately.
++	 */
++	start = 0;
++	end = LLONG_MAX;
++	maxbytes = (u64)fhp->fh_dentry->d_sb->s_maxbytes;
++	if (offset < maxbytes) {
++		start = offset;
++		if (count && (offset + count - 1 < maxbytes))
++			end = offset + count - 1;
+ 	}
+ 
+-	err = nfsd_file_acquire(rqstp, fhp,
+-			NFSD_MAY_WRITE|NFSD_MAY_NOT_BREAK_LEASE, &nf);
+-	if (err)
+-		goto out;
++	nn = net_generic(nf->nf_net, nfsd_net_id);
+ 	if (EX_ISSYNC(fhp->fh_export)) {
+ 		errseq_t since = READ_ONCE(nf->nf_file->f_wb_err);
+ 		int err2;
+ 
+-		err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
++		err2 = vfs_fsync_range(nf->nf_file, start, end, 0);
+ 		switch (err2) {
+ 		case 0:
+-			nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net,
+-						nfsd_net_id));
++			nfsd_copy_write_verifier(verf, nn);
+ 			err2 = filemap_check_wb_err(nf->nf_file->f_mapping,
+ 						    since);
+ 			err = nfserrno(err2);
+@@ -1172,28 +1289,37 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 			err = nfserr_notsupp;
+ 			break;
+ 		default:
+-			nfsd_reset_boot_verifier(net_generic(nf->nf_net,
+-						 nfsd_net_id));
++			nfsd_reset_write_verifier(nn);
++			trace_nfsd_writeverf_reset(nn, rqstp, err2);
+ 			err = nfserrno(err2);
+ 		}
+ 	} else
+-		nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net,
+-					nfsd_net_id));
++		nfsd_copy_write_verifier(verf, nn);
+ 
+-	nfsd_file_put(nf);
+-out:
+ 	return err;
+ }
+-#endif /* CONFIG_NFSD_V3 */
+ 
+-static __be32
+-nfsd_create_setattr(struct svc_rqst *rqstp, struct svc_fh *resfhp,
+-			struct iattr *iap)
++/**
++ * nfsd_create_setattr - Set a created file's attributes
++ * @rqstp: RPC transaction being executed
++ * @fhp: NFS filehandle of parent directory
++ * @resfhp: NFS filehandle of new object
++ * @attrs: requested attributes of new object
++ *
++ * Returns nfs_ok on success, or an nfsstat in network byte order.
++ */
++__be32
++nfsd_create_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp,
++		    struct svc_fh *resfhp, struct nfsd_attrs *attrs)
+ {
++	struct iattr *iap = attrs->na_iattr;
++	__be32 status;
++
+ 	/*
+-	 * Mode has already been set earlier in create:
++	 * Mode has already been set by file creation.
+ 	 */
+ 	iap->ia_valid &= ~ATTR_MODE;
++
+ 	/*
+ 	 * Setting uid/gid works only for root.  Irix appears to
+ 	 * send along the gid on create when it tries to implement
+@@ -1201,10 +1327,31 @@ nfsd_create_setattr(struct svc_rqst *rqstp, struct svc_fh *resfhp,
+ 	 */
+ 	if (!uid_eq(current_fsuid(), GLOBAL_ROOT_UID))
+ 		iap->ia_valid &= ~(ATTR_UID|ATTR_GID);
++
++	/*
++	 * Callers expect new file metadata to be committed even
++	 * if the attributes have not changed.
++	 */
+ 	if (iap->ia_valid)
+-		return nfsd_setattr(rqstp, resfhp, iap, 0, (time64_t)0);
+-	/* Callers expect file metadata to be committed here */
+-	return nfserrno(commit_metadata(resfhp));
++		status = nfsd_setattr(rqstp, resfhp, attrs, 0, (time64_t)0);
++	else
++		status = nfserrno(commit_metadata(resfhp));
++
++	/*
++	 * Transactional filesystems had a chance to commit changes
++	 * for both parent and child simultaneously making the
++	 * following commit_metadata a noop in many cases.
++	 */
++	if (!status)
++		status = nfserrno(commit_metadata(fhp));
++
++	/*
++	 * Update the new filehandle to pick up the new attributes.
++	 */
++	if (!status)
++		status = fh_update(resfhp);
++
++	return status;
+ }
+ 
+ /* HPUX client sometimes creates a file in mode 000, and sets size to 0.
+@@ -1225,26 +1372,19 @@ nfsd_check_ignore_resizing(struct iattr *iap)
+ /* The parent directory should already be locked: */
+ __be32
+ nfsd_create_locked(struct svc_rqst *rqstp, struct svc_fh *fhp,
+-		char *fname, int flen, struct iattr *iap,
+-		int type, dev_t rdev, struct svc_fh *resfhp)
++		   struct nfsd_attrs *attrs,
++		   int type, dev_t rdev, struct svc_fh *resfhp)
+ {
+ 	struct dentry	*dentry, *dchild;
+ 	struct inode	*dirp;
++	struct iattr	*iap = attrs->na_iattr;
+ 	__be32		err;
+-	__be32		err2;
+ 	int		host_err;
+ 
+ 	dentry = fhp->fh_dentry;
+ 	dirp = d_inode(dentry);
+ 
+ 	dchild = dget(resfhp->fh_dentry);
+-	if (!fhp->fh_locked) {
+-		WARN_ONCE(1, "nfsd_create: parent %pd2 not locked!\n",
+-				dentry);
+-		err = nfserr_io;
+-		goto out;
+-	}
+-
+ 	err = nfsd_permission(rqstp, fhp->fh_export, dentry, NFSD_MAY_CREATE);
+ 	if (err)
+ 		goto out;
+@@ -1257,7 +1397,6 @@ nfsd_create_locked(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 		iap->ia_mode &= ~current_umask();
+ 
+ 	err = 0;
+-	host_err = 0;
+ 	switch (type) {
+ 	case S_IFREG:
+ 		host_err = vfs_create(dirp, dchild, iap->ia_mode, true);
+@@ -1303,22 +1442,8 @@ nfsd_create_locked(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 	if (host_err < 0)
+ 		goto out_nfserr;
+ 
+-	err = nfsd_create_setattr(rqstp, resfhp, iap);
++	err = nfsd_create_setattr(rqstp, fhp, resfhp, attrs);
+ 
+-	/*
+-	 * nfsd_create_setattr already committed the child.  Transactional
+-	 * filesystems had a chance to commit changes for both parent and
+-	 * child simultaneously making the following commit_metadata a
+-	 * noop.
+-	 */
+-	err2 = nfserrno(commit_metadata(fhp));
+-	if (err2)
+-		err = err2;
+-	/*
+-	 * Update the file handle to get the new inode info.
+-	 */
+-	if (!err)
+-		err = fh_update(resfhp);
+ out:
+ 	dput(dchild);
+ 	return err;
+@@ -1336,8 +1461,8 @@ nfsd_create_locked(struct svc_rqst *rqstp, struct svc_fh *fhp,
+  */
+ __be32
+ nfsd_create(struct svc_rqst *rqstp, struct svc_fh *fhp,
+-		char *fname, int flen, struct iattr *iap,
+-		int type, dev_t rdev, struct svc_fh *resfhp)
++	    char *fname, int flen, struct nfsd_attrs *attrs,
++	    int type, dev_t rdev, struct svc_fh *resfhp)
+ {
+ 	struct dentry	*dentry, *dchild = NULL;
+ 	__be32		err;
+@@ -1356,11 +1481,13 @@ nfsd_create(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 	if (host_err)
+ 		return nfserrno(host_err);
+ 
+-	fh_lock_nested(fhp, I_MUTEX_PARENT);
++	inode_lock_nested(dentry->d_inode, I_MUTEX_PARENT);
+ 	dchild = lookup_one_len(fname, dentry, flen);
+ 	host_err = PTR_ERR(dchild);
+-	if (IS_ERR(dchild))
+-		return nfserrno(host_err);
++	if (IS_ERR(dchild)) {
++		err = nfserrno(host_err);
++		goto out_unlock;
++	}
+ 	err = fh_compose(resfhp, fhp->fh_export, dchild, fhp);
+ 	/*
+ 	 * We unconditionally drop our ref to dchild as fh_compose will have
+@@ -1368,178 +1495,14 @@ nfsd_create(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 	 */
+ 	dput(dchild);
+ 	if (err)
+-		return err;
+-	return nfsd_create_locked(rqstp, fhp, fname, flen, iap, type,
+-					rdev, resfhp);
+-}
+-
+-#ifdef CONFIG_NFSD_V3
+-
+-/*
+- * NFSv3 and NFSv4 version of nfsd_create
+- */
+-__be32
+-do_nfsd_create(struct svc_rqst *rqstp, struct svc_fh *fhp,
+-		char *fname, int flen, struct iattr *iap,
+-		struct svc_fh *resfhp, int createmode, u32 *verifier,
+-	        bool *truncp, bool *created)
+-{
+-	struct dentry	*dentry, *dchild = NULL;
+-	struct inode	*dirp;
+-	__be32		err;
+-	int		host_err;
+-	__u32		v_mtime=0, v_atime=0;
+-
+-	err = nfserr_perm;
+-	if (!flen)
+-		goto out;
+-	err = nfserr_exist;
+-	if (isdotent(fname, flen))
+-		goto out;
+-	if (!(iap->ia_valid & ATTR_MODE))
+-		iap->ia_mode = 0;
+-	err = fh_verify(rqstp, fhp, S_IFDIR, NFSD_MAY_EXEC);
+-	if (err)
+-		goto out;
+-
+-	dentry = fhp->fh_dentry;
+-	dirp = d_inode(dentry);
+-
+-	host_err = fh_want_write(fhp);
+-	if (host_err)
+-		goto out_nfserr;
+-
+-	fh_lock_nested(fhp, I_MUTEX_PARENT);
+-
+-	/*
+-	 * Compose the response file handle.
+-	 */
+-	dchild = lookup_one_len(fname, dentry, flen);
+-	host_err = PTR_ERR(dchild);
+-	if (IS_ERR(dchild))
+-		goto out_nfserr;
+-
+-	/* If file doesn't exist, check for permissions to create one */
+-	if (d_really_is_negative(dchild)) {
+-		err = fh_verify(rqstp, fhp, S_IFDIR, NFSD_MAY_CREATE);
+-		if (err)
+-			goto out;
+-	}
+-
+-	err = fh_compose(resfhp, fhp->fh_export, dchild, fhp);
+-	if (err)
+-		goto out;
+-
+-	if (nfsd_create_is_exclusive(createmode)) {
+-		/* solaris7 gets confused (bugid 4218508) if these have
+-		 * the high bit set, so just clear the high bits. If this is
+-		 * ever changed to use different attrs for storing the
+-		 * verifier, then do_open_lookup() will also need to be fixed
+-		 * accordingly.
+-		 */
+-		v_mtime = verifier[0]&0x7fffffff;
+-		v_atime = verifier[1]&0x7fffffff;
+-	}
+-	
+-	if (d_really_is_positive(dchild)) {
+-		err = 0;
+-
+-		switch (createmode) {
+-		case NFS3_CREATE_UNCHECKED:
+-			if (! d_is_reg(dchild))
+-				goto out;
+-			else if (truncp) {
+-				/* in nfsv4, we need to treat this case a little
+-				 * differently.  we don't want to truncate the
+-				 * file now; this would be wrong if the OPEN
+-				 * fails for some other reason.  furthermore,
+-				 * if the size is nonzero, we should ignore it
+-				 * according to spec!
+-				 */
+-				*truncp = (iap->ia_valid & ATTR_SIZE) && !iap->ia_size;
+-			}
+-			else {
+-				iap->ia_valid &= ATTR_SIZE;
+-				goto set_attr;
+-			}
+-			break;
+-		case NFS3_CREATE_EXCLUSIVE:
+-			if (   d_inode(dchild)->i_mtime.tv_sec == v_mtime
+-			    && d_inode(dchild)->i_atime.tv_sec == v_atime
+-			    && d_inode(dchild)->i_size  == 0 ) {
+-				if (created)
+-					*created = true;
+-				break;
+-			}
+-			fallthrough;
+-		case NFS4_CREATE_EXCLUSIVE4_1:
+-			if (   d_inode(dchild)->i_mtime.tv_sec == v_mtime
+-			    && d_inode(dchild)->i_atime.tv_sec == v_atime
+-			    && d_inode(dchild)->i_size  == 0 ) {
+-				if (created)
+-					*created = true;
+-				goto set_attr;
+-			}
+-			fallthrough;
+-		case NFS3_CREATE_GUARDED:
+-			err = nfserr_exist;
+-		}
+-		fh_drop_write(fhp);
+-		goto out;
+-	}
+-
+-	if (!IS_POSIXACL(dirp))
+-		iap->ia_mode &= ~current_umask();
+-
+-	host_err = vfs_create(dirp, dchild, iap->ia_mode, true);
+-	if (host_err < 0) {
+-		fh_drop_write(fhp);
+-		goto out_nfserr;
+-	}
+-	if (created)
+-		*created = true;
+-
+-	nfsd_check_ignore_resizing(iap);
+-
+-	if (nfsd_create_is_exclusive(createmode)) {
+-		/* Cram the verifier into atime/mtime */
+-		iap->ia_valid = ATTR_MTIME|ATTR_ATIME
+-			| ATTR_MTIME_SET|ATTR_ATIME_SET;
+-		/* XXX someone who knows this better please fix it for nsec */ 
+-		iap->ia_mtime.tv_sec = v_mtime;
+-		iap->ia_atime.tv_sec = v_atime;
+-		iap->ia_mtime.tv_nsec = 0;
+-		iap->ia_atime.tv_nsec = 0;
+-	}
+-
+- set_attr:
+-	err = nfsd_create_setattr(rqstp, resfhp, iap);
+-
+-	/*
+-	 * nfsd_create_setattr already committed the child
+-	 * (and possibly also the parent).
+-	 */
+-	if (!err)
+-		err = nfserrno(commit_metadata(fhp));
+-
+-	/*
+-	 * Update the filehandle to get the new inode info.
+-	 */
+-	if (!err)
+-		err = fh_update(resfhp);
+-
+- out:
+-	fh_unlock(fhp);
+-	if (dchild && !IS_ERR(dchild))
+-		dput(dchild);
+-	fh_drop_write(fhp);
+- 	return err;
+- 
+- out_nfserr:
+-	err = nfserrno(host_err);
+-	goto out;
++		goto out_unlock;
++	fh_fill_pre_attrs(fhp);
++	err = nfsd_create_locked(rqstp, fhp, attrs, type, rdev, resfhp);
++	fh_fill_post_attrs(fhp);
++out_unlock:
++	inode_unlock(dentry->d_inode);
++	return err;
+ }
+-#endif /* CONFIG_NFSD_V3 */
+ 
+ /*
+  * Read a symlink. On entry, *lenp must contain the maximum path length that
+@@ -1579,15 +1542,25 @@ nfsd_readlink(struct svc_rqst *rqstp, struct svc_fh *fhp, char *buf, int *lenp)
+ 	return 0;
+ }
+ 
+-/*
+- * Create a symlink and look up its inode
++/**
++ * nfsd_symlink - Create a symlink and look up its inode
++ * @rqstp: RPC transaction being executed
++ * @fhp: NFS filehandle of parent directory
++ * @fname: filename of the new symlink
++ * @flen: length of @fname
++ * @path: content of the new symlink (NUL-terminated)
++ * @attrs: requested attributes of new object
++ * @resfhp: NFS filehandle of new object
++ *
+  * N.B. After this call _both_ fhp and resfhp need an fh_put
++ *
++ * Returns nfs_ok on success, or an nfsstat in network byte order.
+  */
+ __be32
+ nfsd_symlink(struct svc_rqst *rqstp, struct svc_fh *fhp,
+-				char *fname, int flen,
+-				char *path,
+-				struct svc_fh *resfhp)
++	     char *fname, int flen,
++	     char *path, struct nfsd_attrs *attrs,
++	     struct svc_fh *resfhp)
+ {
+ 	struct dentry	*dentry, *dnew;
+ 	__be32		err, cerr;
+@@ -1605,33 +1578,35 @@ nfsd_symlink(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 		goto out;
+ 
+ 	host_err = fh_want_write(fhp);
+-	if (host_err)
+-		goto out_nfserr;
++	if (host_err) {
++		err = nfserrno(host_err);
++		goto out;
++	}
+ 
+-	fh_lock(fhp);
+ 	dentry = fhp->fh_dentry;
++	inode_lock_nested(dentry->d_inode, I_MUTEX_PARENT);
+ 	dnew = lookup_one_len(fname, dentry, flen);
+-	host_err = PTR_ERR(dnew);
+-	if (IS_ERR(dnew))
+-		goto out_nfserr;
+-
++	if (IS_ERR(dnew)) {
++		err = nfserrno(PTR_ERR(dnew));
++		inode_unlock(dentry->d_inode);
++		goto out_drop_write;
++	}
++	fh_fill_pre_attrs(fhp);
+ 	host_err = vfs_symlink(d_inode(dentry), dnew, path);
+ 	err = nfserrno(host_err);
++	cerr = fh_compose(resfhp, fhp->fh_export, dnew, fhp);
++	if (!err)
++		nfsd_create_setattr(rqstp, fhp, resfhp, attrs);
++	fh_fill_post_attrs(fhp);
++	inode_unlock(dentry->d_inode);
+ 	if (!err)
+ 		err = nfserrno(commit_metadata(fhp));
+-	fh_unlock(fhp);
+-
+-	fh_drop_write(fhp);
+-
+-	cerr = fh_compose(resfhp, fhp->fh_export, dnew, fhp);
+ 	dput(dnew);
+ 	if (err==0) err = cerr;
++out_drop_write:
++	fh_drop_write(fhp);
+ out:
+ 	return err;
+-
+-out_nfserr:
+-	err = nfserrno(host_err);
+-	goto out;
+ }
+ 
+ /*
+@@ -1669,21 +1644,25 @@ nfsd_link(struct svc_rqst *rqstp, struct svc_fh *ffhp,
+ 		goto out;
+ 	}
+ 
+-	fh_lock_nested(ffhp, I_MUTEX_PARENT);
+ 	ddir = ffhp->fh_dentry;
+ 	dirp = d_inode(ddir);
++	inode_lock_nested(dirp, I_MUTEX_PARENT);
+ 
+ 	dnew = lookup_one_len(name, ddir, len);
+-	host_err = PTR_ERR(dnew);
+-	if (IS_ERR(dnew))
+-		goto out_nfserr;
++	if (IS_ERR(dnew)) {
++		err = nfserrno(PTR_ERR(dnew));
++		goto out_unlock;
++	}
+ 
+ 	dold = tfhp->fh_dentry;
+ 
+ 	err = nfserr_noent;
+ 	if (d_really_is_negative(dold))
+ 		goto out_dput;
++	fh_fill_pre_attrs(ffhp);
+ 	host_err = vfs_link(dold, dirp, dnew, NULL);
++	fh_fill_post_attrs(ffhp);
++	inode_unlock(dirp);
+ 	if (!host_err) {
+ 		err = nfserrno(commit_metadata(ffhp));
+ 		if (!err)
+@@ -1694,17 +1673,17 @@ nfsd_link(struct svc_rqst *rqstp, struct svc_fh *ffhp,
+ 		else
+ 			err = nfserrno(host_err);
+ 	}
+-out_dput:
+ 	dput(dnew);
+-out_unlock:
+-	fh_unlock(ffhp);
++out_drop_write:
+ 	fh_drop_write(tfhp);
+ out:
+ 	return err;
+ 
+-out_nfserr:
+-	err = nfserrno(host_err);
+-	goto out_unlock;
++out_dput:
++	dput(dnew);
++out_unlock:
++	inode_unlock(dirp);
++	goto out_drop_write;
+ }
+ 
+ static void
+@@ -1739,7 +1718,7 @@ nfsd_rename(struct svc_rqst *rqstp, struct svc_fh *ffhp, char *fname, int flen,
+ 	struct inode	*fdir, *tdir;
+ 	__be32		err;
+ 	int		host_err;
+-	bool		has_cached = false;
++	bool		close_cached = false;
+ 
+ 	err = fh_verify(rqstp, ffhp, S_IFDIR, NFSD_MAY_REMOVE);
+ 	if (err)
+@@ -1771,12 +1750,9 @@ nfsd_rename(struct svc_rqst *rqstp, struct svc_fh *ffhp, char *fname, int flen,
+ 		goto out;
+ 	}
+ 
+-	/* cannot use fh_lock as we need deadlock protective ordering
+-	 * so do it by hand */
+ 	trap = lock_rename(tdentry, fdentry);
+-	ffhp->fh_locked = tfhp->fh_locked = true;
+-	fill_pre_wcc(ffhp);
+-	fill_pre_wcc(tfhp);
++	fh_fill_pre_attrs(ffhp);
++	fh_fill_pre_attrs(tfhp);
+ 
+ 	odentry = lookup_one_len(fname, fdentry, flen);
+ 	host_err = PTR_ERR(odentry);
+@@ -1798,11 +1774,26 @@ nfsd_rename(struct svc_rqst *rqstp, struct svc_fh *ffhp, char *fname, int flen,
+ 	if (ndentry == trap)
+ 		goto out_dput_new;
+ 
+-	if (nfsd_has_cached_files(ndentry)) {
+-		has_cached = true;
++	if ((ndentry->d_sb->s_export_op->flags & EXPORT_OP_CLOSE_BEFORE_UNLINK) &&
++	    nfsd_has_cached_files(ndentry)) {
++		close_cached = true;
+ 		goto out_dput_old;
+ 	} else {
+-		host_err = vfs_rename(fdir, odentry, tdir, ndentry, NULL, 0);
++		struct renamedata rd = {
++			.old_dir	= fdir,
++			.old_dentry	= odentry,
++			.new_dir	= tdir,
++			.new_dentry	= ndentry,
++		};
++		int retries;
++
++		for (retries = 1;;) {
++			host_err = vfs_rename(&rd);
++			if (host_err != -EAGAIN || !retries--)
++				break;
++			if (!nfsd_wait_for_delegreturn(rqstp, d_inode(odentry)))
++				break;
++		}
+ 		if (!host_err) {
+ 			host_err = commit_metadata(tfhp);
+ 			if (!host_err)
+@@ -1815,17 +1806,12 @@ nfsd_rename(struct svc_rqst *rqstp, struct svc_fh *ffhp, char *fname, int flen,
+ 	dput(odentry);
+  out_nfserr:
+ 	err = nfserrno(host_err);
+-	/*
+-	 * We cannot rely on fh_unlock on the two filehandles,
+-	 * as that would do the wrong thing if the two directories
+-	 * were the same, so again we do it by hand.
+-	 */
+-	if (!has_cached) {
+-		fill_post_wcc(ffhp);
+-		fill_post_wcc(tfhp);
++
++	if (!close_cached) {
++		fh_fill_post_attrs(ffhp);
++		fh_fill_post_attrs(tfhp);
+ 	}
+ 	unlock_rename(tdentry, fdentry);
+-	ffhp->fh_locked = tfhp->fh_locked = false;
+ 	fh_drop_write(ffhp);
+ 
+ 	/*
+@@ -1834,8 +1820,8 @@ nfsd_rename(struct svc_rqst *rqstp, struct svc_fh *ffhp, char *fname, int flen,
+ 	 * shouldn't be done with locks held however, so we delay it until this
+ 	 * point and then reattempt the whole shebang.
+ 	 */
+-	if (has_cached) {
+-		has_cached = false;
++	if (close_cached) {
++		close_cached = false;
+ 		nfsd_close_cached_files(ndentry);
+ 		dput(ndentry);
+ 		goto retry;
+@@ -1854,6 +1840,7 @@ nfsd_unlink(struct svc_rqst *rqstp, struct svc_fh *fhp, int type,
+ {
+ 	struct dentry	*dentry, *rdentry;
+ 	struct inode	*dirp;
++	struct inode	*rinode;
+ 	__be32		err;
+ 	int		host_err;
+ 
+@@ -1868,34 +1855,50 @@ nfsd_unlink(struct svc_rqst *rqstp, struct svc_fh *fhp, int type,
+ 	if (host_err)
+ 		goto out_nfserr;
+ 
+-	fh_lock_nested(fhp, I_MUTEX_PARENT);
+ 	dentry = fhp->fh_dentry;
+ 	dirp = d_inode(dentry);
++	inode_lock_nested(dirp, I_MUTEX_PARENT);
+ 
+ 	rdentry = lookup_one_len(fname, dentry, flen);
+ 	host_err = PTR_ERR(rdentry);
+ 	if (IS_ERR(rdentry))
+-		goto out_drop_write;
++		goto out_unlock;
+ 
+ 	if (d_really_is_negative(rdentry)) {
+ 		dput(rdentry);
+ 		host_err = -ENOENT;
+-		goto out_drop_write;
++		goto out_unlock;
+ 	}
++	rinode = d_inode(rdentry);
++	ihold(rinode);
+ 
+ 	if (!type)
+ 		type = d_inode(rdentry)->i_mode & S_IFMT;
+ 
++	fh_fill_pre_attrs(fhp);
+ 	if (type != S_IFDIR) {
+-		nfsd_close_cached_files(rdentry);
+-		host_err = vfs_unlink(dirp, rdentry, NULL);
++		int retries;
++
++		if (rdentry->d_sb->s_export_op->flags & EXPORT_OP_CLOSE_BEFORE_UNLINK)
++			nfsd_close_cached_files(rdentry);
++
++		for (retries = 1;;) {
++			host_err = vfs_unlink(dirp, rdentry, NULL);
++			if (host_err != -EAGAIN || !retries--)
++				break;
++			if (!nfsd_wait_for_delegreturn(rqstp, rinode))
++				break;
++		}
+ 	} else {
+ 		host_err = vfs_rmdir(dirp, rdentry);
+ 	}
++	fh_fill_post_attrs(fhp);
+ 
++	inode_unlock(dirp);
+ 	if (!host_err)
+ 		host_err = commit_metadata(fhp);
+ 	dput(rdentry);
++	iput(rinode);    /* truncate the inode here */
+ 
+ out_drop_write:
+ 	fh_drop_write(fhp);
+@@ -1913,6 +1916,9 @@ nfsd_unlink(struct svc_rqst *rqstp, struct svc_fh *fhp, int type,
+ 	}
+ out:
+ 	return err;
++out_unlock:
++	inode_unlock(dirp);
++	goto out_drop_write;
+ }
+ 
+ /*
+@@ -1962,8 +1968,9 @@ static int nfsd_buffered_filldir(struct dir_context *ctx, const char *name,
+ 	return 0;
+ }
+ 
+-static __be32 nfsd_buffered_readdir(struct file *file, nfsd_filldir_t func,
+-				    struct readdir_cd *cdp, loff_t *offsetp)
++static __be32 nfsd_buffered_readdir(struct file *file, struct svc_fh *fhp,
++				    nfsd_filldir_t func, struct readdir_cd *cdp,
++				    loff_t *offsetp)
+ {
+ 	struct buffered_dirent *de;
+ 	int host_err;
+@@ -2009,6 +2016,8 @@ static __be32 nfsd_buffered_readdir(struct file *file, nfsd_filldir_t func,
+ 			if (cdp->err != nfs_ok)
+ 				break;
+ 
++			trace_nfsd_dirent(fhp, de->ino, de->name, de->namlen);
++
+ 			reclen = ALIGN(sizeof(*de) + de->namlen,
+ 				       sizeof(u64));
+ 			size -= reclen;
+@@ -2056,7 +2065,7 @@ nfsd_readdir(struct svc_rqst *rqstp, struct svc_fh *fhp, loff_t *offsetp,
+ 		goto out_close;
+ 	}
+ 
+-	err = nfsd_buffered_readdir(file, func, cdp, offsetp);
++	err = nfsd_buffered_readdir(file, fhp, func, cdp, offsetp);
+ 
+ 	if (err == nfserr_eof || err == nfserr_toosmall)
+ 		err = nfs_ok; /* can still be found in ->err */
+@@ -2263,13 +2272,16 @@ nfsd_listxattr(struct svc_rqst *rqstp, struct svc_fh *fhp, char **bufp,
+ 	return err;
+ }
+ 
+-/*
+- * Removexattr and setxattr need to call fh_lock to both lock the inode
+- * and set the change attribute. Since the top-level vfs_removexattr
+- * and vfs_setxattr calls already do their own inode_lock calls, call
+- * the _locked variant. Pass in a NULL pointer for delegated_inode,
+- * and let the client deal with NFS4ERR_DELAY (same as with e.g.
+- * setattr and remove).
++/**
++ * nfsd_removexattr - Remove an extended attribute
++ * @rqstp: RPC transaction being executed
++ * @fhp: NFS filehandle of object with xattr to remove
++ * @name: name of xattr to remove (NUL-terminate)
++ *
++ * Pass in a NULL pointer for delegated_inode, and let the client deal
++ * with NFS4ERR_DELAY (same as with e.g. setattr and remove).
++ *
++ * Returns nfs_ok on success, or an nfsstat in network byte order.
+  */
+ __be32
+ nfsd_removexattr(struct svc_rqst *rqstp, struct svc_fh *fhp, char *name)
+@@ -2285,11 +2297,13 @@ nfsd_removexattr(struct svc_rqst *rqstp, struct svc_fh *fhp, char *name)
+ 	if (ret)
+ 		return nfserrno(ret);
+ 
+-	fh_lock(fhp);
++	inode_lock(fhp->fh_dentry->d_inode);
++	fh_fill_pre_attrs(fhp);
+ 
+ 	ret = __vfs_removexattr_locked(fhp->fh_dentry, name, NULL);
+ 
+-	fh_unlock(fhp);
++	fh_fill_post_attrs(fhp);
++	inode_unlock(fhp->fh_dentry->d_inode);
+ 	fh_drop_write(fhp);
+ 
+ 	return nfsd_xattr_errno(ret);
+@@ -2309,12 +2323,13 @@ nfsd_setxattr(struct svc_rqst *rqstp, struct svc_fh *fhp, char *name,
+ 	ret = fh_want_write(fhp);
+ 	if (ret)
+ 		return nfserrno(ret);
+-	fh_lock(fhp);
++	inode_lock(fhp->fh_dentry->d_inode);
++	fh_fill_pre_attrs(fhp);
+ 
+ 	ret = __vfs_setxattr_locked(fhp->fh_dentry, name, buf, len, flags,
+ 				    NULL);
+-
+-	fh_unlock(fhp);
++	fh_fill_post_attrs(fhp);
++	inode_unlock(fhp->fh_dentry->d_inode);
+ 	fh_drop_write(fhp);
+ 
+ 	return nfsd_xattr_errno(ret);
+diff --git a/fs/nfsd/vfs.h b/fs/nfsd/vfs.h
+index a2442ebe5acf6..dbdfef7ae85bb 100644
+--- a/fs/nfsd/vfs.h
++++ b/fs/nfsd/vfs.h
+@@ -6,6 +6,8 @@
+ #ifndef LINUX_NFSD_VFS_H
+ #define LINUX_NFSD_VFS_H
+ 
++#include <linux/fs.h>
++#include <linux/posix_acl.h>
+ #include "nfsfh.h"
+ #include "nfsd.h"
+ 
+@@ -42,6 +44,23 @@ struct nfsd_file;
+ typedef int (*nfsd_filldir_t)(void *, const char *, int, loff_t, u64, unsigned);
+ 
+ /* nfsd/vfs.c */
++struct nfsd_attrs {
++	struct iattr		*na_iattr;	/* input */
++	struct xdr_netobj	*na_seclabel;	/* input */
++	struct posix_acl	*na_pacl;	/* input */
++	struct posix_acl	*na_dpacl;	/* input */
++
++	int			na_labelerr;	/* output */
++	int			na_aclerr;	/* output */
++};
++
++static inline void nfsd_attrs_free(struct nfsd_attrs *attrs)
++{
++	posix_acl_release(attrs->na_pacl);
++	posix_acl_release(attrs->na_dpacl);
++}
++
++__be32		nfserrno (int errno);
+ int		nfsd_cross_mnt(struct svc_rqst *rqstp, struct dentry **dpp,
+ 		                struct svc_export **expp);
+ __be32		nfsd_lookup(struct svc_rqst *, struct svc_fh *,
+@@ -50,32 +69,28 @@ __be32		 nfsd_lookup_dentry(struct svc_rqst *, struct svc_fh *,
+ 				const char *, unsigned int,
+ 				struct svc_export **, struct dentry **);
+ __be32		nfsd_setattr(struct svc_rqst *, struct svc_fh *,
+-				struct iattr *, int, time64_t);
++				struct nfsd_attrs *, int, time64_t);
+ int nfsd_mountpoint(struct dentry *, struct svc_export *);
+ #ifdef CONFIG_NFSD_V4
+-__be32          nfsd4_set_nfs4_label(struct svc_rqst *, struct svc_fh *,
+-		    struct xdr_netobj *);
+ __be32		nfsd4_vfs_fallocate(struct svc_rqst *, struct svc_fh *,
+ 				    struct file *, loff_t, loff_t, int);
+-__be32		nfsd4_clone_file_range(struct nfsd_file *nf_src, u64 src_pos,
++__be32		nfsd4_clone_file_range(struct svc_rqst *rqstp,
++				       struct nfsd_file *nf_src, u64 src_pos,
+ 				       struct nfsd_file *nf_dst, u64 dst_pos,
+ 				       u64 count, bool sync);
+ #endif /* CONFIG_NFSD_V4 */
+ __be32		nfsd_create_locked(struct svc_rqst *, struct svc_fh *,
+-				char *name, int len, struct iattr *attrs,
+-				int type, dev_t rdev, struct svc_fh *res);
++				struct nfsd_attrs *attrs, int type, dev_t rdev,
++				struct svc_fh *res);
+ __be32		nfsd_create(struct svc_rqst *, struct svc_fh *,
+-				char *name, int len, struct iattr *attrs,
++				char *name, int len, struct nfsd_attrs *attrs,
+ 				int type, dev_t rdev, struct svc_fh *res);
+-#ifdef CONFIG_NFSD_V3
+ __be32		nfsd_access(struct svc_rqst *, struct svc_fh *, u32 *, u32 *);
+-__be32		do_nfsd_create(struct svc_rqst *, struct svc_fh *,
+-				char *name, int len, struct iattr *attrs,
+-				struct svc_fh *res, int createmode,
+-				u32 *verifier, bool *truncp, bool *created);
+-__be32		nfsd_commit(struct svc_rqst *, struct svc_fh *,
+-				loff_t, unsigned long, __be32 *verf);
+-#endif /* CONFIG_NFSD_V3 */
++__be32		nfsd_create_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp,
++				struct svc_fh *resfhp, struct nfsd_attrs *iap);
++__be32		nfsd_commit(struct svc_rqst *rqst, struct svc_fh *fhp,
++				struct nfsd_file *nf, u64 offset, u32 count,
++				__be32 *verf);
+ #ifdef CONFIG_NFSD_V4
+ __be32		nfsd_getxattr(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 			    char *name, void **bufp, int *lenp);
+@@ -89,7 +104,7 @@ __be32		nfsd_setxattr(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ int 		nfsd_open_break_lease(struct inode *, int);
+ __be32		nfsd_open(struct svc_rqst *, struct svc_fh *, umode_t,
+ 				int, struct file **);
+-__be32		nfsd_open_verified(struct svc_rqst *, struct svc_fh *, umode_t,
++__be32		nfsd_open_verified(struct svc_rqst *, struct svc_fh *,
+ 				int, struct file **);
+ __be32		nfsd_splice_read(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 				struct file *file, loff_t offset,
+@@ -113,8 +128,9 @@ __be32		nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ __be32		nfsd_readlink(struct svc_rqst *, struct svc_fh *,
+ 				char *, int *);
+ __be32		nfsd_symlink(struct svc_rqst *, struct svc_fh *,
+-				char *name, int len, char *path,
+-				struct svc_fh *res);
++			     char *name, int len, char *path,
++			     struct nfsd_attrs *attrs,
++			     struct svc_fh *res);
+ __be32		nfsd_link(struct svc_rqst *, struct svc_fh *,
+ 				char *, int, struct svc_fh *);
+ ssize_t		nfsd_copy_file_range(struct file *, u64,
+@@ -152,7 +168,7 @@ static inline void fh_drop_write(struct svc_fh *fh)
+ 	}
+ }
+ 
+-static inline __be32 fh_getattr(struct svc_fh *fh, struct kstat *stat)
++static inline __be32 fh_getattr(const struct svc_fh *fh, struct kstat *stat)
+ {
+ 	struct path p = {.mnt = fh->fh_export->ex_path.mnt,
+ 			 .dentry = fh->fh_dentry};
+@@ -160,10 +176,4 @@ static inline __be32 fh_getattr(struct svc_fh *fh, struct kstat *stat)
+ 				    AT_STATX_SYNC_AS_STAT));
+ }
+ 
+-static inline int nfsd_create_is_exclusive(int createmode)
+-{
+-	return createmode == NFS3_CREATE_EXCLUSIVE
+-	       || createmode == NFS4_CREATE_EXCLUSIVE4_1;
+-}
+-
+ #endif /* LINUX_NFSD_VFS_H */
+diff --git a/fs/nfsd/xdr.h b/fs/nfsd/xdr.h
+index b8cc6a4b2e0ec..852f71580bd06 100644
+--- a/fs/nfsd/xdr.h
++++ b/fs/nfsd/xdr.h
+@@ -27,14 +27,13 @@ struct nfsd_readargs {
+ 	struct svc_fh		fh;
+ 	__u32			offset;
+ 	__u32			count;
+-	int			vlen;
+ };
+ 
+ struct nfsd_writeargs {
+ 	svc_fh			fh;
+ 	__u32			offset;
+ 	__u32			len;
+-	struct kvec		first;
++	struct xdr_buf		payload;
+ };
+ 
+ struct nfsd_createargs {
+@@ -53,11 +52,6 @@ struct nfsd_renameargs {
+ 	unsigned int		tlen;
+ };
+ 
+-struct nfsd_readlinkargs {
+-	struct svc_fh		fh;
+-	char *			buffer;
+-};
+-	
+ struct nfsd_linkargs {
+ 	struct svc_fh		ffh;
+ 	struct svc_fh		tfh;
+@@ -79,7 +73,6 @@ struct nfsd_readdirargs {
+ 	struct svc_fh		fh;
+ 	__u32			cookie;
+ 	__u32			count;
+-	__be32 *		buffer;
+ };
+ 
+ struct nfsd_stat {
+@@ -101,6 +94,7 @@ struct nfsd_diropres  {
+ struct nfsd_readlinkres {
+ 	__be32			status;
+ 	int			len;
++	struct page		*page;
+ };
+ 
+ struct nfsd_readres {
+@@ -108,17 +102,20 @@ struct nfsd_readres {
+ 	struct svc_fh		fh;
+ 	unsigned long		count;
+ 	struct kstat		stat;
++	struct page		**pages;
+ };
+ 
+ struct nfsd_readdirres {
++	/* Components of the reply */
+ 	__be32			status;
+ 
+ 	int			count;
+ 
++	/* Used to encode the reply's entry list */
++	struct xdr_stream	xdr;
++	struct xdr_buf		dirlist;
+ 	struct readdir_cd	common;
+-	__be32 *		buffer;
+-	int			buflen;
+-	__be32 *		offset;
++	unsigned int		cookie_offset;
+ };
+ 
+ struct nfsd_statfsres {
+@@ -144,36 +141,37 @@ union nfsd_xdrstore {
+ #define NFS2_SVC_XDRSIZE	sizeof(union nfsd_xdrstore)
+ 
+ 
+-int nfssvc_decode_void(struct svc_rqst *, __be32 *);
+-int nfssvc_decode_fhandle(struct svc_rqst *, __be32 *);
+-int nfssvc_decode_sattrargs(struct svc_rqst *, __be32 *);
+-int nfssvc_decode_diropargs(struct svc_rqst *, __be32 *);
+-int nfssvc_decode_readargs(struct svc_rqst *, __be32 *);
+-int nfssvc_decode_writeargs(struct svc_rqst *, __be32 *);
+-int nfssvc_decode_createargs(struct svc_rqst *, __be32 *);
+-int nfssvc_decode_renameargs(struct svc_rqst *, __be32 *);
+-int nfssvc_decode_readlinkargs(struct svc_rqst *, __be32 *);
+-int nfssvc_decode_linkargs(struct svc_rqst *, __be32 *);
+-int nfssvc_decode_symlinkargs(struct svc_rqst *, __be32 *);
+-int nfssvc_decode_readdirargs(struct svc_rqst *, __be32 *);
+-int nfssvc_encode_void(struct svc_rqst *, __be32 *);
+-int nfssvc_encode_stat(struct svc_rqst *, __be32 *);
+-int nfssvc_encode_attrstat(struct svc_rqst *, __be32 *);
+-int nfssvc_encode_diropres(struct svc_rqst *, __be32 *);
+-int nfssvc_encode_readlinkres(struct svc_rqst *, __be32 *);
+-int nfssvc_encode_readres(struct svc_rqst *, __be32 *);
+-int nfssvc_encode_statfsres(struct svc_rqst *, __be32 *);
+-int nfssvc_encode_readdirres(struct svc_rqst *, __be32 *);
+-
+-int nfssvc_encode_entry(void *, const char *name,
+-			int namlen, loff_t offset, u64 ino, unsigned int);
++bool nfssvc_decode_fhandleargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_decode_sattrargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_decode_diropargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_decode_readargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_decode_writeargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_decode_createargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_decode_renameargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_decode_linkargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_decode_symlinkargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_decode_readdirargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++
++bool nfssvc_encode_statres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_encode_attrstatres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_encode_diropres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_encode_readlinkres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_encode_readres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_encode_statfsres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfssvc_encode_readdirres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++
++void nfssvc_encode_nfscookie(struct nfsd_readdirres *resp, u32 offset);
++int nfssvc_encode_entry(void *data, const char *name, int namlen,
++			loff_t offset, u64 ino, unsigned int d_type);
+ 
+ void nfssvc_release_attrstat(struct svc_rqst *rqstp);
+ void nfssvc_release_diropres(struct svc_rqst *rqstp);
+ void nfssvc_release_readres(struct svc_rqst *rqstp);
+ 
+ /* Helper functions for NFSv2 ACL code */
+-__be32 *nfs2svc_encode_fattr(struct svc_rqst *rqstp, __be32 *p, struct svc_fh *fhp, struct kstat *stat);
+-__be32 *nfs2svc_decode_fh(__be32 *p, struct svc_fh *fhp);
++bool svcxdr_decode_fhandle(struct xdr_stream *xdr, struct svc_fh *fhp);
++bool svcxdr_encode_stat(struct xdr_stream *xdr, __be32 status);
++bool svcxdr_encode_fattr(struct svc_rqst *rqstp, struct xdr_stream *xdr,
++			 const struct svc_fh *fhp, const struct kstat *stat);
+ 
+ #endif /* LINUX_NFSD_H */
+diff --git a/fs/nfsd/xdr3.h b/fs/nfsd/xdr3.h
+index ae6fa6c9cb467..03fe4e21306cb 100644
+--- a/fs/nfsd/xdr3.h
++++ b/fs/nfsd/xdr3.h
+@@ -25,14 +25,13 @@ struct nfsd3_diropargs {
+ 
+ struct nfsd3_accessargs {
+ 	struct svc_fh		fh;
+-	unsigned int		access;
++	__u32			access;
+ };
+ 
+ struct nfsd3_readargs {
+ 	struct svc_fh		fh;
+ 	__u64			offset;
+ 	__u32			count;
+-	int			vlen;
+ };
+ 
+ struct nfsd3_writeargs {
+@@ -41,7 +40,7 @@ struct nfsd3_writeargs {
+ 	__u32			count;
+ 	int			stable;
+ 	__u32			len;
+-	struct kvec		first;
++	struct xdr_buf		payload;
+ };
+ 
+ struct nfsd3_createargs {
+@@ -71,11 +70,6 @@ struct nfsd3_renameargs {
+ 	unsigned int		tlen;
+ };
+ 
+-struct nfsd3_readlinkargs {
+-	struct svc_fh		fh;
+-	char *			buffer;
+-};
+-
+ struct nfsd3_linkargs {
+ 	struct svc_fh		ffh;
+ 	struct svc_fh		tfh;
+@@ -96,10 +90,8 @@ struct nfsd3_symlinkargs {
+ struct nfsd3_readdirargs {
+ 	struct svc_fh		fh;
+ 	__u64			cookie;
+-	__u32			dircount;
+ 	__u32			count;
+ 	__be32 *		verf;
+-	__be32 *		buffer;
+ };
+ 
+ struct nfsd3_commitargs {
+@@ -110,13 +102,13 @@ struct nfsd3_commitargs {
+ 
+ struct nfsd3_getaclargs {
+ 	struct svc_fh		fh;
+-	int			mask;
++	__u32			mask;
+ };
+ 
+ struct posix_acl;
+ struct nfsd3_setaclargs {
+ 	struct svc_fh		fh;
+-	int			mask;
++	__u32			mask;
+ 	struct posix_acl	*acl_access;
+ 	struct posix_acl	*acl_default;
+ };
+@@ -145,6 +137,7 @@ struct nfsd3_readlinkres {
+ 	__be32			status;
+ 	struct svc_fh		fh;
+ 	__u32			len;
++	struct page		**pages;
+ };
+ 
+ struct nfsd3_readres {
+@@ -152,6 +145,7 @@ struct nfsd3_readres {
+ 	struct svc_fh		fh;
+ 	unsigned long		count;
+ 	__u32			eof;
++	struct page		**pages;
+ };
+ 
+ struct nfsd3_writeres {
+@@ -175,19 +169,17 @@ struct nfsd3_linkres {
+ };
+ 
+ struct nfsd3_readdirres {
++	/* Components of the reply */
+ 	__be32			status;
+ 	struct svc_fh		fh;
+-	/* Just to save kmalloc on every readdirplus entry (svc_fh is a
+-	 * little large for the stack): */
+-	struct svc_fh		scratch;
+-	int			count;
+ 	__be32			verf[2];
+ 
++	/* Used to encode the reply's entry list */
++	struct xdr_stream	xdr;
++	struct xdr_buf		dirlist;
++	struct svc_fh		scratch;
+ 	struct readdir_cd	common;
+-	__be32 *		buffer;
+-	int			buflen;
+-	__be32 *		offset;
+-	__be32 *		offset1;
++	unsigned int		cookie_offset;
+ 	struct svc_rqst *	rqstp;
+ 
+ };
+@@ -273,52 +265,50 @@ union nfsd3_xdrstore {
+ 
+ #define NFS3_SVC_XDRSIZE		sizeof(union nfsd3_xdrstore)
+ 
+-int nfs3svc_decode_voidarg(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_fhandle(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_sattrargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_diropargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_accessargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_readargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_writeargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_createargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_mkdirargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_mknodargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_renameargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_readlinkargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_linkargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_symlinkargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_readdirargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_readdirplusargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_decode_commitargs(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_voidres(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_attrstat(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_wccstat(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_diropres(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_accessres(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_readlinkres(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_readres(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_writeres(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_createres(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_renameres(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_linkres(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_readdirres(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_fsstatres(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_fsinfores(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_pathconfres(struct svc_rqst *, __be32 *);
+-int nfs3svc_encode_commitres(struct svc_rqst *, __be32 *);
++bool nfs3svc_decode_fhandleargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_sattrargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_diropargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_accessargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_readargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_writeargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_createargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_mkdirargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_mknodargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_renameargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_linkargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_symlinkargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_readdirargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_readdirplusargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_decode_commitargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++
++bool nfs3svc_encode_getattrres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_wccstat(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_lookupres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_accessres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_readlinkres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_readres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_writeres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_createres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_renameres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_linkres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_readdirres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_fsstatres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_fsinfores(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_pathconfres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs3svc_encode_commitres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
+ 
+ void nfs3svc_release_fhandle(struct svc_rqst *);
+ void nfs3svc_release_fhandle2(struct svc_rqst *);
+-int nfs3svc_encode_entry(void *, const char *name,
+-				int namlen, loff_t offset, u64 ino,
+-				unsigned int);
+-int nfs3svc_encode_entry_plus(void *, const char *name,
+-				int namlen, loff_t offset, u64 ino,
+-				unsigned int);
+-/* Helper functions for NFSv3 ACL code */
+-__be32 *nfs3svc_encode_post_op_attr(struct svc_rqst *rqstp, __be32 *p,
+-				struct svc_fh *fhp);
+-__be32 *nfs3svc_decode_fh(__be32 *p, struct svc_fh *fhp);
+ 
++void nfs3svc_encode_cookie3(struct nfsd3_readdirres *resp, u64 offset);
++int nfs3svc_encode_entry3(void *data, const char *name, int namlen,
++			  loff_t offset, u64 ino, unsigned int d_type);
++int nfs3svc_encode_entryplus3(void *data, const char *name, int namlen,
++			      loff_t offset, u64 ino, unsigned int d_type);
++/* Helper functions for NFSv3 ACL code */
++bool svcxdr_decode_nfs_fh3(struct xdr_stream *xdr, struct svc_fh *fhp);
++bool svcxdr_encode_nfsstat3(struct xdr_stream *xdr, __be32 status);
++bool svcxdr_encode_post_op_attr(struct svc_rqst *rqstp, struct xdr_stream *xdr,
++				const struct svc_fh *fhp);
+ 
+ #endif /* _LINUX_NFSD_XDR3_H */
+diff --git a/fs/nfsd/xdr4.h b/fs/nfsd/xdr4.h
+index 679d40af1bbb1..a034b9b62137c 100644
+--- a/fs/nfsd/xdr4.h
++++ b/fs/nfsd/xdr4.h
+@@ -76,12 +76,7 @@ static inline bool nfsd4_has_session(struct nfsd4_compound_state *cs)
+ 
+ struct nfsd4_change_info {
+ 	u32		atomic;
+-	bool		change_supported;
+-	u32		before_ctime_sec;
+-	u32		before_ctime_nsec;
+ 	u64		before_change;
+-	u32		after_ctime_sec;
+-	u32		after_ctime_nsec;
+ 	u64		after_change;
+ };
+ 
+@@ -252,7 +247,8 @@ struct nfsd4_listxattrs {
+ 
+ struct nfsd4_open {
+ 	u32		op_claim_type;      /* request */
+-	struct xdr_netobj op_fname;	    /* request - everything but CLAIM_PREV */
++	u32		op_fnamelen;
++	char *		op_fname;	    /* request - everything but CLAIM_PREV */
+ 	u32		op_delegate_type;   /* request - CLAIM_PREV only */
+ 	stateid_t       op_delegate_stateid; /* request - response */
+ 	u32		op_why_no_deleg;    /* response - DELEG_NONE_EXT only */
+@@ -277,11 +273,13 @@ struct nfsd4_open {
+ 	bool		op_truncate;        /* used during processing */
+ 	bool		op_created;         /* used during processing */
+ 	struct nfs4_openowner *op_openowner; /* used during processing */
++	struct file	*op_filp;           /* used during processing */
+ 	struct nfs4_file *op_file;          /* used during processing */
+ 	struct nfs4_ol_stateid *op_stp;	    /* used during processing */
+ 	struct nfs4_clnt_odstate *op_odstate; /* used during processing */
+ 	struct nfs4_acl *op_acl;
+ 	struct xdr_netobj op_label;
++	struct svc_rqst *op_rqstp;
+ };
+ 
+ struct nfsd4_open_confirm {
+@@ -305,9 +303,10 @@ struct nfsd4_read {
+ 	u32			rd_length;          /* request */
+ 	int			rd_vlen;
+ 	struct nfsd_file	*rd_nf;
+-	
++
+ 	struct svc_rqst		*rd_rqstp;          /* response */
+-	struct svc_fh		*rd_fhp;             /* response */
++	struct svc_fh		*rd_fhp;            /* response */
++	u32			rd_eof;             /* response */
+ };
+ 
+ struct nfsd4_readdir {
+@@ -385,13 +384,6 @@ struct nfsd4_setclientid_confirm {
+ 	nfs4_verifier	sc_confirm;
+ };
+ 
+-struct nfsd4_saved_compoundargs {
+-	__be32 *p;
+-	__be32 *end;
+-	int pagelen;
+-	struct page **pagelist;
+-};
+-
+ struct nfsd4_test_stateid_id {
+ 	__be32			ts_id_status;
+ 	stateid_t		ts_id_stateid;
+@@ -419,8 +411,7 @@ struct nfsd4_write {
+ 	u64		wr_offset;          /* request */
+ 	u32		wr_stable_how;      /* request */
+ 	u32		wr_buflen;          /* request */
+-	struct kvec	wr_head;
+-	struct page **	wr_pagelist;        /* request */
++	struct xdr_buf	wr_payload;         /* request */
+ 
+ 	u32		wr_bytes_written;   /* response */
+ 	u32		wr_how_written;     /* response */
+@@ -433,7 +424,7 @@ struct nfsd4_exchange_id {
+ 	u32		flags;
+ 	clientid_t	clientid;
+ 	u32		seqid;
+-	int		spa_how;
++	u32		spa_how;
+ 	u32             spo_must_enforce[3];
+ 	u32             spo_must_allow[3];
+ 	struct xdr_netobj nii_domain;
+@@ -543,6 +534,13 @@ struct nfsd42_write_res {
+ 	stateid_t		cb_stateid;
+ };
+ 
++struct nfsd4_cb_offload {
++	struct nfsd4_callback	co_cb;
++	struct nfsd42_write_res	co_res;
++	__be32			co_nfserr;
++	struct knfsd_fh		co_fh;
++};
++
+ struct nfsd4_copy {
+ 	/* request */
+ 	stateid_t		cp_src_stateid;
+@@ -550,18 +548,16 @@ struct nfsd4_copy {
+ 	u64			cp_src_pos;
+ 	u64			cp_dst_pos;
+ 	u64			cp_count;
+-	struct nl4_server	cp_src;
+-	bool			cp_intra;
++	struct nl4_server	*cp_src;
+ 
+-	/* both */
+-	bool		cp_synchronous;
++	unsigned long		cp_flags;
++#define NFSD4_COPY_F_STOPPED		(0)
++#define NFSD4_COPY_F_INTRA		(1)
++#define NFSD4_COPY_F_SYNCHRONOUS	(2)
++#define NFSD4_COPY_F_COMMITTED		(3)
+ 
+ 	/* response */
+ 	struct nfsd42_write_res	cp_res;
+-
+-	/* for cb_offload */
+-	struct nfsd4_callback	cp_cb;
+-	__be32			nfserr;
+ 	struct knfsd_fh		fh;
+ 
+ 	struct nfs4_client      *cp_clp;
+@@ -574,13 +570,34 @@ struct nfsd4_copy {
+ 	struct list_head	copies;
+ 	struct task_struct	*copy_task;
+ 	refcount_t		refcount;
+-	bool			stopped;
+ 
+-	struct vfsmount		*ss_mnt;
++	struct nfsd4_ssc_umount_item *ss_nsui;
+ 	struct nfs_fh		c_fh;
+ 	nfs4_stateid		stateid;
+ };
+-extern bool inter_copy_offload_enable;
++
++static inline void nfsd4_copy_set_sync(struct nfsd4_copy *copy, bool sync)
++{
++	if (sync)
++		set_bit(NFSD4_COPY_F_SYNCHRONOUS, &copy->cp_flags);
++	else
++		clear_bit(NFSD4_COPY_F_SYNCHRONOUS, &copy->cp_flags);
++}
++
++static inline bool nfsd4_copy_is_sync(const struct nfsd4_copy *copy)
++{
++	return test_bit(NFSD4_COPY_F_SYNCHRONOUS, &copy->cp_flags);
++}
++
++static inline bool nfsd4_copy_is_async(const struct nfsd4_copy *copy)
++{
++	return !test_bit(NFSD4_COPY_F_SYNCHRONOUS, &copy->cp_flags);
++}
++
++static inline bool nfsd4_ssc_is_inter(const struct nfsd4_copy *copy)
++{
++	return !test_bit(NFSD4_COPY_F_INTRA, &copy->cp_flags);
++}
+ 
+ struct nfsd4_seek {
+ 	/* request */
+@@ -605,19 +622,20 @@ struct nfsd4_offload_status {
+ struct nfsd4_copy_notify {
+ 	/* request */
+ 	stateid_t		cpn_src_stateid;
+-	struct nl4_server	cpn_dst;
++	struct nl4_server	*cpn_dst;
+ 
+ 	/* response */
+ 	stateid_t		cpn_cnr_stateid;
+ 	u64			cpn_sec;
+ 	u32			cpn_nsec;
+-	struct nl4_server	cpn_src;
++	struct nl4_server	*cpn_src;
+ };
+ 
+ struct nfsd4_op {
+-	int					opnum;
+-	const struct nfsd4_operation *		opdesc;
++	u32					opnum;
+ 	__be32					status;
++	const struct nfsd4_operation		*opdesc;
++	struct nfs4_replay			*replay;
+ 	union nfsd4_op_u {
+ 		struct nfsd4_access		access;
+ 		struct nfsd4_close		close;
+@@ -681,7 +699,6 @@ struct nfsd4_op {
+ 		struct nfsd4_listxattrs		listxattrs;
+ 		struct nfsd4_removexattr	removexattr;
+ 	} u;
+-	struct nfs4_replay *			replay;
+ };
+ 
+ bool nfsd4_cache_this_op(struct nfsd4_op *);
+@@ -696,35 +713,29 @@ struct svcxdr_tmpbuf {
+ 
+ struct nfsd4_compoundargs {
+ 	/* scratch variables for XDR decode */
+-	__be32 *			p;
+-	__be32 *			end;
+-	struct page **			pagelist;
+-	int				pagelen;
+-	bool				tail;
+-	__be32				tmp[8];
+-	__be32 *			tmpp;
++	struct xdr_stream		*xdr;
+ 	struct svcxdr_tmpbuf		*to_free;
+-
+ 	struct svc_rqst			*rqstp;
+ 
+-	u32				taglen;
+ 	char *				tag;
++	u32				taglen;
+ 	u32				minorversion;
++	u32				client_opcnt;
+ 	u32				opcnt;
+ 	struct nfsd4_op			*ops;
+ 	struct nfsd4_op			iops[8];
+-	int				cachetype;
+ };
+ 
+ struct nfsd4_compoundres {
+ 	/* scratch variables for XDR encode */
+-	struct xdr_stream		xdr;
++	struct xdr_stream		*xdr;
+ 	struct svc_rqst *		rqstp;
+ 
+-	u32				taglen;
++	__be32				*statusp;
+ 	char *				tag;
++	u32				taglen;
+ 	u32				opcnt;
+-	__be32 *			tagp; /* tag, opcount encode location */
++
+ 	struct nfsd4_compound_state	cstate;
+ };
+ 
+@@ -767,24 +778,16 @@ static inline void
+ set_change_info(struct nfsd4_change_info *cinfo, struct svc_fh *fhp)
+ {
+ 	BUG_ON(!fhp->fh_pre_saved);
+-	cinfo->atomic = (u32)fhp->fh_post_saved;
+-	cinfo->change_supported = IS_I_VERSION(d_inode(fhp->fh_dentry));
++	cinfo->atomic = (u32)(fhp->fh_post_saved && !fhp->fh_no_atomic_attr);
+ 
+ 	cinfo->before_change = fhp->fh_pre_change;
+ 	cinfo->after_change = fhp->fh_post_change;
+-	cinfo->before_ctime_sec = fhp->fh_pre_ctime.tv_sec;
+-	cinfo->before_ctime_nsec = fhp->fh_pre_ctime.tv_nsec;
+-	cinfo->after_ctime_sec = fhp->fh_post_attr.ctime.tv_sec;
+-	cinfo->after_ctime_nsec = fhp->fh_post_attr.ctime.tv_nsec;
+-
+ }
+ 
+ 
+ bool nfsd4_mach_creds_match(struct nfs4_client *cl, struct svc_rqst *rqstp);
+-int nfs4svc_decode_voidarg(struct svc_rqst *, __be32 *);
+-int nfs4svc_encode_voidres(struct svc_rqst *, __be32 *);
+-int nfs4svc_decode_compoundargs(struct svc_rqst *, __be32 *);
+-int nfs4svc_encode_compoundres(struct svc_rqst *, __be32 *);
++bool nfs4svc_decode_compoundargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool nfs4svc_encode_compoundres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
+ __be32 nfsd4_check_resp_size(struct nfsd4_compoundres *, u32);
+ void nfsd4_encode_operation(struct nfsd4_compoundres *, struct nfsd4_op *);
+ void nfsd4_encode_replay(struct xdr_stream *xdr, struct nfsd4_op *op);
+@@ -885,13 +888,19 @@ struct nfsd4_operation {
+ 	u32 op_flags;
+ 	char *op_name;
+ 	/* Try to get response size before operation */
+-	u32 (*op_rsize_bop)(struct svc_rqst *, struct nfsd4_op *);
++	u32 (*op_rsize_bop)(const struct svc_rqst *rqstp,
++			const struct nfsd4_op *op);
+ 	void (*op_get_currentstateid)(struct nfsd4_compound_state *,
+ 			union nfsd4_op_u *);
+ 	void (*op_set_currentstateid)(struct nfsd4_compound_state *,
+ 			union nfsd4_op_u *);
+ };
+ 
++struct nfsd4_cb_recall_any {
++	struct nfsd4_callback	ra_cb;
++	u32			ra_keep;
++	u32			ra_bmval[1];
++};
+ 
+ #endif
+ 
+diff --git a/fs/nfsd/xdr4cb.h b/fs/nfsd/xdr4cb.h
+index 547cf07cf4e08..0d39af1b00a0f 100644
+--- a/fs/nfsd/xdr4cb.h
++++ b/fs/nfsd/xdr4cb.h
+@@ -48,3 +48,9 @@
+ #define NFS4_dec_cb_offload_sz		(cb_compound_dec_hdr_sz  +      \
+ 					cb_sequence_dec_sz +            \
+ 					op_dec_sz)
++#define NFS4_enc_cb_recall_any_sz	(cb_compound_enc_hdr_sz +       \
++					cb_sequence_enc_sz +            \
++					1 + 1 + 1)
++#define NFS4_dec_cb_recall_any_sz	(cb_compound_dec_hdr_sz  +      \
++					cb_sequence_dec_sz +            \
++					op_dec_sz)
+diff --git a/fs/notify/dnotify/dnotify.c b/fs/notify/dnotify/dnotify.c
+index e45ca6ecba959..fa81c59a2ad41 100644
+--- a/fs/notify/dnotify/dnotify.c
++++ b/fs/notify/dnotify/dnotify.c
+@@ -150,7 +150,7 @@ void dnotify_flush(struct file *filp, fl_owner_t id)
+ 		return;
+ 	dn_mark = container_of(fsn_mark, struct dnotify_mark, fsn_mark);
+ 
+-	mutex_lock(&dnotify_group->mark_mutex);
++	fsnotify_group_lock(dnotify_group);
+ 
+ 	spin_lock(&fsn_mark->lock);
+ 	prev = &dn_mark->dn;
+@@ -173,7 +173,7 @@ void dnotify_flush(struct file *filp, fl_owner_t id)
+ 		free = true;
+ 	}
+ 
+-	mutex_unlock(&dnotify_group->mark_mutex);
++	fsnotify_group_unlock(dnotify_group);
+ 
+ 	if (free)
+ 		fsnotify_free_mark(fsn_mark);
+@@ -196,7 +196,7 @@ static __u32 convert_arg(unsigned long arg)
+ 	if (arg & DN_ATTRIB)
+ 		new_mask |= FS_ATTRIB;
+ 	if (arg & DN_RENAME)
+-		new_mask |= FS_DN_RENAME;
++		new_mask |= FS_RENAME;
+ 	if (arg & DN_CREATE)
+ 		new_mask |= (FS_CREATE | FS_MOVED_TO);
+ 
+@@ -306,7 +306,7 @@ int fcntl_dirnotify(int fd, struct file *filp, unsigned long arg)
+ 	new_dn_mark->dn = NULL;
+ 
+ 	/* this is needed to prevent the fcntl/close race described below */
+-	mutex_lock(&dnotify_group->mark_mutex);
++	fsnotify_group_lock(dnotify_group);
+ 
+ 	/* add the new_fsn_mark or find an old one. */
+ 	fsn_mark = fsnotify_find_mark(&inode->i_fsnotify_marks, dnotify_group);
+@@ -316,7 +316,7 @@ int fcntl_dirnotify(int fd, struct file *filp, unsigned long arg)
+ 	} else {
+ 		error = fsnotify_add_inode_mark_locked(new_fsn_mark, inode, 0);
+ 		if (error) {
+-			mutex_unlock(&dnotify_group->mark_mutex);
++			fsnotify_group_unlock(dnotify_group);
+ 			goto out_err;
+ 		}
+ 		spin_lock(&new_fsn_mark->lock);
+@@ -327,7 +327,7 @@ int fcntl_dirnotify(int fd, struct file *filp, unsigned long arg)
+ 	}
+ 
+ 	rcu_read_lock();
+-	f = fcheck(fd);
++	f = lookup_fd_rcu(fd);
+ 	rcu_read_unlock();
+ 
+ 	/* if (f != filp) means that we lost a race and another task/thread
+@@ -365,7 +365,7 @@ int fcntl_dirnotify(int fd, struct file *filp, unsigned long arg)
+ 
+ 	if (destroy)
+ 		fsnotify_detach_mark(fsn_mark);
+-	mutex_unlock(&dnotify_group->mark_mutex);
++	fsnotify_group_unlock(dnotify_group);
+ 	if (destroy)
+ 		fsnotify_free_mark(fsn_mark);
+ 	fsnotify_put_mark(fsn_mark);
+@@ -383,7 +383,8 @@ static int __init dnotify_init(void)
+ 					  SLAB_PANIC|SLAB_ACCOUNT);
+ 	dnotify_mark_cache = KMEM_CACHE(dnotify_mark, SLAB_PANIC|SLAB_ACCOUNT);
+ 
+-	dnotify_group = fsnotify_alloc_group(&dnotify_fsnotify_ops);
++	dnotify_group = fsnotify_alloc_group(&dnotify_fsnotify_ops,
++					     FSNOTIFY_GROUP_NOFS);
+ 	if (IS_ERR(dnotify_group))
+ 		panic("unable to allocate fsnotify group for dnotify\n");
+ 	return 0;
+diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
+index c3af99e94f1d1..a2a15bc4df280 100644
+--- a/fs/notify/fanotify/fanotify.c
++++ b/fs/notify/fanotify/fanotify.c
+@@ -14,20 +14,33 @@
+ #include <linux/audit.h>
+ #include <linux/sched/mm.h>
+ #include <linux/statfs.h>
++#include <linux/stringhash.h>
+ 
+ #include "fanotify.h"
+ 
+-static bool fanotify_path_equal(struct path *p1, struct path *p2)
++static bool fanotify_path_equal(const struct path *p1, const struct path *p2)
+ {
+ 	return p1->mnt == p2->mnt && p1->dentry == p2->dentry;
+ }
+ 
++static unsigned int fanotify_hash_path(const struct path *path)
++{
++	return hash_ptr(path->dentry, FANOTIFY_EVENT_HASH_BITS) ^
++		hash_ptr(path->mnt, FANOTIFY_EVENT_HASH_BITS);
++}
++
+ static inline bool fanotify_fsid_equal(__kernel_fsid_t *fsid1,
+ 				       __kernel_fsid_t *fsid2)
+ {
+ 	return fsid1->val[0] == fsid2->val[0] && fsid1->val[1] == fsid2->val[1];
+ }
+ 
++static unsigned int fanotify_hash_fsid(__kernel_fsid_t *fsid)
++{
++	return hash_32(fsid->val[0], FANOTIFY_EVENT_HASH_BITS) ^
++		hash_32(fsid->val[1], FANOTIFY_EVENT_HASH_BITS);
++}
++
+ static bool fanotify_fh_equal(struct fanotify_fh *fh1,
+ 			      struct fanotify_fh *fh2)
+ {
+@@ -38,6 +51,16 @@ static bool fanotify_fh_equal(struct fanotify_fh *fh1,
+ 		!memcmp(fanotify_fh_buf(fh1), fanotify_fh_buf(fh2), fh1->len);
+ }
+ 
++static unsigned int fanotify_hash_fh(struct fanotify_fh *fh)
++{
++	long salt = (long)fh->type | (long)fh->len << 8;
++
++	/*
++	 * full_name_hash() works long by long, so it handles fh buf optimally.
++	 */
++	return full_name_hash((void *)salt, fanotify_fh_buf(fh), fh->len);
++}
++
+ static bool fanotify_fid_event_equal(struct fanotify_fid_event *ffe1,
+ 				     struct fanotify_fid_event *ffe2)
+ {
+@@ -53,8 +76,10 @@ static bool fanotify_info_equal(struct fanotify_info *info1,
+ 				struct fanotify_info *info2)
+ {
+ 	if (info1->dir_fh_totlen != info2->dir_fh_totlen ||
++	    info1->dir2_fh_totlen != info2->dir2_fh_totlen ||
+ 	    info1->file_fh_totlen != info2->file_fh_totlen ||
+-	    info1->name_len != info2->name_len)
++	    info1->name_len != info2->name_len ||
++	    info1->name2_len != info2->name2_len)
+ 		return false;
+ 
+ 	if (info1->dir_fh_totlen &&
+@@ -62,14 +87,24 @@ static bool fanotify_info_equal(struct fanotify_info *info1,
+ 			       fanotify_info_dir_fh(info2)))
+ 		return false;
+ 
++	if (info1->dir2_fh_totlen &&
++	    !fanotify_fh_equal(fanotify_info_dir2_fh(info1),
++			       fanotify_info_dir2_fh(info2)))
++		return false;
++
+ 	if (info1->file_fh_totlen &&
+ 	    !fanotify_fh_equal(fanotify_info_file_fh(info1),
+ 			       fanotify_info_file_fh(info2)))
+ 		return false;
+ 
+-	return !info1->name_len ||
+-		!memcmp(fanotify_info_name(info1), fanotify_info_name(info2),
+-			info1->name_len);
++	if (info1->name_len &&
++	    memcmp(fanotify_info_name(info1), fanotify_info_name(info2),
++		   info1->name_len))
++		return false;
++
++	return !info1->name2_len ||
++		!memcmp(fanotify_info_name2(info1), fanotify_info_name2(info2),
++			info1->name2_len);
+ }
+ 
+ static bool fanotify_name_event_equal(struct fanotify_name_event *fne1,
+@@ -88,16 +123,22 @@ static bool fanotify_name_event_equal(struct fanotify_name_event *fne1,
+ 	return fanotify_info_equal(info1, info2);
+ }
+ 
+-static bool fanotify_should_merge(struct fsnotify_event *old_fsn,
+-				  struct fsnotify_event *new_fsn)
++static bool fanotify_error_event_equal(struct fanotify_error_event *fee1,
++				       struct fanotify_error_event *fee2)
+ {
+-	struct fanotify_event *old, *new;
++	/* Error events against the same file system are always merged. */
++	if (!fanotify_fsid_equal(&fee1->fsid, &fee2->fsid))
++		return false;
+ 
+-	pr_debug("%s: old=%p new=%p\n", __func__, old_fsn, new_fsn);
+-	old = FANOTIFY_E(old_fsn);
+-	new = FANOTIFY_E(new_fsn);
++	return true;
++}
+ 
+-	if (old_fsn->objectid != new_fsn->objectid ||
++static bool fanotify_should_merge(struct fanotify_event *old,
++				  struct fanotify_event *new)
++{
++	pr_debug("%s: old=%p new=%p\n", __func__, old, new);
++
++	if (old->hash != new->hash ||
+ 	    old->type != new->type || old->pid != new->pid)
+ 		return false;
+ 
+@@ -112,6 +153,13 @@ static bool fanotify_should_merge(struct fsnotify_event *old_fsn,
+ 	if ((old->mask & FS_ISDIR) != (new->mask & FS_ISDIR))
+ 		return false;
+ 
++	/*
++	 * FAN_RENAME event is reported with special info record types,
++	 * so we cannot merge it with other events.
++	 */
++	if ((old->mask & FAN_RENAME) != (new->mask & FAN_RENAME))
++		return false;
++
+ 	switch (old->type) {
+ 	case FANOTIFY_EVENT_TYPE_PATH:
+ 		return fanotify_path_equal(fanotify_event_path(old),
+@@ -122,6 +170,9 @@ static bool fanotify_should_merge(struct fsnotify_event *old_fsn,
+ 	case FANOTIFY_EVENT_TYPE_FID_NAME:
+ 		return fanotify_name_event_equal(FANOTIFY_NE(old),
+ 						 FANOTIFY_NE(new));
++	case FANOTIFY_EVENT_TYPE_FS_ERROR:
++		return fanotify_error_event_equal(FANOTIFY_EE(old),
++						  FANOTIFY_EE(new));
+ 	default:
+ 		WARN_ON_ONCE(1);
+ 	}
+@@ -133,14 +184,16 @@ static bool fanotify_should_merge(struct fsnotify_event *old_fsn,
+ #define FANOTIFY_MAX_MERGE_EVENTS 128
+ 
+ /* and the list better be locked by something too! */
+-static int fanotify_merge(struct list_head *list, struct fsnotify_event *event)
++static int fanotify_merge(struct fsnotify_group *group,
++			  struct fsnotify_event *event)
+ {
+-	struct fsnotify_event *test_event;
+-	struct fanotify_event *new;
++	struct fanotify_event *old, *new = FANOTIFY_E(event);
++	unsigned int bucket = fanotify_event_hash_bucket(group, new);
++	struct hlist_head *hlist = &group->fanotify_data.merge_hash[bucket];
+ 	int i = 0;
+ 
+-	pr_debug("%s: list=%p event=%p\n", __func__, list, event);
+-	new = FANOTIFY_E(event);
++	pr_debug("%s: group=%p event=%p bucket=%u\n", __func__,
++		 group, event, bucket);
+ 
+ 	/*
+ 	 * Don't merge a permission event with any other event so that we know
+@@ -150,11 +203,15 @@ static int fanotify_merge(struct list_head *list, struct fsnotify_event *event)
+ 	if (fanotify_is_perm_event(new->mask))
+ 		return 0;
+ 
+-	list_for_each_entry_reverse(test_event, list, list) {
++	hlist_for_each_entry(old, hlist, merge_list) {
+ 		if (++i > FANOTIFY_MAX_MERGE_EVENTS)
+ 			break;
+-		if (fanotify_should_merge(test_event, event)) {
+-			FANOTIFY_E(test_event)->mask |= new->mask;
++		if (fanotify_should_merge(old, new)) {
++			old->mask |= new->mask;
++
++			if (fanotify_is_error_event(old->mask))
++				FANOTIFY_EE(old)->err_count++;
++
+ 			return 1;
+ 		}
+ 	}
+@@ -190,8 +247,11 @@ static int fanotify_get_response(struct fsnotify_group *group,
+ 			return ret;
+ 		}
+ 		/* Event not yet reported? Just remove it. */
+-		if (event->state == FAN_EVENT_INIT)
++		if (event->state == FAN_EVENT_INIT) {
+ 			fsnotify_remove_queued_event(group, &event->fae.fse);
++			/* Permission events are not supposed to be hashed */
++			WARN_ON_ONCE(!hlist_unhashed(&event->fae.merge_list));
++		}
+ 		/*
+ 		 * Event may be also answered in case signal delivery raced
+ 		 * with wakeup. In that case we have nothing to do besides
+@@ -231,15 +291,17 @@ static int fanotify_get_response(struct fsnotify_group *group,
+  */
+ static u32 fanotify_group_event_mask(struct fsnotify_group *group,
+ 				     struct fsnotify_iter_info *iter_info,
+-				     u32 event_mask, const void *data,
+-				     int data_type, struct inode *dir)
++				     u32 *match_mask, u32 event_mask,
++				     const void *data, int data_type,
++				     struct inode *dir)
+ {
+-	__u32 marks_mask = 0, marks_ignored_mask = 0;
++	__u32 marks_mask = 0, marks_ignore_mask = 0;
+ 	__u32 test_mask, user_mask = FANOTIFY_OUTGOING_EVENTS |
+ 				     FANOTIFY_EVENT_FLAGS;
+ 	const struct path *path = fsnotify_data_path(data, data_type);
+ 	unsigned int fid_mode = FAN_GROUP_FLAG(group, FANOTIFY_FID_BITS);
+ 	struct fsnotify_mark *mark;
++	bool ondir = event_mask & FAN_ONDIR;
+ 	int type;
+ 
+ 	pr_debug("%s: report_mask=%x mask=%x data=%p data_type=%d\n",
+@@ -254,37 +316,30 @@ static u32 fanotify_group_event_mask(struct fsnotify_group *group,
+ 			return 0;
+ 	} else if (!(fid_mode & FAN_REPORT_FID)) {
+ 		/* Do we have a directory inode to report? */
+-		if (!dir && !(event_mask & FS_ISDIR))
++		if (!dir && !ondir)
+ 			return 0;
+ 	}
+ 
+-	fsnotify_foreach_obj_type(type) {
+-		if (!fsnotify_iter_should_report_type(iter_info, type))
+-			continue;
+-		mark = iter_info->marks[type];
+-
+-		/* Apply ignore mask regardless of ISDIR and ON_CHILD flags */
+-		marks_ignored_mask |= mark->ignored_mask;
+-
++	fsnotify_foreach_iter_mark_type(iter_info, mark, type) {
+ 		/*
+-		 * If the event is on dir and this mark doesn't care about
+-		 * events on dir, don't send it!
++		 * Apply ignore mask depending on event flags in ignore mask.
+ 		 */
+-		if (event_mask & FS_ISDIR && !(mark->mask & FS_ISDIR))
+-			continue;
++		marks_ignore_mask |=
++			fsnotify_effective_ignore_mask(mark, ondir, type);
+ 
+ 		/*
+-		 * If the event is on a child and this mark is on a parent not
+-		 * watching children, don't send it!
++		 * Send the event depending on event flags in mark mask.
+ 		 */
+-		if (type == FSNOTIFY_OBJ_TYPE_PARENT &&
+-		    !(mark->mask & FS_EVENT_ON_CHILD))
++		if (!fsnotify_mask_applicable(mark->mask, ondir, type))
+ 			continue;
+ 
+ 		marks_mask |= mark->mask;
++
++		/* Record the mark types of this group that matched the event */
++		*match_mask |= 1U << type;
+ 	}
+ 
+-	test_mask = event_mask & marks_mask & ~marks_ignored_mask;
++	test_mask = event_mask & marks_mask & ~marks_ignore_mask;
+ 
+ 	/*
+ 	 * For dirent modification events (create/delete/move) that do not carry
+@@ -319,13 +374,23 @@ static u32 fanotify_group_event_mask(struct fsnotify_group *group,
+ static int fanotify_encode_fh_len(struct inode *inode)
+ {
+ 	int dwords = 0;
++	int fh_len;
+ 
+ 	if (!inode)
+ 		return 0;
+ 
+ 	exportfs_encode_inode_fh(inode, NULL, &dwords, NULL);
++	fh_len = dwords << 2;
+ 
+-	return dwords << 2;
++	/*
++	 * struct fanotify_error_event might be preallocated and is
++	 * limited to MAX_HANDLE_SZ.  This should never happen, but
++	 * safeguard by forcing an invalid file handle.
++	 */
++	if (WARN_ON_ONCE(fh_len > MAX_HANDLE_SZ))
++		return 0;
++
++	return fh_len;
+ }
+ 
+ /*
+@@ -335,7 +400,8 @@ static int fanotify_encode_fh_len(struct inode *inode)
+  * Return 0 on failure to encode.
+  */
+ static int fanotify_encode_fh(struct fanotify_fh *fh, struct inode *inode,
+-			      unsigned int fh_len, gfp_t gfp)
++			      unsigned int fh_len, unsigned int *hash,
++			      gfp_t gfp)
+ {
+ 	int dwords, type = 0;
+ 	char *ext_buf = NULL;
+@@ -345,15 +411,21 @@ static int fanotify_encode_fh(struct fanotify_fh *fh, struct inode *inode,
+ 	fh->type = FILEID_ROOT;
+ 	fh->len = 0;
+ 	fh->flags = 0;
++
++	/*
++	 * Invalid FHs are used by FAN_FS_ERROR for errors not
++	 * linked to any inode. The f_handle won't be reported
++	 * back to userspace.
++	 */
+ 	if (!inode)
+-		return 0;
++		goto out;
+ 
+ 	/*
+ 	 * !gpf means preallocated variable size fh, but fh_len could
+ 	 * be zero in that case if encoding fh len failed.
+ 	 */
+ 	err = -ENOENT;
+-	if (fh_len < 4 || WARN_ON_ONCE(fh_len % 4))
++	if (fh_len < 4 || WARN_ON_ONCE(fh_len % 4) || fh_len > MAX_HANDLE_SZ)
+ 		goto out_err;
+ 
+ 	/* No external buffer in a variable size allocated fh */
+@@ -378,6 +450,14 @@ static int fanotify_encode_fh(struct fanotify_fh *fh, struct inode *inode,
+ 	fh->type = type;
+ 	fh->len = fh_len;
+ 
++out:
++	/*
++	 * Mix fh into event merge key.  Hash might be NULL in case of
++	 * unhashed FID events (i.e. FAN_FS_ERROR).
++	 */
++	if (hash)
++		*hash ^= fanotify_hash_fh(fh);
++
+ 	return FANOTIFY_FH_HDR_LEN + fh_len;
+ 
+ out_err:
+@@ -392,17 +472,41 @@ static int fanotify_encode_fh(struct fanotify_fh *fh, struct inode *inode,
+ }
+ 
+ /*
+- * The inode to use as identifier when reporting fid depends on the event.
+- * Report the modified directory inode on dirent modification events.
+- * Report the "victim" inode otherwise.
++ * FAN_REPORT_FID is ambiguous in that it reports the fid of the child for
++ * some events and the fid of the parent for create/delete/move events.
++ *
++ * With the FAN_REPORT_TARGET_FID flag, the fid of the child is reported
++ * also in create/delete/move events in addition to the fid of the parent
++ * and the name of the child.
++ */
++static inline bool fanotify_report_child_fid(unsigned int fid_mode, u32 mask)
++{
++	if (mask & ALL_FSNOTIFY_DIRENT_EVENTS)
++		return (fid_mode & FAN_REPORT_TARGET_FID);
++
++	return (fid_mode & FAN_REPORT_FID) && !(mask & FAN_ONDIR);
++}
++
++/*
++ * The inode to use as identifier when reporting fid depends on the event
++ * and the group flags.
++ *
++ * With the group flag FAN_REPORT_TARGET_FID, always report the child fid.
++ *
++ * Without the group flag FAN_REPORT_TARGET_FID, report the modified directory
++ * fid on dirent events and the child fid otherwise.
++ *
+  * For example:
+- * FS_ATTRIB reports the child inode even if reported on a watched parent.
+- * FS_CREATE reports the modified dir inode and not the created inode.
++ * FS_ATTRIB reports the child fid even if reported on a watched parent.
++ * FS_CREATE reports the modified dir fid without FAN_REPORT_TARGET_FID.
++ *       and reports the created child fid with FAN_REPORT_TARGET_FID.
+  */
+ static struct inode *fanotify_fid_inode(u32 event_mask, const void *data,
+-					int data_type, struct inode *dir)
++					int data_type, struct inode *dir,
++					unsigned int fid_mode)
+ {
+-	if (event_mask & ALL_FSNOTIFY_DIRENT_EVENTS)
++	if ((event_mask & ALL_FSNOTIFY_DIRENT_EVENTS) &&
++	    !(fid_mode & FAN_REPORT_TARGET_FID))
+ 		return dir;
+ 
+ 	return fsnotify_data_inode(data, data_type);
+@@ -424,13 +528,14 @@ static struct inode *fanotify_dfid_inode(u32 event_mask, const void *data,
+ 	if (event_mask & ALL_FSNOTIFY_DIRENT_EVENTS)
+ 		return dir;
+ 
+-	if (S_ISDIR(inode->i_mode))
++	if (inode && S_ISDIR(inode->i_mode))
+ 		return inode;
+ 
+ 	return dir;
+ }
+ 
+ static struct fanotify_event *fanotify_alloc_path_event(const struct path *path,
++							unsigned int *hash,
+ 							gfp_t gfp)
+ {
+ 	struct fanotify_path_event *pevent;
+@@ -441,6 +546,7 @@ static struct fanotify_event *fanotify_alloc_path_event(const struct path *path,
+ 
+ 	pevent->fae.type = FANOTIFY_EVENT_TYPE_PATH;
+ 	pevent->path = *path;
++	*hash ^= fanotify_hash_path(path);
+ 	path_get(path);
+ 
+ 	return &pevent->fae;
+@@ -466,6 +572,7 @@ static struct fanotify_event *fanotify_alloc_perm_event(const struct path *path,
+ 
+ static struct fanotify_event *fanotify_alloc_fid_event(struct inode *id,
+ 						       __kernel_fsid_t *fsid,
++						       unsigned int *hash,
+ 						       gfp_t gfp)
+ {
+ 	struct fanotify_fid_event *ffe;
+@@ -476,78 +583,153 @@ static struct fanotify_event *fanotify_alloc_fid_event(struct inode *id,
+ 
+ 	ffe->fae.type = FANOTIFY_EVENT_TYPE_FID;
+ 	ffe->fsid = *fsid;
++	*hash ^= fanotify_hash_fsid(fsid);
+ 	fanotify_encode_fh(&ffe->object_fh, id, fanotify_encode_fh_len(id),
+-			   gfp);
++			   hash, gfp);
+ 
+ 	return &ffe->fae;
+ }
+ 
+-static struct fanotify_event *fanotify_alloc_name_event(struct inode *id,
++static struct fanotify_event *fanotify_alloc_name_event(struct inode *dir,
+ 							__kernel_fsid_t *fsid,
+-							const struct qstr *file_name,
++							const struct qstr *name,
+ 							struct inode *child,
++							struct dentry *moved,
++							unsigned int *hash,
+ 							gfp_t gfp)
+ {
+ 	struct fanotify_name_event *fne;
+ 	struct fanotify_info *info;
+ 	struct fanotify_fh *dfh, *ffh;
+-	unsigned int dir_fh_len = fanotify_encode_fh_len(id);
++	struct inode *dir2 = moved ? d_inode(moved->d_parent) : NULL;
++	const struct qstr *name2 = moved ? &moved->d_name : NULL;
++	unsigned int dir_fh_len = fanotify_encode_fh_len(dir);
++	unsigned int dir2_fh_len = fanotify_encode_fh_len(dir2);
+ 	unsigned int child_fh_len = fanotify_encode_fh_len(child);
+-	unsigned int size;
+-
+-	size = sizeof(*fne) + FANOTIFY_FH_HDR_LEN + dir_fh_len;
++	unsigned long name_len = name ? name->len : 0;
++	unsigned long name2_len = name2 ? name2->len : 0;
++	unsigned int len, size;
++
++	/* Reserve terminating null byte even for empty name */
++	size = sizeof(*fne) + name_len + name2_len + 2;
++	if (dir_fh_len)
++		size += FANOTIFY_FH_HDR_LEN + dir_fh_len;
++	if (dir2_fh_len)
++		size += FANOTIFY_FH_HDR_LEN + dir2_fh_len;
+ 	if (child_fh_len)
+ 		size += FANOTIFY_FH_HDR_LEN + child_fh_len;
+-	if (file_name)
+-		size += file_name->len + 1;
+ 	fne = kmalloc(size, gfp);
+ 	if (!fne)
+ 		return NULL;
+ 
+ 	fne->fae.type = FANOTIFY_EVENT_TYPE_FID_NAME;
+ 	fne->fsid = *fsid;
++	*hash ^= fanotify_hash_fsid(fsid);
+ 	info = &fne->info;
+ 	fanotify_info_init(info);
+-	dfh = fanotify_info_dir_fh(info);
+-	info->dir_fh_totlen = fanotify_encode_fh(dfh, id, dir_fh_len, 0);
++	if (dir_fh_len) {
++		dfh = fanotify_info_dir_fh(info);
++		len = fanotify_encode_fh(dfh, dir, dir_fh_len, hash, 0);
++		fanotify_info_set_dir_fh(info, len);
++	}
++	if (dir2_fh_len) {
++		dfh = fanotify_info_dir2_fh(info);
++		len = fanotify_encode_fh(dfh, dir2, dir2_fh_len, hash, 0);
++		fanotify_info_set_dir2_fh(info, len);
++	}
+ 	if (child_fh_len) {
+ 		ffh = fanotify_info_file_fh(info);
+-		info->file_fh_totlen = fanotify_encode_fh(ffh, child, child_fh_len, 0);
++		len = fanotify_encode_fh(ffh, child, child_fh_len, hash, 0);
++		fanotify_info_set_file_fh(info, len);
++	}
++	if (name_len) {
++		fanotify_info_copy_name(info, name);
++		*hash ^= full_name_hash((void *)name_len, name->name, name_len);
++	}
++	if (name2_len) {
++		fanotify_info_copy_name2(info, name2);
++		*hash ^= full_name_hash((void *)name2_len, name2->name,
++					name2_len);
+ 	}
+-	if (file_name)
+-		fanotify_info_copy_name(info, file_name);
+ 
+-	pr_debug("%s: ino=%lu size=%u dir_fh_len=%u child_fh_len=%u name_len=%u name='%.*s'\n",
+-		 __func__, id->i_ino, size, dir_fh_len, child_fh_len,
++	pr_debug("%s: size=%u dir_fh_len=%u child_fh_len=%u name_len=%u name='%.*s'\n",
++		 __func__, size, dir_fh_len, child_fh_len,
+ 		 info->name_len, info->name_len, fanotify_info_name(info));
+ 
++	if (dir2_fh_len) {
++		pr_debug("%s: dir2_fh_len=%u name2_len=%u name2='%.*s'\n",
++			 __func__, dir2_fh_len, info->name2_len,
++			 info->name2_len, fanotify_info_name2(info));
++	}
++
+ 	return &fne->fae;
+ }
+ 
+-static struct fanotify_event *fanotify_alloc_event(struct fsnotify_group *group,
+-						   u32 mask, const void *data,
+-						   int data_type, struct inode *dir,
+-						   const struct qstr *file_name,
+-						   __kernel_fsid_t *fsid)
++static struct fanotify_event *fanotify_alloc_error_event(
++						struct fsnotify_group *group,
++						__kernel_fsid_t *fsid,
++						const void *data, int data_type,
++						unsigned int *hash)
++{
++	struct fs_error_report *report =
++			fsnotify_data_error_report(data, data_type);
++	struct inode *inode;
++	struct fanotify_error_event *fee;
++	int fh_len;
++
++	if (WARN_ON_ONCE(!report))
++		return NULL;
++
++	fee = mempool_alloc(&group->fanotify_data.error_events_pool, GFP_NOFS);
++	if (!fee)
++		return NULL;
++
++	fee->fae.type = FANOTIFY_EVENT_TYPE_FS_ERROR;
++	fee->error = report->error;
++	fee->err_count = 1;
++	fee->fsid = *fsid;
++
++	inode = report->inode;
++	fh_len = fanotify_encode_fh_len(inode);
++
++	/* Bad fh_len. Fallback to using an invalid fh. Should never happen. */
++	if (!fh_len && inode)
++		inode = NULL;
++
++	fanotify_encode_fh(&fee->object_fh, inode, fh_len, NULL, 0);
++
++	*hash ^= fanotify_hash_fsid(fsid);
++
++	return &fee->fae;
++}
++
++static struct fanotify_event *fanotify_alloc_event(
++				struct fsnotify_group *group,
++				u32 mask, const void *data, int data_type,
++				struct inode *dir, const struct qstr *file_name,
++				__kernel_fsid_t *fsid, u32 match_mask)
+ {
+ 	struct fanotify_event *event = NULL;
+ 	gfp_t gfp = GFP_KERNEL_ACCOUNT;
+-	struct inode *id = fanotify_fid_inode(mask, data, data_type, dir);
++	unsigned int fid_mode = FAN_GROUP_FLAG(group, FANOTIFY_FID_BITS);
++	struct inode *id = fanotify_fid_inode(mask, data, data_type, dir,
++					      fid_mode);
+ 	struct inode *dirid = fanotify_dfid_inode(mask, data, data_type, dir);
+ 	const struct path *path = fsnotify_data_path(data, data_type);
+-	unsigned int fid_mode = FAN_GROUP_FLAG(group, FANOTIFY_FID_BITS);
+ 	struct mem_cgroup *old_memcg;
++	struct dentry *moved = NULL;
+ 	struct inode *child = NULL;
+ 	bool name_event = false;
++	unsigned int hash = 0;
++	bool ondir = mask & FAN_ONDIR;
++	struct pid *pid;
+ 
+ 	if ((fid_mode & FAN_REPORT_DIR_FID) && dirid) {
+ 		/*
+-		 * With both flags FAN_REPORT_DIR_FID and FAN_REPORT_FID, we
+-		 * report the child fid for events reported on a non-dir child
++		 * For certain events and group flags, report the child fid
+ 		 * in addition to reporting the parent fid and maybe child name.
+ 		 */
+-		if ((fid_mode & FAN_REPORT_FID) &&
+-		    id != dirid && !(mask & FAN_ONDIR))
++		if (fanotify_report_child_fid(fid_mode, mask) && id != dirid)
+ 			child = id;
+ 
+ 		id = dirid;
+@@ -568,10 +750,41 @@ static struct fanotify_event *fanotify_alloc_event(struct fsnotify_group *group,
+ 		if (!(fid_mode & FAN_REPORT_NAME)) {
+ 			name_event = !!child;
+ 			file_name = NULL;
+-		} else if ((mask & ALL_FSNOTIFY_DIRENT_EVENTS) ||
+-			   !(mask & FAN_ONDIR)) {
++		} else if ((mask & ALL_FSNOTIFY_DIRENT_EVENTS) || !ondir) {
+ 			name_event = true;
+ 		}
++
++		/*
++		 * In the special case of FAN_RENAME event, use the match_mask
++		 * to determine if we need to report only the old parent+name,
++		 * only the new parent+name or both.
++		 * 'dirid' and 'file_name' are the old parent+name and
++		 * 'moved' has the new parent+name.
++		 */
++		if (mask & FAN_RENAME) {
++			bool report_old, report_new;
++
++			if (WARN_ON_ONCE(!match_mask))
++				return NULL;
++
++			/* Report both old and new parent+name if sb watching */
++			report_old = report_new =
++				match_mask & (1U << FSNOTIFY_ITER_TYPE_SB);
++			report_old |=
++				match_mask & (1U << FSNOTIFY_ITER_TYPE_INODE);
++			report_new |=
++				match_mask & (1U << FSNOTIFY_ITER_TYPE_INODE2);
++
++			if (!report_old) {
++				/* Do not report old parent+name */
++				dirid = NULL;
++				file_name = NULL;
++			}
++			if (report_new) {
++				/* Report new parent+name */
++				moved = fsnotify_data_dentry(data, data_type);
++			}
++		}
+ 	}
+ 
+ 	/*
+@@ -590,28 +803,30 @@ static struct fanotify_event *fanotify_alloc_event(struct fsnotify_group *group,
+ 
+ 	if (fanotify_is_perm_event(mask)) {
+ 		event = fanotify_alloc_perm_event(path, gfp);
+-	} else if (name_event && (file_name || child)) {
+-		event = fanotify_alloc_name_event(id, fsid, file_name, child,
+-						  gfp);
++	} else if (fanotify_is_error_event(mask)) {
++		event = fanotify_alloc_error_event(group, fsid, data,
++						   data_type, &hash);
++	} else if (name_event && (file_name || moved || child)) {
++		event = fanotify_alloc_name_event(dirid, fsid, file_name, child,
++						  moved, &hash, gfp);
+ 	} else if (fid_mode) {
+-		event = fanotify_alloc_fid_event(id, fsid, gfp);
++		event = fanotify_alloc_fid_event(id, fsid, &hash, gfp);
+ 	} else {
+-		event = fanotify_alloc_path_event(path, gfp);
++		event = fanotify_alloc_path_event(path, &hash, gfp);
+ 	}
+ 
+ 	if (!event)
+ 		goto out;
+ 
+-	/*
+-	 * Use the victim inode instead of the watching inode as the id for
+-	 * event queue, so event reported on parent is merged with event
+-	 * reported on child when both directory and child watches exist.
+-	 */
+-	fanotify_init_event(event, (unsigned long)id, mask);
+ 	if (FAN_GROUP_FLAG(group, FAN_REPORT_TID))
+-		event->pid = get_pid(task_pid(current));
++		pid = get_pid(task_pid(current));
+ 	else
+-		event->pid = get_pid(task_tgid(current));
++		pid = get_pid(task_tgid(current));
++
++	/* Mix event info, FAN_ONDIR flag and pid into event merge key */
++	hash ^= hash_long((unsigned long)pid | ondir, FANOTIFY_EVENT_HASH_BITS);
++	fanotify_init_event(event, hash, mask);
++	event->pid = pid;
+ 
+ out:
+ 	set_active_memcg(old_memcg);
+@@ -625,16 +840,14 @@ static struct fanotify_event *fanotify_alloc_event(struct fsnotify_group *group,
+  */
+ static __kernel_fsid_t fanotify_get_fsid(struct fsnotify_iter_info *iter_info)
+ {
++	struct fsnotify_mark *mark;
+ 	int type;
+ 	__kernel_fsid_t fsid = {};
+ 
+-	fsnotify_foreach_obj_type(type) {
++	fsnotify_foreach_iter_mark_type(iter_info, mark, type) {
+ 		struct fsnotify_mark_connector *conn;
+ 
+-		if (!fsnotify_iter_should_report_type(iter_info, type))
+-			continue;
+-
+-		conn = READ_ONCE(iter_info->marks[type]->connector);
++		conn = READ_ONCE(mark->connector);
+ 		/* Mark is just getting destroyed or created? */
+ 		if (!conn)
+ 			continue;
+@@ -651,6 +864,27 @@ static __kernel_fsid_t fanotify_get_fsid(struct fsnotify_iter_info *iter_info)
+ 	return fsid;
+ }
+ 
++/*
++ * Add an event to hash table for faster merge.
++ */
++static void fanotify_insert_event(struct fsnotify_group *group,
++				  struct fsnotify_event *fsn_event)
++{
++	struct fanotify_event *event = FANOTIFY_E(fsn_event);
++	unsigned int bucket = fanotify_event_hash_bucket(group, event);
++	struct hlist_head *hlist = &group->fanotify_data.merge_hash[bucket];
++
++	assert_spin_locked(&group->notification_lock);
++
++	if (!fanotify_is_hashed_event(event->mask))
++		return;
++
++	pr_debug("%s: group=%p event=%p bucket=%u\n", __func__,
++		 group, event, bucket);
++
++	hlist_add_head(&event->merge_list, hlist);
++}
++
+ static int fanotify_handle_event(struct fsnotify_group *group, u32 mask,
+ 				 const void *data, int data_type,
+ 				 struct inode *dir,
+@@ -661,6 +895,7 @@ static int fanotify_handle_event(struct fsnotify_group *group, u32 mask,
+ 	struct fanotify_event *event;
+ 	struct fsnotify_event *fsn_event;
+ 	__kernel_fsid_t fsid = {};
++	u32 match_mask = 0;
+ 
+ 	BUILD_BUG_ON(FAN_ACCESS != FS_ACCESS);
+ 	BUILD_BUG_ON(FAN_MODIFY != FS_MODIFY);
+@@ -681,15 +916,18 @@ static int fanotify_handle_event(struct fsnotify_group *group, u32 mask,
+ 	BUILD_BUG_ON(FAN_ONDIR != FS_ISDIR);
+ 	BUILD_BUG_ON(FAN_OPEN_EXEC != FS_OPEN_EXEC);
+ 	BUILD_BUG_ON(FAN_OPEN_EXEC_PERM != FS_OPEN_EXEC_PERM);
++	BUILD_BUG_ON(FAN_FS_ERROR != FS_ERROR);
++	BUILD_BUG_ON(FAN_RENAME != FS_RENAME);
+ 
+-	BUILD_BUG_ON(HWEIGHT32(ALL_FANOTIFY_EVENT_BITS) != 19);
++	BUILD_BUG_ON(HWEIGHT32(ALL_FANOTIFY_EVENT_BITS) != 21);
+ 
+-	mask = fanotify_group_event_mask(group, iter_info, mask, data,
+-					 data_type, dir);
++	mask = fanotify_group_event_mask(group, iter_info, &match_mask,
++					 mask, data, data_type, dir);
+ 	if (!mask)
+ 		return 0;
+ 
+-	pr_debug("%s: group=%p mask=%x\n", __func__, group, mask);
++	pr_debug("%s: group=%p mask=%x report_mask=%x\n", __func__,
++		 group, mask, match_mask);
+ 
+ 	if (fanotify_is_perm_event(mask)) {
+ 		/*
+@@ -708,7 +946,7 @@ static int fanotify_handle_event(struct fsnotify_group *group, u32 mask,
+ 	}
+ 
+ 	event = fanotify_alloc_event(group, mask, data, data_type, dir,
+-				     file_name, &fsid);
++				     file_name, &fsid, match_mask);
+ 	ret = -ENOMEM;
+ 	if (unlikely(!event)) {
+ 		/*
+@@ -721,7 +959,8 @@ static int fanotify_handle_event(struct fsnotify_group *group, u32 mask,
+ 	}
+ 
+ 	fsn_event = &event->fse;
+-	ret = fsnotify_add_event(group, fsn_event, fanotify_merge);
++	ret = fsnotify_insert_event(group, fsn_event, fanotify_merge,
++				    fanotify_insert_event);
+ 	if (ret) {
+ 		/* Permission events shouldn't be merged */
+ 		BUG_ON(ret == 1 && mask & FANOTIFY_PERM_EVENTS);
+@@ -742,11 +981,13 @@ static int fanotify_handle_event(struct fsnotify_group *group, u32 mask,
+ 
+ static void fanotify_free_group_priv(struct fsnotify_group *group)
+ {
+-	struct user_struct *user;
++	kfree(group->fanotify_data.merge_hash);
++	if (group->fanotify_data.ucounts)
++		dec_ucount(group->fanotify_data.ucounts,
++			   UCOUNT_FANOTIFY_GROUPS);
+ 
+-	user = group->fanotify_data.user;
+-	atomic_dec(&user->fanotify_listeners);
+-	free_uid(user);
++	if (mempool_initialized(&group->fanotify_data.error_events_pool))
++		mempool_exit(&group->fanotify_data.error_events_pool);
+ }
+ 
+ static void fanotify_free_path_event(struct fanotify_event *event)
+@@ -775,7 +1016,16 @@ static void fanotify_free_name_event(struct fanotify_event *event)
+ 	kfree(FANOTIFY_NE(event));
+ }
+ 
+-static void fanotify_free_event(struct fsnotify_event *fsn_event)
++static void fanotify_free_error_event(struct fsnotify_group *group,
++				      struct fanotify_event *event)
++{
++	struct fanotify_error_event *fee = FANOTIFY_EE(event);
++
++	mempool_free(fee, &group->fanotify_data.error_events_pool);
++}
++
++static void fanotify_free_event(struct fsnotify_group *group,
++				struct fsnotify_event *fsn_event)
+ {
+ 	struct fanotify_event *event;
+ 
+@@ -797,11 +1047,21 @@ static void fanotify_free_event(struct fsnotify_event *fsn_event)
+ 	case FANOTIFY_EVENT_TYPE_OVERFLOW:
+ 		kfree(event);
+ 		break;
++	case FANOTIFY_EVENT_TYPE_FS_ERROR:
++		fanotify_free_error_event(group, event);
++		break;
+ 	default:
+ 		WARN_ON_ONCE(1);
+ 	}
+ }
+ 
++static void fanotify_freeing_mark(struct fsnotify_mark *mark,
++				  struct fsnotify_group *group)
++{
++	if (!FAN_GROUP_FLAG(group, FAN_UNLIMITED_MARKS))
++		dec_ucount(group->fanotify_data.ucounts, UCOUNT_FANOTIFY_MARKS);
++}
++
+ static void fanotify_free_mark(struct fsnotify_mark *fsn_mark)
+ {
+ 	kmem_cache_free(fanotify_mark_cache, fsn_mark);
+@@ -811,5 +1071,6 @@ const struct fsnotify_ops fanotify_fsnotify_ops = {
+ 	.handle_event = fanotify_handle_event,
+ 	.free_group_priv = fanotify_free_group_priv,
+ 	.free_event = fanotify_free_event,
++	.freeing_mark = fanotify_freeing_mark,
+ 	.free_mark = fanotify_free_mark,
+ };
+diff --git a/fs/notify/fanotify/fanotify.h b/fs/notify/fanotify/fanotify.h
+index 896c819a17863..57f51a9a3015d 100644
+--- a/fs/notify/fanotify/fanotify.h
++++ b/fs/notify/fanotify/fanotify.h
+@@ -3,6 +3,7 @@
+ #include <linux/path.h>
+ #include <linux/slab.h>
+ #include <linux/exportfs.h>
++#include <linux/hashtable.h>
+ 
+ extern struct kmem_cache *fanotify_mark_cache;
+ extern struct kmem_cache *fanotify_fid_event_cachep;
+@@ -39,15 +40,45 @@ struct fanotify_fh {
+ struct fanotify_info {
+ 	/* size of dir_fh/file_fh including fanotify_fh hdr size */
+ 	u8 dir_fh_totlen;
++	u8 dir2_fh_totlen;
+ 	u8 file_fh_totlen;
+ 	u8 name_len;
+-	u8 pad;
++	u8 name2_len;
++	u8 pad[3];
+ 	unsigned char buf[];
+ 	/*
+ 	 * (struct fanotify_fh) dir_fh starts at buf[0]
+-	 * (optional) file_fh starts at buf[dir_fh_totlen]
+-	 * name starts at buf[dir_fh_totlen + file_fh_totlen]
++	 * (optional) dir2_fh starts at buf[dir_fh_totlen]
++	 * (optional) file_fh starts at buf[dir_fh_totlen + dir2_fh_totlen]
++	 * name starts at buf[dir_fh_totlen + dir2_fh_totlen + file_fh_totlen]
++	 * ...
+ 	 */
++#define FANOTIFY_DIR_FH_SIZE(info)	((info)->dir_fh_totlen)
++#define FANOTIFY_DIR2_FH_SIZE(info)	((info)->dir2_fh_totlen)
++#define FANOTIFY_FILE_FH_SIZE(info)	((info)->file_fh_totlen)
++#define FANOTIFY_NAME_SIZE(info)	((info)->name_len + 1)
++#define FANOTIFY_NAME2_SIZE(info)	((info)->name2_len + 1)
++
++#define FANOTIFY_DIR_FH_OFFSET(info)	0
++#define FANOTIFY_DIR2_FH_OFFSET(info) \
++	(FANOTIFY_DIR_FH_OFFSET(info) + FANOTIFY_DIR_FH_SIZE(info))
++#define FANOTIFY_FILE_FH_OFFSET(info) \
++	(FANOTIFY_DIR2_FH_OFFSET(info) + FANOTIFY_DIR2_FH_SIZE(info))
++#define FANOTIFY_NAME_OFFSET(info) \
++	(FANOTIFY_FILE_FH_OFFSET(info) + FANOTIFY_FILE_FH_SIZE(info))
++#define FANOTIFY_NAME2_OFFSET(info) \
++	(FANOTIFY_NAME_OFFSET(info) + FANOTIFY_NAME_SIZE(info))
++
++#define FANOTIFY_DIR_FH_BUF(info) \
++	((info)->buf + FANOTIFY_DIR_FH_OFFSET(info))
++#define FANOTIFY_DIR2_FH_BUF(info) \
++	((info)->buf + FANOTIFY_DIR2_FH_OFFSET(info))
++#define FANOTIFY_FILE_FH_BUF(info) \
++	((info)->buf + FANOTIFY_FILE_FH_OFFSET(info))
++#define FANOTIFY_NAME_BUF(info) \
++	((info)->buf + FANOTIFY_NAME_OFFSET(info))
++#define FANOTIFY_NAME2_BUF(info) \
++	((info)->buf + FANOTIFY_NAME2_OFFSET(info))
+ } __aligned(4);
+ 
+ static inline bool fanotify_fh_has_ext_buf(struct fanotify_fh *fh)
+@@ -86,7 +117,21 @@ static inline struct fanotify_fh *fanotify_info_dir_fh(struct fanotify_info *inf
+ {
+ 	BUILD_BUG_ON(offsetof(struct fanotify_info, buf) % 4);
+ 
+-	return (struct fanotify_fh *)info->buf;
++	return (struct fanotify_fh *)FANOTIFY_DIR_FH_BUF(info);
++}
++
++static inline int fanotify_info_dir2_fh_len(struct fanotify_info *info)
++{
++	if (!info->dir2_fh_totlen ||
++	    WARN_ON_ONCE(info->dir2_fh_totlen < FANOTIFY_FH_HDR_LEN))
++		return 0;
++
++	return info->dir2_fh_totlen - FANOTIFY_FH_HDR_LEN;
++}
++
++static inline struct fanotify_fh *fanotify_info_dir2_fh(struct fanotify_info *info)
++{
++	return (struct fanotify_fh *)FANOTIFY_DIR2_FH_BUF(info);
+ }
+ 
+ static inline int fanotify_info_file_fh_len(struct fanotify_info *info)
+@@ -100,27 +145,90 @@ static inline int fanotify_info_file_fh_len(struct fanotify_info *info)
+ 
+ static inline struct fanotify_fh *fanotify_info_file_fh(struct fanotify_info *info)
+ {
+-	return (struct fanotify_fh *)(info->buf + info->dir_fh_totlen);
++	return (struct fanotify_fh *)FANOTIFY_FILE_FH_BUF(info);
++}
++
++static inline char *fanotify_info_name(struct fanotify_info *info)
++{
++	if (!info->name_len)
++		return NULL;
++
++	return FANOTIFY_NAME_BUF(info);
+ }
+ 
+-static inline const char *fanotify_info_name(struct fanotify_info *info)
++static inline char *fanotify_info_name2(struct fanotify_info *info)
+ {
+-	return info->buf + info->dir_fh_totlen + info->file_fh_totlen;
++	if (!info->name2_len)
++		return NULL;
++
++	return FANOTIFY_NAME2_BUF(info);
+ }
+ 
+ static inline void fanotify_info_init(struct fanotify_info *info)
+ {
++	BUILD_BUG_ON(FANOTIFY_FH_HDR_LEN + MAX_HANDLE_SZ > U8_MAX);
++	BUILD_BUG_ON(NAME_MAX > U8_MAX);
++
+ 	info->dir_fh_totlen = 0;
++	info->dir2_fh_totlen = 0;
+ 	info->file_fh_totlen = 0;
+ 	info->name_len = 0;
++	info->name2_len = 0;
++}
++
++/* These set/copy helpers MUST be called by order */
++static inline void fanotify_info_set_dir_fh(struct fanotify_info *info,
++					    unsigned int totlen)
++{
++	if (WARN_ON_ONCE(info->dir2_fh_totlen > 0) ||
++	    WARN_ON_ONCE(info->file_fh_totlen > 0) ||
++	    WARN_ON_ONCE(info->name_len > 0) ||
++	    WARN_ON_ONCE(info->name2_len > 0))
++		return;
++
++	info->dir_fh_totlen = totlen;
++}
++
++static inline void fanotify_info_set_dir2_fh(struct fanotify_info *info,
++					     unsigned int totlen)
++{
++	if (WARN_ON_ONCE(info->file_fh_totlen > 0) ||
++	    WARN_ON_ONCE(info->name_len > 0) ||
++	    WARN_ON_ONCE(info->name2_len > 0))
++		return;
++
++	info->dir2_fh_totlen = totlen;
++}
++
++static inline void fanotify_info_set_file_fh(struct fanotify_info *info,
++					     unsigned int totlen)
++{
++	if (WARN_ON_ONCE(info->name_len > 0) ||
++	    WARN_ON_ONCE(info->name2_len > 0))
++		return;
++
++	info->file_fh_totlen = totlen;
+ }
+ 
+ static inline void fanotify_info_copy_name(struct fanotify_info *info,
+ 					   const struct qstr *name)
+ {
++	if (WARN_ON_ONCE(name->len > NAME_MAX) ||
++	    WARN_ON_ONCE(info->name2_len > 0))
++		return;
++
+ 	info->name_len = name->len;
+-	strcpy(info->buf + info->dir_fh_totlen + info->file_fh_totlen,
+-	       name->name);
++	strcpy(fanotify_info_name(info), name->name);
++}
++
++static inline void fanotify_info_copy_name2(struct fanotify_info *info,
++					    const struct qstr *name)
++{
++	if (WARN_ON_ONCE(name->len > NAME_MAX))
++		return;
++
++	info->name2_len = name->len;
++	strcpy(fanotify_info_name2(info), name->name);
+ }
+ 
+ /*
+@@ -135,29 +243,48 @@ enum fanotify_event_type {
+ 	FANOTIFY_EVENT_TYPE_PATH,
+ 	FANOTIFY_EVENT_TYPE_PATH_PERM,
+ 	FANOTIFY_EVENT_TYPE_OVERFLOW, /* struct fanotify_event */
++	FANOTIFY_EVENT_TYPE_FS_ERROR, /* struct fanotify_error_event */
++	__FANOTIFY_EVENT_TYPE_NUM
+ };
+ 
++#define FANOTIFY_EVENT_TYPE_BITS \
++	(ilog2(__FANOTIFY_EVENT_TYPE_NUM - 1) + 1)
++#define FANOTIFY_EVENT_HASH_BITS \
++	(32 - FANOTIFY_EVENT_TYPE_BITS)
++
+ struct fanotify_event {
+ 	struct fsnotify_event fse;
++	struct hlist_node merge_list;	/* List for hashed merge */
+ 	u32 mask;
+-	enum fanotify_event_type type;
++	struct {
++		unsigned int type : FANOTIFY_EVENT_TYPE_BITS;
++		unsigned int hash : FANOTIFY_EVENT_HASH_BITS;
++	};
+ 	struct pid *pid;
+ };
+ 
+ static inline void fanotify_init_event(struct fanotify_event *event,
+-				       unsigned long id, u32 mask)
++				       unsigned int hash, u32 mask)
+ {
+-	fsnotify_init_event(&event->fse, id);
++	fsnotify_init_event(&event->fse);
++	INIT_HLIST_NODE(&event->merge_list);
++	event->hash = hash;
+ 	event->mask = mask;
+ 	event->pid = NULL;
+ }
+ 
++#define FANOTIFY_INLINE_FH(name, size)					\
++struct {								\
++	struct fanotify_fh (name);					\
++	/* Space for object_fh.buf[] - access with fanotify_fh_buf() */	\
++	unsigned char _inline_fh_buf[(size)];				\
++}
++
+ struct fanotify_fid_event {
+ 	struct fanotify_event fae;
+ 	__kernel_fsid_t fsid;
+-	struct fanotify_fh object_fh;
+-	/* Reserve space in object_fh.buf[] - access with fanotify_fh_buf() */
+-	unsigned char _inline_fh_buf[FANOTIFY_INLINE_FH_LEN];
++
++	FANOTIFY_INLINE_FH(object_fh, FANOTIFY_INLINE_FH_LEN);
+ };
+ 
+ static inline struct fanotify_fid_event *
+@@ -178,12 +305,30 @@ FANOTIFY_NE(struct fanotify_event *event)
+ 	return container_of(event, struct fanotify_name_event, fae);
+ }
+ 
++struct fanotify_error_event {
++	struct fanotify_event fae;
++	s32 error; /* Error reported by the Filesystem. */
++	u32 err_count; /* Suppressed errors count */
++
++	__kernel_fsid_t fsid; /* FSID this error refers to. */
++
++	FANOTIFY_INLINE_FH(object_fh, MAX_HANDLE_SZ);
++};
++
++static inline struct fanotify_error_event *
++FANOTIFY_EE(struct fanotify_event *event)
++{
++	return container_of(event, struct fanotify_error_event, fae);
++}
++
+ static inline __kernel_fsid_t *fanotify_event_fsid(struct fanotify_event *event)
+ {
+ 	if (event->type == FANOTIFY_EVENT_TYPE_FID)
+ 		return &FANOTIFY_FE(event)->fsid;
+ 	else if (event->type == FANOTIFY_EVENT_TYPE_FID_NAME)
+ 		return &FANOTIFY_NE(event)->fsid;
++	else if (event->type == FANOTIFY_EVENT_TYPE_FS_ERROR)
++		return &FANOTIFY_EE(event)->fsid;
+ 	else
+ 		return NULL;
+ }
+@@ -195,6 +340,8 @@ static inline struct fanotify_fh *fanotify_event_object_fh(
+ 		return &FANOTIFY_FE(event)->object_fh;
+ 	else if (event->type == FANOTIFY_EVENT_TYPE_FID_NAME)
+ 		return fanotify_info_file_fh(&FANOTIFY_NE(event)->info);
++	else if (event->type == FANOTIFY_EVENT_TYPE_FS_ERROR)
++		return &FANOTIFY_EE(event)->object_fh;
+ 	else
+ 		return NULL;
+ }
+@@ -226,6 +373,37 @@ static inline int fanotify_event_dir_fh_len(struct fanotify_event *event)
+ 	return info ? fanotify_info_dir_fh_len(info) : 0;
+ }
+ 
++static inline int fanotify_event_dir2_fh_len(struct fanotify_event *event)
++{
++	struct fanotify_info *info = fanotify_event_info(event);
++
++	return info ? fanotify_info_dir2_fh_len(info) : 0;
++}
++
++static inline bool fanotify_event_has_object_fh(struct fanotify_event *event)
++{
++	/* For error events, even zeroed fh are reported. */
++	if (event->type == FANOTIFY_EVENT_TYPE_FS_ERROR)
++		return true;
++	return fanotify_event_object_fh_len(event) > 0;
++}
++
++static inline bool fanotify_event_has_dir_fh(struct fanotify_event *event)
++{
++	return fanotify_event_dir_fh_len(event) > 0;
++}
++
++static inline bool fanotify_event_has_dir2_fh(struct fanotify_event *event)
++{
++	return fanotify_event_dir2_fh_len(event) > 0;
++}
++
++static inline bool fanotify_event_has_any_dir_fh(struct fanotify_event *event)
++{
++	return fanotify_event_has_dir_fh(event) ||
++		fanotify_event_has_dir2_fh(event);
++}
++
+ struct fanotify_path_event {
+ 	struct fanotify_event fae;
+ 	struct path path;
+@@ -269,13 +447,12 @@ static inline struct fanotify_event *FANOTIFY_E(struct fsnotify_event *fse)
+ 	return container_of(fse, struct fanotify_event, fse);
+ }
+ 
+-static inline bool fanotify_event_has_path(struct fanotify_event *event)
++static inline bool fanotify_is_error_event(u32 mask)
+ {
+-	return event->type == FANOTIFY_EVENT_TYPE_PATH ||
+-		event->type == FANOTIFY_EVENT_TYPE_PATH_PERM;
++	return mask & FAN_FS_ERROR;
+ }
+ 
+-static inline struct path *fanotify_event_path(struct fanotify_event *event)
++static inline const struct path *fanotify_event_path(struct fanotify_event *event)
+ {
+ 	if (event->type == FANOTIFY_EVENT_TYPE_PATH)
+ 		return &FANOTIFY_PE(event)->path;
+@@ -284,3 +461,40 @@ static inline struct path *fanotify_event_path(struct fanotify_event *event)
+ 	else
+ 		return NULL;
+ }
++
++/*
++ * Use 128 size hash table to speed up events merge.
++ */
++#define FANOTIFY_HTABLE_BITS	(7)
++#define FANOTIFY_HTABLE_SIZE	(1 << FANOTIFY_HTABLE_BITS)
++#define FANOTIFY_HTABLE_MASK	(FANOTIFY_HTABLE_SIZE - 1)
++
++/*
++ * Permission events and overflow event do not get merged - don't hash them.
++ */
++static inline bool fanotify_is_hashed_event(u32 mask)
++{
++	return !(fanotify_is_perm_event(mask) ||
++		 fsnotify_is_overflow_event(mask));
++}
++
++static inline unsigned int fanotify_event_hash_bucket(
++						struct fsnotify_group *group,
++						struct fanotify_event *event)
++{
++	return event->hash & FANOTIFY_HTABLE_MASK;
++}
++
++static inline unsigned int fanotify_mark_user_flags(struct fsnotify_mark *mark)
++{
++	unsigned int mflags = 0;
++
++	if (mark->flags & FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY)
++		mflags |= FAN_MARK_IGNORED_SURV_MODIFY;
++	if (mark->flags & FSNOTIFY_MARK_FLAG_NO_IREF)
++		mflags |= FAN_MARK_EVICTABLE;
++	if (mark->flags & FSNOTIFY_MARK_FLAG_HAS_IGNORE_FLAGS)
++		mflags |= FAN_MARK_IGNORE;
++
++	return mflags;
++}
+diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
+index 84de9f97bbc09..5302313f28bed 100644
+--- a/fs/notify/fanotify/fanotify_user.c
++++ b/fs/notify/fanotify/fanotify_user.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/fanotify.h>
+ #include <linux/fcntl.h>
++#include <linux/fdtable.h>
+ #include <linux/file.h>
+ #include <linux/fs.h>
+ #include <linux/anon_inodes.h>
+@@ -27,8 +28,62 @@
+ #include "fanotify.h"
+ 
+ #define FANOTIFY_DEFAULT_MAX_EVENTS	16384
+-#define FANOTIFY_DEFAULT_MAX_MARKS	8192
+-#define FANOTIFY_DEFAULT_MAX_LISTENERS	128
++#define FANOTIFY_OLD_DEFAULT_MAX_MARKS	8192
++#define FANOTIFY_DEFAULT_MAX_GROUPS	128
++#define FANOTIFY_DEFAULT_FEE_POOL_SIZE	32
++
++/*
++ * Legacy fanotify marks limits (8192) is per group and we introduced a tunable
++ * limit of marks per user, similar to inotify.  Effectively, the legacy limit
++ * of fanotify marks per user is <max marks per group> * <max groups per user>.
++ * This default limit (1M) also happens to match the increased limit of inotify
++ * max_user_watches since v5.10.
++ */
++#define FANOTIFY_DEFAULT_MAX_USER_MARKS	\
++	(FANOTIFY_OLD_DEFAULT_MAX_MARKS * FANOTIFY_DEFAULT_MAX_GROUPS)
++
++/*
++ * Most of the memory cost of adding an inode mark is pinning the marked inode.
++ * The size of the filesystem inode struct is not uniform across filesystems,
++ * so double the size of a VFS inode is used as a conservative approximation.
++ */
++#define INODE_MARK_COST	(2 * sizeof(struct inode))
++
++/* configurable via /proc/sys/fs/fanotify/ */
++static int fanotify_max_queued_events __read_mostly;
++
++#ifdef CONFIG_SYSCTL
++
++#include <linux/sysctl.h>
++
++struct ctl_table fanotify_table[] = {
++	{
++		.procname	= "max_user_groups",
++		.data	= &init_user_ns.ucount_max[UCOUNT_FANOTIFY_GROUPS],
++		.maxlen		= sizeof(int),
++		.mode		= 0644,
++		.proc_handler	= proc_dointvec_minmax,
++		.extra1		= SYSCTL_ZERO,
++	},
++	{
++		.procname	= "max_user_marks",
++		.data	= &init_user_ns.ucount_max[UCOUNT_FANOTIFY_MARKS],
++		.maxlen		= sizeof(int),
++		.mode		= 0644,
++		.proc_handler	= proc_dointvec_minmax,
++		.extra1		= SYSCTL_ZERO,
++	},
++	{
++		.procname	= "max_queued_events",
++		.data		= &fanotify_max_queued_events,
++		.maxlen		= sizeof(int),
++		.mode		= 0644,
++		.proc_handler	= proc_dointvec_minmax,
++		.extra1		= SYSCTL_ZERO
++	},
++	{ }
++};
++#endif /* CONFIG_SYSCTL */
+ 
+ /*
+  * All flags that may be specified in parameter event_f_flags of fanotify_init.
+@@ -51,8 +106,12 @@ struct kmem_cache *fanotify_path_event_cachep __read_mostly;
+ struct kmem_cache *fanotify_perm_event_cachep __read_mostly;
+ 
+ #define FANOTIFY_EVENT_ALIGN 4
+-#define FANOTIFY_INFO_HDR_LEN \
++#define FANOTIFY_FID_INFO_HDR_LEN \
+ 	(sizeof(struct fanotify_event_info_fid) + sizeof(struct file_handle))
++#define FANOTIFY_PIDFD_INFO_HDR_LEN \
++	sizeof(struct fanotify_event_info_pidfd)
++#define FANOTIFY_ERROR_INFO_LEN \
++	(sizeof(struct fanotify_event_info_error))
+ 
+ static int fanotify_fid_info_len(int fh_len, int name_len)
+ {
+@@ -61,21 +120,45 @@ static int fanotify_fid_info_len(int fh_len, int name_len)
+ 	if (name_len)
+ 		info_len += name_len + 1;
+ 
+-	return roundup(FANOTIFY_INFO_HDR_LEN + info_len, FANOTIFY_EVENT_ALIGN);
++	return roundup(FANOTIFY_FID_INFO_HDR_LEN + info_len,
++		       FANOTIFY_EVENT_ALIGN);
+ }
+ 
+-static int fanotify_event_info_len(unsigned int fid_mode,
+-				   struct fanotify_event *event)
++/* FAN_RENAME may have one or two dir+name info records */
++static int fanotify_dir_name_info_len(struct fanotify_event *event)
+ {
+ 	struct fanotify_info *info = fanotify_event_info(event);
+ 	int dir_fh_len = fanotify_event_dir_fh_len(event);
+-	int fh_len = fanotify_event_object_fh_len(event);
++	int dir2_fh_len = fanotify_event_dir2_fh_len(event);
+ 	int info_len = 0;
++
++	if (dir_fh_len)
++		info_len += fanotify_fid_info_len(dir_fh_len,
++						  info->name_len);
++	if (dir2_fh_len)
++		info_len += fanotify_fid_info_len(dir2_fh_len,
++						  info->name2_len);
++
++	return info_len;
++}
++
++static size_t fanotify_event_len(unsigned int info_mode,
++				 struct fanotify_event *event)
++{
++	size_t event_len = FAN_EVENT_METADATA_LEN;
++	int fh_len;
+ 	int dot_len = 0;
+ 
+-	if (dir_fh_len) {
+-		info_len += fanotify_fid_info_len(dir_fh_len, info->name_len);
+-	} else if ((fid_mode & FAN_REPORT_NAME) && (event->mask & FAN_ONDIR)) {
++	if (!info_mode)
++		return event_len;
++
++	if (fanotify_is_error_event(event->mask))
++		event_len += FANOTIFY_ERROR_INFO_LEN;
++
++	if (fanotify_event_has_any_dir_fh(event)) {
++		event_len += fanotify_dir_name_info_len(event);
++	} else if ((info_mode & FAN_REPORT_NAME) &&
++		   (event->mask & FAN_ONDIR)) {
+ 		/*
+ 		 * With group flag FAN_REPORT_NAME, if name was not recorded in
+ 		 * event on a directory, we will report the name ".".
+@@ -83,10 +166,32 @@ static int fanotify_event_info_len(unsigned int fid_mode,
+ 		dot_len = 1;
+ 	}
+ 
+-	if (fh_len)
+-		info_len += fanotify_fid_info_len(fh_len, dot_len);
++	if (info_mode & FAN_REPORT_PIDFD)
++		event_len += FANOTIFY_PIDFD_INFO_HDR_LEN;
+ 
+-	return info_len;
++	if (fanotify_event_has_object_fh(event)) {
++		fh_len = fanotify_event_object_fh_len(event);
++		event_len += fanotify_fid_info_len(fh_len, dot_len);
++	}
++
++	return event_len;
++}
++
++/*
++ * Remove an hashed event from merge hash table.
++ */
++static void fanotify_unhash_event(struct fsnotify_group *group,
++				  struct fanotify_event *event)
++{
++	assert_spin_locked(&group->notification_lock);
++
++	pr_debug("%s: group=%p event=%p bucket=%u\n", __func__,
++		 group, event, fanotify_event_hash_bucket(group, event));
++
++	if (WARN_ON_ONCE(hlist_unhashed(&event->merge_list)))
++		return;
++
++	hlist_del_init(&event->merge_list);
+ }
+ 
+ /*
+@@ -98,34 +203,41 @@ static int fanotify_event_info_len(unsigned int fid_mode,
+ static struct fanotify_event *get_one_event(struct fsnotify_group *group,
+ 					    size_t count)
+ {
+-	size_t event_size = FAN_EVENT_METADATA_LEN;
++	size_t event_size;
+ 	struct fanotify_event *event = NULL;
+-	unsigned int fid_mode = FAN_GROUP_FLAG(group, FANOTIFY_FID_BITS);
++	struct fsnotify_event *fsn_event;
++	unsigned int info_mode = FAN_GROUP_FLAG(group, FANOTIFY_INFO_MODES);
+ 
+ 	pr_debug("%s: group=%p count=%zd\n", __func__, group, count);
+ 
+ 	spin_lock(&group->notification_lock);
+-	if (fsnotify_notify_queue_is_empty(group))
++	fsn_event = fsnotify_peek_first_event(group);
++	if (!fsn_event)
+ 		goto out;
+ 
+-	if (fid_mode) {
+-		event_size += fanotify_event_info_len(fid_mode,
+-			FANOTIFY_E(fsnotify_peek_first_event(group)));
+-	}
++	event = FANOTIFY_E(fsn_event);
++	event_size = fanotify_event_len(info_mode, event);
+ 
+ 	if (event_size > count) {
+ 		event = ERR_PTR(-EINVAL);
+ 		goto out;
+ 	}
+-	event = FANOTIFY_E(fsnotify_remove_first_event(group));
++
++	/*
++	 * Held the notification_lock the whole time, so this is the
++	 * same event we peeked above.
++	 */
++	fsnotify_remove_first_event(group);
+ 	if (fanotify_is_perm_event(event->mask))
+ 		FANOTIFY_PERM(event)->state = FAN_EVENT_REPORTED;
++	if (fanotify_is_hashed_event(event->mask))
++		fanotify_unhash_event(group, event);
+ out:
+ 	spin_unlock(&group->notification_lock);
+ 	return event;
+ }
+ 
+-static int create_fd(struct fsnotify_group *group, struct path *path,
++static int create_fd(struct fsnotify_group *group, const struct path *path,
+ 		     struct file **file)
+ {
+ 	int client_fd;
+@@ -140,7 +252,7 @@ static int create_fd(struct fsnotify_group *group, struct path *path,
+ 	 * originally opened O_WRONLY.
+ 	 */
+ 	new_file = dentry_open(path,
+-			       group->fanotify_data.f_flags | FMODE_NONOTIFY,
++			       group->fanotify_data.f_flags | __FMODE_NONOTIFY,
+ 			       current_cred());
+ 	if (IS_ERR(new_file)) {
+ 		/*
+@@ -225,9 +337,31 @@ static int process_access_response(struct fsnotify_group *group,
+ 	return -ENOENT;
+ }
+ 
+-static int copy_info_to_user(__kernel_fsid_t *fsid, struct fanotify_fh *fh,
+-			     int info_type, const char *name, size_t name_len,
+-			     char __user *buf, size_t count)
++static size_t copy_error_info_to_user(struct fanotify_event *event,
++				      char __user *buf, int count)
++{
++	struct fanotify_event_info_error info = { };
++	struct fanotify_error_event *fee = FANOTIFY_EE(event);
++
++	info.hdr.info_type = FAN_EVENT_INFO_TYPE_ERROR;
++	info.hdr.len = FANOTIFY_ERROR_INFO_LEN;
++
++	if (WARN_ON(count < info.hdr.len))
++		return -EFAULT;
++
++	info.error = fee->error;
++	info.error_count = fee->err_count;
++
++	if (copy_to_user(buf, &info, sizeof(info)))
++		return -EFAULT;
++
++	return info.hdr.len;
++}
++
++static int copy_fid_info_to_user(__kernel_fsid_t *fsid, struct fanotify_fh *fh,
++				 int info_type, const char *name,
++				 size_t name_len,
++				 char __user *buf, size_t count)
+ {
+ 	struct fanotify_event_info_fid info = { };
+ 	struct file_handle handle = { };
+@@ -239,9 +373,6 @@ static int copy_info_to_user(__kernel_fsid_t *fsid, struct fanotify_fh *fh,
+ 	pr_debug("%s: fh_len=%zu name_len=%zu, info_len=%zu, count=%zu\n",
+ 		 __func__, fh_len, name_len, info_len, count);
+ 
+-	if (!fh_len)
+-		return 0;
+-
+ 	if (WARN_ON_ONCE(len < sizeof(info) || len > count))
+ 		return -EFAULT;
+ 
+@@ -256,6 +387,8 @@ static int copy_info_to_user(__kernel_fsid_t *fsid, struct fanotify_fh *fh,
+ 			return -EFAULT;
+ 		break;
+ 	case FAN_EVENT_INFO_TYPE_DFID_NAME:
++	case FAN_EVENT_INFO_TYPE_OLD_DFID_NAME:
++	case FAN_EVENT_INFO_TYPE_NEW_DFID_NAME:
+ 		if (WARN_ON_ONCE(!name || !name_len))
+ 			return -EFAULT;
+ 		break;
+@@ -276,6 +409,11 @@ static int copy_info_to_user(__kernel_fsid_t *fsid, struct fanotify_fh *fh,
+ 
+ 	handle.handle_type = fh->type;
+ 	handle.handle_bytes = fh_len;
++
++	/* Mangle handle_type for bad file_handle */
++	if (!fh_len)
++		handle.handle_type = FILEID_INVALID;
++
+ 	if (copy_to_user(buf, &handle, sizeof(handle)))
+ 		return -EFAULT;
+ 
+@@ -320,68 +458,79 @@ static int copy_info_to_user(__kernel_fsid_t *fsid, struct fanotify_fh *fh,
+ 	return info_len;
+ }
+ 
+-static ssize_t copy_event_to_user(struct fsnotify_group *group,
+-				  struct fanotify_event *event,
+-				  char __user *buf, size_t count)
++static int copy_pidfd_info_to_user(int pidfd,
++				   char __user *buf,
++				   size_t count)
+ {
+-	struct fanotify_event_metadata metadata;
+-	struct path *path = fanotify_event_path(event);
+-	struct fanotify_info *info = fanotify_event_info(event);
+-	unsigned int fid_mode = FAN_GROUP_FLAG(group, FANOTIFY_FID_BITS);
+-	struct file *f = NULL;
+-	int ret, fd = FAN_NOFD;
+-	int info_type = 0;
++	struct fanotify_event_info_pidfd info = { };
++	size_t info_len = FANOTIFY_PIDFD_INFO_HDR_LEN;
+ 
+-	pr_debug("%s: group=%p event=%p\n", __func__, group, event);
++	if (WARN_ON_ONCE(info_len > count))
++		return -EFAULT;
+ 
+-	metadata.event_len = FAN_EVENT_METADATA_LEN +
+-				fanotify_event_info_len(fid_mode, event);
+-	metadata.metadata_len = FAN_EVENT_METADATA_LEN;
+-	metadata.vers = FANOTIFY_METADATA_VERSION;
+-	metadata.reserved = 0;
+-	metadata.mask = event->mask & FANOTIFY_OUTGOING_EVENTS;
+-	metadata.pid = pid_vnr(event->pid);
++	info.hdr.info_type = FAN_EVENT_INFO_TYPE_PIDFD;
++	info.hdr.len = info_len;
++	info.pidfd = pidfd;
+ 
+-	if (path && path->mnt && path->dentry) {
+-		fd = create_fd(group, path, &f);
+-		if (fd < 0)
+-			return fd;
+-	}
+-	metadata.fd = fd;
++	if (copy_to_user(buf, &info, info_len))
++		return -EFAULT;
++
++	return info_len;
++}
++
++static int copy_info_records_to_user(struct fanotify_event *event,
++				     struct fanotify_info *info,
++				     unsigned int info_mode, int pidfd,
++				     char __user *buf, size_t count)
++{
++	int ret, total_bytes = 0, info_type = 0;
++	unsigned int fid_mode = info_mode & FANOTIFY_FID_BITS;
++	unsigned int pidfd_mode = info_mode & FAN_REPORT_PIDFD;
+ 
+-	ret = -EFAULT;
+ 	/*
+-	 * Sanity check copy size in case get_one_event() and
+-	 * event_len sizes ever get out of sync.
++	 * Event info records order is as follows:
++	 * 1. dir fid + name
++	 * 2. (optional) new dir fid + new name
++	 * 3. (optional) child fid
+ 	 */
+-	if (WARN_ON_ONCE(metadata.event_len > count))
+-		goto out_close_fd;
++	if (fanotify_event_has_dir_fh(event)) {
++		info_type = info->name_len ? FAN_EVENT_INFO_TYPE_DFID_NAME :
++					     FAN_EVENT_INFO_TYPE_DFID;
+ 
+-	if (copy_to_user(buf, &metadata, FAN_EVENT_METADATA_LEN))
+-		goto out_close_fd;
++		/* FAN_RENAME uses special info types */
++		if (event->mask & FAN_RENAME)
++			info_type = FAN_EVENT_INFO_TYPE_OLD_DFID_NAME;
+ 
+-	buf += FAN_EVENT_METADATA_LEN;
+-	count -= FAN_EVENT_METADATA_LEN;
++		ret = copy_fid_info_to_user(fanotify_event_fsid(event),
++					    fanotify_info_dir_fh(info),
++					    info_type,
++					    fanotify_info_name(info),
++					    info->name_len, buf, count);
++		if (ret < 0)
++			return ret;
+ 
+-	if (fanotify_is_perm_event(event->mask))
+-		FANOTIFY_PERM(event)->fd = fd;
++		buf += ret;
++		count -= ret;
++		total_bytes += ret;
++	}
+ 
+-	/* Event info records order is: dir fid + name, child fid */
+-	if (fanotify_event_dir_fh_len(event)) {
+-		info_type = info->name_len ? FAN_EVENT_INFO_TYPE_DFID_NAME :
+-					     FAN_EVENT_INFO_TYPE_DFID;
+-		ret = copy_info_to_user(fanotify_event_fsid(event),
+-					fanotify_info_dir_fh(info),
+-					info_type, fanotify_info_name(info),
+-					info->name_len, buf, count);
++	/* New dir fid+name may be reported in addition to old dir fid+name */
++	if (fanotify_event_has_dir2_fh(event)) {
++		info_type = FAN_EVENT_INFO_TYPE_NEW_DFID_NAME;
++		ret = copy_fid_info_to_user(fanotify_event_fsid(event),
++					    fanotify_info_dir2_fh(info),
++					    info_type,
++					    fanotify_info_name2(info),
++					    info->name2_len, buf, count);
+ 		if (ret < 0)
+-			goto out_close_fd;
++			return ret;
+ 
+ 		buf += ret;
+ 		count -= ret;
++		total_bytes += ret;
+ 	}
+ 
+-	if (fanotify_event_object_fh_len(event)) {
++	if (fanotify_event_has_object_fh(event)) {
+ 		const char *dot = NULL;
+ 		int dot_len = 0;
+ 
+@@ -395,8 +544,8 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ 			   (event->mask & FAN_ONDIR)) {
+ 			/*
+ 			 * With group flag FAN_REPORT_NAME, if name was not
+-			 * recorded in an event on a directory, report the
+-			 * name "." with info type DFID_NAME.
++			 * recorded in an event on a directory, report the name
++			 * "." with info type DFID_NAME.
+ 			 */
+ 			info_type = FAN_EVENT_INFO_TYPE_DFID_NAME;
+ 			dot = ".";
+@@ -419,14 +568,132 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ 			info_type = FAN_EVENT_INFO_TYPE_FID;
+ 		}
+ 
+-		ret = copy_info_to_user(fanotify_event_fsid(event),
+-					fanotify_event_object_fh(event),
+-					info_type, dot, dot_len, buf, count);
++		ret = copy_fid_info_to_user(fanotify_event_fsid(event),
++					    fanotify_event_object_fh(event),
++					    info_type, dot, dot_len,
++					    buf, count);
+ 		if (ret < 0)
+-			goto out_close_fd;
++			return ret;
++
++		buf += ret;
++		count -= ret;
++		total_bytes += ret;
++	}
++
++	if (pidfd_mode) {
++		ret = copy_pidfd_info_to_user(pidfd, buf, count);
++		if (ret < 0)
++			return ret;
++
++		buf += ret;
++		count -= ret;
++		total_bytes += ret;
++	}
+ 
++	if (fanotify_is_error_event(event->mask)) {
++		ret = copy_error_info_to_user(event, buf, count);
++		if (ret < 0)
++			return ret;
+ 		buf += ret;
+ 		count -= ret;
++		total_bytes += ret;
++	}
++
++	return total_bytes;
++}
++
++static ssize_t copy_event_to_user(struct fsnotify_group *group,
++				  struct fanotify_event *event,
++				  char __user *buf, size_t count)
++{
++	struct fanotify_event_metadata metadata;
++	const struct path *path = fanotify_event_path(event);
++	struct fanotify_info *info = fanotify_event_info(event);
++	unsigned int info_mode = FAN_GROUP_FLAG(group, FANOTIFY_INFO_MODES);
++	unsigned int pidfd_mode = info_mode & FAN_REPORT_PIDFD;
++	struct file *f = NULL;
++	int ret, pidfd = FAN_NOPIDFD, fd = FAN_NOFD;
++
++	pr_debug("%s: group=%p event=%p\n", __func__, group, event);
++
++	metadata.event_len = fanotify_event_len(info_mode, event);
++	metadata.metadata_len = FAN_EVENT_METADATA_LEN;
++	metadata.vers = FANOTIFY_METADATA_VERSION;
++	metadata.reserved = 0;
++	metadata.mask = event->mask & FANOTIFY_OUTGOING_EVENTS;
++	metadata.pid = pid_vnr(event->pid);
++	/*
++	 * For an unprivileged listener, event->pid can be used to identify the
++	 * events generated by the listener process itself, without disclosing
++	 * the pids of other processes.
++	 */
++	if (FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV) &&
++	    task_tgid(current) != event->pid)
++		metadata.pid = 0;
++
++	/*
++	 * For now, fid mode is required for an unprivileged listener and
++	 * fid mode does not report fd in events.  Keep this check anyway
++	 * for safety in case fid mode requirement is relaxed in the future
++	 * to allow unprivileged listener to get events with no fd and no fid.
++	 */
++	if (!FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV) &&
++	    path && path->mnt && path->dentry) {
++		fd = create_fd(group, path, &f);
++		if (fd < 0)
++			return fd;
++	}
++	metadata.fd = fd;
++
++	if (pidfd_mode) {
++		/*
++		 * Complain if the FAN_REPORT_PIDFD and FAN_REPORT_TID mutual
++		 * exclusion is ever lifted. At the time of incoporating pidfd
++		 * support within fanotify, the pidfd API only supported the
++		 * creation of pidfds for thread-group leaders.
++		 */
++		WARN_ON_ONCE(FAN_GROUP_FLAG(group, FAN_REPORT_TID));
++
++		/*
++		 * The PIDTYPE_TGID check for an event->pid is performed
++		 * preemptively in an attempt to catch out cases where the event
++		 * listener reads events after the event generating process has
++		 * already terminated. Report FAN_NOPIDFD to the event listener
++		 * in those cases, with all other pidfd creation errors being
++		 * reported as FAN_EPIDFD.
++		 */
++		if (metadata.pid == 0 ||
++		    !pid_has_task(event->pid, PIDTYPE_TGID)) {
++			pidfd = FAN_NOPIDFD;
++		} else {
++			pidfd = pidfd_create(event->pid, 0);
++			if (pidfd < 0)
++				pidfd = FAN_EPIDFD;
++		}
++	}
++
++	ret = -EFAULT;
++	/*
++	 * Sanity check copy size in case get_one_event() and
++	 * event_len sizes ever get out of sync.
++	 */
++	if (WARN_ON_ONCE(metadata.event_len > count))
++		goto out_close_fd;
++
++	if (copy_to_user(buf, &metadata, FAN_EVENT_METADATA_LEN))
++		goto out_close_fd;
++
++	buf += FAN_EVENT_METADATA_LEN;
++	count -= FAN_EVENT_METADATA_LEN;
++
++	if (fanotify_is_perm_event(event->mask))
++		FANOTIFY_PERM(event)->fd = fd;
++
++	if (info_mode) {
++		ret = copy_info_records_to_user(event, info, info_mode, pidfd,
++						buf, count);
++		if (ret < 0)
++			goto out_close_fd;
+ 	}
+ 
+ 	if (f)
+@@ -439,6 +706,10 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ 		put_unused_fd(fd);
+ 		fput(f);
+ 	}
++
++	if (pidfd >= 0)
++		close_fd(pidfd);
++
+ 	return ret;
+ }
+ 
+@@ -573,6 +844,7 @@ static ssize_t fanotify_write(struct file *file, const char __user *buf, size_t
+ static int fanotify_release(struct inode *ignored, struct file *file)
+ {
+ 	struct fsnotify_group *group = file->private_data;
++	struct fsnotify_event *fsn_event;
+ 
+ 	/*
+ 	 * Stop new events from arriving in the notification queue. since
+@@ -601,13 +873,12 @@ static int fanotify_release(struct inode *ignored, struct file *file)
+ 	 * dequeue them and set the response. They will be freed once the
+ 	 * response is consumed and fanotify_get_response() returns.
+ 	 */
+-	while (!fsnotify_notify_queue_is_empty(group)) {
+-		struct fanotify_event *event;
++	while ((fsn_event = fsnotify_remove_first_event(group))) {
++		struct fanotify_event *event = FANOTIFY_E(fsn_event);
+ 
+-		event = FANOTIFY_E(fsnotify_remove_first_event(group));
+ 		if (!(event->mask & FANOTIFY_PERM_EVENTS)) {
+ 			spin_unlock(&group->notification_lock);
+-			fsnotify_destroy_event(group, &event->fse);
++			fsnotify_destroy_event(group, fsn_event);
+ 		} else {
+ 			finish_permission_event(group, FANOTIFY_PERM(event),
+ 						FAN_ALLOW);
+@@ -702,7 +973,7 @@ static int fanotify_find_path(int dfd, const char __user *filename,
+ 	}
+ 
+ 	/* you can only watch an inode if you have read permissions on it */
+-	ret = inode_permission(path->dentry->d_inode, MAY_READ);
++	ret = path_permission(path, MAY_READ);
+ 	if (ret) {
+ 		path_put(path);
+ 		goto out;
+@@ -720,27 +991,28 @@ static __u32 fanotify_mark_remove_from_mask(struct fsnotify_mark *fsn_mark,
+ 					    __u32 mask, unsigned int flags,
+ 					    __u32 umask, int *destroy)
+ {
+-	__u32 oldmask = 0;
++	__u32 oldmask, newmask;
+ 
+ 	/* umask bits cannot be removed by user */
+ 	mask &= ~umask;
+ 	spin_lock(&fsn_mark->lock);
+-	if (!(flags & FAN_MARK_IGNORED_MASK)) {
+-		oldmask = fsn_mark->mask;
++	oldmask = fsnotify_calc_mask(fsn_mark);
++	if (!(flags & FANOTIFY_MARK_IGNORE_BITS)) {
+ 		fsn_mark->mask &= ~mask;
+ 	} else {
+-		fsn_mark->ignored_mask &= ~mask;
++		fsn_mark->ignore_mask &= ~mask;
+ 	}
++	newmask = fsnotify_calc_mask(fsn_mark);
+ 	/*
+ 	 * We need to keep the mark around even if remaining mask cannot
+ 	 * result in any events (e.g. mask == FAN_ONDIR) to support incremenal
+ 	 * changes to the mask.
+ 	 * Destroy mark when only umask bits remain.
+ 	 */
+-	*destroy = !((fsn_mark->mask | fsn_mark->ignored_mask) & ~umask);
++	*destroy = !((fsn_mark->mask | fsn_mark->ignore_mask) & ~umask);
+ 	spin_unlock(&fsn_mark->lock);
+ 
+-	return mask & oldmask;
++	return oldmask & ~newmask;
+ }
+ 
+ static int fanotify_remove_mark(struct fsnotify_group *group,
+@@ -751,10 +1023,10 @@ static int fanotify_remove_mark(struct fsnotify_group *group,
+ 	__u32 removed;
+ 	int destroy_mark;
+ 
+-	mutex_lock(&group->mark_mutex);
++	fsnotify_group_lock(group);
+ 	fsn_mark = fsnotify_find_mark(connp, group);
+ 	if (!fsn_mark) {
+-		mutex_unlock(&group->mark_mutex);
++		fsnotify_group_unlock(group);
+ 		return -ENOENT;
+ 	}
+ 
+@@ -764,7 +1036,7 @@ static int fanotify_remove_mark(struct fsnotify_group *group,
+ 		fsnotify_recalc_mask(fsn_mark->connector);
+ 	if (destroy_mark)
+ 		fsnotify_detach_mark(fsn_mark);
+-	mutex_unlock(&group->mark_mutex);
++	fsnotify_group_unlock(group);
+ 	if (destroy_mark)
+ 		fsnotify_free_mark(fsn_mark);
+ 
+@@ -797,76 +1069,199 @@ static int fanotify_remove_inode_mark(struct fsnotify_group *group,
+ 				    flags, umask);
+ }
+ 
+-static __u32 fanotify_mark_add_to_mask(struct fsnotify_mark *fsn_mark,
+-				       __u32 mask,
+-				       unsigned int flags)
++static bool fanotify_mark_update_flags(struct fsnotify_mark *fsn_mark,
++				       unsigned int fan_flags)
++{
++	bool want_iref = !(fan_flags & FAN_MARK_EVICTABLE);
++	unsigned int ignore = fan_flags & FANOTIFY_MARK_IGNORE_BITS;
++	bool recalc = false;
++
++	/*
++	 * When using FAN_MARK_IGNORE for the first time, mark starts using
++	 * independent event flags in ignore mask.  After that, trying to
++	 * update the ignore mask with the old FAN_MARK_IGNORED_MASK API
++	 * will result in EEXIST error.
++	 */
++	if (ignore == FAN_MARK_IGNORE)
++		fsn_mark->flags |= FSNOTIFY_MARK_FLAG_HAS_IGNORE_FLAGS;
++
++	/*
++	 * Setting FAN_MARK_IGNORED_SURV_MODIFY for the first time may lead to
++	 * the removal of the FS_MODIFY bit in calculated mask if it was set
++	 * because of an ignore mask that is now going to survive FS_MODIFY.
++	 */
++	if (ignore && (fan_flags & FAN_MARK_IGNORED_SURV_MODIFY) &&
++	    !(fsn_mark->flags & FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY)) {
++		fsn_mark->flags |= FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY;
++		if (!(fsn_mark->mask & FS_MODIFY))
++			recalc = true;
++	}
++
++	if (fsn_mark->connector->type != FSNOTIFY_OBJ_TYPE_INODE ||
++	    want_iref == !(fsn_mark->flags & FSNOTIFY_MARK_FLAG_NO_IREF))
++		return recalc;
++
++	/*
++	 * NO_IREF may be removed from a mark, but not added.
++	 * When removed, fsnotify_recalc_mask() will take the inode ref.
++	 */
++	WARN_ON_ONCE(!want_iref);
++	fsn_mark->flags &= ~FSNOTIFY_MARK_FLAG_NO_IREF;
++
++	return true;
++}
++
++static bool fanotify_mark_add_to_mask(struct fsnotify_mark *fsn_mark,
++				      __u32 mask, unsigned int fan_flags)
+ {
+-	__u32 oldmask = -1;
++	bool recalc;
+ 
+ 	spin_lock(&fsn_mark->lock);
+-	if (!(flags & FAN_MARK_IGNORED_MASK)) {
+-		oldmask = fsn_mark->mask;
++	if (!(fan_flags & FANOTIFY_MARK_IGNORE_BITS))
+ 		fsn_mark->mask |= mask;
+-	} else {
+-		fsn_mark->ignored_mask |= mask;
+-		if (flags & FAN_MARK_IGNORED_SURV_MODIFY)
+-			fsn_mark->flags |= FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY;
+-	}
++	else
++		fsn_mark->ignore_mask |= mask;
++
++	recalc = fsnotify_calc_mask(fsn_mark) &
++		~fsnotify_conn_mask(fsn_mark->connector);
++
++	recalc |= fanotify_mark_update_flags(fsn_mark, fan_flags);
+ 	spin_unlock(&fsn_mark->lock);
+ 
+-	return mask & ~oldmask;
++	return recalc;
+ }
+ 
+ static struct fsnotify_mark *fanotify_add_new_mark(struct fsnotify_group *group,
+ 						   fsnotify_connp_t *connp,
+-						   unsigned int type,
++						   unsigned int obj_type,
++						   unsigned int fan_flags,
+ 						   __kernel_fsid_t *fsid)
+ {
++	struct ucounts *ucounts = group->fanotify_data.ucounts;
+ 	struct fsnotify_mark *mark;
+ 	int ret;
+ 
+-	if (atomic_read(&group->num_marks) > group->fanotify_data.max_marks)
++	/*
++	 * Enforce per user marks limits per user in all containing user ns.
++	 * A group with FAN_UNLIMITED_MARKS does not contribute to mark count
++	 * in the limited groups account.
++	 */
++	if (!FAN_GROUP_FLAG(group, FAN_UNLIMITED_MARKS) &&
++	    !inc_ucount(ucounts->ns, ucounts->uid, UCOUNT_FANOTIFY_MARKS))
+ 		return ERR_PTR(-ENOSPC);
+ 
+ 	mark = kmem_cache_alloc(fanotify_mark_cache, GFP_KERNEL);
+-	if (!mark)
+-		return ERR_PTR(-ENOMEM);
++	if (!mark) {
++		ret = -ENOMEM;
++		goto out_dec_ucounts;
++	}
+ 
+ 	fsnotify_init_mark(mark, group);
+-	ret = fsnotify_add_mark_locked(mark, connp, type, 0, fsid);
++	if (fan_flags & FAN_MARK_EVICTABLE)
++		mark->flags |= FSNOTIFY_MARK_FLAG_NO_IREF;
++
++	ret = fsnotify_add_mark_locked(mark, connp, obj_type, 0, fsid);
+ 	if (ret) {
+ 		fsnotify_put_mark(mark);
+-		return ERR_PTR(ret);
++		goto out_dec_ucounts;
+ 	}
+ 
+ 	return mark;
++
++out_dec_ucounts:
++	if (!FAN_GROUP_FLAG(group, FAN_UNLIMITED_MARKS))
++		dec_ucount(ucounts, UCOUNT_FANOTIFY_MARKS);
++	return ERR_PTR(ret);
+ }
+ 
++static int fanotify_group_init_error_pool(struct fsnotify_group *group)
++{
++	if (mempool_initialized(&group->fanotify_data.error_events_pool))
++		return 0;
++
++	return mempool_init_kmalloc_pool(&group->fanotify_data.error_events_pool,
++					 FANOTIFY_DEFAULT_FEE_POOL_SIZE,
++					 sizeof(struct fanotify_error_event));
++}
++
++static int fanotify_may_update_existing_mark(struct fsnotify_mark *fsn_mark,
++					      unsigned int fan_flags)
++{
++	/*
++	 * Non evictable mark cannot be downgraded to evictable mark.
++	 */
++	if (fan_flags & FAN_MARK_EVICTABLE &&
++	    !(fsn_mark->flags & FSNOTIFY_MARK_FLAG_NO_IREF))
++		return -EEXIST;
++
++	/*
++	 * New ignore mask semantics cannot be downgraded to old semantics.
++	 */
++	if (fan_flags & FAN_MARK_IGNORED_MASK &&
++	    fsn_mark->flags & FSNOTIFY_MARK_FLAG_HAS_IGNORE_FLAGS)
++		return -EEXIST;
++
++	/*
++	 * An ignore mask that survives modify could never be downgraded to not
++	 * survive modify.  With new FAN_MARK_IGNORE semantics we make that rule
++	 * explicit and return an error when trying to update the ignore mask
++	 * without the original FAN_MARK_IGNORED_SURV_MODIFY value.
++	 */
++	if (fan_flags & FAN_MARK_IGNORE &&
++	    !(fan_flags & FAN_MARK_IGNORED_SURV_MODIFY) &&
++	    fsn_mark->flags & FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY)
++		return -EEXIST;
++
++	return 0;
++}
+ 
+ static int fanotify_add_mark(struct fsnotify_group *group,
+-			     fsnotify_connp_t *connp, unsigned int type,
+-			     __u32 mask, unsigned int flags,
++			     fsnotify_connp_t *connp, unsigned int obj_type,
++			     __u32 mask, unsigned int fan_flags,
+ 			     __kernel_fsid_t *fsid)
+ {
+ 	struct fsnotify_mark *fsn_mark;
+-	__u32 added;
++	bool recalc;
++	int ret = 0;
+ 
+-	mutex_lock(&group->mark_mutex);
++	fsnotify_group_lock(group);
+ 	fsn_mark = fsnotify_find_mark(connp, group);
+ 	if (!fsn_mark) {
+-		fsn_mark = fanotify_add_new_mark(group, connp, type, fsid);
++		fsn_mark = fanotify_add_new_mark(group, connp, obj_type,
++						 fan_flags, fsid);
+ 		if (IS_ERR(fsn_mark)) {
+-			mutex_unlock(&group->mark_mutex);
++			fsnotify_group_unlock(group);
+ 			return PTR_ERR(fsn_mark);
+ 		}
+ 	}
+-	added = fanotify_mark_add_to_mask(fsn_mark, mask, flags);
+-	if (added & ~fsnotify_conn_mask(fsn_mark->connector))
++
++	/*
++	 * Check if requested mark flags conflict with an existing mark flags.
++	 */
++	ret = fanotify_may_update_existing_mark(fsn_mark, fan_flags);
++	if (ret)
++		goto out;
++
++	/*
++	 * Error events are pre-allocated per group, only if strictly
++	 * needed (i.e. FAN_FS_ERROR was requested).
++	 */
++	if (!(fan_flags & FANOTIFY_MARK_IGNORE_BITS) &&
++	    (mask & FAN_FS_ERROR)) {
++		ret = fanotify_group_init_error_pool(group);
++		if (ret)
++			goto out;
++	}
++
++	recalc = fanotify_mark_add_to_mask(fsn_mark, mask, fan_flags);
++	if (recalc)
+ 		fsnotify_recalc_mask(fsn_mark->connector);
+-	mutex_unlock(&group->mark_mutex);
++
++out:
++	fsnotify_group_unlock(group);
+ 
+ 	fsnotify_put_mark(fsn_mark);
+-	return 0;
++	return ret;
+ }
+ 
+ static int fanotify_add_vfsmount_mark(struct fsnotify_group *group,
+@@ -893,10 +1288,10 @@ static int fanotify_add_inode_mark(struct fsnotify_group *group,
+ 
+ 	/*
+ 	 * If some other task has this inode open for write we should not add
+-	 * an ignored mark, unless that ignored mark is supposed to survive
++	 * an ignore mask, unless that ignore mask is supposed to survive
+ 	 * modification changes anyway.
+ 	 */
+-	if ((flags & FAN_MARK_IGNORED_MASK) &&
++	if ((flags & FANOTIFY_MARK_IGNORE_BITS) &&
+ 	    !(flags & FAN_MARK_IGNORED_SURV_MODIFY) &&
+ 	    inode_is_open_for_write(inode))
+ 		return 0;
+@@ -919,20 +1314,49 @@ static struct fsnotify_event *fanotify_alloc_overflow_event(void)
+ 	return &oevent->fse;
+ }
+ 
++static struct hlist_head *fanotify_alloc_merge_hash(void)
++{
++	struct hlist_head *hash;
++
++	hash = kmalloc(sizeof(struct hlist_head) << FANOTIFY_HTABLE_BITS,
++		       GFP_KERNEL_ACCOUNT);
++	if (!hash)
++		return NULL;
++
++	__hash_init(hash, FANOTIFY_HTABLE_SIZE);
++
++	return hash;
++}
++
+ /* fanotify syscalls */
+ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
+ {
+ 	struct fsnotify_group *group;
+ 	int f_flags, fd;
+-	struct user_struct *user;
+ 	unsigned int fid_mode = flags & FANOTIFY_FID_BITS;
+ 	unsigned int class = flags & FANOTIFY_CLASS_BITS;
++	unsigned int internal_flags = 0;
+ 
+ 	pr_debug("%s: flags=%x event_f_flags=%x\n",
+ 		 __func__, flags, event_f_flags);
+ 
+-	if (!capable(CAP_SYS_ADMIN))
+-		return -EPERM;
++	if (!capable(CAP_SYS_ADMIN)) {
++		/*
++		 * An unprivileged user can setup an fanotify group with
++		 * limited functionality - an unprivileged group is limited to
++		 * notification events with file handles and it cannot use
++		 * unlimited queue/marks.
++		 */
++		if ((flags & FANOTIFY_ADMIN_INIT_FLAGS) || !fid_mode)
++			return -EPERM;
++
++		/*
++		 * Setting the internal flag FANOTIFY_UNPRIV on the group
++		 * prevents setting mount/filesystem marks on this group and
++		 * prevents reporting pid and open fd in events.
++		 */
++		internal_flags |= FANOTIFY_UNPRIV;
++	}
+ 
+ #ifdef CONFIG_AUDITSYSCALL
+ 	if (flags & ~(FANOTIFY_INIT_FLAGS | FAN_ENABLE_AUDIT))
+@@ -941,6 +1365,14 @@ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
+ #endif
+ 		return -EINVAL;
+ 
++	/*
++	 * A pidfd can only be returned for a thread-group leader; thus
++	 * FAN_REPORT_PIDFD and FAN_REPORT_TID need to remain mutually
++	 * exclusive.
++	 */
++	if ((flags & FAN_REPORT_PIDFD) && (flags & FAN_REPORT_TID))
++		return -EINVAL;
++
+ 	if (event_f_flags & ~FANOTIFY_INIT_ALL_EVENT_F_BITS)
+ 		return -EINVAL;
+ 
+@@ -963,30 +1395,46 @@ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
+ 	if ((fid_mode & FAN_REPORT_NAME) && !(fid_mode & FAN_REPORT_DIR_FID))
+ 		return -EINVAL;
+ 
+-	user = get_current_user();
+-	if (atomic_read(&user->fanotify_listeners) > FANOTIFY_DEFAULT_MAX_LISTENERS) {
+-		free_uid(user);
+-		return -EMFILE;
+-	}
++	/*
++	 * FAN_REPORT_TARGET_FID requires FAN_REPORT_NAME and FAN_REPORT_FID
++	 * and is used as an indication to report both dir and child fid on all
++	 * dirent events.
++	 */
++	if ((fid_mode & FAN_REPORT_TARGET_FID) &&
++	    (!(fid_mode & FAN_REPORT_NAME) || !(fid_mode & FAN_REPORT_FID)))
++		return -EINVAL;
+ 
+-	f_flags = O_RDWR | FMODE_NONOTIFY;
++	f_flags = O_RDWR | __FMODE_NONOTIFY;
+ 	if (flags & FAN_CLOEXEC)
+ 		f_flags |= O_CLOEXEC;
+ 	if (flags & FAN_NONBLOCK)
+ 		f_flags |= O_NONBLOCK;
+ 
+ 	/* fsnotify_alloc_group takes a ref.  Dropped in fanotify_release */
+-	group = fsnotify_alloc_group(&fanotify_fsnotify_ops);
++	group = fsnotify_alloc_group(&fanotify_fsnotify_ops,
++				     FSNOTIFY_GROUP_USER | FSNOTIFY_GROUP_NOFS);
+ 	if (IS_ERR(group)) {
+-		free_uid(user);
+ 		return PTR_ERR(group);
+ 	}
+ 
+-	group->fanotify_data.user = user;
+-	group->fanotify_data.flags = flags;
+-	atomic_inc(&user->fanotify_listeners);
++	/* Enforce groups limits per user in all containing user ns */
++	group->fanotify_data.ucounts = inc_ucount(current_user_ns(),
++						  current_euid(),
++						  UCOUNT_FANOTIFY_GROUPS);
++	if (!group->fanotify_data.ucounts) {
++		fd = -EMFILE;
++		goto out_destroy_group;
++	}
++
++	group->fanotify_data.flags = flags | internal_flags;
+ 	group->memcg = get_mem_cgroup_from_mm(current->mm);
+ 
++	group->fanotify_data.merge_hash = fanotify_alloc_merge_hash();
++	if (!group->fanotify_data.merge_hash) {
++		fd = -ENOMEM;
++		goto out_destroy_group;
++	}
++
+ 	group->overflow_event = fanotify_alloc_overflow_event();
+ 	if (unlikely(!group->overflow_event)) {
+ 		fd = -ENOMEM;
+@@ -1019,16 +1467,13 @@ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
+ 			goto out_destroy_group;
+ 		group->max_events = UINT_MAX;
+ 	} else {
+-		group->max_events = FANOTIFY_DEFAULT_MAX_EVENTS;
++		group->max_events = fanotify_max_queued_events;
+ 	}
+ 
+ 	if (flags & FAN_UNLIMITED_MARKS) {
+ 		fd = -EPERM;
+ 		if (!capable(CAP_SYS_ADMIN))
+ 			goto out_destroy_group;
+-		group->fanotify_data.max_marks = UINT_MAX;
+-	} else {
+-		group->fanotify_data.max_marks = FANOTIFY_DEFAULT_MAX_MARKS;
+ 	}
+ 
+ 	if (flags & FAN_ENABLE_AUDIT) {
+@@ -1048,16 +1493,15 @@ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
+ 	return fd;
+ }
+ 
+-/* Check if filesystem can encode a unique fid */
+-static int fanotify_test_fid(struct path *path, __kernel_fsid_t *fsid)
++static int fanotify_test_fsid(struct dentry *dentry, __kernel_fsid_t *fsid)
+ {
+ 	__kernel_fsid_t root_fsid;
+ 	int err;
+ 
+ 	/*
+-	 * Make sure path is not in filesystem with zero fsid (e.g. tmpfs).
++	 * Make sure dentry is not of a filesystem with zero fsid (e.g. fuse).
+ 	 */
+-	err = vfs_get_fsid(path->dentry, fsid);
++	err = vfs_get_fsid(dentry, fsid);
+ 	if (err)
+ 		return err;
+ 
+@@ -1065,10 +1509,10 @@ static int fanotify_test_fid(struct path *path, __kernel_fsid_t *fsid)
+ 		return -ENODEV;
+ 
+ 	/*
+-	 * Make sure path is not inside a filesystem subvolume (e.g. btrfs)
++	 * Make sure dentry is not of a filesystem subvolume (e.g. btrfs)
+ 	 * which uses a different fsid than sb root.
+ 	 */
+-	err = vfs_get_fsid(path->dentry->d_sb->s_root, &root_fsid);
++	err = vfs_get_fsid(dentry->d_sb->s_root, &root_fsid);
+ 	if (err)
+ 		return err;
+ 
+@@ -1076,6 +1520,12 @@ static int fanotify_test_fid(struct path *path, __kernel_fsid_t *fsid)
+ 	    root_fsid.val[1] != fsid->val[1])
+ 		return -EXDEV;
+ 
++	return 0;
++}
++
++/* Check if filesystem can encode a unique fid */
++static int fanotify_test_fid(struct dentry *dentry)
++{
+ 	/*
+ 	 * We need to make sure that the file system supports at least
+ 	 * encoding a file handle so user can use name_to_handle_at() to
+@@ -1083,17 +1533,22 @@ static int fanotify_test_fid(struct path *path, __kernel_fsid_t *fsid)
+ 	 * objects. However, name_to_handle_at() requires that the
+ 	 * filesystem also supports decoding file handles.
+ 	 */
+-	if (!path->dentry->d_sb->s_export_op ||
+-	    !path->dentry->d_sb->s_export_op->fh_to_dentry)
++	if (!dentry->d_sb->s_export_op ||
++	    !dentry->d_sb->s_export_op->fh_to_dentry)
+ 		return -EOPNOTSUPP;
+ 
+ 	return 0;
+ }
+ 
+-static int fanotify_events_supported(struct path *path, __u64 mask,
++static int fanotify_events_supported(struct fsnotify_group *group,
++				     const struct path *path, __u64 mask,
+ 				     unsigned int flags)
+ {
+ 	unsigned int mark_type = flags & FANOTIFY_MARK_TYPE_BITS;
++	/* Strict validation of events in non-dir inode mask with v5.17+ APIs */
++	bool strict_dir_events = FAN_GROUP_FLAG(group, FAN_REPORT_TARGET_FID) ||
++				 (mask & FAN_RENAME) ||
++				 (flags & FAN_MARK_IGNORE);
+ 
+ 	/*
+ 	 * Some filesystems such as 'proc' acquire unusual locks when opening
+@@ -1121,6 +1576,15 @@ static int fanotify_events_supported(struct path *path, __u64 mask,
+ 	    path->mnt->mnt_sb->s_flags & SB_NOUSER)
+ 		return -EINVAL;
+ 
++	/*
++	 * We shouldn't have allowed setting dirent events and the directory
++	 * flags FAN_ONDIR and FAN_EVENT_ON_CHILD in mask of non-dir inode,
++	 * but because we always allowed it, error only when using new APIs.
++	 */
++	if (strict_dir_events && mark_type == FAN_MARK_INODE &&
++	    !d_is_dir(path->dentry) && (mask & FANOTIFY_DIRONLY_EVENT_BITS))
++		return -ENOTDIR;
++
+ 	return 0;
+ }
+ 
+@@ -1135,7 +1599,8 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
+ 	__kernel_fsid_t __fsid, *fsid = NULL;
+ 	u32 valid_mask = FANOTIFY_EVENTS | FANOTIFY_EVENT_FLAGS;
+ 	unsigned int mark_type = flags & FANOTIFY_MARK_TYPE_BITS;
+-	bool ignored = flags & FAN_MARK_IGNORED_MASK;
++	unsigned int mark_cmd = flags & FANOTIFY_MARK_CMD_BITS;
++	unsigned int ignore = flags & FANOTIFY_MARK_IGNORE_BITS;
+ 	unsigned int obj_type, fid_mode;
+ 	u32 umask = 0;
+ 	int ret;
+@@ -1144,7 +1609,7 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
+ 		 __func__, fanotify_fd, flags, dfd, pathname, mask);
+ 
+ 	/* we only use the lower 32 bits as of right now. */
+-	if (mask & ((__u64)0xffffffff << 32))
++	if (upper_32_bits(mask))
+ 		return -EINVAL;
+ 
+ 	if (flags & ~FANOTIFY_MARK_FLAGS)
+@@ -1164,7 +1629,7 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
+ 		return -EINVAL;
+ 	}
+ 
+-	switch (flags & (FAN_MARK_ADD | FAN_MARK_REMOVE | FAN_MARK_FLUSH)) {
++	switch (mark_cmd) {
+ 	case FAN_MARK_ADD:
+ 	case FAN_MARK_REMOVE:
+ 		if (!mask)
+@@ -1184,9 +1649,19 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
+ 	if (mask & ~valid_mask)
+ 		return -EINVAL;
+ 
+-	/* Event flags (ONDIR, ON_CHILD) are meaningless in ignored mask */
+-	if (ignored)
++
++	/* We don't allow FAN_MARK_IGNORE & FAN_MARK_IGNORED_MASK together */
++	if (ignore == (FAN_MARK_IGNORE | FAN_MARK_IGNORED_MASK))
++		return -EINVAL;
++
++	/*
++	 * Event flags (FAN_ONDIR, FAN_EVENT_ON_CHILD) have no effect with
++	 * FAN_MARK_IGNORED_MASK.
++	 */
++	if (ignore == FAN_MARK_IGNORED_MASK) {
+ 		mask &= ~FANOTIFY_EVENT_FLAGS;
++		umask = FANOTIFY_EVENT_FLAGS;
++	}
+ 
+ 	f = fdget(fanotify_fd);
+ 	if (unlikely(!f.file))
+@@ -1198,6 +1673,17 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
+ 		goto fput_and_out;
+ 	group = f.file->private_data;
+ 
++	/*
++	 * An unprivileged user is not allowed to setup mount nor filesystem
++	 * marks.  This also includes setting up such marks by a group that
++	 * was initialized by an unprivileged user.
++	 */
++	ret = -EPERM;
++	if ((!capable(CAP_SYS_ADMIN) ||
++	     FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV)) &&
++	    mark_type != FAN_MARK_INODE)
++		goto fput_and_out;
++
+ 	/*
+ 	 * group->priority == FS_PRIO_0 == FAN_CLASS_NOTIF.  These are not
+ 	 * allowed to set permissions events.
+@@ -1207,19 +1693,39 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
+ 	    group->priority == FS_PRIO_0)
+ 		goto fput_and_out;
+ 
++	if (mask & FAN_FS_ERROR &&
++	    mark_type != FAN_MARK_FILESYSTEM)
++		goto fput_and_out;
++
++	/*
++	 * Evictable is only relevant for inode marks, because only inode object
++	 * can be evicted on memory pressure.
++	 */
++	if (flags & FAN_MARK_EVICTABLE &&
++	     mark_type != FAN_MARK_INODE)
++		goto fput_and_out;
++
+ 	/*
+-	 * Events with data type inode do not carry enough information to report
+-	 * event->fd, so we do not allow setting a mask for inode events unless
+-	 * group supports reporting fid.
+-	 * inode events are not supported on a mount mark, because they do not
+-	 * carry enough information (i.e. path) to be filtered by mount point.
++	 * Events that do not carry enough information to report
++	 * event->fd require a group that supports reporting fid.  Those
++	 * events are not supported on a mount mark, because they do not
++	 * carry enough information (i.e. path) to be filtered by mount
++	 * point.
+ 	 */
+ 	fid_mode = FAN_GROUP_FLAG(group, FANOTIFY_FID_BITS);
+-	if (mask & FANOTIFY_INODE_EVENTS &&
++	if (mask & ~(FANOTIFY_FD_EVENTS|FANOTIFY_EVENT_FLAGS) &&
+ 	    (!fid_mode || mark_type == FAN_MARK_MOUNT))
+ 		goto fput_and_out;
+ 
+-	if (flags & FAN_MARK_FLUSH) {
++	/*
++	 * FAN_RENAME uses special info type records to report the old and
++	 * new parent+name.  Reporting only old and new parent id is less
++	 * useful and was not implemented.
++	 */
++	if (mask & FAN_RENAME && !(fid_mode & FAN_REPORT_NAME))
++		goto fput_and_out;
++
++	if (mark_cmd == FAN_MARK_FLUSH) {
+ 		ret = 0;
+ 		if (mark_type == FAN_MARK_MOUNT)
+ 			fsnotify_clear_vfsmount_marks_by_group(group);
+@@ -1235,14 +1741,18 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
+ 	if (ret)
+ 		goto fput_and_out;
+ 
+-	if (flags & FAN_MARK_ADD) {
+-		ret = fanotify_events_supported(&path, mask, flags);
++	if (mark_cmd == FAN_MARK_ADD) {
++		ret = fanotify_events_supported(group, &path, mask, flags);
+ 		if (ret)
+ 			goto path_put_and_out;
+ 	}
+ 
+ 	if (fid_mode) {
+-		ret = fanotify_test_fid(&path, &__fsid);
++		ret = fanotify_test_fsid(path.dentry, &__fsid);
++		if (ret)
++			goto path_put_and_out;
++
++		ret = fanotify_test_fid(path.dentry);
+ 		if (ret)
+ 			goto path_put_and_out;
+ 
+@@ -1255,6 +1765,13 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
+ 	else
+ 		mnt = path.mnt;
+ 
++	ret = mnt ? -EINVAL : -EISDIR;
++	/* FAN_MARK_IGNORE requires SURV_MODIFY for sb/mount/dir marks */
++	if (mark_cmd == FAN_MARK_ADD && ignore == FAN_MARK_IGNORE &&
++	    (mnt || S_ISDIR(inode->i_mode)) &&
++	    !(flags & FAN_MARK_IGNORED_SURV_MODIFY))
++		goto path_put_and_out;
++
+ 	/* Mask out FAN_EVENT_ON_CHILD flag for sb/mount/non-dir marks */
+ 	if (mnt || !S_ISDIR(inode->i_mode)) {
+ 		mask &= ~FAN_EVENT_ON_CHILD;
+@@ -1264,12 +1781,12 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
+ 		 * events with parent/name info for non-directory.
+ 		 */
+ 		if ((fid_mode & FAN_REPORT_DIR_FID) &&
+-		    (flags & FAN_MARK_ADD) && !ignored)
++		    (flags & FAN_MARK_ADD) && !ignore)
+ 			mask |= FAN_EVENT_ON_CHILD;
+ 	}
+ 
+ 	/* create/update an inode mark */
+-	switch (flags & (FAN_MARK_ADD | FAN_MARK_REMOVE)) {
++	switch (mark_cmd) {
+ 	case FAN_MARK_ADD:
+ 		if (mark_type == FAN_MARK_MOUNT)
+ 			ret = fanotify_add_vfsmount_mark(group, mnt, mask,
+@@ -1330,8 +1847,24 @@ SYSCALL32_DEFINE6(fanotify_mark,
+  */
+ static int __init fanotify_user_setup(void)
+ {
+-	BUILD_BUG_ON(HWEIGHT32(FANOTIFY_INIT_FLAGS) != 10);
+-	BUILD_BUG_ON(HWEIGHT32(FANOTIFY_MARK_FLAGS) != 9);
++	struct sysinfo si;
++	int max_marks;
++
++	si_meminfo(&si);
++	/*
++	 * Allow up to 1% of addressable memory to be accounted for per user
++	 * marks limited to the range [8192, 1048576]. mount and sb marks are
++	 * a lot cheaper than inode marks, but there is no reason for a user
++	 * to have many of those, so calculate by the cost of inode marks.
++	 */
++	max_marks = (((si.totalram - si.totalhigh) / 100) << PAGE_SHIFT) /
++		    INODE_MARK_COST;
++	max_marks = clamp(max_marks, FANOTIFY_OLD_DEFAULT_MAX_MARKS,
++				     FANOTIFY_DEFAULT_MAX_USER_MARKS);
++
++	BUILD_BUG_ON(FANOTIFY_INIT_FLAGS & FANOTIFY_INTERNAL_GROUP_FLAGS);
++	BUILD_BUG_ON(HWEIGHT32(FANOTIFY_INIT_FLAGS) != 12);
++	BUILD_BUG_ON(HWEIGHT32(FANOTIFY_MARK_FLAGS) != 11);
+ 
+ 	fanotify_mark_cache = KMEM_CACHE(fsnotify_mark,
+ 					 SLAB_PANIC|SLAB_ACCOUNT);
+@@ -1344,6 +1877,11 @@ static int __init fanotify_user_setup(void)
+ 			KMEM_CACHE(fanotify_perm_event, SLAB_PANIC);
+ 	}
+ 
++	fanotify_max_queued_events = FANOTIFY_DEFAULT_MAX_EVENTS;
++	init_user_ns.ucount_max[UCOUNT_FANOTIFY_GROUPS] =
++					FANOTIFY_DEFAULT_MAX_GROUPS;
++	init_user_ns.ucount_max[UCOUNT_FANOTIFY_MARKS] = max_marks;
++
+ 	return 0;
+ }
+ device_initcall(fanotify_user_setup);
+diff --git a/fs/notify/fdinfo.c b/fs/notify/fdinfo.c
+index 765b50aeadd28..55081ae3a6ec0 100644
+--- a/fs/notify/fdinfo.c
++++ b/fs/notify/fdinfo.c
+@@ -14,6 +14,7 @@
+ #include <linux/exportfs.h>
+ 
+ #include "inotify/inotify.h"
++#include "fanotify/fanotify.h"
+ #include "fdinfo.h"
+ #include "fsnotify.h"
+ 
+@@ -28,13 +29,13 @@ static void show_fdinfo(struct seq_file *m, struct file *f,
+ 	struct fsnotify_group *group = f->private_data;
+ 	struct fsnotify_mark *mark;
+ 
+-	mutex_lock(&group->mark_mutex);
++	fsnotify_group_lock(group);
+ 	list_for_each_entry(mark, &group->marks_list, g_list) {
+ 		show(m, mark);
+ 		if (seq_has_overflowed(m))
+ 			break;
+ 	}
+-	mutex_unlock(&group->mark_mutex);
++	fsnotify_group_unlock(group);
+ }
+ 
+ #if defined(CONFIG_EXPORTFS)
+@@ -103,19 +104,16 @@ void inotify_show_fdinfo(struct seq_file *m, struct file *f)
+ 
+ static void fanotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark)
+ {
+-	unsigned int mflags = 0;
++	unsigned int mflags = fanotify_mark_user_flags(mark);
+ 	struct inode *inode;
+ 
+-	if (mark->flags & FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY)
+-		mflags |= FAN_MARK_IGNORED_SURV_MODIFY;
+-
+ 	if (mark->connector->type == FSNOTIFY_OBJ_TYPE_INODE) {
+ 		inode = igrab(fsnotify_conn_inode(mark->connector));
+ 		if (!inode)
+ 			return;
+ 		seq_printf(m, "fanotify ino:%lx sdev:%x mflags:%x mask:%x ignored_mask:%x ",
+ 			   inode->i_ino, inode->i_sb->s_dev,
+-			   mflags, mark->mask, mark->ignored_mask);
++			   mflags, mark->mask, mark->ignore_mask);
+ 		show_mark_fhandle(m, inode);
+ 		seq_putc(m, '\n');
+ 		iput(inode);
+@@ -123,12 +121,12 @@ static void fanotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark)
+ 		struct mount *mnt = fsnotify_conn_mount(mark->connector);
+ 
+ 		seq_printf(m, "fanotify mnt_id:%x mflags:%x mask:%x ignored_mask:%x\n",
+-			   mnt->mnt_id, mflags, mark->mask, mark->ignored_mask);
++			   mnt->mnt_id, mflags, mark->mask, mark->ignore_mask);
+ 	} else if (mark->connector->type == FSNOTIFY_OBJ_TYPE_SB) {
+ 		struct super_block *sb = fsnotify_conn_sb(mark->connector);
+ 
+ 		seq_printf(m, "fanotify sdev:%x mflags:%x mask:%x ignored_mask:%x\n",
+-			   sb->s_dev, mflags, mark->mask, mark->ignored_mask);
++			   sb->s_dev, mflags, mark->mask, mark->ignore_mask);
+ 	}
+ }
+ 
+@@ -137,7 +135,8 @@ void fanotify_show_fdinfo(struct seq_file *m, struct file *f)
+ 	struct fsnotify_group *group = f->private_data;
+ 
+ 	seq_printf(m, "fanotify flags:%x event-flags:%x\n",
+-		   group->fanotify_data.flags, group->fanotify_data.f_flags);
++		   group->fanotify_data.flags & FANOTIFY_INIT_FLAGS,
++		   group->fanotify_data.f_flags);
+ 
+ 	show_fdinfo(m, f, fanotify_fdinfo);
+ }
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index 30d422b8c0fc7..7974e91ffe134 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -70,8 +70,7 @@ static void fsnotify_unmount_inodes(struct super_block *sb)
+ 		spin_unlock(&inode->i_lock);
+ 		spin_unlock(&sb->s_inode_list_lock);
+ 
+-		if (iput_inode)
+-			iput(iput_inode);
++		iput(iput_inode);
+ 
+ 		/* for each watch, send FS_UNMOUNT and then remove it */
+ 		fsnotify_inode(inode, FS_UNMOUNT);
+@@ -85,24 +84,23 @@ static void fsnotify_unmount_inodes(struct super_block *sb)
+ 	}
+ 	spin_unlock(&sb->s_inode_list_lock);
+ 
+-	if (iput_inode)
+-		iput(iput_inode);
+-	/* Wait for outstanding inode references from connectors */
+-	wait_var_event(&sb->s_fsnotify_inode_refs,
+-		       !atomic_long_read(&sb->s_fsnotify_inode_refs));
++	iput(iput_inode);
+ }
+ 
+ void fsnotify_sb_delete(struct super_block *sb)
+ {
+ 	fsnotify_unmount_inodes(sb);
+ 	fsnotify_clear_marks_by_sb(sb);
++	/* Wait for outstanding object references from connectors */
++	wait_var_event(&sb->s_fsnotify_connectors,
++		       !atomic_long_read(&sb->s_fsnotify_connectors));
+ }
+ 
+ /*
+  * Given an inode, first check if we care what happens to our children.  Inotify
+  * and dnotify both tell their parents about events.  If we care about any event
+  * on a child we run all of our children and set a dentry flag saying that the
+- * parent cares.  Thus when an event happens on a child it can quickly tell if
++ * parent cares.  Thus when an event happens on a child it can quickly tell
+  * if there is a need to find a parent and send the event to the parent.
+  */
+ void __fsnotify_update_child_dentry_flags(struct inode *inode)
+@@ -252,7 +250,10 @@ static int fsnotify_handle_inode_event(struct fsnotify_group *group,
+ 	if (WARN_ON_ONCE(!ops->handle_inode_event))
+ 		return 0;
+ 
+-	if ((inode_mark->mask & FS_EXCL_UNLINK) &&
++	if (WARN_ON_ONCE(!inode && !dir))
++		return 0;
++
++	if ((inode_mark->flags & FSNOTIFY_MARK_FLAG_EXCL_UNLINK) &&
+ 	    path && d_unlinked(path->dentry))
+ 		return 0;
+ 
+@@ -276,23 +277,28 @@ static int fsnotify_handle_event(struct fsnotify_group *group, __u32 mask,
+ 	    WARN_ON_ONCE(fsnotify_iter_vfsmount_mark(iter_info)))
+ 		return 0;
+ 
+-	if (parent_mark) {
+-		/*
+-		 * parent_mark indicates that the parent inode is watching
+-		 * children and interested in this event, which is an event
+-		 * possible on child. But is *this mark* watching children and
+-		 * interested in this event?
+-		 */
+-		if (parent_mark->mask & FS_EVENT_ON_CHILD) {
+-			ret = fsnotify_handle_inode_event(group, parent_mark, mask,
+-							  data, data_type, dir, name, 0);
+-			if (ret)
+-				return ret;
+-		}
+-		if (!inode_mark)
++	/*
++	 * For FS_RENAME, 'dir' is old dir and 'data' is new dentry.
++	 * The only ->handle_inode_event() backend that supports FS_RENAME is
++	 * dnotify, where it means file was renamed within same parent.
++	 */
++	if (mask & FS_RENAME) {
++		struct dentry *moved = fsnotify_data_dentry(data, data_type);
++
++		if (dir != moved->d_parent->d_inode)
+ 			return 0;
+ 	}
+ 
++	if (parent_mark) {
++		ret = fsnotify_handle_inode_event(group, parent_mark, mask,
++						  data, data_type, dir, name, 0);
++		if (ret)
++			return ret;
++	}
++
++	if (!inode_mark)
++		return 0;
++
+ 	if (mask & FS_EVENT_ON_CHILD) {
+ 		/*
+ 		 * Some events can be sent on both parent dir and child marks
+@@ -318,42 +324,36 @@ static int send_to_group(__u32 mask, const void *data, int data_type,
+ 	struct fsnotify_group *group = NULL;
+ 	__u32 test_mask = (mask & ALL_FSNOTIFY_EVENTS);
+ 	__u32 marks_mask = 0;
+-	__u32 marks_ignored_mask = 0;
++	__u32 marks_ignore_mask = 0;
++	bool is_dir = mask & FS_ISDIR;
+ 	struct fsnotify_mark *mark;
+ 	int type;
+ 
+-	if (WARN_ON(!iter_info->report_mask))
++	if (!iter_info->report_mask)
+ 		return 0;
+ 
+ 	/* clear ignored on inode modification */
+ 	if (mask & FS_MODIFY) {
+-		fsnotify_foreach_obj_type(type) {
+-			if (!fsnotify_iter_should_report_type(iter_info, type))
+-				continue;
+-			mark = iter_info->marks[type];
+-			if (mark &&
+-			    !(mark->flags & FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY))
+-				mark->ignored_mask = 0;
++		fsnotify_foreach_iter_mark_type(iter_info, mark, type) {
++			if (!(mark->flags &
++			      FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY))
++				mark->ignore_mask = 0;
+ 		}
+ 	}
+ 
+-	fsnotify_foreach_obj_type(type) {
+-		if (!fsnotify_iter_should_report_type(iter_info, type))
+-			continue;
+-		mark = iter_info->marks[type];
+-		/* does the object mark tell us to do something? */
+-		if (mark) {
+-			group = mark->group;
+-			marks_mask |= mark->mask;
+-			marks_ignored_mask |= mark->ignored_mask;
+-		}
++	/* Are any of the group marks interested in this event? */
++	fsnotify_foreach_iter_mark_type(iter_info, mark, type) {
++		group = mark->group;
++		marks_mask |= mark->mask;
++		marks_ignore_mask |=
++			fsnotify_effective_ignore_mask(mark, is_dir, type);
+ 	}
+ 
+-	pr_debug("%s: group=%p mask=%x marks_mask=%x marks_ignored_mask=%x data=%p data_type=%d dir=%p cookie=%d\n",
+-		 __func__, group, mask, marks_mask, marks_ignored_mask,
++	pr_debug("%s: group=%p mask=%x marks_mask=%x marks_ignore_mask=%x data=%p data_type=%d dir=%p cookie=%d\n",
++		 __func__, group, mask, marks_mask, marks_ignore_mask,
+ 		 data, data_type, dir, cookie);
+ 
+-	if (!(test_mask & marks_mask & ~marks_ignored_mask))
++	if (!(test_mask & marks_mask & ~marks_ignore_mask))
+ 		return 0;
+ 
+ 	if (group->ops->handle_event) {
+@@ -390,11 +390,11 @@ static struct fsnotify_mark *fsnotify_next_mark(struct fsnotify_mark *mark)
+ 
+ /*
+  * iter_info is a multi head priority queue of marks.
+- * Pick a subset of marks from queue heads, all with the
+- * same group and set the report_mask for selected subset.
+- * Returns the report_mask of the selected subset.
++ * Pick a subset of marks from queue heads, all with the same group
++ * and set the report_mask to a subset of the selected marks.
++ * Returns false if there are no more groups to iterate.
+  */
+-static unsigned int fsnotify_iter_select_report_types(
++static bool fsnotify_iter_select_report_types(
+ 		struct fsnotify_iter_info *iter_info)
+ {
+ 	struct fsnotify_group *max_prio_group = NULL;
+@@ -402,7 +402,7 @@ static unsigned int fsnotify_iter_select_report_types(
+ 	int type;
+ 
+ 	/* Choose max prio group among groups of all queue heads */
+-	fsnotify_foreach_obj_type(type) {
++	fsnotify_foreach_iter_type(type) {
+ 		mark = iter_info->marks[type];
+ 		if (mark &&
+ 		    fsnotify_compare_groups(max_prio_group, mark->group) > 0)
+@@ -410,30 +410,49 @@ static unsigned int fsnotify_iter_select_report_types(
+ 	}
+ 
+ 	if (!max_prio_group)
+-		return 0;
++		return false;
+ 
+ 	/* Set the report mask for marks from same group as max prio group */
++	iter_info->current_group = max_prio_group;
+ 	iter_info->report_mask = 0;
+-	fsnotify_foreach_obj_type(type) {
++	fsnotify_foreach_iter_type(type) {
+ 		mark = iter_info->marks[type];
+-		if (mark &&
+-		    fsnotify_compare_groups(max_prio_group, mark->group) == 0)
++		if (mark && mark->group == iter_info->current_group) {
++			/*
++			 * FSNOTIFY_ITER_TYPE_PARENT indicates that this inode
++			 * is watching children and interested in this event,
++			 * which is an event possible on child.
++			 * But is *this mark* watching children?
++			 */
++			if (type == FSNOTIFY_ITER_TYPE_PARENT &&
++			    !(mark->mask & FS_EVENT_ON_CHILD) &&
++			    !(fsnotify_ignore_mask(mark) & FS_EVENT_ON_CHILD))
++				continue;
++
+ 			fsnotify_iter_set_report_type(iter_info, type);
++		}
+ 	}
+ 
+-	return iter_info->report_mask;
++	return true;
+ }
+ 
+ /*
+- * Pop from iter_info multi head queue, the marks that were iterated in the
++ * Pop from iter_info multi head queue, the marks that belong to the group of
+  * current iteration step.
+  */
+ static void fsnotify_iter_next(struct fsnotify_iter_info *iter_info)
+ {
++	struct fsnotify_mark *mark;
+ 	int type;
+ 
+-	fsnotify_foreach_obj_type(type) {
+-		if (fsnotify_iter_should_report_type(iter_info, type))
++	/*
++	 * We cannot use fsnotify_foreach_iter_mark_type() here because we
++	 * may need to advance a mark of type X that belongs to current_group
++	 * but was not selected for reporting.
++	 */
++	fsnotify_foreach_iter_type(type) {
++		mark = iter_info->marks[type];
++		if (mark && mark->group == iter_info->current_group)
+ 			iter_info->marks[type] =
+ 				fsnotify_next_mark(iter_info->marks[type]);
+ 	}
+@@ -455,18 +474,20 @@ static void fsnotify_iter_next(struct fsnotify_iter_info *iter_info)
+  *		@file_name is relative to
+  * @file_name:	optional file name associated with event
+  * @inode:	optional inode associated with event -
+- *		either @dir or @inode must be non-NULL.
+- *		if both are non-NULL event may be reported to both.
++ *		If @dir and @inode are both non-NULL, event may be
++ *		reported to both.
+  * @cookie:	inotify rename cookie
+  */
+ int fsnotify(__u32 mask, const void *data, int data_type, struct inode *dir,
+ 	     const struct qstr *file_name, struct inode *inode, u32 cookie)
+ {
+ 	const struct path *path = fsnotify_data_path(data, data_type);
++	struct super_block *sb = fsnotify_data_sb(data, data_type);
+ 	struct fsnotify_iter_info iter_info = {};
+-	struct super_block *sb;
+ 	struct mount *mnt = NULL;
+-	struct inode *parent = NULL;
++	struct inode *inode2 = NULL;
++	struct dentry *moved;
++	int inode2_type;
+ 	int ret = 0;
+ 	__u32 test_mask, marks_mask;
+ 
+@@ -476,14 +497,20 @@ int fsnotify(__u32 mask, const void *data, int data_type, struct inode *dir,
+ 	if (!inode) {
+ 		/* Dirent event - report on TYPE_INODE to dir */
+ 		inode = dir;
++		/* For FS_RENAME, inode is old_dir and inode2 is new_dir */
++		if (mask & FS_RENAME) {
++			moved = fsnotify_data_dentry(data, data_type);
++			inode2 = moved->d_parent->d_inode;
++			inode2_type = FSNOTIFY_ITER_TYPE_INODE2;
++		}
+ 	} else if (mask & FS_EVENT_ON_CHILD) {
+ 		/*
+ 		 * Event on child - report on TYPE_PARENT to dir if it is
+ 		 * watching children and on TYPE_INODE to child.
+ 		 */
+-		parent = dir;
++		inode2 = dir;
++		inode2_type = FSNOTIFY_ITER_TYPE_PARENT;
+ 	}
+-	sb = inode->i_sb;
+ 
+ 	/*
+ 	 * Optimization: srcu_read_lock() has a memory barrier which can
+@@ -495,7 +522,7 @@ int fsnotify(__u32 mask, const void *data, int data_type, struct inode *dir,
+ 	if (!sb->s_fsnotify_marks &&
+ 	    (!mnt || !mnt->mnt_fsnotify_marks) &&
+ 	    (!inode || !inode->i_fsnotify_marks) &&
+-	    (!parent || !parent->i_fsnotify_marks))
++	    (!inode2 || !inode2->i_fsnotify_marks))
+ 		return 0;
+ 
+ 	marks_mask = sb->s_fsnotify_mask;
+@@ -503,33 +530,35 @@ int fsnotify(__u32 mask, const void *data, int data_type, struct inode *dir,
+ 		marks_mask |= mnt->mnt_fsnotify_mask;
+ 	if (inode)
+ 		marks_mask |= inode->i_fsnotify_mask;
+-	if (parent)
+-		marks_mask |= parent->i_fsnotify_mask;
++	if (inode2)
++		marks_mask |= inode2->i_fsnotify_mask;
+ 
+ 
+ 	/*
+-	 * if this is a modify event we may need to clear the ignored masks
+-	 * otherwise return if none of the marks care about this type of event.
++	 * If this is a modify event we may need to clear some ignore masks.
++	 * In that case, the object with ignore masks will have the FS_MODIFY
++	 * event in its mask.
++	 * Otherwise, return if none of the marks care about this type of event.
+ 	 */
+ 	test_mask = (mask & ALL_FSNOTIFY_EVENTS);
+-	if (!(mask & FS_MODIFY) && !(test_mask & marks_mask))
++	if (!(test_mask & marks_mask))
+ 		return 0;
+ 
+ 	iter_info.srcu_idx = srcu_read_lock(&fsnotify_mark_srcu);
+ 
+-	iter_info.marks[FSNOTIFY_OBJ_TYPE_SB] =
++	iter_info.marks[FSNOTIFY_ITER_TYPE_SB] =
+ 		fsnotify_first_mark(&sb->s_fsnotify_marks);
+ 	if (mnt) {
+-		iter_info.marks[FSNOTIFY_OBJ_TYPE_VFSMOUNT] =
++		iter_info.marks[FSNOTIFY_ITER_TYPE_VFSMOUNT] =
+ 			fsnotify_first_mark(&mnt->mnt_fsnotify_marks);
+ 	}
+ 	if (inode) {
+-		iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
++		iter_info.marks[FSNOTIFY_ITER_TYPE_INODE] =
+ 			fsnotify_first_mark(&inode->i_fsnotify_marks);
+ 	}
+-	if (parent) {
+-		iter_info.marks[FSNOTIFY_OBJ_TYPE_PARENT] =
+-			fsnotify_first_mark(&parent->i_fsnotify_marks);
++	if (inode2) {
++		iter_info.marks[inode2_type] =
++			fsnotify_first_mark(&inode2->i_fsnotify_marks);
+ 	}
+ 
+ 	/*
+@@ -558,7 +587,7 @@ static __init int fsnotify_init(void)
+ {
+ 	int ret;
+ 
+-	BUILD_BUG_ON(HWEIGHT32(ALL_FSNOTIFY_BITS) != 25);
++	BUILD_BUG_ON(HWEIGHT32(ALL_FSNOTIFY_BITS) != 23);
+ 
+ 	ret = init_srcu_struct(&fsnotify_mark_srcu);
+ 	if (ret)
+diff --git a/fs/notify/fsnotify.h b/fs/notify/fsnotify.h
+index ff2063ec6b0f3..fde74eb333cc9 100644
+--- a/fs/notify/fsnotify.h
++++ b/fs/notify/fsnotify.h
+@@ -27,6 +27,21 @@ static inline struct super_block *fsnotify_conn_sb(
+ 	return container_of(conn->obj, struct super_block, s_fsnotify_marks);
+ }
+ 
++static inline struct super_block *fsnotify_connector_sb(
++				struct fsnotify_mark_connector *conn)
++{
++	switch (conn->type) {
++	case FSNOTIFY_OBJ_TYPE_INODE:
++		return fsnotify_conn_inode(conn)->i_sb;
++	case FSNOTIFY_OBJ_TYPE_VFSMOUNT:
++		return fsnotify_conn_mount(conn)->mnt.mnt_sb;
++	case FSNOTIFY_OBJ_TYPE_SB:
++		return fsnotify_conn_sb(conn);
++	default:
++		return NULL;
++	}
++}
++
+ /* destroy all events sitting in this groups notification queue */
+ extern void fsnotify_flush_notify(struct fsnotify_group *group);
+ 
+@@ -61,10 +76,6 @@ static inline void fsnotify_clear_marks_by_sb(struct super_block *sb)
+  */
+ extern void __fsnotify_update_child_dentry_flags(struct inode *inode);
+ 
+-/* allocate and destroy and event holder to attach events to notification/access queues */
+-extern struct fsnotify_event_holder *fsnotify_alloc_event_holder(void);
+-extern void fsnotify_destroy_event_holder(struct fsnotify_event_holder *holder);
+-
+ extern struct kmem_cache *fsnotify_mark_connector_cachep;
+ 
+ #endif	/* __FS_NOTIFY_FSNOTIFY_H_ */
+diff --git a/fs/notify/group.c b/fs/notify/group.c
+index a4a4b1c64d32a..1de6631a3925e 100644
+--- a/fs/notify/group.c
++++ b/fs/notify/group.c
+@@ -58,7 +58,7 @@ void fsnotify_destroy_group(struct fsnotify_group *group)
+ 	fsnotify_group_stop_queueing(group);
+ 
+ 	/* Clear all marks for this group and queue them for destruction */
+-	fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_ALL_TYPES_MASK);
++	fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_ANY);
+ 
+ 	/*
+ 	 * Some marks can still be pinned when waiting for response from
+@@ -88,7 +88,7 @@ void fsnotify_destroy_group(struct fsnotify_group *group)
+ 	 * that deliberately ignores overflow events.
+ 	 */
+ 	if (group->overflow_event)
+-		group->ops->free_event(group->overflow_event);
++		group->ops->free_event(group, group->overflow_event);
+ 
+ 	fsnotify_put_group(group);
+ }
+@@ -111,20 +111,19 @@ void fsnotify_put_group(struct fsnotify_group *group)
+ }
+ EXPORT_SYMBOL_GPL(fsnotify_put_group);
+ 
+-/*
+- * Create a new fsnotify_group and hold a reference for the group returned.
+- */
+-struct fsnotify_group *fsnotify_alloc_group(const struct fsnotify_ops *ops)
++static struct fsnotify_group *__fsnotify_alloc_group(
++				const struct fsnotify_ops *ops,
++				int flags, gfp_t gfp)
+ {
++	static struct lock_class_key nofs_marks_lock;
+ 	struct fsnotify_group *group;
+ 
+-	group = kzalloc(sizeof(struct fsnotify_group), GFP_KERNEL);
++	group = kzalloc(sizeof(struct fsnotify_group), gfp);
+ 	if (!group)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	/* set to 0 when there a no external references to this group */
+ 	refcount_set(&group->refcnt, 1);
+-	atomic_set(&group->num_marks, 0);
+ 	atomic_set(&group->user_waits, 0);
+ 
+ 	spin_lock_init(&group->notification_lock);
+@@ -136,9 +135,32 @@ struct fsnotify_group *fsnotify_alloc_group(const struct fsnotify_ops *ops)
+ 	INIT_LIST_HEAD(&group->marks_list);
+ 
+ 	group->ops = ops;
++	group->flags = flags;
++	/*
++	 * For most backends, eviction of inode with a mark is not expected,
++	 * because marks hold a refcount on the inode against eviction.
++	 *
++	 * Use a different lockdep class for groups that support evictable
++	 * inode marks, because with evictable marks, mark_mutex is NOT
++	 * fs-reclaim safe - the mutex is taken when evicting inodes.
++	 */
++	if (flags & FSNOTIFY_GROUP_NOFS)
++		lockdep_set_class(&group->mark_mutex, &nofs_marks_lock);
+ 
+ 	return group;
+ }
++
++/*
++ * Create a new fsnotify_group and hold a reference for the group returned.
++ */
++struct fsnotify_group *fsnotify_alloc_group(const struct fsnotify_ops *ops,
++					    int flags)
++{
++	gfp_t gfp = (flags & FSNOTIFY_GROUP_USER) ? GFP_KERNEL_ACCOUNT :
++						    GFP_KERNEL;
++
++	return __fsnotify_alloc_group(ops, flags, gfp);
++}
+ EXPORT_SYMBOL_GPL(fsnotify_alloc_group);
+ 
+ int fsnotify_fasync(int fd, struct file *file, int on)
+diff --git a/fs/notify/inotify/inotify.h b/fs/notify/inotify/inotify.h
+index 8f00151eb731f..7d5df7a215397 100644
+--- a/fs/notify/inotify/inotify.h
++++ b/fs/notify/inotify/inotify.h
+@@ -27,11 +27,18 @@ static inline struct inotify_event_info *INOTIFY_E(struct fsnotify_event *fse)
+  * userspace.  There is at least one bit (FS_EVENT_ON_CHILD) which is
+  * used only internally to the kernel.
+  */
+-#define INOTIFY_USER_MASK (IN_ALL_EVENTS | IN_ONESHOT | IN_EXCL_UNLINK)
++#define INOTIFY_USER_MASK (IN_ALL_EVENTS)
+ 
+ static inline __u32 inotify_mark_user_mask(struct fsnotify_mark *fsn_mark)
+ {
+-	return fsn_mark->mask & INOTIFY_USER_MASK;
++	__u32 mask = fsn_mark->mask & INOTIFY_USER_MASK;
++
++	if (fsn_mark->flags & FSNOTIFY_MARK_FLAG_EXCL_UNLINK)
++		mask |= IN_EXCL_UNLINK;
++	if (fsn_mark->flags & FSNOTIFY_MARK_FLAG_IN_ONESHOT)
++		mask |= IN_ONESHOT;
++
++	return mask;
+ }
+ 
+ extern void inotify_ignored_and_remove_idr(struct fsnotify_mark *fsn_mark,
+diff --git a/fs/notify/inotify/inotify_fsnotify.c b/fs/notify/inotify/inotify_fsnotify.c
+index 66991c7fef9e2..993375f0db673 100644
+--- a/fs/notify/inotify/inotify_fsnotify.c
++++ b/fs/notify/inotify/inotify_fsnotify.c
+@@ -46,9 +46,10 @@ static bool event_compare(struct fsnotify_event *old_fsn,
+ 	return false;
+ }
+ 
+-static int inotify_merge(struct list_head *list,
+-			  struct fsnotify_event *event)
++static int inotify_merge(struct fsnotify_group *group,
++			 struct fsnotify_event *event)
+ {
++	struct list_head *list = &group->notification_list;
+ 	struct fsnotify_event *last_event;
+ 
+ 	last_event = list_entry(list->prev, struct fsnotify_event, list);
+@@ -114,7 +115,7 @@ int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 		mask &= ~IN_ISDIR;
+ 
+ 	fsn_event = &event->fse;
+-	fsnotify_init_event(fsn_event, 0);
++	fsnotify_init_event(fsn_event);
+ 	event->mask = mask;
+ 	event->wd = wd;
+ 	event->sync_cookie = cookie;
+@@ -128,7 +129,7 @@ int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 		fsnotify_destroy_event(group, fsn_event);
+ 	}
+ 
+-	if (inode_mark->mask & IN_ONESHOT)
++	if (inode_mark->flags & FSNOTIFY_MARK_FLAG_IN_ONESHOT)
+ 		fsnotify_destroy_mark(inode_mark, group);
+ 
+ 	return 0;
+@@ -183,7 +184,8 @@ static void inotify_free_group_priv(struct fsnotify_group *group)
+ 		dec_inotify_instances(group->inotify_data.ucounts);
+ }
+ 
+-static void inotify_free_event(struct fsnotify_event *fsn_event)
++static void inotify_free_event(struct fsnotify_group *group,
++			       struct fsnotify_event *fsn_event)
+ {
+ 	kfree(INOTIFY_E(fsn_event));
+ }
+diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
+index 32b6b97021bef..7360d16ce46d7 100644
+--- a/fs/notify/inotify/inotify_user.c
++++ b/fs/notify/inotify/inotify_user.c
+@@ -37,6 +37,15 @@
+ 
+ #include <asm/ioctls.h>
+ 
++/*
++ * An inotify watch requires allocating an inotify_inode_mark structure as
++ * well as pinning the watched inode. Doubling the size of a VFS inode
++ * should be more than enough to cover the additional filesystem inode
++ * size increase.
++ */
++#define INOTIFY_WATCH_COST	(sizeof(struct inotify_inode_mark) + \
++				 2 * sizeof(struct inode))
++
+ /* configurable via /proc/sys/fs/inotify/ */
+ static int inotify_max_queued_events __read_mostly;
+ 
+@@ -80,10 +89,10 @@ static inline __u32 inotify_arg_to_mask(struct inode *inode, u32 arg)
+ 	__u32 mask;
+ 
+ 	/*
+-	 * Everything should accept their own ignored and should receive events
+-	 * when the inode is unmounted.  All directories care about children.
++	 * Everything should receive events when the inode is unmounted.
++	 * All directories care about children.
+ 	 */
+-	mask = (FS_IN_IGNORED | FS_UNMOUNT);
++	mask = (FS_UNMOUNT);
+ 	if (S_ISDIR(inode->i_mode))
+ 		mask |= FS_EVENT_ON_CHILD;
+ 
+@@ -93,13 +102,28 @@ static inline __u32 inotify_arg_to_mask(struct inode *inode, u32 arg)
+ 	return mask;
+ }
+ 
++#define INOTIFY_MARK_FLAGS \
++	(FSNOTIFY_MARK_FLAG_EXCL_UNLINK | FSNOTIFY_MARK_FLAG_IN_ONESHOT)
++
++static inline unsigned int inotify_arg_to_flags(u32 arg)
++{
++	unsigned int flags = 0;
++
++	if (arg & IN_EXCL_UNLINK)
++		flags |= FSNOTIFY_MARK_FLAG_EXCL_UNLINK;
++	if (arg & IN_ONESHOT)
++		flags |= FSNOTIFY_MARK_FLAG_IN_ONESHOT;
++
++	return flags;
++}
++
+ static inline u32 inotify_mask_to_arg(__u32 mask)
+ {
+ 	return mask & (IN_ALL_EVENTS | IN_ISDIR | IN_UNMOUNT | IN_IGNORED |
+ 		       IN_Q_OVERFLOW);
+ }
+ 
+-/* intofiy userspace file descriptor functions */
++/* inotify userspace file descriptor functions */
+ static __poll_t inotify_poll(struct file *file, poll_table *wait)
+ {
+ 	struct fsnotify_group *group = file->private_data;
+@@ -137,10 +161,9 @@ static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
+ 	size_t event_size = sizeof(struct inotify_event);
+ 	struct fsnotify_event *event;
+ 
+-	if (fsnotify_notify_queue_is_empty(group))
+-		return NULL;
+-
+ 	event = fsnotify_peek_first_event(group);
++	if (!event)
++		return NULL;
+ 
+ 	pr_debug("%s: group=%p event=%p\n", __func__, group, event);
+ 
+@@ -343,7 +366,7 @@ static int inotify_find_inode(const char __user *dirname, struct path *path,
+ 	if (error)
+ 		return error;
+ 	/* you can only watch an inode if you have read permissions on it */
+-	error = inode_permission(path->dentry->d_inode, MAY_READ);
++	error = path_permission(path, MAY_READ);
+ 	if (error) {
+ 		path_put(path);
+ 		return error;
+@@ -505,13 +528,10 @@ static int inotify_update_existing_watch(struct fsnotify_group *group,
+ 	struct fsnotify_mark *fsn_mark;
+ 	struct inotify_inode_mark *i_mark;
+ 	__u32 old_mask, new_mask;
+-	__u32 mask;
+-	int add = (arg & IN_MASK_ADD);
++	int replace = !(arg & IN_MASK_ADD);
+ 	int create = (arg & IN_MASK_CREATE);
+ 	int ret;
+ 
+-	mask = inotify_arg_to_mask(inode, arg);
+-
+ 	fsn_mark = fsnotify_find_mark(&inode->i_fsnotify_marks, group);
+ 	if (!fsn_mark)
+ 		return -ENOENT;
+@@ -524,10 +544,12 @@ static int inotify_update_existing_watch(struct fsnotify_group *group,
+ 
+ 	spin_lock(&fsn_mark->lock);
+ 	old_mask = fsn_mark->mask;
+-	if (add)
+-		fsn_mark->mask |= mask;
+-	else
+-		fsn_mark->mask = mask;
++	if (replace) {
++		fsn_mark->mask = 0;
++		fsn_mark->flags &= ~INOTIFY_MARK_FLAGS;
++	}
++	fsn_mark->mask |= inotify_arg_to_mask(inode, arg);
++	fsn_mark->flags |= inotify_arg_to_flags(arg);
+ 	new_mask = fsn_mark->mask;
+ 	spin_unlock(&fsn_mark->lock);
+ 
+@@ -558,19 +580,17 @@ static int inotify_new_watch(struct fsnotify_group *group,
+ 			     u32 arg)
+ {
+ 	struct inotify_inode_mark *tmp_i_mark;
+-	__u32 mask;
+ 	int ret;
+ 	struct idr *idr = &group->inotify_data.idr;
+ 	spinlock_t *idr_lock = &group->inotify_data.idr_lock;
+ 
+-	mask = inotify_arg_to_mask(inode, arg);
+-
+ 	tmp_i_mark = kmem_cache_alloc(inotify_inode_mark_cachep, GFP_KERNEL);
+ 	if (unlikely(!tmp_i_mark))
+ 		return -ENOMEM;
+ 
+ 	fsnotify_init_mark(&tmp_i_mark->fsn_mark, group);
+-	tmp_i_mark->fsn_mark.mask = mask;
++	tmp_i_mark->fsn_mark.mask = inotify_arg_to_mask(inode, arg);
++	tmp_i_mark->fsn_mark.flags = inotify_arg_to_flags(arg);
+ 	tmp_i_mark->wd = -1;
+ 
+ 	ret = inotify_add_to_idr(idr, idr_lock, tmp_i_mark);
+@@ -607,13 +627,13 @@ static int inotify_update_watch(struct fsnotify_group *group, struct inode *inod
+ {
+ 	int ret = 0;
+ 
+-	mutex_lock(&group->mark_mutex);
++	fsnotify_group_lock(group);
+ 	/* try to update and existing watch with the new arg */
+ 	ret = inotify_update_existing_watch(group, inode, arg);
+ 	/* no mark present, try to add a new one */
+ 	if (ret == -ENOENT)
+ 		ret = inotify_new_watch(group, inode, arg);
+-	mutex_unlock(&group->mark_mutex);
++	fsnotify_group_unlock(group);
+ 
+ 	return ret;
+ }
+@@ -623,17 +643,18 @@ static struct fsnotify_group *inotify_new_group(unsigned int max_events)
+ 	struct fsnotify_group *group;
+ 	struct inotify_event_info *oevent;
+ 
+-	group = fsnotify_alloc_group(&inotify_fsnotify_ops);
++	group = fsnotify_alloc_group(&inotify_fsnotify_ops,
++				     FSNOTIFY_GROUP_USER);
+ 	if (IS_ERR(group))
+ 		return group;
+ 
+-	oevent = kmalloc(sizeof(struct inotify_event_info), GFP_KERNEL);
++	oevent = kmalloc(sizeof(struct inotify_event_info), GFP_KERNEL_ACCOUNT);
+ 	if (unlikely(!oevent)) {
+ 		fsnotify_destroy_group(group);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 	group->overflow_event = &oevent->fse;
+-	fsnotify_init_event(group->overflow_event, 0);
++	fsnotify_init_event(group->overflow_event);
+ 	oevent->mask = FS_Q_OVERFLOW;
+ 	oevent->wd = -1;
+ 	oevent->sync_cookie = 0;
+@@ -797,6 +818,18 @@ SYSCALL_DEFINE2(inotify_rm_watch, int, fd, __s32, wd)
+  */
+ static int __init inotify_user_setup(void)
+ {
++	unsigned long watches_max;
++	struct sysinfo si;
++
++	si_meminfo(&si);
++	/*
++	 * Allow up to 1% of addressable memory to be allocated for inotify
++	 * watches (per user) limited to the range [8192, 1048576].
++	 */
++	watches_max = (((si.totalram - si.totalhigh) / 100) << PAGE_SHIFT) /
++			INOTIFY_WATCH_COST;
++	watches_max = clamp(watches_max, 8192UL, 1048576UL);
++
+ 	BUILD_BUG_ON(IN_ACCESS != FS_ACCESS);
+ 	BUILD_BUG_ON(IN_MODIFY != FS_MODIFY);
+ 	BUILD_BUG_ON(IN_ATTRIB != FS_ATTRIB);
+@@ -812,9 +845,7 @@ static int __init inotify_user_setup(void)
+ 	BUILD_BUG_ON(IN_UNMOUNT != FS_UNMOUNT);
+ 	BUILD_BUG_ON(IN_Q_OVERFLOW != FS_Q_OVERFLOW);
+ 	BUILD_BUG_ON(IN_IGNORED != FS_IN_IGNORED);
+-	BUILD_BUG_ON(IN_EXCL_UNLINK != FS_EXCL_UNLINK);
+ 	BUILD_BUG_ON(IN_ISDIR != FS_ISDIR);
+-	BUILD_BUG_ON(IN_ONESHOT != FS_IN_ONESHOT);
+ 
+ 	BUILD_BUG_ON(HWEIGHT32(ALL_INOTIFY_BITS) != 22);
+ 
+@@ -823,7 +854,7 @@ static int __init inotify_user_setup(void)
+ 
+ 	inotify_max_queued_events = 16384;
+ 	init_user_ns.ucount_max[UCOUNT_INOTIFY_INSTANCES] = 128;
+-	init_user_ns.ucount_max[UCOUNT_INOTIFY_WATCHES] = 8192;
++	init_user_ns.ucount_max[UCOUNT_INOTIFY_WATCHES] = watches_max;
+ 
+ 	return 0;
+ }
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index 5b44be5f93dd8..c74ef947447d6 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -116,20 +116,64 @@ __u32 fsnotify_conn_mask(struct fsnotify_mark_connector *conn)
+ 	return *fsnotify_conn_mask_p(conn);
+ }
+ 
+-static void __fsnotify_recalc_mask(struct fsnotify_mark_connector *conn)
++static void fsnotify_get_inode_ref(struct inode *inode)
++{
++	ihold(inode);
++	atomic_long_inc(&inode->i_sb->s_fsnotify_connectors);
++}
++
++/*
++ * Grab or drop inode reference for the connector if needed.
++ *
++ * When it's time to drop the reference, we only clear the HAS_IREF flag and
++ * return the inode object. fsnotify_drop_object() will be resonsible for doing
++ * iput() outside of spinlocks. This happens when last mark that wanted iref is
++ * detached.
++ */
++static struct inode *fsnotify_update_iref(struct fsnotify_mark_connector *conn,
++					  bool want_iref)
++{
++	bool has_iref = conn->flags & FSNOTIFY_CONN_FLAG_HAS_IREF;
++	struct inode *inode = NULL;
++
++	if (conn->type != FSNOTIFY_OBJ_TYPE_INODE ||
++	    want_iref == has_iref)
++		return NULL;
++
++	if (want_iref) {
++		/* Pin inode if any mark wants inode refcount held */
++		fsnotify_get_inode_ref(fsnotify_conn_inode(conn));
++		conn->flags |= FSNOTIFY_CONN_FLAG_HAS_IREF;
++	} else {
++		/* Unpin inode after detach of last mark that wanted iref */
++		inode = fsnotify_conn_inode(conn);
++		conn->flags &= ~FSNOTIFY_CONN_FLAG_HAS_IREF;
++	}
++
++	return inode;
++}
++
++static void *__fsnotify_recalc_mask(struct fsnotify_mark_connector *conn)
+ {
+ 	u32 new_mask = 0;
++	bool want_iref = false;
+ 	struct fsnotify_mark *mark;
+ 
+ 	assert_spin_locked(&conn->lock);
+ 	/* We can get detached connector here when inode is getting unlinked. */
+ 	if (!fsnotify_valid_obj_type(conn->type))
+-		return;
++		return NULL;
+ 	hlist_for_each_entry(mark, &conn->list, obj_list) {
+-		if (mark->flags & FSNOTIFY_MARK_FLAG_ATTACHED)
+-			new_mask |= mark->mask;
++		if (!(mark->flags & FSNOTIFY_MARK_FLAG_ATTACHED))
++			continue;
++		new_mask |= fsnotify_calc_mask(mark);
++		if (conn->type == FSNOTIFY_OBJ_TYPE_INODE &&
++		    !(mark->flags & FSNOTIFY_MARK_FLAG_NO_IREF))
++			want_iref = true;
+ 	}
+ 	*fsnotify_conn_mask_p(conn) = new_mask;
++
++	return fsnotify_update_iref(conn, want_iref);
+ }
+ 
+ /*
+@@ -169,6 +213,31 @@ static void fsnotify_connector_destroy_workfn(struct work_struct *work)
+ 	}
+ }
+ 
++static void fsnotify_put_inode_ref(struct inode *inode)
++{
++	struct super_block *sb = inode->i_sb;
++
++	iput(inode);
++	if (atomic_long_dec_and_test(&sb->s_fsnotify_connectors))
++		wake_up_var(&sb->s_fsnotify_connectors);
++}
++
++static void fsnotify_get_sb_connectors(struct fsnotify_mark_connector *conn)
++{
++	struct super_block *sb = fsnotify_connector_sb(conn);
++
++	if (sb)
++		atomic_long_inc(&sb->s_fsnotify_connectors);
++}
++
++static void fsnotify_put_sb_connectors(struct fsnotify_mark_connector *conn)
++{
++	struct super_block *sb = fsnotify_connector_sb(conn);
++
++	if (sb && atomic_long_dec_and_test(&sb->s_fsnotify_connectors))
++		wake_up_var(&sb->s_fsnotify_connectors);
++}
++
+ static void *fsnotify_detach_connector_from_object(
+ 					struct fsnotify_mark_connector *conn,
+ 					unsigned int *type)
+@@ -182,13 +251,17 @@ static void *fsnotify_detach_connector_from_object(
+ 	if (conn->type == FSNOTIFY_OBJ_TYPE_INODE) {
+ 		inode = fsnotify_conn_inode(conn);
+ 		inode->i_fsnotify_mask = 0;
+-		atomic_long_inc(&inode->i_sb->s_fsnotify_inode_refs);
++
++		/* Unpin inode when detaching from connector */
++		if (!(conn->flags & FSNOTIFY_CONN_FLAG_HAS_IREF))
++			inode = NULL;
+ 	} else if (conn->type == FSNOTIFY_OBJ_TYPE_VFSMOUNT) {
+ 		fsnotify_conn_mount(conn)->mnt_fsnotify_mask = 0;
+ 	} else if (conn->type == FSNOTIFY_OBJ_TYPE_SB) {
+ 		fsnotify_conn_sb(conn)->s_fsnotify_mask = 0;
+ 	}
+ 
++	fsnotify_put_sb_connectors(conn);
+ 	rcu_assign_pointer(*(conn->obj), NULL);
+ 	conn->obj = NULL;
+ 	conn->type = FSNOTIFY_OBJ_TYPE_DETACHED;
+@@ -209,19 +282,12 @@ static void fsnotify_final_mark_destroy(struct fsnotify_mark *mark)
+ /* Drop object reference originally held by a connector */
+ static void fsnotify_drop_object(unsigned int type, void *objp)
+ {
+-	struct inode *inode;
+-	struct super_block *sb;
+-
+ 	if (!objp)
+ 		return;
+ 	/* Currently only inode references are passed to be dropped */
+ 	if (WARN_ON_ONCE(type != FSNOTIFY_OBJ_TYPE_INODE))
+ 		return;
+-	inode = objp;
+-	sb = inode->i_sb;
+-	iput(inode);
+-	if (atomic_long_dec_and_test(&sb->s_fsnotify_inode_refs))
+-		wake_up_var(&sb->s_fsnotify_inode_refs);
++	fsnotify_put_inode_ref(objp);
+ }
+ 
+ void fsnotify_put_mark(struct fsnotify_mark *mark)
+@@ -250,7 +316,8 @@ void fsnotify_put_mark(struct fsnotify_mark *mark)
+ 		objp = fsnotify_detach_connector_from_object(conn, &type);
+ 		free_conn = true;
+ 	} else {
+-		__fsnotify_recalc_mask(conn);
++		objp = __fsnotify_recalc_mask(conn);
++		type = conn->type;
+ 	}
+ 	WRITE_ONCE(mark->connector, NULL);
+ 	spin_unlock(&conn->lock);
+@@ -329,7 +396,7 @@ bool fsnotify_prepare_user_wait(struct fsnotify_iter_info *iter_info)
+ {
+ 	int type;
+ 
+-	fsnotify_foreach_obj_type(type) {
++	fsnotify_foreach_iter_type(type) {
+ 		/* This can fail if mark is being removed */
+ 		if (!fsnotify_get_mark_safe(iter_info->marks[type])) {
+ 			__release(&fsnotify_mark_srcu);
+@@ -358,7 +425,7 @@ void fsnotify_finish_user_wait(struct fsnotify_iter_info *iter_info)
+ 	int type;
+ 
+ 	iter_info->srcu_idx = srcu_read_lock(&fsnotify_mark_srcu);
+-	fsnotify_foreach_obj_type(type)
++	fsnotify_foreach_iter_type(type)
+ 		fsnotify_put_mark_wake(iter_info->marks[type]);
+ }
+ 
+@@ -374,9 +441,7 @@ void fsnotify_finish_user_wait(struct fsnotify_iter_info *iter_info)
+  */
+ void fsnotify_detach_mark(struct fsnotify_mark *mark)
+ {
+-	struct fsnotify_group *group = mark->group;
+-
+-	WARN_ON_ONCE(!mutex_is_locked(&group->mark_mutex));
++	fsnotify_group_assert_locked(mark->group);
+ 	WARN_ON_ONCE(!srcu_read_lock_held(&fsnotify_mark_srcu) &&
+ 		     refcount_read(&mark->refcnt) < 1 +
+ 			!!(mark->flags & FSNOTIFY_MARK_FLAG_ATTACHED));
+@@ -391,8 +456,6 @@ void fsnotify_detach_mark(struct fsnotify_mark *mark)
+ 	list_del_init(&mark->g_list);
+ 	spin_unlock(&mark->lock);
+ 
+-	atomic_dec(&group->num_marks);
+-
+ 	/* Drop mark reference acquired in fsnotify_add_mark_locked() */
+ 	fsnotify_put_mark(mark);
+ }
+@@ -430,9 +493,9 @@ void fsnotify_free_mark(struct fsnotify_mark *mark)
+ void fsnotify_destroy_mark(struct fsnotify_mark *mark,
+ 			   struct fsnotify_group *group)
+ {
+-	mutex_lock(&group->mark_mutex);
++	fsnotify_group_lock(group);
+ 	fsnotify_detach_mark(mark);
+-	mutex_unlock(&group->mark_mutex);
++	fsnotify_group_unlock(group);
+ 	fsnotify_free_mark(mark);
+ }
+ EXPORT_SYMBOL_GPL(fsnotify_destroy_mark);
+@@ -474,10 +537,9 @@ int fsnotify_compare_groups(struct fsnotify_group *a, struct fsnotify_group *b)
+ }
+ 
+ static int fsnotify_attach_connector_to_object(fsnotify_connp_t *connp,
+-					       unsigned int type,
++					       unsigned int obj_type,
+ 					       __kernel_fsid_t *fsid)
+ {
+-	struct inode *inode = NULL;
+ 	struct fsnotify_mark_connector *conn;
+ 
+ 	conn = kmem_cache_alloc(fsnotify_mark_connector_cachep, GFP_KERNEL);
+@@ -485,7 +547,8 @@ static int fsnotify_attach_connector_to_object(fsnotify_connp_t *connp,
+ 		return -ENOMEM;
+ 	spin_lock_init(&conn->lock);
+ 	INIT_HLIST_HEAD(&conn->list);
+-	conn->type = type;
++	conn->flags = 0;
++	conn->type = obj_type;
+ 	conn->obj = connp;
+ 	/* Cache fsid of filesystem containing the object */
+ 	if (fsid) {
+@@ -495,16 +558,15 @@ static int fsnotify_attach_connector_to_object(fsnotify_connp_t *connp,
+ 		conn->fsid.val[0] = conn->fsid.val[1] = 0;
+ 		conn->flags = 0;
+ 	}
+-	if (conn->type == FSNOTIFY_OBJ_TYPE_INODE)
+-		inode = igrab(fsnotify_conn_inode(conn));
++	fsnotify_get_sb_connectors(conn);
++
+ 	/*
+ 	 * cmpxchg() provides the barrier so that readers of *connp can see
+ 	 * only initialized structure
+ 	 */
+ 	if (cmpxchg(connp, NULL, conn)) {
+ 		/* Someone else created list structure for us */
+-		if (inode)
+-			iput(inode);
++		fsnotify_put_sb_connectors(conn);
+ 		kmem_cache_free(fsnotify_mark_connector_cachep, conn);
+ 	}
+ 
+@@ -545,15 +607,16 @@ static struct fsnotify_mark_connector *fsnotify_grab_connector(
+  * priority, highest number first, and then by the group's location in memory.
+  */
+ static int fsnotify_add_mark_list(struct fsnotify_mark *mark,
+-				  fsnotify_connp_t *connp, unsigned int type,
+-				  int allow_dups, __kernel_fsid_t *fsid)
++				  fsnotify_connp_t *connp,
++				  unsigned int obj_type,
++				  int add_flags, __kernel_fsid_t *fsid)
+ {
+ 	struct fsnotify_mark *lmark, *last = NULL;
+ 	struct fsnotify_mark_connector *conn;
+ 	int cmp;
+ 	int err = 0;
+ 
+-	if (WARN_ON(!fsnotify_valid_obj_type(type)))
++	if (WARN_ON(!fsnotify_valid_obj_type(obj_type)))
+ 		return -EINVAL;
+ 
+ 	/* Backend is expected to check for zero fsid (e.g. tmpfs) */
+@@ -565,7 +628,8 @@ static int fsnotify_add_mark_list(struct fsnotify_mark *mark,
+ 	conn = fsnotify_grab_connector(connp);
+ 	if (!conn) {
+ 		spin_unlock(&mark->lock);
+-		err = fsnotify_attach_connector_to_object(connp, type, fsid);
++		err = fsnotify_attach_connector_to_object(connp, obj_type,
++							  fsid);
+ 		if (err)
+ 			return err;
+ 		goto restart;
+@@ -604,7 +668,7 @@ static int fsnotify_add_mark_list(struct fsnotify_mark *mark,
+ 
+ 		if ((lmark->group == mark->group) &&
+ 		    (lmark->flags & FSNOTIFY_MARK_FLAG_ATTACHED) &&
+-		    !allow_dups) {
++		    !(mark->group->flags & FSNOTIFY_GROUP_DUPS)) {
+ 			err = -EEXIST;
+ 			goto out_err;
+ 		}
+@@ -638,13 +702,13 @@ static int fsnotify_add_mark_list(struct fsnotify_mark *mark,
+  * event types should be delivered to which group.
+  */
+ int fsnotify_add_mark_locked(struct fsnotify_mark *mark,
+-			     fsnotify_connp_t *connp, unsigned int type,
+-			     int allow_dups, __kernel_fsid_t *fsid)
++			     fsnotify_connp_t *connp, unsigned int obj_type,
++			     int add_flags, __kernel_fsid_t *fsid)
+ {
+ 	struct fsnotify_group *group = mark->group;
+ 	int ret = 0;
+ 
+-	BUG_ON(!mutex_is_locked(&group->mark_mutex));
++	fsnotify_group_assert_locked(group);
+ 
+ 	/*
+ 	 * LOCKING ORDER!!!!
+@@ -656,16 +720,14 @@ int fsnotify_add_mark_locked(struct fsnotify_mark *mark,
+ 	mark->flags |= FSNOTIFY_MARK_FLAG_ALIVE | FSNOTIFY_MARK_FLAG_ATTACHED;
+ 
+ 	list_add(&mark->g_list, &group->marks_list);
+-	atomic_inc(&group->num_marks);
+ 	fsnotify_get_mark(mark); /* for g_list */
+ 	spin_unlock(&mark->lock);
+ 
+-	ret = fsnotify_add_mark_list(mark, connp, type, allow_dups, fsid);
++	ret = fsnotify_add_mark_list(mark, connp, obj_type, add_flags, fsid);
+ 	if (ret)
+ 		goto err;
+ 
+-	if (mark->mask)
+-		fsnotify_recalc_mask(mark->connector);
++	fsnotify_recalc_mask(mark->connector);
+ 
+ 	return ret;
+ err:
+@@ -674,21 +736,21 @@ int fsnotify_add_mark_locked(struct fsnotify_mark *mark,
+ 			 FSNOTIFY_MARK_FLAG_ATTACHED);
+ 	list_del_init(&mark->g_list);
+ 	spin_unlock(&mark->lock);
+-	atomic_dec(&group->num_marks);
+ 
+ 	fsnotify_put_mark(mark);
+ 	return ret;
+ }
+ 
+ int fsnotify_add_mark(struct fsnotify_mark *mark, fsnotify_connp_t *connp,
+-		      unsigned int type, int allow_dups, __kernel_fsid_t *fsid)
++		      unsigned int obj_type, int add_flags,
++		      __kernel_fsid_t *fsid)
+ {
+ 	int ret;
+ 	struct fsnotify_group *group = mark->group;
+ 
+-	mutex_lock(&group->mark_mutex);
+-	ret = fsnotify_add_mark_locked(mark, connp, type, allow_dups, fsid);
+-	mutex_unlock(&group->mark_mutex);
++	fsnotify_group_lock(group);
++	ret = fsnotify_add_mark_locked(mark, connp, obj_type, add_flags, fsid);
++	fsnotify_group_unlock(group);
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(fsnotify_add_mark);
+@@ -722,14 +784,14 @@ EXPORT_SYMBOL_GPL(fsnotify_find_mark);
+ 
+ /* Clear any marks in a group with given type mask */
+ void fsnotify_clear_marks_by_group(struct fsnotify_group *group,
+-				   unsigned int type_mask)
++				   unsigned int obj_type)
+ {
+ 	struct fsnotify_mark *lmark, *mark;
+ 	LIST_HEAD(to_free);
+ 	struct list_head *head = &to_free;
+ 
+ 	/* Skip selection step if we want to clear all marks. */
+-	if (type_mask == FSNOTIFY_OBJ_ALL_TYPES_MASK) {
++	if (obj_type == FSNOTIFY_OBJ_TYPE_ANY) {
+ 		head = &group->marks_list;
+ 		goto clear;
+ 	}
+@@ -742,24 +804,24 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group,
+ 	 * move marks to free to to_free list in one go and then free marks in
+ 	 * to_free list one by one.
+ 	 */
+-	mutex_lock(&group->mark_mutex);
++	fsnotify_group_lock(group);
+ 	list_for_each_entry_safe(mark, lmark, &group->marks_list, g_list) {
+-		if ((1U << mark->connector->type) & type_mask)
++		if (mark->connector->type == obj_type)
+ 			list_move(&mark->g_list, &to_free);
+ 	}
+-	mutex_unlock(&group->mark_mutex);
++	fsnotify_group_unlock(group);
+ 
+ clear:
+ 	while (1) {
+-		mutex_lock(&group->mark_mutex);
++		fsnotify_group_lock(group);
+ 		if (list_empty(head)) {
+-			mutex_unlock(&group->mark_mutex);
++			fsnotify_group_unlock(group);
+ 			break;
+ 		}
+ 		mark = list_first_entry(head, struct fsnotify_mark, g_list);
+ 		fsnotify_get_mark(mark);
+ 		fsnotify_detach_mark(mark);
+-		mutex_unlock(&group->mark_mutex);
++		fsnotify_group_unlock(group);
+ 		fsnotify_free_mark(mark);
+ 		fsnotify_put_mark(mark);
+ 	}
+diff --git a/fs/notify/notification.c b/fs/notify/notification.c
+index 75d79d6d3ef09..9022ae650cf86 100644
+--- a/fs/notify/notification.c
++++ b/fs/notify/notification.c
+@@ -47,13 +47,6 @@ u32 fsnotify_get_cookie(void)
+ }
+ EXPORT_SYMBOL_GPL(fsnotify_get_cookie);
+ 
+-/* return true if the notify queue is empty, false otherwise */
+-bool fsnotify_notify_queue_is_empty(struct fsnotify_group *group)
+-{
+-	assert_spin_locked(&group->notification_lock);
+-	return list_empty(&group->notification_list) ? true : false;
+-}
+-
+ void fsnotify_destroy_event(struct fsnotify_group *group,
+ 			    struct fsnotify_event *event)
+ {
+@@ -71,20 +64,26 @@ void fsnotify_destroy_event(struct fsnotify_group *group,
+ 		WARN_ON(!list_empty(&event->list));
+ 		spin_unlock(&group->notification_lock);
+ 	}
+-	group->ops->free_event(event);
++	group->ops->free_event(group, event);
+ }
+ 
+ /*
+- * Add an event to the group notification queue.  The group can later pull this
+- * event off the queue to deal with.  The function returns 0 if the event was
+- * added to the queue, 1 if the event was merged with some other queued event,
++ * Try to add an event to the notification queue.
++ * The group can later pull this event off the queue to deal with.
++ * The group can use the @merge hook to merge the event with a queued event.
++ * The group can use the @insert hook to insert the event into hash table.
++ * The function returns:
++ * 0 if the event was added to a queue
++ * 1 if the event was merged with some other queued event
+  * 2 if the event was not queued - either the queue of events has overflown
+- * or the group is shutting down.
++ *   or the group is shutting down.
+  */
+-int fsnotify_add_event(struct fsnotify_group *group,
+-		       struct fsnotify_event *event,
+-		       int (*merge)(struct list_head *,
+-				    struct fsnotify_event *))
++int fsnotify_insert_event(struct fsnotify_group *group,
++			  struct fsnotify_event *event,
++			  int (*merge)(struct fsnotify_group *,
++				       struct fsnotify_event *),
++			  void (*insert)(struct fsnotify_group *,
++					 struct fsnotify_event *))
+ {
+ 	int ret = 0;
+ 	struct list_head *list = &group->notification_list;
+@@ -111,7 +110,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
+ 	}
+ 
+ 	if (!list_empty(list) && merge) {
+-		ret = merge(list, event);
++		ret = merge(group, event);
+ 		if (ret) {
+ 			spin_unlock(&group->notification_lock);
+ 			return ret;
+@@ -121,6 +120,8 @@ int fsnotify_add_event(struct fsnotify_group *group,
+ queue:
+ 	group->q_len++;
+ 	list_add_tail(&event->list, list);
++	if (insert)
++		insert(group, event);
+ 	spin_unlock(&group->notification_lock);
+ 
+ 	wake_up(&group->notification_waitq);
+@@ -141,33 +142,36 @@ void fsnotify_remove_queued_event(struct fsnotify_group *group,
+ }
+ 
+ /*
+- * Remove and return the first event from the notification list.  It is the
+- * responsibility of the caller to destroy the obtained event
++ * Return the first event on the notification list without removing it.
++ * Returns NULL if the list is empty.
+  */
+-struct fsnotify_event *fsnotify_remove_first_event(struct fsnotify_group *group)
++struct fsnotify_event *fsnotify_peek_first_event(struct fsnotify_group *group)
+ {
+-	struct fsnotify_event *event;
+-
+ 	assert_spin_locked(&group->notification_lock);
+ 
+-	pr_debug("%s: group=%p\n", __func__, group);
++	if (fsnotify_notify_queue_is_empty(group))
++		return NULL;
+ 
+-	event = list_first_entry(&group->notification_list,
+-				 struct fsnotify_event, list);
+-	fsnotify_remove_queued_event(group, event);
+-	return event;
++	return list_first_entry(&group->notification_list,
++				struct fsnotify_event, list);
+ }
+ 
+ /*
+- * This will not remove the event, that must be done with
+- * fsnotify_remove_first_event()
++ * Remove and return the first event from the notification list.  It is the
++ * responsibility of the caller to destroy the obtained event
+  */
+-struct fsnotify_event *fsnotify_peek_first_event(struct fsnotify_group *group)
++struct fsnotify_event *fsnotify_remove_first_event(struct fsnotify_group *group)
+ {
+-	assert_spin_locked(&group->notification_lock);
++	struct fsnotify_event *event = fsnotify_peek_first_event(group);
+ 
+-	return list_first_entry(&group->notification_list,
+-				struct fsnotify_event, list);
++	if (!event)
++		return NULL;
++
++	pr_debug("%s: group=%p event=%p\n", __func__, group, event);
++
++	fsnotify_remove_queued_event(group, event);
++
++	return event;
+ }
+ 
+ /*
+diff --git a/fs/open.c b/fs/open.c
+index 83f62cf1432c8..d69312a2d434b 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -492,7 +492,7 @@ SYSCALL_DEFINE1(chdir, const char __user *, filename)
+ 	if (error)
+ 		goto out;
+ 
+-	error = inode_permission(path.dentry->d_inode, MAY_EXEC | MAY_CHDIR);
++	error = path_permission(&path, MAY_EXEC | MAY_CHDIR);
+ 	if (error)
+ 		goto dput_and_out;
+ 
+@@ -521,7 +521,7 @@ SYSCALL_DEFINE1(fchdir, unsigned int, fd)
+ 	if (!d_can_lookup(f.file->f_path.dentry))
+ 		goto out_putf;
+ 
+-	error = inode_permission(file_inode(f.file), MAY_EXEC | MAY_CHDIR);
++	error = file_permission(f.file, MAY_EXEC | MAY_CHDIR);
+ 	if (!error)
+ 		set_fs_pwd(current->fs, &f.file->f_path);
+ out_putf:
+@@ -540,7 +540,7 @@ SYSCALL_DEFINE1(chroot, const char __user *, filename)
+ 	if (error)
+ 		goto out;
+ 
+-	error = inode_permission(path.dentry->d_inode, MAY_EXEC | MAY_CHDIR);
++	error = path_permission(&path, MAY_EXEC | MAY_CHDIR);
+ 	if (error)
+ 		goto dput_and_out;
+ 
+@@ -954,6 +954,47 @@ struct file *dentry_open(const struct path *path, int flags,
+ }
+ EXPORT_SYMBOL(dentry_open);
+ 
++/**
++ * dentry_create - Create and open a file
++ * @path: path to create
++ * @flags: O_ flags
++ * @mode: mode bits for new file
++ * @cred: credentials to use
++ *
++ * Caller must hold the parent directory's lock, and have prepared
++ * a negative dentry, placed in @path->dentry, for the new file.
++ *
++ * Caller sets @path->mnt to the vfsmount of the filesystem where
++ * the new file is to be created. The parent directory and the
++ * negative dentry must reside on the same filesystem instance.
++ *
++ * On success, returns a "struct file *". Otherwise a ERR_PTR
++ * is returned.
++ */
++struct file *dentry_create(const struct path *path, int flags, umode_t mode,
++			   const struct cred *cred)
++{
++	struct file *f;
++	int error;
++
++	validate_creds(cred);
++	f = alloc_empty_file(flags, cred);
++	if (IS_ERR(f))
++		return f;
++
++	error = vfs_create(d_inode(path->dentry->d_parent),
++			   path->dentry, mode, true);
++	if (!error)
++		error = vfs_open(path, f);
++
++	if (unlikely(error)) {
++		fput(f);
++		return ERR_PTR(error);
++	}
++	return f;
++}
++EXPORT_SYMBOL(dentry_create);
++
+ struct file *open_with_fake_path(const struct path *path, int flags,
+ 				struct inode *inode, const struct cred *cred)
+ {
+@@ -1310,7 +1351,7 @@ EXPORT_SYMBOL(filp_close);
+  */
+ SYSCALL_DEFINE1(close, unsigned int, fd)
+ {
+-	int retval = __close_fd(current->files, fd);
++	int retval = close_fd(fd);
+ 
+ 	/* can't restart close syscall because file table entry was cleared */
+ 	if (unlikely(retval == -ERESTARTSYS ||
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 26f91868fbdaf..87b7a4a74f4ed 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -212,9 +212,16 @@ static inline int ovl_do_rename(struct inode *olddir, struct dentry *olddentry,
+ 				unsigned int flags)
+ {
+ 	int err;
++	struct renamedata rd = {
++		.old_dir 	= olddir,
++		.old_dentry 	= olddentry,
++		.new_dir 	= newdir,
++		.new_dentry 	= newdentry,
++		.flags 		= flags,
++	};
+ 
+ 	pr_debug("rename(%pd2, %pd2, 0x%x)\n", olddentry, newdentry, flags);
+-	err = vfs_rename(olddir, olddentry, newdir, newdentry, NULL, flags);
++	err = vfs_rename(&rd);
+ 	if (err) {
+ 		pr_debug("...rename(%pd2, %pd2, ...) = %i\n",
+ 			 olddentry, newdentry, err);
+diff --git a/fs/proc/fd.c b/fs/proc/fd.c
+index 81882a13212d3..cb51763ed554b 100644
+--- a/fs/proc/fd.c
++++ b/fs/proc/fd.c
+@@ -28,14 +28,13 @@ static int seq_show(struct seq_file *m, void *v)
+ 	if (!task)
+ 		return -ENOENT;
+ 
+-	files = get_files_struct(task);
+-	put_task_struct(task);
+-
++	task_lock(task);
++	files = task->files;
+ 	if (files) {
+ 		unsigned int fd = proc_fd(m->private);
+ 
+ 		spin_lock(&files->file_lock);
+-		file = fcheck_files(files, fd);
++		file = files_lookup_fd_locked(files, fd);
+ 		if (file) {
+ 			struct fdtable *fdt = files_fdtable(files);
+ 
+@@ -47,8 +46,9 @@ static int seq_show(struct seq_file *m, void *v)
+ 			ret = 0;
+ 		}
+ 		spin_unlock(&files->file_lock);
+-		put_files_struct(files);
+ 	}
++	task_unlock(task);
++	put_task_struct(task);
+ 
+ 	if (ret)
+ 		return ret;
+@@ -57,6 +57,7 @@ static int seq_show(struct seq_file *m, void *v)
+ 		   (long long)file->f_pos, f_flags,
+ 		   real_mount(file->f_path.mnt)->mnt_id);
+ 
++	/* show_fd_locks() never deferences files so a stale value is safe */
+ 	show_fd_locks(m, file, files);
+ 	if (seq_has_overflowed(m))
+ 		goto out;
+@@ -83,18 +84,13 @@ static const struct file_operations proc_fdinfo_file_operations = {
+ 
+ static bool tid_fd_mode(struct task_struct *task, unsigned fd, fmode_t *mode)
+ {
+-	struct files_struct *files = get_files_struct(task);
+ 	struct file *file;
+ 
+-	if (!files)
+-		return false;
+-
+ 	rcu_read_lock();
+-	file = fcheck_files(files, fd);
++	file = task_lookup_fd_rcu(task, fd);
+ 	if (file)
+ 		*mode = file->f_mode;
+ 	rcu_read_unlock();
+-	put_files_struct(files);
+ 	return !!file;
+ }
+ 
+@@ -146,29 +142,22 @@ static const struct dentry_operations tid_fd_dentry_operations = {
+ 
+ static int proc_fd_link(struct dentry *dentry, struct path *path)
+ {
+-	struct files_struct *files = NULL;
+ 	struct task_struct *task;
+ 	int ret = -ENOENT;
+ 
+ 	task = get_proc_task(d_inode(dentry));
+ 	if (task) {
+-		files = get_files_struct(task);
+-		put_task_struct(task);
+-	}
+-
+-	if (files) {
+ 		unsigned int fd = proc_fd(d_inode(dentry));
+ 		struct file *fd_file;
+ 
+-		spin_lock(&files->file_lock);
+-		fd_file = fcheck_files(files, fd);
++		fd_file = fget_task(task, fd);
+ 		if (fd_file) {
+ 			*path = fd_file->f_path;
+ 			path_get(&fd_file->f_path);
+ 			ret = 0;
++			fput(fd_file);
+ 		}
+-		spin_unlock(&files->file_lock);
+-		put_files_struct(files);
++		put_task_struct(task);
+ 	}
+ 
+ 	return ret;
+@@ -229,7 +218,6 @@ static int proc_readfd_common(struct file *file, struct dir_context *ctx,
+ 			      instantiate_t instantiate)
+ {
+ 	struct task_struct *p = get_proc_task(file_inode(file));
+-	struct files_struct *files;
+ 	unsigned int fd;
+ 
+ 	if (!p)
+@@ -237,22 +225,18 @@ static int proc_readfd_common(struct file *file, struct dir_context *ctx,
+ 
+ 	if (!dir_emit_dots(file, ctx))
+ 		goto out;
+-	files = get_files_struct(p);
+-	if (!files)
+-		goto out;
+ 
+ 	rcu_read_lock();
+-	for (fd = ctx->pos - 2;
+-	     fd < files_fdtable(files)->max_fds;
+-	     fd++, ctx->pos++) {
++	for (fd = ctx->pos - 2;; fd++) {
+ 		struct file *f;
+ 		struct fd_data data;
+ 		char name[10 + 1];
+ 		unsigned int len;
+ 
+-		f = fcheck_files(files, fd);
++		f = task_lookup_next_fd_rcu(p, &fd);
++		ctx->pos = fd + 2LL;
+ 		if (!f)
+-			continue;
++			break;
+ 		data.mode = f->f_mode;
+ 		rcu_read_unlock();
+ 		data.fd = fd;
+@@ -261,13 +245,11 @@ static int proc_readfd_common(struct file *file, struct dir_context *ctx,
+ 		if (!proc_fill_cache(file, ctx,
+ 				     name, len, instantiate, p,
+ 				     &data))
+-			goto out_fd_loop;
++			goto out;
+ 		cond_resched();
+ 		rcu_read_lock();
+ 	}
+ 	rcu_read_unlock();
+-out_fd_loop:
+-	put_files_struct(files);
+ out:
+ 	put_task_struct(p);
+ 	return 0;
+diff --git a/fs/udf/file.c b/fs/udf/file.c
+index e283a62701b83..25f7c915f22b7 100644
+--- a/fs/udf/file.c
++++ b/fs/udf/file.c
+@@ -181,7 +181,7 @@ long udf_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 	long old_block, new_block;
+ 	int result;
+ 
+-	if (inode_permission(inode, MAY_READ) != 0) {
++	if (file_permission(filp, MAY_READ) != 0) {
+ 		udf_debug("no permission to access inode %lu\n", inode->i_ino);
+ 		return -EPERM;
+ 	}
+diff --git a/fs/verity/enable.c b/fs/verity/enable.c
+index 5ceae66e1ae02..29becb66d0d88 100644
+--- a/fs/verity/enable.c
++++ b/fs/verity/enable.c
+@@ -369,7 +369,7 @@ int fsverity_ioctl_enable(struct file *filp, const void __user *uarg)
+ 	 * has verity enabled, and to stabilize the data being hashed.
+ 	 */
+ 
+-	err = inode_permission(inode, MAY_WRITE);
++	err = file_permission(filp, MAY_WRITE);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/include/linux/dnotify.h b/include/linux/dnotify.h
+index 0aad774beaec4..b87c3b85a166c 100644
+--- a/include/linux/dnotify.h
++++ b/include/linux/dnotify.h
+@@ -26,7 +26,7 @@ struct dnotify_struct {
+ 			    FS_MODIFY | FS_MODIFY_CHILD |\
+ 			    FS_ACCESS | FS_ACCESS_CHILD |\
+ 			    FS_ATTRIB | FS_ATTRIB_CHILD |\
+-			    FS_CREATE | FS_DN_RENAME |\
++			    FS_CREATE | FS_RENAME |\
+ 			    FS_MOVED_FROM | FS_MOVED_TO)
+ 
+ extern int dir_notify_enable;
+diff --git a/include/linux/errno.h b/include/linux/errno.h
+index d73f597a24849..8b0c754bab025 100644
+--- a/include/linux/errno.h
++++ b/include/linux/errno.h
+@@ -31,5 +31,6 @@
+ #define EJUKEBOX	528	/* Request initiated, but will not complete before timeout */
+ #define EIOCBQUEUED	529	/* iocb queued, will get completion event */
+ #define ERECALLCONFLICT	530	/* conflict with recalled state */
++#define ENOGRACE	531	/* NFS file lock reclaim refused */
+ 
+ #endif
+diff --git a/include/linux/exportfs.h b/include/linux/exportfs.h
+index 3ceb72b67a7aa..218fc5c54e901 100644
+--- a/include/linux/exportfs.h
++++ b/include/linux/exportfs.h
+@@ -213,12 +213,27 @@ struct export_operations {
+ 			  bool write, u32 *device_generation);
+ 	int (*commit_blocks)(struct inode *inode, struct iomap *iomaps,
+ 			     int nr_iomaps, struct iattr *iattr);
++	u64 (*fetch_iversion)(struct inode *);
++#define	EXPORT_OP_NOWCC			(0x1) /* don't collect v3 wcc data */
++#define	EXPORT_OP_NOSUBTREECHK		(0x2) /* no subtree checking */
++#define	EXPORT_OP_CLOSE_BEFORE_UNLINK	(0x4) /* close files before unlink */
++#define EXPORT_OP_REMOTE_FS		(0x8) /* Filesystem is remote */
++#define EXPORT_OP_NOATOMIC_ATTR		(0x10) /* Filesystem cannot supply
++						  atomic attribute updates
++						*/
++#define EXPORT_OP_FLUSH_ON_CLOSE	(0x20) /* fs flushes file data on close */
++	unsigned long	flags;
+ };
+ 
+ extern int exportfs_encode_inode_fh(struct inode *inode, struct fid *fid,
+ 				    int *max_len, struct inode *parent);
+ extern int exportfs_encode_fh(struct dentry *dentry, struct fid *fid,
+ 	int *max_len, int connectable);
++extern struct dentry *exportfs_decode_fh_raw(struct vfsmount *mnt,
++					     struct fid *fid, int fh_len,
++					     int fileid_type,
++					     int (*acceptable)(void *, struct dentry *),
++					     void *context);
+ extern struct dentry *exportfs_decode_fh(struct vfsmount *mnt, struct fid *fid,
+ 	int fh_len, int fileid_type, int (*acceptable)(void *, struct dentry *),
+ 	void *context);
+diff --git a/include/linux/fanotify.h b/include/linux/fanotify.h
+index 3e9c56ee651f7..558844c8d2598 100644
+--- a/include/linux/fanotify.h
++++ b/include/linux/fanotify.h
+@@ -2,8 +2,11 @@
+ #ifndef _LINUX_FANOTIFY_H
+ #define _LINUX_FANOTIFY_H
+ 
++#include <linux/sysctl.h>
+ #include <uapi/linux/fanotify.h>
+ 
++extern struct ctl_table fanotify_table[]; /* for sysctl */
++
+ #define FAN_GROUP_FLAG(group, flag) \
+ 	((group)->fanotify_data.flags & (flag))
+ 
+@@ -15,27 +18,62 @@
+  * these constant, the programs may break if re-compiled with new uapi headers
+  * and then run on an old kernel.
+  */
+-#define FANOTIFY_CLASS_BITS	(FAN_CLASS_NOTIF | FAN_CLASS_CONTENT | \
++
++/* Group classes where permission events are allowed */
++#define FANOTIFY_PERM_CLASSES	(FAN_CLASS_CONTENT | \
+ 				 FAN_CLASS_PRE_CONTENT)
+ 
+-#define FANOTIFY_FID_BITS	(FAN_REPORT_FID | FAN_REPORT_DFID_NAME)
++#define FANOTIFY_CLASS_BITS	(FAN_CLASS_NOTIF | FANOTIFY_PERM_CLASSES)
++
++#define FANOTIFY_FID_BITS	(FAN_REPORT_DFID_NAME_TARGET)
++
++#define FANOTIFY_INFO_MODES	(FANOTIFY_FID_BITS | FAN_REPORT_PIDFD)
++
++/*
++ * fanotify_init() flags that require CAP_SYS_ADMIN.
++ * We do not allow unprivileged groups to request permission events.
++ * We do not allow unprivileged groups to get other process pid in events.
++ * We do not allow unprivileged groups to use unlimited resources.
++ */
++#define FANOTIFY_ADMIN_INIT_FLAGS	(FANOTIFY_PERM_CLASSES | \
++					 FAN_REPORT_TID | \
++					 FAN_REPORT_PIDFD | \
++					 FAN_UNLIMITED_QUEUE | \
++					 FAN_UNLIMITED_MARKS)
++
++/*
++ * fanotify_init() flags that are allowed for user without CAP_SYS_ADMIN.
++ * FAN_CLASS_NOTIF is the only class we allow for unprivileged group.
++ * We do not allow unprivileged groups to get file descriptors in events,
++ * so one of the flags for reporting file handles is required.
++ */
++#define FANOTIFY_USER_INIT_FLAGS	(FAN_CLASS_NOTIF | \
++					 FANOTIFY_FID_BITS | \
++					 FAN_CLOEXEC | FAN_NONBLOCK)
++
++#define FANOTIFY_INIT_FLAGS	(FANOTIFY_ADMIN_INIT_FLAGS | \
++				 FANOTIFY_USER_INIT_FLAGS)
+ 
+-#define FANOTIFY_INIT_FLAGS	(FANOTIFY_CLASS_BITS | FANOTIFY_FID_BITS | \
+-				 FAN_REPORT_TID | \
+-				 FAN_CLOEXEC | FAN_NONBLOCK | \
+-				 FAN_UNLIMITED_QUEUE | FAN_UNLIMITED_MARKS)
++/* Internal group flags */
++#define FANOTIFY_UNPRIV		0x80000000
++#define FANOTIFY_INTERNAL_GROUP_FLAGS	(FANOTIFY_UNPRIV)
+ 
+ #define FANOTIFY_MARK_TYPE_BITS	(FAN_MARK_INODE | FAN_MARK_MOUNT | \
+ 				 FAN_MARK_FILESYSTEM)
+ 
++#define FANOTIFY_MARK_CMD_BITS	(FAN_MARK_ADD | FAN_MARK_REMOVE | \
++				 FAN_MARK_FLUSH)
++
++#define FANOTIFY_MARK_IGNORE_BITS (FAN_MARK_IGNORED_MASK | \
++				   FAN_MARK_IGNORE)
++
+ #define FANOTIFY_MARK_FLAGS	(FANOTIFY_MARK_TYPE_BITS | \
+-				 FAN_MARK_ADD | \
+-				 FAN_MARK_REMOVE | \
++				 FANOTIFY_MARK_CMD_BITS | \
++				 FANOTIFY_MARK_IGNORE_BITS | \
+ 				 FAN_MARK_DONT_FOLLOW | \
+ 				 FAN_MARK_ONLYDIR | \
+-				 FAN_MARK_IGNORED_MASK | \
+ 				 FAN_MARK_IGNORED_SURV_MODIFY | \
+-				 FAN_MARK_FLUSH)
++				 FAN_MARK_EVICTABLE)
+ 
+ /*
+  * Events that can be reported with data type FSNOTIFY_EVENT_PATH.
+@@ -49,15 +87,23 @@
+  * Directory entry modification events - reported only to directory
+  * where entry is modified and not to a watching parent.
+  */
+-#define FANOTIFY_DIRENT_EVENTS	(FAN_MOVE | FAN_CREATE | FAN_DELETE)
++#define FANOTIFY_DIRENT_EVENTS	(FAN_MOVE | FAN_CREATE | FAN_DELETE | \
++				 FAN_RENAME)
++
++/* Events that can be reported with event->fd */
++#define FANOTIFY_FD_EVENTS (FANOTIFY_PATH_EVENTS | FANOTIFY_PERM_EVENTS)
+ 
+ /* Events that can only be reported with data type FSNOTIFY_EVENT_INODE */
+ #define FANOTIFY_INODE_EVENTS	(FANOTIFY_DIRENT_EVENTS | \
+ 				 FAN_ATTRIB | FAN_MOVE_SELF | FAN_DELETE_SELF)
+ 
++/* Events that can only be reported with data type FSNOTIFY_EVENT_ERROR */
++#define FANOTIFY_ERROR_EVENTS	(FAN_FS_ERROR)
++
+ /* Events that user can request to be notified on */
+ #define FANOTIFY_EVENTS		(FANOTIFY_PATH_EVENTS | \
+-				 FANOTIFY_INODE_EVENTS)
++				 FANOTIFY_INODE_EVENTS | \
++				 FANOTIFY_ERROR_EVENTS)
+ 
+ /* Events that require a permission response from user */
+ #define FANOTIFY_PERM_EVENTS	(FAN_OPEN_PERM | FAN_ACCESS_PERM | \
+@@ -71,6 +117,10 @@
+ 					 FANOTIFY_PERM_EVENTS | \
+ 					 FAN_Q_OVERFLOW | FAN_ONDIR)
+ 
++/* Events and flags relevant only for directories */
++#define FANOTIFY_DIRONLY_EVENT_BITS	(FANOTIFY_DIRENT_EVENTS | \
++					 FAN_EVENT_ON_CHILD | FAN_ONDIR)
++
+ #define ALL_FANOTIFY_EVENT_BITS		(FANOTIFY_OUTGOING_EVENTS | \
+ 					 FANOTIFY_EVENT_FLAGS)
+ 
+diff --git a/include/linux/fdtable.h b/include/linux/fdtable.h
+index f1a99d3e55707..4ed3589f9294e 100644
+--- a/include/linux/fdtable.h
++++ b/include/linux/fdtable.h
+@@ -80,7 +80,7 @@ struct dentry;
+ /*
+  * The caller must ensure that fd table isn't shared or hold rcu or file lock
+  */
+-static inline struct file *__fcheck_files(struct files_struct *files, unsigned int fd)
++static inline struct file *files_lookup_fd_raw(struct files_struct *files, unsigned int fd)
+ {
+ 	struct fdtable *fdt = rcu_dereference_raw(files->fdt);
+ 
+@@ -91,37 +91,40 @@ static inline struct file *__fcheck_files(struct files_struct *files, unsigned i
+ 	return NULL;
+ }
+ 
+-static inline struct file *fcheck_files(struct files_struct *files, unsigned int fd)
++static inline struct file *files_lookup_fd_locked(struct files_struct *files, unsigned int fd)
+ {
+-	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&
+-			   !lockdep_is_held(&files->file_lock),
++	RCU_LOCKDEP_WARN(!lockdep_is_held(&files->file_lock),
+ 			   "suspicious rcu_dereference_check() usage");
+-	return __fcheck_files(files, fd);
++	return files_lookup_fd_raw(files, fd);
+ }
+ 
+-/*
+- * Check whether the specified fd has an open file.
+- */
+-#define fcheck(fd)	fcheck_files(current->files, fd)
++static inline struct file *files_lookup_fd_rcu(struct files_struct *files, unsigned int fd)
++{
++	RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
++			   "suspicious rcu_dereference_check() usage");
++	return files_lookup_fd_raw(files, fd);
++}
++
++static inline struct file *lookup_fd_rcu(unsigned int fd)
++{
++	return files_lookup_fd_rcu(current->files, fd);
++}
++
++struct file *task_lookup_fd_rcu(struct task_struct *task, unsigned int fd);
++struct file *task_lookup_next_fd_rcu(struct task_struct *task, unsigned int *fd);
+ 
+ struct task_struct;
+ 
+ struct files_struct *get_files_struct(struct task_struct *);
+ void put_files_struct(struct files_struct *fs);
+-void reset_files_struct(struct files_struct *);
+-int unshare_files(struct files_struct **);
++int unshare_files(void);
+ struct files_struct *dup_fd(struct files_struct *, unsigned, int *) __latent_entropy;
+ void do_close_on_exec(struct files_struct *);
+ int iterate_fd(struct files_struct *, unsigned,
+ 		int (*)(const void *, struct file *, unsigned),
+ 		const void *);
+ 
+-extern int __alloc_fd(struct files_struct *files,
+-		      unsigned start, unsigned end, unsigned flags);
+-extern void __fd_install(struct files_struct *files,
+-		      unsigned int fd, struct file *file);
+-extern int __close_fd(struct files_struct *files,
+-		      unsigned int fd);
++extern int close_fd(unsigned int fd);
+ extern int __close_range(unsigned int fd, unsigned int max_fd, unsigned int flags);
+ extern int close_fd_get_file(unsigned int fd, struct file **res);
+ extern int unshare_fd(unsigned long unshare_flags, unsigned int max_fds,
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 6de70634e5471..6a26ef54ac25d 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -996,6 +996,7 @@ static inline struct file *get_file(struct file *f)
+ #define FL_UNLOCK_PENDING	512 /* Lease is being broken */
+ #define FL_OFDLCK	1024	/* lock is "owned" by struct file */
+ #define FL_LAYOUT	2048	/* outstanding pNFS layout */
++#define FL_RECLAIM	4096	/* reclaiming from a reboot server */
+ 
+ #define FL_CLOSE_POSIX (FL_POSIX | FL_CLOSE)
+ 
+@@ -1016,6 +1017,7 @@ struct file_lock_operations {
+ };
+ 
+ struct lock_manager_operations {
++	void *lm_mod_owner;
+ 	fl_owner_t (*lm_get_owner)(fl_owner_t);
+ 	void (*lm_put_owner)(fl_owner_t);
+ 	void (*lm_notify)(struct file_lock *);	/* unblock callback */
+@@ -1024,6 +1026,8 @@ struct lock_manager_operations {
+ 	int (*lm_change)(struct file_lock *, int, struct list_head *);
+ 	void (*lm_setup)(struct file_lock *, void **);
+ 	bool (*lm_breaker_owns_lease)(struct file_lock *);
++	bool (*lm_lock_expirable)(struct file_lock *cfl);
++	void (*lm_expire_lock)(void);
+ };
+ 
+ struct lock_manager {
+@@ -1162,6 +1166,15 @@ extern void lease_unregister_notifier(struct notifier_block *);
+ struct files_struct;
+ extern void show_fd_locks(struct seq_file *f,
+ 			 struct file *filp, struct files_struct *files);
++extern bool locks_owner_has_blockers(struct file_lock_context *flctx,
++			fl_owner_t owner);
++
++static inline struct file_lock_context *
++locks_inode_context(const struct inode *inode)
++{
++	return smp_load_acquire(&inode->i_flctx);
++}
++
+ #else /* !CONFIG_FILE_LOCKING */
+ static inline int fcntl_getlk(struct file *file, unsigned int cmd,
+ 			      struct flock __user *user)
+@@ -1302,6 +1315,18 @@ static inline int lease_modify(struct file_lock *fl, int arg,
+ struct files_struct;
+ static inline void show_fd_locks(struct seq_file *f,
+ 			struct file *filp, struct files_struct *files) {}
++static inline bool locks_owner_has_blockers(struct file_lock_context *flctx,
++			fl_owner_t owner)
++{
++	return false;
++}
++
++static inline struct file_lock_context *
++locks_inode_context(const struct inode *inode)
++{
++	return NULL;
++}
++
+ #endif /* !CONFIG_FILE_LOCKING */
+ 
+ static inline struct inode *file_inode(const struct file *f)
+@@ -1512,8 +1537,11 @@ struct super_block {
+ 	/* Number of inodes with nlink == 0 but still referenced */
+ 	atomic_long_t s_remove_count;
+ 
+-	/* Pending fsnotify inode refs */
+-	atomic_long_t s_fsnotify_inode_refs;
++	/*
++	 * Number of inode/mount/sb objects that are being watched, note that
++	 * inodes objects are currently double-accounted.
++	 */
++	atomic_long_t s_fsnotify_connectors;
+ 
+ 	/* Being remounted read-only */
+ 	int s_readonly_remount;
+@@ -1780,7 +1808,17 @@ extern int vfs_symlink(struct inode *, struct dentry *, const char *);
+ extern int vfs_link(struct dentry *, struct inode *, struct dentry *, struct inode **);
+ extern int vfs_rmdir(struct inode *, struct dentry *);
+ extern int vfs_unlink(struct inode *, struct dentry *, struct inode **);
+-extern int vfs_rename(struct inode *, struct dentry *, struct inode *, struct dentry *, struct inode **, unsigned int);
++
++struct renamedata {
++	struct inode *old_dir;
++	struct dentry *old_dentry;
++	struct inode *new_dir;
++	struct dentry *new_dentry;
++	struct inode **delegated_inode;
++	unsigned int flags;
++} __randomize_layout;
++
++int vfs_rename(struct renamedata *);
+ 
+ static inline int vfs_whiteout(struct inode *dir, struct dentry *dentry)
+ {
+@@ -2594,6 +2632,8 @@ extern struct file *filp_open(const char *, int, umode_t);
+ extern struct file *file_open_root(struct dentry *, struct vfsmount *,
+ 				   const char *, int, umode_t);
+ extern struct file * dentry_open(const struct path *, int, const struct cred *);
++extern struct file *dentry_create(const struct path *path, int flags,
++				  umode_t mode, const struct cred *cred);
+ extern struct file * open_with_fake_path(const struct path *, int,
+ 					 struct inode*, const struct cred *);
+ static inline struct file *file_clone_open(struct file *file)
+@@ -2824,6 +2864,14 @@ static inline int bmap(struct inode *inode,  sector_t *block)
+ extern int notify_change(struct dentry *, struct iattr *, struct inode **);
+ extern int inode_permission(struct inode *, int);
+ extern int generic_permission(struct inode *, int);
++static inline int file_permission(struct file *file, int mask)
++{
++	return inode_permission(file_inode(file), mask);
++}
++static inline int path_permission(const struct path *path, int mask)
++{
++	return inode_permission(d_inode(path->dentry), mask);
++}
+ extern int __check_sticky(struct inode *dir, struct inode *inode);
+ 
+ static inline bool execute_ok(struct inode *inode)
+diff --git a/include/linux/fsnotify.h b/include/linux/fsnotify.h
+index 79add91eaa04e..bb8467cd11ae2 100644
+--- a/include/linux/fsnotify.h
++++ b/include/linux/fsnotify.h
+@@ -26,21 +26,27 @@
+  * FS_EVENT_ON_CHILD mask on the parent inode and will not be reported if only
+  * the child is interested and not the parent.
+  */
+-static inline void fsnotify_name(struct inode *dir, __u32 mask,
+-				 struct inode *child,
+-				 const struct qstr *name, u32 cookie)
++static inline int fsnotify_name(__u32 mask, const void *data, int data_type,
++				struct inode *dir, const struct qstr *name,
++				u32 cookie)
+ {
+-	fsnotify(mask, child, FSNOTIFY_EVENT_INODE, dir, name, NULL, cookie);
++	if (atomic_long_read(&dir->i_sb->s_fsnotify_connectors) == 0)
++		return 0;
++
++	return fsnotify(mask, data, data_type, dir, name, NULL, cookie);
+ }
+ 
+ static inline void fsnotify_dirent(struct inode *dir, struct dentry *dentry,
+ 				   __u32 mask)
+ {
+-	fsnotify_name(dir, mask, d_inode(dentry), &dentry->d_name, 0);
++	fsnotify_name(mask, dentry, FSNOTIFY_EVENT_DENTRY, dir, &dentry->d_name, 0);
+ }
+ 
+ static inline void fsnotify_inode(struct inode *inode, __u32 mask)
+ {
++	if (atomic_long_read(&inode->i_sb->s_fsnotify_connectors) == 0)
++		return;
++
+ 	if (S_ISDIR(inode->i_mode))
+ 		mask |= FS_ISDIR;
+ 
+@@ -53,6 +59,9 @@ static inline int fsnotify_parent(struct dentry *dentry, __u32 mask,
+ {
+ 	struct inode *inode = d_inode(dentry);
+ 
++	if (atomic_long_read(&inode->i_sb->s_fsnotify_connectors) == 0)
++		return 0;
++
+ 	if (S_ISDIR(inode->i_mode)) {
+ 		mask |= FS_ISDIR;
+ 
+@@ -77,7 +86,7 @@ static inline int fsnotify_parent(struct dentry *dentry, __u32 mask,
+  */
+ static inline void fsnotify_dentry(struct dentry *dentry, __u32 mask)
+ {
+-	fsnotify_parent(dentry, mask, d_inode(dentry), FSNOTIFY_EVENT_INODE);
++	fsnotify_parent(dentry, mask, dentry, FSNOTIFY_EVENT_DENTRY);
+ }
+ 
+ static inline int fsnotify_file(struct file *file, __u32 mask)
+@@ -135,18 +144,23 @@ static inline void fsnotify_move(struct inode *old_dir, struct inode *new_dir,
+ 	u32 fs_cookie = fsnotify_get_cookie();
+ 	__u32 old_dir_mask = FS_MOVED_FROM;
+ 	__u32 new_dir_mask = FS_MOVED_TO;
++	__u32 rename_mask = FS_RENAME;
+ 	const struct qstr *new_name = &moved->d_name;
+ 
+-	if (old_dir == new_dir)
+-		old_dir_mask |= FS_DN_RENAME;
+-
+ 	if (isdir) {
+ 		old_dir_mask |= FS_ISDIR;
+ 		new_dir_mask |= FS_ISDIR;
++		rename_mask |= FS_ISDIR;
+ 	}
+ 
+-	fsnotify_name(old_dir, old_dir_mask, source, old_name, fs_cookie);
+-	fsnotify_name(new_dir, new_dir_mask, source, new_name, fs_cookie);
++	/* Event with information about both old and new parent+name */
++	fsnotify_name(rename_mask, moved, FSNOTIFY_EVENT_DENTRY,
++		      old_dir, old_name, 0);
++
++	fsnotify_name(old_dir_mask, source, FSNOTIFY_EVENT_INODE,
++		      old_dir, old_name, fs_cookie);
++	fsnotify_name(new_dir_mask, source, FSNOTIFY_EVENT_INODE,
++		      new_dir, new_name, fs_cookie);
+ 
+ 	if (target)
+ 		fsnotify_link_count(target);
+@@ -181,16 +195,22 @@ static inline void fsnotify_inoderemove(struct inode *inode)
+ 
+ /*
+  * fsnotify_create - 'name' was linked in
++ *
++ * Caller must make sure that dentry->d_name is stable.
++ * Note: some filesystems (e.g. kernfs) leave @dentry negative and instantiate
++ * ->d_inode later
+  */
+-static inline void fsnotify_create(struct inode *inode, struct dentry *dentry)
++static inline void fsnotify_create(struct inode *dir, struct dentry *dentry)
+ {
+-	audit_inode_child(inode, dentry, AUDIT_TYPE_CHILD_CREATE);
++	audit_inode_child(dir, dentry, AUDIT_TYPE_CHILD_CREATE);
+ 
+-	fsnotify_dirent(inode, dentry, FS_CREATE);
++	fsnotify_dirent(dir, dentry, FS_CREATE);
+ }
+ 
+ /*
+  * fsnotify_link - new hardlink in 'inode' directory
++ *
++ * Caller must make sure that new_dentry->d_name is stable.
+  * Note: We have to pass also the linked inode ptr as some filesystems leave
+  *   new_dentry->d_inode NULL and instantiate inode pointer later
+  */
+@@ -200,7 +220,8 @@ static inline void fsnotify_link(struct inode *dir, struct inode *inode,
+ 	fsnotify_link_count(inode);
+ 	audit_inode_child(dir, new_dentry, AUDIT_TYPE_CHILD_CREATE);
+ 
+-	fsnotify_name(dir, FS_CREATE, inode, &new_dentry->d_name, 0);
++	fsnotify_name(FS_CREATE, inode, FSNOTIFY_EVENT_INODE,
++		      dir, &new_dentry->d_name, 0);
+ }
+ 
+ /*
+@@ -219,7 +240,8 @@ static inline void fsnotify_delete(struct inode *dir, struct inode *inode,
+ 	if (S_ISDIR(inode->i_mode))
+ 		mask |= FS_ISDIR;
+ 
+-	fsnotify_name(dir, mask, inode, &dentry->d_name, 0);
++	fsnotify_name(mask, inode, FSNOTIFY_EVENT_INODE, dir, &dentry->d_name,
++		      0);
+ }
+ 
+ /**
+@@ -254,12 +276,16 @@ static inline void fsnotify_unlink(struct inode *dir, struct dentry *dentry)
+ 
+ /*
+  * fsnotify_mkdir - directory 'name' was created
++ *
++ * Caller must make sure that dentry->d_name is stable.
++ * Note: some filesystems (e.g. kernfs) leave @dentry negative and instantiate
++ * ->d_inode later
+  */
+-static inline void fsnotify_mkdir(struct inode *inode, struct dentry *dentry)
++static inline void fsnotify_mkdir(struct inode *dir, struct dentry *dentry)
+ {
+-	audit_inode_child(inode, dentry, AUDIT_TYPE_CHILD_CREATE);
++	audit_inode_child(dir, dentry, AUDIT_TYPE_CHILD_CREATE);
+ 
+-	fsnotify_dirent(inode, dentry, FS_CREATE | FS_ISDIR);
++	fsnotify_dirent(dir, dentry, FS_CREATE | FS_ISDIR);
+ }
+ 
+ /*
+@@ -353,4 +379,17 @@ static inline void fsnotify_change(struct dentry *dentry, unsigned int ia_valid)
+ 		fsnotify_dentry(dentry, mask);
+ }
+ 
++static inline int fsnotify_sb_error(struct super_block *sb, struct inode *inode,
++				    int error)
++{
++	struct fs_error_report report = {
++		.error = error,
++		.inode = inode,
++		.sb = sb,
++	};
++
++	return fsnotify(FS_ERROR, &report, FSNOTIFY_EVENT_ERROR,
++			NULL, NULL, NULL, 0);
++}
++
+ #endif	/* _LINUX_FS_NOTIFY_H */
+diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
+index a2e42d3cd87cf..d7d96c806bff2 100644
+--- a/include/linux/fsnotify_backend.h
++++ b/include/linux/fsnotify_backend.h
+@@ -19,6 +19,8 @@
+ #include <linux/atomic.h>
+ #include <linux/user_namespace.h>
+ #include <linux/refcount.h>
++#include <linux/mempool.h>
++#include <linux/sched/mm.h>
+ 
+ /*
+  * IN_* from inotfy.h lines up EXACTLY with FS_*, this is so we can easily
+@@ -42,13 +44,18 @@
+ 
+ #define FS_UNMOUNT		0x00002000	/* inode on umount fs */
+ #define FS_Q_OVERFLOW		0x00004000	/* Event queued overflowed */
++#define FS_ERROR		0x00008000	/* Filesystem Error (fanotify) */
++
++/*
++ * FS_IN_IGNORED overloads FS_ERROR.  It is only used internally by inotify
++ * which does not support FS_ERROR.
++ */
+ #define FS_IN_IGNORED		0x00008000	/* last inotify event here */
+ 
+ #define FS_OPEN_PERM		0x00010000	/* open event in an permission hook */
+ #define FS_ACCESS_PERM		0x00020000	/* access event in a permissions hook */
+ #define FS_OPEN_EXEC_PERM	0x00040000	/* open/exec event in a permission hook */
+ 
+-#define FS_EXCL_UNLINK		0x04000000	/* do not send events if object is unlinked */
+ /*
+  * Set on inode mark that cares about things that happen to its children.
+  * Always set for dnotify and inotify.
+@@ -56,10 +63,9 @@
+  */
+ #define FS_EVENT_ON_CHILD	0x08000000
+ 
+-#define FS_DN_RENAME		0x10000000	/* file renamed */
++#define FS_RENAME		0x10000000	/* File was renamed */
+ #define FS_DN_MULTISHOT		0x20000000	/* dnotify multishot */
+ #define FS_ISDIR		0x40000000	/* event occurred against dir */
+-#define FS_IN_ONESHOT		0x80000000	/* only send event once */
+ 
+ #define FS_MOVE			(FS_MOVED_FROM | FS_MOVED_TO)
+ 
+@@ -69,7 +75,7 @@
+  * The watching parent may get an FS_ATTRIB|FS_EVENT_ON_CHILD event
+  * when a directory entry inside a child subdir changes.
+  */
+-#define ALL_FSNOTIFY_DIRENT_EVENTS	(FS_CREATE | FS_DELETE | FS_MOVE)
++#define ALL_FSNOTIFY_DIRENT_EVENTS (FS_CREATE | FS_DELETE | FS_MOVE | FS_RENAME)
+ 
+ #define ALL_FSNOTIFY_PERM_EVENTS (FS_OPEN_PERM | FS_ACCESS_PERM | \
+ 				  FS_OPEN_EXEC_PERM)
+@@ -94,12 +100,12 @@
+ /* Events that can be reported to backends */
+ #define ALL_FSNOTIFY_EVENTS (ALL_FSNOTIFY_DIRENT_EVENTS | \
+ 			     FS_EVENTS_POSS_ON_CHILD | \
+-			     FS_DELETE_SELF | FS_MOVE_SELF | FS_DN_RENAME | \
+-			     FS_UNMOUNT | FS_Q_OVERFLOW | FS_IN_IGNORED)
++			     FS_DELETE_SELF | FS_MOVE_SELF | \
++			     FS_UNMOUNT | FS_Q_OVERFLOW | FS_IN_IGNORED | \
++			     FS_ERROR)
+ 
+ /* Extra flags that may be reported with event or control handling of events */
+-#define ALL_FSNOTIFY_FLAGS  (FS_EXCL_UNLINK | FS_ISDIR | FS_IN_ONESHOT | \
+-			     FS_DN_MULTISHOT | FS_EVENT_ON_CHILD)
++#define ALL_FSNOTIFY_FLAGS  (FS_ISDIR | FS_EVENT_ON_CHILD | FS_DN_MULTISHOT)
+ 
+ #define ALL_FSNOTIFY_BITS   (ALL_FSNOTIFY_EVENTS | ALL_FSNOTIFY_FLAGS)
+ 
+@@ -136,6 +142,7 @@ struct mem_cgroup;
+  * @dir:	optional directory associated with event -
+  *		if @file_name is not NULL, this is the directory that
+  *		@file_name is relative to.
++ *		Either @inode or @dir must be non-NULL.
+  * @file_name:	optional file name associated with event
+  * @cookie:	inotify rename cookie
+  *
+@@ -155,7 +162,7 @@ struct fsnotify_ops {
+ 			    const struct qstr *file_name, u32 cookie);
+ 	void (*free_group_priv)(struct fsnotify_group *group);
+ 	void (*freeing_mark)(struct fsnotify_mark *mark, struct fsnotify_group *group);
+-	void (*free_event)(struct fsnotify_event *event);
++	void (*free_event)(struct fsnotify_group *group, struct fsnotify_event *event);
+ 	/* called on final put+free to free memory */
+ 	void (*free_mark)(struct fsnotify_mark *mark);
+ };
+@@ -167,7 +174,6 @@ struct fsnotify_ops {
+  */
+ struct fsnotify_event {
+ 	struct list_head list;
+-	unsigned long objectid;	/* identifier for queue merges */
+ };
+ 
+ /*
+@@ -205,11 +211,14 @@ struct fsnotify_group {
+ 	unsigned int priority;
+ 	bool shutdown;		/* group is being shut down, don't queue more events */
+ 
++#define FSNOTIFY_GROUP_USER	0x01 /* user allocated group */
++#define FSNOTIFY_GROUP_DUPS	0x02 /* allow multiple marks per object */
++#define FSNOTIFY_GROUP_NOFS	0x04 /* group lock is not direct reclaim safe */
++	int flags;
++	unsigned int owner_flags;	/* stored flags of mark_mutex owner */
++
+ 	/* stores all fastpath marks assoc with this group so they can be cleaned on unregister */
+ 	struct mutex mark_mutex;	/* protect marks_list */
+-	atomic_t num_marks;		/* 1 for each mark and 1 for not being
+-					 * past the point of no return when freeing
+-					 * a group */
+ 	atomic_t user_waits;		/* Number of tasks waiting for user
+ 					 * response */
+ 	struct list_head marks_list;	/* all inode marks for this group */
+@@ -234,23 +243,58 @@ struct fsnotify_group {
+ #endif
+ #ifdef CONFIG_FANOTIFY
+ 		struct fanotify_group_private_data {
++			/* Hash table of events for merge */
++			struct hlist_head *merge_hash;
+ 			/* allows a group to block waiting for a userspace response */
+ 			struct list_head access_list;
+ 			wait_queue_head_t access_waitq;
+ 			int flags;           /* flags from fanotify_init() */
+ 			int f_flags; /* event_f_flags from fanotify_init() */
+-			unsigned int max_marks;
+-			struct user_struct *user;
++			struct ucounts *ucounts;
++			mempool_t error_events_pool;
+ 		} fanotify_data;
+ #endif /* CONFIG_FANOTIFY */
+ 	};
+ };
+ 
++/*
++ * These helpers are used to prevent deadlock when reclaiming inodes with
++ * evictable marks of the same group that is allocating a new mark.
++ */
++static inline void fsnotify_group_lock(struct fsnotify_group *group)
++{
++	mutex_lock(&group->mark_mutex);
++	if (group->flags & FSNOTIFY_GROUP_NOFS)
++		group->owner_flags = memalloc_nofs_save();
++}
++
++static inline void fsnotify_group_unlock(struct fsnotify_group *group)
++{
++	if (group->flags & FSNOTIFY_GROUP_NOFS)
++		memalloc_nofs_restore(group->owner_flags);
++	mutex_unlock(&group->mark_mutex);
++}
++
++static inline void fsnotify_group_assert_locked(struct fsnotify_group *group)
++{
++	WARN_ON_ONCE(!mutex_is_locked(&group->mark_mutex));
++	if (group->flags & FSNOTIFY_GROUP_NOFS)
++		WARN_ON_ONCE(!(current->flags & PF_MEMALLOC_NOFS));
++}
++
+ /* When calling fsnotify tell it if the data is a path or inode */
+ enum fsnotify_data_type {
+ 	FSNOTIFY_EVENT_NONE,
+ 	FSNOTIFY_EVENT_PATH,
+ 	FSNOTIFY_EVENT_INODE,
++	FSNOTIFY_EVENT_DENTRY,
++	FSNOTIFY_EVENT_ERROR,
++};
++
++struct fs_error_report {
++	int error;
++	struct inode *inode;
++	struct super_block *sb;
+ };
+ 
+ static inline struct inode *fsnotify_data_inode(const void *data, int data_type)
+@@ -258,8 +302,25 @@ static inline struct inode *fsnotify_data_inode(const void *data, int data_type)
+ 	switch (data_type) {
+ 	case FSNOTIFY_EVENT_INODE:
+ 		return (struct inode *)data;
++	case FSNOTIFY_EVENT_DENTRY:
++		return d_inode(data);
+ 	case FSNOTIFY_EVENT_PATH:
+ 		return d_inode(((const struct path *)data)->dentry);
++	case FSNOTIFY_EVENT_ERROR:
++		return ((struct fs_error_report *)data)->inode;
++	default:
++		return NULL;
++	}
++}
++
++static inline struct dentry *fsnotify_data_dentry(const void *data, int data_type)
++{
++	switch (data_type) {
++	case FSNOTIFY_EVENT_DENTRY:
++		/* Non const is needed for dget() */
++		return (struct dentry *)data;
++	case FSNOTIFY_EVENT_PATH:
++		return ((const struct path *)data)->dentry;
+ 	default:
+ 		return NULL;
+ 	}
+@@ -276,58 +337,110 @@ static inline const struct path *fsnotify_data_path(const void *data,
+ 	}
+ }
+ 
++static inline struct super_block *fsnotify_data_sb(const void *data,
++						   int data_type)
++{
++	switch (data_type) {
++	case FSNOTIFY_EVENT_INODE:
++		return ((struct inode *)data)->i_sb;
++	case FSNOTIFY_EVENT_DENTRY:
++		return ((struct dentry *)data)->d_sb;
++	case FSNOTIFY_EVENT_PATH:
++		return ((const struct path *)data)->dentry->d_sb;
++	case FSNOTIFY_EVENT_ERROR:
++		return ((struct fs_error_report *) data)->sb;
++	default:
++		return NULL;
++	}
++}
++
++static inline struct fs_error_report *fsnotify_data_error_report(
++							const void *data,
++							int data_type)
++{
++	switch (data_type) {
++	case FSNOTIFY_EVENT_ERROR:
++		return (struct fs_error_report *) data;
++	default:
++		return NULL;
++	}
++}
++
++/*
++ * Index to merged marks iterator array that correlates to a type of watch.
++ * The type of watched object can be deduced from the iterator type, but not
++ * the other way around, because an event can match different watched objects
++ * of the same object type.
++ * For example, both parent and child are watching an object of type inode.
++ */
++enum fsnotify_iter_type {
++	FSNOTIFY_ITER_TYPE_INODE,
++	FSNOTIFY_ITER_TYPE_VFSMOUNT,
++	FSNOTIFY_ITER_TYPE_SB,
++	FSNOTIFY_ITER_TYPE_PARENT,
++	FSNOTIFY_ITER_TYPE_INODE2,
++	FSNOTIFY_ITER_TYPE_COUNT
++};
++
++/* The type of object that a mark is attached to */
+ enum fsnotify_obj_type {
++	FSNOTIFY_OBJ_TYPE_ANY = -1,
+ 	FSNOTIFY_OBJ_TYPE_INODE,
+-	FSNOTIFY_OBJ_TYPE_PARENT,
+ 	FSNOTIFY_OBJ_TYPE_VFSMOUNT,
+ 	FSNOTIFY_OBJ_TYPE_SB,
+ 	FSNOTIFY_OBJ_TYPE_COUNT,
+ 	FSNOTIFY_OBJ_TYPE_DETACHED = FSNOTIFY_OBJ_TYPE_COUNT
+ };
+ 
+-#define FSNOTIFY_OBJ_TYPE_INODE_FL	(1U << FSNOTIFY_OBJ_TYPE_INODE)
+-#define FSNOTIFY_OBJ_TYPE_PARENT_FL	(1U << FSNOTIFY_OBJ_TYPE_PARENT)
+-#define FSNOTIFY_OBJ_TYPE_VFSMOUNT_FL	(1U << FSNOTIFY_OBJ_TYPE_VFSMOUNT)
+-#define FSNOTIFY_OBJ_TYPE_SB_FL		(1U << FSNOTIFY_OBJ_TYPE_SB)
+-#define FSNOTIFY_OBJ_ALL_TYPES_MASK	((1U << FSNOTIFY_OBJ_TYPE_COUNT) - 1)
+-
+-static inline bool fsnotify_valid_obj_type(unsigned int type)
++static inline bool fsnotify_valid_obj_type(unsigned int obj_type)
+ {
+-	return (type < FSNOTIFY_OBJ_TYPE_COUNT);
++	return (obj_type < FSNOTIFY_OBJ_TYPE_COUNT);
+ }
+ 
+ struct fsnotify_iter_info {
+-	struct fsnotify_mark *marks[FSNOTIFY_OBJ_TYPE_COUNT];
++	struct fsnotify_mark *marks[FSNOTIFY_ITER_TYPE_COUNT];
++	struct fsnotify_group *current_group;
+ 	unsigned int report_mask;
+ 	int srcu_idx;
+ };
+ 
+ static inline bool fsnotify_iter_should_report_type(
+-		struct fsnotify_iter_info *iter_info, int type)
++		struct fsnotify_iter_info *iter_info, int iter_type)
+ {
+-	return (iter_info->report_mask & (1U << type));
++	return (iter_info->report_mask & (1U << iter_type));
+ }
+ 
+ static inline void fsnotify_iter_set_report_type(
+-		struct fsnotify_iter_info *iter_info, int type)
++		struct fsnotify_iter_info *iter_info, int iter_type)
++{
++	iter_info->report_mask |= (1U << iter_type);
++}
++
++static inline struct fsnotify_mark *fsnotify_iter_mark(
++		struct fsnotify_iter_info *iter_info, int iter_type)
+ {
+-	iter_info->report_mask |= (1U << type);
++	if (fsnotify_iter_should_report_type(iter_info, iter_type))
++		return iter_info->marks[iter_type];
++	return NULL;
+ }
+ 
+-static inline void fsnotify_iter_set_report_type_mark(
+-		struct fsnotify_iter_info *iter_info, int type,
+-		struct fsnotify_mark *mark)
++static inline int fsnotify_iter_step(struct fsnotify_iter_info *iter, int type,
++				     struct fsnotify_mark **markp)
+ {
+-	iter_info->marks[type] = mark;
+-	iter_info->report_mask |= (1U << type);
++	while (type < FSNOTIFY_ITER_TYPE_COUNT) {
++		*markp = fsnotify_iter_mark(iter, type);
++		if (*markp)
++			break;
++		type++;
++	}
++	return type;
+ }
+ 
+ #define FSNOTIFY_ITER_FUNCS(name, NAME) \
+ static inline struct fsnotify_mark *fsnotify_iter_##name##_mark( \
+ 		struct fsnotify_iter_info *iter_info) \
+ { \
+-	return (iter_info->report_mask & FSNOTIFY_OBJ_TYPE_##NAME##_FL) ? \
+-		iter_info->marks[FSNOTIFY_OBJ_TYPE_##NAME] : NULL; \
++	return fsnotify_iter_mark(iter_info, FSNOTIFY_ITER_TYPE_##NAME); \
+ }
+ 
+ FSNOTIFY_ITER_FUNCS(inode, INODE)
+@@ -335,8 +448,13 @@ FSNOTIFY_ITER_FUNCS(parent, PARENT)
+ FSNOTIFY_ITER_FUNCS(vfsmount, VFSMOUNT)
+ FSNOTIFY_ITER_FUNCS(sb, SB)
+ 
+-#define fsnotify_foreach_obj_type(type) \
+-	for (type = 0; type < FSNOTIFY_OBJ_TYPE_COUNT; type++)
++#define fsnotify_foreach_iter_type(type) \
++	for (type = 0; type < FSNOTIFY_ITER_TYPE_COUNT; type++)
++#define fsnotify_foreach_iter_mark_type(iter, mark, type) \
++	for (type = 0; \
++	     type = fsnotify_iter_step(iter, type, &mark), \
++	     type < FSNOTIFY_ITER_TYPE_COUNT; \
++	     type++)
+ 
+ /*
+  * fsnotify_connp_t is what we embed in objects which connector can be attached
+@@ -355,6 +473,7 @@ struct fsnotify_mark_connector {
+ 	spinlock_t lock;
+ 	unsigned short type;	/* Type of object [lock] */
+ #define FSNOTIFY_CONN_FLAG_HAS_FSID	0x01
++#define FSNOTIFY_CONN_FLAG_HAS_IREF	0x02
+ 	unsigned short flags;	/* flags [lock] */
+ 	__kernel_fsid_t fsid;	/* fsid of filesystem containing object */
+ 	union {
+@@ -399,11 +518,18 @@ struct fsnotify_mark {
+ 	struct hlist_node obj_list;
+ 	/* Head of list of marks for an object [mark ref] */
+ 	struct fsnotify_mark_connector *connector;
+-	/* Events types to ignore [mark->lock, group->mark_mutex] */
+-	__u32 ignored_mask;
+-#define FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY	0x01
+-#define FSNOTIFY_MARK_FLAG_ALIVE		0x02
+-#define FSNOTIFY_MARK_FLAG_ATTACHED		0x04
++	/* Events types and flags to ignore [mark->lock, group->mark_mutex] */
++	__u32 ignore_mask;
++	/* General fsnotify mark flags */
++#define FSNOTIFY_MARK_FLAG_ALIVE		0x0001
++#define FSNOTIFY_MARK_FLAG_ATTACHED		0x0002
++	/* inotify mark flags */
++#define FSNOTIFY_MARK_FLAG_EXCL_UNLINK		0x0010
++#define FSNOTIFY_MARK_FLAG_IN_ONESHOT		0x0020
++	/* fanotify mark flags */
++#define FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY	0x0100
++#define FSNOTIFY_MARK_FLAG_NO_IREF		0x0200
++#define FSNOTIFY_MARK_FLAG_HAS_IGNORE_FLAGS	0x0400
+ 	unsigned int flags;		/* flags [mark->lock] */
+ };
+ 
+@@ -469,7 +595,9 @@ static inline void fsnotify_update_flags(struct dentry *dentry)
+ /* called from fsnotify listeners, such as fanotify or dnotify */
+ 
+ /* create a new group */
+-extern struct fsnotify_group *fsnotify_alloc_group(const struct fsnotify_ops *ops);
++extern struct fsnotify_group *fsnotify_alloc_group(
++				const struct fsnotify_ops *ops,
++				int flags);
+ /* get reference to a group */
+ extern void fsnotify_get_group(struct fsnotify_group *group);
+ /* drop reference on a group from fsnotify_alloc_group */
+@@ -484,17 +612,39 @@ extern int fsnotify_fasync(int fd, struct file *file, int on);
+ extern void fsnotify_destroy_event(struct fsnotify_group *group,
+ 				   struct fsnotify_event *event);
+ /* attach the event to the group notification queue */
+-extern int fsnotify_add_event(struct fsnotify_group *group,
+-			      struct fsnotify_event *event,
+-			      int (*merge)(struct list_head *,
+-					   struct fsnotify_event *));
++extern int fsnotify_insert_event(struct fsnotify_group *group,
++				 struct fsnotify_event *event,
++				 int (*merge)(struct fsnotify_group *,
++					      struct fsnotify_event *),
++				 void (*insert)(struct fsnotify_group *,
++						struct fsnotify_event *));
++
++static inline int fsnotify_add_event(struct fsnotify_group *group,
++				     struct fsnotify_event *event,
++				     int (*merge)(struct fsnotify_group *,
++						  struct fsnotify_event *))
++{
++	return fsnotify_insert_event(group, event, merge, NULL);
++}
++
+ /* Queue overflow event to a notification group */
+ static inline void fsnotify_queue_overflow(struct fsnotify_group *group)
+ {
+ 	fsnotify_add_event(group, group->overflow_event, NULL);
+ }
+ 
+-/* true if the group notification queue is empty */
++static inline bool fsnotify_is_overflow_event(u32 mask)
++{
++	return mask & FS_Q_OVERFLOW;
++}
++
++static inline bool fsnotify_notify_queue_is_empty(struct fsnotify_group *group)
++{
++	assert_spin_locked(&group->notification_lock);
++
++	return list_empty(&group->notification_list);
++}
++
+ extern bool fsnotify_notify_queue_is_empty(struct fsnotify_group *group);
+ /* return, but do not dequeue the first event on the notification queue */
+ extern struct fsnotify_event *fsnotify_peek_first_event(struct fsnotify_group *group);
+@@ -506,6 +656,101 @@ extern void fsnotify_remove_queued_event(struct fsnotify_group *group,
+ 
+ /* functions used to manipulate the marks attached to inodes */
+ 
++/*
++ * Canonical "ignore mask" including event flags.
++ *
++ * Note the subtle semantic difference from the legacy ->ignored_mask.
++ * ->ignored_mask traditionally only meant which events should be ignored,
++ * while ->ignore_mask also includes flags regarding the type of objects on
++ * which events should be ignored.
++ */
++static inline __u32 fsnotify_ignore_mask(struct fsnotify_mark *mark)
++{
++	__u32 ignore_mask = mark->ignore_mask;
++
++	/* The event flags in ignore mask take effect */
++	if (mark->flags & FSNOTIFY_MARK_FLAG_HAS_IGNORE_FLAGS)
++		return ignore_mask;
++
++	/*
++	 * Legacy behavior:
++	 * - Always ignore events on dir
++	 * - Ignore events on child if parent is watching children
++	 */
++	ignore_mask |= FS_ISDIR;
++	ignore_mask &= ~FS_EVENT_ON_CHILD;
++	ignore_mask |= mark->mask & FS_EVENT_ON_CHILD;
++
++	return ignore_mask;
++}
++
++/* Legacy ignored_mask - only event types to ignore */
++static inline __u32 fsnotify_ignored_events(struct fsnotify_mark *mark)
++{
++	return mark->ignore_mask & ALL_FSNOTIFY_EVENTS;
++}
++
++/*
++ * Check if mask (or ignore mask) should be applied depending if victim is a
++ * directory and whether it is reported to a watching parent.
++ */
++static inline bool fsnotify_mask_applicable(__u32 mask, bool is_dir,
++					    int iter_type)
++{
++	/* Should mask be applied to a directory? */
++	if (is_dir && !(mask & FS_ISDIR))
++		return false;
++
++	/* Should mask be applied to a child? */
++	if (iter_type == FSNOTIFY_ITER_TYPE_PARENT &&
++	    !(mask & FS_EVENT_ON_CHILD))
++		return false;
++
++	return true;
++}
++
++/*
++ * Effective ignore mask taking into account if event victim is a
++ * directory and whether it is reported to a watching parent.
++ */
++static inline __u32 fsnotify_effective_ignore_mask(struct fsnotify_mark *mark,
++						   bool is_dir, int iter_type)
++{
++	__u32 ignore_mask = fsnotify_ignored_events(mark);
++
++	if (!ignore_mask)
++		return 0;
++
++	/* For non-dir and non-child, no need to consult the event flags */
++	if (!is_dir && iter_type != FSNOTIFY_ITER_TYPE_PARENT)
++		return ignore_mask;
++
++	ignore_mask = fsnotify_ignore_mask(mark);
++	if (!fsnotify_mask_applicable(ignore_mask, is_dir, iter_type))
++		return 0;
++
++	return ignore_mask & ALL_FSNOTIFY_EVENTS;
++}
++
++/* Get mask for calculating object interest taking ignore mask into account */
++static inline __u32 fsnotify_calc_mask(struct fsnotify_mark *mark)
++{
++	__u32 mask = mark->mask;
++
++	if (!fsnotify_ignored_events(mark))
++		return mask;
++
++	/* Interest in FS_MODIFY may be needed for clearing ignore mask */
++	if (!(mark->flags & FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY))
++		mask |= FS_MODIFY;
++
++	/*
++	 * If mark is interested in ignoring events on children, the object must
++	 * show interest in those events for fsnotify_parent() to notice it.
++	 */
++	return mask | mark->ignore_mask;
++}
++
+ /* Get mask of events for a list of marks */
+ extern __u32 fsnotify_conn_mask(struct fsnotify_mark_connector *conn);
+ /* Calculate mask of events for a list of marks */
+@@ -520,27 +765,27 @@ extern int fsnotify_get_conn_fsid(const struct fsnotify_mark_connector *conn,
+ 				  __kernel_fsid_t *fsid);
+ /* attach the mark to the object */
+ extern int fsnotify_add_mark(struct fsnotify_mark *mark,
+-			     fsnotify_connp_t *connp, unsigned int type,
+-			     int allow_dups, __kernel_fsid_t *fsid);
++			     fsnotify_connp_t *connp, unsigned int obj_type,
++			     int add_flags, __kernel_fsid_t *fsid);
+ extern int fsnotify_add_mark_locked(struct fsnotify_mark *mark,
+ 				    fsnotify_connp_t *connp,
+-				    unsigned int type, int allow_dups,
++				    unsigned int obj_type, int add_flags,
+ 				    __kernel_fsid_t *fsid);
+ 
+ /* attach the mark to the inode */
+ static inline int fsnotify_add_inode_mark(struct fsnotify_mark *mark,
+ 					  struct inode *inode,
+-					  int allow_dups)
++					  int add_flags)
+ {
+ 	return fsnotify_add_mark(mark, &inode->i_fsnotify_marks,
+-				 FSNOTIFY_OBJ_TYPE_INODE, allow_dups, NULL);
++				 FSNOTIFY_OBJ_TYPE_INODE, add_flags, NULL);
+ }
+ static inline int fsnotify_add_inode_mark_locked(struct fsnotify_mark *mark,
+ 						 struct inode *inode,
+-						 int allow_dups)
++						 int add_flags)
+ {
+ 	return fsnotify_add_mark_locked(mark, &inode->i_fsnotify_marks,
+-					FSNOTIFY_OBJ_TYPE_INODE, allow_dups,
++					FSNOTIFY_OBJ_TYPE_INODE, add_flags,
+ 					NULL);
+ }
+ 
+@@ -553,33 +798,32 @@ extern void fsnotify_detach_mark(struct fsnotify_mark *mark);
+ extern void fsnotify_free_mark(struct fsnotify_mark *mark);
+ /* Wait until all marks queued for destruction are destroyed */
+ extern void fsnotify_wait_marks_destroyed(void);
+-/* run all the marks in a group, and clear all of the marks attached to given object type */
+-extern void fsnotify_clear_marks_by_group(struct fsnotify_group *group, unsigned int type);
++/* Clear all of the marks of a group attached to a given object type */
++extern void fsnotify_clear_marks_by_group(struct fsnotify_group *group,
++					  unsigned int obj_type);
+ /* run all the marks in a group, and clear all of the vfsmount marks */
+ static inline void fsnotify_clear_vfsmount_marks_by_group(struct fsnotify_group *group)
+ {
+-	fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_VFSMOUNT_FL);
++	fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_VFSMOUNT);
+ }
+ /* run all the marks in a group, and clear all of the inode marks */
+ static inline void fsnotify_clear_inode_marks_by_group(struct fsnotify_group *group)
+ {
+-	fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_INODE_FL);
++	fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_INODE);
+ }
+ /* run all the marks in a group, and clear all of the sn marks */
+ static inline void fsnotify_clear_sb_marks_by_group(struct fsnotify_group *group)
+ {
+-	fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_SB_FL);
++	fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_SB);
+ }
+ extern void fsnotify_get_mark(struct fsnotify_mark *mark);
+ extern void fsnotify_put_mark(struct fsnotify_mark *mark);
+ extern void fsnotify_finish_user_wait(struct fsnotify_iter_info *iter_info);
+ extern bool fsnotify_prepare_user_wait(struct fsnotify_iter_info *iter_info);
+ 
+-static inline void fsnotify_init_event(struct fsnotify_event *event,
+-				       unsigned long objectid)
++static inline void fsnotify_init_event(struct fsnotify_event *event)
+ {
+ 	INIT_LIST_HEAD(&event->list);
+-	event->objectid = objectid;
+ }
+ 
+ #else
+diff --git a/include/linux/iversion.h b/include/linux/iversion.h
+index 2917ef990d435..3bfebde5a1a6d 100644
+--- a/include/linux/iversion.h
++++ b/include/linux/iversion.h
+@@ -328,6 +328,19 @@ inode_query_iversion(struct inode *inode)
+ 	return cur >> I_VERSION_QUERIED_SHIFT;
+ }
+ 
++/*
++ * For filesystems without any sort of change attribute, the best we can
++ * do is fake one up from the ctime:
++ */
++static inline u64 time_to_chattr(struct timespec64 *t)
++{
++	u64 chattr = t->tv_sec;
++
++	chattr <<= 32;
++	chattr += t->tv_nsec;
++	return chattr;
++}
++
+ /**
+  * inode_eq_iversion_raw - check whether the raw i_version counter has changed
+  * @inode: inode to check
+diff --git a/include/linux/kallsyms.h b/include/linux/kallsyms.h
+index 481273f0c72d4..465060acc9816 100644
+--- a/include/linux/kallsyms.h
++++ b/include/linux/kallsyms.h
+@@ -71,15 +71,14 @@ static inline void *dereference_symbol_descriptor(void *ptr)
+ 	return ptr;
+ }
+ 
+-#ifdef CONFIG_KALLSYMS
+-/* Lookup the address for a symbol. Returns 0 if not found. */
+-unsigned long kallsyms_lookup_name(const char *name);
+-
+-/* Call a function on each kallsyms symbol in the core kernel */
+ int kallsyms_on_each_symbol(int (*fn)(void *, const char *, struct module *,
+ 				      unsigned long),
+ 			    void *data);
+ 
++#ifdef CONFIG_KALLSYMS
++/* Lookup the address for a symbol. Returns 0 if not found. */
++unsigned long kallsyms_lookup_name(const char *name);
++
+ extern int kallsyms_lookup_size_offset(unsigned long addr,
+ 				  unsigned long *symbolsize,
+ 				  unsigned long *offset);
+@@ -108,14 +107,6 @@ static inline unsigned long kallsyms_lookup_name(const char *name)
+ 	return 0;
+ }
+ 
+-static inline int kallsyms_on_each_symbol(int (*fn)(void *, const char *,
+-						    struct module *,
+-						    unsigned long),
+-					  void *data)
+-{
+-	return 0;
+-}
+-
+ static inline int kallsyms_lookup_size_offset(unsigned long addr,
+ 					      unsigned long *symbolsize,
+ 					      unsigned long *offset)
+diff --git a/include/linux/kthread.h b/include/linux/kthread.h
+index 2484ed97e72f5..9dae77a97a033 100644
+--- a/include/linux/kthread.h
++++ b/include/linux/kthread.h
+@@ -68,6 +68,7 @@ void *kthread_probe_data(struct task_struct *k);
+ int kthread_park(struct task_struct *k);
+ void kthread_unpark(struct task_struct *k);
+ void kthread_parkme(void);
++void kthread_exit(long result) __noreturn;
+ 
+ int kthreadd(void *unused);
+ extern struct task_struct *kthreadd_task;
+diff --git a/include/linux/lockd/bind.h b/include/linux/lockd/bind.h
+index 0520c0cd73f42..3bc9f7410e213 100644
+--- a/include/linux/lockd/bind.h
++++ b/include/linux/lockd/bind.h
+@@ -27,7 +27,8 @@ struct rpc_task;
+ struct nlmsvc_binding {
+ 	__be32			(*fopen)(struct svc_rqst *,
+ 						struct nfs_fh *,
+-						struct file **);
++						struct file **,
++						int mode);
+ 	void			(*fclose)(struct file *);
+ };
+ 
+diff --git a/include/linux/lockd/lockd.h b/include/linux/lockd/lockd.h
+index 666f5f310a041..70ce419e27093 100644
+--- a/include/linux/lockd/lockd.h
++++ b/include/linux/lockd/lockd.h
+@@ -10,6 +10,8 @@
+ #ifndef LINUX_LOCKD_LOCKD_H
+ #define LINUX_LOCKD_LOCKD_H
+ 
++/* XXX: a lot of this should really be under fs/lockd. */
++
+ #include <linux/in.h>
+ #include <linux/in6.h>
+ #include <net/ipv6.h>
+@@ -154,7 +156,8 @@ struct nlm_rqst {
+ struct nlm_file {
+ 	struct hlist_node	f_list;		/* linked list */
+ 	struct nfs_fh		f_handle;	/* NFS file handle */
+-	struct file *		f_file;		/* VFS file pointer */
++	struct file *		f_file[2];	/* VFS file pointers,
++						   indexed by O_ flags */
+ 	struct nlm_share *	f_shares;	/* DOS shares */
+ 	struct list_head	f_blocks;	/* blocked locks */
+ 	unsigned int		f_locks;	/* guesstimate # of locks */
+@@ -267,6 +270,7 @@ typedef int	  (*nlm_host_match_fn_t)(void *cur, struct nlm_host *ref);
+ /*
+  * Server-side lock handling
+  */
++int		  lock_to_openmode(struct file_lock *);
+ __be32		  nlmsvc_lock(struct svc_rqst *, struct nlm_file *,
+ 			      struct nlm_host *, struct nlm_lock *, int,
+ 			      struct nlm_cookie *, int);
+@@ -286,8 +290,9 @@ void		  nlmsvc_locks_init_private(struct file_lock *, struct nlm_host *, pid_t);
+  * File handling for the server personality
+  */
+ __be32		  nlm_lookup_file(struct svc_rqst *, struct nlm_file **,
+-					struct nfs_fh *);
++					struct nlm_lock *);
+ void		  nlm_release_file(struct nlm_file *);
++void		  nlmsvc_put_lockowner(struct nlm_lockowner *);
+ void		  nlmsvc_release_lockowner(struct nlm_lock *);
+ void		  nlmsvc_mark_resources(struct net *);
+ void		  nlmsvc_free_host_resources(struct nlm_host *);
+@@ -299,9 +304,15 @@ void		  nlmsvc_invalidate_all(void);
+ int           nlmsvc_unlock_all_by_sb(struct super_block *sb);
+ int           nlmsvc_unlock_all_by_ip(struct sockaddr *server_addr);
+ 
++static inline struct file *nlmsvc_file_file(struct nlm_file *file)
++{
++	return file->f_file[O_RDONLY] ?
++	       file->f_file[O_RDONLY] : file->f_file[O_WRONLY];
++}
++
+ static inline struct inode *nlmsvc_file_inode(struct nlm_file *file)
+ {
+-	return locks_inode(file->f_file);
++	return locks_inode(nlmsvc_file_file(file));
+ }
+ 
+ static inline int __nlm_privileged_request4(const struct sockaddr *sap)
+diff --git a/include/linux/lockd/xdr.h b/include/linux/lockd/xdr.h
+index 7ab9f264313f0..67e4a2c5500bd 100644
+--- a/include/linux/lockd/xdr.h
++++ b/include/linux/lockd/xdr.h
+@@ -41,6 +41,8 @@ struct nlm_lock {
+ 	struct nfs_fh		fh;
+ 	struct xdr_netobj	oh;
+ 	u32			svid;
++	u64			lock_start;
++	u64			lock_len;
+ 	struct file_lock	fl;
+ };
+ 
+@@ -96,24 +98,19 @@ struct nlm_reboot {
+  */
+ #define NLMSVC_XDRSIZE		sizeof(struct nlm_args)
+ 
+-int	nlmsvc_decode_testargs(struct svc_rqst *, __be32 *);
+-int	nlmsvc_encode_testres(struct svc_rqst *, __be32 *);
+-int	nlmsvc_decode_lockargs(struct svc_rqst *, __be32 *);
+-int	nlmsvc_decode_cancargs(struct svc_rqst *, __be32 *);
+-int	nlmsvc_decode_unlockargs(struct svc_rqst *, __be32 *);
+-int	nlmsvc_encode_res(struct svc_rqst *, __be32 *);
+-int	nlmsvc_decode_res(struct svc_rqst *, __be32 *);
+-int	nlmsvc_encode_void(struct svc_rqst *, __be32 *);
+-int	nlmsvc_decode_void(struct svc_rqst *, __be32 *);
+-int	nlmsvc_decode_shareargs(struct svc_rqst *, __be32 *);
+-int	nlmsvc_encode_shareres(struct svc_rqst *, __be32 *);
+-int	nlmsvc_decode_notify(struct svc_rqst *, __be32 *);
+-int	nlmsvc_decode_reboot(struct svc_rqst *, __be32 *);
+-/*
+-int	nlmclt_encode_testargs(struct rpc_rqst *, u32 *, struct nlm_args *);
+-int	nlmclt_encode_lockargs(struct rpc_rqst *, u32 *, struct nlm_args *);
+-int	nlmclt_encode_cancargs(struct rpc_rqst *, u32 *, struct nlm_args *);
+-int	nlmclt_encode_unlockargs(struct rpc_rqst *, u32 *, struct nlm_args *);
+- */
++bool	nlmsvc_decode_void(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlmsvc_decode_testargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlmsvc_decode_lockargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlmsvc_decode_cancargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlmsvc_decode_unlockargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlmsvc_decode_res(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlmsvc_decode_reboot(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlmsvc_decode_shareargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlmsvc_decode_notify(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++
++bool	nlmsvc_encode_testres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlmsvc_encode_res(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlmsvc_encode_void(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlmsvc_encode_shareres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
+ 
+ #endif /* LOCKD_XDR_H */
+diff --git a/include/linux/lockd/xdr4.h b/include/linux/lockd/xdr4.h
+index e709fe5924f2b..72831e35dca32 100644
+--- a/include/linux/lockd/xdr4.h
++++ b/include/linux/lockd/xdr4.h
+@@ -22,27 +22,22 @@
+ #define	nlm4_fbig		cpu_to_be32(NLM_FBIG)
+ #define	nlm4_failed		cpu_to_be32(NLM_FAILED)
+ 
++void	nlm4svc_set_file_lock_range(struct file_lock *fl, u64 off, u64 len);
++bool	nlm4svc_decode_void(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlm4svc_decode_testargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlm4svc_decode_lockargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlm4svc_decode_cancargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlm4svc_decode_unlockargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlm4svc_decode_res(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlm4svc_decode_reboot(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlm4svc_decode_shareargs(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlm4svc_decode_notify(struct svc_rqst *rqstp, struct xdr_stream *xdr);
+ 
++bool	nlm4svc_encode_testres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlm4svc_encode_res(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlm4svc_encode_void(struct svc_rqst *rqstp, struct xdr_stream *xdr);
++bool	nlm4svc_encode_shareres(struct svc_rqst *rqstp, struct xdr_stream *xdr);
+ 
+-int	nlm4svc_decode_testargs(struct svc_rqst *, __be32 *);
+-int	nlm4svc_encode_testres(struct svc_rqst *, __be32 *);
+-int	nlm4svc_decode_lockargs(struct svc_rqst *, __be32 *);
+-int	nlm4svc_decode_cancargs(struct svc_rqst *, __be32 *);
+-int	nlm4svc_decode_unlockargs(struct svc_rqst *, __be32 *);
+-int	nlm4svc_encode_res(struct svc_rqst *, __be32 *);
+-int	nlm4svc_decode_res(struct svc_rqst *, __be32 *);
+-int	nlm4svc_encode_void(struct svc_rqst *, __be32 *);
+-int	nlm4svc_decode_void(struct svc_rqst *, __be32 *);
+-int	nlm4svc_decode_shareargs(struct svc_rqst *, __be32 *);
+-int	nlm4svc_encode_shareres(struct svc_rqst *, __be32 *);
+-int	nlm4svc_decode_notify(struct svc_rqst *, __be32 *);
+-int	nlm4svc_decode_reboot(struct svc_rqst *, __be32 *);
+-/*
+-int	nlmclt_encode_testargs(struct rpc_rqst *, u32 *, struct nlm_args *);
+-int	nlmclt_encode_lockargs(struct rpc_rqst *, u32 *, struct nlm_args *);
+-int	nlmclt_encode_cancargs(struct rpc_rqst *, u32 *, struct nlm_args *);
+-int	nlmclt_encode_unlockargs(struct rpc_rqst *, u32 *, struct nlm_args *);
+- */
+ extern const struct rpc_version nlm_version4;
+ 
+ #endif /* LOCKD_XDR4_H */
+diff --git a/include/linux/module.h b/include/linux/module.h
+index 6264617bab4d4..a55a40c28568e 100644
+--- a/include/linux/module.h
++++ b/include/linux/module.h
+@@ -582,7 +582,7 @@ static inline bool within_module(unsigned long addr, const struct module *mod)
+ 	return within_module_init(addr, mod) || within_module_core(addr, mod);
+ }
+ 
+-/* Search for module by name: must hold module_mutex. */
++/* Search for module by name: must be in a RCU-sched critical section. */
+ struct module *find_module(const char *name);
+ 
+ struct symsearch {
+@@ -604,13 +604,9 @@ int module_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
+ /* Look for this name: can be of form module:name. */
+ unsigned long module_kallsyms_lookup_name(const char *name);
+ 
+-int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
+-					     struct module *, unsigned long),
+-				   void *data);
+-
+-extern void __noreturn __module_put_and_exit(struct module *mod,
++extern void __noreturn __module_put_and_kthread_exit(struct module *mod,
+ 			long code);
+-#define module_put_and_exit(code) __module_put_and_exit(THIS_MODULE, code)
++#define module_put_and_kthread_exit(code) __module_put_and_kthread_exit(THIS_MODULE, code)
+ 
+ #ifdef CONFIG_MODULE_UNLOAD
+ int module_refcount(struct module *mod);
+@@ -791,14 +787,6 @@ static inline unsigned long module_kallsyms_lookup_name(const char *name)
+ 	return 0;
+ }
+ 
+-static inline int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
+-							   struct module *,
+-							   unsigned long),
+-						 void *data)
+-{
+-	return 0;
+-}
+-
+ static inline int register_module_notifier(struct notifier_block *nb)
+ {
+ 	/* no events will happen anyway, so this can always succeed */
+@@ -810,7 +798,7 @@ static inline int unregister_module_notifier(struct notifier_block *nb)
+ 	return 0;
+ }
+ 
+-#define module_put_and_exit(code) do_exit(code)
++#define module_put_and_kthread_exit(code) kthread_exit(code)
+ 
+ static inline void print_modules(void)
+ {
+@@ -887,4 +875,8 @@ static inline bool module_sig_ok(struct module *module)
+ }
+ #endif	/* CONFIG_MODULE_SIG */
+ 
++int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
++					     struct module *, unsigned long),
++				   void *data);
++
+ #endif /* _LINUX_MODULE_H */
+diff --git a/include/linux/nfs.h b/include/linux/nfs.h
+index 0dc7ad38a0da4..b06375e88e589 100644
+--- a/include/linux/nfs.h
++++ b/include/linux/nfs.h
+@@ -36,14 +36,6 @@ static inline void nfs_copy_fh(struct nfs_fh *target, const struct nfs_fh *sourc
+ 	memcpy(target->data, source->data, source->size);
+ }
+ 
+-
+-/*
+- * This is really a general kernel constant, but since nothing like
+- * this is defined in the kernel headers, I have to do it here.
+- */
+-#define NFS_OFFSET_MAX		((__s64)((~(__u64)0) >> 1))
+-
+-
+ enum nfs3_stable_how {
+ 	NFS_UNSTABLE = 0,
+ 	NFS_DATA_SYNC = 1,
+diff --git a/include/linux/nfs4.h b/include/linux/nfs4.h
+index 9dc7eeac924f0..ea88d0f462c9d 100644
+--- a/include/linux/nfs4.h
++++ b/include/linux/nfs4.h
+@@ -385,13 +385,6 @@ enum lock_type4 {
+ 	NFS4_WRITEW_LT = 4
+ };
+ 
+-enum change_attr_type4 {
+-	NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR = 0,
+-	NFS4_CHANGE_TYPE_IS_VERSION_COUNTER = 1,
+-	NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS = 2,
+-	NFS4_CHANGE_TYPE_IS_TIME_METADATA = 3,
+-	NFS4_CHANGE_TYPE_IS_UNDEFINED = 4
+-};
+ 
+ /* Mandatory Attributes */
+ #define FATTR4_WORD0_SUPPORTED_ATTRS    (1UL << 0)
+@@ -459,7 +452,6 @@ enum change_attr_type4 {
+ #define FATTR4_WORD2_LAYOUT_BLKSIZE     (1UL << 1)
+ #define FATTR4_WORD2_MDSTHRESHOLD       (1UL << 4)
+ #define FATTR4_WORD2_CLONE_BLKSIZE	(1UL << 13)
+-#define FATTR4_WORD2_CHANGE_ATTR_TYPE	(1UL << 15)
+ #define FATTR4_WORD2_SECURITY_LABEL     (1UL << 16)
+ #define FATTR4_WORD2_MODE_UMASK		(1UL << 17)
+ #define FATTR4_WORD2_XATTR_SUPPORT	(1UL << 18)
+@@ -725,4 +717,17 @@ enum nfs4_setxattr_options {
+ 	SETXATTR4_CREATE	= 1,
+ 	SETXATTR4_REPLACE	= 2,
+ };
++
++enum {
++	RCA4_TYPE_MASK_RDATA_DLG	= 0,
++	RCA4_TYPE_MASK_WDATA_DLG	= 1,
++	RCA4_TYPE_MASK_DIR_DLG		= 2,
++	RCA4_TYPE_MASK_FILE_LAYOUT	= 3,
++	RCA4_TYPE_MASK_BLK_LAYOUT	= 4,
++	RCA4_TYPE_MASK_OBJ_LAYOUT_MIN	= 8,
++	RCA4_TYPE_MASK_OBJ_LAYOUT_MAX	= 9,
++	RCA4_TYPE_MASK_OTHER_LAYOUT_MIN	= 12,
++	RCA4_TYPE_MASK_OTHER_LAYOUT_MAX	= 15,
++};
++
+ #endif
+diff --git a/include/linux/nfs_ssc.h b/include/linux/nfs_ssc.h
+index f5ba0fbff72fe..22265b1ff0800 100644
+--- a/include/linux/nfs_ssc.h
++++ b/include/linux/nfs_ssc.h
+@@ -8,6 +8,7 @@
+  */
+ 
+ #include <linux/nfs_fs.h>
++#include <linux/sunrpc/svc.h>
+ 
+ extern struct nfs_ssc_client_ops_tbl nfs_ssc_client_tbl;
+ 
+@@ -54,6 +55,19 @@ static inline void nfs42_ssc_close(struct file *filep)
+ }
+ #endif
+ 
++struct nfsd4_ssc_umount_item {
++	struct list_head nsui_list;
++	bool nsui_busy;
++	/*
++	 * nsui_refcnt inited to 2, 1 on list and 1 for consumer. Entry
++	 * is removed when refcnt drops to 1 and nsui_expire expires.
++	 */
++	refcount_t nsui_refcnt;
++	unsigned long nsui_expire;
++	struct vfsmount *nsui_vfsmount;
++	char nsui_ipaddr[RPC_MAX_ADDRBUFLEN + 1];
++};
++
+ /*
+  * NFS_FS
+  */
+diff --git a/include/linux/nfsacl.h b/include/linux/nfsacl.h
+index 103d446953234..8e76a79cdc6ae 100644
+--- a/include/linux/nfsacl.h
++++ b/include/linux/nfsacl.h
+@@ -38,5 +38,11 @@ nfsacl_encode(struct xdr_buf *buf, unsigned int base, struct inode *inode,
+ extern int
+ nfsacl_decode(struct xdr_buf *buf, unsigned int base, unsigned int *aclcnt,
+ 	      struct posix_acl **pacl);
++extern bool
++nfs_stream_decode_acl(struct xdr_stream *xdr, unsigned int *aclcnt,
++		      struct posix_acl **pacl);
++extern bool
++nfs_stream_encode_acl(struct xdr_stream *xdr, struct inode *inode,
++		      struct posix_acl *acl, int encode_entries, int typeflag);
+ 
+ #endif  /* __LINUX_NFSACL_H */
+diff --git a/include/linux/pid.h b/include/linux/pid.h
+index fa10acb8d6a42..af308e15f174c 100644
+--- a/include/linux/pid.h
++++ b/include/linux/pid.h
+@@ -78,6 +78,7 @@ struct file;
+ 
+ extern struct pid *pidfd_pid(const struct file *file);
+ struct pid *pidfd_get_pid(unsigned int fd, unsigned int *flags);
++int pidfd_create(struct pid *pid, unsigned int flags);
+ 
+ static inline struct pid *get_pid(struct pid *pid)
+ {
+diff --git a/include/linux/sched/user.h b/include/linux/sched/user.h
+index a8ec3b6093fcb..3632c5d6ec559 100644
+--- a/include/linux/sched/user.h
++++ b/include/linux/sched/user.h
+@@ -14,9 +14,6 @@ struct user_struct {
+ 	refcount_t __count;	/* reference count */
+ 	atomic_t processes;	/* How many processes does this user have? */
+ 	atomic_t sigpending;	/* How many pending signals does this user have? */
+-#ifdef CONFIG_FANOTIFY
+-	atomic_t fanotify_listeners;
+-#endif
+ #ifdef CONFIG_EPOLL
+ 	atomic_long_t epoll_watches; /* The number of file descriptors currently watched */
+ #endif
+diff --git a/include/linux/sunrpc/msg_prot.h b/include/linux/sunrpc/msg_prot.h
+index 43f854487539b..938c2bf29db88 100644
+--- a/include/linux/sunrpc/msg_prot.h
++++ b/include/linux/sunrpc/msg_prot.h
+@@ -10,9 +10,6 @@
+ 
+ #define RPC_VERSION 2
+ 
+-/* size of an XDR encoding unit in bytes, i.e. 32bit */
+-#define XDR_UNIT	(4)
+-
+ /* spec defines authentication flavor as an unsigned 32 bit integer */
+ typedef u32	rpc_authflavor_t;
+ 
+diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
+index 386628b36bc75..1cf7a7799cc04 100644
+--- a/include/linux/sunrpc/svc.h
++++ b/include/linux/sunrpc/svc.h
+@@ -19,6 +19,7 @@
+ #include <linux/sunrpc/svcauth.h>
+ #include <linux/wait.h>
+ #include <linux/mm.h>
++#include <linux/pagevec.h>
+ 
+ /* statistics for svc_pool structures */
+ struct svc_pool_stats {
+@@ -51,25 +52,6 @@ struct svc_pool {
+ 	unsigned long		sp_flags;
+ } ____cacheline_aligned_in_smp;
+ 
+-struct svc_serv;
+-
+-struct svc_serv_ops {
+-	/* Callback to use when last thread exits. */
+-	void		(*svo_shutdown)(struct svc_serv *, struct net *);
+-
+-	/* function for service threads to run */
+-	int		(*svo_function)(void *);
+-
+-	/* queue up a transport for servicing */
+-	void		(*svo_enqueue_xprt)(struct svc_xprt *);
+-
+-	/* set up thread (or whatever) execution context */
+-	int		(*svo_setup)(struct svc_serv *, struct svc_pool *, int);
+-
+-	/* optional module to count when adding threads (pooled svcs only) */
+-	struct module	*svo_module;
+-};
+-
+ /*
+  * RPC service.
+  *
+@@ -84,6 +66,7 @@ struct svc_serv {
+ 	struct svc_program *	sv_program;	/* RPC program */
+ 	struct svc_stat *	sv_stats;	/* RPC statistics */
+ 	spinlock_t		sv_lock;
++	struct kref		sv_refcnt;
+ 	unsigned int		sv_nrthreads;	/* # of server threads */
+ 	unsigned int		sv_maxconn;	/* max connections allowed or
+ 						 * '0' causing max to be based
+@@ -101,7 +84,8 @@ struct svc_serv {
+ 
+ 	unsigned int		sv_nrpools;	/* number of thread pools */
+ 	struct svc_pool *	sv_pools;	/* array of thread pools */
+-	const struct svc_serv_ops *sv_ops;	/* server operations */
++	int			(*sv_threadfn)(void *data);
++
+ #if defined(CONFIG_SUNRPC_BACKCHANNEL)
+ 	struct list_head	sv_cb_list;	/* queue for callback requests
+ 						 * that arrive over the same
+@@ -113,15 +97,30 @@ struct svc_serv {
+ #endif /* CONFIG_SUNRPC_BACKCHANNEL */
+ };
+ 
+-/*
+- * We use sv_nrthreads as a reference count.  svc_destroy() drops
+- * this refcount, so we need to bump it up around operations that
+- * change the number of threads.  Horrible, but there it is.
+- * Should be called with the "service mutex" held.
++/**
++ * svc_get() - increment reference count on a SUNRPC serv
++ * @serv:  the svc_serv to have count incremented
++ *
++ * Returns: the svc_serv that was passed in.
+  */
+-static inline void svc_get(struct svc_serv *serv)
++static inline struct svc_serv *svc_get(struct svc_serv *serv)
+ {
+-	serv->sv_nrthreads++;
++	kref_get(&serv->sv_refcnt);
++	return serv;
++}
++
++void svc_destroy(struct kref *);
++
++/**
++ * svc_put - decrement reference count on a SUNRPC serv
++ * @serv:  the svc_serv to have count decremented
++ *
++ * When the reference count reaches zero, svc_destroy()
++ * is called to clean up and free the serv.
++ */
++static inline void svc_put(struct svc_serv *serv)
++{
++	kref_put(&serv->sv_refcnt, svc_destroy);
+ }
+ 
+ /*
+@@ -247,12 +246,16 @@ struct svc_rqst {
+ 
+ 	size_t			rq_xprt_hlen;	/* xprt header len */
+ 	struct xdr_buf		rq_arg;
++	struct xdr_stream	rq_arg_stream;
++	struct xdr_stream	rq_res_stream;
++	struct page		*rq_scratch_page;
+ 	struct xdr_buf		rq_res;
+ 	struct page		*rq_pages[RPCSVC_MAXPAGES + 1];
+ 	struct page *		*rq_respages;	/* points into rq_pages */
+ 	struct page *		*rq_next_page; /* next reply page to use */
+ 	struct page *		*rq_page_end;  /* one past the last page */
+ 
++	struct pagevec		rq_pvec;
+ 	struct kvec		rq_vec[RPCSVC_MAXPAGES]; /* generally useful.. */
+ 	struct bio_vec		rq_bvec[RPCSVC_MAXPAGES];
+ 
+@@ -272,13 +275,13 @@ struct svc_rqst {
+ #define	RQ_VICTIM	(5)			/* about to be shut down */
+ #define	RQ_BUSY		(6)			/* request is busy */
+ #define	RQ_DATA		(7)			/* request has data */
+-#define RQ_AUTHERR	(8)			/* Request status is auth error */
+ 	unsigned long		rq_flags;	/* flags field */
+ 	ktime_t			rq_qtime;	/* enqueue time */
+ 
+ 	void *			rq_argp;	/* decoded arguments */
+ 	void *			rq_resp;	/* xdr'd results */
+ 	void *			rq_auth_data;	/* flavor-specific data */
++	__be32			rq_auth_stat;	/* authentication status */
+ 	int			rq_auth_slack;	/* extra space xdr code
+ 						 * should leave in head
+ 						 * for krb5i, krb5p.
+@@ -452,40 +455,21 @@ struct svc_procedure {
+ 	/* process the request: */
+ 	__be32			(*pc_func)(struct svc_rqst *);
+ 	/* XDR decode args: */
+-	int			(*pc_decode)(struct svc_rqst *, __be32 *data);
++	bool			(*pc_decode)(struct svc_rqst *rqstp,
++					     struct xdr_stream *xdr);
+ 	/* XDR encode result: */
+-	int			(*pc_encode)(struct svc_rqst *, __be32 *data);
++	bool			(*pc_encode)(struct svc_rqst *rqstp,
++					     struct xdr_stream *xdr);
+ 	/* XDR free result: */
+ 	void			(*pc_release)(struct svc_rqst *);
+ 	unsigned int		pc_argsize;	/* argument struct size */
++	unsigned int		pc_argzero;	/* how much of argument to clear */
+ 	unsigned int		pc_ressize;	/* result struct size */
+ 	unsigned int		pc_cachetype;	/* cache info (NFS) */
+ 	unsigned int		pc_xdrressize;	/* maximum size of XDR reply */
++	const char *		pc_name;	/* for display */
+ };
+ 
+-/*
+- * Mode for mapping cpus to pools.
+- */
+-enum {
+-	SVC_POOL_AUTO = -1,	/* choose one of the others */
+-	SVC_POOL_GLOBAL,	/* no mapping, just a single global pool
+-				 * (legacy & UP mode) */
+-	SVC_POOL_PERCPU,	/* one pool per cpu */
+-	SVC_POOL_PERNODE	/* one pool per numa node */
+-};
+-
+-struct svc_pool_map {
+-	int count;			/* How many svc_servs use us */
+-	int mode;			/* Note: int not enum to avoid
+-					 * warnings about "enumeration value
+-					 * not handled in switch" */
+-	unsigned int npools;
+-	unsigned int *pool_to;		/* maps pool id to cpu or node */
+-	unsigned int *to_pool;		/* maps cpu or node to pool id */
+-};
+-
+-extern struct svc_pool_map svc_pool_map;
+-
+ /*
+  * Function prototypes.
+  */
+@@ -493,22 +477,17 @@ int svc_rpcb_setup(struct svc_serv *serv, struct net *net);
+ void svc_rpcb_cleanup(struct svc_serv *serv, struct net *net);
+ int svc_bind(struct svc_serv *serv, struct net *net);
+ struct svc_serv *svc_create(struct svc_program *, unsigned int,
+-			    const struct svc_serv_ops *);
++			    int (*threadfn)(void *data));
+ struct svc_rqst *svc_rqst_alloc(struct svc_serv *serv,
+ 					struct svc_pool *pool, int node);
+-struct svc_rqst *svc_prepare_thread(struct svc_serv *serv,
+-					struct svc_pool *pool, int node);
++void		   svc_rqst_replace_page(struct svc_rqst *rqstp,
++					 struct page *page);
+ void		   svc_rqst_free(struct svc_rqst *);
+ void		   svc_exit_thread(struct svc_rqst *);
+-unsigned int	   svc_pool_map_get(void);
+-void		   svc_pool_map_put(void);
+ struct svc_serv *  svc_create_pooled(struct svc_program *, unsigned int,
+-			const struct svc_serv_ops *);
++				     int (*threadfn)(void *data));
+ int		   svc_set_num_threads(struct svc_serv *, struct svc_pool *, int);
+-int		   svc_set_num_threads_sync(struct svc_serv *, struct svc_pool *, int);
+ int		   svc_pool_stats_open(struct svc_serv *serv, struct file *file);
+-void		   svc_destroy(struct svc_serv *);
+-void		   svc_shutdown_net(struct svc_serv *, struct net *);
+ int		   svc_process(struct svc_rqst *);
+ int		   bc_svc_process(struct svc_serv *, struct rpc_rqst *,
+ 			struct svc_rqst *);
+@@ -519,16 +498,14 @@ void		   svc_wake_up(struct svc_serv *);
+ void		   svc_reserve(struct svc_rqst *rqstp, int space);
+ struct svc_pool *  svc_pool_for_cpu(struct svc_serv *serv, int cpu);
+ char *		   svc_print_addr(struct svc_rqst *, char *, size_t);
+-int		   svc_encode_read_payload(struct svc_rqst *rqstp,
+-					   unsigned int offset,
+-					   unsigned int length);
++int		   svc_encode_result_payload(struct svc_rqst *rqstp,
++					     unsigned int offset,
++					     unsigned int length);
+ unsigned int	   svc_fill_write_vector(struct svc_rqst *rqstp,
+-					 struct page **pages,
+-					 struct kvec *first, size_t total);
++					 struct xdr_buf *payload);
+ char		  *svc_fill_symlink_pathname(struct svc_rqst *rqstp,
+ 					     struct kvec *first, void *p,
+ 					     size_t total);
+-__be32		   svc_return_autherr(struct svc_rqst *rqstp, __be32 auth_err);
+ __be32		   svc_generic_init_request(struct svc_rqst *rqstp,
+ 					    const struct svc_program *progp,
+ 					    struct svc_process_info *procinfo);
+@@ -557,4 +534,42 @@ static inline void svc_reserve_auth(struct svc_rqst *rqstp, int space)
+ 	svc_reserve(rqstp, space + rqstp->rq_auth_slack);
+ }
+ 
++/**
++ * svcxdr_init_decode - Prepare an xdr_stream for svc Call decoding
++ * @rqstp: controlling server RPC transaction context
++ *
++ */
++static inline void svcxdr_init_decode(struct svc_rqst *rqstp)
++{
++	struct xdr_stream *xdr = &rqstp->rq_arg_stream;
++	struct kvec *argv = rqstp->rq_arg.head;
++
++	xdr_init_decode(xdr, &rqstp->rq_arg, argv->iov_base, NULL);
++	xdr_set_scratch_page(xdr, rqstp->rq_scratch_page);
++}
++
++/**
++ * svcxdr_init_encode - Prepare an xdr_stream for svc Reply encoding
++ * @rqstp: controlling server RPC transaction context
++ *
++ */
++static inline void svcxdr_init_encode(struct svc_rqst *rqstp)
++{
++	struct xdr_stream *xdr = &rqstp->rq_res_stream;
++	struct xdr_buf *buf = &rqstp->rq_res;
++	struct kvec *resv = buf->head;
++
++	xdr_reset_scratch_buffer(xdr);
++
++	xdr->buf = buf;
++	xdr->iov = resv;
++	xdr->p   = resv->iov_base + resv->iov_len;
++	xdr->end = resv->iov_base + PAGE_SIZE - rqstp->rq_auth_slack;
++	buf->len = resv->iov_len;
++	xdr->page_ptr = buf->pages - 1;
++	buf->buflen = PAGE_SIZE * (1 + rqstp->rq_page_end - buf->pages);
++	buf->buflen -= rqstp->rq_auth_slack;
++	xdr->rqst = NULL;
++}
++
+ #endif /* SUNRPC_SVC_H */
+diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
+index 9dc3a3b88391b..2b870a3f391b1 100644
+--- a/include/linux/sunrpc/svc_rdma.h
++++ b/include/linux/sunrpc/svc_rdma.h
+@@ -207,8 +207,8 @@ extern void svc_rdma_send_error_msg(struct svcxprt_rdma *rdma,
+ 				    struct svc_rdma_recv_ctxt *rctxt,
+ 				    int status);
+ extern int svc_rdma_sendto(struct svc_rqst *);
+-extern int svc_rdma_read_payload(struct svc_rqst *rqstp, unsigned int offset,
+-				 unsigned int length);
++extern int svc_rdma_result_payload(struct svc_rqst *rqstp, unsigned int offset,
++				   unsigned int length);
+ 
+ /* svc_rdma_transport.c */
+ extern struct svc_xprt_class svc_rdma_class;
+diff --git a/include/linux/sunrpc/svc_xprt.h b/include/linux/sunrpc/svc_xprt.h
+index aca35ab5cff24..dbffb92511ef2 100644
+--- a/include/linux/sunrpc/svc_xprt.h
++++ b/include/linux/sunrpc/svc_xprt.h
+@@ -21,8 +21,8 @@ struct svc_xprt_ops {
+ 	int		(*xpo_has_wspace)(struct svc_xprt *);
+ 	int		(*xpo_recvfrom)(struct svc_rqst *);
+ 	int		(*xpo_sendto)(struct svc_rqst *);
+-	int		(*xpo_read_payload)(struct svc_rqst *, unsigned int,
+-					    unsigned int);
++	int		(*xpo_result_payload)(struct svc_rqst *, unsigned int,
++					      unsigned int);
+ 	void		(*xpo_release_rqst)(struct svc_rqst *);
+ 	void		(*xpo_detach)(struct svc_xprt *);
+ 	void		(*xpo_free)(struct svc_xprt *);
+@@ -127,14 +127,16 @@ int	svc_reg_xprt_class(struct svc_xprt_class *);
+ void	svc_unreg_xprt_class(struct svc_xprt_class *);
+ void	svc_xprt_init(struct net *, struct svc_xprt_class *, struct svc_xprt *,
+ 		      struct svc_serv *);
+-int	svc_create_xprt(struct svc_serv *, const char *, struct net *,
+-			const int, const unsigned short, int,
+-			const struct cred *);
+-void	svc_xprt_do_enqueue(struct svc_xprt *xprt);
++int	svc_xprt_create(struct svc_serv *serv, const char *xprt_name,
++			struct net *net, const int family,
++			const unsigned short port, int flags,
++			const struct cred *cred);
++void	svc_xprt_destroy_all(struct svc_serv *serv, struct net *net);
++void	svc_xprt_received(struct svc_xprt *xprt);
+ void	svc_xprt_enqueue(struct svc_xprt *xprt);
+ void	svc_xprt_put(struct svc_xprt *xprt);
+ void	svc_xprt_copy_addrs(struct svc_rqst *rqstp, struct svc_xprt *xprt);
+-void	svc_close_xprt(struct svc_xprt *xprt);
++void	svc_xprt_close(struct svc_xprt *xprt);
+ int	svc_port_is_privileged(struct sockaddr *sin);
+ int	svc_print_xprts(char *buf, int maxlen);
+ struct	svc_xprt *svc_find_xprt(struct svc_serv *serv, const char *xcl_name,
+diff --git a/include/linux/sunrpc/svcauth.h b/include/linux/sunrpc/svcauth.h
+index b0003866a2497..6d9cc9080aca7 100644
+--- a/include/linux/sunrpc/svcauth.h
++++ b/include/linux/sunrpc/svcauth.h
+@@ -127,7 +127,7 @@ struct auth_ops {
+ 	char *	name;
+ 	struct module *owner;
+ 	int	flavour;
+-	int	(*accept)(struct svc_rqst *rq, __be32 *authp);
++	int	(*accept)(struct svc_rqst *rq);
+ 	int	(*release)(struct svc_rqst *rq);
+ 	void	(*domain_release)(struct auth_domain *);
+ 	int	(*set_client)(struct svc_rqst *rq);
+@@ -149,7 +149,7 @@ struct auth_ops {
+ 
+ struct svc_xprt;
+ 
+-extern int	svc_authenticate(struct svc_rqst *rqstp, __be32 *authp);
++extern int	svc_authenticate(struct svc_rqst *rqstp);
+ extern int	svc_authorise(struct svc_rqst *rqstp);
+ extern int	svc_set_client(struct svc_rqst *rqstp);
+ extern int	svc_auth_register(rpc_authflavor_t flavor, struct auth_ops *aops);
+diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
+index b7ac7fe683067..a366d3eb05315 100644
+--- a/include/linux/sunrpc/svcsock.h
++++ b/include/linux/sunrpc/svcsock.h
+@@ -57,10 +57,9 @@ int		svc_recv(struct svc_rqst *, long);
+ int		svc_send(struct svc_rqst *);
+ void		svc_drop(struct svc_rqst *);
+ void		svc_sock_update_bufs(struct svc_serv *serv);
+-bool		svc_alien_sock(struct net *net, int fd);
+-int		svc_addsock(struct svc_serv *serv, const int fd,
+-					char *name_return, const size_t len,
+-					const struct cred *cred);
++int		svc_addsock(struct svc_serv *serv, struct net *net,
++			    const int fd, char *name_return, const size_t len,
++			    const struct cred *cred);
+ void		svc_init_xprt_sock(void);
+ void		svc_cleanup_xprt_sock(void);
+ struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot);
+diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
+index 6d9d1520612b8..c1c50eaae4726 100644
+--- a/include/linux/sunrpc/xdr.h
++++ b/include/linux/sunrpc/xdr.h
+@@ -19,6 +19,13 @@
+ struct bio_vec;
+ struct rpc_rqst;
+ 
++/*
++ * Size of an XDR encoding unit in bytes, i.e. 32 bits,
++ * as defined in Section 3 of RFC 4506. All encoded
++ * XDR data items are aligned on a boundary of 32 bits.
++ */
++#define XDR_UNIT		sizeof(__be32)
++
+ /*
+  * Buffer adjustment
+  */
+@@ -232,10 +239,12 @@ typedef int	(*kxdrdproc_t)(struct rpc_rqst *rqstp, struct xdr_stream *xdr,
+ 
+ extern void xdr_init_encode(struct xdr_stream *xdr, struct xdr_buf *buf,
+ 			    __be32 *p, struct rpc_rqst *rqst);
++extern void xdr_init_encode_pages(struct xdr_stream *xdr, struct xdr_buf *buf,
++			   struct page **pages, struct rpc_rqst *rqst);
+ extern __be32 *xdr_reserve_space(struct xdr_stream *xdr, size_t nbytes);
+ extern int xdr_reserve_space_vec(struct xdr_stream *xdr, struct kvec *vec,
+ 		size_t nbytes);
+-extern void xdr_commit_encode(struct xdr_stream *xdr);
++extern void __xdr_commit_encode(struct xdr_stream *xdr);
+ extern void xdr_truncate_encode(struct xdr_stream *xdr, size_t len);
+ extern int xdr_restrict_buflen(struct xdr_stream *xdr, int newbuflen);
+ extern void xdr_write_pages(struct xdr_stream *xdr, struct page **pages,
+@@ -246,13 +255,71 @@ extern void xdr_init_decode(struct xdr_stream *xdr, struct xdr_buf *buf,
+ 			    __be32 *p, struct rpc_rqst *rqst);
+ extern void xdr_init_decode_pages(struct xdr_stream *xdr, struct xdr_buf *buf,
+ 		struct page **pages, unsigned int len);
+-extern void xdr_set_scratch_buffer(struct xdr_stream *xdr, void *buf, size_t buflen);
+ extern __be32 *xdr_inline_decode(struct xdr_stream *xdr, size_t nbytes);
+ extern unsigned int xdr_read_pages(struct xdr_stream *xdr, unsigned int len);
+ extern void xdr_enter_page(struct xdr_stream *xdr, unsigned int len);
+ extern int xdr_process_buf(struct xdr_buf *buf, unsigned int offset, unsigned int len, int (*actor)(struct scatterlist *, void *), void *data);
+ extern uint64_t xdr_align_data(struct xdr_stream *, uint64_t, uint32_t);
+ extern uint64_t xdr_expand_hole(struct xdr_stream *, uint64_t, uint64_t);
++extern bool xdr_stream_subsegment(struct xdr_stream *xdr, struct xdr_buf *subbuf,
++				  unsigned int len);
++
++/**
++ * xdr_set_scratch_buffer - Attach a scratch buffer for decoding data.
++ * @xdr: pointer to xdr_stream struct
++ * @buf: pointer to an empty buffer
++ * @buflen: size of 'buf'
++ *
++ * The scratch buffer is used when decoding from an array of pages.
++ * If an xdr_inline_decode() call spans across page boundaries, then
++ * we copy the data into the scratch buffer in order to allow linear
++ * access.
++ */
++static inline void
++xdr_set_scratch_buffer(struct xdr_stream *xdr, void *buf, size_t buflen)
++{
++	xdr->scratch.iov_base = buf;
++	xdr->scratch.iov_len = buflen;
++}
++
++/**
++ * xdr_set_scratch_page - Attach a scratch buffer for decoding data
++ * @xdr: pointer to xdr_stream struct
++ * @page: an anonymous page
++ *
++ * See xdr_set_scratch_buffer().
++ */
++static inline void
++xdr_set_scratch_page(struct xdr_stream *xdr, struct page *page)
++{
++	xdr_set_scratch_buffer(xdr, page_address(page), PAGE_SIZE);
++}
++
++/**
++ * xdr_reset_scratch_buffer - Clear scratch buffer information
++ * @xdr: pointer to xdr_stream struct
++ *
++ * See xdr_set_scratch_buffer().
++ */
++static inline void
++xdr_reset_scratch_buffer(struct xdr_stream *xdr)
++{
++	xdr_set_scratch_buffer(xdr, NULL, 0);
++}
++
++/**
++ * xdr_commit_encode - Ensure all data is written to xdr->buf
++ * @xdr: pointer to xdr_stream
++ *
++ * Handle encoding across page boundaries by giving the caller a
++ * temporary location to write to, then later copying the data into
++ * place. __xdr_commit_encode() does that copying.
++ */
++static inline void xdr_commit_encode(struct xdr_stream *xdr)
++{
++	if (unlikely(xdr->scratch.iov_len))
++		__xdr_commit_encode(xdr);
++}
+ 
+ /**
+  * xdr_stream_remaining - Return the number of bytes remaining in the stream
+@@ -285,7 +352,7 @@ ssize_t xdr_stream_decode_string_dup(struct xdr_stream *xdr, char **str,
+ static inline size_t
+ xdr_align_size(size_t n)
+ {
+-	const size_t mask = sizeof(__u32) - 1;
++	const size_t mask = XDR_UNIT - 1;
+ 
+ 	return (n + mask) & ~mask;
+ }
+@@ -315,7 +382,7 @@ static inline size_t xdr_pad_size(size_t n)
+  */
+ static inline ssize_t xdr_stream_encode_item_present(struct xdr_stream *xdr)
+ {
+-	const size_t len = sizeof(__be32);
++	const size_t len = XDR_UNIT;
+ 	__be32 *p = xdr_reserve_space(xdr, len);
+ 
+ 	if (unlikely(!p))
+@@ -334,7 +401,7 @@ static inline ssize_t xdr_stream_encode_item_present(struct xdr_stream *xdr)
+  */
+ static inline int xdr_stream_encode_item_absent(struct xdr_stream *xdr)
+ {
+-	const size_t len = sizeof(__be32);
++	const size_t len = XDR_UNIT;
+ 	__be32 *p = xdr_reserve_space(xdr, len);
+ 
+ 	if (unlikely(!p))
+@@ -343,6 +410,40 @@ static inline int xdr_stream_encode_item_absent(struct xdr_stream *xdr)
+ 	return len;
+ }
+ 
++/**
++ * xdr_encode_bool - Encode a boolean item
++ * @p: address in a buffer into which to encode
++ * @n: boolean value to encode
++ *
++ * Return value:
++ *   Address of item following the encoded boolean
++ */
++static inline __be32 *xdr_encode_bool(__be32 *p, u32 n)
++{
++	*p++ = n ? xdr_one : xdr_zero;
++	return p;
++}
++
++/**
++ * xdr_stream_encode_bool - Encode a boolean item
++ * @xdr: pointer to xdr_stream
++ * @n: boolean value to encode
++ *
++ * Return values:
++ *   On success, returns length in bytes of XDR buffer consumed
++ *   %-EMSGSIZE on XDR buffer overflow
++ */
++static inline int xdr_stream_encode_bool(struct xdr_stream *xdr, __u32 n)
++{
++	const size_t len = XDR_UNIT;
++	__be32 *p = xdr_reserve_space(xdr, len);
++
++	if (unlikely(!p))
++		return -EMSGSIZE;
++	xdr_encode_bool(p, n);
++	return len;
++}
++
+ /**
+  * xdr_stream_encode_u32 - Encode a 32-bit integer
+  * @xdr: pointer to xdr_stream
+@@ -504,6 +605,27 @@ static inline bool xdr_item_is_present(const __be32 *p)
+ 	return *p != xdr_zero;
+ }
+ 
++/**
++ * xdr_stream_decode_bool - Decode a boolean
++ * @xdr: pointer to xdr_stream
++ * @ptr: pointer to a u32 in which to store the result
++ *
++ * Return values:
++ *   %0 on success
++ *   %-EBADMSG on XDR buffer overflow
++ */
++static inline ssize_t
++xdr_stream_decode_bool(struct xdr_stream *xdr, __u32 *ptr)
++{
++	const size_t count = sizeof(*ptr);
++	__be32 *p = xdr_inline_decode(xdr, count);
++
++	if (unlikely(!p))
++		return -EBADMSG;
++	*ptr = (*p != xdr_zero);
++	return 0;
++}
++
+ /**
+  * xdr_stream_decode_u32 - Decode a 32-bit integer
+  * @xdr: pointer to xdr_stream
+@@ -525,6 +647,27 @@ xdr_stream_decode_u32(struct xdr_stream *xdr, __u32 *ptr)
+ 	return 0;
+ }
+ 
++/**
++ * xdr_stream_decode_u64 - Decode a 64-bit integer
++ * @xdr: pointer to xdr_stream
++ * @ptr: location to store 64-bit integer
++ *
++ * Return values:
++ *   %0 on success
++ *   %-EBADMSG on XDR buffer overflow
++ */
++static inline ssize_t
++xdr_stream_decode_u64(struct xdr_stream *xdr, __u64 *ptr)
++{
++	const size_t count = sizeof(*ptr);
++	__be32 *p = xdr_inline_decode(xdr, count);
++
++	if (unlikely(!p))
++		return -EBADMSG;
++	xdr_decode_hyper(p, ptr);
++	return 0;
++}
++
+ /**
+  * xdr_stream_decode_opaque_fixed - Decode fixed length opaque xdr data
+  * @xdr: pointer to xdr_stream
+diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
+index 17a24e1180dad..1ea422b1a9f1c 100644
+--- a/include/linux/syscalls.h
++++ b/include/linux/syscalls.h
+@@ -1320,18 +1320,6 @@ static inline long ksys_ftruncate(unsigned int fd, loff_t length)
+ 	return do_sys_ftruncate(fd, length, 1);
+ }
+ 
+-extern int __close_fd(struct files_struct *files, unsigned int fd);
+-
+-/*
+- * In contrast to sys_close(), this stub does not check whether the syscall
+- * should or should not be restarted, but returns the raw error codes from
+- * __close_fd().
+- */
+-static inline int ksys_close(unsigned int fd)
+-{
+-	return __close_fd(current->files, fd);
+-}
+-
+ extern long do_sys_truncate(const char __user *pathname, loff_t length);
+ 
+ static inline long ksys_truncate(const char __user *pathname, loff_t length)
+diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
+index c202a72e16906..47cf70c8eb93c 100644
+--- a/include/linux/sysctl.h
++++ b/include/linux/sysctl.h
+@@ -55,6 +55,8 @@ typedef int proc_handler(struct ctl_table *ctl, int write, void *buffer,
+ 		size_t *lenp, loff_t *ppos);
+ 
+ int proc_dostring(struct ctl_table *, int, void *, size_t *, loff_t *);
++int proc_dobool(struct ctl_table *table, int write, void *buffer,
++		size_t *lenp, loff_t *ppos);
+ int proc_dointvec(struct ctl_table *, int, void *, size_t *, loff_t *);
+ int proc_douintvec(struct ctl_table *, int, void *, size_t *, loff_t *);
+ int proc_dointvec_minmax(struct ctl_table *, int, void *, size_t *, loff_t *);
+diff --git a/include/linux/user_namespace.h b/include/linux/user_namespace.h
+index 7616c7bf4b241..0793951867aef 100644
+--- a/include/linux/user_namespace.h
++++ b/include/linux/user_namespace.h
+@@ -49,6 +49,10 @@ enum ucount_type {
+ #ifdef CONFIG_INOTIFY_USER
+ 	UCOUNT_INOTIFY_INSTANCES,
+ 	UCOUNT_INOTIFY_WATCHES,
++#endif
++#ifdef CONFIG_FANOTIFY
++	UCOUNT_FANOTIFY_GROUPS,
++	UCOUNT_FANOTIFY_MARKS,
+ #endif
+ 	UCOUNT_COUNTS,
+ };
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 8220369ee6105..56e4a57d25382 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -394,6 +394,7 @@ DEFINE_RPC_RUNNING_EVENT(complete);
+ DEFINE_RPC_RUNNING_EVENT(timeout);
+ DEFINE_RPC_RUNNING_EVENT(signalled);
+ DEFINE_RPC_RUNNING_EVENT(end);
++DEFINE_RPC_RUNNING_EVENT(call_done);
+ 
+ DECLARE_EVENT_CLASS(rpc_task_queued,
+ 
+@@ -1480,8 +1481,7 @@ DEFINE_SVCXDRBUF_EVENT(sendto);
+ 	svc_rqst_flag(SPLICE_OK)					\
+ 	svc_rqst_flag(VICTIM)						\
+ 	svc_rqst_flag(BUSY)						\
+-	svc_rqst_flag(DATA)						\
+-	svc_rqst_flag_end(AUTHERR)
++	svc_rqst_flag_end(DATA)
+ 
+ #undef svc_rqst_flag
+ #undef svc_rqst_flag_end
+@@ -1547,9 +1547,9 @@ TRACE_DEFINE_ENUM(SVC_COMPLETE);
+ 		{ SVC_COMPLETE,	"SVC_COMPLETE" })
+ 
+ TRACE_EVENT(svc_authenticate,
+-	TP_PROTO(const struct svc_rqst *rqst, int auth_res, __be32 auth_stat),
++	TP_PROTO(const struct svc_rqst *rqst, int auth_res),
+ 
+-	TP_ARGS(rqst, auth_res, auth_stat),
++	TP_ARGS(rqst, auth_res),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(u32, xid)
+@@ -1560,7 +1560,7 @@ TRACE_EVENT(svc_authenticate,
+ 	TP_fast_assign(
+ 		__entry->xid = be32_to_cpu(rqst->rq_xid);
+ 		__entry->svc_status = auth_res;
+-		__entry->auth_stat = be32_to_cpu(auth_stat);
++		__entry->auth_stat = be32_to_cpu(rqst->rq_auth_stat);
+ 	),
+ 
+ 	TP_printk("xid=0x%08x auth_res=%s auth_stat=%s",
+@@ -1578,6 +1578,7 @@ TRACE_EVENT(svc_process,
+ 		__field(u32, vers)
+ 		__field(u32, proc)
+ 		__string(service, name)
++		__string(procedure, rqst->rq_procinfo->pc_name)
+ 		__string(addr, rqst->rq_xprt ?
+ 			 rqst->rq_xprt->xpt_remotebuf : "(null)")
+ 	),
+@@ -1587,13 +1588,16 @@ TRACE_EVENT(svc_process,
+ 		__entry->vers = rqst->rq_vers;
+ 		__entry->proc = rqst->rq_proc;
+ 		__assign_str(service, name);
++		__assign_str(procedure, rqst->rq_procinfo->pc_name);
+ 		__assign_str(addr, rqst->rq_xprt ?
+ 			     rqst->rq_xprt->xpt_remotebuf : "(null)");
+ 	),
+ 
+-	TP_printk("addr=%s xid=0x%08x service=%s vers=%u proc=%u",
++	TP_printk("addr=%s xid=0x%08x service=%s vers=%u proc=%s",
+ 			__get_str(addr), __entry->xid,
+-			__get_str(service), __entry->vers, __entry->proc)
++			__get_str(service), __entry->vers,
++			__get_str(procedure)
++	)
+ );
+ 
+ DECLARE_EVENT_CLASS(svc_rqst_event,
+@@ -1752,6 +1756,7 @@ DECLARE_EVENT_CLASS(svc_xprt_event,
+ 			), \
+ 			TP_ARGS(xprt))
+ 
++DEFINE_SVC_XPRT_EVENT(received);
+ DEFINE_SVC_XPRT_EVENT(no_write_space);
+ DEFINE_SVC_XPRT_EVENT(close);
+ DEFINE_SVC_XPRT_EVENT(detach);
+@@ -1849,6 +1854,7 @@ TRACE_EVENT(svc_stats_latency,
+ 	TP_STRUCT__entry(
+ 		__field(u32, xid)
+ 		__field(unsigned long, execute)
++		__string(procedure, rqst->rq_procinfo->pc_name)
+ 		__string(addr, rqst->rq_xprt->xpt_remotebuf)
+ 	),
+ 
+@@ -1856,11 +1862,13 @@ TRACE_EVENT(svc_stats_latency,
+ 		__entry->xid = be32_to_cpu(rqst->rq_xid);
+ 		__entry->execute = ktime_to_us(ktime_sub(ktime_get(),
+ 							 rqst->rq_stime));
++		__assign_str(procedure, rqst->rq_procinfo->pc_name);
+ 		__assign_str(addr, rqst->rq_xprt->xpt_remotebuf);
+ 	),
+ 
+-	TP_printk("addr=%s xid=0x%08x execute-us=%lu",
+-		__get_str(addr), __entry->xid, __entry->execute)
++	TP_printk("addr=%s xid=0x%08x proc=%s execute-us=%lu",
++		__get_str(addr), __entry->xid, __get_str(procedure),
++		__entry->execute)
+ );
+ 
+ DECLARE_EVENT_CLASS(svc_deferred_event,
+diff --git a/include/uapi/linux/fanotify.h b/include/uapi/linux/fanotify.h
+index fbf9c5c7dd59a..d8536d77fea1c 100644
+--- a/include/uapi/linux/fanotify.h
++++ b/include/uapi/linux/fanotify.h
+@@ -20,6 +20,7 @@
+ #define FAN_OPEN_EXEC		0x00001000	/* File was opened for exec */
+ 
+ #define FAN_Q_OVERFLOW		0x00004000	/* Event queued overflowed */
++#define FAN_FS_ERROR		0x00008000	/* Filesystem error */
+ 
+ #define FAN_OPEN_PERM		0x00010000	/* File open in perm check */
+ #define FAN_ACCESS_PERM		0x00020000	/* File accessed in perm check */
+@@ -27,6 +28,8 @@
+ 
+ #define FAN_EVENT_ON_CHILD	0x08000000	/* Interested in child events */
+ 
++#define FAN_RENAME		0x10000000	/* File was renamed */
++
+ #define FAN_ONDIR		0x40000000	/* Event occurred against dir */
+ 
+ /* helper events */
+@@ -51,13 +54,18 @@
+ #define FAN_ENABLE_AUDIT	0x00000040
+ 
+ /* Flags to determine fanotify event format */
++#define FAN_REPORT_PIDFD	0x00000080	/* Report pidfd for event->pid */
+ #define FAN_REPORT_TID		0x00000100	/* event->pid is thread id */
+ #define FAN_REPORT_FID		0x00000200	/* Report unique file id */
+ #define FAN_REPORT_DIR_FID	0x00000400	/* Report unique directory id */
+ #define FAN_REPORT_NAME		0x00000800	/* Report events with name */
++#define FAN_REPORT_TARGET_FID	0x00001000	/* Report dirent target id  */
+ 
+ /* Convenience macro - FAN_REPORT_NAME requires FAN_REPORT_DIR_FID */
+ #define FAN_REPORT_DFID_NAME	(FAN_REPORT_DIR_FID | FAN_REPORT_NAME)
++/* Convenience macro - FAN_REPORT_TARGET_FID requires all other FID flags */
++#define FAN_REPORT_DFID_NAME_TARGET (FAN_REPORT_DFID_NAME | \
++				     FAN_REPORT_FID | FAN_REPORT_TARGET_FID)
+ 
+ /* Deprecated - do not use this in programs and do not add new flags here! */
+ #define FAN_ALL_INIT_FLAGS	(FAN_CLOEXEC | FAN_NONBLOCK | \
+@@ -74,12 +82,21 @@
+ #define FAN_MARK_IGNORED_SURV_MODIFY	0x00000040
+ #define FAN_MARK_FLUSH		0x00000080
+ /* FAN_MARK_FILESYSTEM is	0x00000100 */
++#define FAN_MARK_EVICTABLE	0x00000200
++/* This bit is mutually exclusive with FAN_MARK_IGNORED_MASK bit */
++#define FAN_MARK_IGNORE		0x00000400
+ 
+ /* These are NOT bitwise flags.  Both bits can be used togther.  */
+ #define FAN_MARK_INODE		0x00000000
+ #define FAN_MARK_MOUNT		0x00000010
+ #define FAN_MARK_FILESYSTEM	0x00000100
+ 
++/*
++ * Convenience macro - FAN_MARK_IGNORE requires FAN_MARK_IGNORED_SURV_MODIFY
++ * for non-inode mark types.
++ */
++#define FAN_MARK_IGNORE_SURV	(FAN_MARK_IGNORE | FAN_MARK_IGNORED_SURV_MODIFY)
++
+ /* Deprecated - do not use this in programs and do not add new flags here! */
+ #define FAN_ALL_MARK_FLAGS	(FAN_MARK_ADD |\
+ 				 FAN_MARK_REMOVE |\
+@@ -123,6 +140,14 @@ struct fanotify_event_metadata {
+ #define FAN_EVENT_INFO_TYPE_FID		1
+ #define FAN_EVENT_INFO_TYPE_DFID_NAME	2
+ #define FAN_EVENT_INFO_TYPE_DFID	3
++#define FAN_EVENT_INFO_TYPE_PIDFD	4
++#define FAN_EVENT_INFO_TYPE_ERROR	5
++
++/* Special info types for FAN_RENAME */
++#define FAN_EVENT_INFO_TYPE_OLD_DFID_NAME	10
++/* Reserved for FAN_EVENT_INFO_TYPE_OLD_DFID	11 */
++#define FAN_EVENT_INFO_TYPE_NEW_DFID_NAME	12
++/* Reserved for FAN_EVENT_INFO_TYPE_NEW_DFID	13 */
+ 
+ /* Variable length info record following event metadata */
+ struct fanotify_event_info_header {
+@@ -148,6 +173,21 @@ struct fanotify_event_info_fid {
+ 	unsigned char handle[0];
+ };
+ 
++/*
++ * This structure is used for info records of type FAN_EVENT_INFO_TYPE_PIDFD.
++ * It holds a pidfd for the pid that was responsible for generating an event.
++ */
++struct fanotify_event_info_pidfd {
++	struct fanotify_event_info_header hdr;
++	__s32 pidfd;
++};
++
++struct fanotify_event_info_error {
++	struct fanotify_event_info_header hdr;
++	__s32 error;
++	__u32 error_count;
++};
++
+ struct fanotify_response {
+ 	__s32 fd;
+ 	__u32 response;
+@@ -160,6 +200,8 @@ struct fanotify_response {
+ 
+ /* No fd set in event */
+ #define FAN_NOFD	-1
++#define FAN_NOPIDFD	FAN_NOFD
++#define FAN_EPIDFD	-2
+ 
+ /* Helper functions to deal with fanotify_event_metadata buffers */
+ #define FAN_EVENT_METADATA_LEN (sizeof(struct fanotify_event_metadata))
+diff --git a/include/uapi/linux/nfs3.h b/include/uapi/linux/nfs3.h
+index 37e4b34e6b435..c22ab77713bd0 100644
+--- a/include/uapi/linux/nfs3.h
++++ b/include/uapi/linux/nfs3.h
+@@ -63,6 +63,12 @@ enum nfs3_ftype {
+ 	NF3BAD  = 8
+ };
+ 
++enum nfs3_time_how {
++	DONT_CHANGE		= 0,
++	SET_TO_SERVER_TIME	= 1,
++	SET_TO_CLIENT_TIME	= 2,
++};
++
+ struct nfs3_fh {
+ 	unsigned short size;
+ 	unsigned char  data[NFS3_FHSIZE];
+diff --git a/include/uapi/linux/nfsd/nfsfh.h b/include/uapi/linux/nfsd/nfsfh.h
+deleted file mode 100644
+index ff0ca88b1c8f6..0000000000000
+--- a/include/uapi/linux/nfsd/nfsfh.h
++++ /dev/null
+@@ -1,105 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+-/*
+- * This file describes the layout of the file handles as passed
+- * over the wire.
+- *
+- * Copyright (C) 1995, 1996, 1997 Olaf Kirch <okir@monad.swb.de>
+- */
+-
+-#ifndef _UAPI_LINUX_NFSD_FH_H
+-#define _UAPI_LINUX_NFSD_FH_H
+-
+-#include <linux/types.h>
+-#include <linux/nfs.h>
+-#include <linux/nfs2.h>
+-#include <linux/nfs3.h>
+-#include <linux/nfs4.h>
+-
+-/*
+- * This is the old "dentry style" Linux NFSv2 file handle.
+- *
+- * The xino and xdev fields are currently used to transport the
+- * ino/dev of the exported inode.
+- */
+-struct nfs_fhbase_old {
+-	__u32		fb_dcookie;	/* dentry cookie - always 0xfeebbaca */
+-	__u32		fb_ino;		/* our inode number */
+-	__u32		fb_dirino;	/* dir inode number, 0 for directories */
+-	__u32		fb_dev;		/* our device */
+-	__u32		fb_xdev;
+-	__u32		fb_xino;
+-	__u32		fb_generation;
+-};
+-
+-/*
+- * This is the new flexible, extensible style NFSv2/v3/v4 file handle.
+- * by Neil Brown <neilb@cse.unsw.edu.au> - March 2000
+- *
+- * The file handle starts with a sequence of four-byte words.
+- * The first word contains a version number (1) and three descriptor bytes
+- * that tell how the remaining 3 variable length fields should be handled.
+- * These three bytes are auth_type, fsid_type and fileid_type.
+- *
+- * All four-byte values are in host-byte-order.
+- *
+- * The auth_type field is deprecated and must be set to 0.
+- *
+- * The fsid_type identifies how the filesystem (or export point) is
+- *    encoded.
+- *  Current values:
+- *     0  - 4 byte device id (ms-2-bytes major, ls-2-bytes minor), 4byte inode number
+- *        NOTE: we cannot use the kdev_t device id value, because kdev_t.h
+- *              says we mustn't.  We must break it up and reassemble.
+- *     1  - 4 byte user specified identifier
+- *     2  - 4 byte major, 4 byte minor, 4 byte inode number - DEPRECATED
+- *     3  - 4 byte device id, encoded for user-space, 4 byte inode number
+- *     4  - 4 byte inode number and 4 byte uuid
+- *     5  - 8 byte uuid
+- *     6  - 16 byte uuid
+- *     7  - 8 byte inode number and 16 byte uuid
+- *
+- * The fileid_type identified how the file within the filesystem is encoded.
+- *   The values for this field are filesystem specific, exccept that
+- *   filesystems must not use the values '0' or '0xff'. 'See enum fid_type'
+- *   in include/linux/exportfs.h for currently registered values.
+- */
+-struct nfs_fhbase_new {
+-	__u8		fb_version;	/* == 1, even => nfs_fhbase_old */
+-	__u8		fb_auth_type;
+-	__u8		fb_fsid_type;
+-	__u8		fb_fileid_type;
+-	__u32		fb_auth[1];
+-/*	__u32		fb_fsid[0]; floating */
+-/*	__u32		fb_fileid[0]; floating */
+-};
+-
+-struct knfsd_fh {
+-	unsigned int	fh_size;	/* significant for NFSv3.
+-					 * Points to the current size while building
+-					 * a new file handle
+-					 */
+-	union {
+-		struct nfs_fhbase_old	fh_old;
+-		__u32			fh_pad[NFS4_FHSIZE/4];
+-		struct nfs_fhbase_new	fh_new;
+-	} fh_base;
+-};
+-
+-#define ofh_dcookie		fh_base.fh_old.fb_dcookie
+-#define ofh_ino			fh_base.fh_old.fb_ino
+-#define ofh_dirino		fh_base.fh_old.fb_dirino
+-#define ofh_dev			fh_base.fh_old.fb_dev
+-#define ofh_xdev		fh_base.fh_old.fb_xdev
+-#define ofh_xino		fh_base.fh_old.fb_xino
+-#define ofh_generation		fh_base.fh_old.fb_generation
+-
+-#define	fh_version		fh_base.fh_new.fb_version
+-#define	fh_fsid_type		fh_base.fh_new.fb_fsid_type
+-#define	fh_auth_type		fh_base.fh_new.fb_auth_type
+-#define	fh_fileid_type		fh_base.fh_new.fb_fileid_type
+-#define	fh_fsid			fh_base.fh_new.fb_auth
+-
+-/* Do not use, provided for userspace compatiblity. */
+-#define	fh_auth			fh_base.fh_new.fb_auth
+-
+-#endif /* _UAPI_LINUX_NFSD_FH_H */
+diff --git a/kernel/audit_fsnotify.c b/kernel/audit_fsnotify.c
+index b2ebacd2f3097..691f90dd09d25 100644
+--- a/kernel/audit_fsnotify.c
++++ b/kernel/audit_fsnotify.c
+@@ -100,7 +100,7 @@ struct audit_fsnotify_mark *audit_alloc_mark(struct audit_krule *krule, char *pa
+ 	audit_update_mark(audit_mark, dentry->d_inode);
+ 	audit_mark->rule = krule;
+ 
+-	ret = fsnotify_add_inode_mark(&audit_mark->mark, inode, true);
++	ret = fsnotify_add_inode_mark(&audit_mark->mark, inode, 0);
+ 	if (ret < 0) {
+ 		audit_mark->path = NULL;
+ 		fsnotify_put_mark(&audit_mark->mark);
+@@ -161,8 +161,7 @@ static int audit_mark_handle_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 
+ 	audit_mark = container_of(inode_mark, struct audit_fsnotify_mark, mark);
+ 
+-	if (WARN_ON_ONCE(inode_mark->group != audit_fsnotify_group) ||
+-	    WARN_ON_ONCE(!inode))
++	if (WARN_ON_ONCE(inode_mark->group != audit_fsnotify_group))
+ 		return 0;
+ 
+ 	if (mask & (FS_CREATE|FS_MOVED_TO|FS_DELETE|FS_MOVED_FROM)) {
+@@ -183,7 +182,8 @@ static const struct fsnotify_ops audit_mark_fsnotify_ops = {
+ 
+ static int __init audit_fsnotify_init(void)
+ {
+-	audit_fsnotify_group = fsnotify_alloc_group(&audit_mark_fsnotify_ops);
++	audit_fsnotify_group = fsnotify_alloc_group(&audit_mark_fsnotify_ops,
++						    FSNOTIFY_GROUP_DUPS);
+ 	if (IS_ERR(audit_fsnotify_group)) {
+ 		audit_fsnotify_group = NULL;
+ 		audit_panic("cannot create audit fsnotify group");
+diff --git a/kernel/audit_tree.c b/kernel/audit_tree.c
+index 39241207ec044..0c35879bbf7c3 100644
+--- a/kernel/audit_tree.c
++++ b/kernel/audit_tree.c
+@@ -1077,7 +1077,7 @@ static int __init audit_tree_init(void)
+ 
+ 	audit_tree_mark_cachep = KMEM_CACHE(audit_tree_mark, SLAB_PANIC);
+ 
+-	audit_tree_group = fsnotify_alloc_group(&audit_tree_ops);
++	audit_tree_group = fsnotify_alloc_group(&audit_tree_ops, 0);
+ 	if (IS_ERR(audit_tree_group))
+ 		audit_panic("cannot initialize fsnotify group for rectree watches");
+ 
+diff --git a/kernel/audit_watch.c b/kernel/audit_watch.c
+index edbeffee64b8e..5cf22fe301493 100644
+--- a/kernel/audit_watch.c
++++ b/kernel/audit_watch.c
+@@ -472,8 +472,7 @@ static int audit_watch_handle_event(struct fsnotify_mark *inode_mark, u32 mask,
+ 
+ 	parent = container_of(inode_mark, struct audit_parent, mark);
+ 
+-	if (WARN_ON_ONCE(inode_mark->group != audit_watch_group) ||
+-	    WARN_ON_ONCE(!inode))
++	if (WARN_ON_ONCE(inode_mark->group != audit_watch_group))
+ 		return 0;
+ 
+ 	if (mask & (FS_CREATE|FS_MOVED_TO) && inode)
+@@ -493,7 +492,7 @@ static const struct fsnotify_ops audit_watch_fsnotify_ops = {
+ 
+ static int __init audit_watch_init(void)
+ {
+-	audit_watch_group = fsnotify_alloc_group(&audit_watch_fsnotify_ops);
++	audit_watch_group = fsnotify_alloc_group(&audit_watch_fsnotify_ops, 0);
+ 	if (IS_ERR(audit_watch_group)) {
+ 		audit_watch_group = NULL;
+ 		audit_panic("cannot create audit fsnotify group");
+diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
+index 6b14b4c4068cc..5966013bc788b 100644
+--- a/kernel/bpf/inode.c
++++ b/kernel/bpf/inode.c
+@@ -507,7 +507,7 @@ static void *bpf_obj_do_get(const char __user *pathname,
+ 		return ERR_PTR(ret);
+ 
+ 	inode = d_backing_inode(path.dentry);
+-	ret = inode_permission(inode, ACC_MODE(flags));
++	ret = path_permission(&path, ACC_MODE(flags));
+ 	if (ret)
+ 		goto out;
+ 
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index e1bee8cd34044..fbe7f8e2b022c 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -3929,7 +3929,6 @@ static int bpf_task_fd_query(const union bpf_attr *attr,
+ 	pid_t pid = attr->task_fd_query.pid;
+ 	u32 fd = attr->task_fd_query.fd;
+ 	const struct perf_event *event;
+-	struct files_struct *files;
+ 	struct task_struct *task;
+ 	struct file *file;
+ 	int err;
+@@ -3949,23 +3948,11 @@ static int bpf_task_fd_query(const union bpf_attr *attr,
+ 	if (!task)
+ 		return -ENOENT;
+ 
+-	files = get_files_struct(task);
+-	put_task_struct(task);
+-	if (!files)
+-		return -ENOENT;
+-
+ 	err = 0;
+-	spin_lock(&files->file_lock);
+-	file = fcheck_files(files, fd);
++	file = fget_task(task, fd);
++	put_task_struct(task);
+ 	if (!file)
+-		err = -EBADF;
+-	else
+-		get_file(file);
+-	spin_unlock(&files->file_lock);
+-	put_files_struct(files);
+-
+-	if (err)
+-		goto out;
++		return -EBADF;
+ 
+ 	if (file->f_op == &bpf_link_fops) {
+ 		struct bpf_link *link = file->private_data;
+@@ -4005,7 +3992,6 @@ static int bpf_task_fd_query(const union bpf_attr *attr,
+ 	err = -ENOTSUPP;
+ put_file:
+ 	fput(file);
+-out:
+ 	return err;
+ }
+ 
+diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
+index f3d3a562a802a..762b4d7c37795 100644
+--- a/kernel/bpf/task_iter.c
++++ b/kernel/bpf/task_iter.c
+@@ -185,7 +185,7 @@ task_file_seq_get_next(struct bpf_iter_seq_task_file_info *info)
+ 	for (; curr_fd < max_fds; curr_fd++) {
+ 		struct file *f;
+ 
+-		f = fcheck_files(curr_files, curr_fd);
++		f = files_lookup_fd_rcu(curr_files, curr_fd);
+ 		if (!f)
+ 			continue;
+ 		if (!get_file_rcu(f))
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 633b0af1d1a73..8b8a5a172b158 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -3077,21 +3077,21 @@ SYSCALL_DEFINE1(unshare, unsigned long, unshare_flags)
+  *	the exec layer of the kernel.
+  */
+ 
+-int unshare_files(struct files_struct **displaced)
++int unshare_files(void)
+ {
+ 	struct task_struct *task = current;
+-	struct files_struct *copy = NULL;
++	struct files_struct *old, *copy = NULL;
+ 	int error;
+ 
+ 	error = unshare_fd(CLONE_FILES, NR_OPEN_MAX, &copy);
+-	if (error || !copy) {
+-		*displaced = NULL;
++	if (error || !copy)
+ 		return error;
+-	}
+-	*displaced = task->files;
++
++	old = task->files;
+ 	task_lock(task);
+ 	task->files = copy;
+ 	task_unlock(task);
++	put_files_struct(old);
+ 	return 0;
+ }
+ 
+diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
+index fe9de067771c3..8043a90aa50ed 100644
+--- a/kernel/kallsyms.c
++++ b/kernel/kallsyms.c
+@@ -177,6 +177,11 @@ unsigned long kallsyms_lookup_name(const char *name)
+ 	return module_kallsyms_lookup_name(name);
+ }
+ 
++#ifdef CONFIG_LIVEPATCH
++/*
++ * Iterate over all symbols in vmlinux.  For symbols from modules use
++ * module_kallsyms_on_each_symbol instead.
++ */
+ int kallsyms_on_each_symbol(int (*fn)(void *, const char *, struct module *,
+ 				      unsigned long),
+ 			    void *data)
+@@ -192,8 +197,9 @@ int kallsyms_on_each_symbol(int (*fn)(void *, const char *, struct module *,
+ 		if (ret != 0)
+ 			return ret;
+ 	}
+-	return module_kallsyms_on_each_symbol(fn, data);
++	return 0;
+ }
++#endif /* CONFIG_LIVEPATCH */
+ 
+ static unsigned long get_symbol_pos(unsigned long addr,
+ 				    unsigned long *symbolsize,
+diff --git a/kernel/kcmp.c b/kernel/kcmp.c
+index c0d2ad9b4705d..5353edfad8e11 100644
+--- a/kernel/kcmp.c
++++ b/kernel/kcmp.c
+@@ -61,16 +61,11 @@ static int kcmp_ptr(void *v1, void *v2, enum kcmp_type type)
+ static struct file *
+ get_file_raw_ptr(struct task_struct *task, unsigned int idx)
+ {
+-	struct file *file = NULL;
++	struct file *file;
+ 
+-	task_lock(task);
+ 	rcu_read_lock();
+-
+-	if (task->files)
+-		file = fcheck_files(task->files, idx);
+-
++	file = task_lookup_fd_rcu(task, idx);
+ 	rcu_read_unlock();
+-	task_unlock(task);
+ 
+ 	return file;
+ }
+@@ -107,7 +102,6 @@ static int kcmp_epoll_target(struct task_struct *task1,
+ {
+ 	struct file *filp, *filp_epoll, *filp_tgt;
+ 	struct kcmp_epoll_slot slot;
+-	struct files_struct *files;
+ 
+ 	if (copy_from_user(&slot, uslot, sizeof(slot)))
+ 		return -EFAULT;
+@@ -116,23 +110,12 @@ static int kcmp_epoll_target(struct task_struct *task1,
+ 	if (!filp)
+ 		return -EBADF;
+ 
+-	files = get_files_struct(task2);
+-	if (!files)
++	filp_epoll = fget_task(task2, slot.efd);
++	if (!filp_epoll)
+ 		return -EBADF;
+ 
+-	spin_lock(&files->file_lock);
+-	filp_epoll = fcheck_files(files, slot.efd);
+-	if (filp_epoll)
+-		get_file(filp_epoll);
+-	else
+-		filp_tgt = ERR_PTR(-EBADF);
+-	spin_unlock(&files->file_lock);
+-	put_files_struct(files);
+-
+-	if (filp_epoll) {
+-		filp_tgt = get_epoll_tfile_raw_ptr(filp_epoll, slot.tfd, slot.toff);
+-		fput(filp_epoll);
+-	}
++	filp_tgt = get_epoll_tfile_raw_ptr(filp_epoll, slot.tfd, slot.toff);
++	fput(filp_epoll);
+ 
+ 	if (IS_ERR(filp_tgt))
+ 		return PTR_ERR(filp_tgt);
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 508fe52782857..9d6cc9c15a55e 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -262,6 +262,21 @@ void kthread_parkme(void)
+ }
+ EXPORT_SYMBOL_GPL(kthread_parkme);
+ 
++/**
++ * kthread_exit - Cause the current kthread return @result to kthread_stop().
++ * @result: The integer value to return to kthread_stop().
++ *
++ * While kthread_exit can be called directly, it exists so that
++ * functions which do some additional work in non-modular code such as
++ * module_put_and_kthread_exit can be implemented.
++ *
++ * Does not return.
++ */
++void __noreturn kthread_exit(long result)
++{
++	do_exit(result);
++}
++
+ static int kthread(void *_create)
+ {
+ 	/* Copy data: it's on kthread's stack */
+@@ -279,13 +294,13 @@ static int kthread(void *_create)
+ 	done = xchg(&create->done, NULL);
+ 	if (!done) {
+ 		kfree(create);
+-		do_exit(-EINTR);
++		kthread_exit(-EINTR);
+ 	}
+ 
+ 	if (!self) {
+ 		create->result = ERR_PTR(-ENOMEM);
+ 		complete(done);
+-		do_exit(-ENOMEM);
++		kthread_exit(-ENOMEM);
+ 	}
+ 
+ 	self->threadfn = threadfn;
+@@ -312,7 +327,7 @@ static int kthread(void *_create)
+ 		__kthread_parkme(self);
+ 		ret = threadfn(data);
+ 	}
+-	do_exit(ret);
++	kthread_exit(ret);
+ }
+ 
+ /* called from do_fork() to get node information for about to be created task */
+@@ -621,7 +636,7 @@ EXPORT_SYMBOL_GPL(kthread_park);
+  * instead of calling wake_up_process(): the thread will exit without
+  * calling threadfn().
+  *
+- * If threadfn() may call do_exit() itself, the caller must ensure
++ * If threadfn() may call kthread_exit() itself, the caller must ensure
+  * task_struct can't go away.
+  *
+  * Returns the result of threadfn(), or %-EINTR if wake_up_process()
+diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
+index f5faf935c2d8f..147ed154ebc77 100644
+--- a/kernel/livepatch/core.c
++++ b/kernel/livepatch/core.c
+@@ -19,6 +19,7 @@
+ #include <linux/moduleloader.h>
+ #include <linux/completion.h>
+ #include <linux/memory.h>
++#include <linux/rcupdate.h>
+ #include <asm/cacheflush.h>
+ #include "core.h"
+ #include "patch.h"
+@@ -57,7 +58,7 @@ static void klp_find_object_module(struct klp_object *obj)
+ 	if (!klp_is_module(obj))
+ 		return;
+ 
+-	mutex_lock(&module_mutex);
++	rcu_read_lock_sched();
+ 	/*
+ 	 * We do not want to block removal of patched modules and therefore
+ 	 * we do not take a reference here. The patches are removed by
+@@ -74,7 +75,7 @@ static void klp_find_object_module(struct klp_object *obj)
+ 	if (mod && mod->klp_alive)
+ 		obj->mod = mod;
+ 
+-	mutex_unlock(&module_mutex);
++	rcu_read_unlock_sched();
+ }
+ 
+ static bool klp_initialized(void)
+@@ -163,12 +164,10 @@ static int klp_find_object_symbol(const char *objname, const char *name,
+ 		.pos = sympos,
+ 	};
+ 
+-	mutex_lock(&module_mutex);
+ 	if (objname)
+ 		module_kallsyms_on_each_symbol(klp_find_callback, &args);
+ 	else
+ 		kallsyms_on_each_symbol(klp_find_callback, &args);
+-	mutex_unlock(&module_mutex);
+ 
+ 	/*
+ 	 * Ensure an address was found. If sympos is 0, ensure symbol is unique;
+diff --git a/kernel/module.c b/kernel/module.c
+index 72a5dcdccf7b1..edc7b99cb16fa 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -88,7 +88,6 @@
+  * 3) module_addr_min/module_addr_max.
+  * (delete and add uses RCU list operations). */
+ DEFINE_MUTEX(module_mutex);
+-EXPORT_SYMBOL_GPL(module_mutex);
+ static LIST_HEAD(modules);
+ 
+ /* Work queue for freeing init sections in success case */
+@@ -256,11 +255,6 @@ static void mod_update_bounds(struct module *mod)
+ struct list_head *kdb_modules = &modules; /* kdb needs the list of modules */
+ #endif /* CONFIG_KGDB_KDB */
+ 
+-static void module_assert_mutex(void)
+-{
+-	lockdep_assert_held(&module_mutex);
+-}
+-
+ static void module_assert_mutex_or_preempt(void)
+ {
+ #ifdef CONFIG_LOCKDEP
+@@ -340,14 +334,14 @@ static inline void add_taint_module(struct module *mod, unsigned flag,
+ 
+ /*
+  * A thread that wants to hold a reference to a module only while it
+- * is running can call this to safely exit.  nfsd and lockd use this.
++ * is running can call this to safely exit.
+  */
+-void __noreturn __module_put_and_exit(struct module *mod, long code)
++void __noreturn __module_put_and_kthread_exit(struct module *mod, long code)
+ {
+ 	module_put(mod);
+-	do_exit(code);
++	kthread_exit(code);
+ }
+-EXPORT_SYMBOL(__module_put_and_exit);
++EXPORT_SYMBOL(__module_put_and_kthread_exit);
+ 
+ /* Find a module section: 0 means not found. */
+ static unsigned int find_sec(const struct load_info *info, const char *name)
+@@ -642,10 +636,8 @@ static struct module *find_module_all(const char *name, size_t len,
+ 
+ struct module *find_module(const char *name)
+ {
+-	module_assert_mutex();
+ 	return find_module_all(name, strlen(name), false);
+ }
+-EXPORT_SYMBOL_GPL(find_module);
+ 
+ #ifdef CONFIG_SMP
+ 
+@@ -4452,6 +4444,7 @@ unsigned long module_kallsyms_lookup_name(const char *name)
+ 	return ret;
+ }
+ 
++#ifdef CONFIG_LIVEPATCH
+ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
+ 					     struct module *, unsigned long),
+ 				   void *data)
+@@ -4460,8 +4453,7 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
+ 	unsigned int i;
+ 	int ret;
+ 
+-	module_assert_mutex();
+-
++	mutex_lock(&module_mutex);
+ 	list_for_each_entry(mod, &modules, list) {
+ 		/* We hold module_mutex: no need for rcu_dereference_sched */
+ 		struct mod_kallsyms *kallsyms = mod->kallsyms;
+@@ -4477,11 +4469,13 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
+ 			ret = fn(data, kallsyms_symbol_name(kallsyms, i),
+ 				 mod, kallsyms_symbol_value(sym));
+ 			if (ret != 0)
+-				return ret;
++				break;
+ 		}
+ 	}
+-	return 0;
++	mutex_unlock(&module_mutex);
++	return ret;
+ }
++#endif /* CONFIG_LIVEPATCH */
+ #endif /* CONFIG_KALLSYMS */
+ 
+ /* Maximum number of characters written by module_flags() */
+diff --git a/kernel/pid.c b/kernel/pid.c
+index 4856818c9de1a..0820f2c50bb0c 100644
+--- a/kernel/pid.c
++++ b/kernel/pid.c
+@@ -550,13 +550,21 @@ struct pid *pidfd_get_pid(unsigned int fd, unsigned int *flags)
+  * Note, that this function can only be called after the fd table has
+  * been unshared to avoid leaking the pidfd to the new process.
+  *
++ * This symbol should not be explicitly exported to loadable modules.
++ *
+  * Return: On success, a cloexec pidfd is returned.
+  *         On error, a negative errno number will be returned.
+  */
+-static int pidfd_create(struct pid *pid, unsigned int flags)
++int pidfd_create(struct pid *pid, unsigned int flags)
+ {
+ 	int fd;
+ 
++	if (!pid || !pid_has_task(pid, PIDTYPE_TGID))
++		return -EINVAL;
++
++	if (flags & ~(O_NONBLOCK | O_RDWR | O_CLOEXEC))
++		return -EINVAL;
++
+ 	fd = anon_inode_getfd("[pidfd]", &pidfd_fops, get_pid(pid),
+ 			      flags | O_RDWR | O_CLOEXEC);
+ 	if (fd < 0)
+@@ -596,10 +604,7 @@ SYSCALL_DEFINE2(pidfd_open, pid_t, pid, unsigned int, flags)
+ 	if (!p)
+ 		return -ESRCH;
+ 
+-	if (pid_has_task(p, PIDTYPE_TGID))
+-		fd = pidfd_create(p, flags);
+-	else
+-		fd = -EINVAL;
++	fd = pidfd_create(p, flags);
+ 
+ 	put_pid(p);
+ 	return fd;
+diff --git a/kernel/sys.c b/kernel/sys.c
+index efc213ae4c5ad..7a2cfb57fa9e7 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -1873,7 +1873,7 @@ static int prctl_set_mm_exe_file(struct mm_struct *mm, unsigned int fd)
+ 	if (!S_ISREG(inode->i_mode) || path_noexec(&exe.file->f_path))
+ 		goto exit;
+ 
+-	err = inode_permission(inode, MAY_EXEC);
++	err = file_permission(exe.file, MAY_EXEC);
+ 	if (err)
+ 		goto exit;
+ 
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 99a19190196e0..abe0f16d53641 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -142,6 +142,9 @@ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
++#ifdef CONFIG_FANOTIFY
++#include <linux/fanotify.h>
++#endif
+ 
+ #ifdef CONFIG_PROC_SYSCTL
+ 
+@@ -543,6 +546,21 @@ static void proc_put_char(void **buf, size_t *size, char c)
+ 	}
+ }
+ 
++static int do_proc_dobool_conv(bool *negp, unsigned long *lvalp,
++				int *valp,
++				int write, void *data)
++{
++	if (write) {
++		*(bool *)valp = *lvalp;
++	} else {
++		int val = *(bool *)valp;
++
++		*lvalp = (unsigned long)val;
++		*negp = false;
++	}
++	return 0;
++}
++
+ static int do_proc_dointvec_conv(bool *negp, unsigned long *lvalp,
+ 				 int *valp,
+ 				 int write, void *data)
+@@ -805,6 +823,26 @@ static int do_proc_douintvec(struct ctl_table *table, int write,
+ 				   buffer, lenp, ppos, conv, data);
+ }
+ 
++/**
++ * proc_dobool - read/write a bool
++ * @table: the sysctl table
++ * @write: %TRUE if this is a write to the sysctl file
++ * @buffer: the user buffer
++ * @lenp: the size of the user buffer
++ * @ppos: file position
++ *
++ * Reads/writes up to table->maxlen/sizeof(unsigned int) integer
++ * values from/to the user buffer, treated as an ASCII string.
++ *
++ * Returns 0 on success.
++ */
++int proc_dobool(struct ctl_table *table, int write, void *buffer,
++		size_t *lenp, loff_t *ppos)
++{
++	return do_proc_dointvec(table, write, buffer, lenp, ppos,
++				do_proc_dobool_conv, NULL);
++}
++
+ /**
+  * proc_dointvec - read a vector of integers
+  * @table: the sysctl table
+@@ -1641,6 +1679,12 @@ int proc_dostring(struct ctl_table *table, int write,
+ 	return -ENOSYS;
+ }
+ 
++int proc_dobool(struct ctl_table *table, int write,
++		void *buffer, size_t *lenp, loff_t *ppos)
++{
++	return -ENOSYS;
++}
++
+ int proc_dointvec(struct ctl_table *table, int write,
+ 		  void *buffer, size_t *lenp, loff_t *ppos)
+ {
+@@ -3330,7 +3374,14 @@ static struct ctl_table fs_table[] = {
+ 		.mode		= 0555,
+ 		.child		= inotify_table,
+ 	},
+-#endif	
++#endif
++#ifdef CONFIG_FANOTIFY
++	{
++		.procname	= "fanotify",
++		.mode		= 0555,
++		.child		= fanotify_table,
++	},
++#endif
+ #ifdef CONFIG_EPOLL
+ 	{
+ 		.procname	= "epoll",
+@@ -3493,6 +3544,7 @@ int __init sysctl_init(void)
+  * No sense putting this after each symbol definition, twice,
+  * exception granted :-)
+  */
++EXPORT_SYMBOL(proc_dobool);
+ EXPORT_SYMBOL(proc_dointvec);
+ EXPORT_SYMBOL(proc_douintvec);
+ EXPORT_SYMBOL(proc_dointvec_jiffies);
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 7183572898998..5453af26ff764 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -124,9 +124,9 @@ static nokprobe_inline bool trace_kprobe_module_exist(struct trace_kprobe *tk)
+ 	if (!p)
+ 		return true;
+ 	*p = '\0';
+-	mutex_lock(&module_mutex);
++	rcu_read_lock_sched();
+ 	ret = !!find_module(tk->symbol);
+-	mutex_unlock(&module_mutex);
++	rcu_read_unlock_sched();
+ 	*p = ':';
+ 
+ 	return ret;
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index 11b1596e2542a..8d8874f1c35e2 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -73,6 +73,10 @@ static struct ctl_table user_table[] = {
+ #ifdef CONFIG_INOTIFY_USER
+ 	UCOUNT_ENTRY("max_inotify_instances"),
+ 	UCOUNT_ENTRY("max_inotify_watches"),
++#endif
++#ifdef CONFIG_FANOTIFY
++	UCOUNT_ENTRY("max_fanotify_groups"),
++	UCOUNT_ENTRY("max_fanotify_marks"),
+ #endif
+ 	{ }
+ };
+diff --git a/mm/madvise.c b/mm/madvise.c
+index f71fc88f0b331..a63aa04ec7fa3 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -543,7 +543,7 @@ static inline bool can_do_pageout(struct vm_area_struct *vma)
+ 	 * opens a side channel.
+ 	 */
+ 	return inode_owner_or_capable(file_inode(vma->vm_file)) ||
+-		inode_permission(file_inode(vma->vm_file), MAY_WRITE) == 0;
++	       file_permission(vma->vm_file, MAY_WRITE) == 0;
+ }
+ 
+ static long madvise_pageout(struct vm_area_struct *vma,
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index ddc8ed096deca..186ae9dba0fd5 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -4918,7 +4918,7 @@ static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
+ 
+ 	/* the process need read permission on control file */
+ 	/* AV: shouldn't we check that it's been opened for read instead? */
+-	ret = inode_permission(file_inode(cfile.file), MAY_READ);
++	ret = file_permission(cfile.file, MAY_READ);
+ 	if (ret < 0)
+ 		goto out_put_cfile;
+ 
+diff --git a/mm/mincore.c b/mm/mincore.c
+index 02db1a834021b..7bdb4673f776a 100644
+--- a/mm/mincore.c
++++ b/mm/mincore.c
+@@ -167,7 +167,7 @@ static inline bool can_do_mincore(struct vm_area_struct *vma)
+ 	 * mappings, which opens a side channel.
+ 	 */
+ 	return inode_owner_or_capable(file_inode(vma->vm_file)) ||
+-		inode_permission(file_inode(vma->vm_file), MAY_WRITE) == 0;
++	       file_permission(vma->vm_file, MAY_WRITE) == 0;
+ }
+ 
+ static const struct mm_walk_ops mincore_walk_ops = {
+diff --git a/net/bluetooth/bnep/core.c b/net/bluetooth/bnep/core.c
+index 43c284158f63e..09b6d825124ee 100644
+--- a/net/bluetooth/bnep/core.c
++++ b/net/bluetooth/bnep/core.c
+@@ -535,7 +535,7 @@ static int bnep_session(void *arg)
+ 
+ 	up_write(&bnep_session_sem);
+ 	free_netdev(dev);
+-	module_put_and_exit(0);
++	module_put_and_kthread_exit(0);
+ 	return 0;
+ }
+ 
+diff --git a/net/bluetooth/cmtp/core.c b/net/bluetooth/cmtp/core.c
+index 83eb84e8e688f..90d130588a3e5 100644
+--- a/net/bluetooth/cmtp/core.c
++++ b/net/bluetooth/cmtp/core.c
+@@ -323,7 +323,7 @@ static int cmtp_session(void *arg)
+ 	up_write(&cmtp_session_sem);
+ 
+ 	kfree(session);
+-	module_put_and_exit(0);
++	module_put_and_kthread_exit(0);
+ 	return 0;
+ }
+ 
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index b946a6379433a..3ff870599eb77 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -1305,7 +1305,7 @@ static int hidp_session_thread(void *arg)
+ 	l2cap_unregister_user(session->conn, &session->user);
+ 	hidp_session_put(session);
+ 
+-	module_put_and_exit(0);
++	module_put_and_kthread_exit(0);
+ 	return 0;
+ }
+ 
+diff --git a/net/sunrpc/auth_gss/gss_rpc_xdr.c b/net/sunrpc/auth_gss/gss_rpc_xdr.c
+index e265b8d38aa14..a857fc99431ce 100644
+--- a/net/sunrpc/auth_gss/gss_rpc_xdr.c
++++ b/net/sunrpc/auth_gss/gss_rpc_xdr.c
+@@ -800,7 +800,7 @@ int gssx_dec_accept_sec_context(struct rpc_rqst *rqstp,
+ 	scratch = alloc_page(GFP_KERNEL);
+ 	if (!scratch)
+ 		return -ENOMEM;
+-	xdr_set_scratch_buffer(xdr, page_address(scratch), PAGE_SIZE);
++	xdr_set_scratch_page(xdr, scratch);
+ 
+ 	/* res->status */
+ 	err = gssx_dec_status(xdr, &res->status);
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index 784c8b24f1640..329eac782cc5e 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -707,11 +707,11 @@ svc_safe_putnetobj(struct kvec *resv, struct xdr_netobj *o)
+ /*
+  * Verify the checksum on the header and return SVC_OK on success.
+  * Otherwise, return SVC_DROP (in the case of a bad sequence number)
+- * or return SVC_DENIED and indicate error in authp.
++ * or return SVC_DENIED and indicate error in rqstp->rq_auth_stat.
+  */
+ static int
+ gss_verify_header(struct svc_rqst *rqstp, struct rsc *rsci,
+-		  __be32 *rpcstart, struct rpc_gss_wire_cred *gc, __be32 *authp)
++		  __be32 *rpcstart, struct rpc_gss_wire_cred *gc)
+ {
+ 	struct gss_ctx		*ctx_id = rsci->mechctx;
+ 	struct xdr_buf		rpchdr;
+@@ -725,7 +725,7 @@ gss_verify_header(struct svc_rqst *rqstp, struct rsc *rsci,
+ 	iov.iov_len = (u8 *)argv->iov_base - (u8 *)rpcstart;
+ 	xdr_buf_from_iov(&iov, &rpchdr);
+ 
+-	*authp = rpc_autherr_badverf;
++	rqstp->rq_auth_stat = rpc_autherr_badverf;
+ 	if (argv->iov_len < 4)
+ 		return SVC_DENIED;
+ 	flavor = svc_getnl(argv);
+@@ -737,13 +737,13 @@ gss_verify_header(struct svc_rqst *rqstp, struct rsc *rsci,
+ 	if (rqstp->rq_deferred) /* skip verification of revisited request */
+ 		return SVC_OK;
+ 	if (gss_verify_mic(ctx_id, &rpchdr, &checksum) != GSS_S_COMPLETE) {
+-		*authp = rpcsec_gsserr_credproblem;
++		rqstp->rq_auth_stat = rpcsec_gsserr_credproblem;
+ 		return SVC_DENIED;
+ 	}
+ 
+ 	if (gc->gc_seq > MAXSEQ) {
+ 		trace_rpcgss_svc_seqno_large(rqstp, gc->gc_seq);
+-		*authp = rpcsec_gsserr_ctxproblem;
++		rqstp->rq_auth_stat = rpcsec_gsserr_ctxproblem;
+ 		return SVC_DENIED;
+ 	}
+ 	if (!gss_check_seq_num(rqstp, rsci, gc->gc_seq))
+@@ -1038,6 +1038,8 @@ svcauth_gss_set_client(struct svc_rqst *rqstp)
+ 	struct rpc_gss_wire_cred *gc = &svcdata->clcred;
+ 	int stat;
+ 
++	rqstp->rq_auth_stat = rpc_autherr_badcred;
++
+ 	/*
+ 	 * A gss export can be specified either by:
+ 	 * 	export	*(sec=krb5,rw)
+@@ -1053,6 +1055,8 @@ svcauth_gss_set_client(struct svc_rqst *rqstp)
+ 	stat = svcauth_unix_set_client(rqstp);
+ 	if (stat == SVC_DROP || stat == SVC_CLOSE)
+ 		return stat;
++
++	rqstp->rq_auth_stat = rpc_auth_ok;
+ 	return SVC_OK;
+ }
+ 
+@@ -1136,7 +1140,7 @@ static void gss_free_in_token_pages(struct gssp_in_token *in_token)
+ }
+ 
+ static int gss_read_proxy_verf(struct svc_rqst *rqstp,
+-			       struct rpc_gss_wire_cred *gc, __be32 *authp,
++			       struct rpc_gss_wire_cred *gc,
+ 			       struct xdr_netobj *in_handle,
+ 			       struct gssp_in_token *in_token)
+ {
+@@ -1145,7 +1149,7 @@ static int gss_read_proxy_verf(struct svc_rqst *rqstp,
+ 	int pages, i, res, pgto, pgfrom;
+ 	size_t inlen, to_offs, from_offs;
+ 
+-	res = gss_read_common_verf(gc, argv, authp, in_handle);
++	res = gss_read_common_verf(gc, argv, &rqstp->rq_auth_stat, in_handle);
+ 	if (res)
+ 		return res;
+ 
+@@ -1226,7 +1230,7 @@ gss_write_resv(struct kvec *resv, size_t size_limit,
+  * Otherwise, drop the request pending an answer to the upcall.
+  */
+ static int svcauth_gss_legacy_init(struct svc_rqst *rqstp,
+-			struct rpc_gss_wire_cred *gc, __be32 *authp)
++				   struct rpc_gss_wire_cred *gc)
+ {
+ 	struct kvec *argv = &rqstp->rq_arg.head[0];
+ 	struct kvec *resv = &rqstp->rq_res.head[0];
+@@ -1235,7 +1239,7 @@ static int svcauth_gss_legacy_init(struct svc_rqst *rqstp,
+ 	struct sunrpc_net *sn = net_generic(SVC_NET(rqstp), sunrpc_net_id);
+ 
+ 	memset(&rsikey, 0, sizeof(rsikey));
+-	ret = gss_read_verf(gc, argv, authp,
++	ret = gss_read_verf(gc, argv, &rqstp->rq_auth_stat,
+ 			    &rsikey.in_handle, &rsikey.in_token);
+ 	if (ret)
+ 		return ret;
+@@ -1338,7 +1342,7 @@ static int gss_proxy_save_rsc(struct cache_detail *cd,
+ }
+ 
+ static int svcauth_gss_proxy_init(struct svc_rqst *rqstp,
+-			struct rpc_gss_wire_cred *gc, __be32 *authp)
++				  struct rpc_gss_wire_cred *gc)
+ {
+ 	struct kvec *resv = &rqstp->rq_res.head[0];
+ 	struct xdr_netobj cli_handle;
+@@ -1350,8 +1354,7 @@ static int svcauth_gss_proxy_init(struct svc_rqst *rqstp,
+ 	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
+ 
+ 	memset(&ud, 0, sizeof(ud));
+-	ret = gss_read_proxy_verf(rqstp, gc, authp,
+-				  &ud.in_handle, &ud.in_token);
++	ret = gss_read_proxy_verf(rqstp, gc, &ud.in_handle, &ud.in_token);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1524,7 +1527,7 @@ static void destroy_use_gss_proxy_proc_entry(struct net *net) {}
+  * response here and return SVC_COMPLETE.
+  */
+ static int
+-svcauth_gss_accept(struct svc_rqst *rqstp, __be32 *authp)
++svcauth_gss_accept(struct svc_rqst *rqstp)
+ {
+ 	struct kvec	*argv = &rqstp->rq_arg.head[0];
+ 	struct kvec	*resv = &rqstp->rq_res.head[0];
+@@ -1537,7 +1540,7 @@ svcauth_gss_accept(struct svc_rqst *rqstp, __be32 *authp)
+ 	int		ret;
+ 	struct sunrpc_net *sn = net_generic(SVC_NET(rqstp), sunrpc_net_id);
+ 
+-	*authp = rpc_autherr_badcred;
++	rqstp->rq_auth_stat = rpc_autherr_badcred;
+ 	if (!svcdata)
+ 		svcdata = kmalloc(sizeof(*svcdata), GFP_KERNEL);
+ 	if (!svcdata)
+@@ -1574,22 +1577,22 @@ svcauth_gss_accept(struct svc_rqst *rqstp, __be32 *authp)
+ 	if ((gc->gc_proc != RPC_GSS_PROC_DATA) && (rqstp->rq_proc != 0))
+ 		goto auth_err;
+ 
+-	*authp = rpc_autherr_badverf;
++	rqstp->rq_auth_stat = rpc_autherr_badverf;
+ 	switch (gc->gc_proc) {
+ 	case RPC_GSS_PROC_INIT:
+ 	case RPC_GSS_PROC_CONTINUE_INIT:
+ 		if (use_gss_proxy(SVC_NET(rqstp)))
+-			return svcauth_gss_proxy_init(rqstp, gc, authp);
++			return svcauth_gss_proxy_init(rqstp, gc);
+ 		else
+-			return svcauth_gss_legacy_init(rqstp, gc, authp);
++			return svcauth_gss_legacy_init(rqstp, gc);
+ 	case RPC_GSS_PROC_DATA:
+ 	case RPC_GSS_PROC_DESTROY:
+ 		/* Look up the context, and check the verifier: */
+-		*authp = rpcsec_gsserr_credproblem;
++		rqstp->rq_auth_stat = rpcsec_gsserr_credproblem;
+ 		rsci = gss_svc_searchbyctx(sn->rsc_cache, &gc->gc_ctx);
+ 		if (!rsci)
+ 			goto auth_err;
+-		switch (gss_verify_header(rqstp, rsci, rpcstart, gc, authp)) {
++		switch (gss_verify_header(rqstp, rsci, rpcstart, gc)) {
+ 		case SVC_OK:
+ 			break;
+ 		case SVC_DENIED:
+@@ -1599,7 +1602,7 @@ svcauth_gss_accept(struct svc_rqst *rqstp, __be32 *authp)
+ 		}
+ 		break;
+ 	default:
+-		*authp = rpc_autherr_rejectedcred;
++		rqstp->rq_auth_stat = rpc_autherr_rejectedcred;
+ 		goto auth_err;
+ 	}
+ 
+@@ -1615,13 +1618,13 @@ svcauth_gss_accept(struct svc_rqst *rqstp, __be32 *authp)
+ 		svc_putnl(resv, RPC_SUCCESS);
+ 		goto complete;
+ 	case RPC_GSS_PROC_DATA:
+-		*authp = rpcsec_gsserr_ctxproblem;
++		rqstp->rq_auth_stat = rpcsec_gsserr_ctxproblem;
+ 		svcdata->verf_start = resv->iov_base + resv->iov_len;
+ 		if (gss_write_verf(rqstp, rsci->mechctx, gc->gc_seq))
+ 			goto auth_err;
+ 		rqstp->rq_cred = rsci->cred;
+ 		get_group_info(rsci->cred.cr_group_info);
+-		*authp = rpc_autherr_badcred;
++		rqstp->rq_auth_stat = rpc_autherr_badcred;
+ 		switch (gc->gc_svc) {
+ 		case RPC_GSS_SVC_NONE:
+ 			break;
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index a00890962e115..a4c9d410eb8d5 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -821,6 +821,7 @@ void rpc_exit_task(struct rpc_task *task)
+ 	else if (task->tk_client)
+ 		rpc_count_iostats(task, task->tk_client->cl_metrics);
+ 	if (task->tk_ops->rpc_call_done != NULL) {
++		trace_rpc_task_call_done(task, task->tk_ops->rpc_call_done);
+ 		task->tk_ops->rpc_call_done(task, task->tk_calldata);
+ 		if (task->tk_action != NULL) {
+ 			/* Always release the RPC slot and buffer memory */
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index cfe8b911ca013..26d972c54a593 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -35,18 +35,37 @@
+ 
+ static void svc_unregister(const struct svc_serv *serv, struct net *net);
+ 
+-#define svc_serv_is_pooled(serv)    ((serv)->sv_ops->svo_function)
+-
+ #define SVC_POOL_DEFAULT	SVC_POOL_GLOBAL
+ 
++/*
++ * Mode for mapping cpus to pools.
++ */
++enum {
++	SVC_POOL_AUTO = -1,	/* choose one of the others */
++	SVC_POOL_GLOBAL,	/* no mapping, just a single global pool
++				 * (legacy & UP mode) */
++	SVC_POOL_PERCPU,	/* one pool per cpu */
++	SVC_POOL_PERNODE	/* one pool per numa node */
++};
++
+ /*
+  * Structure for mapping cpus to pools and vice versa.
+  * Setup once during sunrpc initialisation.
+  */
+-struct svc_pool_map svc_pool_map = {
++
++struct svc_pool_map {
++	int count;			/* How many svc_servs use us */
++	int mode;			/* Note: int not enum to avoid
++					 * warnings about "enumeration value
++					 * not handled in switch" */
++	unsigned int npools;
++	unsigned int *pool_to;		/* maps pool id to cpu or node */
++	unsigned int *to_pool;		/* maps cpu or node to pool id */
++};
++
++static struct svc_pool_map svc_pool_map = {
+ 	.mode = SVC_POOL_DEFAULT
+ };
+-EXPORT_SYMBOL_GPL(svc_pool_map);
+ 
+ static DEFINE_MUTEX(svc_pool_map_mutex);/* protects svc_pool_map.count only */
+ 
+@@ -217,10 +236,12 @@ svc_pool_map_init_pernode(struct svc_pool_map *m)
+ 
+ /*
+  * Add a reference to the global map of cpus to pools (and
+- * vice versa).  Initialise the map if we're the first user.
+- * Returns the number of pools.
++ * vice versa) if pools are in use.
++ * Initialise the map if we're the first user.
++ * Returns the number of pools. If this is '1', no reference
++ * was taken.
+  */
+-unsigned int
++static unsigned int
+ svc_pool_map_get(void)
+ {
+ 	struct svc_pool_map *m = &svc_pool_map;
+@@ -230,6 +251,7 @@ svc_pool_map_get(void)
+ 
+ 	if (m->count++) {
+ 		mutex_unlock(&svc_pool_map_mutex);
++		WARN_ON_ONCE(m->npools <= 1);
+ 		return m->npools;
+ 	}
+ 
+@@ -245,30 +267,36 @@ svc_pool_map_get(void)
+ 		break;
+ 	}
+ 
+-	if (npools < 0) {
++	if (npools <= 0) {
+ 		/* default, or memory allocation failure */
+ 		npools = 1;
+ 		m->mode = SVC_POOL_GLOBAL;
+ 	}
+ 	m->npools = npools;
+ 
++	if (npools == 1)
++		/* service is unpooled, so doesn't hold a reference */
++		m->count--;
++
+ 	mutex_unlock(&svc_pool_map_mutex);
+-	return m->npools;
++	return npools;
+ }
+-EXPORT_SYMBOL_GPL(svc_pool_map_get);
+ 
+ /*
+- * Drop a reference to the global map of cpus to pools.
++ * Drop a reference to the global map of cpus to pools, if
++ * pools were in use, i.e. if npools > 1.
+  * When the last reference is dropped, the map data is
+  * freed; this allows the sysadmin to change the pool
+  * mode using the pool_mode module option without
+  * rebooting or re-loading sunrpc.ko.
+  */
+-void
+-svc_pool_map_put(void)
++static void
++svc_pool_map_put(int npools)
+ {
+ 	struct svc_pool_map *m = &svc_pool_map;
+ 
++	if (npools <= 1)
++		return;
+ 	mutex_lock(&svc_pool_map_mutex);
+ 
+ 	if (!--m->count) {
+@@ -281,7 +309,6 @@ svc_pool_map_put(void)
+ 
+ 	mutex_unlock(&svc_pool_map_mutex);
+ }
+-EXPORT_SYMBOL_GPL(svc_pool_map_put);
+ 
+ static int svc_pool_map_get_node(unsigned int pidx)
+ {
+@@ -338,21 +365,18 @@ svc_pool_for_cpu(struct svc_serv *serv, int cpu)
+ 	struct svc_pool_map *m = &svc_pool_map;
+ 	unsigned int pidx = 0;
+ 
+-	/*
+-	 * An uninitialised map happens in a pure client when
+-	 * lockd is brought up, so silently treat it the
+-	 * same as SVC_POOL_GLOBAL.
+-	 */
+-	if (svc_serv_is_pooled(serv)) {
+-		switch (m->mode) {
+-		case SVC_POOL_PERCPU:
+-			pidx = m->to_pool[cpu];
+-			break;
+-		case SVC_POOL_PERNODE:
+-			pidx = m->to_pool[cpu_to_node(cpu)];
+-			break;
+-		}
++	if (serv->sv_nrpools <= 1)
++		return serv->sv_pools;
++
++	switch (m->mode) {
++	case SVC_POOL_PERCPU:
++		pidx = m->to_pool[cpu];
++		break;
++	case SVC_POOL_PERNODE:
++		pidx = m->to_pool[cpu_to_node(cpu)];
++		break;
+ 	}
++
+ 	return &serv->sv_pools[pidx % serv->sv_nrpools];
+ }
+ 
+@@ -422,7 +446,7 @@ __svc_init_bc(struct svc_serv *serv)
+  */
+ static struct svc_serv *
+ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
+-	     const struct svc_serv_ops *ops)
++	     int (*threadfn)(void *data))
+ {
+ 	struct svc_serv	*serv;
+ 	unsigned int vers;
+@@ -433,13 +457,13 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
+ 		return NULL;
+ 	serv->sv_name      = prog->pg_name;
+ 	serv->sv_program   = prog;
+-	serv->sv_nrthreads = 1;
++	kref_init(&serv->sv_refcnt);
+ 	serv->sv_stats     = prog->pg_stats;
+ 	if (bufsize > RPCSVC_MAXPAYLOAD)
+ 		bufsize = RPCSVC_MAXPAYLOAD;
+ 	serv->sv_max_payload = bufsize? bufsize : 4096;
+ 	serv->sv_max_mesg  = roundup(serv->sv_max_payload + PAGE_SIZE, PAGE_SIZE);
+-	serv->sv_ops = ops;
++	serv->sv_threadfn = threadfn;
+ 	xdrsize = 0;
+ 	while (prog) {
+ 		prog->pg_lovers = prog->pg_nvers-1;
+@@ -485,59 +509,56 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
+ 	return serv;
+ }
+ 
+-struct svc_serv *
+-svc_create(struct svc_program *prog, unsigned int bufsize,
+-	   const struct svc_serv_ops *ops)
++/**
++ * svc_create - Create an RPC service
++ * @prog: the RPC program the new service will handle
++ * @bufsize: maximum message size for @prog
++ * @threadfn: a function to service RPC requests for @prog
++ *
++ * Returns an instantiated struct svc_serv object or NULL.
++ */
++struct svc_serv *svc_create(struct svc_program *prog, unsigned int bufsize,
++			    int (*threadfn)(void *data))
+ {
+-	return __svc_create(prog, bufsize, /*npools*/1, ops);
++	return __svc_create(prog, bufsize, 1, threadfn);
+ }
+ EXPORT_SYMBOL_GPL(svc_create);
+ 
+-struct svc_serv *
+-svc_create_pooled(struct svc_program *prog, unsigned int bufsize,
+-		  const struct svc_serv_ops *ops)
++/**
++ * svc_create_pooled - Create an RPC service with pooled threads
++ * @prog: the RPC program the new service will handle
++ * @bufsize: maximum message size for @prog
++ * @threadfn: a function to service RPC requests for @prog
++ *
++ * Returns an instantiated struct svc_serv object or NULL.
++ */
++struct svc_serv *svc_create_pooled(struct svc_program *prog,
++				   unsigned int bufsize,
++				   int (*threadfn)(void *data))
+ {
+ 	struct svc_serv *serv;
+ 	unsigned int npools = svc_pool_map_get();
+ 
+-	serv = __svc_create(prog, bufsize, npools, ops);
++	serv = __svc_create(prog, bufsize, npools, threadfn);
+ 	if (!serv)
+ 		goto out_err;
+ 	return serv;
+ out_err:
+-	svc_pool_map_put();
++	svc_pool_map_put(npools);
+ 	return NULL;
+ }
+ EXPORT_SYMBOL_GPL(svc_create_pooled);
+ 
+-void svc_shutdown_net(struct svc_serv *serv, struct net *net)
+-{
+-	svc_close_net(serv, net);
+-
+-	if (serv->sv_ops->svo_shutdown)
+-		serv->sv_ops->svo_shutdown(serv, net);
+-}
+-EXPORT_SYMBOL_GPL(svc_shutdown_net);
+-
+ /*
+  * Destroy an RPC service. Should be called with appropriate locking to
+- * protect the sv_nrthreads, sv_permsocks and sv_tempsocks.
++ * protect sv_permsocks and sv_tempsocks.
+  */
+ void
+-svc_destroy(struct svc_serv *serv)
++svc_destroy(struct kref *ref)
+ {
+-	dprintk("svc: svc_destroy(%s, %d)\n",
+-				serv->sv_program->pg_name,
+-				serv->sv_nrthreads);
+-
+-	if (serv->sv_nrthreads) {
+-		if (--(serv->sv_nrthreads) != 0) {
+-			svc_sock_update_bufs(serv);
+-			return;
+-		}
+-	} else
+-		printk("svc_destroy: no threads for serv=%p!\n", serv);
++	struct svc_serv *serv = container_of(ref, struct svc_serv, sv_refcnt);
+ 
++	dprintk("svc: svc_destroy(%s)\n", serv->sv_program->pg_name);
+ 	del_timer_sync(&serv->sv_temptimer);
+ 
+ 	/*
+@@ -549,8 +570,7 @@ svc_destroy(struct svc_serv *serv)
+ 
+ 	cache_clean_deferred(serv);
+ 
+-	if (svc_serv_is_pooled(serv))
+-		svc_pool_map_put();
++	svc_pool_map_put(serv->sv_nrpools);
+ 
+ 	kfree(serv->sv_pools);
+ 	kfree(serv);
+@@ -614,6 +634,10 @@ svc_rqst_alloc(struct svc_serv *serv, struct svc_pool *pool, int node)
+ 	rqstp->rq_server = serv;
+ 	rqstp->rq_pool = pool;
+ 
++	rqstp->rq_scratch_page = alloc_pages_node(node, GFP_KERNEL, 0);
++	if (!rqstp->rq_scratch_page)
++		goto out_enomem;
++
+ 	rqstp->rq_argp = kmalloc_node(serv->sv_xdrsize, GFP_KERNEL, node);
+ 	if (!rqstp->rq_argp)
+ 		goto out_enomem;
+@@ -632,7 +656,7 @@ svc_rqst_alloc(struct svc_serv *serv, struct svc_pool *pool, int node)
+ }
+ EXPORT_SYMBOL_GPL(svc_rqst_alloc);
+ 
+-struct svc_rqst *
++static struct svc_rqst *
+ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
+ {
+ 	struct svc_rqst	*rqstp;
+@@ -641,14 +665,17 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
+ 	if (!rqstp)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	serv->sv_nrthreads++;
++	svc_get(serv);
++	spin_lock_bh(&serv->sv_lock);
++	serv->sv_nrthreads += 1;
++	spin_unlock_bh(&serv->sv_lock);
++
+ 	spin_lock_bh(&pool->sp_lock);
+ 	pool->sp_nrthreads++;
+ 	list_add_rcu(&rqstp->rq_all, &pool->sp_all_threads);
+ 	spin_unlock_bh(&pool->sp_lock);
+ 	return rqstp;
+ }
+-EXPORT_SYMBOL_GPL(svc_prepare_thread);
+ 
+ /*
+  * Choose a pool in which to create a new thread, for svc_set_num_threads
+@@ -722,11 +749,9 @@ svc_start_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
+ 		if (IS_ERR(rqstp))
+ 			return PTR_ERR(rqstp);
+ 
+-		__module_get(serv->sv_ops->svo_module);
+-		task = kthread_create_on_node(serv->sv_ops->svo_function, rqstp,
++		task = kthread_create_on_node(serv->sv_threadfn, rqstp,
+ 					      node, "%s", serv->sv_name);
+ 		if (IS_ERR(task)) {
+-			module_put(serv->sv_ops->svo_module);
+ 			svc_exit_thread(rqstp);
+ 			return PTR_ERR(task);
+ 		}
+@@ -742,59 +767,13 @@ svc_start_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
+ 	return 0;
+ }
+ 
+-
+-/* destroy old threads */
+-static int
+-svc_signal_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
+-{
+-	struct task_struct *task;
+-	unsigned int state = serv->sv_nrthreads-1;
+-
+-	/* destroy old threads */
+-	do {
+-		task = choose_victim(serv, pool, &state);
+-		if (task == NULL)
+-			break;
+-		send_sig(SIGINT, task, 1);
+-		nrservs++;
+-	} while (nrservs < 0);
+-
+-	return 0;
+-}
+-
+ /*
+  * Create or destroy enough new threads to make the number
+  * of threads the given number.  If `pool' is non-NULL, applies
+  * only to threads in that pool, otherwise round-robins between
+  * all pools.  Caller must ensure that mutual exclusion between this and
+  * server startup or shutdown.
+- *
+- * Destroying threads relies on the service threads filling in
+- * rqstp->rq_task, which only the nfs ones do.  Assumes the serv
+- * has been created using svc_create_pooled().
+- *
+- * Based on code that used to be in nfsd_svc() but tweaked
+- * to be pool-aware.
+  */
+-int
+-svc_set_num_threads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
+-{
+-	if (pool == NULL) {
+-		/* The -1 assumes caller has done a svc_get() */
+-		nrservs -= (serv->sv_nrthreads-1);
+-	} else {
+-		spin_lock_bh(&pool->sp_lock);
+-		nrservs -= pool->sp_nrthreads;
+-		spin_unlock_bh(&pool->sp_lock);
+-	}
+-
+-	if (nrservs > 0)
+-		return svc_start_kthreads(serv, pool, nrservs);
+-	if (nrservs < 0)
+-		return svc_signal_kthreads(serv, pool, nrservs);
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(svc_set_num_threads);
+ 
+ /* destroy old threads */
+ static int
+@@ -819,11 +798,10 @@ svc_stop_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
+ }
+ 
+ int
+-svc_set_num_threads_sync(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
++svc_set_num_threads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
+ {
+ 	if (pool == NULL) {
+-		/* The -1 assumes caller has done a svc_get() */
+-		nrservs -= (serv->sv_nrthreads-1);
++		nrservs -= serv->sv_nrthreads;
+ 	} else {
+ 		spin_lock_bh(&pool->sp_lock);
+ 		nrservs -= pool->sp_nrthreads;
+@@ -836,7 +814,28 @@ svc_set_num_threads_sync(struct svc_serv *serv, struct svc_pool *pool, int nrser
+ 		return svc_stop_kthreads(serv, pool, nrservs);
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(svc_set_num_threads_sync);
++EXPORT_SYMBOL_GPL(svc_set_num_threads);
++
++/**
++ * svc_rqst_replace_page - Replace one page in rq_pages[]
++ * @rqstp: svc_rqst with pages to replace
++ * @page: replacement page
++ *
++ * When replacing a page in rq_pages, batch the release of the
++ * replaced pages to avoid hammering the page allocator.
++ */
++void svc_rqst_replace_page(struct svc_rqst *rqstp, struct page *page)
++{
++	if (*rqstp->rq_next_page) {
++		if (!pagevec_space(&rqstp->rq_pvec))
++			__pagevec_release(&rqstp->rq_pvec);
++		pagevec_add(&rqstp->rq_pvec, *rqstp->rq_next_page);
++	}
++
++	get_page(page);
++	*(rqstp->rq_next_page++) = page;
++}
++EXPORT_SYMBOL_GPL(svc_rqst_replace_page);
+ 
+ /*
+  * Called from a server thread as it's exiting. Caller must hold the "service
+@@ -846,6 +845,7 @@ void
+ svc_rqst_free(struct svc_rqst *rqstp)
+ {
+ 	svc_release_buffer(rqstp);
++	put_page(rqstp->rq_scratch_page);
+ 	kfree(rqstp->rq_resp);
+ 	kfree(rqstp->rq_argp);
+ 	kfree(rqstp->rq_auth_data);
+@@ -865,11 +865,14 @@ svc_exit_thread(struct svc_rqst *rqstp)
+ 		list_del_rcu(&rqstp->rq_all);
+ 	spin_unlock_bh(&pool->sp_lock);
+ 
++	spin_lock_bh(&serv->sv_lock);
++	serv->sv_nrthreads -= 1;
++	spin_unlock_bh(&serv->sv_lock);
++	svc_sock_update_bufs(serv);
++
+ 	svc_rqst_free(rqstp);
+ 
+-	/* Release the server */
+-	if (serv)
+-		svc_destroy(serv);
++	svc_put(serv);
+ }
+ EXPORT_SYMBOL_GPL(svc_exit_thread);
+ 
+@@ -1161,22 +1164,6 @@ void svc_printk(struct svc_rqst *rqstp, const char *fmt, ...)
+ static __printf(2,3) void svc_printk(struct svc_rqst *rqstp, const char *fmt, ...) {}
+ #endif
+ 
+-__be32
+-svc_return_autherr(struct svc_rqst *rqstp, __be32 auth_err)
+-{
+-	set_bit(RQ_AUTHERR, &rqstp->rq_flags);
+-	return auth_err;
+-}
+-EXPORT_SYMBOL_GPL(svc_return_autherr);
+-
+-static __be32
+-svc_get_autherr(struct svc_rqst *rqstp, __be32 *statp)
+-{
+-	if (test_and_clear_bit(RQ_AUTHERR, &rqstp->rq_flags))
+-		return *statp;
+-	return rpc_auth_ok;
+-}
+-
+ static int
+ svc_generic_dispatch(struct svc_rqst *rqstp, __be32 *statp)
+ {
+@@ -1200,7 +1187,7 @@ svc_generic_dispatch(struct svc_rqst *rqstp, __be32 *statp)
+ 	    test_bit(RQ_DROPME, &rqstp->rq_flags))
+ 		return 0;
+ 
+-	if (test_bit(RQ_AUTHERR, &rqstp->rq_flags))
++	if (rqstp->rq_auth_stat != rpc_auth_ok)
+ 		return 1;
+ 
+ 	if (*statp != rpc_success)
+@@ -1250,7 +1237,7 @@ svc_generic_init_request(struct svc_rqst *rqstp,
+ 	rqstp->rq_procinfo = procp = &versp->vs_proc[rqstp->rq_proc];
+ 
+ 	/* Initialize storage for argp and resp */
+-	memset(rqstp->rq_argp, 0, procp->pc_argsize);
++	memset(rqstp->rq_argp, 0, procp->pc_argzero);
+ 	memset(rqstp->rq_resp, 0, procp->pc_ressize);
+ 
+ 	/* Bump per-procedure stats counter */
+@@ -1279,7 +1266,7 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ 	struct svc_process_info process;
+ 	__be32			*statp;
+ 	u32			prog, vers;
+-	__be32			auth_stat, rpc_stat;
++	__be32			rpc_stat;
+ 	int			auth_res;
+ 	__be32			*reply_statp;
+ 
+@@ -1322,14 +1309,12 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ 	 * We do this before anything else in order to get a decent
+ 	 * auth verifier.
+ 	 */
+-	auth_res = svc_authenticate(rqstp, &auth_stat);
++	auth_res = svc_authenticate(rqstp);
+ 	/* Also give the program a chance to reject this call: */
+-	if (auth_res == SVC_OK && progp) {
+-		auth_stat = rpc_autherr_badcred;
++	if (auth_res == SVC_OK && progp)
+ 		auth_res = progp->pg_authenticate(rqstp);
+-	}
+ 	if (auth_res != SVC_OK)
+-		trace_svc_authenticate(rqstp, auth_res, auth_stat);
++		trace_svc_authenticate(rqstp, auth_res);
+ 	switch (auth_res) {
+ 	case SVC_OK:
+ 		break;
+@@ -1388,15 +1373,15 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ 			goto release_dropit;
+ 		if (*statp == rpc_garbage_args)
+ 			goto err_garbage;
+-		auth_stat = svc_get_autherr(rqstp, statp);
+-		if (auth_stat != rpc_auth_ok)
+-			goto err_release_bad_auth;
+ 	} else {
+ 		dprintk("svc: calling dispatcher\n");
+ 		if (!process.dispatch(rqstp, statp))
+ 			goto release_dropit; /* Release reply info */
+ 	}
+ 
++	if (rqstp->rq_auth_stat != rpc_auth_ok)
++		goto err_release_bad_auth;
++
+ 	/* Check RPC status result */
+ 	if (*statp != rpc_success)
+ 		resv->iov_len = ((void*)statp)  - resv->iov_base + 4;
+@@ -1425,7 +1410,7 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ 	svc_authorise(rqstp);
+ close_xprt:
+ 	if (rqstp->rq_xprt && test_bit(XPT_TEMP, &rqstp->rq_xprt->xpt_flags))
+-		svc_close_xprt(rqstp->rq_xprt);
++		svc_xprt_close(rqstp->rq_xprt);
+ 	dprintk("svc: svc_process close\n");
+ 	return 0;
+ 
+@@ -1446,13 +1431,14 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ 	if (procp->pc_release)
+ 		procp->pc_release(rqstp);
+ err_bad_auth:
+-	dprintk("svc: authentication failed (%d)\n", ntohl(auth_stat));
++	dprintk("svc: authentication failed (%d)\n",
++		be32_to_cpu(rqstp->rq_auth_stat));
+ 	serv->sv_stats->rpcbadauth++;
+ 	/* Restore write pointer to location of accept status: */
+ 	xdr_ressize_check(rqstp, reply_statp);
+ 	svc_putnl(resv, 1);	/* REJECT */
+ 	svc_putnl(resv, 1);	/* AUTH_ERROR */
+-	svc_putnl(resv, ntohl(auth_stat));	/* status */
++	svc_putu32(resv, rqstp->rq_auth_stat);	/* status */
+ 	goto sendit;
+ 
+ err_bad_prog:
+@@ -1626,7 +1612,7 @@ u32 svc_max_payload(const struct svc_rqst *rqstp)
+ EXPORT_SYMBOL_GPL(svc_max_payload);
+ 
+ /**
+- * svc_encode_read_payload - mark a range of bytes as a READ payload
++ * svc_encode_result_payload - mark a range of bytes as a result payload
+  * @rqstp: svc_rqst to operate on
+  * @offset: payload's byte offset in rqstp->rq_res
+  * @length: size of payload, in bytes
+@@ -1634,26 +1620,28 @@ EXPORT_SYMBOL_GPL(svc_max_payload);
+  * Returns zero on success, or a negative errno if a permanent
+  * error occurred.
+  */
+-int svc_encode_read_payload(struct svc_rqst *rqstp, unsigned int offset,
+-			    unsigned int length)
++int svc_encode_result_payload(struct svc_rqst *rqstp, unsigned int offset,
++			      unsigned int length)
+ {
+-	return rqstp->rq_xprt->xpt_ops->xpo_read_payload(rqstp, offset, length);
++	return rqstp->rq_xprt->xpt_ops->xpo_result_payload(rqstp, offset,
++							   length);
+ }
+-EXPORT_SYMBOL_GPL(svc_encode_read_payload);
++EXPORT_SYMBOL_GPL(svc_encode_result_payload);
+ 
+ /**
+  * svc_fill_write_vector - Construct data argument for VFS write call
+  * @rqstp: svc_rqst to operate on
+- * @pages: list of pages containing data payload
+- * @first: buffer containing first section of write payload
+- * @total: total number of bytes of write payload
++ * @payload: xdr_buf containing only the write data payload
+  *
+  * Fills in rqstp::rq_vec, and returns the number of elements.
+  */
+-unsigned int svc_fill_write_vector(struct svc_rqst *rqstp, struct page **pages,
+-				   struct kvec *first, size_t total)
++unsigned int svc_fill_write_vector(struct svc_rqst *rqstp,
++				   struct xdr_buf *payload)
+ {
++	struct page **pages = payload->pages;
++	struct kvec *first = payload->head;
+ 	struct kvec *vec = rqstp->rq_vec;
++	size_t total = payload->len;
+ 	unsigned int i;
+ 
+ 	/* Some types of transport can present the write payload
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index 06e503466c32c..d1eacf3358b81 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -233,30 +233,35 @@ static struct svc_xprt *__svc_xpo_create(struct svc_xprt_class *xcl,
+ 	return xprt;
+ }
+ 
+-/*
+- * svc_xprt_received conditionally queues the transport for processing
+- * by another thread. The caller must hold the XPT_BUSY bit and must
++/**
++ * svc_xprt_received - start next receiver thread
++ * @xprt: controlling transport
++ *
++ * The caller must hold the XPT_BUSY bit and must
+  * not thereafter touch transport data.
+  *
+  * Note: XPT_DATA only gets cleared when a read-attempt finds no (or
+  * insufficient) data.
+  */
+-static void svc_xprt_received(struct svc_xprt *xprt)
++void svc_xprt_received(struct svc_xprt *xprt)
+ {
+ 	if (!test_bit(XPT_BUSY, &xprt->xpt_flags)) {
+ 		WARN_ONCE(1, "xprt=0x%p already busy!", xprt);
+ 		return;
+ 	}
+ 
++	trace_svc_xprt_received(xprt);
++
+ 	/* As soon as we clear busy, the xprt could be closed and
+-	 * 'put', so we need a reference to call svc_enqueue_xprt with:
++	 * 'put', so we need a reference to call svc_xprt_enqueue with:
+ 	 */
+ 	svc_xprt_get(xprt);
+ 	smp_mb__before_atomic();
+ 	clear_bit(XPT_BUSY, &xprt->xpt_flags);
+-	xprt->xpt_server->sv_ops->svo_enqueue_xprt(xprt);
++	svc_xprt_enqueue(xprt);
+ 	svc_xprt_put(xprt);
+ }
++EXPORT_SYMBOL_GPL(svc_xprt_received);
+ 
+ void svc_add_new_perm_xprt(struct svc_serv *serv, struct svc_xprt *new)
+ {
+@@ -267,7 +272,7 @@ void svc_add_new_perm_xprt(struct svc_serv *serv, struct svc_xprt *new)
+ 	svc_xprt_received(new);
+ }
+ 
+-static int _svc_create_xprt(struct svc_serv *serv, const char *xprt_name,
++static int _svc_xprt_create(struct svc_serv *serv, const char *xprt_name,
+ 			    struct net *net, const int family,
+ 			    const unsigned short port, int flags,
+ 			    const struct cred *cred)
+@@ -303,21 +308,35 @@ static int _svc_create_xprt(struct svc_serv *serv, const char *xprt_name,
+ 	return -EPROTONOSUPPORT;
+ }
+ 
+-int svc_create_xprt(struct svc_serv *serv, const char *xprt_name,
++/**
++ * svc_xprt_create - Add a new listener to @serv
++ * @serv: target RPC service
++ * @xprt_name: transport class name
++ * @net: network namespace
++ * @family: network address family
++ * @port: listener port
++ * @flags: SVC_SOCK flags
++ * @cred: credential to bind to this transport
++ *
++ * Return values:
++ *   %0: New listener added successfully
++ *   %-EPROTONOSUPPORT: Requested transport type not supported
++ */
++int svc_xprt_create(struct svc_serv *serv, const char *xprt_name,
+ 		    struct net *net, const int family,
+ 		    const unsigned short port, int flags,
+ 		    const struct cred *cred)
+ {
+ 	int err;
+ 
+-	err = _svc_create_xprt(serv, xprt_name, net, family, port, flags, cred);
++	err = _svc_xprt_create(serv, xprt_name, net, family, port, flags, cred);
+ 	if (err == -EPROTONOSUPPORT) {
+ 		request_module("svc%s", xprt_name);
+-		err = _svc_create_xprt(serv, xprt_name, net, family, port, flags, cred);
++		err = _svc_xprt_create(serv, xprt_name, net, family, port, flags, cred);
+ 	}
+ 	return err;
+ }
+-EXPORT_SYMBOL_GPL(svc_create_xprt);
++EXPORT_SYMBOL_GPL(svc_xprt_create);
+ 
+ /*
+  * Copy the local and remote xprt addresses to the rqstp structure
+@@ -393,6 +412,8 @@ static bool svc_xprt_ready(struct svc_xprt *xprt)
+ 	smp_rmb();
+ 	xpt_flags = READ_ONCE(xprt->xpt_flags);
+ 
++	if (xpt_flags & BIT(XPT_BUSY))
++		return false;
+ 	if (xpt_flags & (BIT(XPT_CONN) | BIT(XPT_CLOSE)))
+ 		return true;
+ 	if (xpt_flags & (BIT(XPT_DATA) | BIT(XPT_DEFERRED))) {
+@@ -405,7 +426,12 @@ static bool svc_xprt_ready(struct svc_xprt *xprt)
+ 	return false;
+ }
+ 
+-void svc_xprt_do_enqueue(struct svc_xprt *xprt)
++/**
++ * svc_xprt_enqueue - Queue a transport on an idle nfsd thread
++ * @xprt: transport with data pending
++ *
++ */
++void svc_xprt_enqueue(struct svc_xprt *xprt)
+ {
+ 	struct svc_pool *pool;
+ 	struct svc_rqst	*rqstp = NULL;
+@@ -449,19 +475,6 @@ void svc_xprt_do_enqueue(struct svc_xprt *xprt)
+ 	put_cpu();
+ 	trace_svc_xprt_do_enqueue(xprt, rqstp);
+ }
+-EXPORT_SYMBOL_GPL(svc_xprt_do_enqueue);
+-
+-/*
+- * Queue up a transport with data pending. If there are idle nfsd
+- * processes, wake 'em up.
+- *
+- */
+-void svc_xprt_enqueue(struct svc_xprt *xprt)
+-{
+-	if (test_bit(XPT_BUSY, &xprt->xpt_flags))
+-		return;
+-	xprt->xpt_server->sv_ops->svo_enqueue_xprt(xprt);
+-}
+ EXPORT_SYMBOL_GPL(svc_xprt_enqueue);
+ 
+ /*
+@@ -520,6 +533,7 @@ static void svc_xprt_release(struct svc_rqst *rqstp)
+ 	kfree(rqstp->rq_deferred);
+ 	rqstp->rq_deferred = NULL;
+ 
++	pagevec_release(&rqstp->rq_pvec);
+ 	svc_free_res_pages(rqstp);
+ 	rqstp->rq_res.page_len = 0;
+ 	rqstp->rq_res.page_base = 0;
+@@ -646,6 +660,8 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
+ 	int pages;
+ 	int i;
+ 
++	pagevec_init(&rqstp->rq_pvec);
++
+ 	/* now allocate needed pages.  If we get a failure, sleep briefly */
+ 	pages = (serv->sv_max_mesg + 2 * PAGE_SIZE) >> PAGE_SHIFT;
+ 	if (pages > RPCSVC_MAXPAGES) {
+@@ -658,13 +674,13 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
+ 		while (rqstp->rq_pages[i] == NULL) {
+ 			struct page *p = alloc_page(GFP_KERNEL);
+ 			if (!p) {
+-				set_current_state(TASK_INTERRUPTIBLE);
+-				if (signalled() || kthread_should_stop()) {
++				set_current_state(TASK_IDLE);
++				if (kthread_should_stop()) {
+ 					set_current_state(TASK_RUNNING);
+ 					return -EINTR;
+ 				}
+-				schedule_timeout(msecs_to_jiffies(500));
+ 			}
++			freezable_schedule_timeout(msecs_to_jiffies(500));
+ 			rqstp->rq_pages[i] = p;
+ 		}
+ 	rqstp->rq_page_end = &rqstp->rq_pages[i];
+@@ -697,7 +713,7 @@ rqst_should_sleep(struct svc_rqst *rqstp)
+ 		return false;
+ 
+ 	/* are we shutting down? */
+-	if (signalled() || kthread_should_stop())
++	if (kthread_should_stop())
+ 		return false;
+ 
+ 	/* are we freezing? */
+@@ -719,18 +735,14 @@ static struct svc_xprt *svc_get_next_xprt(struct svc_rqst *rqstp, long timeout)
+ 	if (rqstp->rq_xprt)
+ 		goto out_found;
+ 
+-	/*
+-	 * We have to be able to interrupt this wait
+-	 * to bring down the daemons ...
+-	 */
+-	set_current_state(TASK_INTERRUPTIBLE);
++	set_current_state(TASK_IDLE);
+ 	smp_mb__before_atomic();
+ 	clear_bit(SP_CONGESTED, &pool->sp_flags);
+ 	clear_bit(RQ_BUSY, &rqstp->rq_flags);
+ 	smp_mb__after_atomic();
+ 
+ 	if (likely(rqst_should_sleep(rqstp)))
+-		time_left = schedule_timeout(timeout);
++		time_left = freezable_schedule_timeout(timeout);
+ 	else
+ 		__set_current_state(TASK_RUNNING);
+ 
+@@ -745,7 +757,7 @@ static struct svc_xprt *svc_get_next_xprt(struct svc_rqst *rqstp, long timeout)
+ 	if (!time_left)
+ 		atomic_long_inc(&pool->sp_stats.threads_timedout);
+ 
+-	if (signalled() || kthread_should_stop())
++	if (kthread_should_stop())
+ 		return ERR_PTR(-EINTR);
+ 	return ERR_PTR(-EAGAIN);
+ out_found:
+@@ -844,7 +856,7 @@ int svc_recv(struct svc_rqst *rqstp, long timeout)
+ 	try_to_freeze();
+ 	cond_resched();
+ 	err = -EINTR;
+-	if (signalled() || kthread_should_stop())
++	if (kthread_should_stop())
+ 		goto out;
+ 
+ 	xprt = svc_get_next_xprt(rqstp, timeout);
+@@ -1040,7 +1052,12 @@ static void svc_delete_xprt(struct svc_xprt *xprt)
+ 	svc_xprt_put(xprt);
+ }
+ 
+-void svc_close_xprt(struct svc_xprt *xprt)
++/**
++ * svc_xprt_close - Close a client connection
++ * @xprt: transport to disconnect
++ *
++ */
++void svc_xprt_close(struct svc_xprt *xprt)
+ {
+ 	trace_svc_xprt_close(xprt);
+ 	set_bit(XPT_CLOSE, &xprt->xpt_flags);
+@@ -1055,7 +1072,7 @@ void svc_close_xprt(struct svc_xprt *xprt)
+ 	 */
+ 	svc_delete_xprt(xprt);
+ }
+-EXPORT_SYMBOL_GPL(svc_close_xprt);
++EXPORT_SYMBOL_GPL(svc_xprt_close);
+ 
+ static int svc_close_list(struct svc_serv *serv, struct list_head *xprt_list, struct net *net)
+ {
+@@ -1107,7 +1124,11 @@ static void svc_clean_up_xprts(struct svc_serv *serv, struct net *net)
+ 	}
+ }
+ 
+-/*
++/**
++ * svc_xprt_destroy_all - Destroy transports associated with @serv
++ * @serv: RPC service to be shut down
++ * @net: target network namespace
++ *
+  * Server threads may still be running (especially in the case where the
+  * service is still running in other network namespaces).
+  *
+@@ -1119,7 +1140,7 @@ static void svc_clean_up_xprts(struct svc_serv *serv, struct net *net)
+  * threads, we may need to wait a little while and then check again to
+  * see if they're done.
+  */
+-void svc_close_net(struct svc_serv *serv, struct net *net)
++void svc_xprt_destroy_all(struct svc_serv *serv, struct net *net)
+ {
+ 	int delay = 0;
+ 
+@@ -1130,6 +1151,7 @@ void svc_close_net(struct svc_serv *serv, struct net *net)
+ 		msleep(delay++);
+ 	}
+ }
++EXPORT_SYMBOL_GPL(svc_xprt_destroy_all);
+ 
+ /*
+  * Handle defer and revisit of requests
+diff --git a/net/sunrpc/svcauth.c b/net/sunrpc/svcauth.c
+index 998b196b61767..5a8b8e03fdd42 100644
+--- a/net/sunrpc/svcauth.c
++++ b/net/sunrpc/svcauth.c
+@@ -59,12 +59,12 @@ svc_put_auth_ops(struct auth_ops *aops)
+ }
+ 
+ int
+-svc_authenticate(struct svc_rqst *rqstp, __be32 *authp)
++svc_authenticate(struct svc_rqst *rqstp)
+ {
+ 	rpc_authflavor_t	flavor;
+ 	struct auth_ops		*aops;
+ 
+-	*authp = rpc_auth_ok;
++	rqstp->rq_auth_stat = rpc_auth_ok;
+ 
+ 	flavor = svc_getnl(&rqstp->rq_arg.head[0]);
+ 
+@@ -72,7 +72,7 @@ svc_authenticate(struct svc_rqst *rqstp, __be32 *authp)
+ 
+ 	aops = svc_get_auth_ops(flavor);
+ 	if (aops == NULL) {
+-		*authp = rpc_autherr_badcred;
++		rqstp->rq_auth_stat = rpc_autherr_badcred;
+ 		return SVC_DENIED;
+ 	}
+ 
+@@ -80,7 +80,7 @@ svc_authenticate(struct svc_rqst *rqstp, __be32 *authp)
+ 	init_svc_cred(&rqstp->rq_cred);
+ 
+ 	rqstp->rq_authop = aops;
+-	return aops->accept(rqstp, authp);
++	return aops->accept(rqstp);
+ }
+ EXPORT_SYMBOL_GPL(svc_authenticate);
+ 
+diff --git a/net/sunrpc/svcauth_unix.c b/net/sunrpc/svcauth_unix.c
+index 60754a292589b..1868596259af5 100644
+--- a/net/sunrpc/svcauth_unix.c
++++ b/net/sunrpc/svcauth_unix.c
+@@ -699,8 +699,9 @@ svcauth_unix_set_client(struct svc_rqst *rqstp)
+ 
+ 	rqstp->rq_client = NULL;
+ 	if (rqstp->rq_proc == 0)
+-		return SVC_OK;
++		goto out;
+ 
++	rqstp->rq_auth_stat = rpc_autherr_badcred;
+ 	ipm = ip_map_cached_get(xprt);
+ 	if (ipm == NULL)
+ 		ipm = __ip_map_lookup(sn->ip_map_cache, rqstp->rq_server->sv_program->pg_class,
+@@ -737,13 +738,16 @@ svcauth_unix_set_client(struct svc_rqst *rqstp)
+ 		put_group_info(cred->cr_group_info);
+ 		cred->cr_group_info = gi;
+ 	}
++
++out:
++	rqstp->rq_auth_stat = rpc_auth_ok;
+ 	return SVC_OK;
+ }
+ 
+ EXPORT_SYMBOL_GPL(svcauth_unix_set_client);
+ 
+ static int
+-svcauth_null_accept(struct svc_rqst *rqstp, __be32 *authp)
++svcauth_null_accept(struct svc_rqst *rqstp)
+ {
+ 	struct kvec	*argv = &rqstp->rq_arg.head[0];
+ 	struct kvec	*resv = &rqstp->rq_res.head[0];
+@@ -754,12 +758,12 @@ svcauth_null_accept(struct svc_rqst *rqstp, __be32 *authp)
+ 
+ 	if (svc_getu32(argv) != 0) {
+ 		dprintk("svc: bad null cred\n");
+-		*authp = rpc_autherr_badcred;
++		rqstp->rq_auth_stat = rpc_autherr_badcred;
+ 		return SVC_DENIED;
+ 	}
+ 	if (svc_getu32(argv) != htonl(RPC_AUTH_NULL) || svc_getu32(argv) != 0) {
+ 		dprintk("svc: bad null verf\n");
+-		*authp = rpc_autherr_badverf;
++		rqstp->rq_auth_stat = rpc_autherr_badverf;
+ 		return SVC_DENIED;
+ 	}
+ 
+@@ -803,7 +807,7 @@ struct auth_ops svcauth_null = {
+ 
+ 
+ static int
+-svcauth_unix_accept(struct svc_rqst *rqstp, __be32 *authp)
++svcauth_unix_accept(struct svc_rqst *rqstp)
+ {
+ 	struct kvec	*argv = &rqstp->rq_arg.head[0];
+ 	struct kvec	*resv = &rqstp->rq_res.head[0];
+@@ -845,7 +849,7 @@ svcauth_unix_accept(struct svc_rqst *rqstp, __be32 *authp)
+ 	}
+ 	groups_sort(cred->cr_group_info);
+ 	if (svc_getu32(argv) != htonl(RPC_AUTH_NULL) || svc_getu32(argv) != 0) {
+-		*authp = rpc_autherr_badverf;
++		rqstp->rq_auth_stat = rpc_autherr_badverf;
+ 		return SVC_DENIED;
+ 	}
+ 
+@@ -857,7 +861,7 @@ svcauth_unix_accept(struct svc_rqst *rqstp, __be32 *authp)
+ 	return SVC_OK;
+ 
+ badcred:
+-	*authp = rpc_autherr_badcred;
++	rqstp->rq_auth_stat = rpc_autherr_badcred;
+ 	return SVC_DENIED;
+ }
+ 
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 3d5ee042c5015..cb0cfcd8a8141 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -181,8 +181,8 @@ static void svc_set_cmsg_data(struct svc_rqst *rqstp, struct cmsghdr *cmh)
+ 	}
+ }
+ 
+-static int svc_sock_read_payload(struct svc_rqst *rqstp, unsigned int offset,
+-				 unsigned int length)
++static int svc_sock_result_payload(struct svc_rqst *rqstp, unsigned int offset,
++				   unsigned int length)
+ {
+ 	return 0;
+ }
+@@ -635,7 +635,7 @@ static const struct svc_xprt_ops svc_udp_ops = {
+ 	.xpo_create = svc_udp_create,
+ 	.xpo_recvfrom = svc_udp_recvfrom,
+ 	.xpo_sendto = svc_udp_sendto,
+-	.xpo_read_payload = svc_sock_read_payload,
++	.xpo_result_payload = svc_sock_result_payload,
+ 	.xpo_release_rqst = svc_udp_release_rqst,
+ 	.xpo_detach = svc_sock_detach,
+ 	.xpo_free = svc_sock_free,
+@@ -1209,7 +1209,7 @@ static const struct svc_xprt_ops svc_tcp_ops = {
+ 	.xpo_create = svc_tcp_create,
+ 	.xpo_recvfrom = svc_tcp_recvfrom,
+ 	.xpo_sendto = svc_tcp_sendto,
+-	.xpo_read_payload = svc_sock_read_payload,
++	.xpo_result_payload = svc_sock_result_payload,
+ 	.xpo_release_rqst = svc_tcp_release_rqst,
+ 	.xpo_detach = svc_tcp_sock_detach,
+ 	.xpo_free = svc_sock_free,
+@@ -1342,25 +1342,10 @@ static struct svc_sock *svc_setup_socket(struct svc_serv *serv,
+ 	return svsk;
+ }
+ 
+-bool svc_alien_sock(struct net *net, int fd)
+-{
+-	int err;
+-	struct socket *sock = sockfd_lookup(fd, &err);
+-	bool ret = false;
+-
+-	if (!sock)
+-		goto out;
+-	if (sock_net(sock->sk) != net)
+-		ret = true;
+-	sockfd_put(sock);
+-out:
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(svc_alien_sock);
+-
+ /**
+  * svc_addsock - add a listener socket to an RPC service
+  * @serv: pointer to RPC service to which to add a new listener
++ * @net: caller's network namespace
+  * @fd: file descriptor of the new listener
+  * @name_return: pointer to buffer to fill in with name of listener
+  * @len: size of the buffer
+@@ -1370,8 +1355,8 @@ EXPORT_SYMBOL_GPL(svc_alien_sock);
+  * Name is terminated with '\n'.  On error, returns a negative errno
+  * value.
+  */
+-int svc_addsock(struct svc_serv *serv, const int fd, char *name_return,
+-		const size_t len, const struct cred *cred)
++int svc_addsock(struct svc_serv *serv, struct net *net, const int fd,
++		char *name_return, const size_t len, const struct cred *cred)
+ {
+ 	int err = 0;
+ 	struct socket *so = sockfd_lookup(fd, &err);
+@@ -1382,6 +1367,9 @@ int svc_addsock(struct svc_serv *serv, const int fd, char *name_return,
+ 
+ 	if (!so)
+ 		return err;
++	err = -EINVAL;
++	if (sock_net(so->sk) != net)
++		goto out;
+ 	err = -EAFNOSUPPORT;
+ 	if ((so->sk->sk_family != PF_INET) && (so->sk->sk_family != PF_INET6))
+ 		goto out;
+diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
+index d84bb5037bb5b..e2bd0cd391142 100644
+--- a/net/sunrpc/xdr.c
++++ b/net/sunrpc/xdr.c
+@@ -669,7 +669,7 @@ void xdr_init_encode(struct xdr_stream *xdr, struct xdr_buf *buf, __be32 *p,
+ 	struct kvec *iov = buf->head;
+ 	int scratch_len = buf->buflen - buf->page_len - buf->tail[0].iov_len;
+ 
+-	xdr_set_scratch_buffer(xdr, NULL, 0);
++	xdr_reset_scratch_buffer(xdr);
+ 	BUG_ON(scratch_len < 0);
+ 	xdr->buf = buf;
+ 	xdr->iov = iov;
+@@ -691,7 +691,29 @@ void xdr_init_encode(struct xdr_stream *xdr, struct xdr_buf *buf, __be32 *p,
+ EXPORT_SYMBOL_GPL(xdr_init_encode);
+ 
+ /**
+- * xdr_commit_encode - Ensure all data is written to buffer
++ * xdr_init_encode_pages - Initialize an xdr_stream for encoding into pages
++ * @xdr: pointer to xdr_stream struct
++ * @buf: pointer to XDR buffer into which to encode data
++ * @pages: list of pages to decode into
++ * @rqst: pointer to controlling rpc_rqst, for debugging
++ *
++ */
++void xdr_init_encode_pages(struct xdr_stream *xdr, struct xdr_buf *buf,
++			   struct page **pages, struct rpc_rqst *rqst)
++{
++	xdr_reset_scratch_buffer(xdr);
++
++	xdr->buf = buf;
++	xdr->page_ptr = pages;
++	xdr->iov = NULL;
++	xdr->p = page_address(*pages);
++	xdr->end = (void *)xdr->p + min_t(u32, buf->buflen, PAGE_SIZE);
++	xdr->rqst = rqst;
++}
++EXPORT_SYMBOL_GPL(xdr_init_encode_pages);
++
++/**
++ * __xdr_commit_encode - Ensure all data is written to buffer
+  * @xdr: pointer to xdr_stream
+  *
+  * We handle encoding across page boundaries by giving the caller a
+@@ -703,22 +725,25 @@ EXPORT_SYMBOL_GPL(xdr_init_encode);
+  * required at the end of encoding, or any other time when the xdr_buf
+  * data might be read.
+  */
+-inline void xdr_commit_encode(struct xdr_stream *xdr)
++void __xdr_commit_encode(struct xdr_stream *xdr)
+ {
+ 	int shift = xdr->scratch.iov_len;
+ 	void *page;
+ 
+-	if (shift == 0)
+-		return;
+ 	page = page_address(*xdr->page_ptr);
+ 	memcpy(xdr->scratch.iov_base, page, shift);
+ 	memmove(page, page + shift, (void *)xdr->p - page);
+-	xdr->scratch.iov_len = 0;
++	xdr_reset_scratch_buffer(xdr);
+ }
+-EXPORT_SYMBOL_GPL(xdr_commit_encode);
++EXPORT_SYMBOL_GPL(__xdr_commit_encode);
+ 
+-static __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr,
+-		size_t nbytes)
++/*
++ * The buffer space to be reserved crosses the boundary between
++ * xdr->buf->head and xdr->buf->pages, or between two pages
++ * in xdr->buf->pages.
++ */
++static noinline __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr,
++						   size_t nbytes)
+ {
+ 	__be32 *p;
+ 	int space_left;
+@@ -743,8 +768,7 @@ static __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr,
+ 	 * the "scratch" iov to track any temporarily unused fragment of
+ 	 * space at the end of the previous buffer:
+ 	 */
+-	xdr->scratch.iov_base = xdr->p;
+-	xdr->scratch.iov_len = frag1bytes;
++	xdr_set_scratch_buffer(xdr, xdr->p, frag1bytes);
+ 	p = page_address(*xdr->page_ptr);
+ 	/*
+ 	 * Note this is where the next encode will start after we've
+@@ -1056,8 +1080,7 @@ void xdr_init_decode(struct xdr_stream *xdr, struct xdr_buf *buf, __be32 *p,
+ 		     struct rpc_rqst *rqst)
+ {
+ 	xdr->buf = buf;
+-	xdr->scratch.iov_base = NULL;
+-	xdr->scratch.iov_len = 0;
++	xdr_reset_scratch_buffer(xdr);
+ 	xdr->nwords = XDR_QUADLEN(buf->len);
+ 	if (buf->head[0].iov_len != 0)
+ 		xdr_set_iov(xdr, buf->head, buf->len);
+@@ -1105,24 +1128,6 @@ static __be32 * __xdr_inline_decode(struct xdr_stream *xdr, size_t nbytes)
+ 	return p;
+ }
+ 
+-/**
+- * xdr_set_scratch_buffer - Attach a scratch buffer for decoding data.
+- * @xdr: pointer to xdr_stream struct
+- * @buf: pointer to an empty buffer
+- * @buflen: size of 'buf'
+- *
+- * The scratch buffer is used when decoding from an array of pages.
+- * If an xdr_inline_decode() call spans across page boundaries, then
+- * we copy the data into the scratch buffer in order to allow linear
+- * access.
+- */
+-void xdr_set_scratch_buffer(struct xdr_stream *xdr, void *buf, size_t buflen)
+-{
+-	xdr->scratch.iov_base = buf;
+-	xdr->scratch.iov_len = buflen;
+-}
+-EXPORT_SYMBOL_GPL(xdr_set_scratch_buffer);
+-
+ static __be32 *xdr_copy_to_scratch(struct xdr_stream *xdr, size_t nbytes)
+ {
+ 	__be32 *p;
+@@ -1432,6 +1437,51 @@ xdr_buf_subsegment(struct xdr_buf *buf, struct xdr_buf *subbuf,
+ }
+ EXPORT_SYMBOL_GPL(xdr_buf_subsegment);
+ 
++/**
++ * xdr_stream_subsegment - set @subbuf to a portion of @xdr
++ * @xdr: an xdr_stream set up for decoding
++ * @subbuf: the result buffer
++ * @nbytes: length of @xdr to extract, in bytes
++ *
++ * Sets up @subbuf to represent a portion of @xdr. The portion
++ * starts at the current offset in @xdr, and extends for a length
++ * of @nbytes. If this is successful, @xdr is advanced to the next
++ * position following that portion.
++ *
++ * Return values:
++ *   %true: @subbuf has been initialized, and @xdr has been advanced.
++ *   %false: a bounds error has occurred
++ */
++bool xdr_stream_subsegment(struct xdr_stream *xdr, struct xdr_buf *subbuf,
++			   unsigned int nbytes)
++{
++	unsigned int remaining, offset, len;
++
++	if (xdr_buf_subsegment(xdr->buf, subbuf, xdr_stream_pos(xdr), nbytes))
++		return false;
++
++	if (subbuf->head[0].iov_len)
++		if (!__xdr_inline_decode(xdr, subbuf->head[0].iov_len))
++			return false;
++
++	remaining = subbuf->page_len;
++	offset = subbuf->page_base;
++	while (remaining) {
++		len = min_t(unsigned int, remaining, PAGE_SIZE) - offset;
++
++		if (xdr->p == xdr->end && !xdr_set_next_buffer(xdr))
++			return false;
++		if (!__xdr_inline_decode(xdr, len))
++			return false;
++
++		remaining -= len;
++		offset = 0;
++	}
++
++	return true;
++}
++EXPORT_SYMBOL_GPL(xdr_stream_subsegment);
++
+ /**
+  * xdr_buf_trim - lop at most "len" bytes off the end of "buf"
+  * @buf: buf to be trimmed
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+index c5154bc38e129..feac8c26fb87d 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+@@ -186,7 +186,7 @@ static int xprt_rdma_bc_send_request(struct rpc_rqst *rqst)
+ 
+ 	ret = rpcrdma_bc_send_request(rdma, rqst);
+ 	if (ret == -ENOTCONN)
+-		svc_close_xprt(sxprt);
++		svc_xprt_close(sxprt);
+ 	return ret;
+ }
+ 
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+index c3d588b149aaa..d6436c13d5c47 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+@@ -448,7 +448,6 @@ static ssize_t svc_rdma_encode_write_chunk(__be32 *src,
+  * svc_rdma_encode_write_list - Encode RPC Reply's Write chunk list
+  * @rctxt: Reply context with information about the RPC Call
+  * @sctxt: Send context for the RPC Reply
+- * @length: size in bytes of the payload in the first Write chunk
+  *
+  * The client provides a Write chunk list in the Call message. Fill
+  * in the segments in the first Write chunk in the Reply's transport
+@@ -465,12 +464,12 @@ static ssize_t svc_rdma_encode_write_chunk(__be32 *src,
+  */
+ static ssize_t
+ svc_rdma_encode_write_list(const struct svc_rdma_recv_ctxt *rctxt,
+-			   struct svc_rdma_send_ctxt *sctxt,
+-			   unsigned int length)
++			   struct svc_rdma_send_ctxt *sctxt)
+ {
+ 	ssize_t len, ret;
+ 
+-	ret = svc_rdma_encode_write_chunk(rctxt->rc_write_list, sctxt, length);
++	ret = svc_rdma_encode_write_chunk(rctxt->rc_write_list, sctxt,
++					  rctxt->rc_read_payload_length);
+ 	if (ret < 0)
+ 		return ret;
+ 	len = ret;
+@@ -923,21 +922,12 @@ int svc_rdma_sendto(struct svc_rqst *rqstp)
+ 		goto err0;
+ 	if (wr_lst) {
+ 		/* XXX: Presume the client sent only one Write chunk */
+-		unsigned long offset;
+-		unsigned int length;
+-
+-		if (rctxt->rc_read_payload_length) {
+-			offset = rctxt->rc_read_payload_offset;
+-			length = rctxt->rc_read_payload_length;
+-		} else {
+-			offset = xdr->head[0].iov_len;
+-			length = xdr->page_len;
+-		}
+-		ret = svc_rdma_send_write_chunk(rdma, wr_lst, xdr, offset,
+-						length);
++		ret = svc_rdma_send_write_chunk(rdma, wr_lst, xdr,
++						rctxt->rc_read_payload_offset,
++						rctxt->rc_read_payload_length);
+ 		if (ret < 0)
+ 			goto err2;
+-		if (svc_rdma_encode_write_list(rctxt, sctxt, length) < 0)
++		if (svc_rdma_encode_write_list(rctxt, sctxt) < 0)
+ 			goto err0;
+ 	} else {
+ 		if (xdr_stream_encode_item_absent(&sctxt->sc_stream) < 0)
+@@ -979,19 +969,19 @@ int svc_rdma_sendto(struct svc_rqst *rqstp)
+ }
+ 
+ /**
+- * svc_rdma_read_payload - special processing for a READ payload
++ * svc_rdma_result_payload - special processing for a result payload
+  * @rqstp: svc_rqst to operate on
+  * @offset: payload's byte offset in @xdr
+  * @length: size of payload, in bytes
+  *
+  * Returns zero on success.
+  *
+- * For the moment, just record the xdr_buf location of the READ
++ * For the moment, just record the xdr_buf location of the result
+  * payload. svc_rdma_sendto will use that location later when
+  * we actually send the payload.
+  */
+-int svc_rdma_read_payload(struct svc_rqst *rqstp, unsigned int offset,
+-			  unsigned int length)
++int svc_rdma_result_payload(struct svc_rqst *rqstp, unsigned int offset,
++			    unsigned int length)
+ {
+ 	struct svc_rdma_recv_ctxt *rctxt = rqstp->rq_xprt_ctxt;
+ 
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+index 5f7e3d12523fe..c895f80df659c 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+@@ -80,7 +80,7 @@ static const struct svc_xprt_ops svc_rdma_ops = {
+ 	.xpo_create = svc_rdma_create,
+ 	.xpo_recvfrom = svc_rdma_recvfrom,
+ 	.xpo_sendto = svc_rdma_sendto,
+-	.xpo_read_payload = svc_rdma_read_payload,
++	.xpo_result_payload = svc_rdma_result_payload,
+ 	.xpo_release_rqst = svc_rdma_release_rqst,
+ 	.xpo_detach = svc_rdma_detach,
+ 	.xpo_free = svc_rdma_free,
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 3ab726a668e8a..405bf3e6eb796 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -959,7 +959,7 @@ static struct sock *unix_find_other(struct net *net,
+ 		if (err)
+ 			goto fail;
+ 		inode = d_backing_inode(path.dentry);
+-		err = inode_permission(inode, MAY_WRITE);
++		err = path_permission(&path, MAY_WRITE);
+ 		if (err)
+ 			goto put_fail;
+ 
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 059b78d08f7af..0506a48f124c2 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -168,8 +168,9 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
+ 		"panic",
+ 		"do_exit",
+ 		"do_task_dead",
++		"kthread_exit",
+ 		"make_task_dead",
+-		"__module_put_and_exit",
++		"__module_put_and_kthread_exit",
+ 		"complete_and_exit",
+ 		"__reiserfs_panic",
+ 		"lbug_with_loc",


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-07-05 10:51 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-07-05 10:51 UTC (permalink / raw
  To: gentoo-commits

commit:     85f317897bbead9b637ab92276f763a5c4361029
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jul  5 10:51:02 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jul  5 10:51:02 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=85f31789

Linux patch 5.10.221

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1220_linux-5.10.221.patch | 10454 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 10458 insertions(+)

diff --git a/0000_README b/0000_README
index 7461b294..a8ab8c34 100644
--- a/0000_README
+++ b/0000_README
@@ -923,6 +923,10 @@ Patch:  1219_linux-5.10.220.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.220
 
+Patch:  1220_linux-5.10.221.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.221
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1220_linux-5.10.221.patch b/1220_linux-5.10.221.patch
new file mode 100644
index 00000000..e313136b
--- /dev/null
+++ b/1220_linux-5.10.221.patch
@@ -0,0 +1,10454 @@
+diff --git a/Documentation/devicetree/bindings/i2c/google,cros-ec-i2c-tunnel.yaml b/Documentation/devicetree/bindings/i2c/google,cros-ec-i2c-tunnel.yaml
+index b386e4128a791..d3e01d8c043cd 100644
+--- a/Documentation/devicetree/bindings/i2c/google,cros-ec-i2c-tunnel.yaml
++++ b/Documentation/devicetree/bindings/i2c/google,cros-ec-i2c-tunnel.yaml
+@@ -22,7 +22,7 @@ description: |
+   google,cros-ec-spi or google,cros-ec-i2c.
+ 
+ allOf:
+-  - $ref: i2c-controller.yaml#
++  - $ref: /schemas/i2c/i2c-controller.yaml#
+ 
+ properties:
+   compatible:
+diff --git a/Makefile b/Makefile
+index 9304408d8ace2..b0e22161cd553 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 220
++SUBLEVEL = 221
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/exynos4210-smdkv310.dts b/arch/arm/boot/dts/exynos4210-smdkv310.dts
+index c5609afa6101c..a6622a0d9b58b 100644
+--- a/arch/arm/boot/dts/exynos4210-smdkv310.dts
++++ b/arch/arm/boot/dts/exynos4210-smdkv310.dts
+@@ -84,7 +84,7 @@ eeprom@52 {
+ &keypad {
+ 	samsung,keypad-num-rows = <2>;
+ 	samsung,keypad-num-columns = <8>;
+-	linux,keypad-no-autorepeat;
++	linux,input-no-autorepeat;
+ 	wakeup-source;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&keypad_rows &keypad_cols>;
+diff --git a/arch/arm/boot/dts/exynos4412-origen.dts b/arch/arm/boot/dts/exynos4412-origen.dts
+index e2d76ea4404e8..7138bf291437e 100644
+--- a/arch/arm/boot/dts/exynos4412-origen.dts
++++ b/arch/arm/boot/dts/exynos4412-origen.dts
+@@ -447,7 +447,7 @@ buck9_reg: BUCK9 {
+ &keypad {
+ 	samsung,keypad-num-rows = <3>;
+ 	samsung,keypad-num-columns = <2>;
+-	linux,keypad-no-autorepeat;
++	linux,input-no-autorepeat;
+ 	wakeup-source;
+ 	pinctrl-0 = <&keypad_rows &keypad_cols>;
+ 	pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/exynos4412-smdk4412.dts b/arch/arm/boot/dts/exynos4412-smdk4412.dts
+index 49971203a8aa0..6e6bef907a72e 100644
+--- a/arch/arm/boot/dts/exynos4412-smdk4412.dts
++++ b/arch/arm/boot/dts/exynos4412-smdk4412.dts
+@@ -65,7 +65,7 @@ cooling_map1: map1 {
+ &keypad {
+ 	samsung,keypad-num-rows = <3>;
+ 	samsung,keypad-num-columns = <8>;
+-	linux,keypad-no-autorepeat;
++	linux,input-no-autorepeat;
+ 	wakeup-source;
+ 	pinctrl-0 = <&keypad_rows &keypad_cols>;
+ 	pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/rk3066a.dtsi b/arch/arm/boot/dts/rk3066a.dtsi
+index bbc3bff508560..75ea8e03bef0a 100644
+--- a/arch/arm/boot/dts/rk3066a.dtsi
++++ b/arch/arm/boot/dts/rk3066a.dtsi
+@@ -124,6 +124,7 @@ hdmi: hdmi@10116000 {
+ 		pinctrl-0 = <&hdmii2c_xfer>, <&hdmi_hpd>;
+ 		power-domains = <&power RK3066_PD_VIO>;
+ 		rockchip,grf = <&grf>;
++		#sound-dai-cells = <0>;
+ 		status = "disabled";
+ 
+ 		ports {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3368.dtsi b/arch/arm64/boot/dts/rockchip/rk3368.dtsi
+index 3746f23dc3df4..bc4df59df3f1e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3368.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3368.dtsi
+@@ -687,6 +687,7 @@ spdif: spdif@ff880000 {
+ 		dma-names = "tx";
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&spdif_tx>;
++		#sound-dai-cells = <0>;
+ 		status = "disabled";
+ 	};
+ 
+@@ -698,6 +699,7 @@ i2s_2ch: i2s-2ch@ff890000 {
+ 		clocks = <&cru SCLK_I2S_2CH>, <&cru HCLK_I2S_2CH>;
+ 		dmas = <&dmac_bus 6>, <&dmac_bus 7>;
+ 		dma-names = "tx", "rx";
++		#sound-dai-cells = <0>;
+ 		status = "disabled";
+ 	};
+ 
+@@ -711,6 +713,7 @@ i2s_8ch: i2s-8ch@ff898000 {
+ 		dma-names = "tx", "rx";
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&i2s_8ch_bus>;
++		#sound-dai-cells = <0>;
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 912b83e784bbf..48ee1fe3aca40 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -410,6 +410,7 @@ struct kvm_vcpu_arch {
+ #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
+ #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
+ #define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
++#define KVM_ARM64_VCPU_IN_WFI		(1 << 8) /* WFI instruction trapped */
+ 
+ #define vcpu_has_sve(vcpu) (system_supports_sve() && \
+ 			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
+diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
+index 107f08e03b9fd..2f120f254e26a 100644
+--- a/arch/arm64/include/asm/unistd32.h
++++ b/arch/arm64/include/asm/unistd32.h
+@@ -840,7 +840,7 @@ __SYSCALL(__NR_pselect6_time64, compat_sys_pselect6_time64)
+ #define __NR_ppoll_time64 414
+ __SYSCALL(__NR_ppoll_time64, compat_sys_ppoll_time64)
+ #define __NR_io_pgetevents_time64 416
+-__SYSCALL(__NR_io_pgetevents_time64, sys_io_pgetevents)
++__SYSCALL(__NR_io_pgetevents_time64, compat_sys_io_pgetevents_time64)
+ #define __NR_recvmmsg_time64 417
+ __SYSCALL(__NR_recvmmsg_time64, compat_sys_recvmmsg_time64)
+ #define __NR_mq_timedsend_time64 418
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 4d63fcd7574b2..afe8be2fef880 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -332,13 +332,15 @@ void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
+ 	 */
+ 	preempt_disable();
+ 	kvm_vgic_vmcr_sync(vcpu);
+-	vgic_v4_put(vcpu, true);
++	vcpu->arch.flags |= KVM_ARM64_VCPU_IN_WFI;
++	vgic_v4_put(vcpu);
+ 	preempt_enable();
+ }
+ 
+ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
+ {
+ 	preempt_disable();
++	vcpu->arch.flags &= ~KVM_ARM64_VCPU_IN_WFI;
+ 	vgic_v4_load(vcpu);
+ 	preempt_enable();
+ }
+@@ -649,7 +651,7 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
+ 		if (kvm_check_request(KVM_REQ_RELOAD_GICv4, vcpu)) {
+ 			/* The distributor enable bits were changed */
+ 			preempt_disable();
+-			vgic_v4_put(vcpu, false);
++			vgic_v4_put(vcpu);
+ 			vgic_v4_load(vcpu);
+ 			preempt_enable();
+ 		}
+diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
+index 9cdf39a94a635..29c12bf9601ab 100644
+--- a/arch/arm64/kvm/vgic/vgic-v3.c
++++ b/arch/arm64/kvm/vgic/vgic-v3.c
+@@ -682,7 +682,7 @@ void vgic_v3_put(struct kvm_vcpu *vcpu)
+ {
+ 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ 
+-	WARN_ON(vgic_v4_put(vcpu, false));
++	WARN_ON(vgic_v4_put(vcpu));
+ 
+ 	vgic_v3_vmcr_sync(vcpu);
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
+index b5fa73c9fd355..cdfaaeabbb7dc 100644
+--- a/arch/arm64/kvm/vgic/vgic-v4.c
++++ b/arch/arm64/kvm/vgic/vgic-v4.c
+@@ -310,14 +310,15 @@ void vgic_v4_teardown(struct kvm *kvm)
+ 	its_vm->vpes = NULL;
+ }
+ 
+-int vgic_v4_put(struct kvm_vcpu *vcpu, bool need_db)
++int vgic_v4_put(struct kvm_vcpu *vcpu)
+ {
+ 	struct its_vpe *vpe = &vcpu->arch.vgic_cpu.vgic_v3.its_vpe;
+ 
+ 	if (!vgic_supports_direct_msis(vcpu->kvm) || !vpe->resident)
+ 		return 0;
+ 
+-	return its_make_vpe_non_resident(vpe, need_db);
++	return its_make_vpe_non_resident(vpe,
++			vcpu->arch.flags & KVM_ARM64_VCPU_IN_WFI);
+ }
+ 
+ int vgic_v4_load(struct kvm_vcpu *vcpu)
+@@ -328,6 +329,9 @@ int vgic_v4_load(struct kvm_vcpu *vcpu)
+ 	if (!vgic_supports_direct_msis(vcpu->kvm) || vpe->resident)
+ 		return 0;
+ 
++	if (vcpu->arch.flags & KVM_ARM64_VCPU_IN_WFI)
++		return 0;
++
+ 	/*
+ 	 * Before making the VPE resident, make sure the redistributor
+ 	 * corresponding to our current CPU expects us here. See the
+diff --git a/arch/csky/include/uapi/asm/unistd.h b/arch/csky/include/uapi/asm/unistd.h
+index ba40189297338..3594062a1bba0 100644
+--- a/arch/csky/include/uapi/asm/unistd.h
++++ b/arch/csky/include/uapi/asm/unistd.h
+@@ -7,6 +7,7 @@
+ #define __ARCH_WANT_SYS_CLONE3
+ #define __ARCH_WANT_SET_GET_RLIMIT
+ #define __ARCH_WANT_TIME32_SYSCALLS
++#define __ARCH_WANT_SYNC_FILE_RANGE2
+ #include <asm-generic/unistd.h>
+ 
+ #define __NR_set_thread_area	(__NR_arch_specific_syscall + 0)
+diff --git a/arch/hexagon/include/asm/syscalls.h b/arch/hexagon/include/asm/syscalls.h
+new file mode 100644
+index 0000000000000..40f2d08bec92c
+--- /dev/null
++++ b/arch/hexagon/include/asm/syscalls.h
+@@ -0,0 +1,6 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++
++#include <asm-generic/syscalls.h>
++
++asmlinkage long sys_hexagon_fadvise64_64(int fd, int advice,
++	                                  u32 a2, u32 a3, u32 a4, u32 a5);
+diff --git a/arch/hexagon/include/uapi/asm/unistd.h b/arch/hexagon/include/uapi/asm/unistd.h
+index 432c4db1b6239..21ae22306b5dc 100644
+--- a/arch/hexagon/include/uapi/asm/unistd.h
++++ b/arch/hexagon/include/uapi/asm/unistd.h
+@@ -36,5 +36,6 @@
+ #define __ARCH_WANT_SYS_VFORK
+ #define __ARCH_WANT_SYS_FORK
+ #define __ARCH_WANT_TIME32_SYSCALLS
++#define __ARCH_WANT_SYNC_FILE_RANGE2
+ 
+ #include <asm-generic/unistd.h>
+diff --git a/arch/hexagon/kernel/syscalltab.c b/arch/hexagon/kernel/syscalltab.c
+index 0fadd582cfc77..5d98bdc494ec2 100644
+--- a/arch/hexagon/kernel/syscalltab.c
++++ b/arch/hexagon/kernel/syscalltab.c
+@@ -14,6 +14,13 @@
+ #undef __SYSCALL
+ #define __SYSCALL(nr, call) [nr] = (call),
+ 
++SYSCALL_DEFINE6(hexagon_fadvise64_64, int, fd, int, advice,
++		SC_ARG64(offset), SC_ARG64(len))
++{
++	return ksys_fadvise64_64(fd, SC_VAL64(loff_t, offset), SC_VAL64(loff_t, len), advice);
++}
++#define sys_fadvise64_64 sys_hexagon_fadvise64_64
++
+ void *sys_call_table[__NR_syscalls] = {
+ #include <asm/unistd.h>
+ };
+diff --git a/arch/mips/bmips/setup.c b/arch/mips/bmips/setup.c
+index 16063081d61ec..ac75797739d22 100644
+--- a/arch/mips/bmips/setup.c
++++ b/arch/mips/bmips/setup.c
+@@ -110,7 +110,8 @@ static void bcm6358_quirks(void)
+ 	 * RAC flush causes kernel panics on BCM6358 when booting from TP1
+ 	 * because the bootloader is not initializing it properly.
+ 	 */
+-	bmips_rac_flush_disable = !!(read_c0_brcm_cmt_local() & (1 << 31));
++	bmips_rac_flush_disable = !!(read_c0_brcm_cmt_local() & (1 << 31)) ||
++				  !!BMIPS_GET_CBR();
+ }
+ 
+ static void bcm6368_quirks(void)
+diff --git a/arch/mips/kernel/syscalls/syscall_n32.tbl b/arch/mips/kernel/syscalls/syscall_n32.tbl
+index 32817c954435d..67fc2d9b7cb19 100644
+--- a/arch/mips/kernel/syscalls/syscall_n32.tbl
++++ b/arch/mips/kernel/syscalls/syscall_n32.tbl
+@@ -354,7 +354,7 @@
+ 412	n32	utimensat_time64		sys_utimensat
+ 413	n32	pselect6_time64			compat_sys_pselect6_time64
+ 414	n32	ppoll_time64			compat_sys_ppoll_time64
+-416	n32	io_pgetevents_time64		sys_io_pgetevents
++416	n32	io_pgetevents_time64		compat_sys_io_pgetevents_time64
+ 417	n32	recvmmsg_time64			compat_sys_recvmmsg_time64
+ 418	n32	mq_timedsend_time64		sys_mq_timedsend
+ 419	n32	mq_timedreceive_time64		sys_mq_timedreceive
+diff --git a/arch/mips/kernel/syscalls/syscall_o32.tbl b/arch/mips/kernel/syscalls/syscall_o32.tbl
+index 29f5f28cf5cea..6036af4f30e2d 100644
+--- a/arch/mips/kernel/syscalls/syscall_o32.tbl
++++ b/arch/mips/kernel/syscalls/syscall_o32.tbl
+@@ -403,7 +403,7 @@
+ 412	o32	utimensat_time64		sys_utimensat			sys_utimensat
+ 413	o32	pselect6_time64			sys_pselect6			compat_sys_pselect6_time64
+ 414	o32	ppoll_time64			sys_ppoll			compat_sys_ppoll_time64
+-416	o32	io_pgetevents_time64		sys_io_pgetevents		sys_io_pgetevents
++416	o32	io_pgetevents_time64		sys_io_pgetevents		compat_sys_io_pgetevents_time64
+ 417	o32	recvmmsg_time64			sys_recvmmsg			compat_sys_recvmmsg_time64
+ 418	o32	mq_timedsend_time64		sys_mq_timedsend		sys_mq_timedsend
+ 419	o32	mq_timedreceive_time64		sys_mq_timedreceive		sys_mq_timedreceive
+diff --git a/arch/mips/pci/ops-rc32434.c b/arch/mips/pci/ops-rc32434.c
+index 874ed6df97683..34b9323bdabb0 100644
+--- a/arch/mips/pci/ops-rc32434.c
++++ b/arch/mips/pci/ops-rc32434.c
+@@ -112,8 +112,8 @@ static int read_config_dword(struct pci_bus *bus, unsigned int devfn,
+ 	 * gives them time to settle
+ 	 */
+ 	if (where == PCI_VENDOR_ID) {
+-		if (ret == 0xffffffff || ret == 0x00000000 ||
+-		    ret == 0x0000ffff || ret == 0xffff0000) {
++		if (*val == 0xffffffff || *val == 0x00000000 ||
++		    *val == 0x0000ffff || *val == 0xffff0000) {
+ 			if (delay > 4)
+ 				return 0;
+ 			delay *= 2;
+diff --git a/arch/mips/pci/pcie-octeon.c b/arch/mips/pci/pcie-octeon.c
+old mode 100644
+new mode 100755
+index d919a0d813a17..38de2a9c3cf1a
+--- a/arch/mips/pci/pcie-octeon.c
++++ b/arch/mips/pci/pcie-octeon.c
+@@ -230,12 +230,18 @@ static inline uint64_t __cvmx_pcie_build_config_addr(int pcie_port, int bus,
+ {
+ 	union cvmx_pcie_address pcie_addr;
+ 	union cvmx_pciercx_cfg006 pciercx_cfg006;
++	union cvmx_pciercx_cfg032 pciercx_cfg032;
+ 
+ 	pciercx_cfg006.u32 =
+ 	    cvmx_pcie_cfgx_read(pcie_port, CVMX_PCIERCX_CFG006(pcie_port));
+ 	if ((bus <= pciercx_cfg006.s.pbnum) && (dev != 0))
+ 		return 0;
+ 
++	pciercx_cfg032.u32 =
++		cvmx_pcie_cfgx_read(pcie_port, CVMX_PCIERCX_CFG032(pcie_port));
++	if ((pciercx_cfg032.s.dlla == 0) || (pciercx_cfg032.s.lt == 1))
++		return 0;
++
+ 	pcie_addr.u64 = 0;
+ 	pcie_addr.config.upper = 2;
+ 	pcie_addr.config.io = 1;
+diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
+index dfe9254ee74b6..e9c70f9a2f505 100644
+--- a/arch/parisc/kernel/syscalls/syscall.tbl
++++ b/arch/parisc/kernel/syscalls/syscall.tbl
+@@ -108,7 +108,7 @@
+ 95	common	fchown			sys_fchown
+ 96	common	getpriority		sys_getpriority
+ 97	common	setpriority		sys_setpriority
+-98	common	recv			sys_recv
++98	common	recv			sys_recv			compat_sys_recv
+ 99	common	statfs			sys_statfs			compat_sys_statfs
+ 100	common	fstatfs			sys_fstatfs			compat_sys_fstatfs
+ 101	common	stat64			sys_stat64
+@@ -135,7 +135,7 @@
+ 120	common	clone			sys_clone_wrapper
+ 121	common	setdomainname		sys_setdomainname
+ 122	common	sendfile		sys_sendfile			compat_sys_sendfile
+-123	common	recvfrom		sys_recvfrom
++123	common	recvfrom		sys_recvfrom			compat_sys_recvfrom
+ 124	32	adjtimex		sys_adjtimex_time32
+ 124	64	adjtimex		sys_adjtimex
+ 125	common	mprotect		sys_mprotect
+diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
+index 1a60188f74ad8..f3b46a0fa7082 100644
+--- a/arch/powerpc/include/asm/hvcall.h
++++ b/arch/powerpc/include/asm/hvcall.h
+@@ -453,7 +453,7 @@ long plpar_hcall_norets_notrace(unsigned long opcode, ...);
+  * Used for all but the craziest of phyp interfaces (see plpar_hcall9)
+  */
+ #define PLPAR_HCALL_BUFSIZE 4
+-long plpar_hcall(unsigned long opcode, unsigned long *retbuf, ...);
++long plpar_hcall(unsigned long opcode, unsigned long retbuf[static PLPAR_HCALL_BUFSIZE], ...);
+ 
+ /**
+  * plpar_hcall_raw: - Make a hypervisor call without calculating hcall stats
+@@ -467,7 +467,7 @@ long plpar_hcall(unsigned long opcode, unsigned long *retbuf, ...);
+  * plpar_hcall, but plpar_hcall_raw works in real mode and does not
+  * calculate hypervisor call statistics.
+  */
+-long plpar_hcall_raw(unsigned long opcode, unsigned long *retbuf, ...);
++long plpar_hcall_raw(unsigned long opcode, unsigned long retbuf[static PLPAR_HCALL_BUFSIZE], ...);
+ 
+ /**
+  * plpar_hcall9: - Make a pseries hypervisor call with up to 9 return arguments
+@@ -478,8 +478,8 @@ long plpar_hcall_raw(unsigned long opcode, unsigned long *retbuf, ...);
+  * PLPAR_HCALL9_BUFSIZE to size the return argument buffer.
+  */
+ #define PLPAR_HCALL9_BUFSIZE 9
+-long plpar_hcall9(unsigned long opcode, unsigned long *retbuf, ...);
+-long plpar_hcall9_raw(unsigned long opcode, unsigned long *retbuf, ...);
++long plpar_hcall9(unsigned long opcode, unsigned long retbuf[static PLPAR_HCALL9_BUFSIZE], ...);
++long plpar_hcall9_raw(unsigned long opcode, unsigned long retbuf[static PLPAR_HCALL9_BUFSIZE], ...);
+ 
+ struct hvcall_mpp_data {
+ 	unsigned long entitled_mem;
+diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
+index 0182b291248ac..058d21f493fad 100644
+--- a/arch/powerpc/include/asm/io.h
++++ b/arch/powerpc/include/asm/io.h
+@@ -541,12 +541,12 @@ __do_out_asm(_rec_outl, "stwbrx")
+ #define __do_inw(port)		_rec_inw(port)
+ #define __do_inl(port)		_rec_inl(port)
+ #else /* CONFIG_PPC32 */
+-#define __do_outb(val, port)	writeb(val,(PCI_IO_ADDR)_IO_BASE+port);
+-#define __do_outw(val, port)	writew(val,(PCI_IO_ADDR)_IO_BASE+port);
+-#define __do_outl(val, port)	writel(val,(PCI_IO_ADDR)_IO_BASE+port);
+-#define __do_inb(port)		readb((PCI_IO_ADDR)_IO_BASE + port);
+-#define __do_inw(port)		readw((PCI_IO_ADDR)_IO_BASE + port);
+-#define __do_inl(port)		readl((PCI_IO_ADDR)_IO_BASE + port);
++#define __do_outb(val, port)	writeb(val,(PCI_IO_ADDR)(_IO_BASE+port));
++#define __do_outw(val, port)	writew(val,(PCI_IO_ADDR)(_IO_BASE+port));
++#define __do_outl(val, port)	writel(val,(PCI_IO_ADDR)(_IO_BASE+port));
++#define __do_inb(port)		readb((PCI_IO_ADDR)(_IO_BASE + port));
++#define __do_inw(port)		readw((PCI_IO_ADDR)(_IO_BASE + port));
++#define __do_inl(port)		readl((PCI_IO_ADDR)(_IO_BASE + port));
+ #endif /* !CONFIG_PPC32 */
+ 
+ #ifdef CONFIG_EEH
+@@ -562,12 +562,12 @@ __do_out_asm(_rec_outl, "stwbrx")
+ #define __do_writesw(a, b, n)	_outsw(PCI_FIX_ADDR(a),(b),(n))
+ #define __do_writesl(a, b, n)	_outsl(PCI_FIX_ADDR(a),(b),(n))
+ 
+-#define __do_insb(p, b, n)	readsb((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
+-#define __do_insw(p, b, n)	readsw((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
+-#define __do_insl(p, b, n)	readsl((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
+-#define __do_outsb(p, b, n)	writesb((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
+-#define __do_outsw(p, b, n)	writesw((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
+-#define __do_outsl(p, b, n)	writesl((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
++#define __do_insb(p, b, n)	readsb((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n))
++#define __do_insw(p, b, n)	readsw((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n))
++#define __do_insl(p, b, n)	readsl((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n))
++#define __do_outsb(p, b, n)	writesb((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n))
++#define __do_outsw(p, b, n)	writesw((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n))
++#define __do_outsl(p, b, n)	writesl((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n))
+ 
+ #define __do_memset_io(addr, c, n)	\
+ 				_memset_io(PCI_FIX_ADDR(addr), c, n)
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index 6b808bcdecd52..6df110c1254e2 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -186,9 +186,20 @@ do {								\
+ 		:						\
+ 		: label)
+ 
++#ifdef CONFIG_CC_IS_CLANG
++#define DS_FORM_CONSTRAINT "Z<>"
++#else
++#define DS_FORM_CONSTRAINT "YZ<>"
++#endif
++
+ #ifdef __powerpc64__
+-#define __put_user_asm2_goto(x, ptr, label)			\
+-	__put_user_asm_goto(x, ptr, label, "std")
++#define __put_user_asm2_goto(x, addr, label)			\
++	asm goto ("1: std%U1%X1 %0,%1	# put_user\n"		\
++		EX_TABLE(1b, %l2)				\
++		:						\
++		: "r" (x), DS_FORM_CONSTRAINT (*addr)		\
++		:						\
++		: label)
+ #else /* __powerpc64__ */
+ #define __put_user_asm2_goto(x, addr, label)			\
+ 	asm_volatile_goto(					\
+diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl
+index 1275daec7fec3..59f550062bb75 100644
+--- a/arch/powerpc/kernel/syscalls/syscall.tbl
++++ b/arch/powerpc/kernel/syscalls/syscall.tbl
+@@ -503,7 +503,7 @@
+ 412	32	utimensat_time64		sys_utimensat			sys_utimensat
+ 413	32	pselect6_time64			sys_pselect6			compat_sys_pselect6_time64
+ 414	32	ppoll_time64			sys_ppoll			compat_sys_ppoll_time64
+-416	32	io_pgetevents_time64		sys_io_pgetevents		sys_io_pgetevents
++416	32	io_pgetevents_time64		sys_io_pgetevents		compat_sys_io_pgetevents_time64
+ 417	32	recvmmsg_time64			sys_recvmmsg			compat_sys_recvmmsg_time64
+ 418	32	mq_timedsend_time64		sys_mq_timedsend		sys_mq_timedsend
+ 419	32	mq_timedreceive_time64		sys_mq_timedreceive		sys_mq_timedreceive
+diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl
+index 28c1680004834..b846233b81dcf 100644
+--- a/arch/s390/kernel/syscalls/syscall.tbl
++++ b/arch/s390/kernel/syscalls/syscall.tbl
+@@ -418,7 +418,7 @@
+ 412	32	utimensat_time64	-				sys_utimensat
+ 413	32	pselect6_time64		-				compat_sys_pselect6_time64
+ 414	32	ppoll_time64		-				compat_sys_ppoll_time64
+-416	32	io_pgetevents_time64	-				sys_io_pgetevents
++416	32	io_pgetevents_time64	-				compat_sys_io_pgetevents_time64
+ 417	32	recvmmsg_time64		-				compat_sys_recvmmsg_time64
+ 418	32	mq_timedsend_time64	-				sys_mq_timedsend
+ 419	32	mq_timedreceive_time64	-				sys_mq_timedreceive
+diff --git a/arch/sparc/kernel/sys32.S b/arch/sparc/kernel/sys32.S
+index a45f0f31fe51a..a3d308f2043e5 100644
+--- a/arch/sparc/kernel/sys32.S
++++ b/arch/sparc/kernel/sys32.S
+@@ -18,224 +18,3 @@ sys32_mmap2:
+ 	sethi		%hi(sys_mmap), %g1
+ 	jmpl		%g1 + %lo(sys_mmap), %g0
+ 	 sllx		%o5, 12, %o5
+-
+-	.align		32
+-	.globl		sys32_socketcall
+-sys32_socketcall:	/* %o0=call, %o1=args */
+-	cmp		%o0, 1
+-	bl,pn		%xcc, do_einval
+-	 cmp		%o0, 18
+-	bg,pn		%xcc, do_einval
+-	 sub		%o0, 1, %o0
+-	sllx		%o0, 5, %o0
+-	sethi		%hi(__socketcall_table_begin), %g2
+-	or		%g2, %lo(__socketcall_table_begin), %g2
+-	jmpl		%g2 + %o0, %g0
+-	 nop
+-do_einval:
+-	retl
+-	 mov		-EINVAL, %o0
+-
+-	.align		32
+-__socketcall_table_begin:
+-
+-	/* Each entry is exactly 32 bytes. */
+-do_sys_socket: /* sys_socket(int, int, int) */
+-1:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_socket), %g1
+-2:	ldswa		[%o1 + 0x8] %asi, %o2
+-	jmpl		%g1 + %lo(sys_socket), %g0
+-3:	 ldswa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-	nop
+-do_sys_bind: /* sys_bind(int fd, struct sockaddr *, int) */
+-4:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_bind), %g1
+-5:	ldswa		[%o1 + 0x8] %asi, %o2
+-	jmpl		%g1 + %lo(sys_bind), %g0
+-6:	 lduwa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-	nop
+-do_sys_connect: /* sys_connect(int, struct sockaddr *, int) */
+-7:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_connect), %g1
+-8:	ldswa		[%o1 + 0x8] %asi, %o2
+-	jmpl		%g1 + %lo(sys_connect), %g0
+-9:	 lduwa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-	nop
+-do_sys_listen: /* sys_listen(int, int) */
+-10:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_listen), %g1
+-	jmpl		%g1 + %lo(sys_listen), %g0
+-11:	 ldswa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-	nop
+-	nop
+-do_sys_accept: /* sys_accept(int, struct sockaddr *, int *) */
+-12:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_accept), %g1
+-13:	lduwa		[%o1 + 0x8] %asi, %o2
+-	jmpl		%g1 + %lo(sys_accept), %g0
+-14:	 lduwa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-	nop
+-do_sys_getsockname: /* sys_getsockname(int, struct sockaddr *, int *) */
+-15:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_getsockname), %g1
+-16:	lduwa		[%o1 + 0x8] %asi, %o2
+-	jmpl		%g1 + %lo(sys_getsockname), %g0
+-17:	 lduwa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-	nop
+-do_sys_getpeername: /* sys_getpeername(int, struct sockaddr *, int *) */
+-18:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_getpeername), %g1
+-19:	lduwa		[%o1 + 0x8] %asi, %o2
+-	jmpl		%g1 + %lo(sys_getpeername), %g0
+-20:	 lduwa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-	nop
+-do_sys_socketpair: /* sys_socketpair(int, int, int, int *) */
+-21:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_socketpair), %g1
+-22:	ldswa		[%o1 + 0x8] %asi, %o2
+-23:	lduwa		[%o1 + 0xc] %asi, %o3
+-	jmpl		%g1 + %lo(sys_socketpair), %g0
+-24:	 ldswa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-do_sys_send: /* sys_send(int, void *, size_t, unsigned int) */
+-25:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_send), %g1
+-26:	lduwa		[%o1 + 0x8] %asi, %o2
+-27:	lduwa		[%o1 + 0xc] %asi, %o3
+-	jmpl		%g1 + %lo(sys_send), %g0
+-28:	 lduwa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-do_sys_recv: /* sys_recv(int, void *, size_t, unsigned int) */
+-29:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_recv), %g1
+-30:	lduwa		[%o1 + 0x8] %asi, %o2
+-31:	lduwa		[%o1 + 0xc] %asi, %o3
+-	jmpl		%g1 + %lo(sys_recv), %g0
+-32:	 lduwa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-do_sys_sendto: /* sys_sendto(int, u32, compat_size_t, unsigned int, u32, int) */
+-33:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_sendto), %g1
+-34:	lduwa		[%o1 + 0x8] %asi, %o2
+-35:	lduwa		[%o1 + 0xc] %asi, %o3
+-36:	lduwa		[%o1 + 0x10] %asi, %o4
+-37:	ldswa		[%o1 + 0x14] %asi, %o5
+-	jmpl		%g1 + %lo(sys_sendto), %g0
+-38:	 lduwa		[%o1 + 0x4] %asi, %o1
+-do_sys_recvfrom: /* sys_recvfrom(int, u32, compat_size_t, unsigned int, u32, u32) */
+-39:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_recvfrom), %g1
+-40:	lduwa		[%o1 + 0x8] %asi, %o2
+-41:	lduwa		[%o1 + 0xc] %asi, %o3
+-42:	lduwa		[%o1 + 0x10] %asi, %o4
+-43:	lduwa		[%o1 + 0x14] %asi, %o5
+-	jmpl		%g1 + %lo(sys_recvfrom), %g0
+-44:	 lduwa		[%o1 + 0x4] %asi, %o1
+-do_sys_shutdown: /* sys_shutdown(int, int) */
+-45:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_shutdown), %g1
+-	jmpl		%g1 + %lo(sys_shutdown), %g0
+-46:	 ldswa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-	nop
+-	nop
+-do_sys_setsockopt: /* sys_setsockopt(int, int, int, char *, int) */
+-47:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_setsockopt), %g1
+-48:	ldswa		[%o1 + 0x8] %asi, %o2
+-49:	lduwa		[%o1 + 0xc] %asi, %o3
+-50:	ldswa		[%o1 + 0x10] %asi, %o4
+-	jmpl		%g1 + %lo(sys_setsockopt), %g0
+-51:	 ldswa		[%o1 + 0x4] %asi, %o1
+-	nop
+-do_sys_getsockopt: /* sys_getsockopt(int, int, int, u32, u32) */
+-52:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_getsockopt), %g1
+-53:	ldswa		[%o1 + 0x8] %asi, %o2
+-54:	lduwa		[%o1 + 0xc] %asi, %o3
+-55:	lduwa		[%o1 + 0x10] %asi, %o4
+-	jmpl		%g1 + %lo(sys_getsockopt), %g0
+-56:	 ldswa		[%o1 + 0x4] %asi, %o1
+-	nop
+-do_sys_sendmsg: /* compat_sys_sendmsg(int, struct compat_msghdr *, unsigned int) */
+-57:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(compat_sys_sendmsg), %g1
+-58:	lduwa		[%o1 + 0x8] %asi, %o2
+-	jmpl		%g1 + %lo(compat_sys_sendmsg), %g0
+-59:	 lduwa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-	nop
+-do_sys_recvmsg: /* compat_sys_recvmsg(int, struct compat_msghdr *, unsigned int) */
+-60:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(compat_sys_recvmsg), %g1
+-61:	lduwa		[%o1 + 0x8] %asi, %o2
+-	jmpl		%g1 + %lo(compat_sys_recvmsg), %g0
+-62:	 lduwa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-	nop
+-do_sys_accept4: /* sys_accept4(int, struct sockaddr *, int *, int) */
+-63:	ldswa		[%o1 + 0x0] %asi, %o0
+-	sethi		%hi(sys_accept4), %g1
+-64:	lduwa		[%o1 + 0x8] %asi, %o2
+-65:	ldswa		[%o1 + 0xc] %asi, %o3
+-	jmpl		%g1 + %lo(sys_accept4), %g0
+-66:	 lduwa		[%o1 + 0x4] %asi, %o1
+-	nop
+-	nop
+-
+-	.section	__ex_table,"a"
+-	.align		4
+-	.word		1b, __retl_efault, 2b, __retl_efault
+-	.word		3b, __retl_efault, 4b, __retl_efault
+-	.word		5b, __retl_efault, 6b, __retl_efault
+-	.word		7b, __retl_efault, 8b, __retl_efault
+-	.word		9b, __retl_efault, 10b, __retl_efault
+-	.word		11b, __retl_efault, 12b, __retl_efault
+-	.word		13b, __retl_efault, 14b, __retl_efault
+-	.word		15b, __retl_efault, 16b, __retl_efault
+-	.word		17b, __retl_efault, 18b, __retl_efault
+-	.word		19b, __retl_efault, 20b, __retl_efault
+-	.word		21b, __retl_efault, 22b, __retl_efault
+-	.word		23b, __retl_efault, 24b, __retl_efault
+-	.word		25b, __retl_efault, 26b, __retl_efault
+-	.word		27b, __retl_efault, 28b, __retl_efault
+-	.word		29b, __retl_efault, 30b, __retl_efault
+-	.word		31b, __retl_efault, 32b, __retl_efault
+-	.word		33b, __retl_efault, 34b, __retl_efault
+-	.word		35b, __retl_efault, 36b, __retl_efault
+-	.word		37b, __retl_efault, 38b, __retl_efault
+-	.word		39b, __retl_efault, 40b, __retl_efault
+-	.word		41b, __retl_efault, 42b, __retl_efault
+-	.word		43b, __retl_efault, 44b, __retl_efault
+-	.word		45b, __retl_efault, 46b, __retl_efault
+-	.word		47b, __retl_efault, 48b, __retl_efault
+-	.word		49b, __retl_efault, 50b, __retl_efault
+-	.word		51b, __retl_efault, 52b, __retl_efault
+-	.word		53b, __retl_efault, 54b, __retl_efault
+-	.word		55b, __retl_efault, 56b, __retl_efault
+-	.word		57b, __retl_efault, 58b, __retl_efault
+-	.word		59b, __retl_efault, 60b, __retl_efault
+-	.word		61b, __retl_efault, 62b, __retl_efault
+-	.word		63b, __retl_efault, 64b, __retl_efault
+-	.word		65b, __retl_efault, 66b, __retl_efault
+-	.previous
+diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl
+index 78160260991be..e38a5773d85e4 100644
+--- a/arch/sparc/kernel/syscalls/syscall.tbl
++++ b/arch/sparc/kernel/syscalls/syscall.tbl
+@@ -117,7 +117,7 @@
+ 90	common	dup2			sys_dup2
+ 91	32	setfsuid32		sys_setfsuid
+ 92	common	fcntl			sys_fcntl			compat_sys_fcntl
+-93	common	select			sys_select
++93	common	select			sys_select			compat_sys_select
+ 94	32	setfsgid32		sys_setfsgid
+ 95	common	fsync			sys_fsync
+ 96	common	setpriority		sys_setpriority
+@@ -155,7 +155,7 @@
+ 123	32	fchown			sys_fchown16
+ 123	64	fchown			sys_fchown
+ 124	common	fchmod			sys_fchmod
+-125	common	recvfrom		sys_recvfrom
++125	common	recvfrom		sys_recvfrom			compat_sys_recvfrom
+ 126	32	setreuid		sys_setreuid16
+ 126	64	setreuid		sys_setreuid
+ 127	32	setregid		sys_setregid16
+@@ -247,7 +247,7 @@
+ 204	32	readdir			sys_old_readdir			compat_sys_old_readdir
+ 204	64	readdir			sys_nis_syscall
+ 205	common	readahead		sys_readahead			compat_sys_readahead
+-206	common	socketcall		sys_socketcall			sys32_socketcall
++206	common	socketcall		sys_socketcall			compat_sys_socketcall
+ 207	common	syslog			sys_syslog
+ 208	common	lookup_dcookie		sys_lookup_dcookie		compat_sys_lookup_dcookie
+ 209	common	fadvise64		sys_fadvise64			compat_sys_fadvise64
+@@ -461,7 +461,7 @@
+ 412	32	utimensat_time64		sys_utimensat			sys_utimensat
+ 413	32	pselect6_time64			sys_pselect6			compat_sys_pselect6_time64
+ 414	32	ppoll_time64			sys_ppoll			compat_sys_ppoll_time64
+-416	32	io_pgetevents_time64		sys_io_pgetevents		sys_io_pgetevents
++416	32	io_pgetevents_time64		sys_io_pgetevents		compat_sys_io_pgetevents_time64
+ 417	32	recvmmsg_time64			sys_recvmmsg			compat_sys_recvmmsg_time64
+ 418	32	mq_timedsend_time64		sys_mq_timedsend		sys_mq_timedsend
+ 419	32	mq_timedreceive_time64		sys_mq_timedreceive		sys_mq_timedreceive
+diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
+index 0d0667a9fbd70..bad9833120609 100644
+--- a/arch/x86/entry/syscalls/syscall_32.tbl
++++ b/arch/x86/entry/syscalls/syscall_32.tbl
+@@ -420,7 +420,7 @@
+ 412	i386	utimensat_time64	sys_utimensat
+ 413	i386	pselect6_time64		sys_pselect6			compat_sys_pselect6_time64
+ 414	i386	ppoll_time64		sys_ppoll			compat_sys_ppoll_time64
+-416	i386	io_pgetevents_time64	sys_io_pgetevents
++416	i386	io_pgetevents_time64	sys_io_pgetevents		compat_sys_io_pgetevents_time64
+ 417	i386	recvmmsg_time64		sys_recvmmsg			compat_sys_recvmmsg_time64
+ 418	i386	mq_timedsend_time64	sys_mq_timedsend
+ 419	i386	mq_timedreceive_time64	sys_mq_timedreceive
+diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
+index eb8fcede9e3bf..e8e3dbe7f1730 100644
+--- a/arch/x86/include/asm/cpu_device_id.h
++++ b/arch/x86/include/asm/cpu_device_id.h
+@@ -2,6 +2,39 @@
+ #ifndef _ASM_X86_CPU_DEVICE_ID
+ #define _ASM_X86_CPU_DEVICE_ID
+ 
++/*
++ * Can't use <linux/bitfield.h> because it generates expressions that
++ * cannot be used in structure initializers. Bitfield construction
++ * here must match the union in struct cpuinfo_86:
++ *	union {
++ *		struct {
++ *			__u8	x86_model;
++ *			__u8	x86;
++ *			__u8	x86_vendor;
++ *			__u8	x86_reserved;
++ *		};
++ *		__u32		x86_vfm;
++ *	};
++ */
++#define VFM_MODEL_BIT	0
++#define VFM_FAMILY_BIT	8
++#define VFM_VENDOR_BIT	16
++#define VFM_RSVD_BIT	24
++
++#define	VFM_MODEL_MASK	GENMASK(VFM_FAMILY_BIT - 1, VFM_MODEL_BIT)
++#define	VFM_FAMILY_MASK	GENMASK(VFM_VENDOR_BIT - 1, VFM_FAMILY_BIT)
++#define	VFM_VENDOR_MASK	GENMASK(VFM_RSVD_BIT - 1, VFM_VENDOR_BIT)
++
++#define VFM_MODEL(vfm)	(((vfm) & VFM_MODEL_MASK) >> VFM_MODEL_BIT)
++#define VFM_FAMILY(vfm)	(((vfm) & VFM_FAMILY_MASK) >> VFM_FAMILY_BIT)
++#define VFM_VENDOR(vfm)	(((vfm) & VFM_VENDOR_MASK) >> VFM_VENDOR_BIT)
++
++#define	VFM_MAKE(_vendor, _family, _model) (	\
++	((_model) << VFM_MODEL_BIT) |		\
++	((_family) << VFM_FAMILY_BIT) |		\
++	((_vendor) << VFM_VENDOR_BIT)		\
++)
++
+ /*
+  * Declare drivers belonging to specific x86 CPUs
+  * Similar in spirit to pci_device_id and related PCI functions
+@@ -20,6 +53,9 @@
+ #define X86_CENTAUR_FAM6_C7_D		0xd
+ #define X86_CENTAUR_FAM6_NANO		0xf
+ 
++/* x86_cpu_id::flags */
++#define X86_CPU_ID_FLAG_ENTRY_VALID	BIT(0)
++
+ #define X86_STEPPINGS(mins, maxs)    GENMASK(maxs, mins)
+ /**
+  * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
+@@ -46,6 +82,18 @@
+ 	.model		= _model,					\
+ 	.steppings	= _steppings,					\
+ 	.feature	= _feature,					\
++	.flags		= X86_CPU_ID_FLAG_ENTRY_VALID,			\
++	.driver_data	= (unsigned long) _data				\
++}
++
++#define X86_MATCH_VENDORID_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
++						    _steppings, _feature, _data) { \
++	.vendor		= _vendor,					\
++	.family		= _family,					\
++	.model		= _model,					\
++	.steppings	= _steppings,					\
++	.feature	= _feature,					\
++	.flags		= X86_CPU_ID_FLAG_ENTRY_VALID,			\
+ 	.driver_data	= (unsigned long) _data				\
+ }
+ 
+@@ -164,6 +212,56 @@
+ 	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6, INTEL_FAM6_##model, \
+ 						     steppings, X86_FEATURE_ANY, data)
+ 
++/**
++ * X86_MATCH_VFM - Match encoded vendor/family/model
++ * @vfm:	Encoded 8-bits each for vendor, family, model
++ * @data:	Driver specific data or NULL. The internal storage
++ *		format is unsigned long. The supplied value, pointer
++ *		etc. is cast to unsigned long internally.
++ *
++ * Stepping and feature are set to wildcards
++ */
++#define X86_MATCH_VFM(vfm, data)			\
++	X86_MATCH_VENDORID_FAM_MODEL_STEPPINGS_FEATURE(	\
++		VFM_VENDOR(vfm),			\
++		VFM_FAMILY(vfm),			\
++		VFM_MODEL(vfm),				\
++		X86_STEPPING_ANY, X86_FEATURE_ANY, data)
++
++/**
++ * X86_MATCH_VFM_STEPPINGS - Match encoded vendor/family/model/stepping
++ * @vfm:	Encoded 8-bits each for vendor, family, model
++ * @steppings:	Bitmask of steppings to match
++ * @data:	Driver specific data or NULL. The internal storage
++ *		format is unsigned long. The supplied value, pointer
++ *		etc. is cast to unsigned long internally.
++ *
++ * feature is set to wildcard
++ */
++#define X86_MATCH_VFM_STEPPINGS(vfm, steppings, data)	\
++	X86_MATCH_VENDORID_FAM_MODEL_STEPPINGS_FEATURE(	\
++		VFM_VENDOR(vfm),			\
++		VFM_FAMILY(vfm),			\
++		VFM_MODEL(vfm),				\
++		steppings, X86_FEATURE_ANY, data)
++
++/**
++ * X86_MATCH_VFM_FEATURE - Match encoded vendor/family/model/feature
++ * @vfm:	Encoded 8-bits each for vendor, family, model
++ * @feature:	A X86_FEATURE bit
++ * @data:	Driver specific data or NULL. The internal storage
++ *		format is unsigned long. The supplied value, pointer
++ *		etc. is cast to unsigned long internally.
++ *
++ * Steppings is set to wildcard
++ */
++#define X86_MATCH_VFM_FEATURE(vfm, feature, data)	\
++	X86_MATCH_VENDORID_FAM_MODEL_STEPPINGS_FEATURE(	\
++		VFM_VENDOR(vfm),			\
++		VFM_FAMILY(vfm),			\
++		VFM_MODEL(vfm),				\
++		X86_STEPPING_ANY, feature, data)
++
+ /*
+  * Match specific microcode revisions.
+  *
+diff --git a/arch/x86/include/asm/efi.h b/arch/x86/include/asm/efi.h
+index bc9758ef292ef..7c675832712d0 100644
+--- a/arch/x86/include/asm/efi.h
++++ b/arch/x86/include/asm/efi.h
+@@ -382,4 +382,15 @@ static inline void efi_fake_memmap_early(void)
+ }
+ #endif
+ 
++extern int __init efi_memmap_alloc(unsigned int num_entries,
++				   struct efi_memory_map_data *data);
++extern void __efi_memmap_free(u64 phys, unsigned long size,
++			      unsigned long flags);
++
++extern int __init efi_memmap_install(struct efi_memory_map_data *data);
++extern int __init efi_memmap_split_count(efi_memory_desc_t *md,
++					 struct range *range);
++extern void __init efi_memmap_insert(struct efi_memory_map *old_memmap,
++				     void *buf, struct efi_mem_range *mem);
++
+ #endif /* _ASM_X86_EFI_H */
+diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
+index 18f6b7c4bd79f..16cd56627574d 100644
+--- a/arch/x86/kernel/amd_nb.c
++++ b/arch/x86/kernel/amd_nb.c
+@@ -164,7 +164,14 @@ static int __amd_smn_rw(u16 node, u32 address, u32 *value, bool write)
+ 
+ int amd_smn_read(u16 node, u32 address, u32 *value)
+ {
+-	return __amd_smn_rw(node, address, value, false);
++	int err = __amd_smn_rw(node, address, value, false);
++
++	if (PCI_POSSIBLE_ERROR(*value)) {
++		err = -ENODEV;
++		*value = 0;
++	}
++
++	return err;
+ }
+ EXPORT_SYMBOL_GPL(amd_smn_read);
+ 
+diff --git a/arch/x86/kernel/cpu/match.c b/arch/x86/kernel/cpu/match.c
+index ad6776081e60d..ae71b8ef909c9 100644
+--- a/arch/x86/kernel/cpu/match.c
++++ b/arch/x86/kernel/cpu/match.c
+@@ -39,9 +39,7 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
+ 	const struct x86_cpu_id *m;
+ 	struct cpuinfo_x86 *c = &boot_cpu_data;
+ 
+-	for (m = match;
+-	     m->vendor | m->family | m->model | m->steppings | m->feature;
+-	     m++) {
++	for (m = match; m->flags & X86_CPU_ID_FLAG_ENTRY_VALID; m++) {
+ 		if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
+ 			continue;
+ 		if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
+diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
+index e42faa792c079..52e1f3f0b361c 100644
+--- a/arch/x86/kernel/time.c
++++ b/arch/x86/kernel/time.c
+@@ -27,25 +27,7 @@
+ 
+ unsigned long profile_pc(struct pt_regs *regs)
+ {
+-	unsigned long pc = instruction_pointer(regs);
+-
+-	if (!user_mode(regs) && in_lock_functions(pc)) {
+-#ifdef CONFIG_FRAME_POINTER
+-		return *(unsigned long *)(regs->bp + sizeof(long));
+-#else
+-		unsigned long *sp = (unsigned long *)regs->sp;
+-		/*
+-		 * Return address is either directly at stack pointer
+-		 * or above a saved flags. Eflags has bits 22-31 zero,
+-		 * kernel addresses don't.
+-		 */
+-		if (sp[0] >> 22)
+-			return sp[0];
+-		if (sp[1] >> 22)
+-			return sp[1];
+-#endif
+-	}
+-	return pc;
++	return instruction_pointer(regs);
+ }
+ EXPORT_SYMBOL(profile_pc);
+ 
+diff --git a/arch/x86/platform/efi/Makefile b/arch/x86/platform/efi/Makefile
+index 84b09c230cbd5..196f61fdbf289 100644
+--- a/arch/x86/platform/efi/Makefile
++++ b/arch/x86/platform/efi/Makefile
+@@ -3,5 +3,6 @@ OBJECT_FILES_NON_STANDARD_efi_thunk_$(BITS).o := y
+ KASAN_SANITIZE := n
+ GCOV_PROFILE := n
+ 
+-obj-$(CONFIG_EFI) 		+= quirks.o efi.o efi_$(BITS).o efi_stub_$(BITS).o
++obj-$(CONFIG_EFI) 		+= memmap.o quirks.o efi.o efi_$(BITS).o \
++				   efi_stub_$(BITS).o
+ obj-$(CONFIG_EFI_MIXED)		+= efi_thunk_$(BITS).o
+diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
+index 8a26e705cb060..41229bcbe0d90 100644
+--- a/arch/x86/platform/efi/efi.c
++++ b/arch/x86/platform/efi/efi.c
+@@ -234,9 +234,11 @@ int __init efi_memblock_x86_reserve_range(void)
+ 	data.desc_size		= e->efi_memdesc_size;
+ 	data.desc_version	= e->efi_memdesc_version;
+ 
+-	rv = efi_memmap_init_early(&data);
+-	if (rv)
+-		return rv;
++	if (!efi_enabled(EFI_PARAVIRT)) {
++		rv = efi_memmap_init_early(&data);
++		if (rv)
++			return rv;
++	}
+ 
+ 	if (add_efi_memmap || do_efi_soft_reserve())
+ 		do_add_efi_memmap();
+diff --git a/arch/x86/platform/efi/memmap.c b/arch/x86/platform/efi/memmap.c
+new file mode 100644
+index 0000000000000..872d310c426e2
+--- /dev/null
++++ b/arch/x86/platform/efi/memmap.c
+@@ -0,0 +1,249 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Common EFI memory map functions.
++ */
++
++#define pr_fmt(fmt) "efi: " fmt
++
++#include <linux/init.h>
++#include <linux/kernel.h>
++#include <linux/efi.h>
++#include <linux/io.h>
++#include <asm/early_ioremap.h>
++#include <asm/efi.h>
++#include <linux/memblock.h>
++#include <linux/slab.h>
++
++static phys_addr_t __init __efi_memmap_alloc_early(unsigned long size)
++{
++	return memblock_phys_alloc(size, SMP_CACHE_BYTES);
++}
++
++static phys_addr_t __init __efi_memmap_alloc_late(unsigned long size)
++{
++	unsigned int order = get_order(size);
++	struct page *p = alloc_pages(GFP_KERNEL, order);
++
++	if (!p)
++		return 0;
++
++	return PFN_PHYS(page_to_pfn(p));
++}
++
++void __init __efi_memmap_free(u64 phys, unsigned long size, unsigned long flags)
++{
++	if (flags & EFI_MEMMAP_MEMBLOCK) {
++		if (slab_is_available())
++			memblock_free_late(phys, size);
++		else
++			memblock_free(phys, size);
++	} else if (flags & EFI_MEMMAP_SLAB) {
++		struct page *p = pfn_to_page(PHYS_PFN(phys));
++		unsigned int order = get_order(size);
++
++		free_pages((unsigned long) page_address(p), order);
++	}
++}
++
++/**
++ * efi_memmap_alloc - Allocate memory for the EFI memory map
++ * @num_entries: Number of entries in the allocated map.
++ * @data: efi memmap installation parameters
++ *
++ * Depending on whether mm_init() has already been invoked or not,
++ * either memblock or "normal" page allocation is used.
++ *
++ * Returns zero on success, a negative error code on failure.
++ */
++int __init efi_memmap_alloc(unsigned int num_entries,
++		struct efi_memory_map_data *data)
++{
++	/* Expect allocation parameters are zero initialized */
++	WARN_ON(data->phys_map || data->size);
++
++	data->size = num_entries * efi.memmap.desc_size;
++	data->desc_version = efi.memmap.desc_version;
++	data->desc_size = efi.memmap.desc_size;
++	data->flags &= ~(EFI_MEMMAP_SLAB | EFI_MEMMAP_MEMBLOCK);
++	data->flags |= efi.memmap.flags & EFI_MEMMAP_LATE;
++
++	if (slab_is_available()) {
++		data->flags |= EFI_MEMMAP_SLAB;
++		data->phys_map = __efi_memmap_alloc_late(data->size);
++	} else {
++		data->flags |= EFI_MEMMAP_MEMBLOCK;
++		data->phys_map = __efi_memmap_alloc_early(data->size);
++	}
++
++	if (!data->phys_map)
++		return -ENOMEM;
++	return 0;
++}
++
++/**
++ * efi_memmap_install - Install a new EFI memory map in efi.memmap
++ * @ctx: map allocation parameters (address, size, flags)
++ *
++ * Unlike efi_memmap_init_*(), this function does not allow the caller
++ * to switch from early to late mappings. It simply uses the existing
++ * mapping function and installs the new memmap.
++ *
++ * Returns zero on success, a negative error code on failure.
++ */
++int __init efi_memmap_install(struct efi_memory_map_data *data)
++{
++	unsigned long size = efi.memmap.desc_size * efi.memmap.nr_map;
++	unsigned long flags = efi.memmap.flags;
++	u64 phys = efi.memmap.phys_map;
++	int ret;
++
++	efi_memmap_unmap();
++
++	if (efi_enabled(EFI_PARAVIRT))
++		return 0;
++
++	ret = __efi_memmap_init(data);
++	if (ret)
++		return ret;
++
++	__efi_memmap_free(phys, size, flags);
++	return 0;
++}
++
++/**
++ * efi_memmap_split_count - Count number of additional EFI memmap entries
++ * @md: EFI memory descriptor to split
++ * @range: Address range (start, end) to split around
++ *
++ * Returns the number of additional EFI memmap entries required to
++ * accommodate @range.
++ */
++int __init efi_memmap_split_count(efi_memory_desc_t *md, struct range *range)
++{
++	u64 m_start, m_end;
++	u64 start, end;
++	int count = 0;
++
++	start = md->phys_addr;
++	end = start + (md->num_pages << EFI_PAGE_SHIFT) - 1;
++
++	/* modifying range */
++	m_start = range->start;
++	m_end = range->end;
++
++	if (m_start <= start) {
++		/* split into 2 parts */
++		if (start < m_end && m_end < end)
++			count++;
++	}
++
++	if (start < m_start && m_start < end) {
++		/* split into 3 parts */
++		if (m_end < end)
++			count += 2;
++		/* split into 2 parts */
++		if (end <= m_end)
++			count++;
++	}
++
++	return count;
++}
++
++/**
++ * efi_memmap_insert - Insert a memory region in an EFI memmap
++ * @old_memmap: The existing EFI memory map structure
++ * @buf: Address of buffer to store new map
++ * @mem: Memory map entry to insert
++ *
++ * It is suggested that you call efi_memmap_split_count() first
++ * to see how large @buf needs to be.
++ */
++void __init efi_memmap_insert(struct efi_memory_map *old_memmap, void *buf,
++			      struct efi_mem_range *mem)
++{
++	u64 m_start, m_end, m_attr;
++	efi_memory_desc_t *md;
++	u64 start, end;
++	void *old, *new;
++
++	/* modifying range */
++	m_start = mem->range.start;
++	m_end = mem->range.end;
++	m_attr = mem->attribute;
++
++	/*
++	 * The EFI memory map deals with regions in EFI_PAGE_SIZE
++	 * units. Ensure that the region described by 'mem' is aligned
++	 * correctly.
++	 */
++	if (!IS_ALIGNED(m_start, EFI_PAGE_SIZE) ||
++	    !IS_ALIGNED(m_end + 1, EFI_PAGE_SIZE)) {
++		WARN_ON(1);
++		return;
++	}
++
++	for (old = old_memmap->map, new = buf;
++	     old < old_memmap->map_end;
++	     old += old_memmap->desc_size, new += old_memmap->desc_size) {
++
++		/* copy original EFI memory descriptor */
++		memcpy(new, old, old_memmap->desc_size);
++		md = new;
++		start = md->phys_addr;
++		end = md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT) - 1;
++
++		if (m_start <= start && end <= m_end)
++			md->attribute |= m_attr;
++
++		if (m_start <= start &&
++		    (start < m_end && m_end < end)) {
++			/* first part */
++			md->attribute |= m_attr;
++			md->num_pages = (m_end - md->phys_addr + 1) >>
++				EFI_PAGE_SHIFT;
++			/* latter part */
++			new += old_memmap->desc_size;
++			memcpy(new, old, old_memmap->desc_size);
++			md = new;
++			md->phys_addr = m_end + 1;
++			md->num_pages = (end - md->phys_addr + 1) >>
++				EFI_PAGE_SHIFT;
++		}
++
++		if ((start < m_start && m_start < end) && m_end < end) {
++			/* first part */
++			md->num_pages = (m_start - md->phys_addr) >>
++				EFI_PAGE_SHIFT;
++			/* middle part */
++			new += old_memmap->desc_size;
++			memcpy(new, old, old_memmap->desc_size);
++			md = new;
++			md->attribute |= m_attr;
++			md->phys_addr = m_start;
++			md->num_pages = (m_end - m_start + 1) >>
++				EFI_PAGE_SHIFT;
++			/* last part */
++			new += old_memmap->desc_size;
++			memcpy(new, old, old_memmap->desc_size);
++			md = new;
++			md->phys_addr = m_end + 1;
++			md->num_pages = (end - m_end) >>
++				EFI_PAGE_SHIFT;
++		}
++
++		if ((start < m_start && m_start < end) &&
++		    (end <= m_end)) {
++			/* first part */
++			md->num_pages = (m_start - md->phys_addr) >>
++				EFI_PAGE_SHIFT;
++			/* latter part */
++			new += old_memmap->desc_size;
++			memcpy(new, old, old_memmap->desc_size);
++			md = new;
++			md->phys_addr = m_start;
++			md->num_pages = (end - md->phys_addr + 1) >>
++				EFI_PAGE_SHIFT;
++			md->attribute |= m_attr;
++		}
++	}
++}
+diff --git a/block/ioctl.c b/block/ioctl.c
+index bc97698e0e8a3..11e692741f17c 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -32,7 +32,7 @@ static int blkpg_do_ioctl(struct block_device *bdev,
+ 	if (op == BLKPG_DEL_PARTITION)
+ 		return bdev_del_partition(bdev, p.pno);
+ 
+-	if (p.start < 0 || p.length <= 0 || p.start + p.length < 0)
++	if (p.start < 0 || p.length <= 0 || LLONG_MAX - p.length < p.start)
+ 		return -EINVAL;
+ 	/* Check that the partition is aligned to the block size */
+ 	if (!IS_ALIGNED(p.start | p.length, bdev_logical_block_size(bdev)))
+diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
+index 4914dbc445179..5bbbd015de5a4 100644
+--- a/drivers/acpi/acpica/exregion.c
++++ b/drivers/acpi/acpica/exregion.c
+@@ -44,7 +44,6 @@ acpi_ex_system_memory_space_handler(u32 function,
+ 	struct acpi_mem_mapping *mm = mem_info->cur_mm;
+ 	u32 length;
+ 	acpi_size map_length;
+-	acpi_size page_boundary_map_length;
+ #ifdef ACPI_MISALIGNMENT_NOT_SUPPORTED
+ 	u32 remainder;
+ #endif
+@@ -138,26 +137,8 @@ acpi_ex_system_memory_space_handler(u32 function,
+ 		map_length = (acpi_size)
+ 		    ((mem_info->address + mem_info->length) - address);
+ 
+-		/*
+-		 * If mapping the entire remaining portion of the region will cross
+-		 * a page boundary, just map up to the page boundary, do not cross.
+-		 * On some systems, crossing a page boundary while mapping regions
+-		 * can cause warnings if the pages have different attributes
+-		 * due to resource management.
+-		 *
+-		 * This has the added benefit of constraining a single mapping to
+-		 * one page, which is similar to the original code that used a 4k
+-		 * maximum window.
+-		 */
+-		page_boundary_map_length = (acpi_size)
+-		    (ACPI_ROUND_UP(address, ACPI_DEFAULT_PAGE_SIZE) - address);
+-		if (page_boundary_map_length == 0) {
+-			page_boundary_map_length = ACPI_DEFAULT_PAGE_SIZE;
+-		}
+-
+-		if (map_length > page_boundary_map_length) {
+-			map_length = page_boundary_map_length;
+-		}
++		if (map_length > ACPI_DEFAULT_PAGE_SIZE)
++			map_length = ACPI_DEFAULT_PAGE_SIZE;
+ 
+ 		/* Create a new mapping starting at the address given */
+ 
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index 66e53df758655..4c38846fea966 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -1346,6 +1346,9 @@ bool acpi_storage_d3(struct device *dev)
+ 	struct acpi_device *adev = ACPI_COMPANION(dev);
+ 	u8 val;
+ 
++	if (force_storage_d3())
++		return true;
++
+ 	if (!adev)
+ 		return false;
+ 	if (fwnode_property_read_u8(acpi_fwnode_handle(adev), "StorageD3Enable",
+diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
+index 125e4901c9b47..f6c929787c9e6 100644
+--- a/drivers/acpi/internal.h
++++ b/drivers/acpi/internal.h
+@@ -237,6 +237,15 @@ static inline int suspend_nvs_save(void) { return 0; }
+ static inline void suspend_nvs_restore(void) {}
+ #endif
+ 
++#ifdef CONFIG_X86
++bool force_storage_d3(void);
++#else
++static inline bool force_storage_d3(void)
++{
++	return false;
++}
++#endif
++
+ /*--------------------------------------------------------------------------
+ 				Device properties
+   -------------------------------------------------------------------------- */
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index a5cb9e1d48bcc..338e1f44906a9 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -341,6 +341,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "82BK"),
+ 		},
+ 	},
++	{
++	 .callback = video_detect_force_native,
++	 /* Lenovo Slim 7 16ARH7 */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "82UX"),
++		},
++	},
+ 	{
+ 	 .callback = video_detect_force_native,
+ 	 /* Lenovo ThinkPad X131e (3371 AMD version) */
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index 3f9a162be84e3..aa4f233373afc 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -177,3 +177,37 @@ bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *s
+ 
+ 	return ret;
+ }
++
++/*
++ * AMD systems from Renoir onwards *require* that the NVME controller
++ * is put into D3 over a Modern Standby / suspend-to-idle cycle.
++ *
++ * This is "typically" accomplished using the `StorageD3Enable`
++ * property in the _DSD that is checked via the `acpi_storage_d3` function
++ * but some OEM systems still don't have it in their BIOS.
++ *
++ * The Microsoft documentation for StorageD3Enable mentioned that Windows has
++ * a hardcoded allowlist for D3 support as well as a registry key to override
++ * the BIOS, which has been used for these cases.
++ *
++ * This allows quirking on Linux in a similar fashion.
++ *
++ * Cezanne systems shouldn't *normally* need this as the BIOS includes
++ * StorageD3Enable.  But for two reasons we have added it.
++ * 1) The BIOS on a number of Dell systems have ambiguity
++ *    between the same value used for _ADR on ACPI nodes GPP1.DEV0 and GPP1.NVME.
++ *    GPP1.NVME is needed to get StorageD3Enable node set properly.
++ *    https://bugzilla.kernel.org/show_bug.cgi?id=216440
++ *    https://bugzilla.kernel.org/show_bug.cgi?id=216773
++ *    https://bugzilla.kernel.org/show_bug.cgi?id=217003
++ * 2) On at least one HP system StorageD3Enable is missing on the second NVME
++ *    disk in the system.
++ * 3) On at least one HP Rembrandt system StorageD3Enable is missing on the only
++ *    NVME device.
++ */
++bool force_storage_d3(void)
++{
++	if (!cpu_feature_enabled(X86_FEATURE_ZEN))
++		return false;
++	return acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0;
++}
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 55a07dbe1d8a6..04b53bb7a692d 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -1893,8 +1893,10 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	n_ports = max(ahci_nr_ports(hpriv->cap), fls(hpriv->port_map));
+ 
+ 	host = ata_host_alloc_pinfo(&pdev->dev, ppi, n_ports);
+-	if (!host)
+-		return -ENOMEM;
++	if (!host) {
++		rc = -ENOMEM;
++		goto err_rm_sysfs_file;
++	}
+ 	host->private_data = hpriv;
+ 
+ 	if (ahci_init_msi(pdev, n_ports, hpriv) < 0) {
+@@ -1947,11 +1949,11 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* initialize adapter */
+ 	rc = ahci_configure_dma_masks(pdev, hpriv);
+ 	if (rc)
+-		return rc;
++		goto err_rm_sysfs_file;
+ 
+ 	rc = ahci_pci_reset_controller(host);
+ 	if (rc)
+-		return rc;
++		goto err_rm_sysfs_file;
+ 
+ 	ahci_pci_init_controller(host);
+ 	ahci_pci_print_info(host);
+@@ -1960,10 +1962,15 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	rc = ahci_host_activate(host, &ahci_sht);
+ 	if (rc)
+-		return rc;
++		goto err_rm_sysfs_file;
+ 
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	return 0;
++
++err_rm_sysfs_file:
++	sysfs_remove_file_from_group(&pdev->dev.kobj,
++				     &dev_attr_remapped_nvme.attr, NULL);
++	return rc;
+ }
+ 
+ static void ahci_shutdown_one(struct pci_dev *pdev)
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 702b8e061b36e..3717ed6fcccc2 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5420,8 +5420,10 @@ struct ata_host *ata_host_alloc(struct device *dev, int max_ports)
+ 	if (!host)
+ 		return NULL;
+ 
+-	if (!devres_open_group(dev, NULL, GFP_KERNEL))
+-		goto err_free;
++	if (!devres_open_group(dev, NULL, GFP_KERNEL)) {
++		kfree(host);
++		return NULL;
++	}
+ 
+ 	dr = devres_alloc(ata_devres_release, 0, GFP_KERNEL);
+ 	if (!dr)
+@@ -5453,8 +5455,6 @@ struct ata_host *ata_host_alloc(struct device *dev, int max_ports)
+ 
+  err_out:
+ 	devres_release_group(dev, NULL);
+- err_free:
+-	kfree(host);
+ 	return NULL;
+ }
+ EXPORT_SYMBOL_GPL(ata_host_alloc);
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 2c978941b488e..b13a60de5a863 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -2008,8 +2008,11 @@ static ssize_t uevent_show(struct device *dev, struct device_attribute *attr,
+ 	if (!env)
+ 		return -ENOMEM;
+ 
++	/* Synchronize with really_probe() */
++	device_lock(dev);
+ 	/* let the kset specific function add its keys */
+ 	retval = kset->uevent_ops->uevent(kset, &dev->kobj, env);
++	device_unlock(dev);
+ 	if (retval)
+ 		goto out;
+ 
+diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
+index 41220ce59659b..2bad11424734b 100644
+--- a/drivers/block/null_blk/zoned.c
++++ b/drivers/block/null_blk/zoned.c
+@@ -81,7 +81,7 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
+ 	if (dev->zone_max_active && dev->zone_max_open > dev->zone_max_active) {
+ 		dev->zone_max_open = dev->zone_max_active;
+ 		pr_info("changed the maximum number of open zones to %u\n",
+-			dev->nr_zones);
++			dev->zone_max_open);
+ 	} else if (dev->zone_max_open >= dev->nr_zones - dev->zone_nr_conv) {
+ 		dev->zone_max_open = 0;
+ 		pr_info("zone_max_open limit disabled, limit >= zone count\n");
+diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
+index 759d7828931d9..56ada48104f0e 100644
+--- a/drivers/bluetooth/ath3k.c
++++ b/drivers/bluetooth/ath3k.c
+@@ -3,7 +3,6 @@
+  * Copyright (c) 2008-2009 Atheros Communications Inc.
+  */
+ 
+-
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+@@ -129,7 +128,6 @@ MODULE_DEVICE_TABLE(usb, ath3k_table);
+  * for AR3012
+  */
+ static const struct usb_device_id ath3k_blist_tbl[] = {
+-
+ 	/* Atheros AR3012 with sflash firmware*/
+ 	{ USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0489, 0xe04d), .driver_info = BTUSB_ATH3012 },
+@@ -203,7 +201,7 @@ static inline void ath3k_log_failed_loading(int err, int len, int size,
+ #define TIMEGAP_USEC_MAX	100
+ 
+ static int ath3k_load_firmware(struct usb_device *udev,
+-				const struct firmware *firmware)
++			       const struct firmware *firmware)
+ {
+ 	u8 *send_buf;
+ 	int len = 0;
+@@ -238,9 +236,9 @@ static int ath3k_load_firmware(struct usb_device *udev,
+ 		memcpy(send_buf, firmware->data + sent, size);
+ 
+ 		err = usb_bulk_msg(udev, pipe, send_buf, size,
+-					&len, 3000);
++				   &len, 3000);
+ 
+-		if (err || (len != size)) {
++		if (err || len != size) {
+ 			ath3k_log_failed_loading(err, len, size, count);
+ 			goto error;
+ 		}
+@@ -263,7 +261,7 @@ static int ath3k_get_state(struct usb_device *udev, unsigned char *state)
+ }
+ 
+ static int ath3k_get_version(struct usb_device *udev,
+-			struct ath3k_version *version)
++			     struct ath3k_version *version)
+ {
+ 	return usb_control_msg_recv(udev, 0, ATH3K_GETVERSION,
+ 				    USB_TYPE_VENDOR | USB_DIR_IN, 0, 0,
+@@ -272,7 +270,7 @@ static int ath3k_get_version(struct usb_device *udev,
+ }
+ 
+ static int ath3k_load_fwfile(struct usb_device *udev,
+-		const struct firmware *firmware)
++			     const struct firmware *firmware)
+ {
+ 	u8 *send_buf;
+ 	int len = 0;
+@@ -311,8 +309,8 @@ static int ath3k_load_fwfile(struct usb_device *udev,
+ 		memcpy(send_buf, firmware->data + sent, size);
+ 
+ 		err = usb_bulk_msg(udev, pipe, send_buf, size,
+-					&len, 3000);
+-		if (err || (len != size)) {
++				   &len, 3000);
++		if (err || len != size) {
+ 			ath3k_log_failed_loading(err, len, size, count);
+ 			kfree(send_buf);
+ 			return err;
+@@ -426,7 +424,6 @@ static int ath3k_load_syscfg(struct usb_device *udev)
+ 	}
+ 
+ 	switch (fw_version.ref_clock) {
+-
+ 	case ATH3K_XTAL_FREQ_26M:
+ 		clk_value = 26;
+ 		break;
+@@ -442,7 +439,7 @@ static int ath3k_load_syscfg(struct usb_device *udev)
+ 	}
+ 
+ 	snprintf(filename, ATH3K_NAME_LEN, "ar3k/ramps_0x%08x_%d%s",
+-		le32_to_cpu(fw_version.rom_version), clk_value, ".dfu");
++		 le32_to_cpu(fw_version.rom_version), clk_value, ".dfu");
+ 
+ 	ret = request_firmware(&firmware, filename, &udev->dev);
+ 	if (ret < 0) {
+@@ -457,7 +454,7 @@ static int ath3k_load_syscfg(struct usb_device *udev)
+ }
+ 
+ static int ath3k_probe(struct usb_interface *intf,
+-			const struct usb_device_id *id)
++		       const struct usb_device_id *id)
+ {
+ 	const struct firmware *firmware;
+ 	struct usb_device *udev = interface_to_usbdev(intf);
+@@ -506,10 +503,10 @@ static int ath3k_probe(struct usb_interface *intf,
+ 	if (ret < 0) {
+ 		if (ret == -ENOENT)
+ 			BT_ERR("Firmware file \"%s\" not found",
+-							ATH3K_FIRMWARE);
++			       ATH3K_FIRMWARE);
+ 		else
+ 			BT_ERR("Firmware file \"%s\" request failed (err=%d)",
+-							ATH3K_FIRMWARE, ret);
++			       ATH3K_FIRMWARE, ret);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/counter/ti-eqep.c b/drivers/counter/ti-eqep.c
+index 65df9ef5b5bc0..6343998819296 100644
+--- a/drivers/counter/ti-eqep.c
++++ b/drivers/counter/ti-eqep.c
+@@ -6,6 +6,7 @@
+  */
+ 
+ #include <linux/bitops.h>
++#include <linux/clk.h>
+ #include <linux/counter.h>
+ #include <linux/kernel.h>
+ #include <linux/mod_devicetable.h>
+@@ -349,6 +350,7 @@ static int ti_eqep_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct ti_eqep_cnt *priv;
+ 	void __iomem *base;
++	struct clk *clk;
+ 	int err;
+ 
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+@@ -388,6 +390,10 @@ static int ti_eqep_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(dev);
+ 	pm_runtime_get_sync(dev);
+ 
++	clk = devm_clk_get_enabled(dev, NULL);
++	if (IS_ERR(clk))
++		return dev_err_probe(dev, PTR_ERR(clk), "failed to enable clock\n");
++
+ 	err = counter_register(&priv->counter);
+ 	if (err < 0) {
+ 		pm_runtime_put_sync(dev);
+diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
+index 5161b73c30c43..e91aeec71c811 100644
+--- a/drivers/dma/dma-axi-dmac.c
++++ b/drivers/dma/dma-axi-dmac.c
+@@ -1020,8 +1020,8 @@ static int axi_dmac_remove(struct platform_device *pdev)
+ {
+ 	struct axi_dmac *dmac = platform_get_drvdata(pdev);
+ 
+-	of_dma_controller_free(pdev->dev.of_node);
+ 	free_irq(dmac->irq, dmac);
++	of_dma_controller_free(pdev->dev.of_node);
+ 	tasklet_kill(&dmac->chan.vchan.task);
+ 	dma_async_device_unregister(&dmac->dma_dev);
+ 	clk_disable_unprepare(dmac->clk);
+diff --git a/drivers/dma/ioat/init.c b/drivers/dma/ioat/init.c
+index 191b592790073..2a47b9053bad8 100644
+--- a/drivers/dma/ioat/init.c
++++ b/drivers/dma/ioat/init.c
+@@ -15,7 +15,6 @@
+ #include <linux/workqueue.h>
+ #include <linux/prefetch.h>
+ #include <linux/dca.h>
+-#include <linux/aer.h>
+ #include <linux/sizes.h>
+ #include "dma.h"
+ #include "registers.h"
+@@ -535,18 +534,6 @@ static int ioat_probe(struct ioatdma_device *ioat_dma)
+ 	return err;
+ }
+ 
+-static int ioat_register(struct ioatdma_device *ioat_dma)
+-{
+-	int err = dma_async_device_register(&ioat_dma->dma_dev);
+-
+-	if (err) {
+-		ioat_disable_interrupts(ioat_dma);
+-		dma_pool_destroy(ioat_dma->completion_pool);
+-	}
+-
+-	return err;
+-}
+-
+ static void ioat_dma_remove(struct ioatdma_device *ioat_dma)
+ {
+ 	struct dma_device *dma = &ioat_dma->dma_dev;
+@@ -1181,9 +1168,9 @@ static int ioat3_dma_probe(struct ioatdma_device *ioat_dma, int dca)
+ 		       ioat_chan->reg_base + IOAT_DCACTRL_OFFSET);
+ 	}
+ 
+-	err = ioat_register(ioat_dma);
++	err = dma_async_device_register(&ioat_dma->dma_dev);
+ 	if (err)
+-		return err;
++		goto err_disable_interrupts;
+ 
+ 	ioat_kobject_add(ioat_dma, &ioat_ktype);
+ 
+@@ -1191,21 +1178,30 @@ static int ioat3_dma_probe(struct ioatdma_device *ioat_dma, int dca)
+ 		ioat_dma->dca = ioat_dca_init(pdev, ioat_dma->reg_base);
+ 
+ 	/* disable relaxed ordering */
+-	err = pcie_capability_read_word(pdev, IOAT_DEVCTRL_OFFSET, &val16);
+-	if (err)
+-		return pcibios_err_to_errno(err);
++	err = pcie_capability_read_word(pdev, PCI_EXP_DEVCTL, &val16);
++	if (err) {
++		err = pcibios_err_to_errno(err);
++		goto err_disable_interrupts;
++	}
+ 
+ 	/* clear relaxed ordering enable */
+-	val16 &= ~IOAT_DEVCTRL_ROE;
+-	err = pcie_capability_write_word(pdev, IOAT_DEVCTRL_OFFSET, val16);
+-	if (err)
+-		return pcibios_err_to_errno(err);
++	val16 &= ~PCI_EXP_DEVCTL_RELAX_EN;
++	err = pcie_capability_write_word(pdev, PCI_EXP_DEVCTL, val16);
++	if (err) {
++		err = pcibios_err_to_errno(err);
++		goto err_disable_interrupts;
++	}
+ 
+ 	if (ioat_dma->cap & IOAT_CAP_DPS)
+ 		writeb(ioat_pending_level + 1,
+ 		       ioat_dma->reg_base + IOAT_PREFETCH_LIMIT_OFFSET);
+ 
+ 	return 0;
++
++err_disable_interrupts:
++	ioat_disable_interrupts(ioat_dma);
++	dma_pool_destroy(ioat_dma->completion_pool);
++	return err;
+ }
+ 
+ static void ioat_shutdown(struct pci_dev *pdev)
+@@ -1350,6 +1346,8 @@ static int ioat_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	void __iomem * const *iomap;
+ 	struct device *dev = &pdev->dev;
+ 	struct ioatdma_device *device;
++	unsigned int i;
++	u8 version;
+ 	int err;
+ 
+ 	err = pcim_enable_device(pdev);
+@@ -1363,15 +1361,13 @@ static int ioat_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	if (!iomap)
+ 		return -ENOMEM;
+ 
+-	err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
+-	if (err)
+-		err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+-	if (err)
+-		return err;
++	version = readb(iomap[IOAT_MMIO_BAR] + IOAT_VER_OFFSET);
++	if (version < IOAT_VER_3_0)
++		return -ENODEV;
+ 
+-	err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
++	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+ 	if (err)
+-		err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
++		err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+ 	if (err)
+ 		return err;
+ 
+@@ -1381,22 +1377,19 @@ static int ioat_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	pci_set_master(pdev);
+ 	pci_set_drvdata(pdev, device);
+ 
+-	device->version = readb(device->reg_base + IOAT_VER_OFFSET);
++	device->version = version;
+ 	if (device->version >= IOAT_VER_3_4)
+ 		ioat_dca_enabled = 0;
+-	if (device->version >= IOAT_VER_3_0) {
+-		if (is_skx_ioat(pdev))
+-			device->version = IOAT_VER_3_2;
+-		err = ioat3_dma_probe(device, ioat_dca_enabled);
+-
+-		if (device->version >= IOAT_VER_3_3)
+-			pci_enable_pcie_error_reporting(pdev);
+-	} else
+-		return -ENODEV;
+ 
++	if (is_skx_ioat(pdev))
++		device->version = IOAT_VER_3_2;
++
++	err = ioat3_dma_probe(device, ioat_dca_enabled);
+ 	if (err) {
++		for (i = 0; i < IOAT_MAX_CHANS; i++)
++			kfree(device->idx[i]);
++		kfree(device);
+ 		dev_err(dev, "Intel(R) I/OAT DMA Engine init failed\n");
+-		pci_disable_pcie_error_reporting(pdev);
+ 		return -ENODEV;
+ 	}
+ 
+@@ -1419,7 +1412,6 @@ static void ioat_remove(struct pci_dev *pdev)
+ 		device->dca = NULL;
+ 	}
+ 
+-	pci_disable_pcie_error_reporting(pdev);
+ 	ioat_dma_remove(device);
+ }
+ 
+@@ -1458,6 +1450,7 @@ module_init(ioat_init_module);
+ static void __exit ioat_exit_module(void)
+ {
+ 	pci_unregister_driver(&ioat_pci_driver);
++	kmem_cache_destroy(ioat_sed_cache);
+ 	kmem_cache_destroy(ioat_cache);
+ }
+ module_exit(ioat_exit_module);
+diff --git a/drivers/dma/ioat/registers.h b/drivers/dma/ioat/registers.h
+index f55a5f92f1857..54cf0ad39887b 100644
+--- a/drivers/dma/ioat/registers.h
++++ b/drivers/dma/ioat/registers.h
+@@ -14,13 +14,6 @@
+ #define IOAT_PCI_CHANERR_INT_OFFSET		0x180
+ #define IOAT_PCI_CHANERRMASK_INT_OFFSET		0x184
+ 
+-/* PCIe config registers */
+-
+-/* EXPCAPID + N */
+-#define IOAT_DEVCTRL_OFFSET			0x8
+-/* relaxed ordering enable */
+-#define IOAT_DEVCTRL_ROE			0x10
+-
+ /* MMIO Device Registers */
+ #define IOAT_CHANCNT_OFFSET			0x00	/*  8-bit */
+ 
+diff --git a/drivers/firmware/efi/fdtparams.c b/drivers/firmware/efi/fdtparams.c
+index e901f8564ca0c..0ec83ba580972 100644
+--- a/drivers/firmware/efi/fdtparams.c
++++ b/drivers/firmware/efi/fdtparams.c
+@@ -30,11 +30,13 @@ static __initconst const char name[][22] = {
+ 
+ static __initconst const struct {
+ 	const char	path[17];
++	u8		paravirt;
+ 	const char	params[PARAMCOUNT][26];
+ } dt_params[] = {
+ 	{
+ #ifdef CONFIG_XEN    //  <-------17------>
+ 		.path = "/hypervisor/uefi",
++		.paravirt = 1,
+ 		.params = {
+ 			[SYSTAB] = "xen,uefi-system-table",
+ 			[MMBASE] = "xen,uefi-mmap-start",
+@@ -121,6 +123,8 @@ u64 __init efi_get_fdt_params(struct efi_memory_map_data *mm)
+ 			pr_err("Can't find property '%s' in DT!\n", pname);
+ 			return 0;
+ 		}
++		if (dt_params[i].paravirt)
++			set_bit(EFI_PARAVIRT, &efi.flags);
+ 		return systab;
+ 	}
+ notfound:
+diff --git a/drivers/firmware/efi/memmap.c b/drivers/firmware/efi/memmap.c
+index 2ff1883dc788d..77dd20f9df312 100644
+--- a/drivers/firmware/efi/memmap.c
++++ b/drivers/firmware/efi/memmap.c
+@@ -9,83 +9,11 @@
+ #include <linux/kernel.h>
+ #include <linux/efi.h>
+ #include <linux/io.h>
+-#include <asm/early_ioremap.h>
+ #include <linux/memblock.h>
+ #include <linux/slab.h>
+ 
+-static phys_addr_t __init __efi_memmap_alloc_early(unsigned long size)
+-{
+-	return memblock_phys_alloc(size, SMP_CACHE_BYTES);
+-}
+-
+-static phys_addr_t __init __efi_memmap_alloc_late(unsigned long size)
+-{
+-	unsigned int order = get_order(size);
+-	struct page *p = alloc_pages(GFP_KERNEL, order);
+-
+-	if (!p)
+-		return 0;
+-
+-	return PFN_PHYS(page_to_pfn(p));
+-}
+-
+-void __init __efi_memmap_free(u64 phys, unsigned long size, unsigned long flags)
+-{
+-	if (flags & EFI_MEMMAP_MEMBLOCK) {
+-		if (slab_is_available())
+-			memblock_free_late(phys, size);
+-		else
+-			memblock_free(phys, size);
+-	} else if (flags & EFI_MEMMAP_SLAB) {
+-		struct page *p = pfn_to_page(PHYS_PFN(phys));
+-		unsigned int order = get_order(size);
+-
+-		free_pages((unsigned long) page_address(p), order);
+-	}
+-}
+-
+-static void __init efi_memmap_free(void)
+-{
+-	__efi_memmap_free(efi.memmap.phys_map,
+-			efi.memmap.desc_size * efi.memmap.nr_map,
+-			efi.memmap.flags);
+-}
+-
+-/**
+- * efi_memmap_alloc - Allocate memory for the EFI memory map
+- * @num_entries: Number of entries in the allocated map.
+- * @data: efi memmap installation parameters
+- *
+- * Depending on whether mm_init() has already been invoked or not,
+- * either memblock or "normal" page allocation is used.
+- *
+- * Returns the physical address of the allocated memory map on
+- * success, zero on failure.
+- */
+-int __init efi_memmap_alloc(unsigned int num_entries,
+-		struct efi_memory_map_data *data)
+-{
+-	/* Expect allocation parameters are zero initialized */
+-	WARN_ON(data->phys_map || data->size);
+-
+-	data->size = num_entries * efi.memmap.desc_size;
+-	data->desc_version = efi.memmap.desc_version;
+-	data->desc_size = efi.memmap.desc_size;
+-	data->flags &= ~(EFI_MEMMAP_SLAB | EFI_MEMMAP_MEMBLOCK);
+-	data->flags |= efi.memmap.flags & EFI_MEMMAP_LATE;
+-
+-	if (slab_is_available()) {
+-		data->flags |= EFI_MEMMAP_SLAB;
+-		data->phys_map = __efi_memmap_alloc_late(data->size);
+-	} else {
+-		data->flags |= EFI_MEMMAP_MEMBLOCK;
+-		data->phys_map = __efi_memmap_alloc_early(data->size);
+-	}
+-
+-	if (!data->phys_map)
+-		return -ENOMEM;
+-	return 0;
+-}
++#include <asm/early_ioremap.h>
++#include <asm/efi.h>
+ 
+ /**
+  * __efi_memmap_init - Common code for mapping the EFI memory map
+@@ -102,14 +30,11 @@ int __init efi_memmap_alloc(unsigned int num_entries,
+  *
+  * Returns zero on success, a negative error code on failure.
+  */
+-static int __init __efi_memmap_init(struct efi_memory_map_data *data)
++int __init __efi_memmap_init(struct efi_memory_map_data *data)
+ {
+ 	struct efi_memory_map map;
+ 	phys_addr_t phys_map;
+ 
+-	if (efi_enabled(EFI_PARAVIRT))
+-		return 0;
+-
+ 	phys_map = data->phys_map;
+ 
+ 	if (data->flags & EFI_MEMMAP_LATE)
+@@ -122,9 +47,6 @@ static int __init __efi_memmap_init(struct efi_memory_map_data *data)
+ 		return -ENOMEM;
+ 	}
+ 
+-	/* NOP if data->flags & (EFI_MEMMAP_MEMBLOCK | EFI_MEMMAP_SLAB) == 0 */
+-	efi_memmap_free();
+-
+ 	map.phys_map = data->phys_map;
+ 	map.nr_map = data->size / data->desc_size;
+ 	map.map_end = map.map + data->size;
+@@ -221,158 +143,3 @@ int __init efi_memmap_init_late(phys_addr_t addr, unsigned long size)
+ 
+ 	return __efi_memmap_init(&data);
+ }
+-
+-/**
+- * efi_memmap_install - Install a new EFI memory map in efi.memmap
+- * @ctx: map allocation parameters (address, size, flags)
+- *
+- * Unlike efi_memmap_init_*(), this function does not allow the caller
+- * to switch from early to late mappings. It simply uses the existing
+- * mapping function and installs the new memmap.
+- *
+- * Returns zero on success, a negative error code on failure.
+- */
+-int __init efi_memmap_install(struct efi_memory_map_data *data)
+-{
+-	efi_memmap_unmap();
+-
+-	return __efi_memmap_init(data);
+-}
+-
+-/**
+- * efi_memmap_split_count - Count number of additional EFI memmap entries
+- * @md: EFI memory descriptor to split
+- * @range: Address range (start, end) to split around
+- *
+- * Returns the number of additional EFI memmap entries required to
+- * accomodate @range.
+- */
+-int __init efi_memmap_split_count(efi_memory_desc_t *md, struct range *range)
+-{
+-	u64 m_start, m_end;
+-	u64 start, end;
+-	int count = 0;
+-
+-	start = md->phys_addr;
+-	end = start + (md->num_pages << EFI_PAGE_SHIFT) - 1;
+-
+-	/* modifying range */
+-	m_start = range->start;
+-	m_end = range->end;
+-
+-	if (m_start <= start) {
+-		/* split into 2 parts */
+-		if (start < m_end && m_end < end)
+-			count++;
+-	}
+-
+-	if (start < m_start && m_start < end) {
+-		/* split into 3 parts */
+-		if (m_end < end)
+-			count += 2;
+-		/* split into 2 parts */
+-		if (end <= m_end)
+-			count++;
+-	}
+-
+-	return count;
+-}
+-
+-/**
+- * efi_memmap_insert - Insert a memory region in an EFI memmap
+- * @old_memmap: The existing EFI memory map structure
+- * @buf: Address of buffer to store new map
+- * @mem: Memory map entry to insert
+- *
+- * It is suggested that you call efi_memmap_split_count() first
+- * to see how large @buf needs to be.
+- */
+-void __init efi_memmap_insert(struct efi_memory_map *old_memmap, void *buf,
+-			      struct efi_mem_range *mem)
+-{
+-	u64 m_start, m_end, m_attr;
+-	efi_memory_desc_t *md;
+-	u64 start, end;
+-	void *old, *new;
+-
+-	/* modifying range */
+-	m_start = mem->range.start;
+-	m_end = mem->range.end;
+-	m_attr = mem->attribute;
+-
+-	/*
+-	 * The EFI memory map deals with regions in EFI_PAGE_SIZE
+-	 * units. Ensure that the region described by 'mem' is aligned
+-	 * correctly.
+-	 */
+-	if (!IS_ALIGNED(m_start, EFI_PAGE_SIZE) ||
+-	    !IS_ALIGNED(m_end + 1, EFI_PAGE_SIZE)) {
+-		WARN_ON(1);
+-		return;
+-	}
+-
+-	for (old = old_memmap->map, new = buf;
+-	     old < old_memmap->map_end;
+-	     old += old_memmap->desc_size, new += old_memmap->desc_size) {
+-
+-		/* copy original EFI memory descriptor */
+-		memcpy(new, old, old_memmap->desc_size);
+-		md = new;
+-		start = md->phys_addr;
+-		end = md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT) - 1;
+-
+-		if (m_start <= start && end <= m_end)
+-			md->attribute |= m_attr;
+-
+-		if (m_start <= start &&
+-		    (start < m_end && m_end < end)) {
+-			/* first part */
+-			md->attribute |= m_attr;
+-			md->num_pages = (m_end - md->phys_addr + 1) >>
+-				EFI_PAGE_SHIFT;
+-			/* latter part */
+-			new += old_memmap->desc_size;
+-			memcpy(new, old, old_memmap->desc_size);
+-			md = new;
+-			md->phys_addr = m_end + 1;
+-			md->num_pages = (end - md->phys_addr + 1) >>
+-				EFI_PAGE_SHIFT;
+-		}
+-
+-		if ((start < m_start && m_start < end) && m_end < end) {
+-			/* first part */
+-			md->num_pages = (m_start - md->phys_addr) >>
+-				EFI_PAGE_SHIFT;
+-			/* middle part */
+-			new += old_memmap->desc_size;
+-			memcpy(new, old, old_memmap->desc_size);
+-			md = new;
+-			md->attribute |= m_attr;
+-			md->phys_addr = m_start;
+-			md->num_pages = (m_end - m_start + 1) >>
+-				EFI_PAGE_SHIFT;
+-			/* last part */
+-			new += old_memmap->desc_size;
+-			memcpy(new, old, old_memmap->desc_size);
+-			md = new;
+-			md->phys_addr = m_end + 1;
+-			md->num_pages = (end - m_end) >>
+-				EFI_PAGE_SHIFT;
+-		}
+-
+-		if ((start < m_start && m_start < end) &&
+-		    (end <= m_end)) {
+-			/* first part */
+-			md->num_pages = (m_start - md->phys_addr) >>
+-				EFI_PAGE_SHIFT;
+-			/* latter part */
+-			new += old_memmap->desc_size;
+-			memcpy(new, old, old_memmap->desc_size);
+-			md = new;
+-			md->phys_addr = m_start;
+-			md->num_pages = (end - md->phys_addr + 1) >>
+-				EFI_PAGE_SHIFT;
+-			md->attribute |= m_attr;
+-		}
+-	}
+-}
+diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
+index 39f3e13664099..b7811dbe0ec28 100644
+--- a/drivers/gpio/Kconfig
++++ b/drivers/gpio/Kconfig
+@@ -1335,7 +1335,7 @@ config GPIO_TPS68470
+ 	  drivers are loaded.
+ 
+ config GPIO_TQMX86
+-	tristate "TQ-Systems QTMX86 GPIO"
++	tristate "TQ-Systems TQMx86 GPIO"
+ 	depends on MFD_TQMX86 || COMPILE_TEST
+ 	depends on HAS_IOPORT_MAP
+ 	select GPIOLIB_IRQCHIP
+diff --git a/drivers/gpio/gpio-davinci.c b/drivers/gpio/gpio-davinci.c
+index 80597e90de9c6..33623bcfc886c 100644
+--- a/drivers/gpio/gpio-davinci.c
++++ b/drivers/gpio/gpio-davinci.c
+@@ -227,6 +227,11 @@ static int davinci_gpio_probe(struct platform_device *pdev)
+ 	else
+ 		nirq = DIV_ROUND_UP(ngpio, 16);
+ 
++	if (nirq > MAX_INT_PER_BANK) {
++		dev_err(dev, "Too many IRQs!\n");
++		return -EINVAL;
++	}
++
+ 	chips = devm_kzalloc(dev, sizeof(*chips), GFP_KERNEL);
+ 	if (!chips)
+ 		return -ENOMEM;
+diff --git a/drivers/gpio/gpio-tqmx86.c b/drivers/gpio/gpio-tqmx86.c
+index 0f5d17f343f1e..70c52544ec801 100644
+--- a/drivers/gpio/gpio-tqmx86.c
++++ b/drivers/gpio/gpio-tqmx86.c
+@@ -27,16 +27,20 @@
+ #define TQMX86_GPIIC	3	/* GPI Interrupt Configuration Register */
+ #define TQMX86_GPIIS	4	/* GPI Interrupt Status Register */
+ 
++#define TQMX86_GPII_NONE	0
+ #define TQMX86_GPII_FALLING	BIT(0)
+ #define TQMX86_GPII_RISING	BIT(1)
+ #define TQMX86_GPII_MASK	(BIT(0) | BIT(1))
+ #define TQMX86_GPII_BITS	2
++/* Stored in irq_type with GPII bits */
++#define TQMX86_INT_UNMASKED	BIT(2)
+ 
+ struct tqmx86_gpio_data {
+ 	struct gpio_chip	chip;
+ 	struct irq_chip		irq_chip;
+ 	void __iomem		*io_base;
+ 	int			irq;
++	/* Lock must be held for accessing output and irq_type fields */
+ 	raw_spinlock_t		spinlock;
+ 	u8			irq_type[TQMX86_NGPI];
+ };
+@@ -107,20 +111,30 @@ static int tqmx86_gpio_get_direction(struct gpio_chip *chip,
+ 	return GPIO_LINE_DIRECTION_OUT;
+ }
+ 
++static void tqmx86_gpio_irq_config(struct tqmx86_gpio_data *gpio, int offset)
++	__must_hold(&gpio->spinlock)
++{
++	u8 type = TQMX86_GPII_NONE, gpiic;
++
++	if (gpio->irq_type[offset] & TQMX86_INT_UNMASKED)
++		type = gpio->irq_type[offset] & TQMX86_GPII_MASK;
++
++	gpiic = tqmx86_gpio_read(gpio, TQMX86_GPIIC);
++	gpiic &= ~(TQMX86_GPII_MASK << (offset * TQMX86_GPII_BITS));
++	gpiic |= type << (offset * TQMX86_GPII_BITS);
++	tqmx86_gpio_write(gpio, gpiic, TQMX86_GPIIC);
++}
++
+ static void tqmx86_gpio_irq_mask(struct irq_data *data)
+ {
+ 	unsigned int offset = (data->hwirq - TQMX86_NGPO);
+ 	struct tqmx86_gpio_data *gpio = gpiochip_get_data(
+ 		irq_data_get_irq_chip_data(data));
+ 	unsigned long flags;
+-	u8 gpiic, mask;
+-
+-	mask = TQMX86_GPII_MASK << (offset * TQMX86_GPII_BITS);
+ 
+ 	raw_spin_lock_irqsave(&gpio->spinlock, flags);
+-	gpiic = tqmx86_gpio_read(gpio, TQMX86_GPIIC);
+-	gpiic &= ~mask;
+-	tqmx86_gpio_write(gpio, gpiic, TQMX86_GPIIC);
++	gpio->irq_type[offset] &= ~TQMX86_INT_UNMASKED;
++	tqmx86_gpio_irq_config(gpio, offset);
+ 	raw_spin_unlock_irqrestore(&gpio->spinlock, flags);
+ }
+ 
+@@ -130,15 +144,10 @@ static void tqmx86_gpio_irq_unmask(struct irq_data *data)
+ 	struct tqmx86_gpio_data *gpio = gpiochip_get_data(
+ 		irq_data_get_irq_chip_data(data));
+ 	unsigned long flags;
+-	u8 gpiic, mask;
+-
+-	mask = TQMX86_GPII_MASK << (offset * TQMX86_GPII_BITS);
+ 
+ 	raw_spin_lock_irqsave(&gpio->spinlock, flags);
+-	gpiic = tqmx86_gpio_read(gpio, TQMX86_GPIIC);
+-	gpiic &= ~mask;
+-	gpiic |= gpio->irq_type[offset] << (offset * TQMX86_GPII_BITS);
+-	tqmx86_gpio_write(gpio, gpiic, TQMX86_GPIIC);
++	gpio->irq_type[offset] |= TQMX86_INT_UNMASKED;
++	tqmx86_gpio_irq_config(gpio, offset);
+ 	raw_spin_unlock_irqrestore(&gpio->spinlock, flags);
+ }
+ 
+@@ -149,7 +158,7 @@ static int tqmx86_gpio_irq_set_type(struct irq_data *data, unsigned int type)
+ 	unsigned int offset = (data->hwirq - TQMX86_NGPO);
+ 	unsigned int edge_type = type & IRQF_TRIGGER_MASK;
+ 	unsigned long flags;
+-	u8 new_type, gpiic;
++	u8 new_type;
+ 
+ 	switch (edge_type) {
+ 	case IRQ_TYPE_EDGE_RISING:
+@@ -165,13 +174,10 @@ static int tqmx86_gpio_irq_set_type(struct irq_data *data, unsigned int type)
+ 		return -EINVAL; /* not supported */
+ 	}
+ 
+-	gpio->irq_type[offset] = new_type;
+-
+ 	raw_spin_lock_irqsave(&gpio->spinlock, flags);
+-	gpiic = tqmx86_gpio_read(gpio, TQMX86_GPIIC);
+-	gpiic &= ~((TQMX86_GPII_MASK) << (offset * TQMX86_GPII_BITS));
+-	gpiic |= new_type << (offset * TQMX86_GPII_BITS);
+-	tqmx86_gpio_write(gpio, gpiic, TQMX86_GPIIC);
++	gpio->irq_type[offset] &= ~TQMX86_GPII_MASK;
++	gpio->irq_type[offset] |= new_type;
++	tqmx86_gpio_irq_config(gpio, offset);
+ 	raw_spin_unlock_irqrestore(&gpio->spinlock, flags);
+ 
+ 	return 0;
+diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
+index 40d0196d8bdcc..95861916deffb 100644
+--- a/drivers/gpio/gpiolib-cdev.c
++++ b/drivers/gpio/gpiolib-cdev.c
+@@ -83,6 +83,10 @@ struct linehandle_state {
+ 	GPIOHANDLE_REQUEST_OPEN_DRAIN | \
+ 	GPIOHANDLE_REQUEST_OPEN_SOURCE)
+ 
++#define GPIOHANDLE_REQUEST_DIRECTION_FLAGS \
++	(GPIOHANDLE_REQUEST_INPUT | \
++	 GPIOHANDLE_REQUEST_OUTPUT)
++
+ static int linehandle_validate_flags(u32 flags)
+ {
+ 	/* Return an error if an unknown flag is set */
+@@ -163,21 +167,21 @@ static long linehandle_set_config(struct linehandle_state *lh,
+ 	if (ret)
+ 		return ret;
+ 
++	/* Lines must be reconfigured explicitly as input or output. */
++	if (!(lflags & GPIOHANDLE_REQUEST_DIRECTION_FLAGS))
++		return -EINVAL;
++
+ 	for (i = 0; i < lh->num_descs; i++) {
+ 		desc = lh->descs[i];
+-		linehandle_flags_to_desc_flags(gcnf.flags, &desc->flags);
++		linehandle_flags_to_desc_flags(lflags, &desc->flags);
+ 
+-		/*
+-		 * Lines have to be requested explicitly for input
+-		 * or output, else the line will be treated "as is".
+-		 */
+ 		if (lflags & GPIOHANDLE_REQUEST_OUTPUT) {
+ 			int val = !!gcnf.default_values[i];
+ 
+ 			ret = gpiod_direction_output(desc, val);
+ 			if (ret)
+ 				return ret;
+-		} else if (lflags & GPIOHANDLE_REQUEST_INPUT) {
++		} else {
+ 			ret = gpiod_direction_input(desc);
+ 			if (ret)
+ 				return ret;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c
+index f70fcadf1ee55..a4a6b99bddbaf 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c
+@@ -633,6 +633,12 @@ void enc1_stream_encoder_set_throttled_vcp_size(
+ 				x),
+ 			26));
+ 
++	// If y rounds up to integer, carry it over to x.
++	if (y >> 26) {
++		x += 1;
++		y = 0;
++	}
++
+ 	REG_SET_2(DP_MSE_RATE_CNTL, 0,
+ 		DP_MSE_RATE_X, x,
+ 		DP_MSE_RATE_Y, y);
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
+index 6eb6f05c11367..56e15f5bc8225 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
+@@ -163,6 +163,8 @@ static void sumo_construct_vid_mapping_table(struct amdgpu_device *adev,
+ 
+ 	for (i = 0; i < SUMO_MAX_HARDWARE_POWERLEVELS; i++) {
+ 		if (table[i].ulSupportedSCLK != 0) {
++			if (table[i].usVoltageIndex >= SUMO_MAX_NUMBER_VOLTAGES)
++				continue;
+ 			vid_mapping_table->entries[table[i].usVoltageIndex].vid_7bit =
+ 				table[i].usVoltageID;
+ 			vid_mapping_table->entries[table[i].usVoltageIndex].vid_2bit =
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
+index 1e922703e26b2..7cc891c091f84 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
+@@ -259,7 +259,7 @@ komeda_component_get_avail_scaler(struct komeda_component *c,
+ 	u32 avail_scalers;
+ 
+ 	pipe_st = komeda_pipeline_get_state(c->pipeline, state);
+-	if (!pipe_st)
++	if (IS_ERR_OR_NULL(pipe_st))
+ 		return NULL;
+ 
+ 	avail_scalers = (pipe_st->active_comps & KOMEDA_PIPELINE_SCALERS) ^
+diff --git a/drivers/gpu/drm/bridge/panel.c b/drivers/gpu/drm/bridge/panel.c
+index c916f4b8907ef..35a6d9c4e081e 100644
+--- a/drivers/gpu/drm/bridge/panel.c
++++ b/drivers/gpu/drm/bridge/panel.c
+@@ -252,9 +252,12 @@ EXPORT_SYMBOL(drm_panel_bridge_remove);
+ 
+ static void devm_drm_panel_bridge_release(struct device *dev, void *res)
+ {
+-	struct drm_bridge **bridge = res;
++	struct drm_bridge *bridge = *(struct drm_bridge **)res;
+ 
+-	drm_panel_bridge_remove(*bridge);
++	if (!bridge)
++		return;
++
++	drm_bridge_remove(bridge);
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_vidi.c b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+index e1ffe8a28b649..d87ab8ecb0233 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_vidi.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+@@ -308,6 +308,7 @@ static int vidi_get_modes(struct drm_connector *connector)
+ 	struct vidi_context *ctx = ctx_from_connector(connector);
+ 	struct edid *edid;
+ 	int edid_len;
++	int count;
+ 
+ 	/*
+ 	 * the edid data comes from user side and it would be set
+@@ -327,7 +328,11 @@ static int vidi_get_modes(struct drm_connector *connector)
+ 
+ 	drm_connector_update_edid_property(connector, edid);
+ 
+-	return drm_add_edid_modes(connector, edid);
++	count = drm_add_edid_modes(connector, edid);
++
++	kfree(edid);
++
++	return count;
+ }
+ 
+ static const struct drm_connector_helper_funcs vidi_connector_helper_funcs = {
+diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c b/drivers/gpu/drm/exynos/exynos_hdmi.c
+index 576fcf1807164..d8579c7c15fdd 100644
+--- a/drivers/gpu/drm/exynos/exynos_hdmi.c
++++ b/drivers/gpu/drm/exynos/exynos_hdmi.c
+@@ -878,11 +878,11 @@ static int hdmi_get_modes(struct drm_connector *connector)
+ 	int ret;
+ 
+ 	if (!hdata->ddc_adpt)
+-		return 0;
++		goto no_edid;
+ 
+ 	edid = drm_get_edid(connector, hdata->ddc_adpt);
+ 	if (!edid)
+-		return 0;
++		goto no_edid;
+ 
+ 	hdata->dvi_mode = !drm_detect_hdmi_monitor(edid);
+ 	DRM_DEV_DEBUG_KMS(hdata->dev, "%s : width[%d] x height[%d]\n",
+@@ -897,6 +897,9 @@ static int hdmi_get_modes(struct drm_connector *connector)
+ 	kfree(edid);
+ 
+ 	return ret;
++
++no_edid:
++	return drm_add_modes_noedid(connector, 640, 480);
+ }
+ 
+ static int hdmi_find_phy_conf(struct hdmi_context *hdata, u32 pixel_clock)
+diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+index cd71631bef0ca..ddf76a4e4fa26 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
++++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+@@ -309,6 +309,7 @@ void i915_vma_revoke_fence(struct i915_vma *vma)
+ 		return;
+ 
+ 	GEM_BUG_ON(fence->vma != vma);
++	i915_active_wait(&fence->active);
+ 	GEM_BUG_ON(!i915_active_is_idle(&fence->active));
+ 	GEM_BUG_ON(atomic_read(&fence->pin_count));
+ 
+diff --git a/drivers/gpu/drm/lima/lima_bcast.c b/drivers/gpu/drm/lima/lima_bcast.c
+index fbc43f243c54d..6d000504e1a4e 100644
+--- a/drivers/gpu/drm/lima/lima_bcast.c
++++ b/drivers/gpu/drm/lima/lima_bcast.c
+@@ -43,6 +43,18 @@ void lima_bcast_suspend(struct lima_ip *ip)
+ 
+ }
+ 
++int lima_bcast_mask_irq(struct lima_ip *ip)
++{
++	bcast_write(LIMA_BCAST_BROADCAST_MASK, 0);
++	bcast_write(LIMA_BCAST_INTERRUPT_MASK, 0);
++	return 0;
++}
++
++int lima_bcast_reset(struct lima_ip *ip)
++{
++	return lima_bcast_hw_init(ip);
++}
++
+ int lima_bcast_init(struct lima_ip *ip)
+ {
+ 	int i;
+diff --git a/drivers/gpu/drm/lima/lima_bcast.h b/drivers/gpu/drm/lima/lima_bcast.h
+index 465ee587bceb2..cd08841e47879 100644
+--- a/drivers/gpu/drm/lima/lima_bcast.h
++++ b/drivers/gpu/drm/lima/lima_bcast.h
+@@ -13,4 +13,7 @@ void lima_bcast_fini(struct lima_ip *ip);
+ 
+ void lima_bcast_enable(struct lima_device *dev, int num_pp);
+ 
++int lima_bcast_mask_irq(struct lima_ip *ip);
++int lima_bcast_reset(struct lima_ip *ip);
++
+ #endif
+diff --git a/drivers/gpu/drm/lima/lima_gp.c b/drivers/gpu/drm/lima/lima_gp.c
+index 8dd501b7a3d0d..6cf46b653e810 100644
+--- a/drivers/gpu/drm/lima/lima_gp.c
++++ b/drivers/gpu/drm/lima/lima_gp.c
+@@ -212,6 +212,13 @@ static void lima_gp_task_mmu_error(struct lima_sched_pipe *pipe)
+ 	lima_sched_pipe_task_done(pipe);
+ }
+ 
++static void lima_gp_task_mask_irq(struct lima_sched_pipe *pipe)
++{
++	struct lima_ip *ip = pipe->processor[0];
++
++	gp_write(LIMA_GP_INT_MASK, 0);
++}
++
+ static int lima_gp_task_recover(struct lima_sched_pipe *pipe)
+ {
+ 	struct lima_ip *ip = pipe->processor[0];
+@@ -344,6 +351,7 @@ int lima_gp_pipe_init(struct lima_device *dev)
+ 	pipe->task_error = lima_gp_task_error;
+ 	pipe->task_mmu_error = lima_gp_task_mmu_error;
+ 	pipe->task_recover = lima_gp_task_recover;
++	pipe->task_mask_irq = lima_gp_task_mask_irq;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/lima/lima_pp.c b/drivers/gpu/drm/lima/lima_pp.c
+index a5c95bed08c09..54b208a4a768e 100644
+--- a/drivers/gpu/drm/lima/lima_pp.c
++++ b/drivers/gpu/drm/lima/lima_pp.c
+@@ -408,6 +408,9 @@ static void lima_pp_task_error(struct lima_sched_pipe *pipe)
+ 
+ 		lima_pp_hard_reset(ip);
+ 	}
++
++	if (pipe->bcast_processor)
++		lima_bcast_reset(pipe->bcast_processor);
+ }
+ 
+ static void lima_pp_task_mmu_error(struct lima_sched_pipe *pipe)
+@@ -416,6 +419,20 @@ static void lima_pp_task_mmu_error(struct lima_sched_pipe *pipe)
+ 		lima_sched_pipe_task_done(pipe);
+ }
+ 
++static void lima_pp_task_mask_irq(struct lima_sched_pipe *pipe)
++{
++	int i;
++
++	for (i = 0; i < pipe->num_processor; i++) {
++		struct lima_ip *ip = pipe->processor[i];
++
++		pp_write(LIMA_PP_INT_MASK, 0);
++	}
++
++	if (pipe->bcast_processor)
++		lima_bcast_mask_irq(pipe->bcast_processor);
++}
++
+ static struct kmem_cache *lima_pp_task_slab;
+ static int lima_pp_task_slab_refcnt;
+ 
+@@ -447,6 +464,7 @@ int lima_pp_pipe_init(struct lima_device *dev)
+ 	pipe->task_fini = lima_pp_task_fini;
+ 	pipe->task_error = lima_pp_task_error;
+ 	pipe->task_mmu_error = lima_pp_task_mmu_error;
++	pipe->task_mask_irq = lima_pp_task_mask_irq;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
+index f6e7a88a56f1b..290f875c28598 100644
+--- a/drivers/gpu/drm/lima/lima_sched.c
++++ b/drivers/gpu/drm/lima/lima_sched.c
+@@ -419,6 +419,13 @@ static void lima_sched_timedout_job(struct drm_sched_job *job)
+ 	struct lima_sched_task *task = to_lima_task(job);
+ 	struct lima_device *ldev = pipe->ldev;
+ 
++	/*
++	 * The task might still finish while this timeout handler runs.
++	 * To prevent a race condition on its completion, mask all irqs
++	 * on the running core until the next hard reset completes.
++	 */
++	pipe->task_mask_irq(pipe);
++
+ 	if (!pipe->error)
+ 		DRM_ERROR("lima job timeout\n");
+ 
+diff --git a/drivers/gpu/drm/lima/lima_sched.h b/drivers/gpu/drm/lima/lima_sched.h
+index 90f03c48ef4a8..f8bbfa69baea6 100644
+--- a/drivers/gpu/drm/lima/lima_sched.h
++++ b/drivers/gpu/drm/lima/lima_sched.h
+@@ -83,6 +83,7 @@ struct lima_sched_pipe {
+ 	void (*task_error)(struct lima_sched_pipe *pipe);
+ 	void (*task_mmu_error)(struct lima_sched_pipe *pipe);
+ 	int (*task_recover)(struct lima_sched_pipe *pipe);
++	void (*task_mask_irq)(struct lima_sched_pipe *pipe);
+ 
+ 	struct work_struct recover_work;
+ };
+diff --git a/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c b/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
+index be28e7bd74903..1da9d1e89f91b 100644
+--- a/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
++++ b/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
+@@ -208,6 +208,8 @@ static int nv17_tv_get_ld_modes(struct drm_encoder *encoder,
+ 		struct drm_display_mode *mode;
+ 
+ 		mode = drm_mode_duplicate(encoder->dev, tv_mode);
++		if (!mode)
++			continue;
+ 
+ 		mode->clock = tv_norm->tv_enc_mode.vrefresh *
+ 			mode->htotal / 1000 *
+@@ -257,6 +259,8 @@ static int nv17_tv_get_hd_modes(struct drm_encoder *encoder,
+ 		if (modes[i].hdisplay == output_mode->hdisplay &&
+ 		    modes[i].vdisplay == output_mode->vdisplay) {
+ 			mode = drm_mode_duplicate(encoder->dev, output_mode);
++			if (!mode)
++				continue;
+ 			mode->type |= DRM_MODE_TYPE_PREFERRED;
+ 
+ 		} else {
+@@ -264,6 +268,8 @@ static int nv17_tv_get_hd_modes(struct drm_encoder *encoder,
+ 					    modes[i].vdisplay, 60, false,
+ 					    (output_mode->flags &
+ 					     DRM_MODE_FLAG_INTERLACE), false);
++			if (!mode)
++				continue;
+ 		}
+ 
+ 		/* CVT modes are sometimes unsuitable... */
+diff --git a/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c b/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
+index 534dd7414d428..917cb322bab1a 100644
+--- a/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
++++ b/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
+@@ -506,10 +506,10 @@ static int ili9881c_prepare(struct drm_panel *panel)
+ 	msleep(5);
+ 
+ 	/* And reset it */
+-	gpiod_set_value(ctx->reset, 1);
++	gpiod_set_value_cansleep(ctx->reset, 1);
+ 	msleep(20);
+ 
+-	gpiod_set_value(ctx->reset, 0);
++	gpiod_set_value_cansleep(ctx->reset, 0);
+ 	msleep(20);
+ 
+ 	for (i = 0; i < ctx->desc->init_length; i++) {
+@@ -564,7 +564,7 @@ static int ili9881c_unprepare(struct drm_panel *panel)
+ 
+ 	mipi_dsi_dcs_enter_sleep_mode(ctx->dsi);
+ 	regulator_disable(ctx->power);
+-	gpiod_set_value(ctx->reset, 1);
++	gpiod_set_value_cansleep(ctx->reset, 1);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 7797aad592a19..07b59021008ea 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2438,6 +2438,7 @@ static const struct display_timing koe_tx26d202vm0bwa_timing = {
+ 	.vfront_porch = { 3, 5, 10 },
+ 	.vback_porch = { 2, 5, 10 },
+ 	.vsync_len = { 5, 5, 5 },
++	.flags = DISPLAY_FLAGS_DE_HIGH,
+ };
+ 
+ static const struct panel_desc koe_tx26d202vm0bwa = {
+diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
+index a813c00f109b5..26fd45ea14cd4 100644
+--- a/drivers/gpu/drm/radeon/radeon.h
++++ b/drivers/gpu/drm/radeon/radeon.h
+@@ -132,7 +132,6 @@ extern int radeon_cik_support;
+ /* RADEON_IB_POOL_SIZE must be a power of 2 */
+ #define RADEON_IB_POOL_SIZE			16
+ #define RADEON_DEBUGFS_MAX_COMPONENTS		32
+-#define RADEONFB_CONN_LIMIT			4
+ #define RADEON_BIOS_NUM_SCRATCH			8
+ 
+ /* internal ring indices */
+diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
+index 07d23a1e62a07..8643eebe9e243 100644
+--- a/drivers/gpu/drm/radeon/radeon_display.c
++++ b/drivers/gpu/drm/radeon/radeon_display.c
+@@ -685,7 +685,7 @@ static void radeon_crtc_init(struct drm_device *dev, int index)
+ 	struct radeon_device *rdev = dev->dev_private;
+ 	struct radeon_crtc *radeon_crtc;
+ 
+-	radeon_crtc = kzalloc(sizeof(struct radeon_crtc) + (RADEONFB_CONN_LIMIT * sizeof(struct drm_connector *)), GFP_KERNEL);
++	radeon_crtc = kzalloc(sizeof(*radeon_crtc), GFP_KERNEL);
+ 	if (radeon_crtc == NULL)
+ 		return;
+ 
+@@ -711,12 +711,6 @@ static void radeon_crtc_init(struct drm_device *dev, int index)
+ 	dev->mode_config.cursor_width = radeon_crtc->max_cursor_width;
+ 	dev->mode_config.cursor_height = radeon_crtc->max_cursor_height;
+ 
+-#if 0
+-	radeon_crtc->mode_set.crtc = &radeon_crtc->base;
+-	radeon_crtc->mode_set.connectors = (struct drm_connector **)(radeon_crtc + 1);
+-	radeon_crtc->mode_set.num_connectors = 0;
+-#endif
+-
+ 	if (rdev->is_atom_bios && (ASIC_IS_AVIVO(rdev) || radeon_r4xx_atom))
+ 		radeon_atombios_init_crtc(dev, radeon_crtc);
+ 	else
+diff --git a/drivers/gpu/drm/radeon/sumo_dpm.c b/drivers/gpu/drm/radeon/sumo_dpm.c
+index 45d04996adf5b..a80e2edb7c0f3 100644
+--- a/drivers/gpu/drm/radeon/sumo_dpm.c
++++ b/drivers/gpu/drm/radeon/sumo_dpm.c
+@@ -1621,6 +1621,8 @@ void sumo_construct_vid_mapping_table(struct radeon_device *rdev,
+ 
+ 	for (i = 0; i < SUMO_MAX_HARDWARE_POWERLEVELS; i++) {
+ 		if (table[i].ulSupportedSCLK != 0) {
++			if (table[i].usVoltageIndex >= SUMO_MAX_NUMBER_VOLTAGES)
++				continue;
+ 			vid_mapping_table->entries[table[i].usVoltageIndex].vid_7bit =
+ 				table[i].usVoltageID;
+ 			vid_mapping_table->entries[table[i].usVoltageIndex].vid_2bit =
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index bdb7a5e965601..25a9c72cca806 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -752,13 +752,6 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset)
+ 				vmw_read(dev_priv,
+ 					 SVGA_REG_SUGGESTED_GBOBJECT_MEM_SIZE_KB);
+ 
+-		/*
+-		 * Workaround for low memory 2D VMs to compensate for the
+-		 * allocation taken by fbdev
+-		 */
+-		if (!(dev_priv->capabilities & SVGA_CAP_3D))
+-			mem_size *= 3;
+-
+ 		dev_priv->max_mob_pages = mem_size * 1024 / PAGE_SIZE;
+ 		dev_priv->prim_bb_mem =
+ 			vmw_read(dev_priv,
+diff --git a/drivers/greybus/interface.c b/drivers/greybus/interface.c
+index 9ec949a438ef6..52ef6be9d4499 100644
+--- a/drivers/greybus/interface.c
++++ b/drivers/greybus/interface.c
+@@ -694,6 +694,7 @@ static void gb_interface_release(struct device *dev)
+ 
+ 	trace_gb_interface_release(intf);
+ 
++	cancel_work_sync(&intf->mode_switch_work);
+ 	kfree(intf);
+ }
+ 
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 476967ab6294c..5281d693b32d2 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1446,7 +1446,6 @@ static void implement(const struct hid_device *hid, u8 *report,
+ 			hid_warn(hid,
+ 				 "%s() called with too large value %d (n: %d)! (%s)\n",
+ 				 __func__, value, n, current->comm);
+-			WARN_ON(1);
+ 			value &= m;
+ 		}
+ 	}
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 0732fe6c7a853..9f3f7588fe46d 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -765,6 +765,7 @@
+ #define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e
+ #define USB_DEVICE_ID_LOGITECH_T651	0xb00c
+ #define USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD	0xb309
++#define USB_DEVICE_ID_LOGITECH_CASA_TOUCHPAD	0xbb00
+ #define USB_DEVICE_ID_LOGITECH_C007	0xc007
+ #define USB_DEVICE_ID_LOGITECH_C077	0xc077
+ #define USB_DEVICE_ID_LOGITECH_RECEIVER	0xc101
+diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
+index f4d79ec826797..bda9150aa372a 100644
+--- a/drivers/hid/hid-logitech-dj.c
++++ b/drivers/hid/hid-logitech-dj.c
+@@ -1218,8 +1218,10 @@ static int logi_dj_recv_switch_to_dj_mode(struct dj_receiver_dev *djrcv_dev,
+ 		 */
+ 		msleep(50);
+ 
+-		if (retval)
++		if (retval) {
++			kfree(dj_report);
+ 			return retval;
++		}
+ 	}
+ 
+ 	/*
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 8dcd636daf270..e7b047421f3d9 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1998,6 +1998,12 @@ static const struct hid_device_id mt_devices[] = {
+ 			   USB_VENDOR_ID_LENOVO,
+ 			   USB_DEVICE_ID_LENOVO_X12_TAB) },
+ 
++	/* Logitech devices */
++	{ .driver_data = MT_CLS_NSMU,
++		HID_DEVICE(BUS_BLUETOOTH, HID_GROUP_MULTITOUCH_WIN_8,
++			USB_VENDOR_ID_LOGITECH,
++			USB_DEVICE_ID_LOGITECH_CASA_TOUCHPAD) },
++
+ 	/* MosArt panels */
+ 	{ .driver_data = MT_CLS_CONFIDENCE_MINUS_ONE,
+ 		MT_USB_DEVICE(USB_VENDOR_ID_ASUS,
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index 7f13687267fd3..eba0dd541fa29 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -294,6 +294,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xae24),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Meteor Lake-S */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7f26),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{
+ 		/* Raptor Lake-S */
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7a26),
+@@ -304,6 +309,26 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa76f),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Granite Rapids */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x0963),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
++	{
++		/* Granite Rapids SOC */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x3256),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
++	{
++		/* Sapphire Rapids SOC */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x3456),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
++	{
++		/* Lunar Lake */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa824),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{
+ 		/* Alder Lake CPU */
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f),
+diff --git a/drivers/i2c/busses/i2c-at91-slave.c b/drivers/i2c/busses/i2c-at91-slave.c
+index d6eeea5166c04..131a67d9d4a68 100644
+--- a/drivers/i2c/busses/i2c-at91-slave.c
++++ b/drivers/i2c/busses/i2c-at91-slave.c
+@@ -106,8 +106,7 @@ static int at91_unreg_slave(struct i2c_client *slave)
+ 
+ static u32 at91_twi_func(struct i2c_adapter *adapter)
+ {
+-	return I2C_FUNC_SLAVE | I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL
+-		| I2C_FUNC_SMBUS_READ_BLOCK_DATA;
++	return I2C_FUNC_SLAVE;
+ }
+ 
+ static const struct i2c_algorithm at91_twi_algorithm_slave = {
+diff --git a/drivers/i2c/busses/i2c-designware-slave.c b/drivers/i2c/busses/i2c-designware-slave.c
+index 0d15f4c1e9f7e..5b54a9b9ed1a3 100644
+--- a/drivers/i2c/busses/i2c-designware-slave.c
++++ b/drivers/i2c/busses/i2c-designware-slave.c
+@@ -232,7 +232,7 @@ static const struct i2c_algorithm i2c_dw_algo = {
+ 
+ void i2c_dw_configure_slave(struct dw_i2c_dev *dev)
+ {
+-	dev->functionality = I2C_FUNC_SLAVE | DW_IC_DEFAULT_FUNCTIONALITY;
++	dev->functionality = I2C_FUNC_SLAVE;
+ 
+ 	dev->slave_cfg = DW_IC_CON_RX_FIFO_FULL_HLD_CTRL |
+ 			 DW_IC_CON_RESTART_EN | DW_IC_CON_STOP_DET_IFADDRESSED;
+diff --git a/drivers/i2c/busses/i2c-ocores.c b/drivers/i2c/busses/i2c-ocores.c
+index 71e26aa6bd8ff..5e2e8bd2d055e 100644
+--- a/drivers/i2c/busses/i2c-ocores.c
++++ b/drivers/i2c/busses/i2c-ocores.c
+@@ -443,8 +443,8 @@ static int ocores_init(struct device *dev, struct ocores_i2c *i2c)
+ 	oc_setreg(i2c, OCI2C_PREHIGH, prescale >> 8);
+ 
+ 	/* Init the device */
+-	oc_setreg(i2c, OCI2C_CMD, OCI2C_CMD_IACK);
+ 	oc_setreg(i2c, OCI2C_CONTROL, ctrl | OCI2C_CTRL_EN);
++	oc_setreg(i2c, OCI2C_CMD, OCI2C_CMD_IACK);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iio/adc/ad7266.c b/drivers/iio/adc/ad7266.c
+index a8ec3efd659ed..90b9ac589e90e 100644
+--- a/drivers/iio/adc/ad7266.c
++++ b/drivers/iio/adc/ad7266.c
+@@ -157,6 +157,8 @@ static int ad7266_read_raw(struct iio_dev *indio_dev,
+ 		ret = ad7266_read_single(st, val, chan->address);
+ 		iio_device_release_direct_mode(indio_dev);
+ 
++		if (ret < 0)
++			return ret;
+ 		*val = (*val >> 2) & 0xfff;
+ 		if (chan->scan_type.sign == 's')
+ 			*val = sign_extend32(*val, 11);
+diff --git a/drivers/iio/adc/ad9467.c b/drivers/iio/adc/ad9467.c
+index 6b9627bfebb0a..4fd74e2847f00 100644
+--- a/drivers/iio/adc/ad9467.c
++++ b/drivers/iio/adc/ad9467.c
+@@ -223,11 +223,11 @@ static void __ad9467_get_scale(struct adi_axi_adc_conv *conv, int index,
+ }
+ 
+ static const struct iio_chan_spec ad9434_channels[] = {
+-	AD9467_CHAN(0, 0, 12, 'S'),
++	AD9467_CHAN(0, 0, 12, 's'),
+ };
+ 
+ static const struct iio_chan_spec ad9467_channels[] = {
+-	AD9467_CHAN(0, 0, 16, 'S'),
++	AD9467_CHAN(0, 0, 16, 's'),
+ };
+ 
+ static const struct ad9467_chip_info ad9467_chip_tbl[] = {
+diff --git a/drivers/iio/chemical/bme680.h b/drivers/iio/chemical/bme680.h
+index 4edc5d21cb9fa..f959252a4fe66 100644
+--- a/drivers/iio/chemical/bme680.h
++++ b/drivers/iio/chemical/bme680.h
+@@ -54,7 +54,9 @@
+ #define   BME680_NB_CONV_MASK			GENMASK(3, 0)
+ 
+ #define BME680_REG_MEAS_STAT_0			0x1D
++#define   BME680_NEW_DATA_BIT			BIT(7)
+ #define   BME680_GAS_MEAS_BIT			BIT(6)
++#define   BME680_MEAS_BIT			BIT(5)
+ 
+ /* Calibration Parameters */
+ #define BME680_T2_LSB_REG	0x8A
+diff --git a/drivers/iio/chemical/bme680_core.c b/drivers/iio/chemical/bme680_core.c
+index 6ea99e4cbf924..2216577b1005b 100644
+--- a/drivers/iio/chemical/bme680_core.c
++++ b/drivers/iio/chemical/bme680_core.c
+@@ -10,6 +10,7 @@
+  */
+ #include <linux/acpi.h>
+ #include <linux/bitfield.h>
++#include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/module.h>
+ #include <linux/log2.h>
+@@ -38,7 +39,7 @@ struct bme680_calib {
+ 	s8  par_h3;
+ 	s8  par_h4;
+ 	s8  par_h5;
+-	s8  par_h6;
++	u8  par_h6;
+ 	s8  par_h7;
+ 	s8  par_gh1;
+ 	s16 par_gh2;
+@@ -342,10 +343,10 @@ static s16 bme680_compensate_temp(struct bme680_data *data,
+ 	if (!calib->par_t2)
+ 		bme680_read_calib(data, calib);
+ 
+-	var1 = (adc_temp >> 3) - (calib->par_t1 << 1);
++	var1 = (adc_temp >> 3) - ((s32)calib->par_t1 << 1);
+ 	var2 = (var1 * calib->par_t2) >> 11;
+ 	var3 = ((var1 >> 1) * (var1 >> 1)) >> 12;
+-	var3 = (var3 * (calib->par_t3 << 4)) >> 14;
++	var3 = (var3 * ((s32)calib->par_t3 << 4)) >> 14;
+ 	data->t_fine = var2 + var3;
+ 	calc_temp = (data->t_fine * 5 + 128) >> 8;
+ 
+@@ -368,9 +369,9 @@ static u32 bme680_compensate_press(struct bme680_data *data,
+ 	var1 = (data->t_fine >> 1) - 64000;
+ 	var2 = ((((var1 >> 2) * (var1 >> 2)) >> 11) * calib->par_p6) >> 2;
+ 	var2 = var2 + (var1 * calib->par_p5 << 1);
+-	var2 = (var2 >> 2) + (calib->par_p4 << 16);
++	var2 = (var2 >> 2) + ((s32)calib->par_p4 << 16);
+ 	var1 = (((((var1 >> 2) * (var1 >> 2)) >> 13) *
+-			(calib->par_p3 << 5)) >> 3) +
++			((s32)calib->par_p3 << 5)) >> 3) +
+ 			((calib->par_p2 * var1) >> 1);
+ 	var1 = var1 >> 18;
+ 	var1 = ((32768 + var1) * calib->par_p1) >> 15;
+@@ -388,7 +389,7 @@ static u32 bme680_compensate_press(struct bme680_data *data,
+ 	var3 = ((press_comp >> 8) * (press_comp >> 8) *
+ 			(press_comp >> 8) * calib->par_p10) >> 17;
+ 
+-	press_comp += (var1 + var2 + var3 + (calib->par_p7 << 7)) >> 4;
++	press_comp += (var1 + var2 + var3 + ((s32)calib->par_p7 << 7)) >> 4;
+ 
+ 	return press_comp;
+ }
+@@ -414,7 +415,7 @@ static u32 bme680_compensate_humid(struct bme680_data *data,
+ 		 (((temp_scaled * ((temp_scaled * calib->par_h5) / 100))
+ 		   >> 6) / 100) + (1 << 14))) >> 10;
+ 	var3 = var1 * var2;
+-	var4 = calib->par_h6 << 7;
++	var4 = (s32)calib->par_h6 << 7;
+ 	var4 = (var4 + ((temp_scaled * calib->par_h7) / 100)) >> 4;
+ 	var5 = ((var3 >> 14) * (var3 >> 14)) >> 10;
+ 	var6 = (var4 * var5) >> 1;
+@@ -532,6 +533,43 @@ static u8 bme680_oversampling_to_reg(u8 val)
+ 	return ilog2(val) + 1;
+ }
+ 
++/*
++ * Taken from Bosch BME680 API:
++ * https://github.com/boschsensortec/BME68x_SensorAPI/blob/v4.4.8/bme68x.c#L490
++ */
++static int bme680_wait_for_eoc(struct bme680_data *data)
++{
++	struct device *dev = regmap_get_device(data->regmap);
++	unsigned int check;
++	int ret;
++	/*
++	 * (Sum of oversampling ratios * time per oversampling) +
++	 * TPH measurement + gas measurement + wait transition from forced mode
++	 * + heater duration
++	 */
++	int wait_eoc_us = ((data->oversampling_temp + data->oversampling_press +
++			   data->oversampling_humid) * 1936) + (477 * 4) +
++			   (477 * 5) + 1000 + (data->heater_dur * 1000);
++
++	usleep_range(wait_eoc_us, wait_eoc_us + 100);
++
++	ret = regmap_read(data->regmap, BME680_REG_MEAS_STAT_0, &check);
++	if (ret) {
++		dev_err(dev, "failed to read measurement status register.\n");
++		return ret;
++	}
++	if (check & BME680_MEAS_BIT) {
++		dev_err(dev, "Device measurement cycle incomplete.\n");
++		return -EBUSY;
++	}
++	if (!(check & BME680_NEW_DATA_BIT)) {
++		dev_err(dev, "No new data available from the device.\n");
++		return -ENODATA;
++	}
++
++	return 0;
++}
++
+ static int bme680_chip_config(struct bme680_data *data)
+ {
+ 	struct device *dev = regmap_get_device(data->regmap);
+@@ -622,6 +660,10 @@ static int bme680_read_temp(struct bme680_data *data, int *val)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	ret = bme680_wait_for_eoc(data);
++	if (ret)
++		return ret;
++
+ 	ret = regmap_bulk_read(data->regmap, BME680_REG_TEMP_MSB,
+ 			       &tmp, 3);
+ 	if (ret < 0) {
+@@ -678,7 +720,7 @@ static int bme680_read_press(struct bme680_data *data,
+ 	}
+ 
+ 	*val = bme680_compensate_press(data, adc_press);
+-	*val2 = 100;
++	*val2 = 1000;
+ 	return IIO_VAL_FRACTIONAL;
+ }
+ 
+@@ -738,6 +780,10 @@ static int bme680_read_gas(struct bme680_data *data,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	ret = bme680_wait_for_eoc(data);
++	if (ret)
++		return ret;
++
+ 	ret = regmap_read(data->regmap, BME680_REG_MEAS_STAT_0, &check);
+ 	if (check & BME680_GAS_MEAS_BIT) {
+ 		dev_err(dev, "gas measurement incomplete\n");
+diff --git a/drivers/iio/dac/ad5592r-base.c b/drivers/iio/dac/ad5592r-base.c
+index 987264410278c..520a3748befa0 100644
+--- a/drivers/iio/dac/ad5592r-base.c
++++ b/drivers/iio/dac/ad5592r-base.c
+@@ -411,7 +411,7 @@ static int ad5592r_read_raw(struct iio_dev *iio_dev,
+ 			s64 tmp = *val * (3767897513LL / 25LL);
+ 			*val = div_s64_rem(tmp, 1000000000LL, val2);
+ 
+-			return IIO_VAL_INT_PLUS_MICRO;
++			return IIO_VAL_INT_PLUS_NANO;
+ 		}
+ 
+ 		mutex_lock(&st->lock);
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
+index 3441b0d61c5d5..cef76c1c40f36 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
+@@ -128,10 +128,6 @@ static int inv_icm42600_accel_update_scan_mode(struct iio_dev *indio_dev,
+ 	/* update data FIFO write */
+ 	inv_icm42600_timestamp_apply_odr(ts, 0, 0, 0);
+ 	ret = inv_icm42600_buffer_set_fifo_en(st, fifo_en | st->fifo.en);
+-	if (ret)
+-		goto out_unlock;
+-
+-	ret = inv_icm42600_buffer_update_watermark(st);
+ 
+ out_unlock:
+ 	mutex_unlock(&st->lock);
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
+index aee7b9ff4bf4f..608d07115a36e 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
+@@ -128,10 +128,6 @@ static int inv_icm42600_gyro_update_scan_mode(struct iio_dev *indio_dev,
+ 	/* update data FIFO write */
+ 	inv_icm42600_timestamp_apply_odr(ts, 0, 0, 0);
+ 	ret = inv_icm42600_buffer_set_fifo_en(st, fifo_en | st->fifo.en);
+-	if (ret)
+-		goto out_unlock;
+-
+-	ret = inv_icm42600_buffer_update_watermark(st);
+ 
+ out_unlock:
+ 	mutex_unlock(&st->lock);
+diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
+index e2f720eec1e18..56ce0fea00e92 100644
+--- a/drivers/infiniband/hw/mlx5/srq.c
++++ b/drivers/infiniband/hw/mlx5/srq.c
+@@ -225,12 +225,15 @@ int mlx5_ib_create_srq(struct ib_srq *ib_srq,
+ 	int err;
+ 	struct mlx5_srq_attr in = {};
+ 	__u32 max_srq_wqes = 1 << MLX5_CAP_GEN(dev->mdev, log_max_srq_sz);
+-
+-	/* Sanity check SRQ size before proceeding */
+-	if (init_attr->attr.max_wr >= max_srq_wqes) {
+-		mlx5_ib_dbg(dev, "max_wr %d, cap %d\n",
+-			    init_attr->attr.max_wr,
+-			    max_srq_wqes);
++	__u32 max_sge_sz =  MLX5_CAP_GEN(dev->mdev, max_wqe_sz_rq) /
++			    sizeof(struct mlx5_wqe_data_seg);
++
++	/* Sanity check SRQ and sge size before proceeding */
++	if (init_attr->attr.max_wr >= max_srq_wqes ||
++	    init_attr->attr.max_sge > max_sge_sz) {
++		mlx5_ib_dbg(dev, "max_wr %d,wr_cap %d,max_sge %d, sge_cap:%d\n",
++			    init_attr->attr.max_wr, max_srq_wqes,
++			    init_attr->attr.max_sge, max_sge_sz);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index 49504dcd5dc63..9190aa18263e8 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -1356,19 +1356,19 @@ static int input_print_modalias_bits(char *buf, int size,
+ 				     char name, unsigned long *bm,
+ 				     unsigned int min_bit, unsigned int max_bit)
+ {
+-	int len = 0, i;
++	int bit = min_bit;
++	int len = 0;
+ 
+ 	len += snprintf(buf, max(size, 0), "%c", name);
+-	for (i = min_bit; i < max_bit; i++)
+-		if (bm[BIT_WORD(i)] & BIT_MASK(i))
+-			len += snprintf(buf + len, max(size - len, 0), "%X,", i);
++	for_each_set_bit_from(bit, bm, max_bit)
++		len += snprintf(buf + len, max(size - len, 0), "%X,", bit);
+ 	return len;
+ }
+ 
+-static int input_print_modalias(char *buf, int size, struct input_dev *id,
+-				int add_cr)
++static int input_print_modalias_parts(char *buf, int size, int full_len,
++				      struct input_dev *id)
+ {
+-	int len;
++	int len, klen, remainder, space;
+ 
+ 	len = snprintf(buf, max(size, 0),
+ 		       "input:b%04Xv%04Xp%04Xe%04X-",
+@@ -1377,8 +1377,49 @@ static int input_print_modalias(char *buf, int size, struct input_dev *id,
+ 
+ 	len += input_print_modalias_bits(buf + len, size - len,
+ 				'e', id->evbit, 0, EV_MAX);
+-	len += input_print_modalias_bits(buf + len, size - len,
++
++	/*
++	 * Calculate the remaining space in the buffer making sure we
++	 * have place for the terminating 0.
++	 */
++	space = max(size - (len + 1), 0);
++
++	klen = input_print_modalias_bits(buf + len, size - len,
+ 				'k', id->keybit, KEY_MIN_INTERESTING, KEY_MAX);
++	len += klen;
++
++	/*
++	 * If we have more data than we can fit in the buffer, check
++	 * if we can trim key data to fit in the rest. We will indicate
++	 * that key data is incomplete by adding "+" sign at the end, like
++	 * this: * "k1,2,3,45,+,".
++	 *
++	 * Note that we shortest key info (if present) is "k+," so we
++	 * can only try to trim if key data is longer than that.
++	 */
++	if (full_len && size < full_len + 1 && klen > 3) {
++		remainder = full_len - len;
++		/*
++		 * We can only trim if we have space for the remainder
++		 * and also for at least "k+," which is 3 more characters.
++		 */
++		if (remainder <= space - 3) {
++			int i;
++			/*
++			 * We are guaranteed to have 'k' in the buffer, so
++			 * we need at least 3 additional bytes for storing
++			 * "+," in addition to the remainder.
++			 */
++			for (i = size - 1 - remainder - 3; i >= 0; i--) {
++				if (buf[i] == 'k' || buf[i] == ',') {
++					strcpy(buf + i + 1, "+,");
++					len = i + 3; /* Not counting '\0' */
++					break;
++				}
++			}
++		}
++	}
++
+ 	len += input_print_modalias_bits(buf + len, size - len,
+ 				'r', id->relbit, 0, REL_MAX);
+ 	len += input_print_modalias_bits(buf + len, size - len,
+@@ -1394,12 +1435,25 @@ static int input_print_modalias(char *buf, int size, struct input_dev *id,
+ 	len += input_print_modalias_bits(buf + len, size - len,
+ 				'w', id->swbit, 0, SW_MAX);
+ 
+-	if (add_cr)
+-		len += snprintf(buf + len, max(size - len, 0), "\n");
+-
+ 	return len;
+ }
+ 
++static int input_print_modalias(char *buf, int size, struct input_dev *id)
++{
++	int full_len;
++
++	/*
++	 * Printing is done in 2 passes: first one figures out total length
++	 * needed for the modalias string, second one will try to trim key
++	 * data in case when buffer is too small for the entire modalias.
++	 * If the buffer is too small regardless, it will fill as much as it
++	 * can (without trimming key data) into the buffer and leave it to
++	 * the caller to figure out what to do with the result.
++	 */
++	full_len = input_print_modalias_parts(NULL, 0, 0, id);
++	return input_print_modalias_parts(buf, size, full_len, id);
++}
++
+ static ssize_t input_dev_show_modalias(struct device *dev,
+ 				       struct device_attribute *attr,
+ 				       char *buf)
+@@ -1407,7 +1461,9 @@ static ssize_t input_dev_show_modalias(struct device *dev,
+ 	struct input_dev *id = to_input_dev(dev);
+ 	ssize_t len;
+ 
+-	len = input_print_modalias(buf, PAGE_SIZE, id, 1);
++	len = input_print_modalias(buf, PAGE_SIZE, id);
++	if (len < PAGE_SIZE - 2)
++		len += snprintf(buf + len, PAGE_SIZE - len, "\n");
+ 
+ 	return min_t(int, len, PAGE_SIZE);
+ }
+@@ -1582,6 +1638,23 @@ static int input_add_uevent_bm_var(struct kobj_uevent_env *env,
+ 	return 0;
+ }
+ 
++/*
++ * This is a pretty gross hack. When building uevent data the driver core
++ * may try adding more environment variables to kobj_uevent_env without
++ * telling us, so we have no idea how much of the buffer we can use to
++ * avoid overflows/-ENOMEM elsewhere. To work around this let's artificially
++ * reduce amount of memory we will use for the modalias environment variable.
++ *
++ * The potential additions are:
++ *
++ * SEQNUM=18446744073709551615 - (%llu - 28 bytes)
++ * HOME=/ (6 bytes)
++ * PATH=/sbin:/bin:/usr/sbin:/usr/bin (34 bytes)
++ *
++ * 68 bytes total. Allow extra buffer - 96 bytes
++ */
++#define UEVENT_ENV_EXTRA_LEN	96
++
+ static int input_add_uevent_modalias_var(struct kobj_uevent_env *env,
+ 					 struct input_dev *dev)
+ {
+@@ -1591,9 +1664,11 @@ static int input_add_uevent_modalias_var(struct kobj_uevent_env *env,
+ 		return -ENOMEM;
+ 
+ 	len = input_print_modalias(&env->buf[env->buflen - 1],
+-				   sizeof(env->buf) - env->buflen,
+-				   dev, 0);
+-	if (len >= (sizeof(env->buf) - env->buflen))
++				   (int)sizeof(env->buf) - env->buflen -
++					UEVENT_ENV_EXTRA_LEN,
++				   dev);
++	if (len >= ((int)sizeof(env->buf) - env->buflen -
++					UEVENT_ENV_EXTRA_LEN))
+ 		return -ENOMEM;
+ 
+ 	env->buflen += len;
+diff --git a/drivers/input/touchscreen/ili210x.c b/drivers/input/touchscreen/ili210x.c
+index f437eefec94ad..9452a12ddb096 100644
+--- a/drivers/input/touchscreen/ili210x.c
++++ b/drivers/input/touchscreen/ili210x.c
+@@ -231,8 +231,8 @@ static int ili251x_read_touch_data(struct i2c_client *client, u8 *data)
+ 	if (!error && data[0] == 2) {
+ 		error = i2c_master_recv(client, data + ILI251X_DATA_SIZE1,
+ 					ILI251X_DATA_SIZE2);
+-		if (error >= 0 && error != ILI251X_DATA_SIZE2)
+-			error = -EIO;
++		if (error >= 0)
++			error = error == ILI251X_DATA_SIZE2 ? 0 : -EIO;
+ 	}
+ 
+ 	return error;
+diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
+index 4a8791e037b84..c4b1a652c2c7f 100644
+--- a/drivers/iommu/amd/amd_iommu_types.h
++++ b/drivers/iommu/amd/amd_iommu_types.h
+@@ -435,6 +435,11 @@ extern bool amd_iommu_irq_remap;
+ /* kmem_cache to get tables with 128 byte alignement */
+ extern struct kmem_cache *amd_iommu_irq_cache;
+ 
++/* Make iterating over all pci segment easier */
++#define for_each_pci_segment(pci_seg) \
++	list_for_each_entry((pci_seg), &amd_iommu_pci_seg_list, list)
++#define for_each_pci_segment_safe(pci_seg, next) \
++	list_for_each_entry_safe((pci_seg), (next), &amd_iommu_pci_seg_list, list)
+ /*
+  * Make iterating over all IOMMUs easier
+  */
+@@ -494,6 +499,17 @@ struct domain_pgtable {
+ 	u64 *root;
+ };
+ 
++/*
++ * This structure contains information about one PCI segment in the system.
++ */
++struct amd_iommu_pci_seg {
++	/* List with all PCI segments in the system */
++	struct list_head list;
++
++	/* PCI segment number */
++	u16 id;
++};
++
+ /*
+  * Structure where we save information about one hardware AMD IOMMU in the
+  * system.
+@@ -545,7 +561,7 @@ struct amd_iommu {
+ 	u16 cap_ptr;
+ 
+ 	/* pci domain of this IOMMU */
+-	u16 pci_seg;
++	struct amd_iommu_pci_seg *pci_seg;
+ 
+ 	/* start of exclusion range of that IOMMU */
+ 	u64 exclusion_start;
+@@ -676,6 +692,12 @@ extern struct list_head ioapic_map;
+ extern struct list_head hpet_map;
+ extern struct list_head acpihid_map;
+ 
++/*
++ * List with all PCI segments in the system. This list is not locked because
++ * it is only written at driver initialization time
++ */
++extern struct list_head amd_iommu_pci_seg_list;
++
+ /*
+  * List with all IOMMUs in the system. This list is not locked because it is
+  * only written and read at driver initialization or suspend time
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 91cc3a5643caf..917ee5a67e787 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -165,6 +165,7 @@ LIST_HEAD(amd_iommu_unity_map);		/* a list of required unity mappings
+ 					   we find in ACPI */
+ bool amd_iommu_unmap_flush;		/* if true, flush on every unmap */
+ 
++LIST_HEAD(amd_iommu_pci_seg_list);	/* list of all PCI segments */
+ LIST_HEAD(amd_iommu_list);		/* list of all AMD IOMMUs in the
+ 					   system */
+ 
+@@ -1456,8 +1457,54 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
+ 	return 0;
+ }
+ 
++/* Allocate PCI segment data structure */
++static struct amd_iommu_pci_seg *__init alloc_pci_segment(u16 id)
++{
++	struct amd_iommu_pci_seg *pci_seg;
++
++	pci_seg = kzalloc(sizeof(struct amd_iommu_pci_seg), GFP_KERNEL);
++	if (pci_seg == NULL)
++		return NULL;
++
++	pci_seg->id = id;
++	list_add_tail(&pci_seg->list, &amd_iommu_pci_seg_list);
++
++	return pci_seg;
++}
++
++static struct amd_iommu_pci_seg *__init get_pci_segment(u16 id)
++{
++	struct amd_iommu_pci_seg *pci_seg;
++
++	for_each_pci_segment(pci_seg) {
++		if (pci_seg->id == id)
++			return pci_seg;
++	}
++
++	return alloc_pci_segment(id);
++}
++
++static void __init free_pci_segments(void)
++{
++	struct amd_iommu_pci_seg *pci_seg, *next;
++
++	for_each_pci_segment_safe(pci_seg, next) {
++		list_del(&pci_seg->list);
++		kfree(pci_seg);
++	}
++}
++
++static void __init free_sysfs(struct amd_iommu *iommu)
++{
++	if (iommu->iommu.dev) {
++		iommu_device_unregister(&iommu->iommu);
++		iommu_device_sysfs_remove(&iommu->iommu);
++	}
++}
++
+ static void __init free_iommu_one(struct amd_iommu *iommu)
+ {
++	free_sysfs(iommu);
+ 	free_cwwb_sem(iommu);
+ 	free_command_buffer(iommu);
+ 	free_event_buffer(iommu);
+@@ -1542,8 +1589,14 @@ static void amd_iommu_ats_write_check_workaround(struct amd_iommu *iommu)
+  */
+ static int __init init_iommu_one(struct amd_iommu *iommu, struct ivhd_header *h)
+ {
++	struct amd_iommu_pci_seg *pci_seg;
+ 	int ret;
+ 
++	pci_seg = get_pci_segment(h->pci_seg);
++	if (pci_seg == NULL)
++		return -ENOMEM;
++	iommu->pci_seg = pci_seg;
++
+ 	raw_spin_lock_init(&iommu->lock);
+ 	iommu->cmd_sem_val = 0;
+ 
+@@ -1564,7 +1617,6 @@ static int __init init_iommu_one(struct amd_iommu *iommu, struct ivhd_header *h)
+ 	 */
+ 	iommu->devid   = h->devid;
+ 	iommu->cap_ptr = h->cap_ptr;
+-	iommu->pci_seg = h->pci_seg;
+ 	iommu->mmio_phys = h->mmio_phys;
+ 
+ 	switch (h->type) {
+@@ -2511,6 +2563,7 @@ static void __init free_iommu_resources(void)
+ 	amd_iommu_dev_table = NULL;
+ 
+ 	free_iommu_all();
++	free_pci_segments();
+ }
+ 
+ /* SB IOAPIC is always on this device in AMD systems */
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index 982c42c873102..9ac7b37290eb0 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -2925,7 +2925,7 @@ static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
+ 	}
+ 
+ 	/* Add callback to free MSIs on teardown */
+-	devm_add_action(dev, arm_smmu_free_msis, dev);
++	devm_add_action_or_reset(dev, arm_smmu_free_msis, dev);
+ }
+ 
+ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
+diff --git a/drivers/md/bcache/bset.c b/drivers/md/bcache/bset.c
+index 67a2c47f4201a..f006c780dd4bf 100644
+--- a/drivers/md/bcache/bset.c
++++ b/drivers/md/bcache/bset.c
+@@ -54,7 +54,7 @@ void bch_dump_bucket(struct btree_keys *b)
+ int __bch_count_data(struct btree_keys *b)
+ {
+ 	unsigned int ret = 0;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct bkey *k;
+ 
+ 	if (b->ops->is_extents)
+@@ -67,7 +67,7 @@ void __bch_check_keys(struct btree_keys *b, const char *fmt, ...)
+ {
+ 	va_list args;
+ 	struct bkey *k, *p = NULL;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	const char *err;
+ 
+ 	for_each_key(b, k, &iter) {
+@@ -877,7 +877,7 @@ unsigned int bch_btree_insert_key(struct btree_keys *b, struct bkey *k,
+ 	unsigned int status = BTREE_INSERT_STATUS_NO_INSERT;
+ 	struct bset *i = bset_tree_last(b)->data;
+ 	struct bkey *m, *prev = NULL;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct bkey preceding_key_on_stack = ZERO_KEY;
+ 	struct bkey *preceding_key_p = &preceding_key_on_stack;
+ 
+@@ -893,9 +893,9 @@ unsigned int bch_btree_insert_key(struct btree_keys *b, struct bkey *k,
+ 	else
+ 		preceding_key(k, &preceding_key_p);
+ 
+-	m = bch_btree_iter_init(b, &iter, preceding_key_p);
++	m = bch_btree_iter_stack_init(b, &iter, preceding_key_p);
+ 
+-	if (b->ops->insert_fixup(b, k, &iter, replace_key))
++	if (b->ops->insert_fixup(b, k, &iter.iter, replace_key))
+ 		return status;
+ 
+ 	status = BTREE_INSERT_STATUS_INSERT;
+@@ -1096,33 +1096,33 @@ void bch_btree_iter_push(struct btree_iter *iter, struct bkey *k,
+ 				 btree_iter_cmp));
+ }
+ 
+-static struct bkey *__bch_btree_iter_init(struct btree_keys *b,
+-					  struct btree_iter *iter,
+-					  struct bkey *search,
+-					  struct bset_tree *start)
++static struct bkey *__bch_btree_iter_stack_init(struct btree_keys *b,
++						struct btree_iter_stack *iter,
++						struct bkey *search,
++						struct bset_tree *start)
+ {
+ 	struct bkey *ret = NULL;
+ 
+-	iter->size = ARRAY_SIZE(iter->data);
+-	iter->used = 0;
++	iter->iter.size = ARRAY_SIZE(iter->stack_data);
++	iter->iter.used = 0;
+ 
+ #ifdef CONFIG_BCACHE_DEBUG
+-	iter->b = b;
++	iter->iter.b = b;
+ #endif
+ 
+ 	for (; start <= bset_tree_last(b); start++) {
+ 		ret = bch_bset_search(b, start, search);
+-		bch_btree_iter_push(iter, ret, bset_bkey_last(start->data));
++		bch_btree_iter_push(&iter->iter, ret, bset_bkey_last(start->data));
+ 	}
+ 
+ 	return ret;
+ }
+ 
+-struct bkey *bch_btree_iter_init(struct btree_keys *b,
+-				 struct btree_iter *iter,
++struct bkey *bch_btree_iter_stack_init(struct btree_keys *b,
++				 struct btree_iter_stack *iter,
+ 				 struct bkey *search)
+ {
+-	return __bch_btree_iter_init(b, iter, search, b->set);
++	return __bch_btree_iter_stack_init(b, iter, search, b->set);
+ }
+ 
+ static inline struct bkey *__bch_btree_iter_next(struct btree_iter *iter,
+@@ -1289,10 +1289,10 @@ void bch_btree_sort_partial(struct btree_keys *b, unsigned int start,
+ 			    struct bset_sort_state *state)
+ {
+ 	size_t order = b->page_order, keys = 0;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	int oldsize = bch_count_data(b);
+ 
+-	__bch_btree_iter_init(b, &iter, NULL, &b->set[start]);
++	__bch_btree_iter_stack_init(b, &iter, NULL, &b->set[start]);
+ 
+ 	if (start) {
+ 		unsigned int i;
+@@ -1303,7 +1303,7 @@ void bch_btree_sort_partial(struct btree_keys *b, unsigned int start,
+ 		order = get_order(__set_bytes(b->set->data, keys));
+ 	}
+ 
+-	__btree_sort(b, &iter, start, order, false, state);
++	__btree_sort(b, &iter.iter, start, order, false, state);
+ 
+ 	EBUG_ON(oldsize >= 0 && bch_count_data(b) != oldsize);
+ }
+@@ -1319,11 +1319,11 @@ void bch_btree_sort_into(struct btree_keys *b, struct btree_keys *new,
+ 			 struct bset_sort_state *state)
+ {
+ 	uint64_t start_time = local_clock();
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 
+-	bch_btree_iter_init(b, &iter, NULL);
++	bch_btree_iter_stack_init(b, &iter, NULL);
+ 
+-	btree_mergesort(b, new->set->data, &iter, false, true);
++	btree_mergesort(b, new->set->data, &iter.iter, false, true);
+ 
+ 	bch_time_stats_update(&state->time, start_time);
+ 
+diff --git a/drivers/md/bcache/bset.h b/drivers/md/bcache/bset.h
+index a50dcfda656f5..2ed6dbd35d6e5 100644
+--- a/drivers/md/bcache/bset.h
++++ b/drivers/md/bcache/bset.h
+@@ -321,7 +321,14 @@ struct btree_iter {
+ #endif
+ 	struct btree_iter_set {
+ 		struct bkey *k, *end;
+-	} data[MAX_BSETS];
++	} data[];
++};
++
++/* Fixed-size btree_iter that can be allocated on the stack */
++
++struct btree_iter_stack {
++	struct btree_iter iter;
++	struct btree_iter_set stack_data[MAX_BSETS];
+ };
+ 
+ typedef bool (*ptr_filter_fn)(struct btree_keys *b, const struct bkey *k);
+@@ -333,9 +340,9 @@ struct bkey *bch_btree_iter_next_filter(struct btree_iter *iter,
+ 
+ void bch_btree_iter_push(struct btree_iter *iter, struct bkey *k,
+ 			 struct bkey *end);
+-struct bkey *bch_btree_iter_init(struct btree_keys *b,
+-				 struct btree_iter *iter,
+-				 struct bkey *search);
++struct bkey *bch_btree_iter_stack_init(struct btree_keys *b,
++				       struct btree_iter_stack *iter,
++				       struct bkey *search);
+ 
+ struct bkey *__bch_bset_search(struct btree_keys *b, struct bset_tree *t,
+ 			       const struct bkey *search);
+@@ -350,13 +357,14 @@ static inline struct bkey *bch_bset_search(struct btree_keys *b,
+ 	return search ? __bch_bset_search(b, t, search) : t->data->start;
+ }
+ 
+-#define for_each_key_filter(b, k, iter, filter)				\
+-	for (bch_btree_iter_init((b), (iter), NULL);			\
+-	     ((k) = bch_btree_iter_next_filter((iter), (b), filter));)
++#define for_each_key_filter(b, k, stack_iter, filter)                      \
++	for (bch_btree_iter_stack_init((b), (stack_iter), NULL);           \
++	     ((k) = bch_btree_iter_next_filter(&((stack_iter)->iter), (b), \
++					       filter));)
+ 
+-#define for_each_key(b, k, iter)					\
+-	for (bch_btree_iter_init((b), (iter), NULL);			\
+-	     ((k) = bch_btree_iter_next(iter));)
++#define for_each_key(b, k, stack_iter)                           \
++	for (bch_btree_iter_stack_init((b), (stack_iter), NULL); \
++	     ((k) = bch_btree_iter_next(&((stack_iter)->iter)));)
+ 
+ /* Sorting */
+ 
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 1a1a9554474ae..2768b4b4302d6 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -1283,7 +1283,7 @@ static bool btree_gc_mark_node(struct btree *b, struct gc_stat *gc)
+ 	uint8_t stale = 0;
+ 	unsigned int keys = 0, good_keys = 0;
+ 	struct bkey *k;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct bset_tree *t;
+ 
+ 	gc->nodes++;
+@@ -1544,7 +1544,7 @@ static int btree_gc_rewrite_node(struct btree *b, struct btree_op *op,
+ static unsigned int btree_gc_count_keys(struct btree *b)
+ {
+ 	struct bkey *k;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	unsigned int ret = 0;
+ 
+ 	for_each_key_filter(&b->keys, k, &iter, bch_ptr_bad)
+@@ -1585,17 +1585,18 @@ static int btree_gc_recurse(struct btree *b, struct btree_op *op,
+ 	int ret = 0;
+ 	bool should_rewrite;
+ 	struct bkey *k;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct gc_merge_info r[GC_MERGE_NODES];
+ 	struct gc_merge_info *i, *last = r + ARRAY_SIZE(r) - 1;
+ 
+-	bch_btree_iter_init(&b->keys, &iter, &b->c->gc_done);
++	bch_btree_iter_stack_init(&b->keys, &iter, &b->c->gc_done);
+ 
+ 	for (i = r; i < r + ARRAY_SIZE(r); i++)
+ 		i->b = ERR_PTR(-EINTR);
+ 
+ 	while (1) {
+-		k = bch_btree_iter_next_filter(&iter, &b->keys, bch_ptr_bad);
++		k = bch_btree_iter_next_filter(&iter.iter, &b->keys,
++					       bch_ptr_bad);
+ 		if (k) {
+ 			r->b = bch_btree_node_get(b->c, op, k, b->level - 1,
+ 						  true, b);
+@@ -1885,7 +1886,7 @@ static int bch_btree_check_recurse(struct btree *b, struct btree_op *op)
+ {
+ 	int ret = 0;
+ 	struct bkey *k, *p = NULL;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 
+ 	for_each_key_filter(&b->keys, k, &iter, bch_ptr_invalid)
+ 		bch_initial_mark_key(b->c, b->level, k);
+@@ -1893,10 +1894,10 @@ static int bch_btree_check_recurse(struct btree *b, struct btree_op *op)
+ 	bch_initial_mark_key(b->c, b->level + 1, &b->key);
+ 
+ 	if (b->level) {
+-		bch_btree_iter_init(&b->keys, &iter, NULL);
++		bch_btree_iter_stack_init(&b->keys, &iter, NULL);
+ 
+ 		do {
+-			k = bch_btree_iter_next_filter(&iter, &b->keys,
++			k = bch_btree_iter_next_filter(&iter.iter, &b->keys,
+ 						       bch_ptr_bad);
+ 			if (k) {
+ 				btree_node_prefetch(b, k);
+@@ -1924,7 +1925,7 @@ static int bch_btree_check_thread(void *arg)
+ 	struct btree_check_info *info = arg;
+ 	struct btree_check_state *check_state = info->state;
+ 	struct cache_set *c = check_state->c;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct bkey *k, *p;
+ 	int cur_idx, prev_idx, skip_nr;
+ 
+@@ -1933,8 +1934,8 @@ static int bch_btree_check_thread(void *arg)
+ 	ret = 0;
+ 
+ 	/* root node keys are checked before thread created */
+-	bch_btree_iter_init(&c->root->keys, &iter, NULL);
+-	k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad);
++	bch_btree_iter_stack_init(&c->root->keys, &iter, NULL);
++	k = bch_btree_iter_next_filter(&iter.iter, &c->root->keys, bch_ptr_bad);
+ 	BUG_ON(!k);
+ 
+ 	p = k;
+@@ -1952,7 +1953,7 @@ static int bch_btree_check_thread(void *arg)
+ 		skip_nr = cur_idx - prev_idx;
+ 
+ 		while (skip_nr) {
+-			k = bch_btree_iter_next_filter(&iter,
++			k = bch_btree_iter_next_filter(&iter.iter,
+ 						       &c->root->keys,
+ 						       bch_ptr_bad);
+ 			if (k)
+@@ -2025,7 +2026,7 @@ int bch_btree_check(struct cache_set *c)
+ 	int ret = 0;
+ 	int i;
+ 	struct bkey *k = NULL;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct btree_check_state check_state;
+ 
+ 	/* check and mark root node keys */
+@@ -2521,11 +2522,11 @@ static int bch_btree_map_nodes_recurse(struct btree *b, struct btree_op *op,
+ 
+ 	if (b->level) {
+ 		struct bkey *k;
+-		struct btree_iter iter;
++		struct btree_iter_stack iter;
+ 
+-		bch_btree_iter_init(&b->keys, &iter, from);
++		bch_btree_iter_stack_init(&b->keys, &iter, from);
+ 
+-		while ((k = bch_btree_iter_next_filter(&iter, &b->keys,
++		while ((k = bch_btree_iter_next_filter(&iter.iter, &b->keys,
+ 						       bch_ptr_bad))) {
+ 			ret = bcache_btree(map_nodes_recurse, k, b,
+ 				    op, from, fn, flags);
+@@ -2554,11 +2555,12 @@ int bch_btree_map_keys_recurse(struct btree *b, struct btree_op *op,
+ {
+ 	int ret = MAP_CONTINUE;
+ 	struct bkey *k;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 
+-	bch_btree_iter_init(&b->keys, &iter, from);
++	bch_btree_iter_stack_init(&b->keys, &iter, from);
+ 
+-	while ((k = bch_btree_iter_next_filter(&iter, &b->keys, bch_ptr_bad))) {
++	while ((k = bch_btree_iter_next_filter(&iter.iter, &b->keys,
++					       bch_ptr_bad))) {
+ 		ret = !b->level
+ 			? fn(op, b, k)
+ 			: bcache_btree(map_keys_recurse, k,
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 04ddaa4bbd77f..14336fd541020 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1939,8 +1939,9 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
+ 	INIT_LIST_HEAD(&c->btree_cache_freed);
+ 	INIT_LIST_HEAD(&c->data_buckets);
+ 
+-	iter_size = ((meta_bucket_pages(sb) * PAGE_SECTORS) / sb->block_size + 1) *
+-		sizeof(struct btree_iter_set);
++	iter_size = sizeof(struct btree_iter) +
++		    ((meta_bucket_pages(sb) * PAGE_SECTORS) / sb->block_size) *
++			    sizeof(struct btree_iter_set);
+ 
+ 	c->devices = kcalloc(c->nr_uuids, sizeof(void *), GFP_KERNEL);
+ 	if (!c->devices)
+diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
+index ca3e2f000cd4d..a31108625f463 100644
+--- a/drivers/md/bcache/sysfs.c
++++ b/drivers/md/bcache/sysfs.c
+@@ -639,7 +639,7 @@ static unsigned int bch_root_usage(struct cache_set *c)
+ 	unsigned int bytes = 0;
+ 	struct bkey *k;
+ 	struct btree *b;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 
+ 	goto lock_root;
+ 
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 8e3f5f004c397..9a2aac59f6bcb 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -852,15 +852,15 @@ static int bch_dirty_init_thread(void *arg)
+ 	struct dirty_init_thrd_info *info = arg;
+ 	struct bch_dirty_init_state *state = info->state;
+ 	struct cache_set *c = state->c;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct bkey *k, *p;
+ 	int cur_idx, prev_idx, skip_nr;
+ 
+ 	k = p = NULL;
+ 	prev_idx = 0;
+ 
+-	bch_btree_iter_init(&c->root->keys, &iter, NULL);
+-	k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad);
++	bch_btree_iter_stack_init(&c->root->keys, &iter, NULL);
++	k = bch_btree_iter_next_filter(&iter.iter, &c->root->keys, bch_ptr_bad);
+ 	BUG_ON(!k);
+ 
+ 	p = k;
+@@ -874,7 +874,7 @@ static int bch_dirty_init_thread(void *arg)
+ 		skip_nr = cur_idx - prev_idx;
+ 
+ 		while (skip_nr) {
+-			k = bch_btree_iter_next_filter(&iter,
++			k = bch_btree_iter_next_filter(&iter.iter,
+ 						       &c->root->keys,
+ 						       bch_ptr_bad);
+ 			if (k)
+@@ -923,7 +923,7 @@ void bch_sectors_dirty_init(struct bcache_device *d)
+ 	int i;
+ 	struct btree *b = NULL;
+ 	struct bkey *k = NULL;
+-	struct btree_iter iter;
++	struct btree_iter_stack iter;
+ 	struct sectors_dirty_init op;
+ 	struct cache_set *c = d->c;
+ 	struct bch_dirty_init_state state;
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index 23a0c209744dc..661588fc64f6a 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -974,7 +974,7 @@ int dvb_usercopy(struct file *file,
+ 		     int (*func)(struct file *file,
+ 		     unsigned int cmd, void *arg))
+ {
+-	char    sbuf[128];
++	char    sbuf[128] = {};
+ 	void    *mbuf = NULL;
+ 	void    *parg = NULL;
+ 	int     err  = -EINVAL;
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 188d847662ff7..7b8e92bcaa98d 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -400,8 +400,10 @@ static int mei_me_pci_resume(struct device *device)
+ 	}
+ 
+ 	err = mei_restart(dev);
+-	if (err)
++	if (err) {
++		free_irq(pdev->irq, dev);
+ 		return err;
++	}
+ 
+ 	/* Start timer if stopped in suspend */
+ 	schedule_delayed_work(&dev->timer_work, HZ);
+diff --git a/drivers/misc/vmw_vmci/vmci_event.c b/drivers/misc/vmw_vmci/vmci_event.c
+index e3436abf39f45..5b7ef47f4c118 100644
+--- a/drivers/misc/vmw_vmci/vmci_event.c
++++ b/drivers/misc/vmw_vmci/vmci_event.c
+@@ -9,6 +9,7 @@
+ #include <linux/vmw_vmci_api.h>
+ #include <linux/list.h>
+ #include <linux/module.h>
++#include <linux/nospec.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/rculist.h>
+@@ -86,9 +87,12 @@ static void event_deliver(struct vmci_event_msg *event_msg)
+ {
+ 	struct vmci_subscription *cur;
+ 	struct list_head *subscriber_list;
++	u32 sanitized_event, max_vmci_event;
+ 
+ 	rcu_read_lock();
+-	subscriber_list = &subscriber_array[event_msg->event_data.event];
++	max_vmci_event = ARRAY_SIZE(subscriber_array);
++	sanitized_event = array_index_nospec(event_msg->event_data.event, max_vmci_event);
++	subscriber_list = &subscriber_array[sanitized_event];
+ 	list_for_each_entry_rcu(cur, subscriber_list, node) {
+ 		cur->callback(cur->id, &event_msg->event_data,
+ 			      cur->callback_data);
+diff --git a/drivers/mmc/host/davinci_mmc.c b/drivers/mmc/host/davinci_mmc.c
+index 647928ab00a30..77258948440f9 100644
+--- a/drivers/mmc/host/davinci_mmc.c
++++ b/drivers/mmc/host/davinci_mmc.c
+@@ -1347,7 +1347,7 @@ static int davinci_mmcsd_probe(struct platform_device *pdev)
+ 	return ret;
+ }
+ 
+-static int __exit davinci_mmcsd_remove(struct platform_device *pdev)
++static int davinci_mmcsd_remove(struct platform_device *pdev)
+ {
+ 	struct mmc_davinci_host *host = platform_get_drvdata(pdev);
+ 
+@@ -1404,7 +1404,7 @@ static struct platform_driver davinci_mmcsd_driver = {
+ 		.of_match_table = davinci_mmc_dt_ids,
+ 	},
+ 	.probe		= davinci_mmcsd_probe,
+-	.remove		= __exit_p(davinci_mmcsd_remove),
++	.remove		= davinci_mmcsd_remove,
+ 	.id_table	= davinci_mmc_devtype,
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 8b02fe3916d12..7e5dab3855187 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -1380,7 +1380,7 @@ static int jmicron_pmos(struct sdhci_pci_chip *chip, int on)
+ 
+ 	ret = pci_read_config_byte(chip->pdev, 0xAE, &scratch);
+ 	if (ret)
+-		return ret;
++		goto fail;
+ 
+ 	/*
+ 	 * Turn PMOS on [bit 0], set over current detection to 2.4 V
+@@ -1391,7 +1391,10 @@ static int jmicron_pmos(struct sdhci_pci_chip *chip, int on)
+ 	else
+ 		scratch &= ~0x47;
+ 
+-	return pci_write_config_byte(chip->pdev, 0xAE, scratch);
++	ret = pci_write_config_byte(chip->pdev, 0xAE, scratch);
++
++fail:
++	return pcibios_err_to_errno(ret);
+ }
+ 
+ static int jmicron_probe(struct sdhci_pci_chip *chip)
+@@ -2308,7 +2311,7 @@ static int sdhci_pci_probe(struct pci_dev *pdev,
+ 
+ 	ret = pci_read_config_byte(pdev, PCI_SLOT_INFO, &slots);
+ 	if (ret)
+-		return ret;
++		return pcibios_err_to_errno(ret);
+ 
+ 	slots = PCI_SLOT_INFO_SLOTS(slots) + 1;
+ 	dev_dbg(&pdev->dev, "found %d slot(s)\n", slots);
+@@ -2317,7 +2320,7 @@ static int sdhci_pci_probe(struct pci_dev *pdev,
+ 
+ 	ret = pci_read_config_byte(pdev, PCI_SLOT_INFO, &first_bar);
+ 	if (ret)
+-		return ret;
++		return pcibios_err_to_errno(ret);
+ 
+ 	first_bar &= PCI_SLOT_INFO_FIRST_BAR_MASK;
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index bad01cc6823f6..9091930f58591 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -2488,26 +2488,29 @@ static int sdhci_get_cd(struct mmc_host *mmc)
+ 
+ static int sdhci_check_ro(struct sdhci_host *host)
+ {
+-	unsigned long flags;
++	bool allow_invert = false;
+ 	int is_readonly;
+ 
+-	spin_lock_irqsave(&host->lock, flags);
+-
+-	if (host->flags & SDHCI_DEVICE_DEAD)
++	if (host->flags & SDHCI_DEVICE_DEAD) {
+ 		is_readonly = 0;
+-	else if (host->ops->get_ro)
++	} else if (host->ops->get_ro) {
+ 		is_readonly = host->ops->get_ro(host);
+-	else if (mmc_can_gpio_ro(host->mmc))
++	} else if (mmc_can_gpio_ro(host->mmc)) {
+ 		is_readonly = mmc_gpio_get_ro(host->mmc);
+-	else
++		/* Do not invert twice */
++		allow_invert = !(host->mmc->caps2 & MMC_CAP2_RO_ACTIVE_HIGH);
++	} else {
+ 		is_readonly = !(sdhci_readl(host, SDHCI_PRESENT_STATE)
+ 				& SDHCI_WRITE_PROTECT);
++		allow_invert = true;
++	}
+ 
+-	spin_unlock_irqrestore(&host->lock, flags);
++	if (is_readonly >= 0 &&
++	    allow_invert &&
++	    (host->quirks & SDHCI_QUIRK_INVERTED_WRITE_PROTECT))
++		is_readonly = !is_readonly;
+ 
+-	/* This quirk needs to be replaced by a callback-function later */
+-	return host->quirks & SDHCI_QUIRK_INVERTED_WRITE_PROTECT ?
+-		!is_readonly : is_readonly;
++	return is_readonly;
+ }
+ 
+ #define SAMPLE_COUNT	5
+diff --git a/drivers/mtd/nand/spi/macronix.c b/drivers/mtd/nand/spi/macronix.c
+index 8bd3f6bf9b103..be6bdedc9b61b 100644
+--- a/drivers/mtd/nand/spi/macronix.c
++++ b/drivers/mtd/nand/spi/macronix.c
+@@ -159,6 +159,118 @@ static const struct spinand_info macronix_spinand_table[] = {
+ 		     0 /*SPINAND_HAS_QE_BIT*/,
+ 		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
+ 				     mx35lf1ge4ab_ecc_get_status)),
++
++	SPINAND_INFO("MX35LF2G14AC",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x20),
++		     NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 2, 1, 1),
++		     NAND_ECCREQ(4, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     SPINAND_HAS_QE_BIT,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
++	SPINAND_INFO("MX35UF4G24AD",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xb5),
++		     NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 2, 1, 1),
++		     NAND_ECCREQ(8, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     SPINAND_HAS_QE_BIT,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
++	SPINAND_INFO("MX35UF4GE4AD",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xb7),
++		     NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 1),
++		     NAND_ECCREQ(8, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     SPINAND_HAS_QE_BIT,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
++	SPINAND_INFO("MX35UF2G14AC",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa0),
++		     NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 2, 1, 1),
++		     NAND_ECCREQ(4, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     SPINAND_HAS_QE_BIT,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
++	SPINAND_INFO("MX35UF2G24AD",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa4),
++		     NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 2, 1, 1),
++		     NAND_ECCREQ(8, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     SPINAND_HAS_QE_BIT,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
++	SPINAND_INFO("MX35UF2GE4AD",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa6),
++		     NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
++		     NAND_ECCREQ(8, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     SPINAND_HAS_QE_BIT,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
++	SPINAND_INFO("MX35UF2GE4AC",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa2),
++		     NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 1, 1, 1),
++		     NAND_ECCREQ(4, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     SPINAND_HAS_QE_BIT,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
++	SPINAND_INFO("MX35UF1G14AC",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x90),
++		     NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
++		     NAND_ECCREQ(4, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     SPINAND_HAS_QE_BIT,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
++	SPINAND_INFO("MX35UF1G24AD",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x94),
++		     NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
++		     NAND_ECCREQ(8, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     SPINAND_HAS_QE_BIT,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
++	SPINAND_INFO("MX35UF1GE4AD",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x96),
++		     NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
++		     NAND_ECCREQ(8, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     SPINAND_HAS_QE_BIT,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
++	SPINAND_INFO("MX35UF1GE4AC",
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x92),
++		     NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
++		     NAND_ECCREQ(4, 512),
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++					      &write_cache_variants,
++					      &update_cache_variants),
++		     SPINAND_HAS_QE_BIT,
++		     SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
++				     mx35lf1ge4ab_ecc_get_status)),
++
+ };
+ 
+ static const struct spinand_manufacturer_ops macronix_spinand_manuf_ops = {
+diff --git a/drivers/mtd/parsers/redboot.c b/drivers/mtd/parsers/redboot.c
+index 4f3bcc59a6385..3351be6514732 100644
+--- a/drivers/mtd/parsers/redboot.c
++++ b/drivers/mtd/parsers/redboot.c
+@@ -102,7 +102,7 @@ static int parse_redboot_partitions(struct mtd_info *master,
+ 			offset -= master->erasesize;
+ 		}
+ 	} else {
+-		offset = directory * master->erasesize;
++		offset = (unsigned long) directory * master->erasesize;
+ 		while (mtd_block_isbad(master, offset)) {
+ 			offset += master->erasesize;
+ 			if (offset == master->size)
+diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
+index f42f2f4e4b60e..535b64155320a 100644
+--- a/drivers/net/dsa/microchip/ksz9477.c
++++ b/drivers/net/dsa/microchip/ksz9477.c
+@@ -206,10 +206,8 @@ static int ksz9477_reset_switch(struct ksz_device *dev)
+ 			   SPI_AUTO_EDGE_DETECTION, 0);
+ 
+ 	/* default configuration */
+-	ksz_read8(dev, REG_SW_LUE_CTRL_1, &data8);
+-	data8 = SW_AGING_ENABLE | SW_LINK_AUTO_AGING |
+-	      SW_SRC_ADDR_FILTER | SW_FLUSH_STP_TABLE | SW_FLUSH_MSTP_TABLE;
+-	ksz_write8(dev, REG_SW_LUE_CTRL_1, data8);
++	ksz_write8(dev, REG_SW_LUE_CTRL_1,
++		   SW_AGING_ENABLE | SW_LINK_AUTO_AGING | SW_SRC_ADDR_FILTER);
+ 
+ 	/* disable interrupts */
+ 	ksz_write32(dev, REG_SW_INT_MASK__4, SWITCH_INT_MASK);
+diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c b/drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c
+index 600de587d7a98..e70b9ccca380e 100644
+--- a/drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c
++++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c
+@@ -272,13 +272,12 @@ lio_vf_rep_copy_packet(struct octeon_device *oct,
+ 				pg_info->page_offset;
+ 			memcpy(skb->data, va, MIN_SKB_SIZE);
+ 			skb_put(skb, MIN_SKB_SIZE);
++			skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
++					pg_info->page,
++					pg_info->page_offset + MIN_SKB_SIZE,
++					len - MIN_SKB_SIZE,
++					LIO_RXBUFFER_SZ);
+ 		}
+-
+-		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+-				pg_info->page,
+-				pg_info->page_offset + MIN_SKB_SIZE,
+-				len - MIN_SKB_SIZE,
+-				LIO_RXBUFFER_SZ);
+ 	} else {
+ 		struct octeon_skb_page_info *pg_info =
+ 			((struct octeon_skb_page_info *)(skb->cb));
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 07ba0438f9655..fa202fea537f8 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -2383,11 +2383,14 @@ static int dpaa2_eth_xdp_xmit(struct net_device *net_dev, int n,
+ static int update_xps(struct dpaa2_eth_priv *priv)
+ {
+ 	struct net_device *net_dev = priv->net_dev;
+-	struct cpumask xps_mask;
+-	struct dpaa2_eth_fq *fq;
+ 	int i, num_queues, netdev_queues;
++	struct dpaa2_eth_fq *fq;
++	cpumask_var_t xps_mask;
+ 	int err = 0;
+ 
++	if (!alloc_cpumask_var(&xps_mask, GFP_KERNEL))
++		return -ENOMEM;
++
+ 	num_queues = dpaa2_eth_queue_count(priv);
+ 	netdev_queues = (net_dev->num_tc ? : 1) * num_queues;
+ 
+@@ -2397,16 +2400,17 @@ static int update_xps(struct dpaa2_eth_priv *priv)
+ 	for (i = 0; i < netdev_queues; i++) {
+ 		fq = &priv->fq[i % num_queues];
+ 
+-		cpumask_clear(&xps_mask);
+-		cpumask_set_cpu(fq->target_cpu, &xps_mask);
++		cpumask_clear(xps_mask);
++		cpumask_set_cpu(fq->target_cpu, xps_mask);
+ 
+-		err = netif_set_xps_queue(net_dev, &xps_mask, i);
++		err = netif_set_xps_queue(net_dev, xps_mask, i);
+ 		if (err) {
+ 			netdev_warn_once(net_dev, "Error setting XPS queue\n");
+ 			break;
+ 		}
+ 	}
+ 
++	free_cpumask_var(xps_mask);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index a4ab3e7efa5e4..f8275534205a7 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -2513,6 +2513,9 @@ static int hns3_alloc_ring_buffers(struct hns3_enet_ring *ring)
+ 		ret = hns3_alloc_and_attach_buffer(ring, i);
+ 		if (ret)
+ 			goto out_buffer_fail;
++
++		if (!(i % HNS3_RESCHED_BD_NUM))
++			cond_resched();
+ 	}
+ 
+ 	return 0;
+@@ -3946,6 +3949,7 @@ int hns3_init_all_ring(struct hns3_nic_priv *priv)
+ 		}
+ 
+ 		u64_stats_init(&priv->ring[i].syncp);
++		cond_resched();
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index 54d02ea4aaa7c..669cd30b9871b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -182,6 +182,8 @@ enum hns3_nic_state {
+ 
+ #define HNS3_RING_EN_B				0
+ 
++#define HNS3_RESCHED_BD_NUM			1024
++
+ enum hns3_pkt_l2t_type {
+ 	HNS3_L2_TYPE_UNICAST,
+ 	HNS3_L2_TYPE_MULTICAST,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index f1834853872da..aeb8bb3c549a1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -4393,7 +4393,7 @@ static netdev_features_t mlx5e_tunnel_features_check(struct mlx5e_priv *priv,
+ 
+ 		/* Verify if UDP port is being offloaded by HW */
+ 		if (mlx5_vxlan_lookup_port(priv->mdev->vxlan, port))
+-			return features;
++			return vxlan_features_check(skb, features);
+ 
+ #if IS_ENABLED(CONFIG_GENEVE)
+ 		/* Support Geneve offload for default UDP port */
+@@ -4414,7 +4414,6 @@ netdev_features_t mlx5e_features_check(struct sk_buff *skb,
+ 	struct mlx5e_priv *priv = netdev_priv(netdev);
+ 
+ 	features = vlan_features_check(skb, features);
+-	features = vxlan_features_check(skb, features);
+ 
+ #ifdef CONFIG_MLX5_EN_IPSEC
+ 	if (mlx5e_ipsec_feature_check(skb, netdev, features))
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index a37ca4b1e5665..324ef6990e9a7 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -272,10 +272,8 @@ static int ionic_qcq_enable(struct ionic_qcq *qcq)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (qcq->napi.poll)
+-		napi_enable(&qcq->napi);
+-
+ 	if (qcq->flags & IONIC_QCQ_F_INTR) {
++		napi_enable(&qcq->napi);
+ 		irq_set_affinity_hint(qcq->intr.vector,
+ 				      &qcq->intr.affinity_mask);
+ 		ionic_intr_mask(idev->intr_ctrl, qcq->intr.index,
+diff --git a/drivers/net/ethernet/qualcomm/qca_debug.c b/drivers/net/ethernet/qualcomm/qca_debug.c
+index 66229b300c5a4..c205d3f036961 100644
+--- a/drivers/net/ethernet/qualcomm/qca_debug.c
++++ b/drivers/net/ethernet/qualcomm/qca_debug.c
+@@ -110,10 +110,8 @@ qcaspi_info_show(struct seq_file *s, void *what)
+ 
+ 	seq_printf(s, "IRQ              : %d\n",
+ 		   qca->spi_dev->irq);
+-	seq_printf(s, "INTR REQ         : %u\n",
+-		   qca->intr_req);
+-	seq_printf(s, "INTR SVC         : %u\n",
+-		   qca->intr_svc);
++	seq_printf(s, "INTR             : %lx\n",
++		   qca->intr);
+ 
+ 	seq_printf(s, "SPI max speed    : %lu\n",
+ 		   (unsigned long)qca->spi_dev->max_speed_hz);
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index ffa1846f5b4c4..f6bc5a273477f 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -49,6 +49,8 @@
+ 
+ #define MAX_DMA_BURST_LEN 5000
+ 
++#define SPI_INTR 0
++
+ /*   Modules parameters     */
+ #define QCASPI_CLK_SPEED_MIN 1000000
+ #define QCASPI_CLK_SPEED_MAX 16000000
+@@ -585,14 +587,14 @@ qcaspi_spi_thread(void *data)
+ 			continue;
+ 		}
+ 
+-		if ((qca->intr_req == qca->intr_svc) &&
++		if (!test_bit(SPI_INTR, &qca->intr) &&
+ 		    !qca->txr.skb[qca->txr.head])
+ 			schedule();
+ 
+ 		set_current_state(TASK_RUNNING);
+ 
+-		netdev_dbg(qca->net_dev, "have work to do. int: %d, tx_skb: %p\n",
+-			   qca->intr_req - qca->intr_svc,
++		netdev_dbg(qca->net_dev, "have work to do. int: %lu, tx_skb: %p\n",
++			   qca->intr,
+ 			   qca->txr.skb[qca->txr.head]);
+ 
+ 		qcaspi_qca7k_sync(qca, QCASPI_EVENT_UPDATE);
+@@ -606,8 +608,7 @@ qcaspi_spi_thread(void *data)
+ 			msleep(QCASPI_QCA7K_REBOOT_TIME_MS);
+ 		}
+ 
+-		if (qca->intr_svc != qca->intr_req) {
+-			qca->intr_svc = qca->intr_req;
++		if (test_and_clear_bit(SPI_INTR, &qca->intr)) {
+ 			start_spi_intr_handling(qca, &intr_cause);
+ 
+ 			if (intr_cause & SPI_INT_CPU_ON) {
+@@ -669,7 +670,7 @@ qcaspi_intr_handler(int irq, void *data)
+ {
+ 	struct qcaspi *qca = data;
+ 
+-	qca->intr_req++;
++	set_bit(SPI_INTR, &qca->intr);
+ 	if (qca->spi_thread &&
+ 	    qca->spi_thread->state != TASK_RUNNING)
+ 		wake_up_process(qca->spi_thread);
+@@ -686,8 +687,7 @@ qcaspi_netdev_open(struct net_device *dev)
+ 	if (!qca)
+ 		return -EINVAL;
+ 
+-	qca->intr_req = 1;
+-	qca->intr_svc = 0;
++	set_bit(SPI_INTR, &qca->intr);
+ 	qca->sync = QCASPI_SYNC_UNKNOWN;
+ 	qcafrm_fsm_init_spi(&qca->frm_handle);
+ 
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.h b/drivers/net/ethernet/qualcomm/qca_spi.h
+index d13a67e20d650..8d4767e9b9149 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.h
++++ b/drivers/net/ethernet/qualcomm/qca_spi.h
+@@ -92,8 +92,7 @@ struct qcaspi {
+ 	struct qcafrm_handle frm_handle;
+ 	struct sk_buff *rx_skb;
+ 
+-	unsigned int intr_req;
+-	unsigned int intr_svc;
++	unsigned long intr;
+ 	u16 reset_count;
+ 
+ #ifdef CONFIG_DEBUG_FS
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index c29d43c5f4504..d24eb5ee152a5 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4247,13 +4247,12 @@ static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp,
+ 	return true;
+ }
+ 
+-static bool rtl_tx_slots_avail(struct rtl8169_private *tp,
+-			       unsigned int nr_frags)
++static bool rtl_tx_slots_avail(struct rtl8169_private *tp)
+ {
+ 	unsigned int slots_avail = tp->dirty_tx + NUM_TX_DESC - tp->cur_tx;
+ 
+ 	/* A skbuff with nr_frags needs nr_frags+1 entries in the tx queue */
+-	return slots_avail > nr_frags;
++	return slots_avail > MAX_SKB_FRAGS;
+ }
+ 
+ /* Versions RTL8102e and from RTL8168c onwards support csum_v2 */
+@@ -4279,24 +4278,19 @@ static void rtl8169_doorbell(struct rtl8169_private *tp)
+ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
+ 				      struct net_device *dev)
+ {
+-	unsigned int frags = skb_shinfo(skb)->nr_frags;
+ 	struct rtl8169_private *tp = netdev_priv(dev);
+ 	unsigned int entry = tp->cur_tx % NUM_TX_DESC;
+ 	struct TxDesc *txd_first, *txd_last;
+ 	bool stop_queue, door_bell;
++	unsigned int frags;
+ 	u32 opts[2];
+ 
+-	txd_first = tp->TxDescArray + entry;
+-
+-	if (unlikely(!rtl_tx_slots_avail(tp, frags))) {
++	if (unlikely(!rtl_tx_slots_avail(tp))) {
+ 		if (net_ratelimit())
+ 			netdev_err(dev, "BUG! Tx Ring full when queue awake!\n");
+ 		goto err_stop_0;
+ 	}
+ 
+-	if (unlikely(le32_to_cpu(txd_first->opts1) & DescOwn))
+-		goto err_stop_0;
+-
+ 	opts[1] = rtl8169_tx_vlan_tag(skb);
+ 	opts[0] = 0;
+ 
+@@ -4309,6 +4303,9 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
+ 				    entry, false)))
+ 		goto err_dma_0;
+ 
++	txd_first = tp->TxDescArray + entry;
++
++	frags = skb_shinfo(skb)->nr_frags;
+ 	if (frags) {
+ 		if (rtl8169_xmit_frags(tp, skb, opts, entry))
+ 			goto err_dma_1;
+@@ -4331,22 +4328,15 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
+ 	/* rtl_tx needs to see descriptor changes before updated tp->cur_tx */
+ 	smp_wmb();
+ 
+-	tp->cur_tx += frags + 1;
++	WRITE_ONCE(tp->cur_tx, tp->cur_tx + frags + 1);
+ 
+-	stop_queue = !rtl_tx_slots_avail(tp, MAX_SKB_FRAGS);
++	stop_queue = !rtl_tx_slots_avail(tp);
+ 	if (unlikely(stop_queue)) {
+ 		/* Avoid wrongly optimistic queue wake-up: rtl_tx thread must
+ 		 * not miss a ring update when it notices a stopped queue.
+ 		 */
+ 		smp_wmb();
+ 		netif_stop_queue(dev);
+-		door_bell = true;
+-	}
+-
+-	if (door_bell)
+-		rtl8169_doorbell(tp);
+-
+-	if (unlikely(stop_queue)) {
+ 		/* Sync with rtl_tx:
+ 		 * - publish queue status and cur_tx ring index (write barrier)
+ 		 * - refresh dirty_tx ring index (read barrier).
+@@ -4354,11 +4344,15 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
+ 		 * status and forget to wake up queue, a racing rtl_tx thread
+ 		 * can't.
+ 		 */
+-		smp_mb();
+-		if (rtl_tx_slots_avail(tp, MAX_SKB_FRAGS))
++		smp_mb__after_atomic();
++		if (rtl_tx_slots_avail(tp))
+ 			netif_start_queue(dev);
++		door_bell = true;
+ 	}
+ 
++	if (door_bell)
++		rtl8169_doorbell(tp);
++
+ 	return NETDEV_TX_OK;
+ 
+ err_dma_1:
+@@ -4469,12 +4463,11 @@ static void rtl8169_pcierr_interrupt(struct net_device *dev)
+ static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp,
+ 		   int budget)
+ {
+-	unsigned int dirty_tx, tx_left, bytes_compl = 0, pkts_compl = 0;
++	unsigned int dirty_tx, bytes_compl = 0, pkts_compl = 0;
+ 
+ 	dirty_tx = tp->dirty_tx;
+-	smp_rmb();
+ 
+-	for (tx_left = tp->cur_tx - dirty_tx; tx_left > 0; tx_left--) {
++	while (READ_ONCE(tp->cur_tx) != dirty_tx) {
+ 		unsigned int entry = dirty_tx % NUM_TX_DESC;
+ 		struct sk_buff *skb = tp->tx_skb[entry].skb;
+ 		u32 status;
+@@ -4498,7 +4491,6 @@ static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp,
+ 
+ 		rtl_inc_priv_stats(&tp->tx_stats, pkts_compl, bytes_compl);
+ 
+-		tp->dirty_tx = dirty_tx;
+ 		/* Sync with rtl8169_start_xmit:
+ 		 * - publish dirty_tx ring index (write barrier)
+ 		 * - refresh cur_tx ring index and queue status (read barrier)
+@@ -4506,11 +4498,9 @@ static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp,
+ 		 * a racing xmit thread can only have a right view of the
+ 		 * ring status.
+ 		 */
+-		smp_mb();
+-		if (netif_queue_stopped(dev) &&
+-		    rtl_tx_slots_avail(tp, MAX_SKB_FRAGS)) {
++		smp_store_mb(tp->dirty_tx, dirty_tx);
++		if (netif_queue_stopped(dev) && rtl_tx_slots_avail(tp))
+ 			netif_wake_queue(dev);
+-		}
+ 		/*
+ 		 * 8168 hack: TxPoll requests are lost when the Tx packets are
+ 		 * too close. Let's kick an extra TxPoll request when a burst
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+index 43165c662740d..4da1a80de7225 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+@@ -310,10 +310,11 @@ static int tc_setup_cbs(struct stmmac_priv *priv,
+ 			struct tc_cbs_qopt_offload *qopt)
+ {
+ 	u32 tx_queues_count = priv->plat->tx_queues_to_use;
++	s64 port_transmit_rate_kbps;
+ 	u32 queue = qopt->queue;
+-	u32 ptr, speed_div;
+ 	u32 mode_to_use;
+ 	u64 value;
++	u32 ptr;
+ 	int ret;
+ 
+ 	/* Queue 0 is not AVB capable */
+@@ -322,30 +323,30 @@ static int tc_setup_cbs(struct stmmac_priv *priv,
+ 	if (!priv->dma_cap.av)
+ 		return -EOPNOTSUPP;
+ 
+-	/* Port Transmit Rate and Speed Divider */
+-	switch (priv->speed) {
+-	case SPEED_10000:
+-		ptr = 32;
+-		speed_div = 10000000;
+-		break;
+-	case SPEED_5000:
+-		ptr = 32;
+-		speed_div = 5000000;
+-		break;
+-	case SPEED_2500:
+-		ptr = 8;
+-		speed_div = 2500000;
+-		break;
+-	case SPEED_1000:
+-		ptr = 8;
+-		speed_div = 1000000;
+-		break;
+-	case SPEED_100:
+-		ptr = 4;
+-		speed_div = 100000;
+-		break;
+-	default:
+-		return -EOPNOTSUPP;
++	port_transmit_rate_kbps = qopt->idleslope - qopt->sendslope;
++
++	if (qopt->enable) {
++		/* Port Transmit Rate and Speed Divider */
++		switch (div_s64(port_transmit_rate_kbps, 1000)) {
++		case SPEED_10000:
++		case SPEED_5000:
++			ptr = 32;
++			break;
++		case SPEED_2500:
++		case SPEED_1000:
++			ptr = 8;
++			break;
++		case SPEED_100:
++			ptr = 4;
++			break;
++		default:
++			netdev_err(priv->dev,
++				   "Invalid portTransmitRate %lld (idleSlope - sendSlope)\n",
++				   port_transmit_rate_kbps);
++			return -EINVAL;
++		}
++	} else {
++		ptr = 0;
+ 	}
+ 
+ 	mode_to_use = priv->plat->tx_queues_cfg[queue].mode_to_use;
+@@ -365,10 +366,10 @@ static int tc_setup_cbs(struct stmmac_priv *priv,
+ 	}
+ 
+ 	/* Final adjustments for HW */
+-	value = div_s64(qopt->idleslope * 1024ll * ptr, speed_div);
++	value = div_s64(qopt->idleslope * 1024ll * ptr, port_transmit_rate_kbps);
+ 	priv->plat->tx_queues_cfg[queue].idle_slope = value & GENMASK(31, 0);
+ 
+-	value = div_s64(-qopt->sendslope * 1024ll * ptr, speed_div);
++	value = div_s64(-qopt->sendslope * 1024ll * ptr, port_transmit_rate_kbps);
+ 	priv->plat->tx_queues_cfg[queue].send_slope = value & GENMASK(31, 0);
+ 
+ 	value = qopt->hicredit * 1024ll * 8;
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 2b7616f161d69..b68682006c06c 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -1410,6 +1410,7 @@ static struct mdio_device_id __maybe_unused micrel_tbl[] = {
+ 	{ PHY_ID_KSZ8081, MICREL_PHY_ID_MASK },
+ 	{ PHY_ID_KSZ8873MLL, MICREL_PHY_ID_MASK },
+ 	{ PHY_ID_KSZ886X, MICREL_PHY_ID_MASK },
++	{ PHY_ID_KSZ9477, MICREL_PHY_ID_MASK },
+ 	{ PHY_ID_LAN8814, MICREL_PHY_ID_MASK },
+ 	{ }
+ };
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 6a5f40f11db3f..d08990437f3e7 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -1952,8 +1952,7 @@ static void sfp_sm_module(struct sfp *sfp, unsigned int event)
+ 
+ 	/* Handle remove event globally, it resets this state machine */
+ 	if (event == SFP_E_REMOVE) {
+-		if (sfp->sm_mod_state > SFP_MOD_PROBE)
+-			sfp_sm_mod_remove(sfp);
++		sfp_sm_mod_remove(sfp);
+ 		sfp_sm_mod_next(sfp, SFP_MOD_EMPTY, 0);
+ 		return;
+ 	}
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index da4a2427b005f..29a04b059c110 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -346,7 +346,8 @@ static void ax88179_status(struct usbnet *dev, struct urb *urb)
+ 
+ 	if (netif_carrier_ok(dev->net) != link) {
+ 		usbnet_link_change(dev, link, 1);
+-		netdev_info(dev->net, "ax88179 - Link status is: %d\n", link);
++		if (!link)
++			netdev_info(dev->net, "ax88179 - Link status is: 0\n");
+ 	}
+ }
+ 
+@@ -1638,6 +1639,7 @@ static int ax88179_link_reset(struct usbnet *dev)
+ 			 GMII_PHY_PHYSR, 2, &tmp16);
+ 
+ 	if (!(tmp16 & GMII_PHY_PHYSR_LINK)) {
++		netdev_info(dev->net, "ax88179 - Link status is: 0\n");
+ 		return 0;
+ 	} else if (GMII_PHY_PHYSR_GIGA == (tmp16 & GMII_PHY_PHYSR_SMASK)) {
+ 		mode |= AX_MEDIUM_GIGAMODE | AX_MEDIUM_EN_125MHZ;
+@@ -1675,6 +1677,8 @@ static int ax88179_link_reset(struct usbnet *dev)
+ 
+ 	netif_carrier_on(dev->net);
+ 
++	netdev_info(dev->net, "ax88179 - Link status is: 1\n");
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/usb/rtl8150.c b/drivers/net/usb/rtl8150.c
+index bf8a60533f3e7..d128b4ac7c9f0 100644
+--- a/drivers/net/usb/rtl8150.c
++++ b/drivers/net/usb/rtl8150.c
+@@ -778,7 +778,8 @@ static int rtl8150_get_link_ksettings(struct net_device *netdev,
+ 				      struct ethtool_link_ksettings *ecmd)
+ {
+ 	rtl8150_t *dev = netdev_priv(netdev);
+-	short lpa, bmcr;
++	short lpa = 0;
++	short bmcr = 0;
+ 	u32 supported;
+ 
+ 	supported = (SUPPORTED_10baseT_Half |
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 4029c56dfcf0f..f7ed99561c192 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -3112,8 +3112,16 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 			dev->features |= dev->hw_features & NETIF_F_ALL_TSO;
+ 		/* (!csum && gso) case will be fixed by register_netdev() */
+ 	}
+-	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM))
+-		dev->features |= NETIF_F_RXCSUM;
++
++	/* 1. With VIRTIO_NET_F_GUEST_CSUM negotiation, the driver doesn't
++	 * need to calculate checksums for partially checksummed packets,
++	 * as they're considered valid by the upper layer.
++	 * 2. Without VIRTIO_NET_F_GUEST_CSUM negotiation, the driver only
++	 * receives fully checksummed packets. The device may assist in
++	 * validating these packets' checksums, so the driver won't have to.
++	 */
++	dev->features |= NETIF_F_RXCSUM;
++
+ 	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) ||
+ 	    virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6))
+ 		dev->features |= NETIF_F_GRO_HW;
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index 3096769e718ed..ec67d2eb05ecd 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -1492,6 +1492,10 @@ static bool vxlan_snoop(struct net_device *dev,
+ 	struct vxlan_fdb *f;
+ 	u32 ifindex = 0;
+ 
++	/* Ignore packets from invalid src-address */
++	if (!is_valid_ether_addr(src_mac))
++		return true;
++
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	if (src_ip->sa.sa_family == AF_INET6 &&
+ 	    (ipv6_addr_type(&src_ip->sin6.sin6_addr) & IPV6_ADDR_LINKLOCAL))
+diff --git a/drivers/net/wireless/ath/ath.h b/drivers/net/wireless/ath/ath.h
+index f02a308a9ffc5..34654f710d8a1 100644
+--- a/drivers/net/wireless/ath/ath.h
++++ b/drivers/net/wireless/ath/ath.h
+@@ -171,8 +171,10 @@ struct ath_common {
+ 	unsigned int clockrate;
+ 
+ 	spinlock_t cc_lock;
+-	struct ath_cycle_counters cc_ani;
+-	struct ath_cycle_counters cc_survey;
++	struct_group(cc,
++		struct ath_cycle_counters cc_ani;
++		struct ath_cycle_counters cc_survey;
++	);
+ 
+ 	struct ath_regulatory regulatory;
+ 	struct ath_regulatory reg_world_copy;
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index b2cfc483515c0..c5904d81d0006 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -135,8 +135,7 @@ void ath9k_ps_wakeup(struct ath_softc *sc)
+ 	if (power_mode != ATH9K_PM_AWAKE) {
+ 		spin_lock(&common->cc_lock);
+ 		ath_hw_cycle_counters_update(common);
+-		memset(&common->cc_survey, 0, sizeof(common->cc_survey));
+-		memset(&common->cc_ani, 0, sizeof(common->cc_ani));
++		memset(&common->cc, 0, sizeof(common->cc));
+ 		spin_unlock(&common->cc_lock);
+ 	}
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index ab84ac3f8f03f..bf00c2fede746 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -1699,8 +1699,8 @@ struct iwl_drv *iwl_drv_start(struct iwl_trans *trans)
+ err_fw:
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+ 	debugfs_remove_recursive(drv->dbgfs_drv);
+-	iwl_dbg_tlv_free(drv->trans);
+ #endif
++	iwl_dbg_tlv_free(drv->trans);
+ 	kfree(drv);
+ err:
+ 	return ERR_PTR(ret);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 54b28f0932e25..793208d99b5f9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -196,20 +196,10 @@ void iwl_mvm_mfu_assert_dump_notif(struct iwl_mvm *mvm,
+ {
+ 	struct iwl_rx_packet *pkt = rxb_addr(rxb);
+ 	struct iwl_mfu_assert_dump_notif *mfu_dump_notif = (void *)pkt->data;
+-	__le32 *dump_data = mfu_dump_notif->data;
+-	int n_words = le32_to_cpu(mfu_dump_notif->data_size) / sizeof(__le32);
+-	int i;
+ 
+ 	if (mfu_dump_notif->index_num == 0)
+ 		IWL_INFO(mvm, "MFUART assert id 0x%x occurred\n",
+ 			 le32_to_cpu(mfu_dump_notif->assert_id));
+-
+-	for (i = 0; i < n_words; i++)
+-		IWL_DEBUG_INFO(mvm,
+-			       "MFUART assert dump, dword %u: 0x%08x\n",
+-			       le16_to_cpu(mfu_dump_notif->index_num) *
+-			       n_words + i,
+-			       le32_to_cpu(dump_data[i]));
+ }
+ 
+ static bool iwl_alive_fn(struct iwl_notif_wait_data *notif_wait,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.h b/drivers/net/wireless/intel/iwlwifi/mvm/rs.h
+index 32104c9f8f5ee..d59a47637d120 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.h
+@@ -133,13 +133,8 @@ enum {
+ 
+ #define LINK_QUAL_AGG_FRAME_LIMIT_DEF	(63)
+ #define LINK_QUAL_AGG_FRAME_LIMIT_MAX	(63)
+-/*
+- * FIXME - various places in firmware API still use u8,
+- * e.g. LQ command and SCD config command.
+- * This should be 256 instead.
+- */
+-#define LINK_QUAL_AGG_FRAME_LIMIT_GEN2_DEF	(255)
+-#define LINK_QUAL_AGG_FRAME_LIMIT_GEN2_MAX	(255)
++#define LINK_QUAL_AGG_FRAME_LIMIT_GEN2_DEF	(64)
++#define LINK_QUAL_AGG_FRAME_LIMIT_GEN2_MAX	(64)
+ #define LINK_QUAL_AGG_FRAME_LIMIT_MIN	(0)
+ 
+ #define LQ_SIZE		2	/* 2 mode tables:  "Active" and "Search" */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index 17b9925266947..a9df48c75155b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -1354,7 +1354,7 @@ static void iwl_mvm_scan_umac_dwell(struct iwl_mvm *mvm,
+ 		if (IWL_MVM_ADWELL_MAX_BUDGET)
+ 			cmd->v7.adwell_max_budget =
+ 				cpu_to_le16(IWL_MVM_ADWELL_MAX_BUDGET);
+-		else if (params->ssids && params->ssids[0].ssid_len)
++		else if (params->n_ssids && params->ssids[0].ssid_len)
+ 			cmd->v7.adwell_max_budget =
+ 				cpu_to_le16(IWL_SCAN_ADWELL_MAX_BUDGET_DIRECTED_SCAN);
+ 		else
+@@ -1456,7 +1456,7 @@ iwl_mvm_scan_umac_dwell_v10(struct iwl_mvm *mvm,
+ 	if (IWL_MVM_ADWELL_MAX_BUDGET)
+ 		general_params->adwell_max_budget =
+ 			cpu_to_le16(IWL_MVM_ADWELL_MAX_BUDGET);
+-	else if (params->ssids && params->ssids[0].ssid_len)
++	else if (params->n_ssids && params->ssids[0].ssid_len)
+ 		general_params->adwell_max_budget =
+ 			cpu_to_le16(IWL_SCAN_ADWELL_MAX_BUDGET_DIRECTED_SCAN);
+ 	else
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
+index af0c7d74b3f5a..ffe6d243c7649 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
+@@ -892,8 +892,8 @@ static u8 _rtl92c_phy_get_rightchnlplace(u8 chnl)
+ 	u8 place = chnl;
+ 
+ 	if (chnl > 14) {
+-		for (place = 14; place < sizeof(channel5g); place++) {
+-			if (channel5g[place] == chnl) {
++		for (place = 14; place < ARRAY_SIZE(channel_all); place++) {
++			if (channel_all[place] == chnl) {
+ 				place++;
+ 				break;
+ 			}
+@@ -1359,7 +1359,7 @@ u8 rtl92d_get_rightchnlplace_for_iqk(u8 chnl)
+ 	u8 place = chnl;
+ 
+ 	if (chnl > 14) {
+-		for (place = 14; place < sizeof(channel_all); place++) {
++		for (place = 14; place < ARRAY_SIZE(channel_all); place++) {
+ 			if (channel_all[place] == chnl)
+ 				return place - 13;
+ 		}
+@@ -2417,7 +2417,7 @@ static bool _rtl92d_is_legal_5g_channel(struct ieee80211_hw *hw, u8 channel)
+ 
+ 	int i;
+ 
+-	for (i = 0; i < sizeof(channel5g); i++)
++	for (i = 0; i < ARRAY_SIZE(channel5g); i++)
+ 		if (channel == channel5g[i])
+ 			return true;
+ 	return false;
+@@ -2681,9 +2681,8 @@ void rtl92d_phy_reset_iqk_result(struct ieee80211_hw *hw)
+ 	u8 i;
+ 
+ 	rtl_dbg(rtlpriv, COMP_INIT, DBG_LOUD,
+-		"settings regs %d default regs %d\n",
+-		(int)(sizeof(rtlphy->iqk_matrix) /
+-		      sizeof(struct iqk_matrix_regs)),
++		"settings regs %zu default regs %d\n",
++		ARRAY_SIZE(rtlphy->iqk_matrix),
+ 		IQK_MATRIX_REG_NUM);
+ 	/* 0xe94, 0xe9c, 0xea4, 0xeac, 0xeb4, 0xebc, 0xec4, 0xecc */
+ 	for (i = 0; i < IQK_MATRIX_SETTINGS_NUM; i++) {
+@@ -2850,16 +2849,14 @@ u8 rtl92d_phy_sw_chnl(struct ieee80211_hw *hw)
+ 	case BAND_ON_5G:
+ 		/* Get first channel error when change between
+ 		 * 5G and 2.4G band. */
+-		if (channel <= 14)
++		if (WARN_ONCE(channel <= 14, "rtl8192de: 5G but channel<=14\n"))
+ 			return 0;
+-		WARN_ONCE((channel <= 14), "rtl8192de: 5G but channel<=14\n");
+ 		break;
+ 	case BAND_ON_2_4G:
+ 		/* Get first channel error when change between
+ 		 * 5G and 2.4G band. */
+-		if (channel > 14)
++		if (WARN_ONCE(channel > 14, "rtl8192de: 2G but channel>14\n"))
+ 			return 0;
+-		WARN_ONCE((channel > 14), "rtl8192de: 2G but channel>14\n");
+ 		break;
+ 	default:
+ 		WARN_ONCE(true, "rtl8192de: Invalid WirelessMode(%#x)!!\n",
+diff --git a/drivers/net/wireless/realtek/rtlwifi/wifi.h b/drivers/net/wireless/realtek/rtlwifi/wifi.h
+index a89e232d6963f..c997d8bfda975 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/wifi.h
++++ b/drivers/net/wireless/realtek/rtlwifi/wifi.h
+@@ -108,7 +108,6 @@
+ #define	CHANNEL_GROUP_IDX_5GM		6
+ #define	CHANNEL_GROUP_IDX_5GH		9
+ #define	CHANNEL_GROUP_MAX_5G		9
+-#define CHANNEL_MAX_NUMBER_2G		14
+ #define AVG_THERMAL_NUM			8
+ #define AVG_THERMAL_NUM_88E		4
+ #define AVG_THERMAL_NUM_8723BE		4
+diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c
+index dba8bdc3fc942..d1b72b704c319 100644
+--- a/drivers/pci/controller/pcie-rockchip-ep.c
++++ b/drivers/pci/controller/pcie-rockchip-ep.c
+@@ -131,10 +131,8 @@ static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
+ 
+ 	/* All functions share the same vendor ID with function 0 */
+ 	if (fn == 0) {
+-		u32 vid_regs = (hdr->vendorid & GENMASK(15, 0)) |
+-			       (hdr->subsys_vendor_id & GENMASK(31, 16)) << 16;
+-
+-		rockchip_pcie_write(rockchip, vid_regs,
++		rockchip_pcie_write(rockchip,
++				    hdr->vendorid | hdr->subsys_vendor_id << 16,
+ 				    PCIE_CORE_CONFIG_VENDOR);
+ 	}
+ 
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index d1631109b1422..530ced8f7abd2 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2840,6 +2840,18 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
+ 			DMI_MATCH(DMI_BOARD_VERSION, "Continental Z2"),
+ 		},
+ 	},
++	{
++		/*
++		 * Changing power state of root port dGPU is connected fails
++		 * https://gitlab.freedesktop.org/drm/amd/-/issues/3229
++		 */
++		.ident = "Hewlett-Packard HP Pavilion 17 Notebook PC/1972",
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_BOARD_NAME, "1972"),
++			DMI_MATCH(DMI_BOARD_VERSION, "95.33"),
++		},
++	},
+ #endif
+ 	{ }
+ };
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index ee99dc56c5448..3d44d6f48cc4c 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -1092,8 +1092,8 @@ static struct pinctrl *create_pinctrl(struct device *dev,
+ 		 * an -EPROBE_DEFER later, as that is the worst case.
+ 		 */
+ 		if (ret == -EPROBE_DEFER) {
+-			pinctrl_free(p, false);
+ 			mutex_unlock(&pinctrl_maps_mutex);
++			pinctrl_free(p, false);
+ 			return ERR_PTR(ret);
+ 		}
+ 	}
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 2a454098eaaa5..02b41f1bafe71 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -35,6 +35,7 @@
+ 
+ #include "core.h"
+ #include "pinconf.h"
++#include "pinctrl-rockchip.h"
+ 
+ /* GPIO control registers */
+ #define GPIO_SWPORT_DR		0x00
+@@ -50,21 +51,6 @@
+ #define GPIO_EXT_PORT		0x50
+ #define GPIO_LS_SYNC		0x60
+ 
+-enum rockchip_pinctrl_type {
+-	PX30,
+-	RV1108,
+-	RK2928,
+-	RK3066B,
+-	RK3128,
+-	RK3188,
+-	RK3288,
+-	RK3308,
+-	RK3368,
+-	RK3399,
+-	RK3568,
+-};
+-
+-
+ /**
+  * Generate a bitmask for setting a value (v) with a write mask bit in hiword
+  * register 31:16 area.
+@@ -82,103 +68,6 @@ enum rockchip_pinctrl_type {
+ #define IOMUX_WIDTH_3BIT	BIT(4)
+ #define IOMUX_WIDTH_2BIT	BIT(5)
+ 
+-/**
+- * struct rockchip_iomux
+- * @type: iomux variant using IOMUX_* constants
+- * @offset: if initialized to -1 it will be autocalculated, by specifying
+- *	    an initial offset value the relevant source offset can be reset
+- *	    to a new value for autocalculating the following iomux registers.
+- */
+-struct rockchip_iomux {
+-	int				type;
+-	int				offset;
+-};
+-
+-/*
+- * enum type index corresponding to rockchip_perpin_drv_list arrays index.
+- */
+-enum rockchip_pin_drv_type {
+-	DRV_TYPE_IO_DEFAULT = 0,
+-	DRV_TYPE_IO_1V8_OR_3V0,
+-	DRV_TYPE_IO_1V8_ONLY,
+-	DRV_TYPE_IO_1V8_3V0_AUTO,
+-	DRV_TYPE_IO_3V3_ONLY,
+-	DRV_TYPE_MAX
+-};
+-
+-/*
+- * enum type index corresponding to rockchip_pull_list arrays index.
+- */
+-enum rockchip_pin_pull_type {
+-	PULL_TYPE_IO_DEFAULT = 0,
+-	PULL_TYPE_IO_1V8_ONLY,
+-	PULL_TYPE_MAX
+-};
+-
+-/**
+- * struct rockchip_drv
+- * @drv_type: drive strength variant using rockchip_perpin_drv_type
+- * @offset: if initialized to -1 it will be autocalculated, by specifying
+- *	    an initial offset value the relevant source offset can be reset
+- *	    to a new value for autocalculating the following drive strength
+- *	    registers. if used chips own cal_drv func instead to calculate
+- *	    registers offset, the variant could be ignored.
+- */
+-struct rockchip_drv {
+-	enum rockchip_pin_drv_type	drv_type;
+-	int				offset;
+-};
+-
+-/**
+- * struct rockchip_pin_bank
+- * @reg_base: register base of the gpio bank
+- * @regmap_pull: optional separate register for additional pull settings
+- * @clk: clock of the gpio bank
+- * @irq: interrupt of the gpio bank
+- * @saved_masks: Saved content of GPIO_INTEN at suspend time.
+- * @pin_base: first pin number
+- * @nr_pins: number of pins in this bank
+- * @name: name of the bank
+- * @bank_num: number of the bank, to account for holes
+- * @iomux: array describing the 4 iomux sources of the bank
+- * @drv: array describing the 4 drive strength sources of the bank
+- * @pull_type: array describing the 4 pull type sources of the bank
+- * @valid: is all necessary information present
+- * @of_node: dt node of this bank
+- * @drvdata: common pinctrl basedata
+- * @domain: irqdomain of the gpio bank
+- * @gpio_chip: gpiolib chip
+- * @grange: gpio range
+- * @slock: spinlock for the gpio bank
+- * @toggle_edge_mode: bit mask to toggle (falling/rising) edge mode
+- * @recalced_mask: bit mask to indicate a need to recalulate the mask
+- * @route_mask: bits describing the routing pins of per bank
+- */
+-struct rockchip_pin_bank {
+-	void __iomem			*reg_base;
+-	struct regmap			*regmap_pull;
+-	struct clk			*clk;
+-	int				irq;
+-	u32				saved_masks;
+-	u32				pin_base;
+-	u8				nr_pins;
+-	char				*name;
+-	u8				bank_num;
+-	struct rockchip_iomux		iomux[4];
+-	struct rockchip_drv		drv[4];
+-	enum rockchip_pin_pull_type	pull_type[4];
+-	bool				valid;
+-	struct device_node		*of_node;
+-	struct rockchip_pinctrl		*drvdata;
+-	struct irq_domain		*domain;
+-	struct gpio_chip		gpio_chip;
+-	struct pinctrl_gpio_range	grange;
+-	raw_spinlock_t			slock;
+-	u32				toggle_edge_mode;
+-	u32				recalced_mask;
+-	u32				route_mask;
+-};
+-
+ #define PIN_BANK(id, pins, label)			\
+ 	{						\
+ 		.bank_num	= id,			\
+@@ -318,119 +207,6 @@ struct rockchip_pin_bank {
+ #define RK_MUXROUTE_PMU(ID, PIN, FUNC, REG, VAL)	\
+ 	PIN_BANK_MUX_ROUTE_FLAGS(ID, PIN, FUNC, REG, VAL, ROCKCHIP_ROUTE_PMU)
+ 
+-/**
+- * struct rockchip_mux_recalced_data: represent a pin iomux data.
+- * @num: bank number.
+- * @pin: pin number.
+- * @bit: index at register.
+- * @reg: register offset.
+- * @mask: mask bit
+- */
+-struct rockchip_mux_recalced_data {
+-	u8 num;
+-	u8 pin;
+-	u32 reg;
+-	u8 bit;
+-	u8 mask;
+-};
+-
+-enum rockchip_mux_route_location {
+-	ROCKCHIP_ROUTE_SAME = 0,
+-	ROCKCHIP_ROUTE_PMU,
+-	ROCKCHIP_ROUTE_GRF,
+-};
+-
+-/**
+- * struct rockchip_mux_recalced_data: represent a pin iomux data.
+- * @bank_num: bank number.
+- * @pin: index at register or used to calc index.
+- * @func: the min pin.
+- * @route_location: the mux route location (same, pmu, grf).
+- * @route_offset: the max pin.
+- * @route_val: the register offset.
+- */
+-struct rockchip_mux_route_data {
+-	u8 bank_num;
+-	u8 pin;
+-	u8 func;
+-	enum rockchip_mux_route_location route_location;
+-	u32 route_offset;
+-	u32 route_val;
+-};
+-
+-struct rockchip_pin_ctrl {
+-	struct rockchip_pin_bank	*pin_banks;
+-	u32				nr_banks;
+-	u32				nr_pins;
+-	char				*label;
+-	enum rockchip_pinctrl_type	type;
+-	int				grf_mux_offset;
+-	int				pmu_mux_offset;
+-	int				grf_drv_offset;
+-	int				pmu_drv_offset;
+-	struct rockchip_mux_recalced_data *iomux_recalced;
+-	u32				niomux_recalced;
+-	struct rockchip_mux_route_data *iomux_routes;
+-	u32				niomux_routes;
+-
+-	void	(*pull_calc_reg)(struct rockchip_pin_bank *bank,
+-				    int pin_num, struct regmap **regmap,
+-				    int *reg, u8 *bit);
+-	void	(*drv_calc_reg)(struct rockchip_pin_bank *bank,
+-				    int pin_num, struct regmap **regmap,
+-				    int *reg, u8 *bit);
+-	int	(*schmitt_calc_reg)(struct rockchip_pin_bank *bank,
+-				    int pin_num, struct regmap **regmap,
+-				    int *reg, u8 *bit);
+-};
+-
+-struct rockchip_pin_config {
+-	unsigned int		func;
+-	unsigned long		*configs;
+-	unsigned int		nconfigs;
+-};
+-
+-/**
+- * struct rockchip_pin_group: represent group of pins of a pinmux function.
+- * @name: name of the pin group, used to lookup the group.
+- * @pins: the pins included in this group.
+- * @npins: number of pins included in this group.
+- * @data: local pin configuration
+- */
+-struct rockchip_pin_group {
+-	const char			*name;
+-	unsigned int			npins;
+-	unsigned int			*pins;
+-	struct rockchip_pin_config	*data;
+-};
+-
+-/**
+- * struct rockchip_pmx_func: represent a pin function.
+- * @name: name of the pin function, used to lookup the function.
+- * @groups: one or more names of pin groups that provide this function.
+- * @ngroups: number of groups included in @groups.
+- */
+-struct rockchip_pmx_func {
+-	const char		*name;
+-	const char		**groups;
+-	u8			ngroups;
+-};
+-
+-struct rockchip_pinctrl {
+-	struct regmap			*regmap_base;
+-	int				reg_size;
+-	struct regmap			*regmap_pull;
+-	struct regmap			*regmap_pmu;
+-	struct device			*dev;
+-	struct rockchip_pin_ctrl	*ctrl;
+-	struct pinctrl_desc		pctl;
+-	struct pinctrl_dev		*pctl_dev;
+-	struct rockchip_pin_group	*groups;
+-	unsigned int			ngroups;
+-	struct rockchip_pmx_func	*functions;
+-	unsigned int			nfunctions;
+-};
+-
+ static struct regmap_config rockchip_regmap_config = {
+ 	.reg_bits = 32,
+ 	.val_bits = 32,
+@@ -800,23 +576,68 @@ static struct rockchip_mux_recalced_data rk3308_mux_recalced_data[] = {
+ 
+ static struct rockchip_mux_recalced_data rk3328_mux_recalced_data[] = {
+ 	{
+-		.num = 2,
+-		.pin = 12,
+-		.reg = 0x24,
+-		.bit = 8,
+-		.mask = 0x3
+-	}, {
++		/* gpio2_b7_sel */
+ 		.num = 2,
+ 		.pin = 15,
+ 		.reg = 0x28,
+ 		.bit = 0,
+ 		.mask = 0x7
+ 	}, {
++		/* gpio2_c7_sel */
+ 		.num = 2,
+ 		.pin = 23,
+ 		.reg = 0x30,
+ 		.bit = 14,
+ 		.mask = 0x3
++	}, {
++		/* gpio3_b1_sel */
++		.num = 3,
++		.pin = 9,
++		.reg = 0x44,
++		.bit = 2,
++		.mask = 0x3
++	}, {
++		/* gpio3_b2_sel */
++		.num = 3,
++		.pin = 10,
++		.reg = 0x44,
++		.bit = 4,
++		.mask = 0x3
++	}, {
++		/* gpio3_b3_sel */
++		.num = 3,
++		.pin = 11,
++		.reg = 0x44,
++		.bit = 6,
++		.mask = 0x3
++	}, {
++		/* gpio3_b4_sel */
++		.num = 3,
++		.pin = 12,
++		.reg = 0x44,
++		.bit = 8,
++		.mask = 0x3
++	}, {
++		/* gpio3_b5_sel */
++		.num = 3,
++		.pin = 13,
++		.reg = 0x44,
++		.bit = 10,
++		.mask = 0x3
++	}, {
++		/* gpio3_b6_sel */
++		.num = 3,
++		.pin = 14,
++		.reg = 0x44,
++		.bit = 12,
++		.mask = 0x3
++	}, {
++		/* gpio3_b7_sel */
++		.num = 3,
++		.pin = 15,
++		.reg = 0x44,
++		.bit = 14,
++		.mask = 0x3
+ 	},
+ };
+ 
+@@ -2043,6 +1864,7 @@ static int rockchip_get_pull(struct rockchip_pin_bank *bank, int pin_num)
+ 	case RK3188:
+ 	case RK3288:
+ 	case RK3308:
++	case RK3328:
+ 	case RK3368:
+ 	case RK3399:
+ 	case RK3568:
+@@ -2097,6 +1919,7 @@ static int rockchip_set_pull(struct rockchip_pin_bank *bank,
+ 	case RK3188:
+ 	case RK3288:
+ 	case RK3308:
++	case RK3328:
+ 	case RK3368:
+ 	case RK3399:
+ 	case RK3568:
+@@ -2308,8 +2131,10 @@ static int rockchip_pmx_set(struct pinctrl_dev *pctldev, unsigned selector,
+ 
+ 	if (ret) {
+ 		/* revert the already done pin settings */
+-		for (cnt--; cnt >= 0; cnt--)
++		for (cnt--; cnt >= 0; cnt--) {
++			bank = pin_to_bank(info, pins[cnt]);
+ 			rockchip_set_mux(bank, pins[cnt] - bank->pin_base, 0);
++		}
+ 
+ 		return ret;
+ 	}
+@@ -2418,6 +2243,7 @@ static bool rockchip_pinconf_pull_valid(struct rockchip_pin_ctrl *ctrl,
+ 	case RK3188:
+ 	case RK3288:
+ 	case RK3308:
++	case RK3328:
+ 	case RK3368:
+ 	case RK3399:
+ 	case RK3568:
+@@ -3882,7 +3708,7 @@ static struct rockchip_pin_bank rk3328_pin_banks[] = {
+ 	PIN_BANK_IOMUX_FLAGS(0, 32, "gpio0", 0, 0, 0, 0),
+ 	PIN_BANK_IOMUX_FLAGS(1, 32, "gpio1", 0, 0, 0, 0),
+ 	PIN_BANK_IOMUX_FLAGS(2, 32, "gpio2", 0,
+-			     IOMUX_WIDTH_3BIT,
++			     0,
+ 			     IOMUX_WIDTH_3BIT,
+ 			     0),
+ 	PIN_BANK_IOMUX_FLAGS(3, 32, "gpio3",
+@@ -3896,7 +3722,7 @@ static struct rockchip_pin_ctrl rk3328_pin_ctrl = {
+ 		.pin_banks		= rk3328_pin_banks,
+ 		.nr_banks		= ARRAY_SIZE(rk3328_pin_banks),
+ 		.label			= "RK3328-GPIO",
+-		.type			= RK3288,
++		.type			= RK3328,
+ 		.grf_mux_offset		= 0x0,
+ 		.iomux_recalced		= rk3328_mux_recalced_data,
+ 		.niomux_recalced	= ARRAY_SIZE(rk3328_mux_recalced_data),
+diff --git a/drivers/pinctrl/pinctrl-rockchip.h b/drivers/pinctrl/pinctrl-rockchip.h
+new file mode 100644
+index 0000000000000..7263db68d0efe
+--- /dev/null
++++ b/drivers/pinctrl/pinctrl-rockchip.h
+@@ -0,0 +1,246 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (c) 2020-2021 Rockchip Electronics Co. Ltd.
++ *
++ * Copyright (c) 2013 MundoReader S.L.
++ * Author: Heiko Stuebner <heiko@sntech.de>
++ *
++ * With some ideas taken from pinctrl-samsung:
++ * Copyright (c) 2012 Samsung Electronics Co., Ltd.
++ *		http://www.samsung.com
++ * Copyright (c) 2012 Linaro Ltd
++ *		https://www.linaro.org
++ *
++ * and pinctrl-at91:
++ * Copyright (C) 2011-2012 Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
++ */
++
++#ifndef _PINCTRL_ROCKCHIP_H
++#define _PINCTRL_ROCKCHIP_H
++
++enum rockchip_pinctrl_type {
++	PX30,
++	RV1108,
++	RK2928,
++	RK3066B,
++	RK3128,
++	RK3188,
++	RK3288,
++	RK3308,
++	RK3328,
++	RK3368,
++	RK3399,
++	RK3568,
++};
++
++/**
++ * struct rockchip_iomux
++ * @type: iomux variant using IOMUX_* constants
++ * @offset: if initialized to -1 it will be autocalculated, by specifying
++ *	    an initial offset value the relevant source offset can be reset
++ *	    to a new value for autocalculating the following iomux registers.
++ */
++struct rockchip_iomux {
++	int type;
++	int offset;
++};
++
++/*
++ * enum type index corresponding to rockchip_perpin_drv_list arrays index.
++ */
++enum rockchip_pin_drv_type {
++	DRV_TYPE_IO_DEFAULT = 0,
++	DRV_TYPE_IO_1V8_OR_3V0,
++	DRV_TYPE_IO_1V8_ONLY,
++	DRV_TYPE_IO_1V8_3V0_AUTO,
++	DRV_TYPE_IO_3V3_ONLY,
++	DRV_TYPE_MAX
++};
++
++/*
++ * enum type index corresponding to rockchip_pull_list arrays index.
++ */
++enum rockchip_pin_pull_type {
++	PULL_TYPE_IO_DEFAULT = 0,
++	PULL_TYPE_IO_1V8_ONLY,
++	PULL_TYPE_MAX
++};
++
++/**
++ * struct rockchip_drv
++ * @drv_type: drive strength variant using rockchip_perpin_drv_type
++ * @offset: if initialized to -1 it will be autocalculated, by specifying
++ *	    an initial offset value the relevant source offset can be reset
++ *	    to a new value for autocalculating the following drive strength
++ *	    registers. if used chips own cal_drv func instead to calculate
++ *	    registers offset, the variant could be ignored.
++ */
++struct rockchip_drv {
++	enum rockchip_pin_drv_type	drv_type;
++	int				offset;
++};
++
++/**
++ * struct rockchip_pin_bank
++ * @reg_base: register base of the gpio bank
++ * @regmap_pull: optional separate register for additional pull settings
++ * @clk: clock of the gpio bank
++ * @irq: interrupt of the gpio bank
++ * @saved_masks: Saved content of GPIO_INTEN at suspend time.
++ * @pin_base: first pin number
++ * @nr_pins: number of pins in this bank
++ * @name: name of the bank
++ * @bank_num: number of the bank, to account for holes
++ * @iomux: array describing the 4 iomux sources of the bank
++ * @drv: array describing the 4 drive strength sources of the bank
++ * @pull_type: array describing the 4 pull type sources of the bank
++ * @valid: is all necessary information present
++ * @of_node: dt node of this bank
++ * @drvdata: common pinctrl basedata
++ * @domain: irqdomain of the gpio bank
++ * @gpio_chip: gpiolib chip
++ * @grange: gpio range
++ * @slock: spinlock for the gpio bank
++ * @toggle_edge_mode: bit mask to toggle (falling/rising) edge mode
++ * @recalced_mask: bit mask to indicate a need to recalulate the mask
++ * @route_mask: bits describing the routing pins of per bank
++ */
++struct rockchip_pin_bank {
++	void __iomem			*reg_base;
++	struct regmap			*regmap_pull;
++	struct clk			*clk;
++	int				irq;
++	u32				saved_masks;
++	u32				pin_base;
++	u8				nr_pins;
++	char				*name;
++	u8				bank_num;
++	struct rockchip_iomux		iomux[4];
++	struct rockchip_drv		drv[4];
++	enum rockchip_pin_pull_type	pull_type[4];
++	bool				valid;
++	struct device_node		*of_node;
++	struct rockchip_pinctrl		*drvdata;
++	struct irq_domain		*domain;
++	struct gpio_chip		gpio_chip;
++	struct pinctrl_gpio_range	grange;
++	raw_spinlock_t			slock;
++	u32				toggle_edge_mode;
++	u32				recalced_mask;
++	u32				route_mask;
++};
++
++/**
++ * struct rockchip_mux_recalced_data: represent a pin iomux data.
++ * @num: bank number.
++ * @pin: pin number.
++ * @bit: index at register.
++ * @reg: register offset.
++ * @mask: mask bit
++ */
++struct rockchip_mux_recalced_data {
++	u8 num;
++	u8 pin;
++	u32 reg;
++	u8 bit;
++	u8 mask;
++};
++
++enum rockchip_mux_route_location {
++	ROCKCHIP_ROUTE_SAME = 0,
++	ROCKCHIP_ROUTE_PMU,
++	ROCKCHIP_ROUTE_GRF,
++};
++
++/**
++ * struct rockchip_mux_recalced_data: represent a pin iomux data.
++ * @bank_num: bank number.
++ * @pin: index at register or used to calc index.
++ * @func: the min pin.
++ * @route_location: the mux route location (same, pmu, grf).
++ * @route_offset: the max pin.
++ * @route_val: the register offset.
++ */
++struct rockchip_mux_route_data {
++	u8 bank_num;
++	u8 pin;
++	u8 func;
++	enum rockchip_mux_route_location route_location;
++	u32 route_offset;
++	u32 route_val;
++};
++
++struct rockchip_pin_ctrl {
++	struct rockchip_pin_bank	*pin_banks;
++	u32				nr_banks;
++	u32				nr_pins;
++	char				*label;
++	enum rockchip_pinctrl_type	type;
++	int				grf_mux_offset;
++	int				pmu_mux_offset;
++	int				grf_drv_offset;
++	int				pmu_drv_offset;
++	struct rockchip_mux_recalced_data *iomux_recalced;
++	u32				niomux_recalced;
++	struct rockchip_mux_route_data *iomux_routes;
++	u32				niomux_routes;
++
++	void	(*pull_calc_reg)(struct rockchip_pin_bank *bank,
++				    int pin_num, struct regmap **regmap,
++				    int *reg, u8 *bit);
++	void	(*drv_calc_reg)(struct rockchip_pin_bank *bank,
++				    int pin_num, struct regmap **regmap,
++				    int *reg, u8 *bit);
++	int	(*schmitt_calc_reg)(struct rockchip_pin_bank *bank,
++				    int pin_num, struct regmap **regmap,
++				    int *reg, u8 *bit);
++};
++
++struct rockchip_pin_config {
++	unsigned int		func;
++	unsigned long		*configs;
++	unsigned int		nconfigs;
++};
++
++/**
++ * struct rockchip_pin_group: represent group of pins of a pinmux function.
++ * @name: name of the pin group, used to lookup the group.
++ * @pins: the pins included in this group.
++ * @npins: number of pins included in this group.
++ * @data: local pin configuration
++ */
++struct rockchip_pin_group {
++	const char			*name;
++	unsigned int			npins;
++	unsigned int			*pins;
++	struct rockchip_pin_config	*data;
++};
++
++/**
++ * struct rockchip_pmx_func: represent a pin function.
++ * @name: name of the pin function, used to lookup the function.
++ * @groups: one or more names of pin groups that provide this function.
++ * @ngroups: number of groups included in @groups.
++ */
++struct rockchip_pmx_func {
++	const char		*name;
++	const char		**groups;
++	u8			ngroups;
++};
++
++struct rockchip_pinctrl {
++	struct regmap			*regmap_base;
++	int				reg_size;
++	struct regmap			*regmap_pull;
++	struct regmap			*regmap_pmu;
++	struct device			*dev;
++	struct rockchip_pin_ctrl	*ctrl;
++	struct pinctrl_desc		pctl;
++	struct pinctrl_dev		*pctl_dev;
++	struct rockchip_pin_group	*groups;
++	unsigned int			ngroups;
++	struct rockchip_pmx_func	*functions;
++	unsigned int			nfunctions;
++};
++
++#endif
+diff --git a/drivers/power/supply/cros_usbpd-charger.c b/drivers/power/supply/cros_usbpd-charger.c
+index 0a4f02e4ae7ba..d7ee1eb9ca880 100644
+--- a/drivers/power/supply/cros_usbpd-charger.c
++++ b/drivers/power/supply/cros_usbpd-charger.c
+@@ -5,6 +5,7 @@
+  * Copyright (c) 2014 - 2018 Google, Inc
+  */
+ 
++#include <linux/mod_devicetable.h>
+ #include <linux/module.h>
+ #include <linux/platform_data/cros_ec_commands.h>
+ #include <linux/platform_data/cros_ec_proto.h>
+@@ -711,16 +712,22 @@ static int cros_usbpd_charger_resume(struct device *dev)
+ static SIMPLE_DEV_PM_OPS(cros_usbpd_charger_pm_ops, NULL,
+ 			 cros_usbpd_charger_resume);
+ 
++static const struct platform_device_id cros_usbpd_charger_id[] = {
++	{ DRV_NAME, 0 },
++	{}
++};
++MODULE_DEVICE_TABLE(platform, cros_usbpd_charger_id);
++
+ static struct platform_driver cros_usbpd_charger_driver = {
+ 	.driver = {
+ 		.name = DRV_NAME,
+ 		.pm = &cros_usbpd_charger_pm_ops,
+ 	},
+-	.probe = cros_usbpd_charger_probe
++	.probe = cros_usbpd_charger_probe,
++	.id_table = cros_usbpd_charger_id,
+ };
+ 
+ module_platform_driver(cros_usbpd_charger_driver);
+ 
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("ChromeOS EC USBPD charger");
+-MODULE_ALIAS("platform:" DRV_NAME);
+diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c
+index 9311f3d09c8fc..8eb902fe73a98 100644
+--- a/drivers/ptp/ptp_chardev.c
++++ b/drivers/ptp/ptp_chardev.c
+@@ -84,7 +84,8 @@ int ptp_set_pinfunc(struct ptp_clock *ptp, unsigned int pin,
+ 	}
+ 
+ 	if (info->verify(info, pin, func, chan)) {
+-		pr_err("driver cannot use function %u on pin %u\n", func, chan);
++		pr_err("driver cannot use function %u and channel %u on pin %u\n",
++		       func, chan, pin);
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+diff --git a/drivers/pwm/pwm-stm32.c b/drivers/pwm/pwm-stm32.c
+index 69b7bc6049466..3e5f1b622af89 100644
+--- a/drivers/pwm/pwm-stm32.c
++++ b/drivers/pwm/pwm-stm32.c
+@@ -340,6 +340,9 @@ static int stm32_pwm_config(struct stm32_pwm *priv, int ch,
+ 
+ 	prd = div;
+ 
++	if (!prd)
++		return -EINVAL;
++
+ 	if (prescaler > MAX_TIM_PSC)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 2d1a23b9eae3b..7082cffdd10e6 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -3185,6 +3185,7 @@ struct regmap *regulator_get_regmap(struct regulator *regulator)
+ 
+ 	return map ? map : ERR_PTR(-EOPNOTSUPP);
+ }
++EXPORT_SYMBOL_GPL(regulator_get_regmap);
+ 
+ /**
+  * regulator_get_hardware_vsel_register - get the HW voltage selector register
+diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+index f92a18c06d805..48b45fd94437c 100644
+--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+@@ -429,7 +429,7 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ 	struct k3_r5_cluster *cluster = kproc->cluster;
+ 	struct mbox_client *client = &kproc->client;
+ 	struct device *dev = kproc->dev;
+-	struct k3_r5_core *core;
++	struct k3_r5_core *core0, *core;
+ 	u32 boot_addr;
+ 	int ret;
+ 
+@@ -478,6 +478,16 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ 				goto unroll_core_run;
+ 		}
+ 	} else {
++		/* do not allow core 1 to start before core 0 */
++		core0 = list_first_entry(&cluster->cores, struct k3_r5_core,
++					 elem);
++		if (core != core0 && core0->rproc->state == RPROC_OFFLINE) {
++			dev_err(dev, "%s: can not start core 1 before core 0\n",
++				__func__);
++			ret = -EPERM;
++			goto put_mbox;
++		}
++
+ 		ret = k3_r5_core_run(core);
+ 		if (ret)
+ 			goto put_mbox;
+@@ -518,7 +528,8 @@ static int k3_r5_rproc_stop(struct rproc *rproc)
+ {
+ 	struct k3_r5_rproc *kproc = rproc->priv;
+ 	struct k3_r5_cluster *cluster = kproc->cluster;
+-	struct k3_r5_core *core = kproc->core;
++	struct device *dev = kproc->dev;
++	struct k3_r5_core *core1, *core = kproc->core;
+ 	int ret;
+ 
+ 	/* halt all applicable cores */
+@@ -531,6 +542,16 @@ static int k3_r5_rproc_stop(struct rproc *rproc)
+ 			}
+ 		}
+ 	} else {
++		/* do not allow core 0 to stop before core 1 */
++		core1 = list_last_entry(&cluster->cores, struct k3_r5_core,
++					elem);
++		if (core != core1 && core1->rproc->state != RPROC_OFFLINE) {
++			dev_err(dev, "%s: can not stop core 0 before core 1\n",
++				__func__);
++			ret = -EPERM;
++			goto out;
++		}
++
+ 		ret = k3_r5_core_halt(core);
+ 		if (ret)
+ 			goto out;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 105d781d0cacf..2803b475dae6a 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -7440,6 +7440,12 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
+ 	ioc->pd_handles_sz = (ioc->facts.MaxDevHandle / 8);
+ 	if (ioc->facts.MaxDevHandle % 8)
+ 		ioc->pd_handles_sz++;
++	/*
++	 * pd_handles_sz should have, at least, the minimal room for
++	 * set_bit()/test_bit(), otherwise out-of-memory touch may occur.
++	 */
++	ioc->pd_handles_sz = ALIGN(ioc->pd_handles_sz, sizeof(unsigned long));
++
+ 	ioc->pd_handles = kzalloc(ioc->pd_handles_sz,
+ 	    GFP_KERNEL);
+ 	if (!ioc->pd_handles) {
+@@ -7457,6 +7463,13 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
+ 	ioc->pend_os_device_add_sz = (ioc->facts.MaxDevHandle / 8);
+ 	if (ioc->facts.MaxDevHandle % 8)
+ 		ioc->pend_os_device_add_sz++;
++
++	/*
++	 * pend_os_device_add_sz should have, at least, the minimal room for
++	 * set_bit()/test_bit(), otherwise out-of-memory may occur.
++	 */
++	ioc->pend_os_device_add_sz = ALIGN(ioc->pend_os_device_add_sz,
++					   sizeof(unsigned long));
+ 	ioc->pend_os_device_add = kzalloc(ioc->pend_os_device_add_sz,
+ 	    GFP_KERNEL);
+ 	if (!ioc->pend_os_device_add) {
+@@ -7747,6 +7760,12 @@ _base_check_ioc_facts_changes(struct MPT3SAS_ADAPTER *ioc)
+ 		if (ioc->facts.MaxDevHandle % 8)
+ 			pd_handles_sz++;
+ 
++		/*
++		 * pd_handles should have, at least, the minimal room for
++		 * set_bit()/test_bit(), otherwise out-of-memory touch may
++		 * occur.
++		 */
++		pd_handles_sz = ALIGN(pd_handles_sz, sizeof(unsigned long));
+ 		pd_handles = krealloc(ioc->pd_handles, pd_handles_sz,
+ 		    GFP_KERNEL);
+ 		if (!pd_handles) {
+diff --git a/drivers/scsi/qedi/qedi_debugfs.c b/drivers/scsi/qedi/qedi_debugfs.c
+index 42f5afb60055c..6e724f47ab9e8 100644
+--- a/drivers/scsi/qedi/qedi_debugfs.c
++++ b/drivers/scsi/qedi/qedi_debugfs.c
+@@ -120,15 +120,11 @@ static ssize_t
+ qedi_dbg_do_not_recover_cmd_read(struct file *filp, char __user *buffer,
+ 				 size_t count, loff_t *ppos)
+ {
+-	size_t cnt = 0;
+-
+-	if (*ppos)
+-		return 0;
++	char buf[64];
++	int len;
+ 
+-	cnt = sprintf(buffer, "do_not_recover=%d\n", qedi_do_not_recover);
+-	cnt = min_t(int, count, cnt - *ppos);
+-	*ppos += cnt;
+-	return cnt;
++	len = sprintf(buf, "do_not_recover=%d\n", qedi_do_not_recover);
++	return simple_read_from_buffer(buffer, count, ppos, buf, len);
+ }
+ 
+ static int
+diff --git a/drivers/soc/ti/ti_sci_pm_domains.c b/drivers/soc/ti/ti_sci_pm_domains.c
+index a33ec7eaf23d1..17984a7bffba5 100644
+--- a/drivers/soc/ti/ti_sci_pm_domains.c
++++ b/drivers/soc/ti/ti_sci_pm_domains.c
+@@ -114,6 +114,18 @@ static const struct of_device_id ti_sci_pm_domain_matches[] = {
+ };
+ MODULE_DEVICE_TABLE(of, ti_sci_pm_domain_matches);
+ 
++static bool ti_sci_pm_idx_exists(struct ti_sci_genpd_provider *pd_provider, u32 idx)
++{
++	struct ti_sci_pm_domain *pd;
++
++	list_for_each_entry(pd, &pd_provider->pd_list, node) {
++		if (pd->idx == idx)
++			return true;
++	}
++
++	return false;
++}
++
+ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -153,8 +165,14 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
+ 				break;
+ 
+ 			if (args.args_count >= 1 && args.np == dev->of_node) {
+-				if (args.args[0] > max_id)
++				if (args.args[0] > max_id) {
+ 					max_id = args.args[0];
++				} else {
++					if (ti_sci_pm_idx_exists(pd_provider, args.args[0])) {
++						index++;
++						continue;
++					}
++				}
+ 
+ 				pd = devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL);
+ 				if (!pd)
+diff --git a/drivers/soc/ti/wkup_m3_ipc.c b/drivers/soc/ti/wkup_m3_ipc.c
+index ef3f95fefab58..6634709e646c4 100644
+--- a/drivers/soc/ti/wkup_m3_ipc.c
++++ b/drivers/soc/ti/wkup_m3_ipc.c
+@@ -14,7 +14,6 @@
+ #include <linux/irq.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+-#include <linux/omap-mailbox.h>
+ #include <linux/platform_device.h>
+ #include <linux/remoteproc.h>
+ #include <linux/suspend.h>
+@@ -151,7 +150,6 @@ static irqreturn_t wkup_m3_txev_handler(int irq, void *ipc_data)
+ static int wkup_m3_ping(struct wkup_m3_ipc *m3_ipc)
+ {
+ 	struct device *dev = m3_ipc->dev;
+-	mbox_msg_t dummy_msg = 0;
+ 	int ret;
+ 
+ 	if (!m3_ipc->mbox) {
+@@ -167,7 +165,7 @@ static int wkup_m3_ping(struct wkup_m3_ipc *m3_ipc)
+ 	 * the RX callback to avoid multiple interrupts being received
+ 	 * by the CM3.
+ 	 */
+-	ret = mbox_send_message(m3_ipc->mbox, &dummy_msg);
++	ret = mbox_send_message(m3_ipc->mbox, NULL);
+ 	if (ret < 0) {
+ 		dev_err(dev, "%s: mbox_send_message() failed: %d\n",
+ 			__func__, ret);
+@@ -189,7 +187,6 @@ static int wkup_m3_ping(struct wkup_m3_ipc *m3_ipc)
+ static int wkup_m3_ping_noirq(struct wkup_m3_ipc *m3_ipc)
+ {
+ 	struct device *dev = m3_ipc->dev;
+-	mbox_msg_t dummy_msg = 0;
+ 	int ret;
+ 
+ 	if (!m3_ipc->mbox) {
+@@ -198,7 +195,7 @@ static int wkup_m3_ping_noirq(struct wkup_m3_ipc *m3_ipc)
+ 		return -EIO;
+ 	}
+ 
+-	ret = mbox_send_message(m3_ipc->mbox, &dummy_msg);
++	ret = mbox_send_message(m3_ipc->mbox, NULL);
+ 	if (ret < 0) {
+ 		dev_err(dev, "%s: mbox_send_message() failed: %d\n",
+ 			__func__, ret);
+diff --git a/drivers/staging/hikey9xx/hisi-spmi-controller.c b/drivers/staging/hikey9xx/hisi-spmi-controller.c
+index 29f226503668d..eee3dcf96ee79 100644
+--- a/drivers/staging/hikey9xx/hisi-spmi-controller.c
++++ b/drivers/staging/hikey9xx/hisi-spmi-controller.c
+@@ -303,7 +303,6 @@ static int spmi_controller_probe(struct platform_device *pdev)
+ 
+ 	spin_lock_init(&spmi_controller->lock);
+ 
+-	ctrl->nr = spmi_controller->channel;
+ 	ctrl->dev.parent = pdev->dev.parent;
+ 	ctrl->dev.of_node = of_node_get(pdev->dev.of_node);
+ 
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 7c28d2752a4cd..3d09f8f30e02a 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -41,8 +41,50 @@
+ #define PCI_DEVICE_ID_COMMTECH_4228PCIE		0x0021
+ #define PCI_DEVICE_ID_COMMTECH_4222PCIE		0x0022
+ 
++#define PCI_VENDOR_ID_CONNECT_TECH				0x12c4
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_SP_OPTO        0x0340
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_SP_OPTO_A      0x0341
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_SP_OPTO_B      0x0342
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XPRS           0x0350
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_A         0x0351
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_B         0x0352
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS           0x0353
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_16_XPRS_A        0x0354
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_16_XPRS_B        0x0355
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XPRS_OPTO      0x0360
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_OPTO_A    0x0361
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_OPTO_B    0x0362
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP             0x0370
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP_232         0x0371
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP_485         0x0372
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_4_SP           0x0373
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_6_2_SP           0x0374
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_6_SP           0x0375
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP_232_NS      0x0376
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XP_OPTO_LEFT   0x0380
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XP_OPTO_RIGHT  0x0381
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XP_OPTO        0x0382
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_4_XPRS_OPTO    0x0392
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP        0x03A0
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP_232    0x03A1
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP_485    0x03A2
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP_232_NS 0x03A3
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XEG001               0x0602
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XR35X_BASE           0x1000
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XR35X_2              0x1002
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XR35X_4              0x1004
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XR35X_8              0x1008
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XR35X_12             0x100C
++#define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XR35X_16             0x1010
++#define PCI_DEVICE_ID_CONNECT_TECH_PCI_XR79X_12_XIG00X          0x110c
++#define PCI_DEVICE_ID_CONNECT_TECH_PCI_XR79X_12_XIG01X          0x110d
++#define PCI_DEVICE_ID_CONNECT_TECH_PCI_XR79X_16                 0x1110
++
+ #define PCI_DEVICE_ID_EXAR_XR17V4358		0x4358
+ #define PCI_DEVICE_ID_EXAR_XR17V8358		0x8358
++#define PCI_DEVICE_ID_EXAR_XR17V252		0x0252
++#define PCI_DEVICE_ID_EXAR_XR17V254		0x0254
++#define PCI_DEVICE_ID_EXAR_XR17V258		0x0258
+ 
+ #define PCI_SUBDEVICE_ID_USR_2980		0x0128
+ #define PCI_SUBDEVICE_ID_USR_2981		0x0129
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 25765ebb756ae..ff461d0a9acc8 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -164,6 +164,10 @@ static void uart_write(struct omap8250_priv *priv, u32 reg, u32 val)
+ 	writel(val, priv->membase + (reg << OMAP_UART_REGSHIFT));
+ }
+ 
++/* Timeout low and High */
++#define UART_OMAP_TO_L                 0x26
++#define UART_OMAP_TO_H                 0x27
++
+ /*
+  * Called on runtime PM resume path from omap8250_restore_regs(), and
+  * omap8250_set_mctrl().
+@@ -647,13 +651,25 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 
+ 	/*
+ 	 * On K3 SoCs, it is observed that RX TIMEOUT is signalled after
+-	 * FIFO has been drained, in which case a dummy read of RX FIFO
+-	 * is required to clear RX TIMEOUT condition.
++	 * FIFO has been drained or erroneously.
++	 * So apply solution of Errata i2310 as mentioned in
++	 * https://www.ti.com/lit/pdf/sprz536
+ 	 */
+ 	if (priv->habit & UART_RX_TIMEOUT_QUIRK &&
+ 	    (iir & UART_IIR_RX_TIMEOUT) == UART_IIR_RX_TIMEOUT &&
+ 	    serial_port_in(port, UART_OMAP_RX_LVL) == 0) {
+-		serial_port_in(port, UART_RX);
++		unsigned char efr2, timeout_h, timeout_l;
++
++		efr2 = serial_in(up, UART_OMAP_EFR2);
++		timeout_h = serial_in(up, UART_OMAP_TO_H);
++		timeout_l = serial_in(up, UART_OMAP_TO_L);
++		serial_out(up, UART_OMAP_TO_H, 0xFF);
++		serial_out(up, UART_OMAP_TO_L, 0xFF);
++		serial_out(up, UART_OMAP_EFR2, UART_OMAP_EFR2_TIMEOUT_BEHAVE);
++		serial_in(up, UART_IIR);
++		serial_out(up, UART_OMAP_EFR2, efr2);
++		serial_out(up, UART_OMAP_TO_H, timeout_h);
++		serial_out(up, UART_OMAP_TO_L, timeout_l);
+ 	}
+ 
+ 	/* Stop processing interrupts on input overrun */
+diff --git a/drivers/tty/serial/8250/8250_pxa.c b/drivers/tty/serial/8250/8250_pxa.c
+index 33ca98bfa5b37..0780f5d7be62b 100644
+--- a/drivers/tty/serial/8250/8250_pxa.c
++++ b/drivers/tty/serial/8250/8250_pxa.c
+@@ -125,6 +125,7 @@ static int serial_pxa_probe(struct platform_device *pdev)
+ 	uart.port.regshift = 2;
+ 	uart.port.irq = irq;
+ 	uart.port.fifosize = 64;
++	uart.tx_loadsz = 32;
+ 	uart.port.flags = UPF_IOREMAP | UPF_SKIP_TEST | UPF_FIXED_TYPE;
+ 	uart.port.dev = &pdev->dev;
+ 	uart.port.uartclk = clk_get_rate(data->clk);
+diff --git a/drivers/tty/serial/mcf.c b/drivers/tty/serial/mcf.c
+index 09c88c48fb7b3..b90bb745277d9 100644
+--- a/drivers/tty/serial/mcf.c
++++ b/drivers/tty/serial/mcf.c
+@@ -479,7 +479,7 @@ static const struct uart_ops mcf_uart_ops = {
+ 	.verify_port	= mcf_verify_port,
+ };
+ 
+-static struct mcf_uart mcf_ports[4];
++static struct mcf_uart mcf_ports[10];
+ 
+ #define	MCF_MAXPORTS	ARRAY_SIZE(mcf_ports)
+ 
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index d751f8ce5cf6d..4ea52426acf9e 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -490,16 +490,28 @@ static bool sc16is7xx_regmap_noinc(struct device *dev, unsigned int reg)
+ 	return reg == SC16IS7XX_RHR_REG;
+ }
+ 
++/*
++ * Configure programmable baud rate generator (divisor) according to the
++ * desired baud rate.
++ *
++ * From the datasheet, the divisor is computed according to:
++ *
++ *              XTAL1 input frequency
++ *             -----------------------
++ *                    prescaler
++ * divisor = ---------------------------
++ *            baud-rate x sampling-rate
++ */
+ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
+ {
+ 	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+ 	u8 lcr;
+-	u8 prescaler = 0;
++	unsigned int prescaler = 1;
+ 	unsigned long clk = port->uartclk, div = clk / 16 / baud;
+ 
+-	if (div > 0xffff) {
+-		prescaler = SC16IS7XX_MCR_CLKSEL_BIT;
+-		div /= 4;
++	if (div >= BIT(16)) {
++		prescaler = 4;
++		div /= prescaler;
+ 	}
+ 
+ 	/* In an amazing feat of design, the Enhanced Features Register shares
+@@ -534,9 +546,10 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
+ 
+ 	mutex_unlock(&s->efr_lock);
+ 
++	/* If bit MCR_CLKSEL is set, the divide by 4 prescaler is activated. */
+ 	sc16is7xx_port_update(port, SC16IS7XX_MCR_REG,
+ 			      SC16IS7XX_MCR_CLKSEL_BIT,
+-			      prescaler);
++			      prescaler == 1 ? 0 : SC16IS7XX_MCR_CLKSEL_BIT);
+ 
+ 	/* Open the LCR divisors for configuration */
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG,
+@@ -551,7 +564,7 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
+ 	/* Put LCR back to the normal mode */
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, lcr);
+ 
+-	return DIV_ROUND_CLOSEST(clk / 16, div);
++	return DIV_ROUND_CLOSEST((clk / prescaler) / 16, div);
+ }
+ 
+ static void sc16is7xx_handle_rx(struct uart_port *port, unsigned int rxlen,
+diff --git a/drivers/usb/atm/cxacru.c b/drivers/usb/atm/cxacru.c
+index e62a770a5d3bf..1d2c736dbf6ae 100644
+--- a/drivers/usb/atm/cxacru.c
++++ b/drivers/usb/atm/cxacru.c
+@@ -1134,6 +1134,7 @@ static int cxacru_bind(struct usbatm_data *usbatm_instance,
+ 	struct cxacru_data *instance;
+ 	struct usb_device *usb_dev = interface_to_usbdev(intf);
+ 	struct usb_host_endpoint *cmd_ep = usb_dev->ep_in[CXACRU_EP_CMD];
++	struct usb_endpoint_descriptor *in, *out;
+ 	int ret;
+ 
+ 	/* instance init */
+@@ -1180,6 +1181,19 @@ static int cxacru_bind(struct usbatm_data *usbatm_instance,
+ 		goto fail;
+ 	}
+ 
++	if (usb_endpoint_xfer_int(&cmd_ep->desc))
++		ret = usb_find_common_endpoints(intf->cur_altsetting,
++						NULL, NULL, &in, &out);
++	else
++		ret = usb_find_common_endpoints(intf->cur_altsetting,
++						&in, &out, NULL, NULL);
++
++	if (ret) {
++		usb_err(usbatm_instance, "cxacru_bind: interface has incorrect endpoints\n");
++		ret = -ENODEV;
++		goto fail;
++	}
++
+ 	if ((cmd_ep->desc.bmAttributes & USB_ENDPOINT_XFERTYPE_MASK)
+ 			== USB_ENDPOINT_XFER_INT) {
+ 		usb_fill_int_urb(instance->rcv_urb,
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 80332b6a1963e..aa91d561a0ace 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -252,14 +252,14 @@ static void wdm_int_callback(struct urb *urb)
+ 			dev_err(&desc->intf->dev, "Stall on int endpoint\n");
+ 			goto sw; /* halt is cleared in work */
+ 		default:
+-			dev_err(&desc->intf->dev,
++			dev_err_ratelimited(&desc->intf->dev,
+ 				"nonzero urb status received: %d\n", status);
+ 			break;
+ 		}
+ 	}
+ 
+ 	if (urb->actual_length < sizeof(struct usb_cdc_notification)) {
+-		dev_err(&desc->intf->dev, "wdm_int_callback - %d bytes\n",
++		dev_err_ratelimited(&desc->intf->dev, "wdm_int_callback - %d bytes\n",
+ 			urb->actual_length);
+ 		goto exit;
+ 	}
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index ad7df99f09a4c..592c79a04d64d 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -827,6 +827,7 @@ static void ffs_user_copy_worker(struct work_struct *work)
+ 	int ret = io_data->req->status ? io_data->req->status :
+ 					 io_data->req->actual;
+ 	bool kiocb_has_eventfd = io_data->kiocb->ki_flags & IOCB_EVENTFD;
++	unsigned long flags;
+ 
+ 	if (io_data->read && ret > 0) {
+ 		kthread_use_mm(io_data->mm);
+@@ -839,7 +840,10 @@ static void ffs_user_copy_worker(struct work_struct *work)
+ 	if (io_data->ffs->ffs_eventfd && !kiocb_has_eventfd)
+ 		eventfd_signal(io_data->ffs->ffs_eventfd, 1);
+ 
++	spin_lock_irqsave(&io_data->ffs->eps_lock, flags);
+ 	usb_ep_free_request(io_data->ep, io_data->req);
++	io_data->req = NULL;
++	spin_unlock_irqrestore(&io_data->ffs->eps_lock, flags);
+ 
+ 	if (io_data->read)
+ 		kfree(io_data->to_free);
+diff --git a/drivers/usb/gadget/function/f_printer.c b/drivers/usb/gadget/function/f_printer.c
+index c13bb29a160e8..a31f2fe5d9843 100644
+--- a/drivers/usb/gadget/function/f_printer.c
++++ b/drivers/usb/gadget/function/f_printer.c
+@@ -208,6 +208,7 @@ static inline struct usb_endpoint_descriptor *ep_desc(struct usb_gadget *gadget,
+ 					struct usb_endpoint_descriptor *ss)
+ {
+ 	switch (gadget->speed) {
++	case USB_SPEED_SUPER_PLUS:
+ 	case USB_SPEED_SUPER:
+ 		return ss;
+ 	case USB_SPEED_HIGH:
+@@ -445,11 +446,8 @@ printer_read(struct file *fd, char __user *buf, size_t len, loff_t *ptr)
+ 	mutex_lock(&dev->lock_printer_io);
+ 	spin_lock_irqsave(&dev->lock, flags);
+ 
+-	if (dev->interface < 0) {
+-		spin_unlock_irqrestore(&dev->lock, flags);
+-		mutex_unlock(&dev->lock_printer_io);
+-		return -ENODEV;
+-	}
++	if (dev->interface < 0)
++		goto out_disabled;
+ 
+ 	/* We will use this flag later to check if a printer reset happened
+ 	 * after we turn interrupts back on.
+@@ -457,6 +455,9 @@ printer_read(struct file *fd, char __user *buf, size_t len, loff_t *ptr)
+ 	dev->reset_printer = 0;
+ 
+ 	setup_rx_reqs(dev);
++	/* this dropped the lock - need to retest */
++	if (dev->interface < 0)
++		goto out_disabled;
+ 
+ 	bytes_copied = 0;
+ 	current_rx_req = dev->current_rx_req;
+@@ -490,6 +491,8 @@ printer_read(struct file *fd, char __user *buf, size_t len, loff_t *ptr)
+ 		wait_event_interruptible(dev->rx_wait,
+ 				(likely(!list_empty(&dev->rx_buffers))));
+ 		spin_lock_irqsave(&dev->lock, flags);
++		if (dev->interface < 0)
++			goto out_disabled;
+ 	}
+ 
+ 	/* We have data to return then copy it to the caller's buffer.*/
+@@ -533,6 +536,9 @@ printer_read(struct file *fd, char __user *buf, size_t len, loff_t *ptr)
+ 			return -EAGAIN;
+ 		}
+ 
++		if (dev->interface < 0)
++			goto out_disabled;
++
+ 		/* If we not returning all the data left in this RX request
+ 		 * buffer then adjust the amount of data left in the buffer.
+ 		 * Othewise if we are done with this RX request buffer then
+@@ -562,6 +568,11 @@ printer_read(struct file *fd, char __user *buf, size_t len, loff_t *ptr)
+ 		return bytes_copied;
+ 	else
+ 		return -EAGAIN;
++
++out_disabled:
++	spin_unlock_irqrestore(&dev->lock, flags);
++	mutex_unlock(&dev->lock_printer_io);
++	return -ENODEV;
+ }
+ 
+ static ssize_t
+@@ -582,11 +593,8 @@ printer_write(struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ 	mutex_lock(&dev->lock_printer_io);
+ 	spin_lock_irqsave(&dev->lock, flags);
+ 
+-	if (dev->interface < 0) {
+-		spin_unlock_irqrestore(&dev->lock, flags);
+-		mutex_unlock(&dev->lock_printer_io);
+-		return -ENODEV;
+-	}
++	if (dev->interface < 0)
++		goto out_disabled;
+ 
+ 	/* Check if a printer reset happens while we have interrupts on */
+ 	dev->reset_printer = 0;
+@@ -609,6 +617,8 @@ printer_write(struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ 		wait_event_interruptible(dev->tx_wait,
+ 				(likely(!list_empty(&dev->tx_reqs))));
+ 		spin_lock_irqsave(&dev->lock, flags);
++		if (dev->interface < 0)
++			goto out_disabled;
+ 	}
+ 
+ 	while (likely(!list_empty(&dev->tx_reqs)) && len) {
+@@ -658,6 +668,9 @@ printer_write(struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ 			return -EAGAIN;
+ 		}
+ 
++		if (dev->interface < 0)
++			goto out_disabled;
++
+ 		list_add(&req->list, &dev->tx_reqs_active);
+ 
+ 		/* here, we unlock, and only unlock, to avoid deadlock. */
+@@ -671,6 +684,8 @@ printer_write(struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ 			mutex_unlock(&dev->lock_printer_io);
+ 			return -EAGAIN;
+ 		}
++		if (dev->interface < 0)
++			goto out_disabled;
+ 	}
+ 
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+@@ -682,6 +697,11 @@ printer_write(struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ 		return bytes_copied;
+ 	else
+ 		return -EAGAIN;
++
++out_disabled:
++	spin_unlock_irqrestore(&dev->lock, flags);
++	mutex_unlock(&dev->lock_printer_io);
++	return -ENODEV;
+ }
+ 
+ static int
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 53f8327873501..88f223b975d34 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -35,6 +35,7 @@
+ 
+ #define PCI_VENDOR_ID_ETRON		0x1b6f
+ #define PCI_DEVICE_ID_EJ168		0x7023
++#define PCI_DEVICE_ID_EJ188		0x7052
+ 
+ #define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI	0x8c31
+ #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI	0x9c31
+@@ -270,6 +271,12 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 		xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+ 		xhci->quirks |= XHCI_BROKEN_STREAMS;
+ 	}
++	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
++			pdev->device == PCI_DEVICE_ID_EJ188) {
++		xhci->quirks |= XHCI_RESET_ON_RESUME;
++		xhci->quirks |= XHCI_BROKEN_STREAMS;
++	}
++
+ 	if (pdev->vendor == PCI_VENDOR_ID_RENESAS &&
+ 	    pdev->device == 0x0014) {
+ 		xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 4fa387e447f08..fbb7a5b51ef46 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -2386,9 +2386,8 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
+ 		goto finish_td;
+ 	case COMP_STOPPED_LENGTH_INVALID:
+ 		/* stopped on ep trb with invalid length, exclude it */
+-		ep_trb_len	= 0;
+-		remaining	= 0;
+-		break;
++		td->urb->actual_length = sum_trb_lengths(xhci, ep_ring, ep_trb);
++		goto finish_td;
+ 	case COMP_USB_TRANSACTION_ERROR:
+ 		if (xhci->quirks & XHCI_NO_SOFT_RETRY ||
+ 		    (ep->err_count++ > MAX_SOFT_RETRY) ||
+diff --git a/drivers/usb/misc/uss720.c b/drivers/usb/misc/uss720.c
+index 0be8efcda15d5..d972c09629397 100644
+--- a/drivers/usb/misc/uss720.c
++++ b/drivers/usb/misc/uss720.c
+@@ -677,7 +677,7 @@ static int uss720_probe(struct usb_interface *intf,
+ 	struct parport_uss720_private *priv;
+ 	struct parport *pp;
+ 	unsigned char reg;
+-	int i;
++	int ret;
+ 
+ 	dev_dbg(&intf->dev, "probe: vendor id 0x%x, device id 0x%x\n",
+ 		le16_to_cpu(usbdev->descriptor.idVendor),
+@@ -688,8 +688,8 @@ static int uss720_probe(struct usb_interface *intf,
+ 		usb_put_dev(usbdev);
+ 		return -ENODEV;
+ 	}
+-	i = usb_set_interface(usbdev, intf->altsetting->desc.bInterfaceNumber, 2);
+-	dev_dbg(&intf->dev, "set interface result %d\n", i);
++	ret = usb_set_interface(usbdev, intf->altsetting->desc.bInterfaceNumber, 2);
++	dev_dbg(&intf->dev, "set interface result %d\n", ret);
+ 
+ 	interface = intf->cur_altsetting;
+ 
+@@ -725,12 +725,18 @@ static int uss720_probe(struct usb_interface *intf,
+ 	set_1284_register(pp, 7, 0x00, GFP_KERNEL);
+ 	set_1284_register(pp, 6, 0x30, GFP_KERNEL);  /* PS/2 mode */
+ 	set_1284_register(pp, 2, 0x0c, GFP_KERNEL);
+-	/* debugging */
+-	get_1284_register(pp, 0, &reg, GFP_KERNEL);
++
++	/* The Belkin F5U002 Rev 2 P80453-B USB parallel port adapter shares the
++	 * device ID 050d:0002 with some other device that works with this
++	 * driver, but it itself does not. Detect and handle the bad cable
++	 * here. */
++	ret = get_1284_register(pp, 0, &reg, GFP_KERNEL);
+ 	dev_dbg(&intf->dev, "reg: %7ph\n", priv->reg);
++	if (ret < 0)
++		return ret;
+ 
+-	i = usb_find_last_int_in_endpoint(interface, &epd);
+-	if (!i) {
++	ret = usb_find_last_int_in_endpoint(interface, &epd);
++	if (!ret) {
+ 		dev_dbg(&intf->dev, "epaddr %d interval %d\n",
+ 				epd->bEndpointAddress, epd->bInterval);
+ 	}
+diff --git a/drivers/usb/musb/da8xx.c b/drivers/usb/musb/da8xx.c
+index 1c023c0091c4b..ad336daa93447 100644
+--- a/drivers/usb/musb/da8xx.c
++++ b/drivers/usb/musb/da8xx.c
+@@ -556,7 +556,7 @@ static int da8xx_probe(struct platform_device *pdev)
+ 	ret = of_platform_populate(pdev->dev.of_node, NULL,
+ 				   da8xx_auxdata_lookup, &pdev->dev);
+ 	if (ret)
+-		return ret;
++		goto err_unregister_phy;
+ 
+ 	memset(musb_resources, 0x00, sizeof(*musb_resources) *
+ 			ARRAY_SIZE(musb_resources));
+@@ -582,9 +582,13 @@ static int da8xx_probe(struct platform_device *pdev)
+ 	ret = PTR_ERR_OR_ZERO(glue->musb);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to register musb device: %d\n", ret);
+-		usb_phy_generic_unregister(glue->usb_phy);
++		goto err_unregister_phy;
+ 	}
+ 
++	return 0;
++
++err_unregister_phy:
++	usb_phy_generic_unregister(glue->usb_phy);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/storage/alauda.c b/drivers/usb/storage/alauda.c
+index dcc4778d1ae99..17fe35083f048 100644
+--- a/drivers/usb/storage/alauda.c
++++ b/drivers/usb/storage/alauda.c
+@@ -105,6 +105,8 @@ struct alauda_info {
+ 	unsigned char sense_key;
+ 	unsigned long sense_asc;	/* additional sense code */
+ 	unsigned long sense_ascq;	/* additional sense code qualifier */
++
++	bool media_initialized;
+ };
+ 
+ #define short_pack(lsb,msb) ( ((u16)(lsb)) | ( ((u16)(msb))<<8 ) )
+@@ -476,11 +478,12 @@ static int alauda_check_media(struct us_data *us)
+ 	}
+ 
+ 	/* Check for media change */
+-	if (status[0] & 0x08) {
++	if (status[0] & 0x08 || !info->media_initialized) {
+ 		usb_stor_dbg(us, "Media change detected\n");
+ 		alauda_free_maps(&MEDIA_INFO(us));
+-		alauda_init_media(us);
+-
++		rc = alauda_init_media(us);
++		if (rc == USB_STOR_TRANSPORT_GOOD)
++			info->media_initialized = true;
+ 		info->sense_key = UNIT_ATTENTION;
+ 		info->sense_asc = 0x28;
+ 		info->sense_ascq = 0x00;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 019f0925fa73c..c484c145c5d05 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4442,19 +4442,11 @@ static int btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,
+ 				      struct btrfs_fs_info *fs_info)
+ {
+ 	struct rb_node *node;
+-	struct btrfs_delayed_ref_root *delayed_refs;
++	struct btrfs_delayed_ref_root *delayed_refs = &trans->delayed_refs;
+ 	struct btrfs_delayed_ref_node *ref;
+ 	int ret = 0;
+ 
+-	delayed_refs = &trans->delayed_refs;
+-
+ 	spin_lock(&delayed_refs->lock);
+-	if (atomic_read(&delayed_refs->num_entries) == 0) {
+-		spin_unlock(&delayed_refs->lock);
+-		btrfs_debug(fs_info, "delayed_refs has NO entry");
+-		return ret;
+-	}
+-
+ 	while ((node = rb_first_cached(&delayed_refs->href_root)) != NULL) {
+ 		struct btrfs_delayed_ref_head *head;
+ 		struct rb_node *n;
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index d659eb70df76d..adb324234b444 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -154,6 +154,7 @@ smb2_find_smb_ses_unlocked(struct TCP_Server_Info *server, __u64 ses_id)
+ 	list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
+ 		if (ses->Suid != ses_id)
+ 			continue;
++		++ses->ses_count;
+ 		return ses;
+ 	}
+ 
+@@ -205,7 +206,14 @@ smb2_find_smb_tcon(struct TCP_Server_Info *server, __u64 ses_id, __u32  tid)
+ 		return NULL;
+ 	}
+ 	tcon = smb2_find_smb_sess_tcon_unlocked(ses, tid);
++	if (!tcon) {
++		spin_unlock(&cifs_tcp_ses_lock);
++		cifs_put_smb_ses(ses);
++		return NULL;
++	}
+ 	spin_unlock(&cifs_tcp_ses_lock);
++	/* tcon already has a ref to ses, so we don't need ses anymore */
++	cifs_put_smb_ses(ses);
+ 
+ 	return tcon;
+ }
+@@ -239,7 +247,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+ 		if (rc) {
+ 			cifs_server_dbg(VFS,
+ 					"%s: sha256 alloc failed\n", __func__);
+-			return rc;
++			goto out;
+ 		}
+ 		shash = &sdesc->shash;
+ 	} else {
+@@ -290,6 +298,8 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+ out:
+ 	if (allocate_crypto)
+ 		cifs_free_hash(&hash, &sdesc);
++	if (ses)
++		cifs_put_smb_ses(ses);
+ 	return rc;
+ }
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 1281b59da6a2a..9fed42e7bb1d2 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1745,8 +1745,6 @@ static void default_options(struct f2fs_sb_info *sbi)
+ 	F2FS_OPTION(sbi).compress_mode = COMPR_MODE_FS;
+ 	F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_ON;
+ 
+-	sbi->sb->s_flags &= ~SB_INLINECRYPT;
+-
+ 	set_opt(sbi, INLINE_XATTR);
+ 	set_opt(sbi, INLINE_DATA);
+ 	set_opt(sbi, INLINE_DENTRY);
+diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c
+index db41e7803163e..7ae54f78a5b0b 100644
+--- a/fs/jfs/xattr.c
++++ b/fs/jfs/xattr.c
+@@ -557,9 +557,11 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
+ 
+       size_check:
+ 	if (EALIST_SIZE(ea_buf->xattr) != ea_size) {
++		int size = min_t(int, EALIST_SIZE(ea_buf->xattr), ea_size);
++
+ 		printk(KERN_ERR "ea_get: invalid extended attribute\n");
+ 		print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1,
+-				     ea_buf->xattr, ea_size, 1);
++				     ea_buf->xattr, size, 1);
+ 		ea_release(inode, ea_buf);
+ 		rc = -EIO;
+ 		goto clean_up;
+diff --git a/fs/nfs/read.c b/fs/nfs/read.c
+index eb854f1f86e2e..75334ba109474 100644
+--- a/fs/nfs/read.c
++++ b/fs/nfs/read.c
+@@ -103,12 +103,8 @@ static void nfs_readpage_release(struct nfs_page *req, int error)
+ 	if (nfs_error_is_fatal_on_server(error) && error != -ETIMEDOUT)
+ 		SetPageError(page);
+ 	if (nfs_page_group_sync_on_bit(req, PG_UNLOCKPAGE)) {
+-		struct address_space *mapping = page_file_mapping(page);
+-
+ 		if (PageUptodate(page))
+ 			nfs_readpage_to_fscache(inode, page, 0);
+-		else if (!PageError(page) && !PagePrivate(page))
+-			generic_error_remove_page(mapping, page);
+ 		unlock_page(page);
+ 	}
+ 	nfs_release_request(req);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 228560f3fd0e0..8e84ddccce4bf 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -2888,12 +2888,9 @@ static void
+ nfsd4_cb_recall_any_release(struct nfsd4_callback *cb)
+ {
+ 	struct nfs4_client *clp = cb->cb_clp;
+-	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+ 
+-	spin_lock(&nn->client_lock);
+ 	clear_bit(NFSD4_CLIENT_CB_RECALL_ANY, &clp->cl_flags);
+-	put_client_renew_locked(clp);
+-	spin_unlock(&nn->client_lock);
++	drop_client(clp);
+ }
+ 
+ static const struct nfsd4_callback_ops nfsd4_cb_recall_any_ops = {
+@@ -6230,7 +6227,7 @@ deleg_reaper(struct nfsd_net *nn)
+ 		list_add(&clp->cl_ra_cblist, &cblist);
+ 
+ 		/* release in nfsd4_cb_recall_any_release */
+-		atomic_inc(&clp->cl_rpc_users);
++		kref_get(&clp->cl_nfsdfs.cl_ref);
+ 		set_bit(NFSD4_CLIENT_CB_RECALL_ANY, &clp->cl_flags);
+ 		clp->cl_ra_time = ktime_get_boottime_seconds();
+ 	}
+diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
+index db8d62632a5be..ae3323e0708dd 100644
+--- a/fs/nfsd/nfsfh.c
++++ b/fs/nfsd/nfsfh.c
+@@ -573,7 +573,7 @@ fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry,
+ 		_fh_update(fhp, exp, dentry);
+ 	if (fhp->fh_handle.fh_fileid_type == FILEID_INVALID) {
+ 		fh_put(fhp);
+-		return nfserr_opnotsupp;
++		return nfserr_stale;
+ 	}
+ 
+ 	return 0;
+@@ -599,7 +599,7 @@ fh_update(struct svc_fh *fhp)
+ 
+ 	_fh_update(fhp, fhp->fh_export, dentry);
+ 	if (fhp->fh_handle.fh_fileid_type == FILEID_INVALID)
+-		return nfserr_opnotsupp;
++		return nfserr_stale;
+ 	return 0;
+ out_bad:
+ 	printk(KERN_ERR "fh_update: fh not verified!\n");
+diff --git a/fs/nilfs2/dir.c b/fs/nilfs2/dir.c
+index eb7de9e2a384e..552234ef22fe7 100644
+--- a/fs/nilfs2/dir.c
++++ b/fs/nilfs2/dir.c
+@@ -186,19 +186,24 @@ static bool nilfs_check_page(struct page *page)
+ 	return false;
+ }
+ 
+-static struct page *nilfs_get_page(struct inode *dir, unsigned long n)
++static void *nilfs_get_page(struct inode *dir, unsigned long n,
++		struct page **pagep)
+ {
+ 	struct address_space *mapping = dir->i_mapping;
+ 	struct page *page = read_mapping_page(mapping, n, NULL);
++	void *kaddr;
+ 
+-	if (!IS_ERR(page)) {
+-		kmap(page);
+-		if (unlikely(!PageChecked(page))) {
+-			if (PageError(page) || !nilfs_check_page(page))
+-				goto fail;
+-		}
++	if (IS_ERR(page))
++		return page;
++
++	kaddr = kmap(page);
++	if (unlikely(!PageChecked(page))) {
++		if (!nilfs_check_page(page))
++			goto fail;
+ 	}
+-	return page;
++
++	*pagep = page;
++	return kaddr;
+ 
+ fail:
+ 	nilfs_put_page(page);
+@@ -275,14 +280,14 @@ static int nilfs_readdir(struct file *file, struct dir_context *ctx)
+ 	for ( ; n < npages; n++, offset = 0) {
+ 		char *kaddr, *limit;
+ 		struct nilfs_dir_entry *de;
+-		struct page *page = nilfs_get_page(inode, n);
++		struct page *page;
+ 
+-		if (IS_ERR(page)) {
++		kaddr = nilfs_get_page(inode, n, &page);
++		if (IS_ERR(kaddr)) {
+ 			nilfs_error(sb, "bad page in #%lu", inode->i_ino);
+ 			ctx->pos += PAGE_SIZE - offset;
+ 			return -EIO;
+ 		}
+-		kaddr = page_address(page);
+ 		de = (struct nilfs_dir_entry *)(kaddr + offset);
+ 		limit = kaddr + nilfs_last_byte(inode, n) -
+ 			NILFS_DIR_REC_LEN(1);
+@@ -345,11 +350,9 @@ nilfs_find_entry(struct inode *dir, const struct qstr *qstr,
+ 		start = 0;
+ 	n = start;
+ 	do {
+-		char *kaddr;
++		char *kaddr = nilfs_get_page(dir, n, &page);
+ 
+-		page = nilfs_get_page(dir, n);
+-		if (!IS_ERR(page)) {
+-			kaddr = page_address(page);
++		if (!IS_ERR(kaddr)) {
+ 			de = (struct nilfs_dir_entry *)kaddr;
+ 			kaddr += nilfs_last_byte(dir, n) - reclen;
+ 			while ((char *) de <= kaddr) {
+@@ -387,15 +390,11 @@ nilfs_find_entry(struct inode *dir, const struct qstr *qstr,
+ 
+ struct nilfs_dir_entry *nilfs_dotdot(struct inode *dir, struct page **p)
+ {
+-	struct page *page = nilfs_get_page(dir, 0);
+-	struct nilfs_dir_entry *de = NULL;
++	struct nilfs_dir_entry *de = nilfs_get_page(dir, 0, p);
+ 
+-	if (!IS_ERR(page)) {
+-		de = nilfs_next_entry(
+-			(struct nilfs_dir_entry *)page_address(page));
+-		*p = page;
+-	}
+-	return de;
++	if (IS_ERR(de))
++		return NULL;
++	return nilfs_next_entry(de);
+ }
+ 
+ ino_t nilfs_inode_by_name(struct inode *dir, const struct qstr *qstr)
+@@ -459,12 +458,11 @@ int nilfs_add_link(struct dentry *dentry, struct inode *inode)
+ 	for (n = 0; n <= npages; n++) {
+ 		char *dir_end;
+ 
+-		page = nilfs_get_page(dir, n);
+-		err = PTR_ERR(page);
+-		if (IS_ERR(page))
++		kaddr = nilfs_get_page(dir, n, &page);
++		err = PTR_ERR(kaddr);
++		if (IS_ERR(kaddr))
+ 			goto out;
+ 		lock_page(page);
+-		kaddr = page_address(page);
+ 		dir_end = kaddr + nilfs_last_byte(dir, n);
+ 		de = (struct nilfs_dir_entry *)kaddr;
+ 		kaddr += PAGE_SIZE - reclen;
+@@ -627,11 +625,10 @@ int nilfs_empty_dir(struct inode *inode)
+ 		char *kaddr;
+ 		struct nilfs_dir_entry *de;
+ 
+-		page = nilfs_get_page(inode, i);
+-		if (IS_ERR(page))
+-			continue;
++		kaddr = nilfs_get_page(inode, i, &page);
++		if (IS_ERR(kaddr))
++			return 0;
+ 
+-		kaddr = page_address(page);
+ 		de = (struct nilfs_dir_entry *)kaddr;
+ 		kaddr += nilfs_last_byte(inode, i) - NILFS_DIR_REC_LEN(1);
+ 
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 2eeae67ad2983..02407c524382c 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -1697,6 +1697,7 @@ static void nilfs_segctor_prepare_write(struct nilfs_sc_info *sci)
+ 			if (bh->b_page != bd_page) {
+ 				if (bd_page) {
+ 					lock_page(bd_page);
++					wait_on_page_writeback(bd_page);
+ 					clear_page_dirty_for_io(bd_page);
+ 					set_page_writeback(bd_page);
+ 					unlock_page(bd_page);
+@@ -1710,6 +1711,7 @@ static void nilfs_segctor_prepare_write(struct nilfs_sc_info *sci)
+ 			if (bh == segbuf->sb_super_root) {
+ 				if (bh->b_page != bd_page) {
+ 					lock_page(bd_page);
++					wait_on_page_writeback(bd_page);
+ 					clear_page_dirty_for_io(bd_page);
+ 					set_page_writeback(bd_page);
+ 					unlock_page(bd_page);
+@@ -1726,6 +1728,7 @@ static void nilfs_segctor_prepare_write(struct nilfs_sc_info *sci)
+ 	}
+ 	if (bd_page) {
+ 		lock_page(bd_page);
++		wait_on_page_writeback(bd_page);
+ 		clear_page_dirty_for_io(bd_page);
+ 		set_page_writeback(bd_page);
+ 		unlock_page(bd_page);
+diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
+index 9b23e74036eb9..1a5f23e79f5e5 100644
+--- a/fs/ocfs2/aops.c
++++ b/fs/ocfs2/aops.c
+@@ -2373,6 +2373,11 @@ static int ocfs2_dio_end_io_write(struct inode *inode,
+ 	}
+ 
+ 	list_for_each_entry(ue, &dwc->dw_zero_list, ue_node) {
++		ret = ocfs2_assure_trans_credits(handle, credits);
++		if (ret < 0) {
++			mlog_errno(ret);
++			break;
++		}
+ 		ret = ocfs2_mark_extent_written(inode, &et, handle,
+ 						ue->ue_cpos, 1,
+ 						ue->ue_phys,
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index df36d84aedc48..5fd565a6228f7 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -1940,6 +1940,8 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 
+ 	inode_lock(inode);
+ 
++	/* Wait all existing dio workers, newcomers will block on i_rwsem */
++	inode_dio_wait(inode);
+ 	/*
+ 	 * This prevents concurrent writes on other nodes
+ 	 */
+diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
+index 0534800a472a1..dfa6ff2756fb6 100644
+--- a/fs/ocfs2/journal.c
++++ b/fs/ocfs2/journal.c
+@@ -449,6 +449,23 @@ int ocfs2_extend_trans(handle_t *handle, int nblocks)
+ 	return status;
+ }
+ 
++/*
++ * Make sure handle has at least 'nblocks' credits available. If it does not
++ * have that many credits available, we will try to extend the handle to have
++ * enough credits. If that fails, we will restart transaction to have enough
++ * credits. Similar notes regarding data consistency and locking implications
++ * as for ocfs2_extend_trans() apply here.
++ */
++int ocfs2_assure_trans_credits(handle_t *handle, int nblocks)
++{
++	int old_nblks = jbd2_handle_buffer_credits(handle);
++
++	trace_ocfs2_assure_trans_credits(old_nblks);
++	if (old_nblks >= nblocks)
++		return 0;
++	return ocfs2_extend_trans(handle, nblocks - old_nblks);
++}
++
+ /*
+  * If we have fewer than thresh credits, extend by OCFS2_MAX_TRANS_DATA.
+  * If that fails, restart the transaction & regain write access for the
+diff --git a/fs/ocfs2/journal.h b/fs/ocfs2/journal.h
+index eb7a21bac71ef..bc5d77cb3c500 100644
+--- a/fs/ocfs2/journal.h
++++ b/fs/ocfs2/journal.h
+@@ -244,6 +244,8 @@ handle_t		    *ocfs2_start_trans(struct ocfs2_super *osb,
+ int			     ocfs2_commit_trans(struct ocfs2_super *osb,
+ 						handle_t *handle);
+ int			     ocfs2_extend_trans(handle_t *handle, int nblocks);
++int			     ocfs2_assure_trans_credits(handle_t *handle,
++						int nblocks);
+ int			     ocfs2_allocate_extend_trans(handle_t *handle,
+ 						int thresh);
+ 
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index 5c98813b3dcaf..7bdda635ca80e 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -566,7 +566,7 @@ static int __ocfs2_mknod_locked(struct inode *dir,
+ 	fe->i_last_eb_blk = 0;
+ 	strcpy(fe->i_signature, OCFS2_INODE_SIGNATURE);
+ 	fe->i_flags |= cpu_to_le32(OCFS2_VALID_FL);
+-	ktime_get_real_ts64(&ts);
++	ktime_get_coarse_real_ts64(&ts);
+ 	fe->i_atime = fe->i_ctime = fe->i_mtime =
+ 		cpu_to_le64(ts.tv_sec);
+ 	fe->i_mtime_nsec = fe->i_ctime_nsec = fe->i_atime_nsec =
+diff --git a/fs/ocfs2/ocfs2_trace.h b/fs/ocfs2/ocfs2_trace.h
+index dc4bce1649c1b..7a9cfd61145a0 100644
+--- a/fs/ocfs2/ocfs2_trace.h
++++ b/fs/ocfs2/ocfs2_trace.h
+@@ -2578,6 +2578,8 @@ DEFINE_OCFS2_ULL_UINT_EVENT(ocfs2_commit_cache_end);
+ 
+ DEFINE_OCFS2_INT_INT_EVENT(ocfs2_extend_trans);
+ 
++DEFINE_OCFS2_INT_EVENT(ocfs2_assure_trans_credits);
++
+ DEFINE_OCFS2_INT_EVENT(ocfs2_extend_trans_restart);
+ 
+ DEFINE_OCFS2_INT_INT_EVENT(ocfs2_allocate_extend_trans);
+diff --git a/fs/open.c b/fs/open.c
+index d69312a2d434b..694110929519c 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -200,13 +200,13 @@ long do_sys_ftruncate(unsigned int fd, loff_t length, int small)
+ 	return error;
+ }
+ 
+-SYSCALL_DEFINE2(ftruncate, unsigned int, fd, unsigned long, length)
++SYSCALL_DEFINE2(ftruncate, unsigned int, fd, off_t, length)
+ {
+ 	return do_sys_ftruncate(fd, length, 1);
+ }
+ 
+ #ifdef CONFIG_COMPAT
+-COMPAT_SYSCALL_DEFINE2(ftruncate, unsigned int, fd, compat_ulong_t, length)
++COMPAT_SYSCALL_DEFINE2(ftruncate, unsigned int, fd, compat_off_t, length)
+ {
+ 	return do_sys_ftruncate(fd, length, 1);
+ }
+diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
+index 0e4278d4a7691..0833676da5f40 100644
+--- a/fs/proc/vmcore.c
++++ b/fs/proc/vmcore.c
+@@ -373,6 +373,8 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
+ 		/* leave now if filled buffer already */
+ 		if (buflen == 0)
+ 			return acc;
++
++		cond_resched();
+ 	}
+ 
+ 	list_for_each_entry(m, &vmcore_list, list) {
+diff --git a/fs/udf/udftime.c b/fs/udf/udftime.c
+index fce4ad976c8c2..26169b1f482c3 100644
+--- a/fs/udf/udftime.c
++++ b/fs/udf/udftime.c
+@@ -60,13 +60,18 @@ udf_disk_stamp_to_time(struct timespec64 *dest, struct timestamp src)
+ 	dest->tv_sec = mktime64(year, src.month, src.day, src.hour, src.minute,
+ 			src.second);
+ 	dest->tv_sec -= offset * 60;
+-	dest->tv_nsec = 1000 * (src.centiseconds * 10000 +
+-			src.hundredsOfMicroseconds * 100 + src.microseconds);
++
+ 	/*
+ 	 * Sanitize nanosecond field since reportedly some filesystems are
+ 	 * recorded with bogus sub-second values.
+ 	 */
+-	dest->tv_nsec %= NSEC_PER_SEC;
++	if (src.centiseconds < 100 && src.hundredsOfMicroseconds < 100 &&
++	    src.microseconds < 100) {
++		dest->tv_nsec = 1000 * (src.centiseconds * 10000 +
++			src.hundredsOfMicroseconds * 100 + src.microseconds);
++	} else {
++		dest->tv_nsec = 0;
++	}
+ }
+ 
+ void
+diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
+index a8d8fdcd37230..92348c085c0c7 100644
+--- a/include/kvm/arm_vgic.h
++++ b/include/kvm/arm_vgic.h
+@@ -402,6 +402,6 @@ int kvm_vgic_v4_unset_forwarding(struct kvm *kvm, int irq,
+ 				 struct kvm_kernel_irq_routing_entry *irq_entry);
+ 
+ int vgic_v4_load(struct kvm_vcpu *vcpu);
+-int vgic_v4_put(struct kvm_vcpu *vcpu, bool need_db);
++int vgic_v4_put(struct kvm_vcpu *vcpu);
+ 
+ #endif /* __KVM_ARM_VGIC_H */
+diff --git a/include/linux/compat.h b/include/linux/compat.h
+index 14d514233e1d4..8dffffe846ce5 100644
+--- a/include/linux/compat.h
++++ b/include/linux/compat.h
+@@ -527,7 +527,7 @@ asmlinkage long compat_sys_fstatfs(unsigned int fd,
+ asmlinkage long compat_sys_fstatfs64(unsigned int fd, compat_size_t sz,
+ 				     struct compat_statfs64 __user *buf);
+ asmlinkage long compat_sys_truncate(const char __user *, compat_off_t);
+-asmlinkage long compat_sys_ftruncate(unsigned int, compat_ulong_t);
++asmlinkage long compat_sys_ftruncate(unsigned int, compat_off_t);
+ /* No generic prototype for truncate64, ftruncate64, fallocate */
+ asmlinkage long compat_sys_openat(int dfd, const char __user *filename,
+ 				  int flags, umode_t mode);
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index 0849903904209..2fd321da5c4b7 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -633,18 +633,10 @@ static inline efi_status_t efi_query_variable_store(u32 attributes,
+ #endif
+ extern void __iomem *efi_lookup_mapped_addr(u64 phys_addr);
+ 
+-extern int __init efi_memmap_alloc(unsigned int num_entries,
+-				   struct efi_memory_map_data *data);
+-extern void __efi_memmap_free(u64 phys, unsigned long size,
+-			      unsigned long flags);
++extern int __init __efi_memmap_init(struct efi_memory_map_data *data);
+ extern int __init efi_memmap_init_early(struct efi_memory_map_data *data);
+ extern int __init efi_memmap_init_late(phys_addr_t addr, unsigned long size);
+ extern void __init efi_memmap_unmap(void);
+-extern int __init efi_memmap_install(struct efi_memory_map_data *data);
+-extern int __init efi_memmap_split_count(efi_memory_desc_t *md,
+-					 struct range *range);
+-extern void __init efi_memmap_insert(struct efi_memory_map *old_memmap,
+-				     void *buf, struct efi_mem_range *mem);
+ 
+ #ifdef CONFIG_EFI_ESRT
+ extern void __init efi_esrt_init(void);
+diff --git a/include/linux/iommu.h b/include/linux/iommu.h
+index e90c267e7f3e1..2698dd231298c 100644
+--- a/include/linux/iommu.h
++++ b/include/linux/iommu.h
+@@ -1026,7 +1026,7 @@ iommu_aux_get_pasid(struct iommu_domain *domain, struct device *dev)
+ static inline struct iommu_sva *
+ iommu_sva_bind_device(struct device *dev, struct mm_struct *mm, void *drvdata)
+ {
+-	return NULL;
++	return ERR_PTR(-ENODEV);
+ }
+ 
+ static inline void iommu_sva_unbind_device(struct iommu_sva *handle)
+diff --git a/include/linux/kcov.h b/include/linux/kcov.h
+index b48128b717f1f..84535b168762a 100644
+--- a/include/linux/kcov.h
++++ b/include/linux/kcov.h
+@@ -21,6 +21,8 @@ enum kcov_mode {
+ 	KCOV_MODE_TRACE_PC = 2,
+ 	/* Collecting comparison operands mode. */
+ 	KCOV_MODE_TRACE_CMP = 3,
++	/* The process owns a KCOV remote reference. */
++	KCOV_MODE_REMOTE = 4,
+ };
+ 
+ #define KCOV_IN_CTXSW	(1 << 30)
+diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
+index 5b08a473cdba4..18be4459aaf72 100644
+--- a/include/linux/mod_devicetable.h
++++ b/include/linux/mod_devicetable.h
+@@ -669,6 +669,8 @@ struct x86_cpu_id {
+ 	__u16 model;
+ 	__u16 steppings;
+ 	__u16 feature;	/* bit index */
++	/* Solely for kernel-internal use: DO NOT EXPORT to userspace! */
++	__u16 flags;
+ 	kernel_ulong_t driver_data;
+ };
+ 
+diff --git a/include/linux/nvme.h b/include/linux/nvme.h
+index f454dd1003347..ddf9ae37a2cce 100644
+--- a/include/linux/nvme.h
++++ b/include/linux/nvme.h
+@@ -71,8 +71,8 @@ enum {
+ 	NVMF_RDMA_QPTYPE_DATAGRAM	= 2, /* Reliable Datagram */
+ };
+ 
+-/* RDMA QP Service Type codes for Discovery Log Page entry TSAS
+- * RDMA_QPTYPE field
++/* RDMA Provider Type codes for Discovery Log Page entry TSAS
++ * RDMA_PRTYPE field
+  */
+ enum {
+ 	NVMF_RDMA_PRTYPE_NOT_SPECIFIED	= 1, /* No Provider Specified */
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 5b24a6fbfa0be..30bc462fb1964 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -148,6 +148,15 @@ enum pci_interrupt_pin {
+ /* The number of legacy PCI INTx interrupts */
+ #define PCI_NUM_INTX	4
+ 
++/*
++ * Reading from a device that doesn't respond typically returns ~0.  A
++ * successful read from a device may also return ~0, so you need additional
++ * information to reliably identify errors.
++ */
++#define PCI_ERROR_RESPONSE		(~0ULL)
++#define PCI_SET_ERROR_RESPONSE(val)	(*(val) = ((typeof(*(val))) PCI_ERROR_RESPONSE))
++#define PCI_POSSIBLE_ERROR(val)		((val) == ((typeof(val)) PCI_ERROR_RESPONSE))
++
+ /*
+  * pci_power_t values must match the bits in the Capabilities PME_Support
+  * and Control/Status PowerState fields in the Power Management capability.
+diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
+index 1cf7a7799cc04..00303c636a89d 100644
+--- a/include/linux/sunrpc/svc.h
++++ b/include/linux/sunrpc/svc.h
+@@ -498,6 +498,7 @@ void		   svc_wake_up(struct svc_serv *);
+ void		   svc_reserve(struct svc_rqst *rqstp, int space);
+ struct svc_pool *  svc_pool_for_cpu(struct svc_serv *serv, int cpu);
+ char *		   svc_print_addr(struct svc_rqst *, char *, size_t);
++const char *	   svc_proc_name(const struct svc_rqst *rqstp);
+ int		   svc_encode_result_payload(struct svc_rqst *rqstp,
+ 					     unsigned int offset,
+ 					     unsigned int length);
+@@ -535,16 +536,27 @@ static inline void svc_reserve_auth(struct svc_rqst *rqstp, int space)
+ }
+ 
+ /**
+- * svcxdr_init_decode - Prepare an xdr_stream for svc Call decoding
++ * svcxdr_init_decode - Prepare an xdr_stream for Call decoding
+  * @rqstp: controlling server RPC transaction context
+  *
++ * This function currently assumes the RPC header in rq_arg has
++ * already been decoded. Upon return, xdr->p points to the
++ * location of the upper layer header.
+  */
+ static inline void svcxdr_init_decode(struct svc_rqst *rqstp)
+ {
+ 	struct xdr_stream *xdr = &rqstp->rq_arg_stream;
+-	struct kvec *argv = rqstp->rq_arg.head;
++	struct xdr_buf *buf = &rqstp->rq_arg;
++	struct kvec *argv = buf->head;
+ 
+-	xdr_init_decode(xdr, &rqstp->rq_arg, argv->iov_base, NULL);
++	/*
++	 * svc_getnl() and friends do not keep the xdr_buf's ::len
++	 * field up to date. Refresh that field before initializing
++	 * the argument decoding stream.
++	 */
++	buf->len = buf->head->iov_len + buf->page_len + buf->tail->iov_len;
++
++	xdr_init_decode(xdr, buf, argv->iov_base, NULL);
+ 	xdr_set_scratch_page(xdr, rqstp->rq_scratch_page);
+ }
+ 
+@@ -567,7 +579,7 @@ static inline void svcxdr_init_encode(struct svc_rqst *rqstp)
+ 	xdr->end = resv->iov_base + PAGE_SIZE - rqstp->rq_auth_slack;
+ 	buf->len = resv->iov_len;
+ 	xdr->page_ptr = buf->pages - 1;
+-	buf->buflen = PAGE_SIZE * (1 + rqstp->rq_page_end - buf->pages);
++	buf->buflen = PAGE_SIZE * (rqstp->rq_page_end - buf->pages);
+ 	buf->buflen -= rqstp->rq_auth_slack;
+ 	xdr->rqst = NULL;
+ }
+diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
+index 1ea422b1a9f1c..a96e924c7b45e 100644
+--- a/include/linux/syscalls.h
++++ b/include/linux/syscalls.h
+@@ -445,7 +445,7 @@ asmlinkage long sys_fstatfs(unsigned int fd, struct statfs __user *buf);
+ asmlinkage long sys_fstatfs64(unsigned int fd, size_t sz,
+ 				struct statfs64 __user *buf);
+ asmlinkage long sys_truncate(const char __user *path, long length);
+-asmlinkage long sys_ftruncate(unsigned int fd, unsigned long length);
++asmlinkage long sys_ftruncate(unsigned int fd, off_t length);
+ #if BITS_PER_LONG == 32
+ asmlinkage long sys_truncate64(const char __user *path, loff_t length);
+ asmlinkage long sys_ftruncate64(unsigned int fd, loff_t length);
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 33873266b2bc7..9128c0db11f88 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1625,18 +1625,46 @@ static inline int hci_check_conn_params(u16 min, u16 max, u16 latency,
+ {
+ 	u16 max_latency;
+ 
+-	if (min > max || min < 6 || max > 3200)
++	if (min > max) {
++		BT_WARN("min %d > max %d", min, max);
+ 		return -EINVAL;
++	}
++
++	if (min < 6) {
++		BT_WARN("min %d < 6", min);
++		return -EINVAL;
++	}
++
++	if (max > 3200) {
++		BT_WARN("max %d > 3200", max);
++		return -EINVAL;
++	}
++
++	if (to_multiplier < 10) {
++		BT_WARN("to_multiplier %d < 10", to_multiplier);
++		return -EINVAL;
++	}
+ 
+-	if (to_multiplier < 10 || to_multiplier > 3200)
++	if (to_multiplier > 3200) {
++		BT_WARN("to_multiplier %d > 3200", to_multiplier);
+ 		return -EINVAL;
++	}
+ 
+-	if (max >= to_multiplier * 8)
++	if (max >= to_multiplier * 8) {
++		BT_WARN("max %d >= to_multiplier %d * 8", max, to_multiplier);
+ 		return -EINVAL;
++	}
+ 
+ 	max_latency = (to_multiplier * 4 / max) - 1;
+-	if (latency > 499 || latency > max_latency)
++	if (latency > 499) {
++		BT_WARN("latency %d > 499", latency);
+ 		return -EINVAL;
++	}
++
++	if (latency > max_latency) {
++		BT_WARN("latency %d > max_latency %d", latency, max_latency);
++		return -EINVAL;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index ab8d84775ca87..2b99ee1303d92 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -490,6 +490,11 @@ static inline void *nft_set_priv(const struct nft_set *set)
+ 	return (void *)set->data;
+ }
+ 
++static inline enum nft_data_types nft_set_datatype(const struct nft_set *set)
++{
++	return set->dtype == NFT_DATA_VERDICT ? NFT_DATA_VERDICT : NFT_DATA_VALUE;
++}
++
+ static inline bool nft_set_gc_is_pending(const struct nft_set *s)
+ {
+ 	return refcount_read(&s->refs) != 1;
+diff --git a/include/net/xdp.h b/include/net/xdp.h
+index 9dab2bc6f187b..9e6c10b323b8e 100644
+--- a/include/net/xdp.h
++++ b/include/net/xdp.h
+@@ -218,6 +218,9 @@ bool xdp_rxq_info_is_reg(struct xdp_rxq_info *xdp_rxq);
+ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
+ 			       enum xdp_mem_type type, void *allocator);
+ void xdp_rxq_info_unreg_mem_model(struct xdp_rxq_info *xdp_rxq);
++int xdp_reg_mem_model(struct xdp_mem_info *mem,
++		      enum xdp_mem_type type, void *allocator);
++void xdp_unreg_mem_model(struct xdp_mem_info *mem);
+ 
+ /* Drivers not supporting XDP metadata can use this helper, which
+  * rejects any room expansion for metadata as a result.
+diff --git a/include/trace/events/qdisc.h b/include/trace/events/qdisc.h
+index a50df41634c58..980cb7a74e358 100644
+--- a/include/trace/events/qdisc.h
++++ b/include/trace/events/qdisc.h
+@@ -78,14 +78,14 @@ TRACE_EVENT(qdisc_destroy,
+ 	TP_ARGS(q),
+ 
+ 	TP_STRUCT__entry(
+-		__string(	dev,		qdisc_dev(q)->name	)
++		__string(	dev,		qdisc_dev(q) ? qdisc_dev(q)->name : "(null)"	)
+ 		__string(	kind,		q->ops->id		)
+ 		__field(	u32,		parent			)
+ 		__field(	u32,		handle			)
+ 	),
+ 
+ 	TP_fast_assign(
+-		__assign_str(dev, qdisc_dev(q)->name);
++		__assign_str(dev, qdisc_dev(q) ? qdisc_dev(q)->name : "(null)");
+ 		__assign_str(kind, q->ops->id);
+ 		__entry->parent = q->parent;
+ 		__entry->handle = q->handle;
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 56e4a57d25382..5d34deca0f300 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -1578,7 +1578,7 @@ TRACE_EVENT(svc_process,
+ 		__field(u32, vers)
+ 		__field(u32, proc)
+ 		__string(service, name)
+-		__string(procedure, rqst->rq_procinfo->pc_name)
++		__string(procedure, svc_proc_name(rqst))
+ 		__string(addr, rqst->rq_xprt ?
+ 			 rqst->rq_xprt->xpt_remotebuf : "(null)")
+ 	),
+@@ -1588,7 +1588,7 @@ TRACE_EVENT(svc_process,
+ 		__entry->vers = rqst->rq_vers;
+ 		__entry->proc = rqst->rq_proc;
+ 		__assign_str(service, name);
+-		__assign_str(procedure, rqst->rq_procinfo->pc_name);
++		__assign_str(procedure, svc_proc_name(rqst));
+ 		__assign_str(addr, rqst->rq_xprt ?
+ 			     rqst->rq_xprt->xpt_remotebuf : "(null)");
+ 	),
+@@ -1854,7 +1854,7 @@ TRACE_EVENT(svc_stats_latency,
+ 	TP_STRUCT__entry(
+ 		__field(u32, xid)
+ 		__field(unsigned long, execute)
+-		__string(procedure, rqst->rq_procinfo->pc_name)
++		__string(procedure, svc_proc_name(rqst))
+ 		__string(addr, rqst->rq_xprt->xpt_remotebuf)
+ 	),
+ 
+@@ -1862,7 +1862,7 @@ TRACE_EVENT(svc_stats_latency,
+ 		__entry->xid = be32_to_cpu(rqst->rq_xid);
+ 		__entry->execute = ktime_to_us(ktime_sub(ktime_get(),
+ 							 rqst->rq_stime));
+-		__assign_str(procedure, rqst->rq_procinfo->pc_name);
++		__assign_str(procedure, svc_proc_name(rqst));
+ 		__assign_str(addr, rqst->rq_xprt->xpt_remotebuf);
+ 	),
+ 
+diff --git a/include/uapi/asm-generic/hugetlb_encode.h b/include/uapi/asm-generic/hugetlb_encode.h
+index 4f3d5aaa11f53..de687009bfe53 100644
+--- a/include/uapi/asm-generic/hugetlb_encode.h
++++ b/include/uapi/asm-generic/hugetlb_encode.h
+@@ -20,18 +20,18 @@
+ #define HUGETLB_FLAG_ENCODE_SHIFT	26
+ #define HUGETLB_FLAG_ENCODE_MASK	0x3f
+ 
+-#define HUGETLB_FLAG_ENCODE_16KB	(14 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_64KB	(16 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_512KB	(19 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_1MB		(20 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_2MB		(21 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_8MB		(23 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_16MB	(24 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_32MB	(25 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_256MB	(28 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_512MB	(29 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_1GB		(30 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_2GB		(31 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_16GB	(34 << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_16KB	(14U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_64KB	(16U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_512KB	(19U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_1MB		(20U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_2MB		(21U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_8MB		(23U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_16MB	(24U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_32MB	(25U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_256MB	(28U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_512MB	(29U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_1GB		(30U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_2GB		(31U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_16GB	(34U << HUGETLB_FLAG_ENCODE_SHIFT)
+ 
+ #endif /* _ASM_GENERIC_HUGETLB_ENCODE_H_ */
+diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
+index 2056318988f77..1ff9589ab6111 100644
+--- a/include/uapi/asm-generic/unistd.h
++++ b/include/uapi/asm-generic/unistd.h
+@@ -805,7 +805,7 @@ __SC_COMP(__NR_pselect6_time64, sys_pselect6, compat_sys_pselect6_time64)
+ #define __NR_ppoll_time64 414
+ __SC_COMP(__NR_ppoll_time64, sys_ppoll, compat_sys_ppoll_time64)
+ #define __NR_io_pgetevents_time64 416
+-__SYSCALL(__NR_io_pgetevents_time64, sys_io_pgetevents)
++__SC_COMP(__NR_io_pgetevents_time64, sys_io_pgetevents, compat_sys_io_pgetevents_time64)
+ #define __NR_recvmmsg_time64 417
+ __SC_COMP(__NR_recvmmsg_time64, sys_recvmmsg, compat_sys_recvmmsg_time64)
+ #define __NR_mq_timedsend_time64 418
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index e0b47bed86750..3a191bec69aca 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -5128,6 +5128,7 @@ int perf_event_release_kernel(struct perf_event *event)
+ again:
+ 	mutex_lock(&event->child_mutex);
+ 	list_for_each_entry(child, &event->child_list, child_list) {
++		void *var = NULL;
+ 
+ 		/*
+ 		 * Cannot change, child events are not migrated, see the
+@@ -5168,11 +5169,23 @@ int perf_event_release_kernel(struct perf_event *event)
+ 			 * this can't be the last reference.
+ 			 */
+ 			put_event(event);
++		} else {
++			var = &ctx->refcount;
+ 		}
+ 
+ 		mutex_unlock(&event->child_mutex);
+ 		mutex_unlock(&ctx->mutex);
+ 		put_ctx(ctx);
++
++		if (var) {
++			/*
++			 * If perf_event_free_task() has deleted all events from the
++			 * ctx while the child_mutex got released above, make sure to
++			 * notify about the preceding put_ctx().
++			 */
++			smp_mb(); /* pairs with wait_var_event() */
++			wake_up_var(var);
++		}
+ 		goto again;
+ 	}
+ 	mutex_unlock(&event->child_mutex);
+diff --git a/kernel/gcov/gcc_4_7.c b/kernel/gcov/gcc_4_7.c
+index 04880d8fba254..42de79cd1a678 100644
+--- a/kernel/gcov/gcc_4_7.c
++++ b/kernel/gcov/gcc_4_7.c
+@@ -19,7 +19,9 @@
+ #include <linux/vmalloc.h>
+ #include "gcov.h"
+ 
+-#if (__GNUC__ >= 10)
++#if (__GNUC__ >= 14)
++#define GCOV_COUNTERS			9
++#elif (__GNUC__ >= 10)
+ #define GCOV_COUNTERS			8
+ #elif (__GNUC__ >= 7)
+ #define GCOV_COUNTERS			9
+diff --git a/kernel/gen_kheaders.sh b/kernel/gen_kheaders.sh
+index c1510f0ab3ea5..206ab3d41ee76 100755
+--- a/kernel/gen_kheaders.sh
++++ b/kernel/gen_kheaders.sh
+@@ -83,12 +83,9 @@ find $cpio_dir -type f -print0 |
+ 	xargs -0 -P8 -n1 perl -pi -e 'BEGIN {undef $/;}; s/\/\*((?!SPDX).)*?\*\///smg;'
+ 
+ # Create archive and try to normalize metadata for reproducibility.
+-# For compatibility with older versions of tar, files are fed to tar
+-# pre-sorted, as --sort=name might not be available.
+-find $cpio_dir -printf "./%P\n" | LC_ALL=C sort | \
+-    tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
+-    --owner=0 --group=0 --numeric-owner --no-recursion \
+-    -I $XZ -cf $tarfile -C $cpio_dir/ -T - > /dev/null
++tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
++    --owner=0 --group=0 --sort=name --numeric-owner --mode=u=rw,go=r,a+X \
++    -I $XZ -cf $tarfile -C $cpio_dir/ . > /dev/null
+ 
+ echo $headers_md5 > kernel/kheaders.md5
+ echo "$this_file_md5" >> kernel/kheaders.md5
+diff --git a/kernel/kcov.c b/kernel/kcov.c
+index 6b8368be89c8e..ec2fd698ebf53 100644
+--- a/kernel/kcov.c
++++ b/kernel/kcov.c
+@@ -635,6 +635,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
+ 			return -EINVAL;
+ 		kcov->mode = mode;
+ 		t->kcov = kcov;
++	        t->kcov_mode = KCOV_MODE_REMOTE;
+ 		kcov->t = t;
+ 		kcov->remote = true;
+ 		kcov->remote_size = remote_arg->area_size;
+diff --git a/kernel/padata.c b/kernel/padata.c
+index fdcd78302cd72..471ccbc44541d 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -111,7 +111,7 @@ static int __init padata_work_alloc_mt(int nworks, void *data,
+ {
+ 	int i;
+ 
+-	spin_lock(&padata_works_lock);
++	spin_lock_bh(&padata_works_lock);
+ 	/* Start at 1 because the current task participates in the job. */
+ 	for (i = 1; i < nworks; ++i) {
+ 		struct padata_work *pw = padata_work_alloc();
+@@ -121,7 +121,7 @@ static int __init padata_work_alloc_mt(int nworks, void *data,
+ 		padata_work_init(pw, padata_mt_helper, data, 0);
+ 		list_add(&pw->pw_list, head);
+ 	}
+-	spin_unlock(&padata_works_lock);
++	spin_unlock_bh(&padata_works_lock);
+ 
+ 	return i;
+ }
+@@ -139,12 +139,12 @@ static void __init padata_works_free(struct list_head *works)
+ 	if (list_empty(works))
+ 		return;
+ 
+-	spin_lock(&padata_works_lock);
++	spin_lock_bh(&padata_works_lock);
+ 	list_for_each_entry_safe(cur, next, works, pw_list) {
+ 		list_del(&cur->pw_list);
+ 		padata_work_free(cur);
+ 	}
+-	spin_unlock(&padata_works_lock);
++	spin_unlock_bh(&padata_works_lock);
+ }
+ 
+ static void padata_parallel_worker(struct work_struct *parallel_work)
+diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c
+index 20243682e6056..e032b1ce79649 100644
+--- a/kernel/pid_namespace.c
++++ b/kernel/pid_namespace.c
+@@ -221,6 +221,7 @@ void zap_pid_ns_processes(struct pid_namespace *pid_ns)
+ 	 */
+ 	do {
+ 		clear_thread_flag(TIF_SIGPENDING);
++		clear_thread_flag(TIF_NOTIFY_SIGNAL);
+ 		rc = kernel_wait4(-1, NULL, __WALL, NULL);
+ 	} while (rc != -ECHILD);
+ 
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 6c1aea48a79a1..9f505688291e5 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -1407,7 +1407,8 @@ static bool rcu_torture_one_read(struct torture_random_state *trsp)
+ 	preempt_disable();
+ 	pipe_count = READ_ONCE(p->rtort_pipe_count);
+ 	if (pipe_count > RCU_TORTURE_PIPE_LEN) {
+-		/* Should not happen, but... */
++		// Should not happen in a correct RCU implementation,
++		// happens quite often for torture_type=busted.
+ 		pipe_count = RCU_TORTURE_PIPE_LEN;
+ 	}
+ 	completed = cur_ops->get_gp_seq();
+@@ -2209,11 +2210,12 @@ static void rcu_torture_barrier_cbf(struct rcu_head *rcu)
+ }
+ 
+ /* IPI handler to get callback posted on desired CPU, if online. */
+-static void rcu_torture_barrier1cb(void *rcu_void)
++static int rcu_torture_barrier1cb(void *rcu_void)
+ {
+ 	struct rcu_head *rhp = rcu_void;
+ 
+ 	cur_ops->call(rhp, rcu_torture_barrier_cbf);
++	return 0;
+ }
+ 
+ /* kthread function to register callbacks used to test RCU barriers. */
+@@ -2239,11 +2241,9 @@ static int rcu_torture_barrier_cbs(void *arg)
+ 		 * The above smp_load_acquire() ensures barrier_phase load
+ 		 * is ordered before the following ->call().
+ 		 */
+-		if (smp_call_function_single(myid, rcu_torture_barrier1cb,
+-					     &rcu, 1)) {
+-			// IPI failed, so use direct call from current CPU.
++		if (smp_call_on_cpu(myid, rcu_torture_barrier1cb, &rcu, 1))
+ 			cur_ops->call(&rcu, rcu_torture_barrier_cbf);
+-		}
++
+ 		if (atomic_dec_and_test(&barrier_cbs_count))
+ 			wake_up(&barrier_wq);
+ 	} while (!torture_must_stop());
+diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
+index cdecd47e5580d..9019299069b12 100644
+--- a/kernel/sys_ni.c
++++ b/kernel/sys_ni.c
+@@ -46,8 +46,8 @@ COND_SYSCALL(io_getevents_time32);
+ COND_SYSCALL(io_getevents);
+ COND_SYSCALL(io_pgetevents_time32);
+ COND_SYSCALL(io_pgetevents);
+-COND_SYSCALL_COMPAT(io_pgetevents_time32);
+ COND_SYSCALL_COMPAT(io_pgetevents);
++COND_SYSCALL_COMPAT(io_pgetevents_time64);
+ COND_SYSCALL(io_uring_setup);
+ COND_SYSCALL(io_uring_enter);
+ COND_SYSCALL(io_uring_register);
+diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
+index e883d12dcb0d4..b13b3e3f6c9f8 100644
+--- a/kernel/time/tick-common.c
++++ b/kernel/time/tick-common.c
+@@ -177,26 +177,6 @@ void tick_setup_periodic(struct clock_event_device *dev, int broadcast)
+ 	}
+ }
+ 
+-#ifdef CONFIG_NO_HZ_FULL
+-static void giveup_do_timer(void *info)
+-{
+-	int cpu = *(unsigned int *)info;
+-
+-	WARN_ON(tick_do_timer_cpu != smp_processor_id());
+-
+-	tick_do_timer_cpu = cpu;
+-}
+-
+-static void tick_take_do_timer_from_boot(void)
+-{
+-	int cpu = smp_processor_id();
+-	int from = tick_do_timer_boot_cpu;
+-
+-	if (from >= 0 && from != cpu)
+-		smp_call_function_single(from, giveup_do_timer, &cpu, 1);
+-}
+-#endif
+-
+ /*
+  * Setup the tick device
+  */
+@@ -220,19 +200,25 @@ static void tick_setup_device(struct tick_device *td,
+ 			tick_next_period = ktime_get();
+ #ifdef CONFIG_NO_HZ_FULL
+ 			/*
+-			 * The boot CPU may be nohz_full, in which case set
+-			 * tick_do_timer_boot_cpu so the first housekeeping
+-			 * secondary that comes up will take do_timer from
+-			 * us.
++			 * The boot CPU may be nohz_full, in which case the
++			 * first housekeeping secondary will take do_timer()
++			 * from it.
+ 			 */
+ 			if (tick_nohz_full_cpu(cpu))
+ 				tick_do_timer_boot_cpu = cpu;
+ 
+-		} else if (tick_do_timer_boot_cpu != -1 &&
+-						!tick_nohz_full_cpu(cpu)) {
+-			tick_take_do_timer_from_boot();
++		} else if (tick_do_timer_boot_cpu != -1 && !tick_nohz_full_cpu(cpu)) {
+ 			tick_do_timer_boot_cpu = -1;
+-			WARN_ON(tick_do_timer_cpu != cpu);
++			/*
++			 * The boot CPU will stay in periodic (NOHZ disabled)
++			 * mode until clocksource_done_booting() called after
++			 * smp_init() selects a high resolution clocksource and
++			 * timekeeping_notify() kicks the NOHZ stuff alive.
++			 *
++			 * So this WRITE_ONCE can only race with the READ_ONCE
++			 * check in tick_periodic() but this race is harmless.
++			 */
++			WRITE_ONCE(tick_do_timer_cpu, cpu);
+ #endif
+ 		}
+ 
+diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
+index 29db703f68806..467975300ddd5 100644
+--- a/kernel/trace/Kconfig
++++ b/kernel/trace/Kconfig
+@@ -824,7 +824,7 @@ config PREEMPTIRQ_DELAY_TEST
+ 
+ config SYNTH_EVENT_GEN_TEST
+ 	tristate "Test module for in-kernel synthetic event generation"
+-	depends on SYNTH_EVENTS
++	depends on SYNTH_EVENTS && m
+ 	help
+           This option creates a test module to check the base
+           functionality of in-kernel synthetic event definition and
+@@ -837,7 +837,7 @@ config SYNTH_EVENT_GEN_TEST
+ 
+ config KPROBE_EVENT_GEN_TEST
+ 	tristate "Test module for in-kernel kprobe event generation"
+-	depends on KPROBE_EVENTS
++	depends on KPROBE_EVENTS && m
+ 	help
+           This option creates a test module to check the base
+           functionality of in-kernel kprobe event definition.
+diff --git a/kernel/trace/preemptirq_delay_test.c b/kernel/trace/preemptirq_delay_test.c
+index 312d1a0ca3b60..1a4f2f4249961 100644
+--- a/kernel/trace/preemptirq_delay_test.c
++++ b/kernel/trace/preemptirq_delay_test.c
+@@ -201,4 +201,5 @@ static void __exit preemptirq_delay_exit(void)
+ 
+ module_init(preemptirq_delay_init)
+ module_exit(preemptirq_delay_exit)
++MODULE_DESCRIPTION("Preempt / IRQ disable delay thread to test latency tracers");
+ MODULE_LICENSE("GPL v2");
+diff --git a/net/batman-adv/originator.c b/net/batman-adv/originator.c
+index 2d38a09459bb5..cbebe3aa1f604 100644
+--- a/net/batman-adv/originator.c
++++ b/net/batman-adv/originator.c
+@@ -11,6 +11,7 @@
+ #include <linux/errno.h>
+ #include <linux/etherdevice.h>
+ #include <linux/gfp.h>
++#include <linux/if_vlan.h>
+ #include <linux/jiffies.h>
+ #include <linux/kernel.h>
+ #include <linux/kref.h>
+@@ -132,6 +133,29 @@ batadv_orig_node_vlan_get(struct batadv_orig_node *orig_node,
+ 	return vlan;
+ }
+ 
++/**
++ * batadv_vlan_id_valid() - check if vlan id is in valid batman-adv encoding
++ * @vid: the VLAN identifier
++ *
++ * Return: true when either no vlan is set or if VLAN is in correct range,
++ *  false otherwise
++ */
++static bool batadv_vlan_id_valid(unsigned short vid)
++{
++	unsigned short non_vlan = vid & ~(BATADV_VLAN_HAS_TAG | VLAN_VID_MASK);
++
++	if (vid == 0)
++		return true;
++
++	if (!(vid & BATADV_VLAN_HAS_TAG))
++		return false;
++
++	if (non_vlan)
++		return false;
++
++	return true;
++}
++
+ /**
+  * batadv_orig_node_vlan_new() - search and possibly create an orig_node_vlan
+  *  object
+@@ -150,6 +174,9 @@ batadv_orig_node_vlan_new(struct batadv_orig_node *orig_node,
+ {
+ 	struct batadv_orig_node_vlan *vlan;
+ 
++	if (!batadv_vlan_id_valid(vid))
++		return NULL;
++
+ 	spin_lock_bh(&orig_node->vlan_list_lock);
+ 
+ 	/* first look if an object for this vid already exists */
+@@ -1285,6 +1312,8 @@ void batadv_purge_orig_ref(struct batadv_priv *bat_priv)
+ 	/* for all origins... */
+ 	for (i = 0; i < hash->size; i++) {
+ 		head = &hash->table[i];
++		if (hlist_empty(head))
++			continue;
+ 		list_lock = &hash->list_locks[i];
+ 
+ 		spin_lock_bh(list_lock);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index da03ca6dd9221..9cc034e6074c1 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -5612,13 +5612,7 @@ static inline int l2cap_conn_param_update_req(struct l2cap_conn *conn,
+ 
+ 	memset(&rsp, 0, sizeof(rsp));
+ 
+-	if (max > hcon->le_conn_max_interval) {
+-		BT_DBG("requested connection interval exceeds current bounds.");
+-		err = -EINVAL;
+-	} else {
+-		err = hci_check_conn_params(min, max, latency, to_multiplier);
+-	}
+-
++	err = hci_check_conn_params(min, max, latency, to_multiplier);
+ 	if (err)
+ 		rsp.result = cpu_to_le16(L2CAP_CONN_PARAM_REJECTED);
+ 	else
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index 9169ef174ff09..d102efcae65a0 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -30,10 +30,6 @@ MODULE_ALIAS("can-proto-" __stringify(CAN_J1939));
+ /* CAN_HDR: #bytes before can_frame data part */
+ #define J1939_CAN_HDR (offsetof(struct can_frame, data))
+ 
+-/* CAN_FTR: #bytes beyond data part */
+-#define J1939_CAN_FTR (sizeof(struct can_frame) - J1939_CAN_HDR - \
+-		 sizeof(((struct can_frame *)0)->data))
+-
+ /* lowest layer */
+ static void j1939_can_recv(struct sk_buff *iskb, void *data)
+ {
+@@ -338,7 +334,7 @@ int j1939_send_one(struct j1939_priv *priv, struct sk_buff *skb)
+ 	memset(cf, 0, J1939_CAN_HDR);
+ 
+ 	/* make it a full can frame again */
+-	skb_put(skb, J1939_CAN_FTR + (8 - dlc));
++	skb_put_zero(skb, 8 - dlc);
+ 
+ 	canid = CAN_EFF_FLAG |
+ 		(skcb->priority << 26) |
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 5dcbb0b7d123a..478dafc738571 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -1577,8 +1577,8 @@ j1939_session *j1939_xtp_rx_rts_session_new(struct j1939_priv *priv,
+ 	struct j1939_sk_buff_cb skcb = *j1939_skb_to_cb(skb);
+ 	struct j1939_session *session;
+ 	const u8 *dat;
++	int len, ret;
+ 	pgn_t pgn;
+-	int len;
+ 
+ 	netdev_dbg(priv->ndev, "%s\n", __func__);
+ 
+@@ -1634,7 +1634,22 @@ j1939_session *j1939_xtp_rx_rts_session_new(struct j1939_priv *priv,
+ 	session->pkt.rx = 0;
+ 	session->pkt.tx = 0;
+ 
+-	WARN_ON_ONCE(j1939_session_activate(session));
++	ret = j1939_session_activate(session);
++	if (ret) {
++		/* Entering this scope indicates an issue with the J1939 bus.
++		 * Possible scenarios include:
++		 * - A time lapse occurred, and a new session was initiated
++		 *   due to another packet being sent correctly. This could
++		 *   have been caused by too long interrupt, debugger, or being
++		 *   out-scheduled by another task.
++		 * - The bus is receiving numerous erroneous packets, either
++		 *   from a malfunctioning device or during a test scenario.
++		 */
++		netdev_alert(priv->ndev, "%s: 0x%p: concurrent session with same addr (%02x %02x) is already active.\n",
++			     __func__, session, skcb.addr.sa, skcb.addr.da);
++		j1939_session_put(session);
++		return NULL;
++	}
+ 
+ 	return session;
+ }
+@@ -1662,6 +1677,8 @@ static int j1939_xtp_rx_rts_session_active(struct j1939_session *session,
+ 
+ 		j1939_session_timers_cancel(session);
+ 		j1939_session_cancel(session, J1939_XTP_ABORT_BUSY);
++		if (session->transmission)
++			j1939_session_deactivate_activate_next(session);
+ 
+ 		return -EBUSY;
+ 	}
+diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
+index 7742ee689141f..009b9e22c4e75 100644
+--- a/net/core/drop_monitor.c
++++ b/net/core/drop_monitor.c
+@@ -73,7 +73,7 @@ struct net_dm_hw_entries {
+ };
+ 
+ struct per_cpu_dm_data {
+-	spinlock_t		lock;	/* Protects 'skb', 'hw_entries' and
++	raw_spinlock_t		lock;	/* Protects 'skb', 'hw_entries' and
+ 					 * 'send_timer'
+ 					 */
+ 	union {
+@@ -168,9 +168,9 @@ static struct sk_buff *reset_per_cpu_data(struct per_cpu_dm_data *data)
+ err:
+ 	mod_timer(&data->send_timer, jiffies + HZ / 10);
+ out:
+-	spin_lock_irqsave(&data->lock, flags);
++	raw_spin_lock_irqsave(&data->lock, flags);
+ 	swap(data->skb, skb);
+-	spin_unlock_irqrestore(&data->lock, flags);
++	raw_spin_unlock_irqrestore(&data->lock, flags);
+ 
+ 	if (skb) {
+ 		struct nlmsghdr *nlh = (struct nlmsghdr *)skb->data;
+@@ -225,7 +225,7 @@ static void trace_drop_common(struct sk_buff *skb, void *location)
+ 
+ 	local_irq_save(flags);
+ 	data = this_cpu_ptr(&dm_cpu_data);
+-	spin_lock(&data->lock);
++	raw_spin_lock(&data->lock);
+ 	dskb = data->skb;
+ 
+ 	if (!dskb)
+@@ -259,7 +259,7 @@ static void trace_drop_common(struct sk_buff *skb, void *location)
+ 	}
+ 
+ out:
+-	spin_unlock_irqrestore(&data->lock, flags);
++	raw_spin_unlock_irqrestore(&data->lock, flags);
+ }
+ 
+ static void trace_kfree_skb_hit(void *ignore, struct sk_buff *skb, void *location)
+@@ -318,9 +318,9 @@ net_dm_hw_reset_per_cpu_data(struct per_cpu_dm_data *hw_data)
+ 		mod_timer(&hw_data->send_timer, jiffies + HZ / 10);
+ 	}
+ 
+-	spin_lock_irqsave(&hw_data->lock, flags);
++	raw_spin_lock_irqsave(&hw_data->lock, flags);
+ 	swap(hw_data->hw_entries, hw_entries);
+-	spin_unlock_irqrestore(&hw_data->lock, flags);
++	raw_spin_unlock_irqrestore(&hw_data->lock, flags);
+ 
+ 	return hw_entries;
+ }
+@@ -452,7 +452,7 @@ net_dm_hw_trap_summary_probe(void *ignore, const struct devlink *devlink,
+ 		return;
+ 
+ 	hw_data = this_cpu_ptr(&dm_hw_cpu_data);
+-	spin_lock_irqsave(&hw_data->lock, flags);
++	raw_spin_lock_irqsave(&hw_data->lock, flags);
+ 	hw_entries = hw_data->hw_entries;
+ 
+ 	if (!hw_entries)
+@@ -481,7 +481,7 @@ net_dm_hw_trap_summary_probe(void *ignore, const struct devlink *devlink,
+ 	}
+ 
+ out:
+-	spin_unlock_irqrestore(&hw_data->lock, flags);
++	raw_spin_unlock_irqrestore(&hw_data->lock, flags);
+ }
+ 
+ static const struct net_dm_alert_ops net_dm_alert_summary_ops = {
+@@ -1669,7 +1669,7 @@ static struct notifier_block dropmon_net_notifier = {
+ 
+ static void __net_dm_cpu_data_init(struct per_cpu_dm_data *data)
+ {
+-	spin_lock_init(&data->lock);
++	raw_spin_lock_init(&data->lock);
+ 	skb_queue_head_init(&data->drop_queue);
+ 	u64_stats_init(&data->stats.syncp);
+ }
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 49e4d1535cc82..a3101cdfd47b9 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -78,6 +78,9 @@
+ #include <linux/btf_ids.h>
+ #include <net/tls.h>
+ 
++/* Keep the struct bpf_fib_lookup small so that it fits into a cacheline */
++static_assert(sizeof(struct bpf_fib_lookup) == 64, "struct bpf_fib_lookup size check");
++
+ static const struct bpf_func_proto *
+ bpf_sk_base_func_proto(enum bpf_func_id func_id);
+ 
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index 72cfe5248b764..6192a05ebcce2 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -670,11 +670,16 @@ EXPORT_SYMBOL_GPL(__put_net);
+  * get_net_ns - increment the refcount of the network namespace
+  * @ns: common namespace (net)
+  *
+- * Returns the net's common namespace.
++ * Returns the net's common namespace or ERR_PTR() if ref is zero.
+  */
+ struct ns_common *get_net_ns(struct ns_common *ns)
+ {
+-	return &get_net(container_of(ns, struct net, ns))->ns;
++	struct net *net;
++
++	net = maybe_get_net(container_of(ns, struct net, ns));
++	if (net)
++		return &net->ns;
++	return ERR_PTR(-EINVAL);
+ }
+ EXPORT_SYMBOL_GPL(get_net_ns);
+ 
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index 2ad22511b9c6d..f76afab9fd8bd 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -316,7 +316,7 @@ static int netpoll_owner_active(struct net_device *dev)
+ 	struct napi_struct *napi;
+ 
+ 	list_for_each_entry_rcu(napi, &dev->napi_list, dev_list) {
+-		if (napi->poll_owner == smp_processor_id())
++		if (READ_ONCE(napi->poll_owner) == smp_processor_id())
+ 			return 1;
+ 	}
+ 	return 0;
+diff --git a/net/core/sock.c b/net/core/sock.c
+index b4ecd0071e220..d5818a5a86fdd 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -3263,7 +3263,8 @@ int sock_common_getsockopt(struct socket *sock, int level, int optname,
+ {
+ 	struct sock *sk = sock->sk;
+ 
+-	return sk->sk_prot->getsockopt(sk, level, optname, optval, optlen);
++	/* IPV6_ADDRFORM can change sk->sk_prot under us. */
++	return READ_ONCE(sk->sk_prot)->getsockopt(sk, level, optname, optval, optlen);
+ }
+ EXPORT_SYMBOL(sock_common_getsockopt);
+ 
+@@ -3290,7 +3291,8 @@ int sock_common_setsockopt(struct socket *sock, int level, int optname,
+ {
+ 	struct sock *sk = sock->sk;
+ 
+-	return sk->sk_prot->setsockopt(sk, level, optname, optval, optlen);
++	/* IPV6_ADDRFORM can change sk->sk_prot under us. */
++	return READ_ONCE(sk->sk_prot)->setsockopt(sk, level, optname, optval, optlen);
+ }
+ EXPORT_SYMBOL(sock_common_setsockopt);
+ 
+diff --git a/net/core/xdp.c b/net/core/xdp.c
+index b8d7fa47d293c..fd98d6059007c 100644
+--- a/net/core/xdp.c
++++ b/net/core/xdp.c
+@@ -110,26 +110,37 @@ static void mem_allocator_disconnect(void *allocator)
+ 	mutex_unlock(&mem_id_lock);
+ }
+ 
+-void xdp_rxq_info_unreg_mem_model(struct xdp_rxq_info *xdp_rxq)
++void xdp_unreg_mem_model(struct xdp_mem_info *mem)
+ {
+ 	struct xdp_mem_allocator *xa;
+-	int id = xdp_rxq->mem.id;
++	int type = mem->type;
++	int id = mem->id;
+ 
+-	if (xdp_rxq->reg_state != REG_STATE_REGISTERED) {
+-		WARN(1, "Missing register, driver bug");
+-		return;
+-	}
++	/* Reset mem info to defaults */
++	mem->id = 0;
++	mem->type = 0;
+ 
+ 	if (id == 0)
+ 		return;
+ 
+-	if (xdp_rxq->mem.type == MEM_TYPE_PAGE_POOL) {
++	if (type == MEM_TYPE_PAGE_POOL) {
+ 		rcu_read_lock();
+ 		xa = rhashtable_lookup(mem_id_ht, &id, mem_id_rht_params);
+ 		page_pool_destroy(xa->page_pool);
+ 		rcu_read_unlock();
+ 	}
+ }
++EXPORT_SYMBOL_GPL(xdp_unreg_mem_model);
++
++void xdp_rxq_info_unreg_mem_model(struct xdp_rxq_info *xdp_rxq)
++{
++	if (xdp_rxq->reg_state != REG_STATE_REGISTERED) {
++		WARN(1, "Missing register, driver bug");
++		return;
++	}
++
++	xdp_unreg_mem_model(&xdp_rxq->mem);
++}
+ EXPORT_SYMBOL_GPL(xdp_rxq_info_unreg_mem_model);
+ 
+ void xdp_rxq_info_unreg(struct xdp_rxq_info *xdp_rxq)
+@@ -144,10 +155,6 @@ void xdp_rxq_info_unreg(struct xdp_rxq_info *xdp_rxq)
+ 
+ 	xdp_rxq->reg_state = REG_STATE_UNREGISTERED;
+ 	xdp_rxq->dev = NULL;
+-
+-	/* Reset mem info to defaults */
+-	xdp_rxq->mem.id = 0;
+-	xdp_rxq->mem.type = 0;
+ }
+ EXPORT_SYMBOL_GPL(xdp_rxq_info_unreg);
+ 
+@@ -259,28 +266,24 @@ static bool __is_supported_mem_type(enum xdp_mem_type type)
+ 	return true;
+ }
+ 
+-int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
+-			       enum xdp_mem_type type, void *allocator)
++static struct xdp_mem_allocator *__xdp_reg_mem_model(struct xdp_mem_info *mem,
++						     enum xdp_mem_type type,
++						     void *allocator)
+ {
+ 	struct xdp_mem_allocator *xdp_alloc;
+ 	gfp_t gfp = GFP_KERNEL;
+ 	int id, errno, ret;
+ 	void *ptr;
+ 
+-	if (xdp_rxq->reg_state != REG_STATE_REGISTERED) {
+-		WARN(1, "Missing register, driver bug");
+-		return -EFAULT;
+-	}
+-
+ 	if (!__is_supported_mem_type(type))
+-		return -EOPNOTSUPP;
++		return ERR_PTR(-EOPNOTSUPP);
+ 
+-	xdp_rxq->mem.type = type;
++	mem->type = type;
+ 
+ 	if (!allocator) {
+ 		if (type == MEM_TYPE_PAGE_POOL)
+-			return -EINVAL; /* Setup time check page_pool req */
+-		return 0;
++			return ERR_PTR(-EINVAL); /* Setup time check page_pool req */
++		return NULL;
+ 	}
+ 
+ 	/* Delay init of rhashtable to save memory if feature isn't used */
+@@ -288,15 +291,13 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
+ 		mutex_lock(&mem_id_lock);
+ 		ret = __mem_id_init_hash_table();
+ 		mutex_unlock(&mem_id_lock);
+-		if (ret < 0) {
+-			WARN_ON(1);
+-			return ret;
+-		}
++		if (ret < 0)
++			return ERR_PTR(ret);
+ 	}
+ 
+ 	xdp_alloc = kzalloc(sizeof(*xdp_alloc), gfp);
+ 	if (!xdp_alloc)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	mutex_lock(&mem_id_lock);
+ 	id = __mem_id_cyclic_get(gfp);
+@@ -304,15 +305,15 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
+ 		errno = id;
+ 		goto err;
+ 	}
+-	xdp_rxq->mem.id = id;
+-	xdp_alloc->mem  = xdp_rxq->mem;
++	mem->id = id;
++	xdp_alloc->mem = *mem;
+ 	xdp_alloc->allocator = allocator;
+ 
+ 	/* Insert allocator into ID lookup table */
+ 	ptr = rhashtable_insert_slow(mem_id_ht, &id, &xdp_alloc->node);
+ 	if (IS_ERR(ptr)) {
+-		ida_simple_remove(&mem_id_pool, xdp_rxq->mem.id);
+-		xdp_rxq->mem.id = 0;
++		ida_simple_remove(&mem_id_pool, mem->id);
++		mem->id = 0;
+ 		errno = PTR_ERR(ptr);
+ 		goto err;
+ 	}
+@@ -322,13 +323,44 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
+ 
+ 	mutex_unlock(&mem_id_lock);
+ 
+-	trace_mem_connect(xdp_alloc, xdp_rxq);
+-	return 0;
++	return xdp_alloc;
+ err:
+ 	mutex_unlock(&mem_id_lock);
+ 	kfree(xdp_alloc);
+-	return errno;
++	return ERR_PTR(errno);
+ }
++
++int xdp_reg_mem_model(struct xdp_mem_info *mem,
++		      enum xdp_mem_type type, void *allocator)
++{
++	struct xdp_mem_allocator *xdp_alloc;
++
++	xdp_alloc = __xdp_reg_mem_model(mem, type, allocator);
++	if (IS_ERR(xdp_alloc))
++		return PTR_ERR(xdp_alloc);
++	return 0;
++}
++EXPORT_SYMBOL_GPL(xdp_reg_mem_model);
++
++int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
++			       enum xdp_mem_type type, void *allocator)
++{
++	struct xdp_mem_allocator *xdp_alloc;
++
++	if (xdp_rxq->reg_state != REG_STATE_REGISTERED) {
++		WARN(1, "Missing register, driver bug");
++		return -EFAULT;
++	}
++
++	xdp_alloc = __xdp_reg_mem_model(&xdp_rxq->mem, type, allocator);
++	if (IS_ERR(xdp_alloc))
++		return PTR_ERR(xdp_alloc);
++
++	if (trace_mem_connect_enabled() && xdp_alloc)
++		trace_mem_connect(xdp_alloc, xdp_rxq);
++	return 0;
++}
++
+ EXPORT_SYMBOL_GPL(xdp_rxq_info_reg_mem_model);
+ 
+ /* XDP RX runs under NAPI protection, and in different delivery error
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 5f1b334e64b32..475a19db37132 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -562,22 +562,27 @@ int inet_dgram_connect(struct socket *sock, struct sockaddr *uaddr,
+ 		       int addr_len, int flags)
+ {
+ 	struct sock *sk = sock->sk;
++	const struct proto *prot;
+ 	int err;
+ 
+ 	if (addr_len < sizeof(uaddr->sa_family))
+ 		return -EINVAL;
++
++	/* IPV6_ADDRFORM can change sk->sk_prot under us. */
++	prot = READ_ONCE(sk->sk_prot);
++
+ 	if (uaddr->sa_family == AF_UNSPEC)
+-		return sk->sk_prot->disconnect(sk, flags);
++		return prot->disconnect(sk, flags);
+ 
+ 	if (BPF_CGROUP_PRE_CONNECT_ENABLED(sk)) {
+-		err = sk->sk_prot->pre_connect(sk, uaddr, addr_len);
++		err = prot->pre_connect(sk, uaddr, addr_len);
+ 		if (err)
+ 			return err;
+ 	}
+ 
+ 	if (data_race(!inet_sk(sk)->inet_num) && inet_autobind(sk))
+ 		return -EAGAIN;
+-	return sk->sk_prot->connect(sk, uaddr, addr_len);
++	return prot->connect(sk, uaddr, addr_len);
+ }
+ EXPORT_SYMBOL(inet_dgram_connect);
+ 
+@@ -740,10 +745,11 @@ EXPORT_SYMBOL(inet_stream_connect);
+ int inet_accept(struct socket *sock, struct socket *newsock, int flags,
+ 		bool kern)
+ {
+-	struct sock *sk1 = sock->sk;
++	struct sock *sk1 = sock->sk, *sk2;
+ 	int err = -EINVAL;
+-	struct sock *sk2 = sk1->sk_prot->accept(sk1, flags, &err, kern);
+ 
++	/* IPV6_ADDRFORM can change sk->sk_prot under us. */
++	sk2 = READ_ONCE(sk1->sk_prot)->accept(sk1, flags, &err, kern);
+ 	if (!sk2)
+ 		goto do_err;
+ 
+@@ -828,12 +834,15 @@ ssize_t inet_sendpage(struct socket *sock, struct page *page, int offset,
+ 		      size_t size, int flags)
+ {
+ 	struct sock *sk = sock->sk;
++	const struct proto *prot;
+ 
+ 	if (unlikely(inet_send_prepare(sk)))
+ 		return -EAGAIN;
+ 
+-	if (sk->sk_prot->sendpage)
+-		return sk->sk_prot->sendpage(sk, page, offset, size, flags);
++	/* IPV6_ADDRFORM can change sk->sk_prot under us. */
++	prot = READ_ONCE(sk->sk_prot);
++	if (prot->sendpage)
++		return prot->sendpage(sk, page, offset, size, flags);
+ 	return sock_no_sendpage(sock, page, offset, size, flags);
+ }
+ EXPORT_SYMBOL(inet_sendpage);
+diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
+index d4a4160159a92..c5da958e3bbdb 100644
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -2016,12 +2016,16 @@ static int cipso_v4_delopt(struct ip_options_rcu __rcu **opt_ptr)
+ 		 * from there we can determine the new total option length */
+ 		iter = 0;
+ 		optlen_new = 0;
+-		while (iter < opt->opt.optlen)
+-			if (opt->opt.__data[iter] != IPOPT_NOP) {
++		while (iter < opt->opt.optlen) {
++			if (opt->opt.__data[iter] == IPOPT_END) {
++				break;
++			} else if (opt->opt.__data[iter] == IPOPT_NOP) {
++				iter++;
++			} else {
+ 				iter += opt->opt.__data[iter + 1];
+ 				optlen_new = iter;
+-			} else
+-				iter++;
++			}
++		}
+ 		hdr_delta = opt->opt.optlen;
+ 		opt->opt.optlen = (optlen_new + 3) & ~3;
+ 		hdr_delta -= opt->opt.optlen;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 4ed0d303791a1..24ebd51c5e0b8 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2420,6 +2420,10 @@ void tcp_set_state(struct sock *sk, int state)
+ 		if (oldstate != TCP_ESTABLISHED)
+ 			TCP_INC_STATS(sock_net(sk), TCP_MIB_CURRESTAB);
+ 		break;
++	case TCP_CLOSE_WAIT:
++		if (oldstate == TCP_SYN_RECV)
++			TCP_INC_STATS(sock_net(sk), TCP_MIB_CURRESTAB);
++		break;
+ 
+ 	case TCP_CLOSE:
+ 		if (oldstate == TCP_CLOSE_WAIT || oldstate == TCP_ESTABLISHED)
+@@ -2431,7 +2435,7 @@ void tcp_set_state(struct sock *sk, int state)
+ 			inet_put_port(sk);
+ 		fallthrough;
+ 	default:
+-		if (oldstate == TCP_ESTABLISHED)
++		if (oldstate == TCP_ESTABLISHED || oldstate == TCP_CLOSE_WAIT)
+ 			TCP_DEC_STATS(sock_net(sk), TCP_MIB_CURRESTAB);
+ 	}
+ 
+@@ -3497,8 +3501,9 @@ int tcp_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
+ 	const struct inet_connection_sock *icsk = inet_csk(sk);
+ 
+ 	if (level != SOL_TCP)
+-		return icsk->icsk_af_ops->setsockopt(sk, level, optname,
+-						     optval, optlen);
++		/* Paired with WRITE_ONCE() in do_ipv6_setsockopt() and tcp_v6_connect() */
++		return READ_ONCE(icsk->icsk_af_ops)->setsockopt(sk, level, optname,
++								optval, optlen);
+ 	return do_tcp_setsockopt(sk, level, optname, optval, optlen);
+ }
+ EXPORT_SYMBOL(tcp_setsockopt);
+@@ -4059,8 +4064,9 @@ int tcp_getsockopt(struct sock *sk, int level, int optname, char __user *optval,
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+ 
+ 	if (level != SOL_TCP)
+-		return icsk->icsk_af_ops->getsockopt(sk, level, optname,
+-						     optval, optlen);
++		/* Paired with WRITE_ONCE() in do_ipv6_setsockopt() and tcp_v6_connect() */
++		return READ_ONCE(icsk->icsk_af_ops)->getsockopt(sk, level, optname,
++								optval, optlen);
+ 	return do_tcp_getsockopt(sk, level, optname, optval, optlen);
+ }
+ EXPORT_SYMBOL(tcp_getsockopt);
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index 329b3b36688aa..32da2b66fa2fb 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -449,11 +449,14 @@ static int __inet6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len,
+ int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ {
+ 	struct sock *sk = sock->sk;
++	const struct proto *prot;
+ 	int err = 0;
+ 
++	/* IPV6_ADDRFORM can change sk->sk_prot under us. */
++	prot = READ_ONCE(sk->sk_prot);
+ 	/* If the socket has its own bind function then use it. */
+-	if (sk->sk_prot->bind)
+-		return sk->sk_prot->bind(sk, uaddr, addr_len);
++	if (prot->bind)
++		return prot->bind(sk, uaddr, addr_len);
+ 
+ 	if (addr_len < SIN6_LEN_RFC2133)
+ 		return -EINVAL;
+@@ -566,6 +569,7 @@ int inet6_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 	void __user *argp = (void __user *)arg;
+ 	struct sock *sk = sock->sk;
+ 	struct net *net = sock_net(sk);
++	const struct proto *prot;
+ 
+ 	switch (cmd) {
+ 	case SIOCADDRT:
+@@ -583,9 +587,11 @@ int inet6_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 	case SIOCSIFDSTADDR:
+ 		return addrconf_set_dstaddr(net, argp);
+ 	default:
+-		if (!sk->sk_prot->ioctl)
++		/* IPV6_ADDRFORM can change sk->sk_prot under us. */
++		prot = READ_ONCE(sk->sk_prot);
++		if (!prot->ioctl)
+ 			return -ENOIOCTLCMD;
+-		return sk->sk_prot->ioctl(sk, cmd, arg);
++		return prot->ioctl(sk, cmd, arg);
+ 	}
+ 	/*NOTREACHED*/
+ 	return 0;
+@@ -647,11 +653,14 @@ INDIRECT_CALLABLE_DECLARE(int udpv6_sendmsg(struct sock *, struct msghdr *,
+ int inet6_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ {
+ 	struct sock *sk = sock->sk;
++	const struct proto *prot;
+ 
+ 	if (unlikely(inet_send_prepare(sk)))
+ 		return -EAGAIN;
+ 
+-	return INDIRECT_CALL_2(sk->sk_prot->sendmsg, tcp_sendmsg, udpv6_sendmsg,
++	/* IPV6_ADDRFORM can change sk->sk_prot under us. */
++	prot = READ_ONCE(sk->sk_prot);
++	return INDIRECT_CALL_2(prot->sendmsg, tcp_sendmsg, udpv6_sendmsg,
+ 			       sk, msg, size);
+ }
+ 
+@@ -661,13 +670,16 @@ int inet6_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 		  int flags)
+ {
+ 	struct sock *sk = sock->sk;
++	const struct proto *prot;
+ 	int addr_len = 0;
+ 	int err;
+ 
+ 	if (likely(!(flags & MSG_ERRQUEUE)))
+ 		sock_rps_record_flow(sk);
+ 
+-	err = INDIRECT_CALL_2(sk->sk_prot->recvmsg, tcp_recvmsg, udpv6_recvmsg,
++	/* IPV6_ADDRFORM can change sk->sk_prot under us. */
++	prot = READ_ONCE(sk->sk_prot);
++	err = INDIRECT_CALL_2(prot->recvmsg, tcp_recvmsg, udpv6_recvmsg,
+ 			      sk, msg, size, flags & MSG_DONTWAIT,
+ 			      flags & ~MSG_DONTWAIT, &addr_len);
+ 	if (err >= 0)
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index b79e571e5a863..83f15e930b57a 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -959,6 +959,7 @@ static void __fib6_drop_pcpu_from(struct fib6_nh *fib6_nh,
+ 	if (!fib6_nh->rt6i_pcpu)
+ 		return;
+ 
++	rcu_read_lock();
+ 	/* release the reference to this fib entry from
+ 	 * all of its cached pcpu routes
+ 	 */
+@@ -967,7 +968,9 @@ static void __fib6_drop_pcpu_from(struct fib6_nh *fib6_nh,
+ 		struct rt6_info *pcpu_rt;
+ 
+ 		ppcpu_rt = per_cpu_ptr(fib6_nh->rt6i_pcpu, cpu);
+-		pcpu_rt = *ppcpu_rt;
++
++		/* Paired with xchg() in rt6_get_pcpu_route() */
++		pcpu_rt = READ_ONCE(*ppcpu_rt);
+ 
+ 		/* only dropping the 'from' reference if the cached route
+ 		 * is using 'match'. The cached pcpu_rt->from only changes
+@@ -981,6 +984,7 @@ static void __fib6_drop_pcpu_from(struct fib6_nh *fib6_nh,
+ 			fib6_info_release(from);
+ 		}
+ 	}
++	rcu_read_unlock();
+ }
+ 
+ struct fib6_nh_pcpu_arg {
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 0ac527cd5d56d..dbbe53260b008 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -475,8 +475,10 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ 				sock_prot_inuse_add(net, sk->sk_prot, -1);
+ 				sock_prot_inuse_add(net, &tcp_prot, 1);
+ 				local_bh_enable();
+-				sk->sk_prot = &tcp_prot;
+-				icsk->icsk_af_ops = &ipv4_specific;
++				/* Paired with READ_ONCE(sk->sk_prot) in inet6_stream_ops */
++				WRITE_ONCE(sk->sk_prot, &tcp_prot);
++				/* Paired with READ_ONCE() in tcp_(get|set)sockopt() */
++				WRITE_ONCE(icsk->icsk_af_ops, &ipv4_specific);
+ 				sk->sk_socket->ops = &inet_stream_ops;
+ 				sk->sk_family = PF_INET;
+ 				tcp_sync_mss(sk, icsk->icsk_pmtu_cookie);
+@@ -489,7 +491,8 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ 				sock_prot_inuse_add(net, sk->sk_prot, -1);
+ 				sock_prot_inuse_add(net, prot, 1);
+ 				local_bh_enable();
+-				sk->sk_prot = prot;
++				/* Paired with READ_ONCE(sk->sk_prot) in inet6_dgram_ops */
++				WRITE_ONCE(sk->sk_prot, prot);
+ 				sk->sk_socket->ops = &inet_dgram_ops;
+ 				sk->sk_family = PF_INET;
+ 			}
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 88f96241ca971..799779475c7de 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -632,6 +632,8 @@ static void rt6_probe(struct fib6_nh *fib6_nh)
+ 	rcu_read_lock_bh();
+ 	last_probe = READ_ONCE(fib6_nh->last_probe);
+ 	idev = __in6_dev_get(dev);
++	if (!idev)
++		goto out;
+ 	neigh = __ipv6_neigh_lookup_noref(dev, nh_gw);
+ 	if (neigh) {
+ 		if (neigh->nud_state & NUD_VALID)
+@@ -1396,6 +1398,7 @@ static struct rt6_info *rt6_get_pcpu_route(const struct fib6_result *res)
+ 		struct rt6_info *prev, **p;
+ 
+ 		p = this_cpu_ptr(res->nh->rt6i_pcpu);
++		/* Paired with READ_ONCE() in __fib6_drop_pcpu_from() */
+ 		prev = xchg(p, NULL);
+ 		if (prev) {
+ 			dst_dev_put(&prev->dst);
+@@ -3487,7 +3490,7 @@ int fib6_nh_init(struct net *net, struct fib6_nh *fib6_nh,
+ 	if (!dev)
+ 		goto out;
+ 
+-	if (idev->cnf.disable_ipv6) {
++	if (!idev || idev->cnf.disable_ipv6) {
+ 		NL_SET_ERR_MSG(extack, "IPv6 is disabled on nexthop device");
+ 		err = -EACCES;
+ 		goto out;
+@@ -6180,12 +6183,12 @@ static int ipv6_sysctl_rtcache_flush(struct ctl_table *ctl, int write,
+ 	if (!write)
+ 		return -EINVAL;
+ 
+-	net = (struct net *)ctl->extra1;
+-	delay = net->ipv6.sysctl.flush_delay;
+ 	ret = proc_dointvec(ctl, write, buffer, lenp, ppos);
+ 	if (ret)
+ 		return ret;
+ 
++	net = (struct net *)ctl->extra1;
++	delay = net->ipv6.sysctl.flush_delay;
+ 	fib6_run_gc(delay <= 0 ? 0 : (unsigned long)delay, net, delay > 0);
+ 	return 0;
+ }
+diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c
+index 40ac23242c378..a73840da34ed9 100644
+--- a/net/ipv6/seg6_iptunnel.c
++++ b/net/ipv6/seg6_iptunnel.c
+@@ -325,9 +325,8 @@ static int seg6_input(struct sk_buff *skb)
+ 
+ 	slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
+ 
+-	preempt_disable();
++	local_bh_disable();
+ 	dst = dst_cache_get(&slwt->cache);
+-	preempt_enable();
+ 
+ 	skb_dst_drop(skb);
+ 
+@@ -335,14 +334,13 @@ static int seg6_input(struct sk_buff *skb)
+ 		ip6_route_input(skb);
+ 		dst = skb_dst(skb);
+ 		if (!dst->error) {
+-			preempt_disable();
+ 			dst_cache_set_ip6(&slwt->cache, dst,
+ 					  &ipv6_hdr(skb)->saddr);
+-			preempt_enable();
+ 		}
+ 	} else {
+ 		skb_dst_set(skb, dst);
+ 	}
++	local_bh_enable();
+ 
+ 	err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+ 	if (unlikely(err))
+@@ -364,9 +362,9 @@ static int seg6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 
+ 	slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
+ 
+-	preempt_disable();
++	local_bh_disable();
+ 	dst = dst_cache_get(&slwt->cache);
+-	preempt_enable();
++	local_bh_enable();
+ 
+ 	if (unlikely(!dst)) {
+ 		struct ipv6hdr *hdr = ipv6_hdr(skb);
+@@ -386,9 +384,9 @@ static int seg6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 			goto drop;
+ 		}
+ 
+-		preempt_disable();
++		local_bh_disable();
+ 		dst_cache_set_ip6(&slwt->cache, dst, &fl6.saddr);
+-		preempt_enable();
++		local_bh_enable();
+ 	}
+ 
+ 	skb_dst_drop(skb);
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 79d6f6ea3c546..7e595585d0596 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -237,7 +237,8 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ 		sin.sin_port = usin->sin6_port;
+ 		sin.sin_addr.s_addr = usin->sin6_addr.s6_addr32[3];
+ 
+-		icsk->icsk_af_ops = &ipv6_mapped;
++		/* Paired with READ_ONCE() in tcp_(get|set)sockopt() */
++		WRITE_ONCE(icsk->icsk_af_ops, &ipv6_mapped);
+ 		if (sk_is_mptcp(sk))
+ 			mptcpv6_handle_mapped(sk, true);
+ 		sk->sk_backlog_rcv = tcp_v4_do_rcv;
+@@ -249,7 +250,8 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ 
+ 		if (err) {
+ 			icsk->icsk_ext_hdr_len = exthdrlen;
+-			icsk->icsk_af_ops = &ipv6_specific;
++			/* Paired with READ_ONCE() in tcp_(get|set)sockopt() */
++			WRITE_ONCE(icsk->icsk_af_ops, &ipv6_specific);
+ 			if (sk_is_mptcp(sk))
+ 				mptcpv6_handle_mapped(sk, false);
+ 			sk->sk_backlog_rcv = tcp_v6_do_rcv;
+@@ -1311,7 +1313,6 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
+ 	 */
+ 
+ 	newsk->sk_gso_type = SKB_GSO_TCPV6;
+-	ip6_dst_store(newsk, dst, NULL, NULL);
+ 	inet6_sk_rx_dst_set(newsk, skb);
+ 
+ 	inet_sk(newsk)->pinet6 = tcp_inet6_sk(newsk);
+@@ -1322,6 +1323,8 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
+ 
+ 	memcpy(newnp, np, sizeof(struct ipv6_pinfo));
+ 
++	ip6_dst_store(newsk, dst, NULL, NULL);
++
+ 	newsk->sk_v6_daddr = ireq->ir_v6_rmt_addr;
+ 	newnp->saddr = ireq->ir_v6_loc_addr;
+ 	newsk->sk_v6_rcv_saddr = ireq->ir_v6_loc_addr;
+diff --git a/net/ipv6/xfrm6_policy.c b/net/ipv6/xfrm6_policy.c
+index 4c3aa97f23faa..7c903e0e446cb 100644
+--- a/net/ipv6/xfrm6_policy.c
++++ b/net/ipv6/xfrm6_policy.c
+@@ -57,12 +57,18 @@ static int xfrm6_get_saddr(struct net *net, int oif,
+ {
+ 	struct dst_entry *dst;
+ 	struct net_device *dev;
++	struct inet6_dev *idev;
+ 
+ 	dst = xfrm6_dst_lookup(net, 0, oif, NULL, daddr, mark);
+ 	if (IS_ERR(dst))
+ 		return -EHOSTUNREACH;
+ 
+-	dev = ip6_dst_idev(dst)->dev;
++	idev = ip6_dst_idev(dst);
++	if (!idev) {
++		dst_release(dst);
++		return -EHOSTUNREACH;
++	}
++	dev = idev->dev;
+ 	ipv6_dev_get_saddr(dev_net(dev), dev, &daddr->in6, 0, &saddr->in6);
+ 	dst_release(dst);
+ 	return 0;
+diff --git a/net/iucv/iucv.c b/net/iucv/iucv.c
+index ed0dbdbba4d94..06770b77e5d22 100644
+--- a/net/iucv/iucv.c
++++ b/net/iucv/iucv.c
+@@ -517,7 +517,7 @@ static void iucv_setmask_mp(void)
+  */
+ static void iucv_setmask_up(void)
+ {
+-	cpumask_t cpumask;
++	static cpumask_t cpumask;
+ 	int cpu;
+ 
+ 	/* Disable all cpu but the first in cpu_irq_cpumask. */
+@@ -625,23 +625,33 @@ static int iucv_cpu_online(unsigned int cpu)
+ 
+ static int iucv_cpu_down_prep(unsigned int cpu)
+ {
+-	cpumask_t cpumask;
++	cpumask_var_t cpumask;
++	int ret = 0;
+ 
+ 	if (!iucv_path_table)
+ 		return 0;
+ 
+-	cpumask_copy(&cpumask, &iucv_buffer_cpumask);
+-	cpumask_clear_cpu(cpu, &cpumask);
+-	if (cpumask_empty(&cpumask))
++	if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
++		return -ENOMEM;
++
++	cpumask_copy(cpumask, &iucv_buffer_cpumask);
++	cpumask_clear_cpu(cpu, cpumask);
++	if (cpumask_empty(cpumask)) {
+ 		/* Can't offline last IUCV enabled cpu. */
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto __free_cpumask;
++	}
+ 
+ 	iucv_retrieve_cpu(NULL);
+ 	if (!cpumask_empty(&iucv_irq_cpumask))
+-		return 0;
++		goto __free_cpumask;
++
+ 	smp_call_function_single(cpumask_first(&iucv_buffer_cpumask),
+ 				 iucv_allow_cpu, NULL, 1);
+-	return 0;
++
++__free_cpumask:
++	free_cpumask_var(cpumask);
++	return ret;
+ }
+ 
+ /**
+diff --git a/net/mac80211/he.c b/net/mac80211/he.c
+index cc26f239838ba..41413a4588db8 100644
+--- a/net/mac80211/he.c
++++ b/net/mac80211/he.c
+@@ -127,15 +127,21 @@ ieee80211_he_spr_ie_to_bss_conf(struct ieee80211_vif *vif,
+ 
+ 	if (!he_spr_ie_elem)
+ 		return;
++
++	he_obss_pd->sr_ctrl = he_spr_ie_elem->he_sr_control;
+ 	data = he_spr_ie_elem->optional;
+ 
+ 	if (he_spr_ie_elem->he_sr_control &
+ 	    IEEE80211_HE_SPR_NON_SRG_OFFSET_PRESENT)
+-		data++;
++		he_obss_pd->non_srg_max_offset = *data++;
++
+ 	if (he_spr_ie_elem->he_sr_control &
+ 	    IEEE80211_HE_SPR_SRG_INFORMATION_PRESENT) {
+-		he_obss_pd->max_offset = *data++;
+ 		he_obss_pd->min_offset = *data++;
++		he_obss_pd->max_offset = *data++;
++		memcpy(he_obss_pd->bss_color_bitmap, data, 8);
++		data += 8;
++		memcpy(he_obss_pd->partial_bssid_bitmap, data, 8);
+ 		he_obss_pd->enable = true;
+ 	}
+ }
+diff --git a/net/mac80211/mesh_pathtbl.c b/net/mac80211/mesh_pathtbl.c
+index d936ef0c17a37..72ecce377d174 100644
+--- a/net/mac80211/mesh_pathtbl.c
++++ b/net/mac80211/mesh_pathtbl.c
+@@ -723,10 +723,23 @@ void mesh_path_discard_frame(struct ieee80211_sub_if_data *sdata,
+  */
+ void mesh_path_flush_pending(struct mesh_path *mpath)
+ {
++	struct ieee80211_sub_if_data *sdata = mpath->sdata;
++	struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
++	struct mesh_preq_queue *preq, *tmp;
+ 	struct sk_buff *skb;
+ 
+ 	while ((skb = skb_dequeue(&mpath->frame_queue)) != NULL)
+ 		mesh_path_discard_frame(mpath->sdata, skb);
++
++	spin_lock_bh(&ifmsh->mesh_preq_queue_lock);
++	list_for_each_entry_safe(preq, tmp, &ifmsh->preq_queue.list, list) {
++		if (ether_addr_equal(mpath->dst, preq->dst)) {
++			list_del(&preq->list);
++			kfree(preq);
++			--ifmsh->preq_queue_len;
++		}
++	}
++	spin_unlock_bh(&ifmsh->mesh_preq_queue_lock);
+ }
+ 
+ /**
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index 44bd03c6b8473..f7637176d719d 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -1343,7 +1343,7 @@ void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta)
+ 	skb_queue_head_init(&pending);
+ 
+ 	/* sync with ieee80211_tx_h_unicast_ps_buf */
+-	spin_lock(&sta->ps_lock);
++	spin_lock_bh(&sta->ps_lock);
+ 	/* Send all buffered frames to the station */
+ 	for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
+ 		int count = skb_queue_len(&pending), tmp;
+@@ -1372,7 +1372,7 @@ void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta)
+ 	 */
+ 	clear_sta_flag(sta, WLAN_STA_PSPOLL);
+ 	clear_sta_flag(sta, WLAN_STA_UAPSD);
+-	spin_unlock(&sta->ps_lock);
++	spin_unlock_bh(&sta->ps_lock);
+ 
+ 	atomic_dec(&ps->num_sta_ps);
+ 
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 0d6f3d912891a..452c7e21befd6 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -371,15 +371,12 @@ void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
+ 	struct sock *sk = (struct sock *)msk;
+ 	struct mptcp_addr_info remote;
+ 	struct mptcp_addr_info local;
++	int err;
+ 
+ 	pr_debug("accepted %d:%d remote family %d",
+ 		 msk->pm.add_addr_accepted, msk->pm.add_addr_accept_max,
+ 		 msk->pm.remote.family);
+-	msk->pm.add_addr_accepted++;
+ 	msk->pm.subflows++;
+-	if (msk->pm.add_addr_accepted >= msk->pm.add_addr_accept_max ||
+-	    msk->pm.subflows >= msk->pm.subflows_max)
+-		WRITE_ONCE(msk->pm.accept_addr, false);
+ 
+ 	/* connect to the specified remote address, using whatever
+ 	 * local address the routing configuration will pick.
+@@ -391,9 +388,16 @@ void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
+ 	local.family = remote.family;
+ 
+ 	spin_unlock_bh(&msk->pm.lock);
+-	__mptcp_subflow_connect((struct sock *)msk, &local, &remote);
++	err = __mptcp_subflow_connect((struct sock *)msk, &local, &remote);
+ 	spin_lock_bh(&msk->pm.lock);
+ 
++	if (!err) {
++		msk->pm.add_addr_accepted++;
++		if (msk->pm.add_addr_accepted >= msk->pm.add_addr_accept_max ||
++		    msk->pm.subflows >= msk->pm.subflows_max)
++			WRITE_ONCE(msk->pm.accept_addr, false);
++	}
++
+ 	mptcp_pm_announce_addr(msk, &remote, true);
+ }
+ 
+@@ -427,10 +431,10 @@ void mptcp_pm_nl_rm_addr_received(struct mptcp_sock *msk)
+ 		msk->pm.subflows--;
+ 		WRITE_ONCE(msk->pm.accept_addr, true);
+ 
+-		__MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RMADDR);
+-
+ 		break;
+ 	}
++
++	__MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RMADDR);
+ }
+ 
+ void mptcp_pm_nl_rm_subflow_received(struct mptcp_sock *msk, u8 rm_id)
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 36fa456f42ba9..a36493bbf8950 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2646,6 +2646,7 @@ static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ 		mptcp_subflow_early_fallback(msk, subflow);
+ 
+ 	WRITE_ONCE(msk->write_seq, subflow->idsn);
++	atomic64_set(&msk->snd_una, msk->write_seq);
+ 
+ do_connect:
+ 	err = ssock->ops->connect(ssock, uaddr, addr_len, flags);
+diff --git a/net/ncsi/Kconfig b/net/ncsi/Kconfig
+index 93309081f5a40..ea1dd32b6b1f6 100644
+--- a/net/ncsi/Kconfig
++++ b/net/ncsi/Kconfig
+@@ -17,3 +17,9 @@ config NCSI_OEM_CMD_GET_MAC
+ 	help
+ 	  This allows to get MAC address from NCSI firmware and set them back to
+ 		controller.
++config NCSI_OEM_CMD_KEEP_PHY
++	bool "Keep PHY Link up"
++	depends on NET_NCSI
++	help
++	  This allows to keep PHY link up and prevents any channel resets during
++	  the host load.
+diff --git a/net/ncsi/internal.h b/net/ncsi/internal.h
+index ec765f2a75691..dea60e25e8607 100644
+--- a/net/ncsi/internal.h
++++ b/net/ncsi/internal.h
+@@ -78,6 +78,9 @@ enum {
+ /* OEM Vendor Manufacture ID */
+ #define NCSI_OEM_MFR_MLX_ID             0x8119
+ #define NCSI_OEM_MFR_BCM_ID             0x113d
++#define NCSI_OEM_MFR_INTEL_ID           0x157
++/* Intel specific OEM command */
++#define NCSI_OEM_INTEL_CMD_KEEP_PHY     0x20   /* CMD ID for Keep PHY up */
+ /* Broadcom specific OEM Command */
+ #define NCSI_OEM_BCM_CMD_GMA            0x01   /* CMD ID for Get MAC */
+ /* Mellanox specific OEM Command */
+@@ -86,6 +89,7 @@ enum {
+ #define NCSI_OEM_MLX_CMD_SMAF           0x01   /* CMD ID for Set MC Affinity */
+ #define NCSI_OEM_MLX_CMD_SMAF_PARAM     0x07   /* Parameter for SMAF         */
+ /* OEM Command payload lengths*/
++#define NCSI_OEM_INTEL_CMD_KEEP_PHY_LEN 7
+ #define NCSI_OEM_BCM_CMD_GMA_LEN        12
+ #define NCSI_OEM_MLX_CMD_GMA_LEN        8
+ #define NCSI_OEM_MLX_CMD_SMAF_LEN        60
+@@ -274,6 +278,7 @@ enum {
+ 	ncsi_dev_state_probe_mlx_gma,
+ 	ncsi_dev_state_probe_mlx_smaf,
+ 	ncsi_dev_state_probe_cis,
++	ncsi_dev_state_probe_keep_phy,
+ 	ncsi_dev_state_probe_gvi,
+ 	ncsi_dev_state_probe_gc,
+ 	ncsi_dev_state_probe_gls,
+@@ -317,6 +322,7 @@ struct ncsi_dev_priv {
+ 	spinlock_t          lock;            /* Protect the NCSI device    */
+ 	unsigned int        package_probe_id;/* Current ID during probe    */
+ 	unsigned int        package_num;     /* Number of packages         */
++	unsigned int        channel_probe_id;/* Current cahnnel ID during probe */
+ 	struct list_head    packages;        /* List of packages           */
+ 	struct ncsi_channel *hot_channel;    /* Channel was ever active    */
+ 	struct ncsi_request requests[256];   /* Request table              */
+@@ -335,6 +341,7 @@ struct ncsi_dev_priv {
+ 	bool                multi_package;   /* Enable multiple packages   */
+ 	bool                mlx_multi_host;  /* Enable multi host Mellanox */
+ 	u32                 package_whitelist; /* Packages to configure    */
++	unsigned char       channel_count;     /* Num of channels to probe   */
+ };
+ 
+ struct ncsi_cmd_arg {
+diff --git a/net/ncsi/ncsi-manage.c b/net/ncsi/ncsi-manage.c
+index ffff8da707b8c..bb3248214746a 100644
+--- a/net/ncsi/ncsi-manage.c
++++ b/net/ncsi/ncsi-manage.c
+@@ -510,17 +510,19 @@ static void ncsi_suspend_channel(struct ncsi_dev_priv *ndp)
+ 
+ 		break;
+ 	case ncsi_dev_state_suspend_gls:
+-		ndp->pending_req_num = np->channel_num;
++		ndp->pending_req_num = 1;
+ 
+ 		nca.type = NCSI_PKT_CMD_GLS;
+ 		nca.package = np->id;
++		nca.channel = ndp->channel_probe_id;
++		ret = ncsi_xmit_cmd(&nca);
++		if (ret)
++			goto error;
++		ndp->channel_probe_id++;
+ 
+-		nd->state = ncsi_dev_state_suspend_dcnt;
+-		NCSI_FOR_EACH_CHANNEL(np, nc) {
+-			nca.channel = nc->id;
+-			ret = ncsi_xmit_cmd(&nca);
+-			if (ret)
+-				goto error;
++		if (ndp->channel_probe_id == ndp->channel_count) {
++			ndp->channel_probe_id = 0;
++			nd->state = ncsi_dev_state_suspend_dcnt;
+ 		}
+ 
+ 		break;
+@@ -689,7 +691,30 @@ static int set_one_vid(struct ncsi_dev_priv *ndp, struct ncsi_channel *nc,
+ 	return 0;
+ }
+ 
+-#if IS_ENABLED(CONFIG_NCSI_OEM_CMD_GET_MAC)
++static int ncsi_oem_keep_phy_intel(struct ncsi_cmd_arg *nca)
++{
++	unsigned char data[NCSI_OEM_INTEL_CMD_KEEP_PHY_LEN];
++	int ret = 0;
++
++	nca->payload = NCSI_OEM_INTEL_CMD_KEEP_PHY_LEN;
++
++	memset(data, 0, NCSI_OEM_INTEL_CMD_KEEP_PHY_LEN);
++	*(unsigned int *)data = ntohl((__force __be32)NCSI_OEM_MFR_INTEL_ID);
++
++	data[4] = NCSI_OEM_INTEL_CMD_KEEP_PHY;
++
++	/* PHY Link up attribute */
++	data[6] = 0x1;
++
++	nca->data = data;
++
++	ret = ncsi_xmit_cmd(nca);
++	if (ret)
++		netdev_err(nca->ndp->ndev.dev,
++			   "NCSI: Failed to transmit cmd 0x%x during configure\n",
++			   nca->type);
++	return ret;
++}
+ 
+ /* NCSI OEM Command APIs */
+ static int ncsi_oem_gma_handler_bcm(struct ncsi_cmd_arg *nca)
+@@ -804,8 +829,6 @@ static int ncsi_gma_handler(struct ncsi_cmd_arg *nca, unsigned int mf_id)
+ 	return nch->handler(nca);
+ }
+ 
+-#endif /* CONFIG_NCSI_OEM_CMD_GET_MAC */
+-
+ /* Determine if a given channel from the channel_queue should be used for Tx */
+ static bool ncsi_channel_is_tx(struct ncsi_dev_priv *ndp,
+ 			       struct ncsi_channel *nc)
+@@ -987,20 +1010,18 @@ static void ncsi_configure_channel(struct ncsi_dev_priv *ndp)
+ 			goto error;
+ 		}
+ 
+-		nd->state = ncsi_dev_state_config_oem_gma;
++		nd->state = IS_ENABLED(CONFIG_NCSI_OEM_CMD_GET_MAC)
++			  ? ncsi_dev_state_config_oem_gma
++			  : ncsi_dev_state_config_clear_vids;
+ 		break;
+ 	case ncsi_dev_state_config_oem_gma:
+ 		nd->state = ncsi_dev_state_config_clear_vids;
+-		ret = -1;
+ 
+-#if IS_ENABLED(CONFIG_NCSI_OEM_CMD_GET_MAC)
+ 		nca.type = NCSI_PKT_CMD_OEM;
+ 		nca.package = np->id;
+ 		nca.channel = nc->id;
+ 		ndp->pending_req_num = 1;
+ 		ret = ncsi_gma_handler(&nca, nc->version.mf_id);
+-#endif /* CONFIG_NCSI_OEM_CMD_GET_MAC */
+-
+ 		if (ret < 0)
+ 			schedule_work(&ndp->work);
+ 
+@@ -1298,7 +1319,6 @@ static void ncsi_probe_channel(struct ncsi_dev_priv *ndp)
+ {
+ 	struct ncsi_dev *nd = &ndp->ndev;
+ 	struct ncsi_package *np;
+-	struct ncsi_channel *nc;
+ 	struct ncsi_cmd_arg nca;
+ 	unsigned char index;
+ 	int ret;
+@@ -1352,7 +1372,6 @@ static void ncsi_probe_channel(struct ncsi_dev_priv *ndp)
+ 
+ 		schedule_work(&ndp->work);
+ 		break;
+-#if IS_ENABLED(CONFIG_NCSI_OEM_CMD_GET_MAC)
+ 	case ncsi_dev_state_probe_mlx_gma:
+ 		ndp->pending_req_num = 1;
+ 
+@@ -1377,30 +1396,29 @@ static void ncsi_probe_channel(struct ncsi_dev_priv *ndp)
+ 
+ 		nd->state = ncsi_dev_state_probe_cis;
+ 		break;
+-#endif /* CONFIG_NCSI_OEM_CMD_GET_MAC */
+-	case ncsi_dev_state_probe_cis:
+-		ndp->pending_req_num = NCSI_RESERVED_CHANNEL;
++	case ncsi_dev_state_probe_keep_phy:
++		ndp->pending_req_num = 1;
+ 
+-		/* Clear initial state */
+-		nca.type = NCSI_PKT_CMD_CIS;
++		nca.type = NCSI_PKT_CMD_OEM;
+ 		nca.package = ndp->active_package->id;
+-		for (index = 0; index < NCSI_RESERVED_CHANNEL; index++) {
+-			nca.channel = index;
+-			ret = ncsi_xmit_cmd(&nca);
+-			if (ret)
+-				goto error;
+-		}
++		nca.channel = 0;
++		ret = ncsi_oem_keep_phy_intel(&nca);
++		if (ret)
++			goto error;
+ 
+ 		nd->state = ncsi_dev_state_probe_gvi;
+ 		break;
++	case ncsi_dev_state_probe_cis:
+ 	case ncsi_dev_state_probe_gvi:
+ 	case ncsi_dev_state_probe_gc:
+ 	case ncsi_dev_state_probe_gls:
+ 		np = ndp->active_package;
+-		ndp->pending_req_num = np->channel_num;
++		ndp->pending_req_num = 1;
+ 
+-		/* Retrieve version, capability or link status */
+-		if (nd->state == ncsi_dev_state_probe_gvi)
++		/* Clear initial state Retrieve version, capability or link status */
++		if (nd->state == ncsi_dev_state_probe_cis)
++			nca.type = NCSI_PKT_CMD_CIS;
++		else if (nd->state == ncsi_dev_state_probe_gvi)
+ 			nca.type = NCSI_PKT_CMD_GVI;
+ 		else if (nd->state == ncsi_dev_state_probe_gc)
+ 			nca.type = NCSI_PKT_CMD_GC;
+@@ -1408,19 +1426,29 @@ static void ncsi_probe_channel(struct ncsi_dev_priv *ndp)
+ 			nca.type = NCSI_PKT_CMD_GLS;
+ 
+ 		nca.package = np->id;
+-		NCSI_FOR_EACH_CHANNEL(np, nc) {
+-			nca.channel = nc->id;
+-			ret = ncsi_xmit_cmd(&nca);
+-			if (ret)
+-				goto error;
+-		}
++		nca.channel = ndp->channel_probe_id;
++
++		ret = ncsi_xmit_cmd(&nca);
++		if (ret)
++			goto error;
+ 
+-		if (nd->state == ncsi_dev_state_probe_gvi)
++		if (nd->state == ncsi_dev_state_probe_cis) {
++			nd->state = ncsi_dev_state_probe_gvi;
++			if (IS_ENABLED(CONFIG_NCSI_OEM_CMD_KEEP_PHY) && ndp->channel_probe_id == 0)
++				nd->state = ncsi_dev_state_probe_keep_phy;
++		} else if (nd->state == ncsi_dev_state_probe_gvi) {
+ 			nd->state = ncsi_dev_state_probe_gc;
+-		else if (nd->state == ncsi_dev_state_probe_gc)
++		} else if (nd->state == ncsi_dev_state_probe_gc) {
+ 			nd->state = ncsi_dev_state_probe_gls;
+-		else
++		} else {
++			nd->state = ncsi_dev_state_probe_cis;
++			ndp->channel_probe_id++;
++		}
++
++		if (ndp->channel_probe_id == ndp->channel_count) {
++			ndp->channel_probe_id = 0;
+ 			nd->state = ncsi_dev_state_probe_dp;
++		}
+ 		break;
+ 	case ncsi_dev_state_probe_dp:
+ 		ndp->pending_req_num = 1;
+@@ -1721,6 +1749,7 @@ struct ncsi_dev *ncsi_register_dev(struct net_device *dev,
+ 		ndp->requests[i].ndp = ndp;
+ 		timer_setup(&ndp->requests[i].timer, ncsi_request_timeout, 0);
+ 	}
++	ndp->channel_count = NCSI_RESERVED_CHANNEL;
+ 
+ 	spin_lock_irqsave(&ncsi_dev_lock, flags);
+ 	list_add_tail_rcu(&ndp->node, &ncsi_dev_list);
+@@ -1753,6 +1782,7 @@ int ncsi_start_dev(struct ncsi_dev *nd)
+ 
+ 	if (!(ndp->flags & NCSI_DEV_PROBED)) {
+ 		ndp->package_probe_id = 0;
++		ndp->channel_probe_id = 0;
+ 		nd->state = ncsi_dev_state_probe;
+ 		schedule_work(&ndp->work);
+ 		return 0;
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index 6a46388116601..960e2cfc1fd2a 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -795,12 +795,13 @@ static int ncsi_rsp_handler_gc(struct ncsi_request *nr)
+ 	struct ncsi_rsp_gc_pkt *rsp;
+ 	struct ncsi_dev_priv *ndp = nr->ndp;
+ 	struct ncsi_channel *nc;
++	struct ncsi_package *np;
+ 	size_t size;
+ 
+ 	/* Find the channel */
+ 	rsp = (struct ncsi_rsp_gc_pkt *)skb_network_header(nr->rsp);
+ 	ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+-				      NULL, &nc);
++				      &np, &nc);
+ 	if (!nc)
+ 		return -ENODEV;
+ 
+@@ -835,6 +836,7 @@ static int ncsi_rsp_handler_gc(struct ncsi_request *nr)
+ 	 */
+ 	nc->vlan_filter.bitmap = U64_MAX;
+ 	nc->vlan_filter.n_vids = rsp->vlan_cnt;
++	np->ndp->channel_count = rsp->channel_cnt;
+ 
+ 	return 0;
+ }
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index cc04c4d7956c5..bac92369a5436 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -53,12 +53,13 @@ MODULE_DESCRIPTION("core IP set support");
+ MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_IPSET);
+ 
+ /* When the nfnl mutex or ip_set_ref_lock is held: */
+-#define ip_set_dereference(p)		\
+-	rcu_dereference_protected(p,	\
++#define ip_set_dereference(inst)	\
++	rcu_dereference_protected((inst)->ip_set_list,	\
+ 		lockdep_nfnl_is_held(NFNL_SUBSYS_IPSET) || \
+-		lockdep_is_held(&ip_set_ref_lock))
++		lockdep_is_held(&ip_set_ref_lock) || \
++		(inst)->is_deleted)
+ #define ip_set(inst, id)		\
+-	ip_set_dereference((inst)->ip_set_list)[id]
++	ip_set_dereference(inst)[id]
+ #define ip_set_ref_netlink(inst,id)	\
+ 	rcu_dereference_raw((inst)->ip_set_list)[id]
+ #define ip_set_dereference_nfnl(p)	\
+@@ -1137,7 +1138,7 @@ static int ip_set_create(struct net *net, struct sock *ctnl,
+ 		if (!list)
+ 			goto cleanup;
+ 		/* nfnl mutex is held, both lists are valid */
+-		tmp = ip_set_dereference(inst->ip_set_list);
++		tmp = ip_set_dereference(inst);
+ 		memcpy(list, tmp, sizeof(struct ip_set *) * inst->ip_set_max);
+ 		rcu_assign_pointer(inst->ip_set_list, list);
+ 		/* Make sure all current packets have passed through */
+@@ -1176,23 +1177,50 @@ ip_set_setname_policy[IPSET_ATTR_CMD_MAX + 1] = {
+ 				    .len = IPSET_MAXNAMELEN - 1 },
+ };
+ 
++/* In order to return quickly when destroying a single set, it is split
++ * into two stages:
++ * - Cancel garbage collector
++ * - Destroy the set itself via call_rcu()
++ */
++
+ static void
+-ip_set_destroy_set(struct ip_set *set)
++ip_set_destroy_set_rcu(struct rcu_head *head)
+ {
+-	pr_debug("set: %s\n",  set->name);
++	struct ip_set *set = container_of(head, struct ip_set, rcu);
+ 
+-	/* Must call it without holding any lock */
+ 	set->variant->destroy(set);
+ 	module_put(set->type->me);
+ 	kfree(set);
+ }
+ 
+ static void
+-ip_set_destroy_set_rcu(struct rcu_head *head)
++_destroy_all_sets(struct ip_set_net *inst)
+ {
+-	struct ip_set *set = container_of(head, struct ip_set, rcu);
++	struct ip_set *set;
++	ip_set_id_t i;
++	bool need_wait = false;
+ 
+-	ip_set_destroy_set(set);
++	/* First cancel gc's: set:list sets are flushed as well */
++	for (i = 0; i < inst->ip_set_max; i++) {
++		set = ip_set(inst, i);
++		if (set) {
++			set->variant->cancel_gc(set);
++			if (set->type->features & IPSET_TYPE_NAME)
++				need_wait = true;
++		}
++	}
++	/* Must wait for flush to be really finished  */
++	if (need_wait)
++		rcu_barrier();
++	for (i = 0; i < inst->ip_set_max; i++) {
++		set = ip_set(inst, i);
++		if (set) {
++			ip_set(inst, i) = NULL;
++			set->variant->destroy(set);
++			module_put(set->type->me);
++			kfree(set);
++		}
++	}
+ }
+ 
+ static int ip_set_destroy(struct net *net, struct sock *ctnl,
+@@ -1208,11 +1236,10 @@ static int ip_set_destroy(struct net *net, struct sock *ctnl,
+ 	if (unlikely(protocol_min_failed(attr)))
+ 		return -IPSET_ERR_PROTOCOL;
+ 
+-
+ 	/* Commands are serialized and references are
+ 	 * protected by the ip_set_ref_lock.
+ 	 * External systems (i.e. xt_set) must call
+-	 * ip_set_put|get_nfnl_* functions, that way we
++	 * ip_set_nfnl_get_* functions, that way we
+ 	 * can safely check references here.
+ 	 *
+ 	 * list:set timer can only decrement the reference
+@@ -1220,8 +1247,6 @@ static int ip_set_destroy(struct net *net, struct sock *ctnl,
+ 	 * without holding the lock.
+ 	 */
+ 	if (!attr[IPSET_ATTR_SETNAME]) {
+-		/* Must wait for flush to be really finished in list:set */
+-		rcu_barrier();
+ 		read_lock_bh(&ip_set_ref_lock);
+ 		for (i = 0; i < inst->ip_set_max; i++) {
+ 			s = ip_set(inst, i);
+@@ -1232,15 +1257,7 @@ static int ip_set_destroy(struct net *net, struct sock *ctnl,
+ 		}
+ 		inst->is_destroyed = true;
+ 		read_unlock_bh(&ip_set_ref_lock);
+-		for (i = 0; i < inst->ip_set_max; i++) {
+-			s = ip_set(inst, i);
+-			if (s) {
+-				ip_set(inst, i) = NULL;
+-				/* Must cancel garbage collectors */
+-				s->variant->cancel_gc(s);
+-				ip_set_destroy_set(s);
+-			}
+-		}
++		_destroy_all_sets(inst);
+ 		/* Modified by ip_set_destroy() only, which is serialized */
+ 		inst->is_destroyed = false;
+ 	} else {
+@@ -1259,12 +1276,12 @@ static int ip_set_destroy(struct net *net, struct sock *ctnl,
+ 		features = s->type->features;
+ 		ip_set(inst, i) = NULL;
+ 		read_unlock_bh(&ip_set_ref_lock);
++		/* Must cancel garbage collectors */
++		s->variant->cancel_gc(s);
+ 		if (features & IPSET_TYPE_NAME) {
+ 			/* Must wait for flush to be really finished  */
+ 			rcu_barrier();
+ 		}
+-		/* Must cancel garbage collectors */
+-		s->variant->cancel_gc(s);
+ 		call_rcu(&s->rcu, ip_set_destroy_set_rcu);
+ 	}
+ 	return 0;
+@@ -2400,30 +2417,25 @@ ip_set_net_init(struct net *net)
+ }
+ 
+ static void __net_exit
+-ip_set_net_exit(struct net *net)
++ip_set_net_pre_exit(struct net *net)
+ {
+ 	struct ip_set_net *inst = ip_set_pernet(net);
+ 
+-	struct ip_set *set = NULL;
+-	ip_set_id_t i;
+-
+ 	inst->is_deleted = true; /* flag for ip_set_nfnl_put */
++}
+ 
+-	nfnl_lock(NFNL_SUBSYS_IPSET);
+-	for (i = 0; i < inst->ip_set_max; i++) {
+-		set = ip_set(inst, i);
+-		if (set) {
+-			ip_set(inst, i) = NULL;
+-			set->variant->cancel_gc(set);
+-			ip_set_destroy_set(set);
+-		}
+-	}
+-	nfnl_unlock(NFNL_SUBSYS_IPSET);
++static void __net_exit
++ip_set_net_exit(struct net *net)
++{
++	struct ip_set_net *inst = ip_set_pernet(net);
++
++	_destroy_all_sets(inst);
+ 	kvfree(rcu_dereference_protected(inst->ip_set_list, 1));
+ }
+ 
+ static struct pernet_operations ip_set_net_ops = {
+ 	.init	= ip_set_net_init,
++	.pre_exit = ip_set_net_pre_exit,
+ 	.exit   = ip_set_net_exit,
+ 	.id	= &ip_set_net_id,
+ 	.size	= sizeof(struct ip_set_net),
+diff --git a/net/netfilter/ipset/ip_set_list_set.c b/net/netfilter/ipset/ip_set_list_set.c
+index 6bc7019982b05..e839c356bcb56 100644
+--- a/net/netfilter/ipset/ip_set_list_set.c
++++ b/net/netfilter/ipset/ip_set_list_set.c
+@@ -79,7 +79,7 @@ list_set_kadd(struct ip_set *set, const struct sk_buff *skb,
+ 	struct set_elem *e;
+ 	int ret;
+ 
+-	list_for_each_entry(e, &map->members, list) {
++	list_for_each_entry_rcu(e, &map->members, list) {
+ 		if (SET_WITH_TIMEOUT(set) &&
+ 		    ip_set_timeout_expired(ext_timeout(e, set)))
+ 			continue;
+@@ -99,7 +99,7 @@ list_set_kdel(struct ip_set *set, const struct sk_buff *skb,
+ 	struct set_elem *e;
+ 	int ret;
+ 
+-	list_for_each_entry(e, &map->members, list) {
++	list_for_each_entry_rcu(e, &map->members, list) {
+ 		if (SET_WITH_TIMEOUT(set) &&
+ 		    ip_set_timeout_expired(ext_timeout(e, set)))
+ 			continue;
+@@ -188,9 +188,10 @@ list_set_utest(struct ip_set *set, void *value, const struct ip_set_ext *ext,
+ 	struct list_set *map = set->data;
+ 	struct set_adt_elem *d = value;
+ 	struct set_elem *e, *next, *prev = NULL;
+-	int ret;
++	int ret = 0;
+ 
+-	list_for_each_entry(e, &map->members, list) {
++	rcu_read_lock();
++	list_for_each_entry_rcu(e, &map->members, list) {
+ 		if (SET_WITH_TIMEOUT(set) &&
+ 		    ip_set_timeout_expired(ext_timeout(e, set)))
+ 			continue;
+@@ -201,6 +202,7 @@ list_set_utest(struct ip_set *set, void *value, const struct ip_set_ext *ext,
+ 
+ 		if (d->before == 0) {
+ 			ret = 1;
++			goto out;
+ 		} else if (d->before > 0) {
+ 			next = list_next_entry(e, list);
+ 			ret = !list_is_last(&e->list, &map->members) &&
+@@ -208,9 +210,11 @@ list_set_utest(struct ip_set *set, void *value, const struct ip_set_ext *ext,
+ 		} else {
+ 			ret = prev && prev->id == d->refid;
+ 		}
+-		return ret;
++		goto out;
+ 	}
+-	return 0;
++out:
++	rcu_read_unlock();
++	return ret;
+ }
+ 
+ static void
+@@ -239,7 +243,7 @@ list_set_uadd(struct ip_set *set, void *value, const struct ip_set_ext *ext,
+ 
+ 	/* Find where to add the new entry */
+ 	n = prev = next = NULL;
+-	list_for_each_entry(e, &map->members, list) {
++	list_for_each_entry_rcu(e, &map->members, list) {
+ 		if (SET_WITH_TIMEOUT(set) &&
+ 		    ip_set_timeout_expired(ext_timeout(e, set)))
+ 			continue;
+@@ -316,9 +320,9 @@ list_set_udel(struct ip_set *set, void *value, const struct ip_set_ext *ext,
+ {
+ 	struct list_set *map = set->data;
+ 	struct set_adt_elem *d = value;
+-	struct set_elem *e, *next, *prev = NULL;
++	struct set_elem *e, *n, *next, *prev = NULL;
+ 
+-	list_for_each_entry(e, &map->members, list) {
++	list_for_each_entry_safe(e, n, &map->members, list) {
+ 		if (SET_WITH_TIMEOUT(set) &&
+ 		    ip_set_timeout_expired(ext_timeout(e, set)))
+ 			continue;
+@@ -424,14 +428,8 @@ static void
+ list_set_destroy(struct ip_set *set)
+ {
+ 	struct list_set *map = set->data;
+-	struct set_elem *e, *n;
+ 
+-	list_for_each_entry_safe(e, n, &map->members, list) {
+-		list_del(&e->list);
+-		ip_set_put_byindex(map->net, e->id);
+-		ip_set_ext_destroy(set, e);
+-		kfree(e);
+-	}
++	WARN_ON_ONCE(!list_empty(&map->members));
+ 	kfree(map);
+ 
+ 	set->data = NULL;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index f3cb5c9202760..f4bbddfbbc247 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -713,7 +713,7 @@ static struct nft_table *nft_table_lookup(const struct net *net,
+ 
+ static struct nft_table *nft_table_lookup_byhandle(const struct net *net,
+ 						   const struct nlattr *nla,
+-						   u8 genmask)
++						   int family, u8 genmask)
+ {
+ 	struct nftables_pernet *nft_net;
+ 	struct nft_table *table;
+@@ -721,6 +721,7 @@ static struct nft_table *nft_table_lookup_byhandle(const struct net *net,
+ 	nft_net = net_generic(net, nf_tables_net_id);
+ 	list_for_each_entry(table, &nft_net->tables, list) {
+ 		if (be64_to_cpu(nla_get_be64(nla)) == table->handle &&
++		    table->family == family &&
+ 		    nft_active_genmask(table, genmask))
+ 			return table;
+ 	}
+@@ -1440,7 +1441,7 @@ static int nf_tables_deltable(struct net *net, struct sock *nlsk,
+ 
+ 	if (nla[NFTA_TABLE_HANDLE]) {
+ 		attr = nla[NFTA_TABLE_HANDLE];
+-		table = nft_table_lookup_byhandle(net, attr, genmask);
++		table = nft_table_lookup_byhandle(net, attr, family, genmask);
+ 	} else {
+ 		attr = nla[NFTA_TABLE_NAME];
+ 		table = nft_table_lookup(net, attr, family, genmask);
+@@ -4989,8 +4990,7 @@ static int nf_tables_fill_setelem(struct sk_buff *skb,
+ 
+ 	if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA) &&
+ 	    nft_data_dump(skb, NFTA_SET_ELEM_DATA, nft_set_ext_data(ext),
+-			  set->dtype == NFT_DATA_VERDICT ? NFT_DATA_VERDICT : NFT_DATA_VALUE,
+-			  set->dlen) < 0)
++			  nft_set_datatype(set), set->dlen) < 0)
+ 		goto nla_put_failure;
+ 
+ 	if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPR) &&
+@@ -9336,6 +9336,9 @@ static int nft_validate_register_store(const struct nft_ctx *ctx,
+ 
+ 		return 0;
+ 	default:
++		if (type != NFT_DATA_VALUE)
++			return -EINVAL;
++
+ 		if (reg < NFT_REG_1 * NFT_REG_SIZE / NFT_REG32_SIZE)
+ 			return -EINVAL;
+ 		if (len == 0)
+@@ -9344,8 +9347,6 @@ static int nft_validate_register_store(const struct nft_ctx *ctx,
+ 		    sizeof_field(struct nft_regs, data))
+ 			return -ERANGE;
+ 
+-		if (data != NULL && type != NFT_DATA_VALUE)
+-			return -EINVAL;
+ 		return 0;
+ 	}
+ }
+diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
+index 8bc008ff00cb7..d2f8131edaf14 100644
+--- a/net/netfilter/nft_lookup.c
++++ b/net/netfilter/nft_lookup.c
+@@ -101,7 +101,8 @@ static int nft_lookup_init(const struct nft_ctx *ctx,
+ 			return -EINVAL;
+ 
+ 		err = nft_parse_register_store(ctx, tb[NFTA_LOOKUP_DREG],
+-					       &priv->dreg, NULL, set->dtype,
++					       &priv->dreg, NULL,
++					       nft_set_datatype(set),
+ 					       set->dlen);
+ 		if (err < 0)
+ 			return err;
+diff --git a/net/netrom/nr_timer.c b/net/netrom/nr_timer.c
+index 4e7c968cde2dc..5e3ca068f04e0 100644
+--- a/net/netrom/nr_timer.c
++++ b/net/netrom/nr_timer.c
+@@ -121,7 +121,8 @@ static void nr_heartbeat_expiry(struct timer_list *t)
+ 		   is accepted() it isn't 'dead' so doesn't get removed. */
+ 		if (sock_flag(sk, SOCK_DESTROY) ||
+ 		    (sk->sk_state == TCP_LISTEN && sock_flag(sk, SOCK_DEAD))) {
+-			sock_hold(sk);
++			if (sk->sk_state == TCP_LISTEN)
++				sock_hold(sk);
+ 			bh_unlock_sock(sk);
+ 			nr_destroy_socket(sk);
+ 			goto out;
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 8e52f09493053..9bec88fe35058 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3761,28 +3761,30 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ 	case PACKET_TX_RING:
+ 	{
+ 		union tpacket_req_u req_u;
+-		int len;
+ 
++		ret = -EINVAL;
+ 		lock_sock(sk);
+ 		switch (po->tp_version) {
+ 		case TPACKET_V1:
+ 		case TPACKET_V2:
+-			len = sizeof(req_u.req);
++			if (optlen < sizeof(req_u.req))
++				break;
++			ret = copy_from_sockptr(&req_u.req, optval,
++						sizeof(req_u.req)) ?
++						-EINVAL : 0;
+ 			break;
+ 		case TPACKET_V3:
+ 		default:
+-			len = sizeof(req_u.req3);
++			if (optlen < sizeof(req_u.req3))
++				break;
++			ret = copy_from_sockptr(&req_u.req3, optval,
++						sizeof(req_u.req3)) ?
++						-EINVAL : 0;
+ 			break;
+ 		}
+-		if (optlen < len) {
+-			ret = -EINVAL;
+-		} else {
+-			if (copy_from_sockptr(&req_u.req, optval, len))
+-				ret = -EFAULT;
+-			else
+-				ret = packet_set_ring(sk, &req_u, 0,
+-						    optname == PACKET_TX_RING);
+-		}
++		if (!ret)
++			ret = packet_set_ring(sk, &req_u, 0,
++					      optname == PACKET_TX_RING);
+ 		release_sock(sk);
+ 		return ret;
+ 	}
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index 4ab9c2a6f6501..bf98bb602a9de 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -507,6 +507,9 @@ EXPORT_SYMBOL(tcf_idr_cleanup);
+  * its reference and bind counters, and return 1. Otherwise insert temporary
+  * error pointer (to prevent concurrent users from inserting actions with same
+  * index) and return 0.
++ *
++ * May return -EAGAIN for binding actions in case of a parallel add/delete on
++ * the requested index.
+  */
+ 
+ int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index,
+@@ -515,43 +518,60 @@ int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index,
+ 	struct tcf_idrinfo *idrinfo = tn->idrinfo;
+ 	struct tc_action *p;
+ 	int ret;
++	u32 max;
+ 
+-again:
+-	mutex_lock(&idrinfo->lock);
+ 	if (*index) {
++		rcu_read_lock();
+ 		p = idr_find(&idrinfo->action_idr, *index);
++
+ 		if (IS_ERR(p)) {
+ 			/* This means that another process allocated
+ 			 * index but did not assign the pointer yet.
+ 			 */
+-			mutex_unlock(&idrinfo->lock);
+-			goto again;
++			rcu_read_unlock();
++			return -EAGAIN;
+ 		}
+ 
+-		if (p) {
+-			refcount_inc(&p->tcfa_refcnt);
+-			if (bind)
+-				atomic_inc(&p->tcfa_bindcnt);
+-			*a = p;
+-			ret = 1;
+-		} else {
+-			*a = NULL;
+-			ret = idr_alloc_u32(&idrinfo->action_idr, NULL, index,
+-					    *index, GFP_KERNEL);
+-			if (!ret)
+-				idr_replace(&idrinfo->action_idr,
+-					    ERR_PTR(-EBUSY), *index);
++		if (!p) {
++			/* Empty slot, try to allocate it */
++			max = *index;
++			rcu_read_unlock();
++			goto new;
+ 		}
++
++		if (!refcount_inc_not_zero(&p->tcfa_refcnt)) {
++			/* Action was deleted in parallel */
++			rcu_read_unlock();
++			return -EAGAIN;
++		}
++
++		if (bind)
++			atomic_inc(&p->tcfa_bindcnt);
++		*a = p;
++
++		rcu_read_unlock();
++
++		return 1;
+ 	} else {
++		/* Find a slot */
+ 		*index = 1;
+-		*a = NULL;
+-		ret = idr_alloc_u32(&idrinfo->action_idr, NULL, index,
+-				    UINT_MAX, GFP_KERNEL);
+-		if (!ret)
+-			idr_replace(&idrinfo->action_idr, ERR_PTR(-EBUSY),
+-				    *index);
++		max = UINT_MAX;
+ 	}
++
++new:
++	*a = NULL;
++
++	mutex_lock(&idrinfo->lock);
++	ret = idr_alloc_u32(&idrinfo->action_idr, ERR_PTR(-EBUSY), index, max,
++			    GFP_KERNEL);
+ 	mutex_unlock(&idrinfo->lock);
++
++	/* N binds raced for action allocation,
++	 * retry for all the ones that failed.
++	 */
++	if (ret == -ENOSPC && *index == max)
++		ret = -EAGAIN;
++
+ 	return ret;
+ }
+ EXPORT_SYMBOL(tcf_idr_check_alloc);
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 2d41d866de3e3..c6d6a6fe9602b 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -38,21 +38,26 @@ static struct workqueue_struct *act_ct_wq;
+ static struct rhashtable zones_ht;
+ static DEFINE_MUTEX(zones_mutex);
+ 
++struct zones_ht_key {
++	struct net *net;
++	u16 zone;
++};
++
+ struct tcf_ct_flow_table {
+ 	struct rhash_head node; /* In zones tables */
+ 
+ 	struct rcu_work rwork;
+ 	struct nf_flowtable nf_ft;
+ 	refcount_t ref;
+-	u16 zone;
++	struct zones_ht_key key;
+ 
+ 	bool dying;
+ };
+ 
+ static const struct rhashtable_params zones_params = {
+ 	.head_offset = offsetof(struct tcf_ct_flow_table, node),
+-	.key_offset = offsetof(struct tcf_ct_flow_table, zone),
+-	.key_len = sizeof_field(struct tcf_ct_flow_table, zone),
++	.key_offset = offsetof(struct tcf_ct_flow_table, key),
++	.key_len = sizeof_field(struct tcf_ct_flow_table, key),
+ 	.automatic_shrinking = true,
+ };
+ 
+@@ -275,13 +280,14 @@ static struct nf_flowtable_type flowtable_ct = {
+ 	.owner		= THIS_MODULE,
+ };
+ 
+-static int tcf_ct_flow_table_get(struct tcf_ct_params *params)
++static int tcf_ct_flow_table_get(struct net *net, struct tcf_ct_params *params)
+ {
++	struct zones_ht_key key = { .net = net, .zone = params->zone };
+ 	struct tcf_ct_flow_table *ct_ft;
+ 	int err = -ENOMEM;
+ 
+ 	mutex_lock(&zones_mutex);
+-	ct_ft = rhashtable_lookup_fast(&zones_ht, &params->zone, zones_params);
++	ct_ft = rhashtable_lookup_fast(&zones_ht, &key, zones_params);
+ 	if (ct_ft && refcount_inc_not_zero(&ct_ft->ref))
+ 		goto out_unlock;
+ 
+@@ -290,7 +296,7 @@ static int tcf_ct_flow_table_get(struct tcf_ct_params *params)
+ 		goto err_alloc;
+ 	refcount_set(&ct_ft->ref, 1);
+ 
+-	ct_ft->zone = params->zone;
++	ct_ft->key = key;
+ 	err = rhashtable_insert_fast(&zones_ht, &ct_ft->node, zones_params);
+ 	if (err)
+ 		goto err_insert;
+@@ -300,6 +306,7 @@ static int tcf_ct_flow_table_get(struct tcf_ct_params *params)
+ 	err = nf_flow_table_init(&ct_ft->nf_ft);
+ 	if (err)
+ 		goto err_init;
++	write_pnet(&ct_ft->nf_ft.net, net);
+ 
+ 	__module_get(THIS_MODULE);
+ out_unlock:
+@@ -1291,7 +1298,7 @@ static int tcf_ct_init(struct net *net, struct nlattr *nla,
+ 	if (err)
+ 		goto cleanup;
+ 
+-	err = tcf_ct_flow_table_get(params);
++	err = tcf_ct_flow_table_get(net, params);
+ 	if (err)
+ 		goto cleanup_params;
+ 
+diff --git a/net/sched/sch_multiq.c b/net/sched/sch_multiq.c
+index 1c6dbcfa89b87..77fd7be3a9cd1 100644
+--- a/net/sched/sch_multiq.c
++++ b/net/sched/sch_multiq.c
+@@ -185,7 +185,7 @@ static int multiq_tune(struct Qdisc *sch, struct nlattr *opt,
+ 
+ 	qopt->bands = qdisc_dev(sch)->real_num_tx_queues;
+ 
+-	removed = kmalloc(sizeof(*removed) * (q->max_bands - q->bands),
++	removed = kmalloc(sizeof(*removed) * (q->max_bands - qopt->bands),
+ 			  GFP_KERNEL);
+ 	if (!removed)
+ 		return -ENOMEM;
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 2d842f31ec5a8..ec6b24edf5f93 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -925,16 +925,13 @@ static int taprio_parse_mqprio_opt(struct net_device *dev,
+ {
+ 	int i, j;
+ 
+-	if (!qopt && !dev->num_tc) {
+-		NL_SET_ERR_MSG(extack, "'mqprio' configuration is necessary");
+-		return -EINVAL;
+-	}
+-
+-	/* If num_tc is already set, it means that the user already
+-	 * configured the mqprio part
+-	 */
+-	if (dev->num_tc)
++	if (!qopt) {
++		if (!dev->num_tc) {
++			NL_SET_ERR_MSG(extack, "'mqprio' configuration is necessary");
++			return -EINVAL;
++		}
+ 		return 0;
++	}
+ 
+ 	/* Verify num_tc is not out of max range */
+ 	if (qopt->num_tc > TC_MAX_QUEUE) {
+diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
+index 2ff66a6a7e54c..7ce4a6b7cfae6 100644
+--- a/net/sunrpc/auth_gss/auth_gss.c
++++ b/net/sunrpc/auth_gss/auth_gss.c
+@@ -1855,8 +1855,10 @@ gss_wrap_req_priv(struct rpc_cred *cred, struct gss_cl_ctx *ctx,
+ 	offset = (u8 *)p - (u8 *)snd_buf->head[0].iov_base;
+ 	maj_stat = gss_wrap(ctx->gc_gss_ctx, offset, snd_buf, inpages);
+ 	/* slack space should prevent this ever happening: */
+-	if (unlikely(snd_buf->len > snd_buf->buflen))
++	if (unlikely(snd_buf->len > snd_buf->buflen)) {
++		status = -EIO;
+ 		goto wrap_failed;
++	}
+ 	/* We're assuming that when GSS_S_CONTEXT_EXPIRED, the encryption was
+ 	 * done anyway, so it's safe to put the request on the wire: */
+ 	if (maj_stat == GSS_S_CONTEXT_EXPIRED)
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index 26d972c54a593..f8815ae776e68 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -845,7 +845,8 @@ void
+ svc_rqst_free(struct svc_rqst *rqstp)
+ {
+ 	svc_release_buffer(rqstp);
+-	put_page(rqstp->rq_scratch_page);
++	if (rqstp->rq_scratch_page)
++		put_page(rqstp->rq_scratch_page);
+ 	kfree(rqstp->rq_resp);
+ 	kfree(rqstp->rq_argp);
+ 	kfree(rqstp->rq_auth_data);
+@@ -1611,6 +1612,21 @@ u32 svc_max_payload(const struct svc_rqst *rqstp)
+ }
+ EXPORT_SYMBOL_GPL(svc_max_payload);
+ 
++/**
++ * svc_proc_name - Return RPC procedure name in string form
++ * @rqstp: svc_rqst to operate on
++ *
++ * Return value:
++ *   Pointer to a NUL-terminated string
++ */
++const char *svc_proc_name(const struct svc_rqst *rqstp)
++{
++	if (rqstp && rqstp->rq_procinfo)
++		return rqstp->rq_procinfo->pc_name;
++	return "unknown";
++}
++
++
+ /**
+  * svc_encode_result_payload - mark a range of bytes as a result payload
+  * @rqstp: svc_rqst to operate on
+diff --git a/net/tipc/node.c b/net/tipc/node.c
+index 9e3cfeb82a23d..5f6866407be54 100644
+--- a/net/tipc/node.c
++++ b/net/tipc/node.c
+@@ -2076,6 +2076,7 @@ void tipc_rcv(struct net *net, struct sk_buff *skb, struct tipc_bearer *b)
+ 	} else {
+ 		n = tipc_node_find_by_id(net, ehdr->id);
+ 	}
++	skb_dst_force(skb);
+ 	tipc_crypto_rcv(net, (n) ? n->crypto_rx : NULL, &skb, b);
+ 	if (!skb)
+ 		return;
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 405bf3e6eb796..e2ff610d27760 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -189,15 +189,9 @@ static inline int unix_may_send(struct sock *sk, struct sock *osk)
+ 	return unix_peer(osk) == NULL || unix_our_peer(sk, osk);
+ }
+ 
+-static inline int unix_recvq_full(const struct sock *sk)
+-{
+-	return skb_queue_len(&sk->sk_receive_queue) > sk->sk_max_ack_backlog;
+-}
+-
+ static inline int unix_recvq_full_lockless(const struct sock *sk)
+ {
+-	return skb_queue_len_lockless(&sk->sk_receive_queue) >
+-		READ_ONCE(sk->sk_max_ack_backlog);
++	return skb_queue_len_lockless(&sk->sk_receive_queue) > sk->sk_max_ack_backlog;
+ }
+ 
+ struct sock *unix_peer_get(struct sock *s)
+@@ -447,9 +441,9 @@ static int unix_dgram_peer_wake_me(struct sock *sk, struct sock *other)
+ 	return 0;
+ }
+ 
+-static int unix_writable(const struct sock *sk)
++static int unix_writable(const struct sock *sk, unsigned char state)
+ {
+-	return sk->sk_state != TCP_LISTEN &&
++	return state != TCP_LISTEN &&
+ 	       (refcount_read(&sk->sk_wmem_alloc) << 2) <= sk->sk_sndbuf;
+ }
+ 
+@@ -458,7 +452,7 @@ static void unix_write_space(struct sock *sk)
+ 	struct socket_wq *wq;
+ 
+ 	rcu_read_lock();
+-	if (unix_writable(sk)) {
++	if (unix_writable(sk, READ_ONCE(sk->sk_state))) {
+ 		wq = rcu_dereference(sk->sk_wq);
+ 		if (skwq_has_sleeper(wq))
+ 			wake_up_interruptible_sync_poll(&wq->wait,
+@@ -815,7 +809,7 @@ static struct sock *unix_create1(struct net *net, struct socket *sock, int kern)
+ 
+ 	sk->sk_allocation	= GFP_KERNEL_ACCOUNT;
+ 	sk->sk_write_space	= unix_write_space;
+-	sk->sk_max_ack_backlog	= net->unx.sysctl_max_dgram_qlen;
++	sk->sk_max_ack_backlog	= READ_ONCE(net->unx.sysctl_max_dgram_qlen);
+ 	sk->sk_destruct		= unix_sock_destructor;
+ 	u = unix_sk(sk);
+ 	u->inflight = 0;
+@@ -1310,7 +1304,7 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ 	if (other->sk_shutdown & RCV_SHUTDOWN)
+ 		goto out_unlock;
+ 
+-	if (unix_recvq_full(other)) {
++	if (unix_recvq_full_lockless(other)) {
+ 		err = -EAGAIN;
+ 		if (!timeo)
+ 			goto out_unlock;
+@@ -1909,7 +1903,7 @@ static int unix_stream_sendmsg(struct socket *sock, struct msghdr *msg,
+ 		goto out_err;
+ 
+ 	if (msg->msg_namelen) {
+-		err = sk->sk_state == TCP_ESTABLISHED ? -EISCONN : -EOPNOTSUPP;
++		err = READ_ONCE(sk->sk_state) == TCP_ESTABLISHED ? -EISCONN : -EOPNOTSUPP;
+ 		goto out_err;
+ 	} else {
+ 		err = -ENOTCONN;
+@@ -2112,7 +2106,7 @@ static int unix_seqpacket_sendmsg(struct socket *sock, struct msghdr *msg,
+ 	if (err)
+ 		return err;
+ 
+-	if (sk->sk_state != TCP_ESTABLISHED)
++	if (READ_ONCE(sk->sk_state) != TCP_ESTABLISHED)
+ 		return -ENOTCONN;
+ 
+ 	if (msg->msg_namelen)
+@@ -2126,7 +2120,7 @@ static int unix_seqpacket_recvmsg(struct socket *sock, struct msghdr *msg,
+ {
+ 	struct sock *sk = sock->sk;
+ 
+-	if (sk->sk_state != TCP_ESTABLISHED)
++	if (READ_ONCE(sk->sk_state) != TCP_ESTABLISHED)
+ 		return -ENOTCONN;
+ 
+ 	return unix_dgram_recvmsg(sock, msg, size, flags);
+@@ -2326,7 +2320,7 @@ static int unix_stream_read_generic(struct unix_stream_read_state *state,
+ 	size_t size = state->size;
+ 	unsigned int last_len;
+ 
+-	if (unlikely(sk->sk_state != TCP_ESTABLISHED)) {
++	if (unlikely(READ_ONCE(sk->sk_state) != TCP_ESTABLISHED)) {
+ 		err = -EINVAL;
+ 		goto out;
+ 	}
+@@ -2614,7 +2608,7 @@ long unix_inq_len(struct sock *sk)
+ 	struct sk_buff *skb;
+ 	long amount = 0;
+ 
+-	if (sk->sk_state == TCP_LISTEN)
++	if (READ_ONCE(sk->sk_state) == TCP_LISTEN)
+ 		return -EINVAL;
+ 
+ 	spin_lock(&sk->sk_receive_queue.lock);
+@@ -2713,12 +2707,14 @@ static int unix_compat_ioctl(struct socket *sock, unsigned int cmd, unsigned lon
+ static __poll_t unix_poll(struct file *file, struct socket *sock, poll_table *wait)
+ {
+ 	struct sock *sk = sock->sk;
++	unsigned char state;
+ 	__poll_t mask;
+ 	u8 shutdown;
+ 
+ 	sock_poll_wait(file, sock, wait);
+ 	mask = 0;
+ 	shutdown = READ_ONCE(sk->sk_shutdown);
++	state = READ_ONCE(sk->sk_state);
+ 
+ 	/* exceptional events? */
+ 	if (sk->sk_err)
+@@ -2734,14 +2730,14 @@ static __poll_t unix_poll(struct file *file, struct socket *sock, poll_table *wa
+ 
+ 	/* Connection-based need to check for termination and startup */
+ 	if ((sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) &&
+-	    sk->sk_state == TCP_CLOSE)
++	    state == TCP_CLOSE)
+ 		mask |= EPOLLHUP;
+ 
+ 	/*
+ 	 * we set writable also when the other side has shut down the
+ 	 * connection. This prevents stuck sockets.
+ 	 */
+-	if (unix_writable(sk))
++	if (unix_writable(sk, state))
+ 		mask |= EPOLLOUT | EPOLLWRNORM | EPOLLWRBAND;
+ 
+ 	return mask;
+@@ -2752,12 +2748,14 @@ static __poll_t unix_dgram_poll(struct file *file, struct socket *sock,
+ {
+ 	struct sock *sk = sock->sk, *other;
+ 	unsigned int writable;
++	unsigned char state;
+ 	__poll_t mask;
+ 	u8 shutdown;
+ 
+ 	sock_poll_wait(file, sock, wait);
+ 	mask = 0;
+ 	shutdown = READ_ONCE(sk->sk_shutdown);
++	state = READ_ONCE(sk->sk_state);
+ 
+ 	/* exceptional events? */
+ 	if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+@@ -2774,19 +2772,14 @@ static __poll_t unix_dgram_poll(struct file *file, struct socket *sock,
+ 		mask |= EPOLLIN | EPOLLRDNORM;
+ 
+ 	/* Connection-based need to check for termination and startup */
+-	if (sk->sk_type == SOCK_SEQPACKET) {
+-		if (sk->sk_state == TCP_CLOSE)
+-			mask |= EPOLLHUP;
+-		/* connection hasn't started yet? */
+-		if (sk->sk_state == TCP_SYN_SENT)
+-			return mask;
+-	}
++	if (sk->sk_type == SOCK_SEQPACKET && state == TCP_CLOSE)
++		mask |= EPOLLHUP;
+ 
+ 	/* No write status requested, avoid expensive OUT tests. */
+ 	if (!(poll_requested_events(wait) & (EPOLLWRBAND|EPOLLWRNORM|EPOLLOUT)))
+ 		return mask;
+ 
+-	writable = unix_writable(sk);
++	writable = unix_writable(sk, state);
+ 	if (writable) {
+ 		unix_state_lock(sk);
+ 
+diff --git a/net/unix/diag.c b/net/unix/diag.c
+index 2975e7a061d0b..7066a36234106 100644
+--- a/net/unix/diag.c
++++ b/net/unix/diag.c
+@@ -64,7 +64,7 @@ static int sk_diag_dump_icons(struct sock *sk, struct sk_buff *nlskb)
+ 	u32 *buf;
+ 	int i;
+ 
+-	if (sk->sk_state == TCP_LISTEN) {
++	if (READ_ONCE(sk->sk_state) == TCP_LISTEN) {
+ 		spin_lock(&sk->sk_receive_queue.lock);
+ 
+ 		attr = nla_reserve(nlskb, UNIX_DIAG_ICONS,
+@@ -102,8 +102,8 @@ static int sk_diag_show_rqlen(struct sock *sk, struct sk_buff *nlskb)
+ {
+ 	struct unix_diag_rqlen rql;
+ 
+-	if (sk->sk_state == TCP_LISTEN) {
+-		rql.udiag_rqueue = sk->sk_receive_queue.qlen;
++	if (READ_ONCE(sk->sk_state) == TCP_LISTEN) {
++		rql.udiag_rqueue = skb_queue_len_lockless(&sk->sk_receive_queue);
+ 		rql.udiag_wqueue = sk->sk_max_ack_backlog;
+ 	} else {
+ 		rql.udiag_rqueue = (u32) unix_inq_len(sk);
+@@ -135,7 +135,7 @@ static int sk_diag_fill(struct sock *sk, struct sk_buff *skb, struct unix_diag_r
+ 	rep = nlmsg_data(nlh);
+ 	rep->udiag_family = AF_UNIX;
+ 	rep->udiag_type = sk->sk_type;
+-	rep->udiag_state = sk->sk_state;
++	rep->udiag_state = READ_ONCE(sk->sk_state);
+ 	rep->pad = 0;
+ 	rep->udiag_ino = sk_ino;
+ 	sock_diag_save_cookie(sk, rep->udiag_cookie);
+@@ -164,7 +164,7 @@ static int sk_diag_fill(struct sock *sk, struct sk_buff *skb, struct unix_diag_r
+ 	    sock_diag_put_meminfo(sk, skb, UNIX_DIAG_MEMINFO))
+ 		goto out_nlmsg_trim;
+ 
+-	if (nla_put_u8(skb, UNIX_DIAG_SHUTDOWN, sk->sk_shutdown))
++	if (nla_put_u8(skb, UNIX_DIAG_SHUTDOWN, READ_ONCE(sk->sk_shutdown)))
+ 		goto out_nlmsg_trim;
+ 
+ 	if ((req->udiag_show & UDIAG_SHOW_UID) &&
+@@ -218,7 +218,7 @@ static int unix_diag_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ 				continue;
+ 			if (num < s_num)
+ 				goto next;
+-			if (!(req->udiag_states & (1 << sk->sk_state)))
++			if (!(req->udiag_states & (1 << READ_ONCE(sk->sk_state))))
+ 				goto next;
+ 			if (sk_diag_dump(sk, skb, req, sk_user_ns(skb->sk),
+ 					 NETLINK_CB(cb->skb).portid,
+diff --git a/net/wireless/pmsr.c b/net/wireless/pmsr.c
+index a817d8e3e4b36..7503c7dd71ab5 100644
+--- a/net/wireless/pmsr.c
++++ b/net/wireless/pmsr.c
+@@ -58,7 +58,7 @@ static int pmsr_parse_ftm(struct cfg80211_registered_device *rdev,
+ 	out->ftm.burst_period = 0;
+ 	if (tb[NL80211_PMSR_FTM_REQ_ATTR_BURST_PERIOD])
+ 		out->ftm.burst_period =
+-			nla_get_u32(tb[NL80211_PMSR_FTM_REQ_ATTR_BURST_PERIOD]);
++			nla_get_u16(tb[NL80211_PMSR_FTM_REQ_ATTR_BURST_PERIOD]);
+ 
+ 	out->ftm.asap = !!tb[NL80211_PMSR_FTM_REQ_ATTR_ASAP];
+ 	if (out->ftm.asap && !capa->ftm.asap) {
+@@ -77,7 +77,7 @@ static int pmsr_parse_ftm(struct cfg80211_registered_device *rdev,
+ 	out->ftm.num_bursts_exp = 0;
+ 	if (tb[NL80211_PMSR_FTM_REQ_ATTR_NUM_BURSTS_EXP])
+ 		out->ftm.num_bursts_exp =
+-			nla_get_u32(tb[NL80211_PMSR_FTM_REQ_ATTR_NUM_BURSTS_EXP]);
++			nla_get_u8(tb[NL80211_PMSR_FTM_REQ_ATTR_NUM_BURSTS_EXP]);
+ 
+ 	if (capa->ftm.max_bursts_exponent >= 0 &&
+ 	    out->ftm.num_bursts_exp > capa->ftm.max_bursts_exponent) {
+@@ -90,7 +90,7 @@ static int pmsr_parse_ftm(struct cfg80211_registered_device *rdev,
+ 	out->ftm.burst_duration = 15;
+ 	if (tb[NL80211_PMSR_FTM_REQ_ATTR_BURST_DURATION])
+ 		out->ftm.burst_duration =
+-			nla_get_u32(tb[NL80211_PMSR_FTM_REQ_ATTR_BURST_DURATION]);
++			nla_get_u8(tb[NL80211_PMSR_FTM_REQ_ATTR_BURST_DURATION]);
+ 
+ 	out->ftm.ftms_per_burst = 0;
+ 	if (tb[NL80211_PMSR_FTM_REQ_ATTR_FTMS_PER_BURST])
+@@ -109,7 +109,7 @@ static int pmsr_parse_ftm(struct cfg80211_registered_device *rdev,
+ 	out->ftm.ftmr_retries = 3;
+ 	if (tb[NL80211_PMSR_FTM_REQ_ATTR_NUM_FTMR_RETRIES])
+ 		out->ftm.ftmr_retries =
+-			nla_get_u32(tb[NL80211_PMSR_FTM_REQ_ATTR_NUM_FTMR_RETRIES]);
++			nla_get_u8(tb[NL80211_PMSR_FTM_REQ_ATTR_NUM_FTMR_RETRIES]);
+ 
+ 	out->ftm.request_lci = !!tb[NL80211_PMSR_FTM_REQ_ATTR_REQUEST_LCI];
+ 	if (out->ftm.request_lci && !capa->ftm.request_lci) {
+diff --git a/scripts/Makefile.dtbinst b/scripts/Makefile.dtbinst
+index 50d580d77ae92..e1ac7e40bbaa7 100644
+--- a/scripts/Makefile.dtbinst
++++ b/scripts/Makefile.dtbinst
+@@ -24,7 +24,7 @@ __dtbs_install: $(dtbs) $(subdirs)
+ 	@:
+ 
+ quiet_cmd_dtb_install = INSTALL $@
+-      cmd_dtb_install = install -D $< $@
++      cmd_dtb_install = install -D -m 0644 $< $@
+ 
+ $(dst)/%.dtb: $(obj)/%.dtb
+ 	$(call cmd,dtb_install)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 4af8094938059..5c5a144e707f0 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9380,6 +9380,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
+ 	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+ 	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
++	SND_PCI_QUIRK(0x1c6c, 0x122a, "Positivo N14AP7", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x1c6c, 0x1251, "Positivo N14KP6-TG", ALC288_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_SET_COEF_DEFAULTS),
+ 	SND_PCI_QUIRK(0x1d05, 0x1096, "TongFang GMxMRxx", ALC269_FIXUP_NO_SHUTUP),
+diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c
+index 9a756d0a60320..c876f111d8b03 100644
+--- a/sound/soc/fsl/fsl-asoc-card.c
++++ b/sound/soc/fsl/fsl-asoc-card.c
+@@ -538,6 +538,8 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ 	if (!priv)
+ 		return -ENOMEM;
+ 
++	priv->pdev = pdev;
++
+ 	cpu_np = of_parse_phandle(np, "audio-cpu", 0);
+ 	/* Give a chance to old DT binding */
+ 	if (!cpu_np)
+@@ -718,7 +720,6 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* Initialize sound card */
+-	priv->pdev = pdev;
+ 	priv->card.dev = &pdev->dev;
+ 	priv->card.owner = THIS_MODULE;
+ 	ret = snd_soc_of_parse_card_name(&priv->card, "model");
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index f36a0fda1b6ae..25bf73a7e7bfa 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -216,6 +216,15 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(SOF_SDW_PCH_DMIC),
+ 	},
++	{
++		.callback = sof_sdw_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "OMEN Transcend Gaming Laptop"),
++		},
++		.driver_data = (void *)(RT711_JD2),
++	},
++
+ 	/* LunarLake devices */
+ 	{
+ 		.callback = sof_sdw_quirk_cb,
+diff --git a/sound/synth/emux/soundfont.c b/sound/synth/emux/soundfont.c
+index 9ebc711afa6b5..18b2d84926563 100644
+--- a/sound/synth/emux/soundfont.c
++++ b/sound/synth/emux/soundfont.c
+@@ -697,7 +697,6 @@ load_data(struct snd_sf_list *sflist, const void __user *data, long count)
+ 	struct snd_soundfont *sf;
+ 	struct soundfont_sample_info sample_info;
+ 	struct snd_sf_sample *sp;
+-	long off;
+ 
+ 	/* patch must be opened */
+ 	if ((sf = sflist->currsf) == NULL)
+@@ -706,12 +705,16 @@ load_data(struct snd_sf_list *sflist, const void __user *data, long count)
+ 	if (is_special_type(sf->type))
+ 		return -EINVAL;
+ 
++	if (count < (long)sizeof(sample_info)) {
++		return -EINVAL;
++	}
+ 	if (copy_from_user(&sample_info, data, sizeof(sample_info)))
+ 		return -EFAULT;
++	data += sizeof(sample_info);
++	count -= sizeof(sample_info);
+ 
+-	off = sizeof(sample_info);
+-
+-	if (sample_info.size != (count-off)/2)
++	// SoundFont uses S16LE samples.
++	if (sample_info.size * 2 != count)
+ 		return -EINVAL;
+ 
+ 	/* Check for dup */
+@@ -738,7 +741,7 @@ load_data(struct snd_sf_list *sflist, const void __user *data, long count)
+ 		int  rc;
+ 		rc = sflist->callback.sample_new
+ 			(sflist->callback.private_data, sp, sflist->memhdr,
+-			 data + off, count - off);
++			 data, count);
+ 		if (rc < 0) {
+ 			sf_sample_delete(sflist, sf, sp);
+ 			return rc;
+@@ -951,10 +954,12 @@ load_guspatch(struct snd_sf_list *sflist, const char __user *data,
+ 	}
+ 	if (copy_from_user(&patch, data, sizeof(patch)))
+ 		return -EFAULT;
+-	
+ 	count -= sizeof(patch);
+ 	data += sizeof(patch);
+ 
++	if ((patch.len << (patch.mode & WAVE_16_BITS ? 1 : 0)) != count)
++		return -EINVAL;
++
+ 	sf = newsf(sflist, SNDRV_SFNT_PAT_TYPE_GUS|SNDRV_SFNT_PAT_SHARED, NULL);
+ 	if (sf == NULL)
+ 		return -ENOMEM;
+diff --git a/tools/include/asm-generic/hugetlb_encode.h b/tools/include/asm-generic/hugetlb_encode.h
+index e4732d3c29982..9d279fa4c36f2 100644
+--- a/tools/include/asm-generic/hugetlb_encode.h
++++ b/tools/include/asm-generic/hugetlb_encode.h
+@@ -20,15 +20,15 @@
+ #define HUGETLB_FLAG_ENCODE_SHIFT	26
+ #define HUGETLB_FLAG_ENCODE_MASK	0x3f
+ 
+-#define HUGETLB_FLAG_ENCODE_64KB	(16 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_512KB	(19 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_1MB		(20 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_2MB		(21 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_8MB		(23 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_16MB	(24 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_256MB	(28 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_1GB		(30 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_2GB		(31 << HUGETLB_FLAG_ENCODE_SHIFT)
+-#define HUGETLB_FLAG_ENCODE_16GB	(34 << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_64KB	(16U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_512KB	(19U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_1MB		(20U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_2MB		(21U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_8MB		(23U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_16MB	(24U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_256MB	(28U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_1GB		(30U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_2GB		(31U << HUGETLB_FLAG_ENCODE_SHIFT)
++#define HUGETLB_FLAG_ENCODE_16GB	(34U << HUGETLB_FLAG_ENCODE_SHIFT)
+ 
+ #endif /* _ASM_GENERIC_HUGETLB_ENCODE_H_ */
+diff --git a/tools/testing/selftests/arm64/tags/tags_test.c b/tools/testing/selftests/arm64/tags/tags_test.c
+index 5701163460ef7..955f87c1170d7 100644
+--- a/tools/testing/selftests/arm64/tags/tags_test.c
++++ b/tools/testing/selftests/arm64/tags/tags_test.c
+@@ -6,6 +6,7 @@
+ #include <stdint.h>
+ #include <sys/prctl.h>
+ #include <sys/utsname.h>
++#include "../../kselftest.h"
+ 
+ #define SHIFT_TAG(tag)		((uint64_t)(tag) << 56)
+ #define SET_TAG(ptr, tag)	(((uint64_t)(ptr) & ~SHIFT_TAG(0xff)) | \
+@@ -21,6 +22,9 @@ int main(void)
+ 	if (prctl(PR_SET_TAGGED_ADDR_CTRL, PR_TAGGED_ADDR_ENABLE, 0, 0, 0) == 0)
+ 		tbi_enabled = 1;
+ 	ptr = (struct utsname *)malloc(sizeof(*ptr));
++	if (!ptr)
++		ksft_exit_fail_msg("Failed to allocate utsname buffer\n");
++
+ 	if (tbi_enabled)
+ 		tag = 0x42;
+ 	ptr = (struct utsname *)SET_TAG(ptr, tag);
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf_map_in_map.c b/tools/testing/selftests/bpf/prog_tests/btf_map_in_map.c
+index 76ebe4c250f11..a434828bc7ab7 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf_map_in_map.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf_map_in_map.c
+@@ -58,7 +58,7 @@ static void test_lookup_update(void)
+ 	int map1_fd, map2_fd, map3_fd, map4_fd, map5_fd, map1_id, map2_id;
+ 	int outer_arr_fd, outer_hash_fd, outer_arr_dyn_fd;
+ 	struct test_btf_map_in_map *skel;
+-	int err, key = 0, val, i, fd;
++	int err, key = 0, val, i;
+ 
+ 	skel = test_btf_map_in_map__open_and_load();
+ 	if (CHECK(!skel, "skel_open", "failed to open&load skeleton\n"))
+@@ -135,30 +135,6 @@ static void test_lookup_update(void)
+ 	CHECK(map1_id == 0, "map1_id", "failed to get ID 1\n");
+ 	CHECK(map2_id == 0, "map2_id", "failed to get ID 2\n");
+ 
+-	test_btf_map_in_map__destroy(skel);
+-	skel = NULL;
+-
+-	/* we need to either wait for or force synchronize_rcu(), before
+-	 * checking for "still exists" condition, otherwise map could still be
+-	 * resolvable by ID, causing false positives.
+-	 *
+-	 * Older kernels (5.8 and earlier) freed map only after two
+-	 * synchronize_rcu()s, so trigger two, to be entirely sure.
+-	 */
+-	CHECK(kern_sync_rcu(), "sync_rcu", "failed\n");
+-	CHECK(kern_sync_rcu(), "sync_rcu", "failed\n");
+-
+-	fd = bpf_map_get_fd_by_id(map1_id);
+-	if (CHECK(fd >= 0, "map1_leak", "inner_map1 leaked!\n")) {
+-		close(fd);
+-		goto cleanup;
+-	}
+-	fd = bpf_map_get_fd_by_id(map2_id);
+-	if (CHECK(fd >= 0, "map2_leak", "inner_map2 leaked!\n")) {
+-		close(fd);
+-		goto cleanup;
+-	}
+-
+ cleanup:
+ 	test_btf_map_in_map__destroy(skel);
+ }
+diff --git a/tools/testing/selftests/bpf/test_tc_tunnel.sh b/tools/testing/selftests/bpf/test_tc_tunnel.sh
+index 7c76b841b17bb..21bde60c95230 100755
+--- a/tools/testing/selftests/bpf/test_tc_tunnel.sh
++++ b/tools/testing/selftests/bpf/test_tc_tunnel.sh
+@@ -71,7 +71,6 @@ cleanup() {
+ server_listen() {
+ 	ip netns exec "${ns2}" nc "${netcat_opt}" -l -p "${port}" > "${outfile}" &
+ 	server_pid=$!
+-	sleep 0.2
+ }
+ 
+ client_connect() {
+@@ -92,6 +91,16 @@ verify_data() {
+ 	fi
+ }
+ 
++wait_for_port() {
++	for i in $(seq 20); do
++		if ip netns exec "${ns2}" ss ${2:--4}OHntl | grep -q "$1"; then
++			return 0
++		fi
++		sleep 0.1
++	done
++	return 1
++}
++
+ set -e
+ 
+ # no arguments: automated test, run all
+@@ -183,6 +192,7 @@ setup
+ # basic communication works
+ echo "test basic connectivity"
+ server_listen
++wait_for_port ${port} ${netcat_opt}
+ client_connect
+ verify_data
+ 
+@@ -194,6 +204,7 @@ ip netns exec "${ns1}" tc filter add dev veth1 egress \
+ 	section "encap_${tuntype}_${mac}"
+ echo "test bpf encap without decap (expect failure)"
+ server_listen
++wait_for_port ${port} ${netcat_opt}
+ ! client_connect
+ 
+ if [[ "$tuntype" =~ "udp" ]]; then
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_eventname.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_eventname.tc
+index 1f6981ef7afa0..ba19b81cef39a 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_eventname.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_eventname.tc
+@@ -30,7 +30,8 @@ find_dot_func() {
+ 	fi
+ 
+ 	grep " [tT] .*\.isra\..*" /proc/kallsyms | cut -f 3 -d " " | while read f; do
+-		if grep -s $f available_filter_functions; then
++		cnt=`grep -s $f available_filter_functions | wc -l`;
++		if [ $cnt -eq 1 ]; then
+ 			echo $f
+ 			break
+ 		fi
+diff --git a/tools/testing/selftests/vm/compaction_test.c b/tools/testing/selftests/vm/compaction_test.c
+index 9b420140ba2ba..7c260060a1a6b 100644
+--- a/tools/testing/selftests/vm/compaction_test.c
++++ b/tools/testing/selftests/vm/compaction_test.c
+@@ -33,7 +33,7 @@ int read_memory_info(unsigned long *memfree, unsigned long *hugepagesize)
+ 	FILE *cmdfile = popen(cmd, "r");
+ 
+ 	if (!(fgets(buffer, sizeof(buffer), cmdfile))) {
+-		perror("Failed to read meminfo\n");
++		ksft_print_msg("Failed to read meminfo: %s\n", strerror(errno));
+ 		return -1;
+ 	}
+ 
+@@ -44,7 +44,7 @@ int read_memory_info(unsigned long *memfree, unsigned long *hugepagesize)
+ 	cmdfile = popen(cmd, "r");
+ 
+ 	if (!(fgets(buffer, sizeof(buffer), cmdfile))) {
+-		perror("Failed to read meminfo\n");
++		ksft_print_msg("Failed to read meminfo: %s\n", strerror(errno));
+ 		return -1;
+ 	}
+ 
+@@ -62,14 +62,14 @@ int prereq(void)
+ 	fd = open("/proc/sys/vm/compact_unevictable_allowed",
+ 		  O_RDONLY | O_NONBLOCK);
+ 	if (fd < 0) {
+-		perror("Failed to open\n"
+-		       "/proc/sys/vm/compact_unevictable_allowed\n");
++		ksft_print_msg("Failed to open /proc/sys/vm/compact_unevictable_allowed: %s\n",
++			       strerror(errno));
+ 		return -1;
+ 	}
+ 
+ 	if (read(fd, &allowed, sizeof(char)) != sizeof(char)) {
+-		perror("Failed to read from\n"
+-		       "/proc/sys/vm/compact_unevictable_allowed\n");
++		ksft_print_msg("Failed to read from /proc/sys/vm/compact_unevictable_allowed: %s\n",
++			       strerror(errno));
+ 		close(fd);
+ 		return -1;
+ 	}
+@@ -78,15 +78,17 @@ int prereq(void)
+ 	if (allowed == '1')
+ 		return 0;
+ 
++	ksft_print_msg("Compaction isn't allowed\n");
+ 	return -1;
+ }
+ 
+-int check_compaction(unsigned long mem_free, unsigned int hugepage_size)
++int check_compaction(unsigned long mem_free, unsigned long hugepage_size)
+ {
+-	int fd;
++	unsigned long nr_hugepages_ul;
++	int fd, ret = -1;
+ 	int compaction_index = 0;
+-	char initial_nr_hugepages[10] = {0};
+-	char nr_hugepages[10] = {0};
++	char initial_nr_hugepages[20] = {0};
++	char nr_hugepages[20] = {0};
+ 
+ 	/* We want to test with 80% of available memory. Else, OOM killer comes
+ 	   in to play */
+@@ -94,18 +96,23 @@ int check_compaction(unsigned long mem_free, unsigned int hugepage_size)
+ 
+ 	fd = open("/proc/sys/vm/nr_hugepages", O_RDWR | O_NONBLOCK);
+ 	if (fd < 0) {
+-		perror("Failed to open /proc/sys/vm/nr_hugepages");
++		ksft_test_result_fail("Failed to open /proc/sys/vm/nr_hugepages: %s\n",
++				      strerror(errno));
+ 		return -1;
+ 	}
+ 
+ 	if (read(fd, initial_nr_hugepages, sizeof(initial_nr_hugepages)) <= 0) {
+-		perror("Failed to read from /proc/sys/vm/nr_hugepages");
++		ksft_test_result_fail("Failed to read from /proc/sys/vm/nr_hugepages: %s\n",
++				      strerror(errno));
+ 		goto close_fd;
+ 	}
+ 
++	lseek(fd, 0, SEEK_SET);
++
+ 	/* Start with the initial condition of 0 huge pages*/
+ 	if (write(fd, "0", sizeof(char)) != sizeof(char)) {
+-		perror("Failed to write 0 to /proc/sys/vm/nr_hugepages\n");
++		ksft_test_result_fail("Failed to write 0 to /proc/sys/vm/nr_hugepages: %s\n",
++				      strerror(errno));
+ 		goto close_fd;
+ 	}
+ 
+@@ -114,82 +121,80 @@ int check_compaction(unsigned long mem_free, unsigned int hugepage_size)
+ 	/* Request a large number of huge pages. The Kernel will allocate
+ 	   as much as it can */
+ 	if (write(fd, "100000", (6*sizeof(char))) != (6*sizeof(char))) {
+-		perror("Failed to write 100000 to /proc/sys/vm/nr_hugepages\n");
++		ksft_test_result_fail("Failed to write 100000 to /proc/sys/vm/nr_hugepages: %s\n",
++				      strerror(errno));
+ 		goto close_fd;
+ 	}
+ 
+ 	lseek(fd, 0, SEEK_SET);
+ 
+ 	if (read(fd, nr_hugepages, sizeof(nr_hugepages)) <= 0) {
+-		perror("Failed to re-read from /proc/sys/vm/nr_hugepages\n");
++		ksft_test_result_fail("Failed to re-read from /proc/sys/vm/nr_hugepages: %s\n",
++				      strerror(errno));
+ 		goto close_fd;
+ 	}
+ 
+ 	/* We should have been able to request at least 1/3 rd of the memory in
+ 	   huge pages */
+-	compaction_index = mem_free/(atoi(nr_hugepages) * hugepage_size);
+-
+-	if (compaction_index > 3) {
+-		printf("No of huge pages allocated = %d\n",
+-		       (atoi(nr_hugepages)));
+-		fprintf(stderr, "ERROR: Less that 1/%d of memory is available\n"
+-			"as huge pages\n", compaction_index);
++	nr_hugepages_ul = strtoul(nr_hugepages, NULL, 10);
++	if (!nr_hugepages_ul) {
++		ksft_print_msg("ERROR: No memory is available as huge pages\n");
+ 		goto close_fd;
+ 	}
+-
+-	printf("No of huge pages allocated = %d\n",
+-	       (atoi(nr_hugepages)));
++	compaction_index = mem_free/(nr_hugepages_ul * hugepage_size);
+ 
+ 	lseek(fd, 0, SEEK_SET);
+ 
+ 	if (write(fd, initial_nr_hugepages, strlen(initial_nr_hugepages))
+ 	    != strlen(initial_nr_hugepages)) {
+-		perror("Failed to write value to /proc/sys/vm/nr_hugepages\n");
++		ksft_test_result_fail("Failed to write value to /proc/sys/vm/nr_hugepages: %s\n",
++				      strerror(errno));
+ 		goto close_fd;
+ 	}
+ 
+-	close(fd);
+-	return 0;
++	if (compaction_index > 3) {
++		ksft_print_msg("ERROR: Less than 1/%d of memory is available\n"
++			       "as huge pages\n", compaction_index);
++		ksft_test_result_fail("No of huge pages allocated = %d\n", (atoi(nr_hugepages)));
++		goto close_fd;
++	}
++
++	ksft_test_result_pass("Memory compaction succeeded. No of huge pages allocated = %d\n",
++			      (atoi(nr_hugepages)));
++	ret = 0;
+ 
+  close_fd:
+ 	close(fd);
+-	printf("Not OK. Compaction test failed.");
+-	return -1;
++	return ret;
+ }
+ 
+ 
+ int main(int argc, char **argv)
+ {
+ 	struct rlimit lim;
+-	struct map_list *list, *entry;
++	struct map_list *list = NULL, *entry;
+ 	size_t page_size, i;
+ 	void *map = NULL;
+ 	unsigned long mem_free = 0;
+ 	unsigned long hugepage_size = 0;
+ 	long mem_fragmentable_MB = 0;
+ 
+-	if (prereq() != 0) {
+-		printf("Either the sysctl compact_unevictable_allowed is not\n"
+-		       "set to 1 or couldn't read the proc file.\n"
+-		       "Skipping the test\n");
+-		return KSFT_SKIP;
+-	}
++	ksft_print_header();
++
++	if (prereq() != 0)
++		return ksft_exit_pass();
++
++	ksft_set_plan(1);
+ 
+ 	lim.rlim_cur = RLIM_INFINITY;
+ 	lim.rlim_max = RLIM_INFINITY;
+-	if (setrlimit(RLIMIT_MEMLOCK, &lim)) {
+-		perror("Failed to set rlimit:\n");
+-		return -1;
+-	}
++	if (setrlimit(RLIMIT_MEMLOCK, &lim))
++		ksft_exit_fail_msg("Failed to set rlimit: %s\n", strerror(errno));
+ 
+ 	page_size = getpagesize();
+ 
+-	list = NULL;
+-
+-	if (read_memory_info(&mem_free, &hugepage_size) != 0) {
+-		printf("ERROR: Cannot read meminfo\n");
+-		return -1;
+-	}
++	if (read_memory_info(&mem_free, &hugepage_size) != 0)
++		ksft_exit_fail_msg("Failed to get meminfo\n");
+ 
+ 	mem_fragmentable_MB = mem_free * 0.8 / 1024;
+ 
+@@ -225,7 +230,7 @@ int main(int argc, char **argv)
+ 	}
+ 
+ 	if (check_compaction(mem_free, hugepage_size) == 0)
+-		return 0;
++		return ksft_exit_pass();
+ 
+-	return -1;
++	return ksft_exit_fail();
+ }


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-07-05 10:53 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-07-05 10:53 UTC (permalink / raw
  To: gentoo-commits

commit:     5684c398d17a6587216f663d94727f3f69f36262
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jul  5 10:53:17 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jul  5 10:53:17 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5684c398

Remove redundant patch

Removed:
2930_tar_override.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 2930_tar_override.patch | 63 -------------------------------------------------
 1 file changed, 63 deletions(-)

diff --git a/2930_tar_override.patch b/2930_tar_override.patch
deleted file mode 100644
index babfa83a..00000000
--- a/2930_tar_override.patch
+++ /dev/null
@@ -1,63 +0,0 @@
-From: "Michał Górny" <mgorny@gentoo.org>
-To: Dmitry Goldin <dgoldin+lkml@protonmail.ch>
-Cc: "Masahiro Yamada" <yamada.masahiro@socionext.com>,
-	linux-kernel@vger.kernel.org, "Michał Górny" <mgorny@gentoo.org>,
-	"Sam James" <sam@gentoo.org>,
-	"Masahiro Yamada" <masahiroy@kernel.org>
-Subject: [PATCH v2] kheaders: make it possible to override TAR
-Date: Wed, 12 Apr 2023 10:27:43 +0200	[thread overview]
-Message-ID: <20230412082743.350699-1-mgorny@gentoo.org> (raw)
-In-Reply-To: <CAK7LNATfrxu7BK0ZRq+qSjObiz6GpS3U5L=12vDys5_yy=Mdow@mail.gmail.com>
-
-Commit 86cdd2fdc4e39c388d39c7ba2396d1a9dfd66226 ("kheaders: make headers
-archive reproducible") introduced a number of options specific to GNU
-tar to the `tar` invocation in `gen_kheaders.sh` script.  This causes
-the script to fail to work on systems where `tar` is not GNU tar.  This
-can occur e.g. on recent Gentoo Linux installations that support using
-bsdtar from libarchive instead.
-
-Add a `TAR` make variable to make it possible to override the tar
-executable used, e.g. by specifying:
-
-  make TAR=gtar
-
-Link: https://bugs.gentoo.org/884061
-Reported-by: Sam James <sam@gentoo.org>
-Tested-by: Sam James <sam@gentoo.org>
-Co-developed-by: Masahiro Yamada <masahiroy@kernel.org>
-Signed-off-by: Michał Górny <mgorny@gentoo.org>
----
---- a/Makefile	2023-10-18 16:13:06.496343048 -0400
-+++ b/Makefile	2023-10-18 16:14:00.136587613 -0400
-@@ -471,6 +471,7 @@ LZMA		= lzma
- LZ4		= lz4c
- XZ		= xz
- ZSTD		= zstd
-+TAR    = tar
- 
- PAHOLE_FLAGS	= $(shell PAHOLE=$(PAHOLE) $(srctree)/scripts/pahole-flags.sh)
- 
-@@ -519,7 +520,7 @@ CLANG_FLAGS :=
- export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC
- export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AWK INSTALLKERNEL
- export PERL PYTHON PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
--export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
-+export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD TAR
- export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE
- 
- export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
-diff --git a/kernel/gen_kheaders.sh b/kernel/gen_kheaders.sh
-index 1ef9a8751..82d539648 100755
---- a/kernel/gen_kheaders.sh
-+++ b/kernel/gen_kheaders.sh
-@@ -86,7 +86,7 @@ find $cpio_dir -type f -print0 |
- # For compatibility with older versions of tar, files are fed to tar
- # pre-sorted, as --sort=name might not be available.
- find $cpio_dir -printf "./%P\n" | LC_ALL=C sort | \
--    tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
-+    ${TAR:-tar} "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
-     --owner=0 --group=0 --numeric-owner --no-recursion \
-     -I $XZ -cf $tarfile -C $cpio_dir/ -T - > /dev/null
- 
--- 
-2.40.0


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-07-18 12:17 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-07-18 12:17 UTC (permalink / raw
  To: gentoo-commits

commit:     b132765e3a1b8baf4aaf7f150b321432ad4938fb
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jul 18 12:17:01 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jul 18 12:17:01 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b132765e

Linux patch 5.10.222

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1221_linux-5.10.222.patch | 4072 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4076 insertions(+)

diff --git a/0000_README b/0000_README
index a8ab8c34..4ec64a79 100644
--- a/0000_README
+++ b/0000_README
@@ -927,6 +927,10 @@ Patch:  1220_linux-5.10.221.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.221
 
+Patch:  1221_linux-5.10.222.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.222
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1221_linux-5.10.222.patch b/1221_linux-5.10.222.patch
new file mode 100644
index 00000000..330516e8
--- /dev/null
+++ b/1221_linux-5.10.222.patch
@@ -0,0 +1,4072 @@
+diff --git a/Makefile b/Makefile
+index b0e22161cd553..5b6dae61250c4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 221
++SUBLEVEL = 222
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/mach-davinci/pm.c b/arch/arm/mach-davinci/pm.c
+index 323ee4e657c45..94d7d69b9db7c 100644
+--- a/arch/arm/mach-davinci/pm.c
++++ b/arch/arm/mach-davinci/pm.c
+@@ -62,7 +62,7 @@ static void davinci_pm_suspend(void)
+ 
+ 	/* Configure sleep count in deep sleep register */
+ 	val = __raw_readl(pm_config.deepsleep_reg);
+-	val &= ~DEEPSLEEP_SLEEPCOUNT_MASK,
++	val &= ~DEEPSLEEP_SLEEPCOUNT_MASK;
+ 	val |= pm_config.sleepcount;
+ 	__raw_writel(val, pm_config.deepsleep_reg);
+ 
+diff --git a/arch/ia64/include/asm/efi.h b/arch/ia64/include/asm/efi.h
+new file mode 100644
+index 0000000000000..6a4a50d8f19a5
+--- /dev/null
++++ b/arch/ia64/include/asm/efi.h
+@@ -0,0 +1,13 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_EFI_H
++#define _ASM_EFI_H
++
++typedef int (*efi_freemem_callback_t) (u64 start, u64 end, void *arg);
++
++void *efi_get_pal_addr(void);
++void efi_map_pal_code(void);
++void efi_memmap_walk(efi_freemem_callback_t, void *);
++void efi_memmap_walk_uc(efi_freemem_callback_t, void *);
++void efi_gettimeofday(struct timespec64 *ts);
++
++#endif
+diff --git a/arch/ia64/kernel/efi.c b/arch/ia64/kernel/efi.c
+index 33282f33466e7..4707c5ee6692a 100644
+--- a/arch/ia64/kernel/efi.c
++++ b/arch/ia64/kernel/efi.c
+@@ -34,6 +34,7 @@
+ #include <linux/kexec.h>
+ #include <linux/mm.h>
+ 
++#include <asm/efi.h>
+ #include <asm/io.h>
+ #include <asm/kregs.h>
+ #include <asm/meminit.h>
+diff --git a/arch/ia64/kernel/machine_kexec.c b/arch/ia64/kernel/machine_kexec.c
+index efc9b568401c8..af310dc8a356b 100644
+--- a/arch/ia64/kernel/machine_kexec.c
++++ b/arch/ia64/kernel/machine_kexec.c
+@@ -16,6 +16,7 @@
+ #include <linux/numa.h>
+ #include <linux/mmzone.h>
+ 
++#include <asm/efi.h>
+ #include <asm/numa.h>
+ #include <asm/mmu_context.h>
+ #include <asm/setup.h>
+diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c
+index bd0a51dc345af..d9f51f0b33a75 100644
+--- a/arch/ia64/kernel/mca.c
++++ b/arch/ia64/kernel/mca.c
+@@ -91,6 +91,7 @@
+ #include <linux/gfp.h>
+ 
+ #include <asm/delay.h>
++#include <asm/efi.h>
+ #include <asm/meminit.h>
+ #include <asm/page.h>
+ #include <asm/ptrace.h>
+diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
+index 0cad990385c04..d10f780c13b9e 100644
+--- a/arch/ia64/kernel/smpboot.c
++++ b/arch/ia64/kernel/smpboot.c
+@@ -45,6 +45,7 @@
+ #include <asm/cache.h>
+ #include <asm/current.h>
+ #include <asm/delay.h>
++#include <asm/efi.h>
+ #include <asm/io.h>
+ #include <asm/irq.h>
+ #include <asm/mca.h>
+diff --git a/arch/ia64/kernel/time.c b/arch/ia64/kernel/time.c
+index 7abc5f37bfaf9..b53d4688d5a7e 100644
+--- a/arch/ia64/kernel/time.c
++++ b/arch/ia64/kernel/time.c
+@@ -26,6 +26,7 @@
+ #include <linux/sched/cputime.h>
+ 
+ #include <asm/delay.h>
++#include <asm/efi.h>
+ #include <asm/hw_irq.h>
+ #include <asm/ptrace.h>
+ #include <asm/sal.h>
+diff --git a/arch/ia64/kernel/uncached.c b/arch/ia64/kernel/uncached.c
+index 0750f367837d2..51883a66aeb58 100644
+--- a/arch/ia64/kernel/uncached.c
++++ b/arch/ia64/kernel/uncached.c
+@@ -20,14 +20,12 @@
+ #include <linux/genalloc.h>
+ #include <linux/gfp.h>
+ #include <linux/pgtable.h>
++#include <asm/efi.h>
+ #include <asm/page.h>
+ #include <asm/pal.h>
+ #include <linux/atomic.h>
+ #include <asm/tlbflush.h>
+ 
+-
+-extern void __init efi_memmap_walk_uc(efi_freemem_callback_t, void *);
+-
+ struct uncached_pool {
+ 	struct gen_pool *pool;
+ 	struct mutex add_chunk_mutex;	/* serialize adding a converted chunk */
+diff --git a/arch/ia64/mm/contig.c b/arch/ia64/mm/contig.c
+index c638e012ad051..11c82d4d4f7c2 100644
+--- a/arch/ia64/mm/contig.c
++++ b/arch/ia64/mm/contig.c
+@@ -20,6 +20,7 @@
+ #include <linux/nmi.h>
+ #include <linux/swap.h>
+ 
++#include <asm/efi.h>
+ #include <asm/meminit.h>
+ #include <asm/sections.h>
+ #include <asm/mca.h>
+diff --git a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c
+index 4d08134190134..fc3c77fb25975 100644
+--- a/arch/ia64/mm/discontig.c
++++ b/arch/ia64/mm/discontig.c
+@@ -24,6 +24,7 @@
+ #include <linux/efi.h>
+ #include <linux/nodemask.h>
+ #include <linux/slab.h>
++#include <asm/efi.h>
+ #include <asm/tlb.h>
+ #include <asm/meminit.h>
+ #include <asm/numa.h>
+diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
+index f316a833b7033..85e4e9ef027a0 100644
+--- a/arch/ia64/mm/init.c
++++ b/arch/ia64/mm/init.c
+@@ -27,6 +27,7 @@
+ #include <linux/swiotlb.h>
+ 
+ #include <asm/dma.h>
++#include <asm/efi.h>
+ #include <asm/io.h>
+ #include <asm/numa.h>
+ #include <asm/patch.h>
+diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
+index 058d21f493fad..c6b56aa0334fd 100644
+--- a/arch/powerpc/include/asm/io.h
++++ b/arch/powerpc/include/asm/io.h
+@@ -45,7 +45,7 @@ extern struct pci_dev *isa_bridge_pcidev;
+  * define properly based on the platform
+  */
+ #ifndef CONFIG_PCI
+-#define _IO_BASE	0
++#define _IO_BASE	POISON_POINTER_DELTA
+ #define _ISA_MEM_BASE	0
+ #define PCI_DRAM_OFFSET 0
+ #elif defined(CONFIG_PPC32)
+diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
+index 3de2adc0a8074..a2883360d07c9 100644
+--- a/arch/powerpc/xmon/xmon.c
++++ b/arch/powerpc/xmon/xmon.c
+@@ -1249,7 +1249,7 @@ static int cpu_cmd(void)
+ 	unsigned long cpu, first_cpu, last_cpu;
+ 	int timeout;
+ 
+-	if (!scanhex(&cpu)) {
++	if (!scanhex(&cpu) || cpu >= num_possible_cpus()) {
+ 		/* print cpus waiting or in xmon */
+ 		printf("cpus stopped:");
+ 		last_cpu = first_cpu = NR_CPUS;
+@@ -2680,7 +2680,7 @@ static void dump_pacas(void)
+ 
+ 	termch = c;	/* Put c back, it wasn't 'a' */
+ 
+-	if (scanhex(&num))
++	if (scanhex(&num) && num < num_possible_cpus())
+ 		dump_one_paca(num);
+ 	else
+ 		dump_one_paca(xmon_owner);
+@@ -2777,7 +2777,7 @@ static void dump_xives(void)
+ 
+ 	termch = c;	/* Put c back, it wasn't 'a' */
+ 
+-	if (scanhex(&num))
++	if (scanhex(&num) && num < num_possible_cpus())
+ 		dump_one_xive(num);
+ 	else
+ 		dump_one_xive(xmon_owner);
+diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
+index 0987c3fc45f58..95ed01a3536c6 100644
+--- a/arch/s390/include/asm/processor.h
++++ b/arch/s390/include/asm/processor.h
+@@ -252,8 +252,8 @@ static inline void __load_psw(psw_t psw)
+  */
+ static __always_inline void __load_psw_mask(unsigned long mask)
+ {
++	psw_t psw __uninitialized;
+ 	unsigned long addr;
+-	psw_t psw;
+ 
+ 	psw.mask = mask;
+ 
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index ab9b047790dd0..d1902213a0d63 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -105,6 +105,7 @@ __EXPORT_THUNK(srso_alias_untrain_ret)
+ /* dummy definition for alternatives */
+ SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+ 	ANNOTATE_UNRET_SAFE
++	ANNOTATE_NOENDBR
+ 	ret
+ 	int3
+ SYM_FUNC_END(srso_alias_untrain_ret)
+@@ -258,7 +259,6 @@ SYM_CODE_START(__x86_return_thunk)
+ 	UNWIND_HINT_FUNC
+ 	ANNOTATE_NOENDBR
+ 	ANNOTATE_UNRET_SAFE
+-	ANNOTATE_NOENDBR
+ 	ret
+ 	int3
+ SYM_CODE_END(__x86_return_thunk)
+diff --git a/crypto/aead.c b/crypto/aead.c
+index 16991095270d2..c4ece86c45bc4 100644
+--- a/crypto/aead.c
++++ b/crypto/aead.c
+@@ -35,8 +35,7 @@ static int setkey_unaligned(struct crypto_aead *tfm, const u8 *key,
+ 	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
+ 	memcpy(alignbuffer, key, keylen);
+ 	ret = crypto_aead_alg(tfm)->setkey(tfm, alignbuffer, keylen);
+-	memset(alignbuffer, 0, keylen);
+-	kfree(buffer);
++	kfree_sensitive(buffer);
+ 	return ret;
+ }
+ 
+diff --git a/crypto/cipher.c b/crypto/cipher.c
+index fd78150deb1c1..72c5606cc7f81 100644
+--- a/crypto/cipher.c
++++ b/crypto/cipher.c
+@@ -33,8 +33,7 @@ static int setkey_unaligned(struct crypto_cipher *tfm, const u8 *key,
+ 	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
+ 	memcpy(alignbuffer, key, keylen);
+ 	ret = cia->cia_setkey(crypto_cipher_tfm(tfm), alignbuffer, keylen);
+-	memset(alignbuffer, 0, keylen);
+-	kfree(buffer);
++	kfree_sensitive(buffer);
+ 	return ret;
+ 
+ }
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 6e0c0762fbabf..24cfc552e6a8c 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -2076,15 +2076,27 @@ static void qca_serdev_shutdown(struct device *dev)
+ 	struct qca_serdev *qcadev = serdev_device_get_drvdata(serdev);
+ 	struct hci_uart *hu = &qcadev->serdev_hu;
+ 	struct hci_dev *hdev = hu->hdev;
+-	struct qca_data *qca = hu->priv;
+ 	const u8 ibs_wake_cmd[] = { 0xFD };
+ 	const u8 edl_reset_soc_cmd[] = { 0x01, 0x00, 0xFC, 0x01, 0x05 };
+ 
+ 	if (qcadev->btsoc_type == QCA_QCA6390) {
+-		if (test_bit(QCA_BT_OFF, &qca->flags) ||
+-		    !test_bit(HCI_RUNNING, &hdev->flags))
++		/* The purpose of sending the VSC is to reset SOC into a initial
++		 * state and the state will ensure next hdev->setup() success.
++		 * if HCI_QUIRK_NON_PERSISTENT_SETUP is set, it means that
++		 * hdev->setup() can do its job regardless of SoC state, so
++		 * don't need to send the VSC.
++		 * if HCI_SETUP is set, it means that hdev->setup() was never
++		 * invoked and the SOC is already in the initial state, so
++		 * don't also need to send the VSC.
++		 */
++		if (test_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks) ||
++		    hci_dev_test_flag(hdev, HCI_SETUP))
+ 			return;
+ 
++		/* The serdev must be in open state when conrol logic arrives
++		 * here, so also fix the use-after-free issue caused by that
++		 * the serdev is flushed or wrote after it is closed.
++		 */
+ 		serdev_device_write_flush(serdev);
+ 		ret = serdev_device_write_buf(serdev, ibs_wake_cmd,
+ 					      sizeof(ibs_wake_cmd));
+diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
+index 8b55085650ad0..900d1da0075e8 100644
+--- a/drivers/char/hpet.c
++++ b/drivers/char/hpet.c
+@@ -304,8 +304,13 @@ hpet_read(struct file *file, char __user *buf, size_t count, loff_t * ppos)
+ 	if (!devp->hd_ireqfreq)
+ 		return -EIO;
+ 
+-	if (count < sizeof(unsigned long))
+-		return -EINVAL;
++	if (in_compat_syscall()) {
++		if (count < sizeof(compat_ulong_t))
++			return -EINVAL;
++	} else {
++		if (count < sizeof(unsigned long))
++			return -EINVAL;
++	}
+ 
+ 	add_wait_queue(&devp->hd_waitqueue, &wait);
+ 
+@@ -329,9 +334,16 @@ hpet_read(struct file *file, char __user *buf, size_t count, loff_t * ppos)
+ 		schedule();
+ 	}
+ 
+-	retval = put_user(data, (unsigned long __user *)buf);
+-	if (!retval)
+-		retval = sizeof(unsigned long);
++	if (in_compat_syscall()) {
++		retval = put_user(data, (compat_ulong_t __user *)buf);
++		if (!retval)
++			retval = sizeof(compat_ulong_t);
++	} else {
++		retval = put_user(data, (unsigned long __user *)buf);
++		if (!retval)
++			retval = sizeof(unsigned long);
++	}
++
+ out:
+ 	__set_current_state(TASK_RUNNING);
+ 	remove_wait_queue(&devp->hd_waitqueue, &wait);
+@@ -686,12 +698,24 @@ struct compat_hpet_info {
+ 	unsigned short hi_timer;
+ };
+ 
++/* 32-bit types would lead to different command codes which should be
++ * translated into 64-bit ones before passed to hpet_ioctl_common
++ */
++#define COMPAT_HPET_INFO       _IOR('h', 0x03, struct compat_hpet_info)
++#define COMPAT_HPET_IRQFREQ    _IOW('h', 0x6, compat_ulong_t)
++
+ static long
+ hpet_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ {
+ 	struct hpet_info info;
+ 	int err;
+ 
++	if (cmd == COMPAT_HPET_INFO)
++		cmd = HPET_INFO;
++
++	if (cmd == COMPAT_HPET_IRQFREQ)
++		cmd = HPET_IRQFREQ;
++
+ 	mutex_lock(&hpet_mutex);
+ 	err = hpet_ioctl_common(file->private_data, cmd, arg, &info);
+ 	mutex_unlock(&hpet_mutex);
+diff --git a/drivers/firmware/dmi_scan.c b/drivers/firmware/dmi_scan.c
+index d51ca0428bb82..ded0878dc3b63 100644
+--- a/drivers/firmware/dmi_scan.c
++++ b/drivers/firmware/dmi_scan.c
+@@ -101,6 +101,17 @@ static void dmi_decode_table(u8 *buf,
+ 	       (data - buf + sizeof(struct dmi_header)) <= dmi_len) {
+ 		const struct dmi_header *dm = (const struct dmi_header *)data;
+ 
++		/*
++		 * If a short entry is found (less than 4 bytes), not only it
++		 * is invalid, but we cannot reliably locate the next entry.
++		 */
++		if (dm->length < sizeof(struct dmi_header)) {
++			pr_warn(FW_BUG
++				"Corrupted DMI table, offset %zd (only %d entries processed)\n",
++				data - buf, i);
++			break;
++		}
++
+ 		/*
+ 		 *  We want to know the total length (formatted area and
+ 		 *  strings) before decoding to make sure we won't run off the
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+index 582055136cdbf..87dcbaf540e8c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+@@ -413,6 +413,14 @@ void amdgpu_irq_dispatch(struct amdgpu_device *adev,
+ 	int r;
+ 
+ 	entry.iv_entry = (const uint32_t *)&ih->ring[ring_index];
++
++	/*
++	 * timestamp is not supported on some legacy SOCs (cik, cz, iceland,
++	 * si and tonga), so initialize timestamp and timestamp_src to 0
++	 */
++	entry.timestamp = 0;
++	entry.timestamp_src = 0;
++
+ 	amdgpu_ih_decode_iv(adev, &entry);
+ 
+ 	trace_amdgpu_iv(ih - &adev->irq.ih, &entry);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index f1eda1a6496d4..0a13c06eea447 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1802,6 +1802,9 @@ static struct audio *find_first_free_audio(
+ {
+ 	int i, available_audio_count;
+ 
++	if (id == ENGINE_ID_UNKNOWN)
++		return NULL;
++
+ 	available_audio_count = pool->audio_count;
+ 
+ 	for (i = 0; i < available_audio_count; i++) {
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/dce110/irq_service_dce110.c b/drivers/gpu/drm/amd/display/dc/irq/dce110/irq_service_dce110.c
+index 378cc11aa0476..3d8b2b127f3f5 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/dce110/irq_service_dce110.c
++++ b/drivers/gpu/drm/amd/display/dc/irq/dce110/irq_service_dce110.c
+@@ -211,8 +211,12 @@ bool dce110_vblank_set(struct irq_service *irq_service,
+ 						   info->ext_id);
+ 	uint8_t pipe_offset = dal_irq_src - IRQ_TYPE_VBLANK;
+ 
+-	struct timing_generator *tg =
+-			dc->current_state->res_ctx.pipe_ctx[pipe_offset].stream_res.tg;
++	struct timing_generator *tg;
++
++	if (pipe_offset >= MAX_PIPES)
++		return false;
++
++	tg = dc->current_state->res_ctx.pipe_ctx[pipe_offset].stream_res.tg;
+ 
+ 	if (enable) {
+ 		if (!tg || !tg->funcs->arm_vert_intr(tg, 2)) {
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c
+index f7b5583ee609a..8e9caae7c9559 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c
+@@ -156,6 +156,10 @@ static enum mod_hdcp_status read(struct mod_hdcp *hdcp,
+ 	uint32_t cur_size = 0;
+ 	uint32_t data_offset = 0;
+ 
++	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID) {
++		return MOD_HDCP_STATUS_DDC_FAILURE;
++	}
++
+ 	if (is_dp_hdcp(hdcp)) {
+ 		while (buf_len > 0) {
+ 			cur_size = MIN(buf_len, HDCP_MAX_AUX_TRANSACTION_SIZE);
+@@ -215,6 +219,10 @@ static enum mod_hdcp_status write(struct mod_hdcp *hdcp,
+ 	uint32_t cur_size = 0;
+ 	uint32_t data_offset = 0;
+ 
++	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID) {
++		return MOD_HDCP_STATUS_DDC_FAILURE;
++	}
++
+ 	if (is_dp_hdcp(hdcp)) {
+ 		while (buf_len > 0) {
+ 			cur_size = MIN(buf_len, HDCP_MAX_AUX_TRANSACTION_SIZE);
+diff --git a/drivers/gpu/drm/amd/include/atomfirmware.h b/drivers/gpu/drm/amd/include/atomfirmware.h
+index 3e526c394f6cb..ce7b0763ff191 100644
+--- a/drivers/gpu/drm/amd/include/atomfirmware.h
++++ b/drivers/gpu/drm/amd/include/atomfirmware.h
+@@ -690,7 +690,7 @@ struct atom_gpio_pin_lut_v2_1
+ {
+   struct  atom_common_table_header  table_header;
+   /*the real number of this included in the structure is calcualted by using the (whole structure size - the header size)/size of atom_gpio_pin_lut  */
+-  struct  atom_gpio_pin_assignment  gpio_pin[8];
++  struct  atom_gpio_pin_assignment  gpio_pin[];
+ };
+ 
+ 
+diff --git a/drivers/gpu/drm/lima/lima_gp.c b/drivers/gpu/drm/lima/lima_gp.c
+index 6cf46b653e810..ca3842f719842 100644
+--- a/drivers/gpu/drm/lima/lima_gp.c
++++ b/drivers/gpu/drm/lima/lima_gp.c
+@@ -324,7 +324,9 @@ int lima_gp_init(struct lima_ip *ip)
+ 
+ void lima_gp_fini(struct lima_ip *ip)
+ {
++	struct lima_device *dev = ip->dev;
+ 
++	devm_free_irq(dev->dev, ip->irq, ip);
+ }
+ 
+ int lima_gp_pipe_init(struct lima_device *dev)
+diff --git a/drivers/gpu/drm/lima/lima_mmu.c b/drivers/gpu/drm/lima/lima_mmu.c
+index a1ae6c252dc2b..8ca7047adbaca 100644
+--- a/drivers/gpu/drm/lima/lima_mmu.c
++++ b/drivers/gpu/drm/lima/lima_mmu.c
+@@ -118,7 +118,12 @@ int lima_mmu_init(struct lima_ip *ip)
+ 
+ void lima_mmu_fini(struct lima_ip *ip)
+ {
++	struct lima_device *dev = ip->dev;
++
++	if (ip->id == lima_ip_ppmmu_bcast)
++		return;
+ 
++	devm_free_irq(dev->dev, ip->irq, ip);
+ }
+ 
+ void lima_mmu_flush_tlb(struct lima_ip *ip)
+diff --git a/drivers/gpu/drm/lima/lima_pp.c b/drivers/gpu/drm/lima/lima_pp.c
+index 54b208a4a768e..d34c9e8840f45 100644
+--- a/drivers/gpu/drm/lima/lima_pp.c
++++ b/drivers/gpu/drm/lima/lima_pp.c
+@@ -266,7 +266,9 @@ int lima_pp_init(struct lima_ip *ip)
+ 
+ void lima_pp_fini(struct lima_ip *ip)
+ {
++	struct lima_device *dev = ip->dev;
+ 
++	devm_free_irq(dev->dev, ip->irq, ip);
+ }
+ 
+ int lima_pp_bcast_resume(struct lima_ip *ip)
+@@ -299,7 +301,9 @@ int lima_pp_bcast_init(struct lima_ip *ip)
+ 
+ void lima_pp_bcast_fini(struct lima_ip *ip)
+ {
++	struct lima_device *dev = ip->dev;
+ 
++	devm_free_irq(dev->dev, ip->irq, ip);
+ }
+ 
+ static int lima_pp_task_validate(struct lima_sched_pipe *pipe,
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index d2cefca120c4b..6b8ad830c0348 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -960,6 +960,9 @@ nouveau_connector_get_modes(struct drm_connector *connector)
+ 		struct drm_display_mode *mode;
+ 
+ 		mode = drm_mode_duplicate(dev, nv_connector->native_mode);
++		if (!mode)
++			return 0;
++
+ 		drm_mode_probed_add(connector, mode);
+ 		ret = 1;
+ 	}
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index d6b945f5b8872..4baa9bce02b67 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1078,7 +1078,7 @@ static const struct pci_device_id i801_ids[] = {
+ MODULE_DEVICE_TABLE(pci, i801_ids);
+ 
+ #if defined CONFIG_X86 && defined CONFIG_DMI
+-static unsigned char apanel_addr;
++static unsigned char apanel_addr __ro_after_init;
+ 
+ /* Scan the system ROM for the signature "FJKEYINF" */
+ static __init const void __iomem *bios_signature(const void __iomem *bios)
+diff --git a/drivers/i2c/busses/i2c-pnx.c b/drivers/i2c/busses/i2c-pnx.c
+index 8c4ec7f13f5ab..b6b5a65efcbbc 100644
+--- a/drivers/i2c/busses/i2c-pnx.c
++++ b/drivers/i2c/busses/i2c-pnx.c
+@@ -15,7 +15,6 @@
+ #include <linux/ioport.h>
+ #include <linux/delay.h>
+ #include <linux/i2c.h>
+-#include <linux/timer.h>
+ #include <linux/completion.h>
+ #include <linux/platform_device.h>
+ #include <linux/io.h>
+@@ -32,7 +31,6 @@ struct i2c_pnx_mif {
+ 	int			ret;		/* Return value */
+ 	int			mode;		/* Interface mode */
+ 	struct completion	complete;	/* I/O completion */
+-	struct timer_list	timer;		/* Timeout */
+ 	u8 *			buf;		/* Data buffer */
+ 	int			len;		/* Length of data buffer */
+ 	int			order;		/* RX Bytes to order via TX */
+@@ -117,24 +115,6 @@ static inline int wait_reset(struct i2c_pnx_algo_data *data)
+ 	return (timeout <= 0);
+ }
+ 
+-static inline void i2c_pnx_arm_timer(struct i2c_pnx_algo_data *alg_data)
+-{
+-	struct timer_list *timer = &alg_data->mif.timer;
+-	unsigned long expires = msecs_to_jiffies(alg_data->timeout);
+-
+-	if (expires <= 1)
+-		expires = 2;
+-
+-	del_timer_sync(timer);
+-
+-	dev_dbg(&alg_data->adapter.dev, "Timer armed at %lu plus %lu jiffies.\n",
+-		jiffies, expires);
+-
+-	timer->expires = jiffies + expires;
+-
+-	add_timer(timer);
+-}
+-
+ /**
+  * i2c_pnx_start - start a device
+  * @slave_addr:		slave address
+@@ -259,8 +239,6 @@ static int i2c_pnx_master_xmit(struct i2c_pnx_algo_data *alg_data)
+ 				~(mcntrl_afie | mcntrl_naie | mcntrl_drmie),
+ 				  I2C_REG_CTL(alg_data));
+ 
+-			del_timer_sync(&alg_data->mif.timer);
+-
+ 			dev_dbg(&alg_data->adapter.dev,
+ 				"%s(): Waking up xfer routine.\n",
+ 				__func__);
+@@ -276,8 +254,6 @@ static int i2c_pnx_master_xmit(struct i2c_pnx_algo_data *alg_data)
+ 			~(mcntrl_afie | mcntrl_naie | mcntrl_drmie),
+ 			  I2C_REG_CTL(alg_data));
+ 
+-		/* Stop timer. */
+-		del_timer_sync(&alg_data->mif.timer);
+ 		dev_dbg(&alg_data->adapter.dev,
+ 			"%s(): Waking up xfer routine after zero-xfer.\n",
+ 			__func__);
+@@ -364,8 +340,6 @@ static int i2c_pnx_master_rcv(struct i2c_pnx_algo_data *alg_data)
+ 				 mcntrl_drmie | mcntrl_daie);
+ 			iowrite32(ctl, I2C_REG_CTL(alg_data));
+ 
+-			/* Kill timer. */
+-			del_timer_sync(&alg_data->mif.timer);
+ 			complete(&alg_data->mif.complete);
+ 		}
+ 	}
+@@ -400,8 +374,6 @@ static irqreturn_t i2c_pnx_interrupt(int irq, void *dev_id)
+ 			 mcntrl_drmie);
+ 		iowrite32(ctl, I2C_REG_CTL(alg_data));
+ 
+-		/* Stop timer, to prevent timeout. */
+-		del_timer_sync(&alg_data->mif.timer);
+ 		complete(&alg_data->mif.complete);
+ 	} else if (stat & mstatus_nai) {
+ 		/* Slave did not acknowledge, generate a STOP */
+@@ -419,8 +391,6 @@ static irqreturn_t i2c_pnx_interrupt(int irq, void *dev_id)
+ 		/* Our return value. */
+ 		alg_data->mif.ret = -EIO;
+ 
+-		/* Stop timer, to prevent timeout. */
+-		del_timer_sync(&alg_data->mif.timer);
+ 		complete(&alg_data->mif.complete);
+ 	} else {
+ 		/*
+@@ -453,9 +423,8 @@ static irqreturn_t i2c_pnx_interrupt(int irq, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
+-static void i2c_pnx_timeout(struct timer_list *t)
++static void i2c_pnx_timeout(struct i2c_pnx_algo_data *alg_data)
+ {
+-	struct i2c_pnx_algo_data *alg_data = from_timer(alg_data, t, mif.timer);
+ 	u32 ctl;
+ 
+ 	dev_err(&alg_data->adapter.dev,
+@@ -472,7 +441,6 @@ static void i2c_pnx_timeout(struct timer_list *t)
+ 	iowrite32(ctl, I2C_REG_CTL(alg_data));
+ 	wait_reset(alg_data);
+ 	alg_data->mif.ret = -EIO;
+-	complete(&alg_data->mif.complete);
+ }
+ 
+ static inline void bus_reset_if_active(struct i2c_pnx_algo_data *alg_data)
+@@ -514,6 +482,7 @@ i2c_pnx_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ 	struct i2c_msg *pmsg;
+ 	int rc = 0, completed = 0, i;
+ 	struct i2c_pnx_algo_data *alg_data = adap->algo_data;
++	unsigned long time_left;
+ 	u32 stat;
+ 
+ 	dev_dbg(&alg_data->adapter.dev,
+@@ -548,7 +517,6 @@ i2c_pnx_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ 		dev_dbg(&alg_data->adapter.dev, "%s(): mode %d, %d bytes\n",
+ 			__func__, alg_data->mif.mode, alg_data->mif.len);
+ 
+-		i2c_pnx_arm_timer(alg_data);
+ 
+ 		/* initialize the completion var */
+ 		init_completion(&alg_data->mif.complete);
+@@ -564,7 +532,10 @@ i2c_pnx_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ 			break;
+ 
+ 		/* Wait for completion */
+-		wait_for_completion(&alg_data->mif.complete);
++		time_left = wait_for_completion_timeout(&alg_data->mif.complete,
++							alg_data->timeout);
++		if (time_left == 0)
++			i2c_pnx_timeout(alg_data);
+ 
+ 		if (!(rc = alg_data->mif.ret))
+ 			completed++;
+@@ -657,7 +628,10 @@ static int i2c_pnx_probe(struct platform_device *pdev)
+ 	alg_data->adapter.algo_data = alg_data;
+ 	alg_data->adapter.nr = pdev->id;
+ 
+-	alg_data->timeout = I2C_PNX_TIMEOUT_DEFAULT;
++	alg_data->timeout = msecs_to_jiffies(I2C_PNX_TIMEOUT_DEFAULT);
++	if (alg_data->timeout <= 1)
++		alg_data->timeout = 2;
++
+ #ifdef CONFIG_OF
+ 	alg_data->adapter.dev.of_node = of_node_get(pdev->dev.of_node);
+ 	if (pdev->dev.of_node) {
+@@ -677,8 +651,6 @@ static int i2c_pnx_probe(struct platform_device *pdev)
+ 	if (IS_ERR(alg_data->clk))
+ 		return PTR_ERR(alg_data->clk);
+ 
+-	timer_setup(&alg_data->mif.timer, i2c_pnx_timeout, 0);
+-
+ 	snprintf(alg_data->adapter.name, sizeof(alg_data->adapter.name),
+ 		 "%s", pdev->name);
+ 
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 6a7a7a074a975..7a6bae9df568b 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -116,6 +116,7 @@ enum rcar_i2c_type {
+ 	I2C_RCAR_GEN1,
+ 	I2C_RCAR_GEN2,
+ 	I2C_RCAR_GEN3,
++	I2C_RCAR_GEN4,
+ };
+ 
+ struct rcar_i2c_priv {
+@@ -220,6 +221,14 @@ static void rcar_i2c_init(struct rcar_i2c_priv *priv)
+ 
+ }
+ 
++static void rcar_i2c_reset_slave(struct rcar_i2c_priv *priv)
++{
++	rcar_i2c_write(priv, ICSIER, 0);
++	rcar_i2c_write(priv, ICSSR, 0);
++	rcar_i2c_write(priv, ICSCR, SDBS);
++	rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
++}
++
+ static int rcar_i2c_bus_barrier(struct rcar_i2c_priv *priv)
+ {
+ 	int ret;
+@@ -372,8 +381,8 @@ static void rcar_i2c_dma_unmap(struct rcar_i2c_priv *priv)
+ 	dma_unmap_single(chan->device->dev, sg_dma_address(&priv->sg),
+ 			 sg_dma_len(&priv->sg), priv->dma_direction);
+ 
+-	/* Gen3 can only do one RXDMA per transfer and we just completed it */
+-	if (priv->devtype == I2C_RCAR_GEN3 &&
++	/* Gen3+ can only do one RXDMA per transfer and we just completed it */
++	if (priv->devtype >= I2C_RCAR_GEN3 &&
+ 	    priv->dma_direction == DMA_FROM_DEVICE)
+ 		priv->flags |= ID_P_NO_RXDMA;
+ 
+@@ -787,6 +796,10 @@ static int rcar_i2c_do_reset(struct rcar_i2c_priv *priv)
+ {
+ 	int ret;
+ 
++	/* Don't reset if a slave instance is currently running */
++	if (priv->slave)
++		return -EISCONN;
++
+ 	ret = reset_control_reset(priv->rstc);
+ 	if (ret)
+ 		return ret;
+@@ -811,14 +824,12 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
+ 	if (ret < 0)
+ 		goto out;
+ 
+-	/* Gen3 needs a reset before allowing RXDMA once */
+-	if (priv->devtype == I2C_RCAR_GEN3) {
+-		priv->flags |= ID_P_NO_RXDMA;
+-		if (!IS_ERR(priv->rstc)) {
+-			ret = rcar_i2c_do_reset(priv);
+-			if (ret == 0)
+-				priv->flags &= ~ID_P_NO_RXDMA;
+-		}
++	/* Gen3+ needs a reset. That also allows RXDMA once */
++	if (priv->devtype >= I2C_RCAR_GEN3) {
++		ret = rcar_i2c_do_reset(priv);
++		if (ret)
++			goto out;
++		priv->flags &= ~ID_P_NO_RXDMA;
+ 	}
+ 
+ 	rcar_i2c_init(priv);
+@@ -888,11 +899,8 @@ static int rcar_unreg_slave(struct i2c_client *slave)
+ 
+ 	/* ensure no irq is running before clearing ptr */
+ 	disable_irq(priv->irq);
+-	rcar_i2c_write(priv, ICSIER, 0);
+-	rcar_i2c_write(priv, ICSSR, 0);
++	rcar_i2c_reset_slave(priv);
+ 	enable_irq(priv->irq);
+-	rcar_i2c_write(priv, ICSCR, SDBS);
+-	rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
+ 
+ 	priv->slave = NULL;
+ 
+@@ -945,6 +953,7 @@ static const struct of_device_id rcar_i2c_dt_ids[] = {
+ 	{ .compatible = "renesas,rcar-gen1-i2c", .data = (void *)I2C_RCAR_GEN1 },
+ 	{ .compatible = "renesas,rcar-gen2-i2c", .data = (void *)I2C_RCAR_GEN2 },
+ 	{ .compatible = "renesas,rcar-gen3-i2c", .data = (void *)I2C_RCAR_GEN3 },
++	{ .compatible = "renesas,rcar-gen4-i2c", .data = (void *)I2C_RCAR_GEN4 },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, rcar_i2c_dt_ids);
+@@ -1004,22 +1013,15 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 		goto out_pm_disable;
+ 	}
+ 
+-	rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
++	/* Bring hardware to known state */
++	rcar_i2c_init(priv);
++	rcar_i2c_reset_slave(priv);
+ 
+ 	if (priv->devtype < I2C_RCAR_GEN3) {
+ 		irqflags |= IRQF_NO_THREAD;
+ 		irqhandler = rcar_i2c_gen2_irq;
+ 	}
+ 
+-	if (priv->devtype == I2C_RCAR_GEN3) {
+-		priv->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+-		if (!IS_ERR(priv->rstc)) {
+-			ret = reset_control_status(priv->rstc);
+-			if (ret < 0)
+-				priv->rstc = ERR_PTR(-ENOTSUPP);
+-		}
+-	}
+-
+ 	/* Stay always active when multi-master to keep arbitration working */
+ 	if (of_property_read_bool(dev->of_node, "multi-master"))
+ 		priv->flags |= ID_P_PM_BLOCKED;
+@@ -1029,6 +1031,22 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 	if (of_property_read_bool(dev->of_node, "smbus"))
+ 		priv->flags |= ID_P_HOST_NOTIFY;
+ 
++	/* R-Car Gen3+ needs a reset before every transfer */
++	if (priv->devtype >= I2C_RCAR_GEN3) {
++		priv->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
++		if (IS_ERR(priv->rstc)) {
++			ret = PTR_ERR(priv->rstc);
++			goto out_pm_put;
++		}
++
++		ret = reset_control_status(priv->rstc);
++		if (ret < 0)
++			goto out_pm_put;
++
++		/* hard reset disturbs HostNotify local target, so disable it */
++		priv->flags &= ~ID_P_HOST_NOTIFY;
++	}
++
+ 	ret = platform_get_irq(pdev, 0);
+ 	if (ret < 0)
+ 		goto out_pm_put;
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index e8a89e18c640e..6fac638e423ac 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -969,6 +969,7 @@ EXPORT_SYMBOL_GPL(i2c_unregister_device);
+ 
+ static const struct i2c_device_id dummy_id[] = {
+ 	{ "dummy", 0 },
++	{ "smbus_host_notify", 0 },
+ 	{ },
+ };
+ 
+diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
+index 3bd0dcde8576d..063707dd4fe37 100644
+--- a/drivers/infiniband/core/user_mad.c
++++ b/drivers/infiniband/core/user_mad.c
+@@ -63,6 +63,8 @@ MODULE_AUTHOR("Roland Dreier");
+ MODULE_DESCRIPTION("InfiniBand userspace MAD packet access");
+ MODULE_LICENSE("Dual BSD/GPL");
+ 
++#define MAX_UMAD_RECV_LIST_SIZE 200000
++
+ enum {
+ 	IB_UMAD_MAX_PORTS  = RDMA_MAX_PORTS,
+ 	IB_UMAD_MAX_AGENTS = 32,
+@@ -113,6 +115,7 @@ struct ib_umad_file {
+ 	struct mutex		mutex;
+ 	struct ib_umad_port    *port;
+ 	struct list_head	recv_list;
++	atomic_t		recv_list_size;
+ 	struct list_head	send_list;
+ 	struct list_head	port_list;
+ 	spinlock_t		send_lock;
+@@ -180,24 +183,28 @@ static struct ib_mad_agent *__get_agent(struct ib_umad_file *file, int id)
+ 	return file->agents_dead ? NULL : file->agent[id];
+ }
+ 
+-static int queue_packet(struct ib_umad_file *file,
+-			struct ib_mad_agent *agent,
+-			struct ib_umad_packet *packet)
++static int queue_packet(struct ib_umad_file *file, struct ib_mad_agent *agent,
++			struct ib_umad_packet *packet, bool is_recv_mad)
+ {
+ 	int ret = 1;
+ 
+ 	mutex_lock(&file->mutex);
+ 
++	if (is_recv_mad &&
++	    atomic_read(&file->recv_list_size) > MAX_UMAD_RECV_LIST_SIZE)
++		goto unlock;
++
+ 	for (packet->mad.hdr.id = 0;
+ 	     packet->mad.hdr.id < IB_UMAD_MAX_AGENTS;
+ 	     packet->mad.hdr.id++)
+ 		if (agent == __get_agent(file, packet->mad.hdr.id)) {
+ 			list_add_tail(&packet->list, &file->recv_list);
++			atomic_inc(&file->recv_list_size);
+ 			wake_up_interruptible(&file->recv_wait);
+ 			ret = 0;
+ 			break;
+ 		}
+-
++unlock:
+ 	mutex_unlock(&file->mutex);
+ 
+ 	return ret;
+@@ -224,7 +231,7 @@ static void send_handler(struct ib_mad_agent *agent,
+ 	if (send_wc->status == IB_WC_RESP_TIMEOUT_ERR) {
+ 		packet->length = IB_MGMT_MAD_HDR;
+ 		packet->mad.hdr.status = ETIMEDOUT;
+-		if (!queue_packet(file, agent, packet))
++		if (!queue_packet(file, agent, packet, false))
+ 			return;
+ 	}
+ 	kfree(packet);
+@@ -284,7 +291,7 @@ static void recv_handler(struct ib_mad_agent *agent,
+ 		rdma_destroy_ah_attr(&ah_attr);
+ 	}
+ 
+-	if (queue_packet(file, agent, packet))
++	if (queue_packet(file, agent, packet, true))
+ 		goto err2;
+ 	return;
+ 
+@@ -409,6 +416,7 @@ static ssize_t ib_umad_read(struct file *filp, char __user *buf,
+ 
+ 	packet = list_entry(file->recv_list.next, struct ib_umad_packet, list);
+ 	list_del(&packet->list);
++	atomic_dec(&file->recv_list_size);
+ 
+ 	mutex_unlock(&file->mutex);
+ 
+@@ -421,6 +429,7 @@ static ssize_t ib_umad_read(struct file *filp, char __user *buf,
+ 		/* Requeue packet */
+ 		mutex_lock(&file->mutex);
+ 		list_add(&packet->list, &file->recv_list);
++		atomic_inc(&file->recv_list_size);
+ 		mutex_unlock(&file->mutex);
+ 	} else {
+ 		if (packet->recv_wc)
+diff --git a/drivers/input/ff-core.c b/drivers/input/ff-core.c
+index 1cf5deda06e19..a765e185c7a12 100644
+--- a/drivers/input/ff-core.c
++++ b/drivers/input/ff-core.c
+@@ -12,8 +12,10 @@
+ /* #define DEBUG */
+ 
+ #include <linux/input.h>
++#include <linux/limits.h>
+ #include <linux/module.h>
+ #include <linux/mutex.h>
++#include <linux/overflow.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ 
+@@ -318,9 +320,8 @@ int input_ff_create(struct input_dev *dev, unsigned int max_effects)
+ 		return -EINVAL;
+ 	}
+ 
+-	ff_dev_size = sizeof(struct ff_device) +
+-				max_effects * sizeof(struct file *);
+-	if (ff_dev_size < max_effects) /* overflow */
++	ff_dev_size = struct_size(ff, effect_owners, max_effects);
++	if (ff_dev_size == SIZE_MAX) /* overflow */
+ 		return -EINVAL;
+ 
+ 	ff = kzalloc(ff_dev_size, GFP_KERNEL);
+diff --git a/drivers/media/dvb-frontends/as102_fe_types.h b/drivers/media/dvb-frontends/as102_fe_types.h
+index 297f9520ebf9d..8a4e392c88965 100644
+--- a/drivers/media/dvb-frontends/as102_fe_types.h
++++ b/drivers/media/dvb-frontends/as102_fe_types.h
+@@ -174,6 +174,6 @@ struct as10x_register_addr {
+ 	uint32_t addr;
+ 	/* register mode access */
+ 	uint8_t mode;
+-};
++} __packed;
+ 
+ #endif
+diff --git a/drivers/media/dvb-frontends/tda10048.c b/drivers/media/dvb-frontends/tda10048.c
+index f1d5e77d5dcce..db829754f1359 100644
+--- a/drivers/media/dvb-frontends/tda10048.c
++++ b/drivers/media/dvb-frontends/tda10048.c
+@@ -410,6 +410,7 @@ static int tda10048_set_if(struct dvb_frontend *fe, u32 bw)
+ 	struct tda10048_config *config = &state->config;
+ 	int i;
+ 	u32 if_freq_khz;
++	u64 sample_freq;
+ 
+ 	dprintk(1, "%s(bw = %d)\n", __func__, bw);
+ 
+@@ -451,9 +452,11 @@ static int tda10048_set_if(struct dvb_frontend *fe, u32 bw)
+ 	dprintk(1, "- pll_pfactor = %d\n", state->pll_pfactor);
+ 
+ 	/* Calculate the sample frequency */
+-	state->sample_freq = state->xtal_hz * (state->pll_mfactor + 45);
+-	state->sample_freq /= (state->pll_nfactor + 1);
+-	state->sample_freq /= (state->pll_pfactor + 4);
++	sample_freq = state->xtal_hz;
++	sample_freq *= state->pll_mfactor + 45;
++	do_div(sample_freq, state->pll_nfactor + 1);
++	do_div(sample_freq, state->pll_pfactor + 4);
++	state->sample_freq = sample_freq;
+ 	dprintk(1, "- sample_freq = %d\n", state->sample_freq);
+ 
+ 	/* Update the I/F */
+diff --git a/drivers/media/dvb-frontends/tda18271c2dd.c b/drivers/media/dvb-frontends/tda18271c2dd.c
+index a348344879433..fd928787207ed 100644
+--- a/drivers/media/dvb-frontends/tda18271c2dd.c
++++ b/drivers/media/dvb-frontends/tda18271c2dd.c
+@@ -328,7 +328,7 @@ static int CalcMainPLL(struct tda_state *state, u32 freq)
+ 
+ 	OscFreq = (u64) freq * (u64) Div;
+ 	OscFreq *= (u64) 16384;
+-	do_div(OscFreq, (u64)16000000);
++	do_div(OscFreq, 16000000);
+ 	MainDiv = OscFreq;
+ 
+ 	state->m_Regs[MPD] = PostDiv & 0x77;
+@@ -352,7 +352,7 @@ static int CalcCalPLL(struct tda_state *state, u32 freq)
+ 	OscFreq = (u64)freq * (u64)Div;
+ 	/* CalDiv = u32( OscFreq * 16384 / 16000000 ); */
+ 	OscFreq *= (u64)16384;
+-	do_div(OscFreq, (u64)16000000);
++	do_div(OscFreq, 16000000);
+ 	CalDiv = OscFreq;
+ 
+ 	state->m_Regs[CPD] = PostDiv;
+diff --git a/drivers/media/usb/dvb-usb/dib0700_devices.c b/drivers/media/usb/dvb-usb/dib0700_devices.c
+index d3288c1079062..afc561c1a5d61 100644
+--- a/drivers/media/usb/dvb-usb/dib0700_devices.c
++++ b/drivers/media/usb/dvb-usb/dib0700_devices.c
+@@ -2419,7 +2419,12 @@ static int stk9090m_frontend_attach(struct dvb_usb_adapter *adap)
+ 
+ 	adap->fe_adap[0].fe = dvb_attach(dib9000_attach, &adap->dev->i2c_adap, 0x80, &stk9090m_config);
+ 
+-	return adap->fe_adap[0].fe == NULL ?  -ENODEV : 0;
++	if (!adap->fe_adap[0].fe) {
++		release_firmware(state->frontend_firmware);
++		return -ENODEV;
++	}
++
++	return 0;
+ }
+ 
+ static int dib9090_tuner_attach(struct dvb_usb_adapter *adap)
+@@ -2492,8 +2497,10 @@ static int nim9090md_frontend_attach(struct dvb_usb_adapter *adap)
+ 	dib9000_i2c_enumeration(&adap->dev->i2c_adap, 1, 0x20, 0x80);
+ 	adap->fe_adap[0].fe = dvb_attach(dib9000_attach, &adap->dev->i2c_adap, 0x80, &nim9090md_config[0]);
+ 
+-	if (adap->fe_adap[0].fe == NULL)
++	if (!adap->fe_adap[0].fe) {
++		release_firmware(state->frontend_firmware);
+ 		return -ENODEV;
++	}
+ 
+ 	i2c = dib9000_get_i2c_master(adap->fe_adap[0].fe, DIBX000_I2C_INTERFACE_GPIO_3_4, 0);
+ 	dib9000_i2c_enumeration(i2c, 1, 0x12, 0x82);
+@@ -2501,7 +2508,12 @@ static int nim9090md_frontend_attach(struct dvb_usb_adapter *adap)
+ 	fe_slave = dvb_attach(dib9000_attach, i2c, 0x82, &nim9090md_config[1]);
+ 	dib9000_set_slave_frontend(adap->fe_adap[0].fe, fe_slave);
+ 
+-	return fe_slave == NULL ?  -ENODEV : 0;
++	if (!fe_slave) {
++		release_firmware(state->frontend_firmware);
++		return -ENODEV;
++	}
++
++	return 0;
+ }
+ 
+ static int nim9090md_tuner_attach(struct dvb_usb_adapter *adap)
+diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
+index 2290f132a82c8..86e537f0316d7 100644
+--- a/drivers/media/usb/dvb-usb/dw2102.c
++++ b/drivers/media/usb/dvb-usb/dw2102.c
+@@ -716,6 +716,7 @@ static int su3000_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ {
+ 	struct dvb_usb_device *d = i2c_get_adapdata(adap);
+ 	struct dw2102_state *state;
++	int j;
+ 
+ 	if (!d)
+ 		return -ENODEV;
+@@ -729,11 +730,11 @@ static int su3000_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 		return -EAGAIN;
+ 	}
+ 
+-	switch (num) {
+-	case 1:
+-		switch (msg[0].addr) {
++	j = 0;
++	while (j < num) {
++		switch (msg[j].addr) {
+ 		case SU3000_STREAM_CTRL:
+-			state->data[0] = msg[0].buf[0] + 0x36;
++			state->data[0] = msg[j].buf[0] + 0x36;
+ 			state->data[1] = 3;
+ 			state->data[2] = 0;
+ 			if (dvb_usb_generic_rw(d, state->data, 3,
+@@ -745,61 +746,86 @@ static int su3000_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ 			if (dvb_usb_generic_rw(d, state->data, 1,
+ 					state->data, 2, 0) < 0)
+ 				err("i2c transfer failed.");
+-			msg[0].buf[1] = state->data[0];
+-			msg[0].buf[0] = state->data[1];
++			msg[j].buf[1] = state->data[0];
++			msg[j].buf[0] = state->data[1];
+ 			break;
+ 		default:
+-			if (3 + msg[0].len > sizeof(state->data)) {
+-				warn("i2c wr: len=%d is too big!\n",
+-				     msg[0].len);
++			/* if the current write msg is followed by a another
++			 * read msg to/from the same address
++			 */
++			if ((j+1 < num) && (msg[j+1].flags & I2C_M_RD) &&
++			    (msg[j].addr == msg[j+1].addr)) {
++				/* join both i2c msgs to one usb read command */
++				if (4 + msg[j].len > sizeof(state->data)) {
++					warn("i2c combined wr/rd: write len=%d is too big!\n",
++					    msg[j].len);
++					num = -EOPNOTSUPP;
++					break;
++				}
++				if (1 + msg[j+1].len > sizeof(state->data)) {
++					warn("i2c combined wr/rd: read len=%d is too big!\n",
++					    msg[j+1].len);
++					num = -EOPNOTSUPP;
++					break;
++				}
++
++				state->data[0] = 0x09;
++				state->data[1] = msg[j].len;
++				state->data[2] = msg[j+1].len;
++				state->data[3] = msg[j].addr;
++				memcpy(&state->data[4], msg[j].buf, msg[j].len);
++
++				if (dvb_usb_generic_rw(d, state->data, msg[j].len + 4,
++					state->data, msg[j+1].len + 1, 0) < 0)
++					err("i2c transfer failed.");
++
++				memcpy(msg[j+1].buf, &state->data[1], msg[j+1].len);
++				j++;
++				break;
++			}
++
++			if (msg[j].flags & I2C_M_RD) {
++				/* single read */
++				if (4 + msg[j].len > sizeof(state->data)) {
++					warn("i2c rd: len=%d is too big!\n", msg[j].len);
++					num = -EOPNOTSUPP;
++					break;
++				}
++
++				state->data[0] = 0x09;
++				state->data[1] = 0;
++				state->data[2] = msg[j].len;
++				state->data[3] = msg[j].addr;
++				memcpy(&state->data[4], msg[j].buf, msg[j].len);
++
++				if (dvb_usb_generic_rw(d, state->data, 4,
++					state->data, msg[j].len + 1, 0) < 0)
++					err("i2c transfer failed.");
++
++				memcpy(msg[j].buf, &state->data[1], msg[j].len);
++				break;
++			}
++
++			/* single write */
++			if (3 + msg[j].len > sizeof(state->data)) {
++				warn("i2c wr: len=%d is too big!\n", msg[j].len);
+ 				num = -EOPNOTSUPP;
+ 				break;
+ 			}
+ 
+-			/* always i2c write*/
+ 			state->data[0] = 0x08;
+-			state->data[1] = msg[0].addr;
+-			state->data[2] = msg[0].len;
++			state->data[1] = msg[j].addr;
++			state->data[2] = msg[j].len;
+ 
+-			memcpy(&state->data[3], msg[0].buf, msg[0].len);
++			memcpy(&state->data[3], msg[j].buf, msg[j].len);
+ 
+-			if (dvb_usb_generic_rw(d, state->data, msg[0].len + 3,
++			if (dvb_usb_generic_rw(d, state->data, msg[j].len + 3,
+ 						state->data, 1, 0) < 0)
+ 				err("i2c transfer failed.");
++		} // switch
++		j++;
+ 
+-		}
+-		break;
+-	case 2:
+-		/* always i2c read */
+-		if (4 + msg[0].len > sizeof(state->data)) {
+-			warn("i2c rd: len=%d is too big!\n",
+-			     msg[0].len);
+-			num = -EOPNOTSUPP;
+-			break;
+-		}
+-		if (1 + msg[1].len > sizeof(state->data)) {
+-			warn("i2c rd: len=%d is too big!\n",
+-			     msg[1].len);
+-			num = -EOPNOTSUPP;
+-			break;
+-		}
+-
+-		state->data[0] = 0x09;
+-		state->data[1] = msg[0].len;
+-		state->data[2] = msg[1].len;
+-		state->data[3] = msg[0].addr;
+-		memcpy(&state->data[4], msg[0].buf, msg[0].len);
+-
+-		if (dvb_usb_generic_rw(d, state->data, msg[0].len + 4,
+-					state->data, msg[1].len + 1, 0) < 0)
+-			err("i2c transfer failed.");
+-
+-		memcpy(msg[1].buf, &state->data[1], msg[1].len);
+-		break;
+-	default:
+-		warn("more than 2 i2c messages at a time is not handled yet.");
+-		break;
+-	}
++	} // while
+ 	mutex_unlock(&d->data_mutex);
+ 	mutex_unlock(&d->i2c_mutex);
+ 	return num;
+diff --git a/drivers/media/usb/s2255/s2255drv.c b/drivers/media/usb/s2255/s2255drv.c
+index cb15eb32d2a6b..50ac20c226309 100644
+--- a/drivers/media/usb/s2255/s2255drv.c
++++ b/drivers/media/usb/s2255/s2255drv.c
+@@ -247,7 +247,7 @@ struct s2255_vc {
+ struct s2255_dev {
+ 	struct s2255_vc         vc[MAX_CHANNELS];
+ 	struct v4l2_device      v4l2_dev;
+-	atomic_t                num_channels;
++	refcount_t		num_channels;
+ 	int			frames;
+ 	struct mutex		lock;	/* channels[].vdev.lock */
+ 	struct mutex		cmdlock; /* protects cmdbuf */
+@@ -1552,11 +1552,11 @@ static void s2255_video_device_release(struct video_device *vdev)
+ 		container_of(vdev, struct s2255_vc, vdev);
+ 
+ 	dprintk(dev, 4, "%s, chnls: %d\n", __func__,
+-		atomic_read(&dev->num_channels));
++		refcount_read(&dev->num_channels));
+ 
+ 	v4l2_ctrl_handler_free(&vc->hdl);
+ 
+-	if (atomic_dec_and_test(&dev->num_channels))
++	if (refcount_dec_and_test(&dev->num_channels))
+ 		s2255_destroy(dev);
+ 	return;
+ }
+@@ -1661,7 +1661,7 @@ static int s2255_probe_v4l(struct s2255_dev *dev)
+ 				"failed to register video device!\n");
+ 			break;
+ 		}
+-		atomic_inc(&dev->num_channels);
++		refcount_inc(&dev->num_channels);
+ 		v4l2_info(&dev->v4l2_dev, "V4L2 device registered as %s\n",
+ 			  video_device_node_name(&vc->vdev));
+ 
+@@ -1669,11 +1669,11 @@ static int s2255_probe_v4l(struct s2255_dev *dev)
+ 	pr_info("Sensoray 2255 V4L driver Revision: %s\n",
+ 		S2255_VERSION);
+ 	/* if no channels registered, return error and probe will fail*/
+-	if (atomic_read(&dev->num_channels) == 0) {
++	if (refcount_read(&dev->num_channels) == 0) {
+ 		v4l2_device_unregister(&dev->v4l2_dev);
+ 		return ret;
+ 	}
+-	if (atomic_read(&dev->num_channels) != MAX_CHANNELS)
++	if (refcount_read(&dev->num_channels) != MAX_CHANNELS)
+ 		pr_warn("s2255: Not all channels available.\n");
+ 	return 0;
+ }
+@@ -2222,7 +2222,7 @@ static int s2255_probe(struct usb_interface *interface,
+ 		goto errorFWDATA1;
+ 	}
+ 
+-	atomic_set(&dev->num_channels, 0);
++	refcount_set(&dev->num_channels, 0);
+ 	dev->pid = id->idProduct;
+ 	dev->fw_data = kzalloc(sizeof(struct s2255_fw), GFP_KERNEL);
+ 	if (!dev->fw_data)
+@@ -2342,12 +2342,12 @@ static void s2255_disconnect(struct usb_interface *interface)
+ {
+ 	struct s2255_dev *dev = to_s2255_dev(usb_get_intfdata(interface));
+ 	int i;
+-	int channels = atomic_read(&dev->num_channels);
++	int channels = refcount_read(&dev->num_channels);
+ 	mutex_lock(&dev->lock);
+ 	v4l2_device_disconnect(&dev->v4l2_dev);
+ 	mutex_unlock(&dev->lock);
+ 	/*see comments in the uvc_driver.c usb disconnect function */
+-	atomic_inc(&dev->num_channels);
++	refcount_inc(&dev->num_channels);
+ 	/* unregister each video device. */
+ 	for (i = 0; i < channels; i++)
+ 		video_unregister_device(&dev->vc[i].vdev);
+@@ -2360,7 +2360,7 @@ static void s2255_disconnect(struct usb_interface *interface)
+ 		dev->vc[i].vidstatus_ready = 1;
+ 		wake_up(&dev->vc[i].wait_vidstatus);
+ 	}
+-	if (atomic_dec_and_test(&dev->num_channels))
++	if (refcount_dec_and_test(&dev->num_channels))
+ 		s2255_destroy(dev);
+ 	dev_info(&interface->dev, "%s\n", __func__);
+ }
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index c41c0ff611b1b..308fcbe394a5e 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -964,28 +964,32 @@ static int nand_fill_column_cycles(struct nand_chip *chip, u8 *addrs,
+ 				   unsigned int offset_in_page)
+ {
+ 	struct mtd_info *mtd = nand_to_mtd(chip);
++	bool ident_stage = !mtd->writesize;
+ 
+-	/* Make sure the offset is less than the actual page size. */
+-	if (offset_in_page > mtd->writesize + mtd->oobsize)
+-		return -EINVAL;
++	/* Bypass all checks during NAND identification */
++	if (likely(!ident_stage)) {
++		/* Make sure the offset is less than the actual page size. */
++		if (offset_in_page > mtd->writesize + mtd->oobsize)
++			return -EINVAL;
+ 
+-	/*
+-	 * On small page NANDs, there's a dedicated command to access the OOB
+-	 * area, and the column address is relative to the start of the OOB
+-	 * area, not the start of the page. Asjust the address accordingly.
+-	 */
+-	if (mtd->writesize <= 512 && offset_in_page >= mtd->writesize)
+-		offset_in_page -= mtd->writesize;
++		/*
++		 * On small page NANDs, there's a dedicated command to access the OOB
++		 * area, and the column address is relative to the start of the OOB
++		 * area, not the start of the page. Asjust the address accordingly.
++		 */
++		if (mtd->writesize <= 512 && offset_in_page >= mtd->writesize)
++			offset_in_page -= mtd->writesize;
+ 
+-	/*
+-	 * The offset in page is expressed in bytes, if the NAND bus is 16-bit
+-	 * wide, then it must be divided by 2.
+-	 */
+-	if (chip->options & NAND_BUSWIDTH_16) {
+-		if (WARN_ON(offset_in_page % 2))
+-			return -EINVAL;
++		/*
++		 * The offset in page is expressed in bytes, if the NAND bus is 16-bit
++		 * wide, then it must be divided by 2.
++		 */
++		if (chip->options & NAND_BUSWIDTH_16) {
++			if (WARN_ON(offset_in_page % 2))
++				return -EINVAL;
+ 
+-		offset_in_page /= 2;
++			offset_in_page /= 2;
++		}
+ 	}
+ 
+ 	addrs[0] = offset_in_page;
+@@ -994,7 +998,7 @@ static int nand_fill_column_cycles(struct nand_chip *chip, u8 *addrs,
+ 	 * Small page NANDs use 1 cycle for the columns, while large page NANDs
+ 	 * need 2
+ 	 */
+-	if (mtd->writesize <= 512)
++	if (!ident_stage && mtd->writesize <= 512)
+ 		return 1;
+ 
+ 	addrs[1] = offset_in_page >> 8;
+@@ -1189,16 +1193,19 @@ int nand_change_read_column_op(struct nand_chip *chip,
+ 			       unsigned int len, bool force_8bit)
+ {
+ 	struct mtd_info *mtd = nand_to_mtd(chip);
++	bool ident_stage = !mtd->writesize;
+ 
+ 	if (len && !buf)
+ 		return -EINVAL;
+ 
+-	if (offset_in_page + len > mtd->writesize + mtd->oobsize)
+-		return -EINVAL;
++	if (!ident_stage) {
++		if (offset_in_page + len > mtd->writesize + mtd->oobsize)
++			return -EINVAL;
+ 
+-	/* Small page NANDs do not support column change. */
+-	if (mtd->writesize <= 512)
+-		return -ENOTSUPP;
++		/* Small page NANDs do not support column change. */
++		if (mtd->writesize <= 512)
++			return -ENOTSUPP;
++	}
+ 
+ 	if (nand_has_exec_op(chip)) {
+ 		const struct nand_sdr_timings *sdr =
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index fe55c81608daa..fa0bf77870d43 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -1100,9 +1100,9 @@ static int bond_option_arp_ip_targets_set(struct bonding *bond,
+ 	__be32 target;
+ 
+ 	if (newval->string) {
+-		if (!in4_pton(newval->string+1, -1, (u8 *)&target, -1, NULL)) {
+-			netdev_err(bond->dev, "invalid ARP target %pI4 specified\n",
+-				   &target);
++		if (strlen(newval->string) < 1 ||
++		    !in4_pton(newval->string + 1, -1, (u8 *)&target, -1, NULL)) {
++			netdev_err(bond->dev, "invalid ARP target specified\n");
+ 			return ret;
+ 		}
+ 		if (newval->string[0] == '+')
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index 1f015b496a472..411b3adb1d9ea 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -114,6 +114,7 @@ static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err_liste
+ 
+ static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leafimx = {
+ 	.quirks = 0,
++	.family = KVASER_LEAF,
+ 	.ops = &kvaser_usb_leaf_dev_ops,
+ };
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index ac56bc175b51b..6a1ae774cfe99 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -116,8 +116,8 @@ struct mii_bus *mv88e6xxx_default_mdio_bus(struct mv88e6xxx_chip *chip)
+ {
+ 	struct mv88e6xxx_mdio_bus *mdio_bus;
+ 
+-	mdio_bus = list_first_entry(&chip->mdios, struct mv88e6xxx_mdio_bus,
+-				    list);
++	mdio_bus = list_first_entry_or_null(&chip->mdios,
++					    struct mv88e6xxx_mdio_bus, list);
+ 	if (!mdio_bus)
+ 		return NULL;
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+index 2a61229d3f976..c1fe9bcbb955f 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+@@ -1262,7 +1262,7 @@ enum {
+ 
+ struct bnx2x_fw_stats_req {
+ 	struct stats_query_header hdr;
+-	struct stats_query_entry query[FP_SB_MAX_E1x+
++	struct stats_query_entry query[FP_SB_MAX_E2 +
+ 		BNX2X_FIRST_QUEUE_QUERY_IDX];
+ };
+ 
+diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c
+index 5ea626b1e5783..1d7c0b872c594 100644
+--- a/drivers/net/ethernet/lantiq_etop.c
++++ b/drivers/net/ethernet/lantiq_etop.c
+@@ -213,8 +213,9 @@ ltq_etop_free_channel(struct net_device *dev, struct ltq_etop_chan *ch)
+ 	if (ch->dma.irq)
+ 		free_irq(ch->dma.irq, priv);
+ 	if (IS_RX(ch->idx)) {
+-		int desc;
+-		for (desc = 0; desc < LTQ_DESC_NUM; desc++)
++		struct ltq_dma_channel *dma = &ch->dma;
++
++		for (dma->desc = 0; dma->desc < LTQ_DESC_NUM; dma->desc++)
+ 			dev_kfree_skb_any(ch->skb[ch->dma.desc]);
+ 	}
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+index dc34e564c9192..46689d95254bf 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+@@ -54,8 +54,13 @@ enum npc_kpu_lb_ltype {
+ 	NPC_LT_LB_CUSTOM1 = 0xF,
+ };
+ 
++/* Don't modify ltypes up to IP6_EXT, otherwise length and checksum of IP
++ * headers may not be checked correctly. IPv4 ltypes and IPv6 ltypes must
++ * differ only at bit 0 so mask 0xE can be used to detect extended headers.
++ */
+ enum npc_kpu_lc_ltype {
+-	NPC_LT_LC_IP = 1,
++	NPC_LT_LC_PTP = 1,
++	NPC_LT_LC_IP,
+ 	NPC_LT_LC_IP_OPT,
+ 	NPC_LT_LC_IP6,
+ 	NPC_LT_LC_IP6_EXT,
+@@ -63,7 +68,6 @@ enum npc_kpu_lc_ltype {
+ 	NPC_LT_LC_RARP,
+ 	NPC_LT_LC_MPLS,
+ 	NPC_LT_LC_NSH,
+-	NPC_LT_LC_PTP,
+ 	NPC_LT_LC_FCOE,
+ 	NPC_LT_LC_CUSTOM0 = 0xE,
+ 	NPC_LT_LC_CUSTOM1 = 0xF,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 23b829f974de1..e8a2552fb690a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -1357,7 +1357,7 @@ static int rvu_check_rsrc_availability(struct rvu *rvu,
+ 		if (req->ssow > block->lf.max) {
+ 			dev_err(&rvu->pdev->dev,
+ 				"Func 0x%x: Invalid SSOW req, %d > max %d\n",
+-				 pcifunc, req->sso, block->lf.max);
++				 pcifunc, req->ssow, block->lf.max);
+ 			return -EINVAL;
+ 		}
+ 		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->addr);
+diff --git a/drivers/net/ethernet/micrel/ks8851_common.c b/drivers/net/ethernet/micrel/ks8851_common.c
+index 3d0ac7f3c87e1..a4b4fa78ef921 100644
+--- a/drivers/net/ethernet/micrel/ks8851_common.c
++++ b/drivers/net/ethernet/micrel/ks8851_common.c
+@@ -501,6 +501,7 @@ static int ks8851_net_open(struct net_device *dev)
+ 	ks8851_wrreg16(ks, KS_IER, ks->rc_ier);
+ 
+ 	ks->queued_len = 0;
++	ks->tx_space = ks8851_rdreg16(ks, KS_TXMIR);
+ 	netif_start_queue(ks->netdev);
+ 
+ 	netif_dbg(ks, ifup, ks->netdev, "network device up\n");
+@@ -1057,7 +1058,6 @@ int ks8851_probe_common(struct net_device *netdev, struct device *dev,
+ 	int ret;
+ 
+ 	ks->netdev = netdev;
+-	ks->tx_space = 6144;
+ 
+ 	gpio = of_get_named_gpio_flags(dev->of_node, "reset-gpios", 0, NULL);
+ 	if (gpio == -EPROBE_DEFER)
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index b825c6a9b6dde..e2bca6fa08220 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -70,6 +70,7 @@
+ #define MPHDRLEN_SSN	4	/* ditto with short sequence numbers */
+ 
+ #define PPP_PROTO_LEN	2
++#define PPP_LCP_HDRLEN	4
+ 
+ /*
+  * An instance of /dev/ppp can be associated with either a ppp
+@@ -489,6 +490,15 @@ static ssize_t ppp_read(struct file *file, char __user *buf,
+ 	return ret;
+ }
+ 
++static bool ppp_check_packet(struct sk_buff *skb, size_t count)
++{
++	/* LCP packets must include LCP header which 4 bytes long:
++	 * 1-byte code, 1-byte identifier, and 2-byte length.
++	 */
++	return get_unaligned_be16(skb->data) != PPP_LCP ||
++		count >= PPP_PROTO_LEN + PPP_LCP_HDRLEN;
++}
++
+ static ssize_t ppp_write(struct file *file, const char __user *buf,
+ 			 size_t count, loff_t *ppos)
+ {
+@@ -511,6 +521,11 @@ static ssize_t ppp_write(struct file *file, const char __user *buf,
+ 		kfree_skb(skb);
+ 		goto out;
+ 	}
++	ret = -EINVAL;
++	if (unlikely(!ppp_check_packet(skb, count))) {
++		kfree_skb(skb);
++		goto out;
++	}
+ 
+ 	switch (pf->kind) {
+ 	case INTERFACE:
+diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
+index 0ba714ca5185c..4b8528206cc8a 100644
+--- a/drivers/net/wireguard/allowedips.c
++++ b/drivers/net/wireguard/allowedips.c
+@@ -15,8 +15,8 @@ static void swap_endian(u8 *dst, const u8 *src, u8 bits)
+ 	if (bits == 32) {
+ 		*(u32 *)dst = be32_to_cpu(*(const __be32 *)src);
+ 	} else if (bits == 128) {
+-		((u64 *)dst)[0] = be64_to_cpu(((const __be64 *)src)[0]);
+-		((u64 *)dst)[1] = be64_to_cpu(((const __be64 *)src)[1]);
++		((u64 *)dst)[0] = get_unaligned_be64(src);
++		((u64 *)dst)[1] = get_unaligned_be64(src + 8);
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireguard/queueing.h b/drivers/net/wireguard/queueing.h
+index a2e702f8c5826..5d2480d90fdb9 100644
+--- a/drivers/net/wireguard/queueing.h
++++ b/drivers/net/wireguard/queueing.h
+@@ -126,10 +126,10 @@ static inline int wg_cpumask_choose_online(int *stored_cpu, unsigned int id)
+  */
+ static inline int wg_cpumask_next_online(int *last_cpu)
+ {
+-	int cpu = cpumask_next(*last_cpu, cpu_online_mask);
++	int cpu = cpumask_next(READ_ONCE(*last_cpu), cpu_online_mask);
+ 	if (cpu >= nr_cpu_ids)
+ 		cpu = cpumask_first(cpu_online_mask);
+-	*last_cpu = cpu;
++	WRITE_ONCE(*last_cpu, cpu);
+ 	return cpu;
+ }
+ 
+diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c
+index 0d48e0f4a1ba3..26e09c30d596c 100644
+--- a/drivers/net/wireguard/send.c
++++ b/drivers/net/wireguard/send.c
+@@ -222,7 +222,7 @@ void wg_packet_send_keepalive(struct wg_peer *peer)
+ {
+ 	struct sk_buff *skb;
+ 
+-	if (skb_queue_empty(&peer->staged_packet_queue)) {
++	if (skb_queue_empty_lockless(&peer->staged_packet_queue)) {
+ 		skb = alloc_skb(DATA_PACKET_HEAD_ROOM + MESSAGE_MINIMUM_LENGTH,
+ 				GFP_ATOMIC);
+ 		if (unlikely(!skb))
+diff --git a/drivers/net/wireless/microchip/wilc1000/hif.c b/drivers/net/wireless/microchip/wilc1000/hif.c
+index 457386f9de990..3f167bf4eef35 100644
+--- a/drivers/net/wireless/microchip/wilc1000/hif.c
++++ b/drivers/net/wireless/microchip/wilc1000/hif.c
+@@ -364,7 +364,8 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 	struct ieee80211_p2p_noa_attr noa_attr;
+ 	const struct cfg80211_bss_ies *ies;
+ 	struct wilc_join_bss_param *param;
+-	u8 rates_len = 0, ies_len;
++	u8 rates_len = 0;
++	int ies_len;
+ 	int ret;
+ 
+ 	param = kzalloc(sizeof(*param), GFP_KERNEL);
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 9f59f93b70e26..54ca60db65473 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -420,7 +420,7 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
+ 		int node, srcu_idx;
+ 
+ 		srcu_idx = srcu_read_lock(&head->srcu);
+-		for_each_node(node)
++		for_each_online_node(node)
+ 			__nvme_find_path(head, node);
+ 		srcu_read_unlock(&head->srcu, srcu_idx);
+ 	}
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 5242feda5471a..a7131f4752e28 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -844,7 +844,8 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
+ 		struct bio_vec bv = req_bvec(req);
+ 
+ 		if (!is_pci_p2pdma_page(bv.bv_page)) {
+-			if (bv.bv_offset + bv.bv_len <= NVME_CTRL_PAGE_SIZE * 2)
++			if ((bv.bv_offset & (NVME_CTRL_PAGE_SIZE - 1)) +
++			     bv.bv_len <= NVME_CTRL_PAGE_SIZE * 2)
+ 				return nvme_setup_prp_simple(dev, req,
+ 							     &cmnd->rw, &bv);
+ 
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 59109eb8e8e46..a04bb02c1251b 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -795,6 +795,15 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
+ 	wait_for_completion(&sq->free_done);
+ 	percpu_ref_exit(&sq->ref);
+ 
++	/*
++	 * we must reference the ctrl again after waiting for inflight IO
++	 * to complete. Because admin connect may have sneaked in after we
++	 * store sq->ctrl locally, but before we killed the percpu_ref. the
++	 * admin connect allocates and assigns sq->ctrl, which now needs a
++	 * final ref put, as this ctrl is going away.
++	 */
++	ctrl = sq->ctrl;
++
+ 	if (ctrl) {
+ 		/*
+ 		 * The teardown flow may take some time, and the host may not
+diff --git a/drivers/nvmem/meson-efuse.c b/drivers/nvmem/meson-efuse.c
+index ba2714bef8d0e..cf1b249e67ca2 100644
+--- a/drivers/nvmem/meson-efuse.c
++++ b/drivers/nvmem/meson-efuse.c
+@@ -18,18 +18,24 @@ static int meson_efuse_read(void *context, unsigned int offset,
+ 			    void *val, size_t bytes)
+ {
+ 	struct meson_sm_firmware *fw = context;
++	int ret;
+ 
+-	return meson_sm_call_read(fw, (u8 *)val, bytes, SM_EFUSE_READ, offset,
+-				  bytes, 0, 0, 0);
++	ret = meson_sm_call_read(fw, (u8 *)val, bytes, SM_EFUSE_READ, offset,
++				 bytes, 0, 0, 0);
++
++	return ret < 0 ? ret : 0;
+ }
+ 
+ static int meson_efuse_write(void *context, unsigned int offset,
+ 			     void *val, size_t bytes)
+ {
+ 	struct meson_sm_firmware *fw = context;
++	int ret;
++
++	ret = meson_sm_call_write(fw, (u8 *)val, bytes, SM_EFUSE_WRITE, offset,
++				  bytes, 0, 0, 0);
+ 
+-	return meson_sm_call_write(fw, (u8 *)val, bytes, SM_EFUSE_WRITE, offset,
+-				   bytes, 0, 0, 0);
++	return ret < 0 ? ret : 0;
+ }
+ 
+ static const struct of_device_id meson_efuse_match[] = {
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index fbaa618594628..dce2d26b1d0fc 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -857,6 +857,22 @@ static const struct ts_dmi_data schneider_sct101ctm_data = {
+ 	.properties	= schneider_sct101ctm_props,
+ };
+ 
++static const struct property_entry globalspace_solt_ivw116_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 7),
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 22),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1723),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1077),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-globalspace-solt-ivw116.fw"),
++	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
++	PROPERTY_ENTRY_BOOL("silead,home-button"),
++	{ }
++};
++
++static const struct ts_dmi_data globalspace_solt_ivw116_data = {
++	.acpi_name	= "MSSL1680:00",
++	.properties	= globalspace_solt_ivw116_props,
++};
++
+ static const struct property_entry techbite_arc_11_6_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-min-x", 5),
+ 	PROPERTY_ENTRY_U32("touchscreen-min-y", 7),
+@@ -1261,6 +1277,17 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BIOS_DATE, "04/24/2018"),
+ 		},
+ 	},
++	{
++		/* Jumper EZpad 6s Pro */
++		.driver_data = (void *)&jumper_ezpad_6_pro_b_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Jumper"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Ezpad"),
++			/* Above matches are too generic, add bios match */
++			DMI_MATCH(DMI_BIOS_VERSION, "E.WSA116_8.E1.042.bin"),
++			DMI_MATCH(DMI_BIOS_DATE, "01/08/2020"),
++		},
++	},
+ 	{
+ 		/* Jumper EZpad 6 m4 */
+ 		.driver_data = (void *)&jumper_ezpad_6_m4_data,
+@@ -1490,6 +1517,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "SCT101CTM"),
+ 		},
+ 	},
++	{
++		/* GlobalSpace SoLT IVW 11.6" */
++		.driver_data = (void *)&globalspace_solt_ivw116_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Globalspace Tech Pvt Ltd"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "SolTIVW"),
++			DMI_MATCH(DMI_PRODUCT_SKU, "PN20170413488"),
++		},
++	},
+ 	{
+ 		/* Techbite Arc 11.6 */
+ 		.driver_data = (void *)&techbite_arc_11_6_data,
+diff --git a/drivers/s390/crypto/pkey_api.c b/drivers/s390/crypto/pkey_api.c
+index 69882ff4db107..362c97d9bd5b1 100644
+--- a/drivers/s390/crypto/pkey_api.c
++++ b/drivers/s390/crypto/pkey_api.c
+@@ -1155,7 +1155,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
+ 		if (rc)
+ 			break;
+ 		if (copy_to_user(ucs, &kcs, sizeof(kcs)))
+-			return -EFAULT;
++			rc = -EFAULT;
+ 		memzero_explicit(&kcs, sizeof(kcs));
+ 		break;
+ 	}
+@@ -1187,7 +1187,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
+ 		if (rc)
+ 			break;
+ 		if (copy_to_user(ucp, &kcp, sizeof(kcp)))
+-			return -EFAULT;
++			rc = -EFAULT;
+ 		memzero_explicit(&kcp, sizeof(kcp));
+ 		break;
+ 	}
+diff --git a/drivers/scsi/qedf/qedf_io.c b/drivers/scsi/qedf/qedf_io.c
+index 1f8e81296beb7..70f920f4b7a19 100644
+--- a/drivers/scsi/qedf/qedf_io.c
++++ b/drivers/scsi/qedf/qedf_io.c
+@@ -2351,9 +2351,6 @@ static int qedf_execute_tmf(struct qedf_rport *fcport, struct scsi_cmnd *sc_cmd,
+ 	io_req->fcport = fcport;
+ 	io_req->cmd_type = QEDF_TASK_MGMT_CMD;
+ 
+-	/* Record which cpu this request is associated with */
+-	io_req->cpu = smp_processor_id();
+-
+ 	/* Set TM flags */
+ 	io_req->io_req_flags = QEDF_READ;
+ 	io_req->data_xfer_len = 0;
+@@ -2375,6 +2372,9 @@ static int qedf_execute_tmf(struct qedf_rport *fcport, struct scsi_cmnd *sc_cmd,
+ 
+ 	spin_lock_irqsave(&fcport->rport_lock, flags);
+ 
++	/* Record which cpu this request is associated with */
++	io_req->cpu = smp_processor_id();
++
+ 	sqe_idx = qedf_get_sqe_idx(fcport);
+ 	sqe = &fcport->sq[sqe_idx];
+ 	memset(sqe, 0, sizeof(struct fcoe_wqe));
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index 2f8a1225b6976..1508e0f00dbc6 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -291,6 +291,20 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 	if (ifp->desc.bNumEndpoints >= num_ep)
+ 		goto skip_to_next_endpoint_or_interface_descriptor;
+ 
++	/* Save a copy of the descriptor and use it instead of the original */
++	endpoint = &ifp->endpoint[ifp->desc.bNumEndpoints];
++	memcpy(&endpoint->desc, d, n);
++	d = &endpoint->desc;
++
++	/* Clear the reserved bits in bEndpointAddress */
++	i = d->bEndpointAddress &
++			(USB_ENDPOINT_DIR_MASK | USB_ENDPOINT_NUMBER_MASK);
++	if (i != d->bEndpointAddress) {
++		dev_notice(ddev, "config %d interface %d altsetting %d has an endpoint descriptor with address 0x%X, changing to 0x%X\n",
++		    cfgno, inum, asnum, d->bEndpointAddress, i);
++		endpoint->desc.bEndpointAddress = i;
++	}
++
+ 	/* Check for duplicate endpoint addresses */
+ 	if (config_endpoint_is_duplicate(config, inum, asnum, d)) {
+ 		dev_notice(ddev, "config %d interface %d altsetting %d has a duplicate endpoint with address 0x%X, skipping\n",
+@@ -308,10 +322,8 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
+ 		}
+ 	}
+ 
+-	endpoint = &ifp->endpoint[ifp->desc.bNumEndpoints];
++	/* Accept this endpoint */
+ 	++ifp->desc.bNumEndpoints;
+-
+-	memcpy(&endpoint->desc, d, n);
+ 	INIT_LIST_HEAD(&endpoint->urb_list);
+ 
+ 	/*
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 856947620f140..3a54d76c55a34 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -504,6 +504,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x1b1c, 0x1b38), .driver_info = USB_QUIRK_DELAY_INIT |
+ 	  USB_QUIRK_DELAY_CTRL_MSG },
+ 
++	/* START BP-850k Printer */
++	{ USB_DEVICE(0x1bc3, 0x0003), .driver_info = USB_QUIRK_NO_SET_INTF },
++
+ 	/* MIDI keyboard WORLDE MINI */
+ 	{ USB_DEVICE(0x1c75, 0x0204), .driver_info =
+ 			USB_QUIRK_CONFIG_INTF_STRINGS },
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index d51ea1c052f24..6bb69a4e64704 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -104,9 +104,12 @@ static int usb_string_copy(const char *s, char **s_copy)
+ 	int ret;
+ 	char *str;
+ 	char *copy = *s_copy;
++
+ 	ret = strlen(s);
+ 	if (ret > USB_MAX_STRING_LEN)
+ 		return -EOVERFLOW;
++	if (ret < 1)
++		return -EINVAL;
+ 
+ 	if (copy) {
+ 		str = copy;
+diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
+index 48a9a0476a453..e568bf566ab0b 100644
+--- a/drivers/usb/serial/mos7840.c
++++ b/drivers/usb/serial/mos7840.c
+@@ -1764,6 +1764,49 @@ static int mos7840_port_remove(struct usb_serial_port *port)
+ 	return 0;
+ }
+ 
++static int mos7840_suspend(struct usb_serial *serial, pm_message_t message)
++{
++	struct moschip_port *mos7840_port;
++	struct usb_serial_port *port;
++	int i;
++
++	for (i = 0; i < serial->num_ports; ++i) {
++		port = serial->port[i];
++		if (!tty_port_initialized(&port->port))
++			continue;
++
++		mos7840_port = usb_get_serial_port_data(port);
++
++		usb_kill_urb(mos7840_port->read_urb);
++		mos7840_port->read_urb_busy = false;
++	}
++
++	return 0;
++}
++
++static int mos7840_resume(struct usb_serial *serial)
++{
++	struct moschip_port *mos7840_port;
++	struct usb_serial_port *port;
++	int res;
++	int i;
++
++	for (i = 0; i < serial->num_ports; ++i) {
++		port = serial->port[i];
++		if (!tty_port_initialized(&port->port))
++			continue;
++
++		mos7840_port = usb_get_serial_port_data(port);
++
++		mos7840_port->read_urb_busy = true;
++		res = usb_submit_urb(mos7840_port->read_urb, GFP_NOIO);
++		if (res)
++			mos7840_port->read_urb_busy = false;
++	}
++
++	return 0;
++}
++
+ static struct usb_serial_driver moschip7840_4port_device = {
+ 	.driver = {
+ 		   .owner = THIS_MODULE,
+@@ -1792,6 +1835,8 @@ static struct usb_serial_driver moschip7840_4port_device = {
+ 	.port_probe = mos7840_port_probe,
+ 	.port_remove = mos7840_port_remove,
+ 	.read_bulk_callback = mos7840_bulk_in_callback,
++	.suspend = mos7840_suspend,
++	.resume = mos7840_resume,
+ };
+ 
+ static struct usb_serial_driver * const serial_drivers[] = {
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 400e3e10b435c..73d97f7bf15bc 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1425,6 +1425,10 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1901, 0xff),	/* Telit LN940 (MBIM) */
+ 	  .driver_info = NCTRL(0) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x3000, 0xff),	/* Telit FN912 */
++	  .driver_info = RSVD(0) | NCTRL(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x3001, 0xff),	/* Telit FN912 */
++	  .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7010, 0xff),	/* Telit LE910-S1 (RNDIS) */
+ 	  .driver_info = NCTRL(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7011, 0xff),	/* Telit LE910-S1 (ECM) */
+@@ -1433,6 +1437,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x701b, 0xff),	/* Telit LE910R1 (ECM) */
+ 	  .driver_info = NCTRL(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x9000, 0xff),	/* Telit generic core-dump device */
++	  .driver_info = NCTRL(0) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9010),				/* Telit SBL FN980 flashing device */
+ 	  .driver_info = NCTRL(0) | ZLP },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9200),				/* Telit LE910S1 flashing device */
+@@ -2224,6 +2230,10 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_7106_2COM, 0x02, 0x02, 0x01) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM2, 0xff, 0x02, 0x01) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM2, 0xff, 0x00, 0x00) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x7126, 0xff, 0x00, 0x00),
++	  .driver_info = NCTRL(2) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x7127, 0xff, 0x00, 0x00),
++	  .driver_info = NCTRL(2) | NCTRL(3) | NCTRL(4) },
+ 	{ USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) },
+ 	{ USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MPL200),
+ 	  .driver_info = RSVD(1) | RSVD(4) },
+@@ -2284,6 +2294,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe0f0, 0xff),			/* Foxconn T99W373 MBIM */
+ 	  .driver_info = RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe145, 0xff),			/* Foxconn T99W651 RNDIS */
++	  .driver_info = RSVD(5) | RSVD(6) },
+ 	{ USB_DEVICE(0x1508, 0x1001),						/* Fibocom NL668 (IOT version) */
+ 	  .driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
+ 	{ USB_DEVICE(0x1782, 0x4d10) },						/* Fibocom L610 (AT mode) */
+@@ -2321,6 +2333,32 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x0115, 0xff),			/* Rolling RW135-GL (laptop MBIM) */
+ 	  .driver_info = RSVD(5) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x0802, 0xff),			/* Rolling RW350-GL (laptop MBIM) */
++	  .driver_info = RSVD(5) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0100, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WWD for Global */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0100, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0100, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0101, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WRD for Global SKU */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0101, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0101, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0106, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WRD for China SKU */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0106, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0106, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0111, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WWD for SA */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0111, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0111, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0112, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WWD for EU */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0112, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0112, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0113, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WWD for NA */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0113, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0113, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0115, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WWD for China EDU */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0115, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0115, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0xff, 0x30) },	/* NetPrisma LCUK54-WWD for Golbal EDU */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0x00, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0xff, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) },
+diff --git a/fs/dcache.c b/fs/dcache.c
+index 976c7474d62a9..406a71abb1b59 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -329,7 +329,11 @@ static inline void __d_clear_type_and_inode(struct dentry *dentry)
+ 	flags &= ~(DCACHE_ENTRY_TYPE | DCACHE_FALLTHRU);
+ 	WRITE_ONCE(dentry->d_flags, flags);
+ 	dentry->d_inode = NULL;
+-	if (dentry->d_flags & DCACHE_LRU_LIST)
++	/*
++	 * The negative counter only tracks dentries on the LRU. Don't inc if
++	 * d_lru is on another list.
++	 */
++	if ((flags & (DCACHE_LRU_LIST|DCACHE_SHRINK_LIST)) == DCACHE_LRU_LIST)
+ 		this_cpu_inc(nr_dentry_negative);
+ }
+ 
+@@ -1940,9 +1944,11 @@ static void __d_instantiate(struct dentry *dentry, struct inode *inode)
+ 
+ 	spin_lock(&dentry->d_lock);
+ 	/*
+-	 * Decrement negative dentry count if it was in the LRU list.
++	 * The negative counter only tracks dentries on the LRU. Don't dec if
++	 * d_lru is on another list.
+ 	 */
+-	if (dentry->d_flags & DCACHE_LRU_LIST)
++	if ((dentry->d_flags &
++	     (DCACHE_LRU_LIST|DCACHE_SHRINK_LIST)) == DCACHE_LRU_LIST)
+ 		this_cpu_dec(nr_dentry_negative);
+ 	hlist_add_head(&dentry->d_u.d_alias, &inode->i_dentry);
+ 	raw_write_seqcount_begin(&dentry->d_seq);
+diff --git a/fs/jffs2/super.c b/fs/jffs2/super.c
+index 81ca58c10b728..40cc5e62907c1 100644
+--- a/fs/jffs2/super.c
++++ b/fs/jffs2/super.c
+@@ -58,6 +58,7 @@ static void jffs2_i_init_once(void *foo)
+ 	struct jffs2_inode_info *f = foo;
+ 
+ 	mutex_init(&f->sem);
++	f->target = NULL;
+ 	inode_init_once(&f->vfs_inode);
+ }
+ 
+diff --git a/fs/locks.c b/fs/locks.c
+index b0753c8871fb2..843fa3d3375d4 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -1392,9 +1392,9 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
+ 		locks_wake_up_blocks(left);
+ 	}
+  out:
++	trace_posix_lock_inode(inode, request, error);
+ 	spin_unlock(&ctx->flc_lock);
+ 	percpu_up_read(&file_rwsem);
+-	trace_posix_lock_inode(inode, request, error);
+ 	/*
+ 	 * Free any unused locks.
+ 	 */
+diff --git a/fs/nilfs2/alloc.c b/fs/nilfs2/alloc.c
+index 279d945d4ebee..2dc5fae6a6ee7 100644
+--- a/fs/nilfs2/alloc.c
++++ b/fs/nilfs2/alloc.c
+@@ -377,11 +377,12 @@ void *nilfs_palloc_block_get_entry(const struct inode *inode, __u64 nr,
+  * @target: offset number of an entry in the group (start point)
+  * @bsize: size in bits
+  * @lock: spin lock protecting @bitmap
++ * @wrap: whether to wrap around
+  */
+ static int nilfs_palloc_find_available_slot(unsigned char *bitmap,
+ 					    unsigned long target,
+ 					    unsigned int bsize,
+-					    spinlock_t *lock)
++					    spinlock_t *lock, bool wrap)
+ {
+ 	int pos, end = bsize;
+ 
+@@ -397,6 +398,8 @@ static int nilfs_palloc_find_available_slot(unsigned char *bitmap,
+ 
+ 		end = target;
+ 	}
++	if (!wrap)
++		return -ENOSPC;
+ 
+ 	/* wrap around */
+ 	for (pos = 0; pos < end; pos++) {
+@@ -495,9 +498,10 @@ int nilfs_palloc_count_max_entries(struct inode *inode, u64 nused, u64 *nmaxp)
+  * nilfs_palloc_prepare_alloc_entry - prepare to allocate a persistent object
+  * @inode: inode of metadata file using this allocator
+  * @req: nilfs_palloc_req structure exchanged for the allocation
++ * @wrap: whether to wrap around
+  */
+ int nilfs_palloc_prepare_alloc_entry(struct inode *inode,
+-				     struct nilfs_palloc_req *req)
++				     struct nilfs_palloc_req *req, bool wrap)
+ {
+ 	struct buffer_head *desc_bh, *bitmap_bh;
+ 	struct nilfs_palloc_group_desc *desc;
+@@ -516,7 +520,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode,
+ 	entries_per_group = nilfs_palloc_entries_per_group(inode);
+ 
+ 	for (i = 0; i < ngroups; i += n) {
+-		if (group >= ngroups) {
++		if (group >= ngroups && wrap) {
+ 			/* wrap around */
+ 			group = 0;
+ 			maxgroup = nilfs_palloc_group(inode, req->pr_entry_nr,
+@@ -541,7 +545,13 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode,
+ 				bitmap = bitmap_kaddr + bh_offset(bitmap_bh);
+ 				pos = nilfs_palloc_find_available_slot(
+ 					bitmap, group_offset,
+-					entries_per_group, lock);
++					entries_per_group, lock, wrap);
++				/*
++				 * Since the search for a free slot in the
++				 * second and subsequent bitmap blocks always
++				 * starts from the beginning, the wrap flag
++				 * only has an effect on the first search.
++				 */
+ 				if (pos >= 0) {
+ 					/* found a free entry */
+ 					nilfs_palloc_group_desc_add_entries(
+diff --git a/fs/nilfs2/alloc.h b/fs/nilfs2/alloc.h
+index 0303c3968cee0..071fc620264ed 100644
+--- a/fs/nilfs2/alloc.h
++++ b/fs/nilfs2/alloc.h
+@@ -50,8 +50,8 @@ struct nilfs_palloc_req {
+ 	struct buffer_head *pr_entry_bh;
+ };
+ 
+-int nilfs_palloc_prepare_alloc_entry(struct inode *,
+-				     struct nilfs_palloc_req *);
++int nilfs_palloc_prepare_alloc_entry(struct inode *inode,
++				     struct nilfs_palloc_req *req, bool wrap);
+ void nilfs_palloc_commit_alloc_entry(struct inode *,
+ 				     struct nilfs_palloc_req *);
+ void nilfs_palloc_abort_alloc_entry(struct inode *, struct nilfs_palloc_req *);
+diff --git a/fs/nilfs2/dat.c b/fs/nilfs2/dat.c
+index 22b1ca5c379da..9b63cc42caac5 100644
+--- a/fs/nilfs2/dat.c
++++ b/fs/nilfs2/dat.c
+@@ -75,7 +75,7 @@ int nilfs_dat_prepare_alloc(struct inode *dat, struct nilfs_palloc_req *req)
+ {
+ 	int ret;
+ 
+-	ret = nilfs_palloc_prepare_alloc_entry(dat, req);
++	ret = nilfs_palloc_prepare_alloc_entry(dat, req, true);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/fs/nilfs2/dir.c b/fs/nilfs2/dir.c
+index 552234ef22fe7..5c0e280c83eea 100644
+--- a/fs/nilfs2/dir.c
++++ b/fs/nilfs2/dir.c
+@@ -143,6 +143,9 @@ static bool nilfs_check_page(struct page *page)
+ 			goto Enamelen;
+ 		if (((offs + rec_len - 1) ^ offs) & ~(chunk_size-1))
+ 			goto Espan;
++		if (unlikely(p->inode &&
++			     NILFS_PRIVATE_INODE(le64_to_cpu(p->inode))))
++			goto Einumber;
+ 	}
+ 	if (offs != limit)
+ 		goto Eend;
+@@ -168,6 +171,9 @@ static bool nilfs_check_page(struct page *page)
+ 	goto bad_entry;
+ Espan:
+ 	error = "directory entry across blocks";
++	goto bad_entry;
++Einumber:
++	error = "disallowed inode number";
+ bad_entry:
+ 	nilfs_error(sb,
+ 		    "bad entry in directory #%lu: %s - offset=%lu, inode=%lu, rec_len=%d, name_len=%d",
+@@ -390,11 +396,39 @@ nilfs_find_entry(struct inode *dir, const struct qstr *qstr,
+ 
+ struct nilfs_dir_entry *nilfs_dotdot(struct inode *dir, struct page **p)
+ {
+-	struct nilfs_dir_entry *de = nilfs_get_page(dir, 0, p);
++	struct page *page;
++	struct nilfs_dir_entry *de, *next_de;
++	size_t limit;
++	char *msg;
+ 
++	de = nilfs_get_page(dir, 0, &page);
+ 	if (IS_ERR(de))
+ 		return NULL;
+-	return nilfs_next_entry(de);
++
++	limit = nilfs_last_byte(dir, 0);  /* is a multiple of chunk size */
++	if (unlikely(!limit || le64_to_cpu(de->inode) != dir->i_ino ||
++		     !nilfs_match(1, ".", de))) {
++		msg = "missing '.'";
++		goto fail;
++	}
++
++	next_de = nilfs_next_entry(de);
++	/*
++	 * If "next_de" has not reached the end of the chunk, there is
++	 * at least one more record.  Check whether it matches "..".
++	 */
++	if (unlikely((char *)next_de == (char *)de + nilfs_chunk_size(dir) ||
++		     !nilfs_match(2, "..", next_de))) {
++		msg = "missing '..'";
++		goto fail;
++	}
++	*p = page;
++	return next_de;
++
++fail:
++	nilfs_error(dir->i_sb, "directory #%lu %s", dir->i_ino, msg);
++	nilfs_put_page(page);
++	return NULL;
+ }
+ 
+ ino_t nilfs_inode_by_name(struct inode *dir, const struct qstr *qstr)
+diff --git a/fs/nilfs2/ifile.c b/fs/nilfs2/ifile.c
+index 02727ed3a7c6a..9ee8d006f1a2b 100644
+--- a/fs/nilfs2/ifile.c
++++ b/fs/nilfs2/ifile.c
+@@ -55,13 +55,10 @@ int nilfs_ifile_create_inode(struct inode *ifile, ino_t *out_ino,
+ 	struct nilfs_palloc_req req;
+ 	int ret;
+ 
+-	req.pr_entry_nr = 0;  /*
+-			       * 0 says find free inode from beginning
+-			       * of a group. dull code!!
+-			       */
++	req.pr_entry_nr = NILFS_FIRST_INO(ifile->i_sb);
+ 	req.pr_entry_bh = NULL;
+ 
+-	ret = nilfs_palloc_prepare_alloc_entry(ifile, &req);
++	ret = nilfs_palloc_prepare_alloc_entry(ifile, &req, false);
+ 	if (!ret) {
+ 		ret = nilfs_palloc_get_entry_block(ifile, req.pr_entry_nr, 1,
+ 						   &req.pr_entry_bh);
+diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h
+index ace27a89fbb07..3f3971e0292da 100644
+--- a/fs/nilfs2/nilfs.h
++++ b/fs/nilfs2/nilfs.h
+@@ -116,9 +116,15 @@ enum {
+ #define NILFS_FIRST_INO(sb) (((struct the_nilfs *)sb->s_fs_info)->ns_first_ino)
+ 
+ #define NILFS_MDT_INODE(sb, ino) \
+-	((ino) < NILFS_FIRST_INO(sb) && (NILFS_MDT_INO_BITS & BIT(ino)))
++	((ino) < NILFS_USER_INO && (NILFS_MDT_INO_BITS & BIT(ino)))
+ #define NILFS_VALID_INODE(sb, ino) \
+-	((ino) >= NILFS_FIRST_INO(sb) || (NILFS_SYS_INO_BITS & BIT(ino)))
++	((ino) >= NILFS_FIRST_INO(sb) ||				\
++	 ((ino) < NILFS_USER_INO && (NILFS_SYS_INO_BITS & BIT(ino))))
++
++#define NILFS_PRIVATE_INODE(ino) ({					\
++	ino_t __ino = (ino);						\
++	((__ino) < NILFS_USER_INO && (__ino) != NILFS_ROOT_INO &&	\
++	 (__ino) != NILFS_SKETCH_INO); })
+ 
+ /**
+  * struct nilfs_transaction_info: context information for synchronization
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index a8374d89da7c8..09ba8bbaf6cad 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -452,6 +452,12 @@ static int nilfs_store_disk_layout(struct the_nilfs *nilfs,
+ 	}
+ 
+ 	nilfs->ns_first_ino = le32_to_cpu(sbp->s_first_ino);
++	if (nilfs->ns_first_ino < NILFS_USER_INO) {
++		nilfs_err(nilfs->ns_sb,
++			  "too small lower limit for non-reserved inode numbers: %u",
++			  nilfs->ns_first_ino);
++		return -EINVAL;
++	}
+ 
+ 	nilfs->ns_blocks_per_segment = le32_to_cpu(sbp->s_blocks_per_segment);
+ 	if (nilfs->ns_blocks_per_segment < NILFS_SEG_MIN_BLOCKS) {
+diff --git a/fs/nilfs2/the_nilfs.h b/fs/nilfs2/the_nilfs.h
+index 01914089e76df..c1eea04052e65 100644
+--- a/fs/nilfs2/the_nilfs.h
++++ b/fs/nilfs2/the_nilfs.h
+@@ -182,7 +182,7 @@ struct the_nilfs {
+ 	unsigned long		ns_nrsvsegs;
+ 	unsigned long		ns_first_data_block;
+ 	int			ns_inode_size;
+-	int			ns_first_ino;
++	unsigned int		ns_first_ino;
+ 	u32			ns_crc_seed;
+ 
+ 	/* /sys/fs/<nilfs>/<device> */
+diff --git a/fs/orangefs/super.c b/fs/orangefs/super.c
+index 2f2e430461b21..b48aef43b51d5 100644
+--- a/fs/orangefs/super.c
++++ b/fs/orangefs/super.c
+@@ -200,7 +200,8 @@ static int orangefs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 		     (long)new_op->downcall.resp.statfs.files_avail);
+ 
+ 	buf->f_type = sb->s_magic;
+-	memcpy(&buf->f_fsid, &ORANGEFS_SB(sb)->fs_id, sizeof(buf->f_fsid));
++	buf->f_fsid.val[0] = ORANGEFS_SB(sb)->fs_id;
++	buf->f_fsid.val[1] = ORANGEFS_SB(sb)->id;
+ 	buf->f_bsize = new_op->downcall.resp.statfs.block_size;
+ 	buf->f_namelen = ORANGEFS_NAME_MAX;
+ 
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index a75faf437e750..340f4fef5b5ab 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1800,6 +1800,7 @@ int sock_map_get_from_fd(const union bpf_attr *attr, struct bpf_prog *prog);
+ int sock_map_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype);
+ int sock_map_update_elem_sys(struct bpf_map *map, void *key, void *value, u64 flags);
+ void sock_map_unhash(struct sock *sk);
++void sock_map_destroy(struct sock *sk);
+ void sock_map_close(struct sock *sk, long timeout);
+ #else
+ static inline int sock_map_prog_update(struct bpf_map *map,
+diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
+index 08eb06301791d..3982c2dc541eb 100644
+--- a/include/linux/compiler_attributes.h
++++ b/include/linux/compiler_attributes.h
+@@ -269,6 +269,18 @@
+  */
+ #define __section(section)              __attribute__((__section__(section)))
+ 
++/*
++ * Optional: only supported since gcc >= 12
++ *
++ *   gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-uninitialized-variable-attribute
++ * clang: https://clang.llvm.org/docs/AttributeReference.html#uninitialized
++ */
++#if __has_attribute(__uninitialized__)
++# define __uninitialized		__attribute__((__uninitialized__))
++#else
++# define __uninitialized
++#endif
++
+ /*
+  *   gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-unused-function-attribute
+  *   gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Type-Attributes.html#index-unused-type-attribute
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index 2fd321da5c4b7..5554d26f91d80 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -171,8 +171,6 @@ int efi_capsule_setup_info(struct capsule_info *cap_info, void *kbuff,
+                            size_t hdr_bytes);
+ int __efi_capsule_setup_info(struct capsule_info *cap_info);
+ 
+-typedef int (*efi_freemem_callback_t) (u64 start, u64 end, void *arg);
+-
+ /*
+  * Types and defines for Time Services
+  */
+@@ -609,10 +607,6 @@ efi_guid_to_str(efi_guid_t *guid, char *out)
+ }
+ 
+ extern void efi_init (void);
+-extern void *efi_get_pal_addr (void);
+-extern void efi_map_pal_code (void);
+-extern void efi_memmap_walk (efi_freemem_callback_t callback, void *arg);
+-extern void efi_gettimeofday (struct timespec64 *ts);
+ #ifdef CONFIG_EFI
+ extern void efi_enter_virtual_mode (void);	/* switch EFI to virtual mode, if possible */
+ #else
+diff --git a/include/linux/fsnotify.h b/include/linux/fsnotify.h
+index bb8467cd11ae2..34f242105be23 100644
+--- a/include/linux/fsnotify.h
++++ b/include/linux/fsnotify.h
+@@ -93,7 +93,13 @@ static inline int fsnotify_file(struct file *file, __u32 mask)
+ {
+ 	const struct path *path = &file->f_path;
+ 
+-	if (file->f_mode & FMODE_NONOTIFY)
++	/*
++	 * FMODE_NONOTIFY are fds generated by fanotify itself which should not
++	 * generate new events. We also don't want to generate events for
++	 * FMODE_PATH fds (involves open & close events) as they are just
++	 * handle creation / destruction events and not "real" file events.
++	 */
++	if (file->f_mode & (FMODE_NONOTIFY | FMODE_PATH))
+ 		return 0;
+ 
+ 	return fsnotify_parent(path->dentry, mask, path, FSNOTIFY_EVENT_PATH);
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index 07abcd384975b..35bb13ce1faf9 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -370,7 +370,7 @@ LSM_HOOK(int, 0, key_getsecurity, struct key *key, char **_buffer)
+ 
+ #ifdef CONFIG_AUDIT
+ LSM_HOOK(int, 0, audit_rule_init, u32 field, u32 op, char *rulestr,
+-	 void **lsmrule)
++	 void **lsmrule, gfp_t gfp)
+ LSM_HOOK(int, 0, audit_rule_known, struct audit_krule *krule)
+ LSM_HOOK(int, 0, audit_rule_match, u32 secid, u32 field, u32 op, void *lsmrule)
+ LSM_HOOK(void, LSM_RET_VOID, audit_rule_free, void *lsmrule)
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index ffae2b3308180..71150fb1cb2ad 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -1353,8 +1353,9 @@ static inline int subsection_map_index(unsigned long pfn)
+ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+ {
+ 	int idx = subsection_map_index(pfn);
++	struct mem_section_usage *usage = READ_ONCE(ms->usage);
+ 
+-	return test_bit(idx, READ_ONCE(ms->usage)->subsection_map);
++	return usage ? test_bit(idx, usage->subsection_map) : 0;
+ }
+ #else
+ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 5b61aa19fac66..e32e040f094c2 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -1856,7 +1856,8 @@ static inline int security_key_getsecurity(struct key *key, char **_buffer)
+ 
+ #ifdef CONFIG_AUDIT
+ #ifdef CONFIG_SECURITY
+-int security_audit_rule_init(u32 field, u32 op, char *rulestr, void **lsmrule);
++int security_audit_rule_init(u32 field, u32 op, char *rulestr, void **lsmrule,
++			     gfp_t gfp);
+ int security_audit_rule_known(struct audit_krule *krule);
+ int security_audit_rule_match(u32 secid, u32 field, u32 op, void *lsmrule);
+ void security_audit_rule_free(void *lsmrule);
+@@ -1864,7 +1865,7 @@ void security_audit_rule_free(void *lsmrule);
+ #else
+ 
+ static inline int security_audit_rule_init(u32 field, u32 op, char *rulestr,
+-					   void **lsmrule)
++					   void **lsmrule, gfp_t gfp)
+ {
+ 	return 0;
+ }
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 1138dd3071dbd..e2af013ec05f4 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -98,6 +98,7 @@ struct sk_psock {
+ 	spinlock_t			link_lock;
+ 	refcount_t			refcnt;
+ 	void (*saved_unhash)(struct sock *sk);
++	void (*saved_destroy)(struct sock *sk);
+ 	void (*saved_close)(struct sock *sk, long timeout);
+ 	void (*saved_write_space)(struct sock *sk);
+ 	struct proto			*sk_proto;
+diff --git a/kernel/auditfilter.c b/kernel/auditfilter.c
+index 333b3bcfc5458..a5b43f25609e0 100644
+--- a/kernel/auditfilter.c
++++ b/kernel/auditfilter.c
+@@ -521,7 +521,8 @@ static struct audit_entry *audit_data_to_entry(struct audit_rule_data *data,
+ 			entry->rule.buflen += f_val;
+ 			f->lsm_str = str;
+ 			err = security_audit_rule_init(f->type, f->op, str,
+-						       (void **)&f->lsm_rule);
++						       (void **)&f->lsm_rule,
++						       GFP_KERNEL);
+ 			/* Keep currently invalid fields around in case they
+ 			 * become valid after a policy reload. */
+ 			if (err == -EINVAL) {
+@@ -790,7 +791,7 @@ static inline int audit_dupe_lsm_field(struct audit_field *df,
+ 
+ 	/* our own (refreshed) copy of lsm_rule */
+ 	ret = security_audit_rule_init(df->type, df->op, df->lsm_str,
+-				       (void **)&df->lsm_rule);
++				       (void **)&df->lsm_rule, GFP_KERNEL);
+ 	/* Keep currently invalid fields around in case they
+ 	 * become valid after a policy reload. */
+ 	if (ret == -EINVAL) {
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index ad115ccc2fe0e..60db311480d0a 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2807,6 +2807,8 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
+ 						continue;
+ 					if (type == STACK_MISC)
+ 						continue;
++					if (type == STACK_INVALID && env->allow_uninit_stack)
++						continue;
+ 					verbose(env, "invalid read from stack off %d+%d size %d\n",
+ 						off, i, size);
+ 					return -EACCES;
+@@ -2844,6 +2846,8 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
+ 				continue;
+ 			if (type == STACK_ZERO)
+ 				continue;
++			if (type == STACK_INVALID && env->allow_uninit_stack)
++				continue;
+ 			verbose(env, "invalid read from stack off %d+%d size %d\n",
+ 				off, i, size);
+ 			return -EACCES;
+@@ -4300,7 +4304,8 @@ static int check_stack_range_initialized(
+ 		stype = &state->stack[spi].slot_type[slot % BPF_REG_SIZE];
+ 		if (*stype == STACK_MISC)
+ 			goto mark;
+-		if (*stype == STACK_ZERO) {
++		if ((*stype == STACK_ZERO) ||
++		    (*stype == STACK_INVALID && env->allow_uninit_stack)) {
+ 			if (clobber) {
+ 				/* helper can write anything into the stack */
+ 				*stype = STACK_MISC;
+@@ -9492,6 +9497,10 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
+ 		if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_INVALID)
+ 			continue;
+ 
++		if (env->allow_uninit_stack &&
++		    old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_MISC)
++			continue;
++
+ 		/* explored stack has more populated slots than current stack
+ 		 * and these slots were used
+ 		 */
+diff --git a/kernel/exit.c b/kernel/exit.c
+index bacdaf980933b..af9c8e794e4d7 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -432,6 +432,8 @@ void mm_update_next_owner(struct mm_struct *mm)
+ 	 * Search through everything else, we should not get here often.
+ 	 */
+ 	for_each_process(g) {
++		if (atomic_read(&mm->mm_users) <= 1)
++			break;
+ 		if (g->flags & PF_KTHREAD)
+ 			continue;
+ 		for_each_thread(g, c) {
+diff --git a/lib/kunit/try-catch.c b/lib/kunit/try-catch.c
+index 71e5c58530996..d18da926b2cd7 100644
+--- a/lib/kunit/try-catch.c
++++ b/lib/kunit/try-catch.c
+@@ -76,7 +76,6 @@ void kunit_try_catch_run(struct kunit_try_catch *try_catch, void *context)
+ 	time_remaining = wait_for_completion_timeout(&try_completion,
+ 						     kunit_test_timeout());
+ 	if (time_remaining == 0) {
+-		kunit_err(test, "try timed out\n");
+ 		try_catch->try_result = -ETIMEDOUT;
+ 	}
+ 
+@@ -89,6 +88,8 @@ void kunit_try_catch_run(struct kunit_try_catch *try_catch, void *context)
+ 		try_catch->try_result = 0;
+ 	else if (exit_code == -EINTR)
+ 		kunit_err(test, "wake_up_process() was never called\n");
++	else if (exit_code == -ETIMEDOUT)
++		kunit_err(test, "try timed out\n");
+ 	else if (exit_code)
+ 		kunit_err(test, "Unknown error: %d\n", exit_code);
+ 
+diff --git a/mm/page-writeback.c b/mm/page-writeback.c
+index e8d7d3c2bfcb8..b2c9164748558 100644
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -432,13 +432,20 @@ static void domain_dirty_limits(struct dirty_throttle_control *dtc)
+ 	else
+ 		bg_thresh = (bg_ratio * available_memory) / PAGE_SIZE;
+ 
+-	if (bg_thresh >= thresh)
+-		bg_thresh = thresh / 2;
+ 	tsk = current;
+ 	if (rt_task(tsk)) {
+ 		bg_thresh += bg_thresh / 4 + global_wb_domain.dirty_limit / 32;
+ 		thresh += thresh / 4 + global_wb_domain.dirty_limit / 32;
+ 	}
++	/*
++	 * Dirty throttling logic assumes the limits in page units fit into
++	 * 32-bits. This gives 16TB dirty limits max which is hopefully enough.
++	 */
++	if (thresh > UINT_MAX)
++		thresh = UINT_MAX;
++	/* This makes sure bg_thresh is within 32-bits as well */
++	if (bg_thresh >= thresh)
++		bg_thresh = thresh / 2;
+ 	dtc->thresh = thresh;
+ 	dtc->bg_thresh = bg_thresh;
+ 
+@@ -488,7 +495,11 @@ static unsigned long node_dirty_limit(struct pglist_data *pgdat)
+ 	if (rt_task(tsk))
+ 		dirty += dirty / 4;
+ 
+-	return dirty;
++	/*
++	 * Dirty throttling logic assumes the limits in page units fit into
++	 * 32-bits. This gives 16TB dirty limits max which is hopefully enough.
++	 */
++	return min_t(unsigned long, dirty, UINT_MAX);
+ }
+ 
+ /**
+@@ -524,10 +535,17 @@ int dirty_background_bytes_handler(struct ctl_table *table, int write,
+ 		void *buffer, size_t *lenp, loff_t *ppos)
+ {
+ 	int ret;
++	unsigned long old_bytes = dirty_background_bytes;
+ 
+ 	ret = proc_doulongvec_minmax(table, write, buffer, lenp, ppos);
+-	if (ret == 0 && write)
++	if (ret == 0 && write) {
++		if (DIV_ROUND_UP(dirty_background_bytes, PAGE_SIZE) >
++								UINT_MAX) {
++			dirty_background_bytes = old_bytes;
++			return -ERANGE;
++		}
+ 		dirty_background_ratio = 0;
++	}
+ 	return ret;
+ }
+ 
+@@ -553,6 +571,10 @@ int dirty_bytes_handler(struct ctl_table *table, int write,
+ 
+ 	ret = proc_doulongvec_minmax(table, write, buffer, lenp, ppos);
+ 	if (ret == 0 && write && vm_dirty_bytes != old_bytes) {
++		if (DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE) > UINT_MAX) {
++			vm_dirty_bytes = old_bytes;
++			return -ERANGE;
++		}
+ 		writeback_set_ratelimit();
+ 		vm_dirty_ratio = 0;
+ 	}
+@@ -1524,7 +1546,7 @@ static inline void wb_dirty_limits(struct dirty_throttle_control *dtc)
+ 	 */
+ 	dtc->wb_thresh = __wb_calc_thresh(dtc);
+ 	dtc->wb_bg_thresh = dtc->thresh ?
+-		div64_u64(dtc->wb_thresh * dtc->bg_thresh, dtc->thresh) : 0;
++		div_u64((u64)dtc->wb_thresh * dtc->bg_thresh, dtc->thresh) : 0;
+ 
+ 	/*
+ 	 * In order to avoid the stacked BDI deadlock we need
+diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
+index ef5c174102d5e..82d3f1204ee48 100644
+--- a/net/ceph/mon_client.c
++++ b/net/ceph/mon_client.c
+@@ -1014,13 +1014,19 @@ static void delayed_work(struct work_struct *work)
+ 	struct ceph_mon_client *monc =
+ 		container_of(work, struct ceph_mon_client, delayed_work.work);
+ 
+-	dout("monc delayed_work\n");
+ 	mutex_lock(&monc->mutex);
++	dout("%s mon%d\n", __func__, monc->cur_mon);
++	if (monc->cur_mon < 0) {
++		goto out;
++	}
++
+ 	if (monc->hunting) {
+ 		dout("%s continuing hunt\n", __func__);
+ 		reopen_session(monc);
+ 	} else {
+ 		int is_auth = ceph_auth_is_authenticated(monc->auth);
++
++		dout("%s is_authed %d\n", __func__, is_auth);
+ 		if (ceph_con_keepalive_expired(&monc->con,
+ 					       CEPH_MONC_PING_TIMEOUT)) {
+ 			dout("monc keepalive timeout\n");
+@@ -1045,6 +1051,8 @@ static void delayed_work(struct work_struct *work)
+ 		}
+ 	}
+ 	__schedule_delayed(monc);
++
++out:
+ 	mutex_unlock(&monc->mutex);
+ }
+ 
+@@ -1157,13 +1165,15 @@ EXPORT_SYMBOL(ceph_monc_init);
+ void ceph_monc_stop(struct ceph_mon_client *monc)
+ {
+ 	dout("stop\n");
+-	cancel_delayed_work_sync(&monc->delayed_work);
+ 
+ 	mutex_lock(&monc->mutex);
+ 	__close_session(monc);
++	monc->hunting = false;
+ 	monc->cur_mon = -1;
+ 	mutex_unlock(&monc->mutex);
+ 
++	cancel_delayed_work_sync(&monc->delayed_work);
++
+ 	/*
+ 	 * flush msgr queue before we destroy ourselves to ensure that:
+ 	 *  - any work that references our embedded con is finished.
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index bb4fbc60b272e..51792dda1b731 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -599,6 +599,7 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
+ 	psock->eval = __SK_NONE;
+ 	psock->sk_proto = prot;
+ 	psock->saved_unhash = prot->unhash;
++	psock->saved_destroy = prot->destroy;
+ 	psock->saved_close = prot->close;
+ 	psock->saved_write_space = sk->sk_write_space;
+ 
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 52e395a189dff..d1d0ee2dbfaad 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -1566,6 +1566,28 @@ void sock_map_unhash(struct sock *sk)
+ 	saved_unhash(sk);
+ }
+ 
++void sock_map_destroy(struct sock *sk)
++{
++	void (*saved_destroy)(struct sock *sk);
++	struct sk_psock *psock;
++
++	rcu_read_lock();
++	psock = sk_psock_get(sk);
++	if (unlikely(!psock)) {
++		rcu_read_unlock();
++		if (sk->sk_prot->destroy)
++			sk->sk_prot->destroy(sk);
++		return;
++	}
++
++	saved_destroy = psock->saved_destroy;
++	sock_map_remove_links(sk, psock);
++	rcu_read_unlock();
++	sk_psock_put(sk, psock);
++	saved_destroy(sk);
++}
++EXPORT_SYMBOL_GPL(sock_map_destroy);
++
+ void sock_map_close(struct sock *sk, long timeout)
+ {
+ 	void (*saved_close)(struct sock *sk, long timeout);
+diff --git a/net/ethtool/linkstate.c b/net/ethtool/linkstate.c
+index fb676f349455a..470582a70ccbe 100644
+--- a/net/ethtool/linkstate.c
++++ b/net/ethtool/linkstate.c
+@@ -36,6 +36,8 @@ static int linkstate_get_sqi(struct net_device *dev)
+ 	mutex_lock(&phydev->lock);
+ 	if (!phydev->drv || !phydev->drv->get_sqi)
+ 		ret = -EOPNOTSUPP;
++	else if (!phydev->link)
++		ret = -ENETDOWN;
+ 	else
+ 		ret = phydev->drv->get_sqi(phydev);
+ 	mutex_unlock(&phydev->lock);
+@@ -54,6 +56,8 @@ static int linkstate_get_sqi_max(struct net_device *dev)
+ 	mutex_lock(&phydev->lock);
+ 	if (!phydev->drv || !phydev->drv->get_sqi_max)
+ 		ret = -EOPNOTSUPP;
++	else if (!phydev->link)
++		ret = -ENETDOWN;
+ 	else
+ 		ret = phydev->drv->get_sqi_max(phydev);
+ 	mutex_unlock(&phydev->lock);
+@@ -61,6 +65,17 @@ static int linkstate_get_sqi_max(struct net_device *dev)
+ 	return ret;
+ };
+ 
++static bool linkstate_sqi_critical_error(int sqi)
++{
++	return sqi < 0 && sqi != -EOPNOTSUPP && sqi != -ENETDOWN;
++}
++
++static bool linkstate_sqi_valid(struct linkstate_reply_data *data)
++{
++	return data->sqi >= 0 && data->sqi_max >= 0 &&
++	       data->sqi <= data->sqi_max;
++}
++
+ static int linkstate_get_link_ext_state(struct net_device *dev,
+ 					struct linkstate_reply_data *data)
+ {
+@@ -92,12 +107,12 @@ static int linkstate_prepare_data(const struct ethnl_req_info *req_base,
+ 	data->link = __ethtool_get_link(dev);
+ 
+ 	ret = linkstate_get_sqi(dev);
+-	if (ret < 0 && ret != -EOPNOTSUPP)
++	if (linkstate_sqi_critical_error(ret))
+ 		goto out;
+ 	data->sqi = ret;
+ 
+ 	ret = linkstate_get_sqi_max(dev);
+-	if (ret < 0 && ret != -EOPNOTSUPP)
++	if (linkstate_sqi_critical_error(ret))
+ 		goto out;
+ 	data->sqi_max = ret;
+ 
+@@ -122,11 +137,10 @@ static int linkstate_reply_size(const struct ethnl_req_info *req_base,
+ 	len = nla_total_size(sizeof(u8)) /* LINKSTATE_LINK */
+ 		+ 0;
+ 
+-	if (data->sqi != -EOPNOTSUPP)
+-		len += nla_total_size(sizeof(u32));
+-
+-	if (data->sqi_max != -EOPNOTSUPP)
+-		len += nla_total_size(sizeof(u32));
++	if (linkstate_sqi_valid(data)) {
++		len += nla_total_size(sizeof(u32)); /* LINKSTATE_SQI */
++		len += nla_total_size(sizeof(u32)); /* LINKSTATE_SQI_MAX */
++	}
+ 
+ 	if (data->link_ext_state_provided)
+ 		len += nla_total_size(sizeof(u8)); /* LINKSTATE_EXT_STATE */
+@@ -147,13 +161,14 @@ static int linkstate_fill_reply(struct sk_buff *skb,
+ 	    nla_put_u8(skb, ETHTOOL_A_LINKSTATE_LINK, !!data->link))
+ 		return -EMSGSIZE;
+ 
+-	if (data->sqi != -EOPNOTSUPP &&
+-	    nla_put_u32(skb, ETHTOOL_A_LINKSTATE_SQI, data->sqi))
+-		return -EMSGSIZE;
++	if (linkstate_sqi_valid(data)) {
++		if (nla_put_u32(skb, ETHTOOL_A_LINKSTATE_SQI, data->sqi))
++			return -EMSGSIZE;
+ 
+-	if (data->sqi_max != -EOPNOTSUPP &&
+-	    nla_put_u32(skb, ETHTOOL_A_LINKSTATE_SQI_MAX, data->sqi_max))
+-		return -EMSGSIZE;
++		if (nla_put_u32(skb, ETHTOOL_A_LINKSTATE_SQI_MAX,
++				data->sqi_max))
++			return -EMSGSIZE;
++	}
+ 
+ 	if (data->link_ext_state_provided) {
+ 		if (nla_put_u8(skb, ETHTOOL_A_LINKSTATE_EXT_STATE,
+diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c
+index 27a5a7d66d184..b9df76f6571cd 100644
+--- a/net/ipv4/inet_diag.c
++++ b/net/ipv4/inet_diag.c
+@@ -1275,6 +1275,7 @@ static int inet_diag_dump_compat(struct sk_buff *skb,
+ 	req.sdiag_family = AF_UNSPEC; /* compatibility */
+ 	req.sdiag_protocol = inet_diag_type2proto(cb->nlh->nlmsg_type);
+ 	req.idiag_ext = rc->idiag_ext;
++	req.pad = 0;
+ 	req.idiag_states = rc->idiag_states;
+ 	req.id = rc->id;
+ 
+@@ -1290,6 +1291,7 @@ static int inet_diag_get_exact_compat(struct sk_buff *in_skb,
+ 	req.sdiag_family = rc->idiag_family;
+ 	req.sdiag_protocol = inet_diag_type2proto(nlh->nlmsg_type);
+ 	req.idiag_ext = rc->idiag_ext;
++	req.pad = 0;
+ 	req.idiag_states = rc->idiag_states;
+ 	req.id = rc->id;
+ 
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index d0ca1fc325cd6..f909e440bb226 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -582,6 +582,7 @@ static void tcp_bpf_rebuild_protos(struct proto prot[TCP_BPF_NUM_CFGS],
+ 				   struct proto *base)
+ {
+ 	prot[TCP_BPF_BASE]			= *base;
++	prot[TCP_BPF_BASE].destroy		= sock_map_destroy;
+ 	prot[TCP_BPF_BASE].close		= sock_map_close;
+ 	prot[TCP_BPF_BASE].recvmsg		= tcp_bpf_recvmsg;
+ 	prot[TCP_BPF_BASE].stream_memory_read	= tcp_bpf_stream_read;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 512f8dc051c61..06c03b21500fb 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2084,8 +2084,16 @@ void tcp_clear_retrans(struct tcp_sock *tp)
+ static inline void tcp_init_undo(struct tcp_sock *tp)
+ {
+ 	tp->undo_marker = tp->snd_una;
++
+ 	/* Retransmission still in flight may cause DSACKs later. */
+-	tp->undo_retrans = tp->retrans_out ? : -1;
++	/* First, account for regular retransmits in flight: */
++	tp->undo_retrans = tp->retrans_out;
++	/* Next, account for TLP retransmits in flight: */
++	if (tp->tlp_high_seq && tp->tlp_retrans)
++		tp->undo_retrans++;
++	/* Finally, avoid 0, because undo_retrans==0 means "can undo now": */
++	if (!tp->undo_retrans)
++		tp->undo_retrans = -1;
+ }
+ 
+ static bool tcp_is_rack(const struct sock *sk)
+@@ -2164,6 +2172,7 @@ void tcp_enter_loss(struct sock *sk)
+ 
+ 	tcp_set_ca_state(sk, TCP_CA_Loss);
+ 	tp->high_seq = tp->snd_nxt;
++	tp->tlp_high_seq = 0;
+ 	tcp_ecn_queue_cwr(tp);
+ 
+ 	/* F-RTO RFC5682 sec 3.1 step 1: retransmit SND.UNA if no previous
+@@ -2993,7 +3002,7 @@ static void tcp_fastretrans_alert(struct sock *sk, const u32 prior_snd_una,
+ 			return;
+ 
+ 		if (tcp_try_undo_dsack(sk))
+-			tcp_try_keep_open(sk);
++			tcp_try_to_open(sk, flag);
+ 
+ 		tcp_identify_packet_loss(sk, ack_flag);
+ 		if (icsk->icsk_ca_state != TCP_CA_Recovery) {
+diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
+index f823a15b973c4..95b5ac082a2f4 100644
+--- a/net/ipv4/tcp_metrics.c
++++ b/net/ipv4/tcp_metrics.c
+@@ -619,6 +619,7 @@ static const struct nla_policy tcp_metrics_nl_policy[TCP_METRICS_ATTR_MAX + 1] =
+ 	[TCP_METRICS_ATTR_ADDR_IPV4]	= { .type = NLA_U32, },
+ 	[TCP_METRICS_ATTR_ADDR_IPV6]	= { .type = NLA_BINARY,
+ 					    .len = sizeof(struct in6_addr), },
++	[TCP_METRICS_ATTR_SADDR_IPV4]	= { .type = NLA_U32, },
+ 	/* Following attributes are not received for GET/DEL,
+ 	 * we keep them for reference
+ 	 */
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index 5c7e10939dd90..316a4ef4db371 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -440,17 +440,34 @@ static void tcp_fastopen_synack_timer(struct sock *sk, struct request_sock *req)
+ static bool tcp_rtx_probe0_timed_out(const struct sock *sk,
+ 				     const struct sk_buff *skb)
+ {
++	const struct inet_connection_sock *icsk = inet_csk(sk);
++	u32 user_timeout = READ_ONCE(icsk->icsk_user_timeout);
+ 	const struct tcp_sock *tp = tcp_sk(sk);
+-	const int timeout = TCP_RTO_MAX * 2;
+-	u32 rcv_delta, rtx_delta;
+-
+-	rcv_delta = inet_csk(sk)->icsk_timeout - tp->rcv_tstamp;
+-	if (rcv_delta <= timeout)
+-		return false;
++	int timeout = TCP_RTO_MAX * 2;
++	u32 rtx_delta;
++	s32 rcv_delta;
+ 
+ 	rtx_delta = (u32)msecs_to_jiffies(tcp_time_stamp(tp) -
+ 			(tp->retrans_stamp ?: tcp_skb_timestamp(skb)));
+ 
++	if (user_timeout) {
++		/* If user application specified a TCP_USER_TIMEOUT,
++		 * it does not want win 0 packets to 'reset the timer'
++		 * while retransmits are not making progress.
++		 */
++		if (rtx_delta > user_timeout)
++			return true;
++		timeout = min_t(u32, timeout, msecs_to_jiffies(user_timeout));
++	}
++
++	/* Note: timer interrupt might have been delayed by at least one jiffy,
++	 * and tp->rcv_tstamp might very well have been written recently.
++	 * rcv_delta can thus be negative.
++	 */
++	rcv_delta = icsk->icsk_timeout - tp->rcv_tstamp;
++	if (rcv_delta <= timeout)
++		return false;
++
+ 	return rtx_delta > timeout;
+ }
+ 
+@@ -492,8 +509,6 @@ void tcp_retransmit_timer(struct sock *sk)
+ 	if (WARN_ON_ONCE(!skb))
+ 		return;
+ 
+-	tp->tlp_high_seq = 0;
+-
+ 	if (!tp->snd_wnd && !sock_flag(sk, SOCK_DEAD) &&
+ 	    !((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV))) {
+ 		/* Receiver dastardly shrinks window. Our retransmits
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index da9015efb45e4..6ad25dc9710c1 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -317,6 +317,8 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
+ 			goto fail_unlock;
+ 		}
+ 
++		sock_set_flag(sk, SOCK_RCU_FREE);
++
+ 		sk_add_node_rcu(sk, &hslot->head);
+ 		hslot->count++;
+ 		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+@@ -333,7 +335,7 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
+ 		hslot2->count++;
+ 		spin_unlock(&hslot2->lock);
+ 	}
+-	sock_set_flag(sk, SOCK_RCU_FREE);
++
+ 	error = 0;
+ fail_unlock:
+ 	spin_unlock_bh(&hslot->lock);
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 8a6f4cdd5a486..ac09d4543f3e1 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -4107,7 +4107,7 @@ static void addrconf_dad_work(struct work_struct *w)
+ 			if (!ipv6_generate_eui64(addr.s6_addr + 8, idev->dev) &&
+ 			    ipv6_addr_equal(&ifp->addr, &addr)) {
+ 				/* DAD failed for link-local based on MAC */
+-				idev->cnf.disable_ipv6 = 1;
++				WRITE_ONCE(idev->cnf.disable_ipv6, 1);
+ 
+ 				pr_info("%s: IPv6 being disabled!\n",
+ 					ifp->idev->dev->name);
+@@ -6220,7 +6220,8 @@ static void addrconf_disable_change(struct net *net, __s32 newf)
+ 		idev = __in6_dev_get(dev);
+ 		if (idev) {
+ 			int changed = (!idev->cnf.disable_ipv6) ^ (!newf);
+-			idev->cnf.disable_ipv6 = newf;
++
++			WRITE_ONCE(idev->cnf.disable_ipv6, newf);
+ 			if (changed)
+ 				dev_disable_change(idev);
+ 		}
+@@ -6237,7 +6238,7 @@ static int addrconf_disable_ipv6(struct ctl_table *table, int *p, int newf)
+ 
+ 	net = (struct net *)table->extra2;
+ 	old = *p;
+-	*p = newf;
++	WRITE_ONCE(*p, newf);
+ 
+ 	if (p == &net->ipv6.devconf_dflt->disable_ipv6) {
+ 		rtnl_unlock();
+@@ -6245,7 +6246,7 @@ static int addrconf_disable_ipv6(struct ctl_table *table, int *p, int newf)
+ 	}
+ 
+ 	if (p == &net->ipv6.devconf_all->disable_ipv6) {
+-		net->ipv6.devconf_dflt->disable_ipv6 = newf;
++		WRITE_ONCE(net->ipv6.devconf_dflt->disable_ipv6, newf);
+ 		addrconf_disable_change(net, newf);
+ 	} else if ((!newf) ^ (!old))
+ 		dev_disable_change((struct inet6_dev *)table->extra1);
+diff --git a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c
+index 4eb9fbfdce332..8cf5b10eee695 100644
+--- a/net/ipv6/ip6_input.c
++++ b/net/ipv6/ip6_input.c
+@@ -165,7 +165,7 @@ static struct sk_buff *ip6_rcv_core(struct sk_buff *skb, struct net_device *dev,
+ 	__IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_IN, skb->len);
+ 
+ 	if ((skb = skb_share_check(skb, GFP_ATOMIC)) == NULL ||
+-	    !idev || unlikely(idev->cnf.disable_ipv6)) {
++	    !idev || unlikely(READ_ONCE(idev->cnf.disable_ipv6))) {
+ 		__IP6_INC_STATS(net, idev, IPSTATS_MIB_INDISCARDS);
+ 		goto drop;
+ 	}
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 4126be15e0d9b..32512b8ca5e72 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -240,7 +240,7 @@ int ip6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 	skb->protocol = htons(ETH_P_IPV6);
+ 	skb->dev = dev;
+ 
+-	if (unlikely(idev->cnf.disable_ipv6)) {
++	if (unlikely(!idev || READ_ONCE(idev->cnf.disable_ipv6))) {
+ 		IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
+ 		kfree_skb(skb);
+ 		return 0;
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index c6d6a6fe9602b..a59a8ad387211 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -1038,6 +1038,14 @@ static int tcf_ct_act(struct sk_buff *skb, const struct tc_action *a,
+ 		 */
+ 		if (nf_conntrack_confirm(skb) != NF_ACCEPT)
+ 			goto drop;
++
++		/* The ct may be dropped if a clash has been resolved,
++		 * so it's necessary to retrieve it from skb again to
++		 * prevent UAF.
++		 */
++		ct = nf_ct_get(skb, &ctinfo);
++		if (!ct)
++			skip_add = true;
+ 	}
+ 
+ 	if (!skip_add)
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index bc4fe944ef858..79cf4cda2cf6d 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -6994,6 +6994,7 @@ static int sctp_getsockopt_assoc_ids(struct sock *sk, int len,
+ 	struct sctp_sock *sp = sctp_sk(sk);
+ 	struct sctp_association *asoc;
+ 	struct sctp_assoc_ids *ids;
++	size_t ids_size;
+ 	u32 num = 0;
+ 
+ 	if (sctp_style(sk, TCP))
+@@ -7006,11 +7007,11 @@ static int sctp_getsockopt_assoc_ids(struct sock *sk, int len,
+ 		num++;
+ 	}
+ 
+-	if (len < sizeof(struct sctp_assoc_ids) + sizeof(sctp_assoc_t) * num)
++	ids_size = struct_size(ids, gaids_assoc_id, num);
++	if (len < ids_size)
+ 		return -EINVAL;
+ 
+-	len = sizeof(struct sctp_assoc_ids) + sizeof(sctp_assoc_t) * num;
+-
++	len = ids_size;
+ 	ids = kmalloc(len, GFP_USER | __GFP_NOWARN);
+ 	if (unlikely(!ids))
+ 		return -ENOMEM;
+diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
+index 3a1ffd84eac28..bf534a323fddc 100755
+--- a/scripts/link-vmlinux.sh
++++ b/scripts/link-vmlinux.sh
+@@ -213,7 +213,7 @@ kallsyms_step()
+ 	vmlinux_link ${kallsyms_vmlinux} "${kallsymso_prev}" ${btf_vmlinux_bin_o}
+ 	kallsyms ${kallsyms_vmlinux} ${kallsyms_S}
+ 
+-	info AS ${kallsyms_S}
++	info AS ${kallsymso}
+ 	${CC} ${NOSTDINC_FLAGS} ${LINUXINCLUDE} ${KBUILD_CPPFLAGS} \
+ 	      ${KBUILD_AFLAGS} ${KBUILD_AFLAGS_KERNEL} \
+ 	      -c -o ${kallsymso} ${kallsyms_S}
+diff --git a/security/apparmor/audit.c b/security/apparmor/audit.c
+index 704b0c895605a..963df28584eed 100644
+--- a/security/apparmor/audit.c
++++ b/security/apparmor/audit.c
+@@ -173,7 +173,7 @@ void aa_audit_rule_free(void *vrule)
+ 	}
+ }
+ 
+-int aa_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule)
++int aa_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule, gfp_t gfp)
+ {
+ 	struct aa_audit_rule *rule;
+ 
+@@ -186,14 +186,14 @@ int aa_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule)
+ 		return -EINVAL;
+ 	}
+ 
+-	rule = kzalloc(sizeof(struct aa_audit_rule), GFP_KERNEL);
++	rule = kzalloc(sizeof(struct aa_audit_rule), gfp);
+ 
+ 	if (!rule)
+ 		return -ENOMEM;
+ 
+ 	/* Currently rules are treated as coming from the root ns */
+ 	rule->label = aa_label_parse(&root_ns->unconfined->label, rulestr,
+-				     GFP_KERNEL, true, false);
++				     gfp, true, false);
+ 	if (IS_ERR(rule->label)) {
+ 		int err = PTR_ERR(rule->label);
+ 		aa_audit_rule_free(rule);
+diff --git a/security/apparmor/include/audit.h b/security/apparmor/include/audit.h
+index 18519a4eb67e3..f325f1bef8d6d 100644
+--- a/security/apparmor/include/audit.h
++++ b/security/apparmor/include/audit.h
+@@ -186,7 +186,7 @@ static inline int complain_error(int error)
+ }
+ 
+ void aa_audit_rule_free(void *vrule);
+-int aa_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule);
++int aa_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule, gfp_t gfp);
+ int aa_audit_rule_known(struct audit_krule *rule);
+ int aa_audit_rule_match(u32 sid, u32 field, u32 op, void *vrule);
+ 
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index 6ebefec616e44..14b77a71c16c6 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -420,7 +420,7 @@ static inline void ima_free_modsig(struct modsig *modsig)
+ #else
+ 
+ static inline int ima_filter_rule_init(u32 field, u32 op, char *rulestr,
+-				       void **lsmrule)
++				       void **lsmrule, gfp_t gfp)
+ {
+ 	return -EINVAL;
+ }
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index 4f5d44037081b..8d69b2e27936a 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -349,7 +349,8 @@ static void ima_free_rule(struct ima_rule_entry *entry)
+ 	kfree(entry);
+ }
+ 
+-static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
++static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry,
++						gfp_t gfp)
+ {
+ 	struct ima_rule_entry *nentry;
+ 	int i;
+@@ -358,7 +359,7 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
+ 	 * Immutable elements are copied over as pointers and data; only
+ 	 * lsm rules can change
+ 	 */
+-	nentry = kmemdup(entry, sizeof(*nentry), GFP_KERNEL);
++	nentry = kmemdup(entry, sizeof(*nentry), gfp);
+ 	if (!nentry)
+ 		return NULL;
+ 
+@@ -373,7 +374,8 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
+ 
+ 		ima_filter_rule_init(nentry->lsm[i].type, Audit_equal,
+ 				     nentry->lsm[i].args_p,
+-				     &nentry->lsm[i].rule);
++				     &nentry->lsm[i].rule,
++				     gfp);
+ 		if (!nentry->lsm[i].rule)
+ 			pr_warn("rule for LSM \'%s\' is undefined\n",
+ 				nentry->lsm[i].args_p);
+@@ -386,7 +388,7 @@ static int ima_lsm_update_rule(struct ima_rule_entry *entry)
+ 	int i;
+ 	struct ima_rule_entry *nentry;
+ 
+-	nentry = ima_lsm_copy_rule(entry);
++	nentry = ima_lsm_copy_rule(entry, GFP_KERNEL);
+ 	if (!nentry)
+ 		return -ENOMEM;
+ 
+@@ -573,7 +575,7 @@ static bool ima_match_rules(struct ima_rule_entry *rule, struct inode *inode,
+ 		}
+ 
+ 		if (rc == -ESTALE && !rule_reinitialized) {
+-			lsm_rule = ima_lsm_copy_rule(rule);
++			lsm_rule = ima_lsm_copy_rule(rule, GFP_ATOMIC);
+ 			if (lsm_rule) {
+ 				rule_reinitialized = true;
+ 				goto retry;
+@@ -990,7 +992,8 @@ static int ima_lsm_rule_init(struct ima_rule_entry *entry,
+ 	entry->lsm[lsm_rule].type = audit_type;
+ 	result = ima_filter_rule_init(entry->lsm[lsm_rule].type, Audit_equal,
+ 				      entry->lsm[lsm_rule].args_p,
+-				      &entry->lsm[lsm_rule].rule);
++				      &entry->lsm[lsm_rule].rule,
++				      GFP_KERNEL);
+ 	if (!entry->lsm[lsm_rule].rule) {
+ 		pr_warn("rule for LSM \'%s\' is undefined\n",
+ 			entry->lsm[lsm_rule].args_p);
+diff --git a/security/security.c b/security/security.c
+index 0bbcb100ba8e9..f836f292ea16c 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -2545,9 +2545,11 @@ int security_key_getsecurity(struct key *key, char **_buffer)
+ 
+ #ifdef CONFIG_AUDIT
+ 
+-int security_audit_rule_init(u32 field, u32 op, char *rulestr, void **lsmrule)
++int security_audit_rule_init(u32 field, u32 op, char *rulestr, void **lsmrule,
++			     gfp_t gfp)
+ {
+-	return call_int_hook(audit_rule_init, 0, field, op, rulestr, lsmrule);
++	return call_int_hook(audit_rule_init, 0, field, op, rulestr, lsmrule,
++			     gfp);
+ }
+ 
+ int security_audit_rule_known(struct audit_krule *krule)
+diff --git a/security/selinux/include/audit.h b/security/selinux/include/audit.h
+index 073a3d34a0d21..72af85ff96a48 100644
+--- a/security/selinux/include/audit.h
++++ b/security/selinux/include/audit.h
+@@ -18,12 +18,14 @@
+  *	@op: the operater the rule uses
+  *	@rulestr: the text "target" of the rule
+  *	@rule: pointer to the new rule structure returned via this
++ *	@gfp: GFP flag used for kmalloc
+  *
+  *	Returns 0 if successful, -errno if not.  On success, the rule structure
+  *	will be allocated internally.  The caller must free this structure with
+  *	selinux_audit_rule_free() after use.
+  */
+-int selinux_audit_rule_init(u32 field, u32 op, char *rulestr, void **rule);
++int selinux_audit_rule_init(u32 field, u32 op, char *rulestr, void **rule,
++			    gfp_t gfp);
+ 
+ /**
+  *	selinux_audit_rule_free - free an selinux audit rule structure.
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index 3db8bd2158d9b..a01e768337cd4 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -3542,7 +3542,8 @@ void selinux_audit_rule_free(void *vrule)
+ 	}
+ }
+ 
+-int selinux_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule)
++int selinux_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule,
++			    gfp_t gfp)
+ {
+ 	struct selinux_state *state = &selinux_state;
+ 	struct selinux_policy *policy;
+@@ -3583,7 +3584,7 @@ int selinux_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule)
+ 		return -EINVAL;
+ 	}
+ 
+-	tmprule = kzalloc(sizeof(struct selinux_audit_rule), GFP_KERNEL);
++	tmprule = kzalloc(sizeof(struct selinux_audit_rule), gfp);
+ 	if (!tmprule)
+ 		return -ENOMEM;
+ 
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 750f6007bbbb0..8c790563b33ac 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -4490,11 +4490,13 @@ static int smack_post_notification(const struct cred *w_cred,
+  * @op: required testing operator (=, !=, >, <, ...)
+  * @rulestr: smack label to be audited
+  * @vrule: pointer to save our own audit rule representation
++ * @gfp: type of the memory for the allocation
+  *
+  * Prepare to audit cases where (@field @op @rulestr) is true.
+  * The label to be audited is created if necessay.
+  */
+-static int smack_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule)
++static int smack_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule,
++				 gfp_t gfp)
+ {
+ 	struct smack_known *skp;
+ 	char **rule = (char **)vrule;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 5c5a144e707f0..669937bae570e 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9057,6 +9057,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x841c, "HP Pavilion 15-CK0xx", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++	SND_PCI_QUIRK(0x103c, 0x84a6, "HP 250 G7 Notebook PC", ALC269_FIXUP_HP_LINE1_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x84ae, "HP 15-db0403ng", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+@@ -9189,6 +9190,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ 	SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
+ 	SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
++	SND_PCI_QUIRK(0x10ec, 0x11bc, "VAIO VJFE-IL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x124c, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+@@ -9396,6 +9398,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
++	SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
+ 	SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+@@ -10807,6 +10810,7 @@ enum {
+ 	ALC897_FIXUP_LENOVO_HEADSET_MODE,
+ 	ALC897_FIXUP_HEADSET_MIC_PIN2,
+ 	ALC897_FIXUP_UNIS_H3C_X500S,
++	ALC897_FIXUP_HEADSET_MIC_PIN3,
+ };
+ 
+ static const struct hda_fixup alc662_fixups[] = {
+@@ -11253,10 +11257,18 @@ static const struct hda_fixup alc662_fixups[] = {
+ 			{}
+ 		},
+ 	},
++	[ALC897_FIXUP_HEADSET_MIC_PIN3] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x03a11050 }, /* use as headset mic */
++			{ }
++		},
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1019, 0x9087, "ECS", ALC662_FIXUP_ASUS_MODE2),
++	SND_PCI_QUIRK(0x1019, 0x9859, "JP-IK LEAP W502", ALC897_FIXUP_HEADSET_MIC_PIN3),
+ 	SND_PCI_QUIRK(0x1025, 0x022f, "Acer Aspire One", ALC662_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x0241, "Packard Bell DOTS", ALC662_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x0308, "Acer Aspire 8942G", ALC662_FIXUP_ASPIRE),
+diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
+index f05cfc082915d..4303b31498d81 100644
+--- a/tools/lib/bpf/bpf_core_read.h
++++ b/tools/lib/bpf/bpf_core_read.h
+@@ -101,6 +101,7 @@ enum bpf_enum_value_kind {
+ 	case 2: val = *(const unsigned short *)p; break;		      \
+ 	case 4: val = *(const unsigned int *)p; break;			      \
+ 	case 8: val = *(const unsigned long long *)p; break;		      \
++	default: val = 0; break;					      \
+ 	}								      \
+ 	val <<= __CORE_RELO(s, field, LSHIFT_U64);			      \
+ 	if (__CORE_RELO(s, field, SIGNED))				      \
+diff --git a/tools/testing/selftests/bpf/progs/test_global_func10.c b/tools/testing/selftests/bpf/progs/test_global_func10.c
+new file mode 100644
+index 0000000000000..8fba3f3649e22
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/test_global_func10.c
+@@ -0,0 +1,31 @@
++// SPDX-License-Identifier: GPL-2.0-only
++#include <stddef.h>
++#include <linux/bpf.h>
++#include <bpf/bpf_helpers.h>
++#include "bpf_misc.h"
++
++struct Small {
++	long x;
++};
++
++struct Big {
++	long x;
++	long y;
++};
++
++__noinline int foo(const struct Big *big)
++{
++	if (!big)
++		return 0;
++
++	return bpf_get_prandom_u32() < big->y;
++}
++
++SEC("cgroup_skb/ingress")
++__failure __msg("invalid indirect access to stack")
++int global_func10(struct __sk_buff *skb)
++{
++	const struct Small small = {.x = skb->len };
++
++	return foo((struct Big *)&small) ? 1 : 0;
++}
+diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
+index eb888c8479c32..4b0628cd2d035 100644
+--- a/tools/testing/selftests/bpf/verifier/calls.c
++++ b/tools/testing/selftests/bpf/verifier/calls.c
+@@ -1948,19 +1948,22 @@
+ 	 * that fp-8 stack slot was unused in the fall-through
+ 	 * branch and will accept the program incorrectly
+ 	 */
+-	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 2, 2),
++	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
++	BPF_JMP_IMM(BPF_JGT, BPF_REG_0, 2, 2),
+ 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ 	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ 	BPF_LD_MAP_FD(BPF_REG_1, 0),
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.fixup_map_hash_48b = { 6 },
+-	.errstr = "invalid indirect read from stack R2 off -8+0 size 8",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_XDP,
++	.fixup_map_hash_48b = { 7 },
++	.errstr_unpriv = "invalid indirect read from stack R2 off -8+0 size 8",
++	.result_unpriv = REJECT,
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"calls: ctx read at start of subprog",
+diff --git a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
+index 0ab7f1dfc97ac..0e24aa11c4571 100644
+--- a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
++++ b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
+@@ -29,19 +29,30 @@
+ {
+ 	"helper access to variable memory: stack, bitwise AND, zero included",
+ 	.insns = {
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 8),
+-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64),
+-	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
+-	BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 64),
+-	BPF_MOV64_IMM(BPF_REG_3, 0),
+-	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
++	/* set max stack size */
++	BPF_ST_MEM(BPF_DW, BPF_REG_10, -128, 0),
++	/* set r3 to a random value */
++	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
++	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
++	/* use bitwise AND to limit r3 range to [0, 64] */
++	BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 64),
++	BPF_LD_MAP_FD(BPF_REG_1, 0),
++	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -64),
++	BPF_MOV64_IMM(BPF_REG_4, 0),
++	/* Call bpf_ringbuf_output(), it is one of a few helper functions with
++	 * ARG_CONST_SIZE_OR_ZERO parameter allowed in unpriv mode.
++	 * For unpriv this should signal an error, because memory at &fp[-64] is
++	 * not initialized.
++	 */
++	BPF_EMIT_CALL(BPF_FUNC_ringbuf_output),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid indirect read from stack R1 off -64+0 size 64",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
++	.fixup_map_ringbuf = { 4 },
++	.errstr_unpriv = "invalid indirect read from stack R2 off -64+0 size 64",
++	.result_unpriv = REJECT,
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"helper access to variable memory: stack, bitwise AND + JMP, wrong max",
+@@ -183,20 +194,31 @@
+ {
+ 	"helper access to variable memory: stack, JMP, no min check",
+ 	.insns = {
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 8),
+-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64),
+-	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
+-	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 64, 3),
+-	BPF_MOV64_IMM(BPF_REG_3, 0),
+-	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
++	/* set max stack size */
++	BPF_ST_MEM(BPF_DW, BPF_REG_10, -128, 0),
++	/* set r3 to a random value */
++	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
++	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
++	/* use JMP to limit r3 range to [0, 64] */
++	BPF_JMP_IMM(BPF_JGT, BPF_REG_3, 64, 6),
++	BPF_LD_MAP_FD(BPF_REG_1, 0),
++	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -64),
++	BPF_MOV64_IMM(BPF_REG_4, 0),
++	/* Call bpf_ringbuf_output(), it is one of a few helper functions with
++	 * ARG_CONST_SIZE_OR_ZERO parameter allowed in unpriv mode.
++	 * For unpriv this should signal an error, because memory at &fp[-64] is
++	 * not initialized.
++	 */
++	BPF_EMIT_CALL(BPF_FUNC_ringbuf_output),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid indirect read from stack R1 off -64+0 size 64",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
++	.fixup_map_ringbuf = { 4 },
++	.errstr_unpriv = "invalid indirect read from stack R2 off -64+0 size 64",
++	.result_unpriv = REJECT,
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"helper access to variable memory: stack, JMP (signed), no min check",
+@@ -564,29 +586,41 @@
+ {
+ 	"helper access to variable memory: 8 bytes leak",
+ 	.insns = {
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 8),
+-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64),
++	/* set max stack size */
++	BPF_ST_MEM(BPF_DW, BPF_REG_10, -128, 0),
++	/* set r3 to a random value */
++	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
++	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
++	BPF_LD_MAP_FD(BPF_REG_1, 0),
++	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -64),
+ 	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -64),
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -56),
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -48),
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -40),
++	/* Note: fp[-32] left uninitialized */
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -24),
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
+ 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
+-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128),
+-	BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 63),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1),
+-	BPF_MOV64_IMM(BPF_REG_3, 0),
+-	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
++	/* Limit r3 range to [1, 64] */
++	BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 63),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 1),
++	BPF_MOV64_IMM(BPF_REG_4, 0),
++	/* Call bpf_ringbuf_output(), it is one of a few helper functions with
++	 * ARG_CONST_SIZE_OR_ZERO parameter allowed in unpriv mode.
++	 * For unpriv this should signal an error, because memory region [1, 64]
++	 * at &fp[-64] is not fully initialized.
++	 */
++	BPF_EMIT_CALL(BPF_FUNC_ringbuf_output),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
+ 	BPF_EXIT_INSN(),
+ 	},
+-	.errstr = "invalid indirect read from stack R1 off -64+32 size 64",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
++	.fixup_map_ringbuf = { 3 },
++	.errstr_unpriv = "invalid indirect read from stack R2 off -64+32 size 64",
++	.result_unpriv = REJECT,
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"helper access to variable memory: 8 bytes no leak (init memory)",
+diff --git a/tools/testing/selftests/bpf/verifier/int_ptr.c b/tools/testing/selftests/bpf/verifier/int_ptr.c
+index 070893fb29007..02d9e004260b3 100644
+--- a/tools/testing/selftests/bpf/verifier/int_ptr.c
++++ b/tools/testing/selftests/bpf/verifier/int_ptr.c
+@@ -54,12 +54,13 @@
+ 		/* bpf_strtoul() */
+ 		BPF_EMIT_CALL(BPF_FUNC_strtoul),
+ 
+-		BPF_MOV64_IMM(BPF_REG_0, 1),
++		BPF_MOV64_IMM(BPF_REG_0, 0),
+ 		BPF_EXIT_INSN(),
+ 	},
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_CGROUP_SYSCTL,
+-	.errstr = "invalid indirect read from stack R4 off -16+4 size 8",
++	.result_unpriv = REJECT,
++	.errstr_unpriv = "invalid indirect read from stack R4 off -16+4 size 8",
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"ARG_PTR_TO_LONG misaligned",
+diff --git a/tools/testing/selftests/bpf/verifier/search_pruning.c b/tools/testing/selftests/bpf/verifier/search_pruning.c
+index 7e36078f8f482..949cbe4602480 100644
+--- a/tools/testing/selftests/bpf/verifier/search_pruning.c
++++ b/tools/testing/selftests/bpf/verifier/search_pruning.c
+@@ -128,9 +128,10 @@
+ 		BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_8b = { 3 },
+-	.errstr = "invalid read from stack off -16+0 size 8",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
++	.errstr_unpriv = "invalid read from stack off -16+0 size 8",
++	.result_unpriv = REJECT,
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+ {
+ 	"allocated_stack",
+@@ -187,6 +188,8 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.flags = BPF_F_TEST_STATE_FREQ,
+-	.errstr = "invalid read from stack off -8+1 size 8",
+-	.result = REJECT,
++	.errstr_unpriv = "invalid read from stack off -8+1 size 8",
++	.result_unpriv = REJECT,
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
+ },
+diff --git a/tools/testing/selftests/bpf/verifier/sock.c b/tools/testing/selftests/bpf/verifier/sock.c
+index 8c224eac93df7..59d976d228673 100644
+--- a/tools/testing/selftests/bpf/verifier/sock.c
++++ b/tools/testing/selftests/bpf/verifier/sock.c
+@@ -530,33 +530,6 @@
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ 	.result = ACCEPT,
+ },
+-{
+-	"sk_storage_get(map, skb->sk, &stack_value, 1): partially init stack_value",
+-	.insns = {
+-	BPF_MOV64_IMM(BPF_REG_2, 0),
+-	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_2, -8),
+-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+-	BPF_MOV64_IMM(BPF_REG_0, 0),
+-	BPF_EXIT_INSN(),
+-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
+-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+-	BPF_MOV64_IMM(BPF_REG_0, 0),
+-	BPF_EXIT_INSN(),
+-	BPF_MOV64_IMM(BPF_REG_4, 1),
+-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
+-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -8),
+-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+-	BPF_LD_MAP_FD(BPF_REG_1, 0),
+-	BPF_EMIT_CALL(BPF_FUNC_sk_storage_get),
+-	BPF_MOV64_IMM(BPF_REG_0, 0),
+-	BPF_EXIT_INSN(),
+-	},
+-	.fixup_sk_storage_map = { 14 },
+-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+-	.result = REJECT,
+-	.errstr = "invalid indirect read from stack",
+-},
+ {
+ 	"bpf_map_lookup_elem(smap, &key)",
+ 	.insns = {
+diff --git a/tools/testing/selftests/bpf/verifier/spill_fill.c b/tools/testing/selftests/bpf/verifier/spill_fill.c
+index 0b943897aaf6c..1e76841b7bfa6 100644
+--- a/tools/testing/selftests/bpf/verifier/spill_fill.c
++++ b/tools/testing/selftests/bpf/verifier/spill_fill.c
+@@ -104,3 +104,214 @@
+ 	.result = ACCEPT,
+ 	.retval = POINTER_VALUE,
+ },
++{
++	"Spill and refill a u32 const scalar.  Offset to skb->data",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct __sk_buff, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct __sk_buff, data_end)),
++	/* r4 = 20 */
++	BPF_MOV32_IMM(BPF_REG_4, 20),
++	/* *(u32 *)(r10 -8) = r4 */
++	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
++	/* r4 = *(u32 *)(r10 -8) */
++	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -8),
++	/* r0 = r2 */
++	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
++	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=20 */
++	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
++	/* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */
++	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
++	/* r0 = *(u32 *)r2 R0=pkt,off=20,r=20 R2=pkt,r=20 R3=pkt_end R4=20 */
++	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
++},
++{
++	"Spill a u32 const, refill from another half of the uninit u32 from the stack",
++	.insns = {
++	/* r4 = 20 */
++	BPF_MOV32_IMM(BPF_REG_4, 20),
++	/* *(u32 *)(r10 -8) = r4 */
++	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
++	/* r4 = *(u32 *)(r10 -4) fp-8=????rrrr*/
++	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -4),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result_unpriv = REJECT,
++	.errstr_unpriv = "invalid read from stack off -4+0 size 4",
++	/* in privileged mode reads from uninitialized stack locations are permitted */
++	.result = ACCEPT,
++},
++{
++	"Spill a u32 const scalar.  Refill as u16.  Offset to skb->data",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct __sk_buff, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct __sk_buff, data_end)),
++	/* r4 = 20 */
++	BPF_MOV32_IMM(BPF_REG_4, 20),
++	/* *(u32 *)(r10 -8) = r4 */
++	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
++	/* r4 = *(u16 *)(r10 -8) */
++	BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8),
++	/* r0 = r2 */
++	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
++	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */
++	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
++	/* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */
++	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
++	/* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */
++	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = REJECT,
++	.errstr = "invalid access to packet",
++	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
++},
++{
++	"Spill u32 const scalars.  Refill as u64.  Offset to skb->data",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct __sk_buff, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct __sk_buff, data_end)),
++	/* r6 = 0 */
++	BPF_MOV32_IMM(BPF_REG_6, 0),
++	/* r7 = 20 */
++	BPF_MOV32_IMM(BPF_REG_7, 20),
++	/* *(u32 *)(r10 -4) = r6 */
++	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -4),
++	/* *(u32 *)(r10 -8) = r7 */
++	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8),
++	/* r4 = *(u64 *)(r10 -8) */
++	BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8),
++	/* r0 = r2 */
++	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
++	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */
++	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
++	/* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */
++	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
++	/* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */
++	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = REJECT,
++	.errstr = "invalid access to packet",
++	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
++},
++{
++	"Spill a u32 const scalar.  Refill as u16 from fp-6.  Offset to skb->data",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct __sk_buff, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct __sk_buff, data_end)),
++	/* r4 = 20 */
++	BPF_MOV32_IMM(BPF_REG_4, 20),
++	/* *(u32 *)(r10 -8) = r4 */
++	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
++	/* r4 = *(u16 *)(r10 -6) */
++	BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -6),
++	/* r0 = r2 */
++	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
++	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */
++	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
++	/* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */
++	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
++	/* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */
++	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = REJECT,
++	.errstr = "invalid access to packet",
++	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
++},
++{
++	"Spill and refill a u32 const scalar at non 8byte aligned stack addr.  Offset to skb->data",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct __sk_buff, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct __sk_buff, data_end)),
++	/* r4 = 20 */
++	BPF_MOV32_IMM(BPF_REG_4, 20),
++	/* *(u32 *)(r10 -8) = r4 */
++	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
++	/* *(u32 *)(r10 -4) = r4 */
++	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -4),
++	/* r4 = *(u32 *)(r10 -4),  */
++	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -4),
++	/* r0 = r2 */
++	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
++	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=U32_MAX */
++	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
++	/* if (r0 > r3) R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4= */
++	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
++	/* r0 = *(u32 *)r2 R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4= */
++	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = REJECT,
++	.errstr = "invalid access to packet",
++	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
++},
++{
++	"Spill and refill a umax=40 bounded scalar.  Offset to skb->data",
++	.insns = {
++	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++		    offsetof(struct __sk_buff, data)),
++	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++		    offsetof(struct __sk_buff, data_end)),
++	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_1,
++		    offsetof(struct __sk_buff, tstamp)),
++	BPF_JMP_IMM(BPF_JLE, BPF_REG_4, 40, 2),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	/* *(u32 *)(r10 -8) = r4 R4=umax=40 */
++	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
++	/* r4 = (*u32 *)(r10 - 8) */
++	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -8),
++	/* r2 += r4 R2=pkt R4=umax=40 */
++	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_4),
++	/* r0 = r2 R2=pkt,umax=40 R4=umax=40 */
++	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
++	/* r2 += 20 R0=pkt,umax=40 R2=pkt,umax=40 */
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 20),
++	/* if (r2 > r3) R0=pkt,umax=40 R2=pkt,off=20,umax=40 */
++	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_3, 1),
++	/* r0 = *(u32 *)r0 R0=pkt,r=20,umax=40 R2=pkt,off=20,r=20,umax=40 */
++	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
++},
++{
++	"Spill a u32 scalar at fp-4 and then at fp-8",
++	.insns = {
++	/* r4 = 4321 */
++	BPF_MOV32_IMM(BPF_REG_4, 4321),
++	/* *(u32 *)(r10 -4) = r4 */
++	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -4),
++	/* *(u32 *)(r10 -8) = r4 */
++	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
++	/* r4 = *(u64 *)(r10 -8) */
++	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.result = ACCEPT,
++	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
++},
+diff --git a/tools/testing/selftests/bpf/verifier/var_off.c b/tools/testing/selftests/bpf/verifier/var_off.c
+index eab1f7f56e2f0..dc92a29f0d744 100644
+--- a/tools/testing/selftests/bpf/verifier/var_off.c
++++ b/tools/testing/selftests/bpf/verifier/var_off.c
+@@ -212,31 +212,6 @@
+ 	.result = REJECT,
+ 	.prog_type = BPF_PROG_TYPE_LWT_IN,
+ },
+-{
+-	"indirect variable-offset stack access, max_off+size > max_initialized",
+-	.insns = {
+-	/* Fill only the second from top 8 bytes of the stack. */
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0),
+-	/* Get an unknown value. */
+-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
+-	/* Make it small and 4-byte aligned. */
+-	BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4),
+-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 16),
+-	/* Add it to fp.  We now have either fp-12 or fp-16, but we don't know
+-	 * which. fp-12 size 8 is partially uninitialized stack.
+-	 */
+-	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10),
+-	/* Dereference it indirectly. */
+-	BPF_LD_MAP_FD(BPF_REG_1, 0),
+-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+-	BPF_MOV64_IMM(BPF_REG_0, 0),
+-	BPF_EXIT_INSN(),
+-	},
+-	.fixup_map_hash_8b = { 5 },
+-	.errstr = "invalid indirect read from stack R2 var_off",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_LWT_IN,
+-},
+ {
+ 	"indirect variable-offset stack access, min_off < min_initialized",
+ 	.insns = {
+@@ -289,33 +264,6 @@
+ 	.result = ACCEPT,
+ 	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+ },
+-{
+-	"indirect variable-offset stack access, uninitialized",
+-	.insns = {
+-	BPF_MOV64_IMM(BPF_REG_2, 6),
+-	BPF_MOV64_IMM(BPF_REG_3, 28),
+-	/* Fill the top 16 bytes of the stack. */
+-	BPF_ST_MEM(BPF_W, BPF_REG_10, -16, 0),
+-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+-	/* Get an unknown value. */
+-	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, 0),
+-	/* Make it small and 4-byte aligned. */
+-	BPF_ALU64_IMM(BPF_AND, BPF_REG_4, 4),
+-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_4, 16),
+-	/* Add it to fp.  We now have either fp-12 or fp-16, we don't know
+-	 * which, but either way it points to initialized stack.
+-	 */
+-	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_10),
+-	BPF_MOV64_IMM(BPF_REG_5, 8),
+-	/* Dereference it indirectly. */
+-	BPF_EMIT_CALL(BPF_FUNC_getsockopt),
+-	BPF_MOV64_IMM(BPF_REG_0, 0),
+-	BPF_EXIT_INSN(),
+-	},
+-	.errstr = "invalid indirect read from stack R4 var_off",
+-	.result = REJECT,
+-	.prog_type = BPF_PROG_TYPE_SOCK_OPS,
+-},
+ {
+ 	"indirect variable-offset stack access, ok",
+ 	.insns = {
+diff --git a/tools/testing/selftests/net/msg_zerocopy.c b/tools/testing/selftests/net/msg_zerocopy.c
+index bdc03a2097e85..7ea5fb28c93db 100644
+--- a/tools/testing/selftests/net/msg_zerocopy.c
++++ b/tools/testing/selftests/net/msg_zerocopy.c
+@@ -85,6 +85,7 @@ static bool cfg_rx;
+ static int  cfg_runtime_ms	= 4200;
+ static int  cfg_verbose;
+ static int  cfg_waittime_ms	= 500;
++static int  cfg_notification_limit = 32;
+ static bool cfg_zerocopy;
+ 
+ static socklen_t cfg_alen;
+@@ -95,6 +96,7 @@ static char payload[IP_MAXPACKET];
+ static long packets, bytes, completions, expected_completions;
+ static int  zerocopied = -1;
+ static uint32_t next_completion;
++static uint32_t sends_since_notify;
+ 
+ static unsigned long gettimeofday_ms(void)
+ {
+@@ -208,6 +210,7 @@ static bool do_sendmsg(int fd, struct msghdr *msg, bool do_zerocopy, int domain)
+ 		error(1, errno, "send");
+ 	if (cfg_verbose && ret != len)
+ 		fprintf(stderr, "send: ret=%u != %u\n", ret, len);
++	sends_since_notify++;
+ 
+ 	if (len) {
+ 		packets++;
+@@ -435,7 +438,7 @@ static bool do_recv_completion(int fd, int domain)
+ 	/* Detect notification gaps. These should not happen often, if at all.
+ 	 * Gaps can occur due to drops, reordering and retransmissions.
+ 	 */
+-	if (lo != next_completion)
++	if (cfg_verbose && lo != next_completion)
+ 		fprintf(stderr, "gap: %u..%u does not append to %u\n",
+ 			lo, hi, next_completion);
+ 	next_completion = hi + 1;
+@@ -460,6 +463,7 @@ static bool do_recv_completion(int fd, int domain)
+ static void do_recv_completions(int fd, int domain)
+ {
+ 	while (do_recv_completion(fd, domain)) {}
++	sends_since_notify = 0;
+ }
+ 
+ /* Wait for all remaining completions on the errqueue */
+@@ -549,6 +553,9 @@ static void do_tx(int domain, int type, int protocol)
+ 		else
+ 			do_sendmsg(fd, &msg, cfg_zerocopy, domain);
+ 
++		if (cfg_zerocopy && sends_since_notify >= cfg_notification_limit)
++			do_recv_completions(fd, domain);
++
+ 		while (!do_poll(fd, POLLOUT)) {
+ 			if (cfg_zerocopy)
+ 				do_recv_completions(fd, domain);
+@@ -708,7 +715,7 @@ static void parse_opts(int argc, char **argv)
+ 
+ 	cfg_payload_len = max_payload_len;
+ 
+-	while ((c = getopt(argc, argv, "46c:C:D:i:mp:rs:S:t:vz")) != -1) {
++	while ((c = getopt(argc, argv, "46c:C:D:i:l:mp:rs:S:t:vz")) != -1) {
+ 		switch (c) {
+ 		case '4':
+ 			if (cfg_family != PF_UNSPEC)
+@@ -736,6 +743,9 @@ static void parse_opts(int argc, char **argv)
+ 			if (cfg_ifindex == 0)
+ 				error(1, errno, "invalid iface: %s", optarg);
+ 			break;
++		case 'l':
++			cfg_notification_limit = strtoul(optarg, NULL, 0);
++			break;
+ 		case 'm':
+ 			cfg_cork_mixed = true;
+ 			break;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-07-27  9:17 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-07-27  9:17 UTC (permalink / raw
  To: gentoo-commits

commit:     72412e10d28ef50809261bc99f27714df60c120c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jul 27 09:16:58 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jul 27 09:16:58 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=72412e10

Linux patch 5.10.223

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1222_linux-5.10.223.patch | 1857 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1861 insertions(+)

diff --git a/0000_README b/0000_README
index 4ec64a79..e3ae6e55 100644
--- a/0000_README
+++ b/0000_README
@@ -931,6 +931,10 @@ Patch:  1221_linux-5.10.222.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.222
 
+Patch:  1222_linux-5.10.223.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.223
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1222_linux-5.10.223.patch b/1222_linux-5.10.223.patch
new file mode 100644
index 00000000..9d5a9a51
--- /dev/null
+++ b/1222_linux-5.10.223.patch
@@ -0,0 +1,1857 @@
+diff --git a/Makefile b/Makefile
+index 5b6dae61250c4..91c79c64f5a39 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 222
++SUBLEVEL = 223
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
+index d9db752c51fe2..effcf14e0d477 100644
+--- a/arch/arm/include/asm/uaccess.h
++++ b/arch/arm/include/asm/uaccess.h
+@@ -147,16 +147,6 @@ extern int __get_user_64t_1(void *);
+ extern int __get_user_64t_2(void *);
+ extern int __get_user_64t_4(void *);
+ 
+-#define __GUP_CLOBBER_1	"lr", "cc"
+-#ifdef CONFIG_CPU_USE_DOMAINS
+-#define __GUP_CLOBBER_2	"ip", "lr", "cc"
+-#else
+-#define __GUP_CLOBBER_2 "lr", "cc"
+-#endif
+-#define __GUP_CLOBBER_4	"lr", "cc"
+-#define __GUP_CLOBBER_32t_8 "lr", "cc"
+-#define __GUP_CLOBBER_8	"lr", "cc"
+-
+ #define __get_user_x(__r2, __p, __e, __l, __s)				\
+ 	   __asm__ __volatile__ (					\
+ 		__asmeq("%0", "r0") __asmeq("%1", "r2")			\
+@@ -164,7 +154,7 @@ extern int __get_user_64t_4(void *);
+ 		"bl	__get_user_" #__s				\
+ 		: "=&r" (__e), "=r" (__r2)				\
+ 		: "0" (__p), "r" (__l)					\
+-		: __GUP_CLOBBER_##__s)
++		: "ip", "lr", "cc")
+ 
+ /* narrowing a double-word get into a single 32bit word register: */
+ #ifdef __ARMEB__
+@@ -186,7 +176,7 @@ extern int __get_user_64t_4(void *);
+ 		"bl	__get_user_64t_" #__s				\
+ 		: "=&r" (__e), "=r" (__r2)				\
+ 		: "0" (__p), "r" (__l)					\
+-		: __GUP_CLOBBER_##__s)
++		: "ip", "lr", "cc")
+ #else
+ #define __get_user_x_64t __get_user_x
+ #endif
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index d766f3b5c03ec..e990e727cc0fa 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -1796,6 +1796,7 @@ dwc3@6a00000 {
+ 				snps,dis_u2_susphy_quirk;
+ 				snps,dis_enblslpm_quirk;
+ 				snps,is-utmi-l1-suspend;
++				snps,parkmode-disable-ss-quirk;
+ 				tx-fifo-resize;
+ 			};
+ 		};
+diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
+index f0ba854f0045e..34370be75acd5 100644
+--- a/arch/arm64/kernel/armv8_deprecated.c
++++ b/arch/arm64/kernel/armv8_deprecated.c
+@@ -471,6 +471,9 @@ static int run_all_insn_set_hw_mode(unsigned int cpu)
+ 	for (i = 0; i < ARRAY_SIZE(insn_emulations); i++) {
+ 		struct insn_emulation *insn = insn_emulations[i];
+ 		bool enable = READ_ONCE(insn->current_mode) == INSN_HW;
++		if (insn->status == INSN_UNAVAILABLE)
++			continue;
++
+ 		if (insn->set_hw_mode && insn->set_hw_mode(enable)) {
+ 			pr_warn("CPU[%u] cannot support the emulation of %s",
+ 				cpu, insn->name);
+diff --git a/arch/mips/kernel/syscalls/syscall_o32.tbl b/arch/mips/kernel/syscalls/syscall_o32.tbl
+index 6036af4f30e2d..c262975484fa4 100644
+--- a/arch/mips/kernel/syscalls/syscall_o32.tbl
++++ b/arch/mips/kernel/syscalls/syscall_o32.tbl
+@@ -27,7 +27,7 @@
+ 17	o32	break				sys_ni_syscall
+ # 18 was sys_stat
+ 18	o32	unused18			sys_ni_syscall
+-19	o32	lseek				sys_lseek
++19	o32	lseek				sys_lseek			compat_sys_lseek
+ 20	o32	getpid				sys_getpid
+ 21	o32	mount				sys_mount
+ 22	o32	umount				sys_oldumount
+diff --git a/arch/powerpc/kernel/eeh_pe.c b/arch/powerpc/kernel/eeh_pe.c
+index 845e024321d47..a856d9ba42d20 100644
+--- a/arch/powerpc/kernel/eeh_pe.c
++++ b/arch/powerpc/kernel/eeh_pe.c
+@@ -849,6 +849,7 @@ struct pci_bus *eeh_pe_bus_get(struct eeh_pe *pe)
+ {
+ 	struct eeh_dev *edev;
+ 	struct pci_dev *pdev;
++	struct pci_bus *bus = NULL;
+ 
+ 	if (pe->type & EEH_PE_PHB)
+ 		return pe->phb->bus;
+@@ -859,9 +860,11 @@ struct pci_bus *eeh_pe_bus_get(struct eeh_pe *pe)
+ 
+ 	/* Retrieve the parent PCI bus of first (top) PCI device */
+ 	edev = list_first_entry_or_null(&pe->edevs, struct eeh_dev, entry);
++	pci_lock_rescan_remove();
+ 	pdev = eeh_dev_to_pci_dev(edev);
+ 	if (pdev)
+-		return pdev->bus;
++		bus = pdev->bus;
++	pci_unlock_rescan_remove();
+ 
+-	return NULL;
++	return bus;
+ }
+diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
+index c640053ab03f2..2686ba59873dd 100644
+--- a/arch/powerpc/kvm/book3s_64_vio.c
++++ b/arch/powerpc/kvm/book3s_64_vio.c
+@@ -117,14 +117,16 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ 	}
+ 	rcu_read_unlock();
+ 
+-	fdput(f);
+-
+-	if (!found)
++	if (!found) {
++		fdput(f);
+ 		return -EINVAL;
++	}
+ 
+ 	table_group = iommu_group_get_iommudata(grp);
+-	if (WARN_ON(!table_group))
++	if (WARN_ON(!table_group)) {
++		fdput(f);
+ 		return -EFAULT;
++	}
+ 
+ 	for (i = 0; i < IOMMU_TABLE_GROUP_MAX_TABLES; ++i) {
+ 		struct iommu_table *tbltmp = table_group->tables[i];
+@@ -145,8 +147,10 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ 			break;
+ 		}
+ 	}
+-	if (!tbl)
++	if (!tbl) {
++		fdput(f);
+ 		return -EINVAL;
++	}
+ 
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(stit, &stt->iommu_tables, next) {
+@@ -157,6 +161,7 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ 			/* stit is being destroyed */
+ 			iommu_tce_table_put(tbl);
+ 			rcu_read_unlock();
++			fdput(f);
+ 			return -ENOTTY;
+ 		}
+ 		/*
+@@ -164,6 +169,7 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ 		 * its KVM reference counter and can return.
+ 		 */
+ 		rcu_read_unlock();
++		fdput(f);
+ 		return 0;
+ 	}
+ 	rcu_read_unlock();
+@@ -171,6 +177,7 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ 	stit = kzalloc(sizeof(*stit), GFP_KERNEL);
+ 	if (!stit) {
+ 		iommu_tce_table_put(tbl);
++		fdput(f);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -179,6 +186,7 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ 
+ 	list_add_rcu(&stit->next, &stt->iommu_tables);
+ 
++	fdput(f);
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
+index 822be2680b792..8e4a2e8aee114 100644
+--- a/arch/powerpc/platforms/pseries/setup.c
++++ b/arch/powerpc/platforms/pseries/setup.c
+@@ -312,8 +312,8 @@ static int alloc_dispatch_log_kmem_cache(void)
+ {
+ 	void (*ctor)(void *) = get_dtl_cache_ctor();
+ 
+-	dtl_cache = kmem_cache_create("dtl", DISPATCH_LOG_BYTES,
+-						DISPATCH_LOG_BYTES, 0, ctor);
++	dtl_cache = kmem_cache_create_usercopy("dtl", DISPATCH_LOG_BYTES,
++						DISPATCH_LOG_BYTES, 0, 0, DISPATCH_LOG_BYTES, ctor);
+ 	if (!dtl_cache) {
+ 		pr_warn("Failed to create dispatch trace log buffer cache\n");
+ 		pr_warn("Stolen time statistics will be unreliable\n");
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 487884420fb0d..01a6400c32349 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -1316,10 +1316,13 @@ acpi_ec_space_handler(u32 function, acpi_physical_address address,
+ 	if (ec->busy_polling || bits > 8)
+ 		acpi_ec_burst_enable(ec);
+ 
+-	for (i = 0; i < bytes; ++i, ++address, ++value)
++	for (i = 0; i < bytes; ++i, ++address, ++value) {
+ 		result = (function == ACPI_READ) ?
+ 			acpi_ec_read(ec, address, value) :
+ 			acpi_ec_write(ec, address, *value);
++		if (result < 0)
++			break;
++	}
+ 
+ 	if (ec->busy_polling || bits > 8)
+ 		acpi_ec_burst_disable(ec);
+@@ -1331,8 +1334,10 @@ acpi_ec_space_handler(u32 function, acpi_physical_address address,
+ 		return AE_NOT_FOUND;
+ 	case -ETIME:
+ 		return AE_TIME;
+-	default:
++	case 0:
+ 		return AE_OK;
++	default:
++		return AE_ERROR;
+ 	}
+ }
+ 
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 3deeabb273940..ae07927910ca0 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -16,7 +16,6 @@
+ #include <linux/acpi.h>
+ #include <linux/dmi.h>
+ #include <linux/sched.h>       /* need_resched() */
+-#include <linux/sort.h>
+ #include <linux/tick.h>
+ #include <linux/cpuidle.h>
+ #include <linux/cpu.h>
+@@ -390,28 +389,24 @@ static void acpi_processor_power_verify_c3(struct acpi_processor *pr,
+ 	return;
+ }
+ 
+-static int acpi_cst_latency_cmp(const void *a, const void *b)
++static void acpi_cst_latency_sort(struct acpi_processor_cx *states, size_t length)
+ {
+-	const struct acpi_processor_cx *x = a, *y = b;
++	int i, j, k;
+ 
+-	if (!(x->valid && y->valid))
+-		return 0;
+-	if (x->latency > y->latency)
+-		return 1;
+-	if (x->latency < y->latency)
+-		return -1;
+-	return 0;
+-}
+-static void acpi_cst_latency_swap(void *a, void *b, int n)
+-{
+-	struct acpi_processor_cx *x = a, *y = b;
+-	u32 tmp;
++	for (i = 1; i < length; i++) {
++		if (!states[i].valid)
++			continue;
+ 
+-	if (!(x->valid && y->valid))
+-		return;
+-	tmp = x->latency;
+-	x->latency = y->latency;
+-	y->latency = tmp;
++		for (j = i - 1, k = i; j >= 0; j--) {
++			if (!states[j].valid)
++				continue;
++
++			if (states[j].latency > states[k].latency)
++				swap(states[j].latency, states[k].latency);
++
++			k = j;
++		}
++	}
+ }
+ 
+ static int acpi_processor_power_verify(struct acpi_processor *pr)
+@@ -456,10 +451,7 @@ static int acpi_processor_power_verify(struct acpi_processor *pr)
+ 
+ 	if (buggy_latency) {
+ 		pr_notice("FW issue: working around C-state latencies out of order\n");
+-		sort(&pr->power.states[1], max_cstate,
+-		     sizeof(struct acpi_processor_cx),
+-		     acpi_cst_latency_cmp,
+-		     acpi_cst_latency_swap);
++		acpi_cst_latency_sort(&pr->power.states[1], max_cstate);
+ 	}
+ 
+ 	lapic_timer_propagate_broadcast(pr);
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index 862a9420df526..8e24fb93324cb 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -1743,8 +1743,8 @@ static int null_validate_conf(struct nullb_device *dev)
+ 		return -EINVAL;
+ 	}
+ 
+-	dev->blocksize = round_down(dev->blocksize, 512);
+-	dev->blocksize = clamp_t(unsigned int, dev->blocksize, 512, 4096);
++	if (blk_validate_block_size(dev->blocksize))
++		return -EINVAL;
+ 
+ 	if (dev->queue_mode == NULL_Q_MQ && dev->use_per_node_hctx) {
+ 		if (dev->submit_queues != nr_online_nodes)
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index c54834ed8751e..1bc3167d3a1c3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -2069,7 +2069,7 @@ static int sdma_v4_0_process_trap_irq(struct amdgpu_device *adev,
+ 				      struct amdgpu_irq_src *source,
+ 				      struct amdgpu_iv_entry *entry)
+ {
+-	uint32_t instance;
++	int instance;
+ 
+ 	DRM_DEBUG("IH: SDMA trap\n");
+ 	instance = sdma_v4_0_irq_id_to_seq(entry->client_id);
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 400281feb4e8d..8246662fa60b7 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -1476,16 +1476,47 @@ static void elantech_disconnect(struct psmouse *psmouse)
+ 	psmouse->private = NULL;
+ }
+ 
++/*
++ * Some hw_version 4 models fail to properly activate absolute mode on
++ * resume without going through disable/enable cycle.
++ */
++static const struct dmi_system_id elantech_needs_reenable[] = {
++#if defined(CONFIG_DMI) && defined(CONFIG_X86)
++	{
++		/* Lenovo N24 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "81AF"),
++		},
++	},
++#endif
++	{ }
++};
++
+ /*
+  * Put the touchpad back into absolute mode when reconnecting
+  */
+ static int elantech_reconnect(struct psmouse *psmouse)
+ {
++	int err;
++
+ 	psmouse_reset(psmouse);
+ 
+ 	if (elantech_detect(psmouse, 0))
+ 		return -1;
+ 
++	if (dmi_check_system(elantech_needs_reenable)) {
++		err = ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_DISABLE);
++		if (err)
++			psmouse_warn(psmouse, "failed to deactivate mouse on %s: %d\n",
++				     psmouse->ps2dev.serio->phys, err);
++
++		err = ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_ENABLE);
++		if (err)
++			psmouse_warn(psmouse, "failed to reactivate mouse on %s: %d\n",
++				     psmouse->ps2dev.serio->phys, err);
++	}
++
+ 	if (elantech_set_absolute_mode(psmouse)) {
+ 		psmouse_err(psmouse,
+ 			    "failed to put touchpad back into absolute mode.\n");
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index 6804970d8f51a..91edfb88a218e 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -75,7 +75,7 @@ static inline void i8042_write_command(int val)
+ #define SERIO_QUIRK_PROBE_DEFER		BIT(5)
+ #define SERIO_QUIRK_RESET_ALWAYS	BIT(6)
+ #define SERIO_QUIRK_RESET_NEVER		BIT(7)
+-#define SERIO_QUIRK_DIECT		BIT(8)
++#define SERIO_QUIRK_DIRECT		BIT(8)
+ #define SERIO_QUIRK_DUMBKBD		BIT(9)
+ #define SERIO_QUIRK_NOLOOP		BIT(10)
+ #define SERIO_QUIRK_NOTIMEOUT		BIT(11)
+@@ -1235,6 +1235,20 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+ 					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
++	{
++		/*
++		 * The Ayaneo Kun is a handheld device where some the buttons
++		 * are handled by an AT keyboard. The keyboard is usually
++		 * detected as raw, but sometimes, usually after a cold boot,
++		 * it is detected as translated. Make sure that the keyboard
++		 * is always in raw mode.
++		 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AYANEO"),
++			DMI_MATCH(DMI_BOARD_NAME, "KUN"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_DIRECT)
++	},
+ 	{ }
+ };
+ 
+@@ -1553,7 +1567,7 @@ static void __init i8042_check_quirks(void)
+ 		if (quirks & SERIO_QUIRK_RESET_NEVER)
+ 			i8042_reset = I8042_RESET_NEVER;
+ 	}
+-	if (quirks & SERIO_QUIRK_DIECT)
++	if (quirks & SERIO_QUIRK_DIRECT)
+ 		i8042_direct = true;
+ 	if (quirks & SERIO_QUIRK_DUMBKBD)
+ 		i8042_dumbkbd = true;
+diff --git a/drivers/input/touchscreen/silead.c b/drivers/input/touchscreen/silead.c
+index e8b6c3137420b..901e28bc01645 100644
+--- a/drivers/input/touchscreen/silead.c
++++ b/drivers/input/touchscreen/silead.c
+@@ -70,7 +70,6 @@ struct silead_ts_data {
+ 	struct regulator_bulk_data regulators[2];
+ 	char fw_name[64];
+ 	struct touchscreen_properties prop;
+-	u32 max_fingers;
+ 	u32 chip_id;
+ 	struct input_mt_pos pos[SILEAD_MAX_FINGERS];
+ 	int slots[SILEAD_MAX_FINGERS];
+@@ -98,7 +97,7 @@ static int silead_ts_request_input_dev(struct silead_ts_data *data)
+ 	input_set_abs_params(data->input, ABS_MT_POSITION_Y, 0, 4095, 0, 0);
+ 	touchscreen_parse_properties(data->input, true, &data->prop);
+ 
+-	input_mt_init_slots(data->input, data->max_fingers,
++	input_mt_init_slots(data->input, SILEAD_MAX_FINGERS,
+ 			    INPUT_MT_DIRECT | INPUT_MT_DROP_UNUSED |
+ 			    INPUT_MT_TRACK);
+ 
+@@ -145,10 +144,10 @@ static void silead_ts_read_data(struct i2c_client *client)
+ 		return;
+ 	}
+ 
+-	if (buf[0] > data->max_fingers) {
++	if (buf[0] > SILEAD_MAX_FINGERS) {
+ 		dev_warn(dev, "More touches reported then supported %d > %d\n",
+-			 buf[0], data->max_fingers);
+-		buf[0] = data->max_fingers;
++			 buf[0], SILEAD_MAX_FINGERS);
++		buf[0] = SILEAD_MAX_FINGERS;
+ 	}
+ 
+ 	touch_nr = 0;
+@@ -200,7 +199,6 @@ static void silead_ts_read_data(struct i2c_client *client)
+ 
+ static int silead_ts_init(struct i2c_client *client)
+ {
+-	struct silead_ts_data *data = i2c_get_clientdata(client);
+ 	int error;
+ 
+ 	error = i2c_smbus_write_byte_data(client, SILEAD_REG_RESET,
+@@ -210,7 +208,7 @@ static int silead_ts_init(struct i2c_client *client)
+ 	usleep_range(SILEAD_CMD_SLEEP_MIN, SILEAD_CMD_SLEEP_MAX);
+ 
+ 	error = i2c_smbus_write_byte_data(client, SILEAD_REG_TOUCH_NR,
+-					data->max_fingers);
++					  SILEAD_MAX_FINGERS);
+ 	if (error)
+ 		goto i2c_write_err;
+ 	usleep_range(SILEAD_CMD_SLEEP_MIN, SILEAD_CMD_SLEEP_MAX);
+@@ -437,13 +435,6 @@ static void silead_ts_read_props(struct i2c_client *client)
+ 	const char *str;
+ 	int error;
+ 
+-	error = device_property_read_u32(dev, "silead,max-fingers",
+-					 &data->max_fingers);
+-	if (error) {
+-		dev_dbg(dev, "Max fingers read error %d\n", error);
+-		data->max_fingers = 5; /* Most devices handle up-to 5 fingers */
+-	}
+-
+ 	error = device_property_read_string(dev, "firmware-name", &str);
+ 	if (!error)
+ 		snprintf(data->fw_name, sizeof(data->fw_name),
+diff --git a/drivers/misc/mei/main.c b/drivers/misc/mei/main.c
+index 9f6682033ed7e..d8311d41f0a7b 100644
+--- a/drivers/misc/mei/main.c
++++ b/drivers/misc/mei/main.c
+@@ -329,7 +329,7 @@ static ssize_t mei_write(struct file *file, const char __user *ubuf,
+ 	}
+ 
+ 	if (!mei_cl_is_connected(cl)) {
+-		cl_err(dev, cl, "is not connected");
++		cl_dbg(dev, cl, "is not connected");
+ 		rets = -ENODEV;
+ 		goto out;
+ 	}
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index 411b3adb1d9ea..a96b223984070 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -266,7 +266,7 @@ int kvaser_usb_send_cmd_async(struct kvaser_usb_net_priv *priv, void *cmd,
+ 	}
+ 	usb_free_urb(urb);
+ 
+-	return 0;
++	return err;
+ }
+ 
+ int kvaser_usb_can_rx_over_error(struct net_device *netdev)
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index 41ee56015a45e..16fa0e3e752ab 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -1141,6 +1141,11 @@ static int tap_get_user_xdp(struct tap_queue *q, struct xdp_buff *xdp)
+ 	struct sk_buff *skb;
+ 	int err, depth;
+ 
++	if (unlikely(xdp->data_end - xdp->data < ETH_HLEN)) {
++		err = -EINVAL;
++		goto err;
++	}
++
+ 	if (q->flags & IFF_VNET_HDR)
+ 		vnet_hdr_len = READ_ONCE(q->vnet_hdr_sz);
+ 
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 77e63e7366e78..c34c6f0d23efe 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -2472,6 +2472,9 @@ static int tun_xdp_one(struct tun_struct *tun,
+ 	bool skb_xdp = false;
+ 	struct page *page;
+ 
++	if (unlikely(datasize < ETH_HLEN))
++		return -EINVAL;
++
+ 	xdp_prog = rcu_dereference(tun->xdp_prog);
+ 	if (xdp_prog) {
+ 		if (gso->gso_type) {
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 4dd1a9fb4c8a0..d2a8238e144a6 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1312,6 +1312,8 @@ static const struct usb_device_id products[] = {
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1260, 2)},	/* Telit LE910Cx */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1261, 2)},	/* Telit LE910Cx */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1900, 1)},	/* Telit LN940 series */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x3000, 0)},	/* Telit FN912 series */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x3001, 0)},	/* Telit FN912 series */
+ 	{QMI_FIXED_INTF(0x1c9e, 0x9801, 3)},	/* Telewell TW-3G HSPA+ */
+ 	{QMI_FIXED_INTF(0x1c9e, 0x9803, 4)},	/* Telewell TW-3G HSPA+ */
+ 	{QMI_FIXED_INTF(0x1c9e, 0x9b01, 3)},	/* XS Stick W100-2 from 4G Systems */
+diff --git a/drivers/s390/char/sclp.c b/drivers/s390/char/sclp.c
+index d2ab3f07c008c..8296e6bc229ee 100644
+--- a/drivers/s390/char/sclp.c
++++ b/drivers/s390/char/sclp.c
+@@ -1208,6 +1208,7 @@ sclp_init(void)
+ fail_unregister_reboot_notifier:
+ 	unregister_reboot_notifier(&sclp_reboot_notifier);
+ fail_init_state_uninitialized:
++	list_del(&sclp_state_change_event.list);
+ 	sclp_init_state = sclp_init_state_uninitialized;
+ 	free_page((unsigned long) sclp_read_sccb);
+ 	free_page((unsigned long) sclp_init_sccb);
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index 297f2b412d074..17fa1cd91da61 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -182,6 +182,15 @@ void scsi_remove_host(struct Scsi_Host *shost)
+ 	scsi_proc_host_rm(shost);
+ 	scsi_proc_hostdir_rm(shost->hostt);
+ 
++	/*
++	 * New SCSI devices cannot be attached anymore because of the SCSI host
++	 * state so drop the tag set refcnt. Wait until the tag set refcnt drops
++	 * to zero because .exit_cmd_priv implementations may need the host
++	 * pointer.
++	 */
++	kref_put(&shost->tagset_refcnt, scsi_mq_free_tags);
++	wait_for_completion(&shost->tagset_freed);
++
+ 	spin_lock_irqsave(shost->host_lock, flags);
+ 	if (scsi_host_set_state(shost, SHOST_DEL))
+ 		BUG_ON(scsi_host_set_state(shost, SHOST_DEL_RECOVERY));
+@@ -240,6 +249,9 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+ 
+ 	shost->dma_dev = dma_dev;
+ 
++	kref_init(&shost->tagset_refcnt);
++	init_completion(&shost->tagset_freed);
++
+ 	/*
+ 	 * Increase usage count temporarily here so that calling
+ 	 * scsi_autopm_put_host() will trigger runtime idle if there is
+@@ -312,6 +324,7 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+ 	pm_runtime_disable(&shost->shost_gendev);
+ 	pm_runtime_set_suspended(&shost->shost_gendev);
+ 	pm_runtime_put_noidle(&shost->shost_gendev);
++	kref_put(&shost->tagset_refcnt, scsi_mq_free_tags);
+  fail:
+ 	return error;
+ }
+@@ -344,9 +357,6 @@ static void scsi_host_dev_release(struct device *dev)
+ 		kfree(dev_name(&shost->shost_dev));
+ 	}
+ 
+-	if (shost->tag_set.tags)
+-		scsi_mq_destroy_tags(shost);
+-
+ 	kfree(shost->shost_data);
+ 
+ 	ida_simple_remove(&host_index_ida, shost->host_no);
+diff --git a/drivers/scsi/libsas/sas_internal.h b/drivers/scsi/libsas/sas_internal.h
+index 52e09c3e2b50d..3ef2fde28b8ed 100644
+--- a/drivers/scsi/libsas/sas_internal.h
++++ b/drivers/scsi/libsas/sas_internal.h
+@@ -114,6 +114,20 @@ static inline void sas_fail_probe(struct domain_device *dev, const char *func, i
+ 		func, dev->parent ? "exp-attached" :
+ 		"direct-attached",
+ 		SAS_ADDR(dev->sas_addr), err);
++
++	/*
++	 * If the device probe failed, the expander phy attached address
++	 * needs to be reset so that the phy will not be treated as flutter
++	 * in the next revalidation
++	 */
++	if (dev->parent && !dev_is_expander(dev->dev_type)) {
++		struct sas_phy *phy = dev->phy;
++		struct domain_device *parent = dev->parent;
++		struct ex_phy *ex_phy = &parent->ex_dev.ex_phy[phy->number];
++
++		memset(ex_phy->attached_sas_addr, 0, SAS_ADDR_SIZE);
++	}
++
+ 	sas_unregister_dev(dev->port, dev);
+ }
+ 
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 6923862be3fbc..2536da96130ea 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3453,6 +3453,7 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
+ 	}
+ 
+ 	/* Start the Slowpath-process */
++	memset(&slowpath_params, 0, sizeof(struct qed_slowpath_params));
+ 	slowpath_params.int_mode = QED_INT_MODE_MSIX;
+ 	slowpath_params.drv_major = QEDF_DRIVER_MAJOR_VER;
+ 	slowpath_params.drv_minor = QEDF_DRIVER_MINOR_VER;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 14dec86ff749e..64ae7bc2de604 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1923,9 +1923,13 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
+ 	return blk_mq_alloc_tag_set(tag_set);
+ }
+ 
+-void scsi_mq_destroy_tags(struct Scsi_Host *shost)
++void scsi_mq_free_tags(struct kref *kref)
+ {
++	struct Scsi_Host *shost = container_of(kref, typeof(*shost),
++					       tagset_refcnt);
++
+ 	blk_mq_free_tag_set(&shost->tag_set);
++	complete(&shost->tagset_freed);
+ }
+ 
+ /**
+diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h
+index 1183dbed687c6..3fee9af059257 100644
+--- a/drivers/scsi/scsi_priv.h
++++ b/drivers/scsi/scsi_priv.h
+@@ -93,7 +93,7 @@ extern void scsi_requeue_run_queue(struct work_struct *work);
+ extern struct request_queue *scsi_mq_alloc_queue(struct scsi_device *sdev);
+ extern void scsi_start_queue(struct scsi_device *sdev);
+ extern int scsi_mq_setup_tags(struct Scsi_Host *shost);
+-extern void scsi_mq_destroy_tags(struct Scsi_Host *shost);
++extern void scsi_mq_free_tags(struct kref *kref);
+ extern void scsi_exit_queue(void);
+ extern void scsi_evt_thread(struct work_struct *work);
+ struct request_queue;
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 6f7c4d41c51de..e8703b043805e 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -273,6 +273,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
+ 		kfree(sdev);
+ 		goto out;
+ 	}
++	kref_get(&sdev->host->tagset_refcnt);
+ 	WARN_ON_ONCE(!blk_get_queue(sdev->request_queue));
+ 	sdev->request_queue->queuedata = sdev;
+ 
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 6cc4d0792e3d0..bb37d698da1a1 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -1481,6 +1481,7 @@ void __scsi_remove_device(struct scsi_device *sdev)
+ 	mutex_unlock(&sdev->state_mutex);
+ 
+ 	blk_cleanup_queue(sdev->request_queue);
++	kref_put(&sdev->host->tagset_refcnt, scsi_mq_free_tags);
+ 	cancel_work_sync(&sdev->requeue_work);
+ 
+ 	if (sdev->host->hostt->slave_destroy)
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 21297cc62571a..8566da12d15e3 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -1001,7 +1001,7 @@ static struct spi_imx_devtype_data imx35_cspi_devtype_data = {
+ 	.rx_available = mx31_rx_available,
+ 	.reset = mx31_reset,
+ 	.fifo_size = 8,
+-	.has_dmamode = true,
++	.has_dmamode = false,
+ 	.dynamic_burst = false,
+ 	.has_slavemode = false,
+ 	.devtype = IMX35_CSPI,
+diff --git a/drivers/spi/spi-mux.c b/drivers/spi/spi-mux.c
+index 9708b7827ff70..b18c5265e858c 100644
+--- a/drivers/spi/spi-mux.c
++++ b/drivers/spi/spi-mux.c
+@@ -149,6 +149,7 @@ static int spi_mux_probe(struct spi_device *spi)
+ 	/* supported modes are the same as our parent's */
+ 	ctlr->mode_bits = spi->controller->mode_bits;
+ 	ctlr->flags = spi->controller->flags;
++	ctlr->bits_per_word_mask = spi->controller->bits_per_word_mask;
+ 	ctlr->transfer_one_message = spi_mux_transfer_one_message;
+ 	ctlr->setup = spi_mux_setup;
+ 	ctlr->num_chipselect = mux_control_states(priv->mux);
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 50669ff9346c6..83d17f22335b1 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1197,7 +1197,7 @@ int btrfs_quota_enable(struct btrfs_fs_info *fs_info)
+ 
+ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ {
+-	struct btrfs_root *quota_root;
++	struct btrfs_root *quota_root = NULL;
+ 	struct btrfs_trans_handle *trans = NULL;
+ 	int ret = 0;
+ 
+@@ -1290,9 +1290,9 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 	btrfs_tree_unlock(quota_root->node);
+ 	btrfs_free_tree_block(trans, quota_root, quota_root->node, 0, 1);
+ 
+-	btrfs_put_root(quota_root);
+ 
+ out:
++	btrfs_put_root(quota_root);
+ 	mutex_unlock(&fs_info->qgroup_ioctl_lock);
+ 	if (ret && trans)
+ 		btrfs_end_transaction(trans);
+diff --git a/fs/dcache.c b/fs/dcache.c
+index 406a71abb1b59..5febd219fdebf 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -3092,28 +3092,25 @@ EXPORT_SYMBOL(d_splice_alias);
+   
+ bool is_subdir(struct dentry *new_dentry, struct dentry *old_dentry)
+ {
+-	bool result;
++	bool subdir;
+ 	unsigned seq;
+ 
+ 	if (new_dentry == old_dentry)
+ 		return true;
+ 
+-	do {
+-		/* for restarting inner loop in case of seq retry */
+-		seq = read_seqbegin(&rename_lock);
+-		/*
+-		 * Need rcu_readlock to protect against the d_parent trashing
+-		 * due to d_move
+-		 */
+-		rcu_read_lock();
+-		if (d_ancestor(old_dentry, new_dentry))
+-			result = true;
+-		else
+-			result = false;
+-		rcu_read_unlock();
+-	} while (read_seqretry(&rename_lock, seq));
+-
+-	return result;
++	/* Access d_parent under rcu as d_move() may change it. */
++	rcu_read_lock();
++	seq = read_seqbegin(&rename_lock);
++	subdir = d_ancestor(old_dentry, new_dentry);
++	 /* Try lockless once... */
++	if (read_seqretry(&rename_lock, seq)) {
++		/* ...else acquire lock for progress even on deep chains. */
++		read_seqlock_excl(&rename_lock);
++		subdir = d_ancestor(old_dentry, new_dentry);
++		read_sequnlock_excl(&rename_lock);
++	}
++	rcu_read_unlock();
++	return subdir;
+ }
+ EXPORT_SYMBOL(is_subdir);
+ 
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 0149d3c2cfd78..02236f298de93 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -46,6 +46,7 @@
+ #include <linux/part_stat.h>
+ #include <linux/kthread.h>
+ #include <linux/freezer.h>
++#include <linux/fsnotify.h>
+ 
+ #include "ext4.h"
+ #include "ext4_extents.h"	/* Needed for trace points definition */
+@@ -699,6 +700,7 @@ void __ext4_error(struct super_block *sb, const char *function,
+ 		       sb->s_id, function, line, current->comm, &vaf);
+ 		va_end(args);
+ 	}
++	fsnotify_sb_error(sb, NULL, error ? error : EFSCORRUPTED);
+ 	save_error_info(sb, error, 0, block, function, line);
+ 	ext4_handle_error(sb);
+ }
+@@ -730,6 +732,7 @@ void __ext4_error_inode(struct inode *inode, const char *function,
+ 			       current->comm, &vaf);
+ 		va_end(args);
+ 	}
++	fsnotify_sb_error(inode->i_sb, inode, error ? error : EFSCORRUPTED);
+ 	save_error_info(inode->i_sb, error, inode->i_ino, block,
+ 			function, line);
+ 	ext4_handle_error(inode->i_sb);
+@@ -769,6 +772,7 @@ void __ext4_error_file(struct file *file, const char *function,
+ 			       current->comm, path, &vaf);
+ 		va_end(args);
+ 	}
++	fsnotify_sb_error(inode->i_sb, inode, EFSCORRUPTED);
+ 	save_error_info(inode->i_sb, EFSCORRUPTED, inode->i_ino, block,
+ 			function, line);
+ 	ext4_handle_error(inode->i_sb);
+@@ -837,7 +841,7 @@ void __ext4_std_error(struct super_block *sb, const char *function,
+ 		printk(KERN_CRIT "EXT4-fs error (device %s) in %s:%d: %s\n",
+ 		       sb->s_id, function, line, errstr);
+ 	}
+-
++	fsnotify_sb_error(sb, NULL, errno ? errno : EFSCORRUPTED);
+ 	save_error_info(sb, -errno, 0, 0, function, line);
+ 	ext4_handle_error(sb);
+ }
+@@ -861,6 +865,7 @@ void __ext4_abort(struct super_block *sb, const char *function,
+ 	if (unlikely(ext4_forced_shutdown(EXT4_SB(sb))))
+ 		return;
+ 
++	fsnotify_sb_error(sb, NULL, error ? error : EFSCORRUPTED);
+ 	save_error_info(sb, error, 0, 0, function, line);
+ 	va_start(args, fmt);
+ 	vaf.fmt = fmt;
+@@ -5887,7 +5892,7 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	}
+ 
+ 	if (ext4_test_mount_flag(sb, EXT4_MF_FS_ABORTED))
+-		ext4_abort(sb, EXT4_ERR_ESHUTDOWN, "Abort forced by user");
++		ext4_abort(sb, ESHUTDOWN, "Abort forced by user");
+ 
+ 	sb->s_flags = (sb->s_flags & ~SB_POSIXACL) |
+ 		(test_opt(sb, POSIX_ACL) ? SB_POSIXACL : 0);
+diff --git a/fs/file.c b/fs/file.c
+index fdb84a64724b7..913f7d897d2fc 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -494,12 +494,12 @@ struct files_struct init_files = {
+ 
+ static unsigned int find_next_fd(struct fdtable *fdt, unsigned int start)
+ {
+-	unsigned int maxfd = fdt->max_fds;
++	unsigned int maxfd = fdt->max_fds; /* always multiple of BITS_PER_LONG */
+ 	unsigned int maxbit = maxfd / BITS_PER_LONG;
+ 	unsigned int bitbit = start / BITS_PER_LONG;
+ 
+ 	bitbit = find_next_zero_bit(fdt->full_fds_bits, maxbit, bitbit) * BITS_PER_LONG;
+-	if (bitbit > maxfd)
++	if (bitbit >= maxfd)
+ 		return maxfd;
+ 	if (bitbit > start)
+ 		start = bitbit;
+diff --git a/fs/hfsplus/xattr.c b/fs/hfsplus/xattr.c
+index bb0b27d88e502..d91f76ef18d9b 100644
+--- a/fs/hfsplus/xattr.c
++++ b/fs/hfsplus/xattr.c
+@@ -700,7 +700,7 @@ ssize_t hfsplus_listxattr(struct dentry *dentry, char *buffer, size_t size)
+ 		return err;
+ 	}
+ 
+-	strbuf = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN +
++	strbuf = kzalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN +
+ 			XATTR_MAC_OSX_PREFIX_LEN + 1, GFP_KERNEL);
+ 	if (!strbuf) {
+ 		res = -ENOMEM;
+diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c
+index 7ae54f78a5b0b..aea5531559c06 100644
+--- a/fs/jfs/xattr.c
++++ b/fs/jfs/xattr.c
+@@ -797,7 +797,7 @@ ssize_t __jfs_getxattr(struct inode *inode, const char *name, void *data,
+ 		       size_t buf_size)
+ {
+ 	struct jfs_ea_list *ealist;
+-	struct jfs_ea *ea;
++	struct jfs_ea *ea, *ealist_end;
+ 	struct ea_buffer ea_buf;
+ 	int xattr_size;
+ 	ssize_t size;
+@@ -817,9 +817,16 @@ ssize_t __jfs_getxattr(struct inode *inode, const char *name, void *data,
+ 		goto not_found;
+ 
+ 	ealist = (struct jfs_ea_list *) ea_buf.xattr;
++	ealist_end = END_EALIST(ealist);
+ 
+ 	/* Find the named attribute */
+-	for (ea = FIRST_EA(ealist); ea < END_EALIST(ealist); ea = NEXT_EA(ea))
++	for (ea = FIRST_EA(ealist); ea < ealist_end; ea = NEXT_EA(ea)) {
++		if (unlikely(ea + 1 > ealist_end) ||
++		    unlikely(NEXT_EA(ea) > ealist_end)) {
++			size = -EUCLEAN;
++			goto release;
++		}
++
+ 		if ((namelen == ea->namelen) &&
+ 		    memcmp(name, ea->name, namelen) == 0) {
+ 			/* Found it */
+@@ -834,6 +841,7 @@ ssize_t __jfs_getxattr(struct inode *inode, const char *name, void *data,
+ 			memcpy(data, value, size);
+ 			goto release;
+ 		}
++	}
+       not_found:
+ 	size = -ENODATA;
+       release:
+@@ -861,7 +869,7 @@ ssize_t jfs_listxattr(struct dentry * dentry, char *data, size_t buf_size)
+ 	ssize_t size = 0;
+ 	int xattr_size;
+ 	struct jfs_ea_list *ealist;
+-	struct jfs_ea *ea;
++	struct jfs_ea *ea, *ealist_end;
+ 	struct ea_buffer ea_buf;
+ 
+ 	down_read(&JFS_IP(inode)->xattr_sem);
+@@ -876,9 +884,16 @@ ssize_t jfs_listxattr(struct dentry * dentry, char *data, size_t buf_size)
+ 		goto release;
+ 
+ 	ealist = (struct jfs_ea_list *) ea_buf.xattr;
++	ealist_end = END_EALIST(ealist);
+ 
+ 	/* compute required size of list */
+-	for (ea = FIRST_EA(ealist); ea < END_EALIST(ealist); ea = NEXT_EA(ea)) {
++	for (ea = FIRST_EA(ealist); ea < ealist_end; ea = NEXT_EA(ea)) {
++		if (unlikely(ea + 1 > ealist_end) ||
++		    unlikely(NEXT_EA(ea) > ealist_end)) {
++			size = -EUCLEAN;
++			goto release;
++		}
++
+ 		if (can_list(ea))
+ 			size += name_size(ea) + 1;
+ 	}
+diff --git a/fs/locks.c b/fs/locks.c
+index 843fa3d3375d4..ed8b3e318f97d 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -2588,8 +2588,9 @@ int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
+ 	error = do_lock_file_wait(filp, cmd, file_lock);
+ 
+ 	/*
+-	 * Attempt to detect a close/fcntl race and recover by releasing the
+-	 * lock that was just acquired. There is no need to do that when we're
++	 * Detect close/fcntl races and recover by zapping all POSIX locks
++	 * associated with this file and our files_struct, just like on
++	 * filp_flush(). There is no need to do that when we're
+ 	 * unlocking though, or for OFD locks.
+ 	 */
+ 	if (!error && file_lock->fl_type != F_UNLCK &&
+@@ -2604,9 +2605,7 @@ int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
+ 		f = files_lookup_fd_locked(files, fd);
+ 		spin_unlock(&files->file_lock);
+ 		if (f != filp) {
+-			file_lock->fl_type = F_UNLCK;
+-			error = do_lock_file_wait(filp, cmd, file_lock);
+-			WARN_ON_ONCE(error);
++			locks_remove_posix(filp, files);
+ 			error = -EBADF;
+ 		}
+ 	}
+@@ -2720,8 +2719,9 @@ int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
+ 	error = do_lock_file_wait(filp, cmd, file_lock);
+ 
+ 	/*
+-	 * Attempt to detect a close/fcntl race and recover by releasing the
+-	 * lock that was just acquired. There is no need to do that when we're
++	 * Detect close/fcntl races and recover by zapping all POSIX locks
++	 * associated with this file and our files_struct, just like on
++	 * filp_flush(). There is no need to do that when we're
+ 	 * unlocking though, or for OFD locks.
+ 	 */
+ 	if (!error && file_lock->fl_type != F_UNLCK &&
+@@ -2736,9 +2736,7 @@ int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
+ 		f = files_lookup_fd_locked(files, fd);
+ 		spin_unlock(&files->file_lock);
+ 		if (f != filp) {
+-			file_lock->fl_type = F_UNLCK;
+-			error = do_lock_file_wait(filp, cmd, file_lock);
+-			WARN_ON_ONCE(error);
++			locks_remove_posix(filp, files);
+ 			error = -EBADF;
+ 		}
+ 	}
+diff --git a/fs/ocfs2/dir.c b/fs/ocfs2/dir.c
+index bdfba9db558ac..4cc29b808d180 100644
+--- a/fs/ocfs2/dir.c
++++ b/fs/ocfs2/dir.c
+@@ -296,13 +296,16 @@ static void ocfs2_dx_dir_name_hash(struct inode *dir, const char *name, int len,
+  * bh passed here can be an inode block or a dir data block, depending
+  * on the inode inline data flag.
+  */
+-static int ocfs2_check_dir_entry(struct inode * dir,
+-				 struct ocfs2_dir_entry * de,
+-				 struct buffer_head * bh,
++static int ocfs2_check_dir_entry(struct inode *dir,
++				 struct ocfs2_dir_entry *de,
++				 struct buffer_head *bh,
++				 char *buf,
++				 unsigned int size,
+ 				 unsigned long offset)
+ {
+ 	const char *error_msg = NULL;
+ 	const int rlen = le16_to_cpu(de->rec_len);
++	const unsigned long next_offset = ((char *) de - buf) + rlen;
+ 
+ 	if (unlikely(rlen < OCFS2_DIR_REC_LEN(1)))
+ 		error_msg = "rec_len is smaller than minimal";
+@@ -310,9 +313,11 @@ static int ocfs2_check_dir_entry(struct inode * dir,
+ 		error_msg = "rec_len % 4 != 0";
+ 	else if (unlikely(rlen < OCFS2_DIR_REC_LEN(de->name_len)))
+ 		error_msg = "rec_len is too small for name_len";
+-	else if (unlikely(
+-		 ((char *) de - bh->b_data) + rlen > dir->i_sb->s_blocksize))
+-		error_msg = "directory entry across blocks";
++	else if (unlikely(next_offset > size))
++		error_msg = "directory entry overrun";
++	else if (unlikely(next_offset > size - OCFS2_DIR_REC_LEN(1)) &&
++		 next_offset != size)
++		error_msg = "directory entry too close to end";
+ 
+ 	if (unlikely(error_msg != NULL))
+ 		mlog(ML_ERROR, "bad entry in directory #%llu: %s - "
+@@ -354,16 +359,17 @@ static inline int ocfs2_search_dirblock(struct buffer_head *bh,
+ 	de_buf = first_de;
+ 	dlimit = de_buf + bytes;
+ 
+-	while (de_buf < dlimit) {
++	while (de_buf < dlimit - OCFS2_DIR_MEMBER_LEN) {
+ 		/* this code is executed quadratically often */
+ 		/* do minimal checking `by hand' */
+ 
+ 		de = (struct ocfs2_dir_entry *) de_buf;
+ 
+-		if (de_buf + namelen <= dlimit &&
++		if (de->name + namelen <= dlimit &&
+ 		    ocfs2_match(namelen, name, de)) {
+ 			/* found a match - just to be sure, do a full check */
+-			if (!ocfs2_check_dir_entry(dir, de, bh, offset)) {
++			if (!ocfs2_check_dir_entry(dir, de, bh, first_de,
++						   bytes, offset)) {
+ 				ret = -1;
+ 				goto bail;
+ 			}
+@@ -1140,7 +1146,7 @@ static int __ocfs2_delete_entry(handle_t *handle, struct inode *dir,
+ 	pde = NULL;
+ 	de = (struct ocfs2_dir_entry *) first_de;
+ 	while (i < bytes) {
+-		if (!ocfs2_check_dir_entry(dir, de, bh, i)) {
++		if (!ocfs2_check_dir_entry(dir, de, bh, first_de, bytes, i)) {
+ 			status = -EIO;
+ 			mlog_errno(status);
+ 			goto bail;
+@@ -1640,7 +1646,8 @@ int __ocfs2_add_entry(handle_t *handle,
+ 		/* These checks should've already been passed by the
+ 		 * prepare function, but I guess we can leave them
+ 		 * here anyway. */
+-		if (!ocfs2_check_dir_entry(dir, de, insert_bh, offset)) {
++		if (!ocfs2_check_dir_entry(dir, de, insert_bh, data_start,
++					   size, offset)) {
+ 			retval = -ENOENT;
+ 			goto bail;
+ 		}
+@@ -1778,7 +1785,8 @@ static int ocfs2_dir_foreach_blk_id(struct inode *inode,
+ 		}
+ 
+ 		de = (struct ocfs2_dir_entry *) (data->id_data + ctx->pos);
+-		if (!ocfs2_check_dir_entry(inode, de, di_bh, ctx->pos)) {
++		if (!ocfs2_check_dir_entry(inode, de, di_bh, (char *)data->id_data,
++					   i_size_read(inode), ctx->pos)) {
+ 			/* On error, skip the f_pos to the end. */
+ 			ctx->pos = i_size_read(inode);
+ 			break;
+@@ -1871,7 +1879,8 @@ static int ocfs2_dir_foreach_blk_el(struct inode *inode,
+ 		while (ctx->pos < i_size_read(inode)
+ 		       && offset < sb->s_blocksize) {
+ 			de = (struct ocfs2_dir_entry *) (bh->b_data + offset);
+-			if (!ocfs2_check_dir_entry(inode, de, bh, offset)) {
++			if (!ocfs2_check_dir_entry(inode, de, bh, bh->b_data,
++						   sb->s_blocksize, offset)) {
+ 				/* On error, skip the f_pos to the
+ 				   next block. */
+ 				ctx->pos = (ctx->pos | (sb->s_blocksize - 1)) + 1;
+@@ -3343,7 +3352,7 @@ static int ocfs2_find_dir_space_id(struct inode *dir, struct buffer_head *di_bh,
+ 	struct super_block *sb = dir->i_sb;
+ 	struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data;
+ 	struct ocfs2_dir_entry *de, *last_de = NULL;
+-	char *de_buf, *limit;
++	char *first_de, *de_buf, *limit;
+ 	unsigned long offset = 0;
+ 	unsigned int rec_len, new_rec_len, free_space = dir->i_sb->s_blocksize;
+ 
+@@ -3356,14 +3365,16 @@ static int ocfs2_find_dir_space_id(struct inode *dir, struct buffer_head *di_bh,
+ 	else
+ 		free_space = dir->i_sb->s_blocksize - i_size_read(dir);
+ 
+-	de_buf = di->id2.i_data.id_data;
++	first_de = di->id2.i_data.id_data;
++	de_buf = first_de;
+ 	limit = de_buf + i_size_read(dir);
+ 	rec_len = OCFS2_DIR_REC_LEN(namelen);
+ 
+ 	while (de_buf < limit) {
+ 		de = (struct ocfs2_dir_entry *)de_buf;
+ 
+-		if (!ocfs2_check_dir_entry(dir, de, di_bh, offset)) {
++		if (!ocfs2_check_dir_entry(dir, de, di_bh, first_de,
++					   i_size_read(dir), offset)) {
+ 			ret = -ENOENT;
+ 			goto out;
+ 		}
+@@ -3445,7 +3456,8 @@ static int ocfs2_find_dir_space_el(struct inode *dir, const char *name,
+ 			/* move to next block */
+ 			de = (struct ocfs2_dir_entry *) bh->b_data;
+ 		}
+-		if (!ocfs2_check_dir_entry(dir, de, bh, offset)) {
++		if (!ocfs2_check_dir_entry(dir, de, bh, bh->b_data, blocksize,
++					   offset)) {
+ 			status = -ENOENT;
+ 			goto bail;
+ 		}
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index e2af013ec05f4..49db90cfe375f 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -407,10 +407,12 @@ static inline void sk_psock_put(struct sock *sk, struct sk_psock *psock)
+ 
+ static inline void sk_psock_data_ready(struct sock *sk, struct sk_psock *psock)
+ {
++	read_lock_bh(&sk->sk_callback_lock);
+ 	if (psock->parser.enabled)
+ 		psock->parser.saved_data_ready(sk);
+ 	else
+ 		sk->sk_data_ready(sk);
++	read_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+ static inline void psock_set_prog(struct bpf_prog **pprog,
+diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
+index 4a9f1e6e3aaca..3979e946c3fd1 100644
+--- a/include/scsi/scsi_host.h
++++ b/include/scsi/scsi_host.h
+@@ -548,6 +548,8 @@ struct Scsi_Host {
+ 	struct scsi_host_template *hostt;
+ 	struct scsi_transport_template *transportt;
+ 
++	struct kref		tagset_refcnt;
++	struct completion	tagset_freed;
+ 	/* Area to keep a shared tag map */
+ 	struct blk_mq_tag_set	tag_set;
+ 
+diff --git a/include/sound/dmaengine_pcm.h b/include/sound/dmaengine_pcm.h
+index 8c5e38180fb04..618405da95b30 100644
+--- a/include/sound/dmaengine_pcm.h
++++ b/include/sound/dmaengine_pcm.h
+@@ -34,6 +34,7 @@ snd_pcm_uframes_t snd_dmaengine_pcm_pointer_no_residue(struct snd_pcm_substream
+ int snd_dmaengine_pcm_open(struct snd_pcm_substream *substream,
+ 	struct dma_chan *chan);
+ int snd_dmaengine_pcm_close(struct snd_pcm_substream *substream);
++int snd_dmaengine_pcm_sync_stop(struct snd_pcm_substream *substream);
+ 
+ int snd_dmaengine_pcm_open_request_chan(struct snd_pcm_substream *substream,
+ 	dma_filter_fn filter_fn, void *filter_data);
+diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
+index 1e4bf23528a3d..eac0026e2fa62 100644
+--- a/kernel/bpf/ringbuf.c
++++ b/kernel/bpf/ringbuf.c
+@@ -41,9 +41,12 @@ struct bpf_ringbuf {
+ 	 * mapping consumer page as r/w, but restrict producer page to r/o.
+ 	 * This protects producer position from being modified by user-space
+ 	 * application and ruining in-kernel position tracking.
++	 * Note that the pending counter is placed in the same
++	 * page as the producer, so that it shares the same cache line.
+ 	 */
+ 	unsigned long consumer_pos __aligned(PAGE_SIZE);
+ 	unsigned long producer_pos __aligned(PAGE_SIZE);
++	unsigned long pending_pos;
+ 	char data[] __aligned(PAGE_SIZE);
+ };
+ 
+@@ -145,6 +148,7 @@ static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
+ 	rb->mask = data_sz - 1;
+ 	rb->consumer_pos = 0;
+ 	rb->producer_pos = 0;
++	rb->pending_pos = 0;
+ 
+ 	return rb;
+ }
+@@ -323,9 +327,9 @@ bpf_ringbuf_restore_from_rec(struct bpf_ringbuf_hdr *hdr)
+ 
+ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
+ {
+-	unsigned long cons_pos, prod_pos, new_prod_pos, flags;
+-	u32 len, pg_off;
++	unsigned long cons_pos, prod_pos, new_prod_pos, pend_pos, flags;
+ 	struct bpf_ringbuf_hdr *hdr;
++	u32 len, pg_off, tmp_size, hdr_len;
+ 
+ 	if (unlikely(size > RINGBUF_MAX_RECORD_SZ))
+ 		return NULL;
+@@ -343,13 +347,29 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
+ 		spin_lock_irqsave(&rb->spinlock, flags);
+ 	}
+ 
++	pend_pos = rb->pending_pos;
+ 	prod_pos = rb->producer_pos;
+ 	new_prod_pos = prod_pos + len;
+ 
+-	/* check for out of ringbuf space by ensuring producer position
+-	 * doesn't advance more than (ringbuf_size - 1) ahead
++	while (pend_pos < prod_pos) {
++		hdr = (void *)rb->data + (pend_pos & rb->mask);
++		hdr_len = READ_ONCE(hdr->len);
++		if (hdr_len & BPF_RINGBUF_BUSY_BIT)
++			break;
++		tmp_size = hdr_len & ~BPF_RINGBUF_DISCARD_BIT;
++		tmp_size = round_up(tmp_size + BPF_RINGBUF_HDR_SZ, 8);
++		pend_pos += tmp_size;
++	}
++	rb->pending_pos = pend_pos;
++
++	/* check for out of ringbuf space:
++	 * - by ensuring producer position doesn't advance more than
++	 *   (ringbuf_size - 1) ahead
++	 * - by ensuring oldest not yet committed record until newest
++	 *   record does not span more than (ringbuf_size - 1)
+ 	 */
+-	if (new_prod_pos - cons_pos > rb->mask) {
++	if (new_prod_pos - cons_pos > rb->mask ||
++	    new_prod_pos - pend_pos > rb->mask) {
+ 		spin_unlock_irqrestore(&rb->spinlock, flags);
+ 		return NULL;
+ 	}
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index b9cf5bc9364c1..c8c1cd55c0eb0 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3839,7 +3839,11 @@ void hci_unregister_dev(struct hci_dev *hdev)
+ 	list_del(&hdev->list);
+ 	write_unlock(&hci_dev_list_lock);
+ 
++	cancel_work_sync(&hdev->rx_work);
++	cancel_work_sync(&hdev->cmd_work);
++	cancel_work_sync(&hdev->tx_work);
+ 	cancel_work_sync(&hdev->power_on);
++	cancel_work_sync(&hdev->error_reset);
+ 
+ 	if (!test_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks)) {
+ 		hci_suspend_clear_tasks(hdev);
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 475a19db37132..ce42626663de6 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -758,7 +758,9 @@ int inet_accept(struct socket *sock, struct socket *newsock, int flags,
+ 	sock_rps_record_flow(sk2);
+ 	WARN_ON(!((1 << sk2->sk_state) &
+ 		  (TCPF_ESTABLISHED | TCPF_SYN_RECV |
+-		  TCPF_CLOSE_WAIT | TCPF_CLOSE)));
++		   TCPF_FIN_WAIT1 | TCPF_FIN_WAIT2 |
++		   TCPF_CLOSING | TCPF_CLOSE_WAIT |
++		   TCPF_CLOSE)));
+ 
+ 	sock_graft(sk2, newsock);
+ 
+diff --git a/net/ipv6/ila/ila_lwt.c b/net/ipv6/ila/ila_lwt.c
+index 8c1ce78956bae..9d37f7164e732 100644
+--- a/net/ipv6/ila/ila_lwt.c
++++ b/net/ipv6/ila/ila_lwt.c
+@@ -58,7 +58,9 @@ static int ila_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 		return orig_dst->lwtstate->orig_output(net, sk, skb);
+ 	}
+ 
++	local_bh_disable();
+ 	dst = dst_cache_get(&ilwt->dst_cache);
++	local_bh_enable();
+ 	if (unlikely(!dst)) {
+ 		struct ipv6hdr *ip6h = ipv6_hdr(skb);
+ 		struct flowi6 fl6;
+@@ -86,8 +88,11 @@ static int ila_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 			goto drop;
+ 		}
+ 
+-		if (ilwt->connected)
++		if (ilwt->connected) {
++			local_bh_disable();
+ 			dst_cache_set_ip6(&ilwt->dst_cache, dst, &fl6.saddr);
++			local_bh_enable();
++		}
+ 	}
+ 
+ 	skb_dst_set(skb, dst);
+diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c
+index 5fdf3ebb953fb..2ba605db69769 100644
+--- a/net/ipv6/rpl_iptunnel.c
++++ b/net/ipv6/rpl_iptunnel.c
+@@ -217,9 +217,9 @@ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 	if (unlikely(err))
+ 		goto drop;
+ 
+-	preempt_disable();
++	local_bh_disable();
+ 	dst = dst_cache_get(&rlwt->cache);
+-	preempt_enable();
++	local_bh_enable();
+ 
+ 	if (unlikely(!dst)) {
+ 		struct ipv6hdr *hdr = ipv6_hdr(skb);
+@@ -239,9 +239,9 @@ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 			goto drop;
+ 		}
+ 
+-		preempt_disable();
++		local_bh_disable();
+ 		dst_cache_set_ip6(&rlwt->cache, dst, &fl6.saddr);
+-		preempt_enable();
++		local_bh_enable();
+ 	}
+ 
+ 	skb_dst_drop(skb);
+@@ -273,9 +273,8 @@ static int rpl_input(struct sk_buff *skb)
+ 		return err;
+ 	}
+ 
+-	preempt_disable();
++	local_bh_disable();
+ 	dst = dst_cache_get(&rlwt->cache);
+-	preempt_enable();
+ 
+ 	skb_dst_drop(skb);
+ 
+@@ -283,14 +282,13 @@ static int rpl_input(struct sk_buff *skb)
+ 		ip6_route_input(skb);
+ 		dst = skb_dst(skb);
+ 		if (!dst->error) {
+-			preempt_disable();
+ 			dst_cache_set_ip6(&rlwt->cache, dst,
+ 					  &ipv6_hdr(skb)->saddr);
+-			preempt_enable();
+ 		}
+ 	} else {
+ 		skb_dst_set(skb, dst);
+ 	}
++	local_bh_enable();
+ 
+ 	err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+ 	if (unlikely(err))
+diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
+index ce5825d6f1d1c..d3a9ce1f8e53f 100644
+--- a/net/mac80211/mesh.c
++++ b/net/mac80211/mesh.c
+@@ -1584,6 +1584,7 @@ void ieee80211_mesh_init_sdata(struct ieee80211_sub_if_data *sdata)
+ 	ifmsh->last_preq = jiffies;
+ 	ifmsh->next_perr = jiffies;
+ 	ifmsh->csa_role = IEEE80211_MESH_CSA_ROLE_NONE;
++	ifmsh->nonpeer_pm = NL80211_MESH_POWER_ACTIVE;
+ 	/* Allocate all mesh structures when creating the first mesh interface. */
+ 	if (!mesh_allocated)
+ 		ieee80211s_init();
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index b241ff8c015a9..be5d02c129e92 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -727,15 +727,21 @@ static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata,
+ 			local->hw_scan_ies_bufsize *= n_bands;
+ 		}
+ 
+-		local->hw_scan_req = kmalloc(
+-				sizeof(*local->hw_scan_req) +
+-				req->n_channels * sizeof(req->channels[0]) +
+-				local->hw_scan_ies_bufsize, GFP_KERNEL);
++		local->hw_scan_req = kmalloc(struct_size(local->hw_scan_req,
++							 req.channels,
++							 req->n_channels) +
++					     local->hw_scan_ies_bufsize,
++					     GFP_KERNEL);
+ 		if (!local->hw_scan_req)
+ 			return -ENOMEM;
+ 
+ 		local->hw_scan_req->req.ssids = req->ssids;
+ 		local->hw_scan_req->req.n_ssids = req->n_ssids;
++		/* None of the channels are actually set
++		 * up but let UBSAN know the boundaries.
++		 */
++		local->hw_scan_req->req.n_channels = req->n_channels;
++
+ 		ies = (u8 *)local->hw_scan_req +
+ 			sizeof(*local->hw_scan_req) +
+ 			req->n_channels * sizeof(req->channels[0]);
+diff --git a/net/mac802154/tx.c b/net/mac802154/tx.c
+index c829e4a753256..7cea95d0b78f9 100644
+--- a/net/mac802154/tx.c
++++ b/net/mac802154/tx.c
+@@ -34,8 +34,8 @@ void ieee802154_xmit_worker(struct work_struct *work)
+ 	if (res)
+ 		goto err_tx;
+ 
+-	dev->stats.tx_packets++;
+-	dev->stats.tx_bytes += skb->len;
++	DEV_STATS_INC(dev, tx_packets);
++	DEV_STATS_ADD(dev, tx_bytes, skb->len);
+ 
+ 	ieee802154_xmit_complete(&local->hw, skb, false);
+ 
+@@ -86,8 +86,8 @@ ieee802154_tx(struct ieee802154_local *local, struct sk_buff *skb)
+ 			goto err_tx;
+ 		}
+ 
+-		dev->stats.tx_packets++;
+-		dev->stats.tx_bytes += len;
++		DEV_STATS_INC(dev, tx_packets);
++		DEV_STATS_ADD(dev, tx_bytes, len);
+ 	} else {
+ 		local->tx_skb = skb;
+ 		queue_work(local->workqueue, &local->tx_work);
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index a6c289a61d30c..76a27b6d45d28 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -2772,10 +2772,14 @@ int cfg80211_wext_siwscan(struct net_device *dev,
+ 	wiphy = &rdev->wiphy;
+ 
+ 	/* Determine number of channels, needed to allocate creq */
+-	if (wreq && wreq->num_channels)
++	if (wreq && wreq->num_channels) {
++		/* Passed from userspace so should be checked */
++		if (unlikely(wreq->num_channels > IW_MAX_FREQUENCIES))
++			return -EINVAL;
+ 		n_channels = wreq->num_channels;
+-	else
++	} else {
+ 		n_channels = ieee80211_get_num_supported_channels(wiphy);
++	}
+ 
+ 	creq = kzalloc(sizeof(*creq) + sizeof(struct cfg80211_ssid) +
+ 		       n_channels * sizeof(void *),
+diff --git a/scripts/gcc-plugins/gcc-common.h b/scripts/gcc-plugins/gcc-common.h
+index 6d4563b8a52c6..0c037b8845308 100644
+--- a/scripts/gcc-plugins/gcc-common.h
++++ b/scripts/gcc-plugins/gcc-common.h
+@@ -980,4 +980,8 @@ static inline void debug_gimple_stmt(const_gimple s)
+ #define SET_DECL_MODE(decl, mode)	DECL_MODE(decl) = (mode)
+ #endif
+ 
++#if BUILDING_GCC_VERSION >= 14000
++#define last_stmt(x)			last_nondebug_stmt(x)
++#endif
++
+ #endif
+diff --git a/scripts/kconfig/expr.c b/scripts/kconfig/expr.c
+index 81ebf8108ca74..81dfdf4470f75 100644
+--- a/scripts/kconfig/expr.c
++++ b/scripts/kconfig/expr.c
+@@ -396,35 +396,6 @@ static struct expr *expr_eliminate_yn(struct expr *e)
+ 	return e;
+ }
+ 
+-/*
+- * bool FOO!=n => FOO
+- */
+-struct expr *expr_trans_bool(struct expr *e)
+-{
+-	if (!e)
+-		return NULL;
+-	switch (e->type) {
+-	case E_AND:
+-	case E_OR:
+-	case E_NOT:
+-		e->left.expr = expr_trans_bool(e->left.expr);
+-		e->right.expr = expr_trans_bool(e->right.expr);
+-		break;
+-	case E_UNEQUAL:
+-		// FOO!=n -> FOO
+-		if (e->left.sym->type == S_TRISTATE) {
+-			if (e->right.sym == &symbol_no) {
+-				e->type = E_SYMBOL;
+-				e->right.sym = NULL;
+-			}
+-		}
+-		break;
+-	default:
+-		;
+-	}
+-	return e;
+-}
+-
+ /*
+  * e1 || e2 -> ?
+  */
+diff --git a/scripts/kconfig/expr.h b/scripts/kconfig/expr.h
+index 5c3443692f346..385a47daa3643 100644
+--- a/scripts/kconfig/expr.h
++++ b/scripts/kconfig/expr.h
+@@ -302,7 +302,6 @@ void expr_free(struct expr *e);
+ void expr_eliminate_eq(struct expr **ep1, struct expr **ep2);
+ int expr_eq(struct expr *e1, struct expr *e2);
+ tristate expr_calc_value(struct expr *e);
+-struct expr *expr_trans_bool(struct expr *e);
+ struct expr *expr_eliminate_dups(struct expr *e);
+ struct expr *expr_transform(struct expr *e);
+ int expr_contains_symbol(struct expr *dep, struct symbol *sym);
+diff --git a/scripts/kconfig/gconf.c b/scripts/kconfig/gconf.c
+index 5527482c30779..4097999127315 100644
+--- a/scripts/kconfig/gconf.c
++++ b/scripts/kconfig/gconf.c
+@@ -1484,7 +1484,6 @@ int main(int ac, char *av[])
+ 
+ 	conf_parse(name);
+ 	fixup_rootmenu(&rootmenu);
+-	conf_read(NULL);
+ 
+ 	/* Load the interface and connect signals */
+ 	init_main_window(glade_file);
+@@ -1492,6 +1491,8 @@ int main(int ac, char *av[])
+ 	init_left_tree();
+ 	init_right_tree();
+ 
++	conf_read(NULL);
++
+ 	switch (view_mode) {
+ 	case SINGLE_VIEW:
+ 		display_tree_part();
+diff --git a/scripts/kconfig/menu.c b/scripts/kconfig/menu.c
+index a5fbd6ccc006e..e5ad6313cfa1d 100644
+--- a/scripts/kconfig/menu.c
++++ b/scripts/kconfig/menu.c
+@@ -401,8 +401,6 @@ void menu_finalize(struct menu *parent)
+ 				dep = expr_transform(dep);
+ 				dep = expr_alloc_and(expr_copy(basedep), dep);
+ 				dep = expr_eliminate_dups(dep);
+-				if (menu->sym && menu->sym->type != S_TRISTATE)
+-					dep = expr_trans_bool(dep);
+ 				prop->visible.expr = dep;
+ 
+ 				/*
+diff --git a/sound/core/pcm_dmaengine.c b/sound/core/pcm_dmaengine.c
+index be58505889a36..c46c53656393e 100644
+--- a/sound/core/pcm_dmaengine.c
++++ b/sound/core/pcm_dmaengine.c
+@@ -342,6 +342,20 @@ int snd_dmaengine_pcm_open_request_chan(struct snd_pcm_substream *substream,
+ }
+ EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_open_request_chan);
+ 
++int snd_dmaengine_pcm_sync_stop(struct snd_pcm_substream *substream)
++{
++	struct dmaengine_pcm_runtime_data *prtd = substream_to_prtd(substream);
++	struct dma_tx_state state;
++	enum dma_status status;
++
++	status = dmaengine_tx_status(prtd->dma_chan, prtd->cookie, &state);
++	if (status != DMA_PAUSED)
++		dmaengine_synchronize(prtd->dma_chan);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_sync_stop);
++
+ /**
+  * snd_dmaengine_pcm_close - Close a dmaengine based PCM substream
+  * @substream: PCM substream
+@@ -349,6 +363,12 @@ EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_open_request_chan);
+ int snd_dmaengine_pcm_close(struct snd_pcm_substream *substream)
+ {
+ 	struct dmaengine_pcm_runtime_data *prtd = substream_to_prtd(substream);
++	struct dma_tx_state state;
++	enum dma_status status;
++
++	status = dmaengine_tx_status(prtd->dma_chan, prtd->cookie, &state);
++	if (status == DMA_PAUSED)
++		dmaengine_terminate_async(prtd->dma_chan);
+ 
+ 	dmaengine_synchronize(prtd->dma_chan);
+ 	kfree(prtd);
+@@ -367,6 +387,12 @@ EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_close);
+ int snd_dmaengine_pcm_close_release_chan(struct snd_pcm_substream *substream)
+ {
+ 	struct dmaengine_pcm_runtime_data *prtd = substream_to_prtd(substream);
++	struct dma_tx_state state;
++	enum dma_status status;
++
++	status = dmaengine_tx_status(prtd->dma_chan, prtd->cookie, &state);
++	if (status == DMA_PAUSED)
++		dmaengine_terminate_async(prtd->dma_chan);
+ 
+ 	dmaengine_synchronize(prtd->dma_chan);
+ 	dma_release_channel(prtd->dma_chan);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 669937bae570e..28dbe8cbbffd8 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -578,10 +578,14 @@ static void alc_shutup_pins(struct hda_codec *codec)
+ 	switch (codec->core.vendor_id) {
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x10ec0257:
+ 	case 0x19e58326:
+ 	case 0x10ec0283:
++	case 0x10ec0285:
+ 	case 0x10ec0286:
++	case 0x10ec0287:
+ 	case 0x10ec0288:
++	case 0x10ec0295:
+ 	case 0x10ec0298:
+ 		alc_headset_mic_no_shutup(codec);
+ 		break;
+@@ -9092,6 +9096,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87b7, "HP Laptop 14-fq0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x87d3, "HP Laptop 15-gw0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e7, "HP ProBook 450 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f1, "HP ProBook 630 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+@@ -9190,6 +9195,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ 	SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
+ 	SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
++	SND_PCI_QUIRK(0x10ec, 0x119e, "Positivo SU C1400", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x10ec, 0x11bc, "VAIO VJFE-IL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x124c, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+@@ -9202,6 +9208,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc1a3, "Samsung Galaxy Book Pro (NP935XDB-KC1SE)", ALC298_FIXUP_SAMSUNG_AMP),
++	SND_PCI_QUIRK(0x144d, 0xc1a4, "Samsung Galaxy Book Pro 360 (NT935QBD)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc1a6, "Samsung Galaxy Book Pro 360 (NP930QBD)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ 	SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_AMP),
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 1d049685e7075..47b581d99da67 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -468,6 +468,17 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ARCHOS"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ARCHOS 101 CESIUM"),
++		},
++		.driver_data = (void *)(BYTCR_INPUT_DEFAULTS |
++					BYT_RT5640_JD_NOT_INV |
++					BYT_RT5640_DIFF_MIC |
++					BYT_RT5640_SSP0_AIF1 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ARCHOS"),
+diff --git a/sound/soc/soc-generic-dmaengine-pcm.c b/sound/soc/soc-generic-dmaengine-pcm.c
+index 9ef80a48707eb..d65dc1acff439 100644
+--- a/sound/soc/soc-generic-dmaengine-pcm.c
++++ b/sound/soc/soc-generic-dmaengine-pcm.c
+@@ -326,6 +326,12 @@ static int dmaengine_copy_user(struct snd_soc_component *component,
+ 	return 0;
+ }
+ 
++static int dmaengine_pcm_sync_stop(struct snd_soc_component *component,
++				   struct snd_pcm_substream *substream)
++{
++	return snd_dmaengine_pcm_sync_stop(substream);
++}
++
+ static const struct snd_soc_component_driver dmaengine_pcm_component = {
+ 	.name		= SND_DMAENGINE_PCM_DRV_NAME,
+ 	.probe_order	= SND_SOC_COMP_ORDER_LATE,
+@@ -335,6 +341,7 @@ static const struct snd_soc_component_driver dmaengine_pcm_component = {
+ 	.trigger	= dmaengine_pcm_trigger,
+ 	.pointer	= dmaengine_pcm_pointer,
+ 	.pcm_construct	= dmaengine_pcm_new,
++	.sync_stop	= dmaengine_pcm_sync_stop,
+ };
+ 
+ static const struct snd_soc_component_driver dmaengine_pcm_component_process = {
+@@ -347,6 +354,7 @@ static const struct snd_soc_component_driver dmaengine_pcm_component_process = {
+ 	.pointer	= dmaengine_pcm_pointer,
+ 	.copy_user	= dmaengine_copy_user,
+ 	.pcm_construct	= dmaengine_pcm_new,
++	.sync_stop	= dmaengine_pcm_sync_stop,
+ };
+ 
+ static const char * const dmaengine_pcm_dma_channel_names[] = {
+diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
+index a6b72ad53b434..61ea444f2018d 100644
+--- a/sound/soc/ti/davinci-mcasp.c
++++ b/sound/soc/ti/davinci-mcasp.c
+@@ -1441,10 +1441,11 @@ static int davinci_mcasp_hw_rule_min_periodsize(
+ {
+ 	struct snd_interval *period_size = hw_param_interval(params,
+ 						SNDRV_PCM_HW_PARAM_PERIOD_SIZE);
++	u8 numevt = *((u8 *)rule->private);
+ 	struct snd_interval frames;
+ 
+ 	snd_interval_any(&frames);
+-	frames.min = 64;
++	frames.min = numevt;
+ 	frames.integer = 1;
+ 
+ 	return snd_interval_refine(period_size, &frames);
+@@ -1459,6 +1460,7 @@ static int davinci_mcasp_startup(struct snd_pcm_substream *substream,
+ 	u32 max_channels = 0;
+ 	int i, dir, ret;
+ 	int tdm_slots = mcasp->tdm_slots;
++	u8 *numevt;
+ 
+ 	/* Do not allow more then one stream per direction */
+ 	if (mcasp->substreams[substream->stream])
+@@ -1558,9 +1560,12 @@ static int davinci_mcasp_startup(struct snd_pcm_substream *substream,
+ 			return ret;
+ 	}
+ 
++	numevt = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) ?
++			 &mcasp->txnumevt :
++			 &mcasp->rxnumevt;
+ 	snd_pcm_hw_rule_add(substream->runtime, 0,
+ 			    SNDRV_PCM_HW_PARAM_PERIOD_SIZE,
+-			    davinci_mcasp_hw_rule_min_periodsize, NULL,
++			    davinci_mcasp_hw_rule_min_periodsize, numevt,
+ 			    SNDRV_PCM_HW_PARAM_PERIOD_SIZE, -1);
+ 
+ 	return 0;
+diff --git a/sound/soc/ti/omap-hdmi.c b/sound/soc/ti/omap-hdmi.c
+index 3328c02f93c74..1dfe439d13417 100644
+--- a/sound/soc/ti/omap-hdmi.c
++++ b/sound/soc/ti/omap-hdmi.c
+@@ -353,11 +353,7 @@ static int omap_hdmi_audio_probe(struct platform_device *pdev)
+ 	if (!card)
+ 		return -ENOMEM;
+ 
+-	card->name = devm_kasprintf(dev, GFP_KERNEL,
+-				    "HDMI %s", dev_name(ad->dssdev));
+-	if (!card->name)
+-		return -ENOMEM;
+-
++	card->name = "HDMI";
+ 	card->owner = THIS_MODULE;
+ 	card->dai_link =
+ 		devm_kzalloc(dev, sizeof(*(card->dai_link)), GFP_KERNEL);
+diff --git a/tools/testing/selftests/openat2/openat2_test.c b/tools/testing/selftests/openat2/openat2_test.c
+index 453152b58e7f0..1045df1a98c07 100644
+--- a/tools/testing/selftests/openat2/openat2_test.c
++++ b/tools/testing/selftests/openat2/openat2_test.c
+@@ -5,6 +5,7 @@
+  */
+ 
+ #define _GNU_SOURCE
++#define __SANE_USERSPACE_TYPES__ // Use ll64
+ #include <fcntl.h>
+ #include <sched.h>
+ #include <sys/stat.h>
+diff --git a/tools/testing/selftests/vDSO/parse_vdso.c b/tools/testing/selftests/vDSO/parse_vdso.c
+index 413f75620a35b..4ae417372e9eb 100644
+--- a/tools/testing/selftests/vDSO/parse_vdso.c
++++ b/tools/testing/selftests/vDSO/parse_vdso.c
+@@ -55,14 +55,20 @@ static struct vdso_info
+ 	ELF(Verdef) *verdef;
+ } vdso_info;
+ 
+-/* Straight from the ELF specification. */
+-static unsigned long elf_hash(const unsigned char *name)
++/*
++ * Straight from the ELF specification...and then tweaked slightly, in order to
++ * avoid a few clang warnings.
++ */
++static unsigned long elf_hash(const char *name)
+ {
+ 	unsigned long h = 0, g;
+-	while (*name)
++	const unsigned char *uch_name = (const unsigned char *)name;
++
++	while (*uch_name)
+ 	{
+-		h = (h << 4) + *name++;
+-		if (g = h & 0xf0000000)
++		h = (h << 4) + *uch_name++;
++		g = h & 0xf0000000;
++		if (g)
+ 			h ^= g >> 24;
+ 		h &= ~g;
+ 	}
+diff --git a/tools/testing/selftests/vDSO/vdso_standalone_test_x86.c b/tools/testing/selftests/vDSO/vdso_standalone_test_x86.c
+index 8a44ff973ee17..27f6fdf119691 100644
+--- a/tools/testing/selftests/vDSO/vdso_standalone_test_x86.c
++++ b/tools/testing/selftests/vDSO/vdso_standalone_test_x86.c
+@@ -18,7 +18,7 @@
+ 
+ #include "parse_vdso.h"
+ 
+-/* We need a libc functions... */
++/* We need some libc functions... */
+ int strcmp(const char *a, const char *b)
+ {
+ 	/* This implementation is buggy: it never returns -1. */
+@@ -34,6 +34,20 @@ int strcmp(const char *a, const char *b)
+ 	return 0;
+ }
+ 
++/*
++ * The clang build needs this, although gcc does not.
++ * Stolen from lib/string.c.
++ */
++void *memcpy(void *dest, const void *src, size_t count)
++{
++	char *tmp = dest;
++	const char *s = src;
++
++	while (count--)
++		*tmp++ = *s++;
++	return dest;
++}
++
+ /* ...and two syscalls.  This is x86-specific. */
+ static inline long x86_syscall3(long nr, long a0, long a1, long a2)
+ {
+@@ -70,7 +84,7 @@ void to_base10(char *lastdig, time_t n)
+ 	}
+ }
+ 
+-__attribute__((externally_visible)) void c_main(void **stack)
++void c_main(void **stack)
+ {
+ 	/* Parse the stack */
+ 	long argc = (long)*stack;


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-07-27  9:20 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-07-27  9:20 UTC (permalink / raw
  To: gentoo-commits

commit:     2741f0788ab051b9f2e6f28dc34cdb384f360145
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jul 27 09:19:32 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jul 27 09:19:32 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2741f078

Remove redundant patch

Removed:
2945_handle-gcc-14-last-stmt-rename.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 2945_handle-gcc-14-last-stmt-rename.patch | 31 -------------------------------
 1 file changed, 31 deletions(-)

diff --git a/2945_handle-gcc-14-last-stmt-rename.patch b/2945_handle-gcc-14-last-stmt-rename.patch
deleted file mode 100644
index b04ce8da..00000000
--- a/2945_handle-gcc-14-last-stmt-rename.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-From: Kees Cook <keescook@chromium.org>
-To: linux-hardening@vger.kernel.org
-Cc: Kees Cook <keescook@chromium.org>, linux-kernel@vger.kernel.org
-Subject: [PATCH] gcc-plugins: Rename last_stmt() for GCC 14+
-Date: Thu, 10 Aug 2023 23:05:49 -0700	[thread overview]
-Message-ID: <20230811060545.never.564-kees@kernel.org> (raw)
-
-In GCC 14, last_stmt() was renamed to last_nondebug_stmt(). Add a helper
-macro to handle the renaming.
-
-Cc: linux-hardening@vger.kernel.org
-Signed-off-by: Kees Cook <keescook@chromium.org>
----
- scripts/gcc-plugins/gcc-common.h | 4 ++++
- 1 file changed, 4 insertions(+)
-
-diff --git a/scripts/gcc-plugins/gcc-common.h b/scripts/gcc-plugins/gcc-common.h
-index 84c730da36dd..1ae39b9f4a95 100644
---- a/scripts/gcc-plugins/gcc-common.h
-+++ b/scripts/gcc-plugins/gcc-common.h
-@@ -440,4 +440,8 @@ static inline void debug_gimple_stmt(const_gimple s)
- #define SET_DECL_MODE(decl, mode)	DECL_MODE(decl) = (mode)
- #endif
- 
-+#if BUILDING_GCC_VERSION >= 14000
-+#define last_stmt(x)			last_nondebug_stmt(x)
-+#endif
-+
- #endif
--- 
-2.34.1


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-08-19 10:44 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-08-19 10:44 UTC (permalink / raw
  To: gentoo-commits

commit:     250e1342c146b4c437f91385b94db16d1fdea404
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug 19 10:44:42 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Aug 19 10:44:42 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=250e1342

Linux patch 5.10.224

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1223_linux-5.10.224.patch | 11868 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11872 insertions(+)

diff --git a/0000_README b/0000_README
index e3ae6e55..28ec5762 100644
--- a/0000_README
+++ b/0000_README
@@ -935,6 +935,10 @@ Patch:  1222_linux-5.10.223.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.223
 
+Patch:  1223_linux-5.10.224.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.224
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1223_linux-5.10.224.patch b/1223_linux-5.10.224.patch
new file mode 100644
index 00000000..ab61c02e
--- /dev/null
+++ b/1223_linux-5.10.224.patch
@@ -0,0 +1,11868 @@
+diff --git a/Documentation/arm64/cpu-feature-registers.rst b/Documentation/arm64/cpu-feature-registers.rst
+index 749ae970c31955..8aaa5d13d3cc20 100644
+--- a/Documentation/arm64/cpu-feature-registers.rst
++++ b/Documentation/arm64/cpu-feature-registers.rst
+@@ -92,7 +92,7 @@ operation if the source belongs to the supported system register space.
+ 
+ The infrastructure emulates only the following system register space::
+ 
+-	Op0=3, Op1=0, CRn=0, CRm=0,4,5,6,7
++	Op0=3, Op1=0, CRn=0, CRm=0,2,3,4,5,6,7
+ 
+ (See Table C5-6 'System instruction encodings for non-Debug System
+ register accesses' in ARMv8 ARM DDI 0487A.h, for the list of
+@@ -291,6 +291,42 @@ infrastructure:
+      | RPRES                        | [7-4]   |    y    |
+      +------------------------------+---------+---------+
+ 
++  10) MVFR0_EL1 - AArch32 Media and VFP Feature Register 0
++
++     +------------------------------+---------+---------+
++     | Name                         |  bits   | visible |
++     +------------------------------+---------+---------+
++     | FPDP                         | [11-8]  |    y    |
++     +------------------------------+---------+---------+
++
++  11) MVFR1_EL1 - AArch32 Media and VFP Feature Register 1
++
++     +------------------------------+---------+---------+
++     | Name                         |  bits   | visible |
++     +------------------------------+---------+---------+
++     | SIMDFMAC                     | [31-28] |    y    |
++     +------------------------------+---------+---------+
++     | SIMDSP                       | [19-16] |    y    |
++     +------------------------------+---------+---------+
++     | SIMDInt                      | [15-12] |    y    |
++     +------------------------------+---------+---------+
++     | SIMDLS                       | [11-8]  |    y    |
++     +------------------------------+---------+---------+
++
++  12) ID_ISAR5_EL1 - AArch32 Instruction Set Attribute Register 5
++
++     +------------------------------+---------+---------+
++     | Name                         |  bits   | visible |
++     +------------------------------+---------+---------+
++     | CRC32                        | [19-16] |    y    |
++     +------------------------------+---------+---------+
++     | SHA2                         | [15-12] |    y    |
++     +------------------------------+---------+---------+
++     | SHA1                         | [11-8]  |    y    |
++     +------------------------------+---------+---------+
++     | AES                          | [7-4]   |    y    |
++     +------------------------------+---------+---------+
++
+ 
+ Appendix I: Example
+ -------------------
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index 10a26d44ef4a97..14eef7e93614be 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -94,16 +94,52 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A76      | #1463225        | ARM64_ERRATUM_1463225       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A76      | #3324349        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A77      | #1508412        | ARM64_ERRATUM_1508412       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A77      | #3324348        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A78      | #3324344        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A78C     | #3324346,3324347| ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A510     | #2457168        | ARM64_ERRATUM_2457168       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A710     | #3324338        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A720     | #3456091        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A725     | #3456106        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X1       | #3324344        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X1C      | #3324346        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X2       | #3324338        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X3       | #3324335        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X4       | #3194386        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X925     | #3324334        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1188873,1418040| ARM64_ERRATUM_1418040       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1349291        | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1542419        | ARM64_ERRATUM_1542419       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-N1     | #3324349        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-N2     | #3324339        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-V1     | #3324341        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-V2     | #3324336        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-V3     | #3312417        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | MMU-500         | #841119,826419  | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/devicetree/bindings/thermal/thermal-zones.yaml b/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
+index 1b3954aa71c157..af946e3da82a67 100644
+--- a/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
++++ b/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
+@@ -49,7 +49,10 @@ properties:
+       to take when the temperature crosses those thresholds.
+ 
+ patternProperties:
+-  "^[a-zA-Z][a-zA-Z0-9\\-]{1,12}-thermal$":
++  # Node name is limited in size due to Linux kernel requirements - 19
++  # characters in total (see THERMAL_NAME_LENGTH, including terminating NUL
++  # byte):
++  "^[a-zA-Z][a-zA-Z0-9\\-]{1,10}-thermal$":
+     type: object
+     description:
+       Each thermal zone node contains information about how frequently it
+diff --git a/Makefile b/Makefile
+index 91c79c64f5a397..38351794f6c93e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 223
++SUBLEVEL = 224
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx6q-kontron-samx6i.dtsi b/arch/arm/boot/dts/imx6q-kontron-samx6i.dtsi
+index 4d6a0c3e8455f9..ff062f4fd726eb 100644
+--- a/arch/arm/boot/dts/imx6q-kontron-samx6i.dtsi
++++ b/arch/arm/boot/dts/imx6q-kontron-samx6i.dtsi
+@@ -5,31 +5,8 @@
+ 
+ #include "imx6q.dtsi"
+ #include "imx6qdl-kontron-samx6i.dtsi"
+-#include <dt-bindings/gpio/gpio.h>
+ 
+ / {
+ 	model = "Kontron SMARC sAMX6i Quad/Dual";
+ 	compatible = "kontron,imx6q-samx6i", "fsl,imx6q";
+ };
+-
+-/* Quad/Dual SoMs have 3 chip-select signals */
+-&ecspi4 {
+-	cs-gpios = <&gpio3 24 GPIO_ACTIVE_LOW>,
+-		   <&gpio3 29 GPIO_ACTIVE_LOW>,
+-		   <&gpio3 25 GPIO_ACTIVE_LOW>;
+-};
+-
+-&pinctrl_ecspi4 {
+-	fsl,pins = <
+-		MX6QDL_PAD_EIM_D21__ECSPI4_SCLK 0x100b1
+-		MX6QDL_PAD_EIM_D28__ECSPI4_MOSI 0x100b1
+-		MX6QDL_PAD_EIM_D22__ECSPI4_MISO 0x100b1
+-
+-		/* SPI4_IMX_CS2# - connected to internal flash */
+-		MX6QDL_PAD_EIM_D24__GPIO3_IO24 0x1b0b0
+-		/* SPI4_IMX_CS0# - connected to SMARC SPI0_CS0# */
+-		MX6QDL_PAD_EIM_D29__GPIO3_IO29 0x1b0b0
+-		/* SPI4_CS3# - connected to  SMARC SPI0_CS1# */
+-		MX6QDL_PAD_EIM_D25__GPIO3_IO25 0x1b0b0
+-	>;
+-};
+diff --git a/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi b/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
+index 37d94aa45a8b72..2d1cc67628b4ee 100644
+--- a/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
+@@ -244,7 +244,8 @@ &ecspi4 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_ecspi4>;
+ 	cs-gpios = <&gpio3 24 GPIO_ACTIVE_LOW>,
+-		   <&gpio3 29 GPIO_ACTIVE_LOW>;
++		   <&gpio3 29 GPIO_ACTIVE_LOW>,
++		   <&gpio3 25 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ 
+ 	/* default boot source: workaround #1 for errata ERR006282 */
+@@ -259,8 +260,20 @@ smarc_flash: flash@0 {
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+-	phy-mode = "rgmii";
+-	phy-reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
++	phy-connection-type = "rgmii-id";
++	phy-handle = <&ethphy>;
++
++	mdio {
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		ethphy: ethernet-phy@1 {
++			compatible = "ethernet-phy-ieee802.3-c22";
++			reg = <1>;
++			reset-gpios = <&gpio2 1 GPIO_ACTIVE_LOW>;
++			reset-assert-us = <1000>;
++		};
++	};
+ };
+ 
+ &i2c_intern {
+@@ -448,6 +461,8 @@ MX6QDL_PAD_EIM_D22__ECSPI4_MISO 0x100b1
+ 			MX6QDL_PAD_EIM_D24__GPIO3_IO24 0x1b0b0
+ 			/* SPI_IMX_CS0# - connected to SMARC SPI0_CS0# */
+ 			MX6QDL_PAD_EIM_D29__GPIO3_IO29 0x1b0b0
++			/* SPI4_CS3# - connected to SMARC SPI0_CS1# */
++			MX6QDL_PAD_EIM_D25__GPIO3_IO25 0x1b0b0
+ 		>;
+ 	};
+ 
+@@ -500,7 +515,7 @@ MX6QDL_PAD_RGMII_RX_CTL__RGMII_RX_CTL 0x1b0b0
+ 			MX6QDL_PAD_ENET_MDIO__ENET_MDIO       0x1b0b0
+ 			MX6QDL_PAD_ENET_MDC__ENET_MDC         0x1b0b0
+ 			MX6QDL_PAD_ENET_REF_CLK__ENET_TX_CLK  0x1b0b0
+-			MX6QDL_PAD_ENET_CRS_DV__GPIO1_IO25    0x1b0b0 /* RST_GBE0_PHY# */
++			MX6QDL_PAD_NANDF_D1__GPIO2_IO01       0x1b0b0 /* RST_GBE0_PHY# */
+ 		>;
+ 	};
+ 
+@@ -713,7 +728,7 @@ &pcie {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_pcie>;
+ 	wake-up-gpio = <&gpio6 18 GPIO_ACTIVE_HIGH>;
+-	reset-gpio = <&gpio3 13 GPIO_ACTIVE_HIGH>;
++	reset-gpio = <&gpio3 13 GPIO_ACTIVE_LOW>;
+ };
+ 
+ /* LCD_BKLT_PWM */
+@@ -801,5 +816,6 @@ &wdog1 {
+ 	/* CPLD is feeded by watchdog (hardwired) */
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_wdog1>;
++	fsl,ext-reset-output;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/mach-pxa/include/mach/spitz.h b/arch/arm/mach-pxa/include/mach/spitz.h
+deleted file mode 100644
+index 04828d8918aa3f..00000000000000
+--- a/arch/arm/mach-pxa/include/mach/spitz.h
++++ /dev/null
+@@ -1,185 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-only */
+-/*
+- * Hardware specific definitions for SL-Cx000 series of PDAs
+- *
+- * Copyright (c) 2005 Alexander Wykes
+- * Copyright (c) 2005 Richard Purdie
+- *
+- * Based on Sharp's 2.4 kernel patches
+- */
+-#ifndef __ASM_ARCH_SPITZ_H
+-#define __ASM_ARCH_SPITZ_H  1
+-#endif
+-
+-#include "irqs.h" /* PXA_NR_BUILTIN_GPIO, PXA_GPIO_TO_IRQ */
+-#include <linux/fb.h>
+-
+-/* Spitz/Akita GPIOs */
+-
+-#define SPITZ_GPIO_KEY_INT         (0) /* Key Interrupt */
+-#define SPITZ_GPIO_RESET           (1)
+-#define SPITZ_GPIO_nSD_DETECT      (9)
+-#define SPITZ_GPIO_TP_INT          (11) /* Touch Panel interrupt */
+-#define SPITZ_GPIO_AK_INT          (13) /* Remote Control */
+-#define SPITZ_GPIO_ADS7846_CS      (14)
+-#define SPITZ_GPIO_SYNC            (16)
+-#define SPITZ_GPIO_MAX1111_CS      (20)
+-#define SPITZ_GPIO_FATAL_BAT       (21)
+-#define SPITZ_GPIO_HSYNC           (22)
+-#define SPITZ_GPIO_nSD_CLK         (32)
+-#define SPITZ_GPIO_USB_DEVICE      (35)
+-#define SPITZ_GPIO_USB_HOST        (37)
+-#define SPITZ_GPIO_USB_CONNECT     (41)
+-#define SPITZ_GPIO_LCDCON_CS       (53)
+-#define SPITZ_GPIO_nPCE            (54)
+-#define SPITZ_GPIO_nSD_WP          (81)
+-#define SPITZ_GPIO_ON_RESET        (89)
+-#define SPITZ_GPIO_BAT_COVER       (90)
+-#define SPITZ_GPIO_CF_CD           (94)
+-#define SPITZ_GPIO_ON_KEY          (95)
+-#define SPITZ_GPIO_SWA             (97)
+-#define SPITZ_GPIO_SWB             (96)
+-#define SPITZ_GPIO_CHRG_FULL       (101)
+-#define SPITZ_GPIO_CO              (101)
+-#define SPITZ_GPIO_CF_IRQ          (105)
+-#define SPITZ_GPIO_AC_IN           (115)
+-#define SPITZ_GPIO_HP_IN           (116)
+-
+-/* Spitz Only GPIOs */
+-
+-#define SPITZ_GPIO_CF2_IRQ         (106) /* CF slot1 Ready */
+-#define SPITZ_GPIO_CF2_CD          (93)
+-
+-
+-/* Spitz/Akita Keyboard Definitions */
+-
+-#define SPITZ_KEY_STROBE_NUM         (11)
+-#define SPITZ_KEY_SENSE_NUM          (7)
+-#define SPITZ_GPIO_G0_STROBE_BIT     0x0f800000
+-#define SPITZ_GPIO_G1_STROBE_BIT     0x00100000
+-#define SPITZ_GPIO_G2_STROBE_BIT     0x01000000
+-#define SPITZ_GPIO_G3_STROBE_BIT     0x00041880
+-#define SPITZ_GPIO_G0_SENSE_BIT      0x00021000
+-#define SPITZ_GPIO_G1_SENSE_BIT      0x000000d4
+-#define SPITZ_GPIO_G2_SENSE_BIT      0x08000000
+-#define SPITZ_GPIO_G3_SENSE_BIT      0x00000000
+-
+-#define SPITZ_GPIO_KEY_STROBE0       88
+-#define SPITZ_GPIO_KEY_STROBE1       23
+-#define SPITZ_GPIO_KEY_STROBE2       24
+-#define SPITZ_GPIO_KEY_STROBE3       25
+-#define SPITZ_GPIO_KEY_STROBE4       26
+-#define SPITZ_GPIO_KEY_STROBE5       27
+-#define SPITZ_GPIO_KEY_STROBE6       52
+-#define SPITZ_GPIO_KEY_STROBE7       103
+-#define SPITZ_GPIO_KEY_STROBE8       107
+-#define SPITZ_GPIO_KEY_STROBE9       108
+-#define SPITZ_GPIO_KEY_STROBE10      114
+-
+-#define SPITZ_GPIO_KEY_SENSE0        12
+-#define SPITZ_GPIO_KEY_SENSE1        17
+-#define SPITZ_GPIO_KEY_SENSE2        91
+-#define SPITZ_GPIO_KEY_SENSE3        34
+-#define SPITZ_GPIO_KEY_SENSE4        36
+-#define SPITZ_GPIO_KEY_SENSE5        38
+-#define SPITZ_GPIO_KEY_SENSE6        39
+-
+-
+-/* Spitz Scoop Device (No. 1) GPIOs */
+-/* Suspend States in comments */
+-#define SPITZ_SCP_LED_GREEN     SCOOP_GPCR_PA11  /* Keep */
+-#define SPITZ_SCP_JK_B          SCOOP_GPCR_PA12  /* Keep */
+-#define SPITZ_SCP_CHRG_ON       SCOOP_GPCR_PA13  /* Keep */
+-#define SPITZ_SCP_MUTE_L        SCOOP_GPCR_PA14  /* Low */
+-#define SPITZ_SCP_MUTE_R        SCOOP_GPCR_PA15  /* Low */
+-#define SPITZ_SCP_CF_POWER      SCOOP_GPCR_PA16  /* Keep */
+-#define SPITZ_SCP_LED_ORANGE    SCOOP_GPCR_PA17  /* Keep */
+-#define SPITZ_SCP_JK_A          SCOOP_GPCR_PA18  /* Low */
+-#define SPITZ_SCP_ADC_TEMP_ON   SCOOP_GPCR_PA19  /* Low */
+-
+-#define SPITZ_SCP_IO_DIR      (SPITZ_SCP_JK_B | SPITZ_SCP_CHRG_ON | \
+-                               SPITZ_SCP_MUTE_L | SPITZ_SCP_MUTE_R | \
+-                               SPITZ_SCP_CF_POWER | SPITZ_SCP_JK_A | SPITZ_SCP_ADC_TEMP_ON)
+-#define SPITZ_SCP_IO_OUT      (SPITZ_SCP_CHRG_ON | SPITZ_SCP_MUTE_L | SPITZ_SCP_MUTE_R)
+-#define SPITZ_SCP_SUS_CLR     (SPITZ_SCP_MUTE_L | SPITZ_SCP_MUTE_R | SPITZ_SCP_JK_A | SPITZ_SCP_ADC_TEMP_ON)
+-#define SPITZ_SCP_SUS_SET     0
+-
+-#define SPITZ_SCP_GPIO_BASE	(PXA_NR_BUILTIN_GPIO)
+-#define SPITZ_GPIO_LED_GREEN	(SPITZ_SCP_GPIO_BASE + 0)
+-#define SPITZ_GPIO_JK_B		(SPITZ_SCP_GPIO_BASE + 1)
+-#define SPITZ_GPIO_CHRG_ON	(SPITZ_SCP_GPIO_BASE + 2)
+-#define SPITZ_GPIO_MUTE_L	(SPITZ_SCP_GPIO_BASE + 3)
+-#define SPITZ_GPIO_MUTE_R	(SPITZ_SCP_GPIO_BASE + 4)
+-#define SPITZ_GPIO_CF_POWER	(SPITZ_SCP_GPIO_BASE + 5)
+-#define SPITZ_GPIO_LED_ORANGE	(SPITZ_SCP_GPIO_BASE + 6)
+-#define SPITZ_GPIO_JK_A		(SPITZ_SCP_GPIO_BASE + 7)
+-#define SPITZ_GPIO_ADC_TEMP_ON	(SPITZ_SCP_GPIO_BASE + 8)
+-
+-/* Spitz Scoop Device (No. 2) GPIOs */
+-/* Suspend States in comments */
+-#define SPITZ_SCP2_IR_ON           SCOOP_GPCR_PA11  /* High */
+-#define SPITZ_SCP2_AKIN_PULLUP     SCOOP_GPCR_PA12  /* Keep */
+-#define SPITZ_SCP2_RESERVED_1      SCOOP_GPCR_PA13  /* High */
+-#define SPITZ_SCP2_RESERVED_2      SCOOP_GPCR_PA14  /* Low */
+-#define SPITZ_SCP2_RESERVED_3      SCOOP_GPCR_PA15  /* Low */
+-#define SPITZ_SCP2_RESERVED_4      SCOOP_GPCR_PA16  /* Low */
+-#define SPITZ_SCP2_BACKLIGHT_CONT  SCOOP_GPCR_PA17  /* Low */
+-#define SPITZ_SCP2_BACKLIGHT_ON    SCOOP_GPCR_PA18  /* Low */
+-#define SPITZ_SCP2_MIC_BIAS        SCOOP_GPCR_PA19  /* Low */
+-
+-#define SPITZ_SCP2_IO_DIR (SPITZ_SCP2_AKIN_PULLUP | SPITZ_SCP2_RESERVED_1 | \
+-                           SPITZ_SCP2_RESERVED_2 | SPITZ_SCP2_RESERVED_3 | SPITZ_SCP2_RESERVED_4 | \
+-                           SPITZ_SCP2_BACKLIGHT_CONT | SPITZ_SCP2_BACKLIGHT_ON | SPITZ_SCP2_MIC_BIAS)
+-
+-#define SPITZ_SCP2_IO_OUT   (SPITZ_SCP2_AKIN_PULLUP | SPITZ_SCP2_RESERVED_1)
+-#define SPITZ_SCP2_SUS_CLR  (SPITZ_SCP2_RESERVED_2 | SPITZ_SCP2_RESERVED_3 | SPITZ_SCP2_RESERVED_4 | \
+-                             SPITZ_SCP2_BACKLIGHT_CONT | SPITZ_SCP2_BACKLIGHT_ON | SPITZ_SCP2_MIC_BIAS)
+-#define SPITZ_SCP2_SUS_SET  (SPITZ_SCP2_IR_ON | SPITZ_SCP2_RESERVED_1)
+-
+-#define SPITZ_SCP2_GPIO_BASE		(PXA_NR_BUILTIN_GPIO + 12)
+-#define SPITZ_GPIO_IR_ON		(SPITZ_SCP2_GPIO_BASE + 0)
+-#define SPITZ_GPIO_AKIN_PULLUP		(SPITZ_SCP2_GPIO_BASE + 1)
+-#define SPITZ_GPIO_RESERVED_1		(SPITZ_SCP2_GPIO_BASE + 2)
+-#define SPITZ_GPIO_RESERVED_2		(SPITZ_SCP2_GPIO_BASE + 3)
+-#define SPITZ_GPIO_RESERVED_3		(SPITZ_SCP2_GPIO_BASE + 4)
+-#define SPITZ_GPIO_RESERVED_4		(SPITZ_SCP2_GPIO_BASE + 5)
+-#define SPITZ_GPIO_BACKLIGHT_CONT	(SPITZ_SCP2_GPIO_BASE + 6)
+-#define SPITZ_GPIO_BACKLIGHT_ON		(SPITZ_SCP2_GPIO_BASE + 7)
+-#define SPITZ_GPIO_MIC_BIAS		(SPITZ_SCP2_GPIO_BASE + 8)
+-
+-/* Akita IO Expander GPIOs */
+-#define AKITA_IOEXP_GPIO_BASE		(PXA_NR_BUILTIN_GPIO + 12)
+-#define AKITA_GPIO_RESERVED_0		(AKITA_IOEXP_GPIO_BASE + 0)
+-#define AKITA_GPIO_RESERVED_1		(AKITA_IOEXP_GPIO_BASE + 1)
+-#define AKITA_GPIO_MIC_BIAS		(AKITA_IOEXP_GPIO_BASE + 2)
+-#define AKITA_GPIO_BACKLIGHT_ON		(AKITA_IOEXP_GPIO_BASE + 3)
+-#define AKITA_GPIO_BACKLIGHT_CONT	(AKITA_IOEXP_GPIO_BASE + 4)
+-#define AKITA_GPIO_AKIN_PULLUP		(AKITA_IOEXP_GPIO_BASE + 5)
+-#define AKITA_GPIO_IR_ON		(AKITA_IOEXP_GPIO_BASE + 6)
+-#define AKITA_GPIO_RESERVED_7		(AKITA_IOEXP_GPIO_BASE + 7)
+-
+-/* Spitz IRQ Definitions */
+-
+-#define SPITZ_IRQ_GPIO_KEY_INT        PXA_GPIO_TO_IRQ(SPITZ_GPIO_KEY_INT)
+-#define SPITZ_IRQ_GPIO_AC_IN          PXA_GPIO_TO_IRQ(SPITZ_GPIO_AC_IN)
+-#define SPITZ_IRQ_GPIO_AK_INT         PXA_GPIO_TO_IRQ(SPITZ_GPIO_AK_INT)
+-#define SPITZ_IRQ_GPIO_HP_IN          PXA_GPIO_TO_IRQ(SPITZ_GPIO_HP_IN)
+-#define SPITZ_IRQ_GPIO_TP_INT         PXA_GPIO_TO_IRQ(SPITZ_GPIO_TP_INT)
+-#define SPITZ_IRQ_GPIO_SYNC           PXA_GPIO_TO_IRQ(SPITZ_GPIO_SYNC)
+-#define SPITZ_IRQ_GPIO_ON_KEY         PXA_GPIO_TO_IRQ(SPITZ_GPIO_ON_KEY)
+-#define SPITZ_IRQ_GPIO_SWA            PXA_GPIO_TO_IRQ(SPITZ_GPIO_SWA)
+-#define SPITZ_IRQ_GPIO_SWB            PXA_GPIO_TO_IRQ(SPITZ_GPIO_SWB)
+-#define SPITZ_IRQ_GPIO_BAT_COVER      PXA_GPIO_TO_IRQ(SPITZ_GPIO_BAT_COVER)
+-#define SPITZ_IRQ_GPIO_FATAL_BAT      PXA_GPIO_TO_IRQ(SPITZ_GPIO_FATAL_BAT)
+-#define SPITZ_IRQ_GPIO_CO             PXA_GPIO_TO_IRQ(SPITZ_GPIO_CO)
+-#define SPITZ_IRQ_GPIO_CF_IRQ         PXA_GPIO_TO_IRQ(SPITZ_GPIO_CF_IRQ)
+-#define SPITZ_IRQ_GPIO_CF_CD          PXA_GPIO_TO_IRQ(SPITZ_GPIO_CF_CD)
+-#define SPITZ_IRQ_GPIO_CF2_IRQ        PXA_GPIO_TO_IRQ(SPITZ_GPIO_CF2_IRQ)
+-#define SPITZ_IRQ_GPIO_nSD_INT        PXA_GPIO_TO_IRQ(SPITZ_GPIO_nSD_INT)
+-#define SPITZ_IRQ_GPIO_nSD_DETECT     PXA_GPIO_TO_IRQ(SPITZ_GPIO_nSD_DETECT)
+-
+-/*
+- * Shared data structures
+- */
+-extern struct platform_device spitzssp_device;
+-extern struct sharpsl_charger_machinfo spitz_pm_machinfo;
+diff --git a/arch/arm/mach-pxa/spitz.c b/arch/arm/mach-pxa/spitz.c
+index 264de0bc97d689..9bdc20706d187b 100644
+--- a/arch/arm/mach-pxa/spitz.c
++++ b/arch/arm/mach-pxa/spitz.c
+@@ -43,7 +43,7 @@
+ #include <linux/platform_data/mmc-pxamci.h>
+ #include <linux/platform_data/usb-ohci-pxa27x.h>
+ #include <linux/platform_data/video-pxafb.h>
+-#include <mach/spitz.h>
++#include "spitz.h"
+ #include "sharpsl_pm.h"
+ #include <mach/smemc.h>
+ 
+@@ -516,10 +516,8 @@ static struct pxa2xx_spi_chip spitz_ads7846_chip = {
+ static struct gpiod_lookup_table spitz_lcdcon_gpio_table = {
+ 	.dev_id = "spi2.1",
+ 	.table = {
+-		GPIO_LOOKUP("gpio-pxa", SPITZ_GPIO_BACKLIGHT_CONT,
+-			    "BL_CONT", GPIO_ACTIVE_LOW),
+-		GPIO_LOOKUP("gpio-pxa", SPITZ_GPIO_BACKLIGHT_ON,
+-			    "BL_ON", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("sharp-scoop.1", 6, "BL_CONT", GPIO_ACTIVE_LOW),
++		GPIO_LOOKUP("sharp-scoop.1", 7, "BL_ON", GPIO_ACTIVE_HIGH),
+ 		{ },
+ 	},
+ };
+@@ -527,10 +525,8 @@ static struct gpiod_lookup_table spitz_lcdcon_gpio_table = {
+ static struct gpiod_lookup_table akita_lcdcon_gpio_table = {
+ 	.dev_id = "spi2.1",
+ 	.table = {
+-		GPIO_LOOKUP("gpio-pxa", AKITA_GPIO_BACKLIGHT_CONT,
+-			    "BL_CONT", GPIO_ACTIVE_LOW),
+-		GPIO_LOOKUP("gpio-pxa", AKITA_GPIO_BACKLIGHT_ON,
+-			    "BL_ON", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("i2c-max7310", 3, "BL_ON", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("i2c-max7310", 4, "BL_CONT", GPIO_ACTIVE_LOW),
+ 		{ },
+ 	},
+ };
+@@ -954,11 +950,36 @@ static void __init spitz_i2c_init(void)
+ static inline void spitz_i2c_init(void) {}
+ #endif
+ 
++static struct gpiod_lookup_table spitz_audio_gpio_table = {
++	.dev_id = "spitz-audio",
++	.table = {
++		GPIO_LOOKUP("sharp-scoop.0", 3, "mute-l", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("sharp-scoop.0", 4, "mute-r", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("sharp-scoop.1", 8, "mic", GPIO_ACTIVE_HIGH),
++		{ },
++	},
++};
++
++static struct gpiod_lookup_table akita_audio_gpio_table = {
++	.dev_id = "spitz-audio",
++	.table = {
++		GPIO_LOOKUP("sharp-scoop.0", 3, "mute-l", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("sharp-scoop.0", 4, "mute-r", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("i2c-max7310", 2, "mic", GPIO_ACTIVE_HIGH),
++		{ },
++	},
++};
++
+ /******************************************************************************
+  * Audio devices
+  ******************************************************************************/
+ static inline void spitz_audio_init(void)
+ {
++	if (machine_is_akita())
++		gpiod_add_lookup_table(&akita_audio_gpio_table);
++	else
++		gpiod_add_lookup_table(&spitz_audio_gpio_table);
++
+ 	platform_device_register_simple("spitz-audio", -1, NULL, 0);
+ }
+ 
+diff --git a/arch/arm/mach-pxa/spitz.h b/arch/arm/mach-pxa/spitz.h
+new file mode 100644
+index 00000000000000..f97e3ebd762d51
+--- /dev/null
++++ b/arch/arm/mach-pxa/spitz.h
+@@ -0,0 +1,185 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Hardware specific definitions for SL-Cx000 series of PDAs
++ *
++ * Copyright (c) 2005 Alexander Wykes
++ * Copyright (c) 2005 Richard Purdie
++ *
++ * Based on Sharp's 2.4 kernel patches
++ */
++#ifndef __ASM_ARCH_SPITZ_H
++#define __ASM_ARCH_SPITZ_H  1
++#endif
++
++#include <mach/irqs.h> /* PXA_NR_BUILTIN_GPIO, PXA_GPIO_TO_IRQ */
++#include <linux/fb.h>
++
++/* Spitz/Akita GPIOs */
++
++#define SPITZ_GPIO_KEY_INT         (0) /* Key Interrupt */
++#define SPITZ_GPIO_RESET           (1)
++#define SPITZ_GPIO_nSD_DETECT      (9)
++#define SPITZ_GPIO_TP_INT          (11) /* Touch Panel interrupt */
++#define SPITZ_GPIO_AK_INT          (13) /* Remote Control */
++#define SPITZ_GPIO_ADS7846_CS      (14)
++#define SPITZ_GPIO_SYNC            (16)
++#define SPITZ_GPIO_MAX1111_CS      (20)
++#define SPITZ_GPIO_FATAL_BAT       (21)
++#define SPITZ_GPIO_HSYNC           (22)
++#define SPITZ_GPIO_nSD_CLK         (32)
++#define SPITZ_GPIO_USB_DEVICE      (35)
++#define SPITZ_GPIO_USB_HOST        (37)
++#define SPITZ_GPIO_USB_CONNECT     (41)
++#define SPITZ_GPIO_LCDCON_CS       (53)
++#define SPITZ_GPIO_nPCE            (54)
++#define SPITZ_GPIO_nSD_WP          (81)
++#define SPITZ_GPIO_ON_RESET        (89)
++#define SPITZ_GPIO_BAT_COVER       (90)
++#define SPITZ_GPIO_CF_CD           (94)
++#define SPITZ_GPIO_ON_KEY          (95)
++#define SPITZ_GPIO_SWA             (97)
++#define SPITZ_GPIO_SWB             (96)
++#define SPITZ_GPIO_CHRG_FULL       (101)
++#define SPITZ_GPIO_CO              (101)
++#define SPITZ_GPIO_CF_IRQ          (105)
++#define SPITZ_GPIO_AC_IN           (115)
++#define SPITZ_GPIO_HP_IN           (116)
++
++/* Spitz Only GPIOs */
++
++#define SPITZ_GPIO_CF2_IRQ         (106) /* CF slot1 Ready */
++#define SPITZ_GPIO_CF2_CD          (93)
++
++
++/* Spitz/Akita Keyboard Definitions */
++
++#define SPITZ_KEY_STROBE_NUM         (11)
++#define SPITZ_KEY_SENSE_NUM          (7)
++#define SPITZ_GPIO_G0_STROBE_BIT     0x0f800000
++#define SPITZ_GPIO_G1_STROBE_BIT     0x00100000
++#define SPITZ_GPIO_G2_STROBE_BIT     0x01000000
++#define SPITZ_GPIO_G3_STROBE_BIT     0x00041880
++#define SPITZ_GPIO_G0_SENSE_BIT      0x00021000
++#define SPITZ_GPIO_G1_SENSE_BIT      0x000000d4
++#define SPITZ_GPIO_G2_SENSE_BIT      0x08000000
++#define SPITZ_GPIO_G3_SENSE_BIT      0x00000000
++
++#define SPITZ_GPIO_KEY_STROBE0       88
++#define SPITZ_GPIO_KEY_STROBE1       23
++#define SPITZ_GPIO_KEY_STROBE2       24
++#define SPITZ_GPIO_KEY_STROBE3       25
++#define SPITZ_GPIO_KEY_STROBE4       26
++#define SPITZ_GPIO_KEY_STROBE5       27
++#define SPITZ_GPIO_KEY_STROBE6       52
++#define SPITZ_GPIO_KEY_STROBE7       103
++#define SPITZ_GPIO_KEY_STROBE8       107
++#define SPITZ_GPIO_KEY_STROBE9       108
++#define SPITZ_GPIO_KEY_STROBE10      114
++
++#define SPITZ_GPIO_KEY_SENSE0        12
++#define SPITZ_GPIO_KEY_SENSE1        17
++#define SPITZ_GPIO_KEY_SENSE2        91
++#define SPITZ_GPIO_KEY_SENSE3        34
++#define SPITZ_GPIO_KEY_SENSE4        36
++#define SPITZ_GPIO_KEY_SENSE5        38
++#define SPITZ_GPIO_KEY_SENSE6        39
++
++
++/* Spitz Scoop Device (No. 1) GPIOs */
++/* Suspend States in comments */
++#define SPITZ_SCP_LED_GREEN     SCOOP_GPCR_PA11  /* Keep */
++#define SPITZ_SCP_JK_B          SCOOP_GPCR_PA12  /* Keep */
++#define SPITZ_SCP_CHRG_ON       SCOOP_GPCR_PA13  /* Keep */
++#define SPITZ_SCP_MUTE_L        SCOOP_GPCR_PA14  /* Low */
++#define SPITZ_SCP_MUTE_R        SCOOP_GPCR_PA15  /* Low */
++#define SPITZ_SCP_CF_POWER      SCOOP_GPCR_PA16  /* Keep */
++#define SPITZ_SCP_LED_ORANGE    SCOOP_GPCR_PA17  /* Keep */
++#define SPITZ_SCP_JK_A          SCOOP_GPCR_PA18  /* Low */
++#define SPITZ_SCP_ADC_TEMP_ON   SCOOP_GPCR_PA19  /* Low */
++
++#define SPITZ_SCP_IO_DIR      (SPITZ_SCP_JK_B | SPITZ_SCP_CHRG_ON | \
++                               SPITZ_SCP_MUTE_L | SPITZ_SCP_MUTE_R | \
++                               SPITZ_SCP_CF_POWER | SPITZ_SCP_JK_A | SPITZ_SCP_ADC_TEMP_ON)
++#define SPITZ_SCP_IO_OUT      (SPITZ_SCP_CHRG_ON | SPITZ_SCP_MUTE_L | SPITZ_SCP_MUTE_R)
++#define SPITZ_SCP_SUS_CLR     (SPITZ_SCP_MUTE_L | SPITZ_SCP_MUTE_R | SPITZ_SCP_JK_A | SPITZ_SCP_ADC_TEMP_ON)
++#define SPITZ_SCP_SUS_SET     0
++
++#define SPITZ_SCP_GPIO_BASE	(PXA_NR_BUILTIN_GPIO)
++#define SPITZ_GPIO_LED_GREEN	(SPITZ_SCP_GPIO_BASE + 0)
++#define SPITZ_GPIO_JK_B		(SPITZ_SCP_GPIO_BASE + 1)
++#define SPITZ_GPIO_CHRG_ON	(SPITZ_SCP_GPIO_BASE + 2)
++#define SPITZ_GPIO_MUTE_L	(SPITZ_SCP_GPIO_BASE + 3)
++#define SPITZ_GPIO_MUTE_R	(SPITZ_SCP_GPIO_BASE + 4)
++#define SPITZ_GPIO_CF_POWER	(SPITZ_SCP_GPIO_BASE + 5)
++#define SPITZ_GPIO_LED_ORANGE	(SPITZ_SCP_GPIO_BASE + 6)
++#define SPITZ_GPIO_JK_A		(SPITZ_SCP_GPIO_BASE + 7)
++#define SPITZ_GPIO_ADC_TEMP_ON	(SPITZ_SCP_GPIO_BASE + 8)
++
++/* Spitz Scoop Device (No. 2) GPIOs */
++/* Suspend States in comments */
++#define SPITZ_SCP2_IR_ON           SCOOP_GPCR_PA11  /* High */
++#define SPITZ_SCP2_AKIN_PULLUP     SCOOP_GPCR_PA12  /* Keep */
++#define SPITZ_SCP2_RESERVED_1      SCOOP_GPCR_PA13  /* High */
++#define SPITZ_SCP2_RESERVED_2      SCOOP_GPCR_PA14  /* Low */
++#define SPITZ_SCP2_RESERVED_3      SCOOP_GPCR_PA15  /* Low */
++#define SPITZ_SCP2_RESERVED_4      SCOOP_GPCR_PA16  /* Low */
++#define SPITZ_SCP2_BACKLIGHT_CONT  SCOOP_GPCR_PA17  /* Low */
++#define SPITZ_SCP2_BACKLIGHT_ON    SCOOP_GPCR_PA18  /* Low */
++#define SPITZ_SCP2_MIC_BIAS        SCOOP_GPCR_PA19  /* Low */
++
++#define SPITZ_SCP2_IO_DIR (SPITZ_SCP2_AKIN_PULLUP | SPITZ_SCP2_RESERVED_1 | \
++                           SPITZ_SCP2_RESERVED_2 | SPITZ_SCP2_RESERVED_3 | SPITZ_SCP2_RESERVED_4 | \
++                           SPITZ_SCP2_BACKLIGHT_CONT | SPITZ_SCP2_BACKLIGHT_ON | SPITZ_SCP2_MIC_BIAS)
++
++#define SPITZ_SCP2_IO_OUT   (SPITZ_SCP2_AKIN_PULLUP | SPITZ_SCP2_RESERVED_1)
++#define SPITZ_SCP2_SUS_CLR  (SPITZ_SCP2_RESERVED_2 | SPITZ_SCP2_RESERVED_3 | SPITZ_SCP2_RESERVED_4 | \
++                             SPITZ_SCP2_BACKLIGHT_CONT | SPITZ_SCP2_BACKLIGHT_ON | SPITZ_SCP2_MIC_BIAS)
++#define SPITZ_SCP2_SUS_SET  (SPITZ_SCP2_IR_ON | SPITZ_SCP2_RESERVED_1)
++
++#define SPITZ_SCP2_GPIO_BASE		(PXA_NR_BUILTIN_GPIO + 12)
++#define SPITZ_GPIO_IR_ON		(SPITZ_SCP2_GPIO_BASE + 0)
++#define SPITZ_GPIO_AKIN_PULLUP		(SPITZ_SCP2_GPIO_BASE + 1)
++#define SPITZ_GPIO_RESERVED_1		(SPITZ_SCP2_GPIO_BASE + 2)
++#define SPITZ_GPIO_RESERVED_2		(SPITZ_SCP2_GPIO_BASE + 3)
++#define SPITZ_GPIO_RESERVED_3		(SPITZ_SCP2_GPIO_BASE + 4)
++#define SPITZ_GPIO_RESERVED_4		(SPITZ_SCP2_GPIO_BASE + 5)
++#define SPITZ_GPIO_BACKLIGHT_CONT	(SPITZ_SCP2_GPIO_BASE + 6)
++#define SPITZ_GPIO_BACKLIGHT_ON		(SPITZ_SCP2_GPIO_BASE + 7)
++#define SPITZ_GPIO_MIC_BIAS		(SPITZ_SCP2_GPIO_BASE + 8)
++
++/* Akita IO Expander GPIOs */
++#define AKITA_IOEXP_GPIO_BASE		(PXA_NR_BUILTIN_GPIO + 12)
++#define AKITA_GPIO_RESERVED_0		(AKITA_IOEXP_GPIO_BASE + 0)
++#define AKITA_GPIO_RESERVED_1		(AKITA_IOEXP_GPIO_BASE + 1)
++#define AKITA_GPIO_MIC_BIAS		(AKITA_IOEXP_GPIO_BASE + 2)
++#define AKITA_GPIO_BACKLIGHT_ON		(AKITA_IOEXP_GPIO_BASE + 3)
++#define AKITA_GPIO_BACKLIGHT_CONT	(AKITA_IOEXP_GPIO_BASE + 4)
++#define AKITA_GPIO_AKIN_PULLUP		(AKITA_IOEXP_GPIO_BASE + 5)
++#define AKITA_GPIO_IR_ON		(AKITA_IOEXP_GPIO_BASE + 6)
++#define AKITA_GPIO_RESERVED_7		(AKITA_IOEXP_GPIO_BASE + 7)
++
++/* Spitz IRQ Definitions */
++
++#define SPITZ_IRQ_GPIO_KEY_INT        PXA_GPIO_TO_IRQ(SPITZ_GPIO_KEY_INT)
++#define SPITZ_IRQ_GPIO_AC_IN          PXA_GPIO_TO_IRQ(SPITZ_GPIO_AC_IN)
++#define SPITZ_IRQ_GPIO_AK_INT         PXA_GPIO_TO_IRQ(SPITZ_GPIO_AK_INT)
++#define SPITZ_IRQ_GPIO_HP_IN          PXA_GPIO_TO_IRQ(SPITZ_GPIO_HP_IN)
++#define SPITZ_IRQ_GPIO_TP_INT         PXA_GPIO_TO_IRQ(SPITZ_GPIO_TP_INT)
++#define SPITZ_IRQ_GPIO_SYNC           PXA_GPIO_TO_IRQ(SPITZ_GPIO_SYNC)
++#define SPITZ_IRQ_GPIO_ON_KEY         PXA_GPIO_TO_IRQ(SPITZ_GPIO_ON_KEY)
++#define SPITZ_IRQ_GPIO_SWA            PXA_GPIO_TO_IRQ(SPITZ_GPIO_SWA)
++#define SPITZ_IRQ_GPIO_SWB            PXA_GPIO_TO_IRQ(SPITZ_GPIO_SWB)
++#define SPITZ_IRQ_GPIO_BAT_COVER      PXA_GPIO_TO_IRQ(SPITZ_GPIO_BAT_COVER)
++#define SPITZ_IRQ_GPIO_FATAL_BAT      PXA_GPIO_TO_IRQ(SPITZ_GPIO_FATAL_BAT)
++#define SPITZ_IRQ_GPIO_CO             PXA_GPIO_TO_IRQ(SPITZ_GPIO_CO)
++#define SPITZ_IRQ_GPIO_CF_IRQ         PXA_GPIO_TO_IRQ(SPITZ_GPIO_CF_IRQ)
++#define SPITZ_IRQ_GPIO_CF_CD          PXA_GPIO_TO_IRQ(SPITZ_GPIO_CF_CD)
++#define SPITZ_IRQ_GPIO_CF2_IRQ        PXA_GPIO_TO_IRQ(SPITZ_GPIO_CF2_IRQ)
++#define SPITZ_IRQ_GPIO_nSD_INT        PXA_GPIO_TO_IRQ(SPITZ_GPIO_nSD_INT)
++#define SPITZ_IRQ_GPIO_nSD_DETECT     PXA_GPIO_TO_IRQ(SPITZ_GPIO_nSD_DETECT)
++
++/*
++ * Shared data structures
++ */
++extern struct platform_device spitzssp_device;
++extern struct sharpsl_charger_machinfo spitz_pm_machinfo;
+diff --git a/arch/arm/mach-pxa/spitz_pm.c b/arch/arm/mach-pxa/spitz_pm.c
+index 25a1f8c5a7382d..6167f96d7b41ee 100644
+--- a/arch/arm/mach-pxa/spitz_pm.c
++++ b/arch/arm/mach-pxa/spitz_pm.c
+@@ -20,7 +20,7 @@
+ #include <asm/mach-types.h>
+ #include <mach/hardware.h>
+ 
+-#include <mach/spitz.h>
++#include "spitz.h"
+ #include "pxa27x.h"
+ #include "sharpsl_pm.h"
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 13cf137da999a6..9fdf8b0364288d 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -691,6 +691,44 @@ config ARM64_ERRATUM_2457168
+ 
+ 	  If unsure, say Y.
+ 
++config ARM64_ERRATUM_3194386
++	bool "Cortex-*/Neoverse-*: workaround for MSR SSBS not self-synchronizing"
++	default y
++	help
++	  This option adds the workaround for the following errata:
++
++	  * ARM Cortex-A76 erratum 3324349
++	  * ARM Cortex-A77 erratum 3324348
++	  * ARM Cortex-A78 erratum 3324344
++	  * ARM Cortex-A78C erratum 3324346
++	  * ARM Cortex-A78C erratum 3324347
++	  * ARM Cortex-A710 erratam 3324338
++	  * ARM Cortex-A720 erratum 3456091
++	  * ARM Cortex-A725 erratum 3456106
++	  * ARM Cortex-X1 erratum 3324344
++	  * ARM Cortex-X1C erratum 3324346
++	  * ARM Cortex-X2 erratum 3324338
++	  * ARM Cortex-X3 erratum 3324335
++	  * ARM Cortex-X4 erratum 3194386
++	  * ARM Cortex-X925 erratum 3324334
++	  * ARM Neoverse-N1 erratum 3324349
++	  * ARM Neoverse N2 erratum 3324339
++	  * ARM Neoverse-V1 erratum 3324341
++	  * ARM Neoverse V2 erratum 3324336
++	  * ARM Neoverse-V3 erratum 3312417
++
++	  On affected cores "MSR SSBS, #0" instructions may not affect
++	  subsequent speculative instructions, which may permit unexepected
++	  speculative store bypassing.
++
++	  Work around this problem by placing a Speculation Barrier (SB) or
++	  Instruction Synchronization Barrier (ISB) after kernel changes to
++	  SSBS. The presence of the SSBS special-purpose register is hidden
++	  from hwcaps and EL0 reads of ID_AA64PFR1_EL1, such that userspace
++	  will use the PR_SPEC_STORE_BYPASS prctl to change SSBS.
++
++	  If unsure, say Y.
++
+ config CAVIUM_ERRATUM_22375
+ 	bool "Cavium erratum 22375, 24313"
+ 	default y
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
+index 7c029f552a23b3..256c46771db782 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
+@@ -311,8 +311,8 @@ &hdmi_tx {
+ 		 <&reset RESET_HDMI_SYSTEM_RESET>,
+ 		 <&reset RESET_HDMI_TX>;
+ 	reset-names = "hdmitx_apb", "hdmitx", "hdmitx_phy";
+-	clocks = <&clkc CLKID_HDMI_PCLK>,
+-		 <&clkc CLKID_CLK81>,
++	clocks = <&clkc CLKID_HDMI>,
++		 <&clkc CLKID_HDMI_PCLK>,
+ 		 <&clkc CLKID_GCLK_VENCI_INT0>;
+ 	clock-names = "isfr", "iahb", "venci";
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+index 35002293505224..a689bd14ece999 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+@@ -323,8 +323,8 @@ &hdmi_tx {
+ 		 <&reset RESET_HDMI_SYSTEM_RESET>,
+ 		 <&reset RESET_HDMI_TX>;
+ 	reset-names = "hdmitx_apb", "hdmitx", "hdmitx_phy";
+-	clocks = <&clkc CLKID_HDMI_PCLK>,
+-		 <&clkc CLKID_CLK81>,
++	clocks = <&clkc CLKID_HDMI>,
++		 <&clkc CLKID_HDMI_PCLK>,
+ 		 <&clkc CLKID_GCLK_VENCI_INT0>;
+ 	clock-names = "isfr", "iahb", "venci";
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+index 778174a7d649b5..46e412b436ed97 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+@@ -285,8 +285,8 @@ asm_sel {
+ 	/* eMMC is shared pin with parallel NAND */
+ 	emmc_pins_default: emmc-pins-default {
+ 		mux {
+-			function = "emmc", "emmc_rst";
+-			groups = "emmc";
++			function = "emmc";
++			groups = "emmc", "emmc_rst";
+ 		};
+ 
+ 		/* "NDL0","NDL1","NDL2","NDL3","NDL4","NDL5","NDL6","NDL7",
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
+index 810575de66702b..5dd993496a5c07 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
+@@ -249,8 +249,8 @@ &pio {
+ 	/* eMMC is shared pin with parallel NAND */
+ 	emmc_pins_default: emmc-pins-default {
+ 		mux {
+-			function = "emmc", "emmc_rst";
+-			groups = "emmc";
++			function = "emmc";
++			groups = "emmc", "emmc_rst";
+ 		};
+ 
+ 		/* "NDL0","NDL1","NDL2","NDL3","NDL4","NDL5","NDL6","NDL7",
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+index a4f860bb4a8427..ad8b11267c7d2a 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+@@ -628,7 +628,6 @@ pins_tx {
+ 		};
+ 		pins_rts {
+ 			pinmux = <PINMUX_GPIO47__FUNC_URTS1>;
+-			output-enable;
+ 		};
+ 		pins_cts {
+ 			pinmux = <PINMUX_GPIO46__FUNC_UCTS1>;
+@@ -647,7 +646,6 @@ pins_tx {
+ 		};
+ 		pins_rts {
+ 			pinmux = <PINMUX_GPIO47__FUNC_URTS1>;
+-			output-enable;
+ 		};
+ 		pins_cts {
+ 			pinmux = <PINMUX_GPIO46__FUNC_UCTS1>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index e990e727cc0fa6..118fd1e47d5c47 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -927,7 +927,7 @@ ufshc: ufshc@624000 {
+ 				<&gcc GCC_UFS_RX_SYMBOL_0_CLK>;
+ 			freq-table-hz =
+ 				<100000000 200000000>,
+-				<0 0>,
++				<100000000 200000000>,
+ 				<0 0>,
+ 				<0 0>,
+ 				<0 0>,
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index b00f6d8bc8baca..da48ae60155af1 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -2125,6 +2125,8 @@ ufs_mem_phy: phy@1d87000 {
+ 			clocks = <&gcc GCC_UFS_MEM_CLKREF_CLK>,
+ 				 <&gcc GCC_UFS_PHY_PHY_AUX_CLK>;
+ 
++			power-domains = <&gcc UFS_PHY_GDSC>;
++
+ 			resets = <&ufs_mem_hc 0>;
+ 			reset-names = "ufsphy";
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index 10df6636a6b6cc..3c6398e98f7670 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -811,8 +811,8 @@ cru: clock-controller@ff440000 {
+ 			<0>, <24000000>,
+ 			<24000000>, <24000000>,
+ 			<15000000>, <15000000>,
+-			<100000000>, <100000000>,
+-			<100000000>, <100000000>,
++			<300000000>, <100000000>,
++			<400000000>, <100000000>,
+ 			<50000000>, <100000000>,
+ 			<100000000>, <100000000>,
+ 			<50000000>, <50000000>,
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index d2080a41f6e6fc..931c88182fb8bf 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -69,7 +69,8 @@
+ #define ARM64_SPECTRE_BHB			59
+ #define ARM64_WORKAROUND_2457168		60
+ #define ARM64_WORKAROUND_1742098		61
++#define ARM64_WORKAROUND_SPECULATIVE_SSBS	62
+ 
+-#define ARM64_NCAPS				62
++#define ARM64_NCAPS				63
+ 
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index c2a1ccd5fd4681..91890e9fcb6c84 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -84,6 +84,14 @@
+ #define ARM_CPU_PART_CORTEX_X2		0xD48
+ #define ARM_CPU_PART_NEOVERSE_N2	0xD49
+ #define ARM_CPU_PART_CORTEX_A78C	0xD4B
++#define ARM_CPU_PART_CORTEX_X1C		0xD4C
++#define ARM_CPU_PART_CORTEX_X3		0xD4E
++#define ARM_CPU_PART_NEOVERSE_V2	0xD4F
++#define ARM_CPU_PART_CORTEX_A720	0xD81
++#define ARM_CPU_PART_CORTEX_X4		0xD82
++#define ARM_CPU_PART_NEOVERSE_V3	0xD84
++#define ARM_CPU_PART_CORTEX_X925	0xD85
++#define ARM_CPU_PART_CORTEX_A725	0xD87
+ 
+ #define APM_CPU_PART_POTENZA		0x000
+ 
+@@ -136,6 +144,14 @@
+ #define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
+ #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
+ #define MIDR_CORTEX_A78C	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C)
++#define MIDR_CORTEX_X1C	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1C)
++#define MIDR_CORTEX_X3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X3)
++#define MIDR_NEOVERSE_V2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V2)
++#define MIDR_CORTEX_A720 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A720)
++#define MIDR_CORTEX_X4 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X4)
++#define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3)
++#define MIDR_CORTEX_X925 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X925)
++#define MIDR_CORTEX_A725 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A725)
+ #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
+ #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+ #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 5d6f19bc628c22..6e63dc8f0e8c6e 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -364,6 +364,30 @@ static struct midr_range broken_aarch32_aes[] = {
+ };
+ #endif
+ 
++#ifdef CONFIG_ARM64_ERRATUM_3194386
++static const struct midr_range erratum_spec_ssbs_list[] = {
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A77),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A720),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A725),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_X1C),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_X3),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_X4),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_X925),
++	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
++	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
++	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
++	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V2),
++	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V3),
++	{}
++};
++#endif
++
+ const struct arm64_cpu_capabilities arm64_errata[] = {
+ #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
+ 	{
+@@ -570,6 +594,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 		CAP_MIDR_RANGE_LIST(broken_aarch32_aes),
+ 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ 	},
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_3194386
++	{
++		.desc = "SSBS not fully self-synchronizing",
++		.capability = ARM64_WORKAROUND_SPECULATIVE_SSBS,
++		ERRATA_MIDR_RANGE_LIST(erratum_spec_ssbs_list),
++	},
+ #endif
+ 	{
+ 	}
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 1f0a2deafd643b..2dc269b865de26 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -399,6 +399,30 @@ static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
+ 	ARM64_FTR_END,
+ };
+ 
++static const struct arm64_ftr_bits ftr_mvfr0[] = {
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPROUND_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPSHVEC_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPSQRT_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPDIVIDE_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPTRAP_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPDP_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_FPSP_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR0_SIMD_SHIFT, 4, 0),
++	ARM64_FTR_END,
++};
++
++static const struct arm64_ftr_bits ftr_mvfr1[] = {
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDFMAC_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_FPHP_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDHP_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDSP_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDINT_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_SIMDLS_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_FPDNAN_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR1_FPFTZ_SHIFT, 4, 0),
++	ARM64_FTR_END,
++};
++
+ static const struct arm64_ftr_bits ftr_mvfr2[] = {
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_FPMISC_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_SIMDMISC_SHIFT, 4, 0),
+@@ -424,10 +448,10 @@ static const struct arm64_ftr_bits ftr_id_isar0[] = {
+ 
+ static const struct arm64_ftr_bits ftr_id_isar5[] = {
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
+-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
+-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
+-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
+-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
+ 	ARM64_FTR_END,
+ };
+@@ -534,7 +558,7 @@ static const struct arm64_ftr_bits ftr_zcr[] = {
+  * Common ftr bits for a 32bit register with all hidden, strict
+  * attributes, with 4bit feature fields and a default safe value of
+  * 0. Covers the following 32bit registers:
+- * id_isar[1-4], id_mmfr[1-3], id_pfr1, mvfr[0-1]
++ * id_isar[1-3], id_mmfr[1-3]
+  */
+ static const struct arm64_ftr_bits ftr_generic_32bits[] = {
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0),
+@@ -590,8 +614,8 @@ static const struct __ftr_reg_entry {
+ 	ARM64_FTR_REG(SYS_ID_ISAR6_EL1, ftr_id_isar6),
+ 
+ 	/* Op1 = 0, CRn = 0, CRm = 3 */
+-	ARM64_FTR_REG(SYS_MVFR0_EL1, ftr_generic_32bits),
+-	ARM64_FTR_REG(SYS_MVFR1_EL1, ftr_generic_32bits),
++	ARM64_FTR_REG(SYS_MVFR0_EL1, ftr_mvfr0),
++	ARM64_FTR_REG(SYS_MVFR1_EL1, ftr_mvfr1),
+ 	ARM64_FTR_REG(SYS_MVFR2_EL1, ftr_mvfr2),
+ 	ARM64_FTR_REG(SYS_ID_PFR2_EL1, ftr_id_pfr2),
+ 	ARM64_FTR_REG(SYS_ID_DFR1_EL1, ftr_id_dfr1),
+@@ -1196,20 +1220,42 @@ feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
+ 	return val >= entry->min_field_value;
+ }
+ 
+-static bool
+-has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
++static u64
++read_scoped_sysreg(const struct arm64_cpu_capabilities *entry, int scope)
+ {
+-	u64 val;
+-
+ 	WARN_ON(scope == SCOPE_LOCAL_CPU && preemptible());
+ 	if (scope == SCOPE_SYSTEM)
+-		val = read_sanitised_ftr_reg(entry->sys_reg);
++		return read_sanitised_ftr_reg(entry->sys_reg);
+ 	else
+-		val = __read_sysreg_by_encoding(entry->sys_reg);
++		return __read_sysreg_by_encoding(entry->sys_reg);
++}
++
++static bool
++has_user_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
++{
++	int mask;
++	struct arm64_ftr_reg *regp;
++	u64 val = read_scoped_sysreg(entry, scope);
++
++	regp = get_arm64_ftr_reg(entry->sys_reg);
++	if (!regp)
++		return false;
++
++	mask = cpuid_feature_extract_unsigned_field(regp->user_mask,
++						    entry->field_pos);
++	if (!mask)
++		return false;
+ 
+ 	return feature_matches(val, entry);
+ }
+ 
++static bool
++has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
++{
++	u64 val = read_scoped_sysreg(entry, scope);
++	return feature_matches(val, entry);
++}
++
+ static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry, int scope)
+ {
+ 	bool has_sre;
+@@ -1731,6 +1777,17 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
+ }
+ #endif /* CONFIG_ARM64_MTE */
+ 
++static void user_feature_fixup(void)
++{
++	if (cpus_have_cap(ARM64_WORKAROUND_SPECULATIVE_SSBS)) {
++		struct arm64_ftr_reg *regp;
++
++		regp = get_arm64_ftr_reg(SYS_ID_AA64PFR1_EL1);
++		if (regp)
++			regp->user_mask &= ~GENMASK(7, 4); /* SSBS */
++	}
++}
++
+ static void elf_hwcap_fixup(void)
+ {
+ #ifdef CONFIG_ARM64_ERRATUM_1742098
+@@ -2172,7 +2229,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
+ };
+ 
+ #define HWCAP_CPUID_MATCH(reg, field, s, min_value)				\
+-		.matches = has_cpuid_feature,					\
++		.matches = has_user_cpuid_feature,				\
+ 		.sys_reg = reg,							\
+ 		.field_pos = field,						\
+ 		.sign = s,							\
+@@ -2742,6 +2799,7 @@ void __init setup_cpu_features(void)
+ 	u32 cwg;
+ 
+ 	setup_system_capabilities();
++	user_feature_fixup();
+ 	setup_elf_hwcaps(arm64_elf_hwcaps);
+ 
+ 	if (system_supports_32bit_el0()) {
+@@ -2780,7 +2838,7 @@ static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *c
+ 
+ /*
+  * We emulate only the following system register space.
+- * Op0 = 0x3, CRn = 0x0, Op1 = 0x0, CRm = [0, 4 - 7]
++ * Op0 = 0x3, CRn = 0x0, Op1 = 0x0, CRm = [0, 2 - 7]
+  * See Table C5-6 System instruction encodings for System register accesses,
+  * ARMv8 ARM(ARM DDI 0487A.f) for more details.
+  */
+@@ -2790,7 +2848,7 @@ static inline bool __attribute_const__ is_emulated(u32 id)
+ 		sys_reg_CRn(id) == 0x0 &&
+ 		sys_reg_Op1(id) == 0x0 &&
+ 		(sys_reg_CRm(id) == 0 ||
+-		 ((sys_reg_CRm(id) >= 4) && (sys_reg_CRm(id) <= 7))));
++		 ((sys_reg_CRm(id) >= 2) && (sys_reg_CRm(id) <= 7))));
+ }
+ 
+ /*
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index 9c0e9d9eed6e26..90337910c3f533 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -574,6 +574,18 @@ static enum mitigation_state spectre_v4_enable_hw_mitigation(void)
+ 
+ 	/* SCTLR_EL1.DSSBS was initialised to 0 during boot */
+ 	asm volatile(SET_PSTATE_SSBS(0));
++
++	/*
++	 * SSBS is self-synchronizing and is intended to affect subsequent
++	 * speculative instructions, but some CPUs can speculate with a stale
++	 * value of SSBS.
++	 *
++	 * Mitigate this with an unconditional speculation barrier, as CPUs
++	 * could mis-speculate branches and bypass a conditional barrier.
++	 */
++	if (IS_ENABLED(CONFIG_ARM64_ERRATUM_3194386))
++		spec_bar();
++
+ 	return SPECTRE_MITIGATED;
+ }
+ 
+diff --git a/arch/m68k/amiga/config.c b/arch/m68k/amiga/config.c
+index bee9f240f35dee..c92c1e559da0c4 100644
+--- a/arch/m68k/amiga/config.c
++++ b/arch/m68k/amiga/config.c
+@@ -179,6 +179,15 @@ int __init amiga_parse_bootinfo(const struct bi_record *record)
+ 			dev->slotsize = be16_to_cpu(cd->cd_SlotSize);
+ 			dev->boardaddr = be32_to_cpu(cd->cd_BoardAddr);
+ 			dev->boardsize = be32_to_cpu(cd->cd_BoardSize);
++
++			/* CS-LAB Warp 1260 workaround */
++			if (be16_to_cpu(dev->rom.er_Manufacturer) == ZORRO_MANUF(ZORRO_PROD_CSLAB_WARP_1260) &&
++			    dev->rom.er_Product == ZORRO_PROD(ZORRO_PROD_CSLAB_WARP_1260)) {
++
++				/* turn off all interrupts */
++				pr_info("Warp 1260 card detected: applying interrupt storm workaround\n");
++				*(uint32_t *)(dev->boardaddr + 0x1000) = 0xfff;
++			}
+ 		} else
+ 			pr_warn("amiga_parse_bootinfo: too many AutoConfig devices\n");
+ #endif /* CONFIG_ZORRO */
+diff --git a/arch/m68k/atari/ataints.c b/arch/m68k/atari/ataints.c
+index 56f02ea2c248d8..715d1e0d973e61 100644
+--- a/arch/m68k/atari/ataints.c
++++ b/arch/m68k/atari/ataints.c
+@@ -302,11 +302,7 @@ void __init atari_init_IRQ(void)
+ 
+ 	if (ATARIHW_PRESENT(SCU)) {
+ 		/* init the SCU if present */
+-		tt_scu.sys_mask = 0x10;		/* enable VBL (for the cursor) and
+-									 * disable HSYNC interrupts (who
+-									 * needs them?)  MFP and SCC are
+-									 * enabled in VME mask
+-									 */
++		tt_scu.sys_mask = 0x0;		/* disable all interrupts */
+ 		tt_scu.vme_mask = 0x60;		/* enable MFP and SCC ints */
+ 	} else {
+ 		/* If no SCU and no Hades, the HSYNC interrupt needs to be
+diff --git a/arch/m68k/include/asm/cmpxchg.h b/arch/m68k/include/asm/cmpxchg.h
+index 3a3bdcfcd3754f..2035b30d79518d 100644
+--- a/arch/m68k/include/asm/cmpxchg.h
++++ b/arch/m68k/include/asm/cmpxchg.h
+@@ -33,7 +33,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void * ptr, int siz
+ 		x = tmp;
+ 		break;
+ 	default:
+-		tmp = __invalid_xchg_size(x, ptr, size);
++		x = __invalid_xchg_size(x, ptr, size);
+ 		break;
+ 	}
+ 
+diff --git a/arch/mips/include/asm/mach-loongson64/boot_param.h b/arch/mips/include/asm/mach-loongson64/boot_param.h
+index afc92b7a61c607..deafd177f095d5 100644
+--- a/arch/mips/include/asm/mach-loongson64/boot_param.h
++++ b/arch/mips/include/asm/mach-loongson64/boot_param.h
+@@ -38,12 +38,14 @@ enum loongson_cpu_type {
+ 	Legacy_1B = 0x5,
+ 	Legacy_2G = 0x6,
+ 	Legacy_2H = 0x7,
++	Legacy_2K = 0x8,
+ 	Loongson_1A = 0x100,
+ 	Loongson_1B = 0x101,
+ 	Loongson_2E = 0x200,
+ 	Loongson_2F = 0x201,
+ 	Loongson_2G = 0x202,
+ 	Loongson_2H = 0x203,
++	Loongson_2K = 0x204,
+ 	Loongson_3A = 0x300,
+ 	Loongson_3B = 0x301
+ };
+diff --git a/arch/mips/include/asm/mips-cm.h b/arch/mips/include/asm/mips-cm.h
+index 23c67c0871b17c..696b40beb774f5 100644
+--- a/arch/mips/include/asm/mips-cm.h
++++ b/arch/mips/include/asm/mips-cm.h
+@@ -228,6 +228,10 @@ GCR_ACCESSOR_RO(32, 0x0d0, gic_status)
+ GCR_ACCESSOR_RO(32, 0x0f0, cpc_status)
+ #define CM_GCR_CPC_STATUS_EX			BIT(0)
+ 
++/* GCR_ACCESS - Controls core/IOCU access to GCRs */
++GCR_ACCESSOR_RW(32, 0x120, access_cm3)
++#define CM_GCR_ACCESS_ACCESSEN			GENMASK(7, 0)
++
+ /* GCR_L2_CONFIG - Indicates L2 cache configuration when Config5.L2C=1 */
+ GCR_ACCESSOR_RW(32, 0x130, l2_config)
+ #define CM_GCR_L2_CONFIG_BYPASS			BIT(20)
+diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
+index f659adb681bc32..02ae0b29e68880 100644
+--- a/arch/mips/kernel/smp-cps.c
++++ b/arch/mips/kernel/smp-cps.c
+@@ -229,7 +229,10 @@ static void boot_core(unsigned int core, unsigned int vpe_id)
+ 	write_gcr_co_reset_ext_base(CM_GCR_Cx_RESET_EXT_BASE_UEB);
+ 
+ 	/* Ensure the core can access the GCRs */
+-	set_gcr_access(1 << core);
++	if (mips_cm_revision() < CM_REV_CM3)
++		set_gcr_access(1 << core);
++	else
++		set_gcr_access_cm3(1 << core);
+ 
+ 	if (mips_cpc_present()) {
+ 		/* Reset the core */
+diff --git a/arch/mips/loongson64/env.c b/arch/mips/loongson64/env.c
+index 134cb8e9efc21c..c9db6c496138f6 100644
+--- a/arch/mips/loongson64/env.c
++++ b/arch/mips/loongson64/env.c
+@@ -65,6 +65,12 @@ void __init prom_init_env(void)
+ 	cpu_clock_freq = ecpu->cpu_clock_freq;
+ 	loongson_sysconf.cputype = ecpu->cputype;
+ 	switch (ecpu->cputype) {
++	case Legacy_2K:
++	case Loongson_2K:
++		smp_group[0] = 0x900000001fe11000;
++		loongson_sysconf.cores_per_node = 2;
++		loongson_sysconf.cores_per_package = 2;
++		break;
+ 	case Legacy_3A:
+ 	case Loongson_3A:
+ 		loongson_sysconf.cores_per_node = 4;
+@@ -213,6 +219,8 @@ void __init prom_init_env(void)
+ 		default:
+ 			break;
+ 		}
++	} else if ((read_c0_prid() & PRID_IMP_MASK) == PRID_IMP_LOONGSON_64R) {
++		loongson_fdt_blob = __dtb_loongson64_2core_2k1000_begin;
+ 	} else if ((read_c0_prid() & PRID_IMP_MASK) == PRID_IMP_LOONGSON_64G) {
+ 		if (loongson_sysconf.bridgetype == LS7A)
+ 			loongson_fdt_blob = __dtb_loongson64g_4core_ls7a_begin;
+diff --git a/arch/mips/pci/pcie-octeon.c b/arch/mips/pci/pcie-octeon.c
+old mode 100755
+new mode 100644
+diff --git a/arch/mips/sgi-ip30/ip30-console.c b/arch/mips/sgi-ip30/ip30-console.c
+index b91f8c4fdc7860..a087b7ebe12936 100644
+--- a/arch/mips/sgi-ip30/ip30-console.c
++++ b/arch/mips/sgi-ip30/ip30-console.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ 
+ #include <linux/io.h>
++#include <linux/processor.h>
+ 
+ #include <asm/sn/ioc3.h>
+ 
+diff --git a/arch/powerpc/configs/85xx-hw.config b/arch/powerpc/configs/85xx-hw.config
+index 524db76f47b737..8aff8321739778 100644
+--- a/arch/powerpc/configs/85xx-hw.config
++++ b/arch/powerpc/configs/85xx-hw.config
+@@ -24,6 +24,7 @@ CONFIG_FS_ENET=y
+ CONFIG_FSL_CORENET_CF=y
+ CONFIG_FSL_DMA=y
+ CONFIG_FSL_HV_MANAGER=y
++CONFIG_FSL_IFC=y
+ CONFIG_FSL_PQ_MDIO=y
+ CONFIG_FSL_RIO=y
+ CONFIG_FSL_XGMAC_MDIO=y
+@@ -58,6 +59,7 @@ CONFIG_INPUT_FF_MEMLESS=m
+ CONFIG_MARVELL_PHY=y
+ CONFIG_MDIO_BUS_MUX_GPIO=y
+ CONFIG_MDIO_BUS_MUX_MMIOREG=y
++CONFIG_MEMORY=y
+ CONFIG_MMC_SDHCI_OF_ESDHC=y
+ CONFIG_MMC_SDHCI_PLTFM=y
+ CONFIG_MMC_SDHCI=y
+diff --git a/arch/powerpc/include/asm/percpu.h b/arch/powerpc/include/asm/percpu.h
+index 8e5b7d0b851c61..634970ce13c6b9 100644
+--- a/arch/powerpc/include/asm/percpu.h
++++ b/arch/powerpc/include/asm/percpu.h
+@@ -15,6 +15,16 @@
+ #endif /* CONFIG_SMP */
+ #endif /* __powerpc64__ */
+ 
++#if defined(CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK) && defined(CONFIG_SMP)
++#include <linux/jump_label.h>
++DECLARE_STATIC_KEY_FALSE(__percpu_first_chunk_is_paged);
++
++#define percpu_first_chunk_is_paged	\
++		(static_key_enabled(&__percpu_first_chunk_is_paged.key))
++#else
++#define percpu_first_chunk_is_paged	false
++#endif /* CONFIG_PPC64 && CONFIG_SMP */
++
+ #include <asm-generic/percpu.h>
+ 
+ #include <asm/paca.h>
+diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
+index 63702c0badb973..259343040e1ba9 100644
+--- a/arch/powerpc/kernel/mce.c
++++ b/arch/powerpc/kernel/mce.c
+@@ -594,8 +594,15 @@ long notrace machine_check_early(struct pt_regs *regs)
+ 	u8 ftrace_enabled = this_cpu_get_ftrace_enabled();
+ 
+ 	this_cpu_set_ftrace_enabled(0);
+-	/* Do not use nmi_enter/exit for pseries hpte guest */
+-	if (radix_enabled() || !firmware_has_feature(FW_FEATURE_LPAR))
++	/*
++	 * Do not use nmi_enter/exit for pseries hpte guest
++	 *
++	 * Likewise, do not use it in real mode if percpu first chunk is not
++	 * embedded. With CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK enabled there
++	 * are chances where percpu allocation can come from vmalloc area.
++	 */
++	if ((radix_enabled() || !firmware_has_feature(FW_FEATURE_LPAR)) &&
++	    !percpu_first_chunk_is_paged)
+ 		nmi_enter();
+ 
+ 	hv_nmi_check_nonrecoverable(regs);
+@@ -606,7 +613,8 @@ long notrace machine_check_early(struct pt_regs *regs)
+ 	if (ppc_md.machine_check_early)
+ 		handled = ppc_md.machine_check_early(regs);
+ 
+-	if (radix_enabled() || !firmware_has_feature(FW_FEATURE_LPAR))
++	if ((radix_enabled() || !firmware_has_feature(FW_FEATURE_LPAR)) &&
++	    !percpu_first_chunk_is_paged)
+ 		nmi_exit();
+ 
+ 	this_cpu_set_ftrace_enabled(ftrace_enabled);
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 3f8426bccd168b..899d87de01655d 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -824,6 +824,7 @@ static int pcpu_cpu_distance(unsigned int from, unsigned int to)
+ 
+ unsigned long __per_cpu_offset[NR_CPUS] __read_mostly;
+ EXPORT_SYMBOL(__per_cpu_offset);
++DEFINE_STATIC_KEY_FALSE(__percpu_first_chunk_is_paged);
+ 
+ static void __init pcpu_populate_pte(unsigned long addr)
+ {
+@@ -903,6 +904,7 @@ void __init setup_per_cpu_areas(void)
+ 	if (rc < 0)
+ 		panic("cannot initialize percpu area (err=%d)", rc);
+ 
++	static_key_enable(&__percpu_first_chunk_is_paged.key);
+ 	delta = (unsigned long)pcpu_base_addr - (unsigned long)__per_cpu_start;
+ 	for_each_possible_cpu(cpu) {
+                 __per_cpu_offset[cpu] = delta + pcpu_unit_offsets[cpu];
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index b0e87dce2b9a0c..b4d108bef81470 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -835,8 +835,14 @@ void machine_check_exception(struct pt_regs *regs)
+ 	 * This is silly. The BOOK3S_64 should just call a different function
+ 	 * rather than expecting semantics to magically change. Something
+ 	 * like 'non_nmi_machine_check_exception()', perhaps?
++	 *
++	 * Do not use nmi_enter/exit in real mode if percpu first chunk is
++	 * not embedded. With CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK enabled
++	 * there are chances where percpu allocation can come from
++	 * vmalloc area.
+ 	 */
+-	const bool nmi = !IS_ENABLED(CONFIG_PPC_BOOK3S_64);
++	const bool nmi = !IS_ENABLED(CONFIG_PPC_BOOK3S_64) &&
++			 !percpu_first_chunk_is_paged;
+ 
+ 	if (nmi) nmi_enter();
+ 
+diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
+index ef8077a739b881..0f5ebec660a78b 100644
+--- a/arch/powerpc/kvm/powerpc.c
++++ b/arch/powerpc/kvm/powerpc.c
+@@ -1956,8 +1956,10 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
+ 			break;
+ 
+ 		r = -ENXIO;
+-		if (!xive_enabled())
++		if (!xive_enabled()) {
++			fdput(f);
+ 			break;
++		}
+ 
+ 		r = -EPERM;
+ 		dev = kvm_device_from_filp(f.file);
+diff --git a/arch/powerpc/xmon/ppc-dis.c b/arch/powerpc/xmon/ppc-dis.c
+index 75fa98221d485d..af105e1bc3fca4 100644
+--- a/arch/powerpc/xmon/ppc-dis.c
++++ b/arch/powerpc/xmon/ppc-dis.c
+@@ -122,32 +122,21 @@ int print_insn_powerpc (unsigned long insn, unsigned long memaddr)
+   bool insn_is_short;
+   ppc_cpu_t dialect;
+ 
+-  dialect = PPC_OPCODE_PPC | PPC_OPCODE_COMMON
+-            | PPC_OPCODE_64 | PPC_OPCODE_POWER4 | PPC_OPCODE_ALTIVEC;
++  dialect = PPC_OPCODE_PPC | PPC_OPCODE_COMMON;
+ 
+-  if (cpu_has_feature(CPU_FTRS_POWER5))
+-    dialect |= PPC_OPCODE_POWER5;
++  if (IS_ENABLED(CONFIG_PPC64))
++    dialect |= PPC_OPCODE_64 | PPC_OPCODE_POWER4 | PPC_OPCODE_CELL |
++	PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_POWER7 | PPC_OPCODE_POWER8 |
++	PPC_OPCODE_POWER9;
+ 
+-  if (cpu_has_feature(CPU_FTRS_CELL))
+-    dialect |= (PPC_OPCODE_CELL | PPC_OPCODE_ALTIVEC);
++  if (cpu_has_feature(CPU_FTR_TM))
++    dialect |= PPC_OPCODE_HTM;
+ 
+-  if (cpu_has_feature(CPU_FTRS_POWER6))
+-    dialect |= (PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_ALTIVEC);
++  if (cpu_has_feature(CPU_FTR_ALTIVEC))
++    dialect |= PPC_OPCODE_ALTIVEC | PPC_OPCODE_ALTIVEC2;
+ 
+-  if (cpu_has_feature(CPU_FTRS_POWER7))
+-    dialect |= (PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_POWER7
+-                | PPC_OPCODE_ALTIVEC | PPC_OPCODE_VSX);
+-
+-  if (cpu_has_feature(CPU_FTRS_POWER8))
+-    dialect |= (PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_POWER7
+-		| PPC_OPCODE_POWER8 | PPC_OPCODE_HTM
+-		| PPC_OPCODE_ALTIVEC | PPC_OPCODE_ALTIVEC2 | PPC_OPCODE_VSX);
+-
+-  if (cpu_has_feature(CPU_FTRS_POWER9))
+-    dialect |= (PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_POWER7
+-		| PPC_OPCODE_POWER8 | PPC_OPCODE_POWER9 | PPC_OPCODE_HTM
+-		| PPC_OPCODE_ALTIVEC | PPC_OPCODE_ALTIVEC2
+-		| PPC_OPCODE_VSX | PPC_OPCODE_VSX3);
++  if (cpu_has_feature(CPU_FTR_VSX))
++    dialect |= PPC_OPCODE_VSX | PPC_OPCODE_VSX3;
+ 
+   /* Get the major opcode of the insn.  */
+   opcode = NULL;
+diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
+index 54b12943cc7b0a..9bed3208919740 100644
+--- a/arch/riscv/mm/fault.c
++++ b/arch/riscv/mm/fault.c
+@@ -39,26 +39,27 @@ static inline void no_context(struct pt_regs *regs, unsigned long addr)
+ 
+ static inline void mm_fault_error(struct pt_regs *regs, unsigned long addr, vm_fault_t fault)
+ {
++	if (!user_mode(regs)) {
++		no_context(regs, addr);
++		return;
++	}
++
+ 	if (fault & VM_FAULT_OOM) {
+ 		/*
+ 		 * We ran out of memory, call the OOM killer, and return the userspace
+ 		 * (which will retry the fault, or kill us if we got oom-killed).
+ 		 */
+-		if (!user_mode(regs)) {
+-			no_context(regs, addr);
+-			return;
+-		}
+ 		pagefault_out_of_memory();
+ 		return;
+ 	} else if (fault & VM_FAULT_SIGBUS) {
+ 		/* Kernel mode? Handle exceptions or die */
+-		if (!user_mode(regs)) {
+-			no_context(regs, addr);
+-			return;
+-		}
+ 		do_trap(regs, SIGBUS, BUS_ADRERR, addr);
+ 		return;
++	} else if (fault & VM_FAULT_SIGSEGV) {
++		do_trap(regs, SIGSEGV, SEGV_MAPERR, addr);
++		return;
+ 	}
++
+ 	BUG();
+ }
+ 
+diff --git a/arch/sparc/include/asm/oplib_64.h b/arch/sparc/include/asm/oplib_64.h
+index a67abebd43592f..1b86d02a84556a 100644
+--- a/arch/sparc/include/asm/oplib_64.h
++++ b/arch/sparc/include/asm/oplib_64.h
+@@ -247,6 +247,7 @@ void prom_sun4v_guest_soft_state(void);
+ int prom_ihandle2path(int handle, char *buffer, int bufsize);
+ 
+ /* Client interface level routines. */
++void prom_cif_init(void *cif_handler);
+ void p1275_cmd_direct(unsigned long *);
+ 
+ #endif /* !(__SPARC64_OPLIB_H) */
+diff --git a/arch/sparc/prom/init_64.c b/arch/sparc/prom/init_64.c
+index 103aa910431856..f7b8a1a865b8fe 100644
+--- a/arch/sparc/prom/init_64.c
++++ b/arch/sparc/prom/init_64.c
+@@ -26,9 +26,6 @@ phandle prom_chosen_node;
+  * routines in the prom library.
+  * It gets passed the pointer to the PROM vector.
+  */
+-
+-extern void prom_cif_init(void *);
+-
+ void __init prom_init(void *cif_handler)
+ {
+ 	phandle node;
+diff --git a/arch/sparc/prom/p1275.c b/arch/sparc/prom/p1275.c
+index 889aa602f8d860..51c3f984bbf728 100644
+--- a/arch/sparc/prom/p1275.c
++++ b/arch/sparc/prom/p1275.c
+@@ -49,7 +49,7 @@ void p1275_cmd_direct(unsigned long *args)
+ 	local_irq_restore(flags);
+ }
+ 
+-void prom_cif_init(void *cif_handler, void *cif_stack)
++void prom_cif_init(void *cif_handler)
+ {
+ 	p1275buf.prom_cif_handler = (void (*)(long *))cif_handler;
+ }
+diff --git a/arch/um/kernel/time.c b/arch/um/kernel/time.c
+index 8dafc3f2add420..9a0fcafafd00be 100644
+--- a/arch/um/kernel/time.c
++++ b/arch/um/kernel/time.c
+@@ -756,9 +756,9 @@ int setup_time_travel_start(char *str)
+ 	return 1;
+ }
+ 
+-__setup("time-travel-start", setup_time_travel_start);
++__setup("time-travel-start=", setup_time_travel_start);
+ __uml_help(setup_time_travel_start,
+-"time-travel-start=<seconds>\n"
++"time-travel-start=<nanoseconds>\n"
+ "Configure the UML instance's wall clock to start at this value rather than\n"
+ "the host's wall clock at the time of UML boot.\n");
+ #endif
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 5667b8b994e343..da7e8c2b53473c 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -861,7 +861,7 @@ static void pt_update_head(struct pt *pt)
+  */
+ static void *pt_buffer_region(struct pt_buffer *buf)
+ {
+-	return phys_to_virt(TOPA_ENTRY(buf->cur, buf->cur_idx)->base << TOPA_SHIFT);
++	return phys_to_virt((phys_addr_t)TOPA_ENTRY(buf->cur, buf->cur_idx)->base << TOPA_SHIFT);
+ }
+ 
+ /**
+@@ -973,7 +973,7 @@ pt_topa_entry_for_page(struct pt_buffer *buf, unsigned int pg)
+ 	 * order allocations, there shouldn't be many of these.
+ 	 */
+ 	list_for_each_entry(topa, &buf->tables, list) {
+-		if (topa->offset + topa->size > pg << PAGE_SHIFT)
++		if (topa->offset + topa->size > (unsigned long)pg << PAGE_SHIFT)
+ 			goto found;
+ 	}
+ 
+diff --git a/arch/x86/events/intel/pt.h b/arch/x86/events/intel/pt.h
+index 96906a62aacdad..f5e46c04c145d0 100644
+--- a/arch/x86/events/intel/pt.h
++++ b/arch/x86/events/intel/pt.h
+@@ -33,8 +33,8 @@ struct topa_entry {
+ 	u64	rsvd2	: 1;
+ 	u64	size	: 4;
+ 	u64	rsvd3	: 2;
+-	u64	base	: 36;
+-	u64	rsvd4	: 16;
++	u64	base	: 40;
++	u64	rsvd4	: 12;
+ };
+ 
+ /* TSC to Core Crystal Clock Ratio */
+diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
+index 5f436cb4f7c495..9d5313118660c4 100644
+--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
++++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
+@@ -817,7 +817,7 @@ void mtrr_save_state(void)
+ {
+ 	int first_cpu;
+ 
+-	if (!mtrr_enabled())
++	if (!mtrr_enabled() || !mtrr_state.have_fixed)
+ 		return;
+ 
+ 	first_cpu = cpumask_first(cpu_online_mask);
+diff --git a/arch/x86/kernel/devicetree.c b/arch/x86/kernel/devicetree.c
+index ddffd80f5c5253..2d654fdf001dcd 100644
+--- a/arch/x86/kernel/devicetree.c
++++ b/arch/x86/kernel/devicetree.c
+@@ -92,7 +92,7 @@ static int x86_of_pci_irq_enable(struct pci_dev *dev)
+ 
+ 	ret = pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin);
+ 	if (ret)
+-		return ret;
++		return pcibios_err_to_errno(ret);
+ 	if (!pin)
+ 		return 0;
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 3e9bb9ae836ddf..b29be51b72b445 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -4738,14 +4738,19 @@ static int vmx_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+ 	return !vmx_nmi_blocked(vcpu);
+ }
+ 
++bool __vmx_interrupt_blocked(struct kvm_vcpu *vcpu)
++{
++	return !(vmx_get_rflags(vcpu) & X86_EFLAGS_IF) ||
++	       (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
++		(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
++}
++
+ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu)
+ {
+ 	if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
+ 		return false;
+ 
+-	return !(vmx_get_rflags(vcpu) & X86_EFLAGS_IF) ||
+-	       (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
+-		(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
++	return __vmx_interrupt_blocked(vcpu);
+ }
+ 
+ static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index ed4b6da83aa87c..811d7ef4638bb9 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -359,6 +359,7 @@ bool vmx_guest_inject_ac(struct kvm_vcpu *vcpu);
+ void update_exception_bitmap(struct kvm_vcpu *vcpu);
+ void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
+ bool vmx_nmi_blocked(struct kvm_vcpu *vcpu);
++bool __vmx_interrupt_blocked(struct kvm_vcpu *vcpu);
+ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu);
+ bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
+ void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 1aab92930569af..50e31d14351bf6 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -374,14 +374,14 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
+ 			 */
+ 			*target_pmd = *pmd;
+ 
+-			addr += PMD_SIZE;
++			addr = round_up(addr + 1, PMD_SIZE);
+ 
+ 		} else if (level == PTI_CLONE_PTE) {
+ 
+ 			/* Walk the page-table down to the pte level */
+ 			pte = pte_offset_kernel(pmd, addr);
+ 			if (pte_none(*pte)) {
+-				addr += PAGE_SIZE;
++				addr = round_up(addr + 1, PAGE_SIZE);
+ 				continue;
+ 			}
+ 
+@@ -401,7 +401,7 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
+ 			/* Clone the PTE */
+ 			*target_pte = *pte;
+ 
+-			addr += PAGE_SIZE;
++			addr = round_up(addr + 1, PAGE_SIZE);
+ 
+ 		} else {
+ 			BUG();
+@@ -497,7 +497,7 @@ static void pti_clone_entry_text(void)
+ {
+ 	pti_clone_pgtable((unsigned long) __entry_text_start,
+ 			  (unsigned long) __entry_text_end,
+-			  PTI_CLONE_PMD);
++			  PTI_LEVEL_KERNEL_IMAGE);
+ }
+ 
+ /*
+diff --git a/arch/x86/pci/intel_mid_pci.c b/arch/x86/pci/intel_mid_pci.c
+index 24ca4ee2802fbb..3399dcdd526c69 100644
+--- a/arch/x86/pci/intel_mid_pci.c
++++ b/arch/x86/pci/intel_mid_pci.c
+@@ -223,9 +223,9 @@ static int intel_mid_pci_irq_enable(struct pci_dev *dev)
+ 		return 0;
+ 
+ 	ret = pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &gsi);
+-	if (ret < 0) {
++	if (ret) {
+ 		dev_warn(&dev->dev, "Failed to read interrupt line: %d\n", ret);
+-		return ret;
++		return pcibios_err_to_errno(ret);
+ 	}
+ 
+ 	switch (intel_mid_identify_cpu()) {
+diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
+index 326d6d17373388..cbe9ab42cbebb6 100644
+--- a/arch/x86/pci/xen.c
++++ b/arch/x86/pci/xen.c
+@@ -37,10 +37,10 @@ static int xen_pcifront_enable_irq(struct pci_dev *dev)
+ 	u8 gsi;
+ 
+ 	rc = pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &gsi);
+-	if (rc < 0) {
++	if (rc) {
+ 		dev_warn(&dev->dev, "Xen PCI: failed to read interrupt line: %d\n",
+ 			 rc);
+-		return rc;
++		return pcibios_err_to_errno(rc);
+ 	}
+ 	/* In PV DomU the Xen PCI backend puts the PIRQ in the interrupt line.*/
+ 	pirq = gsi;
+diff --git a/arch/x86/platform/intel/iosf_mbi.c b/arch/x86/platform/intel/iosf_mbi.c
+index 526f70f27c1c35..70a1c3877f8e31 100644
+--- a/arch/x86/platform/intel/iosf_mbi.c
++++ b/arch/x86/platform/intel/iosf_mbi.c
+@@ -62,7 +62,7 @@ static int iosf_mbi_pci_read_mdr(u32 mcrx, u32 mcr, u32 *mdr)
+ 
+ fail_read:
+ 	dev_err(&mbi_pdev->dev, "PCI config access failed with %d\n", result);
+-	return result;
++	return pcibios_err_to_errno(result);
+ }
+ 
+ static int iosf_mbi_pci_write_mdr(u32 mcrx, u32 mcr, u32 mdr)
+@@ -91,7 +91,7 @@ static int iosf_mbi_pci_write_mdr(u32 mcrx, u32 mcr, u32 mdr)
+ 
+ fail_write:
+ 	dev_err(&mbi_pdev->dev, "PCI config access failed with %d\n", result);
+-	return result;
++	return pcibios_err_to_errno(result);
+ }
+ 
+ int iosf_mbi_read(u8 port, u8 opcode, u32 offset, u32 *mdr)
+diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
+index e809f144684643..bfa972f7e87451 100644
+--- a/arch/x86/xen/p2m.c
++++ b/arch/x86/xen/p2m.c
+@@ -736,7 +736,7 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+ 		 * immediate unmapping.
+ 		 */
+ 		map_ops[i].status = GNTST_general_error;
+-		unmap[0].host_addr = map_ops[i].host_addr,
++		unmap[0].host_addr = map_ops[i].host_addr;
+ 		unmap[0].handle = map_ops[i].handle;
+ 		map_ops[i].handle = ~0;
+ 		if (map_ops[i].flags & GNTMAP_device_map)
+@@ -746,7 +746,7 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+ 
+ 		if (kmap_ops) {
+ 			kmap_ops[i].status = GNTST_general_error;
+-			unmap[1].host_addr = kmap_ops[i].host_addr,
++			unmap[1].host_addr = kmap_ops[i].host_addr;
+ 			unmap[1].handle = kmap_ops[i].handle;
+ 			kmap_ops[i].handle = ~0;
+ 			if (kmap_ops[i].flags & GNTMAP_device_map)
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index 8b43efe97da5d3..2e1462b8929c07 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -670,12 +670,18 @@ static ssize_t acpi_battery_alarm_store(struct device *dev,
+ 	return count;
+ }
+ 
+-static const struct device_attribute alarm_attr = {
++static struct device_attribute alarm_attr = {
+ 	.attr = {.name = "alarm", .mode = 0644},
+ 	.show = acpi_battery_alarm_show,
+ 	.store = acpi_battery_alarm_store,
+ };
+ 
++static struct attribute *acpi_battery_attrs[] = {
++	&alarm_attr.attr,
++	NULL
++};
++ATTRIBUTE_GROUPS(acpi_battery);
++
+ /*
+  * The Battery Hooking API
+  *
+@@ -812,7 +818,10 @@ static void __exit battery_hook_exit(void)
+ 
+ static int sysfs_add_battery(struct acpi_battery *battery)
+ {
+-	struct power_supply_config psy_cfg = { .drv_data = battery, };
++	struct power_supply_config psy_cfg = {
++		.drv_data = battery,
++		.attr_grp = acpi_battery_groups,
++	};
+ 	bool full_cap_broken = false;
+ 
+ 	if (!ACPI_BATTERY_CAPACITY_VALID(battery->full_charge_capacity) &&
+@@ -857,7 +866,7 @@ static int sysfs_add_battery(struct acpi_battery *battery)
+ 		return result;
+ 	}
+ 	battery_hook_add_battery(battery);
+-	return device_create_file(&battery->bat->dev, &alarm_attr);
++	return 0;
+ }
+ 
+ static void sysfs_remove_battery(struct acpi_battery *battery)
+@@ -868,7 +877,6 @@ static void sysfs_remove_battery(struct acpi_battery *battery)
+ 		return;
+ 	}
+ 	battery_hook_remove_battery(battery);
+-	device_remove_file(&battery->bat->dev, &alarm_attr);
+ 	power_supply_unregister(battery->bat);
+ 	battery->bat = NULL;
+ 	mutex_unlock(&battery->sysfs_lock);
+diff --git a/drivers/acpi/sbs.c b/drivers/acpi/sbs.c
+index e6d9f4de280000..54396cb8930a4a 100644
+--- a/drivers/acpi/sbs.c
++++ b/drivers/acpi/sbs.c
+@@ -77,7 +77,6 @@ struct acpi_battery {
+ 	u16 spec;
+ 	u8 id;
+ 	u8 present:1;
+-	u8 have_sysfs_alarm:1;
+ };
+ 
+ #define to_acpi_battery(x) power_supply_get_drvdata(x)
+@@ -462,12 +461,18 @@ static ssize_t acpi_battery_alarm_store(struct device *dev,
+ 	return count;
+ }
+ 
+-static const struct device_attribute alarm_attr = {
++static struct device_attribute alarm_attr = {
+ 	.attr = {.name = "alarm", .mode = 0644},
+ 	.show = acpi_battery_alarm_show,
+ 	.store = acpi_battery_alarm_store,
+ };
+ 
++static struct attribute *acpi_battery_attrs[] = {
++	&alarm_attr.attr,
++	NULL
++};
++ATTRIBUTE_GROUPS(acpi_battery);
++
+ /* --------------------------------------------------------------------------
+                                  Driver Interface
+    -------------------------------------------------------------------------- */
+@@ -509,7 +514,10 @@ static int acpi_battery_read(struct acpi_battery *battery)
+ static int acpi_battery_add(struct acpi_sbs *sbs, int id)
+ {
+ 	struct acpi_battery *battery = &sbs->battery[id];
+-	struct power_supply_config psy_cfg = { .drv_data = battery, };
++	struct power_supply_config psy_cfg = {
++		.drv_data = battery,
++		.attr_grp = acpi_battery_groups,
++	};
+ 	int result;
+ 
+ 	battery->id = id;
+@@ -539,10 +547,6 @@ static int acpi_battery_add(struct acpi_sbs *sbs, int id)
+ 		goto end;
+ 	}
+ 
+-	result = device_create_file(&battery->bat->dev, &alarm_attr);
+-	if (result)
+-		goto end;
+-	battery->have_sysfs_alarm = 1;
+       end:
+ 	printk(KERN_INFO PREFIX "%s [%s]: Battery Slot [%s] (battery %s)\n",
+ 	       ACPI_SBS_DEVICE_NAME, acpi_device_bid(sbs->device),
+@@ -554,11 +558,8 @@ static void acpi_battery_remove(struct acpi_sbs *sbs, int id)
+ {
+ 	struct acpi_battery *battery = &sbs->battery[id];
+ 
+-	if (battery->bat) {
+-		if (battery->have_sysfs_alarm)
+-			device_remove_file(&battery->bat->dev, &alarm_attr);
++	if (battery->bat)
+ 		power_supply_unregister(battery->bat);
+-	}
+ }
+ 
+ static int acpi_charger_add(struct acpi_sbs *sbs)
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 6631a65f632b14..cd3de4ec176709 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -928,9 +928,7 @@ static bool binder_has_work(struct binder_thread *thread, bool do_proc_work)
+ static bool binder_available_for_proc_work_ilocked(struct binder_thread *thread)
+ {
+ 	return !thread->transaction_stack &&
+-		binder_worklist_empty_ilocked(&thread->todo) &&
+-		(thread->looper & (BINDER_LOOPER_STATE_ENTERED |
+-				   BINDER_LOOPER_STATE_REGISTERED));
++		binder_worklist_empty_ilocked(&thread->todo);
+ }
+ 
+ static void binder_wakeup_poll_threads_ilocked(struct binder_proc *proc,
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index b13a60de5a863a..b81fd39226ca74 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -25,6 +25,7 @@
+ #include <linux/mutex.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/netdevice.h>
++#include <linux/rcupdate.h>
+ #include <linux/sched/signal.h>
+ #include <linux/sched/mm.h>
+ #include <linux/sysfs.h>
+@@ -1909,6 +1910,7 @@ static int dev_uevent(struct kset *kset, struct kobject *kobj,
+ 		      struct kobj_uevent_env *env)
+ {
+ 	struct device *dev = kobj_to_dev(kobj);
++	struct device_driver *driver;
+ 	int retval = 0;
+ 
+ 	/* add device node properties if present */
+@@ -1937,8 +1939,12 @@ static int dev_uevent(struct kset *kset, struct kobject *kobj,
+ 	if (dev->type && dev->type->name)
+ 		add_uevent_var(env, "DEVTYPE=%s", dev->type->name);
+ 
+-	if (dev->driver)
+-		add_uevent_var(env, "DRIVER=%s", dev->driver->name);
++	/* Synchronize with module_remove_driver() */
++	rcu_read_lock();
++	driver = READ_ONCE(dev->driver);
++	if (driver)
++		add_uevent_var(env, "DRIVER=%s", driver->name);
++	rcu_read_unlock();
+ 
+ 	/* Add common DT information about the device */
+ 	of_device_uevent(dev, env);
+@@ -2008,11 +2014,8 @@ static ssize_t uevent_show(struct device *dev, struct device_attribute *attr,
+ 	if (!env)
+ 		return -ENOMEM;
+ 
+-	/* Synchronize with really_probe() */
+-	device_lock(dev);
+ 	/* let the kset specific function add its keys */
+ 	retval = kset->uevent_ops->uevent(kset, &dev->kobj, env);
+-	device_unlock(dev);
+ 	if (retval)
+ 		goto out;
+ 
+diff --git a/drivers/base/devres.c b/drivers/base/devres.c
+index 586e9a75c840a9..8a74008c13c443 100644
+--- a/drivers/base/devres.c
++++ b/drivers/base/devres.c
+@@ -901,9 +901,12 @@ void *devm_krealloc(struct device *dev, void *ptr, size_t new_size, gfp_t gfp)
+ 	/*
+ 	 * Otherwise: allocate new, larger chunk. We need to allocate before
+ 	 * taking the lock as most probably the caller uses GFP_KERNEL.
++	 * alloc_dr() will call check_dr_size() to reserve extra memory
++	 * for struct devres automatically, so size @new_size user request
++	 * is delivered to it directly as devm_kmalloc() does.
+ 	 */
+ 	new_dr = alloc_dr(devm_kmalloc_release,
+-			  total_new_size, gfp, dev_to_node(dev));
++			  new_size, gfp, dev_to_node(dev));
+ 	if (!new_dr)
+ 		return NULL;
+ 
+@@ -1227,7 +1230,11 @@ EXPORT_SYMBOL_GPL(__devm_alloc_percpu);
+  */
+ void devm_free_percpu(struct device *dev, void __percpu *pdata)
+ {
+-	WARN_ON(devres_destroy(dev, devm_percpu_release, devm_percpu_match,
+-			       (void *)pdata));
++	/*
++	 * Use devres_release() to prevent memory leakage as
++	 * devm_free_pages() does.
++	 */
++	WARN_ON(devres_release(dev, devm_percpu_release, devm_percpu_match,
++			       (__force void *)pdata));
+ }
+ EXPORT_SYMBOL_GPL(devm_free_percpu);
+diff --git a/drivers/base/module.c b/drivers/base/module.c
+index 46ad4d636731dd..851cc5367c04c0 100644
+--- a/drivers/base/module.c
++++ b/drivers/base/module.c
+@@ -7,6 +7,7 @@
+ #include <linux/errno.h>
+ #include <linux/slab.h>
+ #include <linux/string.h>
++#include <linux/rcupdate.h>
+ #include "base.h"
+ 
+ static char *make_driver_name(struct device_driver *drv)
+@@ -77,6 +78,9 @@ void module_remove_driver(struct device_driver *drv)
+ 	if (!drv)
+ 		return;
+ 
++	/* Synchronize with dev_uevent() */
++	synchronize_rcu();
++
+ 	sysfs_remove_link(&drv->p->kobj, "module");
+ 
+ 	if (drv->owner)
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 5b102d333a4104..0ceef7bcfe8ee9 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -362,7 +362,7 @@ enum rbd_watch_state {
+ enum rbd_lock_state {
+ 	RBD_LOCK_STATE_UNLOCKED,
+ 	RBD_LOCK_STATE_LOCKED,
+-	RBD_LOCK_STATE_RELEASING,
++	RBD_LOCK_STATE_QUIESCING,
+ };
+ 
+ /* WatchNotify::ClientId */
+@@ -422,7 +422,7 @@ struct rbd_device {
+ 	struct list_head	running_list;
+ 	struct completion	acquire_wait;
+ 	int			acquire_err;
+-	struct completion	releasing_wait;
++	struct completion	quiescing_wait;
+ 
+ 	spinlock_t		object_map_lock;
+ 	u8			*object_map;
+@@ -525,7 +525,7 @@ static bool __rbd_is_lock_owner(struct rbd_device *rbd_dev)
+ 	lockdep_assert_held(&rbd_dev->lock_rwsem);
+ 
+ 	return rbd_dev->lock_state == RBD_LOCK_STATE_LOCKED ||
+-	       rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING;
++	       rbd_dev->lock_state == RBD_LOCK_STATE_QUIESCING;
+ }
+ 
+ static bool rbd_is_lock_owner(struct rbd_device *rbd_dev)
+@@ -3522,13 +3522,14 @@ static void rbd_lock_del_request(struct rbd_img_request *img_req)
+ 	lockdep_assert_held(&rbd_dev->lock_rwsem);
+ 	spin_lock(&rbd_dev->lock_lists_lock);
+ 	if (!list_empty(&img_req->lock_item)) {
++		rbd_assert(!list_empty(&rbd_dev->running_list));
+ 		list_del_init(&img_req->lock_item);
+-		need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING &&
++		need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_QUIESCING &&
+ 			       list_empty(&rbd_dev->running_list));
+ 	}
+ 	spin_unlock(&rbd_dev->lock_lists_lock);
+ 	if (need_wakeup)
+-		complete(&rbd_dev->releasing_wait);
++		complete(&rbd_dev->quiescing_wait);
+ }
+ 
+ static int rbd_img_exclusive_lock(struct rbd_img_request *img_req)
+@@ -3541,11 +3542,6 @@ static int rbd_img_exclusive_lock(struct rbd_img_request *img_req)
+ 	if (rbd_lock_add_request(img_req))
+ 		return 1;
+ 
+-	if (rbd_dev->opts->exclusive) {
+-		WARN_ON(1); /* lock got released? */
+-		return -EROFS;
+-	}
+-
+ 	/*
+ 	 * Note the use of mod_delayed_work() in rbd_acquire_lock()
+ 	 * and cancel_delayed_work() in wake_lock_waiters().
+@@ -4237,16 +4233,16 @@ static bool rbd_quiesce_lock(struct rbd_device *rbd_dev)
+ 	/*
+ 	 * Ensure that all in-flight IO is flushed.
+ 	 */
+-	rbd_dev->lock_state = RBD_LOCK_STATE_RELEASING;
+-	rbd_assert(!completion_done(&rbd_dev->releasing_wait));
++	rbd_dev->lock_state = RBD_LOCK_STATE_QUIESCING;
++	rbd_assert(!completion_done(&rbd_dev->quiescing_wait));
+ 	if (list_empty(&rbd_dev->running_list))
+ 		return true;
+ 
+ 	up_write(&rbd_dev->lock_rwsem);
+-	wait_for_completion(&rbd_dev->releasing_wait);
++	wait_for_completion(&rbd_dev->quiescing_wait);
+ 
+ 	down_write(&rbd_dev->lock_rwsem);
+-	if (rbd_dev->lock_state != RBD_LOCK_STATE_RELEASING)
++	if (rbd_dev->lock_state != RBD_LOCK_STATE_QUIESCING)
+ 		return false;
+ 
+ 	rbd_assert(list_empty(&rbd_dev->running_list));
+@@ -4657,6 +4653,10 @@ static void rbd_reacquire_lock(struct rbd_device *rbd_dev)
+ 			rbd_warn(rbd_dev, "failed to update lock cookie: %d",
+ 				 ret);
+ 
++		if (rbd_dev->opts->exclusive)
++			rbd_warn(rbd_dev,
++			     "temporarily releasing lock on exclusive mapping");
++
+ 		/*
+ 		 * Lock cookie cannot be updated on older OSDs, so do
+ 		 * a manual release and queue an acquire.
+@@ -5455,7 +5455,7 @@ static struct rbd_device *__rbd_dev_create(struct rbd_spec *spec)
+ 	INIT_LIST_HEAD(&rbd_dev->acquiring_list);
+ 	INIT_LIST_HEAD(&rbd_dev->running_list);
+ 	init_completion(&rbd_dev->acquire_wait);
+-	init_completion(&rbd_dev->releasing_wait);
++	init_completion(&rbd_dev->quiescing_wait);
+ 
+ 	spin_lock_init(&rbd_dev->object_map_lock);
+ 
+@@ -6660,11 +6660,6 @@ static int rbd_add_acquire_lock(struct rbd_device *rbd_dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	/*
+-	 * The lock may have been released by now, unless automatic lock
+-	 * transitions are disabled.
+-	 */
+-	rbd_assert(!rbd_dev->opts->exclusive || rbd_is_lock_owner(rbd_dev));
+ 	return 0;
+ }
+ 
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index b2836e8efefd28..b0d97c9ffd2607 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -428,6 +428,10 @@ static const struct usb_device_id blacklist_table[] = {
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x13d3, 0x3571), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x13d3, 0x3591), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0489, 0xe125), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
+ 
+ 	/* Realtek Bluetooth devices */
+ 	{ USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01),
+diff --git a/drivers/char/hw_random/amd-rng.c b/drivers/char/hw_random/amd-rng.c
+index db3dd467194c27..3f3fdf6ee3d52a 100644
+--- a/drivers/char/hw_random/amd-rng.c
++++ b/drivers/char/hw_random/amd-rng.c
+@@ -142,8 +142,10 @@ static int __init mod_init(void)
+ 
+ found:
+ 	err = pci_read_config_dword(pdev, 0x58, &pmbase);
+-	if (err)
++	if (err) {
++		err = pcibios_err_to_errno(err);
+ 		goto put_dev;
++	}
+ 
+ 	pmbase &= 0x0000FF00;
+ 	if (pmbase == 0) {
+diff --git a/drivers/char/tpm/eventlog/common.c b/drivers/char/tpm/eventlog/common.c
+index 8512ec76d5260d..4a6186f9f8899f 100644
+--- a/drivers/char/tpm/eventlog/common.c
++++ b/drivers/char/tpm/eventlog/common.c
+@@ -47,6 +47,8 @@ static int tpm_bios_measurements_open(struct inode *inode,
+ 	if (!err) {
+ 		seq = file->private_data;
+ 		seq->private = chip;
++	} else {
++		put_device(&chip->dev);
+ 	}
+ 
+ 	return err;
+diff --git a/drivers/clk/davinci/da8xx-cfgchip.c b/drivers/clk/davinci/da8xx-cfgchip.c
+index 77d18276bfe82c..90d96ee4a58b34 100644
+--- a/drivers/clk/davinci/da8xx-cfgchip.c
++++ b/drivers/clk/davinci/da8xx-cfgchip.c
+@@ -505,7 +505,7 @@ da8xx_cfgchip_register_usb0_clk48(struct device *dev,
+ 	const char * const parent_names[] = { "usb_refclkin", "pll0_auxclk" };
+ 	struct clk *fck_clk;
+ 	struct da8xx_usb0_clk48 *usb0;
+-	struct clk_init_data init;
++	struct clk_init_data init = {};
+ 	int ret;
+ 
+ 	fck_clk = devm_clk_get(dev, "fck");
+@@ -580,7 +580,7 @@ da8xx_cfgchip_register_usb1_clk48(struct device *dev,
+ {
+ 	const char * const parent_names[] = { "usb0_clk48", "usb_refclkin" };
+ 	struct da8xx_usb1_clk48 *usb1;
+-	struct clk_init_data init;
++	struct clk_init_data init = {};
+ 	int ret;
+ 
+ 	usb1 = devm_kzalloc(dev, sizeof(*usb1), GFP_KERNEL);
+diff --git a/drivers/clocksource/sh_cmt.c b/drivers/clocksource/sh_cmt.c
+index 66e4872ab34f97..26dbf01d2ec5ed 100644
+--- a/drivers/clocksource/sh_cmt.c
++++ b/drivers/clocksource/sh_cmt.c
+@@ -530,6 +530,7 @@ static void sh_cmt_set_next(struct sh_cmt_channel *ch, unsigned long delta)
+ static irqreturn_t sh_cmt_interrupt(int irq, void *dev_id)
+ {
+ 	struct sh_cmt_channel *ch = dev_id;
++	unsigned long flags;
+ 
+ 	/* clear flags */
+ 	sh_cmt_write_cmcsr(ch, sh_cmt_read_cmcsr(ch) &
+@@ -560,6 +561,8 @@ static irqreturn_t sh_cmt_interrupt(int irq, void *dev_id)
+ 
+ 	ch->flags &= ~FLAG_SKIPEVENT;
+ 
++	raw_spin_lock_irqsave(&ch->lock, flags);
++
+ 	if (ch->flags & FLAG_REPROGRAM) {
+ 		ch->flags &= ~FLAG_REPROGRAM;
+ 		sh_cmt_clock_event_program_verify(ch, 1);
+@@ -572,6 +575,8 @@ static irqreturn_t sh_cmt_interrupt(int irq, void *dev_id)
+ 
+ 	ch->flags &= ~FLAG_IRQCONTEXT;
+ 
++	raw_spin_unlock_irqrestore(&ch->lock, flags);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -770,12 +775,18 @@ static int sh_cmt_clock_event_next(unsigned long delta,
+ 				   struct clock_event_device *ced)
+ {
+ 	struct sh_cmt_channel *ch = ced_to_sh_cmt(ced);
++	unsigned long flags;
+ 
+ 	BUG_ON(!clockevent_state_oneshot(ced));
++
++	raw_spin_lock_irqsave(&ch->lock, flags);
++
+ 	if (likely(ch->flags & FLAG_IRQCONTEXT))
+ 		ch->next_match_value = delta - 1;
+ 	else
+-		sh_cmt_set_next(ch, delta - 1);
++		__sh_cmt_set_next(ch, delta - 1);
++
++	raw_spin_unlock_irqrestore(&ch->lock, flags);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/edac/Makefile b/drivers/edac/Makefile
+index 3a849168780dd1..eec69c885c1dec 100644
+--- a/drivers/edac/Makefile
++++ b/drivers/edac/Makefile
+@@ -58,11 +58,13 @@ obj-$(CONFIG_EDAC_MPC85XX)		+= mpc85xx_edac_mod.o
+ layerscape_edac_mod-y			:= fsl_ddr_edac.o layerscape_edac.o
+ obj-$(CONFIG_EDAC_LAYERSCAPE)		+= layerscape_edac_mod.o
+ 
+-skx_edac-y				:= skx_common.o skx_base.o
+-obj-$(CONFIG_EDAC_SKX)			+= skx_edac.o
++skx_edac_common-y			:= skx_common.o
+ 
+-i10nm_edac-y				:= skx_common.o i10nm_base.o
+-obj-$(CONFIG_EDAC_I10NM)		+= i10nm_edac.o
++skx_edac-y				:= skx_base.o
++obj-$(CONFIG_EDAC_SKX)			+= skx_edac.o skx_edac_common.o
++
++i10nm_edac-y				:= i10nm_base.o
++obj-$(CONFIG_EDAC_I10NM)		+= i10nm_edac.o skx_edac_common.o
+ 
+ obj-$(CONFIG_EDAC_MV64X60)		+= mv64x60_edac.o
+ obj-$(CONFIG_EDAC_CELL)			+= cell_edac.o
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index 2b4ce8e5ac2fa6..b585cbe3eff94f 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -23,10 +23,13 @@
+ #include "skx_common.h"
+ 
+ static const char * const component_names[] = {
+-	[INDEX_SOCKET]	= "ProcessorSocketId",
+-	[INDEX_MEMCTRL]	= "MemoryControllerId",
+-	[INDEX_CHANNEL]	= "ChannelId",
+-	[INDEX_DIMM]	= "DimmSlotId",
++	[INDEX_SOCKET]		= "ProcessorSocketId",
++	[INDEX_MEMCTRL]		= "MemoryControllerId",
++	[INDEX_CHANNEL]		= "ChannelId",
++	[INDEX_DIMM]		= "DimmSlotId",
++	[INDEX_NM_MEMCTRL]	= "NmMemoryControllerId",
++	[INDEX_NM_CHANNEL]	= "NmChannelId",
++	[INDEX_NM_DIMM]		= "NmDimmSlotId",
+ };
+ 
+ static int component_indices[ARRAY_SIZE(component_names)];
+@@ -34,14 +37,16 @@ static int adxl_component_count;
+ static const char * const *adxl_component_names;
+ static u64 *adxl_values;
+ static char *adxl_msg;
++static unsigned long adxl_nm_bitmap;
+ 
+ static char skx_msg[MSG_SIZE];
+ static skx_decode_f skx_decode;
+ static skx_show_retry_log_f skx_show_retry_rd_err_log;
+ static u64 skx_tolm, skx_tohm;
+ static LIST_HEAD(dev_edac_list);
++static bool skx_mem_cfg_2lm;
+ 
+-int __init skx_adxl_get(void)
++int skx_adxl_get(void)
+ {
+ 	const char * const *names;
+ 	int i, j;
+@@ -56,14 +61,25 @@ int __init skx_adxl_get(void)
+ 		for (j = 0; names[j]; j++) {
+ 			if (!strcmp(component_names[i], names[j])) {
+ 				component_indices[i] = j;
++
++				if (i >= INDEX_NM_FIRST)
++					adxl_nm_bitmap |= 1 << i;
++
+ 				break;
+ 			}
+ 		}
+ 
+-		if (!names[j])
++		if (!names[j] && i < INDEX_NM_FIRST)
+ 			goto err;
+ 	}
+ 
++	if (skx_mem_cfg_2lm) {
++		if (!adxl_nm_bitmap)
++			skx_printk(KERN_NOTICE, "Not enough ADXL components for 2-level memory.\n");
++		else
++			edac_dbg(2, "adxl_nm_bitmap: 0x%lx\n", adxl_nm_bitmap);
++	}
++
+ 	adxl_component_names = names;
+ 	while (*names++)
+ 		adxl_component_count++;
+@@ -92,14 +108,16 @@ int __init skx_adxl_get(void)
+ 
+ 	return -ENODEV;
+ }
++EXPORT_SYMBOL_GPL(skx_adxl_get);
+ 
+-void __exit skx_adxl_put(void)
++void skx_adxl_put(void)
+ {
+ 	kfree(adxl_values);
+ 	kfree(adxl_msg);
+ }
++EXPORT_SYMBOL_GPL(skx_adxl_put);
+ 
+-static bool skx_adxl_decode(struct decoded_addr *res)
++static bool skx_adxl_decode(struct decoded_addr *res, bool error_in_1st_level_mem)
+ {
+ 	struct skx_dev *d;
+ 	int i, len = 0;
+@@ -116,11 +134,20 @@ static bool skx_adxl_decode(struct decoded_addr *res)
+ 	}
+ 
+ 	res->socket  = (int)adxl_values[component_indices[INDEX_SOCKET]];
+-	res->imc     = (int)adxl_values[component_indices[INDEX_MEMCTRL]];
+-	res->channel = (int)adxl_values[component_indices[INDEX_CHANNEL]];
+-	res->dimm    = (int)adxl_values[component_indices[INDEX_DIMM]];
++	if (error_in_1st_level_mem) {
++		res->imc     = (adxl_nm_bitmap & BIT_NM_MEMCTRL) ?
++			       (int)adxl_values[component_indices[INDEX_NM_MEMCTRL]] : -1;
++		res->channel = (adxl_nm_bitmap & BIT_NM_CHANNEL) ?
++			       (int)adxl_values[component_indices[INDEX_NM_CHANNEL]] : -1;
++		res->dimm    = (adxl_nm_bitmap & BIT_NM_DIMM) ?
++			       (int)adxl_values[component_indices[INDEX_NM_DIMM]] : -1;
++	} else {
++		res->imc     = (int)adxl_values[component_indices[INDEX_MEMCTRL]];
++		res->channel = (int)adxl_values[component_indices[INDEX_CHANNEL]];
++		res->dimm    = (int)adxl_values[component_indices[INDEX_DIMM]];
++	}
+ 
+-	if (res->imc > NUM_IMC - 1) {
++	if (res->imc > NUM_IMC - 1 || res->imc < 0) {
+ 		skx_printk(KERN_ERR, "Bad imc %d\n", res->imc);
+ 		return false;
+ 	}
+@@ -151,11 +178,18 @@ static bool skx_adxl_decode(struct decoded_addr *res)
+ 	return true;
+ }
+ 
++void skx_set_mem_cfg(bool mem_cfg_2lm)
++{
++	skx_mem_cfg_2lm = mem_cfg_2lm;
++}
++EXPORT_SYMBOL_GPL(skx_set_mem_cfg);
++
+ void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log)
+ {
+ 	skx_decode = decode;
+ 	skx_show_retry_rd_err_log = show_retry_log;
+ }
++EXPORT_SYMBOL_GPL(skx_set_decode);
+ 
+ int skx_get_src_id(struct skx_dev *d, int off, u8 *id)
+ {
+@@ -169,6 +203,7 @@ int skx_get_src_id(struct skx_dev *d, int off, u8 *id)
+ 	*id = GET_BITFIELD(reg, 12, 14);
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(skx_get_src_id);
+ 
+ int skx_get_node_id(struct skx_dev *d, u8 *id)
+ {
+@@ -182,6 +217,7 @@ int skx_get_node_id(struct skx_dev *d, u8 *id)
+ 	*id = GET_BITFIELD(reg, 0, 2);
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(skx_get_node_id);
+ 
+ static int get_width(u32 mtr)
+ {
+@@ -247,6 +283,7 @@ int skx_get_all_bus_mappings(struct res_config *cfg, struct list_head **list)
+ 		*list = &dev_edac_list;
+ 	return ndev;
+ }
++EXPORT_SYMBOL_GPL(skx_get_all_bus_mappings);
+ 
+ int skx_get_hi_lo(unsigned int did, int off[], u64 *tolm, u64 *tohm)
+ {
+@@ -286,6 +323,7 @@ int skx_get_hi_lo(unsigned int did, int off[], u64 *tolm, u64 *tohm)
+ 	pci_dev_put(pdev);
+ 	return -ENODEV;
+ }
++EXPORT_SYMBOL_GPL(skx_get_hi_lo);
+ 
+ static int skx_get_dimm_attr(u32 reg, int lobit, int hibit, int add,
+ 			     int minval, int maxval, const char *name)
+@@ -339,6 +377,7 @@ int skx_get_dimm_info(u32 mtr, u32 mcmtr, u32 amap, struct dimm_info *dimm,
+ 
+ 	return 1;
+ }
++EXPORT_SYMBOL_GPL(skx_get_dimm_info);
+ 
+ int skx_get_nvdimm_info(struct dimm_info *dimm, struct skx_imc *imc,
+ 			int chan, int dimmno, const char *mod_str)
+@@ -387,6 +426,7 @@ int skx_get_nvdimm_info(struct dimm_info *dimm, struct skx_imc *imc,
+ 
+ 	return (size == 0 || size == ~0ull) ? 0 : 1;
+ }
++EXPORT_SYMBOL_GPL(skx_get_nvdimm_info);
+ 
+ int skx_register_mci(struct skx_imc *imc, struct pci_dev *pdev,
+ 		     const char *ctl_name, const char *mod_str,
+@@ -454,6 +494,7 @@ int skx_register_mci(struct skx_imc *imc, struct pci_dev *pdev,
+ 	imc->mci = NULL;
+ 	return rc;
+ }
++EXPORT_SYMBOL_GPL(skx_register_mci);
+ 
+ static void skx_unregister_mci(struct skx_imc *imc)
+ {
+@@ -565,6 +606,21 @@ static void skx_mce_output_error(struct mem_ctl_info *mci,
+ 			     optype, skx_msg);
+ }
+ 
++static bool skx_error_in_1st_level_mem(const struct mce *m)
++{
++	u32 errcode;
++
++	if (!skx_mem_cfg_2lm)
++		return false;
++
++	errcode = GET_BITFIELD(m->status, 0, 15);
++
++	if ((errcode & 0xef80) != 0x280)
++		return false;
++
++	return true;
++}
++
+ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ 			void *data)
+ {
+@@ -584,7 +640,7 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ 	res.addr = mce->addr;
+ 
+ 	if (adxl_component_count) {
+-		if (!skx_adxl_decode(&res))
++		if (!skx_adxl_decode(&res, skx_error_in_1st_level_mem(mce)))
+ 			return NOTIFY_DONE;
+ 	} else if (!skx_decode || !skx_decode(&res)) {
+ 		return NOTIFY_DONE;
+@@ -618,6 +674,7 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ 	mce->kflags |= MCE_HANDLED_EDAC;
+ 	return NOTIFY_DONE;
+ }
++EXPORT_SYMBOL_GPL(skx_mce_check_error);
+ 
+ void skx_remove(void)
+ {
+@@ -653,3 +710,8 @@ void skx_remove(void)
+ 		kfree(d);
+ 	}
+ }
++EXPORT_SYMBOL_GPL(skx_remove);
++
++MODULE_LICENSE("GPL v2");
++MODULE_AUTHOR("Tony Luck");
++MODULE_DESCRIPTION("MC Driver for Intel server processors");
+diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h
+index 78f8c1de0b71c8..b93c33ac8e6075 100644
+--- a/drivers/edac/skx_common.h
++++ b/drivers/edac/skx_common.h
+@@ -9,6 +9,8 @@
+ #ifndef _SKX_COMM_EDAC_H
+ #define _SKX_COMM_EDAC_H
+ 
++#include <linux/bits.h>
++
+ #define MSG_SIZE		1024
+ 
+ /*
+@@ -90,9 +92,17 @@ enum {
+ 	INDEX_MEMCTRL,
+ 	INDEX_CHANNEL,
+ 	INDEX_DIMM,
++	INDEX_NM_FIRST,
++	INDEX_NM_MEMCTRL = INDEX_NM_FIRST,
++	INDEX_NM_CHANNEL,
++	INDEX_NM_DIMM,
+ 	INDEX_MAX
+ };
+ 
++#define BIT_NM_MEMCTRL	BIT_ULL(INDEX_NM_MEMCTRL)
++#define BIT_NM_CHANNEL	BIT_ULL(INDEX_NM_CHANNEL)
++#define BIT_NM_DIMM	BIT_ULL(INDEX_NM_DIMM)
++
+ struct decoded_addr {
+ 	struct skx_dev *dev;
+ 	u64	addr;
+@@ -124,9 +134,10 @@ typedef int (*get_dimm_config_f)(struct mem_ctl_info *mci);
+ typedef bool (*skx_decode_f)(struct decoded_addr *res);
+ typedef void (*skx_show_retry_log_f)(struct decoded_addr *res, char *msg, int len);
+ 
+-int __init skx_adxl_get(void);
+-void __exit skx_adxl_put(void);
++int skx_adxl_get(void);
++void skx_adxl_put(void);
+ void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log);
++void skx_set_mem_cfg(bool mem_cfg_2lm);
+ 
+ int skx_get_src_id(struct skx_dev *d, int off, u8 *id);
+ int skx_get_node_id(struct skx_dev *d, u8 *id);
+diff --git a/drivers/firmware/turris-mox-rwtm.c b/drivers/firmware/turris-mox-rwtm.c
+index 0bef988580ada1..dcb717254c79ed 100644
+--- a/drivers/firmware/turris-mox-rwtm.c
++++ b/drivers/firmware/turris-mox-rwtm.c
+@@ -199,9 +199,8 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2);
+-	if (ret < 0)
+-		return ret;
++	if (!wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2))
++		return -ETIMEDOUT;
+ 
+ 	ret = mox_get_status(MBOX_CMD_BOARD_INFO, reply->retval);
+ 	if (ret == -ENODATA) {
+@@ -235,9 +234,8 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2);
+-	if (ret < 0)
+-		return ret;
++	if (!wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2))
++		return -ETIMEDOUT;
+ 
+ 	ret = mox_get_status(MBOX_CMD_ECDSA_PUB_KEY, reply->retval);
+ 	if (ret == -ENODATA) {
+@@ -274,9 +272,8 @@ static int check_get_random_support(struct mox_rwtm *rwtm)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2);
+-	if (ret < 0)
+-		return ret;
++	if (!wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2))
++		return -ETIMEDOUT;
+ 
+ 	return mox_get_status(MBOX_CMD_GET_RANDOM, rwtm->reply.retval);
+ }
+@@ -499,6 +496,7 @@ static int turris_mox_rwtm_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, rwtm);
+ 
+ 	mutex_init(&rwtm->busy);
++	init_completion(&rwtm->cmd_done);
+ 
+ 	rwtm->mbox_client.dev = dev;
+ 	rwtm->mbox_client.rx_callback = mox_rwtm_rx_callback;
+@@ -512,8 +510,6 @@ static int turris_mox_rwtm_probe(struct platform_device *pdev)
+ 		goto remove_files;
+ 	}
+ 
+-	init_completion(&rwtm->cmd_done);
+-
+ 	ret = mox_get_board_info(rwtm);
+ 	if (ret < 0)
+ 		dev_warn(dev, "Cannot read board information: %i\n", ret);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index e971d2b9e3c00c..56f10679a26d15 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -1351,12 +1351,15 @@ static void amdgpu_ras_interrupt_process_handler(struct work_struct *work)
+ int amdgpu_ras_interrupt_dispatch(struct amdgpu_device *adev,
+ 		struct ras_dispatch_if *info)
+ {
+-	struct ras_manager *obj = amdgpu_ras_find_obj(adev, &info->head);
+-	struct ras_ih_data *data = &obj->ih_data;
++	struct ras_manager *obj;
++	struct ras_ih_data *data;
+ 
++	obj = amdgpu_ras_find_obj(adev, &info->head);
+ 	if (!obj)
+ 		return -EINVAL;
+ 
++	data = &obj->ih_data;
++
+ 	if (data->inuse == 0)
+ 		return 0;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_surface.c b/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
+index 3d7d27435f15eb..98391ba103d6c8 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
+@@ -154,7 +154,8 @@ const struct dc_plane_status *dc_plane_get_status(
+ 		if (pipe_ctx->plane_state != plane_state)
+ 			continue;
+ 
+-		pipe_ctx->plane_state->status.is_flip_pending = false;
++		if (pipe_ctx->plane_state)
++			pipe_ctx->plane_state->status.is_flip_pending = false;
+ 
+ 		break;
+ 	}
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+index 7931528bc864bc..5e72b7555edae9 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+@@ -2983,8 +2983,7 @@ static int smu7_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+ 			const struct pp_power_state *current_ps)
+ {
+ 	struct amdgpu_device *adev = hwmgr->adev;
+-	struct smu7_power_state *smu7_ps =
+-				cast_phw_smu7_power_state(&request_ps->hardware);
++	struct smu7_power_state *smu7_ps;
+ 	uint32_t sclk;
+ 	uint32_t mclk;
+ 	struct PP_Clocks minimum_clocks = {0};
+@@ -2998,6 +2997,10 @@ static int smu7_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+ 	int32_t count;
+ 	int32_t stable_pstate_sclk = 0, stable_pstate_mclk = 0;
+ 
++	smu7_ps = cast_phw_smu7_power_state(&request_ps->hardware);
++	if (!smu7_ps)
++		return -EINVAL;
++
+ 	data->battery_state = (PP_StateUILabel_Battery ==
+ 			request_ps->classification.ui_label);
+ 
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
+index 35ed47ebaf09d4..35d0ff57a59604 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
+@@ -1051,16 +1051,18 @@ static int smu8_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+ 				struct pp_power_state  *prequest_ps,
+ 			const struct pp_power_state *pcurrent_ps)
+ {
+-	struct smu8_power_state *smu8_ps =
+-				cast_smu8_power_state(&prequest_ps->hardware);
+-
+-	const struct smu8_power_state *smu8_current_ps =
+-				cast_const_smu8_power_state(&pcurrent_ps->hardware);
+-
++	struct smu8_power_state *smu8_ps;
++	const struct smu8_power_state *smu8_current_ps;
+ 	struct smu8_hwmgr *data = hwmgr->backend;
+ 	struct PP_Clocks clocks = {0, 0, 0, 0};
+ 	bool force_high;
+ 
++	smu8_ps = cast_smu8_power_state(&prequest_ps->hardware);
++	smu8_current_ps = cast_const_smu8_power_state(&pcurrent_ps->hardware);
++
++	if (!smu8_ps || !smu8_current_ps)
++		return -EINVAL;
++
+ 	smu8_ps->need_dfs_bypass = true;
+ 
+ 	data->battery_state = (PP_StateUILabel_Battery == prequest_ps->classification.ui_label);
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+index 4dc27ec4d012d1..10678b51999570 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+@@ -3232,8 +3232,7 @@ static int vega10_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+ 			const struct pp_power_state *current_ps)
+ {
+ 	struct amdgpu_device *adev = hwmgr->adev;
+-	struct vega10_power_state *vega10_ps =
+-				cast_phw_vega10_power_state(&request_ps->hardware);
++	struct vega10_power_state *vega10_ps;
+ 	uint32_t sclk;
+ 	uint32_t mclk;
+ 	struct PP_Clocks minimum_clocks = {0};
+@@ -3251,6 +3250,10 @@ static int vega10_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+ 	uint32_t stable_pstate_sclk = 0, stable_pstate_mclk = 0;
+ 	uint32_t latency;
+ 
++	vega10_ps = cast_phw_vega10_power_state(&request_ps->hardware);
++	if (!vega10_ps)
++		return -EINVAL;
++
+ 	data->battery_state = (PP_StateUILabel_Battery ==
+ 			request_ps->classification.ui_label);
+ 
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
+index cab3f5c4e2fc83..37c62794e213d8 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
+@@ -1115,7 +1115,6 @@ ssize_t analogix_dp_transfer(struct analogix_dp_device *dp,
+ 	u32 status_reg;
+ 	u8 *buffer = msg->buffer;
+ 	unsigned int i;
+-	int num_transferred = 0;
+ 	int ret;
+ 
+ 	/* Buffer size of AUX CH is 16 bytes */
+@@ -1167,7 +1166,6 @@ ssize_t analogix_dp_transfer(struct analogix_dp_device *dp,
+ 			reg = buffer[i];
+ 			writel(reg, dp->reg_base + ANALOGIX_DP_BUF_DATA_0 +
+ 			       4 * i);
+-			num_transferred++;
+ 		}
+ 	}
+ 
+@@ -1215,7 +1213,6 @@ ssize_t analogix_dp_transfer(struct analogix_dp_device *dp,
+ 			reg = readl(dp->reg_base + ANALOGIX_DP_BUF_DATA_0 +
+ 				    4 * i);
+ 			buffer[i] = (unsigned char)reg;
+-			num_transferred++;
+ 		}
+ 	}
+ 
+@@ -1232,7 +1229,7 @@ ssize_t analogix_dp_transfer(struct analogix_dp_device *dp,
+ 		 (msg->request & ~DP_AUX_I2C_MOT) == DP_AUX_NATIVE_READ)
+ 		msg->reply = DP_AUX_NATIVE_REPLY_ACK;
+ 
+-	return num_transferred > 0 ? num_transferred : -EBUSY;
++	return msg->size;
+ 
+ aux_error:
+ 	/* if aux err happen, reset aux */
+diff --git a/drivers/gpu/drm/drm_client_modeset.c b/drivers/gpu/drm/drm_client_modeset.c
+index 7872a04e9a7210..3884fab0111066 100644
+--- a/drivers/gpu/drm/drm_client_modeset.c
++++ b/drivers/gpu/drm/drm_client_modeset.c
+@@ -866,6 +866,11 @@ int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width,
+ 
+ 			kfree(modeset->mode);
+ 			modeset->mode = drm_mode_duplicate(dev, mode);
++			if (!modeset->mode) {
++				ret = -ENOMEM;
++				break;
++			}
++
+ 			drm_connector_get(connector);
+ 			modeset->connectors[modeset->num_connectors++] = connector;
+ 			modeset->x = offset->x;
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+index 424474041c9433..aa372982335e9f 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+@@ -364,9 +364,11 @@ static void *etnaviv_gem_vmap_impl(struct etnaviv_gem_object *obj)
+ 
+ static inline enum dma_data_direction etnaviv_op_to_dma_dir(u32 op)
+ {
+-	if (op & ETNA_PREP_READ)
++	op &= ETNA_PREP_READ | ETNA_PREP_WRITE;
++
++	if (op == ETNA_PREP_READ)
+ 		return DMA_FROM_DEVICE;
+-	else if (op & ETNA_PREP_WRITE)
++	else if (op == ETNA_PREP_WRITE)
+ 		return DMA_TO_DEVICE;
+ 	else
+ 		return DMA_BIDIRECTIONAL;
+diff --git a/drivers/gpu/drm/gma500/cdv_intel_lvds.c b/drivers/gpu/drm/gma500/cdv_intel_lvds.c
+index eaaf4efec21765..b13c34fa29eda5 100644
+--- a/drivers/gpu/drm/gma500/cdv_intel_lvds.c
++++ b/drivers/gpu/drm/gma500/cdv_intel_lvds.c
+@@ -310,6 +310,9 @@ static int cdv_intel_lvds_get_modes(struct drm_connector *connector)
+ 	if (mode_dev->panel_fixed_mode != NULL) {
+ 		struct drm_display_mode *mode =
+ 		    drm_mode_duplicate(dev, mode_dev->panel_fixed_mode);
++		if (!mode)
++			return 0;
++
+ 		drm_mode_probed_add(connector, mode);
+ 		return 1;
+ 	}
+diff --git a/drivers/gpu/drm/gma500/psb_intel_lvds.c b/drivers/gpu/drm/gma500/psb_intel_lvds.c
+index 063c66bb946d0c..889964efe62d52 100644
+--- a/drivers/gpu/drm/gma500/psb_intel_lvds.c
++++ b/drivers/gpu/drm/gma500/psb_intel_lvds.c
+@@ -508,6 +508,9 @@ static int psb_intel_lvds_get_modes(struct drm_connector *connector)
+ 	if (mode_dev->panel_fixed_mode != NULL) {
+ 		struct drm_display_mode *mode =
+ 		    drm_mode_duplicate(dev, mode_dev->panel_fixed_mode);
++		if (!mode)
++			return 0;
++
+ 		drm_mode_probed_add(connector, mode);
+ 		return 1;
+ 	}
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+index 01a88b03bc6d38..0021b039b72b5e 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+@@ -273,6 +273,41 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
+ 	return i915_error_to_vmf_fault(err);
+ }
+ 
++static void set_address_limits(struct vm_area_struct *area,
++			       struct i915_vma *vma,
++			       unsigned long obj_offset,
++			       unsigned long *start_vaddr,
++			       unsigned long *end_vaddr)
++{
++	unsigned long vm_start, vm_end, vma_size; /* user's memory parameters */
++	long start, end; /* memory boundaries */
++
++	/*
++	 * Let's move into the ">> PAGE_SHIFT"
++	 * domain to be sure not to lose bits
++	 */
++	vm_start = area->vm_start >> PAGE_SHIFT;
++	vm_end = area->vm_end >> PAGE_SHIFT;
++	vma_size = vma->size >> PAGE_SHIFT;
++
++	/*
++	 * Calculate the memory boundaries by considering the offset
++	 * provided by the user during memory mapping and the offset
++	 * provided for the partial mapping.
++	 */
++	start = vm_start;
++	start -= obj_offset;
++	start += vma->ggtt_view.partial.offset;
++	end = start + vma_size;
++
++	start = max_t(long, start, vm_start);
++	end = min_t(long, end, vm_end);
++
++	/* Let's move back into the "<< PAGE_SHIFT" domain */
++	*start_vaddr = (unsigned long)start << PAGE_SHIFT;
++	*end_vaddr = (unsigned long)end << PAGE_SHIFT;
++}
++
+ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
+ {
+ #define MIN_CHUNK_PAGES (SZ_1M >> PAGE_SHIFT)
+@@ -285,14 +320,18 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
+ 	struct i915_ggtt *ggtt = &i915->ggtt;
+ 	bool write = area->vm_flags & VM_WRITE;
+ 	struct i915_gem_ww_ctx ww;
++	unsigned long obj_offset;
++	unsigned long start, end; /* memory boundaries */
+ 	intel_wakeref_t wakeref;
+ 	struct i915_vma *vma;
+ 	pgoff_t page_offset;
++	unsigned long pfn;
+ 	int srcu;
+ 	int ret;
+ 
+-	/* We don't use vmf->pgoff since that has the fake offset */
++	obj_offset = area->vm_pgoff - drm_vma_node_start(&mmo->vma_node);
+ 	page_offset = (vmf->address - area->vm_start) >> PAGE_SHIFT;
++	page_offset += obj_offset;
+ 
+ 	trace_i915_gem_object_fault(obj, page_offset, true, write);
+ 
+@@ -363,12 +402,14 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
+ 	if (ret)
+ 		goto err_unpin;
+ 
++	set_address_limits(area, vma, obj_offset, &start, &end);
++
++	pfn = (ggtt->gmadr.start + i915_ggtt_offset(vma)) >> PAGE_SHIFT;
++	pfn += (start - area->vm_start) >> PAGE_SHIFT;
++	pfn += obj_offset - vma->ggtt_view.partial.offset;
++
+ 	/* Finally, remap it using the new GTT offset */
+-	ret = remap_io_mapping(area,
+-			       area->vm_start + (vma->ggtt_view.partial.offset << PAGE_SHIFT),
+-			       (ggtt->gmadr.start + vma->node.start) >> PAGE_SHIFT,
+-			       min_t(u64, vma->size, area->vm_end - area->vm_start),
+-			       &ggtt->iomap);
++	ret = remap_io_mapping(area, start, pfn, end - start, &ggtt->iomap);
+ 	if (ret)
+ 		goto err_fence;
+ 
+diff --git a/drivers/gpu/drm/mgag200/mgag200_i2c.c b/drivers/gpu/drm/mgag200/mgag200_i2c.c
+index 09731e614e46d7..40cda2468332ef 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_i2c.c
++++ b/drivers/gpu/drm/mgag200/mgag200_i2c.c
+@@ -134,7 +134,7 @@ struct mga_i2c_chan *mgag200_i2c_create(struct drm_device *dev)
+ 	i2c->adapter.algo_data = &i2c->bit;
+ 
+ 	i2c->bit.udelay = 10;
+-	i2c->bit.timeout = 2;
++	i2c->bit.timeout = usecs_to_jiffies(2200);
+ 	i2c->bit.data = i2c;
+ 	i2c->bit.setsda		= mga_gpio_setsda;
+ 	i2c->bit.setscl		= mga_gpio_setscl;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
+index f08bda533bd943..65874a48dc8077 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
++++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
+@@ -81,7 +81,8 @@ struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
+ 	 * to the caller, instead of a normal nouveau_bo ttm reference. */
+ 	ret = drm_gem_object_init(dev, &nvbo->bo.base, size);
+ 	if (ret) {
+-		nouveau_bo_ref(NULL, &nvbo);
++		drm_gem_object_release(&nvbo->bo.base);
++		kfree(nvbo);
+ 		obj = ERR_PTR(-ENOMEM);
+ 		goto unlock;
+ 	}
+diff --git a/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c b/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
+index 9e518213a54ff9..0a2d5f461aee8e 100644
+--- a/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
++++ b/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
+@@ -553,7 +553,11 @@ static int boe_panel_prepare(struct drm_panel *panel)
+ 	usleep_range(5000, 10000);
+ 
+ 	if (boe->desc->lp11_before_reset) {
+-		mipi_dsi_dcs_nop(boe->dsi);
++		ret = mipi_dsi_dcs_nop(boe->dsi);
++		if (ret < 0) {
++			dev_err(&boe->dsi->dev, "Failed to send NOP: %d\n", ret);
++			goto poweroff;
++		}
+ 		usleep_range(1000, 2000);
+ 	}
+ 	gpiod_set_value(boe->enable_gpio, 1);
+@@ -574,13 +578,13 @@ static int boe_panel_prepare(struct drm_panel *panel)
+ 	return 0;
+ 
+ poweroff:
++	gpiod_set_value(boe->enable_gpio, 0);
+ 	regulator_disable(boe->avee);
+ poweroffavdd:
+ 	regulator_disable(boe->avdd);
+ poweroff1v8:
+ 	usleep_range(5000, 7000);
+ 	regulator_disable(boe->pp1800);
+-	gpiod_set_value(boe->enable_gpio, 0);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index 4af25c0b6570f0..ba098b4dee2e55 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -681,3 +681,4 @@ module_platform_driver(panfrost_driver);
+ MODULE_AUTHOR("Panfrost Project Developers");
+ MODULE_DESCRIPTION("Panfrost DRM Driver");
+ MODULE_LICENSE("GPL v2");
++MODULE_SOFTDEP("pre: governor_simpleondemand");
+diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
+index f22a1b776f4baf..3ffa721d777bf3 100644
+--- a/drivers/gpu/drm/qxl/qxl_display.c
++++ b/drivers/gpu/drm/qxl/qxl_display.c
+@@ -232,6 +232,9 @@ static int qxl_add_mode(struct drm_connector *connector,
+ 		return 0;
+ 
+ 	mode = drm_cvt_mode(dev, width, height, 60, false, false, false);
++	if (!mode)
++		return 0;
++
+ 	if (preferred)
+ 		mode->type |= DRM_MODE_TYPE_PREFERRED;
+ 	mode->hdisplay = width;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c b/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
+index cd7ed1650d60c8..7aa1d9218ea6b9 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
+@@ -98,7 +98,7 @@ static int vmw_overlay_send_put(struct vmw_private *dev_priv,
+ {
+ 	struct vmw_escape_video_flush *flush;
+ 	size_t fifo_size;
+-	bool have_so = (dev_priv->active_display_unit == vmw_du_screen_object);
++	bool have_so = (dev_priv->active_display_unit != vmw_du_legacy);
+ 	int i, num_items;
+ 	SVGAGuestPtr ptr;
+ 
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index c454768ffb4907..f6ee287a892c66 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -714,13 +714,12 @@ static int wacom_intuos_get_tool_type(int tool_id)
+ 	case 0x8e2: /* IntuosHT2 pen */
+ 	case 0x022:
+ 	case 0x200: /* Pro Pen 3 */
+-	case 0x04200: /* Pro Pen 3 */
+ 	case 0x10842: /* MobileStudio Pro Pro Pen slim */
+ 	case 0x14802: /* Intuos4/5 13HD/24HD Classic Pen */
+ 	case 0x16802: /* Cintiq 13HD Pro Pen */
+ 	case 0x18802: /* DTH2242 Pen */
+ 	case 0x10802: /* Intuos4/5 13HD/24HD General Pen */
+-	case 0x80842: /* Intuos Pro and Cintiq Pro 3D Pen */
++	case 0x8842: /* Intuos Pro and Cintiq Pro 3D Pen */
+ 		tool_type = BTN_TOOL_PEN;
+ 		break;
+ 
+diff --git a/drivers/hwmon/adt7475.c b/drivers/hwmon/adt7475.c
+index 22e314725def0b..b4c0f01f52c4ff 100644
+--- a/drivers/hwmon/adt7475.c
++++ b/drivers/hwmon/adt7475.c
+@@ -1770,7 +1770,7 @@ static void adt7475_read_pwm(struct i2c_client *client, int index)
+ 		data->pwm[CONTROL][index] &= ~0xE0;
+ 		data->pwm[CONTROL][index] |= (7 << 5);
+ 
+-		i2c_smbus_write_byte_data(client, PWM_CONFIG_REG(index),
++		i2c_smbus_write_byte_data(client, PWM_REG(index),
+ 					  data->pwm[INPUT][index]);
+ 
+ 		i2c_smbus_write_byte_data(client, PWM_CONFIG_REG(index),
+diff --git a/drivers/hwmon/max6697.c b/drivers/hwmon/max6697.c
+index fc3241101178de..7508d67c3dfb0b 100644
+--- a/drivers/hwmon/max6697.c
++++ b/drivers/hwmon/max6697.c
+@@ -312,6 +312,7 @@ static ssize_t temp_store(struct device *dev,
+ 		return ret;
+ 
+ 	mutex_lock(&data->update_lock);
++	temp = clamp_val(temp, -1000000, 1000000);	/* prevent underflow */
+ 	temp = DIV_ROUND_CLOSEST(temp, 1000) + data->temp_offset;
+ 	temp = clamp_val(temp, 0, data->type == max6581 ? 255 : 127);
+ 	data->temp[nr][index] = temp;
+@@ -429,14 +430,14 @@ static SENSOR_DEVICE_ATTR_RO(temp6_max_alarm, alarm, 20);
+ static SENSOR_DEVICE_ATTR_RO(temp7_max_alarm, alarm, 21);
+ static SENSOR_DEVICE_ATTR_RO(temp8_max_alarm, alarm, 23);
+ 
+-static SENSOR_DEVICE_ATTR_RO(temp1_crit_alarm, alarm, 14);
++static SENSOR_DEVICE_ATTR_RO(temp1_crit_alarm, alarm, 15);
+ static SENSOR_DEVICE_ATTR_RO(temp2_crit_alarm, alarm, 8);
+ static SENSOR_DEVICE_ATTR_RO(temp3_crit_alarm, alarm, 9);
+ static SENSOR_DEVICE_ATTR_RO(temp4_crit_alarm, alarm, 10);
+ static SENSOR_DEVICE_ATTR_RO(temp5_crit_alarm, alarm, 11);
+ static SENSOR_DEVICE_ATTR_RO(temp6_crit_alarm, alarm, 12);
+ static SENSOR_DEVICE_ATTR_RO(temp7_crit_alarm, alarm, 13);
+-static SENSOR_DEVICE_ATTR_RO(temp8_crit_alarm, alarm, 15);
++static SENSOR_DEVICE_ATTR_RO(temp8_crit_alarm, alarm, 14);
+ 
+ static SENSOR_DEVICE_ATTR_RO(temp2_fault, alarm, 1);
+ static SENSOR_DEVICE_ATTR_RO(temp3_fault, alarm, 2);
+diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c
+index c594f45319fc55..dfb8022c4da1af 100644
+--- a/drivers/hwtracing/coresight/coresight-platform.c
++++ b/drivers/hwtracing/coresight/coresight-platform.c
+@@ -323,8 +323,10 @@ static int of_get_coresight_platform_data(struct device *dev,
+ 			continue;
+ 
+ 		ret = of_coresight_parse_endpoint(dev, ep, pdata);
+-		if (ret)
++		if (ret) {
++			of_node_put(ep);
+ 			return ret;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/i2c/i2c-smbus.c b/drivers/i2c/i2c-smbus.c
+index d3d06e3b4f3b31..44582cf29e1622 100644
+--- a/drivers/i2c/i2c-smbus.c
++++ b/drivers/i2c/i2c-smbus.c
+@@ -34,6 +34,7 @@ static int smbus_do_alert(struct device *dev, void *addrp)
+ 	struct i2c_client *client = i2c_verify_client(dev);
+ 	struct alert_data *data = addrp;
+ 	struct i2c_driver *driver;
++	int ret;
+ 
+ 	if (!client || client->addr != data->addr)
+ 		return 0;
+@@ -47,16 +48,47 @@ static int smbus_do_alert(struct device *dev, void *addrp)
+ 	device_lock(dev);
+ 	if (client->dev.driver) {
+ 		driver = to_i2c_driver(client->dev.driver);
+-		if (driver->alert)
++		if (driver->alert) {
++			/* Stop iterating after we find the device */
+ 			driver->alert(client, data->type, data->data);
+-		else
++			ret = -EBUSY;
++		} else {
+ 			dev_warn(&client->dev, "no driver alert()!\n");
+-	} else
++			ret = -EOPNOTSUPP;
++		}
++	} else {
+ 		dev_dbg(&client->dev, "alert with no driver\n");
++		ret = -ENODEV;
++	}
++	device_unlock(dev);
++
++	return ret;
++}
++
++/* Same as above, but call back all drivers with alert handler */
++
++static int smbus_do_alert_force(struct device *dev, void *addrp)
++{
++	struct i2c_client *client = i2c_verify_client(dev);
++	struct alert_data *data = addrp;
++	struct i2c_driver *driver;
++
++	if (!client || (client->flags & I2C_CLIENT_TEN))
++		return 0;
++
++	/*
++	 * Drivers should either disable alerts, or provide at least
++	 * a minimal handler. Lock so the driver won't change.
++	 */
++	device_lock(dev);
++	if (client->dev.driver) {
++		driver = to_i2c_driver(client->dev.driver);
++		if (driver->alert)
++			driver->alert(client, data->type, data->data);
++	}
+ 	device_unlock(dev);
+ 
+-	/* Stop iterating after we find the device */
+-	return -EBUSY;
++	return 0;
+ }
+ 
+ /*
+@@ -67,6 +99,7 @@ static irqreturn_t smbus_alert(int irq, void *d)
+ {
+ 	struct i2c_smbus_alert *alert = d;
+ 	struct i2c_client *ara;
++	unsigned short prev_addr = I2C_CLIENT_END; /* Not a valid address */
+ 
+ 	ara = alert->ara;
+ 
+@@ -94,8 +127,25 @@ static irqreturn_t smbus_alert(int irq, void *d)
+ 			data.addr, data.data);
+ 
+ 		/* Notify driver for the device which issued the alert */
+-		device_for_each_child(&ara->adapter->dev, &data,
+-				      smbus_do_alert);
++		status = device_for_each_child(&ara->adapter->dev, &data,
++					       smbus_do_alert);
++		/*
++		 * If we read the same address more than once, and the alert
++		 * was not handled by a driver, it won't do any good to repeat
++		 * the loop because it will never terminate. Try again, this
++		 * time calling the alert handlers of all devices connected to
++		 * the bus, and abort the loop afterwards. If this helps, we
++		 * are all set. If it doesn't, there is nothing else we can do,
++		 * so we might as well abort the loop.
++		 * Note: This assumes that a driver with alert handler handles
++		 * the alert properly and clears it if necessary.
++		 */
++		if (data.addr == prev_addr && status != -EBUSY) {
++			device_for_each_child(&ara->adapter->dev, &data,
++					      smbus_do_alert_force);
++			break;
++		}
++		prev_addr = data.addr;
+ 	}
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index 94c3bad72cc59b..3848fe0449a4c4 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -2106,6 +2106,9 @@ int ib_device_set_netdev(struct ib_device *ib_dev, struct net_device *ndev,
+ 	unsigned long flags;
+ 	int ret;
+ 
++	if (!rdma_is_port_valid(ib_dev, port))
++		return -EINVAL;
++
+ 	/*
+ 	 * Drivers wish to call this before ib_register_driver, so we have to
+ 	 * setup the port data early.
+@@ -2114,9 +2117,6 @@ int ib_device_set_netdev(struct ib_device *ib_dev, struct net_device *ndev,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (!rdma_is_port_valid(ib_dev, port))
+-		return -EINVAL;
+-
+ 	pdata = &ib_dev->port_data[port];
+ 	spin_lock_irqsave(&pdata->netdev_lock, flags);
+ 	old_ndev = rcu_dereference_protected(
+diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c
+index 75b6da00065a34..7a6747850aea81 100644
+--- a/drivers/infiniband/core/iwcm.c
++++ b/drivers/infiniband/core/iwcm.c
+@@ -370,8 +370,10 @@ EXPORT_SYMBOL(iw_cm_disconnect);
+  *
+  * Clean up all resources associated with the connection and release
+  * the initial reference taken by iw_create_cm_id.
++ *
++ * Returns true if and only if the last cm_id_priv reference has been dropped.
+  */
+-static void destroy_cm_id(struct iw_cm_id *cm_id)
++static bool destroy_cm_id(struct iw_cm_id *cm_id)
+ {
+ 	struct iwcm_id_private *cm_id_priv;
+ 	struct ib_qp *qp;
+@@ -441,7 +443,7 @@ static void destroy_cm_id(struct iw_cm_id *cm_id)
+ 		iwpm_remove_mapping(&cm_id->local_addr, RDMA_NL_IWCM);
+ 	}
+ 
+-	(void)iwcm_deref_id(cm_id_priv);
++	return iwcm_deref_id(cm_id_priv);
+ }
+ 
+ /*
+@@ -452,7 +454,8 @@ static void destroy_cm_id(struct iw_cm_id *cm_id)
+  */
+ void iw_destroy_cm_id(struct iw_cm_id *cm_id)
+ {
+-	destroy_cm_id(cm_id);
++	if (!destroy_cm_id(cm_id))
++		flush_workqueue(iwcm_wq);
+ }
+ EXPORT_SYMBOL(iw_destroy_cm_id);
+ 
+@@ -1036,7 +1039,7 @@ static void cm_work_handler(struct work_struct *_work)
+ 		if (!test_bit(IWCM_F_DROP_EVENTS, &cm_id_priv->flags)) {
+ 			ret = process_event(cm_id_priv, &levent);
+ 			if (ret)
+-				destroy_cm_id(&cm_id_priv->id);
++				WARN_ON_ONCE(destroy_cm_id(&cm_id_priv->id));
+ 		} else
+ 			pr_debug("dropping event %d\n", levent.event);
+ 		if (iwcm_deref_id(cm_id_priv))
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index a0d7777acb6d4c..f16e0b2c7895e7 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -2357,7 +2357,7 @@ static int bnxt_re_build_send_wqe(struct bnxt_re_qp *qp,
+ 		break;
+ 	case IB_WR_SEND_WITH_IMM:
+ 		wqe->type = BNXT_QPLIB_SWQE_TYPE_SEND_WITH_IMM;
+-		wqe->send.imm_data = wr->ex.imm_data;
++		wqe->send.imm_data = be32_to_cpu(wr->ex.imm_data);
+ 		break;
+ 	case IB_WR_SEND_WITH_INV:
+ 		wqe->type = BNXT_QPLIB_SWQE_TYPE_SEND_WITH_INV;
+@@ -2387,7 +2387,7 @@ static int bnxt_re_build_rdma_wqe(const struct ib_send_wr *wr,
+ 		break;
+ 	case IB_WR_RDMA_WRITE_WITH_IMM:
+ 		wqe->type = BNXT_QPLIB_SWQE_TYPE_RDMA_WRITE_WITH_IMM;
+-		wqe->rdma.imm_data = wr->ex.imm_data;
++		wqe->rdma.imm_data = be32_to_cpu(wr->ex.imm_data);
+ 		break;
+ 	case IB_WR_RDMA_READ:
+ 		wqe->type = BNXT_QPLIB_SWQE_TYPE_RDMA_READ;
+@@ -3334,7 +3334,7 @@ static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *gsi_sqp,
+ 	wc->byte_len = orig_cqe->length;
+ 	wc->qp = &gsi_qp->ib_qp;
+ 
+-	wc->ex.imm_data = orig_cqe->immdata;
++	wc->ex.imm_data = cpu_to_be32(le32_to_cpu(orig_cqe->immdata));
+ 	wc->src_qp = orig_cqe->src_qp;
+ 	memcpy(wc->smac, orig_cqe->smac, ETH_ALEN);
+ 	if (bnxt_re_is_vlan_pkt(orig_cqe, &vlan_id, &sl)) {
+@@ -3474,7 +3474,7 @@ int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
+ 				continue;
+ 			}
+ 			wc->qp = &qp->ib_qp;
+-			wc->ex.imm_data = cqe->immdata;
++			wc->ex.imm_data = cpu_to_be32(le32_to_cpu(cqe->immdata));
+ 			wc->src_qp = cqe->src_qp;
+ 			memcpy(wc->smac, cqe->smac, ETH_ALEN);
+ 			wc->port_num = 1;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index 667f93d90045e1..f112f013df7d97 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -162,7 +162,7 @@ struct bnxt_qplib_swqe {
+ 		/* Send, with imm, inval key */
+ 		struct {
+ 			union {
+-				__be32	imm_data;
++				u32	imm_data;
+ 				u32	inv_key;
+ 			};
+ 			u32		q_key;
+@@ -180,7 +180,7 @@ struct bnxt_qplib_swqe {
+ 		/* RDMA write, with imm, read */
+ 		struct {
+ 			union {
+-				__be32	imm_data;
++				u32	imm_data;
+ 				u32	inv_key;
+ 			};
+ 			u64		remote_va;
+@@ -372,7 +372,7 @@ struct bnxt_qplib_cqe {
+ 	u16				cfa_meta;
+ 	u64				wr_id;
+ 	union {
+-		__be32			immdata;
++		__le32			immdata;
+ 		u32			invrkey;
+ 	};
+ 	u64				qp_handle;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index fe54e09eeccdd4..72a41f34ecfd54 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -98,6 +98,7 @@
+ #define MR_TYPE_DMA				0x03
+ 
+ #define HNS_ROCE_FRMR_MAX_PA			512
++#define HNS_ROCE_FRMR_ALIGN_SIZE		128
+ 
+ #define PKEY_ID					0xffff
+ #define GUID_LEN				8
+@@ -267,6 +268,9 @@ enum {
+ #define HNS_HW_PAGE_SHIFT			12
+ #define HNS_HW_PAGE_SIZE			(1 << HNS_HW_PAGE_SHIFT)
+ 
++#define HNS_HW_MAX_PAGE_SHIFT			27
++#define HNS_HW_MAX_PAGE_SIZE			(1 << HNS_HW_MAX_PAGE_SHIFT)
++
+ struct hns_roce_uar {
+ 	u64		pfn;
+ 	unsigned long	index;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index c038ed7d94962a..7e93c9b4a33f16 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -486,6 +486,11 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ 	struct hns_roce_mtr *mtr = &mr->pbl_mtr;
+ 	int ret, sg_num = 0;
+ 
++	if (!IS_ALIGNED(*sg_offset, HNS_ROCE_FRMR_ALIGN_SIZE) ||
++	    ibmr->page_size < HNS_HW_PAGE_SIZE ||
++	    ibmr->page_size > HNS_HW_MAX_PAGE_SIZE)
++		return sg_num;
++
+ 	mr->npages = 0;
+ 	mr->page_list = kvcalloc(mr->pbl_mtr.hem_cfg.buf_pg_count,
+ 				 sizeof(dma_addr_t), GFP_KERNEL);
+diff --git a/drivers/infiniband/hw/mlx4/alias_GUID.c b/drivers/infiniband/hw/mlx4/alias_GUID.c
+index cca414ecfcd5a0..05420e189ca902 100644
+--- a/drivers/infiniband/hw/mlx4/alias_GUID.c
++++ b/drivers/infiniband/hw/mlx4/alias_GUID.c
+@@ -832,7 +832,7 @@ void mlx4_ib_destroy_alias_guid_service(struct mlx4_ib_dev *dev)
+ 
+ int mlx4_ib_init_alias_guid_service(struct mlx4_ib_dev *dev)
+ {
+-	char alias_wq_name[15];
++	char alias_wq_name[22];
+ 	int ret = 0;
+ 	int i, j;
+ 	union ib_gid gid;
+diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
+index 8bd16474708fbc..44ea25e8254900 100644
+--- a/drivers/infiniband/hw/mlx4/mad.c
++++ b/drivers/infiniband/hw/mlx4/mad.c
+@@ -2155,7 +2155,7 @@ static int mlx4_ib_alloc_demux_ctx(struct mlx4_ib_dev *dev,
+ 				       struct mlx4_ib_demux_ctx *ctx,
+ 				       int port)
+ {
+-	char name[12];
++	char name[21];
+ 	int ret = 0;
+ 	int i;
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 69238f856c677a..a034aa8fc5ca5e 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -362,7 +362,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
+ 	int			solicited;
+ 	u16			pkey;
+ 	u32			qp_num;
+-	int			ack_req;
++	int			ack_req = 0;
+ 
+ 	/* length from start of bth to end of icrc */
+ 	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
+@@ -396,8 +396,9 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
+ 	qp_num = (pkt->mask & RXE_DETH_MASK) ? ibwr->wr.ud.remote_qpn :
+ 					 qp->attr.dest_qp_num;
+ 
+-	ack_req = ((pkt->mask & RXE_END_MASK) ||
+-		(qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK));
++	if (qp_type(qp) != IB_QPT_UD && qp_type(qp) != IB_QPT_UC)
++		ack_req = ((pkt->mask & RXE_END_MASK) ||
++			   (qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK));
+ 	if (ack_req)
+ 		qp->req.noack_pkts = 0;
+ 
+diff --git a/drivers/input/keyboard/qt1050.c b/drivers/input/keyboard/qt1050.c
+index 403060d05c3b34..7193a4198e214f 100644
+--- a/drivers/input/keyboard/qt1050.c
++++ b/drivers/input/keyboard/qt1050.c
+@@ -226,7 +226,12 @@ static bool qt1050_identify(struct qt1050_priv *ts)
+ 	int err;
+ 
+ 	/* Read Chip ID */
+-	regmap_read(ts->regmap, QT1050_CHIP_ID, &val);
++	err = regmap_read(ts->regmap, QT1050_CHIP_ID, &val);
++	if (err) {
++		dev_err(&ts->client->dev, "Failed to read chip ID: %d\n", err);
++		return false;
++	}
++
+ 	if (val != QT1050_CHIP_ID_VER) {
+ 		dev_err(&ts->client->dev, "ID %d not supported\n", val);
+ 		return false;
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 6f59c8b245f240..14c2c66414f4e3 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -1340,6 +1340,8 @@ static int __maybe_unused elan_suspend(struct device *dev)
+ 	}
+ 
+ err:
++	if (ret)
++		enable_irq(client->irq);
+ 	mutex_unlock(&data->sysfs_mutex);
+ 	return ret;
+ }
+diff --git a/drivers/irqchip/irq-imx-irqsteer.c b/drivers/irqchip/irq-imx-irqsteer.c
+index 1edf7692a790bd..4bdcefa44f11e2 100644
+--- a/drivers/irqchip/irq-imx-irqsteer.c
++++ b/drivers/irqchip/irq-imx-irqsteer.c
+@@ -12,6 +12,7 @@
+ #include <linux/kernel.h>
+ #include <linux/of_irq.h>
+ #include <linux/of_platform.h>
++#include <linux/pm_runtime.h>
+ #include <linux/spinlock.h>
+ 
+ #define CTRL_STRIDE_OFF(_t, _r)	(_t * 4 * _r)
+@@ -34,6 +35,7 @@ struct irqsteer_data {
+ 	int			channel;
+ 	struct irq_domain	*domain;
+ 	u32			*saved_reg;
++	struct device		*dev;
+ };
+ 
+ static int imx_irqsteer_get_reg_index(struct irqsteer_data *data,
+@@ -70,10 +72,26 @@ static void imx_irqsteer_irq_mask(struct irq_data *d)
+ 	raw_spin_unlock_irqrestore(&data->lock, flags);
+ }
+ 
+-static struct irq_chip imx_irqsteer_irq_chip = {
+-	.name		= "irqsteer",
+-	.irq_mask	= imx_irqsteer_irq_mask,
+-	.irq_unmask	= imx_irqsteer_irq_unmask,
++static void imx_irqsteer_irq_bus_lock(struct irq_data *d)
++{
++	struct irqsteer_data *data = d->chip_data;
++
++	pm_runtime_get_sync(data->dev);
++}
++
++static void imx_irqsteer_irq_bus_sync_unlock(struct irq_data *d)
++{
++	struct irqsteer_data *data = d->chip_data;
++
++	pm_runtime_put_autosuspend(data->dev);
++}
++
++static const struct irq_chip imx_irqsteer_irq_chip = {
++	.name			= "irqsteer",
++	.irq_mask		= imx_irqsteer_irq_mask,
++	.irq_unmask		= imx_irqsteer_irq_unmask,
++	.irq_bus_lock		= imx_irqsteer_irq_bus_lock,
++	.irq_bus_sync_unlock	= imx_irqsteer_irq_bus_sync_unlock,
+ };
+ 
+ static int imx_irqsteer_irq_map(struct irq_domain *h, unsigned int irq,
+@@ -151,6 +169,7 @@ static int imx_irqsteer_probe(struct platform_device *pdev)
+ 	if (!data)
+ 		return -ENOMEM;
+ 
++	data->dev = &pdev->dev;
+ 	data->regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(data->regs)) {
+ 		dev_err(&pdev->dev, "failed to initialize reg\n");
+@@ -178,7 +197,7 @@ static int imx_irqsteer_probe(struct platform_device *pdev)
+ 	data->irq_count = DIV_ROUND_UP(irqs_num, 64);
+ 	data->reg_num = irqs_num / 32;
+ 
+-	if (IS_ENABLED(CONFIG_PM_SLEEP)) {
++	if (IS_ENABLED(CONFIG_PM)) {
+ 		data->saved_reg = devm_kzalloc(&pdev->dev,
+ 					sizeof(u32) * data->reg_num,
+ 					GFP_KERNEL);
+@@ -202,6 +221,7 @@ static int imx_irqsteer_probe(struct platform_device *pdev)
+ 		ret = -ENOMEM;
+ 		goto out;
+ 	}
++	irq_domain_set_pm_device(data->domain, &pdev->dev);
+ 
+ 	if (!data->irq_count || data->irq_count > CHAN_MAX_OUTPUT_INT) {
+ 		ret = -EINVAL;
+@@ -222,6 +242,9 @@ static int imx_irqsteer_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, data);
+ 
++	pm_runtime_set_active(&pdev->dev);
++	pm_runtime_enable(&pdev->dev);
++
+ 	return 0;
+ out:
+ 	clk_disable_unprepare(data->ipg_clk);
+@@ -244,7 +267,7 @@ static int imx_irqsteer_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-#ifdef CONFIG_PM_SLEEP
++#ifdef CONFIG_PM
+ static void imx_irqsteer_save_regs(struct irqsteer_data *data)
+ {
+ 	int i;
+@@ -291,7 +314,10 @@ static int imx_irqsteer_resume(struct device *dev)
+ #endif
+ 
+ static const struct dev_pm_ops imx_irqsteer_pm_ops = {
+-	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx_irqsteer_suspend, imx_irqsteer_resume)
++	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
++				      pm_runtime_force_resume)
++	SET_RUNTIME_PM_OPS(imx_irqsteer_suspend,
++			   imx_irqsteer_resume, NULL)
+ };
+ 
+ static const struct of_device_id imx_irqsteer_dt_ids[] = {
+diff --git a/drivers/irqchip/irq-mbigen.c b/drivers/irqchip/irq-mbigen.c
+index ff7627b577726e..192950e9909b9d 100644
+--- a/drivers/irqchip/irq-mbigen.c
++++ b/drivers/irqchip/irq-mbigen.c
+@@ -64,6 +64,20 @@ struct mbigen_device {
+ 	void __iomem		*base;
+ };
+ 
++static inline unsigned int get_mbigen_node_offset(unsigned int nid)
++{
++	unsigned int offset = nid * MBIGEN_NODE_OFFSET;
++
++	/*
++	 * To avoid touched clear register in unexpected way, we need to directly
++	 * skip clear register when access to more than 10 mbigen nodes.
++	 */
++	if (nid >= (REG_MBIGEN_CLEAR_OFFSET / MBIGEN_NODE_OFFSET))
++		offset += MBIGEN_NODE_OFFSET;
++
++	return offset;
++}
++
+ static inline unsigned int get_mbigen_vec_reg(irq_hw_number_t hwirq)
+ {
+ 	unsigned int nid, pin;
+@@ -72,8 +86,7 @@ static inline unsigned int get_mbigen_vec_reg(irq_hw_number_t hwirq)
+ 	nid = hwirq / IRQS_PER_MBIGEN_NODE + 1;
+ 	pin = hwirq % IRQS_PER_MBIGEN_NODE;
+ 
+-	return pin * 4 + nid * MBIGEN_NODE_OFFSET
+-			+ REG_MBIGEN_VEC_OFFSET;
++	return pin * 4 + get_mbigen_node_offset(nid) + REG_MBIGEN_VEC_OFFSET;
+ }
+ 
+ static inline void get_mbigen_type_reg(irq_hw_number_t hwirq,
+@@ -88,8 +101,7 @@ static inline void get_mbigen_type_reg(irq_hw_number_t hwirq,
+ 	*mask = 1 << (irq_ofst % 32);
+ 	ofst = irq_ofst / 32 * 4;
+ 
+-	*addr = ofst + nid * MBIGEN_NODE_OFFSET
+-		+ REG_MBIGEN_TYPE_OFFSET;
++	*addr = ofst + get_mbigen_node_offset(nid) + REG_MBIGEN_TYPE_OFFSET;
+ }
+ 
+ static inline void get_mbigen_clear_reg(irq_hw_number_t hwirq,
+diff --git a/drivers/irqchip/irq-meson-gpio.c b/drivers/irqchip/irq-meson-gpio.c
+index e50676ce2ec84b..6a3d18d5ba2f0b 100644
+--- a/drivers/irqchip/irq-meson-gpio.c
++++ b/drivers/irqchip/irq-meson-gpio.c
+@@ -16,7 +16,7 @@
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+ 
+-#define NUM_CHANNEL 8
++#define MAX_NUM_CHANNEL 64
+ #define MAX_INPUT_MUX 256
+ 
+ #define REG_EDGE_POL	0x00
+@@ -60,6 +60,7 @@ struct irq_ctl_ops {
+ 
+ struct meson_gpio_irq_params {
+ 	unsigned int nr_hwirq;
++	unsigned int nr_channels;
+ 	bool support_edge_both;
+ 	unsigned int edge_both_offset;
+ 	unsigned int edge_single_offset;
+@@ -81,6 +82,7 @@ struct meson_gpio_irq_params {
+ 	.edge_single_offset = 0,				\
+ 	.pol_low_offset = 16,					\
+ 	.pin_sel_mask = 0xff,					\
++	.nr_channels = 8,					\
+ 
+ #define INIT_MESON_A1_COMMON_DATA(irqs)				\
+ 	INIT_MESON_COMMON(irqs, meson_a1_gpio_irq_init,		\
+@@ -90,6 +92,7 @@ struct meson_gpio_irq_params {
+ 	.edge_single_offset = 8,				\
+ 	.pol_low_offset = 0,					\
+ 	.pin_sel_mask = 0x7f,					\
++	.nr_channels = 8,					\
+ 
+ static const struct meson_gpio_irq_params meson8_params = {
+ 	INIT_MESON8_COMMON_DATA(134)
+@@ -136,9 +139,9 @@ static const struct of_device_id meson_irq_gpio_matches[] = {
+ struct meson_gpio_irq_controller {
+ 	const struct meson_gpio_irq_params *params;
+ 	void __iomem *base;
+-	u32 channel_irqs[NUM_CHANNEL];
+-	DECLARE_BITMAP(channel_map, NUM_CHANNEL);
+-	spinlock_t lock;
++	u32 channel_irqs[MAX_NUM_CHANNEL];
++	DECLARE_BITMAP(channel_map, MAX_NUM_CHANNEL);
++	raw_spinlock_t lock;
+ };
+ 
+ static void meson_gpio_irq_update_bits(struct meson_gpio_irq_controller *ctl,
+@@ -147,14 +150,14 @@ static void meson_gpio_irq_update_bits(struct meson_gpio_irq_controller *ctl,
+ 	unsigned long flags;
+ 	u32 tmp;
+ 
+-	spin_lock_irqsave(&ctl->lock, flags);
++	raw_spin_lock_irqsave(&ctl->lock, flags);
+ 
+ 	tmp = readl_relaxed(ctl->base + reg);
+ 	tmp &= ~mask;
+ 	tmp |= val;
+ 	writel_relaxed(tmp, ctl->base + reg);
+ 
+-	spin_unlock_irqrestore(&ctl->lock, flags);
++	raw_spin_unlock_irqrestore(&ctl->lock, flags);
+ }
+ 
+ static void meson_gpio_irq_init_dummy(struct meson_gpio_irq_controller *ctl)
+@@ -204,12 +207,12 @@ meson_gpio_irq_request_channel(struct meson_gpio_irq_controller *ctl,
+ 	unsigned long flags;
+ 	unsigned int idx;
+ 
+-	spin_lock_irqsave(&ctl->lock, flags);
++	raw_spin_lock_irqsave(&ctl->lock, flags);
+ 
+ 	/* Find a free channel */
+-	idx = find_first_zero_bit(ctl->channel_map, NUM_CHANNEL);
+-	if (idx >= NUM_CHANNEL) {
+-		spin_unlock_irqrestore(&ctl->lock, flags);
++	idx = find_first_zero_bit(ctl->channel_map, ctl->params->nr_channels);
++	if (idx >= ctl->params->nr_channels) {
++		raw_spin_unlock_irqrestore(&ctl->lock, flags);
+ 		pr_err("No channel available\n");
+ 		return -ENOSPC;
+ 	}
+@@ -217,7 +220,7 @@ meson_gpio_irq_request_channel(struct meson_gpio_irq_controller *ctl,
+ 	/* Mark the channel as used */
+ 	set_bit(idx, ctl->channel_map);
+ 
+-	spin_unlock_irqrestore(&ctl->lock, flags);
++	raw_spin_unlock_irqrestore(&ctl->lock, flags);
+ 
+ 	/*
+ 	 * Setup the mux of the channel to route the signal of the pad
+@@ -451,10 +454,10 @@ static int __init meson_gpio_irq_parse_dt(struct device_node *node,
+ 	ret = of_property_read_variable_u32_array(node,
+ 						  "amlogic,channel-interrupts",
+ 						  ctl->channel_irqs,
+-						  NUM_CHANNEL,
+-						  NUM_CHANNEL);
++						  ctl->params->nr_channels,
++						  ctl->params->nr_channels);
+ 	if (ret < 0) {
+-		pr_err("can't get %d channel interrupts\n", NUM_CHANNEL);
++		pr_err("can't get %d channel interrupts\n", ctl->params->nr_channels);
+ 		return ret;
+ 	}
+ 
+@@ -485,7 +488,7 @@ static int __init meson_gpio_irq_of_init(struct device_node *node,
+ 	if (!ctl)
+ 		return -ENOMEM;
+ 
+-	spin_lock_init(&ctl->lock);
++	raw_spin_lock_init(&ctl->lock);
+ 
+ 	ctl->base = of_iomap(node, 0);
+ 	if (!ctl->base) {
+@@ -509,7 +512,7 @@ static int __init meson_gpio_irq_of_init(struct device_node *node,
+ 	}
+ 
+ 	pr_info("%d to %d gpio interrupt mux initialized\n",
+-		ctl->params->nr_hwirq, NUM_CHANNEL);
++		ctl->params->nr_hwirq, ctl->params->nr_channels);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/irqchip/irq-xilinx-intc.c b/drivers/irqchip/irq-xilinx-intc.c
+index 8cd1bfc7305722..b48c3fb4b35ea1 100644
+--- a/drivers/irqchip/irq-xilinx-intc.c
++++ b/drivers/irqchip/irq-xilinx-intc.c
+@@ -201,7 +201,7 @@ static int __init xilinx_intc_of_init(struct device_node *intc,
+ 		irqc->intr_mask = 0;
+ 	}
+ 
+-	if (irqc->intr_mask >> irqc->nr_irq)
++	if ((u64)irqc->intr_mask >> irqc->nr_irq)
+ 		pr_warn("irq-xilinx: mismatch in kind-of-intr param\n");
+ 
+ 	pr_info("irq-xilinx: %pOF: num_irq=%d, edge=0x%x\n",
+diff --git a/drivers/isdn/hardware/mISDN/hfcmulti.c b/drivers/isdn/hardware/mISDN/hfcmulti.c
+index 4c5b6772562dc6..a0697d64aab3c4 100644
+--- a/drivers/isdn/hardware/mISDN/hfcmulti.c
++++ b/drivers/isdn/hardware/mISDN/hfcmulti.c
+@@ -1931,7 +1931,7 @@ hfcmulti_dtmf(struct hfc_multi *hc)
+ static void
+ hfcmulti_tx(struct hfc_multi *hc, int ch)
+ {
+-	int i, ii, temp, len = 0;
++	int i, ii, temp, tmp_len, len = 0;
+ 	int Zspace, z1, z2; /* must be int for calculation */
+ 	int Fspace, f1, f2;
+ 	u_char *d;
+@@ -2152,14 +2152,15 @@ hfcmulti_tx(struct hfc_multi *hc, int ch)
+ 		HFC_wait_nodebug(hc);
+ 	}
+ 
++	tmp_len = (*sp)->len;
+ 	dev_kfree_skb(*sp);
+ 	/* check for next frame */
+ 	if (bch && get_next_bframe(bch)) {
+-		len = (*sp)->len;
++		len = tmp_len;
+ 		goto next_frame;
+ 	}
+ 	if (dch && get_next_dframe(dch)) {
+-		len = (*sp)->len;
++		len = tmp_len;
+ 		goto next_frame;
+ 	}
+ 
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index fcb9eee3b60977..e28a4bb716032b 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -236,7 +236,6 @@ struct led_classdev *of_led_get(struct device_node *np, int index)
+ 
+ 	led_dev = class_find_device_by_of_node(leds_class, led_node);
+ 	of_node_put(led_node);
+-	put_device(led_dev);
+ 
+ 	if (!led_dev)
+ 		return ERR_PTR(-EPROBE_DEFER);
+diff --git a/drivers/leds/led-triggers.c b/drivers/leds/led-triggers.c
+index 4e7b78a84149be..cbe70f38cb5723 100644
+--- a/drivers/leds/led-triggers.c
++++ b/drivers/leds/led-triggers.c
+@@ -177,9 +177,9 @@ int led_trigger_set(struct led_classdev *led_cdev, struct led_trigger *trig)
+ 			flags);
+ 		cancel_work_sync(&led_cdev->set_brightness_work);
+ 		led_stop_software_blink(led_cdev);
++		device_remove_groups(led_cdev->dev, led_cdev->trigger->groups);
+ 		if (led_cdev->trigger->deactivate)
+ 			led_cdev->trigger->deactivate(led_cdev);
+-		device_remove_groups(led_cdev->dev, led_cdev->trigger->groups);
+ 		led_cdev->trigger = NULL;
+ 		led_cdev->trigger_data = NULL;
+ 		led_cdev->activated = false;
+diff --git a/drivers/leds/leds-ss4200.c b/drivers/leds/leds-ss4200.c
+index 245de443fe9c4e..833bb721a8e079 100644
+--- a/drivers/leds/leds-ss4200.c
++++ b/drivers/leds/leds-ss4200.c
+@@ -356,8 +356,10 @@ static int ich7_lpc_probe(struct pci_dev *dev,
+ 
+ 	nas_gpio_pci_dev = dev;
+ 	status = pci_read_config_dword(dev, PMBASE, &g_pm_io_base);
+-	if (status)
++	if (status) {
++		status = pcibios_err_to_errno(status);
+ 		goto out;
++	}
+ 	g_pm_io_base &= 0x00000ff80;
+ 
+ 	status = pci_read_config_dword(dev, GPIO_CTRL, &gc);
+@@ -369,8 +371,9 @@ static int ich7_lpc_probe(struct pci_dev *dev,
+ 	}
+ 
+ 	status = pci_read_config_dword(dev, GPIO_BASE, &nas_gpio_io_base);
+-	if (0 > status) {
++	if (status) {
+ 		dev_info(&dev->dev, "Unable to read GPIOBASE.\n");
++		status = pcibios_err_to_errno(status);
+ 		goto out;
+ 	}
+ 	dev_dbg(&dev->dev, ": GPIOBASE = 0x%08x\n", nas_gpio_io_base);
+diff --git a/drivers/macintosh/therm_windtunnel.c b/drivers/macintosh/therm_windtunnel.c
+index f55f6adf5e5ff7..49805eb4d145a2 100644
+--- a/drivers/macintosh/therm_windtunnel.c
++++ b/drivers/macintosh/therm_windtunnel.c
+@@ -549,7 +549,7 @@ g4fan_exit( void )
+ 	platform_driver_unregister( &therm_of_driver );
+ 
+ 	if( x.of_dev )
+-		of_device_unregister( x.of_dev );
++		of_platform_device_destroy(&x.of_dev->dev, NULL);
+ }
+ 
+ module_init(g4fan_init);
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 67ceab4573be49..f1f029243e0cb0 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -524,7 +524,6 @@ void mddev_suspend(struct mddev *mddev)
+ 	clear_bit_unlock(MD_ALLOW_SB_UPDATE, &mddev->flags);
+ 	wait_event(mddev->sb_wait, !test_bit(MD_UPDATING_SB, &mddev->flags));
+ 
+-	del_timer_sync(&mddev->safemode_timer);
+ 	/* restrict memory reclaim I/O during raid array is suspend */
+ 	mddev->noio_flag = memalloc_noio_save();
+ }
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 66167c4c7bc9e4..7cdc6f20f50436 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -6005,7 +6005,9 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
+ 	safepos = conf->reshape_safe;
+ 	sector_div(safepos, data_disks);
+ 	if (mddev->reshape_backwards) {
+-		BUG_ON(writepos < reshape_sectors);
++		if (WARN_ON(writepos < reshape_sectors))
++			return MaxSector;
++
+ 		writepos -= reshape_sectors;
+ 		readpos += reshape_sectors;
+ 		safepos += reshape_sectors;
+@@ -6023,14 +6025,18 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
+ 	 * to set 'stripe_addr' which is where we will write to.
+ 	 */
+ 	if (mddev->reshape_backwards) {
+-		BUG_ON(conf->reshape_progress == 0);
++		if (WARN_ON(conf->reshape_progress == 0))
++			return MaxSector;
++
+ 		stripe_addr = writepos;
+-		BUG_ON((mddev->dev_sectors &
+-			~((sector_t)reshape_sectors - 1))
+-		       - reshape_sectors - stripe_addr
+-		       != sector_nr);
++		if (WARN_ON((mddev->dev_sectors &
++		    ~((sector_t)reshape_sectors - 1)) -
++		    reshape_sectors - stripe_addr != sector_nr))
++			return MaxSector;
+ 	} else {
+-		BUG_ON(writepos != sector_nr + reshape_sectors);
++		if (WARN_ON(writepos != sector_nr + reshape_sectors))
++			return MaxSector;
++
+ 		stripe_addr = sector_nr;
+ 	}
+ 
+diff --git a/drivers/media/pci/saa7134/saa7134-dvb.c b/drivers/media/pci/saa7134/saa7134-dvb.c
+index f359cd5c006a75..c786b83a69d2ba 100644
+--- a/drivers/media/pci/saa7134/saa7134-dvb.c
++++ b/drivers/media/pci/saa7134/saa7134-dvb.c
+@@ -466,7 +466,9 @@ static int philips_europa_tuner_sleep(struct dvb_frontend *fe)
+ 	/* switch the board to analog mode */
+ 	if (fe->ops.i2c_gate_ctrl)
+ 		fe->ops.i2c_gate_ctrl(fe, 1);
+-	i2c_transfer(&dev->i2c_adap, &analog_msg, 1);
++	if (i2c_transfer(&dev->i2c_adap, &analog_msg, 1) != 1)
++		return -EIO;
++
+ 	return 0;
+ }
+ 
+@@ -1018,7 +1020,9 @@ static int md8800_set_voltage2(struct dvb_frontend *fe,
+ 	else
+ 		wbuf[1] = rbuf & 0xef;
+ 	msg[0].len = 2;
+-	i2c_transfer(&dev->i2c_adap, msg, 1);
++	if (i2c_transfer(&dev->i2c_adap, msg, 1) != 1)
++		return -EIO;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c
+index c437a929a54516..d91030a134c0ea 100644
+--- a/drivers/media/platform/qcom/venus/vdec.c
++++ b/drivers/media/platform/qcom/venus/vdec.c
+@@ -1142,7 +1142,7 @@ static int vdec_stop_output(struct venus_inst *inst)
+ 		break;
+ 	case VENUS_DEC_STATE_INIT:
+ 	case VENUS_DEC_STATE_CAPTURE_SETUP:
+-		ret = hfi_session_flush(inst, HFI_FLUSH_INPUT, true);
++		ret = hfi_session_flush(inst, HFI_FLUSH_ALL, true);
+ 		break;
+ 	default:
+ 		break;
+@@ -1587,6 +1587,7 @@ static int vdec_close(struct file *file)
+ 
+ 	vdec_pm_get(inst);
+ 
++	cancel_work_sync(&inst->delayed_process_work);
+ 	v4l2_m2m_ctx_release(inst->m2m_ctx);
+ 	v4l2_m2m_release(inst->m2m_dev);
+ 	vdec_ctrl_deinit(inst);
+diff --git a/drivers/media/platform/vsp1/vsp1_histo.c b/drivers/media/platform/vsp1/vsp1_histo.c
+index a91e142bcb9480..37df7d2d4639e1 100644
+--- a/drivers/media/platform/vsp1/vsp1_histo.c
++++ b/drivers/media/platform/vsp1/vsp1_histo.c
+@@ -36,9 +36,8 @@ struct vsp1_histogram_buffer *
+ vsp1_histogram_buffer_get(struct vsp1_histogram *histo)
+ {
+ 	struct vsp1_histogram_buffer *buf = NULL;
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&histo->irqlock, flags);
++	spin_lock(&histo->irqlock);
+ 
+ 	if (list_empty(&histo->irqqueue))
+ 		goto done;
+@@ -49,7 +48,7 @@ vsp1_histogram_buffer_get(struct vsp1_histogram *histo)
+ 	histo->readout = true;
+ 
+ done:
+-	spin_unlock_irqrestore(&histo->irqlock, flags);
++	spin_unlock(&histo->irqlock);
+ 	return buf;
+ }
+ 
+@@ -58,7 +57,6 @@ void vsp1_histogram_buffer_complete(struct vsp1_histogram *histo,
+ 				    size_t size)
+ {
+ 	struct vsp1_pipeline *pipe = histo->entity.pipe;
+-	unsigned long flags;
+ 
+ 	/*
+ 	 * The pipeline pointer is guaranteed to be valid as this function is
+@@ -70,10 +68,10 @@ void vsp1_histogram_buffer_complete(struct vsp1_histogram *histo,
+ 	vb2_set_plane_payload(&buf->buf.vb2_buf, 0, size);
+ 	vb2_buffer_done(&buf->buf.vb2_buf, VB2_BUF_STATE_DONE);
+ 
+-	spin_lock_irqsave(&histo->irqlock, flags);
++	spin_lock(&histo->irqlock);
+ 	histo->readout = false;
+ 	wake_up(&histo->wait_queue);
+-	spin_unlock_irqrestore(&histo->irqlock, flags);
++	spin_unlock(&histo->irqlock);
+ }
+ 
+ /* -----------------------------------------------------------------------------
+@@ -124,11 +122,10 @@ static void histo_buffer_queue(struct vb2_buffer *vb)
+ 	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ 	struct vsp1_histogram *histo = vb2_get_drv_priv(vb->vb2_queue);
+ 	struct vsp1_histogram_buffer *buf = to_vsp1_histogram_buffer(vbuf);
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&histo->irqlock, flags);
++	spin_lock_irq(&histo->irqlock);
+ 	list_add_tail(&buf->queue, &histo->irqqueue);
+-	spin_unlock_irqrestore(&histo->irqlock, flags);
++	spin_unlock_irq(&histo->irqlock);
+ }
+ 
+ static int histo_start_streaming(struct vb2_queue *vq, unsigned int count)
+@@ -140,9 +137,8 @@ static void histo_stop_streaming(struct vb2_queue *vq)
+ {
+ 	struct vsp1_histogram *histo = vb2_get_drv_priv(vq);
+ 	struct vsp1_histogram_buffer *buffer;
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&histo->irqlock, flags);
++	spin_lock_irq(&histo->irqlock);
+ 
+ 	/* Remove all buffers from the IRQ queue. */
+ 	list_for_each_entry(buffer, &histo->irqqueue, queue)
+@@ -152,7 +148,7 @@ static void histo_stop_streaming(struct vb2_queue *vq)
+ 	/* Wait for the buffer being read out (if any) to complete. */
+ 	wait_event_lock_irq(histo->wait_queue, !histo->readout, histo->irqlock);
+ 
+-	spin_unlock_irqrestore(&histo->irqlock, flags);
++	spin_unlock_irq(&histo->irqlock);
+ }
+ 
+ static const struct vb2_ops histo_video_queue_qops = {
+diff --git a/drivers/media/platform/vsp1/vsp1_pipe.h b/drivers/media/platform/vsp1/vsp1_pipe.h
+index ae646c9ef33731..15daf35bda2163 100644
+--- a/drivers/media/platform/vsp1/vsp1_pipe.h
++++ b/drivers/media/platform/vsp1/vsp1_pipe.h
+@@ -73,7 +73,7 @@ struct vsp1_partition_window {
+  * @wpf: The WPF partition window configuration
+  */
+ struct vsp1_partition {
+-	struct vsp1_partition_window rpf;
++	struct vsp1_partition_window rpf[VSP1_MAX_RPF];
+ 	struct vsp1_partition_window uds_sink;
+ 	struct vsp1_partition_window uds_source;
+ 	struct vsp1_partition_window sru;
+diff --git a/drivers/media/platform/vsp1/vsp1_rpf.c b/drivers/media/platform/vsp1/vsp1_rpf.c
+index 75083cb234fe35..996a3058d5b76a 100644
+--- a/drivers/media/platform/vsp1/vsp1_rpf.c
++++ b/drivers/media/platform/vsp1/vsp1_rpf.c
+@@ -271,8 +271,8 @@ static void rpf_configure_partition(struct vsp1_entity *entity,
+ 	 * 'width' need to be adjusted.
+ 	 */
+ 	if (pipe->partitions > 1) {
+-		crop.width = pipe->partition->rpf.width;
+-		crop.left += pipe->partition->rpf.left;
++		crop.width = pipe->partition->rpf[rpf->entity.index].width;
++		crop.left += pipe->partition->rpf[rpf->entity.index].left;
+ 	}
+ 
+ 	if (pipe->interlaced) {
+@@ -327,7 +327,9 @@ static void rpf_partition(struct vsp1_entity *entity,
+ 			  unsigned int partition_idx,
+ 			  struct vsp1_partition_window *window)
+ {
+-	partition->rpf = *window;
++	struct vsp1_rwpf *rpf = to_rwpf(&entity->subdev);
++
++	partition->rpf[rpf->entity.index] = *window;
+ }
+ 
+ static const struct vsp1_entity_operations rpf_entity_ops = {
+diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c
+index 253a1d1a840a0b..cd4995e74b977b 100644
+--- a/drivers/media/rc/imon.c
++++ b/drivers/media/rc/imon.c
+@@ -1153,10 +1153,7 @@ static int imon_ir_change_protocol(struct rc_dev *rc, u64 *rc_proto)
+ 
+ 	memcpy(ictx->usb_tx_buf, &ir_proto_packet, sizeof(ir_proto_packet));
+ 
+-	if (!mutex_is_locked(&ictx->lock)) {
+-		unlock = true;
+-		mutex_lock(&ictx->lock);
+-	}
++	unlock = mutex_trylock(&ictx->lock);
+ 
+ 	retval = send_packet(ictx);
+ 	if (retval)
+diff --git a/drivers/media/rc/lirc_dev.c b/drivers/media/rc/lirc_dev.c
+index 14243ce03b46e5..c59601487334ce 100644
+--- a/drivers/media/rc/lirc_dev.c
++++ b/drivers/media/rc/lirc_dev.c
+@@ -840,8 +840,10 @@ struct rc_dev *rc_dev_get_from_fd(int fd, bool write)
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+-	if (write && !(f.file->f_mode & FMODE_WRITE))
++	if (write && !(f.file->f_mode & FMODE_WRITE)) {
++		fdput(f);
+ 		return ERR_PTR(-EPERM);
++	}
+ 
+ 	fh = f.file->private_data;
+ 	dev = fh->rc;
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 5e0acabed37a0f..dc8d790eb91145 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -992,26 +992,56 @@ static s32 __uvc_ctrl_get_value(struct uvc_control_mapping *mapping,
+ 	return value;
+ }
+ 
+-static int __uvc_ctrl_get(struct uvc_video_chain *chain,
+-	struct uvc_control *ctrl, struct uvc_control_mapping *mapping,
+-	s32 *value)
++static int __uvc_ctrl_load_cur(struct uvc_video_chain *chain,
++			       struct uvc_control *ctrl)
+ {
++	u8 *data;
+ 	int ret;
+ 
+-	if ((ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR) == 0)
+-		return -EACCES;
++	if (ctrl->loaded)
++		return 0;
+ 
+-	if (!ctrl->loaded) {
+-		ret = uvc_query_ctrl(chain->dev, UVC_GET_CUR, ctrl->entity->id,
+-				chain->dev->intfnum, ctrl->info.selector,
+-				uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT),
+-				ctrl->info.size);
+-		if (ret < 0)
+-			return ret;
++	data = uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT);
+ 
++	if ((ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR) == 0) {
++		memset(data, 0, ctrl->info.size);
+ 		ctrl->loaded = 1;
++
++		return 0;
+ 	}
+ 
++	if (ctrl->entity->get_cur)
++		ret = ctrl->entity->get_cur(chain->dev, ctrl->entity,
++					    ctrl->info.selector, data,
++					    ctrl->info.size);
++	else
++		ret = uvc_query_ctrl(chain->dev, UVC_GET_CUR,
++				     ctrl->entity->id, chain->dev->intfnum,
++				     ctrl->info.selector, data,
++				     ctrl->info.size);
++
++	if (ret < 0)
++		return ret;
++
++	ctrl->loaded = 1;
++
++	return ret;
++}
++
++static int __uvc_ctrl_get(struct uvc_video_chain *chain,
++			  struct uvc_control *ctrl,
++			  struct uvc_control_mapping *mapping,
++			  s32 *value)
++{
++	int ret;
++
++	if ((ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR) == 0)
++		return -EACCES;
++
++	ret = __uvc_ctrl_load_cur(chain, ctrl);
++	if (ret < 0)
++		return ret;
++
+ 	*value = __uvc_ctrl_get_value(mapping,
+ 				uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT));
+ 
+@@ -1665,21 +1695,10 @@ int uvc_ctrl_set(struct uvc_fh *handle,
+ 	 * needs to be loaded from the device to perform the read-modify-write
+ 	 * operation.
+ 	 */
+-	if (!ctrl->loaded && (ctrl->info.size * 8) != mapping->size) {
+-		if ((ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR) == 0) {
+-			memset(uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT),
+-				0, ctrl->info.size);
+-		} else {
+-			ret = uvc_query_ctrl(chain->dev, UVC_GET_CUR,
+-				ctrl->entity->id, chain->dev->intfnum,
+-				ctrl->info.selector,
+-				uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT),
+-				ctrl->info.size);
+-			if (ret < 0)
+-				return ret;
+-		}
+-
+-		ctrl->loaded = 1;
++	if ((ctrl->info.size * 8) != mapping->size) {
++		ret = __uvc_ctrl_load_cur(chain, ctrl);
++		if (ret < 0)
++			return ret;
+ 	}
+ 
+ 	/* Backup the current value in case we need to rollback later. */
+@@ -1718,9 +1737,19 @@ static int uvc_ctrl_get_flags(struct uvc_device *dev,
+ 	if (data == NULL)
+ 		return -ENOMEM;
+ 
+-	ret = uvc_query_ctrl(dev, UVC_GET_INFO, ctrl->entity->id, dev->intfnum,
+-			     info->selector, data, 1);
+-	if (!ret)
++	if (ctrl->entity->get_info)
++		ret = ctrl->entity->get_info(dev, ctrl->entity,
++					     ctrl->info.selector, data);
++	else
++		ret = uvc_query_ctrl(dev, UVC_GET_INFO, ctrl->entity->id,
++				     dev->intfnum, info->selector, data, 1);
++
++	if (!ret) {
++		info->flags &= ~(UVC_CTRL_FLAG_GET_CUR |
++				 UVC_CTRL_FLAG_SET_CUR |
++				 UVC_CTRL_FLAG_AUTO_UPDATE |
++				 UVC_CTRL_FLAG_ASYNCHRONOUS);
++
+ 		info->flags |= (data[0] & UVC_CONTROL_CAP_GET ?
+ 				UVC_CTRL_FLAG_GET_CUR : 0)
+ 			    |  (data[0] & UVC_CONTROL_CAP_SET ?
+@@ -1729,6 +1758,7 @@ static int uvc_ctrl_get_flags(struct uvc_device *dev,
+ 				UVC_CTRL_FLAG_AUTO_UPDATE : 0)
+ 			    |  (data[0] & UVC_CONTROL_CAP_ASYNCHRONOUS ?
+ 				UVC_CTRL_FLAG_ASYNCHRONOUS : 0);
++	}
+ 
+ 	kfree(data);
+ 	return ret;
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index 03dfe96bcebacb..288f097e2e6f29 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -207,13 +207,13 @@ static void uvc_fixup_video_ctrl(struct uvc_streaming *stream,
+ 		/* Compute a bandwidth estimation by multiplying the frame
+ 		 * size by the number of video frames per second, divide the
+ 		 * result by the number of USB frames (or micro-frames for
+-		 * high-speed devices) per second and add the UVC header size
+-		 * (assumed to be 12 bytes long).
++		 * high- and super-speed devices) per second and add the UVC
++		 * header size (assumed to be 12 bytes long).
+ 		 */
+ 		bandwidth = frame->wWidth * frame->wHeight / 8 * format->bpp;
+ 		bandwidth *= 10000000 / interval + 1;
+ 		bandwidth /= 1000;
+-		if (stream->dev->udev->speed == USB_SPEED_HIGH)
++		if (stream->dev->udev->speed >= USB_SPEED_HIGH)
+ 			bandwidth /= 8;
+ 		bandwidth += 12;
+ 
+@@ -468,6 +468,7 @@ uvc_video_clock_decode(struct uvc_streaming *stream, struct uvc_buffer *buf,
+ 	ktime_t time;
+ 	u16 host_sof;
+ 	u16 dev_sof;
++	u32 dev_stc;
+ 
+ 	switch (data[1] & (UVC_STREAM_PTS | UVC_STREAM_SCR)) {
+ 	case UVC_STREAM_PTS | UVC_STREAM_SCR:
+@@ -512,6 +513,34 @@ uvc_video_clock_decode(struct uvc_streaming *stream, struct uvc_buffer *buf,
+ 	if (dev_sof == stream->clock.last_sof)
+ 		return;
+ 
++	dev_stc = get_unaligned_le32(&data[header_size - 6]);
++
++	/*
++	 * STC (Source Time Clock) is the clock used by the camera. The UVC 1.5
++	 * standard states that it "must be captured when the first video data
++	 * of a video frame is put on the USB bus". This is generally understood
++	 * as requiring devices to clear the payload header's SCR bit before
++	 * the first packet containing video data.
++	 *
++	 * Most vendors follow that interpretation, but some (namely SunplusIT
++	 * on some devices) always set the `UVC_STREAM_SCR` bit, fill the SCR
++	 * field with 0's,and expect that the driver only processes the SCR if
++	 * there is data in the packet.
++	 *
++	 * Ignore all the hardware timestamp information if we haven't received
++	 * any data for this frame yet, the packet contains no data, and both
++	 * STC and SOF are zero. This heuristics should be safe on compliant
++	 * devices. This should be safe with compliant devices, as in the very
++	 * unlikely case where a UVC 1.1 device would send timing information
++	 * only before the first packet containing data, and both STC and SOF
++	 * happen to be zero for a particular frame, we would only miss one
++	 * clock sample from many and the clock recovery algorithm wouldn't
++	 * suffer from this condition.
++	 */
++	if (buf && buf->bytesused == 0 && len == header_size &&
++	    dev_stc == 0 && dev_sof == 0)
++		return;
++
+ 	stream->clock.last_sof = dev_sof;
+ 
+ 	host_sof = usb_get_current_frame_number(stream->dev->udev);
+@@ -549,7 +578,7 @@ uvc_video_clock_decode(struct uvc_streaming *stream, struct uvc_buffer *buf,
+ 	spin_lock_irqsave(&stream->clock.lock, flags);
+ 
+ 	sample = &stream->clock.samples[stream->clock.head];
+-	sample->dev_stc = get_unaligned_le32(&data[header_size - 6]);
++	sample->dev_stc = dev_stc;
+ 	sample->dev_sof = dev_sof;
+ 	sample->host_sof = host_sof;
+ 	sample->host_time = time;
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index c75990c0957e79..c3241cf5f7b435 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -354,6 +354,11 @@ struct uvc_entity {
+ 	u8 bNrInPins;
+ 	u8 *baSourceID;
+ 
++	int (*get_info)(struct uvc_device *dev, struct uvc_entity *entity,
++			u8 cs, u8 *caps);
++	int (*get_cur)(struct uvc_device *dev, struct uvc_entity *entity,
++		       u8 cs, void *data, u16 size);
++
+ 	unsigned int ncontrols;
+ 	struct uvc_control *controls;
+ };
+diff --git a/drivers/mfd/omap-usb-tll.c b/drivers/mfd/omap-usb-tll.c
+index 16fad79c73f1fd..99ca0dfa127b57 100644
+--- a/drivers/mfd/omap-usb-tll.c
++++ b/drivers/mfd/omap-usb-tll.c
+@@ -237,8 +237,7 @@ static int usbtll_omap_probe(struct platform_device *pdev)
+ 		break;
+ 	}
+ 
+-	tll = devm_kzalloc(dev, sizeof(*tll) + sizeof(tll->ch_clk[nch]),
+-			   GFP_KERNEL);
++	tll = devm_kzalloc(dev, struct_size(tll, ch_clk, nch), GFP_KERNEL);
+ 	if (!tll) {
+ 		pm_runtime_put_sync(dev);
+ 		pm_runtime_disable(dev);
+diff --git a/drivers/mtd/tests/Makefile b/drivers/mtd/tests/Makefile
+index 5de0378f90dbdc..7dae831ee8b6bf 100644
+--- a/drivers/mtd/tests/Makefile
++++ b/drivers/mtd/tests/Makefile
+@@ -1,19 +1,19 @@
+ # SPDX-License-Identifier: GPL-2.0
+-obj-$(CONFIG_MTD_TESTS) += mtd_oobtest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_pagetest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_readtest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_speedtest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_stresstest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_subpagetest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_torturetest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_nandecctest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_nandbiterrs.o
++obj-$(CONFIG_MTD_TESTS) += mtd_oobtest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_pagetest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_readtest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_speedtest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_stresstest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_subpagetest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_torturetest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_nandecctest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_nandbiterrs.o mtd_test.o
+ 
+-mtd_oobtest-objs := oobtest.o mtd_test.o
+-mtd_pagetest-objs := pagetest.o mtd_test.o
+-mtd_readtest-objs := readtest.o mtd_test.o
+-mtd_speedtest-objs := speedtest.o mtd_test.o
+-mtd_stresstest-objs := stresstest.o mtd_test.o
+-mtd_subpagetest-objs := subpagetest.o mtd_test.o
+-mtd_torturetest-objs := torturetest.o mtd_test.o
+-mtd_nandbiterrs-objs := nandbiterrs.o mtd_test.o
++mtd_oobtest-objs := oobtest.o
++mtd_pagetest-objs := pagetest.o
++mtd_readtest-objs := readtest.o
++mtd_speedtest-objs := speedtest.o
++mtd_stresstest-objs := stresstest.o
++mtd_subpagetest-objs := subpagetest.o
++mtd_torturetest-objs := torturetest.o
++mtd_nandbiterrs-objs := nandbiterrs.o
+diff --git a/drivers/mtd/tests/mtd_test.c b/drivers/mtd/tests/mtd_test.c
+index c84250beffdc91..f391e0300cdc94 100644
+--- a/drivers/mtd/tests/mtd_test.c
++++ b/drivers/mtd/tests/mtd_test.c
+@@ -25,6 +25,7 @@ int mtdtest_erase_eraseblock(struct mtd_info *mtd, unsigned int ebnum)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mtdtest_erase_eraseblock);
+ 
+ static int is_block_bad(struct mtd_info *mtd, unsigned int ebnum)
+ {
+@@ -57,6 +58,7 @@ int mtdtest_scan_for_bad_eraseblocks(struct mtd_info *mtd, unsigned char *bbt,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mtdtest_scan_for_bad_eraseblocks);
+ 
+ int mtdtest_erase_good_eraseblocks(struct mtd_info *mtd, unsigned char *bbt,
+ 				unsigned int eb, int ebcnt)
+@@ -75,6 +77,7 @@ int mtdtest_erase_good_eraseblocks(struct mtd_info *mtd, unsigned char *bbt,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mtdtest_erase_good_eraseblocks);
+ 
+ int mtdtest_read(struct mtd_info *mtd, loff_t addr, size_t size, void *buf)
+ {
+@@ -92,6 +95,7 @@ int mtdtest_read(struct mtd_info *mtd, loff_t addr, size_t size, void *buf)
+ 
+ 	return err;
+ }
++EXPORT_SYMBOL_GPL(mtdtest_read);
+ 
+ int mtdtest_write(struct mtd_info *mtd, loff_t addr, size_t size,
+ 		const void *buf)
+@@ -107,3 +111,8 @@ int mtdtest_write(struct mtd_info *mtd, loff_t addr, size_t size,
+ 
+ 	return err;
+ }
++EXPORT_SYMBOL_GPL(mtdtest_write);
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("MTD function test helpers");
++MODULE_AUTHOR("Akinobu Mita");
+diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c
+index b4cdf2351cac92..d33deb84b6ecce 100644
+--- a/drivers/mtd/ubi/eba.c
++++ b/drivers/mtd/ubi/eba.c
+@@ -1560,6 +1560,7 @@ int self_check_eba(struct ubi_device *ubi, struct ubi_attach_info *ai_fastmap,
+ 					  GFP_KERNEL);
+ 		if (!fm_eba[i]) {
+ 			ret = -ENOMEM;
++			kfree(scan_eba[i]);
+ 			goto out_free;
+ 		}
+ 
+@@ -1595,7 +1596,7 @@ int self_check_eba(struct ubi_device *ubi, struct ubi_attach_info *ai_fastmap,
+ 	}
+ 
+ out_free:
+-	for (i = 0; i < num_volumes; i++) {
++	while (--i >= 0) {
+ 		if (!ubi->volumes[i])
+ 			continue;
+ 
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 50fabba0424888..c07b9bac1a6a05 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1052,13 +1052,10 @@ static struct slave *bond_find_best_slave(struct bonding *bond)
+ 	return bestslave;
+ }
+ 
++/* must be called in RCU critical section or with RTNL held */
+ static bool bond_should_notify_peers(struct bonding *bond)
+ {
+-	struct slave *slave;
+-
+-	rcu_read_lock();
+-	slave = rcu_dereference(bond->curr_active_slave);
+-	rcu_read_unlock();
++	struct slave *slave = rcu_dereference_rtnl(bond->curr_active_slave);
+ 
+ 	if (!slave || !bond->send_peer_notif ||
+ 	    bond->send_peer_notif %
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index d3b37cebcfde8d..2bf07a39805446 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -2180,6 +2180,9 @@ static int b53_change_mtu(struct dsa_switch *ds, int port, int mtu)
+ 	if (is5325(dev) || is5365(dev))
+ 		return -EOPNOTSUPP;
+ 
++	if (!dsa_is_cpu_port(ds, port))
++		return 0;
++
+ 	enable_jumbo = (mtu >= JMS_MIN_SIZE);
+ 	allow_10_100 = (dev->chip_id == BCM583XX_DEVICE_ID);
+ 
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index a5849663f65c30..d0f94a5fae5aed 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -558,8 +558,10 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
+ 			of_remove_property(child, prop);
+ 
+ 		phydev = of_phy_find_device(child);
+-		if (phydev)
++		if (phydev) {
+ 			phy_device_remove(phydev);
++			phy_device_free(phydev);
++		}
+ 	}
+ 
+ 	err = mdiobus_register(priv->slave_mii_bus);
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 6a1ae774cfe99c..c7f93329ae753c 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -2774,7 +2774,8 @@ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+ 	mv88e6xxx_reg_lock(chip);
+ 	if (chip->info->ops->port_set_jumbo_size)
+ 		ret = chip->info->ops->port_set_jumbo_size(chip, port, new_mtu);
+-	else if (chip->info->ops->set_max_frame_size)
++	else if (chip->info->ops->set_max_frame_size &&
++		 dsa_is_cpu_port(ds, port))
+ 		ret = chip->info->ops->set_max_frame_size(chip, new_mtu);
+ 	mv88e6xxx_reg_unlock(chip);
+ 
+diff --git a/drivers/net/ethernet/brocade/bna/bna_types.h b/drivers/net/ethernet/brocade/bna/bna_types.h
+index 666b6922e24db3..ebf54d74c2bbe5 100644
+--- a/drivers/net/ethernet/brocade/bna/bna_types.h
++++ b/drivers/net/ethernet/brocade/bna/bna_types.h
+@@ -410,7 +410,7 @@ struct bna_ib {
+ /* Tx object */
+ 
+ /* Tx datapath control structure */
+-#define BNA_Q_NAME_SIZE		16
++#define BNA_Q_NAME_SIZE		(IFNAMSIZ + 6)
+ struct bna_tcb {
+ 	/* Fast path */
+ 	void			**sw_qpt;
+diff --git a/drivers/net/ethernet/brocade/bna/bnad.c b/drivers/net/ethernet/brocade/bna/bnad.c
+index 7e4e831d720f83..9ccfb038ffc70e 100644
+--- a/drivers/net/ethernet/brocade/bna/bnad.c
++++ b/drivers/net/ethernet/brocade/bna/bnad.c
+@@ -1535,8 +1535,9 @@ bnad_tx_msix_register(struct bnad *bnad, struct bnad_tx_info *tx_info,
+ 
+ 	for (i = 0; i < num_txqs; i++) {
+ 		vector_num = tx_info->tcb[i]->intr_vector;
+-		sprintf(tx_info->tcb[i]->name, "%s TXQ %d", bnad->netdev->name,
+-				tx_id + tx_info->tcb[i]->id);
++		snprintf(tx_info->tcb[i]->name, BNA_Q_NAME_SIZE, "%s TXQ %d",
++			 bnad->netdev->name,
++			 tx_id + tx_info->tcb[i]->id);
+ 		err = request_irq(bnad->msix_table[vector_num].vector,
+ 				  (irq_handler_t)bnad_msix_tx, 0,
+ 				  tx_info->tcb[i]->name,
+@@ -1586,9 +1587,9 @@ bnad_rx_msix_register(struct bnad *bnad, struct bnad_rx_info *rx_info,
+ 
+ 	for (i = 0; i < num_rxps; i++) {
+ 		vector_num = rx_info->rx_ctrl[i].ccb->intr_vector;
+-		sprintf(rx_info->rx_ctrl[i].ccb->name, "%s CQ %d",
+-			bnad->netdev->name,
+-			rx_id + rx_info->rx_ctrl[i].ccb->id);
++		snprintf(rx_info->rx_ctrl[i].ccb->name, BNA_Q_NAME_SIZE,
++			 "%s CQ %d", bnad->netdev->name,
++			 rx_id + rx_info->rx_ctrl[i].ccb->id);
+ 		err = request_irq(bnad->msix_table[vector_num].vector,
+ 				  (irq_handler_t)bnad_msix_rx, 0,
+ 				  rx_info->rx_ctrl[i].ccb->name,
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index adb76db66031f5..a591ca0b37787f 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -220,8 +220,8 @@ MODULE_PARM_DESC(macaddr, "FEC Ethernet MAC address");
+ #define PKT_MINBUF_SIZE		64
+ 
+ /* FEC receive acceleration */
+-#define FEC_RACC_IPDIS		(1 << 1)
+-#define FEC_RACC_PRODIS		(1 << 2)
++#define FEC_RACC_IPDIS		BIT(1)
++#define FEC_RACC_PRODIS		BIT(2)
+ #define FEC_RACC_SHIFT16	BIT(7)
+ #define FEC_RACC_OPTIONS	(FEC_RACC_IPDIS | FEC_RACC_PRODIS)
+ 
+@@ -253,8 +253,23 @@ MODULE_PARM_DESC(macaddr, "FEC Ethernet MAC address");
+ #define FEC_MMFR_TA		(2 << 16)
+ #define FEC_MMFR_DATA(v)	(v & 0xffff)
+ /* FEC ECR bits definition */
+-#define FEC_ECR_MAGICEN		(1 << 2)
+-#define FEC_ECR_SLEEP		(1 << 3)
++#define FEC_ECR_RESET           BIT(0)
++#define FEC_ECR_ETHEREN         BIT(1)
++#define FEC_ECR_MAGICEN         BIT(2)
++#define FEC_ECR_SLEEP           BIT(3)
++#define FEC_ECR_EN1588          BIT(4)
++#define FEC_ECR_BYTESWP         BIT(8)
++/* FEC RCR bits definition */
++#define FEC_RCR_LOOP            BIT(0)
++#define FEC_RCR_HALFDPX         BIT(1)
++#define FEC_RCR_MII             BIT(2)
++#define FEC_RCR_PROMISC         BIT(3)
++#define FEC_RCR_BC_REJ          BIT(4)
++#define FEC_RCR_FLOWCTL         BIT(5)
++#define FEC_RCR_RMII            BIT(8)
++#define FEC_RCR_10BASET         BIT(9)
++/* TX WMARK bits */
++#define FEC_TXWMRK_STRFWD       BIT(8)
+ 
+ #define FEC_MII_TIMEOUT		30000 /* us */
+ 
+@@ -949,7 +964,7 @@ fec_restart(struct net_device *ndev)
+ 	u32 val;
+ 	u32 temp_mac[2];
+ 	u32 rcntl = OPT_FRAME_SIZE | 0x04;
+-	u32 ecntl = 0x2; /* ETHEREN */
++	u32 ecntl = FEC_ECR_ETHEREN;
+ 
+ 	/* Whack a reset.  We should wait for this.
+ 	 * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
+@@ -1025,18 +1040,18 @@ fec_restart(struct net_device *ndev)
+ 		    fep->phy_interface == PHY_INTERFACE_MODE_RGMII_TXID)
+ 			rcntl |= (1 << 6);
+ 		else if (fep->phy_interface == PHY_INTERFACE_MODE_RMII)
+-			rcntl |= (1 << 8);
++			rcntl |= FEC_RCR_RMII;
+ 		else
+-			rcntl &= ~(1 << 8);
++			rcntl &= ~FEC_RCR_RMII;
+ 
+ 		/* 1G, 100M or 10M */
+ 		if (ndev->phydev) {
+ 			if (ndev->phydev->speed == SPEED_1000)
+ 				ecntl |= (1 << 5);
+ 			else if (ndev->phydev->speed == SPEED_100)
+-				rcntl &= ~(1 << 9);
++				rcntl &= ~FEC_RCR_10BASET;
+ 			else
+-				rcntl |= (1 << 9);
++				rcntl |= FEC_RCR_10BASET;
+ 		}
+ 	} else {
+ #ifdef FEC_MIIGSK_ENR
+@@ -1095,13 +1110,13 @@ fec_restart(struct net_device *ndev)
+ 
+ 	if (fep->quirks & FEC_QUIRK_ENET_MAC) {
+ 		/* enable ENET endian swap */
+-		ecntl |= (1 << 8);
++		ecntl |= FEC_ECR_BYTESWP;
+ 		/* enable ENET store and forward mode */
+-		writel(1 << 8, fep->hwp + FEC_X_WMRK);
++		writel(FEC_TXWMRK_STRFWD, fep->hwp + FEC_X_WMRK);
+ 	}
+ 
+ 	if (fep->bufdesc_ex)
+-		ecntl |= (1 << 4);
++		ecntl |= FEC_ECR_EN1588;
+ 
+ #ifndef CONFIG_M5272
+ 	/* Enable the MIB statistic event counters */
+@@ -1148,7 +1163,7 @@ static void
+ fec_stop(struct net_device *ndev)
+ {
+ 	struct fec_enet_private *fep = netdev_priv(ndev);
+-	u32 rmii_mode = readl(fep->hwp + FEC_R_CNTRL) & (1 << 8);
++	u32 rmii_mode = readl(fep->hwp + FEC_R_CNTRL) & FEC_RCR_RMII;
+ 	u32 val;
+ 
+ 	/* We cannot expect a graceful transmit stop without link !!! */
+@@ -1167,7 +1182,7 @@ fec_stop(struct net_device *ndev)
+ 		if (fep->quirks & FEC_QUIRK_HAS_AVB) {
+ 			writel(0, fep->hwp + FEC_ECNTRL);
+ 		} else {
+-			writel(1, fep->hwp + FEC_ECNTRL);
++			writel(FEC_ECR_RESET, fep->hwp + FEC_ECNTRL);
+ 			udelay(10);
+ 		}
+ 		writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK);
+@@ -1183,11 +1198,16 @@ fec_stop(struct net_device *ndev)
+ 	/* We have to keep ENET enabled to have MII interrupt stay working */
+ 	if (fep->quirks & FEC_QUIRK_ENET_MAC &&
+ 		!(fep->wol_flag & FEC_WOL_FLAG_SLEEP_ON)) {
+-		writel(2, fep->hwp + FEC_ECNTRL);
++		writel(FEC_ECR_ETHEREN, fep->hwp + FEC_ECNTRL);
+ 		writel(rmii_mode, fep->hwp + FEC_R_CNTRL);
+ 	}
+-}
+ 
++	if (fep->bufdesc_ex) {
++		val = readl(fep->hwp + FEC_ECNTRL);
++		val |= FEC_ECR_EN1588;
++		writel(val, fep->hwp + FEC_ECNTRL);
++	}
++}
+ 
+ static void
+ fec_timeout(struct net_device *ndev, unsigned int txqueue)
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index 780fbb3e1ed062..84e0855069a848 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -640,6 +640,9 @@ void fec_ptp_stop(struct platform_device *pdev)
+ 	struct net_device *ndev = platform_get_drvdata(pdev);
+ 	struct fec_enet_private *fep = netdev_priv(ndev);
+ 
++	if (fep->pps_enable)
++		fec_ptp_enable_pps(fep, 0);
++
+ 	cancel_delayed_work_sync(&fep->time_keep);
+ 	if (fep->ptp_clock)
+ 		ptp_clock_unregister(fep->ptp_clock);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index d3817dd07e3dc3..1fdb42899a9f3d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1155,7 +1155,12 @@ int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv,
+ 	if (!an_changes && link_modes == eproto.admin)
+ 		goto out;
+ 
+-	mlx5_port_set_eth_ptys(mdev, an_disable, link_modes, ext);
++	err = mlx5_port_set_eth_ptys(mdev, an_disable, link_modes, ext);
++	if (err) {
++		netdev_err(priv->netdev, "%s: failed to set ptys reg: %d\n", __func__, err);
++		goto out;
++	}
++
+ 	mlx5_toggle_port_link(mdev);
+ 
+ out:
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c
+index 4b713832fdd559..f5c0a4214c4e56 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c
+@@ -391,7 +391,8 @@ mlxsw_sp_acl_atcam_region_entry_insert(struct mlxsw_sp *mlxsw_sp,
+ 	if (err)
+ 		return err;
+ 
+-	lkey_id = aregion->ops->lkey_id_get(aregion, aentry->enc_key, erp_id);
++	lkey_id = aregion->ops->lkey_id_get(aregion, aentry->ht_key.enc_key,
++					    erp_id);
+ 	if (IS_ERR(lkey_id))
+ 		return PTR_ERR(lkey_id);
+ 	aentry->lkey_id = lkey_id;
+@@ -399,7 +400,7 @@ mlxsw_sp_acl_atcam_region_entry_insert(struct mlxsw_sp *mlxsw_sp,
+ 	kvdl_index = mlxsw_afa_block_first_kvdl_index(rulei->act_block);
+ 	mlxsw_reg_ptce3_pack(ptce3_pl, true, MLXSW_REG_PTCE3_OP_WRITE_WRITE,
+ 			     priority, region->tcam_region_info,
+-			     aentry->enc_key, erp_id,
++			     aentry->ht_key.enc_key, erp_id,
+ 			     aentry->delta_info.start,
+ 			     aentry->delta_info.mask,
+ 			     aentry->delta_info.value,
+@@ -428,7 +429,7 @@ mlxsw_sp_acl_atcam_region_entry_remove(struct mlxsw_sp *mlxsw_sp,
+ 
+ 	mlxsw_reg_ptce3_pack(ptce3_pl, false, MLXSW_REG_PTCE3_OP_WRITE_WRITE, 0,
+ 			     region->tcam_region_info,
+-			     aentry->enc_key, erp_id,
++			     aentry->ht_key.enc_key, erp_id,
+ 			     aentry->delta_info.start,
+ 			     aentry->delta_info.mask,
+ 			     aentry->delta_info.value,
+@@ -457,7 +458,7 @@ mlxsw_sp_acl_atcam_region_entry_action_replace(struct mlxsw_sp *mlxsw_sp,
+ 	kvdl_index = mlxsw_afa_block_first_kvdl_index(rulei->act_block);
+ 	mlxsw_reg_ptce3_pack(ptce3_pl, true, MLXSW_REG_PTCE3_OP_WRITE_UPDATE,
+ 			     priority, region->tcam_region_info,
+-			     aentry->enc_key, erp_id,
++			     aentry->ht_key.enc_key, erp_id,
+ 			     aentry->delta_info.start,
+ 			     aentry->delta_info.mask,
+ 			     aentry->delta_info.value,
+@@ -480,15 +481,13 @@ __mlxsw_sp_acl_atcam_entry_add(struct mlxsw_sp *mlxsw_sp,
+ 	int err;
+ 
+ 	mlxsw_afk_encode(afk, region->key_info, &rulei->values,
+-			 aentry->ht_key.full_enc_key, mask);
++			 aentry->ht_key.enc_key, mask);
+ 
+ 	erp_mask = mlxsw_sp_acl_erp_mask_get(aregion, mask, false);
+ 	if (IS_ERR(erp_mask))
+ 		return PTR_ERR(erp_mask);
+ 	aentry->erp_mask = erp_mask;
+ 	aentry->ht_key.erp_id = mlxsw_sp_acl_erp_mask_erp_id(erp_mask);
+-	memcpy(aentry->enc_key, aentry->ht_key.full_enc_key,
+-	       sizeof(aentry->enc_key));
+ 
+ 	/* Compute all needed delta information and clear the delta bits
+ 	 * from the encrypted key.
+@@ -497,9 +496,8 @@ __mlxsw_sp_acl_atcam_entry_add(struct mlxsw_sp *mlxsw_sp,
+ 	aentry->delta_info.start = mlxsw_sp_acl_erp_delta_start(delta);
+ 	aentry->delta_info.mask = mlxsw_sp_acl_erp_delta_mask(delta);
+ 	aentry->delta_info.value =
+-		mlxsw_sp_acl_erp_delta_value(delta,
+-					     aentry->ht_key.full_enc_key);
+-	mlxsw_sp_acl_erp_delta_clear(delta, aentry->enc_key);
++		mlxsw_sp_acl_erp_delta_value(delta, aentry->ht_key.enc_key);
++	mlxsw_sp_acl_erp_delta_clear(delta, aentry->ht_key.enc_key);
+ 
+ 	/* Add rule to the list of A-TCAM rules, assuming this
+ 	 * rule is intended to A-TCAM. In case this rule does
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
+index 2e8b17e3b93583..3ab87db83b7fc1 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
+@@ -116,9 +116,10 @@ static u16 mlxsw_sp_acl_bf_crc(const u8 *buffer, size_t len)
+ }
+ 
+ static void
+-mlxsw_sp_acl_bf_key_encode(struct mlxsw_sp_acl_atcam_region *aregion,
+-			   struct mlxsw_sp_acl_atcam_entry *aentry,
+-			   char *output, u8 *len)
++__mlxsw_sp_acl_bf_key_encode(struct mlxsw_sp_acl_atcam_region *aregion,
++			     struct mlxsw_sp_acl_atcam_entry *aentry,
++			     char *output, u8 *len, u8 max_chunks, u8 pad_bytes,
++			     u8 key_offset, u8 chunk_key_len, u8 chunk_len)
+ {
+ 	struct mlxsw_afk_key_info *key_info = aregion->region->key_info;
+ 	u8 chunk_index, chunk_count, block_count;
+@@ -129,17 +130,30 @@ mlxsw_sp_acl_bf_key_encode(struct mlxsw_sp_acl_atcam_region *aregion,
+ 	chunk_count = 1 + ((block_count - 1) >> 2);
+ 	erp_region_id = cpu_to_be16(aentry->ht_key.erp_id |
+ 				   (aregion->region->id << 4));
+-	for (chunk_index = MLXSW_BLOOM_KEY_CHUNKS - chunk_count;
+-	     chunk_index < MLXSW_BLOOM_KEY_CHUNKS; chunk_index++) {
+-		memset(chunk, 0, MLXSW_BLOOM_CHUNK_PAD_BYTES);
+-		memcpy(chunk + MLXSW_BLOOM_CHUNK_PAD_BYTES, &erp_region_id,
++	for (chunk_index = max_chunks - chunk_count; chunk_index < max_chunks;
++	     chunk_index++) {
++		memset(chunk, 0, pad_bytes);
++		memcpy(chunk + pad_bytes, &erp_region_id,
+ 		       sizeof(erp_region_id));
+-		memcpy(chunk + MLXSW_BLOOM_CHUNK_KEY_OFFSET,
+-		       &aentry->enc_key[chunk_key_offsets[chunk_index]],
+-		       MLXSW_BLOOM_CHUNK_KEY_BYTES);
+-		chunk += MLXSW_BLOOM_KEY_CHUNK_BYTES;
++		memcpy(chunk + key_offset,
++		       &aentry->ht_key.enc_key[chunk_key_offsets[chunk_index]],
++		       chunk_key_len);
++		chunk += chunk_len;
+ 	}
+-	*len = chunk_count * MLXSW_BLOOM_KEY_CHUNK_BYTES;
++	*len = chunk_count * chunk_len;
++}
++
++static void
++mlxsw_sp_acl_bf_key_encode(struct mlxsw_sp_acl_atcam_region *aregion,
++			   struct mlxsw_sp_acl_atcam_entry *aentry,
++			   char *output, u8 *len)
++{
++	__mlxsw_sp_acl_bf_key_encode(aregion, aentry, output, len,
++				     MLXSW_BLOOM_KEY_CHUNKS,
++				     MLXSW_BLOOM_CHUNK_PAD_BYTES,
++				     MLXSW_BLOOM_CHUNK_KEY_OFFSET,
++				     MLXSW_BLOOM_CHUNK_KEY_BYTES,
++				     MLXSW_BLOOM_KEY_CHUNK_BYTES);
+ }
+ 
+ static unsigned int
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
+index d231f4d2888bee..9eee229303cced 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
+@@ -1217,18 +1217,6 @@ static bool mlxsw_sp_acl_erp_delta_check(void *priv, const void *parent_obj,
+ 	return err ? false : true;
+ }
+ 
+-static int mlxsw_sp_acl_erp_hints_obj_cmp(const void *obj1, const void *obj2)
+-{
+-	const struct mlxsw_sp_acl_erp_key *key1 = obj1;
+-	const struct mlxsw_sp_acl_erp_key *key2 = obj2;
+-
+-	/* For hints purposes, two objects are considered equal
+-	 * in case the masks are the same. Does not matter what
+-	 * the "ctcam" value is.
+-	 */
+-	return memcmp(key1->mask, key2->mask, sizeof(key1->mask));
+-}
+-
+ static void *mlxsw_sp_acl_erp_delta_create(void *priv, void *parent_obj,
+ 					   void *obj)
+ {
+@@ -1308,7 +1296,6 @@ static void mlxsw_sp_acl_erp_root_destroy(void *priv, void *root_priv)
+ static const struct objagg_ops mlxsw_sp_acl_erp_objagg_ops = {
+ 	.obj_size = sizeof(struct mlxsw_sp_acl_erp_key),
+ 	.delta_check = mlxsw_sp_acl_erp_delta_check,
+-	.hints_obj_cmp = mlxsw_sp_acl_erp_hints_obj_cmp,
+ 	.delta_create = mlxsw_sp_acl_erp_delta_create,
+ 	.delta_destroy = mlxsw_sp_acl_erp_delta_destroy,
+ 	.root_create = mlxsw_sp_acl_erp_root_create,
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h
+index a41df10ade9bf4..f28c47ae548807 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h
+@@ -171,9 +171,9 @@ struct mlxsw_sp_acl_atcam_region {
+ };
+ 
+ struct mlxsw_sp_acl_atcam_entry_ht_key {
+-	char full_enc_key[MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN]; /* Encoded
+-								 * key.
+-								 */
++	char enc_key[MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN]; /* Encoded key, minus
++							    * delta bits.
++							    */
+ 	u8 erp_id;
+ };
+ 
+@@ -185,9 +185,6 @@ struct mlxsw_sp_acl_atcam_entry {
+ 	struct rhash_head ht_node;
+ 	struct list_head list; /* Member in entries_list */
+ 	struct mlxsw_sp_acl_atcam_entry_ht_key ht_key;
+-	char enc_key[MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN]; /* Encoded key,
+-							    * minus delta bits.
+-							    */
+ 	struct {
+ 		u16 start;
+ 		u8 mask;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c
+index 0157bcd2efffa9..198022bc1f941b 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_l2.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c
+@@ -2762,25 +2762,6 @@ static int qed_configure_filter_mcast(struct qed_dev *cdev,
+ 	return qed_filter_mcast_cmd(cdev, &mcast, QED_SPQ_MODE_CB, NULL);
+ }
+ 
+-static int qed_configure_filter(struct qed_dev *cdev,
+-				struct qed_filter_params *params)
+-{
+-	enum qed_filter_rx_mode_type accept_flags;
+-
+-	switch (params->type) {
+-	case QED_FILTER_TYPE_UCAST:
+-		return qed_configure_filter_ucast(cdev, &params->filter.ucast);
+-	case QED_FILTER_TYPE_MCAST:
+-		return qed_configure_filter_mcast(cdev, &params->filter.mcast);
+-	case QED_FILTER_TYPE_RX_MODE:
+-		accept_flags = params->filter.accept_flags;
+-		return qed_configure_filter_rx_mode(cdev, accept_flags);
+-	default:
+-		DP_NOTICE(cdev, "Unknown filter type %d\n", (int)params->type);
+-		return -EINVAL;
+-	}
+-}
+-
+ static int qed_configure_arfs_searcher(struct qed_dev *cdev,
+ 				       enum qed_filter_config_mode mode)
+ {
+@@ -2903,7 +2884,9 @@ static const struct qed_eth_ops qed_eth_ops_pass = {
+ 	.q_rx_stop = &qed_stop_rxq,
+ 	.q_tx_start = &qed_start_txq,
+ 	.q_tx_stop = &qed_stop_txq,
+-	.filter_config = &qed_configure_filter,
++	.filter_config_rx_mode = &qed_configure_filter_rx_mode,
++	.filter_config_ucast = &qed_configure_filter_ucast,
++	.filter_config_mcast = &qed_configure_filter_mcast,
+ 	.fastpath_stop = &qed_fastpath_stop,
+ 	.eth_cqe_completion = &qed_fp_cqe_completion,
+ 	.get_vport_stats = &qed_get_vport_stats,
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c
+index 5f4962d90022ee..f4385466418ce7 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c
+@@ -619,30 +619,28 @@ static int qede_set_ucast_rx_mac(struct qede_dev *edev,
+ 				 enum qed_filter_xcast_params_type opcode,
+ 				 unsigned char mac[ETH_ALEN])
+ {
+-	struct qed_filter_params filter_cmd;
++	struct qed_filter_ucast_params ucast;
+ 
+-	memset(&filter_cmd, 0, sizeof(filter_cmd));
+-	filter_cmd.type = QED_FILTER_TYPE_UCAST;
+-	filter_cmd.filter.ucast.type = opcode;
+-	filter_cmd.filter.ucast.mac_valid = 1;
+-	ether_addr_copy(filter_cmd.filter.ucast.mac, mac);
++	memset(&ucast, 0, sizeof(ucast));
++	ucast.type = opcode;
++	ucast.mac_valid = 1;
++	ether_addr_copy(ucast.mac, mac);
+ 
+-	return edev->ops->filter_config(edev->cdev, &filter_cmd);
++	return edev->ops->filter_config_ucast(edev->cdev, &ucast);
+ }
+ 
+ static int qede_set_ucast_rx_vlan(struct qede_dev *edev,
+ 				  enum qed_filter_xcast_params_type opcode,
+ 				  u16 vid)
+ {
+-	struct qed_filter_params filter_cmd;
++	struct qed_filter_ucast_params ucast;
+ 
+-	memset(&filter_cmd, 0, sizeof(filter_cmd));
+-	filter_cmd.type = QED_FILTER_TYPE_UCAST;
+-	filter_cmd.filter.ucast.type = opcode;
+-	filter_cmd.filter.ucast.vlan_valid = 1;
+-	filter_cmd.filter.ucast.vlan = vid;
++	memset(&ucast, 0, sizeof(ucast));
++	ucast.type = opcode;
++	ucast.vlan_valid = 1;
++	ucast.vlan = vid;
+ 
+-	return edev->ops->filter_config(edev->cdev, &filter_cmd);
++	return edev->ops->filter_config_ucast(edev->cdev, &ucast);
+ }
+ 
+ static int qede_config_accept_any_vlan(struct qede_dev *edev, bool action)
+@@ -1057,18 +1055,17 @@ static int qede_set_mcast_rx_mac(struct qede_dev *edev,
+ 				 enum qed_filter_xcast_params_type opcode,
+ 				 unsigned char *mac, int num_macs)
+ {
+-	struct qed_filter_params filter_cmd;
++	struct qed_filter_mcast_params mcast;
+ 	int i;
+ 
+-	memset(&filter_cmd, 0, sizeof(filter_cmd));
+-	filter_cmd.type = QED_FILTER_TYPE_MCAST;
+-	filter_cmd.filter.mcast.type = opcode;
+-	filter_cmd.filter.mcast.num = num_macs;
++	memset(&mcast, 0, sizeof(mcast));
++	mcast.type = opcode;
++	mcast.num = num_macs;
+ 
+ 	for (i = 0; i < num_macs; i++, mac += ETH_ALEN)
+-		ether_addr_copy(filter_cmd.filter.mcast.mac[i], mac);
++		ether_addr_copy(mcast.mac[i], mac);
+ 
+-	return edev->ops->filter_config(edev->cdev, &filter_cmd);
++	return edev->ops->filter_config_mcast(edev->cdev, &mcast);
+ }
+ 
+ int qede_set_mac_addr(struct net_device *ndev, void *p)
+@@ -1194,7 +1191,6 @@ void qede_config_rx_mode(struct net_device *ndev)
+ {
+ 	enum qed_filter_rx_mode_type accept_flags;
+ 	struct qede_dev *edev = netdev_priv(ndev);
+-	struct qed_filter_params rx_mode;
+ 	unsigned char *uc_macs, *temp;
+ 	struct netdev_hw_addr *ha;
+ 	int rc, uc_count;
+@@ -1220,10 +1216,6 @@ void qede_config_rx_mode(struct net_device *ndev)
+ 
+ 	netif_addr_unlock_bh(ndev);
+ 
+-	/* Configure the struct for the Rx mode */
+-	memset(&rx_mode, 0, sizeof(struct qed_filter_params));
+-	rx_mode.type = QED_FILTER_TYPE_RX_MODE;
+-
+ 	/* Remove all previous unicast secondary macs and multicast macs
+ 	 * (configure / leave the primary mac)
+ 	 */
+@@ -1271,8 +1263,7 @@ void qede_config_rx_mode(struct net_device *ndev)
+ 		qede_config_accept_any_vlan(edev, false);
+ 	}
+ 
+-	rx_mode.filter.accept_flags = accept_flags;
+-	edev->ops->filter_config(edev->cdev, &rx_mode);
++	edev->ops->filter_config_rx_mode(edev->cdev, accept_flags);
+ out:
+ 	kfree(uc_macs);
+ }
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index d24eb5ee152a57..4c588fc43eb9b3 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4288,7 +4288,8 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
+ 	if (unlikely(!rtl_tx_slots_avail(tp))) {
+ 		if (net_ratelimit())
+ 			netdev_err(dev, "BUG! Tx Ring full when queue awake!\n");
+-		goto err_stop_0;
++		netif_stop_queue(dev);
++		return NETDEV_TX_BUSY;
+ 	}
+ 
+ 	opts[1] = rtl8169_tx_vlan_tag(skb);
+@@ -4361,11 +4362,6 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
+ 	dev_kfree_skb_any(skb);
+ 	dev->stats.tx_dropped++;
+ 	return NETDEV_TX_OK;
+-
+-err_stop_0:
+-	netif_stop_queue(dev);
+-	dev->stats.tx_dropped++;
+-	return NETDEV_TX_BUSY;
+ }
+ 
+ static unsigned int rtl_last_frag_len(struct sk_buff *skb)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index 5c6073d95f023a..c315e0605baa97 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -938,7 +938,7 @@ static void dwmac4_set_mac_loopback(void __iomem *ioaddr, bool enable)
+ }
+ 
+ static void dwmac4_update_vlan_hash(struct mac_device_info *hw, u32 hash,
+-				    __le16 perfect_match, bool is_double)
++				    u16 perfect_match, bool is_double)
+ {
+ 	void __iomem *ioaddr = hw->pcsr;
+ 	u32 value;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+index 86f70ea9a520cd..357762ce23ff94 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+@@ -581,7 +581,7 @@ static int dwxgmac2_rss_configure(struct mac_device_info *hw,
+ }
+ 
+ static void dwxgmac2_update_vlan_hash(struct mac_device_info *hw, u32 hash,
+-				      __le16 perfect_match, bool is_double)
++				      u16 perfect_match, bool is_double)
+ {
+ 	void __iomem *ioaddr = hw->pcsr;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+index 8b7ec2457eba23..d7ea2fd944ee64 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
++++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+@@ -368,7 +368,7 @@ struct stmmac_ops {
+ 			     struct stmmac_rss *cfg, u32 num_rxq);
+ 	/* VLAN */
+ 	void (*update_vlan_hash)(struct mac_device_info *hw, u32 hash,
+-				 __le16 perfect_match, bool is_double);
++				 u16 perfect_match, bool is_double);
+ 	void (*enable_vlan)(struct mac_device_info *hw, u32 type);
+ 	int (*add_hw_vlan_rx_fltr)(struct net_device *dev,
+ 				   struct mac_device_info *hw,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 8416a186cd7f30..b8581a711514c9 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -4660,7 +4660,7 @@ static u32 stmmac_vid_crc32_le(__le16 vid_le)
+ static int stmmac_vlan_update(struct stmmac_priv *priv, bool is_double)
+ {
+ 	u32 crc, hash = 0;
+-	__le16 pmatch = 0;
++	u16 pmatch = 0;
+ 	int count = 0;
+ 	u16 vid = 0;
+ 
+@@ -4675,7 +4675,7 @@ static int stmmac_vlan_update(struct stmmac_priv *priv, bool is_double)
+ 		if (count > 2) /* VID = 0 always passes filter */
+ 			return -EOPNOTSUPP;
+ 
+-		pmatch = cpu_to_le16(vid);
++		pmatch = vid;
+ 		hash = 0;
+ 	}
+ 
+diff --git a/drivers/net/netconsole.c b/drivers/net/netconsole.c
+index 92001f7af380c7..924370e530030f 100644
+--- a/drivers/net/netconsole.c
++++ b/drivers/net/netconsole.c
+@@ -715,6 +715,7 @@ static int netconsole_netdev_event(struct notifier_block *this,
+ 				/* rtnl_lock already held
+ 				 * we might sleep in __netpoll_cleanup()
+ 				 */
++				nt->enabled = false;
+ 				spin_unlock_irqrestore(&target_list_lock, flags);
+ 
+ 				__netpoll_cleanup(&nt->np);
+@@ -722,7 +723,6 @@ static int netconsole_netdev_event(struct notifier_block *this,
+ 				spin_lock_irqsave(&target_list_lock, flags);
+ 				dev_put(nt->np.dev);
+ 				nt->np.dev = NULL;
+-				nt->enabled = false;
+ 				stopped = true;
+ 				netconsole_target_put(nt);
+ 				goto restart;
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index d2a8238e144a6b..47cc54a64b56d7 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -216,6 +216,7 @@ static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 			break;
+ 		default:
+ 			/* not ip - do not know what to do */
++			kfree_skb(skbn);
+ 			goto skip;
+ 		}
+ 
+diff --git a/drivers/net/usb/sr9700.c b/drivers/net/usb/sr9700.c
+index 3fac642bec7726..8d2e3daf03cf2c 100644
+--- a/drivers/net/usb/sr9700.c
++++ b/drivers/net/usb/sr9700.c
+@@ -178,6 +178,7 @@ static int sr_mdio_read(struct net_device *netdev, int phy_id, int loc)
+ 	struct usbnet *dev = netdev_priv(netdev);
+ 	__le16 res;
+ 	int rc = 0;
++	int err;
+ 
+ 	if (phy_id) {
+ 		netdev_dbg(netdev, "Only internal phy supported\n");
+@@ -188,11 +189,17 @@ static int sr_mdio_read(struct net_device *netdev, int phy_id, int loc)
+ 	if (loc == MII_BMSR) {
+ 		u8 value;
+ 
+-		sr_read_reg(dev, SR_NSR, &value);
++		err = sr_read_reg(dev, SR_NSR, &value);
++		if (err < 0)
++			return err;
++
+ 		if (value & NSR_LINKST)
+ 			rc = 1;
+ 	}
+-	sr_share_read_word(dev, 1, loc, &res);
++	err = sr_share_read_word(dev, 1, loc, &res);
++	if (err < 0)
++		return err;
++
+ 	if (rc == 1)
+ 		res = le16_to_cpu(res) | BMSR_LSTATUS;
+ 	else
+diff --git a/drivers/net/wireless/ath/ath11k/dp.h b/drivers/net/wireless/ath/ath11k/dp.h
+index c4972233149f40..89dc3ab2e2fb51 100644
+--- a/drivers/net/wireless/ath/ath11k/dp.h
++++ b/drivers/net/wireless/ath/ath11k/dp.h
+@@ -40,6 +40,7 @@ struct dp_rx_tid {
+ 
+ #define DP_REO_DESC_FREE_THRESHOLD  64
+ #define DP_REO_DESC_FREE_TIMEOUT_MS 1000
++#define DP_MON_PURGE_TIMEOUT_MS     100
+ #define DP_MON_SERVICE_BUDGET       128
+ 
+ struct dp_reo_cache_flush_elem {
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index a50325f4634bae..6c4b84282e44cc 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -274,6 +274,28 @@ static void ath11k_dp_service_mon_ring(struct timer_list *t)
+ 		  msecs_to_jiffies(ATH11K_MON_TIMER_INTERVAL));
+ }
+ 
++static int ath11k_dp_purge_mon_ring(struct ath11k_base *ab)
++{
++	int i, reaped = 0;
++	unsigned long timeout = jiffies + msecs_to_jiffies(DP_MON_PURGE_TIMEOUT_MS);
++
++	do {
++		for (i = 0; i < ab->hw_params.num_rxmda_per_pdev; i++)
++			reaped += ath11k_dp_rx_process_mon_rings(ab, i,
++								 NULL,
++								 DP_MON_SERVICE_BUDGET);
++
++		/* nothing more to reap */
++		if (reaped < DP_MON_SERVICE_BUDGET)
++			return 0;
++
++	} while (time_before(jiffies, timeout));
++
++	ath11k_warn(ab, "dp mon ring purge timeout");
++
++	return -ETIMEDOUT;
++}
++
+ /* Returns number of Rx buffers replenished */
+ int ath11k_dp_rxbufs_replenish(struct ath11k_base *ab, int mac_id,
+ 			       struct dp_rxdma_ring *rx_ring,
+@@ -1830,8 +1852,7 @@ static void ath11k_dp_rx_h_csum_offload(struct sk_buff *msdu)
+ 			  CHECKSUM_NONE : CHECKSUM_UNNECESSARY;
+ }
+ 
+-static int ath11k_dp_rx_crypto_mic_len(struct ath11k *ar,
+-				       enum hal_encrypt_type enctype)
++int ath11k_dp_rx_crypto_mic_len(struct ath11k *ar, enum hal_encrypt_type enctype)
+ {
+ 	switch (enctype) {
+ 	case HAL_ENCRYPT_TYPE_OPEN:
+@@ -5065,3 +5086,29 @@ int ath11k_dp_rx_pdev_mon_detach(struct ath11k *ar)
+ 	ath11k_dp_mon_link_free(ar);
+ 	return 0;
+ }
++
++int ath11k_dp_rx_pktlog_start(struct ath11k_base *ab)
++{
++	/* start reap timer */
++	mod_timer(&ab->mon_reap_timer,
++		  jiffies + msecs_to_jiffies(ATH11K_MON_TIMER_INTERVAL));
++
++	return 0;
++}
++
++int ath11k_dp_rx_pktlog_stop(struct ath11k_base *ab, bool stop_timer)
++{
++	int ret;
++
++	if (stop_timer)
++		del_timer_sync(&ab->mon_reap_timer);
++
++	/* reap all the monitor related rings */
++	ret = ath11k_dp_purge_mon_ring(ab);
++	if (ret) {
++		ath11k_warn(ab, "failed to purge dp mon ring: %d\n", ret);
++		return ret;
++	}
++
++	return 0;
++}
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.h b/drivers/net/wireless/ath/ath11k/dp_rx.h
+index 6986752fc4b68b..c322e30caa9683 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.h
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.h
+@@ -1,6 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
++ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #ifndef ATH11K_DP_RX_H
+ #define ATH11K_DP_RX_H
+@@ -92,4 +93,9 @@ int ath11k_dp_rx_pdev_mon_detach(struct ath11k *ar);
+ int ath11k_dp_rx_pdev_mon_attach(struct ath11k *ar);
+ int ath11k_peer_rx_frag_setup(struct ath11k *ar, const u8 *peer_mac, int vdev_id);
+ 
++int ath11k_dp_rx_pktlog_start(struct ath11k_base *ab);
++int ath11k_dp_rx_pktlog_stop(struct ath11k_base *ab, bool stop_timer);
++
++int ath11k_dp_rx_crypto_mic_len(struct ath11k *ar, enum hal_encrypt_type enctype);
++
+ #endif /* ATH11K_DP_RX_H */
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 3170c54c97b744..b66b6a7072d821 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -2410,6 +2410,7 @@ static int ath11k_install_key(struct ath11k_vif *arvif,
+ 
+ 	switch (key->cipher) {
+ 	case WLAN_CIPHER_SUITE_CCMP:
++	case WLAN_CIPHER_SUITE_CCMP_256:
+ 		arg.key_cipher = WMI_CIPHER_AES_CCM;
+ 		/* TODO: Re-check if flag is valid */
+ 		key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT;
+@@ -2419,12 +2420,10 @@ static int ath11k_install_key(struct ath11k_vif *arvif,
+ 		arg.key_txmic_len = 8;
+ 		arg.key_rxmic_len = 8;
+ 		break;
+-	case WLAN_CIPHER_SUITE_CCMP_256:
+-		arg.key_cipher = WMI_CIPHER_AES_CCM;
+-		break;
+ 	case WLAN_CIPHER_SUITE_GCMP:
+ 	case WLAN_CIPHER_SUITE_GCMP_256:
+ 		arg.key_cipher = WMI_CIPHER_AES_GCM;
++		key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT;
+ 		break;
+ 	default:
+ 		ath11k_warn(ar->ab, "cipher %d is not supported\n", key->cipher);
+@@ -3946,7 +3945,10 @@ static int ath11k_mac_mgmt_tx_wmi(struct ath11k *ar, struct ath11k_vif *arvif,
+ {
+ 	struct ath11k_base *ab = ar->ab;
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
++	struct ath11k_skb_cb *skb_cb = ATH11K_SKB_CB(skb);
+ 	struct ieee80211_tx_info *info;
++	enum hal_encrypt_type enctype;
++	unsigned int mic_len;
+ 	dma_addr_t paddr;
+ 	int buf_id;
+ 	int ret;
+@@ -3966,7 +3968,12 @@ static int ath11k_mac_mgmt_tx_wmi(struct ath11k *ar, struct ath11k_vif *arvif,
+ 		     ieee80211_is_deauth(hdr->frame_control) ||
+ 		     ieee80211_is_disassoc(hdr->frame_control)) &&
+ 		     ieee80211_has_protected(hdr->frame_control)) {
+-			skb_put(skb, IEEE80211_CCMP_MIC_LEN);
++			if (!(skb_cb->flags & ATH11K_SKB_CIPHER_SET))
++				ath11k_warn(ab, "WMI management tx frame without ATH11K_SKB_CIPHER_SET");
++
++			enctype = ath11k_dp_tx_get_encrypt_type(skb_cb->cipher);
++			mic_len = ath11k_dp_rx_crypto_mic_len(ar, enctype);
++			skb_put(skb, mic_len);
+ 		}
+ 	}
+ 
+@@ -4149,6 +4156,10 @@ static int ath11k_mac_config_mon_status_default(struct ath11k *ar, bool enable)
+ 						       &tlv_filter);
+ 	}
+ 
++	if (enable && !ar->ab->hw_params.rxdma1_enable)
++		mod_timer(&ar->ab->mon_reap_timer, jiffies +
++			  msecs_to_jiffies(ATH11K_MON_TIMER_INTERVAL));
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
+index 7717eb85a1db68..47c0e8e429e544 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
+@@ -2567,7 +2567,6 @@ wlc_lcnphy_tx_iqlo_cal(struct brcms_phy *pi,
+ 
+ 	struct lcnphy_txgains cal_gains, temp_gains;
+ 	u16 hash;
+-	u8 band_idx;
+ 	int j;
+ 	u16 ncorr_override[5];
+ 	u16 syst_coeffs[] = { 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000,
+@@ -2599,6 +2598,9 @@ wlc_lcnphy_tx_iqlo_cal(struct brcms_phy *pi,
+ 	u16 *values_to_save;
+ 	struct brcms_phy_lcnphy *pi_lcn = pi->u.pi_lcnphy;
+ 
++	if (WARN_ON(CHSPEC_IS5G(pi->radio_chanspec)))
++		return;
++
+ 	values_to_save = kmalloc_array(20, sizeof(u16), GFP_ATOMIC);
+ 	if (NULL == values_to_save)
+ 		return;
+@@ -2662,20 +2664,18 @@ wlc_lcnphy_tx_iqlo_cal(struct brcms_phy *pi,
+ 	hash = (target_gains->gm_gain << 8) |
+ 	       (target_gains->pga_gain << 4) | (target_gains->pad_gain);
+ 
+-	band_idx = (CHSPEC_IS5G(pi->radio_chanspec) ? 1 : 0);
+-
+ 	cal_gains = *target_gains;
+ 	memset(ncorr_override, 0, sizeof(ncorr_override));
+-	for (j = 0; j < iqcal_gainparams_numgains_lcnphy[band_idx]; j++) {
+-		if (hash == tbl_iqcal_gainparams_lcnphy[band_idx][j][0]) {
++	for (j = 0; j < iqcal_gainparams_numgains_lcnphy[0]; j++) {
++		if (hash == tbl_iqcal_gainparams_lcnphy[0][j][0]) {
+ 			cal_gains.gm_gain =
+-				tbl_iqcal_gainparams_lcnphy[band_idx][j][1];
++				tbl_iqcal_gainparams_lcnphy[0][j][1];
+ 			cal_gains.pga_gain =
+-				tbl_iqcal_gainparams_lcnphy[band_idx][j][2];
++				tbl_iqcal_gainparams_lcnphy[0][j][2];
+ 			cal_gains.pad_gain =
+-				tbl_iqcal_gainparams_lcnphy[band_idx][j][3];
++				tbl_iqcal_gainparams_lcnphy[0][j][3];
+ 			memcpy(ncorr_override,
+-			       &tbl_iqcal_gainparams_lcnphy[band_idx][j][3],
++			       &tbl_iqcal_gainparams_lcnphy[0][j][3],
+ 			       sizeof(ncorr_override));
+ 			break;
+ 		}
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index e1196c565a62f7..03ba8ed995bf27 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -930,6 +930,8 @@ mwifiex_init_new_priv_params(struct mwifiex_private *priv,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	priv->bss_num = mwifiex_get_unused_bss_num(adapter, priv->bss_type);
++
+ 	spin_lock_irqsave(&adapter->main_proc_lock, flags);
+ 	adapter->main_locked = false;
+ 	spin_unlock_irqrestore(&adapter->main_proc_lock, flags);
+diff --git a/drivers/net/wireless/virt_wifi.c b/drivers/net/wireless/virt_wifi.c
+index 514f2c1124b618..dd6675436bda6d 100644
+--- a/drivers/net/wireless/virt_wifi.c
++++ b/drivers/net/wireless/virt_wifi.c
+@@ -136,6 +136,9 @@ static struct ieee80211_supported_band band_5ghz = {
+ /* Assigned at module init. Guaranteed locally-administered and unicast. */
+ static u8 fake_router_bssid[ETH_ALEN] __ro_after_init = {};
+ 
++#define VIRT_WIFI_SSID "VirtWifi"
++#define VIRT_WIFI_SSID_LEN 8
++
+ static void virt_wifi_inform_bss(struct wiphy *wiphy)
+ {
+ 	u64 tsf = div_u64(ktime_get_boottime_ns(), 1000);
+@@ -146,8 +149,8 @@ static void virt_wifi_inform_bss(struct wiphy *wiphy)
+ 		u8 ssid[8];
+ 	} __packed ssid = {
+ 		.tag = WLAN_EID_SSID,
+-		.len = 8,
+-		.ssid = "VirtWifi",
++		.len = VIRT_WIFI_SSID_LEN,
++		.ssid = VIRT_WIFI_SSID,
+ 	};
+ 
+ 	informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz,
+@@ -213,6 +216,8 @@ struct virt_wifi_netdev_priv {
+ 	struct net_device *upperdev;
+ 	u32 tx_packets;
+ 	u32 tx_failed;
++	u32 connect_requested_ssid_len;
++	u8 connect_requested_ssid[IEEE80211_MAX_SSID_LEN];
+ 	u8 connect_requested_bss[ETH_ALEN];
+ 	bool is_up;
+ 	bool is_connected;
+@@ -229,6 +234,12 @@ static int virt_wifi_connect(struct wiphy *wiphy, struct net_device *netdev,
+ 	if (priv->being_deleted || !priv->is_up)
+ 		return -EBUSY;
+ 
++	if (!sme->ssid)
++		return -EINVAL;
++
++	priv->connect_requested_ssid_len = sme->ssid_len;
++	memcpy(priv->connect_requested_ssid, sme->ssid, sme->ssid_len);
++
+ 	could_schedule = schedule_delayed_work(&priv->connect, HZ * 2);
+ 	if (!could_schedule)
+ 		return -EBUSY;
+@@ -252,12 +263,15 @@ static void virt_wifi_connect_complete(struct work_struct *work)
+ 		container_of(work, struct virt_wifi_netdev_priv, connect.work);
+ 	u8 *requested_bss = priv->connect_requested_bss;
+ 	bool right_addr = ether_addr_equal(requested_bss, fake_router_bssid);
++	bool right_ssid = priv->connect_requested_ssid_len == VIRT_WIFI_SSID_LEN &&
++			  !memcmp(priv->connect_requested_ssid, VIRT_WIFI_SSID,
++				  priv->connect_requested_ssid_len);
+ 	u16 status = WLAN_STATUS_SUCCESS;
+ 
+ 	if (is_zero_ether_addr(requested_bss))
+ 		requested_bss = NULL;
+ 
+-	if (!priv->is_up || (requested_bss && !right_addr))
++	if (!priv->is_up || (requested_bss && !right_addr) || !right_ssid)
+ 		status = WLAN_STATUS_UNSPECIFIED_FAILURE;
+ 	else
+ 		priv->is_connected = true;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index a7131f4752e288..78cac4220e03af 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -486,22 +486,13 @@ static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq)
+ 	nvmeq->last_sq_tail = nvmeq->sq_tail;
+ }
+ 
+-/**
+- * nvme_submit_cmd() - Copy a command into a queue and ring the doorbell
+- * @nvmeq: The queue to use
+- * @cmd: The command to send
+- * @write_sq: whether to write to the SQ doorbell
+- */
+-static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd,
+-			    bool write_sq)
++static inline void nvme_sq_copy_cmd(struct nvme_queue *nvmeq,
++				    struct nvme_command *cmd)
+ {
+-	spin_lock(&nvmeq->sq_lock);
+ 	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
+-	       cmd, sizeof(*cmd));
++		absolute_pointer(cmd), sizeof(*cmd));
+ 	if (++nvmeq->sq_tail == nvmeq->q_depth)
+ 		nvmeq->sq_tail = 0;
+-	nvme_write_sq_db(nvmeq, write_sq);
+-	spin_unlock(&nvmeq->sq_lock);
+ }
+ 
+ static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx)
+@@ -945,10 +936,14 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	}
+ 
+ 	blk_mq_start_request(req);
+-	nvme_submit_cmd(nvmeq, cmnd, bd->last);
++	spin_lock(&nvmeq->sq_lock);
++	nvme_sq_copy_cmd(nvmeq, &iod->cmd);
++	nvme_write_sq_db(nvmeq, bd->last);
++	spin_unlock(&nvmeq->sq_lock);
+ 	return BLK_STS_OK;
+ out_unmap_data:
+-	nvme_unmap_data(dev, req);
++	if (blk_rq_nr_phys_segments(req))
++		nvme_unmap_data(dev, req);
+ out_free_cmd:
+ 	nvme_cleanup_cmd(req);
+ 	return ret;
+@@ -1120,7 +1115,11 @@ static void nvme_pci_submit_async_event(struct nvme_ctrl *ctrl)
+ 	memset(&c, 0, sizeof(c));
+ 	c.common.opcode = nvme_admin_async_event;
+ 	c.common.command_id = NVME_AQ_BLK_MQ_DEPTH;
+-	nvme_submit_cmd(nvmeq, &c, true);
++
++	spin_lock(&nvmeq->sq_lock);
++	nvme_sq_copy_cmd(nvmeq, &c);
++	nvme_write_sq_db(nvmeq, true);
++	spin_unlock(&nvmeq->sq_lock);
+ }
+ 
+ static int adapter_delete_queue(struct nvme_dev *dev, u8 opcode, u16 id)
+@@ -2847,6 +2846,13 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev)
+ 			return NVME_QUIRK_SIMPLE_SUSPEND;
+ 	}
+ 
++	/*
++	 * NVMe SSD drops off the PCIe bus after system idle
++	 * for 10 hours on a Lenovo N60z board.
++	 */
++	if (dmi_match(DMI_BOARD_NAME, "LXKT-ZXEG-N6"))
++		return NVME_QUIRK_NO_APST;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/parport/procfs.c b/drivers/parport/procfs.c
+index d740eba3c0999e..8400a379186ea0 100644
+--- a/drivers/parport/procfs.c
++++ b/drivers/parport/procfs.c
+@@ -51,12 +51,12 @@ static int do_active_device(struct ctl_table *table, int write,
+ 	
+ 	for (dev = port->devices; dev ; dev = dev->next) {
+ 		if(dev == port->cad) {
+-			len += sprintf(buffer, "%s\n", dev->name);
++			len += snprintf(buffer, sizeof(buffer), "%s\n", dev->name);
+ 		}
+ 	}
+ 
+ 	if(!len) {
+-		len += sprintf(buffer, "%s\n", "none");
++		len += snprintf(buffer, sizeof(buffer), "%s\n", "none");
+ 	}
+ 
+ 	if (len > *lenp)
+@@ -87,19 +87,19 @@ static int do_autoprobe(struct ctl_table *table, int write,
+ 	}
+ 	
+ 	if ((str = info->class_name) != NULL)
+-		len += sprintf (buffer + len, "CLASS:%s;\n", str);
++		len += snprintf (buffer + len, sizeof(buffer) - len, "CLASS:%s;\n", str);
+ 
+ 	if ((str = info->model) != NULL)
+-		len += sprintf (buffer + len, "MODEL:%s;\n", str);
++		len += snprintf (buffer + len, sizeof(buffer) - len, "MODEL:%s;\n", str);
+ 
+ 	if ((str = info->mfr) != NULL)
+-		len += sprintf (buffer + len, "MANUFACTURER:%s;\n", str);
++		len += snprintf (buffer + len, sizeof(buffer) - len, "MANUFACTURER:%s;\n", str);
+ 
+ 	if ((str = info->description) != NULL)
+-		len += sprintf (buffer + len, "DESCRIPTION:%s;\n", str);
++		len += snprintf (buffer + len, sizeof(buffer) - len, "DESCRIPTION:%s;\n", str);
+ 
+ 	if ((str = info->cmdset) != NULL)
+-		len += sprintf (buffer + len, "COMMAND SET:%s;\n", str);
++		len += snprintf (buffer + len, sizeof(buffer) - len, "COMMAND SET:%s;\n", str);
+ 
+ 	if (len > *lenp)
+ 		len = *lenp;
+@@ -117,7 +117,7 @@ static int do_hardware_base_addr(struct ctl_table *table, int write,
+ 				 void *result, size_t *lenp, loff_t *ppos)
+ {
+ 	struct parport *port = (struct parport *)table->extra1;
+-	char buffer[20];
++	char buffer[64];
+ 	int len = 0;
+ 
+ 	if (*ppos) {
+@@ -128,7 +128,7 @@ static int do_hardware_base_addr(struct ctl_table *table, int write,
+ 	if (write) /* permissions prevent this anyway */
+ 		return -EACCES;
+ 
+-	len += sprintf (buffer, "%lu\t%lu\n", port->base, port->base_hi);
++	len += snprintf (buffer, sizeof(buffer), "%lu\t%lu\n", port->base, port->base_hi);
+ 
+ 	if (len > *lenp)
+ 		len = *lenp;
+@@ -155,7 +155,7 @@ static int do_hardware_irq(struct ctl_table *table, int write,
+ 	if (write) /* permissions prevent this anyway */
+ 		return -EACCES;
+ 
+-	len += sprintf (buffer, "%d\n", port->irq);
++	len += snprintf (buffer, sizeof(buffer), "%d\n", port->irq);
+ 
+ 	if (len > *lenp)
+ 		len = *lenp;
+@@ -182,7 +182,7 @@ static int do_hardware_dma(struct ctl_table *table, int write,
+ 	if (write) /* permissions prevent this anyway */
+ 		return -EACCES;
+ 
+-	len += sprintf (buffer, "%d\n", port->dma);
++	len += snprintf (buffer, sizeof(buffer), "%d\n", port->dma);
+ 
+ 	if (len > *lenp)
+ 		len = *lenp;
+@@ -213,7 +213,7 @@ static int do_hardware_modes(struct ctl_table *table, int write,
+ #define printmode(x)							\
+ do {									\
+ 	if (port->modes & PARPORT_MODE_##x)				\
+-		len += sprintf(buffer + len, "%s%s", f++ ? "," : "", #x); \
++		len += snprintf(buffer + len, sizeof(buffer) - len, "%s%s", f++ ? "," : "", #x); \
+ } while (0)
+ 		int f = 0;
+ 		printmode(PCSPP);
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 2d6c77dcc815c9..d0b3ec2373850d 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -680,8 +680,8 @@ static void _hv_pcifront_read_config(struct hv_pci_dev *hpdev, int where,
+ 		   PCI_CAPABILITY_LIST) {
+ 		/* ROM BARs are unimplemented */
+ 		*val = 0;
+-	} else if (where >= PCI_INTERRUPT_LINE && where + size <=
+-		   PCI_INTERRUPT_PIN) {
++	} else if ((where >= PCI_INTERRUPT_LINE && where + size <= PCI_INTERRUPT_PIN) ||
++		   (where >= PCI_INTERRUPT_PIN && where + size <= PCI_MIN_GNT)) {
+ 		/*
+ 		 * Interrupt Line and Interrupt PIN are hard-wired to zero
+ 		 * because this front-end only supports message-signaled
+diff --git a/drivers/pci/controller/pcie-rockchip.c b/drivers/pci/controller/pcie-rockchip.c
+index 1aa84035a8bc77..bdce1ba7c1bc08 100644
+--- a/drivers/pci/controller/pcie-rockchip.c
++++ b/drivers/pci/controller/pcie-rockchip.c
+@@ -120,7 +120,7 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
+ 
+ 	if (rockchip->is_rc) {
+ 		rockchip->ep_gpio = devm_gpiod_get_optional(dev, "ep",
+-							    GPIOD_OUT_HIGH);
++							    GPIOD_OUT_LOW);
+ 		if (IS_ERR(rockchip->ep_gpio))
+ 			return dev_err_probe(dev, PTR_ERR(rockchip->ep_gpio),
+ 					     "failed to get ep GPIO\n");
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 530ced8f7abd2c..09d5fa637b9849 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4817,7 +4817,7 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
+ 				      int timeout)
+ {
+ 	struct pci_dev *child;
+-	int delay;
++	int delay, ret = 0;
+ 
+ 	if (pci_dev_is_disconnected(dev))
+ 		return 0;
+@@ -4845,8 +4845,8 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
+ 		return 0;
+ 	}
+ 
+-	child = list_first_entry(&dev->subordinate->devices, struct pci_dev,
+-				 bus_list);
++	child = pci_dev_get(list_first_entry(&dev->subordinate->devices,
++					     struct pci_dev, bus_list));
+ 	up_read(&pci_bus_sem);
+ 
+ 	/*
+@@ -4856,7 +4856,7 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
+ 	if (!pci_is_pcie(dev)) {
+ 		pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
+ 		msleep(1000 + delay);
+-		return 0;
++		goto put_child;
+ 	}
+ 
+ 	/*
+@@ -4877,7 +4877,7 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
+ 	 * until the timeout expires.
+ 	 */
+ 	if (!pcie_downstream_port(dev))
+-		return 0;
++		goto put_child;
+ 
+ 	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
+ 		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
+@@ -4888,11 +4888,16 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
+ 		if (!pcie_wait_for_link_delay(dev, true, delay)) {
+ 			/* Did not train, no need to wait any further */
+ 			pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n");
+-			return -ENOTTY;
++			ret = -ENOTTY;
++			goto put_child;
+ 		}
+ 	}
+ 
+-	return pci_dev_wait(child, reset_type, timeout - delay);
++	ret = pci_dev_wait(child, reset_type, timeout - delay);
++
++put_child:
++	pci_dev_put(child);
++	return ret;
+ }
+ 
+ void pci_reset_secondary_bus(struct pci_dev *dev)
+diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
+index 16d291e10627b0..a159bfdfa2512e 100644
+--- a/drivers/pci/setup-bus.c
++++ b/drivers/pci/setup-bus.c
+@@ -824,11 +824,9 @@ static resource_size_t calculate_memsize(resource_size_t size,
+ 		size = min_size;
+ 	if (old_size == 1)
+ 		old_size = 0;
+-	if (size < old_size)
+-		size = old_size;
+ 
+-	size = ALIGN(max(size, add_size) + children_add_size, align);
+-	return size;
++	size = max(size, add_size) + children_add_size;
++	return ALIGN(max(size, old_size), align);
+ }
+ 
+ resource_size_t __weak pcibios_window_alignment(struct pci_bus *bus,
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 3d44d6f48cc4cd..8152d24e128a34 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -2039,6 +2039,14 @@ pinctrl_init_controller(struct pinctrl_desc *pctldesc, struct device *dev,
+ 	return ERR_PTR(ret);
+ }
+ 
++static void pinctrl_uninit_controller(struct pinctrl_dev *pctldev, struct pinctrl_desc *pctldesc)
++{
++	pinctrl_free_pindescs(pctldev, pctldesc->pins,
++			      pctldesc->npins);
++	mutex_destroy(&pctldev->mutex);
++	kfree(pctldev);
++}
++
+ static int pinctrl_claim_hogs(struct pinctrl_dev *pctldev)
+ {
+ 	pctldev->p = create_pinctrl(pctldev->dev, pctldev);
+@@ -2119,8 +2127,10 @@ struct pinctrl_dev *pinctrl_register(struct pinctrl_desc *pctldesc,
+ 		return pctldev;
+ 
+ 	error = pinctrl_enable(pctldev);
+-	if (error)
++	if (error) {
++		pinctrl_uninit_controller(pctldev, pctldesc);
+ 		return ERR_PTR(error);
++	}
+ 
+ 	return pctldev;
+ }
+diff --git a/drivers/pinctrl/freescale/pinctrl-mxs.c b/drivers/pinctrl/freescale/pinctrl-mxs.c
+index 735cedd0958a2b..5b0fcf15f28049 100644
+--- a/drivers/pinctrl/freescale/pinctrl-mxs.c
++++ b/drivers/pinctrl/freescale/pinctrl-mxs.c
+@@ -405,8 +405,8 @@ static int mxs_pinctrl_probe_dt(struct platform_device *pdev,
+ 	int ret;
+ 	u32 val;
+ 
+-	child = of_get_next_child(np, NULL);
+-	if (!child) {
++	val = of_get_child_count(np);
++	if (val == 0) {
+ 		dev_err(&pdev->dev, "no group is defined\n");
+ 		return -ENOENT;
+ 	}
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 02b41f1bafe71c..e0f22ce219ee84 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -720,9 +720,8 @@ static struct rockchip_mux_route_data rk3308_mux_route_data[] = {
+ 	RK_MUXROUTE_SAME(0, RK_PC3, 1, 0x314, BIT(16 + 0) | BIT(0)), /* rtc_clk */
+ 	RK_MUXROUTE_SAME(1, RK_PC6, 2, 0x314, BIT(16 + 2) | BIT(16 + 3)), /* uart2_rxm0 */
+ 	RK_MUXROUTE_SAME(4, RK_PD2, 2, 0x314, BIT(16 + 2) | BIT(16 + 3) | BIT(2)), /* uart2_rxm1 */
+-	RK_MUXROUTE_SAME(0, RK_PB7, 2, 0x608, BIT(16 + 8) | BIT(16 + 9)), /* i2c3_sdam0 */
+-	RK_MUXROUTE_SAME(3, RK_PB4, 2, 0x608, BIT(16 + 8) | BIT(16 + 9) | BIT(8)), /* i2c3_sdam1 */
+-	RK_MUXROUTE_SAME(2, RK_PA0, 3, 0x608, BIT(16 + 8) | BIT(16 + 9) | BIT(9)), /* i2c3_sdam2 */
++	RK_MUXROUTE_SAME(0, RK_PB7, 2, 0x314, BIT(16 + 4)), /* i2c3_sdam0 */
++	RK_MUXROUTE_SAME(3, RK_PB4, 2, 0x314, BIT(16 + 4) | BIT(4)), /* i2c3_sdam1 */
+ 	RK_MUXROUTE_SAME(1, RK_PA3, 2, 0x308, BIT(16 + 3)), /* i2s-8ch-1-sclktxm0 */
+ 	RK_MUXROUTE_SAME(1, RK_PA4, 2, 0x308, BIT(16 + 3)), /* i2s-8ch-1-sclkrxm0 */
+ 	RK_MUXROUTE_SAME(1, RK_PB5, 2, 0x308, BIT(16 + 3) | BIT(3)), /* i2s-8ch-1-sclktxm1 */
+@@ -731,18 +730,6 @@ static struct rockchip_mux_route_data rk3308_mux_route_data[] = {
+ 	RK_MUXROUTE_SAME(1, RK_PB6, 4, 0x308, BIT(16 + 12) | BIT(16 + 13) | BIT(12)), /* pdm-clkm1 */
+ 	RK_MUXROUTE_SAME(2, RK_PA6, 2, 0x308, BIT(16 + 12) | BIT(16 + 13) | BIT(13)), /* pdm-clkm2 */
+ 	RK_MUXROUTE_SAME(2, RK_PA4, 3, 0x600, BIT(16 + 2) | BIT(2)), /* pdm-clkm-m2 */
+-	RK_MUXROUTE_SAME(3, RK_PB2, 3, 0x314, BIT(16 + 9)), /* spi1_miso */
+-	RK_MUXROUTE_SAME(2, RK_PA4, 2, 0x314, BIT(16 + 9) | BIT(9)), /* spi1_miso_m1 */
+-	RK_MUXROUTE_SAME(0, RK_PB3, 3, 0x314, BIT(16 + 10) | BIT(16 + 11)), /* owire_m0 */
+-	RK_MUXROUTE_SAME(1, RK_PC6, 7, 0x314, BIT(16 + 10) | BIT(16 + 11) | BIT(10)), /* owire_m1 */
+-	RK_MUXROUTE_SAME(2, RK_PA2, 5, 0x314, BIT(16 + 10) | BIT(16 + 11) | BIT(11)), /* owire_m2 */
+-	RK_MUXROUTE_SAME(0, RK_PB3, 2, 0x314, BIT(16 + 12) | BIT(16 + 13)), /* can_rxd_m0 */
+-	RK_MUXROUTE_SAME(1, RK_PC6, 5, 0x314, BIT(16 + 12) | BIT(16 + 13) | BIT(12)), /* can_rxd_m1 */
+-	RK_MUXROUTE_SAME(2, RK_PA2, 4, 0x314, BIT(16 + 12) | BIT(16 + 13) | BIT(13)), /* can_rxd_m2 */
+-	RK_MUXROUTE_SAME(1, RK_PC4, 3, 0x314, BIT(16 + 14)), /* mac_rxd0_m0 */
+-	RK_MUXROUTE_SAME(4, RK_PA2, 2, 0x314, BIT(16 + 14) | BIT(14)), /* mac_rxd0_m1 */
+-	RK_MUXROUTE_SAME(3, RK_PB4, 4, 0x314, BIT(16 + 15)), /* uart3_rx */
+-	RK_MUXROUTE_SAME(0, RK_PC1, 3, 0x314, BIT(16 + 15) | BIT(15)), /* uart3_rx_m1 */
+ };
+ 
+ static struct rockchip_mux_route_data rk3328_mux_route_data[] = {
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index 22e471933b373e..4860c4dd853f33 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -1332,7 +1332,6 @@ static void pcs_irq_free(struct pcs_device *pcs)
+ static void pcs_free_resources(struct pcs_device *pcs)
+ {
+ 	pcs_irq_free(pcs);
+-	pinctrl_unregister(pcs->pctl);
+ 
+ #if IS_BUILTIN(CONFIG_PINCTRL_SINGLE)
+ 	if (pcs->missing_nr_pinctrl_cells)
+@@ -1889,7 +1888,7 @@ static int pcs_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		goto free;
+ 
+-	ret = pinctrl_register_and_init(&pcs->desc, pcs->dev, pcs, &pcs->pctl);
++	ret = devm_pinctrl_register_and_init(pcs->dev, &pcs->desc, pcs, &pcs->pctl);
+ 	if (ret) {
+ 		dev_err(pcs->dev, "could not register single pinctrl driver\n");
+ 		goto free;
+@@ -1922,8 +1921,10 @@ static int pcs_probe(struct platform_device *pdev)
+ 
+ 	dev_info(pcs->dev, "%i pins, size %u\n", pcs->desc.npins, pcs->size);
+ 
+-	return pinctrl_enable(pcs->pctl);
++	if (pinctrl_enable(pcs->pctl))
++		goto free;
+ 
++	return 0;
+ free:
+ 	pcs_free_resources(pcs);
+ 
+diff --git a/drivers/pinctrl/ti/pinctrl-ti-iodelay.c b/drivers/pinctrl/ti/pinctrl-ti-iodelay.c
+index cfb924228d877c..6e1b067fb72ed9 100644
+--- a/drivers/pinctrl/ti/pinctrl-ti-iodelay.c
++++ b/drivers/pinctrl/ti/pinctrl-ti-iodelay.c
+@@ -881,7 +881,7 @@ static int ti_iodelay_probe(struct platform_device *pdev)
+ 	iod->desc.name = dev_name(dev);
+ 	iod->desc.owner = THIS_MODULE;
+ 
+-	ret = pinctrl_register_and_init(&iod->desc, dev, iod, &iod->pctl);
++	ret = devm_pinctrl_register_and_init(dev, &iod->desc, iod, &iod->pctl);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register pinctrl\n");
+ 		goto exit_out;
+@@ -889,7 +889,11 @@ static int ti_iodelay_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, iod);
+ 
+-	return pinctrl_enable(iod->pctl);
++	ret = pinctrl_enable(iod->pctl);
++	if (ret)
++		goto exit_out;
++
++	return 0;
+ 
+ exit_out:
+ 	of_node_put(np);
+@@ -906,12 +910,6 @@ static int ti_iodelay_remove(struct platform_device *pdev)
+ {
+ 	struct ti_iodelay_device *iod = platform_get_drvdata(pdev);
+ 
+-	if (!iod)
+-		return 0;
+-
+-	if (iod->pctl)
+-		pinctrl_unregister(iod->pctl);
+-
+ 	ti_iodelay_pinconf_deinit_dev(iod);
+ 
+ 	/* Expect other allocations to be freed by devm */
+diff --git a/drivers/platform/chrome/cros_ec_debugfs.c b/drivers/platform/chrome/cros_ec_debugfs.c
+index 0dbceee87a4b1a..2928c3cb378545 100644
+--- a/drivers/platform/chrome/cros_ec_debugfs.c
++++ b/drivers/platform/chrome/cros_ec_debugfs.c
+@@ -326,6 +326,7 @@ static int ec_read_version_supported(struct cros_ec_dev *ec)
+ 	if (!msg)
+ 		return 0;
+ 
++	msg->version = 1;
+ 	msg->command = EC_CMD_GET_CMD_VERSIONS + ec->cmd_offset;
+ 	msg->outsize = sizeof(*params);
+ 	msg->insize = sizeof(*response);
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index 8ffbf92ec1e071..39c41aeb57e41c 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -780,9 +780,11 @@ int cros_ec_get_next_event(struct cros_ec_device *ec_dev,
+ 	if (ret == -ENOPROTOOPT) {
+ 		dev_dbg(ec_dev->dev,
+ 			"GET_NEXT_EVENT returned invalid version error.\n");
++		mutex_lock(&ec_dev->lock);
+ 		ret = cros_ec_get_host_command_version_mask(ec_dev,
+ 							EC_CMD_GET_NEXT_EVENT,
+ 							&ver_mask);
++		mutex_unlock(&ec_dev->lock);
+ 		if (ret < 0 || ver_mask == 0)
+ 			/*
+ 			 * Do not change the MKBP supported version if we can't
+diff --git a/drivers/platform/mips/cpu_hwmon.c b/drivers/platform/mips/cpu_hwmon.c
+index d8c5f9195f85f5..2ac2f31090f96f 100644
+--- a/drivers/platform/mips/cpu_hwmon.c
++++ b/drivers/platform/mips/cpu_hwmon.c
+@@ -139,6 +139,9 @@ static int __init loongson_hwmon_init(void)
+ 		csr_temp_enable = csr_readl(LOONGSON_CSR_FEATURES) &
+ 				  LOONGSON_CSRF_TEMP;
+ 
++	if (!csr_temp_enable && !loongson_chiptemp[0])
++		return -ENODEV;
++
+ 	nr_packages = loongson_sysconf.nr_cpus /
+ 		loongson_sysconf.cores_per_package;
+ 
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index f65bf7b295c594..c4f1a2bac333b5 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -168,18 +168,18 @@ static inline int axp288_charger_set_cv(struct axp288_chrg_info *info, int cv)
+ 	u8 reg_val;
+ 	int ret;
+ 
+-	if (cv <= CV_4100MV) {
+-		reg_val = CHRG_CCCV_CV_4100MV;
+-		cv = CV_4100MV;
+-	} else if (cv <= CV_4150MV) {
+-		reg_val = CHRG_CCCV_CV_4150MV;
+-		cv = CV_4150MV;
+-	} else if (cv <= CV_4200MV) {
++	if (cv >= CV_4350MV) {
++		reg_val = CHRG_CCCV_CV_4350MV;
++		cv = CV_4350MV;
++	} else if (cv >= CV_4200MV) {
+ 		reg_val = CHRG_CCCV_CV_4200MV;
+ 		cv = CV_4200MV;
++	} else if (cv >= CV_4150MV) {
++		reg_val = CHRG_CCCV_CV_4150MV;
++		cv = CV_4150MV;
+ 	} else {
+-		reg_val = CHRG_CCCV_CV_4350MV;
+-		cv = CV_4350MV;
++		reg_val = CHRG_CCCV_CV_4100MV;
++		cv = CV_4100MV;
+ 	}
+ 
+ 	reg_val = reg_val << CHRG_CCCV_CV_BIT_POS;
+@@ -371,8 +371,8 @@ static int axp288_charger_usb_set_property(struct power_supply *psy,
+ 			dev_warn(&info->pdev->dev, "set charge current failed\n");
+ 		break;
+ 	case POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE:
+-		scaled_val = min(val->intval, info->max_cv);
+-		scaled_val = DIV_ROUND_CLOSEST(scaled_val, 1000);
++		scaled_val = DIV_ROUND_CLOSEST(val->intval, 1000);
++		scaled_val = min(scaled_val, info->max_cv);
+ 		ret = axp288_charger_set_cv(info, scaled_val);
+ 		if (ret < 0)
+ 			dev_warn(&info->pdev->dev, "set charge voltage failed\n");
+diff --git a/drivers/pwm/pwm-stm32.c b/drivers/pwm/pwm-stm32.c
+index 3e5f1b622af899..7146b3f6755bc9 100644
+--- a/drivers/pwm/pwm-stm32.c
++++ b/drivers/pwm/pwm-stm32.c
+@@ -452,8 +452,9 @@ static int stm32_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 
+ 	enabled = pwm->state.enabled;
+ 
+-	if (enabled && !state->enabled) {
+-		stm32_pwm_disable(priv, pwm->hwpwm);
++	if (!state->enabled) {
++		if (enabled)
++			stm32_pwm_disable(priv, pwm->hwpwm);
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
+index 8957ed271d209b..373fce8b91064f 100644
+--- a/drivers/remoteproc/imx_rproc.c
++++ b/drivers/remoteproc/imx_rproc.c
+@@ -287,6 +287,11 @@ static int imx_rproc_addr_init(struct imx_rproc *priv,
+ 		struct resource res;
+ 
+ 		node = of_parse_phandle(np, "memory-region", a);
++		if (!node)
++			continue;
++		/* Not map vdevbuffer, vdevring region */
++		if (!strncmp(node->name, "vdev", strlen("vdev")))
++			continue;
+ 		err = of_address_to_resource(node, 0, &res);
+ 		if (err) {
+ 			dev_err(dev, "unable to resolve memory region\n");
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index 146056858135e3..154ea5ae2c0c3b 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -274,10 +274,9 @@ int __rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
+ 			return err;
+ 
+ 		/* full-function RTCs won't have such missing fields */
+-		if (rtc_valid_tm(&alarm->time) == 0) {
+-			rtc_add_offset(rtc, &alarm->time);
+-			return 0;
+-		}
++		err = rtc_valid_tm(&alarm->time);
++		if (!err)
++			goto done;
+ 
+ 		/* get the "after" timestamp, to detect wrapped fields */
+ 		err = rtc_read_time(rtc, &now);
+@@ -379,6 +378,8 @@ int __rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
+ 	if (err)
+ 		dev_warn(&rtc->dev, "invalid alarm value: %ptR\n",
+ 			 &alarm->time);
++	else
++		rtc_add_offset(rtc, &alarm->time);
+ 
+ 	return err;
+ }
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index 2c4ccab6e462d9..a55a1cff2ef033 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -649,11 +649,10 @@ static int cmos_nvram_read(void *priv, unsigned int off, void *val,
+ 			   size_t count)
+ {
+ 	unsigned char *buf = val;
+-	int	retval;
+ 
+ 	off += NVRAM_OFFSET;
+ 	spin_lock_irq(&rtc_lock);
+-	for (retval = 0; count; count--, off++, retval++) {
++	for (; count; count--, off++) {
+ 		if (off < 128)
+ 			*buf++ = CMOS_READ(off);
+ 		else if (can_bank2)
+@@ -663,7 +662,7 @@ static int cmos_nvram_read(void *priv, unsigned int off, void *val,
+ 	}
+ 	spin_unlock_irq(&rtc_lock);
+ 
+-	return retval;
++	return count ? -EIO : 0;
+ }
+ 
+ static int cmos_nvram_write(void *priv, unsigned int off, void *val,
+@@ -671,7 +670,6 @@ static int cmos_nvram_write(void *priv, unsigned int off, void *val,
+ {
+ 	struct cmos_rtc	*cmos = priv;
+ 	unsigned char	*buf = val;
+-	int		retval;
+ 
+ 	/* NOTE:  on at least PCs and Ataris, the boot firmware uses a
+ 	 * checksum on part of the NVRAM data.  That's currently ignored
+@@ -680,7 +678,7 @@ static int cmos_nvram_write(void *priv, unsigned int off, void *val,
+ 	 */
+ 	off += NVRAM_OFFSET;
+ 	spin_lock_irq(&rtc_lock);
+-	for (retval = 0; count; count--, off++, retval++) {
++	for (; count; count--, off++) {
+ 		/* don't trash RTC registers */
+ 		if (off == cmos->day_alrm
+ 				|| off == cmos->mon_alrm
+@@ -695,7 +693,7 @@ static int cmos_nvram_write(void *priv, unsigned int off, void *val,
+ 	}
+ 	spin_unlock_irq(&rtc_lock);
+ 
+-	return retval;
++	return count ? -EIO : 0;
+ }
+ 
+ /*----------------------------------------------------------------*/
+diff --git a/drivers/rtc/rtc-isl1208.c b/drivers/rtc/rtc-isl1208.c
+index ebb691fa48a6b4..cba0dfe21a1523 100644
+--- a/drivers/rtc/rtc-isl1208.c
++++ b/drivers/rtc/rtc-isl1208.c
+@@ -743,14 +743,13 @@ static int isl1208_nvmem_read(void *priv, unsigned int off, void *buf,
+ {
+ 	struct isl1208_state *isl1208 = priv;
+ 	struct i2c_client *client = to_i2c_client(isl1208->rtc->dev.parent);
+-	int ret;
+ 
+ 	/* nvmem sanitizes offset/count for us, but count==0 is possible */
+ 	if (!count)
+ 		return count;
+-	ret = isl1208_i2c_read_regs(client, ISL1208_REG_USR1 + off, buf,
++
++	return isl1208_i2c_read_regs(client, ISL1208_REG_USR1 + off, buf,
+ 				    count);
+-	return ret == 0 ? count : ret;
+ }
+ 
+ static int isl1208_nvmem_write(void *priv, unsigned int off, void *buf,
+@@ -758,15 +757,13 @@ static int isl1208_nvmem_write(void *priv, unsigned int off, void *buf,
+ {
+ 	struct isl1208_state *isl1208 = priv;
+ 	struct i2c_client *client = to_i2c_client(isl1208->rtc->dev.parent);
+-	int ret;
+ 
+ 	/* nvmem sanitizes off/count for us, but count==0 is possible */
+ 	if (!count)
+ 		return count;
+-	ret = isl1208_i2c_set_regs(client, ISL1208_REG_USR1 + off, buf,
+-				   count);
+ 
+-	return ret == 0 ? count : ret;
++	return isl1208_i2c_set_regs(client, ISL1208_REG_USR1 + off, buf,
++				   count);
+ }
+ 
+ static const struct nvmem_config isl1208_nvmem_config = {
+diff --git a/drivers/s390/char/sclp_sd.c b/drivers/s390/char/sclp_sd.c
+index 1e244f78f1929c..64581433c33491 100644
+--- a/drivers/s390/char/sclp_sd.c
++++ b/drivers/s390/char/sclp_sd.c
+@@ -319,8 +319,14 @@ static int sclp_sd_store_data(struct sclp_sd_data *result, u8 di)
+ 			  &esize);
+ 	if (rc) {
+ 		/* Cancel running request if interrupted */
+-		if (rc == -ERESTARTSYS)
+-			sclp_sd_sync(page, SD_EQ_HALT, di, 0, 0, NULL, NULL);
++		if (rc == -ERESTARTSYS) {
++			if (sclp_sd_sync(page, SD_EQ_HALT, di, 0, 0, NULL, NULL)) {
++				pr_warn("Could not stop Store Data request - leaking at least %zu bytes\n",
++					(size_t)dsize * PAGE_SIZE);
++				data = NULL;
++				asce = 0;
++			}
++		}
+ 		vfree(data);
+ 		goto out;
+ 	}
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 2803b475dae6ae..53528711dac1ff 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -2431,12 +2431,8 @@ _base_check_pcie_native_sgl(struct MPT3SAS_ADAPTER *ioc,
+ 
+ 	/* Get the SG list pointer and info. */
+ 	sges_left = scsi_dma_map(scmd);
+-	if (sges_left < 0) {
+-		sdev_printk(KERN_ERR, scmd->device,
+-			"scsi_dma_map failed: request for %d bytes!\n",
+-			scsi_bufflen(scmd));
++	if (sges_left < 0)
+ 		return 1;
+-	}
+ 
+ 	/* Check if we need to build a native SG list. */
+ 	if (base_is_prp_possible(ioc, pcie_device,
+@@ -2496,6 +2492,22 @@ _base_build_zero_len_sge_ieee(struct MPT3SAS_ADAPTER *ioc, void *paddr)
+ 	_base_add_sg_single_ieee(paddr, sgl_flags, 0, 0, -1);
+ }
+ 
++static inline int _base_scsi_dma_map(struct scsi_cmnd *cmd)
++{
++	/*
++	 * Some firmware versions byte-swap the REPORT ZONES command reply from
++	 * ATA-ZAC devices by directly accessing in the host buffer. This does
++	 * not respect the default command DMA direction and causes IOMMU page
++	 * faults on some architectures with an IOMMU enforcing write mappings
++	 * (e.g. AMD hosts). Avoid such issue by making the report zones buffer
++	 * mapping bi-directional.
++	 */
++	if (cmd->cmnd[0] == ZBC_IN && cmd->cmnd[1] == ZI_REPORT_ZONES)
++		cmd->sc_data_direction = DMA_BIDIRECTIONAL;
++
++	return scsi_dma_map(cmd);
++}
++
+ /**
+  * _base_build_sg_scmd - main sg creation routine
+  *		pcie_device is unused here!
+@@ -2542,13 +2554,9 @@ _base_build_sg_scmd(struct MPT3SAS_ADAPTER *ioc,
+ 	sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT;
+ 
+ 	sg_scmd = scsi_sglist(scmd);
+-	sges_left = scsi_dma_map(scmd);
+-	if (sges_left < 0) {
+-		sdev_printk(KERN_ERR, scmd->device,
+-		 "scsi_dma_map failed: request for %d bytes!\n",
+-		 scsi_bufflen(scmd));
++	sges_left = _base_scsi_dma_map(scmd);
++	if (sges_left < 0)
+ 		return -ENOMEM;
+-	}
+ 
+ 	sg_local = &mpi_request->SGL;
+ 	sges_in_segment = ioc->max_sges_in_main_message;
+@@ -2690,13 +2698,9 @@ _base_build_sg_scmd_ieee(struct MPT3SAS_ADAPTER *ioc,
+ 	}
+ 
+ 	sg_scmd = scsi_sglist(scmd);
+-	sges_left = scsi_dma_map(scmd);
+-	if (sges_left < 0) {
+-		sdev_printk(KERN_ERR, scmd->device,
+-			"scsi_dma_map failed: request for %d bytes!\n",
+-			scsi_bufflen(scmd));
++	sges_left = _base_scsi_dma_map(scmd);
++	if (sges_left < 0)
+ 		return -ENOMEM;
+-	}
+ 
+ 	sg_local = &mpi_request->SGL;
+ 	sges_in_segment = (ioc->request_sz -
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index 804cac4c34769a..d415e816ad0eb6 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -306,7 +306,7 @@ qla2x00_process_els(struct bsg_job *bsg_job)
+ 		    "request_sg_cnt=%x reply_sg_cnt=%x.\n",
+ 		    bsg_job->request_payload.sg_cnt,
+ 		    bsg_job->reply_payload.sg_cnt);
+-		rval = -EPERM;
++		rval = -ENOBUFS;
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index d9ac17dbad7899..b08a92d346f5f0 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -1708,7 +1708,7 @@ qla2x00_hba_attributes(scsi_qla_host_t *vha, void *entries,
+ 	eiter->type = cpu_to_be16(FDMI_HBA_OPTION_ROM_VERSION);
+ 	alen = scnprintf(
+ 		eiter->a.orom_version, sizeof(eiter->a.orom_version),
+-		"%d.%02d", ha->bios_revision[1], ha->bios_revision[0]);
++		"%d.%02d", ha->efi_revision[1], ha->efi_revision[0]);
+ 	alen += FDMI_ATTR_ALIGNMENT(alen);
+ 	alen += FDMI_ATTR_TYPELEN(eiter);
+ 	eiter->len = cpu_to_be16(alen);
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 8d54f609980293..affb3bc39006ce 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -7696,15 +7696,21 @@ qla28xx_get_aux_images(
+ 	struct qla27xx_image_status pri_aux_image_status, sec_aux_image_status;
+ 	bool valid_pri_image = false, valid_sec_image = false;
+ 	bool active_pri_image = false, active_sec_image = false;
++	int rc;
+ 
+ 	if (!ha->flt_region_aux_img_status_pri) {
+ 		ql_dbg(ql_dbg_init, vha, 0x018a, "Primary aux image not addressed\n");
+ 		goto check_sec_image;
+ 	}
+ 
+-	qla24xx_read_flash_data(vha, (uint32_t *)&pri_aux_image_status,
++	rc = qla24xx_read_flash_data(vha, (uint32_t *)&pri_aux_image_status,
+ 	    ha->flt_region_aux_img_status_pri,
+ 	    sizeof(pri_aux_image_status) >> 2);
++	if (rc) {
++		ql_log(ql_log_info, vha, 0x01a1,
++		    "Unable to read Primary aux image(%x).\n", rc);
++		goto check_sec_image;
++	}
+ 	qla27xx_print_image(vha, "Primary aux image", &pri_aux_image_status);
+ 
+ 	if (qla28xx_check_aux_image_status_signature(&pri_aux_image_status)) {
+@@ -7735,9 +7741,15 @@ qla28xx_get_aux_images(
+ 		goto check_valid_image;
+ 	}
+ 
+-	qla24xx_read_flash_data(vha, (uint32_t *)&sec_aux_image_status,
++	rc = qla24xx_read_flash_data(vha, (uint32_t *)&sec_aux_image_status,
+ 	    ha->flt_region_aux_img_status_sec,
+ 	    sizeof(sec_aux_image_status) >> 2);
++	if (rc) {
++		ql_log(ql_log_info, vha, 0x01a2,
++		    "Unable to read Secondary aux image(%x).\n", rc);
++		goto check_valid_image;
++	}
++
+ 	qla27xx_print_image(vha, "Secondary aux image", &sec_aux_image_status);
+ 
+ 	if (qla28xx_check_aux_image_status_signature(&sec_aux_image_status)) {
+@@ -7794,6 +7806,7 @@ qla27xx_get_active_image(struct scsi_qla_host *vha,
+ 	struct qla27xx_image_status pri_image_status, sec_image_status;
+ 	bool valid_pri_image = false, valid_sec_image = false;
+ 	bool active_pri_image = false, active_sec_image = false;
++	int rc;
+ 
+ 	if (!ha->flt_region_img_status_pri) {
+ 		ql_dbg(ql_dbg_init, vha, 0x018a, "Primary image not addressed\n");
+@@ -7835,8 +7848,14 @@ qla27xx_get_active_image(struct scsi_qla_host *vha,
+ 		goto check_valid_image;
+ 	}
+ 
+-	qla24xx_read_flash_data(vha, (uint32_t *)(&sec_image_status),
++	rc = qla24xx_read_flash_data(vha, (uint32_t *)(&sec_image_status),
+ 	    ha->flt_region_img_status_sec, sizeof(sec_image_status) >> 2);
++	if (rc) {
++		ql_log(ql_log_info, vha, 0x01a3,
++		    "Unable to read Secondary image status(%x).\n", rc);
++		goto check_valid_image;
++	}
++
+ 	qla27xx_print_image(vha, "Secondary image", &sec_image_status);
+ 
+ 	if (qla27xx_check_image_status_signature(&sec_image_status)) {
+@@ -7908,11 +7927,10 @@ qla24xx_load_risc_flash(scsi_qla_host_t *vha, uint32_t *srisc_addr,
+ 	    "FW: Loading firmware from flash (%x).\n", faddr);
+ 
+ 	dcode = (uint32_t *)req->ring;
+-	qla24xx_read_flash_data(vha, dcode, faddr, 8);
+-	if (qla24xx_risc_firmware_invalid(dcode)) {
++	rval = qla24xx_read_flash_data(vha, dcode, faddr, 8);
++	if (rval || qla24xx_risc_firmware_invalid(dcode)) {
+ 		ql_log(ql_log_fatal, vha, 0x008c,
+-		    "Unable to verify the integrity of flash firmware "
+-		    "image.\n");
++		    "Unable to verify the integrity of flash firmware image (rval %x).\n", rval);
+ 		ql_log(ql_log_fatal, vha, 0x008d,
+ 		    "Firmware data: %08x %08x %08x %08x.\n",
+ 		    dcode[0], dcode[1], dcode[2], dcode[3]);
+@@ -7926,7 +7944,12 @@ qla24xx_load_risc_flash(scsi_qla_host_t *vha, uint32_t *srisc_addr,
+ 	for (j = 0; j < segments; j++) {
+ 		ql_dbg(ql_dbg_init, vha, 0x008d,
+ 		    "-> Loading segment %u...\n", j);
+-		qla24xx_read_flash_data(vha, dcode, faddr, 10);
++		rval = qla24xx_read_flash_data(vha, dcode, faddr, 10);
++		if (rval) {
++			ql_log(ql_log_fatal, vha, 0x016a,
++			    "-> Unable to read segment addr + size .\n");
++			return QLA_FUNCTION_FAILED;
++		}
+ 		risc_addr = be32_to_cpu((__force __be32)dcode[2]);
+ 		risc_size = be32_to_cpu((__force __be32)dcode[3]);
+ 		if (!*srisc_addr) {
+@@ -7942,7 +7965,13 @@ qla24xx_load_risc_flash(scsi_qla_host_t *vha, uint32_t *srisc_addr,
+ 			ql_dbg(ql_dbg_init, vha, 0x008e,
+ 			    "-> Loading fragment %u: %#x <- %#x (%#lx dwords)...\n",
+ 			    fragment, risc_addr, faddr, dlen);
+-			qla24xx_read_flash_data(vha, dcode, faddr, dlen);
++			rval = qla24xx_read_flash_data(vha, dcode, faddr, dlen);
++			if (rval) {
++				ql_log(ql_log_fatal, vha, 0x016b,
++				    "-> Unable to read fragment(faddr %#x dlen %#lx).\n",
++				    faddr, dlen);
++				return QLA_FUNCTION_FAILED;
++			}
+ 			for (i = 0; i < dlen; i++)
+ 				dcode[i] = swab32(dcode[i]);
+ 
+@@ -7972,7 +8001,14 @@ qla24xx_load_risc_flash(scsi_qla_host_t *vha, uint32_t *srisc_addr,
+ 		fwdt->length = 0;
+ 
+ 		dcode = (uint32_t *)req->ring;
+-		qla24xx_read_flash_data(vha, dcode, faddr, 7);
++
++		rval = qla24xx_read_flash_data(vha, dcode, faddr, 7);
++		if (rval) {
++			ql_log(ql_log_fatal, vha, 0x016c,
++			    "-> Unable to read template size.\n");
++			goto failed;
++		}
++
+ 		risc_size = be32_to_cpu((__force __be32)dcode[2]);
+ 		ql_dbg(ql_dbg_init, vha, 0x0161,
+ 		    "-> fwdt%u template array at %#x (%#x dwords)\n",
+@@ -7998,11 +8034,12 @@ qla24xx_load_risc_flash(scsi_qla_host_t *vha, uint32_t *srisc_addr,
+ 		}
+ 
+ 		dcode = fwdt->template;
+-		qla24xx_read_flash_data(vha, dcode, faddr, risc_size);
++		rval = qla24xx_read_flash_data(vha, dcode, faddr, risc_size);
+ 
+-		if (!qla27xx_fwdt_template_valid(dcode)) {
++		if (rval || !qla27xx_fwdt_template_valid(dcode)) {
+ 			ql_log(ql_log_warn, vha, 0x0165,
+-			    "-> fwdt%u failed template validate\n", j);
++			    "-> fwdt%u failed template validate (rval %x)\n",
++			    j, rval);
+ 			goto failed;
+ 		}
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index c7caf322f445be..b98c390b4b27c2 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -164,7 +164,7 @@ qla24xx_disable_vp(scsi_qla_host_t *vha)
+ 	atomic_set(&vha->loop_state, LOOP_DOWN);
+ 	atomic_set(&vha->loop_down_timer, LOOP_DOWN_TIME);
+ 	list_for_each_entry(fcport, &vha->vp_fcports, list)
+-		fcport->logout_on_delete = 0;
++		fcport->logout_on_delete = 1;
+ 
+ 	qla2x00_mark_all_devices_lost(vha);
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 6dad7787f20de2..28a5b40e0f329c 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -27,7 +27,10 @@ int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
+ 		return 0;
+ 	}
+ 
+-	if (!vha->nvme_local_port && qla_nvme_register_hba(vha))
++	if (qla_nvme_register_hba(vha))
++		return 0;
++
++	if (!vha->nvme_local_port)
+ 		return 0;
+ 
+ 	if (!(fcport->nvme_prli_service_param &
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 0930bf996cd302..00b971d1c419c4 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -1752,14 +1752,9 @@ __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res)
+ 	for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) {
+ 		sp = req->outstanding_cmds[cnt];
+ 		if (sp) {
+-			/*
+-			 * perform lockless completion during driver unload
+-			 */
+ 			if (qla2x00_chip_is_down(vha)) {
+ 				req->outstanding_cmds[cnt] = NULL;
+-				spin_unlock_irqrestore(qp->qp_lock_ptr, flags);
+ 				sp->done(sp, res);
+-				spin_lock_irqsave(qp->qp_lock_ptr, flags);
+ 				continue;
+ 			}
+ 
+@@ -4453,7 +4448,7 @@ static void
+ qla2x00_number_of_exch(scsi_qla_host_t *vha, u32 *ret_cnt, u16 max_cnt)
+ {
+ 	u32 temp;
+-	struct init_cb_81xx *icb = (struct init_cb_81xx *)&vha->hw->init_cb;
++	struct init_cb_81xx *icb = (struct init_cb_81xx *)vha->hw->init_cb;
+ 	*ret_cnt = FW_DEF_EXCHANGES_CNT;
+ 
+ 	if (max_cnt > vha->hw->max_exchg)
+diff --git a/drivers/scsi/qla2xxx/qla_sup.c b/drivers/scsi/qla2xxx/qla_sup.c
+index 0fa9c529fca11a..c55135f1463e54 100644
+--- a/drivers/scsi/qla2xxx/qla_sup.c
++++ b/drivers/scsi/qla2xxx/qla_sup.c
+@@ -555,6 +555,7 @@ qla2xxx_find_flt_start(scsi_qla_host_t *vha, uint32_t *start)
+ 	struct qla_flt_location *fltl = (void *)req->ring;
+ 	uint32_t *dcode = (uint32_t *)req->ring;
+ 	uint8_t *buf = (void *)req->ring, *bcode,  last_image;
++	int rc;
+ 
+ 	/*
+ 	 * FLT-location structure resides after the last PCI region.
+@@ -584,14 +585,24 @@ qla2xxx_find_flt_start(scsi_qla_host_t *vha, uint32_t *start)
+ 	pcihdr = 0;
+ 	do {
+ 		/* Verify PCI expansion ROM header. */
+-		qla24xx_read_flash_data(vha, dcode, pcihdr >> 2, 0x20);
++		rc = qla24xx_read_flash_data(vha, dcode, pcihdr >> 2, 0x20);
++		if (rc) {
++			ql_log(ql_log_info, vha, 0x016d,
++			    "Unable to read PCI Expansion Rom Header (%x).\n", rc);
++			return QLA_FUNCTION_FAILED;
++		}
+ 		bcode = buf + (pcihdr % 4);
+ 		if (bcode[0x0] != 0x55 || bcode[0x1] != 0xaa)
+ 			goto end;
+ 
+ 		/* Locate PCI data structure. */
+ 		pcids = pcihdr + ((bcode[0x19] << 8) | bcode[0x18]);
+-		qla24xx_read_flash_data(vha, dcode, pcids >> 2, 0x20);
++		rc = qla24xx_read_flash_data(vha, dcode, pcids >> 2, 0x20);
++		if (rc) {
++			ql_log(ql_log_info, vha, 0x0179,
++			    "Unable to read PCI Data Structure (%x).\n", rc);
++			return QLA_FUNCTION_FAILED;
++		}
+ 		bcode = buf + (pcihdr % 4);
+ 
+ 		/* Validate signature of PCI data structure. */
+@@ -606,7 +617,12 @@ qla2xxx_find_flt_start(scsi_qla_host_t *vha, uint32_t *start)
+ 	} while (!last_image);
+ 
+ 	/* Now verify FLT-location structure. */
+-	qla24xx_read_flash_data(vha, dcode, pcihdr >> 2, sizeof(*fltl) >> 2);
++	rc = qla24xx_read_flash_data(vha, dcode, pcihdr >> 2, sizeof(*fltl) >> 2);
++	if (rc) {
++		ql_log(ql_log_info, vha, 0x017a,
++		    "Unable to read FLT (%x).\n", rc);
++		return QLA_FUNCTION_FAILED;
++	}
+ 	if (memcmp(fltl->sig, "QFLT", 4))
+ 		goto end;
+ 
+@@ -2605,13 +2621,18 @@ qla24xx_read_optrom_data(struct scsi_qla_host *vha, void *buf,
+     uint32_t offset, uint32_t length)
+ {
+ 	struct qla_hw_data *ha = vha->hw;
++	int rc;
+ 
+ 	/* Suspend HBA. */
+ 	scsi_block_requests(vha->host);
+ 	set_bit(MBX_UPDATE_FLASH_ACTIVE, &ha->mbx_cmd_flags);
+ 
+ 	/* Go with read. */
+-	qla24xx_read_flash_data(vha, buf, offset >> 2, length >> 2);
++	rc = qla24xx_read_flash_data(vha, buf, offset >> 2, length >> 2);
++	if (rc) {
++		ql_log(ql_log_info, vha, 0x01a0,
++		    "Unable to perform optrom read(%x).\n", rc);
++	}
+ 
+ 	/* Resume HBA. */
+ 	clear_bit(MBX_UPDATE_FLASH_ACTIVE, &ha->mbx_cmd_flags);
+@@ -3412,7 +3433,7 @@ qla24xx_get_flash_version(scsi_qla_host_t *vha, void *mbuf)
+ 	struct active_regions active_regions = { };
+ 
+ 	if (IS_P3P_TYPE(ha))
+-		return ret;
++		return QLA_SUCCESS;
+ 
+ 	if (!mbuf)
+ 		return QLA_FUNCTION_FAILED;
+@@ -3432,20 +3453,31 @@ qla24xx_get_flash_version(scsi_qla_host_t *vha, void *mbuf)
+ 
+ 	do {
+ 		/* Verify PCI expansion ROM header. */
+-		qla24xx_read_flash_data(vha, dcode, pcihdr >> 2, 0x20);
++		ret = qla24xx_read_flash_data(vha, dcode, pcihdr >> 2, 0x20);
++		if (ret) {
++			ql_log(ql_log_info, vha, 0x017d,
++			    "Unable to read PCI EXP Rom Header(%x).\n", ret);
++			return QLA_FUNCTION_FAILED;
++		}
++
+ 		bcode = mbuf + (pcihdr % 4);
+ 		if (memcmp(bcode, "\x55\xaa", 2)) {
+ 			/* No signature */
+ 			ql_log(ql_log_fatal, vha, 0x0059,
+ 			    "No matching ROM signature.\n");
+-			ret = QLA_FUNCTION_FAILED;
+-			break;
++			return QLA_FUNCTION_FAILED;
+ 		}
+ 
+ 		/* Locate PCI data structure. */
+ 		pcids = pcihdr + ((bcode[0x19] << 8) | bcode[0x18]);
+ 
+-		qla24xx_read_flash_data(vha, dcode, pcids >> 2, 0x20);
++		ret = qla24xx_read_flash_data(vha, dcode, pcids >> 2, 0x20);
++		if (ret) {
++			ql_log(ql_log_info, vha, 0x018e,
++			    "Unable to read PCI Data Structure (%x).\n", ret);
++			return QLA_FUNCTION_FAILED;
++		}
++
+ 		bcode = mbuf + (pcihdr % 4);
+ 
+ 		/* Validate signature of PCI data structure. */
+@@ -3454,8 +3486,7 @@ qla24xx_get_flash_version(scsi_qla_host_t *vha, void *mbuf)
+ 			ql_log(ql_log_fatal, vha, 0x005a,
+ 			    "PCI data struct not found pcir_adr=%x.\n", pcids);
+ 			ql_dump_buffer(ql_dbg_init, vha, 0x0059, dcode, 32);
+-			ret = QLA_FUNCTION_FAILED;
+-			break;
++			return QLA_FUNCTION_FAILED;
+ 		}
+ 
+ 		/* Read version */
+@@ -3507,20 +3538,26 @@ qla24xx_get_flash_version(scsi_qla_host_t *vha, void *mbuf)
+ 			faddr = ha->flt_region_fw_sec;
+ 	}
+ 
+-	qla24xx_read_flash_data(vha, dcode, faddr, 8);
+-	if (qla24xx_risc_firmware_invalid(dcode)) {
+-		ql_log(ql_log_warn, vha, 0x005f,
+-		    "Unrecognized fw revision at %x.\n",
+-		    ha->flt_region_fw * 4);
+-		ql_dump_buffer(ql_dbg_init, vha, 0x005f, dcode, 32);
++	ret = qla24xx_read_flash_data(vha, dcode, faddr, 8);
++	if (ret) {
++		ql_log(ql_log_info, vha, 0x019e,
++		    "Unable to read FW version (%x).\n", ret);
++		return ret;
+ 	} else {
+-		for (i = 0; i < 4; i++)
+-			ha->fw_revision[i] =
++		if (qla24xx_risc_firmware_invalid(dcode)) {
++			ql_log(ql_log_warn, vha, 0x005f,
++			    "Unrecognized fw revision at %x.\n",
++			    ha->flt_region_fw * 4);
++			ql_dump_buffer(ql_dbg_init, vha, 0x005f, dcode, 32);
++		} else {
++			for (i = 0; i < 4; i++)
++				ha->fw_revision[i] =
+ 				be32_to_cpu((__force __be32)dcode[4+i]);
+-		ql_dbg(ql_dbg_init, vha, 0x0060,
+-		    "Firmware revision (flash) %u.%u.%u (%x).\n",
+-		    ha->fw_revision[0], ha->fw_revision[1],
+-		    ha->fw_revision[2], ha->fw_revision[3]);
++			ql_dbg(ql_dbg_init, vha, 0x0060,
++			    "Firmware revision (flash) %u.%u.%u (%x).\n",
++			    ha->fw_revision[0], ha->fw_revision[1],
++			    ha->fw_revision[2], ha->fw_revision[3]);
++		}
+ 	}
+ 
+ 	/* Check for golden firmware and get version if available */
+@@ -3531,18 +3568,23 @@ qla24xx_get_flash_version(scsi_qla_host_t *vha, void *mbuf)
+ 
+ 	memset(ha->gold_fw_version, 0, sizeof(ha->gold_fw_version));
+ 	faddr = ha->flt_region_gold_fw;
+-	qla24xx_read_flash_data(vha, dcode, ha->flt_region_gold_fw, 8);
+-	if (qla24xx_risc_firmware_invalid(dcode)) {
+-		ql_log(ql_log_warn, vha, 0x0056,
+-		    "Unrecognized golden fw at %#x.\n", faddr);
+-		ql_dump_buffer(ql_dbg_init, vha, 0x0056, dcode, 32);
++	ret = qla24xx_read_flash_data(vha, dcode, ha->flt_region_gold_fw, 8);
++	if (ret) {
++		ql_log(ql_log_info, vha, 0x019f,
++		    "Unable to read Gold FW version (%x).\n", ret);
+ 		return ret;
+-	}
+-
+-	for (i = 0; i < 4; i++)
+-		ha->gold_fw_version[i] =
+-			be32_to_cpu((__force __be32)dcode[4+i]);
++	} else {
++		if (qla24xx_risc_firmware_invalid(dcode)) {
++			ql_log(ql_log_warn, vha, 0x0056,
++			    "Unrecognized golden fw at %#x.\n", faddr);
++			ql_dump_buffer(ql_dbg_init, vha, 0x0056, dcode, 32);
++			return QLA_FUNCTION_FAILED;
++		}
+ 
++		for (i = 0; i < 4; i++)
++			ha->gold_fw_version[i] =
++			   be32_to_cpu((__force __be32)dcode[4+i]);
++	}
+ 	return ret;
+ }
+ 
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 0b0230b46f28e6..a4c70fbc809f1b 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -3653,11 +3653,16 @@ static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba)
+ 			min_sleep_time_us =
+ 				MIN_DELAY_BEFORE_DME_CMDS_US - delta;
+ 		else
+-			return; /* no more delay required */
++			min_sleep_time_us = 0; /* no more delay required */
+ 	}
+ 
+-	/* allow sleep for extra 50us if needed */
+-	usleep_range(min_sleep_time_us, min_sleep_time_us + 50);
++	if (min_sleep_time_us > 0) {
++		/* allow sleep for extra 50us if needed */
++		usleep_range(min_sleep_time_us, min_sleep_time_us + 50);
++	}
++
++	/* update the last_dme_cmd_tstamp */
++	hba->last_dme_cmd_tstamp = ktime_get();
+ }
+ 
+ /**
+diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
+index 205cc96823b702..6e3cbd82266378 100644
+--- a/drivers/soc/qcom/pdr_interface.c
++++ b/drivers/soc/qcom/pdr_interface.c
+@@ -76,12 +76,12 @@ static int pdr_locator_new_server(struct qmi_handle *qmi,
+ 					      locator_hdl);
+ 	struct pdr_service *pds;
+ 
++	mutex_lock(&pdr->lock);
+ 	/* Create a local client port for QMI communication */
+ 	pdr->locator_addr.sq_family = AF_QIPCRTR;
+ 	pdr->locator_addr.sq_node = svc->node;
+ 	pdr->locator_addr.sq_port = svc->port;
+ 
+-	mutex_lock(&pdr->lock);
+ 	pdr->locator_init_complete = true;
+ 	mutex_unlock(&pdr->lock);
+ 
+@@ -104,10 +104,10 @@ static void pdr_locator_del_server(struct qmi_handle *qmi,
+ 
+ 	mutex_lock(&pdr->lock);
+ 	pdr->locator_init_complete = false;
+-	mutex_unlock(&pdr->lock);
+ 
+ 	pdr->locator_addr.sq_node = 0;
+ 	pdr->locator_addr.sq_port = 0;
++	mutex_unlock(&pdr->lock);
+ }
+ 
+ static struct qmi_ops pdr_locator_ops = {
+@@ -366,12 +366,14 @@ static int pdr_get_domain_list(struct servreg_get_domain_list_req *req,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	mutex_lock(&pdr->lock);
+ 	ret = qmi_send_request(&pdr->locator_hdl,
+ 			       &pdr->locator_addr,
+ 			       &txn, SERVREG_GET_DOMAIN_LIST_REQ,
+ 			       SERVREG_GET_DOMAIN_LIST_REQ_MAX_LEN,
+ 			       servreg_get_domain_list_req_ei,
+ 			       req);
++	mutex_unlock(&pdr->lock);
+ 	if (ret < 0) {
+ 		qmi_txn_cancel(&txn);
+ 		return ret;
+@@ -416,7 +418,7 @@ static int pdr_locate_service(struct pdr_handle *pdr, struct pdr_service *pds)
+ 		if (ret < 0)
+ 			goto out;
+ 
+-		for (i = domains_read; i < resp->domain_list_len; i++) {
++		for (i = 0; i < resp->domain_list_len; i++) {
+ 			entry = &resp->domain_list[i];
+ 
+ 			if (strnlen(entry->name, sizeof(entry->name)) == sizeof(entry->name))
+diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
+index a297911afe5713..26e6dd860b5d9d 100644
+--- a/drivers/soc/qcom/rpmh-rsc.c
++++ b/drivers/soc/qcom/rpmh-rsc.c
+@@ -629,13 +629,14 @@ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
+ {
+ 	struct tcs_group *tcs;
+ 	int tcs_id;
+-	unsigned long flags;
++
++	might_sleep();
+ 
+ 	tcs = get_tcs_for_msg(drv, msg);
+ 	if (IS_ERR(tcs))
+ 		return PTR_ERR(tcs);
+ 
+-	spin_lock_irqsave(&drv->lock, flags);
++	spin_lock_irq(&drv->lock);
+ 
+ 	/* Wait forever for a free tcs. It better be there eventually! */
+ 	wait_event_lock_irq(drv->tcs_wait,
+@@ -654,7 +655,7 @@ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
+ 		write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0);
+ 		enable_tcs_irq(drv, tcs_id, true);
+ 	}
+-	spin_unlock_irqrestore(&drv->lock, flags);
++	spin_unlock_irq(&drv->lock);
+ 
+ 	/*
+ 	 * These two can be done after the lock is released because:
+diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
+index b61e183ede6945..707e176ed4ed82 100644
+--- a/drivers/soc/qcom/rpmh.c
++++ b/drivers/soc/qcom/rpmh.c
+@@ -193,7 +193,6 @@ static int __rpmh_write(const struct device *dev, enum rpmh_state state,
+ 	rpm_msg->msg.state = state;
+ 
+ 	if (state == RPMH_ACTIVE_ONLY_STATE) {
+-		WARN_ON(irqs_disabled());
+ 		ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msg->msg);
+ 	} else {
+ 		/* Clean up our call by spoofing tx_done */
+diff --git a/drivers/soc/xilinx/zynqmp_pm_domains.c b/drivers/soc/xilinx/zynqmp_pm_domains.c
+index 226d343f0a6a5b..81e8e10f10929b 100644
+--- a/drivers/soc/xilinx/zynqmp_pm_domains.c
++++ b/drivers/soc/xilinx/zynqmp_pm_domains.c
+@@ -152,11 +152,17 @@ static int zynqmp_gpd_power_off(struct generic_pm_domain *domain)
+ static int zynqmp_gpd_attach_dev(struct generic_pm_domain *domain,
+ 				 struct device *dev)
+ {
++	struct device_link *link;
+ 	int ret;
+ 	struct zynqmp_pm_domain *pd;
+ 
+ 	pd = container_of(domain, struct zynqmp_pm_domain, gpd);
+ 
++	link = device_link_add(dev, &domain->dev, DL_FLAG_SYNC_STATE_ONLY);
++	if (!link)
++		dev_dbg(&domain->dev, "failed to create device link for %s\n",
++			dev_name(dev));
++
+ 	/* If this is not the first device to attach there is nothing to do */
+ 	if (domain->device_count)
+ 		return 0;
+@@ -299,9 +305,19 @@ static int zynqmp_gpd_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static void zynqmp_gpd_sync_state(struct device *dev)
++{
++	int ret;
++
++	ret = zynqmp_pm_init_finalize();
++	if (ret)
++		dev_warn(dev, "failed to release power management to firmware\n");
++}
++
+ static struct platform_driver zynqmp_power_domain_driver = {
+ 	.driver	= {
+ 		.name = "zynqmp_power_controller",
++		.sync_state = zynqmp_gpd_sync_state,
+ 	},
+ 	.probe = zynqmp_gpd_probe,
+ 	.remove = zynqmp_gpd_remove,
+diff --git a/drivers/soc/xilinx/zynqmp_power.c b/drivers/soc/xilinx/zynqmp_power.c
+index c556623dae0248..2653d29ba829b0 100644
+--- a/drivers/soc/xilinx/zynqmp_power.c
++++ b/drivers/soc/xilinx/zynqmp_power.c
+@@ -178,8 +178,9 @@ static int zynqmp_pm_probe(struct platform_device *pdev)
+ 	u32 pm_api_version;
+ 	struct mbox_client *client;
+ 
+-	zynqmp_pm_init_finalize();
+-	zynqmp_pm_get_api_version(&pm_api_version);
++	ret = zynqmp_pm_get_api_version(&pm_api_version);
++	if (ret)
++		return ret;
+ 
+ 	/* Check PM API version number */
+ 	if (pm_api_version < ZYNQMP_PM_VERSION)
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index c5ff6e8c45be0d..c21d7959dcd237 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -297,7 +297,7 @@ static void fsl_lpspi_set_watermark(struct fsl_lpspi_data *fsl_lpspi)
+ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi)
+ {
+ 	struct lpspi_config config = fsl_lpspi->config;
+-	unsigned int perclk_rate, scldiv;
++	unsigned int perclk_rate, scldiv, div;
+ 	u8 prescale;
+ 
+ 	perclk_rate = clk_get_rate(fsl_lpspi->clk_per);
+@@ -308,8 +308,10 @@ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi)
+ 		return -EINVAL;
+ 	}
+ 
++	div = DIV_ROUND_UP(perclk_rate, config.speed_hz);
++
+ 	for (prescale = 0; prescale < 8; prescale++) {
+-		scldiv = perclk_rate / config.speed_hz / (1 << prescale) - 2;
++		scldiv = div / (1 << prescale) - 2;
+ 		if (scldiv < 256) {
+ 			fsl_lpspi->config.prescale = prescale;
+ 			break;
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 10b8785b998279..c7adcf97e2a33a 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -862,6 +862,14 @@ static int uart_set_info(struct tty_struct *tty, struct tty_port *port,
+ 	new_flags = (__force upf_t)new_info->flags;
+ 	old_custom_divisor = uport->custom_divisor;
+ 
++	if (!(uport->flags & UPF_FIXED_PORT)) {
++		unsigned int uartclk = new_info->baud_base * 16;
++		/* check needs to be done here before other settings made */
++		if (uartclk == 0) {
++			retval = -EINVAL;
++			goto exit;
++		}
++	}
+ 	if (!capable(CAP_SYS_ADMIN)) {
+ 		retval = -EPERM;
+ 		if (change_irq || change_port ||
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index a717b53847a8ef..03ad1ed83c92e6 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -1438,6 +1438,7 @@ void gserial_suspend(struct gserial *gser)
+ 	spin_lock(&port->port_lock);
+ 	spin_unlock(&serial_port_lock);
+ 	port->suspended = true;
++	port->start_delayed = true;
+ 	spin_unlock_irqrestore(&port->port_lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(gserial_suspend);
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index e80aa717c8b56c..dd2fafc5b0c3e6 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -99,12 +99,10 @@ int usb_ep_enable(struct usb_ep *ep)
+ 		goto out;
+ 
+ 	/* UDC drivers can't handle endpoints with maxpacket size 0 */
+-	if (usb_endpoint_maxp(ep->desc) == 0) {
+-		/*
+-		 * We should log an error message here, but we can't call
+-		 * dev_err() because there's no way to find the gadget
+-		 * given only ep.
+-		 */
++	if (!ep->desc || usb_endpoint_maxp(ep->desc) == 0) {
++		WARN_ONCE(1, "%s: ep%d (%s) has %s\n", __func__, ep->address, ep->name,
++			  (!ep->desc) ? "NULL descriptor" : "maxpacket 0");
++
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+diff --git a/drivers/usb/serial/usb_debug.c b/drivers/usb/serial/usb_debug.c
+index aaf4813e4971ee..406cb326e8124a 100644
+--- a/drivers/usb/serial/usb_debug.c
++++ b/drivers/usb/serial/usb_debug.c
+@@ -69,6 +69,11 @@ static void usb_debug_process_read_urb(struct urb *urb)
+ 	usb_serial_generic_process_read_urb(urb);
+ }
+ 
++static void usb_debug_init_termios(struct tty_struct *tty)
++{
++	tty->termios.c_lflag &= ~(ECHO | ECHONL);
++}
++
+ static struct usb_serial_driver debug_device = {
+ 	.driver = {
+ 		.owner =	THIS_MODULE,
+@@ -78,6 +83,7 @@ static struct usb_serial_driver debug_device = {
+ 	.num_ports =		1,
+ 	.bulk_out_size =	USB_DEBUG_MAX_PACKET_SIZE,
+ 	.break_ctl =		usb_debug_break_ctl,
++	.init_termios =		usb_debug_init_termios,
+ 	.process_read_urb =	usb_debug_process_read_urb,
+ };
+ 
+@@ -89,6 +95,7 @@ static struct usb_serial_driver dbc_device = {
+ 	.id_table =		dbc_id_table,
+ 	.num_ports =		1,
+ 	.break_ctl =		usb_debug_break_ctl,
++	.init_termios =		usb_debug_init_termios,
+ 	.process_read_urb =	usb_debug_process_read_urb,
+ };
+ 
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index b07b2925ff78bb..affcb928771d89 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -745,6 +745,7 @@ static int vhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
+ 	 *
+ 	 */
+ 	if (usb_pipedevice(urb->pipe) == 0) {
++		struct usb_device *old;
+ 		__u8 type = usb_pipetype(urb->pipe);
+ 		struct usb_ctrlrequest *ctrlreq =
+ 			(struct usb_ctrlrequest *) urb->setup_packet;
+@@ -755,14 +756,15 @@ static int vhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
+ 			goto no_need_xmit;
+ 		}
+ 
++		old = vdev->udev;
+ 		switch (ctrlreq->bRequest) {
+ 		case USB_REQ_SET_ADDRESS:
+ 			/* set_address may come when a device is reset */
+ 			dev_info(dev, "SetAddress Request (%d) to port %d\n",
+ 				 ctrlreq->wValue, vdev->rhport);
+ 
+-			usb_put_dev(vdev->udev);
+ 			vdev->udev = usb_get_dev(urb->dev);
++			usb_put_dev(old);
+ 
+ 			spin_lock(&vdev->ud.lock);
+ 			vdev->ud.status = VDEV_ST_USED;
+@@ -781,8 +783,8 @@ static int vhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
+ 				usbip_dbg_vhci_hc(
+ 					"Not yet?:Get_Descriptor to device 0 (get max pipe size)\n");
+ 
+-			usb_put_dev(vdev->udev);
+ 			vdev->udev = usb_get_dev(urb->dev);
++			usb_put_dev(old);
+ 			goto out;
+ 
+ 		default:
+@@ -1095,6 +1097,7 @@ static void vhci_shutdown_connection(struct usbip_device *ud)
+ static void vhci_device_reset(struct usbip_device *ud)
+ {
+ 	struct vhci_device *vdev = container_of(ud, struct vhci_device, ud);
++	struct usb_device *old = vdev->udev;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&ud->lock, flags);
+@@ -1102,8 +1105,8 @@ static void vhci_device_reset(struct usbip_device *ud)
+ 	vdev->speed  = 0;
+ 	vdev->devid  = 0;
+ 
+-	usb_put_dev(vdev->udev);
+ 	vdev->udev = NULL;
++	usb_put_dev(old);
+ 
+ 	if (ud->tcp_socket) {
+ 		sockfd_put(ud->tcp_socket);
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index 04578aa87e4daf..c9f585db1553c3 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -519,15 +519,15 @@ static void vhost_vdpa_iotlb_unmap(struct vhost_vdpa *v, u64 start, u64 last)
+ 	unsigned long pfn, pinned;
+ 
+ 	while ((map = vhost_iotlb_itree_first(iotlb, start, last)) != NULL) {
+-		pinned = map->size >> PAGE_SHIFT;
+-		for (pfn = map->addr >> PAGE_SHIFT;
++		pinned = PFN_DOWN(map->size);
++		for (pfn = PFN_DOWN(map->addr);
+ 		     pinned > 0; pfn++, pinned--) {
+ 			page = pfn_to_page(pfn);
+ 			if (map->perm & VHOST_ACCESS_WO)
+ 				set_page_dirty_lock(page);
+ 			unpin_user_page(page);
+ 		}
+-		atomic64_sub(map->size >> PAGE_SHIFT, &dev->mm->pinned_vm);
++		atomic64_sub(PFN_DOWN(map->size), &dev->mm->pinned_vm);
+ 		vhost_iotlb_map_free(iotlb, map);
+ 	}
+ }
+@@ -589,7 +589,7 @@ static int vhost_vdpa_map(struct vhost_vdpa *v,
+ 	if (r)
+ 		vhost_iotlb_del_range(dev->iotlb, iova, iova + size - 1);
+ 	else
+-		atomic64_add(size >> PAGE_SHIFT, &dev->mm->pinned_vm);
++		atomic64_add(PFN_DOWN(size), &dev->mm->pinned_vm);
+ 
+ 	return r;
+ }
+@@ -643,7 +643,7 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
+ 	if (msg->perm & VHOST_ACCESS_WO)
+ 		gup_flags |= FOLL_WRITE;
+ 
+-	npages = PAGE_ALIGN(msg->size + (iova & ~PAGE_MASK)) >> PAGE_SHIFT;
++	npages = PFN_UP(msg->size + (iova & ~PAGE_MASK));
+ 	if (!npages) {
+ 		ret = -EINVAL;
+ 		goto free;
+@@ -651,7 +651,7 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
+ 
+ 	mmap_read_lock(dev->mm);
+ 
+-	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
++	lock_limit = PFN_DOWN(rlimit(RLIMIT_MEMLOCK));
+ 	if (npages + atomic64_read(&dev->mm->pinned_vm) > lock_limit) {
+ 		ret = -ENOMEM;
+ 		goto unlock;
+@@ -685,9 +685,9 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
+ 
+ 			if (last_pfn && (this_pfn != last_pfn + 1)) {
+ 				/* Pin a contiguous chunk of memory */
+-				csize = (last_pfn - map_pfn + 1) << PAGE_SHIFT;
++				csize = PFN_PHYS(last_pfn - map_pfn + 1);
+ 				ret = vhost_vdpa_map(v, iova, csize,
+-						     map_pfn << PAGE_SHIFT,
++						     PFN_PHYS(map_pfn),
+ 						     msg->perm);
+ 				if (ret) {
+ 					/*
+@@ -711,13 +711,13 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
+ 			last_pfn = this_pfn;
+ 		}
+ 
+-		cur_base += pinned << PAGE_SHIFT;
++		cur_base += PFN_PHYS(pinned);
+ 		npages -= pinned;
+ 	}
+ 
+ 	/* Pin the rest chunk */
+-	ret = vhost_vdpa_map(v, iova, (last_pfn - map_pfn + 1) << PAGE_SHIFT,
+-			     map_pfn << PAGE_SHIFT, msg->perm);
++	ret = vhost_vdpa_map(v, iova, PFN_PHYS(last_pfn - map_pfn + 1),
++			     PFN_PHYS(map_pfn), msg->perm);
+ out:
+ 	if (ret) {
+ 		if (nchunks) {
+@@ -959,13 +959,7 @@ static vm_fault_t vhost_vdpa_fault(struct vm_fault *vmf)
+ 
+ 	notify = ops->get_vq_notification(vdpa, index);
+ 
+-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+-	if (remap_pfn_range(vma, vmf->address & PAGE_MASK,
+-			    notify.addr >> PAGE_SHIFT, PAGE_SIZE,
+-			    vma->vm_page_prot))
+-		return VM_FAULT_SIGBUS;
+-
+-	return VM_FAULT_NOPAGE;
++	return vmf_insert_pfn(vma, vmf->address & PAGE_MASK, PFN_DOWN(notify.addr));
+ }
+ 
+ static const struct vm_operations_struct vhost_vdpa_vm_ops = {
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index 4989c60b1df9c3..af52c9e005b3c8 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -798,6 +798,7 @@ static int __load_free_space_cache(struct btrfs_root *root, struct inode *inode,
+ 				spin_unlock(&ctl->tree_lock);
+ 				btrfs_err(fs_info,
+ 					"Duplicate entries in free space cache, dumping");
++				kmem_cache_free(btrfs_free_space_bitmap_cachep, e->bitmap);
+ 				kmem_cache_free(btrfs_free_space_cachep, e);
+ 				goto free_cache;
+ 			}
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index f2aff97348bc9d..4e09d8e0664738 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -783,7 +783,8 @@ static int __init init_caches(void)
+ 	if (!ceph_mds_request_cachep)
+ 		goto bad_mds_req;
+ 
+-	ceph_wb_pagevec_pool = mempool_create_kmalloc_pool(10, CEPH_MAX_WRITE_SIZE >> PAGE_SHIFT);
++	ceph_wb_pagevec_pool = mempool_create_kmalloc_pool(10,
++	    (CEPH_MAX_WRITE_SIZE >> PAGE_SHIFT) * sizeof(struct page *));
+ 	if (!ceph_wb_pagevec_pool)
+ 		goto bad_pagevec_pool;
+ 
+diff --git a/fs/exec.c b/fs/exec.c
+index d5c8f085235bcd..6a4bbe58d3c05b 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1588,6 +1588,7 @@ static void bprm_fill_uid(struct linux_binprm *bprm, struct file *file)
+ 	unsigned int mode;
+ 	kuid_t uid;
+ 	kgid_t gid;
++	int err;
+ 
+ 	if (!mnt_may_suid(file->f_path.mnt))
+ 		return;
+@@ -1603,12 +1604,17 @@ static void bprm_fill_uid(struct linux_binprm *bprm, struct file *file)
+ 	/* Be careful if suid/sgid is set */
+ 	inode_lock(inode);
+ 
+-	/* reload atomically mode/uid/gid now that lock held */
++	/* Atomically reload and check mode/uid/gid now that lock held. */
+ 	mode = inode->i_mode;
+ 	uid = inode->i_uid;
+ 	gid = inode->i_gid;
++	err = inode_permission(inode, MAY_EXEC);
+ 	inode_unlock(inode);
+ 
++	/* Did the exec bit vanish out from under us? Give up. */
++	if (err)
++		return;
++
+ 	/* We ignore suid/sgid if there are no mappings for them in the ns */
+ 	if (!kuid_has_mapping(bprm->cred->user_ns, uid) ||
+ 		 !kgid_has_mapping(bprm->cred->user_ns, gid))
+diff --git a/fs/ext2/balloc.c b/fs/ext2/balloc.c
+index c17ccc19b938e2..9bf086821eb3ac 100644
+--- a/fs/ext2/balloc.c
++++ b/fs/ext2/balloc.c
+@@ -79,26 +79,33 @@ static int ext2_valid_block_bitmap(struct super_block *sb,
+ 	ext2_grpblk_t next_zero_bit;
+ 	ext2_fsblk_t bitmap_blk;
+ 	ext2_fsblk_t group_first_block;
++	ext2_grpblk_t max_bit;
+ 
+ 	group_first_block = ext2_group_first_block_no(sb, block_group);
++	max_bit = ext2_group_last_block_no(sb, block_group) - group_first_block;
+ 
+ 	/* check whether block bitmap block number is set */
+ 	bitmap_blk = le32_to_cpu(desc->bg_block_bitmap);
+ 	offset = bitmap_blk - group_first_block;
+-	if (!ext2_test_bit(offset, bh->b_data))
++	if (offset < 0 || offset > max_bit ||
++	    !ext2_test_bit(offset, bh->b_data))
+ 		/* bad block bitmap */
+ 		goto err_out;
+ 
+ 	/* check whether the inode bitmap block number is set */
+ 	bitmap_blk = le32_to_cpu(desc->bg_inode_bitmap);
+ 	offset = bitmap_blk - group_first_block;
+-	if (!ext2_test_bit(offset, bh->b_data))
++	if (offset < 0 || offset > max_bit ||
++	    !ext2_test_bit(offset, bh->b_data))
+ 		/* bad block bitmap */
+ 		goto err_out;
+ 
+ 	/* check whether the inode table block number is set */
+ 	bitmap_blk = le32_to_cpu(desc->bg_inode_table);
+ 	offset = bitmap_blk - group_first_block;
++	if (offset < 0 || offset > max_bit ||
++	    offset + EXT2_SB(sb)->s_itb_per_group - 1 > max_bit)
++		goto err_out;
+ 	next_zero_bit = ext2_find_next_zero_bit(bh->b_data,
+ 				offset + EXT2_SB(sb)->s_itb_per_group,
+ 				offset);
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index f37e62546745b0..be3b3ccbf70b61 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -312,6 +312,8 @@ void ext4_es_find_extent_range(struct inode *inode,
+ 			       ext4_lblk_t lblk, ext4_lblk_t end,
+ 			       struct extent_status *es)
+ {
++	es->es_lblk = es->es_len = es->es_pblk = 0;
++
+ 	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
+ 		return;
+ 
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 8b48ed351c4b95..6e9323a56d289f 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -484,6 +484,35 @@ static void ext4_map_blocks_es_recheck(handle_t *handle,
+ }
+ #endif /* ES_AGGRESSIVE_TEST */
+ 
++static int ext4_map_query_blocks(handle_t *handle, struct inode *inode,
++				 struct ext4_map_blocks *map)
++{
++	unsigned int status;
++	int retval;
++
++	if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
++		retval = ext4_ext_map_blocks(handle, inode, map, 0);
++	else
++		retval = ext4_ind_map_blocks(handle, inode, map, 0);
++
++	if (retval <= 0)
++		return retval;
++
++	if (unlikely(retval != map->m_len)) {
++		ext4_warning(inode->i_sb,
++			     "ES len assertion failed for inode "
++			     "%lu: retval %d != map->m_len %d",
++			     inode->i_ino, retval, map->m_len);
++		WARN_ON(1);
++	}
++
++	status = map->m_flags & EXT4_MAP_UNWRITTEN ?
++			EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
++	ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
++			      map->m_pblk, status);
++	return retval;
++}
++
+ /*
+  * The ext4_map_blocks() function tries to look up the requested blocks,
+  * and returns if the blocks are already mapped.
+@@ -1731,6 +1760,7 @@ static int ext4_da_map_blocks(struct inode *inode, sector_t iblock,
+ 		if (ext4_es_is_hole(&es))
+ 			goto add_delayed;
+ 
++found:
+ 		/*
+ 		 * Delayed extent could be allocated by fallocate.
+ 		 * So we need to check it.
+@@ -1767,36 +1797,34 @@ static int ext4_da_map_blocks(struct inode *inode, sector_t iblock,
+ 	down_read(&EXT4_I(inode)->i_data_sem);
+ 	if (ext4_has_inline_data(inode))
+ 		retval = 0;
+-	else if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+-		retval = ext4_ext_map_blocks(NULL, inode, map, 0);
+ 	else
+-		retval = ext4_ind_map_blocks(NULL, inode, map, 0);
+-	if (retval < 0) {
+-		up_read(&EXT4_I(inode)->i_data_sem);
++		retval = ext4_map_query_blocks(NULL, inode, map);
++	up_read(&EXT4_I(inode)->i_data_sem);
++	if (retval)
+ 		return retval;
+-	}
+-	if (retval > 0) {
+-		unsigned int status;
+ 
+-		if (unlikely(retval != map->m_len)) {
+-			ext4_warning(inode->i_sb,
+-				     "ES len assertion failed for inode "
+-				     "%lu: retval %d != map->m_len %d",
+-				     inode->i_ino, retval, map->m_len);
+-			WARN_ON(1);
++add_delayed:
++	down_write(&EXT4_I(inode)->i_data_sem);
++	/*
++	 * Page fault path (ext4_page_mkwrite does not take i_rwsem)
++	 * and fallocate path (no folio lock) can race. Make sure we
++	 * lookup the extent status tree here again while i_data_sem
++	 * is held in write mode, before inserting a new da entry in
++	 * the extent status tree.
++	 */
++	if (ext4_es_lookup_extent(inode, iblock, NULL, &es)) {
++		if (!ext4_es_is_hole(&es)) {
++			up_write(&EXT4_I(inode)->i_data_sem);
++			goto found;
++		}
++	} else if (!ext4_has_inline_data(inode)) {
++		retval = ext4_map_query_blocks(NULL, inode, map);
++		if (retval) {
++			up_write(&EXT4_I(inode)->i_data_sem);
++			return retval;
+ 		}
+-
+-		status = map->m_flags & EXT4_MAP_UNWRITTEN ?
+-				EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
+-		ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
+-				      map->m_pblk, status);
+-		up_read(&EXT4_I(inode)->i_data_sem);
+-		return retval;
+ 	}
+-	up_read(&EXT4_I(inode)->i_data_sem);
+ 
+-add_delayed:
+-	down_write(&EXT4_I(inode)->i_data_sem);
+ 	retval = ext4_insert_delayed_block(inode, map->m_lblk);
+ 	up_write(&EXT4_I(inode)->i_data_sem);
+ 	if (retval)
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 87378d08a414bd..bc5db22df9fe72 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -1927,8 +1927,7 @@ int ext4_mb_find_by_goal(struct ext4_allocation_context *ac,
+ 	if (max >= ac->ac_g_ex.fe_len && ac->ac_g_ex.fe_len == sbi->s_stripe) {
+ 		ext4_fsblk_t start;
+ 
+-		start = ext4_group_first_block_no(ac->ac_sb, e4b->bd_group) +
+-			ex.fe_start;
++		start = ext4_grp_offs_to_block(ac->ac_sb, &ex);
+ 		/* use do_div to get remainder (would be 64-bit modulo) */
+ 		if (do_div(start, sbi->s_stripe) == 0) {
+ 			ac->ac_found++;
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index af0801593abb8b..bf312f94c3bf7c 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -145,10 +145,11 @@ static struct buffer_head *__ext4_read_dirblock(struct inode *inode,
+ 
+ 		return bh;
+ 	}
+-	if (!bh && (type == INDEX || type == DIRENT_HTREE)) {
++	/* The first directory block must not be a hole. */
++	if (!bh && (type == INDEX || type == DIRENT_HTREE || block == 0)) {
+ 		ext4_error_inode(inode, func, line, block,
+-				 "Directory hole found for htree %s block",
+-				 (type == INDEX) ? "index" : "leaf");
++				 "Directory hole found for htree %s block %u",
++				 (type == INDEX) ? "index" : "leaf", block);
+ 		return ERR_PTR(-EFSCORRUPTED);
+ 	}
+ 	if (!bh)
+@@ -2098,6 +2099,52 @@ static int add_dirent_to_buf(handle_t *handle, struct ext4_filename *fname,
+ 	return err ? err : err2;
+ }
+ 
++static bool ext4_check_dx_root(struct inode *dir, struct dx_root *root)
++{
++	struct fake_dirent *fde;
++	const char *error_msg;
++	unsigned int rlen;
++	unsigned int blocksize = dir->i_sb->s_blocksize;
++	char *blockend = (char *)root + dir->i_sb->s_blocksize;
++
++	fde = &root->dot;
++	if (unlikely(fde->name_len != 1)) {
++		error_msg = "invalid name_len for '.'";
++		goto corrupted;
++	}
++	if (unlikely(strncmp(root->dot_name, ".", fde->name_len))) {
++		error_msg = "invalid name for '.'";
++		goto corrupted;
++	}
++	rlen = ext4_rec_len_from_disk(fde->rec_len, blocksize);
++	if (unlikely((char *)fde + rlen >= blockend)) {
++		error_msg = "invalid rec_len for '.'";
++		goto corrupted;
++	}
++
++	fde = &root->dotdot;
++	if (unlikely(fde->name_len != 2)) {
++		error_msg = "invalid name_len for '..'";
++		goto corrupted;
++	}
++	if (unlikely(strncmp(root->dotdot_name, "..", fde->name_len))) {
++		error_msg = "invalid name for '..'";
++		goto corrupted;
++	}
++	rlen = ext4_rec_len_from_disk(fde->rec_len, blocksize);
++	if (unlikely((char *)fde + rlen >= blockend)) {
++		error_msg = "invalid rec_len for '..'";
++		goto corrupted;
++	}
++
++	return true;
++
++corrupted:
++	EXT4_ERROR_INODE(dir, "Corrupt dir, %s, running e2fsck is recommended",
++			 error_msg);
++	return false;
++}
++
+ /*
+  * This converts a one block unindexed directory to a 3 block indexed
+  * directory, and adds the dentry to the indexed directory.
+@@ -2131,17 +2178,17 @@ static int make_indexed_dir(handle_t *handle, struct ext4_filename *fname,
+ 		brelse(bh);
+ 		return retval;
+ 	}
++
+ 	root = (struct dx_root *) bh->b_data;
++	if (!ext4_check_dx_root(dir, root)) {
++		brelse(bh);
++		return -EFSCORRUPTED;
++	}
+ 
+ 	/* The 0th block becomes the root, move the dirents out */
+ 	fde = &root->dotdot;
+ 	de = (struct ext4_dir_entry_2 *)((char *)fde +
+ 		ext4_rec_len_from_disk(fde->rec_len, blocksize));
+-	if ((char *) de >= (((char *) root) + blocksize)) {
+-		EXT4_ERROR_INODE(dir, "invalid rec_len for '..'");
+-		brelse(bh);
+-		return -EFSCORRUPTED;
+-	}
+ 	len = ((char *) root) + (blocksize - csum_size) - (char *) de;
+ 
+ 	/* Allocate new block for the 0th block's dirents */
+@@ -2931,10 +2978,7 @@ bool ext4_empty_dir(struct inode *inode)
+ 		EXT4_ERROR_INODE(inode, "invalid size");
+ 		return false;
+ 	}
+-	/* The first directory block must not be a hole,
+-	 * so treat it as DIRENT_HTREE
+-	 */
+-	bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
++	bh = ext4_read_dirblock(inode, 0, EITHER);
+ 	if (IS_ERR(bh))
+ 		return false;
+ 
+@@ -3565,10 +3609,7 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle,
+ 		struct ext4_dir_entry_2 *de;
+ 		unsigned int offset;
+ 
+-		/* The first directory block must not be a hole, so
+-		 * treat it as DIRENT_HTREE
+-		 */
+-		bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
++		bh = ext4_read_dirblock(inode, 0, EITHER);
+ 		if (IS_ERR(bh)) {
+ 			*retval = PTR_ERR(bh);
+ 			return NULL;
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 2dcb7d2bb85e80..b91a1d1099d598 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1377,6 +1377,12 @@ static int ext4_xattr_inode_write(handle_t *handle, struct inode *ea_inode,
+ 			goto out;
+ 
+ 		memcpy(bh->b_data, buf, csize);
++		/*
++		 * Zero out block tail to avoid writing uninitialized memory
++		 * to disk.
++		 */
++		if (csize < blocksize)
++			memset(bh->b_data + csize, 0, blocksize - csize);
+ 		set_buffer_uptodate(bh);
+ 		ext4_handle_dirty_metadata(handle, ea_inode, bh);
+ 
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 724760353bcd59..b23e6a848e9b7a 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -23,6 +23,9 @@ void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync)
+ 	if (is_inode_flag_set(inode, FI_NEW_INODE))
+ 		return;
+ 
++	if (f2fs_readonly(F2FS_I_SB(inode)->sb))
++		return;
++
+ 	if (f2fs_inode_dirtied(inode, sync))
+ 		return;
+ 
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 979296b835b5a9..665e0e186687d8 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -371,7 +371,8 @@ static inline unsigned int get_ckpt_valid_blocks(struct f2fs_sb_info *sbi,
+ 				unsigned int segno, bool use_section)
+ {
+ 	if (use_section && __is_large_section(sbi)) {
+-		unsigned int start_segno = START_SEGNO(segno);
++		unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
++		unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
+ 		unsigned int blocks = 0;
+ 		int i;
+ 
+diff --git a/fs/file.c b/fs/file.c
+index 913f7d897d2fc6..105a084b7924df 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -1057,6 +1057,7 @@ __releases(&files->file_lock)
+ 	 * tables and this condition does not arise without those.
+ 	 */
+ 	fdt = files_fdtable(files);
++	fd = array_index_nospec(fd, fdt->max_fds);
+ 	tofree = fdt->fd[fd];
+ 	if (!tofree && fd_is_open(fd, fdt))
+ 		goto Ebusy;
+diff --git a/fs/fuse/control.c b/fs/fuse/control.c
+index 24b4d9db231db8..79f01d09c78cb8 100644
+--- a/fs/fuse/control.c
++++ b/fs/fuse/control.c
+@@ -328,7 +328,7 @@ void fuse_ctl_remove_conn(struct fuse_conn *fc)
+ 	drop_nlink(d_inode(fuse_control_sb->s_root));
+ }
+ 
+-static int fuse_ctl_fill_super(struct super_block *sb, struct fs_context *fctx)
++static int fuse_ctl_fill_super(struct super_block *sb, struct fs_context *fsc)
+ {
+ 	static const struct tree_descr empty_descr = {""};
+ 	struct fuse_conn *fc;
+@@ -354,18 +354,18 @@ static int fuse_ctl_fill_super(struct super_block *sb, struct fs_context *fctx)
+ 	return 0;
+ }
+ 
+-static int fuse_ctl_get_tree(struct fs_context *fc)
++static int fuse_ctl_get_tree(struct fs_context *fsc)
+ {
+-	return get_tree_single(fc, fuse_ctl_fill_super);
++	return get_tree_single(fsc, fuse_ctl_fill_super);
+ }
+ 
+ static const struct fs_context_operations fuse_ctl_context_ops = {
+ 	.get_tree	= fuse_ctl_get_tree,
+ };
+ 
+-static int fuse_ctl_init_fs_context(struct fs_context *fc)
++static int fuse_ctl_init_fs_context(struct fs_context *fsc)
+ {
+-	fc->ops = &fuse_ctl_context_ops;
++	fsc->ops = &fuse_ctl_context_ops;
+ 	return 0;
+ }
+ 
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 4a7ebccd359eed..a5d1eb0bc52149 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -141,12 +141,12 @@ static void fuse_evict_inode(struct inode *inode)
+ 	}
+ }
+ 
+-static int fuse_reconfigure(struct fs_context *fc)
++static int fuse_reconfigure(struct fs_context *fsc)
+ {
+-	struct super_block *sb = fc->root->d_sb;
++	struct super_block *sb = fsc->root->d_sb;
+ 
+ 	sync_filesystem(sb);
+-	if (fc->sb_flags & SB_MANDLOCK)
++	if (fsc->sb_flags & SB_MANDLOCK)
+ 		return -EINVAL;
+ 
+ 	return 0;
+@@ -535,38 +535,40 @@ static const struct fs_parameter_spec fuse_fs_parameters[] = {
+ 	{}
+ };
+ 
+-static int fuse_parse_param(struct fs_context *fc, struct fs_parameter *param)
++static int fuse_parse_param(struct fs_context *fsc, struct fs_parameter *param)
+ {
+ 	struct fs_parse_result result;
+-	struct fuse_fs_context *ctx = fc->fs_private;
++	struct fuse_fs_context *ctx = fsc->fs_private;
+ 	int opt;
++	kuid_t kuid;
++	kgid_t kgid;
+ 
+-	if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE) {
++	if (fsc->purpose == FS_CONTEXT_FOR_RECONFIGURE) {
+ 		/*
+ 		 * Ignore options coming from mount(MS_REMOUNT) for backward
+ 		 * compatibility.
+ 		 */
+-		if (fc->oldapi)
++		if (fsc->oldapi)
+ 			return 0;
+ 
+-		return invalfc(fc, "No changes allowed in reconfigure");
++		return invalfc(fsc, "No changes allowed in reconfigure");
+ 	}
+ 
+-	opt = fs_parse(fc, fuse_fs_parameters, param, &result);
++	opt = fs_parse(fsc, fuse_fs_parameters, param, &result);
+ 	if (opt < 0)
+ 		return opt;
+ 
+ 	switch (opt) {
+ 	case OPT_SOURCE:
+-		if (fc->source)
+-			return invalfc(fc, "Multiple sources specified");
+-		fc->source = param->string;
++		if (fsc->source)
++			return invalfc(fsc, "Multiple sources specified");
++		fsc->source = param->string;
+ 		param->string = NULL;
+ 		break;
+ 
+ 	case OPT_SUBTYPE:
+ 		if (ctx->subtype)
+-			return invalfc(fc, "Multiple subtypes specified");
++			return invalfc(fsc, "Multiple subtypes specified");
+ 		ctx->subtype = param->string;
+ 		param->string = NULL;
+ 		return 0;
+@@ -578,22 +580,36 @@ static int fuse_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 
+ 	case OPT_ROOTMODE:
+ 		if (!fuse_valid_type(result.uint_32))
+-			return invalfc(fc, "Invalid rootmode");
++			return invalfc(fsc, "Invalid rootmode");
+ 		ctx->rootmode = result.uint_32;
+ 		ctx->rootmode_present = true;
+ 		break;
+ 
+ 	case OPT_USER_ID:
+-		ctx->user_id = make_kuid(fc->user_ns, result.uint_32);
+-		if (!uid_valid(ctx->user_id))
+-			return invalfc(fc, "Invalid user_id");
++		kuid =  make_kuid(fsc->user_ns, result.uint_32);
++		if (!uid_valid(kuid))
++			return invalfc(fsc, "Invalid user_id");
++		/*
++		 * The requested uid must be representable in the
++		 * filesystem's idmapping.
++		 */
++		if (!kuid_has_mapping(fsc->user_ns, kuid))
++			return invalfc(fsc, "Invalid user_id");
++		ctx->user_id = kuid;
+ 		ctx->user_id_present = true;
+ 		break;
+ 
+ 	case OPT_GROUP_ID:
+-		ctx->group_id = make_kgid(fc->user_ns, result.uint_32);
+-		if (!gid_valid(ctx->group_id))
+-			return invalfc(fc, "Invalid group_id");
++		kgid = make_kgid(fsc->user_ns, result.uint_32);;
++		if (!gid_valid(kgid))
++			return invalfc(fsc, "Invalid group_id");
++		/*
++		 * The requested gid must be representable in the
++		 * filesystem's idmapping.
++		 */
++		if (!kgid_has_mapping(fsc->user_ns, kgid))
++			return invalfc(fsc, "Invalid group_id");
++		ctx->group_id = kgid;
+ 		ctx->group_id_present = true;
+ 		break;
+ 
+@@ -611,7 +627,7 @@ static int fuse_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 
+ 	case OPT_BLKSIZE:
+ 		if (!ctx->is_bdev)
+-			return invalfc(fc, "blksize only supported for fuseblk");
++			return invalfc(fsc, "blksize only supported for fuseblk");
+ 		ctx->blksize = result.uint_32;
+ 		break;
+ 
+@@ -622,9 +638,9 @@ static int fuse_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 	return 0;
+ }
+ 
+-static void fuse_free_fc(struct fs_context *fc)
++static void fuse_free_fsc(struct fs_context *fsc)
+ {
+-	struct fuse_fs_context *ctx = fc->fs_private;
++	struct fuse_fs_context *ctx = fsc->fs_private;
+ 
+ 	if (ctx) {
+ 		kfree(ctx->subtype);
+@@ -1486,9 +1502,9 @@ static int fuse_fill_super(struct super_block *sb, struct fs_context *fsc)
+ 	return err;
+ }
+ 
+-static int fuse_get_tree(struct fs_context *fc)
++static int fuse_get_tree(struct fs_context *fsc)
+ {
+-	struct fuse_fs_context *ctx = fc->fs_private;
++	struct fuse_fs_context *ctx = fsc->fs_private;
+ 
+ 	if (!ctx->fd_present || !ctx->rootmode_present ||
+ 	    !ctx->user_id_present || !ctx->group_id_present)
+@@ -1496,14 +1512,14 @@ static int fuse_get_tree(struct fs_context *fc)
+ 
+ #ifdef CONFIG_BLOCK
+ 	if (ctx->is_bdev)
+-		return get_tree_bdev(fc, fuse_fill_super);
++		return get_tree_bdev(fsc, fuse_fill_super);
+ #endif
+ 
+-	return get_tree_nodev(fc, fuse_fill_super);
++	return get_tree_nodev(fsc, fuse_fill_super);
+ }
+ 
+ static const struct fs_context_operations fuse_context_ops = {
+-	.free		= fuse_free_fc,
++	.free		= fuse_free_fsc,
+ 	.parse_param	= fuse_parse_param,
+ 	.reconfigure	= fuse_reconfigure,
+ 	.get_tree	= fuse_get_tree,
+@@ -1512,7 +1528,7 @@ static const struct fs_context_operations fuse_context_ops = {
+ /*
+  * Set up the filesystem mount context.
+  */
+-static int fuse_init_fs_context(struct fs_context *fc)
++static int fuse_init_fs_context(struct fs_context *fsc)
+ {
+ 	struct fuse_fs_context *ctx;
+ 
+@@ -1525,14 +1541,14 @@ static int fuse_init_fs_context(struct fs_context *fc)
+ 	ctx->legacy_opts_show = true;
+ 
+ #ifdef CONFIG_BLOCK
+-	if (fc->fs_type == &fuseblk_fs_type) {
++	if (fsc->fs_type == &fuseblk_fs_type) {
+ 		ctx->is_bdev = true;
+ 		ctx->destroy = true;
+ 	}
+ #endif
+ 
+-	fc->fs_private = ctx;
+-	fc->ops = &fuse_context_ops;
++	fsc->fs_private = ctx;
++	fsc->ops = &fuse_context_ops;
+ 	return 0;
+ }
+ 
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index faadc80485e7fe..7d4655022afc6d 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -97,14 +97,14 @@ static const struct fs_parameter_spec virtio_fs_parameters[] = {
+ 	{}
+ };
+ 
+-static int virtio_fs_parse_param(struct fs_context *fc,
++static int virtio_fs_parse_param(struct fs_context *fsc,
+ 				 struct fs_parameter *param)
+ {
+ 	struct fs_parse_result result;
+-	struct fuse_fs_context *ctx = fc->fs_private;
++	struct fuse_fs_context *ctx = fsc->fs_private;
+ 	int opt;
+ 
+-	opt = fs_parse(fc, virtio_fs_parameters, param, &result);
++	opt = fs_parse(fsc, virtio_fs_parameters, param, &result);
+ 	if (opt < 0)
+ 		return opt;
+ 
+@@ -119,9 +119,9 @@ static int virtio_fs_parse_param(struct fs_context *fc,
+ 	return 0;
+ }
+ 
+-static void virtio_fs_free_fc(struct fs_context *fc)
++static void virtio_fs_free_fsc(struct fs_context *fsc)
+ {
+-	struct fuse_fs_context *ctx = fc->fs_private;
++	struct fuse_fs_context *ctx = fsc->fs_private;
+ 
+ 	kfree(ctx);
+ }
+@@ -1500,7 +1500,7 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ }
+ 
+ static const struct fs_context_operations virtio_fs_context_ops = {
+-	.free		= virtio_fs_free_fc,
++	.free		= virtio_fs_free_fsc,
+ 	.parse_param	= virtio_fs_parse_param,
+ 	.get_tree	= virtio_fs_get_tree,
+ };
+diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
+index 6fbde990e0566f..3d59cdb3127842 100644
+--- a/fs/hfs/inode.c
++++ b/fs/hfs/inode.c
+@@ -200,6 +200,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
+ 	HFS_I(inode)->flags = 0;
+ 	HFS_I(inode)->rsrc_inode = NULL;
+ 	HFS_I(inode)->fs_blocks = 0;
++	HFS_I(inode)->tz_secondswest = sys_tz.tz_minuteswest * 60;
+ 	if (S_ISDIR(mode)) {
+ 		inode->i_size = 2;
+ 		HFS_SB(sb)->folder_count++;
+@@ -275,6 +276,8 @@ void hfs_inode_read_fork(struct inode *inode, struct hfs_extent *ext,
+ 	for (count = 0, i = 0; i < 3; i++)
+ 		count += be16_to_cpu(ext[i].count);
+ 	HFS_I(inode)->first_blocks = count;
++	HFS_I(inode)->cached_start = 0;
++	HFS_I(inode)->cached_blocks = 0;
+ 
+ 	inode->i_size = HFS_I(inode)->phys_size = log_size;
+ 	HFS_I(inode)->fs_blocks = (log_size + sb->s_blocksize - 1) >> sb->s_blocksize_bits;
+diff --git a/fs/hfsplus/bfind.c b/fs/hfsplus/bfind.c
+index ca2ba8c9f82ef2..901e83d65d2021 100644
+--- a/fs/hfsplus/bfind.c
++++ b/fs/hfsplus/bfind.c
+@@ -25,19 +25,8 @@ int hfs_find_init(struct hfs_btree *tree, struct hfs_find_data *fd)
+ 	fd->key = ptr + tree->max_key_len + 2;
+ 	hfs_dbg(BNODE_REFS, "find_init: %d (%p)\n",
+ 		tree->cnid, __builtin_return_address(0));
+-	switch (tree->cnid) {
+-	case HFSPLUS_CAT_CNID:
+-		mutex_lock_nested(&tree->tree_lock, CATALOG_BTREE_MUTEX);
+-		break;
+-	case HFSPLUS_EXT_CNID:
+-		mutex_lock_nested(&tree->tree_lock, EXTENTS_BTREE_MUTEX);
+-		break;
+-	case HFSPLUS_ATTR_CNID:
+-		mutex_lock_nested(&tree->tree_lock, ATTR_BTREE_MUTEX);
+-		break;
+-	default:
+-		BUG();
+-	}
++	mutex_lock_nested(&tree->tree_lock,
++			hfsplus_btree_lock_class(tree));
+ 	return 0;
+ }
+ 
+diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
+index 7054a542689f9c..c95a2f0ed4a74e 100644
+--- a/fs/hfsplus/extents.c
++++ b/fs/hfsplus/extents.c
+@@ -430,7 +430,8 @@ int hfsplus_free_fork(struct super_block *sb, u32 cnid,
+ 		hfsplus_free_extents(sb, ext_entry, total_blocks - start,
+ 				     total_blocks);
+ 		total_blocks = start;
+-		mutex_lock(&fd.tree->tree_lock);
++		mutex_lock_nested(&fd.tree->tree_lock,
++			hfsplus_btree_lock_class(fd.tree));
+ 	} while (total_blocks > blocks);
+ 	hfs_find_exit(&fd);
+ 
+@@ -592,7 +593,8 @@ void hfsplus_file_truncate(struct inode *inode)
+ 					     alloc_cnt, alloc_cnt - blk_cnt);
+ 			hfsplus_dump_extent(hip->first_extents);
+ 			hip->first_blocks = blk_cnt;
+-			mutex_lock(&fd.tree->tree_lock);
++			mutex_lock_nested(&fd.tree->tree_lock,
++				hfsplus_btree_lock_class(fd.tree));
+ 			break;
+ 		}
+ 		res = __hfsplus_ext_cache_extent(&fd, inode, alloc_cnt);
+@@ -606,7 +608,8 @@ void hfsplus_file_truncate(struct inode *inode)
+ 		hfsplus_free_extents(sb, hip->cached_extents,
+ 				     alloc_cnt - start, alloc_cnt - blk_cnt);
+ 		hfsplus_dump_extent(hip->cached_extents);
+-		mutex_lock(&fd.tree->tree_lock);
++		mutex_lock_nested(&fd.tree->tree_lock,
++				hfsplus_btree_lock_class(fd.tree));
+ 		if (blk_cnt > start) {
+ 			hip->extent_state |= HFSPLUS_EXT_DIRTY;
+ 			break;
+diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h
+index c438680ef9f756..bfbe88e804eb03 100644
+--- a/fs/hfsplus/hfsplus_fs.h
++++ b/fs/hfsplus/hfsplus_fs.h
+@@ -557,6 +557,27 @@ static inline __be32 __hfsp_ut2mt(time64_t ut)
+ 	return cpu_to_be32(lower_32_bits(ut) + HFSPLUS_UTC_OFFSET);
+ }
+ 
++static inline enum hfsplus_btree_mutex_classes
++hfsplus_btree_lock_class(struct hfs_btree *tree)
++{
++	enum hfsplus_btree_mutex_classes class;
++
++	switch (tree->cnid) {
++	case HFSPLUS_CAT_CNID:
++		class = CATALOG_BTREE_MUTEX;
++		break;
++	case HFSPLUS_EXT_CNID:
++		class = EXTENTS_BTREE_MUTEX;
++		break;
++	case HFSPLUS_ATTR_CNID:
++		class = ATTR_BTREE_MUTEX;
++		break;
++	default:
++		BUG();
++	}
++	return class;
++}
++
+ /* compatibility */
+ #define hfsp_mt2ut(t)		(struct timespec64){ .tv_sec = __hfsp_mt2ut(t) }
+ #define hfsp_ut2mt(t)		__hfsp_ut2mt((t).tv_sec)
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index db137671a41f1b..7d548821854e0d 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -813,7 +813,7 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ 		if (first_block < journal->j_tail)
+ 			freed += journal->j_last - journal->j_first;
+ 		/* Update tail only if we free significant amount of space */
+-		if (freed < jbd2_journal_get_max_txn_bufs(journal))
++		if (freed < journal->j_max_transaction_buffers)
+ 			update_tail = 0;
+ 	}
+ 	J_ASSERT(commit_transaction->t_state == T_COMMIT);
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index effd837b8c1ff1..205e6c7c2fd0c3 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -412,6 +412,7 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction,
+ 		tmp = jbd2_alloc(bh_in->b_size, GFP_NOFS);
+ 		if (!tmp) {
+ 			brelse(new_bh);
++			free_buffer_head(new_bh);
+ 			return -ENOMEM;
+ 		}
+ 		spin_lock(&jh_in->b_state_lock);
+@@ -1481,6 +1482,11 @@ static void journal_fail_superblock(journal_t *journal)
+ 	journal->j_sb_buffer = NULL;
+ }
+ 
++static int jbd2_journal_get_max_txn_bufs(journal_t *journal)
++{
++	return (journal->j_total_len - journal->j_fc_wbufsize) / 4;
++}
++
+ /*
+  * Given a journal_t structure, initialise the various fields for
+  * startup of a new journaling session.  We use this both when creating
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index b0965f3ef18658..36ed7568206485 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -292,7 +292,7 @@ int diSync(struct inode *ipimap)
+ int diRead(struct inode *ip)
+ {
+ 	struct jfs_sb_info *sbi = JFS_SBI(ip->i_sb);
+-	int iagno, ino, extno, rc;
++	int iagno, ino, extno, rc, agno;
+ 	struct inode *ipimap;
+ 	struct dinode *dp;
+ 	struct iag *iagp;
+@@ -341,8 +341,11 @@ int diRead(struct inode *ip)
+ 
+ 	/* get the ag for the iag */
+ 	agstart = le64_to_cpu(iagp->agstart);
++	agno = BLKTOAG(agstart, JFS_SBI(ip->i_sb));
+ 
+ 	release_metapage(mp);
++	if (agno >= MAXAG || agno < 0)
++		return -EIO;
+ 
+ 	rel_inode = (ino & (INOSPERPAGE - 1));
+ 	pageno = blkno >> sbi->l2nbperpage;
+diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
+index 1776121677e28b..28a726553318b0 100644
+--- a/fs/nilfs2/btnode.c
++++ b/fs/nilfs2/btnode.c
+@@ -51,12 +51,21 @@ nilfs_btnode_create_block(struct address_space *btnc, __u64 blocknr)
+ 
+ 	bh = nilfs_grab_buffer(inode, btnc, blocknr, BIT(BH_NILFS_Node));
+ 	if (unlikely(!bh))
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	if (unlikely(buffer_mapped(bh) || buffer_uptodate(bh) ||
+ 		     buffer_dirty(bh))) {
+-		brelse(bh);
+-		BUG();
++		/*
++		 * The block buffer at the specified new address was already
++		 * in use.  This can happen if it is a virtual block number
++		 * and has been reallocated due to corruption of the bitmap
++		 * used to manage its allocation state (if not, the buffer
++		 * clearing of an abandoned b-tree node is missing somewhere).
++		 */
++		nilfs_error(inode->i_sb,
++			    "state inconsistency probably due to duplicate use of b-tree node block address %llu (ino=%lu)",
++			    (unsigned long long)blocknr, inode->i_ino);
++		goto failed;
+ 	}
+ 	memset(bh->b_data, 0, i_blocksize(inode));
+ 	bh->b_bdev = inode->i_sb->s_bdev;
+@@ -67,6 +76,12 @@ nilfs_btnode_create_block(struct address_space *btnc, __u64 blocknr)
+ 	unlock_page(bh->b_page);
+ 	put_page(bh->b_page);
+ 	return bh;
++
++failed:
++	unlock_page(bh->b_page);
++	put_page(bh->b_page);
++	brelse(bh);
++	return ERR_PTR(-EIO);
+ }
+ 
+ int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr,
+@@ -217,8 +232,8 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
+ 	}
+ 
+ 	nbh = nilfs_btnode_create_block(btnc, newkey);
+-	if (!nbh)
+-		return -ENOMEM;
++	if (IS_ERR(nbh))
++		return PTR_ERR(nbh);
+ 
+ 	BUG_ON(nbh == obh);
+ 	ctxt->newbh = nbh;
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index 4905b7cd7bf33d..a426e4e2acdac1 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -63,8 +63,8 @@ static int nilfs_btree_get_new_block(const struct nilfs_bmap *btree,
+ 	struct buffer_head *bh;
+ 
+ 	bh = nilfs_btnode_create_block(btnc, ptr);
+-	if (!bh)
+-		return -ENOMEM;
++	if (IS_ERR(bh))
++		return PTR_ERR(bh);
+ 
+ 	set_buffer_nilfs_volatile(bh);
+ 	*bhp = bh;
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 02407c524382ce..d9f92df15a84f8 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -134,14 +134,9 @@ static void nilfs_segctor_do_flush(struct nilfs_sc_info *, int);
+ static void nilfs_segctor_do_immediate_flush(struct nilfs_sc_info *);
+ static void nilfs_dispose_list(struct the_nilfs *, struct list_head *, int);
+ 
+-#define nilfs_cnt32_gt(a, b)   \
+-	(typecheck(__u32, a) && typecheck(__u32, b) && \
+-	 ((__s32)(b) - (__s32)(a) < 0))
+ #define nilfs_cnt32_ge(a, b)   \
+ 	(typecheck(__u32, a) && typecheck(__u32, b) && \
+-	 ((__s32)(a) - (__s32)(b) >= 0))
+-#define nilfs_cnt32_lt(a, b)  nilfs_cnt32_gt(b, a)
+-#define nilfs_cnt32_le(a, b)  nilfs_cnt32_ge(b, a)
++	 ((__s32)((a) - (b)) >= 0))
+ 
+ static int nilfs_prepare_segment_lock(struct super_block *sb,
+ 				      struct nilfs_transaction_info *ti)
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index aff9593feb73c8..f5c9677353354c 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -471,12 +471,10 @@ static struct inode *proc_sys_make_inode(struct super_block *sb,
+ 			make_empty_dir_inode(inode);
+ 	}
+ 
++	inode->i_uid = GLOBAL_ROOT_UID;
++	inode->i_gid = GLOBAL_ROOT_GID;
+ 	if (root->set_ownership)
+ 		root->set_ownership(head, table, &inode->i_uid, &inode->i_gid);
+-	else {
+-		inode->i_uid = GLOBAL_ROOT_UID;
+-		inode->i_gid = GLOBAL_ROOT_GID;
+-	}
+ 
+ 	return inode;
+ }
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 39b1038076c3ee..97023c0dca60a5 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -1468,6 +1468,8 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ 		}
+ #endif
+ 
++		if (page && !PageAnon(page))
++			flags |= PM_FILE;
+ 		if (page && !migration && page_mapcount(page) == 1)
+ 			flags |= PM_MMAP_EXCLUSIVE;
+ 
+diff --git a/fs/super.c b/fs/super.c
+index f9795e72e3bf8b..282aa36901eb1a 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -518,6 +518,17 @@ struct super_block *sget_fc(struct fs_context *fc,
+ 	struct user_namespace *user_ns = fc->global ? &init_user_ns : fc->user_ns;
+ 	int err;
+ 
++	/*
++	 * Never allow s_user_ns != &init_user_ns when FS_USERNS_MOUNT is
++	 * not set, as the filesystem is likely unprepared to handle it.
++	 * This can happen when fsconfig() is called from init_user_ns with
++	 * an fs_fd opened in another user namespace.
++	 */
++	if (user_ns != &init_user_ns && !(fc->fs_type->fs_flags & FS_USERNS_MOUNT)) {
++		errorfc(fc, "VFS: Mounting from non-initial user namespace is not allowed");
++		return ERR_PTR(-EPERM);
++	}
++
+ retry:
+ 	spin_lock(&sb_lock);
+ 	if (test) {
+diff --git a/fs/udf/balloc.c b/fs/udf/balloc.c
+index f416b7fe092fcc..aa73ab1b50a529 100644
+--- a/fs/udf/balloc.c
++++ b/fs/udf/balloc.c
+@@ -22,6 +22,7 @@
+ #include "udfdecl.h"
+ 
+ #include <linux/bitops.h>
++#include <linux/overflow.h>
+ 
+ #include "udf_i.h"
+ #include "udf_sb.h"
+@@ -68,8 +69,12 @@ static int read_block_bitmap(struct super_block *sb,
+ 	}
+ 
+ 	for (i = 0; i < count; i++)
+-		if (udf_test_bit(i + off, bh->b_data))
++		if (udf_test_bit(i + off, bh->b_data)) {
++			bitmap->s_block_bitmap[bitmap_nr] =
++							ERR_PTR(-EFSCORRUPTED);
++			brelse(bh);
+ 			return -EFSCORRUPTED;
++		}
+ 	return 0;
+ }
+ 
+@@ -85,8 +90,15 @@ static int __load_block_bitmap(struct super_block *sb,
+ 			  block_group, nr_groups);
+ 	}
+ 
+-	if (bitmap->s_block_bitmap[block_group])
++	if (bitmap->s_block_bitmap[block_group]) {
++		/*
++		 * The bitmap failed verification in the past. No point in
++		 * trying again.
++		 */
++		if (IS_ERR(bitmap->s_block_bitmap[block_group]))
++			return PTR_ERR(bitmap->s_block_bitmap[block_group]);
+ 		return block_group;
++	}
+ 
+ 	retval = read_block_bitmap(sb, bitmap, block_group, block_group);
+ 	if (retval < 0)
+@@ -133,7 +145,6 @@ static void udf_bitmap_free_blocks(struct super_block *sb,
+ {
+ 	struct udf_sb_info *sbi = UDF_SB(sb);
+ 	struct buffer_head *bh = NULL;
+-	struct udf_part_map *partmap;
+ 	unsigned long block;
+ 	unsigned long block_group;
+ 	unsigned long bit;
+@@ -142,19 +153,9 @@ static void udf_bitmap_free_blocks(struct super_block *sb,
+ 	unsigned long overflow;
+ 
+ 	mutex_lock(&sbi->s_alloc_mutex);
+-	partmap = &sbi->s_partmaps[bloc->partitionReferenceNum];
+-	if (bloc->logicalBlockNum + count < count ||
+-	    (bloc->logicalBlockNum + count) > partmap->s_partition_len) {
+-		udf_debug("%u < %d || %u + %u > %u\n",
+-			  bloc->logicalBlockNum, 0,
+-			  bloc->logicalBlockNum, count,
+-			  partmap->s_partition_len);
+-		goto error_return;
+-	}
+-
++	/* We make sure this cannot overflow when mounting the filesystem */
+ 	block = bloc->logicalBlockNum + offset +
+ 		(sizeof(struct spaceBitmapDesc) << 3);
+-
+ 	do {
+ 		overflow = 0;
+ 		block_group = block >> (sb->s_blocksize_bits + 3);
+@@ -384,7 +385,6 @@ static void udf_table_free_blocks(struct super_block *sb,
+ 				  uint32_t count)
+ {
+ 	struct udf_sb_info *sbi = UDF_SB(sb);
+-	struct udf_part_map *partmap;
+ 	uint32_t start, end;
+ 	uint32_t elen;
+ 	struct kernel_lb_addr eloc;
+@@ -393,16 +393,6 @@ static void udf_table_free_blocks(struct super_block *sb,
+ 	struct udf_inode_info *iinfo;
+ 
+ 	mutex_lock(&sbi->s_alloc_mutex);
+-	partmap = &sbi->s_partmaps[bloc->partitionReferenceNum];
+-	if (bloc->logicalBlockNum + count < count ||
+-	    (bloc->logicalBlockNum + count) > partmap->s_partition_len) {
+-		udf_debug("%u < %d || %u + %u > %u\n",
+-			  bloc->logicalBlockNum, 0,
+-			  bloc->logicalBlockNum, count,
+-			  partmap->s_partition_len);
+-		goto error_return;
+-	}
+-
+ 	iinfo = UDF_I(table);
+ 	udf_add_free_space(sb, sbi->s_partition, count);
+ 
+@@ -677,6 +667,17 @@ void udf_free_blocks(struct super_block *sb, struct inode *inode,
+ {
+ 	uint16_t partition = bloc->partitionReferenceNum;
+ 	struct udf_part_map *map = &UDF_SB(sb)->s_partmaps[partition];
++	uint32_t blk;
++
++	if (check_add_overflow(bloc->logicalBlockNum, offset, &blk) ||
++	    check_add_overflow(blk, count, &blk) ||
++	    bloc->logicalBlockNum + count > map->s_partition_len) {
++		udf_debug("Invalid request to free blocks: (%d, %u), off %u, "
++			  "len %u, partition len %u\n",
++			  partition, bloc->logicalBlockNum, offset, count,
++			  map->s_partition_len);
++		return;
++	}
+ 
+ 	if (map->s_partition_flags & UDF_PART_FLAG_UNALLOC_BITMAP) {
+ 		udf_bitmap_free_blocks(sb, map->s_uspace.s_bitmap,
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 4af9ce34ee8047..1939678f0b6224 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -266,7 +266,8 @@ static void udf_sb_free_bitmap(struct udf_bitmap *bitmap)
+ 	int nr_groups = bitmap->s_nr_groups;
+ 
+ 	for (i = 0; i < nr_groups; i++)
+-		brelse(bitmap->s_block_bitmap[i]);
++		if (!IS_ERR_OR_NULL(bitmap->s_block_bitmap[i]))
++			brelse(bitmap->s_block_bitmap[i]);
+ 
+ 	kvfree(bitmap);
+ }
+diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
+index 3982c2dc541eb6..a3b85459959e82 100644
+--- a/include/linux/compiler_attributes.h
++++ b/include/linux/compiler_attributes.h
+@@ -37,6 +37,7 @@
+ # define __GCC4_has_attribute___nonstring__           0
+ # define __GCC4_has_attribute___no_sanitize_address__ (__GNUC_MINOR__ >= 8)
+ # define __GCC4_has_attribute___no_sanitize_undefined__ (__GNUC_MINOR__ >= 9)
++# define __GCC4_has_attribute___uninitialized__       0
+ # define __GCC4_has_attribute___fallthrough__         0
+ # define __GCC4_has_attribute___warning__             1
+ #endif
+diff --git a/include/linux/irq.h b/include/linux/irq.h
+index b89a8ac83d1bc0..eb7af809e6e532 100644
+--- a/include/linux/irq.h
++++ b/include/linux/irq.h
+@@ -708,10 +708,11 @@ extern struct irq_chip no_irq_chip;
+ extern struct irq_chip dummy_irq_chip;
+ 
+ extern void
+-irq_set_chip_and_handler_name(unsigned int irq, struct irq_chip *chip,
++irq_set_chip_and_handler_name(unsigned int irq, const struct irq_chip *chip,
+ 			      irq_flow_handler_t handle, const char *name);
+ 
+-static inline void irq_set_chip_and_handler(unsigned int irq, struct irq_chip *chip,
++static inline void irq_set_chip_and_handler(unsigned int irq,
++					    const struct irq_chip *chip,
+ 					    irq_flow_handler_t handle)
+ {
+ 	irq_set_chip_and_handler_name(irq, chip, handle, NULL);
+@@ -801,7 +802,7 @@ static inline void irq_set_percpu_devid_flags(unsigned int irq)
+ }
+ 
+ /* Set/get chip/data for an IRQ: */
+-extern int irq_set_chip(unsigned int irq, struct irq_chip *chip);
++extern int irq_set_chip(unsigned int irq, const struct irq_chip *chip);
+ extern int irq_set_handler_data(unsigned int irq, void *data);
+ extern int irq_set_chip_data(unsigned int irq, void *data);
+ extern int irq_set_irq_type(unsigned int irq, unsigned int type);
+diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
+index 9b9743f7538c44..60f53eadfa4220 100644
+--- a/include/linux/irqdomain.h
++++ b/include/linux/irqdomain.h
+@@ -149,6 +149,8 @@ struct irq_domain_chip_generic;
+  * @gc: Pointer to a list of generic chips. There is a helper function for
+  *      setting up one or more generic chips for interrupt controllers
+  *      drivers using the generic chip library which uses this pointer.
++ * @dev: Pointer to a device that the domain represent, and that will be
++ *       used for power management purposes.
+  * @parent: Pointer to parent irq_domain to support hierarchy irq_domains
+  * @debugfs_file: dentry for the domain debugfs file
+  *
+@@ -171,6 +173,7 @@ struct irq_domain {
+ 	struct fwnode_handle *fwnode;
+ 	enum irq_domain_bus_token bus_token;
+ 	struct irq_domain_chip_generic *gc;
++	struct device *dev;
+ #ifdef	CONFIG_IRQ_DOMAIN_HIERARCHY
+ 	struct irq_domain *parent;
+ #endif
+@@ -227,6 +230,13 @@ static inline struct device_node *irq_domain_get_of_node(struct irq_domain *d)
+ 	return to_of_node(d->fwnode);
+ }
+ 
++static inline void irq_domain_set_pm_device(struct irq_domain *d,
++					    struct device *dev)
++{
++	if (d)
++		d->dev = dev;
++}
++
+ #ifdef CONFIG_IRQ_DOMAIN
+ struct fwnode_handle *__irq_domain_alloc_fwnode(unsigned int type, int id,
+ 						const char *name, phys_addr_t *pa);
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index 578ff196b3cef9..80f573d1b3b83b 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -1626,11 +1626,6 @@ int jbd2_wait_inode_data(journal_t *journal, struct jbd2_inode *jinode);
+ int jbd2_fc_wait_bufs(journal_t *journal, int num_blks);
+ int jbd2_fc_release_bufs(journal_t *journal);
+ 
+-static inline int jbd2_journal_get_max_txn_bufs(journal_t *journal)
+-{
+-	return (journal->j_total_len - journal->j_fc_wbufsize) / 4;
+-}
+-
+ /*
+  * is_journal_abort
+  *
+diff --git a/include/linux/objagg.h b/include/linux/objagg.h
+index 78021777df4626..6df5b887dc547c 100644
+--- a/include/linux/objagg.h
++++ b/include/linux/objagg.h
+@@ -8,7 +8,6 @@ struct objagg_ops {
+ 	size_t obj_size;
+ 	bool (*delta_check)(void *priv, const void *parent_obj,
+ 			    const void *obj);
+-	int (*hints_obj_cmp)(const void *obj1, const void *obj2);
+ 	void * (*delta_create)(void *priv, void *parent_obj, void *obj);
+ 	void (*delta_destroy)(void *priv, void *delta_priv);
+ 	void * (*root_create)(void *priv, void *obj, unsigned int root_id);
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 80744a7b5e333f..b2418bfda4a98e 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -2137,6 +2137,8 @@
+ 
+ #define PCI_VENDOR_ID_CHELSIO		0x1425
+ 
++#define PCI_VENDOR_ID_EDIMAX		0x1432
++
+ #define PCI_VENDOR_ID_ADLINK		0x144a
+ 
+ #define PCI_VENDOR_ID_SAMSUNG		0x144d
+diff --git a/include/linux/qed/qed_eth_if.h b/include/linux/qed/qed_eth_if.h
+index 812a4d75116338..4df0bf0a0864e3 100644
+--- a/include/linux/qed/qed_eth_if.h
++++ b/include/linux/qed/qed_eth_if.h
+@@ -145,12 +145,6 @@ struct qed_filter_mcast_params {
+ 	unsigned char mac[64][ETH_ALEN];
+ };
+ 
+-union qed_filter_type_params {
+-	enum qed_filter_rx_mode_type accept_flags;
+-	struct qed_filter_ucast_params ucast;
+-	struct qed_filter_mcast_params mcast;
+-};
+-
+ enum qed_filter_type {
+ 	QED_FILTER_TYPE_UCAST,
+ 	QED_FILTER_TYPE_MCAST,
+@@ -158,11 +152,6 @@ enum qed_filter_type {
+ 	QED_MAX_FILTER_TYPES,
+ };
+ 
+-struct qed_filter_params {
+-	enum qed_filter_type type;
+-	union qed_filter_type_params filter;
+-};
+-
+ struct qed_tunn_params {
+ 	u16 vxlan_port;
+ 	u8 update_vxlan_port;
+@@ -314,8 +303,14 @@ struct qed_eth_ops {
+ 
+ 	int (*q_tx_stop)(struct qed_dev *cdev, u8 rss_id, void *handle);
+ 
+-	int (*filter_config)(struct qed_dev *cdev,
+-			     struct qed_filter_params *params);
++	int (*filter_config_rx_mode)(struct qed_dev *cdev,
++				     enum qed_filter_rx_mode_type type);
++
++	int (*filter_config_ucast)(struct qed_dev *cdev,
++				   struct qed_filter_ucast_params *params);
++
++	int (*filter_config_mcast)(struct qed_dev *cdev,
++				   struct qed_filter_mcast_params *params);
+ 
+ 	int (*fastpath_stop)(struct qed_dev *cdev);
+ 
+diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
+index 2c634010cc7bd1..5e4194e12a7db5 100644
+--- a/include/linux/sched/signal.h
++++ b/include/linux/sched/signal.h
+@@ -348,6 +348,12 @@ extern void sigqueue_free(struct sigqueue *);
+ extern int send_sigqueue(struct sigqueue *, struct pid *, enum pid_type);
+ extern int do_sigaction(int, struct k_sigaction *, struct k_sigaction *);
+ 
++static inline void clear_notify_signal(void)
++{
++	clear_thread_flag(TIF_NOTIFY_SIGNAL);
++	smp_mb__after_atomic();
++}
++
+ static inline int restart_syscall(void)
+ {
+ 	set_tsk_thread_flag(current, TIF_SIGPENDING);
+diff --git a/include/linux/task_work.h b/include/linux/task_work.h
+index 5b8a93f288bb4d..ad943231362709 100644
+--- a/include/linux/task_work.h
++++ b/include/linux/task_work.h
+@@ -24,7 +24,8 @@ int task_work_add(struct task_struct *task, struct callback_head *twork,
+ 
+ struct callback_head *task_work_cancel_match(struct task_struct *task,
+ 	bool (*match)(struct callback_head *, void *data), void *data);
+-struct callback_head *task_work_cancel(struct task_struct *, task_work_func_t);
++struct callback_head *task_work_cancel_func(struct task_struct *, task_work_func_t);
++bool task_work_cancel(struct task_struct *task, struct callback_head *cb);
+ void task_work_run(void);
+ 
+ static inline void exit_task_work(struct task_struct *task)
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index dbf0993153d35d..64af1e11ea13da 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -729,7 +729,6 @@ do {									\
+ struct perf_event;
+ 
+ DECLARE_PER_CPU(struct pt_regs, perf_trace_regs);
+-DECLARE_PER_CPU(int, bpf_kprobe_override);
+ 
+ extern int  perf_trace_init(struct perf_event *event);
+ extern void perf_trace_destroy(struct perf_event *event);
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 2b99ee1303d925..3cc25a5faa2365 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -13,6 +13,7 @@
+ #include <net/netfilter/nf_flow_table.h>
+ #include <net/netlink.h>
+ #include <net/flow_offload.h>
++#include <net/netns/generic.h>
+ 
+ #define NFT_MAX_HOOKS	(NF_INET_INGRESS + 1)
+ 
+@@ -686,10 +687,16 @@ static inline struct nft_expr *nft_set_ext_expr(const struct nft_set_ext *ext)
+ 	return nft_set_ext(ext, NFT_SET_EXT_EXPR);
+ }
+ 
+-static inline bool nft_set_elem_expired(const struct nft_set_ext *ext)
++static inline bool __nft_set_elem_expired(const struct nft_set_ext *ext,
++					  u64 tstamp)
+ {
+ 	return nft_set_ext_exists(ext, NFT_SET_EXT_EXPIRATION) &&
+-	       time_is_before_eq_jiffies64(*nft_set_ext_expiration(ext));
++	       time_after_eq64(tstamp, *nft_set_ext_expiration(ext));
++}
++
++static inline bool nft_set_elem_expired(const struct nft_set_ext *ext)
++{
++	return __nft_set_elem_expired(ext, get_jiffies_64());
+ }
+ 
+ static inline struct nft_set_ext *nft_set_elem_ext(const struct nft_set *set,
+@@ -779,7 +786,7 @@ struct nft_expr_ops {
+ 						struct nft_regs *regs,
+ 						const struct nft_pktinfo *pkt);
+ 	int				(*clone)(struct nft_expr *dst,
+-						 const struct nft_expr *src);
++						 const struct nft_expr *src, gfp_t gfp);
+ 	unsigned int			size;
+ 
+ 	int				(*init)(const struct nft_ctx *ctx,
+@@ -830,7 +837,7 @@ static inline void *nft_expr_priv(const struct nft_expr *expr)
+ 	return (void *)expr->data;
+ }
+ 
+-int nft_expr_clone(struct nft_expr *dst, struct nft_expr *src);
++int nft_expr_clone(struct nft_expr *dst, struct nft_expr *src, gfp_t gfp);
+ void nft_expr_destroy(const struct nft_ctx *ctx, struct nft_expr *expr);
+ int nft_expr_dump(struct sk_buff *skb, unsigned int attr,
+ 		  const struct nft_expr *expr);
+@@ -1580,9 +1587,19 @@ struct nftables_pernet {
+ 	struct list_head	module_list;
+ 	struct list_head	notify_list;
+ 	struct mutex		commit_mutex;
++	u64			tstamp;
+ 	unsigned int		base_seq;
+ 	u8			validate_state;
+ 	unsigned int		gc_seq;
+ };
+ 
++extern unsigned int nf_tables_net_id;
++
++static inline u64 nft_net_tstamp(const struct net *net)
++{
++	struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
++
++	return nft_net->tstamp;
++}
++
+ #endif /* _NET_NF_TABLES_H */
+diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h
+index 33475d061823e8..6d89a7f3f6a4c2 100644
+--- a/include/net/sctp/sctp.h
++++ b/include/net/sctp/sctp.h
+@@ -506,8 +506,8 @@ static inline int sctp_ep_hashfn(struct net *net, __u16 lport)
+ 	return (net_hash_mix(net) + lport) & (sctp_ep_hashsize - 1);
+ }
+ 
+-#define sctp_for_each_hentry(epb, head) \
+-	hlist_for_each_entry(epb, head, node)
++#define sctp_for_each_hentry(ep, head) \
++	hlist_for_each_entry(ep, head, node)
+ 
+ /* Is a socket of this style? */
+ #define sctp_style(sk, style) __sctp_style((sk), (SCTP_SOCKET_##style))
+diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
+index be59e8df0bffc3..108eb62cdc2c5a 100644
+--- a/include/net/sctp/structs.h
++++ b/include/net/sctp/structs.h
+@@ -1218,10 +1218,6 @@ enum sctp_endpoint_type {
+  */
+ 
+ struct sctp_ep_common {
+-	/* Fields to help us manage our entries in the hash tables. */
+-	struct hlist_node node;
+-	int hashent;
+-
+ 	/* Runtime type information.  What kind of endpoint is this? */
+ 	enum sctp_endpoint_type type;
+ 
+@@ -1273,6 +1269,10 @@ struct sctp_endpoint {
+ 	/* Common substructure for endpoint and association. */
+ 	struct sctp_ep_common base;
+ 
++	/* Fields to help us manage our entries in the hash tables. */
++	struct hlist_node node;
++	int hashent;
++
+ 	/* Associations: A list of current associations and mappings
+ 	 *	      to the data consumers for each association. This
+ 	 *	      may be in the form of a hash table or other
+diff --git a/include/trace/events/rpcgss.h b/include/trace/events/rpcgss.h
+index ffdbe6f85da8ba..9c8cb69c791127 100644
+--- a/include/trace/events/rpcgss.h
++++ b/include/trace/events/rpcgss.h
+@@ -52,7 +52,7 @@ TRACE_DEFINE_ENUM(GSS_S_UNSEQ_TOKEN);
+ TRACE_DEFINE_ENUM(GSS_S_GAP_TOKEN);
+ 
+ #define show_gss_status(x)						\
+-	__print_flags(x, "|",						\
++	__print_symbolic(x, 						\
+ 		{ GSS_S_BAD_MECH, "GSS_S_BAD_MECH" },			\
+ 		{ GSS_S_BAD_NAME, "GSS_S_BAD_NAME" },			\
+ 		{ GSS_S_BAD_NAMETYPE, "GSS_S_BAD_NAMETYPE" },		\
+diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h
+index f93ffb1b67398b..40d90053709390 100644
+--- a/include/uapi/linux/netfilter/nf_tables.h
++++ b/include/uapi/linux/netfilter/nf_tables.h
+@@ -1284,7 +1284,7 @@ enum nft_secmark_attributes {
+ #define NFTA_SECMARK_MAX	(__NFTA_SECMARK_MAX - 1)
+ 
+ /* Max security context length */
+-#define NFT_SECMARK_CTX_MAXLEN		256
++#define NFT_SECMARK_CTX_MAXLEN		4096
+ 
+ /**
+  * enum nft_reject_types - nf_tables reject expression reject types
+diff --git a/include/uapi/linux/zorro_ids.h b/include/uapi/linux/zorro_ids.h
+index 6e574d7b7d79cd..393f2ee9c04228 100644
+--- a/include/uapi/linux/zorro_ids.h
++++ b/include/uapi/linux/zorro_ids.h
+@@ -449,6 +449,9 @@
+ #define  ZORRO_PROD_VMC_ISDN_BLASTER_Z2				ZORRO_ID(VMC, 0x01, 0)
+ #define  ZORRO_PROD_VMC_HYPERCOM_4				ZORRO_ID(VMC, 0x02, 0)
+ 
++#define ZORRO_MANUF_CSLAB					0x1400
++#define  ZORRO_PROD_CSLAB_WARP_1260				ZORRO_ID(CSLAB, 0x65, 0)
++
+ #define ZORRO_MANUF_INFORMATION					0x157C
+ #define  ZORRO_PROD_INFORMATION_ISDN_ENGINE_I			ZORRO_ID(INFORMATION, 0x64, 0)
+ 
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index fe8594a0396cae..c5d249f5d21409 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -19,6 +19,7 @@
+ #include "io-wq.h"
+ 
+ #define WORKER_IDLE_TIMEOUT	(5 * HZ)
++#define WORKER_INIT_LIMIT	3
+ 
+ enum {
+ 	IO_WORKER_F_UP		= 1,	/* up and active */
+@@ -54,6 +55,7 @@ struct io_worker {
+ 	unsigned long create_state;
+ 	struct callback_head create_work;
+ 	int create_index;
++	int init_retries;
+ 
+ 	union {
+ 		struct rcu_head rcu;
+@@ -732,7 +734,7 @@ static bool io_wq_work_match_all(struct io_wq_work *work, void *data)
+ 	return true;
+ }
+ 
+-static inline bool io_should_retry_thread(long err)
++static inline bool io_should_retry_thread(struct io_worker *worker, long err)
+ {
+ 	/*
+ 	 * Prevent perpetual task_work retry, if the task (or its group) is
+@@ -740,6 +742,8 @@ static inline bool io_should_retry_thread(long err)
+ 	 */
+ 	if (fatal_signal_pending(current))
+ 		return false;
++	if (worker->init_retries++ >= WORKER_INIT_LIMIT)
++		return false;
+ 
+ 	switch (err) {
+ 	case -EAGAIN:
+@@ -766,7 +770,7 @@ static void create_worker_cont(struct callback_head *cb)
+ 		io_init_new_worker(wqe, worker, tsk);
+ 		io_worker_release(worker);
+ 		return;
+-	} else if (!io_should_retry_thread(PTR_ERR(tsk))) {
++	} else if (!io_should_retry_thread(worker, PTR_ERR(tsk))) {
+ 		struct io_wqe_acct *acct = io_wqe_get_acct(worker);
+ 
+ 		atomic_dec(&acct->nr_running);
+@@ -831,7 +835,7 @@ static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
+ 	tsk = create_io_thread(io_wqe_worker, worker, wqe->node);
+ 	if (!IS_ERR(tsk)) {
+ 		io_init_new_worker(wqe, worker, tsk);
+-	} else if (!io_should_retry_thread(PTR_ERR(tsk))) {
++	} else if (!io_should_retry_thread(worker, PTR_ERR(tsk))) {
+ 		kfree(worker);
+ 		goto fail;
+ 	} else {
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 06c028bdb8d4d6..cfbb038d150c42 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -340,7 +340,7 @@ static const char *btf_type_str(const struct btf_type *t)
+ struct btf_show {
+ 	u64 flags;
+ 	void *target;	/* target of show operation (seq file, buffer) */
+-	void (*showfn)(struct btf_show *show, const char *fmt, va_list args);
++	__printf(2, 0) void (*showfn)(struct btf_show *show, const char *fmt, va_list args);
+ 	const struct btf *btf;
+ 	/* below are used during iteration */
+ 	struct {
+@@ -5344,8 +5344,8 @@ static void btf_type_show(const struct btf *btf, u32 type_id, void *obj,
+ 	btf_type_ops(t)->show(btf, t, type_id, obj, 0, show);
+ }
+ 
+-static void btf_seq_show(struct btf_show *show, const char *fmt,
+-			 va_list args)
++__printf(2, 0) static void btf_seq_show(struct btf_show *show, const char *fmt,
++					va_list args)
+ {
+ 	seq_vprintf((struct seq_file *)show->target, fmt, args);
+ }
+@@ -5378,8 +5378,8 @@ struct btf_show_snprintf {
+ 	int len;		/* length we would have written */
+ };
+ 
+-static void btf_snprintf_show(struct btf_show *show, const char *fmt,
+-			      va_list args)
++__printf(2, 0) static void btf_snprintf_show(struct btf_show *show, const char *fmt,
++					     va_list args)
+ {
+ 	struct btf_show_snprintf *ssnprintf = (struct btf_show_snprintf *)show;
+ 	int len;
+diff --git a/kernel/debug/kdb/kdb_io.c b/kernel/debug/kdb/kdb_io.c
+index a3b4b55d2e2e1b..b28b8a5ef6381b 100644
+--- a/kernel/debug/kdb/kdb_io.c
++++ b/kernel/debug/kdb/kdb_io.c
+@@ -194,7 +194,7 @@ char kdb_getchar(void)
+  */
+ static void kdb_position_cursor(char *prompt, char *buffer, char *cp)
+ {
+-	kdb_printf("\r%s", kdb_prompt_str);
++	kdb_printf("\r%s", prompt);
+ 	if (cp > buffer)
+ 		kdb_printf("%.*s", (int)(cp - buffer), buffer);
+ }
+@@ -358,7 +358,7 @@ static char *kdb_read(char *buffer, size_t bufsize)
+ 			if (i >= dtab_count)
+ 				kdb_printf("...");
+ 			kdb_printf("\n");
+-			kdb_printf(kdb_prompt_str);
++			kdb_printf("%s",  kdb_prompt_str);
+ 			kdb_printf("%s", buffer);
+ 			if (cp != lastchar)
+ 				kdb_position_cursor(kdb_prompt_str, buffer, cp);
+@@ -450,7 +450,7 @@ char *kdb_getstr(char *buffer, size_t bufsize, const char *prompt)
+ {
+ 	if (prompt && kdb_prompt_str != prompt)
+ 		strscpy(kdb_prompt_str, prompt, CMD_BUFLEN);
+-	kdb_printf(kdb_prompt_str);
++	kdb_printf("%s", kdb_prompt_str);
+ 	kdb_nextline = 1;	/* Prompt and input resets line number */
+ 	return kdb_read(buffer, bufsize);
+ }
+diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
+index 51bb8fa8eb8948..453c0fbe87ff4a 100644
+--- a/kernel/dma/mapping.c
++++ b/kernel/dma/mapping.c
+@@ -60,8 +60,8 @@ void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
+ {
+ 	struct dma_devres match_data = { size, vaddr, dma_handle };
+ 
+-	dma_free_coherent(dev, size, vaddr, dma_handle);
+ 	WARN_ON(devres_destroy(dev, dmam_release, dmam_match, &match_data));
++	dma_free_coherent(dev, size, vaddr, dma_handle);
+ }
+ EXPORT_SYMBOL(dmam_free_coherent);
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 3a191bec69acac..b60325cc8604db 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6239,6 +6239,8 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 			return -EINVAL;
+ 
+ 		nr_pages = vma_size / PAGE_SIZE;
++		if (nr_pages > INT_MAX)
++			return -ENOMEM;
+ 
+ 		mutex_lock(&event->mmap_mutex);
+ 		ret = -EINVAL;
+diff --git a/kernel/events/internal.h b/kernel/events/internal.h
+index aa23ffdaf819fb..8e63cc2bd4f7d7 100644
+--- a/kernel/events/internal.h
++++ b/kernel/events/internal.h
+@@ -128,7 +128,7 @@ static inline unsigned long perf_data_size(struct perf_buffer *rb)
+ 
+ static inline unsigned long perf_aux_size(struct perf_buffer *rb)
+ {
+-	return rb->aux_nr_pages << PAGE_SHIFT;
++	return (unsigned long)rb->aux_nr_pages << PAGE_SHIFT;
+ }
+ 
+ #define __DEFINE_OUTPUT_COPY_BODY(advance_buf, memcpy_func, ...)	\
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index e7d284261d450f..09b91aebb90b14 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -38,7 +38,7 @@ struct irqaction chained_action = {
+  *	@irq:	irq number
+  *	@chip:	pointer to irq chip description structure
+  */
+-int irq_set_chip(unsigned int irq, struct irq_chip *chip)
++int irq_set_chip(unsigned int irq, const struct irq_chip *chip)
+ {
+ 	unsigned long flags;
+ 	struct irq_desc *desc = irq_get_desc_lock(irq, &flags, 0);
+@@ -46,10 +46,7 @@ int irq_set_chip(unsigned int irq, struct irq_chip *chip)
+ 	if (!desc)
+ 		return -EINVAL;
+ 
+-	if (!chip)
+-		chip = &no_irq_chip;
+-
+-	desc->irq_data.chip = chip;
++	desc->irq_data.chip = (struct irq_chip *)(chip ?: &no_irq_chip);
+ 	irq_put_desc_unlock(desc, flags);
+ 	/*
+ 	 * For !CONFIG_SPARSE_IRQ make the irq show up in
+@@ -1102,7 +1099,7 @@ irq_set_chained_handler_and_data(unsigned int irq, irq_flow_handler_t handle,
+ EXPORT_SYMBOL_GPL(irq_set_chained_handler_and_data);
+ 
+ void
+-irq_set_chip_and_handler_name(unsigned int irq, struct irq_chip *chip,
++irq_set_chip_and_handler_name(unsigned int irq, const struct irq_chip *chip,
+ 			      irq_flow_handler_t handle, const char *name)
+ {
+ 	irq_set_chip(irq, chip);
+@@ -1586,6 +1583,17 @@ int irq_chip_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 	return 0;
+ }
+ 
++static struct device *irq_get_parent_device(struct irq_data *data)
++{
++	if (data->chip->parent_device)
++		return data->chip->parent_device;
++
++	if (data->domain)
++		return data->domain->dev;
++
++	return NULL;
++}
++
+ /**
+  * irq_chip_pm_get - Enable power for an IRQ chip
+  * @data:	Pointer to interrupt specific data
+@@ -1595,12 +1603,13 @@ int irq_chip_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+  */
+ int irq_chip_pm_get(struct irq_data *data)
+ {
++	struct device *dev = irq_get_parent_device(data);
+ 	int retval;
+ 
+-	if (IS_ENABLED(CONFIG_PM) && data->chip->parent_device) {
+-		retval = pm_runtime_get_sync(data->chip->parent_device);
++	if (IS_ENABLED(CONFIG_PM) && dev) {
++		retval = pm_runtime_get_sync(dev);
+ 		if (retval < 0) {
+-			pm_runtime_put_noidle(data->chip->parent_device);
++			pm_runtime_put_noidle(dev);
+ 			return retval;
+ 		}
+ 	}
+@@ -1618,10 +1627,11 @@ int irq_chip_pm_get(struct irq_data *data)
+  */
+ int irq_chip_pm_put(struct irq_data *data)
+ {
++	struct device *dev = irq_get_parent_device(data);
+ 	int retval = 0;
+ 
+-	if (IS_ENABLED(CONFIG_PM) && data->chip->parent_device)
+-		retval = pm_runtime_put(data->chip->parent_device);
++	if (IS_ENABLED(CONFIG_PM) && dev)
++		retval = pm_runtime_put(dev);
+ 
+ 	return (retval < 0) ? retval : 0;
+ }
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index 6c009a033c73fb..68183511226a39 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -491,6 +491,7 @@ static int alloc_descs(unsigned int start, unsigned int cnt, int node,
+ 				flags = IRQD_AFFINITY_MANAGED |
+ 					IRQD_MANAGED_SHUTDOWN;
+ 			}
++			flags |= IRQD_AFFINITY_SET;
+ 			mask = &affinity->mask;
+ 			node = cpu_to_node(cpumask_first(mask));
+ 			affinity++;
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 0159925054faa8..c7f4f948f17e46 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -1230,7 +1230,7 @@ static int irq_thread(void *data)
+ 	 * synchronize_hardirq(). So neither IRQTF_RUNTHREAD nor the
+ 	 * oneshot mask bit can be set.
+ 	 */
+-	task_work_cancel(current, irq_thread_dtor);
++	task_work_cancel_func(current, irq_thread_dtor);
+ 	return 0;
+ }
+ 
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index dba6541c0fc3c4..c8e62458d323f2 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1632,8 +1632,8 @@ static bool is_cfi_preamble_symbol(unsigned long addr)
+ 	if (lookup_symbol_name(addr, symbuf))
+ 		return false;
+ 
+-	return str_has_prefix("__cfi_", symbuf) ||
+-		str_has_prefix("__pfx_", symbuf);
++	return str_has_prefix(symbuf, "__cfi_") ||
++		str_has_prefix(symbuf, "__pfx_");
+ }
+ 
+ static int check_kprobe_address_safe(struct kprobe *p,
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 471ccbc44541d8..2a514cf8379b44 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -521,6 +521,13 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
+ 	ps.chunk_size = max(ps.chunk_size, job->min_chunk);
+ 	ps.chunk_size = roundup(ps.chunk_size, job->align);
+ 
++	/*
++	 * chunk_size can be 0 if the caller sets min_chunk to 0. So force it
++	 * to at least 1 to prevent divide-by-0 panic in padata_mt_helper().`
++	 */
++	if (!ps.chunk_size)
++		ps.chunk_size = 1U;
++
+ 	list_for_each_entry(pw, &works, pw_list)
+ 		queue_work(system_unbound_wq, &pw->pw_work);
+ 
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 9f505688291e51..5c4bdbe76df043 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -1867,7 +1867,7 @@ static void rcu_torture_fwd_cb_cr(struct rcu_head *rhp)
+ 	spin_lock_irqsave(&rfp->rcu_fwd_lock, flags);
+ 	rfcpp = rfp->rcu_fwd_cb_tail;
+ 	rfp->rcu_fwd_cb_tail = &rfcp->rfc_next;
+-	WRITE_ONCE(*rfcpp, rfcp);
++	smp_store_release(rfcpp, rfcp);
+ 	WRITE_ONCE(rfp->n_launders_cb, rfp->n_launders_cb + 1);
+ 	i = ((jiffies - rfp->rcu_fwd_startat) / (HZ / FWD_CBS_HIST_DIV));
+ 	if (i >= ARRAY_SIZE(rfp->n_launders_hist))
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 40f40f359c5d56..29d8fc3a7bbd24 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -848,27 +848,24 @@ static void set_load_weight(struct task_struct *p)
+ {
+ 	bool update_load = !(READ_ONCE(p->state) & TASK_NEW);
+ 	int prio = p->static_prio - MAX_RT_PRIO;
+-	struct load_weight *load = &p->se.load;
++	struct load_weight lw;
+ 
+-	/*
+-	 * SCHED_IDLE tasks get minimal weight:
+-	 */
+ 	if (task_has_idle_policy(p)) {
+-		load->weight = scale_load(WEIGHT_IDLEPRIO);
+-		load->inv_weight = WMULT_IDLEPRIO;
+-		return;
++		lw.weight = scale_load(WEIGHT_IDLEPRIO);
++		lw.inv_weight = WMULT_IDLEPRIO;
++	} else {
++		lw.weight = scale_load(sched_prio_to_weight[prio]);
++		lw.inv_weight = sched_prio_to_wmult[prio];
+ 	}
+ 
+ 	/*
+ 	 * SCHED_OTHER tasks have to update their load when changing their
+ 	 * weight
+ 	 */
+-	if (update_load && p->sched_class == &fair_sched_class) {
+-		reweight_task(p, prio);
+-	} else {
+-		load->weight = scale_load(sched_prio_to_weight[prio]);
+-		load->inv_weight = sched_prio_to_wmult[prio];
+-	}
++	if (update_load && p->sched_class == &fair_sched_class)
++		reweight_task(p, &lw);
++	else
++		p->se.load = lw;
+ }
+ 
+ #ifdef CONFIG_UCLAMP_TASK
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index ca0eef7d3852b5..f03b3af2fb7929 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -579,6 +579,12 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ 	}
+ 
+ 	stime = mul_u64_u64_div_u64(stime, rtime, stime + utime);
++	/*
++	 * Because mul_u64_u64_div_u64() can approximate on some
++	 * achitectures; enforce the constraint that: a*b/(b+c) <= a.
++	 */
++	if (unlikely(stime > rtime))
++		stime = rtime;
+ 
+ update:
+ 	/*
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 73a89fbd81be83..a6a755aec32b50 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3119,15 +3119,14 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
+ 
+ }
+ 
+-void reweight_task(struct task_struct *p, int prio)
++void reweight_task(struct task_struct *p, const struct load_weight *lw)
+ {
+ 	struct sched_entity *se = &p->se;
+ 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+ 	struct load_weight *load = &se->load;
+-	unsigned long weight = scale_load(sched_prio_to_weight[prio]);
+ 
+-	reweight_entity(cfs_rq, se, weight);
+-	load->inv_weight = sched_prio_to_wmult[prio];
++	reweight_entity(cfs_rq, se, lw->weight);
++	load->inv_weight = lw->inv_weight;
+ }
+ 
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+@@ -7951,7 +7950,7 @@ static int detach_tasks(struct lb_env *env)
+ 		case migrate_util:
+ 			util = task_util_est(p);
+ 
+-			if (util > env->imbalance)
++			if (shr_bound(util, env->sd->nr_balance_failed) > env->imbalance)
+ 				goto next;
+ 
+ 			env->imbalance -= util;
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 8de07aba8bdd4b..df6cf8aa59f89f 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1938,7 +1938,7 @@ extern void init_sched_dl_class(void);
+ extern void init_sched_rt_class(void);
+ extern void init_sched_fair_class(void);
+ 
+-extern void reweight_task(struct task_struct *p, int prio);
++extern void reweight_task(struct task_struct *p, const struct load_weight *lw);
+ 
+ extern void resched_curr(struct rq *rq);
+ extern void resched_cpu(int cpu);
+diff --git a/kernel/signal.c b/kernel/signal.c
+index e487c4660921db..bfc1da526ebbe7 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -2464,6 +2464,14 @@ static void do_freezer_trap(void)
+ 	spin_unlock_irq(&current->sighand->siglock);
+ 	cgroup_enter_frozen();
+ 	freezable_schedule();
++
++	/*
++	 * We could've been woken by task_work, run it to clear
++	 * TIF_NOTIFY_SIGNAL. The caller will retry if necessary.
++	 */
++	clear_notify_signal();
++	if (unlikely(READ_ONCE(current->task_works)))
++		task_work_run();
+ }
+ 
+ static int ptrace_signal(int signr, kernel_siginfo_t *info)
+diff --git a/kernel/task_work.c b/kernel/task_work.c
+index e9316198c64bf5..6138e13dce3d6c 100644
+--- a/kernel/task_work.c
++++ b/kernel/task_work.c
+@@ -101,9 +101,9 @@ static bool task_work_func_match(struct callback_head *cb, void *data)
+ }
+ 
+ /**
+- * task_work_cancel - cancel a pending work added by task_work_add()
+- * @task: the task which should execute the work
+- * @func: identifies the work to remove
++ * task_work_cancel_func - cancel a pending work matching a function added by task_work_add()
++ * @task: the task which should execute the func's work
++ * @func: identifies the func to match with a work to remove
+  *
+  * Find the last queued pending work with ->func == @func and remove
+  * it from queue.
+@@ -112,11 +112,35 @@ static bool task_work_func_match(struct callback_head *cb, void *data)
+  * The found work or NULL if not found.
+  */
+ struct callback_head *
+-task_work_cancel(struct task_struct *task, task_work_func_t func)
++task_work_cancel_func(struct task_struct *task, task_work_func_t func)
+ {
+ 	return task_work_cancel_match(task, task_work_func_match, func);
+ }
+ 
++static bool task_work_match(struct callback_head *cb, void *data)
++{
++	return cb == data;
++}
++
++/**
++ * task_work_cancel - cancel a pending work added by task_work_add()
++ * @task: the task which should execute the work
++ * @cb: the callback to remove if queued
++ *
++ * Remove a callback from a task's queue if queued.
++ *
++ * RETURNS:
++ * True if the callback was queued and got cancelled, false otherwise.
++ */
++bool task_work_cancel(struct task_struct *task, struct callback_head *cb)
++{
++	struct callback_head *ret;
++
++	ret = task_work_cancel_match(task, task_work_match, cb);
++
++	return ret == cb;
++}
++
+ /**
+  * task_work_run - execute the works added by task_work_add()
+  *
+@@ -149,7 +173,7 @@ void task_work_run(void)
+ 		if (!work)
+ 			break;
+ 		/*
+-		 * Synchronize with task_work_cancel(). It can not remove
++		 * Synchronize with task_work_cancel_match(). It can not remove
+ 		 * the first entry == work, cmpxchg(task_works) must fail.
+ 		 * But it can remove another entry from the ->next list.
+ 		 */
+diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
+index 069ca78fb0bfad..02d96c007673c3 100644
+--- a/kernel/time/ntp.c
++++ b/kernel/time/ntp.c
+@@ -679,17 +679,16 @@ static inline void process_adjtimex_modes(const struct __kernel_timex *txc,
+ 	}
+ 
+ 	if (txc->modes & ADJ_MAXERROR)
+-		time_maxerror = txc->maxerror;
++		time_maxerror = clamp(txc->maxerror, (long long)0, (long long)NTP_PHASE_LIMIT);
+ 
+ 	if (txc->modes & ADJ_ESTERROR)
+-		time_esterror = txc->esterror;
++		time_esterror = clamp(txc->esterror, (long long)0, (long long)NTP_PHASE_LIMIT);
+ 
+ 	if (txc->modes & ADJ_TIMECONST) {
+-		time_constant = txc->constant;
++		time_constant = clamp(txc->constant, (long long)0, (long long)MAXTC);
+ 		if (!(time_status & STA_NANO))
+ 			time_constant += 4;
+-		time_constant = min(time_constant, (long)MAXTC);
+-		time_constant = max(time_constant, 0l);
++		time_constant = clamp(time_constant, (long)0, (long)MAXTC);
+ 	}
+ 
+ 	if (txc->modes & ADJ_TAI &&
+diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
+index a9530e866e5f1b..dc3838c00d6c70 100644
+--- a/kernel/time/tick-broadcast.c
++++ b/kernel/time/tick-broadcast.c
+@@ -951,6 +951,30 @@ void hotplug_cpu__broadcast_tick_pull(int deadcpu)
+ 	bc = tick_broadcast_device.evtdev;
+ 
+ 	if (bc && broadcast_needs_cpu(bc, deadcpu)) {
++		/*
++		 * If the broadcast force bit of the current CPU is set,
++		 * then the current CPU has not yet reprogrammed the local
++		 * timer device to avoid a ping-pong race. See
++		 * ___tick_broadcast_oneshot_control().
++		 *
++		 * If the broadcast device is hrtimer based then
++		 * programming the broadcast event below does not have any
++		 * effect because the local clockevent device is not
++		 * running and not programmed because the broadcast event
++		 * is not earlier than the pending event of the local clock
++		 * event device. As a consequence all CPUs waiting for a
++		 * broadcast event are stuck forever.
++		 *
++		 * Detect this condition and reprogram the cpu local timer
++		 * device to avoid the starvation.
++		 */
++		if (tick_check_broadcast_expired()) {
++			struct tick_device *td = this_cpu_ptr(&tick_cpu_device);
++
++			cpumask_clear_cpu(smp_processor_id(), tick_broadcast_force_mask);
++			tick_program_event(td->evtdev->next_event, 1);
++		}
++
+ 		/* This moves the broadcast assignment to this CPU: */
+ 		clockevents_program_event(bc, bc->next_event, 1);
+ 	}
+diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
+index d47641f9740bcc..e6cc8d5ab1a459 100644
+--- a/kernel/trace/tracing_map.c
++++ b/kernel/trace/tracing_map.c
+@@ -454,7 +454,7 @@ static struct tracing_map_elt *get_free_elt(struct tracing_map *map)
+ 	struct tracing_map_elt *elt = NULL;
+ 	int idx;
+ 
+-	idx = atomic_inc_return(&map->next_elt);
++	idx = atomic_fetch_add_unless(&map->next_elt, 1, map->max_elts);
+ 	if (idx < map->max_elts) {
+ 		elt = *(TRACING_MAP_ELT(map->elts, idx));
+ 		if (map->ops && map->ops->elt_init)
+@@ -699,7 +699,7 @@ void tracing_map_clear(struct tracing_map *map)
+ {
+ 	unsigned int i;
+ 
+-	atomic_set(&map->next_elt, -1);
++	atomic_set(&map->next_elt, 0);
+ 	atomic64_set(&map->hits, 0);
+ 	atomic64_set(&map->drops, 0);
+ 
+@@ -783,7 +783,7 @@ struct tracing_map *tracing_map_create(unsigned int map_bits,
+ 
+ 	map->map_bits = map_bits;
+ 	map->max_elts = (1 << map_bits);
+-	atomic_set(&map->next_elt, -1);
++	atomic_set(&map->next_elt, 0);
+ 
+ 	map->map_size = (1 << (map_bits + 1));
+ 	map->ops = ops;
+diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
+index 1e8a49dc956e2a..8ba4b269ab89c8 100644
+--- a/kernel/watchdog_hld.c
++++ b/kernel/watchdog_hld.c
+@@ -91,11 +91,15 @@ static bool watchdog_check_timestamp(void)
+ 	__this_cpu_write(last_timestamp, now);
+ 	return true;
+ }
+-#else
+-static inline bool watchdog_check_timestamp(void)
++
++static void watchdog_init_timestamp(void)
+ {
+-	return true;
++	__this_cpu_write(nmi_rearmed, 0);
++	__this_cpu_write(last_timestamp, ktime_get_mono_fast_ns());
+ }
++#else
++static inline bool watchdog_check_timestamp(void) { return true; }
++static inline void watchdog_init_timestamp(void) { }
+ #endif
+ 
+ static struct perf_event_attr wd_hw_attr = {
+@@ -196,6 +200,7 @@ void hardlockup_detector_perf_enable(void)
+ 	if (!atomic_fetch_inc(&watchdog_cpus))
+ 		pr_info("Enabled. Permanently consumes one hw-PMU counter.\n");
+ 
++	watchdog_init_timestamp();
+ 	perf_event_enable(this_cpu_read(watchdog_ev));
+ }
+ 
+diff --git a/lib/decompress_bunzip2.c b/lib/decompress_bunzip2.c
+index c72c865032fabe..0cc292256fd3fb 100644
+--- a/lib/decompress_bunzip2.c
++++ b/lib/decompress_bunzip2.c
+@@ -232,7 +232,8 @@ static int INIT get_next_block(struct bunzip_data *bd)
+ 	   RUNB) */
+ 	symCount = symTotal+2;
+ 	for (j = 0; j < groupCount; j++) {
+-		unsigned char length[MAX_SYMBOLS], temp[MAX_HUFCODE_BITS+1];
++		unsigned char length[MAX_SYMBOLS];
++		unsigned short temp[MAX_HUFCODE_BITS+1];
+ 		int	minLen,	maxLen, pp;
+ 		/* Read Huffman code lengths for each symbol.  They're
+ 		   stored in a way similar to mtf; record a starting
+diff --git a/lib/kobject_uevent.c b/lib/kobject_uevent.c
+index c87d5b6a8a55a3..38716b2eb671d7 100644
+--- a/lib/kobject_uevent.c
++++ b/lib/kobject_uevent.c
+@@ -432,8 +432,23 @@ static void zap_modalias_env(struct kobj_uevent_env *env)
+ 		len = strlen(env->envp[i]) + 1;
+ 
+ 		if (i != env->envp_idx - 1) {
++			/* @env->envp[] contains pointers to @env->buf[]
++			 * with @env->buflen chars, and we are removing
++			 * variable MODALIAS here pointed by @env->envp[i]
++			 * with length @len as shown below:
++			 *
++			 * 0               @env->buf[]      @env->buflen
++			 * ---------------------------------------------
++			 * ^             ^              ^              ^
++			 * |             |->   @len   <-| target block |
++			 * @env->envp[0] @env->envp[i]  @env->envp[i + 1]
++			 *
++			 * so the "target block" indicated above is moved
++			 * backward by @len, and its right size is
++			 * @env->buflen - (@env->envp[i + 1] - @env->envp[0]).
++			 */
+ 			memmove(env->envp[i], env->envp[i + 1],
+-				env->buflen - len);
++				env->buflen - (env->envp[i + 1] - env->envp[0]));
+ 
+ 			for (j = i; j < env->envp_idx - 1; j++)
+ 				env->envp[j] = env->envp[j + 1] - len;
+diff --git a/lib/objagg.c b/lib/objagg.c
+index 5e1676ccdaddd0..57bde522f2493c 100644
+--- a/lib/objagg.c
++++ b/lib/objagg.c
+@@ -167,6 +167,9 @@ static int objagg_obj_parent_assign(struct objagg *objagg,
+ {
+ 	void *delta_priv;
+ 
++	if (WARN_ON(!objagg_obj_is_root(parent)))
++		return -EINVAL;
++
+ 	delta_priv = objagg->ops->delta_create(objagg->priv, parent->obj,
+ 					       objagg_obj->obj);
+ 	if (IS_ERR(delta_priv))
+@@ -906,20 +909,6 @@ static const struct objagg_opt_algo *objagg_opt_algos[] = {
+ 	[OBJAGG_OPT_ALGO_SIMPLE_GREEDY] = &objagg_opt_simple_greedy,
+ };
+ 
+-static int objagg_hints_obj_cmp(struct rhashtable_compare_arg *arg,
+-				const void *obj)
+-{
+-	struct rhashtable *ht = arg->ht;
+-	struct objagg_hints *objagg_hints =
+-			container_of(ht, struct objagg_hints, node_ht);
+-	const struct objagg_ops *ops = objagg_hints->ops;
+-	const char *ptr = obj;
+-
+-	ptr += ht->p.key_offset;
+-	return ops->hints_obj_cmp ? ops->hints_obj_cmp(ptr, arg->key) :
+-				    memcmp(ptr, arg->key, ht->p.key_len);
+-}
+-
+ /**
+  * objagg_hints_get - obtains hints instance
+  * @objagg:		objagg instance
+@@ -958,7 +947,6 @@ struct objagg_hints *objagg_hints_get(struct objagg *objagg,
+ 				offsetof(struct objagg_hints_node, obj);
+ 	objagg_hints->ht_params.head_offset =
+ 				offsetof(struct objagg_hints_node, ht_node);
+-	objagg_hints->ht_params.obj_cmpfn = objagg_hints_obj_cmp;
+ 
+ 	err = rhashtable_init(&objagg_hints->node_ht, &objagg_hints->ht_params);
+ 	if (err)
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 9cc034e6074c1e..23fc03f7bf3124 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -7762,6 +7762,7 @@ static void l2cap_conless_channel(struct l2cap_conn *conn, __le16 psm,
+ 	bt_cb(skb)->l2cap.psm = psm;
+ 
+ 	if (!chan->ops->recv(chan, skb)) {
++		l2cap_chan_unlock(chan);
+ 		l2cap_chan_put(chan);
+ 		return;
+ 	}
+diff --git a/net/core/filter.c b/net/core/filter.c
+index a3101cdfd47b97..99fdd8afeeda32 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -3535,13 +3535,20 @@ static int bpf_skb_net_grow(struct sk_buff *skb, u32 off, u32 len_diff,
+ 	if (skb_is_gso(skb)) {
+ 		struct skb_shared_info *shinfo = skb_shinfo(skb);
+ 
+-		/* Due to header grow, MSS needs to be downgraded. */
+-		if (!(flags & BPF_F_ADJ_ROOM_FIXED_GSO))
+-			skb_decrease_gso_size(shinfo, len_diff);
+-
+ 		/* Header must be checked, and gso_segs recomputed. */
+ 		shinfo->gso_type |= gso_type;
+ 		shinfo->gso_segs = 0;
++
++		/* Due to header growth, MSS needs to be downgraded.
++		 * There is a BUG_ON() when segmenting the frag_list with
++		 * head_frag true, so linearize the skb after downgrading
++		 * the MSS.
++		 */
++		if (!(flags & BPF_F_ADJ_ROOM_FIXED_GSO)) {
++			skb_decrease_gso_size(shinfo, len_diff);
++			if (shinfo->frag_list)
++				return skb_linearize(skb);
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/net/core/link_watch.c b/net/core/link_watch.c
+index 1a455847da54fc..0311d4d309e1be 100644
+--- a/net/core/link_watch.c
++++ b/net/core/link_watch.c
+@@ -138,9 +138,9 @@ static void linkwatch_schedule_work(int urgent)
+ 	 * override the existing timer.
+ 	 */
+ 	if (test_bit(LW_URGENT, &linkwatch_flags))
+-		mod_delayed_work(system_wq, &linkwatch_work, 0);
++		mod_delayed_work(system_unbound_wq, &linkwatch_work, 0);
+ 	else
+-		schedule_delayed_work(&linkwatch_work, delay);
++		queue_delayed_work(system_unbound_wq, &linkwatch_work, delay);
+ }
+ 
+ 
+diff --git a/net/core/xdp.c b/net/core/xdp.c
+index fd98d6059007c1..b2ad644df21f1c 100644
+--- a/net/core/xdp.c
++++ b/net/core/xdp.c
+@@ -124,10 +124,8 @@ void xdp_unreg_mem_model(struct xdp_mem_info *mem)
+ 		return;
+ 
+ 	if (type == MEM_TYPE_PAGE_POOL) {
+-		rcu_read_lock();
+-		xa = rhashtable_lookup(mem_id_ht, &id, mem_id_rht_params);
++		xa = rhashtable_lookup_fast(mem_id_ht, &id, mem_id_rht_params);
+ 		page_pool_destroy(xa->page_pool);
+-		rcu_read_unlock();
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(xdp_unreg_mem_model);
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 412a3c153cad31..adfefcd88bbcc0 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -239,8 +239,7 @@ static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb)
+ #else
+ static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb)
+ {
+-	kfree_skb(skb);
+-
++	WARN_ON(1);
+ 	return -EOPNOTSUPP;
+ }
+ #endif
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 7a0102a4b1de7e..a508fd94b8be0b 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -210,9 +210,10 @@ static int nla_put_nh_group(struct sk_buff *skb, struct nh_group *nhg)
+ 
+ 	p = nla_data(nla);
+ 	for (i = 0; i < nhg->num_nh; ++i) {
+-		p->id = nhg->nh_entries[i].nh->id;
+-		p->weight = nhg->nh_entries[i].weight - 1;
+-		p += 1;
++		*p++ = (struct nexthop_grp) {
++			.id = nhg->nh_entries[i].nh->id,
++			.weight = nhg->nh_entries[i].weight - 1,
++		};
+ 	}
+ 
+ 	return 0;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index b3b49d8b386d88..1eb1e4316ed6db 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1302,7 +1302,7 @@ void ip_rt_get_source(u8 *addr, struct sk_buff *skb, struct rtable *rt)
+ 		struct flowi4 fl4 = {
+ 			.daddr = iph->daddr,
+ 			.saddr = iph->saddr,
+-			.flowi4_tos = RT_TOS(iph->tos),
++			.flowi4_tos = iph->tos & IPTOS_RT_MASK,
+ 			.flowi4_oif = rt->dst.dev->ifindex,
+ 			.flowi4_iif = skb->dev->ifindex,
+ 			.flowi4_mark = skb->mark,
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index ac09d4543f3e14..455bb4668407f3 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -1822,7 +1822,8 @@ int ipv6_dev_get_saddr(struct net *net, const struct net_device *dst_dev,
+ 							    master, &dst,
+ 							    scores, hiscore_idx);
+ 
+-			if (scores[hiscore_idx].ifa)
++			if (scores[hiscore_idx].ifa &&
++			    scores[hiscore_idx].scopedist >= 0)
+ 				goto out;
+ 		}
+ 
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index fddc811bbde1f5..39154531d45597 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -255,8 +255,7 @@ static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb)
+ #else
+ static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb)
+ {
+-	kfree_skb(skb);
+-
++	WARN_ON(1);
+ 	return -EOPNOTSUPP;
+ }
+ #endif
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index 14251347c4a509..4f46b0a2e5680a 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -226,6 +226,7 @@ struct ndisc_options *ndisc_parse_options(const struct net_device *dev,
+ 		return NULL;
+ 	memset(ndopts, 0, sizeof(*ndopts));
+ 	while (opt_len) {
++		bool unknown = false;
+ 		int l;
+ 		if (opt_len < sizeof(struct nd_opt_hdr))
+ 			return NULL;
+@@ -261,22 +262,23 @@ struct ndisc_options *ndisc_parse_options(const struct net_device *dev,
+ 			break;
+ #endif
+ 		default:
+-			if (ndisc_is_useropt(dev, nd_opt)) {
+-				ndopts->nd_useropts_end = nd_opt;
+-				if (!ndopts->nd_useropts)
+-					ndopts->nd_useropts = nd_opt;
+-			} else {
+-				/*
+-				 * Unknown options must be silently ignored,
+-				 * to accommodate future extension to the
+-				 * protocol.
+-				 */
+-				ND_PRINTK(2, notice,
+-					  "%s: ignored unsupported option; type=%d, len=%d\n",
+-					  __func__,
+-					  nd_opt->nd_opt_type,
+-					  nd_opt->nd_opt_len);
+-			}
++			unknown = true;
++		}
++		if (ndisc_is_useropt(dev, nd_opt)) {
++			ndopts->nd_useropts_end = nd_opt;
++			if (!ndopts->nd_useropts)
++				ndopts->nd_useropts = nd_opt;
++		} else if (unknown) {
++			/*
++			 * Unknown options must be silently ignored,
++			 * to accommodate future extension to the
++			 * protocol.
++			 */
++			ND_PRINTK(2, notice,
++				  "%s: ignored unsupported option; type=%d, len=%d\n",
++				  __func__,
++				  nd_opt->nd_opt_type,
++				  nd_opt->nd_opt_len);
+ 		}
+ next_opt:
+ 		opt_len -= l;
+diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
+index 7c73faa5336cd6..3d0424e4ae6c9c 100644
+--- a/net/iucv/af_iucv.c
++++ b/net/iucv/af_iucv.c
+@@ -359,8 +359,8 @@ static void iucv_sever_path(struct sock *sk, int with_user_data)
+ 	struct iucv_sock *iucv = iucv_sk(sk);
+ 	struct iucv_path *path = iucv->path;
+ 
+-	if (iucv->path) {
+-		iucv->path = NULL;
++	/* Whoever resets the path pointer, must sever and free it. */
++	if (xchg(&iucv->path, NULL)) {
+ 		if (with_user_data) {
+ 			low_nmcpy(user_data, iucv->src_name);
+ 			high_nmcpy(user_data, iucv->dst_name);
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index a4b793d1b7d768..b6dcfca740c1c0 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -88,6 +88,11 @@
+ /* Default trace flags */
+ #define L2TP_DEFAULT_DEBUG_FLAGS	0
+ 
++#define L2TP_DEPTH_NESTING		2
++#if L2TP_DEPTH_NESTING == SINGLE_DEPTH_NESTING
++#error "L2TP requires its own lockdep subclass"
++#endif
++
+ /* Private data stored for received packets in the skb.
+  */
+ struct l2tp_skb_cb {
+@@ -1041,7 +1046,13 @@ static int l2tp_xmit_core(struct l2tp_session *session, struct sk_buff *skb, uns
+ 	IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | IPSKB_REROUTED);
+ 	nf_reset_ct(skb);
+ 
+-	bh_lock_sock_nested(sk);
++	/* L2TP uses its own lockdep subclass to avoid lockdep splats caused by
++	 * nested socket calls on the same lockdep socket class. This can
++	 * happen when data from a user socket is routed over l2tp, which uses
++	 * another userspace socket.
++	 */
++	spin_lock_nested(&sk->sk_lock.slock, L2TP_DEPTH_NESTING);
++
+ 	if (sock_owned_by_user(sk)) {
+ 		kfree_skb(skb);
+ 		ret = NET_XMIT_DROP;
+@@ -1093,7 +1104,7 @@ static int l2tp_xmit_core(struct l2tp_session *session, struct sk_buff *skb, uns
+ 	ret = l2tp_xmit_queue(tunnel, skb, &inet->cork.fl);
+ 
+ out_unlock:
+-	bh_unlock_sock(sk);
++	spin_unlock(&sk->sk_lock.slock);
+ 
+ 	return ret;
+ }
+diff --git a/net/mptcp/mib.c b/net/mptcp/mib.c
+index b921cbdd9aaa26..f4034e000f3ef6 100644
+--- a/net/mptcp/mib.c
++++ b/net/mptcp/mib.c
+@@ -16,7 +16,9 @@ static const struct snmp_mib mptcp_snmp_list[] = {
+ 	SNMP_MIB_ITEM("MPTCPRetrans", MPTCP_MIB_RETRANSSEGS),
+ 	SNMP_MIB_ITEM("MPJoinNoTokenFound", MPTCP_MIB_JOINNOTOKEN),
+ 	SNMP_MIB_ITEM("MPJoinSynRx", MPTCP_MIB_JOINSYNRX),
++	SNMP_MIB_ITEM("MPJoinSynBackupRx", MPTCP_MIB_JOINSYNBACKUPRX),
+ 	SNMP_MIB_ITEM("MPJoinSynAckRx", MPTCP_MIB_JOINSYNACKRX),
++	SNMP_MIB_ITEM("MPJoinSynAckBackupRx", MPTCP_MIB_JOINSYNACKBACKUPRX),
+ 	SNMP_MIB_ITEM("MPJoinSynAckHMacFailure", MPTCP_MIB_JOINSYNACKMAC),
+ 	SNMP_MIB_ITEM("MPJoinAckRx", MPTCP_MIB_JOINACKRX),
+ 	SNMP_MIB_ITEM("MPJoinAckHMacFailure", MPTCP_MIB_JOINACKMAC),
+diff --git a/net/mptcp/mib.h b/net/mptcp/mib.h
+index 47bcecce1106ea..a9f43ff00b3c8d 100644
+--- a/net/mptcp/mib.h
++++ b/net/mptcp/mib.h
+@@ -9,7 +9,9 @@ enum linux_mptcp_mib_field {
+ 	MPTCP_MIB_RETRANSSEGS,		/* Segments retransmitted at the MPTCP-level */
+ 	MPTCP_MIB_JOINNOTOKEN,		/* Received MP_JOIN but the token was not found */
+ 	MPTCP_MIB_JOINSYNRX,		/* Received a SYN + MP_JOIN */
++	MPTCP_MIB_JOINSYNBACKUPRX,	/* Received a SYN + MP_JOIN + backup flag */
+ 	MPTCP_MIB_JOINSYNACKRX,		/* Received a SYN/ACK + MP_JOIN */
++	MPTCP_MIB_JOINSYNACKBACKUPRX,	/* Received a SYN/ACK + MP_JOIN + backup flag */
+ 	MPTCP_MIB_JOINSYNACKMAC,	/* HMAC was wrong on SYN/ACK + MP_JOIN */
+ 	MPTCP_MIB_JOINACKRX,		/* Received an ACK + MP_JOIN */
+ 	MPTCP_MIB_JOINACKMAC,		/* HMAC was wrong on ACK + MP_JOIN */
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index c389d7e47135d2..f7a91266d5a9cb 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -708,7 +708,7 @@ bool mptcp_synack_options(const struct request_sock *req, unsigned int *size,
+ 		return true;
+ 	} else if (subflow_req->mp_join) {
+ 		opts->suboptions = OPTION_MPTCP_MPJ_SYNACK;
+-		opts->backup = subflow_req->backup;
++		opts->backup = subflow_req->request_bkup;
+ 		opts->join_id = subflow_req->local_id;
+ 		opts->thmac = subflow_req->thmac;
+ 		opts->nonce = subflow_req->local_nonce;
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index e19e1525ecbb00..1f310abbf1ede8 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -225,6 +225,15 @@ int mptcp_pm_get_local_id(struct mptcp_sock *msk, struct sock_common *skc)
+ 	return mptcp_pm_nl_get_local_id(msk, skc);
+ }
+ 
++bool mptcp_pm_is_backup(struct mptcp_sock *msk, struct sock_common *skc)
++{
++	struct mptcp_addr_info skc_local;
++
++	mptcp_local_address((struct sock_common *)skc, &skc_local);
++
++	return mptcp_pm_nl_is_backup(msk, &skc_local);
++}
++
+ void mptcp_pm_data_init(struct mptcp_sock *msk)
+ {
+ 	msk->pm.add_addr_signaled = 0;
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 452c7e21befd68..ca57d856d5df5a 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -84,8 +84,7 @@ static bool address_zero(const struct mptcp_addr_info *addr)
+ 	return addresses_equal(addr, &zero, false);
+ }
+ 
+-static void local_address(const struct sock_common *skc,
+-			  struct mptcp_addr_info *addr)
++void mptcp_local_address(const struct sock_common *skc, struct mptcp_addr_info *addr)
+ {
+ 	addr->port = 0;
+ 	addr->family = skc->skc_family;
+@@ -120,7 +119,7 @@ static bool lookup_subflow_by_saddr(const struct list_head *list,
+ 	list_for_each_entry(subflow, list, node) {
+ 		skc = (struct sock_common *)mptcp_subflow_tcp_sock(subflow);
+ 
+-		local_address(skc, &cur);
++		mptcp_local_address(skc, &cur);
+ 		if (addresses_equal(&cur, saddr, false))
+ 			return true;
+ 	}
+@@ -533,8 +532,8 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct sock_common *skc)
+ 	/* The 0 ID mapping is defined by the first subflow, copied into the msk
+ 	 * addr
+ 	 */
+-	local_address((struct sock_common *)msk, &msk_local);
+-	local_address((struct sock_common *)skc, &skc_local);
++	mptcp_local_address((struct sock_common *)msk, &msk_local);
++	mptcp_local_address((struct sock_common *)skc, &skc_local);
+ 	if (addresses_equal(&msk_local, &skc_local, false))
+ 		return 0;
+ 
+@@ -569,6 +568,26 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct sock_common *skc)
+ 	return ret;
+ }
+ 
++bool mptcp_pm_nl_is_backup(struct mptcp_sock *msk, struct mptcp_addr_info *skc)
++{
++	struct mptcp_pm_addr_entry *entry;
++	struct pm_nl_pernet *pernet;
++	bool backup = false;
++
++	pernet = net_generic(sock_net((struct sock *)msk), pm_nl_pernet_id);
++
++	rcu_read_lock();
++	list_for_each_entry_rcu(entry, &pernet->local_addr_list, list) {
++		if (addresses_equal(&entry->addr, skc, entry->addr.port)) {
++			backup = !!(entry->addr.flags & MPTCP_PM_ADDR_FLAG_BACKUP);
++			break;
++		}
++	}
++	rcu_read_unlock();
++
++	return backup;
++}
++
+ void mptcp_pm_nl_data_init(struct mptcp_sock *msk)
+ {
+ 	struct mptcp_pm_data *pm = &msk->pm;
+@@ -759,6 +778,7 @@ static bool mptcp_pm_remove_anno_addr(struct mptcp_sock *msk,
+ 	ret = remove_anno_list_by_saddr(msk, addr);
+ 	if (ret || force) {
+ 		spin_lock_bh(&msk->pm.lock);
++		msk->pm.add_addr_signaled -= ret;
+ 		mptcp_pm_remove_addr(msk, addr->id);
+ 		spin_unlock_bh(&msk->pm.lock);
+ 	}
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index a36493bbf8950a..a343b307745884 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1124,11 +1124,13 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk,
+ 		send_info[i].ratio = -1;
+ 	}
+ 	mptcp_for_each_subflow(msk, subflow) {
++		bool backup = subflow->backup || subflow->request_bkup;
++
+ 		ssk =  mptcp_subflow_tcp_sock(subflow);
+ 		if (!mptcp_subflow_active(subflow))
+ 			continue;
+ 
+-		nr_active += !subflow->backup;
++		nr_active += !backup;
+ 		*sndbuf = max(tcp_sk(ssk)->snd_wnd, *sndbuf);
+ 		if (!sk_stream_memory_free(subflow->tcp_sock))
+ 			continue;
+@@ -1139,9 +1141,9 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk,
+ 
+ 		ratio = div_u64((u64)READ_ONCE(ssk->sk_wmem_queued) << 32,
+ 				pace);
+-		if (ratio < send_info[subflow->backup].ratio) {
+-			send_info[subflow->backup].ssk = ssk;
+-			send_info[subflow->backup].ratio = ratio;
++		if (ratio < send_info[backup].ratio) {
++			send_info[backup].ssk = ssk;
++			send_info[backup].ratio = ratio;
+ 		}
+ 	}
+ 
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 3e5af8397434a9..4348bccb982f92 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -261,7 +261,8 @@ struct mptcp_subflow_request_sock {
+ 	struct	tcp_request_sock sk;
+ 	u16	mp_capable : 1,
+ 		mp_join : 1,
+-		backup : 1;
++		backup : 1,
++		request_bkup : 1;
+ 	u8	local_id;
+ 	u8	remote_id;
+ 	u64	local_key;
+@@ -371,6 +372,7 @@ void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
+ 		       struct mptcp_subflow_context *subflow,
+ 		       long timeout);
+ void mptcp_subflow_reset(struct sock *ssk);
++void mptcp_local_address(const struct sock_common *skc, struct mptcp_addr_info *addr);
+ 
+ /* called with sk socket lock held */
+ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
+@@ -479,6 +481,7 @@ bool mptcp_pm_add_addr_signal(struct mptcp_sock *msk, unsigned int remaining,
+ bool mptcp_pm_rm_addr_signal(struct mptcp_sock *msk, unsigned int remaining,
+ 			     u8 *rm_id);
+ int mptcp_pm_get_local_id(struct mptcp_sock *msk, struct sock_common *skc);
++bool mptcp_pm_is_backup(struct mptcp_sock *msk, struct sock_common *skc);
+ 
+ void __init mptcp_pm_nl_init(void);
+ void mptcp_pm_nl_data_init(struct mptcp_sock *msk);
+@@ -488,6 +491,7 @@ void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk);
+ void mptcp_pm_nl_rm_addr_received(struct mptcp_sock *msk);
+ void mptcp_pm_nl_rm_subflow_received(struct mptcp_sock *msk, u8 rm_id);
+ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct sock_common *skc);
++bool mptcp_pm_nl_is_backup(struct mptcp_sock *msk, struct mptcp_addr_info *skc);
+ 
+ static inline struct mptcp_ext *mptcp_get_ext(struct sk_buff *skb)
+ {
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 276fe9f44df734..ba86cb06d6d8c9 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -80,6 +80,7 @@ static struct mptcp_sock *subflow_token_join_request(struct request_sock *req,
+ 		return NULL;
+ 	}
+ 	subflow_req->local_id = local_id;
++	subflow_req->request_bkup = mptcp_pm_is_backup(msk, (struct sock_common *)req);
+ 
+ 	get_random_bytes(&subflow_req->local_nonce, sizeof(u32));
+ 
+@@ -135,6 +136,9 @@ static void subflow_init_req(struct request_sock *req,
+ 			return;
+ 	} else if (mp_opt.mp_join) {
+ 		SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINSYNRX);
++
++		if (mp_opt.backup)
++			SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINSYNBACKUPRX);
+ 	}
+ 
+ 	if (mp_opt.mp_capable && listener->request_mptcp) {
+@@ -347,6 +351,9 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 
+ 		subflow->mp_join = 1;
+ 		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKRX);
++
++		if (subflow->backup)
++			MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKBACKUPRX);
+ 	} else if (mptcp_check_fallback(sk)) {
+ fallback:
+ 		mptcp_rcv_space_init(mptcp_sk(parent), sk);
+@@ -863,14 +870,22 @@ static void mptcp_subflow_discard_data(struct sock *ssk, struct sk_buff *skb,
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+ 	bool fin = TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN;
+-	u32 incr;
++	struct tcp_sock *tp = tcp_sk(ssk);
++	u32 offset, incr, avail_len;
++
++	offset = tp->copied_seq - TCP_SKB_CB(skb)->seq;
++	if (WARN_ON_ONCE(offset > skb->len))
++		goto out;
+ 
+-	incr = limit >= skb->len ? skb->len + fin : limit;
++	avail_len = skb->len - offset;
++	incr = limit >= avail_len ? avail_len + fin : limit;
+ 
+-	pr_debug("discarding=%d len=%d seq=%d", incr, skb->len,
+-		 subflow->map_subflow_seq);
++	pr_debug("discarding=%d len=%d offset=%d seq=%d", incr, skb->len,
++		 offset, subflow->map_subflow_seq);
+ 	MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DUPDATA);
+ 	tcp_sk(ssk)->copied_seq += incr;
++
++out:
+ 	if (!before(tcp_sk(ssk)->copied_seq, TCP_SKB_CB(skb)->end_seq))
+ 		sk_eat_skb(ssk, skb);
+ 	if (mptcp_subflow_get_map_offset(subflow) >= subflow->map_data_len)
+@@ -1387,6 +1402,7 @@ static void subflow_ulp_clone(const struct request_sock *req,
+ 		new_ctx->mp_join = 1;
+ 		new_ctx->fully_established = 1;
+ 		new_ctx->backup = subflow_req->backup;
++		new_ctx->request_bkup = subflow_req->request_bkup;
+ 		new_ctx->local_id = subflow_req->local_id;
+ 		new_ctx->remote_id = subflow_req->remote_id;
+ 		new_ctx->token = subflow_req->token;
+diff --git a/net/netfilter/ipset/ip_set_list_set.c b/net/netfilter/ipset/ip_set_list_set.c
+index e839c356bcb56b..902ff2f3bc72b5 100644
+--- a/net/netfilter/ipset/ip_set_list_set.c
++++ b/net/netfilter/ipset/ip_set_list_set.c
+@@ -547,6 +547,9 @@ list_set_cancel_gc(struct ip_set *set)
+ 
+ 	if (SET_WITH_TIMEOUT(set))
+ 		del_timer_sync(&map->gc);
++
++	/* Flush list to drop references to other ipsets */
++	list_set_flush(set);
+ }
+ 
+ static const struct ip_set_type_variant set_variant = {
+diff --git a/net/netfilter/ipvs/ip_vs_proto_sctp.c b/net/netfilter/ipvs/ip_vs_proto_sctp.c
+index 1e689c71412716..83e452916403d5 100644
+--- a/net/netfilter/ipvs/ip_vs_proto_sctp.c
++++ b/net/netfilter/ipvs/ip_vs_proto_sctp.c
+@@ -126,7 +126,7 @@ sctp_snat_handler(struct sk_buff *skb, struct ip_vs_protocol *pp,
+ 	if (sctph->source != cp->vport || payload_csum ||
+ 	    skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		sctph->source = cp->vport;
+-		if (!skb_is_gso(skb) || !skb_is_gso_sctp(skb))
++		if (!skb_is_gso(skb))
+ 			sctp_nat_csum(skb, sctph, sctphoff);
+ 	} else {
+ 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+@@ -175,7 +175,7 @@ sctp_dnat_handler(struct sk_buff *skb, struct ip_vs_protocol *pp,
+ 	    (skb->ip_summed == CHECKSUM_PARTIAL &&
+ 	     !(skb_dst(skb)->dev->features & NETIF_F_SCTP_CRC))) {
+ 		sctph->dest = cp->dport;
+-		if (!skb_is_gso(skb) || !skb_is_gso_sctp(skb))
++		if (!skb_is_gso(skb))
+ 			sctp_nat_csum(skb, sctph, sctphoff);
+ 	} else if (skb->ip_summed != CHECKSUM_PARTIAL) {
+ 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index ceb7c988edefa0..b55e87143c2cef 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -3413,7 +3413,8 @@ static int ctnetlink_del_expect(struct net *net, struct sock *ctnl,
+ 
+ 		if (cda[CTA_EXPECT_ID]) {
+ 			__be32 id = nla_get_be32(cda[CTA_EXPECT_ID]);
+-			if (ntohl(id) != (u32)(unsigned long)exp) {
++
++			if (id != nf_expect_get_id(exp)) {
+ 				nf_ct_expect_put(exp);
+ 				return -ENOENT;
+ 			}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index f4bbddfbbc2476..249c30c47cbd69 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2968,13 +2968,13 @@ static struct nft_expr *nft_expr_init(const struct nft_ctx *ctx,
+ 	return ERR_PTR(err);
+ }
+ 
+-int nft_expr_clone(struct nft_expr *dst, struct nft_expr *src)
++int nft_expr_clone(struct nft_expr *dst, struct nft_expr *src, gfp_t gfp)
+ {
+ 	int err;
+ 
+ 	if (src->ops->clone) {
+ 		dst->ops = src->ops;
+-		err = src->ops->clone(dst, src);
++		err = src->ops->clone(dst, src, gfp);
+ 		if (err < 0)
+ 			return err;
+ 	} else {
+@@ -3349,6 +3349,15 @@ static void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *r
+ 	nf_tables_rule_destroy(ctx, rule);
+ }
+ 
++/** nft_chain_validate - loop detection and hook validation
++ *
++ * @ctx: context containing call depth and base chain
++ * @chain: chain to validate
++ *
++ * Walk through the rules of the given chain and chase all jumps/gotos
++ * and set lookups until either the jump limit is hit or all reachable
++ * chains have been validated.
++ */
+ int nft_chain_validate(const struct nft_ctx *ctx, const struct nft_chain *chain)
+ {
+ 	struct nft_expr *expr, *last;
+@@ -3367,6 +3376,9 @@ int nft_chain_validate(const struct nft_ctx *ctx, const struct nft_chain *chain)
+ 			if (!expr->ops->validate)
+ 				continue;
+ 
++			/* This may call nft_chain_validate() recursively,
++			 * callers that do so must increment ctx->level.
++			 */
+ 			err = expr->ops->validate(ctx, expr, &data);
+ 			if (err < 0)
+ 				return err;
+@@ -5354,8 +5366,10 @@ static int nf_tables_getsetelem(struct net *net, struct sock *nlsk,
+ 
+ 	nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
+ 		err = nft_get_set_elem(&ctx, set, attr);
+-		if (err < 0)
++		if (err < 0) {
++			NL_SET_BAD_ATTR(extack, attr);
+ 			break;
++		}
+ 	}
+ 
+ 	return err;
+@@ -5522,7 +5536,7 @@ static int nft_set_elem_expr_setup(struct nft_ctx *ctx,
+ 	if (expr == NULL)
+ 		return 0;
+ 
+-	err = nft_expr_clone(elem_expr, expr);
++	err = nft_expr_clone(elem_expr, expr, GFP_KERNEL);
+ 	if (err < 0)
+ 		return -ENOMEM;
+ 
+@@ -5630,7 +5644,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ 		if (!expr)
+ 			return -ENOMEM;
+ 
+-		err = nft_expr_clone(expr, set->expr);
++		err = nft_expr_clone(expr, set->expr, GFP_KERNEL);
+ 		if (err < 0)
+ 			goto err_set_elem_expr;
+ 	}
+@@ -5848,8 +5862,10 @@ static int nf_tables_newsetelem(struct net *net, struct sock *nlsk,
+ 
+ 	nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
+ 		err = nft_add_set_elem(&ctx, set, attr, nlh->nlmsg_flags);
+-		if (err < 0)
++		if (err < 0) {
++			NL_SET_BAD_ATTR(extack, attr);
+ 			return err;
++		}
+ 	}
+ 
+ 	if (nft_net->validate_state == NFT_VALIDATE_DO)
+@@ -6058,9 +6074,10 @@ static int nf_tables_delsetelem(struct net *net, struct sock *nlsk,
+ 
+ 	nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
+ 		err = nft_del_setelem(&ctx, set, attr);
+-		if (err < 0)
++		if (err < 0) {
++			NL_SET_BAD_ATTR(extack, attr);
+ 			break;
+-
++		}
+ 		set->ndeact++;
+ 	}
+ 	return err;
+@@ -9029,6 +9046,7 @@ static bool nf_tables_valid_genid(struct net *net, u32 genid)
+ 	bool genid_ok;
+ 
+ 	mutex_lock(&nft_net->commit_mutex);
++	nft_net->tstamp = get_jiffies_64();
+ 
+ 	genid_ok = genid == 0 || nft_net->base_seq == genid;
+ 	if (!genid_ok)
+@@ -9081,119 +9099,6 @@ int nft_chain_validate_hooks(const struct nft_chain *chain,
+ }
+ EXPORT_SYMBOL_GPL(nft_chain_validate_hooks);
+ 
+-/*
+- * Loop detection - walk through the ruleset beginning at the destination chain
+- * of a new jump until either the source chain is reached (loop) or all
+- * reachable chains have been traversed.
+- *
+- * The loop check is performed whenever a new jump verdict is added to an
+- * expression or verdict map or a verdict map is bound to a new chain.
+- */
+-
+-static int nf_tables_check_loops(const struct nft_ctx *ctx,
+-				 const struct nft_chain *chain);
+-
+-static int nft_check_loops(const struct nft_ctx *ctx,
+-			   const struct nft_set_ext *ext)
+-{
+-	const struct nft_data *data;
+-	int ret;
+-
+-	data = nft_set_ext_data(ext);
+-	switch (data->verdict.code) {
+-	case NFT_JUMP:
+-	case NFT_GOTO:
+-		ret = nf_tables_check_loops(ctx, data->verdict.chain);
+-		break;
+-	default:
+-		ret = 0;
+-		break;
+-	}
+-
+-	return ret;
+-}
+-
+-static int nf_tables_loop_check_setelem(const struct nft_ctx *ctx,
+-					struct nft_set *set,
+-					const struct nft_set_iter *iter,
+-					struct nft_set_elem *elem)
+-{
+-	const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+-
+-	if (nft_set_ext_exists(ext, NFT_SET_EXT_FLAGS) &&
+-	    *nft_set_ext_flags(ext) & NFT_SET_ELEM_INTERVAL_END)
+-		return 0;
+-
+-	return nft_check_loops(ctx, ext);
+-}
+-
+-static int nf_tables_check_loops(const struct nft_ctx *ctx,
+-				 const struct nft_chain *chain)
+-{
+-	const struct nft_rule *rule;
+-	const struct nft_expr *expr, *last;
+-	struct nft_set *set;
+-	struct nft_set_binding *binding;
+-	struct nft_set_iter iter;
+-
+-	if (ctx->chain == chain)
+-		return -ELOOP;
+-
+-	list_for_each_entry(rule, &chain->rules, list) {
+-		nft_rule_for_each_expr(expr, last, rule) {
+-			struct nft_immediate_expr *priv;
+-			const struct nft_data *data;
+-			int err;
+-
+-			if (strcmp(expr->ops->type->name, "immediate"))
+-				continue;
+-
+-			priv = nft_expr_priv(expr);
+-			if (priv->dreg != NFT_REG_VERDICT)
+-				continue;
+-
+-			data = &priv->data;
+-			switch (data->verdict.code) {
+-			case NFT_JUMP:
+-			case NFT_GOTO:
+-				err = nf_tables_check_loops(ctx,
+-							data->verdict.chain);
+-				if (err < 0)
+-					return err;
+-				break;
+-			default:
+-				break;
+-			}
+-		}
+-	}
+-
+-	list_for_each_entry(set, &ctx->table->sets, list) {
+-		if (!nft_is_active_next(ctx->net, set))
+-			continue;
+-		if (!(set->flags & NFT_SET_MAP) ||
+-		    set->dtype != NFT_DATA_VERDICT)
+-			continue;
+-
+-		list_for_each_entry(binding, &set->bindings, list) {
+-			if (!(binding->flags & NFT_SET_MAP) ||
+-			    binding->chain != chain)
+-				continue;
+-
+-			iter.genmask	= nft_genmask_next(ctx->net);
+-			iter.skip 	= 0;
+-			iter.count	= 0;
+-			iter.err	= 0;
+-			iter.fn		= nf_tables_loop_check_setelem;
+-
+-			set->ops->walk(ctx, set, &iter);
+-			if (iter.err < 0)
+-				return iter.err;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+ /**
+  *	nft_parse_u32_check - fetch u32 attribute and check for maximum value
+  *
+@@ -9329,7 +9234,7 @@ static int nft_validate_register_store(const struct nft_ctx *ctx,
+ 		if (data != NULL &&
+ 		    (data->verdict.code == NFT_GOTO ||
+ 		     data->verdict.code == NFT_JUMP)) {
+-			err = nf_tables_check_loops(ctx, data->verdict.chain);
++			err = nft_chain_validate(ctx, data->verdict.chain);
+ 			if (err < 0)
+ 				return err;
+ 		}
+diff --git a/net/netfilter/nft_connlimit.c b/net/netfilter/nft_connlimit.c
+index 7d0761fad37ef5..091457e5c260d8 100644
+--- a/net/netfilter/nft_connlimit.c
++++ b/net/netfilter/nft_connlimit.c
+@@ -195,7 +195,7 @@ static void nft_connlimit_destroy(const struct nft_ctx *ctx,
+ 	nft_connlimit_do_destroy(ctx, priv);
+ }
+ 
+-static int nft_connlimit_clone(struct nft_expr *dst, const struct nft_expr *src)
++static int nft_connlimit_clone(struct nft_expr *dst, const struct nft_expr *src, gfp_t gfp)
+ {
+ 	struct nft_connlimit *priv_dst = nft_expr_priv(dst);
+ 	struct nft_connlimit *priv_src = nft_expr_priv(src);
+diff --git a/net/netfilter/nft_counter.c b/net/netfilter/nft_counter.c
+index 85ed461ec24e85..75fa6fcd6cd6c6 100644
+--- a/net/netfilter/nft_counter.c
++++ b/net/netfilter/nft_counter.c
+@@ -224,7 +224,7 @@ static void nft_counter_destroy(const struct nft_ctx *ctx,
+ 	nft_counter_do_destroy(priv);
+ }
+ 
+-static int nft_counter_clone(struct nft_expr *dst, const struct nft_expr *src)
++static int nft_counter_clone(struct nft_expr *dst, const struct nft_expr *src, gfp_t gfp)
+ {
+ 	struct nft_counter_percpu_priv *priv = nft_expr_priv(src);
+ 	struct nft_counter_percpu_priv *priv_clone = nft_expr_priv(dst);
+@@ -234,7 +234,7 @@ static int nft_counter_clone(struct nft_expr *dst, const struct nft_expr *src)
+ 
+ 	nft_counter_fetch(priv, &total);
+ 
+-	cpu_stats = alloc_percpu_gfp(struct nft_counter, GFP_ATOMIC);
++	cpu_stats = alloc_percpu_gfp(struct nft_counter, gfp);
+ 	if (cpu_stats == NULL)
+ 		return -ENOMEM;
+ 
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 408b7f5faa5e59..9461293182e85e 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -48,7 +48,7 @@ static void *nft_dynset_new(struct nft_set *set, const struct nft_expr *expr,
+ 
+ 	ext = nft_set_elem_ext(set, elem);
+ 	if (priv->expr != NULL &&
+-	    nft_expr_clone(nft_set_ext_expr(ext), priv->expr) < 0)
++	    nft_expr_clone(nft_set_ext_expr(ext), priv->expr, GFP_ATOMIC) < 0)
+ 		goto err2;
+ 
+ 	return elem;
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index f0a9ad1c4ea442..2499d25a5c85f2 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -38,6 +38,7 @@ struct nft_rhash_cmp_arg {
+ 	const struct nft_set		*set;
+ 	const u32			*key;
+ 	u8				genmask;
++	u64				tstamp;
+ };
+ 
+ static inline u32 nft_rhash_key(const void *data, u32 len, u32 seed)
+@@ -64,7 +65,7 @@ static inline int nft_rhash_cmp(struct rhashtable_compare_arg *arg,
+ 		return 1;
+ 	if (nft_set_elem_is_dead(&he->ext))
+ 		return 1;
+-	if (nft_set_elem_expired(&he->ext))
++	if (__nft_set_elem_expired(&he->ext, x->tstamp))
+ 		return 1;
+ 	if (!nft_set_elem_active(&he->ext, x->genmask))
+ 		return 1;
+@@ -88,6 +89,7 @@ static bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
+ 		.genmask = nft_genmask_cur(net),
+ 		.set	 = set,
+ 		.key	 = key,
++		.tstamp  = get_jiffies_64(),
+ 	};
+ 
+ 	he = rhashtable_lookup(&priv->ht, &arg, nft_rhash_params);
+@@ -106,6 +108,7 @@ static void *nft_rhash_get(const struct net *net, const struct nft_set *set,
+ 		.genmask = nft_genmask_cur(net),
+ 		.set	 = set,
+ 		.key	 = elem->key.val.data,
++		.tstamp  = get_jiffies_64(),
+ 	};
+ 
+ 	he = rhashtable_lookup(&priv->ht, &arg, nft_rhash_params);
+@@ -129,6 +132,7 @@ static bool nft_rhash_update(struct nft_set *set, const u32 *key,
+ 		.genmask = NFT_GENMASK_ANY,
+ 		.set	 = set,
+ 		.key	 = key,
++		.tstamp  = get_jiffies_64(),
+ 	};
+ 
+ 	he = rhashtable_lookup(&priv->ht, &arg, nft_rhash_params);
+@@ -172,6 +176,7 @@ static int nft_rhash_insert(const struct net *net, const struct nft_set *set,
+ 		.genmask = nft_genmask_next(net),
+ 		.set	 = set,
+ 		.key	 = elem->key.val.data,
++		.tstamp	 = nft_net_tstamp(net),
+ 	};
+ 	struct nft_rhash_elem *prev;
+ 
+@@ -214,6 +219,7 @@ static void *nft_rhash_deactivate(const struct net *net,
+ 		.genmask = nft_genmask_next(net),
+ 		.set	 = set,
+ 		.key	 = elem->key.val.data,
++		.tstamp	 = nft_net_tstamp(net),
+ 	};
+ 
+ 	rcu_read_lock();
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 5a8521abd8f5cc..9e0269e8501799 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -504,6 +504,7 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+  * @set:	nftables API set representation
+  * @data:	Key data to be matched against existing elements
+  * @genmask:	If set, check that element is active in given genmask
++ * @tstamp:	timestamp to check for expired elements
+  *
+  * This is essentially the same as the lookup function, except that it matches
+  * key data against the uncommitted copy and doesn't use preallocated maps for
+@@ -513,7 +514,8 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+  */
+ static struct nft_pipapo_elem *pipapo_get(const struct net *net,
+ 					  const struct nft_set *set,
+-					  const u8 *data, u8 genmask)
++					  const u8 *data, u8 genmask,
++					  u64 tstamp)
+ {
+ 	struct nft_pipapo_elem *ret = ERR_PTR(-ENOENT);
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+@@ -566,7 +568,7 @@ static struct nft_pipapo_elem *pipapo_get(const struct net *net,
+ 			goto out;
+ 
+ 		if (last) {
+-			if (nft_set_elem_expired(&f->mt[b].e->ext))
++			if (__nft_set_elem_expired(&f->mt[b].e->ext, tstamp))
+ 				goto next_match;
+ 			if ((genmask &&
+ 			     !nft_set_elem_active(&f->mt[b].e->ext, genmask)))
+@@ -603,7 +605,7 @@ static void *nft_pipapo_get(const struct net *net, const struct nft_set *set,
+ 			    const struct nft_set_elem *elem, unsigned int flags)
+ {
+ 	return pipapo_get(net, set, (const u8 *)elem->key.val.data,
+-			  nft_genmask_cur(net));
++			  nft_genmask_cur(net), get_jiffies_64());
+ }
+ 
+ /**
+@@ -1197,6 +1199,7 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+ 	struct nft_pipapo_match *m = priv->clone;
+ 	u8 genmask = nft_genmask_next(net);
++	u64 tstamp = nft_net_tstamp(net);
+ 	struct nft_pipapo_field *f;
+ 	const u8 *start_p, *end_p;
+ 	int i, bsize_max, err = 0;
+@@ -1206,7 +1209,7 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 	else
+ 		end = start;
+ 
+-	dup = pipapo_get(net, set, start, genmask);
++	dup = pipapo_get(net, set, start, genmask, tstamp);
+ 	if (!IS_ERR(dup)) {
+ 		/* Check if we already have the same exact entry */
+ 		const struct nft_data *dup_key, *dup_end;
+@@ -1228,7 +1231,7 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 
+ 	if (PTR_ERR(dup) == -ENOENT) {
+ 		/* Look for partially overlapping entries */
+-		dup = pipapo_get(net, set, end, nft_genmask_next(net));
++		dup = pipapo_get(net, set, end, nft_genmask_next(net), tstamp);
+ 	}
+ 
+ 	if (PTR_ERR(dup) != -ENOENT) {
+@@ -1580,6 +1583,7 @@ static void pipapo_gc(const struct nft_set *_set, struct nft_pipapo_match *m)
+ 	struct nft_set *set = (struct nft_set *) _set;
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+ 	struct net *net = read_pnet(&set->net);
++	u64 tstamp = nft_net_tstamp(net);
+ 	int rules_f0, first_rule = 0;
+ 	struct nft_trans_gc *gc;
+ 
+@@ -1613,7 +1617,7 @@ static void pipapo_gc(const struct nft_set *_set, struct nft_pipapo_match *m)
+ 		/* synchronous gc never fails, there is no need to set on
+ 		 * NFT_SET_ELEM_DEAD_BIT.
+ 		 */
+-		if (nft_set_elem_expired(&e->ext)) {
++		if (__nft_set_elem_expired(&e->ext, tstamp)) {
+ 			priv->dirty = true;
+ 
+ 			gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
+@@ -1772,7 +1776,7 @@ static void *pipapo_deactivate(const struct net *net, const struct nft_set *set,
+ {
+ 	struct nft_pipapo_elem *e;
+ 
+-	e = pipapo_get(net, set, data, nft_genmask_next(net));
++	e = pipapo_get(net, set, data, nft_genmask_next(net), nft_net_tstamp(net));
+ 	if (IS_ERR(e))
+ 		return NULL;
+ 
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index 60fb8bc0fdcc96..13c7e22c938423 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -1129,8 +1129,14 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	bool map_index;
+ 	int i, ret = 0;
+ 
+-	if (unlikely(!irq_fpu_usable()))
+-		return nft_pipapo_lookup(net, set, key, ext);
++	local_bh_disable();
++
++	if (unlikely(!irq_fpu_usable())) {
++		bool fallback_res = nft_pipapo_lookup(net, set, key, ext);
++
++		local_bh_enable();
++		return fallback_res;
++	}
+ 
+ 	m = rcu_dereference(priv->match);
+ 
+@@ -1140,6 +1146,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	scratch = *raw_cpu_ptr(m->scratch);
+ 	if (unlikely(!scratch)) {
+ 		kernel_fpu_end();
++		local_bh_enable();
+ 		return false;
+ 	}
+ 
+@@ -1220,6 +1227,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	if (i % 2)
+ 		scratch->map_index = !map_index;
+ 	kernel_fpu_end();
++	local_bh_enable();
+ 
+ 	return ret >= 0;
+ }
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 18c0d163dc76cb..bbced30113e4e2 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -316,6 +316,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 	struct nft_rbtree *priv = nft_set_priv(set);
+ 	u8 cur_genmask = nft_genmask_cur(net);
+ 	u8 genmask = nft_genmask_next(net);
++	u64 tstamp = nft_net_tstamp(net);
+ 	int d;
+ 
+ 	/* Descend the tree to search for an existing element greater than the
+@@ -363,7 +364,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 		/* perform garbage collection to avoid bogus overlap reports
+ 		 * but skip new elements in this transaction.
+ 		 */
+-		if (nft_set_elem_expired(&rbe->ext) &&
++		if (__nft_set_elem_expired(&rbe->ext, tstamp) &&
+ 		    nft_set_elem_active(&rbe->ext, cur_genmask)) {
+ 			const struct nft_rbtree_elem *removed_end;
+ 
+@@ -550,6 +551,7 @@ static void *nft_rbtree_deactivate(const struct net *net,
+ 	const struct rb_node *parent = priv->root.rb_node;
+ 	struct nft_rbtree_elem *rbe, *this = elem->priv;
+ 	u8 genmask = nft_genmask_next(net);
++	u64 tstamp = nft_net_tstamp(net);
+ 	int d;
+ 
+ 	while (parent != NULL) {
+@@ -570,7 +572,7 @@ static void *nft_rbtree_deactivate(const struct net *net,
+ 				   nft_rbtree_interval_end(this)) {
+ 				parent = parent->rb_right;
+ 				continue;
+-			} else if (nft_set_elem_expired(&rbe->ext)) {
++			} else if (__nft_set_elem_expired(&rbe->ext, tstamp)) {
+ 				break;
+ 			} else if (!nft_set_elem_active(&rbe->ext, genmask)) {
+ 				parent = parent->rb_left;
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 9bec88fe350586..ce3e20bcde4ab5 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -503,6 +503,61 @@ static void *packet_current_frame(struct packet_sock *po,
+ 	return packet_lookup_frame(po, rb, rb->head, status);
+ }
+ 
++static u16 vlan_get_tci(struct sk_buff *skb, struct net_device *dev)
++{
++	u8 *skb_orig_data = skb->data;
++	int skb_orig_len = skb->len;
++	struct vlan_hdr vhdr, *vh;
++	unsigned int header_len;
++
++	if (!dev)
++		return 0;
++
++	/* In the SOCK_DGRAM scenario, skb data starts at the network
++	 * protocol, which is after the VLAN headers. The outer VLAN
++	 * header is at the hard_header_len offset in non-variable
++	 * length link layer headers. If it's a VLAN device, the
++	 * min_header_len should be used to exclude the VLAN header
++	 * size.
++	 */
++	if (dev->min_header_len == dev->hard_header_len)
++		header_len = dev->hard_header_len;
++	else if (is_vlan_dev(dev))
++		header_len = dev->min_header_len;
++	else
++		return 0;
++
++	skb_push(skb, skb->data - skb_mac_header(skb));
++	vh = skb_header_pointer(skb, header_len, sizeof(vhdr), &vhdr);
++	if (skb_orig_data != skb->data) {
++		skb->data = skb_orig_data;
++		skb->len = skb_orig_len;
++	}
++	if (unlikely(!vh))
++		return 0;
++
++	return ntohs(vh->h_vlan_TCI);
++}
++
++static __be16 vlan_get_protocol_dgram(struct sk_buff *skb)
++{
++	__be16 proto = skb->protocol;
++
++	if (unlikely(eth_type_vlan(proto))) {
++		u8 *skb_orig_data = skb->data;
++		int skb_orig_len = skb->len;
++
++		skb_push(skb, skb->data - skb_mac_header(skb));
++		proto = __vlan_get_protocol(skb, proto, NULL);
++		if (skb_orig_data != skb->data) {
++			skb->data = skb_orig_data;
++			skb->len = skb_orig_len;
++		}
++	}
++
++	return proto;
++}
++
+ static void prb_del_retire_blk_timer(struct tpacket_kbdq_core *pkc)
+ {
+ 	del_timer_sync(&pkc->retire_blk_timer);
+@@ -972,10 +1027,16 @@ static void prb_clear_rxhash(struct tpacket_kbdq_core *pkc,
+ static void prb_fill_vlan_info(struct tpacket_kbdq_core *pkc,
+ 			struct tpacket3_hdr *ppd)
+ {
++	struct packet_sock *po = container_of(pkc, struct packet_sock, rx_ring.prb_bdqc);
++
+ 	if (skb_vlan_tag_present(pkc->skb)) {
+ 		ppd->hv1.tp_vlan_tci = skb_vlan_tag_get(pkc->skb);
+ 		ppd->hv1.tp_vlan_tpid = ntohs(pkc->skb->vlan_proto);
+ 		ppd->tp_status = TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
++	} else if (unlikely(po->sk.sk_type == SOCK_DGRAM && eth_type_vlan(pkc->skb->protocol))) {
++		ppd->hv1.tp_vlan_tci = vlan_get_tci(pkc->skb, pkc->skb->dev);
++		ppd->hv1.tp_vlan_tpid = ntohs(pkc->skb->protocol);
++		ppd->tp_status = TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
+ 	} else {
+ 		ppd->hv1.tp_vlan_tci = 0;
+ 		ppd->hv1.tp_vlan_tpid = 0;
+@@ -2390,6 +2451,10 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ 			h.h2->tp_vlan_tci = skb_vlan_tag_get(skb);
+ 			h.h2->tp_vlan_tpid = ntohs(skb->vlan_proto);
+ 			status |= TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
++		} else if (unlikely(sk->sk_type == SOCK_DGRAM && eth_type_vlan(skb->protocol))) {
++			h.h2->tp_vlan_tci = vlan_get_tci(skb, skb->dev);
++			h.h2->tp_vlan_tpid = ntohs(skb->protocol);
++			status |= TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
+ 		} else {
+ 			h.h2->tp_vlan_tci = 0;
+ 			h.h2->tp_vlan_tpid = 0;
+@@ -2419,7 +2484,8 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	sll->sll_halen = dev_parse_header(skb, sll->sll_addr);
+ 	sll->sll_family = AF_PACKET;
+ 	sll->sll_hatype = dev->type;
+-	sll->sll_protocol = skb->protocol;
++	sll->sll_protocol = (sk->sk_type == SOCK_DGRAM) ?
++		vlan_get_protocol_dgram(skb) : skb->protocol;
+ 	sll->sll_pkttype = skb->pkt_type;
+ 	if (unlikely(packet_sock_flag(po, PACKET_SOCK_ORIGDEV)))
+ 		sll->sll_ifindex = orig_dev->ifindex;
+@@ -3451,7 +3517,8 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 		/* Original length was stored in sockaddr_ll fields */
+ 		origlen = PACKET_SKB_CB(skb)->sa.origlen;
+ 		sll->sll_family = AF_PACKET;
+-		sll->sll_protocol = skb->protocol;
++		sll->sll_protocol = (sock->type == SOCK_DGRAM) ?
++			vlan_get_protocol_dgram(skb) : skb->protocol;
+ 	}
+ 
+ 	sock_recv_ts_and_drops(msg, sk, skb);
+@@ -3506,6 +3573,21 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 			aux.tp_vlan_tci = skb_vlan_tag_get(skb);
+ 			aux.tp_vlan_tpid = ntohs(skb->vlan_proto);
+ 			aux.tp_status |= TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
++		} else if (unlikely(sock->type == SOCK_DGRAM && eth_type_vlan(skb->protocol))) {
++			struct sockaddr_ll *sll = &PACKET_SKB_CB(skb)->sa.ll;
++			struct net_device *dev;
++
++			rcu_read_lock();
++			dev = dev_get_by_index_rcu(sock_net(sk), sll->sll_ifindex);
++			if (dev) {
++				aux.tp_vlan_tci = vlan_get_tci(skb, dev);
++				aux.tp_vlan_tpid = ntohs(skb->protocol);
++				aux.tp_status |= TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
++			} else {
++				aux.tp_vlan_tci = 0;
++				aux.tp_vlan_tpid = 0;
++			}
++			rcu_read_unlock();
+ 		} else {
+ 			aux.tp_vlan_tci = 0;
+ 			aux.tp_vlan_tpid = 0;
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index a59a8ad3872117..4ea7a81707f3fe 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -41,6 +41,8 @@ static DEFINE_MUTEX(zones_mutex);
+ struct zones_ht_key {
+ 	struct net *net;
+ 	u16 zone;
++	/* Note : pad[] must be the last field. */
++	u8  pad[];
+ };
+ 
+ struct tcf_ct_flow_table {
+@@ -57,7 +59,7 @@ struct tcf_ct_flow_table {
+ static const struct rhashtable_params zones_params = {
+ 	.head_offset = offsetof(struct tcf_ct_flow_table, node),
+ 	.key_offset = offsetof(struct tcf_ct_flow_table, key),
+-	.key_len = sizeof_field(struct tcf_ct_flow_table, key),
++	.key_len = offsetof(struct zones_ht_key, pad),
+ 	.automatic_shrinking = true,
+ };
+ 
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 8f3aab6a4458b3..8fe1a74f0618d3 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -723,23 +723,25 @@ static int __sctp_hash_endpoint(struct sctp_endpoint *ep)
+ 	struct sock *sk = ep->base.sk;
+ 	struct net *net = sock_net(sk);
+ 	struct sctp_hashbucket *head;
+-	struct sctp_ep_common *epb;
++	int err = 0;
+ 
+-	epb = &ep->base;
+-	epb->hashent = sctp_ep_hashfn(net, epb->bind_addr.port);
+-	head = &sctp_ep_hashtable[epb->hashent];
++	ep->hashent = sctp_ep_hashfn(net, ep->base.bind_addr.port);
++	head = &sctp_ep_hashtable[ep->hashent];
+ 
++	write_lock(&head->lock);
+ 	if (sk->sk_reuseport) {
+ 		bool any = sctp_is_ep_boundall(sk);
+-		struct sctp_ep_common *epb2;
++		struct sctp_endpoint *ep2;
+ 		struct list_head *list;
+-		int cnt = 0, err = 1;
++		int cnt = 0;
++
++		err = 1;
+ 
+ 		list_for_each(list, &ep->base.bind_addr.address_list)
+ 			cnt++;
+ 
+-		sctp_for_each_hentry(epb2, &head->chain) {
+-			struct sock *sk2 = epb2->sk;
++		sctp_for_each_hentry(ep2, &head->chain) {
++			struct sock *sk2 = ep2->base.sk;
+ 
+ 			if (!net_eq(sock_net(sk2), net) || sk2 == sk ||
+ 			    !uid_eq(sock_i_uid(sk2), sock_i_uid(sk)) ||
+@@ -751,24 +753,24 @@ static int __sctp_hash_endpoint(struct sctp_endpoint *ep)
+ 			if (!err) {
+ 				err = reuseport_add_sock(sk, sk2, any);
+ 				if (err)
+-					return err;
++					goto out;
+ 				break;
+ 			} else if (err < 0) {
+-				return err;
++				goto out;
+ 			}
+ 		}
+ 
+ 		if (err) {
+ 			err = reuseport_alloc(sk, any);
+ 			if (err)
+-				return err;
++				goto out;
+ 		}
+ 	}
+ 
+-	write_lock(&head->lock);
+-	hlist_add_head(&epb->node, &head->chain);
++	hlist_add_head(&ep->node, &head->chain);
++out:
+ 	write_unlock(&head->lock);
+-	return 0;
++	return err;
+ }
+ 
+ /* Add an endpoint to the hash. Local BH-safe. */
+@@ -788,19 +790,15 @@ static void __sctp_unhash_endpoint(struct sctp_endpoint *ep)
+ {
+ 	struct sock *sk = ep->base.sk;
+ 	struct sctp_hashbucket *head;
+-	struct sctp_ep_common *epb;
+-
+-	epb = &ep->base;
+ 
+-	epb->hashent = sctp_ep_hashfn(sock_net(sk), epb->bind_addr.port);
++	ep->hashent = sctp_ep_hashfn(sock_net(sk), ep->base.bind_addr.port);
+ 
+-	head = &sctp_ep_hashtable[epb->hashent];
++	head = &sctp_ep_hashtable[ep->hashent];
+ 
++	write_lock(&head->lock);
+ 	if (rcu_access_pointer(sk->sk_reuseport_cb))
+ 		reuseport_detach_sock(sk);
+-
+-	write_lock(&head->lock);
+-	hlist_del_init(&epb->node);
++	hlist_del_init(&ep->node);
+ 	write_unlock(&head->lock);
+ }
+ 
+@@ -833,7 +831,6 @@ static struct sctp_endpoint *__sctp_rcv_lookup_endpoint(
+ 					const union sctp_addr *paddr)
+ {
+ 	struct sctp_hashbucket *head;
+-	struct sctp_ep_common *epb;
+ 	struct sctp_endpoint *ep;
+ 	struct sock *sk;
+ 	__be16 lport;
+@@ -843,8 +840,7 @@ static struct sctp_endpoint *__sctp_rcv_lookup_endpoint(
+ 	hash = sctp_ep_hashfn(net, ntohs(lport));
+ 	head = &sctp_ep_hashtable[hash];
+ 	read_lock(&head->lock);
+-	sctp_for_each_hentry(epb, &head->chain) {
+-		ep = sctp_ep(epb);
++	sctp_for_each_hentry(ep, &head->chain) {
+ 		if (sctp_endpoint_is_match(ep, net, laddr))
+ 			goto hit;
+ 	}
+diff --git a/net/sctp/proc.c b/net/sctp/proc.c
+index 963b94517ec20f..ec00ee75d59a65 100644
+--- a/net/sctp/proc.c
++++ b/net/sctp/proc.c
+@@ -161,7 +161,6 @@ static void *sctp_eps_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ static int sctp_eps_seq_show(struct seq_file *seq, void *v)
+ {
+ 	struct sctp_hashbucket *head;
+-	struct sctp_ep_common *epb;
+ 	struct sctp_endpoint *ep;
+ 	struct sock *sk;
+ 	int    hash = *(loff_t *)v;
+@@ -171,18 +170,17 @@ static int sctp_eps_seq_show(struct seq_file *seq, void *v)
+ 
+ 	head = &sctp_ep_hashtable[hash];
+ 	read_lock_bh(&head->lock);
+-	sctp_for_each_hentry(epb, &head->chain) {
+-		ep = sctp_ep(epb);
+-		sk = epb->sk;
++	sctp_for_each_hentry(ep, &head->chain) {
++		sk = ep->base.sk;
+ 		if (!net_eq(sock_net(sk), seq_file_net(seq)))
+ 			continue;
+ 		seq_printf(seq, "%8pK %8pK %-3d %-3d %-4d %-5d %5u %5lu ", ep, sk,
+ 			   sctp_sk(sk)->type, sk->sk_state, hash,
+-			   epb->bind_addr.port,
++			   ep->base.bind_addr.port,
+ 			   from_kuid_munged(seq_user_ns(seq), sock_i_uid(sk)),
+ 			   sock_i_ino(sk));
+ 
+-		sctp_seq_dump_local_addrs(seq, epb);
++		sctp_seq_dump_local_addrs(seq, &ep->base);
+ 		seq_printf(seq, "\n");
+ 	}
+ 	read_unlock_bh(&head->lock);
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 79cf4cda2cf6dd..5053d813e91cf2 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -5193,14 +5193,14 @@ int sctp_for_each_endpoint(int (*cb)(struct sctp_endpoint *, void *),
+ 			   void *p) {
+ 	int err = 0;
+ 	int hash = 0;
+-	struct sctp_ep_common *epb;
++	struct sctp_endpoint *ep;
+ 	struct sctp_hashbucket *head;
+ 
+ 	for (head = sctp_ep_hashtable; hash < sctp_ep_hashsize;
+ 	     hash++, head++) {
+ 		read_lock_bh(&head->lock);
+-		sctp_for_each_hentry(epb, &head->chain) {
+-			err = cb(sctp_ep(epb), p);
++		sctp_for_each_hentry(ep, &head->chain) {
++			err = cb(ep, p);
+ 			if (err)
+ 				break;
+ 		}
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index ab9ecdd1af0ac4..0e9d22e9c760b1 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1375,21 +1375,31 @@ int smc_conn_create(struct smc_sock *smc, struct smc_init_info *ini)
+ 	return rc;
+ }
+ 
+-/* convert the RMB size into the compressed notation - minimum 16K.
++#define SMCD_DMBE_SIZES		6 /* 0 -> 16KB, 1 -> 32KB, .. 6 -> 1MB */
++#define SMCR_RMBE_SIZES		5 /* 0 -> 16KB, 1 -> 32KB, .. 5 -> 512KB */
++
++/* convert the RMB size into the compressed notation (minimum 16K, see
++ * SMCD/R_DMBE_SIZES.
+  * In contrast to plain ilog2, this rounds towards the next power of 2,
+  * so the socket application gets at least its desired sndbuf / rcvbuf size.
+  */
+-static u8 smc_compress_bufsize(int size)
++static u8 smc_compress_bufsize(int size, bool is_smcd, bool is_rmb)
+ {
+ 	u8 compressed;
+ 
+ 	if (size <= SMC_BUF_MIN_SIZE)
+ 		return 0;
+ 
+-	size = (size - 1) >> 14;
+-	compressed = ilog2(size) + 1;
+-	if (compressed >= SMC_RMBE_SIZES)
+-		compressed = SMC_RMBE_SIZES - 1;
++	size = (size - 1) >> 14;  /* convert to 16K multiple */
++	compressed = min_t(u8, ilog2(size) + 1,
++			   is_smcd ? SMCD_DMBE_SIZES : SMCR_RMBE_SIZES);
++
++#ifdef CONFIG_ARCH_NO_SG_CHAIN
++	if (!is_smcd && is_rmb)
++		/* RMBs are backed by & limited to max size of scatterlists */
++		compressed = min_t(u8, compressed, ilog2((SG_MAX_SINGLE_ALLOC * PAGE_SIZE) >> 14));
++#endif
++
+ 	return compressed;
+ }
+ 
+@@ -1608,17 +1618,12 @@ static int smcr_buf_map_usable_links(struct smc_link_group *lgr,
+ 	return rc;
+ }
+ 
+-#define SMCD_DMBE_SIZES		6 /* 0 -> 16KB, 1 -> 32KB, .. 6 -> 1MB */
+-
+ static struct smc_buf_desc *smcd_new_buf_create(struct smc_link_group *lgr,
+ 						bool is_dmb, int bufsize)
+ {
+ 	struct smc_buf_desc *buf_desc;
+ 	int rc;
+ 
+-	if (smc_compress_bufsize(bufsize) > SMCD_DMBE_SIZES)
+-		return ERR_PTR(-EAGAIN);
+-
+ 	/* try to alloc a new DMB */
+ 	buf_desc = kzalloc(sizeof(*buf_desc), GFP_KERNEL);
+ 	if (!buf_desc)
+@@ -1666,9 +1671,8 @@ static int __smc_buf_create(struct smc_sock *smc, bool is_smcd, bool is_rmb)
+ 		/* use socket send buffer size (w/o overhead) as start value */
+ 		sk_buf_size = smc->sk.sk_sndbuf / 2;
+ 
+-	for (bufsize_short = smc_compress_bufsize(sk_buf_size);
++	for (bufsize_short = smc_compress_bufsize(sk_buf_size, is_smcd, is_rmb);
+ 	     bufsize_short >= 0; bufsize_short--) {
+-
+ 		if (is_rmb) {
+ 			lock = &lgr->rmbs_lock;
+ 			buf_list = &lgr->rmbs[bufsize_short];
+@@ -1677,8 +1681,6 @@ static int __smc_buf_create(struct smc_sock *smc, bool is_smcd, bool is_rmb)
+ 			buf_list = &lgr->sndbufs[bufsize_short];
+ 		}
+ 		bufsize = smc_uncompress_bufsize(bufsize_short);
+-		if ((1 << get_order(bufsize)) > SG_MAX_SINGLE_ALLOC)
+-			continue;
+ 
+ 		/* check for reusable slot in the link group */
+ 		buf_desc = smc_buf_get_slot(bufsize_short, lock, buf_list);
+diff --git a/net/sunrpc/auth_gss/gss_krb5_keys.c b/net/sunrpc/auth_gss/gss_krb5_keys.c
+index 726c076950c042..fc4639687c0fde 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_keys.c
++++ b/net/sunrpc/auth_gss/gss_krb5_keys.c
+@@ -161,7 +161,7 @@ u32 krb5_derive_key(const struct gss_krb5_enctype *gk5e,
+ 	if (IS_ERR(cipher))
+ 		goto err_return;
+ 	if (crypto_sync_skcipher_setkey(cipher, inkey->data, inkey->len))
+-		goto err_return;
++		goto err_free_cipher;
+ 
+ 	/* allocate and set up buffers */
+ 
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 196a3b11d15095..86397f9c4bc832 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2188,12 +2188,13 @@ call_transmit_status(struct rpc_task *task)
+ 		task->tk_action = call_transmit;
+ 		task->tk_status = 0;
+ 		break;
+-	case -ECONNREFUSED:
+ 	case -EHOSTDOWN:
+ 	case -ENETDOWN:
+ 	case -EHOSTUNREACH:
+ 	case -ENETUNREACH:
+ 	case -EPERM:
++		break;
++	case -ECONNREFUSED:
+ 		if (RPC_IS_SOFTCONN(task)) {
+ 			if (!task->tk_msg.rpc_proc->p_proc)
+ 				trace_xprt_ping(task->tk_xprt,
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index a4c9d410eb8d51..f4b1b7fee2c05d 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -348,8 +348,10 @@ static void rpc_make_runnable(struct workqueue_struct *wq,
+ 	if (RPC_IS_ASYNC(task)) {
+ 		INIT_WORK(&task->u.tk_work, rpc_async_schedule);
+ 		queue_work(wq, &task->u.tk_work);
+-	} else
++	} else {
++		smp_mb__after_atomic();
+ 		wake_up_bit(&task->tk_runstate, RPC_TASK_QUEUED);
++	}
+ }
+ 
+ /*
+diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
+index bf3627dce5529d..2035d748d923c7 100644
+--- a/net/sunrpc/xprtrdma/frwr_ops.c
++++ b/net/sunrpc/xprtrdma/frwr_ops.c
+@@ -50,11 +50,11 @@
+ #endif
+ 
+ /**
+- * frwr_release_mr - Destroy one MR
++ * frwr_mr_release - Destroy one MR
+  * @mr: MR allocated by frwr_mr_init
+  *
+  */
+-void frwr_release_mr(struct rpcrdma_mr *mr)
++void frwr_mr_release(struct rpcrdma_mr *mr)
+ {
+ 	int rc;
+ 
+@@ -83,10 +83,11 @@ static void frwr_mr_recycle(struct rpcrdma_mr *mr)
+ 	r_xprt->rx_stats.mrs_recycled++;
+ 	spin_unlock(&r_xprt->rx_buf.rb_lock);
+ 
+-	frwr_release_mr(mr);
++	frwr_mr_release(mr);
+ }
+ 
+-/* frwr_reset - Place MRs back on the free list
++/**
++ * frwr_reset - Place MRs back on @req's free list
+  * @req: request to reset
+  *
+  * Used after a failed marshal. For FRWR, this means the MRs
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 9e9df38b29f74b..9262c94a13c1d4 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -932,6 +932,8 @@ static int rpcrdma_reqs_setup(struct rpcrdma_xprt *r_xprt)
+ 
+ static void rpcrdma_req_reset(struct rpcrdma_req *req)
+ {
++	struct rpcrdma_mr *mr;
++
+ 	/* Credits are valid for only one connection */
+ 	req->rl_slot.rq_cong = 0;
+ 
+@@ -941,7 +943,19 @@ static void rpcrdma_req_reset(struct rpcrdma_req *req)
+ 	rpcrdma_regbuf_dma_unmap(req->rl_sendbuf);
+ 	rpcrdma_regbuf_dma_unmap(req->rl_recvbuf);
+ 
+-	frwr_reset(req);
++	/* The verbs consumer can't know the state of an MR on the
++	 * req->rl_registered list unless a successful completion
++	 * has occurred, so they cannot be re-used.
++	 */
++	while ((mr = rpcrdma_mr_pop(&req->rl_registered))) {
++		struct rpcrdma_buffer *buf = &mr->mr_xprt->rx_buf;
++
++		spin_lock(&buf->rb_lock);
++		list_del(&mr->mr_all);
++		spin_unlock(&buf->rb_lock);
++
++		frwr_mr_release(mr);
++	}
+ }
+ 
+ /* ASSUMPTION: the rb_allreqs list is stable for the duration,
+@@ -1100,7 +1114,7 @@ void rpcrdma_req_destroy(struct rpcrdma_req *req)
+ 		list_del(&mr->mr_all);
+ 		spin_unlock(&buf->rb_lock);
+ 
+-		frwr_release_mr(mr);
++		frwr_mr_release(mr);
+ 	}
+ 
+ 	rpcrdma_regbuf_free(req->rl_recvbuf);
+@@ -1131,7 +1145,7 @@ static void rpcrdma_mrs_destroy(struct rpcrdma_xprt *r_xprt)
+ 		list_del(&mr->mr_all);
+ 		spin_unlock(&buf->rb_lock);
+ 
+-		frwr_release_mr(mr);
++		frwr_mr_release(mr);
+ 
+ 		spin_lock(&buf->rb_lock);
+ 	}
+diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
+index 702f0344523cc9..73c8ab89456b04 100644
+--- a/net/sunrpc/xprtrdma/xprt_rdma.h
++++ b/net/sunrpc/xprtrdma/xprt_rdma.h
+@@ -520,7 +520,7 @@ rpcrdma_data_dir(bool writing)
+ void frwr_reset(struct rpcrdma_req *req);
+ int frwr_query_device(struct rpcrdma_ep *ep, const struct ib_device *device);
+ int frwr_mr_init(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr *mr);
+-void frwr_release_mr(struct rpcrdma_mr *mr);
++void frwr_mr_release(struct rpcrdma_mr *mr);
+ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt,
+ 				struct rpcrdma_mr_seg *seg,
+ 				int nsegs, bool writing, __be32 xid,
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index 3e47501f024fdf..ec6d7730b85224 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -129,8 +129,11 @@ static int tipc_udp_addr2str(struct tipc_media_addr *a, char *buf, int size)
+ 		snprintf(buf, size, "%pI4:%u", &ua->ipv4, ntohs(ua->port));
+ 	else if (ntohs(ua->proto) == ETH_P_IPV6)
+ 		snprintf(buf, size, "%pI6:%u", &ua->ipv6, ntohs(ua->port));
+-	else
++	else {
+ 		pr_err("Invalid UDP media address\n");
++		return 1;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 846e40dc00bb67..9a6bbf24b0f7d1 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -403,6 +403,10 @@ nl80211_unsol_bcast_probe_resp_policy[NL80211_UNSOL_BCAST_PROBE_RESP_ATTR_MAX +
+ 						       .len = IEEE80211_MAX_DATA_LEN }
+ };
+ 
++static struct netlink_range_validation q_range = {
++	.max = INT_MAX,
++};
++
+ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ 	[0] = { .strict_start_type = NL80211_ATTR_HE_OBSS_PD },
+ 	[NL80211_ATTR_WIPHY] = { .type = NLA_U32 },
+@@ -685,7 +689,7 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ 
+ 	[NL80211_ATTR_TXQ_LIMIT] = { .type = NLA_U32 },
+ 	[NL80211_ATTR_TXQ_MEMORY_LIMIT] = { .type = NLA_U32 },
+-	[NL80211_ATTR_TXQ_QUANTUM] = { .type = NLA_U32 },
++	[NL80211_ATTR_TXQ_QUANTUM] = NLA_POLICY_FULL_RANGE(NLA_U32, &q_range),
+ 	[NL80211_ATTR_HE_CAPABILITY] =
+ 		NLA_POLICY_RANGE(NLA_BINARY,
+ 				 NL80211_HE_MIN_CAPABILITY_LEN,
+@@ -3968,10 +3972,7 @@ static void get_key_callback(void *c, struct key_params *params)
+ 	struct nlattr *key;
+ 	struct get_key_cookie *cookie = c;
+ 
+-	if ((params->key &&
+-	     nla_put(cookie->msg, NL80211_ATTR_KEY_DATA,
+-		     params->key_len, params->key)) ||
+-	    (params->seq &&
++	if ((params->seq &&
+ 	     nla_put(cookie->msg, NL80211_ATTR_KEY_SEQ,
+ 		     params->seq_len, params->seq)) ||
+ 	    (params->cipher &&
+@@ -3983,10 +3984,7 @@ static void get_key_callback(void *c, struct key_params *params)
+ 	if (!key)
+ 		goto nla_put_failure;
+ 
+-	if ((params->key &&
+-	     nla_put(cookie->msg, NL80211_KEY_DATA,
+-		     params->key_len, params->key)) ||
+-	    (params->seq &&
++	if ((params->seq &&
+ 	     nla_put(cookie->msg, NL80211_KEY_SEQ,
+ 		     params->seq_len, params->seq)) ||
+ 	    (params->cipher &&
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 4b32e85c2d9a1b..37719fc39f64d3 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1334,7 +1334,7 @@ static u32 cfg80211_calculate_bitrate_he(struct rate_info *rate)
+ 		 2048, /*  1.000000... */
+ 	};
+ 	u32 rates_160M[3] = { 960777777, 907400000, 816666666 };
+-	u32 rates_969[3] =  { 480388888, 453700000, 408333333 };
++	u32 rates_996[3] =  { 480388888, 453700000, 408333333 };
+ 	u32 rates_484[3] =  { 229411111, 216666666, 195000000 };
+ 	u32 rates_242[3] =  { 114711111, 108333333,  97500000 };
+ 	u32 rates_106[3] =  {  40000000,  37777777,  34000000 };
+@@ -1354,12 +1354,14 @@ static u32 cfg80211_calculate_bitrate_he(struct rate_info *rate)
+ 	if (WARN_ON_ONCE(rate->nss < 1 || rate->nss > 8))
+ 		return 0;
+ 
+-	if (rate->bw == RATE_INFO_BW_160)
++	if (rate->bw == RATE_INFO_BW_160 ||
++	    (rate->bw == RATE_INFO_BW_HE_RU &&
++	     rate->he_ru_alloc == NL80211_RATE_INFO_HE_RU_ALLOC_2x996))
+ 		result = rates_160M[rate->he_gi];
+ 	else if (rate->bw == RATE_INFO_BW_80 ||
+ 		 (rate->bw == RATE_INFO_BW_HE_RU &&
+ 		  rate->he_ru_alloc == NL80211_RATE_INFO_HE_RU_ALLOC_996))
+-		result = rates_969[rate->he_gi];
++		result = rates_996[rate->he_gi];
+ 	else if (rate->bw == RATE_INFO_BW_40 ||
+ 		 (rate->bw == RATE_INFO_BW_HE_RU &&
+ 		  rate->he_ru_alloc == NL80211_RATE_INFO_HE_RU_ALLOC_484))
+diff --git a/samples/Kconfig b/samples/Kconfig
+index e76cdfc50e257d..a98e89992a18b2 100644
+--- a/samples/Kconfig
++++ b/samples/Kconfig
+@@ -120,6 +120,14 @@ config SAMPLE_CONNECTOR
+ 	  with it.
+ 	  See also Documentation/driver-api/connector.rst
+ 
++config SAMPLE_FANOTIFY_ERROR
++	bool "Build fanotify error monitoring sample"
++	depends on FANOTIFY && CC_CAN_LINK && HEADERS_INSTALL
++	help
++	  When enabled, this builds an example code that uses the
++	  FAN_FS_ERROR fanotify mechanism to monitor filesystem
++	  errors.
++
+ config SAMPLE_HIDRAW
+ 	bool "hidraw sample"
+ 	depends on CC_CAN_LINK && HEADERS_INSTALL
+diff --git a/samples/Makefile b/samples/Makefile
+index c3392a595e4b73..93e2d64bc9a79c 100644
+--- a/samples/Makefile
++++ b/samples/Makefile
+@@ -5,6 +5,7 @@ subdir-$(CONFIG_SAMPLE_AUXDISPLAY)	+= auxdisplay
+ subdir-$(CONFIG_SAMPLE_ANDROID_BINDERFS) += binderfs
+ obj-$(CONFIG_SAMPLE_CONFIGFS)		+= configfs/
+ obj-$(CONFIG_SAMPLE_CONNECTOR)		+= connector/
++obj-$(CONFIG_SAMPLE_FANOTIFY_ERROR)	+= fanotify/
+ subdir-$(CONFIG_SAMPLE_HIDRAW)		+= hidraw
+ obj-$(CONFIG_SAMPLE_HW_BREAKPOINT)	+= hw_breakpoint/
+ obj-$(CONFIG_SAMPLE_KDB)		+= kdb/
+diff --git a/samples/fanotify/.gitignore b/samples/fanotify/.gitignore
+new file mode 100644
+index 00000000000000..d74593e8b2dee8
+--- /dev/null
++++ b/samples/fanotify/.gitignore
+@@ -0,0 +1 @@
++fs-monitor
+diff --git a/samples/fanotify/Makefile b/samples/fanotify/Makefile
+new file mode 100644
+index 00000000000000..e20db1bdde3b4c
+--- /dev/null
++++ b/samples/fanotify/Makefile
+@@ -0,0 +1,5 @@
++# SPDX-License-Identifier: GPL-2.0-only
++userprogs-always-y += fs-monitor
++
++userccflags += -I usr/include -Wall
++
+diff --git a/samples/fanotify/fs-monitor.c b/samples/fanotify/fs-monitor.c
+new file mode 100644
+index 00000000000000..a0e44cd31e6f33
+--- /dev/null
++++ b/samples/fanotify/fs-monitor.c
+@@ -0,0 +1,142 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright 2021, Collabora Ltd.
++ */
++
++#define _GNU_SOURCE
++#include <errno.h>
++#include <err.h>
++#include <stdlib.h>
++#include <stdio.h>
++#include <fcntl.h>
++#include <sys/fanotify.h>
++#include <sys/types.h>
++#include <unistd.h>
++#include <sys/types.h>
++
++#ifndef FAN_FS_ERROR
++#define FAN_FS_ERROR		0x00008000
++#define FAN_EVENT_INFO_TYPE_ERROR	5
++
++struct fanotify_event_info_error {
++	struct fanotify_event_info_header hdr;
++	__s32 error;
++	__u32 error_count;
++};
++#endif
++
++#ifndef FILEID_INO32_GEN
++#define FILEID_INO32_GEN	1
++#endif
++
++#ifndef FILEID_INVALID
++#define	FILEID_INVALID		0xff
++#endif
++
++static void print_fh(struct file_handle *fh)
++{
++	int i;
++	uint32_t *h = (uint32_t *) fh->f_handle;
++
++	printf("\tfh: ");
++	for (i = 0; i < fh->handle_bytes; i++)
++		printf("%hhx", fh->f_handle[i]);
++	printf("\n");
++
++	printf("\tdecoded fh: ");
++	if (fh->handle_type == FILEID_INO32_GEN)
++		printf("inode=%u gen=%u\n", h[0], h[1]);
++	else if (fh->handle_type == FILEID_INVALID && !fh->handle_bytes)
++		printf("Type %d (Superblock error)\n", fh->handle_type);
++	else
++		printf("Type %d (Unknown)\n", fh->handle_type);
++
++}
++
++static void handle_notifications(char *buffer, int len)
++{
++	struct fanotify_event_metadata *event =
++		(struct fanotify_event_metadata *) buffer;
++	struct fanotify_event_info_header *info;
++	struct fanotify_event_info_error *err;
++	struct fanotify_event_info_fid *fid;
++	int off;
++
++	for (; FAN_EVENT_OK(event, len); event = FAN_EVENT_NEXT(event, len)) {
++
++		if (event->mask != FAN_FS_ERROR) {
++			printf("unexpected FAN MARK: %llx\n", event->mask);
++			goto next_event;
++		}
++
++		if (event->fd != FAN_NOFD) {
++			printf("Unexpected fd (!= FAN_NOFD)\n");
++			goto next_event;
++		}
++
++		printf("FAN_FS_ERROR (len=%d)\n", event->event_len);
++
++		for (off = sizeof(*event) ; off < event->event_len;
++		     off += info->len) {
++			info = (struct fanotify_event_info_header *)
++				((char *) event + off);
++
++			switch (info->info_type) {
++			case FAN_EVENT_INFO_TYPE_ERROR:
++				err = (struct fanotify_event_info_error *) info;
++
++				printf("\tGeneric Error Record: len=%d\n",
++				       err->hdr.len);
++				printf("\terror: %d\n", err->error);
++				printf("\terror_count: %d\n", err->error_count);
++				break;
++
++			case FAN_EVENT_INFO_TYPE_FID:
++				fid = (struct fanotify_event_info_fid *) info;
++
++				printf("\tfsid: %x%x\n",
++				       fid->fsid.val[0], fid->fsid.val[1]);
++				print_fh((struct file_handle *) &fid->handle);
++				break;
++
++			default:
++				printf("\tUnknown info type=%d len=%d:\n",
++				       info->info_type, info->len);
++			}
++		}
++next_event:
++		printf("---\n\n");
++	}
++}
++
++int main(int argc, char **argv)
++{
++	int fd;
++
++	char buffer[BUFSIZ];
++
++	if (argc < 2) {
++		printf("Missing path argument\n");
++		return 1;
++	}
++
++	fd = fanotify_init(FAN_CLASS_NOTIF|FAN_REPORT_FID, O_RDONLY);
++	if (fd < 0)
++		errx(1, "fanotify_init");
++
++	if (fanotify_mark(fd, FAN_MARK_ADD|FAN_MARK_FILESYSTEM,
++			  FAN_FS_ERROR, AT_FDCWD, argv[1])) {
++		errx(1, "fanotify_mark");
++	}
++
++	while (1) {
++		int n = read(fd, buffer, BUFSIZ);
++
++		if (n < 0)
++			errx(1, "read");
++
++		handle_notifications(buffer, n);
++	}
++
++	return 0;
++}
+diff --git a/scripts/gcc-x86_32-has-stack-protector.sh b/scripts/gcc-x86_32-has-stack-protector.sh
+index 825c75c5b71505..9459ca4f0f11fb 100755
+--- a/scripts/gcc-x86_32-has-stack-protector.sh
++++ b/scripts/gcc-x86_32-has-stack-protector.sh
+@@ -5,4 +5,4 @@
+ # -mstack-protector-guard-reg, added by
+ # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81708
+ 
+-echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -m32 -O0 -fstack-protector -mstack-protector-guard-reg=fs -mstack-protector-guard-symbol=__stack_chk_guard - -o - 2> /dev/null | grep -q "%fs"
++echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -m32 -O0 -fstack-protector -mstack-protector-guard-reg=fs -mstack-protector-guard-symbol=__stack_chk_guard - -o - 2> /dev/null | grep -q "%fs"
+diff --git a/scripts/gcc-x86_64-has-stack-protector.sh b/scripts/gcc-x86_64-has-stack-protector.sh
+index 75e4e22b986adc..f680bb01aeeb30 100755
+--- a/scripts/gcc-x86_64-has-stack-protector.sh
++++ b/scripts/gcc-x86_64-has-stack-protector.sh
+@@ -1,4 +1,4 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+ 
+-echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -m64 -O0 -mcmodel=kernel -fno-PIE -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
++echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -m64 -O0 -mcmodel=kernel -fno-PIE -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
+diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
+index 052f1b920e43fe..37aa1650c74eb2 100644
+--- a/security/apparmor/lsm.c
++++ b/security/apparmor/lsm.c
+@@ -1048,6 +1048,13 @@ static int apparmor_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ 	if (!skb->secmark)
+ 		return 0;
+ 
++	/*
++	 * If reach here before socket_post_create hook is called, in which
++	 * case label is null, drop the packet.
++	 */
++	if (!ctx->label)
++		return -EACCES;
++
+ 	return apparmor_secmark_check(ctx->label, OP_RECVMSG, AA_MAY_RECEIVE,
+ 				      skb->secmark, sk);
+ }
+diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
+index fcf22577f606c8..e59bdb750ef00c 100644
+--- a/security/apparmor/policy.c
++++ b/security/apparmor/policy.c
+@@ -187,7 +187,7 @@ static void aa_free_data(void *ptr, void *arg)
+ {
+ 	struct aa_data *data = ptr;
+ 
+-	kfree_sensitive(data->data);
++	kvfree_sensitive(data->data, data->size);
+ 	kfree_sensitive(data->key);
+ 	kfree_sensitive(data);
+ }
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 6c2a536173b5b9..93fcafdaa5489d 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -915,6 +915,7 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
+ 
+ 			if (rhashtable_insert_fast(profile->data, &data->head,
+ 						   profile->data->p)) {
++				kvfree_sensitive(data->data, data->size);
+ 				kfree_sensitive(data->key);
+ 				kfree_sensitive(data);
+ 				info = "failed to insert data to table";
+diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
+index e3ffaf5ad63940..c283474ecea56c 100644
+--- a/security/keys/keyctl.c
++++ b/security/keys/keyctl.c
+@@ -1694,7 +1694,7 @@ long keyctl_session_to_parent(void)
+ 		goto unlock;
+ 
+ 	/* cancel an already pending keyring replacement */
+-	oldwork = task_work_cancel(parent, key_change_session_keyring);
++	oldwork = task_work_cancel_func(parent, key_change_session_keyring);
+ 
+ 	/* the replacement session keyring is applied just prior to userspace
+ 	 * restarting */
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 0d1c6c4c1ee62f..0ffab5541de816 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1963,6 +1963,8 @@ static int hdmi_add_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
+ }
+ 
+ static const struct snd_pci_quirk force_connect_list[] = {
++	SND_PCI_QUIRK(0x103c, 0x83e2, "HP EliteDesk 800 G4", 1),
++	SND_PCI_QUIRK(0x103c, 0x83ef, "HP MP9 G4 Retail System AMS", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x870f, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x871a, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x8711, "HP", 1),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 28dbe8cbbffd84..e8e9cfb4a93577 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8893,6 +8893,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
+ 	SND_PCI_QUIRK(0x1025, 0x080d, "Acer Aspire V5-122P", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x0840, "Acer Aspire E1", ALC269VB_FIXUP_ASPIRE_E1_COEF),
++	SND_PCI_QUIRK(0x1025, 0x100c, "Acer Aspire E5-574G", ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x1025, 0x101c, "Acer Veriton N2510G", ALC269_FIXUP_LIFEBOOK),
+ 	SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC),
+diff --git a/sound/soc/codecs/max98088.c b/sound/soc/codecs/max98088.c
+index f8e49e45ce33f1..a71fbfddc29a7c 100644
+--- a/sound/soc/codecs/max98088.c
++++ b/sound/soc/codecs/max98088.c
+@@ -1319,6 +1319,7 @@ static int max98088_set_bias_level(struct snd_soc_component *component,
+                                   enum snd_soc_bias_level level)
+ {
+ 	struct max98088_priv *max98088 = snd_soc_component_get_drvdata(component);
++	int ret;
+ 
+ 	switch (level) {
+ 	case SND_SOC_BIAS_ON:
+@@ -1334,10 +1335,13 @@ static int max98088_set_bias_level(struct snd_soc_component *component,
+ 		 */
+ 		if (!IS_ERR(max98088->mclk)) {
+ 			if (snd_soc_component_get_bias_level(component) ==
+-			    SND_SOC_BIAS_ON)
++			    SND_SOC_BIAS_ON) {
+ 				clk_disable_unprepare(max98088->mclk);
+-			else
+-				clk_prepare_enable(max98088->mclk);
++			} else {
++				ret = clk_prepare_enable(max98088->mclk);
++				if (ret)
++					return ret;
++			}
+ 		}
+ 		break;
+ 
+diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c
+index 9f66f6dc2c67f3..77012979e13199 100644
+--- a/sound/soc/codecs/wsa881x.c
++++ b/sound/soc/codecs/wsa881x.c
+@@ -1120,7 +1120,7 @@ static int wsa881x_probe(struct sdw_slave *pdev,
+ 	wsa881x->sconfig.frame_rate = 48000;
+ 	wsa881x->sconfig.direction = SDW_DATA_DIR_RX;
+ 	wsa881x->sconfig.type = SDW_STREAM_PDM;
+-	pdev->prop.sink_ports = GENMASK(WSA881X_MAX_SWR_PORTS, 0);
++	pdev->prop.sink_ports = GENMASK(WSA881X_MAX_SWR_PORTS - 1, 0);
+ 	pdev->prop.sink_dpn_prop = wsa_sink_dpn_prop;
+ 	pdev->prop.scp_int1_mask = SDW_SCP_INT1_BUS_CLASH | SDW_SCP_INT1_PARITY;
+ 	gpiod_direction_output(wsa881x->sd_n, 1);
+diff --git a/sound/soc/intel/common/soc-intel-quirks.h b/sound/soc/intel/common/soc-intel-quirks.h
+index de4e550c5b34dc..42bd51456b945d 100644
+--- a/sound/soc/intel/common/soc-intel-quirks.h
++++ b/sound/soc/intel/common/soc-intel-quirks.h
+@@ -11,7 +11,7 @@
+ 
+ #include <linux/platform_data/x86/soc.h>
+ 
+-#if IS_ENABLED(CONFIG_X86)
++#if IS_REACHABLE(CONFIG_IOSF_MBI)
+ 
+ #include <linux/dmi.h>
+ #include <asm/iosf_mbi.h>
+diff --git a/sound/soc/pxa/spitz.c b/sound/soc/pxa/spitz.c
+index 7c1384a869ca48..44303b6eb228bf 100644
+--- a/sound/soc/pxa/spitz.c
++++ b/sound/soc/pxa/spitz.c
+@@ -14,13 +14,12 @@
+ #include <linux/timer.h>
+ #include <linux/interrupt.h>
+ #include <linux/platform_device.h>
+-#include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <sound/core.h>
+ #include <sound/pcm.h>
+ #include <sound/soc.h>
+ 
+ #include <asm/mach-types.h>
+-#include <mach/spitz.h>
+ #include "../codecs/wm8750.h"
+ #include "pxa2xx-i2s.h"
+ 
+@@ -37,7 +36,7 @@
+ 
+ static int spitz_jack_func;
+ static int spitz_spk_func;
+-static int spitz_mic_gpio;
++static struct gpio_desc *gpiod_mic, *gpiod_mute_l, *gpiod_mute_r;
+ 
+ static void spitz_ext_control(struct snd_soc_dapm_context *dapm)
+ {
+@@ -56,8 +55,8 @@ static void spitz_ext_control(struct snd_soc_dapm_context *dapm)
+ 		snd_soc_dapm_disable_pin_unlocked(dapm, "Mic Jack");
+ 		snd_soc_dapm_disable_pin_unlocked(dapm, "Line Jack");
+ 		snd_soc_dapm_enable_pin_unlocked(dapm, "Headphone Jack");
+-		gpio_set_value(SPITZ_GPIO_MUTE_L, 1);
+-		gpio_set_value(SPITZ_GPIO_MUTE_R, 1);
++		gpiod_set_value(gpiod_mute_l, 1);
++		gpiod_set_value(gpiod_mute_r, 1);
+ 		break;
+ 	case SPITZ_MIC:
+ 		/* enable mic jack and bias, mute hp */
+@@ -65,8 +64,8 @@ static void spitz_ext_control(struct snd_soc_dapm_context *dapm)
+ 		snd_soc_dapm_disable_pin_unlocked(dapm, "Headset Jack");
+ 		snd_soc_dapm_disable_pin_unlocked(dapm, "Line Jack");
+ 		snd_soc_dapm_enable_pin_unlocked(dapm, "Mic Jack");
+-		gpio_set_value(SPITZ_GPIO_MUTE_L, 0);
+-		gpio_set_value(SPITZ_GPIO_MUTE_R, 0);
++		gpiod_set_value(gpiod_mute_l, 0);
++		gpiod_set_value(gpiod_mute_r, 0);
+ 		break;
+ 	case SPITZ_LINE:
+ 		/* enable line jack, disable mic bias and mute hp */
+@@ -74,8 +73,8 @@ static void spitz_ext_control(struct snd_soc_dapm_context *dapm)
+ 		snd_soc_dapm_disable_pin_unlocked(dapm, "Headset Jack");
+ 		snd_soc_dapm_disable_pin_unlocked(dapm, "Mic Jack");
+ 		snd_soc_dapm_enable_pin_unlocked(dapm, "Line Jack");
+-		gpio_set_value(SPITZ_GPIO_MUTE_L, 0);
+-		gpio_set_value(SPITZ_GPIO_MUTE_R, 0);
++		gpiod_set_value(gpiod_mute_l, 0);
++		gpiod_set_value(gpiod_mute_r, 0);
+ 		break;
+ 	case SPITZ_HEADSET:
+ 		/* enable and unmute headset jack enable mic bias, mute L hp */
+@@ -83,8 +82,8 @@ static void spitz_ext_control(struct snd_soc_dapm_context *dapm)
+ 		snd_soc_dapm_enable_pin_unlocked(dapm, "Mic Jack");
+ 		snd_soc_dapm_disable_pin_unlocked(dapm, "Line Jack");
+ 		snd_soc_dapm_enable_pin_unlocked(dapm, "Headset Jack");
+-		gpio_set_value(SPITZ_GPIO_MUTE_L, 0);
+-		gpio_set_value(SPITZ_GPIO_MUTE_R, 1);
++		gpiod_set_value(gpiod_mute_l, 0);
++		gpiod_set_value(gpiod_mute_r, 1);
+ 		break;
+ 	case SPITZ_HP_OFF:
+ 
+@@ -93,8 +92,8 @@ static void spitz_ext_control(struct snd_soc_dapm_context *dapm)
+ 		snd_soc_dapm_disable_pin_unlocked(dapm, "Headset Jack");
+ 		snd_soc_dapm_disable_pin_unlocked(dapm, "Mic Jack");
+ 		snd_soc_dapm_disable_pin_unlocked(dapm, "Line Jack");
+-		gpio_set_value(SPITZ_GPIO_MUTE_L, 0);
+-		gpio_set_value(SPITZ_GPIO_MUTE_R, 0);
++		gpiod_set_value(gpiod_mute_l, 0);
++		gpiod_set_value(gpiod_mute_r, 0);
+ 		break;
+ 	}
+ 
+@@ -199,7 +198,7 @@ static int spitz_set_spk(struct snd_kcontrol *kcontrol,
+ static int spitz_mic_bias(struct snd_soc_dapm_widget *w,
+ 	struct snd_kcontrol *k, int event)
+ {
+-	gpio_set_value_cansleep(spitz_mic_gpio, SND_SOC_DAPM_EVENT_ON(event));
++	gpiod_set_value_cansleep(gpiod_mic, SND_SOC_DAPM_EVENT_ON(event));
+ 	return 0;
+ }
+ 
+@@ -287,39 +286,28 @@ static int spitz_probe(struct platform_device *pdev)
+ 	struct snd_soc_card *card = &snd_soc_spitz;
+ 	int ret;
+ 
+-	if (machine_is_akita())
+-		spitz_mic_gpio = AKITA_GPIO_MIC_BIAS;
+-	else
+-		spitz_mic_gpio = SPITZ_GPIO_MIC_BIAS;
+-
+-	ret = gpio_request(spitz_mic_gpio, "MIC GPIO");
+-	if (ret)
+-		goto err1;
+-
+-	ret = gpio_direction_output(spitz_mic_gpio, 0);
+-	if (ret)
+-		goto err2;
++	gpiod_mic = devm_gpiod_get(&pdev->dev, "mic", GPIOD_OUT_LOW);
++	if (IS_ERR(gpiod_mic))
++		return PTR_ERR(gpiod_mic);
++	gpiod_mute_l = devm_gpiod_get(&pdev->dev, "mute-l", GPIOD_OUT_LOW);
++	if (IS_ERR(gpiod_mute_l))
++		return PTR_ERR(gpiod_mute_l);
++	gpiod_mute_r = devm_gpiod_get(&pdev->dev, "mute-r", GPIOD_OUT_LOW);
++	if (IS_ERR(gpiod_mute_r))
++		return PTR_ERR(gpiod_mute_r);
+ 
+ 	card->dev = &pdev->dev;
+ 
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+-	if (ret) {
++	if (ret)
+ 		dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n",
+ 			ret);
+-		goto err2;
+-	}
+-
+-	return 0;
+ 
+-err2:
+-	gpio_free(spitz_mic_gpio);
+-err1:
+ 	return ret;
+ }
+ 
+ static int spitz_remove(struct platform_device *pdev)
+ {
+-	gpio_free(spitz_mic_gpio);
+ 	return 0;
+ }
+ 
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index f4437015d43a75..9df49a880b750d 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -286,12 +286,14 @@ static void line6_data_received(struct urb *urb)
+ {
+ 	struct usb_line6 *line6 = (struct usb_line6 *)urb->context;
+ 	struct midi_buffer *mb = &line6->line6midi->midibuf_in;
++	unsigned long flags;
+ 	int done;
+ 
+ 	if (urb->status == -ESHUTDOWN)
+ 		return;
+ 
+ 	if (line6->properties->capabilities & LINE6_CAP_CONTROL_MIDI) {
++		spin_lock_irqsave(&line6->line6midi->lock, flags);
+ 		done =
+ 			line6_midibuf_write(mb, urb->transfer_buffer, urb->actual_length);
+ 
+@@ -300,12 +302,15 @@ static void line6_data_received(struct urb *urb)
+ 			dev_dbg(line6->ifcdev, "%d %d buffer overflow - message skipped\n",
+ 				done, urb->actual_length);
+ 		}
++		spin_unlock_irqrestore(&line6->line6midi->lock, flags);
+ 
+ 		for (;;) {
++			spin_lock_irqsave(&line6->line6midi->lock, flags);
+ 			done =
+ 				line6_midibuf_read(mb, line6->buffer_message,
+ 						   LINE6_MIDI_MESSAGE_MAXLEN,
+ 						   LINE6_MIDIBUF_READ_RX);
++			spin_unlock_irqrestore(&line6->line6midi->lock, flags);
+ 
+ 			if (done <= 0)
+ 				break;
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 97fe2fadcafb3e..8f2fb2ac7af67d 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2573,6 +2573,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	}
+ },
+ 
++/* Stanton ScratchAmp */
++{ USB_DEVICE(0x103d, 0x0100) },
++{ USB_DEVICE(0x103d, 0x0101) },
++
+ /* Novation EMS devices */
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x1235, 0x0001),
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index f51e901a9689ec..0c77f244e5d668 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -245,8 +245,8 @@ static struct snd_pcm_chmap_elem *convert_chmap(int channels, unsigned int bits,
+ 		SNDRV_CHMAP_FR,		/* right front */
+ 		SNDRV_CHMAP_FC,		/* center front */
+ 		SNDRV_CHMAP_LFE,	/* LFE */
+-		SNDRV_CHMAP_SL,		/* left surround */
+-		SNDRV_CHMAP_SR,		/* right surround */
++		SNDRV_CHMAP_RL,		/* left surround */
++		SNDRV_CHMAP_RR,		/* right surround */
+ 		SNDRV_CHMAP_FLC,	/* left of center */
+ 		SNDRV_CHMAP_FRC,	/* right of center */
+ 		SNDRV_CHMAP_RC,		/* surround */
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 61aa2c47fbd5e9..2342aec3c5a3e6 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -1426,10 +1426,12 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
+ 			 * Clang for BPF target generates func_proto with no
+ 			 * args as a func_proto with a single void arg (e.g.,
+ 			 * `int (*f)(void)` vs just `int (*f)()`). We are
+-			 * going to pretend there are no args for such case.
++			 * going to emit valid empty args (void) syntax for
++			 * such case. Similarly and conveniently, valid
++			 * no args case can be special-cased here as well.
+ 			 */
+-			if (vlen == 1 && p->type == 0) {
+-				btf_dump_printf(d, ")");
++			if (vlen == 0 || (vlen == 1 && p->type == 0)) {
++				btf_dump_printf(d, "void)");
+ 				return;
+ 			}
+ 
+diff --git a/tools/memory-model/lock.cat b/tools/memory-model/lock.cat
+index 6b52f365d73ac4..9f3b5b38221bf8 100644
+--- a/tools/memory-model/lock.cat
++++ b/tools/memory-model/lock.cat
+@@ -102,19 +102,19 @@ let rf-lf = rfe-lf | rfi-lf
+  * within one of the lock's critical sections returns False.
+  *)
+ 
+-(* rfi for RU events: an RU may read from the last po-previous UL *)
+-let rfi-ru = ([UL] ; po-loc ; [RU]) \ ([UL] ; po-loc ; [LKW] ; po-loc)
+-
+-(* rfe for RU events: an RU may read from an external UL or the initial write *)
+-let all-possible-rfe-ru =
+-	let possible-rfe-ru r =
++(*
++ * rf for RU events: an RU may read from an external UL or the initial write,
++ * or from the last po-previous UL
++ *)
++let all-possible-rf-ru =
++	let possible-rf-ru r =
+ 		let pair-to-relation p = p ++ 0
+-		in map pair-to-relation (((UL | IW) * {r}) & loc & ext)
+-	in map possible-rfe-ru RU
++		in map pair-to-relation ((((UL | IW) * {r}) & loc & ext) |
++			(((UL * {r}) & po-loc) \ ([UL] ; po-loc ; [LKW] ; po-loc)))
++	in map possible-rf-ru RU
+ 
+ (* Generate all rf relations for RU events *)
+-with rfe-ru from cross(all-possible-rfe-ru)
+-let rf-ru = rfe-ru | rfi-ru
++with rf-ru from cross(all-possible-rf-ru)
+ 
+ (* Final rf relation *)
+ let rf = rf | rf-lf | rf-ru
+diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
+index 42806102010bb5..bbebb1e51b88f1 100644
+--- a/tools/perf/util/sort.c
++++ b/tools/perf/util/sort.c
+@@ -272,7 +272,7 @@ sort__sym_cmp(struct hist_entry *left, struct hist_entry *right)
+ 	 * comparing symbol address alone is not enough since it's a
+ 	 * relative address within a dso.
+ 	 */
+-	if (!hists__has(left->hists, dso) || hists__has(right->hists, dso)) {
++	if (!hists__has(left->hists, dso)) {
+ 		ret = sort__dso_cmp(left, right);
+ 		if (ret != 0)
+ 			return ret;
+diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
+index 75b72c751772b2..0b6349070824b7 100644
+--- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
++++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
+@@ -155,7 +155,8 @@ static void test_send_signal_tracepoint(bool signal_thread)
+ static void test_send_signal_perf(bool signal_thread)
+ {
+ 	struct perf_event_attr attr = {
+-		.sample_period = 1,
++		.freq = 1,
++		.sample_freq = 1000,
+ 		.type = PERF_TYPE_SOFTWARE,
+ 		.config = PERF_COUNT_SW_CPU_CLOCK,
+ 	};
+diff --git a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+index b4c9f4a96ae4d3..95a1d3ee55a782 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
++++ b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+@@ -964,7 +964,7 @@ static void drop_on_reuseport(const struct test *t)
+ 
+ 	err = update_lookup_map(t->sock_map, SERVER_A, server1);
+ 	if (err)
+-		goto detach;
++		goto close_srv1;
+ 
+ 	/* second server on destination address we should never reach */
+ 	server2 = make_server(t->sotype, t->connect_to.ip, t->connect_to.port,
+diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_multidim.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_multidim.c
+index ba97165bdb2822..a657651eba523e 100644
+--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_multidim.c
++++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_multidim.c
+@@ -14,9 +14,9 @@ typedef int *ptr_arr_t[6];
+ 
+ typedef int *ptr_multiarr_t[7][8][9][10];
+ 
+-typedef int * (*fn_ptr_arr_t[11])();
++typedef int * (*fn_ptr_arr_t[11])(void);
+ 
+-typedef int * (*fn_ptr_multiarr_t[12][13])();
++typedef int * (*fn_ptr_multiarr_t[12][13])(void);
+ 
+ struct root_struct {
+ 	arr_t _1;
+diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
+index fe43556e1a6111..956b24ce814564 100644
+--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
++++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
+@@ -67,7 +67,7 @@ typedef void (*printf_fn_t)(const char *, ...);
+  *   `int -> char *` function and returns pointer to a char. Equivalent:
+  *   typedef char * (*fn_input_t)(int);
+  *   typedef char * (*fn_output_outer_t)(fn_input_t);
+- *   typedef const fn_output_outer_t (* fn_output_inner_t)();
++ *   typedef const fn_output_outer_t (* fn_output_inner_t)(void);
+  *   typedef const fn_output_inner_t fn_ptr_arr2_t[5];
+  */
+ /* ----- START-EXPECTED-OUTPUT ----- */
+@@ -94,7 +94,7 @@ typedef void (* (*signal_t)(int, void (*)(int)))(int);
+ 
+ typedef char * (*fn_ptr_arr1_t[10])(int **);
+ 
+-typedef char * (* (* const fn_ptr_arr2_t[5])())(char * (*)(int));
++typedef char * (* (* const fn_ptr_arr2_t[5])(void))(char * (*)(int));
+ 
+ struct struct_w_typedefs {
+ 	int_t a;
+diff --git a/tools/testing/selftests/bpf/test_sockmap.c b/tools/testing/selftests/bpf/test_sockmap.c
+index 85d57633c8b650..61be5993416e93 100644
+--- a/tools/testing/selftests/bpf/test_sockmap.c
++++ b/tools/testing/selftests/bpf/test_sockmap.c
+@@ -65,7 +65,7 @@ int passed;
+ int failed;
+ int map_fd[9];
+ struct bpf_map *maps[9];
+-int prog_fd[11];
++int prog_fd[9];
+ 
+ int txmsg_pass;
+ int txmsg_redir;
+@@ -663,7 +663,8 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ 				}
+ 			}
+ 
+-			s->bytes_recvd += recv;
++			if (recv > 0)
++				s->bytes_recvd += recv;
+ 
+ 			if (data) {
+ 				int chunk_sz = opt->sendpage ?
+@@ -1708,8 +1709,6 @@ int prog_attach_type[] = {
+ 	BPF_SK_MSG_VERDICT,
+ 	BPF_SK_MSG_VERDICT,
+ 	BPF_SK_MSG_VERDICT,
+-	BPF_SK_MSG_VERDICT,
+-	BPF_SK_MSG_VERDICT,
+ };
+ 
+ int prog_type[] = {
+@@ -1722,8 +1721,6 @@ int prog_type[] = {
+ 	BPF_PROG_TYPE_SK_MSG,
+ 	BPF_PROG_TYPE_SK_MSG,
+ 	BPF_PROG_TYPE_SK_MSG,
+-	BPF_PROG_TYPE_SK_MSG,
+-	BPF_PROG_TYPE_SK_MSG,
+ };
+ 
+ static int populate_progs(char *bpf_file)
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh
+index 616d3581419ca0..21d0f419cc6d77 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh
+@@ -11,7 +11,7 @@ ALL_TESTS="single_mask_test identical_filters_test two_masks_test \
+ 	multiple_masks_test ctcam_edge_cases_test delta_simple_test \
+ 	delta_two_masks_one_key_test delta_simple_rehash_test \
+ 	bloom_simple_test bloom_complex_test bloom_delta_test \
+-	max_erp_entries_test max_group_size_test"
++	max_erp_entries_test max_group_size_test collision_test"
+ NUM_NETIFS=2
+ source $lib_dir/lib.sh
+ source $lib_dir/tc_common.sh
+@@ -457,7 +457,7 @@ delta_two_masks_one_key_test()
+ {
+ 	# If 2 keys are the same and only differ in mask in a way that
+ 	# they belong under the same ERP (second is delta of the first),
+-	# there should be no C-TCAM spill.
++	# there should be C-TCAM spill.
+ 
+ 	RET=0
+ 
+@@ -474,8 +474,8 @@ delta_two_masks_one_key_test()
+ 	tp_record "mlxsw:*" "tc filter add dev $h2 ingress protocol ip \
+ 		   pref 2 handle 102 flower $tcflags dst_ip 192.0.2.2 \
+ 		   action drop"
+-	tp_check_hits "mlxsw:mlxsw_sp_acl_atcam_entry_add_ctcam_spill" 0
+-	check_err $? "incorrect C-TCAM spill while inserting the second rule"
++	tp_check_hits "mlxsw:mlxsw_sp_acl_atcam_entry_add_ctcam_spill" 1
++	check_err $? "C-TCAM spill did not happen while inserting the second rule"
+ 
+ 	$MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \
+ 		-t ip -q
+@@ -1087,6 +1087,53 @@ max_group_size_test()
+ 	log_test "max ACL group size test ($tcflags). max size $max_size"
+ }
+ 
++collision_test()
++{
++	# Filters cannot share an eRP if in the common unmasked part (i.e.,
++	# without the delta bits) they have the same values. If the driver does
++	# not prevent such configuration (by spilling into the C-TCAM), then
++	# multiple entries will be present in the device with the same key,
++	# leading to collisions and a reduced scale.
++	#
++	# Create such a scenario and make sure all the filters are successfully
++	# added.
++
++	RET=0
++
++	local ret
++
++	if [[ "$tcflags" != "skip_sw" ]]; then
++		return 0;
++	fi
++
++	# Add a single dst_ip/24 filter and multiple dst_ip/32 filters that all
++	# have the same values in the common unmasked part (dst_ip/24).
++
++	tc filter add dev $h2 ingress pref 1 proto ipv4 handle 101 \
++		flower $tcflags dst_ip 198.51.100.0/24 \
++		action drop
++
++	for i in {0..255}; do
++		tc filter add dev $h2 ingress pref 2 proto ipv4 \
++			handle $((102 + i)) \
++			flower $tcflags dst_ip 198.51.100.${i}/32 \
++			action drop
++		ret=$?
++		[[ $ret -ne 0 ]] && break
++	done
++
++	check_err $ret "failed to add all the filters"
++
++	for i in {255..0}; do
++		tc filter del dev $h2 ingress pref 2 proto ipv4 \
++			handle $((102 + i)) flower
++	done
++
++	tc filter del dev $h2 ingress pref 1 proto ipv4 handle 101 flower
++
++	log_test "collision test ($tcflags)"
++}
++
+ setup_prepare()
+ {
+ 	h1=${NETIFS[p1]}
+diff --git a/tools/testing/selftests/net/forwarding/devlink_lib.sh b/tools/testing/selftests/net/forwarding/devlink_lib.sh
+index 9c12c4fd3afc94..6edfd207e32407 100644
+--- a/tools/testing/selftests/net/forwarding/devlink_lib.sh
++++ b/tools/testing/selftests/net/forwarding/devlink_lib.sh
+@@ -113,6 +113,8 @@ devlink_reload()
+ 	still_pending=$(devlink resource show "$DEVLINK_DEV" | \
+ 			grep -c "size_new")
+ 	check_err $still_pending "Failed reload - There are still unset sizes"
++
++	udevadm settle
+ }
+ 
+ declare -A DEVLINK_ORIG
+diff --git a/tools/testing/selftests/sigaltstack/current_stack_pointer.h b/tools/testing/selftests/sigaltstack/current_stack_pointer.h
+index ea9bdf3a90b164..09da8f1011ce4c 100644
+--- a/tools/testing/selftests/sigaltstack/current_stack_pointer.h
++++ b/tools/testing/selftests/sigaltstack/current_stack_pointer.h
+@@ -8,7 +8,7 @@ register unsigned long sp asm("sp");
+ register unsigned long sp asm("esp");
+ #elif __loongarch64
+ register unsigned long sp asm("$sp");
+-#elif __ppc__
++#elif __powerpc__
+ register unsigned long sp asm("r1");
+ #elif __s390x__
+ register unsigned long sp asm("%15");


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-09-04 13:53 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-09-04 13:53 UTC (permalink / raw
  To: gentoo-commits

commit:     5ff879e9a2b2eddd9a0e9f5e29a4204b05b9da6b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep  4 13:53:34 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep  4 13:53:34 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5ff879e9

Linux patch 5.10.225

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1224_linux-5.10.225.patch | 5257 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5261 insertions(+)

diff --git a/0000_README b/0000_README
index 28ec5762..ad8e0d5b 100644
--- a/0000_README
+++ b/0000_README
@@ -939,6 +939,10 @@ Patch:  1223_linux-5.10.224.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.224
 
+Patch:  1224_linux-5.10.225.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.225
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1224_linux-5.10.225.patch b/1224_linux-5.10.225.patch
new file mode 100644
index 00000000..29d98bc8
--- /dev/null
+++ b/1224_linux-5.10.225.patch
@@ -0,0 +1,5257 @@
+diff --git a/Makefile b/Makefile
+index 38351794f6c93..30918576f9de4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 224
++SUBLEVEL = 225
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/kernel/acpi_numa.c b/arch/arm64/kernel/acpi_numa.c
+index 7ff8000454346..048b75cadd2fd 100644
+--- a/arch/arm64/kernel/acpi_numa.c
++++ b/arch/arm64/kernel/acpi_numa.c
+@@ -27,7 +27,7 @@
+ 
+ #include <asm/numa.h>
+ 
+-static int acpi_early_node_map[NR_CPUS] __initdata = { NUMA_NO_NODE };
++static int acpi_early_node_map[NR_CPUS] __initdata = { [0 ... NR_CPUS - 1] = NUMA_NO_NODE };
+ 
+ int __init acpi_numa_get_nid(unsigned int cpu)
+ {
+diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
+index 0c66a1d408fd7..e831d3dfd50d7 100644
+--- a/arch/arm64/kvm/hyp/entry.S
++++ b/arch/arm64/kvm/hyp/entry.S
+@@ -85,8 +85,10 @@ SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
+ 
+ 	// If the hyp context is loaded, go straight to hyp_panic
+ 	get_loaded_vcpu x0, x1
+-	cbz	x0, hyp_panic
++	cbnz	x0, 1f
++	b	hyp_panic
+ 
++1:
+ 	// The hyp context is saved so make sure it is restored to allow
+ 	// hyp_panic to run at hyp and, subsequently, panic to run in the host.
+ 	// This makes use of __guest_exit to avoid duplication but sets the
+@@ -94,7 +96,7 @@ SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
+ 	// current state is saved to the guest context but it will only be
+ 	// accurate if the guest had been completely restored.
+ 	adr_this_cpu x0, kvm_hyp_ctxt, x1
+-	adr	x1, hyp_panic
++	adr_l	x1, hyp_panic
+ 	str	x1, [x0, #CPU_XREG_OFFSET(30)]
+ 
+ 	get_vcpu_ptr	x1, x0
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index 835fa036b2d54..b4bdabbfafdd2 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -30,6 +30,7 @@
+ #include <trace/events/kvm.h>
+ 
+ #include "sys_regs.h"
++#include "vgic/vgic.h"
+ 
+ #include "trace.h"
+ 
+@@ -275,6 +276,11 @@ static bool access_gic_sgi(struct kvm_vcpu *vcpu,
+ {
+ 	bool g1;
+ 
++	if (!kvm_has_gicv3(vcpu->kvm)) {
++		kvm_inject_undefined(vcpu);
++		return false;
++	}
++
+ 	if (!p->is_write)
+ 		return read_from_write_only(vcpu, p, r);
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h
+index 64fcd75111108..3d7fa7ef353ec 100644
+--- a/arch/arm64/kvm/vgic/vgic.h
++++ b/arch/arm64/kvm/vgic/vgic.h
+@@ -318,4 +318,11 @@ int vgic_v4_init(struct kvm *kvm);
+ void vgic_v4_teardown(struct kvm *kvm);
+ void vgic_v4_configure_vsgis(struct kvm *kvm);
+ 
++static inline bool kvm_has_gicv3(struct kvm *kvm)
++{
++	return (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif) &&
++		irqchip_in_kernel(kvm) &&
++		kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3);
++}
++
+ #endif
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index f8d1933bfe823..24d2ab277d78e 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1769,12 +1769,16 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ 		c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM |
+ 			MIPS_ASE_LOONGSON_EXT | MIPS_ASE_LOONGSON_EXT2);
+ 		c->ases &= ~MIPS_ASE_VZ; /* VZ of Loongson-3A2000/3000 is incomplete */
++		change_c0_config6(LOONGSON_CONF6_EXTIMER | LOONGSON_CONF6_INTIMER,
++				  LOONGSON_CONF6_INTIMER);
+ 		break;
+ 	case PRID_IMP_LOONGSON_64G:
+ 		__cpu_name[cpu] = "ICT Loongson-3";
+ 		set_elf_platform(cpu, "loongson3a");
+ 		set_isa(c, MIPS_CPU_ISA_M64R2);
+ 		decode_cpucfg(c);
++		change_c0_config6(LOONGSON_CONF6_EXTIMER | LOONGSON_CONF6_INTIMER,
++				  LOONGSON_CONF6_INTIMER);
+ 		break;
+ 	default:
+ 		panic("Unknown Loongson Processor ID!");
+diff --git a/arch/openrisc/kernel/setup.c b/arch/openrisc/kernel/setup.c
+index c6f9e7b9f7cb2..8c65810eeca24 100644
+--- a/arch/openrisc/kernel/setup.c
++++ b/arch/openrisc/kernel/setup.c
+@@ -284,6 +284,9 @@ void calibrate_delay(void)
+ 
+ void __init setup_arch(char **cmdline_p)
+ {
++	/* setup memblock allocator */
++	setup_memory();
++
+ 	unflatten_and_copy_device_tree();
+ 
+ 	setup_cpuinfo();
+@@ -310,9 +313,6 @@ void __init setup_arch(char **cmdline_p)
+ 	}
+ #endif
+ 
+-	/* setup memblock allocator */
+-	setup_memory();
+-
+ 	/* paging_init() sets up the MMU and marks all pages as reserved */
+ 	paging_init();
+ 
+diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c
+index 2762e8540672e..5e3b9be45de92 100644
+--- a/arch/parisc/kernel/irq.c
++++ b/arch/parisc/kernel/irq.c
+@@ -520,7 +520,7 @@ void do_cpu_irq_mask(struct pt_regs *regs)
+ 
+ 	old_regs = set_irq_regs(regs);
+ 	local_irq_disable();
+-	irq_enter();
++	irq_enter_rcu();
+ 
+ 	eirr_val = mfctl(23) & cpu_eiem & per_cpu(local_ack_eiem, cpu);
+ 	if (!eirr_val)
+@@ -555,7 +555,7 @@ void do_cpu_irq_mask(struct pt_regs *regs)
+ #endif /* CONFIG_IRQSTACKS */
+ 
+  out:
+-	irq_exit();
++	irq_exit_rcu();
+ 	set_irq_regs(old_regs);
+ 	return;
+ 
+diff --git a/arch/powerpc/boot/simple_alloc.c b/arch/powerpc/boot/simple_alloc.c
+index 65ec135d01579..bc99f75b8582d 100644
+--- a/arch/powerpc/boot/simple_alloc.c
++++ b/arch/powerpc/boot/simple_alloc.c
+@@ -114,8 +114,11 @@ static void *simple_realloc(void *ptr, unsigned long size)
+ 		return ptr;
+ 
+ 	new = simple_malloc(size);
+-	memcpy(new, ptr, p->size);
+-	simple_free(ptr);
++	if (new) {
++		memcpy(new, ptr, p->size);
++		simple_free(ptr);
++	}
++
+ 	return new;
+ }
+ 
+diff --git a/arch/powerpc/sysdev/xics/icp-native.c b/arch/powerpc/sysdev/xics/icp-native.c
+index 7d13d2ef5a905..66de291b27d08 100644
+--- a/arch/powerpc/sysdev/xics/icp-native.c
++++ b/arch/powerpc/sysdev/xics/icp-native.c
+@@ -235,6 +235,8 @@ static int __init icp_native_map_one_cpu(int hw_id, unsigned long addr,
+ 	rname = kasprintf(GFP_KERNEL, "CPU %d [0x%x] Interrupt Presentation",
+ 			  cpu, hw_id);
+ 
++	if (!rname)
++		return -ENOMEM;
+ 	if (!request_mem_region(addr, size, rname)) {
+ 		pr_warn("icp_native: Could not reserve ICP MMIO for CPU %d, interrupt server #0x%x\n",
+ 			cpu, hw_id);
+diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
+index 12c5f006c1364..8fd441150a4e6 100644
+--- a/arch/s390/include/asm/uv.h
++++ b/arch/s390/include/asm/uv.h
+@@ -312,7 +312,10 @@ static inline int share(unsigned long addr, u16 cmd)
+ 
+ 	if (!uv_call(0, (u64)&uvcb))
+ 		return 0;
+-	return -EINVAL;
++	pr_err("%s UVC failed (rc: 0x%x, rrc: 0x%x), possible hypervisor bug.\n",
++	       uvcb.header.cmd == UVC_CMD_SET_SHARED_ACCESS ? "Share" : "Unshare",
++	       uvcb.header.rc, uvcb.header.rrc);
++	panic("System security cannot be guaranteed unless the system panics now.\n");
+ }
+ 
+ /*
+diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
+index 985e1e7553336..bac1be43d36cb 100644
+--- a/arch/s390/kernel/early.c
++++ b/arch/s390/kernel/early.c
+@@ -252,15 +252,9 @@ static inline void save_vector_registers(void)
+ #endif
+ }
+ 
+-static inline void setup_control_registers(void)
++static inline void setup_low_address_protection(void)
+ {
+-	unsigned long reg;
+-
+-	__ctl_store(reg, 0, 0);
+-	reg |= CR0_LOW_ADDRESS_PROTECTION;
+-	reg |= CR0_EMERGENCY_SIGNAL_SUBMASK;
+-	reg |= CR0_EXTERNAL_CALL_SUBMASK;
+-	__ctl_load(reg, 0, 0);
++	__ctl_set_bit(0, 28);
+ }
+ 
+ static inline void setup_access_registers(void)
+@@ -313,7 +307,7 @@ void __init startup_init(void)
+ 	save_vector_registers();
+ 	setup_topology();
+ 	sclp_early_detect();
+-	setup_control_registers();
++	setup_low_address_protection();
+ 	setup_access_registers();
+ 	lockdep_on();
+ }
+diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
+index 5674792726cd9..2e0c3b0a5a58a 100644
+--- a/arch/s390/kernel/smp.c
++++ b/arch/s390/kernel/smp.c
+@@ -982,12 +982,12 @@ void __init smp_fill_possible_mask(void)
+ 
+ void __init smp_prepare_cpus(unsigned int max_cpus)
+ {
+-	/* request the 0x1201 emergency signal external interrupt */
+ 	if (register_external_irq(EXT_IRQ_EMERGENCY_SIG, do_ext_call_interrupt))
+ 		panic("Couldn't request external interrupt 0x1201");
+-	/* request the 0x1202 external call external interrupt */
++	ctl_set_bit(0, 14);
+ 	if (register_external_irq(EXT_IRQ_EXTERNAL_CALL, do_ext_call_interrupt))
+ 		panic("Couldn't request external interrupt 0x1202");
++	ctl_set_bit(0, 13);
+ }
+ 
+ void __init smp_prepare_boot_cpu(void)
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 1cba09a9f1c13..4f731981d3267 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -926,7 +926,10 @@ unsigned long arch_align_stack(unsigned long sp)
+ 
+ unsigned long arch_randomize_brk(struct mm_struct *mm)
+ {
+-	return randomize_page(mm->brk, 0x02000000);
++	if (mmap_is_ia32())
++		return randomize_page(mm->brk, SZ_32M);
++
++	return randomize_page(mm->brk, SZ_1G);
+ }
+ 
+ /*
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 3717ed6fcccc2..467fc8002c447 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5366,6 +5366,9 @@ static void ata_host_release(struct kref *kref)
+ 	for (i = 0; i < host->n_ports; i++) {
+ 		struct ata_port *ap = host->ports[i];
+ 
++		if (!ap)
++			continue;
++
+ 		kfree(ap->pmp_link);
+ 		kfree(ap->slave_link);
+ 		kfree(ap);
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index e616e33c8a209..25fd73fafb371 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -1118,8 +1118,8 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
+ 	rpp->len += skb->len;
+ 
+ 	if (stat & SAR_RSQE_EPDU) {
++		unsigned int len, truesize;
+ 		unsigned char *l1l2;
+-		unsigned int len;
+ 
+ 		l1l2 = (unsigned char *) ((unsigned long) skb->data + skb->len - 6);
+ 
+@@ -1189,14 +1189,15 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
+ 		ATM_SKB(skb)->vcc = vcc;
+ 		__net_timestamp(skb);
+ 
++		truesize = skb->truesize;
+ 		vcc->push(vcc, skb);
+ 		atomic_inc(&vcc->stats->rx);
+ 
+-		if (skb->truesize > SAR_FB_SIZE_3)
++		if (truesize > SAR_FB_SIZE_3)
+ 			add_rx_skb(card, 3, SAR_FB_SIZE_3, 1);
+-		else if (skb->truesize > SAR_FB_SIZE_2)
++		else if (truesize > SAR_FB_SIZE_2)
+ 			add_rx_skb(card, 2, SAR_FB_SIZE_2, 1);
+-		else if (skb->truesize > SAR_FB_SIZE_1)
++		else if (truesize > SAR_FB_SIZE_1)
+ 			add_rx_skb(card, 1, SAR_FB_SIZE_1, 1);
+ 		else
+ 			add_rx_skb(card, 0, SAR_FB_SIZE_0, 1);
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index 726d5c83c550e..e7d78937f7d6b 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -768,7 +768,8 @@ static int hci_uart_tty_ioctl(struct tty_struct *tty, struct file *file,
+ 		break;
+ 
+ 	case HCIUARTGETPROTO:
+-		if (test_bit(HCI_UART_PROTO_SET, &hu->flags))
++		if (test_bit(HCI_UART_PROTO_SET, &hu->flags) &&
++		    test_bit(HCI_UART_PROTO_READY, &hu->flags))
+ 			err = hu->proto->id;
+ 		else
+ 			err = -EUNATCH;
+diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
+index 7ab83fe601ede..0beafcee72673 100644
+--- a/drivers/dma/dw/core.c
++++ b/drivers/dma/dw/core.c
+@@ -16,6 +16,7 @@
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
++#include <linux/log2.h>
+ #include <linux/mm.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
+@@ -624,12 +625,10 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 	struct dw_desc		*prev;
+ 	struct dw_desc		*first;
+ 	u32			ctllo, ctlhi;
+-	u8			m_master = dwc->dws.m_master;
+-	u8			lms = DWC_LLP_LMS(m_master);
++	u8			lms = DWC_LLP_LMS(dwc->dws.m_master);
+ 	dma_addr_t		reg;
+ 	unsigned int		reg_width;
+ 	unsigned int		mem_width;
+-	unsigned int		data_width = dw->pdata->data_width[m_master];
+ 	unsigned int		i;
+ 	struct scatterlist	*sg;
+ 	size_t			total_len = 0;
+@@ -663,7 +662,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 			mem = sg_dma_address(sg);
+ 			len = sg_dma_len(sg);
+ 
+-			mem_width = __ffs(data_width | mem | len);
++			mem_width = __ffs(sconfig->src_addr_width | mem | len);
+ 
+ slave_sg_todev_fill_desc:
+ 			desc = dwc_desc_get(dwc);
+@@ -723,7 +722,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 			lli_write(desc, sar, reg);
+ 			lli_write(desc, dar, mem);
+ 			lli_write(desc, ctlhi, ctlhi);
+-			mem_width = __ffs(data_width | mem);
++			mem_width = __ffs(sconfig->dst_addr_width | mem);
+ 			lli_write(desc, ctllo, ctllo | DWC_CTLL_DST_WIDTH(mem_width));
+ 			desc->len = dlen;
+ 
+@@ -783,17 +782,93 @@ bool dw_dma_filter(struct dma_chan *chan, void *param)
+ }
+ EXPORT_SYMBOL_GPL(dw_dma_filter);
+ 
++static int dwc_verify_p_buswidth(struct dma_chan *chan)
++{
++	struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
++	struct dw_dma *dw = to_dw_dma(chan->device);
++	u32 reg_width, max_width;
++
++	if (dwc->dma_sconfig.direction == DMA_MEM_TO_DEV)
++		reg_width = dwc->dma_sconfig.dst_addr_width;
++	else if (dwc->dma_sconfig.direction == DMA_DEV_TO_MEM)
++		reg_width = dwc->dma_sconfig.src_addr_width;
++	else /* DMA_MEM_TO_MEM */
++		return 0;
++
++	max_width = dw->pdata->data_width[dwc->dws.p_master];
++
++	/* Fall-back to 1-byte transfer width if undefined */
++	if (reg_width == DMA_SLAVE_BUSWIDTH_UNDEFINED)
++		reg_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
++	else if (!is_power_of_2(reg_width) || reg_width > max_width)
++		return -EINVAL;
++	else /* bus width is valid */
++		return 0;
++
++	/* Update undefined addr width value */
++	if (dwc->dma_sconfig.direction == DMA_MEM_TO_DEV)
++		dwc->dma_sconfig.dst_addr_width = reg_width;
++	else /* DMA_DEV_TO_MEM */
++		dwc->dma_sconfig.src_addr_width = reg_width;
++
++	return 0;
++}
++
++static int dwc_verify_m_buswidth(struct dma_chan *chan)
++{
++	struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
++	struct dw_dma *dw = to_dw_dma(chan->device);
++	u32 reg_width, reg_burst, mem_width;
++
++	mem_width = dw->pdata->data_width[dwc->dws.m_master];
++
++	/*
++	 * It's possible to have a data portion locked in the DMA FIFO in case
++	 * of the channel suspension. Subsequent channel disabling will cause
++	 * that data silent loss. In order to prevent that maintain the src and
++	 * dst transfer widths coherency by means of the relation:
++	 * (CTLx.SRC_TR_WIDTH * CTLx.SRC_MSIZE >= CTLx.DST_TR_WIDTH)
++	 * Look for the details in the commit message that brings this change.
++	 *
++	 * Note the DMA configs utilized in the calculations below must have
++	 * been verified to have correct values by this method call.
++	 */
++	if (dwc->dma_sconfig.direction == DMA_MEM_TO_DEV) {
++		reg_width = dwc->dma_sconfig.dst_addr_width;
++		if (mem_width < reg_width)
++			return -EINVAL;
++
++		dwc->dma_sconfig.src_addr_width = mem_width;
++	} else if (dwc->dma_sconfig.direction == DMA_DEV_TO_MEM) {
++		reg_width = dwc->dma_sconfig.src_addr_width;
++		reg_burst = rounddown_pow_of_two(dwc->dma_sconfig.src_maxburst);
++
++		dwc->dma_sconfig.dst_addr_width = min(mem_width, reg_width * reg_burst);
++	}
++
++	return 0;
++}
++
+ static int dwc_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
+ {
+ 	struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+ 	struct dw_dma *dw = to_dw_dma(chan->device);
++	int ret;
+ 
+ 	memcpy(&dwc->dma_sconfig, sconfig, sizeof(*sconfig));
+ 
+ 	dwc->dma_sconfig.src_maxburst =
+-		clamp(dwc->dma_sconfig.src_maxburst, 0U, dwc->max_burst);
++		clamp(dwc->dma_sconfig.src_maxburst, 1U, dwc->max_burst);
+ 	dwc->dma_sconfig.dst_maxburst =
+-		clamp(dwc->dma_sconfig.dst_maxburst, 0U, dwc->max_burst);
++		clamp(dwc->dma_sconfig.dst_maxburst, 1U, dwc->max_burst);
++
++	ret = dwc_verify_p_buswidth(chan);
++	if (ret)
++		return ret;
++
++	ret = dwc_verify_m_buswidth(chan);
++	if (ret)
++		return ret;
+ 
+ 	dw->encode_maxburst(dwc, &dwc->dma_sconfig.src_maxburst);
+ 	dw->encode_maxburst(dwc, &dwc->dma_sconfig.dst_maxburst);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+index 2c1c5f7f98deb..48817d2cda89c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+@@ -386,16 +386,24 @@ int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
+ 
+ 	switch (args->in.op) {
+ 	case AMDGPU_CTX_OP_ALLOC_CTX:
++		if (args->in.flags)
++			return -EINVAL;
+ 		r = amdgpu_ctx_alloc(adev, fpriv, filp, priority, &id);
+ 		args->out.alloc.ctx_id = id;
+ 		break;
+ 	case AMDGPU_CTX_OP_FREE_CTX:
++		if (args->in.flags)
++			return -EINVAL;
+ 		r = amdgpu_ctx_free(fpriv, id);
+ 		break;
+ 	case AMDGPU_CTX_OP_QUERY_STATE:
++		if (args->in.flags)
++			return -EINVAL;
+ 		r = amdgpu_ctx_query(adev, fpriv, id, &args->out);
+ 		break;
+ 	case AMDGPU_CTX_OP_QUERY_STATE2:
++		if (args->in.flags)
++			return -EINVAL;
+ 		r = amdgpu_ctx_query2(adev, fpriv, id, &args->out);
+ 		break;
+ 	default:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+index ecaa2d7483b20..0a28daa14fb11 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+@@ -725,7 +725,8 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
+ 	uint32_t created = 0;
+ 	uint32_t allocated = 0;
+ 	uint32_t tmp, handle = 0;
+-	uint32_t *size = &tmp;
++	uint32_t dummy = 0xffffffff;
++	uint32_t *size = &dummy;
+ 	unsigned idx;
+ 	int i, r = 0;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+index ae8c0f897d59b..24dd4df55cb21 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+@@ -553,11 +553,11 @@ void jpeg_v2_0_dec_ring_emit_ib(struct amdgpu_ring *ring,
+ 
+ 	amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JRBC_IB_VMID_INTERNAL_OFFSET,
+ 		0, 0, PACKETJ_TYPE0));
+-	amdgpu_ring_write(ring, (vmid | (vmid << 4)));
++	amdgpu_ring_write(ring, (vmid | (vmid << 4) | (vmid << 8)));
+ 
+ 	amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JPEG_VMID_INTERNAL_OFFSET,
+ 		0, 0, PACKETJ_TYPE0));
+-	amdgpu_ring_write(ring, (vmid | (vmid << 4)));
++	amdgpu_ring_write(ring, (vmid | (vmid << 4) | (vmid << 8)));
+ 
+ 	amdgpu_ring_write(ring,	PACKETJ(mmUVD_LMI_JRBC_IB_64BIT_BAR_LOW_INTERNAL_OFFSET,
+ 		0, 0, PACKETJ_TYPE0));
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 799a91a064a1b..9a444b17530a4 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -1311,7 +1311,7 @@ static int kfd_ioctl_alloc_memory_of_gpu(struct file *filep,
+ 			goto err_unlock;
+ 		}
+ 		offset = amdgpu_amdkfd_get_mmio_remap_phys_addr(dev->kgd);
+-		if (!offset) {
++		if (!offset || (PAGE_SIZE > 4096)) {
+ 			err = -ENOMEM;
+ 			goto err_unlock;
+ 		}
+@@ -1969,6 +1969,9 @@ static int kfd_mmio_mmap(struct kfd_dev *dev, struct kfd_process *process,
+ 	if (vma->vm_end - vma->vm_start != PAGE_SIZE)
+ 		return -EINVAL;
+ 
++	if (PAGE_SIZE > 4096)
++		return -EINVAL;
++
+ 	address = amdgpu_amdkfd_get_mmio_remap_phys_addr(dev->kgd);
+ 
+ 	vma->vm_flags |= VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE |
+diff --git a/drivers/gpu/drm/lima/lima_gp.c b/drivers/gpu/drm/lima/lima_gp.c
+index ca3842f719842..82071835ec9ed 100644
+--- a/drivers/gpu/drm/lima/lima_gp.c
++++ b/drivers/gpu/drm/lima/lima_gp.c
+@@ -166,6 +166,11 @@ static void lima_gp_task_run(struct lima_sched_pipe *pipe,
+ 	gp_write(LIMA_GP_CMD, cmd);
+ }
+ 
++static int lima_gp_bus_stop_poll(struct lima_ip *ip)
++{
++	return !!(gp_read(LIMA_GP_STATUS) & LIMA_GP_STATUS_BUS_STOPPED);
++}
++
+ static int lima_gp_hard_reset_poll(struct lima_ip *ip)
+ {
+ 	gp_write(LIMA_GP_PERF_CNT_0_LIMIT, 0xC01A0000);
+@@ -179,6 +184,13 @@ static int lima_gp_hard_reset(struct lima_ip *ip)
+ 
+ 	gp_write(LIMA_GP_PERF_CNT_0_LIMIT, 0xC0FFE000);
+ 	gp_write(LIMA_GP_INT_MASK, 0);
++
++	gp_write(LIMA_GP_CMD, LIMA_GP_CMD_STOP_BUS);
++	ret = lima_poll_timeout(ip, lima_gp_bus_stop_poll, 10, 100);
++	if (ret) {
++		dev_err(dev->dev, "%s bus stop timeout\n", lima_ip_name(ip));
++		return ret;
++	}
+ 	gp_write(LIMA_GP_CMD, LIMA_GP_CMD_RESET);
+ 	ret = lima_poll_timeout(ip, lima_gp_hard_reset_poll, 10, 100);
+ 	if (ret) {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+index bb7c7e437242e..31a5646f54939 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+@@ -32,24 +32,14 @@
+  * @fmt: Pointer to format string
+  */
+ #define DPU_DEBUG(fmt, ...)                                                \
+-	do {                                                               \
+-		if (drm_debug_enabled(DRM_UT_KMS))                         \
+-			DRM_DEBUG(fmt, ##__VA_ARGS__); \
+-		else                                                       \
+-			pr_debug(fmt, ##__VA_ARGS__);                      \
+-	} while (0)
++	DRM_DEBUG_DRIVER(fmt, ##__VA_ARGS__)
+ 
+ /**
+  * DPU_DEBUG_DRIVER - macro for hardware driver logging
+  * @fmt: Pointer to format string
+  */
+ #define DPU_DEBUG_DRIVER(fmt, ...)                                         \
+-	do {                                                               \
+-		if (drm_debug_enabled(DRM_UT_DRIVER))                      \
+-			DRM_ERROR(fmt, ##__VA_ARGS__); \
+-		else                                                       \
+-			pr_debug(fmt, ##__VA_ARGS__);                      \
+-	} while (0)
++	DRM_DEBUG_DRIVER(fmt, ##__VA_ARGS__)
+ 
+ #define DPU_ERROR(fmt, ...) pr_err("[dpu error]" fmt, ##__VA_ARGS__)
+ #define DPU_ERROR_RATELIMITED(fmt, ...) pr_err_ratelimited("[dpu error]" fmt, ##__VA_ARGS__)
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 07becbf3c64fc..0b0d86d2121b9 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1246,6 +1246,8 @@ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl,
+ 	link_info.rate = ctrl->link->link_params.rate;
+ 	link_info.capabilities = DP_LINK_CAP_ENHANCED_FRAMING;
+ 
++	dp_link_reset_phy_params_vx_px(ctrl->link);
++
+ 	dp_aux_link_configure(ctrl->aux, &link_info);
+ 	drm_dp_dpcd_write(ctrl->aux, DP_MAIN_LINK_CHANNEL_CODING_SET,
+ 				&encoding, 1);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 9f3f7588fe46d..c4e4a24692f6b 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -875,7 +875,15 @@
+ #define USB_DEVICE_ID_MS_TYPE_COVER_2    0x07a9
+ #define USB_DEVICE_ID_MS_POWER_COVER     0x07da
+ #define USB_DEVICE_ID_MS_SURFACE3_COVER		0x07de
+-#define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER	0x02fd
++/*
++ * For a description of the Xbox controller models, refer to:
++ * https://en.wikipedia.org/wiki/Xbox_Wireless_Controller#Summary
++ */
++#define USB_DEVICE_ID_MS_XBOX_CONTROLLER_MODEL_1708	0x02fd
++#define USB_DEVICE_ID_MS_XBOX_CONTROLLER_MODEL_1708_BLE	0x0b20
++#define USB_DEVICE_ID_MS_XBOX_CONTROLLER_MODEL_1914	0x0b13
++#define USB_DEVICE_ID_MS_XBOX_CONTROLLER_MODEL_1797	0x0b05
++#define USB_DEVICE_ID_MS_XBOX_CONTROLLER_MODEL_1797_BLE	0x0b22
+ #define USB_DEVICE_ID_MS_PIXART_MOUSE    0x00cb
+ #define USB_DEVICE_ID_8BITDO_SN30_PRO_PLUS      0x02e0
+ #define USB_DEVICE_ID_MS_MOUSE_0783      0x0783
+diff --git a/drivers/hid/hid-microsoft.c b/drivers/hid/hid-microsoft.c
+index 071fd093a5f4e..9345e2bfd56ed 100644
+--- a/drivers/hid/hid-microsoft.c
++++ b/drivers/hid/hid-microsoft.c
+@@ -446,7 +446,16 @@ static const struct hid_device_id ms_devices[] = {
+ 		.driver_data = MS_PRESENTER },
+ 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, 0x091B),
+ 		.driver_data = MS_SURFACE_DIAL },
+-	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER),
++
++	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_XBOX_CONTROLLER_MODEL_1708),
++		.driver_data = MS_QUIRK_FF },
++	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_XBOX_CONTROLLER_MODEL_1708_BLE),
++		.driver_data = MS_QUIRK_FF },
++	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_XBOX_CONTROLLER_MODEL_1914),
++		.driver_data = MS_QUIRK_FF },
++	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_XBOX_CONTROLLER_MODEL_1797),
++		.driver_data = MS_QUIRK_FF },
++	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_XBOX_CONTROLLER_MODEL_1797_BLE),
+ 		.driver_data = MS_QUIRK_FF },
+ 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_8BITDO_SN30_PRO_PLUS),
+ 		.driver_data = MS_QUIRK_FF },
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index f6ee287a892c6..eee0f938e4d69 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1920,12 +1920,14 @@ static void wacom_map_usage(struct input_dev *input, struct hid_usage *usage,
+ 	int fmax = field->logical_maximum;
+ 	unsigned int equivalent_usage = wacom_equivalent_usage(usage->hid);
+ 	int resolution_code = code;
+-	int resolution = hidinput_calc_abs_res(field, resolution_code);
++	int resolution;
+ 
+ 	if (equivalent_usage == HID_DG_TWIST) {
+ 		resolution_code = ABS_RZ;
+ 	}
+ 
++	resolution = hidinput_calc_abs_res(field, resolution_code);
++
+ 	if (equivalent_usage == HID_GD_X) {
+ 		fmin += features->offset_left;
+ 		fmax -= features->offset_right;
+diff --git a/drivers/i2c/busses/i2c-riic.c b/drivers/i2c/busses/i2c-riic.c
+index 4eccc0f69861f..d8f252c4caf2b 100644
+--- a/drivers/i2c/busses/i2c-riic.c
++++ b/drivers/i2c/busses/i2c-riic.c
+@@ -312,7 +312,7 @@ static int riic_init_hw(struct riic_dev *riic, struct i2c_timings *t)
+ 	 * frequency with only 62 clock ticks max (31 high, 31 low).
+ 	 * Aim for a duty of 60% LOW, 40% HIGH.
+ 	 */
+-	total_ticks = DIV_ROUND_UP(rate, t->bus_freq_hz);
++	total_ticks = DIV_ROUND_UP(rate, t->bus_freq_hz ?: 1);
+ 
+ 	for (cks = 0; cks < 7; cks++) {
+ 		/*
+diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
+index c74868f016379..b7ae4bf2f5499 100644
+--- a/drivers/infiniband/hw/hfi1/chip.c
++++ b/drivers/infiniband/hw/hfi1/chip.c
+@@ -13224,15 +13224,16 @@ static void read_mod_write(struct hfi1_devdata *dd, u16 src, u64 bits,
+ {
+ 	u64 reg;
+ 	u16 idx = src / BITS_PER_REGISTER;
++	unsigned long flags;
+ 
+-	spin_lock(&dd->irq_src_lock);
++	spin_lock_irqsave(&dd->irq_src_lock, flags);
+ 	reg = read_csr(dd, CCE_INT_MASK + (8 * idx));
+ 	if (set)
+ 		reg |= bits;
+ 	else
+ 		reg &= ~bits;
+ 	write_csr(dd, CCE_INT_MASK + (8 * idx), reg);
+-	spin_unlock(&dd->irq_src_lock);
++	spin_unlock_irqrestore(&dd->irq_src_lock, flags);
+ }
+ 
+ /**
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
+index 76b993e8d672f..f34793068f344 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs.c
+@@ -235,7 +235,7 @@ static int create_cq(struct rtrs_con *con, int cq_vector, u16 cq_size,
+ static int create_qp(struct rtrs_con *con, struct ib_pd *pd,
+ 		     u32 max_send_wr, u32 max_recv_wr, u32 max_sge)
+ {
+-	struct ib_qp_init_attr init_attr = {NULL};
++	struct ib_qp_init_attr init_attr = {};
+ 	struct rdma_cm_id *cm_id = con->cm_id;
+ 	int ret;
+ 
+diff --git a/drivers/input/input-mt.c b/drivers/input/input-mt.c
+index 44fe6f2f063ce..d0f8c31d7cc04 100644
+--- a/drivers/input/input-mt.c
++++ b/drivers/input/input-mt.c
+@@ -45,6 +45,9 @@ int input_mt_init_slots(struct input_dev *dev, unsigned int num_slots,
+ 		return 0;
+ 	if (mt)
+ 		return mt->num_slots != num_slots ? -EINVAL : 0;
++	/* Arbitrary limit for avoiding too large memory allocation. */
++	if (num_slots > 1024)
++		return -EINVAL;
+ 
+ 	mt = kzalloc(struct_size(mt, slots, num_slots), GFP_KERNEL);
+ 	if (!mt)
+diff --git a/drivers/input/serio/ioc3kbd.c b/drivers/input/serio/ioc3kbd.c
+index 676b0bda3d720..bf305fbc891d8 100644
+--- a/drivers/input/serio/ioc3kbd.c
++++ b/drivers/input/serio/ioc3kbd.c
+@@ -190,7 +190,7 @@ static int ioc3kbd_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static void ioc3kbd_remove(struct platform_device *pdev)
++static int ioc3kbd_remove(struct platform_device *pdev)
+ {
+ 	struct ioc3kbd_data *d = platform_get_drvdata(pdev);
+ 
+@@ -198,6 +198,8 @@ static void ioc3kbd_remove(struct platform_device *pdev)
+ 
+ 	serio_unregister_port(d->kbd);
+ 	serio_unregister_port(d->aux);
++
++	return 0;
+ }
+ 
+ static const struct platform_device_id ioc3kbd_id_table[] = {
+@@ -208,7 +210,7 @@ MODULE_DEVICE_TABLE(platform, ioc3kbd_id_table);
+ 
+ static struct platform_driver ioc3kbd_driver = {
+ 	.probe          = ioc3kbd_probe,
+-	.remove_new     = ioc3kbd_remove,
++	.remove         = ioc3kbd_remove,
+ 	.id_table	= ioc3kbd_id_table,
+ 	.driver = {
+ 		.name = "ioc3-kbd",
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 4e486cccc4cc6..a9469751720cc 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -4469,8 +4469,6 @@ static int its_vpe_irq_domain_alloc(struct irq_domain *domain, unsigned int virq
+ 	struct page *vprop_page;
+ 	int base, nr_ids, i, err = 0;
+ 
+-	BUG_ON(!vm);
+-
+ 	bitmap = its_lpi_alloc(roundup_pow_of_two(nr_irqs), &base, &nr_ids);
+ 	if (!bitmap)
+ 		return -ENOMEM;
+diff --git a/drivers/md/dm-clone-metadata.c b/drivers/md/dm-clone-metadata.c
+index 17712456fa634..383258eae750d 100644
+--- a/drivers/md/dm-clone-metadata.c
++++ b/drivers/md/dm-clone-metadata.c
+@@ -471,11 +471,6 @@ static void __destroy_persistent_data_structures(struct dm_clone_metadata *cmd)
+ 
+ /*---------------------------------------------------------------------------*/
+ 
+-static size_t bitmap_size(unsigned long nr_bits)
+-{
+-	return BITS_TO_LONGS(nr_bits) * sizeof(long);
+-}
+-
+ static int __dirty_map_init(struct dirty_map *dmap, unsigned long nr_words,
+ 			    unsigned long nr_regions)
+ {
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index 4184c8a2d4977..d5c103f82ea11 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1064,8 +1064,26 @@ static int do_resume(struct dm_ioctl *param)
+ 			suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG;
+ 		if (param->flags & DM_NOFLUSH_FLAG)
+ 			suspend_flags |= DM_SUSPEND_NOFLUSH_FLAG;
+-		if (!dm_suspended_md(md))
+-			dm_suspend(md, suspend_flags);
++		if (!dm_suspended_md(md)) {
++			r = dm_suspend(md, suspend_flags);
++			if (r) {
++				down_write(&_hash_lock);
++				hc = dm_get_mdptr(md);
++				if (hc && !hc->new_map) {
++					hc->new_map = new_map;
++					new_map = NULL;
++				} else {
++					r = -ENXIO;
++				}
++				up_write(&_hash_lock);
++				if (new_map) {
++					dm_sync_table(md);
++					dm_table_destroy(new_map);
++				}
++				dm_put(md);
++				return r;
++			}
++		}
+ 
+ 		old_map = dm_swap_table(md, new_map);
+ 		if (IS_ERR(old_map)) {
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index dc8498b4b5c13..b56ea42ab7d2b 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -2343,7 +2343,7 @@ static int dm_wait_for_bios_completion(struct mapped_device *md, long task_state
+ 			break;
+ 
+ 		if (signal_pending_state(task_state, current)) {
+-			r = -EINTR;
++			r = -ERESTARTSYS;
+ 			break;
+ 		}
+ 
+@@ -2368,7 +2368,7 @@ static int dm_wait_for_completion(struct mapped_device *md, long task_state)
+ 			break;
+ 
+ 		if (signal_pending_state(task_state, current)) {
+-			r = -EINTR;
++			r = -ERESTARTSYS;
+ 			break;
+ 		}
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index f1f029243e0cb..accf9ac9bb8c0 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -7601,11 +7601,6 @@ static int md_ioctl(struct block_device *bdev, fmode_t mode,
+ 
+ 	mddev = bdev->bd_disk->private_data;
+ 
+-	if (!mddev) {
+-		BUG();
+-		goto out;
+-	}
+-
+ 	/* Some actions do not requires the mutex */
+ 	switch (cmd) {
+ 	case GET_ARRAY_INFO:
+diff --git a/drivers/md/persistent-data/dm-space-map-metadata.c b/drivers/md/persistent-data/dm-space-map-metadata.c
+index da439ac857963..25ce7fb7fd9d0 100644
+--- a/drivers/md/persistent-data/dm-space-map-metadata.c
++++ b/drivers/md/persistent-data/dm-space-map-metadata.c
+@@ -275,7 +275,7 @@ static void sm_metadata_destroy(struct dm_space_map *sm)
+ {
+ 	struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+ 
+-	kfree(smm);
++	kvfree(smm);
+ }
+ 
+ static int sm_metadata_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count)
+@@ -759,7 +759,7 @@ struct dm_space_map *dm_sm_metadata_init(void)
+ {
+ 	struct sm_metadata *smm;
+ 
+-	smm = kmalloc(sizeof(*smm), GFP_KERNEL);
++	smm = kvmalloc(sizeof(*smm), GFP_KERNEL);
+ 	if (!smm)
+ 		return ERR_PTR(-ENOMEM);
+ 
+diff --git a/drivers/media/pci/cx23885/cx23885-video.c b/drivers/media/pci/cx23885/cx23885-video.c
+index 86e3bb5903712..022a9f8e19a2b 100644
+--- a/drivers/media/pci/cx23885/cx23885-video.c
++++ b/drivers/media/pci/cx23885/cx23885-video.c
+@@ -1353,6 +1353,10 @@ int cx23885_video_register(struct cx23885_dev *dev)
+ 	/* register Video device */
+ 	dev->video_dev = cx23885_vdev_init(dev, dev->pci,
+ 		&cx23885_video_template, "video");
++	if (!dev->video_dev) {
++		err = -ENOMEM;
++		goto fail_unreg;
++	}
+ 	dev->video_dev->queue = &dev->vb2_vidq;
+ 	dev->video_dev->device_caps = V4L2_CAP_READWRITE | V4L2_CAP_STREAMING |
+ 				      V4L2_CAP_AUDIO | V4L2_CAP_VIDEO_CAPTURE;
+@@ -1381,6 +1385,10 @@ int cx23885_video_register(struct cx23885_dev *dev)
+ 	/* register VBI device */
+ 	dev->vbi_dev = cx23885_vdev_init(dev, dev->pci,
+ 		&cx23885_vbi_template, "vbi");
++	if (!dev->vbi_dev) {
++		err = -ENOMEM;
++		goto fail_unreg;
++	}
+ 	dev->vbi_dev->queue = &dev->vb2_vbiq;
+ 	dev->vbi_dev->device_caps = V4L2_CAP_READWRITE | V4L2_CAP_STREAMING |
+ 				    V4L2_CAP_AUDIO | V4L2_CAP_VBI_CAPTURE;
+diff --git a/drivers/media/pci/solo6x10/solo6x10-offsets.h b/drivers/media/pci/solo6x10/solo6x10-offsets.h
+index f414ee1316f29..fdbb817e63601 100644
+--- a/drivers/media/pci/solo6x10/solo6x10-offsets.h
++++ b/drivers/media/pci/solo6x10/solo6x10-offsets.h
+@@ -57,16 +57,16 @@
+ #define SOLO_MP4E_EXT_ADDR(__solo) \
+ 	(SOLO_EREF_EXT_ADDR(__solo) + SOLO_EREF_EXT_AREA(__solo))
+ #define SOLO_MP4E_EXT_SIZE(__solo) \
+-	max((__solo->nr_chans * 0x00080000),				\
+-	    min(((__solo->sdram_size - SOLO_MP4E_EXT_ADDR(__solo)) -	\
+-		 __SOLO_JPEG_MIN_SIZE(__solo)), 0x00ff0000))
++	clamp(__solo->sdram_size - SOLO_MP4E_EXT_ADDR(__solo) -	\
++	      __SOLO_JPEG_MIN_SIZE(__solo),			\
++	      __solo->nr_chans * 0x00080000, 0x00ff0000)
+ 
+ #define __SOLO_JPEG_MIN_SIZE(__solo)		(__solo->nr_chans * 0x00080000)
+ #define SOLO_JPEG_EXT_ADDR(__solo) \
+ 		(SOLO_MP4E_EXT_ADDR(__solo) + SOLO_MP4E_EXT_SIZE(__solo))
+ #define SOLO_JPEG_EXT_SIZE(__solo) \
+-	max(__SOLO_JPEG_MIN_SIZE(__solo),				\
+-	    min((__solo->sdram_size - SOLO_JPEG_EXT_ADDR(__solo)), 0x00ff0000))
++	clamp(__solo->sdram_size - SOLO_JPEG_EXT_ADDR(__solo),	\
++	      __SOLO_JPEG_MIN_SIZE(__solo), 0x00ff0000)
+ 
+ #define SOLO_SDRAM_END(__solo) \
+ 	(SOLO_JPEG_EXT_ADDR(__solo) + SOLO_JPEG_EXT_SIZE(__solo))
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
+index 6bf9c5c319de7..fd55352d743ee 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.c
++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
+@@ -765,7 +765,7 @@ static int vcodec_domains_get(struct venus_core *core)
+ 		pd = dev_pm_domain_attach_by_name(dev,
+ 						  res->vcodec_pmdomains[i]);
+ 		if (IS_ERR_OR_NULL(pd))
+-			return PTR_ERR(pd) ? : -ENODATA;
++			return pd ? PTR_ERR(pd) : -ENODATA;
+ 		core->pmdomains[i] = pd;
+ 	}
+ 
+diff --git a/drivers/media/radio/radio-isa.c b/drivers/media/radio/radio-isa.c
+index ad2ac16ff12dd..610d3e3269518 100644
+--- a/drivers/media/radio/radio-isa.c
++++ b/drivers/media/radio/radio-isa.c
+@@ -36,7 +36,7 @@ static int radio_isa_querycap(struct file *file, void  *priv,
+ 
+ 	strscpy(v->driver, isa->drv->driver.driver.name, sizeof(v->driver));
+ 	strscpy(v->card, isa->drv->card, sizeof(v->card));
+-	snprintf(v->bus_info, sizeof(v->bus_info), "ISA:%s", isa->v4l2_dev.name);
++	snprintf(v->bus_info, sizeof(v->bus_info), "ISA:%s", dev_name(isa->v4l2_dev.dev));
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index 288f097e2e6f2..f6e97ff7a8e4b 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -723,11 +723,11 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
+ 	unsigned long flags;
+ 	u64 timestamp;
+ 	u32 delta_stc;
+-	u32 y1, y2;
++	u32 y1;
+ 	u32 x1, x2;
+ 	u32 mean;
+ 	u32 sof;
+-	u64 y;
++	u64 y, y2;
+ 
+ 	if (!uvc_hw_timestamps_param)
+ 		return;
+@@ -767,7 +767,7 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
+ 	sof = y;
+ 
+ 	uvc_trace(UVC_TRACE_CLOCK, "%s: PTS %u y %llu.%06llu SOF %u.%06llu "
+-		  "(x1 %u x2 %u y1 %u y2 %u SOF offset %u)\n",
++		  "(x1 %u x2 %u y1 %u y2 %llu SOF offset %u)\n",
+ 		  stream->dev->name, buf->pts,
+ 		  y >> 16, div_u64((y & 0xffff) * 1000000, 65536),
+ 		  sof >> 16, div_u64(((u64)sof & 0xffff) * 1000000LLU, 65536),
+@@ -782,7 +782,7 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
+ 		goto done;
+ 
+ 	y1 = NSEC_PER_SEC;
+-	y2 = (u32)ktime_to_ns(ktime_sub(last->host_time, first->host_time)) + y1;
++	y2 = ktime_to_ns(ktime_sub(last->host_time, first->host_time)) + y1;
+ 
+ 	/* Interpolated and host SOF timestamps can wrap around at slightly
+ 	 * different times. Handle this by adding or removing 2048 to or from
+@@ -802,7 +802,7 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
+ 	timestamp = ktime_to_ns(first->host_time) + y - y1;
+ 
+ 	uvc_trace(UVC_TRACE_CLOCK, "%s: SOF %u.%06llu y %llu ts %llu "
+-		  "buf ts %llu (x1 %u/%u/%u x2 %u/%u/%u y1 %u y2 %u)\n",
++		  "buf ts %llu (x1 %u/%u/%u x2 %u/%u/%u y1 %u y2 %llu)\n",
+ 		  stream->dev->name,
+ 		  sof >> 16, div_u64(((u64)sof & 0xffff) * 1000000LLU, 65536),
+ 		  y, timestamp, vbuf->vb2_buf.timestamp,
+diff --git a/drivers/memory/stm32-fmc2-ebi.c b/drivers/memory/stm32-fmc2-ebi.c
+index ffec26a99313b..5c387d32c078f 100644
+--- a/drivers/memory/stm32-fmc2-ebi.c
++++ b/drivers/memory/stm32-fmc2-ebi.c
+@@ -179,8 +179,11 @@ static int stm32_fmc2_ebi_check_mux(struct stm32_fmc2_ebi *ebi,
+ 				    int cs)
+ {
+ 	u32 bcr;
++	int ret;
+ 
+-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	if (ret)
++		return ret;
+ 
+ 	if (bcr & FMC2_BCR_MTYP)
+ 		return 0;
+@@ -193,8 +196,11 @@ static int stm32_fmc2_ebi_check_waitcfg(struct stm32_fmc2_ebi *ebi,
+ 					int cs)
+ {
+ 	u32 bcr, val = FIELD_PREP(FMC2_BCR_MTYP, FMC2_BCR_MTYP_NOR);
++	int ret;
+ 
+-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	if (ret)
++		return ret;
+ 
+ 	if ((bcr & FMC2_BCR_MTYP) == val && bcr & FMC2_BCR_BURSTEN)
+ 		return 0;
+@@ -207,8 +213,11 @@ static int stm32_fmc2_ebi_check_sync_trans(struct stm32_fmc2_ebi *ebi,
+ 					   int cs)
+ {
+ 	u32 bcr;
++	int ret;
+ 
+-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	if (ret)
++		return ret;
+ 
+ 	if (bcr & FMC2_BCR_BURSTEN)
+ 		return 0;
+@@ -221,8 +230,11 @@ static int stm32_fmc2_ebi_check_async_trans(struct stm32_fmc2_ebi *ebi,
+ 					    int cs)
+ {
+ 	u32 bcr;
++	int ret;
+ 
+-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	if (ret)
++		return ret;
+ 
+ 	if (!(bcr & FMC2_BCR_BURSTEN) || !(bcr & FMC2_BCR_CBURSTRW))
+ 		return 0;
+@@ -235,8 +247,11 @@ static int stm32_fmc2_ebi_check_cpsize(struct stm32_fmc2_ebi *ebi,
+ 				       int cs)
+ {
+ 	u32 bcr, val = FIELD_PREP(FMC2_BCR_MTYP, FMC2_BCR_MTYP_PSRAM);
++	int ret;
+ 
+-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	if (ret)
++		return ret;
+ 
+ 	if ((bcr & FMC2_BCR_MTYP) == val && bcr & FMC2_BCR_BURSTEN)
+ 		return 0;
+@@ -249,12 +264,18 @@ static int stm32_fmc2_ebi_check_address_hold(struct stm32_fmc2_ebi *ebi,
+ 					     int cs)
+ {
+ 	u32 bcr, bxtr, val = FIELD_PREP(FMC2_BXTR_ACCMOD, FMC2_BXTR_EXTMOD_D);
++	int ret;
++
++	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	if (ret)
++		return ret;
+ 
+-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+ 	if (prop->reg_type == FMC2_REG_BWTR)
+-		regmap_read(ebi->regmap, FMC2_BWTR(cs), &bxtr);
++		ret = regmap_read(ebi->regmap, FMC2_BWTR(cs), &bxtr);
+ 	else
+-		regmap_read(ebi->regmap, FMC2_BTR(cs), &bxtr);
++		ret = regmap_read(ebi->regmap, FMC2_BTR(cs), &bxtr);
++	if (ret)
++		return ret;
+ 
+ 	if ((!(bcr & FMC2_BCR_BURSTEN) || !(bcr & FMC2_BCR_CBURSTRW)) &&
+ 	    ((bxtr & FMC2_BXTR_ACCMOD) == val || bcr & FMC2_BCR_MUXEN))
+@@ -268,12 +289,19 @@ static int stm32_fmc2_ebi_check_clk_period(struct stm32_fmc2_ebi *ebi,
+ 					   int cs)
+ {
+ 	u32 bcr, bcr1;
++	int ret;
+ 
+-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+-	if (cs)
+-		regmap_read(ebi->regmap, FMC2_BCR1, &bcr1);
+-	else
++	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	if (ret)
++		return ret;
++
++	if (cs) {
++		ret = regmap_read(ebi->regmap, FMC2_BCR1, &bcr1);
++		if (ret)
++			return ret;
++	} else {
+ 		bcr1 = bcr;
++	}
+ 
+ 	if (bcr & FMC2_BCR_BURSTEN && (!cs || !(bcr1 & FMC2_BCR1_CCLKEN)))
+ 		return 0;
+@@ -305,12 +333,18 @@ static u32 stm32_fmc2_ebi_ns_to_clk_period(struct stm32_fmc2_ebi *ebi,
+ {
+ 	u32 nb_clk_cycles = stm32_fmc2_ebi_ns_to_clock_cycles(ebi, cs, setup);
+ 	u32 bcr, btr, clk_period;
++	int ret;
++
++	ret = regmap_read(ebi->regmap, FMC2_BCR1, &bcr);
++	if (ret)
++		return ret;
+ 
+-	regmap_read(ebi->regmap, FMC2_BCR1, &bcr);
+ 	if (bcr & FMC2_BCR1_CCLKEN || !cs)
+-		regmap_read(ebi->regmap, FMC2_BTR1, &btr);
++		ret = regmap_read(ebi->regmap, FMC2_BTR1, &btr);
+ 	else
+-		regmap_read(ebi->regmap, FMC2_BTR(cs), &btr);
++		ret = regmap_read(ebi->regmap, FMC2_BTR(cs), &btr);
++	if (ret)
++		return ret;
+ 
+ 	clk_period = FIELD_GET(FMC2_BTR_CLKDIV, btr) + 1;
+ 
+@@ -569,11 +603,16 @@ static int stm32_fmc2_ebi_set_address_setup(struct stm32_fmc2_ebi *ebi,
+ 	if (ret)
+ 		return ret;
+ 
+-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++	if (ret)
++		return ret;
++
+ 	if (prop->reg_type == FMC2_REG_BWTR)
+-		regmap_read(ebi->regmap, FMC2_BWTR(cs), &bxtr);
++		ret = regmap_read(ebi->regmap, FMC2_BWTR(cs), &bxtr);
+ 	else
+-		regmap_read(ebi->regmap, FMC2_BTR(cs), &bxtr);
++		ret = regmap_read(ebi->regmap, FMC2_BTR(cs), &bxtr);
++	if (ret)
++		return ret;
+ 
+ 	if ((bxtr & FMC2_BXTR_ACCMOD) == val || bcr & FMC2_BCR_MUXEN)
+ 		val = clamp_val(setup, 1, FMC2_BXTR_ADDSET_MAX);
+@@ -691,11 +730,14 @@ static int stm32_fmc2_ebi_set_max_low_pulse(struct stm32_fmc2_ebi *ebi,
+ 					    int cs, u32 setup)
+ {
+ 	u32 old_val, new_val, pcscntr;
++	int ret;
+ 
+ 	if (setup < 1)
+ 		return 0;
+ 
+-	regmap_read(ebi->regmap, FMC2_PCSCNTR, &pcscntr);
++	ret = regmap_read(ebi->regmap, FMC2_PCSCNTR, &pcscntr);
++	if (ret)
++		return ret;
+ 
+ 	/* Enable counter for the bank */
+ 	regmap_update_bits(ebi->regmap, FMC2_PCSCNTR,
+@@ -942,17 +984,20 @@ static void stm32_fmc2_ebi_disable_bank(struct stm32_fmc2_ebi *ebi, int cs)
+ 	regmap_update_bits(ebi->regmap, FMC2_BCR(cs), FMC2_BCR_MBKEN, 0);
+ }
+ 
+-static void stm32_fmc2_ebi_save_setup(struct stm32_fmc2_ebi *ebi)
++static int stm32_fmc2_ebi_save_setup(struct stm32_fmc2_ebi *ebi)
+ {
+ 	unsigned int cs;
++	int ret;
+ 
+ 	for (cs = 0; cs < FMC2_MAX_EBI_CE; cs++) {
+-		regmap_read(ebi->regmap, FMC2_BCR(cs), &ebi->bcr[cs]);
+-		regmap_read(ebi->regmap, FMC2_BTR(cs), &ebi->btr[cs]);
+-		regmap_read(ebi->regmap, FMC2_BWTR(cs), &ebi->bwtr[cs]);
++		ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &ebi->bcr[cs]);
++		ret |= regmap_read(ebi->regmap, FMC2_BTR(cs), &ebi->btr[cs]);
++		ret |= regmap_read(ebi->regmap, FMC2_BWTR(cs), &ebi->bwtr[cs]);
++		if (ret)
++			return ret;
+ 	}
+ 
+-	regmap_read(ebi->regmap, FMC2_PCSCNTR, &ebi->pcscntr);
++	return regmap_read(ebi->regmap, FMC2_PCSCNTR, &ebi->pcscntr);
+ }
+ 
+ static void stm32_fmc2_ebi_set_setup(struct stm32_fmc2_ebi *ebi)
+@@ -981,22 +1026,29 @@ static void stm32_fmc2_ebi_disable_banks(struct stm32_fmc2_ebi *ebi)
+ }
+ 
+ /* NWAIT signal can not be connected to EBI controller and NAND controller */
+-static bool stm32_fmc2_ebi_nwait_used_by_ctrls(struct stm32_fmc2_ebi *ebi)
++static int stm32_fmc2_ebi_nwait_used_by_ctrls(struct stm32_fmc2_ebi *ebi)
+ {
++	struct device *dev = ebi->dev;
+ 	unsigned int cs;
+ 	u32 bcr;
++	int ret;
+ 
+ 	for (cs = 0; cs < FMC2_MAX_EBI_CE; cs++) {
+ 		if (!(ebi->bank_assigned & BIT(cs)))
+ 			continue;
+ 
+-		regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++		ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
++		if (ret)
++			return ret;
++
+ 		if ((bcr & FMC2_BCR_WAITEN || bcr & FMC2_BCR_ASYNCWAIT) &&
+-		    ebi->bank_assigned & BIT(FMC2_NAND))
+-			return true;
++		    ebi->bank_assigned & BIT(FMC2_NAND)) {
++			dev_err(dev, "NWAIT signal connected to EBI and NAND controllers\n");
++			return -EINVAL;
++		}
+ 	}
+ 
+-	return false;
++	return 0;
+ }
+ 
+ static void stm32_fmc2_ebi_enable(struct stm32_fmc2_ebi *ebi)
+@@ -1083,10 +1135,9 @@ static int stm32_fmc2_ebi_parse_dt(struct stm32_fmc2_ebi *ebi)
+ 		return -ENODEV;
+ 	}
+ 
+-	if (stm32_fmc2_ebi_nwait_used_by_ctrls(ebi)) {
+-		dev_err(dev, "NWAIT signal connected to EBI and NAND controllers\n");
+-		return -EINVAL;
+-	}
++	ret = stm32_fmc2_ebi_nwait_used_by_ctrls(ebi);
++	if (ret)
++		return ret;
+ 
+ 	stm32_fmc2_ebi_enable(ebi);
+ 
+@@ -1131,7 +1182,10 @@ static int stm32_fmc2_ebi_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_release;
+ 
+-	stm32_fmc2_ebi_save_setup(ebi);
++	ret = stm32_fmc2_ebi_save_setup(ebi);
++	if (ret)
++		goto err_release;
++
+ 	platform_set_drvdata(pdev, ebi);
+ 
+ 	return 0;
+diff --git a/drivers/mmc/core/mmc_test.c b/drivers/mmc/core/mmc_test.c
+index b9b6f000154bb..9ebd5cebd4e13 100644
+--- a/drivers/mmc/core/mmc_test.c
++++ b/drivers/mmc/core/mmc_test.c
+@@ -3125,13 +3125,13 @@ static ssize_t mtf_test_write(struct file *file, const char __user *buf,
+ 	test->buffer = kzalloc(BUFFER_SIZE, GFP_KERNEL);
+ #ifdef CONFIG_HIGHMEM
+ 	test->highmem = alloc_pages(GFP_KERNEL | __GFP_HIGHMEM, BUFFER_ORDER);
++	if (!test->highmem) {
++		count = -ENOMEM;
++		goto free_test_buffer;
++	}
+ #endif
+ 
+-#ifdef CONFIG_HIGHMEM
+-	if (test->buffer && test->highmem) {
+-#else
+ 	if (test->buffer) {
+-#endif
+ 		mutex_lock(&mmc_test_lock);
+ 		mmc_test_run(test, testcase);
+ 		mutex_unlock(&mmc_test_lock);
+@@ -3139,6 +3139,7 @@ static ssize_t mtf_test_write(struct file *file, const char __user *buf,
+ 
+ #ifdef CONFIG_HIGHMEM
+ 	__free_pages(test->highmem, BUFFER_ORDER);
++free_test_buffer:
+ #endif
+ 	kfree(test->buffer);
+ 	kfree(test);
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index a6170f80ba64d..4da525f9c11f0 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -3171,6 +3171,10 @@ int dw_mci_probe(struct dw_mci *host)
+ 	host->biu_clk = devm_clk_get(host->dev, "biu");
+ 	if (IS_ERR(host->biu_clk)) {
+ 		dev_dbg(host->dev, "biu clock not available\n");
++		ret = PTR_ERR(host->biu_clk);
++		if (ret == -EPROBE_DEFER)
++			return ret;
++
+ 	} else {
+ 		ret = clk_prepare_enable(host->biu_clk);
+ 		if (ret) {
+@@ -3182,6 +3186,10 @@ int dw_mci_probe(struct dw_mci *host)
+ 	host->ciu_clk = devm_clk_get(host->dev, "ciu");
+ 	if (IS_ERR(host->ciu_clk)) {
+ 		dev_dbg(host->dev, "ciu clock not available\n");
++		ret = PTR_ERR(host->ciu_clk);
++		if (ret == -EPROBE_DEFER)
++			goto err_clk_biu;
++
+ 		host->bus_hz = host->pdata->bus_hz;
+ 	} else {
+ 		ret = clk_prepare_enable(host->ciu_clk);
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index c07b9bac1a6a0..506b6d1cc27df 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -540,7 +540,6 @@ static void bond_ipsec_del_sa_all(struct bonding *bond)
+ 		} else {
+ 			slave->dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
+ 		}
+-		ipsec->xs->xso.real_dev = NULL;
+ 	}
+ 	spin_unlock_bh(&bond->ipsec_lock);
+ 	rcu_read_unlock();
+@@ -557,34 +556,30 @@ static bool bond_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *xs)
+ 	struct net_device *real_dev;
+ 	struct slave *curr_active;
+ 	struct bonding *bond;
+-	int err;
++	bool ok = false;
+ 
+ 	bond = netdev_priv(bond_dev);
+ 	rcu_read_lock();
+ 	curr_active = rcu_dereference(bond->curr_active_slave);
++	if (!curr_active)
++		goto out;
+ 	real_dev = curr_active->dev;
+ 
+-	if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP) {
+-		err = false;
++	if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP)
+ 		goto out;
+-	}
+ 
+-	if (!xs->xso.real_dev) {
+-		err = false;
++	if (!xs->xso.real_dev)
+ 		goto out;
+-	}
+ 
+ 	if (!real_dev->xfrmdev_ops ||
+ 	    !real_dev->xfrmdev_ops->xdo_dev_offload_ok ||
+-	    netif_is_bond_master(real_dev)) {
+-		err = false;
++	    netif_is_bond_master(real_dev))
+ 		goto out;
+-	}
+ 
+-	err = real_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs);
++	ok = real_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs);
+ out:
+ 	rcu_read_unlock();
+-	return err;
++	return ok;
+ }
+ 
+ static const struct xfrmdev_ops bond_xfrmdev_ops = {
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index fa0bf77870d43..acc6185749945 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -822,7 +822,7 @@ static int bond_option_active_slave_set(struct bonding *bond,
+ 	/* check to see if we are clearing active */
+ 	if (!slave_dev) {
+ 		netdev_dbg(bond->dev, "Clearing current active slave\n");
+-		RCU_INIT_POINTER(bond->curr_active_slave, NULL);
++		bond_change_active_slave(bond, NULL);
+ 		bond_select_active_slave(bond);
+ 	} else {
+ 		struct slave *old_active = rtnl_dereference(bond->curr_active_slave);
+diff --git a/drivers/net/dsa/mv88e6xxx/Makefile b/drivers/net/dsa/mv88e6xxx/Makefile
+index 4b080b448ce74..1f7240e0d6bce 100644
+--- a/drivers/net/dsa/mv88e6xxx/Makefile
++++ b/drivers/net/dsa/mv88e6xxx/Makefile
+@@ -15,3 +15,7 @@ mv88e6xxx-objs += port_hidden.o
+ mv88e6xxx-$(CONFIG_NET_DSA_MV88E6XXX_PTP) += ptp.o
+ mv88e6xxx-objs += serdes.o
+ mv88e6xxx-objs += smi.o
++mv88e6xxx-objs += trace.o
++
++# for tracing framework to find trace.h
++CFLAGS_trace.o := -I$(src)
+diff --git a/drivers/net/dsa/mv88e6xxx/global1_atu.c b/drivers/net/dsa/mv88e6xxx/global1_atu.c
+index bac9a8a68e500..79377ceb66af5 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1_atu.c
++++ b/drivers/net/dsa/mv88e6xxx/global1_atu.c
+@@ -12,6 +12,7 @@
+ 
+ #include "chip.h"
+ #include "global1.h"
++#include "trace.h"
+ 
+ /* Offset 0x01: ATU FID Register */
+ 
+@@ -114,6 +115,19 @@ static int mv88e6xxx_g1_atu_op_wait(struct mv88e6xxx_chip *chip)
+ 	return mv88e6xxx_g1_wait_bit(chip, MV88E6XXX_G1_ATU_OP, bit, 0);
+ }
+ 
++static int mv88e6xxx_g1_read_atu_violation(struct mv88e6xxx_chip *chip)
++{
++	int err;
++
++	err = mv88e6xxx_g1_write(chip, MV88E6XXX_G1_ATU_OP,
++				 MV88E6XXX_G1_ATU_OP_BUSY |
++				 MV88E6XXX_G1_ATU_OP_GET_CLR_VIOLATION);
++	if (err)
++		return err;
++
++	return mv88e6xxx_g1_atu_op_wait(chip);
++}
++
+ static int mv88e6xxx_g1_atu_op(struct mv88e6xxx_chip *chip, u16 fid, u16 op)
+ {
+ 	u16 val;
+@@ -159,6 +173,41 @@ int mv88e6xxx_g1_atu_get_next(struct mv88e6xxx_chip *chip, u16 fid)
+ 	return mv88e6xxx_g1_atu_op(chip, fid, MV88E6XXX_G1_ATU_OP_GET_NEXT_DB);
+ }
+ 
++static int mv88e6xxx_g1_atu_fid_read(struct mv88e6xxx_chip *chip, u16 *fid)
++{
++	u16 val = 0, upper = 0, op = 0;
++	int err = -EOPNOTSUPP;
++
++	if (mv88e6xxx_num_databases(chip) > 256) {
++		err = mv88e6xxx_g1_read(chip, MV88E6352_G1_ATU_FID, &val);
++		val &= 0xfff;
++		if (err)
++			return err;
++	} else {
++		err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_ATU_OP, &op);
++		if (err)
++			return err;
++		if (mv88e6xxx_num_databases(chip) > 64) {
++			/* ATU DBNum[7:4] are located in ATU Control 15:12 */
++			err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_ATU_CTL,
++						&upper);
++			if (err)
++				return err;
++
++			upper = (upper >> 8) & 0x00f0;
++		} else if (mv88e6xxx_num_databases(chip) > 16) {
++			/* ATU DBNum[5:4] are located in ATU Operation 9:8 */
++			upper = (op >> 4) & 0x30;
++		}
++
++		/* ATU DBNum[3:0] are located in ATU Operation 3:0 */
++		val = (op & 0xf) | upper;
++	}
++	*fid = val;
++
++	return err;
++}
++
+ /* Offset 0x0C: ATU Data Register */
+ 
+ static int mv88e6xxx_g1_atu_data_read(struct mv88e6xxx_chip *chip,
+@@ -353,14 +402,12 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
+ {
+ 	struct mv88e6xxx_chip *chip = dev_id;
+ 	struct mv88e6xxx_atu_entry entry;
+-	int spid;
+-	int err;
+-	u16 val;
++	int err, spid;
++	u16 val, fid;
+ 
+ 	mv88e6xxx_reg_lock(chip);
+ 
+-	err = mv88e6xxx_g1_atu_op(chip, 0,
+-				  MV88E6XXX_G1_ATU_OP_GET_CLR_VIOLATION);
++	err = mv88e6xxx_g1_read_atu_violation(chip);
+ 	if (err)
+ 		goto out;
+ 
+@@ -368,6 +415,10 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
+ 	if (err)
+ 		goto out;
+ 
++	err = mv88e6xxx_g1_atu_fid_read(chip, &fid);
++	if (err)
++		goto out;
++
+ 	err = mv88e6xxx_g1_atu_data_read(chip, &entry);
+ 	if (err)
+ 		goto out;
+@@ -385,24 +436,25 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
+ 	}
+ 
+ 	if (val & MV88E6XXX_G1_ATU_OP_MEMBER_VIOLATION) {
+-		dev_err_ratelimited(chip->dev,
+-				    "ATU member violation for %pM portvec %x spid %d\n",
+-				    entry.mac, entry.portvec, spid);
++		trace_mv88e6xxx_atu_member_violation(chip->dev, spid,
++						     entry.portvec, entry.mac,
++						     fid);
+ 		chip->ports[spid].atu_member_violation++;
+ 	}
+ 
+ 	if (val & MV88E6XXX_G1_ATU_OP_MISS_VIOLATION) {
+-		dev_err_ratelimited(chip->dev,
+-				    "ATU miss violation for %pM portvec %x spid %d\n",
+-				    entry.mac, entry.portvec, spid);
++		trace_mv88e6xxx_atu_miss_violation(chip->dev, spid,
++						   entry.portvec, entry.mac,
++						   fid);
+ 		chip->ports[spid].atu_miss_violation++;
+ 	}
+ 
+ 	if (val & MV88E6XXX_G1_ATU_OP_FULL_VIOLATION) {
+-		dev_err_ratelimited(chip->dev,
+-				    "ATU full violation for %pM portvec %x spid %d\n",
+-				    entry.mac, entry.portvec, spid);
+-		chip->ports[spid].atu_full_violation++;
++		trace_mv88e6xxx_atu_full_violation(chip->dev, spid,
++						   entry.portvec, entry.mac,
++						   fid);
++		if (spid < ARRAY_SIZE(chip->ports))
++			chip->ports[spid].atu_full_violation++;
+ 	}
+ 	mv88e6xxx_reg_unlock(chip);
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/trace.c b/drivers/net/dsa/mv88e6xxx/trace.c
+new file mode 100644
+index 0000000000000..7833cb50ca5d7
+--- /dev/null
++++ b/drivers/net/dsa/mv88e6xxx/trace.c
+@@ -0,0 +1,6 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/* Copyright 2022 NXP
++ */
++
++#define CREATE_TRACE_POINTS
++#include "trace.h"
+diff --git a/drivers/net/dsa/mv88e6xxx/trace.h b/drivers/net/dsa/mv88e6xxx/trace.h
+new file mode 100644
+index 0000000000000..d9ab5c8dee55d
+--- /dev/null
++++ b/drivers/net/dsa/mv88e6xxx/trace.h
+@@ -0,0 +1,66 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++/* Copyright 2022 NXP
++ */
++
++#undef TRACE_SYSTEM
++#define TRACE_SYSTEM	mv88e6xxx
++
++#if !defined(_MV88E6XXX_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
++#define _MV88E6XXX_TRACE_H
++
++#include <linux/device.h>
++#include <linux/if_ether.h>
++#include <linux/tracepoint.h>
++
++DECLARE_EVENT_CLASS(mv88e6xxx_atu_violation,
++
++	TP_PROTO(const struct device *dev, int spid, u16 portvec,
++		 const unsigned char *addr, u16 fid),
++
++	TP_ARGS(dev, spid, portvec, addr, fid),
++
++	TP_STRUCT__entry(
++		__string(name, dev_name(dev))
++		__field(int, spid)
++		__field(u16, portvec)
++		__array(unsigned char, addr, ETH_ALEN)
++		__field(u16, fid)
++	),
++
++	TP_fast_assign(
++		__assign_str(name, dev_name(dev));
++		__entry->spid = spid;
++		__entry->portvec = portvec;
++		memcpy(__entry->addr, addr, ETH_ALEN);
++		__entry->fid = fid;
++	),
++
++	TP_printk("dev %s spid %d portvec 0x%x addr %pM fid %u",
++		  __get_str(name), __entry->spid, __entry->portvec,
++		  __entry->addr, __entry->fid)
++);
++
++DEFINE_EVENT(mv88e6xxx_atu_violation, mv88e6xxx_atu_member_violation,
++	     TP_PROTO(const struct device *dev, int spid, u16 portvec,
++		      const unsigned char *addr, u16 fid),
++	     TP_ARGS(dev, spid, portvec, addr, fid));
++
++DEFINE_EVENT(mv88e6xxx_atu_violation, mv88e6xxx_atu_miss_violation,
++	     TP_PROTO(const struct device *dev, int spid, u16 portvec,
++		      const unsigned char *addr, u16 fid),
++	     TP_ARGS(dev, spid, portvec, addr, fid));
++
++DEFINE_EVENT(mv88e6xxx_atu_violation, mv88e6xxx_atu_full_violation,
++	     TP_PROTO(const struct device *dev, int spid, u16 portvec,
++		      const unsigned char *addr, u16 fid),
++	     TP_ARGS(dev, spid, portvec, addr, fid));
++
++#endif /* _MV88E6XXX_TRACE_H */
++
++/* We don't want to use include/trace/events */
++#undef TRACE_INCLUDE_PATH
++#define TRACE_INCLUDE_PATH .
++#undef TRACE_INCLUDE_FILE
++#define TRACE_INCLUDE_FILE	trace
++/* This part must be outside protection */
++#include <trace/define_trace.h>
+diff --git a/drivers/net/dsa/vitesse-vsc73xx-core.c b/drivers/net/dsa/vitesse-vsc73xx-core.c
+index 018988b95035e..8a21902212e04 100644
+--- a/drivers/net/dsa/vitesse-vsc73xx-core.c
++++ b/drivers/net/dsa/vitesse-vsc73xx-core.c
+@@ -17,6 +17,7 @@
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/device.h>
++#include <linux/iopoll.h>
+ #include <linux/of.h>
+ #include <linux/of_device.h>
+ #include <linux/of_mdio.h>
+@@ -38,6 +39,10 @@
+ #define VSC73XX_BLOCK_ARBITER	0x5 /* Only subblock 0 */
+ #define VSC73XX_BLOCK_SYSTEM	0x7 /* Only subblock 0 */
+ 
++/* MII Block subblock */
++#define VSC73XX_BLOCK_MII_INTERNAL	0x0 /* Internal MDIO subblock */
++#define VSC73XX_BLOCK_MII_EXTERNAL	0x1 /* External MDIO subblock */
++
+ #define CPU_PORT	6 /* CPU port */
+ 
+ /* MAC Block registers */
+@@ -196,6 +201,8 @@
+ #define VSC73XX_MII_CMD		0x1
+ #define VSC73XX_MII_DATA	0x2
+ 
++#define VSC73XX_MII_STAT_BUSY	BIT(3)
++
+ /* Arbiter block 5 registers */
+ #define VSC73XX_ARBEMPTY		0x0c
+ #define VSC73XX_ARBDISC			0x0e
+@@ -269,6 +276,10 @@
+ #define IS_7398(a) ((a)->chipid == VSC73XX_CHIPID_ID_7398)
+ #define IS_739X(a) (IS_7395(a) || IS_7398(a))
+ 
++#define VSC73XX_POLL_SLEEP_US		1000
++#define VSC73XX_MDIO_POLL_SLEEP_US	5
++#define VSC73XX_POLL_TIMEOUT_US		10000
++
+ struct vsc73xx_counter {
+ 	u8 counter;
+ 	const char *name;
+@@ -484,6 +495,22 @@ static int vsc73xx_detect(struct vsc73xx *vsc)
+ 	return 0;
+ }
+ 
++static int vsc73xx_mdio_busy_check(struct vsc73xx *vsc)
++{
++	int ret, err;
++	u32 val;
++
++	ret = read_poll_timeout(vsc73xx_read, err,
++				err < 0 || !(val & VSC73XX_MII_STAT_BUSY),
++				VSC73XX_MDIO_POLL_SLEEP_US,
++				VSC73XX_POLL_TIMEOUT_US, false, vsc,
++				VSC73XX_BLOCK_MII, VSC73XX_BLOCK_MII_INTERNAL,
++				VSC73XX_MII_STAT, &val);
++	if (ret)
++		return ret;
++	return err;
++}
++
+ static int vsc73xx_phy_read(struct dsa_switch *ds, int phy, int regnum)
+ {
+ 	struct vsc73xx *vsc = ds->priv;
+@@ -491,12 +518,20 @@ static int vsc73xx_phy_read(struct dsa_switch *ds, int phy, int regnum)
+ 	u32 val;
+ 	int ret;
+ 
++	ret = vsc73xx_mdio_busy_check(vsc);
++	if (ret)
++		return ret;
++
+ 	/* Setting bit 26 means "read" */
+ 	cmd = BIT(26) | (phy << 21) | (regnum << 16);
+ 	ret = vsc73xx_write(vsc, VSC73XX_BLOCK_MII, 0, 1, cmd);
+ 	if (ret)
+ 		return ret;
+-	msleep(2);
++
++	ret = vsc73xx_mdio_busy_check(vsc);
++	if (ret)
++		return ret;
++
+ 	ret = vsc73xx_read(vsc, VSC73XX_BLOCK_MII, 0, 2, &val);
+ 	if (ret)
+ 		return ret;
+@@ -520,6 +555,10 @@ static int vsc73xx_phy_write(struct dsa_switch *ds, int phy, int regnum,
+ 	u32 cmd;
+ 	int ret;
+ 
++	ret = vsc73xx_mdio_busy_check(vsc);
++	if (ret)
++		return ret;
++
+ 	/* It was found through tedious experiments that this router
+ 	 * chip really hates to have it's PHYs reset. They
+ 	 * never recover if that happens: autonegotiation stops
+@@ -531,7 +570,7 @@ static int vsc73xx_phy_write(struct dsa_switch *ds, int phy, int regnum,
+ 		return 0;
+ 	}
+ 
+-	cmd = (phy << 21) | (regnum << 16);
++	cmd = (phy << 21) | (regnum << 16) | val;
+ 	ret = vsc73xx_write(vsc, VSC73XX_BLOCK_MII, 0, 1, cmd);
+ 	if (ret)
+ 		return ret;
+@@ -780,7 +819,7 @@ static void vsc73xx_adjust_link(struct dsa_switch *ds, int port,
+ 	 * after a PHY or the CPU port comes up or down.
+ 	 */
+ 	if (!phydev->link) {
+-		int maxloop = 10;
++		int ret, err;
+ 
+ 		dev_dbg(vsc->dev, "port %d: went down\n",
+ 			port);
+@@ -795,19 +834,17 @@ static void vsc73xx_adjust_link(struct dsa_switch *ds, int port,
+ 				    VSC73XX_ARBDISC, BIT(port), BIT(port));
+ 
+ 		/* Wait until queue is empty */
+-		vsc73xx_read(vsc, VSC73XX_BLOCK_ARBITER, 0,
+-			     VSC73XX_ARBEMPTY, &val);
+-		while (!(val & BIT(port))) {
+-			msleep(1);
+-			vsc73xx_read(vsc, VSC73XX_BLOCK_ARBITER, 0,
+-				     VSC73XX_ARBEMPTY, &val);
+-			if (--maxloop == 0) {
+-				dev_err(vsc->dev,
+-					"timeout waiting for block arbiter\n");
+-				/* Continue anyway */
+-				break;
+-			}
+-		}
++		ret = read_poll_timeout(vsc73xx_read, err,
++					err < 0 || (val & BIT(port)),
++					VSC73XX_POLL_SLEEP_US,
++					VSC73XX_POLL_TIMEOUT_US, false,
++					vsc, VSC73XX_BLOCK_ARBITER, 0,
++					VSC73XX_ARBEMPTY, &val);
++		if (ret)
++			dev_err(vsc->dev,
++				"timeout waiting for block arbiter\n");
++		else if (err < 0)
++			dev_err(vsc->dev, "error reading arbiter\n");
+ 
+ 		/* Put this port into reset */
+ 		vsc73xx_write(vsc, VSC73XX_BLOCK_MAC, port, VSC73XX_MAC_CFG,
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+index 5fbc087268dbe..92a9f80554b39 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+@@ -1244,7 +1244,8 @@ static u64 hash_filter_ntuple(struct ch_filter_specification *fs,
+ 	 * in the Compressed Filter Tuple.
+ 	 */
+ 	if (tp->vlan_shift >= 0 && fs->mask.ivlan)
+-		ntuple |= (FT_VLAN_VLD_F | fs->val.ivlan) << tp->vlan_shift;
++		ntuple |= (u64)(FT_VLAN_VLD_F |
++				fs->val.ivlan) << tp->vlan_shift;
+ 
+ 	if (tp->port_shift >= 0 && fs->mask.iport)
+ 		ntuple |= (u64)fs->val.iport << tp->port_shift;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index f8275534205a7..9ff5179b4d879 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -4536,6 +4536,9 @@ static int hns3_reset_notify_uninit_enet(struct hnae3_handle *handle)
+ 	struct hns3_nic_priv *priv = netdev_priv(netdev);
+ 	int ret;
+ 
++	if (!test_bit(HNS3_NIC_STATE_DOWN, &priv->state))
++		hns3_nic_net_stop(netdev);
++
+ 	if (!test_and_clear_bit(HNS3_NIC_STATE_INITED, &priv->state)) {
+ 		netdev_warn(netdev, "already uninitialized\n");
+ 		return 0;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 5dbee850fef53..885793707a5f1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -10051,8 +10051,8 @@ static void hclge_flr_done(struct hnae3_ae_dev *ae_dev)
+ 		dev_err(&hdev->pdev->dev, "fail to rebuild, ret=%d\n", ret);
+ 
+ 	hdev->reset_type = HNAE3_NONE_RESET;
+-	clear_bit(HCLGE_STATE_RST_HANDLING, &hdev->state);
+-	up(&hdev->reset_sem);
++	if (test_and_clear_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
++		up(&hdev->reset_sem);
+ }
+ 
+ static void hclge_clear_resetting_state(struct hclge_dev *hdev)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index 51b7b46f2f309..9969714d1133d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -715,10 +715,11 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
+ 		req = (struct hclge_mbx_vf_to_pf_cmd *)desc->data;
+ 
+ 		flag = le16_to_cpu(crq->desc[crq->next_to_use].flag);
+-		if (unlikely(!hnae3_get_bit(flag, HCLGE_CMDQ_RX_OUTVLD_B))) {
++		if (unlikely(!hnae3_get_bit(flag, HCLGE_CMDQ_RX_OUTVLD_B) ||
++			     req->mbx_src_vfid > hdev->num_req_vfs)) {
+ 			dev_warn(&hdev->pdev->dev,
+-				 "dropped invalid mailbox message, code = %u\n",
+-				 req->msg.code);
++				 "dropped invalid mailbox message, code = %u, vfid = %u\n",
++				 req->msg.code, req->mbx_src_vfid);
+ 
+ 			/* dropping/not processing this invalid message */
+ 			crq->desc[crq->next_to_use].flag = 0;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index be41117ec1465..755935f9efc81 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2093,8 +2093,8 @@ static void hclgevf_flr_done(struct hnae3_ae_dev *ae_dev)
+ 			 ret);
+ 
+ 	hdev->reset_type = HNAE3_NONE_RESET;
+-	clear_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state);
+-	up(&hdev->reset_sem);
++	if (test_and_clear_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state))
++		up(&hdev->reset_sem);
+ }
+ 
+ static u32 hclgevf_get_fw_version(struct hnae3_handle *handle)
+diff --git a/drivers/net/ethernet/i825xx/sun3_82586.c b/drivers/net/ethernet/i825xx/sun3_82586.c
+index 4564ee02c95f6..83a6114afbf90 100644
+--- a/drivers/net/ethernet/i825xx/sun3_82586.c
++++ b/drivers/net/ethernet/i825xx/sun3_82586.c
+@@ -990,7 +990,7 @@ static void sun3_82586_timeout(struct net_device *dev, unsigned int txqueue)
+ 	{
+ #ifdef DEBUG
+ 		printk("%s: xmitter timed out, try to restart! stat: %02x\n",dev->name,p->scb->cus);
+-		printk("%s: command-stats: %04x %04x\n",dev->name,swab16(p->xmit_cmds[0]->cmd_status),swab16(p->xmit_cmds[1]->cmd_status));
++		printk("%s: command-stats: %04x\n", dev->name, swab16(p->xmit_cmds[0]->cmd_status));
+ 		printk("%s: check, whether you set the right interrupt number!\n",dev->name);
+ #endif
+ 		sun3_82586_close(dev);
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 442a9bcbf60a7..7734264d570db 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -786,7 +786,7 @@ ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf, int rx_buf_pgcnt)
+ 		return false;
+ #else
+ #define ICE_LAST_OFFSET \
+-	(SKB_WITH_OVERHEAD(PAGE_SIZE) - ICE_RXBUF_2048)
++	(SKB_WITH_OVERHEAD(PAGE_SIZE) - ICE_RXBUF_3072)
+ 	if (rx_buf->page_offset > ICE_LAST_OFFSET)
+ 		return false;
+ #endif /* PAGE_SIZE < 8192) */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
+index b416a8ee2eed2..26631bca5ab85 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
+@@ -679,7 +679,7 @@ mlx5e_ethtool_flow_replace(struct mlx5e_priv *priv,
+ 	if (num_tuples <= 0) {
+ 		netdev_warn(priv->netdev, "%s: flow is not valid %d\n",
+ 			    __func__, num_tuples);
+-		return num_tuples;
++		return num_tuples < 0 ? num_tuples : -EINVAL;
+ 	}
+ 
+ 	eth_ft = get_flow_table(priv, fs, num_tuples);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h b/drivers/net/ethernet/xilinx/xilinx_axienet.h
+index 7326ad4d5e1c7..071822028ea5e 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
+@@ -159,16 +159,17 @@
+ #define XAE_RCW1_OFFSET		0x00000404 /* Rx Configuration Word 1 */
+ #define XAE_TC_OFFSET		0x00000408 /* Tx Configuration */
+ #define XAE_FCC_OFFSET		0x0000040C /* Flow Control Configuration */
+-#define XAE_EMMC_OFFSET		0x00000410 /* EMAC mode configuration */
+-#define XAE_PHYC_OFFSET		0x00000414 /* RGMII/SGMII configuration */
++#define XAE_EMMC_OFFSET		0x00000410 /* MAC speed configuration */
++#define XAE_PHYC_OFFSET		0x00000414 /* RX Max Frame Configuration */
+ #define XAE_ID_OFFSET		0x000004F8 /* Identification register */
+-#define XAE_MDIO_MC_OFFSET	0x00000500 /* MII Management Config */
+-#define XAE_MDIO_MCR_OFFSET	0x00000504 /* MII Management Control */
+-#define XAE_MDIO_MWD_OFFSET	0x00000508 /* MII Management Write Data */
+-#define XAE_MDIO_MRD_OFFSET	0x0000050C /* MII Management Read Data */
++#define XAE_MDIO_MC_OFFSET	0x00000500 /* MDIO Setup */
++#define XAE_MDIO_MCR_OFFSET	0x00000504 /* MDIO Control */
++#define XAE_MDIO_MWD_OFFSET	0x00000508 /* MDIO Write Data */
++#define XAE_MDIO_MRD_OFFSET	0x0000050C /* MDIO Read Data */
+ #define XAE_UAW0_OFFSET		0x00000700 /* Unicast address word 0 */
+ #define XAE_UAW1_OFFSET		0x00000704 /* Unicast address word 1 */
+-#define XAE_FMI_OFFSET		0x00000708 /* Filter Mask Index */
++#define XAE_FMI_OFFSET		0x00000708 /* Frame Filter Control */
++#define XAE_FFE_OFFSET		0x0000070C /* Frame Filter Enable */
+ #define XAE_AF0_OFFSET		0x00000710 /* Address Filter 0 */
+ #define XAE_AF1_OFFSET		0x00000714 /* Address Filter 1 */
+ 
+@@ -307,7 +308,7 @@
+  */
+ #define XAE_UAW1_UNICASTADDR_MASK	0x0000FFFF
+ 
+-/* Bit masks for Axi Ethernet FMI register */
++/* Bit masks for Axi Ethernet FMC register */
+ #define XAE_FMI_PM_MASK			0x80000000 /* Promis. mode enable */
+ #define XAE_FMI_IND_MASK		0x00000003 /* Index Mask */
+ 
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 2a5a3f8761c30..3253ace2b6014 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -409,7 +409,7 @@ static int netdev_set_mac_address(struct net_device *ndev, void *p)
+  */
+ static void axienet_set_multicast_list(struct net_device *ndev)
+ {
+-	int i;
++	int i = 0;
+ 	u32 reg, af0reg, af1reg;
+ 	struct axienet_local *lp = netdev_priv(ndev);
+ 
+@@ -427,7 +427,10 @@ static void axienet_set_multicast_list(struct net_device *ndev)
+ 	} else if (!netdev_mc_empty(ndev)) {
+ 		struct netdev_hw_addr *ha;
+ 
+-		i = 0;
++		reg = axienet_ior(lp, XAE_FMI_OFFSET);
++		reg &= ~XAE_FMI_PM_MASK;
++		axienet_iow(lp, XAE_FMI_OFFSET, reg);
++
+ 		netdev_for_each_mc_addr(ha, ndev) {
+ 			if (i >= XAE_MULTICAST_CAM_TABLE_NUM)
+ 				break;
+@@ -446,6 +449,7 @@ static void axienet_set_multicast_list(struct net_device *ndev)
+ 			axienet_iow(lp, XAE_FMI_OFFSET, reg);
+ 			axienet_iow(lp, XAE_AF0_OFFSET, af0reg);
+ 			axienet_iow(lp, XAE_AF1_OFFSET, af1reg);
++			axienet_iow(lp, XAE_FFE_OFFSET, 1);
+ 			i++;
+ 		}
+ 	} else {
+@@ -453,18 +457,15 @@ static void axienet_set_multicast_list(struct net_device *ndev)
+ 		reg &= ~XAE_FMI_PM_MASK;
+ 
+ 		axienet_iow(lp, XAE_FMI_OFFSET, reg);
+-
+-		for (i = 0; i < XAE_MULTICAST_CAM_TABLE_NUM; i++) {
+-			reg = axienet_ior(lp, XAE_FMI_OFFSET) & 0xFFFFFF00;
+-			reg |= i;
+-
+-			axienet_iow(lp, XAE_FMI_OFFSET, reg);
+-			axienet_iow(lp, XAE_AF0_OFFSET, 0);
+-			axienet_iow(lp, XAE_AF1_OFFSET, 0);
+-		}
+-
+ 		dev_info(&ndev->dev, "Promiscuous mode disabled.\n");
+ 	}
++
++	for (; i < XAE_MULTICAST_CAM_TABLE_NUM; i++) {
++		reg = axienet_ior(lp, XAE_FMI_OFFSET) & 0xFFFFFF00;
++		reg |= i;
++		axienet_iow(lp, XAE_FMI_OFFSET, reg);
++		axienet_iow(lp, XAE_FFE_OFFSET, 0);
++	}
+ }
+ 
+ /**
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index c8246363d3832..24cb7b97e4fcc 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -567,6 +567,9 @@ static netdev_tx_t gtp_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	if (skb_cow_head(skb, dev->needed_headroom))
+ 		goto tx_err;
+ 
++	if (!pskb_inet_may_pull(skb))
++		goto tx_err;
++
+ 	skb_reset_inner_headers(skb);
+ 
+ 	/* PDP context lookups in gtp_build_skb_*() need rcu read-side lock. */
+@@ -798,7 +801,7 @@ static struct sock *gtp_encap_enable_socket(int fd, int type,
+ 	sock = sockfd_lookup(fd, &err);
+ 	if (!sock) {
+ 		pr_debug("gtp socket fd=%d not found\n", fd);
+-		return NULL;
++		return ERR_PTR(err);
+ 	}
+ 
+ 	sk = sock->sk;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index a9df48c75155b..a52af491eed58 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -2675,7 +2675,7 @@ int iwl_mvm_scan_stop(struct iwl_mvm *mvm, int type, bool notify)
+ 	if (!(mvm->scan_status & type))
+ 		return 0;
+ 
+-	if (iwl_mvm_is_radio_killed(mvm)) {
++	if (!test_bit(STATUS_DEVICE_ENABLED, &mvm->trans->status)) {
+ 		ret = 0;
+ 		goto out;
+ 	}
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 03ba8ed995bf2..9c90a5bd3a81e 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -4314,11 +4314,27 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
+ 	if (ISSUPP_ADHOC_ENABLED(adapter->fw_cap_info))
+ 		wiphy->interface_modes |= BIT(NL80211_IFTYPE_ADHOC);
+ 
+-	wiphy->bands[NL80211_BAND_2GHZ] = &mwifiex_band_2ghz;
+-	if (adapter->config_bands & BAND_A)
+-		wiphy->bands[NL80211_BAND_5GHZ] = &mwifiex_band_5ghz;
+-	else
++	wiphy->bands[NL80211_BAND_2GHZ] = devm_kmemdup(adapter->dev,
++						       &mwifiex_band_2ghz,
++						       sizeof(mwifiex_band_2ghz),
++						       GFP_KERNEL);
++	if (!wiphy->bands[NL80211_BAND_2GHZ]) {
++		ret = -ENOMEM;
++		goto err;
++	}
++
++	if (adapter->config_bands & BAND_A) {
++		wiphy->bands[NL80211_BAND_5GHZ] = devm_kmemdup(adapter->dev,
++							       &mwifiex_band_5ghz,
++							       sizeof(mwifiex_band_5ghz),
++							       GFP_KERNEL);
++		if (!wiphy->bands[NL80211_BAND_5GHZ]) {
++			ret = -ENOMEM;
++			goto err;
++		}
++	} else {
+ 		wiphy->bands[NL80211_BAND_5GHZ] = NULL;
++	}
+ 
+ 	if (adapter->drcs_enabled && ISSUPP_DRCS_ENABLED(adapter->fw_cap_info))
+ 		wiphy->iface_combinations = &mwifiex_iface_comb_ap_sta_drcs;
+@@ -4411,8 +4427,7 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
+ 	if (ret < 0) {
+ 		mwifiex_dbg(adapter, ERROR,
+ 			    "%s: wiphy_register failed: %d\n", __func__, ret);
+-		wiphy_free(wiphy);
+-		return ret;
++		goto err;
+ 	}
+ 
+ 	if (!adapter->regd) {
+@@ -4454,4 +4469,9 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
+ 
+ 	adapter->wiphy = wiphy;
+ 	return ret;
++
++err:
++	wiphy_free(wiphy);
++
++	return ret;
+ }
+diff --git a/drivers/net/wireless/st/cw1200/txrx.c b/drivers/net/wireless/st/cw1200/txrx.c
+index 400dd585916b5..7ef0886503578 100644
+--- a/drivers/net/wireless/st/cw1200/txrx.c
++++ b/drivers/net/wireless/st/cw1200/txrx.c
+@@ -1170,7 +1170,7 @@ void cw1200_rx_cb(struct cw1200_common *priv,
+ 		size_t ies_len = skb->len - (ies - (u8 *)(skb->data));
+ 
+ 		tim_ie = cfg80211_find_ie(WLAN_EID_TIM, ies, ies_len);
+-		if (tim_ie) {
++		if (tim_ie && tim_ie[1] >= sizeof(struct ieee80211_tim_ie)) {
+ 			struct ieee80211_tim_ie *tim =
+ 				(struct ieee80211_tim_ie *)&tim_ie[2];
+ 
+diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
+index 87e1296c68381..4de5205d9d61b 100644
+--- a/drivers/nfc/pn533/pn533.c
++++ b/drivers/nfc/pn533/pn533.c
+@@ -1751,6 +1751,11 @@ static int pn533_start_poll(struct nfc_dev *nfc_dev,
+ 	}
+ 
+ 	pn533_poll_create_mod_list(dev, im_protocols, tm_protocols);
++	if (!dev->poll_mod_count) {
++		nfc_err(dev->dev,
++			"Poll mod list is empty\n");
++		return -EINVAL;
++	}
+ 
+ 	/* Do not always start polling from the same modulation */
+ 	get_random_bytes(&rand_mod, sizeof(rand_mod));
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 6d5552f2f184a..944e8a2766630 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -472,12 +472,8 @@ nvmet_rdma_alloc_rsps(struct nvmet_rdma_queue *queue)
+ 	return 0;
+ 
+ out_free:
+-	while (--i >= 0) {
+-		struct nvmet_rdma_rsp *rsp = &queue->rsps[i];
+-
+-		list_del(&rsp->free_list);
+-		nvmet_rdma_free_rsp(ndev, rsp);
+-	}
++	while (--i >= 0)
++		nvmet_rdma_free_rsp(ndev, &queue->rsps[i]);
+ 	kfree(queue->rsps);
+ out:
+ 	return ret;
+@@ -488,12 +484,8 @@ static void nvmet_rdma_free_rsps(struct nvmet_rdma_queue *queue)
+ 	struct nvmet_rdma_device *ndev = queue->dev;
+ 	int i, nr_rsps = queue->recv_queue_size * 2;
+ 
+-	for (i = 0; i < nr_rsps; i++) {
+-		struct nvmet_rdma_rsp *rsp = &queue->rsps[i];
+-
+-		list_del(&rsp->free_list);
+-		nvmet_rdma_free_rsp(ndev, rsp);
+-	}
++	for (i = 0; i < nr_rsps; i++)
++		nvmet_rdma_free_rsp(ndev, &queue->rsps[i]);
+ 	kfree(queue->rsps);
+ }
+ 
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index d70a2fa4ba45f..e493fc709065a 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -846,6 +846,7 @@ static int nvmet_tcp_handle_icreq(struct nvmet_tcp_queue *queue)
+ 		pr_err("bad nvme-tcp pdu length (%d)\n",
+ 			le32_to_cpu(icreq->hdr.plen));
+ 		nvmet_tcp_fatal_error(queue);
++		return -EPROTO;
+ 	}
+ 
+ 	if (icreq->pfv != NVME_TCP_PFV_1_0) {
+diff --git a/drivers/nvme/target/trace.c b/drivers/nvme/target/trace.c
+index 1373a3c67962a..a3564e12927b6 100644
+--- a/drivers/nvme/target/trace.c
++++ b/drivers/nvme/target/trace.c
+@@ -195,7 +195,7 @@ const char *nvmet_trace_disk_name(struct trace_seq *p, char *name)
+ 	return ret;
+ }
+ 
+-const char *nvmet_trace_ctrl_name(struct trace_seq *p, struct nvmet_ctrl *ctrl)
++const char *nvmet_trace_ctrl_id(struct trace_seq *p, u16 ctrl_id)
+ {
+ 	const char *ret = trace_seq_buffer_ptr(p);
+ 
+@@ -208,8 +208,8 @@ const char *nvmet_trace_ctrl_name(struct trace_seq *p, struct nvmet_ctrl *ctrl)
+ 	 * If we can know the extra data of the connect command in this stage,
+ 	 * we can update this print statement later.
+ 	 */
+-	if (ctrl)
+-		trace_seq_printf(p, "%d", ctrl->cntlid);
++	if (ctrl_id)
++		trace_seq_printf(p, "%d", ctrl_id);
+ 	else
+ 		trace_seq_printf(p, "_");
+ 	trace_seq_putc(p, 0);
+diff --git a/drivers/nvme/target/trace.h b/drivers/nvme/target/trace.h
+index c14e3249a14dc..13fb8265f9520 100644
+--- a/drivers/nvme/target/trace.h
++++ b/drivers/nvme/target/trace.h
+@@ -32,18 +32,24 @@ const char *nvmet_trace_parse_fabrics_cmd(struct trace_seq *p, u8 fctype,
+ 	 nvmet_trace_parse_nvm_cmd(p, opcode, cdw10) :			\
+ 	 nvmet_trace_parse_admin_cmd(p, opcode, cdw10)))
+ 
+-const char *nvmet_trace_ctrl_name(struct trace_seq *p, struct nvmet_ctrl *ctrl);
+-#define __print_ctrl_name(ctrl)				\
+-	nvmet_trace_ctrl_name(p, ctrl)
++const char *nvmet_trace_ctrl_id(struct trace_seq *p, u16 ctrl_id);
++#define __print_ctrl_id(ctrl_id)			\
++	nvmet_trace_ctrl_id(p, ctrl_id)
+ 
+ const char *nvmet_trace_disk_name(struct trace_seq *p, char *name);
+ #define __print_disk_name(name)				\
+ 	nvmet_trace_disk_name(p, name)
+ 
+ #ifndef TRACE_HEADER_MULTI_READ
+-static inline struct nvmet_ctrl *nvmet_req_to_ctrl(struct nvmet_req *req)
++static inline u16 nvmet_req_to_ctrl_id(struct nvmet_req *req)
+ {
+-	return req->sq->ctrl;
++	/*
++	 * The queue and controller pointers are not valid until an association
++	 * has been established.
++	 */
++	if (!req->sq || !req->sq->ctrl)
++		return 0;
++	return req->sq->ctrl->cntlid;
+ }
+ 
+ static inline void __assign_req_name(char *name, struct nvmet_req *req)
+@@ -60,7 +66,7 @@ TRACE_EVENT(nvmet_req_init,
+ 	TP_ARGS(req, cmd),
+ 	TP_STRUCT__entry(
+ 		__field(struct nvme_command *, cmd)
+-		__field(struct nvmet_ctrl *, ctrl)
++		__field(u16, ctrl_id)
+ 		__array(char, disk, DISK_NAME_LEN)
+ 		__field(int, qid)
+ 		__field(u16, cid)
+@@ -73,7 +79,7 @@ TRACE_EVENT(nvmet_req_init,
+ 	),
+ 	TP_fast_assign(
+ 		__entry->cmd = cmd;
+-		__entry->ctrl = nvmet_req_to_ctrl(req);
++		__entry->ctrl_id = nvmet_req_to_ctrl_id(req);
+ 		__assign_req_name(__entry->disk, req);
+ 		__entry->qid = req->sq->qid;
+ 		__entry->cid = cmd->common.command_id;
+@@ -87,7 +93,7 @@ TRACE_EVENT(nvmet_req_init,
+ 	),
+ 	TP_printk("nvmet%s: %sqid=%d, cmdid=%u, nsid=%u, flags=%#x, "
+ 		  "meta=%#llx, cmd=(%s, %s)",
+-		__print_ctrl_name(__entry->ctrl),
++		__print_ctrl_id(__entry->ctrl_id),
+ 		__print_disk_name(__entry->disk),
+ 		__entry->qid, __entry->cid, __entry->nsid,
+ 		__entry->flags, __entry->metadata,
+@@ -101,7 +107,7 @@ TRACE_EVENT(nvmet_req_complete,
+ 	TP_PROTO(struct nvmet_req *req),
+ 	TP_ARGS(req),
+ 	TP_STRUCT__entry(
+-		__field(struct nvmet_ctrl *, ctrl)
++		__field(u16, ctrl_id)
+ 		__array(char, disk, DISK_NAME_LEN)
+ 		__field(int, qid)
+ 		__field(int, cid)
+@@ -109,7 +115,7 @@ TRACE_EVENT(nvmet_req_complete,
+ 		__field(u16, status)
+ 	),
+ 	TP_fast_assign(
+-		__entry->ctrl = nvmet_req_to_ctrl(req);
++		__entry->ctrl_id = nvmet_req_to_ctrl_id(req);
+ 		__entry->qid = req->cq->qid;
+ 		__entry->cid = req->cqe->command_id;
+ 		__entry->result = le64_to_cpu(req->cqe->result.u64);
+@@ -117,7 +123,7 @@ TRACE_EVENT(nvmet_req_complete,
+ 		__assign_req_name(__entry->disk, req);
+ 	),
+ 	TP_printk("nvmet%s: %sqid=%d, cmdid=%u, res=%#llx, status=%#x",
+-		__print_ctrl_name(__entry->ctrl),
++		__print_ctrl_id(__entry->ctrl_id),
+ 		__print_disk_name(__entry->disk),
+ 		__entry->qid, __entry->cid, __entry->result, __entry->status)
+ 
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index e0f22ce219ee8..b85c1ce210f6e 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -3695,7 +3695,7 @@ static struct rockchip_pin_bank rk3328_pin_banks[] = {
+ 	PIN_BANK_IOMUX_FLAGS(0, 32, "gpio0", 0, 0, 0, 0),
+ 	PIN_BANK_IOMUX_FLAGS(1, 32, "gpio1", 0, 0, 0, 0),
+ 	PIN_BANK_IOMUX_FLAGS(2, 32, "gpio2", 0,
+-			     0,
++			     IOMUX_WIDTH_2BIT,
+ 			     IOMUX_WIDTH_3BIT,
+ 			     0),
+ 	PIN_BANK_IOMUX_FLAGS(3, 32, "gpio3",
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index 4860c4dd853f3..5b76594b535c1 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -350,6 +350,8 @@ static int pcs_get_function(struct pinctrl_dev *pctldev, unsigned pin,
+ 		return -ENOTSUPP;
+ 	fselector = setting->func;
+ 	function = pinmux_generic_get_function(pctldev, fselector);
++	if (!function)
++		return -EINVAL;
+ 	*func = function->data;
+ 	if (!(*func)) {
+ 		dev_err(pcs->dev, "%s could not find function%i\n",
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index 81de5c98221a6..0b09ed6ac1331 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -1665,9 +1665,15 @@ static int dasd_ese_needs_format(struct dasd_block *block, struct irb *irb)
+ 	if (!sense)
+ 		return 0;
+ 
+-	return !!(sense[1] & SNS1_NO_REC_FOUND) ||
+-		!!(sense[1] & SNS1_FILE_PROTECTED) ||
+-		scsw_cstat(&irb->scsw) == SCHN_STAT_INCORR_LEN;
++	if (sense[1] & SNS1_NO_REC_FOUND)
++		return 1;
++
++	if ((sense[1] & SNS1_INV_TRACK_FORMAT) &&
++	    scsw_is_tm(&irb->scsw) &&
++	    !(sense[2] & SNS2_ENV_DATA_PRESENT))
++		return 1;
++
++	return 0;
+ }
+ 
+ static int dasd_ese_oos_cond(u8 *sense)
+@@ -1688,7 +1694,7 @@ void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ 	struct dasd_device *device;
+ 	unsigned long now;
+ 	int nrf_suppressed = 0;
+-	int fp_suppressed = 0;
++	int it_suppressed = 0;
+ 	struct request *req;
+ 	u8 *sense = NULL;
+ 	int expires;
+@@ -1743,8 +1749,9 @@ void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ 		 */
+ 		sense = dasd_get_sense(irb);
+ 		if (sense) {
+-			fp_suppressed = (sense[1] & SNS1_FILE_PROTECTED) &&
+-				test_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags);
++			it_suppressed =	(sense[1] & SNS1_INV_TRACK_FORMAT) &&
++				!(sense[2] & SNS2_ENV_DATA_PRESENT) &&
++				test_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags);
+ 			nrf_suppressed = (sense[1] & SNS1_NO_REC_FOUND) &&
+ 				test_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags);
+ 
+@@ -1759,7 +1766,7 @@ void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ 				return;
+ 			}
+ 		}
+-		if (!(fp_suppressed || nrf_suppressed))
++		if (!(it_suppressed || nrf_suppressed))
+ 			device->discipline->dump_sense_dbf(device, irb, "int");
+ 
+ 		if (device->features & DASD_FEATURE_ERPLOG)
+@@ -2513,14 +2520,17 @@ static int _dasd_sleep_on_queue(struct list_head *ccw_queue, int interruptible)
+ 	rc = 0;
+ 	list_for_each_entry_safe(cqr, n, ccw_queue, blocklist) {
+ 		/*
+-		 * In some cases the 'File Protected' or 'Incorrect Length'
+-		 * error might be expected and error recovery would be
+-		 * unnecessary in these cases.	Check if the according suppress
+-		 * bit is set.
++		 * In some cases certain errors might be expected and
++		 * error recovery would be unnecessary in these cases.
++		 * Check if the according suppress bit is set.
+ 		 */
+ 		sense = dasd_get_sense(&cqr->irb);
+-		if (sense && sense[1] & SNS1_FILE_PROTECTED &&
+-		    test_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags))
++		if (sense && (sense[1] & SNS1_INV_TRACK_FORMAT) &&
++		    !(sense[2] & SNS2_ENV_DATA_PRESENT) &&
++		    test_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags))
++			continue;
++		if (sense && (sense[1] & SNS1_NO_REC_FOUND) &&
++		    test_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags))
+ 			continue;
+ 		if (scsw_cstat(&cqr->irb.scsw) == 0x40 &&
+ 		    test_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags))
+diff --git a/drivers/s390/block/dasd_3990_erp.c b/drivers/s390/block/dasd_3990_erp.c
+index c2d4ea74e0d00..845f088ce97b1 100644
+--- a/drivers/s390/block/dasd_3990_erp.c
++++ b/drivers/s390/block/dasd_3990_erp.c
+@@ -1401,14 +1401,8 @@ dasd_3990_erp_file_prot(struct dasd_ccw_req * erp)
+ 
+ 	struct dasd_device *device = erp->startdev;
+ 
+-	/*
+-	 * In some cases the 'File Protected' error might be expected and
+-	 * log messages shouldn't be written then.
+-	 * Check if the according suppress bit is set.
+-	 */
+-	if (!test_bit(DASD_CQR_SUPPRESS_FP, &erp->flags))
+-		dev_err(&device->cdev->dev,
+-			"Accessing the DASD failed because of a hardware error\n");
++	dev_err(&device->cdev->dev,
++		"Accessing the DASD failed because of a hardware error\n");
+ 
+ 	return dasd_3990_erp_cleanup(erp, DASD_CQR_FAILED);
+ 
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index c6930c159d2a6..fddcb910157cc 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -2201,6 +2201,7 @@ dasd_eckd_analysis_ccw(struct dasd_device *device)
+ 	cqr->status = DASD_CQR_FILLED;
+ 	/* Set flags to suppress output for expected errors */
+ 	set_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags);
++	set_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags);
+ 
+ 	return cqr;
+ }
+@@ -2482,7 +2483,6 @@ dasd_eckd_build_check_tcw(struct dasd_device *base, struct format_data_t *fdata,
+ 	cqr->buildclk = get_tod_clock();
+ 	cqr->status = DASD_CQR_FILLED;
+ 	/* Set flags to suppress output for expected errors */
+-	set_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags);
+ 	set_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags);
+ 
+ 	return cqr;
+@@ -4031,8 +4031,6 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_cmd_single(
+ 
+ 	/* Set flags to suppress output for expected errors */
+ 	if (dasd_eckd_is_ese(basedev)) {
+-		set_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags);
+-		set_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags);
+ 		set_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags);
+ 	}
+ 
+@@ -4534,9 +4532,8 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_tpm_track(
+ 
+ 	/* Set flags to suppress output for expected errors */
+ 	if (dasd_eckd_is_ese(basedev)) {
+-		set_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags);
+-		set_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags);
+ 		set_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags);
++		set_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags);
+ 	}
+ 
+ 	return cqr;
+@@ -5706,36 +5703,32 @@ static void dasd_eckd_dump_sense(struct dasd_device *device,
+ {
+ 	u8 *sense = dasd_get_sense(irb);
+ 
+-	if (scsw_is_tm(&irb->scsw)) {
+-		/*
+-		 * In some cases the 'File Protected' or 'Incorrect Length'
+-		 * error might be expected and log messages shouldn't be written
+-		 * then. Check if the according suppress bit is set.
+-		 */
+-		if (sense && (sense[1] & SNS1_FILE_PROTECTED) &&
+-		    test_bit(DASD_CQR_SUPPRESS_FP, &req->flags))
+-			return;
+-		if (scsw_cstat(&irb->scsw) == 0x40 &&
+-		    test_bit(DASD_CQR_SUPPRESS_IL, &req->flags))
+-			return;
++	/*
++	 * In some cases certain errors might be expected and
++	 * log messages shouldn't be written then.
++	 * Check if the according suppress bit is set.
++	 */
++	if (sense && (sense[1] & SNS1_INV_TRACK_FORMAT) &&
++	    !(sense[2] & SNS2_ENV_DATA_PRESENT) &&
++	    test_bit(DASD_CQR_SUPPRESS_IT, &req->flags))
++		return;
+ 
+-		dasd_eckd_dump_sense_tcw(device, req, irb);
+-	} else {
+-		/*
+-		 * In some cases the 'Command Reject' or 'No Record Found'
+-		 * error might be expected and log messages shouldn't be
+-		 * written then. Check if the according suppress bit is set.
+-		 */
+-		if (sense && sense[0] & SNS0_CMD_REJECT &&
+-		    test_bit(DASD_CQR_SUPPRESS_CR, &req->flags))
+-			return;
++	if (sense && sense[0] & SNS0_CMD_REJECT &&
++	    test_bit(DASD_CQR_SUPPRESS_CR, &req->flags))
++		return;
+ 
+-		if (sense && sense[1] & SNS1_NO_REC_FOUND &&
+-		    test_bit(DASD_CQR_SUPPRESS_NRF, &req->flags))
+-			return;
++	if (sense && sense[1] & SNS1_NO_REC_FOUND &&
++	    test_bit(DASD_CQR_SUPPRESS_NRF, &req->flags))
++		return;
+ 
++	if (scsw_cstat(&irb->scsw) == 0x40 &&
++	    test_bit(DASD_CQR_SUPPRESS_IL, &req->flags))
++		return;
++
++	if (scsw_is_tm(&irb->scsw))
++		dasd_eckd_dump_sense_tcw(device, req, irb);
++	else
+ 		dasd_eckd_dump_sense_ccw(device, req, irb);
+-	}
+ }
+ 
+ static int dasd_eckd_pm_freeze(struct dasd_device *device)
+diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h
+index 5d7d35ca5eb48..052b5d1ba9c12 100644
+--- a/drivers/s390/block/dasd_int.h
++++ b/drivers/s390/block/dasd_int.h
+@@ -226,7 +226,7 @@ struct dasd_ccw_req {
+  * The following flags are used to suppress output of certain errors.
+  */
+ #define DASD_CQR_SUPPRESS_NRF	4	/* Suppress 'No Record Found' error */
+-#define DASD_CQR_SUPPRESS_FP	5	/* Suppress 'File Protected' error*/
++#define DASD_CQR_SUPPRESS_IT	5	/* Suppress 'Invalid Track' error*/
+ #define DASD_CQR_SUPPRESS_IL	6	/* Suppress 'Incorrect Length' error */
+ #define DASD_CQR_SUPPRESS_CR	7	/* Suppress 'Command Reject' error */
+ 
+diff --git a/drivers/s390/cio/idset.c b/drivers/s390/cio/idset.c
+index 45f9c0736be4f..e5f28370a9039 100644
+--- a/drivers/s390/cio/idset.c
++++ b/drivers/s390/cio/idset.c
+@@ -16,20 +16,21 @@ struct idset {
+ 	unsigned long bitmap[];
+ };
+ 
+-static inline unsigned long bitmap_size(int num_ssid, int num_id)
++static inline unsigned long idset_bitmap_size(int num_ssid, int num_id)
+ {
+-	return BITS_TO_LONGS(num_ssid * num_id) * sizeof(unsigned long);
++	return bitmap_size(size_mul(num_ssid, num_id));
+ }
+ 
+ static struct idset *idset_new(int num_ssid, int num_id)
+ {
+ 	struct idset *set;
+ 
+-	set = vmalloc(sizeof(struct idset) + bitmap_size(num_ssid, num_id));
++	set = vmalloc(sizeof(struct idset) +
++		      idset_bitmap_size(num_ssid, num_id));
+ 	if (set) {
+ 		set->num_ssid = num_ssid;
+ 		set->num_id = num_id;
+-		memset(set->bitmap, 0, bitmap_size(num_ssid, num_id));
++		memset(set->bitmap, 0, idset_bitmap_size(num_ssid, num_id));
+ 	}
+ 	return set;
+ }
+@@ -41,7 +42,8 @@ void idset_free(struct idset *set)
+ 
+ void idset_fill(struct idset *set)
+ {
+-	memset(set->bitmap, 0xff, bitmap_size(set->num_ssid, set->num_id));
++	memset(set->bitmap, 0xff,
++	       idset_bitmap_size(set->num_ssid, set->num_id));
+ }
+ 
+ static inline void idset_add(struct idset *set, int ssid, int id)
+diff --git a/drivers/scsi/aacraid/comminit.c b/drivers/scsi/aacraid/comminit.c
+index 355b16f0b1456..34e45c87cae03 100644
+--- a/drivers/scsi/aacraid/comminit.c
++++ b/drivers/scsi/aacraid/comminit.c
+@@ -642,6 +642,7 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
+ 
+ 	if (aac_comm_init(dev)<0){
+ 		kfree(dev->queues);
++		dev->queues = NULL;
+ 		return NULL;
+ 	}
+ 	/*
+@@ -649,6 +650,7 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
+ 	 */
+ 	if (aac_fib_setup(dev) < 0) {
+ 		kfree(dev->queues);
++		dev->queues = NULL;
+ 		return NULL;
+ 	}
+ 		
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 923ceaba0bf30..84f90f4d5abd8 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -7048,7 +7048,7 @@ lpfc_sli4_repost_sgl_list(struct lpfc_hba *phba,
+ 	struct lpfc_sglq *sglq_entry = NULL;
+ 	struct lpfc_sglq *sglq_entry_next = NULL;
+ 	struct lpfc_sglq *sglq_entry_first = NULL;
+-	int status, total_cnt;
++	int status = 0, total_cnt;
+ 	int post_cnt = 0, num_posted = 0, block_cnt = 0;
+ 	int last_xritag = NO_XRI;
+ 	LIST_HEAD(prep_sgl_list);
+diff --git a/drivers/scsi/scsi_transport_spi.c b/drivers/scsi/scsi_transport_spi.c
+index c37dd15d16d24..83f2576ed2aa0 100644
+--- a/drivers/scsi/scsi_transport_spi.c
++++ b/drivers/scsi/scsi_transport_spi.c
+@@ -677,10 +677,10 @@ spi_dv_device_echo_buffer(struct scsi_device *sdev, u8 *buffer,
+ 	for (r = 0; r < retries; r++) {
+ 		result = spi_execute(sdev, spi_write_buffer, DMA_TO_DEVICE,
+ 				     buffer, len, &sshdr);
+-		if(result || !scsi_device_online(sdev)) {
++		if (result || !scsi_device_online(sdev)) {
+ 
+ 			scsi_device_set_state(sdev, SDEV_QUIESCE);
+-			if (scsi_sense_valid(&sshdr)
++			if (result > 0 && scsi_sense_valid(&sshdr)
+ 			    && sshdr.sense_key == ILLEGAL_REQUEST
+ 			    /* INVALID FIELD IN CDB */
+ 			    && sshdr.asc == 0x24 && sshdr.ascq == 0x00)
+diff --git a/drivers/soc/qcom/cmd-db.c b/drivers/soc/qcom/cmd-db.c
+index fc5610603b173..006bb28e2a6e5 100644
+--- a/drivers/soc/qcom/cmd-db.c
++++ b/drivers/soc/qcom/cmd-db.c
+@@ -319,7 +319,7 @@ static int cmd_db_dev_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	cmd_db_header = memremap(rmem->base, rmem->size, MEMREMAP_WB);
++	cmd_db_header = memremap(rmem->base, rmem->size, MEMREMAP_WC);
+ 	if (!cmd_db_header) {
+ 		ret = -ENOMEM;
+ 		cmd_db_header = NULL;
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index a377c3d02c559..d8f556f793fc5 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -1425,18 +1425,18 @@ struct sdw_dpn_prop *sdw_get_slave_dpn_prop(struct sdw_slave *slave,
+ 					    unsigned int port_num)
+ {
+ 	struct sdw_dpn_prop *dpn_prop;
+-	u8 num_ports;
++	unsigned long mask;
+ 	int i;
+ 
+ 	if (direction == SDW_DATA_DIR_TX) {
+-		num_ports = hweight32(slave->prop.source_ports);
++		mask = slave->prop.source_ports;
+ 		dpn_prop = slave->prop.src_dpn_prop;
+ 	} else {
+-		num_ports = hweight32(slave->prop.sink_ports);
++		mask = slave->prop.sink_ports;
+ 		dpn_prop = slave->prop.sink_dpn_prop;
+ 	}
+ 
+-	for (i = 0; i < num_ports; i++) {
++	for_each_set_bit(i, &mask, 32) {
+ 		if (dpn_prop[i].num == port_num)
+ 			return &dpn_prop[i];
+ 	}
+diff --git a/drivers/ssb/main.c b/drivers/ssb/main.c
+index 0a26984acb2ca..9e54bc7eec663 100644
+--- a/drivers/ssb/main.c
++++ b/drivers/ssb/main.c
+@@ -835,7 +835,7 @@ static u32 clkfactor_f6_resolve(u32 v)
+ 	case SSB_CHIPCO_CLK_F6_7:
+ 		return 7;
+ 	}
+-	return 0;
++	return 1;
+ }
+ 
+ /* Calculate the speed the backplane would run at a given set of clockcontrol values */
+diff --git a/drivers/staging/iio/resolver/ad2s1210.c b/drivers/staging/iio/resolver/ad2s1210.c
+index a19cfb2998c93..f19bb5c796cf7 100644
+--- a/drivers/staging/iio/resolver/ad2s1210.c
++++ b/drivers/staging/iio/resolver/ad2s1210.c
+@@ -658,9 +658,6 @@ static int ad2s1210_probe(struct spi_device *spi)
+ 	if (!indio_dev)
+ 		return -ENOMEM;
+ 	st = iio_priv(indio_dev);
+-	ret = ad2s1210_setup_gpios(st);
+-	if (ret < 0)
+-		return ret;
+ 
+ 	spi_set_drvdata(spi, indio_dev);
+ 
+@@ -671,6 +668,10 @@ static int ad2s1210_probe(struct spi_device *spi)
+ 	st->resolution = 12;
+ 	st->fexcit = AD2S1210_DEF_EXCIT;
+ 
++	ret = ad2s1210_setup_gpios(st);
++	if (ret < 0)
++		return ret;
++
+ 	indio_dev->info = &ad2s1210_info;
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->channels = ad2s1210_channels;
+diff --git a/drivers/staging/ks7010/ks7010_sdio.c b/drivers/staging/ks7010/ks7010_sdio.c
+index 8c740c771f509..8e3fc4b78fd20 100644
+--- a/drivers/staging/ks7010/ks7010_sdio.c
++++ b/drivers/staging/ks7010/ks7010_sdio.c
+@@ -395,9 +395,9 @@ int ks_wlan_hw_tx(struct ks_wlan_private *priv, void *p, unsigned long size,
+ 	priv->hostt.buff[priv->hostt.qtail] = le16_to_cpu(hdr->event);
+ 	priv->hostt.qtail = (priv->hostt.qtail + 1) % SME_EVENT_BUFF_SIZE;
+ 
+-	spin_lock(&priv->tx_dev.tx_dev_lock);
++	spin_lock_bh(&priv->tx_dev.tx_dev_lock);
+ 	result = enqueue_txdev(priv, p, size, complete_handler, skb);
+-	spin_unlock(&priv->tx_dev.tx_dev_lock);
++	spin_unlock_bh(&priv->tx_dev.tx_dev_lock);
+ 
+ 	if (txq_has_space(priv))
+ 		queue_delayed_work(priv->wq, &priv->rw_dwork, 0);
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index b13a944eb4620..f6580d11c0fe4 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -2584,6 +2584,7 @@ void tb_switch_remove(struct tb_switch *sw)
+ 			tb_switch_remove(port->remote->sw);
+ 			port->remote = NULL;
+ 		} else if (port->xdomain) {
++			port->xdomain->is_unplugged = true;
+ 			tb_xdomain_remove(port->xdomain);
+ 			port->xdomain = NULL;
+ 		}
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 4e4a71307d63c..c494b77e67493 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1757,6 +1757,9 @@ static const struct usb_device_id acm_ids[] = {
+ 	{ USB_DEVICE(0x11ca, 0x0201), /* VeriFone Mx870 Gadget Serial */
+ 	.driver_info = SINGLE_RX_URB,
+ 	},
++	{ USB_DEVICE(0x1901, 0x0006), /* GE Healthcare Patient Monitor UI Controller */
++	.driver_info = DISABLE_ECHO, /* DISABLE ECHO in termios flag */
++	},
+ 	{ USB_DEVICE(0x1965, 0x0018), /* Uniden UBC125XLT */
+ 	.driver_info = NO_UNION_NORMAL, /* has no union descriptor */
+ 	},
+diff --git a/drivers/usb/core/sysfs.c b/drivers/usb/core/sysfs.c
+index 366c095217859..d93a02e6050b8 100644
+--- a/drivers/usb/core/sysfs.c
++++ b/drivers/usb/core/sysfs.c
+@@ -667,6 +667,7 @@ static int add_power_attributes(struct device *dev)
+ 
+ static void remove_power_attributes(struct device *dev)
+ {
++	sysfs_unmerge_group(&dev->kobj, &usb3_hardware_lpm_attr_group);
+ 	sysfs_unmerge_group(&dev->kobj, &usb2_hardware_lpm_attr_group);
+ 	sysfs_unmerge_group(&dev->kobj, &power_attr_group);
+ }
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 7275bff857ae8..ee7682faa6f3a 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -430,6 +430,13 @@ static void dwc3_free_event_buffers(struct dwc3 *dwc)
+ static int dwc3_alloc_event_buffers(struct dwc3 *dwc, unsigned length)
+ {
+ 	struct dwc3_event_buffer *evt;
++	unsigned int hw_mode;
++
++	hw_mode = DWC3_GHWPARAMS0_MODE(dwc->hwparams.hwparams0);
++	if (hw_mode == DWC3_GHWPARAMS0_MODE_HOST) {
++		dwc->ev_buf = NULL;
++		return 0;
++	}
+ 
+ 	evt = dwc3_alloc_one_event_buffer(dwc, length);
+ 	if (IS_ERR(evt)) {
+@@ -451,6 +458,9 @@ int dwc3_event_buffers_setup(struct dwc3 *dwc)
+ {
+ 	struct dwc3_event_buffer	*evt;
+ 
++	if (!dwc->ev_buf)
++		return 0;
++
+ 	evt = dwc->ev_buf;
+ 	evt->lpos = 0;
+ 	dwc3_writel(dwc->regs, DWC3_GEVNTADRLO(0),
+@@ -467,6 +477,17 @@ int dwc3_event_buffers_setup(struct dwc3 *dwc)
+ void dwc3_event_buffers_cleanup(struct dwc3 *dwc)
+ {
+ 	struct dwc3_event_buffer	*evt;
++	u32				reg;
++
++	if (!dwc->ev_buf)
++		return;
++	/*
++	 * Exynos platforms may not be able to access event buffer if the
++	 * controller failed to halt on dwc3_core_exit().
++	 */
++	reg = dwc3_readl(dwc->regs, DWC3_DSTS);
++	if (!(reg & DWC3_DSTS_DEVCTRLHLT))
++		return;
+ 
+ 	evt = dwc->ev_buf;
+ 
+diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c
+index efaf0db595f46..6b59bbb22da49 100644
+--- a/drivers/usb/dwc3/dwc3-omap.c
++++ b/drivers/usb/dwc3/dwc3-omap.c
+@@ -522,11 +522,13 @@ static int dwc3_omap_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_err(dev, "failed to request IRQ #%d --> %d\n",
+ 			omap->irq, ret);
+-		goto err1;
++		goto err2;
+ 	}
+ 	dwc3_omap_enable_irqs(omap);
+ 	return 0;
+ 
++err2:
++	of_platform_depopulate(dev);
+ err1:
+ 	pm_runtime_put_sync(dev);
+ 	pm_runtime_disable(dev);
+diff --git a/drivers/usb/dwc3/dwc3-st.c b/drivers/usb/dwc3/dwc3-st.c
+index e733be8405459..e0ab69bdddbec 100644
+--- a/drivers/usb/dwc3/dwc3-st.c
++++ b/drivers/usb/dwc3/dwc3-st.c
+@@ -219,10 +219,8 @@ static int st_dwc3_probe(struct platform_device *pdev)
+ 	dwc3_data->regmap = regmap;
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "syscfg-reg");
+-	if (!res) {
+-		ret = -ENXIO;
+-		goto undo_platform_dev_alloc;
+-	}
++	if (!res)
++		return -ENXIO;
+ 
+ 	dwc3_data->syscfg_reg_off = res->start;
+ 
+@@ -233,8 +231,7 @@ static int st_dwc3_probe(struct platform_device *pdev)
+ 		devm_reset_control_get_exclusive(dev, "powerdown");
+ 	if (IS_ERR(dwc3_data->rstc_pwrdn)) {
+ 		dev_err(&pdev->dev, "could not get power controller\n");
+-		ret = PTR_ERR(dwc3_data->rstc_pwrdn);
+-		goto undo_platform_dev_alloc;
++		return PTR_ERR(dwc3_data->rstc_pwrdn);
+ 	}
+ 
+ 	/* Manage PowerDown */
+@@ -269,7 +266,7 @@ static int st_dwc3_probe(struct platform_device *pdev)
+ 	if (!child_pdev) {
+ 		dev_err(dev, "failed to find dwc3 core device\n");
+ 		ret = -ENODEV;
+-		goto err_node_put;
++		goto depopulate;
+ 	}
+ 
+ 	dwc3_data->dr_mode = usb_get_dr_mode(&child_pdev->dev);
+@@ -285,6 +282,7 @@ static int st_dwc3_probe(struct platform_device *pdev)
+ 	ret = st_dwc3_drd_init(dwc3_data);
+ 	if (ret) {
+ 		dev_err(dev, "drd initialisation failed\n");
++		of_platform_depopulate(dev);
+ 		goto undo_softreset;
+ 	}
+ 
+@@ -294,14 +292,14 @@ static int st_dwc3_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, dwc3_data);
+ 	return 0;
+ 
++depopulate:
++	of_platform_depopulate(dev);
+ err_node_put:
+ 	of_node_put(child);
+ undo_softreset:
+ 	reset_control_assert(dwc3_data->rstc_rst);
+ undo_powerdown:
+ 	reset_control_assert(dwc3_data->rstc_pwrdn);
+-undo_platform_dev_alloc:
+-	platform_device_put(pdev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/gadget/udc/fsl_udc_core.c b/drivers/usb/gadget/udc/fsl_udc_core.c
+index ad6ff9c4188ef..3986e0639f05a 100644
+--- a/drivers/usb/gadget/udc/fsl_udc_core.c
++++ b/drivers/usb/gadget/udc/fsl_udc_core.c
+@@ -2501,7 +2501,7 @@ static int fsl_udc_probe(struct platform_device *pdev)
+ 	/* setup the udc->eps[] for non-control endpoints and link
+ 	 * to gadget.ep_list */
+ 	for (i = 1; i < (int)(udc_controller->max_ep / 2); i++) {
+-		char name[14];
++		char name[16];
+ 
+ 		sprintf(name, "ep%dout", i);
+ 		struct_ep_setup(udc_controller, i * 2, name, 1);
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index b069fe3f8ab0b..19914d08fc0dd 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -2826,7 +2826,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
+ 				xhci->num_active_eps);
+ 		return -ENOMEM;
+ 	}
+-	if ((xhci->quirks & XHCI_SW_BW_CHECKING) &&
++	if ((xhci->quirks & XHCI_SW_BW_CHECKING) && !ctx_change &&
+ 	    xhci_reserve_bandwidth(xhci, virt_dev, command->in_ctx)) {
+ 		if ((xhci->quirks & XHCI_EP_LIMIT_QUIRK))
+ 			xhci_free_host_resources(xhci, ctrl_ctx);
+@@ -4242,8 +4242,10 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 		mutex_unlock(&xhci->mutex);
+ 		ret = xhci_disable_slot(xhci, udev->slot_id);
+ 		xhci_free_virt_device(xhci, udev->slot_id);
+-		if (!ret)
+-			xhci_alloc_dev(hcd, udev);
++		if (!ret) {
++			if (xhci_alloc_dev(hcd, udev) == 1)
++				xhci_setup_addressable_virt_dev(xhci, udev);
++		}
+ 		kfree(command->completion);
+ 		kfree(command);
+ 		return -EPROTO;
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 73d97f7bf15bc..c9fade980f367 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -619,6 +619,8 @@ static void option_instat_callback(struct urb *urb);
+ 
+ /* MeiG Smart Technology products */
+ #define MEIGSMART_VENDOR_ID			0x2dee
++/* MeiG Smart SRM825L based on Qualcomm 315 */
++#define MEIGSMART_PRODUCT_SRM825L		0x4d22
+ /* MeiG Smart SLM320 based on UNISOC UIS8910 */
+ #define MEIGSMART_PRODUCT_SLM320		0x4d41
+ 
+@@ -2366,6 +2368,9 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x60) },
+ 	{ } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index ccc4c6d8a578f..11dc833ca2c49 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -186,6 +186,7 @@ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
+ 	unsigned char k_rand_bytes[16];
+ 	int items;
+ 	elf_addr_t *elf_info;
++	elf_addr_t flags = 0;
+ 	int ei_index;
+ 	const struct cred *cred = current_cred();
+ 	struct vm_area_struct *vma;
+@@ -260,7 +261,9 @@ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
+ 	NEW_AUX_ENT(AT_PHENT, sizeof(struct elf_phdr));
+ 	NEW_AUX_ENT(AT_PHNUM, exec->e_phnum);
+ 	NEW_AUX_ENT(AT_BASE, interp_load_addr);
+-	NEW_AUX_ENT(AT_FLAGS, 0);
++	if (bprm->interp_flags & BINPRM_FLAGS_PRESERVE_ARGV0)
++		flags |= AT_FLAGS_PRESERVE_ARGV0;
++	NEW_AUX_ENT(AT_FLAGS, flags);
+ 	NEW_AUX_ENT(AT_ENTRY, e_entry);
+ 	NEW_AUX_ENT(AT_UID, from_kuid_munged(cred->user_ns, cred->uid));
+ 	NEW_AUX_ENT(AT_EUID, from_kuid_munged(cred->user_ns, cred->euid));
+diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
+index 48bb1290ed2fd..a3bf15dafab79 100644
+--- a/fs/binfmt_elf_fdpic.c
++++ b/fs/binfmt_elf_fdpic.c
+@@ -320,7 +320,7 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
+ 	else
+ 		executable_stack = EXSTACK_DEFAULT;
+ 
+-	if (stack_size == 0) {
++	if (stack_size == 0 && interp_params.flags & ELF_FDPIC_FLAG_PRESENT) {
+ 		stack_size = interp_params.stack_size;
+ 		if (interp_params.flags & ELF_FDPIC_FLAG_EXEC_STACK)
+ 			executable_stack = EXSTACK_ENABLE_X;
+@@ -506,6 +506,7 @@ static int create_elf_fdpic_tables(struct linux_binprm *bprm,
+ 	char __user *u_platform, *u_base_platform, *p;
+ 	int loop;
+ 	int nr;	/* reset for each csp adjustment */
++	unsigned long flags = 0;
+ 
+ #ifdef CONFIG_MMU
+ 	/* In some cases (e.g. Hyper-Threading), we want to avoid L1 evictions
+@@ -648,7 +649,9 @@ static int create_elf_fdpic_tables(struct linux_binprm *bprm,
+ 	NEW_AUX_ENT(AT_PHENT,	sizeof(struct elf_phdr));
+ 	NEW_AUX_ENT(AT_PHNUM,	exec_params->hdr.e_phnum);
+ 	NEW_AUX_ENT(AT_BASE,	interp_params->elfhdr_addr);
+-	NEW_AUX_ENT(AT_FLAGS,	0);
++	if (bprm->interp_flags & BINPRM_FLAGS_PRESERVE_ARGV0)
++		flags |= AT_FLAGS_PRESERVE_ARGV0;
++	NEW_AUX_ENT(AT_FLAGS,	flags);
+ 	NEW_AUX_ENT(AT_ENTRY,	exec_params->entry_addr);
+ 	NEW_AUX_ENT(AT_UID,	(elf_addr_t) from_kuid_munged(cred->user_ns, cred->uid));
+ 	NEW_AUX_ENT(AT_EUID,	(elf_addr_t) from_kuid_munged(cred->user_ns, cred->euid));
+diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
+index ce0047feea729..740dac1012ae8 100644
+--- a/fs/binfmt_misc.c
++++ b/fs/binfmt_misc.c
+@@ -60,12 +60,11 @@ typedef struct {
+ 	char *name;
+ 	struct dentry *dentry;
+ 	struct file *interp_file;
++	refcount_t users;		/* sync removal with load_misc_binary() */
+ } Node;
+ 
+ static DEFINE_RWLOCK(entries_lock);
+ static struct file_system_type bm_fs_type;
+-static struct vfsmount *bm_mnt;
+-static int entry_count;
+ 
+ /*
+  * Max length of the register string.  Determined by:
+@@ -82,19 +81,23 @@ static int entry_count;
+  */
+ #define MAX_REGISTER_LENGTH 1920
+ 
+-/*
+- * Check if we support the binfmt
+- * if we do, return the node, else NULL
+- * locking is done in load_misc_binary
++/**
++ * search_binfmt_handler - search for a binary handler for @bprm
++ * @misc: handle to binfmt_misc instance
++ * @bprm: binary for which we are looking for a handler
++ *
++ * Search for a binary type handler for @bprm in the list of registered binary
++ * type handlers.
++ *
++ * Return: binary type list entry on success, NULL on failure
+  */
+-static Node *check_file(struct linux_binprm *bprm)
++static Node *search_binfmt_handler(struct linux_binprm *bprm)
+ {
+ 	char *p = strrchr(bprm->interp, '.');
+-	struct list_head *l;
++	Node *e;
+ 
+ 	/* Walk all the registered handlers. */
+-	list_for_each(l, &entries) {
+-		Node *e = list_entry(l, Node, list);
++	list_for_each_entry(e, &entries, list) {
+ 		char *s;
+ 		int j;
+ 
+@@ -123,9 +126,49 @@ static Node *check_file(struct linux_binprm *bprm)
+ 		if (j == e->size)
+ 			return e;
+ 	}
++
+ 	return NULL;
+ }
+ 
++/**
++ * get_binfmt_handler - try to find a binary type handler
++ * @misc: handle to binfmt_misc instance
++ * @bprm: binary for which we are looking for a handler
++ *
++ * Try to find a binfmt handler for the binary type. If one is found take a
++ * reference to protect against removal via bm_{entry,status}_write().
++ *
++ * Return: binary type list entry on success, NULL on failure
++ */
++static Node *get_binfmt_handler(struct linux_binprm *bprm)
++{
++	Node *e;
++
++	read_lock(&entries_lock);
++	e = search_binfmt_handler(bprm);
++	if (e)
++		refcount_inc(&e->users);
++	read_unlock(&entries_lock);
++	return e;
++}
++
++/**
++ * put_binfmt_handler - put binary handler node
++ * @e: node to put
++ *
++ * Free node syncing with load_misc_binary() and defer final free to
++ * load_misc_binary() in case it is using the binary type handler we were
++ * requested to remove.
++ */
++static void put_binfmt_handler(Node *e)
++{
++	if (refcount_dec_and_test(&e->users)) {
++		if (e->flags & MISC_FMT_OPEN_FILE)
++			filp_close(e->interp_file, NULL);
++		kfree(e);
++	}
++}
++
+ /*
+  * the loader itself
+  */
+@@ -139,12 +182,7 @@ static int load_misc_binary(struct linux_binprm *bprm)
+ 	if (!enabled)
+ 		return retval;
+ 
+-	/* to keep locking time low, we copy the interpreter string */
+-	read_lock(&entries_lock);
+-	fmt = check_file(bprm);
+-	if (fmt)
+-		dget(fmt->dentry);
+-	read_unlock(&entries_lock);
++	fmt = get_binfmt_handler(bprm);
+ 	if (!fmt)
+ 		return retval;
+ 
+@@ -153,7 +191,9 @@ static int load_misc_binary(struct linux_binprm *bprm)
+ 	if (bprm->interp_flags & BINPRM_FLAGS_PATH_INACCESSIBLE)
+ 		goto ret;
+ 
+-	if (!(fmt->flags & MISC_FMT_PRESERVE_ARGV0)) {
++	if (fmt->flags & MISC_FMT_PRESERVE_ARGV0) {
++		bprm->interp_flags |= BINPRM_FLAGS_PRESERVE_ARGV0;
++	} else {
+ 		retval = remove_arg_zero(bprm);
+ 		if (retval)
+ 			goto ret;
+@@ -196,7 +236,16 @@ static int load_misc_binary(struct linux_binprm *bprm)
+ 
+ 	retval = 0;
+ ret:
+-	dput(fmt->dentry);
++
++	/*
++	 * If we actually put the node here all concurrent calls to
++	 * load_misc_binary() will have finished. We also know
++	 * that for the refcount to be zero ->evict_inode() must have removed
++	 * the node to be deleted from the list. All that is left for us is to
++	 * close and free.
++	 */
++	put_binfmt_handler(fmt);
++
+ 	return retval;
+ }
+ 
+@@ -551,30 +600,90 @@ static struct inode *bm_get_inode(struct super_block *sb, int mode)
+ 	return inode;
+ }
+ 
++/**
++ * bm_evict_inode - cleanup data associated with @inode
++ * @inode: inode to which the data is attached
++ *
++ * Cleanup the binary type handler data associated with @inode if a binary type
++ * entry is removed or the filesystem is unmounted and the super block is
++ * shutdown.
++ *
++ * If the ->evict call was not caused by a super block shutdown but by a write
++ * to remove the entry or all entries via bm_{entry,status}_write() the entry
++ * will have already been removed from the list. We keep the list_empty() check
++ * to make that explicit.
++*/
+ static void bm_evict_inode(struct inode *inode)
+ {
+ 	Node *e = inode->i_private;
+ 
+-	if (e && e->flags & MISC_FMT_OPEN_FILE)
+-		filp_close(e->interp_file, NULL);
+-
+ 	clear_inode(inode);
+-	kfree(e);
++
++	if (e) {
++		write_lock(&entries_lock);
++		if (!list_empty(&e->list))
++			list_del_init(&e->list);
++		write_unlock(&entries_lock);
++		put_binfmt_handler(e);
++	}
+ }
+ 
+-static void kill_node(Node *e)
++/**
++ * unlink_binfmt_dentry - remove the dentry for the binary type handler
++ * @dentry: dentry associated with the binary type handler
++ *
++ * Do the actual filesystem work to remove a dentry for a registered binary
++ * type handler. Since binfmt_misc only allows simple files to be created
++ * directly under the root dentry of the filesystem we ensure that we are
++ * indeed passed a dentry directly beneath the root dentry, that the inode
++ * associated with the root dentry is locked, and that it is a regular file we
++ * are asked to remove.
++ */
++static void unlink_binfmt_dentry(struct dentry *dentry)
+ {
+-	struct dentry *dentry;
++	struct dentry *parent = dentry->d_parent;
++	struct inode *inode, *parent_inode;
++
++	/* All entries are immediate descendants of the root dentry. */
++	if (WARN_ON_ONCE(dentry->d_sb->s_root != parent))
++		return;
+ 
++	/* We only expect to be called on regular files. */
++	inode = d_inode(dentry);
++	if (WARN_ON_ONCE(!S_ISREG(inode->i_mode)))
++		return;
++
++	/* The parent inode must be locked. */
++	parent_inode = d_inode(parent);
++	if (WARN_ON_ONCE(!inode_is_locked(parent_inode)))
++		return;
++
++	if (simple_positive(dentry)) {
++		dget(dentry);
++		simple_unlink(parent_inode, dentry);
++		d_delete(dentry);
++		dput(dentry);
++	}
++}
++
++/**
++ * remove_binfmt_handler - remove a binary type handler
++ * @misc: handle to binfmt_misc instance
++ * @e: binary type handler to remove
++ *
++ * Remove a binary type handler from the list of binary type handlers and
++ * remove its associated dentry. This is called from
++ * binfmt_{entry,status}_write(). In the future, we might want to think about
++ * adding a proper ->unlink() method to binfmt_misc instead of forcing caller's
++ * to use writes to files in order to delete binary type handlers. But it has
++ * worked for so long that it's not a pressing issue.
++ */
++static void remove_binfmt_handler(Node *e)
++{
+ 	write_lock(&entries_lock);
+ 	list_del_init(&e->list);
+ 	write_unlock(&entries_lock);
+-
+-	dentry = e->dentry;
+-	drop_nlink(d_inode(dentry));
+-	d_drop(dentry);
+-	dput(dentry);
+-	simple_release_fs(&bm_mnt, &entry_count);
++	unlink_binfmt_dentry(e->dentry);
+ }
+ 
+ /* /<entry> */
+@@ -601,8 +710,8 @@ bm_entry_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
+ static ssize_t bm_entry_write(struct file *file, const char __user *buffer,
+ 				size_t count, loff_t *ppos)
+ {
+-	struct dentry *root;
+-	Node *e = file_inode(file)->i_private;
++	struct inode *inode = file_inode(file);
++	Node *e = inode->i_private;
+ 	int res = parse_command(buffer, count);
+ 
+ 	switch (res) {
+@@ -616,13 +725,22 @@ static ssize_t bm_entry_write(struct file *file, const char __user *buffer,
+ 		break;
+ 	case 3:
+ 		/* Delete this handler. */
+-		root = file_inode(file)->i_sb->s_root;
+-		inode_lock(d_inode(root));
++		inode = d_inode(inode->i_sb->s_root);
++		inode_lock(inode);
+ 
++		/*
++		 * In order to add new element or remove elements from the list
++		 * via bm_{entry,register,status}_write() inode_lock() on the
++		 * root inode must be held.
++		 * The lock is exclusive ensuring that the list can't be
++		 * modified. Only load_misc_binary() can access but does so
++		 * read-only. So we only need to take the write lock when we
++		 * actually remove the entry from the list.
++		 */
+ 		if (!list_empty(&e->list))
+-			kill_node(e);
++			remove_binfmt_handler(e);
+ 
+-		inode_unlock(d_inode(root));
++		inode_unlock(inode);
+ 		break;
+ 	default:
+ 		return res;
+@@ -681,13 +799,7 @@ static ssize_t bm_register_write(struct file *file, const char __user *buffer,
+ 	if (!inode)
+ 		goto out2;
+ 
+-	err = simple_pin_fs(&bm_fs_type, &bm_mnt, &entry_count);
+-	if (err) {
+-		iput(inode);
+-		inode = NULL;
+-		goto out2;
+-	}
+-
++	refcount_set(&e->users, 1);
+ 	e->dentry = dget(dentry);
+ 	inode->i_private = e;
+ 	inode->i_fop = &bm_entry_operations;
+@@ -731,7 +843,8 @@ static ssize_t bm_status_write(struct file *file, const char __user *buffer,
+ 		size_t count, loff_t *ppos)
+ {
+ 	int res = parse_command(buffer, count);
+-	struct dentry *root;
++	Node *e, *next;
++	struct inode *inode;
+ 
+ 	switch (res) {
+ 	case 1:
+@@ -744,13 +857,22 @@ static ssize_t bm_status_write(struct file *file, const char __user *buffer,
+ 		break;
+ 	case 3:
+ 		/* Delete all handlers. */
+-		root = file_inode(file)->i_sb->s_root;
+-		inode_lock(d_inode(root));
++		inode = d_inode(file_inode(file)->i_sb->s_root);
++		inode_lock(inode);
+ 
+-		while (!list_empty(&entries))
+-			kill_node(list_first_entry(&entries, Node, list));
++		/*
++		 * In order to add new element or remove elements from the list
++		 * via bm_{entry,register,status}_write() inode_lock() on the
++		 * root inode must be held.
++		 * The lock is exclusive ensuring that the list can't be
++		 * modified. Only load_misc_binary() can access but does so
++		 * read-only. So we only need to take the write lock when we
++		 * actually remove the entry from the list.
++		 */
++		list_for_each_entry_safe(e, next, &entries, list)
++			remove_binfmt_handler(e);
+ 
+-		inode_unlock(d_inode(root));
++		inode_unlock(inode);
+ 		break;
+ 	default:
+ 		return res;
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index cdfc791b3c405..e2afaa70ae5e5 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -986,7 +986,7 @@ static void btrfs_release_delayed_inode(struct btrfs_delayed_node *delayed_node)
+ 
+ 	if (delayed_node &&
+ 	    test_bit(BTRFS_DELAYED_NODE_INODE_DIRTY, &delayed_node->flags)) {
+-		BUG_ON(!delayed_node->root);
++		ASSERT(delayed_node->root);
+ 		clear_bit(BTRFS_DELAYED_NODE_INODE_DIRTY, &delayed_node->flags);
+ 		delayed_node->count--;
+ 
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index af52c9e005b3c..a779965d29905 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -1755,9 +1755,9 @@ static void bitmap_clear_bits(struct btrfs_free_space_ctl *ctl,
+ 	ctl->free_space -= bytes;
+ }
+ 
+-static void bitmap_set_bits(struct btrfs_free_space_ctl *ctl,
+-			    struct btrfs_free_space *info, u64 offset,
+-			    u64 bytes)
++static void btrfs_bitmap_set_bits(struct btrfs_free_space_ctl *ctl,
++				  struct btrfs_free_space *info, u64 offset,
++				  u64 bytes)
+ {
+ 	unsigned long start, count, end;
+ 	int extent_delta = 1;
+@@ -2077,7 +2077,7 @@ static u64 add_bytes_to_bitmap(struct btrfs_free_space_ctl *ctl,
+ 
+ 	bytes_to_set = min(end - offset, bytes);
+ 
+-	bitmap_set_bits(ctl, info, offset, bytes_to_set);
++	btrfs_bitmap_set_bits(ctl, info, offset, bytes_to_set);
+ 
+ 	/*
+ 	 * We set some bytes, we have no idea what the max extent size is
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 1f99d7dced17a..4bf28f74605fd 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -3918,7 +3918,14 @@ static noinline int may_destroy_subvol(struct btrfs_root *root)
+ 	ret = btrfs_search_slot(NULL, fs_info->tree_root, &key, path, 0, 0);
+ 	if (ret < 0)
+ 		goto out;
+-	BUG_ON(ret == 0);
++	if (ret == 0) {
++		/*
++		 * Key with offset -1 found, there would have to exist a root
++		 * with such id, but this is out of valid range.
++		 */
++		ret = -EUCLEAN;
++		goto out;
++	}
+ 
+ 	ret = 0;
+ 	if (path->slots[0] > 0) {
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 83d17f22335b1..7518ab3b409c5 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2658,8 +2658,6 @@ int btrfs_qgroup_account_extent(struct btrfs_trans_handle *trans, u64 bytenr,
+ 	if (nr_old_roots == 0 && nr_new_roots == 0)
+ 		goto out_free;
+ 
+-	BUG_ON(!fs_info->quota_root);
+-
+ 	trace_btrfs_qgroup_account_extent(fs_info, trans->transid, bytenr,
+ 					num_bytes, nr_old_roots, nr_new_roots);
+ 
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index a5ed01d49f069..a9e72f42e91e0 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -685,7 +685,12 @@ static int begin_cmd(struct send_ctx *sctx, int cmd)
+ 	if (WARN_ON(!sctx->send_buf))
+ 		return -EINVAL;
+ 
+-	BUG_ON(sctx->send_size);
++	if (unlikely(sctx->send_size != 0)) {
++		btrfs_err(sctx->send_root->fs_info,
++			  "send: command header buffer not empty cmd %d offset %llu",
++			  cmd, sctx->send_off);
++		return -EINVAL;
++	}
+ 
+ 	sctx->send_size += sizeof(*hdr);
+ 	hdr = (struct btrfs_cmd_header *)sctx->send_buf;
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 5b952f69bc1f6..2b0fc0c30f36e 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -1546,6 +1546,72 @@ static int check_inode_ref(struct extent_buffer *leaf,
+ 	return 0;
+ }
+ 
++static int check_dev_extent_item(const struct extent_buffer *leaf,
++				 const struct btrfs_key *key,
++				 int slot,
++				 struct btrfs_key *prev_key)
++{
++	struct btrfs_dev_extent *de;
++	const u32 sectorsize = leaf->fs_info->sectorsize;
++
++	de = btrfs_item_ptr(leaf, slot, struct btrfs_dev_extent);
++	/* Basic fixed member checks. */
++	if (unlikely(btrfs_dev_extent_chunk_tree(leaf, de) !=
++		     BTRFS_CHUNK_TREE_OBJECTID)) {
++		generic_err(leaf, slot,
++			    "invalid dev extent chunk tree id, has %llu expect %llu",
++			    btrfs_dev_extent_chunk_tree(leaf, de),
++			    BTRFS_CHUNK_TREE_OBJECTID);
++		return -EUCLEAN;
++	}
++	if (unlikely(btrfs_dev_extent_chunk_objectid(leaf, de) !=
++		     BTRFS_FIRST_CHUNK_TREE_OBJECTID)) {
++		generic_err(leaf, slot,
++			    "invalid dev extent chunk objectid, has %llu expect %llu",
++			    btrfs_dev_extent_chunk_objectid(leaf, de),
++			    BTRFS_FIRST_CHUNK_TREE_OBJECTID);
++		return -EUCLEAN;
++	}
++	/* Alignment check. */
++	if (unlikely(!IS_ALIGNED(key->offset, sectorsize))) {
++		generic_err(leaf, slot,
++			    "invalid dev extent key.offset, has %llu not aligned to %u",
++			    key->offset, sectorsize);
++		return -EUCLEAN;
++	}
++	if (unlikely(!IS_ALIGNED(btrfs_dev_extent_chunk_offset(leaf, de),
++				 sectorsize))) {
++		generic_err(leaf, slot,
++			    "invalid dev extent chunk offset, has %llu not aligned to %u",
++			    btrfs_dev_extent_chunk_objectid(leaf, de),
++			    sectorsize);
++		return -EUCLEAN;
++	}
++	if (unlikely(!IS_ALIGNED(btrfs_dev_extent_length(leaf, de),
++				 sectorsize))) {
++		generic_err(leaf, slot,
++			    "invalid dev extent length, has %llu not aligned to %u",
++			    btrfs_dev_extent_length(leaf, de), sectorsize);
++		return -EUCLEAN;
++	}
++	/* Overlap check with previous dev extent. */
++	if (slot && prev_key->objectid == key->objectid &&
++	    prev_key->type == key->type) {
++		struct btrfs_dev_extent *prev_de;
++		u64 prev_len;
++
++		prev_de = btrfs_item_ptr(leaf, slot - 1, struct btrfs_dev_extent);
++		prev_len = btrfs_dev_extent_length(leaf, prev_de);
++		if (unlikely(prev_key->offset + prev_len > key->offset)) {
++			generic_err(leaf, slot,
++		"dev extent overlap, prev offset %llu len %llu current offset %llu",
++				    prev_key->objectid, prev_len, key->offset);
++			return -EUCLEAN;
++		}
++	}
++	return 0;
++}
++
+ /*
+  * Common point to switch the item-specific validation.
+  */
+@@ -1581,6 +1647,9 @@ static int check_leaf_item(struct extent_buffer *leaf,
+ 	case BTRFS_DEV_ITEM_KEY:
+ 		ret = check_dev_item(leaf, key, slot);
+ 		break;
++	case BTRFS_DEV_EXTENT_KEY:
++		ret = check_dev_extent_item(leaf, key, slot, prev_key);
++		break;
+ 	case BTRFS_INODE_ITEM_KEY:
+ 		ret = check_inode_item(leaf, key, slot);
+ 		break;
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 9e12592727914..f5fa9d542d648 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -3399,9 +3399,10 @@ static int ext4_ext_convert_to_initialized(handle_t *handle,
+ 	struct ext4_extent *ex, *abut_ex;
+ 	ext4_lblk_t ee_block, eof_block;
+ 	unsigned int ee_len, depth, map_len = map->m_len;
+-	int allocated = 0, max_zeroout = 0;
+ 	int err = 0;
+ 	int split_flag = EXT4_EXT_DATA_VALID2;
++	int allocated = 0;
++	unsigned int max_zeroout = 0;
+ 
+ 	ext_debug(inode, "logical block %llu, max_blocks %u\n",
+ 		  (unsigned long long)map->m_lblk, map_len);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index bc5db22df9fe7..7cbbcee225ddd 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -5955,6 +5955,9 @@ static int ext4_try_to_trim_range(struct super_block *sb,
+ 	bool set_trimmed = false;
+ 	void *bitmap;
+ 
++	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info)))
++		return 0;
++
+ 	last = ext4_last_grp_cluster(sb, e4b->bd_group);
+ 	bitmap = e4b->bd_bitmap;
+ 	if (start == 0 && max >= last)
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 134a78179e6ff..6fcc83637b153 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -2215,6 +2215,8 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
+ #endif
+ 
+ 	segno = GET_SEGNO(sbi, blkaddr);
++	if (segno == NULL_SEGNO)
++		return;
+ 
+ 	se = get_seg_entry(sbi, segno);
+ 	new_vblocks = se->valid_blocks + del;
+@@ -3391,8 +3393,7 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
+ 	 * since SSR needs latest valid block information.
+ 	 */
+ 	update_sit_entry(sbi, *new_blkaddr, 1);
+-	if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO)
+-		update_sit_entry(sbi, old_blkaddr, -1);
++	update_sit_entry(sbi, old_blkaddr, -1);
+ 
+ 	if (!__has_curseg_space(sbi, curseg)) {
+ 		if (from_gc)
+diff --git a/fs/file.c b/fs/file.c
+index 105a084b7924d..40a7fc127f37a 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -46,27 +46,23 @@ static void free_fdtable_rcu(struct rcu_head *rcu)
+ #define BITBIT_NR(nr)	BITS_TO_LONGS(BITS_TO_LONGS(nr))
+ #define BITBIT_SIZE(nr)	(BITBIT_NR(nr) * sizeof(long))
+ 
++#define fdt_words(fdt) ((fdt)->max_fds / BITS_PER_LONG) // words in ->open_fds
+ /*
+  * Copy 'count' fd bits from the old table to the new table and clear the extra
+  * space if any.  This does not copy the file pointers.  Called with the files
+  * spinlock held for write.
+  */
+-static void copy_fd_bitmaps(struct fdtable *nfdt, struct fdtable *ofdt,
+-			    unsigned int count)
++static inline void copy_fd_bitmaps(struct fdtable *nfdt, struct fdtable *ofdt,
++			    unsigned int copy_words)
+ {
+-	unsigned int cpy, set;
+-
+-	cpy = count / BITS_PER_BYTE;
+-	set = (nfdt->max_fds - count) / BITS_PER_BYTE;
+-	memcpy(nfdt->open_fds, ofdt->open_fds, cpy);
+-	memset((char *)nfdt->open_fds + cpy, 0, set);
+-	memcpy(nfdt->close_on_exec, ofdt->close_on_exec, cpy);
+-	memset((char *)nfdt->close_on_exec + cpy, 0, set);
+-
+-	cpy = BITBIT_SIZE(count);
+-	set = BITBIT_SIZE(nfdt->max_fds) - cpy;
+-	memcpy(nfdt->full_fds_bits, ofdt->full_fds_bits, cpy);
+-	memset((char *)nfdt->full_fds_bits + cpy, 0, set);
++	unsigned int nwords = fdt_words(nfdt);
++
++	bitmap_copy_and_extend(nfdt->open_fds, ofdt->open_fds,
++			copy_words * BITS_PER_LONG, nwords * BITS_PER_LONG);
++	bitmap_copy_and_extend(nfdt->close_on_exec, ofdt->close_on_exec,
++			copy_words * BITS_PER_LONG, nwords * BITS_PER_LONG);
++	bitmap_copy_and_extend(nfdt->full_fds_bits, ofdt->full_fds_bits,
++			copy_words, nwords);
+ }
+ 
+ /*
+@@ -84,7 +80,7 @@ static void copy_fdtable(struct fdtable *nfdt, struct fdtable *ofdt)
+ 	memcpy(nfdt->fd, ofdt->fd, cpy);
+ 	memset((char *)nfdt->fd + cpy, 0, set);
+ 
+-	copy_fd_bitmaps(nfdt, ofdt, ofdt->max_fds);
++	copy_fd_bitmaps(nfdt, ofdt, fdt_words(ofdt));
+ }
+ 
+ /*
+@@ -374,7 +370,7 @@ struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int
+ 		open_files = sane_fdtable_size(old_fdt, max_fds);
+ 	}
+ 
+-	copy_fd_bitmaps(new_fdt, old_fdt, open_files);
++	copy_fd_bitmaps(new_fdt, old_fdt, open_files / BITS_PER_LONG);
+ 
+ 	old_fds = old_fdt->fd;
+ 	new_fds = new_fdt->fd;
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 8ac91ba05d6de..e6cbed7aedcba 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -1627,9 +1627,11 @@ static int fuse_notify_store(struct fuse_conn *fc, unsigned int size,
+ 
+ 		this_num = min_t(unsigned, num, PAGE_SIZE - offset);
+ 		err = fuse_copy_page(cs, &page, offset, this_num, 0);
+-		if (!err && offset == 0 &&
+-		    (this_num == PAGE_SIZE || file_size == end))
++		if (!PageUptodate(page) && !err && offset == 0 &&
++		    (this_num == PAGE_SIZE || file_size == end)) {
++			zero_user_segment(page, this_num, PAGE_SIZE);
+ 			SetPageUptodate(page);
++		}
+ 		unlock_page(page);
+ 		put_page(page);
+ 
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index 7d4655022afc6..c50999ad9f7ab 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -315,6 +315,16 @@ static int virtio_fs_read_tag(struct virtio_device *vdev, struct virtio_fs *fs)
+ 		return -ENOMEM;
+ 	memcpy(fs->tag, tag_buf, len);
+ 	fs->tag[len] = '\0';
++
++	/* While the VIRTIO specification allows any character, newlines are
++	 * awkward on mount(8) command-lines and cause problems in the sysfs
++	 * "tag" attr and uevent TAG= properties. Forbid them.
++	 */
++	if (strchr(fs->tag, '\n')) {
++		dev_dbg(&vdev->dev, "refusing virtiofs tag with newline character\n");
++		return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
+index d75d56d9ea0ca..22905a076a6a2 100644
+--- a/fs/gfs2/inode.c
++++ b/fs/gfs2/inode.c
+@@ -1905,7 +1905,7 @@ static int setattr_chown(struct inode *inode, struct iattr *attr)
+ 	kuid_t ouid, nuid;
+ 	kgid_t ogid, ngid;
+ 	int error;
+-	struct gfs2_alloc_parms ap;
++	struct gfs2_alloc_parms ap = {};
+ 
+ 	ouid = inode->i_uid;
+ 	ogid = inode->i_gid;
+diff --git a/fs/inode.c b/fs/inode.c
+index 5c7139aa2bda7..de7a63c24c5d1 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -453,6 +453,39 @@ static void inode_lru_list_del(struct inode *inode)
+ 		this_cpu_dec(nr_unused);
+ }
+ 
++static void inode_pin_lru_isolating(struct inode *inode)
++{
++	lockdep_assert_held(&inode->i_lock);
++	WARN_ON(inode->i_state & (I_LRU_ISOLATING | I_FREEING | I_WILL_FREE));
++	inode->i_state |= I_LRU_ISOLATING;
++}
++
++static void inode_unpin_lru_isolating(struct inode *inode)
++{
++	spin_lock(&inode->i_lock);
++	WARN_ON(!(inode->i_state & I_LRU_ISOLATING));
++	inode->i_state &= ~I_LRU_ISOLATING;
++	smp_mb();
++	wake_up_bit(&inode->i_state, __I_LRU_ISOLATING);
++	spin_unlock(&inode->i_lock);
++}
++
++static void inode_wait_for_lru_isolating(struct inode *inode)
++{
++	spin_lock(&inode->i_lock);
++	if (inode->i_state & I_LRU_ISOLATING) {
++		DEFINE_WAIT_BIT(wq, &inode->i_state, __I_LRU_ISOLATING);
++		wait_queue_head_t *wqh;
++
++		wqh = bit_waitqueue(&inode->i_state, __I_LRU_ISOLATING);
++		spin_unlock(&inode->i_lock);
++		__wait_on_bit(wqh, &wq, bit_wait, TASK_UNINTERRUPTIBLE);
++		spin_lock(&inode->i_lock);
++		WARN_ON(inode->i_state & I_LRU_ISOLATING);
++	}
++	spin_unlock(&inode->i_lock);
++}
++
+ /**
+  * inode_sb_list_add - add inode to the superblock list of inodes
+  * @inode: inode to add
+@@ -565,6 +598,8 @@ static void evict(struct inode *inode)
+ 
+ 	inode_sb_list_del(inode);
+ 
++	inode_wait_for_lru_isolating(inode);
++
+ 	/*
+ 	 * Wait for flusher thread to be done with the inode so that filesystem
+ 	 * does not start destroying it while writeback is still running. Since
+@@ -764,7 +799,7 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
+ 	}
+ 
+ 	if (inode_has_buffers(inode) || inode->i_data.nrpages) {
+-		__iget(inode);
++		inode_pin_lru_isolating(inode);
+ 		spin_unlock(&inode->i_lock);
+ 		spin_unlock(lru_lock);
+ 		if (remove_inode_buffers(inode)) {
+@@ -777,7 +812,7 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
+ 			if (current->reclaim_state)
+ 				current->reclaim_state->reclaimed_slab += reap;
+ 		}
+-		iput(inode);
++		inode_unpin_lru_isolating(inode);
+ 		spin_lock(lru_lock);
+ 		return LRU_RETRY;
+ 	}
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index ed6a3ed83755d..f2da20ce68754 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -2000,6 +2000,14 @@ pnfs_update_layout(struct inode *ino,
+ 	}
+ 
+ lookup_again:
++	if (!nfs4_valid_open_stateid(ctx->state)) {
++		trace_pnfs_update_layout(ino, pos, count,
++					 iomode, lo, lseg,
++					 PNFS_UPDATE_LAYOUT_INVALID_OPEN);
++		lseg = ERR_PTR(-EIO);
++		goto out;
++	}
++
+ 	lseg = ERR_PTR(nfs4_client_recover_expired_lease(clp));
+ 	if (IS_ERR(lseg))
+ 		goto out;
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index 244e4258ce16f..4440ff43cb66d 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -53,9 +53,10 @@ static struct file *ovl_open_realfile(const struct file *file,
+ 	err = inode_permission(realinode, MAY_OPEN | acc_mode);
+ 	if (err) {
+ 		realfile = ERR_PTR(err);
+-	} else if (!inode_owner_or_capable(realinode)) {
+-		realfile = ERR_PTR(-EPERM);
+ 	} else {
++		if (!inode_owner_or_capable(realinode))
++			flags &= ~O_NOATIME;
++
+ 		realfile = open_with_fake_path(&file->f_path, flags, realinode,
+ 					       current_cred());
+ 	}
+@@ -75,12 +76,6 @@ static int ovl_change_flags(struct file *file, unsigned int flags)
+ 	struct inode *inode = file_inode(file);
+ 	int err;
+ 
+-	flags |= OVL_OPEN_FLAGS;
+-
+-	/* If some flag changed that cannot be changed then something's amiss */
+-	if (WARN_ON((file->f_flags ^ flags) & ~OVL_SETFL_MASK))
+-		return -EIO;
+-
+ 	flags &= OVL_SETFL_MASK;
+ 
+ 	if (((flags ^ file->f_flags) & O_APPEND) && IS_APPEND(inode))
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index 6a7b7d44753a3..9b8babbd1653c 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -997,9 +997,8 @@ struct dquot *dqget(struct super_block *sb, struct kqid qid)
+ 	 * smp_mb__before_atomic() in dquot_acquire().
+ 	 */
+ 	smp_rmb();
+-#ifdef CONFIG_QUOTA_DEBUG
+-	BUG_ON(!dquot->dq_sb);	/* Has somebody invalidated entry under us? */
+-#endif
++	/* Has somebody invalidated entry under us? */
++	WARN_ON_ONCE(hlist_unhashed(&dquot->dq_hash));
+ out:
+ 	if (empty)
+ 		do_destroy_dquot(empty);
+diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h
+index 5a9786e6b5546..c26ce73dcb08a 100644
+--- a/include/linux/binfmts.h
++++ b/include/linux/binfmts.h
+@@ -73,6 +73,10 @@ struct linux_binprm {
+ #define BINPRM_FLAGS_PATH_INACCESSIBLE_BIT 2
+ #define BINPRM_FLAGS_PATH_INACCESSIBLE (1 << BINPRM_FLAGS_PATH_INACCESSIBLE_BIT)
+ 
++/* preserve argv0 for the interpreter  */
++#define BINPRM_FLAGS_PRESERVE_ARGV0_BIT 3
++#define BINPRM_FLAGS_PRESERVE_ARGV0 (1 << BINPRM_FLAGS_PRESERVE_ARGV0_BIT)
++
+ /* Function parameter for binfmt->coredump */
+ struct coredump_params {
+ 	const kernel_siginfo_t *siginfo;
+diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
+index c4f6a9270c03c..29b19d2ba5bac 100644
+--- a/include/linux/bitmap.h
++++ b/include/linux/bitmap.h
+@@ -240,22 +240,24 @@ extern int bitmap_print_to_pagebuf(bool list, char *buf,
+ #define small_const_nbits(nbits) \
+ 	(__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG && (nbits) > 0)
+ 
++#define bitmap_size(nbits)	(ALIGN(nbits, BITS_PER_LONG) / BITS_PER_BYTE)
++
+ static inline void bitmap_zero(unsigned long *dst, unsigned int nbits)
+ {
+-	unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
++	unsigned int len = bitmap_size(nbits);
+ 	memset(dst, 0, len);
+ }
+ 
+ static inline void bitmap_fill(unsigned long *dst, unsigned int nbits)
+ {
+-	unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
++	unsigned int len = bitmap_size(nbits);
+ 	memset(dst, 0xff, len);
+ }
+ 
+ static inline void bitmap_copy(unsigned long *dst, const unsigned long *src,
+ 			unsigned int nbits)
+ {
+-	unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
++	unsigned int len = bitmap_size(nbits);
+ 	memcpy(dst, src, len);
+ }
+ 
+@@ -270,6 +272,18 @@ static inline void bitmap_copy_clear_tail(unsigned long *dst,
+ 		dst[nbits / BITS_PER_LONG] &= BITMAP_LAST_WORD_MASK(nbits);
+ }
+ 
++static inline void bitmap_copy_and_extend(unsigned long *to,
++					  const unsigned long *from,
++					  unsigned int count, unsigned int size)
++{
++	unsigned int copy = BITS_TO_LONGS(count);
++
++	memcpy(to, from, copy * sizeof(long));
++	if (count % BITS_PER_LONG)
++		to[copy - 1] &= BITMAP_LAST_WORD_MASK(count);
++	memset(to + copy, 0, bitmap_size(size) - copy * sizeof(long));
++}
++
+ /*
+  * On 32-bit systems bitmaps are represented as u32 arrays internally, and
+  * therefore conversion is not needed when copying data from/to arrays of u32.
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index a47e1aebaff24..e5f11dae208dd 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -59,7 +59,7 @@ struct blk_keyslot_manager;
+  */
+ #define BLKCG_MAX_POLS		5
+ 
+-static inline int blk_validate_block_size(unsigned int bsize)
++static inline int blk_validate_block_size(unsigned long bsize)
+ {
+ 	if (bsize < 512 || bsize > PAGE_SIZE || !is_power_of_2(bsize))
+ 		return -EINVAL;
+diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
+index f0d895d6ac39f..bb29102ec9662 100644
+--- a/include/linux/cpumask.h
++++ b/include/linux/cpumask.h
+@@ -690,7 +690,7 @@ static inline int cpulist_parse(const char *buf, struct cpumask *dstp)
+  */
+ static inline unsigned int cpumask_size(void)
+ {
+-	return BITS_TO_LONGS(nr_cpumask_bits) * sizeof(long);
++	return bitmap_size(nr_cpumask_bits);
+ }
+ 
+ /*
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 6a26ef54ac25d..2e202f01c38d0 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2249,6 +2249,9 @@ static inline void kiocb_clone(struct kiocb *kiocb, struct kiocb *kiocb_src,
+  *			Used to detect that mark_inode_dirty() should not move
+  * 			inode between dirty lists.
+  *
++ * I_LRU_ISOLATING	Inode is pinned being isolated from LRU without holding
++ *			i_count.
++ *
+  * Q: What is the difference between I_WILL_FREE and I_FREEING?
+  */
+ #define I_DIRTY_SYNC		(1 << 0)
+@@ -2271,6 +2274,8 @@ static inline void kiocb_clone(struct kiocb *kiocb, struct kiocb *kiocb_src,
+ #define I_CREATING		(1 << 15)
+ #define I_DONTCACHE		(1 << 16)
+ #define I_SYNC_QUEUED		(1 << 17)
++#define __I_LRU_ISOLATING	19
++#define I_LRU_ISOLATING		(1 << __I_LRU_ISOLATING)
+ 
+ #define I_DIRTY_INODE (I_DIRTY_SYNC | I_DIRTY_DATASYNC)
+ #define I_DIRTY (I_DIRTY_INODE | I_DIRTY_PAGES)
+diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
+index 36e5e75e71720..be01eda3b6ff7 100644
+--- a/include/net/busy_poll.h
++++ b/include/net/busy_poll.h
+@@ -61,7 +61,7 @@ static inline bool sk_can_busy_loop(struct sock *sk)
+ static inline unsigned long busy_loop_current_time(void)
+ {
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+-	return (unsigned long)(local_clock() >> 10);
++	return (unsigned long)(ktime_get_ns() >> 10);
+ #else
+ 	return 0;
+ #endif
+diff --git a/include/net/kcm.h b/include/net/kcm.h
+index 2d704f8f49059..8e8252e08a9ce 100644
+--- a/include/net/kcm.h
++++ b/include/net/kcm.h
+@@ -70,6 +70,7 @@ struct kcm_sock {
+ 	struct work_struct tx_work;
+ 	struct list_head wait_psock_list;
+ 	struct sk_buff *seq_skb;
++	struct mutex tx_mutex;
+ 	u32 tx_stopped : 1;
+ 
+ 	/* Don't use bit fields here, these are set under different locks */
+diff --git a/include/uapi/linux/binfmts.h b/include/uapi/linux/binfmts.h
+index 689025d9c185b..c6f9450efc120 100644
+--- a/include/uapi/linux/binfmts.h
++++ b/include/uapi/linux/binfmts.h
+@@ -18,4 +18,8 @@ struct pt_regs;
+ /* sizeof(linux_binprm->buf) */
+ #define BINPRM_BUF_SIZE 256
+ 
++/* preserve argv0 for the interpreter  */
++#define AT_FLAGS_PRESERVE_ARGV0_BIT 0
++#define AT_FLAGS_PRESERVE_ARGV0 (1 << AT_FLAGS_PRESERVE_ARGV0_BIT)
++
+ #endif /* _UAPI_LINUX_BINFMTS_H */
+diff --git a/ipc/util.c b/ipc/util.c
+index bbb5190af6d9f..7c3601dad9bd5 100644
+--- a/ipc/util.c
++++ b/ipc/util.c
+@@ -754,21 +754,13 @@ struct pid_namespace *ipc_seq_pid_ns(struct seq_file *s)
+ static struct kern_ipc_perm *sysvipc_find_ipc(struct ipc_ids *ids, loff_t pos,
+ 					      loff_t *new_pos)
+ {
+-	struct kern_ipc_perm *ipc;
+-	int total, id;
+-
+-	total = 0;
+-	for (id = 0; id < pos && total < ids->in_use; id++) {
+-		ipc = idr_find(&ids->ipcs_idr, id);
+-		if (ipc != NULL)
+-			total++;
+-	}
++	struct kern_ipc_perm *ipc = NULL;
++	int max_idx = ipc_get_maxidx(ids);
+ 
+-	ipc = NULL;
+-	if (total >= ids->in_use)
++	if (max_idx == -1 || pos > max_idx)
+ 		goto out;
+ 
+-	for (; pos < ipc_mni; pos++) {
++	for (; pos <= max_idx; pos++) {
+ 		ipc = idr_find(&ids->ipcs_idr, pos);
+ 		if (ipc != NULL) {
+ 			rcu_read_lock();
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 9f2a93c829a91..731547a0d057a 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -22,6 +22,7 @@
+  *  distribution for more details.
+  */
+ 
++#include "cgroup-internal.h"
+ #include <linux/cpu.h>
+ #include <linux/cpumask.h>
+ #include <linux/cpuset.h>
+@@ -3725,10 +3726,14 @@ int proc_cpuset_show(struct seq_file *m, struct pid_namespace *ns,
+ 	if (!buf)
+ 		goto out;
+ 
+-	css = task_get_css(tsk, cpuset_cgrp_id);
+-	retval = cgroup_path_ns(css->cgroup, buf, PATH_MAX,
+-				current->nsproxy->cgroup_ns);
+-	css_put(css);
++	rcu_read_lock();
++	spin_lock_irq(&css_set_lock);
++	css = task_css(tsk, cpuset_cgrp_id);
++	retval = cgroup_path_ns_locked(css->cgroup, buf, PATH_MAX,
++				       current->nsproxy->cgroup_ns);
++	spin_unlock_irq(&css_set_lock);
++	rcu_read_unlock();
++
+ 	if (retval >= PATH_MAX)
+ 		retval = -ENAMETOOLONG;
+ 	if (retval < 0)
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 2b2a6e29219dc..16f1e747c5673 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -1182,6 +1182,8 @@ void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ 	struct hrtimer_clock_base *base;
+ 	unsigned long flags;
+ 
++	if (WARN_ON_ONCE(!timer->function))
++		return;
+ 	/*
+ 	 * Check whether the HRTIMER_MODE_SOFT bit and hrtimer.is_soft
+ 	 * match on CONFIG_PREEMPT_RT = n. With PREEMPT_RT check the hard
+diff --git a/lib/math/prime_numbers.c b/lib/math/prime_numbers.c
+index d42cebf7407fc..d3b64b10da1c5 100644
+--- a/lib/math/prime_numbers.c
++++ b/lib/math/prime_numbers.c
+@@ -6,8 +6,6 @@
+ #include <linux/prime_numbers.h>
+ #include <linux/slab.h>
+ 
+-#define bitmap_size(nbits) (BITS_TO_LONGS(nbits) * sizeof(unsigned long))
+-
+ struct primes {
+ 	struct rcu_head rcu;
+ 	unsigned long last, sz;
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 186ae9dba0fd5..874f91715296b 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -4884,9 +4884,12 @@ static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
+ 	buf = endp + 1;
+ 
+ 	cfd = simple_strtoul(buf, &endp, 10);
+-	if ((*endp != ' ') && (*endp != '\0'))
++	if (*endp == '\0')
++		buf = endp;
++	else if (*endp == ' ')
++		buf = endp + 1;
++	else
+ 		return -EINVAL;
+-	buf = endp + 1;
+ 
+ 	event = kzalloc(sizeof(*event), GFP_KERNEL);
+ 	if (!event)
+diff --git a/net/bluetooth/bnep/core.c b/net/bluetooth/bnep/core.c
+index 09b6d825124ee..f749904272961 100644
+--- a/net/bluetooth/bnep/core.c
++++ b/net/bluetooth/bnep/core.c
+@@ -385,7 +385,8 @@ static int bnep_rx_frame(struct bnep_session *s, struct sk_buff *skb)
+ 
+ 	case BNEP_COMPRESSED_DST_ONLY:
+ 		__skb_put_data(nskb, skb_mac_header(skb), ETH_ALEN);
+-		__skb_put_data(nskb, s->eh.h_source, ETH_ALEN + 2);
++		__skb_put_data(nskb, s->eh.h_source, ETH_ALEN);
++		put_unaligned(s->eh.h_proto, (__be16 *)__skb_put(nskb, 2));
+ 		break;
+ 
+ 	case BNEP_GENERAL:
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index c8c1cd55c0eb0..9787a4c551138 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -4685,19 +4685,19 @@ static void hci_sched_le(struct hci_dev *hdev)
+ {
+ 	struct hci_chan *chan;
+ 	struct sk_buff *skb;
+-	int quote, cnt, tmp;
++	int quote, *cnt, tmp;
+ 
+ 	BT_DBG("%s", hdev->name);
+ 
+ 	if (!hci_conn_num(hdev, LE_LINK))
+ 		return;
+ 
+-	cnt = hdev->le_pkts ? hdev->le_cnt : hdev->acl_cnt;
++	cnt = hdev->le_pkts ? &hdev->le_cnt : &hdev->acl_cnt;
+ 
+-	__check_timeout(hdev, cnt, LE_LINK);
++	__check_timeout(hdev, *cnt, LE_LINK);
+ 
+-	tmp = cnt;
+-	while (cnt && (chan = hci_chan_sent(hdev, LE_LINK, &quote))) {
++	tmp = *cnt;
++	while (*cnt && (chan = hci_chan_sent(hdev, LE_LINK, &quote))) {
+ 		u32 priority = (skb_peek(&chan->data_q))->priority;
+ 		while (quote-- && (skb = skb_peek(&chan->data_q))) {
+ 			BT_DBG("chan %p skb %p len %d priority %u", chan, skb,
+@@ -4712,7 +4712,7 @@ static void hci_sched_le(struct hci_dev *hdev)
+ 			hci_send_frame(hdev, skb);
+ 			hdev->le_last_tx = jiffies;
+ 
+-			cnt--;
++			(*cnt)--;
+ 			chan->sent++;
+ 			chan->conn->sent++;
+ 
+@@ -4722,12 +4722,7 @@ static void hci_sched_le(struct hci_dev *hdev)
+ 		}
+ 	}
+ 
+-	if (hdev->le_pkts)
+-		hdev->le_cnt = cnt;
+-	else
+-		hdev->acl_cnt = cnt;
+-
+-	if (cnt != tmp)
++	if (*cnt != tmp)
+ 		hci_prio_recalculate(hdev, LE_LINK);
+ }
+ 
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index bd8cfcfca7aef..0078e33e12ba9 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -2962,6 +2962,10 @@ static int pair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 		 * will be kept and this function does nothing.
+ 		 */
+ 		p = hci_conn_params_add(hdev, &cp->addr.bdaddr, addr_type);
++		if (!p) {
++			err = -EIO;
++			goto unlock;
++		}
+ 
+ 		if (p->auto_connect == HCI_AUTO_CONN_EXPLICIT)
+ 			p->auto_connect = HCI_AUTO_CONN_DISABLED;
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 27381e7425a8f..20cae8f768762 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -914,7 +914,7 @@ static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth,
+ 	 * Confirms and the responder Enters the passkey.
+ 	 */
+ 	if (smp->method == OVERLAP) {
+-		if (hcon->role == HCI_ROLE_MASTER)
++		if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 			smp->method = CFM_PASSKEY;
+ 		else
+ 			smp->method = REQ_PASSKEY;
+@@ -964,7 +964,7 @@ static u8 smp_confirm(struct smp_chan *smp)
+ 
+ 	smp_send_cmd(smp->conn, SMP_CMD_PAIRING_CONFIRM, sizeof(cp), &cp);
+ 
+-	if (conn->hcon->out)
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_CONFIRM);
+ 	else
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
+@@ -980,7 +980,8 @@ static u8 smp_random(struct smp_chan *smp)
+ 	int ret;
+ 
+ 	bt_dev_dbg(conn->hcon->hdev, "conn %p %s", conn,
+-		   conn->hcon->out ? "initiator" : "responder");
++		   test_bit(SMP_FLAG_INITIATOR, &smp->flags) ? "initiator" :
++		   "responder");
+ 
+ 	ret = smp_c1(smp->tk, smp->rrnd, smp->preq, smp->prsp,
+ 		     hcon->init_addr_type, &hcon->init_addr,
+@@ -994,7 +995,7 @@ static u8 smp_random(struct smp_chan *smp)
+ 		return SMP_CONFIRM_FAILED;
+ 	}
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		u8 stk[16];
+ 		__le64 rand = 0;
+ 		__le16 ediv = 0;
+@@ -1251,14 +1252,15 @@ static void smp_distribute_keys(struct smp_chan *smp)
+ 	rsp = (void *) &smp->prsp[1];
+ 
+ 	/* The responder sends its keys first */
+-	if (hcon->out && (smp->remote_key_dist & KEY_DIST_MASK)) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags) &&
++	    (smp->remote_key_dist & KEY_DIST_MASK)) {
+ 		smp_allow_key_dist(smp);
+ 		return;
+ 	}
+ 
+ 	req = (void *) &smp->preq[1];
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		keydist = &rsp->init_key_dist;
+ 		*keydist &= req->init_key_dist;
+ 	} else {
+@@ -1427,7 +1429,7 @@ static int sc_mackey_and_ltk(struct smp_chan *smp, u8 mackey[16], u8 ltk[16])
+ 	struct hci_conn *hcon = smp->conn->hcon;
+ 	u8 *na, *nb, a[7], b[7];
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		na   = smp->prnd;
+ 		nb   = smp->rrnd;
+ 	} else {
+@@ -1455,7 +1457,7 @@ static void sc_dhkey_check(struct smp_chan *smp)
+ 	a[6] = hcon->init_addr_type;
+ 	b[6] = hcon->resp_addr_type;
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		local_addr = a;
+ 		remote_addr = b;
+ 		memcpy(io_cap, &smp->preq[1], 3);
+@@ -1534,7 +1536,7 @@ static u8 sc_passkey_round(struct smp_chan *smp, u8 smp_op)
+ 		/* The round is only complete when the initiator
+ 		 * receives pairing random.
+ 		 */
+-		if (!hcon->out) {
++		if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 			smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM,
+ 				     sizeof(smp->prnd), smp->prnd);
+ 			if (smp->passkey_round == 20)
+@@ -1562,7 +1564,7 @@ static u8 sc_passkey_round(struct smp_chan *smp, u8 smp_op)
+ 
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
+ 
+-		if (hcon->out) {
++		if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 			smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM,
+ 				     sizeof(smp->prnd), smp->prnd);
+ 			return 0;
+@@ -1573,7 +1575,7 @@ static u8 sc_passkey_round(struct smp_chan *smp, u8 smp_op)
+ 	case SMP_CMD_PUBLIC_KEY:
+ 	default:
+ 		/* Initiating device starts the round */
+-		if (!hcon->out)
++		if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 			return 0;
+ 
+ 		bt_dev_dbg(hdev, "Starting passkey round %u",
+@@ -1618,7 +1620,7 @@ static int sc_user_reply(struct smp_chan *smp, u16 mgmt_op, __le32 passkey)
+ 	}
+ 
+ 	/* Initiator sends DHKey check first */
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		sc_dhkey_check(smp);
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_DHKEY_CHECK);
+ 	} else if (test_and_clear_bit(SMP_FLAG_DHKEY_PENDING, &smp->flags)) {
+@@ -1741,7 +1743,7 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	struct smp_cmd_pairing rsp, *req = (void *) skb->data;
+ 	struct l2cap_chan *chan = conn->smp;
+ 	struct hci_dev *hdev = conn->hcon->hdev;
+-	struct smp_chan *smp;
++	struct smp_chan *smp = chan->data;
+ 	u8 key_size, auth, sec_level;
+ 	int ret;
+ 
+@@ -1750,16 +1752,14 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	if (skb->len < sizeof(*req))
+ 		return SMP_INVALID_PARAMS;
+ 
+-	if (conn->hcon->role != HCI_ROLE_SLAVE)
++	if (smp && test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 		return SMP_CMD_NOTSUPP;
+ 
+-	if (!chan->data)
++	if (!smp) {
+ 		smp = smp_chan_create(conn);
+-	else
+-		smp = chan->data;
+-
+-	if (!smp)
+-		return SMP_UNSPECIFIED;
++		if (!smp)
++			return SMP_UNSPECIFIED;
++	}
+ 
+ 	/* We didn't start the pairing, so match remote */
+ 	auth = req->auth_req & AUTH_REQ_MASK(hdev);
+@@ -1941,7 +1941,7 @@ static u8 smp_cmd_pairing_rsp(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	if (skb->len < sizeof(*rsp))
+ 		return SMP_INVALID_PARAMS;
+ 
+-	if (conn->hcon->role != HCI_ROLE_MASTER)
++	if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 		return SMP_CMD_NOTSUPP;
+ 
+ 	skb_pull(skb, sizeof(*rsp));
+@@ -2036,7 +2036,7 @@ static u8 sc_check_confirm(struct smp_chan *smp)
+ 	if (smp->method == REQ_PASSKEY || smp->method == DSP_PASSKEY)
+ 		return sc_passkey_round(smp, SMP_CMD_PAIRING_CONFIRM);
+ 
+-	if (conn->hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM, sizeof(smp->prnd),
+ 			     smp->prnd);
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
+@@ -2058,7 +2058,7 @@ static int fixup_sc_false_positive(struct smp_chan *smp)
+ 	u8 auth;
+ 
+ 	/* The issue is only observed when we're in responder role */
+-	if (hcon->out)
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 		return SMP_UNSPECIFIED;
+ 
+ 	if (hci_dev_test_flag(hdev, HCI_SC_ONLY)) {
+@@ -2094,7 +2094,8 @@ static u8 smp_cmd_pairing_confirm(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	struct hci_dev *hdev = hcon->hdev;
+ 
+ 	bt_dev_dbg(hdev, "conn %p %s", conn,
+-		   hcon->out ? "initiator" : "responder");
++		   test_bit(SMP_FLAG_INITIATOR, &smp->flags) ? "initiator" :
++		   "responder");
+ 
+ 	if (skb->len < sizeof(smp->pcnf))
+ 		return SMP_INVALID_PARAMS;
+@@ -2116,7 +2117,7 @@ static u8 smp_cmd_pairing_confirm(struct l2cap_conn *conn, struct sk_buff *skb)
+ 			return ret;
+ 	}
+ 
+-	if (conn->hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM, sizeof(smp->prnd),
+ 			     smp->prnd);
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
+@@ -2151,7 +2152,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	if (!test_bit(SMP_FLAG_SC, &smp->flags))
+ 		return smp_random(smp);
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		pkax = smp->local_pk;
+ 		pkbx = smp->remote_pk;
+ 		na   = smp->prnd;
+@@ -2164,7 +2165,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	}
+ 
+ 	if (smp->method == REQ_OOB) {
+-		if (!hcon->out)
++		if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 			smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM,
+ 				     sizeof(smp->prnd), smp->prnd);
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_DHKEY_CHECK);
+@@ -2175,7 +2176,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	if (smp->method == REQ_PASSKEY || smp->method == DSP_PASSKEY)
+ 		return sc_passkey_round(smp, SMP_CMD_PAIRING_RANDOM);
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		u8 cfm[16];
+ 
+ 		err = smp_f4(smp->tfm_cmac, smp->remote_pk, smp->local_pk,
+@@ -2216,7 +2217,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
+ 		return SMP_UNSPECIFIED;
+ 
+ 	if (smp->method == REQ_OOB) {
+-		if (hcon->out) {
++		if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 			sc_dhkey_check(smp);
+ 			SMP_ALLOW_CMD(smp, SMP_CMD_DHKEY_CHECK);
+ 		}
+@@ -2290,10 +2291,27 @@ bool smp_sufficient_security(struct hci_conn *hcon, u8 sec_level,
+ 	return false;
+ }
+ 
++static void smp_send_pairing_req(struct smp_chan *smp, __u8 auth)
++{
++	struct smp_cmd_pairing cp;
++
++	if (smp->conn->hcon->type == ACL_LINK)
++		build_bredr_pairing_cmd(smp, &cp, NULL);
++	else
++		build_pairing_cmd(smp->conn, &cp, NULL, auth);
++
++	smp->preq[0] = SMP_CMD_PAIRING_REQ;
++	memcpy(&smp->preq[1], &cp, sizeof(cp));
++
++	smp_send_cmd(smp->conn, SMP_CMD_PAIRING_REQ, sizeof(cp), &cp);
++	SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
++
++	set_bit(SMP_FLAG_INITIATOR, &smp->flags);
++}
++
+ static u8 smp_cmd_security_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ {
+ 	struct smp_cmd_security_req *rp = (void *) skb->data;
+-	struct smp_cmd_pairing cp;
+ 	struct hci_conn *hcon = conn->hcon;
+ 	struct hci_dev *hdev = hcon->hdev;
+ 	struct smp_chan *smp;
+@@ -2342,16 +2360,20 @@ static u8 smp_cmd_security_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ 
+ 	skb_pull(skb, sizeof(*rp));
+ 
+-	memset(&cp, 0, sizeof(cp));
+-	build_pairing_cmd(conn, &cp, NULL, auth);
++	smp_send_pairing_req(smp, auth);
+ 
+-	smp->preq[0] = SMP_CMD_PAIRING_REQ;
+-	memcpy(&smp->preq[1], &cp, sizeof(cp));
++	return 0;
++}
+ 
+-	smp_send_cmd(conn, SMP_CMD_PAIRING_REQ, sizeof(cp), &cp);
+-	SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
++static void smp_send_security_req(struct smp_chan *smp, __u8 auth)
++{
++	struct smp_cmd_security_req cp;
+ 
+-	return 0;
++	cp.auth_req = auth;
++	smp_send_cmd(smp->conn, SMP_CMD_SECURITY_REQ, sizeof(cp), &cp);
++	SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_REQ);
++
++	clear_bit(SMP_FLAG_INITIATOR, &smp->flags);
+ }
+ 
+ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level)
+@@ -2422,23 +2444,11 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level)
+ 			authreq |= SMP_AUTH_MITM;
+ 	}
+ 
+-	if (hcon->role == HCI_ROLE_MASTER) {
+-		struct smp_cmd_pairing cp;
+-
+-		build_pairing_cmd(conn, &cp, NULL, authreq);
+-		smp->preq[0] = SMP_CMD_PAIRING_REQ;
+-		memcpy(&smp->preq[1], &cp, sizeof(cp));
+-
+-		smp_send_cmd(conn, SMP_CMD_PAIRING_REQ, sizeof(cp), &cp);
+-		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
+-	} else {
+-		struct smp_cmd_security_req cp;
+-		cp.auth_req = authreq;
+-		smp_send_cmd(conn, SMP_CMD_SECURITY_REQ, sizeof(cp), &cp);
+-		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_REQ);
+-	}
++	if (hcon->role == HCI_ROLE_MASTER)
++		smp_send_pairing_req(smp, authreq);
++	else
++		smp_send_security_req(smp, authreq);
+ 
+-	set_bit(SMP_FLAG_INITIATOR, &smp->flags);
+ 	ret = 0;
+ 
+ unlock:
+@@ -2689,8 +2699,6 @@ static int smp_cmd_sign_info(struct l2cap_conn *conn, struct sk_buff *skb)
+ 
+ static u8 sc_select_method(struct smp_chan *smp)
+ {
+-	struct l2cap_conn *conn = smp->conn;
+-	struct hci_conn *hcon = conn->hcon;
+ 	struct smp_cmd_pairing *local, *remote;
+ 	u8 local_mitm, remote_mitm, local_io, remote_io, method;
+ 
+@@ -2703,7 +2711,7 @@ static u8 sc_select_method(struct smp_chan *smp)
+ 	 * the "struct smp_cmd_pairing" from them we need to skip the
+ 	 * first byte which contains the opcode.
+ 	 */
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		local = (void *) &smp->preq[1];
+ 		remote = (void *) &smp->prsp[1];
+ 	} else {
+@@ -2772,7 +2780,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	/* Non-initiating device sends its public key after receiving
+ 	 * the key from the initiating device.
+ 	 */
+-	if (!hcon->out) {
++	if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		err = sc_send_public_key(smp);
+ 		if (err)
+ 			return err;
+@@ -2834,7 +2842,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	}
+ 
+ 	if (smp->method == REQ_OOB) {
+-		if (hcon->out)
++		if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 			smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM,
+ 				     sizeof(smp->prnd), smp->prnd);
+ 
+@@ -2843,7 +2851,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 		return 0;
+ 	}
+ 
+-	if (hcon->out)
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_CONFIRM);
+ 
+ 	if (smp->method == REQ_PASSKEY) {
+@@ -2858,7 +2866,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	/* The Initiating device waits for the non-initiating device to
+ 	 * send the confirm value.
+ 	 */
+-	if (conn->hcon->out)
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 		return 0;
+ 
+ 	err = smp_f4(smp->tfm_cmac, smp->local_pk, smp->remote_pk, smp->prnd,
+@@ -2892,7 +2900,7 @@ static int smp_cmd_dhkey_check(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	a[6] = hcon->init_addr_type;
+ 	b[6] = hcon->resp_addr_type;
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		local_addr = a;
+ 		remote_addr = b;
+ 		memcpy(io_cap, &smp->prsp[1], 3);
+@@ -2917,7 +2925,7 @@ static int smp_cmd_dhkey_check(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	if (crypto_memneq(check->e, e, 16))
+ 		return SMP_DHKEY_CHECK_FAILED;
+ 
+-	if (!hcon->out) {
++	if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		if (test_bit(SMP_FLAG_WAIT_USER, &smp->flags)) {
+ 			set_bit(SMP_FLAG_DHKEY_PENDING, &smp->flags);
+ 			return 0;
+@@ -2929,7 +2937,7 @@ static int smp_cmd_dhkey_check(struct l2cap_conn *conn, struct sk_buff *skb)
+ 
+ 	sc_add_ltk(smp);
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		hci_le_start_enc(hcon, 0, 0, smp->tk, smp->enc_key_size);
+ 		hcon->enc_key_size = smp->enc_key_size;
+ 	}
+@@ -3078,7 +3086,6 @@ static void bredr_pairing(struct l2cap_chan *chan)
+ 	struct l2cap_conn *conn = chan->conn;
+ 	struct hci_conn *hcon = conn->hcon;
+ 	struct hci_dev *hdev = hcon->hdev;
+-	struct smp_cmd_pairing req;
+ 	struct smp_chan *smp;
+ 
+ 	bt_dev_dbg(hdev, "chan %p", chan);
+@@ -3130,14 +3137,7 @@ static void bredr_pairing(struct l2cap_chan *chan)
+ 
+ 	bt_dev_dbg(hdev, "starting SMP over BR/EDR");
+ 
+-	/* Prepare and send the BR/EDR SMP Pairing Request */
+-	build_bredr_pairing_cmd(smp, &req, NULL);
+-
+-	smp->preq[0] = SMP_CMD_PAIRING_REQ;
+-	memcpy(&smp->preq[1], &req, sizeof(req));
+-
+-	smp_send_cmd(conn, SMP_CMD_PAIRING_REQ, sizeof(req), &req);
+-	SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
++	smp_send_pairing_req(smp, 0x00);
+ }
+ 
+ static void smp_resume_cb(struct l2cap_chan *chan)
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index 989b3f7ee85f4..99303897b7bb7 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -213,7 +213,7 @@ static ssize_t speed_show(struct device *dev,
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+-	if (netif_running(netdev) && netif_device_present(netdev)) {
++	if (netif_running(netdev)) {
+ 		struct ethtool_link_ksettings cmd;
+ 
+ 		if (!__ethtool_get_link_ksettings(netdev, &cmd))
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 12bf740e2fb31..0a588545d3526 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -432,6 +432,9 @@ int __ethtool_get_link_ksettings(struct net_device *dev,
+ 	if (!dev->ethtool_ops->get_link_ksettings)
+ 		return -EOPNOTSUPP;
+ 
++	if (!netif_device_present(dev))
++		return -ENODEV;
++
+ 	memset(link_ksettings, 0, sizeof(*link_ksettings));
+ 	return dev->ethtool_ops->get_link_ksettings(dev, link_ksettings);
+ }
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 32512b8ca5e72..436733021b1e9 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1952,6 +1952,7 @@ int ip6_send_skb(struct sk_buff *skb)
+ 	struct rt6_info *rt = (struct rt6_info *)skb_dst(skb);
+ 	int err;
+ 
++	rcu_read_lock();
+ 	err = ip6_local_out(net, skb->sk, skb);
+ 	if (err) {
+ 		if (err > 0)
+@@ -1961,6 +1962,7 @@ int ip6_send_skb(struct sk_buff *skb)
+ 				      IPSTATS_MIB_OUTDISCARDS);
+ 	}
+ 
++	rcu_read_unlock();
+ 	return err;
+ }
+ 
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index d1f8192384147..bd27204725ed8 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1531,7 +1531,8 @@ static void ip6_tnl_link_config(struct ip6_tnl *t)
+ 			tdev = __dev_get_by_index(t->net, p->link);
+ 
+ 		if (tdev) {
+-			dev->hard_header_len = tdev->hard_header_len + t_hlen;
++			dev->needed_headroom = tdev->hard_header_len +
++				tdev->needed_headroom + t_hlen;
+ 			mtu = min_t(unsigned int, tdev->mtu, IP6_MAX_MTU);
+ 
+ 			mtu = mtu - t_hlen;
+@@ -1758,7 +1759,9 @@ ip6_tnl_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ int ip6_tnl_change_mtu(struct net_device *dev, int new_mtu)
+ {
+ 	struct ip6_tnl *tnl = netdev_priv(dev);
++	int t_hlen;
+ 
++	t_hlen = tnl->hlen + sizeof(struct ipv6hdr);
+ 	if (tnl->parms.proto == IPPROTO_IPV6) {
+ 		if (new_mtu < IPV6_MIN_MTU)
+ 			return -EINVAL;
+@@ -1767,10 +1770,10 @@ int ip6_tnl_change_mtu(struct net_device *dev, int new_mtu)
+ 			return -EINVAL;
+ 	}
+ 	if (tnl->parms.proto == IPPROTO_IPV6 || tnl->parms.proto == 0) {
+-		if (new_mtu > IP6_MAX_MTU - dev->hard_header_len)
++		if (new_mtu > IP6_MAX_MTU - dev->hard_header_len - t_hlen)
+ 			return -EINVAL;
+ 	} else {
+-		if (new_mtu > IP_MAX_MTU - dev->hard_header_len)
++		if (new_mtu > IP_MAX_MTU - dev->hard_header_len - t_hlen)
+ 			return -EINVAL;
+ 	}
+ 	dev->mtu = new_mtu;
+@@ -1916,12 +1919,11 @@ ip6_tnl_dev_init_gen(struct net_device *dev)
+ 	t_hlen = t->hlen + sizeof(struct ipv6hdr);
+ 
+ 	dev->type = ARPHRD_TUNNEL6;
+-	dev->hard_header_len = LL_MAX_HEADER + t_hlen;
+ 	dev->mtu = ETH_DATA_LEN - t_hlen;
+ 	if (!(t->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT))
+ 		dev->mtu -= 8;
+ 	dev->min_mtu = ETH_MIN_MTU;
+-	dev->max_mtu = IP6_MAX_MTU - dev->hard_header_len;
++	dev->max_mtu = IP6_MAX_MTU - dev->hard_header_len - t_hlen;
+ 
+ 	dev_hold(dev);
+ 	return 0;
+diff --git a/net/iucv/iucv.c b/net/iucv/iucv.c
+index 06770b77e5d22..be5f598acbce2 100644
+--- a/net/iucv/iucv.c
++++ b/net/iucv/iucv.c
+@@ -1088,8 +1088,7 @@ static int iucv_message_receive_iprmdata(struct iucv_path *path,
+ 		size = (size < 8) ? size : 8;
+ 		for (array = buffer; size > 0; array++) {
+ 			copy = min_t(size_t, size, array->length);
+-			memcpy((u8 *)(addr_t) array->address,
+-				rmmsg, copy);
++			memcpy(phys_to_virt(array->address), rmmsg, copy);
+ 			rmmsg += copy;
+ 			size -= copy;
+ 		}
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 7420b4f19b45e..2f2840aa4a812 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -911,6 +911,7 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 		  !(msg->msg_flags & MSG_MORE) : !!(msg->msg_flags & MSG_EOR);
+ 	int err = -EPIPE;
+ 
++	mutex_lock(&kcm->tx_mutex);
+ 	lock_sock(sk);
+ 
+ 	/* Per tcp_sendmsg this should be in poll */
+@@ -1059,6 +1060,7 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 	KCM_STATS_ADD(kcm->stats.tx_bytes, copied);
+ 
+ 	release_sock(sk);
++	mutex_unlock(&kcm->tx_mutex);
+ 	return copied;
+ 
+ out_error:
+@@ -1084,6 +1086,7 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 		sk->sk_write_space(sk);
+ 
+ 	release_sock(sk);
++	mutex_unlock(&kcm->tx_mutex);
+ 	return err;
+ }
+ 
+@@ -1326,6 +1329,7 @@ static void init_kcm_sock(struct kcm_sock *kcm, struct kcm_mux *mux)
+ 	spin_unlock_bh(&mux->lock);
+ 
+ 	INIT_WORK(&kcm->tx_work, kcm_tx_work);
++	mutex_init(&kcm->tx_mutex);
+ 
+ 	spin_lock_bh(&mux->rx_lock);
+ 	kcm_rcv_ready(kcm);
+diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
+index 92e5812daf892..4b4ab1961068f 100644
+--- a/net/mac80211/agg-tx.c
++++ b/net/mac80211/agg-tx.c
+@@ -491,7 +491,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
+ {
+ 	struct tid_ampdu_tx *tid_tx;
+ 	struct ieee80211_local *local = sta->local;
+-	struct ieee80211_sub_if_data *sdata;
++	struct ieee80211_sub_if_data *sdata = sta->sdata;
+ 	struct ieee80211_ampdu_params params = {
+ 		.sta = &sta->sta,
+ 		.action = IEEE80211_AMPDU_TX_START,
+@@ -521,7 +521,6 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
+ 	 */
+ 	synchronize_net();
+ 
+-	sdata = sta->sdata;
+ 	params.ssn = sta->tid_seq[tid] >> 4;
+ 	ret = drv_ampdu_action(local, sdata, &params);
+ 	tid_tx->ssn = params.ssn;
+@@ -535,9 +534,6 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
+ 		 */
+ 		set_bit(HT_AGG_STATE_DRV_READY, &tid_tx->state);
+ 	} else if (ret) {
+-		if (!sdata)
+-			return;
+-
+ 		ht_dbg(sdata,
+ 		       "BA request denied - HW unavailable for %pM tid %d\n",
+ 		       sta->sta.addr, tid);
+diff --git a/net/mac80211/driver-ops.c b/net/mac80211/driver-ops.c
+index 120bd9cdf7dfa..48322e45e7ddb 100644
+--- a/net/mac80211/driver-ops.c
++++ b/net/mac80211/driver-ops.c
+@@ -331,9 +331,6 @@ int drv_ampdu_action(struct ieee80211_local *local,
+ 
+ 	might_sleep();
+ 
+-	if (!sdata)
+-		return -EIO;
+-
+ 	sdata = get_bss_sdata(sdata);
+ 	if (!check_sdata_in_driver(sdata))
+ 		return -EIO;
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index f7637176d719d..3bb7a3314788e 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -1064,6 +1064,20 @@ static void __sta_info_destroy_part2(struct sta_info *sta)
+ 	 *	 after _part1 and before _part2!
+ 	 */
+ 
++	/*
++	 * There's a potential race in _part1 where we set WLAN_STA_BLOCK_BA
++	 * but someone might have just gotten past a check, and not yet into
++	 * queuing the work/creating the data/etc.
++	 *
++	 * Do another round of destruction so that the worker is certainly
++	 * canceled before we later free the station.
++	 *
++	 * Since this is after synchronize_rcu()/synchronize_net() we're now
++	 * certain that nobody can actually hold a reference to the STA and
++	 * be calling e.g. ieee80211_start_tx_ba_session().
++	 */
++	ieee80211_sta_tear_down_BA_sessions(sta, AGG_STOP_DESTROY_STA);
++
+ 	might_sleep();
+ 	lockdep_assert_held(&local->sta_mtx);
+ 
+diff --git a/net/mptcp/diag.c b/net/mptcp/diag.c
+index d7ca71c597545..23bd18084c8a2 100644
+--- a/net/mptcp/diag.c
++++ b/net/mptcp/diag.c
+@@ -95,7 +95,7 @@ static size_t subflow_get_info_size(const struct sock *sk)
+ 		nla_total_size(4) +	/* MPTCP_SUBFLOW_ATTR_RELWRITE_SEQ */
+ 		nla_total_size_64bit(8) +	/* MPTCP_SUBFLOW_ATTR_MAP_SEQ */
+ 		nla_total_size(4) +	/* MPTCP_SUBFLOW_ATTR_MAP_SFSEQ */
+-		nla_total_size(2) +	/* MPTCP_SUBFLOW_ATTR_SSN_OFFSET */
++		nla_total_size(4) +	/* MPTCP_SUBFLOW_ATTR_SSN_OFFSET */
+ 		nla_total_size(2) +	/* MPTCP_SUBFLOW_ATTR_MAP_DATALEN */
+ 		nla_total_size(4) +	/* MPTCP_SUBFLOW_ATTR_FLAGS */
+ 		nla_total_size(1) +	/* MPTCP_SUBFLOW_ATTR_ID_REM */
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index a343b30774588..0ef6a99b62b0d 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1664,7 +1664,7 @@ static struct sock *mptcp_subflow_get_retrans(const struct mptcp_sock *msk)
+ 			return NULL;
+ 		}
+ 
+-		if (subflow->backup) {
++		if (subflow->backup || subflow->request_bkup) {
+ 			if (!backup)
+ 				backup = ssk;
+ 			continue;
+diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
+index 746ca77d0aad6..f6275d93f8a51 100644
+--- a/net/netfilter/nf_flow_table_offload.c
++++ b/net/netfilter/nf_flow_table_offload.c
+@@ -682,8 +682,8 @@ static int nf_flow_offload_tuple(struct nf_flowtable *flowtable,
+ 				 struct list_head *block_cb_list)
+ {
+ 	struct flow_cls_offload cls_flow = {};
++	struct netlink_ext_ack extack = {};
+ 	struct flow_block_cb *block_cb;
+-	struct netlink_ext_ack extack;
+ 	__be16 proto = ETH_P_ALL;
+ 	int err, i = 0;
+ 
+diff --git a/net/netfilter/nft_counter.c b/net/netfilter/nft_counter.c
+index 75fa6fcd6cd6c..ea102b9a168b8 100644
+--- a/net/netfilter/nft_counter.c
++++ b/net/netfilter/nft_counter.c
+@@ -105,11 +105,16 @@ static void nft_counter_reset(struct nft_counter_percpu_priv *priv,
+ 			      struct nft_counter *total)
+ {
+ 	struct nft_counter *this_cpu;
++	seqcount_t *myseq;
+ 
+ 	local_bh_disable();
+ 	this_cpu = this_cpu_ptr(priv->counter);
++	myseq = this_cpu_ptr(&nft_counter_seq);
++
++	write_seqcount_begin(myseq);
+ 	this_cpu->packets -= total->packets;
+ 	this_cpu->bytes -= total->bytes;
++	write_seqcount_end(myseq);
+ 	local_bh_enable();
+ }
+ 
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index ac3678d2d6d52..4f2a3d46554ff 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -126,7 +126,7 @@ static const char *const nlk_cb_mutex_key_strings[MAX_LINKS + 1] = {
+ 	"nlk_cb_mutex-MAX_LINKS"
+ };
+ 
+-static int netlink_dump(struct sock *sk);
++static int netlink_dump(struct sock *sk, bool lock_taken);
+ 
+ /* nl_table locking explained:
+  * Lookup and traversal are protected with an RCU read-side lock. Insertion
+@@ -1996,7 +1996,7 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 
+ 	if (READ_ONCE(nlk->cb_running) &&
+ 	    atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) {
+-		ret = netlink_dump(sk);
++		ret = netlink_dump(sk, false);
+ 		if (ret) {
+ 			sk->sk_err = -ret;
+ 			sk->sk_error_report(sk);
+@@ -2206,7 +2206,7 @@ static int netlink_dump_done(struct netlink_sock *nlk, struct sk_buff *skb,
+ 	return 0;
+ }
+ 
+-static int netlink_dump(struct sock *sk)
++static int netlink_dump(struct sock *sk, bool lock_taken)
+ {
+ 	struct netlink_sock *nlk = nlk_sk(sk);
+ 	struct netlink_ext_ack extack = {};
+@@ -2218,7 +2218,8 @@ static int netlink_dump(struct sock *sk)
+ 	int alloc_min_size;
+ 	int alloc_size;
+ 
+-	mutex_lock(nlk->cb_mutex);
++	if (!lock_taken)
++		mutex_lock(nlk->cb_mutex);
+ 	if (!nlk->cb_running) {
+ 		err = -EINVAL;
+ 		goto errout_skb;
+@@ -2374,9 +2375,7 @@ int __netlink_dump_start(struct sock *ssk, struct sk_buff *skb,
+ 	WRITE_ONCE(nlk->cb_running, true);
+ 	nlk->dump_done_errno = INT_MAX;
+ 
+-	mutex_unlock(nlk->cb_mutex);
+-
+-	ret = netlink_dump(sk);
++	ret = netlink_dump(sk, true);
+ 
+ 	sock_put(sk);
+ 
+diff --git a/net/rds/recv.c b/net/rds/recv.c
+index 967d115f97efd..f570d64367a41 100644
+--- a/net/rds/recv.c
++++ b/net/rds/recv.c
+@@ -424,6 +424,7 @@ static int rds_still_queued(struct rds_sock *rs, struct rds_incoming *inc,
+ 	struct sock *sk = rds_rs_to_sk(rs);
+ 	int ret = 0;
+ 	unsigned long flags;
++	struct rds_incoming *to_drop = NULL;
+ 
+ 	write_lock_irqsave(&rs->rs_recv_lock, flags);
+ 	if (!list_empty(&inc->i_item)) {
+@@ -434,11 +435,14 @@ static int rds_still_queued(struct rds_sock *rs, struct rds_incoming *inc,
+ 					      -be32_to_cpu(inc->i_hdr.h_len),
+ 					      inc->i_hdr.h_dport);
+ 			list_del_init(&inc->i_item);
+-			rds_inc_put(inc);
++			to_drop = inc;
+ 		}
+ 	}
+ 	write_unlock_irqrestore(&rs->rs_recv_lock, flags);
+ 
++	if (to_drop)
++		rds_inc_put(to_drop);
++
+ 	rdsdebug("inc %p rs %p still %d dropped %d\n", inc, rs, ret, drop);
+ 	return ret;
+ }
+@@ -761,16 +765,21 @@ void rds_clear_recv_queue(struct rds_sock *rs)
+ 	struct sock *sk = rds_rs_to_sk(rs);
+ 	struct rds_incoming *inc, *tmp;
+ 	unsigned long flags;
++	LIST_HEAD(to_drop);
+ 
+ 	write_lock_irqsave(&rs->rs_recv_lock, flags);
+ 	list_for_each_entry_safe(inc, tmp, &rs->rs_recv_queue, i_item) {
+ 		rds_recv_rcvbuf_delta(rs, sk, inc->i_conn->c_lcong,
+ 				      -be32_to_cpu(inc->i_hdr.h_len),
+ 				      inc->i_hdr.h_dport);
++		list_move(&inc->i_item, &to_drop);
++	}
++	write_unlock_irqrestore(&rs->rs_recv_lock, flags);
++
++	list_for_each_entry_safe(inc, tmp, &to_drop, i_item) {
+ 		list_del_init(&inc->i_item);
+ 		rds_inc_put(inc);
+ 	}
+-	write_unlock_irqrestore(&rs->rs_recv_lock, flags);
+ }
+ 
+ /*
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 08aaa6efc62c8..e0e16b0fdb179 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -437,12 +437,10 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 	struct netem_sched_data *q = qdisc_priv(sch);
+ 	/* We don't fill cb now as skb_unshare() may invalidate it */
+ 	struct netem_skb_cb *cb;
+-	struct sk_buff *skb2;
++	struct sk_buff *skb2 = NULL;
+ 	struct sk_buff *segs = NULL;
+ 	unsigned int prev_len = qdisc_pkt_len(skb);
+ 	int count = 1;
+-	int rc = NET_XMIT_SUCCESS;
+-	int rc_drop = NET_XMIT_DROP;
+ 
+ 	/* Do not fool qdisc_drop_all() */
+ 	skb->prev = NULL;
+@@ -471,19 +469,11 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 		skb_orphan_partial(skb);
+ 
+ 	/*
+-	 * If we need to duplicate packet, then re-insert at top of the
+-	 * qdisc tree, since parent queuer expects that only one
+-	 * skb will be queued.
++	 * If we need to duplicate packet, then clone it before
++	 * original is modified.
+ 	 */
+-	if (count > 1 && (skb2 = skb_clone(skb, GFP_ATOMIC)) != NULL) {
+-		struct Qdisc *rootq = qdisc_root_bh(sch);
+-		u32 dupsave = q->duplicate; /* prevent duplicating a dup... */
+-
+-		q->duplicate = 0;
+-		rootq->enqueue(skb2, rootq, to_free);
+-		q->duplicate = dupsave;
+-		rc_drop = NET_XMIT_SUCCESS;
+-	}
++	if (count > 1)
++		skb2 = skb_clone(skb, GFP_ATOMIC);
+ 
+ 	/*
+ 	 * Randomized packet corruption.
+@@ -495,7 +485,8 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 		if (skb_is_gso(skb)) {
+ 			skb = netem_segment(skb, sch, to_free);
+ 			if (!skb)
+-				return rc_drop;
++				goto finish_segs;
++
+ 			segs = skb->next;
+ 			skb_mark_not_on_list(skb);
+ 			qdisc_skb_cb(skb)->pkt_len = skb->len;
+@@ -521,7 +512,24 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 		/* re-link segs, so that qdisc_drop_all() frees them all */
+ 		skb->next = segs;
+ 		qdisc_drop_all(skb, sch, to_free);
+-		return rc_drop;
++		if (skb2)
++			__qdisc_drop(skb2, to_free);
++		return NET_XMIT_DROP;
++	}
++
++	/*
++	 * If doing duplication then re-insert at top of the
++	 * qdisc tree, since parent queuer expects that only one
++	 * skb will be queued.
++	 */
++	if (skb2) {
++		struct Qdisc *rootq = qdisc_root_bh(sch);
++		u32 dupsave = q->duplicate; /* prevent duplicating a dup... */
++
++		q->duplicate = 0;
++		rootq->enqueue(skb2, rootq, to_free);
++		q->duplicate = dupsave;
++		skb2 = NULL;
+ 	}
+ 
+ 	qdisc_qstats_backlog_inc(sch, skb);
+@@ -592,9 +600,12 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 	}
+ 
+ finish_segs:
++	if (skb2)
++		__qdisc_drop(skb2, to_free);
++
+ 	if (segs) {
+ 		unsigned int len, last_len;
+-		int nb;
++		int rc, nb;
+ 
+ 		len = skb ? skb->len : 0;
+ 		nb = skb ? 1 : 0;
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index d1eacf3358b81..60782504ad3e7 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -679,8 +679,8 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
+ 					set_current_state(TASK_RUNNING);
+ 					return -EINTR;
+ 				}
++				freezable_schedule_timeout(msecs_to_jiffies(500));
+ 			}
+-			freezable_schedule_timeout(msecs_to_jiffies(500));
+ 			rqstp->rq_pages[i] = p;
+ 		}
+ 	rqstp->rq_page_end = &rqstp->rq_pages[i];
+diff --git a/security/apparmor/policy_unpack_test.c b/security/apparmor/policy_unpack_test.c
+index 533137f45361c..4951d9bef5794 100644
+--- a/security/apparmor/policy_unpack_test.c
++++ b/security/apparmor/policy_unpack_test.c
+@@ -78,14 +78,14 @@ struct aa_ext *build_aa_ext_struct(struct policy_unpack_fixture *puf,
+ 	*(buf + 1) = strlen(TEST_U32_NAME) + 1;
+ 	strcpy(buf + 3, TEST_U32_NAME);
+ 	*(buf + 3 + strlen(TEST_U32_NAME) + 1) = AA_U32;
+-	*((u32 *)(buf + 3 + strlen(TEST_U32_NAME) + 2)) = TEST_U32_DATA;
++	*((__le32 *)(buf + 3 + strlen(TEST_U32_NAME) + 2)) = cpu_to_le32(TEST_U32_DATA);
+ 
+ 	buf = e->start + TEST_NAMED_U64_BUF_OFFSET;
+ 	*buf = AA_NAME;
+ 	*(buf + 1) = strlen(TEST_U64_NAME) + 1;
+ 	strcpy(buf + 3, TEST_U64_NAME);
+ 	*(buf + 3 + strlen(TEST_U64_NAME) + 1) = AA_U64;
+-	*((u64 *)(buf + 3 + strlen(TEST_U64_NAME) + 2)) = TEST_U64_DATA;
++	*((__le64 *)(buf + 3 + strlen(TEST_U64_NAME) + 2)) = cpu_to_le64(TEST_U64_DATA);
+ 
+ 	buf = e->start + TEST_NAMED_BLOB_BUF_OFFSET;
+ 	*buf = AA_NAME;
+@@ -101,7 +101,7 @@ struct aa_ext *build_aa_ext_struct(struct policy_unpack_fixture *puf,
+ 	*(buf + 1) = strlen(TEST_ARRAY_NAME) + 1;
+ 	strcpy(buf + 3, TEST_ARRAY_NAME);
+ 	*(buf + 3 + strlen(TEST_ARRAY_NAME) + 1) = AA_ARRAY;
+-	*((u16 *)(buf + 3 + strlen(TEST_ARRAY_NAME) + 2)) = TEST_ARRAY_SIZE;
++	*((__le16 *)(buf + 3 + strlen(TEST_ARRAY_NAME) + 2)) = cpu_to_le16(TEST_ARRAY_SIZE);
+ 
+ 	return e;
+ }
+diff --git a/security/selinux/avc.c b/security/selinux/avc.c
+index 884a014ce2b85..ab71d09482050 100644
+--- a/security/selinux/avc.c
++++ b/security/selinux/avc.c
+@@ -332,12 +332,12 @@ static int avc_add_xperms_decision(struct avc_node *node,
+ {
+ 	struct avc_xperms_decision_node *dest_xpd;
+ 
+-	node->ae.xp_node->xp.len++;
+ 	dest_xpd = avc_xperms_decision_alloc(src->used);
+ 	if (!dest_xpd)
+ 		return -ENOMEM;
+ 	avc_copy_xperms_decision(&dest_xpd->xpd, src);
+ 	list_add(&dest_xpd->xpd_list, &node->ae.xp_node->xpd_head);
++	node->ae.xp_node->xp.len++;
+ 	return 0;
+ }
+ 
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index 708c9a46eefe3..7e6fd86df41c9 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -556,7 +556,7 @@ static int snd_timer_start1(struct snd_timer_instance *timeri,
+ 	/* check the actual time for the start tick;
+ 	 * bail out as error if it's way too low (< 100us)
+ 	 */
+-	if (start) {
++	if (start && !(timer->hw.flags & SNDRV_TIMER_HW_SLAVE)) {
+ 		if ((u64)snd_timer_hw_resolution(timer) * ticks < 100000) {
+ 			result = -EINVAL;
+ 			goto unlock;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index e8e9cfb4a9357..04fd52bba0573 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -578,7 +578,6 @@ static void alc_shutup_pins(struct hda_codec *codec)
+ 	switch (codec->core.vendor_id) {
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+-	case 0x10ec0257:
+ 	case 0x19e58326:
+ 	case 0x10ec0283:
+ 	case 0x10ec0285:
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 8f2fb2ac7af67..008229ae7ff41 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -273,6 +273,7 @@ YAMAHA_DEVICE(0x105a, NULL),
+ YAMAHA_DEVICE(0x105b, NULL),
+ YAMAHA_DEVICE(0x105c, NULL),
+ YAMAHA_DEVICE(0x105d, NULL),
++YAMAHA_DEVICE(0x1718, "P-125"),
+ {
+ 	USB_DEVICE(0x0499, 0x1503),
+ 	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+diff --git a/tools/include/linux/align.h b/tools/include/linux/align.h
+new file mode 100644
+index 0000000000000..14e34ace80dda
+--- /dev/null
++++ b/tools/include/linux/align.h
+@@ -0,0 +1,12 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++#ifndef _TOOLS_LINUX_ALIGN_H
++#define _TOOLS_LINUX_ALIGN_H
++
++#include <uapi/linux/const.h>
++
++#define ALIGN(x, a)		__ALIGN_KERNEL((x), (a))
++#define ALIGN_DOWN(x, a)	__ALIGN_KERNEL((x) - ((a) - 1), (a))
++#define IS_ALIGNED(x, a)	(((x) & ((typeof(x))(a) - 1)) == 0)
++
++#endif /* _TOOLS_LINUX_ALIGN_H */
+diff --git a/tools/include/linux/bitmap.h b/tools/include/linux/bitmap.h
+index 477a1cae513f2..de45cad6cec19 100644
+--- a/tools/include/linux/bitmap.h
++++ b/tools/include/linux/bitmap.h
+@@ -3,6 +3,7 @@
+ #define _PERF_BITOPS_H
+ 
+ #include <string.h>
++#include <linux/align.h>
+ #include <linux/bitops.h>
+ #include <stdlib.h>
+ #include <linux/kernel.h>
+@@ -30,13 +31,14 @@ void bitmap_clear(unsigned long *map, unsigned int start, int len);
+ #define small_const_nbits(nbits) \
+ 	(__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)
+ 
++#define bitmap_size(nbits)	(ALIGN(nbits, BITS_PER_LONG) / BITS_PER_BYTE)
++
+ static inline void bitmap_zero(unsigned long *dst, int nbits)
+ {
+ 	if (small_const_nbits(nbits))
+ 		*dst = 0UL;
+ 	else {
+-		int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
+-		memset(dst, 0, len);
++		memset(dst, 0, bitmap_size(nbits));
+ 	}
+ }
+ 
+@@ -122,7 +124,7 @@ static inline int test_and_clear_bit(int nr, unsigned long *addr)
+  */
+ static inline unsigned long *bitmap_alloc(int nbits)
+ {
+-	return calloc(1, BITS_TO_LONGS(nbits) * sizeof(unsigned long));
++	return calloc(1, bitmap_size(nbits));
+ }
+ 
+ /*
+@@ -165,7 +167,6 @@ static inline int bitmap_and(unsigned long *dst, const unsigned long *src1,
+ #define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long))
+ #endif
+ #define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1)
+-#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0)
+ 
+ static inline int bitmap_equal(const unsigned long *src1,
+ 			const unsigned long *src2, unsigned int nbits)
+diff --git a/tools/testing/selftests/core/close_range_test.c b/tools/testing/selftests/core/close_range_test.c
+index 0a26795842f6f..506f3ab605189 100644
+--- a/tools/testing/selftests/core/close_range_test.c
++++ b/tools/testing/selftests/core/close_range_test.c
+@@ -224,4 +224,39 @@ TEST(close_range_unshare_capped)
+ 	EXPECT_EQ(0, WEXITSTATUS(status));
+ }
+ 
++TEST(close_range_bitmap_corruption)
++{
++	pid_t pid;
++	int status;
++	struct __clone_args args = {
++		.flags = CLONE_FILES,
++		.exit_signal = SIGCHLD,
++	};
++
++	/* get the first 128 descriptors open */
++	for (int i = 2; i < 128; i++)
++		EXPECT_GE(dup2(0, i), 0);
++
++	/* get descriptor table shared */
++	pid = sys_clone3(&args, sizeof(args));
++	ASSERT_GE(pid, 0);
++
++	if (pid == 0) {
++		/* unshare and truncate descriptor table down to 64 */
++		if (sys_close_range(64, ~0U, CLOSE_RANGE_UNSHARE))
++			exit(EXIT_FAILURE);
++
++		ASSERT_EQ(fcntl(64, F_GETFD), -1);
++		/* ... and verify that the range 64..127 is not
++		   stuck "fully used" according to secondary bitmap */
++		EXPECT_EQ(dup(0), 64)
++			exit(EXIT_FAILURE);
++		exit(EXIT_SUCCESS);
++	}
++
++	EXPECT_EQ(waitpid(pid, &status, 0), pid);
++	EXPECT_EQ(true, WIFEXITED(status));
++	EXPECT_EQ(0, WEXITSTATUS(status));
++}
++
+ TEST_HARNESS_MAIN
+diff --git a/tools/testing/selftests/tc-testing/tdc.py b/tools/testing/selftests/tc-testing/tdc.py
+index a3e43189d9400..d6a9d97f73c24 100755
+--- a/tools/testing/selftests/tc-testing/tdc.py
++++ b/tools/testing/selftests/tc-testing/tdc.py
+@@ -129,7 +129,6 @@ class PluginMgr:
+             except Exception as ee:
+                 print('exception {} in call to pre_case for {} plugin'.
+                       format(ee, pgn_inst.__class__))
+-                print('test_ordinal is {}'.format(test_ordinal))
+                 print('testid is {}'.format(caseinfo['id']))
+                 raise
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

* [gentoo-commits] proj/linux-patches:5.10 commit in: /
@ 2024-09-12 12:42 Mike Pagano
  0 siblings, 0 replies; 289+ messages in thread
From: Mike Pagano @ 2024-09-12 12:42 UTC (permalink / raw
  To: gentoo-commits

commit:     dbf9d5158e0707e771e47bd7a23f9fe3991c70bb
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 12 12:42:38 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 12 12:42:38 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=dbf9d515

Linux patchg 5.10.226

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1225_linux-5.10.226.patch | 8691 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 8695 insertions(+)

diff --git a/0000_README b/0000_README
index ad8e0d5b..695208ee 100644
--- a/0000_README
+++ b/0000_README
@@ -943,6 +943,10 @@ Patch:  1224_linux-5.10.225.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.10.225
 
+Patch:  1225_linux-5.10.226.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.10.226
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1225_linux-5.10.226.patch b/1225_linux-5.10.226.patch
new file mode 100644
index 00000000..7df9b67b
--- /dev/null
+++ b/1225_linux-5.10.226.patch
@@ -0,0 +1,8691 @@
+diff --git a/Documentation/locking/hwspinlock.rst b/Documentation/locking/hwspinlock.rst
+index 6f03713b70039..2ffaa3cbd63f1 100644
+--- a/Documentation/locking/hwspinlock.rst
++++ b/Documentation/locking/hwspinlock.rst
+@@ -85,6 +85,17 @@ is already free).
+ 
+ Should be called from a process context (might sleep).
+ 
++::
++
++  int hwspin_lock_bust(struct hwspinlock *hwlock, unsigned int id);
++
++After verifying the owner of the hwspinlock, release a previously acquired
++hwspinlock; returns 0 on success, or an appropriate error code on failure
++(e.g. -EOPNOTSUPP if the bust operation is not defined for the specific
++hwspinlock).
++
++Should be called from a process context (might sleep).
++
+ ::
+ 
+   int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);
+diff --git a/Makefile b/Makefile
+index 30918576f9de4..cf232897553bf 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 225
++SUBLEVEL = 226
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
+index bd68e1b7f29f3..702587fda70cf 100644
+--- a/arch/arm64/include/asm/acpi.h
++++ b/arch/arm64/include/asm/acpi.h
+@@ -97,6 +97,18 @@ static inline u32 get_acpi_id_for_cpu(unsigned int cpu)
+ 	return	acpi_cpu_get_madt_gicc(cpu)->uid;
+ }
+ 
++static inline int get_cpu_for_acpi_id(u32 uid)
++{
++	int cpu;
++
++	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
++		if (acpi_cpu_get_madt_gicc(cpu) &&
++		    uid == get_acpi_id_for_cpu(cpu))
++			return cpu;
++
++	return -EINVAL;
++}
++
+ static inline void arch_fix_phys_package_id(int num, u32 slot) { }
+ void __init acpi_init_cpus(void);
+ int apei_claim_sea(struct pt_regs *regs);
+diff --git a/arch/arm64/kernel/acpi_numa.c b/arch/arm64/kernel/acpi_numa.c
+index 048b75cadd2fd..c5feac18c238a 100644
+--- a/arch/arm64/kernel/acpi_numa.c
++++ b/arch/arm64/kernel/acpi_numa.c
+@@ -34,17 +34,6 @@ int __init acpi_numa_get_nid(unsigned int cpu)
+ 	return acpi_early_node_map[cpu];
+ }
+ 
+-static inline int get_cpu_for_acpi_id(u32 uid)
+-{
+-	int cpu;
+-
+-	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
+-		if (uid == get_acpi_id_for_cpu(cpu))
+-			return cpu;
+-
+-	return -EINVAL;
+-}
+-
+ static int __init acpi_parse_gicc_pxm(union acpi_subtable_headers *header,
+ 				      const unsigned long end)
+ {
+diff --git a/arch/mips/kernel/cevt-r4k.c b/arch/mips/kernel/cevt-r4k.c
+index 995ad9e69ded3..23207516015cc 100644
+--- a/arch/mips/kernel/cevt-r4k.c
++++ b/arch/mips/kernel/cevt-r4k.c
+@@ -307,13 +307,6 @@ int r4k_clockevent_init(void)
+ 	if (!c0_compare_int_usable())
+ 		return -ENXIO;
+ 
+-	/*
+-	 * With vectored interrupts things are getting platform specific.
+-	 * get_c0_compare_int is a hook to allow a platform to return the
+-	 * interrupt number of its liking.
+-	 */
+-	irq = get_c0_compare_int();
+-
+ 	cd = &per_cpu(mips_clockevent_device, cpu);
+ 
+ 	cd->name		= "MIPS";
+@@ -324,7 +317,6 @@ int r4k_clockevent_init(void)
+ 	min_delta		= calculate_min_delta();
+ 
+ 	cd->rating		= 300;
+-	cd->irq			= irq;
+ 	cd->cpumask		= cpumask_of(cpu);
+ 	cd->set_next_event	= mips_next_event;
+ 	cd->event_handler	= mips_event_handler;
+@@ -336,6 +328,13 @@ int r4k_clockevent_init(void)
+ 
+ 	cp0_timer_irq_installed = 1;
+ 
++	/*
++	 * With vectored interrupts things are getting platform specific.
++	 * get_c0_compare_int is a hook to allow a platform to return the
++	 * interrupt number of its liking.
++	 */
++	irq = get_c0_compare_int();
++
+ 	if (request_irq(irq, c0_compare_interrupt, flags, "timer",
+ 			c0_compare_interrupt))
+ 		pr_err("Failed to request irq %d (timer)\n", irq);
+diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
+index 1c65c38ec9a3e..c4bf95371f493 100644
+--- a/arch/s390/kernel/vmlinux.lds.S
++++ b/arch/s390/kernel/vmlinux.lds.S
+@@ -69,6 +69,15 @@ SECTIONS
+ 	. = ALIGN(PAGE_SIZE);
+ 	__end_ro_after_init = .;
+ 
++	.data.rel.ro : {
++		*(.data.rel.ro .data.rel.ro.*)
++	}
++	.got : {
++		__got_start = .;
++		*(.got)
++		__got_end = .;
++	}
++
+ 	RW_DATA(0x100, PAGE_SIZE, THREAD_SIZE)
+ 	BOOT_DATA_PRESERVED
+ 
+diff --git a/arch/um/drivers/line.c b/arch/um/drivers/line.c
+index 37e96ba0f5fb1..d2beb4a497a2a 100644
+--- a/arch/um/drivers/line.c
++++ b/arch/um/drivers/line.c
+@@ -378,6 +378,7 @@ int setup_one_line(struct line *lines, int n, char *init,
+ 			parse_chan_pair(NULL, line, n, opts, error_out);
+ 			err = 0;
+ 		}
++		*error_out = "configured as 'none'";
+ 	} else {
+ 		char *new = kstrdup(init, GFP_KERNEL);
+ 		if (!new) {
+@@ -401,6 +402,7 @@ int setup_one_line(struct line *lines, int n, char *init,
+ 			}
+ 		}
+ 		if (err) {
++			*error_out = "failed to parse channel pair";
+ 			line->init_str = NULL;
+ 			line->valid = 0;
+ 			kfree(new);
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 50e31d14351bf..85289c8f21db8 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -241,7 +241,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+  *
+  * Returns a pointer to a PTE on success, or NULL on failure.
+  */
+-static pte_t *pti_user_pagetable_walk_pte(unsigned long address)
++static pte_t *pti_user_pagetable_walk_pte(unsigned long address, bool late_text)
+ {
+ 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+ 	pmd_t *pmd;
+@@ -251,10 +251,15 @@ static pte_t *pti_user_pagetable_walk_pte(unsigned long address)
+ 	if (!pmd)
+ 		return NULL;
+ 
+-	/* We can't do anything sensible if we hit a large mapping. */
++	/* Large PMD mapping found */
+ 	if (pmd_large(*pmd)) {
+-		WARN_ON(1);
+-		return NULL;
++		/* Clear the PMD if we hit a large mapping from the first round */
++		if (late_text) {
++			set_pmd(pmd, __pmd(0));
++		} else {
++			WARN_ON_ONCE(1);
++			return NULL;
++		}
+ 	}
+ 
+ 	if (pmd_none(*pmd)) {
+@@ -283,7 +288,7 @@ static void __init pti_setup_vsyscall(void)
+ 	if (!pte || WARN_ON(level != PG_LEVEL_4K) || pte_none(*pte))
+ 		return;
+ 
+-	target_pte = pti_user_pagetable_walk_pte(VSYSCALL_ADDR);
++	target_pte = pti_user_pagetable_walk_pte(VSYSCALL_ADDR, false);
+ 	if (WARN_ON(!target_pte))
+ 		return;
+ 
+@@ -301,7 +306,7 @@ enum pti_clone_level {
+ 
+ static void
+ pti_clone_pgtable(unsigned long start, unsigned long end,
+-		  enum pti_clone_level level)
++		  enum pti_clone_level level, bool late_text)
+ {
+ 	unsigned long addr;
+ 
+@@ -390,7 +395,7 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
+ 				return;
+ 
+ 			/* Allocate PTE in the user page-table */
+-			target_pte = pti_user_pagetable_walk_pte(addr);
++			target_pte = pti_user_pagetable_walk_pte(addr, late_text);
+ 			if (WARN_ON(!target_pte))
+ 				return;
+ 
+@@ -453,7 +458,7 @@ static void __init pti_clone_user_shared(void)
+ 		phys_addr_t pa = per_cpu_ptr_to_phys((void *)va);
+ 		pte_t *target_pte;
+ 
+-		target_pte = pti_user_pagetable_walk_pte(va);
++		target_pte = pti_user_pagetable_walk_pte(va, false);
+ 		if (WARN_ON(!target_pte))
+ 			return;
+ 
+@@ -476,7 +481,7 @@ static void __init pti_clone_user_shared(void)
+ 	start = CPU_ENTRY_AREA_BASE;
+ 	end   = start + (PAGE_SIZE * CPU_ENTRY_AREA_PAGES);
+ 
+-	pti_clone_pgtable(start, end, PTI_CLONE_PMD);
++	pti_clone_pgtable(start, end, PTI_CLONE_PMD, false);
+ }
+ #endif /* CONFIG_X86_64 */
+ 
+@@ -493,11 +498,11 @@ static void __init pti_setup_espfix64(void)
+ /*
+  * Clone the populated PMDs of the entry text and force it RO.
+  */
+-static void pti_clone_entry_text(void)
++static void pti_clone_entry_text(bool late)
+ {
+ 	pti_clone_pgtable((unsigned long) __entry_text_start,
+ 			  (unsigned long) __entry_text_end,
+-			  PTI_LEVEL_KERNEL_IMAGE);
++			  PTI_LEVEL_KERNEL_IMAGE, late);
+ }
+ 
+ /*
+@@ -572,7 +577,7 @@ static void pti_clone_kernel_text(void)
+ 	 * pti_set_kernel_image_nonglobal() did to clear the
+ 	 * global bit.
+ 	 */
+-	pti_clone_pgtable(start, end_clone, PTI_LEVEL_KERNEL_IMAGE);
++	pti_clone_pgtable(start, end_clone, PTI_LEVEL_KERNEL_IMAGE, false);
+ 
+ 	/*
+ 	 * pti_clone_pgtable() will set the global bit in any PMDs
+@@ -639,8 +644,15 @@ void __init pti_init(void)
+ 
+ 	/* Undo all global bits from the init pagetables in head_64.S: */
+ 	pti_set_kernel_image_nonglobal();
++
+ 	/* Replace some of the global bits just for shared entry text: */
+-	pti_clone_entry_text();
++	/*
++	 * This is very early in boot. Device and Late initcalls can do
++	 * modprobe before free_initmem() and mark_readonly(). This
++	 * pti_clone_entry_text() allows those user-mode-helpers to function,
++	 * but notably the text is still RW.
++	 */
++	pti_clone_entry_text(false);
+ 	pti_setup_espfix64();
+ 	pti_setup_vsyscall();
+ }
+@@ -657,10 +669,11 @@ void pti_finalize(void)
+ 	if (!boot_cpu_has(X86_FEATURE_PTI))
+ 		return;
+ 	/*
+-	 * We need to clone everything (again) that maps parts of the
+-	 * kernel image.
++	 * This is after free_initmem() (all initcalls are done) and we've done
++	 * mark_readonly(). Text is now NX which might've split some PMDs
++	 * relative to the early clone.
+ 	 */
+-	pti_clone_entry_text();
++	pti_clone_entry_text(true);
+ 	pti_clone_kernel_text();
+ 
+ 	debug_checkwx_user();
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index a4cfc97275df6..a5fd04db5ae8e 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -216,6 +216,7 @@ bool bio_integrity_prep(struct bio *bio)
+ 	unsigned int bytes, offset, i;
+ 	unsigned int intervals;
+ 	blk_status_t status;
++	gfp_t gfp = GFP_NOIO;
+ 
+ 	if (!bi)
+ 		return true;
+@@ -238,12 +239,20 @@ bool bio_integrity_prep(struct bio *bio)
+ 		if (!bi->profile->generate_fn ||
+ 		    !(bi->flags & BLK_INTEGRITY_GENERATE))
+ 			return true;
++
++		/*
++		 * Zero the memory allocated to not leak uninitialized kernel
++		 * memory to disk.  For PI this only affects the app tag, but
++		 * for non-integrity metadata it affects the entire metadata
++		 * buffer.
++		 */
++		gfp |= __GFP_ZERO;
+ 	}
+ 	intervals = bio_integrity_intervals(bi, bio_sectors(bio));
+ 
+ 	/* Allocate kernel buffer for protection data */
+ 	len = intervals * bi->tuple_size;
+-	buf = kmalloc(len, GFP_NOIO | q->bounce_gfp);
++	buf = kmalloc(len, gfp | q->bounce_gfp);
+ 	status = BLK_STS_RESOURCE;
+ 	if (unlikely(buf == NULL)) {
+ 		printk(KERN_ERR "could not allocate integrity buffer\n");
+diff --git a/block/blk-integrity.c b/block/blk-integrity.c
+index 9e83159f5a527..2bcf3760538c2 100644
+--- a/block/blk-integrity.c
++++ b/block/blk-integrity.c
+@@ -431,8 +431,6 @@ void blk_integrity_unregister(struct gendisk *disk)
+ 	if (!bi->profile)
+ 		return;
+ 
+-	/* ensure all bios are off the integrity workqueue */
+-	blk_flush_integrity();
+ 	blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, disk->queue);
+ 	memset(bi, 0, sizeof(*bi));
+ }
+diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
+index 2ee5e05a0d69e..707b2c37e5ee6 100644
+--- a/drivers/acpi/acpi_processor.c
++++ b/drivers/acpi/acpi_processor.c
+@@ -387,7 +387,7 @@ static int acpi_processor_add(struct acpi_device *device,
+ 
+ 	result = acpi_processor_get_info(device);
+ 	if (result) /* Processor is not physically present or unavailable */
+-		return 0;
++		goto err_clear_driver_data;
+ 
+ 	BUG_ON(pr->id >= nr_cpu_ids);
+ 
+@@ -402,7 +402,7 @@ static int acpi_processor_add(struct acpi_device *device,
+ 			"BIOS reported wrong ACPI id %d for the processor\n",
+ 			pr->id);
+ 		/* Give up, but do not abort the namespace scan. */
+-		goto err;
++		goto err_clear_driver_data;
+ 	}
+ 	/*
+ 	 * processor_device_array is not cleared on errors to allow buggy BIOS
+@@ -414,12 +414,12 @@ static int acpi_processor_add(struct acpi_device *device,
+ 	dev = get_cpu_device(pr->id);
+ 	if (!dev) {
+ 		result = -ENODEV;
+-		goto err;
++		goto err_clear_per_cpu;
+ 	}
+ 
+ 	result = acpi_bind_one(dev, device);
+ 	if (result)
+-		goto err;
++		goto err_clear_per_cpu;
+ 
+ 	pr->dev = dev;
+ 
+@@ -430,10 +430,11 @@ static int acpi_processor_add(struct acpi_device *device,
+ 	dev_err(dev, "Processor driver could not be attached\n");
+ 	acpi_unbind_one(dev);
+ 
+- err:
+-	free_cpumask_var(pr->throttling.shared_cpu_map);
+-	device->driver_data = NULL;
++ err_clear_per_cpu:
+ 	per_cpu(processors, pr->id) = NULL;
++ err_clear_driver_data:
++	device->driver_data = NULL;
++	free_cpumask_var(pr->throttling.shared_cpu_map);
+  err_free_pr:
+ 	kfree(pr);
+ 	return result;
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index cd3de4ec17670..eabb4c9d4718b 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -3530,6 +3530,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 		 */
+ 		copy_size = object_offset - user_offset;
+ 		if (copy_size && (user_offset > object_offset ||
++				object_offset > tr->data_size ||
+ 				binder_alloc_copy_user_to_buffer(
+ 					&target_proc->alloc,
+ 					t->buffer, user_offset,
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 467fc8002c447..107c28ec23b8a 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5429,8 +5429,10 @@ struct ata_host *ata_host_alloc(struct device *dev, int max_ports)
+ 	}
+ 
+ 	dr = devres_alloc(ata_devres_release, 0, GFP_KERNEL);
+-	if (!dr)
++	if (!dr) {
++		kfree(host);
+ 		goto err_out;
++	}
+ 
+ 	devres_add(dev, dr);
+ 	dev_set_drvdata(dev, host);
+diff --git a/drivers/ata/pata_macio.c b/drivers/ata/pata_macio.c
+index e47a28271f5bb..ba8f0084075bd 100644
+--- a/drivers/ata/pata_macio.c
++++ b/drivers/ata/pata_macio.c
+@@ -540,7 +540,8 @@ static enum ata_completion_errors pata_macio_qc_prep(struct ata_queued_cmd *qc)
+ 
+ 		while (sg_len) {
+ 			/* table overflow should never happen */
+-			BUG_ON (pi++ >= MAX_DCMDS);
++			if (WARN_ON_ONCE(pi >= MAX_DCMDS))
++				return AC_ERR_SYSTEM;
+ 
+ 			len = (sg_len < MAX_DBDMA_SEG) ? sg_len : MAX_DBDMA_SEG;
+ 			table->command = cpu_to_le16(write ? OUTPUT_MORE: INPUT_MORE);
+@@ -552,11 +553,13 @@ static enum ata_completion_errors pata_macio_qc_prep(struct ata_queued_cmd *qc)
+ 			addr += len;
+ 			sg_len -= len;
+ 			++table;
++			++pi;
+ 		}
+ 	}
+ 
+ 	/* Should never happen according to Tejun */
+-	BUG_ON(!pi);
++	if (WARN_ON_ONCE(!pi))
++		return AC_ERR_SYSTEM;
+ 
+ 	/* Convert the last command to an input/output */
+ 	table--;
+diff --git a/drivers/base/devres.c b/drivers/base/devres.c
+index 8a74008c13c44..e3a735d0213a8 100644
+--- a/drivers/base/devres.c
++++ b/drivers/base/devres.c
+@@ -577,6 +577,7 @@ void * devres_open_group(struct device *dev, void *id, gfp_t gfp)
+ 	grp->id = grp;
+ 	if (id)
+ 		grp->id = id;
++	grp->color = 0;
+ 
+ 	spin_lock_irqsave(&dev->devres_lock, flags);
+ 	add_dr(dev, &grp->node[0]);
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index cf265ab035ea9..095ad50fd363e 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -38,7 +38,7 @@
+ 
+ #define PLL_USER_CTL(p)		((p)->offset + (p)->regs[PLL_OFF_USER_CTL])
+ # define PLL_POST_DIV_SHIFT	8
+-# define PLL_POST_DIV_MASK(p)	GENMASK((p)->width, 0)
++# define PLL_POST_DIV_MASK(p)	GENMASK((p)->width - 1, 0)
+ # define PLL_ALPHA_EN		BIT(24)
+ # define PLL_ALPHA_MODE		BIT(25)
+ # define PLL_VCO_SHIFT		20
+@@ -1321,8 +1321,8 @@ clk_trion_pll_postdiv_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	}
+ 
+ 	return regmap_update_bits(regmap, PLL_USER_CTL(pll),
+-				  PLL_POST_DIV_MASK(pll) << PLL_POST_DIV_SHIFT,
+-				  val << PLL_POST_DIV_SHIFT);
++				  PLL_POST_DIV_MASK(pll) << pll->post_div_shift,
++				  val << pll->post_div_shift);
+ }
+ 
+ const struct clk_ops clk_alpha_pll_postdiv_trion_ops = {
+diff --git a/drivers/clocksource/timer-imx-tpm.c b/drivers/clocksource/timer-imx-tpm.c
+index 2cdc077a39f5d..9f0aeda4031ff 100644
+--- a/drivers/clocksource/timer-imx-tpm.c
++++ b/drivers/clocksource/timer-imx-tpm.c
+@@ -83,20 +83,28 @@ static u64 notrace tpm_read_sched_clock(void)
+ static int tpm_set_next_event(unsigned long delta,
+ 				struct clock_event_device *evt)
+ {
+-	unsigned long next, now;
++	unsigned long next, prev, now;
+ 
+-	next = tpm_read_counter();
+-	next += delta;
++	prev = tpm_read_counter();
++	next = prev + delta;
+ 	writel(next, timer_base + TPM_C0V);
+ 	now = tpm_read_counter();
+ 
++	/*
++	 * Need to wait CNT increase at least 1 cycle to make sure
++	 * the C0V has been updated into HW.
++	 */
++	if ((next & 0xffffffff) != readl(timer_base + TPM_C0V))
++		while (now == tpm_read_counter())
++			;
++
+ 	/*
+ 	 * NOTE: We observed in a very small probability, the bus fabric
+ 	 * contention between GPU and A7 may results a few cycles delay
+ 	 * of writing CNT registers which may cause the min_delta event got
+ 	 * missed, so we need add a ETIME check here in case it happened.
+ 	 */
+-	return (int)(next - now) <= 0 ? -ETIME : 0;
++	return (now - prev) >= delta ? -ETIME : 0;
+ }
+ 
+ static int tpm_set_state_oneshot(struct clock_event_device *evt)
+diff --git a/drivers/clocksource/timer-of.c b/drivers/clocksource/timer-of.c
+index b965f20174e3a..411f16c4de05a 100644
+--- a/drivers/clocksource/timer-of.c
++++ b/drivers/clocksource/timer-of.c
+@@ -25,10 +25,7 @@ static __init void timer_of_irq_exit(struct of_timer_irq *of_irq)
+ 
+ 	struct clock_event_device *clkevt = &to->clkevt;
+ 
+-	if (of_irq->percpu)
+-		free_percpu_irq(of_irq->irq, clkevt);
+-	else
+-		free_irq(of_irq->irq, clkevt);
++	free_irq(of_irq->irq, clkevt);
+ }
+ 
+ /**
+@@ -42,9 +39,6 @@ static __init void timer_of_irq_exit(struct of_timer_irq *of_irq)
+  * - Get interrupt number by name
+  * - Get interrupt number by index
+  *
+- * When the interrupt is per CPU, 'request_percpu_irq()' is called,
+- * otherwise 'request_irq()' is used.
+- *
+  * Returns 0 on success, < 0 otherwise
+  */
+ static __init int timer_of_irq_init(struct device_node *np,
+@@ -69,12 +63,9 @@ static __init int timer_of_irq_init(struct device_node *np,
+ 		return -EINVAL;
+ 	}
+ 
+-	ret = of_irq->percpu ?
+-		request_percpu_irq(of_irq->irq, of_irq->handler,
+-				   np->full_name, clkevt) :
+-		request_irq(of_irq->irq, of_irq->handler,
+-			    of_irq->flags ? of_irq->flags : IRQF_TIMER,
+-			    np->full_name, clkevt);
++	ret = request_irq(of_irq->irq, of_irq->handler,
++			  of_irq->flags ? of_irq->flags : IRQF_TIMER,
++			  np->full_name, clkevt);
+ 	if (ret) {
+ 		pr_err("Failed to request irq %d for %pOF\n", of_irq->irq, np);
+ 		return ret;
+diff --git a/drivers/clocksource/timer-of.h b/drivers/clocksource/timer-of.h
+index a5478f3e8589d..01a2c6b7db065 100644
+--- a/drivers/clocksource/timer-of.h
++++ b/drivers/clocksource/timer-of.h
+@@ -11,7 +11,6 @@
+ struct of_timer_irq {
+ 	int irq;
+ 	int index;
+-	int percpu;
+ 	const char *name;
+ 	unsigned long flags;
+ 	irq_handler_t handler;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c
+index a4d65973bf7cf..80771b1480fff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c
+@@ -100,6 +100,7 @@ struct amdgpu_afmt_acr amdgpu_afmt_acr(uint32_t clock)
+ 	amdgpu_afmt_calc_cts(clock, &res.cts_32khz, &res.n_32khz, 32000);
+ 	amdgpu_afmt_calc_cts(clock, &res.cts_44_1khz, &res.n_44_1khz, 44100);
+ 	amdgpu_afmt_calc_cts(clock, &res.cts_48khz, &res.n_48khz, 48000);
++	res.clock = clock;
+ 
+ 	return res;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+index 469352e2d6ecf..436d436b2ea23 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+@@ -1626,6 +1626,8 @@ int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
+ 										(u32)le32_to_cpu(*((u32 *)reg_data + j));
+ 									j++;
+ 								} else if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
++									if (i == 0)
++										continue;
+ 									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
+ 										reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
+ 								}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+index 78ac6dbe70d84..854b218602574 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+@@ -213,6 +213,9 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
+ 		struct amdgpu_firmware_info *ucode;
+ 
+ 		id = fw_type_convert(cgs_device, type);
++		if (id >= AMDGPU_UCODE_ID_MAXIMUM)
++			return -EINVAL;
++
+ 		ucode = &adev->firmware.ucode[id];
+ 		if (ucode->fw == NULL)
+ 			return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index 15ee13c3bd9e1..b78feb8ba01e1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -260,7 +260,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
+ 	ring->priority = DRM_SCHED_PRIORITY_NORMAL;
+ 	mutex_init(&ring->priority_mutex);
+ 
+-	if (!ring->no_scheduler) {
++	if (!ring->no_scheduler && ring->funcs->type < AMDGPU_HW_IP_NUM) {
+ 		hw_ip = ring->funcs->type;
+ 		num_sched = &adev->gpu_sched[hw_ip][hw_prio].num_scheds;
+ 		adev->gpu_sched[hw_ip][hw_prio].sched[(*num_sched)++] =
+@@ -368,8 +368,9 @@ static ssize_t amdgpu_debugfs_ring_read(struct file *f, char __user *buf,
+ 					size_t size, loff_t *pos)
+ {
+ 	struct amdgpu_ring *ring = file_inode(f)->i_private;
+-	int r, i;
+ 	uint32_t value, result, early[3];
++	loff_t i;
++	int r;
+ 
+ 	if (*pos & 3 || size & 3)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+index d6f2951035959..ca4c915e3a6c7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+@@ -404,6 +404,8 @@ static void amdgpu_virt_add_bad_page(struct amdgpu_device *adev,
+ 	uint64_t retired_page;
+ 	uint32_t bp_idx, bp_cnt;
+ 
++	memset(&bp, 0, sizeof(bp));
++
+ 	if (bp_block_size) {
+ 		bp_cnt = bp_block_size / sizeof(uint64_t);
+ 		for (bp_idx = 0; bp_idx < bp_cnt; bp_idx++) {
+@@ -550,7 +552,7 @@ static int amdgpu_virt_write_vf2pf_data(struct amdgpu_device *adev)
+ 
+ 	vf2pf_info->checksum =
+ 		amd_sriov_msg_checksum(
+-		vf2pf_info, vf2pf_info->header.size, 0, 0);
++		vf2pf_info, sizeof(*vf2pf_info), 0, 0);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/df_v1_7.c b/drivers/gpu/drm/amd/amdgpu/df_v1_7.c
+index d6aca1c080687..9587e8672a01c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/df_v1_7.c
++++ b/drivers/gpu/drm/amd/amdgpu/df_v1_7.c
+@@ -70,6 +70,8 @@ static u32 df_v1_7_get_hbm_channel_number(struct amdgpu_device *adev)
+ 	int fb_channel_number;
+ 
+ 	fb_channel_number = adev->df.funcs->get_fb_channel_number(adev);
++	if (fb_channel_number >= ARRAY_SIZE(df_v1_7_channel_number))
++		fb_channel_number = 0;
+ 
+ 	return df_v1_7_channel_number[fb_channel_number];
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+index eadc9526d33fe..b81572dc115f7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+@@ -313,7 +313,7 @@ static void nbio_v7_4_handle_ras_controller_intr_no_bifring(struct amdgpu_device
+ 						RAS_CNTLR_INTERRUPT_CLEAR, 1);
+ 		WREG32_SOC15(NBIO, 0, mmBIF_DOORBELL_INT_CNTL, bif_doorbell_intr_cntl);
+ 
+-		if (!ras->disable_ras_err_cnt_harvest) {
++		if (ras && !ras->disable_ras_err_cnt_harvest && obj) {
+ 			/*
+ 			 * clear error status after ras_controller_intr
+ 			 * according to hw team and count ue number
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.h b/drivers/gpu/drm/amd/amdkfd/kfd_crat.h
+index d54ceebd346b7..30c70b3ab17f1 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.h
+@@ -42,8 +42,6 @@
+ #define CRAT_OEMTABLEID_LENGTH	8
+ #define CRAT_RESERVED_LENGTH	6
+ 
+-#define CRAT_OEMID_64BIT_MASK ((1ULL << (CRAT_OEMID_LENGTH * 8)) - 1)
+-
+ /* Compute Unit flags */
+ #define COMPUTE_UNIT_CPU	(1 << 0)  /* Create Virtual CRAT for CPU */
+ #define COMPUTE_UNIT_GPU	(1 << 1)  /* Create Virtual CRAT for GPU */
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 2b31c3066aaae..b5738032237e3 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -906,8 +906,7 @@ static void kfd_update_system_properties(void)
+ 	dev = list_last_entry(&topology_device_list,
+ 			struct kfd_topology_device, list);
+ 	if (dev) {
+-		sys_props.platform_id =
+-			(*((uint64_t *)dev->oem_id)) & CRAT_OEMID_64BIT_MASK;
++		sys_props.platform_id = dev->oem_id64;
+ 		sys_props.platform_oem = *((uint64_t *)dev->oem_table_id);
+ 		sys_props.platform_rev = dev->oem_revision;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.h b/drivers/gpu/drm/amd/amdkfd/kfd_topology.h
+index 326d9b26b7aa7..22476a9390641 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.h
+@@ -182,7 +182,10 @@ struct kfd_topology_device {
+ 	struct attribute		attr_gpuid;
+ 	struct attribute		attr_name;
+ 	struct attribute		attr_props;
+-	uint8_t				oem_id[CRAT_OEMID_LENGTH];
++	union {
++		uint8_t				oem_id[CRAT_OEMID_LENGTH];
++		uint64_t			oem_id64;
++	};
+ 	uint8_t				oem_table_id[CRAT_OEMTABLEID_LENGTH];
+ 	uint32_t			oem_revision;
+ };
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 29ef0ed44d5f4..50921b340b886 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3341,7 +3341,10 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 
+ 	/* There is one primary plane per CRTC */
+ 	primary_planes = dm->dc->caps.max_streams;
+-	ASSERT(primary_planes <= AMDGPU_MAX_PLANES);
++	if (primary_planes > AMDGPU_MAX_PLANES) {
++		DRM_ERROR("DM: Plane nums out of 6 planes\n");
++		return -EINVAL;
++	}
+ 
+ 	/*
+ 	 * Initialize primary planes, implicit planes for legacy IOCTLS.
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+index 0eba391e597fd..40d03f8cde2cf 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+@@ -455,7 +455,8 @@ static void build_watermark_ranges(struct clk_bw_params *bw_params, struct pp_sm
+ 			ranges->reader_wm_sets[num_valid_sets].max_fill_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
+ 
+ 			/* Modify previous watermark range to cover up to max */
+-			ranges->reader_wm_sets[num_valid_sets - 1].max_fill_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
++			if (num_valid_sets > 0)
++				ranges->reader_wm_sets[num_valid_sets - 1].max_fill_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
+ 		}
+ 		num_valid_sets++;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dwb_scl.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dwb_scl.c
+index 880954ac0b027..1b3cba5b1d749 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dwb_scl.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dwb_scl.c
+@@ -690,6 +690,9 @@ static void wbscl_set_scaler_filter(
+ 	int pair;
+ 	uint16_t odd_coef, even_coef;
+ 
++	if (!filter)
++		return;
++
+ 	for (phase = 0; phase < (NUM_PHASES / 2 + 1); phase++) {
+ 		for (pair = 0; pair < tap_pairs; pair++) {
+ 			even_coef = filter[phase * taps + 2 * pair];
+diff --git a/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c b/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
+index dae8e489c8cf4..a5de27908914c 100644
+--- a/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
++++ b/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
+@@ -58,7 +58,7 @@ struct gpio_service *dal_gpio_service_create(
+ 	struct dc_context *ctx)
+ {
+ 	struct gpio_service *service;
+-	uint32_t index_of_id;
++	int32_t index_of_id;
+ 
+ 	service = kzalloc(sizeof(struct gpio_service), GFP_KERNEL);
+ 
+@@ -114,7 +114,7 @@ struct gpio_service *dal_gpio_service_create(
+ 	return service;
+ 
+ failure_2:
+-	while (index_of_id) {
++	while (index_of_id > 0) {
+ 		--index_of_id;
+ 		kfree(service->busyness[index_of_id]);
+ 	}
+@@ -241,6 +241,9 @@ static bool is_pin_busy(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
++	if (id == GPIO_ID_UNKNOWN)
++		return false;
++
+ 	return service->busyness[id][en];
+ }
+ 
+@@ -249,6 +252,9 @@ static void set_pin_busy(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
++	if (id == GPIO_ID_UNKNOWN)
++		return;
++
+ 	service->busyness[id][en] = true;
+ }
+ 
+@@ -257,6 +263,9 @@ static void set_pin_free(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
++	if (id == GPIO_ID_UNKNOWN)
++		return;
++
+ 	service->busyness[id][en] = false;
+ }
+ 
+@@ -265,7 +274,7 @@ enum gpio_result dal_gpio_service_lock(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
+-	if (!service->busyness[id]) {
++	if (id != GPIO_ID_UNKNOWN && !service->busyness[id]) {
+ 		ASSERT_CRITICAL(false);
+ 		return GPIO_RESULT_OPEN_FAILED;
+ 	}
+@@ -279,7 +288,7 @@ enum gpio_result dal_gpio_service_unlock(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
+-	if (!service->busyness[id]) {
++	if (id != GPIO_ID_UNKNOWN && !service->busyness[id]) {
+ 		ASSERT_CRITICAL(false);
+ 		return GPIO_RESULT_OPEN_FAILED;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
+index 51855a2624cf4..b1d5387195054 100644
+--- a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
++++ b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
+@@ -130,13 +130,21 @@ static bool hdmi_14_process_transaction(
+ 	const uint8_t hdcp_i2c_addr_link_primary = 0x3a; /* 0x74 >> 1*/
+ 	const uint8_t hdcp_i2c_addr_link_secondary = 0x3b; /* 0x76 >> 1*/
+ 	struct i2c_command i2c_command;
+-	uint8_t offset = hdcp_i2c_offsets[message_info->msg_id];
++	uint8_t offset;
+ 	struct i2c_payload i2c_payloads[] = {
+-		{ true, 0, 1, &offset },
++		{ true, 0, 1, 0 },
+ 		/* actual hdcp payload, will be filled later, zeroed for now*/
+ 		{ 0 }
+ 	};
+ 
++	if (message_info->msg_id == HDCP_MESSAGE_ID_INVALID) {
++		DC_LOG_ERROR("%s: Invalid message_info msg_id - %d\n", __func__, message_info->msg_id);
++		return false;
++	}
++
++	offset = hdcp_i2c_offsets[message_info->msg_id];
++	i2c_payloads[0].data = &offset;
++
+ 	switch (message_info->link) {
+ 	case HDCP_LINK_SECONDARY:
+ 		i2c_payloads[0].address = hdcp_i2c_addr_link_secondary;
+@@ -310,6 +318,11 @@ static bool dp_11_process_transaction(
+ 	struct dc_link *link,
+ 	struct hdcp_protection_message *message_info)
+ {
++	if (message_info->msg_id == HDCP_MESSAGE_ID_INVALID) {
++		DC_LOG_ERROR("%s: Invalid message_info msg_id - %d\n", __func__, message_info->msg_id);
++		return false;
++	}
++
+ 	return dpcd_access_helper(
+ 		link,
+ 		message_info->length,
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c
+index 8e9caae7c9559..1b2df97226a3f 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c
+@@ -156,11 +156,16 @@ static enum mod_hdcp_status read(struct mod_hdcp *hdcp,
+ 	uint32_t cur_size = 0;
+ 	uint32_t data_offset = 0;
+ 
+-	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID) {
++	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID ||
++		msg_id >= MOD_HDCP_MESSAGE_ID_MAX)
+ 		return MOD_HDCP_STATUS_DDC_FAILURE;
+-	}
+ 
+ 	if (is_dp_hdcp(hdcp)) {
++		int num_dpcd_addrs = sizeof(hdcp_dpcd_addrs) /
++			sizeof(hdcp_dpcd_addrs[0]);
++		if (msg_id >= num_dpcd_addrs)
++			return MOD_HDCP_STATUS_DDC_FAILURE;
++
+ 		while (buf_len > 0) {
+ 			cur_size = MIN(buf_len, HDCP_MAX_AUX_TRANSACTION_SIZE);
+ 			success = hdcp->config.ddc.funcs.read_dpcd(hdcp->config.ddc.handle,
+@@ -175,6 +180,11 @@ static enum mod_hdcp_status read(struct mod_hdcp *hdcp,
+ 			data_offset += cur_size;
+ 		}
+ 	} else {
++		int num_i2c_offsets = sizeof(hdcp_i2c_offsets) /
++			sizeof(hdcp_i2c_offsets[0]);
++		if (msg_id >= num_i2c_offsets)
++			return MOD_HDCP_STATUS_DDC_FAILURE;
++
+ 		success = hdcp->config.ddc.funcs.read_i2c(
+ 				hdcp->config.ddc.handle,
+ 				HDCP_I2C_ADDR,
+@@ -219,11 +229,16 @@ static enum mod_hdcp_status write(struct mod_hdcp *hdcp,
+ 	uint32_t cur_size = 0;
+ 	uint32_t data_offset = 0;
+ 
+-	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID) {
++	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID ||
++		msg_id >= MOD_HDCP_MESSAGE_ID_MAX)
+ 		return MOD_HDCP_STATUS_DDC_FAILURE;
+-	}
+ 
+ 	if (is_dp_hdcp(hdcp)) {
++		int num_dpcd_addrs = sizeof(hdcp_dpcd_addrs) /
++			sizeof(hdcp_dpcd_addrs[0]);
++		if (msg_id >= num_dpcd_addrs)
++			return MOD_HDCP_STATUS_DDC_FAILURE;
++
+ 		while (buf_len > 0) {
+ 			cur_size = MIN(buf_len, HDCP_MAX_AUX_TRANSACTION_SIZE);
+ 			success = hdcp->config.ddc.funcs.write_dpcd(
+@@ -239,6 +254,11 @@ static enum mod_hdcp_status write(struct mod_hdcp *hdcp,
+ 			data_offset += cur_size;
+ 		}
+ 	} else {
++		int num_i2c_offsets = sizeof(hdcp_i2c_offsets) /
++			sizeof(hdcp_i2c_offsets[0]);
++		if (msg_id >= num_i2c_offsets)
++			return MOD_HDCP_STATUS_DDC_FAILURE;
++
+ 		hdcp->buf[0] = hdcp_i2c_offsets[msg_id];
+ 		memmove(&hdcp->buf[1], buf, buf_len);
+ 		success = hdcp->config.ddc.funcs.write_i2c(
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c
+index 31a32a79cfc20..fe70ab4e65bb5 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c
+@@ -30,9 +30,8 @@ int psm_init_power_state_table(struct pp_hwmgr *hwmgr)
+ {
+ 	int result;
+ 	unsigned int i;
+-	unsigned int table_entries;
+ 	struct pp_power_state *state;
+-	int size;
++	int size, table_entries;
+ 
+ 	if (hwmgr->hwmgr_func->get_num_of_pp_table_entries == NULL)
+ 		return 0;
+@@ -40,15 +39,19 @@ int psm_init_power_state_table(struct pp_hwmgr *hwmgr)
+ 	if (hwmgr->hwmgr_func->get_power_state_size == NULL)
+ 		return 0;
+ 
+-	hwmgr->num_ps = table_entries = hwmgr->hwmgr_func->get_num_of_pp_table_entries(hwmgr);
++	table_entries = hwmgr->hwmgr_func->get_num_of_pp_table_entries(hwmgr);
+ 
+-	hwmgr->ps_size = size = hwmgr->hwmgr_func->get_power_state_size(hwmgr) +
++	size = hwmgr->hwmgr_func->get_power_state_size(hwmgr) +
+ 					  sizeof(struct pp_power_state);
+ 
+-	if (table_entries == 0 || size == 0) {
++	if (table_entries <= 0 || size == 0) {
+ 		pr_warn("Please check whether power state management is supported on this asic\n");
++		hwmgr->num_ps = 0;
++		hwmgr->ps_size = 0;
+ 		return 0;
+ 	}
++	hwmgr->num_ps = table_entries;
++	hwmgr->ps_size = size;
+ 
+ 	hwmgr->ps = kcalloc(table_entries, size, GFP_KERNEL);
+ 	if (hwmgr->ps == NULL)
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
+index 01dc46dc9c8a0..165af862d0542 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
+@@ -73,8 +73,9 @@ static int atomctrl_retrieve_ac_timing(
+ 					j++;
+ 				} else if ((table->mc_reg_address[i].uc_pre_reg_data &
+ 							LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
+-					table->mc_reg_table_entry[num_ranges].mc_data[i] =
+-						table->mc_reg_table_entry[num_ranges].mc_data[i-1];
++					if (i)
++						table->mc_reg_table_entry[num_ranges].mc_data[i] =
++							table->mc_reg_table_entry[num_ranges].mc_data[i-1];
+ 				}
+ 			}
+ 			num_ranges++;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+index 5e72b7555edae..3673a9e7ba449 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+@@ -5190,7 +5190,7 @@ static int smu7_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, uint
+ 	mode = input[size];
+ 	switch (mode) {
+ 	case PP_SMC_POWER_PROFILE_CUSTOM:
+-		if (size < 8 && size != 0)
++		if (size != 8 && size != 0)
+ 			return -EINVAL;
+ 		/* If only CUSTOM is passed in, use the saved values. Check
+ 		 * that we actually have a CUSTOM profile by ensuring that
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
+index 35d0ff57a5960..e85a90b989b59 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
+@@ -584,6 +584,7 @@ static int smu8_init_uvd_limit(struct pp_hwmgr *hwmgr)
+ 				hwmgr->dyn_state.uvd_clock_voltage_dependency_table;
+ 	unsigned long clock = 0;
+ 	uint32_t level;
++	int ret;
+ 
+ 	if (NULL == table || table->count <= 0)
+ 		return -EINVAL;
+@@ -591,7 +592,9 @@ static int smu8_init_uvd_limit(struct pp_hwmgr *hwmgr)
+ 	data->uvd_dpm.soft_min_clk = 0;
+ 	data->uvd_dpm.hard_min_clk = 0;
+ 
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxUvdLevel, &level);
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxUvdLevel, &level);
++	if (ret)
++		return ret;
+ 
+ 	if (level < table->count)
+ 		clock = table->entries[level].vclk;
+@@ -611,6 +614,7 @@ static int smu8_init_vce_limit(struct pp_hwmgr *hwmgr)
+ 				hwmgr->dyn_state.vce_clock_voltage_dependency_table;
+ 	unsigned long clock = 0;
+ 	uint32_t level;
++	int ret;
+ 
+ 	if (NULL == table || table->count <= 0)
+ 		return -EINVAL;
+@@ -618,7 +622,9 @@ static int smu8_init_vce_limit(struct pp_hwmgr *hwmgr)
+ 	data->vce_dpm.soft_min_clk = 0;
+ 	data->vce_dpm.hard_min_clk = 0;
+ 
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxEclkLevel, &level);
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxEclkLevel, &level);
++	if (ret)
++		return ret;
+ 
+ 	if (level < table->count)
+ 		clock = table->entries[level].ecclk;
+@@ -638,6 +644,7 @@ static int smu8_init_acp_limit(struct pp_hwmgr *hwmgr)
+ 				hwmgr->dyn_state.acp_clock_voltage_dependency_table;
+ 	unsigned long clock = 0;
+ 	uint32_t level;
++	int ret;
+ 
+ 	if (NULL == table || table->count <= 0)
+ 		return -EINVAL;
+@@ -645,7 +652,9 @@ static int smu8_init_acp_limit(struct pp_hwmgr *hwmgr)
+ 	data->acp_dpm.soft_min_clk = 0;
+ 	data->acp_dpm.hard_min_clk = 0;
+ 
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxAclkLevel, &level);
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxAclkLevel, &level);
++	if (ret)
++		return ret;
+ 
+ 	if (level < table->count)
+ 		clock = table->entries[level].acpclk;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+index 10678b5199957..79a41180adf13 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+@@ -355,13 +355,13 @@ static int vega10_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
+ 	return 0;
+ }
+ 
+-static void vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
++static int vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ {
+ 	struct vega10_hwmgr *data = hwmgr->backend;
+-	int i;
+ 	uint32_t sub_vendor_id, hw_revision;
+ 	uint32_t top32, bottom32;
+ 	struct amdgpu_device *adev = hwmgr->adev;
++	int ret, i;
+ 
+ 	vega10_initialize_power_tune_defaults(hwmgr);
+ 
+@@ -486,9 +486,12 @@ static void vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ 	if (data->registry_data.vr0hot_enabled)
+ 		data->smu_features[GNLD_VR0HOT].supported = true;
+ 
+-	smum_send_msg_to_smc(hwmgr,
++	ret = smum_send_msg_to_smc(hwmgr,
+ 			PPSMC_MSG_GetSmuVersion,
+ 			&hwmgr->smu_version);
++	if (ret)
++		return ret;
++
+ 		/* ACG firmware has major version 5 */
+ 	if ((hwmgr->smu_version & 0xff000000) == 0x5000000)
+ 		data->smu_features[GNLD_ACG].supported = true;
+@@ -506,10 +509,16 @@ static void vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ 		data->smu_features[GNLD_PCC_LIMIT].supported = true;
+ 
+ 	/* Get the SN to turn into a Unique ID */
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumTop32, &top32);
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumBottom32, &bottom32);
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumTop32, &top32);
++	if (ret)
++		return ret;
++
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumBottom32, &bottom32);
++	if (ret)
++		return ret;
+ 
+ 	adev->unique_id = ((uint64_t)bottom32 << 32) | top32;
++	return 0;
+ }
+ 
+ #ifdef PPLIB_VEGA10_EVV_SUPPORT
+@@ -883,7 +892,9 @@ static int vega10_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
+ 
+ 	vega10_set_features_platform_caps(hwmgr);
+ 
+-	vega10_init_dpm_defaults(hwmgr);
++	result = vega10_init_dpm_defaults(hwmgr);
++	if (result)
++		return result;
+ 
+ #ifdef PPLIB_VEGA10_EVV_SUPPORT
+ 	/* Get leakage voltage based on leakage ID. */
+@@ -2350,15 +2361,20 @@ static int vega10_acg_enable(struct pp_hwmgr *hwmgr)
+ {
+ 	struct vega10_hwmgr *data = hwmgr->backend;
+ 	uint32_t agc_btc_response;
++	int ret;
+ 
+ 	if (data->smu_features[GNLD_ACG].supported) {
+ 		if (0 == vega10_enable_smc_features(hwmgr, true,
+ 					data->smu_features[GNLD_DPM_PREFETCHER].smu_feature_bitmap))
+ 			data->smu_features[GNLD_DPM_PREFETCHER].enabled = true;
+ 
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_InitializeAcg, NULL);
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_InitializeAcg, NULL);
++		if (ret)
++			return ret;
+ 
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_RunAcgBtc, &agc_btc_response);
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_RunAcgBtc, &agc_btc_response);
++		if (ret)
++			agc_btc_response = 0;
+ 
+ 		if (1 == agc_btc_response) {
+ 			if (1 == data->acg_loop_state)
+@@ -2572,8 +2588,11 @@ static int vega10_init_smc_table(struct pp_hwmgr *hwmgr)
+ 		}
+ 	}
+ 
+-	pp_atomfwctrl_get_voltage_table_v4(hwmgr, VOLTAGE_TYPE_VDDC,
++	result = pp_atomfwctrl_get_voltage_table_v4(hwmgr, VOLTAGE_TYPE_VDDC,
+ 			VOLTAGE_OBJ_SVID2,  &voltage_table);
++	PP_ASSERT_WITH_CODE(!result,
++			"Failed to get voltage table!",
++			return result);
+ 	pp_table->MaxVidStep = voltage_table.max_vid_step;
+ 
+ 	pp_table->GfxDpmVoltageMode =
+@@ -3391,13 +3410,17 @@ static int vega10_find_dpm_states_clocks_in_dpm_table(struct pp_hwmgr *hwmgr, co
+ 	const struct vega10_power_state *vega10_ps =
+ 			cast_const_phw_vega10_power_state(states->pnew_state);
+ 	struct vega10_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table);
+-	uint32_t sclk = vega10_ps->performance_levels
+-			[vega10_ps->performance_level_count - 1].gfx_clock;
+ 	struct vega10_single_dpm_table *mclk_table = &(data->dpm_table.mem_table);
+-	uint32_t mclk = vega10_ps->performance_levels
+-			[vega10_ps->performance_level_count - 1].mem_clock;
++	uint32_t sclk, mclk;
+ 	uint32_t i;
+ 
++	if (vega10_ps == NULL)
++		return -EINVAL;
++	sclk = vega10_ps->performance_levels
++			[vega10_ps->performance_level_count - 1].gfx_clock;
++	mclk = vega10_ps->performance_levels
++			[vega10_ps->performance_level_count - 1].mem_clock;
++
+ 	for (i = 0; i < sclk_table->count; i++) {
+ 		if (sclk == sclk_table->dpm_levels[i].value)
+ 			break;
+@@ -3704,6 +3727,9 @@ static int vega10_generate_dpm_level_enable_mask(
+ 			cast_const_phw_vega10_power_state(states->pnew_state);
+ 	int i;
+ 
++	if (vega10_ps == NULL)
++		return -EINVAL;
++
+ 	PP_ASSERT_WITH_CODE(!vega10_trim_dpm_states(hwmgr, vega10_ps),
+ 			"Attempt to Trim DPM States Failed!",
+ 			return -1);
+@@ -3876,11 +3902,14 @@ static int vega10_get_gpu_power(struct pp_hwmgr *hwmgr,
+ 		uint32_t *query)
+ {
+ 	uint32_t value;
++	int ret;
+ 
+ 	if (!query)
+ 		return -EINVAL;
+ 
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrPkgPwr, &value);
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrPkgPwr, &value);
++	if (ret)
++		return ret;
+ 
+ 	/* SMC returning actual watts, keep consistent with legacy asics, low 8 bit as 8 fractional bits */
+ 	*query = value << 8;
+@@ -4633,14 +4662,16 @@ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 	uint32_t gen_speed, lane_width, current_gen_speed, current_lane_width;
+ 	PPTable_t *pptable = &(data->smc_state_table.pp_table);
+ 
+-	int i, now, size = 0, count = 0;
++	int i, ret, now,  size = 0, count = 0;
+ 
+ 	switch (type) {
+ 	case PP_SCLK:
+ 		if (data->registry_data.sclk_dpm_key_disabled)
+ 			break;
+ 
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentGfxclkIndex, &now);
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentGfxclkIndex, &now);
++		if (ret)
++			break;
+ 
+ 		if (hwmgr->pp_one_vf &&
+ 		    (hwmgr->dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK))
+@@ -4656,7 +4687,9 @@ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 		if (data->registry_data.mclk_dpm_key_disabled)
+ 			break;
+ 
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentUclkIndex, &now);
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentUclkIndex, &now);
++		if (ret)
++			break;
+ 
+ 		for (i = 0; i < mclk_table->count; i++)
+ 			size += sprintf(buf + size, "%d: %uMhz %s\n",
+@@ -4667,7 +4700,9 @@ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 		if (data->registry_data.socclk_dpm_key_disabled)
+ 			break;
+ 
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentSocclkIndex, &now);
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentSocclkIndex, &now);
++		if (ret)
++			break;
+ 
+ 		for (i = 0; i < soc_table->count; i++)
+ 			size += sprintf(buf + size, "%d: %uMhz %s\n",
+@@ -4678,8 +4713,10 @@ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 		if (data->registry_data.dcefclk_dpm_key_disabled)
+ 			break;
+ 
+-		smum_send_msg_to_smc_with_parameter(hwmgr,
++		ret = smum_send_msg_to_smc_with_parameter(hwmgr,
+ 				PPSMC_MSG_GetClockFreqMHz, CLK_DCEFCLK, &now);
++		if (ret)
++			break;
+ 
+ 		for (i = 0; i < dcef_table->count; i++)
+ 			size += sprintf(buf + size, "%d: %uMhz %s\n",
+@@ -4828,6 +4865,9 @@ static int vega10_check_states_equal(struct pp_hwmgr *hwmgr,
+ 
+ 	psa = cast_const_phw_vega10_power_state(pstate1);
+ 	psb = cast_const_phw_vega10_power_state(pstate2);
++	if (psa == NULL || psb == NULL)
++		return -EINVAL;
++
+ 	/* If the two states don't even have the same number of performance levels they cannot be the same state. */
+ 	if (psa->performance_level_count != psb->performance_level_count) {
+ 		*equal = false;
+@@ -4953,6 +4993,8 @@ static int vega10_set_sclk_od(struct pp_hwmgr *hwmgr, uint32_t value)
+ 		return -EINVAL;
+ 
+ 	vega10_ps = cast_phw_vega10_power_state(&ps->hardware);
++	if (vega10_ps == NULL)
++		return -EINVAL;
+ 
+ 	vega10_ps->performance_levels
+ 	[vega10_ps->performance_level_count - 1].gfx_clock =
+@@ -5004,6 +5046,8 @@ static int vega10_set_mclk_od(struct pp_hwmgr *hwmgr, uint32_t value)
+ 		return -EINVAL;
+ 
+ 	vega10_ps = cast_phw_vega10_power_state(&ps->hardware);
++	if (vega10_ps == NULL)
++		return -EINVAL;
+ 
+ 	vega10_ps->performance_levels
+ 	[vega10_ps->performance_level_count - 1].mem_clock =
+@@ -5239,6 +5283,9 @@ static void vega10_odn_update_power_state(struct pp_hwmgr *hwmgr)
+ 		return;
+ 
+ 	vega10_ps = cast_phw_vega10_power_state(&ps->hardware);
++	if (vega10_ps == NULL)
++		return;
++
+ 	max_level = vega10_ps->performance_level_count - 1;
+ 
+ 	if (vega10_ps->performance_levels[max_level].gfx_clock !=
+@@ -5261,6 +5308,9 @@ static void vega10_odn_update_power_state(struct pp_hwmgr *hwmgr)
+ 
+ 	ps = (struct pp_power_state *)((unsigned long)(hwmgr->ps) + hwmgr->ps_size * (hwmgr->num_ps - 1));
+ 	vega10_ps = cast_phw_vega10_power_state(&ps->hardware);
++	if (vega10_ps == NULL)
++		return;
++
+ 	max_level = vega10_ps->performance_level_count - 1;
+ 
+ 	if (vega10_ps->performance_levels[max_level].gfx_clock !=
+@@ -5451,6 +5501,8 @@ static int vega10_get_performance_level(struct pp_hwmgr *hwmgr, const struct pp_
+ 		return -EINVAL;
+ 
+ 	ps = cast_const_phw_vega10_power_state(state);
++	if (ps == NULL)
++		return -EINVAL;
+ 
+ 	i = index > ps->performance_level_count - 1 ?
+ 			ps->performance_level_count - 1 : index;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
+index 57a354a03e8ae..a55dc6ec4f766 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
+@@ -4095,9 +4095,11 @@ static int vega20_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, ui
+ 	if (power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+ 		struct vega20_hwmgr *data =
+ 			(struct vega20_hwmgr *)(hwmgr->backend);
+-		if (size == 0 && !data->is_custom_profile_set)
++
++		if (size != 10 && size != 0)
+ 			return -EINVAL;
+-		if (size < 10 && size != 0)
++
++		if (size == 0 && !data->is_custom_profile_set)
+ 			return -EINVAL;
+ 
+ 		result = vega20_get_activity_monitor_coeff(hwmgr,
+@@ -4159,6 +4161,8 @@ static int vega20_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, ui
+ 			activity_monitor.Fclk_PD_Data_error_coeff = input[8];
+ 			activity_monitor.Fclk_PD_Data_error_rate_coeff = input[9];
+ 			break;
++		default:
++			return -EINVAL;
+ 		}
+ 
+ 		result = vega20_set_activity_monitor_coeff(hwmgr,
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
+index daf122f24f230..ae8305a1ff05a 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
+@@ -131,13 +131,17 @@ int vega10_get_enabled_smc_features(struct pp_hwmgr *hwmgr,
+ 			    uint64_t *features_enabled)
+ {
+ 	uint32_t enabled_features;
++	int ret;
+ 
+ 	if (features_enabled == NULL)
+ 		return -EINVAL;
+ 
+-	smum_send_msg_to_smc(hwmgr,
++	ret = smum_send_msg_to_smc(hwmgr,
+ 			PPSMC_MSG_GetEnabledSmuFeatures,
+ 			&enabled_features);
++	if (ret)
++		return ret;
++
+ 	*features_enabled = enabled_features;
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 43de9dfcba19a..f1091cb87de0c 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -318,6 +318,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ONE XPLAYER"),
+ 		},
+ 		.driver_data = (void *)&lcd1600x2560_leftside_up,
++	}, {	/* OrangePi Neo */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "OrangePi"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "NEO-01"),
++		},
++		.driver_data = (void *)&lcd1200x1920_rightside_up,
+ 	}, {	/* Samsung GalaxyBook 10.6 */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
+diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
+index 038d4c6884c5b..136a7163477da 100644
+--- a/drivers/gpu/drm/i915/i915_sw_fence.c
++++ b/drivers/gpu/drm/i915/i915_sw_fence.c
+@@ -44,7 +44,7 @@ static inline void debug_fence_init(struct i915_sw_fence *fence)
+ 	debug_object_init(fence, &i915_sw_fence_debug_descr);
+ }
+ 
+-static inline void debug_fence_init_onstack(struct i915_sw_fence *fence)
++static inline __maybe_unused void debug_fence_init_onstack(struct i915_sw_fence *fence)
+ {
+ 	debug_object_init_on_stack(fence, &i915_sw_fence_debug_descr);
+ }
+@@ -70,7 +70,7 @@ static inline void debug_fence_destroy(struct i915_sw_fence *fence)
+ 	debug_object_destroy(fence, &i915_sw_fence_debug_descr);
+ }
+ 
+-static inline void debug_fence_free(struct i915_sw_fence *fence)
++static inline __maybe_unused void debug_fence_free(struct i915_sw_fence *fence)
+ {
+ 	debug_object_free(fence, &i915_sw_fence_debug_descr);
+ 	smp_wmb(); /* flush the change in state before reallocation */
+@@ -87,7 +87,7 @@ static inline void debug_fence_init(struct i915_sw_fence *fence)
+ {
+ }
+ 
+-static inline void debug_fence_init_onstack(struct i915_sw_fence *fence)
++static inline __maybe_unused void debug_fence_init_onstack(struct i915_sw_fence *fence)
+ {
+ }
+ 
+@@ -108,7 +108,7 @@ static inline void debug_fence_destroy(struct i915_sw_fence *fence)
+ {
+ }
+ 
+-static inline void debug_fence_free(struct i915_sw_fence *fence)
++static inline __maybe_unused void debug_fence_free(struct i915_sw_fence *fence)
+ {
+ }
+ 
+diff --git a/drivers/gpu/drm/meson/meson_plane.c b/drivers/gpu/drm/meson/meson_plane.c
+index 255c6b863f8d2..6d54c565b34fa 100644
+--- a/drivers/gpu/drm/meson/meson_plane.c
++++ b/drivers/gpu/drm/meson/meson_plane.c
+@@ -529,6 +529,7 @@ int meson_plane_create(struct meson_drm *priv)
+ 	struct meson_plane *meson_plane;
+ 	struct drm_plane *plane;
+ 	const uint64_t *format_modifiers = format_modifiers_default;
++	int ret;
+ 
+ 	meson_plane = devm_kzalloc(priv->drm->dev, sizeof(*meson_plane),
+ 				   GFP_KERNEL);
+@@ -543,12 +544,16 @@ int meson_plane_create(struct meson_drm *priv)
+ 	else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A))
+ 		format_modifiers = format_modifiers_afbc_g12a;
+ 
+-	drm_universal_plane_init(priv->drm, plane, 0xFF,
+-				 &meson_plane_funcs,
+-				 supported_drm_formats,
+-				 ARRAY_SIZE(supported_drm_formats),
+-				 format_modifiers,
+-				 DRM_PLANE_TYPE_PRIMARY, "meson_primary_plane");
++	ret = drm_universal_plane_init(priv->drm, plane, 0xFF,
++					&meson_plane_funcs,
++					supported_drm_formats,
++					ARRAY_SIZE(supported_drm_formats),
++					format_modifiers,
++					DRM_PLANE_TYPE_PRIMARY, "meson_primary_plane");
++	if (ret) {
++		devm_kfree(priv->drm->dev, meson_plane);
++		return ret;
++	}
+ 
+ 	drm_plane_helper_add(plane, &meson_plane_helper_funcs);
+ 
+diff --git a/drivers/hid/hid-cougar.c b/drivers/hid/hid-cougar.c
+index 28d671c5e0cac..d173b13ff1983 100644
+--- a/drivers/hid/hid-cougar.c
++++ b/drivers/hid/hid-cougar.c
+@@ -106,7 +106,7 @@ static void cougar_fix_g6_mapping(void)
+ static __u8 *cougar_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 				 unsigned int *rsize)
+ {
+-	if (rdesc[2] == 0x09 && rdesc[3] == 0x02 &&
++	if (*rsize >= 117 && rdesc[2] == 0x09 && rdesc[3] == 0x02 &&
+ 	    (rdesc[115] | rdesc[116] << 8) >= HID_MAX_USAGES) {
+ 		hid_info(hdev,
+ 			"usage count exceeds max: fixing up report descriptor\n");
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index e99400f3ae1d1..39339b152b8ba 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1965,6 +1965,7 @@ int vmbus_add_channel_kobj(struct hv_device *dev, struct vmbus_channel *channel)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(vmbus_device_unregister);
+ 
+ /*
+  * vmbus_remove_channel_attr_group - remove the channel's attribute group
+diff --git a/drivers/hwmon/adc128d818.c b/drivers/hwmon/adc128d818.c
+index 6c9a906631b89..e73c4de9471fa 100644
+--- a/drivers/hwmon/adc128d818.c
++++ b/drivers/hwmon/adc128d818.c
+@@ -176,7 +176,7 @@ static ssize_t adc128_in_store(struct device *dev,
+ 
+ 	mutex_lock(&data->update_lock);
+ 	/* 10 mV LSB on limit registers */
+-	regval = clamp_val(DIV_ROUND_CLOSEST(val, 10), 0, 255);
++	regval = DIV_ROUND_CLOSEST(clamp_val(val, 0, 2550), 10);
+ 	data->in[index][nr] = regval << 4;
+ 	reg = index == 1 ? ADC128_REG_IN_MIN(nr) : ADC128_REG_IN_MAX(nr);
+ 	i2c_smbus_write_byte_data(data->client, reg, regval);
+@@ -214,7 +214,7 @@ static ssize_t adc128_temp_store(struct device *dev,
+ 		return err;
+ 
+ 	mutex_lock(&data->update_lock);
+-	regval = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -128, 127);
++	regval = DIV_ROUND_CLOSEST(clamp_val(val, -128000, 127000), 1000);
+ 	data->temp[index] = regval << 1;
+ 	i2c_smbus_write_byte_data(data->client,
+ 				  index == 1 ? ADC128_REG_TEMP_MAX
+diff --git a/drivers/hwmon/lm95234.c b/drivers/hwmon/lm95234.c
+index ac169a994ae00..db2aecdfbd17c 100644
+--- a/drivers/hwmon/lm95234.c
++++ b/drivers/hwmon/lm95234.c
+@@ -301,7 +301,8 @@ static ssize_t tcrit2_store(struct device *dev, struct device_attribute *attr,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, index ? 255 : 127);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, (index ? 255 : 127) * 1000),
++				1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->tcrit2[index] = val;
+@@ -350,7 +351,7 @@ static ssize_t tcrit1_store(struct device *dev, struct device_attribute *attr,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 255);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 255000), 1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->tcrit1[index] = val;
+@@ -391,7 +392,7 @@ static ssize_t tcrit1_hyst_store(struct device *dev,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	val = DIV_ROUND_CLOSEST(val, 1000);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, -255000, 255000), 1000);
+ 	val = clamp_val((int)data->tcrit1[index] - val, 0, 31);
+ 
+ 	mutex_lock(&data->update_lock);
+@@ -431,7 +432,7 @@ static ssize_t offset_store(struct device *dev, struct device_attribute *attr,
+ 		return ret;
+ 
+ 	/* Accuracy is 1/2 degrees C */
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 500), -128, 127);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, -64000, 63500), 500);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->toffset[index] = val;
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index 5bd15622a85f9..3645a19cdaf4d 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -2374,7 +2374,7 @@ store_temp_offset(struct device *dev, struct device_attribute *attr,
+ 	if (err < 0)
+ 		return err;
+ 
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -128, 127);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, -128000, 127000), 1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->temp_offset[nr] = val;
+diff --git a/drivers/hwmon/w83627ehf.c b/drivers/hwmon/w83627ehf.c
+index 3964ceab2817c..acf36862851ad 100644
+--- a/drivers/hwmon/w83627ehf.c
++++ b/drivers/hwmon/w83627ehf.c
+@@ -897,7 +897,7 @@ store_target_temp(struct device *dev, struct device_attribute *attr,
+ 	if (err < 0)
+ 		return err;
+ 
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 127);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 127000), 1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->target_temp[nr] = val;
+@@ -922,7 +922,7 @@ store_tolerance(struct device *dev, struct device_attribute *attr,
+ 		return err;
+ 
+ 	/* Limit the temp to 0C - 15C */
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 15);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 15000), 1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	reg = w83627ehf_read_value(data, W83627EHF_REG_TOLERANCE[nr]);
+diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
+index fd5f5c5a5244d..425597151dd3e 100644
+--- a/drivers/hwspinlock/hwspinlock_core.c
++++ b/drivers/hwspinlock/hwspinlock_core.c
+@@ -302,6 +302,34 @@ void __hwspin_unlock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
+ }
+ EXPORT_SYMBOL_GPL(__hwspin_unlock);
+ 
++/**
++ * hwspin_lock_bust() - bust a specific hwspinlock
++ * @hwlock: a previously-acquired hwspinlock which we want to bust
++ * @id: identifier of the remote lock holder, if applicable
++ *
++ * This function will bust a hwspinlock that was previously acquired as
++ * long as the current owner of the lock matches the id given by the caller.
++ *
++ * Context: Process context.
++ *
++ * Returns: 0 on success, or -EINVAL if the hwspinlock does not exist, or
++ * the bust operation fails, and -EOPNOTSUPP if the bust operation is not
++ * defined for the hwspinlock.
++ */
++int hwspin_lock_bust(struct hwspinlock *hwlock, unsigned int id)
++{
++	if (WARN_ON(!hwlock))
++		return -EINVAL;
++
++	if (!hwlock->bank->ops->bust) {
++		pr_err("bust operation not defined\n");
++		return -EOPNOTSUPP;
++	}
++
++	return hwlock->bank->ops->bust(hwlock, id);
++}
++EXPORT_SYMBOL_GPL(hwspin_lock_bust);
++
+ /**
+  * of_hwspin_lock_simple_xlate - translate hwlock_spec to return a lock id
+  * @bank: the hwspinlock device bank
+diff --git a/drivers/hwspinlock/hwspinlock_internal.h b/drivers/hwspinlock/hwspinlock_internal.h
+index 29892767bb7a0..f298fc0ee5adb 100644
+--- a/drivers/hwspinlock/hwspinlock_internal.h
++++ b/drivers/hwspinlock/hwspinlock_internal.h
+@@ -21,6 +21,8 @@ struct hwspinlock_device;
+  * @trylock: make a single attempt to take the lock. returns 0 on
+  *	     failure and true on success. may _not_ sleep.
+  * @unlock:  release the lock. always succeed. may _not_ sleep.
++ * @bust:    optional, platform-specific bust handler, called by hwspinlock
++ *	     core to bust a specific lock.
+  * @relax:   optional, platform-specific relax handler, called by hwspinlock
+  *	     core while spinning on a lock, between two successive
+  *	     invocations of @trylock. may _not_ sleep.
+@@ -28,6 +30,7 @@ struct hwspinlock_device;
+ struct hwspinlock_ops {
+ 	int (*trylock)(struct hwspinlock *lock);
+ 	void (*unlock)(struct hwspinlock *lock);
++	int (*bust)(struct hwspinlock *lock, unsigned int id);
+ 	void (*relax)(struct hwspinlock *lock);
+ };
+ 
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index 19ab7d7251bcb..99d1288e66828 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -500,6 +500,7 @@ static int ad7124_soft_reset(struct ad7124_state *st)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	fsleep(200);
+ 	timeout = 100;
+ 	do {
+ 		ret = ad_sd_read_reg(&st->sd, AD7124_STATUS, 1, &readval);
+diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+index 93b4e9e6bb551..8aa6e12320e72 100644
+--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
++++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+@@ -180,7 +180,7 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
+ 
+ 	ret = dma_get_slave_caps(chan, &caps);
+ 	if (ret < 0)
+-		goto err_free;
++		goto err_release;
+ 
+ 	/* Needs to be aligned to the maximum of the minimums */
+ 	if (caps.src_addr_widths)
+@@ -207,6 +207,8 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
+ 
+ 	return &dmaengine_buffer->queue.buffer;
+ 
++err_release:
++	dma_release_channel(chan);
+ err_free:
+ 	kfree(dmaengine_buffer);
+ 	return ERR_PTR(ret);
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index c32b2577dd991..6e64ffde6c82d 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -610,17 +610,17 @@ static int iio_convert_raw_to_processed_unlocked(struct iio_channel *chan,
+ 		break;
+ 	case IIO_VAL_INT_PLUS_MICRO:
+ 		if (scale_val2 < 0)
+-			*processed = -raw64 * scale_val;
++			*processed = -raw64 * scale_val * scale;
+ 		else
+-			*processed = raw64 * scale_val;
++			*processed = raw64 * scale_val * scale;
+ 		*processed += div_s64(raw64 * (s64)scale_val2 * scale,
+ 				      1000000LL);
+ 		break;
+ 	case IIO_VAL_INT_PLUS_NANO:
+ 		if (scale_val2 < 0)
+-			*processed = -raw64 * scale_val;
++			*processed = -raw64 * scale_val * scale;
+ 		else
+-			*processed = raw64 * scale_val;
++			*processed = raw64 * scale_val * scale;
+ 		*processed += div_s64(raw64 * (s64)scale_val2 * scale,
+ 				      1000000000LL);
+ 		break;
+diff --git a/drivers/input/misc/uinput.c b/drivers/input/misc/uinput.c
+index f2593133e5247..790db3ceb2083 100644
+--- a/drivers/input/misc/uinput.c
++++ b/drivers/input/misc/uinput.c
+@@ -416,6 +416,20 @@ static int uinput_validate_absinfo(struct input_dev *dev, unsigned int code,
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * Limit number of contacts to a reasonable value (100). This
++	 * ensures that we need less than 2 pages for struct input_mt
++	 * (we are not using in-kernel slot assignment so not going to
++	 * allocate memory for the "red" table), and we should have no
++	 * trouble getting this much memory.
++	 */
++	if (code == ABS_MT_SLOT && max > 99) {
++		printk(KERN_DEBUG
++		       "%s: unreasonably large number of slots requested: %d\n",
++		       UINPUT_NAME, max);
++		return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index a27765a7f6b75..72b380e17a1b0 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -1333,7 +1333,7 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
+ 	 */
+ 	writel(qi->free_head << shift, iommu->reg + DMAR_IQT_REG);
+ 
+-	while (qi->desc_status[wait_index] != QI_DONE) {
++	while (READ_ONCE(qi->desc_status[wait_index]) != QI_DONE) {
+ 		/*
+ 		 * We will leave the interrupts disabled, to prevent interrupt
+ 		 * context to queue another cmd while a cmd is already submitted
+diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-iommu.c
+index 65aa30d55d3ab..f31f66b123666 100644
+--- a/drivers/iommu/sun50i-iommu.c
++++ b/drivers/iommu/sun50i-iommu.c
+@@ -380,6 +380,7 @@ static int sun50i_iommu_enable(struct sun50i_iommu *iommu)
+ 		    IOMMU_TLB_PREFETCH_MASTER_ENABLE(3) |
+ 		    IOMMU_TLB_PREFETCH_MASTER_ENABLE(4) |
+ 		    IOMMU_TLB_PREFETCH_MASTER_ENABLE(5));
++	iommu_write(iommu, IOMMU_BYPASS_REG, 0);
+ 	iommu_write(iommu, IOMMU_INT_ENABLE_REG, IOMMU_INT_MASK);
+ 	iommu_write(iommu, IOMMU_DM_AUT_CTRL_REG(SUN50I_IOMMU_ACI_NONE),
+ 		    IOMMU_DM_AUT_CTRL_RD_UNAVAIL(SUN50I_IOMMU_ACI_NONE, 0) |
+diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
+index c76fb70c70bb6..e865a43428b83 100644
+--- a/drivers/irqchip/irq-armada-370-xp.c
++++ b/drivers/irqchip/irq-armada-370-xp.c
+@@ -546,6 +546,10 @@ static struct irq_chip armada_370_xp_irq_chip = {
+ static int armada_370_xp_mpic_irq_map(struct irq_domain *h,
+ 				      unsigned int virq, irq_hw_number_t hw)
+ {
++	/* IRQs 0 and 1 cannot be mapped, they are handled internally */
++	if (hw <= 1)
++		return -EINVAL;
++
+ 	armada_370_xp_irq_mask(irq_get_irq_data(virq));
+ 	if (!is_percpu_irq(hw))
+ 		writel(hw, per_cpu_int_base +
+diff --git a/drivers/irqchip/irq-gic-v2m.c b/drivers/irqchip/irq-gic-v2m.c
+index 4116b48e60aff..205a275196074 100644
+--- a/drivers/irqchip/irq-gic-v2m.c
++++ b/drivers/irqchip/irq-gic-v2m.c
+@@ -442,12 +442,12 @@ static int __init gicv2m_of_init(struct fwnode_handle *parent_handle,
+ 
+ 		ret = gicv2m_init_one(&child->fwnode, spi_start, nr_spis,
+ 				      &res, 0);
+-		if (ret) {
+-			of_node_put(child);
++		if (ret)
+ 			break;
+-		}
+ 	}
+ 
++	if (ret && child)
++		of_node_put(child);
+ 	if (!ret)
+ 		ret = gicv2m_allocate_domains(parent);
+ 	if (ret)
+diff --git a/drivers/leds/leds-spi-byte.c b/drivers/leds/leds-spi-byte.c
+index f1964c96fb159..82696e0607a53 100644
+--- a/drivers/leds/leds-spi-byte.c
++++ b/drivers/leds/leds-spi-byte.c
+@@ -91,7 +91,6 @@ static int spi_byte_probe(struct spi_device *spi)
+ 		dev_err(dev, "Device must have exactly one LED sub-node.");
+ 		return -EINVAL;
+ 	}
+-	child = of_get_next_available_child(dev_of_node(dev), NULL);
+ 
+ 	led = devm_kzalloc(dev, sizeof(*led), GFP_KERNEL);
+ 	if (!led)
+@@ -107,11 +106,13 @@ static int spi_byte_probe(struct spi_device *spi)
+ 	led->ldev.max_brightness = led->cdef->max_value - led->cdef->off_value;
+ 	led->ldev.brightness_set_blocking = spi_byte_brightness_set_blocking;
+ 
++	child = of_get_next_available_child(dev_of_node(dev), NULL);
+ 	state = of_get_property(child, "default-state", NULL);
+ 	if (state) {
+ 		if (!strcmp(state, "on")) {
+ 			led->ldev.brightness = led->ldev.max_brightness;
+ 		} else if (strcmp(state, "off")) {
++			of_node_put(child);
+ 			/* all other cases except "off" */
+ 			dev_err(dev, "default-state can only be 'on' or 'off'");
+ 			return -EINVAL;
+@@ -122,9 +123,12 @@ static int spi_byte_probe(struct spi_device *spi)
+ 
+ 	ret = devm_led_classdev_register(&spi->dev, &led->ldev);
+ 	if (ret) {
++		of_node_put(child);
+ 		mutex_destroy(&led->mutex);
+ 		return ret;
+ 	}
++
++	of_node_put(child);
+ 	spi_set_drvdata(spi, led);
+ 
+ 	return 0;
+diff --git a/drivers/md/dm-init.c b/drivers/md/dm-init.c
+index b0c45c6ebe0bf..f76477044ec1e 100644
+--- a/drivers/md/dm-init.c
++++ b/drivers/md/dm-init.c
+@@ -207,8 +207,10 @@ static char __init *dm_parse_device_entry(struct dm_device *dev, char *str)
+ 	strscpy(dev->dmi.uuid, field[1], sizeof(dev->dmi.uuid));
+ 	/* minor */
+ 	if (strlen(field[2])) {
+-		if (kstrtoull(field[2], 0, &dev->dmi.dev))
++		if (kstrtoull(field[2], 0, &dev->dmi.dev) ||
++		    dev->dmi.dev >= (1 << MINORBITS))
+ 			return ERR_PTR(-EINVAL);
++		dev->dmi.dev = huge_encode_dev((dev_t)dev->dmi.dev);
+ 		dev->dmi.flags |= DM_PERSISTENT_DEV_FLAG;
+ 	}
+ 	/* flags */
+diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
+index 9186881afc981..d074f426980dd 100644
+--- a/drivers/media/platform/qcom/camss/camss.c
++++ b/drivers/media/platform/qcom/camss/camss.c
+@@ -431,8 +431,11 @@ static int camss_of_parse_endpoint_node(struct device *dev,
+ 	struct v4l2_fwnode_bus_mipi_csi2 *mipi_csi2;
+ 	struct v4l2_fwnode_endpoint vep = { { 0 } };
+ 	unsigned int i;
++	int ret;
+ 
+-	v4l2_fwnode_endpoint_parse(of_fwnode_handle(node), &vep);
++	ret = v4l2_fwnode_endpoint_parse(of_fwnode_handle(node), &vep);
++	if (ret)
++		return ret;
+ 
+ 	csd->interface.csiphy_id = vep.base.port;
+ 
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+index 437889e51ca05..2ce7f5567f512 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+@@ -113,8 +113,9 @@ static int vid_cap_queue_setup(struct vb2_queue *vq,
+ 		if (*nplanes != buffers)
+ 			return -EINVAL;
+ 		for (p = 0; p < buffers; p++) {
+-			if (sizes[p] < tpg_g_line_width(&dev->tpg, p) * h +
+-						dev->fmt_cap->data_offset[p])
++			if (sizes[p] < tpg_g_line_width(&dev->tpg, p) * h /
++					dev->fmt_cap->vdownsampling[p] +
++					dev->fmt_cap->data_offset[p])
+ 				return -EINVAL;
+ 		}
+ 	} else {
+@@ -1801,8 +1802,10 @@ int vidioc_s_edid(struct file *file, void *_fh,
+ 		return -EINVAL;
+ 	if (edid->blocks == 0) {
+ 		dev->edid_blocks = 0;
+-		v4l2_ctrl_s_ctrl(dev->ctrl_tx_edid_present, 0);
+-		v4l2_ctrl_s_ctrl(dev->ctrl_tx_hotplug, 0);
++		if (dev->num_outputs) {
++			v4l2_ctrl_s_ctrl(dev->ctrl_tx_edid_present, 0);
++			v4l2_ctrl_s_ctrl(dev->ctrl_tx_hotplug, 0);
++		}
+ 		phys_addr = CEC_PHYS_ADDR_INVALID;
+ 		goto set_phys_addr;
+ 	}
+@@ -1826,8 +1829,10 @@ int vidioc_s_edid(struct file *file, void *_fh,
+ 			display_present |=
+ 				dev->display_present[i] << j++;
+ 
+-	v4l2_ctrl_s_ctrl(dev->ctrl_tx_edid_present, display_present);
+-	v4l2_ctrl_s_ctrl(dev->ctrl_tx_hotplug, display_present);
++	if (dev->num_outputs) {
++		v4l2_ctrl_s_ctrl(dev->ctrl_tx_edid_present, display_present);
++		v4l2_ctrl_s_ctrl(dev->ctrl_tx_hotplug, display_present);
++	}
+ 
+ set_phys_addr:
+ 	/* TODO: a proper hotplug detect cycle should be emulated here */
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-out.c b/drivers/media/test-drivers/vivid/vivid-vid-out.c
+index cd6c247547d66..9038be90ab35d 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-out.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-out.c
+@@ -63,14 +63,16 @@ static int vid_out_queue_setup(struct vb2_queue *vq,
+ 		if (sizes[0] < size)
+ 			return -EINVAL;
+ 		for (p = 1; p < planes; p++) {
+-			if (sizes[p] < dev->bytesperline_out[p] * h +
+-				       vfmt->data_offset[p])
++			if (sizes[p] < dev->bytesperline_out[p] * h /
++					vfmt->vdownsampling[p] +
++					vfmt->data_offset[p])
+ 				return -EINVAL;
+ 		}
+ 	} else {
+ 		for (p = 0; p < planes; p++)
+-			sizes[p] = p ? dev->bytesperline_out[p] * h +
+-				       vfmt->data_offset[p] : size;
++			sizes[p] = p ? dev->bytesperline_out[p] * h /
++					vfmt->vdownsampling[p] +
++					vfmt->data_offset[p] : size;
+ 	}
+ 
+ 	if (vq->num_buffers + *nbuffers < 2)
+@@ -127,7 +129,7 @@ static int vid_out_buf_prepare(struct vb2_buffer *vb)
+ 
+ 	for (p = 0; p < planes; p++) {
+ 		if (p)
+-			size = dev->bytesperline_out[p] * h;
++			size = dev->bytesperline_out[p] * h / vfmt->vdownsampling[p];
+ 		size += vb->planes[p].data_offset;
+ 
+ 		if (vb2_get_plane_payload(vb, p) < size) {
+@@ -334,8 +336,8 @@ int vivid_g_fmt_vid_out(struct file *file, void *priv,
+ 	for (p = 0; p < mp->num_planes; p++) {
+ 		mp->plane_fmt[p].bytesperline = dev->bytesperline_out[p];
+ 		mp->plane_fmt[p].sizeimage =
+-			mp->plane_fmt[p].bytesperline * mp->height +
+-			fmt->data_offset[p];
++			mp->plane_fmt[p].bytesperline * mp->height /
++			fmt->vdownsampling[p] + fmt->data_offset[p];
+ 	}
+ 	for (p = fmt->buffers; p < fmt->planes; p++) {
+ 		unsigned stride = dev->bytesperline_out[p];
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 6334f99f1854d..cfbc7595cd0b8 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -948,16 +948,26 @@ static int uvc_parse_streaming(struct uvc_device *dev,
+ 		goto error;
+ 	}
+ 
+-	size = nformats * sizeof(*format) + nframes * sizeof(*frame)
++	/*
++	 * Allocate memory for the formats, the frames and the intervals,
++	 * plus any required padding to guarantee that everything has the
++	 * correct alignment.
++	 */
++	size = nformats * sizeof(*format);
++	size = ALIGN(size, __alignof__(*frame)) + nframes * sizeof(*frame);
++	size = ALIGN(size, __alignof__(*interval))
+ 	     + nintervals * sizeof(*interval);
++
+ 	format = kzalloc(size, GFP_KERNEL);
+-	if (format == NULL) {
++	if (!format) {
+ 		ret = -ENOMEM;
+ 		goto error;
+ 	}
+ 
+-	frame = (struct uvc_frame *)&format[nformats];
+-	interval = (u32 *)&frame[nframes];
++	frame = (void *)format + nformats * sizeof(*format);
++	frame = PTR_ALIGN(frame, __alignof__(*frame));
++	interval = (void *)frame + nframes * sizeof(*frame);
++	interval = PTR_ALIGN(interval, __alignof__(*interval));
+ 
+ 	streaming->format = format;
+ 	streaming->nformats = nformats;
+diff --git a/drivers/misc/vmw_vmci/vmci_resource.c b/drivers/misc/vmw_vmci/vmci_resource.c
+index 692daa9eff341..19c9d2cdd277b 100644
+--- a/drivers/misc/vmw_vmci/vmci_resource.c
++++ b/drivers/misc/vmw_vmci/vmci_resource.c
+@@ -144,7 +144,8 @@ void vmci_resource_remove(struct vmci_resource *resource)
+ 	spin_lock(&vmci_resource_table.lock);
+ 
+ 	hlist_for_each_entry(r, &vmci_resource_table.entries[idx], node) {
+-		if (vmci_handle_is_equal(r->handle, resource->handle)) {
++		if (vmci_handle_is_equal(r->handle, resource->handle) &&
++		    resource->type == r->type) {
+ 			hlist_del_init_rcu(&r->node);
+ 			break;
+ 		}
+diff --git a/drivers/mmc/host/cqhci.c b/drivers/mmc/host/cqhci.c
+index 23cf7912c1ba3..6a350f4953528 100644
+--- a/drivers/mmc/host/cqhci.c
++++ b/drivers/mmc/host/cqhci.c
+@@ -592,7 +592,7 @@ static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ 		cqhci_writel(cq_host, 0, CQHCI_CTL);
+ 		mmc->cqe_on = true;
+ 		pr_debug("%s: cqhci: CQE on\n", mmc_hostname(mmc));
+-		if (cqhci_readl(cq_host, CQHCI_CTL) && CQHCI_HALT) {
++		if (cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT) {
+ 			pr_err("%s: cqhci: CQE failed to exit halt state\n",
+ 			       mmc_hostname(mmc));
+ 		}
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 4da525f9c11f0..dc7a5ad41c420 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -2826,8 +2826,8 @@ static int dw_mci_init_slot(struct dw_mci *host)
+ 	if (host->use_dma == TRANS_MODE_IDMAC) {
+ 		mmc->max_segs = host->ring_size;
+ 		mmc->max_blk_size = 65535;
+-		mmc->max_seg_size = 0x1000;
+-		mmc->max_req_size = mmc->max_seg_size * host->ring_size;
++		mmc->max_req_size = DW_MCI_DESC_DATA_LENGTH * host->ring_size;
++		mmc->max_seg_size = mmc->max_req_size;
+ 		mmc->max_blk_count = mmc->max_req_size / 512;
+ 	} else if (host->use_dma == TRANS_MODE_EDMAC) {
+ 		mmc->max_segs = 64;
+diff --git a/drivers/mmc/host/sdhci-of-aspeed.c b/drivers/mmc/host/sdhci-of-aspeed.c
+index 4f008ba3280eb..60810de52d4db 100644
+--- a/drivers/mmc/host/sdhci-of-aspeed.c
++++ b/drivers/mmc/host/sdhci-of-aspeed.c
+@@ -236,6 +236,7 @@ static const struct of_device_id aspeed_sdhci_of_match[] = {
+ 	{ .compatible = "aspeed,ast2600-sdhci", },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, aspeed_sdhci_of_match);
+ 
+ static struct platform_driver aspeed_sdhci_driver = {
+ 	.driver		= {
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index 53ef48588e59a..d9917120b8fac 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -75,7 +75,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 
+ 		if (skb_copy_bits(skb, BAREUDP_BASE_HLEN, &ipversion,
+ 				  sizeof(ipversion))) {
+-			bareudp->dev->stats.rx_dropped++;
++			DEV_STATS_INC(bareudp->dev, rx_dropped);
+ 			goto drop;
+ 		}
+ 		ipversion >>= 4;
+@@ -85,7 +85,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 		} else if (ipversion == 6 && bareudp->multi_proto_mode) {
+ 			proto = htons(ETH_P_IPV6);
+ 		} else {
+-			bareudp->dev->stats.rx_dropped++;
++			DEV_STATS_INC(bareudp->dev, rx_dropped);
+ 			goto drop;
+ 		}
+ 	} else if (bareudp->ethertype == htons(ETH_P_MPLS_UC)) {
+@@ -99,7 +99,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 				   ipv4_is_multicast(tunnel_hdr->daddr)) {
+ 				proto = htons(ETH_P_MPLS_MC);
+ 			} else {
+-				bareudp->dev->stats.rx_dropped++;
++				DEV_STATS_INC(bareudp->dev, rx_dropped);
+ 				goto drop;
+ 			}
+ 		} else {
+@@ -115,7 +115,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 				   (addr_type & IPV6_ADDR_MULTICAST)) {
+ 				proto = htons(ETH_P_MPLS_MC);
+ 			} else {
+-				bareudp->dev->stats.rx_dropped++;
++				DEV_STATS_INC(bareudp->dev, rx_dropped);
+ 				goto drop;
+ 			}
+ 		}
+@@ -127,12 +127,12 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 				 proto,
+ 				 !net_eq(bareudp->net,
+ 				 dev_net(bareudp->dev)))) {
+-		bareudp->dev->stats.rx_dropped++;
++		DEV_STATS_INC(bareudp->dev, rx_dropped);
+ 		goto drop;
+ 	}
+ 	tun_dst = udp_tun_rx_dst(skb, family, TUNNEL_KEY, 0, 0);
+ 	if (!tun_dst) {
+-		bareudp->dev->stats.rx_dropped++;
++		DEV_STATS_INC(bareudp->dev, rx_dropped);
+ 		goto drop;
+ 	}
+ 	skb_dst_set(skb, &tun_dst->dst);
+@@ -157,8 +157,8 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 						     &((struct ipv6hdr *)oiph)->saddr);
+ 		}
+ 		if (err > 1) {
+-			++bareudp->dev->stats.rx_frame_errors;
+-			++bareudp->dev->stats.rx_errors;
++			DEV_STATS_INC(bareudp->dev, rx_frame_errors);
++			DEV_STATS_INC(bareudp->dev, rx_errors);
+ 			goto drop;
+ 		}
+ 	}
+@@ -453,11 +453,11 @@ static netdev_tx_t bareudp_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	dev_kfree_skb(skb);
+ 
+ 	if (err == -ELOOP)
+-		dev->stats.collisions++;
++		DEV_STATS_INC(dev, collisions);
+ 	else if (err == -ENETUNREACH)
+-		dev->stats.tx_carrier_errors++;
++		DEV_STATS_INC(dev, tx_carrier_errors);
+ 
+-	dev->stats.tx_errors++;
++	DEV_STATS_INC(dev, tx_errors);
+ 	return NETDEV_TX_OK;
+ }
+ 
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index ffcb04aac9729..88d065718e990 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -755,7 +755,7 @@ static int mcp251x_hw_wake(struct spi_device *spi)
+ 	int ret;
+ 
+ 	/* Force wakeup interrupt to wake device, but don't execute IST */
+-	disable_irq(spi->irq);
++	disable_irq_nosync(spi->irq);
+ 	mcp251x_write_2regs(spi, CANINTE, CANINTE_WAKIE, CANINTF_WAKIF);
+ 
+ 	/* Wait for oscillator startup timer after wake up */
+diff --git a/drivers/net/dsa/vitesse-vsc73xx-core.c b/drivers/net/dsa/vitesse-vsc73xx-core.c
+index 8a21902212e04..7c2780ccf9d6f 100644
+--- a/drivers/net/dsa/vitesse-vsc73xx-core.c
++++ b/drivers/net/dsa/vitesse-vsc73xx-core.c
+@@ -35,7 +35,7 @@
+ #define VSC73XX_BLOCK_ANALYZER	0x2 /* Only subblock 0 */
+ #define VSC73XX_BLOCK_MII	0x3 /* Subblocks 0 and 1 */
+ #define VSC73XX_BLOCK_MEMINIT	0x3 /* Only subblock 2 */
+-#define VSC73XX_BLOCK_CAPTURE	0x4 /* Only subblock 2 */
++#define VSC73XX_BLOCK_CAPTURE	0x4 /* Subblocks 0-4, 6, 7 */
+ #define VSC73XX_BLOCK_ARBITER	0x5 /* Only subblock 0 */
+ #define VSC73XX_BLOCK_SYSTEM	0x7 /* Only subblock 0 */
+ 
+@@ -371,13 +371,19 @@ int vsc73xx_is_addr_valid(u8 block, u8 subblock)
+ 		break;
+ 
+ 	case VSC73XX_BLOCK_MII:
+-	case VSC73XX_BLOCK_CAPTURE:
+ 	case VSC73XX_BLOCK_ARBITER:
+ 		switch (subblock) {
+ 		case 0 ... 1:
+ 			return 1;
+ 		}
+ 		break;
++	case VSC73XX_BLOCK_CAPTURE:
++		switch (subblock) {
++		case 0 ... 4:
++		case 6 ... 7:
++			return 1;
++		}
++		break;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index cb7c028b1bf5a..90bd5583ac347 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -908,14 +908,18 @@ static inline void dpaa_setup_egress(const struct dpaa_priv *priv,
+ 	}
+ }
+ 
+-static void dpaa_fq_setup(struct dpaa_priv *priv,
+-			  const struct dpaa_fq_cbs *fq_cbs,
+-			  struct fman_port *tx_port)
++static int dpaa_fq_setup(struct dpaa_priv *priv,
++			 const struct dpaa_fq_cbs *fq_cbs,
++			 struct fman_port *tx_port)
+ {
+ 	int egress_cnt = 0, conf_cnt = 0, num_portals = 0, portal_cnt = 0, cpu;
+ 	const cpumask_t *affine_cpus = qman_affine_cpus();
+-	u16 channels[NR_CPUS];
+ 	struct dpaa_fq *fq;
++	u16 *channels;
++
++	channels = kcalloc(num_possible_cpus(), sizeof(u16), GFP_KERNEL);
++	if (!channels)
++		return -ENOMEM;
+ 
+ 	for_each_cpu_and(cpu, affine_cpus, cpu_online_mask)
+ 		channels[num_portals++] = qman_affine_channel(cpu);
+@@ -974,6 +978,10 @@ static void dpaa_fq_setup(struct dpaa_priv *priv,
+ 				break;
+ 		}
+ 	}
++
++	kfree(channels);
++
++	return 0;
+ }
+ 
+ static inline int dpaa_tx_fq_to_id(const struct dpaa_priv *priv,
+@@ -3015,7 +3023,9 @@ static int dpaa_eth_probe(struct platform_device *pdev)
+ 	 */
+ 	dpaa_eth_add_channel(priv->channel, &pdev->dev);
+ 
+-	dpaa_fq_setup(priv, &dpaa_fq_cbs, priv->mac_dev->port[TX]);
++	err = dpaa_fq_setup(priv, &dpaa_fq_cbs, priv->mac_dev->port[TX]);
++	if (err)
++		goto free_dpaa_bps;
+ 
+ 	/* Create a congestion group for this netdev, with
+ 	 * dynamically-allocated CGR ID.
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+index 2f9075429c43e..d8cb0b99684ad 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+@@ -537,12 +537,16 @@ static int dpaa_set_coalesce(struct net_device *dev,
+ 			     struct ethtool_coalesce *c)
+ {
+ 	const cpumask_t *cpus = qman_affine_cpus();
+-	bool needs_revert[NR_CPUS] = {false};
+ 	struct qman_portal *portal;
+ 	u32 period, prev_period;
+ 	u8 thresh, prev_thresh;
++	bool *needs_revert;
+ 	int cpu, res;
+ 
++	needs_revert = kcalloc(num_possible_cpus(), sizeof(bool), GFP_KERNEL);
++	if (!needs_revert)
++		return -ENOMEM;
++
+ 	period = c->rx_coalesce_usecs;
+ 	thresh = c->rx_max_coalesced_frames;
+ 
+@@ -565,6 +569,8 @@ static int dpaa_set_coalesce(struct net_device *dev,
+ 		needs_revert[cpu] = true;
+ 	}
+ 
++	kfree(needs_revert);
++
+ 	return 0;
+ 
+ revert_values:
+@@ -578,6 +584,8 @@ static int dpaa_set_coalesce(struct net_device *dev,
+ 		qman_dqrr_set_ithresh(portal, prev_thresh);
+ 	}
+ 
++	kfree(needs_revert);
++
+ 	return res;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 0848613c3f45a..e2c38e5232dc2 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -6805,10 +6805,20 @@ static void igb_extts(struct igb_adapter *adapter, int tsintr_tt)
+ 
+ static void igb_tsync_interrupt(struct igb_adapter *adapter)
+ {
++	const u32 mask = (TSINTR_SYS_WRAP | E1000_TSICR_TXTS |
++			  TSINTR_TT0 | TSINTR_TT1 |
++			  TSINTR_AUTT0 | TSINTR_AUTT1);
+ 	struct e1000_hw *hw = &adapter->hw;
+ 	u32 tsicr = rd32(E1000_TSICR);
+ 	struct ptp_clock_event event;
+ 
++	if (hw->mac.type == e1000_82580) {
++		/* 82580 has a hardware bug that requires an explicit
++		 * write to clear the TimeSync interrupt cause.
++		 */
++		wr32(E1000_TSICR, tsicr & mask);
++	}
++
+ 	if (tsicr & TSINTR_SYS_WRAP) {
+ 		event.type = PTP_CLOCK_PPS;
+ 		if (adapter->ptp_caps.pps)
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 631ce793fb2ec..65cf7035b02d5 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -5740,6 +5740,7 @@ static void igc_io_resume(struct pci_dev *pdev)
+ 	rtnl_lock();
+ 	if (netif_running(netdev)) {
+ 		if (igc_open(netdev)) {
++			rtnl_unlock();
+ 			netdev_err(netdev, "igc_open failed after reset\n");
+ 			return;
+ 		}
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 324ef6990e9a7..f0c48f20d086d 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -210,7 +210,7 @@ static int ionic_request_irq(struct ionic_lif *lif, struct ionic_qcq *qcq)
+ 		name = dev_name(dev);
+ 
+ 	snprintf(intr->name, sizeof(intr->name),
+-		 "%s-%s-%s", IONIC_DRV_NAME, name, q->name);
++		 "%.5s-%.16s-%.8s", IONIC_DRV_NAME, name, q->name);
+ 
+ 	return devm_request_irq(dev, intr->vector, ionic_isr,
+ 				0, intr->name, &qcq->napi);
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index af35361a3dcee..08b479f04ed06 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -528,18 +528,15 @@ static struct sk_buff *geneve_gro_receive(struct sock *sk,
+ 
+ 	type = gh->proto_type;
+ 
+-	rcu_read_lock();
+ 	ptype = gro_find_receive_by_type(type);
+ 	if (!ptype)
+-		goto out_unlock;
++		goto out;
+ 
+ 	skb_gro_pull(skb, gh_len);
+ 	skb_gro_postpull_rcsum(skb, gh, gh_len);
+ 	pp = call_gro_receive(ptype->callbacks.gro_receive, head, skb);
+ 	flush = 0;
+ 
+-out_unlock:
+-	rcu_read_unlock();
+ out:
+ 	skb_gro_flush_final(skb, pp, flush);
+ 
+@@ -559,13 +556,10 @@ static int geneve_gro_complete(struct sock *sk, struct sk_buff *skb,
+ 	gh_len = geneve_hlen(gh);
+ 	type = gh->proto_type;
+ 
+-	rcu_read_lock();
+ 	ptype = gro_find_complete_by_type(type);
+ 	if (ptype)
+ 		err = ptype->callbacks.gro_complete(skb, nhoff + gh_len);
+ 
+-	rcu_read_unlock();
+-
+ 	skb_set_inner_mac_header(skb, nhoff + gh_len);
+ 
+ 	return err;
+diff --git a/drivers/net/usb/ch9200.c b/drivers/net/usb/ch9200.c
+index d7f3b70d54775..f69d9b902da04 100644
+--- a/drivers/net/usb/ch9200.c
++++ b/drivers/net/usb/ch9200.c
+@@ -336,6 +336,7 @@ static int ch9200_bind(struct usbnet *dev, struct usb_interface *intf)
+ {
+ 	int retval = 0;
+ 	unsigned char data[2];
++	u8 addr[ETH_ALEN];
+ 
+ 	retval = usbnet_get_endpoints(dev, intf);
+ 	if (retval)
+@@ -383,7 +384,8 @@ static int ch9200_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	retval = control_write(dev, REQUEST_WRITE, 0, MAC_REG_CTRL, data, 0x02,
+ 			       CONTROL_TIMEOUT_MS);
+ 
+-	retval = get_mac_address(dev, dev->net->dev_addr);
++	retval = get_mac_address(dev, addr);
++	eth_hw_addr_set(dev->net, addr);
+ 
+ 	return retval;
+ }
+diff --git a/drivers/net/usb/cx82310_eth.c b/drivers/net/usb/cx82310_eth.c
+index c4568a491dc4d..79a47e2fd4378 100644
+--- a/drivers/net/usb/cx82310_eth.c
++++ b/drivers/net/usb/cx82310_eth.c
+@@ -146,6 +146,7 @@ static int cx82310_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	u8 link[3];
+ 	int timeout = 50;
+ 	struct cx82310_priv *priv;
++	u8 addr[ETH_ALEN];
+ 
+ 	/* avoid ADSL modems - continue only if iProduct is "USB NET CARD" */
+ 	if (usb_string(udev, udev->descriptor.iProduct, buf, sizeof(buf)) > 0
+@@ -202,12 +203,12 @@ static int cx82310_bind(struct usbnet *dev, struct usb_interface *intf)
+ 		goto err;
+ 
+ 	/* get the MAC address */
+-	ret = cx82310_cmd(dev, CMD_GET_MAC_ADDR, true, NULL, 0,
+-			  dev->net->dev_addr, ETH_ALEN);
++	ret = cx82310_cmd(dev, CMD_GET_MAC_ADDR, true, NULL, 0, addr, ETH_ALEN);
+ 	if (ret) {
+ 		netdev_err(dev->net, "unable to read MAC address: %d\n", ret);
+ 		goto err;
+ 	}
++	eth_hw_addr_set(dev->net, addr);
+ 
+ 	/* start (does not seem to have any effect?) */
+ 	ret = cx82310_cmd(dev, CMD_START, false, NULL, 0, NULL, 0);
+diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c
+index 06d9f19ca142a..4485388dcff2e 100644
+--- a/drivers/net/usb/ipheth.c
++++ b/drivers/net/usb/ipheth.c
+@@ -353,8 +353,8 @@ static int ipheth_close(struct net_device *net)
+ {
+ 	struct ipheth_device *dev = netdev_priv(net);
+ 
+-	cancel_delayed_work_sync(&dev->carrier_work);
+ 	netif_stop_queue(net);
++	cancel_delayed_work_sync(&dev->carrier_work);
+ 	return 0;
+ }
+ 
+@@ -443,7 +443,7 @@ static int ipheth_probe(struct usb_interface *intf,
+ 
+ 	netdev->netdev_ops = &ipheth_netdev_ops;
+ 	netdev->watchdog_timeo = IPHETH_TX_TIMEOUT;
+-	strcpy(netdev->name, "eth%d");
++	strscpy(netdev->name, "eth%d", sizeof(netdev->name));
+ 
+ 	dev = netdev_priv(netdev);
+ 	dev->udev = udev;
+diff --git a/drivers/net/usb/kaweth.c b/drivers/net/usb/kaweth.c
+index 144c686b43330..9b2bc1993ece2 100644
+--- a/drivers/net/usb/kaweth.c
++++ b/drivers/net/usb/kaweth.c
+@@ -1044,8 +1044,7 @@ static int kaweth_probe(
+ 		goto err_all_but_rxbuf;
+ 
+ 	memcpy(netdev->broadcast, &bcast_addr, sizeof(bcast_addr));
+-	memcpy(netdev->dev_addr, &kaweth->configuration.hw_addr,
+-               sizeof(kaweth->configuration.hw_addr));
++	eth_hw_addr_set(netdev, (u8 *)&kaweth->configuration.hw_addr);
+ 
+ 	netdev->netdev_ops = &kaweth_netdev_ops;
+ 	netdev->watchdog_timeo = KAWETH_TX_TIMEOUT;
+diff --git a/drivers/net/usb/mcs7830.c b/drivers/net/usb/mcs7830.c
+index 7e40e2e2f3723..57281296ba2ca 100644
+--- a/drivers/net/usb/mcs7830.c
++++ b/drivers/net/usb/mcs7830.c
+@@ -480,17 +480,19 @@ static const struct net_device_ops mcs7830_netdev_ops = {
+ static int mcs7830_bind(struct usbnet *dev, struct usb_interface *udev)
+ {
+ 	struct net_device *net = dev->net;
++	u8 addr[ETH_ALEN];
+ 	int ret;
+ 	int retry;
+ 
+ 	/* Initial startup: Gather MAC address setting from EEPROM */
+ 	ret = -EINVAL;
+ 	for (retry = 0; retry < 5 && ret; retry++)
+-		ret = mcs7830_hif_get_mac_address(dev, net->dev_addr);
++		ret = mcs7830_hif_get_mac_address(dev, addr);
+ 	if (ret) {
+ 		dev_warn(&dev->udev->dev, "Cannot read MAC address\n");
+ 		goto out;
+ 	}
++	eth_hw_addr_set(net, addr);
+ 
+ 	mcs7830_data_set_multicast(net);
+ 
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 47cc54a64b56d..0a1ab8c30a003 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1365,6 +1365,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x2692, 0x9025, 4)},    /* Cellient MPL200 (rebranded Qualcomm 05c6:9025) */
+ 	{QMI_QUIRK_SET_DTR(0x1546, 0x1342, 4)},	/* u-blox LARA-L6 */
+ 	{QMI_QUIRK_SET_DTR(0x33f8, 0x0104, 4)}, /* Rolling RW101 RMNET */
++	{QMI_FIXED_INTF(0x2dee, 0x4d22, 5)},    /* MeiG Smart SRM825L */
+ 
+ 	/* 4. Gobi 1000 devices */
+ 	{QMI_GOBI1K_DEVICE(0x05c6, 0x9212)},	/* Acer Gobi Modem Device */
+diff --git a/drivers/net/usb/sierra_net.c b/drivers/net/usb/sierra_net.c
+index 0abd257b634c6..777f672f288cb 100644
+--- a/drivers/net/usb/sierra_net.c
++++ b/drivers/net/usb/sierra_net.c
+@@ -669,6 +669,7 @@ static int sierra_net_bind(struct usbnet *dev, struct usb_interface *intf)
+ 		0x00, 0x00, SIERRA_NET_HIP_MSYNC_ID, 0x00};
+ 	static const u8 shdwn_tmplate[sizeof(priv->shdwn_msg)] = {
+ 		0x00, 0x00, SIERRA_NET_HIP_SHUTD_ID, 0x00};
++	u8 mod[2];
+ 
+ 	dev_dbg(&dev->udev->dev, "%s", __func__);
+ 
+@@ -698,8 +699,9 @@ static int sierra_net_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	dev->net->netdev_ops = &sierra_net_device_ops;
+ 
+ 	/* change MAC addr to include, ifacenum, and to be unique */
+-	dev->net->dev_addr[ETH_ALEN-2] = atomic_inc_return(&iface_counter);
+-	dev->net->dev_addr[ETH_ALEN-1] = ifacenum;
++	mod[0] = atomic_inc_return(&iface_counter);
++	mod[1] = ifacenum;
++	dev_addr_mod(dev->net, ETH_ALEN - 2, mod, 2);
+ 
+ 	/* prepare shutdown message template */
+ 	memcpy(priv->shdwn_msg, shdwn_tmplate, sizeof(priv->shdwn_msg));
+diff --git a/drivers/net/usb/sr9700.c b/drivers/net/usb/sr9700.c
+index 8d2e3daf03cf2..1ec11a08820d4 100644
+--- a/drivers/net/usb/sr9700.c
++++ b/drivers/net/usb/sr9700.c
+@@ -326,6 +326,7 @@ static int sr9700_bind(struct usbnet *dev, struct usb_interface *intf)
+ {
+ 	struct net_device *netdev;
+ 	struct mii_if_info *mii;
++	u8 addr[ETH_ALEN];
+ 	int ret;
+ 
+ 	ret = usbnet_get_endpoints(dev, intf);
+@@ -356,11 +357,12 @@ static int sr9700_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	 * EEPROM automatically to PAR. In case there is no EEPROM externally,
+ 	 * a default MAC address is stored in PAR for making chip work properly.
+ 	 */
+-	if (sr_read(dev, SR_PAR, ETH_ALEN, netdev->dev_addr) < 0) {
++	if (sr_read(dev, SR_PAR, ETH_ALEN, addr) < 0) {
+ 		netdev_err(netdev, "Error reading MAC address\n");
+ 		ret = -ENODEV;
+ 		goto out;
+ 	}
++	eth_hw_addr_set(netdev, addr);
+ 
+ 	/* power up and reset phy */
+ 	sr_write_reg(dev, SR_PRR, PRR_PHY_RST);
+diff --git a/drivers/net/usb/sr9800.c b/drivers/net/usb/sr9800.c
+index a5332e99102a5..351e0edcda2af 100644
+--- a/drivers/net/usb/sr9800.c
++++ b/drivers/net/usb/sr9800.c
+@@ -731,6 +731,7 @@ static int sr9800_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	struct sr_data *data = (struct sr_data *)&dev->data;
+ 	u16 led01_mux, led23_mux;
+ 	int ret, embd_phy;
++	u8 addr[ETH_ALEN];
+ 	u32 phyid;
+ 	u16 rx_ctl;
+ 
+@@ -756,12 +757,12 @@ static int sr9800_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	}
+ 
+ 	/* Get the MAC address */
+-	ret = sr_read_cmd(dev, SR_CMD_READ_NODE_ID, 0, 0, ETH_ALEN,
+-			  dev->net->dev_addr);
++	ret = sr_read_cmd(dev, SR_CMD_READ_NODE_ID, 0, 0, ETH_ALEN, addr);
+ 	if (ret < 0) {
+ 		netdev_dbg(dev->net, "Failed to read MAC address: %d\n", ret);
+ 		return ret;
+ 	}
++	eth_hw_addr_set(dev->net, addr);
+ 	netdev_dbg(dev->net, "mac addr : %pM\n", dev->net->dev_addr);
+ 
+ 	/* Initialize MII structure */
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 481a41d879b53..669cd20cfe00a 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -64,9 +64,6 @@
+ 
+ /*-------------------------------------------------------------------------*/
+ 
+-// randomly generated ethernet address
+-static u8	node_id [ETH_ALEN];
+-
+ /* use ethtool to change the level for any given device */
+ static int msg_level = -1;
+ module_param (msg_level, int, 0);
+@@ -148,12 +145,13 @@ EXPORT_SYMBOL_GPL(usbnet_get_endpoints);
+ 
+ int usbnet_get_ethernet_addr(struct usbnet *dev, int iMACAddress)
+ {
++	u8		addr[ETH_ALEN];
+ 	int 		tmp = -1, ret;
+ 	unsigned char	buf [13];
+ 
+ 	ret = usb_string(dev->udev, iMACAddress, buf, sizeof buf);
+ 	if (ret == 12)
+-		tmp = hex2bin(dev->net->dev_addr, buf, 6);
++		tmp = hex2bin(addr, buf, 6);
+ 	if (tmp < 0) {
+ 		dev_dbg(&dev->udev->dev,
+ 			"bad MAC string %d fetch, %d\n", iMACAddress, tmp);
+@@ -161,6 +159,7 @@ int usbnet_get_ethernet_addr(struct usbnet *dev, int iMACAddress)
+ 			ret = -EINVAL;
+ 		return ret;
+ 	}
++	eth_hw_addr_set(dev->net, addr);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(usbnet_get_ethernet_addr);
+@@ -1693,8 +1692,7 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ 	dev->interrupt_count = 0;
+ 
+ 	dev->net = net;
+-	strcpy (net->name, "usb%d");
+-	memcpy (net->dev_addr, node_id, sizeof node_id);
++	strscpy(net->name, "usb%d", sizeof(net->name));
+ 
+ 	/* rx and tx sides can use different message sizes;
+ 	 * bind() should set rx_urb_size in that case.
+@@ -1720,13 +1718,13 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ 		if ((dev->driver_info->flags & FLAG_ETHER) != 0 &&
+ 		    ((dev->driver_info->flags & FLAG_POINTTOPOINT) == 0 ||
+ 		     (net->dev_addr [0] & 0x02) == 0))
+-			strcpy (net->name, "eth%d");
++			strscpy(net->name, "eth%d", sizeof(net->name));
+ 		/* WLAN devices should always be named "wlan%d" */
+ 		if ((dev->driver_info->flags & FLAG_WLAN) != 0)
+-			strcpy(net->name, "wlan%d");
++			strscpy(net->name, "wlan%d", sizeof(net->name));
+ 		/* WWAN devices should always be named "wwan%d" */
+ 		if ((dev->driver_info->flags & FLAG_WWAN) != 0)
+-			strcpy(net->name, "wwan%d");
++			strscpy(net->name, "wwan%d", sizeof(net->name));
+ 
+ 		/* devices that cannot do ARP */
+ 		if ((dev->driver_info->flags & FLAG_NOARP) != 0)
+@@ -1768,9 +1766,9 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ 		goto out4;
+ 	}
+ 
+-	/* let userspace know we have a random address */
+-	if (ether_addr_equal(net->dev_addr, node_id))
+-		net->addr_assign_type = NET_ADDR_RANDOM;
++	/* this flags the device for user space */
++	if (!is_valid_ether_addr(net->dev_addr))
++		eth_hw_addr_random(net);
+ 
+ 	if ((dev->driver_info->flags & FLAG_WLAN) != 0)
+ 		SET_NETDEV_DEVTYPE(net, &wlan_type);
+@@ -2180,7 +2178,6 @@ static int __init usbnet_init(void)
+ 	BUILD_BUG_ON(
+ 		sizeof_field(struct sk_buff, cb) < sizeof(struct skb_data));
+ 
+-	eth_random_addr(node_id);
+ 	return 0;
+ }
+ module_init(usbnet_init);
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index f7ed99561c192..99dea89b26788 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -1497,7 +1497,7 @@ static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
+ 		return false;
+ }
+ 
+-static void virtnet_poll_cleantx(struct receive_queue *rq)
++static void virtnet_poll_cleantx(struct receive_queue *rq, int budget)
+ {
+ 	struct virtnet_info *vi = rq->vq->vdev->priv;
+ 	unsigned int index = vq2rxq(rq->vq);
+@@ -1508,7 +1508,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
+ 		return;
+ 
+ 	if (__netif_tx_trylock(txq)) {
+-		free_old_xmit_skbs(sq, true);
++		free_old_xmit_skbs(sq, !!budget);
+ 		__netif_tx_unlock(txq);
+ 	}
+ 
+@@ -1525,7 +1525,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
+ 	unsigned int received;
+ 	unsigned int xdp_xmit = 0;
+ 
+-	virtnet_poll_cleantx(rq);
++	virtnet_poll_cleantx(rq, budget);
+ 
+ 	received = virtnet_receive(rq, budget, &xdp_xmit);
+ 
+@@ -1598,7 +1598,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
+ 	txq = netdev_get_tx_queue(vi->dev, index);
+ 	__netif_tx_lock(txq, raw_smp_processor_id());
+ 	virtqueue_disable_cb(sq->vq);
+-	free_old_xmit_skbs(sq, true);
++	free_old_xmit_skbs(sq, !!budget);
+ 
+ 	opaque = virtqueue_enable_cb_prepare(sq->vq);
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
+index fb76b4a69a059..ad3893d450583 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
+@@ -1089,6 +1089,7 @@ static int ieee_hw_init(struct ieee80211_hw *hw)
+ 	ieee80211_hw_set(hw, AMPDU_AGGREGATION);
+ 	ieee80211_hw_set(hw, SIGNAL_DBM);
+ 	ieee80211_hw_set(hw, REPORTS_TX_ACK_STATUS);
++	ieee80211_hw_set(hw, MFP_CAPABLE);
+ 
+ 	hw->extra_tx_headroom = brcms_c_get_header_len();
+ 	hw->queues = N_TX_QUEUES;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c b/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
+index 24d6ed3513ce5..c09a736f87e68 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
+@@ -275,8 +275,7 @@ static ssize_t iwl_dbgfs_send_hcmd_write(struct iwl_fw_runtime *fwrt, char *buf,
+ 		.data = { NULL, },
+ 	};
+ 
+-	if (fwrt->ops && fwrt->ops->fw_running &&
+-	    !fwrt->ops->fw_running(fwrt->ops_ctx))
++	if (!iwl_trans_fw_running(fwrt->trans))
+ 		return -EIO;
+ 
+ 	if (count < header_size + 1 || count > 1024 * 4)
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
+index cddcb4d9a264c..79ab8ef78f67a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
+@@ -72,7 +72,6 @@
+ struct iwl_fw_runtime_ops {
+ 	int (*dump_start)(void *ctx);
+ 	void (*dump_end)(void *ctx);
+-	bool (*fw_running)(void *ctx);
+ 	int (*send_hcmd)(void *ctx, struct iwl_host_cmd *host_cmd);
+ 	bool (*d3_debug_enable)(void *ctx);
+ };
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 3548eb57f1f30..9b1a1455a7d51 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -577,11 +577,6 @@ static void iwl_mvm_fwrt_dump_end(void *ctx)
+ 	mutex_unlock(&mvm->mutex);
+ }
+ 
+-static bool iwl_mvm_fwrt_fw_running(void *ctx)
+-{
+-	return iwl_mvm_firmware_running(ctx);
+-}
+-
+ static int iwl_mvm_fwrt_send_hcmd(void *ctx, struct iwl_host_cmd *host_cmd)
+ {
+ 	struct iwl_mvm *mvm = (struct iwl_mvm *)ctx;
+@@ -602,7 +597,6 @@ static bool iwl_mvm_d3_debug_enable(void *ctx)
+ static const struct iwl_fw_runtime_ops iwl_mvm_fwrt_ops = {
+ 	.dump_start = iwl_mvm_fwrt_dump_start,
+ 	.dump_end = iwl_mvm_fwrt_dump_end,
+-	.fw_running = iwl_mvm_fwrt_fw_running,
+ 	.send_hcmd = iwl_mvm_fwrt_send_hcmd,
+ 	.d3_debug_enable = iwl_mvm_d3_debug_enable,
+ };
+diff --git a/drivers/net/wireless/marvell/mwifiex/main.h b/drivers/net/wireless/marvell/mwifiex/main.h
+index f4e3dce10d654..5b14fe08811e8 100644
+--- a/drivers/net/wireless/marvell/mwifiex/main.h
++++ b/drivers/net/wireless/marvell/mwifiex/main.h
+@@ -1310,6 +1310,9 @@ mwifiex_get_priv_by_id(struct mwifiex_adapter *adapter,
+ 
+ 	for (i = 0; i < adapter->priv_num; i++) {
+ 		if (adapter->priv[i]) {
++			if (adapter->priv[i]->bss_mode == NL80211_IFTYPE_UNSPECIFIED)
++				continue;
++
+ 			if ((adapter->priv[i]->bss_num == bss_num) &&
+ 			    (adapter->priv[i]->bss_type == bss_type))
+ 				break;
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index e493fc709065a..5655f6d81cc09 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1787,8 +1787,10 @@ static u16 nvmet_tcp_install_queue(struct nvmet_sq *sq)
+ 	}
+ 
+ 	queue->nr_cmds = sq->size * 2;
+-	if (nvmet_tcp_alloc_cmds(queue))
++	if (nvmet_tcp_alloc_cmds(queue)) {
++		queue->nr_cmds = 0;
+ 		return NVME_SC_INTERNAL;
++	}
+ 	return 0;
+ }
+ 
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 1505c745154e7..45a10c15186be 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -962,13 +962,13 @@ void nvmem_device_put(struct nvmem_device *nvmem)
+ EXPORT_SYMBOL_GPL(nvmem_device_put);
+ 
+ /**
+- * devm_nvmem_device_get() - Get nvmem cell of device form a given id
++ * devm_nvmem_device_get() - Get nvmem device of device form a given id
+  *
+  * @dev: Device that requests the nvmem device.
+  * @id: name id for the requested nvmem device.
+  *
+- * Return: ERR_PTR() on error or a valid pointer to a struct nvmem_cell
+- * on success.  The nvmem_cell will be freed by the automatically once the
++ * Return: ERR_PTR() on error or a valid pointer to a struct nvmem_device
++ * on success.  The nvmem_device will be freed by the automatically once the
+  * device is freed.
+  */
+ struct nvmem_device *devm_nvmem_device_get(struct device *dev, const char *id)
+diff --git a/drivers/of/irq.c b/drivers/of/irq.c
+index 352e14b007e78..ad0cb49e233ac 100644
+--- a/drivers/of/irq.c
++++ b/drivers/of/irq.c
+@@ -288,7 +288,8 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar
+ 	struct device_node *p;
+ 	const __be32 *addr;
+ 	u32 intsize;
+-	int i, res;
++	int i, res, addr_len;
++	__be32 addr_buf[3] = { 0 };
+ 
+ 	pr_debug("of_irq_parse_one: dev=%pOF, index=%d\n", device, index);
+ 
+@@ -297,13 +298,19 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar
+ 		return of_irq_parse_oldworld(device, index, out_irq);
+ 
+ 	/* Get the reg property (if any) */
+-	addr = of_get_property(device, "reg", NULL);
++	addr = of_get_property(device, "reg", &addr_len);
++
++	/* Prevent out-of-bounds read in case of longer interrupt parent address size */
++	if (addr_len > (3 * sizeof(__be32)))
++		addr_len = 3 * sizeof(__be32);
++	if (addr)
++		memcpy(addr_buf, addr, addr_len);
+ 
+ 	/* Try the new-style interrupts-extended first */
+ 	res = of_parse_phandle_with_args(device, "interrupts-extended",
+ 					"#interrupt-cells", index, out_irq);
+ 	if (!res)
+-		return of_irq_parse_raw(addr, out_irq);
++		return of_irq_parse_raw(addr_buf, out_irq);
+ 
+ 	/* Look for the interrupt parent. */
+ 	p = of_irq_find_parent(device);
+@@ -333,7 +340,7 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar
+ 
+ 
+ 	/* Check if there are any interrupt-map translations to process */
+-	res = of_irq_parse_raw(addr, out_irq);
++	res = of_irq_parse_raw(addr_buf, out_irq);
+  out:
+ 	of_node_put(p);
+ 	return res;
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index d3c3ca3ef4bae..0b49bdf149a69 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -35,6 +35,11 @@
+ #define PCIE_DEVICEID_SHIFT	16
+ 
+ /* Application registers */
++#define PID				0x000
++#define RTL				GENMASK(15, 11)
++#define RTL_SHIFT			11
++#define AM6_PCI_PG1_RTL_VER		0x15
++
+ #define CMD_STATUS			0x004
+ #define LTSSM_EN_VAL		        BIT(0)
+ #define OB_XLAT_EN_VAL		        BIT(1)
+@@ -105,6 +110,8 @@
+ 
+ #define to_keystone_pcie(x)		dev_get_drvdata((x)->dev)
+ 
++#define PCI_DEVICE_ID_TI_AM654X		0xb00c
++
+ struct ks_pcie_of_data {
+ 	enum dw_pcie_device_mode mode;
+ 	const struct dw_pcie_host_ops *host_ops;
+@@ -537,7 +544,11 @@ static int ks_pcie_start_link(struct dw_pcie *pci)
+ static void ks_pcie_quirk(struct pci_dev *dev)
+ {
+ 	struct pci_bus *bus = dev->bus;
++	struct keystone_pcie *ks_pcie;
++	struct device *bridge_dev;
+ 	struct pci_dev *bridge;
++	u32 val;
++
+ 	static const struct pci_device_id rc_pci_devids[] = {
+ 		{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK),
+ 		 .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
+@@ -549,6 +560,11 @@ static void ks_pcie_quirk(struct pci_dev *dev)
+ 		 .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
+ 		{ 0, },
+ 	};
++	static const struct pci_device_id am6_pci_devids[] = {
++		{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM654X),
++		 .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
++		{ 0, },
++	};
+ 
+ 	if (pci_is_root_bus(bus))
+ 		bridge = dev;
+@@ -570,10 +586,36 @@ static void ks_pcie_quirk(struct pci_dev *dev)
+ 	 */
+ 	if (pci_match_id(rc_pci_devids, bridge)) {
+ 		if (pcie_get_readrq(dev) > 256) {
+-			dev_info(&dev->dev, "limiting MRRS to 256\n");
++			dev_info(&dev->dev, "limiting MRRS to 256 bytes\n");
+ 			pcie_set_readrq(dev, 256);
+ 		}
+ 	}
++
++	/*
++	 * Memory transactions fail with PCI controller in AM654 PG1.0
++	 * when MRRS is set to more than 128 bytes. Force the MRRS to
++	 * 128 bytes in all downstream devices.
++	 */
++	if (pci_match_id(am6_pci_devids, bridge)) {
++		bridge_dev = pci_get_host_bridge_device(dev);
++		if (!bridge_dev && !bridge_dev->parent)
++			return;
++
++		ks_pcie = dev_get_drvdata(bridge_dev->parent);
++		if (!ks_pcie)
++			return;
++
++		val = ks_pcie_app_readl(ks_pcie, PID);
++		val &= RTL;
++		val >>= RTL_SHIFT;
++		if (val != AM6_PCI_PG1_RTL_VER)
++			return;
++
++		if (pcie_get_readrq(dev) > 128) {
++			dev_info(&dev->dev, "limiting MRRS to 128 bytes\n");
++			pcie_set_readrq(dev, 128);
++		}
++	}
+ }
+ DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, ks_pcie_quirk);
+ 
+diff --git a/drivers/pci/controller/dwc/pcie-al.c b/drivers/pci/controller/dwc/pcie-al.c
+index f973fbca90cf7..ac772fb11aa73 100644
+--- a/drivers/pci/controller/dwc/pcie-al.c
++++ b/drivers/pci/controller/dwc/pcie-al.c
+@@ -250,18 +250,24 @@ static struct pci_ops al_child_pci_ops = {
+ 	.write = pci_generic_config_write,
+ };
+ 
+-static void al_pcie_config_prepare(struct al_pcie *pcie)
++static int al_pcie_config_prepare(struct al_pcie *pcie)
+ {
+ 	struct al_pcie_target_bus_cfg *target_bus_cfg;
+ 	struct pcie_port *pp = &pcie->pci->pp;
+ 	unsigned int ecam_bus_mask;
++	struct resource_entry *ft;
+ 	u32 cfg_control_offset;
++	struct resource *bus;
+ 	u8 subordinate_bus;
+ 	u8 secondary_bus;
+ 	u32 cfg_control;
+ 	u32 reg;
+-	struct resource *bus = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS)->res;
+ 
++	ft = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS);
++	if (!ft)
++		return -ENODEV;
++
++	bus = ft->res;
+ 	target_bus_cfg = &pcie->target_bus_cfg;
+ 
+ 	ecam_bus_mask = (pcie->ecam_size >> 20) - 1;
+@@ -295,6 +301,8 @@ static void al_pcie_config_prepare(struct al_pcie *pcie)
+ 	       FIELD_PREP(CFG_CONTROL_SEC_BUS_MASK, secondary_bus);
+ 
+ 	al_pcie_controller_writel(pcie, cfg_control_offset, reg);
++
++	return 0;
+ }
+ 
+ static int al_pcie_host_init(struct pcie_port *pp)
+@@ -313,7 +321,9 @@ static int al_pcie_host_init(struct pcie_port *pp)
+ 	if (rc)
+ 		return rc;
+ 
+-	al_pcie_config_prepare(pcie);
++	rc = al_pcie_config_prepare(pcie);
++	if (rc)
++		return rc;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c
+index 04565162a4495..cf9c0e75f0be4 100644
+--- a/drivers/pci/hotplug/pnv_php.c
++++ b/drivers/pci/hotplug/pnv_php.c
+@@ -38,7 +38,6 @@ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
+ 				bool disable_device)
+ {
+ 	struct pci_dev *pdev = php_slot->pdev;
+-	int irq = php_slot->irq;
+ 	u16 ctrl;
+ 
+ 	if (php_slot->irq > 0) {
+@@ -57,7 +56,7 @@ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
+ 		php_slot->wq = NULL;
+ 	}
+ 
+-	if (disable_device || irq > 0) {
++	if (disable_device) {
+ 		if (pdev->msix_enabled)
+ 			pci_disable_msix(pdev);
+ 		else if (pdev->msi_enabled)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 09d5fa637b984..800df0f1417d8 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -5260,10 +5260,12 @@ static void pci_bus_lock(struct pci_bus *bus)
+ {
+ 	struct pci_dev *dev;
+ 
++	pci_dev_lock(bus->self);
+ 	list_for_each_entry(dev, &bus->devices, bus_list) {
+-		pci_dev_lock(dev);
+ 		if (dev->subordinate)
+ 			pci_bus_lock(dev->subordinate);
++		else
++			pci_dev_lock(dev);
+ 	}
+ }
+ 
+@@ -5275,8 +5277,10 @@ static void pci_bus_unlock(struct pci_bus *bus)
+ 	list_for_each_entry(dev, &bus->devices, bus_list) {
+ 		if (dev->subordinate)
+ 			pci_bus_unlock(dev->subordinate);
+-		pci_dev_unlock(dev);
++		else
++			pci_dev_unlock(dev);
+ 	}
++	pci_dev_unlock(bus->self);
+ }
+ 
+ /* Return 1 on successful lock, 0 on contention */
+@@ -5284,15 +5288,15 @@ static int pci_bus_trylock(struct pci_bus *bus)
+ {
+ 	struct pci_dev *dev;
+ 
++	if (!pci_dev_trylock(bus->self))
++		return 0;
++
+ 	list_for_each_entry(dev, &bus->devices, bus_list) {
+-		if (!pci_dev_trylock(dev))
+-			goto unlock;
+ 		if (dev->subordinate) {
+-			if (!pci_bus_trylock(dev->subordinate)) {
+-				pci_dev_unlock(dev);
++			if (!pci_bus_trylock(dev->subordinate))
+ 				goto unlock;
+-			}
+-		}
++		} else if (!pci_dev_trylock(dev))
++			goto unlock;
+ 	}
+ 	return 1;
+ 
+@@ -5300,8 +5304,10 @@ static int pci_bus_trylock(struct pci_bus *bus)
+ 	list_for_each_entry_continue_reverse(dev, &bus->devices, bus_list) {
+ 		if (dev->subordinate)
+ 			pci_bus_unlock(dev->subordinate);
+-		pci_dev_unlock(dev);
++		else
++			pci_dev_unlock(dev);
+ 	}
++	pci_dev_unlock(bus->self);
+ 	return 0;
+ }
+ 
+@@ -5333,9 +5339,10 @@ static void pci_slot_lock(struct pci_slot *slot)
+ 	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
+ 		if (!dev->slot || dev->slot != slot)
+ 			continue;
+-		pci_dev_lock(dev);
+ 		if (dev->subordinate)
+ 			pci_bus_lock(dev->subordinate);
++		else
++			pci_dev_lock(dev);
+ 	}
+ }
+ 
+@@ -5361,14 +5368,13 @@ static int pci_slot_trylock(struct pci_slot *slot)
+ 	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
+ 		if (!dev->slot || dev->slot != slot)
+ 			continue;
+-		if (!pci_dev_trylock(dev))
+-			goto unlock;
+ 		if (dev->subordinate) {
+ 			if (!pci_bus_trylock(dev->subordinate)) {
+ 				pci_dev_unlock(dev);
+ 				goto unlock;
+ 			}
+-		}
++		} else if (!pci_dev_trylock(dev))
++			goto unlock;
+ 	}
+ 	return 1;
+ 
+@@ -5379,7 +5385,8 @@ static int pci_slot_trylock(struct pci_slot *slot)
+ 			continue;
+ 		if (dev->subordinate)
+ 			pci_bus_unlock(dev->subordinate);
+-		pci_dev_unlock(dev);
++		else
++			pci_dev_unlock(dev);
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/pcmcia/yenta_socket.c b/drivers/pcmcia/yenta_socket.c
+index 84bfc0e85d6b9..f15b72c6e57ed 100644
+--- a/drivers/pcmcia/yenta_socket.c
++++ b/drivers/pcmcia/yenta_socket.c
+@@ -636,11 +636,11 @@ static int yenta_search_one_res(struct resource *root, struct resource *res,
+ 		start = PCIBIOS_MIN_CARDBUS_IO;
+ 		end = ~0U;
+ 	} else {
+-		unsigned long avail = root->end - root->start;
++		unsigned long avail = resource_size(root);
+ 		int i;
+ 		size = BRIDGE_MEM_MAX;
+-		if (size > avail/8) {
+-			size = (avail+1)/8;
++		if (size > (avail - 1) / 8) {
++			size = avail / 8;
+ 			/* round size down to next power of 2 */
+ 			i = 0;
+ 			while ((size /= 2) != 0)
+diff --git a/drivers/platform/x86/dell-smbios-base.c b/drivers/platform/x86/dell-smbios-base.c
+index 3a1dbf1994413..98e77cb210b70 100644
+--- a/drivers/platform/x86/dell-smbios-base.c
++++ b/drivers/platform/x86/dell-smbios-base.c
+@@ -610,7 +610,10 @@ static int __init dell_smbios_init(void)
+ 	return 0;
+ 
+ fail_sysfs:
+-	free_group(platform_device);
++	if (!wmi)
++		exit_dell_smbios_wmi();
++	if (!smm)
++		exit_dell_smbios_smm();
+ 
+ fail_create_group:
+ 	platform_device_del(platform_device);
+diff --git a/drivers/staging/iio/frequency/ad9834.c b/drivers/staging/iio/frequency/ad9834.c
+index 262c3590e64e3..fa0a7056dea40 100644
+--- a/drivers/staging/iio/frequency/ad9834.c
++++ b/drivers/staging/iio/frequency/ad9834.c
+@@ -115,7 +115,7 @@ static int ad9834_write_frequency(struct ad9834_state *st,
+ 
+ 	clk_freq = clk_get_rate(st->mclk);
+ 
+-	if (fout > (clk_freq / 2))
++	if (!clk_freq || fout > (clk_freq / 2))
+ 		return -EINVAL;
+ 
+ 	regval = ad9834_calc_freqreg(clk_freq, fout);
+diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c
+index c31febe90d4ea..3343cac607379 100644
+--- a/drivers/uio/uio_hv_generic.c
++++ b/drivers/uio/uio_hv_generic.c
+@@ -104,10 +104,11 @@ static void hv_uio_channel_cb(void *context)
+ 
+ /*
+  * Callback from vmbus_event when channel is rescinded.
++ * It is meant for rescind of primary channels only.
+  */
+ static void hv_uio_rescind(struct vmbus_channel *channel)
+ {
+-	struct hv_device *hv_dev = channel->primary_channel->device_obj;
++	struct hv_device *hv_dev = channel->device_obj;
+ 	struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev);
+ 
+ 	/*
+@@ -118,6 +119,14 @@ static void hv_uio_rescind(struct vmbus_channel *channel)
+ 
+ 	/* Wake up reader */
+ 	uio_event_notify(&pdata->info);
++
++	/*
++	 * With rescind callback registered, rescind path will not unregister the device
++	 * from vmbus when the primary channel is rescinded.
++	 * Without it, rescind handling is incomplete and next onoffer msg does not come.
++	 * Unregister the device from vmbus here.
++	 */
++	vmbus_device_unregister(channel->device_obj);
+ }
+ 
+ /* Sysfs API to allow mmap of the ring buffers
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index ff6f41e7e0683..ea1680c4cc065 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -424,6 +424,7 @@ static void uas_data_cmplt(struct urb *urb)
+ 			uas_log_cmd_state(cmnd, "data cmplt err", status);
+ 		/* error: no data transfered */
+ 		scsi_set_resid(cmnd, sdb->length);
++		set_host_byte(cmnd, DID_ERROR);
+ 	} else {
+ 		scsi_set_resid(cmnd, sdb->length - urb->actual_length);
+ 	}
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index 41e1a64da82e8..f75b1e2c05fec 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -365,7 +365,7 @@ ucsi_register_displayport(struct ucsi_connector *con,
+ 			  bool override, int offset,
+ 			  struct typec_altmode_desc *desc)
+ {
+-	return NULL;
++	return typec_port_register_altmode(con->port, desc);
+ }
+ 
+ static inline void
+diff --git a/drivers/usb/usbip/stub_rx.c b/drivers/usb/usbip/stub_rx.c
+index 5dd41e8215e0f..bb34d647cf138 100644
+--- a/drivers/usb/usbip/stub_rx.c
++++ b/drivers/usb/usbip/stub_rx.c
+@@ -144,53 +144,62 @@ static int tweak_set_configuration_cmd(struct urb *urb)
+ 	if (err && err != -ENODEV)
+ 		dev_err(&sdev->udev->dev, "can't set config #%d, error %d\n",
+ 			config, err);
+-	return 0;
++	return err;
+ }
+ 
+ static int tweak_reset_device_cmd(struct urb *urb)
+ {
+ 	struct stub_priv *priv = (struct stub_priv *) urb->context;
+ 	struct stub_device *sdev = priv->sdev;
++	int err;
+ 
+ 	dev_info(&urb->dev->dev, "usb_queue_reset_device\n");
+ 
+-	if (usb_lock_device_for_reset(sdev->udev, NULL) < 0) {
++	err = usb_lock_device_for_reset(sdev->udev, NULL);
++	if (err < 0) {
+ 		dev_err(&urb->dev->dev, "could not obtain lock to reset device\n");
+-		return 0;
++		return err;
+ 	}
+-	usb_reset_device(sdev->udev);
++	err = usb_reset_device(sdev->udev);
+ 	usb_unlock_device(sdev->udev);
+ 
+-	return 0;
++	return err;
+ }
+ 
+ /*
+  * clear_halt, set_interface, and set_configuration require special tricks.
++ * Returns 1 if request was tweaked, 0 otherwise.
+  */
+-static void tweak_special_requests(struct urb *urb)
++static int tweak_special_requests(struct urb *urb)
+ {
++	int err;
++
+ 	if (!urb || !urb->setup_packet)
+-		return;
++		return 0;
+ 
+ 	if (usb_pipetype(urb->pipe) != PIPE_CONTROL)
+-		return;
++		return 0;
+ 
+ 	if (is_clear_halt_cmd(urb))
+ 		/* tweak clear_halt */
+-		 tweak_clear_halt_cmd(urb);
++		err = tweak_clear_halt_cmd(urb);
+ 
+ 	else if (is_set_interface_cmd(urb))
+ 		/* tweak set_interface */
+-		tweak_set_interface_cmd(urb);
++		err = tweak_set_interface_cmd(urb);
+ 
+ 	else if (is_set_configuration_cmd(urb))
+ 		/* tweak set_configuration */
+-		tweak_set_configuration_cmd(urb);
++		err = tweak_set_configuration_cmd(urb);
+ 
+ 	else if (is_reset_device_cmd(urb))
+-		tweak_reset_device_cmd(urb);
+-	else
++		err = tweak_reset_device_cmd(urb);
++	else {
+ 		usbip_dbg_stub_rx("no need to tweak\n");
++		return 0;
++	}
++
++	return !err;
+ }
+ 
+ /*
+@@ -468,6 +477,7 @@ static void stub_recv_cmd_submit(struct stub_device *sdev,
+ 	int support_sg = 1;
+ 	int np = 0;
+ 	int ret, i;
++	int is_tweaked;
+ 
+ 	if (pipe == -1)
+ 		return;
+@@ -580,8 +590,11 @@ static void stub_recv_cmd_submit(struct stub_device *sdev,
+ 		priv->urbs[i]->pipe = pipe;
+ 		priv->urbs[i]->complete = stub_complete;
+ 
+-		/* no need to submit an intercepted request, but harmless? */
+-		tweak_special_requests(priv->urbs[i]);
++		/*
++		 * all URBs belong to a single PDU, so a global is_tweaked flag is
++		 * enough
++		 */
++		is_tweaked = tweak_special_requests(priv->urbs[i]);
+ 
+ 		masking_bogus_flags(priv->urbs[i]);
+ 	}
+@@ -594,22 +607,32 @@ static void stub_recv_cmd_submit(struct stub_device *sdev,
+ 
+ 	/* urb is now ready to submit */
+ 	for (i = 0; i < priv->num_urbs; i++) {
+-		ret = usb_submit_urb(priv->urbs[i], GFP_KERNEL);
++		if (!is_tweaked) {
++			ret = usb_submit_urb(priv->urbs[i], GFP_KERNEL);
+ 
+-		if (ret == 0)
+-			usbip_dbg_stub_rx("submit urb ok, seqnum %u\n",
+-					pdu->base.seqnum);
+-		else {
+-			dev_err(&udev->dev, "submit_urb error, %d\n", ret);
+-			usbip_dump_header(pdu);
+-			usbip_dump_urb(priv->urbs[i]);
++			if (ret == 0)
++				usbip_dbg_stub_rx("submit urb ok, seqnum %u\n",
++						pdu->base.seqnum);
++			else {
++				dev_err(&udev->dev, "submit_urb error, %d\n", ret);
++				usbip_dump_header(pdu);
++				usbip_dump_urb(priv->urbs[i]);
+ 
++				/*
++				 * Pessimistic.
++				 * This connection will be discarded.
++				 */
++				usbip_event_add(ud, SDEV_EVENT_ERROR_SUBMIT);
++				break;
++			}
++		} else {
+ 			/*
+-			 * Pessimistic.
+-			 * This connection will be discarded.
++			 * An identical URB was already submitted in
++			 * tweak_special_requests(). Skip submitting this URB to not
++			 * duplicate the request.
+ 			 */
+-			usbip_event_add(ud, SDEV_EVENT_ERROR_SUBMIT);
+-			break;
++			priv->urbs[i]->status = 0;
++			stub_complete(priv->urbs[i]);
+ 		}
+ 	}
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 3ba43a40032cd..afa1eccd5e2d4 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4806,7 +4806,15 @@ static noinline void reada_walk_down(struct btrfs_trans_handle *trans,
+ 		/* We don't care about errors in readahead. */
+ 		if (ret < 0)
+ 			continue;
+-		BUG_ON(refs == 0);
++
++		/*
++		 * This could be racey, it's conceivable that we raced and end
++		 * up with a bogus refs count, if that's the case just skip, if
++		 * we are actually corrupt we will notice when we look up
++		 * everything again with our locks.
++		 */
++		if (refs == 0)
++			continue;
+ 
+ 		if (wc->stage == DROP_REFERENCE) {
+ 			if (refs == 1)
+@@ -4865,7 +4873,7 @@ static noinline int walk_down_proc(struct btrfs_trans_handle *trans,
+ 	if (lookup_info &&
+ 	    ((wc->stage == DROP_REFERENCE && wc->refs[level] != 1) ||
+ 	     (wc->stage == UPDATE_BACKREF && !(wc->flags[level] & flag)))) {
+-		BUG_ON(!path->locks[level]);
++		ASSERT(path->locks[level]);
+ 		ret = btrfs_lookup_extent_info(trans, fs_info,
+ 					       eb->start, level, 1,
+ 					       &wc->refs[level],
+@@ -4873,7 +4881,11 @@ static noinline int walk_down_proc(struct btrfs_trans_handle *trans,
+ 		BUG_ON(ret == -ENOMEM);
+ 		if (ret)
+ 			return ret;
+-		BUG_ON(wc->refs[level] == 0);
++		if (unlikely(wc->refs[level] == 0)) {
++			btrfs_err(fs_info, "bytenr %llu has 0 references, expect > 0",
++				  eb->start);
++			return -EUCLEAN;
++		}
+ 	}
+ 
+ 	if (wc->stage == DROP_REFERENCE) {
+@@ -4889,7 +4901,7 @@ static noinline int walk_down_proc(struct btrfs_trans_handle *trans,
+ 
+ 	/* wc->stage == UPDATE_BACKREF */
+ 	if (!(wc->flags[level] & flag)) {
+-		BUG_ON(!path->locks[level]);
++		ASSERT(path->locks[level]);
+ 		ret = btrfs_inc_ref(trans, root, eb, 1);
+ 		BUG_ON(ret); /* -ENOMEM */
+ 		ret = btrfs_dec_ref(trans, root, eb, 0);
+@@ -5006,8 +5018,9 @@ static noinline int do_walk_down(struct btrfs_trans_handle *trans,
+ 		goto out_unlock;
+ 
+ 	if (unlikely(wc->refs[level - 1] == 0)) {
+-		btrfs_err(fs_info, "Missing references.");
+-		ret = -EIO;
++		btrfs_err(fs_info, "bytenr %llu has 0 references, expect > 0",
++			  bytenr);
++		ret = -EUCLEAN;
+ 		goto out_unlock;
+ 	}
+ 	*lookup_info = 0;
+@@ -5209,7 +5222,12 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans,
+ 				path->locks[level] = 0;
+ 				return ret;
+ 			}
+-			BUG_ON(wc->refs[level] == 0);
++			if (unlikely(wc->refs[level] == 0)) {
++				btrfs_tree_unlock_rw(eb, path->locks[level]);
++				btrfs_err(fs_info, "bytenr %llu has 0 references, expect > 0",
++					  eb->start);
++				return -EUCLEAN;
++			}
+ 			if (wc->refs[level] == 1) {
+ 				btrfs_tree_unlock_rw(eb, path->locks[level]);
+ 				path->locks[level] = 0;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 4bf28f74605fd..cd3156a9a268d 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -5527,7 +5527,7 @@ struct inode *btrfs_lookup_dentry(struct inode *dir, struct dentry *dentry)
+ 	struct inode *inode;
+ 	struct btrfs_root *root = BTRFS_I(dir)->root;
+ 	struct btrfs_root *sub_root = root;
+-	struct btrfs_key location;
++	struct btrfs_key location = { 0 };
+ 	u8 di_type = 0;
+ 	int ret = 0;
+ 
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index ab8ed187746ea..24c4d059cfabb 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -853,10 +853,7 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 		goto fail;
+ 	}
+ 
+-	spin_lock(&fs_info->trans_lock);
+-	list_add(&pending_snapshot->list,
+-		 &trans->transaction->pending_snapshots);
+-	spin_unlock(&fs_info->trans_lock);
++	trans->pending_snapshot = pending_snapshot;
+ 
+ 	ret = btrfs_commit_transaction(trans);
+ 	if (ret)
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 8cefe11c57dbc..8878aa7cbdc57 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -2075,6 +2075,27 @@ static inline void btrfs_wait_delalloc_flush(struct btrfs_trans_handle *trans)
+ 	}
+ }
+ 
++/*
++ * Add a pending snapshot associated with the given transaction handle to the
++ * respective handle. This must be called after the transaction commit started
++ * and while holding fs_info->trans_lock.
++ * This serves to guarantee a caller of btrfs_commit_transaction() that it can
++ * safely free the pending snapshot pointer in case btrfs_commit_transaction()
++ * returns an error.
++ */
++static void add_pending_snapshot(struct btrfs_trans_handle *trans)
++{
++	struct btrfs_transaction *cur_trans = trans->transaction;
++
++	if (!trans->pending_snapshot)
++		return;
++
++	lockdep_assert_held(&trans->fs_info->trans_lock);
++	ASSERT(cur_trans->state >= TRANS_STATE_COMMIT_START);
++
++	list_add(&trans->pending_snapshot->list, &cur_trans->pending_snapshots);
++}
++
+ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+@@ -2161,6 +2182,8 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 
+ 	spin_lock(&fs_info->trans_lock);
+ 	if (cur_trans->state >= TRANS_STATE_COMMIT_START) {
++		add_pending_snapshot(trans);
++
+ 		spin_unlock(&fs_info->trans_lock);
+ 		refcount_inc(&cur_trans->use_count);
+ 		ret = btrfs_end_transaction(trans);
+@@ -2243,6 +2266,7 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 	 * COMMIT_DOING so make sure to wait for num_writers to == 1 again.
+ 	 */
+ 	spin_lock(&fs_info->trans_lock);
++	add_pending_snapshot(trans);
+ 	cur_trans->state = TRANS_STATE_COMMIT_DOING;
+ 	spin_unlock(&fs_info->trans_lock);
+ 	wait_event(cur_trans->writer_wait,
+diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h
+index f73654d93fa03..eb26eb068fe8d 100644
+--- a/fs/btrfs/transaction.h
++++ b/fs/btrfs/transaction.h
+@@ -122,6 +122,8 @@ struct btrfs_trans_handle {
+ 	struct btrfs_transaction *transaction;
+ 	struct btrfs_block_rsv *block_rsv;
+ 	struct btrfs_block_rsv *orig_rsv;
++	/* Set by a task that wants to create a snapshot. */
++	struct btrfs_pending_snapshot *pending_snapshot;
+ 	refcount_t use_count;
+ 	unsigned int type;
+ 	/*
+diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
+index a94cc7b22d7ea..1a371eb4470eb 100644
+--- a/fs/ext4/page-io.c
++++ b/fs/ext4/page-io.c
+@@ -493,6 +493,13 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
+ 			/* A hole? We can safely clear the dirty bit */
+ 			if (!buffer_mapped(bh))
+ 				clear_buffer_dirty(bh);
++			/*
++			 * Keeping dirty some buffer we cannot write? Make
++			 * sure to redirty the page. This happens e.g. when
++			 * doing writeout for transaction commit.
++			 */
++			if (buffer_dirty(bh) && !PageDirty(page))
++				redirty_page_for_writepage(wbc, page);
+ 			if (io->io_bio)
+ 				ext4_io_submit(io);
+ 			continue;
+@@ -500,6 +507,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
+ 		if (buffer_new(bh))
+ 			clear_buffer_new(bh);
+ 		set_buffer_async_write(bh);
++		clear_buffer_dirty(bh);
+ 		nr_to_submit++;
+ 	} while ((bh = bh->b_this_page) != head);
+ 
+@@ -542,7 +550,10 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
+ 			printk_ratelimited(KERN_ERR "%s: ret = %d\n", __func__, ret);
+ 			redirty_page_for_writepage(wbc, page);
+ 			do {
+-				clear_buffer_async_write(bh);
++				if (buffer_async_write(bh)) {
++					clear_buffer_async_write(bh);
++					set_buffer_dirty(bh);
++				}
+ 				bh = bh->b_this_page;
+ 			} while (bh != head);
+ 			goto unlock;
+@@ -555,7 +566,6 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
+ 			continue;
+ 		io_submit_add_bh(io, inode, page, bounce_page, bh);
+ 		nr_submitted++;
+-		clear_buffer_dirty(bh);
+ 	} while ((bh = bh->b_this_page) != head);
+ 
+ unlock:
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 13d97547eaf6c..fd7263ed25b92 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1692,10 +1692,16 @@ __acquires(fi->lock)
+ 	fuse_writepage_finish(fm, wpa);
+ 	spin_unlock(&fi->lock);
+ 
+-	/* After fuse_writepage_finish() aux request list is private */
++	/* After rb_erase() aux request list is private */
+ 	for (aux = wpa->next; aux; aux = next) {
++		struct backing_dev_info *bdi = inode_to_bdi(aux->inode);
++
+ 		next = aux->next;
+ 		aux->next = NULL;
++
++		dec_wb_stat(&bdi->wb, WB_WRITEBACK);
++		dec_node_page_state(aux->ia.ap.pages[0], NR_WRITEBACK_TEMP);
++		wb_writeout_inc(&bdi->wb);
+ 		fuse_writepage_free(aux);
+ 	}
+ 
+diff --git a/fs/fuse/xattr.c b/fs/fuse/xattr.c
+index cdea18de94f7e..314e460ce679d 100644
+--- a/fs/fuse/xattr.c
++++ b/fs/fuse/xattr.c
+@@ -79,7 +79,7 @@ ssize_t fuse_getxattr(struct inode *inode, const char *name, void *value,
+ 	}
+ 	ret = fuse_simple_request(fm, &args);
+ 	if (!ret && !size)
+-		ret = min_t(ssize_t, outarg.size, XATTR_SIZE_MAX);
++		ret = min_t(size_t, outarg.size, XATTR_SIZE_MAX);
+ 	if (ret == -ENOSYS) {
+ 		fm->fc->no_getxattr = 1;
+ 		ret = -EOPNOTSUPP;
+@@ -141,7 +141,7 @@ ssize_t fuse_listxattr(struct dentry *entry, char *list, size_t size)
+ 	}
+ 	ret = fuse_simple_request(fm, &args);
+ 	if (!ret && !size)
+-		ret = min_t(ssize_t, outarg.size, XATTR_LIST_MAX);
++		ret = min_t(size_t, outarg.size, XATTR_LIST_MAX);
+ 	if (ret > 0 && size)
+ 		ret = fuse_verify_xattr_list(list, ret);
+ 	if (ret == -ENOSYS) {
+diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
+index 5579e67da17db..c33f78513f00f 100644
+--- a/fs/lockd/svc.c
++++ b/fs/lockd/svc.c
+@@ -759,8 +759,6 @@ static const struct svc_version *nlmsvc_version[] = {
+ #endif
+ };
+ 
+-static struct svc_stat		nlmsvc_stats;
+-
+ #define NLM_NRVERS	ARRAY_SIZE(nlmsvc_version)
+ static struct svc_program	nlmsvc_program = {
+ 	.pg_prog		= NLM_PROGRAM,		/* program number */
+@@ -768,7 +766,6 @@ static struct svc_program	nlmsvc_program = {
+ 	.pg_vers		= nlmsvc_version,	/* version table */
+ 	.pg_name		= "lockd",		/* service name */
+ 	.pg_class		= "nfsd",		/* share authentication with nfsd */
+-	.pg_stats		= &nlmsvc_stats,	/* stats table */
+ 	.pg_authenticate	= &lockd_authenticate,	/* export authentication */
+ 	.pg_init_request	= svc_generic_init_request,
+ 	.pg_rpcbind_set		= svc_generic_rpcbind_set,
+diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
+index 8fe143cad4a2b..f00fff3633f60 100644
+--- a/fs/nfs/callback.c
++++ b/fs/nfs/callback.c
+@@ -407,15 +407,12 @@ static const struct svc_version *nfs4_callback_version[] = {
+ 	[4] = &nfs4_callback_version4,
+ };
+ 
+-static struct svc_stat nfs4_callback_stats;
+-
+ static struct svc_program nfs4_callback_program = {
+ 	.pg_prog = NFS4_CALLBACK,			/* RPC service number */
+ 	.pg_nvers = ARRAY_SIZE(nfs4_callback_version),	/* Number of entries */
+ 	.pg_vers = nfs4_callback_version,		/* version table */
+ 	.pg_name = "NFSv4 callback",			/* service name */
+ 	.pg_class = "nfs",				/* authentication class */
+-	.pg_stats = &nfs4_callback_stats,
+ 	.pg_authenticate = nfs_callback_authenticate,
+ 	.pg_init_request = svc_generic_init_request,
+ 	.pg_rpcbind_set	= svc_generic_rpcbind_set,
+diff --git a/fs/nfs/super.c b/fs/nfs/super.c
+index 1ffce90760606..2d2238548a6e5 100644
+--- a/fs/nfs/super.c
++++ b/fs/nfs/super.c
+@@ -47,6 +47,7 @@
+ #include <linux/vfs.h>
+ #include <linux/inet.h>
+ #include <linux/in6.h>
++#include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <net/ipv6.h>
+ #include <linux/netdevice.h>
+@@ -219,6 +220,7 @@ static int __nfs_list_for_each_server(struct list_head *head,
+ 		ret = fn(server, data);
+ 		if (ret)
+ 			goto out;
++		cond_resched();
+ 		rcu_read_lock();
+ 	}
+ 	rcu_read_unlock();
+diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
+index 7c863f2c21e0c..617a5b6ae6c38 100644
+--- a/fs/nfsd/export.c
++++ b/fs/nfsd/export.c
+@@ -339,12 +339,16 @@ static int export_stats_init(struct export_stats *stats)
+ 
+ static void export_stats_reset(struct export_stats *stats)
+ {
+-	nfsd_percpu_counters_reset(stats->counter, EXP_STATS_COUNTERS_NUM);
++	if (stats)
++		nfsd_percpu_counters_reset(stats->counter,
++					   EXP_STATS_COUNTERS_NUM);
+ }
+ 
+ static void export_stats_destroy(struct export_stats *stats)
+ {
+-	nfsd_percpu_counters_destroy(stats->counter, EXP_STATS_COUNTERS_NUM);
++	if (stats)
++		nfsd_percpu_counters_destroy(stats->counter,
++					     EXP_STATS_COUNTERS_NUM);
+ }
+ 
+ static void svc_export_put(struct kref *ref)
+@@ -353,7 +357,8 @@ static void svc_export_put(struct kref *ref)
+ 	path_put(&exp->ex_path);
+ 	auth_domain_put(exp->ex_client);
+ 	nfsd4_fslocs_free(&exp->ex_fslocs);
+-	export_stats_destroy(&exp->ex_stats);
++	export_stats_destroy(exp->ex_stats);
++	kfree(exp->ex_stats);
+ 	kfree(exp->ex_uuid);
+ 	kfree_rcu(exp, ex_rcu);
+ }
+@@ -738,13 +743,15 @@ static int svc_export_show(struct seq_file *m,
+ 	seq_putc(m, '\t');
+ 	seq_escape(m, exp->ex_client->name, " \t\n\\");
+ 	if (export_stats) {
+-		seq_printf(m, "\t%lld\n", exp->ex_stats.start_time);
++		struct percpu_counter *counter = exp->ex_stats->counter;
++
++		seq_printf(m, "\t%lld\n", exp->ex_stats->start_time);
+ 		seq_printf(m, "\tfh_stale: %lld\n",
+-			   percpu_counter_sum_positive(&exp->ex_stats.counter[EXP_STATS_FH_STALE]));
++			   percpu_counter_sum_positive(&counter[EXP_STATS_FH_STALE]));
+ 		seq_printf(m, "\tio_read: %lld\n",
+-			   percpu_counter_sum_positive(&exp->ex_stats.counter[EXP_STATS_IO_READ]));
++			   percpu_counter_sum_positive(&counter[EXP_STATS_IO_READ]));
+ 		seq_printf(m, "\tio_write: %lld\n",
+-			   percpu_counter_sum_positive(&exp->ex_stats.counter[EXP_STATS_IO_WRITE]));
++			   percpu_counter_sum_positive(&counter[EXP_STATS_IO_WRITE]));
+ 		seq_putc(m, '\n');
+ 		return 0;
+ 	}
+@@ -790,7 +797,7 @@ static void svc_export_init(struct cache_head *cnew, struct cache_head *citem)
+ 	new->ex_layout_types = 0;
+ 	new->ex_uuid = NULL;
+ 	new->cd = item->cd;
+-	export_stats_reset(&new->ex_stats);
++	export_stats_reset(new->ex_stats);
+ }
+ 
+ static void export_update(struct cache_head *cnew, struct cache_head *citem)
+@@ -826,7 +833,14 @@ static struct cache_head *svc_export_alloc(void)
+ 	if (!i)
+ 		return NULL;
+ 
+-	if (export_stats_init(&i->ex_stats)) {
++	i->ex_stats = kmalloc(sizeof(*(i->ex_stats)), GFP_KERNEL);
++	if (!i->ex_stats) {
++		kfree(i);
++		return NULL;
++	}
++
++	if (export_stats_init(i->ex_stats)) {
++		kfree(i->ex_stats);
+ 		kfree(i);
+ 		return NULL;
+ 	}
+diff --git a/fs/nfsd/export.h b/fs/nfsd/export.h
+index d03f7f6a8642d..f73e23bb24a1e 100644
+--- a/fs/nfsd/export.h
++++ b/fs/nfsd/export.h
+@@ -64,10 +64,10 @@ struct svc_export {
+ 	struct cache_head	h;
+ 	struct auth_domain *	ex_client;
+ 	int			ex_flags;
++	int			ex_fsid;
+ 	struct path		ex_path;
+ 	kuid_t			ex_anon_uid;
+ 	kgid_t			ex_anon_gid;
+-	int			ex_fsid;
+ 	unsigned char *		ex_uuid; /* 16 byte fsid */
+ 	struct nfsd4_fs_locations ex_fslocs;
+ 	uint32_t		ex_nflavors;
+@@ -76,7 +76,7 @@ struct svc_export {
+ 	struct nfsd4_deviceid_map *ex_devid_map;
+ 	struct cache_detail	*cd;
+ 	struct rcu_head		ex_rcu;
+-	struct export_stats	ex_stats;
++	struct export_stats	*ex_stats;
+ };
+ 
+ /* an "export key" (expkey) maps a filehandlefragement to an
+diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
+index 51a4b7885cae2..548422b24a7d7 100644
+--- a/fs/nfsd/netns.h
++++ b/fs/nfsd/netns.h
+@@ -10,8 +10,10 @@
+ 
+ #include <net/net_namespace.h>
+ #include <net/netns/generic.h>
++#include <linux/nfs4.h>
+ #include <linux/percpu_counter.h>
+ #include <linux/siphash.h>
++#include <linux/sunrpc/stats.h>
+ 
+ /* Hash tables for nfs4_clientid state */
+ #define CLIENT_HASH_BITS                 4
+@@ -25,10 +27,22 @@ struct nfsd4_client_tracking_ops;
+ 
+ enum {
+ 	/* cache misses due only to checksum comparison failures */
+-	NFSD_NET_PAYLOAD_MISSES,
++	NFSD_STATS_PAYLOAD_MISSES,
+ 	/* amount of memory (in bytes) currently consumed by the DRC */
+-	NFSD_NET_DRC_MEM_USAGE,
+-	NFSD_NET_COUNTERS_NUM
++	NFSD_STATS_DRC_MEM_USAGE,
++	NFSD_STATS_RC_HITS,		/* repcache hits */
++	NFSD_STATS_RC_MISSES,		/* repcache misses */
++	NFSD_STATS_RC_NOCACHE,		/* uncached reqs */
++	NFSD_STATS_FH_STALE,		/* FH stale error */
++	NFSD_STATS_IO_READ,		/* bytes returned to read requests */
++	NFSD_STATS_IO_WRITE,		/* bytes passed in write requests */
++#ifdef CONFIG_NFSD_V4
++	NFSD_STATS_FIRST_NFS4_OP,	/* count of individual nfsv4 operations */
++	NFSD_STATS_LAST_NFS4_OP = NFSD_STATS_FIRST_NFS4_OP + LAST_NFS4_OP,
++#define NFSD_STATS_NFS4_OP(op)	(NFSD_STATS_FIRST_NFS4_OP + (op))
++	NFSD_STATS_WDELEG_GETATTR,	/* count of getattr conflict with wdeleg */
++#endif
++	NFSD_STATS_COUNTERS_NUM
+ };
+ 
+ /*
+@@ -168,7 +182,10 @@ struct nfsd_net {
+ 	atomic_t                 num_drc_entries;
+ 
+ 	/* Per-netns stats counters */
+-	struct percpu_counter    counter[NFSD_NET_COUNTERS_NUM];
++	struct percpu_counter    counter[NFSD_STATS_COUNTERS_NUM];
++
++	/* sunrpc svc stats */
++	struct svc_stat          nfsd_svcstats;
+ 
+ 	/* longest hash chain seen */
+ 	unsigned int             longest_chain;
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 2c0de247083a9..f10e70f372855 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -2435,10 +2435,10 @@ nfsd4_proc_null(struct svc_rqst *rqstp)
+ 	return rpc_success;
+ }
+ 
+-static inline void nfsd4_increment_op_stats(u32 opnum)
++static inline void nfsd4_increment_op_stats(struct nfsd_net *nn, u32 opnum)
+ {
+ 	if (opnum >= FIRST_NFS4_OP && opnum <= LAST_NFS4_OP)
+-		percpu_counter_inc(&nfsdstats.counter[NFSD_STATS_NFS4_OP(opnum)]);
++		percpu_counter_inc(&nn->counter[NFSD_STATS_NFS4_OP(opnum)]);
+ }
+ 
+ static const struct nfsd4_operation nfsd4_ops[];
+@@ -2713,7 +2713,7 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+ 					   status, nfsd4_op_name(op->opnum));
+ 
+ 		nfsd4_cstate_clear_replay(cstate);
+-		nfsd4_increment_op_stats(op->opnum);
++		nfsd4_increment_op_stats(nn, op->opnum);
+ 	}
+ 
+ 	fh_put(current_fh);
+diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
+index 2b5417e06d80d..448700939dfe9 100644
+--- a/fs/nfsd/nfscache.c
++++ b/fs/nfsd/nfscache.c
+@@ -85,8 +85,8 @@ nfsd_hashsize(unsigned int limit)
+ }
+ 
+ static struct svc_cacherep *
+-nfsd_reply_cache_alloc(struct svc_rqst *rqstp, __wsum csum,
+-			struct nfsd_net *nn)
++nfsd_cacherep_alloc(struct svc_rqst *rqstp, __wsum csum,
++		    struct nfsd_net *nn)
+ {
+ 	struct svc_cacherep	*rp;
+ 
+@@ -110,21 +110,48 @@ nfsd_reply_cache_alloc(struct svc_rqst *rqstp, __wsum csum,
+ 	return rp;
+ }
+ 
+-static void
+-nfsd_reply_cache_free_locked(struct nfsd_drc_bucket *b, struct svc_cacherep *rp,
+-				struct nfsd_net *nn)
++static void nfsd_cacherep_free(struct svc_cacherep *rp)
+ {
+-	if (rp->c_type == RC_REPLBUFF && rp->c_replvec.iov_base) {
+-		nfsd_stats_drc_mem_usage_sub(nn, rp->c_replvec.iov_len);
++	if (rp->c_type == RC_REPLBUFF)
+ 		kfree(rp->c_replvec.iov_base);
++	kmem_cache_free(drc_slab, rp);
++}
++
++static unsigned long
++nfsd_cacherep_dispose(struct list_head *dispose)
++{
++	struct svc_cacherep *rp;
++	unsigned long freed = 0;
++
++	while (!list_empty(dispose)) {
++		rp = list_first_entry(dispose, struct svc_cacherep, c_lru);
++		list_del(&rp->c_lru);
++		nfsd_cacherep_free(rp);
++		freed++;
+ 	}
++	return freed;
++}
++
++static void
++nfsd_cacherep_unlink_locked(struct nfsd_net *nn, struct nfsd_drc_bucket *b,
++			    struct svc_cacherep *rp)
++{
++	if (rp->c_type == RC_REPLBUFF && rp->c_replvec.iov_base)
++		nfsd_stats_drc_mem_usage_sub(nn, rp->c_replvec.iov_len);
+ 	if (rp->c_state != RC_UNUSED) {
+ 		rb_erase(&rp->c_node, &b->rb_head);
+ 		list_del(&rp->c_lru);
+ 		atomic_dec(&nn->num_drc_entries);
+ 		nfsd_stats_drc_mem_usage_sub(nn, sizeof(*rp));
+ 	}
+-	kmem_cache_free(drc_slab, rp);
++}
++
++static void
++nfsd_reply_cache_free_locked(struct nfsd_drc_bucket *b, struct svc_cacherep *rp,
++				struct nfsd_net *nn)
++{
++	nfsd_cacherep_unlink_locked(nn, b, rp);
++	nfsd_cacherep_free(rp);
+ }
+ 
+ static void
+@@ -132,8 +159,9 @@ nfsd_reply_cache_free(struct nfsd_drc_bucket *b, struct svc_cacherep *rp,
+ 			struct nfsd_net *nn)
+ {
+ 	spin_lock(&b->cache_lock);
+-	nfsd_reply_cache_free_locked(b, rp, nn);
++	nfsd_cacherep_unlink_locked(nn, b, rp);
+ 	spin_unlock(&b->cache_lock);
++	nfsd_cacherep_free(rp);
+ }
+ 
+ int nfsd_drc_slab_create(void)
+@@ -148,16 +176,6 @@ void nfsd_drc_slab_free(void)
+ 	kmem_cache_destroy(drc_slab);
+ }
+ 
+-static int nfsd_reply_cache_stats_init(struct nfsd_net *nn)
+-{
+-	return nfsd_percpu_counters_init(nn->counter, NFSD_NET_COUNTERS_NUM);
+-}
+-
+-static void nfsd_reply_cache_stats_destroy(struct nfsd_net *nn)
+-{
+-	nfsd_percpu_counters_destroy(nn->counter, NFSD_NET_COUNTERS_NUM);
+-}
+-
+ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ {
+ 	unsigned int hashsize;
+@@ -169,16 +187,12 @@ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ 	hashsize = nfsd_hashsize(nn->max_drc_entries);
+ 	nn->maskbits = ilog2(hashsize);
+ 
+-	status = nfsd_reply_cache_stats_init(nn);
+-	if (status)
+-		goto out_nomem;
+-
+ 	nn->nfsd_reply_cache_shrinker.scan_objects = nfsd_reply_cache_scan;
+ 	nn->nfsd_reply_cache_shrinker.count_objects = nfsd_reply_cache_count;
+ 	nn->nfsd_reply_cache_shrinker.seeks = 1;
+ 	status = register_shrinker(&nn->nfsd_reply_cache_shrinker);
+ 	if (status)
+-		goto out_stats_destroy;
++		return status;
+ 
+ 	nn->drc_hashtbl = kvzalloc(array_size(hashsize,
+ 				sizeof(*nn->drc_hashtbl)), GFP_KERNEL);
+@@ -194,9 +208,6 @@ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ 	return 0;
+ out_shrinker:
+ 	unregister_shrinker(&nn->nfsd_reply_cache_shrinker);
+-out_stats_destroy:
+-	nfsd_reply_cache_stats_destroy(nn);
+-out_nomem:
+ 	printk(KERN_ERR "nfsd: failed to allocate reply cache\n");
+ 	return -ENOMEM;
+ }
+@@ -216,7 +227,6 @@ void nfsd_reply_cache_shutdown(struct nfsd_net *nn)
+ 									rp, nn);
+ 		}
+ 	}
+-	nfsd_reply_cache_stats_destroy(nn);
+ 
+ 	kvfree(nn->drc_hashtbl);
+ 	nn->drc_hashtbl = NULL;
+@@ -243,12 +253,21 @@ nfsd_cache_bucket_find(__be32 xid, struct nfsd_net *nn)
+ 	return &nn->drc_hashtbl[hash];
+ }
+ 
+-static long prune_bucket(struct nfsd_drc_bucket *b, struct nfsd_net *nn,
+-			 unsigned int max)
++/*
++ * Remove and return no more than @max expired entries in bucket @b.
++ * If @max is zero, do not limit the number of removed entries.
++ */
++static void
++nfsd_prune_bucket_locked(struct nfsd_net *nn, struct nfsd_drc_bucket *b,
++			 unsigned int max, struct list_head *dispose)
+ {
++	unsigned long expiry = jiffies - RC_EXPIRE;
+ 	struct svc_cacherep *rp, *tmp;
+-	long freed = 0;
++	unsigned int freed = 0;
++
++	lockdep_assert_held(&b->cache_lock);
+ 
++	/* The bucket LRU is ordered oldest-first. */
+ 	list_for_each_entry_safe(rp, tmp, &b->lru_head, c_lru) {
+ 		/*
+ 		 * Don't free entries attached to calls that are still
+@@ -256,60 +275,77 @@ static long prune_bucket(struct nfsd_drc_bucket *b, struct nfsd_net *nn,
+ 		 */
+ 		if (rp->c_state == RC_INPROG)
+ 			continue;
++
+ 		if (atomic_read(&nn->num_drc_entries) <= nn->max_drc_entries &&
+-		    time_before(jiffies, rp->c_timestamp + RC_EXPIRE))
++		    time_before(expiry, rp->c_timestamp))
+ 			break;
+-		nfsd_reply_cache_free_locked(b, rp, nn);
+-		if (max && freed++ > max)
++
++		nfsd_cacherep_unlink_locked(nn, b, rp);
++		list_add(&rp->c_lru, dispose);
++
++		if (max && ++freed > max)
+ 			break;
+ 	}
+-	return freed;
+ }
+ 
+-static long nfsd_prune_bucket(struct nfsd_drc_bucket *b, struct nfsd_net *nn)
++/**
++ * nfsd_reply_cache_count - count_objects method for the DRC shrinker
++ * @shrink: our registered shrinker context
++ * @sc: garbage collection parameters
++ *
++ * Returns the total number of entries in the duplicate reply cache. To
++ * keep things simple and quick, this is not the number of expired entries
++ * in the cache (ie, the number that would be removed by a call to
++ * nfsd_reply_cache_scan).
++ */
++static unsigned long
++nfsd_reply_cache_count(struct shrinker *shrink, struct shrink_control *sc)
+ {
+-	return prune_bucket(b, nn, 3);
++	struct nfsd_net *nn = container_of(shrink,
++				struct nfsd_net, nfsd_reply_cache_shrinker);
++
++	return atomic_read(&nn->num_drc_entries);
+ }
+ 
+-/*
+- * Walk the LRU list and prune off entries that are older than RC_EXPIRE.
+- * Also prune the oldest ones when the total exceeds the max number of entries.
++/**
++ * nfsd_reply_cache_scan - scan_objects method for the DRC shrinker
++ * @shrink: our registered shrinker context
++ * @sc: garbage collection parameters
++ *
++ * Free expired entries on each bucket's LRU list until we've released
++ * nr_to_scan freed objects. Nothing will be released if the cache
++ * has not exceeded it's max_drc_entries limit.
++ *
++ * Returns the number of entries released by this call.
+  */
+-static long
+-prune_cache_entries(struct nfsd_net *nn)
++static unsigned long
++nfsd_reply_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
+ {
++	struct nfsd_net *nn = container_of(shrink,
++				struct nfsd_net, nfsd_reply_cache_shrinker);
++	unsigned long freed = 0;
++	LIST_HEAD(dispose);
+ 	unsigned int i;
+-	long freed = 0;
+ 
+ 	for (i = 0; i < nn->drc_hashsize; i++) {
+ 		struct nfsd_drc_bucket *b = &nn->drc_hashtbl[i];
+ 
+ 		if (list_empty(&b->lru_head))
+ 			continue;
++
+ 		spin_lock(&b->cache_lock);
+-		freed += prune_bucket(b, nn, 0);
++		nfsd_prune_bucket_locked(nn, b, 0, &dispose);
+ 		spin_unlock(&b->cache_lock);
+-	}
+-	return freed;
+-}
+ 
+-static unsigned long
+-nfsd_reply_cache_count(struct shrinker *shrink, struct shrink_control *sc)
+-{
+-	struct nfsd_net *nn = container_of(shrink,
+-				struct nfsd_net, nfsd_reply_cache_shrinker);
++		freed += nfsd_cacherep_dispose(&dispose);
++		if (freed > sc->nr_to_scan)
++			break;
++	}
+ 
+-	return atomic_read(&nn->num_drc_entries);
++	trace_nfsd_drc_gc(nn, freed);
++	return freed;
+ }
+ 
+-static unsigned long
+-nfsd_reply_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
+-{
+-	struct nfsd_net *nn = container_of(shrink,
+-				struct nfsd_net, nfsd_reply_cache_shrinker);
+-
+-	return prune_cache_entries(nn);
+-}
+ /*
+  * Walk an xdr_buf and get a CRC for at most the first RC_CSUMLEN bytes
+  */
+@@ -421,16 +457,18 @@ nfsd_cache_insert(struct nfsd_drc_bucket *b, struct svc_cacherep *key,
+  */
+ int nfsd_cache_lookup(struct svc_rqst *rqstp)
+ {
+-	struct nfsd_net		*nn;
++	struct nfsd_net		*nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 	struct svc_cacherep	*rp, *found;
+ 	__wsum			csum;
+ 	struct nfsd_drc_bucket	*b;
+ 	int type = rqstp->rq_cachetype;
++	unsigned long freed;
++	LIST_HEAD(dispose);
+ 	int rtn = RC_DOIT;
+ 
+ 	rqstp->rq_cacherep = NULL;
+ 	if (type == RC_NOCACHE) {
+-		nfsd_stats_rc_nocache_inc();
++		nfsd_stats_rc_nocache_inc(nn);
+ 		goto out;
+ 	}
+ 
+@@ -440,8 +478,7 @@ int nfsd_cache_lookup(struct svc_rqst *rqstp)
+ 	 * Since the common case is a cache miss followed by an insert,
+ 	 * preallocate an entry.
+ 	 */
+-	nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+-	rp = nfsd_reply_cache_alloc(rqstp, csum, nn);
++	rp = nfsd_cacherep_alloc(rqstp, csum, nn);
+ 	if (!rp)
+ 		goto out;
+ 
+@@ -450,25 +487,23 @@ int nfsd_cache_lookup(struct svc_rqst *rqstp)
+ 	found = nfsd_cache_insert(b, rp, nn);
+ 	if (found != rp)
+ 		goto found_entry;
+-
+-	nfsd_stats_rc_misses_inc();
+ 	rqstp->rq_cacherep = rp;
+ 	rp->c_state = RC_INPROG;
++	nfsd_prune_bucket_locked(nn, b, 3, &dispose);
++	spin_unlock(&b->cache_lock);
+ 
++	freed = nfsd_cacherep_dispose(&dispose);
++	trace_nfsd_drc_gc(nn, freed);
++
++	nfsd_stats_rc_misses_inc(nn);
+ 	atomic_inc(&nn->num_drc_entries);
+ 	nfsd_stats_drc_mem_usage_add(nn, sizeof(*rp));
+-
+-	nfsd_prune_bucket(b, nn);
+-
+-out_unlock:
+-	spin_unlock(&b->cache_lock);
+-out:
+-	return rtn;
++	goto out;
+ 
+ found_entry:
+ 	/* We found a matching entry which is either in progress or done. */
+ 	nfsd_reply_cache_free_locked(NULL, rp, nn);
+-	nfsd_stats_rc_hits_inc();
++	nfsd_stats_rc_hits_inc(nn);
+ 	rtn = RC_DROPIT;
+ 	rp = found;
+ 
+@@ -501,7 +536,10 @@ int nfsd_cache_lookup(struct svc_rqst *rqstp)
+ 
+ out_trace:
+ 	trace_nfsd_drc_found(nn, rqstp, rtn);
+-	goto out_unlock;
++out_unlock:
++	spin_unlock(&b->cache_lock);
++out:
++	return rtn;
+ }
+ 
+ /**
+@@ -613,15 +651,15 @@ int nfsd_reply_cache_stats_show(struct seq_file *m, void *v)
+ 		   atomic_read(&nn->num_drc_entries));
+ 	seq_printf(m, "hash buckets:          %u\n", 1 << nn->maskbits);
+ 	seq_printf(m, "mem usage:             %lld\n",
+-		   percpu_counter_sum_positive(&nn->counter[NFSD_NET_DRC_MEM_USAGE]));
++		   percpu_counter_sum_positive(&nn->counter[NFSD_STATS_DRC_MEM_USAGE]));
+ 	seq_printf(m, "cache hits:            %lld\n",
+-		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_RC_HITS]));
++		   percpu_counter_sum_positive(&nn->counter[NFSD_STATS_RC_HITS]));
+ 	seq_printf(m, "cache misses:          %lld\n",
+-		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_RC_MISSES]));
++		   percpu_counter_sum_positive(&nn->counter[NFSD_STATS_RC_MISSES]));
+ 	seq_printf(m, "not cached:            %lld\n",
+-		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_RC_NOCACHE]));
++		   percpu_counter_sum_positive(&nn->counter[NFSD_STATS_RC_NOCACHE]));
+ 	seq_printf(m, "payload misses:        %lld\n",
+-		   percpu_counter_sum_positive(&nn->counter[NFSD_NET_PAYLOAD_MISSES]));
++		   percpu_counter_sum_positive(&nn->counter[NFSD_STATS_PAYLOAD_MISSES]));
+ 	seq_printf(m, "longest chain len:     %u\n", nn->longest_chain);
+ 	seq_printf(m, "cachesize at longest:  %u\n", nn->longest_chain_cachesize);
+ 	return 0;
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index f77f00c931723..2feaa49fb9fe2 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1458,18 +1458,21 @@ static __net_init int nfsd_init_net(struct net *net)
+ 	retval = nfsd_idmap_init(net);
+ 	if (retval)
+ 		goto out_idmap_error;
++	retval = nfsd_stat_counters_init(nn);
++	if (retval)
++		goto out_repcache_error;
++	memset(&nn->nfsd_svcstats, 0, sizeof(nn->nfsd_svcstats));
++	nn->nfsd_svcstats.program = &nfsd_program;
+ 	nn->nfsd_versions = NULL;
+ 	nn->nfsd4_minorversions = NULL;
+ 	nfsd4_init_leases_net(nn);
+-	retval = nfsd_reply_cache_init(nn);
+-	if (retval)
+-		goto out_cache_error;
+ 	get_random_bytes(&nn->siphash_key, sizeof(nn->siphash_key));
+ 	seqlock_init(&nn->writeverf_lock);
++	nfsd_proc_stat_init(net);
+ 
+ 	return 0;
+ 
+-out_cache_error:
++out_repcache_error:
+ 	nfsd_idmap_shutdown(net);
+ out_idmap_error:
+ 	nfsd_export_shutdown(net);
+@@ -1481,10 +1484,11 @@ static __net_exit void nfsd_exit_net(struct net *net)
+ {
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 
+-	nfsd_reply_cache_shutdown(nn);
++	nfsd_proc_stat_shutdown(net);
++	nfsd_stat_counters_destroy(nn);
+ 	nfsd_idmap_shutdown(net);
+ 	nfsd_export_shutdown(net);
+-	nfsd_netns_free_versions(net_generic(net, nfsd_net_id));
++	nfsd_netns_free_versions(nn);
+ }
+ 
+ static struct pernet_operations nfsd_net_ops = {
+@@ -1504,12 +1508,9 @@ static int __init init_nfsd(void)
+ 	retval = nfsd4_init_pnfs();
+ 	if (retval)
+ 		goto out_free_slabs;
+-	retval = nfsd_stat_init();	/* Statistics */
+-	if (retval)
+-		goto out_free_pnfs;
+ 	retval = nfsd_drc_slab_create();
+ 	if (retval)
+-		goto out_free_stat;
++		goto out_free_pnfs;
+ 	nfsd_lockd_init();	/* lockd->nfsd callbacks */
+ 	retval = create_proc_exports_entry();
+ 	if (retval)
+@@ -1539,8 +1540,6 @@ static int __init init_nfsd(void)
+ out_free_lockd:
+ 	nfsd_lockd_shutdown();
+ 	nfsd_drc_slab_free();
+-out_free_stat:
+-	nfsd_stat_shutdown();
+ out_free_pnfs:
+ 	nfsd4_exit_pnfs();
+ out_free_slabs:
+@@ -1557,7 +1556,6 @@ static void __exit exit_nfsd(void)
+ 	nfsd_drc_slab_free();
+ 	remove_proc_entry("fs/nfs/exports", NULL);
+ 	remove_proc_entry("fs/nfs", NULL);
+-	nfsd_stat_shutdown();
+ 	nfsd_lockd_shutdown();
+ 	nfsd4_free_slabs();
+ 	nfsd4_exit_pnfs();
+diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
+index 013bfa24ced21..996f3f62335b2 100644
+--- a/fs/nfsd/nfsd.h
++++ b/fs/nfsd/nfsd.h
+@@ -69,6 +69,7 @@ extern struct mutex		nfsd_mutex;
+ extern spinlock_t		nfsd_drc_lock;
+ extern unsigned long		nfsd_drc_max_mem;
+ extern unsigned long		nfsd_drc_mem_used;
++extern atomic_t			nfsd_th_cnt;		/* number of available threads */
+ 
+ extern const struct seq_operations nfs_exports_op;
+ 
+diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
+index ae3323e0708dd..44e9a9dd28688 100644
+--- a/fs/nfsd/nfsfh.c
++++ b/fs/nfsd/nfsfh.c
+@@ -326,6 +326,7 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
+ __be32
+ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type, int access)
+ {
++	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 	struct svc_export *exp = NULL;
+ 	struct dentry	*dentry;
+ 	__be32		error;
+@@ -399,7 +400,7 @@ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type, int access)
+ 	}
+ out:
+ 	if (error == nfserr_stale)
+-		nfsd_stats_fh_stale_inc(exp);
++		nfsd_stats_fh_stale_inc(nn, exp);
+ 	return error;
+ }
+ 
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index 3d4fd40c987bd..29eb9861684e3 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -34,6 +34,7 @@
+ 
+ #define NFSDDBG_FACILITY	NFSDDBG_SVC
+ 
++atomic_t			nfsd_th_cnt = ATOMIC_INIT(0);
+ extern struct svc_program	nfsd_program;
+ static int			nfsd(void *vrqstp);
+ #if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
+@@ -89,7 +90,6 @@ unsigned long	nfsd_drc_max_mem;
+ unsigned long	nfsd_drc_mem_used;
+ 
+ #if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
+-static struct svc_stat	nfsd_acl_svcstats;
+ static const struct svc_version *nfsd_acl_version[] = {
+ # if defined(CONFIG_NFSD_V2_ACL)
+ 	[2] = &nfsd_acl_version2,
+@@ -108,15 +108,11 @@ static struct svc_program	nfsd_acl_program = {
+ 	.pg_vers		= nfsd_acl_version,
+ 	.pg_name		= "nfsacl",
+ 	.pg_class		= "nfsd",
+-	.pg_stats		= &nfsd_acl_svcstats,
+ 	.pg_authenticate	= &svc_set_client,
+ 	.pg_init_request	= nfsd_acl_init_request,
+ 	.pg_rpcbind_set		= nfsd_acl_rpcbind_set,
+ };
+ 
+-static struct svc_stat	nfsd_acl_svcstats = {
+-	.program	= &nfsd_acl_program,
+-};
+ #endif /* defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL) */
+ 
+ static const struct svc_version *nfsd_version[] = {
+@@ -141,7 +137,6 @@ struct svc_program		nfsd_program = {
+ 	.pg_vers		= nfsd_version,		/* version table */
+ 	.pg_name		= "nfsd",		/* program name */
+ 	.pg_class		= "nfsd",		/* authentication class */
+-	.pg_stats		= &nfsd_svcstats,	/* version table */
+ 	.pg_authenticate	= &svc_set_client,	/* export authentication */
+ 	.pg_init_request	= nfsd_init_request,
+ 	.pg_rpcbind_set		= nfsd_rpcbind_set,
+@@ -427,16 +422,23 @@ static int nfsd_startup_net(struct net *net, const struct cred *cred)
+ 	ret = nfsd_file_cache_start_net(net);
+ 	if (ret)
+ 		goto out_lockd;
+-	ret = nfs4_state_start_net(net);
++
++	ret = nfsd_reply_cache_init(nn);
+ 	if (ret)
+ 		goto out_filecache;
+ 
++	ret = nfs4_state_start_net(net);
++	if (ret)
++		goto out_reply_cache;
++
+ #ifdef CONFIG_NFSD_V4_2_INTER_SSC
+ 	nfsd4_ssc_init_umount_work(nn);
+ #endif
+ 	nn->nfsd_net_up = true;
+ 	return 0;
+ 
++out_reply_cache:
++	nfsd_reply_cache_shutdown(nn);
+ out_filecache:
+ 	nfsd_file_cache_shutdown_net(net);
+ out_lockd:
+@@ -454,6 +456,7 @@ static void nfsd_shutdown_net(struct net *net)
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 
+ 	nfs4_state_shutdown_net(net);
++	nfsd_reply_cache_shutdown(nn);
+ 	nfsd_file_cache_shutdown_net(net);
+ 	if (nn->lockd_up) {
+ 		lockd_down(net);
+@@ -559,7 +562,6 @@ void nfsd_last_thread(struct net *net)
+ 		return;
+ 
+ 	nfsd_shutdown_net(net);
+-	pr_info("nfsd: last server has exited, flushing export cache\n");
+ 	nfsd_export_flush(net);
+ }
+ 
+@@ -662,7 +664,8 @@ int nfsd_create_serv(struct net *net)
+ 	if (nfsd_max_blksize == 0)
+ 		nfsd_max_blksize = nfsd_get_default_max_blksize();
+ 	nfsd_reset_versions(nn);
+-	serv = svc_create_pooled(&nfsd_program, nfsd_max_blksize, nfsd);
++	serv = svc_create_pooled(&nfsd_program, &nn->nfsd_svcstats,
++				 nfsd_max_blksize, nfsd);
+ 	if (serv == NULL)
+ 		return -ENOMEM;
+ 
+@@ -774,7 +777,6 @@ int
+ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
+ {
+ 	int	error;
+-	bool	nfsd_up_before;
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 	struct svc_serv *serv;
+ 
+@@ -794,8 +796,6 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
+ 	error = nfsd_create_serv(net);
+ 	if (error)
+ 		goto out;
+-
+-	nfsd_up_before = nn->nfsd_net_up;
+ 	serv = nn->nfsd_serv;
+ 
+ 	error = nfsd_startup_net(net, cred);
+@@ -803,17 +803,15 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
+ 		goto out_put;
+ 	error = svc_set_num_threads(serv, NULL, nrservs);
+ 	if (error)
+-		goto out_shutdown;
++		goto out_put;
+ 	error = serv->sv_nrthreads;
+-	if (error == 0)
+-		nfsd_last_thread(net);
+-out_shutdown:
+-	if (error < 0 && !nfsd_up_before)
+-		nfsd_shutdown_net(net);
+ out_put:
+ 	/* Threads now hold service active */
+ 	if (xchg(&nn->keep_active, 0))
+ 		svc_put(serv);
++
++	if (serv->sv_nrthreads == 0)
++		nfsd_last_thread(net);
+ 	svc_put(serv);
+ out:
+ 	mutex_unlock(&nfsd_mutex);
+@@ -938,7 +936,7 @@ nfsd(void *vrqstp)
+ 
+ 	current->fs->umask = 0;
+ 
+-	atomic_inc(&nfsdstats.th_cnt);
++	atomic_inc(&nfsd_th_cnt);
+ 
+ 	set_freezable();
+ 
+@@ -962,7 +960,7 @@ nfsd(void *vrqstp)
+ 		validate_process_creds();
+ 	}
+ 
+-	atomic_dec(&nfsdstats.th_cnt);
++	atomic_dec(&nfsd_th_cnt);
+ 
+ out:
+ 	/* Release the thread */
+diff --git a/fs/nfsd/stats.c b/fs/nfsd/stats.c
+index 777e24e5da33b..7a58dba0045c3 100644
+--- a/fs/nfsd/stats.c
++++ b/fs/nfsd/stats.c
+@@ -27,25 +27,22 @@
+ 
+ #include "nfsd.h"
+ 
+-struct nfsd_stats	nfsdstats;
+-struct svc_stat		nfsd_svcstats = {
+-	.program	= &nfsd_program,
+-};
+-
+ static int nfsd_show(struct seq_file *seq, void *v)
+ {
++	struct net *net = PDE_DATA(file_inode(seq->file));
++	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 	int i;
+ 
+ 	seq_printf(seq, "rc %lld %lld %lld\nfh %lld 0 0 0 0\nio %lld %lld\n",
+-		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_RC_HITS]),
+-		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_RC_MISSES]),
+-		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_RC_NOCACHE]),
+-		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_FH_STALE]),
+-		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_IO_READ]),
+-		   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_IO_WRITE]));
++		   percpu_counter_sum_positive(&nn->counter[NFSD_STATS_RC_HITS]),
++		   percpu_counter_sum_positive(&nn->counter[NFSD_STATS_RC_MISSES]),
++		   percpu_counter_sum_positive(&nn->counter[NFSD_STATS_RC_NOCACHE]),
++		   percpu_counter_sum_positive(&nn->counter[NFSD_STATS_FH_STALE]),
++		   percpu_counter_sum_positive(&nn->counter[NFSD_STATS_IO_READ]),
++		   percpu_counter_sum_positive(&nn->counter[NFSD_STATS_IO_WRITE]));
+ 
+ 	/* thread usage: */
+-	seq_printf(seq, "th %u 0", atomic_read(&nfsdstats.th_cnt));
++	seq_printf(seq, "th %u 0", atomic_read(&nfsd_th_cnt));
+ 
+ 	/* deprecated thread usage histogram stats */
+ 	for (i = 0; i < 10; i++)
+@@ -55,7 +52,7 @@ static int nfsd_show(struct seq_file *seq, void *v)
+ 	seq_puts(seq, "\nra 0 0 0 0 0 0 0 0 0 0 0 0\n");
+ 
+ 	/* show my rpc info */
+-	svc_seq_show(seq, &nfsd_svcstats);
++	svc_seq_show(seq, &nn->nfsd_svcstats);
+ 
+ #ifdef CONFIG_NFSD_V4
+ 	/* Show count for individual nfsv4 operations */
+@@ -63,7 +60,7 @@ static int nfsd_show(struct seq_file *seq, void *v)
+ 	seq_printf(seq,"proc4ops %u", LAST_NFS4_OP + 1);
+ 	for (i = 0; i <= LAST_NFS4_OP; i++) {
+ 		seq_printf(seq, " %lld",
+-			   percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_NFS4_OP(i)]));
++			   percpu_counter_sum_positive(&nn->counter[NFSD_STATS_NFS4_OP(i)]));
+ 	}
+ 
+ 	seq_putc(seq, '\n');
+@@ -74,7 +71,7 @@ static int nfsd_show(struct seq_file *seq, void *v)
+ 
+ DEFINE_PROC_SHOW_ATTRIBUTE(nfsd);
+ 
+-int nfsd_percpu_counters_init(struct percpu_counter counters[], int num)
++int nfsd_percpu_counters_init(struct percpu_counter *counters, int num)
+ {
+ 	int i, err = 0;
+ 
+@@ -106,31 +103,24 @@ void nfsd_percpu_counters_destroy(struct percpu_counter counters[], int num)
+ 		percpu_counter_destroy(&counters[i]);
+ }
+ 
+-static int nfsd_stat_counters_init(void)
++int nfsd_stat_counters_init(struct nfsd_net *nn)
+ {
+-	return nfsd_percpu_counters_init(nfsdstats.counter, NFSD_STATS_COUNTERS_NUM);
++	return nfsd_percpu_counters_init(nn->counter, NFSD_STATS_COUNTERS_NUM);
+ }
+ 
+-static void nfsd_stat_counters_destroy(void)
++void nfsd_stat_counters_destroy(struct nfsd_net *nn)
+ {
+-	nfsd_percpu_counters_destroy(nfsdstats.counter, NFSD_STATS_COUNTERS_NUM);
++	nfsd_percpu_counters_destroy(nn->counter, NFSD_STATS_COUNTERS_NUM);
+ }
+ 
+-int nfsd_stat_init(void)
++void nfsd_proc_stat_init(struct net *net)
+ {
+-	int err;
+-
+-	err = nfsd_stat_counters_init();
+-	if (err)
+-		return err;
++	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 
+-	svc_proc_register(&init_net, &nfsd_svcstats, &nfsd_proc_ops);
+-
+-	return 0;
++	svc_proc_register(net, &nn->nfsd_svcstats, &nfsd_proc_ops);
+ }
+ 
+-void nfsd_stat_shutdown(void)
++void nfsd_proc_stat_shutdown(struct net *net)
+ {
+-	nfsd_stat_counters_destroy();
+-	svc_proc_unregister(&init_net, "nfsd");
++	svc_proc_unregister(net, "nfsd");
+ }
+diff --git a/fs/nfsd/stats.h b/fs/nfsd/stats.h
+index 9b43dc3d99913..14525e854cbac 100644
+--- a/fs/nfsd/stats.h
++++ b/fs/nfsd/stats.h
+@@ -10,87 +10,66 @@
+ #include <uapi/linux/nfsd/stats.h>
+ #include <linux/percpu_counter.h>
+ 
+-
+-enum {
+-	NFSD_STATS_RC_HITS,		/* repcache hits */
+-	NFSD_STATS_RC_MISSES,		/* repcache misses */
+-	NFSD_STATS_RC_NOCACHE,		/* uncached reqs */
+-	NFSD_STATS_FH_STALE,		/* FH stale error */
+-	NFSD_STATS_IO_READ,		/* bytes returned to read requests */
+-	NFSD_STATS_IO_WRITE,		/* bytes passed in write requests */
+-#ifdef CONFIG_NFSD_V4
+-	NFSD_STATS_FIRST_NFS4_OP,	/* count of individual nfsv4 operations */
+-	NFSD_STATS_LAST_NFS4_OP = NFSD_STATS_FIRST_NFS4_OP + LAST_NFS4_OP,
+-#define NFSD_STATS_NFS4_OP(op)	(NFSD_STATS_FIRST_NFS4_OP + (op))
+-#endif
+-	NFSD_STATS_COUNTERS_NUM
+-};
+-
+-struct nfsd_stats {
+-	struct percpu_counter	counter[NFSD_STATS_COUNTERS_NUM];
+-
+-	atomic_t	th_cnt;		/* number of available threads */
+-};
+-
+-extern struct nfsd_stats	nfsdstats;
+-
+-extern struct svc_stat		nfsd_svcstats;
+-
+-int nfsd_percpu_counters_init(struct percpu_counter counters[], int num);
+-void nfsd_percpu_counters_reset(struct percpu_counter counters[], int num);
+-void nfsd_percpu_counters_destroy(struct percpu_counter counters[], int num);
+-int nfsd_stat_init(void);
+-void nfsd_stat_shutdown(void);
+-
+-static inline void nfsd_stats_rc_hits_inc(void)
++int nfsd_percpu_counters_init(struct percpu_counter *counters, int num);
++void nfsd_percpu_counters_reset(struct percpu_counter *counters, int num);
++void nfsd_percpu_counters_destroy(struct percpu_counter *counters, int num);
++int nfsd_stat_counters_init(struct nfsd_net *nn);
++void nfsd_stat_counters_destroy(struct nfsd_net *nn);
++void nfsd_proc_stat_init(struct net *net);
++void nfsd_proc_stat_shutdown(struct net *net);
++
++static inline void nfsd_stats_rc_hits_inc(struct nfsd_net *nn)
+ {
+-	percpu_counter_inc(&nfsdstats.counter[NFSD_STATS_RC_HITS]);
++	percpu_counter_inc(&nn->counter[NFSD_STATS_RC_HITS]);
+ }
+ 
+-static inline void nfsd_stats_rc_misses_inc(void)
++static inline void nfsd_stats_rc_misses_inc(struct nfsd_net *nn)
+ {
+-	percpu_counter_inc(&nfsdstats.counter[NFSD_STATS_RC_MISSES]);
++	percpu_counter_inc(&nn->counter[NFSD_STATS_RC_MISSES]);
+ }
+ 
+-static inline void nfsd_stats_rc_nocache_inc(void)
++static inline void nfsd_stats_rc_nocache_inc(struct nfsd_net *nn)
+ {
+-	percpu_counter_inc(&nfsdstats.counter[NFSD_STATS_RC_NOCACHE]);
++	percpu_counter_inc(&nn->counter[NFSD_STATS_RC_NOCACHE]);
+ }
+ 
+-static inline void nfsd_stats_fh_stale_inc(struct svc_export *exp)
++static inline void nfsd_stats_fh_stale_inc(struct nfsd_net *nn,
++					   struct svc_export *exp)
+ {
+-	percpu_counter_inc(&nfsdstats.counter[NFSD_STATS_FH_STALE]);
+-	if (exp)
+-		percpu_counter_inc(&exp->ex_stats.counter[EXP_STATS_FH_STALE]);
++	percpu_counter_inc(&nn->counter[NFSD_STATS_FH_STALE]);
++	if (exp && exp->ex_stats)
++		percpu_counter_inc(&exp->ex_stats->counter[EXP_STATS_FH_STALE]);
+ }
+ 
+-static inline void nfsd_stats_io_read_add(struct svc_export *exp, s64 amount)
++static inline void nfsd_stats_io_read_add(struct nfsd_net *nn,
++					  struct svc_export *exp, s64 amount)
+ {
+-	percpu_counter_add(&nfsdstats.counter[NFSD_STATS_IO_READ], amount);
+-	if (exp)
+-		percpu_counter_add(&exp->ex_stats.counter[EXP_STATS_IO_READ], amount);
++	percpu_counter_add(&nn->counter[NFSD_STATS_IO_READ], amount);
++	if (exp && exp->ex_stats)
++		percpu_counter_add(&exp->ex_stats->counter[EXP_STATS_IO_READ], amount);
+ }
+ 
+-static inline void nfsd_stats_io_write_add(struct svc_export *exp, s64 amount)
++static inline void nfsd_stats_io_write_add(struct nfsd_net *nn,
++					   struct svc_export *exp, s64 amount)
+ {
+-	percpu_counter_add(&nfsdstats.counter[NFSD_STATS_IO_WRITE], amount);
+-	if (exp)
+-		percpu_counter_add(&exp->ex_stats.counter[EXP_STATS_IO_WRITE], amount);
++	percpu_counter_add(&nn->counter[NFSD_STATS_IO_WRITE], amount);
++	if (exp && exp->ex_stats)
++		percpu_counter_add(&exp->ex_stats->counter[EXP_STATS_IO_WRITE], amount);
+ }
+ 
+ static inline void nfsd_stats_payload_misses_inc(struct nfsd_net *nn)
+ {
+-	percpu_counter_inc(&nn->counter[NFSD_NET_PAYLOAD_MISSES]);
++	percpu_counter_inc(&nn->counter[NFSD_STATS_PAYLOAD_MISSES]);
+ }
+ 
+ static inline void nfsd_stats_drc_mem_usage_add(struct nfsd_net *nn, s64 amount)
+ {
+-	percpu_counter_add(&nn->counter[NFSD_NET_DRC_MEM_USAGE], amount);
++	percpu_counter_add(&nn->counter[NFSD_STATS_DRC_MEM_USAGE], amount);
+ }
+ 
+ static inline void nfsd_stats_drc_mem_usage_sub(struct nfsd_net *nn, s64 amount)
+ {
+-	percpu_counter_sub(&nn->counter[NFSD_NET_DRC_MEM_USAGE], amount);
++	percpu_counter_sub(&nn->counter[NFSD_STATS_DRC_MEM_USAGE], amount);
+ }
+ 
+ #endif /* _NFSD_STATS_H */
+diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h
+index 445d00f00eab7..0e6c7ed9da1b4 100644
+--- a/fs/nfsd/trace.h
++++ b/fs/nfsd/trace.h
+@@ -1171,6 +1171,28 @@ TRACE_EVENT(nfsd_drc_mismatch,
+ 		__entry->ingress)
+ );
+ 
++TRACE_EVENT_CONDITION(nfsd_drc_gc,
++	TP_PROTO(
++		const struct nfsd_net *nn,
++		unsigned long freed
++	),
++	TP_ARGS(nn, freed),
++	TP_CONDITION(freed > 0),
++	TP_STRUCT__entry(
++		__field(unsigned long long, boot_time)
++		__field(unsigned long, freed)
++		__field(int, total)
++	),
++	TP_fast_assign(
++		__entry->boot_time = nn->boot_time;
++		__entry->freed = freed;
++		__entry->total = atomic_read(&nn->num_drc_entries);
++	),
++	TP_printk("boot_time=%16llx total=%d freed=%lu",
++		__entry->boot_time, __entry->total, __entry->freed
++	)
++);
++
+ TRACE_EVENT(nfsd_cb_args,
+ 	TP_PROTO(
+ 		const struct nfs4_client *clp,
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 0ea05ddff0d08..dab44f187d013 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -1000,7 +1000,9 @@ static __be32 nfsd_finish_read(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 			       unsigned long *count, u32 *eof, ssize_t host_err)
+ {
+ 	if (host_err >= 0) {
+-		nfsd_stats_io_read_add(fhp->fh_export, host_err);
++		struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
++
++		nfsd_stats_io_read_add(nn, fhp->fh_export, host_err);
+ 		*eof = nfsd_eof_on_read(file, offset, host_err, *count);
+ 		*count = host_err;
+ 		fsnotify_access(file);
+@@ -1143,7 +1145,7 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
+ 		goto out_nfserr;
+ 	}
+ 	*cnt = host_err;
+-	nfsd_stats_io_write_add(exp, *cnt);
++	nfsd_stats_io_write_add(nn, exp, *cnt);
+ 	fsnotify_modify(file);
+ 	host_err = filemap_check_wb_err(file->f_mapping, since);
+ 	if (host_err < 0)
+diff --git a/fs/nilfs2/recovery.c b/fs/nilfs2/recovery.c
+index 188b8cc52e2b6..33c4a97519de8 100644
+--- a/fs/nilfs2/recovery.c
++++ b/fs/nilfs2/recovery.c
+@@ -708,6 +708,33 @@ static void nilfs_finish_roll_forward(struct the_nilfs *nilfs,
+ 	brelse(bh);
+ }
+ 
++/**
++ * nilfs_abort_roll_forward - cleaning up after a failed rollforward recovery
++ * @nilfs: nilfs object
++ */
++static void nilfs_abort_roll_forward(struct the_nilfs *nilfs)
++{
++	struct nilfs_inode_info *ii, *n;
++	LIST_HEAD(head);
++
++	/* Abandon inodes that have read recovery data */
++	spin_lock(&nilfs->ns_inode_lock);
++	list_splice_init(&nilfs->ns_dirty_files, &head);
++	spin_unlock(&nilfs->ns_inode_lock);
++	if (list_empty(&head))
++		return;
++
++	set_nilfs_purging(nilfs);
++	list_for_each_entry_safe(ii, n, &head, i_dirty) {
++		spin_lock(&nilfs->ns_inode_lock);
++		list_del_init(&ii->i_dirty);
++		spin_unlock(&nilfs->ns_inode_lock);
++
++		iput(&ii->vfs_inode);
++	}
++	clear_nilfs_purging(nilfs);
++}
++
+ /**
+  * nilfs_salvage_orphan_logs - salvage logs written after the latest checkpoint
+  * @nilfs: nilfs object
+@@ -766,15 +793,19 @@ int nilfs_salvage_orphan_logs(struct the_nilfs *nilfs,
+ 		if (unlikely(err)) {
+ 			nilfs_err(sb, "error %d writing segment for recovery",
+ 				  err);
+-			goto failed;
++			goto put_root;
+ 		}
+ 
+ 		nilfs_finish_roll_forward(nilfs, ri);
+ 	}
+ 
+- failed:
++put_root:
+ 	nilfs_put_root(root);
+ 	return err;
++
++failed:
++	nilfs_abort_roll_forward(nilfs);
++	goto put_root;
+ }
+ 
+ /**
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index d9f92df15a84f..2213011afab70 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -1833,6 +1833,9 @@ static void nilfs_segctor_abort_construction(struct nilfs_sc_info *sci,
+ 	nilfs_abort_logs(&logs, ret ? : err);
+ 
+ 	list_splice_tail_init(&sci->sc_segbufs, &logs);
++	if (list_empty(&logs))
++		return; /* if the first segment buffer preparation failed */
++
+ 	nilfs_cancel_segusage(&logs, nilfs->ns_sufile);
+ 	nilfs_free_incomplete_logs(&logs, nilfs);
+ 
+@@ -2077,7 +2080,7 @@ static int nilfs_segctor_do_construct(struct nilfs_sc_info *sci, int mode)
+ 
+ 		err = nilfs_segctor_begin_construction(sci, nilfs);
+ 		if (unlikely(err))
+-			goto out;
++			goto failed;
+ 
+ 		/* Update time stamp */
+ 		sci->sc_seg_ctime = ktime_get_real_seconds();
+@@ -2140,10 +2143,9 @@ static int nilfs_segctor_do_construct(struct nilfs_sc_info *sci, int mode)
+ 	return err;
+ 
+  failed_to_write:
+-	if (sci->sc_stage.flags & NILFS_CF_IFILE_STARTED)
+-		nilfs_redirty_inodes(&sci->sc_dirty_files);
+-
+  failed:
++	if (mode == SC_LSEG_SR && nilfs_sc_cstage_get(sci) >= NILFS_ST_IFILE)
++		nilfs_redirty_inodes(&sci->sc_dirty_files);
+ 	if (nilfs_doing_gc())
+ 		nilfs_redirty_inodes(&sci->sc_gc_inodes);
+ 	nilfs_segctor_abort_construction(sci, nilfs, err);
+diff --git a/fs/nilfs2/sysfs.c b/fs/nilfs2/sysfs.c
+index 57afd06db62de..64ea44be0a646 100644
+--- a/fs/nilfs2/sysfs.c
++++ b/fs/nilfs2/sysfs.c
+@@ -108,7 +108,7 @@ static ssize_t
+ nilfs_snapshot_inodes_count_show(struct nilfs_snapshot_attr *attr,
+ 				 struct nilfs_root *root, char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, "%llu\n",
++	return sysfs_emit(buf, "%llu\n",
+ 			(unsigned long long)atomic64_read(&root->inodes_count));
+ }
+ 
+@@ -116,7 +116,7 @@ static ssize_t
+ nilfs_snapshot_blocks_count_show(struct nilfs_snapshot_attr *attr,
+ 				 struct nilfs_root *root, char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, "%llu\n",
++	return sysfs_emit(buf, "%llu\n",
+ 			(unsigned long long)atomic64_read(&root->blocks_count));
+ }
+ 
+@@ -129,7 +129,7 @@ static ssize_t
+ nilfs_snapshot_README_show(struct nilfs_snapshot_attr *attr,
+ 			    struct nilfs_root *root, char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, snapshot_readme_str);
++	return sysfs_emit(buf, snapshot_readme_str);
+ }
+ 
+ NILFS_SNAPSHOT_RO_ATTR(inodes_count);
+@@ -230,7 +230,7 @@ static ssize_t
+ nilfs_mounted_snapshots_README_show(struct nilfs_mounted_snapshots_attr *attr,
+ 				    struct the_nilfs *nilfs, char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, mounted_snapshots_readme_str);
++	return sysfs_emit(buf, mounted_snapshots_readme_str);
+ }
+ 
+ NILFS_MOUNTED_SNAPSHOTS_RO_ATTR(README);
+@@ -268,7 +268,7 @@ nilfs_checkpoints_checkpoints_number_show(struct nilfs_checkpoints_attr *attr,
+ 
+ 	ncheckpoints = cpstat.cs_ncps;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", ncheckpoints);
++	return sysfs_emit(buf, "%llu\n", ncheckpoints);
+ }
+ 
+ static ssize_t
+@@ -291,7 +291,7 @@ nilfs_checkpoints_snapshots_number_show(struct nilfs_checkpoints_attr *attr,
+ 
+ 	nsnapshots = cpstat.cs_nsss;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", nsnapshots);
++	return sysfs_emit(buf, "%llu\n", nsnapshots);
+ }
+ 
+ static ssize_t
+@@ -305,7 +305,7 @@ nilfs_checkpoints_last_seg_checkpoint_show(struct nilfs_checkpoints_attr *attr,
+ 	last_cno = nilfs->ns_last_cno;
+ 	spin_unlock(&nilfs->ns_last_segment_lock);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", last_cno);
++	return sysfs_emit(buf, "%llu\n", last_cno);
+ }
+ 
+ static ssize_t
+@@ -319,7 +319,7 @@ nilfs_checkpoints_next_checkpoint_show(struct nilfs_checkpoints_attr *attr,
+ 	cno = nilfs->ns_cno;
+ 	up_read(&nilfs->ns_segctor_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", cno);
++	return sysfs_emit(buf, "%llu\n", cno);
+ }
+ 
+ static const char checkpoints_readme_str[] =
+@@ -335,7 +335,7 @@ static ssize_t
+ nilfs_checkpoints_README_show(struct nilfs_checkpoints_attr *attr,
+ 				struct the_nilfs *nilfs, char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, checkpoints_readme_str);
++	return sysfs_emit(buf, checkpoints_readme_str);
+ }
+ 
+ NILFS_CHECKPOINTS_RO_ATTR(checkpoints_number);
+@@ -366,7 +366,7 @@ nilfs_segments_segments_number_show(struct nilfs_segments_attr *attr,
+ 				     struct the_nilfs *nilfs,
+ 				     char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, "%lu\n", nilfs->ns_nsegments);
++	return sysfs_emit(buf, "%lu\n", nilfs->ns_nsegments);
+ }
+ 
+ static ssize_t
+@@ -374,7 +374,7 @@ nilfs_segments_blocks_per_segment_show(struct nilfs_segments_attr *attr,
+ 					struct the_nilfs *nilfs,
+ 					char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, "%lu\n", nilfs->ns_blocks_per_segment);
++	return sysfs_emit(buf, "%lu\n", nilfs->ns_blocks_per_segment);
+ }
+ 
+ static ssize_t
+@@ -388,7 +388,7 @@ nilfs_segments_clean_segments_show(struct nilfs_segments_attr *attr,
+ 	ncleansegs = nilfs_sufile_get_ncleansegs(nilfs->ns_sufile);
+ 	up_read(&NILFS_MDT(nilfs->ns_dat)->mi_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%lu\n", ncleansegs);
++	return sysfs_emit(buf, "%lu\n", ncleansegs);
+ }
+ 
+ static ssize_t
+@@ -408,7 +408,7 @@ nilfs_segments_dirty_segments_show(struct nilfs_segments_attr *attr,
+ 		return err;
+ 	}
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", sustat.ss_ndirtysegs);
++	return sysfs_emit(buf, "%llu\n", sustat.ss_ndirtysegs);
+ }
+ 
+ static const char segments_readme_str[] =
+@@ -424,7 +424,7 @@ nilfs_segments_README_show(struct nilfs_segments_attr *attr,
+ 			    struct the_nilfs *nilfs,
+ 			    char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, segments_readme_str);
++	return sysfs_emit(buf, segments_readme_str);
+ }
+ 
+ NILFS_SEGMENTS_RO_ATTR(segments_number);
+@@ -461,7 +461,7 @@ nilfs_segctor_last_pseg_block_show(struct nilfs_segctor_attr *attr,
+ 	last_pseg = nilfs->ns_last_pseg;
+ 	spin_unlock(&nilfs->ns_last_segment_lock);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n",
++	return sysfs_emit(buf, "%llu\n",
+ 			(unsigned long long)last_pseg);
+ }
+ 
+@@ -476,7 +476,7 @@ nilfs_segctor_last_seg_sequence_show(struct nilfs_segctor_attr *attr,
+ 	last_seq = nilfs->ns_last_seq;
+ 	spin_unlock(&nilfs->ns_last_segment_lock);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", last_seq);
++	return sysfs_emit(buf, "%llu\n", last_seq);
+ }
+ 
+ static ssize_t
+@@ -490,7 +490,7 @@ nilfs_segctor_last_seg_checkpoint_show(struct nilfs_segctor_attr *attr,
+ 	last_cno = nilfs->ns_last_cno;
+ 	spin_unlock(&nilfs->ns_last_segment_lock);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", last_cno);
++	return sysfs_emit(buf, "%llu\n", last_cno);
+ }
+ 
+ static ssize_t
+@@ -504,7 +504,7 @@ nilfs_segctor_current_seg_sequence_show(struct nilfs_segctor_attr *attr,
+ 	seg_seq = nilfs->ns_seg_seq;
+ 	up_read(&nilfs->ns_segctor_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", seg_seq);
++	return sysfs_emit(buf, "%llu\n", seg_seq);
+ }
+ 
+ static ssize_t
+@@ -518,7 +518,7 @@ nilfs_segctor_current_last_full_seg_show(struct nilfs_segctor_attr *attr,
+ 	segnum = nilfs->ns_segnum;
+ 	up_read(&nilfs->ns_segctor_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", segnum);
++	return sysfs_emit(buf, "%llu\n", segnum);
+ }
+ 
+ static ssize_t
+@@ -532,7 +532,7 @@ nilfs_segctor_next_full_seg_show(struct nilfs_segctor_attr *attr,
+ 	nextnum = nilfs->ns_nextnum;
+ 	up_read(&nilfs->ns_segctor_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", nextnum);
++	return sysfs_emit(buf, "%llu\n", nextnum);
+ }
+ 
+ static ssize_t
+@@ -546,7 +546,7 @@ nilfs_segctor_next_pseg_offset_show(struct nilfs_segctor_attr *attr,
+ 	pseg_offset = nilfs->ns_pseg_offset;
+ 	up_read(&nilfs->ns_segctor_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%lu\n", pseg_offset);
++	return sysfs_emit(buf, "%lu\n", pseg_offset);
+ }
+ 
+ static ssize_t
+@@ -560,7 +560,7 @@ nilfs_segctor_next_checkpoint_show(struct nilfs_segctor_attr *attr,
+ 	cno = nilfs->ns_cno;
+ 	up_read(&nilfs->ns_segctor_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", cno);
++	return sysfs_emit(buf, "%llu\n", cno);
+ }
+ 
+ static ssize_t
+@@ -588,7 +588,7 @@ nilfs_segctor_last_seg_write_time_secs_show(struct nilfs_segctor_attr *attr,
+ 	ctime = nilfs->ns_ctime;
+ 	up_read(&nilfs->ns_segctor_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", ctime);
++	return sysfs_emit(buf, "%llu\n", ctime);
+ }
+ 
+ static ssize_t
+@@ -616,7 +616,7 @@ nilfs_segctor_last_nongc_write_time_secs_show(struct nilfs_segctor_attr *attr,
+ 	nongc_ctime = nilfs->ns_nongc_ctime;
+ 	up_read(&nilfs->ns_segctor_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", nongc_ctime);
++	return sysfs_emit(buf, "%llu\n", nongc_ctime);
+ }
+ 
+ static ssize_t
+@@ -630,7 +630,7 @@ nilfs_segctor_dirty_data_blocks_count_show(struct nilfs_segctor_attr *attr,
+ 	ndirtyblks = atomic_read(&nilfs->ns_ndirtyblks);
+ 	up_read(&nilfs->ns_segctor_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%u\n", ndirtyblks);
++	return sysfs_emit(buf, "%u\n", ndirtyblks);
+ }
+ 
+ static const char segctor_readme_str[] =
+@@ -667,7 +667,7 @@ static ssize_t
+ nilfs_segctor_README_show(struct nilfs_segctor_attr *attr,
+ 			  struct the_nilfs *nilfs, char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, segctor_readme_str);
++	return sysfs_emit(buf, segctor_readme_str);
+ }
+ 
+ NILFS_SEGCTOR_RO_ATTR(last_pseg_block);
+@@ -736,7 +736,7 @@ nilfs_superblock_sb_write_time_secs_show(struct nilfs_superblock_attr *attr,
+ 	sbwtime = nilfs->ns_sbwtime;
+ 	up_read(&nilfs->ns_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", sbwtime);
++	return sysfs_emit(buf, "%llu\n", sbwtime);
+ }
+ 
+ static ssize_t
+@@ -750,7 +750,7 @@ nilfs_superblock_sb_write_count_show(struct nilfs_superblock_attr *attr,
+ 	sbwcount = nilfs->ns_sbwcount;
+ 	up_read(&nilfs->ns_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%u\n", sbwcount);
++	return sysfs_emit(buf, "%u\n", sbwcount);
+ }
+ 
+ static ssize_t
+@@ -764,7 +764,7 @@ nilfs_superblock_sb_update_frequency_show(struct nilfs_superblock_attr *attr,
+ 	sb_update_freq = nilfs->ns_sb_update_freq;
+ 	up_read(&nilfs->ns_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%u\n", sb_update_freq);
++	return sysfs_emit(buf, "%u\n", sb_update_freq);
+ }
+ 
+ static ssize_t
+@@ -812,7 +812,7 @@ static ssize_t
+ nilfs_superblock_README_show(struct nilfs_superblock_attr *attr,
+ 				struct the_nilfs *nilfs, char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, sb_readme_str);
++	return sysfs_emit(buf, sb_readme_str);
+ }
+ 
+ NILFS_SUPERBLOCK_RO_ATTR(sb_write_time);
+@@ -843,11 +843,17 @@ ssize_t nilfs_dev_revision_show(struct nilfs_dev_attr *attr,
+ 				struct the_nilfs *nilfs,
+ 				char *buf)
+ {
+-	struct nilfs_super_block **sbp = nilfs->ns_sbp;
+-	u32 major = le32_to_cpu(sbp[0]->s_rev_level);
+-	u16 minor = le16_to_cpu(sbp[0]->s_minor_rev_level);
++	struct nilfs_super_block *raw_sb;
++	u32 major;
++	u16 minor;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d.%d\n", major, minor);
++	down_read(&nilfs->ns_sem);
++	raw_sb = nilfs->ns_sbp[0];
++	major = le32_to_cpu(raw_sb->s_rev_level);
++	minor = le16_to_cpu(raw_sb->s_minor_rev_level);
++	up_read(&nilfs->ns_sem);
++
++	return sysfs_emit(buf, "%d.%d\n", major, minor);
+ }
+ 
+ static
+@@ -855,7 +861,7 @@ ssize_t nilfs_dev_blocksize_show(struct nilfs_dev_attr *attr,
+ 				 struct the_nilfs *nilfs,
+ 				 char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, "%u\n", nilfs->ns_blocksize);
++	return sysfs_emit(buf, "%u\n", nilfs->ns_blocksize);
+ }
+ 
+ static
+@@ -863,10 +869,15 @@ ssize_t nilfs_dev_device_size_show(struct nilfs_dev_attr *attr,
+ 				    struct the_nilfs *nilfs,
+ 				    char *buf)
+ {
+-	struct nilfs_super_block **sbp = nilfs->ns_sbp;
+-	u64 dev_size = le64_to_cpu(sbp[0]->s_dev_size);
++	struct nilfs_super_block *raw_sb;
++	u64 dev_size;
++
++	down_read(&nilfs->ns_sem);
++	raw_sb = nilfs->ns_sbp[0];
++	dev_size = le64_to_cpu(raw_sb->s_dev_size);
++	up_read(&nilfs->ns_sem);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n", dev_size);
++	return sysfs_emit(buf, "%llu\n", dev_size);
+ }
+ 
+ static
+@@ -877,7 +888,7 @@ ssize_t nilfs_dev_free_blocks_show(struct nilfs_dev_attr *attr,
+ 	sector_t free_blocks = 0;
+ 
+ 	nilfs_count_free_blocks(nilfs, &free_blocks);
+-	return snprintf(buf, PAGE_SIZE, "%llu\n",
++	return sysfs_emit(buf, "%llu\n",
+ 			(unsigned long long)free_blocks);
+ }
+ 
+@@ -886,9 +897,15 @@ ssize_t nilfs_dev_uuid_show(struct nilfs_dev_attr *attr,
+ 			    struct the_nilfs *nilfs,
+ 			    char *buf)
+ {
+-	struct nilfs_super_block **sbp = nilfs->ns_sbp;
++	struct nilfs_super_block *raw_sb;
++	ssize_t len;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%pUb\n", sbp[0]->s_uuid);
++	down_read(&nilfs->ns_sem);
++	raw_sb = nilfs->ns_sbp[0];
++	len = sysfs_emit(buf, "%pUb\n", raw_sb->s_uuid);
++	up_read(&nilfs->ns_sem);
++
++	return len;
+ }
+ 
+ static
+@@ -896,10 +913,16 @@ ssize_t nilfs_dev_volume_name_show(struct nilfs_dev_attr *attr,
+ 				    struct the_nilfs *nilfs,
+ 				    char *buf)
+ {
+-	struct nilfs_super_block **sbp = nilfs->ns_sbp;
++	struct nilfs_super_block *raw_sb;
++	ssize_t len;
++
++	down_read(&nilfs->ns_sem);
++	raw_sb = nilfs->ns_sbp[0];
++	len = scnprintf(buf, sizeof(raw_sb->s_volume_name), "%s\n",
++			raw_sb->s_volume_name);
++	up_read(&nilfs->ns_sem);
+ 
+-	return scnprintf(buf, sizeof(sbp[0]->s_volume_name), "%s\n",
+-			 sbp[0]->s_volume_name);
++	return len;
+ }
+ 
+ static const char dev_readme_str[] =
+@@ -916,7 +939,7 @@ static ssize_t nilfs_dev_README_show(struct nilfs_dev_attr *attr,
+ 				     struct the_nilfs *nilfs,
+ 				     char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, dev_readme_str);
++	return sysfs_emit(buf, dev_readme_str);
+ }
+ 
+ NILFS_DEV_RO_ATTR(revision);
+@@ -1060,7 +1083,7 @@ void nilfs_sysfs_delete_device_group(struct the_nilfs *nilfs)
+ static ssize_t nilfs_feature_revision_show(struct kobject *kobj,
+ 					    struct attribute *attr, char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, "%d.%d\n",
++	return sysfs_emit(buf, "%d.%d\n",
+ 			NILFS_CURRENT_REV, NILFS_MINOR_REV);
+ }
+ 
+@@ -1073,7 +1096,7 @@ static ssize_t nilfs_feature_README_show(struct kobject *kobj,
+ 					 struct attribute *attr,
+ 					 char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, features_readme_str);
++	return sysfs_emit(buf, features_readme_str);
+ }
+ 
+ NILFS_FEATURE_RO_ATTR(revision);
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index 7974e91ffe134..b5d8f238fce42 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -103,17 +103,13 @@ void fsnotify_sb_delete(struct super_block *sb)
+  * parent cares.  Thus when an event happens on a child it can quickly tell
+  * if there is a need to find a parent and send the event to the parent.
+  */
+-void __fsnotify_update_child_dentry_flags(struct inode *inode)
++void fsnotify_set_children_dentry_flags(struct inode *inode)
+ {
+ 	struct dentry *alias;
+-	int watched;
+ 
+ 	if (!S_ISDIR(inode->i_mode))
+ 		return;
+ 
+-	/* determine if the children should tell inode about their events */
+-	watched = fsnotify_inode_watches_children(inode);
+-
+ 	spin_lock(&inode->i_lock);
+ 	/* run all of the dentries associated with this inode.  Since this is a
+ 	 * directory, there damn well better only be one item on this list */
+@@ -129,10 +125,7 @@ void __fsnotify_update_child_dentry_flags(struct inode *inode)
+ 				continue;
+ 
+ 			spin_lock_nested(&child->d_lock, DENTRY_D_LOCK_NESTED);
+-			if (watched)
+-				child->d_flags |= DCACHE_FSNOTIFY_PARENT_WATCHED;
+-			else
+-				child->d_flags &= ~DCACHE_FSNOTIFY_PARENT_WATCHED;
++			child->d_flags |= DCACHE_FSNOTIFY_PARENT_WATCHED;
+ 			spin_unlock(&child->d_lock);
+ 		}
+ 		spin_unlock(&alias->d_lock);
+@@ -140,6 +133,24 @@ void __fsnotify_update_child_dentry_flags(struct inode *inode)
+ 	spin_unlock(&inode->i_lock);
+ }
+ 
++/*
++ * Lazily clear false positive PARENT_WATCHED flag for child whose parent had
++ * stopped watching children.
++ */
++static void fsnotify_clear_child_dentry_flag(struct inode *pinode,
++					     struct dentry *dentry)
++{
++	spin_lock(&dentry->d_lock);
++	/*
++	 * d_lock is a sufficient barrier to prevent observing a non-watched
++	 * parent state from before the fsnotify_set_children_dentry_flags()
++	 * or fsnotify_update_flags() call that had set PARENT_WATCHED.
++	 */
++	if (!fsnotify_inode_watches_children(pinode))
++		dentry->d_flags &= ~DCACHE_FSNOTIFY_PARENT_WATCHED;
++	spin_unlock(&dentry->d_lock);
++}
++
+ /* Are inode/sb/mount interested in parent and name info with this event? */
+ static bool fsnotify_event_needs_parent(struct inode *inode, struct mount *mnt,
+ 					__u32 mask)
+@@ -208,7 +219,7 @@ int __fsnotify_parent(struct dentry *dentry, __u32 mask, const void *data,
+ 	p_inode = parent->d_inode;
+ 	p_mask = fsnotify_inode_watches_children(p_inode);
+ 	if (unlikely(parent_watched && !p_mask))
+-		__fsnotify_update_child_dentry_flags(p_inode);
++		fsnotify_clear_child_dentry_flag(p_inode, dentry);
+ 
+ 	/*
+ 	 * Include parent/name in notification either if some notification
+diff --git a/fs/notify/fsnotify.h b/fs/notify/fsnotify.h
+index fde74eb333cc9..2b4267de86e6b 100644
+--- a/fs/notify/fsnotify.h
++++ b/fs/notify/fsnotify.h
+@@ -74,7 +74,7 @@ static inline void fsnotify_clear_marks_by_sb(struct super_block *sb)
+  * update the dentry->d_flags of all of inode's children to indicate if inode cares
+  * about events that happen to its children.
+  */
+-extern void __fsnotify_update_child_dentry_flags(struct inode *inode);
++extern void fsnotify_set_children_dentry_flags(struct inode *inode);
+ 
+ extern struct kmem_cache *fsnotify_mark_connector_cachep;
+ 
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index c74ef947447d6..4be6e883d492f 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -176,6 +176,24 @@ static void *__fsnotify_recalc_mask(struct fsnotify_mark_connector *conn)
+ 	return fsnotify_update_iref(conn, want_iref);
+ }
+ 
++static bool fsnotify_conn_watches_children(
++					struct fsnotify_mark_connector *conn)
++{
++	if (conn->type != FSNOTIFY_OBJ_TYPE_INODE)
++		return false;
++
++	return fsnotify_inode_watches_children(fsnotify_conn_inode(conn));
++}
++
++static void fsnotify_conn_set_children_dentry_flags(
++					struct fsnotify_mark_connector *conn)
++{
++	if (conn->type != FSNOTIFY_OBJ_TYPE_INODE)
++		return;
++
++	fsnotify_set_children_dentry_flags(fsnotify_conn_inode(conn));
++}
++
+ /*
+  * Calculate mask of events for a list of marks. The caller must make sure
+  * connector and connector->obj cannot disappear under us.  Callers achieve
+@@ -184,15 +202,23 @@ static void *__fsnotify_recalc_mask(struct fsnotify_mark_connector *conn)
+  */
+ void fsnotify_recalc_mask(struct fsnotify_mark_connector *conn)
+ {
++	bool update_children;
++
+ 	if (!conn)
+ 		return;
+ 
+ 	spin_lock(&conn->lock);
++	update_children = !fsnotify_conn_watches_children(conn);
+ 	__fsnotify_recalc_mask(conn);
++	update_children &= fsnotify_conn_watches_children(conn);
+ 	spin_unlock(&conn->lock);
+-	if (conn->type == FSNOTIFY_OBJ_TYPE_INODE)
+-		__fsnotify_update_child_dentry_flags(
+-					fsnotify_conn_inode(conn));
++	/*
++	 * Set children's PARENT_WATCHED flags only if parent started watching.
++	 * When parent stops watching, we clear false positive PARENT_WATCHED
++	 * flags lazily in __fsnotify_parent().
++	 */
++	if (update_children)
++		fsnotify_conn_set_children_dentry_flags(conn);
+ }
+ 
+ /* Free all connectors queued for freeing once SRCU period ends */
+diff --git a/fs/squashfs/inode.c b/fs/squashfs/inode.c
+index 24463145b3513..f31649080a881 100644
+--- a/fs/squashfs/inode.c
++++ b/fs/squashfs/inode.c
+@@ -276,8 +276,13 @@ int squashfs_read_inode(struct inode *inode, long long ino)
+ 		if (err < 0)
+ 			goto failed_read;
+ 
+-		set_nlink(inode, le32_to_cpu(sqsh_ino->nlink));
+ 		inode->i_size = le32_to_cpu(sqsh_ino->symlink_size);
++		if (inode->i_size > PAGE_SIZE) {
++			ERROR("Corrupted symlink\n");
++			return -EINVAL;
++		}
++
++		set_nlink(inode, le32_to_cpu(sqsh_ino->nlink));
+ 		inode->i_op = &squashfs_symlink_inode_ops;
+ 		inode_nohighmem(inode);
+ 		inode->i_data.a_ops = &squashfs_symlink_aops;
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 1939678f0b622..ae75df43d51cb 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -86,6 +86,13 @@ enum {
+ #define UDF_MAX_LVID_NESTING 1000
+ 
+ enum { UDF_MAX_LINKS = 0xffff };
++/*
++ * We limit filesize to 4TB. This is arbitrary as the on-disk format supports
++ * more but because the file space is described by a linked list of extents,
++ * each of which can have at most 1GB, the creation and handling of extents
++ * gets unusably slow beyond certain point...
++ */
++#define UDF_MAX_FILESIZE (1ULL << 42)
+ 
+ /* These are the "meat" - everything else is stuffing */
+ static int udf_fill_super(struct super_block *, void *, int);
+@@ -1076,12 +1083,19 @@ static int udf_fill_partdesc_info(struct super_block *sb,
+ 	struct udf_part_map *map;
+ 	struct udf_sb_info *sbi = UDF_SB(sb);
+ 	struct partitionHeaderDesc *phd;
++	u32 sum;
+ 	int err;
+ 
+ 	map = &sbi->s_partmaps[p_index];
+ 
+ 	map->s_partition_len = le32_to_cpu(p->partitionLength); /* blocks */
+ 	map->s_partition_root = le32_to_cpu(p->partitionStartingLocation);
++	if (check_add_overflow(map->s_partition_root, map->s_partition_len,
++			       &sum)) {
++		udf_err(sb, "Partition %d has invalid location %u + %u\n",
++			p_index, map->s_partition_root, map->s_partition_len);
++		return -EFSCORRUPTED;
++	}
+ 
+ 	if (p->accessType == cpu_to_le32(PD_ACCESS_TYPE_READ_ONLY))
+ 		map->s_partition_flags |= UDF_PART_FLAG_READ_ONLY;
+@@ -1137,6 +1151,14 @@ static int udf_fill_partdesc_info(struct super_block *sb,
+ 		bitmap->s_extPosition = le32_to_cpu(
+ 				phd->unallocSpaceBitmap.extPosition);
+ 		map->s_partition_flags |= UDF_PART_FLAG_UNALLOC_BITMAP;
++		/* Check whether math over bitmap won't overflow. */
++		if (check_add_overflow(map->s_partition_len,
++				       sizeof(struct spaceBitmapDesc) << 3,
++				       &sum)) {
++			udf_err(sb, "Partition %d is too long (%u)\n", p_index,
++				map->s_partition_len);
++			return -EFSCORRUPTED;
++		}
+ 		udf_debug("unallocSpaceBitmap (part %d) @ %u\n",
+ 			  p_index, bitmap->s_extPosition);
+ 	}
+@@ -2301,7 +2323,7 @@ static int udf_fill_super(struct super_block *sb, void *options, int silent)
+ 		ret = -ENOMEM;
+ 		goto error_out;
+ 	}
+-	sb->s_maxbytes = MAX_LFS_FILESIZE;
++	sb->s_maxbytes = UDF_MAX_FILESIZE;
+ 	sb->s_max_links = UDF_MAX_LINKS;
+ 	return 0;
+ 
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index c9fafca1c30c5..6c6323a01d430 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -764,107 +764,54 @@ static inline void cgroup_threadgroup_change_end(struct task_struct *tsk) {}
+  * sock_cgroup_data is embedded at sock->sk_cgrp_data and contains
+  * per-socket cgroup information except for memcg association.
+  *
+- * On legacy hierarchies, net_prio and net_cls controllers directly set
+- * attributes on each sock which can then be tested by the network layer.
+- * On the default hierarchy, each sock is associated with the cgroup it was
+- * created in and the networking layer can match the cgroup directly.
+- *
+- * To avoid carrying all three cgroup related fields separately in sock,
+- * sock_cgroup_data overloads (prioidx, classid) and the cgroup pointer.
+- * On boot, sock_cgroup_data records the cgroup that the sock was created
+- * in so that cgroup2 matches can be made; however, once either net_prio or
+- * net_cls starts being used, the area is overriden to carry prioidx and/or
+- * classid.  The two modes are distinguished by whether the lowest bit is
+- * set.  Clear bit indicates cgroup pointer while set bit prioidx and
+- * classid.
+- *
+- * While userland may start using net_prio or net_cls at any time, once
+- * either is used, cgroup2 matching no longer works.  There is no reason to
+- * mix the two and this is in line with how legacy and v2 compatibility is
+- * handled.  On mode switch, cgroup references which are already being
+- * pointed to by socks may be leaked.  While this can be remedied by adding
+- * synchronization around sock_cgroup_data, given that the number of leaked
+- * cgroups is bound and highly unlikely to be high, this seems to be the
+- * better trade-off.
++ * On legacy hierarchies, net_prio and net_cls controllers directly
++ * set attributes on each sock which can then be tested by the network
++ * layer. On the default hierarchy, each sock is associated with the
++ * cgroup it was created in and the networking layer can match the
++ * cgroup directly.
+  */
+ struct sock_cgroup_data {
+-	union {
+-#ifdef __LITTLE_ENDIAN
+-		struct {
+-			u8	is_data : 1;
+-			u8	no_refcnt : 1;
+-			u8	unused : 6;
+-			u8	padding;
+-			u16	prioidx;
+-			u32	classid;
+-		} __packed;
+-#else
+-		struct {
+-			u32	classid;
+-			u16	prioidx;
+-			u8	padding;
+-			u8	unused : 6;
+-			u8	no_refcnt : 1;
+-			u8	is_data : 1;
+-		} __packed;
++	struct cgroup	*cgroup; /* v2 */
++#ifdef CONFIG_CGROUP_NET_CLASSID
++	u32		classid; /* v1 */
++#endif
++#ifdef CONFIG_CGROUP_NET_PRIO
++	u16		prioidx; /* v1 */
+ #endif
+-		u64		val;
+-	};
+ };
+ 
+-/*
+- * There's a theoretical window where the following accessors race with
+- * updaters and return part of the previous pointer as the prioidx or
+- * classid.  Such races are short-lived and the result isn't critical.
+- */
+ static inline u16 sock_cgroup_prioidx(const struct sock_cgroup_data *skcd)
+ {
+-	/* fallback to 1 which is always the ID of the root cgroup */
+-	return (skcd->is_data & 1) ? skcd->prioidx : 1;
++#ifdef CONFIG_CGROUP_NET_PRIO
++	return READ_ONCE(skcd->prioidx);
++#else
++	return 1;
++#endif
+ }
+ 
+ static inline u32 sock_cgroup_classid(const struct sock_cgroup_data *skcd)
+ {
+-	/* fallback to 0 which is the unconfigured default classid */
+-	return (skcd->is_data & 1) ? skcd->classid : 0;
++#ifdef CONFIG_CGROUP_NET_CLASSID
++	return READ_ONCE(skcd->classid);
++#else
++	return 0;
++#endif
+ }
+ 
+-/*
+- * If invoked concurrently, the updaters may clobber each other.  The
+- * caller is responsible for synchronization.
+- */
+ static inline void sock_cgroup_set_prioidx(struct sock_cgroup_data *skcd,
+ 					   u16 prioidx)
+ {
+-	struct sock_cgroup_data skcd_buf = {{ .val = READ_ONCE(skcd->val) }};
+-
+-	if (sock_cgroup_prioidx(&skcd_buf) == prioidx)
+-		return;
+-
+-	if (!(skcd_buf.is_data & 1)) {
+-		skcd_buf.val = 0;
+-		skcd_buf.is_data = 1;
+-	}
+-
+-	skcd_buf.prioidx = prioidx;
+-	WRITE_ONCE(skcd->val, skcd_buf.val);	/* see sock_cgroup_ptr() */
++#ifdef CONFIG_CGROUP_NET_PRIO
++	WRITE_ONCE(skcd->prioidx, prioidx);
++#endif
+ }
+ 
+ static inline void sock_cgroup_set_classid(struct sock_cgroup_data *skcd,
+ 					   u32 classid)
+ {
+-	struct sock_cgroup_data skcd_buf = {{ .val = READ_ONCE(skcd->val) }};
+-
+-	if (sock_cgroup_classid(&skcd_buf) == classid)
+-		return;
+-
+-	if (!(skcd_buf.is_data & 1)) {
+-		skcd_buf.val = 0;
+-		skcd_buf.is_data = 1;
+-	}
+-
+-	skcd_buf.classid = classid;
+-	WRITE_ONCE(skcd->val, skcd_buf.val);	/* see sock_cgroup_ptr() */
++#ifdef CONFIG_CGROUP_NET_CLASSID
++	WRITE_ONCE(skcd->classid, classid);
++#endif
+ }
+ 
+ #else	/* CONFIG_SOCK_CGROUP_DATA */
+diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
+index c9c430712d471..15c27a2c98e26 100644
+--- a/include/linux/cgroup.h
++++ b/include/linux/cgroup.h
+@@ -816,33 +816,13 @@ static inline void cgroup_account_cputime_field(struct task_struct *task,
+  */
+ #ifdef CONFIG_SOCK_CGROUP_DATA
+ 
+-#if defined(CONFIG_CGROUP_NET_PRIO) || defined(CONFIG_CGROUP_NET_CLASSID)
+-extern spinlock_t cgroup_sk_update_lock;
+-#endif
+-
+-void cgroup_sk_alloc_disable(void);
+ void cgroup_sk_alloc(struct sock_cgroup_data *skcd);
+ void cgroup_sk_clone(struct sock_cgroup_data *skcd);
+ void cgroup_sk_free(struct sock_cgroup_data *skcd);
+ 
+ static inline struct cgroup *sock_cgroup_ptr(struct sock_cgroup_data *skcd)
+ {
+-#if defined(CONFIG_CGROUP_NET_PRIO) || defined(CONFIG_CGROUP_NET_CLASSID)
+-	unsigned long v;
+-
+-	/*
+-	 * @skcd->val is 64bit but the following is safe on 32bit too as we
+-	 * just need the lower ulong to be written and read atomically.
+-	 */
+-	v = READ_ONCE(skcd->val);
+-
+-	if (v & 3)
+-		return &cgrp_dfl_root.cgrp;
+-
+-	return (struct cgroup *)(unsigned long)v ?: &cgrp_dfl_root.cgrp;
+-#else
+-	return (struct cgroup *)(unsigned long)skcd->val;
+-#endif
++	return skcd->cgroup;
+ }
+ 
+ #else	/* CONFIG_CGROUP_DATA */
+diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
+index d7d96c806bff2..096b79e4373f4 100644
+--- a/include/linux/fsnotify_backend.h
++++ b/include/linux/fsnotify_backend.h
+@@ -563,12 +563,14 @@ static inline __u32 fsnotify_parent_needed_mask(__u32 mask)
+ 
+ static inline int fsnotify_inode_watches_children(struct inode *inode)
+ {
++	__u32 parent_mask = READ_ONCE(inode->i_fsnotify_mask);
++
+ 	/* FS_EVENT_ON_CHILD is set if the inode may care */
+-	if (!(inode->i_fsnotify_mask & FS_EVENT_ON_CHILD))
++	if (!(parent_mask & FS_EVENT_ON_CHILD))
+ 		return 0;
+ 	/* this inode might care about child events, does it care about the
+ 	 * specific set of events that can happen on a child? */
+-	return inode->i_fsnotify_mask & FS_EVENTS_POSS_ON_CHILD;
++	return parent_mask & FS_EVENTS_POSS_ON_CHILD;
+ }
+ 
+ /*
+@@ -582,7 +584,7 @@ static inline void fsnotify_update_flags(struct dentry *dentry)
+ 	/*
+ 	 * Serialisation of setting PARENT_WATCHED on the dentries is provided
+ 	 * by d_lock. If inotify_inode_watched changes after we have taken
+-	 * d_lock, the following __fsnotify_update_child_dentry_flags call will
++	 * d_lock, the following fsnotify_set_children_dentry_flags call will
+ 	 * find our entry, so it will spin until we complete here, and update
+ 	 * us with the new state.
+ 	 */
+diff --git a/include/linux/hwspinlock.h b/include/linux/hwspinlock.h
+index bfe7c1f1ac6d1..f0231dbc47771 100644
+--- a/include/linux/hwspinlock.h
++++ b/include/linux/hwspinlock.h
+@@ -68,6 +68,7 @@ int __hwspin_lock_timeout(struct hwspinlock *, unsigned int, int,
+ int __hwspin_trylock(struct hwspinlock *, int, unsigned long *);
+ void __hwspin_unlock(struct hwspinlock *, int, unsigned long *);
+ int of_hwspin_lock_get_id_byname(struct device_node *np, const char *name);
++int hwspin_lock_bust(struct hwspinlock *hwlock, unsigned int id);
+ int devm_hwspin_lock_free(struct device *dev, struct hwspinlock *hwlock);
+ struct hwspinlock *devm_hwspin_lock_request(struct device *dev);
+ struct hwspinlock *devm_hwspin_lock_request_specific(struct device *dev,
+@@ -127,6 +128,11 @@ void __hwspin_unlock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
+ {
+ }
+ 
++static inline int hwspin_lock_bust(struct hwspinlock *hwlock, unsigned int id)
++{
++	return 0;
++}
++
+ static inline int of_hwspin_lock_get_id(struct device_node *np, int index)
+ {
+ 	return 0;
+diff --git a/include/linux/i2c.h b/include/linux/i2c.h
+index a670ae129f4b9..6cfb530b3d43f 100644
+--- a/include/linux/i2c.h
++++ b/include/linux/i2c.h
+@@ -991,7 +991,7 @@ static inline int of_i2c_get_board_info(struct device *dev,
+ struct acpi_resource;
+ struct acpi_resource_i2c_serialbus;
+ 
+-#if IS_ENABLED(CONFIG_ACPI)
++#if IS_REACHABLE(CONFIG_ACPI) && IS_REACHABLE(CONFIG_I2C)
+ bool i2c_acpi_get_i2c_resource(struct acpi_resource *ares,
+ 			       struct acpi_resource_i2c_serialbus **i2c);
+ u32 i2c_acpi_find_bus_speed(struct device *dev);
+diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
+index 00303c636a89d..dea002ad99fc6 100644
+--- a/include/linux/sunrpc/svc.h
++++ b/include/linux/sunrpc/svc.h
+@@ -410,7 +410,6 @@ struct svc_program {
+ 	const struct svc_version **pg_vers;	/* version array */
+ 	char *			pg_name;	/* service name */
+ 	char *			pg_class;	/* class name: services sharing authentication */
+-	struct svc_stat *	pg_stats;	/* rpc statistics */
+ 	int			(*pg_authenticate)(struct svc_rqst *);
+ 	__be32			(*pg_init_request)(struct svc_rqst *,
+ 						   const struct svc_program *,
+@@ -484,7 +483,9 @@ void		   svc_rqst_replace_page(struct svc_rqst *rqstp,
+ 					 struct page *page);
+ void		   svc_rqst_free(struct svc_rqst *);
+ void		   svc_exit_thread(struct svc_rqst *);
+-struct svc_serv *  svc_create_pooled(struct svc_program *, unsigned int,
++struct svc_serv *  svc_create_pooled(struct svc_program *prog,
++				     struct svc_stat *stats,
++				     unsigned int bufsize,
+ 				     int (*threadfn)(void *data));
+ int		   svc_set_num_threads(struct svc_serv *, struct svc_pool *, int);
+ int		   svc_pool_stats_open(struct svc_serv *serv, struct file *file);
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 9128c0db11f88..fe62943a35ddc 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -174,7 +174,6 @@ struct blocked_key {
+ struct smp_csrk {
+ 	bdaddr_t bdaddr;
+ 	u8 bdaddr_type;
+-	u8 link_type;
+ 	u8 type;
+ 	u8 val[16];
+ };
+@@ -184,7 +183,6 @@ struct smp_ltk {
+ 	struct rcu_head rcu;
+ 	bdaddr_t bdaddr;
+ 	u8 bdaddr_type;
+-	u8 link_type;
+ 	u8 authenticated;
+ 	u8 type;
+ 	u8 enc_size;
+@@ -199,7 +197,6 @@ struct smp_irk {
+ 	bdaddr_t rpa;
+ 	bdaddr_t bdaddr;
+ 	u8 addr_type;
+-	u8 link_type;
+ 	u8 val[16];
+ };
+ 
+@@ -207,8 +204,6 @@ struct link_key {
+ 	struct list_head list;
+ 	struct rcu_head rcu;
+ 	bdaddr_t bdaddr;
+-	u8 bdaddr_type;
+-	u8 link_type;
+ 	u8 type;
+ 	u8 val[HCI_LINK_KEY_SIZE];
+ 	u8 pin_len;
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 11400eba61242..643d8e178f7b9 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -1773,9 +1773,9 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ 		RCU_INIT_POINTER(scgrp->subsys[ssid], NULL);
+ 		rcu_assign_pointer(dcgrp->subsys[ssid], css);
+ 		ss->root = dst_root;
+-		css->cgroup = dcgrp;
+ 
+ 		spin_lock_irq(&css_set_lock);
++		css->cgroup = dcgrp;
+ 		WARN_ON(!list_empty(&dcgrp->e_csets[ss->id]));
+ 		list_for_each_entry_safe(cset, cset_pos, &scgrp->e_csets[ss->id],
+ 					 e_cset_node[ss->id]) {
+@@ -6557,74 +6557,51 @@ int cgroup_parse_float(const char *input, unsigned dec_shift, s64 *v)
+  */
+ #ifdef CONFIG_SOCK_CGROUP_DATA
+ 
+-#if defined(CONFIG_CGROUP_NET_PRIO) || defined(CONFIG_CGROUP_NET_CLASSID)
+-
+-DEFINE_SPINLOCK(cgroup_sk_update_lock);
+-static bool cgroup_sk_alloc_disabled __read_mostly;
+-
+-void cgroup_sk_alloc_disable(void)
+-{
+-	if (cgroup_sk_alloc_disabled)
+-		return;
+-	pr_info("cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation\n");
+-	cgroup_sk_alloc_disabled = true;
+-}
+-
+-#else
+-
+-#define cgroup_sk_alloc_disabled	false
+-
+-#endif
+-
+ void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
+ {
+-	if (cgroup_sk_alloc_disabled) {
+-		skcd->no_refcnt = 1;
+-		return;
+-	}
+-
+-	/* Don't associate the sock with unrelated interrupted task's cgroup. */
+-	if (in_interrupt())
+-		return;
++	struct cgroup *cgroup;
+ 
+ 	rcu_read_lock();
++	/* Don't associate the sock with unrelated interrupted task's cgroup. */
++	if (in_interrupt()) {
++		cgroup = &cgrp_dfl_root.cgrp;
++		cgroup_get(cgroup);
++		goto out;
++	}
+ 
+ 	while (true) {
+ 		struct css_set *cset;
+ 
+ 		cset = task_css_set(current);
+ 		if (likely(cgroup_tryget(cset->dfl_cgrp))) {
+-			skcd->val = (unsigned long)cset->dfl_cgrp;
+-			cgroup_bpf_get(cset->dfl_cgrp);
++			cgroup = cset->dfl_cgrp;
+ 			break;
+ 		}
+ 		cpu_relax();
+ 	}
+-
++out:
++	skcd->cgroup = cgroup;
++	cgroup_bpf_get(cgroup);
+ 	rcu_read_unlock();
+ }
+ 
+ void cgroup_sk_clone(struct sock_cgroup_data *skcd)
+ {
+-	if (skcd->val) {
+-		if (skcd->no_refcnt)
+-			return;
+-		/*
+-		 * We might be cloning a socket which is left in an empty
+-		 * cgroup and the cgroup might have already been rmdir'd.
+-		 * Don't use cgroup_get_live().
+-		 */
+-		cgroup_get(sock_cgroup_ptr(skcd));
+-		cgroup_bpf_get(sock_cgroup_ptr(skcd));
+-	}
++	struct cgroup *cgrp = sock_cgroup_ptr(skcd);
++
++	/*
++	 * We might be cloning a socket which is left in an empty
++	 * cgroup and the cgroup might have already been rmdir'd.
++	 * Don't use cgroup_get_live().
++	 */
++	cgroup_get(cgrp);
++	cgroup_bpf_get(cgrp);
+ }
+ 
+ void cgroup_sk_free(struct sock_cgroup_data *skcd)
+ {
+ 	struct cgroup *cgrp = sock_cgroup_ptr(skcd);
+ 
+-	if (skcd->no_refcnt)
+-		return;
+ 	cgroup_bpf_put(cgrp);
+ 	cgroup_put(cgrp);
+ }
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index 0263983089097..654b039dfc335 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -447,8 +447,11 @@ void debug_dma_dump_mappings(struct device *dev)
+  * dma_active_cacheline entry to track per event.  dma_map_sg(), on the
+  * other hand, consumes a single dma_debug_entry, but inserts 'nents'
+  * entries into the tree.
++ *
++ * Use __GFP_NOWARN because the printk from an OOM, to netconsole, could end
++ * up right back in the DMA debugging code, leading to a deadlock.
+  */
+-static RADIX_TREE(dma_active_cacheline, GFP_ATOMIC);
++static RADIX_TREE(dma_active_cacheline, GFP_ATOMIC | __GFP_NOWARN);
+ static DEFINE_SPINLOCK(radix_lock);
+ #define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << RADIX_TREE_MAX_TAGS) - 1)
+ #define CACHELINE_PER_PAGE_SHIFT (PAGE_SHIFT - L1_CACHE_SHIFT)
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index b60325cc8604d..55033d6c05777 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -1366,8 +1366,9 @@ static void put_ctx(struct perf_event_context *ctx)
+  *	  perf_event_context::mutex
+  *	    perf_event::child_mutex;
+  *	      perf_event_context::lock
+- *	    perf_event::mmap_mutex
+  *	    mmap_lock
++ *	      perf_event::mmap_mutex
++ *	        perf_buffer::aux_mutex
+  *	      perf_addr_filters_head::lock
+  *
+  *    cpu_hotplug_lock
+@@ -6091,12 +6092,11 @@ static void perf_mmap_close(struct vm_area_struct *vma)
+ 		event->pmu->event_unmapped(event, vma->vm_mm);
+ 
+ 	/*
+-	 * rb->aux_mmap_count will always drop before rb->mmap_count and
+-	 * event->mmap_count, so it is ok to use event->mmap_mutex to
+-	 * serialize with perf_mmap here.
++	 * The AUX buffer is strictly a sub-buffer, serialize using aux_mutex
++	 * to avoid complications.
+ 	 */
+ 	if (rb_has_aux(rb) && vma->vm_pgoff == rb->aux_pgoff &&
+-	    atomic_dec_and_mutex_lock(&rb->aux_mmap_count, &event->mmap_mutex)) {
++	    atomic_dec_and_mutex_lock(&rb->aux_mmap_count, &rb->aux_mutex)) {
+ 		/*
+ 		 * Stop all AUX events that are writing to this buffer,
+ 		 * so that we can free its AUX pages and corresponding PMU
+@@ -6113,7 +6113,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
+ 		rb_free_aux(rb);
+ 		WARN_ON_ONCE(refcount_read(&rb->aux_refcount));
+ 
+-		mutex_unlock(&event->mmap_mutex);
++		mutex_unlock(&rb->aux_mutex);
+ 	}
+ 
+ 	if (atomic_dec_and_test(&rb->mmap_count))
+@@ -6201,6 +6201,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 	struct perf_event *event = file->private_data;
+ 	unsigned long user_locked, user_lock_limit;
+ 	struct user_struct *user = current_user();
++	struct mutex *aux_mutex = NULL;
+ 	struct perf_buffer *rb = NULL;
+ 	unsigned long locked, lock_limit;
+ 	unsigned long vma_size;
+@@ -6249,6 +6250,9 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 		if (!rb)
+ 			goto aux_unlock;
+ 
++		aux_mutex = &rb->aux_mutex;
++		mutex_lock(aux_mutex);
++
+ 		aux_offset = READ_ONCE(rb->user_page->aux_offset);
+ 		aux_size = READ_ONCE(rb->user_page->aux_size);
+ 
+@@ -6399,6 +6403,8 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 		atomic_dec(&rb->mmap_count);
+ 	}
+ aux_unlock:
++	if (aux_mutex)
++		mutex_unlock(aux_mutex);
+ 	mutex_unlock(&event->mmap_mutex);
+ 
+ 	/*
+diff --git a/kernel/events/internal.h b/kernel/events/internal.h
+index 8e63cc2bd4f7d..6f4a7bb2b2286 100644
+--- a/kernel/events/internal.h
++++ b/kernel/events/internal.h
+@@ -40,6 +40,7 @@ struct perf_buffer {
+ 	struct user_struct		*mmap_user;
+ 
+ 	/* AUX area */
++	struct mutex			aux_mutex;
+ 	long				aux_head;
+ 	unsigned int			aux_nest;
+ 	long				aux_wakeup;	/* last aux_watermark boundary crossed by aux_head */
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index ca27946fdaaf2..ffca72b8c4c6d 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -332,6 +332,8 @@ ring_buffer_init(struct perf_buffer *rb, long watermark, int flags)
+ 	 */
+ 	if (!rb->nr_pages)
+ 		rb->paused = 1;
++
++	mutex_init(&rb->aux_mutex);
+ }
+ 
+ void perf_aux_output_flag(struct perf_output_handle *handle, u64 flags)
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index 826a2355da1ed..e91d6aac9855c 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -1485,7 +1485,7 @@ static struct xol_area *__create_xol_area(unsigned long vaddr)
+ 	uprobe_opcode_t insn = UPROBE_SWBP_INSN;
+ 	struct xol_area *area;
+ 
+-	area = kmalloc(sizeof(*area), GFP_KERNEL);
++	area = kzalloc(sizeof(*area), GFP_KERNEL);
+ 	if (unlikely(!area))
+ 		goto out;
+ 
+@@ -1495,7 +1495,6 @@ static struct xol_area *__create_xol_area(unsigned long vaddr)
+ 		goto free_area;
+ 
+ 	area->xol_mapping.name = "[uprobes]";
+-	area->xol_mapping.fault = NULL;
+ 	area->xol_mapping.pages = area->pages;
+ 	area->pages[0] = alloc_page(GFP_HIGHUSER);
+ 	if (!area->pages[0])
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index f00dd928fc711..c6a2dafd4a3b4 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -1202,6 +1202,7 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
+ }
+ 
+ static void rt_mutex_handle_deadlock(int res, int detect_deadlock,
++				     struct rt_mutex *lock,
+ 				     struct rt_mutex_waiter *w)
+ {
+ 	/*
+@@ -1211,6 +1212,7 @@ static void rt_mutex_handle_deadlock(int res, int detect_deadlock,
+ 	if (res != -EDEADLOCK || detect_deadlock)
+ 		return;
+ 
++	raw_spin_unlock_irq(&lock->wait_lock);
+ 	/*
+ 	 * Yell lowdly and stop the task right here.
+ 	 */
+@@ -1266,7 +1268,7 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state,
+ 	if (unlikely(ret)) {
+ 		__set_current_state(TASK_RUNNING);
+ 		remove_waiter(lock, &waiter);
+-		rt_mutex_handle_deadlock(ret, chwalk, &waiter);
++		rt_mutex_handle_deadlock(ret, chwalk, lock, &waiter);
+ 	}
+ 
+ 	/*
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 105fdc2bb004c..bede3a4f108e3 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -1240,7 +1240,7 @@ static void show_rcu_tasks_trace_gp_kthread(void)
+ {
+ 	char buf[64];
+ 
+-	sprintf(buf, "N%d h:%lu/%lu/%lu", atomic_read(&trc_n_readers_need_end),
++	snprintf(buf, sizeof(buf), "N%d h:%lu/%lu/%lu", atomic_read(&trc_n_readers_need_end),
+ 		data_race(n_heavy_reader_ofl_updates),
+ 		data_race(n_heavy_reader_updates),
+ 		data_race(n_heavy_reader_attempts));
+diff --git a/kernel/smp.c b/kernel/smp.c
+index b0684b4c111e9..c6b3ad79c72bd 100644
+--- a/kernel/smp.c
++++ b/kernel/smp.c
+@@ -1009,6 +1009,7 @@ int smp_call_on_cpu(unsigned int cpu, int (*func)(void *), void *par, bool phys)
+ 
+ 	queue_work_on(cpu, system_wq, &sscs.work);
+ 	wait_for_completion(&sscs.done);
++	destroy_work_on_stack(&sscs.work);
+ 
+ 	return sscs.ret;
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 22e1e57118698..b16291f4c5731 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3742,6 +3742,8 @@ void tracing_iter_reset(struct trace_iterator *iter, int cpu)
+ 			break;
+ 		entries++;
+ 		ring_buffer_iter_advance(buf_iter);
++		/* This could be a big loop */
++		cond_resched();
+ 	}
+ 
+ 	per_cpu_ptr(iter->array_buffer->data, cpu)->skipped_entries = entries;
+diff --git a/lib/generic-radix-tree.c b/lib/generic-radix-tree.c
+index f25eb111c0516..34d3ac52de894 100644
+--- a/lib/generic-radix-tree.c
++++ b/lib/generic-radix-tree.c
+@@ -131,6 +131,8 @@ void *__genradix_ptr_alloc(struct __genradix *radix, size_t offset,
+ 		if ((v = cmpxchg_release(&radix->root, r, new_root)) == r) {
+ 			v = new_root;
+ 			new_node = NULL;
++		} else {
++			new_node->children[0] = NULL;
+ 		}
+ 	}
+ 
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 874f91715296b..8de7c72ae0258 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -5160,11 +5160,28 @@ static struct cftype mem_cgroup_legacy_files[] = {
+  */
+ 
+ static DEFINE_IDR(mem_cgroup_idr);
++static DEFINE_SPINLOCK(memcg_idr_lock);
++
++static int mem_cgroup_alloc_id(void)
++{
++	int ret;
++
++	idr_preload(GFP_KERNEL);
++	spin_lock(&memcg_idr_lock);
++	ret = idr_alloc(&mem_cgroup_idr, NULL, 1, MEM_CGROUP_ID_MAX + 1,
++			GFP_NOWAIT);
++	spin_unlock(&memcg_idr_lock);
++	idr_preload_end();
++	return ret;
++}
+ 
+ static void mem_cgroup_id_remove(struct mem_cgroup *memcg)
+ {
+ 	if (memcg->id.id > 0) {
++		spin_lock(&memcg_idr_lock);
+ 		idr_remove(&mem_cgroup_idr, memcg->id.id);
++		spin_unlock(&memcg_idr_lock);
++
+ 		memcg->id.id = 0;
+ 	}
+ }
+@@ -5294,9 +5311,7 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
+ 	if (!memcg)
+ 		return ERR_PTR(error);
+ 
+-	memcg->id.id = idr_alloc(&mem_cgroup_idr, NULL,
+-				 1, MEM_CGROUP_ID_MAX,
+-				 GFP_KERNEL);
++	memcg->id.id = mem_cgroup_alloc_id();
+ 	if (memcg->id.id < 0) {
+ 		error = memcg->id.id;
+ 		goto fail;
+@@ -5342,7 +5357,9 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
+ 	INIT_LIST_HEAD(&memcg->deferred_split_queue.split_queue);
+ 	memcg->deferred_split_queue.split_queue_len = 0;
+ #endif
++	spin_lock(&memcg_idr_lock);
+ 	idr_replace(&mem_cgroup_idr, memcg, memcg->id.id);
++	spin_unlock(&memcg_idr_lock);
+ 	return memcg;
+ fail:
+ 	mem_cgroup_id_remove(memcg);
+diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c
+index 43aea97c57620..c96ff4a1d4a0b 100644
+--- a/net/8021q/vlan_core.c
++++ b/net/8021q/vlan_core.c
+@@ -482,10 +482,9 @@ static struct sk_buff *vlan_gro_receive(struct list_head *head,
+ 
+ 	type = vhdr->h_vlan_encapsulated_proto;
+ 
+-	rcu_read_lock();
+ 	ptype = gro_find_receive_by_type(type);
+ 	if (!ptype)
+-		goto out_unlock;
++		goto out;
+ 
+ 	flush = 0;
+ 
+@@ -504,8 +503,6 @@ static struct sk_buff *vlan_gro_receive(struct list_head *head,
+ 	skb_gro_postpull_rcsum(skb, vhdr, sizeof(*vhdr));
+ 	pp = call_gro_receive(ptype->callbacks.gro_receive, head, skb);
+ 
+-out_unlock:
+-	rcu_read_unlock();
+ out:
+ 	skb_gro_flush_final(skb, pp, flush);
+ 
+@@ -519,12 +516,10 @@ static int vlan_gro_complete(struct sk_buff *skb, int nhoff)
+ 	struct packet_offload *ptype;
+ 	int err = -ENOENT;
+ 
+-	rcu_read_lock();
+ 	ptype = gro_find_complete_by_type(type);
+ 	if (ptype)
+ 		err = ptype->callbacks.gro_complete(skb, nhoff + sizeof(*vhdr));
+ 
+-	rcu_read_unlock();
+ 	return err;
+ }
+ 
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 0078e33e12ba9..51b16c2a279f4 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -2370,16 +2370,6 @@ static int load_link_keys(struct sock *sk, struct hci_dev *hdev, void *data,
+ 	bt_dev_dbg(hdev, "debug_keys %u key_count %u", cp->debug_keys,
+ 		   key_count);
+ 
+-	for (i = 0; i < key_count; i++) {
+-		struct mgmt_link_key_info *key = &cp->keys[i];
+-
+-		/* Considering SMP over BREDR/LE, there is no need to check addr_type */
+-		if (key->type > 0x08)
+-			return mgmt_cmd_status(sk, hdev->id,
+-					       MGMT_OP_LOAD_LINK_KEYS,
+-					       MGMT_STATUS_INVALID_PARAMS);
+-	}
+-
+ 	hci_dev_lock(hdev);
+ 
+ 	hci_link_keys_clear(hdev);
+@@ -2404,6 +2394,19 @@ static int load_link_keys(struct sock *sk, struct hci_dev *hdev, void *data,
+ 			continue;
+ 		}
+ 
++		if (key->addr.type != BDADDR_BREDR) {
++			bt_dev_warn(hdev,
++				    "Invalid link address type %u for %pMR",
++				    key->addr.type, &key->addr.bdaddr);
++			continue;
++		}
++
++		if (key->type > 0x08) {
++			bt_dev_warn(hdev, "Invalid link key type %u for %pMR",
++				    key->type, &key->addr.bdaddr);
++			continue;
++		}
++
+ 		/* Always ignore debug keys and require a new pairing if
+ 		 * the user wants to use them.
+ 		 */
+@@ -5919,7 +5922,6 @@ static int load_irks(struct sock *sk, struct hci_dev *hdev, void *cp_data,
+ 
+ 	for (i = 0; i < irk_count; i++) {
+ 		struct mgmt_irk_info *irk = &cp->irks[i];
+-		u8 addr_type = le_addr_type(irk->addr.type);
+ 
+ 		if (hci_is_blocked_key(hdev,
+ 				       HCI_BLOCKED_KEY_TYPE_IRK,
+@@ -5929,12 +5931,8 @@ static int load_irks(struct sock *sk, struct hci_dev *hdev, void *cp_data,
+ 			continue;
+ 		}
+ 
+-		/* When using SMP over BR/EDR, the addr type should be set to BREDR */
+-		if (irk->addr.type == BDADDR_BREDR)
+-			addr_type = BDADDR_BREDR;
+-
+ 		hci_add_irk(hdev, &irk->addr.bdaddr,
+-			    addr_type, irk->val,
++			    le_addr_type(irk->addr.type), irk->val,
+ 			    BDADDR_ANY);
+ 	}
+ 
+@@ -5999,15 +5997,6 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
+ 
+ 	bt_dev_dbg(hdev, "key_count %u", key_count);
+ 
+-	for (i = 0; i < key_count; i++) {
+-		struct mgmt_ltk_info *key = &cp->keys[i];
+-
+-		if (!ltk_is_valid(key))
+-			return mgmt_cmd_status(sk, hdev->id,
+-					       MGMT_OP_LOAD_LONG_TERM_KEYS,
+-					       MGMT_STATUS_INVALID_PARAMS);
+-	}
+-
+ 	hci_dev_lock(hdev);
+ 
+ 	hci_smp_ltks_clear(hdev);
+@@ -6015,7 +6004,6 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
+ 	for (i = 0; i < key_count; i++) {
+ 		struct mgmt_ltk_info *key = &cp->keys[i];
+ 		u8 type, authenticated;
+-		u8 addr_type = le_addr_type(key->addr.type);
+ 
+ 		if (hci_is_blocked_key(hdev,
+ 				       HCI_BLOCKED_KEY_TYPE_LTK,
+@@ -6025,6 +6013,12 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
+ 			continue;
+ 		}
+ 
++		if (!ltk_is_valid(key)) {
++			bt_dev_warn(hdev, "Invalid LTK for %pMR",
++				    &key->addr.bdaddr);
++			continue;
++		}
++
+ 		switch (key->type) {
+ 		case MGMT_LTK_UNAUTHENTICATED:
+ 			authenticated = 0x00;
+@@ -6050,12 +6044,8 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
+ 			continue;
+ 		}
+ 
+-		/* When using SMP over BR/EDR, the addr type should be set to BREDR */
+-		if (key->addr.type == BDADDR_BREDR)
+-			addr_type = BDADDR_BREDR;
+-
+ 		hci_add_ltk(hdev, &key->addr.bdaddr,
+-			    addr_type, type, authenticated,
++			    le_addr_type(key->addr.type), type, authenticated,
+ 			    key->val, key->enc_size, key->ediv, key->rand);
+ 	}
+ 
+@@ -8058,7 +8048,7 @@ void mgmt_new_link_key(struct hci_dev *hdev, struct link_key *key,
+ 
+ 	ev.store_hint = persistent;
+ 	bacpy(&ev.key.addr.bdaddr, &key->bdaddr);
+-	ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type);
++	ev.key.addr.type = BDADDR_BREDR;
+ 	ev.key.type = key->type;
+ 	memcpy(ev.key.val, key->val, HCI_LINK_KEY_SIZE);
+ 	ev.key.pin_len = key->pin_len;
+@@ -8109,7 +8099,7 @@ void mgmt_new_ltk(struct hci_dev *hdev, struct smp_ltk *key, bool persistent)
+ 		ev.store_hint = persistent;
+ 
+ 	bacpy(&ev.key.addr.bdaddr, &key->bdaddr);
+-	ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type);
++	ev.key.addr.type = link_to_bdaddr(LE_LINK, key->bdaddr_type);
+ 	ev.key.type = mgmt_ltk_type(key);
+ 	ev.key.enc_size = key->enc_size;
+ 	ev.key.ediv = key->ediv;
+@@ -8138,7 +8128,7 @@ void mgmt_new_irk(struct hci_dev *hdev, struct smp_irk *irk, bool persistent)
+ 
+ 	bacpy(&ev.rpa, &irk->rpa);
+ 	bacpy(&ev.irk.addr.bdaddr, &irk->bdaddr);
+-	ev.irk.addr.type = link_to_bdaddr(irk->link_type, irk->addr_type);
++	ev.irk.addr.type = link_to_bdaddr(LE_LINK, irk->addr_type);
+ 	memcpy(ev.irk.val, irk->val, sizeof(irk->val));
+ 
+ 	mgmt_event(MGMT_EV_NEW_IRK, hdev, &ev, sizeof(ev), NULL);
+@@ -8167,7 +8157,7 @@ void mgmt_new_csrk(struct hci_dev *hdev, struct smp_csrk *csrk,
+ 		ev.store_hint = persistent;
+ 
+ 	bacpy(&ev.key.addr.bdaddr, &csrk->bdaddr);
+-	ev.key.addr.type = link_to_bdaddr(csrk->link_type, csrk->bdaddr_type);
++	ev.key.addr.type = link_to_bdaddr(LE_LINK, csrk->bdaddr_type);
+ 	ev.key.type = csrk->type;
+ 	memcpy(ev.key.val, csrk->val, sizeof(csrk->val));
+ 
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 20cae8f768762..8f9566f37498e 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -1060,7 +1060,6 @@ static void smp_notify_keys(struct l2cap_conn *conn)
+ 	}
+ 
+ 	if (smp->remote_irk) {
+-		smp->remote_irk->link_type = hcon->type;
+ 		mgmt_new_irk(hdev, smp->remote_irk, persistent);
+ 
+ 		/* Now that user space can be considered to know the
+@@ -1075,28 +1074,24 @@ static void smp_notify_keys(struct l2cap_conn *conn)
+ 	}
+ 
+ 	if (smp->csrk) {
+-		smp->csrk->link_type = hcon->type;
+ 		smp->csrk->bdaddr_type = hcon->dst_type;
+ 		bacpy(&smp->csrk->bdaddr, &hcon->dst);
+ 		mgmt_new_csrk(hdev, smp->csrk, persistent);
+ 	}
+ 
+ 	if (smp->responder_csrk) {
+-		smp->responder_csrk->link_type = hcon->type;
+ 		smp->responder_csrk->bdaddr_type = hcon->dst_type;
+ 		bacpy(&smp->responder_csrk->bdaddr, &hcon->dst);
+ 		mgmt_new_csrk(hdev, smp->responder_csrk, persistent);
+ 	}
+ 
+ 	if (smp->ltk) {
+-		smp->ltk->link_type = hcon->type;
+ 		smp->ltk->bdaddr_type = hcon->dst_type;
+ 		bacpy(&smp->ltk->bdaddr, &hcon->dst);
+ 		mgmt_new_ltk(hdev, smp->ltk, persistent);
+ 	}
+ 
+ 	if (smp->responder_ltk) {
+-		smp->responder_ltk->link_type = hcon->type;
+ 		smp->responder_ltk->bdaddr_type = hcon->dst_type;
+ 		bacpy(&smp->responder_ltk->bdaddr, &hcon->dst);
+ 		mgmt_new_ltk(hdev, smp->responder_ltk, persistent);
+@@ -1116,8 +1111,6 @@ static void smp_notify_keys(struct l2cap_conn *conn)
+ 		key = hci_add_link_key(hdev, smp->conn->hcon, &hcon->dst,
+ 				       smp->link_key, type, 0, &persistent);
+ 		if (key) {
+-			key->link_type = hcon->type;
+-			key->bdaddr_type = hcon->dst_type;
+ 			mgmt_new_link_key(hdev, key, persistent);
+ 
+ 			/* Don't keep debug keys around if the relevant
+diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
+index 8a6470a217024..8751571a3cb02 100644
+--- a/net/bridge/br_fdb.c
++++ b/net/bridge/br_fdb.c
+@@ -1238,12 +1238,10 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+ 			modified = true;
+ 		}
+ 
+-		if (test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) {
++		if (test_and_set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) {
+ 			/* Refresh entry */
+ 			fdb->used = jiffies;
+-		} else if (!test_bit(BR_FDB_ADDED_BY_USER, &fdb->flags)) {
+-			/* Take over SW learned entry */
+-			set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags);
++		} else {
+ 			modified = true;
+ 		}
+ 
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index 2388c619f29ca..b2b1bd6727871 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -1423,6 +1423,10 @@ static void bcm_notify(struct bcm_sock *bo, unsigned long msg,
+ 
+ 		/* remove device reference, if this is our bound device */
+ 		if (bo->bound && bo->ifindex == dev->ifindex) {
++#if IS_ENABLED(CONFIG_PROC_FS)
++			if (sock_net(sk)->can.bcmproc_dir && bo->bcm_proc_read)
++				remove_proc_entry(bo->procname, sock_net(sk)->can.bcmproc_dir);
++#endif
+ 			bo->bound   = 0;
+ 			bo->ifindex = 0;
+ 			notify_enodev = 1;
+diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
+index 41b24cd31562a..b6de5ee22391c 100644
+--- a/net/core/netclassid_cgroup.c
++++ b/net/core/netclassid_cgroup.c
+@@ -72,11 +72,8 @@ static int update_classid_sock(const void *v, struct file *file, unsigned n)
+ 	struct update_classid_context *ctx = (void *)v;
+ 	struct socket *sock = sock_from_file(file, &err);
+ 
+-	if (sock) {
+-		spin_lock(&cgroup_sk_update_lock);
++	if (sock)
+ 		sock_cgroup_set_classid(&sock->sk->sk_cgrp_data, ctx->classid);
+-		spin_unlock(&cgroup_sk_update_lock);
+-	}
+ 	if (--ctx->batch == 0) {
+ 		ctx->batch = UPDATE_CLASSID_BATCH;
+ 		return n + 1;
+@@ -122,8 +119,6 @@ static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft,
+ 	struct css_task_iter it;
+ 	struct task_struct *p;
+ 
+-	cgroup_sk_alloc_disable();
+-
+ 	cs->classid = (u32)value;
+ 
+ 	css_task_iter_start(css, 0, &it);
+diff --git a/net/core/netprio_cgroup.c b/net/core/netprio_cgroup.c
+index 9bd4cab7d510f..d4c71e382a13f 100644
+--- a/net/core/netprio_cgroup.c
++++ b/net/core/netprio_cgroup.c
+@@ -207,8 +207,6 @@ static ssize_t write_priomap(struct kernfs_open_file *of,
+ 	if (!dev)
+ 		return -ENODEV;
+ 
+-	cgroup_sk_alloc_disable();
+-
+ 	rtnl_lock();
+ 
+ 	ret = netprio_set_prio(of_css(of), dev, prio);
+@@ -222,12 +220,10 @@ static int update_netprio(const void *v, struct file *file, unsigned n)
+ {
+ 	int err;
+ 	struct socket *sock = sock_from_file(file, &err);
+-	if (sock) {
+-		spin_lock(&cgroup_sk_update_lock);
++
++	if (sock)
+ 		sock_cgroup_set_prioidx(&sock->sk->sk_cgrp_data,
+ 					(unsigned long)v);
+-		spin_unlock(&cgroup_sk_update_lock);
+-	}
+ 	return 0;
+ }
+ 
+@@ -236,8 +232,6 @@ static void net_prio_attach(struct cgroup_taskset *tset)
+ 	struct task_struct *p;
+ 	struct cgroup_subsys_state *css;
+ 
+-	cgroup_sk_alloc_disable();
+-
+ 	cgroup_taskset_for_each(p, css, tset) {
+ 		void *v = (void *)(unsigned long)css->id;
+ 
+diff --git a/net/ethernet/eth.c b/net/ethernet/eth.c
+index 61cb40368723c..081390c32707d 100644
+--- a/net/ethernet/eth.c
++++ b/net/ethernet/eth.c
+@@ -430,19 +430,16 @@ struct sk_buff *eth_gro_receive(struct list_head *head, struct sk_buff *skb)
+ 
+ 	type = eh->h_proto;
+ 
+-	rcu_read_lock();
+ 	ptype = gro_find_receive_by_type(type);
+ 	if (ptype == NULL) {
+ 		flush = 1;
+-		goto out_unlock;
++		goto out;
+ 	}
+ 
+ 	skb_gro_pull(skb, sizeof(*eh));
+ 	skb_gro_postpull_rcsum(skb, eh, sizeof(*eh));
+ 	pp = call_gro_receive(ptype->callbacks.gro_receive, head, skb);
+ 
+-out_unlock:
+-	rcu_read_unlock();
+ out:
+ 	skb_gro_flush_final(skb, pp, flush);
+ 
+@@ -460,13 +457,11 @@ int eth_gro_complete(struct sk_buff *skb, int nhoff)
+ 	if (skb->encapsulation)
+ 		skb_set_inner_mac_header(skb, nhoff);
+ 
+-	rcu_read_lock();
+ 	ptype = gro_find_complete_by_type(type);
+ 	if (ptype != NULL)
+ 		err = ptype->callbacks.gro_complete(skb, nhoff +
+ 						    sizeof(struct ethhdr));
+ 
+-	rcu_read_unlock();
+ 	return err;
+ }
+ EXPORT_SYMBOL(eth_gro_complete);
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index ce42626663de6..58dfca09093c2 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1471,19 +1471,18 @@ struct sk_buff *inet_gro_receive(struct list_head *head, struct sk_buff *skb)
+ 
+ 	proto = iph->protocol;
+ 
+-	rcu_read_lock();
+ 	ops = rcu_dereference(inet_offloads[proto]);
+ 	if (!ops || !ops->callbacks.gro_receive)
+-		goto out_unlock;
++		goto out;
+ 
+ 	if (*(u8 *)iph != 0x45)
+-		goto out_unlock;
++		goto out;
+ 
+ 	if (ip_is_fragment(iph))
+-		goto out_unlock;
++		goto out;
+ 
+ 	if (unlikely(ip_fast_csum((u8 *)iph, 5)))
+-		goto out_unlock;
++		goto out;
+ 
+ 	id = ntohl(*(__be32 *)&iph->id);
+ 	flush = (u16)((ntohl(*(__be32 *)iph) ^ skb_gro_len(skb)) | (id & ~IP_DF));
+@@ -1560,9 +1559,6 @@ struct sk_buff *inet_gro_receive(struct list_head *head, struct sk_buff *skb)
+ 	pp = indirect_call_gro_receive(tcp4_gro_receive, udp4_gro_receive,
+ 				       ops->callbacks.gro_receive, head, skb);
+ 
+-out_unlock:
+-	rcu_read_unlock();
+-
+ out:
+ 	skb_gro_flush_final(skb, pp, flush);
+ 
+@@ -1638,10 +1634,9 @@ int inet_gro_complete(struct sk_buff *skb, int nhoff)
+ 	csum_replace2(&iph->check, iph->tot_len, newlen);
+ 	iph->tot_len = newlen;
+ 
+-	rcu_read_lock();
+ 	ops = rcu_dereference(inet_offloads[proto]);
+ 	if (WARN_ON(!ops || !ops->callbacks.gro_complete))
+-		goto out_unlock;
++		goto out;
+ 
+ 	/* Only need to add sizeof(*iph) to get to the next hdr below
+ 	 * because any hdr with option will have been flushed in
+@@ -1651,9 +1646,7 @@ int inet_gro_complete(struct sk_buff *skb, int nhoff)
+ 			      tcp4_gro_complete, udp4_gro_complete,
+ 			      skb, nhoff + sizeof(*iph));
+ 
+-out_unlock:
+-	rcu_read_unlock();
+-
++out:
+ 	return err;
+ }
+ EXPORT_SYMBOL(inet_gro_complete);
+diff --git a/net/ipv4/fou.c b/net/ipv4/fou.c
+index e5f69b0bf3df5..135da756dd5ab 100644
+--- a/net/ipv4/fou.c
++++ b/net/ipv4/fou.c
+@@ -48,7 +48,7 @@ struct fou_net {
+ 
+ static inline struct fou *fou_from_sock(struct sock *sk)
+ {
+-	return sk->sk_user_data;
++	return rcu_dereference_sk_user_data(sk);
+ }
+ 
+ static int fou_recv_pull(struct sk_buff *skb, struct fou *fou, size_t len)
+@@ -230,10 +230,16 @@ static struct sk_buff *fou_gro_receive(struct sock *sk,
+ 				       struct list_head *head,
+ 				       struct sk_buff *skb)
+ {
+-	u8 proto = fou_from_sock(sk)->protocol;
+-	const struct net_offload **offloads;
++	const struct net_offload __rcu **offloads;
++	struct fou *fou = fou_from_sock(sk);
+ 	const struct net_offload *ops;
+ 	struct sk_buff *pp = NULL;
++	u8 proto;
++
++	if (!fou)
++		goto out;
++
++	proto = fou->protocol;
+ 
+ 	/* We can clear the encap_mark for FOU as we are essentially doing
+ 	 * one of two possible things.  We are either adding an L4 tunnel
+@@ -246,41 +252,45 @@ static struct sk_buff *fou_gro_receive(struct sock *sk,
+ 	/* Flag this frame as already having an outer encap header */
+ 	NAPI_GRO_CB(skb)->is_fou = 1;
+ 
+-	rcu_read_lock();
+ 	offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
+ 	ops = rcu_dereference(offloads[proto]);
+ 	if (!ops || !ops->callbacks.gro_receive)
+-		goto out_unlock;
++		goto out;
+ 
+ 	pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+ 
+-out_unlock:
+-	rcu_read_unlock();
+-
++out:
+ 	return pp;
+ }
+ 
+ static int fou_gro_complete(struct sock *sk, struct sk_buff *skb,
+ 			    int nhoff)
+ {
++	const struct net_offload __rcu **offloads;
++	struct fou *fou = fou_from_sock(sk);
+ 	const struct net_offload *ops;
+-	u8 proto = fou_from_sock(sk)->protocol;
+-	int err = -ENOSYS;
+-	const struct net_offload **offloads;
++	u8 proto;
++	int err;
++
++	if (!fou) {
++		err = -ENOENT;
++		goto out;
++	}
++
++	proto = fou->protocol;
+ 
+-	rcu_read_lock();
+ 	offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
+ 	ops = rcu_dereference(offloads[proto]);
+-	if (WARN_ON(!ops || !ops->callbacks.gro_complete))
+-		goto out_unlock;
++	if (WARN_ON(!ops || !ops->callbacks.gro_complete)) {
++		err = -ENOSYS;
++		goto out;
++	}
+ 
+ 	err = ops->callbacks.gro_complete(skb, nhoff);
+ 
+ 	skb_set_inner_mac_header(skb, nhoff);
+ 
+-out_unlock:
+-	rcu_read_unlock();
+-
++out:
+ 	return err;
+ }
+ 
+@@ -311,7 +321,7 @@ static struct sk_buff *gue_gro_receive(struct sock *sk,
+ 				       struct list_head *head,
+ 				       struct sk_buff *skb)
+ {
+-	const struct net_offload **offloads;
++	const struct net_offload __rcu **offloads;
+ 	const struct net_offload *ops;
+ 	struct sk_buff *pp = NULL;
+ 	struct sk_buff *p;
+@@ -324,6 +334,9 @@ static struct sk_buff *gue_gro_receive(struct sock *sk,
+ 	struct gro_remcsum grc;
+ 	u8 proto;
+ 
++	if (!fou)
++		goto out;
++
+ 	skb_gro_remcsum_init(&grc);
+ 
+ 	off = skb_gro_offset(skb);
+@@ -438,17 +451,14 @@ static struct sk_buff *gue_gro_receive(struct sock *sk,
+ 	/* Flag this frame as already having an outer encap header */
+ 	NAPI_GRO_CB(skb)->is_fou = 1;
+ 
+-	rcu_read_lock();
+ 	offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
+ 	ops = rcu_dereference(offloads[proto]);
+ 	if (WARN_ON_ONCE(!ops || !ops->callbacks.gro_receive))
+-		goto out_unlock;
++		goto out;
+ 
+ 	pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+ 	flush = 0;
+ 
+-out_unlock:
+-	rcu_read_unlock();
+ out:
+ 	skb_gro_flush_final_remcsum(skb, pp, flush, &grc);
+ 
+@@ -457,8 +467,8 @@ static struct sk_buff *gue_gro_receive(struct sock *sk,
+ 
+ static int gue_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff)
+ {
+-	const struct net_offload **offloads;
+ 	struct guehdr *guehdr = (struct guehdr *)(skb->data + nhoff);
++	const struct net_offload __rcu **offloads;
+ 	const struct net_offload *ops;
+ 	unsigned int guehlen = 0;
+ 	u8 proto;
+@@ -485,18 +495,16 @@ static int gue_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff)
+ 		return err;
+ 	}
+ 
+-	rcu_read_lock();
+ 	offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
+ 	ops = rcu_dereference(offloads[proto]);
+ 	if (WARN_ON(!ops || !ops->callbacks.gro_complete))
+-		goto out_unlock;
++		goto out;
+ 
+ 	err = ops->callbacks.gro_complete(skb, nhoff + guehlen);
+ 
+ 	skb_set_inner_mac_header(skb, nhoff + guehlen);
+ 
+-out_unlock:
+-	rcu_read_unlock();
++out:
+ 	return err;
+ }
+ 
+diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c
+index e0a2465758872..b4da692b97342 100644
+--- a/net/ipv4/gre_offload.c
++++ b/net/ipv4/gre_offload.c
+@@ -158,10 +158,9 @@ static struct sk_buff *gre_gro_receive(struct list_head *head,
+ 
+ 	type = greh->protocol;
+ 
+-	rcu_read_lock();
+ 	ptype = gro_find_receive_by_type(type);
+ 	if (!ptype)
+-		goto out_unlock;
++		goto out;
+ 
+ 	grehlen = GRE_HEADER_SECTION;
+ 
+@@ -175,13 +174,13 @@ static struct sk_buff *gre_gro_receive(struct list_head *head,
+ 	if (skb_gro_header_hard(skb, hlen)) {
+ 		greh = skb_gro_header_slow(skb, hlen, off);
+ 		if (unlikely(!greh))
+-			goto out_unlock;
++			goto out;
+ 	}
+ 
+ 	/* Don't bother verifying checksum if we're going to flush anyway. */
+ 	if ((greh->flags & GRE_CSUM) && !NAPI_GRO_CB(skb)->flush) {
+ 		if (skb_gro_checksum_simple_validate(skb))
+-			goto out_unlock;
++			goto out;
+ 
+ 		skb_gro_checksum_try_convert(skb, IPPROTO_GRE,
+ 					     null_compute_pseudo);
+@@ -225,8 +224,6 @@ static struct sk_buff *gre_gro_receive(struct list_head *head,
+ 	pp = call_gro_receive(ptype->callbacks.gro_receive, head, skb);
+ 	flush = 0;
+ 
+-out_unlock:
+-	rcu_read_unlock();
+ out:
+ 	skb_gro_flush_final(skb, pp, flush);
+ 
+@@ -251,13 +248,10 @@ static int gre_gro_complete(struct sk_buff *skb, int nhoff)
+ 	if (greh->flags & GRE_CSUM)
+ 		grehlen += GRE_HEADER_SECTION;
+ 
+-	rcu_read_lock();
+ 	ptype = gro_find_complete_by_type(type);
+ 	if (ptype)
+ 		err = ptype->callbacks.gro_complete(skb, nhoff + grehlen);
+ 
+-	rcu_read_unlock();
+-
+ 	skb_set_inner_mac_header(skb, nhoff + grehlen);
+ 
+ 	return err;
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 56deddeac1b0e..0fb5d758264fe 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -653,6 +653,7 @@ int __inet_hash(struct sock *sk, struct sock *osk)
+ 		if (err)
+ 			goto unlock;
+ 	}
++	sock_set_flag(sk, SOCK_RCU_FREE);
+ 	if (IS_ENABLED(CONFIG_IPV6) && sk->sk_reuseport &&
+ 		sk->sk_family == AF_INET6)
+ 		__sk_nulls_add_node_tail_rcu(sk, &ilb->nulls_head);
+@@ -660,7 +661,6 @@ int __inet_hash(struct sock *sk, struct sock *osk)
+ 		__sk_nulls_add_node_rcu(sk, &ilb->nulls_head);
+ 	inet_hash2(hashinfo, sk);
+ 	ilb->count++;
+-	sock_set_flag(sk, SOCK_RCU_FREE);
+ 	sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+ unlock:
+ 	spin_unlock(&ilb->lock);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index f909e440bb226..ade27d63655c2 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -511,7 +511,7 @@ static int tcp_bpf_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ 		err = sk_stream_error(sk, msg->msg_flags, err);
+ 	release_sock(sk);
+ 	sk_psock_put(sk, psock);
+-	return copied ? copied : err;
++	return copied > 0 ? copied : err;
+ }
+ 
+ static int tcp_bpf_sendpage(struct sock *sk, struct page *page, int offset,
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index a0b569d0085bc..6e36eb1ba2763 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -149,8 +149,8 @@ struct sk_buff *skb_udp_tunnel_segment(struct sk_buff *skb,
+ 				       netdev_features_t features,
+ 				       bool is_ipv6)
+ {
++	const struct net_offload __rcu **offloads;
+ 	__be16 protocol = skb->protocol;
+-	const struct net_offload **offloads;
+ 	const struct net_offload *ops;
+ 	struct sk_buff *segs = ERR_PTR(-EINVAL);
+ 	struct sk_buff *(*gso_inner_segment)(struct sk_buff *skb,
+@@ -606,13 +606,11 @@ struct sk_buff *udp4_gro_receive(struct list_head *head, struct sk_buff *skb)
+ 					     inet_gro_compute_pseudo);
+ skip:
+ 	NAPI_GRO_CB(skb)->is_ipv6 = 0;
+-	rcu_read_lock();
+ 
+ 	if (static_branch_unlikely(&udp_encap_needed_key))
+ 		sk = udp4_gro_lookup_skb(skb, uh->source, uh->dest);
+ 
+ 	pp = udp_gro_receive(head, skb, uh, sk);
+-	rcu_read_unlock();
+ 	return pp;
+ 
+ flush:
+@@ -647,7 +645,6 @@ int udp_gro_complete(struct sk_buff *skb, int nhoff,
+ 
+ 	uh->len = newlen;
+ 
+-	rcu_read_lock();
+ 	sk = INDIRECT_CALL_INET(lookup, udp6_lib_lookup_skb,
+ 				udp4_lib_lookup_skb, skb, uh->source, uh->dest);
+ 	if (sk && udp_sk(sk)->gro_complete) {
+@@ -663,7 +660,6 @@ int udp_gro_complete(struct sk_buff *skb, int nhoff,
+ 	} else {
+ 		err = udp_gro_complete_segment(skb);
+ 	}
+-	rcu_read_unlock();
+ 
+ 	if (skb->remcsum_offload)
+ 		skb_shinfo(skb)->gso_type |= SKB_GSO_TUNNEL_REMCSUM;
+diff --git a/net/ipv6/ila/ila.h b/net/ipv6/ila/ila.h
+index ad5f6f6ba3330..85b92917849bf 100644
+--- a/net/ipv6/ila/ila.h
++++ b/net/ipv6/ila/ila.h
+@@ -108,6 +108,7 @@ int ila_lwt_init(void);
+ void ila_lwt_fini(void);
+ 
+ int ila_xlat_init_net(struct net *net);
++void ila_xlat_pre_exit_net(struct net *net);
+ void ila_xlat_exit_net(struct net *net);
+ 
+ int ila_xlat_nl_cmd_add_mapping(struct sk_buff *skb, struct genl_info *info);
+diff --git a/net/ipv6/ila/ila_main.c b/net/ipv6/ila/ila_main.c
+index 36c58aa257e88..a5b0365c5e48e 100644
+--- a/net/ipv6/ila/ila_main.c
++++ b/net/ipv6/ila/ila_main.c
+@@ -71,6 +71,11 @@ static __net_init int ila_init_net(struct net *net)
+ 	return err;
+ }
+ 
++static __net_exit void ila_pre_exit_net(struct net *net)
++{
++	ila_xlat_pre_exit_net(net);
++}
++
+ static __net_exit void ila_exit_net(struct net *net)
+ {
+ 	ila_xlat_exit_net(net);
+@@ -78,6 +83,7 @@ static __net_exit void ila_exit_net(struct net *net)
+ 
+ static struct pernet_operations ila_net_ops = {
+ 	.init = ila_init_net,
++	.pre_exit = ila_pre_exit_net,
+ 	.exit = ila_exit_net,
+ 	.id   = &ila_net_id,
+ 	.size = sizeof(struct ila_net),
+diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
+index 163668531a57f..1f7b674b7c58b 100644
+--- a/net/ipv6/ila/ila_xlat.c
++++ b/net/ipv6/ila/ila_xlat.c
+@@ -616,6 +616,15 @@ int ila_xlat_init_net(struct net *net)
+ 	return 0;
+ }
+ 
++void ila_xlat_pre_exit_net(struct net *net)
++{
++	struct ila_net *ilan = net_generic(net, ila_net_id);
++
++	if (ilan->xlat.hooks_registered)
++		nf_unregister_net_hooks(net, ila_nf_hook_ops,
++					ARRAY_SIZE(ila_nf_hook_ops));
++}
++
+ void ila_xlat_exit_net(struct net *net)
+ {
+ 	struct ila_net *ilan = net_generic(net, ila_net_id);
+@@ -623,10 +632,6 @@ void ila_xlat_exit_net(struct net *net)
+ 	rhashtable_free_and_destroy(&ilan->xlat.rhash_table, ila_free_cb, NULL);
+ 
+ 	free_bucket_spinlocks(ilan->xlat.locks);
+-
+-	if (ilan->xlat.hooks_registered)
+-		nf_unregister_net_hooks(net, ila_nf_hook_ops,
+-					ARRAY_SIZE(ila_nf_hook_ops));
+ }
+ 
+ static int ila_xlat_addr(struct sk_buff *skb, bool sir2ila)
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index 15c8eef1ef443..673f02ea62aae 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -209,7 +209,6 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *ipv6_gro_receive(struct list_head *head,
+ 
+ 	flush += ntohs(iph->payload_len) != skb_gro_len(skb);
+ 
+-	rcu_read_lock();
+ 	proto = iph->nexthdr;
+ 	ops = rcu_dereference(inet6_offloads[proto]);
+ 	if (!ops || !ops->callbacks.gro_receive) {
+@@ -222,7 +221,7 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *ipv6_gro_receive(struct list_head *head,
+ 
+ 		ops = rcu_dereference(inet6_offloads[proto]);
+ 		if (!ops || !ops->callbacks.gro_receive)
+-			goto out_unlock;
++			goto out;
+ 
+ 		iph = ipv6_hdr(skb);
+ 	}
+@@ -280,9 +279,6 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *ipv6_gro_receive(struct list_head *head,
+ 	pp = indirect_call_gro_receive_l4(tcp6_gro_receive, udp6_gro_receive,
+ 					 ops->callbacks.gro_receive, head, skb);
+ 
+-out_unlock:
+-	rcu_read_unlock();
+-
+ out:
+ 	skb_gro_flush_final(skb, pp, flush);
+ 
+@@ -332,18 +328,14 @@ INDIRECT_CALLABLE_SCOPE int ipv6_gro_complete(struct sk_buff *skb, int nhoff)
+ 
+ 	iph->payload_len = htons(skb->len - nhoff - sizeof(*iph));
+ 
+-	rcu_read_lock();
+-
+ 	nhoff += sizeof(*iph) + ipv6_exthdrs_len(iph, &ops);
+ 	if (WARN_ON(!ops || !ops->callbacks.gro_complete))
+-		goto out_unlock;
++		goto out;
+ 
+ 	err = INDIRECT_CALL_L4(ops->callbacks.gro_complete, tcp6_gro_complete,
+ 			       udp6_gro_complete, skb, nhoff);
+ 
+-out_unlock:
+-	rcu_read_unlock();
+-
++out:
+ 	return err;
+ }
+ 
+diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c
+index 7752e1e921f8f..1107782c083d5 100644
+--- a/net/ipv6/udp_offload.c
++++ b/net/ipv6/udp_offload.c
+@@ -144,13 +144,11 @@ struct sk_buff *udp6_gro_receive(struct list_head *head, struct sk_buff *skb)
+ 
+ skip:
+ 	NAPI_GRO_CB(skb)->is_ipv6 = 1;
+-	rcu_read_lock();
+ 
+ 	if (static_branch_unlikely(&udpv6_encap_needed_key))
+ 		sk = udp6_gro_lookup_skb(skb, uh->source, uh->dest);
+ 
+ 	pp = udp_gro_receive(head, skb, uh, sk);
+-	rcu_read_unlock();
+ 	return pp;
+ 
+ flush:
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index f7a91266d5a9c..9b11396552dfc 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -96,7 +96,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 			mp_opt->data_len = get_unaligned_be16(ptr);
+ 			ptr += 2;
+ 		}
+-		pr_debug("MP_CAPABLE version=%x, flags=%x, optlen=%d sndr=%llu, rcvr=%llu len=%d",
++		pr_debug("MP_CAPABLE version=%x, flags=%x, optlen=%d sndr=%llu, rcvr=%llu len=%d\n",
+ 			 version, flags, opsize, mp_opt->sndr_key,
+ 			 mp_opt->rcvr_key, mp_opt->data_len);
+ 		break;
+@@ -110,7 +110,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 			ptr += 4;
+ 			mp_opt->nonce = get_unaligned_be32(ptr);
+ 			ptr += 4;
+-			pr_debug("MP_JOIN bkup=%u, id=%u, token=%u, nonce=%u",
++			pr_debug("MP_JOIN bkup=%u, id=%u, token=%u, nonce=%u\n",
+ 				 mp_opt->backup, mp_opt->join_id,
+ 				 mp_opt->token, mp_opt->nonce);
+ 		} else if (opsize == TCPOLEN_MPTCP_MPJ_SYNACK) {
+@@ -120,20 +120,20 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 			ptr += 8;
+ 			mp_opt->nonce = get_unaligned_be32(ptr);
+ 			ptr += 4;
+-			pr_debug("MP_JOIN bkup=%u, id=%u, thmac=%llu, nonce=%u",
++			pr_debug("MP_JOIN bkup=%u, id=%u, thmac=%llu, nonce=%u\n",
+ 				 mp_opt->backup, mp_opt->join_id,
+ 				 mp_opt->thmac, mp_opt->nonce);
+ 		} else if (opsize == TCPOLEN_MPTCP_MPJ_ACK) {
+ 			ptr += 2;
+ 			memcpy(mp_opt->hmac, ptr, MPTCPOPT_HMAC_LEN);
+-			pr_debug("MP_JOIN hmac");
++			pr_debug("MP_JOIN hmac\n");
+ 		} else {
+ 			mp_opt->mp_join = 0;
+ 		}
+ 		break;
+ 
+ 	case MPTCPOPT_DSS:
+-		pr_debug("DSS");
++		pr_debug("DSS\n");
+ 		ptr++;
+ 
+ 		/* we must clear 'mpc_map' be able to detect MP_CAPABLE
+@@ -148,7 +148,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 		mp_opt->ack64 = (flags & MPTCP_DSS_ACK64) != 0;
+ 		mp_opt->use_ack = (flags & MPTCP_DSS_HAS_ACK);
+ 
+-		pr_debug("data_fin=%d dsn64=%d use_map=%d ack64=%d use_ack=%d",
++		pr_debug("data_fin=%d dsn64=%d use_map=%d ack64=%d use_ack=%d\n",
+ 			 mp_opt->data_fin, mp_opt->dsn64,
+ 			 mp_opt->use_map, mp_opt->ack64,
+ 			 mp_opt->use_ack);
+@@ -189,7 +189,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 				ptr += 4;
+ 			}
+ 
+-			pr_debug("data_ack=%llu", mp_opt->data_ack);
++			pr_debug("data_ack=%llu\n", mp_opt->data_ack);
+ 		}
+ 
+ 		if (mp_opt->use_map) {
+@@ -207,7 +207,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 			mp_opt->data_len = get_unaligned_be16(ptr);
+ 			ptr += 2;
+ 
+-			pr_debug("data_seq=%llu subflow_seq=%u data_len=%u",
++			pr_debug("data_seq=%llu subflow_seq=%u data_len=%u\n",
+ 				 mp_opt->data_seq, mp_opt->subflow_seq,
+ 				 mp_opt->data_len);
+ 		}
+@@ -242,7 +242,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 
+ 		mp_opt->add_addr = 1;
+ 		mp_opt->addr_id = *ptr++;
+-		pr_debug("ADD_ADDR: id=%d, echo=%d", mp_opt->addr_id, mp_opt->echo);
++		pr_debug("ADD_ADDR: id=%d, echo=%d\n", mp_opt->addr_id, mp_opt->echo);
+ 		if (mp_opt->family == MPTCP_ADDR_IPVERSION_4) {
+ 			memcpy((u8 *)&mp_opt->addr.s_addr, (u8 *)ptr, 4);
+ 			ptr += 4;
+@@ -277,7 +277,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 
+ 		mp_opt->rm_addr = 1;
+ 		mp_opt->rm_id = *ptr++;
+-		pr_debug("RM_ADDR: id=%d", mp_opt->rm_id);
++		pr_debug("RM_ADDR: id=%d\n", mp_opt->rm_id);
+ 		break;
+ 
+ 	default:
+@@ -344,7 +344,7 @@ bool mptcp_syn_options(struct sock *sk, const struct sk_buff *skb,
+ 		*size = TCPOLEN_MPTCP_MPC_SYN;
+ 		return true;
+ 	} else if (subflow->request_join) {
+-		pr_debug("remote_token=%u, nonce=%u", subflow->remote_token,
++		pr_debug("remote_token=%u, nonce=%u\n", subflow->remote_token,
+ 			 subflow->local_nonce);
+ 		opts->suboptions = OPTION_MPTCP_MPJ_SYN;
+ 		opts->join_id = subflow->local_id;
+@@ -436,7 +436,7 @@ static bool mptcp_established_options_mp(struct sock *sk, struct sk_buff *skb,
+ 		else
+ 			*size = TCPOLEN_MPTCP_MPC_ACK;
+ 
+-		pr_debug("subflow=%p, local_key=%llu, remote_key=%llu map_len=%d",
++		pr_debug("subflow=%p, local_key=%llu, remote_key=%llu map_len=%d\n",
+ 			 subflow, subflow->local_key, subflow->remote_key,
+ 			 data_len);
+ 
+@@ -445,7 +445,7 @@ static bool mptcp_established_options_mp(struct sock *sk, struct sk_buff *skb,
+ 		opts->suboptions = OPTION_MPTCP_MPJ_ACK;
+ 		memcpy(opts->hmac, subflow->hmac, MPTCPOPT_HMAC_LEN);
+ 		*size = TCPOLEN_MPTCP_MPJ_ACK;
+-		pr_debug("subflow=%p", subflow);
++		pr_debug("subflow=%p\n", subflow);
+ 
+ 		schedule_3rdack_retransmission(sk);
+ 		return true;
+@@ -619,7 +619,7 @@ static bool mptcp_established_options_add_addr(struct sock *sk,
+ 		}
+ 	}
+ #endif
+-	pr_debug("addr_id=%d, ahmac=%llu, echo=%d", opts->addr_id, opts->ahmac, echo);
++	pr_debug("addr_id=%d, ahmac=%llu, echo=%d\n", opts->addr_id, opts->ahmac, echo);
+ 
+ 	return true;
+ }
+@@ -644,7 +644,7 @@ static bool mptcp_established_options_rm_addr(struct sock *sk,
+ 	opts->suboptions |= OPTION_MPTCP_RM_ADDR;
+ 	opts->rm_id = rm_id;
+ 
+-	pr_debug("rm_id=%d", opts->rm_id);
++	pr_debug("rm_id=%d\n", opts->rm_id);
+ 
+ 	return true;
+ }
+@@ -703,7 +703,7 @@ bool mptcp_synack_options(const struct request_sock *req, unsigned int *size,
+ 		opts->suboptions = OPTION_MPTCP_MPC_SYNACK;
+ 		opts->sndr_key = subflow_req->local_key;
+ 		*size = TCPOLEN_MPTCP_MPC_SYNACK;
+-		pr_debug("subflow_req=%p, local_key=%llu",
++		pr_debug("subflow_req=%p, local_key=%llu\n",
+ 			 subflow_req, subflow_req->local_key);
+ 		return true;
+ 	} else if (subflow_req->mp_join) {
+@@ -712,7 +712,7 @@ bool mptcp_synack_options(const struct request_sock *req, unsigned int *size,
+ 		opts->join_id = subflow_req->local_id;
+ 		opts->thmac = subflow_req->thmac;
+ 		opts->nonce = subflow_req->local_nonce;
+-		pr_debug("req=%p, bkup=%u, id=%u, thmac=%llu, nonce=%u",
++		pr_debug("req=%p, bkup=%u, id=%u, thmac=%llu, nonce=%u\n",
+ 			 subflow_req, opts->backup, opts->join_id,
+ 			 opts->thmac, opts->nonce);
+ 		*size = TCPOLEN_MPTCP_MPJ_SYNACK;
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index 1f310abbf1ede..a8c26f4179004 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -16,7 +16,7 @@ int mptcp_pm_announce_addr(struct mptcp_sock *msk,
+ 			   const struct mptcp_addr_info *addr,
+ 			   bool echo)
+ {
+-	pr_debug("msk=%p, local_id=%d", msk, addr->id);
++	pr_debug("msk=%p, local_id=%d\n", msk, addr->id);
+ 
+ 	msk->pm.local = *addr;
+ 	WRITE_ONCE(msk->pm.add_addr_echo, echo);
+@@ -26,7 +26,7 @@ int mptcp_pm_announce_addr(struct mptcp_sock *msk,
+ 
+ int mptcp_pm_remove_addr(struct mptcp_sock *msk, u8 local_id)
+ {
+-	pr_debug("msk=%p, local_id=%d", msk, local_id);
++	pr_debug("msk=%p, local_id=%d\n", msk, local_id);
+ 
+ 	msk->pm.rm_id = local_id;
+ 	WRITE_ONCE(msk->pm.rm_addr_signal, true);
+@@ -35,7 +35,7 @@ int mptcp_pm_remove_addr(struct mptcp_sock *msk, u8 local_id)
+ 
+ int mptcp_pm_remove_subflow(struct mptcp_sock *msk, u8 local_id)
+ {
+-	pr_debug("msk=%p, local_id=%d", msk, local_id);
++	pr_debug("msk=%p, local_id=%d\n", msk, local_id);
+ 
+ 	spin_lock_bh(&msk->pm.lock);
+ 	mptcp_pm_nl_rm_subflow_received(msk, local_id);
+@@ -49,7 +49,7 @@ void mptcp_pm_new_connection(struct mptcp_sock *msk, int server_side)
+ {
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 
+-	pr_debug("msk=%p, token=%u side=%d", msk, msk->token, server_side);
++	pr_debug("msk=%p, token=%u side=%d\n", msk, msk->token, server_side);
+ 
+ 	WRITE_ONCE(pm->server_side, server_side);
+ }
+@@ -59,7 +59,7 @@ bool mptcp_pm_allow_new_subflow(struct mptcp_sock *msk)
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 	int ret = 0;
+ 
+-	pr_debug("msk=%p subflows=%d max=%d allow=%d", msk, pm->subflows,
++	pr_debug("msk=%p subflows=%d max=%d allow=%d\n", msk, pm->subflows,
+ 		 pm->subflows_max, READ_ONCE(pm->accept_subflow));
+ 
+ 	/* try to avoid acquiring the lock below */
+@@ -83,7 +83,7 @@ bool mptcp_pm_allow_new_subflow(struct mptcp_sock *msk)
+ static bool mptcp_pm_schedule_work(struct mptcp_sock *msk,
+ 				   enum mptcp_pm_status new_status)
+ {
+-	pr_debug("msk=%p status=%x new=%lx", msk, msk->pm.status,
++	pr_debug("msk=%p status=%x new=%lx\n", msk, msk->pm.status,
+ 		 BIT(new_status));
+ 	if (msk->pm.status & BIT(new_status))
+ 		return false;
+@@ -98,7 +98,7 @@ void mptcp_pm_fully_established(struct mptcp_sock *msk)
+ {
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	/* try to avoid acquiring the lock below */
+ 	if (!READ_ONCE(pm->work_pending))
+@@ -114,7 +114,7 @@ void mptcp_pm_fully_established(struct mptcp_sock *msk)
+ 
+ void mptcp_pm_connection_closed(struct mptcp_sock *msk)
+ {
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ }
+ 
+ void mptcp_pm_subflow_established(struct mptcp_sock *msk,
+@@ -122,7 +122,7 @@ void mptcp_pm_subflow_established(struct mptcp_sock *msk,
+ {
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	if (!READ_ONCE(pm->work_pending))
+ 		return;
+@@ -137,7 +137,7 @@ void mptcp_pm_subflow_established(struct mptcp_sock *msk,
+ 
+ void mptcp_pm_subflow_closed(struct mptcp_sock *msk, u8 id)
+ {
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ }
+ 
+ void mptcp_pm_add_addr_received(struct mptcp_sock *msk,
+@@ -145,7 +145,7 @@ void mptcp_pm_add_addr_received(struct mptcp_sock *msk,
+ {
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 
+-	pr_debug("msk=%p remote_id=%d accept=%d", msk, addr->id,
++	pr_debug("msk=%p remote_id=%d accept=%d\n", msk, addr->id,
+ 		 READ_ONCE(pm->accept_addr));
+ 
+ 	spin_lock_bh(&pm->lock);
+@@ -162,7 +162,7 @@ void mptcp_pm_rm_addr_received(struct mptcp_sock *msk, u8 rm_id)
+ {
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 
+-	pr_debug("msk=%p remote_id=%d", msk, rm_id);
++	pr_debug("msk=%p remote_id=%d\n", msk, rm_id);
+ 
+ 	spin_lock_bh(&pm->lock);
+ 	mptcp_pm_schedule_work(msk, MPTCP_PM_RM_ADDR_RECEIVED);
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index ca57d856d5df5..f115c92c45d4a 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -127,11 +127,13 @@ static bool lookup_subflow_by_saddr(const struct list_head *list,
+ 	return false;
+ }
+ 
+-static struct mptcp_pm_addr_entry *
++static bool
+ select_local_address(const struct pm_nl_pernet *pernet,
+-		     struct mptcp_sock *msk)
++		     struct mptcp_sock *msk,
++		     struct mptcp_pm_addr_entry *new_entry)
+ {
+-	struct mptcp_pm_addr_entry *entry, *ret = NULL;
++	struct mptcp_pm_addr_entry *entry;
++	bool found = false;
+ 
+ 	rcu_read_lock();
+ 	spin_lock_bh(&msk->join_list_lock);
+@@ -145,19 +147,23 @@ select_local_address(const struct pm_nl_pernet *pernet,
+ 		if (entry->addr.family == ((struct sock *)msk)->sk_family &&
+ 		    !lookup_subflow_by_saddr(&msk->conn_list, &entry->addr) &&
+ 		    !lookup_subflow_by_saddr(&msk->join_list, &entry->addr)) {
+-			ret = entry;
++			*new_entry = *entry;
++			found = true;
+ 			break;
+ 		}
+ 	}
+ 	spin_unlock_bh(&msk->join_list_lock);
+ 	rcu_read_unlock();
+-	return ret;
++
++	return found;
+ }
+ 
+-static struct mptcp_pm_addr_entry *
+-select_signal_address(struct pm_nl_pernet *pernet, unsigned int pos)
++static bool
++select_signal_address(struct pm_nl_pernet *pernet, unsigned int pos,
++		      struct mptcp_pm_addr_entry *new_entry)
+ {
+-	struct mptcp_pm_addr_entry *entry, *ret = NULL;
++	struct mptcp_pm_addr_entry *entry;
++	bool found = false;
+ 	int i = 0;
+ 
+ 	rcu_read_lock();
+@@ -170,12 +176,14 @@ select_signal_address(struct pm_nl_pernet *pernet, unsigned int pos)
+ 		if (!(entry->addr.flags & MPTCP_PM_ADDR_FLAG_SIGNAL))
+ 			continue;
+ 		if (i++ == pos) {
+-			ret = entry;
++			*new_entry = *entry;
++			found = true;
+ 			break;
+ 		}
+ 	}
+ 	rcu_read_unlock();
+-	return ret;
++
++	return found;
+ }
+ 
+ static void check_work_pending(struct mptcp_sock *msk)
+@@ -206,7 +214,7 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
+ 	struct mptcp_sock *msk = entry->sock;
+ 	struct sock *sk = (struct sock *)msk;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	if (!msk)
+ 		return;
+@@ -225,7 +233,7 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
+ 	spin_lock_bh(&msk->pm.lock);
+ 
+ 	if (!mptcp_pm_should_add_signal(msk)) {
+-		pr_debug("retransmit ADD_ADDR id=%d", entry->addr.id);
++		pr_debug("retransmit ADD_ADDR id=%d\n", entry->addr.id);
+ 		mptcp_pm_announce_addr(msk, &entry->addr, false);
+ 		entry->retrans_times++;
+ 	}
+@@ -289,7 +297,7 @@ void mptcp_pm_free_anno_list(struct mptcp_sock *msk)
+ 	struct sock *sk = (struct sock *)msk;
+ 	LIST_HEAD(free_list);
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	spin_lock_bh(&msk->pm.lock);
+ 	list_splice_init(&msk->pm.anno_list, &free_list);
+@@ -305,7 +313,7 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ {
+ 	struct mptcp_addr_info remote = { 0 };
+ 	struct sock *sk = (struct sock *)msk;
+-	struct mptcp_pm_addr_entry *local;
++	struct mptcp_pm_addr_entry local;
+ 	struct pm_nl_pernet *pernet;
+ 
+ 	pernet = net_generic(sock_net((struct sock *)msk), pm_nl_pernet_id);
+@@ -317,13 +325,11 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ 
+ 	/* check first for announce */
+ 	if (msk->pm.add_addr_signaled < msk->pm.add_addr_signal_max) {
+-		local = select_signal_address(pernet,
+-					      msk->pm.add_addr_signaled);
+-
+-		if (local) {
+-			if (mptcp_pm_alloc_anno_list(msk, local)) {
++		if (select_signal_address(pernet, msk->pm.add_addr_signaled,
++					  &local)) {
++			if (mptcp_pm_alloc_anno_list(msk, &local)) {
+ 				msk->pm.add_addr_signaled++;
+-				mptcp_pm_announce_addr(msk, &local->addr, false);
++				mptcp_pm_announce_addr(msk, &local.addr, false);
+ 			}
+ 		} else {
+ 			/* pick failed, avoid fourther attempts later */
+@@ -338,13 +344,12 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ 	    msk->pm.subflows < msk->pm.subflows_max) {
+ 		remote_address((struct sock_common *)sk, &remote);
+ 
+-		local = select_local_address(pernet, msk);
+-		if (local) {
++		if (select_local_address(pernet, msk, &local)) {
+ 			msk->pm.local_addr_used++;
+ 			msk->pm.subflows++;
+ 			check_work_pending(msk);
+ 			spin_unlock_bh(&msk->pm.lock);
+-			__mptcp_subflow_connect(sk, &local->addr, &remote);
++			__mptcp_subflow_connect(sk, &local.addr, &remote);
+ 			spin_lock_bh(&msk->pm.lock);
+ 			return;
+ 		}
+@@ -372,7 +377,7 @@ void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
+ 	struct mptcp_addr_info local;
+ 	int err;
+ 
+-	pr_debug("accepted %d:%d remote family %d",
++	pr_debug("accepted %d:%d remote family %d\n",
+ 		 msk->pm.add_addr_accepted, msk->pm.add_addr_accept_max,
+ 		 msk->pm.remote.family);
+ 	msk->pm.subflows++;
+@@ -405,7 +410,7 @@ void mptcp_pm_nl_rm_addr_received(struct mptcp_sock *msk)
+ 	struct mptcp_subflow_context *subflow, *tmp;
+ 	struct sock *sk = (struct sock *)msk;
+ 
+-	pr_debug("address rm_id %d", msk->pm.rm_id);
++	pr_debug("address rm_id %d\n", msk->pm.rm_id);
+ 
+ 	if (!msk->pm.rm_id)
+ 		return;
+@@ -441,7 +446,7 @@ void mptcp_pm_nl_rm_subflow_received(struct mptcp_sock *msk, u8 rm_id)
+ 	struct mptcp_subflow_context *subflow, *tmp;
+ 	struct sock *sk = (struct sock *)msk;
+ 
+-	pr_debug("subflow rm_id %d", rm_id);
++	pr_debug("subflow rm_id %d\n", rm_id);
+ 
+ 	if (!rm_id)
+ 		return;
+@@ -791,7 +796,7 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
+ 	struct mptcp_sock *msk;
+ 	long s_slot = 0, s_num = 0;
+ 
+-	pr_debug("remove_id=%d", addr->id);
++	pr_debug("remove_id=%d\n", addr->id);
+ 
+ 	while ((msk = mptcp_token_iter_next(net, &s_slot, &s_num)) != NULL) {
+ 		struct sock *sk = (struct sock *)msk;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 0ef6a99b62b0d..590e2c9bb67e2 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -128,7 +128,7 @@ static bool mptcp_try_coalesce(struct sock *sk, struct sk_buff *to,
+ 	    !skb_try_coalesce(to, from, &fragstolen, &delta))
+ 		return false;
+ 
+-	pr_debug("colesced seq %llx into %llx new len %d new end seq %llx",
++	pr_debug("colesced seq %llx into %llx new len %d new end seq %llx\n",
+ 		 MPTCP_SKB_CB(from)->map_seq, MPTCP_SKB_CB(to)->map_seq,
+ 		 to->len, MPTCP_SKB_CB(from)->end_seq);
+ 	MPTCP_SKB_CB(to)->end_seq = MPTCP_SKB_CB(from)->end_seq;
+@@ -164,7 +164,7 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
+ 	space = tcp_space(sk);
+ 	max_seq = space > 0 ? space + msk->ack_seq : msk->ack_seq;
+ 
+-	pr_debug("msk=%p seq=%llx limit=%llx empty=%d", msk, seq, max_seq,
++	pr_debug("msk=%p seq=%llx limit=%llx empty=%d\n", msk, seq, max_seq,
+ 		 RB_EMPTY_ROOT(&msk->out_of_order_queue));
+ 	if (after64(seq, max_seq)) {
+ 		/* out of window */
+@@ -469,7 +469,7 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
+ 	u32 old_copied_seq;
+ 	bool done = false;
+ 
+-	pr_debug("msk=%p ssk=%p", msk, ssk);
++	pr_debug("msk=%p ssk=%p\n", msk, ssk);
+ 	tp = tcp_sk(ssk);
+ 	old_copied_seq = tp->copied_seq;
+ 	do {
+@@ -552,7 +552,7 @@ static bool mptcp_ofo_queue(struct mptcp_sock *msk)
+ 	u64 end_seq;
+ 
+ 	p = rb_first(&msk->out_of_order_queue);
+-	pr_debug("msk=%p empty=%d", msk, RB_EMPTY_ROOT(&msk->out_of_order_queue));
++	pr_debug("msk=%p empty=%d\n", msk, RB_EMPTY_ROOT(&msk->out_of_order_queue));
+ 	while (p) {
+ 		skb = rb_to_skb(p);
+ 		if (after64(MPTCP_SKB_CB(skb)->map_seq, msk->ack_seq))
+@@ -574,7 +574,7 @@ static bool mptcp_ofo_queue(struct mptcp_sock *msk)
+ 			int delta = msk->ack_seq - MPTCP_SKB_CB(skb)->map_seq;
+ 
+ 			/* skip overlapping data, if any */
+-			pr_debug("uncoalesced seq=%llx ack seq=%llx delta=%d",
++			pr_debug("uncoalesced seq=%llx ack seq=%llx delta=%d\n",
+ 				 MPTCP_SKB_CB(skb)->map_seq, msk->ack_seq,
+ 				 delta);
+ 			MPTCP_SKB_CB(skb)->offset += delta;
+@@ -956,12 +956,12 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ 		psize = min_t(size_t, pfrag->size - offset, avail_size);
+ 
+ 		/* Copy to page */
+-		pr_debug("left=%zu", msg_data_left(msg));
++		pr_debug("left=%zu\n", msg_data_left(msg));
+ 		psize = copy_page_from_iter(pfrag->page, offset,
+ 					    min_t(size_t, msg_data_left(msg),
+ 						  psize),
+ 					    &msg->msg_iter);
+-		pr_debug("left=%zu", msg_data_left(msg));
++		pr_debug("left=%zu\n", msg_data_left(msg));
+ 		if (!psize)
+ 			return -EINVAL;
+ 
+@@ -1031,7 +1031,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ 	mpext->use_map = 1;
+ 	mpext->dsn64 = 1;
+ 
+-	pr_debug("data_seq=%llu subflow_seq=%u data_len=%u dsn64=%d",
++	pr_debug("data_seq=%llu subflow_seq=%u data_len=%u dsn64=%d\n",
+ 		 mpext->data_seq, mpext->subflow_seq, mpext->data_len,
+ 		 mpext->dsn64);
+ 
+@@ -1147,7 +1147,7 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk,
+ 		}
+ 	}
+ 
+-	pr_debug("msk=%p nr_active=%d ssk=%p:%lld backup=%p:%lld",
++	pr_debug("msk=%p nr_active=%d ssk=%p:%lld backup=%p:%lld\n",
+ 		 msk, nr_active, send_info[0].ssk, send_info[0].ratio,
+ 		 send_info[1].ssk, send_info[1].ratio);
+ 
+@@ -1240,7 +1240,7 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	    sndbuf > READ_ONCE(sk->sk_sndbuf))
+ 		WRITE_ONCE(sk->sk_sndbuf, sndbuf);
+ 
+-	pr_debug("conn_list->subflow=%p", ssk);
++	pr_debug("conn_list->subflow=%p\n", ssk);
+ 
+ 	lock_sock(ssk);
+ 	tx_ok = msg_data_left(msg);
+@@ -1577,7 +1577,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 			}
+ 		}
+ 
+-		pr_debug("block timeout %ld", timeo);
++		pr_debug("block timeout %ld\n", timeo);
+ 		mptcp_wait_data(sk, &timeo);
+ 	}
+ 
+@@ -1595,7 +1595,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 		set_bit(MPTCP_DATA_READY, &msk->flags);
+ 	}
+ out_err:
+-	pr_debug("msk=%p data_ready=%d rx queue empty=%d copied=%d",
++	pr_debug("msk=%p data_ready=%d rx queue empty=%d copied=%d\n",
+ 		 msk, test_bit(MPTCP_DATA_READY, &msk->flags),
+ 		 skb_queue_empty(&sk->sk_receive_queue), copied);
+ 	mptcp_rcv_space_adjust(msk, copied);
+@@ -1712,7 +1712,7 @@ static void pm_work(struct mptcp_sock *msk)
+ 
+ 	spin_lock_bh(&msk->pm.lock);
+ 
+-	pr_debug("msk=%p status=%x", msk, pm->status);
++	pr_debug("msk=%p status=%x\n", msk, pm->status);
+ 	if (pm->status & BIT(MPTCP_PM_ADD_ADDR_RECEIVED)) {
+ 		pm->status &= ~BIT(MPTCP_PM_ADD_ADDR_RECEIVED);
+ 		mptcp_pm_nl_add_addr_received(msk);
+@@ -1913,11 +1913,11 @@ void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how)
+ 		break;
+ 	default:
+ 		if (__mptcp_check_fallback(mptcp_sk(sk))) {
+-			pr_debug("Fallback");
++			pr_debug("Fallback\n");
+ 			ssk->sk_shutdown |= how;
+ 			tcp_shutdown(ssk, how);
+ 		} else {
+-			pr_debug("Sending DATA_FIN on subflow %p", ssk);
++			pr_debug("Sending DATA_FIN on subflow %p\n", ssk);
+ 			mptcp_set_timeout(sk, ssk);
+ 			tcp_send_ack(ssk);
+ 		}
+@@ -1973,7 +1973,7 @@ static void mptcp_close(struct sock *sk, long timeout)
+ 	if (__mptcp_check_fallback(msk)) {
+ 		goto update_state;
+ 	} else if (mptcp_close_state(sk)) {
+-		pr_debug("Sending DATA_FIN sk=%p", sk);
++		pr_debug("Sending DATA_FIN sk=%p\n", sk);
+ 		WRITE_ONCE(msk->write_seq, msk->write_seq + 1);
+ 		WRITE_ONCE(msk->snd_data_fin_enable, 1);
+ 
+@@ -2181,12 +2181,12 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err,
+ 		return NULL;
+ 	}
+ 
+-	pr_debug("msk=%p, listener=%p", msk, mptcp_subflow_ctx(listener->sk));
++	pr_debug("msk=%p, listener=%p\n", msk, mptcp_subflow_ctx(listener->sk));
+ 	newsk = inet_csk_accept(listener->sk, flags, err, kern);
+ 	if (!newsk)
+ 		return NULL;
+ 
+-	pr_debug("msk=%p, subflow is mptcp=%d", msk, sk_is_mptcp(newsk));
++	pr_debug("msk=%p, subflow is mptcp=%d\n", msk, sk_is_mptcp(newsk));
+ 	if (sk_is_mptcp(newsk)) {
+ 		struct mptcp_subflow_context *subflow;
+ 		struct sock *new_mptcp_sock;
+@@ -2351,7 +2351,7 @@ static int mptcp_setsockopt(struct sock *sk, int level, int optname,
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 	struct sock *ssk;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	if (mptcp_unsupported(level, optname))
+ 		return -ENOPROTOOPT;
+@@ -2383,7 +2383,7 @@ static int mptcp_getsockopt(struct sock *sk, int level, int optname,
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 	struct sock *ssk;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	/* @@ the meaning of setsockopt() when the socket is connected and
+ 	 * there are multiple subflows is not yet defined. It is up to the
+@@ -2454,7 +2454,7 @@ static int mptcp_get_port(struct sock *sk, unsigned short snum)
+ 	struct socket *ssock;
+ 
+ 	ssock = __mptcp_nmpc_socket(msk);
+-	pr_debug("msk=%p, subflow=%p", msk, ssock);
++	pr_debug("msk=%p, subflow=%p\n", msk, ssock);
+ 	if (WARN_ON_ONCE(!ssock))
+ 		return -EINVAL;
+ 
+@@ -2472,7 +2472,7 @@ void mptcp_finish_connect(struct sock *ssk)
+ 	sk = subflow->conn;
+ 	msk = mptcp_sk(sk);
+ 
+-	pr_debug("msk=%p, token=%u", sk, subflow->token);
++	pr_debug("msk=%p, token=%u\n", sk, subflow->token);
+ 
+ 	mptcp_crypto_key_sha(subflow->remote_key, NULL, &ack_seq);
+ 	ack_seq++;
+@@ -2511,7 +2511,7 @@ bool mptcp_finish_join(struct sock *sk)
+ 	struct socket *parent_sock;
+ 	bool ret;
+ 
+-	pr_debug("msk=%p, subflow=%p", msk, subflow);
++	pr_debug("msk=%p, subflow=%p\n", msk, subflow);
+ 
+ 	/* mptcp socket already closing? */
+ 	if (!mptcp_is_fully_established(parent))
+@@ -2673,7 +2673,7 @@ static int mptcp_listen(struct socket *sock, int backlog)
+ 	struct socket *ssock;
+ 	int err;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	lock_sock(sock->sk);
+ 	ssock = __mptcp_nmpc_socket(msk);
+@@ -2703,7 +2703,7 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
+ 	struct socket *ssock;
+ 	int err;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	lock_sock(sock->sk);
+ 	if (sock->sk->sk_state != TCP_LISTEN)
+@@ -2762,7 +2762,7 @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock,
+ 	sock_poll_wait(file, sock, wait);
+ 
+ 	state = inet_sk_state_load(sk);
+-	pr_debug("msk=%p state=%d flags=%lx", msk, state, msk->flags);
++	pr_debug("msk=%p state=%d flags=%lx\n", msk, state, msk->flags);
+ 	if (state == TCP_LISTEN)
+ 		return mptcp_check_readable(msk);
+ 
+@@ -2783,7 +2783,7 @@ static int mptcp_shutdown(struct socket *sock, int how)
+ 	struct mptcp_subflow_context *subflow;
+ 	int ret = 0;
+ 
+-	pr_debug("sk=%p, how=%d", msk, how);
++	pr_debug("sk=%p, how=%d\n", msk, how);
+ 
+ 	lock_sock(sock->sk);
+ 
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 4348bccb982f9..b8351b671c2fa 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -523,7 +523,7 @@ static inline bool mptcp_check_fallback(const struct sock *sk)
+ static inline void __mptcp_do_fallback(struct mptcp_sock *msk)
+ {
+ 	if (test_bit(MPTCP_FALLBACK_DONE, &msk->flags)) {
+-		pr_debug("TCP fallback already done (msk=%p)", msk);
++		pr_debug("TCP fallback already done (msk=%p)\n", msk);
+ 		return;
+ 	}
+ 	set_bit(MPTCP_FALLBACK_DONE, &msk->flags);
+@@ -537,7 +537,7 @@ static inline void mptcp_do_fallback(struct sock *sk)
+ 	__mptcp_do_fallback(msk);
+ }
+ 
+-#define pr_fallback(a) pr_debug("%s:fallback to TCP (msk=%p)", __func__, a)
++#define pr_fallback(a) pr_debug("%s:fallback to TCP (msk=%p)\n", __func__, a)
+ 
+ static inline bool subflow_simultaneous_connect(struct sock *sk)
+ {
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index ba86cb06d6d8c..8a0ef50c307ce 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -34,7 +34,7 @@ static void subflow_req_destructor(struct request_sock *req)
+ {
+ 	struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req);
+ 
+-	pr_debug("subflow_req=%p", subflow_req);
++	pr_debug("subflow_req=%p\n", subflow_req);
+ 
+ 	if (subflow_req->msk)
+ 		sock_put((struct sock *)subflow_req->msk);
+@@ -121,7 +121,7 @@ static void subflow_init_req(struct request_sock *req,
+ 	struct mptcp_options_received mp_opt;
+ 	int ret;
+ 
+-	pr_debug("subflow_req=%p, listener=%p", subflow_req, listener);
++	pr_debug("subflow_req=%p, listener=%p\n", subflow_req, listener);
+ 
+ 	ret = __subflow_init_req(req, sk_listener);
+ 	if (ret)
+@@ -183,7 +183,7 @@ static void subflow_init_req(struct request_sock *req,
+ 				subflow_init_req_cookie_join_save(subflow_req, skb);
+ 		}
+ 
+-		pr_debug("token=%u, remote_nonce=%u msk=%p", subflow_req->token,
++		pr_debug("token=%u, remote_nonce=%u msk=%p\n", subflow_req->token,
+ 			 subflow_req->remote_nonce, subflow_req->msk);
+ 	}
+ }
+@@ -306,7 +306,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 	subflow->rel_write_seq = 1;
+ 	subflow->conn_finished = 1;
+ 	subflow->ssn_offset = TCP_SKB_CB(skb)->seq;
+-	pr_debug("subflow=%p synack seq=%x", subflow, subflow->ssn_offset);
++	pr_debug("subflow=%p synack seq=%x\n", subflow, subflow->ssn_offset);
+ 
+ 	mptcp_get_options(skb, &mp_opt);
+ 	if (subflow->request_mptcp) {
+@@ -321,7 +321,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 		subflow->mp_capable = 1;
+ 		subflow->can_ack = 1;
+ 		subflow->remote_key = mp_opt.sndr_key;
+-		pr_debug("subflow=%p, remote_key=%llu", subflow,
++		pr_debug("subflow=%p, remote_key=%llu\n", subflow,
+ 			 subflow->remote_key);
+ 		mptcp_finish_connect(sk);
+ 	} else if (subflow->request_join) {
+@@ -332,7 +332,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 
+ 		subflow->thmac = mp_opt.thmac;
+ 		subflow->remote_nonce = mp_opt.nonce;
+-		pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u", subflow,
++		pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u\n", subflow,
+ 			 subflow->thmac, subflow->remote_nonce);
+ 
+ 		if (!subflow_thmac_valid(subflow)) {
+@@ -371,7 +371,7 @@ static int subflow_v4_conn_request(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+ 
+-	pr_debug("subflow=%p", subflow);
++	pr_debug("subflow=%p\n", subflow);
+ 
+ 	/* Never answer to SYNs sent to broadcast or multicast */
+ 	if (skb_rtable(skb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))
+@@ -401,7 +401,7 @@ static int subflow_v6_conn_request(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+ 
+-	pr_debug("subflow=%p", subflow);
++	pr_debug("subflow=%p\n", subflow);
+ 
+ 	if (skb->protocol == htons(ETH_P_IP))
+ 		return subflow_v4_conn_request(sk, skb);
+@@ -543,7 +543,7 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ 	struct sock *new_msk = NULL;
+ 	struct sock *child;
+ 
+-	pr_debug("listener=%p, req=%p, conn=%p", listener, req, listener->conn);
++	pr_debug("listener=%p, req=%p, conn=%p\n", listener, req, listener->conn);
+ 
+ 	/* After child creation we must look for 'mp_capable' even when options
+ 	 * are not parsed
+@@ -692,7 +692,7 @@ static u64 expand_seq(u64 old_seq, u16 old_data_len, u64 seq)
+ 
+ static void dbg_bad_map(struct mptcp_subflow_context *subflow, u32 ssn)
+ {
+-	pr_debug("Bad mapping: ssn=%d map_seq=%d map_data_len=%d",
++	pr_debug("Bad mapping: ssn=%d map_seq=%d map_data_len=%d\n",
+ 		 ssn, subflow->map_subflow_seq, subflow->map_data_len);
+ }
+ 
+@@ -768,7 +768,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ 		goto validate_seq;
+ 	}
+ 
+-	pr_debug("seq=%llu is64=%d ssn=%u data_len=%u data_fin=%d",
++	pr_debug("seq=%llu is64=%d ssn=%u data_len=%u data_fin=%d\n",
+ 		 mpext->data_seq, mpext->dsn64, mpext->subflow_seq,
+ 		 mpext->data_len, mpext->data_fin);
+ 
+@@ -782,7 +782,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ 		if (data_len == 1) {
+ 			bool updated = mptcp_update_rcv_data_fin(msk, mpext->data_seq,
+ 								 mpext->dsn64);
+-			pr_debug("DATA_FIN with no payload seq=%llu", mpext->data_seq);
++			pr_debug("DATA_FIN with no payload seq=%llu\n", mpext->data_seq);
+ 			if (subflow->map_valid) {
+ 				/* A DATA_FIN might arrive in a DSS
+ 				 * option before the previous mapping
+@@ -807,7 +807,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ 				data_fin_seq &= GENMASK_ULL(31, 0);
+ 
+ 			mptcp_update_rcv_data_fin(msk, data_fin_seq, mpext->dsn64);
+-			pr_debug("DATA_FIN with mapping seq=%llu dsn64=%d",
++			pr_debug("DATA_FIN with mapping seq=%llu dsn64=%d\n",
+ 				 data_fin_seq, mpext->dsn64);
+ 		}
+ 
+@@ -818,7 +818,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ 	if (!mpext->dsn64) {
+ 		map_seq = expand_seq(subflow->map_seq, subflow->map_data_len,
+ 				     mpext->data_seq);
+-		pr_debug("expanded seq=%llu", subflow->map_seq);
++		pr_debug("expanded seq=%llu\n", subflow->map_seq);
+ 	} else {
+ 		map_seq = mpext->data_seq;
+ 	}
+@@ -850,7 +850,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ 	subflow->map_data_len = data_len;
+ 	subflow->map_valid = 1;
+ 	subflow->mpc_map = mpext->mpc_map;
+-	pr_debug("new map seq=%llu subflow_seq=%u data_len=%u",
++	pr_debug("new map seq=%llu subflow_seq=%u data_len=%u\n",
+ 		 subflow->map_seq, subflow->map_subflow_seq,
+ 		 subflow->map_data_len);
+ 
+@@ -880,7 +880,7 @@ static void mptcp_subflow_discard_data(struct sock *ssk, struct sk_buff *skb,
+ 	avail_len = skb->len - offset;
+ 	incr = limit >= avail_len ? avail_len + fin : limit;
+ 
+-	pr_debug("discarding=%d len=%d offset=%d seq=%d", incr, skb->len,
++	pr_debug("discarding=%d len=%d offset=%d seq=%d\n", incr, skb->len,
+ 		 offset, subflow->map_subflow_seq);
+ 	MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DUPDATA);
+ 	tcp_sk(ssk)->copied_seq += incr;
+@@ -901,7 +901,7 @@ static bool subflow_check_data_avail(struct sock *ssk)
+ 	struct mptcp_sock *msk;
+ 	struct sk_buff *skb;
+ 
+-	pr_debug("msk=%p ssk=%p data_avail=%d skb=%p", subflow->conn, ssk,
++	pr_debug("msk=%p ssk=%p data_avail=%d skb=%p\n", subflow->conn, ssk,
+ 		 subflow->data_avail, skb_peek(&ssk->sk_receive_queue));
+ 	if (!skb_peek(&ssk->sk_receive_queue))
+ 		subflow->data_avail = 0;
+@@ -914,7 +914,7 @@ static bool subflow_check_data_avail(struct sock *ssk)
+ 		u64 old_ack;
+ 
+ 		status = get_mapping_status(ssk, msk);
+-		pr_debug("msk=%p ssk=%p status=%d", msk, ssk, status);
++		pr_debug("msk=%p ssk=%p status=%d\n", msk, ssk, status);
+ 		if (status == MAPPING_INVALID) {
+ 			ssk->sk_err = EBADMSG;
+ 			goto fatal;
+@@ -953,7 +953,7 @@ static bool subflow_check_data_avail(struct sock *ssk)
+ 
+ 		old_ack = READ_ONCE(msk->ack_seq);
+ 		ack_seq = mptcp_subflow_get_mapped_dsn(subflow);
+-		pr_debug("msk ack_seq=%llx subflow ack_seq=%llx", old_ack,
++		pr_debug("msk ack_seq=%llx subflow ack_seq=%llx\n", old_ack,
+ 			 ack_seq);
+ 		if (ack_seq == old_ack) {
+ 			subflow->data_avail = MPTCP_SUBFLOW_DATA_AVAIL;
+@@ -991,7 +991,7 @@ bool mptcp_subflow_data_available(struct sock *sk)
+ 		subflow->map_valid = 0;
+ 		subflow->data_avail = 0;
+ 
+-		pr_debug("Done with mapping: seq=%u data_len=%u",
++		pr_debug("Done with mapping: seq=%u data_len=%u\n",
+ 			 subflow->map_subflow_seq,
+ 			 subflow->map_data_len);
+ 	}
+@@ -1079,7 +1079,7 @@ void mptcpv6_handle_mapped(struct sock *sk, bool mapped)
+ 
+ 	target = mapped ? &subflow_v6m_specific : subflow_default_af_ops(sk);
+ 
+-	pr_debug("subflow=%p family=%d ops=%p target=%p mapped=%d",
++	pr_debug("subflow=%p family=%d ops=%p target=%p mapped=%d\n",
+ 		 subflow, sk->sk_family, icsk->icsk_af_ops, target, mapped);
+ 
+ 	if (likely(icsk->icsk_af_ops == target))
+@@ -1162,7 +1162,7 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
+ 		goto failed;
+ 
+ 	mptcp_crypto_key_sha(subflow->remote_key, &remote_token, NULL);
+-	pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d", msk,
++	pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d\n", msk,
+ 		 remote_token, local_id, remote_id);
+ 	subflow->remote_token = remote_token;
+ 	subflow->local_id = local_id;
+@@ -1233,7 +1233,7 @@ int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock)
+ 	SOCK_INODE(sf)->i_gid = SOCK_INODE(sk->sk_socket)->i_gid;
+ 
+ 	subflow = mptcp_subflow_ctx(sf->sk);
+-	pr_debug("subflow=%p", subflow);
++	pr_debug("subflow=%p\n", subflow);
+ 
+ 	*new_sock = sf;
+ 	sock_hold(sk);
+@@ -1255,7 +1255,7 @@ static struct mptcp_subflow_context *subflow_create_ctx(struct sock *sk,
+ 	rcu_assign_pointer(icsk->icsk_ulp_data, ctx);
+ 	INIT_LIST_HEAD(&ctx->node);
+ 
+-	pr_debug("subflow=%p", ctx);
++	pr_debug("subflow=%p\n", ctx);
+ 
+ 	ctx->tcp_sock = sk;
+ 
+@@ -1332,7 +1332,7 @@ static int subflow_ulp_init(struct sock *sk)
+ 		goto out;
+ 	}
+ 
+-	pr_debug("subflow=%p, family=%d", ctx, sk->sk_family);
++	pr_debug("subflow=%p, family=%d\n", ctx, sk->sk_family);
+ 
+ 	tp->is_mptcp = 1;
+ 	ctx->icsk_af_ops = icsk->icsk_af_ops;
+diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
+index 82f36beb2e766..0ce12a33ffda4 100644
+--- a/net/netfilter/nf_conncount.c
++++ b/net/netfilter/nf_conncount.c
+@@ -310,7 +310,6 @@ insert_tree(struct net *net,
+ 	struct nf_conncount_rb *rbconn;
+ 	struct nf_conncount_tuple *conn;
+ 	unsigned int count = 0, gc_count = 0;
+-	u8 keylen = data->keylen;
+ 	bool do_gc = true;
+ 
+ 	spin_lock_bh(&nf_conncount_locks[hash]);
+@@ -322,7 +321,7 @@ insert_tree(struct net *net,
+ 		rbconn = rb_entry(*rbnode, struct nf_conncount_rb, node);
+ 
+ 		parent = *rbnode;
+-		diff = key_diff(key, rbconn->key, keylen);
++		diff = key_diff(key, rbconn->key, data->keylen);
+ 		if (diff < 0) {
+ 			rbnode = &((*rbnode)->rb_left);
+ 		} else if (diff > 0) {
+@@ -367,7 +366,7 @@ insert_tree(struct net *net,
+ 
+ 	conn->tuple = *tuple;
+ 	conn->zone = *zone;
+-	memcpy(rbconn->key, key, sizeof(u32) * keylen);
++	memcpy(rbconn->key, key, sizeof(u32) * data->keylen);
+ 
+ 	nf_conncount_list_init(&rbconn->list);
+ 	list_add(&conn->node, &rbconn->list.head);
+@@ -392,7 +391,6 @@ count_tree(struct net *net,
+ 	struct rb_node *parent;
+ 	struct nf_conncount_rb *rbconn;
+ 	unsigned int hash;
+-	u8 keylen = data->keylen;
+ 
+ 	hash = jhash2(key, data->keylen, conncount_rnd) % CONNCOUNT_SLOTS;
+ 	root = &data->root[hash];
+@@ -403,7 +401,7 @@ count_tree(struct net *net,
+ 
+ 		rbconn = rb_entry(parent, struct nf_conncount_rb, node);
+ 
+-		diff = key_diff(key, rbconn->key, keylen);
++		diff = key_diff(key, rbconn->key, data->keylen);
+ 		if (diff < 0) {
+ 			parent = rcu_dereference_raw(parent->rb_left);
+ 		} else if (diff > 0) {
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index 5dc7a3c310c9d..4ddb43a6644ab 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -785,12 +785,15 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ 		 * queue, accept the collision, update the host tags.
+ 		 */
+ 		q->way_collisions++;
+-		if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
+-			q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
+-			q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
+-		}
+ 		allocate_src = cake_dsrc(flow_mode);
+ 		allocate_dst = cake_ddst(flow_mode);
++
++		if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
++			if (allocate_src)
++				q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
++			if (allocate_dst)
++				q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
++		}
+ found:
+ 		/* reserve queue for future packets in same flow */
+ 		reduced_hash = outer_hash + k;
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index e0e16b0fdb179..93ed7bac9ee60 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -733,11 +733,10 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+ 
+ 				err = qdisc_enqueue(skb, q->qdisc, &to_free);
+ 				kfree_skb_list(to_free);
+-				if (err != NET_XMIT_SUCCESS &&
+-				    net_xmit_drop_count(err)) {
+-					qdisc_qstats_drop(sch);
+-					qdisc_tree_reduce_backlog(sch, 1,
+-								  pkt_len);
++				if (err != NET_XMIT_SUCCESS) {
++					if (net_xmit_drop_count(err))
++						qdisc_qstats_drop(sch);
++					qdisc_tree_reduce_backlog(sch, 1, pkt_len);
+ 				}
+ 				goto tfifo_dequeue;
+ 			}
+diff --git a/net/sunrpc/stats.c b/net/sunrpc/stats.c
+index c964b48eaabae..a004c3ef35c0f 100644
+--- a/net/sunrpc/stats.c
++++ b/net/sunrpc/stats.c
+@@ -309,7 +309,7 @@ EXPORT_SYMBOL_GPL(rpc_proc_unregister);
+ struct proc_dir_entry *
+ svc_proc_register(struct net *net, struct svc_stat *statp, const struct proc_ops *proc_ops)
+ {
+-	return do_register(net, statp->program->pg_name, statp, proc_ops);
++	return do_register(net, statp->program->pg_name, net, proc_ops);
+ }
+ EXPORT_SYMBOL_GPL(svc_proc_register);
+ 
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index f8815ae776e68..4212fb1c3d887 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -445,8 +445,8 @@ __svc_init_bc(struct svc_serv *serv)
+  * Create an RPC service
+  */
+ static struct svc_serv *
+-__svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
+-	     int (*threadfn)(void *data))
++__svc_create(struct svc_program *prog, struct svc_stat *stats,
++	     unsigned int bufsize, int npools, int (*threadfn)(void *data))
+ {
+ 	struct svc_serv	*serv;
+ 	unsigned int vers;
+@@ -458,7 +458,7 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
+ 	serv->sv_name      = prog->pg_name;
+ 	serv->sv_program   = prog;
+ 	kref_init(&serv->sv_refcnt);
+-	serv->sv_stats     = prog->pg_stats;
++	serv->sv_stats     = stats;
+ 	if (bufsize > RPCSVC_MAXPAYLOAD)
+ 		bufsize = RPCSVC_MAXPAYLOAD;
+ 	serv->sv_max_payload = bufsize? bufsize : 4096;
+@@ -520,26 +520,28 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
+ struct svc_serv *svc_create(struct svc_program *prog, unsigned int bufsize,
+ 			    int (*threadfn)(void *data))
+ {
+-	return __svc_create(prog, bufsize, 1, threadfn);
++	return __svc_create(prog, NULL, bufsize, 1, threadfn);
+ }
+ EXPORT_SYMBOL_GPL(svc_create);
+ 
+ /**
+  * svc_create_pooled - Create an RPC service with pooled threads
+  * @prog: the RPC program the new service will handle
++ * @stats: the stats struct if desired
+  * @bufsize: maximum message size for @prog
+  * @threadfn: a function to service RPC requests for @prog
+  *
+  * Returns an instantiated struct svc_serv object or NULL.
+  */
+ struct svc_serv *svc_create_pooled(struct svc_program *prog,
++				   struct svc_stat *stats,
+ 				   unsigned int bufsize,
+ 				   int (*threadfn)(void *data))
+ {
+ 	struct svc_serv *serv;
+ 	unsigned int npools = svc_pool_map_get();
+ 
+-	serv = __svc_create(prog, bufsize, npools, threadfn);
++	serv = __svc_create(prog, stats, bufsize, npools, threadfn);
+ 	if (!serv)
+ 		goto out_err;
+ 	return serv;
+@@ -1355,7 +1357,8 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ 		goto err_bad_proc;
+ 
+ 	/* Syntactic check complete */
+-	serv->sv_stats->rpccnt++;
++	if (serv->sv_stats)
++		serv->sv_stats->rpccnt++;
+ 	trace_svc_process(rqstp, progp->pg_name);
+ 
+ 	/* Build the reply header. */
+@@ -1421,7 +1424,8 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ 	goto close_xprt;
+ 
+ err_bad_rpc:
+-	serv->sv_stats->rpcbadfmt++;
++	if (serv->sv_stats)
++		serv->sv_stats->rpcbadfmt++;
+ 	svc_putnl(resv, 1);	/* REJECT */
+ 	svc_putnl(resv, 0);	/* RPC_MISMATCH */
+ 	svc_putnl(resv, 2);	/* Only RPCv2 supported */
+@@ -1434,7 +1438,8 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ err_bad_auth:
+ 	dprintk("svc: authentication failed (%d)\n",
+ 		be32_to_cpu(rqstp->rq_auth_stat));
+-	serv->sv_stats->rpcbadauth++;
++	if (serv->sv_stats)
++		serv->sv_stats->rpcbadauth++;
+ 	/* Restore write pointer to location of accept status: */
+ 	xdr_ressize_check(rqstp, reply_statp);
+ 	svc_putnl(resv, 1);	/* REJECT */
+@@ -1444,7 +1449,8 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ 
+ err_bad_prog:
+ 	dprintk("svc: unknown program %d\n", prog);
+-	serv->sv_stats->rpcbadfmt++;
++	if (serv->sv_stats)
++		serv->sv_stats->rpcbadfmt++;
+ 	svc_putnl(resv, RPC_PROG_UNAVAIL);
+ 	goto sendit;
+ 
+@@ -1452,7 +1458,8 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ 	svc_printk(rqstp, "unknown version (%d for prog %d, %s)\n",
+ 		       rqstp->rq_vers, rqstp->rq_prog, progp->pg_name);
+ 
+-	serv->sv_stats->rpcbadfmt++;
++	if (serv->sv_stats)
++		serv->sv_stats->rpcbadfmt++;
+ 	svc_putnl(resv, RPC_PROG_MISMATCH);
+ 	svc_putnl(resv, process.mismatch.lovers);
+ 	svc_putnl(resv, process.mismatch.hivers);
+@@ -1461,7 +1468,8 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ err_bad_proc:
+ 	svc_printk(rqstp, "unknown procedure (%d)\n", rqstp->rq_proc);
+ 
+-	serv->sv_stats->rpcbadfmt++;
++	if (serv->sv_stats)
++		serv->sv_stats->rpcbadfmt++;
+ 	svc_putnl(resv, RPC_PROC_UNAVAIL);
+ 	goto sendit;
+ 
+@@ -1470,7 +1478,8 @@ svc_process_common(struct svc_rqst *rqstp, struct kvec *argv, struct kvec *resv)
+ 
+ 	rpc_stat = rpc_garbage_args;
+ err_bad:
+-	serv->sv_stats->rpcbadfmt++;
++	if (serv->sv_stats)
++		serv->sv_stats->rpcbadfmt++;
+ 	svc_putnl(resv, ntohl(rpc_stat));
+ 	goto sendit;
+ }
+@@ -1505,7 +1514,8 @@ svc_process(struct svc_rqst *rqstp)
+ 	if (dir != 0) {
+ 		/* direction != CALL */
+ 		svc_printk(rqstp, "bad direction %d, dropping request\n", dir);
+-		serv->sv_stats->rpcbadfmt++;
++		if (serv->sv_stats)
++			serv->sv_stats->rpcbadfmt++;
+ 		goto out_drop;
+ 	}
+ 
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
+index 80a0c0e875909..7c50eddb8d3ca 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
+@@ -460,6 +460,8 @@ svc_rdma_build_writes(struct svc_rdma_write_info *info,
+ 		offset += info->wi_seg_off;
+ 
+ 		write_len = min(remaining, length - info->wi_seg_off);
++		if (!write_len)
++			goto out_overflow;
+ 		ctxt = svc_rdma_get_rw_ctxt(rdma,
+ 					    (write_len >> PAGE_SHIFT) + 2);
+ 		if (!ctxt)
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 0666f981618a2..e0cd6d7350533 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -2314,6 +2314,13 @@ static void xs_tcp_setup_socket(struct work_struct *work)
+ 	case -EALREADY:
+ 		xprt_unlock_connect(xprt, transport);
+ 		return;
++	case -EPERM:
++		/* Happens, for instance, if a BPF program is preventing
++		 * the connect. Remap the error so upper layers can better
++		 * deal with it.
++		 */
++		status = -ECONNREFUSED;
++		fallthrough;
+ 	case -EINVAL:
+ 		/* Happens, for instance, if the user specified a link
+ 		 * local IPv6 address without a scope-id.
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index e2ff610d27760..b7e9c1238516f 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -603,9 +603,6 @@ static void init_peercred(struct sock *sk)
+ 
+ static void copy_peercred(struct sock *sk, struct sock *peersk)
+ {
+-	const struct cred *old_cred;
+-	struct pid *old_pid;
+-
+ 	if (sk < peersk) {
+ 		spin_lock(&sk->sk_peer_lock);
+ 		spin_lock_nested(&peersk->sk_peer_lock, SINGLE_DEPTH_NESTING);
+@@ -613,16 +610,12 @@ static void copy_peercred(struct sock *sk, struct sock *peersk)
+ 		spin_lock(&peersk->sk_peer_lock);
+ 		spin_lock_nested(&sk->sk_peer_lock, SINGLE_DEPTH_NESTING);
+ 	}
+-	old_pid = sk->sk_peer_pid;
+-	old_cred = sk->sk_peer_cred;
++
+ 	sk->sk_peer_pid  = get_pid(peersk->sk_peer_pid);
+ 	sk->sk_peer_cred = get_cred(peersk->sk_peer_cred);
+ 
+ 	spin_unlock(&sk->sk_peer_lock);
+ 	spin_unlock(&peersk->sk_peer_lock);
+-
+-	put_pid(old_pid);
+-	put_cred(old_cred);
+ }
+ 
+ static int unix_listen(struct socket *sock, int backlog)
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 76a27b6d45d28..e8a9ce0392957 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1510,7 +1510,7 @@ struct cfg80211_bss *cfg80211_get_bss(struct wiphy *wiphy,
+ }
+ EXPORT_SYMBOL(cfg80211_get_bss);
+ 
+-static void rb_insert_bss(struct cfg80211_registered_device *rdev,
++static bool rb_insert_bss(struct cfg80211_registered_device *rdev,
+ 			  struct cfg80211_internal_bss *bss)
+ {
+ 	struct rb_node **p = &rdev->bss_tree.rb_node;
+@@ -1526,7 +1526,7 @@ static void rb_insert_bss(struct cfg80211_registered_device *rdev,
+ 
+ 		if (WARN_ON(!cmp)) {
+ 			/* will sort of leak this BSS */
+-			return;
++			return false;
+ 		}
+ 
+ 		if (cmp < 0)
+@@ -1537,6 +1537,7 @@ static void rb_insert_bss(struct cfg80211_registered_device *rdev,
+ 
+ 	rb_link_node(&bss->rbn, parent, p);
+ 	rb_insert_color(&bss->rbn, &rdev->bss_tree);
++	return true;
+ }
+ 
+ static struct cfg80211_internal_bss *
+@@ -1563,6 +1564,34 @@ rb_find_bss(struct cfg80211_registered_device *rdev,
+ 	return NULL;
+ }
+ 
++static void cfg80211_insert_bss(struct cfg80211_registered_device *rdev,
++				struct cfg80211_internal_bss *bss)
++{
++	lockdep_assert_held(&rdev->bss_lock);
++
++	if (!rb_insert_bss(rdev, bss))
++		return;
++	list_add_tail(&bss->list, &rdev->bss_list);
++	rdev->bss_entries++;
++}
++
++static void cfg80211_rehash_bss(struct cfg80211_registered_device *rdev,
++                                struct cfg80211_internal_bss *bss)
++{
++	lockdep_assert_held(&rdev->bss_lock);
++
++	rb_erase(&bss->rbn, &rdev->bss_tree);
++	if (!rb_insert_bss(rdev, bss)) {
++		list_del(&bss->list);
++		if (!list_empty(&bss->hidden_list))
++			list_del_init(&bss->hidden_list);
++		if (!list_empty(&bss->pub.nontrans_list))
++			list_del_init(&bss->pub.nontrans_list);
++		rdev->bss_entries--;
++	}
++	rdev->bss_generation++;
++}
++
+ static bool cfg80211_combine_bsses(struct cfg80211_registered_device *rdev,
+ 				   struct cfg80211_internal_bss *new)
+ {
+@@ -1838,9 +1867,7 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 			bss_ref_get(rdev, pbss);
+ 		}
+ 
+-		list_add_tail(&new->list, &rdev->bss_list);
+-		rdev->bss_entries++;
+-		rb_insert_bss(rdev, new);
++		cfg80211_insert_bss(rdev, new);
+ 		found = new;
+ 	}
+ 
+@@ -2702,10 +2729,7 @@ void cfg80211_update_assoc_bss_entry(struct wireless_dev *wdev,
+ 		if (!WARN_ON(!__cfg80211_unlink_bss(rdev, new)))
+ 			rdev->bss_generation++;
+ 	}
+-
+-	rb_erase(&cbss->rbn, &rdev->bss_tree);
+-	rb_insert_bss(rdev, cbss);
+-	rdev->bss_generation++;
++	cfg80211_rehash_bss(rdev, cbss);
+ 
+ 	list_for_each_entry_safe(nontrans_bss, tmp,
+ 				 &cbss->pub.nontrans_list,
+@@ -2713,9 +2737,7 @@ void cfg80211_update_assoc_bss_entry(struct wireless_dev *wdev,
+ 		bss = container_of(nontrans_bss,
+ 				   struct cfg80211_internal_bss, pub);
+ 		bss->pub.channel = chan;
+-		rb_erase(&bss->rbn, &rdev->bss_tree);
+-		rb_insert_bss(rdev, bss);
+-		rdev->bss_generation++;
++		cfg80211_rehash_bss(rdev, bss);
+ 	}
+ 
+ done:
+diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
+index 49d97b331abca..06eac22665656 100644
+--- a/security/apparmor/apparmorfs.c
++++ b/security/apparmor/apparmorfs.c
+@@ -1679,6 +1679,10 @@ int __aafs_profile_mkdir(struct aa_profile *profile, struct dentry *parent)
+ 		struct aa_profile *p;
+ 		p = aa_deref_parent(profile);
+ 		dent = prof_dir(p);
++		if (!dent) {
++			error = -ENOENT;
++			goto fail2;
++		}
+ 		/* adding to parent that previously didn't have children */
+ 		dent = aafs_create_dir("profiles", dent);
+ 		if (IS_ERR(dent))
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 8c790563b33ac..92bc6c9d793d6 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -3642,12 +3642,18 @@ static int smack_unix_stream_connect(struct sock *sock,
+ 		}
+ 	}
+ 
+-	/*
+-	 * Cross reference the peer labels for SO_PEERSEC.
+-	 */
+ 	if (rc == 0) {
++		/*
++		 * Cross reference the peer labels for SO_PEERSEC.
++		 */
+ 		nsp->smk_packet = ssp->smk_out;
+ 		ssp->smk_packet = osp->smk_out;
++
++		/*
++		 * new/child/established socket must inherit listening socket labels
++		 */
++		nsp->smk_out = osp->smk_out;
++		nsp->smk_in  = osp->smk_in;
+ 	}
+ 
+ 	return rc;
+@@ -4228,7 +4234,7 @@ static int smack_inet_conn_request(struct sock *sk, struct sk_buff *skb,
+ 	rcu_read_unlock();
+ 
+ 	if (hskp == NULL)
+-		rc = netlbl_req_setattr(req, &skp->smk_netlabel);
++		rc = netlbl_req_setattr(req, &ssp->smk_out->smk_netlabel);
+ 	else
+ 		netlbl_req_delattr(req);
+ 
+diff --git a/sound/hda/hdmi_chmap.c b/sound/hda/hdmi_chmap.c
+index aad5c4bf4d344..0ebf4d9078522 100644
+--- a/sound/hda/hdmi_chmap.c
++++ b/sound/hda/hdmi_chmap.c
+@@ -753,6 +753,20 @@ static int hdmi_chmap_ctl_get(struct snd_kcontrol *kcontrol,
+ 	return 0;
+ }
+ 
++/* a simple sanity check for input values to chmap kcontrol */
++static int chmap_value_check(struct hdac_chmap *hchmap,
++			     const struct snd_ctl_elem_value *ucontrol)
++{
++	int i;
++
++	for (i = 0; i < hchmap->channels_max; i++) {
++		if (ucontrol->value.integer.value[i] < 0 ||
++		    ucontrol->value.integer.value[i] > SNDRV_CHMAP_LAST)
++			return -EINVAL;
++	}
++	return 0;
++}
++
+ static int hdmi_chmap_ctl_put(struct snd_kcontrol *kcontrol,
+ 			      struct snd_ctl_elem_value *ucontrol)
+ {
+@@ -764,6 +778,10 @@ static int hdmi_chmap_ctl_put(struct snd_kcontrol *kcontrol,
+ 	unsigned char chmap[8], per_pin_chmap[8];
+ 	int i, err, ca, prepared = 0;
+ 
++	err = chmap_value_check(hchmap, ucontrol);
++	if (err < 0)
++		return err;
++
+ 	/* No monitor is connected in dyn_pcm_assign.
+ 	 * It's invalid to setup the chmap
+ 	 */
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index 35113fa84a0fd..733dc9953a38b 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -5067,6 +5067,69 @@ void snd_hda_gen_stream_pm(struct hda_codec *codec, hda_nid_t nid, bool on)
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_gen_stream_pm);
+ 
++/* forcibly mute the speaker output without caching; return true if updated */
++static bool force_mute_output_path(struct hda_codec *codec, hda_nid_t nid)
++{
++	if (!nid)
++		return false;
++	if (!nid_has_mute(codec, nid, HDA_OUTPUT))
++		return false; /* no mute, skip */
++	if (snd_hda_codec_amp_read(codec, nid, 0, HDA_OUTPUT, 0) &
++	    snd_hda_codec_amp_read(codec, nid, 1, HDA_OUTPUT, 0) &
++	    HDA_AMP_MUTE)
++		return false; /* both channels already muted, skip */
++
++	/* direct amp update without caching */
++	snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_AMP_GAIN_MUTE,
++			    AC_AMP_SET_OUTPUT | AC_AMP_SET_LEFT |
++			    AC_AMP_SET_RIGHT | HDA_AMP_MUTE);
++	return true;
++}
++
++/**
++ * snd_hda_gen_shutup_speakers - Forcibly mute the speaker outputs
++ * @codec: the HDA codec
++ *
++ * Forcibly mute the speaker outputs, to be called at suspend or shutdown.
++ *
++ * The mute state done by this function isn't cached, hence the original state
++ * will be restored at resume.
++ *
++ * Return true if the mute state has been changed.
++ */
++bool snd_hda_gen_shutup_speakers(struct hda_codec *codec)
++{
++	struct hda_gen_spec *spec = codec->spec;
++	const int *paths;
++	const struct nid_path *path;
++	int i, p, num_paths;
++	bool updated = false;
++
++	/* if already powered off, do nothing */
++	if (!snd_hdac_is_power_on(&codec->core))
++		return false;
++
++	if (spec->autocfg.line_out_type == AUTO_PIN_SPEAKER_OUT) {
++		paths = spec->out_paths;
++		num_paths = spec->autocfg.line_outs;
++	} else {
++		paths = spec->speaker_paths;
++		num_paths = spec->autocfg.speaker_outs;
++	}
++
++	for (i = 0; i < num_paths; i++) {
++		path = snd_hda_get_path_from_idx(codec, paths[i]);
++		if (!path)
++			continue;
++		for (p = 0; p < path->depth; p++)
++			if (force_mute_output_path(codec, path->path[p]))
++				updated = true;
++	}
++
++	return updated;
++}
++EXPORT_SYMBOL_GPL(snd_hda_gen_shutup_speakers);
++
+ /**
+  * snd_hda_gen_parse_auto_config - Parse the given BIOS configuration and
+  * set up the hda_gen_spec
+diff --git a/sound/pci/hda/hda_generic.h b/sound/pci/hda/hda_generic.h
+index 578faa9adcdcd..fc00f8bc0d78d 100644
+--- a/sound/pci/hda/hda_generic.h
++++ b/sound/pci/hda/hda_generic.h
+@@ -364,5 +364,6 @@ int snd_hda_gen_add_mute_led_cdev(struct hda_codec *codec,
+ int snd_hda_gen_add_micmute_led_cdev(struct hda_codec *codec,
+ 				     int (*callback)(struct led_classdev *,
+ 						     enum led_brightness));
++bool snd_hda_gen_shutup_speakers(struct hda_codec *codec);
+ 
+ #endif /* __SOUND_HDA_GENERIC_H */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 5b37f5f14bc91..d908a39af9f5e 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -181,6 +181,8 @@ static void cx_auto_reboot_notify(struct hda_codec *codec)
+ {
+ 	struct conexant_spec *spec = codec->spec;
+ 
++	snd_hda_gen_shutup_speakers(codec);
++
+ 	/* Turn the problematic codec into D3 to avoid spurious noises
+ 	   from the internal speaker during (and after) reboot */
+ 	cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, false);
+@@ -236,6 +238,7 @@ enum {
+ 	CXT_FIXUP_HEADSET_MIC,
+ 	CXT_FIXUP_HP_MIC_NO_PRESENCE,
+ 	CXT_PINCFG_SWS_JS201D,
++	CXT_PINCFG_TOP_SPEAKER,
+ };
+ 
+ /* for hda_fixup_thinkpad_acpi() */
+@@ -903,6 +906,13 @@ static const struct hda_fixup cxt_fixups[] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = cxt_pincfg_sws_js201d,
+ 	},
++	[CXT_PINCFG_TOP_SPEAKER] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x1d, 0x82170111 },
++			{ }
++		},
++	},
+ };
+ 
+ static const struct snd_pci_quirk cxt5045_fixups[] = {
+@@ -999,6 +1009,8 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x1c06, 0x2011, "Lemote A1004", CXT_PINCFG_LEMOTE_A1004),
+ 	SND_PCI_QUIRK(0x1c06, 0x2012, "Lemote A1205", CXT_PINCFG_LEMOTE_A1205),
++	SND_PCI_QUIRK(0x2782, 0x12c3, "Sirius Gen1", CXT_PINCFG_TOP_SPEAKER),
++	SND_PCI_QUIRK(0x2782, 0x12c5, "Sirius Gen2", CXT_PINCFG_TOP_SPEAKER),
+ 	{}
+ };
+ 
+@@ -1018,6 +1030,7 @@ static const struct hda_model_fixup cxt5066_fixup_models[] = {
+ 	{ .id = CXT_FIXUP_HP_MIC_NO_PRESENCE, .name = "hp-mic-fix" },
+ 	{ .id = CXT_PINCFG_LENOVO_NOTEBOOK, .name = "lenovo-20149" },
+ 	{ .id = CXT_PINCFG_SWS_JS201D, .name = "sws-js201d" },
++	{ .id = CXT_PINCFG_TOP_SPEAKER, .name = "sirius-top-speaker" },
+ 	{}
+ };
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 04fd52bba0573..c104a33b3e8fa 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6968,6 +6968,7 @@ enum {
+ 	ALC236_FIXUP_HP_GPIO_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
++	ALC236_FIXUP_LENOVO_INV_DMIC,
+ 	ALC298_FIXUP_SAMSUNG_AMP,
+ 	ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ 	ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+@@ -8361,6 +8362,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc236_fixup_hp_mute_led_micmute_vref,
+ 	},
++	[ALC236_FIXUP_LENOVO_INV_DMIC] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc_fixup_inv_dmic,
++		.chained = true,
++		.chain_id = ALC283_FIXUP_INT_MIC,
++	},
+ 	[ALC298_FIXUP_SAMSUNG_AMP] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc298_fixup_samsung_amp,
+@@ -9105,6 +9112,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f6, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
++	SND_PCI_QUIRK(0x103c, 0x87fd, "HP Laptop 14-dq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x87fe, "HP Laptop 15s-fq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+@@ -9355,6 +9363,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3852, "Lenovo Yoga 7 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3853, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
++	SND_PCI_QUIRK(0x17aa, 0x3913, "Lenovo 145", ALC236_FIXUP_LENOVO_INV_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+@@ -9596,6 +9605,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"},
+ 	{.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"},
+ 	{.id = ALC285_FIXUP_HP_GPIO_AMP_INIT, .name = "alc285-hp-amp-init"},
++	{.id = ALC236_FIXUP_LENOVO_INV_DMIC, .name = "alc236-fixup-lenovo-inv-mic"},
+ 	{}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 754c1f16ee83f..acb46e1f9c0ae 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -4014,6 +4014,7 @@ static int snd_soc_dai_link_event(struct snd_soc_dapm_widget *w,
+ 
+ 	case SND_SOC_DAPM_POST_PMD:
+ 		kfree(substream->runtime);
++		substream->runtime = NULL;
+ 		break;
+ 
+ 	default:
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index 23a5f9a52da0f..aa57f796e9dd3 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -998,6 +998,8 @@ static int soc_tplg_denum_create_values(struct soc_enum *se,
+ 		se->dobj.control.dvalues[i] = le32_to_cpu(ec->values[i]);
+ 	}
+ 
++	se->items = le32_to_cpu(ec->items);
++	se->values = (const unsigned int *)se->dobj.control.dvalues;
+ 	return 0;
+ }
+ 
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 015ed8253f739..33cdcfe106344 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -9005,7 +9005,7 @@ __bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
+ struct bpf_map *
+ bpf_map__next(const struct bpf_map *prev, const struct bpf_object *obj)
+ {
+-	if (prev == NULL)
++	if (prev == NULL && obj != NULL)
+ 		return obj->maps;
+ 
+ 	return __bpf_map__iter(prev, obj, 1);
+@@ -9014,7 +9014,7 @@ bpf_map__next(const struct bpf_map *prev, const struct bpf_object *obj)
+ struct bpf_map *
+ bpf_map__prev(const struct bpf_map *next, const struct bpf_object *obj)
+ {
+-	if (next == NULL) {
++	if (next == NULL && obj != NULL) {
+ 		if (!obj->nr_maps)
+ 			return NULL;
+ 		return obj->maps + obj->nr_maps - 1;
+diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
+index 909da9cdda97f..aa4be40f7d49f 100644
+--- a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
++++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
+@@ -29,9 +29,11 @@ static int check_vgem(int fd)
+ 	version.name = name;
+ 
+ 	ret = ioctl(fd, DRM_IOCTL_VERSION, &version);
+-	if (ret)
++	if (ret || version.name_len != 4)
+ 		return 0;
+ 
++	name[4] = '\0';
++
+ 	return !strcmp(name, "vgem");
+ }
+ 


^ permalink raw reply related	[flat|nested] 289+ messages in thread

end of thread, other threads:[~2024-09-12 12:42 UTC | newest]

Thread overview: 289+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-05-02 15:03 [gentoo-commits] proj/linux-patches:5.10 commit in: / Mike Pagano
  -- strict thread matches above, loose matches on Subject: below --
2024-09-12 12:42 Mike Pagano
2024-09-04 13:53 Mike Pagano
2024-08-19 10:44 Mike Pagano
2024-07-27  9:20 Mike Pagano
2024-07-27  9:17 Mike Pagano
2024-07-18 12:17 Mike Pagano
2024-07-05 10:53 Mike Pagano
2024-07-05 10:51 Mike Pagano
2024-06-21 14:08 Mike Pagano
2024-06-16 14:35 Mike Pagano
2024-05-25 15:14 Mike Pagano
2024-05-17 11:38 Mike Pagano
2024-05-05 18:14 Mike Pagano
2024-04-27 22:57 Mike Pagano
2024-04-13 13:09 Mike Pagano
2024-03-27 11:26 Mike Pagano
2024-03-15 22:02 Mike Pagano
2024-03-06 18:09 Mike Pagano
2024-03-01 13:09 Mike Pagano
2024-02-23 12:45 Mike Pagano
2024-02-23 12:39 Mike Pagano
2024-01-25 23:34 Mike Pagano
2024-01-15 18:49 Mike Pagano
2024-01-12 20:35 Mike Pagano
2024-01-05 14:29 Mike Pagano
2023-12-20 15:21 Mike Pagano
2023-12-13 18:29 Mike Pagano
2023-12-08 11:16 Mike Pagano
2023-12-01 17:47 Mike Pagano
2023-11-28 17:52 Mike Pagano
2023-11-20 11:25 Mike Pagano
2023-11-08 17:28 Mike Pagano
2023-10-25 11:38 Mike Pagano
2023-10-18 20:16 Mike Pagano
2023-10-10 20:34 Mike Pagano
2023-10-05 14:24 Mike Pagano
2023-09-23 10:19 Mike Pagano
2023-09-21 11:29 Mike Pagano
2023-09-19 13:22 Mike Pagano
2023-09-02  9:59 Mike Pagano
2023-08-30 14:45 Mike Pagano
2023-08-26 15:21 Mike Pagano
2023-08-16 17:01 Mike Pagano
2023-08-11 11:56 Mike Pagano
2023-08-08 18:42 Mike Pagano
2023-07-27 11:50 Mike Pagano
2023-07-24 20:28 Mike Pagano
2023-06-28 10:27 Mike Pagano
2023-06-21 14:55 Alice Ferrazzi
2023-06-14 10:34 Mike Pagano
2023-06-14 10:20 Mike Pagano
2023-06-09 11:31 Mike Pagano
2023-06-05 11:50 Mike Pagano
2023-05-30 12:56 Mike Pagano
2023-05-17 11:25 Mike Pagano
2023-05-17 10:59 Mike Pagano
2023-05-10 17:56 Mike Pagano
2023-04-27 14:11 Mike Pagano
2023-04-26  9:50 Alice Ferrazzi
2023-04-20 11:17 Alice Ferrazzi
2023-04-05 10:01 Alice Ferrazzi
2023-03-22 14:15 Alice Ferrazzi
2023-03-17 10:45 Mike Pagano
2023-03-13 11:32 Alice Ferrazzi
2023-03-11 16:05 Mike Pagano
2023-03-03 15:01 Mike Pagano
2023-03-03 12:30 Mike Pagano
2023-02-25 11:44 Mike Pagano
2023-02-24  3:06 Alice Ferrazzi
2023-02-22 14:04 Alice Ferrazzi
2023-02-15 16:40 Mike Pagano
2023-02-06 12:47 Mike Pagano
2023-02-02 19:11 Mike Pagano
2023-02-01  8:09 Alice Ferrazzi
2023-01-24  7:13 Alice Ferrazzi
2023-01-18 11:09 Mike Pagano
2023-01-14 13:52 Mike Pagano
2023-01-04 11:39 Mike Pagano
2022-12-21 19:00 Alice Ferrazzi
2022-12-19 12:33 Alice Ferrazzi
2022-12-14 12:14 Mike Pagano
2022-12-08 11:51 Alice Ferrazzi
2022-12-02 17:26 Mike Pagano
2022-11-25 17:06 Mike Pagano
2022-11-16 12:08 Alice Ferrazzi
2022-11-10 18:05 Mike Pagano
2022-11-03 15:17 Mike Pagano
2022-10-30  9:33 Mike Pagano
2022-10-28 13:38 Mike Pagano
2022-10-26 11:46 Mike Pagano
2022-10-17 16:46 Mike Pagano
2022-10-15 10:05 Mike Pagano
2022-10-05 11:58 Mike Pagano
2022-09-28  9:30 Mike Pagano
2022-09-23 12:40 Mike Pagano
2022-09-20 12:01 Mike Pagano
2022-09-15 10:31 Mike Pagano
2022-09-08 10:46 Mike Pagano
2022-09-05 12:04 Mike Pagano
2022-08-31 15:39 Mike Pagano
2022-08-29 10:46 Mike Pagano
2022-08-25 10:33 Mike Pagano
2022-08-21 16:52 Mike Pagano
2022-08-11 12:34 Mike Pagano
2022-08-03 14:24 Alice Ferrazzi
2022-07-29 16:37 Mike Pagano
2022-07-25 10:19 Alice Ferrazzi
2022-07-21 20:08 Mike Pagano
2022-07-15 10:03 Mike Pagano
2022-07-12 15:59 Mike Pagano
2022-07-07 16:17 Mike Pagano
2022-07-02 16:10 Mike Pagano
2022-06-29 11:08 Mike Pagano
2022-06-27 11:12 Mike Pagano
2022-06-25 19:45 Mike Pagano
2022-06-22 12:45 Mike Pagano
2022-06-16 11:44 Mike Pagano
2022-06-14 17:12 Mike Pagano
2022-06-09 11:27 Mike Pagano
2022-06-06 11:03 Mike Pagano
2022-05-30 13:59 Mike Pagano
2022-05-25 11:54 Mike Pagano
2022-05-18  9:48 Mike Pagano
2022-05-15 22:10 Mike Pagano
2022-05-12 11:29 Mike Pagano
2022-05-09 10:56 Mike Pagano
2022-04-27 12:24 Mike Pagano
2022-04-27 12:20 Mike Pagano
2022-04-26 12:17 Mike Pagano
2022-04-20 12:07 Mike Pagano
2022-04-13 20:20 Mike Pagano
2022-04-13 19:48 Mike Pagano
2022-04-12 19:08 Mike Pagano
2022-04-08 13:16 Mike Pagano
2022-03-28 10:58 Mike Pagano
2022-03-23 11:55 Mike Pagano
2022-03-19 13:20 Mike Pagano
2022-03-16 13:33 Mike Pagano
2022-03-11 11:31 Mike Pagano
2022-03-08 18:32 Mike Pagano
2022-03-02 13:06 Mike Pagano
2022-02-26 20:27 Mike Pagano
2022-02-23 12:37 Mike Pagano
2022-02-16 12:46 Mike Pagano
2022-02-11 12:35 Mike Pagano
2022-02-08 17:54 Mike Pagano
2022-02-05 19:04 Mike Pagano
2022-02-05 12:13 Mike Pagano
2022-02-01 17:23 Mike Pagano
2022-01-31 12:25 Mike Pagano
2022-01-29 17:43 Mike Pagano
2022-01-27 11:37 Mike Pagano
2022-01-20 10:00 Mike Pagano
2022-01-16 10:21 Mike Pagano
2022-01-11 14:50 Mike Pagano
2022-01-05 12:53 Mike Pagano
2021-12-29 13:06 Mike Pagano
2021-12-22 14:05 Mike Pagano
2021-12-21 19:37 Mike Pagano
2021-12-17 11:55 Mike Pagano
2021-12-16 16:04 Mike Pagano
2021-12-14 12:51 Mike Pagano
2021-12-14 12:12 Mike Pagano
2021-12-08 12:53 Mike Pagano
2021-12-01 12:49 Mike Pagano
2021-11-26 11:57 Mike Pagano
2021-11-21 20:42 Mike Pagano
2021-11-18 15:33 Mike Pagano
2021-11-12 14:18 Mike Pagano
2021-11-06 13:36 Mike Pagano
2021-11-02 19:30 Mike Pagano
2021-10-27 14:55 Mike Pagano
2021-10-27 11:57 Mike Pagano
2021-10-20 13:23 Mike Pagano
2021-10-18 21:17 Mike Pagano
2021-10-17 13:11 Mike Pagano
2021-10-13  9:35 Alice Ferrazzi
2021-10-09 21:31 Mike Pagano
2021-10-06 14:18 Mike Pagano
2021-09-30 10:48 Mike Pagano
2021-09-26 14:12 Mike Pagano
2021-09-22 11:38 Mike Pagano
2021-09-20 22:02 Mike Pagano
2021-09-18 16:07 Mike Pagano
2021-09-17 12:50 Mike Pagano
2021-09-17 12:46 Mike Pagano
2021-09-16 11:20 Mike Pagano
2021-09-15 12:00 Mike Pagano
2021-09-12 14:38 Mike Pagano
2021-09-08 13:00 Alice Ferrazzi
2021-09-03 11:47 Mike Pagano
2021-09-03 11:20 Mike Pagano
2021-08-26 14:34 Mike Pagano
2021-08-25 16:23 Mike Pagano
2021-08-24 21:33 Mike Pagano
2021-08-24 21:32 Mike Pagano
2021-08-21 14:17 Mike Pagano
2021-08-19 11:56 Mike Pagano
2021-08-18 12:46 Mike Pagano
2021-08-15 20:05 Mike Pagano
2021-08-12 11:53 Mike Pagano
2021-08-10 11:49 Mike Pagano
2021-08-10 11:49 Mike Pagano
2021-08-08 13:36 Mike Pagano
2021-08-04 11:52 Mike Pagano
2021-08-03 11:03 Mike Pagano
2021-08-02 22:35 Mike Pagano
2021-07-31 10:30 Alice Ferrazzi
2021-07-28 13:22 Mike Pagano
2021-07-25 17:28 Mike Pagano
2021-07-25 17:26 Mike Pagano
2021-07-20 15:44 Alice Ferrazzi
2021-07-19 11:17 Mike Pagano
2021-07-14 16:31 Mike Pagano
2021-07-14 16:21 Mike Pagano
2021-07-13 12:37 Mike Pagano
2021-07-12 17:25 Mike Pagano
2021-07-11 15:11 Mike Pagano
2021-07-11 14:43 Mike Pagano
2021-07-08 12:27 Mike Pagano
2021-07-08  3:27 Alice Ferrazzi
2021-07-07 13:13 Mike Pagano
2021-07-02 19:38 Mike Pagano
2021-07-01 14:32 Mike Pagano
2021-06-30 14:23 Mike Pagano
2021-06-23 15:12 Mike Pagano
2021-06-18 11:37 Mike Pagano
2021-06-16 12:24 Mike Pagano
2021-06-11 17:34 Mike Pagano
2021-06-10 13:14 Mike Pagano
2021-06-10 12:09 Mike Pagano
2021-06-08 22:42 Mike Pagano
2021-06-03 10:26 Alice Ferrazzi
2021-05-28 12:15 Alice Ferrazzi
2021-05-26 12:07 Mike Pagano
2021-05-22 16:59 Mike Pagano
2021-05-19 12:24 Mike Pagano
2021-05-14 14:07 Alice Ferrazzi
2021-05-11 14:20 Mike Pagano
2021-05-07 11:27 Alice Ferrazzi
2021-05-02 16:03 Mike Pagano
2021-04-30 18:58 Mike Pagano
2021-04-28 12:03 Alice Ferrazzi
2021-04-21 11:42 Mike Pagano
2021-04-16 11:02 Alice Ferrazzi
2021-04-14 11:07 Alice Ferrazzi
2021-04-10 13:26 Mike Pagano
2021-04-07 13:27 Mike Pagano
2021-03-30 12:57 Alice Ferrazzi
2021-03-25  9:04 Alice Ferrazzi
2021-03-22 15:57 Mike Pagano
2021-03-20 14:35 Mike Pagano
2021-03-17 17:00 Mike Pagano
2021-03-11 15:08 Mike Pagano
2021-03-09 12:18 Mike Pagano
2021-03-07 15:17 Mike Pagano
2021-03-04 12:04 Alice Ferrazzi
2021-02-26 13:22 Mike Pagano
2021-02-26 10:42 Alice Ferrazzi
2021-02-23 15:16 Alice Ferrazzi
2021-02-18 20:45 Mike Pagano
2021-02-18 14:48 Mike Pagano
2021-02-17 11:14 Alice Ferrazzi
2021-02-13 15:51 Mike Pagano
2021-02-13 15:48 Mike Pagano
2021-02-13 14:42 Alice Ferrazzi
2021-02-10 10:23 Alice Ferrazzi
2021-02-10  9:51 Alice Ferrazzi
2021-02-09 19:10 Mike Pagano
2021-02-07 15:20 Alice Ferrazzi
2021-02-03 23:43 Alice Ferrazzi
2021-01-30 13:27 Alice Ferrazzi
2021-01-27 11:29 Mike Pagano
2021-01-23 16:38 Mike Pagano
2021-01-19 20:31 Mike Pagano
2021-01-17 16:18 Mike Pagano
2021-01-12 20:03 Mike Pagano
2021-01-09 17:58 Mike Pagano
2021-01-09  0:14 Mike Pagano
2021-01-06 14:54 Mike Pagano
2020-12-30 12:54 Mike Pagano
2020-12-26 15:32 Mike Pagano
2020-12-26 15:29 Mike Pagano
2020-12-21 13:26 Mike Pagano
2020-12-18 16:08 Mike Pagano
2020-12-14 20:45 Mike Pagano
2020-12-13 16:09 Mike Pagano
2020-11-19 13:03 Mike Pagano

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox